
What is large-scale scraping and how does it work?

The internet grows every day. According to Internet Live Stats, there were over 1.9 billion websites on the internet as of 2024. Much of the data on those sites – product catalogs, real estate listings, local business info – is incredibly valuable if extracted and structured.

This is where web scraping comes in. Web scraping is the process of automatically collecting publicly available information from websites using bots. With over 1.9 billion sites out there, each containing thousands or even millions of pages, web scraping at scale is often needed to gather large datasets.

I've been working in the web scraping industry for over 5 years, helping companies in many sectors extract the data they need. In my experience, standard web scraping workflows hit limitations when dealing with massive sites like Amazon, eBay, or Craigslist. The average scraper isn't built to handle millions of listings or products.

So what exactly constitutes "large-scale" scraping? And how does it differ from normal web scraping? I'll cover that next, along with tips on how to overcome the unique challenges of scraping big, complex sites.

Web scraping ethics: don't overload sites

First, a quick note on ethics. When scraping any website, it's important not to overload the target servers with an excessive number of requests. This can slow down sites or even cause crashes.

I once consulted with a client who wanted to scrape a small business directory website containing 5,000 listings. Their initial scraper bombarded the site with requests and was quickly banned. After checking the site's traffic stats, we realized it wasn't built to handle more than 500 visitors per hour!

So we throttled the scraper to stay under that limit. The project completed successfully without disturbing normal traffic.

The scale of your scraping has to align with the site's capabilities. Large sites like Amazon (2.5 million+ daily visitors) can handle far more scraping than a local blog. When in doubt, research the site's traffic numbers and start small. Responsible scraping benefits everyone.
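As a rough illustration, here's a minimal Python sketch of the kind of throttling we applied in that project. The hourly budget and URL list are placeholders, and a real scraper would also respect robots.txt and back off on errors.

```python
import time
import requests

# Hypothetical budget based on the anecdote above: stay safely under
# ~500 requests per hour, i.e. roughly one request every 8 seconds.
MAX_REQUESTS_PER_HOUR = 450
DELAY_SECONDS = 3600 / MAX_REQUESTS_PER_HOUR

def polite_get(url, session=None):
    """Fetch a page, then pause so we never exceed the hourly budget."""
    session = session or requests.Session()
    response = session.get(url, timeout=30)
    response.raise_for_status()
    time.sleep(DELAY_SECONDS)
    return response.text

# Example usage with a placeholder list of listing URLs:
# for url in listing_urls:
#     html = polite_get(url)
```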

For more ethical scraping tips, see my guide on how to build considerate web scrapers. Now let's look at how standard scrapers work and where they fall short.

Normal web scraping workflows and their limits

Let's walk through the usual scraping process for a run-of-the-mill site, then discuss why this approach fails on massive sites.

Step 1: Open the target homepage

The scraper starts by opening the homepage of the site to be scraped. For example, BestBuy.com to gather electronics product info.

Step 2: Queue top-level pages

The scraper queues up the main category and subcategory pages to scrape, like Television, Laptops, Headphones, and so on.

Step 3: Scrape page content

The scraper harvests data from each page. On a product page, it would extract details like model, price, specs, etc.

Step 4: Crawl linked pages

The scraper follows links to additional related pages, scraping each one. This continues until all pages within scope are scraped.

Step 5: Export scraped data

Finally, the scraped content is exported to a CSV, JSON, database, etc. for analysis and use.
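To make the workflow concrete, here's a stripped-down Python sketch of steps 1 through 5. The site URL, category paths, and CSS selectors are all placeholders; a real target would need its own selectors plus pagination and error handling.

```python
import csv
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://www.example.com"  # stand-in for the real target site
CATEGORY_PATHS = ["/televisions", "/laptops", "/headphones"]  # step 2: queue top-level pages

def scrape_product_page(url):
    """Step 3: pull a few fields from a product page (selectors are assumptions)."""
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    return {
        "url": url,
        "model": soup.select_one("h1.product-title").get_text(strip=True),
        "price": soup.select_one("span.price").get_text(strip=True),
    }

def crawl_category(path):
    """Step 4: follow product links found on a category page."""
    soup = BeautifulSoup(requests.get(urljoin(BASE_URL, path), timeout=30).text, "html.parser")
    links = [urljoin(BASE_URL, a["href"]) for a in soup.select("a.product-link")]
    return [scrape_product_page(link) for link in links]

def main():
    rows = []
    for path in CATEGORY_PATHS:  # steps 1-4: crawl each queued category
        rows.extend(crawl_category(path))
    with open("products.csv", "w", newline="") as f:  # step 5: export to CSV
        writer = csv.DictWriter(f, fieldnames=["url", "model", "price"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    main()
```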

This works fine for small to mid-sized sites. But for massive sites, there are 3 core limitations:

Pagination limits: Most sites paginate content across many numbered pages (1, 2, 3 … 350) and cap how far you can page through, usually somewhere between 1,000 and 10,000 results. That isn't enough for large datasets.

Compute resources: A single machine scraping a large site will quickly hit memory and CPU limits, throttling the scraping rate.

IP bans: Scraping at scale from a single IP address gets you banned, and datacenter proxies alone don't provide enough IP diversity.

So what techniques allow large-scale scraping to overcome these challenges?

Solving pagination limits

To bypass pagination caps, we leverage search filtering and splitting.

Say we need to scrape 200,000 product listings from an ecommerce site. The "all products" page paginates to only 10,000 results.

Step 1) Filter by category – Search and filter for subcategories like "electronics" or "clothing" that each contain a subset of the 200k products.

Step 2) Split by price range – Take each subcategory and split the results into price brackets, like $0-50, $50-100, $100-500, etc.

Step 3) Recursively split ranges – For brackets that still contain too many pages, split again. A $100-500 bracket could be divided into $100-300 and $300-500.

By combining filters and targeted range splitting, you can work around pagination limits and scrape massive datasets.
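Here's a minimal Python sketch of that recursive splitting. The count_for callback is a stand-in for whatever call returns the filtered result count on the target site, and the 10,000 cap mirrors the example above.

```python
def split_ranges(count_for, low, high, max_results=10_000):
    """Recursively split a price range until each bracket fits under the
    site's pagination cap. count_for(low, high) is a placeholder for a
    query that returns the number of results in that price bracket."""
    total = count_for(low, high)
    if total <= max_results or high - low <= 1:
        return [(low, high, total)]
    mid = (low + high) // 2
    return (split_ranges(count_for, low, mid, max_results)
            + split_ranges(count_for, mid, high, max_results))

# Example usage with hypothetical helpers for a filtered category search:
# brackets = split_ranges(lambda lo, hi: query_result_count("electronics", lo, hi), 0, 2000)
# for low, high, total in brackets:
#     scrape_filtered_listing("electronics", price_min=low, price_max=high)
```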

As an example, here's how we might split up scraping 220,000 electronics products:

Category Filter    Price Range    # of Products
Electronics        $0-50          18,000
Electronics        $50-100        26,000
Electronics        $100-500       92,000
Electronics        $500-1000      28,000
Electronics        $1000+         56,000
Total                             220,000

As you can see, breaking it down this way allows us to scrape orders of magnitude more products than a single "all products" view.

Scaling up servers for large scrapes

When it comes to compute resources, one server can only scale so far. At some point, the scraping speed will max out the CPU, RAM, or bandwidth available.

The solution is to scale horizontally by distributing the scraping workload across multiple servers. This is known as distributed scraping.

Here's how it works:

  • The list of pages and filters to scrape is split up among servers.

  • Scraping runs execute in parallel across the servers. This multiplies the scraping power.

  • The scraped data is merged back together into one unified dataset.

Proper task distribution is key – no single server should be overloaded. Servers can be intelligently auto-scaled up or down as needed.
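The sketch below shows that partition/merge pattern in Python using local worker processes. On a real project each chunk would be handed to its own server (or to a task queue such as Celery), but the idea is the same: split the page list, scrape the chunks in parallel, then merge the results. scrape_page is a trivial placeholder.

```python
import multiprocessing as mp

import requests

def scrape_page(url):
    """Placeholder page scraper; a real run would parse fields here."""
    return {"url": url, "status": requests.get(url, timeout=30).status_code}

def scrape_chunk(urls):
    """Worker task: scrape one slice of the overall URL list."""
    return [scrape_page(u) for u in urls]

def distributed_scrape(urls, n_workers=8):
    """Partition the URL list, scrape chunks in parallel, merge the results.
    Local processes stand in here for separate servers."""
    chunks = [urls[i::n_workers] for i in range(n_workers)]
    with mp.Pool(n_workers) as pool:
        per_worker = pool.map(scrape_chunk, chunks)
    # Merge per-worker results back into one unified dataset.
    return [row for chunk in per_worker for row in chunk]

# Example usage (guard required for multiprocessing on some platforms):
# if __name__ == "__main__":
#     data = distributed_scrape(all_page_urls)
```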

In one large project for a price comparison site, we had to scrape 20 million product listings from 50 top North American ecommerce sites. It wasn't feasible from one location.

So we spun up scraper servers on cloud infrastructure across different regions. Each server took on a portion of the sites and page filters. This let the project run at an extremely high speed. Within two weeks, all listing data was extracted.

Distributing at scale makes seemingly impossible scrapes possible.

Using proxies for large scraping

When scraping from multiple servers and locations, proxies are essential to prevent IP blocks.

Datacenter proxies alone often fail at large scale because their IP ranges are easily recognized and blocked. Residential proxies are more reliable but pricier.

The best approach combines datacenter, residential, and ISP proxies based on project scale.

Here are proxy usage guidelines based on my experience:

Scraping Scale              Recommended Proxies
Under 1 million pages/mo    Datacenter proxies
1-10 million pages/mo       Mix of datacenter and residential
Over 10 million pages/mo    Mostly residential with ISP support

Scraping more than 10 million pages per month requires heavy use of residential proxies, backed by ISP partnerships, to ensure enough IP diversity. Projects under 1 million pages can usually rely on datacenter proxies alone.
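As a sketch of how those guidelines might translate into code, here's a simple proxy picker in Python. The proxy URLs are placeholders; a real setup would pull rotating endpoints from your proxy provider and handle bans and retries.

```python
import random

import requests

# Placeholder proxy pools; real pools would come from your proxy provider.
DATACENTER_PROXIES = ["http://dc-proxy-1:8080", "http://dc-proxy-2:8080"]
RESIDENTIAL_PROXIES = ["http://res-proxy-1:8080", "http://res-proxy-2:8080"]

def pick_proxy(monthly_pages):
    """Choose a pool roughly following the table above: datacenter under
    1M pages/month, a mix up to 10M, mostly residential beyond that."""
    if monthly_pages < 1_000_000:
        pool = DATACENTER_PROXIES
    elif monthly_pages < 10_000_000:
        pool = DATACENTER_PROXIES + RESIDENTIAL_PROXIES
    else:
        pool = RESIDENTIAL_PROXIES
    return random.choice(pool)

def fetch_with_proxy(url, monthly_pages):
    """Route a request through a proxy chosen for the project's scale."""
    proxy = pick_proxy(monthly_pages)
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=30)
```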

In summary, combining pagination strategies, distributed servers, and the right proxies makes large-scale web scraping achievable for mammoth data extraction projects.

The key is working with an experienced web scraping provider who can plan and execute a robust large-scale solution tailored to your specific scraping needs. If you have a big project in mind, let's chat!
