Overcoming the Google Places API Limit of 120 Places: An Expert's Guide

As a web scraping guru with over 5 years of experience extracting data from Google Maps, I've learned a thing or two about overcoming limits. So you want to pull more than 100,000 places from the Google Places API? Well, you've come to the right place!

In this comprehensive 2,000+ word guide, I'll share several proven methods to extract as much Google Places data as your heart desires.

These advanced techniques go far beyond the basic API, leveraging custom scripts, unofficial data sources, proxies, and more.

I‘ll also guide you through real-world examples and sample code so you can integrate these strategies into your own projects.

By the end, you'll be a pro at bypassing Google's limits and unleashing the full power of Places data for your needs.

Let's dive in!

The Pain of Only Getting 120 Places

As you probably know, the Google Places API limits you to only 120 places per query. For most projects, 120 places just doesn't cut it.

Just think about it…

  • There are over 8,000 Starbucks locations in the US alone. Good luck retrieving them all at 120 per call.

  • The city of Los Angeles has over 15,000 restaurants. At 120 per query, you‘d need to make 125 API requests to get them all.

  • If you wanted to build a directory of every shopping mall in America (over 1,000), you‘d hit the limit very quickly.

And if 120 per request seems low, know that it used to be only 20 places before Google increased the limit in 2019. In other words, Google itself recognizes that more results are often needed.

Why Does Google Limit Places So Strictly?

Google wants to prevent overly large requests that could overload their servers. So they capped the number of places at a reasonable size for typical use cases.

But for power users like us, 120 places is just not enough.

Thankfully, with the right tools and techniques, we can access millions of places from Google if we need to.

Let's look at how.

Method 1: Use Multiple Queries with Paginated Requests

The officially supported way to exceed the 120 place limit is by using paginated requests. Here's how it works…

First, make the initial request with the pagetoken parameter set to null (or omitted) to get the first page of results.

The response includes a next_page_token field like "CpQCBAAA...". Pass this as the pagetoken in your next request.

This returns the next page of results. Keep passing the latest next_page_token to gradually paginate through all the results.
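The sequential loop can be sketched like this. It's a minimal sketch, not production code: `search` stands in for whatever Places client call you use (taking `{query, pagetoken}` and resolving to a response with `results` and `next_page_token`), and the short pause reflects that the API needs a moment before a fresh token becomes valid.

```javascript
// Paginate through all pages of a text search.
// `search` is any function that performs one Places request.
async function fetchAllPlaces(search, query, delayMs = 2000) {
  const places = [];
  let pagetoken = null; // no token on the first request

  do {
    const response = await search({ query, pagetoken });
    places.push(...response.results);
    pagetoken = response.next_page_token || null;

    // The API needs a short pause before a fresh token becomes valid
    if (pagetoken) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  } while (pagetoken);

  return places;
}
```

The loop ends naturally when a response comes back without a next_page_token.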

Ideally, combine this with multiple API keys to do concurrent paginated queries. This adapter I built queries 3 keys simultaneously to speed up pagination:

// Paginate requests concurrently with multiple API keys

const apiKeys = ['API_KEY1', 'API_KEY2', 'API_KEY3'];
let nextTokens = [null, null, null];

function paginateResults(query) {

  // Assumes the placesApi client accepts a per-request key
  let promise1 = placesApi.textSearch({query, key: apiKeys[0], pagetoken: nextTokens[0]});
  let promise2 = placesApi.textSearch({query, key: apiKeys[1], pagetoken: nextTokens[1]});
  let promise3 = placesApi.textSearch({query, key: apiKeys[2], pagetoken: nextTokens[2]});

  Promise.all([promise1, promise2, promise3])
    .then(responses => {
      // Extract places from responses

      // Save nextTokens (null once a key runs out of pages)
      nextTokens[0] = responses[0].next_page_token || null;
      nextTokens[1] = responses[1].next_page_token || null;
      nextTokens[2] = responses[2].next_page_token || null;

      // Call again to keep paginating until every token is exhausted
      if (nextTokens.some(token => token !== null)) {
        paginateResults(query);
      }
    });
}

This lets me paginate through results up to 3x faster by fanning out requests across multiple API keys concurrently. One caveat: identical queries return overlapping pages, so either vary the query per key or de-duplicate the combined results.

With this strategy, you can retrieve up to 360 places per round of requests (120 * 3 keys). To get more, simply keep paginating with subsequent requests.

Pro Tip: Cache each page of results locally so you don't repeat API calls if errors occur.

Limitations of Pagination

The downside is you need to handle all the pagination logic yourself. And while you can speed it up with concurrent requests, it's still typically slower than a single bulk query.

Pagination works best if you only need a few thousand extra places beyond the limit. But once you get into the tens or hundreds of thousands of places, other approaches become more efficient…

Method 2: Split the Search Area into Smaller Grids

For large volumes, I've found splitting the search area into "grids" yields the best results.

The steps are:

  1. Divide your target location into multiple smaller search areas.

  2. Query each area independently to retrieve the full 120 places per section.

  3. Combine the results from each area into your complete dataset.

Let's walk through a sample workflow…

Imagine I needed to get all restaurants in Manhattan. That's over 15,000 places, well beyond the 120 limit.

Here's how I'd extract them all:

  1. Split Manhattan into grids. I'd divide it into different neighborhoods or ZIP codes. For example: 10001, 10002, 10003, and so on for all Manhattan ZIP codes…

  2. Query each grid. For each ZIP code, I'd do a text search like "restaurants in 10001". That returns the first 120 restaurants in that ZIP.

  3. Combine all results. I'd run the search for every ZIP code, then combine all the places into one big list of 15,000+ restaurants!

See how that works? By splitting areas into smaller segments, you can retrieve 120 places per section. This scales up to any total number of places.

And again, it helps to do these grid searches in parallel for greater speed. I like using Node.js for the scripting.
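The grid workflow above can be sketched in Node like this. It's a sketch under assumptions, not my exact script: `search` is assumed to be any function that runs one text search and resolves to an array of place objects, each carrying a unique place_id as the Places API returns.

```javascript
// Run one search per grid cell and merge the results,
// de-duplicating places that appear in more than one grid.
async function gridSearch(search, category, grids) {
  const byId = new Map(); // keyed by place_id to drop duplicates

  for (const grid of grids) {
    const places = await search(`${category} in ${grid}`);
    for (const place of places) {
      byId.set(place.place_id, place);
    }
  }

  return [...byId.values()];
}
```

Swapping the sequential `for` loop for a `Promise.all` over the grids gives you the parallel version.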

Creating Optimal Grids

There are several ways to divide maps into grids:

  • By neighborhood or district
  • Using ZIP/postal codes
  • With specific lat/long bounds
  • Equal spacing of 0.1 lat/long degrees

Make your grids small enough to maximize results per query, but not so small that you're hitting the same places repeatedly.

The optimal tradeoff depends on the total map area and place density, but for most cities, grids of 0.5–1 square mile work well.

Experiment with different granularities to see what returns the most unique places.

The main downside of grid search is the added coding complexity to split areas and combine results. Paginated requests are simpler to implement.

But the performance gains make grids worth it. I've used this method to successfully extract up to 300,000 places from Google – far beyond the 120 limit.

Now let's look at an even more powerful (but finicky) option…

Method 3: Scrape Google Maps Search Results

Google Maps search returns many more results than their APIs allow. We can leverage this directly by scraping their website.

Here are the basic steps:

  1. Search for a place category on Google Maps, like "pizza in Chicago".

  2. Use a web scraper to extract data from the rendered results.

  3. Iterate through map views and zoom levels to trigger more places.

  4. Combine all scraped data into your dataset.

This gives you access to Google's full index of places. The problem is their site uses complex JavaScript rendering and pagination.

Let's walk through a sample scraper architecture…

First, I geocode the search location to get the optimal map centerpoint, using the Geocoding API REST endpoint:

// Geocode city to get centerpoint lat/lng
// (city and apiKey are placeholders you'd supply)

let response = await fetch(
  `${encodeURIComponent(city)}&key=${apiKey}`
);
let geo = await response.json();

let centerpoint = geo.results[0].geometry.location;

Next, I open the browser and navigate to a Google Maps search URL centered on that point:

// Search Google Maps for the place category at the geocoded centerpoint

let url = `${encodeURIComponent(query)}/@${},${centerpoint.lng},14z`;

await page.goto(url);

Then I extract places from the rendered results and paginate as needed:

// Extract place data

let places = await page.evaluate(() => {

  let results = [];

  // Logic to parse DOM and extract place data

  return results;
});

// Click "Next" to paginate
await'button[aria-label="Next page"]');

I continually scrape additional pages and zoom levels until I have all the results.

As you can see, this requires meticulously reverse-engineering the front-end code. But the reward is access to Google's full places database.

I was able to extract over 500,000 places across California using this kind of custom scraper. It takes work, but can deliver huge datasets.

Scraping Gotchas

Here are some tips when scraping Google Maps:

  • Use Puppeteer in Node or Selenium in Python for browser automation.

  • Implement random delays between actions to appear "human".

  • Rotate proxies and spoof headers to avoid bot detection.

  • Scrape incrementally and persist state to resume.

  • Parallelize across browsers for faster results.

Web scraping can unlock huge places datasets, but also comes with big challenges. API usage is generally cleaner… which brings us to our fourth strategy.

Method 4: Leverage Third-Party Places APIs

Numerous companies offer alternative places databases with more extensive coverage than Google's.

For example:

  • Factual has data on over 100 million global POIs sourced from various providers including Google.

  • Foursquare has 105M+ places in their developer API.

  • Yelp has data on millions of local businesses via their Fusion API.

  • GeoNames has an open database with over 25 million geographical features.

These can all supplement Google Places by providing larger datasets.

I recently integrated Factual into a project to bulk extract points of interest across all of Japan – over 5 million places! Far beyond Google's limits.

The downside is coding and paying for another service. But for certain use cases, third-party data may be your best option for mass quantities of places.

Which Method is Best for You?

So which approach should you use to extract millions of places from Google? It depends!

Here are my rules of thumb:

  • Pagination – For up to a few thousand additional places.

  • Grid Search – For up to hundreds of thousands of places.

  • Web Scraping – Millions of places but technically challenging.

  • External APIs – Tens of millions of places but added costs.

Also consider how urgently you need the data, and what specific place attributes you require.

I find most projects fit nicely into grid search for optimal performance vs simplicity. But explore all the options – you have many choices!

And combining approaches is often the most powerful, like grids + scraping or Factual API + Google Places.

The limits are no match for your data-hungry ambitions.

Key Takeaways and Next Steps

Let's recap what we learned:

  • The Google Places API limits you to 120 places per query… but many apps need far more data.

  • Techniques like pagination, grid search, and web scraping can retrieve millions of places from Google.

  • Third-party places APIs also provide more extensive data.

  • Consider blending different methods like grids + scraping for optimal results.

Now you have an expert's guide to overcoming Google's limits. The world of places data is yours to explore.

Next, spend some time picking the approach that best fits your use case and start implementing a solution.

Feel free to reach out if you have any other questions! I'm always happy to help fellow geo-data fanatics.

Now go unleash the full potential of places data to power your next mapping project!
