
How to Scrape Zoopla Real Estate Property Data in Python

Zoopla is one of the largest real estate marketplaces in the UK, with over 1.5 million property listings across sale, rent, new homes, and commercial properties. In this comprehensive guide, we'll explore different techniques to build a web scraper that extracts Zoopla's property data using Python.

The Value of Zoopla's Data

Before diving into the technical details, it's worth understanding why Zoopla is such a valuable data source and what can be done with its listings.

Some key facts about Zoopla:

  • 1.5 million+ listings – One of the largest sources of UK property data.

  • 62,000+ estate agents – The majority of UK agencies list on Zoopla.

  • 28 million+ monthly visits – High search volume and interest.

  • Founded in 2007 – Over a decade's worth of historical listings.

This massive dataset creates opportunities for:

  • Market analysis – Identify pricing trends across regions and property types. For example, 2022 saw a 9% increase in the average UK house price.

  • Valuation models – Estimate property values using regression on historical sold prices.

  • Investment research – Understand occupancy rates, cash flow, and yields by area.

  • Listing alert services – Notify buyers when new properties match their criteria.

  • Property apps – Power mobile and web applications with real estate data.

  • Agent comparison – Analyze seller KPIs like time-on-market.

  • Demand forecasting – Predict housing needs across demographics.

And many more use cases! The breadth of data available from Zoopla enables all kinds of applications.

Now let's look at how to extract it.

Extractable Data Fields

Each Zoopla listing contains a wealth of information. Some key data fields available include:

Listing Details

  • Title, description
  • Price, listing status (sale, rented, etc)
  • Location, address
  • Property type (house, apartment, etc)
  • Bedrooms, bathrooms
  • Size in square feet

Features

  • Bullets highlighting amenities
  • Detailed description of property features

Media

  • Photos
  • Floor plans
  • 3D tours

Location Info

  • Map coordinates
  • Nearby schools, transit stops, etc

Agent Details

  • Agency name, address
  • Website, phone numbers
  • Individual agent contacts

Plus additional metadata like listing ID, publish date, and more.
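
To make these fields concrete, here is a minimal sketch of a container we might parse listings into. The field names are our own choices for this guide, not Zoopla's schema:

from dataclasses import dataclass, field

@dataclass
class Listing:
    """Container for the key fields we plan to extract (our own schema, not Zoopla's)."""
    listing_id: str
    title: str = ""
    price: str = ""
    address: str = ""
    property_type: str = ""
    bedrooms: int = 0
    bathrooms: int = 0
    description: str = ""
    features: list[str] = field(default_factory=list)
    photo_urls: list[str] = field(default_factory=list)
    agent_name: str = ""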

That's a broad dataset covering many important real estate details. Next let's look at how to extract it.

Web Scraping Setup

To build our Zoopla scraper, we'll use Python along with a few key packages:

import requests
from bs4 import BeautifulSoup
import json
import time

  • requests – For sending HTTP requests to Zoopla's servers
  • BeautifulSoup – Parses HTML content so we can extract data
  • json – For handling JSON content from API responses
  • time – Useful for adding delays between requests

Additionally, we may use Python libraries like urllib for handling URLs and multiprocessing for adding concurrency.
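
Since we'll be sending many requests, it helps to wrap them in a small helper that adds browser-like headers and pauses between calls. This is a minimal sketch – the User-Agent string and the one-second delay are arbitrary choices, not values Zoopla requires:

import time
import requests

# Browser-like headers; the exact User-Agent value is an arbitrary choice
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Accept-Language": "en-GB,en;q=0.9",
}

def fetch(url: str, delay: float = 1.0) -> requests.Response:
    """GET a URL with browser-like headers, pausing first to stay polite."""
    time.sleep(delay)
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    return response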

With this setup, we're ready to start extracting data!

Scraping Individual Listing Pages

Let's begin with the core task – extracting data from a single property page.

We'll use this example listing: www.zoopla.co.uk/for-sale/details/63422316

Viewing the page source, we can see Zoopla (a Next.js application) embeds the full page state as JSON in a script tag with the id __NEXT_DATA__. We can grab it like:

import json
import requests
from bs4 import BeautifulSoup

url = "https://www.zoopla.co.uk/for-sale/details/63422316"

# Browser-like headers reduce the chance of the request being blocked
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"}

page = requests.get(url, headers=headers)
soup = BeautifulSoup(page.content, "html.parser")

# The __NEXT_DATA__ script tag holds the page state as JSON
page_model = soup.find("script", id="__NEXT_DATA__")
data = json.loads(page_model.contents[0])["props"]["pageProps"]

print(data["listingDetails"].keys())

This gives us the full listing data, including:

dict_keys(['listingId', 'displayAddress', 'price', 'numBedrooms', 'description', 'branchId', 'propertyType', 'listingUpdate', 'numBathrooms', 'floorPlan', 'title', 'publishedOn', 'location', 'outcode', 'listingUri', 'analyticsTaxonomy', 'keyFeatures'])

With this approach, we can extract all the fields outlined earlier (see the sketch after this list):

  • Details like price, bedrooms, property type
  • Description, features, photos
  • Location information
  • Agent and agency info
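
As a sketch, here is how we might pull several of those keys into a flat record. The key names come straight from the dict_keys output above:

listing = data["listingDetails"]

record = {
    "id": listing["listingId"],
    "title": listing["title"],
    "price": listing["price"],
    "address": listing["displayAddress"],
    "bedrooms": listing["numBedrooms"],
    "bathrooms": listing["numBathrooms"],
    "property_type": listing["propertyType"],
    "features": listing["keyFeatures"],
    "published": listing["publishedOn"],
}

print(record)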

For images, floorplans, and other media, the JSON provides URLs we can scrape to download the binaries.
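
Downloading a binary once we have its URL is straightforward. In this sketch, image_url is a hypothetical URL pulled from the listing JSON:

import requests

def download_file(url: str, path: str) -> None:
    """Stream a media file (photo, floor plan) to disk."""
    response = requests.get(url, stream=True)
    response.raise_for_status()
    with open(path, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)

# image_url is a hypothetical URL taken from the listing JSON
# download_file(image_url, "photo_1.jpg")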

Now that we can extract data from individual listings, let's look at how to discover those pages.

Finding Listings to Scrape

To build a full database, we need to find Zoopla page URLs to feed into our scraper. There are two main approaches for this:

Scraping Search Results

Zoopla provides search functionality across regions, cities, neighborhoods and more.

For example, searching "London" returns www.zoopla.co.uk/for-sale/property/london.

We can directly call the search API like so:

import requests

SEARCH_URL = "https://www.zoopla.co.uk/api/v1/search"

params = {
    "q": "London",
    "area_type": "area",
    "page_size": 25,
    "category": "residential",
    "order": "age_high",
    "listing_status": "sale",
}

response = requests.get(SEARCH_URL, params=params)
data = response.json()

# Extract the listings from the API response
listings = data["listing"]

This returns the first page of London sale listings ordered with newest properties first.

We can iterate through pages using the page_number parameter (see the sketch after this list), and customize the search with parameters like:

  • radius – Distance from location
  • order – Sort order (newest, highest price etc)
  • listing_status – Sale, rent, auction
  • category – Residential, commercial

Scraping searches lets us tailor results precisely to our needs.
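
Here is a sketch of paging through results with page_number, reusing the search parameters from above. The stop condition – an empty listing array – is an assumption about how the API signals the last page:

import requests

SEARCH_URL = "https://www.zoopla.co.uk/api/v1/search"

params = {
    "q": "London",
    "area_type": "area",
    "page_size": 25,
    "category": "residential",
    "order": "age_high",
    "listing_status": "sale",
}

all_listings = []

for page_number in range(1, 11):  # first 10 pages
    params["page_number"] = page_number
    response = requests.get(SEARCH_URL, params=params)
    listings = response.json()["listing"]
    if not listings:  # assume an empty page means we've run out of results
        break
    all_listings.extend(listings)

print(f"Collected {len(all_listings)} listings")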

Crawling Sitemaps

In addition to search, Zoopla provides sitemap XML files listing every property page across sections like sales, rentals, new homes, and more.

The sitemap index is at www.zoopla.co.uk/sitemap_index.xml. It references category-specific sitemaps:

<sitemap>
  <loc>https://www.zoopla.co.uk/for_sale_houses.xml</loc>
</sitemap>

<sitemap>
  <loc>https://www.zoopla.co.uk/for_sale_flats.xml</loc>  
</sitemap>

We can parse the index and crawl each sitemap like:

from bs4 import BeautifulSoup
import requests
import time

sitemap_index = "https://www.zoopla.co.uk/sitemap_index.xml"

page = requests.get(sitemap_index)
soup = BeautifulSoup(page.content, "xml")

# Collect the URL of every category-specific sitemap
sitemaps = []
for sitemap in soup.find_all("sitemap"):
    sitemaps.append(sitemap.find("loc").text)

# Crawl each sitemap and collect the listing URLs it contains
listing_urls = []
for sitemap in sitemaps:
    page = requests.get(sitemap)
    soup = BeautifulSoup(page.content, "xml")

    for url in soup.find_all("url"):
        listing_urls.append(url.find("loc").text)

    time.sleep(1)  # pause between sitemap fetches to stay polite

This crawls every listing available on Zoopla's site. The tradeoff versus search is that we have less control over tailoring results.

Now that we can discover listings, let's look at keeping our database updated.

Tracking New Property Listings

In addition to building a full database, we also want to stay updated as new properties get listed on Zoopla.

The easiest method is scraping search results sorted from newest first:

import requests

SEARCH_URL = "https://www.zoopla.co.uk/api/v1/search"

params = {
    "q": "London",
    "order": "age_high",  # newest listings first, as in our earlier search
    "listing_status": "sale",
    "page_number": 1,
    "page_size": 100,
}

response = requests.get(SEARCH_URL, params=params)
data = response.json()

# IDs we have already stored (shown as a set for illustration)
seen_ids = set()

# Keep only the listings we have not seen before;
# "listing_id" is an assumed field name in the search response
new_listings = [
    item for item in data["listing"]
    if item.get("listing_id") not in seen_ids
]

We can run this search periodically, like daily, to pick up new listings as they are published.

More advanced approaches could compare sitemap lastmod dates (see the sketch below) or scrape an RSS feed if available. But sorted search results provide a simple method.
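
For instance, here is a sketch that filters a sitemap by its lastmod entries, assuming the sitemap includes a lastmod tag per URL (common, but not guaranteed):

from datetime import datetime, timedelta

import requests
from bs4 import BeautifulSoup

sitemap_url = "https://www.zoopla.co.uk/for_sale_houses.xml"
cutoff = datetime.utcnow() - timedelta(days=1)

page = requests.get(sitemap_url)
soup = BeautifulSoup(page.content, "xml")

recent_urls = []
for url in soup.find_all("url"):
    lastmod = url.find("lastmod")  # not every sitemap includes lastmod
    if lastmod and datetime.fromisoformat(lastmod.text[:10]) > cutoff:
        recent_urls.append(url.find("loc").text)

print(f"{len(recent_urls)} listings modified in the last day")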

Ethical Web Scraping Practices

When scraping sites like Zoopla at scale, it's important we do so ethically and legally. Some key principles to follow include:

Respect robots.txt rules – This file tells scrapers which URLs they can/cannot access.

Check for an API – Use an API if available before resorting to scraping, though Zoopla's API is limited.

Scrape reasonably – Use delays, throttling, and proxy rotation to avoid overloading the site (see the sketch after this list).

Avoid private/user data – Do not store emails, names, etc without permission.

Cache responsibly – Store scraped data temporarily and refresh it when stale.

Obfuscate scraping – Mimic browsers by using realistic headers and behaviors.

Credit sources – Attribute any data used properly and link back.

By following these principles, we can build scrapers that are ethical, maintain good netiquette, and avoid legal issues.
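
To make "scrape reasonably" concrete, here is a minimal sketch of a polite request loop with randomized delays and rotating User-Agent strings. The specific values are illustrative choices, not requirements:

import random
import time

import requests

# A small pool of realistic browser User-Agent strings (illustrative)
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
]

def polite_get(url: str) -> requests.Response:
    """GET with a random browser header and a randomized delay."""
    time.sleep(random.uniform(1.0, 3.0))  # throttle between requests
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    return requests.get(url, headers=headers)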

Now let's dive into some common questions around Zoopla scraping.

FAQ

Here are some frequent questions about building a Zoopla web scraper:

Is it legal to scrape Zoopla?

Scraping public Zoopla data is generally legal under UK law. Just be sure to follow ethical practices like respecting robots.txt and scraping reasonably.

Can I get in trouble for scraping Zoopla?

It's unlikely if you follow reasonable scraping practices. Using proxies, moderate delays, and randomness helps avoid detection.

Does Zoopla have an API I can use?

Yes, but the data is limited mainly to valuations. To extract complete listing details, web scraping is needed.

What happens if Zoopla blocks my scraper?

You may see CAPTCHAs, HTTP 403 errors, or IP blocks. Use proxies, browser mimicking, sessions, and randomness to avoid blocks.
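
One defensive pattern is retrying with backoff when a block-like status code appears. A minimal sketch, with the status codes and wait times as assumptions:

import time
from typing import Optional

import requests

def get_with_retries(url: str, max_retries: int = 3) -> Optional[requests.Response]:
    """Retry on block-like responses (403/429), backing off exponentially."""
    for attempt in range(max_retries):
        response = requests.get(url)
        if response.status_code not in (403, 429):
            return response
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s
    return None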

Should I use residential or datacenter proxies?

Residential proxies with real browser fingerprints work best for evading blocks vs datacenter IPs which are easier to identify.

How can I scale up my scraper safely?

Use robust proxy rotation, cloud scraping solutions, and regional targeting to distribute load and avoid over-scraping.

Is there a crawler I can use instead of building my own?

Yes, services like ScrapingBee provide Zoopla crawlers so you don't have to build and maintain your own scraper.

What alternatives are there to Zoopla?

Other UK real estate portals like Rightmove, OnTheMarket, and PrimeLocation can also be scraped for listings.

Conclusion

Scraping Zoopla provides access to an incredibly valuable UK real estate dataset. In this guide, we covered:

  • The wealth of listing data Zoopla provides

  • Extracting clean datasets from property pages

  • Discovering listings via search results and sitemaps

  • Keeping your database updated with new listings

  • Ethical considerations for responsible scraping

  • FAQs around common Zoopla scraping challenges

While scraping Zoopla is straightforward, scaling a scraper requires robust engineering – proxies, user-agents, retries, moderate delays, randomness, and more.

Alternatively, crawler APIs like ScrapingBee handle these complexities internally so you can effortlessly extract Zoopla data.

Overall, I hope this guide provided you with a comprehensive blueprint for building your own Zoopla web scraper using Python. Please feel free to reach out if you have any other questions!
