Web scraping is the process of automatically collecting data from websites. While it's possible to write your own code using Python libraries like Requests and BeautifulSoup, this can quickly get complex for large scraping tasks.
That's where Scrapy comes in. Scrapy is an open-source web crawling framework that makes it easy to build and scale large scraping projects. With Scrapy, you can crawl websites, extract structured data, and store it in databases or files with minimal hassle.
In this guide, we'll walk through everything you need to know to start scraping the web with Scrapy. Whether you're new to web scraping or looking to level up your scraping skills, read on to learn how Scrapy can help you extract the data you need with speed and ease.
Why Use Scrapy for Web Scraping?
Scrapy is a powerful and versatile web scraping tool. Here are a few reasons to choose Scrapy for your next project:
Ease of use: Scrapy has a simple, intuitive API. Its well-designed architecture makes it easy to write maintainable, efficient scraping code.
Built-in features: Scrapy handles cookies, authentication, pagination, caching, and more out of the box.
Performance: Scrapy is fast, using asynchronous requests to fetch multiple pages in parallel. You can easily scrape thousands of pages per minute.
Extensibility: It's easy to extend Scrapy and integrate it with other tools and libraries to customize your scraping workflow.
Community: Scrapy has great docs and an active community, so it's easy to find help and examples for almost any scraping use case.
Scrapy makes it a breeze to build robust, high-performance scrapers without getting bogged down in low-level details. Let's see how it works!
Getting Started with Scrapy
Before we dive into using Scrapy, let's cover a few key concepts:
Spiders: The spider is the main component you'll write. It's a class defining how a site should be scraped – what pages to crawl, what data to extract, and how to parse it.
Selectors: Scrapy uses XPath and CSS expressions to find and extract data from the HTML. Its selector API makes it concise and straightforward to grab the data you want.
Items: Scraped data is collected in Scrapy Items – simple container objects that hold the extracted data, making it easy to process and store (a short sketch follows this list).
Item Pipeline: After an item is scraped, it's sent to the Item Pipeline, which processes it through several components that can filter, sanitize, validate, and store the data.
Scrapy Engine: The Engine coordinates the spiders, schedulers, and downloaders and controls the data flow between them.
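To make the Items concept concrete, here is a minimal sketch of what an item definition could look like for the blog posts scraped later in this guide. The class and field names are illustrative; the example spider below actually yields plain dicts, which Scrapy also accepts:

import scrapy

# A simple container for the data we plan to extract.
# Field names are illustrative; define whatever your scraper needs.
class PostItem(scrapy.Item):
    title = scrapy.Field()
    url = scrapy.Field()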
Now that you're familiar with the basic terminology, let's build a Scrapy spider!
Setting Up Your Scrapy Project
First, make sure you have Scrapy installed:
pip install scrapy
Now create a new Scrapy project:
scrapy startproject myproject
This will create a directory structure with the basic files needed for a Scrapy project. Let's create our first Spider:
scrapy genspider posts scrapinghub.com
This generates a spider template named posts that will scrape the site scrapinghub.com. Open up the new file myproject/spiders/posts.py:
import scrapy

class PostsSpider(scrapy.Spider):
    name = "posts"
    start_urls = ['http://blog.scrapinghub.com']

    def parse(self, response):
        pass
Here we define the Spider by subclassing Scrapy's Spider class. We give it a name, a list of URLs to start scraping from, and an empty parse method.
The parse method will be called with the HTTP response for each URL scraped. This is where we'll extract the data we want from the page using Scrapy's selector API.
Extracting Data with Scrapy Selectors
Let's extract the titles and URLs of the blog posts from the page. We'll use CSS selectors to find the elements we want:
def parse(self, response):
    for post in response.css('.post-listing .post-item'):
        yield {
            'title': post.css('.post-header h2 a::text').get(),
            'url': post.css('.post-header h2 a::attr(href)').get(),
        }
This code uses a CSS selector to find each blog post. For each one, we extract the title text and URL href attribute, yielding them as a Python dict.
The ::text and ::attr(href) parts select just the text content and href attribute of the a tag, respectively. The .get() method returns the extracted data as a string.
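If you want to experiment with selectors outside a spider, you can build a Selector directly from an HTML string (the interactive scrapy shell works too). A quick sketch with a made-up snippet of HTML:

from scrapy.selector import Selector

# Made-up HTML for illustration
html = '<div class="post-item"><h2><a href="/post-1">First post</a></h2></div>'
sel = Selector(text=html)

print(sel.css('h2 a::text').get())         # 'First post'
print(sel.css('h2 a::attr(href)').get())   # '/post-1'
print(sel.css('h2 a::text').getall())      # every match as a list: ['First post']

The .getall() method is the counterpart to .get(): it returns all matches as a list instead of just the first one.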
We can also use XPath to extract the same data:
def parse(self, response):
    for post in response.xpath('//div[contains(@class, "post-item")]'):
        yield {
            'title': post.xpath('.//h2/a/text()').get(),
            'url': post.xpath('.//h2/a/@href').get(),
        }
XPath expressions are more powerful than CSS selectors, but also more verbose. Choose whichever fits your scraping needs and personal preference.
Running Your Scrapy Spider
Let's run our spider and see what it extracts:
scrapy crawl posts -o posts.json
This runs the spider, scraping the blog and outputting the extracted data to a JSON file. You should see Scrapy's logs as it fetches pages and extracts data. When it's done, open up posts.json:
[
    {
        "title": "Extracting reviews from iTunes App Store with Scrapy",
        "url": "http://blog.scrapinghub.com/2016/01/25/scrapy-tips-from-the-pros-january-2016-edition/"
    },
    {
        "title": "Scraping car dealership inventory with Scrapy",
        "url": "http://blog.scrapinghub.com/2015/12/16/scraping-car-dealership-inventory-with-scrapy/"
    },
    ...
]
Success! We've just scraped our first website with Scrapy. You can build on this basic example to extract all kinds of data from the pages – post dates, authors, categories, and so on.
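For instance, if each post also exposed a date and an author in its markup, the parse method might grow like this. The extra CSS classes below are hypothetical; inspect the real page to find the right selectors:

def parse(self, response):
    for post in response.css('.post-listing .post-item'):
        yield {
            'title': post.css('.post-header h2 a::text').get(),
            'url': post.css('.post-header h2 a::attr(href)').get(),
            # Hypothetical selectors; adjust to the page's actual markup
            'date': post.css('.post-date::text').get(),
            'author': post.css('.post-author::text').get(),
        }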
Scraping Multiple Pages with Scrapy
So far we've only scraped the first page of results. To scrape every page, we'll need to find the links to the next pages and follow them.
Following Pagination Links
To do this, we'll use Scrapy's Request object to make additional requests to those pages:
def parse(self, response):
    # ... extract data ...

    next_page = response.css('a.next-posts-link::attr(href)').get()
    if next_page is not None:
        next_page = response.urljoin(next_page)
        yield scrapy.Request(next_page, callback=self.parse)
After extracting data from the page, this code checks for a "Next" link. If found, it uses urljoin to resolve the URL (since it might be relative).
It then yields a new request to the next page URL, specifying self.parse as the callback. This tells Scrapy to download the next page and call the same parse method to extract its data.
Scrapy will keep following next page links until it doesn't find any more. The result is a "crawler" that automatically extracts data from every page of results.
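As a side note, newer Scrapy versions also provide response.follow, which accepts relative URLs and builds the Request for you; the pagination step could be sketched as:

def parse(self, response):
    # ... extract data ...

    next_page = response.css('a.next-posts-link::attr(href)').get()
    if next_page is not None:
        # response.follow resolves relative URLs, so no urljoin is needed
        yield response.follow(next_page, callback=self.parse)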
Storing Scraped Items
Scrapy supports multiple ways to store scraped data. The simplest is using Feed Exports to generate JSON, CSV, or XML files using the -o flag like we did before.
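If you prefer configuring exports in code rather than on the command line, recent Scrapy versions (2.1+) also support a FEEDS setting in settings.py. A minimal sketch:

# settings.py
# Write scraped items to JSON and CSV files on every run
FEEDS = {
    'posts.json': {'format': 'json'},
    'posts.csv': {'format': 'csv'},
}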
For large scraping tasks, you'll usually want to store scraped data in a database. Scrapy makes this easy with its Item Pipeline feature.
To enable an Item Pipeline component, add it to the ITEM_PIPELINES setting in settings.py:
ITEM_PIPELINES = {
    'myproject.pipelines.MongoDBPipeline': 300,
}
The number 300 determines the order the pipeline is run in (lower runs first). Now let's implement our MongoDB pipeline:
import pymongo

class MongoDBPipeline:
    def __init__(self):
        # Connect to a local MongoDB instance and pick a database and collection
        connection = pymongo.MongoClient('localhost', 27017)
        db = connection["scraping"]
        self.collection = db["posts"]

    def process_item(self, item, spider):
        # insert_one replaces the deprecated Collection.insert
        self.collection.insert_one(dict(item))
        return item
This simple pipeline connects to a local MongoDB instance, specifies a database and collection, and inserts each scraped item into it.
Scrapy will automatically send each item through the pipeline. You can add additional pipeline stages to filter out duplicates, strip unwanted HTML, validate fields, and more.
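For example, a small pipeline that drops items whose URL has already been seen during the crawl might look like the following sketch, using Scrapy's DropItem exception. Register it in ITEM_PIPELINES with a lower number than the MongoDB pipeline so it runs first:

from scrapy.exceptions import DropItem

class DuplicatesPipeline:
    def __init__(self):
        # URLs seen so far in this crawl
        self.seen_urls = set()

    def process_item(self, item, spider):
        url = item.get('url')
        if url in self.seen_urls:
            # Dropping an item stops it from reaching later pipeline stages
            raise DropItem(f"Duplicate item found: {url}")
        self.seen_urls.add(url)
        return item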
Advanced Scrapy Techniques
Scrapy has many features to handle trickier scraping tasks. Here are a few of the most useful:
Handling Login Forms: Use FormRequest to submit login credentials and persist cookies across requests. Check the COOKIES_ENABLED setting (see the login sketch after this list).
Crawling Infinite Scroll Pages: Use your browser's developer tools to find the AJAX requests the page makes as you scroll, then replicate those requests in your spider (sketched after this list).
Avoiding Bot Detection: Rotate user agent strings and IP addresses, set delays between requests, and respect robots.txt. Use the USER_AGENT, DOWNLOAD_DELAY, ROBOTSTXT_OBEY, and AUTOTHROTTLE_ENABLED settings.
Enabling JavaScript: For JS-heavy sites, use a headless browser or rendering service. Scrapy integrates with Splash through the scrapy-splash package, and community extensions exist for other browsers such as Puppeteer and Playwright.
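To make the login technique concrete, here is a minimal sketch using FormRequest.from_response; the URL and form field names are placeholders you would replace with the real ones:

import scrapy

class LoginSpider(scrapy.Spider):
    name = "login_example"
    start_urls = ['https://example.com/login']  # placeholder URL

    def parse(self, response):
        # Fill in the login form found on the page and submit it
        return scrapy.FormRequest.from_response(
            response,
            formdata={'username': 'user', 'password': 'secret'},  # placeholder fields
            callback=self.after_login,
        )

    def after_login(self, response):
        # Session cookies from the login are reused on later requests
        self.logger.info("Logged in, continuing crawl from %s", response.url)
        # ... yield requests for authenticated pages here ...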
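For infinite scroll, the usual approach is to call the JSON endpoint the page requests as you scroll. This sketch assumes a hypothetical paginated API discovered in the browser's network tab:

import json
import scrapy

class ScrollSpider(scrapy.Spider):
    name = "scroll_example"
    api_url = 'https://example.com/api/posts?page={page}'  # hypothetical endpoint

    def start_requests(self):
        yield scrapy.Request(self.api_url.format(page=1), callback=self.parse, cb_kwargs={'page': 1})

    def parse(self, response, page):
        data = json.loads(response.text)
        for post in data.get('posts', []):
            yield {'title': post.get('title'), 'url': post.get('url')}
        # Keep paging until the API returns an empty list
        if data.get('posts'):
            yield scrapy.Request(
                self.api_url.format(page=page + 1),
                callback=self.parse,
                cb_kwargs={'page': page + 1},
            )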
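And if you go the Splash route mentioned above, the scrapy-splash package provides a SplashRequest that renders the page before your callback sees it. A sketch, assuming a running Splash instance and the middleware setup described in the scrapy-splash docs:

import scrapy
from scrapy_splash import SplashRequest

class JsSpider(scrapy.Spider):
    name = "js_example"

    def start_requests(self):
        # Render the page in Splash and wait briefly for JavaScript to finish
        yield SplashRequest('https://example.com', self.parse, args={'wait': 2})

    def parse(self, response):
        # The response now contains the JavaScript-rendered HTML
        yield {'title': response.css('title::text').get()}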
Consult the excellent Scrapy docs to learn more about handling these and other scraping challenges.
Deploying Your Scrapy Spiders
Once you've got your spider working nicely, you'll usually want to deploy it to a server to run automatically on a schedule.
Scrapy pairs well with Scrapyd, a service for running spiders in a production environment. It's easy to set up on your own server following the official docs.
For a quick and simple deployment solution, check out the Scrapy Cloud service from the folks at ScrapingHub. It provides a hassle-free hosted environment for running spiders at scale, with handy features like job scheduling, monitoring, and data export.
Scrapy Best Practices
As you build more advanced scrapers, keep these best practices in mind:
Use Item Loaders: Scrapy's Item Loader library provides a convenient way to populate items while cleaning and parsing the raw data (see the sketch after this list).
Limit concurrent requests: Setting CONCURRENT_REQUESTS too high can get your spider banned. Find a good balance between speed and staying under rate limits (the settings sketch after this list shows a sensible starting point).
Enable caching: Turn on the HTTPCACHE_ENABLED setting to avoid re-downloading pages. Great for saving bandwidth during development.
Monitor scraped data: Keep an eye on your pipeline output to catch extraction bugs early. Automated monitoring can help maintain data quality.
Rotate user agents and IPs: Use Scrapy's USER_AGENT setting or a proxy service like ScrapingBee to avoid bot detection.
Follow robots.txt: Don't scrape sites that ask not to be scraped. Enable the ROBOTSTXT_OBEY setting to respect their rules automatically.
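Here is what the earlier posts spider's parse method could look like with an Item Loader handling the cleanup. This is a sketch; depending on your Scrapy version the processors live in itemloaders.processors (newer) or scrapy.loader.processors (older):

import scrapy
from scrapy.loader import ItemLoader
from itemloaders.processors import MapCompose, TakeFirst

class PostItem(scrapy.Item):
    title = scrapy.Field()
    url = scrapy.Field()

def parse(self, response):
    for post in response.css('.post-listing .post-item'):
        loader = ItemLoader(item=PostItem(), selector=post)
        loader.default_output_processor = TakeFirst()  # keep only the first match per field
        loader.add_css('title', '.post-header h2 a::text', MapCompose(str.strip))
        loader.add_css('url', '.post-header h2 a::attr(href)')
        yield loader.load_item()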
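Several of these practices boil down to a few lines in settings.py. Here is a sketch of a polite-scraping configuration; the values are illustrative starting points, not recommendations for every site:

# settings.py: illustrative values, tune them for your target site
USER_AGENT = 'mybot (+https://example.com/contact)'  # identify your crawler (placeholder)
ROBOTSTXT_OBEY = True        # respect robots.txt rules
CONCURRENT_REQUESTS = 8      # stay well under the site's rate limits
DOWNLOAD_DELAY = 1           # seconds between requests to the same site
AUTOTHROTTLE_ENABLED = True  # adapt crawl speed to server responsiveness
HTTPCACHE_ENABLED = True     # cache responses while developing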
Happy scraping!
Conclusion
You should now have a solid foundation in scraping the web with Scrapy. Of course, there's a lot more to learn – but you're well on your way to building powerful, production-ready crawlers.
To learn more, consult the Scrapy documentation, check out the active Stack Overflow community, and read the official tutorials. With practice and patience, you'll be a Scrapy master in no time!
That's all for this guide. Now get scraping!