
Your Step-by-Step Guide to Scraping Data from Indeed

Hey there! Looking to scrape job listings from Indeed? You've come to the right place.

Indeed is one of the largest job search engines on the web, with over 250 million unique visitors per month. That's a massive trove of data on job postings, salaries, company profiles and more.

Unfortunately, Indeed's APIs don't fully expose all this data. That's where web scraping comes in.

In this guide, I'll walk you step-by-step through how to scrape Indeed using Python. I'll share code snippets you can use, along with tips to scrape accurately and avoid getting blocked.

I'll also cover how to automate scraping to run daily, weekly or monthly. That way you can keep your job listing data fresh automatically!

By the end, you'll be scraping Indeed job postings like a pro. Let's dig in!

Why Scrape Indeed Job Listings?

Before we get our hands dirty with some Python code, let's talk about why you might want to scrape data from Indeed in the first place.

Here are just a few ideas:

  • Market research – Analyze job posting trends to identify rising skills or roles in demand. Indeed has data on millions of openings across all industries.

  • Competitive intelligence – See what salaries and benefits companies are offering for similar roles. Useful when benchmarking your own compensation packages.

  • Job search engines – Build custom job boards using Indeed data filtered to specific keywords or locations.

  • Recruiting tools – Track new openings matching candidate skills to surface relevant jobs.

  • Resume analysis – Extract keywords and skills from job descriptions to suggest improvements to resumes and cover letters.

Those are just a few examples – with rich structured data on job postings, the possibilities are endless!

Now let's look at how to actually extract that data using web scraping.

Is It Legal to Scrape Indeed?

Before diving into the code, I want to quickly touch on the legality of web scraping, since I know some people have concerns here.

The short answer: scraping public data from Indeed is generally permissible, as long as you follow some basic rules:

  • Only access public pages – don't try to scrape private user data or anything behind a login.

  • Don't overload Indeed's servers by scraping too aggressively. Follow polite crawling practices.

  • Read Indeed's Terms of Service and respect any restrictions on automated access.

  • Avoid copying large extracts of text verbatim, to respect copyright. Paraphrasing is fine.

  • Don't republish any private, personal or sensitive data you scrape.

If you follow these common-sense guidelines, scraping public job listing data from Indeed is generally permissible in most countries.
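
As a quick aid on the polite crawling front, Python's standard library can check a site's robots.txt before you fetch anything. Here's a minimal sketch:

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url('https://www.indeed.com/robots.txt')
rp.read()

# Check whether generic crawlers may fetch a given URL
print(rp.can_fetch('*', 'https://www.indeed.com/jobs?q=python'))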

Of course, I still recommend consulting an attorney if you have any concerns, since laws vary by jurisdiction. But you can scrape Indeed with confidence as long as you stay ethical!

Ok, let's dive into the fun stuff – actual code!

Scraping Indeed Listings with Python

When scraping large sites like Indeed, Python is a great choice thanks to libraries like Requests, Beautiful Soup and Selenium.

I'll walk you through a script to:

  • Extract job listings matching keyword and location searches

  • Parse details like job titles, salaries and descriptions

  • Automate pagination to fetch all listings across multiple pages

Let's get started!

Import Libraries

We'll use Requests to fetch pages, Beautiful Soup to parse them, the built-in time module to throttle requests, and Pandas to store the data:

import requests
from bs4 import BeautifulSoup
import time
import pandas as pd

Requests and Beautiful Soup are all you really need, but Pandas helps manage the data while time throttles our requests.

Define Search Parameters

First, let's define what job listings we want. Specify keywords, location and other filters:

keywords = "Remote Software Engineer"
location = "United States" 
salary_min = 100000

This targets high-paying remote software jobs in the US. Adjust to your desired criteria.

Fetch Search Results Page

With parameters set, we'll request the URL, passing our keywords and location:

BASE_URL = 'https://www.indeed.com/jobs'

params = {
  'q': keywords,
  'l': location,
  'minSalary': salary_min,
  'remotejob': 'remote'  # Filter remote jobs
}

print(f'Fetching job listings for {keywords} in {location}...')

res = requests.get(BASE_URL, params=params)
res.raise_for_status()  # Raise an exception for 4xx/5xx responses

This performs the initial search query, filtering by our keywords and parameters.
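
One tip before going further: some sites reject the default python-requests User-Agent, so if you get blocked or see empty responses, sending a browser-like header often helps. A minimal sketch (the header string is just an example – any current browser string will do):

# Optional: send a browser-like User-Agent header
headers = {
  'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
}

res = requests.get(BASE_URL, params=params, headers=headers)
res.raise_for_status()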

Parse Results with BeautifulSoup

Next we'll parse the HTML of the search results page to extract high-level listing data:

soup = BeautifulSoup(res.text, 'html.parser')

listings = []  # List to store listings

for div in soup.find_all('div', class_='job_seen_beacon'):

  title = div.find('h2').text.strip()

  company = div.find('span', class_='companyName').text.strip()

  location = div.find('div', class_='companyLocation').text.strip()

  # Listing links are relative, so prefix Indeed's domain
  # (we'll need these URLs later to scrape full details)
  link = div.find('a', href=True)
  url = 'https://www.indeed.com' + link['href'] if link else None

  # Append listing data
  listings.append({
    'title': title,
    'company': company,
    'location': location,
    'url': url
  })

Here we locate each listing div, grab key fields like the title, company and URL, and store them in our listings list.
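
A word of caution: Indeed's class names change from time to time, so treat the selectors above as snapshots rather than guarantees. A small helper like this (my own addition, not part of Indeed's markup) keeps one missing element from crashing the whole run:

def safe_text(parent, tag, class_name=None):
  # Return stripped text if the element exists, else None
  if class_name:
    el = parent.find(tag, class_=class_name)
  else:
    el = parent.find(tag)
  return el.text.strip() if el else None

# Usage: title = safe_text(div, 'h2')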

Handle Pagination

Indeed splits results across multiple pages. We'll need to iterate through each:

# Track the page offset (page 1 was fetched above)
current_page = 0

while True:

  # Increment page
  current_page += 1

  print(f'Scraping page {current_page + 1}...')

  # Indeed paginates with a "start" offset in steps of 10
  params['start'] = current_page * 10

  # Fetch page HTML
  res = requests.get(BASE_URL, params=params)
  res.raise_for_status()

  # Parse HTML
  soup = BeautifulSoup(res.text, 'html.parser')

  # Extract this page's listings (same fields as above)
  for div in soup.find_all('div', class_='job_seen_beacon'):
    link = div.find('a', href=True)
    listings.append({
      'title': div.find('h2').text.strip(),
      'company': div.find('span', class_='companyName').text.strip(),
      'location': div.find('div', class_='companyLocation').text.strip(),
      'url': 'https://www.indeed.com' + link['href'] if link else None
    })

  # Stop once there is no "Next" link
  if not soup.find('a', {'aria-label': 'Next'}):
    print('Reached last page!')
    break

  # Sleep to throttle requests
  time.sleep(3)

print(f'Scraped {len(listings)} listings')

Here we keep incrementing the page offset, fetching the next page and extracting its listings, looping until there's no Next link left.

Adding a short time.sleep() throttle helps avoid overwhelming Indeed's servers.
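
If you want to be extra polite, you can randomize the delay so your requests don't arrive at a perfectly regular rhythm. A tiny sketch using the standard library:

import random

# Sleep a random 2-5 seconds between requests instead of a fixed interval
time.sleep(random.uniform(2, 5))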

Scrape Listing Details

So far we've extracted high-level data like titles and companies. To get details like salaries and descriptions, we'll scrape each listing URL:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Chrome()

# Loop through listings
for listing in listings:

  print(f'Getting details for {listing["title"]}')

  # Load listing URL (captured during the search scrape)
  driver.get(listing['url'])

  # Extract key fields (Selenium 4 syntax)
  desc = driver.find_element(By.ID, 'jobDescriptionText').text

  # Not every listing displays a salary
  try:
    salary = driver.find_element(By.CLASS_NAME, 'salary-snippet').text
  except NoSuchElementException:
    salary = None

  listing['desc'] = desc
  listing['salary'] = salary

  # Sleep to throttle
  time.sleep(2)

driver.quit()

Here Selenium provides a full browser to render JavaScript-heavy pages. We load each URL, and extract additional fields like the description and salary.

Pro Tip: Consider using a proxy service to avoid IP blocks when using Selenium at scale.
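
As a sketch of what that looks like (the proxy endpoint below is a placeholder – substitute whatever URL your provider gives you):

PROXY = 'http://proxy.example.com:8080'  # Placeholder endpoint

# With Requests:
proxies = {'http': PROXY, 'https': PROXY}
res = requests.get(BASE_URL, params=params, proxies=proxies)

# With Selenium (Chrome):
options = webdriver.ChromeOptions()
options.add_argument(f'--proxy-server={PROXY}')
driver = webdriver.Chrome(options=options)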

And that's it! With those steps you can scrape thousands of job listings from Indeed automatically.

The end result is structured job data you can analyze or export to tools like Excel. Let's look at a few examples next.

What Can You Do with Scraped Indeed Data?

Now that we can scrape Indeed listings, what can we actually do with that data?

Here are just a few ideas:

Export to Excel for Analysis

df = pd.DataFrame(listings)
df.to_excel('indeed_listings.xlsx', index=False)  # Requires the openpyxl package

Pandas makes it easy to export results to Excel. This enables powerful filtering, pivot tables and formulas.

[Screenshot: Indeed listing data in Excel]

You can analyze trends across locations, salaries, skills and more.
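
For instance, here's a quick sketch of that kind of analysis done directly in Pandas, before you even open Excel:

# Quick trend checks straight from the DataFrame
print(df['location'].value_counts().head(10))  # Top hiring locations
print(df['company'].value_counts().head(10))   # Most active companies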

Build Job Search Databases

import sqlite3

# Connect to SQLite database
conn = sqlite3.connect('jobs.db')

# Create table (salary is stored as text, e.g. "$120,000 a year")
conn.execute('''
  CREATE TABLE IF NOT EXISTS jobs (
    title TEXT,
    company TEXT,
    description TEXT,
    salary TEXT
  );
''')

# Insert listings into database
for listing in listings:
  conn.execute(
    "INSERT INTO jobs VALUES (?, ?, ?, ?)",
    (listing['title'], listing['company'],
     listing['desc'], listing['salary']))

conn.commit()
conn.close()

SQLite provides a simple database to store listings for customized search. Integrate with Flask to build your own job board!
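
As a taste of that, here's a minimal Flask sketch of a keyword-searchable endpoint backed by jobs.db. The route and field names are my own illustrative choices:

from flask import Flask, request, jsonify
import sqlite3

app = Flask(__name__)

@app.route('/jobs')
def search_jobs():
  # Match listings whose description contains the ?q= keyword
  keyword = request.args.get('q', '')
  conn = sqlite3.connect('jobs.db')
  rows = conn.execute(
    "SELECT title, company, salary FROM jobs WHERE description LIKE ?",
    (f'%{keyword}%',)).fetchall()
  conn.close()
  return jsonify([
    {'title': t, 'company': c, 'salary': s} for t, c, s in rows])

if __name__ == '__main__':
  app.run(debug=True)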

Email Relevant Listings to Candidates

import smtplib
from email.message import EmailMessage

# Connect to SMTP server (placeholder host - use your own)
smtp = smtplib.SMTP('smtp.domain.com')

# Placeholder: swap in your own skill-matching logic
candidate_skills = ['python', 'aws']

for listing in listings:

  # Check if listing matches candidate skills
  match = any(skill in listing['desc'].lower() for skill in candidate_skills)

  if match:

    msg = EmailMessage()
    msg['Subject'] = f'New job for you - {listing["title"]}'
    msg['From'] = 'recruiter@example.com'   # Placeholder addresses
    msg['To'] = 'candidate@example.com'
    msg.set_content(listing['desc'])

    # Send listing to candidate
    smtp.send_message(msg)

smtp.quit()

Python makes it easy to automatically email candidates new listings matching their skills and interests.

This is just a small sample – with data on millions of listings, you can build far more on top of this.

Now let's look at running this scraper automatically.

Scheduling Daily Indeed Scrapes

While scraping Indeed in real time is useful, even more valuable is setting up automated, scheduled scrapes to keep your data fresh.

Here are two good options to run the scraper on a fixed recurring schedule:

Cron Jobs

A simple way to automate Python scripts is cron, a standard Linux utility.

Add an entry like this to run daily at 8am:

0 8 * * * python /home/user/indeedScraper.py

You can schedule complex recurrences. But cron lacks reporting if scrapes fail.
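
You can partly work around that by logging normal output to a file and letting cron email anything written to stderr. A sketch, assuming the machine has a working mail setup (the address and paths are placeholders):

MAILTO=you@example.com
# stdout goes to the log; any stderr output triggers a cron email
0 8 * * * python /home/user/indeedScraper.py >> /home/user/indeed_scraper.log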

Scraping Platforms

For more robust scheduling and automation, I recommend a dedicated scraping framework or platform such as Scrapy or Apify.

These provide browser and proxy automation to handle CAPTCHAs, blocks and JavaScript-heavy pages, and they have scheduling built in.

You also get email alerts, performance analytics and integration options. They really take the headache out of automation!

Here's a quick comparison:

                              Cron Jobs           Scraping Platforms
Pricing                       Free                Paid plans
Proxies & Headless Browsers   Need custom code    Included features
Scheduler                     Basic recurrence    Advanced options
Monitoring & Alerts           None                Emails and dashboard
Results Storage               Manual handling     Built-in storage & exports

For large, complex sites like Indeed I recommend using a dedicated platform. The additional reliability and features are worth the cost when scraping at scale.

Let's Recap

In this guide you learned:

  • Why scraping Indeed is useful for market research, job search and recruiting tools.

  • How to extract listings by mimicking search queries in Python.

  • Best practices like throttling requests and using proxies to avoid blocks.

  • How to parse details like salaries and descriptions from listing pages.

  • Automation options like cron and dedicated scraping platforms to keep your data fresh.

The code samples above should give you a template to start scraping your own Indeed data. Feel free to tweak and build on it for your use case!

Just remember to respect Indeed's Terms of Service, avoid scraping too aggressively, and follow good web scraping hygiene to stay on the right side of the law.

I hope this guide gave you a comprehensive overview of how to effectively scrape Indeed using Python. Automating these steps lets you leverage Indeed's incredible trove of job listing data.

Let me know if you have any other questions! I'm always happy to chat more about web scraping best practices.

Good luck with your Indeed scraping project!
