How to Bypass CAPTCHA with Puppeteer: A Comprehensive 2021 Guide

CAPTCHA technology is increasingly used by websites to combat bots and automated access. Over 25% of websites now employ some form of CAPTCHA, creating obstacles for automated scraping and data collection efforts.

However, by leveraging tools like Puppeteer Stealth and Oxylabs Web Unblocker, it is possible to bypass CAPTCHAs and access website content in an automated fashion. In this detailed guide, we'll explore popular CAPTCHA-bypassing techniques used by web scraping experts.

The Rising Need for Automated Access

First, let's understand why automated access to websites is needed. Tasks like large-scale data aggregation, price monitoring, and ad verification require automated tools, but CAPTCHAs can block these useful applications.

According to surveys, over 70% of businesses employ web scraping for gathering market intelligence, optimizing marketing spend, and streamlining supply chain processes. Automated tools also help researchers analyze trends and identify threats.

Example: A cybersecurity firm needs to scan the web to identify websites infected with malware. Doing this manually is infeasible, but automation allows large-scale scanning. CAPTCHAs can block such beneficial automated access.

Overview of Puppeteer for Browser Automation

Puppeteer is a Node.js library that provides a high-level API for controlling headless Chrome/Chromium over the DevTools Protocol.

With Puppeteer, you can easily automate actions like:

  • Launching a browser instance.
  • Navigating to web pages.
  • Entering input and clicking buttons.
  • Scrolling pages and extracting content.
  • Taking screenshots.

For example:

// Import Puppeteer
const puppeteer = require('puppeteer');

// Launch headless Chrome
const browser = await puppeteer.launch();

// Create a new page
const page = await browser.newPage();

// Navigate to the target page
await page.goto('https://example.com');

// Enter text into an input
await page.type('input[name="search"]', 'Hello World');

// Click a button
await page.click('button[type="submit"]');

// Extract results text
const results = await page.evaluate(() => {
  return document.querySelector('#results').innerText;
});

This makes Puppeteer very useful for automated data extraction and scraping. However, websites can detect its headless browser and block it with CAPTCHAs. This is where tools like Puppeteer Stealth become useful.

Bypassing Basic Bot Detection with Puppeteer Stealth

The puppeteer-extra-plugin-stealth package augments Puppeteer to help evade basic bot detection and anti-scraping mechanisms employed by websites.

It does this by modifying the headless Chrome browser to mimic real user actions and environment characteristics. Some techniques used include:

  • Input/Cursor tracking: Adds realistic mouse movements and scrolling instead of instant jumps and teleporting across the page.
  • Browser fingerprinting: Modifies aspects like user-agent, WebGL vendor, canvas to match real browsers instead of headless Chromium.
  • Media handling: Intercepts and fakes audio output devices to mimic a real media environment.
  • Canvas/WebGL spoofing: Passes standard Canvas fingerprinting and WebGL configuration tests.
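
To see what these patches target, here is a simplified sketch of the kind of checks an anti-bot script runs against the browser environment. The property names (navigator.webdriver, the user-agent string, plugin count, navigator.languages) are real signals; the scoring logic is illustrative, not Stealth's actual implementation:

```javascript
// Simplified sketch of signals an anti-bot detection script might inspect.
// Unpatched headless Chromium trips several of these checks; Stealth-style
// patches make the same environment look like a regular browser.
function looksLikeBot(env) {
  const flags = [];
  if (env.webdriver) flags.push('navigator.webdriver is true');
  if (/HeadlessChrome/.test(env.userAgent)) flags.push('headless user-agent');
  if (env.pluginCount === 0) flags.push('no browser plugins');
  if (!env.languages || env.languages.length === 0) flags.push('empty navigator.languages');
  return flags;
}

// Unpatched headless Chromium: multiple red flags.
console.log(looksLikeBot({
  webdriver: true,
  userAgent: 'Mozilla/5.0 ... HeadlessChrome/118.0',
  pluginCount: 0,
  languages: [],
}));

// With Stealth-style patches applied, the same checks pass.
console.log(looksLikeBot({
  webdriver: false,
  userAgent: 'Mozilla/5.0 ... Chrome/118.0',
  pluginCount: 3,
  languages: ['en-US', 'en'],
}));
```

Real detection scripts combine dozens of such signals, which is why the plugin patches so many surfaces at once.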

To use Stealth, you first install the puppeteer-extra and puppeteer-extra-plugin-stealth packages:

npm install puppeteer-extra puppeteer-extra-plugin-stealth

Then attach the stealth plugin before launching Puppeteer:

const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

// Add stealth plugin
puppeteer.use(StealthPlugin());

// Rest of puppeteer script
const browser = await puppeteer.launch();

This allows bypassing a wide range of basic anti-bot protections: simple device fingerprinting, user-agent checks, headless browser detection, and so on. But many sites now employ advanced mitigation techniques that Puppeteer Stealth alone cannot bypass.

Limitations of Basic Stealth Plugins

While Stealth can help bypass basic bot detection, many websites now employ advanced mitigation techniques, including:

  • Behavior analysis: Checks for human-like natural patterns in cursor movements, scrolling, clicks etc. Basic scripted patterns are easy to detect.
  • Active JS challenges: Requires solving JS puzzles like mouse tracking, math problems, image selections etc. Needs real browser rendering.
  • Frequent CAPTCHA resetting: Keeps throwing CAPTCHAs randomly throughout the session to combat basic scripted launches.
  • IP blocking: Identifies and blocks traffic from data center ranges and known proxy IPs.

To tackle these, we need a more robust solution capable of orchestrating proxies, browsers and automation.

Bypassing Advanced Bot Mitigation with Oxylabs Web Unblocker

Oxylabs Web Unblocker provides a battle-tested solution for bypassing advanced bot mitigation systems used by target websites.

It works by combining three key capabilities:

1. Global Residential Proxies: Oxylabs provides access to millions of residential IP proxies across countries like the US, Canada, Britain, France etc. This allows rotating source IPs to avoid blocks.

2. Intelligent Browser Automation: Uses real Chrome browsers and Puppeteer automation with Stealth patches to mimic natural browsing patterns. Can handle JS challenges.

3. AI-based Proxy Orchestration: Automatically rotates proxies and browsers to maintain clean sessions and evade behavior analysis.
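
The rotation idea in point 3 can be illustrated with a minimal round-robin pool. Web Unblocker handles this server-side; the sketch below only shows the principle, and the proxy endpoints in it are placeholders, not real Oxylabs hosts:

```javascript
// Minimal round-robin proxy rotator: each request gets the next proxy
// in the pool, so consecutive requests leave from different exit IPs.
// The endpoints below are hypothetical placeholders.
class ProxyPool {
  constructor(proxies) {
    this.proxies = proxies;
    this.index = 0;
  }
  next() {
    const proxy = this.proxies[this.index];
    this.index = (this.index + 1) % this.proxies.length;
    return proxy;
  }
}

const pool = new ProxyPool([
  'http://user:[email protected]:8000',
  'http://user:[email protected]:8000',
  'http://user:[email protected]:8000',
]);

// Three consecutive requests each exit from a distinct IP,
// then the pool wraps around to the first proxy.
console.log(pool.next());
console.log(pool.next());
console.log(pool.next());
console.log(pool.next());
```

A production system layers health checks and per-site session stickiness on top of this, which is the part Web Unblocker automates.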

Here is how you can leverage Web Unblocker APIs in Node.js using the node-fetch and https-proxy-agent libraries:

// Import dependencies
const fetch = require('node-fetch');
const HttpsProxyAgent = require('https-proxy-agent');

// Configure proxy agent with your Web Unblocker credentials
const proxyUrl = 'http://username:[email protected]:port';
const agent = new HttpsProxyAgent(proxyUrl);

// Fetch the target page through the proxy
// (replace the placeholder URL with your target)
const resp = await fetch('https://example.com', { agent });

// Handle response
console.log(await resp.text());

The key advantage of Web Unblocker is automatically rotating IPs and browsers to establish new sessions so each request appears completely distinct from analytics perspectives. This allows bypassing even sophisticated behavioral checks.

Integrating Web Unblocker with Puppeteer

You can also directly integrate the Web Unblocker proxy into your Puppeteer scripts to benefit from automatic proxy rotation:

const puppeteer = require('puppeteer-extra');

// Launch Chrome through the Web Unblocker proxy. The --proxy-server
// flag takes host:port only, so credentials are supplied separately
// via page.authenticate().
const browser = await puppeteer.launch({
  ignoreHTTPSErrors: true,
  args: ['--proxy-server=proxy.oxylabs.io:port']
});

const page = await browser.newPage();
await page.authenticate({ username: 'username', password: 'password' });
This routes each browser instance through Web Unblocker proxies automatically, providing a streamlined solution for CAPTCHA bypassing in your Puppeteer pipelines.

Responsible Web Scraping Best Practices

When bypassing CAPTCHAs to access website content, it's important to employ responsible practices, including:

  • Respect robots.txt: Avoid scraping pages disallowed by the site's robots.txt file.
  • Limit load: Control the frequency of requests to avoid overloading target sites.
  • Attribute data: Properly cite any copied data used elsewhere.
  • No plagiarism: Do not copy substantial portions of text content verbatim.
  • Follow laws: Ensure collection methods comply with applicable laws. Seek legal counsel if needed.
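
The "limit load" point above can be enforced in code with a small token-bucket throttle. This sketch uses an injected clock so the policy can be demonstrated without real delays; the capacity and refill rate are illustrative values you would tune per target site:

```javascript
// Token-bucket throttle: allow at most `capacity` requests in a burst,
// refilling at `refillPerSecond` tokens per second. The clock function
// is injected so the policy can be exercised without real waiting.
class Throttle {
  constructor(capacity, refillPerSecond, now = () => Date.now() / 1000) {
    this.capacity = capacity;
    this.refillPerSecond = refillPerSecond;
    this.tokens = capacity;
    this.now = now;
    this.last = now();
  }
  tryAcquire() {
    const t = this.now();
    this.tokens = Math.min(this.capacity, this.tokens + (t - this.last) * this.refillPerSecond);
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Simulated clock: two requests pass immediately, a third is refused
// until a second has elapsed and one token has refilled.
let fakeTime = 0;
const throttle = new Throttle(2, 1, () => fakeTime);
console.log(throttle.tryAcquire()); // true
console.log(throttle.tryAcquire()); // true
console.log(throttle.tryAcquire()); // false (bucket empty)
fakeTime = 1;
console.log(throttle.tryAcquire()); // true (one token refilled)
```

Before each scraping request, call tryAcquire() and sleep until it succeeds; this keeps your request rate within whatever budget you have judged safe for the target site.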

Adhering to ethical web scraping principles is vital even when using advanced tools like Web Unblocker.


Conclusion

CAPTCHAs can significantly impede automated data extraction efforts. Using Puppeteer Stealth allows bypassing basic protections, but robust solutions like Oxylabs Web Unblocker are needed for advanced bot mitigation. By combining proxy rotation, headless browsers, and automation, tools like Web Unblocker provide a reliable way to bypass CAPTCHAs at scale while following responsible practices.
