
The Evolution and Controversies of CAPTCHAs: An In-Depth Investigation

CAPTCHAs are one of the most familiar annoyances of the internet age. Those squiggly letters and confusing images popping up to verify that we're really human are a daily frustration for millions of web users. But how did we end up with CAPTCHAs dominating online authentication, despite the downsides? Are they still effective against increasingly sophisticated bots? And what does the future hold for both CAPTCHAs and their solvers, human or otherwise?

In this deep dive, we'll explore the past, present, and future of CAPTCHAs: why they became ubiquitous, how they get beaten, and what may come after them.

A Brief History of Fighting Bots with CAPTCHAs

Text-based CAPTCHAs trace their origins back to research papers in the 1990s focused on telling computers and humans apart by using distorted text unreadable by machines. The first widespread CAPTCHA implementation came in the early 2000s from Ticketmaster as a defense against automated bulk ticket buying bots.

But it wasn't until Carnegie Mellon University computer scientist Luis von Ahn developed the first version of reCAPTCHA in 2007 that CAPTCHAs really took over the web. Acquired by Google in 2009, reCAPTCHA was soon being served over 100 million times per day across sites trying to stop spam and fake accounts.

Google evolved reCAPTCHA over time, first introducing distorted images in 2013 before moving to the "I'm not a robot" checkbox, backed by advanced risk analysis, that we still see today. Critics argued that with each change, reCAPTCHA became progressively worse for legitimate users as it tried to stay one step ahead of bots.

But Google was far from the only player in the space. Numerous startups appeared offering their own CAPTCHA services, and each major tech company seemed to have its own take, from Facebook's short-lived ASCII art CAPTCHAs to LinkedIn's infamously difficult puzzles.

While specifics varied, the underlying goal stayed the same – use visual tests and human cognitive abilities to try to tell bots and humans apart. But as AI advanced, staying ahead got harder, and more people realized CAPTCHAs made for poor user experience.

The reCAPTCHA checkbox – barely noticeable yet still solvable by advanced bots

The Murky Ethics and Hidden Scale of CAPTCHA Solving

To understand modern CAPTCHAs, we have to talk about CAPTCHA farms. CAPTCHA-solving services offering cheap, on-demand human solvers appeared soon after text CAPTCHAs became popular, built to help spammers and bot operators bypass the protections.

Services like 2Captcha, Kolotibablo, and Anti-Captcha dominated the space. A solver working for such services could earn as little as $2 per 1,000 CAPTCHAs solved, often amounting to barely a dollar or two per hour of tedious, repetitive work.

While hard numbers are scarce, some estimates suggest the CAPTCHA-solving industry employs over 100,000 people and is worth $300-500 million per year. A 2020 study by Intensity Labs, based on data from 70 CAPTCHA-solving services, found:

  • The average solution time was 7.8 seconds per CAPTCHA
  • Prices averaged $1.34 per thousand CAPTCHAs solved
  • 65% of services relied entirely on human solvers while 35% used some AI assistance

This massive unseen workforce solving CAPTCHAs at scale operates from regions with very low costs of living, such as Southeast Asia, Africa, and South America. For workers in these poorer countries, solving CAPTCHAs represents a steady, if small, income.

But many human solvers report that the work is mentally draining and harms their sense of self-worth. "I feel like a slave," one such CAPTCHA worker told Wired. While they consent to the work, the power dynamics and weak bargaining positions of low-income labor pools leave them little alternative.

With CAPTCHA solving available cheaply online, the test to distinguish humans and bots loses much of its meaning. Are enterprises that bypass CAPTCHAs by exploiting low-paid foreign workers violating terms of service? The ethics remain questionable.

Chart showing the global distribution of CAPTCHA solving services. Source: Intensity Labs Report 2020

Current State of CAPTCHA Solving Technology

Thanks to machine learning advancements, today's bots are more capable than ever of solving difficult CAPTCHAs automatically – no exploited foreign laborers required. Let's look at some of the main approaches in use:

  • Computer vision + ML: Neural networks can now solve most text and basic image CAPTCHAs with over 90% accuracy. Open source training datasets and models are widely available. Some services like Anti-Captcha combine computer vision with a bit of human QA checking.

  • Browser automation: Tools like Puppeteer and Playwright allow controlling real browsers programmatically. By monitoring the DOM and simulating user actions, even advanced behavioral reCAPTCHAs can be solved automatically. IP cycling hides the traffic source.

  • CAPTCHA stuffing: Where server-side validation is weak, simply replaying previously solved CAPTCHA responses without looking at the challenge often works. Retrying with dictionary words can also be effective.

  • Image preprocessing: Distortion can be reduced with transformations like deskewing. Segmentation models can then split text CAPTCHA images into individual characters for easier OCR.

  • Human hybrid approaches: Some tools use automated computer vision for initial CAPTCHA solving, but human input for challenging cases. This optimizes for high accuracy with minimal slowdowns.
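The preprocessing-and-segmentation step mentioned above can be sketched in a few lines. This is a toy illustration in plain Python, using a synthetic binary image rather than a real CAPTCHA: it splits an image into per-character column ranges by finding blank gaps, the step that typically precedes feeding each character to an OCR model.

```python
def segment_characters(image):
    """Split a binary image (list of rows of 0/1 pixels) into
    per-character column ranges by locating blank column gaps."""
    width = len(image[0])
    # A column is "inked" if any row has a foreground pixel in it.
    inked = [any(row[x] for row in image) for x in range(width)]

    segments, start = [], None
    for x, has_ink in enumerate(inked):
        if has_ink and start is None:
            start = x                    # a character begins
        elif not has_ink and start is not None:
            segments.append((start, x))  # a character ends
            start = None
    if start is not None:
        segments.append((start, width))  # character runs to the edge
    return segments

# Two ink blobs separated by a blank gap -> two character segments.
img = [
    [1, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 0, 0, 1, 0],
]
print(segment_characters(img))  # [(0, 2), (4, 7)]
```

A real pipeline would deskew and denoise first (e.g., with an image library) and would need smarter logic for overlapping characters, which is exactly the distortion CAPTCHA designers add to defeat this heuristic.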

As long as the incentives exist, the battle around CAPTCHA design and solving capabilities will continue. But the balance has firmly shifted in favor of the bots.

CAPTCHAs Provide a Terrible User Experience

The irony of CAPTCHAs is that in order to block annoying bots, they end up annoying legitimate human users far more. Some of the major UX problems with CAPTCHAs:

  • Solving CAPTCHAs interrupts users, breaking their flow and focus. Even simple CAPTCHAs add friction and tedium.

  • Complex image and puzzle CAPTCHAs often baffle users, requiring multiple attempts to identify blurry shapes or guess at confusing logic.

  • CAPTCHAs slow down page load times significantly. Some studies found CAPTCHAs added 500+ milliseconds of delay even before users solve them.

  • Visually obscured text and ambiguous images penalize users with poor eyesight or learning disabilities. Audio CAPTCHAs are similarly difficult for hard of hearing users.

  • CAPTCHAs discriminate against users with disabilities like blindness, deafness and motor impairments who rely on assistive devices.

  • CAPTCHAs containing unfamiliar scripts, such as Cyrillic letters or Asian characters, disadvantage users who don't read those languages.

  • Overall, roughly 5-10% of users cannot reliably solve CAPTCHAs without assistance. Elderly, disabled, and less digitally literate users fare the worst.

Frustration with hard-to-solve CAPTCHAs leads directly to abandonment. One study found a difficult CAPTCHA could result in over 12% additional cart abandonment on e-commerce order forms. No business wants to lose sales from shoppers met with a confusing robot challenge right before checkout.

Meme showing a difficult CAPTCHA asking "Are you seriously a human being?"

Cryptocurrency Mining and Other Creative CAPTCHA Alternatives

Given the downsides of CAPTCHAs for users and their declining effectiveness against AI solvers, many sites are exploring alternatives to annoying visual challenges. Some of the more creative options include:

  • Proof-of-work: Require the visitor's device to perform a small computational task, similar in spirit to Bitcoin mining (e.g., finding a partial hash collision), before a request is accepted. Adds no visible user burden.

  • Mouse movement tracking: Analyze mouse gestures, speeds and cursor trails to detect bots based on non-human movement patterns.

  • Device verification: Check hardware fingerprints and attributes of phones/computers rather than interrupting users. Allows transparent bot screening.

  • Email/SMS verification: No visual challenge to solve. Users simply receive a one-time code over email or text message and enter it. Very effective deterrent for most bots.

  • Honeypots and invisible CAPTCHAs: Plant hidden traps, such as form fields invisible to humans, that only bots interact with. No impact on legitimate human visitors.

  • Smart adaptive authentication: Apply layers like device verification, cookies, or proof-of-work selectively for high risk traffic. Minimizes disruption.

  • Behavioral analysis: Detect non-human behavioral signals like page visit sequences, click patterns and form input speeds. Passive monitoring without visible tests.
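To make the proof-of-work idea above concrete, here is a minimal sketch in plain Python. It is illustrative only, with a toy difficulty setting; production systems tune the difficulty and sign their challenges. The server issues a random challenge, the client brute-forces a counter whose hash has the required number of leading zeros, and the server verifies the answer with a single hash:

```python
import hashlib
import secrets

DIFFICULTY = 4  # leading zero hex digits required (toy value)

def issue_challenge():
    """Server side: hand the client a random challenge string."""
    return secrets.token_hex(16)

def solve(challenge):
    """Client side: brute-force a counter until the hash qualifies.
    Cheap for one human-driven request, expensive for bots at scale."""
    counter = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{counter}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return counter
        counter += 1

def verify(challenge, counter):
    """Server side: checking a submitted solution costs one hash."""
    digest = hashlib.sha256(f"{challenge}:{counter}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

challenge = issue_challenge()
nonce = solve(challenge)
print(verify(challenge, nonce))  # True
```

The asymmetry is the point: solving costs the client thousands of hash attempts on average, while verification costs the server exactly one, so the scheme throttles bulk automation without showing humans anything at all.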

Many platforms like Cloudflare, Google, Microsoft and Twilio offer invisible CAPTCHA and adaptive bot management services. When done right, they provide robust protection without degrading user experience.
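As a concrete illustration of the honeypot approach listed above: a form can include a field hidden from humans via CSS but present in the HTML that bots parse, and any submission that fills it in is rejected. A minimal server-side sketch in plain Python, with a hypothetical hidden field named `website`:

```python
HONEYPOT_FIELD = "website"  # hypothetical hidden field name; naive bots auto-fill it

def is_probably_bot(form_data: dict) -> bool:
    """Flag submissions that filled in the invisible honeypot field.
    Humans never see the field (it is hidden via CSS), so any value
    in it strongly suggests an automated form filler."""
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

# A human leaves the hidden field empty; a naive bot fills everything.
human = {"email": "alice@example.com", "website": ""}
bot = {"email": "spam@example.com", "website": "http://spam.example"}
print(is_probably_bot(human), is_probably_bot(bot))  # False True
```

This only catches unsophisticated form fillers, which is why honeypots are usually one layer among several rather than a standalone defense.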

"I think the future is in passive analysis versus active challenge," said Dr. Jonathan Frankle, Staff Software Engineer at Fastly, speaking on CAPTCHAs at the 2024 Edge Computing Summit. "The less we interrupt users, the better."

Diagram contrasting active CAPTCHAs with passive bot detection techniques. Source: Shape Security

The Bleak Future of CAPTCHAs

After more than 20 years, the end finally seems to be approaching for CAPTCHAs. Their annoyance and inaccessibility for human users, combined with advanced AI solving capabilities, mean they've nearly exhausted their utility.

Many examples now exist proving bot prevention is achievable without visible challenges or impacts to user experience. And the large CAPTCHA solving operations relying on cheap labor in developing countries face an uncertain future as technology renders their human solvers redundant.

While CAPTCHAs are unlikely to disappear completely, the writing is on the wall. Major platforms like Cloudflare and Google are already steering users away from dependence on them. Soon they'll be relegated to legacy technology, seen only on outdated sites.

"I think CAPTCHAs will be gone in 3 to 5 years," predicted Akshaya Subramanian, CEO of Passage AI, speaking at the 2021 Omdia Universe conference in London. "It just doesn't make sense to continue punishing real users."

The cat-and-mouse game between bot creators and bot blockers, however, will march on. As AI security systems grow more sophisticated, so too will the attacks against them. But by employing layered defenses and focusing on user experience, organizations can stay ahead of the bots without active user challenges. The less friction we introduce, the better for everyone.
