Curl is the Swiss Army knife of HTTP tools, and Chrome's DevTools are the multi-tool of web development. Put them together, and you have a powerful combo for inspecting, debugging, and replicating any web request. In this guide, we'll dive deep into extracting curl requests from Chrome, with step-by-step instructions, tips, and techniques for data scrapers and API developers.
Why Curl and Chrome Matter
If you work with web data in any capacity, curl and Chrome are two tools you can't afford to ignore.
Curl is a simple but powerful command line tool for sending HTTP requests and transferring data. First released back in 1997, curl has since become ubiquitous, with an estimated 1 billion installs worldwide. It's the default choice for API interaction, server debugging, and data extraction for developers across dozens of languages and platforms.
Chrome, meanwhile, has steadily eaten up browser market share over the past decade, and now accounts for nearly 70% of desktop web activity. That means, more likely than not, the behavior of any given website or API is being viewed through the lens of Chrome.
Data scrapers and API developers in particular can benefit from understanding how to bridge the gap between these two essential tools. Chrome's DevTools provide an unparalleled real-time view into the HTTP layer of a site or app. Being able to easily extract, modify, and replay requests from DevTools using curl allows you to:
- Debug APIs and integrations in a controlled environment
- Prototype scraping logic without writing any code
- Simulate different clients, user agents, and authentication scenarios
- Quickly generate code snippets for HTTP libraries in any language
In short, if you're working with web data and you're not using curl with Chrome DevTools, you're missing out. Let's fix that.
Finding the Request You Need
First, let's talk about how to find the specific HTTP request you're interested in capturing. Modern websites and web apps make dozens or even hundreds of requests, and sifting through them all in the DevTools Network tab can be overwhelming.
Here are some key parts of the Network tab interface to pay attention to:
- The Filter button lets you quickly narrow down requests by properties like type (e.g. XHR, JS, CSS), status code, or domain.
- The search bar supports regex pattern matching for any text in the request or response.
- Clicking the Name of any request will show you granular details about it in the pane below, including headers, preview, response, timing, and even cookies.
- Preserve Log ensures requests aren't cleared when you navigate to a new page.
Once you've found your request, make sure Preserve Log is on, and you're ready to copy as curl.
Copying Requests as Curl
To grab a request as a curl command in Chrome:
- Right-click the Name of the request in the Network tab
- Hover over Copy, then select Copy as cURL
This will copy the entire HTTP request to your clipboard, translated into an equivalent curl command, like so:
```shell
curl 'https://api.example.com/endpoint' \
  -H 'Authorization: Bearer token123' \
  -H 'Content-Type: application/json' \
  --data-raw '{"key":"value"}'
```
You can paste this directly into a terminal to re-run the request via curl, or use it as a starting point for building your scraping script or API test.
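For example, a common tweak before replaying is to change a header. Using the placeholder endpoint and token from the snippet above, a modified replay might look like this (the values are illustrative, not a real API):

```shell
# Replay the copied request with a different User-Agent header.
# The endpoint, token, and body are placeholder values, not a real API.
curl 'https://api.example.com/endpoint' \
  -H 'Authorization: Bearer token123' \
  -H 'Content-Type: application/json' \
  -H 'User-Agent: my-test-client/1.0' \
  --data-raw '{"key":"value"}'
```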
Chrome offers a few additional options and formats when copying requests as curl, which can be useful in different scenarios:
| Option | Description | Example Use Case |
| --- | --- | --- |
| Copy as cURL | Copies the request using default curl syntax with shortened flags | Quick one-off commands in the terminal |
| Copy as cURL (Windows) | Uses Windows-style paths and `cmd.exe` escaping | Compatibility with Windows terminals |
| Copy as cURL (bash) | Escapes shell special characters and quotes arguments for bash | Avoiding syntax errors with complex header values |
| Copy All as cURL | Copies every request in the DevTools log as a series of curl commands | Replaying a sequence of dependent requests |
| Copy All as cURL (Windows) | Windows variant of the above | Same as above for Windows environments |
| Copy All as cURL (bash) | Bash-escaped variant of Copy All | Same as above with more robust escaping |
One thing to watch out for, especially when copying multiple requests, is that curl doesn't follow redirects by default. So if your request triggers a 301 or 302 redirect, you'll need to either add the `-L` flag to curl, or manually follow the redirect in a subsequent request.
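For example (with a placeholder URL), `-L` tells curl to chase the redirect chain, and `--max-redirs` caps how many hops it will follow:

```shell
# Follow redirects automatically, up to 5 hops (placeholder URL).
curl -L --max-redirs 5 'https://example.com/old-path'
```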
Extracting Partial Requests and Responses
Sometimes you may want to tweak or omit parts of the request when moving from Chrome to curl. For example, maybe you want to test different authentication headers, or inspect the request body independently from headers.
You can selectively copy request details from the DevTools Network tab:
- To grab just the URL, right-click the request and select Copy > Copy URL
- To get request headers as a raw string, right-click and select Copy > Copy Request Headers
- Similarly, for the request body, select Copy > Copy Request Body
You can then modify these snippets and manually stitch them together into a curl command.
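As a rough sketch (all values are placeholders), you can hold each copied piece in a shell variable and assemble the command, echoing it first so you can review it before running:

```shell
# Pieces copied individually from DevTools (placeholder values).
URL='https://api.example.com/endpoint'
AUTH_HEADER='Authorization: Bearer token123'
BODY='{"key":"value"}'

# Assemble and print the command for review; drop the echo to actually run it.
echo curl "$URL" -H "$AUTH_HEADER" -H 'Content-Type: application/json' --data-raw "$BODY"
```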
DevTools also provides some useful functions for manipulating and extracting response data:
- Any JSON response can be copied as a JavaScript object or array literal via the Right-click > Copy Value option.
- Right-click > Copy Response will copy the entire raw response body string.
- For HTML responses, Right-click > Copy > Copy OuterHTML will grab the selected element's full markup, including its own tag and everything nested inside it.
These options are invaluable for verifying API responses or extracting sample HTML for your scraper.
Converting Curl Commands to Code
Once you've fine-tuned your curl request, the final step is translating it into the programming language of your choice.
While curl syntax is fairly simple and universal, different languages and HTTP client libraries represent requests in their own way. Rather than mentally converting from curl to Python's Requests library, Node.js Axios, or PHP cURL, you can let an automated code converter handle the translation.
There are a number of free web tools available for converting curl commands to code:
- Postman has a nice web-based curl importer that spits out code snippets for 21 different languages and frameworks.
- Insomnia is another full-featured API testing tool with an easy curl importing feature.
- Curlconverter.com is a lightweight, curl-focused alternative supporting 8+ common languages.
To use any of these tools, just paste in your curl command and select your target language:
Within seconds, you'll have your curl request translated to working code using popular HTTP libraries like Python's Requests, JS Fetch or Axios, or PHP cURL:
```python
import requests

url = "https://api.example.com/endpoint"

payload = '{"key":"value"}'
headers = {
    'Authorization': 'Bearer token123',
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
From there, it‘s just a matter of refactoring the code into your existing scraping script or API test suite.
Advanced Tips and Techniques
As you incorporate curl and Chrome DevTools into your web scraping workflows, here are a few more advanced tips to keep in mind:
Dealing with Authentication
Many sites and APIs require some form of authentication, such as Bearer tokens, API keys, or session cookies. When you copy a request as curl from Chrome, these auth details are usually included by default.
However, if you're seeing 401 Unauthorized errors when running the curl request, you may need to manually copy auth headers from the DevTools Network tab (Right-click request > Copy > Copy Request Headers) and add them to your curl command with the `-H` flag.
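For instance, re-attaching a bearer token copied from the Request Headers pane might look like this (placeholder endpoint and token):

```shell
# Re-attach the Authorization header copied from DevTools (placeholder token).
curl 'https://api.example.com/endpoint' \
  -H 'Authorization: Bearer token123'
```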
For cookie-based authentication, keep the Preserve Log option enabled in DevTools so the requests carrying your session cookies aren't cleared when the page navigates. You can then pass all cookies to curl using the `--cookie` flag followed by a semicolon-separated list of `name=value` pairs.
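A minimal sketch, with made-up cookie names and values:

```shell
# Pass session cookies along with the request (placeholder names and values).
# --cookie takes a semicolon-separated list of name=value pairs.
curl 'https://api.example.com/endpoint' \
  --cookie 'sessionid=abc123; csrftoken=xyz789'
```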
Handling CORS Errors
If your web app hits APIs on a different domain, you may run into Cross-Origin Resource Sharing (CORS) errors. These are enforced by the browser, not the server; curl itself ignores CORS entirely, which makes it a useful debugging tool here.
Use the curl `--include` flag to dump response headers, which will reveal the server's exact CORS policy (the `Access-Control-Allow-Origin` header and its relatives). You can then either adjust your server-side CORS configuration to allow your domain, or explore alternatives like JSONP or server-side proxying.
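For example, sending an `Origin` header along with `--include` lets you see exactly which `Access-Control-Allow-Origin` value the server returns (placeholder URL and origin):

```shell
# Probe the server's CORS policy by sending an Origin header and dumping
# the response headers with --include (placeholder URL and origin).
curl --include 'https://api.example.com/endpoint' \
  -H 'Origin: https://myapp.example'
```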
Retry and Delay
Data scraping often involves making repeated requests to the same server, which can quickly run afoul of rate limits.
The curl `--retry` flag lets you automatically re-attempt failed requests a set number of times (it only retries transient failures, such as timeouts and 5xx responses). Pair it with `--retry-delay` to add a pause between attempts and avoid overwhelming the server:
```shell
# Retry failed requests 3 times, with a 5 second delay between attempts
curl --retry 3 --retry-delay 5 'https://api.example.com/endpoint'
```
Saving to a File
Use the curl `-o` flag to save the response body to a local file, or `-O` to infer the filename from the URL:
```shell
# Infer filename from URL
curl -O https://example.com/data.csv

# Save to a specific local file path
curl -o data-dump.json https://api.example.com/endpoint
```
Combine this with shell scripting or your scraping logic to incrementally save data for long-running jobs.
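For example, a minimal loop over a hypothetical paginated endpoint, saving each page to its own file with a polite pause between requests:

```shell
# Fetch pages 1-3 of a hypothetical paginated endpoint,
# saving each response to its own numbered file.
for page in 1 2 3; do
  curl -o "page-${page}.json" "https://api.example.com/items?page=${page}"
  sleep 1  # polite pause between requests
done
```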
Putting it All Together
Extracting curl requests from Chrome DevTools is a deceptively simple, but incredibly versatile skill for anyone working with web data.
By leveraging the comprehensive information available in the Network tab, the portability and universality of curl, and the power of code converters and HTTP client libraries, you can approach any website or API with the confidence that you can inspect, extract, and replicate its core behaviors.
Whether you're building a scraper from scratch, or just need to debug a tricky API integration, Chrome and curl make it easy to bridge the gap between browser and server, human and machine.
So next time you're scratching your head trying to untangle a rat's nest of XHRs, form parameters, and wonky response headers, pop open DevTools and liberate your requests with curl. A straightforward, reproducible test case is just a few clicks away.