
How to Use cURL With a Proxy

cURL is a versatile command line tool that is often used together with proxy servers for web scraping and automation tasks. This comprehensive guide explains step-by-step how to configure cURL to work seamlessly with different types of proxy servers.

Introduction to cURL

cURL ships preinstalled on billions of devices and handles an enormous volume of data transfers every day. It ranks among the most popular development tools due to its ubiquity, power, and simplicity.

At its core, cURL allows transferring data using various protocols such as HTTP, HTTPS, FTP, and more. The basic usage is:

curl [options] [URL]

This fetches the content of the provided URL and prints it to the console. The widespread adoption of cURL stems from its flexibility – it can do everything from downloading files to querying APIs to automating logins and web form submission.
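As a quick illustration of that basic shape, the sketch below fetches a local file over cURL's built-in file:// protocol, so it runs without any network access; the temp path is arbitrary:

```shell
# Write a small file, then fetch it with cURL's file:// protocol.
# file:// needs no network, so this demonstrates the basic
# "curl [options] [URL]" shape end to end.
printf 'hello from curl' > /tmp/curl-demo.txt

# -s silences the progress meter; the response body prints to stdout.
curl -s file:///tmp/curl-demo.txt
```

Swap the file:// URL for an http:// or https:// one and the same command shape applies.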

Here are some common use cases of cURL:

  • Web scraping – Extract data from websites.
  • API testing – Send requests and sample responses.
  • Automation – Trigger actions and workflows.
  • File transfer – Upload/download files and attachments.

Now let's understand how we can supercharge cURL by using it with proxies.

Introduction to Proxies

A proxy server acts as an intermediary that sits between your machine and the remote server you want to access. Instead of connecting directly, your requests are first routed through the proxy server which then forwards them to the destination.

(Diagram: your machine → proxy server → destination server)

Using proxies with cURL provides several benefits:

Hide Identity

Proxies allow you to mask your real IP address and appear anonymous while making requests. This is crucial for web scraping to avoid getting blocked.

Bypass Geographic Blocks

Certain websites restrict access based on location. Proxies enable you to route your traffic through a different region to bypass these restrictions.

Improve Performance

Some proxy providers, BrightData among them, offer features like caching to speed up repeated requests and reduce latency. This results in faster scraping and automation.

Rotate IP Addresses

Services like Smartproxy and Soax provide thousands of residential IPs that can be automatically rotated to prevent scraping blocks.

Now that we understand why proxies are useful with cURL, let's see how to configure them.


Prerequisites

To follow this tutorial, you will need:

  • cURL – Install it on your system if not already available. It comes pre-installed on most Linux and macOS distributions. For Windows, you can download and install the executable from the official site.
  • Proxy Server Details – IP address, port, and credentials if authentication is required. You can easily obtain these details from top proxy providers like BrightData, Smartproxy, etc.

Okay, with that out of the way, let's start using proxies with cURL!

Specify Proxy in cURL Command

The most straightforward way to use a proxy with cURL is to provide the proxy details right in the command using the -x or --proxy switch:

curl -x <protocol>://user:password@IP:PORT URL

Here, -x or --proxy accepts the proxy URL containing authentication credentials, IP address, and port number.

For example, to use an authenticated SOCKS5 proxy server on port 8080 (placeholder credentials and IP):

curl -x socks5://user123:password123@IP:8080 https://example.com

By default, the protocol is assumed to be HTTP. You can explicitly specify other protocols, such as the SOCKS5 proxy demonstrated above.

This method is great for quick tests and overriding defaults for one-off requests, but typing the proxy details each time can get cumbersome. Let's look at some better options.
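One detail worth noting with the -x form: if the username or password contains characters that are special in URLs, they must be percent-encoded before being embedded in the proxy URL. A minimal sketch, using hypothetical credentials and this guide's IP:PORT placeholders:

```shell
# Build a proxy URL from credentials (placeholders, not real endpoints).
# Special characters in the password must be percent-encoded: here "p@ss"
# becomes "p%40ss", because a raw @ would end the userinfo part of the URL.
PROXY_USER="user123"
PROXY_PASS="p%40ss"
PROXY_URL="http://${PROXY_USER}:${PROXY_PASS}@IP:PORT"
echo "$PROXY_URL"

# The real request would then be:
# curl -x "$PROXY_URL" https://example.com
```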

Configure Environment Variables for Proxy

For frequent use, you can set the http_proxy and https_proxy environment variables, which apply to your current shell session and any processes launched from it:

On Linux/macOS:

export http_proxy="http://IP:PORT"
export https_proxy="http://IP:PORT" 

On Windows (Command Prompt):

set http_proxy=http://IP:PORT
set https_proxy=http://IP:PORT

Once set, cURL will automatically use the defined proxies when making HTTP or HTTPS requests respectively.

To disable the proxy, simply unset the variables:

On Linux/macOS:

unset http_proxy
unset https_proxy

On Windows (Command Prompt):

set http_proxy=
set https_proxy=

This approach allows you to seamlessly integrate proxies into your cURL scraping workflows.
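A sketch of that workflow in a single session (IP:PORT is a placeholder endpoint):

```shell
# Enable the proxy for this shell session (placeholder endpoint).
export http_proxy="http://IP:PORT"
export https_proxy="http://IP:PORT"

# Any HTTP(S) request made now would go through the proxy, e.g.:
# curl https://example.com

# Disable it again when finished.
unset http_proxy https_proxy
echo "http_proxy is now: '${http_proxy:-unset}'"
```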

Create a cURL Config File

Sometimes you may need a proxy only for cURL and not system-wide. In such cases, create a config file that cURL checks on every run.

On Linux/macOS:

Add the proxy details in a .curlrc file within your user's home directory:

proxy = http://IP:PORT

On Windows:

Create a file named _curlrc in %APPDATA% (typically C:\Users\Username\AppData\Roaming) with:

proxy = http://IP:PORT  

Now cURL will automatically use this proxy for all requests until explicitly overridden.
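A sketch of creating such a config file; to avoid touching a real ~/.curlrc, this writes to a temp path and uses cURL's -K (--config) option, which points cURL at any config file:

```shell
# Write a cURL config file with a placeholder proxy entry.
cat > /tmp/demo-curlrc <<'EOF'
proxy = "http://IP:PORT"
EOF

# A real run would then be:
# curl -K /tmp/demo-curlrc https://example.com
cat /tmp/demo-curlrc
```

Writing the same line to ~/.curlrc (or %APPDATA%\_curlrc on Windows) makes it the default without needing -K.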

Bypass Proxy for Specific Requests

If you have a default proxy configured through environment variables or a config file, you can bypass it for specific requests:

curl --noproxy "*" https://example.com

The --noproxy option disables proxy for that command. You can also override with a different proxy:

curl --proxy http://IP:PORT https://example.com

These techniques allow fine-grained control over your proxy usage with cURL.
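For env-configured proxies, the no_proxy environment variable offers the same per-host bypass; a sketch with placeholder hosts:

```shell
# Hosts listed in no_proxy skip the configured proxy entirely.
# A leading dot matches a domain and all of its subdomains.
export http_proxy="http://IP:PORT"
export no_proxy="localhost,127.0.0.1,.internal.example"

# curl http://db.internal.example/status   # would connect directly
# curl https://example.com                 # would go through the proxy
echo "bypassing proxy for: $no_proxy"
unset http_proxy no_proxy
```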

Common Proxy Examples

Here are some common examples for using various proxies with cURL:

HTTP Proxy

curl -x http://IP:PORT https://example.com

Authenticating HTTP Proxy

curl -x http://user:pass@IP:PORT https://example.com


HTTPS Proxy

curl -x https://IP:PORT --proxy-insecure https://example.com

Use --proxy-insecure to ignore certificate errors from the proxy itself, and -k/--insecure for certificate errors from the target site.

SOCKS5 Proxy

curl -x socks5://IP:PORT https://example.com

Authenticated SOCKS5

curl --socks5 IP:PORT --proxy-user user:pass https://example.com

As you can see, cURL makes it straightforward to use any type of proxy.

Next, let's go over some best practices.

Proxy Best Practices

To leverage proxies effectively with cURL for web scraping, keep these tips in mind:

  • Use anonymous residential proxies as they are less likely to get blocked compared to datacenter IPs. Services like Smartproxy offer unlimited residential proxies ideal for scraping.
  • Implement proxy rotation to periodically change IPs and avoid consecutive blocks. Tools like StickyStatic integrate seamlessly with cURL for automated rotating proxies.
  • For complete anonymity, use Tor. Run the Tor daemon on your system and route cURL through its SOCKS port, typically curl -x socks5h://127.0.0.1:9050 URL (the socks5h scheme resolves DNS through the proxy as well).
  • Handle proxy failures gracefully by retrying with a fresh IP to maintain continuity of long-running scraping workflows.
  • Start with a few requests per minute and slowly ramp up the rate to avoid triggering rate limits. Monitor for any blocking and adjust speed accordingly.

Adopting these best practices will result in smooth and stable scraping with cURL and proxies.
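The rotation and retry practices above can be sketched as a simple loop; the proxy endpoints below are hypothetical placeholders for addresses from your provider:

```shell
# Rotate through a list of (hypothetical) proxies, one per request.
PROXIES="http://p1.invalid:8080 http://p2.invalid:8080 http://p3.invalid:8080"

count=0
for proxy in $PROXIES; do
  count=$((count + 1))
  echo "request $count via $proxy"
  # Real call (placeholder URL); --max-time bounds a stuck proxy, and a
  # failure here is the cue to retry with the next proxy in the list:
  # curl -s -x "$proxy" --max-time 15 https://example.com || continue
  sleep 0   # in real scraping, pause here to stay under rate limits
done
echo "issued $count requests"
```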

Advanced Examples

Beyond basic usage, cURL supports many advanced features that can be combined with proxies:

Submit Web Forms

curl -x IP:PORT -d "param1=value1&param2=value2" -X POST https://example.com

File Upload

curl -x IP:PORT -F "file=@/path/to/file.txt" https://example.com

Follow Redirects

curl -x IP:PORT -L https://example.com

Add Headers

curl -x IP:PORT -H "User-Agent: Mozilla/5.0" https://example.com

Authenticate Requests

curl -x IP:PORT -u username:password https://example.com

Cookie Persistence

curl -x IP:PORT -b cookies.txt -c cookies.txt https://example.com

Here -b sends cookies from the file with the request, while -c saves any received cookies back to it.

These examples demonstrate the versatility of cURL for advanced use cases involving proxies.
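These switches compose freely. The sketch below assembles one combined command as a string (placeholder proxy endpoint and target URL) so the pieces are visible at a glance:

```shell
# One command combining proxy, custom header, redirect following, and
# cookie persistence (placeholder proxy endpoint and target URL).
CMD='curl -x http://IP:PORT -H "User-Agent: Mozilla/5.0" -L -b cookies.txt -c cookies.txt https://example.com'
echo "$CMD"
# Running it would follow redirects via the proxy while persisting cookies.
```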

Troubleshooting Common Issues

When using cURL with proxies, you may encounter certain errors like:

Proxy connection failures – Use verbose mode with -v to pinpoint the issue. Verify your proxy IP, port, and credentials. Consider switching to a different proxy server.

SSL/certificate errors – Use --insecure (or its short form -k; both are the same option) to skip certificate verification and proceed despite the errors. Avoid this for sensitive traffic, since it disables a security check.

HTTP errors like 403 or 503 – Your IP may be blocked or rate limited. Rotate to a new proxy IP to resolve. Slow down your requests and monitor for further blocks.

Authentication failures – Double-check your proxy username and password. If they contain special characters, percent-encode them before embedding them in the proxy URL (for example, @ becomes %40).

Generic connection issues – Temporarily disable your antivirus or firewall to rule out interference. Use a tool like ping to check basic connectivity to the proxy host.

Learning to effectively troubleshoot using the -v option and above techniques will help resolve most proxy-related issues with cURL.


Conclusion

This guide covered step-by-step how to use proxies with cURL for both basic and advanced use cases. The key takeaways are:

  • Use -x or --proxy option to directly specify proxy in the command.
  • Set http_proxy and https_proxy environment variables for system-wide defaults.
  • Create .curlrc/_curlrc config files for cURL-only proxies.
  • Bypass proxies with --noproxy or override with new ones.
  • Authenticate requests and handle errors/blocks gracefully.
  • Employ best practices like proxy rotation to avoid blocks.

The powerful combination of cURL and proxies will supercharge your web scraping and automation capabilities while avoiding headaches like blocks and captchas.

Hopefully these tips will help you seamlessly integrate proxies into your cURL workflows. Scrape safely!

