When you make HTTP requests from a C# application using the built-in `HttpClient` class, the remote server can see your IP address. This isn't ideal if you want to make anonymous requests, access geo-restricted content, or do large-scale web scraping without getting your IP blocked.
The solution is to route your requests through a proxy server that acts as an intermediary between you and the target website. The site will only see the IP of the proxy, allowing you to hide your real identity and location.
In this in-depth tutorial, I'll show you exactly how to use proxies with `HttpClient` in C# to anonymize your web requests. Whether you need to make one-off anonymous calls or perform high-volume web scraping, you'll learn everything you need to know, including:
- How to set up an unauthenticated HTTP/HTTPS proxy
- Using a proxy that requires a username and password
- Rotating proxies to distribute requests and avoid IP bans
- Taking advantage of proxy services to simplify your code
I'll walk you through detailed code samples for each use case so you can quickly get started with proxies in your own projects. Let's dive in!
Making a Basic Request with HttpClient
Before we get into using proxies, let's review how to make a standard request with `HttpClient`. Here's a simple example:
```csharp
using System.Net.Http;

var client = new HttpClient();
var response = await client.GetStringAsync("https://api.myip.com");
Console.WriteLine(response);
```
This code makes a GET request to `api.myip.com` to retrieve your current IP address. When I run it, I get back something like:
{"ip":"97.113.25.183","country":"United States","cc":"US"}
While this works, it exposes your real IP to the API server. Let's see how to funnel the request through a proxy to hide that info.
Using an Unauthenticated Proxy with HttpClient
The easiest way to use a proxy with `HttpClient` is with the `WebProxy` and `HttpClientHandler` classes:
```csharp
using System.Net;
using System.Net.Http;

var proxy = new WebProxy
{
    Address = new Uri("http://54.196.79.89:80"),
    UseDefaultCredentials = false
};

var handler = new HttpClientHandler { Proxy = proxy };
var client = new HttpClient(handler);

var response = await client.GetStringAsync("https://api.myip.com");
Console.WriteLine(response);
```
Let's break this down:
- We create a new `WebProxy` instance, passing in the URL of our proxy server (I got this from a free proxy list)
- Wrap the proxy in an `HttpClientHandler` object
- Pass the handler to `HttpClient`'s constructor
- Make the request as before
Now the JSON result shows the IP of the proxy, not our real address:
{"ip":"54.196.79.89","country":"United States","cc":"US"}
Just like that, we've made an anonymous, proxied request with `HttpClient`! This opens up a lot of possibilities for privacy, accessing location-restricted data, and web scraping without worrying about your IP getting banned.
There are a couple of things to keep in mind:
- You should use a new `HttpClient` instance for each new proxy. Reusing clients across proxies can lead to strange bugs.
- Some proxies may not support SSL/HTTPS requests. You can disable certificate validation (a sketch follows below), but this is insecure and not recommended for anything sensitive.
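In case you ever need it for local testing, here's a minimal sketch of disabling certificate validation using the built-in `DangerousAcceptAnyServerCertificateValidator` helper. The proxy address is the same sample one used above; as the name suggests, don't ship this in anything that handles sensitive data:

```csharp
using System.Net;
using System.Net.Http;

var handler = new HttpClientHandler
{
    Proxy = new WebProxy("http://54.196.79.89:80"),
    // WARNING: accepts any server certificate. Only for local testing
    // against proxies you control -- it defeats TLS entirely.
    ServerCertificateCustomValidationCallback =
        HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
};

var client = new HttpClient(handler);
```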
Authenticating with a Username and Password
Some proxy servers require authentication in the form of a username and password. Supporting this is simple with `WebProxy`:
```csharp
using System.Net;

var proxy = new WebProxy
{
    Address = new Uri("http://example.com:80"),
    UseDefaultCredentials = false,
    Credentials = new NetworkCredential("username", "password")
};
```
We use the `NetworkCredential` class to pass the required username and password to the proxy. The rest of the code remains the same as the previous example.
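Putting it all together, the full flow through an authenticated proxy looks like this; the host, username, and password are placeholder values you'd swap for your provider's details:

```csharp
using System.Net;
using System.Net.Http;

var proxy = new WebProxy
{
    Address = new Uri("http://example.com:80"),  // placeholder proxy host
    UseDefaultCredentials = false,
    Credentials = new NetworkCredential("username", "password")
};

var handler = new HttpClientHandler { Proxy = proxy };
var client = new HttpClient(handler);

// The handler sends the credentials automatically when the proxy
// responds with 407 Proxy Authentication Required.
var response = await client.GetStringAsync("https://api.myip.com");
Console.WriteLine(response);
```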
Rotating Multiple Proxies
To avoid getting rate-limited or banned when scraping a website, it's a good idea to spread your requests across multiple proxy servers. Here's a helper function that chooses a random proxy from a list:
```csharp
private static readonly Random Rng = new Random();

// Picks a random proxy and wraps it in a handler (requires System.Linq for
// ElementAt/Count). Note the generic IEnumerable<string>: the non-generic
// IEnumerable won't compile here. A single shared Random avoids the duplicate
// picks you can get from new Random() instances created in quick succession.
private static HttpClientHandler GetRandomProxyHandler(IEnumerable<string> proxyUrls)
{
    var proxy = new WebProxy
    {
        Address = new Uri(proxyUrls.ElementAt(Rng.Next(proxyUrls.Count()))),
        UseDefaultCredentials = false
    };

    return new HttpClientHandler { Proxy = proxy };
}
```
You can call this method with a list of proxy URLs:
```csharp
var proxyUrls = new string[]
{
    "http://99.22.184.36:5836",
    "http://19.84.121.17:53",
    "http://94.170.13.114:8080"
};

var handler = GetRandomProxyHandler(proxyUrls);
var client = new HttpClient(handler);

// Make request with randomly selected proxy
```
This technique of cycling through proxies helps distribute your request load and avoids triggering anti-bot measures.
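If you'd rather guarantee an even spread instead of relying on randomness, a simple round-robin rotator works too. This `ProxyRotator` class is a minimal sketch of my own (not from any library), made thread-safe with `Interlocked.Increment`:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;

public class ProxyRotator
{
    private readonly string[] _proxyUrls;
    private int _index = -1;

    public ProxyRotator(string[] proxyUrls) => _proxyUrls = proxyUrls;

    // Interlocked.Increment makes this safe to call from concurrent requests.
    public HttpClientHandler NextHandler()
    {
        var i = Interlocked.Increment(ref _index);
        var proxy = new WebProxy(_proxyUrls[i % _proxyUrls.Length])
        {
            UseDefaultCredentials = false
        };
        return new HttpClientHandler { Proxy = proxy };
    }
}
```

Each call to `NextHandler()` returns the next proxy in order, wrapping back to the start of the list, so every proxy receives an even share of your traffic.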
Simplifying Proxy Management with ScrapingBee
Managing your own pool of proxy servers can be a hassle. You're responsible for finding quality, reliable proxies, testing them, and load balancing your requests. This takes a lot of time and still doesn't guarantee success.
That's where proxy services like ScrapingBee come in. They handle all the proxy management and rotation on their end so you can focus on your actual application logic.
Making an authenticated request through ScrapingBee is dead simple:
var proxy = new WebProxy("http://proxy.scrapingbee.com:8886") {
Credentials = new NetworkCredential("YOUR_API_KEY", "render_js=False&premium_proxy=True")
};
var handler = new HttpClientHandler() { Proxy = proxy };
var client = new HttpClient(handler);
var response = await client.GetStringAsync("https://api.myip.com");
Console.WriteLine(response);
Rather than connecting directly to third-party proxies, you send all requests through ScrapingBee's endpoint. They automatically route you through their proxy pool so you get a different IP on each request with no additional work.
You authenticate by passing your API key as the proxy username and set config options via the password field. Here we're disabling JavaScript rendering and using premium (non-shared) proxies for the best performance and reliability.
While you can make 1,000 free requests per month, the real value of ScrapingBee is their premium plans that offer high limits, low latency, and unmatched success rates. It's the fastest way to scale up your web scraping projects.
Additional Tips and Libraries for Web Scraping with C#
Proxies are a key part of any professional web scraping pipeline, but there are a few other things to keep in mind:
- Always verify the proxy IP with a service like `api.myip.com` to make sure it's working
- Prefer HTTPS proxies when possible for better security and compatibility
- Set a request timeout so hanging proxies don't stall your code (see the sketch after this list)
- Handle common exceptions like `WebException` and `HttpRequestException`
- Use delays between requests to avoid overwhelming sites
- Respect `robots.txt` files and only scrape content you have permission for
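To make the timeout, exception, and delay advice concrete, here's a minimal sketch of a polite request loop. It reuses a proxy `handler` from earlier, and the URLs and delay length are purely illustrative:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

var client = new HttpClient(handler)
{
    // Fail fast instead of hanging on a dead proxy indefinitely.
    Timeout = TimeSpan.FromSeconds(10)
};

foreach (var url in new[] { "https://example.com/a", "https://example.com/b" })
{
    try
    {
        var body = await client.GetStringAsync(url);
        Console.WriteLine($"{url}: {body.Length} bytes");
    }
    catch (HttpRequestException ex)
    {
        Console.WriteLine($"{url} failed: {ex.Message}");
    }
    catch (TaskCanceledException)
    {
        // HttpClient reports timeouts as TaskCanceledException.
        Console.WriteLine($"{url} timed out");
    }

    // Be polite: pause between requests.
    await Task.Delay(TimeSpan.FromSeconds(2));
}
```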
When it comes to parsing data from websites, HTML Agility Pack is the most popular library for traversing and extracting structured data from raw HTML. It has an easy-to-use API for finding elements with XPath, and CSS selector support is available through extensions like Fizzler.
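As a quick taste, here's a minimal sketch that extracts every link from a page with HTML Agility Pack (assuming the `HtmlAgilityPack` NuGet package is installed and `client` is the proxied client from earlier):

```csharp
using System;
using System.Linq;
using HtmlAgilityPack;

var html = await client.GetStringAsync("https://example.com");

var doc = new HtmlDocument();
doc.LoadHtml(html);

// SelectNodes returns null (not an empty collection) when nothing matches.
var links = doc.DocumentNode.SelectNodes("//a[@href]");
foreach (var link in links ?? Enumerable.Empty<HtmlNode>())
{
    Console.WriteLine(link.GetAttributeValue("href", ""));
}
```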
For more complex scraping jobs involving JavaScript-rendered content or handling login flows, you may want to use a headless browser framework like Puppeteer Sharp. It allows you to automate a real Chrome browser and unlocks more advanced scraping capabilities.
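Here's a bare-bones sketch of fetching JavaScript-rendered HTML with Puppeteer Sharp (assuming a recent version of the `PuppeteerSharp` NuGet package, where the browser and page support `await using`):

```csharp
using System;
using PuppeteerSharp;

// Download a compatible headless Chromium build on first run.
await new BrowserFetcher().DownloadAsync();

await using var browser = await Puppeteer.LaunchAsync(new LaunchOptions { Headless = true });
await using var page = await browser.NewPageAsync();

await page.GoToAsync("https://example.com");

// Returns the fully rendered HTML, after JavaScript has run.
var renderedHtml = await page.GetContentAsync();
Console.WriteLine(renderedHtml.Length);
```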
Final Thoughts
You should now have a solid foundation for anonymous requests and web scraping with C# and `HttpClient`! Whether you manage your own proxies or use a service like ScrapingBee, you've seen how to:
- Route your requests through HTTP/HTTPS proxies
- Authenticate with proxies that require a username and password
- Rotate proxies to improve performance and reliability
- Integrate powerful proxy networks into your code with just a few lines
While this article focused on `HttpClient`, the same concepts apply to other popular C# HTTP libraries like RestSharp, Flurl, and more. The ability to tunnel requests through proxies is vital for any kind of anonymous browsing, data collection, or web scraping.
Be sure to combine proxies with responsible scraping practices to get the best results. If you have any other questions or just want to chat about web scraping, feel free to reach out!
For even more C# proxy goodness, check out these handy guides: