
Controlling Remote Chrome Instances with C# and the Chrome DevTools Protocol

If you’ve ever needed to programmatically interact with a Chrome browser running on a remote server—whether for web scraping, automated testing, or debugging—you’ve probably discovered that it’s not as straightforward as it might seem. In this post, I’ll walk you through how to connect to a remote Chrome instance using C# and the Chrome DevTools Protocol (CDP), with a practical example of retrieving all cookies, including those pesky HttpOnly cookies that JavaScript can’t touch.

Why Remote Chrome Control?

There are several scenarios where controlling a remote Chrome instance becomes invaluable:

  • Server-side web scraping where you need JavaScript rendering but want to keep your scraping infrastructure separate from your application servers
  • Cross-platform testing where you’re developing on Windows but testing on Linux environments
  • Distributed automation where multiple test runners need to interact with centralized browser instances
  • Debugging production issues where you need to inspect cookies, local storage, or network traffic on a live system

The Chrome DevTools Protocol gives us low-level access to everything Chrome can do—and I mean everything. Unlike browser automation tools that work through the DOM, CDP operates at the browser level, giving you access to cookies (including HttpOnly), network traffic, performance metrics, and much more.

The Challenge: Making Chrome Accessible Remotely

Chrome’s remote debugging feature is powerful, but getting it to work remotely involves some Linux networking quirks that aren’t immediately obvious. Let me break down the problem and solution.

The Problem

When you launch Chrome with the --remote-debugging-port flag, even if you also pass --remote-debugging-address=0.0.0.0, Chrome often ignores the address and binds only to 127.0.0.1 (localhost). That means you can't connect to it from another machine.

You can verify this by checking what Chrome is actually listening on:

netstat -tlnp | grep 9222
tcp        0      0 127.0.0.1:9222          0.0.0.0:*               LISTEN      1891/chrome

See that 127.0.0.1? That’s the problem. It should be 0.0.0.0 to accept connections from any interface.

The Solution: socat to the Rescue

The elegant solution is to use socat (SOcket CAT) to proxy connections. We run Chrome on one port (localhost only), and use socat to forward a public-facing port to Chrome’s localhost port.

Here’s the setup on your Linux server:

# Start Chrome on localhost:9223
google-chrome \
  --headless=new \
  --no-sandbox \
  --disable-gpu \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug &

# Use socat to proxy external 9222 to internal 9223
socat TCP-LISTEN:9222,fork,bind=0.0.0.0,reuseaddr TCP:127.0.0.1:9223 &

Now verify it’s working:

netstat -tlnp | grep 9222
tcp        0      0 0.0.0.0:9222            0.0.0.0:*               LISTEN      2103/socat

netstat -tlnp | grep 9223
tcp        0      0 127.0.0.1:9223          0.0.0.0:*               LISTEN      2098/chrome

Perfect! Chrome itself still listens only on localhost, while socat provides the public-facing interface—one you can restrict to specific interfaces or tear down without touching Chrome at all.
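If socat isn't available, the same relay can be sketched with Python's asyncio (an illustrative stand-in for socat, not a hardened proxy; the ports and hosts below are the ones from this setup):

```python
import asyncio

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    # Copy bytes one way until the peer closes, then close our side.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def start_forwarder(listen_port: int, target_host: str, target_port: int):
    # Accept connections on all interfaces (like socat's bind=0.0.0.0) and
    # relay each one to Chrome's localhost-only debugging port.
    async def handle(client_reader, client_writer):
        upstream_reader, upstream_writer = await asyncio.open_connection(
            target_host, target_port)
        await asyncio.gather(
            pipe(client_reader, upstream_writer),
            pipe(upstream_reader, client_writer),
        )

    return await asyncio.start_server(handle, "0.0.0.0", listen_port)

async def main():
    server = await start_forwarder(9222, "127.0.0.1", 9223)
    async with server:
        await server.serve_forever()

# To run: asyncio.run(main())
```

socat is still the better tool on a server (battle-tested, one line), but the sketch makes it clear there's no magic involved: just a bidirectional byte relay.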

Understanding the Chrome DevTools Protocol

Before we dive into code, let’s understand how CDP works. When Chrome runs with remote debugging enabled, it exposes two types of endpoints:

1. HTTP Endpoints (for discovery)

# Get browser version and WebSocket URL
curl http://your-server:9222/json/version

# Get list of all open pages/targets
curl http://your-server:9222/json

The /json/version endpoint returns something like:

{
   "Browser": "Chrome/143.0.7499.169",
   "Protocol-Version": "1.3",
   "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36...",
   "V8-Version": "14.3.127.17",
   "WebKit-Version": "537.36...",
   "webSocketDebuggerUrl": "ws://your-server:9222/devtools/browser/14706e92-5202-4651-aa97-a72d683bf88e"
}

2. WebSocket Endpoint (for control)

The webSocketDebuggerUrl is what we use to actually control Chrome. All CDP commands flow through this WebSocket connection using a JSON-RPC-like protocol.
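To make "JSON-RPC-like" concrete, here's a Python sketch of the messages that flow over that WebSocket: each command carries an id, a method such as Network.getAllCookies, and optional params, and Chrome's response echoes the same id so the client can match them up (message construction only—no live connection is made here):

```python
import itertools
import json

_ids = itertools.count(1)

def cdp_command(method, **params):
    """Build a CDP command envelope as it is sent over the WebSocket."""
    return json.dumps({"id": next(_ids), "method": method, "params": params})

def match_response(command_json, response_json):
    """A response belongs to a command when the ids match."""
    return json.loads(command_json)["id"] == json.loads(response_json)["id"]

cmd = cdp_command("Network.getAllCookies")
print(cmd)  # {"id": 1, "method": "Network.getAllCookies", "params": {}}

# A (truncated) response Chrome might send back for that command:
resp = '{"id": 1, "result": {"cookies": [{"name": "session_id", "httpOnly": true}]}}'
print(match_response(cmd, resp))  # True
```

This id-matching bookkeeping is exactly the plumbing PuppeteerSharp handles for you, which is why the next section reaches for it instead of raw WebSockets.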

Enter PuppeteerSharp

While you could manually handle WebSocket connections and craft CDP commands by hand (and I’ve done that with libraries like MasterDevs.ChromeDevTools), there’s an easier way: PuppeteerSharp.

PuppeteerSharp is a .NET port of Google’s Puppeteer library, providing a high-level API over CDP. The beauty is that it handles all the WebSocket plumbing, message routing, and protocol intricacies for you.

Here’s our complete C# application:

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using PuppeteerSharp;

namespace ChromeRemoteDebugDemo
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // Configuration
            string remoteDebugHost = "xxxx.xxx.xxx.xxx";
            int remoteDebugPort = 9222;
            
            Console.WriteLine("=== Chrome Remote Debug - Cookie Retrieval Demo ===\n");
            
            try
            {
                // Step 1: Get the WebSocket URL from Chrome
                Console.WriteLine($"Connecting to http://{remoteDebugHost}:{remoteDebugPort}/json/version");
                
                using var httpClient = new HttpClient();
                string versionUrl = $"http://{remoteDebugHost}:{remoteDebugPort}/json/version";
                string jsonResponse = await httpClient.GetStringAsync(versionUrl);
                
                // Parse JSON to get webSocketDebuggerUrl
                using JsonDocument doc = JsonDocument.Parse(jsonResponse);
                JsonElement root = doc.RootElement;
                string webSocketUrl = root.GetProperty("webSocketDebuggerUrl").GetString();
                
                Console.WriteLine($"WebSocket URL: {webSocketUrl}\n");
                
                // Step 2: Connect to Chrome using PuppeteerSharp
                Console.WriteLine("Connecting to Chrome via WebSocket...");
                
                var connectOptions = new ConnectOptions
                {
                    BrowserWSEndpoint = webSocketUrl
                };
                
                var browser = await Puppeteer.ConnectAsync(connectOptions);
                Console.WriteLine("Successfully connected!\n");
                
                // Step 3: Get or create a page
                var pages = await browser.PagesAsync();
                IPage page;
                
                if (pages.Length > 0)
                {
                    page = pages[0];
                    Console.WriteLine($"Using existing page: {page.Url}");
                }
                else
                {
                    page = await browser.NewPageAsync();
                    await page.GoToAsync("https://example.com");
                }
                
                // Step 4: Get ALL cookies (including HttpOnly!)
                Console.WriteLine("\nRetrieving all cookies...\n");
                var cookies = await page.GetCookiesAsync();
                
                Console.WriteLine($"Found {cookies.Length} cookie(s):\n");
                
                foreach (var cookie in cookies)
                {
                    Console.WriteLine($"Name:     {cookie.Name}");
                    Console.WriteLine($"Value:    {cookie.Value}");
                    Console.WriteLine($"Domain:   {cookie.Domain}");
                    Console.WriteLine($"Path:     {cookie.Path}");
                    Console.WriteLine($"Secure:   {cookie.Secure}");
                    Console.WriteLine($"HttpOnly: {cookie.HttpOnly}");  // ← This is the magic!
                    Console.WriteLine($"SameSite: {cookie.SameSite}");
                    Console.WriteLine($"Expires:  {(cookie.Expires == -1 ? "Session" : DateTimeOffset.FromUnixTimeSeconds((long)cookie.Expires).ToString())}");
                    Console.WriteLine(new string('-', 80));
                }
                
                await browser.DisconnectAsync();
                Console.WriteLine("\nDisconnected successfully.");
                
            }
            catch (Exception ex)
            {
                Console.WriteLine($"\n❌ ERROR: {ex.Message}");
            }
        }
    }
}
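One detail worth calling out from the cookie loop above: CDP reports expiry as a Unix timestamp and uses -1 for session cookies. The Expires formatting boils down to this conversion (a Python sketch of the same logic):

```python
from datetime import datetime, timezone

def format_expires(expires):
    """CDP uses -1 for session cookies; otherwise a Unix timestamp."""
    if expires == -1:
        return "Session"
    return datetime.fromtimestamp(expires, tz=timezone.utc).strftime(
        "%Y-%m-%d %H:%M:%S")

print(format_expires(-1))  # Session
print(format_expires(1798281045))
```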

The Key Insight: HttpOnly Cookies

Here’s what makes this approach powerful: page.GetCookiesAsync() returns ALL cookies, including HttpOnly ones.

In a normal web page, JavaScript cannot access HttpOnly cookies—that’s the whole point of the HttpOnly flag. It’s a security feature that prevents XSS attacks from stealing session tokens. But when you’re operating at the CDP level, you’re not bound by JavaScript’s restrictions. You’re talking directly to Chrome’s internals.

This is incredibly useful for:

  • Session management in automation: You can extract session cookies from one browser session and inject them into another
  • Security testing: Verify that sensitive cookies are properly marked HttpOnly
  • Debugging authentication issues: See exactly what cookies are being set by your backend
  • Web scraping: Maintain authenticated sessions across multiple scraper instances

Setting Up the Project

Create a new console application:

dotnet new console -n ChromeRemoteDebugDemo
cd ChromeRemoteDebugDemo

Add PuppeteerSharp:

dotnet add package PuppeteerSharp

Your .csproj should look like:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="PuppeteerSharp" Version="20.2.4" />
  </ItemGroup>
</Project>

Running the Demo

On your Linux server:

# Install socat if needed
apt-get install socat -y

# Start Chrome on internal port 9223
google-chrome \
  --headless=new \
  --no-sandbox \
  --disable-gpu \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug &

# Proxy external 9222 to internal 9223
socat TCP-LISTEN:9222,fork,bind=0.0.0.0,reuseaddr TCP:127.0.0.1:9223 &

On your Windows development machine:

dotnet run

You should see output like:

=== Chrome Remote Debug - Cookie Retrieval Demo ===

Connecting to http://xxxx.xxx.xxx.xxx:9222/json/version
WebSocket URL: ws://xxxx.xxx.xxx.xxx:9222/devtools/browser/14706e92-5202-4651-aa97-a72d683bf88e

Connecting to Chrome via WebSocket...
Successfully connected!

Using existing page: https://example.com

Retrieving all cookies...

Found 2 cookie(s):

Name:     _ga
Value:    GA1.2.123456789.1234567890
Domain:   .example.com
Path:     /
Secure:   True
HttpOnly: False
SameSite: Lax
Expires:  2026-12-27 10:30:45
--------------------------------------------------------------------------------
Name:     session_id
Value:    abc123xyz456
Domain:   example.com
Path:     /
Secure:   True
HttpOnly: True  ← Notice this!
SameSite: Strict
Expires:  Session
--------------------------------------------------------------------------------

Disconnected successfully.

Security Considerations

Before you deploy this in production, consider these security implications:

1. Firewall Configuration

Only expose port 9222 to trusted networks. If you’re running this on a cloud server:

# Allow only your specific IP
sudo ufw allow from YOUR.IP.ADDRESS to any port 9222

Or better yet, use an SSH tunnel and don’t expose the port at all:

# On Windows, create a tunnel
ssh -N -L 9222:localhost:9222 user@remote-server

# Then connect to localhost:9222 in your code

2. Authentication

The Chrome DevTools Protocol has no built-in authentication. Anyone who can connect to the debugging port has complete control over Chrome. This includes:

  • Reading all page content
  • Executing arbitrary JavaScript
  • Accessing all cookies (as we’ve demonstrated)
  • Intercepting and modifying network requests

In production, you should:

  • Use SSH tunnels instead of exposing the port
  • Run Chrome in a sandboxed environment
  • Use short-lived debugging sessions
  • Monitor for unauthorized connections

3. Resource Limits

A runaway Chrome instance can consume significant resources. Note that --max-old-space-size is a V8 flag, not a Chrome switch, so it has to be passed through --js-flags:

# Limit V8's heap size (applies per renderer process)
google-chrome --headless=new \
  --js-flags=--max-old-space-size=512 \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug

Beyond Cookies: What Else Can You Do?

The Chrome DevTools Protocol is incredibly powerful. Here are some other things you can do with this same setup:

Take Screenshots

await page.ScreenshotAsync("/path/to/screenshot.png");

Monitor Network Traffic

await page.SetRequestInterceptionAsync(true);
page.Request += async (sender, e) =>
{
    Console.WriteLine($"Request: {e.Request.Url}");
    await e.Request.ContinueAsync();  // every intercepted request must be continued (or aborted)
};

Execute JavaScript

var title = await page.EvaluateExpressionAsync<string>("document.title");

Modify Cookies

await page.SetCookieAsync(new CookieParam
{
    Name = "test",
    Value = "123",
    Domain = "example.com",
    HttpOnly = true,  // Can set HttpOnly from CDP!
    Secure = true
});

Emulate Mobile Devices

await page.EmulateAsync(new DeviceDescriptor
{
    ViewPort = new ViewPortOptions { Width = 375, Height = 667 },
    UserAgent = "Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)"
});

Comparing Approaches

You might be wondering how this compares to other approaches:

PuppeteerSharp vs. Selenium

Selenium uses the WebDriver protocol, which is a W3C standard but higher-level and more abstracted. PuppeteerSharp/CDP gives you lower-level access to Chrome specifically.

  • Selenium: Better for cross-browser testing, more stable API
  • PuppeteerSharp: More powerful Chrome-specific features, faster, lighter weight

PuppeteerSharp vs. Raw CDP Libraries

You could use libraries like MasterDevs.ChromeDevTools or ChromeProtocol for more direct CDP access:

// With MasterDevs.ChromeDevTools
var session = new ChromeSession(webSocketUrl);
var cookies = await session.SendAsync(new GetCookiesCommand());

Low-level CDP libraries:

  • Pros: More control, can use experimental CDP features
  • Cons: More verbose, have to handle protocol details

PuppeteerSharp:

  • Pros: High-level API, actively maintained, comprehensive documentation
  • Cons: Abstracts away some CDP features

For most use cases, PuppeteerSharp hits the sweet spot between power and ease of use.

Troubleshooting Common Issues

“Could not connect to Chrome debugging endpoint”

Check firewall:

sudo ufw status
sudo iptables -L -n | grep 9222

Verify Chrome is running:

ps aux | grep chrome
netstat -tlnp | grep 9222

Test locally first:

curl http://localhost:9222/json/version

“No cookies found”

This is normal if the page hasn’t set any cookies. Navigate to a site that does:

await page.GoToAsync("https://github.com");
var cookies = await page.GetCookiesAsync();

Chrome crashes or hangs

Add more stability flags:

google-chrome \
  --headless=new \
  --no-sandbox \
  --disable-gpu \
  --disable-dev-shm-usage \
  --disable-setuid-sandbox \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug

Real-World Use Case: Session Management

Here’s a practical example of how I’ve used this in production—managing authenticated sessions for web scraping:

public class SessionManager
{
    // ConnectToRemoteChrome() wraps the /json/version discovery and
    // Puppeteer.ConnectAsync steps shown earlier in this post.
    public async Task<CookieParam[]> LoginAndGetSession(string username, string password)
    {
        var browser = await ConnectToRemoteChrome();
        var page = await browser.NewPageAsync();
        
        // Perform login
        await page.GoToAsync("https://example.com/login");
        await page.TypeAsync("#username", username);
        await page.TypeAsync("#password", password);
        await page.ClickAsync("#login-button");
        await page.WaitForNavigationAsync();
        
        // Extract all cookies (including HttpOnly session tokens!)
        var cookies = await page.GetCookiesAsync();
        
        await browser.DisconnectAsync();
        
        // Store these cookies for later use
        return cookies;
    }
    
    public async Task ReuseSession(CookieParam[] cookies)
    {
        var browser = await ConnectToRemoteChrome();
        var page = await browser.NewPageAsync();
        
        // Inject the saved cookies
        await page.SetCookieAsync(cookies);
        
        // Now you're authenticated!
        await page.GoToAsync("https://example.com/dashboard");
        
        // Do your work...
    }
}

This allows you to:

  1. Log in once in a “master” browser
  2. Extract the session cookies (including HttpOnly auth tokens)
  3. Distribute those cookies to multiple scraper instances
  4. All scrapers are now authenticated without re-logging in
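Steps 2 and 3 are just serialization: CDP cookies are plain data, so they survive a JSON round-trip intact. A sketch of the hand-off (Python, with dicts standing in for CookieParam; the field names mirror CDP's cookie objects):

```python
import json
import os
import tempfile

def save_cookies(cookies, path):
    # Persist the cookie list extracted from the "master" browser.
    with open(path, "w") as f:
        json.dump(cookies, f)

def load_cookies(path):
    # Each scraper instance reloads the list and injects it via SetCookieAsync.
    with open(path) as f:
        return json.load(f)

master_cookies = [
    {"name": "session_id", "value": "abc123xyz456", "domain": "example.com",
     "path": "/", "secure": True, "httpOnly": True},
]

path = os.path.join(tempfile.gettempdir(), "session-cookies.json")
save_cookies(master_cookies, path)
print(load_cookies(path) == master_cookies)  # True
```

A shared file is the simplest transport; in practice you'd likely hand the same JSON around via Redis, a database, or an internal API, since the format doesn't change.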

Conclusion

The Chrome DevTools Protocol opens up a world of possibilities for browser automation and debugging. By combining it with PuppeteerSharp and a bit of Linux networking knowledge, you can:

  • Control Chrome instances running anywhere on your network
  • Access all browser data, including HttpOnly cookies
  • Build powerful automation and testing tools
  • Debug production issues remotely

The key takeaways:

  1. Use socat to proxy Chrome’s localhost debugging port to external interfaces
  2. PuppeteerSharp provides the easiest way to interact with CDP from C#
  3. CDP gives you superpowers that normal JavaScript can’t access
  4. Security matters—only expose debugging ports to trusted networks


Have you used the Chrome DevTools Protocol in your projects? What creative uses have you found for it? Drop a comment below—I’d love to hear your experiences!

Tags: C#, Chrome, DevTools Protocol, PuppeteerSharp, Web Automation, Browser Automation, Linux, socat
