Archive

Posts Tagged ‘cloud’

Migrating Google Cloud Run to Scaleway: Bringing Your Cloud Infrastructure Back to Europe


Introduction: Why European Cloud Sovereignty Matters Now More Than Ever

In an era of increasing geopolitical tensions, data sovereignty concerns, and evolving international relations, European companies are reconsidering their dependence on US-based cloud providers. The EU’s growing emphasis on digital sovereignty, combined with uncertainties around US data access laws like the CLOUD Act and recent political developments, has made many businesses uncomfortable with storing sensitive data on American infrastructure.

For EU-based companies running containerized workloads on Google Cloud Run, there’s good news: migrating to European alternatives like Scaleway is surprisingly straightforward. This guide will walk you through the technical process of moving your Cloud Run services to Scaleway’s Serverless Containers—keeping your applications running while bringing your infrastructure back under European jurisdiction.

Why Scaleway?

Scaleway, a French cloud provider founded in 1999, offers a compelling alternative to Google Cloud Run:

  • 🇪🇺 100% European: All data centers located in France, Netherlands, and Poland
  • 📜 GDPR Native: Built from the ground up with European data protection in mind
  • 💰 Transparent Pricing: No hidden costs, generous free tiers, and competitive rates
  • 🔒 Data Sovereignty: Your data never leaves EU jurisdiction
  • ⚡ Scale-to-Zero: Just like Cloud Run, pay only for actual usage
  • 🌱 Environmental Leadership: Strong commitment to sustainable cloud infrastructure

Most importantly: Scaleway Serverless Containers are technically equivalent to Google Cloud Run. Both are built on Knative, meaning your containers will run identically on both platforms.

Prerequisites

Before starting, ensure you have:

  • An existing Google Cloud Run service
  • Windows machine with PowerShell
  • gcloud CLI installed and authenticated
  • A Scaleway account (free to create)
  • Skopeo installed (we’ll cover this)

Understanding the Architecture

Both Google Cloud Run and Scaleway Serverless Containers work the same way:

  1. You provide a container image
  2. The platform runs it on-demand via HTTPS endpoints
  3. It scales automatically (including to zero when idle)
  4. You pay only for execution time

The migration process is simply:

  1. Copy your container image from Google’s registry to Scaleway’s registry
  2. Deploy it as a Scaleway Serverless Container
  3. Update your DNS/endpoints

No code changes required—your existing .NET, Node.js, Python, Go, or any other containerized application works as-is.

Step 1: Install Skopeo (Lightweight Docker Alternative)

Since we’re on Windows and don’t want to run full Docker Desktop, we’ll use Skopeo—a lightweight tool designed specifically for copying container images between registries.

Install via winget:

powershell

winget install RedHat.Skopeo

Or download directly from: https://github.com/containers/skopeo/releases

Why Skopeo?

  • No daemon required: No background services consuming resources
  • Direct registry-to-registry transfer: Images never touch your local disk
  • Minimal footprint: ~50MB vs. several GB for Docker Desktop
  • Perfect for CI/CD: Designed for automation and registry operations

Configure Skopeo’s Trust Policy

Skopeo requires a policy file to determine which registries to trust. Create it:

powershell

# Create the config directory
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.config\containers"
# Create a permissive policy that trusts all registries
@"
{
"default": [
{
"type": "insecureAcceptAnything"
}
],
"transports": {
"docker-daemon": {
"": [{"type": "insecureAcceptAnything"}]
}
}
}
"@ | Out-File -FilePath "$env:USERPROFILE\.config\containers\policy.json" -Encoding utf8

For production environments, you might want a more restrictive policy that only trusts specific registries:

powershell

@"
{
"default": [{"type": "reject"}],
"transports": {
"docker": {
"gcr.io": [{"type": "insecureAcceptAnything"}],
"europe-west2-docker.pkg.dev": [{"type": "insecureAcceptAnything"}],
"rg.fr-par.scw.cloud": [{"type": "insecureAcceptAnything"}]
}
}
}
"@ | Out-File -FilePath "$env:USERPROFILE\.config\containers\policy.json" -Encoding utf8

Step 2: Find Your Cloud Run Container Image

Your Cloud Run service uses a specific container image. To find it:

Via gcloud CLI (recommended):

bash

gcloud run services describe YOUR-SERVICE-NAME \
--region=YOUR-REGION \
--project=YOUR-PROJECT \
--format='value(spec.template.spec.containers[0].image)'

This returns the full image URL, something like:

europe-west2-docker.pkg.dev/your-project/cloud-run-source-deploy/your-service@sha256:abc123...

Via Google Cloud Console:

  1. Navigate to Cloud Run in the console
  2. Click your service
  3. Go to the “Revisions” tab
  4. Look for “Container image URL”

The @sha256:... digest is important—it ensures you’re copying the exact image currently running in production.
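
To double-check which digest you are about to copy, Skopeo can read the remote manifest without downloading anything. A quick sanity check, reusing the placeholder image URL from above—the JSON output includes a Digest field you can compare against the value returned by gcloud:

powershell

# Print the manifest metadata (including the Digest field) without pulling the image
skopeo inspect `
--creds="oauth2accesstoken:$(gcloud auth print-access-token)" `
docker://europe-west2-docker.pkg.dev/your-project/cloud-run-source-deploy/your-service:latest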

Step 3: Set Up Scaleway Container Registry

Create a Scaleway Account

  1. Sign up at https://console.scaleway.com/
  2. Complete email verification
  3. Navigate to the console

Create a Container Registry Namespace

  1. Go to Containers → Container Registry
  2. Click Create namespace
  3. Choose a region (Paris, Amsterdam, or Warsaw)
    • Important: Choose the same region where you’ll deploy your containers
  4. Enter a namespace name (e.g., my-containers, production)
    • Must be unique within that region
    • Lowercase, numbers, and hyphens only
  5. Set Privacy to Private
  6. Click Create

Your registry URL will be: rg.fr-par.scw.cloud/your-namespace

Create API Credentials

  1. Click your profile → API Keys (or visit https://console.scaleway.com/iam/api-keys)
  2. Click Generate API Key
  3. Give it a name (e.g., “container-migration”)
  4. Save the Secret Key securely—it’s only shown once
  5. Note both the Access Key and Secret Key

Step 4: Copy Your Container Image

Now comes the magic—copying your container directly from Google to Scaleway without downloading it locally.

Authenticate and Copy:

powershell

# Set your Scaleway secret key as environment variable (more secure)
$env:SCW_SECRET_KEY = "your-scaleway-secret-key-here"
# Copy the image directly between registries
skopeo copy `
--src-creds="oauth2accesstoken:$(gcloud auth print-access-token)" `
--dest-creds="nologin:$env:SCW_SECRET_KEY" `
docker://europe-west2-docker.pkg.dev/your-project/cloud-run-source-deploy/your-service@sha256:abc123... `
docker://rg.fr-par.scw.cloud/your-namespace/your-service:latest

What’s Happening:

  • --src-creds: Authenticates with Google using your gcloud session
  • --dest-creds: Authenticates with Scaleway using your API key
  • Source URL: Your Google Artifact Registry image
  • Destination URL: Your Scaleway Container Registry

The transfer happens directly between registries—your Windows machine just orchestrates it. Even a multi-GB container copies in minutes.

Verify the Copy:

  1. Go to https://console.scaleway.com/registry/namespaces
  2. Click your namespace
  3. You should see your service image listed with the latest tag

Step 5: Deploy to Scaleway Serverless Containers

Create a Serverless Container Namespace:

  1. Navigate to Containers → Serverless Containers
  2. Click Create namespace
  3. Choose the same region as your Container Registry
  4. Give it a name (e.g., production-services)
  5. Click Create

Deploy Your Container:

  1. Click Create container
  2. Image source: Select "Scaleway Container Registry"
  3. Choose your namespace and image
  4. Configuration:
    • Port: Set to the port your app listens on (usually 8080 for Cloud Run apps)
    • Environment variables: Copy any env vars from Cloud Run
    • Resources: Memory: start with what you used in Cloud Run; vCPU: 0.5–1 vCPU is typical
    • Scaling: Min scale 0 (enables scale-to-zero, just like Cloud Run); Max scale based on expected traffic (e.g., 10)
  5. Click Deploy container

Get Your Endpoint:

After deployment (1-2 minutes), you’ll receive an HTTPS endpoint:

https://your-container-namespace-xxxxx.functions.fnc.fr-par.scw.cloud

This is your public API endpoint—no API Gateway needed, SSL included for free.

Step 6: Test Your Service

powershell

# Test the endpoint
Invoke-WebRequest -Uri "https://your-container-url.functions.fnc.fr-par.scw.cloud/your-endpoint"

Your application should respond identically to how it did on Cloud Run.

Understanding the Cost Comparison

Google Cloud Run Pricing (Typical):

  • vCPU: $0.00002400/vCPU-second
  • Memory: $0.00000250/GB-second
  • Requests: $0.40 per million
  • Plus: API Gateway, Load Balancer, or other routing costs

Scaleway Serverless Containers:

  • vCPU: €0.00001/vCPU-second (€1.00 per 100k vCPU-s)
  • Memory: €0.000001/GB-second (€0.10 per 100k GB-s)
  • Requests: Free (no per-request charges)
  • HTTPS endpoint: Free (included)
  • Free Tier: 200k vCPU-seconds + 400k GB-seconds per month

Example Calculation:

For an API handling 1 million requests/month, 200ms average response time, 1 vCPU, 2GB memory:

Google Cloud Run:

  • vCPU: 1M × 0.2s × $0.000024 = $4.80
  • Memory: 1M × 0.2s × 2GB × $0.0000025 = $1.00
  • Requests: 1M × $0.0000004 = $0.40
  • Total: ~$6.20/month

Scaleway:

  • vCPU: 200k vCPU-s → Free (within free tier)
  • Memory: 400k GB-s → Free (within free tier)
  • Total: €0.00/month

Even beyond free tiers, Scaleway is typically 30-50% cheaper, with no surprise charges.

Key Differences to Be Aware Of

Similarities (Good News):

✅ Both use Knative under the hood
✅ Both support HTTP, HTTP/2, WebSocket, gRPC
✅ Both scale to zero automatically
✅ Both provide HTTPS endpoints
✅ Both support custom domains
✅ Both integrate with monitoring/logging

Differences:

  • Cold start: Scaleway takes ~2-5 seconds (similar to Cloud Run)
  • Idle timeout: Scaleway scales to zero after 15 minutes of inactivity (Cloud Run’s idle timeout varies)
  • Regions: Limited to EU (Paris, Amsterdam, Warsaw) vs. Google’s global presence
  • Ecosystem: Smaller ecosystem than GCP (but rapidly growing)

When Scaleway Makes Sense:

  • ✅ Your primary users/customers are in Europe
  • ✅ GDPR compliance is critical
  • ✅ You want to avoid US jurisdiction over your data
  • ✅ You prefer transparent, predictable pricing
  • ✅ You don’t need GCP-specific services (BigQuery, etc.)

When to Consider Carefully:

  • ⚠️ You need global edge distribution (though you can use CDN)
  • ⚠️ You’re heavily integrated with other GCP services
  • ⚠️ You need GCP’s machine learning services
  • ⚠️ Your customers are primarily in Asia/Americas

Additional Migration Considerations

Environment Variables and Secrets:

Scaleway offers Secret Manager integration. Copy your Cloud Run secrets:

  1. Go to Secret Manager in Scaleway
  2. Create secrets matching your Cloud Run environment variables
  3. Reference them in your container configuration

Custom Domains:

Both platforms support custom domains. In Scaleway:

  1. Go to your container settings
  2. Add custom domain
  3. Update your DNS CNAME to point to Scaleway’s endpoint
  4. SSL is handled automatically
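
For example, with a hypothetical subdomain api.example.com and the placeholder endpoint from Step 5, the CNAME record from step 3 would look roughly like this:

api.example.com.  3600  IN  CNAME  your-container-namespace-xxxxx.functions.fnc.fr-par.scw.cloud.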

Databases and Storage:

If you’re using Cloud SQL or Cloud Storage:

  • Databases: Consider Scaleway’s Managed PostgreSQL/MySQL or Serverless SQL Database
  • Object Storage: Scaleway Object Storage is S3-compatible
  • Or: Keep using GCP services (cross-cloud is possible, but adds latency)

Monitoring and Logging:

Scaleway provides Cockpit (based on Grafana):

  • Automatic logging for all Serverless Containers
  • Pre-built dashboards
  • Integration with alerts and metrics
  • Similar to Cloud Logging/Monitoring

The Broader Picture: European Digital Sovereignty

This migration isn’t just about cost savings or technical features—it’s about control.

Why EU Companies Are Moving:

  1. Legal Protection: GDPR protections are stronger when data never leaves EU jurisdiction
  2. Political Risk: Reduces exposure to US government data requests under CLOUD Act
  3. Supply Chain Resilience: Diversification away from Big Tech dependency
  4. Supporting European Tech: Strengthens the European cloud ecosystem
  5. Future-Proofing: As digital sovereignty regulations increase, early movers are better positioned

The Economic Argument:

Every euro spent with European cloud providers:

  • Stays in the European economy
  • Supports European jobs and innovation
  • Builds alternatives to US/Chinese tech dominance
  • Strengthens Europe’s strategic autonomy

Conclusion: A Straightforward Path to Sovereignty

Migrating from Google Cloud Run to Scaleway Serverless Containers is technically simple—often taking just a few hours for a typical service. The containers are identical, the pricing is competitive, and the operational model is the same.

But beyond the technical benefits, there’s a strategic argument: as a European company, every infrastructure decision is a choice about where your data lives, who has access to it, and which ecosystem you’re supporting.

Scaleway (and other European cloud providers) aren’t perfect replacements for every GCP use case. But for containerized APIs and web services—which represent the majority of Cloud Run workloads—they’re absolutely production-ready alternatives that keep your infrastructure firmly within European jurisdiction.

In 2026’s geopolitical landscape, that’s not just a nice-to-have—it’s increasingly essential.



Have you migrated your infrastructure back to Europe? Share your experience in the comments below.


Controlling Remote Chrome Instances with C# and the Chrome DevTools Protocol

If you’ve ever needed to programmatically interact with a Chrome browser running on a remote server—whether for web scraping, automated testing, or debugging—you’ve probably discovered that it’s not as straightforward as it might seem. In this post, I’ll walk you through how to connect to a remote Chrome instance using C# and the Chrome DevTools Protocol (CDP), with a practical example of retrieving all cookies, including those pesky HttpOnly cookies that JavaScript can’t touch.

Why Remote Chrome Control?

There are several scenarios where controlling a remote Chrome instance becomes invaluable:

  • Server-side web scraping where you need JavaScript rendering but want to keep your scraping infrastructure separate from your application servers
  • Cross-platform testing where you’re developing on Windows but testing on Linux environments
  • Distributed automation where multiple test runners need to interact with centralized browser instances
  • Debugging production issues where you need to inspect cookies, local storage, or network traffic on a live system

The Chrome DevTools Protocol gives us low-level access to everything Chrome can do—and I mean everything. Unlike browser automation tools that work through the DOM, CDP operates at the browser level, giving you access to cookies (including HttpOnly), network traffic, performance metrics, and much more.

The Challenge: Making Chrome Accessible Remotely

Chrome’s remote debugging feature is powerful, but getting it to work remotely involves some Linux networking quirks that aren’t immediately obvious. Let me break down the problem and solution.

The Problem

When you launch Chrome with the --remote-debugging-port flag, even if you specify --remote-debugging-address=0.0.0.0, Chrome often binds only to 127.0.0.1 (localhost). This means you can’t connect to it from another machine.

You can verify this by checking what Chrome is actually listening on:

netstat -tlnp | grep 9222
tcp        0      0 127.0.0.1:9222          0.0.0.0:*               LISTEN      1891/chrome

See that 127.0.0.1? That’s the problem. It should be 0.0.0.0 to accept connections from any interface.

The Solution: socat to the Rescue

The elegant solution is to use socat (SOcket CAT) to proxy connections. We run Chrome on one port (localhost only), and use socat to forward a public-facing port to Chrome’s localhost port.

Here’s the setup on your Linux server:

# Start Chrome on localhost:9223
google-chrome \
  --headless=new \
  --no-sandbox \
  --disable-gpu \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug &

# Use socat to proxy external 9222 to internal 9223
socat TCP-LISTEN:9222,fork,bind=0.0.0.0,reuseaddr TCP:127.0.0.1:9223 &

Now verify it’s working:

netstat -tlnp | grep 9222
tcp        0      0 0.0.0.0:9222            0.0.0.0:*               LISTEN      2103/socat

netstat -tlnp | grep 9223
tcp        0      0 127.0.0.1:9223          0.0.0.0:*               LISTEN      2098/chrome

Perfect! Chrome is safely listening on localhost only, while socat provides the public interface. This is actually more secure than having Chrome directly exposed.

Understanding the Chrome DevTools Protocol

Before we dive into code, let’s understand how CDP works. When Chrome runs with remote debugging enabled, it exposes two types of endpoints:

1. HTTP Endpoints (for discovery)

# Get browser version and WebSocket URL
curl http://your-server:9222/json/version

# Get list of all open pages/targets
curl http://your-server:9222/json

The /json/version endpoint returns something like:

{
   "Browser": "Chrome/143.0.7499.169",
   "Protocol-Version": "1.3",
   "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36...",
   "V8-Version": "14.3.127.17",
   "WebKit-Version": "537.36...",
   "webSocketDebuggerUrl": "ws://your-server:9222/devtools/browser/14706e92-5202-4651-aa97-a72d683bf88e"
}

2. WebSocket Endpoint (for control)

The webSocketDebuggerUrl is what we use to actually control Chrome. All CDP commands flow through this WebSocket connection using a JSON-RPC-like protocol.
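
To make the “JSON-RPC-like” part concrete, here is a minimal C# sketch that talks to that WebSocket directly with ClientWebSocket—no libraries—and sends a single CDP command (Browser.getVersion). The endpoint is the placeholder URL from above, and production code would keep calling ReceiveAsync until EndOfMessage is set:

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class RawCdpDemo
{
    static async Task Main()
    {
        // Browser-level WebSocket URL as returned by /json/version (placeholder)
        var wsUrl = new Uri("ws://your-server:9222/devtools/browser/14706e92-5202-4651-aa97-a72d683bf88e");

        using var ws = new ClientWebSocket();
        await ws.ConnectAsync(wsUrl, CancellationToken.None);

        // Every CDP message is a JSON object with an id, a method and optional params
        var command = "{\"id\":1,\"method\":\"Browser.getVersion\"}";
        await ws.SendAsync(new ArraySegment<byte>(Encoding.UTF8.GetBytes(command)),
                           WebSocketMessageType.Text, true, CancellationToken.None);

        // Read the response frame; the reply carries the same id plus a result object
        var buffer = new byte[64 * 1024];
        var result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
    }
}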

Enter PuppeteerSharp

While you could manually handle WebSocket connections and craft CDP commands by hand (and I’ve done that with libraries like MasterDevs.ChromeDevTools), there’s an easier way: PuppeteerSharp.

PuppeteerSharp is a .NET port of Google’s Puppeteer library, providing a high-level API over CDP. The beauty is that it handles all the WebSocket plumbing, message routing, and protocol intricacies for you.

Here’s our complete C# application:

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using PuppeteerSharp;

namespace ChromeRemoteDebugDemo
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // Configuration
            string remoteDebugHost = "xxxx.xxx.xxx.xxx";
            int remoteDebugPort = 9222;
            
            Console.WriteLine("=== Chrome Remote Debug - Cookie Retrieval Demo ===\n");
            
            try
            {
                // Step 1: Get the WebSocket URL from Chrome
                Console.WriteLine($"Connecting to http://{remoteDebugHost}:{remoteDebugPort}/json/version");
                
                using var httpClient = new HttpClient();
                string versionUrl = $"http://{remoteDebugHost}:{remoteDebugPort}/json/version";
                string jsonResponse = await httpClient.GetStringAsync(versionUrl);
                
                // Parse JSON to get webSocketDebuggerUrl
                using JsonDocument doc = JsonDocument.Parse(jsonResponse);
                JsonElement root = doc.RootElement;
                string webSocketUrl = root.GetProperty("webSocketDebuggerUrl").GetString();
                
                Console.WriteLine($"WebSocket URL: {webSocketUrl}\n");
                
                // Step 2: Connect to Chrome using PuppeteerSharp
                Console.WriteLine("Connecting to Chrome via WebSocket...");
                
                var connectOptions = new ConnectOptions
                {
                    BrowserWSEndpoint = webSocketUrl
                };
                
                var browser = await Puppeteer.ConnectAsync(connectOptions);
                Console.WriteLine("Successfully connected!\n");
                
                // Step 3: Get or create a page
                var pages = await browser.PagesAsync();
                IPage page;
                
                if (pages.Length > 0)
                {
                    page = pages[0];
                    Console.WriteLine($"Using existing page: {page.Url}");
                }
                else
                {
                    page = await browser.NewPageAsync();
                    await page.GoToAsync("https://example.com");
                }
                
                // Step 4: Get ALL cookies (including HttpOnly!)
                Console.WriteLine("\nRetrieving all cookies...\n");
                var cookies = await page.GetCookiesAsync();
                
                Console.WriteLine($"Found {cookies.Length} cookie(s):\n");
                
                foreach (var cookie in cookies)
                {
                    Console.WriteLine($"Name:     {cookie.Name}");
                    Console.WriteLine($"Value:    {cookie.Value}");
                    Console.WriteLine($"Domain:   {cookie.Domain}");
                    Console.WriteLine($"Path:     {cookie.Path}");
                    Console.WriteLine($"Secure:   {cookie.Secure}");
                    Console.WriteLine($"HttpOnly: {cookie.HttpOnly}");  // ← This is the magic!
                    Console.WriteLine($"SameSite: {cookie.SameSite}");
                    Console.WriteLine($"Expires:  {(cookie.Expires == -1 ? "Session" : DateTimeOffset.FromUnixTimeSeconds((long)cookie.Expires).ToString())}");
                    Console.WriteLine(new string('-', 80));
                }
                
                await browser.DisconnectAsync();
                Console.WriteLine("\nDisconnected successfully.");
                
            }
            catch (Exception ex)
            {
                Console.WriteLine($"\n❌ ERROR: {ex.Message}");
            }
        }
    }
}

The Key Insight: HttpOnly Cookies

Here’s what makes this approach powerful: page.GetCookiesAsync() returns ALL cookies, including HttpOnly ones.

In a normal web page, JavaScript cannot access HttpOnly cookies—that’s the whole point of the HttpOnly flag. It’s a security feature that prevents XSS attacks from stealing session tokens. But when you’re operating at the CDP level, you’re not bound by JavaScript’s restrictions. You’re talking directly to Chrome’s internals.

This is incredibly useful for:

  • Session management in automation: You can extract session cookies from one browser session and inject them into another
  • Security testing: Verify that sensitive cookies are properly marked HttpOnly
  • Debugging authentication issues: See exactly what cookies are being set by your backend
  • Web scraping: Maintain authenticated sessions across multiple scraper instances

Setting Up the Project

Create a new console application:

dotnet new console -n ChromeRemoteDebugDemo
cd ChromeRemoteDebugDemo

Add PuppeteerSharp:

dotnet add package PuppeteerSharp

Your .csproj should look like:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="PuppeteerSharp" Version="20.2.4" />
  </ItemGroup>
</Project>

Running the Demo

On your Linux server:

# Install socat if needed
apt-get install socat -y

# Start Chrome on internal port 9223
google-chrome \
  --headless=new \
  --no-sandbox \
  --disable-gpu \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug &

# Proxy external 9222 to internal 9223
socat TCP-LISTEN:9222,fork,bind=0.0.0.0,reuseaddr TCP:127.0.0.1:9223 &

On your Windows development machine:

dotnet run

You should see output like:

=== Chrome Remote Debug - Cookie Retrieval Demo ===

Connecting to http://xxxx.xxx.xxx.xxx:9222/json/version
WebSocket URL: ws://xxxx.xxx.xxx.xxx:9222/devtools/browser/14706e92-5202-4651-aa97-a72d683bf88e

Connecting to Chrome via WebSocket...
Successfully connected!

Using existing page: https://example.com

Retrieving all cookies...

Found 2 cookie(s):

Name:     _ga
Value:    GA1.2.123456789.1234567890
Domain:   .example.com
Path:     /
Secure:   True
HttpOnly: False
SameSite: Lax
Expires:  2026-12-27 10:30:45
--------------------------------------------------------------------------------
Name:     session_id
Value:    abc123xyz456
Domain:   example.com
Path:     /
Secure:   True
HttpOnly: True  ← Notice this!
SameSite: Strict
Expires:  Session
--------------------------------------------------------------------------------

Disconnected successfully.

Security Considerations

Before you deploy this in production, consider these security implications:

1. Firewall Configuration

Only expose port 9222 to trusted networks. If you’re running this on a cloud server:

# Allow only your specific IP
sudo ufw allow from YOUR.IP.ADDRESS to any port 9222

Or better yet, use an SSH tunnel and don’t expose the port at all:

# On Windows, create a tunnel
ssh -N -L 9222:localhost:9222 user@remote-server

# Then connect to localhost:9222 in your code

2. Authentication

The Chrome DevTools Protocol has no built-in authentication. Anyone who can connect to the debugging port has complete control over Chrome. This includes:

  • Reading all page content
  • Executing arbitrary JavaScript
  • Accessing all cookies (as we’ve demonstrated)
  • Intercepting and modifying network requests

In production, you should:

  • Use SSH tunnels instead of exposing the port
  • Run Chrome in a sandboxed environment
  • Use short-lived debugging sessions
  • Monitor for unauthorized connections

3. Resource Limits

A runaway Chrome instance can consume significant resources. Consider:

# Limit Chrome's memory usage
google-chrome --headless=new \
  --max-old-space-size=512 \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug

Beyond Cookies: What Else Can You Do?

The Chrome DevTools Protocol is incredibly powerful. Here are some other things you can do with this same setup:

Take Screenshots

await page.ScreenshotAsync("/path/to/screenshot.png");

Monitor Network Traffic

await page.SetRequestInterceptionAsync(true);
page.Request += (sender, e) =>
{
    Console.WriteLine($"Request: {e.Request.Url}");
    e.Request.ContinueAsync();
};

Execute JavaScript

var title = await page.EvaluateExpressionAsync<string>("document.title");

Modify Cookies

await page.SetCookieAsync(new CookieParam
{
    Name = "test",
    Value = "123",
    Domain = "example.com",
    HttpOnly = true,  // Can set HttpOnly from CDP!
    Secure = true
});

Emulate Mobile Devices

await page.EmulateAsync(new DeviceDescriptorOptions
{
    Viewport = new ViewPortOptions { Width = 375, Height = 667 },
    UserAgent = "Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)"
});

Comparing Approaches

You might be wondering how this compares to other approaches:

PuppeteerSharp vs. Selenium

Selenium uses the WebDriver protocol, which is a W3C standard but higher-level and more abstracted. PuppeteerSharp/CDP gives you lower-level access to Chrome specifically.

  • Selenium: Better for cross-browser testing, more stable API
  • PuppeteerSharp: More powerful Chrome-specific features, faster, lighter weight

PuppeteerSharp vs. Raw CDP Libraries

You could use libraries like MasterDevs.ChromeDevTools or ChromeProtocol for more direct CDP access:

// With MasterDevs.ChromeDevTools
var session = new ChromeSession(webSocketUrl);
var cookies = await session.SendAsync(new GetCookiesCommand());

Low-level CDP libraries:

  • Pros: More control, can use experimental CDP features
  • Cons: More verbose, have to handle protocol details

PuppeteerSharp:

  • Pros: High-level API, actively maintained, comprehensive documentation
  • Cons: Abstracts away some CDP features

For most use cases, PuppeteerSharp hits the sweet spot between power and ease of use.

Troubleshooting Common Issues

“Could not connect to Chrome debugging endpoint”

Check firewall:

sudo ufw status
sudo iptables -L -n | grep 9222

Verify Chrome is running:

ps aux | grep chrome
netstat -tlnp | grep 9222

Test locally first:

curl http://localhost:9222/json/version

“No cookies found”

This is normal if the page hasn’t set any cookies. Navigate to a site that does:

await page.GoToAsync("https://github.com");
var cookies = await page.GetCookiesAsync();

Chrome crashes or hangs

Add more stability flags:

google-chrome \
  --headless=new \
  --no-sandbox \
  --disable-gpu \
  --disable-dev-shm-usage \
  --disable-setuid-sandbox \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug

Real-World Use Case: Session Management

Here’s a practical example of how I’ve used this in production—managing authenticated sessions for web scraping:

public class SessionManager
{
    private readonly string _remoteChrome;
    
    public async Task<CookieParam[]> LoginAndGetSession(string username, string password)
    {
        var browser = await ConnectToRemoteChrome();
        var page = await browser.NewPageAsync();
        
        // Perform login
        await page.GoToAsync("https://example.com/login");
        await page.TypeAsync("#username", username);
        await page.TypeAsync("#password", password);
        await page.ClickAsync("#login-button");
        await page.WaitForNavigationAsync();
        
        // Extract all cookies (including HttpOnly session tokens!)
        var cookies = await page.GetCookiesAsync();
        
        await browser.DisconnectAsync();
        
        // Store these cookies for later use
        return cookies;
    }
    
    public async Task ReuseSession(CookieParam[] cookies)
    {
        var browser = await ConnectToRemoteChrome();
        var page = await browser.NewPageAsync();
        
        // Inject the saved cookies
        await page.SetCookieAsync(cookies);
        
        // Now you're authenticated!
        await page.GoToAsync("https://example.com/dashboard");
        
        // Do your work...
    }
}

This allows you to:

  1. Log in once in a “master” browser
  2. Extract the session cookies (including HttpOnly auth tokens)
  3. Distribute those cookies to multiple scraper instances
  4. All scrapers are now authenticated without re-logging in

Conclusion

The Chrome DevTools Protocol opens up a world of possibilities for browser automation and debugging. By combining it with PuppeteerSharp and a bit of Linux networking knowledge, you can:

  • Control Chrome instances running anywhere on your network
  • Access all browser data, including HttpOnly cookies
  • Build powerful automation and testing tools
  • Debug production issues remotely

The key takeaways:

  1. Use socat to proxy Chrome’s localhost debugging port to external interfaces
  2. PuppeteerSharp provides the easiest way to interact with CDP from C#
  3. CDP gives you superpowers that normal JavaScript can’t access
  4. Security matters—only expose debugging ports to trusted networks

The complete code from this post is available on GitHub (replace with your actual link). If you found this useful, consider giving it a star!

Have you used the Chrome DevTools Protocol in your projects? What creative uses have you found for it? Drop a comment below—I’d love to hear your experiences!



Tags: C#, Chrome, DevTools Protocol, PuppeteerSharp, Web Automation, Browser Automation, Linux, socat


How to Check All AWS Regions for Deprecated Python 3.9 Lambda Functions (PowerShell Guide)

If you’ve received an email from AWS notifying you that Python 3.9 is being deprecated for AWS Lambda, you’re not alone. As runtimes reach End-Of-Life, AWS sends warnings so you can update your Lambda functions before support officially ends.

The key question is:

How do you quickly check every AWS region to see where you’re still using Python 3.9?

AWS only gives you a single-region example in their email, but many teams have functions deployed globally. Fortunately, you can automate a full multi-region check using a simple PowerShell script.

This post shows you exactly how to do that.


🚨 Why You Received the Email

AWS is ending support for Python 3.9 in AWS Lambda.
After the deprecation dates:

  • No more security patches
  • No AWS technical support
  • You won’t be able to create/update functions using Python 3.9
  • Your functions will still run, but on an unsupported runtime

To avoid risk, you should upgrade these functions to Python 3.10, 3.11, or 3.12.

But first, you need to find all the functions using Python 3.9 — across all regions.


✔️ Prerequisites

Make sure you have:

  • AWS CLI installed
  • AWS credentials configured (via aws configure)
  • Permissions to run:
    • lambda:ListFunctions
    • ec2:DescribeRegions
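
A minimal IAM policy covering just those two read-only actions would look like this (attach it to the user or role whose credentials the CLI uses):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["lambda:ListFunctions", "ec2:DescribeRegions"],
      "Resource": "*"
    }
  ]
}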

🧪 Step 1 — Verify AWS CLI Access

Run this to confirm your CLI is working:

aws sts get-caller-identity --region eu-west-1

If it returns your AWS ARN, you’re good to go.

If you see “You must specify a region”, set a default region:

aws configure set region eu-west-1


📝 Step 2 — PowerShell Script to Check Python 3.9 in All Regions

Save this as aws-lambda-python39-check.ps1 (or any name you prefer):

# Get all AWS regions (forcing region so the call always works)
$regions = (aws ec2 describe-regions --region us-east-1 --query "Regions[].RegionName" --output text) -split "\s+"

foreach ($region in $regions) {
    Write-Host "Checking region: $region ..."
    $functions = aws lambda list-functions `
        --region $region `
        --query "Functions[?Runtime=='python3.9'].FunctionArn" `
        --output text

    if ($functions) {
        Write-Host "  → Found Python 3.9 functions:"
        Write-Host "    $functions"
    } else {
        Write-Host "  → No Python 3.9 functions found."
    }
}

This script does three things:

  1. Retrieves all AWS regions
  2. Loops through each region
  3. Prints any Lambda functions that still use Python 3.9

It handles the common AWS CLI error:

You must specify a region

by explicitly using --region us-east-1 when retrieving the region list.


▶️ Step 3 — Run the Script

Open PowerShell in the folder where your script is saved:

.\aws-lambda-python39-check.ps1

You’ll see output like:

Checking region: eu-west-1 ...
  → Found Python 3.9 functions:
    arn:aws:lambda:eu-west-1:123456789012:function:my-old-function

Checking region: us-east-1 ...
  → No Python 3.9 functions found.

If no functions appear, you’re fully compliant.


🛠️ What to Do Next

For each function identified, update the runtime:

aws lambda update-function-configuration `
    --function-name MyFunction `
    --runtime python3.12

If you package dependencies manually (ZIP deployments), ensure you rebuild them using the new Python version.
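
If you have many affected functions and none of them bundle native or version-pinned dependencies, you can combine the scan and the upgrade into a single pass. Here is a sketch—review the list it prints before running it against production:

# Upgrade every Python 3.9 function in every region to Python 3.12
$regions = (aws ec2 describe-regions --region us-east-1 --query "Regions[].RegionName" --output text) -split "\s+"

foreach ($region in $regions) {
    $names = (aws lambda list-functions `
        --region $region `
        --query "Functions[?Runtime=='python3.9'].FunctionName" `
        --output text) -split "\s+" | Where-Object { $_ }

    foreach ($name in $names) {
        Write-Host "Updating $name in $region to python3.12 ..."
        aws lambda update-function-configuration `
            --region $region `
            --function-name $name `
            --runtime python3.12
    }
}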


🎉 Summary

AWS’s deprecation emails can be slightly alarming, but the fix is simple:

  • Scan all regions
  • Identify Python 3.9 Lambda functions
  • Upgrade them in advance of the cutoff date

With the PowerShell script above, you can audit your entire AWS account in seconds.

Fixing .NET 8 HttpClient Permission Denied Errors on Google Cloud Run

If you’re deploying a .NET 8 application to Google Cloud Run and encountering a mysterious NetworkInformationException (13): Permission denied error when making HTTP requests, you’re not alone. This is a known issue that stems from how .NET’s HttpClient interacts with Cloud Run’s restricted container environment.

The Problem

When your .NET application makes HTTP requests using HttpClient, you might see an error like this:

System.Net.NetworkInformation.NetworkInformationException (13): Permission denied
   at System.Net.NetworkInformation.NetworkChange.CreateSocket()
   at System.Net.NetworkInformation.NetworkChange.add_NetworkAddressChanged(NetworkAddressChangedEventHandler value)
   at System.Net.Http.HttpConnectionPoolManager.StartMonitoringNetworkChanges()

This error occurs because .NET’s HttpClient attempts to monitor network changes and handle advanced HTTP features like HTTP/3 and Alt-Svc (Alternative Services). To do this, it tries to create network monitoring sockets, which requires permissions that Cloud Run containers don’t have by default.

Cloud Run’s security model intentionally restricts certain system-level operations to maintain isolation and security. While this is great for security, it conflicts with .NET’s network monitoring behavior.

Why Does This Happen?

The .NET runtime includes sophisticated connection pooling and HTTP version negotiation features. When a server responds with an Alt-Svc header (suggesting alternative protocols or endpoints), .NET tries to:

  1. Monitor network interface changes
  2. Adapt connection strategies based on network conditions
  3. Support HTTP/3 where available

These features require low-level network access that Cloud Run’s sandboxed environment doesn’t permit.

The Solution

Fortunately, there’s a straightforward fix. You need to disable the features that require elevated network permissions by setting two environment variables:

Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_DISABLEIPV6", "1");
Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT", "false");

Place these lines at the very top of your Program.cs file, before any HTTP client initialization or web application builder creation.
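
In a minimal ASP.NET Core Program.cs, that ordering looks like the sketch below (the endpoint is just a placeholder to show an outbound HttpClient call; your own startup code stays as it is):

// Set these before any HttpClient or WebApplication is created
Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_DISABLEIPV6", "1");
Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT", "false");

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();

var app = builder.Build();

// Placeholder endpoint demonstrating an outbound call that previously triggered the exception
app.MapGet("/", async (IHttpClientFactory factory) =>
{
    var client = factory.CreateClient();
    return await client.GetStringAsync("https://example.com");
});

// Cloud Run tells the container which port to listen on via the PORT variable
app.Run($"http://0.0.0.0:{Environment.GetEnvironmentVariable("PORT") ?? "8080"}");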

What These Variables Do

  • DOTNET_SYSTEM_NET_DISABLEIPV6: Disables IPv6 support, which also disables the network change monitoring that requires socket creation.
  • DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT: Explicitly disables HTTP/3 support, preventing .NET from trying to negotiate HTTP/3 connections.

Alternative Approaches

Option 1: Set in Dockerfile

You can bake these settings into your container image:

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app

# Disable network monitoring features
ENV DOTNET_SYSTEM_NET_DISABLEIPV6=1
ENV DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT=false

COPY publish/ .
ENTRYPOINT ["dotnet", "YourApp.dll"]

Option 2: Set via Cloud Run Configuration

You can configure these as environment variables in your Cloud Run deployment:

gcloud run deploy your-service \
  --image gcr.io/your-project/your-image \
  --set-env-vars DOTNET_SYSTEM_NET_DISABLEIPV6=1,DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT=false

Or through the Cloud Console when configuring your service’s environment variables.

Performance Impact

You might wonder if disabling these features affects performance. In practice:

  • HTTP/3 isn’t widely used yet, and most services work perfectly fine with HTTP/2 or HTTP/1.1
  • Network change monitoring is primarily useful for long-running desktop applications that move between networks (like a laptop switching from WiFi to cellular)
  • In a Cloud Run container with a stable network environment, these features provide minimal benefit

The performance impact is negligible, and the tradeoff is well worth it for a working application.

Why It Works Locally But Fails in Cloud Run

This issue often surprises developers because their code works perfectly on their development machine. That’s because:

  • Local development environments typically run with full system permissions
  • Your local machine isn’t running in a restricted container
  • Cloud Run’s security sandbox is much more restrictive than a typical development environment

This is a classic example of environment-specific behavior where security constraints in production expose issues that don’t appear during development.

Conclusion

The Permission denied error when using HttpClient in .NET 8 on Google Cloud Run is caused by the runtime’s attempt to use network monitoring features that aren’t available in Cloud Run’s restricted environment. The fix is simple: disable these features using environment variables.

This solution is officially recognized by the .NET team as the recommended workaround for containerized environments with restricted permissions, so you can use it with confidence in production.



Have you encountered other .NET deployment issues on Cloud Run? Feel free to share your experiences in the comments below.


Enhanced Italian Vehicle #API: VIN Numbers Now Available for Motorcycles

We’re excited to announce a significant enhancement to the Italian vehicle data API available through Targa.co.it. Starting today, our API responses now include Vehicle Identification Numbers (VIN) for motorcycle lookups, providing developers and businesses with more comprehensive vehicle data than ever before.

What’s New

The Italian vehicle API has been upgraded to return VIN numbers alongside existing motorcycle data. This enhancement brings motorcycle data parity with our car lookup service, ensuring consistent and complete vehicle information across all vehicle types.

Sample Response Structure

Here’s what you can expect from the enhanced API response for a motorcycle lookup:

json

{
  "Description": "Yamaha XT 1200 Z Super Ténéré",
  "RegistrationYear": "2016",
  "CarMake": {
    "CurrentTextValue": "Yamaha"
  },
  "CarModel": {
    "CurrentTextValue": "XT 1200 Z Super Ténéré"
  },
  "EngineSize": {
    "CurrentTextValue": "1199"
  },
  "FuelType": {
    "CurrentTextValue": ""
  },
  "MakeDescription": {
    "CurrentTextValue": "Yamaha"
  },
  "ModelDescription": {
    "CurrentTextValue": "XT 1200 Z Super Ténéré"
  },
  "Immobiliser": {
    "CurrentTextValue": ""
  },
  "Version": "ABS (2014-2016) 1199cc",
  "ABS": "",
  "AirBag": "",
  "Vin": "JYADP041000002470",
  "KType": "",
  "PowerCV": "",
  "PowerKW": "",
  "PowerFiscal": "",
  "ImageUrl": "http://www.targa.co.it/image.aspx/@WWFtYWhhIFhUIDEyMDAgWiBTdXBlciBUw6luw6lyw6l8bW90b3JjeWNsZQ=="
}

Why VIN Numbers Matter

Vehicle Identification Numbers serve as unique fingerprints for every vehicle, providing several key benefits:

Enhanced Vehicle Verification: VINs offer the most reliable method to verify a vehicle’s authenticity and specifications, reducing fraud in motorcycle transactions.

Complete Vehicle History: Access to VIN enables comprehensive history checks, insurance verification, and recall information lookup.

Improved Business Applications: Insurance companies, dealerships, and fleet management services can now build more robust motorcycle-focused applications with complete vehicle identification.

Regulatory Compliance: Many automotive business processes require VIN verification for legal and regulatory compliance.

Technical Implementation

The VIN field has been seamlessly integrated into existing API responses without breaking changes. The new "Vin" field appears alongside existing motorcycle data, maintaining backward compatibility while extending functionality.

Key Features:

  • No Breaking Changes: Existing integrations continue to work unchanged
  • Consistent Data Structure: Same JSON structure across all vehicle types
  • Comprehensive Coverage: VIN data available for motorcycles registered in the Italian vehicle database
  • Real-time Updates: VIN information reflects the most current data from official Italian vehicle registries

Getting Started

Developers can immediately begin utilizing VIN data in their applications. The API endpoint remains unchanged, and VIN information is automatically included in all motorcycle lookup responses where available.

For businesses already integrated with our Italian vehicle API, this enhancement provides immediate additional value without requiring any code changes. New integrations can take full advantage of complete motorcycle identification data from day one.
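
As an illustration, a .NET client can read the new field from the existing response with System.Text.Json; older clients that ignore unknown properties keep working unchanged. The lookup URL below is a placeholder—see the Targa.co.it documentation for the actual endpoint and authentication:

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class VinLookupDemo
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // Placeholder: substitute the real Targa.co.it motorcycle lookup URL and credentials
        var lookupUrl = "https://www.targa.co.it/...";
        var json = await http.GetStringAsync(lookupUrl);

        using var doc = JsonDocument.Parse(json);
        var root = doc.RootElement;

        // Existing fields keep working; "Vin" is simply a new sibling property
        var description = root.GetProperty("Description").GetString();
        var vin = root.TryGetProperty("Vin", out var vinElement)
            ? vinElement.GetString()
            : null;

        Console.WriteLine($"{description}: VIN = {vin ?? "(not available)"}");
    }
}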

Use Cases

This enhancement opens up new possibilities for motorcycle-focused applications:

  • Insurance Platforms: Accurate risk assessment and policy management
  • Marketplace Applications: Enhanced listing verification and buyer confidence
  • Fleet Management: Complete motorcycle inventory tracking
  • Service Centers: Precise parts identification and service history management
  • Regulatory Reporting: Compliance with Italian vehicle registration requirements

Looking Forward

This VIN integration for motorcycles represents our continued commitment to providing comprehensive Italian vehicle data. We’re constantly working to enhance our API capabilities and expand data coverage to better serve the automotive technology ecosystem.

The addition of VIN numbers to motorcycle data brings our Italian API to feature parity with leading international vehicle data providers, while maintaining the accuracy and reliability that Italian businesses have come to expect from Targa.co.it.


Ready to integrate enhanced motorcycle data into your application? Visit Targa.co.it to explore our Italian vehicle API documentation and get started with VIN-enabled motorcycle lookups today.

Porting a PHP OAuth Spotler Client to C#: Lessons Learned

Recently I had to integrate with Spotler’s REST API from a .NET application. Spotler provides a powerful marketing automation platform, and their API uses OAuth 1.0 HMAC-SHA1 signatures for authentication.

They provided a working PHP client, but I needed to port this to C#. Here’s what I learned (and how you can avoid some common pitfalls).


🚀 The Goal

We started with a PHP class that:

✅ Initializes with:

  • consumerKey
  • consumerSecret
  • optional SSL certificate verification

✅ Creates properly signed OAuth 1.0 requests

✅ Makes HTTP requests with cURL and parses the JSON responses.

I needed to replicate this in C# so we could use it inside a modern .NET microservice.


🛠 The Port to C#

🔑 The tricky part: OAuth 1.0 signatures

Spotler’s API requires a specific signature format. It’s critical to:

  1. Build the signature base string by concatenating:
    • The uppercase HTTP method (e.g., GET),
    • The URL-encoded endpoint,
    • And the URL-encoded, sorted OAuth parameters.
  2. Sign it using HMAC-SHA1 with the consumerSecret followed by &.
  3. Base64 encode the HMAC hash.

This looks simple on paper, but tiny differences in escaping or parameter order will cause 401 Unauthorized.
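
To make that concrete, for a GET to integrationservice/contact/email@gmail.com the signature base string ends up looking roughly like this (key, nonce and timestamp are made-up values; note that the parameter string is URL-encoded a second time):

GET&https%3A%2F%2Frestapi.mailplus.nl%2Fintegrationservice%2Fcontact%2Femail%40gmail.com&oauth_consumer_key%3Dabc123%26oauth_nonce%3Dd41d8cd98f00%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1718000000%26oauth_version%3D1.0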

💻 The final C# solution

We used HttpClient for HTTP requests, and HMACSHA1 from System.Security.Cryptography for signatures. Here’s what our C# SpotlerClient does:

✅ Generates the OAuth parameters (consumer_key, nonce, timestamp, etc).
✅ Creates the exact signature base string, matching the PHP implementation character-for-character.
✅ Computes the HMAC-SHA1 signature and Base64 encodes it.
✅ Builds the Authorization header.
✅ Sends the HTTP request, with JSON bodies if needed.

We also added better exception handling: if the API returns an error (like 401), we throw an exception that includes the full response body. This made debugging much faster.


🐛 Debugging tips for OAuth 1.0

  1. Print the signature base string.
    It needs to match exactly what Spotler expects. Any stray spaces or wrong escaping will fail.
  2. Double-check timestamp and nonce generation.
    OAuth requires these to prevent replay attacks.
  3. Compare with the PHP implementation.
    We literally copied the signature generation line-by-line from PHP into C#, carefully mapping rawurlencode to Uri.EscapeDataString.
  4. Turn off SSL validation carefully.
    During development, you might disable certificate checks (ServerCertificateCustomValidationCallback), but never do this in production.

using System;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

namespace SpotlerClient
{
 
    public class SpotlerClient
    {
        private readonly string _consumerKey;
        private readonly string _consumerSecret;
        private readonly string _baseUrl = "https://restapi.mailplus.nl";
        private readonly HttpClient _httpClient;

        public SpotlerClient(string consumerKey, string consumerSecret, bool verifyCertificate = true)
        {
            _consumerKey = consumerKey;
            _consumerSecret = consumerSecret;

            var handler = new HttpClientHandler();
            if (!verifyCertificate)
            {
                handler.ServerCertificateCustomValidationCallback = (sender, cert, chain, sslPolicyErrors) => true;
            }

            _httpClient = new HttpClient(handler);
        }

        public async Task<string> ExecuteAsync(string endpoint, HttpMethod method, string jsonData = null)
        {
            var request = new HttpRequestMessage(method, $"{_baseUrl}/{endpoint}");
            var authHeader = CreateAuthorizationHeader(method.Method, endpoint);
            request.Headers.Add("Accept", "application/json");
            request.Headers.Add("Authorization", authHeader);

            if (jsonData != null)
            {
                request.Content = new StringContent(jsonData, Encoding.UTF8, "application/json");
            }

            var response = await _httpClient.SendAsync(request);

            if (!response.IsSuccessStatusCode)
            {
                // Surface the full response body, which makes OAuth signature problems much easier to debug
                var body = await response.Content.ReadAsStringAsync();
                throw new HttpRequestException($"Spotler API returned {(int)response.StatusCode}: {body}");
            }

            return await response.Content.ReadAsStringAsync();
        }

        private string CreateAuthorizationHeader(string httpMethod, string endpoint)
        {
            var timestamp = DateTimeOffset.UtcNow.ToUnixTimeSeconds().ToString();
            var nonce = Guid.NewGuid().ToString("N");

            var paramString = "oauth_consumer_key=" + Uri.EscapeDataString(_consumerKey) +
                              "&oauth_nonce=" + Uri.EscapeDataString(nonce) +
                              "&oauth_signature_method=" + Uri.EscapeDataString("HMAC-SHA1") +
                              "&oauth_timestamp=" + Uri.EscapeDataString(timestamp) +
                              "&oauth_version=" + Uri.EscapeDataString("1.0");

            var sigBase = httpMethod.ToUpper() + "&" +
                          Uri.EscapeDataString(_baseUrl + "/" + endpoint) + "&" +
                          Uri.EscapeDataString(paramString);

            var sigKey = _consumerSecret + "&";

            var signature = ComputeHmacSha1Signature(sigBase, sigKey);

            var authHeader = $"OAuth oauth_consumer_key=\"{_consumerKey}\", " +
                             $"oauth_nonce=\"{nonce}\", " +
                             $"oauth_signature_method=\"HMAC-SHA1\", " +
                             $"oauth_timestamp=\"{timestamp}\", " +
                             $"oauth_version=\"1.0\", " +
                             $"oauth_signature=\"{Uri.EscapeDataString(signature)}\"";

            return authHeader;
        }

        private string ComputeHmacSha1Signature(string data, string key)
        {
            using var hmac = new HMACSHA1(Encoding.UTF8.GetBytes(key));
            var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(data));
            return Convert.ToBase64String(hash);
        }
    }
}

✅ The payoff

Once the signature was constructed precisely, authentication errors disappeared. We could now use the Spotler REST API seamlessly from C#, including:

  • importing contact lists,
  • starting campaigns,
  • and fetching campaign metrics.

📚 Sample usage

var client = new SpotlerClient(_consumerKey, _consumerSecret, false);
var endpoint = "integrationservice/contact/email@gmail.com";
var json = client.ExecuteAsync(endpoint, HttpMethod.Get).GetAwaiter().GetResult();
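
Posting JSON works the same way through ExecuteAsync—for example (the endpoint and payload shape here are hypothetical; check the Spotler documentation for the real contract):

// Inside an async method: send a JSON body with the same signed client
var payload = "{ \"externalId\": \"12345\", \"properties\": { \"email\": \"email@gmail.com\" } }";
var result = await client.ExecuteAsync("integrationservice/contact", HttpMethod.Post, payload);
Console.WriteLine(result);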

🎉 Conclusion

Porting from PHP to C# isn’t always as direct as it looks — especially when it comes to cryptographic signatures. But with careful attention to detail and lots of testing, we managed to build a robust, reusable client.

If you’re facing a similar integration, feel free to reach out or clone this approach. Happy coding!


🚫 Why AWS SDK for S3 No Longer Works Smoothly with .NET Framework 4.8 — and How to Fix It

In 2024, more .NET developers are finding themselves in a strange situation: suddenly, tried-and-tested .NET Framework 4.8 applications that interact with Amazon S3 start throwing cryptic build errors or runtime exceptions. The culprit? The AWS SDK for .NET has increasingly shifted toward support for .NET Core / .NET 6+, and full compatibility with .NET Framework is eroding.

In this post, we’ll explain:

  • Why this happens
  • What errors you might see
  • And how to remove the AWS SDK altogether and replace it with pure .NET 4.8-compatible code for downloading (and uploading) files from S3 using Signature Version 4.

🧨 The Problem: AWS SDK & .NET Framework 4.8

The AWS SDK for .NET (like AWSSDK.S3) now depends on modern libraries like:

  • System.Text.Json
  • System.Buffers
  • System.Runtime.CompilerServices.Unsafe
  • Microsoft.Bcl.AsyncInterfaces

These dependencies were designed for .NET Core and later versions — not .NET Framework. While it was once possible to work around this with binding redirects and careful version pinning, the situation has become unstable and error-prone.


❗ Common Symptoms

You may see errors like:

Could not load file or assembly ‘System.Text.Json, Version=6.0.0.11’

Or:

Could not load file or assembly ‘System.Buffers, Version=4.0.5.0’

Or during build:

Warning: Unable to update auto-refresh reference ‘system.text.json.dll’

Even if you install the correct packages, you may end up needing to fight bindingRedirect hell, and still not get a working application.


✅ The Solution: Remove the AWS SDK

Fortunately, you don’t need the SDK to use S3. All AWS S3 requires is a properly signed HTTP request using AWS Signature Version 4, and you can create that yourself using standard .NET 4.8 libraries.


🔐 Downloading from S3 Without the AWS SDK

Here’s how you can download a file from S3 using HttpWebRequest and Signature Version 4.

✔️ The Key Points:

  • You must include the x-amz-content-sha256 header (even for GETs!)
  • You sign the request using your AWS secret key
  • No external packages required — works on plain .NET 4.8

🧩 Code Snippet


// Requires: System, System.IO, System.Net, System.Security.Cryptography, System.Text
public static byte[] DownloadFromS3(string bucketName, string objectKey, string region, string accessKey, string secretKey)
{
    var method = "GET";
    var service = "s3";
    var host = $"{bucketName}.s3.{region}.amazonaws.com";
    var uri = $"https://{host}/{objectKey}";
    var requestDate = DateTime.UtcNow;
    var amzDate = requestDate.ToString("yyyyMMddTHHmmssZ");
    var dateStamp = requestDate.ToString("yyyyMMdd");
    var canonicalUri = "/" + objectKey;
    var signedHeaders = "host;x-amz-content-sha256;x-amz-date";
    var payloadHash = HashSHA256(string.Empty); // Required even for GET

    var canonicalRequest = $"{method}\n{canonicalUri}\n\nhost:{host}\nx-amz-content-sha256:{payloadHash}\nx-amz-date:{amzDate}\n\n{signedHeaders}\n{payloadHash}";
    var credentialScope = $"{dateStamp}/{region}/{service}/aws4_request";
    var stringToSign = $"AWS4-HMAC-SHA256\n{amzDate}\n{credentialScope}\n{HashSHA256(canonicalRequest)}";

    var signingKey = GetSignatureKey(secretKey, dateStamp, region, service);
    var signature = ToHexString(HmacSHA256(signingKey, stringToSign));

    var authorizationHeader = $"AWS4-HMAC-SHA256 Credential={accessKey}/{credentialScope}, SignedHeaders={signedHeaders}, Signature={signature}";

    var request = (HttpWebRequest)WebRequest.Create(uri);
    request.Method = method;
    request.Headers["Authorization"] = authorizationHeader;
    request.Headers["x-amz-date"] = amzDate;
    request.Headers["x-amz-content-sha256"] = payloadHash;

    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var responseStream = response.GetResponseStream())
        using (var memoryStream = new MemoryStream())
        {
            responseStream.CopyTo(memoryStream);
            return memoryStream.ToArray();
        }
    }
    catch (WebException ex)
    {
        using (var errorResponse = (HttpWebResponse)ex.Response)
        using (var reader = new StreamReader(errorResponse.GetResponseStream()))
        {
            var errorText = reader.ReadToEnd();
            throw new Exception($"S3 request failed: {errorText}", ex);
        }
    }
}
🔧 Supporting Methods


private static string HashSHA256(string text)
{
    using (var sha256 = SHA256.Create())
    {
        return ToHexString(sha256.ComputeHash(Encoding.UTF8.GetBytes(text)));
    }
}

private static byte[] HmacSHA256(byte[] key, string data)
{
    using (var hmac = new HMACSHA256(key))
    {
        return hmac.ComputeHash(Encoding.UTF8.GetBytes(data));
    }
}

private static byte[] GetSignatureKey(string secretKey, string dateStamp, string region, string service)
{
    var kSecret = Encoding.UTF8.GetBytes("AWS4" + secretKey);
    var kDate = HmacSHA256(kSecret, dateStamp);
    var kRegion = HmacSHA256(kDate, region);
    var kService = HmacSHA256(kRegion, service);
    return HmacSHA256(kService, "aws4_request");
}

private static string ToHexString(byte[] bytes)
{
    return BitConverter.ToString(bytes).Replace("-", "").ToLowerInvariant();
}


📝 Uploading to S3 Without the AWS SDK
You can extend the same technique for PUT requests. The only differences are:

  • You calculate the SHA-256 hash of the actual file content (instead of the empty string)
  • You include Content-Type and Content-Length headers
  • You use PUT instead of GET

A minimal sketch is shown below; it follows the same Signature V4 pattern. Let me know in the comments if you’d like a more complete upload version.
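
This sketch reuses the helper methods above. The contentType default and the choice to leave Content-Type out of the signed headers are assumptions for illustration, not something S3 mandates:

// Sketch: upload a byte[] to S3 with Signature V4, reusing HashSHA256/HmacSHA256/GetSignatureKey/ToHexString
public static void UploadToS3(string bucketName, string objectKey, string region,
    string accessKey, string secretKey, byte[] content, string contentType = "application/octet-stream")
{
    var method = "PUT";
    var service = "s3";
    var host = $"{bucketName}.s3.{region}.amazonaws.com";
    var uri = $"https://{host}/{objectKey}";
    var requestDate = DateTime.UtcNow;
    var amzDate = requestDate.ToString("yyyyMMddTHHmmssZ");
    var dateStamp = requestDate.ToString("yyyyMMdd");
    var canonicalUri = "/" + objectKey;
    var signedHeaders = "host;x-amz-content-sha256;x-amz-date";

    // For PUT, the payload hash is the SHA-256 of the actual body, not of the empty string
    string payloadHash;
    using (var sha256 = SHA256.Create())
    {
        payloadHash = ToHexString(sha256.ComputeHash(content));
    }

    var canonicalRequest = $"{method}\n{canonicalUri}\n\nhost:{host}\nx-amz-content-sha256:{payloadHash}\nx-amz-date:{amzDate}\n\n{signedHeaders}\n{payloadHash}";
    var credentialScope = $"{dateStamp}/{region}/{service}/aws4_request";
    var stringToSign = $"AWS4-HMAC-SHA256\n{amzDate}\n{credentialScope}\n{HashSHA256(canonicalRequest)}";

    var signingKey = GetSignatureKey(secretKey, dateStamp, region, service);
    var signature = ToHexString(HmacSHA256(signingKey, stringToSign));
    var authorizationHeader = $"AWS4-HMAC-SHA256 Credential={accessKey}/{credentialScope}, SignedHeaders={signedHeaders}, Signature={signature}";

    var request = (HttpWebRequest)WebRequest.Create(uri);
    request.Method = method;
    request.ContentType = contentType;       // Content-Type header
    request.ContentLength = content.Length;  // Content-Length header
    request.Headers["Authorization"] = authorizationHeader;
    request.Headers["x-amz-date"] = amzDate;
    request.Headers["x-amz-content-sha256"] = payloadHash;

    using (var requestStream = request.GetRequestStream())
    {
        requestStream.Write(content, 0, content.Length);
    }

    using (var response = (HttpWebResponse)request.GetResponse())
    {
        // A 200 OK means the object was stored; S3 returns an empty body for a simple PUT
    }
}

Only host, x-amz-content-sha256 and x-amz-date are signed here; Content-Type is sent unsigned, which S3 accepts, but you can add it to the signed headers if you prefer.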

✅ Summary

| Feature | AWS SDK for .NET | Manual Signature V4 |
|---|---|---|
| .NET Framework 4.8 support | ❌ Increasingly broken | ✅ Fully supported |
| Heavy NuGet dependencies | ✅ | ❌ Minimal |
| Simple download/upload | ✅ | ✅ (with more code) |
| Presigned URLs | ✅ | 🟡 Manual support |
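
For the last row, “manual support” means something like the rough sketch below. It reuses the helper methods above and assumes an object key that needs no extra URI encoding; treat it as a starting point rather than a tested implementation:

// Sketch: build a presigned GET URL (query-string Signature V4), valid for expiresSeconds
public static string GetPresignedUrl(string bucketName, string objectKey, string region,
    string accessKey, string secretKey, int expiresSeconds = 3600)
{
    var service = "s3";
    var host = $"{bucketName}.s3.{region}.amazonaws.com";
    var canonicalUri = "/" + objectKey;
    var requestDate = DateTime.UtcNow;
    var amzDate = requestDate.ToString("yyyyMMddTHHmmssZ");
    var dateStamp = requestDate.ToString("yyyyMMdd");
    var credentialScope = $"{dateStamp}/{region}/{service}/aws4_request";

    // Query parameters, already in alphabetical order as the canonical form requires
    var canonicalQueryString =
        "X-Amz-Algorithm=AWS4-HMAC-SHA256" +
        "&X-Amz-Credential=" + Uri.EscapeDataString($"{accessKey}/{credentialScope}") +
        "&X-Amz-Date=" + amzDate +
        "&X-Amz-Expires=" + expiresSeconds +
        "&X-Amz-SignedHeaders=host";

    // Presigned URLs use UNSIGNED-PAYLOAD instead of a body hash
    var canonicalRequest = $"GET\n{canonicalUri}\n{canonicalQueryString}\nhost:{host}\n\nhost\nUNSIGNED-PAYLOAD";
    var stringToSign = $"AWS4-HMAC-SHA256\n{amzDate}\n{credentialScope}\n{HashSHA256(canonicalRequest)}";
    var signature = ToHexString(HmacSHA256(GetSignatureKey(secretKey, dateStamp, region, service), stringToSign));

    return $"https://{host}{canonicalUri}?{canonicalQueryString}&X-Amz-Signature={signature}";
}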

Final Thoughts
If you’re stuck on .NET Framework 4.8 and running into weird AWS SDK issues — you’re not alone. But you’re not stuck either. Dropping the SDK and using HTTP + Signature V4 is entirely viable, especially for simple tasks like uploading/downloading S3 files.

Let me know if you’d like a more complete upload example or presigned URL generator than the sketches above, or if you’re considering migrating to .NET 6+.

Farewell #Skype. Here’s how their #API worked.

So, with the shutdown of Skype in May 2025 only two months away, there is little reason to hold on to our source code for the Skype API. It served us well for years on AvatarAPI.com, but once Skype is shut down the API will inevitably stop working and lose its relevance, even if it remains reachable for a little while afterwards.

In this post, we’ll take a deep dive into a C# implementation of a Skype user search feature using HTTP requests. This code interacts with Skype’s search API to fetch user profiles based on a given search parameter. We’ll break down the core functionality, security considerations, and potential improvements.

Overview of the SkypeSearch Class

The SkypeSearch class provides a static method, Search, which sends a request to Skype’s search API to retrieve user profiles. It uses an authentication token (SkypeToken) and manages retries in case of failures. Let’s explore its components in detail.

Key Features of the Implementation

  1. Configures Security Protocols: The method enables several protocol versions (Ssl3, Tls, Tls11, Tls12) to ensure compatibility with Skype’s API; in practice, modern endpoints negotiate one of the TLS versions.
  2. Custom Headers for Authentication: It constructs an HTTP request with necessary headers, including x-skypetoken, x-skype-client, and others.
  3. Manages Rate Limits & Token Refresh: If the API responds with an empty result (potentially due to a 429 Too Many Requests error), the token is refreshed, and the search is retried up to five times.
  4. Enhances API Response: The method modifies the API response to include an additional avatarImageUrl field for each result.

Breaking Down the Search Method

Constructing the API Request

var requestNumber = new Random().Next(100000, 999999);
var url = string.Format(
    "https://search.skype.com/v2.0/search?searchString={0}&requestId={1}&locale=en-GB&sessionId={2}",
    searchParameter, requestNumber, Guid.NewGuid());

This snippet constructs the API request URL with dynamic query parameters, including:

  • searchString: The user input for searching Skype profiles.
  • requestId: A randomly generated request ID for uniqueness.
  • sessionId: A newly generated GUID for session tracking.

Setting HTTP Headers

HTTPHeaderHandler wicket = nvc =>
{
    var nvcSArgs = new NameValueCollection
    {
        {"x-skypetoken", token.Value},
        {"x-skype-client", "1418/8.134.0.202"},
        {"Origin", "https://web.skype.com"}
    };
    return nvcSArgs;
};

Here, we define essential request headers for authentication and compatibility. The x-skypetoken is a crucial element, as it ensures access to Skype’s search API.

Handling API Responses & Retrying on Failure

if (jsonResponse == "")
{
    token = new SkypeToken();
    return Search(searchParameter, token, ++maxRecursion);
}

If an empty response is received (potentially due to an API rate limit), the method refreshes the authentication token and retries the request up to five times to prevent excessive loops.

Enhancing API Response with Profile Avatars

foreach (var node in jResponse["results"])
{
    var skypeId = node["nodeProfileData"]["skypeId"] + "";
    var avatarImageUrl = string.Format(
        "https://avatar.skype.com/v1/avatars/{0}/public?size=l",
        skypeId);
    node["nodeProfileData"]["avatarImageUrl"] = avatarImageUrl;
}

After receiving the API response, the code iterates through the user results and appends an avatarImageUrl field using Skype’s avatar service. The complete SkypeSearch class is shown below:

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;
using Newtonsoft.Json.Linq;

namespace SkypeGraph
{
    public class SkypeSearch
    {
        public static JObject Search(string searchParameter, SkypeToken token, int maxRecursion = 0)
        {
            if (maxRecursion == 5) throw new Exception("Preventing excessive retries");
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3 |
                                                   SecurityProtocolType.Tls |
                                                   SecurityProtocolType.Tls11 |
                                                   SecurityProtocolType.Tls12;
            var requestNumber = new Random().Next(100000, 999999);
            var url = string.Format("https://search.skype.com/v2.0/search?searchString={0}&requestId={1}&locale=en-GB&sessionId={2}", searchParameter, requestNumber, Guid.NewGuid());
            var http = new HTTPRequest {Encoder = Encoding.UTF8};
            HTTPHeaderHandler wicket = nvc =>
            {
                var nvcSArgs = new NameValueCollection
                {
                    {"x-skypetoken", token.Value},
                    {"x-skypegraphservicesettings", ""},
                    {"x-skype-client","1418/8.134.0.202"},
                    {"x-ecs-etag", "GAx0SLim69RWpjmJ9Dpc4QBHAou0pY//fX4AZ9JVKU4="},
                    {"Origin", "https://web.skype.com"}
                };
                return nvcSArgs;
            };
            http.OverrideUserAgent =
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36";
            http.OverrideAccept = "application/json";
            http.TimeOut = TimeSpan.FromSeconds(5);
            http.HeaderHandler = wicket;
            http.ContentType = "application/json";
            http.Referer = "https://web.skype.com/";
            var jsonResponse = http.Request(url);
            if (jsonResponse == "")
            {
                // In case of a 429 (Too many requests), then refresh the token.
                token = new SkypeToken();
                return Search(searchParameter, token, ++maxRecursion);
            }
            var jResponse = JObject.Parse(jsonResponse);
            #region sample
            /*
             {
                   "requestId":"240120",
                   "results":[
                      {
                         "nodeProfileData":{
                            "skypeId":"live:octavioaparicio_jr",
                            "skypeHandle":"live:octavioaparicio_jr",
                            "name":"octavio aparicio",
                            "avatarUrl":"https://api.skype.com/users/live:octavioaparicio_jr/profile/avatar",
                            "country":"Mexico",
                            "countryCode":"mx",
                            "contactType":"Skype4Consumer"
                         }
                      }
                   ]
                }
             */
            #endregion
            foreach (var node in jResponse["results"])
            {
                var skypeId = node["nodeProfileData"]["skypeId"] + "";
                var avatarImageUrl = string.Format("https://avatar.skype.com/v1/avatars/{0}/public?size=l", skypeId);
                node["nodeProfileData"]["avatarImageUrl"] = avatarImageUrl;
            }
            return jResponse;
        }
    }
}
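
As a quick usage sketch (assuming, as in the code above, that SkypeToken has a parameterless constructor that fetches a fresh token and exposes it via Value):

var token = new SkypeToken();                        // acquire a fresh x-skypetoken
var results = SkypeSearch.Search("octavio", token);  // JObject of results, each with avatarImageUrl added
Console.WriteLine(results.ToString());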

Resolving Unauthorized Error When Deploying an #Azure Function via #ZipDeploy

Deploying an Azure Function to an App Service can sometimes result in an authentication error, preventing successful publishing. One common error developers encounter is:

Error: The attempt to publish the ZIP file through https://<function-name>.scm.azurewebsites.net/api/zipdeploy failed with HTTP status code Unauthorized.

This error typically occurs when the deployment process lacks the necessary authentication permissions to publish to Azure. Below, we outline the steps to resolve this issue by enabling SCM Basic Auth Publishing in the Azure Portal.

Understanding the Issue

The error indicates that Azure is rejecting the deployment request due to authentication failure. This often happens when the SCM (Kudu) deployment service does not have the correct permissions enabled, preventing the publishing process from proceeding.

Solution: Enable SCM Basic Auth Publishing

To resolve this issue, follow these steps:

  1. Open the Azure Portal and navigate to your Function App.
  2. In the left-hand menu, select Configuration.
  3. Under the General settings tab, locate SCM Basic Auth Publishing.
  4. Toggle the setting to On.
  5. Click Save and restart the Function App if necessary.

Once this setting is enabled, retry the deployment from Visual Studio or your chosen deployment method. The unauthorized error should now be resolved.

Additional Considerations

  • Use Deployment Credentials: If you prefer not to enable SCM Basic Auth, consider setting up deployment credentials under Deployment Center → FTPS credentials.
  • Check Azure Authentication in Visual Studio: Ensure that you are logged into the correct Azure account in Visual Studio under Tools → Options → Azure Service Authentication.
  • Use Azure CLI for Deployment: If problems persist, try deploying with the Azure CLI:

az functionapp deployment source config-zip \
    --resource-group <resource-group> \
    --name <function-app-name> \
    --src <zip-file-path>

By enabling SCM Basic Auth Publishing, you ensure that Azure’s deployment service can authenticate and process your function’s updates smoothly. This quick fix saves time and prevents unnecessary troubleshooting steps.


Obtaining an Access Token for Outlook Web Access (#OWA) Using a Consumer Account

If you need programmatic access to Outlook Web Access (OWA) using a Microsoft consumer account (e.g., an Outlook.com, Hotmail, or Live.com email), you can obtain an access token using the Microsoft Authentication Library (MSAL). The following C# code demonstrates how to authenticate a consumer account and retrieve an access token.

Prerequisites

To run this code successfully, ensure you have:

  • .NET installed
  • The Microsoft.Identity.Client NuGet package
  • A registered application in the Microsoft Entra ID (formerly Azure AD) portal with the necessary API permissions

Code Breakdown

The following code authenticates a user using the device code flow, which is useful for scenarios where interactive login via a browser is required but the application does not have direct access to a web interface.

1. Define Authentication Metadata

var authMetadata = new
{
    ClientId = "9199bf20-a13f-4107-85dc-02114787ef48", // Application (client) ID
    Tenant = "consumers", // Target consumer accounts (not work/school accounts)
    Scope = "service::outlook.office.com::MBI_SSL openid profile offline_access"
};
  • ClientId: Identifies the application in Microsoft Entra ID.
  • Tenant: Set to consumers to restrict authentication to personal Microsoft accounts.
  • Scope: Defines the permissions the application is requesting. In this case:
    • service::outlook.office.com::MBI_SSL is required to access Outlook services.
    • openid, profile, and offline_access allow authentication and token refresh.

2. Configure the Authentication Application

var app = PublicClientApplicationBuilder
    .Create(authMetadata.ClientId)
    .WithAuthority($"https://login.microsoftonline.com/{authMetadata.Tenant}")
    .Build();
  • PublicClientApplicationBuilder is used to create a public client application that interacts with Microsoft identity services.
  • .WithAuthority() specifies that authentication should occur against Microsoft’s login endpoint for consumer accounts.

3. Initiate the Device Code Flow

var scopes = new string[] { authMetadata.Scope };

var result = await app.AcquireTokenWithDeviceCode(scopes, deviceCodeResult =>
{
    Console.WriteLine(deviceCodeResult.Message); // Display login instructions
    return Task.CompletedTask;
}).ExecuteAsync();
  • AcquireTokenWithDeviceCode() initiates authentication using a device code.
  • The deviceCodeResult.Message provides instructions to the user on how to authenticate (typically directing them to https://microsoft.com/devicelogin).
  • Once the user completes authentication, the application receives an access token.

4. Retrieve and Display the Access Token

Console.WriteLine($"Access Token: {result.AccessToken}");
  • The retrieved token can now be used to make API calls to Outlook Web Access services.

5. Handle Errors

catch (MsalException ex)
{
    Console.WriteLine($"Authentication failed: {ex.Message}");
}
  • MsalException handles authentication errors, such as incorrect permissions or expired tokens.
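
Putting the pieces together, a complete console program might look like the sketch below. The async Main wrapper and the using directives are additions for illustration; the rest mirrors the snippets above:

using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class Program
{
    static async Task Main()
    {
        var authMetadata = new
        {
            ClientId = "9199bf20-a13f-4107-85dc-02114787ef48", // Application (client) ID
            Tenant = "consumers",                              // Personal Microsoft accounts only
            Scope = "service::outlook.office.com::MBI_SSL openid profile offline_access"
        };

        var app = PublicClientApplicationBuilder
            .Create(authMetadata.ClientId)
            .WithAuthority($"https://login.microsoftonline.com/{authMetadata.Tenant}")
            .Build();

        var scopes = new[] { authMetadata.Scope };

        try
        {
            // Device code flow: the console prints a URL and a code for the user to sign in with
            var result = await app.AcquireTokenWithDeviceCode(scopes, deviceCodeResult =>
            {
                Console.WriteLine(deviceCodeResult.Message);
                return Task.CompletedTask;
            }).ExecuteAsync();

            Console.WriteLine($"Access Token: {result.AccessToken}");
        }
        catch (MsalException ex)
        {
            Console.WriteLine($"Authentication failed: {ex.Message}");
        }
    }
}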

Running the Code

  1. Compile and run the program.
  2. Follow the login instructions displayed in the console.
  3. After signing in, the access token will be printed.
  4. Use the token in HTTP requests to Outlook Web Access APIs.

Conclusion

This code provides a straightforward way to obtain an access token for Outlook Web Access using a consumer account. The device code flow is particularly useful for command-line applications or scenarios where interactive authentication via a browser is required.