
Benchmarking reCAPTCHA v3 Solver Services: Speed vs Quality Analysis

When implementing automated systems that need to solve reCAPTCHA v3 challenges, choosing the right solver service can significantly impact both your success rate and operational costs. We conducted a comprehensive benchmark test of five popular reCAPTCHA v3 solving services to compare their performance in terms of both speed and quality scores.

The Results

We tested five major captcha solving services: CapSolver, 2Captcha, AntiCaptcha, NextCaptcha, and DeathByCaptcha. Each service was evaluated both with and without residential proxy support (using Decodo residential proxies).

Speed Performance Rankings

Fastest to Slowest (without proxy):

  1. CapSolver – 3,383ms (3.4 seconds)
  2. NextCaptcha – 6,725ms (6.7 seconds)
  3. DeathByCaptcha – 16,212ms (16.2 seconds)
  4. AntiCaptcha – 17,069ms (17.1 seconds)
  5. 2Captcha – 36,149ms (36.1 seconds)

With residential proxy:

  1. CapSolver – 5,101ms (5.1 seconds)
  2. DeathByCaptcha – 10,861ms (10.9 seconds)
  3. NextCaptcha – 10,875ms (10.9 seconds)
  4. 2Captcha – 25,749ms (25.7 seconds)
  5. AntiCaptcha – Failed (task type not supported with proxy)

Quality Score Results

Here’s where the results become particularly interesting: all services that successfully completed the challenge returned identical scores of 0.10. This uniformly low score across all providers suggests we’re observing a fundamental characteristic of how these services interact with Google’s reCAPTCHA v3 system rather than differences in solver quality.

What Do These Results Tell Us?

1. The Score Mystery

A reCAPTCHA v3 score of 0.10 is at the very bottom of Google’s scoring range (0.0-1.0), indicating that Google’s system detected these tokens as very likely originating from bots. This consistent result across all five services reveals several important insights:

Why such low scores?

  • reCAPTCHA v3 uses machine learning trained on actual site traffic patterns
  • Without established traffic history, the system defaults to suspicious scores
  • Commercial solver services are inherently detectable by Google’s sophisticated fingerprinting
  • The test environment may lack the organic traffic patterns needed for v3 to generate higher scores

As mentioned in our research, CleanTalk found that reCAPTCHA v3 often returns consistent scores in test environments without production traffic. The system needs time to “learn” what normal traffic looks like for a given site before it can effectively differentiate between humans and bots.

2. Speed is the Real Differentiator

Since all services returned the same quality score, speed becomes the primary differentiator:

CapSolver emerged as the clear winner, solving challenges in just 3.4 seconds without proxy and 5.1 seconds with proxy. This represents a 10x speed advantage over the slowest service (2Captcha at 36 seconds).

NextCaptcha came in second place with respectable times of 6.7 seconds (no proxy) and 10.9 seconds (with proxy), making it a solid middle-ground option.

DeathByCaptcha and AntiCaptcha performed similarly at around 16-17 seconds without proxy, though AntiCaptcha failed to support proxy-based solving for this captcha type.

2Captcha was significantly slower at 36 seconds without proxy, though it did improve to 25.7 seconds with proxy enabled.

3. Proxy Support Variations

Proxy support proved inconsistent across services:

  • Most services handled proxies well, with CapSolver, NextCaptcha, DeathByCaptcha, and 2Captcha all successfully completing challenges through residential proxies
  • AntiCaptcha failed with proxy, returning an “ERROR_TASK_NOT_SUPPORTED” error, suggesting their proxy-based reCAPTCHA v3 implementation may have limitations
  • Proxy impact on speed varied: Some services (2Captcha) were faster with proxy, while others (CapSolver, NextCaptcha) were slower

4. Success Rates

All services except AntiCaptcha (with proxy) achieved 100% success rates, meaning they reliably returned valid tokens. However, the validity of a token doesn’t correlate with its quality score—all tokens were valid but all received low scores from Google.

Practical Implications

For High-Volume Operations

If you’re processing thousands of captchas daily, CapSolver’s 3-5 second solve time provides a massive throughput advantage. At scale, this speed difference translates to:

  • Processing 1,000 captchas with CapSolver: ~56 minutes
  • Processing 1,000 captchas with 2Captcha: ~10 hours
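
The arithmetic above can be checked with a quick sketch (strictly sequential solving assumed; a real pipeline would run solves concurrently and finish sooner):

```python
# Back-of-the-envelope throughput from the measured no-proxy solve times.
solve_ms = {
    "CapSolver": 3383,
    "2Captcha": 36149,
}

def minutes_for(n: int, service: str) -> float:
    """Wall-clock minutes to solve n captchas back to back."""
    return n * solve_ms[service] / 1000 / 60

print(round(minutes_for(1000, "CapSolver")))      # ≈ 56 minutes
print(round(minutes_for(1000, "2Captcha") / 60))  # ≈ 10 hours
```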

For Quality-Sensitive Applications

The uniform 0.10 scores reveal a hard truth: commercial reCAPTCHA v3 solvers may not produce high-quality tokens that pass strict score thresholds. If your target site requires scores above 0.5 or 0.7, these services may not be suitable regardless of which one you choose.

Cost Considerations

Since all services returned the same quality, cost-per-solve becomes the tiebreaker alongside speed:

  • CapSolver: ~$1.00 per 1,000 solves
  • 2Captcha: ~$2.99 per 1,000 solves
  • AntiCaptcha: ~$2.00 per 1,000 solves

CapSolver offers the best speed-to-cost ratio in this comparison.

The Bigger Picture: reCAPTCHA v3 Limitations

These results illuminate a broader challenge with reCAPTCHA v3 solver services. Google’s v3 system is fundamentally different from v2:

  • v2 presented challenges that could be solved by humans or AI
  • v3 analyzes behavior patterns, browser fingerprints, and site-specific traffic history

Commercial solvers can generate valid tokens, but those tokens carry telltale signatures that Google’s machine learning readily identifies. The consistently low scores suggest that Google has effective detection mechanisms for solver-generated traffic.

When Might Scores Improve?

Based on research and documentation:

  1. Production environments with real organic traffic may see better scores
  2. Time – letting reCAPTCHA v3 “train” on a site for days or weeks
  3. Mixed traffic – solver tokens mixed with legitimate user traffic
  4. Residential proxies – though our test showed this alone doesn’t improve scores

Conclusions and Recommendations

If Speed Matters Most

Choose CapSolver. Its 3-5 second solve times are unmatched, and at $1 per 1,000 solves, it’s also the most cost-effective option.

If You Need Proxy Support

Avoid AntiCaptcha for proxy-based v3 solving. CapSolver, NextCaptcha, and DeathByCaptcha all handled residential proxies successfully.

If Quality Scores Matter

Reconsider using solver services entirely. The uniform 0.10 scores suggest that commercial solvers may not be suitable for sites with strict score requirements. Consider alternative approaches:

  • Browser automation with real user simulation
  • Residential proxy networks with actual human solvers
  • Challenging whether reCAPTCHA v3 is the right solution for your use case

The Bottom Line

For raw performance in a test environment, CapSolver dominated with the fastest solve times and lowest cost. However, the universal 0.10 quality scores across all services reveal that speed and cost may be moot points if your application requires high-quality scores that pass Google’s bot detection.

The real takeaway? reCAPTCHA v3 is doing its job—it successfully identifies solver-generated tokens regardless of which service you use. If you need high scores, you’ll need more sophisticated approaches than simply purchasing tokens from commercial solving services.


This benchmark was conducted in January 2026 using production API credentials for all services. Tests were performed with both direct connections and residential proxy infrastructure. Individual results may vary based on site configuration, traffic patterns, and Google’s evolving detection systems.

Migrating Google Cloud Run to Scaleway: Bringing Your Cloud Infrastructure Back to Europe


Introduction: Why European Cloud Sovereignty Matters Now More Than Ever

In an era of increasing geopolitical tensions, data sovereignty concerns, and evolving international relations, European companies are reconsidering their dependence on US-based cloud providers. The EU’s growing emphasis on digital sovereignty, combined with uncertainties around US data access laws like the CLOUD Act and recent political developments, has made many businesses uncomfortable with storing sensitive data on American infrastructure.

For EU-based companies running containerized workloads on Google Cloud Run, there’s good news: migrating to European alternatives like Scaleway is surprisingly straightforward. This guide will walk you through the technical process of moving your Cloud Run services to Scaleway’s Serverless Containers—keeping your applications running while bringing your infrastructure back under European jurisdiction.

Why Scaleway?

Scaleway, a French cloud provider founded in 1999, offers a compelling alternative to Google Cloud Run:

  • 🇪🇺 100% European: All data centers located in France, Netherlands, and Poland
  • 📜 GDPR Native: Built from the ground up with European data protection in mind
  • 💰 Transparent Pricing: No hidden costs, generous free tiers, and competitive rates
  • 🔒 Data Sovereignty: Your data never leaves EU jurisdiction
  • ⚡ Scale-to-Zero: Just like Cloud Run, pay only for actual usage
  • 🌱 Environmental Leadership: Strong commitment to sustainable cloud infrastructure

Most importantly: Scaleway Serverless Containers are technically equivalent to Google Cloud Run. Both are built on Knative, meaning your containers will run identically on both platforms.

Prerequisites

Before starting, ensure you have:

  • An existing Google Cloud Run service
  • Windows machine with PowerShell
  • gcloud CLI installed and authenticated
  • A Scaleway account (free to create)
  • Skopeo installed (we’ll cover this)

Understanding the Architecture

Both Google Cloud Run and Scaleway Serverless Containers work the same way:

  1. You provide a container image
  2. The platform runs it on-demand via HTTPS endpoints
  3. It scales automatically (including to zero when idle)
  4. You pay only for execution time

The migration process is simply:

  1. Copy your container image from Google’s registry to Scaleway’s registry
  2. Deploy it as a Scaleway Serverless Container
  3. Update your DNS/endpoints

No code changes required—your existing .NET, Node.js, Python, Go, or any other containerized application works as-is.

Step 1: Install Skopeo (Lightweight Docker Alternative)

Since we’re on Windows and don’t want to run full Docker Desktop, we’ll use Skopeo—a lightweight tool designed specifically for copying container images between registries.

Install via winget:

powershell

winget install RedHat.Skopeo

Or download directly from: https://github.com/containers/skopeo/releases

Why Skopeo?

  • No daemon required: No background services consuming resources
  • Direct registry-to-registry transfer: Images never touch your local disk
  • Minimal footprint: ~50MB vs. several GB for Docker Desktop
  • Perfect for CI/CD: Designed for automation and registry operations

Configure Skopeo’s Trust Policy

Skopeo requires a policy file to determine which registries to trust. Create it:

powershell

# Create the config directory
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.config\containers"

# Create a permissive policy that trusts all registries
@"
{
  "default": [
    { "type": "insecureAcceptAnything" }
  ],
  "transports": {
    "docker-daemon": {
      "": [{ "type": "insecureAcceptAnything" }]
    }
  }
}
"@ | Out-File -FilePath "$env:USERPROFILE\.config\containers\policy.json" -Encoding utf8

For production environments, you might want a more restrictive policy that only trusts specific registries:

powershell

@"
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker": {
      "gcr.io": [{ "type": "insecureAcceptAnything" }],
      "europe-west2-docker.pkg.dev": [{ "type": "insecureAcceptAnything" }],
      "rg.fr-par.scw.cloud": [{ "type": "insecureAcceptAnything" }]
    }
  }
}
"@ | Out-File -FilePath "$env:USERPROFILE\.config\containers\policy.json" -Encoding utf8

Step 2: Find Your Cloud Run Container Image

Your Cloud Run service uses a specific container image. To find it:

Via gcloud CLI (recommended):

bash

gcloud run services describe YOUR-SERVICE-NAME \
--region=YOUR-REGION \
--project=YOUR-PROJECT \
--format='value(spec.template.spec.containers[0].image)'
This returns the full image URL, something like:
europe-west2-docker.pkg.dev/your-project/cloud-run-source-deploy/your-service@sha256:abc123...

Via Google Cloud Console:

  1. Navigate to Cloud Run in the console
  2. Click your service
  3. Go to the “Revisions” tab
  4. Look for “Container image URL”

The @sha256:... digest is important—it ensures you’re copying the exact image currently running in production.
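For scripting, the digest can be split off the image reference. This illustrative snippet uses a placeholder reference like the one above:

```python
# Split a pinned image reference into repository and digest (placeholder values).
ref = ("europe-west2-docker.pkg.dev/your-project/"
       "cloud-run-source-deploy/your-service@sha256:abc123")

repo, _, digest = ref.partition("@")
print(digest.startswith("sha256:"))  # True -- pinned by digest, not a mutable tag
```

Copying by digest guarantees you migrate the exact bytes serving production traffic, whereas a :latest tag can move between deployments.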

Step 3: Set Up Scaleway Container Registry

Create a Scaleway Account

  1. Sign up at https://console.scaleway.com/
  2. Complete email verification
  3. Navigate to the console

Create a Container Registry Namespace

  1. Go to ContainersContainer Registry
  2. Click Create namespace
  3. Choose a region (Paris, Amsterdam, or Warsaw)
    • Important: Choose the same region where you’ll deploy your containers
  4. Enter a namespace name (e.g., my-containers, production)
    • Must be unique within that region
    • Lowercase, numbers, and hyphens only
  5. Set Privacy to Private
  6. Click Create

Your registry URL will be: rg.fr-par.scw.cloud/your-namespace

Create API Credentials

  1. Click your profile → API Keys (or visit https://console.scaleway.com/iam/api-keys)
  2. Click Generate API Key
  3. Give it a name (e.g., “container-migration”)
  4. Save the Secret Key securely—it’s only shown once
  5. Note both the Access Key and Secret Key

Step 4: Copy Your Container Image

Now comes the magic—copying your container directly from Google to Scaleway without downloading it locally.

Authenticate and Copy:

powershell

# Set your Scaleway secret key as environment variable (more secure)
$env:SCW_SECRET_KEY = "your-scaleway-secret-key-here"
# Copy the image directly between registries
skopeo copy `
--src-creds="oauth2accesstoken:$(gcloud auth print-access-token)" `
--dest-creds="nologin:$env:SCW_SECRET_KEY" `
docker://europe-west2-docker.pkg.dev/your-project/cloud-run-source-deploy/your-service@sha256:abc123... `
docker://rg.fr-par.scw.cloud/your-namespace/your-service:latest
What’s Happening:

  • --src-creds: Authenticates with Google using your gcloud session
  • --dest-creds: Authenticates with Scaleway using your API key
  • Source URL: Your Google Artifact Registry image
  • Destination URL: Your Scaleway Container Registry

The transfer happens directly between registries—your Windows machine just orchestrates it. Even a multi-GB container copies in minutes.

Verify the Copy:

  1. Go to https://console.scaleway.com/registry/namespaces
  2. Click your namespace
  3. You should see your service image listed with the latest tag

Step 5: Deploy to Scaleway Serverless Containers

Create a Serverless Container Namespace:

  1. Navigate to ContainersServerless Containers
  2. Click Create namespace
  3. Choose the same region as your Container Registry
  4. Give it a name (e.g., production-services)
  5. Click Create

Deploy Your Container:

  1. Click Create container
  2. Image source: Select “Scaleway Container Registry”
  3. Choose your namespace and image
  4. Configuration:
    • Port: Set to the port your app listens on (usually 8080 for Cloud Run apps)
    • Environment variables: Copy any env vars from Cloud Run
    • Resources: Memory: start with what you used in Cloud Run; vCPU: 0.5-1 vCPU is typical
    • Scaling: Min scale: 0 (enables scale-to-zero, just like Cloud Run); Max scale: set based on expected traffic (e.g., 10)
  5. Click Deploy container

Get Your Endpoint:

After deployment (1-2 minutes), you’ll receive an HTTPS endpoint:

https://your-container-namespace-xxxxx.functions.fnc.fr-par.scw.cloud

This is your public API endpoint—no API Gateway needed, SSL included for free.

Step 6: Test Your Service

powershell

# Test the endpoint
Invoke-WebRequest -Uri "https://your-container-url.functions.fnc.fr-par.scw.cloud/your-endpoint"

Your application should respond identically to how it did on Cloud Run.

Understanding the Cost Comparison

Google Cloud Run Pricing (Typical):

  • vCPU: $0.00002400/vCPU-second
  • Memory: $0.00000250/GB-second
  • Requests: $0.40 per million
  • Plus: API Gateway, Load Balancer, or other routing costs

Scaleway Serverless Containers:

  • vCPU: €0.00001/vCPU-second (€1.00 per 100k vCPU-s)
  • Memory: €0.000001/GB-second (€0.10 per 100k GB-s)
  • Requests: Free (no per-request charges)
  • HTTPS endpoint: Free (included)
  • Free Tier: 200k vCPU-seconds + 400k GB-seconds per month

Example Calculation:

For an API handling 1 million requests/month, 200ms average response time, 1 vCPU, 2GB memory:

Google Cloud Run:

  • vCPU: 1M × 0.2s × $0.000024 = $4.80
  • Memory: 1M × 0.2s × 2GB × $0.0000025 = $1.00
  • Requests: 1M × $0.0000004 = $0.40
  • Total: ~$6.20/month

Scaleway:

  • vCPU: 200k vCPU-s → Free (within free tier)
  • Memory: 400k GB-s → Free (within free tier)
  • Total: €0.00/month

Even beyond free tiers, Scaleway is typically 30-50% cheaper, with no surprise charges.
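
The example numbers above can be reproduced directly (rates as quoted in this post; treat them as illustrative rather than current list prices):

```python
# 1M requests/month, 200 ms average response, 1 vCPU, 2 GB memory.
requests = 1_000_000
exec_seconds = requests * 0.2          # 200,000 vCPU-seconds total

# Google Cloud Run (quoted rates)
gcp_total = (
    exec_seconds * 0.000024            # vCPU-seconds
    + exec_seconds * 2 * 0.0000025     # GB-seconds (2 GB)
    + requests / 1_000_000 * 0.40      # per-million request fee
)
print(round(gcp_total, 2))             # 6.2

# Scaleway: is the usage within the free tier (200k vCPU-s, 400k GB-s)?
vcpu_s, gb_s = exec_seconds, exec_seconds * 2
print(vcpu_s <= 200_000 and gb_s <= 400_000)  # True -> €0.00
```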

Key Differences to Be Aware Of

Similarities (Good News):

  • ✅ Both use Knative under the hood
  • ✅ Both support HTTP, HTTP/2, WebSocket, gRPC
  • ✅ Both scale to zero automatically
  • ✅ Both provide HTTPS endpoints
  • ✅ Both support custom domains
  • ✅ Both integrate with monitoring/logging

Differences:

  • Cold start: Scaleway takes ~2-5 seconds (similar to Cloud Run)
  • Idle timeout: Scaleway scales to zero after 15 minutes of inactivity (Cloud Run’s idle behavior varies by configuration)
  • Regions: Limited to EU (Paris, Amsterdam, Warsaw) vs. Google’s global presence
  • Ecosystem: Smaller ecosystem than GCP (but rapidly growing)

When Scaleway Makes Sense:

  • ✅ Your primary users/customers are in Europe
  • ✅ GDPR compliance is critical
  • ✅ You want to avoid US jurisdiction over your data
  • ✅ You prefer transparent, predictable pricing
  • ✅ You don’t need GCP-specific services (BigQuery, etc.)

When to Consider Carefully:

  • ⚠️ You need global edge distribution (though you can use CDN)
  • ⚠️ You’re heavily integrated with other GCP services
  • ⚠️ You need GCP’s machine learning services
  • ⚠️ Your customers are primarily in Asia/Americas

Additional Migration Considerations

Environment Variables and Secrets:

Scaleway offers Secret Manager integration. Copy your Cloud Run secrets:

  1. Go to Secret Manager in Scaleway
  2. Create secrets matching your Cloud Run environment variables
  3. Reference them in your container configuration

Custom Domains:

Both platforms support custom domains. In Scaleway:

  1. Go to your container settings
  2. Add custom domain
  3. Update your DNS CNAME to point to Scaleway’s endpoint
  4. SSL is handled automatically

Databases and Storage:

If you’re using Cloud SQL or Cloud Storage:

  • Databases: Consider Scaleway’s Managed PostgreSQL/MySQL or Serverless SQL Database
  • Object Storage: Scaleway Object Storage is S3-compatible
  • Or: Keep using GCP services (cross-cloud is possible, but adds latency)

Monitoring and Logging:

Scaleway provides Cockpit (based on Grafana):

  • Automatic logging for all Serverless Containers
  • Pre-built dashboards
  • Integration with alerts and metrics
  • Similar to Cloud Logging/Monitoring

The Broader Picture: European Digital Sovereignty

This migration isn’t just about cost savings or technical features—it’s about control.

Why EU Companies Are Moving:

  1. Legal Protection: GDPR protections are stronger when data never leaves EU jurisdiction
  2. Political Risk: Reduces exposure to US government data requests under CLOUD Act
  3. Supply Chain Resilience: Diversification away from Big Tech dependency
  4. Supporting European Tech: Strengthens the European cloud ecosystem
  5. Future-Proofing: As digital sovereignty regulations increase, early movers are better positioned

The Economic Argument:

Every euro spent with European cloud providers:

  • Stays in the European economy
  • Supports European jobs and innovation
  • Builds alternatives to US/Chinese tech dominance
  • Strengthens Europe’s strategic autonomy

Conclusion: A Straightforward Path to Sovereignty

Migrating from Google Cloud Run to Scaleway Serverless Containers is technically simple—often taking just a few hours for a typical service. The containers are identical, the pricing is competitive, and the operational model is the same.

But beyond the technical benefits, there’s a strategic argument: as a European company, every infrastructure decision is a choice about where your data lives, who has access to it, and which ecosystem you’re supporting.

Scaleway (and other European cloud providers) aren’t perfect replacements for every GCP use case. But for containerized APIs and web services—which represent the majority of Cloud Run workloads—they’re absolutely production-ready alternatives that keep your infrastructure firmly within European jurisdiction.

In 2026’s geopolitical landscape, that’s not just a nice-to-have—it’s increasingly essential.



Have you migrated your infrastructure back to Europe? Share your experience in the comments below.


Google Calendar Privacy Proxy

https://github.com/infiniteloopltd/Google-Calendar-Redactor-Proxy/

A lightweight Google Cloud Run service that creates privacy-protected calendar feeds from Google Calendar. Share your availability with colleagues without exposing personal appointment details.

The Problem

You want to share your calendar availability with work colleagues, but:

  • You have multiple calendars (work, personal, family) that you need to consolidate
  • Google Calendar’s subscribed calendars (ICS feeds) don’t count toward your Outlook free/busy status
  • You don’t want to expose personal appointment details to work contacts
  • Outlook’s native calendar sharing only works with Exchange/Microsoft 365 calendars, not external ICS subscriptions

This service solves that problem by creating a privacy-filtered calendar feed that Outlook can subscribe to, showing you as “Busy” during your appointments without revealing what those appointments are.

How It Works

Google Calendar → This Service → Privacy-Protected ICS Feed → Outlook
   (full details)    (redaction)     (busy blocks only)      (subscription)

The service:

  1. Fetches your Google Calendar ICS feed using the private URL
  2. Strips out all identifying information (titles, descriptions, locations, attendees)
  3. Replaces event summaries with “Busy”
  4. Preserves all timing information (when you’re busy/free)
  5. Returns a sanitized ICS feed that Outlook can subscribe to
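
The redaction step above can be sketched in a few lines. This is an illustrative Python version of the logic (the actual service is .NET 8), and it ignores RFC 5545 line folding for brevity:

```python
# Illustrative redaction pass over a raw ICS feed.
REMOVE = {"DESCRIPTION", "LOCATION", "ORGANIZER", "ATTENDEE", "URL", "ATTACH"}

def redact_ics(ics_text: str) -> str:
    out = []
    for line in ics_text.splitlines():
        # Property name is everything before the first ':' or ';' (parameters).
        name = line.split(":", 1)[0].split(";", 1)[0].upper()
        if name in REMOVE:
            continue                      # drop identifying properties
        if name == "SUMMARY":
            out.append("SUMMARY:Busy")    # replace the event title
        elif name == "CLASS":
            out.append("CLASS:PRIVATE")   # force the privacy class
        else:
            out.append(line)              # keep DTSTART, DTEND, RRULE, UID, ...
    return "\r\n".join(out)
```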

Use Cases

  • Multiple calendar consolidation: Combine work, personal, and family calendars into one availability view
  • Privacy-protected sharing: Share when you’re busy without sharing what you’re doing
  • Cross-platform calendaring: Bridge Google Calendar into Outlook environments
  • Professional boundaries: Keep personal life private while showing accurate availability

Quick Start

1. Get Your Google Calendar Private URL

  1. Open Google Calendar
  2. Click the ⚙️ Settings icon → Settings
  3. Select your calendar from the left sidebar
  4. Scroll to “Integrate calendar”
  5. Copy the “Secret address in iCal format” URL

Your URL will look like:

https://calendar.google.com/calendar/ical/info%40infiniteloop.ie/private-xxxxxxx/basic.ics

2. Deploy the Service

# Edit deploy.bat and set your PROJECT_ID
deploy.bat

# Or deploy manually
gcloud run deploy calendar-proxy --source . --platform managed --region europe-west1 --allow-unauthenticated

You’ll get a service URL like: https://calendar-proxy-xxxxxxxxxx-ew.a.run.app

3. Construct Your Privacy-Protected Feed URL

From your Google Calendar URL:

https://calendar.google.com/calendar/ical/info%xxxxx.xxx/private-xxxxxxx/basic.ics

Extract:

  • calendarIdinfo@infiniteloop.ie (URL decoded)
  • privateKeyxxxxxxxxxx (just the key, without “private-” prefix)

Build your proxy URL:

https://calendar-proxy-xxxxxxxxxx-ew.a.run.app/calendar?calendarId=info@infiniteloop.ie&privateKey=xxxxxxx
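
Extracting the two values can be scripted. This sketch uses hypothetical example.com values rather than a real secret address:

```python
from urllib.parse import unquote, urlparse

def proxy_url(google_ics_url: str, service_base: str) -> str:
    """Turn a Google 'secret address' ICS URL into the proxy feed URL."""
    # Path looks like: /calendar/ical/<calendarId>/private-<key>/basic.ics
    parts = urlparse(google_ics_url).path.split("/")
    calendar_id = unquote(parts[3])                    # URL-decoded email
    private_key = parts[4].removeprefix("private-")    # strip the prefix
    return f"{service_base}/calendar?calendarId={calendar_id}&privateKey={private_key}"

url = proxy_url(
    "https://calendar.google.com/calendar/ical/info%40example.com/private-abc123/basic.ics",
    "https://calendar-proxy-xxxxxxxxxx-ew.a.run.app",
)
```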

4. Subscribe in Outlook

Outlook Desktop / Web

  1. Open Outlook
  2. Go to Calendar
  3. Click Add Calendar → Subscribe from web
  4. Paste your proxy URL
  5. Give it a name (e.g., “My Availability”)
  6. Click Import

Outlook will now show:

  • ✅ Blocked time during your appointments
  • ✅ “Busy” status for those times
  • ❌ No details about what the appointments are

What Gets Redacted

The service removes all identifying information:

  • SUMMARY (event title) → replaced with “Busy”
  • DESCRIPTION (event details) → removed
  • LOCATION (where) → removed
  • ORGANIZER (who created it) → removed
  • ATTENDEE (participants) → removed
  • URL (meeting links) → removed
  • ATTACH (attachments) → removed
  • CLASS (privacy) → set to PRIVATE

What Gets Preserved

All timing and scheduling information remains intact:

  • ✅ Event start times (DTSTART)
  • ✅ Event end times (DTEND)
  • ✅ Event duration
  • ✅ Recurring events (RRULE)
  • ✅ Exception dates (EXDATE)
  • ✅ Event status (confirmed, tentative, cancelled)
  • ✅ Time zones
  • ✅ All-day events
  • ✅ Unique identifiers (UID)

Technical Details

Stack: .NET 8 / ASP.NET Core Minimal API
Hosting: Google Cloud Run (serverless)
Cost: Virtually free for personal use (Cloud Run free tier: 2M requests/month)
Latency: ~200-500ms per request (fetches from Google, processes, returns)

API Endpoint

GET /calendar?calendarId={id}&privateKey={key}

Parameters:

  • calendarId (required): Your Google Calendar ID (usually your email)
  • privateKey (required): The private key from your Google Calendar ICS URL

Response:

  • Content-Type: text/calendar; charset=utf-8
  • Body: Privacy-redacted ICS feed

Local Development

# Run locally
dotnet run

# Test
curl "http://localhost:8080/calendar?calendarId=test@example.com&privateKey=abc123"

Deployment

Prerequisites

Deploy

# Option 1: Use the batch file
deploy.bat

# Option 2: Manual deployment
gcloud run deploy calendar-proxy ^
  --source . ^
  --platform managed ^
  --region europe-west1 ^
  --allow-unauthenticated ^
  --memory 512Mi

The --allow-unauthenticated flag is required so that Outlook can fetch your calendar without authentication. Your calendar data is still protected by the private key in the URL.

Security & Privacy

Is This Secure?

Yes, with caveats:

✅ Your calendar data is already protected by Google’s private key mechanism
✅ No data is stored – the service is stateless and doesn’t log calendar contents
✅ HTTPS encrypted – All traffic is encrypted in transit
✅ Minimal attack surface – Simple pass-through service with redaction

⚠️ Considerations:

  • Your private key is in the URL you share (same as Google’s original ICS URL)
  • Anyone with your proxy URL can see your busy/free times (but not details)
  • The service runs as --allow-unauthenticated so Outlook can fetch it
  • If you need stricter access control, consider adding authentication

Privacy Features

  • Strips all personally identifying information
  • Marks all events as CLASS:PRIVATE
  • No logging of calendar contents
  • No data persistence
  • Stateless operation

Recommendations

  • Don’t share your proxy URL publicly
  • Treat it like a password – it grants access to your availability
  • Regenerate your Google Calendar private key if compromised
  • Monitor your Cloud Run logs for unexpected access patterns

Cost Estimation

Google Cloud Run pricing (as of 2025):

  • Free tier: 2M requests/month, 360,000 GB-seconds/month
  • Typical calendar: Refreshes every 30-60 minutes
  • Monthly cost: $0 for personal use (well within free tier)

Even with 10 people subscribing to your calendar refreshing every 30 minutes:

  • ~14,400 requests/month
  • ~$0.00 cost
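
That request estimate checks out:

```python
# 10 subscribers, one refresh every 30 minutes, 30-day month.
subscribers = 10
refreshes_per_day = 24 * 60 // 30     # 48 refreshes per subscriber per day
monthly_requests = subscribers * refreshes_per_day * 30
print(monthly_requests)               # 14400 -- far below the 2M free tier
```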

Troubleshooting

“404 Not Found” when subscribing in Outlook

  • Verify your service is deployed: gcloud run services list
  • Check your URL is correctly formatted
  • Ensure --allow-unauthenticated is set

“Invalid calendar” error

  • Verify your Google Calendar private key is correct
  • Test the URL directly in a browser first
  • Check that your calendarId doesn’t have URL encoding issues

Events not showing up

  • Google Calendar ICS feeds can take 12-24 hours to reflect changes
  • Try re-subscribing to the calendar in Outlook
  • Verify the original Google Calendar ICS URL works

Deployment fails

# Ensure you're authenticated
gcloud auth login

# Set your project
gcloud config set project YOUR_PROJECT_ID

# Enable required APIs
gcloud services enable run.googleapis.com
gcloud services enable cloudbuild.googleapis.com

Limitations

  • Refresh rate: Calendar clients typically refresh ICS feeds every 30-60 minutes (not real-time)
  • Google’s ICS feed: Updates can take up to 24 hours to reflect in the ICS feed
  • Authentication: No built-in authentication (relies on URL secrecy)
  • Multi-calendar: Requires one proxy URL per Google Calendar

Alternatives Considered

  • Native Outlook calendar sharing: built-in and real-time, but only works with Exchange calendars
  • Calendly/Bookings: professional and feature-rich, but adds monthly cost and is overkill for simple availability
  • Manual sync (Zapier/Power Automate): works, but complex setup and ongoing maintenance
  • This solution: simple, free, privacy-focused, but relies on ICS feed delays

Contributing

Contributions welcome! Areas for enhancement:

  •  Add basic authentication support
  •  Support multiple calendars in one feed
  •  Caching layer to reduce Google Calendar API calls
  •  Health check endpoint
  •  Metrics/monitoring
  •  Custom “Busy” text per calendar

License

MIT License – free to use, modify, and distribute.

Author

Created by Infinite Loop Development Ltd to solve a real business need for calendar privacy across platforms. https://github.com/infiniteloopltd/Google-Calendar-Redactor-Proxy/

Controlling Remote Chrome Instances with C# and the Chrome DevTools Protocol

If you’ve ever needed to programmatically interact with a Chrome browser running on a remote server—whether for web scraping, automated testing, or debugging—you’ve probably discovered that it’s not as straightforward as it might seem. In this post, I’ll walk you through how to connect to a remote Chrome instance using C# and the Chrome DevTools Protocol (CDP), with a practical example of retrieving all cookies, including those pesky HttpOnly cookies that JavaScript can’t touch.

Why Remote Chrome Control?

There are several scenarios where controlling a remote Chrome instance becomes invaluable:

  • Server-side web scraping where you need JavaScript rendering but want to keep your scraping infrastructure separate from your application servers
  • Cross-platform testing where you’re developing on Windows but testing on Linux environments
  • Distributed automation where multiple test runners need to interact with centralized browser instances
  • Debugging production issues where you need to inspect cookies, local storage, or network traffic on a live system

The Chrome DevTools Protocol gives us low-level access to everything Chrome can do—and I mean everything. Unlike browser automation tools that work through the DOM, CDP operates at the browser level, giving you access to cookies (including HttpOnly), network traffic, performance metrics, and much more.

The Challenge: Making Chrome Accessible Remotely

Chrome’s remote debugging feature is powerful, but getting it to work remotely involves some Linux networking quirks that aren’t immediately obvious. Let me break down the problem and solution.

The Problem

When you launch Chrome with the --remote-debugging-port flag, even if you specify --remote-debugging-address=0.0.0.0, Chrome often binds only to 127.0.0.1 (localhost). This means you can’t connect to it from another machine.

You can verify this by checking what Chrome is actually listening on:

netstat -tlnp | grep 9222
tcp        0      0 127.0.0.1:9222          0.0.0.0:*               LISTEN      1891/chrome

See that 127.0.0.1? That’s the problem. It should be 0.0.0.0 to accept connections from any interface.

The Solution: socat to the Rescue

The elegant solution is to use socat (SOcket CAT) to proxy connections. We run Chrome on one port (localhost only), and use socat to forward a public-facing port to Chrome’s localhost port.

Here’s the setup on your Linux server:

# Start Chrome on localhost:9223
google-chrome \
  --headless=new \
  --no-sandbox \
  --disable-gpu \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug &

# Use socat to proxy external 9222 to internal 9223
socat TCP-LISTEN:9222,fork,bind=0.0.0.0,reuseaddr TCP:127.0.0.1:9223 &

Now verify it’s working:

netstat -tlnp | grep 9222
tcp        0      0 0.0.0.0:9222            0.0.0.0:*               LISTEN      2103/socat

netstat -tlnp | grep 9223
tcp        0      0 127.0.0.1:9223          0.0.0.0:*               LISTEN      2098/chrome

Perfect! Chrome itself is still listening on localhost only, while socat provides the public-facing interface. That separation also gives you a single place—the socat listener—to firewall or tear down without touching Chrome.

Understanding the Chrome DevTools Protocol

Before we dive into code, let’s understand how CDP works. When Chrome runs with remote debugging enabled, it exposes two types of endpoints:

1. HTTP Endpoints (for discovery)

# Get browser version and WebSocket URL
curl http://your-server:9222/json/version

# Get list of all open pages/targets
curl http://your-server:9222/json

The /json/version endpoint returns something like:

{
   "Browser": "Chrome/143.0.7499.169",
   "Protocol-Version": "1.3",
   "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36...",
   "V8-Version": "14.3.127.17",
   "WebKit-Version": "537.36...",
   "webSocketDebuggerUrl": "ws://your-server:9222/devtools/browser/14706e92-5202-4651-aa97-a72d683bf88e"
}

2. WebSocket Endpoint (for control)

The webSocketDebuggerUrl is what we use to actually control Chrome. All CDP commands flow through this WebSocket connection using a JSON-RPC-like protocol.
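
To make the wire format concrete, here's a quick Python sketch of what those messages look like. Every command is a JSON object with an `id`, a `method` (such as the real CDP method `Network.getAllCookies`), and a `params` object; Chrome's reply carries the same `id` so responses can be matched to requests.

```python
import json

def cdp_message(msg_id, method, params=None):
    """Build a CDP command in the JSON-RPC-like wire format."""
    return json.dumps({"id": msg_id, "method": method, "params": params or {}})

# A command asking Chrome for every cookie it holds
print(cdp_message(1, "Network.getAllCookies"))

# Chrome answers with a message carrying the same id, shaped like this:
sample_reply = '{"id": 1, "result": {"cookies": [{"name": "session_id", "httpOnly": true}]}}'
reply = json.loads(sample_reply)
assert reply["id"] == 1  # replies are matched to commands by id
print([c["name"] for c in reply["result"]["cookies"]])  # ['session_id']
```

Libraries like PuppeteerSharp send exactly these kinds of messages for you, so you rarely need to build them by hand.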

Enter PuppeteerSharp

While you could manually handle WebSocket connections and craft CDP commands by hand (and I’ve done that with libraries like MasterDevs.ChromeDevTools), there’s an easier way: PuppeteerSharp.

PuppeteerSharp is a .NET port of Google’s Puppeteer library, providing a high-level API over CDP. The beauty is that it handles all the WebSocket plumbing, message routing, and protocol intricacies for you.

Here’s our complete C# application:

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using PuppeteerSharp;

namespace ChromeRemoteDebugDemo
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // Configuration
            string remoteDebugHost = "xxxx.xxx.xxx.xxx";
            int remoteDebugPort = 9222;
            
            Console.WriteLine("=== Chrome Remote Debug - Cookie Retrieval Demo ===\n");
            
            try
            {
                // Step 1: Get the WebSocket URL from Chrome
                Console.WriteLine($"Connecting to http://{remoteDebugHost}:{remoteDebugPort}/json/version");
                
                using var httpClient = new HttpClient();
                string versionUrl = $"http://{remoteDebugHost}:{remoteDebugPort}/json/version";
                string jsonResponse = await httpClient.GetStringAsync(versionUrl);
                
                // Parse JSON to get webSocketDebuggerUrl
                using JsonDocument doc = JsonDocument.Parse(jsonResponse);
                JsonElement root = doc.RootElement;
                string webSocketUrl = root.GetProperty("webSocketDebuggerUrl").GetString();
                
                Console.WriteLine($"WebSocket URL: {webSocketUrl}\n");
                
                // Step 2: Connect to Chrome using PuppeteerSharp
                Console.WriteLine("Connecting to Chrome via WebSocket...");
                
                var connectOptions = new ConnectOptions
                {
                    BrowserWSEndpoint = webSocketUrl
                };
                
                var browser = await Puppeteer.ConnectAsync(connectOptions);
                Console.WriteLine("Successfully connected!\n");
                
                // Step 3: Get or create a page
                var pages = await browser.PagesAsync();
                IPage page;
                
                if (pages.Length > 0)
                {
                    page = pages[0];
                    Console.WriteLine($"Using existing page: {page.Url}");
                }
                else
                {
                    page = await browser.NewPageAsync();
                    await page.GoToAsync("https://example.com");
                }
                
                // Step 4: Get ALL cookies (including HttpOnly!)
                Console.WriteLine("\nRetrieving all cookies...\n");
                var cookies = await page.GetCookiesAsync();
                
                Console.WriteLine($"Found {cookies.Length} cookie(s):\n");
                
                foreach (var cookie in cookies)
                {
                    Console.WriteLine($"Name:     {cookie.Name}");
                    Console.WriteLine($"Value:    {cookie.Value}");
                    Console.WriteLine($"Domain:   {cookie.Domain}");
                    Console.WriteLine($"Path:     {cookie.Path}");
                    Console.WriteLine($"Secure:   {cookie.Secure}");
                    Console.WriteLine($"HttpOnly: {cookie.HttpOnly}");  // ← This is the magic!
                    Console.WriteLine($"SameSite: {cookie.SameSite}");
                    Console.WriteLine($"Expires:  {(cookie.Expires == -1 ? "Session" : DateTimeOffset.FromUnixTimeSeconds((long)cookie.Expires).ToString())}");
                    Console.WriteLine(new string('-', 80));
                }
                
                await browser.DisconnectAsync();
                Console.WriteLine("\nDisconnected successfully.");
                
            }
            catch (Exception ex)
            {
                Console.WriteLine($"\n❌ ERROR: {ex.Message}");
            }
        }
    }
}

The Key Insight: HttpOnly Cookies

Here’s what makes this approach powerful: page.GetCookiesAsync() returns every cookie visible to the current page’s URL, including HttpOnly ones.

In a normal web page, JavaScript cannot access HttpOnly cookies—that’s the whole point of the HttpOnly flag. It’s a security feature that prevents XSS attacks from stealing session tokens. But when you’re operating at the CDP level, you’re not bound by JavaScript’s restrictions. You’re talking directly to Chrome’s internals.

This is incredibly useful for:

  • Session management in automation: You can extract session cookies from one browser session and inject them into another
  • Security testing: Verify that sensitive cookies are properly marked HttpOnly
  • Debugging authentication issues: See exactly what cookies are being set by your backend
  • Web scraping: Maintain authenticated sessions across multiple scraper instances
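
To make the first and last points concrete: since CDP hands you cookies as plain data, persisting them for other scraper instances is just serialization. A minimal Python sketch—the field names mirror the CDP cookie objects shown above, and the file path is arbitrary:

```python
import json

def save_cookies(cookies, path):
    """Persist cookies (a list of dicts, as they arrive over CDP) to disk."""
    with open(path, "w") as f:
        json.dump(cookies, f, indent=2)

def load_cookies(path):
    """Load previously saved cookies for injection into another session."""
    with open(path) as f:
        return json.load(f)

cookies = [{"name": "session_id", "value": "abc123", "domain": "example.com",
            "path": "/", "httpOnly": True, "secure": True}]
save_cookies(cookies, "session-cookies.json")
restored = load_cookies("session-cookies.json")
print(restored[0]["name"], restored[0]["httpOnly"])  # session_id True
```

In the C# world you'd do the same with System.Text.Json against the CookieParam[] array returned by GetCookiesAsync().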

Setting Up the Project

Create a new console application:

dotnet new console -n ChromeRemoteDebugDemo
cd ChromeRemoteDebugDemo

Add PuppeteerSharp:

dotnet add package PuppeteerSharp

Your .csproj should look like:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="PuppeteerSharp" Version="20.2.4" />
  </ItemGroup>
</Project>

Running the Demo

On your Linux server:

# Install socat if needed
apt-get install socat -y

# Start Chrome on internal port 9223
google-chrome \
  --headless=new \
  --no-sandbox \
  --disable-gpu \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug &

# Proxy external 9222 to internal 9223
socat TCP-LISTEN:9222,fork,bind=0.0.0.0,reuseaddr TCP:127.0.0.1:9223 &

On your Windows development machine:

dotnet run

You should see output like:

=== Chrome Remote Debug - Cookie Retrieval Demo ===

Connecting to http://xxxx.xxx.xxx.xxx:9222/json/version
WebSocket URL: ws://xxxx.xxx.xxx.xxx:9222/devtools/browser/14706e92-5202-4651-aa97-a72d683bf88e

Connecting to Chrome via WebSocket...
Successfully connected!

Using existing page: https://example.com

Retrieving all cookies...

Found 2 cookie(s):

Name:     _ga
Value:    GA1.2.123456789.1234567890
Domain:   .example.com
Path:     /
Secure:   True
HttpOnly: False
SameSite: Lax
Expires:  2026-12-27 10:30:45
--------------------------------------------------------------------------------
Name:     session_id
Value:    abc123xyz456
Domain:   example.com
Path:     /
Secure:   True
HttpOnly: True  ← Notice this!
SameSite: Strict
Expires:  Session
--------------------------------------------------------------------------------

Disconnected successfully.

Security Considerations

Before you deploy this in production, consider these security implications:

1. Firewall Configuration

Only expose port 9222 to trusted networks. If you’re running this on a cloud server:

# Allow only your specific IP
sudo ufw allow from YOUR.IP.ADDRESS to any port 9222

Or better yet, use an SSH tunnel and don’t expose the port at all:

# On Windows, create a tunnel
ssh -N -L 9222:localhost:9222 user@remote-server

# Then connect to localhost:9222 in your code

2. Authentication

The Chrome DevTools Protocol has no built-in authentication. Anyone who can connect to the debugging port has complete control over Chrome. This includes:

  • Reading all page content
  • Executing arbitrary JavaScript
  • Accessing all cookies (as we’ve demonstrated)
  • Intercepting and modifying network requests

In production, you should:

  • Use SSH tunnels instead of exposing the port
  • Run Chrome in a sandboxed environment
  • Use short-lived debugging sessions
  • Monitor for unauthorized connections

3. Resource Limits

A runaway Chrome instance can consume significant resources. Consider:

# Limit the V8 heap per renderer (--max-old-space-size is a V8 option,
# so it must be passed through via --js-flags)
google-chrome --headless=new \
  --js-flags="--max-old-space-size=512" \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug

Beyond Cookies: What Else Can You Do?

The Chrome DevTools Protocol is incredibly powerful. Here are some other things you can do with this same setup:

Take Screenshots

await page.ScreenshotAsync("/path/to/screenshot.png");

Monitor Network Traffic

await page.SetRequestInterceptionAsync(true);
page.Request += async (sender, e) =>
{
    Console.WriteLine($"Request: {e.Request.Url}");
    await e.Request.ContinueAsync();  // every intercepted request must be continued (or aborted), or it hangs
};

Execute JavaScript

var title = await page.EvaluateExpressionAsync<string>("document.title");

Modify Cookies

await page.SetCookieAsync(new CookieParam
{
    Name = "test",
    Value = "123",
    Domain = "example.com",
    HttpOnly = true,  // Can set HttpOnly from CDP!
    Secure = true
});

Emulate Mobile Devices

// Built-in device descriptors cover viewport, user agent and touch support
await page.EmulateAsync(Puppeteer.Devices[DeviceDescriptorName.IPhone6]);

Comparing Approaches

You might be wondering how this compares to other approaches:

PuppeteerSharp vs. Selenium

Selenium uses the WebDriver protocol, which is a W3C standard but higher-level and more abstracted. PuppeteerSharp/CDP gives you lower-level access to Chrome specifically.

  • Selenium: Better for cross-browser testing, more stable API
  • PuppeteerSharp: More powerful Chrome-specific features, faster, lighter weight

PuppeteerSharp vs. Raw CDP Libraries

You could use libraries like MasterDevs.ChromeDevTools or ChromeProtocol for more direct CDP access:

// With MasterDevs.ChromeDevTools
var session = new ChromeSession(webSocketUrl);
var cookies = await session.SendAsync(new GetCookiesCommand());

Low-level CDP libraries:

  • Pros: More control, can use experimental CDP features
  • Cons: More verbose, have to handle protocol details

PuppeteerSharp:

  • Pros: High-level API, actively maintained, comprehensive documentation
  • Cons: Abstracts away some CDP features

For most use cases, PuppeteerSharp hits the sweet spot between power and ease of use.

Troubleshooting Common Issues

“Could not connect to Chrome debugging endpoint”

Check firewall:

sudo ufw status
sudo iptables -L -n | grep 9222

Verify Chrome is running:

ps aux | grep chrome
netstat -tlnp | grep 9222

Test locally first:

curl http://localhost:9222/json/version

“No cookies found”

This is normal if the page hasn’t set any cookies. Navigate to a site that does:

await page.GoToAsync("https://github.com");
var cookies = await page.GetCookiesAsync();

Chrome crashes or hangs

Add more stability flags:

google-chrome \
  --headless=new \
  --no-sandbox \
  --disable-gpu \
  --disable-dev-shm-usage \
  --disable-setuid-sandbox \
  --remote-debugging-port=9223 \
  --user-data-dir=/tmp/chrome-remote-debug

Real-World Use Case: Session Management

Here’s a practical example of how I’ve used this in production—managing authenticated sessions for web scraping:

public class SessionManager
{
    private readonly string _remoteChrome; // ws endpoint, e.g. ws://host:9222/devtools/browser/...

    // ConnectToRemoteChrome() wraps the /json/version discovery and
    // Puppeteer.ConnectAsync() call from the earlier example

    public async Task<CookieParam[]> LoginAndGetSession(string username, string password)
    {
        var browser = await ConnectToRemoteChrome();
        var page = await browser.NewPageAsync();
        
        // Perform login
        await page.GoToAsync("https://example.com/login");
        await page.TypeAsync("#username", username);
        await page.TypeAsync("#password", password);
        await page.ClickAsync("#login-button");
        await page.WaitForNavigationAsync();
        
        // Extract all cookies (including HttpOnly session tokens!)
        var cookies = await page.GetCookiesAsync();
        
        await browser.DisconnectAsync();
        
        // Store these cookies for later use
        return cookies;
    }
    
    public async Task ReuseSession(CookieParam[] cookies)
    {
        var browser = await ConnectToRemoteChrome();
        var page = await browser.NewPageAsync();
        
        // Inject the saved cookies
        await page.SetCookieAsync(cookies);
        
        // Now you're authenticated!
        await page.GoToAsync("https://example.com/dashboard");
        
        // Do your work...
    }
}

This allows you to:

  1. Log in once in a “master” browser
  2. Extract the session cookies (including HttpOnly auth tokens)
  3. Distribute those cookies to multiple scraper instances
  4. All scrapers are now authenticated without re-logging in

Conclusion

The Chrome DevTools Protocol opens up a world of possibilities for browser automation and debugging. By combining it with PuppeteerSharp and a bit of Linux networking knowledge, you can:

  • Control Chrome instances running anywhere on your network
  • Access all browser data, including HttpOnly cookies
  • Build powerful automation and testing tools
  • Debug production issues remotely

The key takeaways:

  1. Use socat to proxy Chrome’s localhost debugging port to external interfaces
  2. PuppeteerSharp provides the easiest way to interact with CDP from C#
  3. CDP gives you superpowers that normal JavaScript can’t access
  4. Security matters—only expose debugging ports to trusted networks

Have you used the Chrome DevTools Protocol in your projects? What creative uses have you found for it? Drop a comment below—I’d love to hear your experiences!


Tags: C#, Chrome, DevTools Protocol, PuppeteerSharp, Web Automation, Browser Automation, Linux, socat


How to Set Up Your Own Custom Disposable Email Domain with Mailnesia

Disposable email addresses are incredibly useful for maintaining privacy online, avoiding spam, and testing applications. While services like Mailnesia offer free disposable emails, there’s an even more powerful approach: using your own custom domain with Mailnesia’s infrastructure.

Why Use Your Own Domain?

When you use a standard disposable email service, the domain (like @mailnesia.com) is publicly known. This means:

  • Websites can easily block known disposable email domains
  • There’s no real uniqueness to your addresses
  • You’re sharing the domain with potentially millions of other users

By pointing your own domain to Mailnesia, you get:

  • Higher anonymity – Your domain isn’t in any public disposable email database
  • Unlimited addresses – Create any email address on your domain instantly
  • Professional appearance – Use a legitimate-looking domain for sign-ups
  • Better deliverability – Less likely to be flagged as a disposable email

What You’ll Need

  • A domain name you own (can be purchased for as little as $10/year)
  • Access to your domain’s DNS settings
  • That’s it!

Step-by-Step Setup

1. Access Your DNS Settings

Log into your domain registrar or DNS provider (e.g., Cloudflare, Namecheap, GoDaddy) and navigate to the DNS management section for your domain.

2. Add the MX Record

Create a new MX (Mail Exchange) record with these values:

Type: MX
Name: @ (or leave blank for root domain)
Mail Server: mailnesia.com
Priority/Preference: 10
TTL: 3600 (or default)

Important: Make sure to include the trailing dot if your DNS provider requires it: mailnesia.com.

3. Wait for DNS Propagation

DNS changes can take anywhere from a few minutes to 48 hours to fully propagate, though it’s usually quick (under an hour). You can check if your MX record is live using a DNS lookup tool.

4. Start Using Your Custom Disposable Emails

Once the DNS has propagated, any email sent to any address at your domain will be received by Mailnesia. Access your emails by going to:

https://mailnesia.com/mailbox/USERNAME

Where USERNAME is the part before the @ in your email address.

For example:

  • Email sent to: testing123@yourdomain.com
  • Access inbox at: https://mailnesia.com/mailbox/testing123
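
The address-to-inbox mapping is simple enough to automate. A small Python helper (the function name is mine, not part of Mailnesia):

```python
def mailnesia_inbox_url(address):
    """Return the public Mailnesia inbox URL for an address at your domain.

    The mailbox name is simply the local part -- everything before the @.
    """
    local_part = address.split("@")[0]
    return f"https://mailnesia.com/mailbox/{local_part}"

print(mailnesia_inbox_url("testing123@yourdomain.com"))
# https://mailnesia.com/mailbox/testing123
```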

Use Cases

This setup is perfect for:

  • Service sign-ups – Use a unique email for each service (e.g., netflix@yourdomain.com, github@yourdomain.com)
  • Testing – Developers can test email functionality without setting up mail servers
  • Privacy protection – Keep your real email address private
  • Spam prevention – If an address gets compromised, simply stop using it
  • Tracking – See which services sell or leak your email by using unique addresses per service

Important Considerations

Security and Privacy

  • No authentication required – Anyone who guesses or knows your username can access that mailbox. Don’t use this for sensitive communications.
  • Temporary storage – Mailnesia emails are not stored permanently. They’re meant to be disposable.
  • No sending capability – This setup only receives emails; you cannot send from these addresses through Mailnesia.

Best Practices

  1. Use random usernames – Instead of newsletter@yourdomain.com, use something like j8dk3h@yourdomain.com for better privacy
  2. Subdomain option – Consider using a subdomain like disposable.yourdomain.com to keep it separate from your main domain
  3. Don’t use for important accounts – Reserve this for non-critical services only
  4. Monitor your usage – Keep track of which addresses you’ve used where

Technical Notes

  • You can still use your domain for regular email by setting up additional MX records with different priorities
  • Some providers may allow you to set up email forwarding in addition to this setup
  • Check Mailnesia’s terms of service for any usage restrictions

Verifying Your Setup

To test if everything is working:

  1. Send a test email to a random address at your domain (e.g., test12345@yourdomain.com)
  2. Visit https://mailnesia.com/mailbox/test12345
  3. Your email should appear within a few seconds

Troubleshooting

Emails not appearing?

  • Verify your MX record is correctly set up using an MX lookup tool
  • Ensure DNS has fully propagated (can take up to 48 hours)
  • Check that you’re using the correct mailbox URL format

Getting bounced emails?

  • Make sure the Mailnesia MX record has the lowest priority value of your MX records (a lower number means higher preference)
  • Verify there are no conflicting MX records

Conclusion

Setting up your own custom disposable email domain with Mailnesia is surprisingly simple and provides a powerful privacy tool. With just a single DNS record change, you gain access to unlimited disposable email addresses on your own domain, giving you greater control over your online privacy and reducing spam in your primary inbox.

The enhanced anonymity of using your own domain, combined with the zero-configuration convenience of Mailnesia’s infrastructure, makes this an ideal solution for anyone who values their privacy online.


Remember: This setup is for non-sensitive communications only. For important accounts, always use a proper email service with security features like two-factor authentication.

How to Check All AWS Regions for Deprecated Python 3.9 Lambda Functions (PowerShell Guide)

If you’ve received an email from AWS notifying you that Python 3.9 is being deprecated for AWS Lambda, you’re not alone. As runtimes reach End-Of-Life, AWS sends warnings so you can update your Lambda functions before support officially ends.

The key question is:

How do you quickly check every AWS region to see where you’re still using Python 3.9?

AWS only gives you a single-region example in their email, but many teams have functions deployed globally. Fortunately, you can automate a full multi-region check using a simple PowerShell script.

This post shows you exactly how to do that.


🚨 Why You Received the Email

AWS is ending support for Python 3.9 in AWS Lambda.
After the deprecation dates:

  • No more security patches
  • No AWS technical support
  • You won’t be able to create/update functions using Python 3.9
  • Your functions will still run, but on an unsupported runtime

To avoid risk, you should upgrade these functions to Python 3.10, 3.11, or 3.12.

But first, you need to find all the functions using Python 3.9 — across all regions.


✔️ Prerequisites

Make sure you have:

  • AWS CLI installed
  • AWS credentials configured (via aws configure)
  • Permissions to run:
    • lambda:ListFunctions
    • ec2:DescribeRegions

🧪 Step 1 — Verify AWS CLI Access

Run this to confirm your CLI is working:

aws sts get-caller-identity --region eu-west-1

If it returns your AWS ARN, you’re good to go.

If you see “You must specify a region”, set a default region:

aws configure set region eu-west-1


📝 Step 2 — PowerShell Script to Check Python 3.9 in All Regions

Save this as aws-lambda-python39-check.ps1 (or any name you prefer):

# Get all AWS regions (forcing region so the call always works)
$regions = (aws ec2 describe-regions --region us-east-1 --query "Regions[].RegionName" --output text) -split "\s+"

foreach ($region in $regions) {
    Write-Host "Checking region: $region ..."
    $functions = aws lambda list-functions `
        --region $region `
        --query "Functions[?Runtime=='python3.9'].FunctionArn" `
        --output text

    if ($functions) {
        Write-Host "  → Found Python 3.9 functions:"
        Write-Host "    $functions"
    } else {
        Write-Host "  → No Python 3.9 functions found."
    }
}

This script does three things:

  1. Retrieves all AWS regions
  2. Loops through each region
  3. Prints any Lambda functions that still use Python 3.9

It handles the common AWS CLI error:

You must specify a region

by explicitly using --region us-east-1 when retrieving the region list.
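
If you'd rather post-process the CLI's JSON output in Python than lean on JMESPath queries, the same filter looks like this (the input shape matches what `aws lambda list-functions --output json` returns):

```python
import json

def deprecated_functions(list_functions_json, runtime="python3.9"):
    """Return the ARNs of functions still on the given runtime."""
    data = json.loads(list_functions_json)
    return [f["FunctionArn"]
            for f in data.get("Functions", [])
            if f.get("Runtime") == runtime]

# Sample data in the shape returned by `aws lambda list-functions`
sample = json.dumps({"Functions": [
    {"FunctionArn": "arn:aws:lambda:eu-west-1:123456789012:function:old-fn",
     "Runtime": "python3.9"},
    {"FunctionArn": "arn:aws:lambda:eu-west-1:123456789012:function:new-fn",
     "Runtime": "python3.12"},
]})
print(deprecated_functions(sample))
# ['arn:aws:lambda:eu-west-1:123456789012:function:old-fn']
```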


▶️ Step 3 — Run the Script

Open PowerShell in the folder where your script is saved:

.\aws-lambda-python39-check.ps1

You’ll see output like:

Checking region: eu-west-1 ...
  → Found Python 3.9 functions:
    arn:aws:lambda:eu-west-1:123456789012:function:my-old-function

Checking region: us-east-1 ...
  → No Python 3.9 functions found.

If no functions appear, you’re fully compliant.


🛠️ What to Do Next

For each function identified, update the runtime:

aws lambda update-function-configuration `
    --function-name MyFunction `
    --runtime python3.12

If you package dependencies manually (ZIP deployments), ensure you rebuild them using the new Python version.


🎉 Summary

AWS’s deprecation emails can be slightly alarming, but the fix is simple:

  • Scan all regions
  • Identify Python 3.9 Lambda functions
  • Upgrade them in advance of the cutoff date

With the PowerShell script above, you can audit your entire AWS account in seconds.

How to Integrate the RegCheck Vehicle Lookup #API with #OpenAI Actions

In today’s AI-driven world, connecting specialized APIs to large language models opens up powerful possibilities. One particularly useful integration is connecting vehicle registration lookup services to OpenAI’s custom GPTs through Actions. In this tutorial, we’ll walk through how to integrate the RegCheck API with OpenAI Actions, enabling your custom GPT to look up vehicle information from over 30 countries.

What is RegCheck?

RegCheck is a comprehensive vehicle data API that provides detailed information about vehicles based on their registration numbers (license plates). With support for countries including the UK, USA, Australia, and most of Europe, it’s an invaluable tool for automotive businesses, insurance companies, and vehicle marketplace platforms.

Why Integrate with OpenAI Actions?

OpenAI Actions allow custom GPTs to interact with external APIs, extending their capabilities beyond text generation. By integrating RegCheck, you can create a GPT assistant that:

  • Instantly looks up vehicle specifications for customers
  • Provides insurance quotes based on real vehicle data
  • Assists with vehicle valuations and sales listings
  • Answers detailed questions about specific vehicles

Prerequisites

Before you begin, you’ll need:

  • An OpenAI Plus subscription (for creating custom GPTs)
  • A RegCheck API account with credentials
  • Basic familiarity with OpenAPI specifications

Step-by-Step Integration Guide

Step 1: Create Your Custom GPT

Navigate to OpenAI’s platform and create a new custom GPT. Give it a name like “Vehicle Lookup Assistant” and configure its instructions to handle vehicle-related queries.

Step 2: Add the OpenAPI Schema

In your GPT configuration, navigate to the “Actions” section and add the following OpenAPI specification:

openapi: 3.0.0
info:
  title: RegCheck Vehicle Lookup API
  version: 1.0.0
  description: API for looking up vehicle registration information across multiple countries
servers:
  - url: https://www.regcheck.org.uk/api/json.aspx

paths:
  /Check/{registration}:
    get:
      operationId: checkUKVehicle
      summary: Get details for a vehicle in the UK
      parameters:
        - name: registration
          in: path
          required: true
          schema:
            type: string
          description: UK vehicle registration number
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                type: object

  /CheckSpain/{registration}:
    get:
      operationId: checkSpainVehicle
      summary: Get details for a vehicle in Spain
      parameters:
        - name: registration
          in: path
          required: true
          schema:
            type: string
          description: Spanish vehicle registration number
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                type: object

  /CheckFrance/{registration}:
    get:
      operationId: checkFranceVehicle
      summary: Get details for a vehicle in France
      parameters:
        - name: registration
          in: path
          required: true
          schema:
            type: string
          description: French vehicle registration number
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                type: object

  /VinCheck/{vin}:
    get:
      operationId: checkVehicleByVin
      summary: Get details for a vehicle by VIN
      parameters:
        - name: vin
          in: path
          required: true
          schema:
            type: string
          description: Vehicle Identification Number
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                type: object

Note: You can expand this schema to include additional endpoints for other countries as needed. The RegCheck API supports over 30 countries.
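
Because every country endpoint follows the same {path}/{registration} pattern against the same base URL, you can also build request URLs (or generate schema entries) programmatically. A small illustrative Python helper—the country-to-path table covers only the endpoints shown above:

```python
BASE = "https://www.regcheck.org.uk/api/json.aspx"

# Mapping limited to the endpoints defined in the schema above
ENDPOINTS = {
    "uk": "Check",
    "spain": "CheckSpain",
    "france": "CheckFrance",
}

def lookup_url(country, registration):
    """Build the RegCheck request URL for a country's endpoint."""
    try:
        path = ENDPOINTS[country.lower()]
    except KeyError:
        raise ValueError(f"No endpoint configured for country: {country}")
    return f"{BASE}/{path}/{registration}"

print(lookup_url("spain", "0075LTJ"))
# https://www.regcheck.org.uk/api/json.aspx/CheckSpain/0075LTJ
```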

Step 3: Configure Authentication

  1. In the Authentication section, select Basic authentication
  2. Enter your RegCheck API username
  3. Enter your RegCheck API password
  4. OpenAI will securely encrypt and store these credentials

The authentication header will be automatically included in all API requests made by your GPT.

Step 4: Test Your Integration

Use the built-in test feature in the Actions panel to verify the connection:

  1. Select the checkUKVehicle operation
  2. Enter a test registration like YYO7XHH
  3. Click “Test” to see the response

You should receive a JSON response with vehicle details including make, model, year, engine size, and more.

Step 5: Configure GPT Instructions

Update your GPT’s instructions to effectively use the new Actions:

You are a vehicle information assistant. When users provide a vehicle 
registration number, use the appropriate CheckVehicle action based on 
the country. Present the information in a clear, user-friendly format.

Always ask which country the registration is from if not specified.
Provide helpful context about the vehicle data returned.

Example Use Cases

Once integrated, your GPT can handle queries like:

User: “What can you tell me about UK registration YYO7XHH?”

GPT: [Calls checkUKVehicle action] “This is a 2007 Peugeot 307 X-line with a 1.4L petrol engine. It’s a 5-door manual transmission vehicle with right-hand drive…”

User: “Look up Spanish plate 0075LTJ”

GPT: [Calls checkSpainVehicle action] “Here’s the information for that Spanish vehicle…”

Best Practices and Considerations

API Limitations

  • The RegCheck API is currently in BETA and may change without notice
  • Consider implementing error handling in your GPT instructions
  • Be aware of rate limits on your API account

Privacy and Security

  • Never expose API credentials in your GPT’s instructions or responses
  • Inform users that vehicle lookups are being performed
  • Comply with data protection regulations in your jurisdiction

Optimizing Performance

  • Cache frequently requested vehicle information where appropriate
  • Use the most specific endpoint (e.g., CheckSpain vs. generic Check)
  • Consider implementing fallback behavior for failed API calls

Expanding the Integration

The RegCheck API offers many more endpoints you can integrate:

  • UKMOT: Access MOT test history for UK vehicles
  • WheelSize: Get wheel and tire specifications
  • CarSpecifications: Retrieve detailed specs by make/model/year
  • Country-specific checks: Add support for Australia, USA, and 25+ other countries

Simply add these endpoints to your OpenAPI schema following the same pattern.

Troubleshooting Common Issues

Authentication Errors: Double-check your username and password are correct in the Authentication settings.

404 Not Found: Verify the registration format matches the country’s standard format.

Empty Responses: Some vehicles may not have complete data in the RegCheck database.

Conclusion

Integrating the RegCheck API with OpenAI Actions transforms a standard GPT into a powerful vehicle information assistant. Whether you’re building tools for automotive dealerships, insurance platforms, or customer service applications, this integration provides instant access to comprehensive vehicle data from around the world.

The combination of AI’s natural language understanding with RegCheck’s extensive vehicle database creates a seamless user experience that would have required significant custom development just a few years ago.

Ready to get started? Create your RegCheck account, set up your custom GPT, and start building your vehicle lookup assistant today!

Spanish Vehicle Registration API: Complete Guide to Vehicle and Motorcycle Data Lookup in Spain

https://www.matriculaapi.com/
Spain maintains a comprehensive vehicle registration system managed by the Dirección General de Tráfico (DGT), covering over 25 million registered vehicles across the Iberian Peninsula, Balearic Islands, Canary Islands, and Spanish territories. The Spanish Vehicle Registration API provides developers and businesses with access to detailed vehicle specifications, technical data, and theft status information for both cars and motorcycles registered throughout Spain.

Overview of Spanish Vehicle Registration System

Spain’s vehicle registration is centralized under the DGT (Dirección General de Tráfico), which maintains detailed records for all vehicles operating on Spanish roads. The system provides comprehensive data including technical specifications, variant information, and critical safety indicators such as stolen vehicle status.

Spanish license plates have evolved through several formats:

  • Current format (2000-present): Four numbers + three letters (1234 ABC)
  • Previous format: Letters indicating province + numbers + letters
  • Special plates: Diplomatic, military, historical, and temporary registrations

Spanish Vehicle API Features

Available Data for Cars

When querying Spanish car registrations through the /CheckSpain endpoint, you can retrieve:

  • Make and Model – Complete manufacturer and vehicle model identification
  • Registration Year – Year when the vehicle was first registered in Spain
  • Registration Date – Exact date of vehicle registration
  • Engine Specifications – Engine displacement in cubic centimeters
  • Fuel Type – Detailed fuel classification (Diesel, Gasolina, etc.)
  • Vehicle Variant – Specific model variant and trim level
  • Technical Details – Number of seats, doors, and variant type
  • Power Rating – Dynamic power in horsepower
  • VIN Number – Vehicle Identification Number when available
  • Stolen Status – Critical indicator if vehicle has been reported stolen
  • Indicative Price – Reference pricing information

Sample Car Response Format

{
  "Description": "RENAULT MEGANE",
  "CarMake": {
    "CurrentTextValue": "RENAULT"
  },
  "CarModel": {
    "CurrentTextValue": "MEGANE"
  },
  "MakeDescription": {
    "CurrentTextValue": "RENAULT"
  },
  "ModelDescription": {
    "CurrentTextValue": "MEGANE"
  },
  "EngineSize": "1461",
  "VehicleIdentificationNumber": null,
  "RegistrationYear": "2010",
  "RegistrationDate": "06/07/2010",
  "Variation": "EXPRESSION 1.5DCI 85",
  "Seats": null,
  "VariantType": "Diesel 1461 cc 5 puertas",
  "VehicleType": "Car",
  "Fuel": "Diesel",
  "IndicativePrice": null,
  "Doors": "5",
  "AllTerain": null,
  "DynamicPower": "85.0",
  "Stolen": null
}

Spanish Motorcycle API

Dedicated Motorcycle Endpoint

For motorcycles and scooters registered in Spain, use the specialized motorcycle endpoint: https://www.matriculaapi.com/api/bespokeapi.asmx?op=CheckMotorBikeSpain

This endpoint returns motorcycle-specific data optimized for two-wheeled vehicles.

Available Data for Motorcycles

When querying Spanish motorcycle registrations, you can retrieve:

  • Make and Model – Motorcycle manufacturer and model identification
  • Registration Year – Year of first registration in Spain
  • Registration Date – Exact registration date
  • Engine Size – Engine displacement in cubic centimeters
  • Fuel Type – Fuel classification (typically Gasolina for motorcycles)
  • Variant – Specific motorcycle variant and version
  • Number of Seats – Rider and passenger capacity
  • Indicative Price – Reference pricing information
  • Transmission Type – Manual or automatic transmission
  • Dynamic Power – Power output in horsepower
  • Stolen Status – Critical theft indicator
  • VIN Number – Vehicle Identification Number when available

Sample Motorcycle Response Format

{
  "Description": "SUZUKI DL 650 V-STROM",
  "CarMake": {
    "CurrentTextValue": "SUZUKI"
  },
  "CarModel": {
    "CurrentTextValue": "DL 650"
  },
  "MakeDescription": {
    "CurrentTextValue": "SUZUKI"
  },
  "ModelDescription": {
    "CurrentTextValue": "DL 650"
  },
  "EngineSize": "645",
  "VehicleIdentificationNumber": "",
  "RegistrationYear": "2003",
  "RegistrationDate": "01/11/2003",
  "Variation": "V-STROM",
  "Seats": 1,
  "VariantType": "",
  "VehicleType": "MOTOCICLETA",
  "Fuel": "GASOLINA",
  "IndicativePrice": "7909.79",
  "Doors": 0,
  "AllTerain": 0,
  "KType": 0,
  "Transmission": "MANUAL",
  "DynamicPower": "68",
  "Stolen": null
}

API Implementation

Endpoint Usage

The Spanish Vehicle API uses two primary endpoints:

  1. Cars: /CheckSpain – For passenger vehicles, vans, and trucks
  2. Motorcycles: /CheckMotorBikeSpain – For motorcycles, scooters, and mopeds

Both endpoints require:

  • Registration Number – The complete Spanish license plate number
  • Username – Your API authentication credentials

Basic Implementation Example

// JavaScript example for Spanish vehicle lookup
class SpanishVehicleAPI {
  constructor(username) {
    this.username = username;
    this.carUrl = "https://www.matriculaapi.com/api/reg.asmx/CheckSpain";
    this.motorcycleUrl = "https://www.matriculaapi.com/api/bespokeapi.asmx/CheckMotorBikeSpain";
  }
  
  async lookupCar(registrationNumber) {
    const apiUrl = `${this.carUrl}?RegistrationNumber=${encodeURIComponent(registrationNumber)}&username=${encodeURIComponent(this.username)}`;
    
    try {
      const response = await fetch(apiUrl);
      const xmlText = await response.text();
      
      // Parse XML response (DOMParser is built into browsers; Node.js needs a package such as @xmldom/xmldom)
      const parser = new DOMParser();
      const xmlDoc = parser.parseFromString(xmlText, "text/xml");
      const jsonData = xmlDoc.getElementsByTagName("vehicleJson")[0].textContent;
      const vehicleInfo = JSON.parse(jsonData);
      
      return {
        type: 'car',
        make: vehicleInfo.MakeDescription.CurrentTextValue,
        model: vehicleInfo.ModelDescription.CurrentTextValue,
        year: vehicleInfo.RegistrationYear,
        registrationDate: vehicleInfo.RegistrationDate,
        engineSize: vehicleInfo.EngineSize,
        fuel: vehicleInfo.Fuel,
        power: vehicleInfo.DynamicPower,
        variant: vehicleInfo.Variation,
        variantType: vehicleInfo.VariantType,
        doors: vehicleInfo.Doors,
        seats: vehicleInfo.Seats,
        vin: vehicleInfo.VehicleIdentificationNumber,
        kType: vehicleInfo.KType,
        stolen: vehicleInfo.Stolen,
        indicativePrice: vehicleInfo.IndicativePrice
      };
    } catch (error) {
      console.error('Spanish car lookup failed:', error);
      return null;
    }
  }
  
  async lookupMotorcycle(registrationNumber) {
    const apiUrl = `${this.motorcycleUrl}?RegistrationNumber=${encodeURIComponent(registrationNumber)}&username=${encodeURIComponent(this.username)}`;
    
    try {
      const response = await fetch(apiUrl);
      const xmlText = await response.text();
      
      // Parse XML response
      const parser = new DOMParser();
      const xmlDoc = parser.parseFromString(xmlText, "text/xml");
      const jsonData = xmlDoc.getElementsByTagName("vehicleJson")[0].textContent;
      const motorcycleInfo = JSON.parse(jsonData);
      
      return {
        type: 'motorcycle',
        make: motorcycleInfo.MakeDescription.CurrentTextValue,
        model: motorcycleInfo.ModelDescription.CurrentTextValue,
        year: motorcycleInfo.RegistrationYear,
        registrationDate: motorcycleInfo.RegistrationDate,
        engineSize: motorcycleInfo.EngineSize,
        fuel: motorcycleInfo.Fuel,
        power: motorcycleInfo.DynamicPower,
        variant: motorcycleInfo.Variation,
        seats: motorcycleInfo.Seats,
        transmission: motorcycleInfo.Transmission,
        vin: motorcycleInfo.VehicleIdentificationNumber,
        stolen: motorcycleInfo.Stolen,
        indicativePrice: motorcycleInfo.IndicativePrice
      };
    } catch (error) {
      console.error('Spanish motorcycle lookup failed:', error);
      return null;
    }
  }
  
  async lookupWithAutoDetect(registrationNumber) {
    // Try car lookup first
    let result = await this.lookupCar(registrationNumber);
    if (result && result.make) {
      return result;
    }
    
    // If car lookup fails, try motorcycle
    result = await this.lookupMotorcycle(registrationNumber);
    return result;
  }
}

// Usage examples
const api = new SpanishVehicleAPI("your_username");

// Car lookup
api.lookupCar("5428GXS").then(data => {
  if (data) {
    console.log(`Car: ${data.make} ${data.model} (${data.year})`);
    console.log(`Engine: ${data.engineSize}cc ${data.fuel}`);
    console.log(`Power: ${data.power}HP`);
    console.log(`Variant: ${data.variant}`);
    if (data.stolen) {
      console.warn('⚠️ VEHICLE REPORTED STOLEN');
    }
  }
});

// Motorcycle lookup
api.lookupMotorcycle("1234ABC").then(data => {
  if (data) {
    console.log(`Motorcycle: ${data.make} ${data.model} (${data.year})`);
    console.log(`Engine: ${data.engineSize}cc`);
    console.log(`Power: ${data.power}HP`);
    console.log(`Transmission: ${data.transmission}`);
  }
});

// Auto-detect vehicle type
api.lookupWithAutoDetect("5428GXS").then(data => {
  if (data) {
    console.log(`${data.type}: ${data.make} ${data.model}`);
  }
});

Python Implementation

import requests
import xml.etree.ElementTree as ET
import json

class SpanishVehicleAPI:
    def __init__(self, username):
        self.username = username
        self.car_url = "https://www.matriculaapi.com/api/reg.asmx/CheckSpain"
        self.motorcycle_url = "https://www.matriculaapi.com/api/bespokeapi.asmx/CheckMotorBikeSpain"
    
    def validate_spanish_registration(self, registration):
        """Validate Spanish registration number format"""
        if not registration:
            return False, "Registration number is required"
        
        # Remove spaces and convert to uppercase
        reg = registration.replace(" ", "").upper()
        
        # Modern plates are 7 characters (4 digits + 3 letters);
        # allow 6-8 characters to also cover historic provincial formats
        if len(reg) < 6 or len(reg) > 8:
            return False, "Invalid registration length"
        
        return True, reg
    
    def lookup_car(self, registration_number):
        """Lookup Spanish car with comprehensive error handling"""
        is_valid, processed_reg = self.validate_spanish_registration(registration_number)
        if not is_valid:
            return {"error": processed_reg}
        
        try:
            params = {
                'RegistrationNumber': processed_reg,
                'username': self.username
            }
            
            response = requests.get(self.car_url, params=params, timeout=15)
            response.raise_for_status()
            
            # Parse XML response
            root = ET.fromstring(response.content)
            json_element = root.find('.//vehicleJson')
            
            if json_element is None or not json_element.text:
                return {"error": "No car data found for this registration number"}
            
            vehicle_data = json.loads(json_element.text)
            
            return {
                'success': True,
                'type': 'car',
                'description': vehicle_data.get('Description'),
                'make': vehicle_data.get('MakeDescription', {}).get('CurrentTextValue'),
                'model': vehicle_data.get('ModelDescription', {}).get('CurrentTextValue'),
                'registration_year': vehicle_data.get('RegistrationYear'),
                'registration_date': vehicle_data.get('RegistrationDate'),
                'engine_size': vehicle_data.get('EngineSize'),
                'fuel_type': vehicle_data.get('Fuel'),
                'power_hp': vehicle_data.get('DynamicPower'),
                'variant': vehicle_data.get('Variation'),
                'variant_type': vehicle_data.get('VariantType'),
                'doors': vehicle_data.get('Doors'),
                'seats': vehicle_data.get('Seats'),
                'vin': vehicle_data.get('VehicleIdentificationNumber'),
                'k_type': vehicle_data.get('KType'),
                'stolen': vehicle_data.get('Stolen'),
                'indicative_price': vehicle_data.get('IndicativePrice'),
                'all_terrain': vehicle_data.get('AllTerain'),
                'raw_data': vehicle_data
            }
            
        except Exception as e:
            return {"error": f"Car lookup failed: {str(e)}"}
    
    def lookup_motorcycle(self, registration_number):
        """Lookup Spanish motorcycle"""
        is_valid, processed_reg = self.validate_spanish_registration(registration_number)
        if not is_valid:
            return {"error": processed_reg}
        
        try:
            params = {
                'RegistrationNumber': processed_reg,
                'username': self.username
            }
            
            response = requests.get(self.motorcycle_url, params=params, timeout=15)
            response.raise_for_status()
            
            # Parse XML response
            root = ET.fromstring(response.content)
            json_element = root.find('.//vehicleJson')
            
            if json_element is None or not json_element.text:
                return {"error": "No motorcycle data found for this registration number"}
            
            motorcycle_data = json.loads(json_element.text)
            
            return {
                'success': True,
                'type': 'motorcycle',
                'description': motorcycle_data.get('Description'),
                'make': motorcycle_data.get('MakeDescription', {}).get('CurrentTextValue'),
                'model': motorcycle_data.get('ModelDescription', {}).get('CurrentTextValue'),
                'registration_year': motorcycle_data.get('RegistrationYear'),
                'registration_date': motorcycle_data.get('RegistrationDate'),
                'engine_size': motorcycle_data.get('EngineSize'),
                'fuel_type': motorcycle_data.get('Fuel'),
                'power_hp': motorcycle_data.get('DynamicPower'),
                'variant': motorcycle_data.get('Variation'),
                'seats': motorcycle_data.get('Seats'),
                'transmission': motorcycle_data.get('Transmission'),
                'vin': motorcycle_data.get('VehicleIdentificationNumber'),
                'stolen': motorcycle_data.get('Stolen'),
                'indicative_price': motorcycle_data.get('IndicativePrice'),
                'raw_data': motorcycle_data
            }
            
        except Exception as e:
            return {"error": f"Motorcycle lookup failed: {str(e)}"}
    
    def lookup_with_auto_detect(self, registration_number):
        """Try both car and motorcycle endpoints automatically"""
        # Try car first
        result = self.lookup_car(registration_number)
        if result.get('success'):
            return result
        
        # Try motorcycle if car lookup fails
        result = self.lookup_motorcycle(registration_number)
        if result.get('success'):
            return result
        
        return {"error": "Vehicle not found in car or motorcycle databases"}

# Usage examples
api = SpanishVehicleAPI("your_username")

# Car lookup
car_result = api.lookup_car("5428GXS")
if car_result.get('success'):
    print(f"Car: {car_result['make']} {car_result['model']}")
    print(f"Year: {car_result['registration_year']}")
    print(f"Engine: {car_result['engine_size']}cc {car_result['fuel_type']}")
    print(f"Power: {car_result['power_hp']}HP")
    print(f"Variant: {car_result['variant']}")
    
    if car_result['stolen']:
        print("⚠️ WARNING: VEHICLE REPORTED STOLEN")

# Motorcycle lookup
bike_result = api.lookup_motorcycle("1234ABC")
if bike_result.get('success'):
    print(f"Motorcycle: {bike_result['make']} {bike_result['model']}")
    print(f"Engine: {bike_result['engine_size']}cc")
    print(f"Power: {bike_result['power_hp']}HP")
    print(f"Transmission: {bike_result['transmission']}")

# Auto-detect vehicle type
auto_result = api.lookup_with_auto_detect("5428GXS")
if auto_result.get('success'):
    print(f"Vehicle Type: {auto_result['type']}")
    print(f"Vehicle: {auto_result['make']} {auto_result['model']}")

Spanish Vehicle Registration Format

Current Format (2000-Present)

Spanish license plates use the format: 1234 ABC

  • Four numbers: Sequential numbering (0000-9999)
  • Three letters: Alphabetical sequence (avoiding vowels and confusing letters)
  • No regional identification in current system
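The modern format lends itself to a quick local sanity check before hitting the API. The sketch below covers only post-2000 plates; the letter class excludes vowels and Q (confirm the exact allowed consonant set against DGT documentation), and historic provincial plates need separate handling:

```python
import re

# Modern Spanish plate: four digits, optional space, three consonants
# (vowels and Q excluded; allowed set is an approximation of DGT rules).
MODERN_PLATE = re.compile(r"^\d{4}\s?[BCDFGHJKLMNPRSTVWXYZ]{3}$")

def is_modern_spanish_plate(plate):
    """Return True when the plate matches the post-2000 '1234 ABC' format."""
    return bool(MODERN_PLATE.match(plate.strip().upper()))
```

Rejecting malformed input client-side saves an API call and gives users faster feedback.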

Historical Formats

  • Provincial system (1971-2000): Letters indicating province + numbers + letters
    • Examples: M-1234-AB (Madrid), B-5678-CD (Barcelona)

Special Registration Plates

  • Diplomatic: Special CD series with different formatting
  • Historical: H prefix for vintage vehicles over 30 years old
  • Temporary: Red plates for unregistered vehicles
  • Military: Special military identification series
  • Motorcycle: Same format as cars but typically on smaller plates

Understanding Spanish Vehicle Data

Fuel Type Classifications

Spanish fuel type terminology:

  • Gasolina – Petrol/Gasoline
  • Diesel – Diesel fuel
  • Eléctrico – Electric vehicle
  • Híbrido – Hybrid vehicle
  • GLP – Liquefied Petroleum Gas (Autogas)
  • Gas Natural – Compressed Natural Gas

Vehicle Type Classifications

  • Turismo – Passenger car
  • Furgoneta – Van
  • Camión – Truck
  • Motocicleta – Motorcycle
  • Ciclomotor – Moped
  • Quad – All-terrain quad vehicle
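When presenting results to non-Spanish-speaking users, the terms from the two lists above can be normalized with a simple lookup. This is a sketch: the sample responses suggest the API returns uppercase unaccented values (e.g. "GASOLINA", "MOTOCICLETA"), but accented variants may occur, so unknown terms fall through unchanged:

```python
FUEL_TYPES_ES_EN = {
    "GASOLINA": "Petrol",
    "DIESEL": "Diesel",
    "ELECTRICO": "Electric",
    "HIBRIDO": "Hybrid",
    "GLP": "LPG (Autogas)",
    "GAS NATURAL": "CNG",
}

VEHICLE_TYPES_ES_EN = {
    "TURISMO": "Passenger car",
    "FURGONETA": "Van",
    "CAMION": "Truck",
    "MOTOCICLETA": "Motorcycle",
    "CICLOMOTOR": "Moped",
    "QUAD": "Quad",
}

def translate(term, table):
    """Translate a Spanish API term to English, leaving unknown terms as-is."""
    if not term:
        return term
    return table.get(term.strip().upper(), term)
```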

Stolen Vehicle Indicator

The Stolen field is critical for security:

  • null – No theft report on record
  • “Yes” or populated value – Vehicle reported stolen to DGT
  • Always check this field before processing vehicle transactions
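Given those semantics, a defensive check might look like the sketch below — since the exact populated values are not fully documented, it treats anything other than null/empty as a theft flag:

```python
def is_reported_stolen(vehicle_data):
    """Return True when the Stolen field carries any populated value.

    Per the documented semantics, null means no theft report on record;
    any populated value (e.g. "Yes") should be treated as a theft indicator.
    """
    value = vehicle_data.get("Stolen")
    return value not in (None, "")
```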

Spanish Motorcycle Market

Popular Spanish Motorcycle Brands

While Spain is home to fewer motorcycle manufacturers than some European countries, it has a strong motorcycle culture:

Spanish Manufacturers:

  • RIEJU – Off-road and enduro motorcycles
  • Gas Gas – Trial and enduro specialists
  • Derbi – Scooters and small displacement motorcycles (now part of Piaggio)
  • Bultaco – Historic brand with recent electric motorcycle revival
  • Ossa – Classic motorcycle manufacturer

Popular International Brands in Spain:

  • Honda – Leading market share in Spain
  • Yamaha – Popular for both scooters and motorcycles
  • Suzuki – Strong presence in touring and adventure segments
  • BMW – Premium motorcycle segment leader
  • Harley-Davidson – Cruiser market leader

Motorcycle Registration Specifics

Spanish motorcycles follow the same registration format as cars but with motorcycle-specific data:

  • Detailed engine displacement tracking
  • Power output in horsepower
  • Transmission type (manual vs automatic/semi-automatic)
  • Seating capacity (rider only or with passenger)

Use Cases for Spanish Vehicle API

Insurance Industry

  • Premium Calculations – Engine size and power ratings for risk assessment
  • Claims Processing – Verify vehicle specifications during claims
  • Stolen Vehicle Checks – Critical fraud prevention through stolen status
  • Motorcycle Insurance – Specialized data for two-wheeled vehicle policies

Automotive and Motorcycle Dealers

  • Trade-In Valuations – Indicative pricing and specification verification
  • Vehicle History – Registration date and variant confirmation
  • Fraud Prevention – Stolen vehicle status before purchase
  • Inventory Management – Automated vehicle data for listings

Fleet Management

  • Asset Tracking – Comprehensive vehicle identification for cars and motorcycles
  • Compliance Monitoring – Ensure proper registration across fleet
  • Theft Monitoring – Regular stolen status checks for fleet vehicles
  • Maintenance Planning – Engine specifications for service schedules

Law Enforcement

  • Vehicle Identification – Quick lookups during traffic stops
  • Stolen Vehicle Detection – Immediate access to theft indicators
  • Investigation Support – Vehicle history and specification verification
  • Motorcycle Enforcement – Dedicated motorcycle data for traffic control

Mobile Applications

  • Insurance Apps – Instant vehicle verification for quotes
  • Marketplace Apps – Vehicle specification for classified listings
  • Service Booking – Technical specs for maintenance appointments
  • Parking Apps – Vehicle type identification for permit validation

Error Handling and Security Considerations

class SecureSpanishVehicleAPI extends SpanishVehicleAPI {
  constructor(username) {
    super(username);
    this.maxRetries = 3;
    this.retryDelay = 1000; // milliseconds
  }
  
  async lookupWithSecurity(registrationNumber, vehicleType = 'auto') {
    // Validate input to prevent injection
    if (!this.validateInput(registrationNumber)) {
      return {
        error: true,
        message: "Invalid registration format",
        security: "Input validation failed"
      };
    }
    
    // Perform lookup with retry logic
    let lastError;
    for (let attempt = 1; attempt <= this.maxRetries; attempt++) {
      try {
        let result;
        
        if (vehicleType === 'car') {
          result = await this.lookupCar(registrationNumber);
        } else if (vehicleType === 'motorcycle') {
          result = await this.lookupMotorcycle(registrationNumber);
        } else {
          result = await this.lookupWithAutoDetect(registrationNumber);
        }
        
        // Check for stolen vehicle
        if (result && result.stolen) {
          console.warn(`SECURITY ALERT: Vehicle ${registrationNumber} reported stolen`);
          result.securityAlert = "STOLEN_VEHICLE";
        }
        
        return result;
        
      } catch (error) {
        lastError = error;
        
        if (attempt < this.maxRetries) {
          await new Promise(resolve => 
            setTimeout(resolve, this.retryDelay * attempt)
          );
        }
      }
    }
    
    return {
      error: true,
      message: `Lookup failed after ${this.maxRetries} attempts`,
      details: lastError.message
    };
  }
  
  validateInput(registration) {
    // Prevent SQL injection and XSS
    if (!registration || typeof registration !== 'string') {
      return false;
    }
    
    // Check for suspicious characters
    const suspiciousPattern = /[;<>'"\\]/;
    if (suspiciousPattern.test(registration)) {
      return false;
    }
    
    return true;
  }
}

Data Privacy and Compliance

GDPR Compliance

Spain follows strict EU data protection regulations:

  • Vehicle technical data is not personal information
  • Registration numbers are public vehicle identifiers
  • Implement proper data retention policies
  • Ensure secure handling of stolen vehicle information

Security Best Practices

  • Always check stolen status before vehicle transactions
  • Log all stolen vehicle alerts for audit trails
  • Implement rate limiting to prevent abuse
  • Secure API credentials and use HTTPS only
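The rate-limiting bullet can be sketched with a minimal token bucket. This is illustrative only — production services usually lean on an API gateway or an off-the-shelf rate-limiting library instead:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Call `allow()` before each outbound lookup and reject (or queue) the request when it returns False.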

Getting Started

Account Registration

  1. Sign Up – Register for Spanish vehicle API access
  2. Verification – Complete business verification process
  3. Testing – Use sample registrations for development:
    • Cars: “5428GXS” (Renault Megane from documentation)
    • Motorcycles: Test with various Spanish motorcycle plates
  4. Production – Configure both car and motorcycle endpoints

Integration Checklist

  • [ ] Implement both car and motorcycle endpoints
  • [ ] Add stolen vehicle status checking and alerting
  • [ ] Create auto-detect logic for vehicle type
  • [ ] Design UI for Spanish registration format
  • [ ] Implement security validation for inputs
  • [ ] Add logging for stolen vehicle alerts
  • [ ] Test with both modern and historical plate formats

Sample Data for Testing

  • Cars: 5428GXS (Renault Megane Diesel)
  • Motorcycles: Various Spanish motorcycle registrations
  • Stolen checks: Verify stolen status handling

Conclusion

The Spanish Vehicle Registration API provides comprehensive access to Spain’s vehicle database, offering detailed technical specifications for both cars and motorcycles. The system’s inclusion of stolen vehicle indicators makes it particularly valuable for fraud prevention and security applications, while the dedicated motorcycle endpoint ensures proper data handling for Spain’s significant two-wheeled vehicle population.

Spain’s centralized DGT system ensures consistent data quality while the API’s dual endpoint approach allows for optimized data retrieval for different vehicle types. Understanding Spanish fuel classifications, registration formats, and the critical importance of stolen vehicle checking enhances the effectiveness of API integration.

The motorcycle-specific endpoint recognizes Spain’s vibrant motorcycle culture and provides specialized data fields for transmission types, seating configurations, and power ratings appropriate for two-wheeled vehicles.

Begin accessing Spanish vehicle data by registering for API credentials and exploring the comprehensive database covering cars and motorcycles across all Spanish regions and territories. Always implement stolen vehicle status checking to ensure secure and compliant vehicle data operations.
https://www.matriculaapi.com/

Categories: Uncategorized

Fixing .NET 8 HttpClient Permission Denied Errors on Google Cloud Run

If you’re deploying a .NET 8 application to Google Cloud Run and encountering a mysterious NetworkInformationException (13): Permission denied error when making HTTP requests, you’re not alone. This is a known issue that stems from how .NET’s HttpClient interacts with Cloud Run’s restricted container environment.

The Problem

When your .NET application makes HTTP requests using HttpClient, you might see an error like this:

System.Net.NetworkInformation.NetworkInformationException (13): Permission denied
   at System.Net.NetworkInformation.NetworkChange.CreateSocket()
   at System.Net.NetworkInformation.NetworkChange.add_NetworkAddressChanged(NetworkAddressChangedEventHandler value)
   at System.Net.Http.HttpConnectionPoolManager.StartMonitoringNetworkChanges()

This error occurs because .NET’s HttpClient attempts to monitor network changes and handle advanced HTTP features like HTTP/3 and Alt-Svc (Alternative Services). To do this, it tries to create network monitoring sockets, which requires permissions that Cloud Run containers don’t have by default.

Cloud Run’s security model intentionally restricts certain system-level operations to maintain isolation and security. While this is great for security, it conflicts with .NET’s network monitoring behavior.

Why Does This Happen?

The .NET runtime includes sophisticated connection pooling and HTTP version negotiation features. When a server responds with an Alt-Svc header (suggesting alternative protocols or endpoints), .NET tries to:

  1. Monitor network interface changes
  2. Adapt connection strategies based on network conditions
  3. Support HTTP/3 where available

These features require low-level network access that Cloud Run’s sandboxed environment doesn’t permit.

The Solution

Fortunately, there’s a straightforward fix. You need to disable the features that require elevated network permissions by setting two environment variables:

Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_DISABLEIPV6", "1");
Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT", "false");

Place these lines at the very top of your Program.cs file, before any HTTP client initialization or web application builder creation.

What These Variables Do

  • DOTNET_SYSTEM_NET_DISABLEIPV6: Disables IPv6 support, which also disables the network change monitoring that requires socket creation.
  • DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT: Explicitly disables HTTP/3 support, preventing .NET from trying to negotiate HTTP/3 connections.

Alternative Approaches

Option 1: Set in Dockerfile

You can bake these settings into your container image:

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app

# Disable network monitoring features
ENV DOTNET_SYSTEM_NET_DISABLEIPV6=1
ENV DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT=false

COPY publish/ .
ENTRYPOINT ["dotnet", "YourApp.dll"]

Option 2: Set via Cloud Run Configuration

You can configure these as environment variables in your Cloud Run deployment:

gcloud run deploy your-service \
  --image gcr.io/your-project/your-image \
  --set-env-vars DOTNET_SYSTEM_NET_DISABLEIPV6=1,DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT=false

Or through the Cloud Console when configuring your service’s environment variables.

Performance Impact

You might wonder if disabling these features affects performance. In practice:

  • HTTP/3 isn’t widely used yet, and most services work perfectly fine with HTTP/2 or HTTP/1.1
  • Network change monitoring is primarily useful for long-running desktop applications that move between networks (like a laptop switching from WiFi to cellular)
  • In a Cloud Run container with a stable network environment, these features provide minimal benefit

The performance impact is negligible, and the tradeoff is well worth it for a working application.

Why It Works Locally But Fails in Cloud Run

This issue often surprises developers because their code works perfectly on their development machine. That’s because:

  • Local development environments typically run with full system permissions
  • Your local machine isn’t running in a restricted container
  • Cloud Run’s security sandbox is much more restrictive than a typical development environment

This is a classic example of environment-specific behavior where security constraints in production expose issues that don’t appear during development.

Conclusion

The Permission denied error when using HttpClient in .NET 8 on Google Cloud Run is caused by the runtime’s attempt to use network monitoring features that aren’t available in Cloud Run’s restricted environment. The fix is simple: disable these features using environment variables.

This solution is officially recognized by the .NET team as the recommended workaround for containerized environments with restricted permissions, so you can use it with confidence in production.


Have you encountered other .NET deployment issues on Cloud Run? Feel free to share your experiences in the comments below.

Categories: Uncategorized

Enhanced Italian Vehicle API: VIN Numbers Now Available for Motorcycles

We’re excited to announce a significant enhancement to the Italian vehicle data API available through Targa.co.it. Starting today, our API responses now include Vehicle Identification Numbers (VIN) for motorcycle lookups, providing developers and businesses with more comprehensive vehicle data than ever before.

What’s New

The Italian vehicle API has been upgraded to return VIN numbers alongside existing motorcycle data. This enhancement brings motorcycle data parity with our car lookup service, ensuring consistent and complete vehicle information across all vehicle types.

Sample Response Structure

Here’s what you can expect from the enhanced API response for a motorcycle lookup:


{
  "Description": "Yamaha XT 1200 Z Super Ténéré",
  "RegistrationYear": "2016",
  "CarMake": {
    "CurrentTextValue": "Yamaha"
  },
  "CarModel": {
    "CurrentTextValue": "XT 1200 Z Super Ténéré"
  },
  "EngineSize": {
    "CurrentTextValue": "1199"
  },
  "FuelType": {
    "CurrentTextValue": ""
  },
  "MakeDescription": {
    "CurrentTextValue": "Yamaha"
  },
  "ModelDescription": {
    "CurrentTextValue": "XT 1200 Z Super Ténéré"
  },
  "Immobiliser": {
    "CurrentTextValue": ""
  },
  "Version": "ABS (2014-2016) 1199cc",
  "ABS": "",
  "AirBag": "",
  "Vin": "JYADP041000002470",
  "KType": "",
  "PowerCV": "",
  "PowerKW": "",
  "PowerFiscal": "",
  "ImageUrl": "http://www.targa.co.it/image.aspx/@WWFtYWhhIFhUIDEyMDAgWiBTdXBlciBUw6luw6lyw6l8bW90b3JjeWNsZQ=="
}

Why VIN Numbers Matter

Vehicle Identification Numbers serve as unique fingerprints for every vehicle, providing several key benefits:

Enhanced Vehicle Verification: VINs offer the most reliable method to verify a vehicle’s authenticity and specifications, reducing fraud in motorcycle transactions.

Complete Vehicle History: Access to VIN enables comprehensive history checks, insurance verification, and recall information lookup.

Improved Business Applications: Insurance companies, dealerships, and fleet management services can now build more robust motorcycle-focused applications with complete vehicle identification.

Regulatory Compliance: Many automotive business processes require VIN verification for legal and regulatory compliance.

Technical Implementation

The VIN field has been seamlessly integrated into existing API responses without breaking changes. The new "Vin" field appears alongside existing motorcycle data, maintaining backward compatibility while extending functionality.

Key Features:

  • No Breaking Changes: Existing integrations continue to work unchanged
  • Consistent Data Structure: Same JSON structure across all vehicle types
  • Comprehensive Coverage: VIN data available for motorcycles registered in the Italian vehicle database
  • Real-time Updates: VIN information reflects the most current data from official Italian vehicle registries

Getting Started

Developers can immediately begin utilizing VIN data in their applications. The API endpoint remains unchanged, and VIN information is automatically included in all motorcycle lookup responses where available.
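Because the `Vin` field is only populated “where available,” client code should treat it as optional. A minimal sketch in Python, parsing an abbreviated copy of the sample response shown above (the helper function and its handling of empty strings are illustrative, not part of the official API):

```python
import json

# Abbreviated motorcycle lookup response, taken from the sample above.
SAMPLE_RESPONSE = """
{
  "Description": "Yamaha XT 1200 Z Super Ténéré",
  "RegistrationYear": "2016",
  "CarMake": {"CurrentTextValue": "Yamaha"},
  "Vin": "JYADP041000002470",
  "KType": ""
}
"""

def extract_vin(payload: str):
    """Return the VIN from an API response, or None when the field
    is absent or empty (VIN coverage is not guaranteed for every record)."""
    data = json.loads(payload)
    vin = data.get("Vin", "").strip()
    return vin or None

vin = extract_vin(SAMPLE_RESPONSE)
print(vin)  # JYADP041000002470
```

Checking for an empty string as well as a missing key keeps the code safe for older records where the VIN has not yet been backfilled.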

For businesses already integrated with our Italian vehicle API, this enhancement provides immediate additional value without requiring any code changes. New integrations can take full advantage of complete motorcycle identification data from day one.

Use Cases

This enhancement opens up new possibilities for motorcycle-focused applications:

  • Insurance Platforms: Accurate risk assessment and policy management
  • Marketplace Applications: Enhanced listing verification and buyer confidence
  • Fleet Management: Complete motorcycle inventory tracking
  • Service Centers: Precise parts identification and service history management
  • Regulatory Reporting: Compliance with Italian vehicle registration requirements

Looking Forward

This VIN integration for motorcycles represents our continued commitment to providing comprehensive Italian vehicle data. We’re constantly working to enhance our API capabilities and expand data coverage to better serve the automotive technology ecosystem.

The addition of VIN numbers to motorcycle data brings our Italian API to feature parity with leading international vehicle data providers, while maintaining the accuracy and reliability that Italian businesses have come to expect from Targa.co.it.


Ready to integrate enhanced motorcycle data into your application? Visit Targa.co.it to explore our Italian vehicle API documentation and get started with VIN-enabled motorcycle lookups today.