Building a Wildcard Catch-All POP3 Mail Server on Ubuntu
Receive mail for any address on any subdomain — no per-account configuration required
Introduction
This guide walks through setting up a wildcard catch-all mail server on Ubuntu using Postfix and Dovecot. The goal is to receive email sent to any address on any subdomain of your domain — for example, anything@abc.yourdomain.com or test@xyz.yourdomain.com — without having to configure individual mailboxes in advance.
This is particularly useful for testing, disposable address systems, API integrations, and mail sink setups where you want to capture inbound mail programmatically. The server will not send mail — only receive it. Mail older than 24 hours is automatically purged.
Architecture Overview
The stack consists of three components working together:
- Postfix — receives inbound SMTP and delivers to a local virtual mailbox
- Dovecot — serves POP3 access to the mailbox
- A single catch-all mailbox — all mail for all subdomains and addresses funnels into one Maildir
Rather than creating individual accounts, everything is routed to a single mailbox. A POP3 client connects with one username and password to retrieve all mail regardless of which address or subdomain it was sent to.
Part 1 — DNS Configuration
How Wildcard MX Records Work
MX records must point to a hostname, not an IP address directly. This means two DNS records are needed: an MX record pointing to a mail hostname, and an A record resolving that hostname to your server’s IP address.
Create the following records in your DNS provider (AWS Route 53 or equivalent):
| Record Name | Type / Value |
| --- | --- |
| *.yourdomain.com | MX — 10 mail.yourdomain.com |
| mail.yourdomain.com | A — your.server.ip.address |
The wildcard MX record *.yourdomain.com matches any single-level subdomain lookup. When a sending mail server looks up the MX record for abc.yourdomain.com, it matches the wildcard and is directed to mail.yourdomain.com, which in turn resolves to your server’s IP via the A record.
Note that the wildcard covers one subdomain level deep. Mail to anything@abc.yourdomain.com is covered. A deeper level such as anything@a.b.yourdomain.com would require a separate record.
Verifying DNS Records
From a Windows machine, use nslookup to verify records have propagated:
```bash
# Check the MX record
nslookup -type=MX abc.yourdomain.com

# Check the A record for the mail host
nslookup mail.yourdomain.com

# Query AWS nameservers directly (before public propagation)
nslookup -type=NS yourdomain.com
nslookup -type=MX abc.yourdomain.com ns-123.awsdns-45.com
```
You can also use dnschecker.org to check propagation across multiple global resolvers simultaneously.
Part 2 — Server Setup
Install Postfix and Dovecot
```bash
sudo apt update
sudo apt install postfix dovecot-pop3d -y
```
During the Postfix installation prompt, select Internet Site and enter your domain name (e.g. yourdomain.com) when asked for the mail name.
Configure Postfix
Edit the main Postfix configuration file:
```bash
sudo nano /etc/postfix/main.cf
```
Add or update the following values:
```
myhostname = mail.yourdomain.com
mydomain = yourdomain.com

# Leave mydestination empty — we use virtual mailboxes instead
mydestination =

# Accept mail for any subdomain matching the wildcard
virtual_mailbox_domains = regexp:/etc/postfix/virtual_domains
virtual_mailbox_base = /var/mail/vhosts
virtual_mailbox_maps = regexp:/etc/postfix/virtual_mailbox
virtual_minimum_uid = 100
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000

# Required to prevent open relay
smtpd_relay_restrictions = permit_mynetworks, reject_unauth_destination
```
Create the virtual domains file — this regexp matches any subdomain of your domain:
```bash
sudo nano /etc/postfix/virtual_domains
```
```
/^.+\.yourdomain\.com$/ OK
```
Create the virtual mailbox map — this catches all addresses and routes them to a single catchall mailbox:
```bash
sudo nano /etc/postfix/virtual_mailbox
```
```
/^.+@.+\.yourdomain\.com$/ catchall/
```
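Postfix evaluates this table with its own regexp engine, but the pattern is simple enough to sanity-check offline. A quick Python sketch (Python's `re` behaves the same way for this simple pattern; `yourdomain.com` is the placeholder used throughout this guide):

```python
import re

# The same pattern used in /etc/postfix/virtual_mailbox:
# any local part at any subdomain of yourdomain.com
pattern = re.compile(r"^.+@.+\.yourdomain\.com$")

assert pattern.match("anything@abc.yourdomain.com")   # delivered to catchall/
assert pattern.match("test@xyz.yourdomain.com")       # delivered to catchall/
assert not pattern.match("user@yourdomain.com")       # bare domain: no subdomain
assert not pattern.match("user@otherdomain.com")      # foreign domain: rejected
```

Note that the bare domain does not match, which is intentional: only subdomain addresses are funneled into the catch-all mailbox.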
Rebuild the aliases database (required to avoid a startup warning):
```bash
sudo newaliases
```
Create the Virtual Mail User and Mailbox
Postfix delivers mail as a dedicated system user (vmail). Create the user, group, and mailbox directory:
```bash
sudo groupadd -g 5000 vmail
sudo useradd -u 5000 -g 5000 -d /var/mail/vhosts -s /sbin/nologin vmail
sudo mkdir -p /var/mail/vhosts/catchall
sudo chown -R vmail:vmail /var/mail/vhosts
```
Configure Dovecot for POP3
Enable the POP3 protocol in the main Dovecot config:
```bash
sudo nano /etc/dovecot/dovecot.conf
```
```
protocols = pop3
```
Set the mail location to the catchall Maildir:
```bash
sudo nano /etc/dovecot/conf.d/10-mail.conf
```
```
mail_location = maildir:/var/mail/vhosts/catchall
```
Allow plaintext authentication (suitable for internal/trusted use — see the TLS note at the end for public-facing deployments):
```bash
sudo nano /etc/dovecot/conf.d/10-auth.conf
```
```
disable_plaintext_auth = no
auth_mechanisms = plain login

passdb {
  driver = passwd-file
  args = /etc/dovecot/users
}

userdb {
  driver = static
  args = uid=5000 gid=5000 home=/var/mail/vhosts/catchall
}
```
Create the Dovecot users file with your chosen credentials:
```bash
sudo nano /etc/dovecot/users
```

```
# Format: username:{PLAIN}password
mailuser:{PLAIN}yourpasswordhere
```
Start the Services
```bash
sudo systemctl restart postfix
sudo systemctl restart dovecot
```
Verify Postfix is running:
```bash
sudo postfix status
```
Check the mail log for any errors:
```bash
sudo tail -30 /var/log/mail.log
```
Part 3 — Firewall Configuration
Cloud Firewall (Linode / AWS / equivalent)
Open the following inbound ports in your cloud provider’s firewall. On Linode this is found under Networking > Firewalls in the dashboard. Changes apply immediately with no reboot required.
| Port / Protocol | Purpose |
| --- | --- |
| 22 TCP | SSH (ensure this is always open) |
| 25 TCP | SMTP inbound (receiving mail) |
| 110 TCP | POP3 (retrieving mail) |
UFW on the Ubuntu Instance
```bash
sudo ufw allow 22/tcp
sudo ufw allow 25/tcp
sudo ufw allow 110/tcp
sudo ufw enable
sudo ufw status
```
Always confirm port 22 is allowed before enabling UFW to avoid locking yourself out of SSH.
Part 4 — Testing
Test SMTP Locally
From the server itself, connect to Postfix on port 25 and send a test message. Use 127.0.0.1 rather than localhost to avoid IPv6 connection issues:
```bash
telnet 127.0.0.1 25
```
You should immediately see the greeting banner:
```
220 mail.yourdomain.com ESMTP Postfix
```
Then send a test message interactively:
```
EHLO test.com
MAIL FROM:<test@test.com>
RCPT TO:<anything@abc.yourdomain.com>
DATA
Subject: Test mail

Hello this is a test.
.
QUIT
```

Note the lone `.` on its own line — that is what terminates the DATA section; without it, the message is never submitted.
Each step should return a 250 OK response. The RCPT TO line is the critical one — if the wildcard regexp is configured correctly, Postfix will accept any subdomain address. After QUIT, verify the mail landed in the mailbox:
```bash
sudo tail -20 /var/log/mail.log
sudo ls -la /var/mail/vhosts/catchall/new/
```
You should see a file in the new/ directory — that is the email in Maildir format.
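The interactive telnet session can also be scripted for repeatable testing. A minimal Python `smtplib` sketch using the same placeholder addresses (the helper names here are illustrative, not part of the setup):

```python
import smtplib

def build_test_message(subject: str, body: str) -> bytes:
    """Assemble a bare RFC 5322 message: headers, blank line, body."""
    return f"Subject: {subject}\r\n\r\n{body}\r\n".encode()

def send_test(host: str = "127.0.0.1",
              rcpt: str = "anything@abc.yourdomain.com") -> None:
    # smtplib drives the same EHLO / MAIL FROM / RCPT TO / DATA exchange
    # shown above, and raises if any step is refused.
    with smtplib.SMTP(host, 25, timeout=30) as smtp:
        smtp.sendmail("test@test.com", [rcpt],
                      build_test_message("Test mail", "Hello this is a test."))

# Usage (against your server):
#   send_test()                         # from the server itself
#   send_test("mail.yourdomain.com")    # remotely, once DNS is live
```

Unlike telnet, smtplib handles the DATA terminator and dot-stuffing for you, so a scripted test is less error-prone.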
Test POP3 Locally
```bash
telnet 127.0.0.1 110
```
Dovecot should respond with:
```
+OK Dovecot (Ubuntu) ready.
```
Then authenticate and list messages:
```
USER mailuser
PASS yourpasswordhere
LIST
RETR 1
QUIT
```
A successful LIST response showing message count confirms the full chain is working: inbound SMTP via Postfix, delivery to virtual Maildir, and POP3 retrieval via Dovecot.
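Since the stated goal is capturing inbound mail programmatically, the same POP3 exchange can be driven from Python's standard library. A sketch using the placeholder host and credentials from this guide (`fetch_messages` and `subject_of` are illustrative helpers):

```python
import poplib
from email.parser import BytesParser
from email.policy import default

def fetch_messages(host: str, user: str, password: str) -> list[bytes]:
    """USER/PASS, then RETR every message; returns raw RFC 5322 bytes."""
    conn = poplib.POP3(host, 110, timeout=30)
    try:
        conn.user(user)
        conn.pass_(password)
        count, _octets = conn.stat()
        raw = []
        for i in range(1, count + 1):
            _resp, lines, _size = conn.retr(i)  # lines: list of bytes, one per line
            raw.append(b"\r\n".join(lines))
            conn.dele(i)  # deletions are committed on QUIT, matching the sink model
        return raw
    finally:
        conn.quit()

def subject_of(raw: bytes) -> str:
    """Extract the Subject header from a raw message (empty string if absent)."""
    value = BytesParser(policy=default).parsebytes(raw)["Subject"]
    return str(value) if value else ""

# Usage (placeholders from this guide):
#   for msg in fetch_messages("mail.yourdomain.com", "mailuser", "yourpasswordhere"):
#       print(subject_of(msg))
```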
Part 5 — Automatic Mail Purge
To automatically delete mail older than 24 hours, add a cron job:
```bash
sudo crontab -e
```
Add the following line:
```
0 * * * * find /var/mail/vhosts/catchall -type f -mmin +1440 -delete
```
This runs every hour and removes any file in the catchall mailbox older than 1440 minutes (24 hours).
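If you later want retention logic the one-liner can't express (logging, different windows per mailbox), the same purge can live in a short script. A Python sketch equivalent to the find command above (the `purge_old_mail` helper is illustrative, not part of the setup):

```python
import time
from pathlib import Path

def purge_old_mail(maildir: str, max_age_hours: float = 24) -> int:
    """Delete files under maildir older than the cutoff.

    Mirrors: find <maildir> -type f -mmin +1440 -delete
    Returns the number of files removed.
    """
    cutoff = time.time() - max_age_hours * 3600
    removed = 0
    for path in Path(maildir).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed += 1
    return removed

# Usage (e.g. called from the same hourly cron job):
#   purge_old_mail("/var/mail/vhosts/catchall")
```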
Optional — Silence the Backwards Compatibility Warning
Postfix logs a harmless warning about backwards-compatible default settings. To silence it:
```bash
sudo postconf compatibility_level=3.6
sudo postfix reload
```
Security Notes
- Port 110 transmits credentials in plaintext. For any public-facing deployment, configure Dovecot with TLS and use POP3S on port 995 instead.
- The smtpd_relay_restrictions = permit_mynetworks, reject_unauth_destination setting prevents your server from acting as an open relay — do not remove this.
- Consider rate limiting inbound SMTP connections if the server is publicly accessible to reduce spam load.
- The vmail system user has no login shell (nologin) and cannot be used to access the system interactively.
Summary
With Postfix and Dovecot configured as described above, your server will:
- Accept inbound SMTP for any address on any subdomain of your domain
- Deliver all mail into a single catch-all Maildir with no per-account configuration
- Expose all received mail via POP3 using a single username and password
- Automatically purge mail older than 24 hours
- Require no restart or reconfiguration when new subdomains or addresses are used
How to Detect and Fix Squid Proxy Abuse
Running an open HTTP proxy on the internet, even temporarily for testing, can quickly attract unwanted attention. Within minutes of deploying an unsecured Squid proxy, malicious actors can discover and abuse it for scanning, attacks, or hiding their origin. Here’s how to spot the warning signs and lock down your proxy.
Symptoms of Proxy Abuse
1. Proxy Stops Working
The most obvious symptom is that your proxy simply stops responding to legitimate requests. Connections time out or hang indefinitely, even though the Squid service appears to be running.
2. Cache File Descriptor Warnings
When checking the Squid service status, you see repeated warnings like:
```
WARNING: Your cache is running out of file descriptors
```
This occurs because the proxy is handling far more concurrent connections than expected for a small test server.
3. Service Shows Active But Unresponsive
The systemd status shows Squid as “active (running)” with normal startup messages, but actual proxy requests fail:
```bash
$ sudo systemctl status squid
● squid.service - Squid Web Proxy Server
     Active: active (running)
```
Yet when you try to use it:
```bash
$ curl -x http://your-proxy:8888 https://example.com
curl: (28) Failed to connect to example.com port 443 after 21060ms
```
4. High Memory or CPU Usage
A small EC2 instance (t2.micro or t3.micro) that should be mostly idle shows elevated resource consumption.
How to Verify Proxy Abuse
Check the Access Logs
The quickest way to confirm abuse is to examine the Squid access log:
```bash
sudo tail -100 /var/log/squid/access.log
```
What to look for:
A healthy proxy used only by you should show:
- One or two source IP addresses (yours)
- Requests to legitimate domains
- Occasional HTTPS CONNECT requests
An abused proxy will show:
- Dozens of different source IP addresses
- CONNECT requests to random IP addresses (not domain names)
- Strange ports: SSH (22), Telnet (23), email ports (25, 587, 993, 110), random high ports
- High frequency of requests
Example of Abuse
Here’s what an abused proxy log looks like:
```
1770200370.089  59842 172.234.115.25 TCP_TUNNEL/503 0 CONNECT 188.64.128.123:22
1770200370.089  59842 51.83.10.33 TCP_TUNNEL/503 0 CONNECT 188.64.132.53:443
1770200370.089  59841 172.234.115.25 TCP_TUNNEL/503 0 CONNECT 188.64.128.4:22
1770200370.089  59332 185.90.61.84 TCP_TUNNEL/503 0 CONNECT 188.64.129.251:8000
1770200370.089    214 91.202.74.22 TCP_TUNNEL/503 0 CONNECT 188.64.129.143:23
1770200370.191    579 51.83.10.33 TCP_TUNNEL/200 39 CONNECT 188.64.128.4:8021
1770200370.235  11227 51.83.10.33 TCP_TUNNEL/200 176 CONNECT 188.64.131.66:587
```
Notice:
- Multiple unique source IPs
- Connections to SSH (port 22), Telnet (port 23), SMTP (port 587)
- Targets are raw IP addresses, not domain names
- Hundreds of requests per minute
This is classic behavior of attackers using your proxy to scan the internet for vulnerable services.
Count Unique IPs
To see how many different IPs are using your proxy:
```bash
sudo awk '{print $3}' /var/log/squid/access.log | sort | uniq -c | sort -rn | head -20
```
If you see more than a handful of IPs (especially if you’re the only legitimate user), your proxy is being abused.
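This triage can also be scripted. A small Python sketch that tallies client IPs and flags CONNECT requests to sensitive ports (it assumes Squid's default native log format, where field 3 is the client IP, matching the awk one-liner above; the port list and the `triage` helper are illustrative):

```python
from collections import Counter

# Ports attackers commonly tunnel to through an open proxy
SENSITIVE_PORTS = {22, 23, 25, 110, 587, 993}

def triage(log_lines):
    """Tally client IPs and collect CONNECTs to sensitive ports.

    Assumes Squid's default native log format:
    timestamp elapsed client code/status bytes method URL ...
    """
    clients = Counter()
    suspicious = []
    for line in log_lines:
        fields = line.split()
        if len(fields) < 7:
            continue
        client, method, target = fields[2], fields[5], fields[6]
        clients[client] += 1
        if method == "CONNECT":
            _host, _, port = target.rpartition(":")
            if port.isdigit() and int(port) in SENSITIVE_PORTS:
                suspicious.append((client, target))
    return clients, suspicious

# Usage:
#   with open("/var/log/squid/access.log") as f:
#       clients, suspicious = triage(f)
#   if len(clients) > 3 or suspicious:
#       print(f"possible abuse: {len(clients)} client IPs, "
#             f"{len(suspicious)} sensitive-port CONNECTs")
```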
Check Current Connections
See active connections to your proxy port:
```bash
sudo netstat -tn | grep :8888
```
A legitimate test proxy should have 0-2 active connections. Dozens of connections indicate abuse.
How to Fix It Immediately
1. Lock Down the AWS Security Group
The fastest fix is to restrict access at the network level:
Via AWS Console:
- Navigate to EC2 → Security Groups
- Select the security group attached to your proxy instance
- Click “Edit inbound rules”
- Find the rule for your proxy port (e.g., 8888)
- Change Source from `0.0.0.0/0` to "My IP" (AWS will auto-detect and fill in your current public IP)
- Click “Save rules”
The change takes effect immediately – no restart required.
2. Restart Squid to Kill Existing Connections
Even after locking down the security group, existing connections may persist:
```bash
sudo systemctl restart squid
```
3. Clear the Logs
Start fresh to verify the abuse has stopped:
```bash
# Stop Squid
sudo systemctl stop squid

# Clear logs
sudo truncate -s 0 /var/log/squid/access.log
sudo truncate -s 0 /var/log/squid/cache.log

# Clear cache if you're seeing file descriptor warnings
sudo rm -rf /var/spool/squid/*
sudo squid -z

# Restart
sudo systemctl start squid
```
4. Verify It’s Fixed
Watch the log in real-time:
```bash
sudo tail -f /var/log/squid/access.log
```
If tail -f just sits there with no output, that’s good – it means no requests are coming through.
Now test from your own machine:
```bash
curl -x http://your-proxy-ip:8888 https://ifconfig.me
```
You should immediately see your request appear in the log, and nothing else.
Prevention Best Practices
For Testing Environments
- Always use IP whitelisting – never expose a proxy to `0.0.0.0/0`, even for testing
- Use non-standard ports – while not security through obscurity, it reduces automated scanning
- Set up authentication – Even basic auth is better than nothing
- Monitor logs – Check periodically for unexpected traffic
- Terminate when done – Don’t leave test infrastructure running
Minimal Squid Config with Authentication
For slightly better security, add basic authentication:
```bash
# Install htpasswd
sudo apt install apache2-utils

# Create password file
sudo htpasswd -c /etc/squid/passwords testuser

# Edit squid.conf
sudo nano /etc/squid/squid.conf
```
Add these lines:
```
http_port 8888

# Basic authentication
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
auth_param basic realm proxy
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
cache deny all
```
Restart Squid and now clients must authenticate:
```bash
curl -x http://testuser:password@your-proxy:8888 https://example.com
```
For Production
If you need a production proxy:
- Use a proper reverse proxy like nginx or HAProxy with TLS
- Implement OAuth or certificate-based authentication
- Use AWS PrivateLink or VPC peering instead of public exposure
- Enable detailed logging and monitoring
- Set up rate limiting
- Consider managed solutions like AWS API Gateway or CloudFront
Conclusion
Open proxies are magnets for abuse. Automated scanners continuously sweep the internet looking for misconfigured proxies to exploit. The symptoms are often subtle – file descriptor warnings, poor performance, or timeouts – but the fix is straightforward: restrict access to only trusted IP addresses at the network level.
For testing purposes, AWS Security Groups provide the perfect solution: instant IP-based access control with no performance overhead. Combined with monitoring the Squid access logs, you can quickly detect and eliminate abuse before it impacts your testing or incurs unexpected costs.
Remember: if you’re running a temporary test proxy, lock it down to your IP from the start. It only takes minutes for automated scanners to find and abuse an open proxy.
Key Takeaways:
- ✅ Always restrict proxy access via security groups/firewall rules
- ✅ Monitor access logs for unexpected IP addresses
- ✅ Watch for file descriptor warnings as an early sign of abuse
- ✅ Clear logs and restart after securing to verify the fix
- ✅ Terminate test infrastructure when finished to avoid ongoing costs
Migrating Google Cloud Run to Scaleway: Bringing Your Cloud Infrastructure Back to Europe
Introduction: Why European Cloud Sovereignty Matters Now More Than Ever

In an era of increasing geopolitical tensions, data sovereignty concerns, and evolving international relations, European companies are reconsidering their dependence on US-based cloud providers. The EU’s growing emphasis on digital sovereignty, combined with uncertainties around US data access laws like the CLOUD Act and recent political developments, has made many businesses uncomfortable with storing sensitive data on American infrastructure.
For EU-based companies running containerized workloads on Google Cloud Run, there’s good news: migrating to European alternatives like Scaleway is surprisingly straightforward. This guide will walk you through the technical process of moving your Cloud Run services to Scaleway’s Serverless Containers—keeping your applications running while bringing your infrastructure back under European jurisdiction.
Why Scaleway?
Scaleway, a French cloud provider founded in 1999, offers a compelling alternative to Google Cloud Run:
- 🇪🇺 100% European: All data centers located in France, Netherlands, and Poland
- 📜 GDPR Native: Built from the ground up with European data protection in mind
- 💰 Transparent Pricing: No hidden costs, generous free tiers, and competitive rates
- 🔒 Data Sovereignty: Your data never leaves EU jurisdiction
- ⚡ Scale-to-Zero: Just like Cloud Run, pay only for actual usage
- 🌱 Environmental Leadership: Strong commitment to sustainable cloud infrastructure
Most importantly: Scaleway Serverless Containers are technically equivalent to Google Cloud Run. Both are built on Knative, meaning your containers will run identically on both platforms.
Prerequisites
Before starting, ensure you have:
- An existing Google Cloud Run service
- Windows machine with PowerShell
- `gcloud` CLI installed and authenticated
- A Scaleway account (free to create)
- Skopeo installed (we’ll cover this)
Understanding the Architecture
Both Google Cloud Run and Scaleway Serverless Containers work the same way:
- You provide a container image
- The platform runs it on-demand via HTTPS endpoints
- It scales automatically (including to zero when idle)
- You pay only for execution time
The migration process is simply:
- Copy your container image from Google’s registry to Scaleway’s registry
- Deploy it as a Scaleway Serverless Container
- Update your DNS/endpoints
No code changes required—your existing .NET, Node.js, Python, Go, or any other containerized application works as-is.
Step 1: Install Skopeo (Lightweight Docker Alternative)
Since we’re on Windows and don’t want to run full Docker Desktop, we’ll use Skopeo—a lightweight tool designed specifically for copying container images between registries.
Install via winget:
```powershell
winget install RedHat.Skopeo
```
Or download directly from: https://github.com/containers/skopeo/releases
Why Skopeo?
- No daemon required: No background services consuming resources
- Direct registry-to-registry transfer: Images never touch your local disk
- Minimal footprint: ~50MB vs. several GB for Docker Desktop
- Perfect for CI/CD: Designed for automation and registry operations
Configure Skopeo’s Trust Policy
Skopeo requires a policy file to determine which registries to trust. Create it:
```powershell
# Create the config directory
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.config\containers"

# Create a permissive policy that trusts all registries
@"
{
  "default": [
    { "type": "insecureAcceptAnything" }
  ],
  "transports": {
    "docker-daemon": {
      "": [{"type": "insecureAcceptAnything"}]
    }
  }
}
"@ | Out-File -FilePath "$env:USERPROFILE\.config\containers\policy.json" -Encoding utf8
```
For production environments, you might want a more restrictive policy that only trusts specific registries:
```powershell
@"
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "gcr.io": [{"type": "insecureAcceptAnything"}],
      "europe-west2-docker.pkg.dev": [{"type": "insecureAcceptAnything"}],
      "rg.fr-par.scw.cloud": [{"type": "insecureAcceptAnything"}]
    }
  }
}
"@ | Out-File -FilePath "$env:USERPROFILE\.config\containers\policy.json" -Encoding utf8
```
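Skopeo refuses to run if policy.json is malformed, so it is worth confirming the here-string emitted valid JSON. A quick Python sketch (the literal below repeats the permissive policy; to check the file Skopeo will actually read, load it from `~/.config/containers/policy.json`):

```python
import json
from pathlib import Path

# The permissive policy written above, as a Python literal
policy_text = """
{
  "default": [
    { "type": "insecureAcceptAnything" }
  ],
  "transports": {
    "docker-daemon": {
      "": [{"type": "insecureAcceptAnything"}]
    }
  }
}
"""

policy = json.loads(policy_text)  # raises ValueError if the JSON is malformed
assert policy["default"][0]["type"] == "insecureAcceptAnything"

# To validate the real file instead:
#   json.loads((Path.home() / ".config/containers/policy.json").read_text())
```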
Step 2: Find Your Cloud Run Container Image
Your Cloud Run service uses a specific container image. To find it:
Via gcloud CLI (recommended):
```bash
gcloud run services describe YOUR-SERVICE-NAME \
  --region=YOUR-REGION \
  --project=YOUR-PROJECT \
  --format='value(spec.template.spec.containers[0].image)'
```

This returns the full image URL, something like:

```
europe-west2-docker.pkg.dev/your-project/cloud-run-source-deploy/your-service@sha256:abc123...
```
Via Google Cloud Console:
- Navigate to Cloud Run in the console
- Click your service
- Go to the “Revisions” tab
- Look for “Container image URL”
The @sha256:... digest is important—it ensures you’re copying the exact image currently running in production.
Step 3: Set Up Scaleway Container Registry
Create a Scaleway Account
- Sign up at https://console.scaleway.com/
- Complete email verification
- Navigate to the console
Create a Container Registry Namespace
- Go to Containers → Container Registry
- Click Create namespace
- Choose a region (Paris, Amsterdam, or Warsaw)
- Important: Choose the same region where you’ll deploy your containers
- Enter a namespace name (e.g., `my-containers`, `production`)
- Must be unique within that region
- Lowercase, numbers, and hyphens only
- Set Privacy to Private
- Click Create
Your registry URL will be: rg.fr-par.scw.cloud/your-namespace
Create API Credentials
- Click your profile → API Keys (or visit https://console.scaleway.com/iam/api-keys)
- Click Generate API Key
- Give it a name (e.g., “container-migration”)
- Save the Secret Key securely—it’s only shown once
- Note both the Access Key and Secret Key
Step 4: Copy Your Container Image
Now comes the magic—copying your container directly from Google to Scaleway without downloading it locally.
Authenticate and Copy:
```powershell
# Set your Scaleway secret key as environment variable (more secure)
$env:SCW_SECRET_KEY = "your-scaleway-secret-key-here"

# Copy the image directly between registries
skopeo copy `
  --src-creds="oauth2accesstoken:$(gcloud auth print-access-token)" `
  --dest-creds="nologin:$env:SCW_SECRET_KEY" `
  docker://europe-west2-docker.pkg.dev/your-project/cloud-run-source-deploy/your-service@sha256:abc123... `
  docker://rg.fr-par.scw.cloud/your-namespace/your-service:latest
```

What's Happening:
- --src-creds: Authenticates with Google using your gcloud session
- --dest-creds: Authenticates with Scaleway using your API key
- Source URL: Your Google Artifact Registry image
- Destination URL: Your Scaleway Container Registry

The transfer happens directly between registries—your Windows machine just orchestrates it. Even a multi-GB container copies in minutes.

Verify the Copy:
- Go to https://console.scaleway.com/registry/namespaces
- Click your namespace
- You should see your service image listed with the latest tag

Step 5: Deploy to Scaleway Serverless Containers

Create a Serverless Container Namespace:
- Navigate to Containers → Serverless Containers
- Click Create namespace
- Choose the same region as your Container Registry
- Give it a name (e.g., production-services)
- Click Create

Deploy Your Container:
- Click Create container
- Image source: Select "Scaleway Container Registry"
- Choose your namespace and image
- Configuration:
  - Port: Set to the port your app listens on (usually 8080 for Cloud Run apps)
  - Environment variables: Copy any env vars from Cloud Run
  - Resources: Memory: start with what you used in Cloud Run; vCPU: 0.5-1 vCPU is typical
  - Scaling: Min scale 0 (enables scale-to-zero, just like Cloud Run); Max scale based on expected traffic (e.g., 10)
- Click Deploy container

Get Your Endpoint:

After deployment (1-2 minutes), you'll receive an HTTPS endpoint:

```
https://your-container-namespace-xxxxx.functions.fnc.fr-par.scw.cloud
```
This is your public API endpoint—no API Gateway needed, SSL included for free.
Step 6: Test Your Service
```powershell
# Test the endpoint
Invoke-WebRequest -Uri "https://your-container-url.functions.fnc.fr-par.scw.cloud/your-endpoint"
```
Your application should respond identically to how it did on Cloud Run.
Understanding the Cost Comparison
Google Cloud Run Pricing (Typical):
- vCPU: $0.00002400/vCPU-second
- Memory: $0.00000250/GB-second
- Requests: $0.40 per million
- Plus: API Gateway, Load Balancer, or other routing costs
Scaleway Serverless Containers:
- vCPU: €0.00001/vCPU-second (€1.00 per 100k vCPU-s)
- Memory: €0.000001/GB-second (€0.10 per 100k GB-s)
- Requests: Free (no per-request charges)
- HTTPS endpoint: Free (included)
- Free Tier: 200k vCPU-seconds + 400k GB-seconds per month
Example Calculation:
For an API handling 1 million requests/month, 200ms average response time, 1 vCPU, 2GB memory:
Google Cloud Run:
- vCPU: 1M × 0.2s × $0.000024 = $4.80
- Memory: 1M × 0.2s × 2GB × $0.0000025 = $1.00
- Requests: 1M × $0.0000004 = $0.40
- Total: ~$6.20/month
Scaleway:
- vCPU: 200k vCPU-s → Free (within free tier)
- Memory: 400k GB-s → Free (within free tier)
- Total: €0.00/month
Even beyond free tiers, Scaleway is typically 30-50% cheaper, with no surprise charges.
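The worked example above can be wrapped in a tiny cost model, which makes it easy to re-run with your own traffic numbers. A sketch in Python (rates copied from the lists above; the free tier is applied as a flat monthly deduction, a simplification that matches this example):

```python
def cloud_run_cost(requests: int, seconds_per_req: float, vcpus: float, gb: float) -> float:
    """Monthly USD cost on Cloud Run, using the rates listed above."""
    vcpu_s = requests * seconds_per_req * vcpus
    gb_s = requests * seconds_per_req * gb
    return (vcpu_s * 0.000024          # $/vCPU-second
            + gb_s * 0.0000025         # $/GB-second
            + requests * 0.40 / 1e6)   # $0.40 per million requests

def scaleway_cost(requests: int, seconds_per_req: float, vcpus: float, gb: float) -> float:
    """Monthly EUR cost on Scaleway; free tier deducted, no request charge."""
    vcpu_s = max(0, requests * seconds_per_req * vcpus - 200_000)  # 200k vCPU-s free
    gb_s = max(0, requests * seconds_per_req * gb - 400_000)       # 400k GB-s free
    return vcpu_s * 0.00001 + gb_s * 0.000001

# The worked example: 1M requests, 200 ms, 1 vCPU, 2 GB
assert round(cloud_run_cost(1_000_000, 0.2, 1, 2), 2) == 6.20
assert scaleway_cost(1_000_000, 0.2, 1, 2) == 0.0
```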
Key Differences to Be Aware Of
Similarities (Good News):
- ✅ Both use Knative under the hood
- ✅ Both support HTTP, HTTP/2, WebSocket, gRPC
- ✅ Both scale to zero automatically
- ✅ Both provide HTTPS endpoints
- ✅ Both support custom domains
- ✅ Both integrate with monitoring/logging
Differences:
- Cold start: Scaleway takes ~2-5 seconds (similar to Cloud Run)
- Idle timeout: Scaleway scales to zero after 15 minutes of inactivity (Cloud Run's idle behavior varies with configuration)
- Regions: Limited to EU (Paris, Amsterdam, Warsaw) vs. Google’s global presence
- Ecosystem: Smaller ecosystem than GCP (but rapidly growing)
When Scaleway Makes Sense:
- ✅ Your primary users/customers are in Europe
- ✅ GDPR compliance is critical
- ✅ You want to avoid US jurisdiction over your data
- ✅ You prefer transparent, predictable pricing
- ✅ You don’t need GCP-specific services (BigQuery, etc.)
When to Consider Carefully:
- ⚠️ You need global edge distribution (though you can use CDN)
- ⚠️ You’re heavily integrated with other GCP services
- ⚠️ You need GCP’s machine learning services
- ⚠️ Your customers are primarily in Asia/Americas
Additional Migration Considerations
Environment Variables and Secrets:
Scaleway offers Secret Manager integration. Copy your Cloud Run secrets:
- Go to Secret Manager in Scaleway
- Create secrets matching your Cloud Run environment variables
- Reference them in your container configuration
Custom Domains:
Both platforms support custom domains. In Scaleway:
- Go to your container settings
- Add custom domain
- Update your DNS CNAME to point to Scaleway’s endpoint
- SSL is handled automatically
Databases and Storage:
If you’re using Cloud SQL or Cloud Storage:
- Databases: Consider Scaleway’s Managed PostgreSQL/MySQL or Serverless SQL Database
- Object Storage: Scaleway Object Storage is S3-compatible
- Or: Keep using GCP services (cross-cloud is possible, but adds latency)
Monitoring and Logging:
Scaleway provides Cockpit (based on Grafana):
- Automatic logging for all Serverless Containers
- Pre-built dashboards
- Integration with alerts and metrics
- Similar to Cloud Logging/Monitoring
The Broader Picture: European Digital Sovereignty
This migration isn’t just about cost savings or technical features—it’s about control.
Why EU Companies Are Moving:
- Legal Protection: GDPR protections are stronger when data never leaves EU jurisdiction
- Political Risk: Reduces exposure to US government data requests under CLOUD Act
- Supply Chain Resilience: Diversification away from Big Tech dependency
- Supporting European Tech: Strengthens the European cloud ecosystem
- Future-Proofing: As digital sovereignty regulations increase, early movers are better positioned
The Economic Argument:
Every euro spent with European cloud providers:
- Stays in the European economy
- Supports European jobs and innovation
- Builds alternatives to US/Chinese tech dominance
- Strengthens Europe’s strategic autonomy
Conclusion: A Straightforward Path to Sovereignty
Migrating from Google Cloud Run to Scaleway Serverless Containers is technically simple—often taking just a few hours for a typical service. The containers are identical, the pricing is competitive, and the operational model is the same.
But beyond the technical benefits, there’s a strategic argument: as a European company, every infrastructure decision is a choice about where your data lives, who has access to it, and which ecosystem you’re supporting.
Scaleway (and other European cloud providers) aren’t perfect replacements for every GCP use case. But for containerized APIs and web services—which represent the majority of Cloud Run workloads—they’re absolutely production-ready alternatives that keep your infrastructure firmly within European jurisdiction.
In 2026’s geopolitical landscape, that’s not just a nice-to-have—it’s increasingly essential.
Resources
- Scaleway Serverless Containers: https://www.scaleway.com/en/serverless-containers/
- Scaleway Documentation: https://www.scaleway.com/en/docs/
- Skopeo Documentation: https://github.com/containers/skopeo
- European Cloud Providers: Research Scaleway, OVHcloud, Hetzner, and others
- EU Digital Sovereignty: European Commission digital strategy resources
Have you migrated your infrastructure back to Europe? Share your experience in the comments below.