Posts Tagged ‘security’

Building a Wildcard Catch-All POP3 Mail Server on Ubuntu

Receive mail for any address on any subdomain — no per-account configuration required

Introduction

This guide walks through setting up a wildcard catch-all mail server on Ubuntu using Postfix and Dovecot. The goal is to receive email sent to any address on any subdomain of your domain — for example, anything@abc.yourdomain.com or test@xyz.yourdomain.com — without having to configure individual mailboxes in advance.

This is particularly useful for testing, disposable address systems, API integrations, and mail sink setups where you want to capture inbound mail programmatically. The server will not send mail — only receive it. Mail older than 24 hours is automatically purged.

Architecture Overview

The stack consists of three components working together:

  • Postfix — receives inbound SMTP and delivers to a local virtual mailbox
  • Dovecot — serves POP3 access to the mailbox
  • A single catch-all mailbox — all mail for all subdomains and addresses funnels into one Maildir

Rather than creating individual accounts, everything is routed to a single mailbox. A POP3 client connects with one username and password to retrieve all mail regardless of which address or subdomain it was sent to.

Part 1 — DNS Configuration

How Wildcard MX Records Work

MX records must point to a hostname, not an IP address directly. This means two DNS records are needed: an MX record pointing to a mail hostname, and an A record resolving that hostname to your server’s IP address.

Create the following records in your DNS provider (AWS Route 53 or equivalent):

Record Name           Type / Value
*.yourdomain.com      MX — 10 mail.yourdomain.com
mail.yourdomain.com   A — your.server.ip.address

The wildcard MX record *.yourdomain.com matches any single-level subdomain lookup. When a sending mail server looks up the MX record for abc.yourdomain.com, it matches the wildcard and is directed to mail.yourdomain.com, which in turn resolves to your server’s IP via the A record.

Note that the wildcard covers one subdomain level deep. Mail to anything@abc.yourdomain.com is covered. A deeper level such as anything@a.b.yourdomain.com would require a separate record.

Verifying DNS Records

From a Windows machine, use nslookup to verify records have propagated:

# Check the MX record
nslookup -type=MX abc.yourdomain.com

# Check the A record for the mail host
nslookup mail.yourdomain.com

# Query AWS nameservers directly (before public propagation)
nslookup -type=NS yourdomain.com
nslookup -type=MX abc.yourdomain.com ns-123.awsdns-45.com

You can also use dnschecker.org to check propagation across multiple global resolvers simultaneously.

Part 2 — Server Setup

Install Postfix and Dovecot

sudo apt update
sudo apt install postfix dovecot-pop3d -y

During the Postfix installation prompt, select Internet Site and enter your domain name (e.g. yourdomain.com) when asked for the mail name.

Configure Postfix

Edit the main Postfix configuration file:

sudo nano /etc/postfix/main.cf

Add or update the following values:

myhostname = mail.yourdomain.com
mydomain = yourdomain.com

# Leave mydestination empty — we use virtual mailboxes instead
mydestination =

# Accept mail for any subdomain matching the wildcard
virtual_mailbox_domains = regexp:/etc/postfix/virtual_domains
virtual_mailbox_base = /var/mail/vhosts
virtual_mailbox_maps = regexp:/etc/postfix/virtual_mailbox
virtual_minimum_uid = 100
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000

# Required to prevent open relay
smtpd_relay_restrictions = permit_mynetworks, reject_unauth_destination

Create the virtual domains file — this regexp matches any subdomain of your domain:

sudo nano /etc/postfix/virtual_domains
/^.+\.yourdomain\.com$/    OK

Create the virtual mailbox map — this catches all addresses and routes them to a single catchall mailbox:

sudo nano /etc/postfix/virtual_mailbox
/^.+@.+\.yourdomain\.com$/    catchall/
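Postfix regexp tables use POSIX regular expressions, so the mailbox pattern can be sanity-checked with grep -E before reloading Postfix. A minimal sketch, using hypothetical addresses — substitute your own domain:

```shell
# Test which addresses the catch-all pattern accepts.
pattern='^.+@.+\.yourdomain\.com$'
for addr in anything@abc.yourdomain.com test@xyz.yourdomain.com \
            user@yourdomain.com user@elsewhere.com; do
  if echo "$addr" | grep -Eq "$pattern"; then
    echo "MATCH     $addr"
  else
    echo "NO MATCH  $addr"
  fi
done
```

The first two addresses (one subdomain level) match; the bare domain and foreign domain do not, which is exactly the behavior the wildcard MX record implies.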

Rebuild the aliases database (required to avoid a startup warning):

sudo newaliases

Create the Virtual Mail User and Mailbox

Postfix delivers mail as a dedicated system user (vmail). Create the user, group, and mailbox directory:

sudo groupadd -g 5000 vmail
sudo useradd -u 5000 -g 5000 -d /var/mail/vhosts -s /sbin/nologin vmail
sudo mkdir -p /var/mail/vhosts/catchall
sudo chown -R vmail:vmail /var/mail/vhosts

Configure Dovecot for POP3

Enable the POP3 protocol in the main Dovecot config:

sudo nano /etc/dovecot/dovecot.conf
protocols = pop3

Set the mail location to the catchall Maildir:

sudo nano /etc/dovecot/conf.d/10-mail.conf
mail_location = maildir:/var/mail/vhosts/catchall

Allow plaintext authentication (suitable for internal/trusted use — see the TLS note at the end for public-facing deployments):

sudo nano /etc/dovecot/conf.d/10-auth.conf
disable_plaintext_auth = no
auth_mechanisms = plain login

passdb {
  driver = passwd-file
  args = /etc/dovecot/users
}

userdb {
  driver = static
  args = uid=5000 gid=5000 home=/var/mail/vhosts/catchall
}

Create the Dovecot users file with your chosen credentials:

sudo nano /etc/dovecot/users
# Format: username:{PLAIN}password
mailuser:{PLAIN}yourpasswordhere

Start the Services

sudo systemctl restart postfix
sudo systemctl restart dovecot

Verify Postfix is running:

sudo postfix status

Check the mail log for any errors:

sudo tail -30 /var/log/mail.log

Part 3 — Firewall Configuration

Cloud Firewall (Linode / AWS / equivalent)

Open the following inbound ports in your cloud provider’s firewall. On Linode this is found under Networking > Firewalls in the dashboard. Changes apply immediately with no reboot required.

Port / Protocol   Purpose
22 TCP            SSH (ensure this is always open)
25 TCP            SMTP inbound (receiving mail)
110 TCP           POP3 (retrieving mail)

UFW on the Ubuntu Instance

sudo ufw allow 22/tcp
sudo ufw allow 25/tcp
sudo ufw allow 110/tcp
sudo ufw enable
sudo ufw status

Always confirm port 22 is allowed before enabling UFW to avoid locking yourself out of SSH.

Part 4 — Testing

Test SMTP Locally

From the server itself, connect to Postfix on port 25 and send a test message. Use 127.0.0.1 rather than localhost to avoid IPv6 connection issues:

telnet 127.0.0.1 25

You should immediately see the greeting banner:

220 mail.yourdomain.com ESMTP Postfix

Then send a test message interactively:

EHLO test.com
MAIL FROM:<test@test.com>
RCPT TO:<anything@abc.yourdomain.com>
DATA
Subject: Test mail

Hello this is a test.
.
QUIT

Each envelope command should return a 250 response (the DATA command itself returns 354, inviting the message body, and the terminating dot on its own line returns 250 OK). The RCPT TO line is the critical one — if the wildcard regexp is configured correctly, Postfix will accept any subdomain address. After QUIT, verify the mail landed in the mailbox:

sudo tail -20 /var/log/mail.log
sudo ls -la /var/mail/vhosts/catchall/new/

You should see a file in the new/ directory — that is the email in Maildir format.
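A quick way to inspect received mail without a POP3 client is to print the To and Subject headers of each message file in the Maildir. The sketch below runs against a throwaway sample message so it is self-contained; on the server, point MAILDIR at /var/mail/vhosts/catchall/new instead (the sample filename is made up — Maildir names are arbitrary unique strings):

```shell
# Create a scratch Maildir with one sample message, then print its
# To and Subject headers (everything before the first blank line).
MAILDIR=$(mktemp -d)
printf 'To: <anything@abc.yourdomain.com>\nSubject: Test mail\n\nHello this is a test.\n' \
  > "$MAILDIR/1700000000.M1P1.mail"
for f in "$MAILDIR"/*; do
  sed -n '1,/^$/{/^To:/p;/^Subject:/p;}' "$f"
done
```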

Test POP3 Locally

telnet 127.0.0.1 110

Dovecot should respond with:

+OK Dovecot (Ubuntu) ready.

Then authenticate and list messages:

USER mailuser
PASS yourpasswordhere
LIST
RETR 1
QUIT

A successful LIST response showing message count confirms the full chain is working: inbound SMTP via Postfix, delivery to virtual Maildir, and POP3 retrieval via Dovecot.

Part 5 — Automatic Mail Purge

To automatically delete mail older than 24 hours, add a cron job:

sudo crontab -e

Add the following line:

0 * * * * find /var/mail/vhosts/catchall -type f -mmin +1440 -delete

This runs every hour and removes any file in the catchall mailbox older than 1440 minutes (24 hours).
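The same find invocation can be dry-run against a scratch directory before trusting it with real mail — one file backdated past 24 hours, one fresh; only the backdated file should be removed:

```shell
# Simulate the purge rule: backdate one file 25 hours, leave one fresh.
DIR=$(mktemp -d)
touch -d '25 hours ago' "$DIR/old.eml"
touch "$DIR/new.eml"
find "$DIR" -type f -mmin +1440 -delete
ls "$DIR"    # only new.eml should remain
```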

Optional — Silence the Backwards Compatibility Warning

Postfix logs a harmless warning about backwards-compatible default settings. To silence it:

sudo postconf compatibility_level=3.6
sudo postfix reload

Security Notes

  • Port 110 transmits credentials in plaintext. For any public-facing deployment, configure Dovecot with TLS and use POP3S on port 995 instead.
  • The smtpd_relay_restrictions = permit_mynetworks, reject_unauth_destination setting prevents your server from acting as an open relay — do not remove this.
  • Consider rate limiting inbound SMTP connections if the server is publicly accessible to reduce spam load.
  • The vmail system user has no login shell (nologin) and cannot be used to access the system interactively.

Summary

With Postfix and Dovecot configured as described above, your server will:

  • Accept inbound SMTP for any address on any subdomain of your domain
  • Deliver all mail into a single catch-all Maildir with no per-account configuration
  • Expose all received mail via POP3 using a single username and password
  • Automatically purge mail older than 24 hours
  • Require no restart or reconfiguration when new subdomains or addresses are used

How to Detect and Fix Squid Proxy Abuse

Running an open HTTP proxy on the internet, even temporarily for testing, can quickly attract unwanted attention. Within minutes of deploying an unsecured Squid proxy, malicious actors can discover and abuse it for scanning, attacks, or hiding their origin. Here’s how to spot the warning signs and lock down your proxy.

Symptoms of Proxy Abuse

1. Proxy Stops Working

The most obvious symptom is that your proxy simply stops responding to legitimate requests. Connections time out or hang indefinitely, even though the Squid service appears to be running.

2. Cache File Descriptor Warnings

When checking the Squid service status, you see repeated warnings like:

WARNING: Your cache is running out of file descriptors

This occurs because the proxy is handling far more concurrent connections than expected for a small test server.

3. Service Shows Active But Unresponsive

The systemd status shows Squid as “active (running)” with normal startup messages, but actual proxy requests fail:

$ sudo systemctl status squid
● squid.service - Squid Web Proxy Server
Active: active (running)

Yet when you try to use it:

$ curl -x http://your-proxy:8888 https://example.com
curl: (28) Failed to connect to example.com port 443 after 21060ms

4. High Memory or CPU Usage

A small EC2 instance (t2.micro or t3.micro) that should be mostly idle shows elevated resource consumption.

How to Verify Proxy Abuse

Check the Access Logs

The quickest way to confirm abuse is to examine the Squid access log:

sudo tail -100 /var/log/squid/access.log

What to look for:

A healthy proxy used only by you should show:

  • One or two source IP addresses (yours)
  • Requests to legitimate domains
  • Occasional HTTPS CONNECT requests

An abused proxy will show:

  • Dozens of different source IP addresses
  • CONNECT requests to random IP addresses (not domain names)
  • Strange ports: SSH (22), Telnet (23), email ports (25, 587, 993, 110), random high ports
  • High frequency of requests

Example of Abuse

Here’s what an abused proxy log looks like:

1770200370.089 59842 172.234.115.25 TCP_TUNNEL/503 0 CONNECT 188.64.128.123:22
1770200370.089 59842 51.83.10.33 TCP_TUNNEL/503 0 CONNECT 188.64.132.53:443
1770200370.089 59841 172.234.115.25 TCP_TUNNEL/503 0 CONNECT 188.64.128.4:22
1770200370.089 59332 185.90.61.84 TCP_TUNNEL/503 0 CONNECT 188.64.129.251:8000
1770200370.089 214 91.202.74.22 TCP_TUNNEL/503 0 CONNECT 188.64.129.143:23
1770200370.191 579 51.83.10.33 TCP_TUNNEL/200 39 CONNECT 188.64.128.4:8021
1770200370.235 11227 51.83.10.33 TCP_TUNNEL/200 176 CONNECT 188.64.131.66:587

Notice:

  • Multiple unique source IPs
  • Connections to SSH (port 22), Telnet (port 23), SMTP (port 587)
  • Targets are raw IP addresses, not domain names
  • Hundreds of requests per minute

This is classic behavior of attackers using your proxy to scan the internet for vulnerable services.
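That pattern is easy to flag automatically: tunnels (CONNECT) to sensitive ports like SSH, Telnet, and SMTP. A small awk filter, run here against an inline sample taken from the log above — on a real server, substitute /var/log/squid/access.log:

```shell
# Flag CONNECT tunnels to sensitive ports in a Squid access log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
1770200370.089 59842 172.234.115.25 TCP_TUNNEL/503 0 CONNECT 188.64.128.123:22
1770200370.191 579 51.83.10.33 TCP_TUNNEL/200 39 CONNECT 188.64.128.4:8021
1770200370.235 11227 51.83.10.33 TCP_TUNNEL/200 176 CONNECT 188.64.131.66:587
EOF
# Field 3 is the client IP, field 6 the method, field 7 the target host:port.
awk '$6 == "CONNECT" && $7 ~ /:(22|23|25|587)$/ {print $3, "->", $7}' "$LOG"
```

This prints the SSH and SMTP tunnels and skips the line targeting port 8021.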

Count Unique IPs

To see how many different IPs are using your proxy:

sudo awk '{print $3}' /var/log/squid/access.log | sort | uniq -c | sort -rn | head -20

If you see more than a handful of IPs (especially if you’re the only legitimate user), your proxy is being abused.

Check Current Connections

See active connections to your proxy port:

sudo netstat -tn | grep :8888

A legitimate test proxy should have 0-2 active connections. Dozens of connections indicate abuse.

How to Fix It Immediately

1. Lock Down the AWS Security Group

The fastest fix is to restrict access at the network level:

Via AWS Console:

  1. Navigate to EC2 → Security Groups
  2. Select the security group attached to your proxy instance
  3. Click “Edit inbound rules”
  4. Find the rule for your proxy port (e.g., 8888)
  5. Change Source from 0.0.0.0/0 to “My IP”
    • AWS will auto-detect and fill in your current public IP
  6. Click “Save rules”

The change takes effect immediately – no restart required.

2. Restart Squid to Kill Existing Connections

Even after locking down the security group, existing connections may persist:

sudo systemctl restart squid

3. Clear the Logs

Start fresh to verify the abuse has stopped:

# Stop Squid
sudo systemctl stop squid
# Clear logs
sudo truncate -s 0 /var/log/squid/access.log
sudo truncate -s 0 /var/log/squid/cache.log
# Clear cache if you're seeing file descriptor warnings
sudo rm -rf /var/spool/squid/*
sudo squid -z
# Restart
sudo systemctl start squid

4. Verify It’s Fixed

Watch the log in real-time:

sudo tail -f /var/log/squid/access.log

If tail -f just sits there with no output, that’s good – it means no requests are coming through.

Now test from your own machine:

curl -x http://your-proxy-ip:8888 https://ifconfig.me

You should immediately see your request appear in the log, and nothing else.

Prevention Best Practices

For Testing Environments

  1. Always use IP whitelisting – Never expose a proxy to 0.0.0.0/0 even for testing
  2. Use non-standard ports – this is only security through obscurity, but it does cut down on automated scanning noise
  3. Set up authentication – Even basic auth is better than nothing
  4. Monitor logs – Check periodically for unexpected traffic
  5. Terminate when done – Don’t leave test infrastructure running

Minimal Squid Config with Authentication

For slightly better security, add basic authentication:

# Install htpasswd
sudo apt install apache2-utils
# Create password file
sudo htpasswd -c /etc/squid/passwords testuser
# Edit squid.conf
sudo nano /etc/squid/squid.conf

Add these lines:

http_port 8888
# Basic authentication
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
auth_param basic realm proxy
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
cache deny all

Restart Squid and now clients must authenticate:

curl -x http://testuser:password@your-proxy:8888 https://example.com

For Production

If you need a production proxy:

  • Use a proper reverse proxy like nginx or HAProxy with TLS
  • Implement OAuth or certificate-based authentication
  • Use AWS PrivateLink or VPC peering instead of public exposure
  • Enable detailed logging and monitoring
  • Set up rate limiting
  • Consider managed solutions like AWS API Gateway or CloudFront

Conclusion

Open proxies are magnets for abuse. Automated scanners continuously sweep the internet looking for misconfigured proxies to exploit. The symptoms are often subtle – file descriptor warnings, poor performance, or timeouts – but the fix is straightforward: restrict access to only trusted IP addresses at the network level.

For testing purposes, AWS Security Groups provide the perfect solution: instant IP-based access control with no performance overhead. Combined with monitoring the Squid access logs, you can quickly detect and eliminate abuse before it impacts your testing or incurs unexpected costs.

Remember: if you’re running a temporary test proxy, lock it down to your IP from the start. It only takes minutes for automated scanners to find and abuse an open proxy.


Key Takeaways:

  • ✅ Always restrict proxy access via security groups/firewall rules
  • ✅ Monitor access logs for unexpected IP addresses
  • ✅ Watch for file descriptor warnings as an early sign of abuse
  • ✅ Clear logs and restart after securing to verify the fix
  • ✅ Terminate test infrastructure when finished to avoid ongoing costs

How to Set Up Your Own Custom Disposable Email Domain with Mailnesia

Disposable email addresses are incredibly useful for maintaining privacy online, avoiding spam, and testing applications. While services like Mailnesia offer free disposable emails, there’s an even more powerful approach: using your own custom domain with Mailnesia’s infrastructure.

Why Use Your Own Domain?

When you use a standard disposable email service, the domain (like @mailnesia.com) is publicly known. This means:

  • Websites can easily block known disposable email domains
  • There’s no real uniqueness to your addresses
  • You’re sharing the domain with potentially millions of other users

By pointing your own domain to Mailnesia, you get:

  • Higher anonymity – Your domain isn’t in any public disposable email database
  • Unlimited addresses – Create any email address on your domain instantly
  • Professional appearance – Use a legitimate-looking domain for sign-ups
  • Better deliverability – Less likely to be flagged as a disposable email

What You’ll Need

  • A domain name you own (can be purchased for as little as $10/year)
  • Access to your domain’s DNS settings
  • That’s it!

Step-by-Step Setup

1. Access Your DNS Settings

Log into your domain registrar or DNS provider (e.g., Cloudflare, Namecheap, GoDaddy) and navigate to the DNS management section for your domain.

2. Add the MX Record

Create a new MX (Mail Exchange) record with these values:

Type: MX
Name: @ (or leave blank for root domain)
Mail Server: mailnesia.com
Priority/Preference: 10
TTL: 3600 (or default)

Important: Make sure to include the trailing dot if your DNS provider requires it: mailnesia.com.

3. Wait for DNS Propagation

DNS changes can take anywhere from a few minutes to 48 hours to fully propagate, though it’s usually quick (under an hour). You can check if your MX record is live using a DNS lookup tool.

4. Start Using Your Custom Disposable Emails

Once the DNS has propagated, any email sent to any address at your domain will be received by Mailnesia. Access your emails by going to:

https://mailnesia.com/mailbox/USERNAME

Where USERNAME is the part before the @ in your email address.

For example:

  • Email sent to: testing123@yourdomain.com
  • Access inbox at: https://mailnesia.com/mailbox/testing123
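The mailbox URL is derived mechanically from the address: take everything before the @ and append it to the Mailnesia mailbox path. In shell this is a one-line parameter expansion:

```shell
# Build the Mailnesia inbox URL from a disposable address.
email="testing123@yourdomain.com"
user="${email%%@*}"             # strip everything from the first @ onward
echo "https://mailnesia.com/mailbox/$user"
# → https://mailnesia.com/mailbox/testing123
```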

Use Cases

This setup is perfect for:

  • Service sign-ups – Use a unique email for each service (e.g., netflix@yourdomain.com, github@yourdomain.com)
  • Testing – Developers can test email functionality without setting up mail servers
  • Privacy protection – Keep your real email address private
  • Spam prevention – If an address gets compromised, simply stop using it
  • Tracking – See which services sell or leak your email by using unique addresses per service

Important Considerations

Security and Privacy

  • No authentication required – Anyone who guesses or knows your username can access that mailbox. Don’t use this for sensitive communications.
  • Temporary storage – Mailnesia emails are not stored permanently. They’re meant to be disposable.
  • No sending capability – This setup only receives emails; you cannot send from these addresses through Mailnesia.

Best Practices

  1. Use random usernames – Instead of newsletter@yourdomain.com, use something like j8dk3h@yourdomain.com for better privacy
  2. Subdomain option – Consider using a subdomain like disposable.yourdomain.com to keep it separate from your main domain
  3. Don’t use for important accounts – Reserve this for non-critical services only
  4. Monitor your usage – Keep track of which addresses you’ve used where

Technical Notes

  • You can still use your domain for regular email by setting up additional MX records with different priorities
  • Some providers may allow you to set up email forwarding in addition to this setup
  • Check Mailnesia’s terms of service for any usage restrictions

Verifying Your Setup

To test if everything is working:

  1. Send a test email to a random address at your domain (e.g., test12345@yourdomain.com)
  2. Visit https://mailnesia.com/mailbox/test12345
  3. Your email should appear within a few seconds

Troubleshooting

Emails not appearing?

  • Verify your MX record is correctly set up using an MX lookup tool
  • Ensure DNS has fully propagated (can take up to 48 hours)
  • Check that you’re using the correct mailbox URL format

Getting bounced emails?

  • Make sure the priority is set to 10 or lower
  • Verify there are no conflicting MX records

Conclusion

Setting up your own custom disposable email domain with Mailnesia is surprisingly simple and provides a powerful privacy tool. With just a single DNS record change, you gain access to unlimited disposable email addresses on your own domain, giving you greater control over your online privacy and reducing spam in your primary inbox.

The enhanced anonymity of using your own domain, combined with the zero-configuration convenience of Mailnesia’s infrastructure, makes this an ideal solution for anyone who values their privacy online.


Remember: This setup is for non-sensitive communications only. For important accounts, always use a proper email service with security features like two-factor authentication.

Fixing .NET 8 HttpClient Permission Denied Errors on Google Cloud Run

If you’re deploying a .NET 8 application to Google Cloud Run and encountering a mysterious NetworkInformationException (13): Permission denied error when making HTTP requests, you’re not alone. This is a known issue that stems from how .NET’s HttpClient interacts with Cloud Run’s restricted container environment.

The Problem

When your .NET application makes HTTP requests using HttpClient, you might see an error like this:

System.Net.NetworkInformation.NetworkInformationException (13): Permission denied
   at System.Net.NetworkInformation.NetworkChange.CreateSocket()
   at System.Net.NetworkInformation.NetworkChange.add_NetworkAddressChanged(NetworkAddressChangedEventHandler value)
   at System.Net.Http.HttpConnectionPoolManager.StartMonitoringNetworkChanges()

This error occurs because .NET’s HttpClient attempts to monitor network changes and handle advanced HTTP features like HTTP/3 and Alt-Svc (Alternative Services). To do this, it tries to create network monitoring sockets, which requires permissions that Cloud Run containers don’t have by default.

Cloud Run’s security model intentionally restricts certain system-level operations to maintain isolation and security. While this is great for security, it conflicts with .NET’s network monitoring behavior.

Why Does This Happen?

The .NET runtime includes sophisticated connection pooling and HTTP version negotiation features. When a server responds with an Alt-Svc header (suggesting alternative protocols or endpoints), .NET tries to:

  1. Monitor network interface changes
  2. Adapt connection strategies based on network conditions
  3. Support HTTP/3 where available

These features require low-level network access that Cloud Run’s sandboxed environment doesn’t permit.

The Solution

Fortunately, there’s a straightforward fix. You need to disable the features that require elevated network permissions by setting two environment variables:

Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_DISABLEIPV6", "1");
Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT", "false");

Place these lines at the very top of your Program.cs file, before any HTTP client initialization or web application builder creation.

What These Variables Do

  • DOTNET_SYSTEM_NET_DISABLEIPV6: Disables IPv6 support, which also disables the network change monitoring that requires socket creation.
  • DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT: Explicitly disables HTTP/3 support, preventing .NET from trying to negotiate HTTP/3 connections.
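Whichever approach you choose, it is worth confirming both variables are actually visible in the process environment before the runtime starts — for example from a local `docker run --rm your-image env`, or any shell inside the container:

```shell
# Set the two variables and confirm they appear in the environment.
export DOTNET_SYSTEM_NET_DISABLEIPV6=1
export DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT=false
env | grep '^DOTNET_SYSTEM_NET'
```

If either line is missing, the .NET runtime will fall back to its default network-monitoring behavior and the Permission denied error will return.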

Alternative Approaches

Option 1: Set in Dockerfile

You can bake these settings into your container image:

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app

# Disable network monitoring features
ENV DOTNET_SYSTEM_NET_DISABLEIPV6=1
ENV DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT=false

COPY publish/ .
ENTRYPOINT ["dotnet", "YourApp.dll"]

Option 2: Set via Cloud Run Configuration

You can configure these as environment variables in your Cloud Run deployment:

gcloud run deploy your-service \
  --image gcr.io/your-project/your-image \
  --set-env-vars DOTNET_SYSTEM_NET_DISABLEIPV6=1,DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT=false

Or through the Cloud Console when configuring your service’s environment variables.

Performance Impact

You might wonder if disabling these features affects performance. In practice:

  • HTTP/3 isn’t widely used yet, and most services work perfectly fine with HTTP/2 or HTTP/1.1
  • Network change monitoring is primarily useful for long-running desktop applications that move between networks (like a laptop switching from WiFi to cellular)
  • In a Cloud Run container with a stable network environment, these features provide minimal benefit

The performance impact is negligible, and the tradeoff is well worth it for a working application.

Why It Works Locally But Fails in Cloud Run

This issue often surprises developers because their code works perfectly on their development machine. That’s because:

  • Local development environments typically run with full system permissions
  • Your local machine isn’t running in a restricted container
  • Cloud Run’s security sandbox is much more restrictive than a typical development environment

This is a classic example of environment-specific behavior where security constraints in production expose issues that don’t appear during development.

Conclusion

The Permission denied error when using HttpClient in .NET 8 on Google Cloud Run is caused by the runtime’s attempt to use network monitoring features that aren’t available in Cloud Run’s restricted environment. The fix is simple: disable these features using environment variables.

This solution is officially recognized by the .NET team as the recommended workaround for containerized environments with restricted permissions, so you can use it with confidence in production.


Have you encountered other .NET deployment issues on Cloud Run? Feel free to share your experiences in the comments below.


Unlock Brand Recognition in Emails: Free #BIMI #API from AvatarAPI.com

https://www.avatarapi.com/

Email marketing is more competitive than ever, and standing out in crowded inboxes is a constant challenge. What if there was a way to instantly make your emails more recognizable and trustworthy? Enter BIMI – a game-changing email authentication standard that’s revolutionizing how brands appear in email clients.

What is BIMI? (In Simple Terms)

BIMI stands for “Brand Indicators for Message Identification.” Think of it as a verified profile picture for your company’s emails. Just like how you recognize friends by their profile photos on social media, BIMI lets email providers display your company’s official logo next to emails you send.

Here’s how it works in everyday terms:

  • Traditional email: When Spotify sends you an email, you might only see their name in your inbox
  • BIMI-enabled email: You’d see Spotify’s recognizable logo right next to their name, making it instantly clear the email is legitimate

This visual verification helps recipients quickly identify authentic emails from brands they trust, while making it harder for scammers to impersonate legitimate companies.

Why BIMI Matters for Your Business

Instant Brand Recognition: Your logo appears directly in the inbox, increasing brand visibility and email open rates.

Enhanced Trust: Recipients can immediately verify that emails are genuinely from your company, reducing the likelihood they’ll mark legitimate emails as spam.

Competitive Advantage: Many companies haven’t implemented BIMI yet, so adopting it early helps you stand out.

Better Deliverability: Email providers like Gmail and Yahoo prioritize authenticated emails, potentially improving your delivery rates.

Introducing the Free BIMI API from AVATARAPI.com

While implementing BIMI traditionally requires DNS configuration and technical setup, AVATARAPI.com offers a simple API that lets you retrieve BIMI information for any email domain instantly. This is perfect for:

  • Email marketing platforms checking sender authenticity
  • Security tools validating email sources
  • Analytics services tracking BIMI adoption
  • Developers building email-related applications

How to Use the Free BIMI API

Getting started is incredibly simple. Here’s everything you need to know:

API Endpoint

POST https://avatarapi.com/v2/api.aspx

Request Format

Send a JSON request with these parameters:

{
    "username": "demo",
    "password": "demo___",
    "provider": "Bimi",
    "email": "no-reply@alerts.spotify.com"
}

Parameters Explained:

  • username & password: Use “demo” and “demo___” for free access
  • provider: Set to “Bimi” to retrieve BIMI data
  • email: The email address you want to check for BIMI records

Example Response

The API returns comprehensive BIMI information:

{
    "Name": "",
    "Image": "https://message-editor.scdn.co/spotify_ab_1024216054.svg",
    "Valid": true,
    "City": "",
    "Country": "",
    "IsDefault": false,
    "Success": true,
    "RawData": "",
    "Source": {
        "Name": "Bimi"
    }
}

Response Fields:

  • Image: Direct URL to the company’s BIMI logo
  • Valid: Whether the BIMI record is properly configured
  • Success: Confirms the API call was successful
  • IsDefault: Indicates if this is a fallback or authentic BIMI record
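For a quick shell-level check, the two fields most integrations need (Image and Valid) can be pulled out of the response with grep and sed alone — shown here against a condensed copy of the sample response above; use a real JSON parser for anything beyond a smoke test:

```shell
# Extract the Image URL and Valid flag from a BIMI API response.
RESP='{"Image":"https://message-editor.scdn.co/spotify_ab_1024216054.svg","Valid":true,"Success":true}'
image=$(printf '%s' "$RESP" | grep -o '"Image":"[^"]*"' | sed 's/.*:"//; s/"$//')
valid=$(printf '%s' "$RESP" | grep -o '"Valid":[a-z]*' | cut -d: -f2)
echo "$image"   # → https://message-editor.scdn.co/spotify_ab_1024216054.svg
echo "$valid"   # → true
```

In practice `RESP` would come from a `curl -s -X POST` of the JSON request shown earlier to https://avatarapi.com/v2/api.aspx.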

Practical Use Cases

Email Security Platforms: Verify sender authenticity by checking if incoming emails have valid BIMI records.

Marketing Analytics: Track which competitors have implemented BIMI to benchmark your email marketing efforts.

Email Client Development: Integrate BIMI logo display into custom email applications.

Compliance Monitoring: Ensure your company’s BIMI implementation is working correctly across different domains.

Try It Now

Ready to explore BIMI data? The API is free to use with the demo credentials provided above. Simply make a POST request to test it with any email address – try major brands like Spotify, PayPal, or LinkedIn to see their BIMI implementations in action.

Whether you’re a developer building email tools, a marketer researching competitor strategies, or a security professional validating email authenticity, this free BIMI API provides instant access to valuable brand verification data.

Start integrating BIMI checking into your applications today and help make email communication more secure and recognizable for everyone.
https://www.avatarapi.com/

How to Query #LinkedIn from an #Email Address Using AvatarAPI.com

Introduction

When working with professional networking data, LinkedIn is often the go-to platform for retrieving user information based on an email address. Using AvatarAPI.com, developers can easily query LinkedIn and other data providers through a simple API request. In this guide, we’ll explore how to use the API to retrieve LinkedIn profile details from an email address.

API Endpoint

To query LinkedIn using AvatarAPI.com, send a request to:

https://avatarapi.com/v2/api.aspx

JSON Payload

A sample JSON request to query LinkedIn using an email address looks like this:

{
    "username": "demo",
    "password": "demo___",
    "provider": "LinkedIn",
    "email": "jason.smith@gmail.com"
}

Explanation of Parameters:

  • username: Your AvatarAPI.com username.
  • password: Your AvatarAPI.com password.
  • provider: The data source to query. In this case, “LinkedIn” is specified. If omitted, the API will search a default set of providers.
  • email: The email address for which LinkedIn profile data is being requested.
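As a sketch, the request above can be sent from Python using nothing but the standard library. The endpoint and demo credentials are the ones shown above; swap in your own account for real use.

```python
import json
import urllib.request

API_URL = "https://avatarapi.com/v2/api.aspx"

def build_payload(email, username="demo", password="demo___", provider="LinkedIn"):
    """Assemble the JSON body described above."""
    return {
        "username": username,
        "password": password,
        "provider": provider,
        "email": email,
    }

def query_linkedin(email):
    """POST the payload and return the decoded JSON response."""
    body = json.dumps(build_payload(email)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Calling `query_linkedin("jason.smith@gmail.com")` returns a plain dict you can inspect for `Name`, `Image`, `RawData`, and the other fields.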

API Response

A successful response from the API may look like this:

{
    "Name": "Jason Smith",
    "Image": "https://media.licdn.com/dms/image/D4E12AQEud3Ll5MI7cQ/article-inline_image-shrink_1500_2232/0/1660833954461?e=1716422400&v=beta&t=r-9LmmNBpvS4bUiL6k-egJ8wUIpEeEMl9NJuAt7pTsc",
    "Valid": true,
    "City": "Los Angeles, California, United States",
    "Country": "US",
    "IsDefault": false,
    "Success": true,
    "RawData": "{\"resultTemplate\":\"ExactMatch\",\"bound\":false,\"persons\":[{\"id\":\"urn:li:person:DgEdy8DNfhxlX15HDuxWp7k6hYP5jIlL8fqtFRN7YR4\",\"displayName\":\"Jason Smith\",\"headline\":\"Creative Co-founder at Mega Ventures\",\"summary\":\"Jason Smith Head of Design at Mega Ventures.\",\"companyName\":\"Mega Ventures\",\"location\":\"Los Angeles, California, United States\",\"linkedInUrl\":\"https://linkedin.com/in/jason-smith\",\"connectionCount\":395,\"skills\":[\"Figma (Software)\",\"Facebook\",\"Customer Service\",\"Event Planning\",\"Social Media\",\"Sales\",\"Healthcare\",\"Management\",\"Web Design\",\"JavaScript\",\"Software Development\",\"Project Management\",\"APIs\"]}]}",
    "Source": {
        "Name": "LinkedIn"
    }
}

Explanation of Response Fields:

  • Name: The full name of the LinkedIn user.
  • Image: The profile image URL.
  • Valid: Indicates whether the returned data is valid.
  • City: The city where the user is located.
  • Country: The country of residence.
  • IsDefault: Indicates whether the data is a fallback/default.
  • Success: Confirms if the request was successful.
  • RawData: Contains additional structured data about the LinkedIn profile, including:
    • LinkedIn ID: A unique identifier for the user’s LinkedIn profile.
    • Display Name: The name displayed on the user’s profile.
    • Headline: The professional headline, typically the current job title or a short description of expertise.
    • Summary: A brief bio or description of the user’s professional background.
    • Company Name: The company where the user currently works.
    • Location: The geographical location of the user.
    • LinkedIn Profile URL: A direct link to the user’s LinkedIn profile.
    • Connection Count: The number of LinkedIn connections the user has.
    • Skills: A list of skills associated with the user’s profile, such as programming languages, software expertise, or industry-specific abilities.
    • Education History: Details about the user’s academic background, including universities attended, degrees earned, and fields of study.
    • Employment History: Information about past and present positions, including company names, job titles, and employment dates.
    • Projects and Accomplishments: Notable work the user has contributed to, certifications, publications, and other professional achievements.
    • Endorsements: Skill endorsements from other LinkedIn users, showcasing credibility in specific domains.
  • Source.Name: The data provider (LinkedIn in this case).

LinkedIn Rate Limiting

By default, LinkedIn queries are subject to rate limits. To bypass these limits, two additional parameters can be included alongside the standard fields in the JSON request:

{
    "overrideAccount": "your_override_username",
    "overridePassword": "your_override_password"
}

Using these credentials allows queries to be processed without rate limiting. However, to enable this feature, you should contact AvatarAPI.com to discuss setup and access.

Conclusion

AvatarAPI.com provides a powerful way to retrieve LinkedIn profile data using just an email address. While LinkedIn is one of the available providers, the API also supports other data sources if the provider field is omitted. With proper setup, including rate-limit bypassing credentials, you can ensure seamless access to professional networking data.

For more details, visit AvatarAPI.com.

Resolving Unauthorized Error When Deploying an #Azure Function via #ZipDeploy

Deploying an Azure Function to an App Service can sometimes result in an authentication error, preventing successful publishing. One common error developers encounter is:

Error: The attempt to publish the ZIP file through https://<function-name>.scm.azurewebsites.net/api/zipdeploy failed with HTTP status code Unauthorized.

This error typically occurs when the deployment process lacks the necessary authentication permissions to publish to Azure. Below, we outline the steps to resolve this issue by enabling SCM Basic Auth Publishing in the Azure Portal.

Understanding the Issue

The error indicates that Azure is rejecting the deployment request due to authentication failure. This often happens when the SCM (Kudu) deployment service does not have the correct permissions enabled, preventing the publishing process from proceeding.

Solution: Enable SCM Basic Auth Publishing

To resolve this issue, follow these steps:

  1. Open the Azure Portal and navigate to your Function App.
  2. In the left-hand menu, select Configuration.
  3. Under the General settings tab, locate SCM Basic Auth Publishing.
  4. Toggle the setting to On.
  5. Click Save and restart the Function App if necessary.

Once this setting is enabled, retry the deployment from Visual Studio or your chosen deployment method. The unauthorized error should now be resolved.
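With basic auth enabled, the zipdeploy endpoint is just an authenticated HTTPS POST. A rough stdlib sketch of the request Kudu expects (the function and variable names here are placeholders, not a drop-in deployment script):

```python
import base64
import urllib.request

def build_zipdeploy_request(app_name, user, password, zip_bytes):
    """Build the POST that pushes a deployment zip to the Kudu/SCM endpoint."""
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"https://{app_name}.scm.azurewebsites.net/api/zipdeploy",
        data=zip_bytes,
        headers={
            "Authorization": f"Basic {credentials}",  # SCM basic auth credentials
            "Content-Type": "application/zip",
        },
        method="POST",
    )

# Sending is then: urllib.request.urlopen(build_zipdeploy_request(...))
```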

Additional Considerations

  • Use Deployment Credentials: If you prefer not to enable SCM Basic Auth, consider setting up deployment credentials under Deployment Center → FTP/Credentials.
  • Check Azure Authentication in Visual Studio: Ensure that you are logged into the correct Azure account in Visual Studio under Tools → Options → Azure Service Authentication.
  • Use Azure CLI for Deployment: If problems persist, try deploying with the Azure CLI:

az functionapp deployment source config-zip \
    --resource-group <resource-group> \
    --name <function-app-name> \
    --src <zip-file-path>

By enabling SCM Basic Auth Publishing, you ensure that Azure’s deployment service can authenticate and process your function’s updates smoothly. This quick fix saves time and prevents unnecessary troubleshooting steps.


Obtaining an Access Token for Outlook Web Access (#OWA) Using a Consumer Account

If you need programmatic access to Outlook Web Access (OWA) using a Microsoft consumer account (e.g., an Outlook.com, Hotmail, or Live.com email), you can obtain an access token using the Microsoft Authentication Library (MSAL). The following C# code demonstrates how to authenticate a consumer account and retrieve an access token.

Prerequisites

To run this code successfully, ensure you have:

  • .NET installed
  • The Microsoft.Identity.Client NuGet package
  • A registered application in the Microsoft Entra ID (formerly Azure AD) portal with the necessary API permissions

Code Breakdown

The following code authenticates a user using the device code flow, which is useful for scenarios where interactive login via a browser is required but the application does not have direct access to a web interface.

1. Define Authentication Metadata

var authMetadata = new
{
    ClientId = "9199bf20-a13f-4107-85dc-02114787ef48", // Application (client) ID
    Tenant = "consumers", // Target consumer accounts (not work/school accounts)
    Scope = "service::outlook.office.com::MBI_SSL openid profile offline_access"
};
  • ClientId: Identifies the application in Microsoft Entra ID.
  • Tenant: Set to consumers to restrict authentication to personal Microsoft accounts.
  • Scope: Defines the permissions the application is requesting. In this case:
    • service::outlook.office.com::MBI_SSL is required to access Outlook services.
    • openid, profile, and offline_access allow authentication and token refresh.

2. Configure the Authentication Application

var app = PublicClientApplicationBuilder
    .Create(authMetadata.ClientId)
    .WithAuthority($"https://login.microsoftonline.com/{authMetadata.Tenant}")
    .Build();
  • PublicClientApplicationBuilder is used to create a public client application that interacts with Microsoft identity services.
  • .WithAuthority() specifies that authentication should occur against Microsoft’s login endpoint for consumer accounts.

3. Initiate the Device Code Flow

var scopes = new string[] { authMetadata.Scope };

var result = await app.AcquireTokenWithDeviceCode(scopes, deviceCodeResult =>
{
    Console.WriteLine(deviceCodeResult.Message); // Display login instructions
    return Task.CompletedTask;
}).ExecuteAsync();
  • AcquireTokenWithDeviceCode() initiates authentication using a device code.
  • The deviceCodeResult.Message provides instructions to the user on how to authenticate (typically directing them to https://microsoft.com/devicelogin).
  • Once the user completes authentication, the application receives an access token.

4. Retrieve and Display the Access Token

Console.WriteLine($"Access Token: {result.AccessToken}");
  • The retrieved token can now be used to make API calls to Outlook Web Access services.

5. Handle Errors

catch (MsalException ex)
{
    Console.WriteLine($"Authentication failed: {ex.Message}");
}
  • MsalException handles authentication errors, such as incorrect permissions or expired tokens.

Running the Code

  1. Compile and run the program.
  2. Follow the login instructions displayed in the console.
  3. After signing in, the access token will be printed.
  4. Use the token in HTTP requests to Outlook Web Access APIs.
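For the curious, the two HTTP calls MSAL makes behind AcquireTokenWithDeviceCode follow the standard OAuth 2.0 device authorization grant. The endpoint paths and parameter names below are assumptions from that spec, not taken from MSAL's source; MSAL handles all of this for you.

```python
import urllib.parse

# OAuth 2.0 device flow endpoints for the "consumers" tenant (assumed paths)
DEVICE_CODE_URL = "https://login.microsoftonline.com/consumers/oauth2/v2.0/devicecode"
TOKEN_URL = "https://login.microsoftonline.com/consumers/oauth2/v2.0/token"

def build_device_code_body(client_id: str, scope: str) -> str:
    """Form body for the first call, which returns the user and device codes."""
    return urllib.parse.urlencode({"client_id": client_id, "scope": scope})

def build_token_poll_body(client_id: str, device_code: str) -> str:
    """Form body for polling the token endpoint until the user has signed in."""
    return urllib.parse.urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "client_id": client_id,
        "device_code": device_code,
    })
```

The first body is POSTed to the devicecode endpoint; the second is POSTed to the token endpoint on an interval until the user completes sign-in and an access token is returned.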

Conclusion

This code provides a straightforward way to obtain an access token for Outlook Web Access using a consumer account. The device code flow is particularly useful for command-line applications or scenarios where interactive authentication via a browser is required.

How to Extract #EXIF Data from an Image in .NET 8 with #MetadataExtractor

GIT REPO : https://github.com/infiniteloopltd/ExifResearch

When working with images, EXIF (Exchangeable Image File Format) data can provide valuable information such as the camera model, date and time of capture, GPS coordinates, and much more. Whether you’re building an image processing application or simply want to extract metadata for analysis, knowing how to retrieve EXIF data in a .NET environment is essential.

In this post, we’ll walk through how to extract EXIF data from an image in .NET 8 using the cross-platform MetadataExtractor library.

Why Use MetadataExtractor?

.NET’s traditional System.Drawing.Common library has limitations when it comes to cross-platform compatibility, particularly for non-Windows environments. The MetadataExtractor library, however, is a powerful and platform-independent solution for extracting metadata from various image formats, including EXIF data.

With MetadataExtractor, you can read EXIF metadata from images in a clean, efficient way, making it an ideal choice for .NET Core and .NET 8 developers working on cross-platform applications.

Step 1: Install MetadataExtractor

To begin, you need to add the MetadataExtractor NuGet package to your project. You can install it using the following command:

dotnet add package MetadataExtractor

This package supports EXIF, IPTC, XMP, and many other metadata formats from various image file types.

Step 2: Writing the Code to Extract EXIF Data

Now that the package is installed, let’s write some code to extract EXIF data from an image stored as a byte array.

Here is the complete function:

using System;
using System.Collections.Generic;
using System.IO;
using MetadataExtractor;
using MetadataExtractor.Formats.Exif;

public class ExifReader
{
    public static Dictionary<string, string> GetExifData(byte[] imageBytes)
    {
        var exifData = new Dictionary<string, string>();

        try
        {
            using var ms = new MemoryStream(imageBytes);
            var directories = ImageMetadataReader.ReadMetadata(ms);

            foreach (var directory in directories)
            {
                foreach (var tag in directory.Tags)
                {
                    // Add tag name and description to the dictionary
                    // (Description can be null for tags that cannot be rendered)
                    exifData[tag.Name] = tag.Description ?? string.Empty;
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error reading EXIF data: {ex.Message}");
        }

        return exifData;
    }
}

How It Works:

  1. Reading Image Metadata: The function uses ImageMetadataReader.ReadMetadata to read all the metadata from the byte array containing the image.
  2. Iterating Through Directories and Tags: EXIF data is organized in directories (for example, the main EXIF data, GPS, and thumbnail directories). We iterate through these directories and their associated tags.
  3. Handling Errors: We wrap the logic in a try-catch block to ensure any potential errors (e.g., unsupported formats) are handled gracefully.
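Under the hood, libraries like MetadataExtractor first have to locate the EXIF payload inside the JPEG's APP1 marker segment before decoding its TIFF structure. As an illustration of that first step only, here is a minimal marker scan in Python (stdlib only; the helper name is hypothetical):

```python
import struct

def find_exif_segment(jpeg_bytes):
    """Scan JPEG marker segments and return the raw EXIF payload, or None."""
    if jpeg_bytes[:2] != b"\xff\xd8":          # SOI marker: not a JPEG
        return None
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            return None                         # lost sync with marker stream
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:
            return None                         # EOI: no EXIF segment found
        if marker == 0x01 or 0xD0 <= marker <= 0xD8:
            i += 2                              # standalone markers carry no length
            continue
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        segment = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xE1 and segment[:6] == b"Exif\x00\x00":
            return segment[6:]                  # TIFF-structured EXIF data
        i += 2 + length
    return None
```

The returned bytes are the TIFF-structured EXIF block that MetadataExtractor then parses into directories and tags for you.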

Step 3: Usage Example

To use this function, you can pass an image byte array to it. Here’s an example:

using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Replace with your byte array containing an image
        byte[] imageBytes = File.ReadAllBytes("example.jpg");

        var exifData = ExifReader.GetExifData(imageBytes);

        foreach (var kvp in exifData)
        {
            Console.WriteLine($"{kvp.Key}: {kvp.Value}");
        }
    }
}

This code reads an image from the file system as a byte array and then uses the ExifReader.GetExifData method to extract the EXIF data. Finally, it prints out the EXIF tags and their descriptions.

Example Output:

If the image contains EXIF metadata, the output might look something like this:

Compression Type: Baseline
Data Precision: 8 bits
Image Height: 384 pixels
Image Width: 512 pixels
Number of Components: 3
Component 1: Y component: Quantization table 0, Sampling factors 2 horiz/2 vert
Component 2: Cb component: Quantization table 1, Sampling factors 1 horiz/1 vert
Component 3: Cr component: Quantization table 1, Sampling factors 1 horiz/1 vert
Make: samsung
Model: SM-G998B
Orientation: Right side, top (Rotate 90 CW)
X Resolution: 72 dots per inch
Y Resolution: 72 dots per inch
Resolution Unit: Inch
Software: G998BXXU7EWCH
Date/Time: 2023:05:02 12:33:47
YCbCr Positioning: Center of pixel array
Exposure Time: 1/33 sec
F-Number: f/2.2
Exposure Program: Program normal
ISO Speed Ratings: 640
Exif Version: 2.20
Date/Time Original: 2023:05:02 12:33:47
Date/Time Digitized: 2023:05:02 12:33:47
Time Zone: +09:00
Time Zone Original: +09:00
Shutter Speed Value: 1 sec
Aperture Value: f/2.2
Exposure Bias Value: 0 EV
Max Aperture Value: f/2.2
Metering Mode: Center weighted average
Flash: Flash did not fire
Focal Length: 2.2 mm
Sub-Sec Time: 404
Sub-Sec Time Original: 404
Sub-Sec Time Digitized: 404
Color Space: sRGB
Exif Image Width: 4000 pixels
Exif Image Height: 3000 pixels
Exposure Mode: Auto exposure
White Balance Mode: Auto white balance
Digital Zoom Ratio: 1
Focal Length 35: 13 mm
Scene Capture Type: Standard
Unique Image ID: F12XSNF00NM
Compression: JPEG (old-style)
Thumbnail Offset: 824 bytes
Thumbnail Length: 49594 bytes
Number of Tables: 4 Huffman tables
Detected File Type Name: JPEG
Detected File Type Long Name: Joint Photographic Experts Group
Detected MIME Type: image/jpeg
Expected File Name Extension: jpg

This is just a small sample of the information EXIF can store. Depending on the camera and settings, you may find data on GPS location, white balance, focal length, and more.

Why Use EXIF Data?

EXIF data can be valuable in various scenarios:

  • Image processing: Automatically adjust images based on camera settings (e.g., ISO or exposure time).
  • Data analysis: Track when and where photos were taken, especially when handling large datasets of images.
  • Digital forensics: Verify image authenticity by analyzing EXIF metadata for manipulation or alterations.

Conclusion

With the MetadataExtractor library, extracting EXIF data from an image is straightforward and cross-platform compatible. Whether you’re building a photo management app, an image processing tool, or just need to analyze metadata, this approach is an efficient solution for working with EXIF data in .NET 8.

By using this solution, you can extract a wide range of metadata from images, making your applications smarter and more capable. Give it a try and unlock the hidden data in your images!

Understanding TLS Fingerprinting

TLS fingerprinting is a technique bot-detection software uses to tell the difference between a browser and a bot. It works transparently and quickly, but it is not infallible. It relies on the fact that when a secure HTTPS connection is negotiated between client and server, the client announces the cipher suites it supports. That list can be compared against the “claimed” user agent, to check whether those are the ciphers that user agent (browser) would actually offer.

It’s easy for a bot to claim to be Chrome: just set the user agent string to match a modern version of Chrome. It’s much harder to offer exactly the cipher suites Chrome offers. So if the HTTP request says it’s Chrome but doesn’t offer Chrome’s ciphers, then it probably isn’t Chrome, and it’s a bot.

There is a really handy tool at https://tls.peet.ws/api/all, which lists the ciphers offered in the connection. If you visit it in a browser like Chrome, you’ll see this list of ciphers:

 "ciphers": [
      "TLS_GREASE (0xEAEA)",
      "TLS_AES_128_GCM_SHA256",
      "TLS_AES_256_GCM_SHA384",
      "TLS_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_RSA_WITH_AES_128_CBC_SHA",
      "TLS_RSA_WITH_AES_256_CBC_SHA"
    ]

Whereas if you visit it using Firefox, you’ll see this:

 "ciphers": [
      "TLS_AES_128_GCM_SHA256",
      "TLS_CHACHA20_POLY1305_SHA256",
      "TLS_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_RSA_WITH_AES_128_CBC_SHA",
      "TLS_RSA_WITH_AES_256_CBC_SHA"
    ],

Use curl, or WebClient in C#, and you’ll see this:

 "ciphers": [
      "TLS_AES_256_GCM_SHA384",
      "TLS_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
      "TLS_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_RSA_WITH_AES_256_CBC_SHA256",
      "TLS_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_RSA_WITH_AES_256_CBC_SHA",
      "TLS_RSA_WITH_AES_128_CBC_SHA"
    ],

So, even with a cursory glance, you could check whether TLS_GREASE or TLS_CHACHA20_POLY1305_SHA256 is present, and declare the client a bot if these ciphers are missing. More advanced checks could also factor in the version of Chrome, the operating system, and so forth, but that is the core technique.
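That cursory check can be sketched in a few lines. The helper name is hypothetical, and the marker ciphers are taken from the listings above (present for Chrome, absent for the stock .NET/curl stack):

```python
# Ciphers a genuine modern Chrome offers but the .NET/curl list above lacks
CHROME_MARKER_CIPHERS = {"TLS_CHACHA20_POLY1305_SHA256"}

def probably_a_bot(claimed_user_agent, offered_ciphers):
    """Flag clients that claim to be Chrome but lack Chrome's characteristic ciphers."""
    if "Chrome" not in claimed_user_agent:
        return False  # this sketch only validates the Chrome claim
    has_grease = any(c.startswith("TLS_GREASE") for c in offered_ciphers)
    has_markers = CHROME_MARKER_CIPHERS.issubset(set(offered_ciphers))
    return not (has_grease and has_markers)
```

A client with a Chrome user agent but the .NET cipher list would be flagged; a client offering GREASE plus the marker ciphers would pass.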

However, using the TLS-Client library in Python allows a custom set of ciphers to be offered, making the resulting TLS fingerprint much more similar to (if not indistinguishable from) Chrome’s.

https://github.com/infiniteloopltd/TLS

import tls_client  # pip install tls-client

session = tls_client.Session(
    client_identifier="chrome_120",
    random_tls_extension_order=True
)
page_url = "https://tls.peet.ws/api/all"
res = session.get(page_url)
print(res.text)

I am now curious to know if I can apply the same logic to C# …