How to Set Up Your Own Custom Disposable Email Domain with Mailnesia
Disposable email addresses are incredibly useful for maintaining privacy online, avoiding spam, and testing applications. While services like Mailnesia offer free disposable emails, there’s an even more powerful approach: using your own custom domain with Mailnesia’s infrastructure.
Why Use Your Own Domain?
When you use a standard disposable email service, the domain (like @mailnesia.com) is publicly known. This means:
- Websites can easily block known disposable email domains
- There’s no real uniqueness to your addresses
- You’re sharing the domain with potentially millions of other users
By pointing your own domain to Mailnesia, you get:
- Higher anonymity – Your domain isn’t in any public disposable email database
- Unlimited addresses – Create any email address on your domain instantly
- Professional appearance – Use a legitimate-looking domain for sign-ups
- Better deliverability – Less likely to be flagged as a disposable email
What You’ll Need
- A domain name you own (can be purchased for as little as $10/year)
- Access to your domain’s DNS settings
- That’s it!
Step-by-Step Setup
1. Access Your DNS Settings
Log into your domain registrar or DNS provider (e.g., Cloudflare, Namecheap, GoDaddy) and navigate to the DNS management section for your domain.
2. Add the MX Record
Create a new MX (Mail Exchange) record with these values:
Type: MX
Name: @ (or leave blank for root domain)
Mail Server: mailnesia.com
Priority/Preference: 10
TTL: 3600 (or default)
Important: Make sure to include the trailing dot if your DNS provider requires it: mailnesia.com.
3. Wait for DNS Propagation
DNS changes can take anywhere from a few minutes to 48 hours to fully propagate, though it’s usually quick (under an hour). You can check if your MX record is live using a DNS lookup tool.
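If you prefer to check from code, here is a minimal C# sketch using the third-party DnsClient NuGet package (my choice of library here, not something Mailnesia requires; any online MX lookup tool works just as well):

using System;
using System.Threading.Tasks;
using DnsClient;           // dotnet add package DnsClient (third-party, assumed)
using DnsClient.Protocol;  // for the MxRecords() helper

class CheckMx
{
    static async Task Main()
    {
        var lookup = new LookupClient();
        var result = await lookup.QueryAsync("yourdomain.com", QueryType.MX); // your domain here
        foreach (var mx in result.Answers.MxRecords())
        {
            Console.WriteLine($"{mx.Preference} {mx.Exchange}"); // expect: 10 mailnesia.com.
        }
    }
}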
4. Start Using Your Custom Disposable Emails
Once the DNS has propagated, any email sent to any address at your domain will be received by Mailnesia. Access your emails by going to:
https://mailnesia.com/mailbox/USERNAME
Where USERNAME is the part before the @ in your email address.
For example:
- Email sent to: testing123@yourdomain.com
- Access inbox at: https://mailnesia.com/mailbox/testing123
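In code, mapping an address to its inbox URL is a one-liner; a quick C# sketch:

var email = "testing123@yourdomain.com";
var inboxUrl = $"https://mailnesia.com/mailbox/{email.Split('@')[0]}";
// inboxUrl == "https://mailnesia.com/mailbox/testing123"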
Use Cases
This setup is perfect for:
- Service sign-ups – Use a unique email for each service (e.g., netflix@yourdomain.com, github@yourdomain.com)
- Testing – Developers can test email functionality without setting up mail servers
- Privacy protection – Keep your real email address private
- Spam prevention – If an address gets compromised, simply stop using it
- Tracking – See which services sell or leak your email by using unique addresses per service
Important Considerations
Security and Privacy
- No authentication required – Anyone who guesses or knows your username can access that mailbox. Don’t use this for sensitive communications.
- Temporary storage – Mailnesia emails are not stored permanently. They’re meant to be disposable.
- No sending capability – This setup only receives emails; you cannot send from these addresses through Mailnesia.
Best Practices
- Use random usernames – Instead of newsletter@yourdomain.com, use something like j8dk3h@yourdomain.com for better privacy
- Subdomain option – Consider using a subdomain like disposable.yourdomain.com to keep it separate from your main domain
- Don’t use for important accounts – Reserve this for non-critical services only
- Monitor your usage – Keep track of which addresses you’ve used where
Technical Notes
- You can still use your domain for regular email by setting up additional MX records with different priorities
- Some providers may allow you to set up email forwarding in addition to this setup
- Check Mailnesia’s terms of service for any usage restrictions
Verifying Your Setup
To test if everything is working:
- Send a test email to a random address at your domain (e.g., test12345@yourdomain.com)
- Visit https://mailnesia.com/mailbox/test12345
- Your email should appear within a few seconds
Troubleshooting
Emails not appearing?
- Verify your MX record is correctly set up using an MX lookup tool
- Ensure DNS has fully propagated (can take up to 48 hours)
- Check that you’re using the correct mailbox URL format
Getting bounced emails?
- Make sure the MX priority is set to a low value such as 10 (lower numbers are tried first)
- Verify there are no conflicting MX records
Conclusion
Setting up your own custom disposable email domain with Mailnesia is surprisingly simple and provides a powerful privacy tool. With just a single DNS record change, you gain access to unlimited disposable email addresses on your own domain, giving you greater control over your online privacy and reducing spam in your primary inbox.
The enhanced anonymity of using your own domain, combined with the zero-configuration convenience of Mailnesia’s infrastructure, makes this an ideal solution for anyone who values their privacy online.
Remember: This setup is for non-sensitive communications only. For important accounts, always use a proper email service with security features like two-factor authentication.
Fixing .NET 8 HttpClient Permission Denied Errors on Google Cloud Run
If you’re deploying a .NET 8 application to Google Cloud Run and encountering a mysterious NetworkInformationException (13): Permission denied error when making HTTP requests, you’re not alone. This is a known issue that stems from how .NET’s HttpClient interacts with Cloud Run’s restricted container environment.
The Problem
When your .NET application makes HTTP requests using HttpClient, you might see an error like this:
System.Net.NetworkInformation.NetworkInformationException (13): Permission denied
at System.Net.NetworkInformation.NetworkChange.CreateSocket()
at System.Net.NetworkInformation.NetworkChange.add_NetworkAddressChanged(NetworkAddressChangedEventHandler value)
at System.Net.Http.HttpConnectionPoolManager.StartMonitoringNetworkChanges()
This error occurs because .NET’s HttpClient attempts to monitor network changes and handle advanced HTTP features like HTTP/3 and Alt-Svc (Alternative Services). To do this, it tries to create network monitoring sockets, which requires permissions that Cloud Run containers don’t have by default.
Cloud Run’s security model intentionally restricts certain system-level operations to maintain isolation and security. While this is great for security, it conflicts with .NET’s network monitoring behavior.
Why Does This Happen?
The .NET runtime includes sophisticated connection pooling and HTTP version negotiation features. When a server responds with an Alt-Svc header (suggesting alternative protocols or endpoints), .NET tries to:
- Monitor network interface changes
- Adapt connection strategies based on network conditions
- Support HTTP/3 where available
These features require low-level network access that Cloud Run’s sandboxed environment doesn’t permit.
The Solution
Fortunately, there’s a straightforward fix. You need to disable the features that require elevated network permissions by setting two environment variables:
Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_DISABLEIPV6", "1");
Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT", "false");
Place these lines at the very top of your Program.cs file, before any HTTP client initialization or web application builder creation.
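For context, a minimal Program.cs might look like this (a sketch; the outbound URL is just a placeholder):

// Program.cs – set the switches before anything touches the network stack
Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_DISABLEIPV6", "1");
Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT", "false");

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient(); // HttpClient registrations come after the switches
var app = builder.Build();

app.MapGet("/", async (IHttpClientFactory factory) =>
{
    using var client = factory.CreateClient();
    return await client.GetStringAsync("https://example.com/"); // placeholder outbound call
});

app.Run();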
What These Variables Do
- DOTNET_SYSTEM_NET_DISABLEIPV6: Disables IPv6 support, which also disables the network change monitoring that requires socket creation.
- DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT: Explicitly disables HTTP/3 support, preventing .NET from trying to negotiate HTTP/3 connections.
Alternative Approaches
Option 1: Set in Dockerfile
You can bake these settings into your container image:
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
# Disable network monitoring features
ENV DOTNET_SYSTEM_NET_DISABLEIPV6=1
ENV DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT=false
COPY publish/ .
ENTRYPOINT ["dotnet", "YourApp.dll"]
Option 2: Set via Cloud Run Configuration
You can configure these as environment variables in your Cloud Run deployment:
gcloud run deploy your-service \
--image gcr.io/your-project/your-image \
--set-env-vars DOTNET_SYSTEM_NET_DISABLEIPV6=1,DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT=false
Or through the Cloud Console when configuring your service’s environment variables.
Performance Impact
You might wonder if disabling these features affects performance. In practice:
- HTTP/3 isn’t widely used yet, and most services work perfectly fine with HTTP/2 or HTTP/1.1
- Network change monitoring is primarily useful for long-running desktop applications that move between networks (like a laptop switching from WiFi to cellular)
- In a Cloud Run container with a stable network environment, these features provide minimal benefit
The performance impact is negligible, and the tradeoff is well worth it for a working application.
Why It Works Locally But Fails in Cloud Run
This issue often surprises developers because their code works perfectly on their development machine. That’s because:
- Local development environments typically run with full system permissions
- Your local machine isn’t running in a restricted container
- Cloud Run’s security sandbox is much more restrictive than a typical development environment
This is a classic example of environment-specific behavior where security constraints in production expose issues that don’t appear during development.
Conclusion
The Permission denied error when using HttpClient in .NET 8 on Google Cloud Run is caused by the runtime’s attempt to use network monitoring features that aren’t available in Cloud Run’s restricted environment. The fix is simple: disable these features using environment variables.
This solution is officially recognized by the .NET team as the recommended workaround for containerized environments with restricted permissions, so you can use it with confidence in production.
Have you encountered other .NET deployment issues on Cloud Run? Feel free to share your experiences in the comments below.
Unlock Brand Recognition in Emails: Free #BIMI #API from AvatarAPI.com

Email marketing is more competitive than ever, and standing out in crowded inboxes is a constant challenge. What if there was a way to instantly make your emails more recognizable and trustworthy? Enter BIMI – a game-changing email authentication standard that’s revolutionizing how brands appear in email clients.
What is BIMI? (In Simple Terms)
BIMI stands for “Brand Indicators for Message Identification.” Think of it as a verified profile picture for your company’s emails. Just like how you recognize friends by their profile photos on social media, BIMI lets email providers display your company’s official logo next to emails you send.
Here’s how it works in everyday terms:
- Traditional email: When Spotify sends you an email, you might only see their name in your inbox
- BIMI-enabled email: You’d see Spotify’s recognizable logo right next to their name, making it instantly clear the email is legitimate
This visual verification helps recipients quickly identify authentic emails from brands they trust, while making it harder for scammers to impersonate legitimate companies.
Why BIMI Matters for Your Business
Instant Brand Recognition: Your logo appears directly in the inbox, increasing brand visibility and email open rates.
Enhanced Trust: Recipients can immediately verify that emails are genuinely from your company, reducing the likelihood they’ll mark legitimate emails as spam.
Competitive Advantage: Many companies haven’t implemented BIMI yet, so adopting it early helps you stand out.
Better Deliverability: Email providers like Gmail and Yahoo prioritize authenticated emails, potentially improving your delivery rates.
Introducing the Free BIMI API from AVATARAPI.com
While implementing BIMI traditionally requires DNS configuration and technical setup, AVATARAPI.com offers a simple API that lets you retrieve BIMI information for any email domain instantly. This is perfect for:
- Email marketing platforms checking sender authenticity
- Security tools validating email sources
- Analytics services tracking BIMI adoption
- Developers building email-related applications
How to Use the Free BIMI API
Getting started is incredibly simple. Here’s everything you need to know:
API Endpoint
POST https://avatarapi.com/v2/api.aspx
Request Format
Send a JSON request with these parameters:
{
  "username": "demo",
  "password": "demo___",
  "provider": "Bimi",
  "email": "no-reply@alerts.spotify.com"
}
Parameters Explained:
- username & password: Use “demo” and “demo___” for free access
- provider: Set to “Bimi” to retrieve BIMI data
- email: The email address you want to check for BIMI records
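To try it from code, here is a minimal C# sketch that POSTs the demo payload above with HttpClient:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class BimiDemo
{
    static async Task Main()
    {
        const string payload = @"{
  ""username"": ""demo"",
  ""password"": ""demo___"",
  ""provider"": ""Bimi"",
  ""email"": ""no-reply@alerts.spotify.com""
}";
        using var client = new HttpClient();
        var response = await client.PostAsync(
            "https://avatarapi.com/v2/api.aspx",
            new StringContent(payload, Encoding.UTF8, "application/json"));
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}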
Example Response
The API returns comprehensive BIMI information:
{
  "Name": "",
  "Image": "https://message-editor.scdn.co/spotify_ab_1024216054.svg",
  "Valid": true,
  "City": "",
  "Country": "",
  "IsDefault": false,
  "Success": true,
  "RawData": "",
  "Source": {
    "Name": "Bimi"
  }
}
Response Fields:
- Image: Direct URL to the company’s BIMI logo
- Valid: Whether the BIMI record is properly configured
- Success: Confirms the API call was successful
- IsDefault: Indicates if this is a fallback or authentic BIMI record
Practical Use Cases
Email Security Platforms: Verify sender authenticity by checking if incoming emails have valid BIMI records.
Marketing Analytics: Track which competitors have implemented BIMI to benchmark your email marketing efforts.
Email Client Development: Integrate BIMI logo display into custom email applications.
Compliance Monitoring: Ensure your company’s BIMI implementation is working correctly across different domains.
Try It Now
Ready to explore BIMI data? The API is free to use with the demo credentials provided above. Simply make a POST request to test it with any email address – try major brands like Spotify, PayPal, or LinkedIn to see their BIMI implementations in action.
Whether you’re a developer building email tools, a marketer researching competitor strategies, or a security professional validating email authenticity, this free BIMI API provides instant access to valuable brand verification data.
Start integrating BIMI checking into your applications today and help make email communication more secure and recognizable for everyone.
https://www.avatarapi.com/
How to Query #LinkedIn from an #Email Address Using AvatarAPI.com

Introduction
When working with professional networking data, LinkedIn is often the go-to platform for retrieving user information based on an email address. Using AvatarAPI.com, developers can easily query LinkedIn and other data providers through a simple API request. In this guide, we’ll explore how to use the API to retrieve LinkedIn profile details from an email address.
API Endpoint
To query LinkedIn using AvatarAPI.com, send a request to:
https://avatarapi.com/v2/api.aspx
JSON Payload
A sample JSON request to query LinkedIn using an email address looks like this:
{
  "username": "demo",
  "password": "demo___",
  "provider": "LinkedIn",
  "email": "jason.smith@gmail.com"
}
Explanation of Parameters:
- username: Your AvatarAPI.com username.
- password: Your AvatarAPI.com password.
- provider: The data source to query. In this case, “LinkedIn” is specified. If omitted, the API will search a default set of providers.
- email: The email address for which LinkedIn profile data is being requested.
API Response
A successful response from the API may look like this:
{
"Name": "Jason Smith",
"Image": "https://media.licdn.com/dms/image/D4E12AQEud3Ll5MI7cQ/article-inline_image-shrink_1500_2232/0/1660833954461?e=1716422400&v=beta&t=r-9LmmNBpvS4bUiL6k-egJ8wUIpEeEMl9NJuAt7pTsc",
"Valid": true,
"City": "Los Angeles, California, United States",
"Country": "US",
"IsDefault": false,
"Success": true,
"RawData": "{\"resultTemplate\":\"ExactMatch\",\"bound\":false,\"persons\":[{\"id\":\"urn:li:person:DgEdy8DNfhxlX15HDuxWp7k6hYP5jIlL8fqtFRN7YR4\",\"displayName\":\"Jason Smith\",\"headline\":\"Creative Co-founder at Mega Ventures\",\"summary\":\"Jason Smith Head of Design at Mega Ventures.\",\"companyName\":\"Mega Ventures\",\"location\":\"Los Angeles, California, United States\",\"linkedInUrl\":\"https://linkedin.com/in/jason-smith\",\"connectionCount\":395,\"skills\":[\"Figma (Software)\",\"Facebook\",\"Customer Service\",\"Event Planning\",\"Social Media\",\"Sales\",\"Healthcare\",\"Management\",\"Web Design\",\"JavaScript\",\"Software Development\",\"Project Management\",\"APIs\"]}]}",
"Source": {
"Name": "LinkedIn"
}
}
Explanation of Response Fields:
- Name: The full name of the LinkedIn user.
- Image: The profile image URL.
- Valid: Indicates whether the returned data is valid.
- City: The city where the user is located.
- Country: The country of residence.
- IsDefault: Indicates whether the data is a fallback/default.
- Success: Confirms if the request was successful.
- RawData: Contains additional structured data about the LinkedIn profile, including:
- LinkedIn ID: A unique identifier for the user’s LinkedIn profile.
- Display Name: The name displayed on the user’s profile.
- Headline: The professional headline, typically the current job title or a short description of expertise.
- Summary: A brief bio or description of the user’s professional background.
- Company Name: The company where the user currently works.
- Location: The geographical location of the user.
- LinkedIn Profile URL: A direct link to the user’s LinkedIn profile.
- Connection Count: The number of LinkedIn connections the user has.
- Skills: A list of skills associated with the user’s profile, such as programming languages, software expertise, or industry-specific abilities.
- Education History: Details about the user’s academic background, including universities attended, degrees earned, and fields of study.
- Employment History: Information about past and present positions, including company names, job titles, and employment dates.
- Projects and Accomplishments: Notable work the user has contributed to, certifications, publications, and other professional achievements.
- Endorsements: Skill endorsements from other LinkedIn users, showcasing credibility in specific domains.
- Source.Name: The data provider (LinkedIn in this case).
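As a sketch, the response (and the nested JSON carried in RawData) can be unpacked with System.Text.Json; the record below simply mirrors the sample response fields above:

using System;
using System.Text.Json;

// Mirrors the top-level response fields documented above
public record AvatarResponse(
    string Name, string Image, bool Valid, string City,
    string Country, bool IsDefault, bool Success, string RawData);

public static class LinkedInParser
{
    // json is the raw response body returned by the API
    public static void PrintProfile(string json)
    {
        var result = JsonSerializer.Deserialize<AvatarResponse>(json);
        if (result is not { Success: true }) return;

        Console.WriteLine($"{result.Name} ({result.City})");

        // RawData is itself a JSON string holding the richer profile details
        using var raw = JsonDocument.Parse(result.RawData);
        var person = raw.RootElement.GetProperty("persons")[0];
        Console.WriteLine(person.GetProperty("headline").GetString());
        foreach (var skill in person.GetProperty("skills").EnumerateArray())
            Console.WriteLine($"  skill: {skill.GetString()}");
    }
}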
LinkedIn Rate Limiting
By default, LinkedIn queries are subject to rate limits. To bypass these limits, additional parameters can be included in the JSON request:
{
  "overrideAccount": "your_override_username",
  "overridePassword": "your_override_password"
}
Using these credentials allows queries to be processed without rate limiting. However, to enable this feature, you should contact AvatarAPI.com to discuss setup and access.
Conclusion
AvatarAPI.com provides a powerful way to retrieve LinkedIn profile data using just an email address. While LinkedIn is one of the available providers, the API also supports other data sources if the provider field is omitted. With proper setup, including rate-limit bypassing credentials, you can ensure seamless access to professional networking data.
For more details, visit AvatarAPI.com.
Resolving Unauthorized Error When Deploying an #Azure Function via #ZipDeploy

Deploying an Azure Function to an App Service can sometimes result in an authentication error, preventing successful publishing. One common error developers encounter is:
Error: The attempt to publish the ZIP file through https://<function-name>.scm.azurewebsites.net/api/zipdeploy failed with HTTP status code Unauthorized.
This error typically occurs when the deployment process lacks the necessary authentication permissions to publish to Azure. Below, we outline the steps to resolve this issue by enabling SCM Basic Auth Publishing in the Azure Portal.
Understanding the Issue
The error indicates that Azure is rejecting the deployment request due to authentication failure. This often happens when the SCM (Kudu) deployment service does not have the correct permissions enabled, preventing the publishing process from proceeding.
Solution: Enable SCM Basic Auth Publishing
To resolve this issue, follow these steps:
- Open the Azure Portal and navigate to your Function App.
- In the left-hand menu, select Configuration.
- Under the General settings tab, locate SCM Basic Auth Publishing.
- Toggle the setting to On.
- Click Save and restart the Function App if necessary.
Once this setting is enabled, retry the deployment from Visual Studio or your chosen deployment method. The unauthorized error should now be resolved.
Additional Considerations
- Use Deployment Credentials: If you prefer not to enable SCM Basic Auth, consider setting up deployment credentials under Deployment Center → FTP/Credentials.
- Check Azure Authentication in Visual Studio: Ensure that you are logged into the correct Azure account in Visual Studio under Tools → Options → Azure Service Authentication.
- Use Azure CLI for Deployment: If problems persist, try deploying with the Azure CLI:
az functionapp deployment source config-zip \
  --resource-group <resource-group> \
  --name <function-app-name> \
  --src <zip-file-path>
By enabling SCM Basic Auth Publishing, you ensure that Azure’s deployment service can authenticate and process your function’s updates smoothly. This quick fix saves time and prevents unnecessary troubleshooting steps.
Obtaining an Access Token for Outlook Web Access (#OWA) Using a Consumer Account

If you need programmatic access to Outlook Web Access (OWA) using a Microsoft consumer account (e.g., an Outlook.com, Hotmail, or Live.com email), you can obtain an access token using the Microsoft Authentication Library (MSAL). The following C# code demonstrates how to authenticate a consumer account and retrieve an access token.
Prerequisites
To run this code successfully, ensure you have:
- .NET installed
- The Microsoft.Identity.Client NuGet package
- A registered application in the Microsoft Entra ID (formerly Azure AD) portal with the necessary API permissions
Code Breakdown
The following code authenticates a user using the device code flow, which is useful for scenarios where interactive login via a browser is required but the application does not have direct access to a web interface.
1. Define Authentication Metadata
var authMetadata = new
{
ClientId = "9199bf20-a13f-4107-85dc-02114787ef48", // Application (client) ID
Tenant = "consumers", // Target consumer accounts (not work/school accounts)
Scope = "service::outlook.office.com::MBI_SSL openid profile offline_access"
};
- ClientId: Identifies the application in Microsoft Entra ID.
- Tenant: Set to consumers to restrict authentication to personal Microsoft accounts.
- Scope: Defines the permissions the application is requesting. In this case, service::outlook.office.com::MBI_SSL is required to access Outlook services; openid, profile, and offline_access allow authentication and token refresh.
2. Configure the Authentication Application
var app = PublicClientApplicationBuilder
.Create(authMetadata.ClientId)
.WithAuthority($"https://login.microsoftonline.com/{authMetadata.Tenant}")
.Build();
- PublicClientApplicationBuilder is used to create a public client application that interacts with Microsoft identity services.
- .WithAuthority() specifies that authentication should occur against Microsoft’s login endpoint for consumer accounts.
3. Initiate the Device Code Flow
var scopes = new string[] { authMetadata.Scope };
var result = await app.AcquireTokenWithDeviceCode(scopes, deviceCodeResult =>
{
Console.WriteLine(deviceCodeResult.Message); // Display login instructions
return Task.CompletedTask;
}).ExecuteAsync();
- AcquireTokenWithDeviceCode() initiates authentication using a device code.
- The deviceCodeResult.Message provides instructions to the user on how to authenticate (typically directing them to https://microsoft.com/devicelogin).
- Once the user completes authentication, the application receives an access token.
4. Retrieve and Display the Access Token
Console.WriteLine($"Access Token: {result.AccessToken}");
- The retrieved token can now be used to make API calls to Outlook Web Access services.
5. Handle Errors
try
{
    // the AcquireTokenWithDeviceCode call from step 3 goes here
}
catch (MsalException ex)
{
    Console.WriteLine($"Authentication failed: {ex.Message}");
}
- MsalException handles authentication errors, such as incorrect permissions or expired tokens.
Running the Code
- Compile and run the program.
- Follow the login instructions displayed in the console.
- After signing in, the access token will be printed.
- Use the token in HTTP requests to Outlook Web Access APIs.
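For convenience, here is the complete program assembled from the snippets above:

using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class Program
{
    static async Task Main()
    {
        var authMetadata = new
        {
            ClientId = "9199bf20-a13f-4107-85dc-02114787ef48", // Application (client) ID
            Tenant = "consumers",                              // Personal Microsoft accounts only
            Scope = "service::outlook.office.com::MBI_SSL openid profile offline_access"
        };

        var app = PublicClientApplicationBuilder
            .Create(authMetadata.ClientId)
            .WithAuthority($"https://login.microsoftonline.com/{authMetadata.Tenant}")
            .Build();

        try
        {
            var result = await app.AcquireTokenWithDeviceCode(
                new[] { authMetadata.Scope },
                deviceCodeResult =>
                {
                    Console.WriteLine(deviceCodeResult.Message); // Display login instructions
                    return Task.CompletedTask;
                }).ExecuteAsync();

            Console.WriteLine($"Access Token: {result.AccessToken}");
        }
        catch (MsalException ex)
        {
            Console.WriteLine($"Authentication failed: {ex.Message}");
        }
    }
}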
Conclusion
This code provides a straightforward way to obtain an access token for Outlook Web Access using a consumer account. The device code flow is particularly useful for command-line applications or scenarios where interactive authentication via a browser is required.
How to Extract #EXIF Data from an Image in .NET 8 with #MetadataExtractor

GIT REPO : https://github.com/infiniteloopltd/ExifResearch
When working with images, EXIF (Exchangeable Image File Format) data can provide valuable information such as the camera model, date and time of capture, GPS coordinates, and much more. Whether you’re building an image processing application or simply want to extract metadata for analysis, knowing how to retrieve EXIF data in a .NET environment is essential.
In this post, we’ll walk through how to extract EXIF data from an image in .NET 8 using the cross-platform MetadataExtractor library.
Why Use MetadataExtractor?
.NET’s traditional System.Drawing.Common library has limitations when it comes to cross-platform compatibility, particularly for non-Windows environments. The MetadataExtractor library, however, is a powerful and platform-independent solution for extracting metadata from various image formats, including EXIF data.
With MetadataExtractor, you can read EXIF metadata from images in a clean, efficient way, making it an ideal choice for .NET Core and .NET 8 developers working on cross-platform applications.
Step 1: Install MetadataExtractor
To begin, you need to add the MetadataExtractor NuGet package to your project. You can install it using the following command:
dotnet add package MetadataExtractor
This package supports EXIF, IPTC, XMP, and many other metadata formats from various image file types.
Step 2: Writing the Code to Extract EXIF Data
Now that the package is installed, let’s write some code to extract EXIF data from an image stored as a byte array.
Here is the complete function:
using System;
using System.Collections.Generic;
using System.IO;
using MetadataExtractor;
using MetadataExtractor.Formats.Exif;
public class ExifReader
{
    public static Dictionary<string, string> GetExifData(byte[] imageBytes)
    {
        var exifData = new Dictionary<string, string>();
        try
        {
            using var ms = new MemoryStream(imageBytes);
            var directories = ImageMetadataReader.ReadMetadata(ms);
            foreach (var directory in directories)
            {
                foreach (var tag in directory.Tags)
                {
                    // Add tag name and description to the dictionary
                    exifData[tag.Name] = tag.Description;
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error reading EXIF data: {ex.Message}");
        }
        return exifData;
    }
}
How It Works:
- Reading Image Metadata: The function uses ImageMetadataReader.ReadMetadata to read all the metadata from the byte array containing the image.
- Iterating Through Directories and Tags: EXIF data is organized in directories (for example, the main EXIF data, GPS, and thumbnail directories). We iterate through these directories and their associated tags.
- Handling Errors: We wrap the logic in a try-catch block to ensure any potential errors (e.g., unsupported formats) are handled gracefully.
Step 3: Usage Example
To use this function, you can pass an image byte array to it. Here’s an example:
using System;
using System.IO;
class Program
{
    static void Main()
    {
        // Replace with your byte array containing an image
        byte[] imageBytes = File.ReadAllBytes("example.jpg");
        var exifData = ExifReader.GetExifData(imageBytes);
        foreach (var kvp in exifData)
        {
            Console.WriteLine($"{kvp.Key}: {kvp.Value}");
        }
    }
}
This code reads an image from the file system as a byte array and then uses the ExifReader.GetExifData method to extract the EXIF data. Finally, it prints out the EXIF tags and their descriptions.
Example Output:
If the image contains EXIF metadata, the output might look something like this:
"Compression Type": "Baseline",
"Data Precision": "8 bits",
"Image Height": "384 pixels",
"Image Width": "512 pixels",
"Number of Components": "3",
"Component 1": "Y component: Quantization table 0, Sampling factors 2 horiz/2 vert",
"Component 2": "Cb component: Quantization table 1, Sampling factors 1 horiz/1 vert",
"Component 3": "Cr component: Quantization table 1, Sampling factors 1 horiz/1 vert",
"Make": "samsung",
"Model": "SM-G998B",
"Orientation": "Right side, top (Rotate 90 CW)",
"X Resolution": "72 dots per inch",
"Y Resolution": "72 dots per inch",
"Resolution Unit": "Inch",
"Software": "G998BXXU7EWCH",
"Date/Time": "2023:05:02 12:33:47",
"YCbCr Positioning": "Center of pixel array",
"Exposure Time": "1/33 sec",
"F-Number": "f/2.2",
"Exposure Program": "Program normal",
"ISO Speed Ratings": "640",
"Exif Version": "2.20",
"Date/Time Original": "2023:05:02 12:33:47",
"Date/Time Digitized": "2023:05:02 12:33:47",
"Time Zone": "+09:00",
"Time Zone Original": "+09:00",
"Shutter Speed Value": "1 sec",
"Aperture Value": "f/2.2",
"Exposure Bias Value": "0 EV",
"Max Aperture Value": "f/2.2",
"Metering Mode": "Center weighted average",
"Flash": "Flash did not fire",
"Focal Length": "2.2 mm",
"Sub-Sec Time": "404",
"Sub-Sec Time Original": "404",
"Sub-Sec Time Digitized": "404",
"Color Space": "sRGB",
"Exif Image Width": "4000 pixels",
"Exif Image Height": "3000 pixels",
"Exposure Mode": "Auto exposure",
"White Balance Mode": "Auto white balance",
"Digital Zoom Ratio": "1",
"Focal Length 35": "13 mm",
"Scene Capture Type": "Standard",
"Unique Image ID": "F12XSNF00NM",
"Compression": "JPEG (old-style)",
"Thumbnail Offset": "824 bytes",
"Thumbnail Length": "49594 bytes",
"Number of Tables": "4 Huffman tables",
"Detected File Type Name": "JPEG",
"Detected File Type Long Name": "Joint Photographic Experts Group",
"Detected MIME Type": "image/jpeg",
"Expected File Name Extension": "jpg"
This is just a small sample of the information EXIF can store. Depending on the camera and settings, you may find data on GPS location, white balance, focal length, and more.
Why Use EXIF Data?
EXIF data can be valuable in various scenarios:
- Image processing: Automatically adjust images based on camera settings (e.g., ISO or exposure time).
- Data analysis: Track when and where photos were taken, especially when handling large datasets of images.
- Digital forensics: Verify image authenticity by analyzing EXIF metadata for manipulation or alterations.
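For instance, when location data is present, MetadataExtractor exposes it through a typed GpsDirectory, which is more convenient than the string dictionary above; a short sketch:

using System;
using System.Linq;
using MetadataExtractor;
using MetadataExtractor.Formats.Exif;

class GpsDemo
{
    static void Main()
    {
        var directories = ImageMetadataReader.ReadMetadata("example.jpg");
        var gps = directories.OfType<GpsDirectory>().FirstOrDefault();
        var location = gps?.GetGeoLocation(); // null when the image carries no GPS data
        Console.WriteLine(location != null
            ? $"Taken at {location.Latitude}, {location.Longitude}"
            : "No GPS data in this image.");
    }
}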
Conclusion
With the MetadataExtractor library, extracting EXIF data from an image is straightforward and cross-platform compatible. Whether you’re building a photo management app, an image processing tool, or just need to analyze metadata, this approach is an efficient solution for working with EXIF data in .NET 8.
By using this solution, you can extract a wide range of metadata from images, making your applications smarter and more capable. Give it a try and unlock the hidden data in your images!
Understanding TLS Fingerprinting

TLS fingerprinting is a technique bot-detection software uses to tell browsers apart from bots. It works transparently and fast, but it is not infallible. It relies on the fact that when a secure HTTPS connection is established between client and server, the client announces which ciphers it supports. Based on the ciphers offered, the server can compare against the “claimed” user agent to see whether these are the ciphers that user agent (browser) would actually support.
It’s easy for a bot to claim to be Chrome: just set the user agent to match a modern version of Chrome. It’s much harder to offer exactly the ciphers Chrome offers, so if the HTTP request says it’s Chrome but doesn’t present Chrome’s cipher list, then it probably isn’t Chrome, and it’s a bot.
There is a really handy tool here: https://tls.peet.ws/api/all, which lists the ciphers used in the connection. If you use a browser like Chrome, you’ll see this list of ciphers:
"ciphers": [
"TLS_GREASE (0xEAEA)",
"TLS_AES_128_GCM_SHA256",
"TLS_AES_256_GCM_SHA384",
"TLS_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
"TLS_RSA_WITH_AES_128_GCM_SHA256",
"TLS_RSA_WITH_AES_256_GCM_SHA384",
"TLS_RSA_WITH_AES_128_CBC_SHA",
"TLS_RSA_WITH_AES_256_CBC_SHA"
]
Whereas if you visit it using Firefox, you’ll see this:
"ciphers": [
"TLS_AES_128_GCM_SHA256",
"TLS_CHACHA20_POLY1305_SHA256",
"TLS_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
"TLS_RSA_WITH_AES_128_GCM_SHA256",
"TLS_RSA_WITH_AES_256_GCM_SHA384",
"TLS_RSA_WITH_AES_128_CBC_SHA",
"TLS_RSA_WITH_AES_256_CBC_SHA"
],
Use cURL, or WebClient in C#, and you’ll see this:
"ciphers": [
"TLS_AES_256_GCM_SHA384",
"TLS_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
"TLS_RSA_WITH_AES_256_GCM_SHA384",
"TLS_RSA_WITH_AES_128_GCM_SHA256",
"TLS_RSA_WITH_AES_256_CBC_SHA256",
"TLS_RSA_WITH_AES_128_CBC_SHA256",
"TLS_RSA_WITH_AES_256_CBC_SHA",
"TLS_RSA_WITH_AES_128_CBC_SHA"
],
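For reference, that last list is what a stock .NET HttpClient presents; reproducing it takes only a couple of lines (HttpClient being the modern counterpart of WebClient):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Fingerprint
{
    static async Task Main()
    {
        using var client = new HttpClient();
        Console.WriteLine(await client.GetStringAsync("https://tls.peet.ws/api/all"));
    }
}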
So, even with a cursory glance, you could check whether TLS_GREASE or TLS_CHACHA20_POLY1305_SHA256 are present, and declare the user a bot if those ciphers are missing. More advanced code could also check the claimed Chrome version, the operating system, and so forth, but that is the essence of the technique.
However, using the TLS-Client library in Python allows more ciphers to be exchanged, and the resulting TLS fingerprint looks much more similar to (if not indistinguishable from) Chrome’s.
https://github.com/infiniteloopltd/TLS
import tls_client  # pip install tls-client

session = tls_client.Session(
    client_identifier="chrome_120",
    random_tls_extension_order=True
)

page_url = "https://tls.peet.ws/api/all"
res = session.get(page_url)
print(res.text)
I am now curious to know if I can apply the same logic to C# …