Archive

Archive for the ‘Uncategorized’ Category

How to Extract #EXIF Data from an Image in .NET 8 with #MetadataExtractor

GIT REPO : https://github.com/infiniteloopltd/ExifResearch

When working with images, EXIF (Exchangeable Image File Format) data can provide valuable information such as the camera model, date and time of capture, GPS coordinates, and much more. Whether you’re building an image processing application or simply want to extract metadata for analysis, knowing how to retrieve EXIF data in a .NET environment is essential.

In this post, we’ll walk through how to extract EXIF data from an image in .NET 8 using the cross-platform MetadataExtractor library.

Why Use MetadataExtractor?

.NET’s traditional System.Drawing.Common library has limitations when it comes to cross-platform compatibility, particularly for non-Windows environments. The MetadataExtractor library, however, is a powerful and platform-independent solution for extracting metadata from various image formats, including EXIF data.

With MetadataExtractor, you can read EXIF metadata from images in a clean, efficient way, making it an ideal choice for .NET Core and .NET 8 developers working on cross-platform applications.

Step 1: Install MetadataExtractor

To begin, you need to add the MetadataExtractor NuGet package to your project. You can install it using the following command:

dotnet add package MetadataExtractor

This package supports EXIF, IPTC, XMP, and many other metadata formats from various image file types.

Step 2: Writing the Code to Extract EXIF Data

Now that the package is installed, let’s write some code to extract EXIF data from an image stored as a byte array.

Here is the complete function:

using System;
using System.Collections.Generic;
using System.IO;
using MetadataExtractor;
using MetadataExtractor.Formats.Exif;

public class ExifReader
{
    public static Dictionary<string, string> GetExifData(byte[] imageBytes)
    {
        var exifData = new Dictionary<string, string>();

        try
        {
            using var ms = new MemoryStream(imageBytes);
            var directories = ImageMetadataReader.ReadMetadata(ms);

            foreach (var directory in directories)
            {
                foreach (var tag in directory.Tags)
                {
                    // Add tag name and description to the dictionary
                    exifData[tag.Name] = tag.Description;
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error reading EXIF data: {ex.Message}");
        }

        return exifData;
    }
}

How It Works:

  1. Reading Image Metadata: The function uses ImageMetadataReader.ReadMetadata to read all the metadata from the byte array containing the image.
  2. Iterating Through Directories and Tags: EXIF data is organized in directories (for example, the main EXIF data, GPS, and thumbnail directories). We iterate through these directories and their associated tags.
  3. Handling Errors: We wrap the logic in a try-catch block to ensure any potential errors (e.g., unsupported formats) are handled gracefully.

Step 3: Usage Example

To use this function, you can pass an image byte array to it. Here’s an example:

using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Replace with your byte array containing an image
        byte[] imageBytes = File.ReadAllBytes("example.jpg");

        var exifData = ExifReader.GetExifData(imageBytes);

        foreach (var kvp in exifData)
        {
            Console.WriteLine($"{kvp.Key}: {kvp.Value}");
        }
    }
}

This code reads an image from the file system as a byte array and then uses the ExifReader.GetExifData method to extract the EXIF data. Finally, it prints out the EXIF tags and their descriptions.

Example Output:

If the image contains EXIF metadata, the output might look something like this:

  "Compression Type": "Baseline",
"Data Precision": "8 bits",
"Image Height": "384 pixels",
"Image Width": "512 pixels",
"Number of Components": "3",
"Component 1": "Y component: Quantization table 0, Sampling factors 2 horiz/2 vert",
"Component 2": "Cb component: Quantization table 1, Sampling factors 1 horiz/1 vert",
"Component 3": "Cr component: Quantization table 1, Sampling factors 1 horiz/1 vert",
"Make": "samsung",
"Model": "SM-G998B",
"Orientation": "Right side, top (Rotate 90 CW)",
"X Resolution": "72 dots per inch",
"Y Resolution": "72 dots per inch",
"Resolution Unit": "Inch",
"Software": "G998BXXU7EWCH",
"Date/Time": "2023:05:02 12:33:47",
"YCbCr Positioning": "Center of pixel array",
"Exposure Time": "1/33 sec",
"F-Number": "f/2.2",
"Exposure Program": "Program normal",
"ISO Speed Ratings": "640",
"Exif Version": "2.20",
"Date/Time Original": "2023:05:02 12:33:47",
"Date/Time Digitized": "2023:05:02 12:33:47",
"Time Zone": "+09:00",
"Time Zone Original": "+09:00",
"Shutter Speed Value": "1 sec",
"Aperture Value": "f/2.2",
"Exposure Bias Value": "0 EV",
"Max Aperture Value": "f/2.2",
"Metering Mode": "Center weighted average",
"Flash": "Flash did not fire",
"Focal Length": "2.2 mm",
"Sub-Sec Time": "404",
"Sub-Sec Time Original": "404",
"Sub-Sec Time Digitized": "404",
"Color Space": "sRGB",
"Exif Image Width": "4000 pixels",
"Exif Image Height": "3000 pixels",
"Exposure Mode": "Auto exposure",
"White Balance Mode": "Auto white balance",
"Digital Zoom Ratio": "1",
"Focal Length 35": "13 mm",
"Scene Capture Type": "Standard",
"Unique Image ID": "F12XSNF00NM",
"Compression": "JPEG (old-style)",
"Thumbnail Offset": "824 bytes",
"Thumbnail Length": "49594 bytes",
"Number of Tables": "4 Huffman tables",
"Detected File Type Name": "JPEG",
"Detected File Type Long Name": "Joint Photographic Experts Group",
"Detected MIME Type": "image/jpeg",
"Expected File Name Extension": "jpg"

This is just a small sample of the information EXIF can store. Depending on the camera and settings, you may find data on GPS location, white balance, focal length, and more.
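If you need specific values rather than the whole tag dictionary, MetadataExtractor also exposes typed directories. Here is a small sketch (not from the repo above) that pulls out the GPS position and the original capture date, assuming the photo actually carries those tags:

using System;
using System.IO;
using System.Linq;
using MetadataExtractor;
using MetadataExtractor.Formats.Exif;

byte[] imageBytes = File.ReadAllBytes("example.jpg"); // as in the earlier example

using var ms = new MemoryStream(imageBytes);
var directories = ImageMetadataReader.ReadMetadata(ms);

// GPS coordinates, if present
var gps = directories.OfType<GpsDirectory>().FirstOrDefault();
var location = gps?.GetGeoLocation();
if (location != null)
    Console.WriteLine($"Lat/Lon: {location.Latitude}, {location.Longitude}");

// Original capture date, if present
var subIfd = directories.OfType<ExifSubIfdDirectory>().FirstOrDefault();
if (subIfd != null && subIfd.TryGetDateTime(ExifDirectoryBase.TagDateTimeOriginal, out var taken))
    Console.WriteLine($"Taken: {taken}");

Both lookups simply return nothing when the tags are absent, so this degrades gracefully for images with stripped metadata.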

Why Use EXIF Data?

EXIF data can be valuable in various scenarios:

  • Image processing: Automatically adjust images based on camera settings (e.g., ISO or exposure time).
  • Data analysis: Track when and where photos were taken, especially when handling large datasets of images.
  • Digital forensics: Verify image authenticity by analyzing EXIF metadata for manipulation or alterations.

Conclusion

With the MetadataExtractor library, extracting EXIF data from an image is straightforward and cross-platform compatible. Whether you’re building a photo management app, an image processing tool, or just need to analyze metadata, this approach is an efficient solution for working with EXIF data in .NET 8.

By using this solution, you can extract a wide range of metadata from images, making your applications smarter and more capable. Give it a try and unlock the hidden data in your images!

Storing data directly in GPU memory with #CLOO in C#

Although I’m not entirely sure of a practical application for this, this demo application, using C# and CLOO, can store arbitrary data in GPU memory. In this case, I’m picking a large file off the disk and putting it into GPU memory.

In the case of this NVIDIA GeForce card, the memory is dedicated to the GPU and not ordinarily shared with the system.

TL;DR; The Github repo is here – https://github.com/infiniteloopltd/GpuMemoryDemo

The core code is here:

 static void Main()
        {
            var platform = ComputePlatform.Platforms[0];
            var device = platform.Devices.FirstOrDefault(d => d.Type.HasFlag(ComputeDeviceTypes.Gpu));
            var context = new ComputeContext(ComputeDeviceTypes.Gpu, new ComputeContextPropertyList(platform), null, IntPtr.Zero);
            var queue = new ComputeCommandQueue(context, device, ComputeCommandQueueFlags.None);

            const string largeFilePath = "C:\\Users\\fiach\\Downloads\\datagrip-2024.3.exe";
            var contents = File.ReadAllBytes(largeFilePath);

            var clBuffer = Store(contents, context, queue);

            var readBackBytes = Retrieve(contents.Length, clBuffer, queue);

            Console.WriteLine($"Original String: {contents[0]}");
            Console.WriteLine($"Read Back String: {readBackBytes[0]}");
            Console.WriteLine($"Strings Match: {contents[0] == readBackBytes[0]}");

            
            // Memory leak here. 
            //Marshal.FreeHGlobal(readBackPtr);
            //Marshal.FreeHGlobal(buffer);
            
        }

        public static ComputeBuffer<byte> Store(byte[] stringBytes, ComputeContext context, ComputeCommandQueue queue)
        {
            var buffer = Marshal.AllocHGlobal(stringBytes.Length);

            Marshal.Copy(stringBytes, 0, buffer, stringBytes.Length);

            var clBuffer = new ComputeBuffer<byte>(context, ComputeMemoryFlags.ReadWrite, stringBytes.Length);

            queue.Write(clBuffer, true, 0, stringBytes.Length, buffer, null);
            
            return clBuffer;
        }

        public static byte[] Retrieve(int size, ComputeBuffer<byte> clBuffer, ComputeCommandQueue queue)
        {
            var readBackPtr = Marshal.AllocHGlobal(size);

            queue.Read(clBuffer, true, 0, size, readBackPtr, null);

            var readBackBytes = new byte[size];

            Marshal.Copy(readBackPtr, readBackBytes, 0, size);

            return readBackBytes;
        }
    }

We’ll walk through a C# program that demonstrates the use of OpenCL to store and retrieve data on the GPU, which can be beneficial for performance in data-heavy applications. Here’s a breakdown of the code:

1. Setting Up OpenCL Context and Queue

The program begins by selecting the first available compute platform and choosing a GPU device from the platform:

var platform = ComputePlatform.Platforms[0];
var device = platform.Devices.FirstOrDefault(d => d.Type.HasFlag(ComputeDeviceTypes.Gpu));
var context = new ComputeContext(ComputeDeviceTypes.Gpu, new ComputeContextPropertyList(platform), null, IntPtr.Zero);
var queue = new ComputeCommandQueue(context, device, ComputeCommandQueueFlags.None);
  • ComputePlatform.Platforms[0]: Selects the first OpenCL platform on the machine (typically corresponds to a GPU vendor like NVIDIA or AMD).
  • platform.Devices.FirstOrDefault(...): Finds the first GPU device available on the platform.
  • ComputeContext: Creates an OpenCL context for managing resources like buffers and command queues.
  • ComputeCommandQueue: Initializes a queue to manage commands that will be executed on the selected GPU.

2. Reading a Large File into Memory

The program then loads the contents of a large file into a byte array:

const string largeFilePath = "C:\\Users\\fiach\\Downloads\\datagrip-2024.3.exe";
var contents = File.ReadAllBytes(largeFilePath);

This step reads the entire file into memory, which will later be uploaded to the GPU.

3. Storing Data on the GPU

The Store method is responsible for transferring the byte array to the GPU:

var clBuffer = Store(contents, context, queue);
  • It allocates memory using Marshal.AllocHGlobal to hold the byte array.
  • The byte array is then copied into this allocated buffer.
  • A ComputeBuffer<byte> is created on the GPU, and the byte array is written to it using the Write method of the ComputeCommandQueue.

Note: The Store method uses Marshal.Copy to copy the managed byte array into unmanaged host memory; queue.Write then transfers that unmanaged buffer to the GPU.

4. Retrieving Data from the GPU

The Retrieve method is responsible for reading the data back from the GPU into a byte array:

var readBackBytes = Retrieve(contents.Length, clBuffer, queue);
  • The method allocates memory using Marshal.AllocHGlobal to hold the data read from the GPU.
  • The Read method of the ComputeCommandQueue is used to fetch the data from the GPU buffer back into the allocated memory.
  • The memory is then copied into a managed byte array (readBackBytes).

5. Verifying the Data Integrity

The program prints the first byte of the original and retrieved byte arrays, comparing them to verify if the data was correctly transferred and retrieved:

Console.WriteLine($"Original String: {contents[0]}");
Console.WriteLine($"Read Back String: {readBackBytes[0]}");
Console.WriteLine($"Strings Match: {contents[0] == readBackBytes[0]}");

This checks whether the first byte of the file content remains intact after being transferred to and retrieved from the GPU.

6. Memory Management

The program has a commented-out section for freeing unmanaged memory:

//Marshal.FreeHGlobal(readBackPtr);
//Marshal.FreeHGlobal(buffer);

These lines are meant to free the unmanaged buffers allocated with Marshal.AllocHGlobal and avoid memory leaks, but they are commented out here (and, as written, the pointers they refer to are local to Store and Retrieve rather than to Main), so there is room for improvement.
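A minimal way to close that gap, while keeping the behaviour otherwise identical, is to free each pointer inside the method that allocated it, once the blocking Write or Read has returned. A sketch of a reworked Store (Retrieve could be treated the same way):

public static ComputeBuffer<byte> Store(byte[] bytes, ComputeContext context, ComputeCommandQueue queue)
{
    var clBuffer = new ComputeBuffer<byte>(context, ComputeMemoryFlags.ReadWrite, bytes.Length);
    var hostPtr = Marshal.AllocHGlobal(bytes.Length);
    try
    {
        // Copy managed bytes into unmanaged host memory, then push that to the GPU buffer
        Marshal.Copy(bytes, 0, hostPtr, bytes.Length);
        queue.Write(clBuffer, true, 0, bytes.Length, hostPtr, null); // blocking write
    }
    finally
    {
        // Safe to free once the blocking write has returned
        Marshal.FreeHGlobal(hostPtr);
    }
    return clBuffer;
}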

Potential Improvements and Issues

  • Memory Leaks: The program does not properly free the unmanaged memory allocated via Marshal.AllocHGlobal, leading to potential memory leaks if run multiple times.
  • Error Handling: The program lacks error handling for situations like missing GPU devices or file read errors.
  • Large File Handling: For large files, this approach may run into memory constraints, and you might need to manage chunked transfers for efficiency, as sketched below.
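For instance, the upload could stream the file through one small pinned buffer and write it to the GPU at increasing offsets, instead of loading the whole file into RAM first. This is only a sketch against the same CLOO calls used above; StoreFileChunked is a hypothetical helper and the 4 MB chunk size is arbitrary:

public static ComputeBuffer<byte> StoreFileChunked(string path, ComputeContext context, ComputeCommandQueue queue)
{
    var length = new FileInfo(path).Length;
    var clBuffer = new ComputeBuffer<byte>(context, ComputeMemoryFlags.ReadWrite, length);

    const int chunkSize = 4 * 1024 * 1024;
    var chunk = new byte[chunkSize];
    var handle = GCHandle.Alloc(chunk, GCHandleType.Pinned); // pin so CLOO can be given an IntPtr
    try
    {
        using var fs = File.OpenRead(path);
        long offset = 0;
        int read;
        while ((read = fs.Read(chunk, 0, chunkSize)) > 0)
        {
            // Blocking write of this chunk at the current offset within the GPU buffer
            queue.Write(clBuffer, true, offset, read, handle.AddrOfPinnedObject(), null);
            offset += read;
        }
    }
    finally
    {
        handle.Free();
    }
    return clBuffer;
}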

In summary, this program demonstrates how to work with OpenCL in C# to transfer data between the host system and the GPU. While it shows the core functionality, handling memory leaks and improving error management should be considered for a production-level solution.

Cost-Effective SQL Server Database Restore on Microsoft #Azure: Using SMB Shares

1) Motivation Behind the Process

Managing costs efficiently on Microsoft Azure is a crucial aspect for many businesses, especially when it comes to managing resources like SQL Server databases. One area where I found significant savings was in the restoration of SQL Server databases.

Traditionally, to restore databases, I was using a managed disk. The restore process involved downloading a ZIP file, unzipping it to a .bak file, and then restoring it to the main OS disk. However, there was a significant issue with this setup: the cost of the managed disk.

Even when database restores happened only once every six months, I was still paying for the full capacity of the managed disk (500GB of provisioned space). This meant I was paying for unused storage space for extended periods, which could be a significant waste of resources and money.

To tackle this issue, I switched to using Azure Storage Accounts with file shares (standard, not premium), which provided a more cost-effective approach. By restoring the database from an SMB share, I could pay only for the data usage, rather than paying for provisioned capacity on a managed disk. Additionally, I could delete the ZIP and BAK files after the restore process was complete, further optimizing storage costs.

2) Issues and Solutions

While the transition to using an Azure Storage Account for database restores was a great move in terms of cost reduction, it wasn’t without its challenges. One of the main hurdles I encountered during this process was SQLCMD reporting that the .bak file did not exist, even though it clearly did.

Symptoms of the Problem

The error message was:

Msg 3201, Level 16, State 2, Server [ServerName], Line 1
Cannot open backup device '\\<UNC Path>\Backups\GeneralPurpose.bak'. Operating system error 3(The system cannot find the path specified.)
Msg 3013, Level 16, State 1, Server [ServerName], Line 1
RESTORE DATABASE is terminating abnormally.

This was perplexing because I had confirmed that the .bak file existed at the UNC path and that the path was accessible from my system.

Diagnosis

To diagnose the issue, I started by enabling xp_cmdshell in SQL Server. This extended stored procedure allows the execution of operating system commands, which is very helpful for troubleshooting such scenarios.

First, I enabled xp_cmdshell by running the following commands:

-- Enable advanced options
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Enable xp_cmdshell
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

Once xp_cmdshell was enabled, I ran a simple DIR command to verify if SQL Server could access the backup file share:

EXEC xp_cmdshell 'dir \\<UNC Path>\Backups\GeneralPurpose.bak';

The result indicated that the SQL Server service account did not have proper access to the SMB share, and that’s why it couldn’t find the .bak file.

Solution

To resolve this issue, I had to map the network share explicitly within SQL Server using the net use command, which allows SQL Server to authenticate to the SMB share.

Here’s the solution I implemented:

EXEC xp_cmdshell 'net use Z: \\<UNC Path> /user:localhost\<user> <PASSWORD>';

Explanation

  1. Mapping the Network Drive:
    The net use command maps the SMB share to a local drive letter (in this case, Z:), which makes it accessible to SQL Server.
  2. Authentication:
    The /user: flag specifies the username and password needed to authenticate to the share. In my case, I used an account (e.g., localhost\fsausse) with the correct credentials.
  3. Accessing the Share:
    After mapping the network drive, I could proceed to access the .bak file located in the SMB share by using its mapped path (Z:). SQL Server would then be able to restore the database without the “file not found” error, as in the sketch below.
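With the drive mapped, the restore itself can then point at the mapped path. This is a rough sketch only; the database name, the exact Z:\ path, and the MOVE clauses are placeholders that depend on what the backup actually contains and where your data files live:

RESTORE DATABASE [GeneralPurpose]
FROM DISK = 'Z:\Backups\GeneralPurpose.bak'
WITH MOVE 'GeneralPurpose_Data' TO 'D:\Data\GeneralPurpose.mdf',
     MOVE 'GeneralPurpose_Log' TO 'D:\Data\GeneralPurpose_log.ldf',
     REPLACE, STATS = 10;

Running RESTORE FILELISTONLY FROM DISK = 'Z:\Backups\GeneralPurpose.bak' first will tell you the actual logical file names to use in the MOVE clauses.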

Once the restore was completed, I could remove the drive mapping with:

EXEC xp_cmdshell 'net use Z: /delete';

This approach ensured that SQL Server had the necessary permissions to access the file on the SMB share, and I could restore my database efficiently, only paying for the data usage on Azure Storage.

Conclusion

By transitioning from a managed disk to an SMB share on Azure Storage, I significantly reduced my costs during database restores. The issue with SQL Server not finding the .bak file was quickly diagnosed and resolved by enabling xp_cmdshell, mapping the network share, and ensuring proper authentication. This process allows me to restore databases in a more cost-effective manner, paying only for the data used during the restore, and avoiding unnecessary storage costs between restores.

For businesses looking to optimize Azure costs, this method provides an efficient, scalable solution for managing large database backups with minimal overhead.

C# – using #OpenCV to determine if an image contains an image of a car (or a duck)

TL;DR; Here is the repo: https://github.com/infiniteloopltd/IsItACar

This demo application can take an image and determine whether it is an image of a car or not. My test image was of a duck, which was very definitely not car-like. But silliness aside, this can be very useful for image upload validation: if you want to ensure that your car-sales website doesn’t allow its users to upload nonsense pictures, only pictures of cars, then this code could be useful.

Why Use Emgu.CV for Computer Vision?

Emgu.CV simplifies the use of OpenCV in C# projects, providing an intuitive interface while keeping the full functionality of OpenCV. For tasks like object detection, it is an ideal choice due to its performance and flexibility.


Prerequisites

Before diving into the code, make sure you have the following set up:

  • Visual Studio (or another preferred C# development environment)
  • Emgu.CV library installed via NuGet:
    • Search for Emgu.CV and Emgu.CV.runtime.windows in the NuGet Package Manager and install them.

Setting Up Your Project

We’ll write a simple application to detect cars in an image. The code uses a pre-trained Haar cascade classifier, which is a popular method for object detection.

The Code

Here’s a complete example demonstrating how to load an image from a byte array and run car detection using Emgu.CV:

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using System;
using System.Drawing;
using System.IO;

class Program
{
    static void Main(string[] args)
    {
        // Load the image into a byte array (this could come from a database or API)
        byte[] imageBytes = File.ReadAllBytes("path_to_your_image.jpg");

        // Create a Mat object to hold the decoded image
        Mat mat = new Mat();

        // Decode the image from the byte array into the Mat object
        CvInvoke.Imdecode(imageBytes, ImreadModes.Color, mat);

        // Convert the Mat to an Image<Bgr, byte> for further processing
        Image<Bgr, byte> image = mat.ToImage<Bgr, byte>();

        // Load the Haar cascade for car detection
        string cascadeFilePath = "path_to_haarcascade_car.xml"; // Download a Haar cascade for cars
        CascadeClassifier carClassifier = new CascadeClassifier(cascadeFilePath);

        // Convert to grayscale for better detection performance
        using (var grayImage = image.Convert<Gray, byte>())
        {
            // Detect cars in the image
            Rectangle[] cars = carClassifier.DetectMultiScale(
                grayImage, 
                scaleFactor: 1.1, 
                minNeighbors: 5, 
                minSize: new Size(30, 30));

            // Draw rectangles around detected cars
            foreach (var car in cars)
            {
                image.Draw(car, new Bgr(Color.Red), 2);
            }

            // Save or display the image with the detected cars
            image.Save("output_image_with_cars.jpg");
            Console.WriteLine($"Detected {cars.Length} car(s) in the image.");
        }
    }
}

Breaking Down the Code

  1. Loading the Image as a Byte Array: byte[] imageBytes = File.ReadAllBytes("path_to_your_image.jpg"); Instead of loading an image from a file directly, we load it into a byte array. This approach is beneficial if your image data is not file-based but comes from a more dynamic source, such as a database.
  2. Decoding the Image: Mat mat = new Mat(); CvInvoke.Imdecode(imageBytes, ImreadModes.Color, mat); We use CvInvoke.Imdecode to convert the byte array into a Mat object, which is OpenCV’s matrix representation of images.
  3. Converting Mat to Image<Bgr, byte>: Image<Bgr, byte> image = mat.ToImage<Bgr, byte>(); The Mat is converted to Image<Bgr, byte> to make it easier to work with Emgu.CV functions.
  4. Car Detection Using Haar Cascades: Rectangle[] cars = carClassifier.DetectMultiScale(grayImage, 1.1, 5, new Size(30, 30)); The Haar cascade method is used for object detection. You’ll need to download a Haar cascade XML file for cars and provide the path.
  5. Drawing Detected Cars: image.Draw(car, new Bgr(Color.Red), 2); Rectangles are drawn around detected cars, and the image is saved or displayed.
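To turn this into the simple yes/no check described at the start (is the uploaded picture a car at all?), the detection result just needs to be tested for at least one hit. Here is a minimal sketch built on the same calls as above; IsLikelyCar is a hypothetical helper name and the cascade file path is assumed to exist:

static bool IsLikelyCar(byte[] imageBytes, string cascadeFilePath)
{
    // Decode the uploaded bytes into an OpenCV Mat
    using var mat = new Mat();
    CvInvoke.Imdecode(imageBytes, ImreadModes.Color, mat);

    // Run the same Haar cascade detection on a grayscale copy
    using var image = mat.ToImage<Bgr, byte>();
    using var gray = image.Convert<Gray, byte>();
    using var classifier = new CascadeClassifier(cascadeFilePath);

    var hits = classifier.DetectMultiScale(gray, 1.1, 5, new Size(30, 30));

    // Treat one or more detections as “contains a car”
    return hits.Length > 0;
}

An upload endpoint would then call IsLikelyCar on the posted bytes and reject the file when it returns false.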

Downloading Haar Cascade for Cars

To detect cars, you need a pre-trained Haar cascade file. You can find these files on the OpenCV GitHub repository or by searching online for “haarcascade for car detection.”


Conclusion

This example demonstrates a simple yet powerful way to use Emgu.CV for car detection in C#. While Haar cascades are efficient, modern machine learning methods like YOLO or SSD are more accurate for complex tasks. However, for basic object detection, this approach is easy to implement and performs well for simpler use cases.

Feel free to experiment with different parameters to improve detection accuracy or try integrating more advanced models for more complex scenarios. Happy coding!

#AWS #S3 Error – The request signature we calculated does not match the signature you provided. Check your key and signing method

If you’re working with the AWS SDK for .NET and encounter an error when uploading files to an Amazon S3 bucket, you’re not alone. A recent upgrade in the SDK may introduce unexpected behavior, leading to a “signature mismatch” error for uploads that previously worked smoothly. This blog post describes the problem, analyzes common solutions, and explains how AWS S3 pathing conventions have changed over time—impacting how we specify folders within S3 buckets.

The Problem: “The request signature we calculated does not match the signature you provided.”

When uploading a file to an Amazon S3 bucket using a .NET application, you may encounter this error:

“The request signature we calculated does not match the signature you provided. Check your key and signing method.”

The symptoms of this error can be puzzling. For example, a standard upload to the root of the bucket may succeed, but attempting to upload to a specific folder within the bucket could trigger the error. This was the case in a recent project, where an upload to the bucket carimagerydata succeeded, while uploads to carimagerydata/tx returned the signature mismatch error. The access key, secret key, and permissions were all configured correctly, but specifying the folder path still caused a failure.

Possible Solutions

When you encounter this issue, there are several things to investigate:

1. Bucket Region Configuration

Ensure that the AWS SDK is configured with the correct region for the S3 bucket. The SDK signs requests based on the region setting, and a mismatch between the region used in the code and the actual bucket region often results in signature errors.

AmazonS3Config config = new AmazonS3Config
{
    RegionEndpoint = RegionEndpoint.YourBucketRegion // Ensure it's correct
};

2. Signature Version Settings

The AWS SDK uses Signature Version 4 by default, which is compatible with most regions and recommended by AWS. However, certain legacy setups or bucket configurations may expect Signature Version 2. Explicitly setting Signature Version 4 in the configuration can sometimes resolve these errors.

AmazonS3Config config = new AmazonS3Config
{
    SignatureVersion = "4", // Explicitly specify Signature Version 4
    RegionEndpoint = RegionEndpoint.YourBucketRegion
};

3. Permissions and Bucket Policies

Check if there are any bucket policies or IAM restrictions specific to the folder path you’re trying to upload to. If your bucket policy restricts access to certain paths, you’ll need to adjust it to allow uploads to the folder.

4. Path Style vs. Virtual-Hosted Style URL

Another possible issue arises from changes in how paths are handled. The AWS SDK has evolved over time, and the method of specifying paths within buckets has also changed. The SDK now defaults to virtual-hosted style URLs, where the bucket name is part of the domain (e.g., bucket-name.s3.amazonaws.com). Older setups, however, may expect path-style URLs, where the bucket name is part of the path (e.g., s3.amazonaws.com/bucket-name/key). Specifying path-style addressing in the configuration can sometimes fix compatibility issues:

AmazonS3Config config = new AmazonS3Config
{
    ForcePathStyle = true,
    RegionEndpoint = RegionEndpoint.YourBucketRegion
};

Understanding the Key Change: Folder Path Format in S3

The reason these issues are so confusing is that AWS has changed the way folders (often called prefixes) are specified. Historically, users specified a bucket name combined with a folder path and then provided the object’s name. Now, however, the SDK expects a more unified format:

  • Old Format: bucket + path, object
  • New Format: bucket, path + object

This means that in the new format, the folder path (e.g., /tx/) should be included as part of the object key rather than being treated as a separate parameter.
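To make the difference concrete, using the bucket and folder from the example above (the “old” line is shown only as the pattern that path-style addressing used to tolerate, not as something to copy):

// Old habit (path-style era): the folder crept into the bucket name
// BucketName = "carimagerydata/tx", Key = "yourfile.txt"   -> now produces the signature mismatch

// Current convention: the bucket is just the bucket; the prefix belongs in the object key
// BucketName = "carimagerydata", Key = "tx/yourfile.txt"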

Solution: Specifying the Folder in the Object Key

To upload to a folder within a bucket, you should include the full path in the key itself. For example, if you want to upload yourfile.txt to the tx folder within carimagerydata, the key should be specified as "tx/yourfile.txt".

Here’s how to do it in C#:

string bucketName = "carimagerydata";
string keyName = "tx/yourfile.txt"; // Specify the folder in the key
string filePath = @"C:\path\to\your\file.txt";

AmazonS3Client client = new AmazonS3Client(accessKey, secretKey, RegionEndpoint.YourBucketRegion);

PutObjectRequest request = new PutObjectRequest
{
    BucketName = bucketName,
    Key = keyName, // Full path including folder
    FilePath = filePath,
    ContentType = "text/plain" // Example for text files, adjust as needed
};

PutObjectResponse response = await client.PutObjectAsync(request);

Conclusion

This error is a prime example of how changes in SDK conventions can impact legacy applications. The update to a more unified key format for specifying folder paths in S3 may seem minor, but it can cause unexpected issues if you’re unaware of it. By specifying the folder as part of the object key, you can avoid signature mismatch errors and ensure that your application is compatible with the latest AWS SDK practices.

Always remember to check SDK release notes for updates in configuration defaults, particularly when working with cloud services, as conventions and standards may change over time. This small adjustment can save a lot of time when troubleshooting!


Understanding TLS fingerprinting

TLS fingerprinting is a way for bot-discovery software to help tell the difference between a browser and a bot. It works transparently and fast, but it is not infallible. It depends on the fact that when a secure HTTPS connection is made between client and server, there is an exchange of supported ciphers. Based on the ciphers offered, this can be compared against the “claimed” user agent, to see whether these are the ciphers that would be supported by that user agent (browser).

It’s easy for a bot to claim to be Chrome: just set the user agent to match a modern version of Chrome. It’s much harder to support all the ciphers Chrome supports, and thus, if the HTTP request says it’s Chrome but doesn’t offer all of Chrome’s ciphers, then it probably isn’t Chrome, and it’s a bot.

There is a really handy tool here: https://tls.peet.ws/api/all, which lists the ciphers used in the connection. If you use a browser like Chrome, you’ll see this list of ciphers:

 "ciphers": [
      "TLS_GREASE (0xEAEA)",
      "TLS_AES_128_GCM_SHA256",
      "TLS_AES_256_GCM_SHA384",
      "TLS_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_RSA_WITH_AES_128_CBC_SHA",
      "TLS_RSA_WITH_AES_256_CBC_SHA"
    ]

Whereas if you visit it using Firefox, you’ll see this:

 "ciphers": [
      "TLS_AES_128_GCM_SHA256",
      "TLS_CHACHA20_POLY1305_SHA256",
      "TLS_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_RSA_WITH_AES_128_CBC_SHA",
      "TLS_RSA_WITH_AES_256_CBC_SHA"
    ],

Use cURL or WebClient in C#, and you’ll see this:

 "ciphers": [
      "TLS_AES_256_GCM_SHA384",
      "TLS_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
      "TLS_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_RSA_WITH_AES_256_CBC_SHA256",
      "TLS_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_RSA_WITH_AES_256_CBC_SHA",
      "TLS_RSA_WITH_AES_128_CBC_SHA"
    ],

So, even at a cursory glance, you could check whether TLS_GREASE or TLS_CHACHA20_POLY1305_SHA256 is present, and declare the user a bot if these ciphers are missing. More advanced code could check the version of Chrome, the operating system, and so forth, but that is the essence of the technique.
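As a rough illustration of that server-side idea in C# (just a sketch: the method and its inputs are hypothetical, the cipher names are taken from the Chrome capture above, and actually obtaining the offered cipher list means hooking whatever terminates TLS for you, which isn’t shown here):

using System.Collections.Generic;
using System.Linq;

static class TlsHeuristics
{
    // If the client claims to be Chrome but its ClientHello lacks ciphers Chrome
    // always offers, the user-agent claim and the TLS fingerprint disagree.
    public static bool LooksLikeChromeBot(string userAgent, IReadOnlyCollection<string> offeredCiphers)
    {
        if (!userAgent.Contains("Chrome"))
            return false; // only applying the Chrome heuristic here

        // A couple of ciphers seen in the Chrome capture above
        var expected = new[] { "TLS_CHACHA20_POLY1305_SHA256", "TLS_AES_128_GCM_SHA256" };

        // Missing any of them suggests the request is not really coming from Chrome
        return expected.Any(c => !offeredCiphers.Contains(c));
    }
}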

However, using the TLS-Client library in Python allows a fuller set of ciphers to be offered, and the resulting TLS fingerprint looks much more similar to (if not indistinguishable from) Chrome’s.

https://github.com/infiniteloopltd/TLS

    session = tls_client.Session(
        client_identifier="chrome_120",
        random_tls_extension_order=True
    )
    page_url = "https://tls.peet.ws/api/all"
    res = session.get(
        page_url
    )
    print(res.text)

I am now curious to know if I can apply the same logic to C# …

Chinese Vehicle License Plate #API now available

With a database of 16 million Chinese number plates and associated vehicle details and owners, we’ve launched an API that allows a lookup on this data. This allows you to determine the make, model and owner of a Chinese vehicle from its license plate, assuming it’s found in our database (which, admittedly, covers only 4% of the vehicles in China).

The website is available here: https://www.chepaiapi.cn/, and we also have offline data available for analysis and other purposes.

Car registration plate lookups for China use the /CheckChina endpoint, which returns the following information:

  • Make / Model
  • Age
  • Engine Size
  • VIN
  • Owner details
  • Representative image

In China, only 4% of all vehicles are covered by our database, so most searches will not return data; there is no charge for a failed search.

Sample Registration Number: 

浙GCJ300

Sample Json:

{
  "Description": "haima Family",
  "RegistrationYear": "2004",
  "CarMake": {
    "CurrentTextValue": "haima"
  },
  "CarModel": {
    "CurrentTextValue": "Family"
  },
  "Variant": "GL New Yue Class",
  "EngineSize": {
    "CurrentTextValue": "1.6L"
  },
  "MakeDescription": {
    "CurrentTextValue": "haima"
  },
  "ModelDescription": {
    "CurrentTextValue": "Family"
  },
  "NumberOfSeats": {
    "CurrentTextValue": 5
  },
  "NumberOfDoors": {
    "CurrentTextValue": 4
  },
  "BodyStyle": "saloon",
  "VIN": "LH17CKJF04H035018",
  "EngineNumber": "ZM",
  "FuelType": {
    "CurrentTextValue": "gasoline"
  },
  "Owner": {
    "Name": "王坚强",
    "Id": "330725197611214838",
    "Address": "义乌市稠城街道殿山村",
    "Tel": "13868977994"
  },
  "Gonggao": "HMC7161",
  "Location": "浙江省金华市",
  "ImageUrl": "http://chepaiapi.cn/image.aspx/@aGFpbWEgRmFtaWx5"
}

Tiny C# code to test webhooks with NGrok

TL;DR; Here is the github repo: https://github.com/infiniteloopltd/MiniServer/

Testing webhooks can sometimes be awkward: you have to deploy your webhook-handling code to a server, and once it’s there, you can’t easily debug it; run it on localhost instead, and the remote service can’t reach it.

This is where NGrok comes in handy: you can use it to temporarily expose a local website, including your webhook handler, and then use the URL from NGrok as the webhook endpoint. Once it’s working locally, you can deploy it to the server for more testing.

Here I wrote a tiny webserver that just prints to screen whatever is HTTP POSTed or GETted to it:

using System;
using System.IO;
using System.Net;
using System.Text;

class Program
{
    static void Main(string[] args)
    {
        HttpListener listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();
        Console.WriteLine("Listening for requests on http://localhost:8080/ ...");

        while (true)
        {
            HttpListenerContext context = listener.GetContext();
            HttpListenerRequest request = context.Request;

            // Log the request method (GET/POST)
            Console.WriteLine("Received {0} request for: {1}", request.HttpMethod, request.Url);

            // Log the headers
            Console.WriteLine("Headers:");
            foreach (string key in request.Headers.AllKeys)
            {
                Console.WriteLine("{0}: {1}", key, request.Headers[key]);
            }

            // Log the body if it's a POST request
            if (request.HttpMethod == "POST")
            {
                using (var reader = new StreamReader(request.InputStream, request.ContentEncoding))
                {
                    string body = reader.ReadToEnd();
                    Console.WriteLine("Body:");
                    Console.WriteLine(body);
                }
            }

            // Respond with a simple HTML page
            HttpListenerResponse response = context.Response;
            string responseString = "<html><body><h1>Request Received</h1></body></html>";
            byte[] buffer = Encoding.UTF8.GetBytes(responseString);
            response.ContentLength64 = buffer.Length;
            Stream output = response.OutputStream;
            output.Write(buffer, 0, buffer.Length);
            output.Close();
        }
    }
}

Then the server is exposed via NGrok with:

ngrok http 8080 --host-header="localhost:8080" 

I’ve tried similar code with CloudFlare tunnels, but it wasn’t as easy to set up as NGrok.


Warning: Unverified sender in emails from #AWS #SES to #Microsoft365

If you see this text:

Warning: Unverified sender : This message may not be sent by the sender that’s displayed. If you aren’t certain this message is safe, please be cautious when interacting with this email, and avoid clicking on any links or downloading attachments that are on it

appearing on emails sent via AWS SES to Microsoft 365 accounts, then there is a simple fix you can apply, which is to set up DKIM for the domain.

I found that manually supplying the DKIM public/private key was easiest.

So, I created a keypair as follows:

openssl genrsa -out dkim_private.pem 2048
openssl rsa -in dkim_private.pem -pubout -out dkim_public.pem

Then I created a TXT DNS record on the domain, named default._domainkey.<mydomain>.com

The record should look like this (using the public key, with spaces removed):

v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2uRIflp/tyXzZDgJ6WxEXoqh5jw==
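The p= value is just the base64 body of dkim_public.pem with the BEGIN/END lines and any line breaks stripped out. On a typical Linux or WSL shell, something like this produces it (just a convenience; copying and pasting by hand works too):

grep -v '^-----' dkim_public.pem | tr -d '\n'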

Then, going into AWS SES, I added a new identity, selected “Domain”, pressed “Advanced DKIM”, chose “Provide DKIM authentication token (BYODKIM)”, and then pasted in the private key, with spaces removed.

AWS then goes and checks your DNS, and hopefully within a few minutes you should get an email starting “DKIM setup SUCCESS”

Once that is in place, any email you send via SES on that domain will be DKIM-signed and trusted by Microsoft 365, hopefully getting your delivery rate up!

https://www.smtpjs.com/


Handling a System.Net.ProtocolViolationException on a webserver in C#

So, it’s been a while since I actually wrote a post about network programming in C#, which is, after all, what this blog is named for. But this is an interesting case of a crashing server, and how to fix it.

So, I have a self-written HTTP server, proxied to the world via IIS, but every so often it would crash, and since the service was set to auto-restart, it would come back up again about a minute later. But I couldn’t see a pattern as to why. I was monitoring the service using UptimeRobot, and it was reporting the service as always down. Now, “always” is a bit suspect: it went down often, but not always.

Then I noticed that UptimeRobot makes HTTP HEAD requests rather than HTTP GET requests, and of course I had always tested my server with HTTP GET. And guess what: an HTTP HEAD request would crash the service, and Windows would restart it again a minute later.

To test for HTTP HEAD requests, I used the curl -I flag, which issues an HTTP HEAD, and it crashed the server, with this in the event log:


Exception Info: System.Net.ProtocolViolationException
at System.Net.HttpResponseStream.BeginWrite(Byte[], Int32, Int32, System.AsyncCallback, System.Object)
at System.IO.Stream+<>c.b__53_0(System.IO.Stream, ReadWriteParameters, System.AsyncCallback, System.Object)
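For reference, reproducing the probe from the command line is as simple as the following (the URL here is just a local example; point it at wherever your server is listening):

curl -I http://localhost:8080/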

So, I created my own minimal server in a console app, so I could test it, without the complexity of the full server.

using System;
using System.Net;
using System.Text;
using System.Threading.Tasks;

namespace ServerTest
{
    class HttpServer
    {
        public static readonly HttpListener Listener = new HttpListener();

        public static string PageData = "service running";

        // Minimal entry point: listen on an example local port and hand off each request
        public static void Main()
        {
            Listener.Prefixes.Add("http://localhost:8080/");
            Listener.Start();
            Console.WriteLine("Listening on http://localhost:8080/ ...");
            HandleIncomingConnections().GetAwaiter().GetResult();
        }

        public static async Task HandleIncomingConnections()
        {   
            while (true)
            {
                try
                {
                    // Will wait here until we hear from a connection
                    var ctx = await Listener.GetContextAsync();
                    _ = Task.Run(() => HandleConnection(ctx));
                }
                catch (Exception ex)
                {
                    // Handle or log the exception as needed
                    Console.WriteLine($"Error handling connection: {ex.Message}");
                    // Optionally, you could continue to listen for new connections
                    // Or you could break out of the loop depending on your error handling strategy
                }
            }
        }

        private static async void HandleConnection(HttpListenerContext ctx)
        {
            // Peel out the requests and response objects
            var req = ctx.Request;
            var resp = ctx.Response;
            var response = PageData;
            // Write the response info
            var data = Encoding.UTF8.GetBytes(response);
            resp.ContentType = "text/html";
            resp.ContentEncoding = Encoding.UTF8;
            resp.ContentLength64 = data.LongLength;
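            // The fix: only write a response body for GET requests; a HEAD response must not include one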
            if (req.HttpMethod == "GET")
            {
                // Write out to the response stream (asynchronously), then close it
                await resp.OutputStream.WriteAsync(data, 0, data.Length);
            }

            resp.Close();
        }

    }
}

The check on the HTTP verb (the if (req.HttpMethod == "GET") block) is the fix. What was happening was that the HEAD request was not expecting a response body, but I was providing one; so now I check whether the verb is GET and only write the response in that case.

And this worked! After months, if not years, of an intermittent bug that was so hard to catch.
