Using an #API to Retrieve User Details from a #QQ Account ID

QQ, one of China’s largest instant messaging platforms, assigns each user a unique account ID. If you need to retrieve user details from a QQ account ID programmatically, you can use an API such as AvatarAPI. This guide will walk you through making an API request and interpreting the returned JSON response.
API Endpoint
The API request is made to the following URL:
https://avatarapi.com/v2/api.aspx
Request Format
The API expects a POST request with a JSON body containing authentication details (username and password) along with the QQ email ID of the user you want to retrieve information for.
Example Request Body
{
"username": "demo",
"password": "demo___",
"email": "16532096@qq.com"
}
Sending the Request
You can send this request using cURL, Postman, or a programming language like Python. Here’s an example using Python’s requests library:
import requests
import json
url = "https://avatarapi.com/v2/api.aspx"
headers = {"Content-Type": "application/json"}
payload = {
"username": "demo",
"password": "demo___",
"email": "16532096@qq.com"
}
response = requests.post(url, headers=headers, json=payload)
print(response.json())
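If you're working in .NET rather than Python, here's a minimal C# sketch of the same request (same endpoint and demo credentials as above):
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
class Program
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Same JSON body as the Python example above
        var payload = "{\"username\":\"demo\",\"password\":\"demo___\",\"email\":\"16532096@qq.com\"}";
        var content = new StringContent(payload, Encoding.UTF8, "application/json");
        var response = await client.PostAsync("https://avatarapi.com/v2/api.aspx", content);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}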
API Response
The API returns a JSON object with the user’s details. Below is a sample response:
{
"Name": "邱亮",
"Image": "https://q.qlogo.cn/g?b=qq&nk=16532096&s=640",
"Valid": true,
"City": "",
"Country": "China",
"IsDefault": true,
"Success": true,
"RawData": "",
"Source": {
"Name": "QQ"
}
}
Explanation of Response Fields
- Name: The user’s name associated with the QQ account.
- Image: A URL to the user’s QQ avatar image.
- Valid: Boolean flag indicating if the QQ account is valid.
- City: The user’s city (if available).
- Country: The user’s country.
- IsDefault: Indicates whether the profile is using the default avatar.
- Success: Boolean flag indicating whether the API request was successful.
- RawData: Any additional raw data returned from the source.
- Source: The data provider (in this case, QQ).
Use Cases
This API can be useful for:
- Enhancing user profiles by fetching their QQ avatar and details.
- Verifying the validity of QQ accounts before allowing user actions.
- Personalizing content based on user identity from QQ.
Conclusion
Using an API to retrieve QQ user details is a straightforward process. By sending a POST request with the QQ email ID, you can obtain the user’s name, avatar, and other details. Ensure that you handle user data responsibly and comply with any relevant privacy regulations.
For production use, replace the demo credentials with your own API key and ensure secure storage of authentication details.
Obtaining an Access Token for Outlook Web Access (#OWA) Using a Consumer Account

If you need programmatic access to Outlook Web Access (OWA) using a Microsoft consumer account (e.g., an Outlook.com, Hotmail, or Live.com email), you can obtain an access token using the Microsoft Authentication Library (MSAL). The following C# code demonstrates how to authenticate a consumer account and retrieve an access token.
Prerequisites
To run this code successfully, ensure you have:
- .NET installed
- The Microsoft.Identity.Client NuGet package
- A registered application in the Microsoft Entra ID (formerly Azure AD) portal with the necessary API permissions
Code Breakdown
The following code authenticates a user using the device code flow, which is useful for scenarios where interactive login via a browser is required but the application does not have direct access to a web interface.
1. Define Authentication Metadata
var authMetadata = new
{
ClientId = "9199bf20-a13f-4107-85dc-02114787ef48", // Application (client) ID
Tenant = "consumers", // Target consumer accounts (not work/school accounts)
Scope = "service::outlook.office.com::MBI_SSL openid profile offline_access"
};
- ClientId: Identifies the application in Microsoft Entra ID.
- Tenant: Set to consumers to restrict authentication to personal Microsoft accounts.
- Scope: Defines the permissions the application is requesting. In this case, service::outlook.office.com::MBI_SSL is required to access Outlook services, while openid, profile, and offline_access allow authentication and token refresh.
2. Configure the Authentication Application
var app = PublicClientApplicationBuilder
.Create(authMetadata.ClientId)
.WithAuthority($"https://login.microsoftonline.com/{authMetadata.Tenant}")
.Build();
- PublicClientApplicationBuilder is used to create a public client application that interacts with Microsoft identity services.
- .WithAuthority() specifies that authentication should occur against Microsoft's login endpoint for consumer accounts.
3. Initiate the Device Code Flow
var scopes = new string[] { authMetadata.Scope };
var result = await app.AcquireTokenWithDeviceCode(scopes, deviceCodeResult =>
{
Console.WriteLine(deviceCodeResult.Message); // Display login instructions
return Task.CompletedTask;
}).ExecuteAsync();
- AcquireTokenWithDeviceCode() initiates authentication using a device code.
- The deviceCodeResult.Message provides instructions to the user on how to authenticate (typically directing them to https://microsoft.com/devicelogin).
- Once the user completes authentication, the application receives an access token.
4. Retrieve and Display the Access Token
Console.WriteLine($"Access Token: {result.AccessToken}");
- The retrieved token can now be used to make API calls to Outlook Web Access services.
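For example, here's a minimal sketch of attaching the token to an outgoing request. The URL below is a placeholder, not a documented OWA endpoint; substitute whichever service URL you are targeting:
using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", result.AccessToken);
// Placeholder URL - replace with the OWA service endpoint you are calling
var response = await client.GetAsync("https://outlook.office.com/");
Console.WriteLine(response.StatusCode);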
5. Handle Errors
try
{
// ... steps 1-4 above: build the app and acquire the token ...
}
catch (MsalException ex)
{
Console.WriteLine($"Authentication failed: {ex.Message}");
}
- Catching MsalException surfaces authentication errors, such as incorrect permissions or expired tokens.
Running the Code
- Compile and run the program.
- Follow the login instructions displayed in the console.
- After signing in, the access token will be printed.
- Use the token in HTTP requests to Outlook Web Access APIs.
Conclusion
This code provides a straightforward way to obtain an access token for Outlook Web Access using a consumer account. The device code flow is particularly useful for command-line applications or scenarios where interactive authentication via a browser is required.
#UFG #API for Poland – Vehicle Insurance Details

How to Use the API for Vehicle Insurance Details in Poland
If you’re working in the insurance industry, vehicle-related services, or simply need a way to verify a car’s insurance status in Poland, there’s a powerful API available to help you out. This API provides quick and reliable access to current insurance details of a vehicle, using just the license plate number.
Overview of the API Endpoint
The API is accessible at the following endpoint:
https://www.tablicarejestracyjnaapi.pl/api/bespokeapi.asmx?op=CheckInsuranceStatusPoland
This endpoint retrieves the insurance details for vehicles registered in Poland. It uses the license plate number as the key input to return the current insurance policy information in XML format.
Key Features
The API provides the following details about a vehicle:
- PolicyNumber: The unique policy number of the insurance.
- Vehicle: The make and model of the vehicle.
- Company: The insurance company providing the policy.
- Address: The company’s registered address.
- IsBlacklisted: A boolean field indicating whether the vehicle is blacklisted.
Below is an example of the XML response:
<InsurancePolicy xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://Regcheck.org.uk/">
<PolicyNumber>920040143596</PolicyNumber>
<Vehicle>RENAULT ARES 826 RZ</Vehicle>
<Company>TOWARZYSTWO UBEZPIECZEŃ I REASEKURACJI WARTA S.A.</Company>
<Address>rondo I. Daszyńskiego 1, 00-843 Warszawa</Address>
<IsBlacklisted>false</IsBlacklisted>
</InsurancePolicy>
How to Use the API
- Send a Request: To use the API, you need to send an HTTP request to the endpoint. Typically, you’ll pass the vehicle’s license plate number as a parameter in the request body or URL query string.
- Process the Response: The response will be in XML format. You can parse the XML to extract the details you need, such as the policy number, the name of the insurance provider, and the vehicle’s blacklisting status.
Example Use Case
Imagine you’re developing a mobile application for a car rental service in Poland. Verifying the insurance status of vehicles in your fleet is crucial for compliance and operational efficiency. By integrating this API, you can:
- Automate insurance checks for newly added vehicles.
- Notify users if a vehicle’s insurance policy has expired or if the vehicle is blacklisted.
- Display detailed insurance information in the app for transparency.
Integration Tips
- Error Handling: Ensure your application handles scenarios where the API returns errors (e.g., invalid license plate numbers or no records found).
- XML Parsing: Use robust XML parsers available in your development language to process the API response efficiently.
- Security: If the API requires authentication, make sure you secure your API keys and follow best practices for handling sensitive information.
Sample Code
Here’s a quick example of how you can call the API in C#:
using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml;
class Program
{
static async Task Main(string[] args)
{
string licensePlate = "WE12345"; // Example license plate number
string apiUrl = "https://www.tablicarejestracyjnaapi.pl/api/bespokeapi.asmx?op=CheckInsuranceStatusPoland";
using HttpClient client = new HttpClient();
HttpResponseMessage response = await client.GetAsync(apiUrl + "&licensePlate=" + licensePlate); // apiUrl already contains "?op=...", so append with "&"
if (response.IsSuccessStatusCode)
{
string responseContent = await response.Content.ReadAsStringAsync();
XmlDocument xmlDoc = new XmlDocument();
xmlDoc.LoadXml(responseContent);
// The response uses a default XML namespace, so match elements by local name
string policyNumber = xmlDoc.SelectSingleNode("//*[local-name()='PolicyNumber']")?.InnerText;
string vehicle = xmlDoc.SelectSingleNode("//*[local-name()='Vehicle']")?.InnerText;
string company = xmlDoc.SelectSingleNode("//*[local-name()='Company']")?.InnerText;
string address = xmlDoc.SelectSingleNode("//*[local-name()='Address']")?.InnerText;
string isBlacklisted = xmlDoc.SelectSingleNode("//*[local-name()='IsBlacklisted']")?.InnerText;
Console.WriteLine($"Policy Number: {policyNumber}\nVehicle: {vehicle}\nCompany: {company}\nAddress: {address}\nBlacklisted: {isBlacklisted}");
}
else
{
Console.WriteLine("Failed to retrieve data from the API.");
}
}
}
Conclusion
The API for vehicle insurance details in Poland is a valuable tool for businesses and developers looking to integrate reliable insurance data into their applications. Whether you’re building tools for insurance verification, fleet management, or compliance monitoring, this API provides an efficient way to access up-to-date information with minimal effort.
Start integrating the API today and take your application’s functionality to the next level!
License plate lookup #API now available for #Albania

If you’re working with vehicle registration data in Albania, the /CheckAlbania endpoint from API.com.al is an invaluable resource. It provides a comprehensive set of details for any car registered in Albania, offering insights into the vehicle’s specifications, registration details, and even a representative image.
Here’s what you need to know about the /CheckAlbania endpoint and how it can enhance your applications.
What Information Does /CheckAlbania Provide?
The endpoint delivers a structured JSON response containing key vehicle details, making it easy to integrate with applications. Below is an overview of the information returned:
- Make: Identifies the car manufacturer (e.g., AUDI).
- Age: The year of registration.
- Engine Size & Power: The vehicle’s engine size (in cc) and power output.
- Colour: Describes the car’s exterior color (e.g., “KUQE” for red).
- Traffic Permit: A unique identifier for the vehicle’s traffic permit.
- Representative Image: A URL pointing to a representative image of the car.
Albania Support
Vehicle registration plates in Albania can be looked up via the /CheckAlbania endpoint, which returns the following information:
- Make
- Age
- Engine Size & Power
- Colour
- Traffic Permit
- Representative image
Sample Registration Number:
AB404GM
Sample JSON:
{
"Description": "AUDI AUTOVETURE",
"RegistrationYear": "2009",
"CarMake": {
"CurrentTextValue": "AUDI"
},
"CarModel": {
"CurrentTextValue": "AUTOVETURE"
},
"MakeDescription": {
"CurrentTextValue": "AUDI"
},
"ModelDescription": {
"CurrentTextValue": "AUTOVETURE"
},
"NumberOfSeats": "5",
"Power": "2967",
"Colour": "KUQE",
"TrafficPermit": "0000000358602",
"Owner": "",
"EngineSize": "2490",
"Region": "",
"ImageUrl": "http://www.api.com.al/image.aspx/@QVVESSBBVVRPVkVUVVJF"
}
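No client library is required; a plain HTTP request works. Below is a minimal C# sketch that fetches and prints the raw JSON. The URL format here is an assumption on my part, so check the API.com.al documentation for the exact path, parameter name, and any credentials your account requires:
using System;
using System.Net.Http;
using System.Threading.Tasks;
class Program
{
    static async Task Main()
    {
        // Hypothetical URL format - adjust per the API.com.al documentation
        var url = "http://www.api.com.al/CheckAlbania?RegistrationNumber=AB404GM";
        using var client = new HttpClient();
        var json = await client.GetStringAsync(url);
        Console.WriteLine(json); // raw JSON, as in the sample above
    }
}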
How to Extract #EXIF Data from an Image in .NET 8 with #MetadataExtractor

GIT REPO : https://github.com/infiniteloopltd/ExifResearch
When working with images, EXIF (Exchangeable Image File Format) data can provide valuable information such as the camera model, date and time of capture, GPS coordinates, and much more. Whether you’re building an image processing application or simply want to extract metadata for analysis, knowing how to retrieve EXIF data in a .NET environment is essential.
In this post, we’ll walk through how to extract EXIF data from an image in .NET 8 using the cross-platform MetadataExtractor library.
Why Use MetadataExtractor?
.NET’s traditional System.Drawing.Common library has limitations when it comes to cross-platform compatibility, particularly for non-Windows environments. The MetadataExtractor library, however, is a powerful and platform-independent solution for extracting metadata from various image formats, including EXIF data.
With MetadataExtractor, you can read EXIF metadata from images in a clean, efficient way, making it an ideal choice for .NET Core and .NET 8 developers working on cross-platform applications.
Step 1: Install MetadataExtractor
To begin, you need to add the MetadataExtractor NuGet package to your project. You can install it using the following command:
dotnet add package MetadataExtractor
This package supports EXIF, IPTC, XMP, and many other metadata formats from various image file types.
Step 2: Writing the Code to Extract EXIF Data
Now that the package is installed, let’s write some code to extract EXIF data from an image stored as a byte array.
Here is the complete function:
using System;
using System.Collections.Generic;
using System.IO;
using MetadataExtractor;
using MetadataExtractor.Formats.Exif;
public class ExifReader
{
public static Dictionary<string, string> GetExifData(byte[] imageBytes)
{
var exifData = new Dictionary<string, string>();
try
{
using var ms = new MemoryStream(imageBytes);
var directories = ImageMetadataReader.ReadMetadata(ms);
foreach (var directory in directories)
{
foreach (var tag in directory.Tags)
{
// Add tag name and description to the dictionary (the description can be null for some tags)
exifData[tag.Name] = tag.Description ?? string.Empty;
}
}
}
catch (Exception ex)
{
Console.WriteLine($"Error reading EXIF data: {ex.Message}");
}
return exifData;
}
}
How It Works:
- Reading Image Metadata: The function uses ImageMetadataReader.ReadMetadata to read all the metadata from the byte array containing the image.
- Iterating Through Directories and Tags: EXIF data is organized in directories (for example, the main EXIF data, GPS, and thumbnail directories). We iterate through these directories and their associated tags.
- Handling Errors: We wrap the logic in a try-catch block to ensure any potential errors (e.g., unsupported formats) are handled gracefully.
Step 3: Usage Example
To use this function, you can pass an image byte array to it. Here’s an example:
using System;
using System.IO;
class Program
{
static void Main()
{
// Replace with your byte array containing an image
byte[] imageBytes = File.ReadAllBytes("example.jpg");
var exifData = ExifReader.GetExifData(imageBytes);
foreach (var kvp in exifData)
{
Console.WriteLine($"{kvp.Key}: {kvp.Value}");
}
}
}
This code reads an image from the file system as a byte array and then uses the ExifReader.GetExifData method to extract the EXIF data. Finally, it prints out the EXIF tags and their descriptions.
Example Output:
If the image contains EXIF metadata, the output might look something like this:
"Compression Type": "Baseline",
"Data Precision": "8 bits",
"Image Height": "384 pixels",
"Image Width": "512 pixels",
"Number of Components": "3",
"Component 1": "Y component: Quantization table 0, Sampling factors 2 horiz/2 vert",
"Component 2": "Cb component: Quantization table 1, Sampling factors 1 horiz/1 vert",
"Component 3": "Cr component: Quantization table 1, Sampling factors 1 horiz/1 vert",
"Make": "samsung",
"Model": "SM-G998B",
"Orientation": "Right side, top (Rotate 90 CW)",
"X Resolution": "72 dots per inch",
"Y Resolution": "72 dots per inch",
"Resolution Unit": "Inch",
"Software": "G998BXXU7EWCH",
"Date/Time": "2023:05:02 12:33:47",
"YCbCr Positioning": "Center of pixel array",
"Exposure Time": "1/33 sec",
"F-Number": "f/2.2",
"Exposure Program": "Program normal",
"ISO Speed Ratings": "640",
"Exif Version": "2.20",
"Date/Time Original": "2023:05:02 12:33:47",
"Date/Time Digitized": "2023:05:02 12:33:47",
"Time Zone": "+09:00",
"Time Zone Original": "+09:00",
"Shutter Speed Value": "1 sec",
"Aperture Value": "f/2.2",
"Exposure Bias Value": "0 EV",
"Max Aperture Value": "f/2.2",
"Metering Mode": "Center weighted average",
"Flash": "Flash did not fire",
"Focal Length": "2.2 mm",
"Sub-Sec Time": "404",
"Sub-Sec Time Original": "404",
"Sub-Sec Time Digitized": "404",
"Color Space": "sRGB",
"Exif Image Width": "4000 pixels",
"Exif Image Height": "3000 pixels",
"Exposure Mode": "Auto exposure",
"White Balance Mode": "Auto white balance",
"Digital Zoom Ratio": "1",
"Focal Length 35": "13 mm",
"Scene Capture Type": "Standard",
"Unique Image ID": "F12XSNF00NM",
"Compression": "JPEG (old-style)",
"Thumbnail Offset": "824 bytes",
"Thumbnail Length": "49594 bytes",
"Number of Tables": "4 Huffman tables",
"Detected File Type Name": "JPEG",
"Detected File Type Long Name": "Joint Photographic Experts Group",
"Detected MIME Type": "image/jpeg",
"Expected File Name Extension": "jpg"
This is just a small sample of the information EXIF can store. Depending on the camera and settings, you may find data on GPS location, white balance, focal length, and more.
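Beyond the generic name/description dictionary, MetadataExtractor also exposes typed directory classes, so you can query individual values directly. Here's a short sketch (using the same imageBytes array as before) that reads the original capture time and GPS position; both lookups return null when the tags are absent:
using System;
using System.IO;
using System.Linq;
using MetadataExtractor;
using MetadataExtractor.Formats.Exif;

var directories = ImageMetadataReader.ReadMetadata(new MemoryStream(imageBytes));

// Typed lookup: the Exif SubIFD directory holds the capture timestamp
var subIfd = directories.OfType<ExifSubIfdDirectory>().FirstOrDefault();
var takenAt = subIfd?.GetDescription(ExifDirectoryBase.TagDateTimeOriginal);

// Typed lookup: the GPS directory can produce a decimal latitude/longitude
var gps = directories.OfType<GpsDirectory>().FirstOrDefault();
var location = gps?.GetGeoLocation();

Console.WriteLine($"Taken: {takenAt ?? "unknown"}, Location: {location?.ToString() ?? "none"}");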
Why Use EXIF Data?
EXIF data can be valuable in various scenarios:
- Image processing: Automatically adjust images based on camera settings (e.g., ISO or exposure time).
- Data analysis: Track when and where photos were taken, especially when handling large datasets of images.
- Digital forensics: Verify image authenticity by analyzing EXIF metadata for manipulation or alterations.
Conclusion
With the MetadataExtractor library, extracting EXIF data from an image is straightforward and cross-platform compatible. Whether you’re building a photo management app, an image processing tool, or just need to analyze metadata, this approach is an efficient solution for working with EXIF data in .NET 8.
By using this solution, you can extract a wide range of metadata from images, making your applications smarter and more capable. Give it a try and unlock the hidden data in your images!
Storing data directly in GPU memory with #CLOO in C#

Although I'm not entirely sure of a practical application for this, the following application, written in C# with CLOO, can store arbitrary data in GPU memory. In this case, I'm picking a large file off the disk and putting it in GPU memory.
In the case of this NVIDIA Geforce card, the memory is dedicated to the GPU, and not shared with the system, ordinarily.
TL;DR; The Github repo is here – https://github.com/infiniteloopltd/GpuMemoryDemo
The core function is here:
static void Main()
{
var platform = ComputePlatform.Platforms[0];
var device = platform.Devices.FirstOrDefault(d => d.Type.HasFlag(ComputeDeviceTypes.Gpu));
var context = new ComputeContext(ComputeDeviceTypes.Gpu, new ComputeContextPropertyList(platform), null, IntPtr.Zero);
var queue = new ComputeCommandQueue(context, device, ComputeCommandQueueFlags.None);
const string largeFilePath = "C:\\Users\\fiach\\Downloads\\datagrip-2024.3.exe";
var contents = File.ReadAllBytes(largeFilePath);
var clBuffer = Store(contents, context, queue);
var readBackBytes = Retrieve(contents.Length, clBuffer, queue);
Console.WriteLine($"Original String: {contents[0]}");
Console.WriteLine($"Read Back String: {readBackBytes[0]}");
Console.WriteLine($"Strings Match: {contents[0] == readBackBytes[0]}");
// Memory leak here.
//Marshal.FreeHGlobal(readBackPtr);
//Marshal.FreeHGlobal(buffer);
}
public static ComputeBuffer<byte> Store(byte[] stringBytes, ComputeContext context, ComputeCommandQueue queue)
{
var buffer = Marshal.AllocHGlobal(stringBytes.Length);
Marshal.Copy(stringBytes, 0, buffer, stringBytes.Length);
var clBuffer = new ComputeBuffer<byte>(context, ComputeMemoryFlags.ReadWrite, stringBytes.Length);
queue.Write(clBuffer, true, 0, stringBytes.Length, buffer, null);
return clBuffer;
}
public static byte[] Retrieve(int size, ComputeBuffer<byte> clBuffer, ComputeCommandQueue queue)
{
var readBackPtr = Marshal.AllocHGlobal(size);
queue.Read(clBuffer, true, 0, size, readBackPtr, null);
var readBackBytes = new byte[size];
Marshal.Copy(readBackPtr, readBackBytes, 0, size);
return readBackBytes;
}
Below, we'll walk through this C# program, which demonstrates the use of OpenCL to store and retrieve data using the GPU. This can be beneficial for performance in data-heavy applications. Here's a breakdown of the code:
1. Setting Up OpenCL Context and Queue
The program begins by selecting the first available compute platform and choosing a GPU device from the platform:
var platform = ComputePlatform.Platforms[0];
var device = platform.Devices.FirstOrDefault(d => d.Type.HasFlag(ComputeDeviceTypes.Gpu));
var context = new ComputeContext(ComputeDeviceTypes.Gpu, new ComputeContextPropertyList(platform), null, IntPtr.Zero);
var queue = new ComputeCommandQueue(context, device, ComputeCommandQueueFlags.None);
- ComputePlatform.Platforms[0]: Selects the first OpenCL platform on the machine (typically corresponds to a GPU vendor like NVIDIA or AMD).
- platform.Devices.FirstOrDefault(...): Finds the first GPU device available on the platform.
- ComputeContext: Creates an OpenCL context for managing resources like buffers and command queues.
- ComputeCommandQueue: Initializes a queue to manage commands that will be executed on the selected GPU.
2. Reading a Large File into Memory
The program then loads the contents of a large file into a byte array:
const string largeFilePath = "C:\\Users\\fiach\\Downloads\\datagrip-2024.3.exe";
var contents = File.ReadAllBytes(largeFilePath);
This step reads the entire file into memory, which will later be uploaded to the GPU.
3. Storing Data on the GPU
The Store method is responsible for transferring the byte array to the GPU:
var clBuffer = Store(contents, context, queue);
- It allocates memory using Marshal.AllocHGlobal to hold the byte array.
- The byte array is then copied into this allocated buffer.
- A ComputeBuffer<byte> is created on the GPU, and the byte array is written to it using the Write method of the ComputeCommandQueue.
Note: The Store method uses Marshal.Copy to move the bytes from managed memory into an unmanaged host buffer; the queue's Write call is what actually transfers that buffer to the GPU.
4. Retrieving Data from the GPU
The Retrieve method is responsible for reading the data back from the GPU into a byte array:
var readBackBytes = Retrieve(contents.Length, clBuffer, queue);
- The method allocates memory using Marshal.AllocHGlobal to hold the data read from the GPU.
- The Read method of the ComputeCommandQueue is used to fetch the data from the GPU buffer back into the allocated memory.
- The memory is then copied into a managed byte array (readBackBytes).
5. Verifying the Data Integrity
The program prints the first byte of the original and retrieved byte arrays, comparing them to verify if the data was correctly transferred and retrieved:
Console.WriteLine($"Original String: {contents[0]}");
Console.WriteLine($"Read Back String: {readBackBytes[0]}");
Console.WriteLine($"Strings Match: {contents[0] == readBackBytes[0]}");
This checks whether the first byte of the file content remains intact after being transferred to and retrieved from the GPU.
6. Memory Management
The program has a commented-out section for freeing unmanaged memory:
//Marshal.FreeHGlobal(readBackPtr);
//Marshal.FreeHGlobal(buffer);
These lines should free the unmanaged memory buffers allocated with Marshal.AllocHGlobal to avoid memory leaks, but they are commented out here; in fact, those pointers are not even in scope in Main, so the frees belong inside Store and Retrieve, leaving room for improvement.
Potential Improvements and Issues
- Memory Leaks: The program does not properly free the unmanaged memory allocated via Marshal.AllocHGlobal, leading to potential memory leaks if run multiple times (see the sketch below).
- Error Handling: The program lacks error handling for situations like missing GPU devices or file read errors.
- Large File Handling: For large files, this approach may run into memory constraints, and you might need to manage chunked transfers for efficiency.
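As a sketch of the memory-leak fix, each unmanaged allocation can be wrapped in try/finally so it is always released. Applied to Retrieve, for example:
public static byte[] Retrieve(int size, ComputeBuffer<byte> clBuffer, ComputeCommandQueue queue)
{
    var readBackPtr = Marshal.AllocHGlobal(size);
    try
    {
        queue.Read(clBuffer, true, 0, size, readBackPtr, null);
        var readBackBytes = new byte[size];
        Marshal.Copy(readBackPtr, readBackBytes, 0, size);
        return readBackBytes;
    }
    finally
    {
        // Free the unmanaged buffer even if the Read call throws
        Marshal.FreeHGlobal(readBackPtr);
    }
}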
In summary, this program demonstrates how to work with OpenCL in C# to transfer data between the host system and the GPU. While it shows the core functionality, handling memory leaks and improving error management should be considered for a production-level solution.
Cost-Effective SQL Server Database Restore on Microsoft #Azure: Using SMB Shares

1) Motivation Behind the Process
Managing costs efficiently on Microsoft Azure is a crucial aspect for many businesses, especially when it comes to managing resources like SQL Server databases. One area where I found significant savings was in the restoration of SQL Server databases.
Traditionally, to restore databases, I was using a managed disk. The restore process involved downloading a ZIP file, unzipping it to a .bak file, and then restoring it to the main OS disk. However, there was a significant issue with this setup: the cost of the managed disk.
Even when database restores happened only once every six months, I was still paying for the full capacity of the managed disk—500GB of provisioned space. This means I was paying for unused storage space for extended periods, which could be a significant waste of resources and money.
To tackle this issue, I switched to using Azure Storage Accounts with file shares (standard, not premium), which provided a more cost-effective approach. By restoring the database from an SMB share, I could pay only for the data usage, rather than paying for provisioned capacity on a managed disk. Additionally, I could delete the ZIP and BAK files after the restore process was complete, further optimizing storage costs.
2) Issues and Solutions
While the transition to using an Azure Storage Account for database restores was a great move in terms of cost reduction, it wasn’t without its challenges. One of the main hurdles I encountered during this process was SQLCMD reporting that the .bak file did not exist, even though it clearly did.
Symptoms of the Problem
The error message was:
Msg 3201, Level 16, State 2, Server [ServerName], Line 1
Cannot open backup device '\\<UNC Path>\Backups\GeneralPurpose.bak'. Operating system error 3(The system cannot find the path specified.)
Msg 3013, Level 16, State 1, Server [ServerName], Line 1
RESTORE DATABASE is terminating abnormally.
This was perplexing because I had confirmed that the .bak file existed at the UNC path and that the path was accessible from my system.
Diagnosis
To diagnose the issue, I started by enabling xp_cmdshell in SQL Server. This extended stored procedure allows the execution of operating system commands, which is very helpful for troubleshooting such scenarios.
First, I enabled xp_cmdshell by running the following commands:
-- Enable advanced options
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Enable xp_cmdshell
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
Once xp_cmdshell was enabled, I ran a simple DIR command to verify if SQL Server could access the backup file share:
EXEC xp_cmdshell 'dir \\<UNC Path>\Backups\GeneralPurpose.bak';
The result indicated that the SQL Server service account did not have proper access to the SMB share, and that’s why it couldn’t find the .bak file.
Solution
To resolve this issue, I had to map the network share explicitly within SQL Server using the net use command, which allows SQL Server to authenticate to the SMB share.
Here’s the solution I implemented:
EXEC xp_cmdshell 'net use Z: \\<UNC Path> /user:localhost\<user> <PASSWORD>';
Explanation
- Mapping the Network Drive: The net use command maps the SMB share to a local drive letter (in this case, Z:), which makes it accessible to SQL Server.
- Authentication: The /user: flag specifies the username and password needed to authenticate to the share. In my case, I used an account (e.g., localhost\fsausse) with the correct credentials.
- Accessing the Share: After mapping the network drive, I could proceed to access the .bak file located in the SMB share by using its mapped path (Z:). SQL Server would then be able to restore the database without the "file not found" error.
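With the drive mapped, the restore itself is a standard RESTORE DATABASE statement against the mapped path. The database and file names below are illustrative, taken from the error message earlier:
RESTORE DATABASE [GeneralPurpose]
FROM DISK = 'Z:\Backups\GeneralPurpose.bak'
WITH REPLACE;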
Once the restore was completed, I could remove the drive mapping with:
EXEC xp_cmdshell 'net use Z: /delete';
This approach ensured that SQL Server had the necessary permissions to access the file on the SMB share, and I could restore my database efficiently, only paying for the data usage on Azure Storage.
Conclusion
By transitioning from a managed disk to an SMB share on Azure Storage, I significantly reduced my costs during database restores. The issue with SQL Server not finding the .bak file was quickly diagnosed and resolved by enabling xp_cmdshell, mapping the network share, and ensuring proper authentication. This process allows me to restore databases in a more cost-effective manner, paying only for the data used during the restore, and avoiding unnecessary storage costs between restores.
For businesses looking to optimize Azure costs, this method provides an efficient, scalable solution for managing large database backups with minimal overhead.
C# – using #OpenCV to determine if an image contains an image of a car (or a duck)

TL;DR; Here is the repo: https://github.com/infiniteloopltd/IsItACar
This demo application can take an image and determine whether or not it is an image of a car. My test image was of a duck, which was very definitely not car-like. But silliness aside, this can be very useful for image upload validation: if you want to ensure that users of your car-sales website can only upload pictures of cars, not nonsense pictures, then this code could be useful.
Why Use Emgu.CV for Computer Vision?
Emgu.CV simplifies the use of OpenCV in C# projects, providing an intuitive interface while keeping the full functionality of OpenCV. For tasks like object detection, it is an ideal choice due to its performance and flexibility.
Prerequisites
Before diving into the code, make sure you have the following set up:
- Visual Studio (or another preferred C# development environment)
- Emgu.CV library installed via NuGet: search for Emgu.CV and Emgu.CV.runtime.windows in the NuGet Package Manager and install them.
Setting Up Your Project
We’ll write a simple application to detect cars in an image. The code uses a pre-trained Haar cascade classifier, which is a popular method for object detection.
The Code
Here’s a complete example demonstrating how to load an image from a byte array and run car detection using Emgu.CV:
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using System;
using System.Drawing;
using System.IO;
class Program
{
static void Main(string[] args)
{
// Load the image into a byte array (this could come from a database or API)
byte[] imageBytes = File.ReadAllBytes("path_to_your_image.jpg");
// Create a Mat object to hold the decoded image
Mat mat = new Mat();
// Decode the image from the byte array into the Mat object
CvInvoke.Imdecode(imageBytes, ImreadModes.Color, mat);
// Convert the Mat to an Image<Bgr, byte> for further processing
Image<Bgr, byte> image = mat.ToImage<Bgr, byte>();
// Load the Haar cascade for car detection
string cascadeFilePath = "path_to_haarcascade_car.xml"; // Download a Haar cascade for cars
CascadeClassifier carClassifier = new CascadeClassifier(cascadeFilePath);
// Convert to grayscale for better detection performance
using (var grayImage = image.Convert<Gray, byte>())
{
// Detect cars in the image
Rectangle[] cars = carClassifier.DetectMultiScale(
grayImage,
scaleFactor: 1.1,
minNeighbors: 5,
minSize: new Size(30, 30));
// Draw rectangles around detected cars
foreach (var car in cars)
{
image.Draw(car, new Bgr(Color.Red), 2);
}
// Save or display the image with the detected cars
image.Save("output_image_with_cars.jpg");
Console.WriteLine($"Detected {cars.Length} car(s) in the image.");
}
}
}
Breaking Down the Code
- Loading the Image as a Byte Array (byte[] imageBytes = File.ReadAllBytes("path_to_your_image.jpg")): Instead of loading an image from a file directly, we load it into a byte array. This approach is beneficial if your image data is not file-based but comes from a more dynamic source, such as a database.
- Decoding the Image (CvInvoke.Imdecode(imageBytes, ImreadModes.Color, mat)): We use CvInvoke.Imdecode to convert the byte array into a Mat object, which is OpenCV's matrix representation of images.
- Converting Mat to Image<Bgr, byte> (mat.ToImage<Bgr, byte>()): The Mat is converted to Image<Bgr, byte> to make it easier to work with Emgu.CV functions.
- Car Detection Using Haar Cascades (carClassifier.DetectMultiScale(grayImage, 1.1, 5, new Size(30, 30))): The Haar cascade method is used for object detection. You'll need to download a Haar cascade XML file for cars and provide the path.
- Drawing Detected Cars (image.Draw(car, new Bgr(Color.Red), 2)): Rectangles are drawn around detected cars, and the image is saved or displayed.
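To turn this into the simple yes/no upload check described at the start (car, or duck?), the detection can be wrapped in a small helper, a sketch along these lines:
// Returns true if the Haar cascade finds at least one car-like region
static bool IsItACar(byte[] imageBytes, string cascadeFilePath)
{
    using var mat = new Mat();
    CvInvoke.Imdecode(imageBytes, ImreadModes.Color, mat);
    using var image = mat.ToImage<Bgr, byte>();
    using var gray = image.Convert<Gray, byte>();
    using var classifier = new CascadeClassifier(cascadeFilePath);
    var cars = classifier.DetectMultiScale(gray, 1.1, 5, new Size(30, 30));
    return cars.Length > 0;
}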
Downloading Haar Cascade for Cars
To detect cars, you need a pre-trained Haar cascade file. You can find these files on the OpenCV GitHub repository or by searching online for “haarcascade for car detection.”
Conclusion
This example demonstrates a simple yet powerful way to use Emgu.CV for car detection in C#. While Haar cascades are efficient, modern machine learning methods like YOLO or SSD are more accurate for complex tasks. However, for basic object detection, this approach is easy to implement and performs well for simpler use cases.
Feel free to experiment with different parameters to improve detection accuracy or try integrating more advanced models for more complex scenarios. Happy coding!
#AWS #S3 Error – The request signature we calculated does not match the signature you provided. Check your key and signing method

If you’re working with the AWS SDK for .NET and encounter an error when uploading files to an Amazon S3 bucket, you’re not alone. A recent upgrade in the SDK may introduce unexpected behavior, leading to a “signature mismatch” error for uploads that previously worked smoothly. This blog post describes the problem, analyzes common solutions, and explains how AWS S3 pathing conventions have changed over time—impacting how we specify folders within S3 buckets.
The Problem: “The request signature we calculated does not match the signature you provided.”
When uploading a file to an Amazon S3 bucket using a .NET application, you may encounter this error:
“The request signature we calculated does not match the signature you provided. Check your key and signing method.”
The symptoms of this error can be puzzling. For example, a standard upload to the root of the bucket may succeed, but attempting to upload to a specific folder within the bucket could trigger the error. This was the case in a recent project, where an upload to the bucket carimagerydata succeeded, while uploads to carimagerydata/tx returned the signature mismatch error. The access key, secret key, and permissions were all configured correctly, but specifying the folder path still caused a failure.
Possible Solutions
When you encounter this issue, there are several things to investigate:
1. Bucket Region Configuration
Ensure that the AWS SDK is configured with the correct region for the S3 bucket. The SDK signs requests based on the region setting, and a mismatch between the region used in the code and the actual bucket region often results in signature errors.
AmazonS3Config config = new AmazonS3Config
{
RegionEndpoint = RegionEndpoint.YourBucketRegion // Ensure it's correct
};
2. Signature Version Settings
The AWS SDK uses Signature Version 4 by default, which is compatible with most regions and recommended by AWS. However, certain legacy setups or bucket configurations may expect Signature Version 2. Explicitly setting Signature Version 4 in the configuration can sometimes resolve these errors.
AmazonS3Config config = new AmazonS3Config
{
SignatureVersion = "4", // Explicitly specify Signature Version 4
RegionEndpoint = RegionEndpoint.YourBucketRegion
};
3. Permissions and Bucket Policies
Check if there are any bucket policies or IAM restrictions specific to the folder path you’re trying to upload to. If your bucket policy restricts access to certain paths, you’ll need to adjust it to allow uploads to the folder.
4. Path Style vs. Virtual-Hosted Style URL
Another possible issue arises from changes in how paths are handled. The AWS SDK has evolved over time, and the method of specifying paths within buckets has also changed. The SDK now defaults to virtual-hosted style URLs, where the bucket name is part of the domain (e.g., bucket-name.s3.amazonaws.com). Older setups, however, may expect path-style URLs, where the bucket name is part of the path (e.g., s3.amazonaws.com/bucket-name/key). Specifying path-style addressing in the configuration can sometimes fix compatibility issues:
AmazonS3Config config = new AmazonS3Config
{
ForcePathStyle = true, // in the .NET SDK, the property is named ForcePathStyle
RegionEndpoint = RegionEndpoint.YourBucketRegion
};
Understanding the Key Change: Folder Path Format in S3
The reason these issues are so confusing is that AWS has changed the way folders (often called prefixes) are specified. Historically, users specified a bucket name combined with a folder path and then provided the object’s name. Now, however, the SDK expects a more unified format:
- Old Format: bucket + path, object
- New Format: bucket, path + object
This means that in the new format, the folder path (e.g., /tx/) should be included as part of the object key rather than being treated as a separate parameter.
Solution: Specifying the Folder in the Object Key
To upload to a folder within a bucket, you should include the full path in the key itself. For example, if you want to upload yourfile.txt to the tx folder within carimagerydata, the key should be specified as "tx/yourfile.txt".
Here’s how to do it in C#:
string bucketName = "carimagerydata";
string keyName = "tx/yourfile.txt"; // Specify the folder in the key
string filePath = @"C:\path\to\your\file.txt";
AmazonS3Client client = new AmazonS3Client(accessKey, secretKey, RegionEndpoint.YourBucketRegion);
PutObjectRequest request = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName, // Full path including folder
FilePath = filePath,
ContentType = "text/plain" // Example for text files, adjust as needed
};
PutObjectResponse response = await client.PutObjectAsync(request);
Conclusion
This error is a prime example of how changes in SDK conventions can impact legacy applications. The update to a more unified key format for specifying folder paths in S3 may seem minor, but it can cause unexpected issues if you’re unaware of it. By specifying the folder as part of the object key, you can avoid signature mismatch errors and ensure that your application is compatible with the latest AWS SDK practices.
Always remember to check SDK release notes for updates in configuration defaults, particularly when working with cloud services, as conventions and standards may change over time. This small adjustment can save a lot of time when troubleshooting!
Understanding TLS fingerprinting.

TLS fingerprinting is a way for bot-detection software to help tell the difference between a browser and a bot. It works transparently and quickly, but it is not infallible. It relies on the fact that when a secure HTTPS connection is made between client and server, there is an exchange of supported ciphers. Based on the ciphers offered, this can be compared against the "claimed" user agent, to see whether these are the ciphers that user agent (browser) would actually support.
It's easy for a bot to claim to be Chrome: just set the user agent to match a modern version of Chrome. It's much harder to support all the ciphers Chrome supports. So, if the HTTP request says it's Chrome but doesn't offer all of Chrome's ciphers, then it probably isn't Chrome, and it's a bot.
There is a really handy tool here: https://tls.peet.ws/api/all, which lists the ciphers used in the connection. If you use a browser like Chrome, you'll see this list of ciphers:
"ciphers": [
"TLS_GREASE (0xEAEA)",
"TLS_AES_128_GCM_SHA256",
"TLS_AES_256_GCM_SHA384",
"TLS_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
"TLS_RSA_WITH_AES_128_GCM_SHA256",
"TLS_RSA_WITH_AES_256_GCM_SHA384",
"TLS_RSA_WITH_AES_128_CBC_SHA",
"TLS_RSA_WITH_AES_256_CBC_SHA"
]
Whereas if you visit it using Firefox, you'll see this:
"ciphers": [
"TLS_AES_128_GCM_SHA256",
"TLS_CHACHA20_POLY1305_SHA256",
"TLS_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
"TLS_RSA_WITH_AES_128_GCM_SHA256",
"TLS_RSA_WITH_AES_256_GCM_SHA384",
"TLS_RSA_WITH_AES_128_CBC_SHA",
"TLS_RSA_WITH_AES_256_CBC_SHA"
],
Use cURL, or WebClient in C#, and you'll see this:
"ciphers": [
"TLS_AES_256_GCM_SHA384",
"TLS_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
"TLS_RSA_WITH_AES_256_GCM_SHA384",
"TLS_RSA_WITH_AES_128_GCM_SHA256",
"TLS_RSA_WITH_AES_256_CBC_SHA256",
"TLS_RSA_WITH_AES_128_CBC_SHA256",
"TLS_RSA_WITH_AES_256_CBC_SHA",
"TLS_RSA_WITH_AES_128_CBC_SHA"
],
So, even with a cursory glance, you could check whether TLS_GREASE or TLS_CHACHA20_POLY1305_SHA256 are present, and declare the user a bot if these ciphers are missing. More advanced code could check the version of Chrome, the operating system, and so forth, but that is the essence of the technique.
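As a rough illustration of that check, here's a hypothetical C# helper. The observedCiphers list is assumed to come from whatever terminates TLS in front of your application (or a fingerprinting service), since ordinary application code never sees the ClientHello:
using System;
using System.Collections.Generic;
using System.Linq;

static class BotCheck
{
    // Flags clients that claim to be Chrome but lack ciphers Chrome always offers
    public static bool LooksLikeRealChrome(string claimedUserAgent, IReadOnlyCollection<string> observedCiphers)
    {
        if (!claimedUserAgent.Contains("Chrome"))
            return true; // only validating clients that claim to be Chrome
        return observedCiphers.Any(c => c.StartsWith("TLS_GREASE"))
            && observedCiphers.Contains("TLS_CHACHA20_POLY1305_SHA256");
    }
}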
However, using the TLS-Client library in Python allows more ciphers to be exchanged, and the resulting TLS fingerprint looks much more similar to Chrome's (if not indistinguishable from it).
https://github.com/infiniteloopltd/TLS
import tls_client

session = tls_client.Session(
client_identifier="chrome_120",
random_tls_extension_order=True
)
page_url = "https://tls.peet.ws/api/all"
res = session.get(
page_url
)
print(res.text)
I am now curious to know if I can apply the same logic to C# …