#AWS #S3 Error – The request signature we calculated does not match the signature you provided. Check your key and signing method

If you’re working with the AWS SDK for .NET and encounter an error when uploading files to an Amazon S3 bucket, you’re not alone. A recent upgrade in the SDK may introduce unexpected behavior, leading to a “signature mismatch” error for uploads that previously worked smoothly. This blog post describes the problem, analyzes common solutions, and explains how AWS S3 pathing conventions have changed over time—impacting how we specify folders within S3 buckets.

The Problem: “The request signature we calculated does not match the signature you provided.”

When uploading a file to an Amazon S3 bucket using a .NET application, you may encounter this error:

“The request signature we calculated does not match the signature you provided. Check your key and signing method.”

The symptoms of this error can be puzzling. For example, a standard upload to the root of the bucket may succeed, but attempting to upload to a specific folder within the bucket could trigger the error. This was the case in a recent project, where an upload to the bucket carimagerydata succeeded, while uploads to carimagerydata/tx returned the signature mismatch error. The access key, secret key, and permissions were all configured correctly, but specifying the folder path still caused a failure.

Possible Solutions

When you encounter this issue, there are several things to investigate:

1. Bucket Region Configuration

Ensure that the AWS SDK is configured with the correct region for the S3 bucket. The SDK signs requests based on the region setting, and a mismatch between the region used in the code and the actual bucket region often results in signature errors.

AmazonS3Config config = new AmazonS3Config
{
    RegionEndpoint = RegionEndpoint.YourBucketRegion // Ensure it's correct
};

2. Signature Version Settings

The AWS SDK uses Signature Version 4 by default, which is compatible with most regions and recommended by AWS. However, certain legacy setups or bucket configurations may expect Signature Version 2. Explicitly setting Signature Version 4 in the configuration can sometimes resolve these errors.

AmazonS3Config config = new AmazonS3Config
{
    SignatureVersion = "4", // Explicitly specify Signature Version 4
    RegionEndpoint = RegionEndpoint.YourBucketRegion
};

3. Permissions and Bucket Policies

Check if there are any bucket policies or IAM restrictions specific to the folder path you’re trying to upload to. If your bucket policy restricts access to certain paths, you’ll need to adjust it to allow uploads to the folder.
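
If you'd rather inspect the policy from code than from the console, here is a quick sketch using the same SDK (the region and bucket name below are placeholders to adjust to your setup; the call throws if no policy is attached):

using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class DumpBucketPolicy
{
    static async Task Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.EUWest1); // use your bucket's actual region

        try
        {
            // Dump the bucket policy so you can check for statements restricting the "tx/" prefix
            GetBucketPolicyResponse response = await client.GetBucketPolicyAsync(new GetBucketPolicyRequest
            {
                BucketName = "carimagerydata"
            });
            Console.WriteLine(response.Policy);
        }
        catch (AmazonS3Exception ex)
        {
            Console.WriteLine($"Could not read bucket policy: {ex.ErrorCode}");
        }
    }
}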

4. Path Style vs. Virtual-Hosted Style URL

Another possible issue arises from changes in how paths are handled. The AWS SDK has evolved over time, and the method of specifying paths within buckets has also changed. The SDK now defaults to virtual-hosted style URLs, where the bucket name is part of the domain (e.g., bucket-name.s3.amazonaws.com). Older setups, however, may expect path-style URLs, where the bucket name is part of the path (e.g., s3.amazonaws.com/bucket-name/key). Specifying path-style addressing in the configuration can sometimes fix compatibility issues:

AmazonS3Config config = new AmazonS3Config
{
    ForcePathStyle = true, // the AWS SDK for .NET property for path-style addressing
    RegionEndpoint = RegionEndpoint.YourBucketRegion
};

Understanding the Key Change: Folder Path Format in S3

The reason these issues are so confusing is that AWS has changed the way folders (often called prefixes) are specified. Historically, users specified a bucket name combined with a folder path and then provided the object’s name. Now, however, the SDK expects a more unified format:

  • Old Format: bucket + path, object
  • New Format: bucket, path + object

This means that in the new format, the folder path (e.g., /tx/) should be included as part of the object key rather than being treated as a separate parameter.

Solution: Specifying the Folder in the Object Key

To upload to a folder within a bucket, you should include the full path in the key itself. For example, if you want to upload yourfile.txt to the tx folder within carimagerydata, the key should be specified as "tx/yourfile.txt".

Here’s how to do it in C#:

string bucketName = "carimagerydata";
string keyName = "tx/yourfile.txt"; // Specify the folder in the key
string filePath = @"C:\path\to\your\file.txt";

AmazonS3Client client = new AmazonS3Client(accessKey, secretKey, RegionEndpoint.YourBucketRegion);

PutObjectRequest request = new PutObjectRequest
{
    BucketName = bucketName,
    Key = keyName, // Full path including folder
    FilePath = filePath,
    ContentType = "text/plain" // Example for text files, adjust as needed
};

PutObjectResponse response = await client.PutObjectAsync(request);

Conclusion

This error is a prime example of how changes in SDK conventions can impact legacy applications. The update to a more unified key format for specifying folder paths in S3 may seem minor, but it can cause unexpected issues if you’re unaware of it. By specifying the folder as part of the object key, you can avoid signature mismatch errors and ensure that your application is compatible with the latest AWS SDK practices.

Always remember to check SDK release notes for updates in configuration defaults, particularly when working with cloud services, as conventions and standards may change over time. This small adjustment can save a lot of time when troubleshooting!

Categories: Uncategorized

Understanding TLS fingerprinting.

TLS fingerprinting is a way for bot-detection software to help tell the difference between a browser and a bot. It works transparently and fast, but it is not infallible. It relies on the fact that when a secure HTTPS connection is made between client and server, there is an exchange of supported ciphers. Based on the ciphers offered, this can be compared against the “claimed” user agent, to see whether these are the ciphers that user agent (browser) would be expected to support.

It’s easy for a bot to claim to be Chrome: just set the user agent to match a modern version of Chrome. It’s much harder to support all the ciphers Chrome supports, so if the HTTP request says it’s Chrome but doesn’t offer all of Chrome’s ciphers, then it probably isn’t Chrome; it’s a bot.

There is a really handy tool here: https://tls.peet.ws/api/all – which lists the ciphers offered in the connection. If you use a browser like Chrome, you’ll see this list of ciphers:

 "ciphers": [
      "TLS_GREASE (0xEAEA)",
      "TLS_AES_128_GCM_SHA256",
      "TLS_AES_256_GCM_SHA384",
      "TLS_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_RSA_WITH_AES_128_CBC_SHA",
      "TLS_RSA_WITH_AES_256_CBC_SHA"
    ]

Whereas if you visit it using Firefox, you’ll see this:

 "ciphers": [
      "TLS_AES_128_GCM_SHA256",
      "TLS_CHACHA20_POLY1305_SHA256",
      "TLS_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_RSA_WITH_AES_128_CBC_SHA",
      "TLS_RSA_WITH_AES_256_CBC_SHA"
    ],

Use curl or WebClient in C#, and you’ll see this:

 "ciphers": [
      "TLS_AES_256_GCM_SHA384",
      "TLS_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
      "TLS_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_RSA_WITH_AES_256_CBC_SHA256",
      "TLS_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_RSA_WITH_AES_256_CBC_SHA",
      "TLS_RSA_WITH_AES_128_CBC_SHA"
    ],

So, even with a cursory glance, you could check whether TLS_GREASE or TLS_CHACHA20_POLY1305_SHA256 are present, and declare the user a bot if these ciphers are missing. More advanced checks could factor in the version of Chrome, the operating system, and so forth, but that’s the basic technique.
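
As a rough illustration of that check in C# (not production bot detection – it assumes you already have the list of cipher names offered in the ClientHello, for example from a TLS-terminating proxy or a packet capture):

using System;
using System.Collections.Generic;
using System.Linq;

static class TlsFingerprintCheck
{
    // Ciphers the Chrome list above always contains; GREASE shows up with random codepoints,
    // so we match on the prefix rather than the full "TLS_GREASE (0xEAEA)" string.
    static readonly string[] ExpectedChromeCiphers =
    {
        "TLS_GREASE",
        "TLS_CHACHA20_POLY1305_SHA256"
    };

    public static bool LooksLikeBot(string userAgent, IEnumerable<string> offeredCiphers)
    {
        // Only applying the Chrome heuristic here
        if (userAgent.IndexOf("Chrome", StringComparison.OrdinalIgnoreCase) < 0)
            return false;

        // If any cipher Chrome always offers is missing, the "Chrome" claim is suspect
        return ExpectedChromeCiphers.Any(expected =>
            !offeredCiphers.Any(c => c.StartsWith(expected, StringComparison.OrdinalIgnoreCase)));
    }

    public static void Main()
    {
        // A few ciphers from the curl / WebClient list above, which lacks both markers
        var curlCiphers = new[] { "TLS_AES_256_GCM_SHA384", "TLS_AES_128_GCM_SHA256", "TLS_RSA_WITH_AES_256_CBC_SHA" };
        Console.WriteLine(LooksLikeBot("Mozilla/5.0 ... Chrome/120.0 Safari/537.36", curlCiphers)); // True
    }
}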

However, using the tls-client library in Python allows more ciphers to be offered, and the resulting TLS fingerprint looks much more similar to (if not indistinguishable from) Chrome’s.

https://github.com/infiniteloopltd/TLS

import tls_client

session = tls_client.Session(
    client_identifier="chrome_120",
    random_tls_extension_order=True
)
page_url = "https://tls.peet.ws/api/all"
res = session.get(page_url)
print(res.text)

I am now curious to know if I can apply the same logic to C# …

Chinese Vehicle License Plate #API now available

With a database of 16 million Chinese number plates and associated vehicle details and owners, we’ve launched an API that allows a lookup on this data. This allows you to determine the make, model and owner of a Chinese vehicle from its license plate, assuming it’s found in our database (which admittedly covers only 4% of the vehicles in China).

The website is available here: https://www.chepaiapi.cn/ – and we also have offline data available for analysis and other purposes.

Car registration plates in China are looked up via the /CheckChina endpoint, which returns the following information:

  • Make / Model
  • Age
  • Engine Size
  • VIN
  • Owner details
  • Representative image

Only 4% of all vehicles in China are covered by our database, so most searches will not return data; there is no charge for a failed search.

Sample Registration Number: 

浙GCJ300

Sample Json:

{
  "Description": "haima Family",
  "RegistrationYear": "2004",
  "CarMake": {
    "CurrentTextValue": "haima"
  },
  "CarModel": {
    "CurrentTextValue": "Family"
  },
  "Variant": "GL New Yue Class",
  "EngineSize": {
    "CurrentTextValue": "1.6L"
  },
  "MakeDescription": {
    "CurrentTextValue": "haima"
  },
  "ModelDescription": {
    "CurrentTextValue": "Family"
  },
  "NumberOfSeats": {
    "CurrentTextValue": 5
  },
  "NumberOfDoors": {
    "CurrentTextValue": 4
  },
  "BodyStyle": "saloon",
  "VIN": "LH17CKJF04H035018",
  "EngineNumber": "ZM",
  "FuelType": {
    "CurrentTextValue": "gasoline"
  },
  "Owner": {
    "Name": "王坚强",
    "Id": "330725197611214838",
    "Address": "义乌市稠城街道殿山村",
    "Tel": "13868977994"
  },
  "Gonggao": "HMC7161",
  "Location": "浙江省金华市",
  "ImageUrl": "http://chepaiapi.cn/image.aspx/@aGFpbWEgRmFtaWx5"
}
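
If you’re consuming this from C#, the sample above maps onto a few small classes. A minimal sketch (the class names are my own; only the JSON property names come from the sample response):

using System;
using System.IO;
using System.Text.Json;

public class TextValue { public string CurrentTextValue { get; set; } }

public class PlateOwner
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string Tel { get; set; }
}

public class PlateResult
{
    public string Description { get; set; }
    public string RegistrationYear { get; set; }
    public TextValue CarMake { get; set; }
    public TextValue CarModel { get; set; }
    public string VIN { get; set; }
    public PlateOwner Owner { get; set; }
}

class Demo
{
    static void Main()
    {
        // Assumes the JSON body returned by /CheckChina has been saved to sample.json
        string json = File.ReadAllText("sample.json");
        PlateResult result = JsonSerializer.Deserialize<PlateResult>(json);
        Console.WriteLine($"{result.Description} ({result.RegistrationYear}), VIN {result.VIN}, owner {result.Owner.Name}");
    }
}
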
Categories: Uncategorized

Tiny C# code to test webhooks with NGrok

TL;DR: Here is the GitHub repo: https://github.com/infiniteloopltd/MiniServer/

Testing webhooks can sometimes be awkward: you have to deploy your webhook-handling code to a server, and once it’s there, you can’t debug it. If you run it on localhost instead, the remote service can’t reach it.

This is where NGrok comes in handy: you can use it to temporarily expose a local website – including your webhook handler – and then use the URL from NGrok as the webhook endpoint. Once it’s working locally, you can deploy it to the server for more testing.

Here I wrote a tiny web server that just prints to screen whatever is POSTed or GETed to it:

// Requires: using System; using System.IO; using System.Net; using System.Text;
static void Main(string[] args)
        {
            HttpListener listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/");
            listener.Start();
            Console.WriteLine("Listening for requests on http://localhost:8080/ ...");

            while (true)
            {
                HttpListenerContext context = listener.GetContext();
                HttpListenerRequest request = context.Request;

                // Log the request method (GET/POST)
                Console.WriteLine("Received {0} request for: {1}", request.HttpMethod, request.Url);

                // Log the headers
                Console.WriteLine("Headers:");
                foreach (string key in request.Headers.AllKeys)
                {
                    Console.WriteLine("{0}: {1}", key, request.Headers[key]);
                }

                // Log the body if it's a POST request
                if (request.HttpMethod == "POST")
                {
                    using (var reader = new StreamReader(request.InputStream, request.ContentEncoding))
                    {
                        string body = reader.ReadToEnd();
                        Console.WriteLine("Body:");
                        Console.WriteLine(body);
                    }
                }

                // Respond with a simple HTML page
                HttpListenerResponse response = context.Response;
                string responseString = "<html><body><h1>Request Received</h1></body></html>";
                byte[] buffer = Encoding.UTF8.GetBytes(responseString);
                response.ContentLength64 = buffer.Length;
                Stream output = response.OutputStream;
                output.Write(buffer, 0, buffer.Length);
                output.Close();
            }
        }

Then the server is exposed via NGrok with

ngrok http 8080 --host-header="localhost:8080" 
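
With the tunnel up, you can fire a test request at the public URL from anywhere and watch it appear in the console. A quick sketch (the ngrok hostname below is a placeholder for whatever forwarding address ngrok prints):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class WebhookTester
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder URL - substitute the forwarding address printed by ngrok
        var url = "https://your-subdomain.ngrok-free.app/";
        var payload = new StringContent("{\"event\":\"test\"}", Encoding.UTF8, "application/json");

        var response = await client.PostAsync(url, payload);
        Console.WriteLine($"Status: {(int)response.StatusCode}");
    }
}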

I’ve tried similar code with CloudFlare tunnels, but it wasn’t as easy to set up as NGrok.

Categories: Uncategorized

Warning: Unverified sender in emails from #AWS #SES to #Microsoft365

If you see this text:

Warning: Unverified sender : This message may not be sent by the sender that’s displayed. If you aren’t certain this message is safe, please be cautious when interacting with this email, and avoid clicking on any links or downloading attachments that are on it

appearing on emails sent via AWS SES to Microsoft 365 accounts, then there is a simple fix you can apply – which is to set up DKIM on the domain.

I found that manually supplying the DKIM public/private key was easiest.

So, I created a keypair as follows:

openssl genrsa -out dkim_private.pem 2048
openssl rsa -in dkim_private.pem -pubout -out dkim_public.pem

Then I created a TXT DNS record on the domain, named default._domainkey.<mydomain>.com

The record should look like this (using the base64 body of the public key, with the spaces removed):

v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2uRIflp/tyXzZDgJ6WxEXoqh5jw==
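
A tiny sketch of that conversion in C#, assuming the dkim_public.pem produced by the openssl command above:

using System;
using System.IO;
using System.Linq;

class DkimRecordBuilder
{
    static void Main()
    {
        // Drop the BEGIN/END PUBLIC KEY markers and join the base64 lines into one string
        var base64Lines = File.ReadAllLines("dkim_public.pem")
            .Where(line => !line.StartsWith("-----"));

        string p = string.Concat(base64Lines).Replace(" ", "");
        Console.WriteLine($"v=DKIM1; k=rsa; p={p}");
    }
}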

Then, going into AWS SES, I added a new identity, selected Domain, pressed “Advanced DKIM”, chose “Provide DKIM authentication token (BYODKIM)”, and pasted in the private key, with the spaces removed.

AWS then goes and checks your DNS, and hopefully within a few minutes you should get an email starting “DKIM setup SUCCESS”

Once that is in place, any email you send via SES on that domain will be DKIM-signed and trusted by Microsoft 365, hopefully getting your delivery rate up!

https://www.smtpjs.com/

Categories: Uncategorized

Handling a System.Net.ProtocolViolationException on a webserver in C#

So, it’s been a while since I actually wrote a post about network programming in C#, which is what this blog is named after. But this is an interesting case of a crashing server, and how to fix it.

So, I have a self-written HTTP server, proxied to the world via IIS, but every so often it would crash, and since the service was set to auto-restart, about a minute later it would come back up again. I couldn’t see a pattern as to why. I was monitoring the service using UptimeRobot, and it was reporting the service as always down. Now, “always” is a bit suspect: the service went down often, but not always.

So, I noticed that UptimeRobot makes HTTP HEAD requests rather than HTTP GET requests, and of course, I had always tested my server with HTTP GET. And guess what: an HTTP HEAD request would crash the service, and Windows would restart it again a minute later.

To test for HTTP HEAD requests, I used the curl -I flag, which issues an HTTP HEAD request, and it crashed the server, with this in the event log:


Exception Info: System.Net.ProtocolViolationException
at System.Net.HttpResponseStream.BeginWrite(Byte[], Int32, Int32, System.AsyncCallback, System.Object)
at System.IO.Stream+<>c.b__53_0(System.IO.Stream, ReadWriteParameters, System.AsyncCallback, System.Object)

So, I created my own minimal server in a console app, so I could test it, without the complexity of the full server.

using System;
using System.Net;
using System.Text;
using System.Threading.Tasks;

namespace ServerTest
{
    class HttpServer
    {
        public static readonly HttpListener Listener = new HttpListener();
  
        public static string PageData = $"service running";

        public static async Task HandleIncomingConnections()
        {   
            while (true)
            {
                try
                {
                    // Will wait here until we hear from a connection
                    var ctx = await Listener.GetContextAsync();
                    _ = Task.Run(() => HandleConnection(ctx));
                }
                catch (Exception ex)
                {
                    // Handle or log the exception as needed
                    Console.WriteLine($"Error handling connection: {ex.Message}");
                    // Optionally, you could continue to listen for new connections
                    // Or you could break out of the loop depending on your error handling strategy
                }
            }
        }

        private static async void HandleConnection(HttpListenerContext ctx)
        {
            // Peel out the requests and response objects
            var req = ctx.Request;
            var resp = ctx.Response;
            var response = PageData;
            // Write the response info
            var data = Encoding.UTF8.GetBytes(response);
            resp.ContentType = "text/html";
            resp.ContentEncoding = Encoding.UTF8;
            resp.ContentLength64 = data.LongLength;
            if (req.HttpMethod == "GET")
            {
                // Write out to the response stream (asynchronously), then close it
                await resp.OutputStream.WriteAsync(data, 0, data.Length);
            }

            resp.Close();
        }

        public static async Task Main()
        {
            // Entry point (assumed, to make the sample self-contained): start the listener and handle connections
            Listener.Prefixes.Add("http://localhost:8080/");
            Listener.Start();
            Console.WriteLine("Listening on http://localhost:8080/ ...");
            await HandleIncomingConnections();
        }
    }
}

The if (req.HttpMethod == "GET") check was the fix. What was happening was that the HEAD request was not expecting a response body, but I was providing one; so now I only write out the response when the HTTP verb is GET.

And this worked – after months, if not years, of an intermittent bug that was so hard to catch.

Categories: Uncategorized

Car License Plate #API support for #Lithuania

Introducing support for Lithuania via our Car License Plate API

Are you looking to seamlessly integrate detailed vehicle information into your applications? Welcome to Numerio Zenklai API, Lithuania’s latest and most advanced car license plate API. This service is designed to provide comprehensive vehicle details, enhancing your ability to offer top-tier services in the automotive, insurance, and related industries.

Why Choose Numerio Zenklai API?

Numerio Zenklai API offers a robust solution for retrieving detailed information about vehicles registered in Lithuania. With a simple request to the /CheckLithuania endpoint, you can obtain critical data, including the vehicle’s make, model, age, engine size, VIN, insurer, and a representative image.

Key Features

1. Comprehensive Vehicle Data:
Access a rich set of details about any Lithuanian-registered vehicle. For example, a query on the registration number “NAO075” returns the following data:

  • Make and Model: Volkswagen Crafter
  • Registration Year: 2006
  • Engine Size: 2461 cc
  • VIN: WV1ZZZ2EZE6017394
  • Fuel Type: Diesel
  • Insurance Company: ERGO INSURANCE SE LIETUVOS FILIALAS
  • Vehicle Type: Lorry
  • Body Type: Bus
  • Representative Image: Image URL

2. Simple and Fast Integration:
Our API is designed for quick integration, ensuring you can start leveraging vehicle data with minimal setup. Here’s a sample JSON response for an easy understanding of the data format:

{
  "Description": "VOLKSWAGEN CRAFTER",
  "RegistrationYear": "2006",
  "CarMake": {
    "CurrentTextValue": "VOLKSWAGEN"
  },
  "CarModel": {
    "CurrentTextValue": "CRAFTER"
  },
  "MakeDescription": {
    "CurrentTextValue": "VOLKSWAGEN"
  },
  "ModelDescription": {
    "CurrentTextValue": "CRAFTER"
  },
  "EngineSize": {
    "CurrentTextValue": "2461"
  },
  "VIN": "WV1ZZZ2EZE6017394",
  "FuelType": "Diesel",
  "InsuranceCompany": "ERGO INSURANCE SE LIETUVOS FILIALAS",
  "InsuranceCompanyNumber": "ACB 1798038:8192689",
  "VehicleType": "LORRY",
  "Body": "Bus",
  "ImageUrl": "http://www.numeriozenklaiapi.lt/image.aspx/@Vk9MS1NXQUdFTiBDUkFGVEVS"
}

3. Reliable and Up-to-Date Information:
Our API ensures that you always receive the most current and accurate data directly from official sources, making it a reliable tool for various applications.

Use Cases

  • Automotive Industry: Quickly verify vehicle details for sales, maintenance, and servicing.
  • Insurance Companies: Validate vehicle information for underwriting and claims processing.
  • Fleet Management: Monitor and manage a fleet of vehicles efficiently with detailed data.
  • Law Enforcement: Access critical vehicle information swiftly for enforcement and regulatory purposes.

Getting Started

To begin using Numerio Zenklai API, visit our website and check out our comprehensive documentation. Our user-friendly interface and extensive support resources make it easy for developers to integrate the API into their existing systems.

Conclusion

Numerio Zenklai API is your go-to solution for accessing detailed vehicle information in Lithuania. Whether you’re in the automotive industry, insurance sector, or any field that requires precise vehicle data, our API provides the tools you need to enhance your services and streamline your operations.

Experience the power of reliable vehicle information with Numerio Zenklai API today. Visit Numerio Zenklai API to learn more and start integrating now!

Find #Azure Functions still using the in-process model in #Azure

If you got the following email from Microsoft Azure:

Migrate your .NET apps in Azure Functions to the isolated worker model by 10 November 2026
You’re receiving this email because you use the in-process model with .NET applications in Azure Functions.

Beginning 10 November 2026, the in-process model for .NET apps in Azure Functions will no longer be supported. To ensure that your apps that use this model continue being supported, you’ll need to transition to the isolated worker model by that date.

You may still use your .NET apps with the in-process model beyond 10 November 2026, but they will no longer receive security and feature updates from Microsoft.

The isolated worker model offers all the same functionality as the in-process model, plus improvements such as:

  • Full control of the dependency chain.
  • Isolation from platform lifecycle activities.
  • The ability to target Standard Term Support (STS) versions of .NET.

Required action
To ensure your .NET apps in Azure Functions continue to receive support, migrate to the isolated worker model by 10 November 2026.

Here is a script in both PowerShell and DOS (batch) to check which functions are using the in-process model, using the Azure CLI for Windows.

This script logs into Azure, sets the subscription, lists the function apps, and checks the settings to identify which apps are using the in-process model for .NET:

# Log in to Azure
az login

# Set your subscription (replace with your subscription ID)
$subscriptionId = "<your-subscription-id>"
az account set --subscription $subscriptionId

# List all function apps in the subscription
$functionApps = az functionapp list --query "[].{name:name, resourceGroup:resourceGroup}" --output tsv

Write-Output "Function Apps using in-process model for .NET:"
Write-Output "--------------------------------------------"

# Loop through each function app
$functionApps | ForEach-Object {
    $name, $resourceGroup = $_ -split "`t"

    # Get the app settings for the function app
    $appSettings = az functionapp config appsettings list --name $name --resource-group $resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='FUNCTIONS_EXTENSION_VERSION']" | ConvertFrom-Json

    # Check if the function app is using the in-process model
    $isInProcess = $appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' -and $_.value -eq 'dotnet' }

    if ($isInProcess) {
        Write-Output "Function App Name: $name, Resource Group: $resourceGroup"
    }
}

To run this script:

  1. Open PowerShell on your Windows machine.
  2. Copy and paste the script into the PowerShell window (or save it as a .ps1 file and run it).
  3. Replace <your-subscription-id> with your actual Azure subscription ID.

This script will list all function apps that are using the in-process model for .NET, helping you identify which ones need to be migrated to the isolated worker model.

If you prefer a DOS batch script, it’s a bit more complex due to the limitations of batch scripting, but here’s an attempt to do something similar using basic batch commands and PowerShell for processing:

@echo off

REM Log in to Azure (this will open a browser for authentication)
call az login

REM Set your subscription (replace with your subscription ID)
set SUBSCRIPTION_ID=<your-subscription-id>
call az account set --subscription %SUBSCRIPTION_ID%

REM List all function apps in the subscription and save to a temporary file
call az functionapp list --query "[].{name:name, resourceGroup:resourceGroup}" --output tsv > functionapps.txt

echo Function Apps using in-process model for .NET:
echo --------------------------------------------

REM Loop through each function app
for /F "tokens=1,2" %%A in (functionapps.txt) do (
    set NAME=%%A
    set RESOURCE_GROUP=%%B

    REM Get the app settings for the function app and save to a temporary file
    az functionapp config appsettings list --name %NAME% --resource-group %RESOURCE_GROUP% --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='FUNCTIONS_EXTENSION_VERSION']" --output json > appsettings.json

    REM Use PowerShell to check if the function app is using the in-process model
    powershell -Command "
        $appSettings = Get-Content 'appsettings.json' | ConvertFrom-Json;
        $isInProcess = $appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' -and $_.value -eq 'dotnet' };
        if ($isInProcess) { Write-Output 'Function App Name: %NAME%, Resource Group: %RESOURCE_GROUP%' }
    "
)

REM Clean up temporary files
del functionapps.txt
del appsettings.json

To run this batch script:

  1. Open a text editor and copy the script into it.
  2. Save the file with a .bat extension, for example, checkFunctionApps.bat.
  3. Run the batch file from the command prompt.

This script uses az commands to log in, set the subscription, and list the function apps. It then checks each app’s settings using PowerShell to determine if it is using the in-process model for .NET.
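
Once you’ve identified the affected apps, the migration target is the isolated worker model, where your functions run in their own worker process with a Program.cs entry point. A minimal sketch of what that entry point looks like, assuming the Microsoft.Azure.Functions.Worker packages (real projects register their own services and middleware here):

using Microsoft.Extensions.Hosting;

// Minimal isolated-worker host; requires the Microsoft.Azure.Functions.Worker
// and Microsoft.Azure.Functions.Worker.Sdk NuGet packages
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .Build();

host.Run();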

Updating #AWS Lambda #Python version

If, like me, you have an old Lambda function running Python 3.8, you may have got this email from AWS today;


We are contacting you as we have identified that your AWS Account currently has one or more AWS Lambda functions using the Python 3.8 runtime.

We are ending support for Python 3.8 in Lambda on October 14, 2024. This follows Python 3.8 End-Of-Life (EOL) which is scheduled for October, 2024 [1].

As described in the Lambda runtime support policy [2], end of support for language runtimes in Lambda happens in several stages. Starting on October 14, 2024, Lambda will no longer apply security patches and other updates to the Python 3.8 runtime used by Lambda functions, and functions using Python 3.8 will no longer be eligible for technical support. Also, Python 3.8 will no longer be available in the AWS Console, although you can still create and update functions that use Python 3.8 via AWS CloudFormation, the AWS CLI, AWS SAM, or other tools. Starting February 28, 2025, you will no longer be able to create new Lambda functions using the Python 3.8 runtime. Starting March 31, 2025, you will no longer be able to update existing functions using the Python 3.8 runtime.

So, the first thing is to find which Lambda functions are affected, and for this I wrote a Windows batch script as follows:

@echo off

set regions=us-east-1 us-east-2 us-west-1 us-west-2 ap-south-1 ap-northeast-1 ap-northeast-2 ap-southeast-1 ap-southeast-2 ca-central-1 eu-central-1 eu-west-1 eu-west-2 eu-west-3 sa-east-1

for %%r in (%regions%) do (
    echo Listing Lambda functions in %%r region:
    aws lambda list-functions --region %%r --output text --query "Functions[?Runtime=='python3.8'].FunctionArn"
    echo -----------------------------------------
)

Then, once I had listed all the functions, it was a matter of going to each one in the AWS Lambda console and scrolling down on the “Code” tab to change the runtime (or doing it programmatically – see the sketch after these steps):

To change the runtime
  1. Open the Functions page of the Lambda console.
  2. Choose the function to update and choose the Code tab.
  3. Scroll down to the Runtime settings section, which is under the code editor.
  4. Choose Edit.
    1. For Runtime, select the runtime identifier.
    2. For Handler, specify file name and handler for your function.
    3. For Architecture, choose the instruction set architecture to use for your function.
  5. Choose Save.
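
If you have a lot of functions to update, the runtime can also be changed programmatically. A rough sketch with the AWS SDK for .NET (the function name, region and target runtime below are placeholders, and you’ll need an AWSSDK.Lambda version that knows about the newer runtime values):

using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.Lambda;
using Amazon.Lambda.Model;

class UpdateLambdaRuntime
{
    static async Task Main()
    {
        var client = new AmazonLambdaClient(RegionEndpoint.EUWest1); // placeholder region

        var response = await client.UpdateFunctionConfigurationAsync(new UpdateFunctionConfigurationRequest
        {
            FunctionName = "my-python-function",         // placeholder function name
            Runtime = Runtime.FindValue("python3.12")    // target a currently supported Python runtime
        });

        Console.WriteLine($"Runtime is now: {response.Runtime}");
    }
}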

License plate lookup #API now in #Kazakhstan

If your business is in the automotive trade and operates in, or with partners in, Kazakhstan, it can be useful to be able to determine details of a vehicle automatically from its license plate. This API now does that; you can find it at avtokolik.kz

Car registration plates in Kazakhstan are looked up via the /CheckKazakhstan endpoint, which returns the following information:

  • Make / Model
  • Age
  • Engine Size
  • Colour
  • Region
  • Representative image

Sample Registration Number:

860AMZ17

Sample Json:


{
  "Description": "OPEL VECTRA",
  "RegistrationYear": "1998",
  "CarMake": {
    "CurrentTextValue": "OPEL"
  },
  "CarModel": {
    "CurrentTextValue": "VECTRA"
  },
  "MakeDescription": {
    "CurrentTextValue": "OPEL"
  },
  "ModelDescription": {
    "CurrentTextValue": "VECTRA"
  },
  "Colour": "БЕЛЫЙ",
  "EngineSize": "1598",
  "Region": "Jetisu Region",
  "ImageUrl": "https://avtokolik.kz/image.aspx/@T1BFTCBWRUNUUkE="
}
  1. Make/Model: Instantly identify the brand and model of the vehicle. For instance, the sample plate “860AMZ17” reveals an Opel Vectra.
  2. Age: Gain insight into the vehicle’s registration year. In our example, the Opel Vectra was registered in 1998.
  3. Engine Size: Know the power under the hood with details about engine size. The Opel Vectra comes equipped with a 1598 cc engine.
  4. Colour: Discover the vehicle’s exterior color. In this case, the Opel Vectra is white (“БЕЛЫЙ”).
  5. Region: Understand the region where the vehicle is registered. The Opel Vectra in our example is registered in the Jetisu Region.
  6. Representative Image: Visualize the vehicle with a representative image. Simply follow the provided URL to see how the Opel Vectra looks.

Utilizing this endpoint not only satisfies curiosity about vehicles on the road but also offers practical insights for various purposes, from verifying vehicle details before purchasing to conducting research on vehicle demographics in different regions of Kazakhstan.

With this accessible tool, individuals and businesses can make informed decisions, whether it’s for personal use, insurance purposes, or market analysis. Embrace the power of technology to unlock a wealth of information hidden within Kazakhstan’s car registration plates.

Categories: Uncategorized