
Fixing .NET 8 HttpClient Permission Denied Errors on Google Cloud Run

If you’re deploying a .NET 8 application to Google Cloud Run and encountering a mysterious NetworkInformationException (13): Permission denied error when making HTTP requests, you’re not alone. This is a known issue that stems from how .NET’s HttpClient interacts with Cloud Run’s restricted container environment.

The Problem

When your .NET application makes HTTP requests using HttpClient, you might see an error like this:

System.Net.NetworkInformation.NetworkInformationException (13): Permission denied
   at System.Net.NetworkInformation.NetworkChange.CreateSocket()
   at System.Net.NetworkInformation.NetworkChange.add_NetworkAddressChanged(NetworkAddressChangedEventHandler value)
   at System.Net.Http.HttpConnectionPoolManager.StartMonitoringNetworkChanges()

This error occurs because .NET’s HttpClient attempts to monitor network changes and handle advanced HTTP features like HTTP/3 and Alt-Svc (Alternative Services). To do this, it tries to create network monitoring sockets, which requires permissions that Cloud Run containers don’t have by default.

Cloud Run’s security model intentionally restricts certain system-level operations to maintain isolation and security. While this is great for security, it conflicts with .NET’s network monitoring behavior.

Why Does This Happen?

The .NET runtime includes sophisticated connection pooling and HTTP version negotiation features. When a server responds with an Alt-Svc header (suggesting alternative protocols or endpoints), .NET tries to:

  1. Monitor network interface changes
  2. Adapt connection strategies based on network conditions
  3. Support HTTP/3 where available

These features require low-level network access that Cloud Run’s sandboxed environment doesn’t permit.

The Solution

Fortunately, there’s a straightforward fix. You need to disable the features that require elevated network permissions by setting two environment variables:

Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_DISABLEIPV6", "1");
Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT", "false");

Place these lines at the very top of your Program.cs file, before any HTTP client initialization or web application builder creation.
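
For context, here's a minimal Program.cs sketch showing that placement (the endpoint and downstream URL are illustrative):

Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_DISABLEIPV6", "1");
Environment.SetEnvironmentVariable("DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT", "false");

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();

var app = builder.Build();

// HttpClient instances created from here on skip network-change monitoring and HTTP/3 negotiation.
app.MapGet("/proxy", async (IHttpClientFactory factory) =>
    await factory.CreateClient().GetStringAsync("https://www.example.com/"));

app.Run();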

What These Variables Do

  • DOTNET_SYSTEM_NET_DISABLEIPV6: Disables IPv6 support, which also disables the network change monitoring that requires socket creation.
  • DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT: Explicitly disables HTTP/3 support, preventing .NET from trying to negotiate HTTP/3 connections.

Alternative Approaches

Option 1: Set in Dockerfile

You can bake these settings into your container image:

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app

# Disable network monitoring features
ENV DOTNET_SYSTEM_NET_DISABLEIPV6=1
ENV DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT=false

COPY publish/ .
ENTRYPOINT ["dotnet", "YourApp.dll"]

Option 2: Set via Cloud Run Configuration

You can configure these as environment variables in your Cloud Run deployment:

gcloud run deploy your-service \
  --image gcr.io/your-project/your-image \
  --set-env-vars DOTNET_SYSTEM_NET_DISABLEIPV6=1,DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP3SUPPORT=false

Or through the Cloud Console when configuring your service’s environment variables.

Performance Impact

You might wonder if disabling these features affects performance. In practice:

  • HTTP/3 isn’t widely used yet, and most services work perfectly fine with HTTP/2 or HTTP/1.1
  • Network change monitoring is primarily useful for long-running desktop applications that move between networks (like a laptop switching from WiFi to cellular)
  • In a Cloud Run container with a stable network environment, these features provide minimal benefit

The performance impact is negligible, and the tradeoff is well worth it for a working application.

Why It Works Locally But Fails in Cloud Run

This issue often surprises developers because their code works perfectly on their development machine. That’s because:

  • Local development environments typically run with full system permissions
  • Your local machine isn’t running in a restricted container
  • Cloud Run’s security sandbox is much more restrictive than a typical development environment

This is a classic example of environment-specific behavior where security constraints in production expose issues that don’t appear during development.

Conclusion

The Permission denied error when using HttpClient in .NET 8 on Google Cloud Run is caused by the runtime’s attempt to use network monitoring features that aren’t available in Cloud Run’s restricted environment. The fix is simple: disable these features using environment variables.

This solution is officially recognized by the .NET team as the recommended workaround for containerized environments with restricted permissions, so you can use it with confidence in production.

Have you encountered other .NET deployment issues on Cloud Run? Feel free to share your experiences in the comments below.


Enhanced Italian Vehicle #API: VIN Numbers Now Available for Motorcycles

We’re excited to announce a significant enhancement to the Italian vehicle data API available through Targa.co.it. Starting today, our API responses now include Vehicle Identification Numbers (VIN) for motorcycle lookups, providing developers and businesses with more comprehensive vehicle data than ever before.

What’s New

The Italian vehicle API has been upgraded to return VIN numbers alongside existing motorcycle data. This enhancement brings motorcycle data parity with our car lookup service, ensuring consistent and complete vehicle information across all vehicle types.

Sample Response Structure

Here’s what you can expect from the enhanced API response for a motorcycle lookup:

{
  "Description": "Yamaha XT 1200 Z Super Ténéré",
  "RegistrationYear": "2016",
  "CarMake": {
    "CurrentTextValue": "Yamaha"
  },
  "CarModel": {
    "CurrentTextValue": "XT 1200 Z Super Ténéré"
  },
  "EngineSize": {
    "CurrentTextValue": "1199"
  },
  "FuelType": {
    "CurrentTextValue": ""
  },
  "MakeDescription": {
    "CurrentTextValue": "Yamaha"
  },
  "ModelDescription": {
    "CurrentTextValue": "XT 1200 Z Super Ténéré"
  },
  "Immobiliser": {
    "CurrentTextValue": ""
  },
  "Version": "ABS (2014-2016) 1199cc",
  "ABS": "",
  "AirBag": "",
  "Vin": "JYADP041000002470",
  "KType": "",
  "PowerCV": "",
  "PowerKW": "",
  "PowerFiscal": "",
  "ImageUrl": "http://www.targa.co.it/image.aspx/@WWFtYWhhIFhUIDEyMDAgWiBTdXBlciBUw6luw6lyw6l8bW90b3JjeWNsZQ=="
}

Why VIN Numbers Matter

Vehicle Identification Numbers serve as unique fingerprints for every vehicle, providing several key benefits:

Enhanced Vehicle Verification: VINs offer the most reliable method to verify a vehicle’s authenticity and specifications, reducing fraud in motorcycle transactions.

Complete Vehicle History: Access to VIN enables comprehensive history checks, insurance verification, and recall information lookup.

Improved Business Applications: Insurance companies, dealerships, and fleet management services can now build more robust motorcycle-focused applications with complete vehicle identification.

Regulatory Compliance: Many automotive business processes require VIN verification for legal and regulatory compliance.

Technical Implementation

The VIN field has been seamlessly integrated into existing API responses without breaking changes. The new "Vin" field appears alongside existing motorcycle data, maintaining backward compatibility while extending functionality.

Key Features:

  • No Breaking Changes: Existing integrations continue to work unchanged
  • Consistent Data Structure: Same JSON structure across all vehicle types
  • Comprehensive Coverage: VIN data available for motorcycles registered in the Italian vehicle database
  • Real-time Updates: VIN information reflects the most current data from official Italian vehicle registries

Getting Started

Developers can immediately begin utilizing VIN data in their applications. The API endpoint remains unchanged, and VIN information is automatically included in all motorcycle lookup responses where available.

For businesses already integrated with our Italian vehicle API, this enhancement provides immediate additional value without requiring any code changes. New integrations can take full advantage of complete motorcycle identification data from day one.
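
As a quick illustration, here's one way to pull the new field out of a response in C#. This is a minimal sketch; it assumes you already have the JSON payload from a motorcycle lookup, like the sample shown above:

using System.Text.Json;

// 'json' holds an API response such as the sample shown earlier.
using var doc = JsonDocument.Parse(json);
var profile = doc.RootElement;

var make = profile.GetProperty("CarMake").GetProperty("CurrentTextValue").GetString();
var model = profile.GetProperty("CarModel").GetProperty("CurrentTextValue").GetString();

// The new "Vin" field; treat empty or missing values as "no VIN on record".
var vin = profile.TryGetProperty("Vin", out var vinElement) ? vinElement.GetString() : null;

Console.WriteLine(string.IsNullOrEmpty(vin)
    ? $"{make} {model}: no VIN on record"
    : $"{make} {model}: VIN {vin}");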

Use Cases

This enhancement opens up new possibilities for motorcycle-focused applications:

  • Insurance Platforms: Accurate risk assessment and policy management
  • Marketplace Applications: Enhanced listing verification and buyer confidence
  • Fleet Management: Complete motorcycle inventory tracking
  • Service Centers: Precise parts identification and service history management
  • Regulatory Reporting: Compliance with Italian vehicle registration requirements

Looking Forward

This VIN integration for motorcycles represents our continued commitment to providing comprehensive Italian vehicle data. We’re constantly working to enhance our API capabilities and expand data coverage to better serve the automotive technology ecosystem.

The addition of VIN numbers to motorcycle data brings our Italian API to feature parity with leading international vehicle data providers, while maintaining the accuracy and reliability that Italian businesses have come to expect from Targa.co.it.


Ready to integrate enhanced motorcycle data into your application? Visit Targa.co.it to explore our Italian vehicle API documentation and get started with VIN-enabled motorcycle lookups today.

Porting a PHP OAuth Spotler Client to C#: Lessons Learned

Recently I had to integrate with Spotler’s REST API from a .NET application. Spotler provides a powerful marketing automation platform, and their API uses OAuth 1.0 HMAC-SHA1 signatures for authentication.

They provided a working PHP client, but I needed to port this to C#. Here’s what I learned (and how you can avoid some common pitfalls).


🚀 The Goal

We started with a PHP class that:

✅ Initializes with:

  • consumerKey
  • consumerSecret
  • optional SSL certificate verification

✅ Creates properly signed OAuth 1.0 requests

✅ Makes HTTP requests with cURL and parses the JSON responses.

I needed to replicate this in C# so we could use it inside a modern .NET microservice.


🛠 The Port to C#

🔑 The tricky part: OAuth 1.0 signatures

Spotler’s API requires a specific signature format. It’s critical to:

  1. Build the signature base string by concatenating:
    • The uppercase HTTP method (e.g., GET),
    • The URL-encoded endpoint,
    • And the URL-encoded, sorted OAuth parameters.
  2. Sign it using HMAC-SHA1 with the consumerSecret followed by &.
  3. Base64 encode the HMAC hash.

This looks simple on paper, but tiny differences in escaping or parameter order will cause 401 Unauthorized.
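
For a concrete picture, the base string for a GET to a hypothetical endpoint ends up as a single line like the one below (note the double encoding: the = and & inside the parameter string become %3D and %26):

GET&https%3A%2F%2Frestapi.mailplus.nl%2Fintegrationservice%2Ftest&oauth_consumer_key%3DyourKey%26oauth_nonce%3Dd41d8cd98f00%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1700000000%26oauth_version%3D1.0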

💻 The final C# solution

We used HttpClient for HTTP requests, and HMACSHA1 from System.Security.Cryptography for signatures. Here’s what our C# SpotlerClient does:

✅ Generates the OAuth parameters (consumer_key, nonce, timestamp, etc).
✅ Creates the exact signature base string, matching the PHP implementation character-for-character.
✅ Computes the HMAC-SHA1 signature and Base64 encodes it.
✅ Builds the Authorization header.
✅ Sends the HTTP request, with JSON bodies if needed.

We also added better exception handling: if the API returns an error (like 401), we throw an exception that includes the full response body. This made debugging much faster.


🐛 Debugging tips for OAuth 1.0

  1. Print the signature base string.
    It needs to match exactly what Spotler expects. Any stray spaces or wrong escaping will fail.
  2. Double-check timestamp and nonce generation.
    OAuth requires these to prevent replay attacks.
  3. Compare with the PHP implementation.
    We literally copied the signature generation line-by-line from PHP into C#, carefully mapping rawurlencode to Uri.EscapeDataString.
  4. Turn off SSL validation carefully.
    During development, you might disable certificate checks (ServerCertificateCustomValidationCallback), but never do this in production.

using System;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

namespace SpotlerClient
{
    public class SpotlerClient
    {
        private readonly string _consumerKey;
        private readonly string _consumerSecret;
        private readonly string _baseUrl = "https://restapi.mailplus.nl";
        private readonly HttpClient _httpClient;

        public SpotlerClient(string consumerKey, string consumerSecret, bool verifyCertificate = true)
        {
            _consumerKey = consumerKey;
            _consumerSecret = consumerSecret;

            var handler = new HttpClientHandler();
            if (!verifyCertificate)
            {
                handler.ServerCertificateCustomValidationCallback = (sender, cert, chain, sslPolicyErrors) => true;
            }

            _httpClient = new HttpClient(handler);
        }

        public async Task<string> ExecuteAsync(string endpoint, HttpMethod method, string jsonData = null)
        {
            var request = new HttpRequestMessage(method, $"{_baseUrl}/{endpoint}");
            var authHeader = CreateAuthorizationHeader(method.Method, endpoint);
            request.Headers.Add("Accept", "application/json");
            request.Headers.Add("Authorization", authHeader);

            if (jsonData != null)
            {
                request.Content = new StringContent(jsonData, Encoding.UTF8, "application/json");
            }

            var response = await _httpClient.SendAsync(request);

            var body = await response.Content.ReadAsStringAsync();

            if (!response.IsSuccessStatusCode)
            {
                // Include the full response body in the exception to speed up debugging.
                throw new HttpRequestException($"Spotler API returned {(int)response.StatusCode}: {body}");
            }

            return body;
        }

        private string CreateAuthorizationHeader(string httpMethod, string endpoint)
        {
            var timestamp = DateTimeOffset.UtcNow.ToUnixTimeSeconds().ToString();
            var nonce = Guid.NewGuid().ToString("N");

            var paramString = "oauth_consumer_key=" + Uri.EscapeDataString(_consumerKey) +
                              "&oauth_nonce=" + Uri.EscapeDataString(nonce) +
                              "&oauth_signature_method=" + Uri.EscapeDataString("HMAC-SHA1") +
                              "&oauth_timestamp=" + Uri.EscapeDataString(timestamp) +
                              "&oauth_version=" + Uri.EscapeDataString("1.0");

            var sigBase = httpMethod.ToUpper() + "&" +
                          Uri.EscapeDataString(_baseUrl + "/" + endpoint) + "&" +
                          Uri.EscapeDataString(paramString);

            var sigKey = _consumerSecret + "&";

            var signature = ComputeHmacSha1Signature(sigBase, sigKey);

            var authHeader = $"OAuth oauth_consumer_key=\"{_consumerKey}\", " +
                             $"oauth_nonce=\"{nonce}\", " +
                             $"oauth_signature_method=\"HMAC-SHA1\", " +
                             $"oauth_timestamp=\"{timestamp}\", " +
                             $"oauth_version=\"1.0\", " +
                             $"oauth_signature=\"{Uri.EscapeDataString(signature)}\"";

            return authHeader;
        }

        private string ComputeHmacSha1Signature(string data, string key)
        {
            using var hmac = new HMACSHA1(Encoding.UTF8.GetBytes(key));
            var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(data));
            return Convert.ToBase64String(hash);
        }
    }
}

✅ The payoff

Once the signature was constructed precisely, authentication errors disappeared. We could now use the Spotler REST API seamlessly from C#, including:

  • importing contact lists,
  • starting campaigns,
  • and fetching campaign metrics.

📚 Sample usage

var client = new SpotlerClient(_consumerKey, _consumerSecret, false);
var endpoint = "integrationservice/contact/email@gmail.com";
var json = client.ExecuteAsync(endpoint, HttpMethod.Get).GetAwaiter().GetResult();

🎉 Conclusion

Porting from PHP to C# isn’t always as direct as it looks — especially when it comes to cryptographic signatures. But with careful attention to detail and lots of testing, we managed to build a robust, reusable client.

If you’re facing a similar integration, feel free to reach out or clone this approach. Happy coding!


🚫 Why AWS SDK for S3 No Longer Works Smoothly with .NET Framework 4.8 — and How to Fix It

In 2024, more .NET developers are finding themselves in a strange situation: suddenly, tried-and-tested .NET Framework 4.8 applications that interact with Amazon S3 start throwing cryptic build errors or runtime exceptions. The culprit? The AWS SDK for .NET has increasingly shifted toward support for .NET Core / .NET 6+, and full compatibility with .NET Framework is eroding.

In this post, we’ll explain:

  • Why this happens
  • What errors you might see
  • And how to remove the AWS SDK altogether and replace it with pure .NET 4.8-compatible code for downloading (and uploading) files from S3 using Signature Version 4.

🧨 The Problem: AWS SDK & .NET Framework 4.8

The AWS SDK for .NET (like AWSSDK.S3) now depends on modern libraries like:

  • System.Text.Json
  • System.Buffers
  • System.Runtime.CompilerServices.Unsafe
  • Microsoft.Bcl.AsyncInterfaces

These dependencies were designed for .NET Core and later versions — not .NET Framework. While it was once possible to work around this with binding redirects and careful version pinning, the situation has become unstable and error-prone.


❗ Common Symptoms

You may see errors like:

Could not load file or assembly ‘System.Text.Json, Version=6.0.0.11’

Or:

Could not load file or assembly ‘System.Buffers, Version=4.0.5.0’

Or during build:

Warning: Unable to update auto-refresh reference ‘system.text.json.dll’

Even if you install the correct packages, you may end up needing to fight bindingRedirect hell, and still not get a working application.


✅ The Solution: Remove the AWS SDK

Fortunately, you don’t need the SDK to use S3. All AWS S3 requires is a properly signed HTTP request using AWS Signature Version 4, and you can create that yourself using standard .NET 4.8 libraries.


🔐 Downloading from S3 Without the AWS SDK

Here’s how you can download a file from S3 using HttpWebRequest and Signature Version 4.

✔️ The Key Points:

  • You must include the x-amz-content-sha256 header (even for GETs!)
  • You sign the request using your AWS secret key
  • No external packages required — works on plain .NET 4.8

🧩 Code Snippet


// Requires: using System; using System.IO; using System.Net;
// using System.Security.Cryptography; using System.Text;
public static byte[] DownloadFromS3(string bucketName, string objectKey, string region, string accessKey, string secretKey)
{
    var method = "GET";
    var service = "s3";
    var host = $"{bucketName}.s3.{region}.amazonaws.com";
    var uri = $"https://{host}/{objectKey}";
    var requestDate = DateTime.UtcNow;
    var amzDate = requestDate.ToString("yyyyMMddTHHmmssZ");
    var dateStamp = requestDate.ToString("yyyyMMdd");
    var canonicalUri = "/" + objectKey;
    var signedHeaders = "host;x-amz-content-sha256;x-amz-date";
    var payloadHash = HashSHA256(string.Empty); // Required even for GET

    var canonicalRequest = $"{method}\n{canonicalUri}\n\nhost:{host}\nx-amz-content-sha256:{payloadHash}\nx-amz-date:{amzDate}\n\n{signedHeaders}\n{payloadHash}";
    var credentialScope = $"{dateStamp}/{region}/{service}/aws4_request";
    var stringToSign = $"AWS4-HMAC-SHA256\n{amzDate}\n{credentialScope}\n{HashSHA256(canonicalRequest)}";

    var signingKey = GetSignatureKey(secretKey, dateStamp, region, service);
    var signature = ToHexString(HmacSHA256(signingKey, stringToSign));

    var authorizationHeader = $"AWS4-HMAC-SHA256 Credential={accessKey}/{credentialScope}, SignedHeaders={signedHeaders}, Signature={signature}";

    var request = (HttpWebRequest)WebRequest.Create(uri);
    request.Method = method;
    request.Headers["Authorization"] = authorizationHeader;
    request.Headers["x-amz-date"] = amzDate;
    request.Headers["x-amz-content-sha256"] = payloadHash;

    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var responseStream = response.GetResponseStream())
        using (var memoryStream = new MemoryStream())
        {
            responseStream.CopyTo(memoryStream);
            return memoryStream.ToArray();
        }
    }
    catch (WebException ex)
    {
        using (var errorResponse = (HttpWebResponse)ex.Response)
        using (var reader = new StreamReader(errorResponse.GetResponseStream()))
        {
            var errorText = reader.ReadToEnd();
            throw new Exception($"S3 request failed: {errorText}", ex);
        }
    }
}
🔧 Supporting Methods


private static string HashSHA256(string text)
{
    using (var sha256 = SHA256.Create())
    {
        return ToHexString(sha256.ComputeHash(Encoding.UTF8.GetBytes(text)));
    }
}

private static byte[] HmacSHA256(byte[] key, string data)
{
    using (var hmac = new HMACSHA256(key))
    {
        return hmac.ComputeHash(Encoding.UTF8.GetBytes(data));
    }
}

private static byte[] GetSignatureKey(string secretKey, string dateStamp, string region, string service)
{
    var kSecret = Encoding.UTF8.GetBytes("AWS4" + secretKey);
    var kDate = HmacSHA256(kSecret, dateStamp);
    var kRegion = HmacSHA256(kDate, region);
    var kService = HmacSHA256(kRegion, service);
    return HmacSHA256(kService, "aws4_request");
}

private static string ToHexString(byte[] bytes)
{
    return BitConverter.ToString(bytes).Replace("-", "").ToLowerInvariant();
}
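
Calling it is then a one-liner (bucket, key, region, and paths here are placeholders):

var fileBytes = DownloadFromS3("my-bucket", "backups/archive.zip", "eu-west-1", accessKey, secretKey);
File.WriteAllBytes(@"C:\temp\archive.zip", fileBytes);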


📝 Uploading to S3 Without the AWS SDK

You can extend the same technique for PUT requests. The only differences are:

  • You calculate the SHA-256 hash of the file content instead of the empty string
  • You include Content-Type and Content-Length headers
  • You use PUT instead of GET
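
Here's a minimal sketch of the upload variant; it follows the same Signature V4 pattern and reuses the helper methods above (bucket, key, and content type are placeholders):

public static void UploadToS3(string bucketName, string objectKey, byte[] content, string contentType, string region, string accessKey, string secretKey)
{
    var method = "PUT";
    var service = "s3";
    var host = $"{bucketName}.s3.{region}.amazonaws.com";
    var uri = $"https://{host}/{objectKey}";
    var requestDate = DateTime.UtcNow;
    var amzDate = requestDate.ToString("yyyyMMddTHHmmssZ");
    var dateStamp = requestDate.ToString("yyyyMMdd");
    var canonicalUri = "/" + objectKey;
    var signedHeaders = "host;x-amz-content-sha256;x-amz-date";

    // For PUT, hash the actual payload instead of the empty string.
    string payloadHash;
    using (var sha256 = SHA256.Create())
    {
        payloadHash = ToHexString(sha256.ComputeHash(content));
    }

    var canonicalRequest = $"{method}\n{canonicalUri}\n\nhost:{host}\nx-amz-content-sha256:{payloadHash}\nx-amz-date:{amzDate}\n\n{signedHeaders}\n{payloadHash}";
    var credentialScope = $"{dateStamp}/{region}/{service}/aws4_request";
    var stringToSign = $"AWS4-HMAC-SHA256\n{amzDate}\n{credentialScope}\n{HashSHA256(canonicalRequest)}";
    var signature = ToHexString(HmacSHA256(GetSignatureKey(secretKey, dateStamp, region, service), stringToSign));
    var authorizationHeader = $"AWS4-HMAC-SHA256 Credential={accessKey}/{credentialScope}, SignedHeaders={signedHeaders}, Signature={signature}";

    var request = (HttpWebRequest)WebRequest.Create(uri);
    request.Method = method;
    request.ContentType = contentType;
    request.ContentLength = content.Length;
    request.Headers["Authorization"] = authorizationHeader;
    request.Headers["x-amz-date"] = amzDate;
    request.Headers["x-amz-content-sha256"] = payloadHash;

    using (var requestStream = request.GetRequestStream())
    {
        requestStream.Write(content, 0, content.Length);
    }

    using (var response = (HttpWebResponse)request.GetResponse())
    {
        // A 200 OK here means the upload succeeded; failures surface as WebException.
    }
}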

✅ Summary

Feature                      | AWS SDK for .NET        | Manual Signature V4
.NET Framework 4.8 support   | ❌ Increasingly broken  | ✅ Fully supported
Heavy NuGet dependencies     | ✅ Yes                  | ❌ Minimal
Simple download/upload       | ✅ Yes                  | ✅ Yes (with more code)
Presigned URLs               | ✅ Built-in             | 🟡 Manual support

Final Thoughts

If you’re stuck on .NET Framework 4.8 and running into weird AWS SDK issues — you’re not alone. But you’re not stuck either. Dropping the SDK and using HTTP + Signature V4 is entirely viable, especially for simple tasks like uploading/downloading S3 files.

Let me know if you’d like a full upload example, presigned URL generator, or if you’re considering migrating to .NET 6+.

Farewell #Skype. Here’s how their #API worked.

So, with the shutdown of Skype in May 2025 only two months away, there is not much need to hold on tight to our source code for the Skype API. It worked well for us for years on AvatarAPI.com, but with the imminent shutdown their API will undoubtedly stop working, and the code will no longer be relevant even if the API stays active for a little while longer.

In this post, we’ll take a deep dive into a C# implementation of a Skype user search feature using HTTP requests. This code interacts with Skype’s search API to fetch user profiles based on a given search parameter. We’ll break down the core functionality, security considerations, and potential improvements.

Overview of the SkypeSearch Class

The SkypeSearch class provides a static method, Search, which sends a request to Skype’s search API to retrieve user profiles. It uses an authentication token (SkypeToken) and manages retries in case of failures. Let’s explore its components in detail.

Key Features of the Implementation

  1. Handles API Requests Securely: The method sets various security protocols (Ssl3, Tls, Tls11, Tls12) to ensure compatibility with Skype’s API.
  2. Custom Headers for Authentication: It constructs an HTTP request with necessary headers, including x-skypetoken, x-skype-client, and others.
  3. Manages Rate Limits & Token Refresh: If the API responds with an empty result (potentially due to a 429 Too Many Requests error), the token is refreshed, and the search is retried up to five times.
  4. Enhances API Response: The method modifies the API response to include an additional avatarImageUrl field for each result.

Breaking Down the Search Method

Constructing the API Request

var requestNumber = new Random().Next(100000, 999999);
var url = string.Format(
    "https://search.skype.com/v2.0/search?searchString={0}&requestId={1}&locale=en-GB&sessionId={2}",
    searchParameter, requestNumber, Guid.NewGuid());

This snippet constructs the API request URL with dynamic query parameters, including:

  • searchString: The user input for searching Skype profiles.
  • requestId: A randomly generated request ID for uniqueness.
  • sessionId: A newly generated GUID for session tracking.

Setting HTTP Headers

HTTPHeaderHandler wicket = nvc =>
{
    var nvcSArgs = new NameValueCollection
    {
        {"x-skypetoken", token.Value},
        {"x-skype-client", "1418/8.134.0.202"},
        {"Origin", "https://web.skype.com"}
    };
    return nvcSArgs;
};

Here, we define essential request headers for authentication and compatibility. The x-skypetoken is a crucial element, as it ensures access to Skype’s search API.

Handling API Responses & Retrying on Failure

if (jsonResponse == "")
{
    token = new SkypeToken();
    return Search(searchParameter, token, ++maxRecursion);
}

If an empty response is received (potentially due to an API rate limit), the method refreshes the authentication token and retries the request up to five times to prevent excessive loops.

Enhancing API Response with Profile Avatars

foreach (var node in jResponse["results"])
{
    var skypeId = node["nodeProfileData"]["skypeId"] + "";
    var avatarImageUrl = string.Format(
        "https://avatar.skype.com/v1/avatars/{0}/public?size=l",
        skypeId);
    node["nodeProfileData"]["avatarImageUrl"] = avatarImageUrl;
}

After receiving the API response, the code iterates through the user results and appends an avatarImageUrl field using Skype’s avatar service.

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;
using Newtonsoft.Json.Linq;

namespace SkypeGraph
{
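    // Note: HTTPRequest, HTTPHeaderHandler and SkypeToken are internal
    // helper types from our codebase and are not shown in this post.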
    public class SkypeSearch
    {
        public static JObject Search(string searchParameter, SkypeToken token, int maxRecursion = 0)
        {
            if (maxRecursion == 5) throw new Exception("Preventing excessive retries");
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3 |
                                                   SecurityProtocolType.Tls |
                                                   SecurityProtocolType.Tls11 |
                                                   SecurityProtocolType.Tls12;
            var requestNumber = new Random().Next(100000, 999999);
            var url = string.Format("https://search.skype.com/v2.0/search?searchString={0}&requestId={1}&locale=en-GB&sessionId={2}", searchParameter, requestNumber, Guid.NewGuid());
            var http = new HTTPRequest {Encoder = Encoding.UTF8};
            HTTPHeaderHandler wicket = nvc =>
            {
                var nvcSArgs = new NameValueCollection
                {
                    {"x-skypetoken", token.Value},
                    {"x-skypegraphservicesettings", ""},
                    {"x-skype-client","1418/8.134.0.202"},
                    {"x-ecs-etag", "GAx0SLim69RWpjmJ9Dpc4QBHAou0pY//fX4AZ9JVKU4="},
                    {"Origin", "https://web.skype.com"}
                };
                return nvcSArgs;
            };
            http.OverrideUserAgent =
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36";
            http.OverrideAccept = "application/json";
            http.TimeOut = TimeSpan.FromSeconds(5);
            http.HeaderHandler = wicket;
            http.ContentType = "application/json";
            http.Referer = "https://web.skype.com/";
            var jsonResponse = http.Request(url);
            if (jsonResponse == "")
            {
                // In case of a 429 (Too many requests), then refresh the token.
                token = new SkypeToken();
                return Search(searchParameter, token, ++maxRecursion);
            }
            var jResponse = JObject.Parse(jsonResponse);
            #region sample
            /*
             {
                   "requestId":"240120",
                   "results":[
                      {
                         "nodeProfileData":{
                            "skypeId":"live:octavioaparicio_jr",
                            "skypeHandle":"live:octavioaparicio_jr",
                            "name":"octavio aparicio",
                            "avatarUrl":"https://api.skype.com/users/live:octavioaparicio_jr/profile/avatar",
                            "country":"Mexico",
                            "countryCode":"mx",
                            "contactType":"Skype4Consumer"
                         }
                      }
                   ]
                }
             */
            #endregion
            foreach (var node in jResponse["results"])
            {
                var skypeId = node["nodeProfileData"]["skypeId"] + "";
                var avatarImageUrl = string.Format("https://avatar.skype.com/v1/avatars/{0}/public?size=l", skypeId);
                node["nodeProfileData"]["avatarImageUrl"] = avatarImageUrl;
            }
            return jResponse;
        }
    }
}

Resolving Unauthorized Error When Deploying an #Azure Function via #ZipDeploy

Deploying an Azure Function to an App Service can sometimes result in an authentication error, preventing successful publishing. One common error developers encounter is:

Error: The attempt to publish the ZIP file through https://<function-name>.scm.azurewebsites.net/api/zipdeploy failed with HTTP status code Unauthorized.

This error typically occurs when the deployment process lacks the necessary authentication permissions to publish to Azure. Below, we outline the steps to resolve this issue by enabling SCM Basic Auth Publishing in the Azure Portal.

Understanding the Issue

The error indicates that Azure is rejecting the deployment request due to authentication failure. This often happens when the SCM (Kudu) deployment service does not have the correct permissions enabled, preventing the publishing process from proceeding.

Solution: Enable SCM Basic Auth Publishing

To resolve this issue, follow these steps:

  1. Open the Azure Portal and navigate to your Function App.
  2. In the left-hand menu, select Configuration.
  3. Under the General settings tab, locate SCM Basic Auth Publishing.
  4. Toggle the setting to On.
  5. Click Save and restart the Function App if necessary.

Once this setting is enabled, retry the deployment from Visual Studio or your chosen deployment method. The unauthorized error should now be resolved.
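
If you prefer to script the change instead of clicking through the portal, the same toggle can be flipped with the Azure CLI (a sketch; the resource group and app name are placeholders):

az resource update \
  --resource-group <resource-group> \
  --name scm \
  --namespace Microsoft.Web \
  --resource-type basicPublishingCredentialsPolicies \
  --parent sites/<function-app-name> \
  --set properties.allow=true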

Additional Considerations

  • Use Deployment Credentials: If you prefer not to enable SCM Basic Auth, consider setting up deployment credentials under Deployment Center → FTP/Credentials.
  • Check Azure Authentication in Visual Studio: Ensure that you are logged into the correct Azure account in Visual Studio under Tools → Options → Azure Service Authentication.
  • Use Azure CLI for Deployment: If problems persist, try deploying with the Azure CLI:

    az functionapp deployment source config-zip \
      --resource-group <resource-group> \
      --name <function-app-name> \
      --src <zip-file-path>

By enabling SCM Basic Auth Publishing, you ensure that Azure’s deployment service can authenticate and process your function’s updates smoothly. This quick fix saves time and prevents unnecessary troubleshooting steps.


Obtaining an Access Token for Outlook Web Access (#OWA) Using a Consumer Account

If you need programmatic access to Outlook Web Access (OWA) using a Microsoft consumer account (e.g., an Outlook.com, Hotmail, or Live.com email), you can obtain an access token using the Microsoft Authentication Library (MSAL). The following C# code demonstrates how to authenticate a consumer account and retrieve an access token.

Prerequisites

To run this code successfully, ensure you have:

  • .NET installed
  • The Microsoft.Identity.Client NuGet package
  • A registered application in the Microsoft Entra ID (formerly Azure AD) portal with the necessary API permissions

Code Breakdown

The following code authenticates a user using the device code flow, which is useful for scenarios where interactive login via a browser is required but the application does not have direct access to a web interface.

1. Define Authentication Metadata

var authMetadata = new
{
    ClientId = "9199bf20-a13f-4107-85dc-02114787ef48", // Application (client) ID
    Tenant = "consumers", // Target consumer accounts (not work/school accounts)
    Scope = "service::outlook.office.com::MBI_SSL openid profile offline_access"
};
  • ClientId: Identifies the application in Microsoft Entra ID.
  • Tenant: Set to consumers to restrict authentication to personal Microsoft accounts.
  • Scope: Defines the permissions the application is requesting. In this case:
    • service::outlook.office.com::MBI_SSL is required to access Outlook services.
    • openid, profile, and offline_access allow authentication and token refresh.

2. Configure the Authentication Application

var app = PublicClientApplicationBuilder
    .Create(authMetadata.ClientId)
    .WithAuthority($"https://login.microsoftonline.com/{authMetadata.Tenant}")
    .Build();
  • PublicClientApplicationBuilder is used to create a public client application that interacts with Microsoft identity services.
  • .WithAuthority() specifies that authentication should occur against Microsoft’s login endpoint for consumer accounts.

3. Initiate the Device Code Flow

var scopes = new string[] { authMetadata.Scope };

var result = await app.AcquireTokenWithDeviceCode(scopes, deviceCodeResult =>
{
    Console.WriteLine(deviceCodeResult.Message); // Display login instructions
    return Task.CompletedTask;
}).ExecuteAsync();
  • AcquireTokenWithDeviceCode() initiates authentication using a device code.
  • The deviceCodeResult.Message provides instructions to the user on how to authenticate (typically directing them to https://microsoft.com/devicelogin).
  • Once the user completes authentication, the application receives an access token.

4. Retrieve and Display the Access Token

Console.WriteLine($"Access Token: {result.AccessToken}");
  • The retrieved token can now be used to make API calls to Outlook Web Access services.

5. Handle Errors

try
{
    // ... the AcquireTokenWithDeviceCode call from step 3 ...
}
catch (MsalException ex)
{
    Console.WriteLine($"Authentication failed: {ex.Message}");
}
  • MsalException handles authentication errors, such as incorrect permissions or expired tokens.
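
Putting the pieces together, a minimal console program looks like this (same client ID and scope as above; a sketch rather than production code):

using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class Program
{
    static async Task Main()
    {
        var app = PublicClientApplicationBuilder
            .Create("9199bf20-a13f-4107-85dc-02114787ef48")
            .WithAuthority("https://login.microsoftonline.com/consumers")
            .Build();

        var scopes = new[] { "service::outlook.office.com::MBI_SSL openid profile offline_access" };

        try
        {
            var result = await app.AcquireTokenWithDeviceCode(scopes, deviceCodeResult =>
            {
                Console.WriteLine(deviceCodeResult.Message); // Login instructions for the user
                return Task.CompletedTask;
            }).ExecuteAsync();

            Console.WriteLine($"Access Token: {result.AccessToken}");
        }
        catch (MsalException ex)
        {
            Console.WriteLine($"Authentication failed: {ex.Message}");
        }
    }
}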

Running the Code

  1. Compile and run the program.
  2. Follow the login instructions displayed in the console.
  3. After signing in, the access token will be printed.
  4. Use the token in HTTP requests to Outlook Web Access APIs.

Conclusion

This code provides a straightforward way to obtain an access token for Outlook Web Access using a consumer account. The device code flow is particularly useful for command-line applications or scenarios where interactive authentication via a browser is required.

Cost-Effective SQL Server Database Restore on Microsoft #Azure: Using SMB Shares

1) Motivation Behind the Process

Managing costs efficiently on Microsoft Azure is a crucial aspect for many businesses, especially when it comes to managing resources like SQL Server databases. One area where I found significant savings was in the restoration of SQL Server databases.

Traditionally, to restore databases, I was using a managed disk. The restore process involved downloading a ZIP file, unzipping it to a .bak file, and then restoring it to the main OS disk. However, there was a significant issue with this setup: the cost of the managed disk.

Even when database restores happened only once every six months, I was still paying for the full capacity of the managed disk—500GB of provisioned space. This means I was paying for unused storage space for extended periods, which could be a significant waste of resources and money.

To tackle this issue, I switched to using Azure Storage Accounts with file shares (standard, not premium), which provided a more cost-effective approach. By restoring the database from an SMB share, I could pay only for the data usage, rather than paying for provisioned capacity on a managed disk. Additionally, I could delete the ZIP and BAK files after the restore process was complete, further optimizing storage costs.

2) Issues and Solutions

While the transition to using an Azure Storage Account for database restores was a great move in terms of cost reduction, it wasn’t without its challenges. One of the main hurdles I encountered during this process was SQLCMD reporting that the .bak file did not exist, even though it clearly did.

Symptoms of the Problem

The error message was:

Msg 3201, Level 16, State 2, Server [ServerName], Line 1
Cannot open backup device '\\<UNC Path>\Backups\GeneralPurpose.bak'. Operating system error 3 (The system cannot find the path specified.)
Msg 3013, Level 16, State 1, Server [ServerName], Line 1
RESTORE DATABASE is terminating abnormally.

This was perplexing because I had confirmed that the .bak file existed at the UNC path and that the path was accessible from my system.

Diagnosis

To diagnose the issue, I started by enabling xp_cmdshell in SQL Server. This extended stored procedure allows the execution of operating system commands, which is very helpful for troubleshooting such scenarios.

First, I enabled xp_cmdshell by running the following commands:

-- Enable advanced options
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Enable xp_cmdshell
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

Once xp_cmdshell was enabled, I ran a simple DIR command to verify if SQL Server could access the backup file share:

EXEC xp_cmdshell 'dir \\<UNC Path>\Backups\GeneralPurpose.bak';

The result indicated that the SQL Server service account did not have proper access to the SMB share, and that’s why it couldn’t find the .bak file.

Solution

To resolve this issue, I had to map the network share explicitly within SQL Server using the net use command, which allows SQL Server to authenticate to the SMB share.

Here’s the solution I implemented:

EXEC xp_cmdshell 'net use Z: \\<UNC Path> /user:localhost\<user> <PASSWORD>';

Explanation

  1. Mapping the Network Drive:
    The net use command maps the SMB share to a local drive letter (in this case, Z:), which makes it accessible to SQL Server.
  2. Authentication:
    The /user: flag specifies the username and password needed to authenticate to the share. In my case, I used an account (e.g., localhost\fsausse) with the correct credentials.
  3. Accessing the Share:
    After mapping the network drive, I could proceed to access the .bak file located in the SMB share by using its mapped path (Z:). SQL Server would then be able to restore the database without the “file not found” error.
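
For completeness, the restore itself then points at the mapped drive. The database and file names below are assumed from the error message earlier in this post:

RESTORE DATABASE GeneralPurpose
FROM DISK = 'Z:\Backups\GeneralPurpose.bak'
WITH REPLACE;
-- Add MOVE clauses if the data/log file locations differ on the target server.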

Once the restore was completed, I could remove the drive mapping with:

EXEC xp_cmdshell 'net use Z: /delete';

This approach ensured that SQL Server had the necessary permissions to access the file on the SMB share, and I could restore my database efficiently, only paying for the data usage on Azure Storage.

Conclusion

By transitioning from a managed disk to an SMB share on Azure Storage, I significantly reduced my costs during database restores. The issue with SQL Server not finding the .bak file was quickly diagnosed and resolved by enabling xp_cmdshell, mapping the network share, and ensuring proper authentication. This process allows me to restore databases in a more cost-effective manner, paying only for the data used during the restore, and avoiding unnecessary storage costs between restores.

For businesses looking to optimize Azure costs, this method provides an efficient, scalable solution for managing large database backups with minimal overhead.

#AWS #S3 Error – The request signature we calculated does not match the signature you provided. Check your key and signing method

If you’re working with the AWS SDK for .NET and encounter an error when uploading files to an Amazon S3 bucket, you’re not alone. A recent upgrade in the SDK may introduce unexpected behavior, leading to a “signature mismatch” error for uploads that previously worked smoothly. This blog post describes the problem, analyzes common solutions, and explains how AWS S3 pathing conventions have changed over time—impacting how we specify folders within S3 buckets.

The Problem: “The request signature we calculated does not match the signature you provided.”

When uploading a file to an Amazon S3 bucket using a .NET application, you may encounter this error:

“The request signature we calculated does not match the signature you provided. Check your key and signing method.”

The symptoms of this error can be puzzling. For example, a standard upload to the root of the bucket may succeed, but attempting to upload to a specific folder within the bucket could trigger the error. This was the case in a recent project, where an upload to the bucket carimagerydata succeeded, while uploads to carimagerydata/tx returned the signature mismatch error. The access key, secret key, and permissions were all configured correctly, but specifying the folder path still caused a failure.

Possible Solutions

When you encounter this issue, there are several things to investigate:

1. Bucket Region Configuration

Ensure that the AWS SDK is configured with the correct region for the S3 bucket. The SDK signs requests based on the region setting, and a mismatch between the region used in the code and the actual bucket region often results in signature errors.

AmazonS3Config config = new AmazonS3Config
{
    RegionEndpoint = RegionEndpoint.YourBucketRegion // Ensure it's correct
};

2. Signature Version Settings

The AWS SDK uses Signature Version 4 by default, which is compatible with most regions and recommended by AWS. However, certain legacy setups or bucket configurations may expect Signature Version 2. Explicitly setting Signature Version 4 in the configuration can sometimes resolve these errors.

AmazonS3Config config = new AmazonS3Config
{
    SignatureVersion = "4", // Explicitly specify Signature Version 4
    RegionEndpoint = RegionEndpoint.YourBucketRegion
};

3. Permissions and Bucket Policies

Check if there are any bucket policies or IAM restrictions specific to the folder path you’re trying to upload to. If your bucket policy restricts access to certain paths, you’ll need to adjust it to allow uploads to the folder.

4. Path Style vs. Virtual-Hosted Style URL

Another possible issue arises from changes in how paths are handled. The AWS SDK has evolved over time, and the method of specifying paths within buckets has also changed. The SDK now defaults to virtual-hosted style URLs, where the bucket name is part of the domain (e.g., bucket-name.s3.amazonaws.com). Older setups, however, may expect path-style URLs, where the bucket name is part of the path (e.g., s3.amazonaws.com/bucket-name/key). Specifying path-style addressing in the configuration can sometimes fix compatibility issues:

AmazonS3Config config = new AmazonS3Config
{
    ForcePathStyle = true, // Use path-style addressing (s3.amazonaws.com/bucket-name/key)
    RegionEndpoint = RegionEndpoint.YourBucketRegion
};

Understanding the Key Change: Folder Path Format in S3

The reason these issues are so confusing is that AWS has changed the way folders (often called prefixes) are specified. Historically, users specified a bucket name combined with a folder path and then provided the object’s name. Now, however, the SDK expects a more unified format:

  • Old Format: bucket + path, object
  • New Format: bucket, path + object

This means that in the new format, the folder path (e.g., /tx/) should be included as part of the object key rather than being treated as a separate parameter.

Solution: Specifying the Folder in the Object Key

To upload to a folder within a bucket, you should include the full path in the key itself. For example, if you want to upload yourfile.txt to the tx folder within carimagerydata, the key should be specified as "tx/yourfile.txt".

Here’s how to do it in C#:

string bucketName = "carimagerydata";
string keyName = "tx/yourfile.txt"; // Specify the folder in the key
string filePath = @"C:\path\to\your\file.txt";

AmazonS3Client client = new AmazonS3Client(accessKey, secretKey, RegionEndpoint.YourBucketRegion);

PutObjectRequest request = new PutObjectRequest
{
    BucketName = bucketName,
    Key = keyName, // Full path including folder
    FilePath = filePath,
    ContentType = "text/plain" // Example for text files, adjust as needed
};

PutObjectResponse response = await client.PutObjectAsync(request);

Conclusion

This error is a prime example of how changes in SDK conventions can impact legacy applications. The update to a more unified key format for specifying folder paths in S3 may seem minor, but it can cause unexpected issues if you’re unaware of it. By specifying the folder as part of the object key, you can avoid signature mismatch errors and ensure that your application is compatible with the latest AWS SDK practices.

Always remember to check SDK release notes for updates in configuration defaults, particularly when working with cloud services, as conventions and standards may change over time. This small adjustment can save a lot of time when troubleshooting!


Car License Plate #API support for #Lithuania

Introducing support for Lithuania via our Car License Plate API

Are you looking to seamlessly integrate detailed vehicle information into your applications? Welcome to Numerio Zenklai API, Lithuania’s latest and most advanced car license plate API. This service is designed to provide comprehensive vehicle details, enhancing your ability to offer top-tier services in the automotive, insurance, and related industries.

Why Choose Numerio Zenklai API?

Numerio Zenklai API offers a robust solution for retrieving detailed information about vehicles registered in Lithuania. With a simple request to the /CheckLithuania endpoint, you can obtain critical data, including the vehicle’s make, model, age, engine size, VIN, insurer, and a representative image.

Key Features

1. Comprehensive Vehicle Data:
Access a rich set of details about any Lithuanian-registered vehicle. For example, a query on the registration number “NAO075” returns the following data:

  • Make and Model: Volkswagen Crafter
  • Registration Year: 2006
  • Engine Size: 2461 cc
  • VIN: WV1ZZZ2EZE6017394
  • Fuel Type: Diesel
  • Insurance Company: ERGO INSURANCE SE LIETUVOS FILIALAS
  • Vehicle Type: Lorry
  • Body Type: Bus
  • Representative Image: Image URL

2. Simple and Fast Integration:
Our API is designed for quick integration, ensuring you can start leveraging vehicle data with minimal setup. Here’s a sample JSON response for an easy understanding of the data format:

{
  "Description": "VOLKSWAGEN CRAFTER",
  "RegistrationYear": "2006",
  "CarMake": {
    "CurrentTextValue": "VOLKSWAGEN"
  },
  "CarModel": {
    "CurrentTextValue": "CRAFTER"
  },
  "MakeDescription": {
    "CurrentTextValue": "VOLKSWAGEN"
  },
  "ModelDescription": {
    "CurrentTextValue": "CRAFTER"
  },
  "EngineSize": {
    "CurrentTextValue": "2461"
  },
  "VIN": "WV1ZZZ2EZE6017394",
  "FuelType": "Diesel",
  "InsuranceCompany": "ERGO INSURANCE SE LIETUVOS FILIALAS",
  "InsuranceCompanyNumber": "ACB 1798038:8192689",
  "VehicleType": "LORRY",
  "Body": "Bus",
  "ImageUrl": "http://www.numeriozenklaiapi.lt/image.aspx/@Vk9MS1NXQUdFTiBDUkFGVEVS"
}

3. Reliable and Up-to-Date Information:
Our API ensures that you always receive the most current and accurate data directly from official sources, making it a reliable tool for various applications.
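
As a rough illustration, fetching and parsing a lookup from C# might look like this. The endpoint URL and authentication parameters below are placeholders, not the real API shape; see the official documentation for the exact request format:

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class PlateLookup
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // Hypothetical endpoint shape; consult the Numerio Zenklai API docs for the real one.
        var json = await http.GetStringAsync(
            "https://www.numeriozenklaiapi.lt/<CheckLithuania-endpoint>?RegistrationNumber=NAO075&username=<your-key>");

        using var doc = JsonDocument.Parse(json);
        var root = doc.RootElement;

        Console.WriteLine($"{root.GetProperty("Description").GetString()} " +
                          $"({root.GetProperty("RegistrationYear").GetString()}), " +
                          $"VIN {root.GetProperty("VIN").GetString()}");
    }
}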

Use Cases

  • Automotive Industry: Quickly verify vehicle details for sales, maintenance, and servicing.
  • Insurance Companies: Validate vehicle information for underwriting and claims processing.
  • Fleet Management: Monitor and manage a fleet of vehicles efficiently with detailed data.
  • Law Enforcement: Access critical vehicle information swiftly for enforcement and regulatory purposes.

Getting Started

To begin using Numerio Zenklai API, visit our website and check out our comprehensive documentation. Our user-friendly interface and extensive support resources make it easy for developers to integrate the API into their existing systems.

Conclusion

Numerio Zenklai API is your go-to solution for accessing detailed vehicle information in Lithuania. Whether you’re in the automotive industry, insurance sector, or any field that requires precise vehicle data, our API provides the tools you need to enhance your services and streamline your operations.

Experience the power of reliable vehicle information with Numerio Zenklai API today. Visit Numerio Zenklai API to learn more and start integrating now!