Intercept #AJAX “open” statements in #JavaScript

If you want to change the default behaviour of AJAX across your website (perhaps you want to make sure that every AJAX call is logged before executing, or that it is somehow audited for security before being made), you can use interceptor scripts in JavaScript that override the default functionality of the XMLHttpRequest object behind every AJAX call, even if a library like jQuery is used on top of it.

So, for instance, if you wanted to catch the body of all POST requests sent via AJAX, you could do this:

(function(send) {
    XMLHttpRequest.prototype.send = function(body) {
        var info = "send data\r\n" + body;
        alert(info);
        send.call(this, body); // delegate to the original send
    };
})(XMLHttpRequest.prototype.send);

Or, if you wanted to change the destination of all AJAX requests such that all communications are sent via a logging service first, then you could do this:

(function(open) {
    XMLHttpRequest.prototype.open = function(verb, url, async, user, password) {
        // redirect to a fictitious logging endpoint, preserving the original URL in a header
        open.call(this, verb, "https://logging.example.com/", async, user, password);
        this.setRequestHeader("X-Original-URL", url);
    };
})(XMLHttpRequest.prototype.open);

Where the logging URL is obviously fictitious.
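To see the wrapping pattern in isolation, here is a minimal sketch that applies the same trick to a stand-in object. StubXHR is purely hypothetical (Node has no XMLHttpRequest); in a real page you would wrap XMLHttpRequest.prototype.send directly, as above:

```javascript
// StubXHR: a hypothetical stand-in for XMLHttpRequest, so the pattern can run anywhere.
function StubXHR() {}
StubXHR.prototype.send = function (body) {
  return "sent:" + body;
};

var log = [];

// The interception pattern: capture the original method, replace it with a
// wrapper that logs first, then delegate to the original implementation.
(function (send) {
  StubXHR.prototype.send = function (body) {
    log.push("send data\r\n" + body); // audit/log before executing
    return send.call(this, body);
  };
})(StubXHR.prototype.send);

var result = new StubXHR().send("a=1");
```

The key detail is that the original `send` is captured as an argument to the immediately-invoked function, so the wrapper can still reach it after the prototype slot has been overwritten.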

Hope this is useful to somebody!

Categories: Uncategorized

Car Registration #API now available via #NuGET

NuGet is the de facto package manager for .NET, and, perhaps as a major oversight, the Car Registration API was never available as a NuGet package.

We’ve put this live today, here: and here are the steps to use it:

Install the following three NuGet Packages

Install-Package LicensePlateAPI 
Install-Package System.ServiceModel.Primitives
Install-Package System.ServiceModel.Http

Then, assuming you’ve already opened an account, here is some sample code;

var client = LicensePlateAPI.API.GetClient();
var car = client.CheckAsync("{LICENSE PLATE}", "{USERNAME}").Result;

Where evidently {LICENSE PLATE} and {USERNAME} are placeholders. “CheckAsync” checks for UK license plates, but you can change this to any country by using CheckUSAAsync or Check<Country>Async.


Categories: Uncategorized

Car Registration #API now available in #Slovenia

Slovenia is a European country with a population of 2.081 million and a total of 1.118 million registered passenger cars, giving a car ownership rate of just over 53%. If your business operates in the automotive industry in Slovenia, being able to streamline the customer onboarding experience by letting a user enter a license plate, rather than selecting a make / model, can speed up user intake. The API also reveals information about the vehicle’s insurer, which may not be known to third parties.

Car registration plates in Slovenia use the /CheckSlovenia  endpoint and return the following information:

  • Make / Model
  • Region
  • Insurer
  • Insurance policy number
  • Representative image

Latency warning: This particular request may take over 10 seconds to complete.

Sample Registration Number: 


Sample Json:

{
  "Description": "Ford (D) Mustang 2,3 EcoBoost Avt.",
  "CarMake": {
    "CurrentTextValue": "Ford (D)"
  },
  "CarModel": {
    "CurrentTextValue": "Mustang 2,3 EcoBoost Avt."
  },
  "MakeDescription": {
    "CurrentTextValue": "Ford (D)"
  },
  "ModelDescription": {
    "CurrentTextValue": "Mustang 2,3 EcoBoost Avt."
  },
  "Insurer": "Zavarovalnica Sava d.d.",
  "InsurancePolicy": "000671-4716157",
  "Code": "MBKECO11",
  "Category": "Osebno vozilo",
  "Region": "Ljubljana",
  "ImageUrl": ""
}

Sign up for 10 free requests to this API today, via

Categories: Uncategorized

The source at [] is unreachable. [FIXED]

If you are still using Visual Studio 2013, you may notice that, as of today, the Package Manager Console has stopped working; any attempt to install a new package gives an error like:

The source at [] is unreachable. Falling back to NuGet Local Cache at C:\Users\you\AppData\Local\NuGet\Cache

This is because NuGet has withdrawn support for TLS 1.1, and VS 2013 uses TLS 1.1 by default.

To fix this, type this into the PM console:

[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12

Categories: Uncategorized

Updating a Windows Service via a batch file

If you’ve written a Windows service and you need to update it, perhaps with a bug fix, you’ll find that the process is quite cumbersome: you need to stop the service, uninstall it, pull your changes, reinstall it, configure it, and then start it again.

This procedure, although not difficult, can eat up 15 minutes of development time on every update, and slows the fix/release cycle.

So I created a Windows batch file that runs these steps in sequence, to save lots of time.

@echo off
net stop "My Service"
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\installutil /u MyService.exe
git stash save --keep-index --include-untracked
git pull
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\installutil MyService.exe
sc failure "My Service" actions= restart/60000/restart/60000/restart/60000 reset= 86400
net start "My Service"

The SC command may not be applicable to you; it tells Windows to restart the service in the event of a failure.

Hope this helps someone!

Categories: Uncategorized

Generate video from collection of images in C# #FFMPEG

If you want to create a video dynamically from a collection of images, FFmpeg is a great tool to use with C#.

So, as a prerequisite, you’ll need to download FFmpeg and put it in a folder; I’m calling it “C:\FFmpeg\”, but you can put it anywhere. I’m also assuming you have a collection of images in C:\Input, and you want the output video in C:\out-video\out.mp4; all these paths can be changed.

If you’ve seen my earlier example of capturing video from a webcam and saving it to a video, this approach is more elegant, since it doesn’t involve chopping the header off the bitmap array and swapping the red and blue channels. However, this solution is Windows-only; it’s not cross-platform.

I used the FFMediaToolkit NuGet package, and also System.Drawing.Common.

FFmpegLoader.FFmpegPath = @"C:\FFmpeg\";

var settings = new VideoEncoderSettings(width: 960, height: 544, framerate: 30, codec: VideoCodec.H264);
settings.EncoderPreset = EncoderPreset.Fast;
settings.CRF = 17;
var file = MediaBuilder.CreateContainer(@"C:\out-video\out.mp4").WithVideo(settings).Create();
var files = Directory.GetFiles(@"C:\Input\");
foreach (var inputFile in files)
{
	var binInputFile = File.ReadAllBytes(inputFile);
	var memInput = new MemoryStream(binInputFile);
	var bitmap = Bitmap.FromStream(memInput) as Bitmap;
	var rect = new System.Drawing.Rectangle(System.Drawing.Point.Empty, bitmap.Size);
	var bitLock = bitmap.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
	var bitmapData = ImageData.FromPointer(bitLock.Scan0, ImagePixelFormat.Bgr24, bitmap.Size);
	file.Video.AddFrame(bitmapData); // Encode the frame
	bitmap.UnlockBits(bitLock);
}
file.Dispose(); // Flush and close the output file



So, what this does is create a container video of a given size, framerate, and codec (H264), then add the images one by one. The container is hard-coded to 960×544, but you should base this on the maximum size of the images in your image folder instead.

The images need to be decompressed, from jpeg to Bitmap, then from Bitmap to ImageData, which is an array of BGR24 structures.

Hope this helps someone!

Categories: Uncategorized

Searching available community pharmacies for #COVID vaccinations in Northern Ireland using C#

In Northern Ireland, the over-40s should now go through the community pharmacy network to determine which pharmacies are administering the vaccine.

There is a useful web interface here: that allows you to search for nearby pharmacies, but the process is quite frustrating, in that you have to click on 10-20 different pharmacies before you find one that has slots available.

This tool allows you to search most of the Northern Ireland pharmacy network in one go, to check for vacancies.

It’s available open source here: and I welcome anyone to adapt this tool into something more user-friendly.

It uses the SimplyBook booking system, which is used by most smaller pharmacies in Northern Ireland. Larger operations like Boots use their own systems, so this doesn’t cover other booking systems.

Technically, it extracts the list of pharmacies using the SimplyBook service and obtains a CSRF token from each site; once obtained, it makes a request to check first for services, then for available days.

The date range is hard-coded; it searches up to August 8th, but I’m sure this can be changed easily.

Hope this helps someone!

Categories: Uncategorized

Using the Google search #API in C#

If you are looking to automate some SEO / SERP processing in Google, it’s not long before you look at how to automate the Google search API, and in this case, I’m using C#.

Don’t even attempt to screen-scrape Google; they will spot it very quickly, and you’ll have wasted time doing HTML parsing for nothing. Use the official API.

Now, the official API has one huge caveat: it is only useful for searching within a set number of specified sites. This means you can’t use it to determine “is my website in position #1 for keyword Y”, but it can be used to check which pages of your site, or a competitor’s site, are indexed.

This caveat rules out 99% of standard use cases, so feel free to close the page now if it rules out yours. Although, I have seen that it is possible to include an entire top-level domain in the custom search engine, like “*.es” (Spain).

So, step 1 is to create a custom search engine; you do this from, and when it is created, copy the “cx” parameter; you will need this later.

Step 2, go to and then press “Get A Key”; you’ll need this in step 3.

Step 3, build up a URL as follows: https://www.googleapis.com/customsearch/v1?cx=…&key=…&q=…&start=0

Where cx and key are from step 1 and 2 above respectively.

q is the search query.

start is a number from 0 to 90, which represents the start position in the search results. You cannot retrieve more than 100 results using this API.
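Putting those parameters together, here is a quick JavaScript sketch of the pagination (shown just to illustrate the URL shape; the cx and key values are placeholders, and the endpoint is the standard customsearch/v1 URL from Google’s documentation):

```javascript
// Sketch: build the search URL for each page of results.
// CX and KEY are placeholders for the values from steps 1 and 2.
const CX = "YOUR_CX";
const KEY = "YOUR_KEY";

function buildSearchUrl(query, start) {
  const params = new URLSearchParams({ cx: CX, key: KEY, q: query, start: String(start) });
  return "https://www.googleapis.com/customsearch/v1?" + params.toString();
}

// Results come back 10 at a time and are capped at 100, so start runs 0, 10, ..., 90.
const urls = [];
for (let start = 0; start <= 90; start += 10) {
  urls.push(buildSearchUrl("site:example.com", start));
}
```

URLSearchParams takes care of encoding the query string, which matters if the search term contains spaces or operators like `site:`.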

To request, you just use some code like this

var apiUrl = "" + searchTerm + "&start=" + start;
var response = http.Request(apiUrl);
var jResponse = JObject.Parse(response);

And hopefully that helps someone!

Categories: Uncategorized

#ZIP file decompression from first principles in C#

TL;DR: here is the GitHub repo:

First off, if you’re just looking to unzip a zip file, please stop reading, and look at System.IO.Compression instead, however, if you want to write some code in C# to repair a damaged Zip file, or to find a performant way to decompress one file out of a larger zip file, then perhaps this approach may be useful.

So, from Wikipedia, you can get the header format for a Zip file, which repeats for every zip entry (compressed file):

Offset  Bytes  Description
0       4      Local file header signature = 0x04034b50 (PK♥♦ or "PK\3\4")
4       2      Version needed to extract (minimum)
6       2      General purpose bit flag
8       2      Compression method; e.g. none = 0, DEFLATE = 8 (or "\0x08\0x00")
10      2      File last modification time
12      2      File last modification date
14      4      CRC-32 of uncompressed data
18      4      Compressed size (or 0xffffffff for ZIP64)
22      4      Uncompressed size (or 0xffffffff for ZIP64)
26      2      File name length (n)
28      2      Extra field length (m)
30      n      File name
30+n    m      Extra field
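To make the byte layout concrete, here is a small JavaScript sketch (separate from the post’s C#) that builds a minimal “stored” entry following the table above, then parses it back using the same offsets:

```javascript
// Build a minimal "stored" (method 0) local file entry per the header table,
// then parse it back from first principles. All multi-byte values are little-endian.
const name = "a.txt";
const data = "hi";

const bytes = new Uint8Array(30 + name.length + data.length);
const view = new DataView(bytes.buffer);
view.setUint32(0, 0x04034b50, true);        // local file header signature
view.setUint16(8, 0, true);                 // compression method: 0 = none
view.setUint32(18, data.length, true);      // compressed size
view.setUint32(22, data.length, true);      // uncompressed size
view.setUint16(26, name.length, true);      // file name length (n)
view.setUint16(28, 0, true);                // extra field length (m)
for (let i = 0; i < name.length; i++) bytes[30 + i] = name.charCodeAt(i);
for (let i = 0; i < data.length; i++) bytes[30 + name.length + i] = data.charCodeAt(i);

// Parse it back with the same offsets as the table.
const signature = view.getUint32(0, true);
const method = view.getUint16(8, true);
const compressedSize = view.getUint32(18, true);
const nameLength = view.getUint16(26, true);
const extraLength = view.getUint16(28, true);
const fileName = String.fromCharCode(...bytes.subarray(30, 30 + nameLength));
const dataStart = 30 + nameLength + extraLength;
const content = String.fromCharCode(...bytes.subarray(dataStart, dataStart + compressedSize));
```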

I only wanted a few fields out of these, so I wrote code to extract them as follows:

var header = BitConverter.ToInt32(zipData.Skip(offset).Take(4).ToArray());
if (header != 0x04034b50)
{
	IsValid = false;
	return; // Zip header invalid
}
GeneralPurposeBitFlag = BitConverter.ToInt16(zipData.Skip(offset + 6).Take(2).ToArray());
var compressionMethod = BitConverter.ToInt16(zipData.Skip(offset + 8).Take(2).ToArray());
CompressionMethod = (CompressionMethodEnum) compressionMethod;
CompressedDataSize = BitConverter.ToInt32(zipData.Skip(offset + 18).Take(4).ToArray());
UncompressedDataSize = BitConverter.ToInt32(zipData.Skip(offset + 22).Take(4).ToArray());
CRC = BitConverter.ToInt32(zipData.Skip(offset + 14).Take(4).ToArray());
var fileNameLength = BitConverter.ToInt16(zipData.Skip(offset + 26).Take(2).ToArray());
FileName = Encoding.UTF8.GetString(zipData.Skip(offset + 30).Take(fileNameLength).ToArray());
var extraFieldLength = BitConverter.ToInt16(zipData.Skip(offset + 28).Take(2).ToArray());
ExtraField = zipData.Skip(offset + 30 + fileNameLength).Take(extraFieldLength).ToArray();
var dataStartIndex = offset + 30 + fileNameLength + extraFieldLength;
var bCompressed = zipData.Skip(dataStartIndex).Take(CompressedDataSize).ToArray();
Decompressed = CompressionMethod == CompressionMethodEnum.None ? bCompressed : Deflate(bCompressed);
NextOffset = dataStartIndex + CompressedDataSize;

This rather dense piece of code extracts the relevant data from the zip entry header. It also determines whether the zip entry is compressed or stored as-is, because with a very small file, compression can actually increase the file size.

public enum CompressionMethodEnum
{
    None = 0,
    Deflate = 8
}

This is the enum I used, 0 for no compression, and 8 for deflate.

Now, if the zip entry is actually compressed, then you really have to resort to some code in .NET to decompress it:

private static byte[] Deflate(byte[] rawData)
{
	var memCompress = new MemoryStream(rawData);
	Stream csStream = new DeflateStream(memCompress, CompressionMode.Decompress);
	var msDecompress = new MemoryStream();
	csStream.CopyTo(msDecompress); // Inflate into the output stream
	var bDecompressed = msDecompress.ToArray();
	return bDecompressed;
}

I would really love it if someone could implement this from first principles too, but the process is very complicated, and it fried my head trying to understand it.

So, with this in place, here is the loop I used to extract every file in the archive:

static void Main(string[] args)
{
	var file = "";
	var bFile = File.ReadAllBytes(file);
	var nextOffset = 0;
	do
	{
		var entry = new ZipEntry(bFile, nextOffset);
		if (!entry.IsValid) break;
		var content = Encoding.UTF8.GetString(entry.Decompressed);
		nextOffset = entry.NextOffset;
	} while (true);
}

So, you could perhaps use this code to try to repair a corrupt zip file, or to optimize extraction so you only pull certain data from a large zip, or whatever.

Categories: Uncategorized

High performance extraction of unstructured text from a #PDF in C#

There are a myriad of tools that allow the extraction of text from a PDF, and this code is not meant as a replacement for them. It came from a specific case where I was looking to extract text from a PDF as fast as possible, without worrying about the structure of the document; i.e. to very quickly answer the question “on what pages does the text ‘X’ appear?”

In my specific case, performance was of paramount importance, knowing the layout of the page was unimportant.

The Github repo is here:

And the performance was 10x faster than iText, parsing a 270 page PDF in 0.735 seconds.

It’s also a very interesting look at how one could go about creating a PDF reader from first principles, so without further ado, let’s take a look at a PDF, when opened in a text editor:

7 0 obj
<<
/Contents [ 8 0 R ]
/Parent 5 0 R
/Resources 6 0 R
/Type /Page
>>
endobj

6 0 obj
<<
/Font <<
/ttf0 11 0 R
/ttf1 17 0 R
>>
/ProcSet 21 0 R
>>
endobj

8 0 obj
<<
/Filter [ /FlateDecode ]
/Length 1492
>>
stream
..... BINARY DATA ...
endstream
endobj

What is interesting here is that the page data is encoded in the “BINARY DATA”, which is enclosed between the stream and endstream markers.

This binary data can be decompressed using the Deflate method. There are other compression schemes used in PDF, and they can even be chained, but that goes beyond the scope of this tool.

Here is the code to decompress deflated binary data:

private static string Decompress(byte[] input)
{
            // Skip the two-byte zlib header; DeflateStream expects raw deflate data
            var cutInput = new byte[input.Length - 2];
            Array.Copy(input, 2, cutInput, 0, cutInput.Length);
            var stream = new MemoryStream();
            using (var compressStream = new MemoryStream(cutInput))
            using (var deflateStream = new DeflateStream(compressStream, CompressionMode.Decompress))
            {
                deflateStream.CopyTo(stream);
            }
            return Encoding.Default.GetString(stream.ToArray());
}

So, I read through the PDF document looking for the “stream” and “endstream” markers, and when found, I would snip out the binary data and inflate it to reveal text like this:

/DeviceRGB cs
/DeviceRGB CS
1 0 0 1 0 792 cm
18 -18 901.05 -756  re W n
1.5 w
0 0 0 SC
32.05 -271.6 m
685.25 -271.6 l
32.05 -235.6 m
685.25 -235.6 l
1 w
32.05 -723.9 m
685.25 -723.9 l
1 0 0 1 636 -743.7 Tm
0 0 0 sc
0 Tr
/ttf0 9 Tf
-510 0.1 Td
0 Tr

Most of this text relates to the layout and appearance of the page and is, once again, beyond the scope of the tool. I wanted the text, which is represented like (…)Tj, and which I extracted using a regex as follows:

const string strContentRegex = @"\((?<Content>.*?)\)Tj";
UnstructuredContent = Regex.Matches(RawContent, strContentRegex)
    .Select(m => m.Groups["Content"].Value)
    .ToList();
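As a quick sanity check, the same regex can be exercised in JavaScript against a made-up snippet of decompressed page content:

```javascript
// Extract every (...)Tj group from a hypothetical fragment of page content.
const content = "BT (Hello)Tj 1 0 0 1 636 -743.7 Tm (World)Tj ET";
const regex = /\((?<Content>.*?)\)Tj/g;
const unstructured = [...content.matchAll(regex)].map((m) => m.groups.Content);
```

The non-greedy `.*?` matters: a greedy `.*` would swallow everything between the first `(` and the last `)Tj` on the page.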

Once this was done, I could then write a Find function to find the pages on which a given string of text appeared:

public List<FastPDFPage> Find(string text)
{
            return Pages
                .Where(p => p.UnstructuredContent
                    .Any(c => string.Equals(c, text, StringComparison.OrdinalIgnoreCase)))
                .ToList();
}

And, in performance tests, this consistently performed at 0.735 seconds to scan 270 pages: much faster than iText, and an order of magnitude faster than PDF Miner for Python.

Categories: Uncategorized