Archive for August, 2019

Car Registration API #Pakistan #Sindh


Sindh, one of the four provinces of Pakistan, is the latest addition to our Car Registration API network. In Pakistan, we currently cover Punjab (PB), Khyber Pakhtunkhwa (KP) and now Sindh (SD). See more at

Data for the Sindh province of Pakistan is returned in the following format:

{
  "Description": "HONDA",
  "CarMake": {
    "CurrentTextValue": "HONDA"
  },
  "MakeDescription": {
    "CurrentTextValue": "HONDA"
  },
  "RegistrationDate": "2002-03-06",
  "TaxDate": "2019-12-31",
  "EngineNumber": "D16W9-1003512",
  "BodyType": "SALOON",
  "Owner": "JAM MAHAR ALI RF",
  "CPLC": "CLEAR",
  "Seats": "4",
  "Class": "PR",
  "HP": "1590CC",
  "RegistrationYear": 2002,
  "Image": "http:\/\/\/image.aspx\/@SE9OREE=",
  "Extended": [
    { "Key": "Registration No", "Value": "ADX-008" },
    { "Key": "Make", "Value": "HONDA" },
    { "Key": "Registration Date", "Value": "2002-03-06" },
    { "Key": "Tax Payment", "Value": "2019-12-31" },
    { "Key": "Engine No", "Value": "D16W9-1003512" },
    { "Key": "Safe Custody", "Value": "PURCHASER are advised to contact MR \r\nWing for avoidng dispute on this subject vehicle" },
    { "Key": "Body Type", "Value": "SALOON" },
    { "Key": "Owner Name", "Value": "JAM MAHAR ALI RF" },
    { "Key": "Model Year", "Value": "2002" },
    { "Key": "CPLC", "Value": "CLEAR" },
    { "Key": "Seating Capacity", "Value": "4" },
    { "Key": "Class of Vehicle", "Value": "PR" },
    { "Key": "Horse Power", "Value": "1590CC" }
  ]
}
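Since most of the per-vehicle details live in the Extended list of key/value pairs, a quick Python sketch of flattening that list into a plain dictionary may be useful (the sample below is trimmed from the response above):

```python
import json

# Trimmed version of the sample response shown above
sample = json.loads("""
{
  "Description": "HONDA",
  "RegistrationYear": 2002,
  "Extended": [
    {"Key": "Registration No", "Value": "ADX-008"},
    {"Key": "Make", "Value": "HONDA"},
    {"Key": "Seating Capacity", "Value": "4"}
  ]
}
""")

# Flatten the Extended key/value pairs into an ordinary dict for easy lookup
extended = {item["Key"]: item["Value"] for item in sample["Extended"]}
print(extended["Registration No"])  # ADX-008
```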
Categories: Uncategorized

Intercept HTTPS requests made by a live #iOS app using @Telerik #Fiddler


If you want to debug an iOS app that’s installed on an iPhone, to determine what network calls it makes, and you have a Windows PC, you can use Telerik’s Fiddler to intercept requests between any iOS app and whatever back-end API it uses, even over HTTPS.

Make sure your PC and iPhone are on the same Wi-Fi network before starting.

First, install the Telerik Fiddler application, and make sure your firewall allows connections on port 8888. Now, on your iPhone, go to Settings > Wi-Fi > (i) > Configure Proxy > Manual.

Get the IP address of your PC using ipconfig (your address may be different from mine).


Enter the port as 8888 and press Save. Now, Fiddler should be able to capture HTTP (insecure) requests.

To capture secure requests (HTTPS), you will need to install a certificate on your iPhone. To do so, you can download it on your iPhone from here

Or use the Camera app on your iPhone to scan this QR code:



When asked to choose a device, select iPhone.


This downloads the cert, but you need to install it and activate it.

Open your Settings app, and tap “Profile Downloaded”, then “Install”.


Then enable the cert by going to Settings > General > About > Certificate Trust Settings.

Make sure that, in Fiddler, you have selected Tools > Options > HTTPS > Decrypt HTTPS traffic, and then restart Fiddler.


I also added the extra TLS protocols and selected “Ignore server certificate errors”.

Hopefully now, if you operate your iPhone, you should see both HTTP and HTTPS requests appearing in Fiddler.

Certain apps, like the App Store, will not work, since they are smart enough to notice that a “man-in-the-middle” attack is going on. This highlights the importance of certificate pinning, which you should implement if your app handles financial data.




Categories: Uncategorized

Customising #Intellisense for #Javascript libraries


If you’re using an IDE like Visual Studio to write web apps, then you may have noticed these “(in <filename>.d.ts)” entries appearing in the IntelliSense. Most of these are built-in, but you can write your own in order to customise how this appears, so that you can provide added assistance to people using your Javascript library. Here I am experimenting with the Javascript library.

First, I’m going to auto-generate my d.ts file using Typescript (TSC). So I just rename my js file to a .ts file and run this

tsc --lib dom,ES6 smtp.debug.ts --declaration

The libraries used here (ES6 & dom) were what I needed for my file; you might need to play with this to get it to work with your Javascript.

This generates a Typings file (d.ts) as follows;

declare var Email: {
    send: (params: any) => Promise<{}>;
    ajaxPost: (url: any, params: any, callback: any) => void;
    ajax: (src: any, callback: any) => void;
    createCORSRequest: (method: any, url: any) => XMLHttpRequest;
};

Then you can optionally reference the typings file from the JS file like so;

/// <reference path="smtp.debug.d.ts" />

However, I found the IDE picked it up automatically, even without this, due to the file names. Unfortunately, it didn’t seem to load a remote d.ts file, so I abandoned this project at this point.


Categories: Uncategorized

Using #AI in #Cryptanalysis



First we set some helper variables that define the project

In [8]:

Next, we load the data from the csv file. We also cache the content of the csv file in “pickle” format. This is because csv files take a few seconds to load, whereas pickle files load almost instantaneously; therefore, to keep the developer sane, we use the cache in development.
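As a sketch, the caching pattern looks something like this (the file names and the use of the standard csv module are my own assumptions, not the notebook’s actual code):

```python
import csv
import os
import pickle

def load_rows(csv_path, cache_path):
    """Load rows from a csv file, caching them in pickle format.

    The first call parses the (slow) csv; subsequent calls load the
    (fast) pickle cache instead.
    """
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    with open(cache_path, "wb") as f:
        pickle.dump(rows, f)
    return rows
```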

In [5]:
input output
0 -2|-72|-11|-2|18|100|-69|15|93|120|15|-97|-35|… 1D8HB58D04F177301
1 -105|-53|20|-126|-87|13|-124|65|58|-116|63|34|… JM1BJ225621628507
2 90|-56|-40|-3|95|0|42|4|-112|48|-37|-10|-7|115… JN1BJ1CP6HW007566
3 116|-85|109|-127|30|-30|23|13|40|127|-97|67|-1… WBA3B1G57ENN90705
4 -121|-22|102|72|-31|-110|-40|36|-117|-119|86|-… 1G1PK5SB4E7391908


We then transform the data into x for the input features, and y for the output predictions. We could have continued to use pandas as our data structure of choice, but we don’t need its advanced features, so we simply load the data into numpy arrays, which are much more lightweight.
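The transformation might look roughly like this (a sketch only; the column names match the dataframe preview above, but the helper name is mine):

```python
import numpy as np

def to_xy(rows):
    # Each input is a "|"-separated string of signed integers;
    # each output is a 17-character sequence, split into one column per character.
    x = np.array([[int(v) for v in r["input"].split("|")] for r in rows])
    y = np.array([list(r["output"]) for r in rows])
    return x, y

# Trimmed example row (real inputs have 24 integers and 17 output characters)
rows = [{"input": "-2|-72|-11", "output": "1D8"}]
x, y = to_xy(rows)
print(x.shape, y.shape)  # (1, 3) (1, 3)
```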

In [9]:

We split the data into three portions.

  • Test – used for final testing, completely unseen by the model
  • Validation – used for tuning the model hyperparameters, functions as a “pretend” test set
  • Train – the data used to train the model

The key difference between the test set and validation set is the fact that the models will be tuned for the validation set but not the test set.
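A minimal sketch of such a three-way split (the fractions and function name are assumptions, not the notebook’s actual values):

```python
import numpy as np

def three_way_split(x, y, test_frac=0.1, val_frac=0.1, seed=0):
    # Shuffle indices, then carve off test and validation portions;
    # the remainder becomes the training set.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_test = int(len(x) * test_frac)
    n_val = int(len(x) * val_frac)
    test, val = idx[:n_test], idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]
    return (x[train], y[train]), (x[val], y[val]), (x[test], y[test])
```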

In [10]:

We then build and train our models. We use a random forest classifier to predict each character in the output sequence. Through experimentation this was the best performing approach.

At a high level, a random forest classifier builds decision trees by constructing a tree of decision points based on the input data. The decision points are adjusted during training. Multiple trees are built, and their combined result is the prediction.

For more information see here:

Further models that were experimented with include a simple feedforward network, a recurrent network and an autoencoder neural network. However, this approach was the best performing by a large margin.
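The per-character ensemble described above can be sketched with scikit-learn along these lines (hyperparameters and function names here are placeholders, not the notebook’s actual code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_models(x, y, n_chars=17):
    # One random forest per output character position
    models = []
    for i in range(n_chars):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(x, y[:, i])
        models.append(clf)
        print(f"finished with model #{i}")
    return models

def predict(models, x):
    # Each model predicts one character; join them into the output sequence
    per_char = [m.predict(x) for m in models]
    return ["".join(chars) for chars in zip(*per_char)]
```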

In [81]:
finished with model #0
finished with model #1
finished with model #2
finished with model #3
finished with model #4
finished with model #5
finished with model #6
finished with model #7
finished with model #8
finished with model #9
finished with model #10
finished with model #11
finished with model #12
finished with model #13
finished with model #14
finished with model #15
finished with model #16


Now we evaluate the model to see how it performs and how well it generalizes

In [82]:
In [83]:

Now we graph the results. The training set attains near-perfect performance, which is to be expected with tree-based models. The validation and test sets are consistent with each other, showing that the first 8 digits and the 11th digit are the most accurate. The 9th digit, the security digit, is the least accurate, as expected. Everything after the 9th digit (with the exception of digit 11) also has drastically reduced accuracy.

In [84]:
In [85]:
In [86]:


Now we construct our model for production by training on the entire dataset and serializing it to file.

In [87]:
finished with model #0
finished with model #1
finished with model #2
finished with model #3
finished with model #4
finished with model #5
finished with model #6
finished with model #7
finished with model #8
finished with model #9
finished with model #10
finished with model #11
finished with model #12
finished with model #13
finished with model #14
finished with model #15
finished with model #16
In [90]:
finished 0
finished 1
finished 2
finished 3
finished 4
finished 5
finished 6
finished 7
finished 8
finished 9
finished 10
finished 11
finished 12
finished 13
finished 14
finished 15
finished 16


After serializing the models, they can be used by our server. We use a Flask server that loads the models and responds to inputs to make predictions. The code lives in “” and should be fairly self-explanatory.
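As an illustration only (the real server’s code isn’t shown here, so the endpoint logic below is an assumption), the skeleton of such a Flask server might look like:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/test")
def ping():
    # simple liveness check
    return "test ok"

@app.route("/")
def predict():
    # "text" carries the signed integers separated by "|";
    # the actual model prediction step is omitted from this sketch
    values = [int(v) for v in request.args["text"].split("|")]
    return {"count": len(values)}

if __name__ == "__main__":
    app.run(port=5000)
```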

To run the server, make sure python is installed from

After installing python:

  1. Open the command line program – cmd on windows, terminal on osx
  2. Navigate to the server folder
  3. Run the install file – note this may take some time depending on the computer specifications
    and may look like the program has frozen, please allow it to finish
    a. For osx: run “./”
    b. For windows: run “install”
  4. Run the run file
    a. For osx: run “./”
    b. For windows: run “run”
  5. After the server starts, a process will run to load the model. The console output will denote the progress and the final message “ALL MODELS LOADED” will indicate the server is ready to use.

Now a Flask server will be available at “http://localhost:5000”. This server can be queried by any REST client. For testing, I recommend Postman or Restlet Client.

The following endpoints are supported:

  • /test – a test endpoint that responds with a test message
  • /info – responds with test probabilities of each character in the output, can be used to estimate confidence
  • / – accepts a parameter “text” with 24 signed integers separated by “|”, and returns the predicted 17-character output sequence

For example: http://localhost:5000?text=-2|-72|-11|-2|18|100|-69|15|93|120|15|-97|-35|52|85|-114|53|-123|-1|-101|-38|125|-100|113
Will return: 1D8HB58D04F177301

Categories: Uncategorized

Uploading to #AWS #Glacier with the #CLI


Amazon Glacier is a great place to store data cheaply that you have no urgent requirement to recover quickly. This could be data that you need to store for “safe keeping”, or backups-of-backups. It’s not a replacement for S3, since you can’t retrieve the data quickly.

The CLI examples on the Amazon site are quite complex, since they recommend you break the data into 1MB chunks before uploading. The approach below is simpler, but will only work for files up to 4GB.

First, you’ll need to install the AWS CLI and run AWS configure, and enter your IAM access key, secret, and region (i.e. eu-west-1).

You can create a vault via the AWS web interface; the following command just confirms that you have access to your vaults via the CLI;

aws glacier list-vaults --account-id -

Which returns;

{
  "VaultList": [
    {
      "VaultARN": "arn:aws:glacier:eu-west-1:xxxx:vaults/AI-String",
      "VaultName": "AI-String-xxx",
      "CreationDate": "2019-08-20T14:19:59.345Z",
      "NumberOfArchives": 0,
      "SizeInBytes": 0
    },
    {
      "VaultARN": "arn:aws:glacier:eu-west-1:xxxxxx:vaults/xxxxx",
      "VaultName": "xxxxx-www-backup",
      "CreationDate": "2017-08-27T14:45:10.021Z",
      "LastInventoryDate": "2017-08-28T09:09:29.270Z",
      "NumberOfArchives": 3,
      "SizeInBytes": 8880671766
    }
  ]
}

Now, you can upload a file as follows;

aws glacier upload-archive --vault-name AI-String-Processing --account-id - --archive-description "AI String xxxx" --body AI-String-xxxx

Which returns

{
  "location": "/xxxxx/vaults/AI-String-xxxx/archives/....",
  "checksum": ".....f",
  "archiveId": "...Q"
}
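The “checksum” field is Glacier’s SHA-256 tree hash of the upload: the payload is hashed in 1 MiB chunks, and the digests are then combined pairwise up to a single root. A minimal Python sketch of that computation (my own implementation, for illustration, not AWS code):

```python
import hashlib

def tree_hash(data):
    # Hash the payload in 1 MiB chunks
    chunk = 1024 * 1024
    level = [hashlib.sha256(data[i:i + chunk]).digest()
             for i in range(0, len(data), chunk)]
    if not level:
        level = [hashlib.sha256(b"").digest()]
    # Combine adjacent digests pairwise until one root digest remains;
    # an odd digest at the end of a level is promoted unchanged
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(hashlib.sha256(level[i] + level[i + 1]).digest())
            else:
                nxt.append(level[i])
        level = nxt
    return level[0].hex()
```

For data under 1 MiB the tree hash is just the plain SHA-256 of the file, which makes it easy to sanity-check.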

Now, if you wait about 4 hours, you should see the archive counter increase in the AWS Web User interface.


Categories: Uncategorized

Car Registration #API in #Cyprus


Cyprus has the highest rate of car ownership in the world, with 742 cars per 1,000 people, which in 2009 was 651,149 cars out of a total population of 854,802. If your company is in the automotive trade, and is selling cars, car services, or car parts to Cyprus, then knowing the exact make and model of a Cypriot car from its license plate will make your website that much easier to use.

We’ve developed an API at that allows you to determine the make, model, age and engine size of a Cypriot car from its number plate. The website is in Greek, but don’t worry if you don’t speak Greek; you can access the same API by clicking any of the other country versions, and access the /CheckCyprus API directly.

Cyprus support

Car registration plates in Cyprus use the /CheckCyprus  endpoint and return the following information:

  • Make / Model
  • Engine Size
  • Age
  • Representative image

Sample Registration Number: 


Sample Json:


{
  "Description": "DODGE CALIBER",
  "CarMake": {
    "CurrentTextValue": "DODGE"
  },
  "CarModel": {
    "CurrentTextValue": "CALIBER"
  },
  "MakeDescription": {
    "CurrentTextValue": "DODGE"
  },
  "ModelDescription": {
    "CurrentTextValue": "CALIBER"
  },
  "EngineSize": {
    "CurrentTextValue": "1998"
  },
  "Power": "115",
  "RegistrationDate": "17/03/2010",
  "ManufactureDate": "01/01/2006",
  "Convertible": "False",
  "ImageUrl": ""
}


Categories: Uncategorized

.@MaxMind IP to Country offline lookup with #autoUpdate in C#


If you want to determine the country from an IP address, then there are a million and one APIs that you can use, but they tend to have free usage limits, plus there is a performance hit of making a network call every time.

You can download a free database from MaxMind that you can use to do the lookup offline, but it adds another complexity: IP addresses change ownership, and can map to a different country if you don’t keep the database updated.

So, not only does this demo determine the Country from an IP address from an offline database, but it also has code to automatically download and update the data every month.

So, if you don’t want to read further, and just jump to the code, here is the repo;

So, the basics first. If you are happy with a rough lookup, then just pull the MaxMind Nuget package as follows

install-package MaxMind.GeoIP2

Then download and unzip the GeoLite2-Country.mmdb file from and place it in your bin folder.

Then all you need is this;

var reader = new DatabaseReader("GeoLite2-Country.mmdb");
var response = reader.Country("");

Which should say that IP is in the US (It’s Google)

Now, the next fun part is how to update this mmdb file automatically. We can download the TAR.GZ as follows;

var wc = new WebClient();
var bData = wc.DownloadData("");
var zippedStream = new MemoryStream(bData);

This gives us a Tar GZ file: a file in Tar format (uncompressed, but a format where multiple files are stored as one) that has been Gzipped (compressed). So, we need to un-gzip (decompress) the file, and copy it into a memory stream;

var gzip = new GZipStream(zippedStream, CompressionMode.Decompress);
var mTar = new MemoryStream();
gzip.CopyTo(mTar);
mTar.Seek(0, SeekOrigin.Begin);

Now, with a TAR stream, we have to separate it into a list of objects, defined as follows;

public class TarEntry
{
    public string FileName { get; set; }
    public byte[] Contents { get; set; }
}

The code to parse the TAR file is as follows;

private static List<TarEntry> ExtractTarEntries(Stream stream)
{
    var lTarEntries = new List<TarEntry>();
    var buffer = new byte[100];
    while (true)
    {
        // Read the 100-byte file name field at the start of the tar header
        stream.Read(buffer, 0, 100);
        var name = Encoding.ASCII.GetString(buffer).Trim('\0');
        if (String.IsNullOrWhiteSpace(name))
            break;

        // Skip mode, uid and gid (8 bytes each) to reach the size field
        stream.Seek(24, SeekOrigin.Current);
        stream.Read(buffer, 0, 12);
        // The size is stored as an octal ASCII string
        var size = Convert.ToInt64(Encoding.ASCII.GetString(buffer, 0, 12).Trim('\0'), 8);

        // Skip the remainder of the 512-byte header, then read the file contents
        stream.Seek(376L, SeekOrigin.Current);
        var buf = new byte[size];
        stream.Read(buf, 0, buf.Length);
        lTarEntries.Add(new TarEntry
        {
            Contents = buf,
            FileName = name
        });

        // Entries are padded to 512-byte boundaries; skip the padding
        var pos = stream.Position;
        var offset = 512 - (pos % 512);
        if (offset == 512)
            offset = 0;
        stream.Seek(offset, SeekOrigin.Current);
    }
    return lTarEntries;
}
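The offsets used above (100-byte name, skip 24, 12-byte octal size, skip 376) add up to the classic 512-byte tar header. A quick Python check of that layout, using a tar archive built in memory (illustrative only):

```python
import io
import tarfile

def parse_header(buf):
    # The name occupies bytes 0-99; the size is an octal ASCII string at bytes 124-135
    name = buf[0:100].rstrip(b"\x00").decode("ascii")
    size = int(buf[124:136].rstrip(b"\x00 ").decode("ascii"), 8)
    return name, size

# Build a single-file tar archive in memory using the classic (ustar) format
mem = io.BytesIO()
with tarfile.open(fileobj=mem, mode="w", format=tarfile.USTAR_FORMAT) as tf:
    payload = b"hello maxmind"
    info = tarfile.TarInfo(name="GeoLite2-Country.mmdb")
    info.size = len(payload)
    tf.addfile(info, io.BytesIO(payload))

name, size = parse_header(mem.getvalue()[:512])
print(name, size)  # GeoLite2-Country.mmdb 13
```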

Finally, the code to check the age of the database file is as follows;

var fi = new FileInfo("GeoLite2-Country.mmdb");
if (!fi.Exists || (DateTime.Now - fi.LastWriteTime).TotalDays > 30)
{
    // download and extract a fresh copy of the database, as above
}

Categories: Uncategorized