.@MaxMind IP to Country offline lookup with #autoUpdate in C#

If you want to determine the country from an IP address, there are a million and one APIs that you can use, but they tend to have free usage limits, plus there is the performance hit of making a network call every time.
You can download a free database from MaxMind that lets you do the lookup offline, but it adds another complexity: IP addresses change ownership, and can map to a different country if you don’t keep the database updated.
So, not only does this demo determine the Country from an IP address from an offline database, but it also has code to automatically download and update the data every month.
So, if you don’t want to read further, and just want to jump to the code, here is the repo;
https://github.com/infiniteloopltd/MaxMindDemo
So, the basics first. If you are happy with a rough lookup, then just pull the MaxMind NuGet package as follows
install-package MaxMind.GeoIP2
Then download and unzip the GeoLite2-Country.mmdb file from https://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz and place it in your bin folder.
Then all you need is this;
var reader = new DatabaseReader("GeoLite2-Country.mmdb");
var response = reader.Country("8.8.8.8");
Console.WriteLine(response.Country.IsoCode);
Which should say that IP 8.8.8.8 is in the US (It’s Google)
Now, the next fun part is how to update this mmdb file automatically. We can download the TAR.GZ as follows;
var wc = new WebClient();
var bData = wc.DownloadData("https://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz");
var zippedStream = new MemoryStream(bData);
Which gives us a tar.gz file; that is, a file in TAR format (uncompressed, but a format where multiple files are stored as one) that has then been gzipped (compressed). So, we need to decompress the gzip layer, and copy it into a memory stream;
var gzip = new GZipStream(zippedStream, CompressionMode.Decompress);
var mTar = new MemoryStream();
gzip.CopyTo(mTar);
mTar.Seek(0, SeekOrigin.Begin);
Now, with the TAR stream, we have to separate it into a list of objects, defined as follows;
public class TarEntry
{
public string FileName { get; set; }
public byte[] Contents { get; set; }
}
The code to parse the TAR file is as follows;
private static List<TarEntry> ExtractTarEntries(Stream stream)
{
    var lTarEntries = new List<TarEntry>();
    var buffer = new byte[100];
    while (true)
    {
        // The file name is the first 100 bytes of each 512-byte header
        stream.Read(buffer, 0, 100);
        var name = Encoding.ASCII.GetString(buffer).Trim('\0');
        if (String.IsNullOrWhiteSpace(name))
            break;
        // Skip mode, uid and gid (8 bytes each), then read the 12-byte octal file size
        stream.Seek(24, SeekOrigin.Current);
        stream.Read(buffer, 0, 12);
        var size = Convert.ToInt64(Encoding.ASCII.GetString(buffer, 0, 12).Trim('\0'), 8);
        // Skip the remainder of the 512-byte header, then read the file contents
        stream.Seek(376L, SeekOrigin.Current);
        var buf = new byte[size];
        stream.Read(buf, 0, buf.Length);
        lTarEntries.Add(new TarEntry
        {
            Contents = buf,
            FileName = name
        });
        // File contents are padded up to the next 512-byte boundary
        var pos = stream.Position;
        var offset = 512 - (pos % 512);
        if (offset == 512)
            offset = 0;
        stream.Seek(offset, SeekOrigin.Current);
    }
    return lTarEntries;
}
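If it helps to see the header layout without the stream plumbing, here is the same parser sketched in Node.js: the file name occupies bytes 0-99 of each 512-byte header, the size is an octal string at bytes 124-135, and file contents are padded up to the next 512-byte boundary.

```javascript
// Parse a plain (ustar) TAR archive held in a Buffer into
// { fileName, contents } entries; no extension headers are handled.
function extractTarEntries(buf) {
  const entries = [];
  let pos = 0;
  while (pos + 512 <= buf.length) {
    const name = buf.toString('ascii', pos, pos + 100).split('\0')[0];
    if (!name) break; // an all-zero header marks the end of the archive
    const size = parseInt(buf.toString('ascii', pos + 124, pos + 136).split('\0')[0].trim(), 8);
    entries.push({ fileName: name, contents: buf.slice(pos + 512, pos + 512 + size) });
    // Skip the 512-byte header plus the contents, rounded up to a full block
    pos += 512 + Math.ceil(size / 512) * 512;
  }
  return entries;
}
```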
Finally, the code to check the age of the database is as follows;
var fi = new FileInfo("GeoLite2-Country.mmdb");
if (!fi.Exists || (DateTime.Now - fi.LastWriteTime).TotalDays > 30)
{
DownloadGeoliteDB();
}
Make #DNS queries using client side #Javascript using DNS-JS

DNS is a very simple protocol, which runs over UDP port 53. Its primary role is to determine the IP address that is related to a domain. So, for example, DNS-JS.com resolves to 95.154.244.106, but it’s also used to determine which server handles the email for a given domain, and lots of other ‘glue’ that holds the internet together.
The issue is that you can’t make low-level packet requests using Javascript alone, so this library helps you make DNS requests using browser-side Javascript.
So, a simple example would be;
DNS.Query("dns-js.com",
    DNS.QueryType.A,
    function(data) {
        console.log(data);
    });
Which makes a DNS “A” type request for “dns-js.com”, and will return the result as a parameter to your callback function, in this case as “data”, and written to the console.
The full list of Query types are as follows;
- A : Address Mapping record
- NS : Name Server record
- MD : A mail destination (OBSOLETE - use MX)
- MF : A mail forwarder (OBSOLETE - use MX)
- CNAME : Canonical Name record
- SOA : Marks the start of a zone of authority
- MB : A mailbox domain name (EXPERIMENTAL)
- MG : A mail group member (EXPERIMENTAL)
- MR : A mailbox rename domain name (EXPERIMENTAL)
- NULL : A Null resource record (EXPERIMENTAL)
- WKS : A well known service description
- PTR : Reverse-lookup Pointer record
- HINFO : Host information
- MINFO : Mailbox or mail list information
- MX : Mail exchanger record
- TXT : Text Record
- RP : Responsible Person
- AFSDB : AFS Data Base location
- AAAA : IP Version 6 Address record
- SRV : Service Location
- SSHFP : A SSH Fingerprint resource record
- RRSIG : RRSIG rfc3755
- AXFR : DNS zone transfer request
- ANY : Generic any query
- URI : A Uniform Resource Identifier (URI) resource record
- CAA : A certification authority authorization
and, you can see other demos at https://www.dns-js.com/
Error getting value from ‘ScopeId’ on ‘System.Net.IPAddress’.

When you try to serialize an object that contains an IPAddress, you get the error message: Error getting value from ‘ScopeId’ on ‘System.Net.IPAddress’.
So, you have to override how Json.NET (Newtonsoft) serializes this type, which means you create a class that converts this problematic type to and from a string yourself.
public class IPConverter : JsonConverter<IPAddress>
{
    public override void WriteJson(JsonWriter writer, IPAddress value, JsonSerializer serializer)
    {
        writer.WriteValue(value.ToString());
    }

    public override IPAddress ReadJson(JsonReader reader, Type objectType, IPAddress existingValue, bool hasExistingValue, JsonSerializer serializer)
    {
        var s = (string)reader.Value;
        return IPAddress.Parse(s);
    }
}
Now, you pass this class into JsonSerializerSettings, like so;
var jsonSettings = new JsonSerializerSettings();
jsonSettings.Converters.Add(new IPConverter());
var json = JsonConvert.SerializeObject(result, Formatting.Indented, jsonSettings);
Where result is the object that contains the IP address.
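If you ever hit the same shape of problem in Javascript, the rough analogue of a Json.NET converter is a replacer/reviver pair. A sketch, using a hypothetical structured 'address' value rather than a real IPAddress type:

```javascript
// Serialize the structured 'address' value as a plain string, and rebuild
// it on the way back in - the same to-string/from-string idea as IPConverter.
const toJson = obj => JSON.stringify(obj, (key, value) =>
  key === 'address' ? value.octets.join('.') : value);

const fromJson = json => JSON.parse(json, (key, value) =>
  key === 'address' ? { octets: value.split('.').map(Number) } : value);
```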
Controlling #AWS #Route53 via the command line

If your emergency backup system for your website is a standby server, and the idea is to boot up the standby server and switch DNS if your main server ever goes down, then this is all fine … until the day the server does go down, every second of downtime costs money, and logging into Route53 to change those DNS records manually seems to take ages, especially if you have lots of websites, subdomains, MX records, etc.
Here is where creating a script for AWS Route53 in advance can save precious seconds, and makes sure that you can have an orderly and predictable changeover of DNS records, from live to backup, and then backup to live again.
AWS has a command line interface (CLI), and you can download the tools here; https://aws.amazon.com/cli/
Now, you’ll have to set up AWS CLI by typing aws configure, and providing your access key etc.
You’ll need the zone ID of each domain you want to configure; you can list them all by typing aws route53 list-hosted-zones
Next, you’ll need to create a JSON file for each domain you want to change, with details of the new records you need to change. Here is a sample JSON file:
{
  "Comment": "Update A record",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.domain.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "xxx.xxx.xxx.xxx"
          }
        ]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "domain.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "xxx.xxx.xxx.xxx"
          }
        ]
      }
    }
  ]
}
Obviously, domain.com is updated with your domain, and xxx.xxx.xxx.xxx with your new server IP. You should set the TTL low on the switch-to-backup phase, so that the DNS change is held only temporarily by clients.
You should also create the reverse of this file (switch back to live) for when your main server recovers. The TTL on the switch-to-live file can be longer, as long as you don’t expect your main server to crash again anytime soon!
Now, create a batch file with the command
aws route53 change-resource-record-sets --hosted-zone-id XXXXXX --change-batch file://update.json
Where XXXXX is your zone ID from earlier, and update.json is the file above.
In a real-world example, this batch file should have lists of lots of domains and subdomains to be changed, and a corresponding batch file that reverses all the changes.
This means, that if your live server ever goes down, you boot up the backup, and run this batch, and the DNS will start directing traffic at your backup server. Once the main server is fixed, you can quickly reverse it, so that traffic is returned to your main server again.
One of the benefits of this is that, during the stressful event of an outage, you are not wasting time configuring DNS; you can change everything at once, even non-essential websites or domains that would probably be left to crash during an outage.
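Maintaining those JSON files by hand gets tedious once there are dozens of records, so generating them can itself be scripted. A sketch in Node.js (the domain names and IP here are placeholders):

```javascript
// Build a Route 53 change-batch document that UPSERTs an A record
// for every listed host name, pointing them all at one server IP.
function changeBatch(ip, names, ttl = 300) {
  return {
    Comment: 'Failover to ' + ip,
    Changes: names.map(name => ({
      Action: 'UPSERT',
      ResourceRecordSet: {
        Name: name,
        Type: 'A',
        TTL: ttl,
        ResourceRecords: [{ Value: ip }]
      }
    }))
  };
}
```

Writing the result of changeBatch('203.0.113.5', ['www.domain.com', 'domain.com']) out with JSON.stringify gives the same file as above, and the switch-back file is just the same call with the live IP.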
TweetJS: Display #Tweets on your website with #Javascript only

This is probably the start of a much larger project, but starting today, I launched TweetJS.com, a browser-based Javascript library that lets you display tweets on your website. No authentication needed.
Currently, it has two methods, ListTweetsOnUserTimeline and Search. TweetJs.ListTweetsOnUserTimeline takes two parameters, the username and a callback function, and returns data on tweets posted by a given user. TweetJs.Search takes two parameters, the search query and a callback function, and returns data on tweets that contain the search query.
They are called like this
TweetJs.ListTweetsOnUserTimeline("PetrucciMusic",
function (data) {
console.log(data);
});
Under the hood, this makes an ajax call back to our server, which then makes the call to Twitter, using our authentication keys.
Using #tweetmoasharp to retrieve tweets from a user’s timeline

Since Tweetsharp is no longer being maintained, there is a fork called tweetmoasharp that is actively maintained. It is pretty much a drop-in replacement for Tweetsharp, with minor differences.
Here’s some code that I wrote to retrieve the tweets from a given user’s timeline
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
var service = new TwitterService(_consumerKey, _consumerSecret);
service.AuthenticateWith(_accessToken, _accessTokenSecret);
var xt = service.ListTweetsOnUserTimeline(new ListTweetsOnUserTimelineOptions() { ScreenName = "PetrucciMusic" });
var strJson = service.Response.Response;
This returns tweets from the account https://twitter.com/PetrucciMusic
The TLS1.2 setting may not be needed, depending on what version of .NET you are using, but I was using an old version, which needed it.
#Google Form restyler – edit the #CSS of Google Forms

Google Forms gives you a really easy way of asking your users questions, and storing the answers in a Google Sheet. The big downside is that you lose a lot of control over the way your page is displayed, and you can end up with an iframe in the middle of the page that looks really out of place.
This is where http://googleformrestyler.apixml.net/ can help.
You simply follow these three steps.
Step 1.
Create your Google form, and get the embed script by pressing Send » Send via < > » Copy. Then take the “src” parameter out of the iframe element, which should appear something like this;
https://docs.google.com/forms/..../viewform?embedded=true
Step 2.
Add the following script tag on to your page instead of the iframe element, putting the URL from step 1 into its form attribute (the script reads this attribute to know which form to load):
<script src="http://googleformrestyler.apixml.net/GoogleFormStyler.js" form="https://docs.google.com/forms/..../viewform?embedded=true"></script>
Step 3.
Edit the CSS as you wish
<style>
.freebirdFormviewerViewFooterFooterContainer {
display: none !important
}
</style>
Now, if you want to change the functionality of the page, you can also hack the javascript. For example, by default, when you submit a Google form, it redirects back to Google. If you want to keep the user on your page, and do something else, here’s some Javascript that you can use to change this behaviour;
var ifrm = document.createElement("iframe");
ifrm.setAttribute("name", "swallow");
document.body.appendChild(ifrm);
ifrm.style.display = 'none';
document.forms[0].target="swallow";
var button = document.querySelectorAll("div[role='button']")[0];
button.addEventListener("click", function()
{
alert('Here you can intercept the response');
}, true);
What this javascript does, is that it creates an invisible iframe called “swallow”, then changes the form target to output to this hidden iframe. It then attaches to the click event of the submit button and can run custom javascript once the submit is pressed.
Now, that’s the end-user description. How does this system work? Let’s look at the inner workings, specifically the Javascript that gets loaded, which contains the following script:
(function (XHR) {
var open = XHR.prototype.open;
XHR.prototype.open = function (method, url, async, user, pass) {
this._url = url;
if (url.indexOf("gstatic.com") !== -1 ||
url.indexOf("docs.google.com") !== -1) {
url = "http://googleformrestyler.apixml.net/corsProxy.aspx?base64Url=" + btoa(url);
}
open.call(this, method, url, async, user, pass);
};
})(XMLHttpRequest);
(function() {
var script = document.currentScript ||
/*Polyfill*/ Array.prototype.slice.call(document.getElementsByTagName('script')).pop();
var URL = script.getAttribute('form');
var xhr = new XMLHttpRequest();
xhr.open("GET", URL);
xhr.onload = function() {
document.write(xhr.response);
};
xhr.send();
})();
The first part hijacks the Ajax functionality on the page: if an Ajax request is bound for Google (gstatic.com or docs.google.com), it is routed instead via a page called corsProxy.aspx.
The second part makes an Ajax request to fetch the contents of the URL specified in the form attribute of the script tag, and then document.write’s it out to the page.
CorsProxy.aspx is an ASP.NET script that proxies the request, and returns the result with the CORS header set.
protected void Page_Load(object sender, EventArgs e)
{
    var base64Url = Request.QueryString["base64Url"];
    var data = Convert.FromBase64String(base64Url);
    var url = Encoding.ASCII.GetString(data);
    var wc = new WebClient();
    Response.AppendHeader("Access-Control-Allow-Origin", "*");
    var bResponse = wc.DownloadData(url);
    Response.BinaryWrite(bResponse);
}
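The only data transformation involved is the Base64 round trip: btoa() in the browser encodes the target URL, and the proxy decodes it before fetching. In Node terms:

```javascript
// What btoa(url) produces on the client...
const encodeUrl = url => Buffer.from(url, 'ascii').toString('base64');

// ...and the decode the proxy performs before requesting the real URL.
const decodeUrl = base64Url => Buffer.from(base64Url, 'base64').toString('ascii');
```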
Accepting Direct Debit #payments as an alternative to #Card payments online using .NET

Direct debit is an alternative way to receive payments, and is the norm for long-term subscriptions or utility payments.
GoCardless is a Direct Debit provider, and it can accept payments from the following countries;
- United Kingdom – Bacs Direct Debit
- Eurozone – SEPA Direct Debit
- Sweden – Bg Autogiro
- Denmark – Betalingsservice
- Australia – BECS Direct Debit
- New Zealand – BECS Direct Debit
- Canada – Pre-Authorized Debit
Also, with the incoming new SCA rules applying to EU credit cards from September 14th 2019, this can protect recurring income, since ‘paperless’ direct debit is outside the scope of SCA.
Without further ado, here is a simple demo on how to get this running in Sandbox mode;
tl;dr; here’s the github repo https://github.com/infiniteloopltd/GoCardlessDemo
Create a new .NET console app in VS 2017, then add the GoCardless NuGet package using
Install-Package GoCardless
Create the client as follows;
var client = GoCardlessClient.Create(
ConfigurationManager.AppSettings["GoCardlessAccessToken"],
GoCardlessClient.Environment.SANDBOX
);
Defining your access token in the app.config
<appSettings>
  <add key="GoCardlessAccessToken" value="sandbox_AnONLOlT7Xe4qH_kRNGKBHnp_NWytUpRXAQY4-7j"/>
</appSettings>
Now, you’ll need to generate a URL via the GoCardless API, in order to capture your customer’s details;
var SessionToken = Guid.NewGuid().ToString();
var redirectFlowResponse = client.RedirectFlows.CreateAsync(new RedirectFlowCreateRequest()
{
    Description = "Cider Barrels",
    SessionToken = SessionToken,
    SuccessRedirectUrl = "https://developer.gocardless.com/example-redirect-uri/",
    // Optionally, prefill customer details on the payment page
    PrefilledCustomer = new RedirectFlowCreateRequest.RedirectFlowPrefilledCustomer()
    {
        GivenName = "Tim",
        FamilyName = "Rogers",
        Email = "tim@gocardless.com",
        AddressLine1 = "338-346 Goswell Road",
        City = "London",
        PostalCode = "EC1V 7LQ"
    }
}).Result;
var redirectFlow = redirectFlowResponse.RedirectFlow;
OpenUrl(redirectFlow.RedirectUrl);
If this were a website, then you’d Redirect to the RedirectUrl, but here I’m opening Chrome to display the website as follows;
private static void OpenUrl(string url)
{
    var proc = new Process
    {
        StartInfo =
        {
            FileName = @"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe",
            Arguments = url
        }
    };
    proc.Start();
}
Which shows a page such as the following;

Once confirmed, you are directed to a page such as the following;
https://developer.gocardless.com/example-redirect-uri/?redirect_flow_id=RE0001V1ESQMPFBAMMP9NS869BPJJKKC
Note the redirect_flow_id, this is important, since you need to complete the signup process with some code as follows;
Console.WriteLine("Type the redirect_flow_id");
var redirect_flow_id = Console.ReadLine();
var redirectFlowResponse2 = client.RedirectFlows
    .CompleteAsync(redirect_flow_id,
        new RedirectFlowCompleteRequest
        {
            SessionToken = SessionToken
        }
    ).Result;
Console.WriteLine($"Mandate: {redirectFlowResponse2.RedirectFlow.Links.Mandate}");
Console.WriteLine($"Customer: {redirectFlowResponse2.RedirectFlow.Links.Customer}");
OpenUrl(redirectFlowResponse2.RedirectFlow.ConfirmationUrl);
var mandate = redirectFlowResponse2.RedirectFlow.Links.Mandate;
The ConfirmationUrl looks something like the following;

Now, you have the direct debit mandate, but in order to get money, you need to collect payments against it. This example collects 10 GBP against the newly created direct debit mandate.
var createResponse = client.Payments.CreateAsync(new PaymentCreateRequest()
{
    Amount = 1000,
    Currency = PaymentCreateRequest.PaymentCurrency.GBP,
    Links = new PaymentCreateRequest.PaymentLinks()
    {
        Mandate = mandate,
    },
    Metadata = new Dictionary<string, string>()
    {
        {"invoice_number", "001"}
    },
    IdempotencyKey = SessionToken
}).Result;
Payment payment = createResponse.Payment;
Console.WriteLine(payment.Id);
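Note that Amount is expressed in the currency’s minor unit (pence for GBP), so the 1000 above means £10.00. A trivial sketch of the conversion:

```javascript
// Convert a decimal pounds amount into the integer pence the API expects
function toMinorUnits(pounds) {
  return Math.round(pounds * 100);
}
```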
Now, the one thing about direct debit payments, is that there is a big delay between submitting the direct debit, and getting your money. It’s not like credit cards, where the payment is confirmed as soon as you get confirmation.
So, in order to handle out-of-band events from GoCardless, you need to handle webhooks, which I’ve created as follows (default.aspx)
protected void Page_Load(object sender, EventArgs e)
{
    // start ngrok as follows:
    // ngrok http --host-header=rewrite localhost:12193
    var requestBody = Request.InputStream;
    var requestJson = new StreamReader(requestBody).ReadToEnd();
    var secret = ConfigurationManager.AppSettings["GoCardlessWebhookSecret"];
    var signature = Request.Headers["Webhook-Signature"] ?? "";
    foreach (Event evt in WebhookParser.Parse(requestJson, secret, signature))
    {
        switch (evt.Action)
        {
            case "created":
                System.Diagnostics.Debug.WriteLine("Mandate " + evt.Links.Mandate + " has been created, yay!");
                break;
            case "cancelled":
                System.Diagnostics.Debug.WriteLine("Mandate " + evt.Links.Mandate + " has been cancelled");
                break;
        }
    }
}
I’ve run it in the Visual Studio embedded webserver; it obviously crashes, since there is no POST data. But I then use ngrok to proxy the local webserver to the internet, using the command
ngrok http --host-header=rewrite localhost:12193
Where 12193 is the port that the VS webserver chose.
When ngrok generates a custom domain, I add this into the webhook settings on GoCardless.
Here, for example, is the event when a mandate is cancelled;
{
  "events": [
    {
      "id": "EVTESTFWPNEMJP",
      "created_at": "2019-07-30T16:12:05.407Z",
      "resource_type": "mandates",
      "action": "cancelled",
      "links": {
        "mandate": "index_ID_123"
      },
      "details": {
        "origin": "bank",
        "cause": "bank_account_closed",
        "scheme": "ach",
        "reason_code": "R14",
        "description": "This bank account has been closed as the customer is deceased."
      },
      "metadata": {}
    }
  ]
}
and here is one when a payment has been confirmed.
{
  "events": [
    {
      "id": "EVTESTECKZWPZW",
      "created_at": "2019-07-30T16:15:03.995Z",
      "resource_type": "payments",
      "action": "confirmed",
      "links": {
        "payment": "index_ID_123"
      },
      "details": {
        "origin": "gocardless",
        "cause": "payment_confirmed",
        "description": "Enough time has passed since the payment was submitted for the banks to return an error, so this payment is now confirmed."
      },
      "metadata": {}
    }
  ]
}
Determine the colour of an object in an image using C# #ImageProcessing

We are going to add a new field to the data on CarImagery.com that shows the colour of the vehicle, and here is the image processing script that I used to determine the colour of the image.
First off, here’s the steps in broad strokes.
- Load the image from a URL
- Crop the image to the middle half, to discard some of the background.
- Average all the colours in the selected area
- Match the colour to the closest named colour, like “Red”, “Blue”, “Black” etc.
So, step 1, here’s how I load an image from a URL into a 2D array of Color[,] objects.
public static Color[,] ToColourArray(Uri url)
{
    var wc = new WebClient();
    var bData = wc.DownloadData(url);
    return ToColourArray(bData);
}

public static Color[,] ToColourArray(byte[] img)
{
    Stream sInput = new MemoryStream(img);
    var imgOriginal = Image.FromStream(sInput) as Bitmap;
    var img2D = new Color[imgOriginal.Width, imgOriginal.Height];
    for (var i = 0; i < imgOriginal.Width; i++)
    {
        for (var j = 0; j < imgOriginal.Height; j++)
        {
            var pixel = imgOriginal.GetPixel(i, j);
            img2D[i, j] = pixel;
        }
    }
    return img2D;
}
Now, once I have a 2D array of Color values, I can manipulate them more easily. Here is how I crop the image to the center, something like this;

public static Color[,] CropMiddle(Color[,] img)
{
    var width = img.GetLength(0);
    var height = img.GetLength(1);
    var rectangle = new Rectangle((int)(width * 0.25), (int)(height * 0.25), width / 2, height / 2);
    var imgOutput = new Color[width / 2, height / 2];
    for (var i = rectangle.Left; i < rectangle.Right; i++)
    {
        for (var j = rectangle.Top; j < rectangle.Bottom; j++)
        {
            imgOutput[i - rectangle.Left, j - rectangle.Top] = img[i, j];
        }
    }
    return imgOutput;
}
Now, I average the colours within this area;
public static Color AverageColour(Color[,] img)
{
    var width = img.GetLength(0);
    var height = img.GetLength(1);
    var rectangle = new Rectangle(0, 0, width, height);
    int r = 0, g = 0, b = 0;
    for (var i = rectangle.Left; i < rectangle.Right; i++)
    {
        for (var j = rectangle.Top; j < rectangle.Bottom; j++)
        {
            r += img[i, j].R;
            g += img[i, j].G;
            b += img[i, j].B;
        }
    }
    r /= width * height;
    g /= width * height;
    b /= width * height;
    return Color.FromArgb(r, g, b);
}
Which leaves me with a Dark brown, specifically #45373B
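The averaging step is language-neutral; the same logic sketched in JavaScript, over a 2D array of {r, g, b} pixels:

```javascript
// Integer mean of each channel across every pixel, as in AverageColour above
function averageColour(pixels) {
  let r = 0, g = 0, b = 0, n = 0;
  for (const row of pixels)
    for (const p of row) { r += p.r; g += p.g; b += p.b; n++; }
  return { r: Math.floor(r / n), g: Math.floor(g / n), b: Math.floor(b / n) };
}
```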

Then, we need to map this colour to a set of named colours. There are a number of ways of doing this; the simplest is to measure the Euclidean distance (the square root of the sum of the squared differences) between the R, G and B values. However, I found that comparing the hue and saturation works better in terms of how we perceive colours.
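For comparison, the straightforward Euclidean version looks like this (sketched in JavaScript):

```javascript
// Euclidean RGB distance: the square root of the sum of the
// squared per-channel differences.
function rgbDistance(a, b) {
  return Math.sqrt((a.r - b.r) ** 2 + (a.g - b.g) ** 2 + (a.b - b.b) ** 2);
}
```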
public static Color GetClosestColour(Color baseColour)
{
Color[] colourArray =
{
Color.Red,
Color.Black,
Color.White,
Color.Blue,
Color.Yellow,
Color.Green,
Color.Gray
};
var colors = colourArray.Select(x => new { Value = x, Diff = CustomColourDifference(x, baseColour) }).ToList();
var min = colors.Min(x => x.Diff);
return colors.Find(x => x.Diff == min).Value;
}
Specifically, what I did here was say that if the saturation is less than 10%, then the colour is grayscale (i.e. white, black or gray); otherwise it’s a fully saturated colour (red, blue, green, yellow, etc.). I’ve called this the CustomColourDifference function;
private static double CustomColourDifference(Color c1, Color c2)
{
    double distance = 0;
    if (c2.GetSaturation() < 0.1)
    {
        // Near 0 is grayscale; keep the difference as a double, since
        // saturation is in the 0..1 range and would truncate to zero as an int
        distance = Math.Abs(c1.GetSaturation() - c2.GetSaturation());
    }
    else
    {
        distance = Math.Abs(c1.GetHue() - c2.GetHue());
    }
    return distance;
}
Which gives me Color.Red, which I went on to store in the database; that part is outside the scope of this article.
Free #Alexa Skill for Air Quality in Northern Ireland – #OpenSource

This is a free skill available for Alexa, that you can enable here;
https://smile.amazon.co.uk/Open-Merchant-Account-Ltd-Northern/dp/B07TSRZCMT/ref=sr_1_1
It’s open source, and open to improvements and suggestions by anyone interested in collaborating.
You can ask it questions such as “Alexa, ask air quality northern Ireland what is the level of nitrous oxide in Belfast“, or ask about particulate matter, carbon monoxide, ozone, or sulfur dioxide.
The data itself is courtesy of the Northern Ireland Department of Environment, via AirQualityNI.
At a high level, this skill runs on the Alexa platform, which hooks into Amazon’s Lambda service, running NodeJS. From there, it connects to an API, which is described here; https://github.com/infiniteloopltd/airquality – Which is hosted on a Windows server, and makes the connection on to the AirQualityNI website, and dynamically converts CSV data to the more readable JSON format.
Here is a run down on the constituent parts
The Alexa interaction model is defined here; https://github.com/infiniteloopltd/AirQuality/blob/master/Alexa/interaction.json
This defines the phrases that can be used with the skill, such as
“What is the level of Nitrous oxide in {city}“
Where {city} is of type AMAZON.GB_CITY, this is interpreted as the NitrogenOxideIntent, and passed to the Lambda handler at index.js here;
https://github.com/infiniteloopltd/AirQuality/blob/master/Lambda/index.js
This, then checks the intentName, and if it’s equal to NitrogenOxideIntent, then handleNitrogenOxideIntent is called, which then calls airQuality.NitrogenOxide(..), which is defined here:
https://github.com/infiniteloopltd/AirQuality/blob/master/Lambda/airquality.js
The NitrogenOxide() function calls DescribePollutants(), passing a function to filter the data by city and by these three types of nitrogen-oxide pollutant: “NO”, “NO₂” and “NOₓ as NO₂”. DescribePollutants() then makes an HTTP request to http://airquality.derrycreativescollective.com/, which returns yesterday’s mean values of air pollutants in Northern Ireland. It then formats the response into a spoken string, for example using NamePollutant() to convert chemical formulae like “NO₂” to “Nitrogen Dioxide”.
Please feel free to enable the skill on your Alexa device and give it a good rating. Also feel free to contact me if you’d like to build upon this skill or API.