Archive for February, 2021

Bypass Google #Recaptcha using #CapMonster

Google Recaptcha is a system designed to stop bots from interacting with a website, and only allow humans through. However, as with most such systems, there is a workaround. In this demo I’m using CapMonster’s API, which works, but is in my opinion quite slow: the sample request here took 52 seconds, so it is probably unsuited to real-time processing.

I used the NuGet package created by “Mohammed Boukhlouf”, and the full source is here; – without my client ID, of course.

Here is the gist of the code;

var start = DateTime.Now;
var client = new CapMonsterClient(secret);

// Describe the Recaptcha V3 task to be solved by CapMonster
var captchaTask = new RecaptchaV3TaskProxyless
{
    WebsiteUrl = "",
    WebsiteKey = "6Le0xVgUAAAAAIt20XEB4rVhYOODgTl00d8juDob",
    MinScore = 0.3,
    PageAction = "myverify"
};

// Create the task and get the task id
var taskId = client.CreateTaskAsync(captchaTask).Result;
Console.WriteLine("Created task id : " + taskId);

// Wait for the solution
var solution = client.GetTaskResultAsync<RecaptchaV3TaskProxylessResult>(taskId).Result;

// Recaptcha response to be used in the form
var recaptchaResponse = solution.GRecaptchaResponse;
Console.WriteLine("Solution : " + recaptchaResponse);

// Post the token back to the page under test
var web = new WebClient {Encoding = Encoding.UTF8};
var result = web.UploadString("", "token=" + recaptchaResponse);

// Extract the JSON between the <pre> tags in the response
var idxStart = result.IndexOf("<pre>", StringComparison.Ordinal) + "<pre>".Length;
var idxEnd = result.IndexOf("</pre>", StringComparison.Ordinal);
var jsonResult = result.Substring(idxStart, idxEnd - idxStart);

var end = DateTime.Now;
var duration = end - start;

Categories: Uncategorized

Optimizing index_merge in #MySQL for performance

If you have a query in MySQL that uses an index merge (i.e. you are querying on multiple unconnected indexes on the same table), here is a performance tweak that, in my case, took a query from 40 seconds to 0.254 seconds, simply by reorganising it with a subquery.

So, my original query was like this:

SELECT *
FROM   data
WHERE  ( l1 = 'no-match' )
        OR ( l2 = 'X'
             AND ( f1 = 'Y'
                    OR p = 'Z' ) )

Where “no-match” is a value that didn’t match anything in the table, and X, Y and Z were values that did match something. In the execution plan, this was being executed as an index_merge, since each of the columns had its own, unconnected index, but it had a very high cost;

Type    Name                  Cost       Rows
table   data (index_merge)    92689.48   84079

However, by re-writing the query as a sub-query as follows;

select * from (
	select * from data where
   		l1 = 'no-match' OR l2 = 'X'
) data2
where f1 = 'Y' OR p = 'Z'

The cost of the index_merge was drastically reduced;

Type    Name                  Cost       Rows
table   data (index_merge)    1189.36    918

And most importantly, the execution time was reduced to a fraction of the original. I’d also argue that the SQL is a bit easier to read.
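To confirm which plan MySQL picks, you can run EXPLAIN against both forms of the query. This is a sketch using the table and column names from the example above; your actual output columns depend on your MySQL version:

```sql
-- Original form: expect type = index_merge over the l1 / l2 / f1 / p indexes
EXPLAIN
SELECT * FROM data
WHERE (l1 = 'no-match')
   OR (l2 = 'X' AND (f1 = 'Y' OR p = 'Z'));

-- Rewritten form: the inner query narrows the candidate rows first,
-- so the outer filter only runs over a much smaller derived table
EXPLAIN
SELECT * FROM (
    SELECT * FROM data WHERE l1 = 'no-match' OR l2 = 'X'
) data2
WHERE f1 = 'Y' OR p = 'Z';
```

Comparing the estimated row counts between the two plans is a quick way to check the rewrite is actually helping before timing it.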


Create your own Flash Briefing with #Alexa #Skills

So, this started because I wanted to listen to Italian news (RAI) on an English (UK) Amazon Alexa device, and that news source was not available. A quick Google found someone who had released a NodeJS-based app that captures the feed from RAI and reformats the JSON into a format compatible with Alexa. I forked this repo here;

Then, using Heroku, I deployed the Github repo onto a temporary domain, which you can see here;

Then, I headed over to the Amazon developer site, and clicked on Alexa -> Create Alexa Skills -> Console

Then Create Skill -> Enter a Name (Italy News) -> Select a Language (English (UK));

This should match your Alexa device exactly. English US and English UK are different !

Select Flash Briefing -> Create Skill

Add an error message, like “Sorry, failed”

Press “Add new Feed”

Fill in the fields, like preamble, name and update frequency. The content type should be audio, and the feed URL should be the Heroku URL above.
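For reference, the feed URL needs to return JSON in Amazon’s Flash Briefing feed format, which is what the forked NodeJS app produces from the RAI feed. A minimal audio item looks something like this (the uid, URLs and text below are placeholder values, not taken from the real feed):

```json
{
  "uid": "urn:uuid:00000000-0000-0000-0000-000000000000",
  "updateDate": "2021-02-01T10:00:00.0Z",
  "titleText": "Italy News",
  "mainText": "",
  "streamUrl": "https://example.com/news/latest.mp3",
  "redirectionUrl": "https://example.com/news"
}
```

For an audio briefing, mainText can be left empty and streamUrl must point to the audio file Alexa will play.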

Then, from your Alexa app on your phone, click More -> Settings -> Flash Briefing, and your new source should appear in the list.

Now, you’ll have a new news source when you say “Play News” to Alexa.


Storing temporary data in #Redis in C#

Redis is an in-memory database, designed for ultra-fast data retrieval. It’s great for short-lived data that perhaps you only need to keep for the duration of a user session or transaction.

The AWS version of Redis, under the name “ElastiCache”, can only be accessed from within the AWS network, or more specifically from within its VPC. This is obviously designed to enforce the recommended usage: you get no performance advantage if your web server needs to traverse the Internet to reach your Redis server.

Here, I’ve used a RedisCloud-hosted Redis database rather than a local installation. It has the advantage that it can be accessed from anywhere: good for development, but not designed for production. The key is in plain text, so feel free to mess about with the server.

So, this was my use case: I wanted to store data temporarily, just for 2 seconds, and then delete it. It’s actually rather non-trivial to do this in a scalable way with a standard MySQL database.
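For comparison, here is a sketch of what that MySQL workaround might look like (table, column and event names are all illustrative). Every read has to filter out expired rows itself, and a scheduled event is needed just to reclaim the space, which is exactly the bookkeeping a Redis TTL does for free:

```sql
-- Stamp each row with an expiry time
CREATE TABLE session_data (
  k          VARCHAR(64) PRIMARY KEY,
  v          TEXT,
  expires_at DATETIME NOT NULL
);

-- Readers must filter out expired rows on every query
SELECT v FROM session_data WHERE k = 'mykey' AND expires_at > NOW();

-- A scheduled event (requires event_scheduler=ON) purges dead rows
CREATE EVENT purge_expired
  ON SCHEDULE EVERY 1 MINUTE
  DO DELETE FROM session_data WHERE expires_at <= NOW();
```

Even then, the purge only runs once a minute, so the table can hold stale rows between runs; Redis expires keys with sub-second precision.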

So, step 1 is to import a client library, here I picked StackExchange.Redis;

Install-Package StackExchange.Redis

Now, I connect to the Redis server, and write a value that will expire in 2 seconds;

const string endpoint = ",password=JU455eaOlQZjVYExorUl1oFouO509Ptu";
var redis = ConnectionMultiplexer.Connect(endpoint);
var db = redis.GetDatabase();

const string setValue = "abcdefg";
db.StringSet("mykey", setValue, TimeSpan.FromSeconds(2));

If I read the value back instantly, I get the expected value of “abcdefg”. If I then wait 3 seconds and read again, I get null;

string getValue = db.StringGet("mykey");
Console.WriteLine(getValue); // writes: "abcdefg"

Thread.Sleep(3000); // wait for the 2-second expiry to pass

string getValue2 = db.StringGet("mykey");
Console.WriteLine(getValue2); // writes nothing, the key has expired

The code is available to clone here;
