
Archive for April, 2018

Remote error #logging with #Swift


When you are running your app in the simulator, or on a device attached via USB, you can see your error messages in the debugger. But once your app is out in the wild, or on your client’s phone (or with Apple’s review team), you can no longer easily see your debug messages to understand, technically, what’s happening in your app.

Remote error logging isn’t new, and you could even knock up a simple remote logging mechanism yourself with a little server-side code, but this should make the process super easy for you.

First, create an account on RemoteLogCat.com and you’ll get an API key back; then add the following class to your Swift app:

import Foundation

class Logging
{
    static var Key : String?
    static func Log(Channel : String, Log : String, Completion: ((Bool) -> ())? = nil)
    {
        if let apiKey = Logging.Key
        {
            let strChannel = Channel.addingPercentEncoding(withAllowedCharacters: .urlHostAllowed)!
            let strLog = Log.addingPercentEncoding(withAllowedCharacters: .urlHostAllowed)!
            print("\(Channel):\(Log)")
            let url = URL(string: "http://www.remotelogcat.com/log.php?apikey=\(apiKey)&channel=\(strChannel)&log=\(strLog)")
            let task = URLSession.shared.dataTask(with: url!) { (data, response, error) in
                Completion?(error == nil)
            }
            task.resume()
        }
        else
        {
            print("No API Key set for RemoteLogCat.com API")
            Completion?(false)
        }
    }
}

Then you can simply log errors to the service using this code:

Logging.Key = "……"
Logging.Log(Channel: "macOS", Log: "Hello Log!")

Obviously, Logging.Key only needs to be set once. Also be aware that this is an asynchronous method, so if your application terminates immediately afterwards, nothing will be logged.

You can get a completion handler by adding:

Logging.Log(Channel: "macOS", Log: "Hello Log!") {
    print("Success: \($0)")
}

The argument to the completion handler is a Boolean indicating success or failure (e.g. no internet connection).
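
If you are logging from somewhere short-lived, such as a command-line tool, the completion handler is also a handy place to make sure the request has actually finished before the process exits. Here is a minimal sketch; the DispatchSemaphore wait is my own addition, not part of the RemoteLogCat API:

import Foundation

Logging.Key = "YOUR-API-KEY" // placeholder – use your own RemoteLogCat.com key

// Block until the log request completes, so a short-lived process
// doesn't terminate before the HTTP call has been sent.
let semaphore = DispatchSemaphore(value: 0)
Logging.Log(Channel: "macOS", Log: "Hello Log!") { success in
    print("Success: \(success)")
    semaphore.signal()
}
_ = semaphore.wait(timeout: .now() + 10)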

 

Categories: Uncategorized

#Bing Image Search using Swift


Here’s a quick code snippet showing how to use Microsoft’s Bing Image Search API (AKA the Cognitive image API) with Swift, using the Alamofire and SwiftyJSON CocoaPods. You’ll need to get an API key for the Bing Image Search API and replace \(Secret.subscriptionKey) below.
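
For reference, Secret here is just a place to keep the key out of the main source file – something along these lines (the struct itself is my own placeholder, not part of any SDK):

struct Secret
{
    // Replace with your own Bing Image Search API key
    static let subscriptionKey = "YOUR-BING-API-KEY"
}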

static func Search(keyword : String, completion: @escaping (UIImage, String) -> Void)
{
    let escapedString = keyword.addingPercentEncoding(withAllowedCharacters: .urlHostAllowed)!
    let strUrl = "https://api.cognitive.microsoft.com/bing/v7.0/images/search?q=\(escapedString)&count=1&subscription-key=\(Secret.subscriptionKey)"
    Alamofire.request(strUrl).responseJSON { (response) in
        if response.result.isSuccess {
            let searchResult : JSON = JSON(response.result.value!)
            // Handle image not found
            guard let imageResult = searchResult["value"][0]["contentUrl"].string else {
                print("No image found for \(keyword)")
                return
            }
            print(imageResult)
            Alamofire.request(imageResult).responseData(completionHandler: { (response) in
                if response.result.isSuccess {
                    let image = UIImage(data: response.result.value!)
                    completion(image!, imageResult)
                }
                else {
                    print("Image Load Failed! \(String(describing: response.result.error))")
                }
            })
        }
        else {
            print("Bing Search Failed! \(String(describing: response.result.error))")
        }
    }
}

It’s called like so:

Search(keyword: "Kittens") { (image, url) in
    imageView.image = image
}

Categories: Uncategorized

#AI Image Recognition with #CoreML and #Swift


Being able to recognise an object in an image is a super-easy thing for humans to do, but for machines it’s really difficult. With Apple’s new CoreML framework, though, it’s now possible to do this on-device, even when offline. The trick is to download the InceptionV3 model from Apple’s machine learning website and import the file into your app. This pre-trained neural network can recognise thousands of everyday objects from a photo.

This code is adapted from the London App Brewery’s excellent Swift course on Udemy, and the complete source code is available on GitHub here: https://github.com/infiniteloopltd/SeaFood

Here’s the code:

import UIKit
import CoreML
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBOutlet weak var imageView: UIImageView!
    
    let imagePicker = UIImagePickerController()
    
    
    override func viewDidLoad() {
        super.viewDidLoad()
        imagePicker.delegate = self
        imagePicker.sourceType = .camera
        imagePicker.allowsEditing = false
    }
    
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    
        let userPickedimage = info[UIImagePickerControllerOriginalImage] as? UIImage
        imageView.image = userPickedimage
        guard let ciImage = CIImage(image: userPickedimage!) else
        {
            fatalError("failed to create ciImage")
        }
        
        imagePicker.dismiss(animated: true) {
            self.detect(image: ciImage)
        }
    }
    
    func detect(image : CIImage)
    {
        guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else
        {
            fatalError("Failed to covert ML model")
        }
        
        let request = VNCoreMLRequest(model: model) { (request, error) in
            guard let results = request.results as? [VNClassificationObservation] else
            {
                fatalError("Failed to cast to VNClassificationObservation")
            }
            
            print(results)
            
           
            self.ShowMessage(title: "I see a...", message: results[0].identifier, controller: self)
        }
        
        let handler = VNImageRequestHandler(ciImage: image)
        do
        {
            try handler.perform([request])
        }
        catch
        {
            print("\(error)")
        }
        
    }
    
    func ShowMessage(title: String, message : String, controller : UIViewController)
    {
        let cancelText = NSLocalizedString("Cancel", comment: "")
        
        let alertController = UIAlertController(title: title, message: message, preferredStyle: .alert)
        
        let cancelAction = UIAlertAction(title: cancelText, style: .cancel, handler: nil)
        
        alertController.addAction(cancelAction)
        
        controller.present(alertController, animated: true, completion: nil)
    }

    @IBAction func cameraTapped(_ sender: UIBarButtonItem) {
        self.present(imagePicker, animated: true, completion: nil)
    }
    
}

Categories: Uncategorized

#3dsecure #VbV #SecureCode handling with @Cardinity in #PHP


I recently got set up with Cardinity (a PSP), and I was learning their API using their PHP SDK at https://github.com/cardinity/cardinity-sdk-php/

When I moved from test to live, I discovered that the result of my card payment was not success or failed, but pending – because 3D Secure, otherwise known as Verified by Visa or Mastercard SecureCode, was activated on the card.

What happens is that you need to capture the 3D Secure URL by calling

$payment->getAuthorizationInformation()->getUrl()

and the data to be posted in the PaReq parameter by calling

$payment->getAuthorizationInformation()->getData()

You also need to have a TermUrl – i.e. your callback URL – and MD, which I used to carry the payment ID.
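
In practice that means the customer’s browser has to POST those values to the issuer’s page. Here is a rough sketch of that redirect step – the callback URL is a placeholder, and using getId() on the pending payment to populate MD is my assumption, not something from the SDK documentation:

<?php
// $payment is the pending result of the initial Payment\Create call
$acsUrl  = $payment->getAuthorizationInformation()->getUrl();
$paReq   = $payment->getAuthorizationInformation()->getData();
$termUrl = 'https://www.example.com/3ds-callback.php'; // your callback URL (TermUrl)
$md      = $payment->getId();                          // carry the payment ID in MD
?>
<form id="threeds" method="post" action="<?php echo htmlspecialchars($acsUrl); ?>">
    <input type="hidden" name="PaReq" value="<?php echo htmlspecialchars($paReq); ?>" />
    <input type="hidden" name="TermUrl" value="<?php echo htmlspecialchars($termUrl); ?>" />
    <input type="hidden" name="MD" value="<?php echo htmlspecialchars($md); ?>" />
</form>
<script>document.getElementById("threeds").submit();</script>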

Once you get your callback, you need to pull the MD and PaRes out of the form data – I’ve put them into $MD and $PaRes variables respectively – and then you call:

require_once __DIR__ . '/vendor/autoload.php';

use Cardinity\Client;
use Cardinity\Method\Payment;

$client = Client::create([
    'consumerKey' => '…',
    'consumerSecret' => '…',
]);

$method = new Payment\Finalize($MD, $PaRes);
$payment = $client->call($method);
$serialized = serialize($payment);
echo($serialized);

… And you should get an object like the following back:

{
    "id": "……",
    "amount": "10.00",
    "currency": "EUR",
    "created": "2018-04-12T14:28:40Z",
    "type": "authorization",
    "live": true,
    "status": "approved",
    "order_id": "1234",
    "description": "test",
    "country": "GB",
    "payment_method": "card",
    "payment_instrument":
    {
        "card_brand": "MasterCard",
        "pan": "….",
        "exp_year": 2021,
        "exp_month": 9,
        "holder": "Joe Bloggs"
    }
}
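
At that point you would check the status before fulfilling the order – something like the following, assuming the SDK’s result object exposes a getStatus() getter alongside the getters used above:

<?php
if ($payment->getStatus() === 'approved') {
    // 3D Secure passed and the payment is authorised – fulfil the order
} else {
    // still pending or declined – don't ship anything yet
}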

Once this code is finished up, we will replace the PayPal option on AvatarAPI.com with this Cardinity interface.

Categories: Uncategorized

Quick #SQL #Performance fix for #slow queries


Adding indexes to speed up slow queries is nothing new, but knowing exactly which index to add is sometimes a bit of a dark art.

This feature was added in SQL Server Management Studio 2008, so it’s not new, but it turned one query that took 10 seconds to run into one that runs in under a second, so I can’t recommend this feature enough. The 99.97% improvement in the screenshot was real.

How does it work? You just press "Display Estimated Execution Plan" over your slow query, and if the missing index hint appears in green, apply it! You just need to give the index a name.
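
The hint expands into a CREATE INDEX script with a placeholder name; you swap in a real name and run it. A sketch of what that looks like – the table and column names below are purely illustrative, not from the query in the screenshot:

-- Script produced by the missing index hint (illustrative table/columns);
-- the only change needed is replacing the placeholder with a real index name.
CREATE NONCLUSTERED INDEX [IX_Orders_CustomerId_OrderDate]
ON [dbo].[Orders] ([CustomerId], [OrderDate])
INCLUDE ([TotalAmount]);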

Obviously, you can’t go overboard on applying indexes, since too many of them can lead to slower inserts and updates, and of course more disk space usage.

 

 

Categories: Uncategorized