
Getting Started With Microsoft's Face Recognition API and Angular

In this blog post, I want to give you some guidance on the first steps of starting with Microsoft’s Face Recognition API and using it with Angular and the Angular CLI.


Preparation

The first thing you need is a Face API key, which you can get from the Face API page of Microsoft's Cognitive Services. You can log in with your Microsoft, LinkedIn, GitHub, or Facebook account to get one.

The complete source code for this blog post can be found on my GitHub. It is based on Angular CLI and Bootstrap 4.

The Goal

The goal of this blog post, or rather of this private project of mine, was to consume the face recognition API: capture a picture with the camera of your laptop/computer, send it to the API for analysis, and display the results.

Questions to Clarify

The questions I faced prior to this project were:

  • How do I communicate with the Face API?
  • How can I take a picture and POST it?
  • How can I read the response and display it?
  • How can I do all this with Angular?

Let's Get Coding

The first thing I took a look at was the communication with the API. Let's look at the example from https://docs.microsoft.com/de-de/azure/cognitive-services/Face/quickstarts/javascript:

// Replace <subscription key> with a valid key from the "Preparation" step.
var subscriptionKey = "<subscription key>";

var uriBase = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect";

// Request parameters.
var params = {
    "returnFaceId": "true",
    "returnFaceLandmarks": "false",
    "returnFaceAttributes": "age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise",
};

// Display the image.
var sourceImageUrl = document.getElementById("inputImage").value;
document.querySelector("#sourceImage").src = sourceImageUrl;

// Perform the REST API call.
$.ajax({
    url: uriBase + "?" + $.param(params),

    // Request headers.
    beforeSend: function(xhrObj){
        xhrObj.setRequestHeader("Content-Type","application/json");
        xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key", subscriptionKey);
    },

    type: "POST",

    // Request body.
    data: '{"url": ' + '"' + sourceImageUrl + '"}',

})
.done(function(data) {
    // Work with the JSON response here.
    console.log(data);
});

We fire a POST request to an endpoint with specific parameters and headers, and with a body that is a JSON object holding the source image URL in a property called "url" (passed via jQuery's `data` option). So we can easily do that with Angular as well:

private getHeaders(subscriptionKey: string) {
    let headers = new HttpHeaders();
    // 'application/octet-stream' because we will send the raw image bytes,
    // not a JSON object with a URL.
    headers = headers.set('Content-Type', 'application/octet-stream');
    headers = headers.set('Ocp-Apim-Subscription-Key', subscriptionKey);

    return headers;
}

private getParams() {
    const httpParams = new HttpParams()
        .set('returnFaceId', 'true')
        .set('returnFaceLandmarks', 'false')
        .set(
            'returnFaceAttributes',
            'age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise'
        );

    return httpParams;
}
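
For context, these two private methods (and the ones that follow) live in an injectable service which gets Angular's HttpClient injected. A minimal skeleton could look like this; consider it a sketch, as the exact shape of the service in the repo may differ:

import { Injectable } from '@angular/core';
import { HttpClient, HttpHeaders, HttpParams } from '@angular/common/http';

@Injectable()
export class FaceRecognitionService {
    constructor(private httpClient: HttpClient) {}

    // getHeaders(), getParams(), makeblob(), and scanImage() from
    // this post go in here.
}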

But if we take a picture from the camera with a service, we do not save it and get an image file directly; instead, we get a base64 representation of the image.

const context = canvasElement.getContext('2d');

// Draw the current video frame onto the canvas...
context.drawImage(
    videoElement,
    0,
    0,
    videoElement.videoWidth,
    videoElement.videoHeight
);

// ...and read it back as a base64-encoded data URL.
const url = canvasElement.toDataURL('image/png'); // base64 here
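
Before that snapshot can be taken, the camera service has to stream the webcam into the video element. Here is a minimal sketch of how that part typically works with getUserMedia; this is an assumption about the camera service, not the repo's exact code:

// Sketch: start the webcam and pipe the stream into a <video> element.
async function startCamera(videoElement: HTMLVideoElement): Promise<void> {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    videoElement.srcObject = stream;
    await videoElement.play();
}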

So the challenge here was not to send a URL in the body to the Face API, but the base64 representation of the image. Sending blobs to an API is not difficult with the new HttpClient Angular provides us. I tried and searched a bit, found some Stack Overflow answers, modified them a little, and wrapped them in a service, so this method here takes care of generating the correct blob:

private makeblob(dataURL) {
    const BASE64_MARKER = ';base64,';

    // Split 'data:image/png;base64,...' into content type and payload.
    const parts = dataURL.split(BASE64_MARKER);
    const contentType = parts[0].split(':')[1];

    // Decode the base64 payload into raw bytes.
    const raw = window.atob(parts[1]);
    const rawLength = raw.length;
    const uInt8Array = new Uint8Array(rawLength);

    for (let i = 0; i < rawLength; ++i) {
        uInt8Array[i] = raw.charCodeAt(i);
    }

    return new Blob([uInt8Array], { type: contentType });
}
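
Putting the two pieces together, the data URL from the canvas goes straight into this helper (a small usage sketch):

// Usage sketch: turn the canvas snapshot into a Blob for the request body.
const dataUrl = canvasElement.toDataURL('image/png');
const imageBlob = this.makeblob(dataUrl); // Blob of type 'image/png'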

As we now have the headers, the parameters, and the body, we can set up a simple HTTP call to the API with Angular, passing the subscription key and the base64 representation of the image:

scanImage(subscriptionKey: string, base64Image: string) {
    const headers = this.getHeaders(subscriptionKey);
    const params = this.getParams();
    const blob = this.makeblob(base64Image);

    return this.httpClient.post<FaceRecognitionResponse>(
        environment.endpoint,
        blob,
        {
            params,
            headers
        }
    );  
}
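
The `environment.endpoint` used here comes from Angular's environment file. Based on the quickstart above, it points at the detect operation; the region prefix depends on where your key was created. A sketch, not the repo's exact file:

// environment.ts (sketch; adjust the region to match your subscription)
export const environment = {
    production: false,
    endpoint: 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect'
};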

You can see the full service here.

Now that that's set up, let’s take a look at the response we get back from the API.

[
    {
        "faceId": "...",
        "faceRectangle": {
            ...
        },
        "faceAttributes": {
            "smile": 0,
            "headPose": {
                ...
            },
            "gender": "male",
            "age": 32.1,
            "facialHair": {
                ...
            },
            "glasses": "NoGlasses",
            "emotion": {
                ...
            },
            "blur": {
                ...
            },
            "exposure": {
                    ...
            },
            "noise": {
                ...
            },
            "makeup": {
                ...
            },
            "accessories": [],
            "occlusion": {
                ...
            },
            "hair": {
                ...
            }
        }
    }
]
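
To get type support for this response, we can describe it with TypeScript interfaces. Here is a trimmed-down sketch derived from the JSON above; the full `FaceRecognitionResponse` type in the repo also covers the properties elided here:

// Trimmed-down sketch; the elided objects above (headPose, emotion,
// hair, ...) would get their own interfaces in the full type.
export interface FaceAttributes {
    smile: number;
    gender: string;
    age: number;
    glasses: string;
}

export interface DetectedFace {
    faceId: string;
    faceRectangle: { top: number; left: number; width: number; height: number };
    faceAttributes: FaceAttributes;
}

export type FaceRecognitionResponse = DetectedFace[];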

That's a lot of information we get back as JSON, and with the interfaces above we can easily cast it to a TypeScript object to work with it. We then ask the camera service for the photo and use switchMap to hand the base64 image to the face recognition service, which returns the result:

@Component(...)

faceApiResponse: Observable<FaceRecognitionResponse>;

processImage() {
    if (!this.subscriptionKey) {
        return;
    }

    this.faceApiResponse = this.cameraService.getPhoto().pipe(
        switchMap(base64Image => {
            this.imageString = base64Image;
            return this.faceRecognitionService.scanImage(
                this.subscriptionKey,
                base64Image
            );
        })
    );
}

We can then pass this response to a component which displays it in a table format:

<app-table [faceApiResponse]="response"></app-table>
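
The `response` variable in that binding comes from unwrapping the `faceApiResponse` observable in the template, for example with the async pipe (a sketch; the markup in the repo may differ):

<div *ngIf="faceApiResponse | async as response">
    <app-table [faceApiResponse]="response"></app-table>
</div>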

You can browse all the source code in the repository here.

The Result

The result is an application that takes a picture, sends it to an API, and then displays the result in a table; you can also see the full response if you want. Have fun!

AngularFaceRecognitionApi

Cheers,

Fabian
