
Creating a Microsoft Azure Cognitive Service Face API Application in Half an Hour


Microsoft Azure Cognitive Services is a set of APIs and services used to make apps interactive and intelligent. And you can use it to make a Face API app in 30 minutes.

· AI Zone ·

In this article, I am going to show you how to create a Microsoft Azure Cognitive Services Face API application in half an hour. Microsoft Azure Cognitive Services is a set of APIs and services that developers can use to make their applications more interactive and intelligent. It was formerly known as Project Oxford, and Microsoft uses machine learning in the background to power these APIs. So far, the following APIs are available:

  • Emotion and video detection.
  • Facial recognition.
  • Speech recognition.
  • Vision recognition.
  • Speech and language understanding.

    Prerequisites

  • An Azure subscription.
  • Visual Studio.

    If you don’t have an Azure subscription, please sign up for one here.

    Download the Source Code

    You can always download the source code from here:

  • AzureCognitiveServicesFaceAPI

    Create the Face API in Azure Portal

    To know how to create a Face Recognition API in Azure Portal, please see this video:

    Before we start coding, please make sure that you add a reference to Microsoft.ProjectOxford.Face from the NuGet package manager.



    Using the Code

    To get started, open your Visual Studio and create a new WPF application.

    Add an image control and a button control to MainWindow.xaml:

            <Image Stretch="UniformToFill" x:Name="FaceImage" HorizontalAlignment="Left" Margin="0,0,0,30"/>
            <Button x:Name="BtnUpload" VerticalAlignment="Bottom" Content="Upload the image" Margin="20,5" Height="20" Click="BtnUpload_Click"/>

    Create an instance of IFaceServiceClient:

    private readonly IFaceServiceClient _faceServiceClient = new FaceServiceClient("key", "end point"); 

    If you selected Southeast Asia as the location while creating the Face API in the portal, it is mandatory to pass the second parameter, your regional endpoint.
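    For example, a client for a Face API resource created in the Southeast Asia region could be constructed as follows. The subscription key here is a placeholder (copy your own from the Keys blade in the Azure portal), and the endpoint shown is the regional Face API v1.0 URL format used by Microsoft.ProjectOxford.Face:

```csharp
// "your-subscription-key" is a placeholder, not a real key.
private readonly IFaceServiceClient _faceServiceClient = new FaceServiceClient(
    "your-subscription-key",
    "https://southeastasia.api.cognitive.microsoft.com/face/v1.0");
```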

    Load the image into the image control:

    private async void BtnUpload_Click(object sender, RoutedEventArgs e)
    {
        // Let the user pick a JPEG image
        var openFrame = new Microsoft.Win32.OpenFileDialog { Filter = "JPEG Image(*.jpg)|*.jpg" };
        var result = openFrame.ShowDialog(this);
        if (!(bool)result) return;
        // Load the selected file into the image control
        var filePath = openFrame.FileName;
        var fileUri = new Uri(filePath);
        var bitMapSource = new BitmapImage();
        bitMapSource.BeginInit();
        bitMapSource.CacheOption = BitmapCacheOption.None;
        bitMapSource.UriSource = fileUri;
        bitMapSource.EndInit();
        FaceImage.Source = bitMapSource;
    }

    Detect the faces in the image:

    /// <summary>
    /// Returns the rectangles of the faces found in the image
    /// </summary>
    /// <param name="filePath"></param>
    /// <returns></returns>
    private async Task<FaceRectangle[]> DetectTheFaces(string filePath)
    {
        try
        {
            using (var imgStream = File.OpenRead(filePath))
            {
                var faces = await _faceServiceClient.DetectAsync(imgStream);
                var faceRectangles = faces.Select(face => face.FaceRectangle);
                return faceRectangles.ToArray();
            }
        }
        catch (Exception)
        {
            // If the detection call fails, treat it as no faces found
            return new FaceRectangle[0];
        }
    }

    Draw rectangles around the faces found:

    // Detecting the faces
    Title = "Detecting....";
    FaceRectangle[] facesFound = await DetectTheFaces(filePath);
    Title = $"Found {facesFound.Length} faces";
    // Draw a rectangle around each face
    if (facesFound.Length <= 0) return;
    var drwVisual = new DrawingVisual();
    var drwContext = drwVisual.RenderOpen();
    drwContext.DrawImage(bitMapSource, new Rect(0, 0, bitMapSource.Width, bitMapSource.Height));
    var dpi = bitMapSource.DpiX;
    var resizeFactor = 96 / dpi;
    foreach (var faceRect in facesFound)
    {
        drwContext.DrawRectangle(Brushes.Transparent, new Pen(Brushes.Blue, 6),
            new Rect(faceRect.Left * resizeFactor, faceRect.Top * resizeFactor,
                faceRect.Width * resizeFactor, faceRect.Height * resizeFactor));
    }
    drwContext.Close();
    // Render the drawing visual back into the image control
    var renderToImageCtrl = new RenderTargetBitmap((int)(bitMapSource.PixelWidth * resizeFactor),
        (int)(bitMapSource.PixelHeight * resizeFactor), 96, 96, PixelFormats.Pbgra32);
    renderToImageCtrl.Render(drwVisual);
    FaceImage.Source = renderToImageCtrl;


    Now run your application and upload an image. It can be a picture of a single person or a group.


    Azure Face Recognition API Output

    Check the Statistics in Azure Portal

    If everything goes fine, you will be able to see the statistics in your Azure portal, as shown below.


    Face API Metric in Azure Portal

    If you would like to compare the images or identify the person in an image, please read the documentation here.
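    As a rough sketch of what face comparison looks like with this client library, the snippet below detects one face in each of two images and asks the service whether they belong to the same person. The helper name `AreSamePersonAsync` is my own; the `DetectAsync` and `VerifyAsync` calls are from Microsoft.ProjectOxford.Face, and the sketch assumes each image contains at least one detected face:

```csharp
// Compare the first detected face in each image via the Verify endpoint
private async Task<bool> AreSamePersonAsync(string firstPath, string secondPath)
{
    using (var firstStream = File.OpenRead(firstPath))
    using (var secondStream = File.OpenRead(secondPath))
    {
        var firstFaces = await _faceServiceClient.DetectAsync(firstStream);
        var secondFaces = await _faceServiceClient.DetectAsync(secondStream);
        // Assumes each image contains at least one detected face
        var result = await _faceServiceClient.VerifyAsync(
            firstFaces[0].FaceId, secondFaces[0].FaceId);
        return result.IsIdentical;
    }
}
```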

    ai, face api, microsoft azure, microsoft cognitive services, tutorial, face recognition

    Published at DZone with permission of

    Opinions expressed by DZone contributors are their own.
