
Microsoft Cognitive Services: Using a Face API SDK


Learn how to use Microsoft's C#/Windows NuGet package to ease your way to using the Cognitive Services Face API in your applications.


Once you have a Microsoft Cognitive Services Face API set up (see my previous post), it's very easy to consume because it's based on a familiar JSON-based REST API. That means you can either access the service by rolling your own code or use one of the existing SDKs, which are available for the most common platforms: Windows, iOS, Android, and Python.
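If you do want to roll your own code, the raw REST call is a single POST of the image bytes with your key in a header. A minimal sketch, assuming the West Europe endpoint and a plain .NET context (the class and method names here are just for illustration):

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class RawFaceApiClient
{
    // Use the endpoint for the region you registered your service in.
    private const string DetectEndpoint =
        "https://westeurope.api.cognitive.microsoft.com/face/v1.0/detect";

    public static async Task<string> DetectAsync(string imagePath, string apiKey)
    {
        using (var http = new HttpClient())
        using (var content = new ByteArrayContent(File.ReadAllBytes(imagePath)))
        {
            // The subscription key authenticates every request.
            http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);
            content.Headers.ContentType =
                new MediaTypeHeaderValue("application/octet-stream");

            var response = await http.PostAsync(DetectEndpoint, content);
            response.EnsureSuccessStatusCode();

            // The service answers with a JSON array, one element per face.
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

Each element of the returned JSON array carries a faceRectangle (top, left, width, height), which is exactly the data the SDK wraps into typed objects for you.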

For example, if you're developing a Windows application, you'd use the Microsoft.ProjectOxford.Face NuGet package.

In fact, let's build a fresh Windows UWP app that uses the Face API Cognitive Service.

  1. Fire up Visual Studio (2017!) and start a new Windows Universal > Blank App project.

  2. Go to NuGet Package Manager and update all existing packages.

  3. Browse for and install the Microsoft.ProjectOxford.Face package.

  4. Open MainPage.xaml and put this short piece of XAML UI inside the Page tag:

    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="*"/>
        </Grid.RowDefinitions>
        <TextBox x:Name="FaceApiKeyBox" Header="Face API Key" Grid.Row="0" />
        <StackPanel Orientation="Horizontal" Grid.Row="1">
            <Button Content="Browse" Click="OnBrowseForImage" />
        </StackPanel>
        <Viewbox Grid.Row="2">
            <Grid VerticalAlignment="Center">
                <Image x:Name="Image" Stretch="None" />
                <Grid x:Name="FacesGrid"/>
            </Grid>
        </Viewbox>
    </Grid>
  5. Add code for that single event handler. I'll break it down into smaller pieces.

    5a. Pick a file. If a file wasn't picked, do nothing.

    private async void OnBrowseForImage(object sender, RoutedEventArgs e)
    {
        var file = new FileOpenPicker();
        file.FileTypeFilter.Add(".jpg");
        file.FileTypeFilter.Add(".jpeg");
        file.FileTypeFilter.Add(".gif");
        file.FileTypeFilter.Add(".bmp");
        file.FileTypeFilter.Add(".png");
        file.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
        file.ViewMode = PickerViewMode.Thumbnail;
    
        var fileName = await file.PickSingleFileAsync();
        if (fileName == null) return;

    5b. Open the selected file, put its contents into the visual control named Image, and reuse the same file stream to call the Face API. Note how FaceServiceClient is instantiated, providing both a key and an endpoint. Providing the endpoint may currently be optional, depending on which endpoint you registered your service with.

    Face[] detectedFaces;
    using (var currentStream = await fileName.OpenStreamForReadAsync())
    {
        var bitmap = new BitmapImage();
        await bitmap.SetSourceAsync(currentStream.AsRandomAccessStream());
        Image.Source = bitmap;
        currentStream.Seek(0, SeekOrigin.Begin);
    
        var client = new FaceServiceClient(FaceApiKeyBox.Text, "https://westeurope.api.cognitive.microsoft.com/face/v1.0");
        detectedFaces = await client.DetectAsync(currentStream);
    } 

    The DetectAsync method uploads the image data, asks the service to detect any faces it can find, and returns their data.
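    DetectAsync also accepts optional parameters that ask the service for extra face data in the same round trip. A sketch, assuming the client and stream from the code above (the attribute set here is just an example; it requires using System.Diagnostics for Debug.WriteLine):

```csharp
// Request age, gender, and smile attributes along with the rectangles.
var faces = await client.DetectAsync(
    currentStream,
    returnFaceId: true,
    returnFaceLandmarks: false,
    returnFaceAttributes: new[]
    {
        FaceAttributeType.Age,
        FaceAttributeType.Gender,
        FaceAttributeType.Smile
    });

foreach (var face in faces)
{
    // FaceAttributes is only populated for attributes requested above.
    Debug.WriteLine($"{face.FaceAttributes.Gender}, ~{face.FaceAttributes.Age} years");
}
```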

    5c. Finally, we'll take that data and draw a rectangle around detected faces.

    FacesGrid.Children.Clear();
    var red = new SolidColorBrush(Colors.Red);
    var white = new SolidColorBrush(Colors.White);
    var transparent = new SolidColorBrush(Colors.Transparent);
    
    foreach (var face in detectedFaces)
    {
        var rectangle = new Rectangle
        {
            Width = face.FaceRectangle.Width,
            Height = face.FaceRectangle.Height,
            StrokeThickness = 4,
            Stroke = red,
            Fill = transparent
        };
    
        var textBlock = new TextBlock {Foreground = white};
    
        var border = new Border
        {
            Padding = new Thickness(5),
            Background = red,
            BorderThickness = new Thickness(0),
            Visibility = Visibility.Collapsed,
            HorizontalAlignment = HorizontalAlignment.Left,
            Child = textBlock
        };
    
        var stackPanel = new StackPanel();
        stackPanel.Margin = new Thickness(face.FaceRectangle.Left, face.FaceRectangle.Top, 0, 0);
        stackPanel.HorizontalAlignment = HorizontalAlignment.Left;
        stackPanel.VerticalAlignment = VerticalAlignment.Top;
        stackPanel.Children.Add(rectangle);
        stackPanel.Children.Add(border);
        stackPanel.DataContext = face;
    
        FacesGrid.Children.Add(stackPanel);
    }
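    The TextBlock inside the collapsed Border is a natural place to surface face data once you request it. A sketch of what you could add inside the loop, assuming you asked DetectAsync for age and gender via returnFaceAttributes:

```csharp
// Inside the foreach loop, after creating the border: show attribute
// data (FaceAttributes is null unless attributes were requested).
if (face.FaceAttributes != null)
{
    textBlock.Text = $"{face.FaceAttributes.Gender}, {face.FaceAttributes.Age:F0}";
    border.Visibility = Visibility.Visible;
}
```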
  6. Run the application.

  7. Enter the Face API key that you've got from registering your Azure Face API service.

  8. Browse for an image file.

  9. Wait for the result.

I've used Microsoft's C#/Windows NuGet package to ease my way into using the Cognitive Services Face API in my application. Remember, there are more SDKs available for you to use on other platforms too, with project pages for Android, iOS, and Python.

The full source code used for this blog post will be available with my future posts.


Topics:
ai ,microsoft cognitive services ,face api ,face recognition ,tutorial ,api sdk

Published at DZone with permission of Andrej Tozon, DZone MVB. See the original article here.
