
Firebase ML Kit: Build a Face Features-Detecting App With Face Detection API and Android Things


Learn how to build a device that detects facial features.


This article describes how to build a face features-detecting app using the Face Detection API (part of Firebase ML Kit) and Android Things. The idea comes from the Google project called "Android Things expression flower": the app detects face characteristics (face classification) using machine vision based on Firebase ML Kit and displays them on an LCD display using a set of emoticons.

To build this project, you will need:

  • Raspberry Pi
  • Raspberry Camera
  • LCD Display (SSD1306)

The final result is shown here:

Firebase ML Kit face detection with Android Things and Raspberry Pi

Download the Android Things source code

Introduction to Firebase ML Kit

Firebase ML Kit is a mobile SDK that lets us experiment with machine learning technologies. Libraries such as TensorFlow and Cloud Vision make it easier to develop mobile apps that use machine learning; nevertheless, the models behind them require time and effort to build and train. Firebase ML Kit is Google's effort to make machine learning easier to use and more accessible to people who do not know much about machine learning technologies, by providing pre-trained models that can be used when developing Android and Android Things apps.

This article describes how to implement a machine vision Android Things app that recognizes face features.

It shows how easy it is to add machine learning capabilities to an Android Things app without knowing much about machine learning and without building and optimizing a machine learning model.

What Is the Face Detection API in Firebase ML Kit?

Using the Firebase ML Kit Face Detection API, it is possible to detect faces in a picture or in a camera stream. In this Android Things project, we will use a camera connected to the Raspberry Pi. Once a face is detected, we can extract face features such as rotation, size, and so on. Moreover, the Face Detection API lets us go deeper into this face analysis, retrieving:

  • Landmarks: points of interest of the face, such as the left eye, right eye, nose base, and so on.
  • Contours: points that follow the shape of the face.
  • Classification: the capability to detect specific face characteristics. For example, it is possible to detect whether an eye is open or closed, or whether the face is smiling.

Moreover, using the Face Detection API, it is possible to track faces across a video sequence. As you can see, these are very interesting features that open up new scenarios when developing apps.
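To give an idea of how landmarks and contours are exposed, here is a minimal sketch of how they could be read from a detected face. The getters and constants below come from the firebase-ml-vision library used later in this article, and landmark/contour detection must be enabled in the detector options for these values to be populated:

// Minimal sketch: reading landmarks and contours from a detected FirebaseVisionFace.
// Landmark and contour modes must be enabled in FirebaseVisionFaceDetectorOptions.
FirebaseVisionFaceLandmark leftEye = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EYE);
if (leftEye != null) {
    Log.d(TAG, "Left eye position [" + leftEye.getPosition() + "]");
}
List<FirebaseVisionPoint> faceContour = face.getContour(FirebaseVisionFaceContour.FACE).getPoints();
Log.d(TAG, "Face contour points [" + faceContour.size() + "]");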

In this project, as stated before, we will use face classification and represent the detected face characteristics on an LCD display. To do it, the app uses the following images:

  • Neutral face
  • Right eye closed
  • Smiling face
  • Left eye closed


How to Use Face Detection API

Now that we know what the Face Detection API is, it is time to start using it to build the Android Things app.

Before implementing the app, it is necessary to configure a new project in the Firebase Console. This is a very simple step. At the end, you will download a google-services.json file that must be added to your Android Things project.

Set Up Firebase ML Kit

Once the project is configured, it is necessary to configure the Face Detection API and add the right dependencies to our project:

dependencies {
    implementation 'com.google.firebase:firebase-ml-vision:18.0.2'
    implementation 'com.google.firebase:firebase-ml-vision-face-model:17.0.2'
}
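The Firebase setup also typically requires the Google Services Gradle plugin so that the downloaded google-services.json is picked up at build time. A minimal sketch is shown below (the plugin version is indicative, not taken from the original project):

// Project-level build.gradle (version shown is indicative)
buildscript {
    dependencies {
        classpath 'com.google.gms:google-services:4.1.0'
    }
}

// App-level build.gradle, at the bottom of the file
apply plugin: 'com.google.gms.google-services'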


Next, let us add these lines to our AndroidManifest.xml:

<uses-permission 
     android:name="android.permission.CAMERA"/>
<uses-permission 
     android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<meta-data
     android:name="com.google.firebase.ml.vision.DEPENDENCIES"
     android:value="face" />


How to Use Face Classification to Detect Face Characteristics

It is time to start using Firebase ML Kit, and in more detail the Face Detection API, in this Android Things app. There are two steps to follow in order to detect face characteristics such as a smile or a closed left or right eye. These steps are shown below:

  • Use the camera to capture a picture
  • Pass the captured image to Firebase ML Kit to detect the face

For now, let us suppose that the image has already been captured somehow and focus our attention on how to use the Firebase ML Kit Face Detection API to detect face characteristics.

Configuring Face Detection API

Before applying face detection to an image, it is necessary to initialize Firebase ML Kit and configure the Face Detection API. In the MainActivity, and more precisely in the onCreate method, add this line:

FirebaseApp.initializeApp(this);


To configure the face detector, it is necessary to use FirebaseVisionFaceDetectorOptions in this way:

FirebaseVisionFaceDetectorOptions.Builder builder =
        new FirebaseVisionFaceDetectorOptions.Builder();


Next, it is necessary to add the configuration options:

FirebaseVisionFaceDetectorOptions options =
        new FirebaseVisionFaceDetectorOptions.Builder()
                .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                .enableTracking()
                .build();


The Android Things app is interested in face classification, as stated before, so we enable this option. Moreover, face detection runs in fast mode by default.
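If accuracy matters more than speed, the performance mode can be set explicitly. A minimal sketch, assuming the setPerformanceMode builder method is available in this version of the library:

FirebaseVisionFaceDetectorOptions accurateOptions =
        new FirebaseVisionFaceDetectorOptions.Builder()
                .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
                .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                .build();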

Finally:

FirebaseVisionFaceDetector detector = FirebaseVision.getInstance().getVisionFaceDetector(options);


Once the detector is ready and correctly configured, we can start detecting face characteristics (or face classification) using a captured image:

FirebaseVisionImage firebaseImage = FirebaseVisionImage.fromBitmap(displayBitmap);
Task<List<FirebaseVisionFace>> result = detector
    .detectInImage(firebaseImage)
    .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
        @Override
        public void onSuccess(List<FirebaseVisionFace> faces) {
            for (FirebaseVisionFace face : faces) {
                Log.d(TAG, "****************************");
                Log.d(TAG, "face [" + face + "]");
                Log.d(TAG, "Smiling Prob [" + face.getSmilingProbability() + "]");
                Log.d(TAG, "Left eye open [" + face.getLeftEyeOpenProbability() + "]");
                Log.d(TAG, "Right eye open [" + face.getRightEyeOpenProbability() + "]");
                checkFaceExpression(face);
            }
        }
    });


There are some aspects to notice:

  1. Using the displayBitmap, we build firebaseImage, the image where we want to detect face characteristics.
  2. The app invokes the method detectInImage to start detecting the face (the app uses face classification).
  3. The app adds a listener to get notified when the facial characteristics are available (error handling is sketched right after this list).
  4. For each face detected, the app gets the classification probabilities.
  5. Finally, using the probabilities retrieved above, the Android Things app controls the LCD display, showing the matching emoticon.
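Since detectInImage returns a Task, it is also possible to be notified when detection fails. A minimal sketch of how a failure listener could be chained (this part is not in the original project and is only illustrative):

result.addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(@NonNull Exception e) {
        // Detection failed (for example, the face model has not been downloaded yet)
        Log.e(TAG, "Face detection failed", e);
    }
});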

The method checkFaceExpression classifies the face, determining the facial characteristics. At the end, it notifies the result to the caller (as we will see later):

private void checkFaceExpression(FirebaseVisionFace face) {
  if (face.getSmilingProbability() > 0.5) {
    Log.d(TAG, "**** Smiling ***");
    listener.onSuccess(FACE_STATUS.SMILING);
    return;
  }
  if (face.getLeftEyeOpenProbability() < 0.2 &&
      face.getLeftEyeOpenProbability() != -1 &&
      face.getRightEyeOpenProbability() > 0.5) {
    Log.d(TAG, "Right Open..");
    listener.onSuccess(FACE_STATUS.RIGHT_EYE_OPEN_LEFT_CLOSE);
    return;
  }
  if (face.getRightEyeOpenProbability() < 0.2 &&
      face.getRightEyeOpenProbability() != -1 &&
      face.getLeftEyeOpenProbability() > 0.5) {
    Log.d(TAG, "Left Open..");
    listener.onSuccess(FACE_STATUS.LEFT_EYE_OPEN_RIGHT_CLOSE);
    return;
  }
  // No specific expression detected: fall back to the neutral status
  listener.onSuccess(FACE_STATUS.LEFT_OPEN_RIGHT_OPEN);
}


How to Capture the Image Using Camera in Android Things

Until now, we have assumed that the image was already captured. This paragraph shows how to do it using a camera connected to the Raspberry Pi. The process is quite simple and is the same one we use when implementing a regular Android app. It can be broken into these steps:

  • Open the camera
  • Create a capture session
  • Handle the image

Open the Camera

In this step, the Android Things app initializes the camera. Before using the camera, it is necessary to add the right permission to the AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA"/>


Moreover, let us create a new class, FaceDetector.java, that will handle all the details related to face detection. Its constructor is:

 public FaceDetector(Context ctx, ImageView img, Looper looper) {
   this.ctx = ctx;
   this.img = img;
   this.looper = looper;
 }
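The snippet above references fields that are not shown. A minimal sketch of how the class might declare them, including the setListener method used later from MainActivity (these declarations are inferred from the rest of the article, not taken from the original source):

// Hypothetical field declarations for FaceDetector (inferred from usage)
private Context ctx;
private ImageView img;
private Looper looper;
private CameraListener listener;

public void setListener(CameraListener listener) {
    this.listener = listener;
}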


We will see the role of the ImageView later. Next, we check whether a camera is present and open it:

private void openCamera(CameraManager camManager) {
  try {
   String[] camIds = camManager.getCameraIdList();
   if (camIds.length < 1) {
     Log.e(TAG, "Camera not available");
     listener.onError();
     return;
   }

  camManager.openCamera(camIds[0],
      new CameraDevice.StateCallback() {
        @Override
        public void onOpened(@NonNull CameraDevice camera) {
          Log.i(TAG, "Camera opened");
          startCamera(camera);
        }
        @Override
        public void onDisconnected(@NonNull CameraDevice camera) 
         {}
        @Override
        public void onError(@NonNull CameraDevice camera, int error) {
         Log.e(TAG, "Error ["+error+"]");
         listener.onError();
        }
      },
      backgroundHandler);
  }
  catch(CameraAccessException cae) {
    cae.printStackTrace();
    listener.onError();
  }
}


Where:

CameraManager cameraManager = (CameraManager) ctx.getSystemService(Context.CAMERA_SERVICE);


The code is quite simple; it is necessary to implement a listener to get notified when the camera is opened or when an error occurs. That's all.
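The openCamera and startCamera methods also rely on a backgroundHandler and an mImageReader that are not shown in the article. A minimal sketch of how they could be initialized, assuming FaceDetector implements ImageReader.OnImageAvailableListener (the resolution and thread name are assumptions):

// Hypothetical initialization (resolution and thread name are assumptions)
private ImageReader mImageReader;
private Handler backgroundHandler;

private void initCapture() {
    HandlerThread backgroundThread = new HandlerThread("CameraBackground");
    backgroundThread.start();
    backgroundHandler = new Handler(backgroundThread.getLooper());

    // JPEG frames are easy to decode into a Bitmap for Firebase ML Kit
    mImageReader = ImageReader.newInstance(640, 480, ImageFormat.JPEG, 2);
    mImageReader.setOnImageAvailableListener(this, backgroundHandler);
}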

Create a Capture Session

The next step is creating a capture session so that the Android Things app can capture the image. Let us add a new method:

private void startCamera(CameraDevice cameraDevice) {
    try {
        final CaptureRequest.Builder requestBuilder =
                cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        requestBuilder.addTarget(mImageReader.getSurface());
        cameraDevice.createCaptureSession(Collections.singletonList(mImageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession session) {
                        Log.i(TAG, "Camera configured..");
                        CaptureRequest request = requestBuilder.build();
                        try {
                            session.setRepeatingRequest(request, null, backgroundHandler);
                        }
                        catch (CameraAccessException cae) {
                            Log.e(TAG, "Camera session error");
                            cae.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                    }
                },
                backgroundHandler);
    }
    catch (CameraAccessException cae) {
        Log.e(TAG, "Camera Access Error");
        cae.printStackTrace();
        listener.onError();
    }
}


In this method, the Android Things app starts a capture session and gets notified when an image is captured.

Handle the Image

The last step is handling the captured image. This image will be sent to Firebase ML Kit to get the facial characteristics. For this purpose, it is necessary to implement a callback method:

@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    // We have to convert the image before
    // using it in Firebase ML Kit
    ...
}


That's all. The camera has captured the image, and we can now start detecting face characteristics.
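The conversion step is elided in the original code. One possible approach, assuming the ImageReader is configured for JPEG frames as in the earlier sketch, is to decode the single JPEG plane into a Bitmap and wrap it in a FirebaseVisionImage; the method below is only illustrative:

// Illustrative only: convert a JPEG Image into a Bitmap for Firebase ML Kit
private Bitmap imageToBitmap(Image image) {
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);
    image.close();
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
}

// Usage inside onImageAvailable:
// Bitmap displayBitmap = imageToBitmap(image);
// FirebaseVisionImage firebaseImage = FirebaseVisionImage.fromBitmap(displayBitmap);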


Displaying Face-Detected Characteristics Using Android Things and LCD

In this step, we will show how to display the face characteristics retrieved by Firebase ML Kit. In this project, the Raspberry Pi is connected to an LCD display (SSD1306) that shows the facial characteristics. In this way, the Android Things app can control devices using the detected face.

Before starting, it is useful to show how to connect Raspberry Pi to SSD1306:

Android Things: Raspberry Pi to SSD1306 wiring diagram

As you can see, the connection is very simple. To handle the LCD display, it is necessary to add the right driver to our Android Things project. In the build.gradle, add this line:

implementation 'com.google.android.things.contrib:driver-ssd1306:1.1'


To handle all the details related to the LCD, let us create a new class called DisplayManager. The purpose of this class is to show the right image according to the detected face characteristics. We represent the different characteristics using the four images described previously; these images must be placed in the drawable-nodpi folder.
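The driver exposes an Ssd1306 class that DisplayManager can wrap. A minimal sketch of how the display might be opened (the I2C bus name depends on the board; "I2C1" is the usual name on a Raspberry Pi running Android Things, but verify it for your setup):

// Hypothetical DisplayManager initialization; bus name is an assumption
private Ssd1306 display;

public DisplayManager() {
    try {
        display = new Ssd1306("I2C1");
        display.clearPixels();
        display.show();
    }
    catch (IOException ioe) {
        ioe.printStackTrace();
    }
}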

In order to show the right image according to the detected face characteristics, we will add this method to the class:

public void setImage(Resources res, int resId) {
  display.clearPixels();
  Bitmap bmp = BitmapFactory.decodeResource(res, resId);
  BitmapHelper.setBmpData(display, 0,0, bmp, true);
  try {
    display.show();
  }
  catch (IOException ioe) {
    ioe.printStackTrace();
  }
}


Final Step

In this last step, we will glue everything together so that the app works correctly. To do it, it is necessary to add a listener so that the MainActivity is notified when facial characteristics are detected. Let us define the listener interface in the FaceDetector:

 public interface CameraListener {
   public void onError();
   public void onSuccess(FACE_STATUS status);
 }


Where:

 // Face status
 enum FACE_STATUS {
    SMILING,
    LEFT_EYE_OPEN_RIGHT_CLOSE,
    RIGHT_EYE_OPEN_LEFT_CLOSE,
    LEFT_OPEN_RIGHT_OPEN
 }


Now, in the MainActivity, we will implement the listener:

FaceDetector fc = new FaceDetector(this, img, getMainLooper());
fc.setListener(new FaceDetector.CameraListener() {
    @Override
    public void onError() {
        // Handle error
    }

    @Override
    public void onSuccess(FaceDetector.FACE_STATUS status) {
        Log.d(TAG, "Face [" + status + "]");
        switch (status) {
            case SMILING:
                display.setImage(getResources(), R.drawable.smiling_face);
                break;
            case LEFT_EYE_OPEN_RIGHT_CLOSE:
                display.setImage(getResources(), R.drawable.right_eyes_closed);
                break;
            case RIGHT_EYE_OPEN_LEFT_CLOSE:
                display.setImage(getResources(), R.drawable.left_eyes_closed);
                break;
            default:
                display.setImage(getResources(), R.drawable.neutral_face);
        }
    }
});


Creating the App UI

If you want to create the UI of the Android Things app, you have to add the layout:

<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">
  <ImageView android:layout_height="wrap_content"
    android:layout_width="wrap_content"
    android:id="@+id/img" />

</android.support.constraint.ConstraintLayout>


Final Considerations

At the end of this article, you have hopefully gained knowledge about how to use Firebase ML Kit with Android Things. We have explored how to detect face characteristics using machine learning. Firebase ML Kit makes it possible to test and use machine learning without knowing much about it and without spending time and effort building ML models. Using the Face Detection API, you can easily build an Android Things app that detects face characteristics.
