
Baby Monitor Chrome Extension – Streaming From Raspberry Pi Using SignalR and Cognitive Vision Service

Want to build your own baby monitor?


SignalR streaming is the latest addition to the SignalR library. It supports sending fragments of data to clients as soon as they become available, instead of waiting for all of the data to arrive. In this article, we will build a small baby-monitoring app that streams camera content from a Raspberry Pi using SignalR streaming. The tool also sends a notification to connected clients whenever it detects a crying baby using the Cognitive Vision Service.

Overview

This tool consists of the following modules:

  • A SignalR streaming hub, which holds the methods for streaming data and the notification service.
  • A .NET Core worker service that runs on a background thread and detects a crying baby by capturing a photo at frequent intervals and passing it to the Cognitive Vision Service.
  • The Azure-based Cognitive Vision Service, which takes the image input, detects whether a human face exists, analyzes the face attributes, and sends back a response with attribute values such as smile, sadness, and anger.
  • A SignalR client: a JavaScript-based Chrome extension that runs in the browser background. When the SignalR hub sends a notification message, the extension shows a popup notification to the user, who can also view the live stream from the popup window.

Demo

Prerequisites and Dependencies

Steps

PiMonitR SignalR Hub

PiMonitRHub is a streaming hub that holds the streaming methods StartStream and StopStream. When the SignalR client invokes StartStream, the hub calls the camera service to capture a photo and sends it to the client by writing it to the ChannelWriter. Whenever an object is written to the ChannelWriter, it is immediately sent to the client. At the end, writer.TryComplete is called to tell the client that the stream is closed.

public class PiMonitRHub : Hub
{
    internal static bool _isStreamRunning = false;
    private readonly PiCameraService _piCameraService;

    public PiMonitRHub(PiCameraService piCameraService)
    {
        _piCameraService = piCameraService;
    }

    public ChannelReader<object> StartStream(CancellationToken cancellationToken)
    {
        var channel = Channel.CreateUnbounded<object>();
        _isStreamRunning = true;
        _ = WriteItemsAsync(channel.Writer, cancellationToken);
        return channel.Reader;
    }

    private async Task WriteItemsAsync(ChannelWriter<object> writer, CancellationToken cancellationToken)
    {
        try
        {
            while (_isStreamRunning)
            {
                cancellationToken.ThrowIfCancellationRequested();
                await writer.WriteAsync(await _piCameraService.CapturePictureAsByteArray());
                await Task.Delay(100, cancellationToken);
            }
        }
        catch (Exception ex)
        {
            writer.TryComplete(ex);
        }

        writer.TryComplete();
    }

    public void StopStream()
    {
        _isStreamRunning = false;
        Clients.All.SendAsync("StopStream");
    }
}


PiMonitR Background Service

PiMonitRWorker is a worker service that inherits from BackgroundService. It starts a new thread when the application starts and executes the logic inside the ExecuteAsync method at frequent intervals until the cancellation token is requested.

internal class PiMonitRWorker : BackgroundService
{
    private readonly IHubContext<PiMonitRHub> _piMonitRHub;
    private readonly PiCameraService _piCameraService;
    private readonly FaceClientCognitiveService _faceClientCognitiveService;

    public PiMonitRWorker(IHubContext<PiMonitRHub> piMonitRHub,
        PiCameraService piCameraService, FaceClientCognitiveService faceClientCognitiveService)
    {
        _piMonitRHub = piMonitRHub;
        _piCameraService = piCameraService;
        _faceClientCognitiveService = faceClientCognitiveService;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            if (!PiMonitRHub._isStreamRunning)
            {
                var stream = await _piCameraService.CapturePictureAsStream();
                if (await _faceClientCognitiveService.IsCryingDetected(stream))
                {
                    await _piMonitRHub.Clients.All.SendAsync("ReceiveNotification", "Baby Crying Detected! You want to start streaming?");
                }
            }
            // Run the background service every 10 seconds
            await Task.Delay(10000);
        }
    }
}


This worker service captures a photo using the camera service and sends it to the Cognitive Service API to detect whether the baby is crying. If crying is detected, the hub broadcasts a notification message to all connected clients. If a client is already watching the stream, the background service skips detection until the user stops watching, to avoid sending duplicate notifications.

Cognitive Vision Service

The Microsoft Cognitive Services API is a powerful API that brings AI capabilities to your app in a few lines of code. Various Cognitive Services APIs are available; in this app, I will use the Cognitive Vision API to detect face emotion and determine whether the baby is crying. The API analyzes a given photo to detect and recognize a human face, then analyzes emotion attributes such as smile and sadness. Best of all, the service has a free tier that allows 20 calls per minute, so we can get started without paying anything.

After you register the Cognitive Service in the Azure Portal, you will get the API endpoint and keys from the portal.

You can store the keys and endpoint URL in User Secrets, AppSettings, or Azure Key Vault so that they can be read through the configuration API.
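For local development, one option is the .NET user-secrets tool. A sketch, assuming the configuration key names used in the code below (SubscriptionKey, FaceEndPointURL) and placeholder values you would replace with your own:

```shell
# Store the Face API credentials outside source control (run in the project folder).
dotnet user-secrets init
dotnet user-secrets set "SubscriptionKey" "<your-face-api-key>"
dotnet user-secrets set "FaceEndPointURL" "https://<your-region>.api.cognitive.microsoft.com"
```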

public class FaceClientCognitiveService
{
    private readonly IFaceClient faceClient;
    private readonly float scoreLimit = 0.5f;
    private readonly ILogger<FaceClientCognitiveService> _logger;

    public FaceClientCognitiveService(IConfiguration config, ILogger<FaceClientCognitiveService> logger)
    {
        _logger = logger;
        faceClient = new FaceClient(new ApiKeyServiceClientCredentials(config["SubscriptionKey"]),
            new System.Net.Http.DelegatingHandler[] { });
        faceClient.Endpoint = config["FaceEndPointURL"];
    }

    public async Task<bool> IsCryingDetected(Stream stream)
    {
        IList<FaceAttributeType> faceAttributes = new FaceAttributeType[]
        {
            FaceAttributeType.Emotion
        };

        // Call the Face API.
        try
        {
            IList<DetectedFace> faceList = await faceClient.Face.DetectWithStreamAsync(stream, false, false, faceAttributes);
            if (faceList.Count > 0)
            {
                var face = faceList[0];
                if (face.FaceAttributes.Emotion.Sadness >= scoreLimit ||
                    face.FaceAttributes.Emotion.Anger >= scoreLimit ||
                    face.FaceAttributes.Emotion.Fear >= scoreLimit)
                {
                    _logger.LogInformation($"Crying Detected with the score of {face.FaceAttributes.Emotion.Sadness}");
                    return true;
                }
                else
                {
                    _logger.LogInformation($"Crying Not Detected with the score of {face.FaceAttributes.Emotion.Sadness}");
                }
            }
            else
            {
                _logger.LogInformation("No Face Detected");
            }
        }
        catch (Exception e)
        {
            _logger.LogError(e.Message);
        }

        return false;
    }
}


  • Install the Microsoft.Azure.CognitiveServices.Vision.Face NuGet package to get the FaceClient.
  • Before making the API call, set the face attribute parameters so that only the emotion attribute is returned, to avoid retrieving all the data.
  • The Face API exposes many face attributes for an identified face, but for our app we use the emotion attributes sadness, anger, and fear.
  • If any one of these attributes is higher than the 0.5 limit, the method returns true.
  • I came up with 0.5 as the limit for these attributes; you can change the value or the attributes to suit your use case. I tested with a few crying images, and this limit worked for all of them.
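As a quick illustration of the rule in the bullets above, the same threshold check can be sketched in JavaScript (a hypothetical helper, not part of the article's code — the scores are floats in [0, 1] as returned by the Face API):

```javascript
// Mirrors the C# check: flag crying when any of the three
// emotion scores reaches the chosen limit (0.5 by default).
function isCryingDetected(emotion, scoreLimit = 0.5) {
  return emotion.sadness >= scoreLimit ||
         emotion.anger >= scoreLimit ||
         emotion.fear >= scoreLimit;
}
```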

PiMonitR Camera Service

I am running my Raspberry Pi on Raspbian OS, which is based on Linux and the ARM architecture. The camera module has a built-in command-line tool called raspistill for taking pictures. However, I wanted a C# wrapper library to capture pictures from the Pi, and I found a wonderful open-source project called MMALSharp, an unofficial C# API for the Raspberry Pi camera that supports Mono 4.x and .NET Standard 2.0.

I installed the MMALSharp NuGet package and initialized the singleton object in the constructor so that it can be reused while streaming continuous shots. I also set the picture resolution to 640x480, because the default resolution is very high and the resulting files are huge.

public class PiCameraService
{
    public MMALCamera MMALCamera;
    private readonly string picStoragePath = "/home/pi/images/";
    private readonly string picExtension = "jpg";

    public PiCameraService()
    {
        MMALCamera = MMALCamera.Instance;
        // Set an average resolution to reduce the file size
        MMALCameraConfig.StillResolution = new Resolution(640, 480);
    }

    public async Task<byte[]> CapturePictureAsByteArray()
    {
        var fileName = await CapturePictureAndGetFileName();

        string filePath = Path.Join(picStoragePath, $"{fileName}.{picExtension}");
        byte[] resultData = await File.ReadAllBytesAsync(filePath);

        // Delete the captured picture from Pi storage
        File.Delete(filePath);
        return resultData;
    }

    public async Task<Stream> CapturePictureAsStream()
    {
        return new MemoryStream(await CapturePictureAsByteArray());
    }

    private async Task<string> CapturePictureAndGetFileName()
    {
        string fileName = null;
        using (var imgCaptureHandler = new ImageStreamCaptureHandler(picStoragePath, picExtension))
        {
            await MMALCamera.TakePicture(imgCaptureHandler, MMALEncoding.JPEG, MMALEncoding.I420);
            fileName = imgCaptureHandler.GetFilename();
        }
        return fileName;
    }
}


Publish Server App to Raspberry Pi

Now that the server-side code is done, the next step is to deploy it to the Raspberry Pi. There are two ways to publish the app:

  • Framework-dependent – Relies on a shared, system-wide version of .NET Core being present on the target system.
  • Self-contained – Doesn't rely on shared components on the target system. All components, including the .NET Core libraries and the .NET Core runtime, are included with the application and are isolated from other .NET Core applications.

I chose self-contained deployment so that all the dependencies are part of the output. The following publish command generates the final output with all the dependencies:

dotnet publish -r linux-arm

You will find the final output in the linux-arm/publish folder under the bin folder. I used network file sharing to copy the files to the Raspberry Pi.
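If network file sharing isn't convenient, scp works too — a sketch assuming the default pi user, a host named pi, and an illustrative target folder (adjust the paths for your project and SDK version):

```shell
# Copy the self-contained publish output to the Pi (paths are illustrative).
scp -r bin/Release/*/linux-arm/publish/* pi@pi:/home/pi/pimonitr/
```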

After all the files were copied, I connected to the Raspberry Pi over a remote connection and ran the published app from the terminal.

PiMonitR Chrome Extension SignalR Client

I decided to go with a Chrome extension as my SignalR client because it supports real-time notifications and doesn't need any server to host the app. The client has a background script that initializes the SignalR connection with the hub and keeps running in the background to receive notifications. It also has a popup window with start and stop streaming buttons to invoke the stream and view its output.

manifest.json

manifest.json defines the background scripts, icons, and permissions needed for this extension.

{
  "name": "Pi MonitR Client",
  "version": "1.0",
  "description": "Real-time streaming from Raspberry Pi using SignalR",
  "browser_action": {
    "default_popup": "popup.html",
    "default_icon": {
      "16": "images/16.png",
      "32": "images/32.png",
      "48": "images/48.png",
      "128": "images/128.png"
    }
  },
  "icons": {
    "16": "images/16.png",
    "32": "images/32.png",
    "48": "images/48.png",
    "128": "images/128.png"
  },
  "permissions": [
    "tabs",
    "notifications",
    "http://*/*"
  ],
  "background": {
    "persistent": true,
    "scripts": [
      "signalr.js",
      "background.js"
    ]
  },
  "manifest_version": 2,
  "web_accessible_resources": [
    "images/*.png"
  ]
}


background.js

// The following sample code uses modern ECMAScript 6 features
// that aren't supported in Internet Explorer 11.
// To convert the sample for environments that do not support ECMAScript 6,
// such as Internet Explorer 11, use a transpiler such as
// Babel at http://babeljs.io/.
var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) {
    return new (P || (P = Promise))(function (resolve, reject) {
        function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
        function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } }
        function step(result) { result.done ? resolve(result.value) : new P(function (resolve) { resolve(result.value); }).then(fulfilled, rejected); }
        step((generator = generator.apply(thisArg, _arguments || [])).next());
    });
};

const hubUrl = "http://pi:5000/hubs/piMonitR";

var connection = new signalR.HubConnectionBuilder()
    .withUrl(hubUrl, { logger: signalR.LogLevel.Information })
    .build();

// We need an async function in order to use await, but we want this code to run
// immediately, so we use an "immediately-executed async function".
(() => __awaiter(this, void 0, void 0, function* () {
    try {
        yield connection.start();
    }
    catch (e) {
        console.error(e.toString());
    }
}))();

connection.on("ReceiveNotification", (message) => {
    new Notification(message, {
        icon: '48.png',
        body: message
    });
});

chrome.runtime.onConnect.addListener(function (externalPort) {
    externalPort.onDisconnect.addListener(function () {
        connection.invoke("StopStream").catch(err => console.error(err.toString()));
    });
});


background.js initiates the SignalR connection with the hub at the defined URL. We also need signalr.js in the same folder; to get it, install the signalr npm package and copy signalr.js from the node_modules\@aspnet\signalr\dist\browser folder.

npm install @aspnet/signalr

This background script keeps our SignalR client active; when it receives a notification from the hub, it shows a Chrome notification.

popup.html

<!doctype html>
<html>

<head>
  <title>Pi MonitR Dashboard</title>
  <script src="popup.js" type="text/javascript"></script>
</head>

<body>
  <h1>Pi MonitR - Stream Dashboard</h1>
  <div>
    <input type="button" id="streamStartButton" value="Start Streaming" />
    <input type="button" id="streamStopButton" value="Stop Streaming" disabled />
  </div>
  <ul id="logContent"></ul>
  <img id="streamContent" width="700" height="400" src="" />
</body>
</html>




The popup HTML shows the stream content when the Start Streaming button is clicked and completes the stream when the Stop Streaming button is clicked.

popup.js

var __awaiter = chrome.extension.getBackgroundPage().__awaiter;
var connection = chrome.extension.getBackgroundPage().connection;

document.addEventListener('DOMContentLoaded', function () {
    const streamStartButton = document.getElementById('streamStartButton');
    const streamStopButton = document.getElementById('streamStopButton');
    const streamContent = document.getElementById('streamContent');
    const logContent = document.getElementById('logContent');

    streamStartButton.addEventListener("click", (event) => __awaiter(this, void 0, void 0, function* () {
        streamStartButton.setAttribute("disabled", "disabled");
        streamStopButton.removeAttribute("disabled");
        try {
            connection.stream("StartStream")
                .subscribe({
                    next: (item) => {
                        streamContent.src = "data:image/jpg;base64," + item;
                    },
                    complete: () => {
                        var li = document.createElement("li");
                        li.textContent = "Stream completed";
                        logContent.appendChild(li);
                    },
                    error: (err) => {
                        var li = document.createElement("li");
                        li.textContent = err;
                        logContent.appendChild(li);
                    },
                });
        }
        catch (e) {
            console.error(e.toString());
        }
        event.preventDefault();
    }));

    streamStopButton.addEventListener("click", function (event) {
        streamStopButton.setAttribute("disabled", "disabled");
        streamStartButton.removeAttribute("disabled");
        connection.invoke("StopStream").catch(err => console.error(err.toString()));
        event.preventDefault();
    });

    connection.on("StopStream", () => {
        var li = document.createElement("li");
        li.textContent = "stream closed";
        logContent.appendChild(li);
        streamStopButton.setAttribute("disabled", "disabled");
        streamStartButton.removeAttribute("disabled");
    });
});


When the user clicks the Start Streaming button, the client invokes the stream hub method (StartStream) and subscribes to it. Whenever the hub sends data, the client receives the content and sets the value directly on the image's src attribute.

streamContent.src = "data:image/jpg;base64," + item;
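This works because SignalR's JSON hub protocol serializes a C# byte[] as a Base64 string, so each streamed frame arrives as a string the browser can embed directly. A tiny sketch with a hypothetical helper name:

```javascript
// SignalR's JSON protocol delivers a C# byte[] as a Base64 string,
// so building the <img> source is plain string concatenation.
function frameToDataUrl(base64Jpeg) {
  return "data:image/jpg;base64," + base64Jpeg;
}
```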

When the user clicks the Stop Streaming button, the client invokes the StopStream hub method, which sets the _isStreamRunning field to false and thereby completes the stream.

Conclusion

This was a fun project. I wanted to experiment with SignalR streaming, and it worked as I expected. Soon we are going to have even more new features in SignalR (such as IAsyncEnumerable support), which will make it even better for many other real-time scenarios. I have uploaded the source code to my GitHub repository.

Happy coding!
