Natural Language Processing/ML on an IoT Edge Device Using MiNiFi
Learn more about how to deploy MiNiFi on the edge.
Recently, I had an opportunity to dive deep into MiNiFi to deploy business logic on an edge device. The task was to identify a pattern an edge device could use to execute or call an ML model. Many edge devices do not have enough compute to run ML models continuously; therefore, treating the model as a service, i.e. calling an endpoint that executes the model, is a natural fit for this class of problem. The demo described in this article deploys MiNiFi (an edge agent) to receive text from anyone (a highly interactive demo), call a model service (e.g. Cloudera Data Science Workbench) to score the text's sentiment (positive, negative, or neutral), and finally forward the text and sentiment to NiFi. Once NiFi receives the data, publishing downstream to Spark, a data lake, S3, ADLS, Ozone, etc. becomes incredibly simple.
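The pattern is simple enough to sketch in a few lines of shell. In this sketch, `classify` is a local stand-in for the remote model call (in the real flow, MiNiFi invokes the sentiment endpoint over HTTP), and forwarding to NiFi is reduced to printing the JSON payload; the field names are illustrative, not taken from the templates.

```shell
#!/bin/sh
# Edge pattern in miniature: take a piece of text, ask a "model service"
# for its sentiment, then forward text + sentiment downstream as JSON.

# classify() is a local stand-in for the remote sentiment model call.
classify() {
  case "$1" in
    *AWESOME*|*great*) echo "positive" ;;
    *terrible*|*bad*)  echo "negative" ;;
    *)                 echo "neutral"  ;;
  esac
}

text="MiNiFi is AWESOME"
sentiment=$(classify "$text")

# In the real flow this JSON would be POSTed to NiFi rather than printed.
printf '{"text":"%s","sentiment":"%s"}\n' "$text" "$sentiment"
# -> {"text":"MiNiFi is AWESOME","sentiment":"positive"}
```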
MiNiFi Template (YML): Here
NiFi Template: Here
Sentiment API: Here
Sign up for a free subscription to Aylien to use this demo. It is the ML service endpoint that the MiNiFi agent will call to perform text sentiment.
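For reference, calling the Aylien sentiment endpoint directly looks roughly like the following. Treat the endpoint URL and header names as assumptions to verify against Aylien's current Text API reference; the credentials are placeholders.

```shell
# Placeholder credentials; endpoint URL and header names are assumptions --
# verify against Aylien's Text API documentation before use.
curl -s "https://api.aylien.com/api/v1/sentiment" \
  -H "X-AYLIEN-TextAPI-Application-ID: YOUR-APP-ID-HERE" \
  -H "X-AYLIEN-TextAPI-Application-Key: YOUR-APP-KEY-HERE" \
  --data-urlencode "text=MiNiFi is AWESOME"
```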
A single node (even Docker on your laptop) will do for this demo. For this demonstration, I used an Azure node with 5 GB of RAM.
Deploy NiFi Workflow
Once NiFi is up and running, grab the NiFi template and import it. The imported template on the NiFi canvas should look like this:
Start the NiFi flow and take note of the hostname NiFi is running on; we will need it to update the MiNiFi YML.
Rename the MiNiFi template file to config.yml and place it here:
Replace YOUR-APP-ID-HERE and YOUR-APP-KEY-HERE with your Aylien API ID and key.
Search for YOUR-HOST-HERE and replace it with your NiFi hostname, replacing only the portion up to the colon (:).
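If you prefer to script these edits, a sed one-liner works. The credential values and hostname below are made-up examples, and the scratch config.yml here stands in for wherever your real MiNiFi config.yml lives.

```shell
# Scratch copy standing in for the real MiNiFi config.yml; the values
# substituted below are made-up examples, not real credentials.
cat > config.yml <<'EOF'
app_id: YOUR-APP-ID-HERE
app_key: YOUR-APP-KEY-HERE
url: http://YOUR-HOST-HERE:9999/contentListener
EOF

# Replace all three placeholders in place (a .bak backup is kept).
sed -i.bak \
  -e 's/YOUR-APP-ID-HERE/my-aylien-id/' \
  -e 's/YOUR-APP-KEY-HERE/my-aylien-key/' \
  -e 's/YOUR-HOST-HERE/nifi-host.example.com/' \
  config.yml

grep 'url:' config.yml
# -> url: http://nifi-host.example.com:9999/contentListener
```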
Now that config.yml has been updated with the appropriate credentials, it's time to start MiNiFi.
sudo service minifi start
Real-Time Sentiment Analysis
Both NiFi and MiNiFi should now be running, and MiNiFi is ready to receive text. The MiNiFi agent will return a sentiment analysis of the received text and forward the analysis to NiFi. For example, to send text to the MiNiFi agent, run a curl command (replace YOUR-HOST-HERE with the hostname where MiNiFi is running):
curl -d "MiNiFi is AWESOME" YOUR-HOST-HERE:10222/demo
or use https://apitester.com/
NiFi will receive a response from MiNiFi. On the NiFi canvas, you will see a received event.
Open the queued event (the response from MiNiFi) to view the sentiment analysis.
This was an end-to-end demonstration of deploying MiNiFi on the edge, interacting with the edge device, and receiving a response from the edge that had been analyzed by a model service.
Opinions expressed by DZone contributors are their own.