Natural Language Processing/ML on an IoT Edge Device Using MiNiFi

Learn more about how to deploy MiNiFi on the edge.



Recently, I had an opportunity to dive deep into MiNiFi to deploy business logic on an edge device. The task was to identify a pattern an edge device could use to execute or call an ML model. Many edge devices do not have enough compute to constantly run ML models; therefore, leveraging Model as a Service, i.e. an endpoint for model execution, is a natural fit for this class of problem. The example/demo described in this article deploys MiNiFi (an edge agent) to receive text from anyone (a highly interactive demo), have it call a model service (e.g. Cloudera Data Science Workbench) to score the text's sentiment (positive, negative, or neutral), and finally have it forward the text and sentiment to NiFi. Once NiFi receives the data, publishing downstream to Spark, a data lake, S3, ADLS, Ozone, etc. becomes incredibly simple.


MiNiFi Template (YML): Here

NiFi Template: Here

Sentiment API: Here

Sentiment API

Sign up for a free Aylien subscription to use this demo. Aylien is the ML service endpoint that the MiNiFi agent will call to perform sentiment analysis on the text.
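For reference, a sentiment request to Aylien's Text API authenticates with an application ID and key. The sketch below shows the call shape and how to pull the polarity out of the response; the response fields are assumptions based on the API's documented shape, and the live curl call is left as a comment since it needs real credentials.

```shell
# Live call (needs a valid Aylien app ID/key, so shown as a comment):
#   curl -s "https://api.aylien.com/api/v1/sentiment" \
#     -H "X-AYLIEN-TextAPI-Application-ID: YOUR-APP-ID-HERE" \
#     -H "X-AYLIEN-TextAPI-Application-Key: YOUR-APP-KEY-HERE" \
#     --data-urlencode "text=MiNiFi is AWESOME"

# Assumed shape of the JSON the endpoint returns:
response='{"text":"MiNiFi is AWESOME","polarity":"positive","polarity_confidence":0.93}'

# Extract the polarity field (no jq dependency):
polarity=$(printf '%s' "$response" | sed -n 's/.*"polarity":"\([a-z]*\)".*/\1/p')
echo "$polarity"
```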

Edge Device

Use any device that can host a JVM or C++. For this demonstration, I use an AWS micro instance to act as an edge device. Here are the instructions for installing MiNiFi (super simple).
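As a sketch, installing the Java MiNiFi agent generally amounts to downloading a release tarball, unpacking it, and registering it as a service. The version number and archive URL below are assumptions; check the Apache NiFi downloads page for the current release.

```shell
# Assumed release version; adjust to the current MiNiFi release.
MINIFI_VERSION=0.5.0
MINIFI_URL="https://archive.apache.org/dist/nifi/minifi/${MINIFI_VERSION}/minifi-${MINIFI_VERSION}-bin.tar.gz"
echo "$MINIFI_URL"

# On the edge device (shown as comments since they download/install software):
#   curl -O "$MINIFI_URL"
#   tar xzf "minifi-${MINIFI_VERSION}-bin.tar.gz"
#   cd "minifi-${MINIFI_VERSION}"
#   sudo bin/minifi.sh install   # registers the 'minifi' service started later
```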


NiFi

A single node (even on your laptop, in Docker) will do for this demo. For this demonstration, I use an Azure node with 5 GB of RAM.

Deploy NiFi Workflow

Once NiFi is up and running, grab the NiFi template and import it. The imported template on the NiFi canvas should look like this: 

NiFi Sentiment Flow

Start the NiFi flow and take note of the hostname that NiFi is running on. We will need it to update the MiNiFi YML.


Rename the MiNiFi template file to config.yml and place it here:


Replace YOUR-APP-ID-HERE and YOUR-APP-KEY-HERE with your Aylien API ID and key.


Search for YOUR-HOST-HERE and replace it with your NiFi hostname. Replace only the portion before the colon (:).
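These substitutions can be scripted with sed. The excerpt below stands in for the real config.yml (the keys shown are illustrative placeholders, not the template's exact property names); the same sed pattern works on the full file in MiNiFi's conf directory.

```shell
# Work on a local copy; on the device this would be <minifi-home>/conf/config.yml.
CONFIG=config.yml

# Sample excerpt standing in for the real template (illustrative keys only):
cat > "$CONFIG" <<'EOF'
app_id: YOUR-APP-ID-HERE
app_key: YOUR-APP-KEY-HERE
url: http://YOUR-HOST-HERE:8080/nifi
EOF

# Substitute the three placeholders in place:
sed -i \
  -e 's/YOUR-APP-ID-HERE/my-aylien-app-id/' \
  -e 's/YOUR-APP-KEY-HERE/my-aylien-app-key/' \
  -e 's/YOUR-HOST-HERE/nifi.example.com/' \
  "$CONFIG"

cat "$CONFIG"
```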


Now that config.yml has been updated with the appropriate credentials, it's time to start MiNiFi.

 sudo service minifi start

Real-Time Sentiment Analysis

Both NiFi and MiNiFi should now be running, and MiNiFi is ready to receive text. The MiNiFi agent will run sentiment analysis on the received text and forward the result to NiFi. For example, to send text to a MiNiFi agent, run a curl command (replace YOUR-HOST-HERE with the hostname where MiNiFi is running):

 curl -d "MiNiFi is AWESOME" YOUR-HOST-HERE:10222/demo 

or use https://apitester.com/


NiFi will receive the response from MiNiFi. On the NiFi canvas, you will see a received event.


Open the queued event (the response from MiNiFi) to view the sentiment analysis.


This was an end-to-end demonstration of deploying MiNiFi on the edge, interacting with the edge device, and receiving an analyzed response (run through a model service) from the edge.


Opinions expressed by DZone contributors are their own.
