Deep Learning 101: Using Apache MXNet on the Edge With Sensors and Intel Movidius
Learn about using Apache MXNet on specialized edge hardware for deep learning and AI.
This is for running Apache MXNet on a Raspberry Pi.
Let's get this installed!
git clone https://github.com/apache/incubator-mxnet.git
The installation instructions on Apache MXNet's website are excellent. Pick your platform and build style; I am taking the simplest path, building from source on Linux.
This build continues from the previous articles, where we installed the drivers for the Sense HAT, Intel Movidius, and the USB webcam, so see those first. Please note that versions of Raspberry Pi, Apache MXNet, Python, and the other drivers are updated every few months, so if you are reading this after DWS 2018, check the relevant libraries and update to the latest versions.
You need Python, Python Devel, and PIP installed, and you may need to run as root. You will also need OpenCV installed, as mentioned in the previous article.
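Before running anything, it is worth confirming that the Python-side dependencies are actually importable. This is a small sanity-check sketch; the module list (cv2, picamera, sense_hat, mxnet) reflects what this build uses, and you should adjust it for your own setup:

```python
import importlib.util

# Modules this build relies on; edit the list for your own environment.
REQUIRED = ["cv2", "picamera", "sense_hat", "mxnet"]

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [name for name in names if importlib.util.find_spec(name) is None]

if __name__ == "__main__":
    gaps = missing_modules(REQUIRED)
    if gaps:
        print("Install these before running the script:", gaps)
    else:
        print("All required modules are available.")
```

Run this once after the install steps below; anything it reports missing points you back to the corresponding pip or apt-get step.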
In this combined Python script, we grab Sense Hat sensors for temperature, humidity, and more. We also run Movidius image analysis and Apache MXNet Inception on the image that we capture with our webcam.
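The heart of that combined script is assembling one JSON record per capture that carries both the sensor readings and the image classification result. Here is a minimal sketch of that record-building step; the field names and the hard-coded stand-in values are illustrative assumptions, not the exact ones used in the repo script (on the Pi, the readings come from sense_hat.SenseHat() and the prediction from the Inception model):

```python
import json
from datetime import datetime

def build_payload(temperature, humidity, pressure, prediction, confidence):
    """Assemble one JSON-ready record combining Sense HAT readings with the
    top MXNet Inception prediction for the captured webcam frame.
    Field names here are illustrative, not the exact ones in the repo."""
    return {
        "ts": datetime.utcnow().isoformat(),
        "tempf": round(temperature * 1.8 + 32.0, 2),  # Sense HAT reports Celsius
        "humidity": round(humidity, 2),
        "pressure": round(pressure, 2),
        "top1": prediction,
        "top1pct": round(confidence * 100.0, 2),
    }

# Hard-coded values stand in for live sensor and model output here.
record = build_payload(22.5, 41.0, 1013.2, "n02123045 tabby cat", 0.87)
print(json.dumps(record))
```

MiNiFi picks records like this up from the Pi and ships them to NiFi, which is why a flat, consistently named JSON structure matters: it is what the schema downstream will be written against.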
pip install --upgrade pip
pip install scikit-image
git clone https://github.com/tspannhw/mxnet_rpi.git
sudo apt-get update -y
sudo apt-get install python-pip python-opencv python-scipy python-picamera -y
sudo apt-get -y install git cmake build-essential gcc-4.8 g++-4.8 liblapack* libblas* libopencv*
git clone --recursive https://github.com/apache/incubator-mxnet.git --branch 1.0.0
cd incubator-mxnet
export USE_OPENCV=0
make
cd python
pip install --upgrade pip
pip install -e .
pip install mxnet==1.0.0
MiNiFi flow to run Python script and send over images (running on Raspberry Pi):
Routing on server to process either an image or a JSON:
Our Apache NiFi server receiving input from Raspberry Pi:
Apache NiFi server processing the input:
We route to two different processing flows: one saves the images, while the other adds a schema and converts the JSON data into Apache Avro. The Avro content is merged and sent to a central HDF 3.1 cluster that can write to HDFS. From there we can either stream into an ACID Hive table, or convert the Avro to Apache ORC, store it in HDFS, and autogenerate an external Hive table on top of it. You can find many examples of both of these processes in my links below. We could also insert into Apache HBase or an Apache Phoenix table, send notifications to Slack or email, store the data in an RDBMS like MySQL, or do all of the above.
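The "add a schema" step means every incoming record is validated against a registered schema before JSON-to-Avro conversion; NiFi does this with its record readers and a schema registry. As a rough conceptual sketch of what that check amounts to, here is a plain-Python version with a hypothetical schema matching the sensor fields:

```python
# Hypothetical schema: each field the sensor record must carry, with its type.
# NiFi expresses this as an Avro schema in a schema registry; this dict is
# only a conceptual stand-in.
SCHEMA = {
    "ts": str,
    "tempf": float,
    "humidity": float,
    "top1": str,
}

def conforms(record, schema):
    """True if the record has every schema field with the declared type."""
    return all(
        field in record and isinstance(record[field], ftype)
        for field, ftype in schema.items()
    )

good = {"ts": "2018-04-01T00:00:00", "tempf": 72.5, "humidity": 41.0, "top1": "tabby cat"}
bad = {"ts": "2018-04-01T00:00:00", "tempf": "hot"}  # wrong type, missing fields
print(conforms(good, SCHEMA), conforms(bad, SCHEMA))  # True False
```

Records that fail validation can be routed to a failure relationship instead of the Avro conversion path, which is exactly the kind of routing the NiFi flow above performs.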
We are using the Apache MiNiFi Java Agent 0.3.0. I will be adding a follow-up covering MiNiFi 0.4.0 with the native C++ agent, TensorFlow, and the USB cam. See this awesome article for TensorFlow.
The source code is here.
This is too easy!