TensorFlow on the Edge: Part II
TensorFlow can run on many devices, including small edge hardware, with a smaller footprint, lower energy usage, and a lower price.
Be sure to check out Part I first!
The TensorFlow swarm can run on two Raspberry Pis running Raspbian, an old Ubuntu laptop, and a Dell Inspiron Mini running CentOS.
TensorFlow now has TensorFlow Serving, which communicates via gRPC.
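As a rough sketch, here is what a gRPC call to a TensorFlow Serving instance looks like from Python. The host, port, model name, and input tensor name are my own placeholder assumptions, not anything from this setup; the `grpcio` and `tensorflow-serving-api` packages are imported lazily so the sketch loads even where they are not installed.

```python
def serving_target(host="localhost", port=8500):
    """gRPC target string for a TensorFlow Serving instance.
    8500 is the conventional TensorFlow Serving gRPC port."""
    return "%s:%d" % (host, port)

def predict(input_array, model="edge_model", input_name="images",
            host="localhost", port=8500, timeout_s=10.0):
    """Send one input tensor to TF Serving's Predict RPC and return the
    response protobuf. Model and tensor names here are assumptions --
    check them against your own Serving configuration."""
    # Lazy imports: grpcio, tensorflow, and tensorflow-serving-api.
    import grpc
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel(serving_target(host, port))
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = model
    # Older TF 1.x releases expose this as tf.contrib.util.make_tensor_proto.
    request.inputs[input_name].CopyFrom(tf.make_tensor_proto(input_array))
    return stub.Predict(request, timeout_s)
```

On a constrained edge box, you would typically run Serving on the stronger node (the laptop or Jetson) and have the Raspberry Pis call into it over the network.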
You can also run MXNet on the Raspberry Pi and other constrained devices.
NVIDIA has made deep learning at the edge incredibly powerful! If anyone wants me to review one, send it my way.
The Jetson TX2 is an amazing small device:

- NVIDIA Pascal™ GPU with 256 CUDA cores
- 8 GB of fast RAM and 32 GB of fast eMMC storage
- Quad-core ARM® Cortex-A57 (2 MB L2) plus dual-core NVIDIA Denver 2 (2 MB L2) in an HMP configuration
- SATA, Wi-Fi, Bluetooth, USB 3.0, and GPIO
- Support for up to six cameras at once, and more
Running TensorFlow on this should be far better than running it on a full server with weak or nonexistent GPUs: smaller footprint, lower energy usage, and a lower price for strong performance. Deploying a few dozen of these in remote zones would make for a great paradigm.
I would recommend evaluating one of these devices with Python 2.7, Apache MiniFi, Paho MQTT, OpenCV, TensorFlow, python-dev, and the usual build tools installed. When I eventually get one of these devices, I will write a detailed setup article.
Below is my suggested architecture. The edge box ingests sensor data and images, runs TensorFlow, and uses the MiniFi agent to push messages to NiFi via site-to-site or MQTT. Apache NiFi servers store the data in HDFS as ORC files and in HBase, serving it to Zeppelin notebooks via HiveQL, Phoenix on HBase, and Spark SQL. Spark Streaming also ingests some of the data pushed through Apache NiFi.
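To make the edge-to-NiFi hop concrete, here is a minimal sketch of the message an edge box might emit: a sensor reading plus a local TensorFlow classification, packaged as JSON and handed to MQTT (which a NiFi ConsumeMQTT processor can pick up). The topic name, broker hostname, and field names are all my own illustrative assumptions; `paho-mqtt` is imported lazily so the sketch loads without it.

```python
import json
import time

def make_edge_message(device_id, temperature_c, tf_label, tf_score, ts=None):
    """Build one JSON payload per reading: sensor value plus the top
    class and confidence from the locally run TensorFlow model."""
    return json.dumps({
        "device": device_id,                # e.g. which Raspberry Pi sent this
        "ts": ts if ts is not None else int(time.time()),
        "temperature_c": temperature_c,
        "tf_label": tf_label,               # top class from the local model
        "tf_score": round(tf_score, 4),     # its softmax confidence
    }, sort_keys=True)

def publish_via_mqtt(payload, broker="nifi-gateway.local", topic="edge/sensors"):
    """Push one payload to the MQTT broker that NiFi listens on.
    paho-mqtt is imported lazily so the module loads without it installed."""
    import paho.mqtt.publish as mqtt_publish
    mqtt_publish.single(topic, payload, hostname=broker)
```

The same payload could just as well be written to a directory that a MiniFi GetFile processor watches and forwards over site-to-site; the JSON shape stays the same either way.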
Opinions expressed by DZone contributors are their own.