
Powering Edge AI With the Powerful Jetson Nano


Setting up a Jetson Nano for Deep Learning Edge programming.


Let's learn how to set up a Jetson Nano for deep learning edge programming.

Nano the Device

Nano the Cat
Hardware:

Jetson Nano Developer Kit: built around a 128-core Maxwell GPU and a quad-core ARM A57 CPU running at 1.43 GHz, coupled with 4 GB of LPDDR4 memory. This is real power at the edge. I have a new favorite device.

You will need some kind of USB WiFi adapter if you are not hardwired to Ethernet. This is cheap and easy: I added a tiny $15 WiFi adapter and was off to the races (see the sketch below for joining a network from the console).
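If you need to join a wireless network without a desktop session, something like the following works on Ubuntu 18.04 with NetworkManager; the SSID and password here are placeholders:

# List visible networks, then join one (replace MySSID/MyPassword)
nmcli device wifi list
sudo nmcli device wifi connect MySSID password MyPassword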

Operating System:

Ubuntu 18.04
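To confirm which release the board is actually running:

# Print distribution details (should report Ubuntu 18.04 on this image)
lsb_release -a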

Library Setup:

# Refresh package lists and install build tools
sudo apt-get update -y
sudo apt-get install git cmake -y

# Math and HDF5 libraries needed by the Python ML stack
sudo apt-get install libatlas-base-dev gfortran -y
sudo apt-get install libhdf5-serial-dev hdf5-tools -y

# Python headers, OpenCV, and webcam tooling
sudo apt-get install python3-dev -y
sudo apt-get install libcv-dev libopencv-dev -y
sudo apt-get install fswebcam -y
sudo apt-get install libv4l-dev -y
sudo apt-get install python-opencv -y

# psutil for both Python 3 and Python 2
pip3 install psutil
pip2 install psutil

# Python 3.6 deep learning stack: MXNet, GluonCV, and friends
pip3.6 install easydict -U
pip3.6 install scikit-learn -U
pip3.6 install opencv-python -U --user
pip3.6 install numpy -U
pip3.6 install mxnet -U
pip3.6 install mxnet-mkl -U
pip3.6 install gluoncv --upgrade

# TensorFlow GPU build for JetPack 4.2 from NVIDIA's wheel index
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev -y
sudo apt-get install python3-pip
sudo pip3 install -U pip
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu

# Query the current NVPModel power mode settings
sudo nvpmodel -q --verbose

# NumPy and Keras for the TensorFlow side
pip3 install numpy
pip3 install keras

# Build dusty-nv's jetson-inference examples
git clone https://github.com/dusty-nv/jetson-inference
cd jetson-inference
git submodule update --init

# Monitoring tools: tegrastats ships with JetPack; jetson-stats adds jtop
tegrastats
pip3 install -U jetson-stats
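With everything installed, a quick sanity check confirms the main Python libraries import cleanly and reports their versions:

# Verify the deep learning stack is importable
python3 -c "import cv2; print('OpenCV', cv2.__version__)"
python3 -c "import mxnet; print('MXNet', mxnet.__version__)"
python3 -c "import tensorflow as tf; print('TensorFlow', tf.__version__)"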

Source:

https://github.com/tspannhw/minifi-jetson-nano

IoT Setup

Download the MiNiFi 0.6.0 source from Cloudera and build it.

Download the MiNiFi Java Agent (binary) and unzip it; a sketch of those steps follows this list.

Follow these instructions.
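For the Java agent, the unzip-and-start steps look roughly like this; the archive and directory names are assumptions based on the 0.6.0 release, so check them against your actual download:

# Unpack the MiNiFi Java agent and start it (filenames assumed)
unzip minifi-0.6.0-bin.zip
cd minifi-0.6.0
./bin/minifi.sh start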

On a Server

We want to hook up to Edge Flow Manager (EFM) to make flow development, deployment, management, and monitoring of MiNiFi agents trivial. Download NiFi Registry. You will also need Apache NiFi.
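NiFi and NiFi Registry both ship as self-contained distributions; once unpacked, each starts from its own install directory (paths assumed):

# Start NiFi Registry and NiFi from their unpacked distributions
./bin/nifi-registry.sh start
./bin/nifi.sh start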

For a good walkthrough and hands-on demonstration, see this workshop.

See these cool Jetson Nano projects: https://developer.nvidia.com/embedded/community/jetson-projects

Monitor Status

https://github.com/rbonghi/jetson_stats
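Once jetson-stats is installed (via the pip3 command in the setup above), jtop gives an interactive view of the board, while tegrastats (which ships with JetPack) streams raw utilization numbers:

# Interactive CPU/GPU/memory/thermal dashboard (may require sudo)
sudo jtop

# Raw utilization stream from JetPack
tegrastats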

Example Flow

It's easy to add MiNiFi Java or C++ agents to the Jetson Nano. I did a custom MiNiFi C++ 0.6.0 build for the Jetson, then put together a quick flow that runs the jetson-inference imagenet-console C++ binary on an image captured from a compatible Logitech USB webcam with fswebcam. I store the images in /opt/demo/images and pass the path on the command line to the C++ console as a proof of concept.

#!/bin/bash

# Timestamp used to give each capture a unique name
DATE=$(date +"%Y-%m-%d_%H%M")

# Grab a 1280x720 frame from the USB webcam
fswebcam -q -r 1280x720 --no-banner /opt/demo/images/$DATE.jpg

# Classify the frame and write an annotated copy alongside it
/opt/demo/jetson-inference/build/aarch64/bin/imagenet-console /opt/demo/images/$DATE.jpg /opt/demo/images/out_$DATE.jpg
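To capture and classify on a schedule, a crontab entry along these lines does the job; the script path is an assumption, so point it at wherever you saved the script above:

# Run the capture-and-classify script once a minute (hypothetical path)
* * * * * /opt/demo/capture.sh >> /var/log/capture.log 2>&1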

A sample run produces output like the following:

imagenet-console

  args (3):  0 [/opt/demo/jetson-inference/build/aarch64/bin/imagenet-console]  1 [/opt/demo/images/2019-07-01_1405.jpg]  2 [/opt/demo/images/out_2019-07-01_1405.jpg]

imageNet -- loading classification network model from:

         -- prototxt     networks/googlenet.prototxt

         -- model        networks/bvlc_googlenet.caffemodel

         -- class_labels networks/ilsvrc12_synset_words.txt

         -- input_blob   'data'

         -- output_blob  'prob'

         -- batch_size   2

[TRT]  TensorRT version 5.0.6

[TRT]  detected model format - caffe  (extension '.caffemodel')

[TRT]  desired precision specified for GPU: FASTEST

[TRT]  requested fasted precision for device GPU without providing valid calibrator, disabling INT8

[TRT]  native precisions detected for GPU:  FP32, FP16

[TRT]  selecting fastest native precision for GPU:  FP16

[TRT]  attempting to open engine cache file /opt/demo/jetson-inference/build/aarch64/bin/networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine

[TRT]  loading network profile from engine cache... /opt/demo/jetson-inference/build/aarch64/bin/networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine

[TRT]  device GPU, /opt/demo/jetson-inference/build/aarch64/bin/networks/bvlc_googlenet.caffemodel loaded

[TRT]  device GPU, CUDA engine context initialized with 2 bindings

[TRT]  binding -- index   0

               -- name    'data'

               -- type    FP32

               -- in/out  INPUT

               -- # dims  3

               -- dim #0  3 (CHANNEL)

               -- dim #1  224 (SPATIAL)

               -- dim #2  224 (SPATIAL)

[TRT]  binding -- index   1

               -- name    'prob'

               -- type    FP32

               -- in/out  OUTPUT

               -- # dims  3

               -- dim #0  1000 (CHANNEL)

               -- dim #1  1 (SPATIAL)

               -- dim #2  1 (SPATIAL)

[TRT]  binding to input 0 data  binding index:  0

[TRT]  binding to input 0 data  dims (b=2 c=3 h=224 w=224) size=1204224

[cuda]  cudaAllocMapped 1204224 bytes, CPU 0x100e30000 GPU 0x100e30000

[TRT]  binding to output 0 prob  binding index:  1

[TRT]  binding to output 0 prob  dims (b=2 c=1000 h=1 w=1) size=8000

[cuda]  cudaAllocMapped 8000 bytes, CPU 0x100f60000 GPU 0x100f60000

device GPU, /opt/demo/jetson-inference/build/aarch64/bin/networks/bvlc_googlenet.caffemodel initialized.

[TRT]  networks/bvlc_googlenet.caffemodel loaded

imageNet -- loaded 1000 class info entries

networks/bvlc_googlenet.caffemodel initialized.
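To feed those classification results back into the MiNiFi flow, one simple option is to tee the console output into a log file that the agent can tail. The final line of the script above could become the following; the log path here is an assumption:

# Append classification output to a log for the MiNiFi agent to tail
/opt/demo/jetson-inference/build/aarch64/bin/imagenet-console /opt/demo/images/$DATE.jpg /opt/demo/images/out_$DATE.jpg 2>&1 | tee -a /opt/demo/logs/imagenet.log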

Let me know your thoughts or questions in the comments section.


Topics: jetson nano, nvidia, cuda, tensorflow, deep learning, ai, iot, cloudera, apache nifi
