
Integrating NVIDIA Jetson TX1 Running TensorRT Into Deep Learning DataFlows With Apache MiniFi

Learn to integrate the NVidia Jetson TX1 developer kit into deep learning DataFlows with Apache MiniFi in this detailed tutorial.

· Integration Zone ·

Use Case

Ingest sensor, image, voice, and video data from moving vehicles and run deep learning inside the vehicle itself, then transport the data and messages to remote data centers via Apache MiniFi and NiFi over secure site-to-site (S2S) HTTPS.
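On the vehicle side, MiniFi's flow is defined by a config.yml. The sketch below is illustrative only: the host name, port id, and input directory are placeholders, and in practice you would design the flow in NiFi and convert the exported template with the minifi-toolkit rather than hand-writing the file.

```yaml
MiniFi Config Version: 1
Flow Controller:
  name: jetson-ingest
Processors:
- name: GetFile
  class: org.apache.nifi.processors.standard.GetFile
  scheduling strategy: TIMER_DRIVEN
  scheduling period: 10 sec
  Properties:
    Input Directory: /opt/demo/images
Connections:
- name: GetFile/success/ToNiFi
  source name: GetFile
  source relationship name: success
  destination name: ToNiFi
Remote Processing Groups:
- name: NiFi
  url: https://nifi-host:9443/nifi
  timeout: 30 secs
  yield period: 10 sec
  Input Ports:
  - id: 0000-placeholder-port-id
    name: ToNiFi
```

The Remote Processing Group is what carries the flow files to the central NiFi instance over S2S; the input port id must match a real port on that NiFi server.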

Background

The NVIDIA Jetson TX1 is a specialized developer kit that packages a powerful GPU as an embedded device for robots, UAVs, and other specialized platforms. I envision it being used in field trucks for intermodal, utilities, telecommunications, delivery services, government, and other industries with field vehicles.

Installation and Setup

You will need a host workstation running Ubuntu 16.04 with enough disk space and network access; it downloads all of the software and pushes it over the network to your NVIDIA Jetson TX1. You can download Ubuntu here (https://www.ubuntu.com/download/desktop). Fortunately, I had a mini PC with 4 GB of RAM that I reformatted with Ubuntu to serve as the host PC for building my Jetson. You cannot flash the Jetson from a Mac or Windows machine.

You will need a monitor, mouse, and keyboard for your host machine and another set for your NVIDIA Jetson.

First step: boot your NVIDIA Jetson, set up WiFi networking, and make sure your monitor, keyboard, and mouse work.

Make sure you download the latest NVIDIA JetPack on your host Ubuntu machine from https://developer.nvidia.com/embedded/jetpack. The version I used was JetPack 3.1, which included 64-bit Ubuntu 16.04, cuDNN 6.0, TensorRT 2.1, and CUDA 8.0.

Initial login: ubuntu/ubuntu

After installation, it will be nvidia/nvidia.

Please change that password; security is important, and this GPU could do some serious Bitcoin mining.

    sudo su
    apt update
    apt-get install git zip unzip autoconf automake libtool curl zlib1g-dev maven swig bzip2
    apt-get purge libopencv4tegra-dev libopencv4tegra
    apt-get purge libopencv4tegra-repo
    apt-get update
    apt-get install build-essential
    apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
    apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
    apt-get install python2.7-dev
    apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
    apt-get install libgtkglext1 libgtkglext1-dev
    apt-get install qtbase5-dev
    apt-get install libv4l-dev v4l-utils qv4l2 v4l2ucp
    cd $HOME/NVIDIA-INSTALL
    ./installer.sh

Download and run the NVIDIA Jetson TX1 JetPack installer from the host Ubuntu computer: ./JetPack-L4T-3.1-linux-x64.run.

This will run on the host for about an hour and requires a network connection between the two machines and a few reboots.

I added a 64 GB SD card, as the built-in storage on the Jetson is tiny. I would recommend adding a big SATA hard drive.

    umount /dev/sdb1
    mount -o umask=000 -t vfat /dev/sdb1 /media/
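Note that a manual mount like this does not survive a reboot. One way to make it stick is an /etc/fstab entry; the sketch below uses an assumed device name and mount point (check yours with lsblk), and nofail keeps the board booting when the card is absent.

```shell
# Assumed device /dev/sdb1 and mount point /media/sdcard; vfat and
# umask=000 mirror the manual mount command above.
FSTAB_LINE='/dev/sdb1  /media/sdcard  vfat  umask=000,nofail  0  0'
echo "$FSTAB_LINE"
# To apply, run as root:
#   mkdir -p /media/sdcard
#   echo "$FSTAB_LINE" >> /etc/fstab && mount -a
```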

Turn on the fan on the Jetson: echo 255 > /sys/kernel/debug/tegra_fan/target_pwm.

Download MiniFi at https://nifi.apache.org/minifi/download.html or https://hortonworks.com/downloads/#dataflow. You will need to install JDK 8.

    sudo add-apt-repository ppa:webupd8team/java
    sudo apt update
    sudo apt install oracle-java8-installer -y
    # download minifi-0.2.0-bin.zip, then:
    unzip minifi-0.2.0-bin.zip
    cd minifi-0.2.0
    bin/minifi.sh start

In the next part, we will classify images.

Part 2: Classifying Images with ImageNet

Part 3: Detecting Faces in Images

Part 4: Ingesting with MiniFi and NiFi

Shell Call Example:

/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/detectnet-console pic-007.png outputface7.png facenet
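To run detection over many images, the one-off call above can be wrapped in a small script. This is a hypothetical sketch: BIN_DIR, the output-naming convention, and run_facenet are my own inventions, not part of jetson-inference.

```shell
#!/bin/bash
# Hypothetical wrapper around detectnet-console; adjust BIN_DIR to where
# jetson-inference was built on your Jetson.
BIN_DIR=${BIN_DIR:-$HOME/jetson-inference/build/aarch64/bin}

# Derive an output name: backupimages/tim.jpg -> outputtim.png
out_name() {
  local base
  base=$(basename "$1")
  echo "output${base%.*}.png"
}

# Run facenet detection on one image, writing the annotated copy
# to the current directory.
run_facenet() {
  "$BIN_DIR/detectnet-console" "$1" "$(out_name "$1")" facenet
}
```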

Source Code: https://github.com/tspannhw/jetsontx1-TensorRT

NVIDIA also provides a good example C++ program for detecting faces, so we try that out next. You can add more training data to improve the results, but it found me just fine.

In the next step, we'll connect to MiniFi.

Shell Source:

root@tegra-ubuntu:/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin# ./facedetect.sh
detectnet-console
  args (4):  0 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/detectnet-console]
             1 [backupimages/tim.jpg]  2 [images/outputtim.png]  3 [facenet]

detectNet -- loading detection network model from:
          -- prototxt    networks/facenet-120/deploy.prototxt
          -- model       networks/facenet-120/snapshot_iter_24000.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- threshold   0.500000
          -- batch_size  2

[GIE] attempting to open cache file networks/facenet-120/snapshot_iter_24000.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/facenet-120/snapshot_iter_24000.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel loaded
[GIE] CUDA engine context initialized with 3 bindings
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel input binding index: 0, dims (b=2 c=3 h=450 w=450) size=4860000
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 0 coverage binding index: 1, dims (b=2 c=1 h=28 w=28) size=6272
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 1 bboxes binding index: 2, dims (b=2 c=4 h=28 w=28) size=25088
networks/facenet-120/snapshot_iter_24000.caffemodel initialized.
maximum bounding boxes: 3136
loaded image backupimages/tim.jpg (400 x 400) 2560000 bytes
detectnet-console: beginning processing network (1505047556083)
[GIE] layer deploy_transform input reformatter 0 - 4.594114 ms
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 - 24.272917 ms
...
[GIE] layer bbox/regressor - 2.062864 ms
[GIE] layer network time - 301.203705 ms
detectnet-console: finished processing network (1505047556394)
1 bounding boxes detected
bounding box 0  (17.527779, -34.222221)  (193.388885, 238.500000)  w=175.861115  h=272.722229
detectnet-console: writing 400x400 image to 'images/outputtim.png'
detectnet-console: successfully wrote 400x400 image to 'images/outputtim.png'

References:

Use This Project: https://github.com/dusty-nv/jetson-inference

Building this project creates a C++ executable that runs ImageNet classification with TensorRT.

Shell Call Example

root@tegra-ubuntu:/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin# ./runclassify.sh
imagenet-console
  args (3):  0 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/imagenet-console]
             1 [backupimages/granny_smith_1.jpg]  2 [images/output_0.jpg]

imageNet -- loading classification network model from:
         -- prototxt     networks/googlenet.prototxt
         -- model        networks/bvlc_googlenet.caffemodel
         -- class_labels networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   2

[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/bvlc_googlenet.caffemodel loaded
[GIE] CUDA engine context initialized with 2 bindings
[GIE] networks/bvlc_googlenet.caffemodel input binding index: 0, dims (b=2 c=3 h=224 w=224) size=1204224
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1, dims (b=2 c=1000 h=1 w=1) size=8000
networks/bvlc_googlenet.caffemodel initialized.
imageNet -- loaded 1000 class info entries
loaded image backupimages/granny_smith_1.jpg (1000 x 1000) 16000000 bytes
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 - 6.144531 ms
...
[GIE] layer prob - 0.092343 ms
[GIE] layer network time - 84.158958 ms
class 0948 - 1.000000 (Granny Smith)
imagenet-console: 'backupimages/granny_smith_1.jpg' -> 100.00000% class #948 (Granny Smith)
imagenet-console: attempting to save output image to 'images/output_0.jpg'
imagenet-console: completed saving 'images/output_0.jpg'
shutting down...
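Downstream, for example before handing results to MiniFi as flow file attributes, the top class can be scraped out of that console output. This is a sketch: top_class is my own helper, and the result-line format is an assumption based on the run above.

```shell
#!/bin/bash
# Hypothetical helper: pull "948|Granny Smith" out of an imagenet-console
# result line such as:
#   imagenet-console: '...granny_smith_1.jpg' -> 100.00000% class #948 (Granny Smith)
top_class() {
  sed -n 's/.*class #\([0-9]*\) (\(.*\)).*/\1|\2/p'
}
```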

Input File

Output File

The output image is annotated with the highest-probability class for what's in the image. One of mine says sunscreen, which is a bit weird; I am guessing it's because my original image is very sunny.

Source Code: https://github.com/tspannhw/jetsontx1-TensorRT

Resources

  1. http://www.jetsonhacks.com/2017/01/28/install-samsung-ssd-on-nvidia-jetson-tx1/
  2. https://github.com/PhilipChicco/pedestrianSys
  3. https://github.com/jetsonhacks?tab=repositories
  4. https://github.com/Netzeband/JetsonTX1_im2txt
  5. https://github.com/DJTobias/Cherry-Autonomous-Racecar
  6. https://github.com/jetsonhacks/postFlashTX1
  7. https://github.com/jetsonhacks/installTensorFlowTX1
  8. http://www.jetsonhacks.com/2016/12/21/jetson-tx1-swap-file-and-development-preparation/



Opinions expressed by DZone contributors are their own.
