Real-Time Data Ingestion With PiCamera


Helping further open the door between IoT and Big Data, this hands-on walkthrough details how you tune Raspberry Pi images and video for use in Big Data platforms.



I found a cool utility called JP2A that converts JPEGs to ASCII art, so I converted some PiCamera images to ASCII art.



With that simple script, you can convert an image stored in HDFS to text.
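A minimal sketch of that conversion step, assuming the `jp2a` binary is on the PATH (the HDFS fetch itself, e.g. via `hdfs dfs -get`, is left out here):

```python
import shutil
import subprocess


def jpeg_to_ascii(jpeg_path, width=80):
    """Convert a local JPEG to ASCII art with the jp2a CLI.

    Returns the ASCII text, or None if jp2a is not installed.
    """
    if shutil.which("jp2a") is None:
        return None  # jp2a not available on this machine
    result = subprocess.run(
        ["jp2a", "--width={0}".format(width), jpeg_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```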

Image title

My main processing is done in Python, which activates the camera and retrieves still images, a burst, or videos.   
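Roughly, those three capture modes look like this with the picamera API (a sketch that only actually runs on a Raspberry Pi with the camera enabled; the filenames, counts, and durations are illustrative):

```python
try:
    from picamera import PiCamera  # only available on a Raspberry Pi
except ImportError:
    PiCamera = None


def capture_still(path="still.jpg"):
    """Grab a single still image."""
    with PiCamera() as camera:
        camera.capture(path)


def capture_burst(count=5):
    """Grab a burst of stills; picamera fills in {counter} per frame."""
    with PiCamera() as camera:
        for i, name in enumerate(
                camera.capture_continuous("burst{counter:02d}.jpg")):
            if i + 1 >= count:
                break


def capture_video(path="clip.h264", seconds=10):
    """Record a short H.264 video clip."""
    with PiCamera() as camera:
        camera.start_recording(path)
        camera.wait_recording(seconds)
        camera.stop_recording()
```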

To attach the camera to your Raspberry Pi 3B+, power down first, make sure you are grounded and carry no static electricity, then connect the camera's ribbon cable to the camera port; the package shows you which connector to use. To install the Python library for the camera, read this install document for 2.7. All you have to do is pip install picamera.

Raspberry Pis and other small devices often have cameras or can have cameras attached. Raspberry Pis have cheap camera add-ons that can capture still images and videos. Using a simple Python script, we can capture images and then ingest them into our central Hadoop data lake. This is a nice, simple use case for connected data platforms with both data in motion and data at rest. This data can be processed in-line with deep learning libraries like TensorFlow for image recognition and assessment. Using OpenCV and other tools, we can process data in motion and look for issues like security breaches, leaks, and other events.

The most difficult part is the Python code, which reads from the camera, adds a watermark, converts the image to bytes, sends it through MQTT, and then uploads it to an FTP server. I do both because networking is always tricky. You could also add a fallback: if the script fails to connect to either, store the file in a directory on a mapped USB drive and send it out once the network returns. That would be easy to do with MiNiFi, which could read that directory.
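That store-and-forward fallback could be sketched like this (the spool directory and the sender callables are hypothetical placeholders; MiNiFi would later drain the spool directory):

```python
import os
import shutil


def send_or_spool(img_path, senders, spool_dir):
    """Try each sender (e.g. an MQTT publish, an FTP upload) in turn.

    If every transport fails, park the file in the spool directory so a
    local agent such as MiNiFi can pick it up when the network returns.
    Returns True if any sender succeeded, False if the file was spooled.
    """
    for send in senders:
        try:
            send(img_path)
            return True
        except Exception:
            continue  # this transport is down, try the next one
    os.makedirs(spool_dir, exist_ok=True)
    shutil.move(img_path, os.path.join(spool_dir, os.path.basename(img_path)))
    return False
```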


Once the file lands in the MQTT broker or on the FTP server, NiFi pulls it into the flow. I first store it to HDFS, our permanent data-at-rest storage, for future deep learning processing. I also run three processors to extract image metadata and then call JP2A to convert the image into an ASCII picture.

For the Python code, it's simple:


import base64
import ftplib
import json
import os
import random
import string
import paho.mqtt.client as mqtt
import picamera
from time import gmtime, strftime


def randomword(length):
    return ''.join(random.choice(string.ascii_lowercase) for i in range(length))


# Create a unique image name
img_name = 'pi_image_{0}_{1}.jpg'.format(randomword(3), strftime("%Y%m%d%H%M%S", gmtime()))

# Capture an image from the Pi camera, watermarked for NiFi
camera = picamera.PiCamera()
camera.annotate_text = " Stored with Apache NiFi "
camera.capture(img_name, resize=(500, 281))

# Send the image over MQTT, base64-encoded so the payload is valid JSON
client = mqtt.Client()
client.connect("cloudmqttiothoster", 14162, 60)

with open(img_name, 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('ascii')
message = json.dumps({"image": {"bytearray": encoded}})
print(client.publish("image", payload=message, qos=1, retain=False))

# FTP the image as a second transport
ftp = ftplib.FTP()
ftp.connect("ftpserver", 21)
ftp.login("reallyLongUserName", "FTP PASSWORDS SHOULD BE HARD")
with open(img_name, 'rb') as f:
    ftp.storbinary('STOR ' + img_name, f)
ftp.quit()

# Clean up the sent file
os.remove(img_name)

For an IoT use case, I grab the image and then FTP it to a cloud server as well as send an MQTT message via a cloud MQTT broker. You can transmit files by various means, and these two are easy. You will need to use pip to install libraries as necessary.
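The MQTT payload deserves a note: raw JPEG bytes can't be spliced into a JSON string directly, so base64-encoding them first keeps the message valid JSON. Factored into a small helper, that step looks like this (the field names simply mirror the message shape used in the script):

```python
import base64
import json


def build_image_message(img_path):
    """Wrap a JPEG's bytes in the JSON envelope sent over MQTT."""
    with open(img_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({"image": {"bytearray": encoded}})
```

On the NiFi side, the consumer can base64-decode that field to recover the original image bytes.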


Opinions expressed by DZone contributors are their own.
