
Using Deep Speech in Streaming Big Data Flows

Explore using speech-to-text in streams in Big Data environments.


Deep Speech With Apache NiFi 1.8

Tools: Python 3.6, PyAudio, TensorFlow, Deep Speech, Shell, Apache NiFi

Why: Speech-to-Text

Use Case: Voice control and recognition.

Series: Holiday Use Case: Turn on Holiday Lights and Music on command.

Cool Factor: Ever want to run a query on Live Ingested Voice Commands?

Other Options: Voice Controlled with AIY Voice and NiFi

We use Python 3.6 with PyAudio, TensorFlow, and Deep Speech to capture audio, store it in a WAV file, and then process it with Deep Speech to extract text. This example runs on macOS without a GPU, on TensorFlow v1.11.
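A minimal sketch of such a capture-and-transcribe script. The function names, durations, and model constants here are illustrative (taken from the Mozilla v0.3 examples), not the author's actual processnifi.py:

```python
import json
import time
import wave

RATE = 16000      # Deep Speech expects 16 kHz, 16-bit mono audio
SECONDS = 5       # how long to listen for a command

def record_to_wav(path, seconds=SECONDS, rate=RATE):
    """Capture audio from the default microphone into a WAV file."""
    import pyaudio  # assumption: PyAudio is installed
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=rate,
                     input=True, frames_per_buffer=1024)
    frames = [stream.read(1024) for _ in range(int(rate / 1024 * seconds))]
    stream.stop_stream()
    stream.close()
    pa.terminate()
    wf = wave.open(path, "wb")
    wf.setnchannels(1)
    wf.setsampwidth(pyaudio.get_sample_size(pyaudio.paInt16))
    wf.setframerate(rate)
    wf.writeframes(b"".join(frames))
    wf.close()

def transcribe(wav_path, model_path, alphabet_path):
    """Run the Deep Speech v0.3 pre-trained model over a WAV file."""
    import numpy as np
    from deepspeech import Model
    # 26 cepstral features, 9 frames of context, beam width 500:
    # the defaults used in the Mozilla v0.3 examples
    ds = Model(model_path, 26, 9, alphabet_path, 500)
    wf = wave.open(wav_path, "rb")
    audio = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
    wf.close()
    return ds.stt(audio, RATE)

def to_record(voice_string):
    """Wrap the transcript in the JSON record our NiFi flow expects."""
    return json.dumps({"systemtime": time.strftime("%m/%d/%Y %H:%M:%S"),
                       "voice_string": voice_string})
```

Printing the JSON record to stdout is enough for NiFi to pick it up from the shell script's output.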

The Mozilla GitHub repo for their Deep Speech implementation has good getting-started information, which I used to integrate our flow with Apache NiFi.

Installation, per the Deep Speech documentation:

pip3 install deepspeech
wget -O - https://github.com/mozilla/DeepSpeech/releases/download/v0.3.0/deepspeech-0.3.0-models.tar.gz | tar xvfz -

This pre-trained model is available for English. For other languages, you will need to train your own; a beefy HDP 3.1 cluster works well for training.

Apache NiFi Flow

The flow is simple: we call a shell script that runs a Python program to record audio and send it to Deep Speech for processing.

We get back a voice_string in JSON that we turn into a record for querying and filtering in Apache NiFi.
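In NiFi, a QueryRecord processor can then filter those records with SQL against the flow file. A hypothetical routing property that passes along only non-empty transcripts might look like:

```sql
SELECT systemtime, voice_string
FROM FLOWFILE
WHERE voice_string <> ''
```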

I am handling a few voice commands: "Save," "Load," and "Move." As you can imagine, you can handle pretty much anything you want. It's a simple way to use voice to control streaming data flows, or just to ingest large streams of text. Keep in mind that even advanced deep learning speech-to-text is not perfect, so expect some transcription errors.
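A sketch of how those commands might be matched once a transcript lands. The command names come from the article; the matching logic is a simple illustration, not the flow's actual implementation:

```python
COMMANDS = ("save", "load", "move")

def route_command(voice_string):
    """Return the first known command found in a transcript, or None.

    Deep Speech output is lower-case and may contain extra words,
    so we match on individual words rather than exact phrases.
    """
    words = voice_string.lower().split()
    for command in COMMANDS:
        if command in words:
            return command
    return None
```

In the NiFi flow, the same idea can be expressed as RouteOnContent or QueryRecord predicates instead of Python.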

If you load balance connections between nodes, you can choose both a compression setting and a load-balancing strategy. This comes in handy when you have many servers.

Shell Script

python3.6 /Volumes/TSPANN/projects/DeepSpeech/processnifi.py /Volumes/TSPANN/projects/DeepSpeech/models/output_graph.pbmm /Volumes/TSPANN/projects/DeepSpeech/models/alphabet.txt

Schema

{
  "type" : "record",
  "name" : "voice",
  "fields" : [ {
    "name" : "systemtime",
    "type" : "string",
    "doc" : "Type inferred from '\"12/10/2018 14:53:47\"'"
  }, {
    "name" : "voice_string",
    "type" : "string",
    "doc" : "Type inferred from '\"\"'"
  } ]
}

We can add more fields as needed.
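To sanity-check incoming payloads against those two required string fields, a quick stdlib helper (illustrative; the field names come from the schema above):

```python
import json

REQUIRED_FIELDS = ("systemtime", "voice_string")  # from the Avro schema

def valid_voice_record(raw_json):
    """True if the payload parses as JSON and has both required string fields."""
    try:
        record = json.loads(raw_json)
    except ValueError:
        return False
    if not isinstance(record, dict):
        return False
    return all(isinstance(record.get(f), str) for f in REQUIRED_FIELDS)
```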

Example Run

HW13125:DeepSpeech tspann$ ./runnifi.sh
TensorFlow: v1.11.0-9-g97d851f04e
DeepSpeech: unknown
2018-12-10 14:36:43.714433: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
{"systemtime": "12/10/2018 14:36:43", "voice_string": "one two three or five six seven eight nine"}

We can run this on top of YARN 3.1 as dockerized or non-dockerized workloads.

Setting up nodes to run HDF 3.3 (Apache NiFi and friends) is easy in the cloud or on-premises in OpenStack with solid DevOps tools.

When running Apache NiFi, it is easy to monitor in Ambari.


Topics:
deep learning, big data, hortonworks, apache nifi, python, ai, ai tutorial, deep speech tutorial

Opinions expressed by DZone contributors are their own.
