
Putting Streaming ML Into Production


What options are there for deploying ML and DL models into production?


Productionize Streaming ML

So you've done all the work in your hosted Jupyter or Zeppelin notebooks, and you are ready to deploy your machine learning and deep learning models for real production use cases. What do you need to do and think about beforehand? There are some common features that every system will need:

  • Classification/Run Your ML
  • REST API
  • Security
  • Automation
  • Data Lineage
  • Data Provenance
  • Scripting
  • Integration with Kafka

An optional, but increasingly necessary, feature is the ability to run Dockerized workloads.
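To make the first two items on that checklist concrete, here is a minimal sketch of a classification REST endpoint. The framework (Flask), the model file, and the field names are my own illustrative choices, not part of any tool discussed below.

```python
# Minimal sketch: serve a pre-trained model behind a REST endpoint.
# "model.pkl" is a hypothetical scikit-learn-style model saved with joblib.
from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)
model = joblib.load("model.pkl")  # load once at startup, not per request

@app.route("/classify", methods=["POST"])
def classify():
    # Expect JSON like {"features": [1.2, 3.4, 5.6]}
    features = request.get_json()["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": str(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Security, automation, and lineage then layer on top of an endpoint like this rather than being baked into the model itself.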

So what are my options if I don't want a proprietary, vendor-locked-in solution? Let's get beyond the black box and build a multi-cloud/hybrid-cloud solution in pure open source.

Here are some options that I have used.

Native Apache NiFi Processors for Running Machine Learning Workloads In-Stream

Note: No company supports these processors yet; they are community releases by me.

TensorFlow

You can use my TensorFlow processor to easily classify images as they pass through a NiFi dataflow.
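The processor itself is written in Java, but conceptually it runs each flow file's image bytes through a frozen TensorFlow graph, along the lines of this Python sketch (the graph file name is a placeholder; the tensor names are the ones used by the stock Inception v3 image classifier, which I am assuming here):

```python
# Sketch of Inception-style image classification with a frozen TF 1.x graph.
import tensorflow as tf

# Load the frozen graph once; a NiFi processor would do this at startup.
graph_def = tf.GraphDef()
with tf.gfile.GFile("inception_frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name="")

with tf.Session() as sess:
    # In NiFi, these bytes would come from the incoming flow file.
    image_bytes = open("input.jpg", "rb").read()
    softmax = sess.graph.get_tensor_by_name("softmax:0")
    scores = sess.run(softmax, {"DecodeJpeg/contents:0": image_bytes})
    print(scores.argmax())  # index of the most likely label
```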

I am working on similar processors for Deeplearning4j and H2O.ai; they are straightforward to use if you wish to try them.

My solution covers all the basic requirements, and NiFi itself can run Dockerized.

Using a Library/Framework Specific Tool

The MXNet Model Server works with Apache MXNet and ONNX models. ONNX is supported by a number of other frameworks, and converters are available. This is an easy-to-set-up REST service that can be hosted on Kubernetes or YARN 3.1. Check out my article for an example.
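Once the server is up, scoring is a single HTTP call. In this sketch, the host, port, and model name ("squeezenet") are placeholders; adjust them to match your deployment:

```python
# Hedged sketch of a prediction request against MXNet Model Server.
import requests

with open("kitten.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:8080/predictions/squeezenet",  # assumed endpoint
        files={"data": f},
    )
print(response.json())  # top classes with probabilities
```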

TensorFlow Serving

TensorFlow Serving can host classification workloads on HDP 3.1.
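TensorFlow Serving exposes a REST predict endpoint; a minimal query looks like the sketch below, where the model name ("my_model") and the input vector are placeholders for your own model's signature:

```python
# Sketch of a call to TensorFlow Serving's REST predict API.
import requests

payload = {"instances": [[1.0, 2.0, 5.0]]}  # shape depends on your model
response = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    json=payload,
)
print(response.json()["predictions"])
```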

Data provenance, data lineage, and other governance features can be added through Apache Atlas or Apache NiFi.

Run Your Classification on YARN 3.1

TensorFlow on YARN 3.1 Example
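As a rough sketch of that approach, YARN 3.1's services REST API can launch a Dockerized TensorFlow container. Everything below (the ResourceManager host, Docker image, launch command, and resource numbers) is a placeholder, not taken from the linked example:

```python
# Heavily hedged sketch: submit a one-container Dockerized service to YARN 3.1.
import requests

service_spec = {
    "name": "tf-demo",
    "version": "1.0",
    "components": [{
        "name": "worker",
        "number_of_containers": 1,
        "artifact": {"id": "tensorflow/serving:latest", "type": "DOCKER"},
        "launch_command": "/usr/bin/tf_serving_entrypoint.sh",
        "resource": {"cpus": 2, "memory": "4096"},
    }],
}
response = requests.post(
    "http://resourcemanager:8088/app/v1/services",  # YARN services API
    json=service_spec,
)
print(response.status_code, response.text)
```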

Apache NiFi → Apache Livy → Apache Spark → TensorFlow

It is very easy to use a NiFi processor to submit Spark workloads through Apache Livy, and those Spark jobs can run TensorFlow.
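Under the hood, that NiFi step is a REST call to Livy. Here is a sketch of the equivalent call made directly, where the Livy host and the HDFS path of the PySpark script (which could itself import TensorFlow) are assumptions:

```python
# Sketch: submit a PySpark batch job through Apache Livy's REST API.
import requests

job = {
    "file": "hdfs:///jobs/tf_inference.py",  # hypothetical PySpark script
    "name": "tensorflow-inference",
    "executorMemory": "2g",
}
response = requests.post(
    "http://livy-server:8998/batches",
    json=job,
    headers={"Content-Type": "application/json"},
)
print(response.json())  # batch id and initial state
```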


MLeap or OpenScoring (PMML) With Apache Atlas

This looks like the best option if you need governance, security, and a full toolkit. Greg Keys' amazing work orchestrating this process has the brightest future for success.
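For the OpenScoring side of this, the REST flow is two steps: deploy a PMML document, then score records against it. The model id, file name, and field names below are placeholders (iris-style features, purely for illustration):

```python
# Hedged sketch of the OpenScoring REST API: deploy a PMML model, score a row.
import requests

base = "http://localhost:8080/openscoring/model/iris"  # assumed model id

# Step 1: deploy the PMML document.
with open("iris_rf.pmml", "rb") as f:
    requests.put(base, data=f, headers={"Content-Type": "text/xml"})

# Step 2: score a single record.
record = {
    "id": "record-001",
    "arguments": {"sepal_length": 5.1, "sepal_width": 3.5,
                  "petal_length": 1.4, "petal_width": 0.2},
}
response = requests.post(base, json=record)
print(response.json()["result"])
```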

Deploy Your Model to the Edge

NVIDIA, Hortonworks, Cloudera, and other companies are making tools to smooth out this process.

If your model doesn't change often, pushing a binary to a box (or boxes) isn't rocket science. There are also specialized IoT cameras and devices that you can use. I will have an upcoming article on three affordable AI camera options: JeVois, PixyCam 2, and the OpenMV H7 cam.

Deploy Your Model as PMML to Hortonworks Streaming Analytics Manager

SAM provides an easy way to include PMML execution in your complex event processing and streaming. This works very well with fast Kafka workloads.
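For context, the events that SAM scores typically arrive on a Kafka topic; a producer feeding such a topic might look like this sketch, where the broker address, topic name, and event fields are all placeholders:

```python
# Sketch: push JSON events onto a Kafka topic for a SAM topology to consume.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("sensor-events", {"sensor_id": 7, "temperature": 72.4})
producer.flush()
```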

None of these options is wrong; the right choice depends on your environment and your needs.

Please comment with suggestions and questions.


