
Visual Recognition With TensorFlow and OpenWhisk


Image recognition gets easier every day. Here's an interesting approach with TensorFlow and Kubernetes that involves predicting types of flowers.


My colleague Ansgar Schmidt and I have built a new demo that uses TensorFlow to predict types of flowers. The model is trained in a Kubernetes cluster on the IBM Cloud. Via a web application, pictures can be uploaded to trigger the prediction code, which is executed as an OpenWhisk function.
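OpenWhisk Python actions follow a simple convention: a `main` function receives a dictionary of parameters and returns a dictionary as the result. A minimal sketch of what such a prediction action's skeleton could look like is below; the `imageUrl` parameter name and the placeholder result are assumptions for illustration, not the demo's actual code:

```python
# Sketch of an OpenWhisk Python action skeleton (not the demo's actual code).
# OpenWhisk invokes main() with a dict of parameters and expects a dict back.
def main(params):
    image_url = params.get("imageUrl", "")
    if not image_url:
        return {"error": "no imageUrl given"}
    # In the real action, the TensorFlow prediction code would download the
    # image and run it through the retrained graph here. We return a
    # placeholder result for illustration only.
    return {"label": "daisy", "score": 0.98}

# Local smoke test of the entry point:
print(main({"imageUrl": "http://example.com/flower.jpg"}))
```

The same `main(dict) -> dict` shape is what `wsk action invoke` calls when the function runs in the cloud.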

We'd like to open source and document this demo soon. Keep an eye on our blogs. For now, here is a screenshot of the web application.

As a starting point for this demo, we used a Google code lab, TensorFlow for Poets. The lab shows how to leverage the transfer learning capabilities in TensorFlow. Essentially, you can take a predefined visual recognition model and retrain only the last layer of the neural network for your own categories. TensorFlow provides several visual recognition models; since we wanted to run the prediction code in OpenWhisk, we chose the smaller MobileNet model.
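The transfer learning idea can be sketched in miniature: treat the pretrained network as a frozen feature extractor and train only a final softmax layer on your own categories. In the hedged sketch below, random clustered vectors stand in for the real MobileNet "bottleneck" features; only the last layer's weights are trained, mirroring what the retraining script does:

```python
import numpy as np

# Transfer learning in miniature: the pretrained network is treated as a
# frozen feature extractor; only the final softmax layer is trained on the
# new categories. Random clustered vectors stand in for MobileNet bottlenecks.
rng = np.random.default_rng(0)
num_classes, feature_dim, per_class = 3, 16, 40

# Fake bottleneck features: each class clusters around its own random center.
centers = rng.normal(size=(num_classes, feature_dim))
X = np.vstack([c + 0.1 * rng.normal(size=(per_class, feature_dim))
               for c in centers])
y = np.repeat(np.arange(num_classes), per_class)

# The only trainable parameters: the last (softmax) layer.
W = np.zeros((feature_dim, num_classes))
b = np.zeros(num_classes)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for step in range(500):  # mirrors --how_many_training_steps=500
    probs = softmax(X @ W + b)
    probs[np.arange(len(y)), y] -= 1           # gradient of cross-entropy
    W -= 0.1 * X.T @ probs / len(y)
    b -= 0.1 * probs.mean(axis=0)

accuracy = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the expensive feature extraction is reused unchanged, only this small last-layer problem needs to be solved, which is why retraining finishes in minutes rather than days.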

Over the next few days, we'll blog about how to use Kubernetes and Object Storage to train and store your own models and how to use OpenWhisk to execute predictions.

If you want to experiment before then, you can run the following commands locally.

$ docker run -it --rm gcr.io/tensorflow/tensorflow /bin/bash
$ apt-get update
$ apt-get install -y git
$ git clone https://github.com/googlecodelabs/tensorflow-for-poets-2
$ cd tensorflow-for-poets-2
$ curl http://download.tensorflow.org/example_images/flower_photos.tgz | tar xz -C tf_files
$ IMAGE_SIZE=224
$ ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
$ python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/flower_photos

The training takes roughly five minutes on my MacBook Pro. After this, you'll find the retrained model in tf_files/retrained_graph.pb and the category labels in tf_files/retrained_labels.txt.

In order to run a prediction, run the following command. Since this image was part of the training set, you might want to fetch another image first (e.g. via wget http://...) and change the image parameter.

$ python -m scripts.label_image \
    --graph=tf_files/retrained_graph.pb \
    --image=tf_files/flower_photos/daisy/21652746_cc379e0eea_m.jpg

As a result, you'll get something like this:

Evaluation time (1-image): 0.079s
daisy 0.979356
dandelion 0.0125334
sunflowers 0.00809442
roses 1.40769e-05
tulips 2.08439e-06
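If you want to post-process such scores programmatically (for instance, inside the OpenWhisk function to return only the top match), a few lines of Python suffice. The sample lines below are copied from the output above:

```python
# Parse "label score" lines as printed by label_image and pick the top label.
# The sample output is copied from the article.
output = """daisy 0.979356
dandelion 0.0125334
sunflowers 0.00809442
roses 1.40769e-05
tulips 2.08439e-06"""

scores = {}
for line in output.splitlines():
    label, value = line.rsplit(" ", 1)
    scores[label] = float(value)

best = max(scores, key=scores.get)
print(best, scores[best])  # → daisy 0.979356
```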

Stay tuned for more! 



Published at DZone with permission of
