
TensorFlow Unites Research and Production Efforts, Released by Google

Google open sources the machine learning library that drives products like Inbox and enables pure research.


Introduction

On November 9th, Google released TensorFlow, the Google Brain Team's machine learning library, to the public under the Apache 2.0 open source license.  

TensorFlow is a single system that retains the strengths of DistBelief, Google's first-generation machine learning infrastructure, such as scalability and production-readiness, while adding the flexibility and generality that pure research demands.

TensorFlow Overview

A low-overhead C++ core drives a dataflow graph, which is essentially a computational model. Nodes (stateful or not) represent mathematical operations or endpoints, while edges carry multi-dimensional data arrays, or tensors, between nodes. These flowing tensors give the library its name.
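To make the node-and-edge picture concrete, here is a minimal sketch (the values are purely illustrative): two constant nodes feed an addition operation, and the resulting tensor is fetched by running the graph in a session.

import tensorflow as tf

# Two constant nodes, each holding a 1-D tensor
a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])

# An operation node; the edge from it carries the summed tensor
c = tf.add(a, b)

# Nothing is computed until the graph is run in a session
with tf.Session() as session:
  print(session.run(c))  # [5. 7. 9.]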

The flexibility of graph construction is the key to mixed use: products like Google Inbox and Photos can take advantage of the same neural networks that researchers use to experiment with novel combinations of training tasks.

The architecture also allows computation to take place across distributed networks or on a single, confined device, because it abstracts away from the hardware that undergirds it.
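As a quick sketch of that hardware abstraction (the device string here is illustrative), the same operations can be pinned to a particular device without changing the rest of the graph:

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
  # Pin these operations to the first CPU; '/gpu:0' would target a GPU instead
  with tf.device('/cpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    product = tf.matmul(a, b)

with tf.Session(graph=graph) as session:
  print(session.run(product))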

TensorFlow currently has front-ends in C++ and Python. The Python example below builds and trains a simple softmax classifier:

import tensorflow as tf

# train_dataset, train_labels, rows, cols, num_labels, and num_steps
# are assumed to be defined elsewhere (e.g. loaded from a dataset)

# Create a new graph and make it the default for the following ops
graph = tf.Graph()
with graph.as_default():
  # Training data and labels as constant nodes
  examples = tf.constant(train_dataset)
  labels = tf.constant(train_labels)

  # Weight matrix (variable) and bias
  W = tf.Variable(tf.truncated_normal([rows * cols, num_labels]))
  b = tf.Variable(tf.zeros([num_labels]))

  # Training computation
  logits = tf.matmul(examples, W) + b
  loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, labels))

  # Gradient descent optimizer for the loss
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session(graph=graph) as session:
  tf.initialize_all_variables().run()
  for step in xrange(num_steps):
    # Run the optimizer and fetch the current loss value
    _, l = session.run([optimizer, loss])
    if step % 100 == 0:
      print('Step %d Loss: %f' % (step, l))
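The constants above bake the training set into the graph. TensorFlow also offers placeholder nodes that are fed fresh data on each run call, which suits mini-batch training. The sketch below feeds randomly generated batches purely for illustration; the sizes and batching logic are assumptions, not part of the example above.

import numpy as np
import tensorflow as tf

num_features, num_labels, batch_size = 784, 10, 128  # illustrative sizes

graph = tf.Graph()
with graph.as_default():
  # Placeholders receive a new mini-batch on every call to session.run
  examples = tf.placeholder(tf.float32, shape=[batch_size, num_features])
  labels = tf.placeholder(tf.float32, shape=[batch_size, num_labels])

  W = tf.Variable(tf.truncated_normal([num_features, num_labels]))
  b = tf.Variable(tf.zeros([num_labels]))
  logits = tf.matmul(examples, W) + b
  loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, labels))
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session(graph=graph) as session:
  tf.initialize_all_variables().run()
  for step in xrange(1000):
    # Random stand-in batches; real code would slice an actual dataset
    batch_x = np.random.rand(batch_size, num_features).astype(np.float32)
    batch_y = np.eye(num_labels)[np.random.randint(num_labels, size=batch_size)].astype(np.float32)
    _, l = session.run([optimizer, loss],
                       feed_dict={examples: batch_x, labels: batch_y})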

