TensorFlow Unites Research and Production Efforts, Released by Google
Google open sources the machine learning library that drives products like Inbox and enables pure research.
Introduction
On November 9th, Google released TensorFlow, the Google Brain Team's machine learning library, to the public under the Apache 2.0 open source license.
TensorFlow is a single system that retains the scalability and production-readiness of Google's first-generation machine learning infrastructure, DistBelief, while adding the flexibility and generality that pure research demands.
TensorFlow Overview
A low-overhead C++ core drives a dataflow graph, essentially a computational model. Nodes (stateful or not) represent mathematical operations or endpoints, while edges carry multi-dimensional data arrays, or tensors, between nodes.
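To make the node-and-edge picture concrete, here is a minimal sketch using the Python front end shown later in this article: two constant nodes produce tensors, and an edge carries each of them into a matrix-multiplication node.

import tensorflow as tf

# Two constant nodes produce tensors; edges carry them into the matmul op.
a = tf.constant([[1.0, 2.0]])    # 1x2 tensor
b = tf.constant([[3.0], [4.0]])  # 2x1 tensor
product = tf.matmul(a, b)        # node consuming both tensors

with tf.Session() as session:
    print session.run(product)   # [[ 11.]]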
The flexibility of graph construction is the key to mixed use: products like Google Inbox and Photos run on the same neural networks that researchers use to experiment with novel combinations of training tasks.
Because the architecture abstracts away the hardware that undergirds it, computation can run across distributed networks or on a single, confined device.
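A minimal sketch of that abstraction: the same graph definition can be pinned to different devices with tf.device. The "/gpu:0" placement below is an assumption and only resolves on a machine with a GPU-enabled build; allow_soft_placement lets the runtime fall back to an available device otherwise.

import tensorflow as tf

with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
with tf.device("/gpu:0"):  # assumption: a GPU is present
    b = tf.matmul(a, a)

# Fall back to a supported device if the requested one is unavailable.
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as session:
    print session.run(b)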
TensorFlow currently has front ends in C++ and Python; the example below uses the Python API:
import tensorflow as tf
import numpy as np

# Placeholder shapes and data so the example runs end to end;
# substitute your own train_dataset and train_labels here.
rows, cols, num_labels, num_steps = 28, 28, 10, 1001
train_dataset = np.random.rand(200, rows * cols).astype(np.float32)
train_labels = np.eye(num_labels)[
    np.random.randint(num_labels, size=200)].astype(np.float32)

# New graph
graph = tf.Graph()
with graph.as_default():
    # Training data and labels as constant nodes
    examples = tf.constant(train_dataset)
    labels = tf.constant(train_labels)
    # Weight matrix and bias: variables the optimizer will update
    W = tf.Variable(tf.truncated_normal([rows * cols, num_labels]))
    b = tf.Variable(tf.zeros([num_labels]))
    # Training computation
    logits = tf.matmul(examples, W) + b
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits, labels))
    # Gradient descent optimizer for the loss
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    for step in xrange(num_steps):
        # Run up to optimizer and loss, then save the loss value
        _, l = session.run([optimizer, loss])
        if step % 100 == 0:
            print 'Step %d loss: %f' % (step, l)
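Note that the example separates graph construction from execution: the graph is defined once under graph.as_default(), and the session then runs it repeatedly. That separation is what lets the same graph definition move between research prototypes and production systems, or be placed on different hardware, without changes.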