TensorFlow by Example


Learn how to use Google's TensorFlow by seeing these applications via the Python API.


So unless you live under a rock, you've heard of Google's new TensorFlow release. At first read, this may not seem like a big deal. After all, we already have Python-based mathematics and machine learning packages galore (I'm looking at you, Theano). So who cares about another one? Especially one that's brand new and not that well documented yet?

Well, you should, because TensorFlow is not Theano. While it does have some similarities, particularly around machine learning and neural networking applications, it's more general than that. At its core, TensorFlow lets you build numerical systems as data flow graphs, where tensors flow along the edges of the graph to nodes, where computation happens. That structure is far more general than a neural networking library. For example, you can put together control systems in TensorFlow pretty easily. You can also model and simulate general data flow systems, like organizational structures. And most importantly, the developers promise to more fully implement a C++ API to support mobile development.
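
To make the data flow graph idea concrete, here's about the smallest possible example, written against the same graph-style API used throughout this article (newer TensorFlow releases default to eager execution, so treat this as an illustrative sketch rather than canonical usage):

import tensorflow as tf

# Nodes are operations; edges carry tensors between them.
a = tf.constant(3.0, name='a')
b = tf.constant(4.0, name='b')
c = tf.add(a, b, name='c')  # c depends on a and b in the graph

with tf.Session() as s:
    print(s.run(c))  # evaluating c walks the graph and prints 7.0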

That would allow developers to build their own on-device, pre-trained, artificially intelligent software components. Imagine building your own Siri or Cortana, or doing image recognition on-device. Certainly, you can build systems to do this today, but it's much more difficult. TensorFlow promises to make these kinds of software components much easier to build.

Great, How Do I Use It?

Well, look through the documentation! These examples come straight from it, and it's pretty rich. But it's also machine learning/neural network focused, which doesn't do much for you if you're working on some other kind of application. Honestly, though, the basics are pretty easy to pick up, and I'm going to show you some simple applications via the Python API.

I am going to use neural systems as an example, but I'm not going to use any of the pre-built networks TensorFlow comes with. First, we’re going to start with a single computational layer:

import tensorflow as tf
import numpy as np

NUM_CORES = 6

config = tf.ConfigProto(
    inter_op_parallelism_threads=NUM_CORES,
    intra_op_parallelism_threads=NUM_CORES
)

# Creating the placeholders. Note that we include names for more
# informative errors, and shapes so that TensorFlow can do static
# size checking.
x = tf.placeholder(tf.float32, shape=(1, 10), name='x')
W = tf.placeholder(tf.float32, shape=(10, 4), name='W')
b = tf.placeholder(tf.float32, shape=(1, 4), name='b')

# The fan-in to the summing junction and the summing
# operation.
y = tf.matmul(x, W) + b

# The activation function.
a = tf.nn.sigmoid(y)

# Adding a softmax filter.
m = tf.nn.softmax(a)

# The activation function doesn't really change here.

with tf.Session(config=config) as s:
    s.run(tf.initialize_all_variables())

    # Let's create some numpy matrices.
    # This is for a single layer of four neurons.
    W_in = np.random.rand(10, 4)
    x_in = np.random.rand(1, 10)
    b_in = np.random.rand(1, 4)

    val = s.run(m, 
        feed_dict={
            x: x_in, 
            W: W_in,
            b: b_in
        }
    )

    print(val)


Okay, so that’s it. What does this do?

This is just a simple matrix multiplication and addition, followed by the application of a non-linear function and a softmax evaluation of the result. The functions themselves are pretty straightforward and well defined, and the canned neural network primitives in TensorFlow will apply them automagically. We're interested in building these manually, though, so we're using the individual operations directly.
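
If it helps to see the arithmetic spelled out, here is the same forward pass written directly in NumPy. This is only a sanity check of what the graph computes, not part of the TensorFlow program, and the max-subtraction in the softmax is just the usual numerical-stability trick:

import numpy as np

def forward(x_in, W_in, b_in):
    y = x_in.dot(W_in) + b_in                  # fan-in and summing junction
    a = 1.0 / (1.0 + np.exp(-y))               # sigmoid activation
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)    # row-wise softmax

# forward(x_in, W_in, b_in) should match the value TensorFlow prints,
# up to floating-point precision.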

The key points here are the placeholder definitions and running the system. The placeholders let you define particular tensor structures that TensorFlow can validate before the run loop ever starts, which makes debugging easier. Then you generate the data and run the computational graph via the session.run() method.
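
A quick sketch of that static checking in action: if you wire up placeholders whose shapes can't be multiplied, TensorFlow complains while the graph is being built, before any session runs (the exact exception text depends on your version):

import tensorflow as tf

x_bad = tf.placeholder(tf.float32, shape=(1, 10), name='x_bad')
W_bad = tf.placeholder(tf.float32, shape=(3, 4), name='W_bad')  # wrong fan-in on purpose

try:
    y_bad = tf.matmul(x_bad, W_bad)  # 10 columns can't multiply 3 rows
except ValueError as err:
    print('caught at graph-construction time:', err)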

Generalizing to Multiple Layers

So the previous example had only a single layer of computation. We can create a multiple-layer system with no error correction pretty easily:

import tensorflow as tf
import numpy as np

NUM_CORES = 6

config = tf.ConfigProto(
    inter_op_parallelism_threads=NUM_CORES,
    intra_op_parallelism_threads=NUM_CORES
)

# Creating the placeholders. Note that we include names for more
# informative errors, and shapes so that TensorFlow can do static
# size checking.
x_0 = tf.placeholder(tf.float32, shape=(1, 10), name='x_0')
W_0 = tf.placeholder(tf.float32, shape=(10, 4), name='W_0')
b_0 = tf.placeholder(tf.float32, shape=(1, 4), name='b_0')

# The fan-in to the summing junction and the summing
# operation.
y_0 = tf.matmul(x_0, W_0) + b_0

# The activation function.
a_0 = tf.nn.sigmoid(y_0)

# Now for the second layer. Its input is the first layer's output
# (a_0), so we don't need another input placeholder here.
# x_1 = tf.placeholder(tf.float32, shape=(1, 4), name='x_1')
W_1 = tf.placeholder(tf.float32, shape=(4, 2), name='W_1')
b_1 = tf.placeholder(tf.float32, shape=(1, 2), name='b_1')

# The fan-in to the summing junction and the summing
# operation.
y_1 = tf.matmul(a_0, W_1) + b_1

# The activation function.
a_1 = tf.nn.sigmoid(y_1)

# Adding a softmax filter.
m = tf.nn.softmax(a_1)

# The activation function doesn't really change here.

with tf.Session(config=config) as s:
    s.run(tf.initialize_all_variables())

    # Let's create some numpy matrices, this time for both layers.
    W_0_in = np.random.rand(10, 4)
    x_0_in = np.random.rand(1, 10)
    b_0_in = np.random.rand(1, 4)
    W_1_in = np.random.rand(4, 2)
    b_1_in = np.random.rand(1, 2)

    val = s.run(m, 
        feed_dict={
            x_0: x_0_in, 
            W_0: W_0_in,
            b_0: b_0_in,
            W_1: W_1_in,
            b_1: b_1_in
        }
    )

    print(val)


That's it! We now have a system with two layers, the first with ten inputs and four outputs, and the second with four inputs and two outputs. This architecture would be sufficient for two-class classification if we had implemented error correction and trained the network. Nevertheless, the architecture itself is surprisingly powerful. It also follows essentially the same design: we define the nodes in the system (the sigmoid and softmax functions) and the data that flows between them (the tensor placeholders).
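
If you wanted to add the error correction this paragraph mentions, the usual pattern is to turn the weights and biases into tf.Variable objects (so an optimizer can update them) and minimize a loss. Here's a rough sketch in the same style as the examples above; the learning rate, the one-hot target, and the cross-entropy loss are illustrative choices of mine, not part of the original example:

import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, shape=(1, 10), name='x')
labels = tf.placeholder(tf.float32, shape=(1, 2), name='labels')

# Weights and biases are now variables, so they can be trained.
W_0 = tf.Variable(tf.random_uniform((10, 4)), name='W_0')
b_0 = tf.Variable(tf.zeros((1, 4)), name='b_0')
W_1 = tf.Variable(tf.random_uniform((4, 2)), name='W_1')
b_1 = tf.Variable(tf.zeros((1, 2)), name='b_1')

# Same two-layer forward pass as before.
a_0 = tf.nn.sigmoid(tf.matmul(x, W_0) + b_0)
m = tf.nn.softmax(tf.nn.sigmoid(tf.matmul(a_0, W_1) + b_1))

# A simple cross-entropy loss and a plain gradient-descent step.
loss = -tf.reduce_sum(labels * tf.log(m + 1e-10))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as s:
    s.run(tf.initialize_all_variables())
    x_in = np.random.rand(1, 10)
    y_in = np.array([[1.0, 0.0]])  # a made-up one-hot target
    for _ in range(100):
        s.run(train_step, feed_dict={x: x_in, labels: y_in})
    print(s.run(m, feed_dict={x: x_in}))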

Overall, TensorFlow has been stable in my experience and easy to program with, once you understand how to correctly order and dimension the tensors. Good luck, and happy flowing!
