
TPU 2.0 or AI for Everyone


At Google I/O 2017, its annual developer conference, Google announced a brand-new AI processor that gives AI enthusiasts new ways to dig deeper into machine learning and neural network training.

Tensor Processing Unit 2.0

Image source: Google Blog

The Tensor Processing Unit (TPU) is a chip designed by Google for AI and neural network workloads. The first model was announced back in 2016 as an alternative to traditional CPUs for machine learning. Over the past year, Google's TPUs have helped the AlphaGo system outplay Lee Sedol by speeding up its prediction and decision making. TPU computing power is also used to process queries from Google Search, Google Translate, Google Photos, and other products.

First TPU

Image source: Google Cloud Platform Blog

The first chip helped the search giant develop its own AI services, but it could only run trained networks, not train them. TPU 2.0, showcased at Google I/O 2017, goes much further by letting you both train and operate neural networks at impressive speed. Sixty-four boards of 180 teraflops each are combined into a TPU pod, so every pod provides about 11,500 teraflops of computing power. Through the new cloud service, a business can build and operate software via the internet, backed by thousands of TPUs.
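
Those pod figures check out with simple arithmetic, as this quick sketch shows:

```python
# Back-of-the-envelope check of the pod numbers quoted above.
BOARD_TFLOPS = 180    # per second-generation TPU board, as announced
BOARDS_PER_POD = 64

pod_tflops = BOARDS_PER_POD * BOARD_TFLOPS
print(pod_tflops)  # 11520, i.e. "about 11,500 teraflops"
```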

Why TPU 2.0 Is a Revolution

Businesses normally use GPUs to train their neural networks, a market NVIDIA has dominated for several years; with the release of TPU 2.0, Google could easily shift it. Neural networks require a great deal of computing power to train, and conventional processors, even hundreds of them, are not the best way to manage network training: they demand too much time and electrical power. According to Jeff Dean, head of Google Brain, a task that takes about one day on conventional hardware can be handled within six hours by a single TPU pod. Such an impressive speed advantage can bring a lot of AI enthusiasts to the market.
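
To put that claim in perspective, here is the speedup implied by the quoted figures (a rough back-of-the-envelope number, not a benchmark):

```python
# Implied speedup from the quoted figures: ~1 day on conventional
# hardware vs. ~6 hours on a TPU pod. Illustrative only.
conventional_hours = 24
tpu_pod_hours = 6
print(conventional_hours / tpu_pod_hours)  # 4.0, i.e. a ~4x speedup
```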

Any Drawbacks?

To benefit from TPU 2.0, businesses will have to adopt a brand-new way of training and running neural networks. It's not solely about a new chip; it's also about TensorFlow, Google's framework for neural networks and machine learning. Although TensorFlow is an open-source library, many AI researchers use other software for network training. We at Azoft prefer the Caffe framework, and moving to another library would be time-consuming and would demand an enormous optimization effort.
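
For readers who haven't used it, here is a minimal TensorFlow sketch in the 1.x-era graph style that was current at the time. The toy network and data are illustrative, not a real benchmark; the point is to show the kind of API a team migrating from Caffe would need to learn:

```python
# Minimal TensorFlow 1.x-era example: fit y = 2x + 1 with one dense layer.
import numpy as np
import tensorflow as tf

# Toy data (illustrative): noisy samples of a linear function.
xs = np.random.rand(100, 1).astype(np.float32)
ys = 2.0 * xs + 1.0 + np.random.normal(0, 0.05, (100, 1)).astype(np.float32)

x = tf.placeholder(tf.float32, shape=[None, 1])
y = tf.placeholder(tf.float32, shape=[None, 1])

pred = tf.layers.dense(x, units=1)            # a single dense layer
loss = tf.reduce_mean(tf.square(pred - y))    # mean squared error
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        _, current_loss = sess.run([train_op, loss],
                                   feed_dict={x: xs, y: ys})
    print("final loss:", current_loss)
```

Part of TensorFlow's appeal here is that the same graph definition can target different hardware backends, which is exactly what the TPU story depends on.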

Takeaway

It's no exaggeration to say that AI is omnipresent today. In mobile development, retail, medicine, finance, and beyond, everyone is trying to take advantage of the technology. Google brings it to a whole new level with its next-gen chip.

Google's TPU farms are now operational via Google Compute Engine, meaning any company or individual can use these resources both for inference and for AI training. Enthusiasts can deploy experiments faster than ever using TensorFlow, which Google has optimized for the new hardware.
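
As a quick sanity check, TensorFlow can report which compute devices the local runtime sees. Note that Cloud TPUs are attached as remote workers rather than local devices, so this sketch only covers the local machine:

```python
# Lists the devices visible to the local TensorFlow runtime
# (CPU, plus GPUs if present). Cloud TPUs attach as remote workers,
# so they won't appear here; this is just a local sanity check.
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.device_type, device.name)
```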

For now, Google is giving access to a cluster of 1,000 Cloud TPUs through its TensorFlow Research Cloud program. Everyone who is willing to share their findings with the world through the TensorFlow community can use this computing power for free.
