The Big Deal With Deep Learning
Sometimes hyped as the road to "Strong AI," deep learning deserves a closer look: find out what the future holds for this revolutionary Big Data use case.
A colleague recently asked me if deep learning lives up to the hype. Is it really revolutionary or just the same old thing dressed up with a new name? As with most fads, the truth is somewhere in between.
Let's start by dismissing the hype. I've heard deep learning referred to as the breakthrough that will lead to "Strong AI" (universal, general-purpose AI), a claim that has gotten some rich (Elon Musk), smart (Stephen Hawking) guys (Steve Wozniak) all riled up about the end of humanity. If we've learned anything from terms like "Big Data" and the "Internet of Everything," such wild claims may be a tad overblown.
But there is more to deep learning than the hype, and it is a significant step forward. Microsoft now uses deep learning algorithms to improve recognition tasks of its virtual assistant, Cortana, including identifying dog breeds and scoring better than humans on a widely-accepted image classification test. And Google made headlines a few years back with a system that recognized cats in YouTube videos.
Earth shattering? Probably not. But these are steps in the right direction. Here are a few ways deep learning is worthy of more than a footnote in history.
Increasingly Abstract Patterns
Deep learning and the algorithms behind it are inspired by two complementary fields: brain science and education.
To help see the connection, it is useful to know how the brain works at a high level. Groups of neurons are each responsible for detecting a specific type of pattern: they process incoming signals and fire when they detect a match. In reality, each group consists of many neurons firing through biochemical mechanisms, but this approximation works for our purposes.
These groups of neurons are logically stacked together in layers. Each "layer" is responsible for detecting types of patterns at a certain level of abstraction. As you go "up" the layers, the inputs to those neurons are the outputs of pattern recognizers from lower layers. As you add more layers, you get more abstract patterns.
Ray Kurzweil describes this beautifully in his recent book, How to Create a Mind. Using the visual cortex as an example, he describes how a lower layer of neurons detects edges, colors, and orientations, while the next level up detects "this is a horizontal line," "this is a semi-circle," "this is a diagonal line," and so on. As you continue stepping up layers, you get more abstract patterns, like letters and eventually words.
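To make the layered idea concrete, here is a minimal sketch in Python with NumPy. The weights are random (real networks learn them), and the names `w_edges`, `w_shapes`, and `w_letters` are purely illustrative labels for the kinds of detectors each layer might learn; each layer's output simply becomes the next layer's input.

```python
import numpy as np

def relu(x):
    """Simple nonlinearity: a detector 'fires' only when its input is strong enough."""
    return np.maximum(0, x)

def layer(inputs, weights):
    """One layer of pattern detectors: each row of weights acts as one detector."""
    return relu(weights @ inputs)

rng = np.random.default_rng(0)

# A toy three-layer stack: raw pixels -> edges -> shapes -> letters.
pixels = rng.random(64)                    # a flattened 8x8 "image"
w_edges = rng.standard_normal((32, 64))    # layer 1: 32 edge-like detectors
w_shapes = rng.standard_normal((16, 32))   # layer 2: 16 shape-like detectors
w_letters = rng.standard_normal((8, 16))   # layer 3: 8 letter-like detectors

edges = layer(pixels, w_edges)     # low-level patterns
shapes = layer(edges, w_shapes)    # built from edge activations
letters = layer(shapes, w_letters) # built from shape activations

print(letters.shape)
```

The point of the sketch is the data flow, not the weights: each successive layer only ever sees the pattern-match scores of the layer below it, which is exactly why its detections are more abstract.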
Learning is a Process
Deep learning is not just throwing a ton of hardware at the same old neural networks, as some articles have mistakenly described it. Research groups have tried that before, typically with poor results.
The key innovation comes from, of all places, education. Rather than training a neural network all at once, researchers began modeling neural networks using layers (similar to how the brain works) and training each layer in succession. This makes a lot of sense, as it is how we learn. In school, we first learn basic shapes, then our letters, then basic words, all the way up to abstract concepts like emotions and scientific disciplines.
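A hedged sketch of that layer-by-layer idea, again in Python with NumPy: each layer is trained on its own (here as a toy tied-weight autoencoder that learns to reconstruct its input) before the next layer is trained on its output. This is only an illustration of the training order; production systems of the era used techniques like contrastive divergence on restricted Boltzmann machines, with far more care than this.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_layer(data, hidden, epochs=200, lr=0.01):
    """Train one layer to reconstruct its own input (tied-weight autoencoder).
    Plain gradient descent on squared reconstruction error, for illustration only."""
    n_features = data.shape[1]
    w = rng.standard_normal((n_features, hidden)) * 0.1
    for _ in range(epochs):
        h = np.tanh(data @ w)   # encode: detect patterns in the input
        recon = h @ w.T         # decode: reconstruct the input from the patterns
        err = recon - data
        # Gradient w.r.t. w through both the encode and decode paths
        grad = data.T @ (err @ w * (1 - h**2)) + err.T @ h
        w -= lr * grad / len(data)
    return w

# Layer by layer: train layer 1 on raw data, then layer 2 on layer 1's output.
data = rng.random((100, 20))          # 100 toy samples, 20 raw features
w1 = train_layer(data, hidden=10)
h1 = np.tanh(data @ w1)               # layer 1's learned representation
w2 = train_layer(h1, hidden=5)
h2 = np.tanh(h1 @ w2)                 # a smaller, more abstract representation

print(h2.shape)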
... And the Kitchen Sink
As with many other technology trends, hardware and software must both evolve before a breakthrough occurs. The notion of deep belief networks is not entirely new, but until recently they would have been far too computationally intensive to be even remotely practical.
In the case of deep learning, you really do need to "throw the kitchen sink" at the problem. Companies with enough resources (think Google, Microsoft and Facebook) can now assemble massive clusters of reasonably priced computers that are powerful enough to train these "deep networks" and detect meaningful abstract patterns. To get a sense of scale, the Google project that identified cats in YouTube videos used roughly 16,000 processors. Take that, Grumpy Cat.
To make the most of deep learning, you also need a lot of data to analyze. Thanks to the exponential increase in data from the Internet, sensors, and businesses, we have another kitchen sink we can throw at this problem.
I am not worried about an imminent AI-pocalypse. Instead, I am excited for the future and how a new generation of smart machines, able to learn more like we do, can improve our lives. Here are just a few possibilities:
- Smarter decision support tools that improve the effectiveness of government policy and NGO relief efforts
- Better educational software that can serve as a virtual tutor for those without the means to pay for professional assistance
- Personalized medicine that greatly reduces the trial and error necessary to get a condition under control (Assurex Health is just one example of this exciting space)
- Virtual digital assistants that learn our habits and preferences, helping us organize our calendar, manage tasks, and be more productive
Matt Coatney is an AI expert, data scientist, software developer, technology executive, author, and speaker. His mission is to improve how we interact with smart machines by making software smarter and teaching people how to work (and cope) with advanced technology. Great things happen when smart people and smart machines work together toward a common goal.
Published at DZone with permission of Matt Coatney, DZone MVB.
Opinions expressed by DZone contributors are their own.