
AI Will Not Eat the World


...well, not for a good while anyway.


So I work at the intersection of cybersecurity and machine learning. I use a variety of neural network architectures and machine learning techniques to develop new ways of detecting malware, and I've worked on other machine learning and AI projects as well.

And we have nothing to worry about.

We're about to enter a new valley of despair with AI technologies. The explosion in neural networks that has led to self-driving cars, autonomous drones, and other modern AI applications rests on two developments: more complex, biologically inspired network architectures, and GPUs.

These kinds of networks, like the ubiquitous convolutional networks we use in deep learning applications today, were originally inspired by the structure of the visual cortex and were first applied to image recognition. The earliest CNNs delivered dramatic performance gains over the methods previously used for character recognition, essentially solving handwritten digit recognition on the MNIST data set. These networks were certainly novel in their architecture, but the techniques they used weren't really anything new. They represented an evolution of neural methods, not the revolution they seemed to be.
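To make that concrete, here's a minimal PyTorch sketch of the kind of small convolutional network described above. It isn't from the original article, and the layer sizes are placeholders rather than a tuned architecture; it just shows the convolution-pool-classify pattern applied to 28x28 grayscale digits.

```python
# A minimal LeNet-style convolutional network sketch (illustrative only).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two convolution/pooling stages, loosely mirroring early CNNs
        # used for handwritten digit recognition.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

if __name__ == "__main__":
    # Sanity check with a batch of fake 28x28 grayscale images.
    model = SmallCNN()
    dummy = torch.randn(8, 1, 28, 28)
    print(model(dummy).shape)  # torch.Size([8, 10])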

But what really enabled these deep, complex architectures was computational power. Neural models are fast once trained, but training them is another matter: propagating error back through many layers is expensive and slow, and the deep architectures convolutional networks rely on simply took too long to train. When graphics processing units and associated development tools like CUDA became genuinely usable and affordable, that kind of training suddenly became feasible. Networks that took days to train on a CPU could be trained in hours.
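Here's a hedged sketch of what that looks like in practice: one gradient step in PyTorch that runs on a CUDA device when one is available and falls back to the CPU otherwise. The model and batch are stand-ins, not anything from the article; the point is that backpropagation is the expensive part, and it is exactly what GPUs parallelize well.

```python
# One training step, placed on a GPU when available (illustrative only).
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Fake batch standing in for real training data.
inputs = torch.randn(64, 784, device=device)
targets = torch.randint(0, 10, (64,), device=device)

# Backpropagating the error through every layer is the costly part;
# moving this loop to a GPU is what turned days of training into hours.
optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"device={device}, loss={loss.item():.4f}")
```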

But we're beginning to hit the end of the road with AI applications. All of the obvious problems have been, if not solved, at least approached with neural network-based architectures. And they've been reasonably successful, though we've certainly had some notable failures like Uber's self-driving car fiasco and recidivism prediction systems, which seem to codify training set bias more than anything else.

The simple fact is that, yes, you can train these kinds of systems to do amazing things if you have immense amounts of data, which you do if you're Google, Facebook, or Netflix. Otherwise, it's not so easy. And curating unbiased data so that a model learns the features you actually care about, rather than coincidental but meaningless ones, is much harder than people think.
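A toy illustration of that coincident-feature problem, entirely synthetic and not from the article: in the training set below, a spurious feature happens to track the label perfectly, so the model leans on it; when that coincidence breaks at evaluation time, accuracy collapses toward chance.

```python
# Spurious-correlation sketch with scikit-learn (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0 is weakly informative; feature 1 is pure coincidence in training.
y_train = rng.integers(0, 2, n)
x_informative = y_train + rng.normal(0, 2.0, n)   # noisy real signal
x_spurious = y_train.astype(float)                # tracks the label by accident
X_train = np.column_stack([x_informative, x_spurious])

# At "deployment", the coincidence is gone: feature 1 is just noise.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([y_test + rng.normal(0, 2.0, n), rng.random(n)])

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # near-perfect
print("test accuracy:",  model.score(X_test, y_test))    # roughly chance level
```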

We don't have another GPU revolution on the horizon. We'll certainly continue to develop new algorithms, but we won't have a shiny new engine to run them. And we'll tax GPU clusters just like we did CPU clusters and hit limits with them too. And sure, we have lots of work to do trying to understand the internals of these kinds of systems and what we can do to protect them from interference.

But general intelligence? Mass replacement of jobs with AI? Forget about it. Not with the tools we have today.


