Artificial intelligence is positioned to make significant changes in our lives over the next ten years. Those changes will be good for some and not so good for others. We'll see traffic fatalities drop as autonomous cars come into widespread use, but at the same time, we'll see taxi drivers and truckers lose their jobs. We'll have more time on our hands as fully featured personal assistants become more capable and widespread, but we'll have even less occasion to connect with our fellow human beings as more of our day-to-day worries are handled by our digital Fridays. AI is certainly moving forward quickly as a viable approach to application development and problem-solving. But is it mature? Well, yes and no.
First, what do we even mean when we call a technology mature? Is it simply wide adoption? The existence of frameworks that make the technology easier to apply? Something else entirely?
Well, let's take a look at Wikipedia, the internet's first-stop arbiter of all things, which just so happens to have an entry for mature technology:
A mature technology is a technology that has been in use for long enough that most of its initial faults and inherent problems have been removed or reduced by further development. In some contexts, it may also refer to technology that has not seen widespread use, but whose scientific background is well understood.
One of the key indicators of a mature technology is the ease of use for both non-experts and professionals. Another indicator is a reduction in the rate of new breakthrough advances related to it — whereas inventions related to a (popular) immature technology are usually rapid and diverse, and may change the whole use paradigm — advances to a mature technology are usually incremental improvements only.
So a mature technology has been around, is stable, and is pretty easy to use. It also has a slow rate of change, and we know how it does whatever it does.
Where does AI fall on this list?
Well, first, AI has been around forever. I mean, we started dreaming about this stuff and putting the fundamentals in place almost 70 years ago. That's when Rosenblatt first came up with the essential design for an artificial neuron — the same neuron design we use in deep learning networks today. That's right: the basic element of our self-driving cars and voice recognition systems was originally thought up a bit before 1960. That's pretty old-school.
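To make that concrete, here's a minimal sketch of that neuron design in Python: a weighted sum of inputs passed through a step activation. The weights and bias below are hand-picked for illustration (they make the neuron compute logical AND); a real perceptron learns these values from data rather than having them supplied.

```python
def neuron(inputs, weights, bias):
    """Rosenblatt-style artificial neuron: fire (return 1) if the
    weighted sum of the inputs plus a bias exceeds zero."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Illustrative hand-picked parameters: with these, the neuron
# behaves like a logical AND gate over two binary inputs.
weights = [1.0, 1.0]
bias = -1.5

for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", neuron([a, b], weights, bias))
```

Swap the step function for a smooth activation like a sigmoid and you have, in essence, the unit that modern deep networks are still built from.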
The next big step in neural networks was error backpropagation. The foundations for it were developed in the 1960s, and error backpropagation was first used in 1973. That's a bit newer than the artificial neuron itself, sure, but not exactly cutting-edge either.
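As a toy illustration of the idea (not the original 1970s formulation), here's gradient descent on a single sigmoid neuron: the output error is propagated back through the chain rule to update the weight. The input, target, and learning rate are arbitrary illustrative values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One input, one weight, one training example -- illustrative values.
x, target = 1.0, 0.8
w = 0.0    # initial weight
lr = 0.5   # learning rate

def loss(w):
    y = sigmoid(w * x)
    return (y - target) ** 2

before = loss(w)
for _ in range(100):
    y = sigmoid(w * x)
    # Backpropagate the error via the chain rule:
    # dL/dw = dL/dy * dy/dz * dz/dw
    grad = 2 * (y - target) * y * (1 - y) * x
    w -= lr * grad
after = loss(w)
print(f"loss before: {before:.4f}, loss after: {after:.4f}")
```

The same chain-rule bookkeeping, applied layer by layer, is what trains the deep networks we use today.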
And other AI technologies are even older.
So it's been around for a while. It's also pretty easy to use today, compared to what it used to be like. You can pick up a framework like TensorFlow or Keras and assemble new network architectures in no time. Believe me, things were much more time-consuming even ten years ago, when we were building these systems in C/C++ and writing most of the algorithms and data structures ourselves.
Honestly, the rate of change is not that extreme, either. The basic algorithms we use in most AI applications today are not changing very quickly, though they're certainly changing more quickly than, say, carpentry or electrical transmission system design. Most of what we see as fast-moving AI-based technology stems from increasing computing power that allows us to apply AI techniques where we just couldn't before.
AI still falls short on one key measure, though: explainability. While we know how AI works in principle, more complex AI architectures, once trained, are quite opaque. Modern deep neural networks, for example, have complex neuronal architectures that examine input data at a very high level of abstraction. Convolutional neural networks, in wide use today for computer vision and object classification, render input data unintelligible after the first processing layer, and these networks often have tens of layers or more. This leads to situations in which a network correctly identifies what a picture contains most of the time but mysteriously fails in cases where the class of the pictured object is obvious (to us, at least). In most cases, these failures are very difficult to pin down; we just can't see the underlying dynamics of the network in a way that makes them visible.
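To see why the values become opaque, here's a toy, hand-rolled convolution (the core operation in those networks) applied to a tiny made-up "image" with a vertical edge. Even after this single layer, the output is a grid of edge-response scores that no longer looks anything like the input; stack ten or more such layers and the connection to the original pixels is gone.

```python
def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as in
    most deep learning libraries): slide the kernel over the image and
    take a weighted sum at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        output.append(row)
    return output

# A tiny illustrative "image" containing a vertical edge...
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# ...and a classic vertical-edge-detecting kernel.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(convolve2d(image, kernel))  # a 2x2 grid of edge scores
```

A human can still squint at one layer of edge scores; by the time those scores have been convolved, pooled, and recombined many times over, nobody can.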
Keep in mind that these systems are very specific. They're built and trained to solve narrowly defined problems. They're intended to recognize cats, for example, or drive cars, or transcribe a spoken language — not write a symphony. When will general AI be mature? Who knows? We haven't even developed one yet. Maybe never.
In most ways, AI is mature today.