Coping With Failure: Working Around Limitations in AI
Will the perfect AI system ever exist? Let's look at working around limitations in artificial intelligence and coping with failure.
Recent advances in machine learning, deep learning, and other branches of artificial intelligence have been impressive. Yet when AI fails (as we've seen with autonomous cars and Facebook's facial recognition software), we're not surprised. After all, computers are only as smart as their programmers, right?
Unlike humans, who fundamentally just "know" certain things, computers rely on levels of confidence. They are very seldom 100 percent sure of anything, and sometimes they're just wrong. Knowing this, how do we build systems that "work" even though the underlying models may lack confidence?
These Systems Do Learn
AI systems have shown an extraordinary ability to learn, and to learn fast. They gather information from legal documents, movies, and even people, correcting older experiences as new ones develop. Alexa, for example, learns through interactions with people all the time, making daily improvements to how queries are answered and how information is provided. Systems with explicit feedback mechanisms ("How am I doing?") will probably learn faster, but even systems with no feedback can learn by gathering more data.
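The feedback loop described above can be sketched in a few lines. This is an illustrative toy, not how Alexa or any real assistant actually works; the class name, scoring rule, and candidate answers are all assumptions made for the example.

```python
# Minimal sketch of feedback-driven improvement: keep a running score per
# answer and prefer answers that users have rated as helpful.
from collections import defaultdict

class FeedbackRanker:
    def __init__(self):
        # unseen answers default to a neutral score of 0.0
        self.scores = defaultdict(float)

    def best_answer(self, candidates):
        # prefer whichever candidate has earned the most positive feedback
        return max(candidates, key=lambda c: self.scores[c])

    def record_feedback(self, answer, helpful):
        # "How am I doing?" -- a thumbs up/down nudges future rankings
        self.scores[answer] += 1.0 if helpful else -1.0

ranker = FeedbackRanker()
ranker.record_feedback("weather report", True)
ranker.record_feedback("song lyrics", False)
print(ranker.best_answer(["song lyrics", "weather report"]))  # prints "weather report"
```

Even this crude scheme improves with every interaction, which is the point: the system does not need to be right at first, only to keep adjusting.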
Sounds pretty straightforward.
But for AI, learning directly from people isn't foolproof. First, we humans (the users) are inherently imperfect, and we can make things complicated. We don't always act in rational ways. Sometimes we use emotion rather than logic to make decisions, an activity that leads to some very amusing YouTube videos but also makes life hard for the poor AI system. Watch a child play silly games with Siri and you'll get an idea of how hard her virtual life is.
Second, programmed learning tends to focus on specific domains, limiting AI’s ability to change course or make corrections based on new information. Teach an AI robot to play pool and it can probably master the bank shot. But change the rules of the game and it may never properly adapt, making the same mistake over and over again.
We Embrace Uncertainty
Uncertainty is a key aspect of human reasoning, intelligence, and life itself. In many ways, the more we know we don’t know, the better. Adding this capability to AI should only make computers better, right?
Researchers at Google and Uber are working on smart AI systems that embrace uncertainty — machines that are encouraged to doubt themselves. The idea is that AI will make better decisions by measuring levels of confidence in a prediction or a decision. Self-doubt may not be such a bad thing: a machine that can give you a measure of how certain it is may be better prepared to avoid a fatal error or catastrophe.
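The simplest version of this idea is a decision rule that abstains when confidence is too low. The sketch below is illustrative only; the labels, probabilities, and threshold are assumptions for the example, not a description of any Google or Uber system.

```python
# Minimal sketch: act only when the model's confidence clears a threshold,
# otherwise defer to a human or a safer fallback behavior.

def decide(probabilities, threshold=0.9):
    """Return the top label if its probability clears the threshold;
    otherwise return "defer" rather than guessing."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return label
    return "defer"  # self-doubt as a feature, not a bug

print(decide({"pedestrian": 0.97, "shadow": 0.03}))  # prints "pedestrian"
print(decide({"pedestrian": 0.55, "shadow": 0.45}))  # prints "defer"
```

The threshold encodes how much risk the system is willing to take: a self-driving car might demand far more certainty before acting than a photo-tagging app would.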
Uber released a new probabilistic programming language called "Pyro" that merges deep learning with probabilistic modeling. According to Noah Goodman, AI expert and Stanford professor, giving deep learning the ability to handle probability makes it smarter. Rather than relying on a yes-or-no answer, a measure of certainty could have far-reaching consequences for complex engineering. Pyro is an intriguing return to the '80s era of fuzzy logic and bears close study. Nice work, Uber!
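To make the probabilistic idea concrete, here is a plain-Python Monte Carlo sketch, not actual Pyro code: instead of returning a single number, the system reports a point estimate together with a measure of its own doubt. The sensor model and noise level are invented for the example.

```python
# Sketch of the core idea behind probabilistic programming: propagate
# uncertainty through a computation by sampling, and report both an
# estimate and how much to trust it.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def noisy_sensor(true_value, noise=0.1):
    # hypothetical sensor reading with Gaussian noise
    return random.gauss(true_value, noise)

def estimate(true_value, n_samples=10_000):
    samples = [noisy_sensor(true_value) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / n_samples
    return mean, var ** 0.5  # point estimate plus a measure of doubt

mean, std = estimate(1.0)
print(f"estimate: {mean:.3f} +/- {std:.3f}")
```

A downstream consumer can then decide what to do with that spread: act on a tight estimate, or gather more data when the standard deviation is large. Real frameworks like Pyro automate this bookkeeping over entire deep-learning models.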
Are we at an age where computer systems faced with uncertainty make better decisions than humans? Some say yes, we are. Libratus, an AI-powered poker player, defeated four human poker experts in the 2017 "Brains vs. AI" competition (its predecessor, Claudico, had come up short against human pros in 2015). The victory illustrated one thing: a general set of algorithms built for imperfect-information situations can outperform humans, in limited domains, where calculation ability is especially valuable. Can Libratus get me a good deal at a flea market? Not yet!
When information is less than perfect, as it is in the game of no-limit Texas Hold'em, making smart moves is complicated. Libratus figured out how to counter the individual players' styles while fortifying its own defenses, and then created better strategies based on what the other players were doing.
An example that’s close to my heart (and my day job) is Adobe Scan, our free mobile app that turns your phone into a portable PDF scanner. Earlier releases of the app did a good job with larger documents like bank statements, but this latest update now handles the much smaller, very stylized world of business cards. We worked with our customers to "teach" the app how to recognize a broader range of documents that now includes business cards.
The Perfect AI System Will Never Exist, but the Technology Will Continue to Help Solve Problems
The most competent programmers can never fully anticipate the unpredictable, and despite our best efforts, the perfect AI system will never exist. Someone will inevitably speak with an accent, or something will be ambiguous. Intent is often unclear. Type "Boston" into Google and the search engine thinks it knows what you're looking for, but it can't be sure. Results turn up everything from Boston College to Boston news channels to how to make the best Boston baked beans. That's a reasonable design decision on Google's part: there simply isn't enough information to pick just one meaning.
Despite failure, AI systems will continue to help solve real-world problems. Mathematical models help determine who gets approved for a mortgage and even who gets called for a job interview. But when AI makes decisions on our behalf, the stakes are suddenly high. Different risks exist, and consumers will ultimately need to decide how much risk they’re willing to tolerate.
Are you driving to the airport or proofreading a research paper? The consequences of failure are vastly different: one can injure your body, the other your career. Users and practitioners alike must be aware of the risks and uncertainties in what they are trying to accomplish and plan accordingly.