The Questions That Reveal How Machines Think

How do machines think?

The Turing Test famously assesses artificial intelligence by tasking humans with working out whether they're talking to a person or a machine. It probes whether an AI understands human language well enough to hold natural-seeming conversations.

As anyone who has tried to have a conversation with an AI-powered chatbot or virtual assistant can attest, there remains some way to go before technology can master this most human of abilities. New research from the University of Maryland aims to help AI progress by identifying some 1,200 questions that, while pretty easy for humans to answer, have traditionally stumped the best technology available today.


“Most question-answering computer systems don’t explain why they answer the way they do, but our work helps us see what computers actually understand,” the researchers explain. “In addition, we have produced a dataset to test on computers that will reveal if a computer language system is actually reading and doing the same sorts of processing that humans are able to do.”

Smarter Machines

The researchers explain that many of the Q&A systems in operation today rely on either humans or computers to generate the questions that are designed to train the systems. The problem with this approach is that it’s not easy to understand why computers are struggling to answer the questions correctly. The researchers believe that by better understanding what stumps the machines, we can better design datasets to train them.

The team developed a system capable of showing its reasoning as it attempted each question. They believe this not only gives insight into the computer's processing but, if deployed in a live environment, would let the human questioner adapt their line of inquiry in response.

This partnership between human and machine produced 1,213 questions that reliably defeated the computer when it answered on its own.

“For three or four years, people have been aware that computer question-answering systems are very brittle and can be fooled very easily,” the authors explain. “But this is the first paper we are aware of that actually uses a machine to help humans break the model itself.”

The team believes that the questions will serve as a valuable dataset to better inform work in natural language processing, while also acting as a training dataset, especially as the questions uncovered six distinct phenomena that baffled AI-based systems.

These failures fell into either linguistic categories, such as paraphrasing or unexpected context, or reasoning categories, such as triangulating the various elements of a question or chaining multiple steps to reach a conclusion.
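To see why paraphrasing trips up brittle systems, consider a deliberately naive sketch (not the researchers' actual model): a question-answering function that matches questions by keyword overlap. It handles the stored phrasing but returns nothing for a paraphrase that any human would recognize as the same question.

```python
# A toy illustration of QA brittleness: answers depend on surface wording,
# not meaning. The facts, threshold, and scoring here are invented for
# demonstration only.

FACTS = {
    "who wrote hamlet": "William Shakespeare",
    "what is the capital of france": "Paris",
}

def answer(question):
    """Answer only when the question's words overlap heavily with a
    known question -- a stand-in for brittle pattern matching."""
    q_words = set(question.lower().rstrip("?").split())
    best, best_overlap = None, 0.0
    for known, ans in FACTS.items():
        k_words = set(known.split())
        overlap = len(q_words & k_words) / len(k_words)
        if overlap > best_overlap:
            best, best_overlap = ans, overlap
    return best if best_overlap >= 0.8 else None

print(answer("Who wrote Hamlet?"))           # stored phrasing: answered
print(answer("Hamlet was penned by whom?"))  # paraphrase: no answer
```

Modern neural systems are far more flexible than this keyword matcher, but the researchers' adversarial questions suggest that, under paraphrase and unexpected context, they can degrade in an analogous way.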

“Humans are able to generalize more and to see deeper connections,” the researchers explain. “They don’t have the limitless memory of computers, but they still have an advantage in being able to see the forest for the trees. Cataloging the problems computers have helps us understand the issues we need to address so that we can actually get computers to begin to see the forest through the trees and answer questions in the way humans do.”

Suffice it to say, there is some way to go before machines answer questions the way humans do, but the research is an interesting indication of the progress being made in enabling machines to navigate the nuances of human language.

Further Reading

What Developers Need to Know About Machine Learning in the SDLC

What I Think About Machines That Think


Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
