I recently took the ‘Learning How To Learn’ course from the University of California, San Diego on Coursera, which highlights the crucial role analogies play in the way humans learn.
Machines, by contrast, have long struggled with this kind of reasoning, and it’s a gap that researchers from Northwestern University are trying to close. They have developed a model, called the structure-mapping engine (SME), that allows machines to reason in a way that’s reminiscent of how humans do.
The model uses analogical problem solving in the same way that we use analogies to solve the various moral dilemmas we may face in our daily lives.
“In terms of thinking like humans, analogies are where it’s at,” the authors say. “Humans use relational statements fluidly to describe things, solve problems, indicate causality, and weigh moral dilemmas.”
Telling Right From Wrong
The model is built on the structure-mapping theory of analogy and similarity developed by psychologist Dedre Gentner. The theory has commonly been used to understand and explain a range of psychological phenomena.
The analogies we use range from the simple to the complicated, and previous models have struggled to cope with the full range people employ. The team believes the new model, however, can handle the size and complexity of the relational representations needed to enable visual reasoning.
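To give a feel for what structure-mapping means in practice, here is a minimal, hypothetical sketch in Python. It is not the Northwestern SME, which performs far richer structural alignment; it simply pairs up entities that play the same role in matching relations, using the classic solar-system/atom analogy as data:

```python
# Toy illustration of structure-mapping: align two relational
# descriptions by shared predicates and read off an entity mapping.
# This is a simplified sketch of the idea, not the actual SME.

def map_entities(base, target):
    """Pair entities that fill the same slot in matching relations.

    base/target: lists of (predicate, arg, arg, ...) tuples.
    Returns a dict mapping base entities to target entities.
    """
    mapping = {}
    for pred_b, *args_b in base:
        for pred_t, *args_t in target:
            if pred_b == pred_t and len(args_b) == len(args_t):
                # Same relation, same arity: align the arguments.
                for a, b in zip(args_b, args_t):
                    mapping.setdefault(a, b)
    return mapping

# The classic analogy: the solar system maps onto the atom.
solar = [("attracts", "sun", "planet"),
         ("revolves_around", "planet", "sun")]
atom = [("attracts", "nucleus", "electron"),
        ("revolves_around", "electron", "nucleus")]

print(map_entities(solar, atom))
# maps "sun" -> "nucleus" and "planet" -> "electron"
```

The point the toy example makes is the one the theory makes: the mapping is driven by the shared *relational* structure (what attracts what, what revolves around what), not by any surface similarity between suns and nuclei.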
“Relational ability is the key to higher-order cognition,” they say. “Although we share this ability with a few other species, humans greatly exceed other species in the ability to represent and reason with relations.”
The model is attractive because it offers a far more efficient method of learning than the brute force approach typified by machine learning, which relies on huge amounts of data.
It’s an approach that is increasingly popular with researchers, with a team from Georgia Tech recently releasing a paper that also used stories to teach machines right from wrong.
“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels, and other literature,” the authors say. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”
It’s an approach that the Northwestern team believes they have made real progress with.
“Given a new situation, the machine will try to retrieve one of its prior stories, looking for analogous sacred values, and decide accordingly,” they say.
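That retrieve-and-decide loop can be sketched in a few lines of Python. The data and scoring rule below are entirely hypothetical: each prior “story” is reduced to a set of relational facts, and the new situation is matched to the story sharing the most of them, whereas the real system performs structural alignment rather than simple overlap counting:

```python
# Hedged sketch of analogy-based retrieval: given a new situation,
# return the stored story with the greatest relational overlap.
# Story names and relations are invented for illustration only.

def retrieve(situation, stories):
    """Return the name of the stored story sharing the most relations."""
    situation_facts = set(situation)
    return max(stories, key=lambda name: len(situation_facts & set(stories[name])))

stories = {
    "white_lie": [("protects", "lie", "feelings"),
                  ("violates", "lie", "honesty")],
    "theft": [("violates", "act", "property"),
              ("harms", "act", "victim")],
}
situation = [("violates", "act", "property"),
             ("harms", "act", "bystander")]

print(retrieve(situation, stories))
# prints "theft" -- the story sharing the most relational facts
```

Once the closest prior story is retrieved, the values and outcome attached to it can guide the decision in the new case, which is the behavior the quote describes.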
The SME model has been put through its paces on a number of physics problems taken from the Advanced Placement test, with a second round of tests conducted on a number of visual problem-solving tasks.
It’s the kind of development that underlines the huge progress being made by artificial intelligence researchers in recent years, and to encourage further work on analogies, the team is releasing the source code for SME alongside a 5,000-example corpus.
They’re confident that using SME could lead both to new developments in AI and to a better understanding of how humans think and reason.
“SME is already being used in educational software, providing feedback to students by comparing their work with a teacher’s solution,” they conclude. “But there is a vast untapped potential for building software tutors that use analogy to help students learn.”