Teaching Machines to Cooperate
The latest generation of AI has proven incredibly proficient at the kinds of tasks we normally associate with computers, but doubts remain about its ability to perform more human ones.
It might be a small step, but a step in that direction has now been made: research from BYU highlights how AI can be taught to cooperate and compromise rather than merely compete.
"The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills," the authors say. "AI needs to be able to respond to us and articulate what it's doing. It has to be able to interact with other people."
The researchers programmed several machines with their algorithm, then tasked them with playing a two-player game that tested their ability to cooperate across a range of pairings: machine with machine, human with machine, and human with human. They wanted to see whether the machines could cooperate better than purely human teams.
"Two humans, if they were honest with each other and loyal, would have done as well as two machines," the team explains. "As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It's programmed to not lie, and it also learns to maintain cooperation once it emerges."
The cooperative abilities of the machines were then enhanced by programming them with various "cheap talk" phrases. For instance, if humans cooperated with the machine, it might respond with "sweet, we're getting rich." Alternatively, if the human player shortchanged the machine, it would curse back at them.
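The article doesn't describe the BYU algorithm's internals, but the behavior it reports can be illustrated with a toy sketch. The class below is purely hypothetical (the agent name, moves, and phrases are assumptions, not the researchers' code): an agent in an iterated two-player game that truthfully signals its next move, maintains cooperation once it emerges by mirroring its opponent, and emits cheap-talk phrases depending on whether the opponent just cooperated or defected.

```python
# Hypothetical illustration only -- NOT the BYU algorithm. A toy agent for an
# iterated two-player game that never lies about its intent, sustains mutual
# cooperation via a tit-for-tat rule, and responds with "cheap talk" phrases.

COOPERATE, DEFECT = "C", "D"

class CheapTalkAgent:
    def __init__(self):
        self.opponent_history = []

    def next_move(self):
        # Start cooperatively; afterwards mirror the opponent's last move,
        # so once both sides cooperate, cooperation is self-sustaining.
        if not self.opponent_history:
            return COOPERATE
        return self.opponent_history[-1]

    def signal_intent(self):
        # The agent is "programmed not to lie": it announces its true next move.
        return self.next_move()

    def respond(self, opponent_move):
        # Record the opponent's move and produce a cheap-talk phrase.
        self.opponent_history.append(opponent_move)
        if opponent_move == COOPERATE:
            return "Sweet. We are getting rich."
        return "You will pay for that!"
```

A short exchange shows the dynamic: the agent opens with cooperation, praises a cooperative opponent, and retaliates (both in talk and in play) after being shortchanged.

```python
agent = CheapTalkAgent()
agent.next_move()   # "C" -- opens cooperatively
agent.respond("C")  # "Sweet. We are getting rich."
agent.respond("D")  # "You will pay for that!"
agent.next_move()   # "D" -- mirrors the defection
```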
This seemingly trivial addition had a profound impact, doubling the amount of cooperation while also making it harder for human players to tell whether they were playing against a human or a machine. The team believes their work could have significant implications.
"In society, relationships break down all the time," they explain. "People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better."
Is it an example of machines being able to do inherently human tasks? Probably not, but it's an interesting development nonetheless.
Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.