
Teaching Machines to Cooperate


Is this an example of machines being able to do inherently human tasks? Probably not, but it's an interesting development nonetheless.


The latest generation of AI has proven itself incredibly proficient at the kinds of tasks we traditionally consider a strength of computers, but doubts remain about its ability to perform more inherently human tasks.

It might be a small step, but research from BYU marks progress in this direction, showing how AI can be taught to cooperate and compromise rather than simply compete.

"The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills," the authors say. "AI needs to be able to respond to us and articulate what it's doing. It has to be able to interact with other people."

Autonomous Cooperation

The researchers programmed several machines with their algorithm and then tasked them with playing a two-player game that tested their ability to cooperate across different kinds of relationships. The games were designed to examine a range of pairings: machine with machine, human with machine, and human with human. The goal was to test whether the machines could cooperate better than purely human teams.
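The article doesn't describe the researchers' algorithm or the exact games used, but a minimal sketch of this kind of repeated-game harness might look like the following. The payoff values, the tit-for-tat `CooperativeAgent`, and the coin-flipping `RandomHuman` are all stand-ins invented for illustration, not the study's actual setup.

```python
import random

# Payoffs for a simple two-player cooperation game (prisoner's-dilemma-style).
# These numbers are illustrative only; the BYU study used its own games.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

class CooperativeAgent:
    """Placeholder machine player: cooperates first, then mirrors its partner
    (tit-for-tat). This stands in for the researchers' algorithm, which the
    article does not detail."""
    def __init__(self):
        self.partner_last_move = None

    def move(self):
        return self.partner_last_move or "cooperate"

    def observe(self, partner_move):
        self.partner_last_move = partner_move

class RandomHuman:
    """Crude stand-in for a human player who 'lies' (defects) about half the time."""
    def move(self):
        return "defect" if random.random() < 0.5 else "cooperate"

    def observe(self, partner_move):
        pass

def play(rounds, a, b):
    """Play a repeated game and return the two players' total scores."""
    scores = [0, 0]
    for _ in range(rounds):
        move_a, move_b = a.move(), b.move()
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        scores[0] += pay_a
        scores[1] += pay_b
        a.observe(move_b)
        b.observe(move_a)
    return scores

# Compare the three pairings the study describes.
for name, pair in [("machine vs machine", (CooperativeAgent(), CooperativeAgent())),
                   ("human vs machine",   (RandomHuman(), CooperativeAgent())),
                   ("human vs human",     (RandomHuman(), RandomHuman()))]:
    print(name, play(50, *pair))
```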

"Two humans, if they were honest with each other and loyal, would have done as well as two machines," the team explains. "As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It's programmed to not lie, and it also learns to maintain cooperation once it emerges."

The machines' cooperative abilities were then enhanced by programming them with various "cheap talk" phrases. For instance, if the human cooperated with the machine, it might respond with "sweet, we're getting rich." If the human player shortchanged the machine instead, it could cuss back at them.
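The article quotes only those two phrases and doesn't explain how they were triggered, but conceptually this is a thin response layer on top of the game-playing agent. The snippet below is a purely illustrative guess at what that layer might look like; the extra phrase and the trigger conditions are assumptions, not the researchers' design.

```python
# Illustrative cheap-talk layer. Only the first two phrases come from the
# article; the third, and the trigger logic, are invented for illustration.
CHEAP_TALK = {
    "mutual_cooperation":  "Sweet, we're getting rich.",
    "betrayed":            "Curse you!",        # the article says the bot "cusses back"
    "propose_cooperation": "Let's cooperate.",  # invented placeholder
}

def cheap_talk(my_move, partner_move):
    """Pick a phrase to send after a round, based on how the round went."""
    if my_move == "cooperate" and partner_move == "cooperate":
        return CHEAP_TALK["mutual_cooperation"]
    if my_move == "cooperate" and partner_move == "defect":
        return CHEAP_TALK["betrayed"]
    return None  # stay silent otherwise

print(cheap_talk("cooperate", "defect"))  # -> "Curse you!"
```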

This seemingly trivial addition had a profound impact, doubling the amount of cooperation in the games while also making it harder for human players to tell whether they were playing against a human or a machine. The team believes their work could have far-reaching implications.

"In society, relationships break down all the time," they explain. "People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better."

Is it an example of machines being able to do inherently human tasks? Probably not, but it's an interesting development nonetheless.


