
New Model to Support Ethical Behavior in Robots


As robots and humans begin to work together more, researchers are working hard to ensure that robots behave in the right way.


As robots and humans begin to work together more, researchers are working hard to ensure that robots behave in the right way — both for themselves and their human peers.

For instance, I wrote earlier this year about an EU study looking at human-machine interactions. The project explores some of the current and potential ethical concerns around our use of robots, and indeed some of the unintended consequences of the technology as it evolves. Ultimately, it hopes to support the European Parliament, and through it the individual member states, as they deliberate the best policy framework for robotics and AI now and in the future.

It’s a growing field, and the latest such effort comes from researchers at the University of Hertfordshire, who have developed a system called Empowerment to help robots work safely alongside humans.

The work, which was recently published in Frontiers in Robotics and AI, takes particular aim at Isaac Asimov’s famous three laws of robotics. The authors argue that the very notion of harm is so complex and context-specific that it’s nearly impossible to explain to a machine. And if harm can’t be explained to machines, how can they avoid causing it?

Keeping Options Open

Rather than attempting to understand complex and challenging ethical questions, therefore, the team wanted to develop a system that allowed machines to keep their options open.

“Empowerment means being in a state where you have the greatest potential influence on the world you can perceive,” they explain. “So, for a simple robot, this might be getting safely back to its power station, and not getting stuck, which would limit its options for movement. For a more futuristic, human-like robot this would not just include movement, but could incorporate a variety of parameters, resulting in more human-like drives.”

Empowerment was coded mathematically in such a way as to be easily understandable and adopted by a robot. The concept built upon earlier work, with a key development being to ensure the robot maintains the human’s Empowerment.
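To make the idea concrete, here is a minimal sketch (an illustration, not the paper's actual implementation): in the Hertfordshire group's formulation, empowerment is the channel capacity between an agent's n-step action sequence and the sensor state it ends up in. For a deterministic world, this reduces to the log of the number of distinct states the agent can reach in n steps. The gridworld, wall positions, and action set below are invented for the example.

```python
# Hypothetical gridworld illustrating n-step empowerment.
# Assumption: deterministic dynamics, so empowerment reduces to
# log2 of the number of distinct states reachable in n steps.
from itertools import product
from math import log2

# Five actions: move in the four cardinal directions, or stay put.
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def step(state, action, walls, size):
    """Deterministic transition: a blocked move leaves the state unchanged."""
    x, y = state[0] + action[0], state[1] + action[1]
    if (x, y) in walls or not (0 <= x < size and 0 <= y < size):
        return state
    return (x, y)

def empowerment(state, walls, size, n=3):
    """n-step empowerment: log2 of the count of distinct reachable states."""
    reachable = set()
    for seq in product(ACTIONS, repeat=n):  # every n-step action sequence
        s = state
        for a in seq:
            s = step(s, a, walls, size)
        reachable.add(s)
    return log2(len(reachable))

# An open position offers many options; a corner boxed in by walls offers none.
open_emp = empowerment((2, 2), walls=set(), size=5)
stuck_emp = empowerment((0, 0), walls={(0, 1), (1, 0)}, size=5)
print(open_emp, stuck_emp)
```

A robot that prefers high-empowerment states will, exactly as the authors describe, avoid positions where its options collapse (the boxed-in corner has empowerment zero, since no action changes its state), and a robot that also maintains the *human's* empowerment will avoid actions that trap or constrain its human partner.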

“In a dangerous situation, the robot would try to keep the human alive and free from injury,” the team says. “We don’t want to be oppressively protected by robots to minimize any chance of harm; we want to live in a world where robots maintain our Empowerment.”

The team hopes that their work will provide an important cog in the overall development of more ethical robots. Time will tell how accurate that aspiration proves to be.


Topics: ai, robots, ethics

Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
