
Codifying the Ethics of Autonomous Cars


We have to determine whether moral values should be codified into machines, and whether we want machines to behave like humans or instead aspire to something better.


The rise of automated vehicles has prompted a range of ethical and moral discussions, largely revolving around constructs such as the trolley problem, which neatly encapsulates the kind of moral decision an autonomous vehicle might be forced to make. Historically, such decision-making has been considered beyond most autonomous systems.

A recent study from The Institute of Cognitive Science at the University of Osnabrück suggests that the kind of moral thinking humans undertake can, in fact, be accurately modeled for autonomous systems to use.

Virtual Training

The system uses immersive virtual reality to study human behavior across a wide range of simulated road-traffic scenarios. Participants were asked to drive through a typical suburban setting in foggy weather, where they encountered a series of unexpected dilemmas involving objects, animals, and humans that forced them to decide which should be spared.

The results were then fed into a statistical model, which helped the team develop a set of rules capable of explaining the observed behavior. The work suggests that our moral decision-making can not only be explained well but also modeled in a way that machines can understand.
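To make that idea concrete, here is a minimal sketch (not the study's actual model) of how relative "value of life" weights might be inferred from observed spare-or-sacrifice choices using a simple logistic, Bradley-Terry-style fit. The categories and trial data below are invented purely for illustration.

```python
# Hypothetical sketch: recover per-category "value of life" weights
# from observed dilemma choices. Data and categories are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

CATEGORIES = ["adult", "child", "dog", "traffic_cone"]  # hypothetical
IDX = {c: i for i, c in enumerate(CATEGORIES)}

# Each trial: (party_a, party_b, spared_a) -- did the driver spare A over B?
trials = [
    ("child", "adult", 1),
    ("adult", "dog", 1),
    ("dog", "traffic_cone", 1),
    ("child", "dog", 1),
    ("adult", "traffic_cone", 1),
    ("dog", "adult", 0),
]

def encode(a, b):
    """Feature vector: +1 for party A's category, -1 for party B's."""
    x = np.zeros(len(CATEGORIES))
    x[IDX[a]] += 1.0
    x[IDX[b]] -= 1.0
    return x

X = np.array([encode(a, b) for a, b, _ in trials])
y = np.array([spared for _, _, spared in trials])

model = LogisticRegression(fit_intercept=False).fit(X, y)

# The fitted coefficients act as relative value-of-life scores.
for cat, weight in zip(CATEGORIES, model.coef_[0]):
    print(f"{cat:>12}: {weight:+.2f}")
```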

“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object,” the authors say.
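Once such values exist, the decision rule itself can be very simple. The sketch below illustrates the kind of value-of-life lookup the quote describes, assuming a per-category table of scores; the numbers are purely illustrative, not the study's fitted values.

```python
# Illustrative value-of-life table (invented numbers, not from the study).
VALUE_OF_LIFE = {
    "child": 1.0,
    "adult": 0.9,
    "dog": 0.4,
    "traffic_cone": 0.05,
}

def choose_party_to_spare(option_a, option_b):
    """Given two unavoidable collision options (lists of object categories),
    spare the option with the higher total value and steer into the other."""
    score_a = sum(VALUE_OF_LIFE.get(obj, 0.0) for obj in option_a)
    score_b = sum(VALUE_OF_LIFE.get(obj, 0.0) for obj in option_b)
    return ("A", option_a) if score_a >= score_b else ("B", option_b)

# Example dilemma: swerve toward a dog or toward an adult?
print(choose_party_to_spare(["adult"], ["dog"]))  # spares the adult
```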

As the study suggests that our moral decision-making can be codified, the authors advocate urgent discussions around just what those moral judgments should be and indeed whether we want machines to be making them.

“We need to ask whether autonomous systems should adopt moral judgments; if yes, should they imitate moral behavior by imitating human decisions? Should they behave along ethical theories? And if so, which ones? And critically, if things go wrong, who or what is at fault?” the authors say.

It creates a kind of double dilemma: first, we have to determine whether moral values are appropriate for inclusion in the guidelines we codify into machines, and then whether we want machines to behave like humans at all or instead aspire toward something better.

What is clear is that the debate around autonomous vehicles is just the start: as autonomous systems emerge in a growing number of fields, such ethical and moral dilemmas will become more commonplace. Now is the time to start ensuring that such systems have the right rules in place so that the decisions they make are the right ones.



Published at DZone with permission of Adi Gaskell, DZone MVB.

