
Must We Program Self-Driving AI to Kill?


Regardless of public sentiment, driverless cars are coming. And beyond our questions of whether we want to hand over the wheel to software, there are deeper, more troubling questions to ask.



Advancements in artificial intelligence have brought incredible changes. Increasingly, our lives are becoming automated, and many everyday tasks have already been delegated to algorithms and smart software.

A staggering 70% of stock trading is fully automated, for example. This automation has resulted in massive job losses as well as resentment toward nascent AI technologies. In some senses, the resentment may be deserved: Elon Musk predicts that driverless cars, a technology he himself is working on, will displace 15% of the world’s workforce.

To Mr. Musk, as well as other futurists and scientists, the manifold ethical issues surrounding artificial intelligence are of critical importance. Chief among these concerns are the jobs, careers, and livelihoods that AI stands to take over, a prospect that has fostered distrust of the technologies built on it. There are obvious social and economic consequences, quite possibly overwhelmingly negative ones, if the development of artificial intelligence goes unchecked.

Many fear general artificial intelligence a great deal more than the economic consequences of displacement. An AI that is smarter than the smartest human and capable of more than specialized labor could, quite feasibly, become an enemy of humanity and an immediate threat to its survival.

As technology improves, our opinion of AI remains much the same. We place our trust in human judgment and fear relinquishing control to machines, possibly in part out of fear of a hyper-intelligent machine, despite mounting evidence that machines often do a much better job than we do. AI-led hedge funds outperform human-run ones year after year. Roughly 1.3 million people die in traffic accidents every year, while driverless cars have caused only a handful of fatalities across billions of miles of testing. Intelligent machines also learn from their mistakes far more quickly than we do, rehearsing and rehashing mishaps thousands of times in mere moments, where humans need months or years to accomplish the same. Still, our opinion of AI remains abysmally low, whether from inordinate hubris or prudent caution; most likely a mixture of both.

Regardless of public sentiment, driverless cars are coming. Industry giants Tesla Motors and Google have already poured billions of dollars into their separate technologies with reasonable success, and Mr. Musk warns that we are much closer to a driverless future than most suspect. Robotics software engineers are making strides in self-driving AI at an awe-inspiring (and alarming) rate.

Beyond our questions of whether we want to hand over the wheel to software, there are deeper, more troubling questions that must be asked.

The real questions we should be asking lie in more ethically complex territory. Namely: must we program driverless AI to kill?

At first, an obvious answer seems to emerge: all AI should be incapable of killing. In other words, no AI should have the ability to choose to kill a human. On that view, any death caused by an AI should result from a malfunction of some kind: brakes that give out, a cliff that went unseen, a bug in the AI’s code.

The answer may not be so simple, however. Consider, for a moment, a self-driving car faced with a difficult moral choice, not unlike the classic philosophical thought experiment known as the trolley problem. The car must either save the lives of its passengers or save the lives of pedestrians. It can swerve into a nearby tree to avoid the pedestrians, but this will undoubtedly kill its passengers. Or it can continue down the road, hitting the pedestrians and sparing the people inside. Should the number of lives on each side matter? Should age factor into the AI’s decision? Would it be immoral not to endow the AI with the ability to make these important, if impossible, ethical decisions? Or should we allow the AI to run with no notion of saving or sacrificing lives at all?

“As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent,” asserts Jean-Francois Bonnefon, Professor of Economics, in a discussion with MIT Technology Review.

“If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”

When asked, most of us agree that saving more lives takes precedence over saving one, and that saving the young matters more than saving the old. What we agree on may very well end there, however. Even in these seemingly obvious moral decisions there is no unanimity, especially when the decision would result in our own injury or death. Not surprisingly, when told a driverless AI must sacrifice our own life for the lives of others, we find its moral compass in need of readjustment. Self-preservation, it seems, alters our perception of what is right and wrong.

Should driverless AI be programmed to preserve its passengers, or to prefer minimizing the total number of deaths? Should we prioritize self-preservation or the greater good? And are we, as the owners of driverless cars, at least partially responsible for the decisions they make?

At the moment, there are no definitive answers. Driverless cars will no doubt encounter lose-lose scenarios and be tasked with making unpopular decisions. Some may argue that driverless cars should be unburdened by moral dilemmas and denied the ability to kill, even in the name of the greater good. To choose the lesser of two evils, however, the evils must be weighed, and that means endowing autonomous vehicles with some form of moral programming.
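To make the idea of "moral programming" a little more concrete, here is a minimal, purely illustrative sketch in Python. The Outcome class, the two example policies, and the numbers are hypothetical assumptions invented for this article, not anything a real manufacturer ships; the point is only to show how different moral policies can rank the same outcomes differently.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A hypothetical crash outcome the planner could choose."""
    description: str
    passenger_deaths: int
    pedestrian_deaths: int

def minimize_total_deaths(o: Outcome) -> int:
    # "Greater good" policy: rank outcomes purely by total lives lost.
    return o.passenger_deaths + o.pedestrian_deaths

def protect_passengers_first(o: Outcome) -> tuple:
    # Self-preservation policy: passenger deaths dominate the ranking,
    # with pedestrian deaths only breaking ties.
    return (o.passenger_deaths, o.pedestrian_deaths)

def choose(outcomes, policy):
    # The "moral algorithm" is simply the ranking function handed to min().
    return min(outcomes, key=policy)

if __name__ == "__main__":
    scenario = [
        Outcome("swerve into the tree", passenger_deaths=1, pedestrian_deaths=0),
        Outcome("continue down the road", passenger_deaths=0, pedestrian_deaths=3),
    ]
    print(choose(scenario, minimize_total_deaths).description)    # swerve into the tree
    print(choose(scenario, protect_passengers_first).description) # continue down the road
```

The same two outcomes, ranked by different policies, produce opposite choices, which is precisely the disagreement Bonnefon describes: someone has to decide which ranking function ships in the car.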

How we will decide to program these moral algorithms is still largely unknown. In many ways, the fate of AI rests with us: how we proceed and what conclusions we reach about these ethical issues may shape the morality of our driverless future.



