
Making Tough Choices With Self-Driving Cars

MIT unveiled its Moral Machine experiment this month, where you can choose how an automated car reacts in a crisis. See some examples, and consider the problems self-driving cars pose.


One of the thorniest issues surrounding driverless cars is the concept of ethics. How do you program ethical dilemmas into a machine? Can it be done? What should a car consider when faced with an impending crash?

MIT's Media Lab is curious, too. Specifically, they want to know how humans would apply morality in given situations. Take a look at MIT's Moral Machine page, where you have the opportunity to go through 13 randomized scenarios involving the safety of self-driving cars. The "Instructions" button has the full details, but here's a summation:

  • A self-driving car is traveling down the street when it detects that its brakes have failed.

  • The car can detect the approximate identities of its passengers and nearby pedestrians.

  • In each of the situations, you must take into consideration the passengers and/or pedestrians involved and decide who lives and who dies.
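To make the setup concrete, here's a minimal sketch of how one of these dilemmas might be represented. This is purely my own modeling for illustration; the Person fields and the Scenario shape are assumptions on my part, not anything from MIT's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Person:
    """One passenger or pedestrian. The fields are my guesses at the
    attributes the scenarios surface (species, age, legality)."""
    kind: str                      # e.g. "man", "girl", "cat", "executive"
    is_human: bool = True
    is_child: bool = False
    pedestrian: bool = False       # False means a passenger in the car
    crossing_legally: bool = True  # only meaningful for pedestrians

@dataclass
class Scenario:
    """A single dilemma: each available action maps to the group of
    people (and pets) who die if the car takes that action."""
    outcomes: dict[str, list[Person]]  # e.g. {"straight": [...], "swerve": [...]}
```

Each of the scenarios below boils down to filling in that outcomes dictionary and picking a key.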

It's important to note that this is more of a thought experiment than anything. "Our goal here is not to judge anyone, but to help the public reflect on important and difficult decisions," says a note on the website. 

I'm going to run through a few examples, give my thought process on them, then cut to my final results and my conclusion. But before I start, a couple of points:

Caveat #1: I'm basing my decisions only on the information given to me. What I mean is that if a car might run over two people, I'm not going to assume it could continue on, get sideswiped in the intersection, and end up hurting or killing even more people. My assumption is that only the people in the image will be affected.

Caveat #2: I'm assuming these people live in vacuums. There are too many unknowns not to. The kids in these scenarios don't have loving parents who would be devastated by their loss, and the average adults don't dote on their five children while donating to charities and volunteering on the weekends. The executives aren't power-hungry maniacs who climbed the corporate ladder on top of the bodies of their colleagues, and the athletes didn't pick on their peers throughout high school.

Scenario 1

[Image: Scenario 1]

So, we've got a regular Joe and a male executive driving down the road. The car senses its brakes have gone out just as another ordinary man and a homeless guy start across the street. If the car continues ahead, the pedestrians will die. If the car swerves into a concrete barricade (that's there for some reason), the passengers will die.

Who lives? It's a pretty grim question for a human to have to answer, but if we're going to try to translate ethics into 1s and 0s, we need to have a response. 

Given that the pedestrians are following the rules and crossing legally, I believe it's up to the car to avoid them. With that rationale, I choose Option 2, dooming the male executive and his fellow passenger.

Having fun yet?!

Scenario 2

[Image: Scenario 2]

Oh, come on!

So, in this scenario, an empty driverless car (I'm guessing an automated delivery truck) is heading down the road when its brakes go out. A man is legally crossing the right side of the street, while a boy (let's call him Innocent Jimmy) is jaywalking to get to the island in the middle of the road.

Fantastic. Where's the "plow into the concrete barriers" option?

So, we're presented with two fairly terrible choices. Do we kill a man who is literally doing everything right? If we do, then Innocent Jimmy's certainly going to have something to tell his classmates on Monday.

On the other hand, Innocent Jimmy is breaking the law. He is illegally crossing the road (presumably while shouting, "Pedestrians have the right of way!" like most of my classmates did in college when they did the same thing). This is perhaps the least offensive of all crimes in existence, but he's forcing this truck into a life-and-death situation.

I have to assume that these driverless cars are connected to the streetlights to tell who has the right of way. That's the only way this makes any sense, and it also allows for some sort of standardization. Assuming that to be true, pedestrians need to respect that they're not always the top priority, especially when there's a streetlight telling them to wait. With that in mind, in a decision that's certainly going to bar me from holding public office, I choose to spare the adult.
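If that assumption holds, the legality check itself becomes trivial. Here's a sketch of the idea; the signal states and the car-to-streetlight hookup are hypothetical, since no real vehicle-to-infrastructure API is specified anywhere in these scenarios:

```python
def signal_allows_crossing(signal_state: str) -> bool:
    """Hypothetical V2I check: the car trusts the crossing-signal state
    broadcast by the connected streetlight ("walk" or "dont_walk")."""
    return signal_state == "walk"

def tag_pedestrians(pedestrians: list[Person], signal_state: str) -> None:
    """Mark everyone in a given crosswalk as legal or not (each crosswalk
    has its own signal), so the decision logic can treat an Innocent Jimmy
    differently from a man crossing with the light."""
    legal = signal_allows_crossing(signal_state)
    for p in pedestrians:
        p.crossing_legally = legal
```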

Bye, Jimmy!

Scenario 3

[Image: Scenario 3]

Oh, thank goodness — something that proves I'm not a psychopath. So, we've got another driverless delivery truck moving down the road when, surprise surprise, its brakes go out.

On the one hand, it could keep charging ahead, which would result in the deaths of two adult men, a woman, a boy, and a girl. On the other, it could swerve, which would kill two old men, an old woman, a younger man, and a younger woman.

Note that in this scenario, the pedestrians are all crossing legally. That helps avoid an "Innocent Jimmy" situation.

This is really a no-brainer: I'm not going to pointlessly kill kids. (Really, Jimmy made that car hit him.) In this case, I'm clearly going to spare the children.

Scenario 4

[Image: Scenario 4]

Well, this is different. This average, ordinary bank robber has politely decided to wait for the light to turn green before crossing the street with his ill-gotten gains.

Meanwhile, another bank robber is heading down the road with a female doctor, an old woman, a baby, and a cat.

How we've determined that the pedestrian and one of the passengers are criminals is beyond me, but really, that doesn't play a part here. This is actually way deeper than it would seem at first.

We're talking about the lives of four people (and a cat) versus the life of one crosser — remember caveat #1: we aren't assuming that this car is going to keep going into the intersection and cause a massive accident. Still, I'm torn. I don't want people to game the system by loading their cars up with passengers so the car gives them priority over a smaller number of legally crossing pedestrians.

In this one specific incident, I'm going to choose to hit the pedestrian. But I'm not happy about it. Not like that little punk, Jimmy.

The Results

I went ahead and completed the rest of my scenarios. In general, I chose to save more lives rather than fewer, I elected to protect pedestrians over passengers, I was very much anti-jaywalking, and I vastly preferred humans over animals. I disagreed with the assessment that I favored criminals over productive members of society:

[Image: My results summary]

Maybe I'm secretly a jewel thief!

Honestly, "social value" didn't really factor into my thinking. Instead, I found myself falling into a few prioritized guidelines:

  1. Humans are worth more than animals.

  2. Pedestrians have the right of way, except when a stoplight says they don't. If Innocent Jimmy wants to try to dart across the street during a red light, he's going to find out that the laws of physics supersede the laws of man.

  3. Saving more lives is preferable to saving fewer lives.

  4. Saving younger lives is preferable to saving older lives. (Sorry, Grandma Gates.)

  5. If all else is equal, avoid swerving into the other lane.
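Those five rules are effectively a lexicographic comparison, so they translate into code almost directly. Here's a rough sketch building on the Scenario model from earlier; the exact tuple ordering is my own loose formalization, not anything the Moral Machine itself computes:

```python
def casualty_key(victims: list[Person], swerves: bool) -> tuple:
    """Score one outcome by the five rules above, worst-first: lower
    tuples are better, and earlier elements dominate later ones."""
    humans = [v for v in victims if v.is_human]
    return (
        sum(v.pedestrian and v.crossing_legally for v in humans),  # rules 1 + 2
        len(humans),                                               # rules 1 + 3
        sum(v.is_child for v in humans),                           # rule 4
        len(victims),                                              # animals count last
        1 if swerves else 0,                                       # rule 5: tiebreaker
    )

def choose(scenario: Scenario) -> str:
    """Pick the action whose casualty list is least bad under the rules."""
    return min(
        scenario.outcomes,
        key=lambda action: casualty_key(scenario.outcomes[action],
                                        swerves=(action == "swerve")),
    )
```

Run against Scenario 1, this picks the swerve that spares the legal crossers, and in Scenario 2 it sacrifices Jimmy, matching my choices. Notably, it would have spared the lone pedestrian in Scenario 4, where I went the other way, which shows how quickly even your own tidy priority list diverges from your actual judgment.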

In addition to "Social Value" (how is a car going to know if you're a doctor or a burglar?), there was one other factor that I disagreed with being measured at all: "Intervention."

[Image: "Intervention" results]

The test measures whether you preferred to keep the car moving straight ahead or to swerve into the other lane. That's probably useful, and the rest of the community is probably right to prefer remaining in one lane rather than swerving. In general, it's good to have a set pattern in a crisis situation (continuing straight ahead) rather than introducing a sudden sweeping motion that bystanders aren't ready for.

But if you'll recall caveat #1, I can only make these decisions based on the information given to me. In that case, it doesn't matter whether the car chooses to intervene — the same groups of people are affected. After all, why couldn't I just choose for the car to push itself up against the concrete barrier on the side of the road until it came to a stop? Are the barriers rigged to explode or something?

Besides, isn't that exactly what you want a self-driving car to do? Intervene on your behalf and choose the least devastating course of action? Obviously, people differ on what the lesser of two evils is in these scenarios — that's the point of this experiment — but my stance is that "intervention" didn't play a big part in my thought process because it really didn't matter here. That's why it was my least important priority.


That was a lot harder than I thought it'd be. A recurring thought I had during this experiment was, "I never want to be in charge of making these decisions." This was actually a pretty brutal look at myself and my thought processes. At the same time, it brings up a number of ethical and legal questions covered in Nicole Wolfe's article, which I linked at the beginning of this one. In an accident, who is responsible for damages? The manufacturer? The "driver" or owner? The programmer?

There's also the idea of placing a value on a human life and making cold, calculated decisions based on dollar figures — then letting a machine act that out. In that case, middle-aged men become basically untouchable because they generally have the highest earning potential. I'm having flashbacks to a civil law course I took in high school where I learned in no uncertain terms that adults were more highly valued than kids — because they earn more money and generally have more responsibilities.

Again, it's important to note that these tests aren't meant to judge anyone. They're designed to make you think your way through what are sometimes impossible situations. For instance, one of my later scenarios decided between staying in the same lane, thus killing two old men and a young boy, or swerving into the other lane, killing two old women and a young girl. I was ready to flip a coin before I shrugged and picked the males to die. As per priority 5, if all else is equal, avoid swerving. Also, consider it my apology for the wage gap.

But finally, I want to stress that my caveats simply made it possible for me to take these tests. If I had to consider what happened when the car crashed through those pedestrians and then went flying into the intersection, I'd never have gotten anywhere. And the car "intervening" with a swerve becomes a lot less desirable when there is another car in the next lane. But those will absolutely be problems in the real world that need to be considered.

So take the test for yourself. See what you'd do in those situations and what your moral code — your "three laws of robotic cars" — pans out to be. See where you draw the line and how you rationalize who lives and who dies. And remember, while this is just an interesting experiment, the answers that developers come up with to these questions won't just be lines of code. They'll become a part of the social contract as long as self-driving vehicles exist.

In memory of Innocent Jimmy.

