Thought Experiment: The Ethics of Self-Driving Cars

Patrick Lin's TedEd video about the ethical dilemma of self-driving cars raises a number of ethical concerns worth considering. These dilemmas could be major roadblocks that prevent the cars from ever going mainstream.

By Nicole Wolfe · Jul. 04, 16 · Opinion

Several months ago I went on a TedEd binge on YouTube, watching as many videos as I could, one sitting after another. I was sucked in because each video was short — four or five minutes long — and encouraged me to think outside the box about a lot of different topics.

One that really caught my attention was "The Ethical Dilemma of Self-Driving Cars" by Patrick Lin, narrated by Addison Anderson. It is a thought experiment about scenarios in which a self-driving car must decide whose life to spare: the driver's or an innocent bystander's. Before reading on, I encourage you to watch the video.


This particular video really stuck with me long after watching it. I bring it up in conversations here and there, but I am typically met with the same noncommittal reaction: "Wow, I hadn't really thought about that before." With our world becoming more and more automated as the years pass, it is important that we take ethics into account. Some of you reading this may have to take scenarios like this into account when you are working on projects in the (near) future, or perhaps even currently. So how can we approach ethics and autonomous machines?

Where to Start

First things first: since artificial intelligence spans so many different technologies these days, let's narrow it down to Patrick Lin's subject, the self-driving car. As many of us already know, Google has been experimenting with self-driving cars since 2009, using cars made by Toyota and Lexus. The cars have been making their way across California highways, racking up the miles to help the project go mainstream one day. Seven years seems like a long time for testing, but when you take safety and human lives into account, the more data collected, the better.

The Ethical Dilemmas

There are several different ethical dilemmas that can be taken into account for this discussion. I'd like to focus on these in particular:

  • What should the A.I. decide when an accident is going to occur?
  • Who should get to pre-determine what decision the A.I. will make?
  • Who should be accountable for accidents?
  • Are there other issues at play?

What Should the A.I. Decide?

The self-driving car takes control out of the driver's hands, so there are fewer chances for mistakes. However, there are times when the car could be put in a position to make a crucial decision that costs someone their life. A self-driving car is programmed with algorithms to make decisions based on certain situations. For example, the car may calculate that it is safer to move into the empty lane to your right than to run into the car slamming on its brakes in front of you.
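
To make that idea concrete, here is a minimal, purely hypothetical sketch of such a decision rule in Python. The maneuvers, risk scores, and function names are all invented for illustration; they do not describe how any real vehicle's software actually works.

```python
# Hypothetical sketch: choose the maneuver with the lowest estimated collision risk.
# All maneuvers and risk values are invented for illustration only.

def estimate_collision_risk(maneuver, surroundings):
    """Return a toy risk score in [0, 1] for a candidate maneuver."""
    if maneuver == "brake" and surroundings["gap_ahead_m"] < 5:
        return 0.9  # too close to the braking car ahead
    if maneuver == "change_lane_right" and surroundings["right_lane_clear"]:
        return 0.1  # the lane to the right is empty
    return 0.5      # unknown or partially blocked option

def choose_maneuver(surroundings):
    candidates = ["brake", "change_lane_right", "change_lane_left"]
    return min(candidates, key=lambda m: estimate_collision_risk(m, surroundings))

print(choose_maneuver({"gap_ahead_m": 3, "right_lane_clear": True}))
# -> change_lane_right
```

Real systems weigh far more variables than this toy version, but even here someone has to decide what counts as "safer" — which is exactly where the ethics come in.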

What if the car makes no decision at all? Let's consider the recent fatality that occurred in Florida. A tractor trailer crossed the car's path, and the car, in Autopilot mode, failed to apply the brakes; the driver was killed. Tesla reported that neither the car nor the driver noticed the tractor trailer. This is very disconcerting. It wasn't that the car made the wrong choice, but rather that it made no choice at all. Much like a human driver, it did not see that an accident was about to occur, so no action was taken.

But there are decisions that the A.I. is going to have to be able to make. Consider the video's scenario: the self-driving car has to calculate which people are going to die. What should the A.I. be designed to do? Take the action that results in the fewest people dead? Or, perhaps, the action that risks more deaths if the calculation goes wrong, but results in only injuries if it goes right?
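
That trade-off is easier to see with numbers. Here is a toy comparison, with probabilities and casualty counts I made up purely for illustration, of two candidate actions:

```python
# Hypothetical comparison of two candidate actions; every number is invented for illustration.
actions = {
    "swerve":     {"p_success": 0.7, "deaths_if_success": 0, "deaths_if_failure": 3},
    "brake_only": {"p_success": 1.0, "deaths_if_success": 1, "deaths_if_failure": 1},
}

def expected_deaths(name):
    a = actions[name]
    return a["p_success"] * a["deaths_if_success"] + (1 - a["p_success"]) * a["deaths_if_failure"]

for name in actions:
    print(f"{name}: {expected_deaths(name):.1f} expected deaths")
# swerve:     0.7 * 0 + 0.3 * 3 = 0.9 expected deaths, but three people die if it fails
# brake_only: one death, guaranteed
```

A rule that minimizes expected deaths would pick the swerve; a rule that minimizes the worst case would pick the guaranteed single fatality. Which rule the A.I. should follow is exactly the ethical question the video raises.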

Would you feel safer as the driver of a non-self-driving car if you knew that self-driving cars were designed to hurt or kill only their own drivers and not other drivers? Or perhaps you'd want a self-driving car if you knew it was going to protect you at all costs.

Regardless of which decision the self-driving car has to make, a decision will have to be made. One solution could be to let the A.I. learn and use algorithms to determine the outcome. Another could be to let the programmers pre-determine what will happen in specific scenarios. But then again, there are so many variables to think about on the road; can we really prepare for them all?

Who Should Pre-Determine the Outcome?

There are so many different options here: Should the programmers decide? The government? The people? Let's take a moment to think about all of them.

If the programmers are programming a car to make certain decisions in certain scenarios, several problems arise; one of them is responsibility. By pre-determining the outcomes of an accident, the programmers could potentially be choosing who is going to live or die in some situations. This could even be seen as premeditated murder. If the car is programmed specifically to steer in a certain direction, injuring or killing another driver, then the programmer is now at fault for the casualties.

Let's say instead that the government gets to decide who will live or die in these types of scenarios. This would likely cause an outcry of protest from its citizens, as this would be an unprecedented use of government power.

For example, consider a scenario similar to the first one in the video: the car is put in a position to kill the driver and his passengers, kill the motorcyclist beside him, or hit the SUV on the other side of him, sparing the occupants of two of the three vehicles but harming or killing the rest. If drivers knew that the SUV would always be hit when an accident threatens because it's the safer target, would people continue to buy or drive SUVs? Or maybe motorcyclists would become a target for accidents, because a self-driving car with four people should be saved over a motorcycle with just one.

Would the government begin to take other factors into account as well, such as terrorism or class? Maybe one government decides that only rich people's lives should be spared, so self-driving cars begin to take out the poor. Or the cars could be designed to spare only the lives of their own citizens over foreigners, or to weigh any number of other factors. Where would it end, and how would citizens react to these decisions?

Alright then, so what if we let citizens decide? Which citizens would get to decide? Registered voters? Would one country's rules govern self-driving cars' actions in all countries? What if only citizens of the USA got a say in what would happen; would China or Russia agree? Would they want to drive cars whose outcomes American citizens had all the say in pre-determining?

What if there is another option? What if we let the A.I. decide? A.I. has the ability to learn, so maybe, just maybe, there's a chance that we could program the self-driving car to learn and decide on its own. Then responsibility could fall onto the machine rather than a human being. But aren't there tons of ethical dilemmas within these scenarios? For starters, we can't exactly prosecute a car in a court of law if it makes a choice and kills another person, can we?

Alright. What can we do? Who should be determining who lives or dies? Maybe none of us can, and self-driving cars won't be able to exist in the mainstream, but then again, who knows...

Who Should Be Accountable for Accidents?

Since accidents are bound to occur for a multitude of reasons, rather than thinking about the choices the A.I. should make, perhaps we should think about who should be responsible for those choices.

The first thought is to hold the programmers accountable (since we can't prosecute an inanimate object), but shouldn't the drivers be responsible too, especially once you take into account that the driver can take control of the car and could have intervened at some point? There are a lot of different people who could be at fault.

First, we should consider the programmers — they are the ones who are, in some way, pre-determining the outcome. They are designing the car to make choices in certain situations, ergo, they are the first responsible party.

But what about the car manufacturers themselves? Since they put the cars into production and hired the programmers, wouldn't they need to take on the consequences of the self-driving cars' actions?

But then again, what if the programmers are only creating algorithms and outcomes for the self-driving car based on what the government has said the outcome should be (if we consider my earlier scenario) — will the programmers still be held responsible if they are just doing their job?

Then again, maybe we've let the citizens decide the outcome, making everyone responsible. This would probably be total chaos.

Are There Other Issues at Play?

Aside from the countless accident scenarios, we can step back for a moment and look at other ethical dilemmas that self-driving cars pose.

Drunk drivers: One of the advantages of a self-driving car might be your car's ability to drive you home safely when you are intoxicated. However, we could never really let the car operate fully on its own, could we? Patrick Lin said that during his test drive the self-driving car demanded he take control of the car back. If the car may need human intervention, a drunk driver is the last person we should be handing the wheel back to.

Adolescent Joyrides: If you were 14 and your mom and dad had a self-driving car, wouldn't you want to take it for a spin to the mall with your friends? If the car is self-driving, wouldn't that mean that a kid could let the car do its thing and get an illegal ride out of it? But then again, would it be illegal, or would we let them take the car out much like a drunk driver (if the cars needed no human intervention)?

Distractions: When I bring up distraction, I don't mean distraction for the drivers of the self-driving cars, but rather for the other drivers.

I recall a friend of mine whose brother drove a wicked-looking purple Plymouth Prowler with flames down the sides and gorgeous rims. When I told my friend how awesome the car was, he said that his brother wanted to get rid of it. Why? Because his brother noticed that other drivers would almost wreck their own vehicles trying to look at his. They were so taken aback by how unique it was that they would stop concentrating on the road. The brother did not like feeling that his car's aesthetic could be responsible for an accident.

Google's self-driving cars have logos plastered all over them for now, which is definitely going to grab drivers' attention. How many times have you seen a car that drives itself, after all? I know I find myself wanting to get a good look at the Google Maps car when I see one drive by, and I've seen that one several times. If it were a self-driving car, I'm certain it would grab my attention, and the attention of other drivers, as it drove by.

Miscalculations: As developers, you all know that no program can be perfect or free of bugs, including a self-driving car's A.I. Maybe the car mistakes a large rubber ball bouncing across the street for a small child and veers into an actual child — then what? Or maybe the car has a bug that causes it to shut down while you are driving down a busy highway during rush hour? (I would hope that bugs like this would be caught, but you never know what could happen.) All in all, one simple mistake could bring someone to harm or even cost someone their life.

Marketing: Patrick Lin points out that the future of self-driving cars could be connected to your lifestyle. Perhaps you have liked Starbucks on your Facebook page, which is now connected to your vehicle (SSO cars?). What if your car prompted you to stop for coffee in the morning every time you passed the Starbucks around the corner from your house? Would that be unethical in some way?

For a second example, imagine Lexus teamed up with Krispy Kreme and Cadillac teamed up with Dunkin Donuts. Then, in the morning on your way to work, your Lexus lets you drive past the Dunkin Donuts near your house but suggests the Krispy Kreme that's a little out of the way up the road. Would we be letting our cars influence our morning breakfast choices?

Maybe it's just annoying, but regardless, there's some murky water when marketing gets involved.

Final Thoughts

It's clear to me that the ethical dilemmas of self-driving cars are deep and complex across numerous scenarios. There may come a day when the decisions these machines make for us become an integral part of our lives. For now, self-driving cars are just a blip on the map, but one day they may become the standard.

As developers, what side do you want to be on, and who do you think should get to pre-determine the outcomes of the dangerous situations that self-driving cars inevitably face? DZone wants to know what you think; let us know in the comments section! 

Opinions expressed by DZone contributors are their own.
