Machine Learning – A Lesson Learned
Even the artificial intelligence in Google's self-driving cars isn't infallible.
A Google self-driving car has finally caused an accident, and there is a lesson in AI here.
According to The Verge, Google had recently performed a software update that changed the car's behavior to be more human-like:
“So several weeks ago we began giving the self-driving car the capabilities it needs to do what human drivers do: hug the rightmost side of the lane.”
The truth is that one of the common complaints about self-driving cars is that they are too cautious. So Google adapted the software so the car would move to the far right of the lane, allowing two cars to fit in a single wide lane. This is what a regular, old-fashioned human being does so that traffic can move more fluidly through the heavily congested streets of California.
All in all, a great idea. In this incident, the car detected some road construction and moved back to the "legal" center of the lane to avoid the obstructions. Even though the car was barely moving as it edged back toward the center, a bus hit the back side of the car. I am not sure why the self-driving car is at fault — maybe a lawyer can chime in — but the last time I checked, anytime somebody hits you from behind it is their fault. For our lesson today, though, it doesn't matter who is at fault.
Later in the article, Google says:
“We’ve now reviewed this incident (and thousands of variations on it) in our simulator in detail and made refinements to our software. From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future.”
This made me laugh! In essence, what Google is saying is that it doesn't matter who has the legal right of way: when something really big is coming your way, you should move.
The lesson is that humans are peculiar: there are rules, and then there are "rules." Programming practical but illegal or illogical rules is going to be part of what we expect of future AI systems — unless, that is, we want to become more like machines. Between us becoming more like machines and machines becoming more like humans, I vote for the machines becoming more like us.
Featured image is property of Mark Doliner, unmodified, used with permission.
Published at DZone with permission of John Basso, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.