The Problem Is, This Jeeves Can’t Think
Learn about the risks of driverless cars and how to future-proof the driverless experience.
Circa 2025. An autonomous BMW sedan with a passenger slows down near a crossing in LA. It has sensed an elderly couple on the pavement waiting to cross the road. A couple of minutes pass, and both parties remain static. The couple, who are actually waiting for their son to pick them up, have no clue why the driverless car has come to a halt in front of them. They wave the car on even as the passenger fumes in the backseat. But the vehicle has "machine learned" to be polite and careful.
It does not have an alternative course of behavior, unlike the resourceful Jeeves in a P. G. Wodehouse novel.
Machine learning, the branch of AI that drives autonomous cars, can't be explicitly programmed the way conventional software can.
“You don’t know how to write the requirements,” says Philip Koopman, a computer scientist who specializes in automotive safety. That’s why ensuring safety in autonomous vehicles is so tough: one simply has no control over what the model learns. Consider AI a black box. It performs miraculously, but nobody knows what’s running under the hood. As a result, understanding what a machine-learning model has learned, and thus programming a driverless car in the traditional sense, is seemingly impossible.
Driverless Is Not Danger-Free
Training a driverless car well is not the whole solution. You can train it on thousands of images and scenarios, but no one knows exactly what the model learns in the process. If a training set contains many images of people, for example, the model might ignore limbs and faces and key on the dominant color of their shirts instead.
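This kind of shortcut learning can be shown with a toy sketch. The numbers and feature names below are invented purely for illustration: a naive learner that keeps only the feature most correlated with the label will latch onto shirt color rather than limb visibility, simply because the shirts in this made-up training set happen to correlate perfectly with the "person" label.

```python
# Toy illustration of "shortcut learning": when a spurious feature
# (shirt color) correlates with the label better than the feature we
# actually care about (limb visibility), a naive learner picks the shortcut.
# All data here is fabricated for demonstration.

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each sample: (limb_visibility, shirt_redness, label); label 1 = person present.
# Limb visibility is noisy; shirt redness correlates perfectly by accident.
samples = [
    (0.9, 1.0, 1), (0.2, 1.0, 1), (0.8, 1.0, 1),
    (0.7, 0.0, 0), (0.1, 0.0, 0), (0.3, 0.0, 0),
]
limbs = [s[0] for s in samples]
shirts = [s[1] for s in samples]
labels = [s[2] for s in samples]

# A naive learner that keeps only the single most label-correlated
# feature will choose shirt color, not limb visibility.
scores = {
    "limb_visibility": abs(correlation(limbs, labels)),
    "shirt_redness": abs(correlation(shirts, labels)),
}
best = max(scores, key=scores.get)
print(best)  # shirt_redness
```

Real vision models are far more complex, but the failure mode is the same: whatever feature best predicts the labels in the training set wins, whether or not it is the feature we care about.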
Google once tried training a model to recognize dumbbells. The result: the model could detect a dumbbell only when an arm was holding it. Similarly, Microsoft’s Tay chatbot turned racist within a day of going live.
Also, training environments differ from real situations. DeepMind’s AlphaGo may have displayed superhuman intelligence in beating a human Go champion, thanks to the extensive training it had received. But one small change, like increasing the size of the board during the actual game, would likely have stumped the machine.
Hacking into an AI model, like that of an autonomous car, is a real threat. In 2016, Chinese researchers hacked into the supposedly unhackable Tesla Model S, remotely taking control of the car from 12 miles away, a feat that could cause a lot of trouble in the near future.
Future-Proofing the Driverless Experience
Driverless is risky. But that’s not stopping car marques and tech disruptors like Uber, Google, Nissan, Starsky Robotics, and Tesla from testing the waters and declaring reasonable success. The question is: should we take their word for it? Ensuring transparency is key. One possible solution is to rope in independent agencies to validate the claimed success rates. They might not be able to understand what the car learns, but they could audit the testing process, the training data, the simulation environments, and so on, and then pass a verdict.
Full autonomy without any human backup is still futuristic, if not impossible. In the years to come, the world will need human experts to remotely operate autonomous cars or remotely intervene when a situation arises. Or, at the push of a button, the passenger could contact a remote supervisor. After all, the human mind is far more flexible, and thus more decisive, than AI. It can easily handle a situation like the one this article opened with.
Blockchain could help protect self-driving cars from hacking. Driverless technology rests on collecting, analyzing, and interpreting big data and making split-second decisions based on it. A transparent, decentralized, and immutable ledger could help autonomous cars verify the accuracy of the data they collect from the environment and, at the same time, make it far harder for anyone to tamper with the system.
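As a rough illustration of the tamper-evidence half of that idea, here is a minimal hash-chain sketch in Python. It is not a blockchain (there is no distribution or consensus layer), and the record format is invented for this example, but it shows how chaining hashes makes any later edit to a recorded sensor reading detectable.

```python
import hashlib
import json

def record_hash(reading, prev_hash):
    """Hash a sensor reading together with the previous record's hash."""
    payload = json.dumps(reading, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(readings):
    """Link each reading to its predecessor via its hash."""
    chain, prev = [], "0" * 64  # all-zero genesis hash
    for r in readings:
        h = record_hash(r, prev)
        chain.append({"reading": r, "hash": h, "prev": prev})
        prev = h
    return chain

def verify_chain(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev or record_hash(rec["reading"], prev) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

# Hypothetical telemetry: speed readings from a car's sensors.
readings = [{"speed": 42.0}, {"speed": 41.5}, {"speed": 40.9}]
chain = build_chain(readings)
print(verify_chain(chain))  # True

chain[1]["reading"]["speed"] = 90.0  # tamper with one record
print(verify_chain(chain))  # False
```

A production system would also need to distribute the chain across independent nodes so that an attacker can't simply rewrite every hash, which is where the decentralized part of blockchain comes in.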
It’s Still Early for AI
The push for AI-powered autonomous vehicles is driven by the ever-increasing number of car accidents today. Roughly 90 percent of all accidents are due to human error: road rage, drunk driving, distraction, flouting traffic rules, and so on. The promise of self-driving cars is too tempting to ignore. By some estimates, the driverless car market will be worth $87 billion by 2030.
But the world is still debating the good, the bad, and the ugly of AI. Elon Musk calls AI an “existential threat to humanity.” Others, like Stephen Hawking and Nick Bostrom, agree, while Mark Zuckerberg waves off the threat, considering AI more of a boon than a bane.
But here’s the footnote: AI is unpredictable.
It’s as dangerous to overestimate AI as it is foolish to underestimate it. The tech is evolving and the world is gaining new insights by the minute.
So is AI the perfect Jeeves for our driverless cars? For the moment, it’s as whimsical as a Heisenberg, a Sherlock, or even a Phoebe Buffay. It’s interesting, but you can’t trust it with your life yet.
Published at DZone with permission of Yashodeep Sengupta. See the original article here.
Opinions expressed by DZone contributors are their own.