
How the Real and Virtual Worlds Can Help Train Driverless Cars


Augmented reality can change the way we train driverless vehicles.


To date, the training of autonomous vehicles has required vehicles to drive millions of miles in a variety of terrains to physically map and digest the information required to navigate safely. Whilst this approach has been pretty successful, the last leap to get vehicles fit for widespread adoption is likely to be the hardest of all to make.

Research from the University of Michigan suggests a more efficient method might be to use virtual worlds alongside real-world testing. The team believes that augmented reality technology can accelerate the testing of autonomous vehicles by up to 100,000 times, all whilst reducing the cost of testing.

“In order for the public to accept and widely adopt driverless vehicles, we must be able to prove they are safe and trustworthy,” the authors say. “This requires rigorous and extensive testing that would otherwise take more than a decade to accomplish. Augmented reality testing is not only more efficient; it is also safer and will allow us to ensure driverless vehicles operate dependably with the ability to prevent and avoid crashes.”

A Virtual World

The team developed a virtual environment for testing using techniques borrowed from the gaming world. The environment allows computer-generated vehicles to interact with their surroundings just as they would in real life.

The project is based at the university’s Mcity Test Facility, which features 16 acres of roads and traffic infrastructure to support the testing of autonomous vehicles. The facility is designed to allow researchers to construct a number of testing scenarios to put both real-life and computer-generated vehicles through their paces.

The fascinating project will see test vehicles behaving as though the road is populated with a large number of vehicles that exist only in the virtual domain. For instance, the paper reveals that an observer might see a vehicle approach a traffic light and stop seemingly far too soon, when in fact it is avoiding a collision with a computer-generated vehicle already stopped at the light.

All of the data regarding the virtual vehicles is communicated to the test vehicles using wireless technology, allowing real and virtual vehicles to communicate with one another and, indeed, with various parts of the test-course infrastructure.
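To make the mechanism concrete, the sketch below shows one way a test vehicle's software might fold the broadcast virtual vehicles into the same object list its planner already uses for real obstacles. It is an illustration only: the article does not describe Mcity's actual message format or software stack, so every class, field, and function name here is a hypothetical stand-in.

```python
from dataclasses import dataclass

# Illustrative sketch only: the article does not describe Mcity's actual message
# format or software stack; every name and field below is a hypothetical stand-in.

@dataclass
class TrackedObject:
    object_id: str
    x: float          # position in the test-track frame, meters
    y: float
    speed: float      # m/s
    is_virtual: bool  # True if the vehicle exists only in the simulation

def merge_virtual_objects(real_detections, virtual_messages):
    """Combine objects detected by the test vehicle's own sensors with
    computer-generated vehicles received over the wireless link, so the
    planner reacts to both in exactly the same way."""
    merged = list(real_detections)
    for msg in virtual_messages:  # decoded broadcasts from the simulation
        merged.append(TrackedObject(
            object_id=msg["id"],
            x=msg["x"],
            y=msg["y"],
            speed=msg["speed"],
            is_virtual=True,
        ))
    return merged
```

Because the planner never inspects `is_virtual`, a computer-generated car already stopped at a traffic light produces the same braking decision as a real one, which is exactly the behavior an observer at the test facility would see.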

“Our new procedure shows great potential to speed up and reduce the cost of testing,” the researchers explain. “It also has the added benefit of allowing us to build a virtual library of computer-generated traffic scenarios that can be practiced without risk of damage or human injuries.”

Rigorous Testing

The researchers test the autonomous vehicles using three distinct methods:

  • Closed-course testing
  • Computer-generated simulations
  • Operating vehicles on public roads

Suffice it to say, testing vehicles on public roads brings with it a number of legal risks, as well as safety challenges for the public. The potential for crashes means the testing of autonomous vehicles must be not only different from that of conventional vehicles, but often more rigorous, as it involves the kinds of scenarios that rarely occur in everyday driving.

“Most strategies for testing automated vehicles today fall short of what is needed to ensure the safety necessary to make driverless technology viable,” the authors conclude. “The augmented reality environment at the Mcity Test Facility brings us a step closer by offering comprehensive, limitless testing scenarios that can be accomplished in a shorter period of time. That means testing is faster, cheaper, and safer.”

The project builds on earlier work by the same team that also aimed to make testing more efficient. That work used a modular approach to driving, breaking situations down into components that can easily be tested repeatedly. This exposes autonomous vehicles to a condensed set of the most challenging driving conditions, an approach under which 1,000 test miles yield the same returns as driving up to 100 million miles in real-world conditions.
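The article does not spell out how the team weights or selects those components, so the following Python sketch is only a schematic of the general idea: keep a library of scenario modules and deliberately over-sample the rare, difficult ones during testing, rather than replaying ordinary miles. The scenario names, rates, and weights are all hypothetical.

```python
import random

# Illustrative sketch only: the article describes breaking driving into components
# and concentrating testing on the most challenging ones, but does not give the
# team's actual method. Scenario names, rates, and weights are hypothetical.
scenario_library = {
    "unprotected_left_turn":    {"real_world_rate": 0.0020, "test_weight": 0.30},
    "pedestrian_darting_out":   {"real_world_rate": 0.0001, "test_weight": 0.25},
    "cut_in_with_hard_braking": {"real_world_rate": 0.0010, "test_weight": 0.25},
    "uneventful_highway_mile":  {"real_world_rate": 0.9969, "test_weight": 0.20},
}

def sample_test_scenario():
    """Draw the next test scenario, deliberately over-weighting rare, difficult
    situations instead of sampling them in proportion to how often they occur
    on real roads."""
    names = list(scenario_library)
    weights = [scenario_library[n]["test_weight"] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

# Results from the over-sampled runs can later be re-weighted by real_world_rate,
# which is how a condensed test programme can stand in for a far larger number of
# ordinary miles.
```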

This kind of volume is required due to the huge variety of scenarios a car may face. Indeed, fatal crashes on the road typically occur only once every 100 million miles or so of driving.

The team argues that for the public to be confident in autonomous vehicles, the cars will need to be 90 percent safer than human drivers. Reaching that level, they estimate, would require something like 11 billion miles of real-world driving, which would take well over a decade to achieve. The technology will therefore require a fundamentally different form of testing from that used today, especially when it comes to crash scenarios. Thankfully, it seems that innovations in how we test are as forthcoming as innovations in the autonomous technology itself.
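The article does not give the statistical assumptions behind the 11-billion-mile estimate, but a back-of-the-envelope calculation shows how rare failure events drive mileage requirements. The sketch below uses a standard rule-of-three style bound (drive m miles with zero fatal events to claim the failure rate is below a target at a given confidence), with the rates taken from the figures quoted above.

```python
import math

# Back-of-the-envelope illustration only: the article does not state the
# statistical assumptions behind the 11-billion-mile figure. This uses the
# standard "rule of three" style bound for rare events.

human_fatal_rate = 1 / 100_000_000      # roughly one fatal crash per 100 million miles
target_rate = 0.1 * human_fatal_rate    # "90 percent safer" than human drivers
confidence = 0.95

# With zero fatal events observed over m miles, the true rate can be claimed to be
# below target_rate at the chosen confidence once m >= -ln(1 - confidence) / target_rate.
required_miles = -math.log(1 - confidence) / target_rate
print(f"{required_miles:,.0f} miles needed just for a one-sided bound")
# -> roughly 3,000,000,000 miles
```

Even this simplified bound lands around three billion miles, and actually demonstrating the comparison against human drivers with statistical power pushes the requirement far higher, which is why the team looks to augmented reality rather than raw odometer miles to close the gap.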

