
Autonomy Beyond the Car


Autonomy’s Promise

During the time it takes you to read this article, 20 people around the world will die in car crashes. Autonomous Vehicles (AVs) can eliminate the carnage. But safety is just the start. Robocars can drive in efficient formations, saving energy and making better use of today’s carpool lanes.

Because we don’t need to store vehicles near drivers, we can turn urban parking lots into parks and garages into living space. Autonomy will make extensive travel practical for millions, and robocars will bring mobility and freedom to those who can’t drive. Autonomy can make life better and more efficient in so many ways.

I can’t wait for a robocar to pick me up at home and whisk me to work while cooperating with all the other vehicles to eliminate traffic. It’s a compelling and seemingly close vision. There’s a good reason for all the hype. We need autonomy, and we need it soon.

But then, there’s reality. A robocar that can do everything a driver can do is still years or even decades away. We’ve made immense progress in many environments, but the real world has so many “corner cases” that truly autonomous operation remains well out of reach.

But that doesn’t mean we have to wait for years. The key is to drop the goal of perfect human-substitute drivers. There are workarounds for the hardest problems if we add off-vehicle assistance. It’s time to take the system approach proven in many other industries and get moving.

The New Automotive Competition

Autonomy is hardly the only disruption impacting the transportation industry. Electric vehicles (EVs) outperform gas engines, pollute less, and lower the barriers to entry for new competitors. Flying commuter transports may be available within the next decade.

5G connectivity and fast embedded processors provide access to vast computing power from both the cloud above and the embedded edge below. Ridesharing apps and rented scooters are already replacing public transportation. Smart city transports, traffic control, and hyperloops are just around the corner. There has never been anything approaching this level of disruption in the industry. The pace and breadth of change make last century’s horse-to-car transition seem, well, quaint.

All this change will profoundly reshape the automotive competitive landscape. For decades, traditional OEMs competed on driver experience, engine performance, and styling. In a world with no drivers, no engines, and no car ownership, none of those factors matter.

Even worse, the traditional OEM advantages no longer matter either. Rather than lowering costs, formal “tier 1-2-3” supply chains struggle to coordinate software interfaces, slowing development. Reliance on engine expertise holds back innovation in EVs. Capabilities, processes, and even brands honed over decades no longer help.

The new competitive basis is increasingly clear: software and connectivity. Advanced autonomous software will soon be a required feature. It will need to be online. Connectivity will speed new features via “over the air” (OTA) updates, provide ongoing product data, and enable new shared-vehicle business models. From now on, the best-connected software wins.

No change will be more profound than autonomy. And no factor in developing autonomy is more important than learning from other industries. While it’s new to our roads, the technology of autonomy has been developing for many years in flying drones, robotics, underwater vehicles, and military systems. 

These systems bring sensors like LIDAR, fast distributed software designs, and vast experience with remote operation. For the first time since the advent of the microprocessor, externally-developed technologies will be the primary automotive competitive drivers.

Practical Autonomy Requires Remote Operation

Tesla’s “Autopilot” mode relieves the driver of many duties, making commutes more relaxing and improving safety. That’s nice, but it doesn’t fundamentally change the economics. The big financial benefits flow from eliminating parking or enabling shared robotaxis. Autonomous driving with a human in the car saves stress. Driving the car with no human saves money, and that changes the game.

However, for many years, there will be situations that AVs can’t understand. AV safety is not perfect, but it will soon be better than that of human drivers (see the Realistic Safety box). Still, we can’t have stuck AVs blocking traffic whenever there’s a construction zone or accident scene they can’t navigate around.

So, we need a way to rescue them. Companies are increasingly realizing that the best way to rescue vehicles is to assist them via remote control, or “teleoperation”. When a robocar gets stuck, it would notify a control center. There, human operators could access the sensors and cameras, analyze the situation, and help.

One way to do that is to let the remote operator drive the vehicle, aka “live” teleoperation. This requires live video streaming to the operator and low-latency control signals back to the AV, which means a very fast, reliable, delay-free network. When and if 5G is truly available, this may be practical. Today, it would be hard.

It’s more practical to take a page from the Mars rover handbook (and from many other robotics researchers, including the author): control at a distance is much easier when you give direction than when you provide real-time feedback. Here, that means looking at the scene and indicating a path for the vehicle to follow (“strategic” teleoperation). This works with much lower bandwidth and much higher latency. After all, it’s how NASA drives robots on Mars despite 30-minute latencies.
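To make the contrast concrete, here is a minimal sketch of a strategic-teleoperation exchange. Everything in it is hypothetical (the message names, fields, and waypoints); the point is that only a handful of waypoints cross the link, while the on-board controller does the moment-to-moment driving, so a second or two of latency is tolerable.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical message types for strategic teleoperation: the vehicle sends a
# low-bandwidth assistance request, and the operator replies with a sparse path.

@dataclass
class AssistanceRequest:
    vehicle_id: str
    reason: str         # e.g. "unmapped construction zone"
    snapshot_uri: str   # compressed camera/LIDAR snapshot, not a live stream

@dataclass
class StrategicPath:
    vehicle_id: str
    waypoints: List[Tuple[float, float]]  # sparse (x, y) points, vehicle frame
    max_speed_mps: float                  # operator-imposed cap while off-map

def operator_assist(request: AssistanceRequest) -> StrategicPath:
    """A human studies the snapshot and sketches a path around the obstacle.
    Only a few waypoints cross the link; there is no real-time control loop."""
    print(f"{request.vehicle_id} needs help: {request.reason}")
    return StrategicPath(
        vehicle_id=request.vehicle_id,
        waypoints=[(0.0, 0.0), (3.0, 1.5), (8.0, 1.5), (12.0, 0.0)],
        max_speed_mps=2.0,
    )

# The on-board controller tracks the waypoints locally, so network latency of
# a second or two is tolerable, unlike live remote driving.
path = operator_assist(
    AssistanceRequest("AV-042", "construction zone", "s3://fleet/snap/av042.bin"))
print(f"Following {len(path.waypoints)} waypoints at <= {path.max_speed_mps} m/s")
```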

Architecture

Thus, strategic teleoperation will enable the first practical AVs. That requires both in-vehicle control and remote oversight. Most architectures do not consider the implications of intimately connecting these systems. But they should not be designed independently. AVs will need a consistent architecture for the vehicle, control center, and cloud.

In-Vehicle Design

AVs work by connecting sensors to high-performance processors, and from there to control algorithms that guide the vehicle (see the Typical In-Car Software figure). Early entrants like Waymo and Tesla built everything from scratch. More recently, ecosystems have evolved to offer technology stacks. The best known are:

  • ROS, the Robot Operating System. ROS is very popular in the research community. There are many tools, drivers, and services available, including sensor models, vehicle simulators, and visualizers for developing autonomy. ROS and its components are used by thousands of robotics researchers, but it is fundamentally a research tool. The latest version, ROS2, replaces the weak centralized communication broker with DDS, but the overall system is still not appropriate for production cars. (A minimal ROS2 node is sketched just after this list.)
  • AUTOSAR. In contrast to ROS, AUTOSAR targets production. AUTOSAR evolved as a practical way for OEMs to specify parts to their suppliers, so it defines far more than software architecture; it specifies nearly everything required to get multiple quotes for similar parts from suppliers. AUTOSAR “Classic” predates intelligent control and is far too restrictive to handle advanced software like autonomy. Its new version, AUTOSAR Adaptive, improves flexibility. Several companies field AUTOSAR development kits and tools.
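For a flavor of what these stacks look like in code, below is a minimal ROS2 node using the standard rclpy client library. It publishes a status string at 1 Hz; the node and topic names are illustrative, and a real autonomy stack composes dozens of such components.

```python
# Minimal ROS2 node (rclpy): publishes a status string once per second.
# Requires a standard ROS2 installation; names here are illustrative only.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class StatusPublisher(Node):
    def __init__(self):
        super().__init__('status_publisher')
        self.pub = self.create_publisher(String, 'vehicle_status', 10)
        self.timer = self.create_timer(1.0, self.tick)  # 1 Hz timer callback

    def tick(self):
        msg = String()
        msg.data = 'nominal'
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = StatusPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```

Under ROS2, that vehicle_status topic rides on DDS (discussed below), so any other node, on or off the vehicle, can subscribe without extra configuration.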

While they differ substantially in capability and end-use, these are primarily in-vehicle technologies. They strive to help users connect and understand sensor sets, algorithms, and control.


Typical In-Car Software

The system must fuse sensing, awareness, and planning to determine actions. It then controls the vehicle through a hardware platform interface. Recent designs combine components of several ecosystems including open source technologies like ROS and Apollo and industry standards like AUTOSAR.

Control-Center Design

Control centers are also challenging software environments. Some support hundreds of operators monitoring installations with hundreds of thousands of components and variables.

Many systems vary dynamically, so dataflow must be able to reconfigure quickly. In some centers, each operator station has a unique role. In many others, stations hand off work as it arrives, changing functions as demanded. Control centers need reliable, flexible, fast access to immense datasets.

A control center for an AV fleet will need to monitor many thousands of vehicles. When one needs attention, it will be assigned to an operator. The operator may request live sensor feeds, historical information, vehicle status, and routing. Thus, each station needs access to almost any data from across the system, with no way to predict ahead of time which particular information will be needed.

Other stations will monitor the overall system status and fleet deployment. While the AV fleet case will have unique challenges, these types of control center demands are typical of many systems.
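This on-demand pattern can be sketched with a toy in-process data bus. Everything here is hypothetical and for illustration only (a real control center would use data-centric middleware such as DDS, discussed below); the point is that nothing is wired to an operator station ahead of time.

```python
from collections import defaultdict
from typing import Any, Callable, DefaultDict, List

# Toy in-process pub/sub bus. A real center would use distributed middleware;
# this just illustrates subscribing to a vehicle's streams only on assignment.
class DataBus:
    def __init__(self) -> None:
        self._subs: DefaultDict[str, List[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, sample: Any) -> None:
        for handler in self._subs[topic]:
            handler(sample)

bus = DataBus()

def assign_incident(vehicle_id: str) -> None:
    # Only when a vehicle needs attention does a station pull its feeds.
    for stream in ("camera", "status", "route"):
        bus.subscribe(f"{vehicle_id}/{stream}",
                      lambda s, st=stream: print(f"[{vehicle_id}/{st}] {s}"))

assign_incident("AV-042")
bus.publish("AV-042/status", "stopped: unrecognized obstacle")
bus.publish("AV-017/status", "nominal")  # unassigned vehicle: no operator load
```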

What About the Cloud?

Cloud computing utterly changed enterprise software. It provides seemingly infinite storage and compute at very low cost. Many naturally assume that most AV functions should therefore run in the cloud.

However, the role of the cloud in autonomy is often overstated. Relying on a cloud connection for control or safety functions would require sending all sensor information to the cloud, processing it, and getting it back to the vehicle fast enough to react to external events. Even if the cloud facilities were available, this requires latency guarantees and reserved compute bandwidth well beyond likely capabilities. 

And it makes little sense; there’s no good reason to stream terabytes of sensor data to the cloud. Most of that is video and radar of uninteresting scenes. Besides, the processing needed in a vehicle is mostly numeric calculations best done in a GPU-like chip with many cores. The cloud’s strengths of a central location, elastic general computing, and storage simply aren’t valuable in an AV.

Edge systems, on the other hand, are well suited to the problem. Real-time reaction requires dedicated local CPUs and resources, and edge processors are quickly gaining the capability to handle all the sensor information. So, the core AV algorithms will continue to run on the vehicle for the foreseeable future.

Nonetheless, the cloud plays important side roles. For instance, AI algorithms learn by processing “training sets”, which are snippets of time where there is some interesting action, like a difficult situation or accident. Deep learning works (roughly) by taking those training sets, having a human or other system determine what to do in that case, and then using that as a “lesson” to the AI. This training is best done centrally offline rather than in real-time.

Thus, AI cars do not learn by themselves; they receive trained AI results from a central source. When an AV experiences an interesting scenario, that scenario becomes a training set and influences the AI. Every AV thus learns from the experiences of every other AV, and the entire fleet improves with each event.
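That centralized learning loop can be sketched as follows. Every name and function body below is a stand-in for a real pipeline; the structure is the point: collect event snippets, label them centrally, retrain offline, then push the result to the whole fleet.

```python
# Hypothetical sketch of the fleet-learning loop: vehicles upload snippets of
# interesting events, a central process labels and trains on them, and the
# improved model is pushed back to every vehicle over the air.

def collect_training_sets(fleet):
    """Each vehicle flags short snippets around hard situations, not raw streams."""
    return [snippet for vehicle in fleet for snippet in vehicle["events"]]

def label(snippet):
    """A human (or oversight system) decides what the right action was."""
    return {"scene": snippet, "correct_action": "yield"}

def train(model, lessons):
    """Offline training in the cloud; there is no real-time constraint here."""
    model["version"] += 1
    model["lessons"] += len(lessons)
    return model

fleet = [{"id": "AV-001", "events": ["near-miss at crosswalk"]},
         {"id": "AV-002", "events": []}]
model = {"version": 7, "lessons": 0}

lessons = [label(s) for s in collect_training_sets(fleet)]
model = train(model, lessons)

# Every vehicle receives the same retrained model: one car's hard case
# becomes every car's lesson.
for vehicle in fleet:
    vehicle["model_version"] = model["version"]
print(model)
```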

The cloud also has access to all vehicles and the control center. Thus, it’s the right place to run fleet management software, collect and disseminate road status, and dispatch field personnel to the vehicles.

The bottom line: the cloud is a critical part of an AV system, but it’s not the key enabling technology. The key technologies are in the car and the control centers.

Can 5G Help?

When it’s deployed, 5G promises 5 ms latency and extreme throughput. That’s a tempting capability. Nonetheless, 5G advocates are struggling to find compelling use cases. Many proposed uses are in transportation and AVs, including:

  • Executing vehicle control in the cloud. This does not make much sense for the reasons cited above. The cloud has no advantages over vehicle-based software for control.
  • Unsafe condition warning. Fast communications between vehicles and infrastructure could be useful to avoid some types of collisions. For instance, an oversight monitoring function could track and warn cars about impending unsafe scenarios like blind intersections or disturbances in the flow ahead. These “superhuman” capabilities could extend safety beyond what human drivers can do without global sensing.
  • Coordinating vehicles. Longer-term, 5G communications offer the opportunity to coordinate AVs. They could, for instance, allow equipped AVs to enter special lanes (the likely future of today’s carpool lane system), where they can be externally controlled to move at higher speeds or much closer spacing. Even without special lanes, controlling traffic strategically can dramatically increase freeway throughput, alleviating our traffic nightmares. Vehicle-to-vehicle and infrastructure communications could even someday eliminate the need for traffic lights, as intersections become automated threading machines. These are fascinating future capabilities, but we need fundamental operation first.
  • Live teleoperation. This function has a valid need for true 5G capabilities. Unfortunately, the market timing may not work out. AVs need teleoperation in the next few years, while widespread, reliable 5G is only now beginning deployment. Relying on 5G to manage a fleet of AVs in the short term is thus risky.

Looking at these potential uses, 5G does not seem to be a critical technology for AVs. No near-term vehicle is likely to rely on off-board systems for control and safety. That’s only somewhat because the connection and remote processing systems are insufficiently reliable.

The greater factors are that onboard systems are increasingly capable of handling these functions, and that the vehicle designer retains full control over them. 5G can augment safety where local sensing fails, but vehicles need to learn to handle local issues first. Thus, 5G should be thought of as a future “nice to have” rather than as current enabling infrastructure.

The Evolving Architecture

So how can we put this together into a scalable, “future proof” architecture?

Inside the vehicle, the industry is converging on a common connectivity standard called the Data Distribution Service (DDS; see box). For instance, the most recent versions of ROS (ROS2) and AUTOSAR Adaptive (since release 18.03) use DDS. They chose DDS for its flexibility, performance, and proven reliable operation.

DDS End-to-End Connection

DDS is well-proven for both on-vehicle and control-room use cases. DDS provides a consistent data model throughout the system. Data routing between levels helps build reliable, large-scale infrastructure.

DDS is also used by hundreds of autonomous vehicle designs. DDS evolved specifically for autonomous systems, first in high-end flight systems, and increasingly in ground, space, and underwater vehicles. It has many features that autonomous systems need, including real-time delivery, extensive control of delivery Quality of Service (QoS), network and location transparency, and scalability. It automatically discovers sources of and needs for data, sends all information directly peer-to-peer for speed, and supports hundreds of platforms. The consolidation around DDS means that it’s the least-risk choice for new designs.
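As a small taste of the API, here is a publisher sketch using the Eclipse Cyclone DDS Python binding (the cyclonedds package), one of several DDS implementations. The data type and topic name are illustrative, and the exact QoS constructor spelling is an assumption to verify against the binding’s documentation.

```python
from dataclasses import dataclass

from cyclonedds.core import Qos, Policy
from cyclonedds.domain import DomainParticipant
from cyclonedds.idl import IdlStruct
from cyclonedds.pub import DataWriter
from cyclonedds.topic import Topic
from cyclonedds.util import duration

# Illustrative data type: DDS topics are strongly typed, so the middleware can
# discover, filter, and route the data it carries.
@dataclass
class VehicleStatus(IdlStruct, typename="VehicleStatus"):
    vehicle_id: str
    speed_mps: float

participant = DomainParticipant()  # joins the DDS "global data space"
topic = Topic(participant, "VehicleStatus", VehicleStatus)

# Reliability QoS; assumed spelling, check the binding's documentation.
qos = Qos(Policy.Reliability.Reliable(duration(seconds=1)))
writer = DataWriter(participant, topic, qos=qos)

# Discovery is automatic: any matching reader, on any machine, receives this
# sample directly peer-to-peer, with no broker in the path.
writer.write(VehicleStatus(vehicle_id="AV-042", speed_mps=12.5))
```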

DDS is also the dominant communications architecture for many operational control centers. Examples include the combat management operations centers for most Navy ships, most drone ground-control systems, NASA KSC’s launch control firing room, and power monitoring control rooms for large hydropower plants. In transportation, DDS monitors and controls train systems, metro rail, air traffic control, and airport ground systems.

DDS is well-positioned to take on the teleoperation use case for autonomous systems. It handles both the control center and vehicle cases.

Data centricity means that a vehicle can move around a city without an impact on operation or code, even if its IP address changes. When an interesting situation is detected by the car or an operator, the deep sensor information can easily be transferred to the cloud as a training set for learning. New vehicles, algorithms, operators, and other participants can come and go at any time. Data centricity also helps fault tolerance, availability, scaling, and security.

Autonomy Beyond the Car

Autonomy is just around the corner. But there is a catch: it cannot be done with traditional automotive architecture. The hype around autonomy assumed it was only an in-car problem. That excitement has stalled; until AIs are much more capable, autonomous vehicles will need remote assistance.

Thus, autonomy is a distributed system problem, including at least the vehicle, the cloud, and control centers. Since systems are defined by how they share information, success requires an architecture that can tie together all the pieces.

Fortunately, automotive autonomy can leverage lessons from many industries that have been building autonomous systems for years. Within the vehicle, the emphasis is on a high-speed connection, reliability, and AI integration. Control centers need a consistent data model, dynamic access to many types of data, and fast response.

The DDS standard originated in autonomous systems and is the clear leader for both in-vehicle frameworks and control centers. Its greatest strengths are scalability, real-time performance, and software integration. By providing a consistent system-wide data model, it enables building autonomous distributed systems…beyond the car.

Sidebars

Box: The Data Distribution Service (DDS) Standard

Although it transports information, DDS is not like other connectivity technologies. All others, including message-oriented middleware (MOM), Service-Oriented Architecture (SOA), and remote procedure calls (RPC), directly connect active entities. In each of these, applications interact with each other, so architectures that use them emphasize active entities and how they interact. Large distributed systems built this way quickly become hard to manage.

DDS is a connectivity framework with its own protocol. But, much more importantly, DDS is a data-centric architecture. It implements a simple concept: a shared “global data space”. This simply means that all data appears to be inside every device and algorithm in local memory. This is, of course, an illusion; all data can’t be everywhere. DDS works by keeping track of which application needs what data, knowing when it needs that data, and then delivering it.

So, while all data isn’t everywhere, any data that any application needs is guaranteed to be present in local memory on time. Applications talk only to their own local memory data space, not to each other. This is the essence of data centricity: “instant” local access to absolutely anything by every device and every algorithm, at every level, in the same way, at any time. It’s an elegant and powerful concept.
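The reading side makes the concept tangible. In the sketch below (again the cyclonedds Python binding; names are illustrative), the application never addresses a server or a peer; it simply takes whatever matching samples DDS has delivered into its local cache.

```python
from dataclasses import dataclass

from cyclonedds.domain import DomainParticipant
from cyclonedds.idl import IdlStruct
from cyclonedds.sub import DataReader
from cyclonedds.topic import Topic

# Same illustrative type as the writing side; DDS matches readers and writers
# by topic name and data type, never by network address.
@dataclass
class VehicleStatus(IdlStruct, typename="VehicleStatus"):
    vehicle_id: str
    speed_mps: float

participant = DomainParticipant()
topic = Topic(participant, "VehicleStatus", VehicleStatus)
reader = DataReader(participant, topic)

# Read from local memory: the "global data space" illusion in practice.
for sample in reader.take(N=10):
    print(sample.vehicle_id, sample.speed_mps)
```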


DDS is the standard that defines a virtual distributed shared memory, or data-centric distributed architecture. DDS understands the data types and controls flow to and from each data object. Conceptually, all system data seems to be virtually “inside” every application in the system, making it easy to share information.

DDS controls flow to and from this memory with “Quality of Service” (QoS) parameters. These specify all required interactions with the data, including flow rates, latencies, and reliability. There are no servers, objects, or special locations. Since DDS applications interact only with the shared distributed memory, they are independent of how other applications are written; which processors, languages, or operating systems they use; where they live; and when they execute. The result is a simple, naturally parallel software architecture with a uniform data model that shares system information.

Because each application talks only to its own local memory, DDS decouples across time, space, and flow. Time decoupling means there is no dependency on startup or join sequence. Space decoupling means data can come from any physical location, so producers and consumers may reside in ECUs, in on-vehicle central processors, or in the cloud, transparently. Flow decoupling means that each application can request the same or different data at any update rate, over any network, in any language, and with or without reliability guarantees.

Moving vehicles can even transparently switch IP addresses as they change connections. DDS examines all these factors and determines if it can deliver the data. If not, it flags an error. But if so, then DDS will deliver that data directly and transparently to the application’s local memory. The result is elegant systemwide data sharing without dependence on physical implementation.
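Time decoupling in particular can be made concrete with durability QoS. In this sketch (same binding and caveats as above; the policy spellings are assumptions), the reader joins after the sample was written, yet still receives the last value.

```python
import time
from dataclasses import dataclass

from cyclonedds.core import Qos, Policy
from cyclonedds.domain import DomainParticipant
from cyclonedds.idl import IdlStruct
from cyclonedds.pub import DataWriter
from cyclonedds.sub import DataReader
from cyclonedds.topic import Topic

@dataclass
class RoadStatus(IdlStruct, typename="RoadStatus"):
    segment: str
    condition: str

# TransientLocal durability keeps the last sample alive for late joiners;
# KeepLast(1) bounds history to the most recent value per data object.
qos = Qos(Policy.Durability.TransientLocal, Policy.History.KeepLast(1))

participant = DomainParticipant()
topic = Topic(participant, "RoadStatus", RoadStatus)
writer = DataWriter(participant, topic, qos=qos)
writer.write(RoadStatus(segment="I-80 E mile 12", condition="lane closed"))

# This reader "joins late", after the write. Durability QoS means DDS still
# delivers the last value: no dependency on startup or join sequence.
late_reader = DataReader(participant, topic, qos=qos)
time.sleep(0.5)  # crude: give discovery a moment; real code would use a waitset
for sample in late_reader.take(N=1):
    print(sample.segment, sample.condition)
```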

Box: Realistic Safety

My first job was in automotive crash protection. Protecting passengers in case of an accident is a noble pursuit, and we’ve made great progress. Modern cars feature multi-stage airbags, side-collision protection, crash-absorbing structures, child seats, and belt pretensioners. These things help, but cars just aren’t that much safer. Hitting a wall at 35 mph, let alone 70 mph, simply dumps too much energy into the vehicle and its passengers. Protecting delicate body parts in a crash is like trying to jump off a building without spilling a tray of wine glasses. Crash protection is the wrong path.

Advanced Driver Assistance Systems (ADAS) try to make drivers better. Newer vehicles can brake automatically, nudge you back into your lane, and warn you about blind spots. But people text, drink, argue, fall asleep, talk on the phone, run lights, take chances, speed, and drive through stop signs with alarming regularity. The only reliable thing about human drivers is that they are unreliable. As a result, people cause 94% of fatal collisions. There’s no obvious path to much better safety with the human in control.

So, if ADAS can’t make drivers good enough, can people instead make robocars better drivers? Tesla takes this approach by letting the autonomy control the car while expecting the human to override when needed. This is the opposite of ADAS, and it makes sense. The robocar catches all the usual situations, leaving drivers to handle only corner cases. It’s certainly not ideal, because human reaction can be slower when the car handles everything most of the time. That said, Tesla in Autopilot already has a 40% better safety record than other vehicles, a huge win! Unfortunately, this doesn’t enable the huge economic win of empty-car operation. So, it’s not a full solution.

This leads to the obvious question: can we make AVs safe enough? Even if the overall safety record is better, every AV crash is front-page news for days. This is unavoidable for anything “new”. But it does highlight a real point: unrealistic expectations may be a bigger problem than immature technology. Yes, robocars make mistakes that people would never make. The goal can’t be to develop autonomous cars that never mistake trucks for bridge overpasses or never hit bicycles crossing where there’s no crossing.

People also make many mistakes that robocars will never make. The metric of “good enough” should simply be to maintain an accident rate that’s better than humans. After all, if you could be twice as safe in a robocar as in a taxi, which would you rather take? The distrust of autonomy is mostly an arrogant assessment of our driving capabilities: 85% of drivers think they’re above average. The reality is that human drivers are not very safe. Robocars will soon be statistically better than humans, despite mistakes.

The bottom line: let’s judge based on results, not bias. We should drop the expectation that robocars must be perfect, or even that they never make obvious mistakes. The metric should be “better than humans”. That is, unfortunately, a pretty low bar. We can augment safety by allowing override when there is a driver in the car. And we can use every accident as new learning for all vehicles. But turn off the grisly news images: when a robocar without a driver is as good as a car with a driver, that should suffice.
