Arguments Between AI Is a Good Thing

Computers debating among themselves about a result is not a new thing. Given our concerns about whether AI is "correct," this idea deserves another look.

By Emmett Coin · Nov. 27, '18 · Opinion

People (or computers) who enter into an argument can be thought of as adversaries, and most of us have heard about adversarial AI systems. But arguments do not have to be combat; an argument doesn't have to be structured to produce a winner and a loser. Many human arguments are actually more like belief sharing.

And when there are more than two participants, the interaction becomes a robust and reliable learning experience for everyone involved. When the discussion moves to a topic in which a participant lacks expertise (or holds a false belief), it provides an opportunity to gather that knowledge from the other participant(s). At the very least, a misalignment of reported beliefs signals the need for further research on that issue.

In the case of humans, this is the point at which everyone reaches for their smartphones. When there are three or more participants and the majority of them agree on a common belief, it seems pragmatic to at least consider that the majority position is more likely correct. [Note: In my experience, this never works with politics or religion.]

Okay, before I fall down a philosophical mumbo-jumbo rabbit hole, let's shift our context to the world of engineering, computers, and AI.

Throughout the history of technology, humans have been delegating actions to devices. Not surprisingly, our concerns about the correctness of those actions correlate directly with the magnitude of their consequences. Simple things like the lowly pressure cooker have evolved to have multiple failsafe mechanisms to protect us from having a very bad kitchen day. If any one of these mechanisms "decides" that the pressure is too high, it reduces the pressure. Another very simplistic mechanical conversation is the aircraft-style safety switch often used to enable and disable an autopilot.

I fantasize a conversation something like this:
human: (accidentally bumps autopilot on-off switch)
switch: did you really want to turn it off?
human: yes
switch: okay, lift up the cover and move my toggle
human: thanks


But of course, as the complexity of a device's behavior increases and the repercussions of a bad decision become life-threatening, we can no longer manage that behavior with such a simple binary construct. A great example of incredible complexity mixed with incredibly high stakes is the space shuttle.


When it was designed, the space shuttle was probably the single most complex vehicle ever created. It wrangled more power and had more parts than any vehicle before it, and it required functional control with millisecond precision. It also posed a monumental risk to the crew, the launch site, and anyone or anything within range of an errant flight path. Given that premise, a great deal of robustness was engineered into the launch and navigation systems, and a large part of that robustness came from having multiple flight computers working in parallel.

In fact, the space shuttle was designed to have five identical computers, which all worked on the launch administration task in parallel. Together, they compared the high-level output commands (engine throttling and gimballing, flight control surfaces, etc.) to verify that they agreed within some predetermined amount of acceptable error.

It operated something like this: the outputs of three of the computers were compared continuously and critically, and if one of those computers reported a result significantly different from the other two, the inconsistent computer was instantly swapped out for the fourth computer, which had been computing in parallel but had not been submitting any command "votes." If the new quorum of three computers agreed satisfactorily, everything went along just fine and the flight continued normally. If a disagreement was still detected after the swap, the computer with the most divergent answer was ignored.

The original software design was supposed to include all five computers in a more complicated fallback scheme, but by the time of actual system construction and software integration, it was decided that the "3+1" pooled computer system was easier to develop and test. Simulation suggested that it would fail only three times in a million, and when weighed against the higher risk of all of the other systems on the shuttle, that failure rate was deemed sufficient.
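To make the voting concrete, here is a minimal sketch of a "3+1" vote over a single command channel. This is emphatically not the shuttle's actual flight software; the tolerance value, the function name, and the use of a median are my own assumptions for illustration.

from statistics import median

TOLERANCE = 0.05  # hypothetical acceptable disagreement for one command channel

def vote(primaries, spare):
    """Pick a command from three primary computers, with one hot spare.

    `primaries` holds the three computers' values for one output (say, engine
    throttle); `spare` is the fourth computer's value, computed in parallel
    but not normally counted.
    """
    mid = median(primaries)
    # Find the computer that disagrees most with the group.
    worst = max(range(len(primaries)), key=lambda i: abs(primaries[i] - mid))
    if abs(primaries[worst] - mid) > TOLERANCE:
        # Swap the inconsistent computer for the spare and re-vote.
        primaries = primaries[:worst] + [spare] + primaries[worst + 1:]
        mid = median(primaries)
    # If disagreement persists, simply ignore the most divergent answer.
    agreeing = [c for c in primaries if abs(c - mid) <= TOLERANCE]
    return median(agreeing) if agreeing else mid

# Example: vote([0.72, 0.71, 0.95], spare=0.70) returns 0.71 after the swap.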

Note: The fifth computer was kept on board as a hardware backup since it was harder to engineer it out of the shuttle than to just leave it in! Even though the algorithmic decision-making of the space shuttle control computers was far less complex and mysterious than today's AI-driven autonomous vehicle systems, the redundancy was seen as critical to the shuttle's success.

This brings us to the quandary we have today with AI-based self-driving cars. News stories pop up regularly about an autonomous vehicle doing the wrong thing. The AI might think a light-blue truck is just the sky and drive through it. It might completely misunderstand whether a pedestrian is crossing the street or not. It might misinterpret marks on the pavement as traffic-governing lines and do inappropriate things (Google "witches trap autonomous vehicle"). Since almost all of these AI-based driving brains are built on neural nets, they suffer from two problems:

  1. They only learn about what they have experienced (many times) in their training data

  2. They are "black boxes" and cannot explain why they did what they did


[Note: This doesn't even begin to address other moral and ethical issues such as the classic trolley problem. These are difficult issues no matter how simple Isaac Asimov made them seem with his "Three Laws of Robotics."]

One good thing about the state of autonomous vehicles today is that there are many people working independently on the same problem. The entire field got a jumpstart from the DARPA Grand Challenge in 2004. The effort started mostly in universities, followed by R&D groups with deep pockets (e.g., Google), then progressed to auto companies and eventually transportation companies (e.g., Uber). There are dozens of competent groups working on almost exactly the same problem: safely driving cars through cities, on highways, in and out of parking facilities, etc. Each of these independent systems ingests vast amounts of sensor data and reduces it to some pretty simple outputs:

  1. Go forward (or reverse)

  2. Set velocity to X kph

  3. Turn Y degrees to the left (or right)

  4. Brake at Z percent

No matter how differently each of these independent systems analyzes and processes its sensory input, the outputs are few and very basic. The outputs of any one of the systems can easily be compared to the outputs of any other system. Can you see where this is going?
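As a rough illustration of just how comparable those outputs are, here is a small sketch of a command record and a tolerance-based comparison. The record type, field names, and tolerances are my own assumptions, not any group's actual interface.

from dataclasses import dataclass

@dataclass
class DriveCommand:
    direction: int       # +1 forward, -1 reverse
    velocity_kph: float  # target speed
    steer_deg: float     # negative = left, positive = right
    brake_pct: float     # 0-100

def commands_agree(a: DriveCommand, b: DriveCommand,
                   v_tol=2.0, s_tol=1.5, b_tol=5.0) -> bool:
    # True if two independently derived commands match within loose tolerances.
    return (a.direction == b.direction
            and abs(a.velocity_kph - b.velocity_kph) <= v_tol
            and abs(a.steer_deg - b.steer_deg) <= s_tol
            and abs(a.brake_pct - b.brake_pct) <= b_tol)

However each system arrives at its numbers, a comparison at this level needs nothing more than a handful of thresholds.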

It's time to start an argument — well, at least a strong debate — between these different autonomous systems. Here's a thought experiment I would like you to consider:

Imagine a research vehicle with multiple (maybe even dozens of) independently developed AI drivers operating in parallel. All of their basic driving command outputs can be compared in parallel, and much like the space shuttle's flight control computers, those outputs can be checked for consistency. But we can use this information to glean much more than simple agreement: the tightness (standard deviation) of the outputs indicates how well a particular driving situation is understood. Remember, all of these AI drivers are different. They were raised by different researchers (parents). They experienced different training data (lived different lives). They each have a different perspective on the task.

What I think is most fascinating in this scenario is that a vehicle driven by a committee of AI drivers could actually teach itself how to drive better. An AI driver that suggested an action the group considered inappropriate would be able to remember that situation as well as what it should have done, and it could confidently add verified training data to its own personal collection. After sleeping on this data (back in its own personal server farm), it should handle that particular situation better in the future. Situations where the control outputs are much less consistent would generate a request to the researchers (parents) for more training examples (experiences) like that situation (the AI could show its parents the video of the confusing moment). These new training examples would be shared with the group as a whole.
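Here is a toy version of that committee loop, again only a sketch: it assumes each driver object exposes a name and a propose() method returning a DriveCommand like the one above, and it looks only at the steering channel for brevity. All names and thresholds are illustrative.

from statistics import median, pstdev

def committee_step(drivers, sensor_frame, steer_tol_deg=1.5):
    # One decision cycle for a committee of AI drivers (steering channel only).
    proposals = {d.name: d.propose(sensor_frame) for d in drivers}
    steers = [p.steer_deg for p in proposals.values()]
    consensus = median(steers)  # the command the vehicle would actually act on
    spread = pstdev(steers)     # a tight spread means the situation is well understood

    # Dissenters record the frame plus the consensus answer as new,
    # group-verified training data for their next offline training run.
    lessons = [{"driver": name, "frame": sensor_frame, "correct_steer": consensus}
               for name, p in proposals.items()
               if abs(p.steer_deg - consensus) > steer_tol_deg]

    # If nobody really agrees, flag the frame so the researchers (parents)
    # can gather more examples like it for the whole group.
    needs_more_data = spread > steer_tol_deg
    return consensus, lessons, needs_more_data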

I think this approach has a lot of technological merit. Politically, I do expect some proprietary sharing issues among these various independent groups related to competitiveness. But I predict that the autonomous driving labs will realize that if one autonomous car drives over a baby buggy, or avoids a stray cat only to crash into a crowded bus stop, then all of the autonomous driving companies will suffer the bad publicity. The sooner all AI systems can be trusted as safe on the road, the sooner companies will be able to sell large numbers of them. And after all, they would only be sharing the wisdom of their results, not their actual proprietary technology.

At this point, some of you are thinking this is just a crackpot idea from some wild-eyed futurist (a.k.a. yours truly, the author). But research efforts are already being made in this direction: initial work today uses this approach to detect possible AI driving errors and alert the human.


One of the things that the MIT AVT Study is researching is this sort of argumentation between competing AIs. Below is a short video about what they're doing:


The full research paper can be found here.

I think this approach is very exciting for autonomous vehicles, and I expect to see it applied more widely in the future. It certainly could be applied to robotics in general, and there are many ways I can imagine it fitting in with medical and even financial applications. Keep an eye on this technology. 
