
How AI Can Be Held Accountable


As the field of artificial intelligence continues to develop, researchers are now asking how to hold it accountable for its actions.


As AI continues to develop at a rapid pace, a growing number of conversations turn to how we can ensure it remains accountable. The broad consensus to date is that five core things are required:

  1. Someone who is responsible for each instance of AI.
  2. The ability to explain what is done, how it's done, and why it's done.
  3. Confidence in the accuracy of the system and knowledge of where biases may exist.
  4. The ability for third parties to probe and audit the algorithms.
  5. AI that is developed with fairness in mind.

It's a topic discussed in a recent paper from Oxford University researchers. The authors argue for a holistic mindset to encourage the kind of new policies needed to manage technologies such as autonomous vehicles.

The paper provides three recommendations for policymakers looking to ensure our future relationship with AI and robotics is a safe and harmonious one:

  1. There is an assumption in the industry that making a system interpretable makes it less efficient. The authors believe this assumption deserves to be challenged, and indeed I've written previously about some interesting research that does just that and would allow interpretable systems to be deployed at scale.
  2. While explainability is increasingly feasible, it remains elusive in certain scenarios, and the authors believe that alternative approaches need to be developed for those situations. It isn't good enough to brush them off as too difficult.
  3. Regulation should be structured so that similar systems are regulated in a similar way. We should work to identify parallels between AI systems so that context-specific regulations can be established.

Accountable AI

One of the issues with making systems accountable is the computing power required to do so. There's also a worry that by explaining the workings of a system, you give away its intellectual property. A second paper, from researchers at Harvard University, explores many of these issues.

They aim to provide an explanation in the sense of the reasons or justifications for the outcome a system arrives at, rather than the nuts and bolts of how it works. In other words, it means boiling things down to rules or heuristics, general principles if you like. Such a top-level account would also reduce the risk of revealing industrial secrets.
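
To make the idea concrete, one common (if simplified) way to produce that kind of rule-level account is to train a small, interpretable surrogate model on the outputs of the black-box system and read off its decision rules. The sketch below is only an illustration of this general approach using scikit-learn; it is not the method described in the Harvard paper, and the dataset and model choices are arbitrary.

```python
# Illustrative sketch only: express a black-box model's behaviour as a short
# list of human-readable rules by fitting a shallow surrogate decision tree
# to its predictions. Dataset and models are arbitrary stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# The "black box" whose internals we do not want to expose.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A shallow surrogate trained on the black box's own outputs, not the labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The resulting tree reads as a few if/then rules: a top-level account of the
# decision logic, while the underlying model stays private.
print(export_text(surrogate, feature_names=list(X.columns)))
```

The printed rules are the kind of general principles the authors have in mind: an account of why decisions come out the way they do, without exposing the machinery that produces them.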

The team has distilled the matter into a simple cost/benefit analysis that allows them to determine when the workings of a system ought to be revealed.
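
The paper frames that calculus qualitatively rather than as a formula, but the shape of the test is easy to sketch. In the toy function below, every variable name is hypothetical rather than a term from the paper; it simply says an explanation is owed when the expected societal benefit outweighs the cost of producing it plus the risk of exposing proprietary details.

```python
# Loose illustration only: the trade-off as a simple comparison.
# All names and weights here are hypothetical, not terms from the paper.
def explanation_worthwhile(social_benefit: float,
                           engineering_cost: float,
                           ip_exposure_risk: float) -> bool:
    """Demand an explanation when the expected societal benefit of
    understanding the decision outweighs the cost of producing the
    explanation plus the risk of exposing proprietary details."""
    return social_benefit > engineering_cost + ip_exposure_risk

# Example: a high-stakes decision (say, a loan denial) where the benefit of
# an explanation clearly exceeds the cost and IP risk of providing one.
print(explanation_worthwhile(social_benefit=10.0,
                             engineering_cost=3.0,
                             ip_exposure_risk=2.0))  # True
```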

"We find that there are three conditions that characterize situations in which society considers a decision-maker is obligated — morally, socially, or legally — to provide an explanation," they say.

They believe that the decision in question must impact someone other than the person making the decision. Only then will value be derived from questioning the workings of the system.

They also try to ground matters in a strong and robust legal framework, as humans are prone to disagree on what is and is not morally justifiable, or even socially desirable, whereas laws tend to be harder and more codified. There are also certain situations in which such explanations are already required by law, such as cases involving strict liability or discrimination.

This will have a crucial bearing on the circumstances under which AI systems must explain themselves. It also allows the explanation of a decision to be produced separately from the inner workings of the system itself, which is an important step on the journey towards explainable AI.

"We recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are," they say.

Suffice to say, they don't believe this to be a final and definitive solution to this challenge, but it is nonetheless an interesting step along the way.


