How AI Can Be Held Accountable
As artificial intelligence continues to develop, researchers are now asking how it can be held accountable for its actions.
As AI continues to develop at a rapid pace, a growing number of conversations turn to how we can ensure it remains accountable. The broad consensus to date is that five core things are required (a rough sketch of how these might be captured in practice follows the list):
- Someone who is responsible for each instance of AI.
- The ability to explain what is done, how it's done, and why it's done.
- Confidence in the accuracy of the system and knowledge of where biases may exist.
- The ability for third parties to probe and audit the algorithms.
- AI that is developed with fairness in mind.
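To make those requirements a little more concrete, here is a rough sketch of how they might be captured as machine-readable metadata registered alongside each deployed system. Every name and field here is an illustrative assumption of mine, not a standard drawn from the consensus itself:

```python
# Illustrative sketch only: a metadata record mapping the five accountability
# requirements onto fields a team could register for each AI system.
from dataclasses import dataclass, field


@dataclass
class AccountabilityRecord:
    """Hypothetical metadata registered alongside a deployed model."""
    responsible_owner: str        # a named person responsible for this instance
    explanation_method: str       # how what/how/why is explained for decisions
    accuracy_estimate: float      # measured accuracy on a held-out evaluation set
    known_biases: list[str] = field(default_factory=list)   # documented bias risks
    audit_endpoint: str = ""      # where third parties can probe the algorithm
    fairness_reviewed: bool = False  # whether a fairness review has been done


record = AccountabilityRecord(
    responsible_owner="jane.doe@example.com",
    explanation_method="distilled decision rules",
    accuracy_estimate=0.92,
    known_biases=["under-represents applicants over 60"],
    audit_endpoint="https://example.com/models/credit-v3/audit",
    fairness_reviewed=True,
)
```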
It's a topic discussed in a recent paper from Oxford University researchers. The authors argue for a holistic mindset to encourage the kind of new policies needed to manage technologies such as autonomous vehicles.
The paper provides three recommendations for policymakers looking to ensure our future relationship with AI and robotics is a safe and harmonious one:
- There is an assumption in the industry that making a system interpretable makes it less efficient. The authors believe this assumption deserves to be challenged, and indeed I've written previously about some interesting research that does just that and would allow such systems to be deployed at scale (see the sketch after these recommendations).
- Suffice it to say, while explainability is increasingly feasible, it remains elusive in certain scenarios, and the authors believe that alternative options need to be developed for such situations. It isn't good enough to brush them off as too difficult.
- Regulation should be structured so that similar systems are regulated in a similar way. We should work to identify parallels between AI systems so that context-specific regulations can be established.
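As a rough illustration of that first recommendation, the sketch below distills an opaque model into a small decision tree and measures how faithfully the tree reproduces it, showing that interpretability need not cost much accuracy. The dataset, models, and parameters are assumptions chosen purely for illustration, not the research mentioned above:

```python
# Illustrative sketch: distilling a black-box model into an interpretable
# surrogate, then checking fidelity and accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque "production" model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Distillation: train a shallow tree to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the interpretable surrogate agrees with the black box.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate agrees with black box on {fidelity:.1%} of test cases")
print(f"Black box accuracy: {black_box.score(X_test, y_test):.1%}")
print(f"Surrogate accuracy: {surrogate.score(X_test, y_test):.1%}")
```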
One of the issues with making systems accountable is the computing power required to deliver that. There's also a worry that by explaining the workings of a system, you give away the IP of that system. A second paper, from researchers at Harvard University, explores many of these issues.
They aim to provide an explanation in the sense of defining the reasons or justifications for the outcome a system arrives at, rather than the nuts and bolts of how it works. In other words, reducing things to rules or heuristics, general principles if you like. Such a top-level account would also reduce the risk of industrial secrets being revealed.
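As a toy example of that idea, the snippet below reports which top-level rules justified a particular decision rather than exposing any model internals. The loan scenario and the rules themselves are entirely invented; a real system would derive such rules from the underlying model:

```python
# Illustrative sketch: explanation as reasons, not internals. The thresholds
# and scenario are made up for demonstration purposes.
def explain_loan_denial(income: float, debt_ratio: float, defaults: int) -> list[str]:
    """Return human-readable justifications for a denial, if any apply."""
    reasons = []
    if debt_ratio > 0.45:
        reasons.append("debt-to-income ratio above 45%")
    if defaults > 0:
        reasons.append("prior defaults on record")
    if income < 20_000:
        reasons.append("income below minimum threshold")
    return reasons


reasons = explain_loan_denial(income=18_500, debt_ratio=0.50, defaults=1)
print("Application declined because:", "; ".join(reasons))
```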
The team has boiled the matter down to a simple cost/benefit analysis that allows them to determine the optimum time to reveal the workings of the system.
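Something like the following captures the spirit of that analysis, where an explanation is provided only when its social benefit outweighs the combined computational cost and the risk of exposing proprietary detail. The linear form and the example weights are my own assumptions, not the paper's model:

```python
# Illustrative sketch: a cost/benefit gate for revealing a system's workings.
def should_explain(social_benefit: float,
                   compute_cost: float,
                   ip_exposure_risk: float) -> bool:
    """Reveal the reasoning only when the benefit exceeds the total cost."""
    return social_benefit > compute_cost + ip_exposure_risk


# A decision affecting someone's credit carries high social benefit, so an
# explanation is warranted despite non-trivial costs (values are invented).
print(should_explain(social_benefit=10.0, compute_cost=2.5, ip_exposure_risk=4.0))  # True
```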
"We find that there are three conditions that characterize situations in which society considers a decision-maker is obligated — morally, socially, or legally — to provide an explanation," they say.
They believe that the decision in question must impact someone other than the person making the decision. Only then will value be derived from questioning the workings of the system.
They also argue for a strong and robust legal framework behind such obligations, as humans are prone to disagree on what is and is not morally justifiable, or even socially desirable. Laws, by contrast, tend to be harder and more codified. There are also certain situations in which such explanations are already required by law, including areas such as strict liability or discrimination.
This will have a crucial bearing on the circumstances under which AI systems must explain themselves. This also allows for the explanation of a decision to be made separately from the inner workings of the system itself. This is a crucial step in the journey towards explainable AI.
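One way to picture the resulting test is as a simple gating check. Only the conditions described above are modelled here, with third-party impact treated as necessary and legal, moral, or social grounds making the obligation concrete; the paper's actual criteria are richer than this sketch:

```python
# Illustrative sketch of the obligation test; the structure is my own
# simplification, not the paper's formal criteria.
def explanation_required(affects_third_party: bool,
                         legally_mandated: bool,
                         moral_or_social_grounds: bool) -> bool:
    """Is the decision-maker obligated to explain this decision?"""
    # Impact on someone other than the decision-maker is a necessary
    # condition; beyond that, the obligation can rest on a legal mandate
    # (e.g. strict liability, discrimination law) or on moral/social grounds.
    return affects_third_party and (legally_mandated or moral_or_social_grounds)


# A hiring decision affecting an applicant, with discrimination law in play:
print(explanation_required(True, True, False))   # True
# A purely personal choice that affects nobody else:
print(explanation_required(False, False, True))  # False
```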
"We recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are," they say.
Suffice it to say, they don't regard this as a final and definitive solution to the challenge, but it is nonetheless an interesting step along the way.