
How AI Can Remain Fair and Accountable


We need to develop some robust ethical foresight analysis to see which grain will grow and which will not and to decide which grains we should sow in the first place.


As AI continues to develop at a rapid pace, a growing number of conversations are turning to how we can ensure it remains fair and accountable. The broad consensus to date is that five core things are required:

  1. Someone responsible for each instance of AI.
  2. The ability to explain what is done, how it’s done, and why it’s done.
  3. Confidence in the accuracy of the system, and knowledge of where biases may exist (see the bias-check sketch after this list).
  4. The ability for third parties to probe and audit the algorithms.
  5. AI that is developed with fairness in mind.
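The third point is measurable in practice. As a rough illustration, here is a minimal, self-contained Python sketch of one common bias check, the demographic parity difference. The groups, decisions, and interpretation threshold are all invented for the example; none of it comes from the paper discussed below.

```python
# Hypothetical bias check: compare the rate of positive decisions a model
# hands out to two demographic groups. All data here is invented.

def positive_rate(decisions):
    """Fraction of positive (1) decisions in a group's outcomes."""
    return sum(decisions) / len(decisions)

# 1 = favorable outcome, 0 = unfavorable outcome, one entry per person
group_a = [1, 0, 1, 1, 0, 1, 1, 0]
group_b = [0, 0, 1, 0, 0, 1, 0, 0]

disparity = positive_rate(group_a) - positive_rate(group_b)
print(f"Positive rate, group A: {positive_rate(group_a):.2f}")  # 0.62
print(f"Positive rate, group B: {positive_rate(group_b):.2f}")  # 0.25
print(f"Demographic parity difference: {disparity:.2f}")        # 0.38

# A difference far from zero flags a disparity worth auditing; it does
# not prove unfairness on its own, but it shows where to start looking.
```

Checks like this are cheap enough to run continuously, which is part of what makes the third-party auditing in point four a realistic ask.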

It’s a topic discussed in a recent paper from Oxford University researchers. The authors argue for a holistic mindset to encourage the kind of new policies needed to manage technologies such as autonomous vehicles.

The paper provides three recommendations for policymakers looking to ensure our future relationship with AI and robotics is a safe and harmonious one.

  1. There is an assumption in the industry that making a system interpretable makes it less efficient. The authors believe this deserves to be challenged, and indeed I’ve written previously about some interesting research that does just that and would allow such systems to be deployed at scale (see the sketch after this list).
  2. While explainability is increasingly feasible, it remains elusive in certain scenarios, and the authors believe alternative options need to be developed for those situations. It isn’t good enough to brush them off as too difficult.
  3. Regulation should be structured so that similar systems are regulated in a similar way. We should work to identify parallels between AI systems so that context-specific regulations can be established.
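To make the first recommendation concrete, here is a minimal sketch, assuming scikit-learn and its bundled iris dataset, of a model that is both quick to train and fully inspectable. It illustrates interpretable-by-design models in general; it is not the specific research the paper or my earlier article covers.

```python
# A small decision tree: cheap to train, and its entire decision process
# can be printed as human-readable rules for third parties to audit.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(iris.data, iris.target)

# Every prediction this model makes can be traced through these rules,
# which is the kind of explainability a regulator could actually probe.
print(export_text(model, feature_names=list(iris.feature_names)))
```
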
“The most important thing is to recognize the similarities between algorithms, AI, and robotics. Transparency, privacy, fairness, and accountability are essential for all algorithmic technologies. We need to address these challenges together to design safe systems,” the authors say.

In all the hubbub around AI and its tremendous capabilities, its ability to explain what it’s doing is crucial. We’ve seen plenty of calls for better data governance, but far fewer about the accountability AI will require if we’re to have confidence in it.

“AI can be an immense force for good, but we need to ensure that its risks are prevented or minimized. To do this, it is not enough to react to problems. A permanent crisis approach will not be successful. We need to develop some robust ethical foresight analysis not only to see ‘which grain will grow and which will not’ but above all to decide which grains we should sow in the first place,” the authors continue.


Topics: ethics, AI, policy
