
Regulating ML/AI-Powered Systems for Bias


See why we should be regulating AI and ML-powered systems.


Siri and Alexa are good examples of AI: they listen to human speech, recognize words, perform searches, and translate the text results back into speech. McDonald's recent acquisition of Dynamic Yield, an AI company whose technology analyzes customers' spending and eating habits and recommends other items to purchase, takes the use of AI a step further. AI technologies raise important issues, like personal privacy rights and whether machines can ever make fair decisions.

There are two main areas where regulation can be helpful.

1. Preventing Abusive Use of the Technology

One good example of this is the use of AI to read lips. A typical human lip reader has an accuracy rate between 15 and 50 percent. Recently, an AI lip-reading program created at the University of Oxford reached an accuracy rate of over 90 percent.

This technology is an excellent tool to help hearing-impaired people communicate. But on the flip side, it can also be misused to spy on people.

2. Preventing Bias in Decision Making

Algorithms learn to predict patterns from historical data, and human decisions are inherently biased. In essence, algorithms trained on this imperfect data learn to make decisions that reflect human bias.

Algorithms are starting to be used not only to filter data but to make recommendations and, in some cases, to take over the decision-making process entirely. For example, algorithms could be deciding:

  • Whether an individual should be given a loan
  • Whether a defendant should get bail
  • Whether a person should be hired
  • Whether an apartment should be rented to an applicant
  • Where the next criminal activity could happen

Responsibility for Addressing Discrimination

These algorithms all learn from existing data to make their predictions, which means they simply reinforce whatever bias is already present in that data.


For example, say all the people a company hired last year were males aged 25 to 35. Feed this data to an algorithm, and it starts to learn that the company's ideal hire is a male between 25 and 35, which means females and older candidates will not get a fair chance. If the employer applied an explicit rule to employ only male candidates aged 25 to 35, that would be unlawful discrimination. But if the same employer feeds its historical data to an algorithm and the algorithm learns to discriminate against females or against people above 35, who is responsible for that discrimination?
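To make the problem concrete, here is a minimal sketch of how a classifier trained on such skewed records reproduces the skew. The data is made up and the features (age, gender flag) are chosen purely for illustration; scikit-learn's LogisticRegression stands in for whatever model a real hiring system might use.

```python
# Hypothetical hiring data: every past hire is male and aged 25-35.
# Features: [age, is_male]; label: 1 = hired, 0 = rejected.
from sklearn.linear_model import LogisticRegression

X_train = [
    [27, 1], [30, 1], [33, 1], [25, 1],   # hired: all male, aged 25-35
    [42, 1], [29, 0], [31, 0], [50, 0],   # rejected: older or female
]
y_train = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two candidates identical except for gender:
print(model.predict_proba([[30, 1]])[0][1])  # 30-year-old male: high score
print(model.predict_proba([[30, 0]])[0][1])  # 30-year-old female: low score
```

No one wrote a rule saying "prefer young males"; the model inferred one from the data, which is exactly why assigning responsibility is hard.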

A New Law Being Proposed


Congress is proposing a new bill, the Algorithmic Accountability Act, that tries to address the second concern above (bias in decision making) by forcing large companies to audit their machine learning-powered systems, such as facial recognition or ad-targeting algorithms, for bias. The act is aimed at large companies that have revenues of over $50 million and hold data on more than 1 million people.

The key requirement is for these companies to review their use of algorithms that affect consumers' legal rights, predict consumer behavior, or involve large amounts of sensitive data. If such an audit turns up evidence of discrimination, the company has to address it promptly.
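The bill leaves the mechanics of such an audit to regulators, but a common first check, borrowed from US employment law, is the disparate impact ratio and the "four-fifths rule." Here is a minimal sketch with hypothetical approval counts; the bill itself does not prescribe this metric.

```python
# Disparate impact ratio: selection rate of a protected group divided
# by the selection rate of the most favored group. Numbers are made up.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group that received the favorable outcome."""
    return selected / total

rate_men   = selection_rate(selected=80, total=100)   # 0.80
rate_women = selection_rate(selected=40, total=100)   # 0.40

disparate_impact = rate_women / rate_men              # 0.50

# By convention, a ratio below 0.8 flags potential adverse impact.
if disparate_impact < 0.8:
    print(f"Potential adverse impact: ratio = {disparate_impact:.2f}")
```

A real audit would examine many outcomes and intersectional groups, but even this single ratio is enough to surface a hiring pattern like the one described above.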

Summary

These are all steps in the right direction. They will probably lead to the creation of an “office of algorithmic compliance,” likely housed within the existing compliance department and staffed with technical and legal people who administer the audits and review the findings. A chief risk officer or a chief compliance officer will probably take on an enhanced role to ensure adherence to the upcoming laws.

Topics:
machine learning, artificial intelligence, product management, bias in ai, ethics in ai, ai and discrimination, algorithmic accountability act

