Can AI Self-Police and Reduce Bias?
Let's take a look at whether Artificial Intelligence can self-police and, in doing so, reduce human bias.
Concerns that AI-based systems could hard-code the biases that blight human decision-making have caused considerable consternation among researchers around the world. They have also prompted a number of attempts to overcome the challenge. For instance, I recently wrote about a new tool developed by Accenture to identify biases within AI systems.
The tool checks the data that feeds any AI-based system to determine whether sensitive variables influence other variables. For instance, gender is often correlated with profession, so even if a company removes gender from the data set, the results can still be biased if profession remains in it.
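To make the proxy-variable point concrete, here is a minimal sketch (not Accenture's actual tool; the data and function names are invented) showing how a remaining column can still reveal a removed sensitive attribute:

```python
# Hypothetical illustration: dropping a sensitive column does not remove
# its signal if a correlated proxy (here, profession) stays in the data.
from collections import Counter

# Toy records of (gender, profession) pairs; entirely invented.
records = [
    ("F", "nurse"), ("F", "nurse"), ("F", "teacher"),
    ("M", "engineer"), ("M", "engineer"), ("M", "teacher"),
]

def female_rate_by_profession(rows):
    """Share of female applicants within each profession."""
    totals, female = Counter(), Counter()
    for gender, profession in rows:
        totals[profession] += 1
        if gender == "F":
            female[profession] += 1
    return {p: female[p] / totals[p] for p in totals}

rates = female_rate_by_profession(records)
# In this toy data, 'nurse' is 100% female and 'engineer' 0%, so
# profession alone largely reveals gender: a model trained without the
# gender column can still learn gender-correlated patterns through it.
```

In practice, a bias-detection tool would run this kind of association check between every sensitive variable and every remaining feature before training begins.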
The tool then tests for algorithmic biases in terms of false positives and false negatives. Based on those results, it adjusts the model so that impact is equalized and people are treated fairly. In other words, it aims to do more than simply highlight a problem; it also aims to fix it for you. Along the way, it calculates the performance trade-offs that result from the increased fairness, presented visually to aid decision-making, even among non-technical audiences.
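The false-positive/false-negative comparison described above can be sketched as follows. This is a hedged illustration of the general technique, not Accenture's implementation; the groups and outcomes are invented toy data:

```python
# Compare false-positive and false-negative rates across two
# demographic groups -- a large gap between groups flags a bias.
def error_rates(y_true, y_pred):
    """Return (false-positive rate, false-negative rate) for one group."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    neg = sum(1 for t in y_true if t == 0)
    pos = sum(1 for t in y_true if t == 1)
    return fp / neg, fn / pos

# Invented true outcomes and model predictions for two groups.
group_a = ([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
group_b = ([1, 1, 0, 0, 1, 0], [1, 1, 1, 1, 1, 0])

fpr_a, fnr_a = error_rates(*group_a)  # 1/3 and 1/3
fpr_b, fnr_b = error_rates(*group_b)  # 2/3 and 0
# Group B receives far more false positives than group A here, which
# an equalizing step would then try to correct in the model.
```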
There is also a project led by the Santa Fe Institute, documented in a recently published paper, which proposes an algorithm for imposing fairness constraints that prevent a system from showing bias.
“So say the credit card approval rate of black and white [customers] cannot differ more than 20 percent. With this kind of constraint, our algorithm can take that and give the best prediction of satisfying the constraint,” the researchers say. “If you want the difference of 20 percent, tell that to our machine, and our machine can satisfy that constraint.”
The team believes that their algorithm lets users control the level of fairness required by law in various contexts. It's worth remembering, however, that fairness comes with a trade-off: behaving fairly can reduce the predictive power of the algorithm.
Nonetheless, the team hopes that their work will be adopted by companies to help them identify potential discrimination lurking in their own Machine Learning applications.
They say, “Our hope is that it’s something that can be used so that machines can be prevented from discrimination whenever necessary.”
Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.