Fair and Equitable: How IBM Is Removing Bias from AI

Let's take a look at how you can make your AI unbiased, and explore how IBM is doing just that.

One of the most critical and controversial topics in Artificial Intelligence is bias. As more apps come to market that rely on Artificial Intelligence, software developers and data scientists can unwittingly (or perhaps even knowingly) inject their personal biases into these solutions. This can cause a variety of problems, ranging from a poor user experience to major errors in critical decision-making.

Fortunately, there's hope. We at IBM have created a solution specifically to address AI bias.

Because flaws and biases may not be easy to detect without the right tool, IBM is deeply committed to delivering services that are unbiased, explainable, value-aligned and transparent. Thus, we are pleased to back up that commitment with the launch of AI Fairness 360, an open-source library to help detect and remove bias in Machine Learning models and data sets.

The AI Fairness 360 Python package includes a comprehensive set of metrics for data sets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in data sets and models. The research community worked together to create 30 fairness metrics and nine state-of-the-art bias mitigation algorithms. The idea is to translate this algorithmic research from the lab into actual practice within domains like finance, human capital management, healthcare, and education.

This creates a comprehensive bias pipeline that fully integrates into the AI lifecycle, but such a pipeline requires a robust set of checkers, “de-biasing” algorithms, and bias explanations. In other words, it requires a toolkit.

IBM Open Sources the AI Fairness 360 Toolkit

One of the key components of the AI Fairness 360 project is the AI Fairness 360 toolkit. The toolkit is designed to address problems of bias through fairness metrics and bias mitigators. The toolkit’s fairness metrics can be used to check for bias in Machine Learning workflows, while its bias mitigators can be used to overcome bias in a workflow to produce a fairer outcome.
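
As a concrete illustration of the metrics side, here is a minimal sketch in Python. It assumes the aif360 package is installed and that the bundled German credit data has been placed in the package's data folder (the loader tells you where to put the files if they are missing). It measures the statistical parity difference, the gap in favorable-outcome rates between the unprivileged and privileged groups, where 0 means parity:

    from aif360.datasets import GermanDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Load the German credit data, treating "age" as the protected attribute
    # (applicants aged 25 or older form the privileged group).
    dataset = GermanDataset(
        protected_attribute_names=['age'],
        privileged_classes=[lambda x: x >= 25],
        features_to_drop=['personal_status', 'sex'])

    privileged_groups = [{'age': 1}]
    unprivileged_groups = [{'age': 0}]

    metric = BinaryLabelDatasetMetric(
        dataset,
        unprivileged_groups=unprivileged_groups,
        privileged_groups=privileged_groups)

    # Difference in favorable-outcome rates between the two groups; 0.0 means parity.
    print("Statistical parity difference:", metric.statistical_parity_difference())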

Bias can enter the system anywhere in the data-gathering, model-training, and model-serving phases. The training dataset might be skewed toward particular instances, for example. The algorithm that creates the model could also be biased, causing it to generate models that are weighted toward particular variables in the input. The test dataset's expected answers (its labels) could themselves be biased. Testing and mitigating bias should therefore take place at each of these three steps in the Machine Learning process. In the AI Fairness 360 toolkit codebase, we call these points pre-processing, in-processing, and post-processing.
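
A pre-processing mitigator such as Reweighing can be applied before any model is trained. Continuing the sketch above (same dataset and groups), it reweights individual instances so that the fairness metric moves back toward parity:

    from aif360.algorithms.preprocessing import Reweighing

    # Pre-processing step: reweight training instances to remove group imbalance.
    rw = Reweighing(unprivileged_groups=unprivileged_groups,
                    privileged_groups=privileged_groups)
    dataset_transf = rw.fit_transform(dataset)

    metric_transf = BinaryLabelDatasetMetric(
        dataset_transf,
        unprivileged_groups=unprivileged_groups,
        privileged_groups=privileged_groups)

    # The metric accounts for instance weights, so this should be close to 0.
    print("After reweighing:", metric_transf.statistical_parity_difference())

In a real workflow you would split off a training set first and reweight only that portion before fitting your model.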

How to Free Your AI Systems from Biases

The AI Fairness 360 toolkit is available on GitHub, alongside the Adversarial Robustness Toolbox (ART), Fabric for Deep Learning (FfDL), and the Model Asset Exchange (MAX).

You can get started with AI Fairness 360 using the open-source tutorials. The AIF360 open-source repository contains a diverse collection of Jupyter notebooks that can be used in various ways. Also, as part of our many Artificial Intelligence and Data Analytics Code Patterns, we have created a Code Pattern to get you started with AIF360. This pattern guides you through launching a Jupyter notebook locally or in IBM Cloud and using it to run AIF360.
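
If you prefer to try the notebooks locally first, the setup is small. The sketch below assumes Python 3.8 or later with pip available; the toolkit is published on PyPI as aif360:

    # Shell: install the toolkit and Jupyter, then start a notebook server.
    #   pip install aif360 jupyter
    #   jupyter notebook
    #
    # Inside a notebook (or plain Python), confirm the install:
    from importlib.metadata import version
    print("AIF360 version:", version("aif360"))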

We also announced AI Trust and Transparency services in IBM Cloud, which explain which factors influence a given Machine Learning model's decision, along with its overall accuracy, performance, fairness, and lineage. To detect and remediate bias in your data and model deployments using a production-hosted service in the cloud, launch the AI Trust and Transparency services from the IBM Cloud Catalog. Once launched, you can configure monitors and follow the self-guided wizards to set up fairness detection and mitigation, in addition to accuracy and explainability.

AI Trust and Transparency checks your deployed model for bias at runtime. To detect bias for a deployed model, you must specify which features to monitor, such as Age or Gender. You must also specify the output schema for the model or function in Watson Machine Learning (WML) so that bias checking can be enabled in AI Trust and Transparency.
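
The monitors themselves are configured through the service's wizards rather than in code, but purely as an illustration (the field names below are hypothetical and not the service's actual API), the information a fairness check needs boils down to something like this:

    # Hypothetical illustration only -- not the AI Trust and Transparency API.
    # A fairness check needs the monitored features, the reference (privileged)
    # and monitored (unprivileged) values, the favorable outcomes, and enough
    # scoring records to compute rates over.
    fairness_config = {
        "deployment": "credit-risk-model",          # hypothetical WML deployment name
        "features": [
            {"name": "Age",    "reference": [[26, 75]], "monitored": [[18, 25]]},
            {"name": "Gender", "reference": ["male"],   "monitored": ["female"]},
        ],
        "favorable_outcomes": ["No Risk"],
        "min_records": 100,                         # records needed before bias is computed
    }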

Get Started. Remove Bias from Your AI Systems!

Additional Code Patterns are available around each of these open-source projects, so you can get started quickly and easily.

We hope you'll explore these tools and share your feedback. As with any open-source project, its quality is only as good as the contributions it receives from the community.

