Fair and Equitable: How IBM Is Removing Bias from AI

Let's take a look at how you can make your AI unbiased, and explore how IBM is doing just that.

By Angel Diaz · Oct. 03, 2018 · Opinion

One of the most critical and controversial topics in Artificial Intelligence is bias. As more apps that rely on Artificial Intelligence come to market, software developers and data scientists can unwittingly (or perhaps even knowingly) inject their personal biases into these solutions. This can cause a variety of problems, ranging from a poor user experience to major errors in critical decision-making.

Fortunately, there's hope. We at IBM have created a solution specifically to address AI bias.

Because flaws and biases may not be easy to detect without the right tool, IBM is deeply committed to delivering services that are unbiased, explainable, value-aligned and transparent. Thus, we are pleased to back up that commitment with the launch of AI Fairness 360, an open-source library to help detect and remove bias in Machine Learning models and data sets.

The AI Fairness 360 Python package includes a comprehensive set of metrics for data sets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in data sets and models. The research community worked together to create 30 fairness metrics and nine state-of-the-art bias mitigation algorithms. The idea is to translate this algorithmic research from the lab into actual practice within domains like finance, human capital management, healthcare, and education.
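
For a sense of what those metrics look like in practice, here is a minimal sketch (assuming the package is installed with pip install aif360, and using a tiny made-up dataset in which "sex" is the protected attribute; the column names and numbers are illustrative only) that computes two of the package's dataset-level fairness metrics:

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute (1 = privileged group) and
# 'label' is the favorable outcome we want to check for bias.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.7, 0.4, 0.9, 0.6, 0.3, 0.2],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Statistical parity difference: P(favorable | unprivileged) - P(favorable | privileged).
print("Statistical parity difference:", metric.statistical_parity_difference())
# Disparate impact: the ratio of those two rates (1.0 means parity).
print("Disparate impact:", metric.disparate_impact())

A statistical parity difference of zero and a disparate impact of 1.0 would indicate that the favorable outcome is distributed equally across the two groups.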

This creates a comprehensive bias pipeline that fully integrates into the AI lifecycle, but such a pipeline requires a robust set of checkers, “de-biasing” algorithms, and bias explanations. In other words, it requires a toolkit.

IBM Open Sources the AI Fairness 360 Toolkit

One of the key components of the AI Fairness 360 project is the AI Fairness 360 toolkit. The toolkit is designed to address problems of bias through fairness metrics and bias mitigators. The toolkit’s fairness metrics can be used to check for bias in Machine Learning workflows, while its bias mitigators can be used to overcome bias in a workflow to produce a fairer outcome.
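
To see how the fairness metrics and bias mitigators fit together, the rough sketch below reuses the toy dataset from the earlier snippet, measures bias, applies Reweighing (one of the toolkit's pre-processing mitigators, which learns instance weights that balance the favorable outcome across groups), and then measures again:

from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

privileged_groups = [{"sex": 1}]
unprivileged_groups = [{"sex": 0}]

# 1. Check: measure bias in the original data (reusing `dataset` from above).
before = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=privileged_groups,
                                  unprivileged_groups=unprivileged_groups)
print("Mean difference before:", before.mean_difference())

# 2. Mitigate: learn instance weights that balance outcomes across groups.
rw = Reweighing(unprivileged_groups=unprivileged_groups,
                privileged_groups=privileged_groups)
dataset_transf = rw.fit_transform(dataset)

# 3. Re-check: the weighted mean difference should now be close to zero.
after = BinaryLabelDatasetMetric(dataset_transf,
                                 privileged_groups=privileged_groups,
                                 unprivileged_groups=unprivileged_groups)
print("Mean difference after reweighing:", after.mean_difference())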

Bias can enter the system anywhere in the data-gathering, model-training, and model-serving phases. The training dataset might be biased toward particular instances, for example. The algorithm that creates the model could also be biased, causing it to generate models that are weighted toward particular variables in the input. The test dataset's expected answers could themselves be biased. Testing and mitigating bias should take place at each of these three steps in the Machine Learning process. In the AI Fairness 360 Toolkit codebase, we call these points pre-processing, in-processing, and post-processing.

[Figure: bias check and mitigation points in the Machine Learning pipeline (pre-processing, in-processing, post-processing)]
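
The toolkit's mitigation algorithms map onto those same three points. The import paths below are taken from the AIF360 package; which algorithm to use at each stage depends on your data and model:

# Pre-processing: transform the training data before a model is fit.
from aif360.algorithms.preprocessing import Reweighing
# In-processing: build fairness into the model-training step itself.
from aif360.algorithms.inprocessing import PrejudiceRemover
# Post-processing: adjust a trained model's predictions at serving time.
from aif360.algorithms.postprocessing import CalibratedEqOddsPostprocessing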

How to Free Your AI Systems from Biases

The AI Fairness 360 toolkit is available on GitHub, along with Adversarial Robustness Toolbox (ART), Fabric for Deep Learning (FfDL), and Model Asset Exchange (MAX).

You can get started with AI Fairness 360 using the open-source tutorials. The AIF360 open-source repository contains a diverse collection of Jupyter notebooks that can be used in various ways. As part of our many Artificial Intelligence and Data Analytics Code Patterns, we have also created a Code Pattern to help you get started with AIF360. This pattern shows how to launch a Jupyter Notebook locally or in IBM Cloud and use it to run AIF360.
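
As a rough sketch of the kind of steps those tutorial notebooks walk through, the snippet below follows the credit-scoring tutorial: it loads the German credit dataset bundled with AIF360 (the raw data files must first be downloaded into the package's data directory, as the repository describes) and checks a baseline fairness metric with age as the protected attribute:

from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Treat age as the protected attribute; applicants aged 25 or older form the
# privileged group, and unrelated personal attributes are dropped.
dataset = GermanDataset(
    protected_attribute_names=["age"],
    privileged_classes=[lambda x: x >= 25],
    features_to_drop=["personal_status", "sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"age": 1}],
    unprivileged_groups=[{"age": 0}],
)
print("Mean difference:", metric.mean_difference())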

In addition, we announced AI Trust and Transparency services in IBM Cloud, which explain which factors influence a given Machine Learning model's decision, along with its overall accuracy, performance, fairness, and lineage. To detect and remediate bias in your data and model deployments using a production-hosted service in the cloud, launch the AI Trust and Transparency services from the IBM Cloud Catalog. Once launched, you can configure monitors and follow the self-guided wizards to set up fairness detection and mitigation, in addition to accuracy and explainability.

AI Trust and Transparency checks your deployed model for bias at runtime. To detect bias for a deployed model, you must define the feature requirements, such as Age or Gender. You must also specify the output schema for the model or function in Watson Machine Learning (WML) so that bias checking can be enabled in AI Trust and Transparency.

Get Started. Remove Bias from Your AI Systems!

There are additional code patterns available around each of these open source projects, allowing you to get started quickly and easily:

  • Deploy and use a multi-framework Deep Learning platform on Kubernetes
  • Integrate adversarial attacks into a model training pipeline
  • Leverage TensorFlow and Fabric for Deep Learning to train and deploy a Fashion MNIST model
  • Create a web app to visually interact with objects detected using Machine Learning

We hope you'll explore these tools and share your feedback. As with any open-source project, its quality is only as good as the contributions it receives from the community.
