Machine Learning Models: Bias Mitigation Strategies

Discrimination-aware Machine Learning models are helping to change the way technology detects and mitigates bias.

By Ajitesh Kumar · Nov. 20, 18 · Opinion

In this post, you will learn about some of the bias mitigation strategies that can be applied during the Machine Learning model development lifecycle (MDLC) to achieve discrimination-aware Machine Learning models. The primary objective is to achieve a high-accuracy model while ensuring that the model is less discriminatory with respect to sensitive or protected attributes. In simple words, the output of the classifier should not correlate with protected or sensitive attributes. Building such ML models becomes a multi-objective optimization problem: the quality of the classifier is measured by both its accuracy and the discrimination it exhibits on the basis of sensitive attributes; the more accurate, the better, and the less discriminatory (with respect to sensitive attributes), the better. The following are some of the bias mitigation strategies:

  • Pre-processing algorithms
  • In-processing algorithms
  • Post-processing algorithms

Here is a diagram representing the bias mitigation strategies for Machine Learning models:

Bias mitigation strategies for Machine Learning models
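Before applying any of these strategies, it helps to quantify how strongly a classifier's output correlates with a protected attribute. Below is a minimal NumPy sketch (not from the original article; the array names and values are hypothetical) of two common group-fairness metrics, statistical parity difference and disparate impact, which the mitigation techniques discussed below aim to improve.

```python
import numpy as np

def statistical_parity_difference(y_pred, protected):
    """P(y_hat=1 | unprivileged) - P(y_hat=1 | privileged).

    y_pred    : binary predictions (0/1) from any trained classifier
    protected : 1 for the privileged group, 0 for the unprivileged group
    """
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_unpriv = y_pred[protected == 0].mean()
    rate_priv = y_pred[protected == 1].mean()
    return rate_unpriv - rate_priv

def disparate_impact(y_pred, protected):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    Values close to 1.0 indicate parity; the common "80% rule"
    flags ratios below 0.8 as potentially discriminatory.
    """
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_unpriv = y_pred[protected == 0].mean()
    rate_priv = y_pred[protected == 1].mean()
    return rate_unpriv / rate_priv

# Example with made-up predictions and a made-up protected attribute
y_pred = np.array([1, 1, 1, 1, 0, 1, 1, 1, 0, 0])
protected = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
print(statistical_parity_difference(y_pred, protected))  # 0.6 - 0.8 = -0.2
print(disparate_impact(y_pred, protected))               # 0.6 / 0.8 = 0.75
```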

Pre-Processing Algorithms

Pre-processing algorithms are used to mitigate bias prevalent in the training data. The idea is to apply one of the following techniques to preprocess the training dataset and then apply a classification algorithm to learn an appropriate classifier.

  • Reweighing: Reweighing is a data preprocessing technique that generates different weights for the training examples in each (group, label) combination to ensure fairness before classification. The idea is to assign appropriate weights to the tuples in the training dataset so that the weighted dataset is discrimination-free with respect to the sensitive attributes (a minimal sketch of the weighting scheme follows this list). Instead of reweighing, one could also apply non-discrimination constraints such as suppression (remove the sensitive attributes) or massaging the dataset (modify the labels appropriately to remove discrimination from the training data). However, reweighing is more effective than the other two techniques.
  • Optimized preprocessing: The idea is to learn a probabilistic transformation that edits the features and labels in the data with group fairness, individual distortion, and data fidelity constraints and objectives.
  • Learning fair representations: The idea is to find a latent representation that encodes the data well while obfuscating information about protected attributes.
  • Disparate impact remover: Feature values are appropriately edited to increase group fairness while preserving rank-ordering within groups.
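As an illustration of the reweighing idea mentioned in the first bullet above, here is a small from-scratch sketch (assuming a pandas DataFrame with hypothetical column names `sex` and `label`; this is not the AI Fairness 360 implementation). Each (group, label) cell is weighted by its expected/observed frequency so that, under the weights, the protected attribute and the label are statistically independent.

```python
import pandas as pd

def reweighing_weights(df, protected_col, label_col):
    """Return one weight per row: w(s, y) = P(s) * P(y) / P(s, y).

    Under these weights, the protected attribute and the label are
    statistically independent in the weighted training set.
    """
    n = len(df)
    p_s = df[protected_col].value_counts(normalize=True)      # P(s)
    p_y = df[label_col].value_counts(normalize=True)          # P(y)
    p_sy = df.groupby([protected_col, label_col]).size() / n  # P(s, y)

    def weight(row):
        s, y = row[protected_col], row[label_col]
        return p_s[s] * p_y[y] / p_sy[(s, y)]

    return df.apply(weight, axis=1)

# Hypothetical training frame: 'sex' is the protected attribute,
# 'label' is the favorable/unfavorable outcome.
train = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})
train["instance_weight"] = reweighing_weights(train, "sex", "label")
print(train)
# Privileged-favorable and unprivileged-unfavorable rows are down-weighted
# (weight 0.67); the other two cells are up-weighted (weight 2.0). The
# weights can then be passed to most classifiers, e.g.
# LogisticRegression().fit(X, y, sample_weight=train["instance_weight"]).
```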

In-Processing Algorithms

  • Adversarial debiasing: A classifier model is learned to maximize prediction accuracy while simultaneously reducing an adversary's ability to determine the protected attribute from the predictions. This leads to a fair classifier because the predictions cannot carry group-discrimination information that the adversary could exploit (a from-scratch sketch of this setup follows this list).
  • Prejudice remover: The idea is to add a discrimination-aware regularization term to the learning objective.
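The following is a from-scratch PyTorch sketch of the adversarial debiasing idea (an illustration only, not the AI Fairness 360 implementation; the network sizes, the penalty strength `lam`, and the toy data are all hypothetical). A predictor is trained to be accurate on the label while an adversary tries to recover the protected attribute from the predictor's output, and the predictor is penalized whenever the adversary succeeds.

```python
import torch
import torch.nn as nn

# Predictor: features -> label logit. Adversary: label logit -> protected attribute logit.
predictor = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # strength of the fairness penalty (hypothetical value)

# Hypothetical toy batch: features x, labels y, protected attribute s
x = torch.randn(256, 10)
y = torch.randint(0, 2, (256, 1)).float()
s = torch.randint(0, 2, (256, 1)).float()

for step in range(200):
    # 1) Update the adversary: predict s from the (detached) predictions.
    y_logit = predictor(x).detach()
    adv_loss = bce(adversary(y_logit), s)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Update the predictor: be accurate on y while fooling the adversary,
    #    i.e. subtract the adversary's loss on s from the prediction loss.
    y_logit = predictor(x)
    pred_loss = bce(y_logit, y) - lam * bce(adversary(y_logit), s)
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()
```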

Post-Processing Algorithms

  • Equalized odds postprocessing: The algorithm solves a linear program to find probabilities with which to change output labels to optimize equalized odds.
  • Calibrated equalized odds postprocessing: The algorithm optimizes over calibrated classifier score outputs to find probabilities with which to change output labels with an equalized odds objective.
  • Reject option classification: The idea is to give favorable outcomes to unprivileged groups and unfavorable outcomes to privileged groups in a confidence band around the decision boundary with the highest uncertainty.
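As a concrete illustration of the reject option idea just described (a sketch assuming a binary classifier that outputs favorable-outcome probabilities; the threshold and band width are hypothetical tuning parameters, and this is not the AI Fairness 360 implementation), the function below overrides predictions only inside the low-confidence band around the decision boundary.

```python
import numpy as np

def reject_option_classification(scores, protected, threshold=0.5, margin=0.1):
    """Post-process classifier scores with the reject-option idea.

    scores    : P(favorable outcome) from an already-trained classifier
    protected : 1 for the privileged group, 0 for the unprivileged group
    Inside the low-confidence band [threshold - margin, threshold + margin],
    unprivileged instances get the favorable label and privileged instances
    get the unfavorable one; outside the band the usual threshold applies.
    """
    scores = np.asarray(scores, dtype=float)
    protected = np.asarray(protected)
    labels = (scores >= threshold).astype(int)         # default decision
    uncertain = np.abs(scores - threshold) <= margin   # critical region
    labels[uncertain & (protected == 0)] = 1           # favor unprivileged
    labels[uncertain & (protected == 1)] = 0           # disfavor privileged
    return labels

# Toy usage with made-up scores
scores = np.array([0.55, 0.45, 0.90, 0.48, 0.52, 0.10])
protected = np.array([1, 0, 1, 0, 1, 0])
print(reject_option_classification(scores, protected))
# -> [0 1 1 1 0 0]: only the four low-confidence scores are overridden
```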

Summary

In this post, you learned about bias mitigation strategies for building higher-performing models while ensuring that the models are less discriminatory with respect to sensitive attributes. The techniques presented in this post will be updated at regular intervals based on ongoing research.

References

  • AI Fairness 360
  • Data pre-processing techniques for classification without discrimination

Published at DZone with permission of Ajitesh Kumar, DZone MVB.

