
Bias Detection in Machine Learning Models Using FairML

Learn about how to use the FairML framework to detect or disprove bias in machine learning models.

Detecting bias in machine learning models has become highly important in recent times. A machine learning model is biased when its predictions tend to place certain privileged groups at a systematic advantage and certain unprivileged groups at a systematic disadvantage. The primary cause of unwanted bias is bias in the training data, due to either prejudice in the labels or under-sampling/over-sampling of the data. In the banking, finance, and insurance industries especially, customers, partners, and regulators are asking businesses tough questions about the initiatives taken to avoid and detect bias. Consider, for example, a system that uses a machine learning model to predict recidivism (the tendency of a convicted criminal to re-offend). You may want to check out one of our related articles on understanding AI/machine learning bias using examples.

In this post, you will learn about a bias detection technique based on FairML, a framework that can be used to detect and test for the presence of bias in machine learning models.

FairML — Bias Detection by Determining Relative Feature Importance

FairML detects bias by determining the relative significance/importance of the features used in a machine learning model. If a feature representing a protected attribute, such as gender, race, or religion, is found to have high significance, the model is overly dependent on that feature: the protected attribute plays an important role in the model's predictions, so the model can be said to be biased and hence unfair. If, instead, that feature is found to have low importance, the model's dependence on it is very low, and the model cannot be said to be biased towards that feature. The following diagram represents the usage of FairML for bias detection:

[Diagram: using FairML for bias detection]
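Before looking at the individual ranking algorithms, here is a minimal sketch of what auditing a trained model with FairML looks like. The `audit_model` call follows the project's README and may differ across versions, so treat the signature as an assumption; the dataset and model are synthetic placeholders, with "gender" standing in for a protected attribute:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairml import audit_model

# Tiny synthetic dataset; "gender" stands in for a protected attribute
rng = np.random.RandomState(0)
X = pd.DataFrame({
    "gender": rng.randint(0, 2, 500),
    "income": rng.normal(50, 10, 500),
    "tenure": rng.normal(5, 2, 500),
})
y = (X["gender"] + 0.05 * X["income"] > 3).astype(int)  # bias baked in

clf = LogisticRegression(max_iter=1000).fit(X, y)

# audit_model queries the black-box predict function with perturbed
# inputs and scores how strongly the output depends on each column
importances, _ = audit_model(clf.predict, X)
print(importances)  # a high score on "gender" would signal bias
```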

FairML Techniques for Relative Feature Significance

In order to find the relative significance of the features, FairML makes use of the following four ranking algorithms and combines their individual rankings into an overall feature-significance ranking:

  • Iterative Orthogonal Feature Projection (IOFP)
  • Minimum Redundancy, Maximum Relevance (mRMR)
  • LASSO regression
  • Random forest

Iterative Orthogonal Feature Projection (IOFP)

This technique is implemented through the following steps:

  • Calculate the predictions on the initial dataset, without removing any of the protected attributes/features.
  • Identify the features representing protected attributes (such as gender, race, or religion), then do the following for each identified feature:
    • Remove the data pertaining to that feature from all input records.
    • Transform the remaining features so that they are orthogonal to the feature (protected attribute) removed in the step above.
    • Calculate the predictions using the transformed dataset (the protected feature removed, the other features orthogonalized).
    • Compare the predictions made using the initial dataset with those made using the transformed dataset.

If the difference between the predictions made using the initial dataset and the transformed dataset (with the protected feature removed and the other features made orthogonal to it) is statistically significant, the feature can be said to be of high significance/importance. The sketch below illustrates the idea.
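Here is a minimal, simplified sketch of the technique in Python; it is not FairML's actual implementation. Each remaining column is residualized (made orthogonal) against the protected column via linear regression, the protected column is replaced with a constant so the input keeps its shape for the black-box model, and predictions before and after are compared. All data and the model are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LinearRegression

def iofp_transform(X, j):
    """Make every other column orthogonal to column j (the protected
    attribute), then replace column j with a constant so the model can
    no longer use it, directly or through correlated proxy features."""
    Xt = X.astype(float).copy()
    p = X[:, j].astype(float).reshape(-1, 1)
    for k in range(X.shape[1]):
        if k != j:
            fit = LinearRegression().fit(p, X[:, k])
            Xt[:, k] = X[:, k] - fit.predict(p)  # residual: orthogonal to p
    Xt[:, j] = p.mean()  # neutralize the protected column itself
    return Xt

# Synthetic data: column 0 is "protected"; column 1 partly leaks it
rng = np.random.RandomState(0)
X = rng.normal(size=(500, 3))
X[:, 1] += 0.8 * X[:, 0]  # correlated proxy feature
y = (X[:, 0] + X[:, 2] > 0).astype(int)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# A statistically significant shift in predictions after the transform
# indicates heavy dependence on the protected attribute
baseline = model.predict(X)
audited = model.predict(iofp_transform(X, j=0))
print("fraction of changed predictions:", np.mean(baseline != audited))
```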

Minimum Redundancy, Maximum Relevance (mRMR)

In this technique, the idea is to select features that correlate most strongly with the target variable while staying far away from previously selected features. Selecting purely for maximum relevance picks the features that correlate most strongly with the target variable; however, input features are often correlated with each other, so once one of them proves significant for the prediction model, the others add little information and act as redundant. mRMR avoids including such redundant features: it selects features that have maximum relevance to the output variable but minimum redundancy with the already-selected input variables. Heuristic algorithms such as sequential forward, backward, or bidirectional selection can be used to implement mRMR.

The mRMR ranking module in FairML is implemented in R, leveraging the mRMRe package from CRAN.
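Although FairML's mRMR module lives in R, the greedy forward-selection idea can be sketched in a few lines of Python. This is a deliberate simplification that uses absolute Pearson correlation in place of the mutual-information measures used by mRMRe:

```python
import numpy as np
import pandas as pd

def mrmr_rank(X: pd.DataFrame, y: pd.Series, k: int) -> list:
    """Greedy forward mRMR: at each step, pick the feature whose
    relevance to the target minus its average redundancy with the
    already-selected features is highest."""
    relevance = X.corrwith(y).abs()
    selected, remaining = [], list(X.columns)
    for _ in range(min(k, len(X.columns))):
        def score(f):
            redundancy = (np.mean([abs(X[f].corr(X[s])) for s in selected])
                          if selected else 0.0)
            return relevance[f] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected  # earlier in the list = more relevant, less redundant

# Usage: rank the columns of a DataFrame `df` against a target series
# ranking = mrmr_rank(df, target, k=len(df.columns))
```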

LASSO Regression

LASSO stands for Least Absolute Shrinkage and Selection Operator. Linear regression uses the Ordinary Least Squares (OLS) method to estimate the coefficients of the features; however, OLS estimates tend to have low bias but high variance. Regularization improves prediction accuracy by shrinking the coefficients of one or more insignificant parameters/features either to near zero (ridge regression) or to exactly zero (LASSO regression). Ridge regression helps in estimating the important features, but LASSO regression helps in confirming the most important ones, since the coefficients of the non-significant features are set to 0.

For LASSO ranking, FairML leverages the implementation provided by the popular scikit-learn package in Python.
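As a sketch of how LASSO coefficients can serve as a feature-significance ranking (the dataset is synthetic and the alpha value is illustrative, not FairML's actual setting):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Synthetic data with a handful of informative features
X, y = make_regression(n_samples=300, n_features=6, n_informative=3,
                       noise=5.0, random_state=0)

# Standardize so coefficient magnitudes are comparable across features
X_std = StandardScaler().fit_transform(X)
lasso = Lasso(alpha=1.0).fit(X_std, y)  # alpha controls shrinkage strength

# Coefficients driven exactly to zero mark non-significant features;
# a surviving coefficient on a protected attribute would be a red flag
for i, coef in enumerate(lasso.coef_):
    print(f"feature_{i}: {coef:+.3f}")
```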

Random Forest

Random forest, an ensemble modeling technique, is used to determine feature importance through the following two techniques:

  • The depth at which an attribute/feature appears in a decision tree can be used to determine its importance. Features higher in the tree affect a larger portion of the total samples used to learn the tree model and are hence termed features of higher significance.
  • Permuting the values of an attribute/feature and measuring the change in performance, averaged across all the trees. If the attribute/feature is highly significant, the drop in accuracy will be large.

For random forest ranking, FairML again leverages the implementation provided by the popular scikit-learn package in Python.
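Both importance measures described above are available in scikit-learn; here is a minimal sketch on synthetic data (the hyperparameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 1) Impurity-based importance: reflects how high in the trees (and how
#    often) each feature is used to split the samples
print("impurity-based:", rf.feature_importances_.round(3))

# 2) Permutation importance: shuffle one feature at a time and measure
#    the average drop in accuracy across repeats
result = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
print("permutation:   ", result.importances_mean.round(3))
```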

Summary

In this post, you learned about the FairML framework and the technique it uses to determine bias in machine learning models. In essence, when a model is claimed to be biased against a specific class of data, FairML helps you determine the relative significance of the features representing those attributes and thereby provide evidence for or against the claim.
