
Ethical AI: Lessons from Google AI Principles


Let's walk through some ethical guidelines for developing AI products, inspired by Google's set of AI principles.


Is your organization using AI/machine learning for many of its products, or planning to use AI models extensively for upcoming products? Do you have a set of AI guiding principles in place for stakeholders such as product managers, data scientists, and machine learning researchers to make sure that safe and unbiased AI is used for developing AI-based solutions? Are you planning to create AI guiding principles for other AI stakeholders, including business stakeholders, customers, and partners?

If the answer to the above questions is not "yes," you should start thinking about laying down AI guiding principles, sooner rather than later, to help everyone from the executive team to product management to data scientists plan, build, test, deploy, and govern your AI-based products. The rapidly growing capabilities of AI-based systems have prompted business stakeholders (including customers and partners) to ask for details on the impact, governance, ethics, and accountability of AI-based products integrated into various business processes and workflows. Companies can no longer afford to withhold these details by citing IP-related or privacy concerns.

In this post, you will learn about some AI guiding principles that you could set for your business, based on Google's principles for developing AI-based products. These principles state that AI-based products should:

  • Be beneficial to the business overall
  • Avoid unfair bias against any group of users
  • Ensure customer safety (freedom from business risk)
  • Be trustworthy and explainable (customers can ask for an explanation)
  • Respect customer data privacy
  • Undergo continuous governance
  • Be built using the best AI tools and frameworks

The following diagram represents the AI guiding principles for Ethical AI:


Fig. 1: Guiding Principles for Ethical AI

Overall Beneficial to the Business

AI/machine learning models should be built to solve complex business problems, while ensuring that the benefits outweigh any risks the models pose. The following examples illustrate the kinds of risks a model can pose:

  • Fake news model: The model predicts whether a news story is fake. It has a high precision of 95% and a recall of 85%. The 85% recall reveals that a (smaller) set of fake stories fails to be predicted as fake and thus slips through the filter. However, of all the stories the model flags as fake, the 95% precision means the model is right the vast majority of the time (see the sketch after this list for how these numbers are computed). The benefits of this model, in my opinion, outweigh the harm done by the false negatives.
  • Cancer prediction model: Let's say a model is built to predict cancer. The precision of the model comes out to be 90%, meaning that of all the positive predictions the model makes, 90% are correct. So far, so good. However, the recall is also 90%, meaning that of the people who actually have cancer, the model correctly identifies only 90%; the rest are false negatives who go undetected. Is that acceptable? I do not think so. This model won't be accepted, as it may end up hurting more than it helps.
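
To make these numbers concrete, here is a minimal Python sketch (the counts are illustrative, not taken from any real model) showing how precision and recall fall out of confusion-matrix counts:

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp)  # of everything flagged positive, how much was right
    recall = tp / (tp + fn)     # of all actual positives, how many were caught
    return precision, recall

# Illustrative counts matching the fake-news example (95% precision, 85% recall).
p, r = precision_recall(tp=850, fp=45, fn=150)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.95, recall=0.85
```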

Avoiding Unfair Bias Against a Group of Users (Biased or Not?)

AI/ML models are often trained on data sets under the assumption that the selected data set is unbiased, or in ignorance of its actual bias. The reality is often different. While building models, both the feature set and the data behind those features need to be checked for bias. Bias needs to be tested during both:

  • The model training phase, and
  • The evaluation phase, once the model is built and is being tested before moving it into production.

Let's consider a few examples to understand the bias in training datasets:

  • Bias in an image identification model: Let's say a model is built to identify the human beings in a given image. Discriminatory bias could be introduced if the model is trained only on images of people with white skin. The model, when tested with images depicting people of a different skin color, would then fail to classify the human beings correctly.
  • Bias in a hiring model: Models built for hiring could be subject to bias, such as preferring men or women for specific roles, favoring candidates with white-sounding names, or hiring only those with specific skill sets for specific positions.
  • Bias in a criminal prediction model: One could see bias in a criminal prediction model if, for example, a person with black skin who has already committed a crime is predicted to be more likely to reoffend than a person with white skin with the same history. Reported evaluation metrics for such a model showed almost 45% false positives.

One must understand that there are two different kinds of bias: bias based on experience and bias based on discrimination. A doctor could use their experience to classify a patient as suffering from a particular disease, or not; this can be called good bias. Alternatively, a model that fails to recognize people of non-white skin color is discriminatory in nature; this can be called bad bias. The goal is to detect bad bias and eliminate it, either during the model training phase or after the model is built, as the sketch below illustrates.
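
One simple, concrete check is to compare false positive rates across groups; a large gap is a signal of bad bias worth investigating. The sketch below is a minimal illustration, and the group labels and data are hypothetical:

```python
import pandas as pd

def false_positive_rate_by_group(df, group_col, label_col, pred_col):
    """Compare false positive rates across groups to surface discriminatory bias."""
    rates = {}
    for group, sub in df.groupby(group_col):
        negatives = sub[sub[label_col] == 0]
        # FPR: fraction of truly negative cases the model flagged as positive.
        rates[group] = (negatives[pred_col] == 1).mean()
    return rates

# Hypothetical audit data: actual outcomes vs. model predictions per group.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label": [0, 0, 1, 0, 0, 0, 1, 0],
    "pred":  [1, 1, 1, 0, 0, 0, 1, 0],
})
print(false_positive_rate_by_group(df, "group", "label", "pred"))
# A gap like {'A': 0.67, 'B': 0.0} is a red flag for the model's treatment of A.
```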

Ensuring Customer Safety (Freedom from Business Risk)

A model's performance should be examined to minimize false positives and false negatives appropriately. This helps limit the risk the model poses to business functions. Let's take the example of a machine learning model (in the accounts receivable domain) that predicts whether a buyer's order can be delivered, based on the buyer's credit score. If the model incorrectly predicts that an order should be delivered, the supplier is at risk of not receiving the invoice payment on time, which would impact revenue collection. Such a model should not be moved into production, primarily because it could impact the business in a negative manner, leading to a loss of revenue. A cost-weighted comparison, as sketched below, can make this trade-off explicit.
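
Here is a minimal sketch of such a comparison; the error counts and per-error dollar costs are assumptions invented for the example:

```python
def expected_business_cost(fp, fn, cost_fp, cost_fn):
    """Weigh false positives and false negatives by their assumed business cost.

    In the accounts receivable example, a false positive (delivering to a buyer
    who never pays) is far costlier than a false negative (holding a good order).
    """
    return fp * cost_fp + fn * cost_fn

# Hypothetical error counts and per-error costs for two candidate models.
model_a = expected_business_cost(fp=80, fn=20, cost_fp=5000, cost_fn=200)
model_b = expected_business_cost(fp=20, fn=70, cost_fp=5000, cost_fn=200)
print(model_a, model_b)  # 404000 114000: model B is safer despite more misses
```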

Trustworthy/Explainable (Customers Can Ask for an Explanation)

Models should be trustworthy and explainable. Customers using a model's predictions may ask for details on which features contributed to a given prediction. Keeping this in mind, one should either be able to explain or derive how the prediction was made, or avoid black-box models, whose predictions are difficult to explain, in favor of simpler linear models. With a linear model, each prediction decomposes into per-feature contributions, as the sketch below shows.
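
This is a minimal sketch using scikit-learn; the features, data, and use case (order delivery) are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: credit score and outstanding balance per buyer.
X = np.array([[700, 10], [550, 80], [680, 20], [500, 90], [720, 15], [530, 70]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = deliver order, 0 = hold order

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a linear model, each feature's contribution is coefficient * value,
# which can be surfaced directly to a customer asking "why?".
feature_names = ["credit_score", "outstanding_balance"]
x_new = [620, 50]
for name, coef, value in zip(feature_names, model.coef_[0], x_new):
    print(f"{name}: {coef * value:+.2f}")
```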

Customer Data Privacy

As part of governance practice, customer data privacy should be respected. If customers are informed that their data privacy will be maintained and that their data won't be used for any business-related purpose without informing them and obtaining their permission, that commitment should be honored and governed as part of ML model review practices. Businesses should set up a QA or audit team that makes sure the customer data privacy agreement is always respected. One concrete enforcement point is scrubbing direct identifiers before data ever reaches model training, as sketched below.
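
Here is a minimal sketch of such a scrubbing step; the column names and the choice of a truncated SHA-256 pseudonym are assumptions for the example, not a complete anonymization scheme:

```python
import hashlib
import pandas as pd

PII_COLUMNS = ["name", "email", "phone"]  # hypothetical direct identifiers

def scrub_for_training(df, pii_columns=PII_COLUMNS, key_col="customer_id"):
    """Drop direct identifiers and pseudonymize the join key before training."""
    df = df.drop(columns=[c for c in pii_columns if c in df.columns])
    # One-way hash so rows can still be joined without exposing the raw ID.
    df[key_col] = df[key_col].astype(str).map(
        lambda v: hashlib.sha256(v.encode()).hexdigest()[:16]
    )
    return df

raw = pd.DataFrame({
    "customer_id": [101, 102],
    "name": ["Ada", "Lin"],
    "email": ["a@x.com", "l@y.com"],
    "credit_score": [700, 640],
})
print(scrub_for_training(raw).columns.tolist())  # ['customer_id', 'credit_score']
```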

Continuous Governance

The machine learning model lifecycle includes aspects related to the following:

  • Data
  • Feature engineering
  • Model building (training/testing)
  • ML pipeline/infrastructure

As part of the AI guiding principles, continuous governance controls should be put in place to audit all of the above. Some of these governance controls are as follows:

  • Data: Whether the model is being trained on an adversarial data set needs to be checked continuously, in either manual or automated ways. Secondly, whether data that is not allowed to be used for building the model has been used anyway.
  • Feature engineering: Whether feature importance has been checked. Whether derived features end up using data that is disallowed under the data privacy agreement. Whether unit tests have been written for the feature-generation code.
  • Model building: Whether the model's performance is acceptable. Whether the model has been tested for bias. Whether the model has been tested on different slices of data (see the sketch after this list).
  • ML pipeline: Whether the ML pipeline is secured.
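
As one example of these controls, here is a minimal sketch of a slice-based evaluation gate; the slice column, accuracy floor, and data are assumptions for illustration:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def evaluate_by_slice(df, slice_col, label_col, pred_col, min_accuracy=0.8):
    """Governance check: flag any data slice where accuracy drops below a floor."""
    failures = {}
    for value, sub in df.groupby(slice_col):
        acc = accuracy_score(sub[label_col], sub[pred_col])
        if acc < min_accuracy:
            failures[value] = acc
    return failures

# Hypothetical audit frame: region is the slice, with labels and predictions.
audit = pd.DataFrame({
    "region": ["NA", "NA", "EU", "EU", "APAC", "APAC"],
    "label":  [1, 0, 1, 0, 1, 0],
    "pred":   [1, 0, 1, 0, 0, 1],
})
print(evaluate_by_slice(audit, "region", "label", "pred"))
# {'APAC': 0.0} -> block promotion to production until the slice is fixed
```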

The Best AI Tools and Frameworks

AI models should be built using the best available tools and frameworks. In addition, the people involved in building AI models should be trained appropriately, at regular intervals, with best practices and up-to-date educational materials. The tools and frameworks should ensure some of the following:

  • The most advanced AI technologies, such as AutoML and bias-detection tools, are used.
  • Best practices related to safety are adopted.

Summary

In this post, you learned about AI guiding principles that you should consider setting for your AI/ML team and business stakeholders (including executive management, customers, and partners) for developing and governing AI-based solutions. Some of the most important AI guiding principles concern safety, bias, and trustworthiness/explainability.
