Explainable AI: Humanizing Artificial Intelligence
We find it hard to trust machines we don’t understand. That lack of trust makes AI seem like modern witchcraft and keeps us from reaping the benefits of genuinely innovative technology.
Artificial intelligence (AI) is usually treated as a black-box exercise: no one pays attention to how the technology works, only that it seems to work correctly. In some circumstances that’s fine, because most of the time we care about the results, not how they are achieved.
Unfortunately, treating AI as a black-box process creates problems with trust and reliability. On purely technical grounds, it also makes it hard to analyze or resolve issues with AI models.
Here, we look at some of these underlying problems and what is being done to address them.
What Is AI?
You already know how many companies are incorporating artificial intelligence (AI) into their products. In some cases the AI buzzword is just a marketing ploy, but many products genuinely use AI and machine learning (ML) to move forward.
In simple terms, AI refers to any computer system that exhibits intelligent behavior. In this context, intelligence means learning, understanding, or making conceptual leaps.
Currently, the most common form of AI is machine learning, in which a computer algorithm learns to identify patterns in data. ML broadly falls into three categories:
- Supervised learning, in which you use known data to train the model. This is like teaching a child to read using an ABC book. This form of ML is the one you are most likely to encounter, but it has one big drawback: you need to start with a large amount of trustworthy, accurately labeled training data (see the sketch after this list).
- Unsupervised learning, in which the model looks for patterns within the data. You do this every time you visit a new location and learn your way around. This form of ML is great in situations where you don’t know anything about your data. One common use is to identify interesting clusters within the data that might be significant.
- Reinforcement learning, wherein the model is rewarded each time it gets something right. This is the classic “trial and error” approach to learning. This ML approach is particularly powerful if you only have a small amount of data initially. It also creates the possibility of continuous learning models, in which your models constantly adapt and evolve as they see new data, which helps keep them from going out of date.
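To make the supervised case concrete, here is a minimal sketch using scikit-learn. The data is synthetic and invented purely for illustration; in practice the labels would come from a trusted, human-verified source.

```python
# A minimal supervised-learning sketch with scikit-learn.
# The data below is synthetic and exists only to illustrate the workflow.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Hypothetical labeled training set: 200 samples, 4 numeric features,
# with a binary label we pretend an expert has already verified.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)  # the "ABC book" step: learn from labeled examples

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```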
The problem with all these approaches is trying to understand the resulting model. In other words, it is hard to humanize artificial intelligence.
The Problem With Trust
Black-box AI systems are most often machine learning models that have, in effect, taught themselves. If you can’t see the data from which the system drew its conclusion, it is hard to understand why the model produces its result -- or to have confidence that the result is correct. You can’t ask the model why it produced the results it did; all you can do is verify that the result is what you expect.
If you can’t understand why an AI model actually works, how can you trust that your model will always be correct?
AI has also gained a bad reputation thanks to countless dystopian science fiction movies and novels, and sadly, too many of their warnings ring uncomfortably true. The bias shown by many AI models makes the whole trust issue even worse.
Bias is inherent in humans, but it also is a huge issue for AI, because the systems learn only from past behavior – which may not be the behavior we want to perpetuate.
One example is the emerging use of AI models to predict crime, known as predictive policing. These models use previous crime statistics to determine which areas have the highest rates of crime, and law enforcement concentrates its resources on those areas by adjusting patrol routes. This use of place-based data has been questioned for reinforcing bias and for potentially mistaking correlation for causation.
Consider, for example, the impact of the coronavirus pandemic on crime statistics. Violent crime rates have dropped substantially in many major U.S. cities; however, some jurisdictions have seen more reports of auto theft and burglaries of closed businesses. You and I may reasonably draw a connection between those changes and the recent stay-at-home orders across the country. Predictive policing models, however, may interpret the decreased frequency of reported crimes and declining arrest rates as signs of stability and safety.
There are many forms of bias in artificial intelligence. Studies of facial recognition software, for example, have shown algorithms containing “demographic differentials” that reduce their accuracy depending on a person’s age, gender, or race.
Sometimes bias happens when a data scientist performs feature engineering to try to clean up the source data. This can result in subtle but important features being lost.
Probably the most damaging form is societal bias, where the data itself reinforces cultural prejudice. For instance, advertising algorithms regularly serve up content based on demographic data, which perpetuates bias based on age, gender, race, religion, or socio-economic factors. AI’s use in hiring has stumbled in a similar manner.
In all cases, it was humans who introduced the original bias. But how do you uncover whether that has happened and ensure that it doesn’t?
Introducing Explainable AI
To increase trust in AI systems, AI researchers are exploring the concept of explainable AI (XAI) by showing humans what is really going on. The idea is, ultimately, to humanize AI.
XAI avoids problems that arise when you cannot see inside the black box. For instance, in 2017 researchers reported that one AI was effectively cheating. The AI was trained to identify images of horses, as a variant on the classic cats-versus-dogs task. It turned out, however, that the AI had actually learned to identify a specific copyright tag associated with pictures of horses.
To achieve XAI, we need to see and understand what is happening inside the model. A whole branch of theoretical computer science is related to this endeavor -- and it is a much harder problem than you might expect.
It is possible to explain simpler machine learning algorithms. Neural networks, however, are far more complicated. Even the latest techniques, such as layerwise relevance propagation (LRP), only tell you which inputs were the most important in making a decision. So researchers’ attention has shifted toward the goal of local interpretability, where computer scientists aim to explain just a few particular predictions made by the model, as sketched below.
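One simple way to get a feel for local interpretability is perturbation: take a single prediction, nudge each input slightly, and see how much the output moves. The sketch below is a toy stand-in for that idea (in the spirit of methods like LIME, not an implementation of LRP), using a hypothetical model trained on synthetic data.

```python
# A toy perturbation-based local explanation: for ONE prediction, measure how
# sensitive the model's output is to each input feature. This is a simplified
# stand-in for local-interpretability methods, not an implementation of LRP.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (2 * X[:, 0] - X[:, 2] > 0).astype(int)  # hypothetical data

model = LogisticRegression().fit(X, y)

def local_importance(model, x, eps=1e-2):
    """Sensitivity of the predicted probability to each feature of one sample x."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for i in range(len(x)):
        bumped = x.copy()
        bumped[i] += eps
        scores.append((model.predict_proba(bumped.reshape(1, -1))[0, 1] - base) / eps)
    return np.array(scores)

sample = X[0]
print("local importances for this one prediction:", local_importance(model, sample))
```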
Why Is It Hard To Understand AI Models?
Many ML models are based on artificial neurons. An artificial neuron (or perceptron) combines one or more weighted inputs using a transfer function, and an activation function then applies a threshold to decide whether the neuron fires. This approach loosely mimics the way neurons work in the human brain.
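As a sketch, a single artificial neuron fits in a few lines of Python; the weights, bias, and inputs below are arbitrary values chosen only for illustration.

```python
import numpy as np

def perceptron(inputs, weights, bias, threshold=0.0):
    """A single artificial neuron: a weighted sum (transfer function),
    then a hard threshold (activation function) decides whether it 'fires'."""
    weighted_sum = np.dot(inputs, weights) + bias
    return 1 if weighted_sum > threshold else 0

# Arbitrary example: two inputs with hand-picked weights.
print(perceptron(np.array([1.0, 0.0]), np.array([0.6, -0.4]), bias=-0.1))  # fires -> 1
```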
One common ML model is the neural network, which is built up of multiple layers of artificial neurons. The input layer has one input for each feature of interest, followed by a number of hidden layers; finally, the output layer has one output for each label of interest.
In this intentionally oversimplified example, the goal is to predict the time you need to get up based on the day of the week and whether you are on holiday.
Initially, the weights are assigned randomly, which produces an incorrect result: the model suggests that you need to be up-and-at-'em by 9 a.m. on a Wednesday.
The exact weights applied to each artificial neuron are almost impossible to set by hand. Instead, you use a process known as backpropagation, wherein the algorithm works backward through the model, adjusting the network’s weights and biases to minimize the delta between predicted and expected outputs. So you can get up on Wednesday at 7 a.m., even if you don’t really want to wake up.
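Here is a minimal numpy sketch of that training loop for the wake-up example. The features (a one-hot day of the week plus a holiday flag), the target wake-up hours, and the network size are all invented for illustration; after training, the Wednesday prediction should land close to 7.

```python
# A tiny numpy sketch of backpropagation on the (hypothetical) wake-up example.
import numpy as np

rng = np.random.default_rng(1)

# 7 one-hot day columns + 1 holiday flag = 8 input features.
days = np.eye(7)
X = np.vstack([np.hstack([days, np.zeros((7, 1))]),   # Mon-Sun, not on holiday
               np.hstack([days, np.ones((7, 1))])])   # same days, on holiday
# Target wake-up hour: 7 on weekdays, 9 at weekends, 10 any day on holiday.
y = np.array([7, 7, 7, 7, 7, 9, 9] + [10] * 7, dtype=float).reshape(-1, 1)

# Randomly initialized weights -- at this point the predictions are nonsense.
W1, b1 = rng.normal(scale=0.5, size=(8, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                     # gradient descent with backpropagation
    h = sigmoid(X @ W1 + b1)               # forward pass: hidden layer
    pred = h @ W2 + b2                     # forward pass: linear output
    err = pred - y                         # delta between predicted and expected
    # Backward pass: propagate the error and nudge weights/biases downhill.
    dW2, db2 = h.T @ err / len(X), err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ dh / len(X), dh.mean(axis=0, keepdims=True)
    W1 -= 0.1 * dW1
    b1 -= 0.1 * db1
    W2 -= 0.1 * dW2
    b2 -= 0.1 * db2

# Wednesday (third row), not on holiday -- should now be close to 7.
print("Wednesday wake-up:", (sigmoid(X[2:3] @ W1 + b1) @ W2 + b2).item())
```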
Trust and Ethics
Trust in technology has become increasingly relevant. We use AI to diagnose cancer, to identify wanted criminals in crowds, and to make decisions on hiring and firing. If you can’t humanize AI, how can you get people to trust it? If it is hard to trust an AI model, how can you use it ethically?
This is such an important issue that the EU has adopted a set of Ethics Guidelines for Trustworthy AI. These guidelines set seven tests for whether an AI can be considered ethical and trustworthy:
- Human agency and oversight. The AI system should never make decisions without a human having the final say.
- Technical robustness and safety. Before you rely on AI, you need to know that it is robust, will always fail safe, and cannot be hacked.
- Privacy and data governance. AI models often deal with personal data, such as mammographic images used to detect breast cancer, so data privacy is critically important.
- Transparency. The AI model should be explainable to a human.
- Diversity, non-discrimination, and fairness. This relates to the issues of bias we already looked at above.
- Environmental and societal wellbeing. Here, the authors try to address the fears of an AI-driven dystopian future.
- Accountability. Independent oversight or monitoring is essential.
And of course, the guidelines stress, AI should only be used lawfully.
Humanizing AI
One common theme runs throughout the guidelines: the importance of humanizing AI to make it trustworthy.
We humans find it especially hard to trust a machine we can’t (or at least don’t) understand. This lack of trust and transparency can make AI-based products seem like modern witchcraft; the results feel more like magic than technological reality.
This is especially problematic in software test automation, where the system is meant to catch potentially critical issues before an application’s release. How can you be certain your tests are correct if you can’t understand what is happening? What if the system makes the wrong decision? How do you know, and how do you respond, if it misses something?
To address this conundrum, there needs to be improved visibility into where ML algorithms interact with your tests, what decisions they make, and what data they use to arrive at those conclusions. That leaves no more black boxes. No nagging doubt in the back of your mind that maybe the AI got it wrong. No more magic. Just actual, visible machine learning.