Explainable AI (XAI) Design for Detecting Out-of-Distribution Samples and Adversarial Attacks
In this article, we will discuss an interpretable prototype of an unsupervised deep convolutional neural network and autoencoders for anomaly detection.
In this article, we will discuss an interpretable and explainable prototype of an unsupervised deep convolutional neural network, along with LSTM autoencoder-based real-time anomaly detectors for high-dimensional, heterogeneous/homogeneous multi-sensor time-series data.
What's New in MSDA v1.10.0?
MSDA is an open-source, low-code multi-sensor data analysis library in Python that aims to reduce the hypothesis-to-insights cycle time in time-series multi-sensor data analysis and experiments. It enables users to perform end-to-end proof-of-concept experiments quickly and efficiently. The module identifies events in multidimensional time series by capturing variation and trend, establishing relationships among correlated features and thereby helping with feature selection from raw sensor signals.
It also precisely detects anomalies in real-time streaming data using an unsupervised deep convolutional neural network, as well as an LSTM autoencoder-based detector, both designed to run on GPU/CPU. Finally, a game-theoretic approach is used to explain the output of the built anomaly detector model.
The package includes:
- Time series analysis.
- The variation of each sensor column with respect to time (increasing, decreasing, or constant).
- How each column's values vary with respect to every other column, and the maximum variation ratio between each pair of columns.
- Relationship establishment with the trend array to identify the most appropriate sensor.
- Users can select a window length and then check the average value and standard deviation across each window for each sensor column.
- A count of growth/decay values for each sensor column above or below a threshold value.
- Feature Engineering
- Features involving the trend of values across various aggregation windows: change and rate of change in average and standard deviation across a window.
- Ratio of changes and growth rate with standard deviation.
- Change over time
- Rate of change over time
- Growth or decay
- Rate of growth or decay
- Count of values above or below a threshold value
- **Unsupervised deep time-series anomaly detector.**
- **Game-theoretic approach to explain the time-series data model.**
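As a rough illustration of the window-based features listed above, here is how a few of them could be computed with plain pandas (generic code, not MSDA's own API; the series and threshold are made up):

```python
import pandas as pd

# Toy sensor column to illustrate the window features listed above.
s = pd.Series([1.0, 2.0, 4.0, 7.0, 11.0, 16.0])

window = 3
feats = pd.DataFrame({
    "mean": s.rolling(window).mean(),               # average per window
    "std": s.rolling(window).std(),                 # spread per window
    "change": s.diff(),                             # change over time
    "rate": s.pct_change(),                         # rate of change over time
    "above_thresh": (s > 5).rolling(window).sum(),  # count above a threshold
})
print(feats.round(3))
```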
MSDA is simple, easy to use, and low-code. The key features are shown in the figure below:
Who Should Use MSDA?
MSDA is an open-source library that anybody can use. In our view, the ideal target audience of MSDA is:
- Researchers for quick POC testing
- Experienced data scientists who want to increase productivity
- Citizen data scientists who prefer a low code solution
- Students of data science
- Data science professionals and consultants involved in building Proof of Concept projects
What is an Anomaly?
What is an anomaly, and why should it be of any concern? In layman's terms, “anomalies” or “outliers” are the data points in a data space, which are abnormal, or out of trend. Anomaly detection focuses on identifying examples in the data that somehow deviate from what is expected or typical. Now, the question is, “How do you define whether something is abnormal or an outlier?” The quick, rational answer is all those points that don’t follow the trend of the neighboring points in the sample space.
For any business domain, detecting suspicious patterns in a huge set of data is critical. In the banking domain, for example, fraudulent transactions pose a serious threat and create losses/liabilities for the bank. In this article, we will try to detect anomalies without training a model beforehand, because you can’t train a model on anomalies you don’t yet know about! That’s where the whole idea of unsupervised learning helps. We will see two network architectures for building a real-time anomaly detector: a) Deep CNN and b) LSTM autoencoder.
These networks are suited to detecting a wide range of anomalies, i.e., point anomalies, contextual anomalies, and discords in time-series data. Since the approach is unsupervised, it requires no labels for anomalies. We use the unlabeled data to capture and learn the data distribution, which is then used to forecast the normal behavior of a time series. The first architecture is inspired by the IEEE paper DeepAnT; it consists of two components: a time-series predictor and an anomaly detector. The time-series predictor uses a deep convolutional neural network (CNN) to predict the next timestamp on the defined horizon. This component takes a window of the time series (used as a reference context) and attempts to predict the next timestamp. The predicted value is then passed to the anomaly detector component, which is responsible for labeling the corresponding timestamp as Non-Anomaly or Anomaly.
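To make the two components concrete, here is a minimal PyTorch sketch in the spirit of DeepAnT (the layer sizes and threshold rule are illustrative assumptions, not the paper's exact configuration): a small CNN forecasts the next timestamp from a window, and the detector flags timestamps whose prediction error exceeds a threshold.

```python
import torch
import torch.nn as nn

class TimeSeriesPredictor(nn.Module):
    """CNN that predicts the next timestamp from a lookback window."""
    def __init__(self, window: int, n_features: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 16, kernel_size), nn.ReLU(),
        )
        out_len = (window - kernel_size + 1) // 2 - kernel_size + 1
        self.head = nn.Linear(16 * out_len, n_features)  # next-timestamp forecast

    def forward(self, x):                 # x: [batch, window, features]
        h = self.conv(x.transpose(1, 2))  # Conv1d expects [batch, channels, time]
        return self.head(h.flatten(1))

def detect(pred, actual, threshold):
    # Anomaly detector: flag a timestamp when the Euclidean
    # prediction error exceeds the threshold.
    err = torch.linalg.norm(pred - actual, dim=1)
    return err > threshold

model = TimeSeriesPredictor(window=10, n_features=27)
x = torch.randn(4, 10, 27)
pred = model(x)                           # forecast for the next timestamp
flags = detect(pred, torch.randn(4, 27), threshold=5.0)
print(pred.shape, flags.shape)
```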
The second architecture is inspired by this Nature paper: Deep LSTM-based Stacked Autoencoder for Multivariate Time Series.
Let us first understand what an autoencoder neural network is. The autoencoder architecture is used to learn efficient data representations in an unsupervised manner. There are three components to an autoencoder:
- an encoding (input) portion that compresses the data, and in the process learns a representation (encoding) for the set of data,
- a bottleneck component that holds the compressed (size-reduced) data,
- and a decoder (output) portion that reconstructs the learned representation as close as possible to the original input from the compressed data while minimizing the overall loss function.
So, simply when the data is fed into an autoencoder, it is encoded and then compressed down to a smaller size, and then that smaller representation is decoded back to the original input.
Next, let us understand why LSTM is appropriate here. LSTM stands for long short-term memory and is a neural network architecture capable of learning order dependencies in sequence prediction problems. An LSTM network is a type of recurrent neural network (RNN).
The RNN mainly suffers from vanishing gradients. Gradients carry information, and if they vanish over time, important localized information is lost. This is where the LSTM comes in handy, as it preserves information in its cell state. The basic idea is that the LSTM network has multiple “gates” inside it with trained parameters. Some of these gates control the modules' “output,” while others control their “forgetting.”
LSTM networks are a good fit for classifying, processing, and making predictions based on time series data since there can be lags of unknown duration between important events in a time series.
An LSTM Autoencoder is an implementation of an autoencoder for sequence data using an Encoder-Decoder LSTM network architecture. Now that we have seen the basic concepts of each network, let us go through the design of both networks as shown below.
The DeepCNN consists of two convolutional layers. Typically, a CNN consists of a sequence of layers, including convolutional layers, pooling layers, and fully connected layers. Each convolutional layer normally has two stages: in the first, the layer performs the mathematical operation called convolution, which results in linear activations; in the second, a non-linear activation function is applied to each linear activation.
Like other neural networks, the CNN uses training data to adapt its parameters (weights and biases) to perform the learning task. The parameters of the network are optimized using the ADAM optimizer. The kernel size and the number of filters can be tuned further to perform better depending on the dataset. Additionally, the dropout, learning rate, etc. can be fine-tuned to validate the performance of the network.
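A minimal training setup matching this description, i.e., the ADAM optimizer with MSELoss, might look like the following (a stand-in linear model is used here instead of the actual CNN; the learning rate and epoch count are illustrative):

```python
import torch

# Stand-in model; in practice this would be the CNN predictor.
model = torch.nn.Linear(10, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # ADAM optimizer
loss_fn = torch.nn.MSELoss()                         # squared L2 norm

x, y = torch.randn(32, 10), torch.randn(32, 1)       # toy inputs and targets
for epoch in range(5):                               # number of epochs (tunable)
    opt.zero_grad()
    loss = loss_fn(model(x), y)  # mean squared error between prediction and y
    loss.backward()
    opt.step()
```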
The loss function used was MSELoss (squared L2 norm), which measures the mean squared error between each element in the input ‘x’ and target ‘y’.
The LSTMAENN consists of multiple stacked LSTM layers, whose key arguments include:
- input_size — the number of expected features in the input x.
- hidden_size — the number of features in the hidden state h.
- num_layers — the number of recurrent layers (default: 1).
For more details, refer here. To avoid interpreting noise in the data as anomalies, we can tune additional hyperparameters such as ‘lookback’ (the time-series window size), the number of units in the hidden layers, and more.
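A stacked LSTM autoencoder using these `nn.LSTM` arguments could be sketched as follows (the hidden size and layer count are illustrative assumptions, not MSDA's exact architecture):

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Encoder compresses each window to its final hidden state;
    decoder reconstructs the window from that compressed code."""
    def __init__(self, n_features: int, hidden_size: int = 16, num_layers: int = 2):
        super().__init__()
        self.encoder = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                               num_layers=num_layers, batch_first=True)
        self.decoder = nn.LSTM(input_size=hidden_size, hidden_size=hidden_size,
                               num_layers=num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, n_features)

    def forward(self, x):               # x: [batch, lookback, features]
        _, (h, _) = self.encoder(x)     # h[-1] is the compressed representation
        lookback = x.size(1)
        z = h[-1].unsqueeze(1).repeat(1, lookback, 1)  # repeat code per timestep
        dec, _ = self.decoder(z)
        return self.out(dec)            # reconstruction, same shape as x

model = LSTMAutoencoder(n_features=27)
x = torch.randn(8, 10, 27)
print(model(x).shape)  # torch.Size([8, 10, 27])
```

At detection time, windows whose reconstruction error is unusually high are the anomaly candidates.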
Unsupervised Deep Anomaly Detector Models
- **DeepCNN**
- **LSTM Autoencoder**
Now that we have designed the network architectures, we will go through the remaining steps with a hands-on demonstration, as given below.
1) Install the Package
The easiest way to install MSDA is by using pip.
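Assuming the package is published on PyPI under the name `msda` (check the project's README if this differs):

```shell
pip install msda
```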
2) Import Time-Series Data
Here, we will use the climate data from here. This dataset is compiled from several public sources. The dataset consists of daily temperatures and precipitation from 13 Canadian centers. Precipitation is either rain or snow (likely snow in the winter months). In 1940, there is daily data for seven out of the 13 centers, but by 1960 there is daily data from all 13 centers, with the occasional missing value. We have around 80 years of records (daily frequency of data), and we want to identify the anomalies from that climate data. As seen below, this data has 27 features and around 30K records.
3) Data Validation, Pre-Processing, etc.
We start by checking for missing values and imputing those missing values.
impute() from the Preprocessing & ExploratoryDataAnalysis class can be used to find missing values and fill in the missing information. We replace the missing values with the mean values (hence, modes=1). There are several utility functions within these classes that can be used for profiling your dataset, manually filtering outliers, etc. Other options include DateTime conversions, descriptive statistics of the data, a normality distribution test, and more. For more details, peek here.
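For reference, mean imputation (the modes=1 behavior described above) is equivalent to the following plain-pandas operation on a toy frame:

```python
import numpy as np
import pandas as pd

# Toy frame with gaps; fillna with the column means reproduces
# mean imputation outside of MSDA's impute() helper.
df = pd.DataFrame({"temp": [21.0, np.nan, 23.0],
                   "rain": [0.0, 2.0, np.nan]})
filled = df.fillna(df.mean())
print(filled)
```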
4) Post-Processing Data To Input Into the Anomaly Detector
Next, we input data with no missing values, remove unwanted fields, assert the timestamp field, etc. Here, the user can specify the column to drop by its index value, and likewise assert the timestamp field by its index value. This returns two data frames: one with all the numerical fields without the timestamp index, and the other with all the numerical fields indexed by timestamp. We use the timestamp-indexed one for the further steps.
5) Data Processing With User-Input Time Window Size
The time window size (lookback size) is given as an input to the data_pre_processing function from the anamoly class. With this function, we also normalize the data within the range of [0,1] and then modify the dataset by including "time-steps" as an additional dimension. The idea is to convert the two-dimensional data set of dimension [Batch Size, Features] into a three-dimensional data set [Batch Size, Lookback Size, Features]. For more details, inspect here.
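The scaling and reshaping step can be sketched in NumPy as follows (a generic implementation of the idea, not MSDA's data_pre_processing itself):

```python
import numpy as np

def make_windows(data: np.ndarray, lookback: int) -> np.ndarray:
    """Scale features to [0, 1], then slide a lookback window so
    [batch, features] becomes [batch, lookback, features]."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    scaled = (data - lo) / (hi - lo)          # min-max normalization per column
    return np.stack([scaled[i:i + lookback]
                     for i in range(len(scaled) - lookback + 1)])

data = np.random.rand(100, 27)                # [rows, features]
windows = make_windows(data, lookback=10)
print(windows.shape)  # (91, 10, 27)
```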
6) Selecting Custom User Selection Input Configurations To Train the Anomaly Model
With the set_config() function, the user can select from the deep network architectures, set the time window size, and tune the kernel size. The available models are the deep convolutional neural network and the LSTM autoencoder, given with the possible values deepcnn and lstmaenn, respectively. We choose a time-series window size of 10 and a kernel size of 3 for the convolutional network.
7) Training the Selected Anomaly Detector Model
One can train the model on either GPU or CPU, based on availability. The compute function will use a GPU if available; otherwise, it falls back to CPU resources. Google Colab provides the NVIDIA Tesla K80, one of the most popular GPUs, while the NVIDIA Tesla V100 is the first Tensor Core GPU. The number of training epochs can be custom set, and the device being used is printed to the console.
8) Finding Anomalies
Once the training is completed, the next step is to find the anomalies. This brings us back to our fundamental question: how exactly can we estimate and trace what is an anomaly? One can use the Anomaly Score, the Anomaly Likelihood, or recently developed metrics like the Mahalanobis distance-based confidence score. The Mahalanobis confidence score assumes that the intermediate features of pre-trained neural classifiers follow class-conditional Gaussian distributions whose covariances are tied across all distributions, and the confidence score for a new input is defined as the Mahalanobis distance from the closest class-conditional distribution.
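A bare-bones version of a Mahalanobis-distance score looks like this (with synthetic stand-in features rather than real network activations):

```python
import numpy as np

# Fit a Gaussian to "normal" feature vectors, then score new points by
# their Mahalanobis distance from it (larger distance = more anomalous).
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 3))   # assumed in-distribution features

mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis(x: np.ndarray) -> float:
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# A point near the distribution scores lower than one far from it.
print(mahalanobis(np.zeros(3)), mahalanobis(np.full(3, 8.0)))
```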
An Anomaly Score is the fraction of active columns that were not predicted correctly. In contrast, the Anomaly Likelihood is the likelihood that a given anomaly score represents a true anomaly. In any dataset, there will be a natural level of uncertainty that creates a certain “normal” number of prediction errors; the anomaly likelihood accounts for this. Since we don’t have ground-truth anomaly labels in our case, we cannot use this metric. The find_anamoly() function is used to detect anomalies by generating the hypothesis and calculating losses, which serve as the anomaly confidence scores for the individual timestamps in the data set.
9) Plotting Samples With Confidence Score: DeepCNN Example
Next, we need to visualize the anomalies; each timestamp record is assigned an anomaly confidence score. The plot_anamoly_results function can be used to plot the anomaly scores against their frequencies (bins), as well as the confidence score for every timestamp record.
From the above graphs, one can presume that the timestamps/instances which have anomaly confidence scores greater than or equal to 1.2 are likely examples that deviate from what is expected or typical, and thus can be treated as potential anomalies.
10) Interpretable Results of Predictions From the Anomaly Detector—DeepCNN
Finally, a prototype of Explainable AI for the built time-series predictor is designed. Before we go through this step, let us understand the need for interpretable/explainable models.
Why Explainable AI (XAI) Is the Buzz and Need of the Hour?
Data is everywhere, and machine learning can mine it for information. Representation learning would become even more valuable and significant if the results generated by machine learning models could be easily understood, interpreted, and trusted by humans. That is where Explainable AI comes in, so that models are no longer a black box.
explainable_results() uses a game-theoretic approach to explain the output of the model. To understand, interpret, and trust the results of the deep models at the individual/sample level, we use the Kernel Explainer. One of the fundamental properties of Shapley values is that they always sum to the difference between the game outcome when all players are present and the game outcome when no players are present. For machine learning models, this means that the SHAP values of all the input features always sum to the difference between the baseline (expected) model output and the current model output for the prediction being explained.
The explainable_results function takes the input value for the specific row/instance/sample prediction to be interpreted, along with the number of input features (X) and the time-series window size difference (Y). We can get explainable results at the individual instance level, or for a batch of the data (for example, the first 200 rows or the last 50 samples).
The above graph is the result of the 10th example/sample/record/instance. It can be seen that the features that contributed significantly to the corresponding anomaly confidence score result were the temperature readings from the weather stations of Vancouver, Toronto, Saskatoon, Winnipeg, and Calgary.
- Example Unsupervised Feature Selection Demo Notebook
- Example Unsupervised Anomaly Detector & Explainable AI Demo Notebook
Published at DZone with permission of Ajay Arunachalam. See the original article here.