The Basics of Deep Learning and How To Apply It To Predict Failures
Deep Learning is a hot topic. Big players like Google, Microsoft, and IBM invest heavily in new projects around Deep Learning. Their goal? Developing neural networks that learn increasingly complex tasks. But how does it work?
Spam filters already filter out our unwanted email with extremely high precision, yet not many people understand how these unwanted messages are separated from wanted ones. You can't simply filter on sender address: new spam addresses are easily created, and spam is often sent from legitimate email accounts hijacked by third parties. The best way to separate spam is to look at the content of the email messages, and the most effective techniques for doing so are based on machine learning.
Machine learning is a discipline concerned with the development of self-learning systems. These systems learn, in an automated fashion, to recognize structure in data. In this way, the system learns a model that explains the data, with which we can make predictions about unseen data. Well-known examples of machine learning are face recognition, voice recognition, and text translation. Google's self-driving car, too, is loaded with different kinds of machine learning systems to recognize pedestrians and traffic signs.
The Underlying Principle
The principle behind machine learning is rather simple. Imagine we want to build a machine that separates apples from pears. A digital image is made of an object, and two values called features are extracted from this digital image by a small piece of handwritten code. The code extracts the color of the object in the image, from red to green, and the shape of the object, from circular to oval. Now imagine we have a set of images containing both apples and pears. For each image, we also know whether it contains an apple or a pear; we call these the labels of the images. When we compute the features for the images of the training set and plot them, we get the following graphs.
We see that apples and pears mostly occupy their own areas. The two object classes are therefore largely separable by dividing the feature space into two distinct regions (blue line). Given a new image of an object, we can now identify it as an apple or a pear by computing its features and checking in which region they lie. In essence, the algorithm has learned from data to differentiate apples from pears.
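The idea can be sketched in a few lines of code. This is a toy illustration only: the feature extractor, feature values, and the position of the decision line are made-up assumptions, where a real system would learn the boundary from labeled training images.

```python
# Toy sketch of the apples-vs-pears example. All values are illustrative.

def extract_features(image):
    """Hypothetical feature extractor: returns (color, shape), where
    color runs from 0.0 (red) to 1.0 (green) and shape from
    0.0 (circular) to 1.0 (oval)."""
    return image["color"], image["shape"]

def classify(color, shape):
    """Separate the feature space with a straight line: objects on one
    side are labelled 'apple', objects on the other side 'pear'."""
    # Assumed decision line: color + shape = 1.0
    score = color + shape
    return "pear" if score > 1.0 else "apple"

# A round, red object lands on the apple side of the line...
print(classify(color=0.1, shape=0.2))  # apple
# ...while a green, oval object lands on the pear side.
print(classify(color=0.8, shape=0.9))  # pear
```

In practice, "learning" means choosing the position of that line automatically from the labeled examples instead of hard-coding it.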
That said, the system can still make mistakes when the computed features lie close to the line separating the two object classes. This is because there exist green, oval apples and rounder, redder pears. The accuracy of the algorithm is therefore highly dependent on the number of samples in the training set and on the quality and number of features used. For example, we could have used a third feature quantifying the texture of the object, which might have increased the accuracy of the algorithm. The whole process is schematically visualized in the figure below.
The method described above is the essence of machine learning and has been applied in this manner for decades. The most important element is constructing quality features such that object categories are separable. One might ask, however: is it also possible to learn these features directly, instead of hand-coding them? This is indeed possible, and methods to do so have existed since the seventies. One method that can be used to learn features automatically is the neural network. Neural networks are loosely based on the way the brain works.
Artificial neural networks are made up of artificial neurons that model single brain cells. These artificial neurons represent one unit of computation. An artificial neuron receives different values as input (for example, from other artificial neurons) and then computes a simple equation to produce a single output value. This output value may then function as the input for other neurons. By connecting neurons in layers, we construct one big artificial neural network. Although single neurons perform simple computations, the network as a whole can perform a very complex calculation. The image below illustrates this idea, with neurons represented as circles and output-input connections between neurons as lines. The interesting thing about neural networks is that they automatically learn the required features. We can imagine a neural network that separates apples and pears by learning the shape and color features directly from the images it receives as input.
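The "simple equation" a single neuron computes is typically a weighted sum of its inputs plus a bias, squashed through an activation function. A minimal sketch, with arbitrary illustration values for the inputs and weights:

```python
import math

def neuron(inputs, weights, bias):
    """One unit of computation: a weighted sum of the inputs, plus a
    bias, passed through a sigmoid activation function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes to (0, 1)

# Two input values, e.g. outputs of two other neurons.
output = neuron(inputs=[0.5, 0.3], weights=[0.8, -0.2], bias=0.1)
print(round(output, 3))
```

Training a network means adjusting the weights and biases of all its neurons so that the network's outputs match the labels of the training examples.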
The term deep learning refers to the number of layers in the neural network, also called the depth of the network. The depth plays an important role in learning good features, because every layer learns a set of features based on the features learned in the layer before it. The deeper the network, the more complex the features that can be learned. A nice demo that gives more insight into how neural networks work can be found at playground.tensorflow.org.
Although neural networks learn features by themselves, for a long time they were rarely applied in practice. The reason is twofold: first, many training examples are required; second, many layers are needed to learn good features, which in turn demands a lot of computational power. With the rise of big data and the increase in computational power over the last few years, it has become possible to apply these neural networks in practice. Neural networks can learn far more complex features than can be constructed by hand, so they often outperform hand-coded systems.
Machine learning and deep learning are widely applicable and are not limited to separating pears from apples for industrial agriculture. For example, systems exist that learn to distinguish cancer cells from healthy cells in medical scans, and the accuracy of such systems has improved rapidly over the last few years. Facebook, for example, has created a Siri-like system that can analyze the content of pictures with a high degree of accuracy and then answer questions about the image content.
Although these kinds of systems do not yet perform better than humans, more specialized systems already do. For example, an application by Microsoft recognizes dog breeds with high accuracy, outperforming humans.
Machine learning is not only used for classification but also for analyzing text. A neural network can, for example, be used to extract the sentiment of a text, indicating how positive or negative it is. This is a well-known technique used, for example, to automatically assess the content of product reviews.
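To make the idea of a sentiment score concrete, here is a deliberately trivial stand-in: instead of a trained neural network, it counts words from small, made-up positive and negative word lists and returns a score between -1 (negative) and +1 (positive). Both word lists are assumptions for illustration only.

```python
# Toy sentiment scorer. A real system would use a trained model; this
# stand-in only counts words from tiny, made-up lexicons.

POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor"}

def sentiment(text):
    """Score in [-1, 1]: fraction of positive minus negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

print(sentiment("great product love it"))     # positive score
print(sentiment("terrible quality bad fit"))  # negative score
```

A neural approach learns these word associations (and far subtler ones, like negation and sarcasm) from labeled review data instead of a fixed word list.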
The most impressive application of machine learning, in my opinion, is in the field of artificial intelligence. The combination of neural networks with reinforcement learning makes it possible to construct intelligent agents that can learn from their environment.
The best example of this is a system produced by Google DeepMind, which learns to play Atari video games like Pong and Breakout completely autonomously, by trial and error. The system only receives screen input and can only produce button presses on the video game controller, just like a human player. In some video games, the system actually outperforms humans.
One of the areas we are currently focusing our attention on is the applicability of deep learning to anomaly detection in signals (streams): detecting whether a signal shows abnormal patterns compared to the frequently occurring patterns normally observed.
The aim is to use anomaly detection to detect problems before they escalate and become catastrophic. That way, a problem can be resolved faster, allowing for shorter downtime. This is based on the premise that abnormal behavior precedes system failure.
Anomaly Detection & Deep Learning
Neural networks can be used to implement anomaly detection. The idea is to construct a neural network that takes a signal as input and then reconstructs the same signal at its output. In this manner, the neural network learns features that characterize the signals it is trained on; these features thus define the normal behavior of the signal.
When the network is confronted with an abnormal signal, it will not be able to properly reconstruct that signal at its output. This causes a large deviation between the input signal and the reconstructed output signal, which we call the reconstruction error. Applied in real time, a large reconstruction error indicates abnormal behavior of the signal.
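The reconstruction-error idea can be sketched without any deep learning machinery. In the toy code below, the trained autoencoder is replaced, purely for brevity, by a stand-in "model" that reconstructs every signal as the mean of the normal training signals; the thresholding on reconstruction error works the same way either way. All numbers, including the threshold, are made up for illustration.

```python
# Minimal sketch of reconstruction-based anomaly detection, with a
# trivial stand-in model instead of a trained autoencoder.

def fit(normal_signals):
    """'Train' the stand-in model: remember the average normal signal."""
    n = len(normal_signals)
    return [sum(s[i] for s in normal_signals) / n
            for i in range(len(normal_signals[0]))]

def reconstruction_error(model, signal):
    """Mean squared deviation between a signal and its reconstruction."""
    return sum((x - m) ** 2 for x, m in zip(signal, model)) / len(signal)

normal = [[1.0, 2.0, 1.0], [1.1, 1.9, 1.0], [0.9, 2.1, 1.1]]
model = fit(normal)

threshold = 0.05  # assumed: chosen from errors observed on normal data
print(reconstruction_error(model, [1.0, 2.0, 1.0]) > threshold)  # False: normal
print(reconstruction_error(model, [3.0, 0.5, 2.5]) > threshold)  # True: anomaly
```

With a real autoencoder, the reconstruction captures far richer structure than an average, but the decision rule is the same: flag a signal when its reconstruction error exceeds a threshold derived from normal data.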
The above text is an English translation of the article: Van der Laan, T.A. (01-07-2016), "De Basics van Deep Learning," AG Connect, number 1.
Although a crash may be preceded by abnormal system behavior, abnormal behavior also occurs on its own, without the system necessarily failing. Consider, for example, an online shop during the holiday season: input to the system will deviate due to abnormal shopping patterns. Ideally, we would like to directly couple observed patterns in the data to crashes, which would reduce the false positive rate enormously.
Anomaly Detection & StackState
One of the areas StackState is currently focusing on is the application of deep learning to anomaly detection in signals (streams). StackState builds an IT operations tool for monitoring your full IT landscape. The aim is to detect problems and give insight into the root cause of the issue at hand, so that the problem can be resolved faster, allowing for shorter downtime.
Disruption of IT Environments
The last three years have seen a significant change in service, infrastructure, and application architectures, driven by microservices, containerization, continuous delivery, DevOps, and IoT. All these changes are meant to make the stack more resilient, scalable, and agile. We see different teams using different tools to do their jobs, and new patterns being woven into old-style architectures. This greatly increases the rate of change and the complexity, while customers' expectations continue to grow. The task of maintaining a constantly changing IT stack and acting swiftly when problems occur is a daunting one, especially with the current tool set. Hence the need for a new approach to monitoring and managing complex IT environments.
Published at DZone with permission of Lodewijk Bogaards.