Introduction to the Perceptron Algorithm

Get the basics of the perceptron, a machine learning algorithm used for supervised learning of binary classifiers.

In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. It is a type of linear classifier, i.e. a classification algorithm that makes all of its predictions based on a linear predictor function combining a set of weights with the feature vector.

The linear classifier says that the training data should be classified into the corresponding categories: if we are classifying into two categories, then all of the training data must lie in one of these two categories.

A binary classifier is one for which there are only two categories to classify into.

The basic perceptron algorithm is used for binary classification, so all the training examples should lie in one of these two categories. The name comes from the perceptron's role as the basic unit of a neural network: an artificial neuron modeled loosely on a biological one.
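
To make the decision rule concrete, here is a minimal sketch of such a linear binary classifier in Python (the function name and the example numbers below are purely illustrative):

# Linear decision rule: class 1 if w . x + b > 0, otherwise class 0.
def predict(weights, bias, features):
    total = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if total > 0 else 0

# With made-up weights [2.0, -1.0] and bias -0.5, the point (1.0, 1.0)
# falls on the positive side of the line, so it is assigned class 1.
print(predict([2.0, -1.0], -0.5, [1.0, 1.0]))  # -> 1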

Origin of the Perceptron

According to Wikipedia:

The perceptron algorithm was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt, funded by the United States Office of Naval Research. The perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the Mark 1 perceptron. This machine was designed for image recognition. It had an array of 400 photocells randomly connected to the neurons. Weights were encoded in potentiometers, and weight updates during learning were performed by electric motors.

Following are the major components of a perceptron:

    • Input: All the features become the inputs of a perceptron. We denote the input of a perceptron by [x1, x2, x3, ..., xn], where xi represents a feature value and n represents the total number of features. There is also a special kind of input called the bias; its associated weight is conventionally written as w0.
    • Weights: The values that are learned over the course of training the model. The weights start at some initial values and get updated after each training error. We represent the weights of a perceptron by [w1, w2, w3, ..., wn].
    • Bias: The bias allows a classifier to shift the decision boundary left or right. In algebraic terms, the bias translates the decision boundary, moving every point a constant distance in a specified direction; without it, the boundary would be forced to pass through the origin. The bias helps the model train faster and fit the data better.
    • Weighted summation: The sum of the products of each weight wi with its associated feature value xi. We represent the weighted summation as ∑wixi for i = 1 to n.
    • Step/activation function: In a larger neural network, activation functions are what make the model nonlinear. The basic perceptron uses a simple step function: it outputs one class when the weighted summation plus the bias is above a threshold and the other class otherwise (see the sketch after this list).
    • Output: The weighted summation is passed through the step/activation function, and the value we get from that computation is the predicted output.
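
Putting these components together, the forward pass is just the weighted summation plus the bias, followed by the step function. A small worked sketch in Python with made-up numbers (none of these values come from the article):

# Illustrative values only: three features, three weights, and a bias.
x = [1.0, 0.5, -2.0]      # input features [x1, x2, x3]
w = [0.4, -0.3, 0.2]      # weights [w1, w2, w3]
bias = 0.1                # bias term (the w0 input mentioned above)

# Weighted summation: sum of wi * xi over all the features.
weighted_sum = sum(wi * xi for wi, xi in zip(w, x))

# Add the bias, then apply the step/activation function.
output = 1 if weighted_sum + bias > 0 else 0

print(weighted_sum + bias)  # about -0.05, which is not above zero
print(output)               # -> 0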

Inside the Perceptron

  • First, the features of an example are given as input to the perceptron.
  • These input features get multiplied by their corresponding weights (which start at their initial values).
  • The weighted summation is computed by adding up these products.
  • The bias is added to the summation.
  • The step/activation function is applied to this new value to produce the predicted output.

And that's it!
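
The one detail the steps above leave out is how the weights change when a prediction is wrong: the classic perceptron learning rule adds learning rate * (true label - predicted label) * feature value to each weight, and the same quantity without the feature value to the bias. Here is a minimal training sketch in Python; the learning rate, the number of passes, and the tiny AND-function dataset are my own illustrative choices:

# Training data for logical AND: (features, label). AND is linearly
# separable, so the perceptron learning rule will converge on it.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w = [0.0, 0.0]   # one weight per feature
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(features):
    total = sum(wi * xi for wi, xi in zip(w, features)) + b
    return 1 if total > 0 else 0

for _ in range(20):                        # a handful of passes is enough here
    for features, label in data:
        error = label - predict(features)  # -1, 0, or +1
        for i, xi in enumerate(features):
            w[i] += lr * error * xi        # perceptron update rule
        b += lr * error                    # bias gets the same kind of update

print([predict(x) for x, _ in data])       # -> [0, 0, 0, 1]

Because AND is linearly separable, the loop settles on weights that classify all four examples correctly; on data that is not linearly separable, a single perceptron will never find a perfect boundary.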
