Sunday, 7 January 2018

Basics of Backpropagation in Neural Networks (Machine Learning)

Backpropagation is the learning algorithm used in neural networks and is a generalization of the least mean squares algorithm used in the linear perceptron. Backpropagation requires a known, expected output value for each input value, and it is therefore a supervised learning method.


How It Works
Remember that backpropagation is a learning algorithm. How does learning occur? Learning is done by changing the weights of the perceptron after each signal has been processed, based on the calculated amount of error in the output compared to the expected result.

Error = perceptron output − expected result
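To make this concrete, here is a minimal sketch of an error-driven weight update for a single linear neuron (the delta rule). The input, weights, and learning rate below are illustrative assumptions, not values from this post:

```python
import numpy as np

# Minimal sketch: error-driven weight update for one linear neuron.
# All values here are made up for illustration.
x = np.array([0.5, -1.0, 0.2])   # input signal
w = np.array([0.1, 0.4, -0.3])   # current weights
expected = 1.0                    # known, expected output
learning_rate = 0.01

output = np.dot(w, x)             # perceptron output
error = output - expected         # error = perceptron output - expected result
w -= learning_rate * error * x    # nudge the weights against the error
```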

Calculating the Loss Function
To understand the backpropagation algorithm, you need to understand the concept of the loss function (also called the cost function). This is loosely the same as the error formula above, but this time we need to formalize the definition a little. It is still easy to follow.
The loss function calculates the difference between the output of the network and the expected output.

Let y, y' be vectors in ℝⁿ, where y is the output of the network and y' is the expected output.
We select an error function E(y, y') which gives the difference between the outputs y and y'. The error function is given by the square of the Euclidean distance between y and y':

E(y, y') = ½ ‖y − y'‖²

For n training examples, the average error would be given by:

E = (1/2n) Σᵢ₌₁ⁿ ‖yᵢ − y'ᵢ‖²

The partial derivatives with respect to y and y' would be given by:

∂E/∂y = y − y'        ∂E/∂y' = y' − y
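As a small sketch, the loss and its gradient translate directly into Python with NumPy. The example vectors here are made up for illustration:

```python
import numpy as np

# Squared-Euclidean loss E(y, y') = 0.5 * ||y - y'||^2
# y is the network output, y_true the expected output.
def loss(y, y_true):
    return 0.5 * np.sum((y - y_true) ** 2)

def loss_gradient(y, y_true):
    # dE/dy = y - y_true: this is the quantity propagated backwards
    return y - y_true

y = np.array([0.8, 0.2, 0.1])
y_true = np.array([1.0, 0.0, 0.0])
print(loss(y, y_true))           # 0.045
print(loss_gradient(y, y_true))  # [-0.2  0.2  0.1]
```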
You have come a long way in understanding the backpropagation algorithm. Take some time to get these formulas into your head, because they form the basis of understanding the backpropagation algorithm.

The Backpropagation Algorithm
The algorithm starts when an input vector x is entered into the network. This input moves from the input layer through the hidden layers to the output layer and produces an output y. 
The loss function is used to compare this output with the expected (or actual) output to give an error value. The error value is calculated for each of the neurons in the output layer. 
These error values are then propagated backwards from the output layer, through the network.

Backpropagation uses these error values to calculate the gradient of the loss function as they move back through the network. This gradient is then used to update the weights of the nodes. The network then produces another output based on the updated weights, and the process of backpropagation is repeated until the error function reaches a minimum value.
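The following sketch shows one forward and one backward pass through a tiny network with a single hidden layer. The sigmoid activation, the layer sizes, and the random initialization are assumptions made for illustration; the gradients follow the chain rule exactly as described above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # input (2) -> hidden (3)
W2 = rng.normal(size=(1, 3))   # hidden (3) -> output (1)

x = np.array([0.5, -0.2])      # input vector
y_true = np.array([1.0])       # expected output

# Forward pass: input layer -> hidden layer -> output layer
h = sigmoid(W1 @ x)            # hidden activations
y = sigmoid(W2 @ h)            # network output

# Backward pass: propagate the error gradient layer by layer
delta2 = (y - y_true) * y * (1 - y)      # output-layer error term
delta1 = (W2.T @ delta2) * h * (1 - h)   # hidden-layer error term

grad_W2 = np.outer(delta2, h)  # gradient of the loss w.r.t. W2
grad_W1 = np.outer(delta1, x)  # gradient of the loss w.r.t. W1

learning_rate = 0.5
W2 -= learning_rate * grad_W2  # update the weights using the gradient
W1 -= learning_rate * grad_W1
```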

Summary of the Backpropagation Algorithm
  1. Input vector x is given to the network 
  2. Input propagates forward to produce an output y in the output layer 
  3. Error function is calculated 
  4. Error is propagated backwards into the network 
  5. Weights are adjusted accordingly 
  6. Repeat the process until the final error is at a minimum
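
Continuing the sketch above (and reusing its sigmoid, W1, W2, x, y_true, and learning_rate), the six steps translate into a training loop roughly like this; the stopping threshold and iteration cap are illustrative assumptions:

```python
for step in range(10_000):
    h = sigmoid(W1 @ x)                        # 1-2: forward pass
    y = sigmoid(W2 @ h)
    error = 0.5 * np.sum((y - y_true) ** 2)    # 3: error function
    if error < 1e-4:                           # 6: stop near the minimum
        break
    delta2 = (y - y_true) * y * (1 - y)        # 4: propagate error backwards
    delta1 = (W2.T @ delta2) * h * (1 - h)
    W2 -= learning_rate * np.outer(delta2, h)  # 5: adjust weights
    W1 -= learning_rate * np.outer(delta1, x)
```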