Today we will understand the concept of the perceptron.


**Basics of the Perceptron**

The perceptron (or single-layer perceptron) is the simplest model of a neuron and illustrates how a neural network works. The perceptron is a machine learning algorithm developed in 1957 by Frank Rosenblatt and first implemented on the IBM 704.

The perceptron is a network that takes a number of inputs, carries out some processing on those inputs, and produces an output, as shown in Figure 1.

Figure 1: How the Perceptron Works

*How the Perceptron Works*

How the perceptron works is illustrated in Figure 1. In the example, the perceptron has three inputs x₁, x₂, x₃ and one output. The importance of these inputs is determined by the corresponding weights w₁, w₂, w₃ assigned to them. The output can be either 0 or 1, depending on the weighted sum of the inputs: the output is 0 if the sum is below a certain threshold, and 1 if the sum is above that threshold. The threshold is a real number and a parameter of the neuron. Since the output of the perceptron can only be 0 or 1, this perceptron is an example of a binary classifier. This is shown in Equation 1.

Equation 1: Output of a Perceptron
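This weighted-sum-and-threshold rule can be sketched in a few lines of Python. The particular inputs, weights, and threshold below are made-up illustrative values, not values from the lesson:

```python
# Minimal perceptron decision rule: compare the weighted sum of the
# inputs against a threshold. All numeric values are illustrative.

def perceptron_output(inputs, weights, threshold):
    # Weighted sum: w1*x1 + w2*x2 + w3*x3
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    # Binary output: 1 if the sum exceeds the threshold, else 0
    return 1 if weighted_sum > threshold else 0

x = [1, 0, 1]        # three binary inputs x1, x2, x3
w = [0.6, 0.4, 0.3]  # corresponding weights w1, w2, w3
print(perceptron_output(x, w, threshold=0.5))  # weighted sum 0.9 > 0.5, prints 1
```

Because the output is always 0 or 1, this function behaves as a binary classifier over its inputs.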

**The Formula**

Let's write out the formula that joins the inputs and the weights together to produce the output:

Output = w₁x₁ + w₂x₂ + w₃x₃

This function is a trivial one, but it remains the basic formula for the perceptron. However, I want you to read this equation as

Output 'depends on' w₁x₁ + w₂x₂ + w₃x₃

The reason is that the output is not necessarily just the sum of these values; it may also depend on a bias that is added to the expression. In other words, we can think of a perceptron as a 'judge who weighs up several pieces of evidence, together with other rules, and then makes a decision'. We will discuss this in detail in the Neural Networks lesson.
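To make the 'depends on' reading concrete, a common reformulation in later neural-network literature (not spelled out in this lesson) moves the threshold to the other side of the inequality as a bias term b = −threshold. A hedged Python sketch with illustrative values:

```python
# Perceptron with an explicit bias: output = 1 if w.x + b > 0, else 0.
# This is equivalent to the threshold rule with b = -threshold.
# All numeric values below are illustrative.

def perceptron_with_bias(inputs, weights, bias):
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

w = [0.6, 0.4, 0.3]
b = -0.5  # bias playing the role of -threshold
print(perceptron_with_bias([1, 0, 1], w, b))  # 0.9 - 0.5 = 0.4 > 0, prints 1
print(perceptron_with_bias([0, 1, 0], w, b))  # 0.4 - 0.5 < 0, prints 0
```

Writing the rule this way makes the bias an ordinary learnable parameter alongside the weights, which is how it appears in neural networks.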

This operation of the perceptron forms the basis of neural networks and serves as a good introduction to the neural network models we will examine in subsequent lessons.

Next, we will examine a more detailed model of a neural network, but that will be in Part 2, because I want to keep this lesson as simple as possible.