In this lesson, we will examine three approaches to classification. The first two are based on a priori knowledge of the probabilities; the third is based on the formulation of a discriminant function.


- Determination of the Class-Conditional Densities *p(x | C_k)*
- Determination of the a Posteriori Class Probabilities *p(C_k | x)*
- Use of a Discriminant Function

Let's begin with the first one.

**1. Determination of the Class-Conditional Densities *p(x | C_k)***

The first step is to solve the inference problem of determining the class-conditional probability densities *p(x | C_k)* for each class *C_k* individually, and to separately infer the prior class probabilities *p(C_k)*. Then use Bayes' theorem of the form

*p(C_k | x) = p(x | C_k) p(C_k) / p(x)*

to find the posterior class probabilities *p(C_k | x)*. We note that the term *p(x)* in the denominator can be found by the formula

*p(x) = Σ_k p(x | C_k) p(C_k)*

Having found the posterior probability

*p(C_k | x)*, we can use decision theory to determine the class membership of each input. Approaches that model the distribution of the inputs as well as the outputs are called generative models, because they can be used to generate synthetic data points in the input space.

**2. First Determine the Posterior Class Probabilities, *p(C_k | x)***

This approach first solves the inference problem of determining the posterior class probabilities

*p(C_k | x)* directly, and then subsequently uses decision theory to assign each new *x* to one of the available classes. Approaches that model the posterior probabilities directly are called discriminative models.

**3. Using a Discriminant Function**

The third approach is to find a function

*f(x)*, called a discriminant function, which maps each input *x* directly to a class label. For example, in the case of a two-class problem, the function could have a binary output such that

*f(x) = 0* represents class *C_1* while *f(x) = 1* represents class *C_2*. When a discriminant function is used, probabilities play no role.

**Final Notes**

The first approach is the most demanding of the three, because it involves finding the joint distribution over both *x* and

*C_k*. But it has the advantage that the marginal density of the data, *p(x)*, can also be determined. The simplest approach is the last one, as it does not require any probability functions.
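To make the contrast concrete, here is a minimal runnable sketch of the generative route (approach 1) and a discriminant function (approach 3). This is an illustration only, not part of the lesson: the toy data, the 1-D Gaussian form assumed for the class-conditional densities, and the midpoint threshold are all assumptions made here for demonstration.

```python
import math

# Hypothetical toy 1-D training data for two classes.
class1 = [1.0, 1.2, 0.8, 1.1]   # samples from C_1
class2 = [3.0, 2.8, 3.2, 3.1]   # samples from C_2

def gaussian_fit(xs):
    """Estimate mean and variance for a Gaussian class-conditional density p(x|C_k)."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Approach 1 (generative): infer p(x|C_k) and the priors p(C_k),
# then combine them with Bayes' theorem.
mu1, var1 = gaussian_fit(class1)
mu2, var2 = gaussian_fit(class2)
n = len(class1) + len(class2)
prior1, prior2 = len(class1) / n, len(class2) / n

def posterior(x):
    """p(C_1|x) and p(C_2|x) via Bayes: p(C_k|x) = p(x|C_k) p(C_k) / p(x)."""
    joint1 = gaussian_pdf(x, mu1, var1) * prior1
    joint2 = gaussian_pdf(x, mu2, var2) * prior2
    px = joint1 + joint2            # p(x) = sum_k p(x|C_k) p(C_k)
    return joint1 / px, joint2 / px

# Approach 3 (discriminant function): map x directly to a label,
# with no probabilities involved at decision time.
threshold = (mu1 + mu2) / 2         # simple midpoint rule, chosen for illustration

def f(x):
    return 0 if x < threshold else 1   # f(x) = 0 -> C_1, f(x) = 1 -> C_2

print(f(1.0), f(3.0))               # prints "0 1"
```

Note that the generative model yields full posteriors (which also give access to *p(x)* itself), while the discriminant function collapses the decision into a single direct mapping.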