Revision as of 09:25, 21 April 2010

In Lecture 11, we continued our discussion of Parametric Density Estimation techniques. We discussed the Maximum Likelihood Estimation (MLE) method and looked at a couple of one-dimensional examples for the case when a feature in the dataset follows a Gaussian distribution. First, we looked at the case where the mean parameter was unknown, but the variance parameter was known. Then we followed with another example where both the mean and the variance were unknown. Finally, we looked at the slight "bias" problem when calculating the variance.
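The variance "bias" mentioned above can be checked numerically. The sketch below is an illustration only (the true parameters, sample size, and trial count are assumptions chosen for the demo): it averages the MLE variance estimate over many small samples and shows that it converges to <math>\frac{N-1}{N}\sigma^2</math> rather than <math>\sigma^2</math>.

```python
import random

random.seed(0)
mu_true, sigma2_true = 0.0, 4.0   # assumed true mean and variance for the demo
N, trials = 5, 200_000            # a small sample size N makes the bias visible

total = 0.0
for _ in range(trials):
    xs = [random.gauss(mu_true, sigma2_true ** 0.5) for _ in range(N)]
    xbar = sum(xs) / N                                  # MLE of the mean
    total += sum((x - xbar) ** 2 for x in xs) / N       # MLE variance: divide by N, not N-1

avg_mle_var = total / trials
print(avg_mle_var)   # close to (N-1)/N * 4.0 = 3.2, not the true 4.0
```

Dividing by <math>N-1</math> instead of <math>N</math> (the usual "sample variance") removes this bias, at the cost of no longer being the maximum likelihood estimate.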

Below are the notes from lecture.

Maximum Likelihood Estimation (MLE)


General Principles: Given vague knowledge about a situation and some training data (i.e. feature vector values for which the class is known) $ \vec{x}_l, \qquad l=1,\ldots,\text{hopefully large number} $

we want to estimate $ p(\vec{x}|\omega_i), \qquad i=1,\ldots,k $

1. Assume a parametric form for $ p(\vec{x}|\omega_i), \qquad i=1,\ldots,k $

2. Use the training data to estimate the parameters of $ p(\vec{x}|\omega_i) $, e.g. if you assume $ p(\vec{x}|\omega_i)=\mathcal{N}(\mu,\Sigma) $, then you need to estimate $ \mu $ and $ \Sigma $.
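The two steps above can be sketched in code for the Gaussian case. This is a minimal illustration, not part of the lecture: the training data below are synthetic, with assumed true mean $ (1, 2) $ and identity covariance, and the estimators are the standard MLE formulas $ \hat{\mu}=\frac{1}{N}\sum_l \vec{x}_l $ and $ \hat{\Sigma}=\frac{1}{N}\sum_l (\vec{x}_l-\hat{\mu})(\vec{x}_l-\hat{\mu})^T $.

```python
import random

random.seed(1)
# Hypothetical 2-D training data for one class omega_i, drawn from a Gaussian
# with assumed true mean (1, 2) and independent unit-variance components.
data = [(1.0 + random.gauss(0, 1), 2.0 + random.gauss(0, 1)) for _ in range(1000)]
N = len(data)

# MLE of the mean: the sample average of the training vectors.
mu_hat = [sum(x[d] for x in data) / N for d in range(2)]

# MLE of the covariance: (1/N) * sum of outer products (x - mu_hat)(x - mu_hat)^T.
Sigma_hat = [[sum((x[r] - mu_hat[r]) * (x[c] - mu_hat[c]) for x in data) / N
              for c in range(2)] for r in range(2)]

print(mu_hat)     # near [1, 2]
print(Sigma_hat)  # near the 2x2 identity
```

With the parameters estimated, $ p(\vec{x}|\omega_i) $ can be evaluated for any new $ \vec{x} $ and plugged into a Bayes classifier.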
