<center><font size="4">'''Maximum Likelihood Estimators and Examples'''</font>

<font size="2">A [https://www.projectrhea.org/learning/slectures.php slecture] by Lu Zhang

Partially based on the ECE662 Spring 2014 lecture material of Prof. Mireille Boutin.</font></center>

----

'''Outline of the slecture'''

* Introduction
* Derivation for Maximum Likelihood Estimates (MLE)
* Examples
* Summary
* References

----
== Introduction ==

Once we have decided on a model (probability distribution), our next step is often to estimate some information from the observed data. There are generally two parametric frameworks for estimating unknown information from data. We will refer to these two general frameworks as the Frequentist and Bayesian approaches. One very widely used Frequentist estimator is known as the maximum likelihood estimator.

In the frequentist approach, one treats the unknown quantity as a deterministic but unknown parameter vector, <math>\theta \in \Omega</math>. So, for example, after we observe the random vector <math>Y \in \mathbb{R}^{n}</math>, our objective is to use <math>Y</math> to estimate the unknown scalar or vector <math>\theta</math>. In order to formulate this problem, we will assume that the vector <math>Y</math> has a probability density function given by <math>p_{\theta}(y)</math>, where <math>\theta</math> parameterizes a family of density functions for <math>Y</math>. We may then use this family of distributions to determine a function, <math>T : \mathbb{R}^{n} \rightarrow \Omega</math>, that can be used to compute an estimate of the unknown parameter as

<center><math>\hat{\theta} = T(Y)</math></center>
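As a concrete running example, suppose the components of <math>Y = (Y_1, \ldots, Y_n)</math> are independent <math>N(\theta, \sigma^2)</math> samples with <math>\sigma^2</math> known, so that the unknown parameter is the mean. One natural choice for the function <math>T</math> is the sample mean,

<center><math>\hat{\theta} = T(Y) = \frac{1}{n}\sum_{i=1}^{n} Y_i .</math></center>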
Notice that since <math>T(Y)</math> is a function of the random vector <math>Y</math>, the estimate, <math>\hat{\theta}</math>, is itself a random vector. The mean of the estimator, <math>\bar{\theta}</math>, can be computed as

<center><math>\bar{\theta} = E_{\theta}[\hat{\theta}] = \int_{\mathbb{R}^{n}} T(y)p_{\theta}(y)dy</math></center>
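In the sample mean example, linearity of expectation shows that the mean of the estimator equals the true parameter for every value of <math>\theta</math>:

<center><math>\bar{\theta} = E_{\theta}\left[\frac{1}{n}\sum_{i=1}^{n} Y_i\right] = \frac{1}{n}\sum_{i=1}^{n} E_{\theta}[Y_i] = \theta .</math></center>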
The difference between the mean of the estimator and the value of the parameter is known as the bias and is given by

<center><math>bias_{\theta} = \bar{\theta} - \theta</math></center>
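For the sample mean example, <math>\bar{\theta} = \theta</math> for every value of the parameter, so

<center><math>bias_{\theta} = \bar{\theta} - \theta = 0 .</math></center>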
Similarly, the variance of the estimator is given by

<center><math>var_{\theta} = E_{\theta}[(\hat{\theta} -\bar{\theta})^2]</math></center>
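For the sample mean of <math>n</math> independent <math>N(\theta, \sigma^2)</math> samples, the variance of the estimator shrinks as more samples are collected:

<center><math>var_{\theta} = E_{\theta}\left[\left(\frac{1}{n}\sum_{i=1}^{n} Y_i - \theta\right)^2\right] = \frac{\sigma^2}{n} .</math></center>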
and it is easily shown that the mean squared error (MSE) of the estimate is then given by

<center><math>MSE_{\theta} = E_{\theta}[(\hat{\theta}-\theta)^2] = var_{\theta} + (bias_{\theta})^2</math></center>
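To see this, write <math>\hat{\theta}-\theta = (\hat{\theta}-\bar{\theta}) + (\bar{\theta}-\theta)</math> and expand the square; the cross term vanishes because <math>E_{\theta}[\hat{\theta}-\bar{\theta}]=0</math> and <math>\bar{\theta}-\theta</math> is deterministic, leaving

<center><math>MSE_{\theta} = E_{\theta}[(\hat{\theta}-\bar{\theta})^2] + (\bar{\theta}-\theta)^2 = var_{\theta} + (bias_{\theta})^2 .</math></center>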
Since the bias, variance, and MSE of the estimator will depend on the specific value of <math>\theta</math>, it is often unclear precisely how to compare the accuracy of different estimators. Even estimators that seem quite poor may produce small or zero error for certain values of <math>\theta</math>. For example, consider the estimator which is fixed to the value <math>\hat{\theta}=1</math>, independent of the data. This would seem to be a very poor estimator, but it has an MSE of 0 when <math>\theta=1</math>.

An estimator is said to be consistent if, for all <math>\theta \in \Omega</math>, the MSE of the estimator goes to zero as the number of independent data samples, <math>n</math>, goes to infinity. If an estimator is not consistent, this means that even with arbitrarily large quantities of data, the estimate will not approach the true value of the parameter. Consistency would seem to be the least we would expect of an estimator, but we will later see that even some very intuitive estimators are not always consistent.
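The sample mean example is consistent, since its MSE is <math>\sigma^2/n \rightarrow 0</math>. The following minimal simulation sketch illustrates these definitions numerically; it assumes the Gaussian sample mean example above, uses Python with NumPy, and the particular values of <math>\theta</math> and <math>\sigma</math> are arbitrary choices made only for illustration. As <math>n</math> grows, the empirical bias stays near zero and the empirical MSE tracks <math>\sigma^2/n</math>, shrinking toward zero as consistency requires.

<pre>
import numpy as np

rng = np.random.default_rng(0)

theta = 2.0          # true (unknown) parameter: the mean of the Gaussian
sigma = 1.0          # known standard deviation
num_trials = 10000   # independent experiments per sample size

for n in [10, 100, 1000]:
    # Draw num_trials data sets, each containing n samples of N(theta, sigma^2)
    Y = rng.normal(loc=theta, scale=sigma, size=(num_trials, n))
    # Apply the sample-mean estimator T(Y) to each data set
    theta_hat = Y.mean(axis=1)
    bias = theta_hat.mean() - theta
    var = theta_hat.var()
    mse = np.mean((theta_hat - theta) ** 2)
    print(f"n={n:5d}  bias={bias:+.4f}  var={var:.5f}  "
          f"mse={mse:.5f}  sigma^2/n={sigma**2 / n:.5f}")
</pre>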
Ideally, it would be best if one could select an estimator which has uniformly low bias and variance for all values of <math>\theta</math>. This is not always possible, but when it is we have names for such estimators. For example, <math>\hat{\theta}</math> is said to be an unbiased estimator if for all values of <math>\theta</math> the bias is zero, i.e. <math>\bar{\theta} = \theta</math>. If, in addition, for all values of <math>\theta</math> the variance of the estimator is less than or equal to that of every other unbiased estimator, then we say that the estimator is a uniformly minimum variance unbiased (UMVU) estimator.
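In the Gaussian example above, the estimator <math>\hat{\theta} = Y_1</math> that uses only the first sample is also unbiased, but its variance is at least as large as that of the sample mean for every <math>\theta</math> (indeed, the sample mean can be shown to be the UMVU estimator of the mean in this Gaussian setting):

<center><math>E_{\theta}[(Y_1 - \theta)^2] = \sigma^2 \;\geq\; \frac{\sigma^2}{n} .</math></center>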
There are many excellent estimators that have been proposed through the years for many different types of problems. However, one very widely used Frequentist estimator is known as the maximum likelihood (ML) estimator, given by

<center><math>\hat{\theta} = \arg\max_{\theta \in \Omega} p_{\theta}(Y)</math></center>
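Equivalently, since the logarithm is monotone increasing, the ML estimate is usually computed by maximizing the log-likelihood, which is often easier to work with:

<center><math>\hat{\theta} = \arg\max_{\theta \in \Omega} \log p_{\theta}(Y) .</math></center>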
