[[Category:ECE302Fall2008_ProfSanghavi]]
[[Category:probabilities]]
[[Category:ECE302]]
[[Category:cheat sheet]]
=[[ECE302]] Cheat Sheet number 4=
 
==Maximum Likelihood Estimation (ML)==
 
:<math>\hat a_{ML} = \arg\max_{a} f_{X}(x_i;a)</math> continuous

:<math>\hat a_{ML} = \arg\max_{a} Pr(x_i;a)</math> discrete
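A minimal numerical sketch of the idea, assuming a hypothetical Bernoulli(''a'') sample and numpy (the grid search stands in for the maximization over ''a''):

<pre>
import numpy as np

rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=1000)      # i.i.d. draws from a hypothetical Bernoulli(a = 0.3)

a_grid = np.linspace(0.001, 0.999, 999)  # candidate values of the parameter a
# log-likelihood of the whole sample at each candidate a
log_lik = x.sum() * np.log(a_grid) + (len(x) - x.sum()) * np.log(1 - a_grid)
a_ml = a_grid[np.argmax(log_lik)]

print(a_ml, x.mean())                    # the grid maximizer ~ the sample mean
</pre>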
==Chebyshev Inequality==
"Any RV is likely to be close to its mean"
:<math>\Pr(\left|X-E[X]\right|\geq C)\leq\frac{var(X)}{C^2}.</math>
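A quick empirical check of the bound (Python with numpy; the exponential distribution is an arbitrary choice):

<pre>
import numpy as np

rng = np.random.default_rng(1)
X = rng.exponential(scale=2.0, size=100_000)   # E[X] = 2, var(X) = 4

for C in (2.0, 4.0, 6.0):
    lhs = np.mean(np.abs(X - X.mean()) >= C)   # empirical Pr(|X - E[X]| >= C)
    rhs = X.var() / C**2                       # Chebyshev bound var(X)/C^2
    print(C, lhs, "<=", rhs)
</pre>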
  
 
==Maximum A-Posteriori Estimation (MAP)==
 
:<math>\hat \theta_{MAP}(x) = \arg\max_{\theta} P_{X|\theta}(x|\theta)P_{\theta}(\theta)</math> discrete

:<math>\hat \theta_{MAP}(x) = \arg\max_{\theta} f_{X|\theta}(x|\theta)P_{\theta}(\theta)</math> continuous

==Minimum Mean-Square Estimation (MMSE)==

:<math>\hat{y}_{\rm MMSE}(x) = \int_{-\infty}^{\infty} {y}{f}_{\rm Y|X}(y|x)\, dy={E}[Y|X=x]</math>

==Law Of Iterated Expectation==

:<math>E[E[X|Y]] = \begin{cases} \sum_{y} E[X|Y = y]p_Y(y), & \mbox{Y discrete,}\\ \int_{-\infty}^{+\infty} E[X|Y = y]f_Y(y)\,dy, & \mbox{Y continuous.} \end{cases}</math>

Using the total expectation theorem:

:<math>E\Big[ E[X|Y]\Big] = E[X]</math>
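A simulation sketch of the discrete case, assuming the hypothetical model Y uniform on {0, 1, 2} with X | Y = y ~ N(y, 1):

<pre>
import numpy as np

rng = np.random.default_rng(2)
Y = rng.integers(0, 3, size=200_000)   # hypothetical: Y uniform on {0, 1, 2}, p_Y(y) = 1/3
X = rng.normal(loc=Y, scale=1.0)       # X | Y = y ~ N(y, 1), so E[X | Y = y] = y

# X[Y == y].mean() is the empirical E[X | Y = y], i.e. the MMSE estimate of X given Y = y;
# weighting by p_Y(y) gives the outer expectation
outer = sum(X[Y == y].mean() * (1/3) for y in (0, 1, 2))
print(outer, X.mean())                 # both are close to E[X] = 1.0
</pre>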
 
==Mean Square Error==
 
  
:<math>MSE = E[(\Theta - \hat \theta(x))^2]</math>
:<math>MSE(E[\Theta]) = E[(\Theta - E[\Theta])^2] = var(\Theta) \,</math> (the MSE when the constant <math>E[\Theta]</math> is used as the estimate, i.e. no data is observed)
  
 
==Linear Minimum Mean-Square Estimation (LMMSE)==
 
The LMMSE estimator <math>\hat{Y}</math> of Y based on the variable X is
:<math>\hat{Y}_{LMMSE}(X) = E[Y]+\frac{COV(Y,X)}{Var(X)}(X-E[X]) = E[Y] + \rho \frac{\sigma_{Y}}{\sigma_{X}}(X-E[X])</math>
where
::<math>\rho = \frac{COV(Y,X)}{\sigma_{Y}\sigma_{X}}</math>
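A sketch of the formula on simulated data (Python with numpy; the linear model Y = 1.5X + noise is an arbitrary choice):

<pre>
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(0.0, 2.0, size=100_000)
Y = 1.5 * X + rng.normal(0.0, 1.0, size=X.size)  # hypothetical: Y linear in X plus N(0,1) noise

cov_yx = np.cov(Y, X)[0, 1]                      # COV(Y, X) from the samples
y_hat = Y.mean() + (cov_yx / X.var()) * (X - X.mean())

print(np.mean((Y - y_hat) ** 2))    # MSE of the LMMSE estimate, ~1.0 (the noise variance)
print(np.mean((Y - Y.mean()) ** 2)) # MSE of the constant estimate E[Y], = var(Y), much larger
</pre>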
  
 
Law of Iterated Expectation: E[E[X|Y]]=E[X]
 
  
COV(X,Y)=E[XY] - E[X]E[Y]
==Hypothesis Testing==
In hypothesis testing, <math>\Theta</math> takes on one of ''m'' values, <math>\theta_1,...,\theta_m</math>, where ''m'' is usually small; often ''m'' = 2, in which case it is a binary hypothesis testing problem.
The event <math>\Theta = \theta_i</math> is the <math>i^{th}</math> hypothesis, denoted by <math>H_i</math>.
===ML Rule===
  
 
Given a value of X, we say <math>H_1</math> is true if X is in region R; otherwise we say <math>H_0</math> is true.
  
'''Type I Error: False Rejection'''
  
 
Say <math>H_1</math> when truth is <math>H_0</math>. Probability of this is:  
 
 
:<math>Pr(\mbox{Say } H_1|H_0) = Pr(x \in R|\theta_0)</math>
 
  
'''Type II Error: False Acceptance'''
  
 
Say <math>H_0</math> when truth is <math>H_1</math>. Probability of this is:  
 
 
:<math>Pr(\mbox{Say }H_0|H_1) = Pr(x \in R^C|\theta_1)</math>
 
  
Say <math>H_1</math> if

:<math>f_{X|\theta}(x|\theta_1) > f_{X|\theta}(x|\theta_0);</math>

else say <math>H_0</math>. Equivalently, say <math>H_0</math> if

:<math>f_{X|\theta}(x|\theta_1) \leq f_{X|\theta}(x|\theta_0);</math>

else say <math>H_1</math>.
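For instance, with a hypothetical pair of Gaussian likelihoods, <math>f_{X|\theta}(x|\theta_0) = N(0,1)</math> and <math>f_{X|\theta}(x|\theta_1) = N(2,1)</math>, the rule can be evaluated pointwise (Python sketch):

<pre>
import math

def gauss_pdf(x, mu, sigma=1.0):
    # density of N(mu, sigma^2) at x
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# hypothetical likelihoods: f(x|theta_0) = N(0,1), f(x|theta_1) = N(2,1)
for x in (-0.5, 0.8, 1.5):
    say_h1 = gauss_pdf(x, mu=2.0) > gauss_pdf(x, mu=0.0)
    print(x, "H1" if say_h1 else "H0")   # here the rule reduces to: say H1 iff x > 1
</pre>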
===MAP Rule===

Say <math>H_1</math> if

:<math>f_{X|\theta}(x|\theta_1)P_{\theta}(\theta_1) > f_{X|\theta}(x|\theta_0)P_{\theta}(\theta_0);</math>

else say <math>H_0</math>.
  
 
:<math>\mbox{Overall P(err)} = P_{\theta}(\theta_{0})Pr\Big[\mbox{Say }H_{1}|H_{0}\Big] +P_{\theta}(\theta_{1})Pr\Big[\mbox{Say }H_{0}|H_{1}\Big] </math>
 
  
Note that the overall P(err) weights the two error types by the priors <math>P_{\theta}(\theta_i)</math>, so values from the ML estimate (which ignores the priors) cannot be used here.
===Likelihood Ratio Test===
  
 
'''''How to find a good rule?'''''
 
 
--[[User:Khosla|Khosla]] 16:44, 13 December 2008 (UTC)
 
  
For discrete X:
:<math>\ L(x) = \frac{p_{X|\theta} (x|\theta_1)}{p_{X|\theta} (x|\theta_0)} </math>
  
 
Choose a threshold T:
:<math>\mbox{Say } \begin{cases} H_{1}, & \mbox{ if } L(x) > T\\ H_{0}, & \mbox{ if } L(x) < T \end{cases}</math>
  
 
The Maximum Likelihood rule is a Likelihood Ratio Test with T = 1
 
The MAP rule is a Likelihood Ratio Test with <math>T=\frac{P_\theta(\theta_0)}{P_\theta(\theta_1)}</math>
  
 
'''Observations''':
 
#As T decreases, Type I error increases.
#As T decreases, Type II error decreases.
#As T increases, Type I error decreases.
#As T increases, Type II error increases.
(<math>T = 0 \Rightarrow R = \{x|p_{X|\theta}(x|\theta_1) > 0\}</math>. So the Type I error <math>Pr(x\in R | H_0)</math> is maximized as T is minimized.)
The threshold value T = 1 corresponds to the ML rule.
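A Monte Carlo sketch of these observations, reusing the hypothetical Gaussian pair from the ML rule example above (Python with numpy):

<pre>
import numpy as np

rng = np.random.default_rng(4)
x0 = rng.normal(0.0, 1.0, size=200_000)   # observations generated under H0 ~ N(0,1)
x1 = rng.normal(2.0, 1.0, size=200_000)   # observations generated under H1 ~ N(2,1)

def L(x):
    # likelihood ratio f(x|theta_1)/f(x|theta_0) for N(2,1) vs N(0,1)
    return np.exp(-0.5 * (x - 2.0) ** 2 + 0.5 * x ** 2)

for T in (0.5, 1.0, 2.0):          # T = 1 is the ML rule
    type1 = np.mean(L(x0) > T)     # Pr(say H1 | H0)
    type2 = np.mean(L(x1) <= T)    # Pr(say H0 | H1)
    print(T, type1, type2)         # Type I falls and Type II rises as T grows
</pre>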
----
[[Main_Page_ECE302Fall2008sanghavi|Back to ECE302 Fall 2008 Prof. Sanghavi]]
