Expected Value of MLE estimate over standard deviation and expected deviation
A slecture by ECE student Zhenpeng Zhao
Partly based on the ECE662 Spring 2014 lecture material of Prof. Mireille Boutin.
1. Motivation
- The MLE estimate most likely converges to the true parameter value as the number of training samples increases (illustrated in the sketch below).
- It is simpler than alternative methods such as Bayesian techniques.
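Below is a minimal sketch of the first point, using a hypothetical one-dimensional Gaussian whose assumed true parameters (mu = 2.0, sigma = 1.5) are made up for illustration: the MLE of the mean and of the standard deviation tends toward the true values as the sample size N grows.

```python
# Minimal sketch: Gaussian MLE converging as N grows.
# The "true" parameters below are hypothetical choices, not from the lecture.
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma_true = 2.0, 1.5  # assumed true parameters (hypothetical)

for N in [10, 100, 1000, 10000]:
    x = rng.normal(mu_true, sigma_true, size=N)
    mu_hat = x.mean()                                # MLE of the mean
    sigma_hat = np.sqrt(((x - mu_hat) ** 2).mean())  # MLE of sigma (1/N, biased)
    print(f"N={N:6d}  mu_hat={mu_hat:.3f}  sigma_hat={sigma_hat:.3f}")
```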
2. Statistical Density Theory Context
- Given c classes and some knowledge about the features $ x \in \mathbb{R}^n $ (or some other space)
- Given training data, $ x_j\sim\rho(x)=\sum\limits_{i=1}^c\rho(x|w_i) Prob(w_i) $, where the class $ w_{i_j} $ of $ x_j $ is known, $ \forall j=1,\ldots,N $ (N hopefully large enough)
- In order to make a decision, we need to estimate
$ \rho(x|w_i) $ and $ Prob(w_i) $ $ \rightarrow $ use Bayes rule,
or $ \rho(x|w_i) $ alone $ \rightarrow $ use the Neyman-Pearson criterion (see the sketch below).
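As an illustration of the estimate-then-decide pipeline above, here is a minimal sketch under a hypothetical two-class, one-dimensional Gaussian assumption: $ Prob(w_i) $ is estimated by the class frequencies, $ \rho(x|w_i) $ by Gaussian MLE, and the decision is made with Bayes rule. All class labels and parameter values below are made up for illustration.

```python
# Sketch: MLE of priors and class-conditional densities, then Bayes rule.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical labeled training data: the class of each sample is known,
# as in the setup above.
x0 = rng.normal(0.0, 1.0, size=300)  # samples drawn from class w_1
x1 = rng.normal(2.5, 1.0, size=100)  # samples drawn from class w_2
data = [x0, x1]
N = sum(len(x) for x in data)

# MLE estimates: Prob(w_i) as class frequency; rho(x|w_i) as a Gaussian with
# sample mean and 1/N sample standard deviation (np.std's default is 1/N).
priors = [len(x) / N for x in data]
params = [(x.mean(), x.std()) for x in data]

def classify(x_new):
    # Bayes rule: choose the class maximizing rho(x|w_i) * Prob(w_i).
    scores = [norm.pdf(x_new, mu, s) * p for (mu, s), p in zip(params, priors)]
    return int(np.argmax(scores))

print(classify(0.2), classify(2.5))  # expected: class 0, then class 1
```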
Questions and comments
If you have any questions, comments, etc. please post them on https://kiwi.ecn.purdue.edu/rhea/index.php/ECE662Selecture_ZHenpengMLE_Ques.