
`BPE - Bayesian Parameter Estimation from Lecture 7 <https://engineering.purdue.edu/people/mireille.boutin.1/ECE301kiwi/Lecture7>`_

BPE FOR MULTIVARIATE GAUSSIAN:

- Estimation of mean, given a known covariance

Consider a set of iid samples $ \{X_i\}_{i=1}^N $, where $ X_i \in\mathbb{R}^n $ is such that $ X_i \sim N(\mu,\Sigma) $. Suppose we know $ \Sigma $, but wish to estimate $ \mu $ using BPE. If we assume a Gaussian prior distribution for the unknown mean, then the posterior distribution for the mean is also Gaussian, i.e. $ p(\mu|X_1,X_2,\ldots,X_N) = N(\mu_N,\Sigma_N) $, where $ \mu_N $ and $ \Sigma_N $ are calculated to utilize both our prior knowledge of $ \mu $ and the samples $ \{X_i\}_{i=1}^N $. Fukunaga p. 391 derives that the parameters $ \mu_N $ and $ \Sigma_N $ are calculated as follows:

$ \mu_N = \frac{\Sigma}{N}\left(\Sigma_\mu + \frac{\Sigma}{N}\right)^{-1}\mu_0 + \Sigma_\mu\left(\Sigma_\mu + \frac{\Sigma}{N}\right)^{-1}\left(\frac1N\sum_{i=1}^N X_i\right) $,

where $ \mu_0 $ is the initial "guess" for the mean $ \mu $, and $ \Sigma_\mu $ is the "confidence" in that guess. In other words, $ N(\mu_0,\Sigma_\mu) $ is the prior distribution for $ \mu $ that we would assume without seeing any samples. For the covariance parameter, we have

$ \Sigma_N = \Sigma_\mu\left(\Sigma_\mu+\frac{\Sigma}{N}\right)^{-1}\frac{\Sigma}{N} $.
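As an illustration, both updates can be computed directly from these formulas. Below is a minimal numerical sketch, assuming NumPy, where the rows of X are the samples; the helper name bpe_mean is ours, not Fukunaga's::

    import numpy as np

    def bpe_mean(X, Sigma, mu0, Sigma_mu):
        # X: (N, n) array of iid samples; Sigma: known covariance (n x n)
        # mu0: prior guess for the mean; Sigma_mu: "confidence" in that guess
        N, n = X.shape
        xbar = X.mean(axis=0)               # (1/N) * sum_i X_i
        S = Sigma / N                       # Sigma / N
        A = np.linalg.inv(Sigma_mu + S)     # (Sigma_mu + Sigma/N)^(-1)
        mu_N = S @ A @ mu0 + Sigma_mu @ A @ xbar
        Sigma_N = Sigma_mu @ A @ S
        return mu_N, Sigma_N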

We find that as the number of samples increases, the effect of the prior knowledge ($ \mu_0 $, $ \Sigma_\mu $) decreases, so that

$ \lim_{N\rightarrow\infty}\mu_N = \frac1N\sum_{i=1}^N X_i $ (the sample mean), and $ \lim_{N\rightarrow\infty}\Sigma_N = 0 $.
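As a quick numerical check of this limiting behavior (reusing the bpe_mean sketch above; the particular numbers are arbitrary), one can watch the posterior forget the prior as N grows::

    rng = np.random.default_rng(0)
    mu_true = np.array([1.0, -2.0])
    Sigma = np.array([[2.0, 0.3],
                      [0.3, 1.0]])
    mu0 = np.zeros(2)      # a deliberately wrong prior guess
    Sigma_mu = np.eye(2)   # moderate confidence in that guess
    for N in (5, 50, 5000):
        X = rng.multivariate_normal(mu_true, Sigma, size=N)
        mu_N, Sigma_N = bpe_mean(X, Sigma, mu0, Sigma_mu)
        # mu_N approaches the sample mean, and Sigma_N shrinks toward 0
        print(N, mu_N - X.mean(axis=0), np.trace(Sigma_N))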

- Estimation of covariance, given a known mean

Again, given iid samples $ \{X_i\}_{i=1}^N $, $ X_i \in\mathbb{R}^n $, $ X_i \sim N(\mu,\Sigma) $, let us now estimate $ \Sigma $ with $ \mu $ known. As in Fukunaga p. 392, we assume that the conditional density of the samples given $ \Sigma $ is normal (i.e. $ p(X|\Sigma) = N(\mu,\Sigma) $), and it can be shown that the sample covariance matrix follows a Wishart distribution. Fukunaga p. 392 gives the distribution $ p(K|\Sigma_0,N_0) $, where $ K = \Sigma^{-1} $, the parameter $ \Sigma_0 $ represents the initial "guess" for $ \Sigma $, and $ N_0 $ represents how many samples were used to compute $ \Sigma_0 $. Note that we compute the distribution for $ K = \Sigma^{-1} $ instead of $ \Sigma $ directly, since the inverse covariance matrix is what appears in the definition of the normal density. It can be shown, then, that

$ p(K|\Sigma_0,N_0) = c(n,N_0)\left|\frac12 N_0\Sigma_0\right|^{(N_0-1)/2}|K|^{(N_0-n-2)/2}\exp\left(-\frac12\mathrm{trace}(N_0\Sigma_0 K)\right) $,

where $ c(n,N_0) = \left\{\pi^{n(n-1)/4}\prod_{i=1}^n\Gamma\left(\frac{N_0-i}{2}\right)\right\}^{-1} $.
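To make the density concrete, here is a minimal sketch, assuming NumPy and SciPy, that evaluates its logarithm directly from the two expressions above (the name log_p_K is illustrative); working with log-determinants avoids overflow for large $ N_0 $, and $ N_0 > n $ is required so every Gamma argument in $ c(n,N_0) $ is positive::

    import numpy as np
    from scipy.special import gammaln

    def log_p_K(K, Sigma0, N0):
        # log p(K | Sigma0, N0) for K = Sigma^{-1}, per the formula above
        n = K.shape[0]
        # log c(n, N0) = -[ n(n-1)/4 * log(pi) + sum_i log Gamma((N0 - i)/2) ]
        log_c = -(n * (n - 1) / 4.0) * np.log(np.pi) \
                - sum(gammaln((N0 - i) / 2.0) for i in range(1, n + 1))
        _, logdet_scale = np.linalg.slogdet(0.5 * N0 * Sigma0)  # log |(1/2) N0 Sigma0|
        _, logdet_K = np.linalg.slogdet(K)                      # log |K|
        return (log_c
                + (N0 - 1) / 2.0 * logdet_scale
                + (N0 - n - 2) / 2.0 * logdet_K
                - 0.5 * np.trace(N0 * Sigma0 @ K))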

