
Discriminant Functions For The Normal Density


Let's begin with the continuous univariate normal, or Gaussian, density.

$ p(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left [- \frac{1}{2} \left ( \frac{x - \mu}{\sigma} \right)^2 \right ] $


for which the expected value of x is

$ \mu = \mathcal{E}[x] =\int\limits_{-\infty}^{\infty} xp(x)\, dx $

and where the expected squared deviation or variance is

$ \sigma^2 = \mathcal{E}[(x- \mu)^2] =\int\limits_{-\infty}^{\infty} (x- \mu)^2 p(x)\, dx $
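
As a quick numerical check of the two expectation integrals above, here is a minimal sketch using SciPy's quadrature routine (the parameter values mu = 2.0 and sigma = 1.5 are arbitrary choices, not from the text):

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 2.0, 1.5  # arbitrary example parameters

def p(x):
    # The univariate normal density defined above
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

# E[x] and E[(x - mu)^2] as integrals over the whole real line
mean, _ = quad(lambda x: x * p(x), -np.inf, np.inf)
var, _ = quad(lambda x: (x - mu) ** 2 * p(x), -np.inf, np.inf)
print(mean, var)  # approximately 2.0 and 2.25 (i.e., mu and sigma**2)
```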

The univariate normal density is completely specified by two parameters: its mean μ and variance σ². The density can be written as p(x) ~ N(μ, σ²), which says that x is distributed normally with mean μ and variance σ². Samples from normal distributions tend to cluster about the mean, with a spread related to the standard deviation σ.
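
This clustering is easy to see empirically; the following sketch draws samples with the same arbitrary parameters as above (the sample size and seed are also arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(2.0, 1.5, size=100_000)  # draws from N(2.0, 1.5**2)
print(samples.mean(), samples.std())  # close to mu = 2.0 and sigma = 1.5
```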

For the multivariate normal density in d dimensions, p(x) is written as

$ p(\mathbf{x}) = \frac{1}{(2 \pi)^{\frac{d}{2}} |\boldsymbol{\Sigma}|^{\frac{1}{2}}} \exp \left [- \frac{1}{2} (\mathbf{x} -\boldsymbol{\mu})^t\boldsymbol{\Sigma}^{-1} (\mathbf{x} -\boldsymbol{\mu}) \right] $

where x is a d-component column vector, μ is the d-component mean vector, Σ is the d-by-d covariance matrix, and |Σ| and Σ⁻¹ are its determinant and inverse, respectively. Also, (x - μ)ᵗ denotes the transpose of (x - μ). Formally, the mean vector and covariance matrix are given by

$ \boldsymbol{\mu} = \mathcal{E}[\mathbf{x}] = \int \mathbf{x}\, p(\mathbf{x})\, d\mathbf{x} $

and

$ \boldsymbol{\Sigma} = \mathcal{E} \left [(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^t \right] = \int(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^t p(\mathbf{x})\, d\mathbf{x} $

where the expected value of a vector or a matrix is found by taking the expected value of its individual components; i.e., if xᵢ is the ith component of x, μᵢ the ith component of μ, and σᵢⱼ the ijth component of Σ, then

$ \mu_i = \mathcal{E}[x_i] $

and

$ \sigma_{ij} = \mathcal{E}[(x_i - \mu_i)(x_j - \mu_j)] $
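
To tie the multivariate definitions together, here is a sketch with an arbitrary 2-dimensional example (the mean vector, covariance matrix, and evaluation point are all invented for illustration; scipy.stats.multivariate_normal is used only as a reference against the hand-coded formula):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1.0, -1.0])                  # example mean vector
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])              # example covariance matrix
d = len(mu)

# Evaluate the d-dimensional density formula at one point
x = np.array([0.5, 0.0])
diff = x - mu
quad_form = diff @ np.linalg.inv(Sigma) @ diff   # (x - mu)^t Sigma^{-1} (x - mu)
p = np.exp(-0.5 * quad_form) / ((2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma)))
assert np.isclose(p, multivariate_normal.pdf(x, mean=mu, cov=Sigma))

# Estimate mu_i = E[x_i] and sigma_ij = E[(x_i - mu_i)(x_j - mu_j)] from samples
X = np.random.default_rng(1).multivariate_normal(mu, Sigma, size=200_000)
mu_hat = X.mean(axis=0)                     # componentwise sample means
D = X - mu_hat
Sigma_hat = D.T @ D / len(X)                # componentwise sample covariances
print(mu_hat)     # close to [1.0, -1.0]
print(Sigma_hat)  # close to Sigma
```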
