
Discriminant Functions For The Normal Density


       Let's begin with the continuous univariate normal, or Gaussian, density.

$ f_x = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left [- \frac{1}{2} \left ( \frac{x - \mu}{\sigma} \right)^2 \right ] $


for which the expected value of x is

$ \mu = \mathcal{E}[x] =\int\limits_{-\infty}^{\infty} xp(x)\, dx $

and where the expected squared deviation or variance is

$ \sigma^2 = \mathcal{E}[(x- \mu)^2] =\int\limits_{-\infty}^{\infty} (x- \mu)^2 p(x)\, dx $

       The univariate normal density is completely specified by two parameters: its mean μ and variance σ². The function f_x can be written as N(μ, σ²), which says that x is distributed normally with mean μ and variance σ². Samples from a normal distribution tend to cluster about the mean, with a spread related to the standard deviation σ.
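As a quick numerical sanity check, the density and the clustering behavior just described can be verified directly. The sketch below is a minimal NumPy example with arbitrarily chosen values for μ and σ (they are assumptions for illustration, not values from the text); it codes f_x straight from the formula above and checks that samples concentrate about the mean with spread governed by σ.

```python
import numpy as np

def univariate_normal_pdf(x, mu, sigma):
    """Density f_x of N(mu, sigma^2), written directly from the formula above."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

rng = np.random.default_rng(0)
mu, sigma = 2.0, 1.5          # arbitrary example parameters

# Samples cluster about the mean with spread governed by sigma.
samples = rng.normal(mu, sigma, size=100_000)
print("sample mean     :", samples.mean())   # close to mu
print("sample variance :", samples.var())    # close to sigma**2

# Roughly 68% of the samples fall within one standard deviation of the mean.
print("fraction within one sigma:", np.mean(np.abs(samples - mu) < sigma))

# The density peaks at the mean.
print("f_x at the mean:", univariate_normal_pdf(mu, mu, sigma))
```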

For the multivariate normal density in d dimensions, f_x is written as

$ f_x = \frac{1}{(2 \pi)^ \frac{d}{2} |\boldsymbol{\Sigma}|^\frac{1}{2}} \exp \left [- \frac{1}{2} (\mathbf{x} -\boldsymbol{\mu})^t\boldsymbol{\Sigma}^{-1} (\mathbf{x} -\boldsymbol{\mu}) \right] $

where x is a d-component column vector, μ is the d-component mean vector, Σ is the d-by-d covariance matrix, and |Σ| and Σ⁻¹ are its determinant and inverse, respectively. Also, (x − μ)^t denotes the transpose of (x − μ).
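The multivariate formula translates almost line for line into code. Below is a minimal sketch (NumPy, with illustrative 2-D values for μ and Σ that are assumptions, not taken from the text) that evaluates f_x at a point and, if SciPy is available, cross-checks the result against scipy.stats.multivariate_normal.

```python
import numpy as np

def multivariate_normal_pdf(x, mu, Sigma):
    """Density f_x of a d-dimensional normal, coded from the formula above."""
    d = mu.shape[0]
    diff = x - mu
    # Quadratic form (x - mu)^t Sigma^{-1} (x - mu), computed via a linear
    # solve rather than an explicit inverse for numerical stability.
    quad = diff @ np.linalg.solve(Sigma, diff)
    norm_const = (2.0 * np.pi) ** (d / 2.0) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

# Illustrative 2-D parameters (arbitrary choices).
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = np.array([0.5, -0.5])

print(multivariate_normal_pdf(x, mu, Sigma))

# Optional cross-check against SciPy's implementation.
try:
    from scipy.stats import multivariate_normal
    print(multivariate_normal(mean=mu, cov=Sigma).pdf(x))
except ImportError:
    pass
```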

Analogously to the univariate case, the mean vector and covariance matrix are defined as

$ \boldsymbol{\mu} = \mathcal{E}[\mathbf{x}] = \int \mathbf{x}\, p(\mathbf{x})\, dx $

and

$ \boldsymbol{\Sigma} = \mathcal{E} \left [(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^t \right] = \int(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^t p(\mathbf{x})\, dx $

where the expected value of a vector or a matrix is found by taking the expected value of its individual components, i.e., if x_i is the ith component of x, μ_i the ith component of μ, and σ_ij the ijth component of Σ, then

$ \mu_i = \mathcal{E}[x_i] $

and

$ \sigma_{ij} = \mathcal{E}[(x_i - \mu_i)(x_j - \mu_j)] $
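These componentwise definitions also suggest how μ and Σ are estimated in practice: each expectation is replaced by an average over samples. A short sketch under assumed example parameters (not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 2-D "true" parameters (arbitrary choices).
mu_true = np.array([0.0, 3.0])
Sigma_true = np.array([[1.0, 0.8],
                       [0.8, 2.0]])

# Replace each expectation E[.] by a sample average over n draws.
X = rng.multivariate_normal(mu_true, Sigma_true, size=200_000)   # shape (n, d)

mu_hat = X.mean(axis=0)                          # mu_i = E[x_i], componentwise
centered = X - mu_hat
Sigma_hat = centered.T @ centered / X.shape[0]   # sigma_ij = E[(x_i - mu_i)(x_j - mu_j)]

print("estimated mean vector:\n", mu_hat)            # close to mu_true
print("estimated covariance matrix:\n", Sigma_hat)   # close to Sigma_true
```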
