= Discriminant Functions For The Normal Density =
  
 
----
 
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Let's begin with the continuous univariate normal, or Gaussian, density:
  
 
<div style="margin-left: 25em;">
 
<math>f_x = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left [- \frac{1}{2} \left ( \frac{x - \mu}{\sigma} \right)^2 \right ] </math>        
</div>
  
 
<br> for which ''the expected value'' of ''x'' is  
  
 
<div style="margin-left: 25em;">
 
<math>\mu = \mathcal{E}[x] =\int\limits_{-\infty}^{\infty} xp(x)\, dx</math>        
</div>
  
 
and where the expected squared deviation, or ''variance'', is
 
  
 
<div style="margin-left: 25em;">
 
<math>\sigma^2 = \mathcal{E}[(x- \mu)^2] =\int\limits_{-\infty}^{\infty} (x- \mu)^2 p(x)\, dx</math>        
</div>
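
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; These three formulas can be checked numerically. The sketch below is not part of the original derivation; it evaluates the density on a grid and approximates ''&mu;'' and ''&sigma;<sup>2</sup>'' with a simple Riemann sum. The function name <code>univariate_normal_pdf</code> and the example values <code>mu = 1.0</code>, <code>sigma = 2.0</code> are illustrative choices, not anything fixed by the text.

<pre>
import numpy as np

def univariate_normal_pdf(x, mu, sigma):
    """Univariate normal (Gaussian) density with mean mu and standard deviation sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

# Illustrative parameters (arbitrary choices).
mu, sigma = 1.0, 2.0

# Grid wide enough to capture essentially all of the probability mass.
x = np.linspace(mu - 10.0 * sigma, mu + 10.0 * sigma, 200001)
p = univariate_normal_pdf(x, mu, sigma)
dx = x[1] - x[0]

# Riemann-sum approximations of the integrals defining mu and sigma^2.
total    = np.sum(p) * dx                        # should be close to 1
mean_est = np.sum(x * p) * dx                    # should be close to mu = 1.0
var_est  = np.sum((x - mean_est) ** 2 * p) * dx  # should be close to sigma^2 = 4.0

print(total, mean_est, var_est)
</pre>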
 
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The univariate normal density is completely specified by two parameters: its mean ''&mu;'' and its variance ''&sigma;<sup>2</sup>''. Eq. 1, f<sub>x</sub>, can therefore be written compactly as ''N(&mu;,&sigma;<sup>2</sup>)''.
  
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; For the multivariate normal density in ''d'' dimensions, f<sub>x</sub> is written as
  
 
<div style="margin-left: 25em;">
 
<math>f_x = \frac{1}{(2 \pi)^ \frac{d}{2} |\boldsymbol{\Sigma}|^\frac{1}{2}} \exp \left [- \frac{1}{2} (\mathbf{x} -\boldsymbol{\mu})^t\boldsymbol{\Sigma}^{-1} (\mathbf{x} -\boldsymbol{\mu}) \right] </math>        
 
</div>
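
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; As a quick sanity check (again, not part of the original text), the multivariate density can be evaluated directly from this formula and compared against <code>scipy.stats.multivariate_normal</code>, assuming SciPy is available. The helper name <code>multivariate_normal_pdf</code> and the 2-D mean and covariance below are arbitrary illustrative values.

<pre>
import numpy as np
from scipy.stats import multivariate_normal  # used only as a cross-check

def multivariate_normal_pdf(x, mu, Sigma):
    """d-dimensional normal density evaluated at a single point x."""
    d = mu.shape[0]
    diff = x - mu
    norm_const = 1.0 / ((2.0 * np.pi) ** (d / 2.0) * np.sqrt(np.linalg.det(Sigma)))
    exponent = -0.5 * diff @ np.linalg.inv(Sigma) @ diff
    return norm_const * np.exp(exponent)

# Arbitrary 2-D example (d = 2).
mu    = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
x     = np.array([0.5, 0.5])

print(multivariate_normal_pdf(x, mu, Sigma))
print(multivariate_normal.pdf(x, mean=mu, cov=Sigma))  # the two values should agree
</pre>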
 
