I, for one, did not know how to do this, so I looked into it so that now you don't have to.
 
 
 

{ Summary: To generate "colored" samples $ \tilde{x}\in\mathbb{R}^n \sim \mathcal{N}(\mu,\Sigma) $ from "white" samples $ x $ drawn from $ \mathcal{N}(\vec{0},I_n) $, simply let $ \tilde{x} = Ax + \mu $, where $ A $ is the Cholesky decomposition of $ \Sigma $, i.e. $ \Sigma = AA^T $}

Consider generating samples $ \tilde{x}\in\mathbb{R}^n \sim \mathcal{N}(\mu,\Sigma) $. Many platforms (e.g. Matlab) have a random number generator that produces iid samples from a (white) Gaussian distribution. If we seek to "color" the noise with an arbitrary covariance matrix $ \Sigma $, we must produce a "coloring matrix" $ A $. Let us consider generating a colored sample $ \tilde{x} = [\tilde{x}_1,\tilde{x}_2,\ldots,\tilde{x}_n]^T $ from $ x = [x_1,x_2,\ldots,x_n]^T $, where $ x_1, x_2, \ldots, x_n $ are iid samples drawn from $ \mathcal{N}(0,1) $. (Note: Matlab has a function, mvnrnd.m, to sample from $ \mathcal{N}(\mu,\Sigma) $ directly, but here I discuss the theory behind it.) Relate $ \tilde{x} $ to $ x $ as follows:

$ \tilde{x}_1 = a_{11} x_1, $

$ \tilde{x}_2 = a_{21} x_1 + a_{22} x_2,\ldots $

$ \tilde{x}_n = \sum_{i=1}^n a_{ni}x_i $.

We can rewrite this in matrix form as $ \tilde{x} = Ax $, where matrix $ A $ is lower triangular. We have, then, that

$ E[\tilde{x}_n] = \sum_{i=1}^n a_{ni}E[x_i] = 0 $, and

$ Cov(\tilde{x}_n,\tilde{x}_m) = E\left[\left(\sum_{i=1}^n a_{ni}x_i\right)\left(\sum_{j=1}^m a_{mj}x_j\right)\right] = \sum_{i=1}^n\sum_{j=1}^m a_{ni}a_{mj}E[x_ix_j] $

$ \Rightarrow Cov(\tilde{x}_n,\tilde{x}_m) = \sum_{i=1}^{\min(m,n)}a_{ni}a_{mi} $,

since the $ x_i $'s are independent, $ E[x_i] = 0 $ and $ Var[x_i] = 1 $.

We are now left with the problem of defining the $ a_{ni} $'s so that $ Cov(\tilde{x}_n,\tilde{x}_m) $ matches the desired $ \Sigma_{nm} $, i.e.

$ \Sigma_{nm} = Cov(\tilde{x}_n,\tilde{x}_m) = \sum_{i=1}^{\min(m,n)}a_{ni}a_{mi} $

$ \Rightarrow \Sigma = AA^T $, where $ A $ is lower triangular and $ \Sigma $ is positive definite (for lower-triangular $ A $, the sum $ \sum_{i=1}^{\min(m,n)}a_{ni}a_{mi} $ is exactly the $ (n,m) $ entry of $ AA^T $). Therefore, $ A $ is precisely what is called the Cholesky decomposition of $ \Sigma $.
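For a concrete illustration, consider the bivariate case with variances $ \sigma_1^2 $, $ \sigma_2^2 $ and correlation coefficient $ \rho $, for which the Cholesky factor can be written out by hand:

$ \Sigma = \begin{bmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}, \qquad A = \begin{bmatrix} \sigma_1 & 0 \\ \rho\sigma_2 & \sigma_2\sqrt{1-\rho^2} \end{bmatrix}, $

and one can check directly that $ AA^T = \Sigma $. The colored components are then $ \tilde{x}_1 = \sigma_1 x_1 $ and $ \tilde{x}_2 = \rho\sigma_2 x_1 + \sigma_2\sqrt{1-\rho^2}\,x_2 $, which indeed have variances $ \sigma_1^2 $, $ \sigma_2^2 $ and correlation $ \rho $.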

Thus, to summarize, to generate samples $ \tilde{x}\in\mathbb{R}^n \sim \mathcal{N}(\mu,\Sigma) $ from samples $ x $ drawn from $ \mathcal{N}(\vec{0},I_n) $, simply let $ \tilde{x} = Ax + \mu $, where $ A $ is the Cholesky decomposition of $ \Sigma $.
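As a quick sketch of how this looks in Matlab (the values of $ \mu $ and $ \Sigma $ below are just made-up examples), the whole procedure is only a few lines:

<pre>
% Example target distribution (made-up values; Sigma must be positive definite)
mu    = [1; 2; 3];
Sigma = [4 1 0; 1 3 1; 0 1 2];
n     = length(mu);

% chol returns an upper-triangular R with R'*R = Sigma,
% so its transpose is a lower-triangular A with A*A' = Sigma
A = chol(Sigma)';

x      = randn(n, 1);   % white sample:   x      ~ N(0, I_n)
xtilde = A*x + mu;      % colored sample: xtilde ~ N(mu, Sigma)

% Equivalent one-liner using the built-in function mentioned above:
% xtilde = mvnrnd(mu', Sigma)';
</pre>

Drawing many such samples and comparing cov() of the results with Sigma is an easy way to sanity-check the code.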

