Help for HW1 ECE662 Spring 2012


{ Summary: To generate "colored" samples $ \tilde{x}\in\mathbb{R}^n \sim \mathcal{N}(\mu,\Sigma) $ from "white" samples $ x $ drawn from $ \mathcal{N}(\vec{0},I_n) $, simply let $ \tilde{x} = Ax + \mu $, where $ A $ is the lower-triangular Cholesky factor of $ \Sigma $, i.e. $ \Sigma = AA^T $. }

Consider generating samples $ \tilde{x}\in\mathbb{R}^n \sim \mathcal{N}(\mu,\Sigma) $. Many platforms (e.g. Matlab) have a random number generator that produces iid samples from the standard (white) Gaussian distribution. If we seek to "color" this noise so that it has an arbitrary covariance matrix $ \Sigma $, we must produce a "coloring matrix" $ A $. Let us consider generating a colored sample $ \tilde{x} = [\tilde{x}_1,\tilde{x}_2,\ldots,\tilde{x}_n]^T $ from $ x = [x_1,x_2,\ldots,x_n]^T $, where $ x_1, x_2, \ldots, x_n $ are iid samples drawn from $ \mathcal{N}(0,1) $. (Note: Matlab has a function, mvnrnd.m, to sample from $ \mathcal{N}(\mu,\Sigma) $, but here I discuss the theory behind it.) Relate $ \tilde{x} $ to $ x $ as follows:

$ \begin{align} \tilde{x}_1 &= a_{11} x_1 \\ \tilde{x}_2 &= a_{21} x_1 + a_{22} x_2 \\ &... \\ \tilde{x}_n &= \sum_{i=1}^n a_{ni}x_i \\ \end{align} $

We can rewrite this in matrix form as $ \tilde{x} = Ax $, where matrix $ A $ is lower triangular. We have, then, that

$ E[\tilde{x}_n] = \sum_{i=1}^n a_{ni}E[x_i] = 0 $, and

$ Cov(\tilde{x}_n,\tilde{x}_m) = E\left[\left(\sum_{i=1}^na_{ni}x_i\right)\left(\sum_{j=1}^m a_{mj}x_j\right)\right] = \sum_{i=1}^n\sum_{j=1}^m a_{ni}a_{mj}E[x_ix_j] \Rightarrow $

$ Cov(\tilde{x}_n,\tilde{x}_m) = \sum_{i=1}^{\min(m,n)}a_{ni}a_{mi} $, since the $ x_i $'s are independent with $ E[x_i] = 0 $ and $ Var[x_i] = 1 $.
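For example, in the case $ n=2 $ this gives

$ Cov(\tilde{x}_1,\tilde{x}_1) = a_{11}^2, \quad Cov(\tilde{x}_2,\tilde{x}_1) = a_{11}a_{21}, \quad Cov(\tilde{x}_2,\tilde{x}_2) = a_{21}^2 + a_{22}^2, $

so the three nonzero entries of the lower-triangular $ A $ are exactly enough to match the three distinct entries of a symmetric $ 2\times 2 $ covariance matrix.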

We are now left with the problem of choosing the $ a_{ni} $'s so that $ Cov(\tilde{x}_n,\tilde{x}_m) $ matches the desired covariance entry $ \Sigma_{nm} $, i.e.

$ \Sigma_{nm} = Cov(\tilde{x}_n,\tilde{x}_m) = \sum_{i=1}^{\min(m,n)}a_{ni}a_{mi} $

$ \Rightarrow \Sigma = AA^T $, where $ A $ is lower triangular. Since $ \Sigma $ is positive definite, such a factorization always exists; it is precisely the Cholesky decomposition of $ \Sigma $, so $ A $ is the Cholesky factor of $ \Sigma $. Finally, adding the mean, $ \tilde{x} = Ax + \mu $, yields samples distributed as $ \mathcal{N}(\mu,\Sigma) $.
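As a minimal Matlab sketch of this recipe (the particular values of mu, Sigma, n, and N below are just illustrative):

 n = 2;                              % dimension
 N = 1000;                           % number of samples
 mu = [1; -1];                       % desired mean vector
 Sigma = [2 0.8; 0.8 1];             % desired covariance (positive definite)
 A = chol(Sigma, 'lower');           % lower-triangular factor, Sigma = A*A'
 x = randn(n, N);                    % white samples ~ N(0, I_n), one per column
 x_tilde = A*x + repmat(mu, 1, N);   % colored samples ~ N(mu, Sigma)
 mean(x_tilde, 2)                    % should approach mu
 cov(x_tilde')                       % should approach Sigma

Samples from the same distribution can be drawn directly with mvnrnd(mu', Sigma, N) (one sample per row); the sketch above just makes the theory behind that function explicit.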


Back to HW1, ECE662, Spring 2012

Back to ECE 662 Spring 2012

Back to "MATLAB Resources for generating Gaussian Data'
