==7.14 [[ECE_PhD_Qualifying_Exams|QE]] 2008 August==

'''1'''

The Weak Law of Large Numbers states that if <math class="inline">\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3},\cdots</math> is a sequence of i.i.d. random variables with finite mean <math class="inline">E\left[\mathbf{X}_{i}\right]=\mu</math> for every <math class="inline">i</math>, then the sample mean <math class="inline">\mathbf{Y}_{n}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{X}_{i}</math> converges to <math class="inline">\mu</math> in probability. Suppose that instead of being i.i.d., <math class="inline">\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3},\cdots</math> each have finite mean <math class="inline">\mu</math>, and the covariance function of the sequence <math class="inline">\mathbf{X}_{n}</math> is <math class="inline">Cov\left(\mathbf{X}_{i},\mathbf{X}_{j}\right)=\sigma^{2}\rho^{\left|i-j\right|}</math>, where <math class="inline">\left|\rho\right|<1</math> and <math class="inline">\sigma^{2}>0</math>.

a. (13 points)

Find the mean and variance of <math class="inline">\mathbf{Y}_{n}</math>.

<math class="inline">E\left[\mathbf{Y}_{n}\right]=E\left[\frac{1}{n}\sum_{i=1}^{n}\mathbf{X}_{i}\right]=\frac{1}{n}\sum_{i=1}^{n}E\left[\mathbf{X}_{i}\right]=\frac{n\mu}{n}=\mu.</math>

<math class="inline">Cov\left(\mathbf{X}_{i},\mathbf{X}_{j}\right)=E\left[\left(\mathbf{X}_{i}-\mu\right)\left(\mathbf{X}_{j}-\mu\right)\right]=E\left[\mathbf{X}_{i}\mathbf{X}_{j}\right]-\mu^{2}.</math>

<math class="inline">E\left[\mathbf{X}_{i}\mathbf{X}_{j}\right]=\begin{cases}
\sigma^{2}+\mu^{2}, & i=j\\
\sigma^{2}\rho^{\left|i-j\right|}+\mu^{2}, & i\neq j.
\end{cases}</math>

<math class="inline">E\left[\mathbf{Y}_{n}^{2}\right]=E\left[\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbf{X}_{i}\mathbf{X}_{j}\right]=\frac{1}{n^{2}}\left[\sum_{i=1}^{n}E\left[\mathbf{X}_{i}^{2}\right]+\underset{i\neq j}{\sum_{i=1}^{n}\sum_{j=1}^{n}}E\left[\mathbf{X}_{i}\mathbf{X}_{j}\right]\right]=\frac{1}{n^{2}}\left[n\left(\sigma^{2}+\mu^{2}\right)+2\sum_{i=1}^{n-1}\left(n-i\right)\left(\sigma^{2}\rho^{i}+\mu^{2}\right)\right]=\frac{1}{n^{2}}\left[n\sigma^{2}+2\sigma^{2}\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right]+\mu^{2}.</math>

<math class="inline">Var\left[\mathbf{Y}_{n}\right]=E\left[\mathbf{Y}_{n}^{2}\right]-E\left[\mathbf{Y}_{n}\right]^{2}=\frac{1}{n^{2}}\left[n\sigma^{2}+2\sigma^{2}\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right]+\mu^{2}-\mu^{2}=\frac{1}{n^{2}}\left[n\sigma^{2}+2\sigma^{2}\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right].</math>

Note

If the element <math class="inline">m_{ij}</math> of an <math class="inline">n\times n</math> matrix is defined as <math class="inline">\left|i-j\right|</math>, then the matrix is

<math class="inline">\left[\begin{array}{ccccc}
0 & 1 & 2 & \cdots & n-1\\
1 & 0 & 1 & \ddots & \vdots\\
2 & 1 & 0 & \ddots & 2\\
\vdots & \ddots & \ddots & \ddots & 1\\
n-1 & \cdots & 2 & 1 & 0
\end{array}\right].</math>

Each off-diagonal value <math class="inline">i</math> appears <math class="inline">2\left(n-i\right)</math> times in this matrix, which is where the coefficient in the sum above comes from.

b. (12 points)

Does the sample mean still converge to <math class="inline">\mu</math> in probability? You must justify your answer.

ref. You can see the definition of [[ECE 600 Convergence|convergence in probability]].

<math class="inline">\mathbf{Y}_{n}</math> converges to <math class="inline">\mu</math> in probability if, for any <math class="inline">\epsilon>0</math>, <math class="inline">P\left(\left\{ \left|\mathbf{Y}_{n}-\mu\right|>\epsilon\right\} \right)\rightarrow0</math> as <math class="inline">n\rightarrow\infty</math>. According to the [[ECE 600 Chebyshev Inequality|Chebyshev inequality]], <math class="inline">P\left(\left\{ \left|\mathbf{Y}_{n}-\mu\right|>\epsilon\right\} \right)\leq\frac{Var\left[\mathbf{Y}_{n}\right]}{\epsilon^{2}}.</math> Since <math class="inline">\lim_{n\rightarrow\infty}Var\left[\mathbf{Y}_{n}\right]=0</math> (see the bound below), we conclude that <math class="inline">\lim_{n\rightarrow\infty}P\left(\left\{ \left|\mathbf{Y}_{n}-\mu\right|>\epsilon\right\} \right)=0</math>. Thus <math class="inline">\mathbf{Y}_{n}\rightarrow\mu</math> in probability.
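
The claim that the variance vanishes deserves one extra step, which the argument above leaves implicit. Since <math class="inline">\left|\rho\right|<1</math>,

<math class="inline">\left|\sum_{i=1}^{n-1}\left(n-i\right)\rho^{i}\right|\leq n\sum_{i=1}^{\infty}\left|\rho\right|^{i}=\frac{n\left|\rho\right|}{1-\left|\rho\right|},\quad\text{so}\quad Var\left[\mathbf{Y}_{n}\right]\leq\frac{\sigma^{2}}{n}\left(1+\frac{2\left|\rho\right|}{1-\left|\rho\right|}\right)=\frac{\sigma^{2}}{n}\cdot\frac{1+\left|\rho\right|}{1-\left|\rho\right|}\rightarrow0.</math>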

'''2'''

Let <math class="inline">\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3},\cdots</math> be a sequence of i.i.d. Bernoulli random variables with <math class="inline">p=1/2</math>, and let <math class="inline">\mathbf{Y}_{n}=2^{n}\mathbf{X}_{1}\mathbf{X}_{2}\cdots\mathbf{X}_{n}</math>.

a. (15 points)

Does the sequence <math class="inline">\mathbf{Y}_{n}</math> converge to <math class="inline">0</math> almost everywhere?

• You can see the definition of [[ECE 600 Convergence|convergence almost everywhere]].

• range of <math class="inline">\mathbf{X}_{i}=\left\{ 0,1\right\}</math>.

• range of <math class="inline">\mathbf{Y}_{n}=\left\{ 0,2^{n}\right\}</math>.

• <math class="inline">P\left(\mathbf{Y}_{n}=2^{n}\right)=P\left(\left\{ \mathbf{X}_{1}=1\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}=1\right\} \right)=\left(\frac{1}{2}\right)^{n}.</math>

• <math class="inline">\lim_{n\rightarrow\infty}P\left(\mathbf{Y}_{n}=2^{n}\right)=0</math> and <math class="inline">\lim_{n\rightarrow\infty}P\left(\mathbf{Y}_{n}=0\right)=1</math>.

• More precisely, <math class="inline">\mathbf{Y}_{n}\left(\omega\right)</math> fails to converge to <math class="inline">0</math> only if <math class="inline">\mathbf{X}_{i}\left(\omega\right)=1</math> for every <math class="inline">i</math>, an event of probability <math class="inline">\lim_{n\rightarrow\infty}\left(1/2\right)^{n}=0</math>; once some <math class="inline">\mathbf{X}_{i}\left(\omega\right)=0</math>, <math class="inline">\mathbf{Y}_{n}\left(\omega\right)=0</math> for all <math class="inline">n\geq i</math>.

• Thus, <math class="inline">\left\{ \mathbf{Y}_{n}\right\} \rightarrow0</math> with probability 1, i.e. almost everywhere (see the simulation sketch below).
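
A small simulation illustrates this (added here, not part of the original solution): on almost every sample path some <math class="inline">\mathbf{X}_{i}=0</math> occurs, after which the path is stuck at <math class="inline">0</math>.

<pre>
import numpy as np

# Hypothetical illustration (not from the exam): count the fraction of paths
# on which Y_n has not yet collapsed to 0, i.e. all X_i = 1 so far.
rng = np.random.default_rng(0)
n, paths = 60, 100_000

X = rng.integers(0, 2, size=(paths, n))   # i.i.d. Bernoulli(1/2) sequences
all_ones = X.all(axis=1)                  # paths where Y_n = 2^n still holds
print(all_ones.mean())                    # ~ (1/2)^60, i.e. 0 in practice
</pre>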

b. (15 points)

Does the sequence <math class="inline">\mathbf{Y}_{n}</math> converge to <math class="inline">0</math> in the mean square sense?

Note

You can see the definition of [[ECE 600 Convergence|convergence in mean-square]].

<math class="inline">E\left[\mathbf{X}^{2}\right]=\sum_{k=0}^{1}k^{2}\left(\frac{1}{2}\right)=0^{2}\cdot\frac{1}{2}+1^{2}\cdot\frac{1}{2}=\frac{1}{2}.</math>

Since the <math class="inline">\mathbf{X}_{i}</math> are independent,

<math class="inline">E\left[\left|\mathbf{Y}_{n}-0\right|^{2}\right]=E\left[\mathbf{Y}_{n}^{2}\right]=E\left[2^{2n}\mathbf{X}_{1}^{2}\mathbf{X}_{2}^{2}\cdots\mathbf{X}_{n}^{2}\right]=2^{2n}E\left[\mathbf{X}_{1}^{2}\right]\cdots E\left[\mathbf{X}_{n}^{2}\right]=\left(4E\left[\mathbf{X}^{2}\right]\right)^{n}=2^{n}.</math>

<math class="inline">\lim_{n\rightarrow\infty}E\left[\mathbf{Y}_{n}^{2}\right]=\lim_{n\rightarrow\infty}2^{n}=\infty.</math>

Thus, <math class="inline">\mathbf{Y}_{n}</math> does not converge to <math class="inline">0</math> in the mean square sense.
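
A worthwhile added observation, easily checked from the definitions: the first moment does not converge to <math class="inline">0</math> either, since

<math class="inline">E\left[\mathbf{Y}_{n}\right]=2^{n}E\left[\mathbf{X}_{1}\right]\cdots E\left[\mathbf{X}_{n}\right]=2^{n}\left(\frac{1}{2}\right)^{n}=1\quad\text{for every }n,</math>

so this sequence converges to <math class="inline">0</math> almost everywhere while its mean stays fixed at <math class="inline">1</math>: almost-everywhere convergence does not imply convergence of moments.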

'''3'''

Consider a random process <math class="inline">\mathbf{X}\left(t\right)</math> that assumes values <math class="inline">\pm1</math>. Suppose that <math class="inline">\mathbf{X}\left(0\right)=\pm1</math> with probability <math class="inline">1/2</math> each, and suppose that <math class="inline">\mathbf{X}\left(t\right)</math> then changes polarity with each occurrence of an event in a Poisson process of rate <math class="inline">\lambda</math>.

Note:

You might find the equations <math class="inline">\frac{1}{2}\left(e^{x}+e^{-x}\right)=\sum_{j=0}^{\infty}\frac{x^{2j}}{\left(2j\right)!}</math> and <math class="inline">\frac{1}{2}\left(e^{x}-e^{-x}\right)=\sum_{j=0}^{\infty}\frac{x^{2j+1}}{\left(2j+1\right)!}</math> helpful.

a. (15 points)

Find the probability mass function of <math class="inline">\mathbf{X}\left(t\right)</math>.

Let <math class="inline">\mathbf{N}\left(t\right)</math> denote the number of Poisson events in <math class="inline">\left(0,t\right]</math>. Then

<math class="inline">P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=1\right\} \right)=P\left(\left\{ \mathbf{N}\left(t\right)\text{ is even}\right\} \right)=\sum_{n=0}^{\infty}e^{-\lambda t}\frac{\left(\lambda t\right)^{2n}}{\left(2n\right)!}=e^{-\lambda t}\cdot\frac{1}{2}\left(e^{\lambda t}+e^{-\lambda t}\right)=\frac{1}{2}\left(1+e^{-2\lambda t}\right).</math>

<math class="inline">P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=-1\right\} \right)=P\left(\left\{ \mathbf{N}\left(t\right)\text{ is odd}\right\} \right)=\sum_{n=0}^{\infty}e^{-\lambda t}\frac{\left(\lambda t\right)^{2n+1}}{\left(2n+1\right)!}=e^{-\lambda t}\cdot\frac{1}{2}\left(e^{\lambda t}-e^{-\lambda t}\right)=\frac{1}{2}\left(1-e^{-2\lambda t}\right).</math>

<math class="inline">P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} \right)=P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=1\right\} \right)P\left(\left\{ \mathbf{X}\left(0\right)=1\right\} \right)+P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} |\left\{ \mathbf{X}\left(0\right)=-1\right\} \right)P\left(\left\{ \mathbf{X}\left(0\right)=-1\right\} \right)=\frac{1}{2}\left(1+e^{-2\lambda t}\right)\cdot\frac{1}{2}+\frac{1}{2}\left(1-e^{-2\lambda t}\right)\cdot\frac{1}{2}=\frac{1}{2}.</math>

Thus, <math class="inline">P\left(\left\{ \mathbf{X}\left(t\right)=1\right\} \right)=P\left(\left\{ \mathbf{X}\left(t\right)=-1\right\} \right)=\frac{1}{2}</math>.
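
The conditional PMF can be checked by simulation (a sketch added here, not part of the original solution): since <math class="inline">\mathbf{X}\left(t\right)=\mathbf{X}\left(0\right)\left(-1\right)^{\mathbf{N}\left(t\right)}</math> with <math class="inline">\mathbf{N}\left(t\right)\sim\text{Poisson}\left(\lambda t\right)</math>,

<pre>
import numpy as np

# Hypothetical numerical check (not from the exam): start all paths at
# X(0) = +1 and flip the sign N(t) times, N(t) ~ Poisson(lambda * t).
rng = np.random.default_rng(1)
lam, t, trials = 0.7, 1.3, 200_000       # assumed example values

N = rng.poisson(lam * t, size=trials)
X = (-1) ** N                            # X(t) given X(0) = +1
print((X == 1).mean(), 0.5 * (1 + np.exp(-2 * lam * t)))
# Averaging over the +/-1 start (probability 1/2 each) gives P(X(t)=1) = 1/2.
</pre>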

b. (15 points)

Find the autocovariance function of the random process <math class="inline">\mathbf{X}\left(t\right)</math>.

Note

The explanation about the autocovariance function is [[ECE 600 General Concepts of Stochastic Processes Definitions|here]].

<math class="inline">E\left[\mathbf{X}\left(t\right)\right]=1\cdot\frac{1}{2}+\left(-1\right)\cdot\frac{1}{2}=0.</math>

<math class="inline">C_{\mathbf{XX}}\left(t_{1},t_{2}\right)=E\left[\left(\mathbf{X}\left(t_{1}\right)-\mu_{\mathbf{X}}\left(t_{1}\right)\right)\left(\mathbf{X}\left(t_{2}\right)-\mu_{\mathbf{X}}\left(t_{2}\right)\right)^{*}\right]=R_{\mathbf{XX}}\left(t_{1},t_{2}\right)-\mu_{\mathbf{X}}\left(t_{1}\right)\mu_{\mathbf{X}}\left(t_{2}\right)^{*}=R_{\mathbf{XX}}\left(t_{1},t_{2}\right)</math>

<math class="inline">=1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=1\right\} \right)-1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=-1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=1\right\} \right)-1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=-1\right\} \right)+1\cdot P\left(\left\{ \mathbf{X}\left(t_{1}\right)=-1\right\} \cap\left\{ \mathbf{X}\left(t_{2}\right)=-1\right\} \right)</math>

<math class="inline">=P\left(\left\{ \mathbf{X}\left(t_{1}\right)=\mathbf{X}\left(t_{2}\right)\right\} \right)-P\left(\left\{ \mathbf{X}\left(t_{1}\right)\neq\mathbf{X}\left(t_{2}\right)\right\} \right).</math>

Assume <math class="inline">t_{2}\geq t_{1}</math>. Then

<math class="inline">P\left(\left\{ \mathbf{X}\left(t_{1}\right)=\mathbf{X}\left(t_{2}\right)\right\} \right)=P\left(\left\{ \text{the number of events occurring within }\left(t_{1},t_{2}\right]\text{ is even}\right\} \right)=\sum_{n=0}^{\infty}e^{-\lambda\left(t_{2}-t_{1}\right)}\frac{\left[\lambda\left(t_{2}-t_{1}\right)\right]^{2n}}{\left(2n\right)!}=\frac{1}{2}e^{-\lambda\left(t_{2}-t_{1}\right)}\left(e^{\lambda\left(t_{2}-t_{1}\right)}+e^{-\lambda\left(t_{2}-t_{1}\right)}\right)=\frac{1}{2}\left(1+e^{-2\lambda\left(t_{2}-t_{1}\right)}\right).</math>

By the same method, <math class="inline">P\left(\left\{ \mathbf{X}\left(t_{1}\right)\neq\mathbf{X}\left(t_{2}\right)\right\} \right)=\frac{1}{2}\left(1-e^{-2\lambda\left(t_{2}-t_{1}\right)}\right)</math>. Thus,

<math class="inline">C_{\mathbf{XX}}\left(t_{1},t_{2}\right)=\frac{1}{2}\left(1+e^{-2\lambda\left(t_{2}-t_{1}\right)}\right)-\frac{1}{2}\left(1-e^{-2\lambda\left(t_{2}-t_{1}\right)}\right)=e^{-2\lambda\left(t_{2}-t_{1}\right)}.</math>

By symmetry, for an arbitrary ordering of the two time instants, <math class="inline">C_{\mathbf{XX}}\left(t_{1},t_{2}\right)=e^{-2\lambda\left|t_{2}-t_{1}\right|}</math>.

'''4''' (15 points)

Messages arrive at a message center according to a Poisson process of rate <math class="inline">\lambda</math> messages per hour. Every hour the messages that have arrived during the previous hour are forwarded to their destination. Find the expected value of the total time waited by all messages that arrive during the hour.
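
The original page leaves this problem without a worked answer; a sketch of one standard approach is added here, so check it against your own derivation. Given <math class="inline">\mathbf{N}</math> arrivals in the hour, their arrival times are distributed as i.i.d. uniform points on the hour, so each message waits <math class="inline">1-\mathbf{U}_{i}</math> until forwarding, with mean <math class="inline">\frac{1}{2}</math> hour. Hence the total waiting time <math class="inline">\mathbf{T}</math> satisfies

<math class="inline">E\left[\mathbf{T}\right]=E\left[\sum_{i=1}^{\mathbf{N}}\left(1-\mathbf{U}_{i}\right)\right]=E\left[\mathbf{N}\right]\cdot E\left[1-\mathbf{U}\right]=\lambda\cdot\frac{1}{2}=\frac{\lambda}{2}\text{ hours}.</math>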

----

[[ECE600|Back to ECE600]]

[[ECE 600 QE|Back to my ECE 600 QE page]]

[[ECE_PhD_Qualifying_Exams|Back to the general ECE PHD QE page]] (for problem discussion)
