==7.9 [[ECE_PhD_Qualifying_Exams|QE]] 2004 August==
  
 
'''1. (20 pts.)'''
  
A probability space <math class="inline">\left(\mathcal{S},\mathcal{F},\mathcal{P}\right)</math>  has a sample space consisting of all pairs of positive integers: <math class="inline">\mathcal{S}=\left\{ \left(k,m\right):\; k=1,2,\cdots;\; m=1,2,\cdots\right\}</math> . The event space <math class="inline">\mathcal{F}</math>  is the power set of <math class="inline">\mathcal{S}</math> , and the probability measure <math class="inline">\mathcal{P}</math>  is specified by the pmf <math class="inline">p\left(k,m\right)=p^{2}\left(1-p\right)^{k+m-2},\qquad p\in\left(0,1\right)</math>.  
  
 
(a)
  
Find <math class="inline">P\left(\left\{ \left(k,m\right):\; k\geq m\right\} \right)</math> .
  
<math class="inline">P\left(\left\{ \left(k,m\right):\; k\geq m\right\} \right)=\sum_{k=1}^{\infty}\sum_{m=1}^{k}p\left(k,m\right)=\sum_{k=1}^{\infty}\sum_{m=1}^{k}p^{2}\left(1-p\right)^{k+m-2}=\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{k=1}^{\infty}\left(1-p\right)^{k}\sum_{m=1}^{k}\left(1-p\right)^{m}</math><math class="inline">=\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{k=1}^{\infty}\left(1-p\right)^{k}\cdot\frac{\left(1-p\right)\left(1-\left(1-p\right)^{k}\right)}{1-\left(1-p\right)}=\frac{p}{1-p}\cdot\sum_{k=1}^{\infty}\left(1-p\right)^{k}\cdot\left(1-\left(1-p\right)^{k}\right)</math><math class="inline">=\frac{p}{1-p}\cdot\left[\sum_{k=1}^{\infty}\left(1-p\right)^{k}-\sum_{k=1}^{\infty}\left(1-p\right)^{2k}\right]=\frac{p}{1-p}\cdot\left[\frac{1-p}{1-\left(1-p\right)}-\frac{\left(1-p\right)^{2}}{1-\left(1-p\right)^{2}}\right]</math><math class="inline">=\frac{p}{1-p}\cdot\left[\frac{1-p}{p}-\frac{\left(1-p\right)^{2}}{p\left(2-p\right)}\right]=1-\frac{1-p}{2-p}=\frac{2-p-1+p}{2-p}=\frac{1}{2-p}.</math>  
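This closed form is easy to verify numerically. The sketch below (the truncation point `N` and the value `p = 0.3` are arbitrary test choices, not from the exam) sums the pmf directly over the region <math class="inline">k\geq m</math>:

```python
# Brute-force check of part (a): sum p(k, m) over k >= m and compare
# with the closed form 1/(2 - p). N and p are arbitrary test choices;
# the geometric tail beyond N = 500 is negligible.
p = 0.3
N = 500
total = sum(
    p**2 * (1 - p) ** (k + m - 2)
    for k in range(1, N + 1)
    for m in range(1, k + 1)
)
closed_form = 1 / (2 - p)  # the result derived above
```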
  
 
(b)
  
Find <math class="inline">P\left(\left\{ \left(k,m\right):\; k+m=r\right\} \right)</math> , for <math class="inline">r=2,3,\cdots</math> .
  
For fixed <math class="inline">r</math>, the event contains the <math class="inline">r-1</math> pairs <math class="inline">\left(k,r-k\right)</math>, <math class="inline">k=1,\cdots,r-1</math>, each with the same probability:

<math class="inline">P\left(\left\{ \left(k,m\right):\; k+m=r\right\} \right)=\sum_{k=1}^{r-1}p\left(k,r-k\right)=\sum_{k=1}^{r-1}p^{2}\left(1-p\right)^{r-2}=\left(r-1\right)p^{2}\left(1-p\right)^{r-2}.</math>

As a check, these probabilities sum to one over <math class="inline">r=2,3,\cdots</math>: <math class="inline">\sum_{r=2}^{\infty}\left(r-1\right)p^{2}\left(1-p\right)^{r-2}=\frac{p^{2}}{1-p}\cdot\sum_{r=1}^{\infty}r\left(1-p\right)^{r}=\frac{p^{2}}{1-p}\cdot\frac{1-p}{\left(1-\left(1-p\right)\right)^{2}}=1.</math>
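Since the event <math class="inline">\left\{ k+m=r\right\}</math> contains exactly <math class="inline">r-1</math> pairs of equal probability <math class="inline">p^{2}\left(1-p\right)^{r-2}</math>, a minimal numeric check (the values of `p` and `r` below are arbitrary) is to enumerate those pairs directly:

```python
# Check of part (b): enumerate the pairs (k, r-k) with k = 1,...,r-1
# and compare the total with (r-1) * p^2 * (1-p)^(r-2).
p, r = 0.4, 6  # arbitrary test values
direct = sum(p**2 * (1 - p) ** (k + (r - k) - 2) for k in range(1, r))
closed_form = (r - 1) * p**2 * (1 - p) ** (r - 2)
```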
  
 
'''Note'''
  
We use the [[ECE 600 Prerequisites Basic Math|Taylor series]] identity <math class="inline">\sum_{r=1}^{\infty}r\left(1-p\right)^{r}=\frac{1-p}{\left(1-\left(1-p\right)\right)^{2}}</math>.
  
 
'''(c)'''
  
Find <math class="inline">P\left(\left\{ \left(k,m\right):\; k\text{ is an odd number}\right\} \right)</math> .
  
<math class="inline">P\left(\left\{ \left(k,m\right):\; k\text{ is an odd number}\right\} \right)=1-P\left(\left\{ \left(k,m\right):\; k\text{ is an even number}\right\} \right)</math><math class="inline">=1-\sum_{i=1}^{\infty}\sum_{m=1}^{\infty}p\left(2i,m\right)=1-\sum_{i=1}^{\infty}\sum_{m=1}^{\infty}p^{2}\left(1-p\right)^{2i+m-2}</math><math class="inline">=1-\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{i=1}^{\infty}\left(1-p\right)^{2i}\sum_{m=1}^{\infty}\left(1-p\right)^{m}=1-\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{i=1}^{\infty}\left(1-p\right)^{2i}\cdot\frac{1-p}{1-\left(1-p\right)}</math><math class="inline">=1-\frac{p}{1-p}\cdot\sum_{i=1}^{\infty}\left(1-p\right)^{2i}=1-\frac{p}{1-p}\cdot\frac{\left(1-p\right)^{2}}{1-\left(1-p\right)^{2}}=1-\frac{p}{1-p}\cdot\frac{\left(1-p\right)^{2}}{p\left(2-p\right)}</math><math class="inline">=1-\frac{1-p}{2-p}=\frac{2-p-1+p}{2-p}=\frac{1}{2-p}.</math>  
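Because the pmf factors as <math class="inline">p\left(k,m\right)=p\left(1-p\right)^{k-1}\cdot p\left(1-p\right)^{m-1}</math>, the marginal of <math class="inline">k</math> is geometric with parameter <math class="inline">p</math>, so part (c) can also be checked by Monte Carlo (a sketch; the sample size and `p = 0.3` are arbitrary):

```python
import numpy as np

# Monte Carlo check of part (c): the marginal of k is geometric(p)
# with support {1, 2, ...}, so P(k odd) should be close to 1/(2 - p).
rng = np.random.default_rng(0)
p = 0.3
k = rng.geometric(p, size=1_000_000)
est = np.mean(k % 2 == 1)
closed_form = 1 / (2 - p)
```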
  
 
'''2. (20 pts.)'''
  
Let <math class="inline">\mathbf{X}</math>  and <math class="inline">\mathbf{Y}</math>  be two independent identically distributed exponential random variables having mean <math class="inline">\mu</math> . Let <math class="inline">\mathbf{Z}=\mathbf{X}+\mathbf{Y}</math> . Find <math class="inline">f_{\mathbf{X}}\left(x|\mathbf{Z}=z\right)</math> , the conditional pdf of <math class="inline">\mathbf{X}</math>  given the event <math class="inline">\left\{ \mathbf{Z}=z\right\}</math>  .
  
 
'''Note'''
  
This problem is very similar to the [[ECE 600 Exams Addition of two independent Poisson random variables|example]], except that it deals with exponential rather than Poisson random variables.
  
 
'''Solution'''
 
 
By using Bayes' theorem,
  
<math class="inline">f_{\mathbf{X}}\left(x|\mathbf{Z}=z\right)=\frac{f_{\mathbf{XZ}}\left(x,z\right)}{f_{\mathbf{Z}}\left(z\right)}=\frac{f_{\mathbf{Z}}\left(z|\mathbf{X}=x\right)f_{\mathbf{X}}\left(x\right)}{f_{\mathbf{Z}}\left(z\right)}=\frac{f_{\mathbf{Y}}\left(z-x\right)f_{\mathbf{X}}\left(x\right)}{f_{\mathbf{Z}}\left(z\right)}=?</math>  
  
According to the definition of the [[ECE 600 Prerequisites Continuous Random Variables|exponential distribution]], <math class="inline">f_{\mathbf{X}}\left(x\right)=\frac{1}{\mu}e^{-\frac{x}{\mu}}\text{ and }f_{\mathbf{Y}}\left(y\right)=\frac{1}{\mu}e^{-\frac{y}{\mu}}.</math>
  
<math class="inline">\Phi_{\mathbf{X}}\left(\omega\right)=\Phi_{\mathbf{Y}}\left(\omega\right)=\frac{1}{1-i\mu\omega}.</math>  
  
<math class="inline">\Phi_{\mathbf{Z}}\left(\omega\right)=E\left[e^{i\omega\mathbf{Z}}\right]=E\left[e^{i\omega\left(\mathbf{X}+\mathbf{Y}\right)}\right]=E\left[e^{i\omega\mathbf{X}}\right]E\left[e^{i\omega\mathbf{Y}}\right]=\Phi_{\mathbf{X}}\left(\omega\right)\Phi_{\mathbf{Y}}\left(\omega\right)=\frac{1}{\left(1-i\mu\omega\right)^{2}},</math> which is the characteristic function of the Erlang pdf <math class="inline">f_{\mathbf{Z}}\left(z\right)=\frac{z}{\mu^{2}}e^{-\frac{z}{\mu}}\cdot u\left(z\right).</math>

Therefore <math class="inline">f_{\mathbf{X}}\left(x|\mathbf{Z}=z\right)=\frac{f_{\mathbf{Y}}\left(z-x\right)f_{\mathbf{X}}\left(x\right)}{f_{\mathbf{Z}}\left(z\right)}=\frac{\frac{1}{\mu}e^{-\frac{z-x}{\mu}}\cdot\frac{1}{\mu}e^{-\frac{x}{\mu}}}{\frac{z}{\mu^{2}}e^{-\frac{z}{\mu}}}=\frac{1}{z}\cdot\mathbf{1}_{\left[0,z\right]}\left(x\right),</math> i.e. <math class="inline">\mathbf{X}</math> is conditionally uniform on <math class="inline">\left[0,z\right]</math>.
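This conditional uniformity can be seen empirically. The sketch below (all parameter values are arbitrary test choices) conditions on <math class="inline">\mathbf{Z}</math> falling in a thin window around <math class="inline">z</math>; the retained samples of <math class="inline">\mathbf{X}</math> should then look uniform on <math class="inline">\left[0,z\right]</math>:

```python
import numpy as np

# Empirical check of problem 2: given Z = X + Y near z, X should be
# uniform on [0, z], so its conditional mean is z/2 and the fraction
# of samples below z/4 is about 1/4. All parameters are arbitrary.
rng = np.random.default_rng(1)
mu, z, eps, n = 1.0, 2.0, 0.01, 2_000_000
x = rng.exponential(mu, size=n)
y = rng.exponential(mu, size=n)
sel = x[np.abs(x + y - z) < eps]   # samples with Z in a thin window at z
mean_est = sel.mean()              # should be near z/2 = 1.0
frac_est = np.mean(sel < z / 4)    # should be near 0.25
```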
  
 
'''3. (25 pts.)'''
  
Let <math class="inline">\mathbf{X}_{1},\cdots,\mathbf{X}_{n}</math> be independent identically distributed (i.i.d.) random variables uniformly distributed over the interval <math class="inline">\left[0,1\right]</math>.
  
 
'''(a)'''
  
Find the probability density function of <math class="inline">\mathbf{Y}=\max\left\{ \mathbf{X}_{1},\cdots,\mathbf{X}_{n}\right\}</math> .  
  
 
'''Note'''

This problem is almost identical to the [[ECE 600 Exams Sequence of uniformly distributed random variables|example]].
  
 
'''Solution'''
  
<math class="inline">F_{\mathbf{Y}}(y)=P\left(\left\{ \mathbf{Y}\leq y\right\} \right)=P\left(\left\{ \max\left\{ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\right\} \leq y\right\} \right)=P\left(\left\{ \mathbf{X}_{1}\leq y\right\} \cap\left\{ \mathbf{X}_{2}\leq y\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}\leq y\right\} \right)</math><math class="inline">=P\left(\left\{ \mathbf{X}_{1}\leq y\right\} \right)P\left(\left\{ \mathbf{X}_{2}\leq y\right\} \right)\cdots P\left(\left\{ \mathbf{X}_{n}\leq y\right\} \right)=\left(F_{\mathbf{X}}\left(y\right)\right)^{n}</math>  
  
where <math class="inline">f_{\mathbf{X}}(x)=\mathbf{1}_{\left[0,1\right]}(x)</math> and <math class="inline">F_{\mathbf{X}}\left(x\right)=\left\{ \begin{array}{ll}
0 & \quad,\; x<0\\
x & \quad,\;0\leq x<1\\
1 & \quad,\; x\geq1.
\end{array}\right.</math>
  
<math class="inline">f_{\mathbf{Y}}\left(y\right)=\frac{dF_{\mathbf{Y}}\left(y\right)}{dy}=n\left[F_{\mathbf{X}}\left(y\right)\right]^{n-1}\cdot f_{\mathbf{X}}\left(y\right)=n\cdot y^{n-1}\cdot\mathbf{1}_{\left[0,1\right]}(y).</math>
  
 
'''(b)'''
  
Find the probability density function of <math class="inline">\mathbf{Z}=\min\left\{ \mathbf{X}_{1},\cdots,\mathbf{X}_{n}\right\}</math> .  
  
 
'''Solution'''
  
<math class="inline">F_{\mathbf{Z}}(z)=P\left(\left\{ \mathbf{Z}\leq z\right\} \right)=1-P\left(\left\{ \mathbf{Z}>z\right\} \right)=1-P\left(\left\{ \min\left\{ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\right\} >z\right\} \right)</math><math class="inline">=1-P\left(\left\{ \mathbf{X}_{1}>z\right\} \cap\left\{ \mathbf{X}_{2}>z\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}>z\right\} \right)=1-\left(1-F_{\mathbf{X}}(z)\right)^{n}.</math>  
  
<math class="inline">f_{\mathbf{Z}}(z)=\frac{dF_{\mathbf{Z}}(z)}{dz}=n\left(1-F_{\mathbf{X}}(z)\right)^{n-1}\cdot f_{\mathbf{X}}(z)=n\left(1-z\right)^{n-1}\cdot\mathbf{1}_{\left[0,1\right]}\left(z\right).</math>  
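Both CDFs can be sanity-checked by simulation (a sketch; the choice `n = 5`, the evaluation points, and the sample size are arbitrary):

```python
import numpy as np

# Simulation check of problem 3: for n i.i.d. Uniform[0,1] variables,
# F_Y(y) = y^n for the maximum and F_Z(z) = 1 - (1-z)^n for the minimum.
rng = np.random.default_rng(2)
n, trials = 5, 1_000_000
x = rng.random((trials, n))
cdf_max_est = np.mean(x.max(axis=1) <= 0.8)   # theory: 0.8**5
cdf_min_est = np.mean(x.min(axis=1) <= 0.2)   # theory: 1 - 0.8**5
```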
  
 
'''4. (35 pts.)'''
  
Assume that <math class="inline">\mathbf{X}\left(t\right)</math>  is a zero-mean, continuous-time, Gaussian white noise process with autocorrelation function <math class="inline">R_{\mathbf{XX}}\left(t_{1},t_{2}\right)=\frac{N_{0}}{2}\delta\left(t_{1}-t_{2}\right).</math> Let <math class="inline">\mathbf{Y}\left(t\right)</math>  be a new random process defined as the output of a linear time-invariant system with impulse response <math class="inline">h\left(t\right)=\frac{1}{T}e^{-t/T}\cdot u\left(t\right),</math>  where <math class="inline">u\left(t\right)</math>  is the unit step function and <math class="inline">T>0</math> .
  
 
'''(a)'''  
  
What is the mean of <math class="inline">\mathbf{Y}\left(t\right)</math>?
  
<math class="inline">E\left[\mathbf{Y}\left(t\right)\right]=E\left[\int_{-\infty}^{\infty}h\left(\tau\right)\mathbf{X}\left(t-\tau\right)d\tau\right]=\int_{-\infty}^{\infty}h\left(\tau\right)E\left[\mathbf{X}\left(t-\tau\right)\right]d\tau=\int_{-\infty}^{\infty}h\left(\tau\right)\cdot0d\tau=0.</math>  
  
 
'''(b)'''  
  
What is the autocorrelation function of <math class="inline">\mathbf{Y}\left(t\right)</math> ?
  
<math class="inline">S_{\mathbf{XX}}\left(\omega\right)=\int_{-\infty}^{\infty}\frac{N_{0}}{2}\delta\left(\tau\right)e^{-i\omega\tau}d\tau=\frac{N_{0}}{2}.</math>  
  
Let <math class="inline">\alpha=\frac{1}{T}</math> .
  
<math class="inline">H\left(\omega\right)=\int_{-\infty}^{\infty}h\left(t\right)e^{-i\omega t}dt=\int_{0}^{\infty}\alpha e^{-\alpha t}\cdot e^{-i\omega t}dt=\alpha\int_{0}^{\infty}e^{-\left(\alpha+i\omega\right)t}dt=\alpha\frac{e^{-\left(\alpha+i\omega\right)t}}{-\left(\alpha+i\omega\right)}\biggl|_{0}^{\infty}=\frac{\alpha}{\alpha+i\omega}.</math>  
  
<math class="inline">S_{\mathbf{YY}}\left(\omega\right)=S_{\mathbf{XX}}\left(\omega\right)\left|H\left(\omega\right)\right|^{2}=S_{\mathbf{XX}}\left(\omega\right)H\left(\omega\right)H^{*}\left(\omega\right)=\frac{N_{0}}{2}\cdot\frac{\alpha}{\alpha+i\omega}\cdot\frac{\alpha}{\alpha-i\omega}=\frac{\alpha^{2}N_{0}}{2\left(\alpha^{2}+\omega^{2}\right)}.</math>  
  
<math class="inline">S_{\mathbf{YY}}\left(\omega\right)=\frac{\alpha^{2}N_{0}}{2\left(\alpha^{2}+\omega^{2}\right)}=\left(\frac{\alpha N_{0}}{4}\right)\frac{2\alpha}{\alpha^{2}+\omega^{2}}\leftrightarrow\left(\frac{\alpha N_{0}}{4}\right)e^{-\alpha\left|\tau\right|}=R_{\mathbf{YY}}\left(\tau\right).</math>  
  
<math class="inline">\because e^{-\alpha\left|\tau\right|}\leftrightarrow\frac{2\alpha}{\alpha^{2}+\omega^{2}}\text{ (on the table given)}.</math>  
  
<math class="inline">\therefore R_{\mathbf{YY}}\left(\tau\right)=\left(\frac{\alpha N_{0}}{4}\right)e^{-\alpha\left|\tau\right|}=\left(\frac{N_{0}}{4T}\right)e^{-\frac{\left|\tau\right|}{T}}.</math>  
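As a numerical cross-check of part (b) (the parameter values and the integration grid below are arbitrary), the average power <math class="inline">R_{\mathbf{YY}}\left(0\right)=\frac{1}{2\pi}\int_{-\infty}^{\infty}S_{\mathbf{YY}}\left(\omega\right)d\omega</math> should equal <math class="inline">\frac{N_{0}}{4T}</math>:

```python
import numpy as np

# Integrate S_YY(w) = a^2 N0 / (2 (a^2 + w^2)) on a wide grid and compare
# (1/2pi) * integral with the closed form R_YY(0) = N0 / (4 T).
N0, T = 2.0, 0.5     # arbitrary test values
a = 1.0 / T
w = np.linspace(-2000.0, 2000.0, 2_000_001)
S_yy = a**2 * N0 / (2.0 * (a**2 + w**2))
power = S_yy.sum() * (w[1] - w[0]) / (2.0 * np.pi)  # Riemann sum
closed_form = N0 / (4.0 * T)   # = R_YY(0)
```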
  
 
'''(c)'''  
  
Write an expression for the <math class="inline">n</math>-th order characteristic function of <math class="inline">\mathbf{Y}\left(t\right)</math> sampled at times <math class="inline">t_{1},t_{2},\cdots,t_{n}</math>. Simplify as much as possible.
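No solution is recorded here for part (c); a sketch of the standard result: a linear time-invariant filtering of a Gaussian process is Gaussian, so every finite collection of samples of <math class="inline">\mathbf{Y}\left(t\right)</math> is a zero-mean jointly Gaussian vector, and its joint characteristic function is determined entirely by the autocorrelation from part (b):

```latex
\Phi_{\mathbf{Y}(t_{1})\cdots\mathbf{Y}(t_{n})}(\omega_{1},\ldots,\omega_{n})
  = E\left[\exp\left(i\sum_{j=1}^{n}\omega_{j}\,\mathbf{Y}(t_{j})\right)\right]
  = \exp\left(-\frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n}
      \omega_{j}\,\omega_{k}\,R_{\mathbf{YY}}(t_{j}-t_{k})\right),
\qquad
R_{\mathbf{YY}}(\tau)=\frac{N_{0}}{4T}\,e^{-\left|\tau\right|/T}.
```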
  
 
'''(d)'''  
  
Write an expression for the second-order pdf <math class="inline">f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right)</math> of <math class="inline">\mathbf{Y}\left(t\right)</math>. Simplify as much as possible.
  
<math class="inline">\mathbf{Y}\left(t\right)</math> is a WSS Gaussian random process with <math class="inline">E\left[\mathbf{Y}\left(t\right)\right]=0</math> and, from part (b), <math class="inline">\sigma_{\mathbf{Y}\left(t\right)}^{2}=R_{\mathbf{YY}}\left(0\right)=\frac{N_{0}}{4T}</math>.
  
<math class="inline">r_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}=r\left(t_{1}-t_{2}\right)=\frac{C_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{\sqrt{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}}=\frac{R_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{R_{\mathbf{YY}}\left(0\right)}=e^{-\alpha\left|t_{1}-t_{2}\right|}.</math>  
  
<math class="inline">f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right)=\frac{1}{2\pi\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}\sqrt{1-r^{2}}}\exp\left\{ \frac{-1}{2\left(1-r^{2}\right)}\left[\frac{y_{1}^{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}}-\frac{2ry_{1}y_{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}}+\frac{y_{2}^{2}}{\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}\right]\right\} </math><math class="inline">=\frac{1}{2\pi\frac{N_{0}}{4T}\sqrt{1-e^{-2\alpha\left|t_{1}-t_{2}\right|}}}\exp\left\{ \frac{-1}{2\left(1-e^{-2\alpha\left|t_{1}-t_{2}\right|}\right)}\left[\frac{y_{1}^{2}}{\frac{N_{0}}{4T}}-\frac{2y_{1}y_{2}e^{-\alpha\left|t_{1}-t_{2}\right|}}{\frac{N_{0}}{4T}}+\frac{y_{2}^{2}}{\frac{N_{0}}{4T}}\right]\right\} </math><math class="inline">=\frac{2T}{\pi N_{0}\sqrt{1-e^{-2\alpha\left|t_{1}-t_{2}\right|}}}\exp\left\{ \frac{-2T}{N_{0}\left(1-e^{-2\alpha\left|t_{1}-t_{2}\right|}\right)}\left[y_{1}^{2}-2y_{1}y_{2}e^{-\alpha\left|t_{1}-t_{2}\right|}+y_{2}^{2}\right]\right\}</math> .
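A quick numerical check (the grid and the values of <math class="inline">N_{0}</math>, <math class="inline">T</math>, <math class="inline">t_{1}</math>, <math class="inline">t_{2}</math> are arbitrary) that a bivariate Gaussian pdf with correlation <math class="inline">e^{-\left|t_{1}-t_{2}\right|/T}</math> and variance <math class="inline">R_{\mathbf{YY}}\left(0\right)=\frac{N_{0}}{4T}</math> from part (b) integrates to one:

```python
import numpy as np

# Integrate the bivariate Gaussian pdf of part (d) on a grid and check
# that the result is close to 1. N0, T, t1, t2 are arbitrary test values.
N0, T = 2.0, 0.5
r = np.exp(-abs(0.0 - 0.3) / T)    # correlation e^{-|t1 - t2| / T}
s2 = N0 / (4.0 * T)                # common variance R_YY(0)
y = np.linspace(-6.0, 6.0, 801)
Y1, Y2 = np.meshgrid(y, y)
norm = 1.0 / (2.0 * np.pi * s2 * np.sqrt(1.0 - r**2))
quad = (Y1**2 - 2.0 * r * Y1 * Y2 + Y2**2) / (2.0 * s2 * (1.0 - r**2))
pdf = norm * np.exp(-quad)
total = pdf.sum() * (y[1] - y[0]) ** 2   # 2-D Riemann sum
```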
  
 
'''(e)'''  
  
Find the minimum mean-square estimate of <math class="inline">\mathbf{Y}\left(t_{2}\right)</math> given that <math class="inline">\mathbf{Y}\left(t_{1}\right)=y_{1}</math>. Simplify your answer as much as possible.
  
<math class="inline">\widehat{y_{2}}_{MMS}\left(y_{1}\right)=E\left[\mathbf{Y}\left(t_{2}\right)|\mathbf{Y}\left(t_{1}\right)=y_{1}\right]=\int_{-\infty}^{\infty}y_{2}\cdot f_{\mathbf{Y}\left(t_{2}\right)}\left(y_{2}|\mathbf{Y}\left(t_{1}\right)=y_{1}\right)dy_{2}</math>  
  
<math class="inline">\text{where }f_{\mathbf{Y}\left(t_{2}\right)}\left(y_{2}|\mathbf{Y}\left(t_{1}\right)=y_{1}\right)=\frac{f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right)}{f_{\mathbf{Y}\left(t_{1}\right)}\left(y_{1}\right)}.</math>

Since <math class="inline">\mathbf{Y}\left(t_{1}\right)</math> and <math class="inline">\mathbf{Y}\left(t_{2}\right)</math> are zero-mean jointly Gaussian with equal variances, this conditional pdf is Gaussian with mean <math class="inline">ry_{1}</math>, so <math class="inline">\widehat{y_{2}}_{MMS}\left(y_{1}\right)=ry_{1}=e^{-\left|t_{1}-t_{2}\right|/T}y_{1}.</math>
  
 
----
 
[[ECE600|Back to ECE600]]
  
[[ECE 600 QE|Back to my ECE 600 QE page]]
[[ECE_PhD_Qualifying_Exams|Back to the general ECE PHD QE page]] (for problem discussion)

Latest revision as of 08:31, 27 June 2012

7.9 QE 2004 August

1. (20 pts.)

A probability space $ \left(\mathcal{S},\mathcal{F},\mathcal{P}\right) $ has a sample space consisting of all pairs of positive integers: $ \mathcal{S}=\left\{ \left(k,m\right):\; k=1,2,\cdots;\; m=1,2,\cdots\right\} $ . The event space $ \mathcal{F} $ is the power set of $ \mathcal{S} $ , and the probability measure $ \mathcal{P} $ is specified by the pmf $ p\left(k,m\right)=p^{2}\left(1-p\right)^{k+m-2},\qquad p\in\left(0,1\right) $.

(a)

Find $ P\left(\left\{ \left(k,m\right):\; k\geq m\right\} \right) $ .

$ P\left(\left\{ \left(k,m\right):\; k\geq m\right\} \right)=\sum_{k=1}^{\infty}\sum_{m=1}^{k}p\left(k,m\right)=\sum_{k=1}^{\infty}\sum_{m=1}^{k}p^{2}\left(1-p\right)^{k+m-2}=\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{k=1}^{\infty}\left(1-p\right)^{k}\sum_{m=1}^{k}\left(1-p\right)^{m} $$ =\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{k=1}^{\infty}\left(1-p\right)^{k}\cdot\frac{\left(1-p\right)\left(1-\left(1-p\right)^{k}\right)}{1-\left(1-p\right)}=\frac{p}{1-p}\cdot\sum_{k=1}^{\infty}\left(1-p\right)^{k}\cdot\left(1-\left(1-p\right)^{k}\right) $$ =\frac{p}{1-p}\cdot\left[\sum_{k=1}^{\infty}\left(1-p\right)^{k}-\sum_{k=1}^{\infty}\left(1-p\right)^{2k}\right]=\frac{p}{1-p}\cdot\left[\frac{1-p}{1-\left(1-p\right)}-\frac{\left(1-p\right)^{2}}{1-\left(1-p\right)^{2}}\right] $$ =\frac{p}{1-p}\cdot\left[\frac{1-p}{p}-\frac{\left(1-p\right)^{2}}{p\left(2-p\right)}\right]=1-\frac{1-p}{2-p}=\frac{2-p-1+p}{2-p}=\frac{1}{2-p}. $

(b)

Find $ P\left(\left\{ \left(k,m\right):\; k+m=r\right\} \right) $ , for $ r=2,3,\cdots $ .

$ P\left(\left\{ \left(k,m\right):\; k+m=r\right\} \right)=\sum_{r=2}^{\infty}\sum_{k=1}^{r-1}p\left(k,r-k\right)=\sum_{r=2}^{\infty}\sum_{k=1}^{r-1}p^{2}\left(1-p\right)^{r-2} $$ =\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{r=2}^{\infty}\left(r-1\right)\left(1-p\right)^{r}=\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{r=1}^{\infty}r\left(1-p\right)^{r+1} $$ =\frac{p^{2}}{1-p}\cdot\sum_{r=1}^{\infty}r\left(1-p\right)^{r}=\frac{p^{2}}{1-p}\cdot\frac{1-p}{\left(1-\left(1-p\right)\right)^{2}}=1. $

Note

We use Taylor Series: $ \sum_{r=1}^{\infty}r\left(1-p\right)^{r}=\frac{1-p}{\left(1-\left(1-p\right)\right)^{2}} $ .

(c)

Find $ P\left(\left\{ \left(k,m\right):\; k\text{ is an odd number}\right\} \right) $ .

$ P\left(\left\{ \left(k,m\right):\; k\text{ is an odd number}\right\} \right)=1-P\left(\left\{ \left(k,m\right):\; k\text{ is an even number}\right\} \right) $$ =1-\sum_{i=1}^{\infty}\sum_{m=1}^{\infty}p\left(2i,m\right)=1-\sum_{i=1}^{\infty}\sum_{m=1}^{\infty}p^{2}\left(1-p\right)^{2i+m-2} $$ =1-\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{i=1}^{\infty}\left(1-p\right)^{2i}\sum_{m=1}^{\infty}\left(1-p\right)^{m}=1-\frac{p^{2}}{\left(1-p\right)^{2}}\cdot\sum_{i=1}^{\infty}\left(1-p\right)^{2i}\cdot\frac{1-p}{1-\left(1-p\right)} $$ =1-\frac{p}{1-p}\cdot\sum_{i=1}^{\infty}\left(1-p\right)^{2i}=1-\frac{p}{1-p}\cdot\frac{\left(1-p\right)^{2}}{1-\left(1-p\right)^{2}}=1-\frac{p}{1-p}\cdot\frac{\left(1-p\right)^{2}}{p\left(2-p\right)} $$ =1-\frac{1-p}{2-p}=\frac{2-p-1+p}{2-p}=\frac{1}{2-p}. $

2. (20 pts.)

Let $ \mathbf{X} $ and $ \mathbf{Y} $ be two independent identically distributed exponential random variables having mean $ \mu $ . Let $ \mathbf{Z}=\mathbf{X}+\mathbf{Y} $ . Find $ f_{\mathbf{X}}\left(x|\mathbf{Z}=z\right) $ , the conditional pdf of $ \mathbf{X} $ given the event $ \left\{ \mathbf{Z}=z\right\} $ .

Note

This problem is very simlar to the example except that it deals with the exponential random variable rather than the Poisson random variable.

Solution

By using Bayes' theorem,

$ f_{\mathbf{X}}\left(x|\mathbf{Z}=z\right)=\frac{f_{\mathbf{XZ}}\left(x,z\right)}{f_{\mathbf{Z}}\left(z\right)}=\frac{f_{\mathbf{Z}}\left(z|\mathbf{X}=x\right)f_{\mathbf{X}}\left(x\right)}{f_{\mathbf{Z}}\left(z\right)}=\frac{f_{\mathbf{Y}}\left(z-x\right)f_{\mathbf{X}}\left(x\right)}{f_{\mathbf{Z}}\left(z\right)}=? $

Acording to the definition of the exponential distribution, $ f_{\mathbf{X}}\left(x\right)=\frac{1}{\mu}e^{-\frac{x}{\mu}}\text{ and }f_{\mathbf{Y}}\left(y\right)=\frac{1}{\mu}e^{-\frac{y}{\mu}}. $

$ \Phi_{\mathbf{X}}\left(\omega\right)=\Phi_{\mathbf{Y}}\left(\omega\right)=\frac{1}{1-i\mu\omega}. $

$ \Phi_{\mathbf{Z}}\left(\omega\right)=E\left[e^{i\omega\mathbf{Z}}\right]=E\left[e^{i\omega\left(\mathbf{X}+\mathbf{Y}\right)}\right]=E\left[e^{i\omega\mathbf{X}}\right]E\left[e^{i\omega\mathbf{Y}}\right]=\Phi_{\mathbf{X}}\left(\omega\right)\Phi_{\mathbf{Y}}\left(\omega\right)=\frac{1}{1-i\mu\omega}\cdot\frac{1}{1-i\mu\omega}=? $

3. (25 pts.)

Let $ \mathbf{X}_{1},\cdots,\mathbf{X}_{n} $ be independent identically distributed (i.i.d. ) random variables uniformaly distributed over the interval $ \left[0,1\right] $ .

(a)

Find the probability density function of $ \mathbf{Y}=\max\left\{ \mathbf{X}_{1},\cdots,\mathbf{X}_{n}\right\} $ .

Note

This problem is almost identical to the example.

Solution

$ F_{\mathbf{Y}}(y)=P\left(\left\{ \mathbf{Y}\leq y\right\} \right)=P\left(\left\{ \max\left\{ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\right\} \leq y\right\} \right)=P\left(\left\{ \mathbf{X}_{1}\leq y\right\} \cap\left\{ \mathbf{X}_{2}\leq y\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}\leq y\right\} \right) $$ =P\left(\left\{ \mathbf{X}_{1}\leq y\right\} \right)P\left(\left\{ \mathbf{X}_{2}\leq y\right\} \right)\cdots P\left(\left\{ \mathbf{X}_{n}\leq y\right\} \right)=\left(F_{\mathbf{X}}\left(y\right)\right)^{n} $

where $ f_{\mathbf{X}}(x)=\mathbf{1}_{\left[0,1\right]}(x) $ and $ F_{\mathbf{X}}\left(x\right)=\left\{ \begin{array}{ll} 0 & \quad,\; x<0\\ x & \quad,\;0\leq x<1\\ 1 & \quad,\; x\geq1. \end{array}\right. $

$ f_{\mathbf{Y}}\left(y\right)=\frac{dF_{\mathbf{Y}}\left(y\right)}{dy}=n\left[F_{\mathbf{X}}\left(y\right)\right]^{n-1}\cdot f_{\mathbf{X}}\left(y\right)=n\cdot y^{n-1}\cdot\mathbf{1}_{\left[0,1\right]}(y). $
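A Monte Carlo cross-check of $ f_{\mathbf{Y}}\left(y\right)=ny^{n-1} $ on $ \left[0,1\right] $ : its mean is $ \int_{0}^{1}y\cdot ny^{n-1}dy=\frac{n}{n+1} $ , which a simulation (assumed $ n=5 $ ) should reproduce:

```python
import numpy as np

# Simulate the maximum of n i.i.d. Uniform[0, 1] variables and compare the
# sample mean with E[Y] = n / (n + 1) implied by f_Y(y) = n y^(n-1).
rng = np.random.default_rng(0)
n = 5                                          # assumed value of n
samples = rng.random((500_000, n)).max(axis=1)
print(samples.mean())                          # near n / (n + 1) = 5/6
```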

(b)

Find the probability density function of $ \mathbf{Z}=\min\left\{ \mathbf{X}_{1},\cdots,\mathbf{X}_{n}\right\} $ .

Solution

$ F_{\mathbf{Z}}(z)=P\left(\left\{ \mathbf{Z}\leq z\right\} \right)=1-P\left(\left\{ \mathbf{Z}>z\right\} \right)=1-P\left(\left\{ \min\left\{ \mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\right\} >z\right\} \right) $$ =1-P\left(\left\{ \mathbf{X}_{1}>z\right\} \cap\left\{ \mathbf{X}_{2}>z\right\} \cap\cdots\cap\left\{ \mathbf{X}_{n}>z\right\} \right)=1-\left(1-F_{\mathbf{X}}(z)\right)^{n}. $

$ f_{\mathbf{Z}}(z)=\frac{dF_{\mathbf{Z}}(z)}{dz}=n\left(1-F_{\mathbf{X}}(z)\right)^{n-1}\cdot f_{\mathbf{X}}(z)=n\left(1-z\right)^{n-1}\cdot\mathbf{1}_{\left[0,1\right]}\left(z\right). $
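Similarly for the minimum: $ f_{\mathbf{Z}}\left(z\right)=n\left(1-z\right)^{n-1} $ on $ \left[0,1\right] $ gives $ E\left[\mathbf{Z}\right]=\frac{1}{n+1} $ , which simulation (assumed $ n=5 $ ) confirms:

```python
import numpy as np

# Simulate the minimum of n i.i.d. Uniform[0, 1] variables and compare the
# sample mean with E[Z] = 1 / (n + 1) implied by f_Z(z) = n (1 - z)^(n-1).
rng = np.random.default_rng(1)
n = 5                                          # assumed value of n
samples = rng.random((500_000, n)).min(axis=1)
print(samples.mean())                          # near 1 / (n + 1) = 1/6
```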

'''4. (35 pts.)'''

Assume that $ \mathbf{X}\left(t\right) $ is a zero-mean, continuous-time, Gaussian white noise process with autocorrelation function $ R_{\mathbf{XX}}\left(t_{1},t_{2}\right)=\frac{N_{0}}{2}\delta\left(t_{1}-t_{2}\right). $ Let $ \mathbf{Y}\left(t\right) $ be a new random process defined as the output of a linear time-invariant system with impulse response $ h\left(t\right)=\frac{1}{T}e^{-t/T}\cdot u\left(t\right), $ where $ u\left(t\right) $ is the unit step function and $ T>0 $ .

(a)

What is the mean of $ \mathbf{Y}\left(t\right) $ ?

$ E\left[\mathbf{Y}\left(t\right)\right]=E\left[\int_{-\infty}^{\infty}h\left(\tau\right)\mathbf{X}\left(t-\tau\right)d\tau\right]=\int_{-\infty}^{\infty}h\left(\tau\right)E\left[\mathbf{X}\left(t-\tau\right)\right]d\tau=\int_{-\infty}^{\infty}h\left(\tau\right)\cdot0d\tau=0. $

(b)

What is the autocorrelation function of $ \mathbf{Y}\left(t\right) $ ?

$ S_{\mathbf{XX}}\left(\omega\right)=\int_{-\infty}^{\infty}\frac{N_{0}}{2}\delta\left(\tau\right)e^{-i\omega\tau}d\tau=\frac{N_{0}}{2}. $

Let $ \alpha=\frac{1}{T} $ .

$ H\left(\omega\right)=\int_{-\infty}^{\infty}h\left(t\right)e^{-i\omega t}dt=\int_{0}^{\infty}\alpha e^{-\alpha t}\cdot e^{-i\omega t}dt=\alpha\int_{0}^{\infty}e^{-\left(\alpha+i\omega\right)t}dt=\alpha\frac{e^{-\left(\alpha+i\omega\right)t}}{-\left(\alpha+i\omega\right)}\biggl|_{0}^{\infty}=\frac{\alpha}{\alpha+i\omega}. $

$ S_{\mathbf{YY}}\left(\omega\right)=S_{\mathbf{XX}}\left(\omega\right)\left|H\left(\omega\right)\right|^{2}=S_{\mathbf{XX}}\left(\omega\right)H\left(\omega\right)H^{*}\left(\omega\right)=\frac{N_{0}}{2}\cdot\frac{\alpha}{\alpha+i\omega}\cdot\frac{\alpha}{\alpha-i\omega}=\frac{\alpha^{2}N_{0}}{2\left(\alpha^{2}+\omega^{2}\right)}. $

$ S_{\mathbf{YY}}\left(\omega\right)=\frac{\alpha^{2}N_{0}}{2\left(\alpha^{2}+\omega^{2}\right)}=\left(\frac{\alpha N_{0}}{4}\right)\frac{2\alpha}{\alpha^{2}+\omega^{2}}\leftrightarrow\left(\frac{\alpha N_{0}}{4}\right)e^{-\alpha\left|\tau\right|}=R_{\mathbf{YY}}\left(\tau\right). $

$ \because e^{-\alpha\left|\tau\right|}\leftrightarrow\frac{2\alpha}{\alpha^{2}+\omega^{2}}\text{ (from the given Fourier transform table)}. $

$ \therefore R_{\mathbf{YY}}\left(\tau\right)=\left(\frac{\alpha N_{0}}{4}\right)e^{-\alpha\left|\tau\right|}=\left(\frac{N_{0}}{4T}\right)e^{-\frac{\left|\tau\right|}{T}}. $
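The transform pair $ e^{-\alpha\left|\tau\right|}\leftrightarrow\frac{2\alpha}{\alpha^{2}+\omega^{2}} $ can also be verified numerically; a Python sketch with assumed values $ \alpha=2 $ and $ \omega=1.5 $ (the grid spacing and truncation are likewise assumptions):

```python
import numpy as np

# Numerically integrate e^(-a|tau|) e^(-i w tau) over a truncated grid;
# by symmetry only the cosine (real) part survives.  Compare against the
# closed form 2a / (a^2 + w^2).
a, w = 2.0, 1.5                                # assumed alpha and omega
tau = np.linspace(-20.0, 20.0, 400_001)        # e^(-40) truncation error is negligible
integrand = np.exp(-a * np.abs(tau)) * np.cos(w * tau)
numeric = np.sum(integrand) * (tau[1] - tau[0])
print(numeric, 2 * a / (a**2 + w**2))          # both near 0.64
```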

(c)

Write an expression for the $ n $ -th order characteristic function of $ \mathbf{Y}\left(t\right) $ sampled at times $ t_{1},t_{2},\cdots,t_{n} $ . Simplify as much as possible.
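Since $ \mathbf{Y}\left(t\right) $ is a zero-mean Gaussian random process, the samples $ \mathbf{Y}\left(t_{1}\right),\cdots,\mathbf{Y}\left(t_{n}\right) $ are zero-mean jointly Gaussian, and one standard way to write the $ n $ -th order characteristic function is

$ \Phi_{\mathbf{Y}\left(t_{1}\right)\cdots\mathbf{Y}\left(t_{n}\right)}\left(\omega_{1},\cdots,\omega_{n}\right)=\exp\left\{ -\frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n}\omega_{j}\omega_{k}R_{\mathbf{YY}}\left(t_{j}-t_{k}\right)\right\} , $

where $ R_{\mathbf{YY}}\left(\tau\right)=\frac{N_{0}}{4T}e^{-\frac{\left|\tau\right|}{T}} $ from part (b).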

(d)

Write an expression for the second-order pdf $ f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right) $ of $ \mathbf{Y}\left(t\right) $ . Simplify as much as possible.

$ \mathbf{Y}\left(t\right) $ is a WSS Gaussian random process with $ E\left[\mathbf{Y}\left(t\right)\right]=0 $ and $ \sigma_{\mathbf{Y}\left(t\right)}^{2}=R_{\mathbf{YY}}\left(0\right)=\frac{\alpha N_{0}}{4}=\frac{N_{0}}{4T} $ .

$ r_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}=r\left(t_{1}-t_{2}\right)=\frac{C_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{\sqrt{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}}=\frac{R_{\mathbf{YY}}\left(t_{1}-t_{2}\right)}{R_{\mathbf{YY}}\left(0\right)}=e^{-\alpha\left|t_{1}-t_{2}\right|}. $

$ f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right)=\frac{1}{2\pi\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}\sqrt{1-r^{2}}}\exp\left\{ \frac{-1}{2\left(1-r^{2}\right)}\left[\frac{y_{1}^{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}^{2}}-\frac{2ry_{1}y_{2}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}\sigma_{\mathbf{Y}\left(t_{2}\right)}}+\frac{y_{2}^{2}}{\sigma_{\mathbf{Y}\left(t_{2}\right)}^{2}}\right]\right\} $$ =\frac{1}{2\pi\frac{N_{0}}{4T}\sqrt{1-e^{-2\alpha\left|t_{1}-t_{2}\right|}}}\exp\left\{ \frac{-1}{2\left(1-e^{-2\alpha\left|t_{1}-t_{2}\right|}\right)}\left[\frac{y_{1}^{2}-2y_{1}y_{2}e^{-\alpha\left|t_{1}-t_{2}\right|}+y_{2}^{2}}{N_{0}/\left(4T\right)}\right]\right\} $$ =\frac{2T}{\pi N_{0}\sqrt{1-e^{-2\alpha\left|t_{1}-t_{2}\right|}}}\exp\left\{ \frac{-2T}{N_{0}\left(1-e^{-2\alpha\left|t_{1}-t_{2}\right|}\right)}\left[y_{1}^{2}-2y_{1}y_{2}e^{-\alpha\left|t_{1}-t_{2}\right|}+y_{2}^{2}\right]\right\} $ .

(e)

Find the minimum mean-square estimate of $ \mathbf{Y}\left(t_{2}\right) $ given that $ \mathbf{Y}\left(t_{1}\right)=y_{1} $ . Simplify your answer as much as possible.

$ \widehat{y_{2}}_{MMS}\left(y_{1}\right)=E\left[\mathbf{Y}\left(t_{2}\right)|\mathbf{Y}\left(t_{1}\right)=y_{1}\right]=\int_{-\infty}^{\infty}y_{2}\cdot f_{\mathbf{Y}\left(t_{2}\right)}\left(y_{2}|\mathbf{Y}\left(t_{1}\right)=y_{1}\right)dy_{2} $

$ \text{where }f_{\mathbf{Y}\left(t_{2}\right)}\left(y_{2}|\mathbf{Y}\left(t_{1}\right)=y_{1}\right)=\frac{f_{\mathbf{Y}\left(t_{1}\right)\mathbf{Y}\left(t_{2}\right)}\left(y_{1},y_{2}\right)}{f_{\mathbf{Y}\left(t_{1}\right)}\left(y_{1}\right)}. $

Since $ \mathbf{Y}\left(t_{1}\right) $ and $ \mathbf{Y}\left(t_{2}\right) $ are zero-mean jointly Gaussian with equal variances, the conditional mean is linear in $ y_{1} $ :

$ \widehat{y_{2}}_{MMS}\left(y_{1}\right)=r\cdot\frac{\sigma_{\mathbf{Y}\left(t_{2}\right)}}{\sigma_{\mathbf{Y}\left(t_{1}\right)}}\cdot y_{1}=e^{-\alpha\left|t_{1}-t_{2}\right|}y_{1}=e^{-\frac{\left|t_{1}-t_{2}\right|}{T}}y_{1}. $
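The linear form of the Gaussian conditional mean can be checked by simulation: for zero-mean jointly Gaussian variables with equal variances and correlation coefficient $ r $ , the least-squares slope of $ y_{2} $ on $ y_{1} $ recovers $ r $ . A sketch with an assumed correlation value:

```python
import numpy as np

# For zero-mean jointly Gaussian (Y1, Y2) with equal variances and
# correlation r, E[Y2 | Y1 = y1] = r * y1; the regression slope of y2
# on y1 (no intercept, zero means) should therefore come out near r.
rng = np.random.default_rng(0)
r = np.exp(-0.8)                               # assumed e^(-alpha |t1 - t2|)
cov = [[1.0, r], [r, 1.0]]
y1, y2 = rng.multivariate_normal([0.0, 0.0], cov, 1_000_000).T
slope = np.sum(y1 * y2) / np.sum(y1 * y1)
print(slope, r)
```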

