
Revision as of 19:49, 25 January 2014


ECE Ph.D. Qualifying Exam

Communication, Networking, Signal and Image Processing (CS)

Question 1: Probability and Random Processes

August 2012





Problem 3

$ \color{blue}\text{Solution 1:} $
$ \color{blue}\text{Solution 2:} $

(a)
Since $ \mathbf{Y}_{n} = \min \,\{ \mathbf{X}_1, \mathbf{X}_2, \dots, \mathbf{X}_n \} $, where $ \mathbf{X}_1, \mathbf{X}_2, \dots, \mathbf{X}_n $ are i.i.d. random variables,
we can find the CDF of $ \mathbf{Y}_{n} $:
$ F_{Y_n}(y) = P(\mathbf{Y}_{n} \leq y) = 1-P(\mathbf{Y}_{n} > y) $
$ = 1-P(\mathbf{X}_1 > y, \mathbf{X}_2 > y, \dots, \mathbf{X}_n > y) $
$ = 1-P(\mathbf{X}_1 > y)P(\mathbf{X}_2 > y) \dots P(\mathbf{X}_n > y) \qquad \text{(by independence)} $
$ = 1-(1-F_{X_1}(y))(1-F_{X_2}(y)) \dots (1-F_{X_n}(y))=1-(1-F_{X_1}(y))^n \qquad \text{(identically distributed)} $

Since each $ \mathbf{X}_i $ is uniformly distributed on the interval [0,1]:
For $ y < 0 $, $ F_{Y_n}(y) = 0 $, since $ F_{X_1}(y) = 0 $.
For $ y > 1 $, $ F_{Y_n}(y) = 1 $, since $ F_{X_1}(y) = 1 $.
For $ 0 \leq y \leq 1 $,

$ F_{X_1}(y)= \int_{0}^{y}f_{X_1}(x)\,dx=\int_{0}^{y}\, 1 \, dx = y $

Thus, the CDF of $ \mathbf{Y}_{n} $ is the following
$ F_{Y_n}\left(y\right)=\begin{cases} 0, & y <0 \\ 1-(1-y)^n, & 0 \leq y \leq 1 \\ 1, & y>1 \end{cases} $
Thus, the pdf of $ \mathbf{Y}_{n} $ is
$ f_{Y_n}\left(y\right)=\begin{cases} n(1-y)^{n-1}, & 0 \leq y \leq 1 \\ 0, & \text{elsewhere} \end{cases} $
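The closed-form CDF above can be sanity-checked numerically. The following is a minimal Monte Carlo sketch (an illustration added here, not part of the original solution): draw $n$ i.i.d. uniform samples, take their minimum, and compare the empirical probability $P(Y_n \leq y)$ against $1-(1-y)^n$.

```python
import random

def empirical_cdf_of_min(n, y, trials=200_000, seed=0):
    """Estimate P(min(X_1,...,X_n) <= y) for X_i ~ Uniform[0,1] by simulation."""
    rng = random.Random(seed)
    hits = sum(min(rng.random() for _ in range(n)) <= y for _ in range(trials))
    return hits / trials

n = 5
for y in (0.1, 0.3, 0.5):
    exact = 1 - (1 - y) ** n            # the derived CDF F_{Y_n}(y)
    approx = empirical_cdf_of_min(n, y)  # simulated estimate
    print(f"y={y}: exact={exact:.4f}, simulated={approx:.4f}")
```

With 200,000 trials the simulated values should agree with the closed form to about two or three decimal places.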


(b)
Let $ \mu=E[Y_n] $ and $ \sigma^2=E[|Y_n-\mu|^2] $. By the Chebyshev inequality, we have
$ P(|\mathbf{Y}_{n}-\mu| \geq \varepsilon) \leq \frac{\sigma^2}{\varepsilon^2} = \frac{E[|\mathbf{Y}_{n}-\mu|^2]}{\varepsilon^2} $
Now, we examine
$ E[|\mathbf{Y}_{n}-\mu|^2] = E[Y_n^2-2 \mu Y_n+\mu^2] = E[Y_n^2]-2\mu E[Y_n]+\mu^2=E[Y_n^2]-\mu^2. $

From the pdf of $ Y_n $ found in part (a), we can compute $ E[Y_n] $ and $ E[Y_n^2] $:
$ E[Y_n] = \int_{0}^{1}y f_{Y_n}(y) \,dy = \int_{0}^{1} y n(1-y)^{n-1} \, dy = \frac{1}{n+1} $
$ E[Y_n^2] = \int_{0}^{1}y^2 f_{Y_n}(y) \,dy = \int_{0}^{1} y^2 n(1-y)^{n-1} \, dy = \frac{2}{n^2+3n+2} $
As $ n \to \infty $
$ \lim_{n \to \infty} E[|\mathbf{Y}_{n}-\mu|^2] = \lim_{n \to \infty} \left( E[Y_n^2]-(E[Y_n])^2 \right)= \lim_{n \to \infty}\left( \frac{2}{n^2+3n+2}- (\frac{1}{n+1})^2\right)=0 $
$ \Rightarrow P(|\mathbf{Y}_{n}-\mu| \geq \varepsilon) \leq \frac{E[|\mathbf{Y}_{n}-\mu|^2]}{\varepsilon^2} \to 0 \,\,\text{as} \,\, n \to \infty $
Thus, for any $ \varepsilon>0 $,
$ P(|\mathbf{Y}_{n}-\mu| \geq \varepsilon) \to 0 \,\,\text{as} \,\, n \to \infty, $
i.e., the sequence $ Y_n $ converges in probability to $ \mu $. Since $ \mu = \frac{1}{n+1} \to 0 $, this means $ Y_n $ converges in probability to 0.
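The two moment integrals can be verified exactly rather than numerically. The sketch below (an added illustration, not part of the original solution) expands $n(1-y)^{n-1}$ by the binomial theorem and integrates $y^k$ against it term by term with rational arithmetic, then compares against the closed forms $\frac{1}{n+1}$ and $\frac{2}{(n+1)(n+2)}$.

```python
from fractions import Fraction
from math import comb

def moment_of_min(n, k):
    """Exact E[Y_n^k] for Y_n = min of n i.i.d. Uniform[0,1].

    Uses n(1-y)^(n-1) = sum_j n*C(n-1,j)*(-1)^j * y^j and
    integral of y^(k+j) over [0,1] = 1/(k+j+1).
    """
    return sum(Fraction(n * comb(n - 1, j) * (-1) ** j, k + j + 1)
               for j in range(n))

for n in (2, 5, 10):
    m1, m2 = moment_of_min(n, 1), moment_of_min(n, 2)
    print(n, m1 == Fraction(1, n + 1), m2 == Fraction(2, (n + 1) * (n + 2)))
```

The variance $E[Y_n^2]-\mu^2 = \frac{n}{(n+1)^2(n+2)}$ then visibly shrinks like $1/n^2$, which is what drives the Chebyshev bound to zero.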

(c)
The CDF of $ Y_n $, from part (a), is
$ F_{Y_n}\left(y\right)=\begin{cases} 0, & y <0 \\ 1-(1-y)^n, & 0 \leq y \leq 1 \\ 1, & y>1 \end{cases} $

As $ n \to \infty $
$ F_{Y_n}(y) \to F_{Y}(y)= \begin{cases} 0, & y <0 \\ 1, & y \geq 0 \end{cases} $
For $ 0 < y < 1 $, we have $ 0 < 1-y < 1 $, so $ (1-y)^n \to 0 $ and hence $ 1-(1-y)^n \to 1 $ as $ n \to \infty $.
(At $ y=0 $, the discontinuity point of $ F_Y $, pointwise convergence is not required for convergence in distribution.)
The limiting CDF
$ F_{Y}(y)= \begin{cases} 0, & y <0 \\ 1, & y \geq 0 \end{cases} $
is the unit step function.

Thus, the sequence of random variables $ \{Y_n\} $ with cumulative distribution functions $ F_{Y_n}(y) $ converges in distribution to the random variable $ Y $ with cumulative distribution function $ F_{Y}(y) $, the unit step function.
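The convergence to a step function is easy to see by tabulating the derived CDF for growing $n$. This short sketch (an added illustration, not part of the original solution) evaluates $F_{Y_n}$ at a few points and shows the values collapsing to 0 for $y<0$ and to 1 for $y>0$.

```python
def F_Yn(y, n):
    """CDF of Y_n = min of n i.i.d. Uniform[0,1], as derived in part (a)."""
    if y < 0:
        return 0.0
    if y > 1:
        return 1.0
    return 1 - (1 - y) ** n

# For any fixed y > 0, F_Yn(y, n) -> 1 as n grows; for y < 0 it stays 0.
for n in (1, 10, 100, 1000):
    row = [round(F_Yn(y, n), 4) for y in (-0.5, 0.01, 0.1, 0.5)]
    print(n, row)
```

Even at $y=0.01$ the CDF is essentially 1 once $n$ reaches a few thousand, matching the step-function limit.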
