ECE Ph.D. Qualifying Exam
Latest revision as of 09:16, 15 August 2014
Communication, Networking, Signal and Image Processing (CS)
Question 1: Probability and Random Processes
August 2012
Jump to Problem 2, 3
Problem 3
Problem statement:
Let $ \mathbf{X}_{1} \dots \mathbf{X}_{n} \dots $ be a sequence of independent, identically distributed random variables, each uniformly distributed on the interval [0, 1], and hence having pdf
$ f_{X}\left(x\right)=\begin{cases} \begin{array}{lll} 1, \text{ for } 0 \leq x \leq1\\ 0, \text{ elsewhere. } \end{array}\end{cases} $
Let $ \mathbf{Y}_{n} $ be a new random variable defined by
$ \mathbf{Y}_{n} = min \,\{{ \mathbf{X}_1, \mathbf{X}_2, \dots \mathbf{X}_n} \} $
(a) Find the pdf of $ \mathbf{Y}_{n} $.
(b) Does the sequence $ \mathbf{Y}_{n} $ converge in probability?
(c) Does the sequence $ \mathbf{Y}_{n} $ converge in distribution? If yes, specify the cumulative distribution function of the random variable it converges to.
$ \color{blue}\text{Solution 1:} $
$ f_{X}(x) = 1_{[0,1]}(x) = \left\{ \begin{array}{ll} 1 & \quad 0 \leq x \leq 1 \\ 0 & \quad elsewhere \end{array} \right. $
$ F_{X}(x) = \int_0^x f_{X}(\alpha)d\alpha = \int_0^x d\alpha = x $
$ Y_{n} = min\{X_{1}, \dots, X_{n}\} $
(a)
$ \; \; \; F_{Y_{n}}(y) = P(\{Y \leq y\}) $
$ = P(\{min\{X_{1}, ..., X_{n}\} \leq y\}) $
$ = 1 - P(\{X_{1} > y\}, \dots, \{X_{n} > y\}) $
$ = 1 - P(\{X_{1} > y\})P(\{X_{2} > y\}) \dots P(\{X_{n} > y\}) $
$ = 1 - (1-y)^{n} $
$ f_{Y_{n}}(y) = \frac{dF_{Y}(y)}{dy} = n(1-y)^{n-1} 1_{[0, 1]}(y) $
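As a quick sanity check (a hypothetical Monte Carlo sketch, not part of the original solution), the derived CDF $ F_{Y_n}(y) = 1 - (1-y)^n $ can be verified empirically; the values of n, y, and the trial count below are arbitrary choices for illustration.

```python
import random

# Hypothetical sanity check: empirical CDF of Y_n = min(X_1,...,X_n)
# versus the derived F_{Y_n}(y) = 1 - (1 - y)^n, with X_i ~ Uniform(0,1).
random.seed(0)
n, y, trials = 5, 0.2, 200_000

hits = sum(
    min(random.random() for _ in range(n)) <= y
    for _ in range(trials)
)
empirical = hits / trials
theoretical = 1 - (1 - y) ** n  # 1 - 0.8**5 = 0.67232

assert abs(empirical - theoretical) < 0.01
```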
(b)
Yes. It converges in probability.
A random variable converges in probability if
$ P\{\mid Y_{n} - a \mid > \varepsilon\} \rightarrow 0 \text{ as } n \rightarrow \infty \text{ for any } \varepsilon > 0 $
Then the random variable is said to converge to a.
Consider
$ P\{\mid Y_{n}-0 \mid > \varepsilon\} = P\{Y_{n}>\varepsilon\} $
$ = 1 - P\{Y_{n} \leq \varepsilon\} = 1 - [1-(1-\varepsilon)^n] = (1-\varepsilon)^n $
$ \text{As } n\rightarrow \infty \; \; \; P\{\mid Y_{n}-0 \mid > \varepsilon\} = 0 $
Since $ 0 < (1-\varepsilon) <1 $
Thus $ Y_{n} \rightarrow 0 $
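The geometric decay of the tail probability $ (1-\varepsilon)^n $ is easy to see numerically (a small sketch with an arbitrarily chosen $ \varepsilon $):

```python
# The tail probability P{Y_n > eps} = (1 - eps)^n decays geometrically in n.
eps = 0.1
tails = {n: (1 - eps) ** n for n in (1, 10, 100)}

assert tails[1] > tails[10] > tails[100]  # strictly decreasing in n
assert tails[100] < 1e-4                  # 0.9**100 is about 2.7e-5
```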
(c)
From the solution to (a), we have
$ F_{Y_{n}}(y) = \left\{ \begin{array}{ll} 1 - (1-y)^{n} & \quad 0 < y \leq 1 \\ 1 & \quad y > 1 \end{array} \right. $
It converges in distribution since:
$ convergence(probability) \Rightarrow convergence(distribution) $
To find the convergence of CDF,
$ \lim_{n \to \infty} F_{Y_{n}}(y) = \left\{ \begin{array}{ll} 0 & \quad y \leq 0 \\ \lim_{n \to \infty} 1 - (1-y)^{n} & \quad 0 < y \leq 1 \\ 1 & \quad y > 1 \end{array} \right. = \left\{ \begin{array}{ll} 0 & \quad y \leq 0 \\ 1 & \quad y > 0 \end{array} \right. $
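The pointwise limit above can also be checked numerically (a hedged sketch; the test points are arbitrary): for any fixed $ y > 0 $, $ 1-(1-y)^n \to 1 $, while $ F_{Y_n}(y) = 0 $ for $ y \leq 0 $.

```python
# Pointwise behavior of F_{Y_n}(y): 0 for y <= 0, and -> 1 for any fixed y > 0.
def F(n: int, y: float) -> float:
    if y <= 0:
        return 0.0
    if y > 1:
        return 1.0
    return 1 - (1 - y) ** n

assert F(10**6, -0.5) == 0.0   # left of 0: always 0
assert F(1000, 0.01) > 0.999   # even a tiny y > 0 is pulled toward 1
assert F(7, 2.0) == 1.0        # right of 1: always 1
```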
$ {\color{red} \text{*For part (a), there is a typo for the following equation}} $
$ {\color{red} F_{Y_{n}}(y) = P(\{Y \leq y\}) = P(min\{X_{1}, \dots, X_{n}\} \leq y) \text{, and should be changed to} } $
$ {\color{red} F_{Y_{n}}(y) = P(\{Y_{n} \leq y\}) = P(min\{X_{1}, \dots, X_{n}\} \leq y) } $
$ {\color{red} \text{*For part (b), I would like to add some additional parts to the following equation for better understanding}} $
$ {\color{red} n\rightarrow \infty \; \; \; P\{\mid Y_{n}-0 \mid > \varepsilon\} = 0} $
$ {\color{red} \text{Since } 0 < (1-\varepsilon) <1} $
$ {\color{red} \text{to }} $
$ {\color{red}\text{Since } 0 < (1-\varepsilon) <1 \text{, as } n\rightarrow \infty \Rightarrow \lim_{n \to \infty}(1-\varepsilon)^n = 0 \; \; \; \therefore \lim_{n \to \infty}P\{\mid Y_{n}-0 \mid > \varepsilon\} = \lim_{n \to \infty} (1-\varepsilon)^n = 0} $
$ {\color{red} \text{*For part (c), the CDF of the random variable } \mathbf{Y}_n \text{ is incorrect. It should be}} $
$ {\color{red} F_{Y_n}\left(y\right)=\begin{cases} \begin{array}{lll} 0, y <0 \\ 1-(1-y)^n, 0 \leq y \leq 1 \\ 1, y>1 \end{array}\end{cases} } $
$ {\color{red} \text{Therefore, it should be modified as follows}} $
$ {\color{red} \text{As } n \to \infty \text{,} \,\, F_{Y_n}(y) \to F_{Y}(y)=\begin{cases} \begin{array}{lll} 0, y <0 \\ 1, y \geq 0 \end{array}\end{cases} } $
$ \color{blue}\text{Solution 2:} $
(a)
Since $ \mathbf{Y}_{n} = min \,\{ \mathbf{X}_1, \mathbf{X}_2, \dots \mathbf{X}_n \} $, and $ \mathbf{X}_1, \mathbf{X}_2, \dots, \mathbf{X}_n $ are i.i.d. random variables,
We can have the CDF of $ \mathbf{Y}_{n} $
$ P(\mathbf{Y}_{n} \leq y) = P(min \,\{ \mathbf{X}_1, \mathbf{X}_2, \dots \mathbf{X}_n \} \leq y) $
$ = 1-P(\mathbf{X}_1 > y, \mathbf{X}_2 > y, \dots, \mathbf{X}_n > y) $
$ = 1-P(\mathbf{X}_1 > y)P(\mathbf{X}_2 > y) \dots P(\mathbf{X}_n > y) $
$ = 1-(1-F_{X_1}(y))(1-F_{X_2}(y)) \dots (1-F_{X_n}(y))=1-(1-F_{X_1}(y))^n $
Since each $ \mathbf{X}_i $ is uniformly distributed on the interval [0,1], $ \mathbf{Y}_{n} $ also takes values in [0,1].
When y < 0, $ F_{Y_n}(y) = 0 $ since $ F_{X_1}(y) =0 $
When y > 1, $ F_{Y_n}(y) = 1 $ since $ F_{X_1}(y) =1 $
When $ 0 \leq y \leq 1 $,
$ F_{X_1}(y)= \int_{0}^{y}f_{X_1}(x)\,dx=\int_{0}^{y}\, 1 \, dx = y $
Thus, the CDF of $ \mathbf{Y}_{n} $ is the following
$ F_{Y_n}\left(y\right)=\begin{cases} \begin{array}{lll} 0, y <0 \\ 1-(1-y)^n, 0 \leq y \leq 1 \\ 1, y>1 \end{array}\end{cases} $
Thus, the pdf of $ \mathbf{Y}_{n} $ is
$ f_{Y_n}\left(y\right)=\begin{cases} \begin{array}{lll} n(1-y)^{n-1}, 0 \leq y \leq 1 \\ 0, \,\,\, \text{elsewhere} \end{array}\end{cases} $
(b)
Let $ \mu = E[Y_{n}] $ and $ \sigma^2 = E[\,|Y_{n}-\mu|^2\,] $. By the Chebyshev inequality, we have
$ P(|\mathbf{Y}_{n}-\mu| \geq \varepsilon) \leq \frac{\sigma^2}{\varepsilon^2} = \frac{E[|\mathbf{Y}_{n}-\mu|^2]}{\varepsilon^2} $
Now, we examine
$ E[|\mathbf{Y}_{n}-\mu|^2] = E[Y_n^2-2 \mu Y_n+\mu^2] = E[Y_n^2]-2\mu E[Y_n]+\mu^2=E[Y_n^2]-\mu^2. $
From part (a) we have the pdf of $ Y_{n} $, so we can find $ E[Y_{n}] $ and $ E[Y_n^2] $:
$ E[Y_n] = \int_{0}^{1}y f_{Y_n}(y) \,dy = \int_{0}^{1} y n(1-y)^{n-1} \, dy = \frac{1}{n+1} $
$ E[Y_n^2] = \int_{0}^{1}y^2 f_{Y_n}(y) \,dy = \int_{0}^{1} y^2 n(1-y)^{n-1} \, dy = \frac{2}{n^2+3n+2} $
As $ n \to \infty $
$ \lim_{n \to \infty} E[|\mathbf{Y}_{n}-\mu|^2] = \lim_{n \to \infty} \left( E[Y_n^2]-(E[Y_n])^2 \right)= \lim_{n \to \infty}\left( \frac{2}{n^2+3n+2}- (\frac{1}{n+1})^2\right)=0 $
$ \Rightarrow P(|\mathbf{Y}_{n}-\mu| \geq \varepsilon) \leq \frac{E[|\mathbf{Y}_{n}-\mu|^2]}{\varepsilon^2} = 0 \,\,\text{as} \,n \to \infty $
Thus, the sequence $ Y_{n} $ converges in probability for any $ \varepsilon>0 $, and the limit is 0, since
$ \lim_{n \to \infty} \mu = \lim_{n \to \infty} E[Y_{n}] = \lim_{n \to \infty} \frac{1}{n+1} =0 $
$ \therefore P(|\mathbf{Y}_{n}-\mu| \geq \varepsilon) = P(|\mathbf{Y}_{n}-0| \geq \varepsilon) \to 0 \,\,\text{as} \,\, n \to \infty $
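The two moments used above, $ E[Y_n] = \frac{1}{n+1} $ and $ E[Y_n^2] = \frac{2}{n^2+3n+2} $, can be confirmed by simulation (a hypothetical sketch; n and the sample size are arbitrary):

```python
import random

# Monte Carlo check of E[Y_n] = 1/(n+1) and E[Y_n^2] = 2/(n^2 + 3n + 2)
# for Y_n = min of n i.i.d. Uniform(0,1) samples.
random.seed(1)
n, trials = 4, 400_000
samples = [min(random.random() for _ in range(n)) for _ in range(trials)]

m1 = sum(samples) / trials
m2 = sum(s * s for s in samples) / trials

assert abs(m1 - 1 / (n + 1)) < 0.005             # 1/5 = 0.2
assert abs(m2 - 2 / (n**2 + 3 * n + 2)) < 0.005  # 2/30 ~ 0.0667
```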
(c)
The CDF of Yn
$ F_{Y_n}\left(y\right)=\begin{cases} \begin{array}{lll} 0, y <0 \\ 1-(1-y)^n, 0 \leq y \leq 1 \\ 1, y>1 \end{array}\end{cases} $
As $ n \to \infty $
$ F_{Y_n}(y) \to F_{Y}(y)= \begin{cases} \begin{array}{lll} 0, \,\, y <0 \\ 1, \,\, y \geq 0 \end{array}\end{cases} $
where $ 1-(1-y)^n \to 1 $ as $ n \to \infty $, when 0 < y < 1
Because 0 < (1 − y) < 1, $ (1-y)^n \to 0 $, when $ n \to \infty $
$ F_{Y}(y)= \begin{cases} \begin{array}{lll} 0, \,\, y <0 \\ 1, \,\, y \geq 0 \end{array}\end{cases} $
where this CDF is a step function.
Thus, the sequence of random variables Yn with cumulative distribution function $ F_{Y_n}(y) $ converges in distribution to the random variable Y with cumulative distribution function FY(y), which is a step function.
Comments on Solution 2:
1. Curly braces are required to take probabilities on events.
2. Error in part (c): As n grows to infinity, 1 - (1-y)^n is definitely not equal to 1 for y = 0; it equals 0 there for every n, so the pointwise limit at y = 0 is 0, and the stated limit of 1 holds only for y > 0 (treating the point y = 0 as 0 is common engineering practice).
Related Problems
1. Let Xn be a sequence of independent, identically distributed random variables, each having the Cauchy pdf
$ f(x)=\frac{1}{\pi (1+x^2)} \text{, } -\infty < x < \infty $
Let Yn be a new random variable defined by
$ Y_{n} = \frac{1}{n} \sum_{i=1}^{n} \, X_{i} $
(a) Find the pdf of the sequence Yn, and describe how it depends on n.
(b) Does the sequence $ \mathbf{Y}_{n} $ converge in distribution?
(c) If yes, what is the distribution of the random variable it converges to?
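For related problem 1, one can anticipate the answer with a simulation (a hypothetical sketch using the inverse-CDF method; the trial counts are arbitrary): the sample mean of i.i.d. standard Cauchy variables is again standard Cauchy, so its spread does not shrink as n grows.

```python
import math
import random

random.seed(2)

def cauchy() -> float:
    # Standard Cauchy via inverse CDF: tan(pi * (U - 1/2)), U ~ Uniform(0,1)
    return math.tan(math.pi * (random.random() - 0.5))

def iqr_of_mean(n: int, trials: int = 20_000) -> float:
    """Interquartile range of Y_n = (X_1 + ... + X_n)/n over many trials."""
    means = sorted(sum(cauchy() for _ in range(n)) / n for _ in range(trials))
    return means[3 * trials // 4] - means[trials // 4]

# Unlike the law-of-large-numbers setting, the spread does not decrease:
# Y_n is again standard Cauchy (IQR = 2) for every n.
a, b = iqr_of_mean(1), iqr_of_mean(50)
assert 0.8 < a / b < 1.25
```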
2. Explain what happens in this problem when:
(a) Yn = max{X1, X2, ...., Xn}.
(b) X1, X2, ...., Xn are exponentially distributed.
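A hedged numerical sketch for related problem 2 (parameters arbitrary): the maximum of n uniforms concentrates near 1 rather than 0, and the minimum of n i.i.d. Exp(1) variables is again exponential, with rate n, so it still converges to 0.

```python
import random

random.seed(3)
n, trials = 50, 20_000

# (a) Y_n = max of n Uniform(0,1): E[Y_n] = n/(n+1), concentrates near 1.
maxes = [max(random.random() for _ in range(n)) for _ in range(trials)]
# (b) min of n i.i.d. Exp(1) is Exp(n): mean 1/n, still -> 0.
mins = [min(random.expovariate(1.0) for _ in range(n)) for _ in range(trials)]

assert sum(maxes) / trials > 0.95  # n/(n+1) = 50/51, about 0.98
assert sum(mins) / trials < 0.05   # 1/n = 0.02
```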