

ECE Ph.D. Qualifying Exam in Communications, Networking, Signal and Image Processing (CS)

August 2014, Problem 2

Problem 1, 2

Solution 1

a) Taking the Z-transform of both sides, we have $ \begin{split} &Y(z_1,z_2) = bX(z_1,z_2)+aY(z_1,z_2)z_1^{-1}+aY(z_1,z_2)z_2^{-1}-a^2Y(z_1,z_2)z_1^{-1}z_2^{-1}\\ &Y(z_1,z_2) = bX(z_1,z_2) +Y(z_1,z_2)\left[az_1^{-1}+az_2^{-1}-a^2z_1^{-1}z_2^{-1}\right]\\ &Y(z_1,z_2)\left[1-az_1^{-1}-az_2^{-1}+a^2z_1^{-1}z_2^{-1} \right] = bX(z_1,z_2)\\ &\frac{Y(z_1,z_2)}{X(z_1,z_2)} = \frac{b}{1-az_1^{-1}-az_2^{-1}+a^2z_1^{-1}z_2^{-1}} = \frac{b}{(1-az_1^{-1})(1-az_2^{-1})}\\ \end{split} $

(Grader's comment: a proof is lacking here. The relevant property of the Poisson distribution should be mentioned to show the equivalence; see the proof in Solution 2.)

b) The impulse response can then be obtained by taking the inverse Z-transform:

$ h(m,n) = ba^mu[m]a^nu[n] = ba^{m+n}u[m]u[n] $
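As a quick numerical sanity check (not part of the original solution), the recursion implied by the difference equation in part a) can be driven with a unit impulse and compared against the closed form above. The values $ a = 0.5 $, $ b = (1-a)^2 = 0.25 $, and the grid size $ N = 8 $ are arbitrary choices for illustration.

```python
# Run the 2D recursion y(m,n) = b*x(m,n) + a*y(m-1,n) + a*y(m,n-1)
#                               - a^2*y(m-1,n-1)
# with a unit impulse input and compare against h(m,n) = b*a^(m+n).
import numpy as np

a, b = 0.5, 0.25          # illustrative values; b = (1-a)^2 from part c)
N = 8
x = np.zeros((N, N))
x[0, 0] = 1.0             # unit impulse
y = np.zeros((N, N))
for m in range(N):
    for n in range(N):
        y[m, n] = b * x[m, n]
        if m > 0:
            y[m, n] += a * y[m - 1, n]
        if n > 0:
            y[m, n] += a * y[m, n - 1]
        if m > 0 and n > 0:
            y[m, n] -= a**2 * y[m - 1, n - 1]

mm, nn = np.meshgrid(range(N), range(N), indexing="ij")
h = b * a**(mm + nn)      # closed-form impulse response
print(np.allclose(y, h))  # True
```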

c) The Z-transform can be written as $ X(z_1,z_2) = \sum_m\sum_n x(m,n)z_1^{-m}z_2^{-n} $

Therefore, when $ z_1=1,z_2=1 $, $ X(1,1) = \sum_m\sum_n x(m,n) $, which is the sum of all samples of the signal and hence proportional to its average. So for the filter to preserve the average of the input, we need $ H(1,1) = 1 $.

$ H(1,1) = \left.\frac{b}{(1-az_1^{-1})(1-az_2^{-1})}\right|_{z_1=z_2=1} = \frac{b}{(1-a)^2} = 1 $

So $ b = (1-a)^2 $.
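As a consistency check, the same result follows by summing the impulse response from part b) directly, assuming $ |a|<1 $ so the geometric series converge:

$ \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} b\,a^{m+n} = b\left(\sum_{m=0}^{\infty}a^m\right)\left(\sum_{n=0}^{\infty}a^n\right) = \frac{b}{(1-a)^2} = 1 $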

d) The autocorrelation of the input is $ R_x(k,l) = E\left[x(m,n)\,x(m+k,n+l)\right] $, which in practice can be estimated by averaging $ x(m,n)x(m+k,n+l) $ over $ (m,n) $.

So when $ k=l=0 $, $ R_x(0,0) = E\left[x(m,n)^2\right] $ is the variance of the zero-mean input signal, which is 1.

When $ k \neq 0 $ or $ l \neq 0 $, the two samples are independent and zero-mean, so $ E\left[x(m,n)\,x(m+k,n+l)\right] = E\left[x(m,n)\right]E\left[x(m+k,n+l)\right] = 0 $.

Combining these two cases, we get $ R_x(k,l) = \delta(k)\delta(l) $.

The power spectral density is the discrete-space Fourier transform of $ R_x $, so $ S_x(e^{j\mu}, e^{j\nu}) = \sum_k\sum_l R_x(k,l)e^{-j(\mu k+\nu l)} = 1 $.
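The claim can also be checked empirically. The sketch below (illustrative only; the grid size and random seed are arbitrary) estimates the autocorrelation of i.i.d. unit-variance noise at a few lags:

```python
# The sample autocorrelation of i.i.d. unit-variance noise is ~1 at lag
# (0,0) and ~0 elsewhere.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((512, 512))   # i.i.d., zero mean, unit variance

def sample_autocorr(x, k, l):
    """Average of x(m,n)*x(m+k,n+l) over all valid (m,n), for k,l >= 0."""
    M, N = x.shape
    return np.mean(x[:M - k, :N - l] * x[k:, l:])

print(sample_autocorr(x, 0, 0))  # ~1.0
print(sample_autocorr(x, 1, 0))  # ~0.0
print(sample_autocorr(x, 0, 1))  # ~0.0
```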

e) The power spectral density of the output can be calculated from that of the input signal:

$ S_y(e^{j\mu}, e^{j\nu}) = \left|H(e^{j\mu}, e^{j\nu})\right|^2S_x(e^{j\mu}, e^{j\nu}) $

So

$ S_y(e^{j\mu}, e^{j\nu}) = \left|\frac{b}{(1-ae^{-j\mu})(1-ae^{-j\nu})}\right|^2\times 1 = \frac{b^2}{\left|1-ae^{-j\mu}\right|^2\left|1-ae^{-j\nu}\right|^2} $
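A small numerical check (not from the original solution; the test frequencies are arbitrary) confirms that the two expressions above agree:

```python
# Evaluate |H(e^{j mu}, e^{j nu})|^2 directly and compare with
# b^2 / (|1 - a e^{-j mu}|^2 |1 - a e^{-j nu}|^2).
import numpy as np

a, b = 0.5, 0.25
mu, nu = 1.3, -0.7                      # arbitrary test frequencies
H = b / ((1 - a * np.exp(-1j * mu)) * (1 - a * np.exp(-1j * nu)))
Sy_direct = np.abs(H)**2                # |H|^2 * S_x with S_x = 1
Sy_formula = b**2 / (np.abs(1 - a * np.exp(-1j * mu))**2
                     * np.abs(1 - a * np.exp(-1j * nu))**2)
print(np.isclose(Sy_direct, Sy_formula))  # True
```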

Solution 2:

a) As we know, $ P\left\{Y_x=k\right\} = \frac{e^{-\lambda_x}\lambda_x^k}{k!} $ is a Poisson distribution, and it is known that the expectation of a Poisson random variable is $ \lambda_x $.

Proof:

$ \begin{split} E[Y_x] &= \sum^{+ \infty}_{k = 0} k \frac{e^{-\lambda_x}\lambda_x^k}{k!}\\ &= \sum^{+ \infty}_{k = 1} \frac{e^{-\lambda_x}\lambda_x^k}{(k-1)!}\\ &= \sum^{+ \infty}_{k = 1} \frac{e^{-\lambda_x}\lambda_x^{k-1}}{(k-1)!}\lambda_x\\ &= \lambda_xe^{-\lambda_x}\sum^{+ \infty}_{k = 0} \frac{\lambda_x^k}{k!}\\ &= \lambda_xe^{-\lambda_x}e^{\lambda_x}\\ &= \lambda_x \end{split} $

So $ E[Y_x] = \lambda_x $

Here we used the Taylor series $ \sum^{+ \infty}_{k = 0} \frac{\lambda_x^k}{k!} = e^{\lambda_x} $ to reach the final conclusion.
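The expectation can also be verified empirically; in the sketch below, the value of $ \lambda_x $ and the sample count are arbitrary illustrative choices:

```python
# The sample mean of Poisson draws approaches lambda_x.
import numpy as np

rng = np.random.default_rng(0)
lam = 3.7                                    # illustrative lambda_x
samples = rng.poisson(lam, size=1_000_000)
print(samples.mean())                        # ~3.7
```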

b) Because the expected number of photons decreases as the depth increases, and the decrease over an infinitesimal slab of thickness $ dx $ is proportional to both the current photon count $ \lambda_x $ and the local attenuation $ \mu(x) $, we have $ d\lambda_x = -\lambda_x\mu(x)dx $

or equivalently

$ \frac{d\lambda_x}{dx} = -\lambda_x\mu(x) $

c) The differential equation from b) is a first-order linear ordinary differential equation, which can be solved by separation of variables:

$ \lambda_x = \lambda_0e^{-\int^x_0\mu(t)dt} $

where $ \lambda_0 $ is the expected number of photons at depth $ x = 0 $.
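As an illustrative check (not part of the original solution), one can integrate the ODE from part b) numerically with a made-up attenuation profile $ \mu(x) $ and compare against the closed form above:

```python
# Integrate d(lambda)/dx = -mu(x)*lambda with forward Euler and compare
# against lambda_0 * exp(-integral of mu). mu(x) is a made-up example.
import numpy as np

mu = lambda x: 0.5 + 0.3 * np.sin(x)     # hypothetical attenuation profile
lam0, X, steps = 1000.0, 2.0, 100_000
dx = X / steps

lam = lam0
for i in range(steps):                   # forward Euler step
    lam -= lam * mu(i * dx) * dx

xs = np.linspace(0.0, X, steps + 1)
integral = np.trapz(mu(xs), xs)          # numerical integral of mu on [0, X]
print(lam, lam0 * np.exp(-integral))     # the two values nearly coincide
```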

d) From part c), $ \frac{\lambda_x}{\lambda_0} = e^{-\int^x_0\mu(t)dt} $. Evaluating at depth $ x = T $ and taking logarithms, we have

$ \int^T_0\mu(t)dt = -\log\left(\frac{\lambda_T}{\lambda_0}\right) $

e) Since $ E[Y_x] = \lambda_x $ from part a), the measured counts $ Y_0 $ and $ Y_T $ can be substituted for $ \lambda_0 $ and $ \lambda_T $ in the result of part d), giving the estimate

$ \int^T_0\mu(t)dt \approx -\log\left(\frac{Y_T}{Y_0}\right) $
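A short Monte Carlo sketch (illustrative; the values of $ \lambda_0 $ and the true integral are assumed for the experiment) shows that this estimator is accurate when the photon counts are large:

```python
# With Y_0 ~ Poisson(lambda_0) and Y_T ~ Poisson(lambda_T), the estimator
# -log(Y_T / Y_0) is close to the true integral of mu for large counts.
import numpy as np

rng = np.random.default_rng(0)
lam0 = 1e6                       # large photon count at the source
true_integral = 1.2              # assumed value of integral_0^T mu(t) dt
lamT = lam0 * np.exp(-true_integral)

Y0 = rng.poisson(lam0)
YT = rng.poisson(lamT)
print(-np.log(YT / Y0))          # ~1.2
```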

This photon attenuation question is very similar to other questions: for example, 2017S-ECE637-Exam1, Problem 3. Related topics are projection problems (e.g., 2013S-ECE637-Exam1, Problem 2; 2012S-ECE637-Exam1, Problem 3) and scan problems (e.g., 2016QE-CS5, Problem 1).



Back to QE CS question 1, August 2014

Back to ECE QE page
