ECE Ph.D. Qualifying Exam in Communication Networks Signal and Image processing (CS)

Question 5, August 2012 (published in May 2017), Problem 2

Solutions 1 and 2

Solution 1:

a)
$ \begin{align} & {{h}_{1}}(m,n)=\sum\limits_{j=-N}^{N}{{{a}_{j}}\,\delta (m,n-j)}={{a}_{n}}\,\delta (m) \\ & \text{where}\quad \delta (m,n)=\left\{ \begin{matrix} 1, & m=n=0 \\ 0, & \text{otherwise} \\ \end{matrix} \right.\quad \text{and}\quad \delta (m,n-j)=\left\{ \begin{matrix} 1, & m=0,\ n=j \\ 0, & \text{otherwise} \\ \end{matrix} \right. \\ \end{align} $
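
As a quick numerical check of this result, here is a minimal sketch (with made-up filter coefficients) that feeds a 2D unit impulse through the row filter $ y(m,n)=\sum_{j=-N}^{N} a_j\, x(m,n-j) $ and confirms that the response is $ a_n\,\delta(m) $, i.e. the coefficients appear only on the row $ m=0 $:

<pre>
import numpy as np

N = 2
a = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # a_{-N}, ..., a_{N} (hypothetical values)

size = 9                                   # grid indexed m, n = -4, ..., 4
center = size // 2                         # array index of the origin m = n = 0
x = np.zeros((size, size))
x[center, center] = 1.0                    # 2D unit impulse delta(m, n)

# Apply the row filter: every row is convolved with the 1D kernel a (along n)
h1 = np.apply_along_axis(lambda row: np.convolve(row, a, mode="same"), 1, x)

print(h1[center])                                    # -> [0 0 0.1 0.2 0.4 0.2 0.1 0 0], i.e. a_n
print(np.abs(h1[np.arange(size) != center]).max())   # -> 0.0, every other row is zero
</pre>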

b)
$ \begin{align} & {{h}_{2}}(m,n)=\sum\limits_{i=-N}^{N}{{{b}_{i}}\,\delta (m-i,n)}={{b}_{m}}\,\delta (n) \\ & \text{where}\quad \delta (m,n)=\left\{ \begin{matrix} 1, & m=n=0 \\ 0, & \text{otherwise} \\ \end{matrix} \right.\quad \text{and}\quad \delta (m-i,n)=\left\{ \begin{matrix} 1, & m=i,\ n=0 \\ 0, & \text{otherwise} \\ \end{matrix} \right. \\ \end{align} $

c)
$ \begin{align} & h(m,n)=\sum\limits_{i=-N}^{N}{\sum\limits_{j=-N}^{N}{{{b}_{i}}\,{{a}_{j}}\,\delta (m-i,n-j)}}={{b}_{m}}\,{{a}_{n}} \\ & z(m,n)=\sum\limits_{i=-N}^{N}{{{b}_{i}}\,y(m-i,n)}=\sum\limits_{i=-N}^{N}{{{b}_{i}}\left( \sum\limits_{j=-N}^{N}{{{a}_{j}}\,x(m-i,n-j)} \right)}=\sum\limits_{i=-N}^{N}{\sum\limits_{j=-N}^{N}{{{b}_{i}}\,{{a}_{j}}\,x(m-i,n-j)}} \\ & \text{where}\quad \delta (m-i,n-j)=\left\{ \begin{matrix} 1, & m=i,\ n=j \\ 0, & \text{otherwise} \\ \end{matrix} \right. \\ \end{align} $
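
The equivalence in part c) can also be checked numerically. Below is a minimal sketch (random, hypothetical coefficients; it assumes NumPy and SciPy are available) that filters a test image once in sequence and once with the single combined kernel $ h(m,n)=b_m a_n $, and confirms the outputs agree:

<pre>
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
N = 2
a = rng.standard_normal(2 * N + 1)     # a_{-N}, ..., a_{N} (hypothetical)
b = rng.standard_normal(2 * N + 1)     # b_{-N}, ..., b_{N} (hypothetical)
x = rng.standard_normal((32, 32))      # arbitrary test image

# Sequential implementation: filter the rows with a, then the columns with b
y = np.apply_along_axis(lambda r: np.convolve(r, a, mode="same"), 1, x)
z_seq = np.apply_along_axis(lambda c: np.convolve(c, b, mode="same"), 0, y)

# Single 2D convolution with the separable kernel h(m,n) = b_m * a_n
h = np.outer(b, a)
z_2d = convolve2d(x, h, mode="same")

print(np.allclose(z_seq, z_2d))        # -> True
</pre>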

d)
Each individual system requires $ 2N+1 $ multiplies per output point, so implementing the two systems in sequence requires $ 2(2N+1) = 4N+2 $ multiplies per output point.

The number of multiplies per output point to implement the complete system with a single convolution is $ \left( 2N+1 \right)\left( 2N+1 \right)=4{{N}^{2}}+4N+1 $.
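
For concreteness, a small sketch that tabulates the two counts for a few example values of $ N $:

<pre>
def sequential_multiplies(N):
    # two 1D filters of length 2N+1, applied one after the other
    return 2 * (2 * N + 1)

def single_conv_multiplies(N):
    # one (2N+1) x (2N+1) kernel with coefficients b_m * a_n precomputed
    return (2 * N + 1) ** 2

for N in (1, 2, 4, 8):
    print(N, sequential_multiplies(N), single_conv_multiplies(N))
# N = 1: 6 vs 9,  N = 2: 10 vs 25,  N = 4: 18 vs 81,  N = 8: 34 vs 289
</pre>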

e) Implementing the two systems in sequence requires less computation, but it is more complex and more sensitive to noise. Implementing the complete system as a single convolution requires more computation, but it is simpler, less sensitive to noise, and more stable.


Solution 2:

a)
$ \begin{align} & {{h}_{1}}(m,n)=\sum\limits_{j=-N}^{N}{{{a}_{j}}\,\delta (m,n-j)}=\sum\limits_{j=-N}^{N}{{{a}_{j}}\,\delta (m)\,\delta (n-j)}={{a}_{n}}\,\delta (m) \\ \end{align} $

b)
$ \begin{align} & {{h}_{2}}(m,n)=\sum\limits_{i=-N}^{N}{{{b}_{i}}\,\delta (m-i,n)}=\sum\limits_{i=-N}^{N}{{{b}_{i}}\,\delta (m-i)\,\delta (n)}={{b}_{m}}\,\delta (n) \\ \end{align} $

c)
$ \begin{align} & h(m,n)=\sum\limits_{i=-N}^{N}{\sum\limits_{j=-N}^{N}{{{b}_{i}}\ {{a}_{j}}\ \delta (m-i,n-j)}}=\sum\limits_{i=-N}^{N}{\sum\limits_{j=-N}^{N}{{{b}_{i}}\ {{a}_{j}}\ \delta (m-i)\ \delta (n-j)}}={{b}_{m}}\ {{a}_{n}} \\ \end{align} $

d) $ 2N+1 $ multiplies for each of the two individual systems
$ (2N+1)^2 $ multiplies for the complete system with a single convolution

For the complete system with a single convolution, if the products $ b_i a_j $ are computed on the fly, each filter location costs two multiplies, giving $ 2(2N+1)^2 $ multiplies per output point. If, however, the combined filter coefficients $ h(m,n)=b_m a_n $ are precomputed, then $ (2N+1)^2 $ multiplies is correct.
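
Spelling the sum out once more makes the two counts explicit:

$ \begin{align} & z(m,n)=\sum\limits_{i=-N}^{N}{\sum\limits_{j=-N}^{N}{\left( {{b}_{i}}\,{{a}_{j}} \right)x(m-i,n-j)}} \\ & \text{products } {{b}_{i}}{{a}_{j}} \text{ formed on the fly: } {{\left( 2N+1 \right)}^{2}}\text{ terms}\times 2\text{ multiplies}=2{{\left( 2N+1 \right)}^{2}} \\ & \text{coefficients } h(m,n)={{b}_{m}}{{a}_{n}} \text{ precomputed: } {{\left( 2N+1 \right)}^{2}}\text{ terms}\times 1\text{ multiply}={{\left( 2N+1 \right)}^{2}} \\ \end{align} $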

e) Implement in sequence:

Advantage: Fewer multiplies, faster
Disadvantage: Needs storage for the intermediate result

Implement as a single complete system:

Advantage: Intuitive solution; no intermediate result to store
Disadvantage: More multiplies, slower

Related Problem

Consider a 2D linear space-invariant filter with input $ x(m,n) $, output $ y(m,n) $, and impulse response $ h(m,n) $, so that

$ y(m,n) = h(m,n)* x(m,n). $

The impulse response is given by

$ h(m,n) = \left\{\begin{matrix} \frac{1}{(2N+1)^2}, & \text{for } |m|\leq N \text{ and } |n|\leq N \\ 0, & \text{otherwise} \end{matrix}\right. $

a) If this filter is implemented with a direct 2D convolution, how many multiplies are needed per output value?

b) Find a separable decomposition of $ h(m,n) $ so that

$ h(m,n) = g(m)f(n) $

where $ g(m) $ and $ f(n) $ are 1D functions.

c) How can the functions $ g(m) $ and $ f(n) $ be used to compute $ y(m,n) $? What are the advantages and disadvantages of this approach?



Back to ECE QE page:
