Latest revision as of 06:11, 16 September 2013


Impulse-train Sampling


One type of sampling that satisfies the Sampling Theorem is called impulse-train sampling. This type of sampling is achieved by multiplying a continuous-time signal, $ x(t) $, by a periodic impulse train. The periodic impulse train, $ p(t) $, is referred to as the sampling function, its period, $ T $, is referred to as the sampling period, and the fundamental frequency of $ p(t) $,

$ \omega_s = \frac{2\pi}{T}, $

is the sampling frequency. We define $ x_p(t) $ by the equation,

$ x_p(t) = x(t)p(t) $, where
$ p(t) = \sum^{\infty}_{n = -\infty} \delta(t - nT) $

Graphically, this equation looks as follows,

             $ x(t) $ ----------> x --------> $ x_p(t) $
                              ^
                              |
                              |
                  $ p(t) = \sum^{\infty}_{n = -\infty} \delta(t - nT) $

By using linearity and the sifting property, $ x_p(t) $ can be represented as follows,

$ x_p(t) = x(t)p(t) $

     $ = x(t)\sum^{\infty}_{n = -\infty} \delta(t - nT) $
     $ = \sum^{\infty}_{n = -\infty}x(t)\delta(t - nT) $
     $ = \sum^{\infty}_{n = -\infty}x(nT)\delta(t - nT) $

Now, in the time domain, $ x_p(t) $ looks like a train of shifted impulses, each weighted by the value of $ x(t) $ at the sampling instant $ nT $. In the frequency domain, $ X_p(\omega) $ looks like shifted copies of the original $ X(\omega) $ that repeat every $ \omega_s $, except that the magnitude of the copies is $ \frac{1}{T} $ times the magnitude of the original $ X(\omega) $.
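As a quick numerical illustration (a sketch in Python/NumPy; the test signal $ x(t) = \cos(2\pi \cdot 5t) $ and the sampling period $ T = 0.01 $ s are arbitrary choices, not from the text), the impulse weights in $ x_p(t) $ are exactly the sample values $ x(nT) $:

```python
import numpy as np

# Arbitrary example signal: x(t) = cos(2*pi*5*t), band-limited to 5 Hz.
x = lambda t: np.cos(2 * np.pi * 5 * t)

T = 0.01                 # sampling period; f_s = 100 Hz >> 2 * 5 Hz, so no aliasing
n = np.arange(100)       # sample indices
weights = x(n * T)       # impulse weights x(nT): all the information x_p(t) carries

# x_p(t) = sum_n x(nT) * delta(t - nT): an impulse of area x(nT) at each t = nT
print(weights[:3])
```

Everything the sampled signal retains about $ x(t) $ is this sequence of weights; the continuous-time waveform between the instants $ nT $ is discarded.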

Why does $ X_p(\omega) $ look like copies of the original $ X(\omega) $?

This can be seen by taking the Fourier transform of $ x_p(t) $.

$ X_p(\omega) = F(x(t)p(t)) $

      $ = \frac{1}{2\pi}X(\omega) * P(\omega) $
      $ = \frac{1}{2\pi}X(\omega) * \sum^{\infty}_{k = -\infty}2\pi a_k \delta(\omega - k\omega_s), \quad a_k = \frac{1}{T} $
      $ = \sum^{\infty}_{k = -\infty}\frac{1}{T}X(\omega - k\omega_s) $

From the above equation, it is clear that $ X_p(\omega) $ is simply shifted copies of the original function (as can be seen from the $ X(\omega - k\omega_s) $ term) that are divided by $ T $ (as can be seen from the $ \frac{1}{T} $ factor).
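A practical consequence of the spectrum repeating every $ \omega_s $ is aliasing: two sinusoids whose frequencies differ by exactly $ \omega_s $ fall on top of each other in $ X_p(\omega) $ and therefore produce identical samples. A small sketch (the frequencies here are arbitrary choices for illustration):

```python
import numpy as np

T = 0.01                   # sampling period, so w_s = 2*pi/T (f_s = 100 Hz)
fs = 1 / T
n = np.arange(200)

f0 = 7.0                                      # an arbitrary test frequency (Hz)
x1 = np.cos(2 * np.pi * f0 * n * T)
x2 = np.cos(2 * np.pi * (f0 + fs) * n * T)    # shifted by one full spectral copy

# The two sample sequences coincide: from x_p(t) alone, f0 and f0 + fs
# are indistinguishable, because their spectral copies overlap exactly.
print(np.max(np.abs(x1 - x2)))
```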

How to recover $ x(t) $

In order to recover the original function, $ x(t) $, we can simply low-pass filter $ x_p(t) $, as long as the filter,

$ H(\omega) = \left\{ \begin{array}{ll}T,& |\omega|< \omega_c\\ 0,& else\end{array}\right. $

with some $ \omega_c $ satisfying $ \omega_m < \omega_c < \omega_s - \omega_m $, where $ \omega_m $ is the highest frequency present in $ X(\omega) $. Also, the low-pass filter must have a gain of $ T $, which undoes the $ \frac{1}{T} $ scaling of the copies. This can be represented graphically as shown below,

                      filter
$ x_p(t) $ -------> $ H(\omega) $ -------> $ x_r(t) $,

where $ x_r(t) $ represents the recovered original function.
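In the time domain, this ideal low-pass filter (with gain $ T $ and cutoff $ \omega_c = \omega_s/2 $) has impulse response $ h(t) = \mathrm{sinc}(t/T) $, so the recovery can be written as $ x_r(t) = \sum_n x(nT)\,\mathrm{sinc}((t-nT)/T) $. A sketch of this reconstruction (the 5 Hz test signal and the finite window are arbitrary choices; truncating the infinite sum leaves a small residual error):

```python
import numpy as np

T = 0.01
x = lambda t: np.cos(2 * np.pi * 5 * t)   # band-limited test signal: 5 Hz < fs/2 = 50 Hz

n = np.arange(-500, 500)                  # finite window approximating the infinite sum
samples = x(n * T)                        # the impulse weights x(nT)

def x_r(t):
    # Ideal low-pass filtering of x_p(t); np.sinc(u) = sin(pi*u)/(pi*u)
    return np.sum(samples * np.sinc((t - n * T) / T))

t0 = 0.123                                # a point between sampling instants
print(abs(x_r(t0) - x(t0)))               # small truncation error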


Back to ECE301
