[[Category:ECE301Fall2008mboutin]]
[[Category:ECE301]]
[[Category:ECE]]
[[Category:signals and systems]]
[[Category:sampling]]

= Impulse-train Sampling =
----
One type of sampling that satisfies the Sampling Theorem is called impulse-train sampling. This type of sampling is achieved by multiplying a continuous-time signal, <math>x(t)</math>, by a periodic impulse train. The periodic impulse train, <math>p(t)</math>, is referred to as the sampling function, its period, <math>T</math>, is referred to as the sampling period, and the fundamental frequency of <math>p(t)</math>,

<math>\omega_s = \frac{2\pi}{T},</math>

is the sampling frequency. We define <math>x_p(t)</math> by the equation

<center><math>x_p(t) = x(t)p(t) \ </math>, where</center>

<center><math>p(t) = \sum^{\infty}_{n = -\infty} \delta(t - nT)\!</math></center>
Graphically, this equation looks as follows,

 <math>x(t)\!</math> ----------> x --------> <math>x_p(t)\!</math>
                                ^
                                |
                                |
      <math>p(t) = \sum^{\infty}_{n = -\infty} \delta(t - nT)\!</math>
+ | |||
+ | By using linearity and the sifting property, <math>x_p(t) </math> can be represented as follows, | ||
+ | |||
+ | <math>x_p(t) = x(t)p(t) </math> | ||
+ | |||
+ | <math> = x(t)\sum^{\infty}_{n = -\infty} \delta(t - nT)\!</math> | ||
+ | |||
+ | <math> =\sum^{\infty}_{n = -\infty}x(t)\delta(t - nT)\!</math> | ||
+ | |||
+ | <math> =\sum^{\infty}_{n = -\infty}x(nT)\delta(t - nT)\!</math> | ||
+ | |||
+ | Now, in the time domain, <math>x_p(t) </math> looks like a group of shifted deltas with magnitude equal to the value of <math>x(t) </math> at that time, <math>nT </math>, in the original function. In the frequency domain, <math>X_p(\omega) </math> looks like shifted copies of the original <math>X(\omega) </math> that repeat every <math>\omega_s </math>, except that the magnitude of the copies is 1/T of the magnitude of the original <math>X(\omega) </math>. | ||
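As a quick numerical sketch of the time-domain picture (Python/NumPy; the signal <math>x(t) = \cos(2\pi \cdot 5t)</math> and the period <math>T = 0.01</math> are illustrative choices, not taken from the text), the impulse weights of <math>x_p(t)</math> are just the sample values <math>x(nT)</math>:

```python
import numpy as np

# Hypothetical example signal and sampling period (chosen for illustration)
f0 = 5.0                       # signal frequency in Hz
x = lambda t: np.cos(2 * np.pi * f0 * t)

T = 0.01                       # sampling period
ws = 2 * np.pi / T             # sampling frequency in rad/s

n = np.arange(0, 100)          # sample indices
samples = x(n * T)             # weights x(nT) of the impulses in x_p(t)

print(ws)                      # 628.3185...
print(samples[:3])
```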
+ | |||
+ | === Why does <math>X_p(\omega) </math> look like copies of the original <math>X(\omega) </math>? === | ||
+ | This answer can be found simply by using the Fourier Transform of the <math>X_p(\omega) </math>. | ||
+ | |||
+ | <math>X_p(\omega) = F(x(t)p(t))\!</math> | ||
+ | |||
+ | <math> = \frac{1}{2\pi}X(\omega) * P(\omega)\!</math> | ||
+ | |||
+ | <math> = \frac{1}{2\pi}X(\omega) * \sum^{\infty}_{k = -\infty}2\pi a_k \delta(\omega - \omega_s), a_k = \frac{1}{T}\!</math> | ||
+ | |||
+ | <math> = \sum^{\infty}_{k = -\infty}\frac{1}{T}X(\omega - k\omega_s)\!</math> | ||
+ | |||
+ | From the above equation, it is obvious that <math>X_p(\omega) </math> is simply shifted copies of the original function (as can be seen by the <math>X(\omega - k\omega_s) </math>) that are divided by <math>T </math> (as can be seen by 1/T. | ||
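This result can be checked numerically. A minimal sketch (Python/NumPy), using a Gaussian pulse as an effectively bandlimited test signal — the pulse, its width, and the sampling period are all illustrative assumptions: the Fourier transform of <math>x_p(t)</math> is the finite sum <math>\sum_n x(nT)e^{-j\omega nT}</math>, which should match <math>X(\omega)/T</math> in the baseband and repeat every <math>\omega_s</math>:

```python
import numpy as np

# Illustrative, effectively bandlimited test signal: a Gaussian pulse
sigma = 0.5
x = lambda t: np.exp(-t**2 / (2 * sigma**2))
# Its continuous-time Fourier transform, known in closed form
X = lambda w: sigma * np.sqrt(2 * np.pi) * np.exp(-sigma**2 * w**2 / 2)

T = 0.1                        # sampling period
ws = 2 * np.pi / T             # sampling frequency in rad/s

n = np.arange(-400, 401)       # enough samples that the pulse has fully decayed

def Xp(w):
    # Fourier transform of x_p(t) = sum_n x(nT) delta(t - nT)
    return np.sum(x(n * T) * np.exp(-1j * w * n * T))

w0 = 3.0                       # a baseband frequency
print(abs(Xp(w0) - X(w0) / T))   # baseband copy scaled by 1/T: near zero
print(abs(Xp(w0 + ws) - Xp(w0))) # copy repeated at w0 + ws: near zero
```

The second check is exact up to rounding, since <math>e^{-j(\omega+\omega_s)nT} = e^{-j\omega nT}e^{-j2\pi n}</math>; the first holds because the neighboring spectral copies are negligible at <math>\omega_0</math> for this choice of <math>\omega_s</math>.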
+ | |||
+ | === How to recover <math>x(t) </math> === | ||
+ | In order to recover the original function, <math>x_p(t)</math>, we can simply low-pass filter <math>x_p(t)</math> as long as the filter, | ||
+ | |||
+ | <center><math>H(\omega) = \left\{ \begin{array}{ll}T,& |\omega|< \omega_c\\ 0,& else\end{array}\right.\!</math></center> | ||
+ | |||
+ | with some <math>\omega_c </math> satisfying, <math>\omega_m < \omega_c < \omega_s - \omega_m </math>. Also, the low-pass filter must have a gain of <math>T</math>. This can be represented graphically as shown below, | ||
+ | |||
+ | filter | ||
+ | <center><math>x_p(t)\!</math> -------> <math>H(\omega)\!</math> -------> <math>x_r(t)\!</math>,</center> | ||
+ | |||
+ | where <math>x_r(t) </math> represents the recovered original function. | ||
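For the choice <math>\omega_c = \omega_s/2</math>, the ideal low-pass filter with gain <math>T</math> has impulse response <math>h(t) = \operatorname{sinc}(t/T)</math>, so filtering <math>x_p(t)</math> amounts to sinc interpolation of the samples. A minimal sketch (Python/NumPy; the test signal, its frequency, and the truncation of the sum to finitely many samples are illustrative assumptions):

```python
import numpy as np

# Illustrative bandlimited signal: a 3 Hz sinusoid (wm = 6*pi rad/s)
f = 3.0
x = lambda t: np.sin(2 * np.pi * f, 0)[0] if False else np.sin(2 * np.pi * f * t)

T = 0.05                       # sampling period; ws = 40*pi > 2*wm = 12*pi
n = np.arange(-200, 201)       # finite window of sample indices (truncation)
samples = x(n * T)

def x_r(t):
    # Output of the ideal low-pass filter (gain T, cutoff ws/2) applied to x_p:
    # x_r(t) = sum_n x(nT) * sinc((t - nT)/T), with np.sinc(u) = sin(pi*u)/(pi*u)
    return np.sum(samples * np.sinc((t - n * T) / T))

t0 = 0.123                     # a point between the sample times
print(x_r(t0), x(t0))          # the two should agree closely
```

At the sample times the interpolation is exact (only one sinc term is nonzero); between samples the small residual error here comes from truncating the infinite sum.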
----
[[ECE301|Back to ECE301]]