
Sampling Theorem

English Definition

A band-limited signal $ x(t) $ can be uniquely recovered from its samples $ x(nT) $, where $ n $ ranges over $ (-\infty,\infty) $, provided the sampling period $ T $ satisfies the Nyquist criterion, $ T < \frac{1}{2}\frac{2\pi}{\omega_m} $.
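For example, take a hypothetical signal band-limited to $ \omega_m = 2\pi(1000) $ rad/s (a value assumed here purely for illustration). The condition then requires $ T < \frac{1}{2}\frac{2\pi}{2\pi(1000)} = 0.5 $ ms, i.e., more than 2000 samples per second.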

Test for sampling

Instead of the $ T $-based definition, it is often easier to check that $ \omega_s>2\omega_m $ (a numerical check is sketched after the definitions below), where

$ \omega_s $ is the sampling frequency,

$ \omega_m $ is the maximum frequency present in the signal (the band limit), and

$ 2\omega_m $ is the Nyquist rate.
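As a quick numerical check, here is a minimal Python sketch; the band limit and candidate sampling frequency below are assumed example values, not part of the original statement. It ties the two forms of the criterion together through $ \omega_s = 2\pi/T $.

import numpy as np

# Assumed example values: a signal band-limited to 1 kHz.
omega_m = 2 * np.pi * 1000.0      # maximum (band-limit) frequency, rad/s
omega_s = 2 * np.pi * 2500.0      # candidate sampling frequency, rad/s

nyquist_rate = 2 * omega_m        # the Nyquist rate, rad/s
T = 2 * np.pi / omega_s           # sampling period implied by omega_s, s

if omega_s > nyquist_rate:
    print(f"omega_s exceeds the Nyquist rate; T = {T:.6f} s is short enough.")
else:
    print("omega_s is at or below the Nyquist rate; aliasing can occur.")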

Proof

In practice, reconstructing a signal from finitely many samples yields only an approximation, but the proof below shows that a band-limited signal sampled above the Nyquist rate can, in principle, be recovered exactly.

The sampled signal is written as $ x_p(t) $, defined as follows:

$ x_p(t)=x(t)p(t) $, where $ p(t)=\sum_{n=-\infty}^\infty\delta(t-nT) $ is an impulse train. Written out, this is:

$ x_p(t)=\sum_{n=-\infty}^\infty x(t)\delta(t-nT) $

Since $ \delta(t-nT) $ is nonzero only at $ t=nT $, each term $ x(t)\delta(t-nT) $ equals $ x(nT)\delta(t-nT) $, so the sum simplifies to

$ x_p(t)=\sum_{n=-\infty}^\infty x(nT)\delta(t-nT) $

This is essentially a string of impulses whose heights match the values of the original signal at the sample instants.
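This construction can be simulated numerically. The Python sketch below is an illustration with assumed signal and sampling values (a 5 Hz cosine sampled with $ T = 0.05 $ s), not taken from the page; it uses a dense time grid to stand in for continuous time and places the sample values $ x(nT) $ at the impulse locations $ t = nT $.

import numpy as np

omega_0 = 2 * np.pi * 5.0                  # assumed signal: x(t) = cos(omega_0 t), so omega_m = omega_0
T = 0.05                                   # assumed sampling period: omega_s = 2*pi/T > 2*omega_m

dt = 1e-4                                  # dense grid spacing standing in for continuous time
t = np.arange(0.0, 1.0, dt)                # "continuous" time axis over one second
x = np.cos(omega_0 * t)                    # the original signal x(t)

# x_p(t): zero everywhere except at t = nT, where the impulse height is x(nT)
x_p = np.zeros_like(x)
n = np.arange(int(round(1.0 / T)))         # sample indices n = 0, 1, 2, ...
idx = np.round(n * T / dt).astype(int)     # grid positions of the sample instants t = nT
x_p[idx] = np.cos(omega_0 * n * T)         # heights equal to x(nT)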

The reconstruction, which completes the proof, is carried out in the frequency domain.

A multiplication in the time domain becomes a convolution (scaled by $ \frac{1}{2\pi} $) in the frequency domain, and the Fourier transform of the impulse train $ p(t) $ is itself an impulse train, $ \sum_{k=-\infty}^\infty \frac{2\pi}{T}\delta(\omega-k\omega_s) $. Therefore:

$ \mathcal{X}_p(\omega)=\frac{1}{2\pi}\mathcal{X}(\omega)*\sum_{k=-\infty}^\infty \frac{2\pi}{T}\delta(\omega-k\omega_s) $

$ \mathcal{X}_p(\omega)=\sum_{k=-\infty}^\infty \frac{1}{T}\mathcal{X}(\omega-k\omega_s) $

So $ \mathcal{X}_p(\omega) $ consists of copies of the original spectrum $ \mathcal{X}(\omega) $, shifted by $ k\omega_s $ and each scaled by $ \frac{1}{T} $. If $ \omega_s>2\omega_m $, these copies do not overlap, so $ \mathcal{X}(\omega) $, and therefore $ x(t) $, can be recovered by passing $ x_p(t) $ through an ideal low-pass filter with gain $ T $ and cutoff frequency between $ \omega_m $ and $ \omega_s-\omega_m $.
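The replication, and the aliasing that occurs when $ \omega_s<2\omega_m $, can also be seen numerically. The Python sketch below uses assumed example frequencies: it samples a 5 Hz cosine and reports the dominant frequency in the DFT of the sample sequence. Above the Nyquist rate the shifted copies do not overlap and the 5 Hz component is recovered; below it, a shifted copy folds back into the baseband.

import numpy as np

def apparent_frequency(fs, f0, n_samples=4096):
    # Dominant non-negative frequency in the samples x[n] = cos(2*pi*f0*n/fs).
    n = np.arange(n_samples)
    x_n = np.cos(2 * np.pi * f0 * n / fs)
    spectrum = np.abs(np.fft.rfft(x_n))
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

f0 = 5.0                                     # assumed signal frequency, Hz
print(apparent_frequency(fs=20.0, f0=f0))    # 5.0 Hz: omega_s > 2*omega_m, copies do not overlap
print(apparent_frequency(fs=8.0,  f0=f0))    # 3.0 Hz: a shifted copy overlaps the baseband (aliasing)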
