Basic Definition of Sampling

Sampling is the extraction of values of a continuous signal at fixed intervals. The faster we sample a signal, the more we learn about its frequency spectrum. Naturally, if the signal changes much faster than the sampling rate can capture, those changes will not be recorded accurately and aliasing occurs.

Nyquist Sampling Theorem

The Nyquist Sampling theorem states that, in order to capture all the frequency information of a bandlimited signal, the sampling frequency must be greater than twice the maximum frequency of the signal. In other words, each frequency component must be sampled more than twice per period.

Nyquist Sampling Criteria

$\displaystyle f_m=\text{The max frequency of the signal being sampled}$

$\displaystyle f_s=\text{The sampling frequency}$

$\displaystyle f_s > 2f_m$

There are several ways to think about this idea, if it is not already intuitive. First, consider a sinusoid of arbitrary frequency. The Nyquist Sampling theorem says we must sample at least two points within one period of this sinusoid in order to determine its frequency, given that we won't be doing any guesswork. Now consider what the Fourier Transform is: a weighted sum of complex exponentials. For a real signal, the Fourier Transform allows us to break the signal up into a sum of sines and cosines of varying magnitude and phase, all of which is conveniently packaged within the complex coefficients of the transform. So if a sinusoid must be sampled twice within a single period to determine its frequency, then once we break a signal up into a sum of sinusoids, the sampling frequency must be fast enough to properly sample the fastest sinusoid of which the signal is composed.
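A small numerical sketch of what goes wrong when the criterion is violated (the frequencies below are illustrative choices, not from the text): a 3 Hz sinusoid sampled at only 4 Hz produces exactly the same samples as a 1 Hz sinusoid, so the two are indistinguishable after sampling.

```python
import numpy as np

fs = 4.0          # sampling frequency, Hz -- violates fs > 2 * 3 Hz
n = np.arange(32)
t = n / fs        # sample instants

x_fast = np.cos(2 * np.pi * 3.0 * t)   # 3 Hz signal, undersampled
x_alias = np.cos(2 * np.pi * 1.0 * t)  # 1 Hz signal: its alias

# The two sample sequences are identical to machine precision.
print(np.allclose(x_fast, x_alias))  # True
```

This is aliasing in its purest form: the 3 Hz component has folded down to 1 Hz and no amount of post-processing can tell the two apart.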

The Sampling Process

In theory, here is how we would like to sample our signals.

Step 1: Begin with a continuous function x(t).

Step 2: Sample x(t) using an impulse generator or comb function.

Dirac Comb or Impulse Train:

$p_T(t)=\sum_{k=-\infty}^\infty\delta(t-kT_s)$

Sampling of x(t):

$x_s(t)=p_T(t)x(t)=\sum_{k=-\infty}^\infty x(kT_s)\delta(t-kT_s)$

Step 3: Discretize the signal.

$\displaystyle x[n]=x(nT_s),\quad T_s=1/f_s$

After Step 3, the signal is ready to be put through a discrete filter.
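The three steps above can be sketched numerically. Under the idealization, the impulse weights in Step 2 are exactly the values $x(kT_s)$, so Step 3 amounts to reading those weights off as a discrete sequence (the signal and rates below are assumed for illustration):

```python
import numpy as np

fs = 1000.0            # sampling frequency, Hz (assumed)
Ts = 1.0 / fs
f0 = 50.0              # signal frequency, well below fs / 2

def x(t):
    return np.sin(2 * np.pi * f0 * t)   # Step 1: continuous x(t)

# Steps 2-3: the impulse train picks out x(k * Ts); those weights
# become the discrete-time sequence x[n].
n = np.arange(100)
x_n = x(n * Ts)
```

The array `x_n` is now a discrete-time signal ready for a discrete filter.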

It is important to note that this is an idealization of the sampling process. To adhere to the Nyquist sampling theorem, the sampling frequency must be more than twice the maximum frequency, but often we do not know what the maximum frequency of the signal is. To minimize the effects of aliasing, the signal is first put through a lowpass filter. This effectively sets the maximum frequency of the signal equal to the cutoff frequency of the filter, which lets us choose a sampling frequency that satisfies the Nyquist Sampling theorem. This reduces the effects of aliasing, but may also distort the signal, since higher frequencies are inevitably lost. We also cannot generate a true impulse in real life; the actual methods used to sample a continuous time signal will be introduced in sampling part 2. Finally, a sampled signal must be quantized before it can be processed digitally. This is because digital filters are limited in what numbers they can represent, which depends on the number of bits the hardware provides.
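A sketch of the anti-aliasing step described above, using SciPy's Butterworth design as a stand-in for the lowpass filter (the rates, filter order, and cutoff are assumptions for illustration): before reducing the rate from 8 kHz to 1 kHz, the signal is lowpassed so that its content lies below the new Nyquist frequency of 500 Hz.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs_in = 8000.0    # original sampling rate, Hz (assumed)
fs_out = 1000.0   # target rate, Hz -> new Nyquist frequency is 500 Hz
t = np.arange(0, 1.0, 1.0 / fs_in)

# A 100 Hz tone we want to keep, plus a 3 kHz tone that would alias.
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 3000 * t)

# 6th-order Butterworth lowpass, cutoff at the new Nyquist frequency
# (normalized to half the input sampling rate).
b, a = butter(6, (fs_out / 2) / (fs_in / 2))
x_filtered = lfilter(b, a, x)

# Resample to 1 kHz by keeping every 8th sample.
x_sampled = x_filtered[:: int(fs_in / fs_out)]
```

After filtering, the 3 kHz tone is attenuated by several orders of magnitude, so the spectrum of `x_sampled` is dominated by the 100 Hz component rather than an alias.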

To get a better understanding of what is actually happening between Steps 1-3, it is good to observe the frequency domain representation of the signal as it passes through each stage of the sampling process. The following explanation adheres to the idealization of the sampling process.

From a Frequency Standpoint

Step 1: The signal x(t) may be periodic or aperiodic. If the signal is periodic, the frequency domain representation is discrete. If the signal is aperiodic, the frequency domain representation is continuous. A good way to remember this is to remember that sampling in time is equivalent to convolving the frequency domain representation of your signal with an impulse train in the frequency domain. Conversely, sampling in the frequency domain is equivalent to convolving the time domain representation of your signal with an impulse train.

$P_T(f)=\frac{1}{T_s}\sum_{k=-\infty}^\infty\delta(f-kf_s)$

Step 2: Multiplying the signal x(t) by the Dirac comb $p_T(t)$ is equivalent to convolving the frequency domain representation of x(t) with the frequency domain representation of $p_T(t)$. Since the Fourier Transform of the comb is also an impulse train in the frequency domain, the convolution of X(f) with $P_T(f)$ simply makes copies of X(f) at each impulse, with the magnitude of X(f) scaled by the sampling frequency. The sampled signal now has a frequency domain representation which is periodic with respect to the sampling frequency.

$X_s(f)=X(f)*P_T(f)=\frac{1}{T_s}\sum_{k=-\infty}^\infty X(f-kf_s)$
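The periodicity of $X_s(f)$ can be checked numerically by evaluating the spectrum of the sampled signal directly from its samples, $X_s(f)=\sum_k x(kT_s)e^{-j2\pi f kT_s}$ (a finite sum here; the signal and frequencies below are illustrative assumptions):

```python
import numpy as np

fs = 200.0
Ts = 1.0 / fs
k = np.arange(512)
# A decaying 30 Hz tone, sampled at 200 Hz (assumed example signal).
x_k = np.exp(-k * Ts) * np.sin(2 * np.pi * 30 * k * Ts)

def Xs(f):
    # Spectrum of the sampled signal, evaluated at frequency f.
    return np.sum(x_k * np.exp(-2j * np.pi * f * k * Ts))

# Evaluating one full period (fs) apart gives the same complex value.
print(np.isclose(Xs(37.0), Xs(37.0 + fs)))  # True
```

This works because $e^{-j2\pi(f+f_s)kT_s}=e^{-j2\pi f kT_s}e^{-j2\pi k}=e^{-j2\pi f kT_s}$ for every integer k.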

Step 3: To discretize the sampled signal, the frequency axis must be rescaled so that the spectrum is periodic with respect to $2\pi$. This is because discrete time filters are periodic with respect to $2\pi$. The reason can be seen below.

$x_s(t)=\sum_{k=-\infty}^\infty x(kT_s)\delta(t-kT_s)$

$\mathcal{F}\lbrace x_s(t) \rbrace=\int_{-\infty}^{\infty}\sum_{k=-\infty}^\infty x(kT_s)\delta(t-kT_s)e^{-j\omega t}dt=\sum_{k=-\infty}^\infty x(kT_s)e^{-j\omega kT_s}$

The trick to this step is to realize that integrating a weighted sum of impulses simply picks out the weights (the sifting property). The result is the Discrete Time Fourier Transform (DTFT); to evaluate such summations in closed form, we often use the formula for the sum of a geometric series. Because $\omega$ appears only inside the complex exponential $e^{-j\omega n}$, and $e^{-j(\omega+2\pi)n}=e^{-j\omega n}$ for integer n, the DTFT is periodic with period $2\pi$ when plotted versus frequency.

$X(e^{j\omega})=\sum_{n=-\infty}^\infty x[n]e^{-j\omega n}$
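This $2\pi$ periodicity is easy to verify numerically for any finite sequence (the sequence below is an arbitrary illustration):

```python
import numpy as np

# An arbitrary finite discrete-time sequence.
x = np.array([1.0, 0.5, -0.25, 0.125, 2.0])
n = np.arange(len(x))

def dtft(w):
    # DTFT evaluated at angular frequency w (rad/sample).
    return np.sum(x * np.exp(-1j * w * n))

# Shifting w by 2*pi leaves the DTFT unchanged, since
# exp(-1j * (w + 2*pi) * n) == exp(-1j * w * n) for integer n.
print(np.isclose(dtft(0.7), dtft(0.7 + 2 * np.pi)))  # True
```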

$X(e^{jf_d})=X_s\left(\frac{f_d f_s}{2\pi}\right)$

To transform the continuous time sampled signal to its discrete time representation, let $f_c=\frac{f_d f_s - 2\pi f_s k}{2\pi}$ (for any integer k), where $f_s$ is the sampling frequency and $f_c$ is a frequency in the continuous time frequency domain representation. $f_d$ is the corresponding frequency for the discrete time representation of the sampled signal. Since the sampled signal is periodic in the frequency domain, the $2\pi k$ term accounts for this periodicity. How this is actually done will be discussed in sampling part 2.
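A quick numerical illustration of the mapping (the values are assumptions): with $f_s=100$ Hz and discrete frequency $f_d=\pi/2$ rad/sample, $k=0$ gives $f_c=25$ Hz while $k=-1$ gives $f_c=125$ Hz. Both continuous frequencies map to the same discrete frequency, so they produce identical sample sequences.

```python
import numpy as np

fs = 100.0
n = np.arange(64)

# f_d = pi/2 rad/sample corresponds to f_c = 25 Hz (k = 0)
x_25 = np.cos(2 * np.pi * 25.0 * n / fs)
# ...and also to f_c = 125 Hz (k = -1), one spectral period away.
x_125 = np.cos(2 * np.pi * 125.0 * n / fs)

print(np.allclose(x_25, x_125))  # True
```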

$X(e^{jf_d})=X_s(f_c)\arrowvert_{f_c=\frac{f_d f_s - 2\pi f_s k}{2\pi}}$

Now that the signal has been discretized, a discrete time filter may be applied to it.

Reconstructing the Signal

After the signal has been sampled, discretized, and processed in discrete time, it must be reconstructed. The discrete time signal is periodic with respect to $2\pi$. To place it back into the context of the continuous time domain, the frequency must be scaled again so that the frequency domain is periodic with respect to the sampling frequency. To do this, let $f_d=\frac{2\pi f_c - 2\pi f_s k}{f_s}$.

An impulse generator is used to create impulses separated by a time interval equal to the sampling period, with the weight of each impulse equal to the values of the discrete time signal. In the frequency domain, we now have the frequency domain representation of the filtered signal, periodic with respect to the sampling frequency. To extract just one period of this representation, the signal is convolved with $\mathrm{sinc}(t/T_s)$. This is equivalent to applying an ideal lowpass filter with a cutoff frequency of $\frac{f_s}{2}$ and magnitude of $T_s$. Remember that when the original signal was sampled, the frequency domain was scaled by $f_s$; multiplying by $T_s$ undoes this scaling. The end result is a perfect reconstruction of the processed signal.
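The sinc-interpolation formula implied above, $x_r(t)=\sum_n x[n]\,\mathrm{sinc}\left(\frac{t-nT_s}{T_s}\right)$, can be sketched with a finite sum (so the reconstruction is only approximate near the edges of the record; the signal and rates are assumptions for illustration):

```python
import numpy as np

fs = 100.0
Ts = 1.0 / fs
f0 = 5.0                 # signal frequency, well below fs / 2
n = np.arange(400)
x_n = np.sin(2 * np.pi * f0 * n * Ts)   # samples of the signal

def reconstruct(t):
    # Ideal (truncated) sinc interpolation; np.sinc is the
    # normalized sinc, sin(pi*u) / (pi*u).
    return np.sum(x_n * np.sinc((t - n * Ts) / Ts))

# At a point midway between two samples, deep inside the record,
# the reconstruction closely matches the continuous signal.
t0 = 2.005
print(abs(reconstruct(t0) - np.sin(2 * np.pi * f0 * t0)))
```

With an infinite sum and a signal satisfying the Nyquist criterion, this error would be exactly zero; the small residual here comes purely from truncating the sum to 400 samples.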