Useful Definitions for ECE301: signals and systems

Periodic CT Signal:

• A CT signal $x(t)$ is called periodic if there exists a period $T>0$ such that $x(t+T)=x(t)$ for all values of $t$. The fundamental period (denoted $T_0$) is the smallest of all periods of the signal.

In Mathspeak:

• $x(t) \text{ periodic} \iff \exists T>0 \text{ s.t. } x(t+T)=x(t), \ \forall t \in \mathbb{R}$

Periodic DT Signal:

• A DT signal $x[n]$ is called periodic if there exists an integer period $N>0$ such that $x[n+N]=x[n]$ for all values of $n$. The fundamental period (denoted $N_0$) is the smallest of all periods of the signal.

In Mathspeak:

• $x[n] \text{ periodic} \iff \exists N \in \mathbb{Z}, N>0, \text{ s.t. } x[n+N]=x[n], \ \forall n \in \mathbb{Z}$

Comment:

• The difference between CT and DT:

Note that the period N must be a positive integer in DT, but the period T in CT can be any positive real number.
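The DT definition can be sketched as a brute-force search for the smallest $N$ with $x[n+N]=x[n]$ over a finite window (the window size, tolerance, and example signal below are arbitrary choices of mine; a finite check can only suggest periodicity, never prove it):

```python
import math

def fundamental_period(x, n_max=64, tol=1e-9):
    """Search for the smallest N > 0 with x[n+N] ~= x[n] over a finite window.

    Passing this finite check only suggests periodicity; the definition
    requires equality for ALL integers n.
    """
    for N in range(1, n_max):
        if all(abs(x(n + N) - x(n)) < tol for n in range(n_max)):
            return N
    return None  # no period found within the window

# x[n] = cos(2*pi*n/8) has fundamental period N0 = 8
x = lambda n: math.cos(2 * math.pi * n / 8)
print(fundamental_period(x))               # → 8
print(fundamental_period(lambda n: n))     # → None (a ramp is not periodic)
```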

-Mimi (Wed, 26 Sep 2007 16:29:43)

Overview

For those who are CS geeks like William, a pertinent question is whether one can write a function that checks whether a system has a given property. I have been looking into symbolic math solvers to try to make them do this.

So far, my most promising lead is the AXIOM solver [1], which might have a LISP interface to its symbolic math engine. This would be ideal.

My notes so far are as follows:

Basics: Systems as "functions that operate on functions"

A system can be represented as a function system(x) that takes a function x(t) and returns another function y(t).

In Common LISP, an example of x might be:

(setq x (lambda (tt) (+ tt 5)))

This corresponds to x(t) = t + 5. Since t is a reserved constant in LISP, I use tt. A setq here probably isn't the "right way" to do things, but LISP is not my native language.

At any rate, a simple system to play with would be a time shift. You would represent this as follows (note the extra argument to, representing the amount of time to shift by):

(defun timeshift (x to) (lambda (tt) (funcall x (- tt to))))

You can then represent x(t) -> [Shift by to] -> y(t) as (in more broken LISP):

(setq y (timeshift x 'to))

At this point, you could in theory make a call

(funcall y tt)

to get y(tt) = x(tt - to) = tt + 5, but LISP doesn't do symbolic math natively. Thus, some work needs to be done to get AXIOM or some other engine to do the heavy lifting.

An example using clisp, with commentary:

(setq x (lambda (tt) (+ tt 5)))                                ; x(tt) = tt + 5
(defun timeshift (x to) (lambda (tt) (funcall x (- tt to))))   ; Define the time-shift function

Apply the timeshift to x with to=15.
This is akin to y(t) = x(t - 15) = t - 15 + 5 = t - 10

(setq y (timeshift x 15))

Finally, compute a value of y(t)
y(3) = 3 - 10 = -7

(funcall y 3)

Again, the lack of native symbolic math operations in LISP is extremely limiting, so a symbolic engine needs to be integrated somehow.
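For comparison, the clisp session above translates almost word-for-word into Python closures. This is a purely numeric sketch (no symbolic math here either), using the same x(t) = t + 5 and shift of 15:

```python
def timeshift(x, t0):
    """Return y with y(t) = x(t - t0) — the Python analogue of the Lisp timeshift."""
    return lambda t: x(t - t0)

x = lambda t: t + 5          # x(t) = t + 5
y = timeshift(x, 15)         # y(t) = x(t - 15) = t - 15 + 5 = t - 10
print(y(3))                  # → -7, matching y(3) = 3 - 10 = -7 above
```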

Testing properties

As you can see, using LISP to model systems is potentially a powerful method. The "functions operating on functions" way of thinking can be extended even further to the "system" functions (such as timeshift above) to prove things about systems.

Lacking a good symbolic solver, I cannot test the following code yet, but we might write a function "linear" that determines whether the given system function is linear or not.

(defun islinear (sysfunc)
  (let ((y1 (funcall sysfunc 'x1))          ; Define y1 = sysfunc(x1), y2 = sysfunc(x2)
        (y2 (funcall sysfunc 'x2)))
    (eq
     (+ (* 'a (funcall y1 'tt))             ; a*y1(t) + b*y2(t) = ...
        (* 'b (funcall y2 'tt)))
     (funcall
      (funcall sysfunc
               ;; ... = sysfunc(a*x1(t) + b*x2(t))(t) ?
               (lambda (tt) (+ (* 'a (funcall 'x1 tt))
                               (* 'b (funcall 'x2 tt)))))
      'tt))))
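Until a symbolic engine is hooked up, a purely numeric stand-in can at least falsify linearity by testing superposition at a handful of sample points (the sample inputs, points, and constants a, b below are arbitrary choices of mine — passing suggests linearity, failing disproves it):

```python
def seems_linear(system, ts=(-2.0, 0.5, 3.0), a=2.0, b=-3.0):
    """Numerically test superposition: system(a*x1 + b*x2) vs a*system(x1) + b*system(x2)."""
    x1 = lambda t: t + 5
    x2 = lambda t: t * t
    combo = lambda t: a * x1(t) + b * x2(t)
    lhs = system(combo)
    for t in ts:
        if abs(lhs(t) - (a * system(x1)(t) + b * system(x2)(t))) > 1e-9:
            return False
    return True

scale = lambda x: (lambda t: 4 * x(t))      # y(t) = 4*x(t): linear
square = lambda x: (lambda t: x(t) ** 2)    # y(t) = x(t)^2: not linear
print(seems_linear(scale), seems_linear(square))  # → True False
```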


Properties of Systems

There are five general properties of systems that are introduced in this homework. These include systems with and without memory, time invariant systems, linear systems, causal systems and stable systems. This post will detail how to check if a system exhibits these general properties.

1. Systems with and without memory:

• Def: A system is said to be memoryless if its output for each value of the independent variable at a given time is dependent only on the input at that same time.
• Proving: This is very simple and can often be done by visual inspection alone. If there is any kind of time shift, the system depends on values other than the input at the current time and is NOT memoryless. Also, if the system is described by an accumulator or summer, it again depends on values other than the input at the current time and is NOT memoryless. Note that in a system comprised of a linear combination of sub-systems, if any one of the sub-systems has memory then the entire system has memory; all sub-systems must be memoryless for the overall system to be memoryless.

2. Time Invariant Systems:

• Def: A system is time invariant if a time shift in the input signal results in an identical time shift in the output signal.
• Proving: In equation form, a system with output y(t) for input x(t) is time invariant if the input x(t-t0) produces the output y(t-t0). Plug in "t-t0" for every "t" inside the input signal, simplify, and compare with y(t-t0). If the two match, the system is time invariant. The easy way: if there are any "t's" outside the function x(t) [e.g. t*x(t)], the system is NOT time invariant.
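The "plug in t-t0" check can be mimicked numerically: run the system on a shifted input and compare against the shifted output (the test signal, shift, and sample points below are arbitrary assumptions of mine; a sketch, not a proof):

```python
def seems_time_invariant(system, t0=2.0, ts=(-1.0, 0.0, 3.5)):
    """Compare system(x(. - t0))(t) against system(x)(t - t0) at sample points."""
    x = lambda t: t ** 3 - t                 # an arbitrary test input
    shifted_in = lambda t: x(t - t0)
    y = system(x)
    y_of_shifted = system(shifted_in)
    return all(abs(y_of_shifted(t) - y(t - t0)) < 1e-9 for t in ts)

delay = lambda x: (lambda t: x(t - 3))       # y(t) = x(t-3): time invariant
ramp_gain = lambda x: (lambda t: t * x(t))   # y(t) = t*x(t): t outside x(.), not TI
print(seems_time_invariant(delay), seems_time_invariant(ramp_gain))  # → True False
```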

3. Linear Systems:

• Def: A system is linear if superposition holds: for any two inputs $x_1(t)$ and $x_2(t)$ with outputs $y_1(t)$ and $y_2(t)$, the input $a\cdot x_1(t) + b\cdot x_2(t)$ produces the output $y_3(t) = a\cdot y_1(t) + b\cdot y_2(t)$.
• Proving: Use the equation above to prove. Example 1.17 in the text shows a nice overview. Basically, consider two arbitrary inputs and their respective outputs. A third input is considered to be a linear combination of the first two inputs. Write the output and substitute the third input for the linear combination. Separate the a and b variables. If you can arrange the equation so that the output of the third input is the linear combination of the first two outputs, then the system is linear.

4. Causal Systems:

• Def: A system is causal if the output at any time depends only on the values of the input at the present time and in the past.
• Proving: Consider each component of the system separately. If there is no time shift, the system depends only on the present time and is causal. If there is a time shift, determine whether it reaches into past or future time; if it is past time, then the system is causal. When the system is reflected about the y-axis [i.e. x(-t)], it is possible that for some values of t the system looks into past time and for others into the future. Determine these values: the system is causal for the values of t that require only past or present inputs, and is NOT causal otherwise.

5. Stable Systems:

• Def: A system is stable if a bounded input function yields a bounded output function.
• Proving: Consider y(t) = x(t). If the input function is bounded, then $|x(t)| \le B$ for some finite bound $B$. Consider the end behavior of all combinations of minimum and maximum values of x(t). If the output stays bounded, the system IS stable. (Look at Example 1.13 in the text for further instruction.)
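As a DT illustration of the definition, an accumulator (running sum) turns the bounded input x[n] = 1 into an output that grows without bound, so it is not stable (a minimal sketch):

```python
def accumulator(x, n):
    """DT running sum: y[n] = sum of x[k] for k = 0..n."""
    return sum(x(k) for k in range(n + 1))

x = lambda n: 1.0  # bounded input: |x[n]| <= 1 for all n
outputs = [accumulator(x, n) for n in (10, 100, 1000)]
print(outputs)  # → [11.0, 101.0, 1001.0] — grows without bound: NOT stable
```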

... --brian.a.baumgartner.1, Wed, 05 Sep 2007 11:20:49

I was mistaken in article 4. Just because a system does not have a time delay does not PROVE that the system is causal; I wrote this without considering time scaling. I daresay that if a system has neither a time delay nor time scaling then it is causal; however, I suggest doing the math to back this up. To find when the system is causal and when it is not, consider the system y(t) = x(at+b). Next set up the following inequality:

at+b <= t

The system IS causal as long as this inequality holds true. It is NOT causal for at+b > t.
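The inequality is easy to wrap in a tiny helper (a sketch; `causal_at` is a name of my own invention):

```python
def causal_at(a, b, t):
    """True when the sample x(a*t + b) needed at time t lies in the past or present."""
    return a * t + b <= t

# y(t) = x(2t): needs future values whenever 2t > t, i.e. for t > 0
print(causal_at(2, 0, -1), causal_at(2, 0, 1))       # → True False
# y(t) = x(t - 3): pure delay, causal for every t
print(all(causal_at(1, -3, t) for t in (-5, 0, 5)))  # → True
```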

We talked in class about how, in a cascaded system, the "time coordinates" change, and how you have to keep track of them as they propagate down the system through other transforms if you are trying to find the final output of the system. This explanation seems somewhat confusing, so I tried looking at it in a different manner. It seems like we are trying to find the equation for the output by starting at the input, like a signal would, writing all the transforms or "things" that happen to the signal using different variables, and then going back and substituting so it all works out. I did the reverse: I started with the output and "built up" the total effect (the output equation) from what happened "most recently" (that is to say, the present is at the output and the input is the past). That way you don't have to worry about keeping track of different substitution variables (Mimi used squares and squiggles) or the "time coordinates".

A very simple method is as such:

1. Start from the output, take the last transform and put it in parenthesis. It's in a nice package now, it's done, don't touch it.
2. Take that "package" and drop it right into the next "most recent" (one to the left) transform (substitute it for t). Put that in parenthesis, it's your new package.
3. Keep going until you run out of transforms.
4. If so inclined, simplify.

Example:

Sys 1: y1(t) = x(2t)
Sys 2: y2(t) = x(t-3)
Input -> Sys 1 -> Sys 2 -> Output
Start from the output, take the "most recent" transform (Sys 2) and put it in parenthesis, so: (t-3)
Next, take the next most recent transform (Sys 1), and drop your (t-3) in it (substitute your "package" of (t-3) for t): 2(t-3)
Simplify: 2t-6
Done! Don't forget it is a transform of a function, not a function itself, so you need to state it as such: z(t) = x(2t-6)

So, put very simply, start at the output and substitute, in iterations, towards the input. That's all you really need to know; I just thought if I was verbose and used analogies I might score bonus points. Keep in mind that all the transforms deal with the independent variable, in this case time. Also, in the spirit of the kiwi, I could be completely wrong about everything. :) So seriously, someone check my work.
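The substitution recipe above is just function composition, and the worked example can be checked numerically (the test input x(t) = t^2 + 1 is an arbitrary choice of mine):

```python
sys1 = lambda x: (lambda t: x(2 * t))      # Sys 1: y1(t) = x(2t)
sys2 = lambda x: (lambda t: x(t - 3))      # Sys 2: y2(t) = x(t-3)

x = lambda t: t ** 2 + 1                   # arbitrary test input
z = sys2(sys1(x))                          # Input -> Sys 1 -> Sys 2 -> Output
direct = lambda t: x(2 * t - 6)            # the hand-derived z(t) = x(2t - 6)

print(all(z(t) == direct(t) for t in (-2, 0, 1, 4)))  # → True
```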

Determining the effects of Transforms of the Independent Variable (Time) in the form x(at + b) --michael.a.mitchell.2, Sun, 02 Sep 2007 12:20:52

If you are trying to find the effect of a transform in the form of x(at + b), you should:

1. Delay or advance x(t) by the value of b. (Advance if b>0, delay if b<0)
2. Then scale/reverse time by the value of a. (Compress if |a|>1, Stretch if |a|< 1, Reverse time if a<0)
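The two steps can be checked against evaluating x(at + b) directly (a sketch with an arbitrary test signal and the assumed values a = 2, b = 3):

```python
x = lambda t: 3 * t - 7     # arbitrary test signal
a, b = 2, 3                 # target transform: x(2t + 3)

step1 = lambda t: x(t + b)        # 1. advance by b (here b > 0)
step2 = lambda t: step1(a * t)    # 2. compress by a (here |a| > 1)

direct = lambda t: x(a * t + b)
print(all(step2(t) == direct(t) for t in (-3, 0, 2)))  # → True
```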

Properties of Convolution and LTI Systems

Linear Time Invariant (LTI) systems have properties that arise from the properties of convolution.

Property 1: Convolution is Commutative

$x_1(t)*x_2(t) = x_2(t)*x_1(t)$

System Example: Convolving the input to a system with its impulse response is the same as convolving the impulse response with the input.

Property 2: Convolution is Distributive

$\displaystyle x_1(t)* \left ( x_2(t)+x_3(t) \right ) = x_1(t)*x_2(t)+x_1(t)*x_3(t)$

System Example: Convolving a single input with two impulse responses then adding the output is the same as convolving the input with the sum of the impulse responses.

Property 3: Convolution is Associative

$\left( x_1(t)*x_2(t) \right)*x_3(t) = x_1(t)*\left( x_2(t)*x_3(t) \right)$

System Example: Convolving an input with an impulse response, and then convolving the result with the impulse response of another system, is the same as first convolving the two impulse responses and then convolving the input with the result.
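All three properties are easy to confirm on finite DT sequences with a brute-force convolution (a minimal sketch; the sequences are arbitrary choices of mine, and x3 is zero-padded so the distributive check can add sequences elementwise):

```python
def convolve(a, b):
    """Full discrete convolution of two finite sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def close(u, v, tol=1e-9):
    return len(u) == len(v) and all(abs(p - q) < tol for p, q in zip(u, v))

x1 = [1.0, 2.0, -1.0]
x2 = [0.5, 0.0, 3.0]
x3 = [2.0, -2.0, 0.0]   # zero-padded to x2's length so x2 + x3 is defined

# Commutative: x1*x2 == x2*x1
print(close(convolve(x1, x2), convolve(x2, x1)))                               # → True
# Distributive: x1*(x2 + x3) == x1*x2 + x1*x3
s = [p + q for p, q in zip(x2, x3)]
rhs = [p + q for p, q in zip(convolve(x1, x2), convolve(x1, x3))]
print(close(convolve(x1, s), rhs))                                             # → True
# Associative: (x1*x2)*x3 == x1*(x2*x3)
print(close(convolve(convolve(x1, x2), x3), convolve(x1, convolve(x2, x3))))   # → True
```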

Convolution Simplification

Convolution of Unit Step Function:

To take a convolution, first determine whether the system is CT or DT and use the correct formula. Next it's time to simplify. Originally the bounds run from negative to positive infinity; the unit step function determines the new set of bounds. Consider the following unit step function as an example: $u(2t-1)$. This function is zero as long as $(2t-1)$ is less than 0, i.e. for $t < \frac{1}{2}$. Solve for t and apply the new bounds. Next it's time for the real work!

Convolution of Delta Function:

Consider $\delta (ax+b)$. The delta function is nonzero only where its argument is zero, i.e. at $x = -\frac{b}{a}$; recall also the scaling property $\delta(ax+b) = \frac{1}{|a|}\delta\!\left(x+\frac{b}{a}\right)$. The integral (or sum) therefore collapses to the integrand evaluated at that single point (with the extra factor $\frac{1}{|a|}$ in CT), so substitute accordingly and solve.
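In DT the same idea is the sifting property: the sum collapses to the signal evaluated where the impulse's argument is zero (a minimal sketch with an arbitrary x[n] and k = 4):

```python
delta = lambda n: 1 if n == 0 else 0   # DT unit impulse

x = lambda n: n * n + 1
k = 4
# Sifting: sum over n of x[n] * delta[n - k] collapses to x[k]
sifted = sum(x(n) * delta(n - k) for n in range(-20, 21))
print(sifted, x(k))  # → 17 17
```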

Framework for computing the CT Convolution of two unit step exponentials

Let's take the convolution of the two most general unit-step exponentials in CT.

This solution can be very helpful in checking your work for convolutions of this form. Just plug in your numbers for the capital letters.

(I know this is kinda long, but it is very detailed to show the process of how to get to the general simplified solution. Note that the derivation assumes $B \ne G$ and $D, I > 0$.)

$x_1(t)=Ae^{Bt+C}u(Dt+E) \qquad x_2(t)=Fe^{Gt+H}u(It+J)$

\begin{align} x_1(t)*x_2(t) &= \int_{-\infty}^{\infty}x_1(\tau)x_2(t-\tau)d\tau \\
&=\int_{-\infty}^{\infty}Ae^{B\tau+C}u(D\tau+E)Fe^{G(t-\tau)+H}u(I(t-\tau)+J)d\tau \\
&=AF\int_{-\infty}^{\infty}e^{B\tau+C+G(t-\tau)+H}u(D\tau+E)u(It-I\tau+J)d\tau \quad \left(u(D\tau+E)=0 \text{ for } D\tau+E<0 \;\rightarrow\; \tau<\tfrac{-E}{D}\right) \\
&=AF\int_{\frac{-E}{D}}^{\infty}e^{\tau(B-G)+Gt+C+H}u(It-I\tau+J)d\tau \quad \left(u(It-I\tau+J)=0 \text{ for } It-I\tau+J<0 \;\rightarrow\; \tau>t+\tfrac{J}{I}\right) \\
&=AF\int_{\frac{-E}{D}}^{t+\frac{J}{I}}e^{\tau(B-G)+Gt+C+H}d\tau\cdot u\!\left(t+\tfrac{J}{I}+\tfrac{E}{D}\right) \\
&=AFe^{Gt+C+H}\int_{\frac{-E}{D}}^{t+\frac{J}{I}}e^{\tau(B-G)}d\tau\cdot u\!\left(t+\tfrac{J}{I}+\tfrac{E}{D}\right) \\
&=AFe^{Gt+C+H}\frac{1}{B-G}\left[e^{\tau(B-G)}\right]_{\frac{-E}{D}}^{t+\frac{J}{I}}\cdot u\!\left(t+\tfrac{J}{I}+\tfrac{E}{D}\right) \\
&=AFe^{Gt+C+H}\frac{1}{B-G}\left(e^{(t+\frac{J}{I})(B-G)}-e^{\frac{-E}{D}(B-G)}\right)\cdot u\!\left(t+\tfrac{J}{I}+\tfrac{E}{D}\right) \\
&=\frac{AF}{B-G}\left(e^{Gt+C+H+(t+\frac{J}{I})(B-G)}-e^{Gt+C+H-\frac{E}{D}(B-G)}\right)\cdot u\!\left(t+\tfrac{J}{I}+\tfrac{E}{D}\right) \\
&=\frac{AF}{B-G}\left(e^{Bt+C+H+\frac{J}{I}(B-G)}-e^{Gt+C+H+\frac{E}{D}(G-B)}\right)\cdot u\!\left(t+\tfrac{J}{I}+\tfrac{E}{D}\right) \end{align}

Example: Problem 2 on Fall 06 Midterm 1:

$Let:\;x_1(t)=x(t)=e^{-2t}u(t) \qquad x_2(t)=h(t)=u(t)$

$Thus:\;A=1,\;B=-2,\;C=0,\;D=1,\;E=0,\;F=1,\;G=0,\;H=0,\;I=1,\;J=0$

\begin{align} x(t)*h(t)&=x_1(t)*x_2(t) \\ &=\frac{1\cdot1}{-2-0}(e^{-2t+0+0+\frac{0}{1}(-2-0)}-e^{0t+0+0+\frac{0}{1}(0--2)})\cdot u(t+\frac{0}{1}+\frac{0}{1}) \\ &=\frac{-1}{2}(e^{-2t}-1)\cdot u(t) \\ &=\frac{1}{2}(1-e^{-2t})\cdot u(t) \end{align}
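The closed form for this example can be sanity-checked against a brute-force Riemann-sum convolution (the step size and tolerance below are arbitrary choices; this checks only this one instance of the framework):

```python
import math

# closed form from the midterm example above: y(t) = (1/2)(1 - e^{-2t}) u(t)
def y_closed(t):
    return 0.5 * (1.0 - math.exp(-2.0 * t)) if t >= 0 else 0.0

# brute-force convolution of x(t) = e^{-2t} u(t) with h(t) = u(t):
# u(t - tau) restricts the integral to tau in [0, t]
def y_numeric(t, dt=1e-4):
    if t <= 0:
        return 0.0
    return sum(math.exp(-2.0 * k * dt) * dt for k in range(int(t / dt)))

print(all(abs(y_closed(t) - y_numeric(t)) < 1e-3 for t in (0.5, 1.0, 2.0)))  # → True
```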

Definition of Sampling Theorem

A band-limited signal can be recovered from its samples if the sampling frequency $\omega_s$ is greater than $2\omega_m$, where $\omega_m$ is the cut-off frequency of the signal (the highest frequency it contains). The sampling period is $T = \frac{2\pi}{\omega_s}$.

Received $\frac{7}{10}$ because didn't specify cut-off frequency of what and should have used "recovered from sampling" instead of "recovered by sampling."

Example of CT convolution

This is an example of convolution done two ways on a fairly simple general signal.

$x(t) = u(t)$
$h(t) = {e}^{-\alpha t}u(t), \alpha > 0$

Now, to convolve them...

1. $y(t) = x(t)*h(t) = \int_{-\infty}^{\infty}x(\tau)h(t-\tau)d\tau$
2. $y(t) = \int_{-\infty}^{\infty}u(\tau){e}^{-\alpha (t-\tau)}u(t-\tau)d\tau$
3. Since $u(\tau)u(t-\tau) = 0$ when $\tau < 0$ and also when $\tau > t$ (so the whole integral is 0 for t < 0), you can set the limits accordingly. Keep in mind the following steps (4&5) are for t > 0; otherwise the function equals 0.

4. $y(t) = \int_{0}^{t} {e}^{-\alpha (t-\tau)}d\tau = {e}^{-\alpha t} \int_{0}^{t}{e}^{ \alpha \tau}d\tau$
5. $y(t) = {e}^{-\alpha t}\frac{1}{\alpha}({e}^{\alpha t}-1) = \frac{1}{\alpha}(1-{e}^{-\alpha t})$
6. Now you can replace the condition in steps 4&5 with a u(t).

7. $y(t) = \frac{1}{\alpha}(1-{e}^{-\alpha t})u(t)$.

Now, the other way... (by the commutative property)

1. $y(t) = h(t)*x(t) = \int_{-\infty}^{\infty}h(\tau)x(t-\tau)d\tau$
2. $y(t) = \int_{-\infty}^{\infty}{e}^{-\alpha (\tau)}u(\tau)u(t-\tau)d\tau$
3. Since $u(\tau)u(t-\tau) = 0$ when $\tau < 0$ and also when $\tau > t$ (so the whole integral is 0 for t < 0), you can set the limits accordingly. Keep in mind the following step (4) is for t > 0; otherwise the function equals 0.

4. $y(t) = \int_{0}^{t} {e}^{-\alpha \tau}d\tau = \frac{1}{\alpha}(1-{e}^{-\alpha t})$
5. Now you can replace the condition in step 4 with a u(t).

6. $y(t) = \frac{1}{\alpha}(1-{e}^{-\alpha t})u(t)$

End

Name --dennis.m.snell.1, Sun, 30 Sep 2007 22:25:27

Name --michael.a.mitchell.2, Mon, 01 Oct 2007 15:54:00

Wasn't sure if the authorship issue had been solved yet. (In class it was said that only the last person to make a change to a page would be credited with its authorship.)

Name --dennis.m.snell.1, Mon, 01 Oct 2007 16:51:27

The authorship issue was not an issue. It was mentioned in class, but by a student asking about it. There is a log of every action and every edit on this kiwi that can be reviewed each week. You are safe in leaving out your name. Sometime soon the editing will be reworked, and you might add your name to some other special page, but it would just get lost at the bottom of a topic. I removed your name here, but worry not, you are not forgotten.

... --john.w.fawcett.1, Mon, 15 Oct 2007 11:15:56

why is this under "Exams" as its parent? Wouldn't Chapter 3 be better?

... --john.w.fawcett.1, Mon, 15 Oct 2007 11:18:06

Sorry, meant Chapter 2. I'll go ahead and add a backlink to chapter 2, but leave this one to Exams up for now.

Coefficient LTI Transfer

When transferring coefficients of a Fourier series through an LTI system, each coefficient $a_k$ is multiplied by the frequency response evaluated at that harmonic's frequency, $H(\jmath k \omega_0)$, in the system output. Therefore...

$\displaystyle a_k \rightarrow a_k \cdot H(\jmath k \omega_0)$

The image below illustrates the process of taking the value of the frequency response function at each frequency of the coefficients and then multiplying by that value to yield the transformed coefficient values.

Note that when $H(\jmath \omega)$ is negative, the sign of the value of the coefficient is flipped.
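A sketch of the coefficient transfer, assuming a hypothetical ideal low-pass $H$ with cutoff 5 rad/s and a made-up set of input coefficients (all values here are my own assumptions for illustration):

```python
w0 = 2.0                                     # fundamental frequency (assumed)
H = lambda w: 1.0 if abs(w) <= 5.0 else 0.0  # hypothetical ideal low-pass, cutoff 5 rad/s

a = {-3: 0.1, -1: 0.5, 0: 1.0, 1: 0.5, 3: 0.1}   # input coefficients a_k (made up)
b = {k: ak * H(k * w0) for k, ak in a.items()}   # output coefficients a_k * H(j k w0)
print(b)  # → {-3: 0.0, -1: 0.5, 0: 1.0, 1: 0.5, 3: 0.0} — harmonics beyond the cutoff are zeroed
```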

(Figures: transformed coefficients for Low Pass, High Pass, Band Pass, and Band Reject filters.)

Duality

I found something interesting when you use duality on the same transform pair over and over... Note: I cannot find a way to display a proper Fourier symbol, so I went with the "\displaystyle {\bf F}" as seen below.

Later note: I found the fraktur typeface looks kind of like a script F, which is "\mathfrak {F}", instead of above. --Mike Walker

$(1)\; \mathfrak {F} (e^{-at}u(t))=\frac{1}{a+j\omega}$

By Duality of (1):

$(2)\; \mathfrak {F} (\frac{1}{a+jt})=2\pi e^{a\omega}u(-\omega)$

By Duality of (2) (and interestingly Time Reversal of (1)):

$(3)\; \mathfrak {F} (2\pi e^{at}u(-t))=\frac{2\pi}{a-j\omega}$

By Linearity of (3) the $2\pi$ divides out of both sides:

$(4)\; \mathfrak {F} (e^{at}u(-t))=\frac{1}{a-j\omega}$

By Duality of (4) (and again interestingly Time Reversal of (2)):

$(5)\; \mathfrak {F} (\frac{1}{a-jt})=2\pi e^{-a\omega}u(\omega)$

By Duality of (5) we see that we get back to (1).

If this is done in a more general sense, it becomes clear that it is only necessary to take the dual of a Fourier transform pair once. After taking the dual once, one might as well use time reversal. Taking the dual four times will always result in the original pair again after the extra $2\pi$'s are divided out.
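Pair (1) itself can be sanity-checked by numerically approximating the transform integral $\int_0^\infty e^{-at}e^{-j\omega t}dt$ (the truncation length, step size, and values a = 2, ω = 3 below are arbitrary choices of mine):

```python
import math

def ft_numeric(w, a=2.0, T=20.0, dt=1e-4):
    """Riemann-sum approximation of the Fourier transform of e^{-a t} u(t) at frequency w."""
    re = im = 0.0
    for k in range(int(T / dt)):
        t = k * dt
        mag = math.exp(-a * t) * dt
        re += mag * math.cos(w * t)   # real part of e^{-j w t}
        im -= mag * math.sin(w * t)   # imaginary part of e^{-j w t}
    return complex(re, im)

a, w = 2.0, 3.0
exact = 1.0 / complex(a, w)                  # pair (1): 1/(a + j*omega)
print(abs(exact - ft_numeric(w, a)) < 1e-3)  # → True
```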

Fourier Transform Table

Time Domain Fourier Domain
$x(t)=\frac{1}{2 \pi} \int_{-\infty}^{\infty} X(j \omega)e^{j \omega t}d \omega$ $X(j \omega)=\int_{-\infty}^\infty x(t) e^{-j \omega t}d t$
$1$ $2 \pi \delta (\omega)$
$-0.5+u(t)$ $\frac{1}{j \omega}$
$\delta (t)$ $1$
$\delta (t-c)$ $e^{-j \omega c}$
$u(t)$ $\pi \delta(\omega)+\frac{1}{j \omega}$
$e^{-bt}u(t)$ $\frac{1}{j \omega + b}$
$\cos \omega_0 t$ $\pi [\delta ( \omega + \omega_0 ) + \delta ( \omega - \omega_0 ) ]$
$\cos ( \omega_0 t + \theta )$ $\pi [ e^{-j \theta} \delta ( \omega + \omega_0 ) + e^{j \theta} \delta ( \omega - \omega_0 )]$
$\sin \omega_0 t$ $j \pi [ \delta ( \omega + \omega_0 ) - \delta ( \omega - \omega_0 ) ]$
$\sin ( \omega_0 t + \theta )$ $j \pi [ e^{-j \theta} \delta ( \omega + \omega_0 ) - e^{j \theta} \delta ( \omega - \omega_0 ) ]$
$\operatorname{rect} \left ( \frac{t}{\tau} \right )$ $\tau \operatorname{sinc} \frac{\tau \omega}{2 \pi}$
$\tau \operatorname{sinc} \frac{\tau t}{2 \pi}$ $2 \pi p_\tau ( \omega )$
$\left ( 1-\frac{2 |t|}{\tau} \right ) p_\tau (t)$ $\frac{\tau}{2} \operatorname{sinc}^2 \frac{\tau \omega}{4 \pi}$
$\frac{\tau}{2} \operatorname{sinc}^2 \left ( \frac{\tau t}{4 \pi} \right )$ $2 \pi \left ( 1-\frac{2|\omega|}{\tau} \right ) p_\tau (\omega)$

Notes:

$\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$ (the normalized sinc; this is the convention the table entries use)
$p_\tau (t)$ is the rectangular pulse function of width $\tau$

Source courtesy Wikibooks.org

Convergence of Fourier Transforms

Consider $X(j\omega)$ evaluated according to Equation 4.9:

$X(j\omega) = \int_{-\infty}^\infty x(t)e^{-j \omega t} dt$

and let $\hat{x}(t)$ denote the signal obtained by using $X(j\omega)$ in the right-hand side of Equation 4.8:

$\hat{x}(t) = \frac{1}{2\pi} \int_{-\infty}^\infty X(j\omega)e^{j \omega t} d\omega$

If $x(t)$ has finite energy, i.e., if it is square integrable so that Equation 4.11 holds:

$\int_{-\infty}^\infty |x(t)|^2 dt < \infty$

then it is guaranteed that $X(j\omega)$ is finite, i.e., Equation 4.9 converges.

Let $e(t)$ denote the error between $\hat{x}(t)$ and $x(t)$, i.e. $e(t)=\hat{x}(t) - x(t)$; then Equation 4.12 follows:

$\int_{-\infty}^\infty |e(t)|^2 dt = 0$

Thus if $x(t)$ has finite energy, then although $x(t)$ and $\hat{x}(t)$ may differ significantly at individual values of $t$, there is no energy in their difference.
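As a concrete instance of the finite-energy condition, $x(t)=e^{-t}u(t)$ has energy $\int_0^\infty e^{-2t}dt = \frac{1}{2}$, which a quick Riemann sum confirms (the truncation length and step size below are arbitrary choices of mine):

```python
import math

# Energy of x(t) = e^{-a t} u(t): integral of e^{-2 a t} from 0 to infinity = 1/(2a)
def energy(a=1.0, T=30.0, dt=1e-4):
    return sum(math.exp(-2.0 * a * k * dt) * dt for k in range(int(T / dt)))

print(abs(energy() - 0.5) < 1e-3)  # → True: finite energy, so Equation 4.9 converges
```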