

Homework 1 collaboration area

Feel free to toss around ideas here. Feel free to form teams to toss around ideas. Feel free to create your own workspace for your own team. --Steve Bell 12:11, 20 August 2010 (UTC)

Here is my favorite formula:

$ f(a)=\frac{1}{2\pi i}\int_\gamma \frac{f(z)}{z-a}\ dz. $

Question from a student:

I have a question about p. 301, #33. I see in the back of the book that it is not a vector space, but I don't understand why. In the simplest case, I would think an identity matrix would satisfy the requirements mentioned in the book for #33. Isn't an identity matrix a vector space?

Answer from Bell:

One of the key elements of being a vector space is that the thing must be closed under addition. The set consisting of just the identity matrix is not a vector space because if I add the identity matrix to itself, I get a matrix with twos down the diagonal, and that isn't in the set. So it isn't closed under addition. (The set consisting of the ZERO matrix is a vector space, though.)
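In symbols, with the 2×2 identity for concreteness (a small check, not taken from the book):

$ I+I=\begin{bmatrix}1&0\\0&1\end{bmatrix}+\begin{bmatrix}1&0\\0&1\end{bmatrix}=\begin{bmatrix}2&0\\0&2\end{bmatrix}\notin\{I\}. $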

Another question from a student: On #24, I set up a non-square matrix and row reduced it to echelon form; from the rank I can state whether the row vectors are linearly dependent. Is it correct to transpose the original matrix and row reduce it the same way to find whether the column vectors are linearly dependent?
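That check can also be sanity-tested numerically. Here is a minimal sketch using the fact (stated in Bell's answer further down) that Rank(A) = Rank(A^T); the matrix below is made up purely for illustration and is not the book's #24:

 import numpy as np
 # Hypothetical 2x3 matrix, made up purely for illustration (not the book's #24).
 A = np.array([[1, 2, 3],
               [2, 4, 6]])
 # Row reducing A^T tests the columns of A; since rank(A) = rank(A^T),
 # both tests report the same rank (here 1, so the rows are dependent).
 print(np.linalg.matrix_rank(A))    # 1
 print(np.linalg.matrix_rank(A.T))  # 1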

Does a set of vectors have to contain the zero vector to be a vector space? If that is true, then wouldn't any set of linearly independent vectors fail to be a vector space?

Answer from Bell:

Yes, a vector space must contain the zero vector (because a constant times any vector has to be in the space if the vector is, even if the constant is zero).

The set of all linear combinations of a bunch of vectors is a vector space, and the zero vector is in there because one of the combinations involves taking all the constants to be zero.
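In symbols: for any vectors $ v_1,\dots,v_k $, the combination with all constants equal to zero gives

$ 0\cdot v_1+0\cdot v_2+\cdots+0\cdot v_k=\mathbf{0}, $

so the zero vector always lies in the span.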

Question from a student:

How do I start problem 6 (reducing the matrix) on page 301? I understand that the row and column space will be the same, but I'm not sure how to reduce the matrix to row-echelon form.

Answer from Bell:

You'll need to do row reduction. At some point you'll get

$ \begin{bmatrix} 1 & 1 & a \\ 0 & a-1 & 1-a \\ 0 & 1-a & 1-a^2 \end{bmatrix} $

At this point, you'll need to consider the case a=1. If a=1, the matrix is all 1's and the row space is just the linear span of

$ \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}. $

If a is not 1, then you can divide row two by a-1 and continue doing row reduction. I think you'll have one more case to consider when you try to deal with the third row after that.

[Q: Only one more case? Don't we need to consider the case in which a is not equal to 1 and also not equal to the value found from the third row after we continue the row reduction?]
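One way to see which case remains (a sketch based only on the intermediate matrix displayed above): adding row two to row three eliminates the second column and leaves

$ \begin{bmatrix} 0 & 0 & (1-a)+(1-a^2) \end{bmatrix} = \begin{bmatrix} 0 & 0 & (1-a)(a+2) \end{bmatrix}, $

so the third pivot vanishes only when a = 1 or a = -2. The case a = 1 is already handled, which leaves a = -2 as the one extra case; for every other value of a the pivot (1-a)(a+2) is nonzero, so no further splitting is needed.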

Question from student:

Also, what do you want for an answer to questions 22-25 (full proof, explanation, example, etc.)?

Answer from Bell:

For problems 22-25, I'd want you to explain the answer so that someone who has just read 7.4 would understand you and believe what you say. For example, the answer to #25 might start:

If the row vectors of a square matrix are linearly independent, then Rank(A) is equal to the number of rows. But Rank(A)=Rank(A^T). Etc.


[Note from a student about vector space]:

We can show whether a set of vectors forms a vector space by checking the following conditions:

1) The set is closed under vector addition.

2) The set is closed under multiplication by a scalar.

Example: show whether all vectors in $ R^3 $ which satisfy $ v_1+v_2=0 $ form a vector space:

Solution:

Any vector which satisfies the above condition can then be written as

$ V=\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}=\begin{bmatrix}v_1\\-v_1\\v_3\end{bmatrix} $

where $ v_3 $ can be anything (free variable).

To check the first condition, pick another vector $ W $ which satisfies the given condition and add it to the existing one. Then check whether the resulting vector still satisfies the restriction ($ v_1+v_2=0 $):

$ V+W=\begin{bmatrix}v_1\\-v_1\\v_3\end{bmatrix}+\begin{bmatrix}w_1\\-w_1\\w_3\end{bmatrix}=\begin{bmatrix}v_1+w_1\\-v_1-w_1\\v_3+w_3\end{bmatrix} $

Now we see that the general form of the final vector did not change: the first and the second components are still opposite, and the third is "free".

To check the second condition, we multiply the vector by a scalar constant (e.g. $ c $) and again see if it satisfies the restriction/condition:

$ c\cdot V=c\cdot\begin{bmatrix}v_1\\-v_1\\v_3\end{bmatrix}=\begin{bmatrix}c\cdot v_1\\-c\cdot v_1\\c\cdot v_3\end{bmatrix} $.

We see that the resulting vector still satisfies the condition $ v_1+v_2=0 $ (the first and second components are still opposite, and the third is "free"). Therefore, all vectors in $ R^3 $ for which $ v_1+v_2=0 $ form a vector space. Professor Bell, please correct me if something is not right here.
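For what it's worth, the two closure conditions can also be sanity-checked numerically. A minimal sketch with made-up values (the specific vectors and scalar are hypothetical, chosen only so that $ v_1+v_2=0 $ holds):

 import numpy as np
 # Made-up vectors of the form (v1, -v1, v3), so v_1 + v_2 = 0 holds for each.
 V = np.array([3.0, -3.0, 7.0])
 W = np.array([-1.5, 1.5, 2.0])
 c = 4.2
 S = V + W
 # Closure under addition: the first two components of V + W still cancel.
 print(S[0] + S[1] == 0)              # True
 # Closure under scalar multiplication: same check for c*V.
 print((c * V)[0] + (c * V)[1] == 0)  # True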
