Homework 1 collaboration area

Feel free to toss around ideas here. Feel free to form teams to toss around ideas. Feel free to create your own workspace for your own team. --Steve Bell 12:11, 20 August 2010 (UTC)

Here is my favorite formula:

$ f(a)=\frac{1}{2\pi i}\int_\gamma \frac{f(z)}{z-a}\ dz. $

Question from a student:

I have a question about P.301 #33. I see in the back of the book that it is not a vector space. I don't understand why though. In the simplest form I would think an identity matrix would satisfy the requirements mentioned in the book for #33. Isn't an identity matrix a vector space?

Answer from Bell:

One of the key elements of being a vector space is that the thing must be closed under addition. The set consisting of just the identity matrix is not a vector space because if I add the identity matrix to itself, I get a matrix with twos down the diagonal, and that isn't in the set. So it isn't closed under addition. (The set consisting of the ZERO matrix is a vector space, though.)
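
For instance, in the 2x2 case:

$ I+I=\begin{bmatrix}1&0\\0&1\end{bmatrix}+\begin{bmatrix}1&0\\0&1\end{bmatrix}=\begin{bmatrix}2&0\\0&2\end{bmatrix}, $

and that matrix with twos down the diagonal is not in the set consisting of just the identity matrix.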

Does a set of vectors have to contain the 0 vector to be a vector space? If that is true, then any set of vectors that are linearly independent would not be a vector space?

Answer from Bell:

Yes, a vector space must contain the zero vector (because a constant times any vector has to be in the space if the vector is, even if the constant is zero).

The set of all linear combinations of a bunch of vectors is a vector space, and the zero vector is in there because one of the combinations involves taking all the constants to be zero.
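
In symbols, if the space consists of all combinations $ c_1v_1+c_2v_2+\cdots+c_nv_n $, then taking every constant to be zero gives

$ 0\cdot v_1+0\cdot v_2+\cdots+0\cdot v_n=0, $

so the zero vector is always one of those combinations.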

Question from a student:

How do I start problem 6 (reducing the matrix) on page 301? I understand that the row and column space will be the same, but I'm not sure how to get it to reduce to row-echelon form.

Answer from Bell:

You'll need to do row reduction. At some point you'll get

$ \begin{bmatrix}1&1&a\\0&a-1&1-a\\0&1-a&1-a^2\end{bmatrix} $

At this point, you'll need to consider the case a=1. If a=1, the matrix is all 1's and the row space is just the linear span of

$ \begin{bmatrix}1&1&1\end{bmatrix} $.

If a is not 1, then you can divide row two by a-1 and continue doing row reduction. I think you'll have one more case to consider when you try to deal with the third row after that.

[Q: Only 1 more case? Don't we need to consider the case in which a is not equal to 1 and a is not equal to the value found from the third row after we continue row reduction?

A: Yes, you are right. After you deal with the case a=1, there will be another value that pops up that causes problems. So next you'll treat the a=something case, and finally, you'll treat the last case, which is a not equal to either of those touchy cases.]
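
[One way the reduction might continue (just a sketch, not necessarily the intended route): adding row two to row three makes the third row

$ \begin{bmatrix}0 & 0 & (1-a)+(1-a^2)\end{bmatrix}=\begin{bmatrix}0 & 0 & (1-a)(a+2)\end{bmatrix}, $

which vanishes exactly when a=1 or a=-2. So those are the two touchy values, and the final case is every other value of a.]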

Question from student:

Also, what are you wanting for an answer to questions 22-25 (full proof, explanation, example, etc.)?

Answer from Bell:

For problems 22-25, I'd want you to explain the answer so that someone who has just read 7.4 would understand you and believe what you say. For example, the answer to #25 might start:

If the row vectors of a square matrix are linearly independent, then Rank(A) is equal to the number of rows. But Rank(A)=Rank(A^T). Etc.
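
[A hedged guess at how the argument might finish, assuming #25 concerns the column vectors:

$ \text{Rank}(A^T)=\text{Rank}(A)=n, $

so A^T, which has n rows, has rank equal to its number of rows, and its rows (which are exactly the columns of A) are therefore linearly independent too.]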

Question from student:

This is a question regarding section 7.3, #11. I like to check my answers with the back of the textbook, and I think the book's answer for this one is incorrect. I determined that y and z are the free variables because the x column of the R.E.F. contains the leading entry. The book has x and z as the free variables. Can you either confirm my answer or start me on the correct path to the book's answer?

Answer from Bell:

The book's answer is correct. I bet the answer you got is also correct. And both correct answers describe the same set of solutions. It is quite arbitrary which variables you choose to name x_1, x_2, x_3. The book chose to treat y as its x_1 variable.
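
A tiny made-up illustration of the same point (not the actual system from #11): for the single equation $ x+2y=0 $, taking y=t as the free variable gives the solutions

$ (x,y)=(-2t,\ t), $

while taking x=s as the free variable gives $ (x,y)=(s,\ -s/2) $. Both descriptions cover exactly the same line through the origin.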

[Note from a student about vector space]:

We can show whether a set of vectors forms a vector space by checking the following conditions:

1) The set is closed under vector addition

2) The set is closed under multiplication by a scalar

Example: show whether all vectors in R^3 which satisfy v1 + v2 = 0 form a vector space:

Solution:

Any vector which satisfies the above condition can then be written as

$ V=\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}=\begin{bmatrix}v_1\\-v_1\\v_3\end{bmatrix} $

where v3 can be anything (free variable).

To check the first condition we do this: pick another vector which satisfies the given condition and add it to the existing one. Then check whether the resulting vector still satisfies the given restriction (v1 + v2 = 0):

$ V+W=\begin{bmatrix}v_1\\-v_1\\v_3\end{bmatrix}+\begin{bmatrix}w_1\\-w_1\\w_3\end{bmatrix}=\begin{bmatrix}v_1+w_1\\-v_1-w_1\\v_3+w_3\end{bmatrix} $

Now we see that the general form of the final vector did not change: the first and the second components are still opposite, and the third is "free".

To check the second condition we multiply the vector by a scalar constant (e.g. "c") and again see if it satisfies the restriction/condition:

$ c\cdot V=c\cdot\begin{bmatrix}v_1\\-v_1\\v_3\end{bmatrix}=\begin{bmatrix}c\cdot v_1\\-c\cdot v_1\\c\cdot v_3\end{bmatrix} $.

We see that the resulting vector still satisfies the condition v1 + v2 = 0 (the first and the second components are still opposite, and the third is "free"). Therefore, all vectors in R^3 for which v1 + v2 = 0 form a vector space.

Professor Bell, please correct me if something is not right here.. --Bpavlov 01:08, 1 September 2010 (UTC)

Nice job! This is what I've been waiting for. Feel free to sign your name when you do something like this. (I have a subconscious Brownie point tabulator somewhere in my head.) You can sign your name by clicking on the little signature icon at the top of the window. Like this: --Steve Bell 12:06, 31 August 2010 (UTC)

I agree the above method from the student is very useful for checking whether something is a vector space. But by using this method we can't find the dimension or the basis. Is this right? For that we need to find the general solution to the homogeneous equation...--Sdhar 22:26, 31 August 2010 (UTC)

[Reply to Sdhar]: I still think the above approach can give us all the necessary information. The general solution is there, but since it is very simple, we don't have to form an augmented matrix and reduce it to determine parametric relations between the vector components. We only have one simple relation, $ v_1=-v_2 $, so we can plug it in right away.

As for the vector space dimension, how many inputs do you need to provide in order to completely specify a vector from this space?

You need to provide two: $ v_1 $ and $ v_3 $. Therefore, this space has dimension 2. Agree? The basis for this vector space would just be a formula which can represent all vectors from the space (this is how I see it). So, in this case it should be

$ V=\begin{bmatrix}v_1\\-v_1\\v_3\end{bmatrix} $ or we can write it as $ V=\begin{bmatrix}s\\-s\\t\end{bmatrix} $.

In the book they just plug in real numbers, e.g. put s = 1 and t = 2:

$ V=\begin{bmatrix}1\\-1\\2\end{bmatrix} $,

but I don't understand that very well... Can somebody explain? --Bpavlov 01:08, 1 September 2010 (UTC)
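
[Note: one way to make the basis explicit, sketching the same idea as above: splitting V by its two free parameters,

$ V=\begin{bmatrix}s\\-s\\t\end{bmatrix}=s\begin{bmatrix}1\\-1\\0\end{bmatrix}+t\begin{bmatrix}0\\0\\1\end{bmatrix}, $

so the two vectors $ \begin{bmatrix}1\\-1\\0\end{bmatrix} $ and $ \begin{bmatrix}0\\0\\1\end{bmatrix} $ form a basis, which agrees with the dimension being 2. Picking particular values of s and t only produces one particular vector from the space.]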

Question from student:

On #24 I set up a non-square matrix, reduced it to row echelon form, and found the rank, so I can state whether or not the row vectors are linearly dependent. Is it correct to transpose the original matrix and reduce it to row echelon form to find whether or not the column vectors are linearly dependent?

Answer from Bell:

Yes, that is correct.
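
A small made-up example of the same check (not the matrix from #24): for

$ A=\begin{bmatrix}1&2&3\\2&4&6\end{bmatrix},\qquad A^T=\begin{bmatrix}1&2\\2&4\\3&6\end{bmatrix}, $

both reduce to rank 1, so the row vectors of A are linearly dependent and so are the column vectors. Since Rank(A)=Rank(A^T), one reduction actually tells you both.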

Back to the MA 527 start page

To Rhea Course List
