Revision as of 09:56, 10 September 2013 by Bell


Homework 3 collaboration area

MA527 Fall 2013


Question from James Down Under (Jayling):

For Page 329 Question 11. Am I meant to calculate all eigenvalues and eigenvectors or just calculate the eigenvector corresponding to the given eigenvalue of 3?

Answer from Steve Bell :

Yes, you are only supposed to find the eigenvector for lambda=3. (The idea here is to spare you from finding the roots of a rather nasty 3rd degree polynomial.)

Oops! I reread the instructions for 329: 11 just now, and I think they give you the lambda = 3 hint so that you can factor a (lambda - 3) out of the characteristic polynomial and find the other two roots via the quadratic formula. Now I think they really do want you to find all three roots and as many eigenvectors as you can. Since there has been some confusion about this question, I will not ask the graders to grade it. However, doing it will be good for you. Steve Bell

Jayling: thanks Steve, I did try the hard way first but then started to drown in the algebra.
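If you want to check this kind of computation without drowning in algebra, sympy can carry out the workflow Steve describes: factor the known root out of the characteristic polynomial and solve the remaining quadratic. The matrix below is a made-up stand-in (the book's matrix for p. 329 #11 is not reproduced here), chosen so that lambda = 3 is one of its eigenvalues.

```python
import sympy as sp

lam = sp.symbols('lam')

# Hypothetical stand-in matrix (NOT the one from p. 329 #11),
# built so that lambda = 3 is an eigenvalue
A = sp.Matrix([[2, 1, 0],
               [1, 2, 0],
               [0, 0, -1]])

# Characteristic polynomial det(lam*I - A)
p = A.charpoly(lam).as_expr()

# Factor out the known root lambda = 3; the remainder must be zero
q, r = sp.div(p, lam - 3, lam)

# The other two eigenvalues come from the quadratic factor
others = sp.solve(sp.Eq(q, 0), lam)
print(sorted(others))        # [-1, 1]

# Eigenvector for lambda = 3: null space of (A - 3I)
v = (A - 3 * sp.eye(3)).nullspace()[0]
print(v.T)                   # Matrix([[1, 1, 0]])
```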


Question from a student:

Let 3x + 4y + 2z = 0; 2x + 5z = 0 be the system for which I have to find the basis.

When row reduced, the above system gives [ 1 0 2.5 0 ; 0 1 -1.375 0 ].

Rank = number of nonzero rows = 2 => dim(row space) = 2; nullity = number of free variables = 1.

Q1: Aren't [ 1 0 2.5] and [0 1 -1.375] called the basis of the system?

A1 from Steve Bell:

Those two vectors form a basis for the ROW SPACE.

The solution space is only 1 dimensional (since the number of free variables is only 1).

Q2: Why is it that we get a basis by considering the free variable as some "parameter" and reducing further (getting one vector in this case)? Isn't that the solution of the system?

A2 from Steve Bell :

If the system row reduces to

[ 1 0  2.5   0 ]
[ 0 1 -1.375 0 ]

then z is the free variable. Let it be t. The top equation gives

x = -2.5 t

and the second equation gives

y = 1.375 t

and of course,

z = t.

So the general solution is

[ x ]   [ -2.5   ]
[ y ] = [  1.375 ] t
[ z ]   [  1     ]

Thus, you can find the solution from the row echelon matrix, but I wouldn't say that you can read it off from there -- not without practice, at least.
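As a quick check of the reading-off step above, numpy confirms that the vector taken from the row echelon form really does solve the original system:

```python
import numpy as np

# The original system: 3x + 4y + 2z = 0 and 2x + 5z = 0
A = np.array([[3.0, 4.0, 2.0],
              [2.0, 0.0, 5.0]])

# Basis vector for the solution space, read off above (t = 1)
v = np.array([-2.5, 1.375, 1.0])

print(A @ v)                        # [0. 0.] -- v solves both equations

# rank 2, so nullity = 3 - 2 = 1 and v alone spans the solution space
print(np.linalg.matrix_rank(A))     # 2
```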


Question from a student:

On problem 11, I swapped rows 1 and 2 during row reduction and my final solution has x1 and x2 swapped. Do I need to swap back any row swaps or did I make a mistake along the way? Tlouvar

Eun Young discussed this issue here in a way that is slightly beyond the scope of our course, so I've moved it to here:

Remark from Eun Young

Remark from Steve Bell :

Step 1: Find the eigenvalues from det(A - lambda I)=0.

Step 2: Choose an eigenvalue lambda and plug it into the system

(A - lambda I) a = 0

and solve the system for the eigenvector a. Swapping rows does not change the answer, so you are safe here.

Sometimes you might think you are swapping entries of a vector when you are really multiplying by -1. For example , if [1, -1] is an eigenvector, so is [-1, 1].
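Steve's last point is easy to check numerically. Here is a small sketch with a made-up symmetric matrix whose eigenvector for lambda = 3 is a multiple of [1, -1]; numpy may return it with either sign, and the eigenpair checks out either way:

```python
import numpy as np

# Hypothetical example matrix with eigenvalues 1 and 3
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

vals, vecs = np.linalg.eig(A)
v = vecs[:, np.argmin(np.abs(vals - 3.0))]   # eigenvector for lambda = 3

# Whatever sign/scale numpy picks, A v = 3 v still holds, because
# [1, -1] and [-1, 1] are the same eigenvector up to a factor of -1
print(np.allclose(A @ v, 3.0 * v))           # True
```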


Question from Dalec

For #2 on page 351, I found my spectrum to be lambda = 2i and -i. For the case where lambda = 2i, I am trying to find the eigenvectors, and I get the matrix

[ -i    1+i  |  0 ]
[ -1+i  -2i  |  0 ]

Is there a way to get a 0 in the bottom left, or is this simply overconstrained?

- Chris

Suggestions from Shawn Whitman

In one step: multiply row 1 by (1+i) and add to row 2.

In two easier steps: Multiply row 1 by i,

[1, (-1+i)]

[(-1+i), -2i]

then multiply row 1 by (1-i) and add to row 2.

[1, (-1+i)]

[0, 0]
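You can also confirm with numpy that the two rows of this matrix really are dependent, so the zero row Shawn ends with is expected whenever lambda is a genuine eigenvalue:

```python
import numpy as np

# Coefficient matrix for lambda = 2i from the question (zero RHS dropped)
M = np.array([[-1j, 1 + 1j],
              [-1 + 1j, -2j]])

# Determinant ~ 0 and rank 1: row 2 is a multiple of row 1, so a
# nontrivial eigenvector exists (the system is not overconstrained)
print(abs(np.linalg.det(M)) < 1e-12)          # True
print(np.linalg.matrix_rank(M))               # 1

# Shawn's first step: multiplying row 1 by i gives [1, -1+i]
print(np.allclose(1j * M[0], [1, -1 + 1j]))   # True
```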


I have questions about determinants. For a homogeneous system, a non-zero determinant gives only the trivial solution, while a zero determinant gives infinitely many solutions. For a non-homogeneous system, when the determinant is non-zero we have exactly one solution. 1. What happens if a non-homogeneous system has a zero determinant? 2. From the determinant of a non-homogeneous system, can we tell when the system doesn't have any solution?

- Farhan

Suggestion from Ryan Russon

Here is what I understand:

For question 1) If we are thinking of a system of equations, then by looking at the determinant we are only looking at the left-hand side (LHS) of the system. If the determinant is zero, it means that one or more of the equations are dependent on the others. Said differently, one or more of those expressions can be built by combining the other expressions from the LHS of the system. This also means that in a non-homogeneous system formed from those LHS expressions, there may be more than one way to combine them to get the desired right-hand side (i.e. $ \bar{x} $ is not unique). Now if the expanded system looks like:

[1  4  1  | 4] 
[0  2  0  | 1] 
[0  0  0  | 3] 

where the last row makes the statement "0 = 3", the system is inconsistent and has no solution.

For 2) From the determinant alone, it is not possible to tell whether the system has no solutions. If the determinant is zero, the system may have infinitely many solutions or none at all.

Please others chime in and correct me if I am flawed in my thinking.
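Ryan's answer can be illustrated with a small made-up singular system: the determinant alone says nothing about which case you are in, but comparing rank(A) with the rank of the augmented matrix (the Rouché–Capelli criterion) does.

```python
import numpy as np

# Made-up singular system: the second equation is twice the first
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(A))        # 0 -- singular either way

b_many = np.array([3.0, 6.0])  # consistent RHS -> infinitely many solutions
b_none = np.array([3.0, 7.0])  # inconsistent RHS -> no solution

# Solutions exist iff rank(A) == rank([A | b])
for b in (b_many, b_none):
    aug = np.column_stack([A, b])
    print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug))
# prints True, then False
```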


Question from Ryan Russon:

About p. 338, #3,6, and 8, are we supposed to be finding eigenvectors here? I noticed that they put them in the back of the book, although it only asks to find the spectrum of each, which was defined as the set of eigenvalues in 8.1? I understand that we are using Thms 1-5 to prove our results and it seems like #3 doesn't require finding eigenvectors to prove that it isn't any of the listed matrices. I hope I am not way off-base here. Thanks!

Follow-up question: On p. 338, #6 Are we only to consider $ A \in \mathbb{R}^{n \times n} $ or are we to consider complex matrices as well? Thanks again!


Response from Jake Eppehimer:

I found that #8 is orthogonal, according to theorem 5. It took quite a bit of manipulation with trig identities, but I believe my answer is reasonable. For number 6, I am not exactly sure how to find the eigenvalues. I am considering substituting a couple prime numbers for k and a, but I am unsure if that is the correct way to do it. It doesn't say anything about eigenvectors, and you don't need them to determine what kind of matrix it is.
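The trig-identity manipulation Jake mentions can be reproduced symbolically. The book's matrix for #8 is not copied here, so the sketch below uses the standard 2-D rotation matrix as a stand-in; sympy applies cos^2 + sin^2 = 1 for you when simplifying A A^T.

```python
import sympy as sp

theta = sp.symbols('theta', real=True)

# Stand-in trig matrix (an assumption, not necessarily the book's #8):
# the standard 2-D rotation matrix
A = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
               [sp.sin(theta),  sp.cos(theta)]])

# Orthogonality test: A A^T should simplify to the identity,
# which uses exactly the identity cos^2(x) + sin^2(x) = 1
product = sp.simplify(A * A.T)
print(product == sp.eye(2))     # True
```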

Response from Ryan Russon

Thanks Jake. I found the eigenvalues for #6 to be λ1 = a, λ2 = a + k, λ3 = a - k by using a cofactor expansion, which wasn't too bad. And I think I am a little brain dead today, as I can answer my own follow-up: we are obviously not considering $ A \in \mathbb{C}^{n \times n} $ because we are talking about 'symmetric, skew-symmetric, and orthogonal' matrices, which are only classes of real-valued matrices.

Response from Mrhoade

Ryan, I got the three eigenvalues to be (a - k), (a - k), and (a + 2k). I checked these with Matlab using some sample values of a and k and the eig() function, and it appears to be correct. The way I reached this solution was to do a row and a column operation on the characteristic matrix: take R3 = R3 - R2 and then C2 = C2 + C3, and then do a cofactor expansion on row 3. The first eigenvalue of (a - k) pops right out at you as the cofactor from a33. You can then divide that factor out of both sides and come up with a quadratic that reduces to ((2a + k) +/- 3k)/2. This gives the repeated root (a - k) and the third root (a + 2k). - Mick
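Mick's Matlab check translates directly to numpy. The matrix form below is my assumption of what is being discussed (a on the diagonal, k everywhere else), which is consistent with the eigenvalues (a - k), (a - k), (a + 2k):

```python
import numpy as np

a, k = 5.0, 2.0   # sample values, as in Mick's check

# Assumed form of the #6 matrix: a on the diagonal, k off the diagonal
A = np.array([[a, k, k],
              [k, a, k],
              [k, k, a]])

vals = np.linalg.eigvalsh(A)     # symmetric matrix -> real eigenvalues
print(np.sort(vals))             # [3. 3. 9.] = (a-k, a-k, a+2k)
```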


Response from Jayling: Ryan, I was confused by the definition of spectrum also, but Steve did state in the last lecture that it is the set of all eigenvalues of A. I also found via the index that the text confirms this (see the first paragraph of page 324). In summary, there is no need to calculate eigenvectors if the question only asks for the spectrum.

Also, with Question 6 I am getting a very nasty looking characteristic equation, so I am not too sure how to solve for the algebraic roots.

Response from Hzillmer: Maybe I'm overthinking things, but for Question 3 here I got the eigenvalues to be 2+8i and 2-8i, which fails Theorem 5's requirement that the absolute value must be 1. Does anyone have a thought as to what I'm missing here?

Response from Kees

It's not orthogonal, so there is nothing to prove in Theorem 5. It can fail Theorem 5 since it has no reason to pass it. Easy question: since it is none of the three, I do not have to prove any theorems, correct? I only have to prove, for example, the orthogonality theorems when the matrix is orthogonal, etc., not every theorem every time?

Response from Jayling: If you just do the calculation AA^T, you will see that you do not get the identity matrix, and therefore it is not orthogonal. If your eigenvalues are real then the matrix is symmetric, if your eigenvalues are pure imaginary or zero then the matrix is skew-symmetric, and if your eigenvalues are real or come in complex conjugate pairs with absolute value 1 then you have an orthogonal matrix.

On 7.5 #2, I determined the eigenvalues to be -i and 2i.  I can't seem to clean up the math when putting these values back into A to determine the eigenvectors.  Any tips would be appreciated.  Tlouvar

From Steve Bell : James, you are mistaken about some of the claims in the paragraph above. A symmetric matrix has real eigenvalues, but the reverse is not a true statement, i.e., it is not true that having real eigenvalues forces a matrix to be symmetric. The same goes for the other types. These are one-way implications only.

from Jayling: it is probably just a factoring issue. One trick I use is to multiply the numerator and denominator by i (or by the complex conjugate of the denominator). You are not changing the answer because you are just multiplying by 1. Does this help?

As a sanity check, you can always hit your calculated eigenvector with the matrix A and see if you indeed get your calculated eigenvalue times your eigenvector. If you don't, then you know that your calculation is incorrect. You are not alone; I do get a bit cross-eyed with complex numbers, but if you stick with it, remembering that i^2 = -1 and 1/i = -i, you should be able to navigate through the minefield of the algebra.
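That sanity check is a one-liner in numpy. Here is a sketch with a made-up complex example (not the matrix from 7.5 #2):

```python
import numpy as np

# Hypothetical matrix with eigenvalues +i and -i
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

lam = 1j                     # candidate eigenvalue
v = np.array([1.0, 1j])      # candidate eigenvector

# The sanity check: A v should equal lam * v
print(np.allclose(A @ v, lam * v))   # True -> the eigenpair is correct
```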

from Ryan Leemhuis:

In regard to Question 6, you do get a somewhat intimidating characteristic equation. However, if you keep the terms in order and look for trig identities that can simplify the math, the equation works out rather nicely. Specifically, the formula cos^2(x) + sin^2(x) = 1 came in handy for me.

Response from Ryan Russon 18:47, 9 September 2013 (UTC):

Jay, with #6, I got a characteristic equation that looked something like this: (a − λ)[λ^2 − 2aλ + (a^2 − k^2)] = 0, once again using cofactor expansion, but this time along the diagonal.

With regard to Ryan Leemhuis's response, how did you use a trig identity in #6? I sure needed them for #8. Were you referring to #8?

Response from T. Roe:

While working on #3 on pg. 338, I calculated AA^T and got:

[68  0] 
[0  68]

Can that be reduced to the identity matrix?
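For checking orthogonality you cannot rescale: the definition requires AA^T = I exactly, and 68I is not I, so the matrix fails the test (although A/sqrt(68) would pass). Assuming the matrix behind these numbers is [[2, 8], [-8, 2]], which is consistent with the eigenvalues 2 +/- 8i mentioned above, numpy confirms both computations:

```python
import numpy as np

# Assumed matrix (consistent with AA^T = 68 I and eigenvalues 2 +/- 8i);
# the book's actual #3 matrix may differ
A = np.array([[2.0, 8.0],
              [-8.0, 2.0]])

print(np.allclose(A @ A.T, 68 * np.eye(2)))   # True -> 68 I, not I
print(np.linalg.eigvals(A))                   # 2+8j and 2-8j (some order)

# Scaling by 1/sqrt(68) does give an orthogonal matrix
Q = A / np.sqrt(68)
print(np.allclose(Q @ Q.T, np.eye(2)))        # True
```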


Back to MA527, Fall 2013
