Saturday, March 17, 2012

Linear Algebra: Tough Proofs for Determinants

I've been enjoying teaching this course, and I was liking the text, by David C. Lay ... until we got to chapter 3 on determinants. I worked through all the problems last weekend, verifying that everything was easy for me, and read through the proofs. I thought I'd gotten it all, but I was moving too fast, and ended up in front of my class on Monday unable to really do the proofs. Embarrassing!

With my house being broken into later that day (3rd time in 5 months; yes, it's hideous, but nothing was taken this time), I never caught up this week. Yesterday and today, I've spent about 5 hours writing up the proofs for 3 theorems. I still see a few holes, but I'm pretty proud of what I've put together.

My text defines the determinant by expanding on the first row. Looking around online, that doesn't appear to be the standard definition, but it seems like a fine starting point. From there we want to prove that you can get the determinant by expanding on any row or column. (My text says "We omit the proof to avoid a lengthy digression." Bah! It's not math if you don't know why it's true!) My proof may still have a bit of a hole (regarding which terms are negative), but I think it's more helpful than what I found online.
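For reference, here's that definition in symbols (standard cofactor notation, where A_{ij} means A with row i and column j deleted):

\[ \det A = \sum_{j=1}^{n} (-1)^{1+j}\, a_{1j}\, \det A_{1j} \]

and the theorem we want is that, for any fixed row i or column j,

\[ \det A = \sum_{j=1}^{n} (-1)^{i+j}\, a_{ij}\, \det A_{ij} = \sum_{i=1}^{n} (-1)^{i+j}\, a_{ij}\, \det A_{ij}. \]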

My proof starts with the definition and expands completely, so there are n! terms, each having n factors (one from each row and, simultaneously, one from each column); observing the symmetry shows that we'd get the same terms no matter what row or column we expand on. The one sticky point is showing that the sign of each term stays the same. I don't think I've quite got that properly proven. Tell me what you think.
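In case it helps anyone spot the gap: the complete expansion is the permutation form of the determinant, and I believe the sign question comes down to this standard fact (my attempt at a patch, not something from Lay's chapter):

\[ \det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)}\, a_{2\sigma(2)} \cdots a_{n\sigma(n)}, \]

where sgn(σ) is +1 for even permutations and −1 for odd ones. Each term's sign depends only on the permutation pairing rows with columns, not on which row or column the expansion started from, so expanding elsewhere has to reproduce the same signed terms.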

det(AB) = det(A)·det(B). This proof is done in my text, but I felt it was done badly. I'm following his outline, but writing it up in my own words. I think there's a bit of a hole where I use L*. (The author does this step a bit differently, and I don't like his explanation.) To outline the proof:
  • First, we prove it's true for any elementary matrix times a 2x2 matrix (EA),
  • Then we do induction on the size of the matrix,
  • Last, we show that (almost) any product AB can be seen as a series of multiplications of B by elementary matrices (E₁⋯EₖB).
Here's my proof. What do you think? Is there a clear way to clean up the induction step?
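For readers who don't want to click through, the skeleton of the first and last steps looks like this (the elementary-matrix facts are the standard ones; the induction step in the middle is where my write-up still feels rough). For an elementary matrix E:

\[ \begin{aligned} \text{row swap:} \quad & \det E = -1, & \det(EA) &= -\det A, \\ \text{scale a row by } c: \quad & \det E = c, & \det(EA) &= c\,\det A, \\ \text{row replacement:} \quad & \det E = 1, & \det(EA) &= \det A, \end{aligned} \]

so \det(EA) = \det(E)\det(A) in every case. If A is invertible it factors as E₁E₂⋯Eₖ, giving

\[ \det(AB) = \det(E_1) \cdots \det(E_k)\, \det(B) = \det(A)\, \det(B), \]

and if A is not invertible, then AB is not invertible either, so both sides are 0. That's the "(almost)" in the outline.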

My third proof was that the area of a parallelogram equals the absolute value of the determinant (with the column vectors representing adjacent sides of the parallelogram). Not particularly impressive, and I don't have the energy to do the volume proof too. Anyone want to show me a good proof of that? (We have not yet covered the dot product or cross product, so it can't reference those notions.) I got this version from a mathematician I spoke with at my math circle a few days ago. I had fun using GeoGebra to illustrate.
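For the curious, my reconstruction of the shearing argument goes roughly like this (not word-for-word what I presented, and it assumes a ≠ 0; the degenerate cases need separate handling). With columns v₁ = (a, c) and v₂ = (b, d), replacing v₂ by v₂ + k·v₁ shears the parallelogram parallel to v₁, which keeps the same base and height, hence the same area, and it also leaves the determinant alone:

\[ \det \begin{pmatrix} a & b + ka \\ c & d + kc \end{pmatrix} = a(d + kc) - (b + ka)c = ad - bc. \]

Taking k = −b/a makes the second column vertical, (0, d − bc/a); one more shear straightens the first column to (a, 0). The parallelogram is now an axis-aligned rectangle with sides |a| and |d − bc/a|, so its area is |a|·|d − bc/a| = |ad − bc|, the absolute value of the determinant.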

Now, back to my regularly scheduled grading...
