In linear algebra, the study of matrices is one of the foundations of the subject. One of the key concepts within this study is the notion of an invertible, or nonsingular, matrix.

<br>
== Definition  ==

A square matrix is said to be '''invertible''' or '''nonsingular''' if there exists a second square matrix of the same size such that the product of the two matrices, in either order, is the identity matrix.

<br>

Let ''A'' and ''B'' be ''n × n'' matrices and ''I''<sub>''n''</sub> be the ''n × n'' identity matrix.<br>

''A'' is invertible or nonsingular and ''B'' is its inverse if:

<br>

<math>\begin{align}
  & AB=BA={{I}_{n}} \\ 
 &  \\ 
 & \overbrace{\left( \begin{matrix}
   {{a}_{11}} & {{a}_{12}} & \cdots  & {{a}_{1n}}  \\
   {{a}_{21}} & {{a}_{22}} & \cdots  & {{a}_{2n}}  \\
   \vdots  & \vdots  & \ddots  & \vdots  \\
   {{a}_{n1}} & {{a}_{n2}} & \cdots  & {{a}_{nn}}  \\
\end{matrix} \right)}^{A}\overbrace{\left( \begin{matrix}
   {{b}_{11}} & {{b}_{12}} & \cdots  & {{b}_{1n}}  \\
   {{b}_{21}} & {{b}_{22}} & \cdots  & {{b}_{2n}}  \\
   \vdots  & \vdots  & \ddots  & \vdots  \\
   {{b}_{n1}} & {{b}_{n2}} & \cdots  & {{b}_{nn}}  \\
\end{matrix} \right)}^{B}=\overbrace{\left( \begin{matrix}
   1 & 0 & \cdots  & 0  \\
   0 & 1 & \cdots  & 0  \\
   \vdots  & \vdots  & \ddots  & \vdots  \\
   0 & 0 & \cdots  & 1  \\
\end{matrix} \right)}^{{{I}_{n}}} \\ 
\end{align}</math>
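The defining condition is easy to check numerically for a concrete pair of matrices. The sketch below is only an illustration (it is not from the textbook and assumes Python with NumPy is available); the matrices ''A'' and ''B'' were chosen by hand so that each is the other's inverse.

<pre>
import numpy as np

# Hand-picked 2x2 matrix A and candidate inverse B (illustrative values only).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.array([[ 1.0, -1.0],
              [-1.0,  2.0]])

I2 = np.eye(2)

# A is nonsingular with inverse B exactly when both products equal the identity.
print(np.allclose(A @ B, I2))   # True
print(np.allclose(B @ A, I2))   # True

# A nonzero determinant is another way to see that A is invertible.
print(np.linalg.det(A))         # 1.0, nonzero, so A is nonsingular
</pre>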

<br>

If no such matrix ''B'' exists (equivalently, if the determinant of matrix ''A'' is 0), then ''A'' is called '''singular''' or '''noninvertible'''.

<br>
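For a concrete illustration of the singular case (an example added here, not taken from the textbook), consider the matrix below: its determinant is zero, and its second row is twice its first, so no product with another 2 × 2 matrix can ever equal the identity.

<math>\left( \begin{matrix}
   1 & 2  \\
   2 & 4  \\
\end{matrix} \right),\qquad \det =1\cdot 4-2\cdot 2=0</math>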

If matrix ''B'' is the inverse of matrix ''A'', it is written as ''A''<sup>-1</sup>. Such a matrix ''B'' is unique: matrix ''A'' has either exactly one inverse or no inverse at all. A proof of this fact is summarized below:

<br> <math>\begin{align}
  & \text{Let }B\text{ and }C\text{ be inverses of }A\text{. Then:} \\ 
 &  \\ 
 & AB=BA={{I}_{n}}\text{ and }AC=CA={{I}_{n}} \\ 
 &  \\ 
 & \text{Since any matrix multiplied by the identity matrix of the same size is unchanged, we have:} \\ 
 &  \\ 
 & B=B{{I}_{n}}=B(AC)=(BA)C={{I}_{n}}C=C \\ 
 & \Rightarrow B=C\text{, and thus the inverse matrix, if it exists, is unique.} \\ 
\end{align}</math>

<br>

This then allows us to write:

<br> <math>A{{A}^{-1}}={{A}^{-1}}A={{I}_{n}}</math>

== Examples  ==

Using example 11 on page 47 of the textbook:

<br>

<math>\begin{align}
  & \text{Let} \\ 
 &  \\ 
 & A=\left( \begin{matrix}
   1 & 2  \\
   3 & 4  \\
\end{matrix} \right) \\ 
 &  \\ 
 & \text{If the inverse of }A\text{ exists, then let} \\ 
 &  \\ 
 & {{A}^{-1}}=\left( \begin{matrix}
   a & b  \\
   c & d  \\
\end{matrix} \right) \\ 
 &  \\ 
 & \Rightarrow A{{A}^{-1}}=\left( \begin{matrix}
   1 & 2  \\
   3 & 4  \\
\end{matrix} \right)\left( \begin{matrix}
   a & b  \\
   c & d  \\
\end{matrix} \right)={{I}_{2}}=\left( \begin{matrix}
   1 & 0  \\
   0 & 1  \\
\end{matrix} \right) \\ 
 & =\left( \begin{matrix}
   a+2c & b+2d  \\
   3a+4c & 3b+4d  \\
\end{matrix} \right)=\left( \begin{matrix}
   1 & 0  \\
   0 & 1  \\
\end{matrix} \right) \\ 
 &  \\ 
 & \text{Giving us the linear system} \\ 
 & a+2c=1 \\ 
 & b+2d=0 \\ 
 & 3a+4c=0 \\ 
 & 3b+4d=1 \\ 
 & \Rightarrow \left( \begin{matrix}
   1 & 0 & 2 & 0  \\
   0 & 1 & 0 & 2  \\
   3 & 0 & 4 & 0  \\
   0 & 3 & 0 & 4  \\
\end{matrix} \right)\left( \begin{matrix}
   a  \\
   b  \\
   c  \\
   d  \\
\end{matrix} \right)=\left( \begin{matrix}
   1  \\
   0  \\
   0  \\
   1  \\
\end{matrix} \right) \\ 
 &  \\ 
 & \text{After computing the RREF of the augmented matrix, we obtain:} \\ 
 & \left( \begin{matrix}
   1 & 0 & 0 & 0  \\
   0 & 1 & 0 & 0  \\
   0 & 0 & 1 & 0  \\
   0 & 0 & 0 & 1  \\
\end{matrix}\begin{matrix}
   |  \\
   |  \\
   |  \\
   |  \\
\end{matrix}\begin{matrix}
   -2  \\
   1  \\
   \frac{3}{2}  \\
   -\frac{1}{2}  \\
\end{matrix} \right)\Rightarrow \left( \begin{matrix}
   a & b  \\
   c & d  \\
\end{matrix} \right)=\left( \begin{matrix}
   -2 & 1  \\
   \frac{3}{2} & -\frac{1}{2}  \\
\end{matrix} \right) \\ 
 &  \\ 
 & \text{Verifying our initial condition:} \\ 
 & A{{A}^{-1}}={{A}^{-1}}A=\left( \begin{matrix}
   1 & 2  \\
   3 & 4  \\
\end{matrix} \right)\left( \begin{matrix}
   -2 & 1  \\
   \frac{3}{2} & -\frac{1}{2}  \\
\end{matrix} \right)=\left( \begin{matrix}
   -2 & 1  \\
   \frac{3}{2} & -\frac{1}{2}  \\
\end{matrix} \right)\left( \begin{matrix}
   1 & 2  \\
   3 & 4  \\
\end{matrix} \right)=\left( \begin{matrix}
   1 & 0  \\
   0 & 1  \\
\end{matrix} \right) \\ 
 &  \\ 
 & \therefore {{A}^{-1}}=\left( \begin{matrix}
   -2 & 1  \\
   \frac{3}{2} & -\frac{1}{2}  \\
\end{matrix} \right) \\ 
\end{align}</math>
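The computation above can be double-checked numerically. The following sketch is only a verification aid (it is not part of the textbook example and assumes Python with NumPy): it solves the same 4 × 4 system for ''a'', ''b'', ''c'', ''d'' and compares the result with a library-computed inverse.

<pre>
import numpy as np

# Coefficient matrix and right-hand side for the unknowns (a, b, c, d),
# taken from the equations a+2c=1, b+2d=0, 3a+4c=0, 3b+4d=1.
M = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 0.0, 2.0],
              [3.0, 0.0, 4.0, 0.0],
              [0.0, 3.0, 0.0, 4.0]])
rhs = np.array([1.0, 0.0, 0.0, 1.0])

a, b, c, d = np.linalg.solve(M, rhs)
A_inv = np.array([[a, b],
                  [c, d]])
print(A_inv)                                  # [[-2.   1. ]
                                              #  [ 1.5 -0.5]]

# The same answer comes from inverting A directly.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
print(np.allclose(A @ A_inv, np.eye(2)))      # True
</pre>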

<br>

<br>

If, however, the linear system constructed this way has no solution, then the inverse of matrix ''A'' does not exist, and thus ''A'' is singular.

== Properties of Inverses of Matrices  ==

<br> <math>\begin{align}
  & \text{Let }A\text{ and }B\text{ be nonsingular }n\times n\text{ matrices. Then:} \\ 
 &  \\ 
 & (1) \\ 
 & AB\text{ is nonsingular} \\ 
 & {{(AB)}^{-1}}={{B}^{-1}}{{A}^{-1}} \\ 
 &  \\ 
 & (2) \\ 
 & {{A}_{1}}{{A}_{2}}\cdots {{A}_{r}}\text{ is nonsingular} \\ 
 & {{({{A}_{1}}{{A}_{2}}\cdots {{A}_{r}})}^{-1}}={{A}_{r}}^{-1}{{A}_{r-1}}^{-1}\cdots {{A}_{1}}^{-1} \\ 
 &  \\ 
 & (3) \\ 
 & {{A}^{-1}}\text{ is nonsingular} \\ 
 & {{({{A}^{-1}})}^{-1}}=A \\ 
 &  \\ 
 & (4) \\ 
 & {{A}^{T}}\text{ is nonsingular} \\ 
 & {{({{A}^{-1}})}^{T}}={{({{A}^{T}})}^{-1}} \\ 
\end{align}</math>
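Each of these properties can be spot-checked numerically. The sketch below is illustrative only (it is not from the textbook, assumes Python with NumPy, and assumes the randomly generated matrices are nonsingular, which holds with probability 1): it verifies properties (1), (3), and (4) for a pair of 3 × 3 matrices. Property (2) is the same pattern extended to a product of ''r'' factors.

<pre>
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))   # random matrices; assumed nonsingular
B = rng.standard_normal((n, n))

inv = np.linalg.inv

# (1) (AB)^-1 = B^-1 A^-1
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))   # True

# (3) (A^-1)^-1 = A
print(np.allclose(inv(inv(A)), A))                # True

# (4) (A^-1)^T = (A^T)^-1
print(np.allclose(inv(A).T, inv(A.T)))            # True
</pre>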
== Linear Systems and Inverses ==

<math>\begin{align}
  & \text{Given the linear system:} \\ 
 & A\mathrm{x}=\mathrm{b} \\ 
 &  \\ 
 & \text{If matrix }A\text{ is nonsingular, then multiplying both sides of the linear system on the left by }{{A}^{-1}}\text{ yields:} \\ 
 & {{A}^{-1}}(A\mathrm{x})={{A}^{-1}}\mathrm{b} \\ 
 & ({{A}^{-1}}A)\mathrm{x}={{A}^{-1}}\mathrm{b} \\ 
 & {{I}_{n}}\mathrm{x}={{A}^{-1}}\mathrm{b} \\ 
 & \mathrm{x}={{A}^{-1}}\mathrm{b} \\ 
 &  \\ 
 & \text{Thus, }\mathrm{x}={{A}^{-1}}\mathrm{b}\text{ is the unique solution to }A\mathrm{x}=\mathrm{b}\text{.} \\ 
 & \text{If }\mathrm{b}=0\text{, then the unique solution to the homogeneous system }A\mathrm{x}=0\text{ is }\mathrm{x}=0\text{.} \\ 
\end{align}</math>
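The derivation above translates directly into a short computation. The sketch below is an illustration only (the right-hand side vector is made up for the example, and Python with NumPy is assumed); note that in numerical practice one usually calls np.linalg.solve rather than forming ''A''<sup>-1</sup> explicitly.

<pre>
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])          # illustrative right-hand side

# x = A^-1 b is the unique solution because A is nonsingular (det(A) = -2).
x = np.linalg.inv(A) @ b
print(x)                          # [-4.   4.5]

# Solving the system directly gives the same answer and is numerically preferred.
print(np.allclose(x, np.linalg.solve(A, b)))   # True
</pre>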

<br> <br>

The textbook referenced on this wiki page is ''Elementary Linear Algebra with Applications'', 9th Edition, by B. Kolman and D. R. Hill, Pearson Education, Inc., NJ, USA, 2008.

<br>

----

----

[[2010 Fall MA 265 Walther|Back to MA265 Fall 2010 Prof Walther]]

[[MA265|Back to MA265]]

[[Category:MA265Fall2010Walther]] [[Category:MA265]]
