Latest revision as of 23:12, 21 November 2017

Linear Systems of ODEs

A slecture by Yijia Wen

4.0 Concept

Similar to systems of algebraic equations, several ODEs can also form a system. A typical system of $ n $ coupled first-order ODEs looks like:

$ \frac{dx_1}{dt}=f_1(t,x_1,x_2,...x_n) $

$ \frac{dx_2}{dt}=f_2(t,x_1,x_2,...x_n) $

...

$ \frac{dx_n}{dt}=f_n(t,x_1,x_2,...x_n) $

When the $ n $ ODEs are all linear, this is a linear system of ODEs. To solve it, we introduce matrices; the idea is similar to using matrix operations to solve systems of linear algebraic equations (e.g. by Gaussian elimination). There is an essential theorem here. If $ \frac{d\bold{x}}{dt}=A\bold{x} $, and the $ n \times n $ matrix $ A $ has $ n $ distinct real eigenvalues with corresponding eigenvectors, then the general solution is $ \bold{x}=C_1 e^{\lambda_1 t} \bold{v_1}+C_2 e^{\lambda_2 t} \bold{v_2}+...+C_n e^{\lambda_n t} \bold{v_n} $, where $ \lambda_n $ are the eigenvalues, $ \bold{v_n} $ the corresponding eigenvectors, and $ C_n $ arbitrary constants. Strictly, the theorem follows from the power series for the matrix exponential $ e^{At} $; we don't prove it here, but instead give a more intuitive explanation by analogy.


In one-dimensional space, a single linear ODE $ \frac{dx}{dt}=\lambda x $ has the solution $ x=Ae^{\lambda t} $, where $ \lambda $ is a constant. Similarly, in two- (or higher-) dimensional space, a linear ODE system $ \frac{d\bold{x}}{dt}=A\bold{x} $ has solutions of the form $ \bold{x}=e^{\lambda t} \bold{v} $, where $ A $ is a constant matrix, $ \lambda $ is a constant and $ \bold{v} $ is a constant vector. For a given matrix, the natural choices of such a constant and constant vector are its eigenvalues and eigenvectors. In this tutorial, we use systems of two ODEs (hence $ 2 \times 2 $ matrices) as examples.
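This analogy can be made precise with a short check: substituting the trial solution $ \bold{x}=e^{\lambda t} \bold{v} $ into the system turns the ODE into the eigenvalue problem for $ A $.

```latex
% Substitute x(t) = e^{\lambda t} v, with v a constant vector, into dx/dt = Ax:
\frac{d}{dt}\left(e^{\lambda t}\bold{v}\right) = \lambda e^{\lambda t}\bold{v},
\qquad
A\left(e^{\lambda t}\bold{v}\right) = e^{\lambda t}A\bold{v}.
% Equating the two and cancelling the nonzero scalar e^{\lambda t} leaves
A\bold{v} = \lambda\bold{v},
% so lambda must be an eigenvalue of A and v a corresponding eigenvector.
```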


First of all, we should be familiar with how to convert a system of linear equations to matrix form; the same idea converts a system of linear ODEs to matrix form. For example, consider the system of linear ODEs

$ \frac{dx}{dt}=8x+2y $,

$ \frac{dy}{dt}=2x+5y $.

We separate the variables and their coefficients to get the matrix form $ \begin{bmatrix} \frac{dx}{dt}\\ \frac{dy}{dt} \end{bmatrix} = \begin{bmatrix} 8 & 2\\ 2 & 5 \end{bmatrix} \begin{bmatrix} x\\ y \end{bmatrix} $.
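As a quick sanity check (the sample point is an assumption chosen for illustration), the matrix form can be multiplied out numerically to confirm it reproduces the two right-hand sides:

```python
import numpy as np

# coefficient matrix read off from the right-hand sides above
A = np.array([[8.0, 2.0],
              [2.0, 5.0]])

# at an arbitrary sample point (x, y) (values are an assumption),
# the matrix-vector product reproduces the two right-hand sides
x, y = 1.5, -0.5
rhs = A @ np.array([x, y])
print(np.allclose(rhs, [8*x + 2*y, 2*x + 5*y]))  # True
```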

From here we can start our journey.


4.1 ODE Systems with Real Eigenvalues

When we are given a matrix, the first thing to do is find its identifying properties, the things that distinguish it from any other matrix. The most intrinsic properties of a matrix are its eigenvalues and eigenvectors.

Consider the theorem and the system from 4.0, $ \begin{bmatrix} \frac{dx}{dt}\\ \frac{dy}{dt} \end{bmatrix} = \begin{bmatrix} 8 & 2\\ 2 & 5 \end{bmatrix} \begin{bmatrix} x\\ y \end{bmatrix} $. We can easily calculate the eigenvalues $ \lambda_1=4 $ and $ \lambda_2=9 $, with corresponding eigenvectors $ \bold{v_1}=\begin{bmatrix} 1\\ -2 \end{bmatrix} $, $ \bold{v_2}=\begin{bmatrix} 2\\ 1 \end{bmatrix} $. Plugging them into the standard form of the general solution from 4.0, the general solution to this system of linear ODEs is $ \bold{x}=C_1 e^{4t} \begin{bmatrix} 1\\ -2 \end{bmatrix} + C_2 e^{9t} \begin{bmatrix} 2\\ 1 \end{bmatrix} $, where $ \bold{x}=\begin{bmatrix} x\\ y \end{bmatrix} $.
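These values are easy to check numerically; a minimal sketch with NumPy (the hand-computed eigenvectors are verified directly via $ A\bold{v}=\lambda\bold{v} $, since NumPy returns its eigenvectors scaled to unit length):

```python
import numpy as np

A = np.array([[8.0, 2.0],
              [2.0, 5.0]])

# numerical eigenvalues of A, sorted ascending
vals = np.sort(np.linalg.eigvals(A))
print(vals)                            # approximately 4 and 9

# the hand-computed eigenvectors satisfy A v = lambda v
v1 = np.array([1.0, -2.0])
v2 = np.array([2.0, 1.0])
print(np.allclose(A @ v1, 4 * v1))     # True
print(np.allclose(A @ v2, 9 * v2))     # True
```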

If initial values are given, we can plug them in to solve for the constants $ C_1 $ and $ C_2 $ and obtain an explicit solution.
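As a sketch of that step, with a hypothetical initial value $ \bold{x}(0)=\begin{bmatrix} 4\\ 3 \end{bmatrix} $ (not from the text): setting $ t=0 $ in the general solution gives the linear system $ C_1\bold{v_1}+C_2\bold{v_2}=\bold{x}(0) $, which NumPy can solve:

```python
import numpy as np

# columns are the eigenvectors v1 = (1, -2) and v2 = (2, 1) found above
V = np.array([[1.0, 2.0],
              [-2.0, 1.0]])
x0 = np.array([4.0, 3.0])          # hypothetical initial value x(0)

# at t = 0 the general solution reads C1*v1 + C2*v2 = x(0)
C = np.linalg.solve(V, x0)
print(C)                           # C1 = -0.4, C2 = 2.2
```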


Refer here for further explanation of the phase portrait, an understanding from the geometrical perspective.

Sometimes the eigenvalues are repeated; refer here for a solution to that case, as I feel I can't explain it more clearly than that source does. :)


4.2 ODE Systems with Complex Eigenvalues

Sometimes complex numbers come up while solving the matrix problem for the eigenvalues and eigenvectors. We know that the complex roots of an algebraic equation with all real coefficients always come in conjugate pairs. Similarly, in a system of linear ODEs with all real coefficients (so all entries of the matrix are real), complex eigenvalues also occur in conjugate pairs, and correspond to pairs of complex conjugate eigenvectors as well. We still put them into the standard form of the solution first, as usual, but more work is then needed, since we want real solutions to ODE systems.


Consider a linear system of two ODEs $ \begin{bmatrix} \frac{dx}{dt}\\ \frac{dy}{dt} \end{bmatrix} = \begin{bmatrix} 5 & 2\\ -4 & 1 \end{bmatrix} \begin{bmatrix} x\\ y \end{bmatrix} $. It is easy to find its eigenvalues $ \lambda_1=3+2i $, $ \lambda_2=3-2i $. Their corresponding eigenvectors are $ \bold{v_1}=\begin{bmatrix} 1\\ -1+i \end{bmatrix} $, $ \bold{v_2}=\begin{bmatrix} 1\\ -1-i \end{bmatrix} $. Plug them into the standard form from 4.0 to get the general solution $ \bold{x}=C_1 e^{(3+2i)t} \begin{bmatrix} 1\\ -1+i \end{bmatrix} + C_2 e^{(3-2i)t} \begin{bmatrix} 1\\ -1-i \end{bmatrix} $, where $ \bold{x}=\begin{bmatrix} x\\ y \end{bmatrix} $.
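The conjugate pair can again be confirmed numerically; a small check along the same lines as before:

```python
import numpy as np

A = np.array([[5.0, 2.0],
              [-4.0, 1.0]])

vals = np.sort_complex(np.linalg.eigvals(A))
print(vals)                                # 3-2j and 3+2j, a conjugate pair

# the hand-computed eigenvector for lambda = 3+2i
v1 = np.array([1.0, -1.0 + 1.0j])
print(np.allclose(A @ v1, (3 + 2j) * v1))  # True
```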


Now it's time to simplify further and remove the complex part of the solution: we take the real part only. As the eigenvalues and eigenvectors both come in conjugate pairs, the two terms of the solution are complex conjugates of each other, so their sum is twice the real part of either term. Given the initial value $ \bold{x}(0)=\begin{bmatrix} 2\\ 5 \end{bmatrix} $, we work out the constants $ C_1=1-\frac{7}{2}i $, $ C_2=1+\frac{7}{2}i $, which also form a conjugate pair. Hence, $ \bold{x}=(1-\frac{7}{2}i) e^{(3+2i)t} \begin{bmatrix} 1\\ -1+i \end{bmatrix} + (1+\frac{7}{2}i) e^{(3-2i)t} \begin{bmatrix} 1\\ -1-i \end{bmatrix} $,
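Finding these constants is again a $ 2 \times 2 $ linear solve, now with complex entries; a quick numerical check:

```python
import numpy as np

# columns are the complex conjugate eigenvectors v1 and v2 from above
V = np.array([[1.0, 1.0],
              [-1.0 + 1.0j, -1.0 - 1.0j]])
x0 = np.array([2.0, 5.0])                  # the given initial value x(0)

C = np.linalg.solve(V, x0)                 # C1*v1 + C2*v2 = x(0)
print(C)                                   # 1-3.5j and 1+3.5j, a conjugate pair
```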

$ =2 Re [(1-\frac{7}{2}i) e^{(3+2i)t} \begin{bmatrix} 1\\ -1+i \end{bmatrix}] $, where "$ Re $" denotes the real part,

$ =2 Re [(1-\frac{7}{2}i) e^{3t} e^{2ti} \begin{bmatrix} 1\\ -1+i \end{bmatrix}] $, by the property of powers,

$ =2e^{3t} Re [(1-\frac{7}{2}i) e^{2ti} \begin{bmatrix} 1\\ -1+i \end{bmatrix}] $, as $ e^{3t} $ is real,

$ =2e^{3t} Re [(1-\frac{7}{2}i) (\cos 2t+i\sin 2t) (\begin{bmatrix} 1\\ -1 \end{bmatrix} + i \begin{bmatrix} 0\\ 1 \end{bmatrix})] $, by Euler's formula $ e^{i\theta}=\cos\theta + i \sin\theta $ and splitting the eigenvector into its real and imaginary parts,

$ =2e^{3t} Re[((\cos 2t+i\sin 2t)-\frac{7}{2}i\cos 2t+\frac{7}{2}\sin 2t) (\begin{bmatrix} 1\\ -1 \end{bmatrix} + i \begin{bmatrix} 0\\ 1 \end{bmatrix})] $, by expanding the scalar product,

$ =2e^{3t} Re[((\cos 2t+\frac{7}{2}\sin 2t)+i(\sin 2t-\frac{7}{2}\cos 2t)) (\begin{bmatrix} 1\\ -1 \end{bmatrix} + i \begin{bmatrix} 0\\ 1 \end{bmatrix})] $, by collecting real and imaginary parts,

$ =2e^{3t} [(\cos 2t+\frac{7}{2}\sin 2t) \begin{bmatrix} 1\\ -1 \end{bmatrix} -(\sin 2t-\frac{7}{2}\cos 2t) \begin{bmatrix} 0\\ 1 \end{bmatrix}] $, by multiplying out and discarding the imaginary part.
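The result can be sanity-checked numerically: the derived real solution should satisfy the initial condition and the ODE itself. A small check using a central finite difference (the sample time and tolerance are assumptions suited to the step size):

```python
import numpy as np

A = np.array([[5.0, 2.0],
              [-4.0, 1.0]])

def x(t):
    # the explicit real solution derived above
    u = np.array([1.0, -1.0])   # real part of the eigenvector
    w = np.array([0.0, 1.0])    # imaginary part of the eigenvector
    return 2 * np.exp(3 * t) * ((np.cos(2*t) + 3.5*np.sin(2*t)) * u
                                - (np.sin(2*t) - 3.5*np.cos(2*t)) * w)

print(np.allclose(x(0.0), [2.0, 5.0]))           # matches the initial value
# check dx/dt = A x by a central finite difference at t = 0.3
h = 1e-6
dxdt = (x(0.3 + h) - x(0.3 - h)) / (2 * h)
print(np.allclose(dxdt, A @ x(0.3), atol=1e-4))  # True
```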


Here we finally obtain the explicit solution to the linear system with complex eigenvalues. The standard form is $ \bold{x}=\mu e^{\rho t} (\cos(\psi -\omega t)\bold{v_R}+\sin(\psi-\omega t)\bold{v_I}) $, where $ \rho $ is the real part of the eigenvalue, and $ \bold{v_R} $ and $ \bold{v_I} $ stand for the real and imaginary parts of the eigenvector respectively. $ \mu $, $ \psi $ and $ \omega $ come from the auxiliary angle formula; since deriving them involves plenty of calculation, we usually just keep the solution in the form of the example above.

Refer here for further explanation from the geometrical perspective of the phase portrait, for a more complete understanding.


4.4 Exercises

Solve the linear systems of ODEs in 4.1 and 4.2 again independently.


4.5 References

Faculty of Mathematics, University of North Carolina at Chapel Hill. (2016). Linear Systems of Differential Equations. Chapel Hill, NC., USA.

Institute of Natural and Mathematical Science, Massey University. (2017). 160.204 Differential Equations I: Course materials. Auckland, New Zealand.

Robinson, J. C. (2003). An introduction to ordinary differential equations. New York, NY., USA: Cambridge University Press.
