Table of Contents:

  1. Introduction
  2. Basic Understanding
  3. Deeper Understanding and Example
  4. Applications
  5. References

Introduction: Singular Value Decomposition goes by many names, such as factor analysis, empirical orthogonal function analysis, and principal component decomposition. In this discussion, I will refer to it as SVD. To put it simply, Singular Value Decomposition breaks a single matrix into three simpler matrices so that it is easier to understand and transform. This has many applications in data science, where a set of information is often displayed as a matrix. SVD is mainly used in linear algebra, but its most basic form can be understood as early as Calculus, or even in Physics classes when breaking vectors down into components and projections. SVD was discovered independently by several mathematicians in the late 1800s, notably Eugenio Beltrami and Camille Jordan, but it would later become very important with the development of computers and big data.
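
To make the idea of "three parts" concrete before the formal definitions below, here is a minimal sketch in Python using the NumPy library (my choice of tool; the article itself does not name any software):

    import numpy as np

    # A small 3 x 2 matrix, as might hold a set of data.
    A = np.array([[3.0, 1.0],
                  [1.0, 3.0],
                  [1.0, 1.0]])

    # numpy.linalg.svd factors A into the three parts discussed below:
    # U (left singular vectors), s (singular values), Vt (V transposed).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    print(U.shape, s.shape, Vt.shape)   # (3, 2) (2,) (2, 2)

    # Multiplying the three parts back together recovers A.
    print(np.allclose(A, U @ np.diag(s) @ Vt))   # True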

Basic Understanding: Before we get into what SVD really means, it is important to understand some basic concepts. Firstly, SVD deals with vectors, which are quantities with both direction and magnitude, and matrices, which are collections of values arranged into rows and columns. A matrix can be described by the number of rows (m) and columns (n) it has, written in the form m × n, which is important to understand when performing operations on matrices. SVD also involves the dot product, a way of multiplying two vectors that produces a scalar output.

The main equation for SVD is A = USV^T, where: “A is an m × n matrix, U is an m × n orthogonal matrix, S is an n × n diagonal matrix, and V is an n × n orthogonal matrix,” (1). An orthogonal matrix is one whose columns and rows are perpendicular unit vectors, and a diagonal matrix contains zeros everywhere except along the diagonal running from the top left to the bottom right. Additionally, in the equation, the ^T stands for the “transpose” of a matrix, which flips the matrix by swapping its rows and columns. We know that U^TU and V^TV both give the identity matrix because U and V are orthogonal. Another way of viewing this equation is by looking at it graphically: (7)
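
The properties quoted above can also be checked numerically. The following sketch (again using NumPy, an assumption of mine rather than anything from the cited source) verifies that U^TU and V^TV are identity matrices, that S is diagonal, and that the product USV^T reproduces A:

    import numpy as np

    A = np.array([[2.0, 4.0],
                  [1.0, 3.0],
                  [0.0, 0.0]])      # an m x n matrix with m = 3, n = 2

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    S = np.diag(s)                  # the n x n diagonal matrix S

    # U^TU and V^TV give the identity because U and V are orthogonal:
    # each of their columns is a perpendicular unit vector.
    print(np.allclose(U.T @ U, np.eye(2)))    # True
    print(np.allclose(Vt @ Vt.T, np.eye(2)))  # True (Vt @ Vt.T = V^TV)

    # S holds the singular values on its main diagonal, zeros elsewhere.
    print(S)

    # The factorization A = USV^T holds up to floating-point error.
    print(np.allclose(A, U @ S @ Vt))         # True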
