ECE662: Statistical Pattern Recognition and Decision Making Processes

Spring 2008, Prof. Boutin

Slecture

Collectively created by the students in the class


Lecture 10 Lecture notes

Jump to: Outline | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28



The perceptron algorithm maps an input to a single binary output value.

For a proof of the Perceptron convergence theorem, see this page:

Perceptron convergence proof

The perceptron algorithm was first introduced in Lecture 9. The gradient descent procedure it uses is discussed in this lecture.

Gradient Descent

Main article: Gradient Descent

Consider the cost function $ J_p(\vec{c}) = \sum_{y_i \text{ misclassified}} -\vec{c} \cdot y_i $, where the sum runs over the misclassified samples $ y_i $.

We use the gradient descent procedure to minimize $ J_p(\vec{c}) $.

Compute $ \nabla J_p(\vec{c}) = \sum_{y_i \text{ misclassified}} \nabla_{\vec{c}} (-\vec{c} \cdot y_i) = - \sum_{y_i \text{ misclassified}} y_i $.

Follow basic gradient descent procedure:

- Start from an initial guess $ \vec{c_1} $

- Then, update $ \vec{c_2} = \vec{c_1} - \eta(1) \nabla J_p(\vec{c_1}) $, where $ \eta(1) $ is the step size

- Iterate $ \vec{c_{k+1}} = \vec{c_{k}} - \eta(k) \nabla J_p(\vec{c_k}) $ until it "converges"

(e.g., when $ \| \eta(k) \nabla J_p(\vec{c_k}) \| $ falls below a threshold)
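
Below is a minimal Python/NumPy sketch of the batch perceptron trained with this gradient descent procedure. The data, step size, and stopping threshold are hypothetical, and the rows of Y are assumed to be augmented samples already "normalized" (class-2 samples multiplied by -1), so that $ \vec{c} \cdot y_i > 0 $ means a correct classification.

import numpy as np

def batch_perceptron(Y, eta=1.0, max_iter=1000, tol=1e-6):
    # Gradient descent on J_p(c) = sum over misclassified y_i of (-c . y_i).
    # Rows of Y are augmented, "normalized" samples, so the goal is Y @ c > 0.
    c = np.zeros(Y.shape[1])                 # initial guess c_1
    for k in range(max_iter):
        misclassified = Y[Y @ c <= 0]        # samples with c . y_i <= 0
        if len(misclassified) == 0:          # everything correctly classified
            break
        grad = -misclassified.sum(axis=0)    # grad J_p(c) = -sum of misclassified y_i
        step = eta * grad                    # here eta(k) is a constant step size
        c = c - step                         # c_{k+1} = c_k - eta(k) grad J_p(c_k)
        if np.linalg.norm(step) < tol:       # "convergence" test from the notes
            break
    return c

# Hypothetical linearly separable data: two class-1 samples and two
# (already negated) class-2 samples, each augmented with a bias coordinate.
Y = np.array([[2.0, 2.0,  1.0],
              [3.0, 1.0,  1.0],
              [1.0, 1.0, -1.0],
              [2.0, 2.0, -1.0]])
c = batch_perceptron(Y)
print(c, Y @ c)    # every entry of Y @ c should be positive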

Gradient Descent in the Perceptron Algorithm

The proof that the batch perceptron update $ \vec{c_{k+1}} = \vec{c_k} + \text{cst} \sum_{y_i \text{ misclassified}} y_i $ terminates after a finite number of steps when the data are linearly separable is due to Novikoff (1962).

But, in practice, the data are usually not linearly separable. So instead, we use the Least Squares Procedure.

We want $ \vec{c} \cdot y_i > 0 $ for all samples $ y_i $. This is a system of linear inequalities, which is usually hard to solve. Therefore, we convert the problem into a system of linear equalities.

We choose $ b_i > 0 $ and solve $ \vec{c} \cdot y_i = b_i $ for all $ i $.

The matrix equation has the following form:

$ \begin{bmatrix} y_1^{\top} \\ y_2^{\top} \\ \vdots \\ y_d^{\top} \end{bmatrix} \vec{c} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_d \end{bmatrix} $

This can also be written as $ Y \vec{c} = \vec{b} $, where the rows of the matrix $ Y $ are the samples $ y_i^{\top} $.

If $ d = n $ and $ \vec{y_1}, \ldots, \vec{y_d} $ are "generic" (i.e., the determinant of $ Y $ is not 0), then we "can" solve by matrix inversion.

If $ d > n $, the system is over-constrained (there is no exact solution in the generic case). This is the case where there is more data than needed, and the information is contradictory. In this case, we seek to minimize $ || Y \vec{c} - \vec{b} ||_{L_2} $. The solution is given by $ \vec{c} = (Y^{\top}Y)^{-1}Y^{\top}\vec{b} $, provided $ |Y^{\top}Y| \ne 0 $.

If $ |Y^{\top}Y| = 0 $, the limit $ \vec{c} = \lim_{\epsilon \to 0} (Y^{\top}Y + \epsilon I)^{-1}Y^{\top}\vec{b} $ always exists!
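
A small NumPy sketch of this least squares procedure, reusing the hypothetical normalized sample matrix from the perceptron sketch with margins $ b_i = 1 $. When $ |Y^{\top}Y| = 0 $, it approximates the limit by using a small fixed $ \epsilon $ instead of letting $ \epsilon \to 0 $.

import numpy as np

def least_squares_weight(Y, b, eps=1e-8):
    # Minimize ||Y c - b||_2.  If |Y^T Y| != 0, use c = (Y^T Y)^(-1) Y^T b;
    # otherwise approximate the limit with (Y^T Y + eps*I)^(-1) Y^T b.
    YtY = Y.T @ Y
    if np.linalg.det(YtY) != 0:
        return np.linalg.solve(YtY, Y.T @ b)
    return np.linalg.solve(YtY + eps * np.eye(YtY.shape[0]), Y.T @ b)

# Over-constrained example (d > n): four samples, three unknowns, margins b_i = 1.
Y = np.array([[2.0, 2.0,  1.0],
              [3.0, 1.0,  1.0],
              [1.0, 1.0, -1.0],
              [2.0, 2.0, -1.0]])
b = np.ones(len(Y))
c = least_squares_weight(Y, b)
print(c)        # least squares weight vector
print(Y @ c)    # close to b, but not exact in the generic over-constrained case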



Fisher's Linear Discriminant

Main article: Fisher Linear Discriminant

Fisher's Linear Discriminant solves a dual problem: traditionally, we have defined a separating hyperplane; Fisher's linear discriminant instead defines a projection which reduces the data to a single dimension.

Fisher's Linear Discriminant optimizes the between-class spread of the projected data relative to its within-class spread.
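
A minimal sketch of Fisher's linear discriminant for two classes, assuming the standard solution $ \vec{w} = S_W^{-1}(\vec{m_1} - \vec{m_2}) $, where $ S_W $ is the within-class scatter matrix and $ \vec{m_1}, \vec{m_2} $ are the class means; the Gaussian toy data are hypothetical.

import numpy as np

def fisher_direction(X1, X2):
    # Fisher's linear discriminant direction w = S_W^{-1} (m1 - m2).
    # X1, X2 are (samples x features) arrays for the two classes.
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = (X1 - m1).T @ (X1 - m1)      # class-1 scatter
    S2 = (X2 - m2).T @ (X2 - m2)      # class-2 scatter
    return np.linalg.solve(S1 + S2, m1 - m2)

# Hypothetical 2-D data; projecting onto w reduces each sample to one dimension.
rng = np.random.default_rng(0)
X1 = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(20, 2))
X2 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(20, 2))
w = fisher_direction(X1, X2)
print(w)
print(X1 @ w)   # class-1 projections (well separated from class-2 projections)
print(X2 @ w)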


Previous: Lecture 9 Next: Lecture 11

Back to ECE662 Spring 2008 Prof. Boutin
