<center><font size= 4>
'''[[ECE662]]: Statistical Pattern Recognition and Decision Making Processes'''
</font size>

Spring 2008, [[user:mboutin|Prof. Boutin]]

[[Slectures|Slecture]]

<font size= 3> Collectively created by the students in [[ECE662:BoutinSpring08_OldKiwi|the class]]</font size>
</center>


----
=Lecture 12 Lecture notes=
Jump to: [[ECE662_Pattern_Recognition_Decision_Making_Processes_Spring2008_sLecture_collective|Outline]]|
[[Lecture 1 - Introduction_OldKiwi|1]]|
[[Lecture 2 - Decision Hypersurfaces_OldKiwi|2]]|
[[Lecture 3 - Bayes classification_OldKiwi|3]]|
[[Lecture 4 - Bayes Classification_OldKiwi|4]]|
[[Lecture 5 - Discriminant Functions_OldKiwi|5]]|
[[Lecture 6 - Discriminant Functions_OldKiwi|6]]|
[[Lecture 7 - MLE and BPE_OldKiwi|7]]|
[[Lecture 8 - MLE, BPE and Linear Discriminant Functions_OldKiwi|8]]|
[[Lecture 9 - Linear Discriminant Functions_OldKiwi|9]]|
[[Lecture 10 - Batch Perceptron and Fisher Linear Discriminant_OldKiwi|10]]|
[[Lecture 11 - Fischer's Linear Discriminant again_OldKiwi|11]]|
[[Lecture 12 - Support Vector Machine and Quadratic Optimization Problem_OldKiwi|12]]|
[[Lecture 13 - Kernel function for SVMs and ANNs introduction_OldKiwi|13]]|
[[Lecture 14 - ANNs, Non-parametric Density Estimation (Parzen Window)_OldKiwi|14]]|
[[Lecture 15 - Parzen Window Method_OldKiwi|15]]|
[[Lecture 16 - Parzen Window Method and K-nearest Neighbor Density Estimate_OldKiwi|16]]|
[[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_OldKiwi|17]]|
[[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_OldKiwi|18]]|
[[Lecture 19 - Nearest Neighbor Error Rates_OldKiwi|19]]|
[[Lecture 20 - Density Estimation using Series Expansion and Decision Trees_OldKiwi|20]]|
[[Lecture 21 - Decision Trees(Continued)_OldKiwi|21]]|
[[Lecture 22 - Decision Trees and Clustering_OldKiwi|22]]|
[[Lecture 23 - Spanning Trees_OldKiwi|23]]|
[[Lecture 24 - Clustering and Hierarchical Clustering_OldKiwi|24]]|
[[Lecture 25 - Clustering Algorithms_OldKiwi|25]]|
[[Lecture 26 - Statistical Clustering Methods_OldKiwi|26]]|
[[Lecture 27 - Clustering by finding valleys of densities_OldKiwi|27]]|
[[Lecture 28 - Final lecture_OldKiwi|28]]
----
----
 
=== Support Vector Machines ===
 
(Continued from [[Lecture 11 - Fischer's Linear Discriminant again_OldKiwi|Lecture 11]])

*Definition

The support vectors are the training points <math>y_i</math> such that <math>\vec{c}\cdot{y_i}=b,\forall{i}</math>, i.e. they are the points closest to the hyperplane.

[[Image:Lec12_sv_pic_OldKiwi.PNG]]

*How to Train a Support Vector Machine (SVM)

We want to find a <math>\vec{c}</math> such that <math>\vec{c}\cdot{y_i} \geq b, \forall{i}</math>. This, however, is wishful thinking, so we try to satisfy this for as many training samples as possible, with <math>b</math> as large as possible.

Observe: if <math>\vec{c}</math> is a solution with margin <math>b</math>, then <math>\alpha\vec{c}</math> is a solution with margin <math>\alpha b</math>, for all <math>\alpha > 0</math>.
 
So to pose the problem well, we demand that <math>\vec{c}\cdot{y_i} \geq 1, \forall{i}</math> and try to minimize <math>||\vec{c}||</math>.
  
*For small training sets, we can use a training method similar to the online [[perceptron_OldKiwi|perceptron]] (see the sketch after this list):
#Pick a plane <math>\vec{c}</math>.
#Find the worst classified sample <math>y_{i_0}</math>. (Note: this step is computationally expensive for large data sets.)
#Move the plane <math>\vec{c}</math> to improve the classification.
#Repeat steps 2-3 until the algorithm converges.
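
A minimal Python sketch of this loop; the helper name <code>train_svm_perceptron_style</code> and the learning rate are illustrative, and the samples <math>y_i</math> are assumed to be sign-normalized so that a well classified sample has a large <math>\vec{c}\cdot y_i</math>:

<pre>
import numpy as np

def train_svm_perceptron_style(Y, lr=0.1, max_iter=1000):
    """Margin-driven variant of the online perceptron (illustrative sketch).

    Y : (n, d) array of sign-normalized training samples y_i.
    """
    n, d = Y.shape
    c = np.zeros(d)
    for _ in range(max_iter):
        margins = Y @ c                 # c . y_i for every sample
        worst = np.argmin(margins)      # worst classified sample y_{i_0}
        if margins[worst] >= 1.0:       # every sample already meets the margin
            break
        c += lr * Y[worst]              # move the plane toward the worst sample
    return c
</pre>

As noted in step 2, the scan over the whole training set on every pass is what makes this expensive for large data sets.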
  
We want to minimize <math>||\vec{c}||</math> subject to the constraints <math>\vec{c}\cdot{y_i} \geq 1, \forall{i}</math> (i.e. correct classification). Note: minimizing <math>||\vec{c}||</math> directly is awkward because the norm is not differentiable at the origin; instead, minimize <math>\frac{1}{2}||\vec{c}||^2</math>, a convex quadratic with the same minimizer.
 
  
This gives a [[Quadratic Optimization Problem_OldKiwi]]: minimize <math>\frac{1}{2}||\vec{c}||^2</math> subject to the constraints <math>\vec{c}\cdot{y_i} \geq 1, \forall{i}</math>. However, we should demand that <math>\vec{c}\cdot{y_i} = 1</math> for all support vectors <math>y_i</math>.
  
Formulate using [[Lagrange multipliers_OldKiwi]].
=== Karush-Kuhn-Tucker Construction ===

Reformulate as the "Dual Problem":
  
Maximize <math>L(\alpha)=\displaystyle \sum_{i} \alpha_{i}-\frac{1}{2}\sum_{i,j} \alpha _i  \alpha_j y_i \cdot y_j</math>, where i, j are indices of the support vectors,

under the constraints <math>\displaystyle \sum_{i=1}^{d_1}\alpha _i - \sum_{i=d_1+1}^{d} \alpha _i = 0</math> and <math>\alpha_i \geq 0, \quad \forall{i}</math>.

Then find <math>\vec{c}=\sum_{i} \alpha_{i} y_i</math>.
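
A numerical sketch of this dual, assuming labels <math>z_i\in\{+1,-1\}</math> so that <math>y_i=z_i x_i</math> and the equality constraint above reads <math>\sum_i z_i\alpha_i=0</math>; the helper name <code>svm_dual</code> is hypothetical:

<pre>
import numpy as np
from scipy.optimize import minimize

def svm_dual(X, z):
    """Maximize L(alpha) = sum_i alpha_i - 1/2 sum_ij alpha_i alpha_j y_i . y_j
    subject to sum_i z_i alpha_i = 0 and alpha_i >= 0 (illustrative sketch).

    X : (n, d) samples, z : (n,) labels in {+1, -1}.
    """
    Y = z[:, None] * X                      # sign-normalized samples y_i
    G = Y @ Y.T                             # matrix of inner products y_i . y_j
    n = len(z)

    def neg_L(alpha):                       # minimize the negative dual objective
        return -(alpha.sum() - 0.5 * alpha @ G @ alpha)

    constraints = [{"type": "eq", "fun": lambda a: np.dot(a, z)}]
    bounds = [(0.0, None)] * n              # alpha_i >= 0
    res = minimize(neg_L, x0=np.ones(n), bounds=bounds, constraints=constraints)
    alpha = res.x
    c = alpha @ Y                           # c = sum_i alpha_i y_i
    return alpha, c
</pre>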
*Key points:
**L only depends on the support vectors.
**L only depends on <math>y_{i} \cdot y_{j}</math>, not on <math>y_i</math> or <math>y_j</math> explicitly.

Recall: maximize <math>L(c,\vec{\alpha})=\frac{1}{2}||\vec{c}||^2 - \displaystyle \sum_{i} \alpha _i (\vec{c} \cdot y_i -1 )</math> over <math>\alpha_i \geq 0</math>.

When the data is not linearly separable, <math>\max L(c,\vec{\alpha})=\infty</math>, so do not attempt this numerical optimization.
'''How to train a support vector machine using nonlinearly separable data:'''

Define "soft margins" by introducing slack variables <math>\xi_i</math>, which measure the misclassification of <math>y_i</math>. That is, we relax the constraints and penalize the slack:

<math>\vec{c} \cdot y_i \geq 1 \Rightarrow \vec{c} \cdot y_i \geq 1 - \xi _i</math> (and try to keep as few non-zero slack variables as possible, via the penalty term below).

=== Quadratic Optimization Problem ===

Minimize <math>\frac{1}{2}||\vec{c}||^2 + \mathrm{constant} \cdot \displaystyle \sum_{i=1}^{d} \xi _i</math>

Subject to <math>\vec{c} \cdot y_i \geq 1 - \xi _i, \quad \forall{i}</math>
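
Off-the-shelf solvers handle this soft-margin program directly; for instance, in scikit-learn's <code>SVC</code> the parameter <code>C</code> plays the role of the "constant" weighting the slack penalty. A toy sketch, assuming scikit-learn is available:

<pre>
import numpy as np
from sklearn.svm import SVC

# Toy 2-D data: two overlapping Gaussian classes (not linearly separable).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
z = np.hstack([-np.ones(50), np.ones(50)])

clf = SVC(kernel="linear", C=1.0)    # C weights the slack penalty term sum_i xi_i
clf.fit(X, z)
print(clf.support_vectors_.shape)    # only the support vectors define the solution
</pre>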
  
*'''The Kernel Trick'''

Proposed by Aizerman in another context; used by Vapnik, Boser and Guyon to upgrade the [[SVM_OldKiwi|SVM]] to find non-linear decision boundaries, based on the observation that SVM training only involves the inner products <math>y_i\cdot{y_j}</math>:

<math>y_i \cdot y_j = \varphi (x_i) \cdot \varphi (x_j)</math>

A kernel function <math>K:\Re ^k \times \Re ^k \rightarrow \Re</math> for a given mapping <math>\varphi</math> is a function such that <math>K(x,x')=\varphi (x) \cdot \varphi (x')</math>.

Therefore, <math>y_i \cdot y_j = K(x_i,x_j)</math> <math>\rightarrow</math> there is no need to compute <math>\varphi (x_i), \varphi (x_j)</math> and no need to know <math>\varphi</math>.

In order to exploit the kernel trick, we need good kernel functions. One way to obtain them is to choose basis functions <math>\varphi (x)</math> and use them to find the corresponding kernel. Another is to construct kernel functions directly, making sure each candidate is a valid kernel, i.e. that it corresponds to a scalar product in some feature space.
The necessary and sufficient condition for a function to be a valid kernel is that the Gram matrix K, whose elements are <math>k(x_n, x_m)</math>, is [[positive semidefinite_OldKiwi|positive semidefinite]].
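
A quick numerical sanity check of this condition on a finite sample; <code>gram_matrix</code> is a hypothetical helper and the Gaussian kernel is just one standard choice:

<pre>
import numpy as np

def gram_matrix(kernel, X):
    """Gram matrix K with elements k(x_n, x_m)."""
    return np.array([[kernel(xn, xm) for xm in X] for xn in X])

def gaussian_kernel(x, xp, sigma=1.0):
    return np.exp(-np.sum((x - xp) ** 2) / (2.0 * sigma ** 2))

X = np.random.rand(20, 2)
K = gram_matrix(gaussian_kernel, X)
eigs = np.linalg.eigvalsh(K)          # K is symmetric, so eigvalsh applies
print(eigs.min() >= -1e-10)           # all eigenvalues are (numerically) nonnegative
</pre>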
  
A powerful method for creating good kernel functions is to use simpler kernel functions to build new ones: kernels can be combined through transformations such as scalar multiplication, summing two kernel functions, composing kernel functions with other functions, and so on.
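
For instance, sums, products and positive scalings of valid kernels are again valid kernels; a small Python sketch of such constructions, with illustrative helper names:

<pre>
import numpy as np

def scale_kernel(a, k):
    """a * k(x, x') is a valid kernel for a > 0."""
    return lambda x, xp: a * k(x, xp)

def sum_kernels(k1, k2):
    """k1(x, x') + k2(x, x') is a valid kernel."""
    return lambda x, xp: k1(x, xp) + k2(x, xp)

def product_kernels(k1, k2):
    """k1(x, x') * k2(x, x') is a valid kernel."""
    return lambda x, xp: k1(x, xp) * k2(x, xp)

# Example: combine a quadratic kernel (x . x')^2 with a scaled Gaussian kernel.
linear = lambda x, xp: float(np.dot(x, xp))
quadratic = product_kernels(linear, linear)
gaussian = lambda x, xp: float(np.exp(-np.sum((np.asarray(x) - np.asarray(xp)) ** 2)))
combined = sum_kernels(quadratic, scale_kernel(0.5, gaussian))

print(combined(np.array([1.0, 2.0]), np.array([0.5, -1.0])))
</pre>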
  
Some nice examples using support vector machines can be found in [[Support Vector Machine Examples_OldKiwi|Support Vector Machine Examples]] and [[SVM-2D_OldKiwi|SVM-2D]].
  
*One example of kernel functions:

Take <math>\varphi (x_1 , x_2)=(x_1^2,\sqrt{2} x_1 x_2, x_2^2 )</math>.

Then <math>\varphi (x_1 , x_2) \cdot \varphi (\acute{x_1} , \acute{x_2})=x_1^2 \acute{x_1}^2+2x_1 x_2 \acute{x_1} \acute{x_2} + x_2^2 \acute{x_2}^2 = (x_1 \acute{x_1} + x_2 \acute{x_2})^2 = [(x_1, x_2) \cdot (\acute{x_1}, \acute{x_2})]^2</math>.

Here the kernel is <math>K(\vec x, \acute{\vec x} ) = (\vec x \cdot \acute{\vec x} )^2</math>.

Note: K is also the kernel for other mappings <math>\varphi</math>. For example, <math>\varphi (x_1 , x_2) = \frac {1}{\sqrt{2}} (x_1^2 - x_2^2, 2 x_1 x_2, x_1^2 + x_2^2)</math> yields the same kernel, so the correspondence between kernels and mappings <math>\varphi</math> is not one-to-one.
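
A quick numerical check that both explicit mappings above reproduce the same kernel <math>K(x,x')=(x\cdot x')^2</math> (illustrative sketch):

<pre>
import numpy as np

phi_a = lambda x: np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])
phi_b = lambda x: (1.0 / np.sqrt(2)) * np.array([x[0] ** 2 - x[1] ** 2,
                                                 2.0 * x[0] * x[1],
                                                 x[0] ** 2 + x[1] ** 2])
K = lambda x, xp: np.dot(x, xp) ** 2

x, xp = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(np.dot(phi_a(x), phi_a(xp)), np.dot(phi_b(x), phi_b(xp)), K(x, xp))
# All three values agree: different mappings phi can give the same kernel.
</pre>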
----
Previous: [[Lecture_11_-_Fischer's_Linear_Discriminant_again_OldKiwi|Lecture 11]]
Next: [[Lecture_13_-_Kernel_function_for_SVMs_and_ANNs_introduction_OldKiwi|Lecture 13]]
  
[[ECE662:BoutinSpring08_OldKiwi|Back to ECE662 Spring 2008 Prof. Boutin]]

[[Category:ECE662]]
[[Category:decision theory]]
[[Category:lecture notes]]
[[Category:pattern recognition]]
[[Category:slecture]]
