ECE662: Statistical Pattern Recognition and Decision Making Processes

Spring 2008, Prof. Boutin

Collectively created by the students in the class

Lecture 12 Lecture notes


Support Vector Machines

(Continued from Lecture 11)

• Definition

The support vectors are the training points $y_i$ such that $\vec{c}\cdot{y_i}=b$, i.e. they are the training points closest to the separating hyperplane.

• How to Train a Support Vector Machine (SVM)

We want to find $\vec{c}$ such that $\vec{c}\cdot{y_i} \geq b, \forall{i}$. This, however, may be wishful thinking, so we try to satisfy it for as many training samples as possible, with $b$ as large as possible.

Observe: If $\vec{c}$ is a solution with margin $b$, then $\alpha\vec{c}$ is a solution with margin $\alpha b$, for all $\alpha > 0$.

So to pose the problem well, we demand that $\vec{c}\cdot{y_i} \geq 1, \forall{i}$ and try to minimize $||\vec{c}||$.
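To see why this normalization leads to the right objective: the distance from a correctly classified sample $y_i$ to the hyperplane $\vec{c}\cdot y = 0$ is $\frac{\vec{c}\cdot y_i}{||\vec{c}||}$, so the margin is $\displaystyle\min_i \frac{\vec{c}\cdot y_i}{||\vec{c}||}$. After rescaling $\vec{c}$ so that $\min_i \vec{c}\cdot y_i = 1$, the margin is exactly $\frac{1}{||\vec{c}||}$, and maximizing the margin is therefore the same as minimizing $||\vec{c}||$.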

• For small training sets, we can use a training method similar to the online perceptron (a rough sketch in code follows the list):
1. Pick a plane $\vec{c}$.
2. Find the worst-classified sample $y_{i_0}$. (Note: this step is computationally expensive for large data sets.)
3. Move the plane $\vec{c}$ to improve the classification.
4. Repeat steps 2-3 until the algorithm converges.
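A minimal Python sketch of this loop, assuming the training samples are stacked in an array Y and are already in the augmented, sign-normalized "y" form used in these notes (so correct classification means $\vec{c}\cdot y_i > 0$); the function name, step size, and stopping rule are illustrative choices, not part of the lecture:

```python
import numpy as np

def train_svm_perceptron_style(Y, eta=0.1, n_iters=1000):
    """Perceptron-like training loop sketched in the lecture notes.

    Y : (n, d) array of samples in augmented, sign-normalized form.
    This is an illustrative sketch, not an exact SVM solver.
    """
    n, d = Y.shape
    c = np.zeros(d)                      # step 1: pick an initial plane
    for _ in range(n_iters):
        margins = Y @ c                  # evaluate every sample ...
        worst = np.argmin(margins)       # step 2: ... and find the worst-classified one
        if margins[worst] >= 1:          # every sample already has margin >= 1
            break
        c = c + eta * Y[worst]           # step 3: move the plane toward the worst sample
    return c                             # step 4: loop until convergence (or n_iters)
```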

We want to minimize $||\vec{c}||$ subject to the constraints $\vec{c}\cdot{y_i} \geq 1, \forall{i}$ (i.e. correct classification with margin). Note: $||\vec{c}||$ is awkward to optimize directly because the norm is not differentiable at the origin. Instead, optimize $\frac{1}{2}||\vec{c}||^2$, a smooth convex function with the same minimizer.

This gives a Quadratic Optimization Problem: Minimize $\frac{1}{2}||\vec{c}||^2$ subject to the constraints $\vec{c}\cdot{y_i} \geq 1, \forall{i}$. At the optimum, equality $\vec{c}\cdot{y_i} = 1$ holds exactly for the support vectors $y_i$.
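As a concrete illustration, this quadratic program can be handed to a general-purpose constrained solver. The sketch below uses scipy.optimize.minimize with the SLSQP method and assumes the data are linearly separable (so the constraints are feasible); it is just one way to solve the stated problem, not the training procedure described in the lecture:

```python
import numpy as np
from scipy.optimize import minimize

def train_hard_margin_svm(Y):
    """Minimize (1/2)||c||^2 subject to c . y_i >= 1 for every sample.

    Y : (n, d) array of samples in augmented, sign-normalized form.
    Illustrative only; dedicated QP solvers are used in practice.
    """
    n, d = Y.shape
    objective = lambda c: 0.5 * np.dot(c, c)                        # (1/2)||c||^2
    constraints = [{"type": "ineq", "fun": lambda c: Y @ c - 1.0}]  # c . y_i - 1 >= 0
    result = minimize(objective, x0=np.zeros(d), method="SLSQP",
                      constraints=constraints)
    return result.x
```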

Formulate using Lagrange multipliers: introduce a multiplier $\alpha_i \geq 0$ for each constraint and form the Lagrangian $L(\vec{c},\vec{\alpha})=\frac{1}{2}||\vec{c}||^2 - \displaystyle \sum_{i} \alpha_i (\vec{c}\cdot y_i - 1)$, to be minimized over $\vec{c}$ and maximized over the $\alpha_i$.

Karush-Kuhn-Tucker Construction

Reformulate as "Dual Problem"

Maximize $L(\alpha)=\displaystyle \sum_{i} \alpha_{i}-\frac{1}{2}\sum_{i,j} \alpha _i \alpha_j y_i \cdot y_j$, where i, j are indices of SV.

Under the constraints $\displaystyle \sum_{i=1}^{d_1}\alpha _i - \sum_{i=d_1+1}^{d} \alpha _i = 0$ (the first sum over the samples of class 1, the second over the samples of class 2), and $\alpha_i \geq 0, \quad \forall{i}$

Then find $\vec{c}=\sum_{i} \alpha_{i} y_i$.
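To see where this dual comes from, set the gradient of the Lagrangian with respect to $\vec{c}$ to zero:

$\frac{\partial L}{\partial \vec{c}} = \vec{c} - \displaystyle\sum_{i} \alpha_i y_i = 0 \quad\Rightarrow\quad \vec{c} = \sum_{i} \alpha_i y_i$

Substituting this back into $L(\vec{c},\vec{\alpha})$ gives

$L = \frac{1}{2}\left\|\displaystyle\sum_{i} \alpha_i y_i\right\|^2 - \displaystyle\sum_{i} \alpha_i\left(\sum_{j} \alpha_j \, y_j\cdot y_i - 1\right) = \sum_{i} \alpha_i - \frac{1}{2}\sum_{i,j}\alpha_i\alpha_j\, y_i\cdot y_j,$

which is exactly $L(\alpha)$ above. The Karush-Kuhn-Tucker conditions force $\alpha_i = 0$ for every sample with $\vec{c}\cdot y_i > 1$, so only the support vectors contribute to the sums.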

• Key points:
• L only depends on SV
• L only depends on $y_{i} \cdot y_{j}$, not $y_i$ or $y_j$ explicitly.

Recall: maximize over $\vec{\alpha}$ (with $\alpha_i \geq 0$) the Lagrangian $L(\vec{c},\vec{\alpha})=\frac{1}{2}||\vec{c}||^2 - \displaystyle \sum_{i} \alpha _i (\vec{c} \cdot y_i -1 )$.

When the data are not linearly separable, every $\vec{c}$ violates some constraint $\vec{c}\cdot y_i \geq 1$, so the corresponding $\alpha_i$ can be made arbitrarily large and $\max_{\vec{\alpha}} L(\vec{c},\vec{\alpha})=\infty$. Do not attempt this numerical optimization in that case.

How to train a support vector machine when the data are not linearly separable:

Define "soft margins" by introducing "slack variable" $\xi_i$ , which measure misclassification of $y_i$. This means we introduce penalty terms through the modified conditions below:

$\vec{c} \cdot y_i \geq 1 \Rightarrow \vec{c} \cdot y_i \geq 1 - \xi _i$ (try to use as few non-zero penalty terms as possible).

Minimize $\frac{1}{2}||\vec{c}||^2 + C \displaystyle \sum_{i=1}^{d} \xi _i$, where $C$ is a constant controlling the penalty.

Subject to $\vec{c} \cdot y_i \geq 1 - \xi _i$ and $\xi_i \geq 0, \quad \forall{i}$
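As a rough illustration of how this can be optimized: at the optimum the slack variables satisfy $\xi_i = \max(0,\, 1-\vec{c}\cdot y_i)$, so the problem is equivalent to minimizing $\frac{1}{2}||\vec{c}||^2 + C\displaystyle\sum_i \max(0,\, 1-\vec{c}\cdot y_i)$ with no explicit constraints. The sub-gradient descent sketch below is just one simple way to attack this reformulation; the function name, step size, and iteration count are illustrative:

```python
import numpy as np

def train_soft_margin_svm(Y, C=1.0, eta=0.01, n_iters=2000):
    """Sub-gradient descent on (1/2)||c||^2 + C * sum_i max(0, 1 - c . y_i).

    The hinge terms max(0, 1 - c . y_i) play the role of the slack
    variables xi_i.  Y : (n, d) array in augmented, sign-normalized form.
    Illustrative sketch, not a tuned solver.
    """
    n, d = Y.shape
    c = np.zeros(d)
    for _ in range(n_iters):
        margins = Y @ c
        violators = margins < 1                     # samples with nonzero slack
        grad = c - C * Y[violators].sum(axis=0)     # sub-gradient of the objective
        c = c - eta * grad
    return c
```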

• The Kernel Trick

The kernel trick was proposed by Aizerman in another context and was used by Boser, Guyon, and Vapnik to upgrade the SVM to find non-linear decision boundaries, based on the observation that SVM training involves the training data only through the inner products $y_i\cdot{y_j}$.

Writing $y_i = \varphi(x_i)$ for a mapping $\varphi$ of the original feature vectors into another (possibly higher-dimensional) space, $y_i \cdot y_j = \varphi (x_i) \cdot \varphi (x_j)$.

A kernel function $K:\Re ^k \times \Re ^k \rightarrow \Re$ for a given mapping $\varphi$ is a function such that $K(x,x')=\varphi (x) \cdot \varphi (x')$.

Therefore, $y_i \cdot y_j = K(x_i,x_j)$ $\rightarrow$ No need to compute $\varphi (x_i), \varphi (x_j)$ and no need to know $\varphi$.

In order to exploit the kernel trick, we need good kernel functions. One way to obtain them is to choose basis functions $\varphi (x)$ and use them to derive the corresponding kernel. Another method is to construct kernel functions directly, making sure that the function is a valid kernel, meaning that it corresponds to a scalar product in some feature space. The necessary and sufficient condition for a function to be a valid kernel is that the Gram matrix $K$, whose elements are $k(x_n, x_m)$, be positive semidefinite for every possible choice of the points $\{x_n\}$.

A powerful method for creating good kernel functions is to build new kernels out of simpler ones. Valid kernels can be combined by transformations such as multiplication by a positive scalar, summing two kernels, multiplying two kernels, or composing a kernel with certain other functions, all of which preserve validity.
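A small numerical illustration of both points, checking that the Gram matrix of a kernel built from simpler kernels is positive semidefinite (the helper functions below are illustrative, not a standard library API):

```python
import numpy as np

def linear_kernel(x, xp):
    return np.dot(x, xp)

def poly_kernel(x, xp, degree=2):
    return np.dot(x, xp) ** degree

def combined_kernel(x, xp):
    # A positive multiple of a kernel plus another kernel is again a valid kernel.
    return 2.0 * linear_kernel(x, xp) + poly_kernel(x, xp)

def gram_matrix(kernel, X):
    """Gram matrix K with entries K[n, m] = k(x_n, x_m)."""
    return np.array([[kernel(xn, xm) for xm in X] for xn in X])

# Check positive semidefiniteness numerically on a random set of points.
X = np.random.randn(10, 2)
K = gram_matrix(combined_kernel, X)
print(np.linalg.eigvalsh(K).min() >= -1e-10)   # True: all eigenvalues nonnegative
```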

Some nice examples using support vector machines can be found in Support Vector Machine Examples and SVM-2D.

• One example of kernel functions:

Take $\varphi (x_1 , x_2)=(x_1^2,\sqrt{2} x_1 x_2, x_2^2 )$

then $\varphi (x_1 , x_2) \cdot \varphi (x_1' , x_2') = x_1^2 x_1'^2 + 2 x_1 x_2 x_1' x_2' + x_2^2 x_2'^2$

$= (x_1 x_1' + x_2 x_2')^2 = [(x_1, x_2) \cdot (x_1', x_2')]^2$

Here the kernel is $K(\vec x, \vec{x}' ) = (\vec x \cdot \vec{x}' )^2$.

Note: $K$ is also the kernel for other mappings $\varphi$.

For example, $\varphi (x_1 , x_2) = \frac {1}{\sqrt{2}} (x_1^2 - x_2^2,\ 2 x_1 x_2,\ x_1^2 + x_2^2)$ yields the same kernel.

Hence there is no one-to-one correspondence between kernels and mappings $\varphi$.
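A quick numerical check of this example, verifying that both mappings reproduce the kernel $K(\vec x, \vec{x}')=(\vec x \cdot \vec{x}')^2$ (the function names below are just for illustration):

```python
import numpy as np

def phi_a(x):
    # First mapping: (x1^2, sqrt(2) x1 x2, x2^2)
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

def phi_b(x):
    # Second mapping: (1/sqrt(2)) * (x1^2 - x2^2, 2 x1 x2, x1^2 + x2^2)
    x1, x2 = x
    return np.array([x1**2 - x2**2, 2 * x1 * x2, x1**2 + x2**2]) / np.sqrt(2)

def K(x, xp):
    # The kernel (x . x')^2
    return np.dot(x, xp) ** 2

x, xp = np.random.randn(2), np.random.randn(2)
print(np.isclose(np.dot(phi_a(x), phi_a(xp)), K(x, xp)))   # True
print(np.isclose(np.dot(phi_b(x), phi_b(xp)), K(x, xp)))   # True
```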

Previous: Lecture 11 Next: Lecture 13
