Nearest Neighbors Classification Rule (Alternative Approaches) --Han47 10:34, 10 March 2008 (EDT)

Alternative Approach

Find invariant coordinates $ \varphi : \Re ^k \rightarrow \Re ^n $ --Han47 10:41, 10 March 2008 (EDT) such that $ \varphi (x) = \varphi (\bar x) $ for all $ x, \bar x $ which are related by a rotation & translation

Do not trivialize!

e.g.) $ \varphi (x) =0 $ gives us an invariant coordinate but loses all separation between classes

Want $ \varphi (x) = \varphi (\bar x) $ $ \Leftrightarrow x, \bar x $ are related by a rotation and translation

Example: $ p=(p_1,p_2,\cdots, p_N) \in \Re ^{3 \times N} $ representing the positions of $ N $ points on a body; $ \varphi $ maps $ p $ onto $ (d_{12},d_{13},d_{14},\cdots , d_{N-1, N} ) $

where $ d_{ij} $= Euclidean distance between $ p_i $ and $ p_j $

Can reconstruct up to a rotation and translation
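A minimal numerical sketch of this map (the NumPy code, the function name phi, and the five random points are illustrative assumptions, not part of the lecture):

<pre>
import numpy as np
from itertools import combinations

def phi(p):
    # Map an N-point configuration p (shape 3 x N) to the vector of pairwise
    # Euclidean distances (d_12, d_13, ..., d_{N-1,N}).
    N = p.shape[1]
    return np.array([np.linalg.norm(p[:, i] - p[:, j])
                     for i, j in combinations(range(N), 2)])

rng = np.random.default_rng(0)
p = rng.normal(size=(3, 5))                     # 5 arbitrary points in R^3
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # orthogonal matrix (rotation, possibly with a reflection)
t = rng.normal(size=(3, 1))
p_bar = Q @ p + t                               # rigidly moved copy of the configuration

print(np.allclose(phi(p), phi(p_bar)))          # True: the invariant coordinates agree
</pre>

Pairwise distances are unchanged by any rigid motion, which is exactly the invariance required of $ \varphi $.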

Warning: Euclidean distance in the invariant coordinate space has nothing to do with Euclidean distance or Procrustes distance in the initial feature space

Nearest Neighbor in $ \Re ^2 $ yields a tessellation (a tiling of the plane with 2D shapes such that 1) there are no holes and 2) the tiles cover all of $ \Re ^2 $)

Shape of cells depends on metric chosen
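For the usual Euclidean metric, these cells are exactly the Voronoi cells of the training points. A minimal plotting sketch using SciPy's Voronoi utilities (the 15 random points are an arbitrary illustration):

<pre>
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi, voronoi_plot_2d

rng = np.random.default_rng(1)
points = rng.uniform(0, 1, size=(15, 2))   # 15 arbitrary training points in R^2
vor = Voronoi(points)                      # each cell = set of points closest to one training sample
voronoi_plot_2d(vor)
plt.show()
</pre>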

E.g., if feature vectors are such that vectors related by a rotation belong to the same class $ \rightarrow $ the metric should be chosen so that the tiles are rotationally symmetric.

Instead of working with $ (x,y) $, which is not rotationally invariant, work with $ z=\sqrt{x^2 + y^2} $
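To check the invariance directly: a rotation by angle $ \theta $ sends $ (x,y) $ to $ (x\cos\theta - y\sin\theta ,\ x\sin\theta + y\cos\theta) $, and

$ (x\cos\theta - y\sin\theta)^2 + (x\sin\theta + y\cos\theta)^2 = x^2 + y^2 $

so $ z $ is unchanged.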

How good is the Nearest Neighbor rule? 1) Training error is zero: we don't care. 2) Test error? Want it close to the Bayes error rate.


  • Nearest Neighbor error rate

Recall that the probability of error (error rate) on test data is $ P(e)=\int p(e \mid \vec{x}) p(\vec{x}) d\vec{x} $

Let $ P_d(e) $ be the error rate when d training samples are used.

Let $ P=\lim_{d \rightarrow \infty } P_{d}(e) $

Claim: the limiting error rate is $ P=\int \left(1-\sum _{i=1} ^{c}p^2 (\omega _i \mid \vec{x})\right)p(\vec{x})d\vec{x} $
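For example, in the two-class case ($ c=2 $), $ p(\omega _1 \mid \vec{x}) + p(\omega _2 \mid \vec{x}) = 1 $, so the integrand becomes

$ 1 - p^2(\omega _1 \mid \vec{x}) - p^2(\omega _2 \mid \vec{x}) = 2\, p(\omega _1 \mid \vec{x})\,(1-p(\omega _1 \mid \vec{x})) $

which is at most twice the conditional Bayes error $ \min (p(\omega _1 \mid \vec{x}), p(\omega _2 \mid \vec{x})) $; in this case $ P $ is therefore at most twice the Bayes error rate.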

Proof of claim: Given observation $ \vec{x} $, denote by $ \vec{x'}_d $ the nearest neighbor of $ \vec{x} $ among the training samples $ \{\vec{x}_1,\vec{x}_2, \cdots , \vec{x}_d \} $

$ P_d(e \mid \vec{x})=\int P_d(e \mid \vec{x}, \vec{x'}_d)\,p_d(\vec{x'}_d \mid \vec{x})\,d\vec{x'}_d $ but $ \lim _{d \rightarrow \infty } p_d (\vec{x'}_d \mid \vec{x})=\delta (\vec{x'}_d-\vec{x}) $

because the probability that a sample falls into a region $ R $ centered at $ \vec{x} $ is $ P_{R}=\int _R p(\vec{x'}_d)d \vec{x'}_d $.

So, if $ p(\vec{x}) \neq 0 $ (true almost everywhere), then the probability that all $ d $ samples fall outside $ R $ vanishes: $ \lim _{d \rightarrow \infty} {(1-P_{R})}^d =0 $

So, $ \lim _{d \rightarrow \infty} \vec{x'}_d = \vec{x} $ and $ p_d (\vec{x'}_d \mid \vec{x})=\delta (\vec{x'}_d -\vec{x})+\epsilon _d (\vec{x}) $ where $ \lim _{d \rightarrow \infty } \epsilon _d (\vec{x})=0 $

Now, what is $ P_d (e \mid \vec{x}, \vec{x'}_d) $?

Let $ \theta , \theta _1 , \theta _2 , \cdots, \theta _{d} $ be the classes of $ \vec{x} , \vec{x}_1 , \vec{x}_2 , \cdots , \vec{x}_d $, respectively.

Using the nearest neighbor rule, an error occurs if $ \theta \neq \theta ' _d $, where $ \theta ' _d $ denotes the class of $ \vec{x'}_d $

$ \Rightarrow P_d(e \mid \vec{x},\vec{x'}_d)=1-\sum_{i=1} ^ c P(\theta = \omega _i , \theta ' _d = \omega _i \mid \vec{x}, \vec{x'}_d ) $

$ =1- \sum _{i=1} ^c p(\omega _i \mid \vec{x}) p(\omega _i \mid \vec{x'} _d) $, since the labels $ \theta $ and $ \theta ' _d $ are independent given $ \vec{x} $ and $ \vec{x'}_d $

Recall $ P_d (e \mid \vec{x}) = \int P_d (e \mid \vec{x}, \vec{x'}_d)\,p_d (\vec{x'}_d \mid \vec{x})\,d \vec{x'}_d $


Get $ P_d (e \mid \vec{x})=\left(1-\sum _{i=1} ^c p(\omega _i \mid \vec{x})\, p(\omega _i \mid \vec{x}) \right) $ + something that goes to zero as $ d $ goes to $ \infty $

$ \lim _{d \rightarrow \infty} P_d (e \mid \vec{x})=1- \sum _{i=1} ^c {p(\omega _i \mid \vec{x})}^2 $, and integrating against $ p(\vec{x}) $ gives the claimed limit $ P $. --Han47 11:47, 10 March 2008 (EDT)
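A quick numerical check of this limit (a sketch only: the two 1-D Gaussian classes, the sample sizes, and every parameter below are assumptions made for illustration, not part of the lecture). It estimates the 1-NN test error with a large training set and compares it with $ \int \left(1-\sum _{i=1} ^c {p(\omega _i \mid \vec{x})}^2\right) p(\vec{x})d\vec{x} $ evaluated numerically for the same densities:

<pre>
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 20000, 20000     # "d -> infinity" approximated by a large training set
mu0, mu1, sigma = -1.0, 1.0, 1.0   # two 1-D Gaussian classes with equal priors

def sample(n):
    labels = rng.integers(0, 2, size=n)
    x = rng.normal(np.where(labels == 1, mu1, mu0), sigma)
    return x, labels

xtr, ytr = sample(n_train)
xte, yte = sample(n_test)

# 1-NN in 1-D via a sorted training set and binary search
order = np.argsort(xtr)
xs, ys = xtr[order], ytr[order]
pos = np.clip(np.searchsorted(xs, xte), 1, len(xs) - 1)
nearer_left = np.abs(xte - xs[pos - 1]) <= np.abs(xte - xs[pos])
nearest = np.where(nearer_left, pos - 1, pos)
nn_error = np.mean(ys[nearest] != yte)

# Predicted limit: numerically integrate (1 - sum_i p(w_i|x)^2) p(x) dx
grid = np.linspace(-8.0, 8.0, 4001)
def gauss(x, mu):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
p0, p1 = 0.5 * gauss(grid, mu0), 0.5 * gauss(grid, mu1)
px = p0 + p1
post0, post1 = p0 / px, p1 / px
limit = np.sum((1.0 - post0 ** 2 - post1 ** 2) * px) * (grid[1] - grid[0])

print(f"empirical 1-NN error: {nn_error:.4f}   predicted limit: {limit:.4f}")
</pre>

The empirical error should land close to the predicted limit, and both should sit between the Bayes error rate and twice the Bayes error rate.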
