
Density Estimation using Series Expansion

Last "non-parametric" technique (although very parametric)

Write $ p(\vec{x})=\sum _{j=0}^{\infty}c_j f_j (\vec{x}) \cong \sum _{j=0} ^{m}c_j f_j (\vec{x}) $ (1)

where $ \{f_j\} $ is a pre-determined class of functions

$ \vec{x} = (x_1, \cdots, x_n) $ (2)

Monomials: $ x_1 , x_1x_2 , x_1 ^3 $ (3)

Polynomials: $ x_1 + x_2 , x_1 + x_1 ^2 +x_1 x_2 $ (4)

E.g.) Taylor expansion about $ x_0 $

In 1-D, $ p(x)=\sum _{j=0} ^ {\infty} \frac{p^{(j)} (x_0) (x-x_0)^j}{j!} $ (5) when $ p(x) $ is analytic

Taylor polynomial approximation

$ p(x) \approx \sum _{j=0} ^{m} \frac{p^{(j)}(x_0)(x-x_0)^j}{j!} $ (6) when $ p(x) \in C^{m+1}(\Re) $

When $ m=1 $, this is the linear approximation:

$ p(\vec{x})\approx c_0 + \vec{c} \cdot (\vec{x}-\vec{x}_0) $ (7)
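To make (5)-(7) concrete, here is a minimal Python sketch (my illustration, not part of the original notes) that evaluates the order-$ m $ Taylor approximation of the standard normal density about a point $ x_0 $. It relies on the identity $ \frac{d^j}{dx^j} e^{-x^2/2} = (-1)^j He_j(x) e^{-x^2/2} $, where $ He_j $ are the probabilists' Hermite polynomials; all function and variable names are illustrative.

import math

def taylor_density(x, x0, m):
    """Order-m Taylor approximation (6) of the standard normal density
    about x0, using the Hermite recurrence He_{j+1} = x*He_j - j*He_{j-1}."""
    p0 = math.exp(-x0 ** 2 / 2) / math.sqrt(2 * math.pi)   # p(x0)
    he_prev, he = 1.0, x0        # He_0(x0), He_1(x0)
    approx = p0                  # j = 0 term of the sum
    for j in range(1, m + 1):
        deriv = (-1) ** j * he * p0                # p^(j)(x0)
        approx += deriv * (x - x0) ** j / math.factorial(j)
        he_prev, he = he, x0 * he - j * he_prev    # advance to He_{j+1}(x0)
    return approx

# third-order approximation near x0 = 0.5 versus the exact density
print(taylor_density(0.7, x0=0.5, m=3))
print(math.exp(-0.7 ** 2 / 2) / math.sqrt(2 * math.pi))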

One can use the "Parzen window" approach to approximate $ p(x) $:

$ p(x) \cong \frac{K}{dV_d} $ (8), where $ K $ is the number of samples in a neighborhood of $ x $, $ d $ is the total number of samples, and $ V_d $ is the volume of that neighborhood.
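In 1-D, (8) can be evaluated directly. The following sketch (my illustration, assuming the neighborhood is the interval $ [x-h, x+h] $, so its volume is $ 2h $) counts the samples falling in the neighborhood:

def neighborhood_estimate(x, samples, h):
    """Estimate (8): K samples within distance h of x, d total samples,
    neighborhood volume V = 2h in 1-D."""
    K = sum(1 for xi in samples if abs(xi - x) <= h)
    d = len(samples)
    return K / (d * 2 * h)

samples = [0.1, 0.4, 0.45, 0.5, 0.9, 1.2]
print(neighborhood_estimate(0.5, samples, h=0.2))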

There is a relationship between series expansion and Parzen windows.

Recall the window function $ \Phi (\vec{x}) $:

<<Picture: window function>>

$ \Phi (\frac{\vec{x}-\vec{x}_i}{h_d}) $ (9) is used in approximating $ p(x) $:

$ p(\vec{x}) \cong p_d(\vec{x})=\sum _{i=1} ^{d} \frac{1}{dV_d} \Phi (\frac{\vec{x}-\vec{x}_i}{h_d}) $ (10)
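A direct 1-D implementation of (10), as a hedged sketch: I take $ \Phi $ to be the standard normal density and set $ V_d = h_d = h $, a common but not unique choice.

import math

def gaussian_window(u):
    """Window function Phi: the standard normal density."""
    return math.exp(-u * u / 2) / math.sqrt(2 * math.pi)

def parzen_estimate(x, samples, h):
    """Estimate (10) in 1-D with V_d = h_d = h."""
    d = len(samples)
    return sum(gaussian_window((x - xi) / h) for xi in samples) / (d * h)

samples = [0.1, 0.4, 0.45, 0.5, 0.9, 1.2]
print(parzen_estimate(0.5, samples, h=0.3))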

We want $ p_d (\vec{x})\cong \sum _{j=1} ^{m} c_j (\vec{x}_1 , \cdots, \vec{x}_d)f_j (\vec{x}) $ (11), where the coefficients depend on the samples.

Write $ \Phi (\vec{x})= \sum _{j=1} ^{\infty} \bar{c}_j f_j (\vec{x}) $ (12)

By computing the truncated series $ \Phi (\vec {x}) \cong \sum _{j=1} ^{m} \bar{c}_j f_j (\vec{x}) $ (13) and substituting it into (10), one obtains the sample-dependent coefficients $ c_j (\vec{x}_1 , \cdots, \vec{x}_d) $ in (11).

Example) 1-D Gaussian window and Taylor expansion; a sketch of this computation follows.
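One way such an example can be carried out (my reconstruction under the stated assumptions, not the notes' worked example): Taylor-expand the Gaussian window $ \Phi(u) = \frac{1}{\sqrt{2\pi}} e^{-u^2/2} $ about $ u = 0 $, expand the powers $ (x - x_i)^{2k} $ binomially, and collect the sample-dependent coefficients of (11) from moments of the data. The truncation is only accurate when $ |x - x_i|/h $ stays small over the sample.

import math

def series_coefficients(samples, h, m):
    """Coefficients c_0..c_{2m} of the series form (11) of the 1-D
    Gaussian-window Parzen estimate (10) with V_d = h, obtained by
    truncating the Taylor series of exp(-u^2/2) after m terms (12)-(13)."""
    d = len(samples)
    # sample moments M[r] = sum_i (-x_i)^r, computed once from the data
    M = [sum((-xi) ** r for xi in samples) for r in range(2 * m + 1)]
    c = [0.0] * (2 * m + 1)
    norm = 1.0 / (d * h * math.sqrt(2 * math.pi))
    for k in range(m + 1):
        a_k = (-1) ** k / (2 ** k * math.factorial(k) * h ** (2 * k))
        for l in range(2 * k + 1):
            # (x - x_i)^(2k) = sum_l C(2k, l) x^l (-x_i)^(2k - l)
            c[l] += norm * a_k * math.comb(2 * k, l) * M[2 * k - l]
    return c

samples = [0.1, 0.4, 0.45, 0.5, 0.9, 1.2]
c = series_coefficients(samples, h=1.0, m=4)
x = 0.5
print(sum(cj * x ** j for j, cj in enumerate(c)))   # series estimate at x

The payoff of (11)-(13): the moments, and hence the coefficients, are computed from the data once, so evaluating the estimate at a new $ x $ no longer requires a pass over all $ d $ samples.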

Decision Trees

Reference: DHS, Chapter 8. The decision tree is one of the most powerful methods for classification because it simplifies the problem by dividing it into subproblems. A sample decision tree and training set from J.R. Quinlan ("Induction of Decision Trees") are shown below:

<<Picture: sample decision tree (Decision_OldKiwi.jpg)>>

<<Picture: training set (Trainset_OldKiwi.jpg)>>

The decision tree separates two classes: "play tennis" and "do not play tennis". It reaches a decision by asking several questions in sequence. The goal is to generate the decision tree from the training data.

Instead of asking one complicated question such as "is $ g(\vec{x}) \geq 0 $ or $ g(\vec{x}) < 0 $?", the idea is to ask a series of simple questions following a tree structure, each question being a linear 1-D threshold; see the sketch after the figures below.

<<Picture: example decision trees (ECE662_lect20_tree1_OldKiwi.jpg, ECE662_lect20_tree2_OldKiwi.jpg)>>
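To illustrate the "series of simple questions" idea, here is a minimal Python sketch (my illustration; it uses entropy-based threshold questions on numeric features, which is in the spirit of, but not identical to, Quinlan's ID3 on categorical attributes). The toy data and all names are made up for the example.

import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((k / n) * math.log2(k / n) for k in Counter(labels).values())

def build_tree(X, y):
    """Greedily pick the simple question 'is x[f] >= t?' that minimizes the
    weighted entropy of the two children; recurse until a node is pure."""
    if len(set(y)) == 1:
        return y[0]                                  # pure leaf: one class
    best = None
    for f in range(len(X[0])):                       # every feature
        for t in sorted(set(row[f] for row in X)):   # every observed threshold
            yes = [i for i, row in enumerate(X) if row[f] >= t]
            no = [i for i in range(len(X)) if X[i][f] < t]
            if not yes or not no:
                continue
            score = (len(yes) * entropy([y[i] for i in yes]) +
                     len(no) * entropy([y[i] for i in no]))
            if best is None or score < best[0]:
                best = (score, f, t, yes, no)
    if best is None:                                 # no question separates the data
        return Counter(y).most_common(1)[0][0]
    _, f, t, yes, no = best
    return (f, t,
            build_tree([X[i] for i in yes], [y[i] for i in yes]),
            build_tree([X[i] for i in no], [y[i] for i in no]))

def classify(tree, x):
    """Follow the questions down to a leaf."""
    while not isinstance(tree, str):
        f, t, yes_branch, no_branch = tree
        tree = yes_branch if x[f] >= t else no_branch
    return tree

# toy training set: (humidity, wind speed) -> play / don't play
X = [[70, 5], [90, 5], [85, 20], [60, 10], [75, 25]]
y = ["play", "don't play", "don't play", "play", "don't play"]
tree = build_tree(X, y)
print(classify(tree, [65, 8]))   # expected: "play"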

