
Note: Most tree-growing methods favor the greatest impurity reduction near the root node.

Ex.

Lecture22 DecisionTree OldKiwi.JPG
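
For concreteness, here is a minimal sketch (toy labels, not taken from the figure above) of how a candidate split is scored by its Gini impurity reduction, the quantity a greedy tree-grower maximises at each node, and in particular at the root.

```python
# Minimal sketch (toy labels assumed) of scoring a candidate split by its
# Gini impurity reduction, the quantity greedy tree-growing maximises.
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def impurity_reduction(parent, left, right):
    n = len(parent)
    weighted = (len(left) * gini(left) + len(right) * gini(right)) / n
    return gini(parent) - weighted

parent = ['a'] * 5 + ['b'] * 5
# A clean split is rewarded more than a mixed one.
print(impurity_reduction(parent, ['a'] * 5, ['b'] * 5))                  # 0.5
print(impurity_reduction(parent, ['a'] * 4 + ['b'], ['a'] + ['b'] * 4))  # smaller
```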

To assign a category to a leaf node:

Easy!

If the sample data reaching the leaf is pure

-> assign that class to the leaf.

else

-> assign the most frequent class among the samples at the leaf.
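
A minimal sketch of this leaf-labelling rule (the class names are arbitrary placeholders):

```python
# Minimal sketch of the leaf-labelling rule above: a pure leaf keeps its
# single class, otherwise the most frequent class at the leaf is assigned.
from collections import Counter

def leaf_label(samples):
    """samples: class labels of the training points that reach the leaf."""
    counts = Counter(samples)
    if len(counts) == 1:                   # pure leaf
        return next(iter(counts))
    return counts.most_common(1)[0][0]     # most frequent class

print(leaf_label(['cat', 'cat', 'cat']))         # 'cat'  (pure)
print(leaf_label(['cat', 'dog', 'cat', 'cat']))  # 'cat'  (majority vote)
```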

Note: The problem of building a decision tree is "ill-conditioned",

i.e. small variations in the training data can yield large variations in the decision rules obtained.

Ex. p.405 (D&H)

A small move of a single training sample can change the resulting decision rules considerably.
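
A small sketch of this sensitivity (toy data, not the D&H example; the use of scikit-learn is my assumption): nudging one sample destroys the clean split, and the fitted tree changes.

```python
# Minimal sketch (toy data) illustrating the ill-conditioning:
# nudging one training sample changes the learned tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def describe(X, y):
    clf = DecisionTreeClassifier(random_state=0).fit(X, y)
    t = clf.tree_
    return t.node_count, (int(t.feature[0]), round(float(t.threshold[0]), 2))

X = np.array([[1.0, 1.0], [2.0, 2.0], [2.9, 3.2],    # class 0
              [3.0, 4.0], [4.0, 3.0], [5.0, 3.5]])   # class 1
y = np.array([0, 0, 0, 1, 1, 1])

print("original : nodes, root split =", describe(X, y))

X2 = X.copy()
X2[2, 0] = 3.1          # move one sample slightly (2.9 -> 3.1)
print("perturbed: nodes, root split =", describe(X2, y))
# The clean single split is gone: the perturbed tree needs extra nodes
# and its root decision rule changes.
```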


References on clustering

"Data clustering: a review," A.K. Jain, M.N. Murty, P.J. Flynn [1]

"Algorithms for clustering data," A.K. Jain, R.C. Dubes [2]

"Support vector clustering," Ben-Hur, Horn, Siegelmann, Vapnik [3]

"Dynamic cluster formation using level set methods," Yip, Ding, Chan [4]

What is clustering?

The task of finding "natural" groupings in a data set.

Synonym: "unsupervised learning"

PartitionCluster OldKiwi.jpg

HierachichalCluster OldKiwi.jpg
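
As a rough sketch of the two families pictured above (the toy data and the choice of scikit-learn/SciPy are my assumptions, not part of the notes): partitional methods produce a flat split into k clusters, while hierarchical methods build a dendrogram that is then cut.

```python
# Minimal sketch (assumed toy data) contrasting partitional (k-means)
# and hierarchical (agglomerative linkage) clustering.
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

# Two "natural" groups in 2-D.
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [5.0, 5.0], [5.1, 4.8], [4.9, 5.2]])

# Partitional: k-means splits the data into k flat clusters.
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Hierarchical: single-linkage builds a tree (dendrogram),
# which is then cut to obtain a chosen number of clusters.
Z = linkage(X, method="single")
hier_labels = fcluster(Z, t=2, criterion="maxclust")

print("k-means labels   :", kmeans_labels)
print("hierarchical cut :", hier_labels)
```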

Clustering as a useful technique for searching in databases

Clustering can be used to construct an index so that a large dataset can be searched quickly.

  • Definition: An index is a data structure that enables sub-linear time lookup.
  • Example: the Dewey Decimal system used to index books in a library

Dewey OldKiwi.jpg

  • Example of an index: face recognition

- We need face images with labels.

- We must cluster them to obtain sub-linear search time.

- Search is faster because of the triangle inequality, which lets whole clusters be ruled out at once (see the sketch below).
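
A hedged sketch of the idea in the last bullet: cluster the database offline, then at query time use the triangle inequality to discard whole clusters that provably cannot contain a closer match. The data, helper names, and the use of k-means here are illustrative assumptions, not part of the lecture.

```python
# Sketch of cluster-based indexing for nearest-neighbour search.
# Pruning rule (triangle inequality): for any x in a cluster with centroid c
# and radius r,  d(q, x) >= d(q, c) - r.
import numpy as np
from sklearn.cluster import KMeans

def build_index(X, k):
    """Cluster the database; store (centroid, radius, members) per cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    index = []
    for j in range(k):
        members = X[km.labels_ == j]
        centroid = km.cluster_centers_[j]
        radius = np.max(np.linalg.norm(members - centroid, axis=1))
        index.append((centroid, radius, members))
    return index

def nearest(index, q):
    """Find the stored vector nearest to q, skipping clusters whose
    lower bound d(q, centroid) - radius already exceeds the best distance."""
    best, best_d = None, np.inf
    for centroid, radius, members in index:
        if np.linalg.norm(q - centroid) - radius >= best_d:
            continue                      # no member can beat the current best
        d = np.linalg.norm(members - q, axis=1)
        j = int(np.argmin(d))
        if d[j] < best_d:
            best, best_d = members[j], d[j]
    return best, best_d

# Toy "face feature" database and query.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
index = build_index(X, k=10)
print(nearest(index, rng.normal(size=8)))
```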

Lec22 hiercluster OldKiwi.PNG

Lec22 housecluster OldKiwi.PNG
