== Introduction ==

In hierarchical clustering, the data are not partitioned into a particular set of clusters in a single step. Instead, a series of partitions takes place, which may run from a single cluster containing all objects down to n clusters, each containing a single object.
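
To illustrate this series of partitions, here is a minimal sketch using SciPy's hierarchical clustering routines (the five 2-D sample points are made up for illustration). It builds the full hierarchy with single linkage and then cuts it at every level, printing the partitions from n singleton clusters down to one all-inclusive cluster.

<pre>
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Five made-up 2-D points; each row is one object.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9], [9.0, 0.0]])

# Build the full hierarchy with single linkage.  The linkage matrix Z
# encodes every merge, i.e. every partition between n clusters and 1.
Z = linkage(X, method="single")

# Cut the hierarchy at different levels to recover intermediate partitions.
for k in range(len(X), 0, -1):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, "clusters:", labels)
</pre>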


When we implement hierarchical clustering algorithms, we use an iterative procedure. Here is an example of how the so-called single linkage clustering algorithm works:

1) Start with each data point as its own cluster.

2) Find the most similar pair of clusters. There are different ways to define similarity between clusters: the minimum distance between points in the clusters (single linkage, related to Euclidean minimum spanning trees), the maximum distance between points in the clusters (complete linkage), and the average distance between points in the clusters (average linkage).

3) Merge the pair into a single parent cluster.

4) Repeat steps 2 and 3 until the whole dataset has been merged into one cluster.

Algorithms of this type are known as agglomerative methods, which proceed by a series of fusions of the n objects into groups. A small code sketch of this procedure is given below.
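
For concreteness, here is a minimal, plain-Python sketch of the single linkage procedure described in steps 1–4 above (the point set and the function name are illustrative, and distances are Euclidean). Swapping the inner min for a max or a mean would give the maximum-distance or average-distance criteria mentioned in step 2.

<pre>
import itertools

def single_linkage(points):
    """Agglomerative single-linkage clustering sketch.

    `points` is a list of equal-length coordinate tuples.  Returns the
    sequence of merges as pairs of point-index sets.
    """
    def dist(a, b):
        # Euclidean distance between two points.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Step 1: every data point starts as its own cluster.
    clusters = [{i} for i in range(len(points))]
    merges = []

    # Step 4: repeat until the whole dataset is merged into one cluster.
    while len(clusters) > 1:
        # Step 2: find the most similar pair of clusters; under single
        # linkage this is the minimum distance between member points
        # (use max or an average here for the other criteria).
        best = None
        for a, b in itertools.combinations(range(len(clusters)), 2):
            d = min(dist(points[i], points[j])
                    for i in clusters[a] for j in clusters[b])
            if best is None or d < best[0]:
                best = (d, a, b)

        # Step 3: merge the chosen pair into a single parent cluster.
        _, a, b = best
        merges.append((set(clusters[a]), set(clusters[b])))
        clusters[a] = clusters[a] | clusters[b]
        del clusters[b]

    return merges

if __name__ == "__main__":
    pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9), (9.0, 0.0)]
    for left, right in single_linkage(pts):
        print(sorted(left), "+", sorted(right))
</pre>

Running the sketch prints the merges in order, which is exactly the sequence of partitions from n singleton clusters to one all-inclusive cluster.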
