Revision as of 16:01, 6 April 2008 by Mboschru (Talk)


Introduction

In hierarchical clustering the data are not partitioned into a particular cluster in a single step. Instead, a series of partitions takes place, which may run from a single cluster containing all objects to n clusters each containing a single object.


Hierarchical clustering algorithms are implemented as an iterative procedure. Here is an example of how the so-called single linkage clustering algorithm works:

1) Start with each data point as its own cluster.

2) Find the most similar pair of clusters. There are different ways to define similarity between clusters: the minimum distance between points in the clusters (single linkage, related to Euclidean minimum spanning trees), the maximum distance between points in the clusters (complete linkage), and the average distance between points in the clusters (average linkage).

3) Merge the pair into a single parent cluster.

4) Repeat steps 2 and 3 until the whole dataset has been merged into one cluster.

This type of algorithm is known as an agglomerative method, which proceeds by a series of fusions of the n objects into groups.
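The iterative procedure above can be sketched in a few lines of Python. This is a minimal, naive implementation for illustration (quadratic pairwise search, 2-D points); the function name `single_linkage` and the `num_clusters` stopping parameter are our own choices, not part of any particular library:

```python
from itertools import combinations

def single_linkage(points, num_clusters=1):
    """Naive single-linkage agglomerative clustering (illustrative sketch).

    points: list of (x, y) tuples.
    Merges clusters until num_clusters remain; returns a list of
    clusters, each a list of the original points.
    """
    # Step 1: each data point starts as its own cluster.
    clusters = [[p] for p in points]
    # Step 4: repeat until the desired number of clusters remains.
    while len(clusters) > num_clusters:
        # Step 2: find the most similar pair of clusters. Under
        # single linkage, cluster similarity is the minimum distance
        # between any point in one cluster and any point in the other.
        best = None
        for i, j in combinations(range(len(clusters)), 2):
            d = min(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
                    for a in clusters[i] for b in clusters[j])
            if best is None or d < best[0]:
                best = (d, i, j)
        _, i, j = best
        # Step 3: merge the pair into a single parent cluster.
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

For example, calling `single_linkage([(0, 0), (0, 1), (10, 10), (10, 11)], num_clusters=2)` first merges the two pairs of nearby points, leaving the two well-separated groups as the final clusters. Swapping the `min` in the pairwise-distance line for `max` or an average gives complete or average linkage instead.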
