A decision tree is a classifier that maps observations about an item to conclusions about its target value.
==Strengths and Weaknesses of Decision Tree Methods==

===Strengths of decision tree methods===

* Decision trees are able to generate understandable rules.
* Decision trees perform classification without requiring much computation.
* Decision trees are able to handle both continuous and categorical variables.
* Decision trees provide a clear indication of which fields are most important for prediction or classification (illustrated in the sketch after this list).
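The first and last of these strengths can be seen directly in code. Below is a minimal sketch, assuming scikit-learn as the library; the dataset, parameters, and function calls are an illustrative choice and not part of the original text. A small tree is fitted, its rules are printed in readable if/else form, and its impurity-based importances rank the fields.

<syntaxhighlight lang="python">
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)

# Understandable rules: the fitted tree prints as human-readable if/else tests.
print(export_text(clf, feature_names=list(iris.feature_names)))

# Field importance: impurity-based scores indicate which fields drive the splits.
for name, importance in zip(iris.feature_names, clf.feature_importances_):
    print(f"{name}: {importance:.3f}")
</syntaxhighlight>

On the iris data this prints a handful of threshold rules and typically concentrates almost all importance in the two petal measurements, which is exactly the kind of at-a-glance interpretability the list above describes.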
===Weaknesses of decision tree methods===

* Decision trees are less appropriate for estimation tasks where the goal is to predict the value of a continuous attribute.
* Decision trees are prone to errors in classification problems with many classes and a relatively small number of training examples.
* Decision trees can be computationally expensive to train. At each node, each candidate splitting field must be sorted before its best split can be found (illustrated in the sketch after this list). In some algorithms, combinations of fields are used and a search must be made for optimal combining weights. Pruning algorithms can also be expensive, since many candidate sub-trees must be formed and compared.
* Decision trees do not handle non-rectangular regions well. Most decision-tree algorithms examine only a single field at a time, which leads to rectangular classification boxes that may not correspond well with the actual distribution of records in the decision space.
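As a rough illustration of the training cost mentioned above, here is a minimal sketch of an exhaustive axis-aligned split search, assuming NumPy; the function names are illustrative, not from any particular library. Every candidate field is sorted at the node before its best threshold can be found, and because each test involves a single field, the resulting regions are axis-aligned rectangles.

<syntaxhighlight lang="python">
import numpy as np

def gini(y):
    """Gini impurity of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Find the best single-field split of (X, y) by exhaustive search."""
    n, d = X.shape
    best = (None, None, gini(y))  # (field index, threshold, weighted impurity)
    for j in range(d):
        order = np.argsort(X[:, j])   # the per-field sort done at each node
        xs, ys = X[order, j], y[order]
        for i in range(1, n):
            if xs[i] == xs[i - 1]:
                continue              # no threshold between equal values
            left, right = ys[:i], ys[i:]
            score = (i * gini(left) + (n - i) * gini(right)) / n
            if score < best[2]:
                best = (j, (xs[i] + xs[i - 1]) / 2.0, score)
    return best
</syntaxhighlight>

Even in this simplified form, a single node on a 1,000-row, 20-field matrix performs 20 sorts and scans every candidate threshold, and this work is repeated at every node of the growing tree.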
==References==
* http://dms.irb.hr/tutorial/tut_dtrees.php
* [http://cobweb.ecn.purdue.edu/~landgreb/SMC91.pdf A Survey of Decision Tree Classifier Methodology]