Abstract
Diversity among the base classifiers is deemed important when constructing a classifier ensemble. Numerous algorithms have been proposed to construct a good classifier ensemble by seeking both the accuracy of the base classifiers and the diversity among them. However, there is no generally accepted definition of diversity, and measuring it explicitly is difficult. Although researchers have designed several experimental studies to compare different diversity measures, the observed results have usually been confusing. In this paper, we present a theoretical analysis of six existing diversity measures (namely the disagreement measure, double-fault measure, KW variance, inter-rater agreement, generalized diversity and measure of difficulty), show the underlying relationships among them, and relate them to the concept of margin, which is more explicitly related to the success of ensemble learning algorithms. We explain why confusing experimental results have been observed and show that the discussed diversity measures are inherently ineffective. Our analysis provides a deeper understanding of the concept of diversity, and can hence help in designing better ensemble learning algorithms. © Springer Science + Business Media, LLC 2006.
Original language | English |
---|---|
Pages (from-to) | 247-271 |
Number of pages | 25 |
Journal | Machine Learning |
Volume | 65 |
Issue number | 1 |
Early online date | 19 Jul 2006 |
DOIs | |
Publication status | Published - Oct 2006 |
Externally published | Yes |
Keywords
- Classifier ensemble
- Coincident failure diversity
- Disagreement measure
- Diversity measures
- Double fault measure
- Entropy measure
- Generalized diversity
- Interrater agreement
- KW variance
- Majority vote
- Margin distribution
- Measure of difficulty
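As a rough illustration of two of the pairwise measures named above, the sketch below follows the standard definitions (disagreement: fraction of examples on which exactly one of the two classifiers is correct; double fault: fraction on which both are wrong) and a simplified binary-case voting margin ((correct votes − incorrect votes) / total votes). The function names and the toy correctness vectors are illustrative, not from the paper:

```python
import numpy as np

def disagreement(correct_i, correct_j):
    # Fraction of examples where exactly one of the two classifiers is correct.
    correct_i = np.asarray(correct_i, dtype=bool)
    correct_j = np.asarray(correct_j, dtype=bool)
    return np.mean(correct_i ^ correct_j)

def double_fault(correct_i, correct_j):
    # Fraction of examples where both classifiers are wrong.
    correct_i = np.asarray(correct_i, dtype=bool)
    correct_j = np.asarray(correct_j, dtype=bool)
    return np.mean(~correct_i & ~correct_j)

def vote_margin(correct_votes, n_classifiers):
    # Simplified margin for a binary problem under majority vote:
    # (votes for the true class - votes against) / total votes.
    correct_votes = np.asarray(correct_votes)
    return (2 * correct_votes - n_classifiers) / n_classifiers

# Toy example: per-example correctness (1 = correct) of two classifiers.
ci = [1, 1, 0, 0, 1]
cj = [1, 0, 1, 0, 1]
print(disagreement(ci, cj))   # 0.4 (they differ on examples 2 and 3)
print(double_fault(ci, cj))   # 0.2 (both wrong on example 4)
print(vote_margin(3, 5))      # 0.2 (3 of 5 classifiers correct)
```

Such pairwise statistics are typically averaged over all classifier pairs to score a whole ensemble; the paper's point is that these averages relate only loosely to the margin distribution that actually governs ensemble performance.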