An analysis of diversity measures


Research output: Journal Publications › Journal Article (refereed) › peer-review

343 Citations (Scopus)


Diversity among the base classifiers is deemed to be important when constructing a classifier ensemble. Numerous algorithms have been proposed to construct a good classifier ensemble by seeking both accuracy in the base classifiers and diversity among them. However, there is no generally accepted definition of diversity, and measuring diversity explicitly is very difficult. Although researchers have designed several experimental studies to compare different diversity measures, the results observed were often confusing. In this paper, we present a theoretical analysis of six existing diversity measures (namely the disagreement measure, double fault measure, KW variance, inter-rater agreement, generalized diversity, and measure of difficulty), show underlying relationships between them, and relate them to the concept of margin, which is more explicitly related to the success of ensemble learning algorithms. We explain why confusing experimental results were observed and show that the discussed diversity measures are inherently ineffective. Our analysis provides a deeper understanding of the concept of diversity, and hence can help in designing better ensemble learning algorithms. © Springer Science + Business Media, LLC 2006.
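Two of the pairwise measures named in the abstract, the disagreement measure and the double fault measure, have standard definitions based on "oracle" outputs (1 if a classifier labels an example correctly, 0 otherwise). As a hedged sketch, not the paper's own code, they can be computed like this:

```python
# Sketch of two pairwise diversity measures using their standard
# definitions on oracle outputs (1 = correct on an example, 0 = wrong).

def pairwise_counts(a, b):
    """Count the four correctness cases for two 0/1 oracle vectors."""
    n11 = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)  # both correct
    n00 = sum(1 for x, y in zip(a, b) if x == 0 and y == 0)  # both wrong
    n10 = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)  # only first correct
    n01 = sum(1 for x, y in zip(a, b) if x == 0 and y == 1)  # only second correct
    return n11, n00, n10, n01

def disagreement(a, b):
    """Fraction of examples on which exactly one classifier is correct."""
    _, _, n10, n01 = pairwise_counts(a, b)
    return (n10 + n01) / len(a)

def double_fault(a, b):
    """Fraction of examples on which both classifiers are wrong."""
    _, n00, _, _ = pairwise_counts(a, b)
    return n00 / len(a)

# Hypothetical oracle outputs for two base classifiers on six examples.
c1 = [1, 1, 0, 1, 0, 1]
c2 = [1, 0, 1, 1, 0, 0]
print(disagreement(c1, c2))  # 3/6 = 0.5
print(double_fault(c1, c2))  # 1/6 ≈ 0.1667
```

Higher disagreement is conventionally read as more diversity, while a lower double-fault value is preferred, since coincident errors cannot be corrected by majority voting; the paper's point is that such measures, by themselves, do not reliably predict ensemble success.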
Original language: English
Pages (from-to): 247-271
Number of pages: 25
Journal: Machine Learning
Issue number: 1
Early online date: 19 Jul 2006
Publication status: Published - Oct 2006
Externally published: Yes


Keywords:

  • Classifier ensemble
  • Coincident failure diversity
  • Disagreement measure
  • Diversity measures
  • Double fault measure
  • Entropy measure
  • Generalized diversity
  • Interrater agreement
  • KW variance
  • Majority vote
  • Margin distribution
  • Measure of difficulty


