Evolving hybrid ensembles of learning machines for better generalisation


Research output: Journal Publications › Journal Article (refereed) › peer-review

123 Citations (Scopus)


Ensembles of learning machines have been formally and empirically shown to outperform (generalise better than) single predictors in many cases. Evidence suggests that ensembles generalise better when their members form a diverse and accurate set. Additionally, a multitude of theories have been proposed on how diversity can be enforced within a combined-predictor setup. We recently attempted to integrate these theories into a co-evolutionary framework with a view to synthesising new evolutionary ensemble learning algorithms, exploiting the fact that multi-objective evolutionary optimisation is a formidable ensemble construction technique. This paper details the intricacies of the proposed framework and presents extensive empirical results and comparisons with a wide range of algorithms from the machine learning literature. The framework treats diversity and accuracy as evolutionary pressures exerted at multiple levels of abstraction, and is shown to be effective. © 2006 Elsevier B.V. All rights reserved.
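The abstract describes treating accuracy and diversity as two simultaneous objectives when evolving an ensemble. The paper's own algorithms are not reproduced here; the following is a minimal, illustrative sketch of that general idea under simplifying assumptions (linear models on a toy regression task, Gaussian mutation, a naive Pareto-front selection step — all names and parameters are invented for illustration, not taken from the paper):

```python
import random

random.seed(0)

# Toy regression task: approximate y = x^2 with linear models y = a*x + b.
DATA = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]


def mse(model, data):
    """Objective 1 (minimise): individual prediction error."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in data) / len(data)


def diversity(model, population, data):
    """Objective 2 (maximise): mean squared deviation of this member's
    predictions from the ensemble-average prediction."""
    a, b = model
    total = 0.0
    for x, _ in data:
        ens = sum(m[0] * x + m[1] for m in population) / len(population)
        total += ((a * x + b) - ens) ** 2
    return total / len(data)


def dominates(f1, f2):
    """Pareto dominance for objective vectors to be minimised."""
    return all(u <= v for u, v in zip(f1, f2)) and any(u < v for u, v in zip(f1, f2))


def evolve(pop_size=20, gens=50):
    """Evolve a population under two pressures: error and (negated) diversity."""
    pop = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(gens):
        # Gaussian mutation produces one offspring per parent.
        offspring = [(a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
                     for a, b in pop]
        union = pop + offspring
        # Each candidate gets an objective vector (error, -diversity), both minimised.
        scored = [(m, (mse(m, DATA), -diversity(m, union, DATA))) for m in union]
        # Keep the non-dominated front first, then fill by lowest error.
        front = [m for m, f in scored
                 if not any(dominates(g, f) for _, g in scored)]
        rest = sorted((m for m, f in scored if m not in front),
                      key=lambda m: mse(m, DATA))
        pop = (front + rest)[:pop_size]
    return pop


def ensemble_mse(pop):
    """Error of the combined predictor (simple average of member outputs)."""
    err = 0.0
    for x, y in DATA:
        pred = sum(a * x + b for a, b in pop) / len(pop)
        err += (pred - y) ** 2
    return err / len(DATA)


ensemble = evolve()
```

The final population is used as the ensemble, with its prediction taken as the average of member outputs; the Pareto step is what exerts both pressures at once, rather than collapsing them into a single weighted fitness.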
Original language: English
Pages (from-to): 686-700
Number of pages: 15
Issue number: 7-9 SPEC. ISS.
Early online date: 4 Feb 2006
Publication status: Published - Mar 2006
Externally published: Yes

Bibliographical note

This research was undertaken as part of the EPSRC-funded project on Market-Based Control of Complex Computational Systems (GR/T10671/01). This is a collaborative project involving the Universities of Birmingham, Liverpool and Southampton, and BAE Systems, BT and HP.


  • Ensemble learning
  • Evolutionary computation
  • Hybrid ensembles
  • Multi-objective optimisation
  • Neuroevolution


