Abstract
Incremental learning with concept drift has often been tackled by ensemble methods, where models built in the past can be retrained to attain new models for the current data. Two design questions need to be addressed in developing ensemble methods for incremental learning with concept drift: which historical (i.e., previously trained) models should be preserved, and how should they be utilized? A novel ensemble learning method, namely Diversity and Transfer-based Ensemble Learning (DTEL), is proposed in this paper. Given newly arrived data, DTEL uses each preserved historical model as an initial model and further trains it with the new data via transfer learning. Furthermore, DTEL preserves a diverse set of historical models, rather than a set of historical models that are merely accurate in terms of classification accuracy. Empirical studies on 15 synthetic data streams and 5 real-world data streams (all with concept drifts) demonstrate that DTEL can handle concept drift more effectively than 4 other state-of-the-art methods.
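The abstract describes a per-chunk loop: adapt each preserved historical model to the new data via transfer learning, then keep a diverse (not merely accurate) subset of models for the ensemble. The sketch below is a minimal illustration of that loop, not the authors' implementation: it assumes data arrives in chunks, substitutes scikit-learn's `SGDClassifier.partial_fit` for the paper's decision-tree transfer step, a greedy pairwise-disagreement heuristic for the paper's diversity measure, and majority voting for the ensemble prediction. Names such as `MAX_POOL`, `process_chunk`, and `select_diverse` are hypothetical.

```python
# Minimal sketch of the DTEL-style loop described in the abstract.
# Assumptions (not from the paper): linear base learners with partial_fit
# as the "transfer" step, pairwise disagreement as the diversity proxy,
# integer class labels, and majority-vote combination.
import copy
import numpy as np
from sklearn.linear_model import SGDClassifier

MAX_POOL = 10  # assumed cap on the number of preserved historical models

def disagreement(preds_a, preds_b):
    # Fraction of samples on which two models disagree (diversity proxy).
    return np.mean(preds_a != preds_b)

def select_diverse(pool, X_new, k):
    # Greedily keep up to k models maximizing average pairwise
    # disagreement: a simple stand-in for diversity-based preservation.
    preds = [m.predict(X_new) for m in pool]
    keep = [0]
    while len(keep) < min(k, len(pool)):
        best, best_div = None, -1.0
        for i in range(len(pool)):
            if i in keep:
                continue
            div = np.mean([disagreement(preds[i], preds[j]) for j in keep])
            if div > best_div:
                best, best_div = i, div
        keep.append(best)
    return [pool[i] for i in keep]

def process_chunk(pool, X_t, y_t, classes):
    # "Transfer" step: start from each preserved historical model and
    # continue training it on the newly arrived chunk.
    adapted = []
    for hist in pool:
        m = copy.deepcopy(hist)
        m.partial_fit(X_t, y_t)
        adapted.append(m)
    # Also train a fresh model on the new chunk alone.
    fresh = SGDClassifier()
    fresh.partial_fit(X_t, y_t, classes=classes)
    adapted.append(fresh)
    # Preserve a diverse subset rather than only the most accurate models.
    return select_diverse(adapted, X_t, MAX_POOL)

def predict(pool, X):
    # Majority vote over the preserved ensemble (integer labels assumed).
    votes = np.stack([m.predict(X) for m in pool])
    return np.apply_along_axis(
        lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)
```

In this toy version, each incoming chunk `(X_t, y_t)` is processed by calling `pool = process_chunk(pool, X_t, y_t, classes)`; the key design point it mirrors is that adapted historical models compete for preservation on diversity, not accuracy alone.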
| Original language | English |
| --- | --- |
| Article number | 8246541 |
| Pages (from-to) | 4822-4832 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 29 |
| Issue number | 10 |
| Early online date | 4 Jan 2018 |
| DOIs | |
| Publication status | Published - Oct 2018 |
| Externally published | Yes |
Funding
This work was supported in part by the Ministry of Science and Technology of China under Grant 2017YFC0804002, in part by the National Natural Science Foundation of China under Grant 61672478 and Grant 61329302, and in part by the Royal Society Newton Advanced Fellowship under Grant NA150123.
Keywords
- Concept drift
- data stream mining
- ensemble learning
- incremental learning
- transfer learning