Abstract
The incremental extreme learning machine (I-ELM) was proposed in 2006 as a method to improve the network architecture of extreme learning machines (ELMs). To improve on the I-ELM, the bidirectional extreme learning machine (B-ELM) was developed in 2012. The B-ELM follows the same incremental scheme as the I-ELM but distinguishes between odd and even learning steps. At each odd learning step, a hidden node with randomly generated parameters is added, as in the I-ELM. At each even learning step, a new hidden node is instead computed by a formula based on the result of the previously added node. However, some of the randomly generated hidden nodes may play only a minor role in the network output; thus, part of the network complexity introduced by the B-ELM may be unnecessary. To avoid this issue, this paper proposes an enhanced B-ELM method (referred to as EB-ELM). At each odd learning step, several candidate hidden nodes are randomly generated, but only the node that yields the largest reduction in residual error is added to the existing network. Simulation results show that the EB-ELM obtains higher accuracy and achieves better performance than the B-ELM under the same network architecture. In addition, the EB-ELM achieves a faster convergence rate than the B-ELM, which means that the EB-ELM has smaller network complexity and a faster learning speed.
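As a rough illustration of the selection mechanism described in the abstract, the following Python sketch implements one odd learning step under common I-ELM conventions: a sigmoid activation and the least-squares output weight beta = (E^T h)/(h^T h) for a single added node. The function name `eb_elm_odd_step`, the candidate count, and the parameter ranges are illustrative assumptions rather than the paper's exact notation, and the deterministic even step of the B-ELM is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def eb_elm_odd_step(X, E, n_candidates=10, rng=None):
    """One EB-ELM odd step (illustrative sketch): generate several random
    hidden-node candidates and keep only the one that reduces the
    residual error the most.

    X : (N, d) training inputs
    E : (N, m) current residual (targets minus current network output)
    Returns the chosen node's input weights, bias, output weights,
    and the updated residual.
    """
    rng = np.random.default_rng(rng)
    best = None
    for _ in range(n_candidates):
        w = rng.uniform(-1.0, 1.0, size=X.shape[1])  # random input weights
        b = rng.uniform(-1.0, 1.0)                   # random bias
        h = sigmoid(X @ w + b)                       # (N,) node activations
        beta = (E.T @ h) / (h @ h)                   # (m,) I-ELM output weight
        E_new = E - np.outer(h, beta)                # residual if node is kept
        err = np.linalg.norm(E_new)
        if best is None or err < best[0]:
            best = (err, w, b, beta, E_new)
    _, w, b, beta, E_new = best
    return w, b, beta, E_new
```

Each candidate is scored by the residual norm that would remain after adding it, so the pool size `n_candidates` trades extra computation per step for fewer, more useful hidden nodes.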
Original language | English |
---|---|
Pages (from-to) | 19-26 |
Number of pages | 8 |
Journal | Memetic Computing |
Volume | 11 |
Issue number | 1 |
Early online date | 27 Jul 2017 |
DOIs | |
Publication status | Published - Mar 2019 |
Externally published | Yes |
Bibliographical note
The authors would like to thank the editor and reviewers for their invaluable suggestions to improve the quality of this paper. This research is supported by the National Natural Science Foundation of China under Grant No. 61672358.

Keywords
- Bidirectional extreme learning machine
- Convergence rate
- Incremental extreme learning machine
- Network architecture