Abstract
Extreme learning machine (ELM) is an efficient training algorithm for single-hidden-layer feed-forward neural networks (SLFNs). Two pruned-ELM algorithms, named P-ELM1 and P-ELM2, were proposed by Rong et al.; P-ELM1 and P-ELM2 employ the χ² statistic and information gain, respectively, to measure the association between the class labels and individual hidden nodes. However, for continuous-valued data sets, P-ELM1 and P-ELM2 must estimate the probability distributions of the data with discretization methods in order to compute the χ² statistic and information gain, and this discretization leads to information loss. Furthermore, the discretization results in high computational complexity. To address these problems, this paper proposes an improved pruned-ELM algorithm based on tolerance rough sets, which overcomes the drawbacks mentioned above. Experimental results, along with statistical analysis on 8 UCI data sets, show that the improved algorithm outperforms the pruned-ELM algorithms in computational complexity and testing accuracy.
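As a rough illustration of the pruning idea described in the abstract, the sketch below trains a basic ELM, scores each hidden node by a χ² statistic between its discretized output and the class labels (the discretization step the paper identifies as the source of information loss), and keeps only the best-ranked nodes before re-solving the output weights. All function names and parameters here are hypothetical illustrations, not the paper's implementation; in particular, the tolerance-rough-set relevance measure of the proposed method is not reproduced, since the abstract does not specify it.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Basic ELM: random input weights/biases, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # input-to-hidden weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                 # hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                    # sigmoid hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                              # Moore-Penrose least-squares solution
    return W, b, beta, H

def chi2_relevance(H, y, n_bins=10):
    """Chi-squared association between each (discretized) hidden-node output
    and the class labels -- the discretization the abstract criticizes."""
    classes = np.unique(y)
    scores = np.empty(H.shape[1])
    for j in range(H.shape[1]):
        edges = np.linspace(H[:, j].min(), H[:, j].max(), n_bins + 1)
        bins = np.digitize(H[:, j], edges[1:-1])               # bin indices 0..n_bins-1
        table = np.array([[np.sum((bins == k) & (y == c)) for c in classes]
                          for k in range(n_bins)], dtype=float)
        expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / table.sum()
        mask = expected > 0
        scores[j] = np.sum((table[mask] - expected[mask]) ** 2 / expected[mask])
    return scores

def prune_and_retrain(H, T, scores, keep_ratio=0.5):
    """Keep the highest-scoring hidden nodes and re-solve the output weights."""
    keep = np.argsort(scores)[::-1][: max(1, int(keep_ratio * len(scores)))]
    beta = np.linalg.pinv(H[:, keep]) @ T
    return keep, beta

# Toy usage: two Gaussian blobs with one-hot targets.
X = np.vstack([np.random.randn(50, 2) + 2, np.random.randn(50, 2) - 2])
y = np.array([0] * 50 + [1] * 50)
T = np.eye(2)[y]
W, b, beta, H = elm_train(X, T, n_hidden=40)
keep, beta_p = prune_and_retrain(H, T, chi2_relevance(H, y), keep_ratio=0.25)
pred = np.argmax(H[:, keep] @ beta_p, axis=1)
print("training accuracy after pruning:", (pred == y).mean())
```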
Original language | English |
---|---|
Pages (from-to) | 327-345 |
Number of pages | 19 |
Journal | International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems |
Volume | 24 |
Issue number | 3 |
DOIs | |
Publication status | Published - 1 Jun 2016 |
Externally published | Yes |
Bibliographical note
This research is supported by the National Natural Science Foundation of China (Nos. 61170040 and 71371063), by the Natural Science Foundation of Hebei Province (Nos. F2013201110 and F2013201220), by the Key Scientific Research Foundation of Education Department of Hebei Province (No. ZD20131028), and by the Opening Fund of Zhejiang Provincial Top Key Discipline of Computer Science and Technology at Zhejiang Normal University, China.
Keywords
- architecture selection
- Extreme learning machine
- pruning
- tolerance rough sets