Abstract
Extreme learning machine (ELM), an emergent technique for training feed-forward neural networks, has shown good performance across various learning domains. This paper investigates the impact of random weights in the training of ELM. It focuses on the randomness of the weights between the input and hidden layers, and on the dimension change from the input layer to the hidden layer. The direct motivation is to verify whether the randomly assigned weights exert a positive effect during ELM training. Experimentally, we show that for many classification and regression problems, the dimension increase caused by random weights in ELM performs better than the dimension increase caused by some kernel mappings. We hypothesize that, via the random transformation, the output samples are more concentrated than the input samples, which makes learning more efficient. © 2012 Springer-Verlag.
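For context, the procedure the abstract refers to, randomly fixing the input-to-hidden weights and training only the output weights, can be summarized in a few lines. The sketch below is a minimal NumPy illustration of the standard ELM training step, not the paper's exact experimental setup; the function names `elm_train`/`elm_predict`, the tanh activation, and the uniform weight range are assumptions for illustration only.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=None):
    """Train a single-hidden-layer ELM.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    A minimal sketch of the standard ELM procedure under assumed choices
    (tanh activation, uniform random weights in [-1, 1]).
    """
    rng = np.random.default_rng(seed)
    # Randomly assign input-to-hidden weights and biases; these are
    # never updated after initialization.
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Hidden-layer output matrix H: the random projection (a dimension
    # increase whenever n_hidden > n_features) plus a nonlinearity.
    H = np.tanh(X @ W + b)
    # The output weights are the only trained parameters, solved in
    # closed form by least squares: beta = pinv(H) @ T.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: map 2 input features up to 50 hidden units for regression.
X = np.random.default_rng(0).normal(size=(100, 2))
T = np.sin(X[:, :1]) + X[:, 1:]
W, b, beta = elm_train(X, T, n_hidden=50, seed=0)
pred = elm_predict(X, W, b, beta)
```

Because only `beta` is fitted, and in closed form, training reduces to a single pseudoinverse computation, which is what makes ELM fast relative to gradient-based training of the same architecture.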
Original language | English |
---|---|
Pages (from-to) | 1465-1475 |
Journal | Soft Computing |
Volume | 16 |
Issue number | 9 |
DOIs | |
Publication status | Published - Sept 2012 |
Externally published | Yes |
Bibliographical note
This paper is partly supported by City University Strategic Research Grant (SRG) 7002680.
Keywords
- Dimension change
- Extreme learning machine
- Random weights