Abstract
This paper focuses on the parameter pattern during the initialization of Extreme Learning Machines (ELMs). According to the algorithm, model performance depends heavily on the rank of the hidden-layer matrix. Previous research has proved that the sigmoid activation function transforms the input data into a full-rank hidden matrix with probability 1, which secures the stability of the ELM solution. In a recent study, we noticed that, under the full-rank condition, the hidden matrix may still have very small eigenvalues, which seriously degrades the model's generalization ability. Our study indicates that this negative impact is caused by the discontinuity of the generalized inverse at the boundary between full rank and rank deficiency. Experiments show that every phase of ELM modeling can lead to this rank-deficiency phenomenon, which harms test accuracy.
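As a minimal sketch of the two claims above (plain NumPy; all names, dimensions, and constants are illustrative and not taken from the paper), the snippet below builds a sigmoid hidden-layer matrix H of a standard single-hidden-layer ELM and inspects its smallest singular value, then demonstrates the discontinuity of the Moore-Penrose generalized inverse at the boundary between full rank and rank deficiency.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ELM setup: N samples, d input features, L hidden nodes.
# Random input weights W and biases b are fixed at initialization,
# as in the standard ELM algorithm.
N, d, L = 100, 5, 50
X = rng.standard_normal((N, d))
W = rng.standard_normal((d, L))
b = rng.standard_normal(L)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden-layer matrix

# In exact arithmetic H is full rank with probability 1, yet its smallest
# singular value is often tiny, so H can be numerically rank deficient.
s = np.linalg.svd(H, compute_uv=False)
print("numerical rank of H:", np.linalg.matrix_rank(H), "of", L)
print(f"smallest singular value of H: {s[-1]:.3e}")

# Discontinuity of the generalized inverse at the rank boundary:
# A is full rank with one singular value eps; as eps -> 0 the norm of
# pinv(A) blows up, then drops abruptly when eps is exactly zero.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))
for eps in (1e-2, 1e-6, 1e-10, 0.0):
    A = U @ np.diag([2.0, 1.0, 0.5, eps]) @ V.T
    print(f"eps = {eps:.0e}: ||pinv(A)||_2 = "
          f"{np.linalg.norm(np.linalg.pinv(A), 2):.3e}")
```

On a typical run, the printed pseudo-inverse norm grows like 1/eps and then collapses at eps = 0; this jump is the discontinuity the abstract refers to.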
Original language | English |
---|---|
Pages (from-to) | 386-391 |
Number of pages | 6 |
Journal | Neurocomputing |
Volume | 313 |
Early online date | 30 Jun 2018 |
DOIs | |
Publication status | Published - 3 Nov 2018 |
Externally published | Yes |
Bibliographical note
This work was supported in part by the National Natural Science Foundation of China (Grant nos. 61772344 and 61732011), and in part by the Natural Science Foundation of SZU (Grant nos. 827-000140, 827-000230, and 2017060).
Keywords
- Extreme learning machines
- Generalized inverse
- Neural network
- Rank of matrix