A study on random weights between input and hidden layers in extreme learning machine

Research output: Journal Publications › Journal Article (refereed) › peer-review

28 Citations (Scopus)

Abstract

Extreme learning machine (ELM), as an emergent technique for training feed-forward neural networks, has shown good performance across various learning domains. This paper investigates the impact of random weights during the training of ELM. It focuses on the randomness of the weights between the input and hidden layers, and on the dimension change from the input layer to the hidden layer. The direct motivation is to verify whether the randomly assigned weights exert some positive effect during the training of ELM. We show experimentally that, for many classification and regression problems, the dimension increase produced by random weights in ELM yields better performance than the dimension increase produced by some kernel mappings. We conjecture that, via the random transformation, output samples are more concentrated than input samples, which makes the learning more efficient. © 2012 Springer-Verlag.
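For context, the standard ELM scheme the abstract refers to fixes the input-to-hidden weights at random and solves only the hidden-to-output weights in closed form via a least-squares fit. Below is a minimal sketch of that scheme, not the paper's exact experimental setup; the sigmoid activation, the uniform weight range, and all names are illustrative assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=None):
    """Basic ELM: random fixed input weights, least-squares output weights."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    # Randomly assign input-to-hidden weights and biases; these are never updated.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Hidden-layer activations (the random mapping studied in the paper).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights via the Moore-Penrose pseudoinverse (least-squares solution).
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Example: fit a noisy sine curve with more hidden units than input dimensions,
# i.e. a dimension increase through the random mapping.
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
T = np.sin(X) + 0.05 * np.random.default_rng(0).normal(size=X.shape)
W, b, beta = elm_train(X, T, n_hidden=50, rng=0)
pred = elm_predict(X, W, b, beta)
```

In this sketch the hidden layer widens the representation from 1 to 50 dimensions purely through random weights, which is the kind of dimension increase the paper compares against kernel mappings.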
Original language: English
Pages (from-to): 1465-1475
Journal: Soft Computing
Volume: 16
Issue number: 9
DOIs
Publication status: Published - Sept 2012
Externally published: Yes

Bibliographical note

This work was partly supported by the City University Strategic Research Grant (SRG) 7002680.

Keywords

  • Dimension change
  • Extreme learning machine
  • Random weights
