Abstract
Activation functions such as Tanh and Sigmoid are widely used in Deep Neural Networks (DNNs) and pattern classification problems. To take advantage of different activation functions, the Broad Autoencoder Features (BAF) approach is proposed in this work. The BAF consists of four parallel-connected Stacked Autoencoders (SAEs), each using a different activation function: Sigmoid, Tanh, ReLU, or Softplus. With this broad setting, the final learned features merge the various nonlinear mappings of the original input features, which helps extract more information from them. Experimental results show that the BAF yields better-learned features and better classification performance.
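The parallel-branch idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it replaces each trained SAE encoder with a single random projection (a placeholder), and simply concatenates the four branches' nonlinear mappings into one broad feature vector.

```python
import numpy as np

# Hypothetical sketch of the BAF idea: four parallel encoders, each with a
# different activation function, whose outputs are concatenated into one
# broad feature vector. The weights below are random placeholders standing
# in for each SAE's trained encoder.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    return np.log1p(np.exp(x))

def relu(x):
    return np.maximum(0.0, x)

activations = [sigmoid, np.tanh, relu, softplus]

rng = np.random.default_rng(0)
n_features, n_hidden = 10, 4

# One projection per branch (stand-in for each SAE's encoder weights).
weights = [rng.standard_normal((n_features, n_hidden)) for _ in activations]

def baf_features(x):
    """Concatenate the four branches' nonlinear mappings of x."""
    return np.concatenate([act(x @ w) for act, w in zip(activations, weights)])

x = rng.standard_normal(n_features)
features = baf_features(x)
print(features.shape)  # four branches of 4 hidden units each -> (16,)
```

In the actual method, each branch would be a pre-trained stacked autoencoder rather than a random projection, but the merging step, concatenating differently activated feature maps of the same input, is the essence of the "broad" setting.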
Original language | English |
---|---|
Article number | oa23 |
Journal | International Journal of Cognitive Informatics and Natural Intelligence |
Volume | 15 |
Issue number | 4 |
DOIs | |
Publication status | Published - Oct 2021 |
Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2021 International Journal of Cognitive Informatics and Natural Intelligence. All rights reserved.
Funding
This research was funded by the National Natural Science Foundation of China under Grant 61876066 and the Guangdong Province Science and Technology Plan Project (Collaborative Innovation and Platform Environment Construction) under Grant 2019A050510006.
Keywords
- Feature Learning
- Pattern Classification
- Stacked Autoencoders