Deep Neural Networks (DNNs) demonstrate strong performance in pattern classification problems. Several activation functions are available for DNNs, with the Sigmoid and Tanh functions being the most widely used choices. In this work, we propose Broad Autoencoder Features (BAF) to better exploit the advantages of different activation functions. The BAF consists of four parallel-connected Stacked AutoEncoders (SAEs) with different activation functions: Sigmoid, Tanh, ReLU, and Softplus. With this broad setting, the final learned features merge features learned through diversified nonlinear mappings of the original input, so that more information is mined from the original input features. Experimental results show that the BAF yields better learned features than merging four SAEs that all use the same activation function.
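The parallel-branch idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: each branch is reduced to a single untrained encoder layer with random placeholder weights (in the paper, each branch is a trained SAE), and the four branch outputs are concatenated into the final BAF feature vector.

```python
import numpy as np

# Hedged sketch of the BAF layout: four parallel encoders, one per activation,
# whose outputs are concatenated. Weights are random placeholders standing in
# for trained Stacked AutoEncoders (SAEs).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def softplus(x):
    return np.log1p(np.exp(x))

# The four activations named in the abstract.
ACTIVATIONS = [sigmoid, np.tanh, relu, softplus]

def baf_features(x, in_dim, hid_dim):
    """Encode x with four parallel (untrained) branches and concatenate."""
    feats = []
    for act in ACTIVATIONS:
        W = rng.standard_normal((in_dim, hid_dim)) * 0.1  # placeholder weights
        b = np.zeros(hid_dim)
        feats.append(act(x @ W + b))
    return np.concatenate(feats, axis=-1)

x = rng.standard_normal((5, 16))          # batch of 5 inputs, 16 dims each
z = baf_features(x, in_dim=16, hid_dim=8)
print(z.shape)                            # (5, 32): 4 branches x 8 hidden units
```

Because the branches apply different nonlinear mappings to the same input, the concatenated vector carries complementary views of the data, which is the intuition the abstract gives for BAF outperforming four same-activation SAEs.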
|Title of host publication||Proceedings of 2019 IEEE 18th International Conference on Cognitive Informatics and Cognitive Computing, ICCI*CC 2019|
|Publication status||Published - Jul 2019|
- feature learning
- pattern classification
- stacked autoencoder