Abstract
The training of an autoencoder (AE) focuses on selecting connection weights by minimizing both the training error and a regularization term. However, the ultimate goal of AE training is to autoencode future unseen samples correctly (i.e., to generalize well). Minimizing the training error with different regularization terms only indirectly minimizes the generalization error. Moreover, the trained model may not be robust to small input perturbations, which can lead to poor generalization. In this paper, we propose a localized stochastic sensitive AE (LiSSA) to enhance the robustness of AEs with respect to input perturbations. With the localized stochastic sensitivity regularization, LiSSA reduces sensitivity to unseen samples that differ from training samples only by small perturbations. Meanwhile, LiSSA preserves the local connectivity from the original input space to the representation space, learning more robust features (intermediate representations) for unseen samples. A classifier using these learned features achieves better generalization. Extensive experimental results on 36 benchmark datasets show that LiSSA significantly outperforms several classical and recent AE training methods on classification tasks.
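The core idea described above — penalizing how much the hidden representation changes under small random input perturbations, in addition to the reconstruction error — can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's implementation: the network shape, the uniform perturbation width `q`, the weight `lam`, and the number of perturbed copies `n_pert` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X, W, b):
    # Sigmoid encoder: h = sigma(X W + b)
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def decode(H, V, c):
    # Linear decoder: x_hat = H V + c
    return H @ V + c

def stochastic_sensitive_loss(X, W, b, V, c, lam=0.1, q=0.3, n_pert=20):
    """Reconstruction MSE plus a stochastic-sensitivity term: the
    average change in the hidden representation when the inputs are
    perturbed by noise drawn uniformly from [-q, q]^d. All
    hyperparameters here are illustrative, not values from the paper."""
    H = encode(X, W, b)
    recon = np.mean((decode(H, V, c) - X) ** 2)
    sens = 0.0
    for _ in range(n_pert):
        dX = rng.uniform(-q, q, size=X.shape)  # small localized input perturbation
        sens += np.mean((encode(X + dX, W, b) - H) ** 2)
    return recon + lam * sens / n_pert

# Toy check on random data: the regularized loss is never below the
# plain reconstruction loss, since the sensitivity term is nonnegative.
X = rng.standard_normal((8, 5))
W = rng.standard_normal((5, 3)) * 0.1
b = np.zeros(3)
V = rng.standard_normal((3, 5)) * 0.1
c = np.zeros(5)
plain = np.mean((decode(encode(X, W, b), V, c) - X) ** 2)
print(stochastic_sensitive_loss(X, W, b, V, c) >= plain)  # → True
```

Minimizing this combined objective (e.g., by gradient descent on `W`, `b`, `V`, `c`) pushes the encoder toward representations that are flat in a neighborhood of each training sample, which is the robustness property the abstract attributes to LiSSA.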
Original language | English |
---|---|
Pages (from-to) | 2748-2760 |
Journal | IEEE Transactions on Cybernetics |
Volume | 51 |
Issue number | 5 |
Early online date | 16 Jul 2019 |
DOIs | |
Publication status | Published - May 2021 |
Externally published | Yes |
Bibliographical note
This work was supported in part by the National Natural Science Foundation of China under Grant 61876066, Grant 61572201, and Grant 61672443, in part by the Guangzhou Science and Technology Plan Project under Grant 201804010245, and in part by the Hong Kong RGC General Research Funds under Grant 9042038 (CityU 11205314) and Grant 9042322 (CityU 11200116).

Keywords
- Autoencoder (AE)
- stochastic sensitivity
- training algorithm