Abstract
This paper proposes a feature transformation method, named weight-matrix learning (WML), to improve the performance of clustering and classification. A feed-forward neural network is specifically designed for WML; it learns the optimal weights by minimizing a cross-entropy-like objective function, and training is carried out with batch or stochastic gradient descent. The proposed feature transformation is linear and constitutes a non-trivial extension of an earlier technique, feature-weight learning (FWL). Essentially, WML can be viewed as a technique for learning to depart from 0.5-similarity: it pulls samples whose similarity exceeds 0.5 closer together and pushes samples whose similarity falls below 0.5 farther apart. From this perspective, WML is an off-center technique whose center is 0.5-similarity. Both theoretical analysis and experiments validate that WML can significantly improve the performance of clustering algorithms such as k-means, and enhance the performance of classification algorithms such as the random weight neural network.
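The paper's exact formulation is not reproduced in this record, but the mechanism the abstract describes can be sketched as follows: learn a weight matrix W defining a linear transform by gradient descent on a cross-entropy-like loss over pairwise similarities, with targets thresholded at 0.5. In the minimal NumPy sketch below, the Gaussian similarity function and all hyperparameters (`sigma`, `lr`, `epochs`) are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch of the WML idea: learn a linear transform z = W x by
# batch gradient descent on a cross-entropy-like loss that pulls pairs with
# original similarity > 0.5 together and pushes pairs <= 0.5 apart.
import numpy as np

def pairwise_similarity(Z, sigma=1.0):
    """Gaussian similarity s_ij = exp(-||z_i - z_j||^2 / sigma) (assumed form)."""
    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / sigma)

def wml_fit(X, sigma=1.0, lr=0.01, epochs=200):
    n, d = X.shape
    W = np.eye(d)                              # start from the identity transform
    S0 = pairwise_similarity(X, sigma)
    T = (S0 > 0.5).astype(float)               # targets from the 0.5-similarity center
    for _ in range(epochs):
        Z = X @ W.T
        S = np.clip(pairwise_similarity(Z, sigma), 1e-8, 1 - 1e-8)
        grad = np.zeros_like(W)
        for i in range(n):
            for j in range(i + 1, n):
                # cross-entropy-like term: -[t log s + (1 - t) log(1 - s)]
                dL_ds = -(T[i, j] / S[i, j] - (1 - T[i, j]) / (1 - S[i, j]))
                diff = (X[i] - X[j])[:, None]  # column vector, shape (d, 1)
                # chain rule: ds/dW = s * (-2 / sigma) * W diff diff^T
                grad += dL_ds * S[i, j] * (-2.0 / sigma) * (W @ diff @ diff.T)
        W -= lr * grad / (n * (n - 1) / 2)     # batch gradient descent step
    return W

# Usage: transform the features, then cluster (e.g. k-means) in the new space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
W = wml_fit(X)
Z = X @ W.T  # similar pairs move closer, dissimilar pairs move farther apart
```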
Original language | English |
---|---|
Pages (from-to) | 635-651 |
Number of pages | 17 |
Journal | Information Sciences |
Volume | 503 |
Early online date | 11 Jul 2019 |
DOIs | |
Publication status | Published - Nov 2019 |
Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2019 Elsevier Inc.
Keywords
- Feature transformation
- Feed-forward neural network
- Similarity matrix