Abstract
Labels play a central role in text classification tasks. However, most studies suffer from a lossy label-encoding problem, in which each label is represented by a meaningless, mutually independent one-hot vector. This paper proposes a novel strategy that dynamically generates a soft pseudo label from the model's predictions during training. This history-based soft pseudo label is then taken as the target, and parameters are optimized by minimizing the distance between the target and the prediction. In addition, we augment the training data with Mix-up, a widely used method, to prevent overfitting on small datasets. Extensive experimental results demonstrate that the proposed dynamical soft label strategy significantly improves the performance of several widely used deep learning classification models on binary and multi-class text classification tasks. Not only is our simple and efficient strategy much easier to implement and train, it also exhibits substantial improvements (up to a 2.54% relative improvement on the FDCNews dataset with an LSTM encoder) over Label Confusion Learning (LCM), a state-of-the-art label smoothing model, under the same experimental setting. The experimental results also demonstrate that Mix-up improves our method's performance on smaller datasets but introduces excess noise on larger datasets, which diminishes the model's performance.
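The two ingredients named in the abstract can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's actual implementation: the blending form, the `momentum` parameter, and the function names `update_soft_label` and `mixup` are all assumptions; only the general ideas (a history-based soft pseudo label replacing the one-hot target, and standard Mix-up interpolation) come from the abstract.

```python
import numpy as np

def update_soft_label(one_hot, prev_pred, momentum=0.9):
    """Blend the hard one-hot label with the model's earlier prediction
    to form a history-based soft pseudo label.
    NOTE: the momentum-style blending here is an illustrative assumption,
    not the paper's exact formulation."""
    soft = momentum * one_hot + (1.0 - momentum) * prev_pred
    return soft / soft.sum()  # renormalize so it remains a distribution

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard Mix-up: convex combination of two examples and their
    (soft) labels, weighted by a Beta(alpha, alpha) sample."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

# Example: a 3-class sample whose previous prediction leaned toward class 0.
one_hot = np.array([1.0, 0.0, 0.0])
prev_pred = np.array([0.6, 0.3, 0.1])
soft = update_soft_label(one_hot, prev_pred)  # still peaked at class 0,
                                              # but with non-zero mass elsewhere
```

Training would then minimize a distance (e.g. KL divergence) between the model's prediction and `soft` instead of the one-hot target, and Mix-up would be applied to (embedded) inputs and their soft labels.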
Original language | English |
---|---|
Pages (from-to) | 2395-2405 |
Number of pages | 11 |
Journal | International Journal of Machine Learning and Cybernetics |
Volume | 14 |
Issue number | 7 |
Early online date | 19 Jan 2023 |
DOIs | |
Publication status | Published - Jul 2023 |
Bibliographical note
Publisher Copyright:© 2023, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
Funding
The research described in this article has been supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (UGC/FDS16/E01/19), the Lam Woo Research Fund (LWP20019) and the Faculty Research Grants (DB22A5 and DB22B4) of Lingnan University, Hong Kong. The authors have no competing interests to declare that are relevant to the content of this article.
Keywords
- Label distribution learning
- Mix-up
- Text classification
Fingerprint
Dive into the research topics of 'Improving text classification via a soft dynamical label strategy'. Together they form a unique fingerprint.
Projects
- 2 Finished
-
Cluster-level Social Emotion Classification Across Domains
XIE, H. (PI)
1/03/22 → 28/02/23
Project: Grant Research
-