Contrastive sentence representation learning has made great progress thanks to a range of text augmentation strategies and hard negative sampling techniques. However, most studies directly employ in-batch samples as negatives, ignoring the semantic relationship between negative samples and anchors, which may lead to negative sampling bias. To address this issue, we propose similarity and relative-similarity strategies for identifying potential false negatives. Moreover, we introduce adaptive false negative elimination and attraction methods to mitigate their adverse effects. Our approaches can also be regarded as semi-supervised contrastive learning, since the identified false negatives are treated as negative samples in adaptive false negative elimination and as positive samples in adaptive false negative attraction. By fusing information from positive and negative pairs, contrastive learning produces rich, discriminative representations that capture the intrinsic characteristics of sentences. Experimental results indicate that our proposed strategies and methods yield significant further performance improvements. Specifically, the combination of the similarity strategy and the adaptive false negative elimination method achieves the best results, with an average performance gain of 2.1% over SimCSE on semantic textual similarity (STS) tasks. Furthermore, our approach is generalizable and can be applied to different text data augmentation strategies and to certain existing contrastive sentence representation learning models. Our experimental code and data are publicly available at https://github.com/Linda230/AFNC.
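The abstract describes the similarity strategy and elimination method only at a high level. The following is a minimal NumPy sketch of one plausible reading, in which in-batch negatives whose cosine similarity to the anchor exceeds a threshold are flagged as potential false negatives and masked out of the InfoNCE denominator. The function names, the threshold value, and the temperature are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between row vectors of a and b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def info_nce_with_elimination(anchors, positives, tau=0.05, threshold=0.8):
    """InfoNCE loss where in-batch negatives that are too similar to the
    anchor (suspected false negatives) are removed from the denominator.

    anchors, positives: (B, d) arrays; positives[i] pairs with anchors[i].
    """
    raw = cosine_sim(anchors, positives)          # (B, B) raw similarities
    logits = raw / tau                            # temperature-scaled
    B = logits.shape[0]
    # Suspected false negatives: off-diagonal pairs above the threshold.
    false_neg = (raw > threshold) & ~np.eye(B, dtype=bool)
    # Elimination: exclude them from the softmax denominator.
    logits = np.where(false_neg, -np.inf, logits)
    # Stable log-sum-exp over each row (diagonal is always finite).
    m = logits.max(axis=1, keepdims=True)
    logZ = (np.log(np.exp(logits - m).sum(axis=1, keepdims=True)) + m).squeeze(1)
    # The positive for row i is the diagonal entry.
    return -(np.diag(logits) - logZ).mean()
```

With `threshold=1.0` no off-diagonal entry is masked and the loss reduces to standard in-batch InfoNCE; lowering the threshold progressively discards near-duplicate negatives, which is the elimination behavior the abstract describes.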
Bibliographical note: Publisher Copyright © 2023 The Authors
- Contrastive learning
- Sentence representation learning
- Negative sampling bias
- Adaptive weight
- Semi-supervised learning