Abstract
Self-attention-based models have achieved remarkable progress in short-text mining. However, their quadratic computational complexity restricts their application to long-text processing. Prior works have adopted a chunking strategy, dividing long documents into chunks and stacking a self-attention backbone with a recurrent structure to extract semantic representations. Such an approach disables parallelization of the attention mechanism, significantly increasing the training cost and raising hardware requirements. Revisiting the self-attention mechanism and the recurrent structure, this paper proposes a novel long-document encoding model, the Recurrent Attention Network (RAN), which enables the recurrent operation of self-attention. Combining the advantages of both, RAN extracts global semantics at both the token level and the document level, making it inherently compatible with sequential and classification tasks, respectively. Furthermore, RAN is computationally scalable, as it supports parallelization in long-document processing. Extensive experiments demonstrate the long-text encoding ability of the proposed RAN model on both classification and sequential tasks, showing its potential for a wide range of applications.
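To make the chunk-plus-recurrence idea described in the abstract concrete, the sketch below shows one minimal way a long sequence can be split into chunks, each chunk encoded with standard self-attention, and a small recurrent state carried across chunks. This is an illustrative assumption, not the paper's actual RAN architecture (which, per the abstract, additionally supports parallelization across chunks); the class name `ChunkedRecurrentAttention`, the `chunk_size` parameter, and the GRU-based state update are all hypothetical.

```python
# Minimal sketch of encoding a long document chunk by chunk while carrying a
# recurrent global state. NOT the authors' RAN implementation; all names and
# design choices here are assumptions for illustration only.
import torch
import torch.nn as nn


class ChunkedRecurrentAttention(nn.Module):  # hypothetical module name
    def __init__(self, d_model: int = 256, n_heads: int = 4, chunk_size: int = 128):
        super().__init__()
        self.chunk_size = chunk_size
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gru = nn.GRUCell(d_model, d_model)  # recurrence over chunk summaries

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, d_model) token embeddings of a long document
        batch, seq_len, d_model = x.shape
        state = x.new_zeros(batch, d_model)  # global state carried across chunks
        token_outputs = []
        for start in range(0, seq_len, self.chunk_size):
            chunk = x[:, start:start + self.chunk_size]
            # Prepend the current global state so attention inside the chunk
            # can read context accumulated from earlier chunks.
            ctx = torch.cat([state.unsqueeze(1), chunk], dim=1)
            out, _ = self.attn(ctx, ctx, ctx)
            token_outputs.append(out[:, 1:])  # token-level representations
            # Update the global state from a mean-pooled summary of this chunk.
            state = self.gru(out[:, 1:].mean(dim=1), state)
        tokens = torch.cat(token_outputs, dim=1)  # for sequential (token-level) tasks
        return tokens, state                      # state: document-level representation
```

In this sketch, `tokens` would feed sequential tasks such as tagging, while `state` would feed document-level classification, mirroring the two usage modes the abstract mentions.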
Original language | English |
---|---|
Title of host publication | 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 |
Publisher | Association for Computational Linguistics |
Pages | 3006-3019 |
Number of pages | 14 |
ISBN (Electronic) | 9781959429623 |
ISBN (Print) | 9781959429623 |
DOIs | |
Publication status | Published - Jul 2023 |
Event | 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, Toronto, Canada (9 Jul 2023 → 14 Jul 2023) |
Publication series
Name | Proceedings of the Annual Meeting of the Association for Computational Linguistics |
---|---|
Publisher | Association for Computational Linguistics (ACL) |
ISSN (Print) | 0736-587X |
Conference
Conference | 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 |
---|---|
Country/Territory | Canada |
City | Toronto |
Period | 9/07/23 → 14/07/23 |
Bibliographical note
Publisher Copyright: © 2023 Association for Computational Linguistics.
Funding
Xianming Li, Xiaotian Luo, Xing Lee, and Yingbin Zhao’s work has been supported by Ant Group. Zongxi Li’s work has been supported by a grant from Hong Kong Metropolitan University (Project Reference No. CP/2022/02). Haoran Xie’s work has been supported by the Direct Grant (DR23B2) and the Faculty Research Grant (DB23A3) of Lingnan University, Hong Kong. Qing Li’s work has been supported by the Hong Kong Research Grants Council through the Collaborative Research Fund (Project No. C1031-18G). We thank the anonymous reviewers for their careful reading of our manuscript; their insightful comments and suggestions helped us improve its quality.
Projects
2 Finished

- A Preliminary Investigation and Evaluation on Sentence Representation Models based on Contrastive Learning
  XIE, H. (PI)
  1/01/23 → 31/12/23
  Project: Grant Research

- Integrating Novel Dropout Mechanism into Label Extension for Emotion Classification
  XIE, H. (PI)
  1/01/23 → 31/12/23
  Project: Grant Research