Abstract
The increasing integration of multimedia such as videos and graphical abstracts in scientific publications necessitates advanced summarization techniques. This paper introduces Uni-SciSum, a framework for Scientific Multimodal Summarization with Multimodal Output (SMSMO), addressing the challenges of fusing heterogeneous data sources (e.g., text, images, video, audio) and producing a multimodal summary within a unified architecture. Uni-SciSum leverages the power of large language models (LLMs) and extends their capability to cross-modal understanding through BridgeNet, a query-based transformer that fuses diverse modalities into a fixed-length embedding. A two-stage training process, involving modal-to-modal pre-training and cross-modal instruction tuning, aligns different modalities with summaries and optimizes for multimodal summary generation. Experiments on two new SMSMO datasets show Uni-SciSum outperforms uni- and multi-modality methods, advancing LLM applications in the increasingly multimodal realm of scientific communication.
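The abstract describes BridgeNet as a query-based transformer that compresses variable-length multimodal inputs into a fixed-length embedding. The paper's implementation details are not given here; below is a minimal, hypothetical sketch of that query-based fusion idea in NumPy, using single-head cross-attention where a fixed set of learnable queries attends over concatenated modality tokens. All names, dimensions, and the single-head simplification are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def query_fusion(queries, modality_tokens, d_k):
    """Cross-attend a fixed set of learnable queries over the concatenated
    tokens of all modalities, yielding a fixed-length fused embedding."""
    tokens = np.concatenate(modality_tokens, axis=0)   # (T_total, d)
    scores = queries @ tokens.T / np.sqrt(d_k)         # (Q, T_total)
    attn = softmax(scores, axis=-1)                    # each query's weights sum to 1
    return attn @ tokens                               # (Q, d): fixed length

rng = np.random.default_rng(0)
d = 64
queries = rng.normal(size=(8, d))      # 8 learnable query vectors (hypothetical count)
text = rng.normal(size=(120, d))       # text token embeddings
image = rng.normal(size=(49, d))       # image patch embeddings
video = rng.normal(size=(200, d))      # video frame embeddings

fused = query_fusion(queries, [text, image, video], d)
print(fused.shape)  # (8, 64) -- fixed length regardless of input sizes
```

The key property this sketch shows is that the output shape depends only on the number of queries, not on how many tokens each modality contributes, which is what lets a downstream LLM consume the fused representation at a fixed cost.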
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 31st International Conference on Computational Linguistics: Industry Track |
| Editors | Owen RAMBOW, Leo WANNER, Marianna APIDIANAKI, Hend AL-KHALIFA, Barbara DI EUGENIO, Steven SCHOCKAERT, Kareem DARWISH, Apoorv AGARWAL |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 263-275 |
| Number of pages | 13 |
| ISBN (Electronic) | 9798891761971 |
| Publication status | Published - Jan 2025 |
| Event | The 31st International Conference on Computational Linguistics: Industry Track - Abu Dhabi, United Arab Emirates Duration: 19 Jan 2025 → 24 Jan 2025 https://coling2025.org/ |
Publication series
| Name | Proceedings - International Conference on Computational Linguistics, COLING |
|---|---|
| ISSN (Print) | 2951-2093 |
Conference
| Conference | The 31st International Conference on Computational Linguistics: Industry Track |
|---|---|
| Abbreviated title | COLING 2025 |
| Country/Territory | United Arab Emirates |
| City | Abu Dhabi |
| Period | 19/01/25 → 24/01/25 |
| Internet address | https://coling2025.org/ |
Bibliographical note
Publisher Copyright: ©2025 Association for Computational Linguistics.
Funding
The work is supported by the Hong Kong RGC ECS (LU23200223/130393) and Internal Grants of Lingnan University, Hong Kong (code: LWP20018/871232, DR23A9/101194, DB23B5/102083, DB23AI/102070 and 102241).
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 9: Industry, Innovation, and Infrastructure
Fingerprint
Dive into the research topics of 'Enhancing Large Language Models for Scientific Multimodal Summarization with Multimodal Output'.