In this paper, we propose a multimodal fusion network (termed MFN) that integrates text and image data from social media for rumor detection. Given the multimodal features, MFN exploits a self-attentive fusion (SAF) mechanism to conduct feature-level fusion by assigning corresponding weights to the complementary modalities. In particular, the textual features are combined with the fused features through a skip connection, as textual features tend to be more discriminative than visual features. Furthermore, MFN introduces a latent topic memory (LTM) to store semantic information about rumor and non-rumor events, benefiting the identification of upcoming posts. Extensive experiments on two public datasets show that the proposed MFN outperforms state-of-the-art approaches.
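To make the two mechanisms concrete, below is a minimal PyTorch sketch of weighted feature-level fusion with a textual skip connection, plus an attention-based read over learned topic slots. This is an illustrative assumption, not the paper's implementation: the class names (SelfAttentiveFusion, LatentTopicMemory), the single-linear attention scorer, the learnable memory matrix, and the concatenation classifier are all hypothetical.

```python
import torch
import torch.nn as nn

class SelfAttentiveFusion(nn.Module):
    """Sketch: weight the two modality vectors via attention, then add
    the textual features back through a skip connection."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each modality's feature vector

    def forward(self, text_feat: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
        feats = torch.stack([text_feat, img_feat], dim=1)   # (batch, 2, dim)
        weights = torch.softmax(self.score(feats), dim=1)   # per-modality weights
        fused = (weights * feats).sum(dim=1)                # weighted fusion
        return fused + text_feat                            # skip connection on text

class LatentTopicMemory(nn.Module):
    """Sketch: attend over learnable topic slots intended to store
    rumor / non-rumor event semantics, returning a memory summary."""
    def __init__(self, num_topics: int, dim: int):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(num_topics, dim))

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        attn = torch.softmax(query @ self.memory.t(), dim=-1)  # (batch, num_topics)
        return attn @ self.memory                              # (batch, dim)

# Usage sketch: fuse modalities, read the topic memory, classify.
if __name__ == "__main__":
    batch, dim = 4, 256
    text_feat, img_feat = torch.randn(batch, dim), torch.randn(batch, dim)
    fused = SelfAttentiveFusion(dim)(text_feat, img_feat)
    mem = LatentTopicMemory(num_topics=10, dim=dim)(fused)
    logits = nn.Linear(2 * dim, 2)(torch.cat([fused, mem], dim=-1))  # rumor / non-rumor
```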
|Title of host publication||2021 IEEE International Conference on Multimedia and Expo (ICME)|
|Number of pages||6|
|Publication status||Published - 9 Jun 2021|
|Event||2021 IEEE International Conference on Multimedia and Expo (ICME) - Shenzhen, China|
|Duration||5 Jul 2021 → 9 Jul 2021|
Bibliographical note: This work is supported by the National Natural Science Foundation of China (No. 62076073), the Guangdong Basic and Applied Basic Research Foundation (No. 2020A1515010616), the Guangdong Innovative Research Team Program (No. 2014ZT05G157), the Key-Area Research and Development Program of Guangdong Province (2019B010136001), the Science and Technology Planning Project of Guangdong Province (LZC0023), a grant from the RGC of HKSAR, China (UGC/FDS16/E01/19), and the Faculty Research Fund (DB21A9), Lingnan University, Hong Kong.
Keywords
- Multimodal fusion
- Rumor detection