SEFE : Superficial and Essential Forgetting Eliminator for Multimodal Continual Instruction Tuning

  • Jinpeng CHEN
  • Runmin CONG
  • Yuzhi ZHAO*
  • Hongzheng YANG
  • Guangneng HU
  • Horace Ho Shing IP
  • Sam KWONG*

*Corresponding author for this work

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › peer-review

Abstract

Multimodal Continual Instruction Tuning (MCIT) aims to enable Multimodal Large Language Models (MLLMs) to incrementally learn new tasks without catastrophic forgetting. In this paper, we explore forgetting in this context, categorizing it into superficial forgetting and essential forgetting. Superficial forgetting refers to cases where the model’s knowledge may not be genuinely lost, but its responses to previous tasks deviate from expected formats due to the influence of subsequent tasks’ answer styles, making the results unusable. By contrast, essential forgetting refers to situations where the model provides correctly formatted but factually inaccurate answers, indicating a true loss of knowledge. Assessing essential forgetting necessitates addressing superficial forgetting first, as severe superficial forgetting can obscure the model’s knowledge state. Hence, we first introduce the Answer Style Diversification (ASD) paradigm, which defines a standardized process for transforming data styles across different tasks, unifying their training sets into similarly diversified styles to prevent superficial forgetting caused by style shifts. Building on this, we propose RegLoRA to mitigate essential forgetting. RegLoRA stabilizes the key parameters where prior knowledge is primarily stored by applying regularization, enabling the model to retain existing competencies. Experimental results demonstrate that our overall method, SEFE, achieves state-of-the-art performance.
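The RegLoRA idea in the abstract can be sketched in broad strokes: identify the positions of the previous task's LoRA update that carry the most prior knowledge, then regularize those positions while training on the next task. The sketch below is a minimal, hypothetical illustration using numpy; the magnitude-based `key_mask` heuristic, the `keep_ratio` value, and the squared-penalty form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lora_update(A, B):
    # LoRA parameterizes the weight change as a low-rank product B @ A.
    return B @ A

def key_mask(delta_w, keep_ratio=0.1):
    # Hypothetical key-parameter selection: flag the largest-magnitude
    # entries of the previous task's update as primary knowledge carriers.
    k = max(1, int(keep_ratio * delta_w.size))
    thresh = np.sort(np.abs(delta_w).ravel())[-k]
    return (np.abs(delta_w) >= thresh).astype(delta_w.dtype)

def reg_loss(delta_w_new, delta_w_old, mask, lam=1.0):
    # Penalize drift of the masked (key) positions while leaving the
    # remaining entries free to adapt to the new task.
    return lam * np.sum((mask * (delta_w_new - delta_w_old)) ** 2)
```

During continual training, `reg_loss` would be added to the new task's objective so that gradient updates leave the masked positions of the accumulated LoRA update largely intact, while unmasked positions remain plastic.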

Original language: English
Title of host publication: Proceedings of the 42nd International Conference on Machine Learning
Publisher: ML Research Press
Pages: 7982-8001
Number of pages: 20
Volume: 267
Publication status: Published - Jul 2025
Event: 42nd International Conference on Machine Learning, ICML 2025 - Vancouver, Canada
Duration: 13 Jul 2025 - 19 Jul 2025

Conference

Conference: 42nd International Conference on Machine Learning, ICML 2025
Country/Territory: Canada
City: Vancouver
Period: 13/07/25 - 19/07/25

Bibliographical note

Publisher Copyright:
© 2025 by the author(s).

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62471278 and 62306220, in part by the Research Grants Council of the Hong Kong Special Administrative Region, China under Grant STG5/E-103/24-R, and in part by the Taishan Scholar Project of Shandong Province under Grant tsqn202306079.
