Abstract
Existing Continual Semantic Segmentation (CSS) methods effectively address the issue of background shift in regular training samples. However, the issue persists in exemplars, i.e., replay samples, where it is often overlooked. Each exemplar is annotated only with the classes from its originating task, while other past classes and the current classes during replay are labeled as background. This partial annotation can erase the network's knowledge of previous classes and impede the learning of new classes. To resolve this, we introduce a new method named Trace Back and Go Ahead (TAGA), which utilizes a backward annotator model and a forward annotator model to generate pseudo-labels for both regular training samples and exemplars, aiming to reduce the adverse effects of incomplete annotations. This approach effectively mitigates the risk of incorrect guidance from both sample types, offering a comprehensive solution to background shift. Additionally, because exemplars are far fewer than regular training samples, the class distribution in the sample pool of each incremental task exhibits a long-tailed pattern, potentially biasing classification towards incremental classes. Consequently, TAGA incorporates a class-equilibrium sampling strategy that adaptively adjusts the sampling frequencies based on the ratios of exemplars to regular samples and of past to new classes, counteracting the skewed distribution. Extensive experiments on two public datasets, Pascal VOC 2012 and ADE20K, demonstrate that our method surpasses state-of-the-art methods.
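As a rough illustration of the class-equilibrium sampling idea described in the abstract, the sketch below up-weights exemplars according to the regular-to-exemplar sample ratio and the old-to-new class ratio so that past classes are not drowned out by the long-tailed pool of the current task. The specific weighting rule, function names, and parameters here are assumptions made for illustration only; they are not the formulas or code from the paper.

```python
import random


def class_equilibrium_weights(num_regular, num_exemplar, num_old_classes, num_new_classes):
    """Return per-pool sampling weights (regular_weight, exemplar_weight).

    Hypothetical rule (illustration only): exemplars are up-weighted by the
    regular-to-exemplar sample ratio and the old-to-new class ratio, so old
    and new classes are drawn at roughly comparable frequencies.
    """
    sample_ratio = num_regular / max(num_exemplar, 1)
    class_ratio = num_old_classes / max(num_new_classes, 1)
    return 1.0, sample_ratio * class_ratio


def sample_batch(regular_samples, exemplars, batch_size, num_old_classes, num_new_classes):
    """Draw a mini-batch from the joint pool with the adjusted frequencies."""
    reg_w, ex_w = class_equilibrium_weights(
        len(regular_samples), len(exemplars), num_old_classes, num_new_classes
    )
    pool = list(regular_samples) + list(exemplars)
    weights = [reg_w] * len(regular_samples) + [ex_w] * len(exemplars)
    return random.choices(pool, weights=weights, k=batch_size)
```

For example, with 10,000 regular samples, 200 exemplars, 15 old classes, and 5 new classes, the rule above would draw each exemplar roughly 150 times more often than each regular sample; the actual adjustment used by TAGA is described in the full paper.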
Original language | English
---|---
Article number | 111613
Journal | Pattern Recognition
Volume | 165
Early online date | 2 Apr 2025
Publication status | E-pub ahead of print - 2 Apr 2025 |
Bibliographical note
Publisher Copyright: © 2025 Elsevier Ltd
Funding
This work was supported in part by the Taishan Scholar Project of Shandong Province under Grant tsqn202306079, in part by the National Natural Science Foundation of China under Grant 62471278, in part by the Key Project of Science and Technology Innovation 2030 funded by the Ministry of Science and Technology of China under Grant 2018AAA0101301, and in part by the Hong Kong GRF-RGC General Research Fund under Grants 11209819 (CityU 9042816) and 11203820 (CityU 9042598).
Keywords
- Continual learning
- Continual semantic segmentation
- Replay-based