Abstract
Collaborative perception has recently gained significant attention in autonomous driving, improving perception quality by enabling the exchange of additional information among vehicles. However, deploying collaborative perception systems can lead to domain shifts due to diverse environmental conditions and data heterogeneity among connected and autonomous vehicles (CAVs). To address these challenges, we propose a unified domain generalization framework to be utilized during the training and inference stages of collaborative perception. In the training phase, we introduce an Amplitude Augmentation (AmpAug) method to augment low-frequency image variations, broadening the model’s ability to learn across multiple domains. We also employ a meta-consistency training scheme to simulate domain shifts, optimizing the model with a carefully designed consistency loss to acquire domain-invariant representations. In the inference phase, we introduce an intra-system domain alignment mechanism to reduce or potentially eliminate the domain discrepancy among CAVs prior to inference. Extensive experiments substantiate the effectiveness of our method in comparison with the existing state-of-the-art works.
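The Amplitude Augmentation (AmpAug) idea described above — perturbing low-frequency image content to expose the model to domain-style variation — can be sketched with a standard frequency-domain amplitude mix. This is a minimal illustration, not the paper's exact formulation: the window fraction `beta`, the mixing weight `alpha`, and the per-channel FFT are assumptions introduced here.

```python
import numpy as np

def amplitude_augment(img, ref, beta=0.1, alpha=0.5):
    """Mix the low-frequency amplitude spectrum of `img` with that of `ref`,
    keeping the phase of `img` intact. Both inputs are (H, W, C) float arrays."""
    fft_img = np.fft.fft2(img, axes=(0, 1))
    fft_ref = np.fft.fft2(ref, axes=(0, 1))
    amp_img, pha_img = np.abs(fft_img), np.angle(fft_img)
    amp_ref = np.abs(fft_ref)

    # Shift zero frequency to the center so the low-frequency band
    # becomes a central square window.
    amp_img = np.fft.fftshift(amp_img, axes=(0, 1))
    amp_ref = np.fft.fftshift(amp_ref, axes=(0, 1))

    h, w = img.shape[:2]
    b = int(min(h, w) * beta)  # half-size of the low-frequency window
    ch, cw = h // 2, w // 2

    # Interpolate only the low-frequency amplitudes of the two images;
    # high frequencies (edges, structure) are left untouched.
    amp_img[ch - b:ch + b, cw - b:cw + b] = (
        (1 - alpha) * amp_img[ch - b:ch + b, cw - b:cw + b]
        + alpha * amp_ref[ch - b:ch + b, cw - b:cw + b]
    )
    amp_img = np.fft.ifftshift(amp_img, axes=(0, 1))

    # Recombine mixed amplitude with the original phase and invert.
    out = np.fft.ifft2(amp_img * np.exp(1j * pha_img), axes=(0, 1))
    return np.real(out)
```

Because phase is preserved, scene layout and object boundaries survive the augmentation while low-frequency "style" (illumination, color cast) shifts toward the reference image — the property a domain-generalization augmentation needs.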
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 1783-1796 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Intelligent Transportation Systems |
| Volume | 26 |
| Issue number | 2 |
| Early online date | 5 Dec 2024 |
| DOIs | |
| Publication status | Published - 2025 |
Bibliographical note
Publisher Copyright: © 2000-2011 IEEE.
Funding
This work was supported in part by the Hong Kong Innovation and Technology Commission under InnoHK Project CIMDA, in part by the Hong Kong SAR Government under the Global STEM Professorship and Research Talent Hub, and in part by the Hong Kong Jockey Club under the Hong Kong JC STEM Lab of Smart City under Grant 2023-0108. The work of Yiqin Deng was supported in part by the National Natural Science Foundation of China under Grant 62301300. The work of Xianhao Chen was supported in part by HKU-SCF FinTech Academy Research and Development Funding.
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 9 Industry, Innovation, and Infrastructure
- SDG 11 Sustainable Cities and Communities
Keywords
- Domain generalization
- autonomous driving
- bird’s eye view segmentation
- vehicle-to-vehicle collaborative perception
Fingerprint
Research topics of 'Toward Full-Scene Domain Generalization in Multi-Agent Collaborative Bird’s Eye View Segmentation for Connected and Autonomous Driving'.