Unsupervised Robust Domain Adaptation: Paradigm, Theory and Algorithm

  • Fuxiang HUANG
  • Xiaowei FU
  • Shiyu YE
  • Lina MA
  • Wen LI
  • Xinbo GAO
  • David ZHANG
  • Lei ZHANG

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

Unsupervised domain adaptation (UDA) aims to transfer knowledge from a label-rich source domain to an unlabeled target domain by addressing domain shifts. Most UDA approaches emphasize transferability but often overlook robustness against adversarial attacks. Although vanilla adversarial training (VAT) improves the robustness of deep neural networks, it has little effect on UDA. This paper focuses on answering three key questions: 1) Why does VAT, known for its defensive effectiveness, fail in the UDA paradigm? 2) What is the generalization bound theory under attacks, and how does it evolve from classical UDA theory? 3) How can we implement a robustification training procedure without complex modifications? Specifically, we explore and reveal the inherent entanglement challenge in the general UDA+VAT paradigm, and propose an unsupervised robust domain adaptation (URDA) paradigm. We further derive the generalization bound theory of the URDA paradigm so that it can resist both adversarial noise and domain shift. To the best of our knowledge, this is the first work to establish the URDA paradigm and theory. We further introduce a simple, novel yet effective URDA algorithm called Disentangled Adversarial Robustness Training (DART), a two-step training procedure that ensures both transferability and robustness. DART first pre-trains an arbitrary UDA model, and then applies an instantaneous robustification post-training step via disentangled distillation. Experiments on four benchmark datasets, with and without attacks, show that DART effectively enhances robustness while maintaining domain adaptability, and validate the URDA paradigm and theory.
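The two-step structure described above (pre-train a task model, then robustify it post hoc by distilling its clean predictions into a student trained on perturbed inputs) can be illustrated with a deliberately minimal toy sketch. This is not the authors' DART implementation: the linear model, the FGSM-style perturbation, the soft-target distillation loss, and all hyperparameters below are illustrative assumptions chosen only to make the pre-train/then-distill pattern concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 samples, 5 features, labels from a fixed linear rule.
X = rng.normal(size=(200, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)

# Step 1 (stand-in for UDA pre-training): fit a logistic-regression "teacher"
# on the available data with plain gradient descent.
w_teacher = np.zeros(5)
for _ in range(200):
    p = sigmoid(X @ w_teacher)
    w_teacher -= 0.1 * X.T @ (p - y) / len(X)

# Step 2 (robustification post-training via distillation): a student, initialized
# from the frozen teacher, is trained on FGSM-perturbed inputs to match the
# teacher's predictions on the corresponding clean inputs (soft targets).
eps = 0.1
w_student = w_teacher.copy()
soft_targets = sigmoid(X @ w_teacher)  # teacher is frozen from here on
for _ in range(200):
    p_student = sigmoid(X @ w_student)
    # Gradient of the student's distillation loss w.r.t. the input:
    # dL/dz = p - t for logistic loss with soft target t, and dz/dx = w.
    grad_x = np.outer(p_student - soft_targets, w_student)
    X_adv = X + eps * np.sign(grad_x)          # one-step FGSM perturbation
    p_adv = sigmoid(X_adv @ w_student)
    w_student -= 0.1 * X_adv.T @ (p_adv - soft_targets) / len(X)
```

The design point the sketch mirrors is that robustification never touches the pre-training objective: the teacher is trained once and frozen, and all adversarial updates apply only to the student against the teacher's clean-input targets.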
Original language: English
Article number: 5
Journal: International Journal of Computer Vision
Volume: 134
Issue number: 1
Early online date: 12 Dec 2025
DOIs
Publication status: Published - Jan 2026

Bibliographical note

Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.

Funding

This work was partially supported by National Natural Science Fund of China under Grants 92570110 and 62271090, Chongqing Natural Science Fund under Grant CSTB2024NSCQ-JQX0038, National Key R&D Program of China under Grant 2021YFB3100800 and National Youth Talent Project.

Keywords

  • Unsupervised Domain Adaptation
  • Adversarial Robustness
  • Entanglement Challenge
  • Disentangled Distillation
