Ensemble of Intermediate-Level Attacks to Boost Adversarial Transferability

Yunce ZHAO*, Wei HUANG, Wei LIU, Xin YAO

*Corresponding author for this work

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › Research › peer-review

Abstract

Adversarial examples are effective at deceiving deep neural network models for which they are specifically crafted and also demonstrate transferability across different models. This characteristic facilitates attacks in black-box scenarios, where the internal workings of the victim models are inaccessible. The Intermediate-Level Attack (ILA) enhances transferability by guiding the direction of the generation of perturbations using intermediate-level outputs. However, ILA struggles with transferring examples between models with different architectures, such as from Convolutional Neural Networks (CNNs) to Vision Transformers (ViTs). To address this, we introduce the Ensemble of Intermediate-Level Attacks (EILA). This approach leverages intermediate outputs from multiple source models to more effectively guide perturbation directions in the generation of adversarial examples. By adopting a shared guidance adversarial example strategy, EILA reduces conflicts in perturbation directions across different models, thereby enhancing overall transfer performance. Experimental results reveal significant enhancements in the transferability of adversarial examples across a range of deep learning models, demonstrating the effectiveness of EILA.
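As a rough illustration of the idea described in the abstract, the sketch below mimics an ensemble intermediate-level projection objective on toy linear "feature extractors". All names (`intermediate`, `eila_projection_loss`, the random linear maps standing in for source-model intermediate layers, and the single shared guide example) are hypothetical stand-ins, not the authors' implementation: each source model's intermediate-level shift caused by the perturbation is projected onto a guide direction derived from one shared guidance adversarial example, and the projections are averaged across models.

```python
import numpy as np

# Assumption: random linear maps stand in for the intermediate layers
# of three different source models (e.g. CNNs / ViTs in the paper).
rng = np.random.default_rng(0)
feature_maps = [rng.standard_normal((8, 4)) for _ in range(3)]

def intermediate(W, x):
    """Toy intermediate-level output: a linear projection of the input."""
    return W @ x

def eila_projection_loss(x_clean, x_adv, x_guide, models):
    """Average intermediate-level projection over an ensemble of models.

    For each model, the shift f(x_adv) - f(x_clean) induced by the current
    perturbation is projected (dot product) onto the shared guide direction
    f(x_guide) - f(x_clean); a larger average projection means the
    perturbation is better aligned with the shared guidance across models.
    """
    total = 0.0
    for W in models:
        base = intermediate(W, x_clean)
        guide_dir = intermediate(W, x_guide) - base  # shared guidance direction
        shift = intermediate(W, x_adv) - base        # effect of current perturbation
        total += float(shift @ guide_dir)            # projection onto the guide
    return total / len(models)

x = rng.standard_normal(4)
guide = x + 0.3 * rng.standard_normal(4)  # shared guidance adversarial example
adv = x + 0.3 * rng.standard_normal(4)    # candidate adversarial example
loss = eila_projection_loss(x, adv, guide, feature_maps)
```

In an actual attack one would maximize this objective over the perturbation (subject to an L∞ bound); using one shared guide example for all source models is what keeps the per-model guide directions from conflicting.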
Original language: English
Title of host publication: Neural Information Processing: 31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, Proceedings, Part X
Editors: Mufti MAHMUD, Maryam DOBORJEH, Kevin WONG, Andrew Chi Sing LEUNG, Zohreh DOBORJEH, M. TANVEER
Publisher: Springer
Chapter: 27
Pages: 393-407
Number of pages: 15
ISBN (Electronic): 9789819669752
ISBN (Print): 9789819669745
Publication status: Published - 24 Jun 2025

Publication series

Name: Communications in Computer and Information Science
Volume: 2291 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Bibliographical note

Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62250710682), the Guangdong Provincial Key Laboratory (Grant No. 2020B121201001), and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386).

Keywords

  • AI Safety
  • Adversarial Transferability
  • Deep Neural Networks
  • Ensemble
  • Intermediate-Level Attack
