Abstract
Deep neural networks are fragile: they are easily fooled by inputs with deliberate perturbations, which is a key concern in image security. Given a trained neural network, we are naturally curious about whether it has learned the concept we intended it to learn, and whether it has vulnerabilities that could be exploited by attackers. A tool that non-experts can use to test a trained neural network and probe for such vulnerabilities would therefore be useful. In this paper, we introduce AdverseGen, a tool for generating adversarial examples against a trained deep neural network using the black-box approach, i.e., without using any information about the network's architecture or its gradients. Our tool provides customized adversarial attacks for both non-professional users and developers, and can be invoked through a graphical user interface or in command-line mode to launch adversarial attacks. Moreover, this tool supports different attack goals (targeted, non-targeted) and different distance metrics.
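The paper itself is not reproduced here, but as a rough illustration of the kind of query-based, black-box attack the abstract describes, the following minimal sketch runs a random-search attack under an L∞ perturbation budget, using only the model's outputs and never its gradients. All names (`predict_logits`, `random_search_attack`, the toy linear "model") are hypothetical stand-ins for this sketch, not part of AdverseGen.

```python
import numpy as np

# Hypothetical stand-in classifier: a fixed linear model over flattened
# 28x28 inputs with 10 classes. AdverseGen targets real trained networks;
# this toy oracle only makes the sketch runnable end to end.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 28 * 28))

def predict_logits(x):
    """Black-box oracle: returns class scores; no gradients are exposed."""
    return W @ x.ravel()

def random_search_attack(x, true_label, eps=0.1, steps=500):
    """Non-targeted random-search attack under an L-infinity budget.

    Repeatedly proposes perturbations drawn from [-eps, eps] and keeps
    the one that most lowers the true class's margin over the runner-up,
    querying the model's outputs only (a black-box attack).
    """
    def margin(z):
        logits = predict_logits(z)
        others = np.delete(logits, true_label)
        return logits[true_label] - others.max()

    best, best_margin = x.copy(), margin(x)
    for _ in range(steps):
        delta = rng.uniform(-eps, eps, size=x.shape)
        candidate = np.clip(x + delta, 0.0, 1.0)  # stay in valid pixel range
        m = margin(candidate)
        if m < best_margin:
            best, best_margin = candidate, m
        if best_margin < 0:  # margin below zero means misclassification
            break
    return best, best_margin < 0

x = rng.uniform(0.0, 1.0, size=(28, 28))
label = int(np.argmax(predict_logits(x)))
adv, success = random_search_attack(x, label)
print("attack succeeded:", success)
```

A targeted variant would instead maximize the target class's margin, and other distance metrics (e.g., L2) would change how candidate perturbations are drawn and clipped; the abstract indicates AdverseGen exposes such choices as configuration options.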
Original language | English |
---|---|
Title of host publication | Artificial Intelligence XXXVIII: 41st SGAI International Conference on Artificial Intelligence, AI 2021, Cambridge, UK, December 14–16, 2021, Proceedings |
Editors | Max Bramer, Richard Ellis |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 313-326 |
Number of pages | 14 |
ISBN (Electronic) | 9783030911003 |
ISBN (Print) | 9783030910990 |
DOIs | |
Publication status | Published - 2021 |
Externally published | Yes |
Event | 41st SGAI International Conference on Artificial Intelligence - Cambridge, United Kingdom. Duration: 14 Dec 2021 → 16 Dec 2021 |
Publication series
Name | Lecture Notes in Artificial Intelligence |
---|---|
Publisher | Springer |
Volume | 13101 |
ISSN (Print) | 2945-9133 |
ISSN (Electronic) | 2945-9141 |
Name | Lecture Notes in Computer Science |
---|---|
Publisher | Springer |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 41st SGAI International Conference on Artificial Intelligence |
---|---|
Abbreviated title | SGAI-AI 2021 |
Country/Territory | United Kingdom |
City | Cambridge |
Period | 14/12/21 → 16/12/21 |
Bibliographical note
Publisher Copyright: © 2021, Springer Nature Switzerland AG.
Funding
This work was supported by the Research Institute of Trustworthy Autonomous Systems, the Guangdong Provincial Key Laboratory (Grant No. 2020B121201001), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386) and the Shenzhen Science and Technology Program (Grant No. KQTD2016112514355531).
Keywords
- Adversarial examples
- Black-box attack
- Deep neural networks