Abstract
DeepFake, an artificial intelligence technology that automatically synthesizes facial forgeries, has recently attracted worldwide attention. While DeepFakes can be entertaining, they can also be used to spread falsified information or be weaponized for cognitive warfare. Forensic researchers have dedicated themselves to designing defensive algorithms to combat such disinformation. Meanwhile, attack technologies have been developed to make DeepFake products more aggressive: by launching anti-forensics and adversarial attacks, DeepFakes can be disguised as authentic media to evade forensic detectors. However, such manipulations often sacrifice image quality to achieve satisfactory undetectability. To address this issue, we propose a method that generates a novel adversarial sharpening mask for launching black-box anti-forensics attacks. Unlike many existing methods, our approach injects perturbations that allow DeepFakes to achieve strong anti-forensics performance while maintaining a visually pleasing sharpening effect. Experimental evaluations demonstrate that our method successfully disrupts state-of-the-art DeepFake detectors. Moreover, compared with images processed by existing DeepFake anti-forensics methods, the anti-forensics DeepFakes rendered by our method are of significantly higher quality. Our code is available at https://github.com/fb-reps/HQ-AF_GAN.
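To make the core idea concrete, below is a minimal, hypothetical sketch of how a sharpening operation can be made adversarial: the perturbation is constrained to an unsharp-masking edit, and a per-pixel strength mask is optimized so that a detector scores the sharpened image as real. This is not the paper's actual HQ-AF_GAN pipeline; `SurrogateDetector`, `adversarial_sharpening`, and all hyper-parameters are illustrative assumptions, and a genuinely black-box attack would rely on transfer from such a surrogate (or on gradient estimation) rather than direct backpropagation through the target detector.

```python
# Hypothetical sketch: constrain the adversarial perturbation to an
# unsharp-masking edit whose per-pixel strength ("sharpening mask") is
# optimized to suppress a detector's "fake" score. Assumed names and
# hyper-parameters throughout; not the paper's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_blur(img, ksize=5, sigma=1.0):
    """Depthwise Gaussian blur used to build the unsharp-mask residual."""
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    g1d = g1d / g1d.sum()
    kernel = torch.outer(g1d, g1d).expand(img.shape[1], 1, ksize, ksize)
    return F.conv2d(img, kernel.to(img), padding=ksize // 2, groups=img.shape[1])

class SurrogateDetector(nn.Module):
    """Placeholder detector; in a transfer-based black-box attack this would
    be a locally trained surrogate approximating the target detector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, x):  # logit > 0 means "classified as fake"
        return self.net(x)

def adversarial_sharpening(img, detector, steps=100, lr=0.05, max_strength=2.0):
    """Optimize a per-pixel sharpening-strength mask so the sharpened image
    evades the detector while remaining a plausible unsharp-mask edit."""
    mask_logits = torch.zeros(img.shape[0], 1, *img.shape[2:], requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=lr)
    residual = img - gaussian_blur(img)            # high-frequency detail
    for _ in range(steps):
        mask = max_strength * torch.sigmoid(mask_logits)
        adv = (img + mask * residual).clamp(0, 1)  # unsharp-mask edit
        loss = detector(adv).mean()                # push the "fake" logit down
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        mask = max_strength * torch.sigmoid(mask_logits)
        return (img + mask * residual).clamp(0, 1)

if __name__ == "__main__":
    fake = torch.rand(1, 3, 256, 256)              # stand-in DeepFake image
    out = adversarial_sharpening(fake, SurrogateDetector())
    print(out.shape)
```

Because the perturbation is expressed as a sharpening strength rather than free-form noise, the visible change is an edge enhancement rather than noise-like artifacts, which illustrates how an attack of this kind can preserve image quality while evading detection.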
Original language | English |
---|---|
Journal | ACM Transactions on Multimedia Computing, Communications and Applications |
DOIs | |
Publication status | E-pub ahead of print - 15 Apr 2025 |
Bibliographical note
This research was conducted during Bing Fan’s master’s studies at Nanchang University.

Funding
This work was supported in part by the National Natural Science Foundation of China under Grants 62262041, 62172402, and 62472128, as well as the Jiangxi Provincial Natural Science Foundation under Grant 20232BAB202011.