Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks

Authors:

Highlights:

• Current momentum-based methods usually suffer from the transferability saturation dilemma, which may degrade their capacity to break black-box models.

• To mitigate the transferability saturation effect, we propose cyclical optimization, which divides the generation process into multiple phases and treats the velocity from the previous phase as helpful knowledge to guide a new attack. We also design a cyclical augmentation algorithm that further improves fooling rates by stabilizing update directions and learning diverse boundary information without additional retraining costs.

• Cyclical optimization and cyclical augmentation enhance black-box adversarial examples from the perspectives of optimization and augmentation, respectively. Applying them simultaneously achieves superior performance; we refer to the combination as the cyclical adversarial attack.

• Our method generalizes well, integrating readily with existing methods, and establishes state-of-the-art performance for transferable attacks against both normally trained models and defenses, under both standalone and combined settings.
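The highlights describe cyclical optimization at a high level only; a minimal sketch of the idea, built on a standard momentum iterative attack (MI-FGSM-style L1-normalized gradients and a sign update), might look as follows. The phase/iteration split, the choice to carry the velocity across phases, and the `grad_fn` interface are all assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def cyclical_momentum_attack(x, grad_fn, eps=0.3, phases=3,
                             iters_per_phase=10, mu=1.0):
    """Hedged sketch of a phase-wise momentum attack.

    x        : clean input (numpy array)
    grad_fn  : callable returning the loss gradient w.r.t. the input
               (hypothetical interface; in practice this is a backward
               pass through the surrogate model)
    eps      : L-infinity perturbation budget
    """
    alpha = eps / (phases * iters_per_phase)  # per-step size
    x_adv = x.copy()
    v = np.zeros_like(x)                      # accumulated velocity
    for _ in range(phases):
        # Assumption: the velocity from the previous phase is kept as
        # "helpful knowledge" to guide the new phase, per the highlight.
        for _ in range(iters_per_phase):
            g = grad_fn(x_adv)
            g = g / (np.abs(g).sum() + 1e-12)  # L1 normalization (MI-FGSM)
            v = mu * v + g                     # momentum accumulation
            x_adv = x_adv + alpha * np.sign(v) # gradient-ascent sign step
            x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball
    return x_adv
```

With a toy quadratic loss whose gradient is analytic, the routine pushes the input away from the target while respecting the `eps` budget; the real attack would substitute the surrogate network's cross-entropy gradient.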

Keywords: Adversarial example, Transferability, Black-box attack, Defenses

Article history: Received 27 January 2022, Revised 31 May 2022, Accepted 3 June 2022, Available online 5 June 2022, Version of Record 21 June 2022.

DOI: https://doi.org/10.1016/j.patcog.2022.108831