Analysis of dominant classes in universal adversarial perturbations
Abstract
The reasons why Deep Neural Networks are susceptible to being fooled by adversarial examples remain an open question. Indeed, many different strategies can be employed to generate adversarial attacks efficiently, some of them relying on different theoretical justifications. Among these strategies, universal (input-agnostic) perturbations are of particular interest, due to their capability to fool a network independently of the input to which the perturbation is applied. In this work, we investigate an intriguing phenomenon of universal perturbations, which has been reported previously in the literature, yet without a proven justification: universal perturbations change the predicted classes for most inputs into one particular (dominant) class, even though this behavior is not specified during the creation of the perturbation. To explain the cause of this phenomenon, we propose a number of hypotheses and experimentally test them using a speech command classification problem in the audio domain as a testbed. Our analyses reveal interesting properties of universal perturbations, suggest new methods to generate such attacks and provide an explanation of dominant classes from both a geometric and a data-feature perspective.
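The dominant-class effect described in the abstract can be quantified directly: add a single perturbation to every input, collect the perturbed predictions, and check how concentrated they are on one label. The sketch below is illustrative only and is not the authors' code; the classifier, the data, and the perturbation v are hypothetical stand-ins (a random linear model on synthetic features), but the two statistics it computes, the fooling rate and the histogram of post-attack classes, mirror the kind of analysis the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: a linear "classifier" over 64-d features with 10 classes.
num_classes, dim = 10, 64
W = rng.normal(size=(num_classes, dim))

def predict(X):
    """Predicted class for each row of X (argmax of a linear score)."""
    return (X @ W.T).argmax(axis=1)

# Synthetic "clean" inputs and a single universal perturbation v.
# In the paper, v would be produced by a universal-attack algorithm.
X = rng.normal(size=(1000, dim))
v = 0.5 * rng.normal(size=dim)        # one perturbation shared by all inputs

clean_pred = predict(X)
adv_pred = predict(X + v)             # input-agnostic: the same v is added everywhere

# Fooling rate: fraction of inputs whose prediction changes under v.
fooling_rate = np.mean(adv_pred != clean_pred)

# Dominant class: the label that absorbs most of the perturbed predictions.
counts = np.bincount(adv_pred, minlength=num_classes)
dominant = counts.argmax()

print(f"fooling rate: {fooling_rate:.2%}")
print(f"dominant class {dominant} receives {counts[dominant] / len(X):.2%} of predictions")
```

With a real model and a perturbation obtained from an actual universal attack, these same two measurements reveal whether a dominant class emerges even when the attack objective never asks for one.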
Keywords: Adversarial examples, Universal adversarial perturbations, Deep Neural Networks, Robust speech classification
Article history: Received 10 January 2021, Revised 11 October 2021, Accepted 9 November 2021, Available online 20 November 2021, Version of Record 10 December 2021.
Paper URL: https://doi.org/10.1016/j.knosys.2021.107719