Understanding adversarial attacks on deep learning based medical image analysis systems

Authors:

Highlights:

• Medical image DNNs are easier to attack than DNNs for natural (non-medical) images.

• Complex biological textures of medical images may lead to more vulnerable regions.

• State-of-the-art deep networks can be overparameterized for medical imaging tasks.

• Adversarial attacks on medical images are also easy to detect.

• High detectability may be caused by perturbations outside the pathological regions.
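The highlights above concern gradient-based adversarial attacks on image classifiers; the abstract page does not reproduce the attack details, but a minimal, self-contained sketch of the standard FGSM attack (on a toy one-feature logistic model, not the deep medical-image networks the paper studies) illustrates the idea of perturbing an input along the sign of the loss gradient:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy of a one-feature logistic model with weight w.
    p = sigmoid(w * x)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm(w, x, y, eps):
    # FGSM: step the input by eps along the sign of dL/dx.
    # For this model, dL/dx = (sigmoid(w*x) - y) * w.
    grad_x = (sigmoid(w * x) - y) * w
    return x + eps * (1.0 if grad_x > 0 else -1.0)

# A correctly classified positive example (toy values, chosen for illustration).
w, x, y = 2.0, 1.5, 1
x_adv = fgsm(w, x, y, eps=1.0)
print(loss(w, x_adv, y) > loss(w, x, y))  # the perturbed input raises the loss
```

On deep networks the same one-step construction applies, with the gradient taken through the whole network; the paper's observation is that on medical images such perturbations both succeed more easily and are more detectable.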

Keywords: Adversarial attack, Adversarial example detection, Medical image analysis, Deep learning

Article history: Received 18 July 2019; Revised 3 March 2020; Accepted 12 March 2020; Available online 1 May 2020; Version of Record 1 November 2020.

DOI: https://doi.org/10.1016/j.patcog.2020.107332