Poisonous Label Attack: Black-Box Data Poisoning Attack with Enhanced Conditional DCGAN
Authors: Haiqing Liu, Daoxing Li, Yuancheng Li
Abstract
Data poisoning is a recognized security threat to machine learning models. This paper explores poisoning attacks against convolutional neural networks under black-box conditions. The proposed attack is "black-box" in that the attacker has no knowledge of the targeted model's structure or parameters, and it uses "poisonous-label" images, fake images with crafted wrong labels, as poisons. We present a method for generating "poisonous-label" images that uses an Enhanced Conditional DCGAN (EC-DCGAN) to synthesize fake images and uses asymmetric poisoning vectors to mislabel them. We evaluate our method by generating "poisonous-label" images from the MNIST and FashionMNIST datasets and using them to manipulate image classifiers. Our experiments demonstrate that, similarly to white-box data poisoning attacks, the poisonous-label attack can dramatically increase the classification error.
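The abstract describes mislabeling generated images with "asymmetric poisoning vectors," i.e. assigning each class a fixed wrong target label rather than a uniformly random one. A minimal sketch of that labeling step is shown below; the EC-DCGAN generator itself is not specified in the abstract, so random noise images stand in for its output, and the cyclic class shift is an illustrative assumption, not the paper's exact vector.

```python
import numpy as np

NUM_CLASSES = 10  # MNIST / FashionMNIST both have 10 classes

def asymmetric_poison_labels(true_labels, shift=1, num_classes=NUM_CLASSES):
    """Map each intended class c to one fixed wrong class (c + shift) % num_classes.

    This is "asymmetric" label noise: every class has a single deterministic
    wrong target, unlike symmetric noise that flips to a uniformly random class.
    """
    return (true_labels + shift) % num_classes

# Placeholder for EC-DCGAN output: 28x28 fake images (MNIST-like shape).
rng = np.random.default_rng(0)
fake_images = rng.random((5, 28, 28))
intended_classes = np.array([0, 1, 2, 8, 9])

# Crafted wrong labels attached to the fake images before injection
# into the victim's training set.
poison_labels = asymmetric_poison_labels(intended_classes)
print(poison_labels.tolist())  # [1, 2, 3, 9, 0]
```

The poisoned pairs `(fake_images, poison_labels)` would then be mixed into the target classifier's training data, which is the injection step the paper evaluates.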
Keywords: Data poisoning attack, Generative adversarial network (GAN), Deep convolutional neural networks, Label noise
Paper URL: https://doi.org/10.1007/s11063-021-10584-w