Reading Notes: COUNTERING ADVERSARIAL IMAGES USING INPUT TRANSFORMATIONS

Author: 不想掉队的小布丁 | Published 2019-08-29 16:52

Paper:
http://arxiv.org/pdf/1711.00117
Source code:
https://github.com/facebookresearch/adversarial_image_defenses

Summary: The paper studies several model-agnostic defenses: image cropping and rescaling, bit-depth reduction, JPEG compression, total variation minimization, and image quilting. Total variation minimization and image quilting are the strongest defenses, because they are non-differentiable and inherently random, which makes it difficult for an adversary to circumvent them.
Adversarial attacks:

  • Success rate (of the adversary): the fraction of predictions that the attack changes,
    $\frac{1}{N}\sum_{n=1}^{N} \mathbb{1}\left[ h(x_n) \neq h(x'_n) \right]$

  • Normalized L2-dissimilarity (measures the size of the perturbation; a small NumPy sketch follows this list):
    $\frac{1}{N}\sum_{n=1}^{N} \frac{\lVert x_n - x'_n \rVert_2}{\lVert x_n \rVert_2}$
  • Black-box attack setting: the adversary does not have direct access to the model.

  • Gray-box attack setting: the adversary has access to the model architecture and the model parameters, but is unaware of the defense strategy that is being used.

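As a quick illustration, here is a minimal NumPy sketch of the normalized L2-dissimilarity; the function name and array shapes are my own choices, not taken from the paper's code:

```python
import numpy as np

def normalized_l2_dissimilarity(clean, adversarial):
    """Mean of ||x_n - x'_n||_2 / ||x_n||_2 over a batch of N images.

    clean, adversarial: float arrays of shape (N, ...), e.g. (N, H, W, C).
    """
    n = clean.shape[0]
    x = clean.reshape(n, -1)
    x_adv = adversarial.reshape(n, -1)
    return np.mean(np.linalg.norm(x - x_adv, axis=1) / np.linalg.norm(x, axis=1))
```
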
Common attacks: FGSM, I-FGSM, C&W (Carlini-Wagner), and DeepFool.
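
For reference, a minimal PyTorch sketch of FGSM, the simplest of these attacks; the model, labels, and epsilon below are illustrative placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.02):
    """FGSM: perturb x by epsilon in the direction of the loss gradient's sign."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```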

Defenses:

  • Image cropping and rescaling: crop and rescale images at training time as part of the data augmentation; at test time, average predictions over random image crops (see the sketch below).

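A minimal sketch of the test-time crop ensemble, assuming a PyTorch classifier that takes 224x224 inputs; for simplicity it only crops (the paper also rescales), and the helper name and defaults are mine:

```python
import torch

def predict_over_random_crops(model, image, crop_size=224, num_crops=30):
    """Average softmax predictions over random crops of one image.

    image: tensor of shape (C, H, W) with H, W >= crop_size,
    already normalized for the model.
    """
    _, h, w = image.shape
    crops = []
    for _ in range(num_crops):
        top = torch.randint(0, h - crop_size + 1, (1,)).item()
        left = torch.randint(0, w - crop_size + 1, (1,)).item()
        crops.append(image[:, top:top + crop_size, left:left + crop_size])
    with torch.no_grad():
        probs = torch.softmax(model(torch.stack(crops)), dim=1)
    return probs.mean(dim=0)  # ensembled class probabilities
```
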
  • Bit-depth reduction: the paper reduces images to 3 bits in its experiments.

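A sketch of bit-depth reduction for 8-bit images; the exact quantization mapping here is an assumption:

```python
import numpy as np

def reduce_bit_depth(image_uint8, bits=3):
    """Quantize an 8-bit image to 2**bits gray levels per channel."""
    levels = 2 ** bits
    quantized = np.floor(image_uint8.astype(np.float32) / 256.0 * levels)
    # Stretch the quantized levels back onto [0, 255].
    return (quantized * (255.0 / (levels - 1))).astype(np.uint8)
```
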
  • JPEG compression: compression is performed at quality level 75 (out of 100).

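The JPEG defense is an encode-decode round trip; a sketch using Pillow (my choice of library) at quality 75:

```python
import io
from PIL import Image

def jpeg_defense(image, quality=75):
    """Round-trip a PIL image through in-memory JPEG compression."""
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).copy()  # copy() forces decoding
```
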
  • Total variation minimization: This approach randomly selects a small set of pixels, and reconstructs the “simplest” image that is consistent with the selected pixels. The reconstructed image does not contain the adversarial perturbations, because those perturbations tend to be small and localized.

TV minimization also changes the image structure in non-homogeneous regions of the image, but since these changes were not adversarially designed, their negative effect is expected to be limited.
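
A rough PyTorch sketch of the masked TV-minimization idea: keep a random subset of pixels and solve for the simplest image consistent with them. The paper uses a dedicated solver; the plain gradient descent and all hyperparameters below are my simplifications for illustration.

```python
import torch

def tv_minimize(x, keep_prob=0.5, tv_weight=0.03, steps=200, lr=0.05):
    """Reconstruct a 'simple' image from a random subset of pixels.

    x: tensor of shape (C, H, W) in [0, 1]. Minimizes
    ||M * (z - x)||_2^2 + tv_weight * TV(z) over z, where M is a
    random Bernoulli pixel mask shared across channels.
    """
    mask = (torch.rand(1, *x.shape[1:]) < keep_prob).float()
    z = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        fidelity = ((mask * (z - x)) ** 2).sum()
        # Anisotropic total variation: absolute differences between neighbors.
        tv = (z[:, 1:, :] - z[:, :-1, :]).abs().sum() \
           + (z[:, :, 1:] - z[:, :, :-1]).abs().sum()
        (fidelity + tv_weight * tv).backward()
        opt.step()
    return z.detach().clamp(0.0, 1.0)
```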

  • Image quilting: construct a patch database that contains only patches from “clean” images (without adversarial perturbations); the patches used to create the synthesized image are selected by finding the K nearest neighbors (in pixel space) of the corresponding patch from the adversarial image in the patch database, and picking one of these neighbors uniformly at random. The motivation for this defense is that the resulting image consists only of pixels that were not modified by the adversary; the database of real patches is unlikely to contain the structures that appear in adversarial images.

It is interesting to note that the absolute differences between the quilted original and the quilted adversarial image appear to be smaller in non-homogeneous regions of the image. This suggests that TV minimization and image quilting lead to inherently different defenses.
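
A simplified sketch of quilting: non-overlapping patches for clarity (real quilting stitches overlapping patches), with each patch replaced by one of its K nearest database patches chosen uniformly at random. The patch size, K, and sampling scheme are illustrative.

```python
import numpy as np

def build_patch_database(clean_images, patch_size=5, per_image=200, seed=0):
    """Sample flattened square patches from clean images."""
    rng = np.random.default_rng(seed)
    patches = []
    for img in clean_images:  # each img: float array of shape (H, W, C)
        h, w = img.shape[:2]
        for _ in range(per_image):
            top = rng.integers(0, h - patch_size + 1)
            left = rng.integers(0, w - patch_size + 1)
            patches.append(img[top:top + patch_size, left:left + patch_size].ravel())
    return np.stack(patches)

def quilt(image, database, patch_size=5, k=10, seed=0):
    """Rebuild `image` tile by tile from random near-neighbor clean patches."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    h, w, c = image.shape
    for top in range(0, h - patch_size + 1, patch_size):
        for left in range(0, w - patch_size + 1, patch_size):
            patch = image[top:top + patch_size, left:left + patch_size].ravel()
            dists = np.linalg.norm(database - patch, axis=1)
            pick = rng.choice(np.argsort(dists)[:k])  # random one of K nearest
            out[top:top + patch_size, left:left + patch_size] = \
                database[pick].reshape(patch_size, patch_size, c)
    return out
```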

Experiments
The experiments compare the five defenses, and combinations of them, under black-box and gray-box attacks.
Gray-box attack: IMAGE TRANSFORMATIONS AT TEST TIME

In particular, ensembling 30 predictions over different, random image crops is very efficient: these predictions are correct for 40−60% of the images (note that 76% is the highest accuracy that one can expect to achieve). This result suggests that adversarial examples are susceptible to changes in the location and scale of the adversarial perturbations.
While not as effective, image transformations based on total variation minimization and image quilting also successfully defend against adversarial examples from all four attacks: applying these transformations makes it possible to classify 30−40% of the images correctly. However, the quilting transformation does severely impact the model’s accuracy on non-adversarial images.

  • The results with gray-box attacks suggest that randomness is particularly crucial in developing strong defenses.

Black-box attack: IMAGE TRANSFORMATIONS AT TRAINING AND TEST TIME

Training convolutional networks on images that are transformed in the same way as at test time, indeed, dramatically improves the effectiveness of all transformation defenses. In the experiments, the image-quilting defense is particularly effective against strong attacks: it successfully defends against 80−90% of all four attacks, even when the normalized L2-dissimilarity of the attack approaches 0.08.

Black-box attack: ENSEMBLING AND MODEL TRANSFER

The results show that gains of 1−2% in classification accuracy can be achieved by ensembling different defenses, whereas transferring attacks to different convolutional network architectures can lead to an improvement of 2−3%. Inception-v4 performs best in the experiments, but this may be partly due to that network having a higher accuracy even in non-adversarial settings. The best black-box defense achieves an accuracy of about 71% against all four attacks: the attacks deteriorate the accuracy of the best classifier (which combines cropping, TVM, image quilting, and model transfer) by at most 6%.

Gray-box attack: IMAGE TRANSFORMATIONS AT TRAINING AND TEST TIME

The results show that bit-depth reduction and JPEG compression are weak defenses in such a gray-box setting. Whilst their relative ordering varies between attack methods, image cropping and rescaling, total variation minimization, and image quilting are fairly robust defenses in this gray-box setting. Specifically, networks using these defenses classify up to 50% of adversarial images correctly.
