Adversarial Examples
Concept and characteristics
1 An adversarial example is still a valid sample.
2 It is produced by applying a small transformation to an ordinary sample, so it differs only slightly from the original, yet it misleads the neural network.
It can also be a sample that the model finds hard to separate during training, and which the trained network therefore misclassifies.
Papers
Lei Xie
Adversarial Examples for Improving End-to-End Attention-Based Small-Footprint Keyword Spotting
Training Augmentation with Adversarial Examples for Robust Speech Recognition
The paper mainly uses FGSM (fast gradient sign method) to generate adversarial examples, augmenting the training data and improving the model's robustness.
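A minimal NumPy sketch of FGSM on a toy logistic-regression model, where the input gradient is available in closed form. The weights, input, and epsilon below are made up for illustration; they are not from the paper, which applies FGSM to a neural acoustic model via backpropagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM: x_adv = x + eps * sign(grad_x L(x, y)).
    For logistic regression with cross-entropy loss,
    the input gradient is (p - y) * w."""
    p = sigmoid(np.dot(w, x) + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# toy model (hypothetical values, chosen for illustration)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])   # clean sample, true label y = 1
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.3)
p_clean = sigmoid(np.dot(w, x) + b)      # confidently class 1
p_adv = sigmoid(np.dot(w, x_adv) + b)    # pushed below 0.5: misclassified
```

For data augmentation as in the paper, each clean training pair (x, y) is kept with its label and the perturbed x_adv is added as an extra training example with the same label y.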
https://github.com/sarathknv/adversarial-examples-pytorch
Unsupervised Domain Adaptation by Backpropagation
https://github.com/fungtion/DANN_py3
Domain-Adversarial Training of Neural Networks
However, the data added this way does not address the environment-mismatch problem.
Other implementations:
https://github.com/baidu/AdvBox
Baidu's implementations of adversarial-example generation methods?
FGSM:
https://github.com/1Konny/FGSM
(FGSM : explaining and harnessing adversarial examples, Goodfellow et al.)
(I-FGSM : adversarial examples in the physical world, Kurakin et al.)
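The I-FGSM variant cited above can be sketched in the same toy logistic-regression setting: instead of one step of size eps, it takes several small steps of size alpha and clips the result back into the eps-ball around the original input. The model and step sizes here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def i_fgsm(x, y, w, b, eps, alpha, steps):
    """I-FGSM (Kurakin et al.): iterate small signed-gradient steps,
    projecting back into the L-infinity eps-ball around x each time."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(np.dot(w, x_adv) + b)
        grad = (p - y) * w                        # input gradient of cross-entropy
        x_adv = x_adv + alpha * np.sign(grad)     # small FGSM step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the eps-ball
    return x_adv

# same toy model as a single-step FGSM attack would use (hypothetical values)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])   # clean sample, true label y = 1
y = 1.0

x_adv = i_fgsm(x, y, w, b, eps=0.3, alpha=0.1, steps=3)
```

With alpha * steps equal to eps, I-FGSM explores the eps-ball more carefully than the single large FGSM step, which is why Kurakin et al. found it produces stronger attacks in the physical-world setting.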