[Paper Notes] On the Geometry of Adversarial Examples


Author: wangxiaoguang | Published 2020-03-16 15:25

    Paper title: On the Geometry of Adversarial Examples
    Paper link: On the Geometry of Adversarial Examples

    Abstract

    Adversarial examples are a pervasive phenomenon of machine learning models: seemingly imperceptible perturbations of the input cause an otherwise accurate model to misclassify. Using manifold reconstruction techniques, we propose a geometric framework for analyzing the high-dimensional geometry of adversarial examples. In particular, we highlight the importance of codimension: for low-dimensional data manifolds embedded in high-dimensional space, there are many directions off the manifold in which to construct adversarial examples. Adversarial examples are a natural consequence of learning a decision boundary that classifies the low-dimensional data manifold well but misclassifies points near the manifold. Using our geometric framework, we prove (1) a tradeoff between robustness under different norms, (2) that adversarial training in balls around the data is sample inefficient, and (3) sufficient sampling conditions under which nearest neighbor classifiers and ball-based adversarial training are robust.

    1. Introduction

    Large-scale deep learning has produced breakthroughs on important problems in computer vision, natural language processing, and robotics. Shortly afterwards, the intriguing phenomenon of adversarial examples was discovered, which led researchers to develop many adversarial attack methods and corresponding defenses. As deep learning moves into real-world production systems, we must understand the weaknesses of our models and design appropriate countermeasures. To this end, the authors propose a geometric framework for analyzing adversarial examples.
    The framework is built on the manifold hypothesis: high-dimensional data are generated by a low-dimensional manifold embedded in a high-dimensional ambient space. The authors view adversarial examples as points that happen to lie near the data manifold: a model can classify the low-dimensional manifold correctly while still misclassifying points close to it. High codimension (the difference between the dimension of the data manifold and the dimension of the embedding space) is the key reason adversarial examples are so pervasive.
    The contributions of the paper are as follows:

    1. A geometric framework for analyzing adversarial examples.
    2. An emphasis on the role codimension plays in adversarial vulnerability.
    3. Using this framework, the following results are proved:
      (1). The choice of norm used to constrain the attacker matters, because there is a tradeoff in robustness between different norms;
      (2). The common approach of training on adversarial examples generated near the training set is not sufficient to learn a robust decision boundary;
      (3). Nearest neighbor classifiers do not suffer from this shortcoming, because the geometry of their decision boundaries keeps them far from the data; they therefore represent a potentially robust classification algorithm.

    2. Related Work

    This section surveys earlier work relating adversarial examples to high-dimensional geometry and briefly introduces manifold reconstruction algorithms.

    3. The Geometry of Data

    The data are modeled as a low-dimensional manifold embedded in the high-dimensional space \mathbb{R}^{d}: for a k-dimensional manifold \mathcal{M} \subset \mathbb{R}^{d}, the codimension is d-k.
    This paper is concerned with classification, so the data for each of C classes are sampled from class manifolds \mathcal{M}_{1}, \ldots, \mathcal{M}_{C}, and the full data manifold is \mathcal{M}=\cup_{1 \leq j \leq C} \mathcal{M}_{j}. A finite sample of n points X \subset \mathcal{M} is written X=\left\{X_{1}, X_{2}, \ldots, X_{n}\right\}. Each sample point X_{i} has a class label y_{i} \in\{1,2, \ldots, C\}, meaning that X_{i} is sampled from \mathcal{M}_{y_{i}}.
    As shown in Figure 1, the decision axis \Lambda_{p} is the set of points that have two or more closest points, in the norm \|\cdot\|_{p}, on distinct class manifolds.

    For any set T \subset \mathbb{R}^{d}, the reach \operatorname{rch}_{p}(T ; \mathcal{M}) of \mathcal{M} is defined as \inf _{x \in \mathcal{M}, y \in T}\|x-y\|_{p}.
    The \epsilon-tubular neighborhood of \mathcal{M} is defined as \mathcal{M}^{\epsilon, p}=\left\{x \in \mathbb{R}^{d}: \inf _{y \in \mathcal{M}}\|x-y\|_{p} \leq \epsilon\right\}. Although \mathcal{M} is k-dimensional, \mathcal{M}^{\epsilon, p} is d-dimensional. For a classifier f: \mathbb{R}^{d} \rightarrow[C], an \epsilon-adversarial example is a point x \in \mathcal{M}_{i}^{\epsilon, p} such that f(x) \neq i.

    4. A Provable Tradeoff in Robustness Between Norms

    The decision boundary \mathcal{D}_{f} of f is \epsilon-robust in the norm \|\cdot\|_{p} if \epsilon<\operatorname{rch}_{p} \mathcal{D}_{f}. In other words, starting from a point x \in \mathcal{M}, a perturbation \eta_{x} must have p-norm greater than \operatorname{rch}_{p} \mathcal{D}_{f} to cross the decision boundary. The most robust decision boundary under the norm \|\cdot\|_{p} is the decision axis \Lambda_{p}. Theorem 1 constructs a learning setting in which \Lambda_{2} and \Lambda_{\infty} differ.
    Theorem 1. Let S_{1}, S_{2} \subset \mathbb{R}^{d+1} be two concentric d-spheres with radii r_{1}<r_{2} respectively. Let S=S_{1} \cup S_{2} and let \Lambda_{2}, \Lambda_{\infty} be the \|\cdot\|_{2} and \|\cdot\|_{\infty} decision axes of S. Then \Lambda_{2} \neq \Lambda_{\infty}. Furthermore \operatorname{rch}_{2} \Lambda_{\infty} \in \mathcal{O}\left(\operatorname{rch}_{2} \Lambda_{2} / \sqrt{d}\right).

    By Theorem 1, the minimum \|\cdot\|_{2} distance from S_{1} to \Lambda_{\infty} is upper bounded by \operatorname{rch}_{2} \Lambda_{\infty} \in \mathcal{O}\left(\operatorname{rch}_{2} \Lambda_{2} / \sqrt{d}\right). If a classifier f is trained to learn \Lambda_{\infty}, then starting from S_{1} an attacker can construct a \|\cdot\|_{2} adversarial example with a perturbation as small as \mathcal{O}(1 / \sqrt{d}); f is therefore far less robust to \|\cdot\|_{2} perturbations. Figure 2 verifies this result experimentally.
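    The \sqrt{d} gap between the two norms is easy to see numerically: a perturbation of \|\cdot\|_{\infty}-norm \epsilon that spreads over every coordinate has \|\cdot\|_{2}-norm \epsilon \sqrt{d}. The following sketch is only an illustration of this basic fact behind the tradeoff, not the construction used in the proof of Theorem 1.

```python
import numpy as np

# A perturbation that spends its budget on every coordinate at once:
# its infinity-norm stays eps while its 2-norm grows like eps * sqrt(d).
# This sqrt(d) gap is the basic fact behind the tradeoff between norms.
eps = 0.1
for d in [2, 10, 100, 1000, 10000]:
    eta = eps * np.ones(d)            # ||eta||_inf = eps
    l2 = np.linalg.norm(eta)          # ||eta||_2  = eps * sqrt(d)
    print(f"d={d:6d}  ||eta||_inf={eps:.2f}  ||eta||_2={l2:8.2f}  ratio={l2 / eps:8.2f}")
```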

    5. Provably Robust Classifiers

    Adversarial training is a very natural approach to building robust models; it amounts to training on samples drawn from X^{\epsilon}. The next results show that this approach is far less sample-efficient than other classification algorithms.
    First define a learning algorithm \mathcal{L} whose training set is X \subset \mathcal{M}, sampled from \mathcal{M}. \mathcal{L} outputs a model f_{\mathcal{L}} such that for every x \in X with label y and every \hat{x} \in B\left(x, \operatorname{rch}_{p} \Lambda_{p}\right), we have f_{\mathcal{L}}(\hat{x})=f_{\mathcal{L}}(x)=y. Here B(x, r) denotes the ball of radius r centered at x in the corresponding norm. That is, \mathcal{L} learns a model that outputs the same label for any \|\cdot\|_{p}-perturbation of x up to \operatorname{rch}_{p} \Lambda_{p} as it outputs for x. \mathcal{L} is a theoretical model of adversarial training. Theorem 2 states that \mathcal{L} is sample-inefficient in high codimension.
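    To make the definition of f_{\mathcal{L}} concrete, here is a minimal sketch of this idealized learner, assuming the radius r = \operatorname{rch}_{p} \Lambda_{p} is known in advance (the class name and the -1 'no guarantee' return value are my own choices; this is a toy stand-in for adversarial training, not the paper's algorithm).

```python
import numpy as np

class BallLearner:
    """Idealized ball-based learner f_L: constant on the ball B(x_i, r) around
    each training point x_i, where r plays the role of rch_p(Lambda_p)."""

    def __init__(self, radius, p=2):
        self.radius = radius
        self.p = p

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)
        return self

    def predict_one(self, x_hat):
        diffs = self.X - np.asarray(x_hat, dtype=float)
        if np.isinf(self.p):
            dists = np.abs(diffs).max(axis=1)
        else:
            dists = np.linalg.norm(diffs, ord=self.p, axis=1)
        inside = dists <= self.radius
        if inside.any():
            # Inside some training ball: the definition of f_L forces the stored label.
            return self.y[np.argmin(np.where(inside, dists, np.inf))]
        return -1  # outside every ball: the definition gives no guarantee here
```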

    Theorem 2 follows from Theorems 3 and 4, which show that the nearest neighbor classifier f_{nn} is such an algorithm. Nearest neighbor classifiers are naturally robust in high codimension because, when X is dense, the Voronoi cells of X are elongated in the directions normal to \mathcal{M} (this point is not entirely clear to me).
    Before stating Theorem 3, a sampling condition on \mathcal{M} is needed. A \delta-cover of a manifold \mathcal{M} in the norm \|\cdot\|_{p} is a finite set of points X such that for every x \in \mathcal{M} there exists X_{i} such that \left\|x-X_{i}\right\|_{p} \leq \delta (in other words, the sample X is sufficiently dense: every point of the underlying manifold has a nearby sample point). Theorem 3 gives sufficient conditions under which f_{\mathcal{L}} and f_{nn} correctly classify \mathcal{M}^{\epsilon}, and the sampling density required by f_{nn} is substantially lower than that required by f_{\mathcal{L}}.
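    The two ingredients of Theorem 3, a nearest neighbor classifier and the \delta-cover condition, are easy to prototype. The sketch below is a small \|\cdot\|_{2} illustration (the helper names and the use of SciPy's cKDTree are my own choices, not the paper's): f_{nn} labels a query with the label of its nearest training point, and is_delta_cover checks the cover condition against a dense reference sampling of the manifold.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_classifier(X, y):
    """Return f_nn: each query point receives the label of its nearest training point."""
    tree = cKDTree(X)
    labels = np.asarray(y)
    def f_nn(queries):
        _, idx = tree.query(np.atleast_2d(queries))
        return labels[idx]
    return f_nn

def is_delta_cover(X, manifold_samples, delta):
    """Check the delta-cover condition (2-norm) against a dense reference sampling
    of the manifold: every manifold point must be within delta of some point of X."""
    dists, _ = cKDTree(X).query(manifold_samples)
    return bool(np.all(dists <= delta))
```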

    Next, the paper exhibits a setting in which a bound like the \delta in Theorem 3 is necessary. In this setting, a factor-of-2 difference in \delta between the sampling requirements of f_{nn} and f_{\mathcal{L}} leads to an exponential gap between the sizes of X_{nn} and X_{\mathcal{L}} needed to achieve the same robustness.

    Define \Pi_{1}=\left\{x \in \mathbb{R}^{d}: \ell \leq x_{1}, \ldots, x_{k} \leq \mu \text { and } x_{k+1}=\ldots=x_{d}=0\right\}; that is, \Pi_{1} is a subset of the x_{1}\text{-}\ldots\text{-}x_{k}-plane bounded between the coordinates [\ell, \mu]. Similarly define \Pi_{2}=\left\{x \in \mathbb{R}^{d}: \ell \leq x_{1}, \ldots, x_{k} \leq \mu \text { and } x_{k+1}=\ldots=x_{d-1}=0 \text { and } x_{d}=2\right\}. Note that \Pi_{2} lies in the subspace x_{d}=2; thus \operatorname{rch}_{2} \Lambda_{2}=1, where \Lambda_{2} is the decision axis of \Pi=\Pi_{1} \cup \Pi_{2}. In the \|\cdot\|_{2} norm we can show that the gap in Theorem 3 is necessary for \Pi=\Pi_{1} \cup \Pi_{2}. Furthermore the bounds we derive for \delta-covers of \Pi for both f_{\mathrm{nn}} and f_{\mathcal{L}} are tight. Combined with well-known properties of covers, we get that the ratio \left|X_{\mathcal{L}}\right| /\left|X_{\mathrm{nn}}\right| is exponential in k.

    We have shown that both \mathcal{L} and the nearest neighbor classifier learn robust decision boundaries when given a sufficiently dense sample of \mathcal{M}. However, there are settings in which nearest neighbors is exponentially more sample-efficient than \mathcal{L} at achieving the same amount of robustness. Section 8.1 verifies these theoretical results experimentally.

    6. X^{\epsilon} is a Poor Model of \mathcal{M}^{\epsilon}

    Madry et al. (2018) propose training a robust classifier with the help of an adversary that, at each iteration, produces misclassified \epsilon-perturbations of the training set. In our notation, this corresponds to learning a decision boundary that correctly classifies X^{\epsilon}=\left\{x \in \mathbb{R}^{d}:\left\|x-X_{i}\right\|_{2} \leq \epsilon \text { for some training point } X_{i}\right\}.
    We argue that this approach is not sufficiently robust in practice because X^{\epsilon} is often a poor model of \mathcal{M}^{\epsilon}. In this section we show that the volume vol X^{\epsilon} is usually only a vanishingly small fraction of the volume vol \mathcal{M}^{\epsilon}. These results explain why the ball-based learning algorithm \mathcal{L} defined in Section 5 is so much less sample-efficient than the nearest neighbor classifier.

    Theorem 5. Let \mathcal{M} \subset \mathbb{R}^{d} be a k-dimensional manifold embedded in \mathbb{R}^{d} such that \operatorname{vol}_{k} \mathcal{M}<\infty. Let X \subset \mathcal{M} be a finite set of points sampled from \mathcal{M}. Suppose that \epsilon \leq \operatorname{rch}_{2} \Xi where \Xi is the medial axis of \mathcal{M}, defined as in Dey (2007). Then the percentage of \mathcal{M}^{\epsilon} covered by X^{\epsilon} is upper bounded by
    \frac{\operatorname{vol} X^{\epsilon}}{\operatorname{vol} \mathcal{M}^{\epsilon}} \leq \frac{\pi^{k / 2} \Gamma\left(\frac{d-k}{2}+1\right)}{\Gamma\left(\frac{d}{2}+1\right)} \frac{\epsilon^{k}}{\operatorname{vol}_{k} \mathcal{M}}|X| \in \mathcal{O}\left(\left(\frac{2 \pi}{d-k}\right)^{k / 2} \frac{\epsilon^{k}}{\operatorname{vol}_{k} \mathcal{M}}|X|\right)\tag{2} \label{eq2}

    In high dimensions, even modest undersampling of \mathcal{M} leads to a significant loss of coverage of \mathcal{M}^{\epsilon}, because the volume of the union of balls centered at the samples shrinks faster than the volume of \mathcal{M}^{\epsilon}. Theorem 5 states that in high codimension the fraction of \mathcal{M}^{\epsilon} covered by X^{\epsilon} goes to 0; for any practically achievable training set size, X^{\epsilon} covers almost nothing.
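    The bound in Equation 2 is easy to evaluate numerically. The sketch below is only an illustration (the values of k, \epsilon, \operatorname{vol}_{k} \mathcal{M} and |X| are arbitrary assumptions, not numbers from the paper): it computes the right-hand side of Equation 2 with log-gamma functions and shows it collapsing toward 0 as the ambient dimension d grows.

```python
import numpy as np
from scipy.special import gammaln

def theorem5_upper_bound(d, k, eps, vol_k_M, n_samples):
    """Right-hand side of Eq. (2): an upper bound on vol(X^eps) / vol(M^eps)."""
    log_bound = (
        (k / 2) * np.log(np.pi)
        + gammaln((d - k) / 2 + 1)
        - gammaln(d / 2 + 1)
        + k * np.log(eps)
        - np.log(vol_k_M)
        + np.log(n_samples)
    )
    return np.exp(log_bound)

# Toy setting: a 10-dimensional manifold of unit 10-volume, eps = 0.1, 10^6 samples.
for d in [20, 50, 100, 500, 1000]:
    print(d, theorem5_upper_bound(d, k=10, eps=0.1, vol_k_M=1.0, n_samples=1e6))
```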

    Thus X^{\epsilon} is a poor model of \mathcal{M}^{\epsilon}, and high classification accuracy on X^{\epsilon} does not imply high accuracy in \mathcal{M}^{\epsilon}.

    Next, the special case of a k-dimensional plane gives an intuitive example of Theorem 5. Define \Pi=\left\{x \in \mathbb{R}^{d}: \ell \leq x_{1}, \ldots, x_{k} \leq \mu \text { and } x_{k+1}=\ldots=x_{d}=0\right\}; that is, \Pi is a subset of the x_{1}\text{-}\ldots\text{-}x_{k}-plane bounded between the coordinates [\ell, \mu]. Recall that a \delta-cover of a manifold \mathcal{M} in the norm \|\cdot\|_{p} is a finite set of points X such that for every x \in \mathcal{M} there exists X_{i} such that \left\|x-X_{i}\right\|_{p} \leq \delta. Constructing an explicit \delta-cover X of \Pi is easy: place the sample points on the vertices of a regular grid, shown as the black vertices in Figure 4. The centers of the grid cells (shown in blue in Figure 4) are the points of \Pi farthest from the samples. The distance from a grid vertex to a cell center is \sqrt{k} \Delta / 2, where \Delta is the spacing between points along each grid axis. To construct a \delta-cover we need \sqrt{k} \Delta / 2=\delta, giving a spacing of \Delta=2 \delta / \sqrt{k}. The resulting \delta-cover has size |X|=\left(\frac{\sqrt{k}(\mu-\ell)}{2 \delta}\right)^{k}. Note that |X| scales exponentially in k, the dimension of \Pi, not in d, the dimension of the embedding space.
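    A short sketch of this grid construction (my own illustration; the parameter values are arbitrary): build the regular grid with spacing \Delta=2 \delta / \sqrt{k}, embed it in \mathbb{R}^{d} by padding with zeros, compare |X| with the formula above, and confirm that random points of \Pi lie within \delta of the sample.

```python
import numpy as np
from itertools import product
from scipy.spatial import cKDTree

def grid_delta_cover(k, d, lo, hi, delta):
    """Regular-grid delta-cover (2-norm) of the bounded k-flat Pi inside R^d."""
    spacing = 2 * delta / np.sqrt(k)                  # Delta = 2*delta/sqrt(k)
    n_per_axis = int(np.ceil((hi - lo) / spacing)) + 1
    axis = np.linspace(lo, hi, n_per_axis)
    grid = np.array(list(product(axis, repeat=k)))    # vertices inside the k-flat
    X = np.zeros((len(grid), d))
    X[:, :k] = grid                                   # the remaining d-k coordinates stay 0
    return X

k, d, lo, hi, delta = 3, 50, 0.0, 1.0, 0.1
X = grid_delta_cover(k, d, lo, hi, delta)
print("cover size:", len(X),
      " formula (sqrt(k)(mu-l)/(2 delta))^k:", (np.sqrt(k) * (hi - lo) / (2 * delta)) ** k)

# Sanity check: random points of Pi are all within delta of some sample point.
rng = np.random.default_rng(0)
probes = np.zeros((2000, d))
probes[:, :k] = rng.uniform(lo, hi, size=(2000, k))
dists, _ = cKDTree(X).query(probes)
print("max distance from Pi to the cover:", dists.max(), "(should be <= delta =", delta, ")")
```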

    Recall that \Pi^{\delta} is the \delta-tubular neighborhood of \Pi. The \delta-balls around X, which comprise X^{\delta}, cover \Pi, and so any robust approach that guarantees correct classification within X^{\delta} will achieve perfect accuracy on \Pi. However, we will show that X^{\delta} covers only a vanishingly small fraction of \Pi^{\delta}. Let B_{\delta} denote the d-ball of radius \delta centered at the origin. An upper bound on the volume of X^{\delta} is
    \operatorname{vol} X^{\delta} \leq \operatorname{vol} B_{\delta}|X|=\frac{\pi^{d / 2}}{\Gamma\left(\frac{d}{2}+1\right)} \delta^{d}\left(\frac{\sqrt{k}(\mu-\ell)}{2 \delta}\right)^{k}=\frac{\pi^{d / 2}}{\Gamma\left(\frac{d}{2}+1\right)} \delta^{(d-k)}\left(\frac{\sqrt{k}(\mu-\ell)}{2}\right)^{k}\tag{5}\label{eq5}Next we bound the volume \operatorname{vol} \Pi^{\delta} from below. Intuitively, a lower bound on the volume can be derived by placing a (d-k)-dimensional ball in the normal space at each point of \Pi and integrating the volumes. Figure 4 (Right) illustrates the lower bound argument in the case of k=1, d=2.
    \operatorname{vol} \Pi^{\delta} \geq \operatorname{vol}_{d-k} B_{\delta}^{d-k} \operatorname{vol}_{k} \Pi=\frac{\pi^{(d-k) / 2}}{\Gamma\left(\frac{d-k}{2}+1\right)} \delta^{d-k}(\mu-\ell)^{k}\tag{6}\label{eq6} Combining Equations 5 and 6 gives an upper bound on the percentage of \Pi^{\delta} that is covered by X^{\delta}.
    \frac{\operatorname{vol} X^{\delta}}{\operatorname{vol} \Pi^{\delta}} \leq \frac{\pi^{k / 2} \Gamma\left(\frac{d-k}{2}+1\right)}{\Gamma\left(\frac{d}{2}+1\right)}\left(\frac{\sqrt{k}}{2}\right)^{k}\tag{7}\label{eq7}Notice that the factors involving \delta and (\mu-\ell) cancel. Figure 6 (Left) shows that this expression approaches 0 as the codimension (d-k) of \Pi increases.
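    Equation 7 depends only on k and the codimension d-k. The quick check below (my own illustration, mirroring what Figure 6 (Left) reports) evaluates it with log-gamma functions and shows it tending to 0 as the codimension grows.

```python
import numpy as np
from scipy.special import gammaln

def coverage_ratio_bound(d, k):
    """Right-hand side of Eq. (7): an upper bound on vol(X^delta) / vol(Pi^delta)."""
    log_ratio = (
        (k / 2) * np.log(np.pi)
        + gammaln((d - k) / 2 + 1)
        - gammaln(d / 2 + 1)
        + k * np.log(np.sqrt(k) / 2)
    )
    return np.exp(log_ratio)

k = 10
for codim in [1, 10, 100, 1000]:
    print("codimension", codim, "->", coverage_ratio_bound(d=k + codim, k=k))
```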
    Suppose we set \delta=1 and construct a 1-cover of \Pi. The number of points necessary to cover \Pi with balls of radius 1 depends only on k, not on the embedding dimension d. However, the number of points necessary to cover the tubular neighborhood \Pi^{1} with balls of radius 1 depends on both k and d. In Theorem 6 we derive a lower bound on the number of samples necessary to cover \Pi^{1}.


    Theorem 6. Let \Pi be a bounded k-flat as described above, bounded along each axis by \ell<\mu. Let n denote the number of samples necessary to cover the 1-tubular neighborhood \Pi^{1} of \Pi with \|\cdot\|_{2}-balls of radius 1. That is, let n be the minimum value for which there exists a finite sample X of size n such that \Pi^{1} \subset \cup_{x \in X} B(x, 1)=X^{1}. Then
    n \geq \frac{\pi^{-k / 2} \Gamma\left(\frac{d}{2}+1\right)}{\Gamma\left(\frac{d-k}{2}+1\right)}(\mu-\ell)^{k} \in \Omega\left(\left(\frac{d-k}{2 \pi}\right)^{k / 2}(\mu-\ell)^{k}\right)\tag{8}\label{eq8}

    Theorem 6 says that, in general, accurately modeling \mathcal{M} requires far fewer samples than modeling \mathcal{M}^{\epsilon}. Figure 6 (Right) compares the number of points needed to construct a 1-cover of \Pi with the lower bound from Theorem 6 on the number of points needed to cover \Pi^{1}. The number of points needed to cover \Pi^{1} grows as \Omega\left((d-k)^{k / 2}\right), polynomially in d and exponentially in k. In contrast, the number of points needed to construct a 1-cover of \Pi stays constant as d increases and depends only on k.
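    A quick numerical comparison (again my own illustration; the values of k and \mu-\ell are arbitrary) of the grid 1-cover size of \Pi, \left(\sqrt{k}(\mu-\ell) / 2\right)^{k}, with the Theorem 6 lower bound (Equation 8) on the number of balls needed to cover \Pi^{1}:

```python
import numpy as np
from scipy.special import gammaln

def cover_size_flat(k, width, delta=1.0):
    """Grid delta-cover size of the bounded k-flat Pi: (sqrt(k)*width/(2*delta))^k."""
    return (np.sqrt(k) * width / (2 * delta)) ** k

def lower_bound_tube_cover(d, k, width):
    """Theorem 6 / Eq. (8) lower bound on the number of radius-1 balls
    needed to cover the 1-tubular neighborhood Pi^1."""
    log_n = (
        -(k / 2) * np.log(np.pi)
        + gammaln(d / 2 + 1)
        - gammaln((d - k) / 2 + 1)
        + k * np.log(width)
    )
    return np.exp(log_n)

k, width = 5, 10.0                       # width stands for mu - ell
for d in [10, 50, 100, 500]:
    print(f"d={d:4d}  1-cover of Pi: {cover_size_flat(k, width):12.1f}"
          f"  lower bound for Pi^1: {lower_bound_tube_cover(d, k, width):16.1f}")
```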
    An approach that produces a robust classifier by generating adversarial examples in \epsilon-balls centered on the training set cannot model \mathcal{M}^{\epsilon} accurately, and doing so would require far more samples. If such a method behaves arbitrarily outside the \epsilon-balls that define X^{\epsilon}, adversarial examples will still exist and will be easy to find. Although adversarial examples expose this vulnerability, the reason deep learning performs so well on a wide variety of tasks is that performing well on \mathcal{M} is much easier than performing well on \mathcal{M}^{\epsilon}.
