April already ~ one third of 2021 gone. So fast! Keep it up!
For the next month or so I'll probably stick to one or two posts.
Today's notes are again on a paper about cross-domain few-shot learning.
Paper:
《Cross-Domain Few-Shot Learning with Meta Fine-Tuning》
Paper link: https://arxiv.org/abs/2005.10544v1
Code: https://github.com/johncai117/Meta-Fine-Tuning
This post only records my personal notes from reading the paper; I won't go into a full translation or the code. See the links above for details.
Background
1.Unfortunately, acquiring a large training data-set is costly due to the need for human annotation. Furthermore, when dealing with rare examples in medical images (e.g. rare diseases) or satellite images (e.g. oil spills), the ability to obtain labelled samples is limited.
---- This is why few-shot learning emerged in the first place.
2.However, existing few-shot learning methods have been developed with the assumption that the training and test data-set arise from the same distribution. Domain shift would thus be an additional problem as it may prevent the robust transfer of features.
(This points out the domain-shift problem in few-shot learning: because of it, the robust transfer of features suffers.)
3.The CVPR 2020 challenge has introduced a new benchmark that aims to test for generalization ability across a range of vastly different domains, with domains from natural and medical images, domains without perspective, and domains without color.
(CVPR 2020 introduced a benchmark for cross-domain few-shot learning; for details see my earlier paper-reading note: https://www.jianshu.com/p/e6dc55021885)
In summary, this dataset is used as the benchmark for the cross-domain exploration in this paper.
Work
Main contributions:
1.Integration of fine-tuning into the episodic training process by exploiting a first-order MAML-based meta-learning algorithm
(Referred to below as "Meta Fine-Tuning": this is done so that the network learns a set of initial weights that can easily be fine-tuned on the support set of the test domain.)
2.Integrates the Meta Fine-Tuning algorithm into a Graph Neural Network that exploits the non-Euclidean structure of the relation between the support set and the query samples.
3.Implement data augmentation on the support set during fine-tuning, and achieve a further improvement in accuracy.
4.Combine the above methods with a modified fine-tuning baseline method, and combine them into an ensemble to jointly make predictions.
Methodology
1. Graph Neural Networks
The meta-learning module is a graph neural network, which also appeared in the earlier paper-reading note on 《Cross-domain few-shot classification via learned feature-wise transformation》 (https://www.jianshu.com/p/353cb4926278).
In short: first, a linear layer projects the feature vectors of dimension F into a lower-dimensional space d_k, and the graph convolution receives the resulting signal S.
A graph convolution layer Gc(·) then performs a linear operation on the local signal, and its output becomes the node representations for the next layer.
To learn the edge features, an MLP takes the absolute difference between the output vectors of the vertices in the graph.
[The nodes store the representations being updated: each node represents one input image, and the weight on each edge expresses the relation between the two images.]
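To make this concrete, here is a minimal PyTorch sketch of one such layer; it is an illustration under my own assumptions (the class names, the hidden size, and the softmax normalization of edge weights are my choices, not the authors' code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeMLP(nn.Module):
    """Learns edge weights from the absolute difference of node feature vectors."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, s):
        # s: (V, d) node signals; pairwise |s_i - s_j| gives (V, V, d)
        diff = (s.unsqueeze(1) - s.unsqueeze(0)).abs()
        logits = self.net(diff).squeeze(-1)       # (V, V) edge scores
        return F.softmax(logits, dim=-1)          # row-normalized adjacency

class GraphConvLayer(nn.Module):
    """One graph convolution: aggregate neighbor signals, then a linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.edge_mlp = EdgeMLP(in_dim)
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, s):
        a = self.edge_mlp(s)                      # learned edge features
        return F.leaky_relu(self.linear(a @ s))   # linear operation on the local signal

# Each node is one image of the episode (support + query), projected to d_k dims.
feats = torch.randn(25, 128)                      # e.g. a 5-way 5-shot support set
out = GraphConvLayer(128, 64)(feats)              # (25, 64) updated node representations
```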
2. Meta Fine-Tuning
The core idea of Meta Fine-Tuning is that, instead of naively using a fixed set of pre-trained weights, we can use meta-learning to optimize for a set of weight initializations that are intended to be fine-tuned.
To this end, we apply and adapt the first-order MAML algorithm and simulate the episodic training process. A first-order MAML algorithm can achieve comparable results to the second-order algorithm at a lower computational cost.
The method can also be applied to a backbone of any depth, and any number of layers can be frozen. In this paper, everything up to the last network block is frozen, so only the last block is fine-tuned.
During step 1 (Meta Fine-Tuning), only support examples are used, and the first 8 layers are frozen. A linear classifier on the ResNet10 features is used to predict the support labels, and the last 2 layers are updated accordingly using CE loss for 5 epochs.
At step 2, all layers are updated using the episodic training loss. At the prediction stage on the test domain, all layers in the ResNet10 are frozen in step 2.
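A minimal sketch of these two steps, assuming a toy backbone split into a frozen part and a trainable last block (all names and shapes are illustrative stand-ins for the paper's ResNet10, not its actual code):

```python
import copy
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Stand-in for ResNet10: `frozen` plays the first 8 layers, `last_block` the rest."""
    def __init__(self, n_way=5):
        super().__init__()
        self.frozen = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.last_block = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, n_way))

    def forward(self, x):
        return self.last_block(self.frozen(x))

def meta_fine_tune_episode(model, sx, sy, qx, qy, inner_epochs=5, inner_lr=0.01):
    ce = nn.CrossEntropyLoss()
    # Step 1: clone the model and fine-tune only the last block on the support set.
    learner = copy.deepcopy(model)
    for p in learner.frozen.parameters():
        p.requires_grad_(False)
    opt = torch.optim.SGD(learner.last_block.parameters(), lr=inner_lr)
    for _ in range(inner_epochs):
        opt.zero_grad()
        ce(learner(sx), sy).backward()
        opt.step()
    # Step 2: episodic loss on the query set; first-order MAML copies the adapted
    # model's gradients back onto the initial weights (no second-order terms).
    learner.zero_grad(set_to_none=True)
    for p in learner.parameters():
        p.requires_grad_(True)
    loss = ce(learner(qx), qy)
    loss.backward()
    for p, lp in zip(model.parameters(), learner.parameters()):
        p.grad = None if lp.grad is None else lp.grad.clone()
    return loss.item()

# Usage: an outer optimizer over the *initial* weights takes one step per episode.
model = TinyBackbone()
outer = torch.optim.SGD(model.parameters(), lr=1e-3)
sx, sy = torch.randn(25, 64), torch.arange(5).repeat_interleave(5)   # 5-way 5-shot
qx, qy = torch.randn(75, 64), torch.arange(5).repeat(15)             # 15 queries/class
outer.zero_grad()
meta_fine_tune_episode(model, sx, sy, qx, qy)
outer.step()
```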
3. Data Augmentation
For data augmentation during training, we use the default parameters in the code base. For data augmentation during testing, we sample 17 additional images from each support image (whose label we know) and randomly perform jitter, random crops, and horizontal flips (where applicable). During fine-tuning, we up-weight the original images by exposing the model to them more frequently. At the final prediction stage, only the base image (i.e. the center crop) is used for both support and query images.
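A hypothetical re-creation of this test-time support augmentation with torchvision (the specific jitter and crop parameters are my assumptions; the paper just uses its code base's defaults):

```python
import torch
from PIL import Image
from torchvision import transforms

# Random views: jitter, random crop, horizontal flip (where applicable).
random_view = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
# Base view: the plain center crop, the only view used at the final prediction stage.
base_view = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def augment_support(img, n_aug=17):
    """One support image -> its base view plus n_aug random views.
    Up-weighting the original would mean repeating the base view more often."""
    return torch.stack([base_view(img)] + [random_view(img) for _ in range(n_aug)])

views = augment_support(Image.new("RGB", (256, 256)))   # (18, 3, 224, 224)
```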
4. Combining Scores in the Ensemble
For our final submission results, we combine the predictions from the modified baseline fine-tuning model and the meta fine-tuning GNN model by normalizing the scores using a softmax function, so that the scores from each model sum to 1 and are between 0 and 1, which ensures that each model is given equal weight in the prediction. Then we add them together and take the argmax.
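This step is simple enough to sketch directly (shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def ensemble_predict(baseline_scores, gnn_scores):
    """Softmax each model's class scores so they sum to 1 (equal weight),
    add them, and take the argmax class."""
    probs = F.softmax(baseline_scores, dim=-1) + F.softmax(gnn_scores, dim=-1)
    return probs.argmax(dim=-1)

# e.g. 5-way scores for 75 query images from each of the two models
pred = ensemble_predict(torch.randn(75, 5), torch.randn(75, 5))
```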
Flowchart: the overall pipeline extends the first-order MAML algorithm to meta fine-tuning and combines it with the GNN, data augmentation, and the ensemble method.
(Stacking every trick into one big ensemble... a bit of an arms race, isn't it?)
Experiments
ENDing~
Good luck in April!!! Let's go, go, go!