UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes
https://arxiv.org/abs/2205.10337
2022.5.20
Authors: Alexander Kolesnikov, André Susano Pinto, Lucas Beyer, Xiaohua Zhai, Jeremiah Harmsen, Neil Houlsby
Abstract: We introduce UViM, a unified approach capable of modeling a wide range of computer vision tasks. In contrast to previous models, UViM has the same functional form for all tasks; it requires no task-specific modifications which require extensive human expertise. The approach involves two components: (I) a base model (feed-forward) which is trained to directly predict raw vision outputs, guided by a learned discrete code and (II) a language model (autoregressive) that is trained to generate the guiding code. These components complement each other: the language model is well-suited to modeling structured interdependent data, while the base model is efficient at dealing with high-dimensional outputs. We demonstrate the effectiveness of UViM on three diverse and challenging vision tasks: panoptic segmentation, depth prediction and image colorization, where we achieve competitive and near state-of-the-art results. Our experimental results suggest that UViM is a promising candidate for a unified modeling approach in computer vision.
Submitted 20 May, 2022; originally announced May 2022.
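The two-component design described in the abstract can be sketched as a toy pipeline. Everything below (names, code length, vocabulary size, the quantization scheme) is illustrative only; the paper learns the oracle and base model jointly, VQ-VAE style, with transformers, not the simple stand-ins used here:

```python
import numpy as np

CODE_LEN, VOCAB = 8, 32   # toy sizes; the paper uses far larger codes and codebooks

def restricted_oracle(label: np.ndarray) -> list[int]:
    # Stage I "oracle": compresses the ground-truth label into a short
    # discrete guiding code. (Toy quantizer standing in for a learned encoder.)
    chunks = np.array_split(label, CODE_LEN)
    return [int(c.mean() * (VOCAB - 1)) % VOCAB for c in chunks]

def base_model(image: np.ndarray, code: list[int]) -> np.ndarray:
    # Feed-forward base model: predicts the dense, high-dimensional output
    # from the image, guided by the discrete code.
    guide = np.repeat(np.array(code) / (VOCAB - 1), len(image) // CODE_LEN)
    return 0.5 * image + 0.5 * guide   # toy fusion of image features and guidance

def language_model(image: np.ndarray) -> list[int]:
    # Stage II autoregressive LM: generates the guiding code from the image
    # alone. (Toy deterministic stand-in reusing the oracle's quantizer.)
    chunks = np.array_split(image, CODE_LEN)
    return [int(c.mean() * (VOCAB - 1)) % VOCAB for c in chunks]

# Inference: the LM proposes a code, then the base model decodes it into
# the final dense prediction in a single feed-forward pass.
image = np.linspace(0.0, 1.0, 64)
code = language_model(image)
pred = base_model(image, code)
```

The division of labor is the point: the autoregressive LM handles the structured, interdependent part of the output (the short code), while the feed-forward base model cheaply expands that code into the high-dimensional raw output.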
![](https://img.haomeiwen.com/i13727053/792b10d7810c523a.png)
![](https://img.haomeiwen.com/i13727053/9f0fe379370113a3.png)
![](https://img.haomeiwen.com/i13727053/83d78feb7f5c8724.png)
![](https://img.haomeiwen.com/i13727053/b8533466bced6f2b.png)
![](https://img.haomeiwen.com/i13727053/65774be5f89b9594.png)
![](https://img.haomeiwen.com/i13727053/96cd09c569d38f69.png)