Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
https://readpaper.com/paper/4561428907278999553
Authors: Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
Abstract: Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance. In this setting, existing NeRF-like models often produce blurry or low-resolution renderings (due to the unbalanced detail and scale of nearby and distant objects), are slow to train, and may exhibit artifacts due to the inherent ambiguity of the task of reconstructing a large scene from a small set of images. We present an extension of mip-NeRF (a NeRF variant that addresses sampling and aliasing) that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes. Our model, which we dub "mip-NeRF 360" as we target scenes in which the camera rotates 360 degrees around a point, reduces mean-squared error by 54% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps for highly intricate, unbounded real-world scenes.
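The "non-linear scene parameterization" mentioned in the abstract is, in the paper, a contraction that maps unbounded world coordinates into a ball of radius 2: points with norm at most 1 pass through unchanged, and farther points are compressed so that infinity lands on the radius-2 sphere. A minimal NumPy sketch (the function name `contract` and the array-based interface are our own; the formula follows the paper's contraction):

```python
import numpy as np

def contract(x):
    """Contract unbounded 3D points into a ball of radius 2.

    Points with ||x|| <= 1 are returned unchanged; points farther
    away are mapped to (2 - 1/||x||) * (x/||x||), so distance from
    the origin saturates at 2 as ||x|| goes to infinity.
    """
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    # Avoid division by zero at the origin; that branch is the identity anyway.
    safe_norm = np.maximum(norm, 1e-12)
    return np.where(norm <= 1.0, x, (2.0 - 1.0 / safe_norm) * (x / safe_norm))
```

For example, a point at distance 4 from the origin maps to distance 2 - 1/4 = 1.75 along the same direction, while a point at distance 0.5 is left untouched. This bounded parameterization is what lets the model allocate capacity sensibly across near and distant content.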
Submitted 24 November, 2021; v1 submitted 23 November, 2021; originally announced November 2021.
Comments: https://jonbarron.info/mipnerf360/