Displaying an AR Experience with Metal

Author: loveFBI | Published 2018-04-08 20:08

    Build a custom AR view by rendering camera images and using position-tracking information to display overlay content.



    Overview

    ARKit includes view classes for easily displaying AR experiences with SceneKit or SpriteKit. However, if you instead build your own rendering engine (or integrate with a third-party engine), ARKit also provides all the support necessary to display an AR experience with a custom view.


    In any AR experience, the first step is to configure an ARSession object to manage camera capture and motion processing. A session defines and maintains a correspondence between the real-world space the device inhabits and a virtual space where you model AR content. To display your AR experience in a custom view, you’ll need to:

    Retrieve video frames and tracking information from the session.

    Render those frame images as the backdrop for your view.

    Use the tracking information to position and draw AR content atop the camera image.


    Note

    This article covers code found in Xcode project templates. For complete example code, create a new iOS application with the Augmented Reality template, and choose Metal from the Content Technology popup menu.


    Get Video Frames and Tracking Data from the Session

    Create and maintain your own ARSession instance, and run it with a session configuration appropriate for the kind of AR experience you want to support. The session captures video from the camera, tracks the device’s position and orientation in a modeled 3D space, and provides ARFrame objects. Each such object contains both an individual video frame image and position tracking information from the moment that frame was captured.
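    A minimal sketch of that setup, assuming a world-tracking experience (the variable names are illustrative, not from the template):

        import ARKit

        // Create and own the session (for example, in your renderer or view controller).
        let session = ARSession()

        // World tracking models the device's position and orientation in 3D space
        // and produces an ARFrame for each captured camera image.
        let configuration = ARWorldTrackingConfiguration()
        session.run(configuration)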

    There are two ways to access ARFrame objects produced by an AR session, depending on whether your app favors a pull or a push design pattern. 

    If you prefer to control frame timing (the pull design pattern), use the session’s currentFrame property to get the current frame image and tracking information each time you redraw your view’s contents. The ARKit Xcode template uses this approach:
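    For instance, if you drive rendering from an MTKView, the per-frame draw callback can pull whatever frame the session currently has. A sketch from inside a renderer class (the helper methods are placeholders for code sketched in the sections below):

        // Pull pattern: called once per display refresh by the MTKView.
        func draw(in view: MTKView) {
            guard let currentFrame = session.currentFrame else { return }

            updateCapturedImageTextures(frame: currentFrame)   // camera backdrop
            updateAnchors(frame: currentFrame)                 // overlay content
            encodeAndCommitRenderPass(in: view)                // placeholder for your render pass
        }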


    Alternatively, if your app design favors a push pattern, implement the session:didUpdateFrame: delegate method, and the session will call it once for each video frame it captures (at 60 frames per second by default).
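    A sketch of the push variant, assuming a placeholder Renderer class has been set as the session’s delegate (session.delegate = self):

        // Push pattern: the session delivers each captured frame to its delegate.
        extension Renderer: ARSessionDelegate {
            func session(_ session: ARSession, didUpdate frame: ARFrame) {
                // Called once per captured video frame (60 fps by default).
                updateCapturedImageTextures(frame: frame)
                updateAnchors(frame: frame)
            }
        }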

    Upon obtaining a frame, you’ll need to draw the camera image, and update and render any overlay content your AR experience includes.


    Draw the Camera Image

    Each ARFrame object’s capturedImage property contains a pixel buffer captured from the device camera. To draw this image as the backdrop for your custom view, you’ll need to create textures from the image content and submit GPU rendering commands that use those textures.

    The pixel buffer’s contents are encoded in a biplanar YCbCr (also called YUV) data format; to render the image you’ll need to convert this pixel data to a drawable RGB format. For rendering with Metal, you can perform this conversion most efficiently in GPU shader code. Use CVMetalTextureCache APIs to create two Metal textures from the pixel buffer—one each for the buffer’s luma (Y) and chroma (CbCr) planes:
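    A sketch of that texture creation, assuming a CVMetalTextureCache has already been created with CVMetalTextureCacheCreate and stored in capturedImageTextureCache:

        func updateCapturedImageTextures(frame: ARFrame) {
            // The captured image has two planes: luma (Y) and chroma (CbCr).
            let pixelBuffer = frame.capturedImage
            guard CVPixelBufferGetPlaneCount(pixelBuffer) >= 2 else { return }

            capturedImageTextureY = createTexture(from: pixelBuffer, pixelFormat: .r8Unorm, planeIndex: 0)
            capturedImageTextureCbCr = createTexture(from: pixelBuffer, pixelFormat: .rg8Unorm, planeIndex: 1)
        }

        func createTexture(from pixelBuffer: CVPixelBuffer, pixelFormat: MTLPixelFormat, planeIndex: Int) -> MTLTexture? {
            let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, planeIndex)
            let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, planeIndex)

            // Wrap the pixel buffer plane in a Metal texture without copying.
            var texture: CVMetalTexture?
            let status = CVMetalTextureCacheCreateTextureFromImage(nil, capturedImageTextureCache,
                                                                   pixelBuffer, nil, pixelFormat,
                                                                   width, height, planeIndex, &texture)
            guard status == kCVReturnSuccess, let texture = texture else { return nil }
            return CVMetalTextureGetTexture(texture)
        }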


    Next, encode render commands that draw those two textures using a fragment function that performs YCbCr to RGB conversion with a color transform matrix:
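    On the CPU side this amounts to binding both textures and drawing a full-screen quad with a pipeline whose fragment function performs the conversion. A sketch (the pipeline state, vertex buffer, and texture indices are placeholders for your own setup; the color transform matrix itself lives in the fragment shader):

        func drawCapturedImage(renderEncoder: MTLRenderCommandEncoder) {
            guard let textureY = capturedImageTextureY,
                  let textureCbCr = capturedImageTextureCbCr else { return }

            renderEncoder.setRenderPipelineState(capturedImagePipelineState)
            renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)

            // The fragment function samples both planes and multiplies the
            // YCbCr value by a color transform matrix to produce RGB.
            renderEncoder.setFragmentTexture(textureY, index: 1)
            renderEncoder.setFragmentTexture(textureCbCr, index: 2)
            renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
        }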


    Note

    Use the displayTransformForOrientation:viewportSize: method to make sure the camera image covers the entire view. For an example use of this method, as well as complete Metal pipeline setup code, see the full Xcode template. (Create a new iOS application with the Augmented Reality template, and choose Metal from the Content Technology popup menu.)
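    In Swift the method is displayTransform(for:viewportSize:). A sketch of using it to adjust the image plane's texture coordinates when the viewport or orientation changes (the coordinate storage and helper below are illustrative):

        func updateImagePlane(frame: ARFrame) {
            // The transform maps normalized image coordinates to the viewport;
            // apply its inverse to the quad's texture coordinates so the captured
            // image fills the view at the current interface orientation.
            let displayToCameraTransform = frame.displayTransform(for: .portrait,
                                                                  viewportSize: viewportSize).inverted()

            for index in 0..<4 {
                let original = originalTextureCoordinates[index]        // placeholder data
                let transformed = original.applying(displayToCameraTransform)
                setTextureCoordinate(transformed, forVertex: index)     // placeholder helper
            }
        }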


    Track and Render Overlay Content 

    AR experiences typically focus on rendering 3D overlay content so that the content appears to be part of the real world seen in the camera image. To achieve this illusion, use the ARAnchor class to model the position and orientation of your own 3D content relative to real-world space. Anchors provide transforms that you can reference during rendering.

    For example, the Xcode template creates an anchor located about 20 cm in front of the device whenever a user taps on the screen:
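    In Swift, that tap handler looks roughly like this (the gesture recognizer wiring is assumed to be set up elsewhere):

        @objc func handleTap(_ gestureRecognizer: UITapGestureRecognizer) {
            guard let currentFrame = session.currentFrame else { return }

            // Start from the camera's current transform and translate 0.2 m
            // along its -z axis, i.e. directly in front of the camera.
            var translation = matrix_identity_float4x4
            translation.columns.3.z = -0.2
            let transform = simd_mul(currentFrame.camera.transform, translation)

            // Adding the anchor tells ARKit to track that position and
            // orientation relative to real-world space from now on.
            let anchor = ARAnchor(transform: transform)
            session.add(anchor: anchor)
        }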


    In your rendering engine, use the transform property of each ARAnchor object to place visual content. The Xcode template uses each of the anchors added to the session in its handleTap method to position a simple cube mesh:
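    A sketch of that per-anchor placement (the instance uniform buffer and the count variables are placeholders for your renderer's own structures):

        func updateAnchors(frame: ARFrame) {
            anchorInstanceCount = min(frame.anchors.count, maxAnchorInstanceCount)

            for (index, anchor) in frame.anchors.prefix(anchorInstanceCount).enumerated() {
                // The anchor transform serves as the cube's model matrix: it
                // positions and orients the mesh in the same world space that
                // the camera transform is expressed in.
                instanceUniforms[index].modelMatrix = anchor.transform
            }
        }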


    Note

    In a more complex AR experience, you can use hit testing or plane detection to find the positions of real-world surfaces. For details, see the planeDetection property and the hitTest:types: method. In both cases, ARKit provides results as ARAnchor objects, so you still use anchor transforms to place visual content.
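    For example, with plane detection enabled in the session configuration, a tap can be hit-tested against detected planes and the result turned into an anchor (tapPoint here stands in for a tap location in normalized image coordinates):

        // When configuring the session, opt in to plane detection:
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        session.run(configuration)

        // When handling a tap, hit-test against detected planes:
        if let frame = session.currentFrame {
            let results = frame.hitTest(tapPoint, types: [.existingPlaneUsingExtent, .estimatedHorizontalPlane])
            if let nearest = results.first {
                // A hit-test result's worldTransform is used like any anchor transform.
                session.add(anchor: ARAnchor(transform: nearest.worldTransform))
            }
        }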


    Render with Realistic Lighting

    When you configure shaders for drawing 3D content in your scene, use the estimated lighting information in each ARFrame object to produce more realistic shading:
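    For example, the frame's lightEstimate can drive the ambient term in your shaders (the shared uniforms structure is a placeholder):

        func updateLighting(frame: ARFrame) {
            guard let lightEstimate = frame.lightEstimate else { return }

            // ambientIntensity is in lumens; about 1000 corresponds to neutral
            // lighting, so divide to get a scale factor for the ambient term.
            sharedUniforms.ambientIntensity = Float(lightEstimate.ambientIntensity) / 1000.0

            // Color temperature in Kelvin (6500 is roughly neutral white).
            sharedUniforms.ambientColorTemperature = Float(lightEstimate.ambientColorTemperature)
        }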


    Note

    For the complete set of Metal setup and rendering commands that go with this example, see the full Xcode template. (Create a new iOS application with the Augmented Reality template, and choose Metal from the Content Technology popup menu.)

