Still and Video Media Capture

Author: __season__ | Published 2018-06-25 02:47

    Contents

    1. Still and Video Media Capture
      1.1. Use a Capture Session to Coordinate Data Flow
        1.1.1. Configuring a Session
        1.1.2. Monitoring Capture Session State
      1.2. An AVCaptureDevice Object Represents an Input Device
        1.2.1. Device Characteristics
        1.2.2. Device Capture Settings
          1.2.2.1. Focus Modes
          1.2.2.2. Exposure Modes
          1.2.2.3. Flash Modes
          1.2.2.4. Torch Mode
          1.2.2.5. Video Stabilization
          1.2.2.6. White Balance
          1.2.2.7. Setting Device Orientation
        1.2.3. Configuring a Device
        1.2.4. Switching Between Devices
      1.3. Use Capture Inputs to Add a Capture Device to a Session
      1.4. Use Capture Outputs to Get Output from a Session
        1.4.1. Saving to a Movie File
          1.4.1.1. Starting a Recording
          1.4.1.2. Ensuring That the File Was Written Successfully
          1.4.1.3. Adding Metadata to a File
          1.4.1.4. Processing Frames of Video
          1.4.1.5. Performance Considerations for Processing Video
        1.4.2. Capturing Still Images
          1.4.2.1. Pixel and Encoding Formats
          1.4.2.2. Capturing an Image
      1.5. Showing the User What’s Being Recorded
        1.5.1. Video Preview
          1.5.1.1. Video Gravity Modes
          1.5.1.2. Using “Tap to Focus” with a Preview
        1.5.2. Showing Audio Levels
      1.6. Putting It All Together: Capturing Video Frames as UIImage Objects
        1.6.1. Create and Configure a Capture Session
        1.6.2. Create and Configure the Device and Device Input
        1.6.3. Create and Configure the Video Data Output
        1.6.4. Implement the Sample Buffer Delegate Method
        1.6.5. Starting and Stopping Recording
      1.7. High Frame Rate Video Capture
        1.7.1. Playback
        1.7.2. Editing
        1.7.3. Export
        1.7.4. Recording

    Still and Video Media Capture

    To manage the capture from a device such as a camera or microphone, you assemble objects to represent inputs and outputs, and use an instance of AVCaptureSession to coordinate the data flow between them. Minimally you need:

    • An instance of AVCaptureDevice to represent the input device, such as a camera or microphone
    • An instance of a concrete subclass of AVCaptureInput to configure the ports from the input device
    • An instance of a concrete subclass of AVCaptureOutput to manage the output to a movie file or still image
    • An instance of AVCaptureSession to coordinate the data flow from the input to the output

    To show the user a preview of what the camera is recording, you can use an instance of AVCaptureVideoPreviewLayer (a subclass of CALayer).

    You can configure multiple inputs and outputs, coordinated by a single session, as shown in Figure 4-1

    Figure 4-1 A single session can configure multiple inputs and outputs

    For many applications, this is as much detail as you need. For some operations, however, (if you want to monitor the power levels in an audio channel, for example) you need to consider how the various ports of an input device are represented and how those ports are connected to the output.

    A connection between a capture input and a capture output in a capture session is represented by an AVCaptureConnection object. Capture inputs (instances of AVCaptureInput) have one or more input ports (instances of AVCaptureInputPort). Capture outputs (instances of AVCaptureOutput) can accept data from one or more sources (for example, an AVCaptureMovieFileOutput object accepts both video and audio data).

    When you add an input or an output to a session, the session forms connections between all the compatible capture inputs’ ports and capture outputs, as shown in Figure 4-2. A connection between a capture input and a capture output is represented by an AVCaptureConnection object.

    Figure 4-2 AVCaptureConnection represents a connection between an input and output

    You can use a capture connection to enable or disable the flow of data from a given input or to a given output. You can also use a connection to monitor the average and peak power levels in an audio channel.

    Note: Media capture does not support simultaneous capture of both the front-facing and back-facing cameras on iOS devices.

    Use a Capture Session to Coordinate Data Flow

    An AVCaptureSession object is the central coordinating object you use to manage data capture. You use an instance to coordinate the flow of data from AV input devices to outputs. You add the capture devices and outputs you want to the session, then start data flow by sending the session a startRunning message, and stop the data flow by sending a stopRunning message.

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    // Add inputs and outputs.
    [session startRunning];
    
    • Configuring a Session

    You use a preset on the session to specify the image quality and resolution you want. A preset is a constant that identifies one of a number of possible configurations; in some cases the actual configuration is device-specific:

    | Symbol | Resolution | Comments |
    | --- | --- | --- |
    | AVCaptureSessionPresetHigh | High | Highest recording quality. This varies per device. |
    | AVCaptureSessionPresetMedium | Medium | Suitable for Wi-Fi sharing. The actual values may change. |
    | AVCaptureSessionPresetLow | Low | Suitable for 3G sharing. The actual values may change. |
    | AVCaptureSessionPreset640x480 | 640x480 | VGA. |
    | AVCaptureSessionPreset1280x720 | 1280x720 | 720p HD. |
    | AVCaptureSessionPresetPhoto | Photo | Full photo resolution. This is not supported for video output. |

    If you want to set a media frame size-specific configuration, you should check whether it is supported before setting it, as follows:

    if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        session.sessionPreset = AVCaptureSessionPreset1280x720;
    }
    else {
        // Handle the failure.
    }
    

    If you need to adjust session parameters at a more granular level than is possible with a preset, or you’d like to make changes to a running session, you surround your changes with the beginConfiguration and commitConfiguration methods. The beginConfiguration and commitConfiguration methods ensure that device changes occur as a group, minimizing visibility or inconsistency of state. After calling beginConfiguration, you can add or remove outputs, alter the sessionPreset property, or configure individual capture input or output properties. No changes are actually made until you invoke commitConfiguration, at which time they are applied together.

    [session beginConfiguration];
    // Remove an existing capture device.
    // Add a new capture device.
    // Reset the preset.
    [session commitConfiguration];
    
    • Monitoring Capture Session State

    A capture session posts notifications that you can observe to be notified, for example, when it starts or stops running, or when it is interrupted. You can register to receive an AVCaptureSessionRuntimeErrorNotification if a runtime error occurs. You can also interrogate the session’s running property to find out if it is running, and its interrupted property to find out if it is interrupted. Additionally, both the running and interrupted properties are key-value observing compliant and the notifications are posted on the main thread.

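    A minimal sketch of both approaches (assuming self implements a hypothetical sessionRuntimeError: handler and the standard key-value observing callback):

    AVCaptureSession *session = <#A capture session#>;
    // Be notified if a runtime error occurs.
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(sessionRuntimeError:)
                                                 name:AVCaptureSessionRuntimeErrorNotification
                                               object:session];
    // The running property is key-value observing compliant.
    [session addObserver:self
              forKeyPath:@"running"
                 options:NSKeyValueObservingOptionNew
                 context:NULL];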

    An AVCaptureDevice Object Represents an Input Device

    An AVCaptureDevice object abstracts a physical capture device that provides input data (such as audio or video) to an AVCaptureSession object. There is one object for each input device, for example, two video inputs—one for the front-facing camera, one for the back-facing camera—and one audio input for the microphone.

    You can find out which capture devices are currently available using the AVCaptureDevice class methods devices and devicesWithMediaType:. And, if necessary, you can find out what features an iPhone, iPad, or iPod offers (see Device Capture Settings). The list of available devices may change, though. Current input devices may become unavailable (if they’re used by another application), and new input devices may become available (if they’re relinquished by another application). You should register to receive AVCaptureDeviceWasConnectedNotification and AVCaptureDeviceWasDisconnectedNotification notifications to be alerted when the list of available devices changes.

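    For example, a minimal sketch of watching for connection changes (the block-based observer and the logging are illustrative):

    [[NSNotificationCenter defaultCenter] addObserverForName:AVCaptureDeviceWasConnectedNotification
                                                      object:nil
                                                       queue:[NSOperationQueue mainQueue]
                                                  usingBlock:^(NSNotification *notification) {
        AVCaptureDevice *device = notification.object;
        NSLog(@"Device connected: %@", device.localizedName);
        // Refresh any list of available devices you present to the user.
    }];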

    You add an input device to a capture session using a capture input (see Use Capture Inputs to Add a Capture Device to a Session).

    • Device Characteristics

    You can ask a device about its different characteristics. You can also test whether it provides a particular media type or supports a given capture session preset using hasMediaType: and supportsAVCaptureSessionPreset: respectively. To provide information to the user, you can find out the position of the capture device (whether it is on the front or the back of the unit being tested), and its localized name. This may be useful if you want to present a list of capture devices to allow the user to choose one.

    Figure 4-3 shows the positions of the back-facing (AVCaptureDevicePositionBack) and front-facing (AVCaptureDevicePositionFront) cameras.

    Note: Media capture does not support simultaneous capture of both the front-facing and back-facing cameras on iOS devices.

    Figure 4-3 iOS device front and back facing camera positions

    The following code example iterates over all the available devices and logs their name—and for video devices, their position—on the unit.

    NSArray *devices = [AVCaptureDevice devices];
     
    for (AVCaptureDevice *device in devices) {
     
        NSLog(@"Device name: %@", [device localizedName]);
     
        if ([device hasMediaType:AVMediaTypeVideo]) {
     
            if ([device position] == AVCaptureDevicePositionBack) {
                NSLog(@"Device position : back");
            }
            else {
                NSLog(@"Device position : front");
            }
        }
    }
    

    In addition, you can find out the device’s model ID and its unique ID.

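    For example (a trivial sketch, assuming device is one of the AVCaptureDevice objects obtained above):

    NSLog(@"Model ID: %@, unique ID: %@", [device modelID], [device uniqueID]);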

    • Device Capture Settings

    Different devices have different capabilities; for example, some may support different focus or flash modes; some may support focus on a point of interest.

    The following code fragment shows how you can find video input devices that have a torch mode and support a given capture session preset:

    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    NSMutableArray *torchDevices = [[NSMutableArray alloc] init];
     
    for (AVCaptureDevice *device in devices) {
        if ([device hasTorch] &&
             [device supportsAVCaptureSessionPreset:AVCaptureSessionPreset640x480]) {
            [torchDevices addObject:device];
        }
    }
    

    If you find multiple devices that meet your criteria, you might let the user choose which one they want to use. To display a description of a device to the user, you can use its localizedName property.

    You use the various different features in similar ways. There are constants to specify a particular mode, and you can ask a device whether it supports a particular mode. In several cases, you can observe a property to be notified when a feature is changing. In all cases, you should lock the device before changing the mode of a particular feature, as described in Configuring a Device.

    Note: Focus point of interest and exposure point of interest are mutually exclusive, as are focus mode and exposure mode.

    • Focus Modes

    There are three focus modes:

    • AVCaptureFocusModeLocked: The focal position is fixed.
      This is useful when you want to allow the user to compose a scene then lock the focus.
    • AVCaptureFocusModeAutoFocus: The camera does a single scan focus then reverts to locked.
      This is suitable for a situation where you want to select a particular item on which to focus and then maintain focus on that item even if it is not the center of the scene.
    • AVCaptureFocusModeContinuousAutoFocus: The camera continuously autofocuses as needed.

    You use the isFocusModeSupported: method to determine whether a device supports a given focus mode, then set the mode using the focusMode property.

    In addition, a device may support a focus point of interest. You test for support using focusPointOfInterestSupported. If it’s supported, you set the focal point using focusPointOfInterest. You pass a CGPoint where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode.

    You can use the adjustingFocus property to determine whether a device is currently focusing. You can observe the property using key-value observing to be notified when a device starts and stops focusing.

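    A minimal KVO sketch (the observer registration and callback shown here are illustrative):

    [device addObserver:self
             forKeyPath:@"adjustingFocus"
                options:NSKeyValueObservingOptionNew
                context:NULL];

    - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                            change:(NSDictionary *)change context:(void *)context {
        if ([keyPath isEqualToString:@"adjustingFocus"]) {
            BOOL isFocusing = [change[NSKeyValueChangeNewKey] boolValue];
            // Use isFocusing to update the user interface as appropriate (on the main thread).
        }
    }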

    If you change the focus mode settings, you can return them to the default configuration as follows:

    if ([currentDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f);
        [currentDevice setFocusPointOfInterest:autofocusPoint];
        [currentDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
    }
    
    • Exposure Modes

    There are two exposure modes:

    • AVCaptureExposureModeContinuousAutoExposure: The device automatically adjusts the exposure level as needed.
    • AVCaptureExposureModeLocked: The exposure level is fixed at its current level.

    You use the isExposureModeSupported: method to determine whether a device supports a given exposure mode, then set the mode using the exposureMode property.

    In addition, a device may support an exposure point of interest. You test for support using exposurePointOfInterestSupported. If it’s supported, you set the exposure point using exposurePointOfInterest. You pass a CGPoint where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode.

    You can use the adjustingExposure property to determine whether a device is currently changing its exposure setting. You can observe the property using key-value observing to be notified when a device starts and stops changing its exposure setting.

    If you change the exposure settings, you can return them to the default configuration as follows:

    if ([currentDevice isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
        CGPoint exposurePoint = CGPointMake(0.5f, 0.5f);
        [currentDevice setExposurePointOfInterest:exposurePoint];
        [currentDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
    }
    
    • Flash Modes

    There are three flash modes:

    • AVCaptureFlashModeOff: The flash will never fire.
    • AVCaptureFlashModeOn: The flash will always fire.
    • AVCaptureFlashModeAuto: The flash will fire dependent on the ambient light conditions.

    You use hasFlash to determine whether a device has a flash. If that method returns YES, you then use the isFlashModeSupported: method, passing the desired mode to determine whether a device supports a given flash mode, then set the mode using the flashMode property.

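    A minimal sketch of that flow (locking the device as described in Configuring a Device below):

    AVCaptureDevice *device = <#A video capture device#>;
    if ([device hasFlash] && [device isFlashModeSupported:AVCaptureFlashModeAuto]) {
        NSError *error = nil;
        if ([device lockForConfiguration:&error]) {
            device.flashMode = AVCaptureFlashModeAuto;
            [device unlockForConfiguration];
        }
        else {
            // Respond to the failure as appropriate.
        }
    }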

    • Torch Mode

    In torch mode, the flash is continuously enabled at a low power to illuminate a video capture. There are three torch modes:

    • AVCaptureTorchModeOff: The torch is always off.
    • AVCaptureTorchModeOn: The torch is always on.
    • AVCaptureTorchModeAuto: The torch is automatically switched on and off as needed.

    You use hasTorch to determine whether a device has a torch. You use the isTorchModeSupported: method to determine whether a device supports a given torch mode, then set the mode using the torchMode property.

    For devices with a torch, the torch only turns on if the device is associated with a running capture session.

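    A minimal sketch (assuming device is attached to a running capture session, and locking as described in Configuring a Device below):

    if ([device hasTorch] && [device isTorchModeSupported:AVCaptureTorchModeOn]) {
        NSError *error = nil;
        if ([device lockForConfiguration:&error]) {
            device.torchMode = AVCaptureTorchModeOn;
            [device unlockForConfiguration];
        }
    }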

    • Video Stabilization

    Cinematic video stabilization is available for connections that operate on video, depending on the specific device hardware. Even so, not all source formats and video resolutions are supported.

    Enabling cinematic video stabilization may also introduce additional latency into the video capture pipeline. To detect when video stabilization is in use, use the videoStabilizationEnabled property. The enablesVideoStabilizationWhenAvailable property allows an application to automatically enable video stabilization if it is supported by the camera. By default automatic stabilization is disabled due to the above limitations.

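    A minimal sketch (assuming a video AVCaptureConnection; enablesVideoStabilizationWhenAvailable is the iOS 7-era API, later superseded by preferredVideoStabilizationMode):

    AVCaptureConnection *connection = <#A video capture connection#>;
    if (connection.supportsVideoStabilization) {
        // Opt in; stabilization engages only when the source format supports it.
        connection.enablesVideoStabilizationWhenAvailable = YES;
    }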

    • White Balance

    There are two white balance modes:

    • AVCaptureWhiteBalanceModeLocked: The white balance mode is fixed.
    • AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance: The camera continuously adjusts the white balance as needed.

    You use the isWhiteBalanceModeSupported: method to determine whether a device supports a given white balance mode, then set the mode using the whiteBalanceMode property.

    You can use the adjustingWhiteBalance property to determine whether a device is currently changing its white balance setting. You can observe the property using key-value observing to be notified when a device starts and stops changing its white balance setting.

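    A minimal sketch (assuming device is a video AVCaptureDevice, and locking as described in Configuring a Device below):

    if ([device isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance]) {
        NSError *error = nil;
        if ([device lockForConfiguration:&error]) {
            device.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
            [device unlockForConfiguration];
        }
    }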

    • Setting Device Orientation

    You set the desired orientation on an AVCaptureConnection to specify how you want the images oriented in the AVCaptureOutput (AVCaptureMovieFileOutput, AVCaptureStillImageOutput and AVCaptureVideoDataOutput) for the connection.

    Use the AVCaptureConnection supportsVideoOrientation property to determine whether the device supports changing the orientation of the video, and the videoOrientation property to specify how you want the images oriented in the output port. Listing 4-1 shows how to set the orientation for an AVCaptureConnection to AVCaptureVideoOrientationLandscapeLeft:

    AVCaptureConnection 设置期望的方向,来指定你想要的图像在 AVCaptureOutputAVCaptureMovieFileOutputAVCaptureStillImageOutput, AVCaptureVideoDataOutput)中的方向,为了连接。

    使用 AVCaptureConnectionsupportsVideoOrientation 属性来确定设备是否支持改变视频的方向,videoOrientation 属性指定你想要的图像在输出端口的方向。列表4-1显示了如何设置方向,为 AVCaptureConnection 设置 AVCaptureVideoOrientationLandscapeLeft

    Listing 4-1 Setting the orientation of a capture connection

    AVCaptureConnection *captureConnection = <#A capture connection#>;
    if ([captureConnection isVideoOrientationSupported])
    {
        AVCaptureVideoOrientation orientation = AVCaptureVideoOrientationLandscapeLeft;
        [captureConnection setVideoOrientation:orientation];
    }
    
    • Configuring a Device

    To set capture properties on a device, you must first acquire a lock on the device using lockForConfiguration:. This avoids making changes that may be incompatible with settings in other applications. The following code fragment illustrates how to approach changing the focus mode on a device by first determining whether the mode is supported, then attempting to lock the device for reconfiguration. The focus mode is changed only if the lock is obtained, and the lock is released immediately afterward.

    if ([device isFocusModeSupported:AVCaptureFocusModeLocked]) {
        NSError *error = nil;
        if ([device lockForConfiguration:&error]) {
            device.focusMode = AVCaptureFocusModeLocked;
            [device unlockForConfiguration];
        }
        else {
            // Respond to the failure as appropriate.
        }
    }
    

    You should hold the device lock only if you need the settable device properties to remain unchanged. Holding the device lock unnecessarily may degrade capture quality in other applications sharing the device.

    • Switching Between Devices

    Sometimes you may want to allow users to switch between input devices—for example, switching from the front-facing to the back-facing camera. To avoid pauses or stuttering, you can reconfigure a session while it is running; however, you should use beginConfiguration and commitConfiguration to bracket your configuration changes:

    AVCaptureSession *session = <#A capture session#>;
    [session beginConfiguration];
     
    [session removeInput:frontFacingCameraDeviceInput];
    [session addInput:backFacingCameraDeviceInput];
     
    [session commitConfiguration];
    

    When the outermost commitConfiguration is invoked, all the changes are made together. This ensures a smooth transition.

    Use Capture Inputs to Add a Capture Device to a Session

    To add a capture device to a capture session, you use an instance of AVCaptureDeviceInput (a concrete subclass of the abstract AVCaptureInput class). The capture device input manages the device’s ports.

    NSError *error;
    AVCaptureDeviceInput *input =
            [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    

    You add inputs to a session using addInput:. If appropriate, you can check whether a capture input is compatible with an existing session using canAddInput:.

    AVCaptureSession *captureSession = <#Get a capture session#>;
    AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>;
    if ([captureSession canAddInput:captureDeviceInput]) {
        [captureSession addInput:captureDeviceInput];
    }
    else {
        // Handle the failure.
    }
    

    See Configuring a Session for more details on how you might reconfigure a running session.

    An AVCaptureInput vends one or more streams of media data. For example, input devices can provide both audio and video data. Each media stream provided by an input is represented by an AVCaptureInputPort object. A capture session uses an AVCaptureConnection object to define the mapping between a set of AVCaptureInputPort objects and a single AVCaptureOutput.

    Use Capture Outputs to Get Output from a Session

    To get output from a capture session, you add one or more outputs. An output is an instance of a concrete subclass of AVCaptureOutput. You use:

    • AVCaptureMovieFileOutput to output to a movie file
    • AVCaptureVideoDataOutput if you want to process frames from the video being captured, for example, to create your own custom view layer
    • AVCaptureAudioDataOutput if you want to process the audio data being captured
    • AVCaptureStillImageOutput if you want to capture still images with accompanying metadata

    You add outputs to a capture session using addOutput:. You check whether a capture output is compatible with an existing session using canAddOutput:. You can add and remove outputs as required while the session is running.

    AVCaptureSession *captureSession = <#Get a capture session#>;
    AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
    if ([captureSession canAddOutput:movieOutput]) {
        [captureSession addOutput:movieOutput];
    }
    else {
        // Handle the failure.
    }
    
    • Saving to a Movie File

    You save movie data to a file using an AVCaptureMovieFileOutput object. (AVCaptureMovieFileOutput is a concrete subclass of AVCaptureFileOutput, which defines much of the basic behavior.) You can configure various aspects of the movie file output, such as the maximum duration of a recording, or its maximum file size. You can also prohibit recording if there is less than a given amount of disk space left.

    AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
    NSURL *fileURL = <#A file URL that identifies the output location#>;
    [aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
    
    

    The resolution and bit rate for the output depend on the capture session’s sessionPreset. The video encoding is typically H.264 and audio encoding is typically AAC. The actual values vary by device.

    • Starting a Recording

    You start recording a QuickTime movie using startRecordingToOutputFileURL:recordingDelegate:. You need to supply a file-based URL and a delegate. The URL must not identify an existing file, because the movie file output does not overwrite existing resources. You must also have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol, and must implement the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.

    AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
    NSURL *fileURL = <#A file URL that identifies the output location#>;
    [aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
    

    In the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:, the delegate might write the resulting movie to the Camera Roll album. It should also check for any errors that might have occurred.

    captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: 的实现中,代理可以将结果电影写入到相机胶卷专辑中。它也应该可能发生的任何错误。

    • Ensuring That the File Was Written Successfully

    To determine whether the file was saved successfully, in the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: you check not only the error but also the value of the AVErrorRecordingSuccessfullyFinishedKey in the error’s user info dictionary:

    - (void)captureOutput:(AVCaptureFileOutput *)captureOutput
            didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
            fromConnections:(NSArray *)connections
            error:(NSError *)error {
     
        BOOL recordedSuccessfully = YES;
        if ([error code] != noErr) {
            // A problem occurred: Find out if the recording was successful.
            id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
            if (value) {
                recordedSuccessfully = [value boolValue];
            }
        }
        // Continue as appropriate...
    }
    

    You should check the value of the AVErrorRecordingSuccessfullyFinishedKey key in the user info dictionary of the error, because the file might have been saved successfully, even though you got an error. The error might indicate that one of your recording constraints was reached—for example, AVErrorMaximumDurationReached or AVErrorMaximumFileSizeReached. Other reasons the recording might stop are:

    • The disk is full—AVErrorDiskFull
    • The recording device was disconnected—AVErrorDeviceWasDisconnected
    • The session was interrupted (for example, a phone call was received)—AVErrorSessionWasInterrupted

    • Adding Metadata to a File

    You can set metadata for the movie file at any time, even while recording. This is useful for situations where the information is not available when the recording starts, as may be the case with location information. Metadata for a file output is represented by an array of AVMetadataItem objects; you use an instance of its mutable subclass, AVMutableMetadataItem, to create metadata of your own.

    AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
    NSArray *existingMetadataArray = aMovieFileOutput.metadata;
    NSMutableArray *newMetadataArray = nil;
    if (existingMetadataArray) {
        newMetadataArray = [existingMetadataArray mutableCopy];
    }
    else {
        newMetadataArray = [[NSMutableArray alloc] init];
    }
     
    AVMutableMetadataItem *item = [[AVMutableMetadataItem alloc] init];
    item.keySpace = AVMetadataKeySpaceCommon;
    item.key = AVMetadataCommonKeyLocation;
     
    CLLocation *location = <#The location to set#>;
    item.value = [NSString stringWithFormat:@"%+08.4lf%+09.4lf/",
        location.coordinate.latitude, location.coordinate.longitude];
     
    [newMetadataArray addObject:item];
     
    aMovieFileOutput.metadata = newMetadataArray;
    
    • Processing Frames of Video

    An AVCaptureVideoDataOutput object uses delegation to vend video frames. You set the delegate using setSampleBufferDelegate:queue:. In addition to setting the delegate, you specify a serial queue on which the delegate methods are invoked. You must use a serial queue to ensure that frames are delivered to the delegate in the proper order. You can use the queue to modify the priority given to delivering and processing the video frames. See SquareCam for a sample implementation.

    The frames are presented in the delegate method, captureOutput:didOutputSampleBuffer:fromConnection:, as instances of the CMSampleBufferRef opaque type (see Representations of Media). By default, the buffers are emitted in the camera’s most efficient format. You can use the videoSettings property to specify a custom output format. The video settings property is a dictionary; currently, the only supported key is kCVPixelBufferPixelFormatTypeKey. The recommended pixel formats are returned by the availableVideoCVPixelFormatTypes property, and the availableVideoCodecTypes property returns the supported values. Both Core Graphics and OpenGL work well with the BGRA format:

    AVCaptureVideoDataOutput *videoDataOutput = [AVCaptureVideoDataOutput new];
    NSDictionary *newSettings =
                    @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    videoDataOutput.videoSettings = newSettings;
     
    // Discard late frames if the data output queue is blocked (as we process the still image).
    [videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
     
    // Create a serial dispatch queue used for the sample buffer delegate as well as when a still image is captured.
    // A serial dispatch queue must be used to guarantee that video frames will be delivered in order.
    // See the header doc for setSampleBufferDelegate:queue: for more information.
    videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
     
    AVCaptureSession *captureSession = <#The Capture Session#>;
     
    if ( [captureSession canAddOutput:videoDataOutput] )
         [captureSession addOutput:videoDataOutput];
    
    • Performance Considerations for Processing Video

    You should set the session output to the lowest practical resolution for your application. Setting the output to a higher resolution than necessary wastes processing cycles and needlessly consumes power.

    You must ensure that your implementation of captureOutput:didOutputSampleBuffer:fromConnection: is able to process a sample buffer within the amount of time allotted to a frame. If it takes too long and you hold onto the video frames, AV Foundation stops delivering frames, not only to your delegate but also to other outputs such as a preview layer.

    You can use the capture video data output’s minFrameDuration property to be sure you have enough time to process a frame — at the cost of having a lower frame rate than would otherwise be the case. You might also make sure that the alwaysDiscardsLateVideoFrames property is set to YES (the default). This ensures that any late video frames are dropped rather than handed to you for processing. Alternatively, if you are recording and it doesn’t matter if the output frames are a little late and you would prefer to get all of them, you can set the property value to NO. This does not mean that frames will not be dropped (that is, frames may still be dropped), but that they may not be dropped as early, or as efficiently.

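    A minimal sketch of both settings (minFrameDuration is the AVCaptureVideoDataOutput property this guide uses; on later iOS versions the frame rate is capped through the device’s activeVideoMinFrameDuration instead):

    AVCaptureVideoDataOutput *videoDataOutput = <#The video data output#>;
    // Drop frames that arrive while the delegate is still busy (the default).
    videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
    // Allow at most 15 frames per second, leaving 1/15 second to process each frame.
    videoDataOutput.minFrameDuration = CMTimeMake(1, 15);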

    • Capturing Still Images

    You use an AVCaptureStillImageOutput output if you want to capture still images with accompanying metadata. The resolution of the image depends on the preset for the session, as well as the device.

    • Pixel and Encoding Formats

    Different devices support different image formats. You can find out what pixel and codec types are supported by a device using availableImageDataCVPixelFormatTypes and availableImageDataCodecTypes respectively. Each method returns an array of the supported values for the specific device. You set the outputSettings dictionary to specify the image format you want, for example:

    AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG};
    [stillImageOutput setOutputSettings:outputSettings];
    

    If you want to capture a JPEG image, you should typically not specify your own compression format. Instead, you should let the still image output do the compression for you, since its compression is hardware-accelerated. If you need a data representation of the image, you can use jpegStillImageNSDataRepresentation: to get an NSData object without recompressing the data, even if you modify the image’s metadata.

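    For example (a sketch, assuming imageSampleBuffer is the buffer delivered to the completion handler shown in Capturing an Image below):

    NSData *jpegData =
        [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    UIImage *image = [[UIImage alloc] initWithData:jpegData];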

    • Capturing an Image

    When you want to capture an image, you send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message. The first argument is the connection you want to use for the capture. You need to look for the connection whose input port is collecting video:

    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }
    

    The second argument to captureStillImageAsynchronouslyFromConnection:completionHandler: is a block that takes two arguments: a CMSampleBuffer opaque type containing the image data, and an error. The sample buffer itself may contain metadata, such as an EXIF dictionary, as an attachment. You can modify the attachments if you want, but note the optimization for JPEG images discussed in Pixel and Encoding Formats.

    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
        ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
            CFDictionaryRef exifAttachments =
                CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
            if (exifAttachments) {
                // Do something with the attachments.
            }
            // Continue as appropriate.
        }];
    

    Showing the User What’s Being Recorded

    You can provide the user with a preview of what’s being recorded by the camera (using a preview layer) or by the microphone (by monitoring the audio channel).

    • Video Preview

    You can provide the user with a preview of what’s being recorded using an AVCaptureVideoPreviewLayer object. AVCaptureVideoPreviewLayer is a subclass of CALayer (see Core Animation Programming Guide). You don’t need any outputs to show the preview.

    Using the AVCaptureVideoDataOutput class provides the client application with the ability to access the video pixels before they are presented to the user.

    Unlike a capture output, a video preview layer maintains a strong reference to the session with which it is associated. This is to ensure that the session is not deallocated while the layer is attempting to display video. This is reflected in the way you initialize a preview layer:

    AVCaptureSession *captureSession = <#Get a capture session#>;
    CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
     
    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
    [viewLayer addSublayer:captureVideoPreviewLayer];
    

    In general, the preview layer behaves like any other CALayer object in the render tree (see Core Animation Programming Guide). You can scale the image and perform transformations, rotations, and so on just as you would any layer. One difference is that you may need to set the layer’s orientation property to specify how it should rotate images coming from the camera. In addition, you can test for device support for video mirroring by querying the supportsVideoMirroring property. You can set the videoMirrored property as required, although when the automaticallyAdjustsVideoMirroring property is set to YES (the default), the mirroring value is automatically set based on the configuration of the session.

    • Video Gravity Modes

    The preview layer supports three gravity modes that you set using videoGravity:

    • AVLayerVideoGravityResizeAspect: This preserves the aspect ratio, leaving black bars where the video does not fill the available screen area.
    • AVLayerVideoGravityResizeAspectFill: This preserves the aspect ratio, but fills the available screen area, cropping the video when necessary.
    • AVLayerVideoGravityResize: This simply stretches the video to fill the available screen area, even if doing so distorts the image.

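    For example (a sketch, reusing the layer names from the preview code above):

    captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    captureVideoPreviewLayer.frame = viewLayer.bounds;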

    • Using “Tap to Focus” with a Preview

    You need to take care when implementing tap-to-focus in conjunction with a preview layer. You must account for the preview orientation and gravity of the layer, and for the possibility that the preview may be mirrored. See the sample code project AVCam-iOS: Using AVFoundation to Capture Images and Movies for an implementation of this functionality.

    • Showing Audio Levels

    To monitor the average and peak power levels in an audio channel in a capture connection, you use an AVCaptureAudioChannel object. Audio levels are not key-value observable, so you must poll for updated levels as often as you want to update your user interface (for example, 10 times a second).

    AVCaptureAudioDataOutput *audioDataOutput = <#Get the audio data output#>;
    NSArray *connections = audioDataOutput.connections;
    if ([connections count] > 0) {
        // There should be only one connection to an AVCaptureAudioDataOutput.
        AVCaptureConnection *connection = [connections objectAtIndex:0];
     
        NSArray *audioChannels = connection.audioChannels;
     
        for (AVCaptureAudioChannel *channel in audioChannels) {
            float avg = channel.averagePowerLevel;
            float peak = channel.peakHoldLevel;
            // Update the level meter user interface.
    
        }
    }
    

    Putting It All Together: Capturing Video Frames as UIImage Objects

    This brief code example illustrates how you can capture video and convert the frames you get to UIImage objects. It shows you how to:

    • Create an AVCaptureSession object to coordinate the flow of data from an AV input device to an output
    • Find the AVCaptureDevice object for the input type you want
    • Create an AVCaptureDeviceInput object for the device
    • Create an AVCaptureVideoDataOutput object to produce video frames
    • Implement a delegate for the AVCaptureVideoDataOutput object to process video frames
    • Implement a function to convert the CMSampleBuffer received by the delegate into a UIImage object

    Note: To focus on the most relevant code, this example omits several aspects of a complete application, including memory management. To use AV Foundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

    • Create and Configure a Capture Session

    You use an AVCaptureSession object to coordinate the flow of data from an AV input device to an output. Create a session, and configure it to produce medium-resolution video frames.

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;
    
    • Create and Configure the Device and Device Input

    Capture devices are represented by AVCaptureDevice objects; the class provides methods to retrieve an object for the input type you want. A device has one or more ports, configured using an AVCaptureInput object. Typically, you use the capture input in its default configuration.

    Find a video capture device, then create a device input with the device and add it to the session. If an appropriate device cannot be located, then the deviceInputWithDevice:error: method will return an error by reference.

    AVCaptureDevice *device =
            [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
     
    NSError *error = nil;
    AVCaptureDeviceInput *input =
            [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];
    
    • Create and Configure the Video Data Output

    You use an AVCaptureVideoDataOutput object to process uncompressed frames from the video being captured. You typically configure several aspects of an output. For video, for example, you can specify the pixel format using the videoSettings property and cap the frame rate by setting the minFrameDuration property.

    Create and configure an output for video data and add it to the session; cap the frame rate to 15 fps by setting the minFrameDuration property to 1/15 second:

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];
    output.videoSettings =
                    @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    output.minFrameDuration = CMTimeMake(1, 15);
    

    The data output object uses delegation to vend the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output’s delegate, you must also provide a queue on which callbacks should be invoked.

    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    

    You use the queue to modify the priority given to delivering and processing the video frames.

    • Implement the Sample Buffer Delegate Method

    In the delegate class, implement the method (captureOutput:didOutputSampleBuffer:fromConnection:) that is called when a sample buffer is written. The video data output object delivers frames as CMSampleBuffer opaque types, so you need to convert from the CMSampleBuffer opaque type to a UIImage object. The function for this operation is shown in Converting CMSampleBuffer to a UIImage Object.

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
             didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
             fromConnection:(AVCaptureConnection *)connection {
     
        UIImage *image = imageFromSampleBuffer(sampleBuffer);
        // Add your code here that uses the image.
    } 
    

    Remember that the delegate method is invoked on the queue you specified in setSampleBufferDelegate:queue:; if you want to update the user interface, you must invoke any relevant code on the main thread.

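    For example (a sketch; imageView is a hypothetical UIImageView property updated from the delegate callback):

    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    dispatch_async(dispatch_get_main_queue(), ^{
        // UI work must happen on the main thread.
        self.imageView.image = image;
    });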

    • Starting and Stopping Recording

    After configuring the capture session, you should ensure that the camera has permission to record according to the user’s preferences.

    NSString *mediaType = AVMediaTypeVideo;
     
    [AVCaptureDevice requestAccessForMediaType:mediaType completionHandler:^(BOOL granted) {
        if (granted)
        {
            //Granted access to mediaType
            [self setDeviceAuthorized:YES];
        }
        else
        {
            //Not granted access to mediaType
            dispatch_async(dispatch_get_main_queue(), ^{
            [[[UIAlertView alloc] initWithTitle:@"AVCam!"
                                        message:@"AVCam doesn't have permission to use Camera, please change privacy settings"
                                       delegate:self
                              cancelButtonTitle:@"OK"
                              otherButtonTitles:nil] show];
                    [self setDeviceAuthorized:NO];
            });
        }
    }];
    

    If the camera session is configured and the user has approved access to the camera (and if required, the microphone), send a startRunning message to start the recording.

    Important: The startRunning method is a blocking call which can take some time, therefore you should perform session setup on a serial queue so that the main queue isn’t blocked (which keeps the UI responsive). See AVCam-iOS: Using AVFoundation to Capture Images and Movies for the canonical implementation example.

    [session startRunning];
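
    A minimal sketch of the serial-queue pattern the Important note above recommends (the queue label is illustrative):

    dispatch_queue_t sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL);
    dispatch_async(sessionQueue, ^{
        // The blocking call runs off the main queue, keeping the UI responsive.
        [session startRunning];
    });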
    

    To stop recording, you send the session a stopRunning message.

    High Frame Rate Video Capture

    iOS 7.0 introduces high frame rate video capture support (also referred to as “SloMo” video) on selected hardware. The full AVFoundation framework supports high frame rate content.

    You determine the capture capabilities of a device using the AVCaptureDeviceFormat class. This class has methods that return the supported media types, frame rates, field of view, maximum zoom factor, whether video stabilization is supported, and more.

    • Capture supports full 720p (1280 x 720 pixels) resolution at 60 frames per second (fps), including video stabilization and droppable P-frames (a feature of H264-encoded movies that allows the movies to play back smoothly even on slower and older hardware).
    • Playback has enhanced audio support for slow and fast playback, allowing the time pitch of the audio to be preserved at slower or faster speeds.
    • Editing has full support for scaled edits in mutable compositions.
    • Export provides two options when supporting 60 fps movies. The variable frame rate, slow or fast motion, can be preserved, or the movie can be converted to an arbitrary slower frame rate such as 30 frames per second.

    The SloPoke sample code demonstrates the AVFoundation support for fast video capture, determining whether hardware supports high frame rate video capture, playback using various rates and time pitch algorithms, and editing (including setting time scales for portions of a composition).

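    A minimal sketch of interrogating AVCaptureDeviceFormat for a 60 fps capable format and selecting it (iOS 7-era APIs; error handling elided):

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceFormat *bestFormat = nil;
    for (AVCaptureDeviceFormat *format in device.formats) {
        for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
            if (range.maxFrameRate >= 60.0) {
                bestFormat = format;
            }
        }
    }
    if (bestFormat && [device lockForConfiguration:NULL]) {
        device.activeFormat = bestFormat;
        device.activeVideoMinFrameDuration = CMTimeMake(1, 60);
        device.activeVideoMaxFrameDuration = CMTimeMake(1, 60);
        [device unlockForConfiguration];
    }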

    • Playback

    An instance of AVPlayer manages most of the playback speed automatically by setting the setRate: method value. The value is used as a multiplier for the playback speed. A value of 1.0 causes normal playback, 0.5 plays back at half speed, 5.0 plays back five times faster than normal, and so on.

    The AVPlayerItem object supports the audioTimePitchAlgorithm property. This property allows you to specify how audio is played when the movie is played at various frame rates using the Time Pitch Algorithm Settings constants.

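    For example (a sketch, assuming player is an AVPlayer whose current item is ready to play):

    player.currentItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmSpectral;
    [player setRate:2.0]; // Twice normal speed, preserving the original pitch.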

    The following table shows the supported time pitch algorithms, the quality, whether the algorithm causes the audio to snap to specific frame rates, and the frame rate range that each algorithm supports.

    | Time pitch algorithm | Quality | Snaps to specific frame rate | Rate range |
    | --- | --- | --- | --- |
    | AVAudioTimePitchAlgorithmLowQualityZeroLatency | Low quality, suitable for fast-forward, rewind, or low quality voice. | YES | 0.5, 0.666667, 0.8, 1.0, 1.25, 1.5, 2.0 rates. |
    | AVAudioTimePitchAlgorithmTimeDomain | Modest quality, less expensive computationally, suitable for voice. | NO | 0.5–2x rates. |
    | AVAudioTimePitchAlgorithmSpectral | Highest quality, most expensive computationally, preserves the pitch of the original item. | NO | 1/32–32 rates. |
    | AVAudioTimePitchAlgorithmVarispeed | High-quality playback with no pitch correction. | NO | 1/32–32 rates. |

    • Editing

    When editing, you use the AVMutableComposition class to build temporal edits, as sketched below.

    • Create a new AVMutableComposition instance using the composition class method.
    • Insert your video asset using the insertTimeRange:ofAsset:atTime:error: method.
    • Set the time scale of a portion of the composition using scaleTimeRange:toDuration:

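    A sketch of those three steps (assuming a 60 fps source asset; error handling elided):

    AVMutableComposition *composition = [AVMutableComposition composition];
    AVAsset *videoAsset = <#A 60 fps video asset#>;
    NSError *error = nil;
    // Insert the whole asset at the start of the composition.
    [composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
                         ofAsset:videoAsset
                          atTime:kCMTimeZero
                           error:&error];
    // Scale the first second to last two seconds: half-speed slow motion.
    [composition scaleTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeMake(1, 1))
                     toDuration:CMTimeMake(2, 1)];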

    • Export

    Exporting 60 fps video uses the AVAssetExportSession class to export an asset. The content can be exported using two techniques:

    Use the AVAssetExportPresetPassthrough preset to avoid reencoding the movie. It retimes the media with the sections of the media tagged as section 60 fps, section slowed down, or section sped up.

    Use a constant frame rate export for maximum playback compatibility. Set the frameDuration property of the video composition to 30 fps. You can also specify the time pitch by setting the export session’s audioTimePitchAlgorithm property.

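    A sketch of the constant-frame-rate option (assuming the composition from the editing step; the preset and output settings are illustrative):

    AVMutableVideoComposition *videoComposition =
        [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:composition];
    videoComposition.frameDuration = CMTimeMake(1, 30); // Constant 30 fps.

    AVAssetExportSession *exportSession =
        [[AVAssetExportSession alloc] initWithAsset:composition
                                         presetName:AVAssetExportPresetHighestQuality];
    exportSession.videoComposition = videoComposition;
    exportSession.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmSpectral;
    exportSession.outputURL = <#An output URL#>;
    exportSession.outputFileType = AVFileTypeQuickTimeMovie;
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        // Check exportSession.status and handle any error.
    }];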

    • Recording

    You capture high frame rate video using the AVCaptureMovieFileOutput class, which automatically supports high frame rate recording. It will automatically select the correct H264 profile level and bit rate.

    To do custom recording, you must use the AVAssetWriter class, which requires some additional setup.

    assetWriterInput.expectsMediaDataInRealTime=YES;
    

    This setting ensures that the capture can keep up with the incoming data.
