Recording Short Videos with AVAssetWriter


Author: PonyCui | Published 2015-12-02 18:35 · 6382 reads

    Preface

    I've recently been working on a small project that uses AVFoundation to record short videos. I ran into quite a few pitfalls along the way, and I want to write them down here.

    The AVFoundation framework is remarkably complex yet well organized (credit to Apple's engineers). To record a short video with AVFoundation and save it to a file, you will need the following classes.

    • AVCaptureSession
    • AVCaptureDeviceInput
    • AVCaptureVideoDataOutput
    • AVCaptureAudioDataOutput
    • AVCaptureConnection
    • AVAssetWriter
    • AVAssetWriterInput

    Class Definition

    Start with a session class and declare the required properties up front.

    class VideoSessionEntity: NSObject {
        
        let session = AVCaptureSession()
        var tmpFileURL: NSURL?
        
        var videoInput: AVCaptureDeviceInput?
        var videoOutput: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
        var videoConnection: AVCaptureConnection?
        
        var audioInput: AVCaptureDeviceInput?
        var audioOutput: AVCaptureAudioDataOutput = AVCaptureAudioDataOutput()
        var audioConnection: AVCaptureConnection?
        
        var assetWriter: AVAssetWriter?
        var videoWriterInput: AVAssetWriterInput?
        var audioWriterInput: AVAssetWriterInput?
    
    }
    

    We need all of these properties to get the job done; they cover both the video and the audio side.

    • session is the control center for the whole recording
    • videoInput is the entry point for the video stream
    • videoOutput is the exit point for the video stream
    • videoConnection controls the video stream
    • audioInput is the entry point for the audio stream
    • audioOutput is the exit point for the audio stream
    • audioConnection controls the audio stream

    Creating the Callback Queues

    Both videoOutput and audioOutput need a delegate, and they share the same delegate method. Each one also needs its own dispatch_queue_t for asynchronous GCD callbacks; creating them like this is enough.

    let videoDataOutputQueue = dispatch_queue_create("com.firefly.videoDataOutputQueue", DISPATCH_QUEUE_SERIAL)
    let audioDataOutputQueue = dispatch_queue_create("com.firefly.audioDataOutputQueue", DISPATCH_QUEUE_SERIAL)
    

    Initialization

    Before initializing these properties, write two helper methods for fetching the capture devices.

    func deviceForVideo() -> AVCaptureDevice? {
        let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo)
        for device in devices {
            if let device = device as? AVCaptureDevice {
            if device.position == self.videoSource { // choose the front or back camera here
                    return device
                }
            }
        }
        return nil
    }
    
    func deviceForAudio() -> AVCaptureDevice? {
        let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeAudio)
        for device in devices {
            if let device = device as? AVCaptureDevice {
                return device
            }
        }
        return nil
    }
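
    One prerequisite the two helpers above don't cover is permission: on iOS the user must grant camera and microphone access before a capture device will deliver any data. Here is a minimal sketch (the function name is my own; the APIs match the Swift 2 era style used throughout this post):

```swift
import AVFoundation

// Request camera access before configuring the session; calling the same
// API with AVMediaTypeAudio covers the microphone.
func requestCameraAccess(completion: (granted: Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo) {
    case .Authorized:
        completion(granted: true)
    case .NotDetermined:
        AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo) { granted in
            // The callback arrives on an arbitrary queue; hop back to main.
            dispatch_async(dispatch_get_main_queue()) {
                completion(granted: granted)
            }
        }
    default: // .Denied or .Restricted
        completion(granted: false)
    }
}
```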
    

    Now we can initialize the properties. None of it is difficult, just tedious.

    func configureVideo() {
        if let device = FireFlyCore.sharedCore.post.videoManager.deviceForVideo() {
            do {
                try device.lockForConfiguration()
                device.activeVideoMaxFrameDuration = CMTimeMake(1, 24) // cap capture at 24 fps
                device.unlockForConfiguration()
            }
            catch _ {
            }
            do {
                videoInput = try AVCaptureDeviceInput(device: device)
                videoOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
                if session.canAddInput(videoInput) {
                    session.addInput(videoInput)
                }
                if session.canAddOutput(videoOutput) {
                    session.addOutput(videoOutput)
                }
                if session.canSetSessionPreset(AVCaptureSessionPreset352x288) {
                    session.sessionPreset = AVCaptureSessionPreset352x288
                }
                videoConnection = videoOutput.connectionWithMediaType(AVMediaTypeVideo)
            }
            catch _ {
                
            }
        }
    }
    
    func configureAudio() {
        if let device = FireFlyCore.sharedCore.post.videoManager.deviceForAudio() {
            do {
                audioInput = try AVCaptureDeviceInput(device: device)
                audioOutput.setSampleBufferDelegate(self, queue: audioDataOutputQueue)
                if session.canAddInput(audioInput) {
                    session.addInput(audioInput)
                }
                if session.canAddOutput(audioOutput) {
                    session.addOutput(audioOutput)
                }
                audioConnection = audioOutput.connectionWithMediaType(AVMediaTypeAudio)
            }
            catch _ {
                
            }
        }
    }
    

    If everything is in order, you can now call session.startRunning(). At this point the delegate method func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) should start receiving a steady stream of sample buffers, which means everything is working.
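
    Putting the pieces together, a startup sequence might look like the sketch below (startSession is a hypothetical entry point; startRunning() is a blocking call, so Apple recommends invoking it off the main thread):

```swift
// Hypothetical wrapper tying the configuration steps together.
func startSession() {
    configureVideo()
    configureAudio()
    // startRunning() blocks until the session is live, so keep it off
    // the main queue.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
        self.session.startRunning()
    }
}
```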

    Writing to a File

    Next we need to initialize the AssetWriter. It takes a few parameter dictionaries that describe the video and audio compression settings.

    let videoSetting: [String : AnyObject] = [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: 320,
        AVVideoHeightKey: 240,
        AVVideoCompressionPropertiesKey: [
            AVVideoPixelAspectRatioKey: [
                AVVideoPixelAspectRatioHorizontalSpacingKey: 1,
                AVVideoPixelAspectRatioVerticalSpacingKey: 1
            ],
            AVVideoMaxKeyFrameIntervalKey: 1,
            AVVideoAverageBitRateKey: 1280000
        ]
    ]
    
    let audioSetting: [String: AnyObject] = [
        AVFormatIDKey: NSNumber(unsignedInt: kAudioFormatMPEG4AAC),
        AVNumberOfChannelsKey: 1,
        AVSampleRateKey: 22050
    ]
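
    The 1280000 bps figure above is fairly generous for 320×240. As a rough rule of thumb (a heuristic of mine, not anything Apple documents), you can size AVVideoAverageBitRateKey from the frame dimensions and frame rate:

```swift
// Rough H.264 bit-rate heuristic: bits-per-pixel × pixels × fps.
// 0.5 bpp is a reasonable starting point at this resolution.
func estimatedBitRate(width: Int, height: Int, fps: Int, bitsPerPixel: Double = 0.5) -> Int {
    return Int(Double(width * height * fps) * bitsPerPixel)
}
// estimatedBitRate(320, height: 240, fps: 24) → 921600
```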
    

    Then initialize the AssetWriter itself.

    • Note that this is initialization only; do not call startWriting() here. If you do, you will no longer be able to call startSessionAtSourceTime() later.
    • Another pitfall: expectsMediaDataInRealTime must be set to true, otherwise the video will drop frames.

    func startRecording() {
        tmpFileURL = NSURL.fileURLWithPath("\(NSTemporaryDirectory())tmp\(arc4random()).mp4")
        do {
            assetWriter = try AVAssetWriter(URL: tmpFileURL!, fileType: AVFileTypeMPEG4)
            videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSetting)
            videoWriterInput?.expectsMediaDataInRealTime = true
            videoWriterInput?.transform = CGAffineTransformMakeRotation(CGFloat(M_PI / 2))
            if assetWriter!.canAddInput(videoWriterInput!) {
                assetWriter!.addInput(videoWriterInput!)
            }
            audioWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSetting)
            audioWriterInput?.expectsMediaDataInRealTime = true
            if assetWriter!.canAddInput(audioWriterInput!) {
                assetWriter!.addInput(audioWriterInput!)
            }
        }
        catch _ {
            
        }
    }
    
    func endRecording() {
        if let assetWriter = assetWriter {
            if let videoWriterInput = videoWriterInput {
                videoWriterInput.markAsFinished()
            }
            if let audioWriterInput = audioWriterInput {
                audioWriterInput.markAsFinished()
            }
            assetWriter.finishWritingWithCompletionHandler({ () -> Void in
                
            })
        }
    }
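
    The empty completion handler above is where you consume the finished file. Here is a sketch that moves the clip out of the temporary directory once writing completes (it assumes the tmpFileURL property from earlier):

```swift
assetWriter.finishWritingWithCompletionHandler({ () -> Void in
    // The file at tmpFileURL is now complete and playable; move it
    // somewhere durable before the system purges tmp/.
    let fileManager = NSFileManager.defaultManager()
    if let srcURL = self.tmpFileURL,
        docDir = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask).first,
        fileName = srcURL.lastPathComponent {
        _ = try? fileManager.moveItemAtURL(srcURL, toURL: docDir.URLByAppendingPathComponent(fileName))
    }
})
```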
    

    Appending the Sample Buffers

    The delegate callback can be written along these lines; if you want to apply filters to the video, this is the place to do it.

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        objc_sync_enter(self)
        defer { objc_sync_exit(self) } // release the lock even on the early return below
        if let assetWriter = assetWriter {
            if assetWriter.status != .Writing && assetWriter.status != .Unknown {
                return
            }
        }
        if let assetWriter = assetWriter where assetWriter.status == AVAssetWriterStatus.Unknown {
            assetWriter.startWriting()
            assetWriter.startSessionAtSourceTime(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
        // The delegate is already invoked on the queue we registered for each
        // output, and the sample buffer is only guaranteed to stay valid until
        // this method returns, so append synchronously instead of dispatching.
        if connection == self.videoConnection {
            if let videoWriterInput = self.videoWriterInput where videoWriterInput.readyForMoreMediaData {
                videoWriterInput.appendSampleBuffer(sampleBuffer)
            }
        }
        else if connection == self.audioConnection {
            if let audioWriterInput = self.audioWriterInput where audioWriterInput.readyForMoreMediaData {
                audioWriterInput.appendSampleBuffer(sampleBuffer)
            }
        }
    }
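
    One last pitfall, for anyone wondering about recording a second clip: an AVAssetWriter is single-use. Once finishWritingWithCompletionHandler has run, its status becomes .Completed and it can never write again, so every new recording needs a fresh writer and a fresh temp URL. A sketch (resetForNextRecording is a name of my own):

```swift
// AVAssetWriter is one-shot: tear everything down after finishing,
// then call startRecording() again to build a fresh writer and file.
func resetForNextRecording() {
    assetWriter = nil
    videoWriterInput = nil
    audioWriterInput = nil
    tmpFileURL = nil
}
```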
    

      Reader Comments

      • 陈_某_某: Is it true that an AVAssetWriter can't record twice? Every time I save to disk I have to create a new instance.
      • biyuhuaping: Well written, I learned a lot. It would be even better with the full source code attached.
      • 曼妙的汉子: There's no need to dispatch appendSampleBuffer onto the audio/video queue, is there? The callback already runs on that queue. What's the intent?
      • puppySweet: Could we get an Objective-C version? Is AVAssetWriter standalone? The code seems incomplete.

      Original link: https://www.haomeiwen.com/subject/mvxshttx.html