iOS Video Recording (Pause/Resume Recording): The Full Process


Author: 茄子_Apple | Published 2017-03-03 18:04

    Being able to attach only images to a post no longer satisfies our users, so per the product requirements we also need to support posting video, i.e. video recording with pause/resume (segmented) recording.

    Since this area was still fairly unfamiliar to me, I did some quick studying and organizing recently and have now finished the pause/resume video recording, video playback and related features. This post is a summary of that work,
    and also a note for my future self...

    Below are the classes needed for video recording:

    • AVCaptureSession -- the central hub of AVFoundation's capture pipeline
    • AVCaptureVideoPreviewLayer -- the layer that displays the camera preview while recording
    • AVCaptureDeviceInput -- the capture input for a device (camera or microphone)
    • AVCaptureConnection -- represents a connection between AVCaptureInputPorts and an AVCaptureOutput or AVCaptureVideoPreviewLayer within an AVCaptureSession
    • AVCaptureVideoDataOutput -- the video data output
    • AVCaptureAudioDataOutput -- the audio data output
    • AVAssetWriter -- writes media data to a new file
    • AVAssetWriterInput -- appends media samples (CMSampleBuffer instances) to the AVAssetWriter's output file; this is the key to pause/resume recording
    • AVCaptureMovieFileOutput -- use this class with caution: it can produce a video file, but it cannot pause and resume a recording and you cannot customize the output file type, which is worth noting

    Each of the objects above is implemented as a lazily loaded property and assembled in viewDidLoad; you can also switch between the front and back cameras. Implement the delegate methods, which are called back while recording.
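
    The assembly code itself is not included in this post. As a rough illustration only, a minimal setup could look like the following sketch (the method name openSession and the names _recordSession, _previewLayer and _captureQueue are placeholders of my own, not from the original project):

    // Assemble the capture pipeline; called from viewDidLoad.
    - (void)openSession {
        _captureQueue = dispatch_queue_create("com.example.capture", DISPATCH_QUEUE_SERIAL);
        _recordSession = [[AVCaptureSession alloc] init];
        _recordSession.sessionPreset = AVCaptureSessionPreset640x480;
        // Camera input (back camera here; switching cameras means swapping this input).
        AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
        if ([_recordSession canAddInput:videoInput]) {
            [_recordSession addInput:videoInput];
        }
        // Microphone input.
        AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
        if ([_recordSession canAddInput:audioInput]) {
            [_recordSession addInput:audioInput];
        }
        // The lazily loaded sample-buffer outputs shown below.
        if ([_recordSession canAddOutput:self.videoOutput]) {
            [_recordSession addOutput:self.videoOutput];
        }
        if ([_recordSession canAddOutput:self.audioOutput]) {
            [_recordSession addOutput:self.audioOutput];
        }
        // Preview layer on top of the view.
        _previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_recordSession];
        _previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
        _previewLayer.frame = self.view.bounds;
        [self.view.layer insertSublayer:_previewLayer atIndex:0];
        [_recordSession startRunning];
    }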

    When instantiating AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, set their sample-buffer delegates. The protocols to adopt are:
    AVCaptureVideoDataOutputSampleBufferDelegate
    AVCaptureAudioDataOutputSampleBufferDelegate
    For example:

     [_videoOutput setSampleBufferDelegate:self queue:self.captureQueue];
    
     [_audioOutput setSampleBufferDelegate:self queue:self.captureQueue];
    // Video data output (lazily loaded)
    - (AVCaptureVideoDataOutput *)videoOutput {
        if (!_videoOutput) {
            _videoOutput = [[AVCaptureVideoDataOutput alloc] init];
            [_videoOutput setSampleBufferDelegate:self queue:self.captureQueue];
            NSDictionary *setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                            [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,
                                            nil];
            _videoOutput.videoSettings = setcapSettings;
        }
        return _videoOutput;
    }
    
    // Audio data output (lazily loaded)
    - (AVCaptureAudioDataOutput *)audioOutput {
        if (!_audioOutput) {
            _audioOutput = [[AVCaptureAudioDataOutput alloc] init];
            [_audioOutput setSampleBufferDelegate:self queue:self.captureQueue];
        }
        return _audioOutput;
    }
    

    Implement the delegate method to write the data: it receives the CMSampleBufferRef objects and hands them to the encoder:

    #pragma mark - Writing the data
    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
        BOOL isVideo = YES;
        @synchronized(self) {
            if (!self.isCapturing  || self.isPaused) {
                return;
            }
            if (captureOutput != self.videoOutput) {
                isVideo = NO;
            }
            // Create the encoder once the audio format (and thus all recording parameters) is available
            if ((self.recordEncoder == nil) && !isVideo) {
                CMFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(sampleBuffer);
                [self setAudioFormat:fmt];
                NSString *videoName = [self getUploadFile_type:@"video" fileType:@"mp4"];
                self.videoPath = [[self getVideoCachePath] stringByAppendingPathComponent:videoName];
                self.recordEncoder = [WCLRecordEncoder encoderForPath:self.videoPath Height:_cy width:_cx channels:_channels samples:_samplerate];
            }
            // Check whether recording was paused at some point
            if (self.discont) {
                if (isVideo) {
                    return;
                }
                self.discont = NO;
                // Work out how long the recording was paused
                CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
                CMTime last = isVideo ? _lastVideo : _lastAudio;
                if (last.flags & kCMTimeFlags_Valid) {
                    if (_timeOffset.flags & kCMTimeFlags_Valid) {
                        pts = CMTimeSubtract(pts, _timeOffset);
                    }
                    CMTime offset = CMTimeSubtract(pts, last);
                    if (_timeOffset.value == 0) {
                        _timeOffset = offset;
                    }else {
                        _timeOffset = CMTimeAdd(_timeOffset, offset);
                    }
                }
                _lastVideo.flags = 0;
                _lastAudio.flags = 0;
            }
            // Retain the sampleBuffer so we can keep or replace it without it being freed underneath us
            CFRetain(sampleBuffer);
            if (_timeOffset.value > 0) {
                CFRelease(sampleBuffer);
                // Shift the timestamps by the accumulated timeOffset
                sampleBuffer = [self adjustTime:sampleBuffer by:_timeOffset];
            }
            // Remember the timestamp of the last sample, used to measure the gap after a pause
            CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            CMTime dur = CMSampleBufferGetDuration(sampleBuffer);
            if (dur.value > 0) {
                pts = CMTimeAdd(pts, dur);
            }
            if (isVideo) {
                _lastVideo = pts;
            }else {
                _lastAudio = pts;
            }
        }
        // Track the elapsed recording time from the first sample's presentation timestamp.
        CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        if (self.startTime.value == 0) {
            self.startTime = pts;
        }
        CMTime sub = CMTimeSubtract(pts, self.startTime);
        self.currentRecordTime = CMTimeGetSeconds(sub);
        if (self.currentRecordTime > self.maxRecordTime) {
            if (self.currentRecordTime - self.maxRecordTime < 0.1) {
                if ([self.delegate respondsToSelector:@selector(recordProgress:)]) {
                    dispatch_async(dispatch_get_main_queue(), ^{
                        [self.delegate recordProgress:self.currentRecordTime/self.maxRecordTime];
                    });
                }
            }
            // Release the buffer retained above before returning early, otherwise it leaks.
            CFRelease(sampleBuffer);
            return;
        }
        if ([self.delegate respondsToSelector:@selector(recordProgress:)]) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [self.delegate recordProgress:self.currentRecordTime/self.maxRecordTime];
            });
        }
        // Hand the sample to the encoder
        [self.recordEncoder encodeFrame:sampleBuffer isVideo:isVideo];
        CFRelease(sampleBuffer);
    }
    
    // Shift the timing info of a media sample by the given offset
    - (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset {
        CMItemCount count;
        CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
        CMSampleTimingInfo* pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
        CMSampleBufferGetSampleTimingInfoArray(sample, count, pInfo, &count);
        for (CMItemCount i = 0; i < count; i++) {
            pInfo[i].decodeTimeStamp = CMTimeSubtract(pInfo[i].decodeTimeStamp, offset);
            pInfo[i].presentationTimeStamp = CMTimeSubtract(pInfo[i].presentationTimeStamp, offset);
        }
        CMSampleBufferRef sout;
        CMSampleBufferCreateCopyWithNewTiming(nil, sample, count, pInfo, &sout);
        free(pInfo);
        return sout;
    }
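
    The flags used above (isCapturing, isPaused, discont) and the accumulated _timeOffset are driven by a few small control methods that the post does not show. A rough sketch of what they could look like, based purely on how the flags are read in the delegate method (the method names here are my own):

    // Start a fresh recording: reset the timing state and begin accepting sample buffers.
    - (void)startCapture {
        @synchronized(self) {
            if (!self.isCapturing) {
                self.recordEncoder = nil;
                self.isPaused = NO;
                self.discont = NO;
                _timeOffset = kCMTimeZero;
                self.startTime = kCMTimeZero;
                self.isCapturing = YES;
            }
        }
    }

    // Pause: stop appending buffers and note that a gap in the timestamps is coming.
    - (void)pauseCapture {
        @synchronized(self) {
            if (self.isCapturing) {
                self.isPaused = YES;
                self.discont = YES; // buffers arriving after resume need their timestamps shifted
            }
        }
    }

    // Resume: the delegate method measures the gap and folds it into _timeOffset.
    - (void)resumeCapture {
        @synchronized(self) {
            if (self.isPaused) {
                self.isPaused = NO;
            }
        }
    }

    Likewise, the WCLRecordEncoder used in the delegate (the AVAssetWriter wrapper that makes segmented recording possible) is not listed in this post; it comes from the demo project linked in the comments below. To make the role of AVAssetWriter / AVAssetWriterInput concrete, a minimal encodeFrame:isVideo: could look roughly like this. This is an illustrative sketch, not the actual WCLRecordEncoder source; _writer, _videoInput and _audioInput are assumed to be created in the initializer from the path and format parameters passed above:

    // Append one CMSampleBuffer to the output file via the matching AVAssetWriterInput.
    - (BOOL)encodeFrame:(CMSampleBufferRef)sampleBuffer isVideo:(BOOL)isVideo {
        if (CMSampleBufferDataIsReady(sampleBuffer)) {
            // Start the writer session at the first buffer's presentation timestamp.
            if (_writer.status == AVAssetWriterStatusUnknown) {
                CMTime startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
                [_writer startWriting];
                [_writer startSessionAtSourceTime:startTime];
            }
            if (_writer.status == AVAssetWriterStatusFailed) {
                NSLog(@"writer error: %@", _writer.error.localizedDescription);
                return NO;
            }
            AVAssetWriterInput *input = isVideo ? _videoInput : _audioInput;
            if (input.readyForMoreMediaData) {
                return [input appendSampleBuffer:sampleBuffer];
            }
        }
        return NO;
    }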
    

    The corresponding way to grab the first-frame image of a video, using AVAssetImageGenerator:

    #pragma mark - Private Method
    - (void)addVideoInfo:(CMAlbumVideoInfo *)videoInfo duration:(CMTime)duration asset:(AVAsset *)asset
    {
        AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
        generator.appliesPreferredTrackTransform = TRUE;
        generator.apertureMode = AVAssetImageGeneratorApertureModeEncodedPixels;
        NSError *thumbnailImageGenerationError = nil;
        CGImageRef thumbnailImageRef = [generator copyCGImageAtTime:duration actualTime:nil error:&thumbnailImageGenerationError];
        if (!thumbnailImageRef) {
             NSLog(@"thumbnailImageGenerationError %@",thumbnailImageGenerationError);
        }
        UIImage *thumbnailImage = thumbnailImageRef ? [[UIImage alloc] initWithCGImage:thumbnailImageRef] : nil;
        if (thumbnailImageRef) {
            CGImageRelease(thumbnailImageRef); // copyCGImageAtTime: returns a +1 CGImage that must be released
        }
        // ... store thumbnailImage (and duration) on videoInfo as needed
    }
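
    To grab the first frame, pass kCMTimeZero as the duration parameter. A usage sketch, assuming a CMAlbumVideoInfo instance named videoInfo already exists:

    AVAsset *asset = [AVAsset assetWithURL:[NSURL fileURLWithPath:self.videoPath]];
    [self addVideoInfo:videoInfo duration:kCMTimeZero asset:asset]; // kCMTimeZero -> first frame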
    

    The corresponding video compression function:

    #pragma mark - Video compression
    - (void)compressVideoWithVideoURL:(NSURL *)videoURL savedName:(NSString *)savedName completion:(void (^)(NSString *savedPath))completion {
        // Accessing video by URL
        AVURLAsset *videoAsset = [[AVURLAsset alloc] initWithURL:videoURL options:nil];
        // Find compatible presets by video asset.
        NSArray *presets = [AVAssetExportSession exportPresetsCompatibleWithAsset:videoAsset];
        // Begin to compress the video.
        // For now we just compress to a low resolution if it is supported.
        // If you need to upload to a server that doesn't support streaming uploads,
        // you can compress to an even lower resolution, or keep a higher one if needed.
        if ([presets containsObject:AVAssetExportPreset640x480]) {
            AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:videoAsset  presetName:AVAssetExportPreset640x480];
            
            NSString *doc = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];
            NSString *folder = [doc stringByAppendingPathComponent:@"HYBVideos"];
            BOOL isDir = NO;
            BOOL isExist = [[NSFileManager defaultManager] fileExistsAtPath:folder isDirectory:&isDir];
            if (!isExist || (isExist && !isDir)) {
                NSError *error = nil;
                BOOL created = [[NSFileManager defaultManager] createDirectoryAtPath:folder
                                                  withIntermediateDirectories:YES
                                                                   attributes:nil
                                                                        error:&error];
                if (created) {
                    NSLog(@"Directory created successfully");
                } else {
                    NSLog(@"Failed to create directory: %@", error);
                }
            }
            
            NSString *outPutPath = [folder stringByAppendingPathComponent:savedName];
            session.outputURL = [NSURL fileURLWithPath:outPutPath];
            
            // Optimize for network use.
            session.shouldOptimizeForNetworkUse = true;
            
            NSArray *supportedTypeArray = session.supportedFileTypes;
            if ([supportedTypeArray containsObject:AVFileTypeMPEG4]) {
                session.outputFileType = AVFileTypeMPEG4;
            } else if (supportedTypeArray.count == 0) {
                NSLog(@"No supported file types");
                // Let the caller know compression did not happen instead of silently returning.
                if (completion) {
                    completion(nil);
                }
                return;
            } else {
                session.outputFileType = [supportedTypeArray objectAtIndex:0];
            }
            
            // Begin to export video to the output path asynchronously.
            [session exportAsynchronouslyWithCompletionHandler:^{
                if ([session status] == AVAssetExportSessionStatusCompleted) {
                    dispatch_async(dispatch_get_main_queue(), ^{
                        if (completion) {
                            completion([session.outputURL path]);
                        }
                    });
                } else {
                    dispatch_async(dispatch_get_main_queue(), ^{
                        if (completion) {
                            completion(nil);
                        }
                    });
                }
            }];
        }
    }
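
    For reference, a call to this compression method might look like the following (recordedURL and the saved file name are placeholders of my own):

    NSURL *recordedURL = [NSURL fileURLWithPath:self.videoPath];
    [self compressVideoWithVideoURL:recordedURL
                          savedName:@"compressed.mp4"
                         completion:^(NSString *savedPath) {
        if (savedPath) {
            NSLog(@"Compressed video saved to %@", savedPath);
        } else {
            NSLog(@"Video compression failed");
        }
    }];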
    

    The above is a small summary of my own. I have only posted some of the relevant functions, all of which can be copied and used directly, so the overall flow may not be entirely clear from this article alone. If you are building similar functionality or just want to learn more, feel free to get in touch and we can discuss it together!

    Let's keep making progress together on the road of iOS! Feel free to leave a comment!


      Reader comments

      • 船长One: I'd like to ask how landscape orientation is handled; after rotating the demo to landscape the image no longer matches.
        茄子_Apple: Sorry, I haven't done the landscape adaptation on my side yet; it's still on the to-research list!
      • Arex: Could you also write about saving recordings and trimming?
        茄子_Apple: @Arex I've been quite busy lately and really haven't had time to write it up. I'll look at it when I have time; there should be plenty of material online on this, so have a look around first!
        Arex: @茄子_Apple Please find some time to share how to do trimming.
        茄子_Apple: @Arex Recording and trimming are both in the project! I just haven't extracted them yet.
      • ffbc345c87dc: Can't AVAssetWriter pick up where it left off after the app goes to the background and comes back?
        茄子_Apple: This is about (pause/resume) video recording; what you need is probably resumable downloading?
      • ffbc345c87dc: Could you share a demo? 137518718@qq.com, much appreciated.
        茄子_Apple: @JVSFlipped http://blog.imwcl.com/2016/05/25/iOS开发进阶%20-%20用AVFoundation自定义视频录制功能/ (copy the whole link into a browser to open it)
        JVSFlipped: @茄子_Apple That link 404s; could you send a demo? 2141977862@qq.com, much appreciated.
        茄子_Apple: You can copy this link: http://blog.imwcl.com/2016/05/25/iOS开发进阶%20-%20用AVFoundation自定义视频录制功能/
        There is a demo there.
      • 来宝: Is there a demo?
        茄子_Apple: http://blog.imwcl.com/2016/05/25/iOS开发进阶%20-%20用AVFoundation自定义视频录制功能/
        This is another developer's blog that includes a demo; it's what I studied as well. Feel free to discuss any questions!
      • 空壳子XJ: Does getting a video snapshot require the video to be played first?
        茄子_Apple: I'm not sure I follow; do you mean grabbing the first-frame image of the video?
