I have recently been learning about video recording and found a very detailed blog post online; this article is my set of study notes on it.
Terminology
- AVCaptureSession: the media (audio/video) capture session. It coordinates the flow of captured audio and video data from the inputs to the outputs; a single session can have multiple inputs and outputs.
- AVCaptureDevice: an input device such as a camera or microphone. Through this object you configure the physical device's properties (the camera's focus, white balance, and so on).
- AVCaptureDeviceInput: the input data-management object. You create an AVCaptureDeviceInput from an AVCaptureDevice and add it to the AVCaptureSession, which manages it.
- AVCaptureOutput: the output data-management object that receives the captured data. You normally use one of its subclasses: AVCaptureAudioDataOutput, AVCaptureStillImageOutput, AVCaptureVideoDataOutput, or AVCaptureFileOutput. The object is added to the AVCaptureSession, which manages it. Note: the first few deliver their output as NSData, while AVCaptureFileOutput writes its data to a file. AVCaptureFileOutput is likewise not used directly; you use its subclasses AVCaptureAudioFileOutput and AVCaptureMovieFileOutput. When an input or an output is added to an AVCaptureSession, the session forms connections (AVCaptureConnection) between the compatible inputs and outputs.
- AVCaptureVideoPreviewLayer: the camera preview layer, a subclass of CALayer. It lets you watch a live preview while taking photos or recording video; you create it with the corresponding AVCaptureSession.
General steps for taking photos and recording video with AVFoundation
- Create an AVCaptureSession object
- Use the AVCaptureDevice class methods to get the devices you need: a camera for photos and video, a microphone for audio recording
- Create an AVCaptureDeviceInput from each AVCaptureDevice
- Create the output data-management object: an AVCaptureStillImageOutput for photos, or an AVCaptureMovieFileOutput for video recording
- Add the input objects (AVCaptureDeviceInput) and output objects (AVCaptureOutput) to the AVCaptureSession
- Create an AVCaptureVideoPreviewLayer for the session, add it to a container view's layer, and call the session's startRunning method to begin capturing
- Save the captured audio or video to the target file
Custom Photo Capture
We will implement camera preview, camera switching, flash settings, tap to focus, and saving the photo.
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    // 1. Create the capture session
    _session = [[AVCaptureSession alloc] init];
    // 1.1. Set the resolution
    if ([_session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        [_session setSessionPreset:AVCaptureSessionPreset1280x720];
    }
    // 2. Get the input device (back camera)
    AVCaptureDevice *captureDevice = [self getCameraDeviceWithPosition:AVCaptureDevicePositionBack];
    if (captureDevice == nil) {
        NSLog(@"Failed to get the back camera");
        return;
    }
    NSError *error = nil;
    // 3. Create the input data-management object from the device
    AVCaptureDeviceInput *captureInput = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];
    self.captureDeviceInput = captureInput;
    if (error) {
        NSLog(@"Error creating the device input: %@", error.localizedDescription);
        return;
    }
    // 4. Create the output data-management object used to obtain the captured data
    AVCaptureStillImageOutput *imageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = @{AVVideoCodecKey: AVVideoCodecJPEG};
    [imageOutput setOutputSettings:outputSettings]; // output settings
    self.imageOutput = imageOutput;
    // 5. Add the input to the session
    if ([_session canAddInput:_captureDeviceInput]) {
        [_session addInput:_captureDeviceInput];
    }
    // 6. Add the output to the session
    if ([_session canAddOutput:_imageOutput]) {
        [_session addOutput:_imageOutput];
    }
    // 7. Set up the preview layer
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_session];
    _previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill mode
    _previewLayer.frame = self.view.bounds;
    [self.view.layer insertSublayer:_previewLayer atIndex:0];
    // 8. Register for notifications to monitor changes in the device's subject area
    [self addNotificationToCaptureDevice:captureDevice];
    // 9. Add the tap gesture
    [self addGestureToView];
}
In viewWillAppear: we create the capture session, add the input and output, and register for notifications from the input device so we can monitor changes to its subject area (for example, when what the camera is pointed at changes). We also add a tap gesture for focusing and positioning the focus indicator. The session is started in viewDidAppear: and stopped in viewDidDisappear:.
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    [_session startRunning];
}
- (void)viewDidDisappear:(BOOL)animated {
    [super viewDidDisappear:animated];
    [_session stopRunning];
}
Next we define the flash controls (off, on, and auto). Note that before changing the flash mode, white balance, or any other input-device property, you must lock the device for configuration, and unlock it once you are done.
/* Change a device property. Locks the device for configuration, then hands it back through the block. */
- (void)changeDeviceProperty:(PropertyChangeBlock)block {
    // 1. Get the device
    AVCaptureDevice *device = self.captureDeviceInput.device;
    NSError *error;
    // 2. Lock the device for configuration
    BOOL success = [device lockForConfiguration:&error];
    if (success) {
        // Locked successfully; hand the device back through the block
        block(device);
        // Unlock
        [device unlockForConfiguration];
    }
    else {
        NSLog(@"Error while setting device properties: %@", error.localizedDescription);
    }
}
Registering for the subject-area-change notification:
- (void)addNotificationToCaptureDevice:(AVCaptureDevice *)device {
    // 1. Lock the input device first
    [self changeDeviceProperty:^(AVCaptureDevice *device) {
        // The device must be set to allow monitoring before the notification is delivered.
        // subjectAreaChangeMonitoringEnabled indicates whether the device should monitor
        // its subject area for changes (lighting changes, subject movement, etc.), which
        // are delivered via AVCaptureDeviceSubjectAreaDidChangeNotification. This gives
        // us a chance to refocus and re-adjust exposure and white balance for the new
        // subject area. lockForConfiguration: must be called before setting this property.
        device.subjectAreaChangeMonitoringEnabled = YES;
    }];
    // 2. Observe the notification
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(areaChanged:) name:AVCaptureDeviceSubjectAreaDidChangeNotification object:device];
}
For now, the notification handler just logs a message:
#pragma mark - Subject area changed
- (void)areaChanged:(NSNotification *)notification {
    NSLog(@"Subject area changed...");
}
Removing the observer:
/* Stop observing the input-device notification */
- (void)removeNotificationFromCaptureDevice:(AVCaptureDevice *)device {
    [[NSNotificationCenter defaultCenter] removeObserver:self name:AVCaptureDeviceSubjectAreaDidChangeNotification object:device];
}
Adding the tap gesture:
- (void)addGestureToView {
    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapScreen:)];
    [self.view addGestureRecognizer:tap];
}
- (void)tapScreen:(UITapGestureRecognizer *)tapGesture {
    CGPoint point = [tapGesture locationInView:self.view];
    // Convert the UI (view) coordinate into a camera coordinate
    CGPoint cameraPoint = [self.previewLayer captureDevicePointOfInterestForPoint:point];
    // The focus indicator lives in the view, so it gets the view-space point;
    // the device gets the converted camera-space point
    [self setFocusPoint:point];
    [self focusWithMode:AVCaptureFocusModeAutoFocus exposureMode:AVCaptureExposureModeAutoExpose atPoint:cameraPoint];
}
The gesture handler positions the focus indicator (an image view) and sets the camera's focus and exposure modes.
The method that positions the indicator:
/* Position the focus indicator */
- (void)setFocusPoint:(CGPoint)point {
    self.imageView.center = point;
    self.imageView.transform = CGAffineTransformMakeScale(1.5, 1.5);
    self.imageView.alpha = 1;
    [UIView animateWithDuration:1.0 animations:^{
        self.imageView.transform = CGAffineTransformIdentity;
    } completion:^(BOOL finished) {
        self.imageView.alpha = 0;
    }];
}
The method that sets the focus and exposure modes:
/* Set the focus and exposure modes */
- (void)focusWithMode:(AVCaptureFocusMode)focusMode
         exposureMode:(AVCaptureExposureMode)exposeMode
              atPoint:(CGPoint)point
{
    // Lock the input device first, then set the modes
    [self changeDeviceProperty:^(AVCaptureDevice *device) {
        if ([device isFocusModeSupported:focusMode]) {
            [device setFocusMode:focusMode];
        }
        if ([device isExposureModeSupported:exposeMode]) {
            [device setExposureMode:exposeMode];
        }
        if ([device isFocusPointOfInterestSupported]) {
            [device setFocusPointOfInterest:point];
        }
        if ([device isExposurePointOfInterestSupported]) {
            [device setExposurePointOfInterest:point];
        }
    }];
}
Taking the photo:
#pragma mark - Take photo
- (IBAction)takePhoto:(id)sender {
    // 1. Get the connection from the output
    AVCaptureConnection *connection = [self.imageOutput connectionWithMediaType:AVMediaTypeVideo];
    // 2. Capture a still image through the connection
    [self.imageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer == NULL) {
            return; // capture failed; nothing to save
        }
        // Get the image data
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *image = [UIImage imageWithData:imageData];
        // Save to the photo album
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    }];
}
Switching Cameras
Switching cameras simply means removing the current input and adding a new one.
#pragma mark - Switch camera
/* Switching cameras removes the old input from the session and adds a new one */
- (IBAction)switchCamera:(id)sender {
    // 1. Get the current input device from the input object
    AVCaptureDevice *oldCaptureDevice = [self.captureDeviceInput device];
    // 2. Stop observing the old device
    [self removeNotificationFromCaptureDevice:oldCaptureDevice];
    // 3. Work out the camera position to switch to
    // 3.1. The current position
    AVCaptureDevicePosition currentPosition = oldCaptureDevice.position;
    // 3.2. The target position
    AVCaptureDevicePosition targetPosition = AVCaptureDevicePositionFront;
    if (currentPosition == AVCaptureDevicePositionFront || currentPosition == AVCaptureDevicePositionUnspecified) {
        targetPosition = AVCaptureDevicePositionBack;
    }
    // 4. Get the device at the target position
    AVCaptureDevice *currentCaptureDevice = [self getCameraDeviceWithPosition:targetPosition];
    // 5. Start observing the new device
    [self addNotificationToCaptureDevice:currentCaptureDevice];
    // 6. Create the input for the new device
    AVCaptureDeviceInput *currentInput = [[AVCaptureDeviceInput alloc] initWithDevice:currentCaptureDevice error:nil];
    // 7. Swap the inputs on the session
    // 7.1. Begin the configuration
    [_session beginConfiguration];
    // 7.2. Remove the old input
    [_session removeInput:self.captureDeviceInput];
    // 7.3. Add the new input
    if ([_session canAddInput:currentInput]) {
        [_session addInput:currentInput];
        // Remember the current input
        self.captureDeviceInput = currentInput;
    }
    // 7.4. Commit the configuration
    [_session commitConfiguration];
}
Creating the input device for a given camera position (front/back):
/* Get the camera at the given position */
- (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position {
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in cameras) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}
Note that beginConfiguration must always be paired with commitConfiguration. Between the two calls you can add or remove inputs and outputs, change the sessionPreset, or configure individual AVCaptureInput and AVCaptureOutput properties; the changes are applied together when you commit.
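As a minimal sketch of such a batched reconfiguration (assuming the `_session` ivar from above; switching to `AVCaptureSessionPresetPhoto` is just an illustrative change, not something the original code does):

```objectivec
// Sketch: batch a preset change so the session applies it atomically on commit.
- (void)switchToPhotoPreset {
    [_session beginConfiguration];
    if ([_session canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
        [_session setSessionPreset:AVCaptureSessionPresetPhoto];
    }
    // Inputs/outputs could also be added or removed here, as in switchCamera:
    [_session commitConfiguration];
}
```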
That essentially completes the photo feature; all that remains is configuring the flash mode.
There are three flash modes:
typedef NS_ENUM(NSInteger, AVCaptureFlashMode) {
    AVCaptureFlashModeOff  = 0,
    AVCaptureFlashModeOn   = 1,
    AVCaptureFlashModeAuto = 2
} NS_AVAILABLE(10_7, 4_0) __TVOS_PROHIBITED;
Setting the flash mode:
#pragma mark - Set flash mode
- (void)setFlashMode:(AVCaptureFlashMode)mode {
    // Lock the device first
    [self changeDeviceProperty:^(AVCaptureDevice *device) {
        if ([device isFlashModeSupported:mode]) { // only if the mode is supported
            [device setFlashMode:mode];
        }
    }];
}
The result looks roughly like this:
[Demo GIF: taking a photo]
Recording Video
Recording video is much like taking photos, except that it adds a microphone input (a second AVCaptureDeviceInput) and uses an AVCaptureMovieFileOutput as the output instead of an AVCaptureStillImageOutput.
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    // Create the capture session
    _session = [[AVCaptureSession alloc] init];
    if ([_session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        [_session setSessionPreset:AVCaptureSessionPreset1280x720];
    }
    // Get the camera input device
    AVCaptureDevice *cameraDevice = [self getCameraDeviceWithPosition:AVCaptureDevicePositionBack];
    if (cameraDevice == nil) {
        NSLog(@"Failed to get the back camera");
        return;
    }
    // Create the camera input from the device
    NSError *error = nil;
    _cameraDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&error];
    if (error) {
        NSLog(@"%@", error.localizedDescription);
        return;
    }
    // Get the microphone device and create its input
    AVCaptureDevice *audioDevice = [AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio].firstObject;
    _audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
    // Create the movie file output
    _movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
    // Add the inputs
    if ([_session canAddInput:_cameraDeviceInput]) {
        [_session addInput:_cameraDeviceInput];
    }
    if ([_session canAddInput:_audioDeviceInput]) {
        [_session addInput:_audioDeviceInput];
    }
    // Add the output
    if ([_session canAddOutput:_movieFileOutput]) {
        [_session addOutput:_movieFileOutput];
    }
    // Enable video stabilization. Note: the connection only exists once both the
    // input and the output have been added to the session, so this must come after
    // the addInput:/addOutput: calls (the original post queried the connection too early).
    AVCaptureConnection *connection = [_movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
    if ([connection isVideoStabilizationSupported]) {
        // Setting preferredVideoStabilizationMode to anything other than
        // AVCaptureVideoStabilizationModeOff stabilizes the video flowing
        // through this connection whenever the mode is available
        connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
    }
    // Create the preview layer
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_session];
    _previewLayer.frame = self.view.bounds;
    _previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill mode
    [self.view.layer insertSublayer:_previewLayer atIndex:0];
}
Starting and stopping the recording:
- (IBAction)startRecording:(UIButton *)sender {
    if ([self.movieFileOutput isRecording]) {
        [self.movieFileOutput stopRecording];
        sender.selected = NO;
        return;
    }
    sender.selected = YES;
    // Get the connection and match its orientation to the preview layer
    AVCaptureConnection *connection = [self.movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
    connection.videoOrientation = self.previewLayer.connection.videoOrientation;
    NSString *filePath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES).lastObject stringByAppendingPathComponent:@"myVideo.mp4"];
    NSURL *url = [NSURL fileURLWithPath:filePath];
    // Start recording, with self as the delegate
    [self.movieFileOutput startRecordingToOutputFileURL:url recordingDelegate:self];
}
Conform to the AVCaptureFileOutputRecordingDelegate protocol and implement its delegate methods.
The following method is required:
#pragma mark - AVCaptureFileOutputRecordingDelegate
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error
{
    NSLog(@"Finished recording");
    // Save the video to the photo album
    ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
    [assetsLibrary writeVideoAtPathToSavedPhotosAlbum:outputFileURL completionBlock:^(NSURL *assetURL, NSError *error) {
        if (error) {
            NSLog(@"Error saving the video to the album: %@", error.localizedDescription);
            return;
        }
        NSLog(@"Saved the video to the album.");
    }];
}
The official documentation describes this method as follows:
This method is called when the file output has finished writing all data to a file whose recording was stopped, either because startRecordingToOutputFileURL:recordingDelegate: or stopRecording were called, or because an error, described by the error parameter, occurred (if no error occurred, the error parameter will be nil). This method will always be called for each recording request, even if no data is successfully written to the file.
In short: this delegate method is called once for every recording request, whether recording stopped normally (via startRecordingToOutputFileURL:recordingDelegate: or stopRecording) or because of an error, and even if no data was successfully written to the file; on success the error parameter is nil.
The delegate method called when recording starts:
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didStartRecordingToOutputFileAtURL:(NSURL *)fileURL fromConnections:(NSArray *)connections
{
    NSLog(@"Started recording");
}
Switching cameras:
- (IBAction)switchCamera:(id)sender {
    // 1. Get the old input device
    AVCaptureDevice *oldCaptureDevice = [self.cameraDeviceInput device];
    // 2. Stop observing the old device
    [self removeNotificationFromCaptureDevice:oldCaptureDevice];
    // 3. Work out the new input device
    // 3.1. The old device's position
    AVCaptureDevicePosition oldPosition = oldCaptureDevice.position;
    // 3.2. The position to switch to
    AVCaptureDevicePosition currentPosition = AVCaptureDevicePositionFront;
    if (oldPosition == AVCaptureDevicePositionFront || oldPosition == AVCaptureDevicePositionUnspecified) {
        currentPosition = AVCaptureDevicePositionBack;
    }
    // 3.3. Get the device at that position
    AVCaptureDevice *currentCaptureDevice = [self getCameraDeviceWithPosition:currentPosition];
    // 3.4. Start observing the new device
    [self addNotificationToCaptureDevice:currentCaptureDevice];
    // 4. Create the input for the new device
    NSError *error = nil;
    AVCaptureDeviceInput *currentInput = [AVCaptureDeviceInput deviceInputWithDevice:currentCaptureDevice error:&error];
    if (error) {
        NSLog(@"%@", error.localizedDescription);
        return;
    }
    // 5. Swap the inputs
    // 5.1. Begin the configuration
    [_session beginConfiguration];
    // 5.2. Remove the old input
    [_session removeInput:self.cameraDeviceInput];
    // 5.3. Add the new input
    if ([_session canAddInput:currentInput]) {
        [_session addInput:currentInput];
        self.cameraDeviceInput = currentInput;
    }
    // 5.4. Commit the configuration
    [_session commitConfiguration];
}
As you can see, this is the same camera-switching logic as in the photo example.
[Demo GIF: recording video]
One more thing to note: because the app uses the camera and the microphone, the corresponding usage-description keys must be added to Info.plist.
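The two relevant keys are NSCameraUsageDescription and NSMicrophoneUsageDescription (on iOS 10 and later, accessing the camera or microphone without them crashes the app). The description strings below are placeholders; write your own:

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to take photos and record video.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to record audio for your videos.</string>
```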