Video Zoom
videoScaleAndCropFactor is a connection-level setting. To have the on-screen preview reflect the captured value correctly, the developer must also apply a matching scale transform to the AVCaptureVideoPreviewLayer. In addition, this property can only be set on an AVCaptureStillImageOutput connection.
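For context, a minimal sketch of that legacy approach might look like the following; self.stillImageOutput and self.previewLayer are assumed to exist, and applyLegacyZoomFactor: is a hypothetical helper name, not an AVFoundation API:

// A hedged sketch of the legacy connection-based zoom; self.stillImageOutput and
// self.previewLayer are assumed from earlier setup code.
- (void)applyLegacyZoomFactor:(CGFloat)zoomFactor {
    AVCaptureConnection *connection =
        [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    // Clamp to the connection's supported maximum
    CGFloat factor = MIN(zoomFactor, connection.videoMaxScaleAndCropFactor);
    // The crop/scale only affects the captured still image...
    connection.videoScaleAndCropFactor = factor;
    // ...so the preview layer needs a matching scale transform to stay in sync
    [self.previewLayer setAffineTransform:CGAffineTransformMakeScale(factor, factor)];
}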
More recently, AVCaptureDevice has provided the videoZoomFactor property, which controls the capture device's zoom level.
The minimum value of this property is 1.0, meaning no zoom is applied. The maximum value is determined by the capture device's activeFormat, an instance of AVCaptureDeviceFormat whose videoMaxZoomFactor property gives the largest zoom factor the device supports.
The device implements zoom by center-cropping the image captured by the sensor, so zooming too far degrades image quality; how much zoom is acceptable depends on your requirements.
AVCaptureDevice offers two ways to perform the zoom.
The first is to set the AVCaptureDevice property videoZoomFactor directly:
- (CGFloat)maxZoomFactor {
    // Cap the zoom at 4x to limit the quality loss caused by cropping
    return MIN(self.activeCamera.activeFormat.videoMaxZoomFactor, 4.0f);
}

- (void)setZoomValue:(CGFloat)zoomValue {
    // Ignore updates while the device is already ramping its zoom
    if (!self.activeCamera.isRampingVideoZoom) {
        NSError *error;
        if ([self.activeCamera lockForConfiguration:&error]) {
            // Provide a linear feel to the zoom slider by mapping the 0..1
            // slider value onto an exponential zoom curve
            CGFloat zoomFactor = pow([self maxZoomFactor], zoomValue);
            self.activeCamera.videoZoomFactor = zoomFactor;
            [self.activeCamera unlockForConfiguration];
        } else {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}
The second is to call the AVCaptureDevice method:
- (void)rampToVideoZoomFactor:(CGFloat)factor withRate:(float)rate;
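As a hedged sketch (not from the original listings), ramping can be wired up much like setZoomValue: above; THZoomRate is an assumed constant (for example 1.0f, roughly one doubling of the zoom factor per second) rather than an AVFoundation symbol:

// A hedged sketch: self.activeCamera, maxZoomFactor, and the delegate come from the
// earlier listing; THZoomRate is an assumed constant.
- (void)rampZoomToValue:(CGFloat)zoomValue {
    CGFloat zoomFactor = pow([self maxZoomFactor], zoomValue);
    NSError *error;
    if ([self.activeCamera lockForConfiguration:&error]) {
        // Smoothly animate toward the target factor instead of jumping to it
        [self.activeCamera rampToVideoZoomFactor:zoomFactor
                                        withRate:THZoomRate];
        [self.activeCamera unlockForConfiguration];
    } else {
        [self.delegate deviceConfigurationFailedWithError:error];
    }
}

A ramp in progress can be interrupted with AVCaptureDevice's cancelVideoZoomRamp method, again inside a lockForConfiguration:/unlockForConfiguration pair.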
Face Detection
When a new face enters the view, the camera automatically establishes focus on it: a yellow rectangle is drawn at the newly detected face's position, and autofocus is performed around the rectangle's center. Fortunately, AVFoundation's real-time face detection lets us implement this behavior in our own applications.
The CoreImage framework defines the CIDetector and CIFaceFeature classes, which are very easy to use and provide powerful face detection. However, these APIs are not optimized for real-time use, which makes them hard to apply at the frame rates modern cameras and video applications demand.
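For comparison, the CoreImage path for a still photo might look roughly like the sketch below; sourceImage is an assumed UIImage and is not part of the capture pipeline:

// A hedged sketch of CoreImage face detection on a still image; sourceImage is assumed.
CIImage *ciImage = [CIImage imageWithCGImage:sourceImage.CGImage];
CIDetector *detector =
    [CIDetector detectorOfType:CIDetectorTypeFace
                       context:nil
                       options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
for (CIFaceFeature *feature in [detector featuresInImage:ciImage]) {
    NSLog(@"Face bounds: %@", NSStringFromCGRect(feature.bounds));
    if (feature.hasLeftEyePosition) {
        NSLog(@"Left eye at: %@", NSStringFromCGPoint(feature.leftEyePosition));
    }
}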
Setting up the face detection output
- (BOOL)setupSessionOutputs:(NSError **)error {
    self.metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    if ([self.captureSession canAddOutput:self.metadataOutput]) {
        [self.captureSession addOutput:self.metadataOutput];
        // Only interested in face metadata
        NSArray *metadataObjectTypes = @[AVMetadataObjectTypeFace];
        self.metadataOutput.metadataObjectTypes = metadataObjectTypes;
        // Face metadata is lightweight, so the main queue is fine for the callbacks
        dispatch_queue_t mainQueue = dispatch_get_main_queue();
        [self.metadataOutput setMetadataObjectsDelegate:self
                                                  queue:mainQueue];
        return YES;
    } else {
        if (error) {
            NSDictionary *userInfo = @{NSLocalizedDescriptionKey:
                                           @"Failed to add metadata output."};
            *error = [NSError errorWithDomain:THCameraErrorDomain
                                         code:THCameraErrorFailedToAddOutput
                                     userInfo:userInfo];
        }
        return NO;
    }
}
Implementing the delegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection {
    // Each detected face arrives as an AVMetadataFaceObject with a unique faceID
    for (AVMetadataFaceObject *face in metadataObjects) {
        NSLog(@"Face detected with ID: %li", (long)face.faceID);
        NSLog(@"Face bounds: %@", NSStringFromCGRect(face.bounds));
    }
    // Forward the metadata objects to the preview view for display
    [self.faceDetectionDelegate didDetectFaces:metadataObjects];
}
Processing and displaying the metadataObjects
AVCaptureVideoPreviewLayer's transformedMetadataObjectForMetadataObject: method converts the camera-space coordinates of a metadata object into the preview layer's (screen) coordinates.
- (void)didDetectFaces:(NSArray *)faces {
    // Convert the faces from camera coordinates to preview-layer coordinates
    NSArray *transformedFaces = [self transformedFacesFromFaces:faces];
    // Start with every currently displayed faceID; any that remain at the end
    // belong to faces that have left the frame
    NSMutableArray *lostFaces = [self.faceLayers.allKeys mutableCopy];
    for (AVMetadataFaceObject *face in transformedFaces) {
        NSNumber *faceID = @(face.faceID);
        [lostFaces removeObject:faceID];
        // Reuse the existing layer for this faceID, or create one for a new face
        CALayer *layer = self.faceLayers[faceID];
        if (!layer) {
            layer = [self makeFaceLayer];
            [self.overlayLayer addSublayer:layer];
            self.faceLayers[faceID] = layer;
        }
        // Reset the transform before positioning, then apply the face's bounds
        layer.transform = CATransform3DIdentity;
        layer.frame = face.bounds;
        // Roll: rotation about the Z-axis (head tilted toward a shoulder)
        if (face.hasRollAngle) {
            CATransform3D t = [self transformForRollAngle:face.rollAngle];
            layer.transform = CATransform3DConcat(layer.transform, t);
        }
        // Yaw: rotation about the Y-axis (head turned left or right)
        if (face.hasYawAngle) {
            CATransform3D t = [self transformForYawAngle:face.yawAngle];
            layer.transform = CATransform3DConcat(layer.transform, t);
        }
    }
    // Remove layers for faces that are no longer detected
    // (e.g. six people drop to four: the two extra layers must be removed)
    for (NSNumber *faceID in lostFaces) {
        CALayer *layer = self.faceLayers[faceID];
        [layer removeFromSuperlayer];
        [self.faceLayers removeObjectForKey:faceID];
    }
}
- (NSArray *)transformedFacesFromFaces:(NSArray *)faces {
    NSMutableArray *transformedFaces = [NSMutableArray array];
    for (AVMetadataObject *face in faces) {
        // Map from camera coordinates into the preview layer's coordinate space
        AVMetadataObject *transformedFace =
            [self.previewLayer transformedMetadataObjectForMetadataObject:face];
        [transformedFaces addObject:transformedFace];
    }
    return transformedFaces;
}
// Create a layer with a colored border to highlight a detected face
- (CALayer *)makeFaceLayer {
    CALayer *layer = [CALayer layer];
    layer.borderWidth = 5.0f;
    layer.borderColor =
        [UIColor colorWithRed:0.188 green:0.517 blue:0.877 alpha:1.000].CGColor;
    return layer;
}
// Rotate around the Z-axis (roll)
- (CATransform3D)transformForRollAngle:(CGFloat)rollAngleInDegrees {
    CGFloat rollAngleInRadians = THDegreesToRadians(rollAngleInDegrees);
    return CATransform3DMakeRotation(rollAngleInRadians, 0.0f, 0.0f, 1.0f);
}

// Rotate around the Y-axis (yaw)
- (CATransform3D)transformForYawAngle:(CGFloat)yawAngleInDegrees {
    CGFloat yawAngleInRadians = THDegreesToRadians(yawAngleInDegrees);
    CATransform3D yawTransform =
        CATransform3DMakeRotation(yawAngleInRadians, 0.0f, -1.0f, 0.0f);
    // Combine with a rotation that accounts for the device's current orientation
    return CATransform3DConcat(yawTransform, [self orientationTransform]);
}

- (CATransform3D)orientationTransform {
    CGFloat angle = 0.0;
    switch ([UIDevice currentDevice].orientation) {
        case UIDeviceOrientationPortraitUpsideDown:
            angle = M_PI;
            break;
        case UIDeviceOrientationLandscapeRight:
            angle = -M_PI / 2.0f;
            break;
        case UIDeviceOrientationLandscapeLeft:
            angle = M_PI / 2.0f;
            break;
        default: // UIDeviceOrientationPortrait
            angle = 0.0;
            break;
    }
    return CATransform3DMakeRotation(angle, 0.0f, 0.0f, 1.0f);
}

// Convert degrees to radians
static CGFloat THDegreesToRadians(CGFloat degrees) {
    return degrees * M_PI / 180;
}

// Build a perspective transform; m34 controls the strength of the perspective effect
static CATransform3D CATransform3DMakePerspective(CGFloat eyePosition) {
    CATransform3D transform = CATransform3DIdentity;
    transform.m34 = -1.0 / eyePosition;
    return transform;
}
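CATransform3DMakePerspective is not called in the listings above; one plausible place to apply it (an assumption, not shown in the original) is the overlay layer's sublayerTransform, so that the yaw rotations of the face layers gain a sense of depth:

// A hedged sketch; the 1000-point eye position is illustrative
self.overlayLayer.sublayerTransform = CATransform3DMakePerspective(1000);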
Machine-Readable Code Detection
Recognizing QR codes works on the same principle as face detection: both use the AVCaptureMetadataOutput class, and the metadataObjectTypes property of AVCaptureMetadataOutput determines whether the output looks for faces, QR codes, or other metadata types. The downstream handling is also largely the same, so it won't be repeated here; the setup code is shown directly:
self.metadataOutput = [[AVCaptureMetadataOutput alloc] init];
if ([self.captureSession canAddOutput:self.metadataOutput]) {
    [self.captureSession addOutput:self.metadataOutput];
    dispatch_queue_t mainQueue = dispatch_get_main_queue();
    [self.metadataOutput setMetadataObjectsDelegate:self
                                              queue:mainQueue];
    // Limit detection to the symbologies the app actually needs
    NSArray *types = @[AVMetadataObjectTypeQRCode,
                       AVMetadataObjectTypeAztecCode,
                       AVMetadataObjectTypeUPCECode];
    self.metadataOutput.metadataObjectTypes = types;
}
The corners property defines the (x, y) points of the machine-readable code's corners.
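As a hedged sketch of handling the callback for machine-readable codes (the drawing side is omitted; self.previewLayer is assumed as in the face detection example):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection {
    for (AVMetadataMachineReadableCodeObject *code in metadataObjects) {
        // Convert from camera coordinates into the preview layer's coordinates
        AVMetadataMachineReadableCodeObject *transformedCode =
            (AVMetadataMachineReadableCodeObject *)
            [self.previewLayer transformedMetadataObjectForMetadataObject:code];
        // stringValue carries the decoded payload (e.g. the URL in a QR code)
        NSLog(@"Payload: %@", transformedCode.stringValue);
        // Each corner is a dictionary that converts back into a CGPoint
        for (NSDictionary *cornerDict in transformedCode.corners) {
            CGPoint corner;
            CGPointMakeWithDictionaryRepresentation(
                (__bridge CFDictionaryRef)cornerDict, &corner);
            NSLog(@"Corner: %@", NSStringFromCGPoint(corner));
        }
    }
}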
The metadataObjectTypes property takes values of type AVMetadataObjectType:
AVMetadataObjectTypeHumanBody
Detects human bodies; the corresponding AVMetadataObject class is AVMetadataHumanBodyObject
AVMetadataObjectTypeCatBody
Detects cat bodies; the corresponding AVMetadataObject class is AVMetadataCatBodyObject
AVMetadataObjectTypeDogBody
Detects dog bodies; the corresponding AVMetadataObject class is AVMetadataDogBodyObject
AVMetadataObjectTypeSalientObject
Detects salient objects; the corresponding AVMetadataObject class is AVMetadataSalientObject
AVMetadataObjectTypeFace
Detects faces; the corresponding AVMetadataObject class is AVMetadataFaceObject
Barcode detection
For all of the barcode types below, the corresponding AVMetadataObject class is AVMetadataMachineReadableCodeObject:
AVMetadataObjectTypeUPCECode: UPC-E
AVMetadataObjectTypeCode39Code: Code 39
AVMetadataObjectTypeCode39Mod43Code: Code 39 mod 43
AVMetadataObjectTypeEAN13Code: EAN-13
AVMetadataObjectTypeEAN8Code: EAN-8
AVMetadataObjectTypeCode93Code: Code 93
AVMetadataObjectTypeCode128Code: Code 128
AVMetadataObjectTypePDF417Code: PDF417
AVMetadataObjectTypeQRCode: QR code
AVMetadataObjectTypeAztecCode: Aztec, widely used on airline boarding passes
AVMetadataObjectTypeInterleaved2of5Code: Interleaved 2 of 5
AVMetadataObjectTypeITF14Code: ITF-14
AVMetadataObjectTypeDataMatrixCode: Data Matrix
High Frame Rate Capture
When we implemented video zoom earlier, we read videoMaxZoomFactor from the AVCaptureDevice's activeFormat to obtain the maximum zoom factor. Here we read the activeFormat's videoSupportedFrameRateRanges to obtain the device's supported frame-rate ranges and use them to configure the frame rate.
1. Determine the maximum frame rate the device supports
The maximum supported frame rate is found by iterating the AVCaptureDevice's formats property:
// Determine whether this device supports high-frame-rate video capture
- (BOOL)supportsHighFrameRateCapture {
    if (![self hasMediaType:AVMediaTypeVideo]) {
        return NO;
    }
    return [self findHighestQualityOfService].isHighFrameRate;
}
// This code lives in a category on AVCaptureDevice
- (THQualityOfService *)findHighestQualityOfService {
    AVCaptureDeviceFormat *maxFormat = nil;
    AVFrameRateRange *maxFrameRateRange = nil;
    for (AVCaptureDeviceFormat *format in self.formats) {
        FourCharCode codecType =
            CMVideoFormatDescriptionGetCodecType(format.formatDescription);
        // Read the codec type from each format's formatDescription; a value of
        // kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange identifies a video format
        if (codecType == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) {
            NSArray *frameRateRanges = format.videoSupportedFrameRateRanges;
            // Track the format and range with the highest maximum frame rate
            for (AVFrameRateRange *range in frameRateRanges) {
                if (range.maxFrameRate > maxFrameRateRange.maxFrameRate) {
                    maxFormat = format;
                    maxFrameRateRange = range;
                }
            }
        }
    }
    return [THQualityOfService qosWithFormat:maxFormat
                              frameRateRange:maxFrameRateRange];
}
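THQualityOfService is not an AVFoundation class. A minimal sketch of what such a helper might look like is shown below, pairing a capture format with its best frame-rate range; treating anything above 30 fps as "high frame rate" is an assumption:

#import <AVFoundation/AVFoundation.h>

// A hedged sketch of the THQualityOfService helper used in the listings above.
@interface THQualityOfService : NSObject
@property (nonatomic, strong, readonly) AVCaptureDeviceFormat *format;
@property (nonatomic, strong, readonly) AVFrameRateRange *frameRateRange;
@property (nonatomic, readonly) BOOL isHighFrameRate;
+ (instancetype)qosWithFormat:(AVCaptureDeviceFormat *)format
               frameRateRange:(AVFrameRateRange *)frameRateRange;
- (instancetype)initWithFormat:(AVCaptureDeviceFormat *)format
                frameRateRange:(AVFrameRateRange *)frameRateRange;
@end

@implementation THQualityOfService

+ (instancetype)qosWithFormat:(AVCaptureDeviceFormat *)format
               frameRateRange:(AVFrameRateRange *)frameRateRange {
    return [[self alloc] initWithFormat:format frameRateRange:frameRateRange];
}

- (instancetype)initWithFormat:(AVCaptureDeviceFormat *)format
                frameRateRange:(AVFrameRateRange *)frameRateRange {
    if (self = [super init]) {
        _format = format;
        _frameRateRange = frameRateRange;
    }
    return self;
}

- (BOOL)isHighFrameRate {
    // Treat anything above the standard 30 fps as "high frame rate" (an assumption)
    return self.frameRateRange.maxFrameRate > 30.0f;
}

@end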
2. Set the maximum frame rate
- (BOOL)enableMaxFrameRateCapture:(NSError **)error {
    THQualityOfService *qos = [self findHighestQualityOfService];
    if (!qos.isHighFrameRate) {
        if (error) {
            NSString *message = @"Device does not support high FPS capture";
            NSDictionary *userInfo = @{NSLocalizedDescriptionKey : message};
            NSUInteger code = THCameraErrorHighFrameRateCaptureNotSupported;
            *error = [NSError errorWithDomain:THCameraErrorDomain
                                         code:code
                                     userInfo:userInfo];
        }
        return NO;
    }
    if ([self lockForConfiguration:error]) {
        CMTime minFrameDuration = qos.frameRateRange.minFrameDuration;
        // Activate the format that supports the highest frame rate...
        self.activeFormat = qos.format;
        // ...and pin both min and max frame durations to its shortest frame duration
        self.activeVideoMinFrameDuration = minFrameDuration;
        self.activeVideoMaxFrameDuration = minFrameDuration;
        [self unlockForConfiguration];
        return YES;
    }
    return NO;
}
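As a hedged usage sketch, a camera controller like the one in the earlier sections might call the category methods as follows (self.activeCamera and the delegate are assumed from those listings):

- (BOOL)enableHighFrameRateCapture {
    if (![self.activeCamera supportsHighFrameRateCapture]) {
        return NO;
    }
    NSError *error;
    if (![self.activeCamera enableMaxFrameRateCapture:&error]) {
        [self.delegate deviceConfigurationFailedWithError:error];
        return NO;
    }
    return YES;
}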
Video Processing
AVCaptureMovieFileOutput can record video, but if we want to add effects to the video it can't help; for that we turn to the lowest-level video capture output the framework provides, AVCaptureVideoDataOutput.
AVCaptureVideoDataOutput is an AVCaptureOutput subclass that gives direct access to the video frames captured by the camera sensor. This is powerful: it puts the video data's format, timing, and metadata entirely under our control, so the content can be manipulated however the application requires and then processed with OpenGL ES, Core Image, or Metal.
AVCaptureVideoDataOutput works much like AVCaptureMetadataOutput; most notably, both deliver their results through delegate callbacks. AVCaptureMetadataOutput vends AVMetadataObject instances, while AVCaptureVideoDataOutput delivers sample buffers through the AVCaptureVideoDataOutputSampleBufferDelegate protocol, which declares two methods:
// Called whenever a new video frame is written; the data is decoded or re-encoded
// according to the output's videoSettings property
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection;

// Called whenever a late video frame is dropped, usually because too much time was
// spent in didOutputSampleBuffer; keep that handler efficient or frames will be lost
- (void)captureOutput:(AVCaptureOutput *)output
  didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection;
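A hedged setup sketch for the video data output (the videoDataOutput property and queue label are illustrative); requesting BGRA frames keeps the CPU-side grayscale example below simple:

- (BOOL)setupVideoDataOutput {
    self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Request BGRA so each pixel is 4 bytes, convenient for CPU-side processing
    self.videoDataOutput.videoSettings =
        @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
    // Drop late frames rather than queueing them, to keep latency low
    self.videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
    dispatch_queue_t videoQueue =
        dispatch_queue_create("com.example.videoDataQueue", DISPATCH_QUEUE_SERIAL);
    [self.videoDataOutput setSampleBufferDelegate:self queue:videoQueue];
    if ([self.captureSession canAddOutput:self.videoDataOutput]) {
        [self.captureSession addOutput:self.videoDataOutput];
        return YES;
    }
    return NO;
}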
CMSampleBuffer
CMSampleBuffer is a Core Foundation-style object from the Core Media framework used to move digital samples through the media pipeline. Its role is to wrap the underlying sample data and provide the format and timing information, plus any metadata needed when converting and processing the data.
Grayscale
// Assumes the output's videoSettings request kCVPixelFormatType_32BGRA (4 bytes per pixel)
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

CMFormatDescriptionRef formatDescription =
    CMSampleBufferGetFormatDescription(sampleBuffer);
CMVideoDimensions dimensions =
    CMVideoFormatDescriptionGetDimensions(formatDescription);
size_t width = dimensions.width;
size_t height = dimensions.height;
// Rows may be padded, so step through them using the buffer's bytes-per-row stride
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
unsigned char *baseAddress = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

for (size_t row = 0; row < height; row++) {
    unsigned char *pixel = baseAddress + row * bytesPerRow;
    for (size_t column = 0; column < width; column++) {
        // Average the three color channels and write the result back to each of them
        unsigned char grayPixel = (pixel[0] + pixel[1] + pixel[2]) / 3;
        pixel[0] = pixel[1] = pixel[2] = grayPixel;
        pixel += 4;
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
Format Description
CMFormatDescriptionRef formatDescription =
    CMSampleBufferGetFormatDescription(sampleBuffer);
CMMediaType mediaType = CMFormatDescriptionGetMediaType(formatDescription);

// The possible CMMediaType values:
enum {
    kCMMediaType_Video         = 'vide',
    kCMMediaType_Audio         = 'soun',
    kCMMediaType_Muxed         = 'muxx',
    kCMMediaType_Text          = 'text',
    kCMMediaType_ClosedCaption = 'clcp',
    kCMMediaType_Subtitle      = 'sbtl',
    kCMMediaType_TimeCode      = 'tmcd',
    kCMMediaType_Metadata      = 'meta',
};
Timing Information
CMSampleBufferGetPresentationTimeStamp // the presentation timestamp (when the frame should be displayed)
CMSampleBufferGetDecodeTimeStamp       // the decode timestamp (when the frame must be decoded)
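A small hedged sketch of reading these values inside the data-output callback; for raw camera frames the decode timestamp is typically invalid, since frames already arrive in presentation order:

CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTime dts = CMSampleBufferGetDecodeTimeStamp(sampleBuffer);
// Convert the presentation time into seconds for logging or synchronization
NSLog(@"PTS: %f s, DTS valid: %d", CMTimeGetSeconds(pts), CMTIME_IS_VALID(dts));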
A CMSampleBufferRef is usually converted into a texture object and rendered with OpenGL ES or Metal; this is the approach taken by open-source libraries such as GPUImage.
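A minimal sketch of the Metal path, assuming the output delivers kCVPixelFormatType_32BGRA frames and that _textureCache (a CVMetalTextureCacheRef created once with CVMetalTextureCacheCreate against an id<MTLDevice>) already exists:

// A hedged sketch; requires the Metal and CoreVideo frameworks.
- (id<MTLTexture>)textureFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    CVMetalTextureRef cvTexture = NULL;
    CVReturn status =
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                  _textureCache,
                                                  pixelBuffer,
                                                  NULL,
                                                  MTLPixelFormatBGRA8Unorm,
                                                  width,
                                                  height,
                                                  0,
                                                  &cvTexture);
    if (status != kCVReturnSuccess) {
        return nil;
    }
    // The MTLTexture shares storage with the pixel buffer; no copy is made.
    // Production code typically keeps cvTexture alive until the GPU work that
    // samples this texture has completed.
    id<MTLTexture> texture = CVMetalTextureGetTexture(cvTexture);
    CFRelease(cvTexture);
    return texture;
}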