Introduction
AVFoundation provides a set of features that covers most of the scenarios you encounter when building media applications, and when working with the framework the preferred approach is to use its dedicated high-level APIs. As you build more complex media applications, however, you may run into use cases that AVFoundation's built-in functionality does not support. At that point you need the low-level functionality provided by the framework's AVAssetReader and AVAssetWriter classes, which let you work with media samples directly.
AVAssetReader
AVAssetReader reads media samples from an AVAsset. It is usually configured with one or more AVAssetReaderOutput instances, and audio samples and video frames are retrieved through the copyNextSampleBuffer method.
AVAssetReaderOutput is an abstract class, but the framework defines concrete subclasses for reading decoded media samples from a specific AVAssetTrack, reading mixed output from multiple audio tracks, or reading composited output from multiple video tracks:
1.AVAssetReaderAudioMixOutput
2.AVAssetReaderTrackOutput
3.AVAssetReaderVideoCompositionOutput
4.AVAssetReaderSampleReferenceOutput
Internally, an asset reader's pipeline continuously pre-fetches the next available sample on background threads, minimizing latency when samples are requested. Despite this low-latency retrieval, AVAssetReader is still not intended for real-time operations such as playback.
AVAssetReader works with only a single asset at a time. If you need to read samples from multiple file-based assets simultaneously, combine them into an instance of the AVAsset subclass AVComposition.
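For instance, here is a minimal sketch of merging the video tracks of two file-based assets end to end into a single composition that one reader can consume (urlA and urlB are assumed local file URLs):

// Hypothetical example: append the video tracks of two assets end to end.
NSURL *urlA, *urlB; // assumed URLs of local media files
AVAsset *assetA = [AVAsset assetWithURL:urlA];
AVAsset *assetB = [AVAsset assetWithURL:urlB];
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];
CMTime cursor = kCMTimeZero;
for (AVAsset *sourceAsset in @[assetA, assetB]) {
    AVAssetTrack *sourceTrack =
        [[sourceAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    [videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, sourceAsset.duration)
                        ofTrack:sourceTrack
                         atTime:cursor
                          error:nil];
    cursor = CMTimeAdd(cursor, sourceAsset.duration);
}
// The composition can now be passed to -[AVAssetReader initWithAsset:error:].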
The basic setup for reading from a single asset's video track looks like this:

NSURL *fileUrl; // URL of a local media file (assumed to be set)
AVAsset *asset = [AVAsset assetWithURL:fileUrl];
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
NSError *error;
self.assetReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
// Read samples from the asset's video track, decompressing frames to BGRA.
NSDictionary *readerOutputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
AVAssetReaderTrackOutput *trackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:readerOutputSettings];
if ([self.assetReader canAddOutput:trackOutput]) {
    [self.assetReader addOutput:trackOutput];
}
[self.assetReader startReading];
// Individual CMSampleBuffers are then pulled with trackOutput's copyNextSampleBuffer.
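A minimal read loop over that output might look like the following sketch; the per-frame work is left as a stub:

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [trackOutput copyNextSampleBuffer]) != NULL) {
    // Process the decoded BGRA frame here (e.g., upload it to a texture).
    CFRelease(sampleBuffer);
}
if (self.assetReader.status == AVAssetReaderStatusCompleted) {
    // Every sample was read successfully.
}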
AVAssetWriter
AVAssetWriter encodes media and writes it into a container file, such as an MPEG-4 file or a QuickTime movie file.
It is configured with one or more AVAssetWriterInput objects, to which you append the CMSampleBuffer objects containing the media samples to be written to the container.
An AVAssetWriterInput is configured to handle a specific media type, such as audio or video, and the samples appended to it produce a single AVAssetTrack in the final output. When using an AVAssetWriterInput configured for video samples, you will often use a dedicated adapter object, AVAssetWriterInputPixelBufferAdaptor, which provides the best performance when appending video samples wrapped as CVPixelBuffer objects. Inputs can also be made mutually exclusive through an AVAssetWriterInputGroup, which lets you create assets containing language-specific media tracks that can be selected at playback time with the AVMediaSelectionGroup and AVMediaSelectionOption classes.
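As a hedged sketch of that grouping API (the languages and variable names here are illustrative), two alternate audio inputs could be made mutually exclusive like this:

// Hypothetical example: two alternate-language audio tracks, English by default.
AVAssetWriterInput *englishInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                       outputSettings:nil]; // nil = passthrough
englishInput.languageCode = @"eng";
AVAssetWriterInput *spanishInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                       outputSettings:nil];
spanishInput.languageCode = @"spa";
AVAssetWriterInputGroup *group =
    [AVAssetWriterInputGroup assetWriterInputGroupWithInputs:@[englishInput, spanishInput]
                                                defaultInput:englishInput];
if ([self.assetWriter canAddInputGroup:group]) {
    [self.assetWriter addInputGroup:group];
}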
AVAssetWriter automatically supports interleaving of media samples. An AVAssetWriterInput exposes a readyForMoreMediaData property that indicates whether the input can accept more data while maintaining the desired interleaving; a new sample may be appended to the input only while this property is YES.
AVAssetWriter can be used in both real-time and offline scenarios, and each scenario has its own way of feeding sample buffers to the writer's inputs:
1. Real-time: When working with a real-time source, such as writing captured samples from an AVCaptureVideoDataOutput, the AVAssetWriterInput should have expectsMediaDataInRealTime set to YES so that readyForMoreMediaData is computed correctly. This optimizes the writer for real-time sources: writing samples quickly takes priority over maintaining ideal interleaving.
2. Offline: When reading media from an offline source, such as sample buffers obtained from an AVAssetReader, you still need to check the writer input's readyForMoreMediaData property before appending samples, but you can use the requestMediaDataWhenReadyOnQueue:usingBlock: method to control how the data is supplied. The block passed to this method is invoked repeatedly whenever the writer input is ready to append more samples; on each invocation you pull the next sample from the source and append it.
NSURL *outputUrl; // destination URL for the movie file (assumed to be set)
NSError *error;
self.assetWriter = [[AVAssetWriter alloc] initWithURL:outputUrl fileType:AVFileTypeQuickTimeMovie error:&error];
NSDictionary *writerOutputSettings = @{
    AVVideoCodecKey : AVVideoCodecH264,
    AVVideoWidthKey : @1280,
    AVVideoHeightKey : @720,
    AVVideoCompressionPropertiesKey : @{
        AVVideoMaxKeyFrameIntervalKey : @1,
        AVVideoAverageBitRateKey : @10500000,
        AVVideoProfileLevelKey : AVVideoProfileLevelH264Main31
    }
};
AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:writerOutputSettings];
if ([self.assetWriter canAddInput:writerInput]) {
    [self.assetWriter addInput:writerInput];
}
[self.assetWriter startWriting];
Compared with AVAssetExportSession, the clear advantage of AVAssetWriter is the fine-grained control it gives you over compression settings when encoding the output. You can specify settings such as keyframe interval, video bit rate, H.264 profile level, pixel aspect ratio, and clean aperture.
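As an illustrative sketch (the values are hypothetical), pixel aspect ratio and clean aperture are expressed as subdictionaries within the compression properties shown earlier:

NSDictionary *compressionProperties = @{
    AVVideoMaxKeyFrameIntervalKey : @1,
    AVVideoAverageBitRateKey : @10500000,
    AVVideoProfileLevelKey : AVVideoProfileLevelH264Main31,
    // Square pixels (1:1 horizontal-to-vertical spacing).
    AVVideoPixelAspectRatioKey : @{
        AVVideoPixelAspectRatioHorizontalSpacingKey : @1,
        AVVideoPixelAspectRatioVerticalSpacingKey : @1
    },
    // A clean aperture covering the full 1280x720 frame.
    AVVideoCleanApertureKey : @{
        AVVideoCleanApertureWidthKey : @1280,
        AVVideoCleanApertureHeightKey : @720,
        AVVideoCleanApertureHorizontalOffsetKey : @0,
        AVVideoCleanApertureVerticalOffsetKey : @0
    }
};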
Example
dispatch_queue_t dispatchQueue = dispatch_queue_create("com.writerQueue", NULL);
// Start a new writing session, passing the start time of the asset's samples.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
/**
 Invoked repeatedly while the writer input is ready to accept more samples.
 On each invocation, as long as the input remains ready, copy the next
 available sample from the track output and append it to the input.
 Once all samples have been copied from the track output, mark the
 AVAssetWriterInput as finished to indicate that appending is complete.
 **/
[writerInput requestMediaDataWhenReadyOnQueue:dispatchQueue usingBlock:^{
    BOOL complete = NO;
    while ([writerInput isReadyForMoreMediaData] && !complete) {
        CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];
        if (sampleBuffer) {
            BOOL result = [writerInput appendSampleBuffer:sampleBuffer];
            CFRelease(sampleBuffer);
            complete = !result;
        } else {
            [writerInput markAsFinished];
            complete = YES;
        }
    }
    if (complete) {
        [self.assetWriter finishWritingWithCompletionHandler:^{
            AVAssetWriterStatus status = self.assetWriter.status;
            if (status == AVAssetWriterStatusCompleted) {
                // The file was written successfully.
            } else {
                // Writing failed; inspect self.assetWriter.error.
            }
        }];
    }
}];
A note on startSessionAtSourceTime: this call creates a new writing session and passes kCMTimeZero as the start time of the asset's samples. The block passed to requestMediaDataWhenReadyOnQueue:usingBlock: is invoked repeatedly whenever the writer input is ready to append more samples. During each invocation, while the input remains ready to accept data, available samples are copied from the track output and appended to the input. Once all samples have been copied from the track output, the AVAssetWriterInput is marked as finished to indicate that appending is complete. Finally, finishWritingWithCompletionHandler is called on the AVAssetWriter to close the writing session.
Reading Audio Samples
Reading an asset's audio data into an NSData:
+ (void)loadAudioSamplesFromAsset:(AVAsset *)asset
                  completionBlock:(THSampleDataCompletionBlock)completionBlock {
    NSString *tracks = @"tracks";
    // Load the "tracks" property asynchronously so the caller is not blocked.
    [asset loadValuesAsynchronouslyForKeys:@[tracks] completionHandler:^{
        AVKeyValueStatus status = [asset statusOfValueForKey:tracks error:nil];
        NSData *sampleData = nil;
        // Only read the samples once the tracks have successfully loaded.
        if (status == AVKeyValueStatusLoaded) {
            sampleData = [self readAudioSamplesFromAsset:asset];
        }
        // Deliver the result back on the main queue.
        dispatch_async(dispatch_get_main_queue(), ^{
            completionBlock(sampleData);
        });
    }];
}
+ (NSData *)readAudioSamplesFromAsset:(AVAsset *)asset {
    NSError *error = nil;
    // Create a reader for the asset.
    AVAssetReader *assetReader =
        [[AVAssetReader alloc] initWithAsset:asset error:&error];
    if (!assetReader) {
        NSLog(@"Error creating asset reader: %@", [error localizedDescription]);
        return nil;
    }
    // Read from the first audio track.
    AVAssetTrack *track =
        [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    // Decode to 16-bit, little-endian, integer linear PCM.
    NSDictionary *outputSettings = @{
        AVFormatIDKey : @(kAudioFormatLinearPCM),
        AVLinearPCMIsBigEndianKey : @NO,
        AVLinearPCMIsFloatKey : @NO,
        AVLinearPCMBitDepthKey : @(16)
    };
    AVAssetReaderTrackOutput *trackOutput =
        [[AVAssetReaderTrackOutput alloc] initWithTrack:track
                                         outputSettings:outputSettings];
    [assetReader addOutput:trackOutput];
    [assetReader startReading];
    NSMutableData *sampleData = [NSMutableData data];
    while (assetReader.status == AVAssetReaderStatusReading) {
        // Pull the next decoded sample buffer from the track output.
        CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];
        if (sampleBuffer) {
            // The block buffer holds the raw PCM bytes.
            CMBlockBufferRef blockBufferRef =
                CMSampleBufferGetDataBuffer(sampleBuffer);
            size_t length = CMBlockBufferGetDataLength(blockBufferRef);
            // Copy the bytes through a heap buffer; length is a byte count,
            // so a stack array of SInt16 sized by it would over-allocate and
            // risk overflowing the stack for large buffers.
            void *sampleBytes = malloc(length);
            CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, sampleBytes);
            [sampleData appendBytes:sampleBytes length:length];
            free(sampleBytes);
            // Invalidate and release each sample buffer once consumed.
            CMSampleBufferInvalidate(sampleBuffer);
            CFRelease(sampleBuffer);
        }
    }
    if (assetReader.status == AVAssetReaderStatusCompleted) {
        return sampleData;
    } else {
        NSLog(@"Failed to read audio samples from asset");
        return nil;
    }
}
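A hypothetical call site, assuming the two class methods above live in a class named THSampleDataProvider:

NSURL *assetURL; // assumed URL of a local media file
AVAsset *asset = [AVAsset assetWithURL:assetURL];
[THSampleDataProvider loadAudioSamplesFromAsset:asset
                                completionBlock:^(NSData *sampleData) {
    // Runs on the main queue; sampleData is nil if reading failed.
}];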
Advanced Capture and Recording
1. Capturing frames from the camera (covered under video processing in the advanced-capture material). By running the video through filters, we let the user see the processed video as it is captured. Each frame arrives through the capture delegate callback below:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
Frame processing is generally done with OpenGL ES, Metal, or Core Image. OpenGL ES is currently the most widely used, though Apple recommends Metal.
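For context, here is a minimal sketch of configuring the AVCaptureVideoDataOutput that drives this callback; the capture session and queue name are assumptions:

AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
// BGRA frames work well with Core Image, OpenGL ES, and Metal.
videoDataOutput.videoSettings =
    @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
// Keep late frames so the recording does not drop them.
videoDataOutput.alwaysDiscardsLateVideoFrames = NO;
dispatch_queue_t videoQueue = dispatch_queue_create("com.example.videoQueue", NULL);
[videoDataOutput setSampleBufferDelegate:self queue:videoQueue];
if ([self.captureSession canAddOutput:videoDataOutput]) { // captureSession assumed
    [self.captureSession addOutput:videoDataOutput];
}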
2. Preparing to write audio and video
- (void)startWriting {
    dispatch_async(self.dispatchQueue, ^{
        NSError *error = nil;
        // Container type to write.
        NSString *fileType = AVFileTypeQuickTimeMovie;
        self.assetWriter =
            [AVAssetWriter assetWriterWithURL:[self outputURL]
                                     fileType:fileType
                                        error:&error];
        if (!self.assetWriter || error) {
            NSString *formatString = @"Could not create AVAssetWriter: %@";
            NSLog(@"%@", [NSString stringWithFormat:formatString, error]);
            return;
        }
        // Configure the video input.
        self.assetWriterVideoInput =
            [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                           outputSettings:self.videoSettings];
        // Samples come from a real-time capture source.
        self.assetWriterVideoInput.expectsMediaDataInRealTime = YES;
        UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
        // Apply a transform so portrait and landscape recordings play back upright.
        self.assetWriterVideoInput.transform =
            THTransformForDeviceOrientation(orientation);
        NSDictionary *attributes = @{
            (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
            (id)kCVPixelBufferWidthKey : self.videoSettings[AVVideoWidthKey],
            (id)kCVPixelBufferHeightKey : self.videoSettings[AVVideoHeightKey],
            (id)kCVPixelBufferOpenGLESCompatibilityKey : (id)kCFBooleanTrue
        };
        // The pixel buffer adaptor provides an efficient path for appending video frames.
        self.assetWriterInputPixelBufferAdaptor =
            [[AVAssetWriterInputPixelBufferAdaptor alloc]
                initWithAssetWriterInput:self.assetWriterVideoInput
             sourcePixelBufferAttributes:attributes];
        if ([self.assetWriter canAddInput:self.assetWriterVideoInput]) {
            [self.assetWriter addInput:self.assetWriterVideoInput];
        } else {
            NSLog(@"Unable to add video input.");
            return;
        }
        // Configure the audio input.
        self.assetWriterAudioInput =
            [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                           outputSettings:self.audioSettings];
        // Audio samples also arrive in real time.
        self.assetWriterAudioInput.expectsMediaDataInRealTime = YES;
        if ([self.assetWriter canAddInput:self.assetWriterAudioInput]) {
            [self.assetWriter addInput:self.assetWriterAudioInput];
        } else {
            NSLog(@"Unable to add audio input.");
        }
        self.isWriting = YES;
        self.firstSample = YES;
    });
}
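THTransformForDeviceOrientation is a helper from the sample project; one plausible implementation, assuming the camera delivers buffers in its native landscape-left orientation, is:

static CGAffineTransform THTransformForDeviceOrientation(UIDeviceOrientation orientation) {
    switch (orientation) {
        case UIDeviceOrientationLandscapeRight:
            return CGAffineTransformMakeRotation(M_PI);
        case UIDeviceOrientationPortraitUpsideDown:
            return CGAffineTransformMakeRotation(3 * M_PI_2);
        case UIDeviceOrientationPortrait:
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
            // Portrait playback needs a 90-degree rotation.
            return CGAffineTransformMakeRotation(M_PI_2);
        default: // UIDeviceOrientationLandscapeLeft and unknown
            return CGAffineTransformIdentity;
    }
}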
3. Writing the data
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    if (!self.isWriting) {
        return;
    }
    // Determine whether this is a video or an audio sample.
    CMFormatDescriptionRef formatDesc =
        CMSampleBufferGetFormatDescription(sampleBuffer);
    CMMediaType mediaType = CMFormatDescriptionGetMediaType(formatDesc);
    if (mediaType == kCMMediaType_Video) {
        CMTime timestamp =
            CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        // On the first video sample, start writing and open the session
        // at that sample's presentation timestamp.
        if (self.firstSample) {
            if ([self.assetWriter startWriting]) {
                [self.assetWriter startSessionAtSourceTime:timestamp];
            } else {
                NSLog(@"Failed to start writing.");
            }
            self.firstSample = NO;
        }
        // Fetch an empty pixel buffer from the adaptor's pool to render into.
        CVPixelBufferRef outputRenderBuffer = NULL;
        CVPixelBufferPoolRef pixelBufferPool =
            self.assetWriterInputPixelBufferAdaptor.pixelBufferPool;
        CVReturn err = CVPixelBufferPoolCreatePixelBuffer(NULL,
                                                          pixelBufferPool,
                                                          &outputRenderBuffer);
        if (err) {
            NSLog(@"Unable to obtain a pixel buffer from the pool.");
            return;
        }
        // Wrap the captured frame in a CIImage and run it through the active filter.
        CVPixelBufferRef imageBuffer =
            CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:imageBuffer
                                                       options:nil];
        [self.activeFilter setValue:sourceImage forKey:kCIInputImageKey];
        CIImage *filteredImage = self.activeFilter.outputImage;
        if (!filteredImage) {
            filteredImage = sourceImage;
        }
        // Render the filtered image into the pooled pixel buffer.
        [self.ciContext render:filteredImage
               toCVPixelBuffer:outputRenderBuffer
                        bounds:filteredImage.extent
                    colorSpace:self.colorSpace];
        // Append the rendered buffer if the video input is ready for it.
        if (self.assetWriterVideoInput.readyForMoreMediaData) {
            if (![self.assetWriterInputPixelBufferAdaptor
                        appendPixelBuffer:outputRenderBuffer
                     withPresentationTime:timestamp]) {
                NSLog(@"Error appending pixel buffer.");
            }
        }
        CVPixelBufferRelease(outputRenderBuffer);
    }
    // Append audio samples only after the session has started with a video sample.
    else if (!self.firstSample && mediaType == kCMMediaType_Audio) {
        if (self.assetWriterAudioInput.isReadyForMoreMediaData) {
            if (![self.assetWriterAudioInput appendSampleBuffer:sampleBuffer]) {
                NSLog(@"Error appending audio sample buffer.");
            }
        }
    }
}
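To complete the workflow, here is a hedged sketch of stopping the recording; it reuses the properties from the code above, and the completion handling is illustrative. Note that finishWritingWithCompletionHandler marks any unfinished inputs as finished before closing the file:

- (void)stopWriting {
    self.isWriting = NO;
    dispatch_async(self.dispatchQueue, ^{
        [self.assetWriter finishWritingWithCompletionHandler:^{
            if (self.assetWriter.status == AVAssetWriterStatusCompleted) {
                // The movie at the writer's output URL is complete and playable.
            } else {
                NSLog(@"Failed to write movie: %@", self.assetWriter.error);
            }
        }];
    });
}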