
[AVFoundation] Editing

Author: 朦胧1919 | Published 2017-04-06 23:46

    Original: AVFoundation Programming Guide

    Preface

    This is a rough translation of the relevant AVFoundation documentation. My English is only average, so please leave a comment if you spot any mistakes.

    This chapter's translation is not great, mainly because I know too little of the specialized vocabulary. Please refer to the original text; I will revise it later.

    The AVFoundation framework provides a feature-rich set of classes for editing audiovisual media. At the heart of AVFoundation's editing API are compositions. A composition is simply a collection of tracks taken from one or more media assets. The AVMutableComposition class provides an interface for inserting and removing tracks, as well as managing their ordering. Figure 3-1 shows how a new composition is pieced together from existing assets to form a new asset. If all you want to do is merge multiple assets sequentially into a single file, that is as much detail as you need. If you want to perform custom audio or video processing on the tracks in your composition, you need to incorporate an audio mix or a video composition, respectively.

    Figure 3-1 AVMutableComposition assembles assets together

    Using the AVMutableAudioMix class, you can perform custom audio processing on the audio tracks in your composition, as shown in Figure 3-2. Currently, you can specify a maximum volume or set a volume ramp for an audio track.

    Figure 3-2 AVMutableAudioMix performs audio mixing

    You can use the AVMutableVideoComposition class to work directly with the video tracks in your composition for the purposes of editing, as shown in Figure 3-3. With a single video composition, you can specify the desired render size and scale, as well as the frame duration, for the output video. Through a video composition's instructions (defined by the AVMutableVideoCompositionInstruction class), you can modify the background color of your video and apply layer instructions. These layer instructions (defined by the AVMutableVideoCompositionLayerInstruction class) can be used to apply transforms and opacity settings to the video tracks in your composition. You can also introduce effects from the Core Animation framework into your video using the animationTool property.

    Figure 3-3 AVMutableVideoComposition

    To combine your composition with an audio mix and a video composition, you use an AVAssetExportSession object, as shown in Figure 3-4. You initialize the export session with your composition and then simply assign the audio mix and the video composition to its audioMix and videoComposition properties, respectively.

    Figure 3-4 Use AVAssetExportSession to combine media elements into an output file
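
    As a minimal sketch of this step (assuming the mutableComposition, mutableAudioMix, and mutableVideoComposition objects built in the sections below; the output URL is a placeholder), the export session might be wired up like this:

    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
    // Attach the custom audio and video processing to the export.
    exportSession.audioMix = mutableAudioMix;
    exportSession.videoComposition = mutableVideoComposition;
    exportSession.outputURL = <#A file URL for the exported movie#>;
    exportSession.outputFileType = AVFileTypeQuickTimeMovie;
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        // Inspect exportSession.status when the export finishes.
    }];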

    Creating a Composition

    You create your own composition with the AVMutableComposition class. To add media data to the composition, you must add one or more composition tracks, represented by the AVMutableCompositionTrack class. The simplest case is creating a mutable composition with one video track and one audio track:

    AVMutableComposition *mutableComposition = [AVMutableComposition composition];
    
    // Create the video composition track.
    
    AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
    
    // Create the audio composition track.
    
    AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    
    Options for Initializing a Composition Track

    When adding new tracks to a composition, you must provide both a media type and a track ID. Although audio and video are the most commonly used media types, you can also specify other media types, such as AVMediaTypeSubtitle or AVMediaTypeText.

    Every track associated with audiovisual data has a unique identifier called a track ID. If you specify the special identifier kCMPersistentTrackID_Invalid as the track ID, the system generates a unique identifier for you and associates it with the track.
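
    As a small sketch (reusing the mutableComposition from the example above, and assuming you actually have subtitle media to insert later), you can add a track of another media type and read back the identifier the system generated:

    // Pass kCMPersistentTrackID_Invalid so the framework assigns a unique track ID.
    AVMutableCompositionTrack *subtitleTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeSubtitle preferredTrackID:kCMPersistentTrackID_Invalid];
    CMPersistentTrackID generatedID = subtitleTrack.trackID; // the automatically generated identifier
    NSLog(@"Generated track ID: %d", (int)generatedID);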

    Adding Audiovisual Data to a Composition

    Once you have a composition with one or more tracks, you can begin adding your media data to the appropriate tracks. To add media data to a composition track, you need access to the AVAsset object that contains the media data. You can use the mutable composition track interface to place multiple tracks of the same media type together on the same composition track. The following example illustrates how to add two different video asset tracks in sequence to the same composition track:

    // You can retrieve AVAssets from a number of places, like the camera roll for example.
    
    AVAsset *videoAsset = <#AVAsset with at least one video track#>;
    
    AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
    
    // Get the first video track from each asset.
    
    AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    
    AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    
    // Add them both to the composition.
    
    [mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
    
    [mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];
    
    Retrieving Compatible Composition Tracks

    Where possible, there should be only one composition track per media type. This unification of compatible asset tracks keeps resource usage to a minimum. When presenting media data serially, you should place media data of the same type on the same composition track. You can query a mutable composition to find out whether there is a composition track compatible with the asset track you want to insert:

    AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
    
    if (compatibleCompositionTrack) {
    
        // Implementation continues.
    
    }
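
    One way the implementation might continue (a sketch, not part of the original guide): insert into the compatible track if one exists, and otherwise create a new track of the same media type.

    AVAssetTrack *assetTrack = <#the AVAssetTrack you want to insert#>;
    AVMutableCompositionTrack *targetTrack = [mutableComposition mutableTrackCompatibleWithTrack:assetTrack];
    if (!targetTrack) {
        // No compatible composition track exists yet, so add one.
        targetTrack = [mutableComposition addMutableTrackWithMediaType:assetTrack.mediaType preferredTrackID:kCMPersistentTrackID_Invalid];
    }
    // Append the asset track at the end of the composition.
    [targetTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetTrack.timeRange.duration) ofTrack:assetTrack atTime:mutableComposition.duration error:nil];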
    

    Note: Placing multiple video segments on the same composition track can lead to dropped frames at the transitions between segments, especially on embedded devices. How many composition tracks you use for your video segments depends entirely on the design of your app and its target platform.

    Generating a Volume Ramp

    A single AVMutableAudioMix object can perform custom audio processing on each of the audio tracks in your composition individually. You create an audio mix with the audioMix class method, and you use instances of the AVMutableAudioMixInputParameters class to associate the mix with specific tracks in your composition. An audio mix can be used to vary the volume of an audio track. The following example shows how to set a volume ramp on a specific audio track so that the sound slowly fades out:

    AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
    
    // Create the audio mix input parameters object.
    
    AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
    
    // Set the volume ramp to slowly fade the audio out over the duration of the composition.
    
    [mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
    
    // Attach the input parameters to the audio mix.
    
    mutableAudioMix.inputParameters = @[mixParameters];
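
    The audio mix has no effect on its own; it only takes effect once it is attached to playback or export. A brief sketch, assuming you play back the same composition:

    // For playback: make an immutable copy of the composition and attach the mix to the player item.
    AVComposition *playbackComposition = [mutableComposition copy];
    AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:playbackComposition];
    playerItem.audioMix = mutableAudioMix;
    // For export, you would instead assign it to an AVAssetExportSession's audioMix property.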
    

    Performing Custom Video Processing

    As with an audio mix, you only need one AVMutableVideoComposition object to perform all of your custom video processing on your composition's video tracks. Using a video composition, you can directly set the appropriate render size, scale, and frame duration for the composition's video tracks. For a detailed example of setting these properties, see Setting the Render Size and Frame Duration below.

    Changing the Composition's Background Color

    Every video composition must have an array of AVVideoCompositionInstruction objects containing at least one video composition instruction. You use the AVMutableVideoCompositionInstruction class to create your own video composition instructions. With video composition instructions you can change the composition's background color, specify whether post-processing is needed, and apply layer instructions.

    The following example shows how to create a video composition instruction that changes the background color to red for the entire composition.

    AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    
    mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
    
    mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
    
    Applying Opacity Ramps

    Video composition instructions can also be used to apply layer instructions to the composition. An AVMutableVideoCompositionLayerInstruction object can apply transforms and opacity settings (including ramps) to a video track. The order of the layer instructions in a composition instruction's layerInstructions array determines how video frames from the source tracks are layered and composed for the duration of that composition instruction. The following code fragment shows how to set an opacity ramp to slowly fade out the first video in a composition before transitioning to the second one:

    AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
    
    AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
    
    // Create the first video composition instruction.
    
    AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    
    // Set its time range to span the duration of the first video track.
    
    firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
    
    // Create the layer instruction and associate it with the composition video track.
    
    AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
    
    // Create the opacity ramp to fade out the first video track over its entire duration.
    
    [firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
    
    // Create the second video composition instruction so that the second video track isn't transparent.
    
    AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    
    // Set its time range to span the duration of the second video track.
    
    secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
    
    // Create the second layer instruction and associate it with the composition video track.
    
    AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
    
    // Attach the first layer instruction to the first video composition instruction.
    
    firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
    
    // Attach the second layer instruction to the second video composition instruction.
    
    secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
    
    // Attach both of the video composition instructions to the video composition.
    
    AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
    
    mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
    
    Incorporating Core Animation Effects

    A video composition can bring the power of Core Animation into your composition through the animationTool property. With this animation tool you can accomplish tasks such as watermarking video and adding titles or animated overlays. Core Animation can be used in two different ways: you can add a Core Animation layer as its own individual composition track, or you can render Core Animation effects (using a Core Animation layer) directly into the video frames of your composition. The following code shows the latter option by adding a watermark to the video:

    CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
    
    CALayer *parentLayer = [CALayer layer];
    
    CALayer *videoLayer = [CALayer layer];
    
    parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
    
    videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
    
    [parentLayer addSublayer:videoLayer];
    
    watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
    
    [parentLayer addSublayer:watermarkLayer];
    
    mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
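
    Note that, as far as I know, the animation tool is intended for export and offline rendering rather than real-time playback; for playback you would instead host your layers in an AVSynchronizedLayer tied to the player item. A rough sketch, assuming a playerItem created from the same composition and a view that displays the video:

    // Sketch: layers added to a synchronized layer follow the player item's timeline during playback.
    AVSynchronizedLayer *syncLayer = [AVSynchronizedLayer synchronizedLayerWithPlayerItem:playerItem];
    [syncLayer addSublayer:<#a CALayer whose animations should track the item's timing#>];
    [<#the layer of the view displaying your video#> addSublayer:syncLayer];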
    
    

    Putting It All Together: Combining Multiple Assets and Saving the Result to the Camera Roll

    This brief code example illustrates how to combine two video asset tracks and an audio asset track into a single video file. The subsections below walk through each step: creating the composition, adding the assets, checking the video orientations, applying the video composition layer instructions, setting the render size and frame duration, and exporting the composition and saving it to the Camera Roll.

    Note: To focus on the most relevant code, this example omits several aspects of a complete app, such as memory management and error handling. To use AVFoundation, you are expected to have enough experience with Cocoa to infer the missing pieces.

    Creating the Composition

    You use an AVMutableComposition object to combine tracks from separate assets. Create the composition and add one video track and one audio track:

    AVMutableComposition *mutableComposition = [AVMutableComposition composition];
    
    AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
    
    AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    
    Adding the Assets

    An empty composition does you no good. Add the two video asset tracks and the audio asset track to the composition:

    AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    
    AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    
    [videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
    
    [videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
    
    [audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:nil];
    
    Checking the Video Orientations

    Once the video and audio tracks have been added to the composition, you need to make sure that the orientations of both video tracks are correct. By default, all video tracks are assumed to be in landscape mode. If a video track was shot in portrait mode, the video will not be oriented properly when exported. Likewise, if you try to combine a video shot in portrait mode with a video shot in landscape mode, the export session will fail to complete.

    BOOL isFirstVideoAssetPortrait = NO;
    
    CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
    
    // Check the first video track's preferred transform to determine if it was recorded in portrait mode.
    
    if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
    
        isFirstVideoAssetPortrait = YES;
    
    }
    
    BOOL isSecondVideoAssetPortrait = NO;
    
    CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
    
    // Check the second video track's preferred transform to determine if it was recorded in portrait mode.
    
    if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
    
        isSecondVideoAssetPortrait = YES;
    
    }
    
    if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
    
        UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
    
        [incompatibleVideoOrientationAlert show];
    
        return;
    
    }
    
    Applying the Video Composition Layer Instructions

    Once you know the video segments have compatible orientations, you can apply the necessary layer instructions to each segment and add those layer instructions to the video composition.

    AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    
    // Set the time range of the first instruction to span the duration of the first video track.
    
    firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
    
    AVMutableVideoCompositionInstruction * secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    
    // Set the time range of the second instruction to span the duration of the second video track.
    
    secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
    
    AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
    
    // Set the transform of the first layer instruction to the preferred transform of the first video track.
    
    [firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
    
    AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
    
    // Set the transform of the second layer instruction to the preferred transform of the second video track.
    
    [secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
    
    firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
    
    secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
    
    AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
    
    mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
    

    All AVAssetTrack objects have a preferredTransform property that contains the orientation information for the asset track. This transform is applied whenever the asset track is displayed onscreen. In the previous code, each layer instruction's transform is set to the corresponding asset track's transform so that, once you adjust the render size, the video in the new composition displays correctly.

    Setting the Render Size and Frame Duration

    To complete the video orientation fix, you must adjust the renderSize property accordingly. You should also pick a suitable value for the frameDuration property, such as 1/30th of a second (30 frames per second). By default, the renderScale property is set to 1.0, which is appropriate for this composition.

    CGSize naturalSizeFirst, naturalSizeSecond;
    // If the first video asset was shot in portrait mode, then so was the second one if we made it here.
    if (isFirstVideoAssetPortrait) {
    // Invert the width and height for the video tracks to ensure that they display properly.
        naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
        naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
    }
    else {
    // If the videos weren't shot in portrait mode, we can just use their natural sizes.
        naturalSizeFirst = firstVideoAssetTrack.naturalSize;
        naturalSizeSecond = secondVideoAssetTrack.naturalSize;
    }
    
    float renderWidth, renderHeight;
    // Set the renderWidth and renderHeight to the max of the two videos widths and heights.
    if (naturalSizeFirst.width > naturalSizeSecond.width) {
        renderWidth = naturalSizeFirst.width;
    }
    else {
        renderWidth = naturalSizeSecond.width;
    }
    if (naturalSizeFirst.height > naturalSizeSecond.height) {
        renderHeight = naturalSizeFirst.height;
    }
    else {
        renderHeight = naturalSizeSecond.height;
    }
    mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
    
    // Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
    
    mutableVideoComposition.frameDuration = CMTimeMake(1,30);
    
    Exporting the Composition and Saving It to the Camera Roll

    The final step is to export the entire composition into a single video file and save that video to the Camera Roll. You use an AVAssetExportSession object to create the new video file and pass it the desired URL for the output file. You can then use the ALAssetsLibrary class to save the resulting video file to the Camera Roll.

    // Create a static date formatter so we only have to initialize it once.
    
    static NSDateFormatter *kDateFormatter;
    
    if (!kDateFormatter) {
    
        kDateFormatter = [[NSDateFormatter alloc] init];
    
        kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
    
        kDateFormatter.timeStyle = NSDateFormatterShortStyle;
    
    }
    
    // Create the export session with the composition and set the preset to the highest quality.
    
    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
    
    // Set the desired output URL for the file created by the export process.
    
    exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((__bridge CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
    
    // Set the output file type to be a QuickTime movie.
    
    exporter.outputFileType = AVFileTypeQuickTimeMovie;
    
    exporter.shouldOptimizeForNetworkUse = YES;
    
    exporter.videoComposition = mutableVideoComposition;
    
    // Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
    
    [exporter exportAsynchronouslyWithCompletionHandler:^{
    
        dispatch_async(dispatch_get_main_queue(), ^{
    
            if (exporter.status == AVAssetExportSessionStatusCompleted) {
    
                ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
    
                if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
    
                    [assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
    
                }
    
            }
    
        });
    
    }];
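
    The note earlier mentions that error handling is omitted; in practice you would also want to handle the failure and cancellation cases inside the completion handler. A minimal sketch:

    // Sketch: handle the non-success cases as well.
    if (exporter.status == AVAssetExportSessionStatusFailed) {
        NSLog(@"Export failed: %@", exporter.error);
    } else if (exporter.status == AVAssetExportSessionStatusCancelled) {
        NSLog(@"Export canceled");
    }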
    
