AVFoundation: Multi-Track Video and Dynamic Watermarks

Author: 穿着格子衫的猫 | Published 2019-03-03 22:15

    Some simple video processing written with AVFoundation.
    The code ignores audio; the point is mainly to share what I learned.
    Multi-track video: a convenient way to composite multiple overlapping videos.
    An auto-zooming video watermark inside a video: how to add a video watermark and animate it.
    The demo also includes an AnimationTool-based static image watermark and an animated image watermark.
    The result videos and source code have been uploaded to GitHub.

    My demo source code

    Reference material I drew on:
    AVFoundation API overview:
    https://blog.csdn.net/u011374318/article/details/78829096

    AVFoundation watermark processing:
    http://www.cocoachina.com/ios/20141208/10542.html
    https://www.jianshu.com/p/9f4f37e5abc6


    Screenshots

    [Figures: 9宫格视频播放.png (9-grid video playback), 动态放大视频水印.png (animated zooming video watermark)]

    Multi-Track Composition with AVFoundation

    • Talk is cheap, show me the code
    @interface MultipathAsset : NSObject
    @property (nonatomic, strong) AVAsset *asset;
    @property (nonatomic, assign) CGPoint origin;
    @end
    
    - (AVPlayerItem *)makePlayerItemWithAssets:(NSArray<MultipathAsset *> *)assets size:(CGSize)size {
        AVMutableComposition *comp = [AVMutableComposition composition];
        
        //Record which video each trackID corresponds to
        NSMutableDictionary *trackRecord = [NSMutableDictionary dictionaryWithCapacity:assets.count];
        
        for (MultipathAsset *item in assets) {
            AVMutableCompositionTrack *track = [comp addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
            AVAssetTrack *assetTrack = [item.asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
            CMTimeRange range = CMTimeRangeMake(kCMTimeZero, item.asset.duration);
            [track insertTimeRange:range ofTrack:assetTrack atTime:kCMTimeZero error:nil];
            [trackRecord setObject:item forKey:@(track.trackID)];
        }
        
        AVMutableVideoComposition *videoComp = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:comp];
        videoComp.renderSize = size;
        
        for (AVMutableVideoCompositionInstruction *instruction in videoComp.instructions) {
            [self adjustInstructionLayers:instruction];
            for (AVMutableVideoCompositionLayerInstruction *layerIns in instruction.layerInstructions) {
                MultipathAsset *video = [trackRecord objectForKey:@(layerIns.trackID)];
                if (video) {
                    CGAffineTransform trans = video.asset.preferredTransform;
                    trans = CGAffineTransformTranslate(trans, video.origin.x, video.origin.y);
                    [layerIns setTransform:trans atTime:kCMTimeZero];
                }
            }
        }
        AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:comp];
        item.videoComposition = videoComp;
        return item;
    }
    
    ///Reorders the instruction's layerInstructions so that tracks added later (higher trackIDs) render on top
    - (void)adjustInstructionLayers:(AVMutableVideoCompositionInstruction *)instruction {
        NSArray *layerInstructions = instruction.layerInstructions;
        layerInstructions = [layerInstructions sortedArrayUsingComparator:^NSComparisonResult(id  _Nonnull obj1, id  _Nonnull obj2) {
            AVMutableVideoCompositionLayerInstruction *layerIns1 = obj1;
            AVMutableVideoCompositionLayerInstruction *layerIns2 = obj2;
            if (layerIns1.trackID < layerIns2.trackID) {
                return NSOrderedDescending;
            } else if (layerIns1.trackID > layerIns2.trackID) {
                return NSOrderedAscending;
            } else {
                return NSOrderedSame;
            }
        }];
        instruction.layerInstructions = layerInstructions;
    }
    

    This is much less code than computing the Instructions and layerInstructions yourself, and it is far less error-prone.

    • The core method used:
    AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:
    

    Let's look at its documentation: in short, it partitions the AVAsset's video tracks into Instructions and LayerInstructions for us.


    [Figure: 方法说明.png (documentation of videoCompositionWithPropertiesOfAsset:)]

    The most painful part of multi-video processing is managing the individual tracks, especially when videos overlap in time: you have to partition the timeline into Instructions and their layerInstructions yourself, a process that is both tedious and error-prone.
    Now the AVFoundation framework can do this work for us; we only need to process each resulting layerInstruction.

    • Things to watch out for
    1. The LayerInstructions the framework produces are not necessarily in the stacking order you want. The video for the first layerInstruction appears on top, the next one is layered beneath it, and so on, so you need to reorder the LayerInstructions array to match your requirements.


      [Figure: layerInstructions说明.png (layerInstructions documentation)]
    2. With the track-adding method below, trackIDs start at 1, then 2, then 3, incrementing by 1 each time, so they line up with the order of the videos we pass in. Instead of kCMPersistentTrackID_Invalid you can also specify trackIDs directly, but they must be consecutive, starting from 1 with no gaps.
       In the multi-track code I use trackRecord to record each video's trackID.
    AVMutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid
    

    An Animated, Centered, Zooming Video Watermark with AVFoundation

    Building on the multi-track code, just center the second video and add a zoom animation. If you're interested, you can extend the earlier multi-track demo to give each video its own move, zoom, fade-in, and fade-out effects.

        AVMutableComposition *comp = [AVMutableComposition composition];
        
        CGSize videoSize = bgAsset.naturalSize;
        CGFloat startX,startY,endX,endY;
        CGFloat scale = 1.2;
        
        {
            startX = videoSize.width / 2 - watermark.naturalSize.width / 2;
            startY = videoSize.height / 2 - watermark.naturalSize.height / 2;
            endX = videoSize.width / 2 - watermark.naturalSize.width * scale / 2;
            endY = videoSize.height / 2 - watermark.naturalSize.height * scale / 2;
        }
        
        AVMutableCompositionTrack *bgTrack = [comp addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
        AVMutableCompositionTrack *wmTrack = [comp addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
        
    
        CMTimeRange bgRange = CMTimeRangeMake(kCMTimeZero, bgAsset.duration);
        AVAssetTrack *bgAssetTrack = [bgAsset tracksWithMediaType:AVMediaTypeVideo].firstObject;
        [bgTrack insertTimeRange:bgRange ofTrack:bgAssetTrack atTime:kCMTimeZero error:nil];
    
        CMTimeRange wmRange = CMTimeRangeMake(kCMTimeZero, watermark.duration);
        AVAssetTrack *wmAssetTrack = [watermark tracksWithMediaType:AVMediaTypeVideo].firstObject;
        [wmTrack insertTimeRange:wmRange ofTrack:wmAssetTrack atTime:kCMTimeZero error:nil];
       
        AVMutableVideoComposition *videoComp = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:comp];
        videoComp.renderSize = videoSize;
    
        for (AVMutableVideoCompositionInstruction *instruction in videoComp.instructions) {
            [self adjustInstructionLayers:instruction];
            for (AVMutableVideoCompositionLayerInstruction *layerIns in instruction.layerInstructions) {
                if (layerIns.trackID == 1) {
                    [layerIns setTransform:bgAsset.preferredTransform atTime:kCMTimeZero];
                } else {
                    CGAffineTransform startTrans;
                    CGAffineTransform endTrans;
                    
                    CGAffineTransform trans = watermark.preferredTransform;
                    startTrans = CGAffineTransformTranslate(trans, startX, startY);
                    
                    trans =  CGAffineTransformTranslate(trans, endX, endY);
                    endTrans = CGAffineTransformScale(trans, scale, scale);
                    
                    [layerIns setTransformRampFromStartTransform:startTrans toEndTransform:endTrans timeRange:wmRange];
                }
            }
        }
    
    • A pitfall to watch out for
      A LayerInstruction's transform treats the top-left corner as the origin, not the centered anchor point we get with an ordinary layer. A naive zoom animation therefore grows out from a fixed top-left corner toward the right and bottom, instead of the expected expansion outward from the center.
      Once you understand this, you only need a coordinate conversion: for a centered zoom, compute the top-left origin of the scaled frame, translate there, and then apply the scale.
    CGFloat scale = 1.2;
    endX = videoSize.width / 2 - watermark.naturalSize.width * scale / 2;
    endY = videoSize.height / 2 - watermark.naturalSize.height * scale / 2;
    ......
    CGAffineTransform trans = watermark.preferredTransform;
    trans =  CGAffineTransformTranslate(trans, endX, endY);
    endTrans = CGAffineTransformScale(trans, scale, scale);
    

    Then call the following method:

    [layerIns setTransformRampFromStartTransform:startTrans toEndTransform:endTrans timeRange:wmRange];
    

    As an aside, LayerInstruction provides three ramp-related getter methods that cover most needs; the descriptions below are copied (and translated) from the API overview linked above.

    //Gets the affine transform ramp that includes the specified time
    //startTransform, endTransform receive the ramp's start and end values
    //timeRange receives the time range over which the ramp runs
    //The return value indicates whether the specified time falls within timeRange
    - (BOOL)getTransformRampForTime:(CMTime)time startTransform:(nullable CGAffineTransform *)startTransform endTransform:(nullable CGAffineTransform *)endTransform timeRange:(nullable CMTimeRange *)timeRange;
    
    //Gets the opacity ramp that includes the specified time
    //startOpacity, endOpacity receive the ramp's start and end opacity values
    //timeRange receives the time range over which the ramp runs
    //The return value indicates whether the specified time falls within timeRange
    - (BOOL)getOpacityRampForTime:(CMTime)time startOpacity:(nullable float *)startOpacity endOpacity:(nullable float *)endOpacity timeRange:(nullable CMTimeRange *)timeRange;
    
    //Gets the crop rectangle ramp that includes the specified time
    //startCropRectangle, endCropRectangle receive the ramp's start and end values
    //timeRange receives the time range over which the ramp runs
    //The return value indicates whether the specified time falls within timeRange
    - (BOOL)getCropRectangleRampForTime:(CMTime)time startCropRectangle:(nullable CGRect *)startCropRectangle endCropRectangle:(nullable CGRect *)endCropRectangle timeRange:(nullable CMTimeRange *)timeRange NS_AVAILABLE(10_9, 7_0);
    
    
