Original article
https://www.yuque.com/docs/share/45644f31-0261-42e3-b858-20a7cc953fdc?# "SVGAPlayer Library Study Notes"
Preface
This article records my study of the SVGAPlayer library.
Repository
https://github.com/svga/SVGAPlayer-iOS
Official description
Similar to Lottie. Render After Effects / Animate CC (Flash) animations natively on Android, iOS, and the Web.
In other words, it plays After Effects and Flash animations.
Additional description
SVGAConverter exports Flash and After Effects animations as .SVGA files (which are actually ZIP archives) for SVGAPlayer to play back. SVGAPlayer supports iOS / Android / Web / ReactNative / LayaBox and other platforms and game engines.
What SVGA does is actually very simple. The Converter extracts every animated element (bitmap or vector) from the Flash or AE source file and exports that element's state on every frame of the timeline (translation, scale, rotation, opacity). The Player is then responsible for restoring this information onto the canvas.
As a result, SVGA has traits of both frame sequences and element animation. The Player logic is extremely simple: it just renders every element onto the screen exactly as recorded, with no interpolation whatsoever. (The authors consider any interpolation logic to be complex.)
This is also where SVGA differs from Lottie: Lottie has to re-implement all of After Effects' logic in the Player layer, whereas SVGA drops that logic entirely. It is also why SVGA can support Flash as well; the authors believe Flash and its successor Animate CC still have plenty of life left and a mature design ecosystem.
SVGA was originally created to reduce the cost of frame-sequence animations, so performance has always been its focus. If you dig into SVGA's implementation, you will find it does one very important thing: before playback starts, it uploads all textures to the GPU in one pass, and those textures are then reused throughout playback. The number of CPU–GPU transfers drops dramatically, the number of textures stays within a controllable range, and memory, CPU, and GPU usage stay close to optimal.
Code analysis
Demo code example
Let's start from SVGAPlayer.
SVGAPlayer
SVGAPlayer is the view that displays the animation; its structure shows how SVGA playback works. A typical setup looks like the sketch below.
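Before looking at the internals, here is how the player is typically used, following the project's README (a sketch inside a view controller; the bundled resource name "rocket" is a made-up example):
<pre data-language="objectivec">// Minimal usage sketch; "rocket" is a hypothetical .svga resource in the app bundle.
SVGAPlayer *player = [[SVGAPlayer alloc] initWithFrame:self.view.bounds];
player.loops = 0;                // 0 = loop forever
player.clearsAfterStop = YES;    // clear the canvas when playback stops
[self.view addSubview:player];

SVGAParser *parser = [[SVGAParser alloc] init];
[parser parseWithNamed:@"rocket" inBundle:nil completionBlock:^(SVGAVideoEntity *videoItem) {
    player.videoItem = videoItem;   // assigning the entity triggers draw
    [player startAnimation];
} failureBlock:^(NSError *error) {
    NSLog(@"SVGA parse failed: %@", error);
}];
</pre>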
The draw method
<pre data-language="objectivec">- (void)draw {
self.drawLayer = [[CALayer alloc] init];
self.drawLayer.frame = CGRectMake(0, 0, self.videoItem.videoSize.width, self.videoItem.videoSize.height);
self.drawLayer.masksToBounds = true;
NSMutableDictionary *tempHostLayers = [NSMutableDictionary dictionary];
NSMutableArray *tempContentLayers = [NSMutableArray array];
[self.videoItem.sprites enumerateObjectsUsingBlock:^(SVGAVideoSpriteEntity * _Nonnull sprite, NSUInteger idx, BOOL * _Nonnull stop) {
UIImage *bitmap;
if (sprite.imageKey != nil) {
NSString *bitmapKey = [sprite.imageKey stringByDeletingPathExtension];
if (self.dynamicObjects[bitmapKey] != nil) {
bitmap = self.dynamicObjects[bitmapKey];
}
else {
bitmap = self.videoItem.images[bitmapKey];
}
}
// Store the bitmap and the per-frame position data (frames) in the layer
SVGAContentLayer *contentLayer = [sprite requestLayerWithBitmap:bitmap];
contentLayer.imageKey = sprite.imageKey;
[tempContentLayers addObject:contentLayer];
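// Sprites whose imageKey ends in ".matte" are mask layers: the content layer becomes the mask of a new host layer,
// and the sprites that reference it via matteKey are then added as sublayers of that host layer.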
if ([sprite.imageKey hasSuffix:@".matte"]) {
CALayer *hostLayer = [[CALayer alloc] init];
hostLayer.mask = contentLayer;
tempHostLayers[sprite.imageKey] = hostLayer;
} else {
if (sprite.matteKey && sprite.matteKey.length > 0) {
CALayer *hostLayer = tempHostLayers[sprite.matteKey];
[hostLayer addSublayer:contentLayer];
if (![sprite.matteKey isEqualToString:self.videoItem.sprites[idx - 1].matteKey]) {
[self.drawLayer addSublayer:hostLayer];
}
} else {
[self.drawLayer addSublayer:contentLayer];
}
}
if (sprite.imageKey != nil) {
// Check whether a dynamic text was supplied for this key
if (self.dynamicTexts[sprite.imageKey] != nil) {
NSAttributedString *text = self.dynamicTexts[sprite.imageKey];
CGSize bitmapSize = CGSizeMake(self.videoItem.images[sprite.imageKey].size.width * self.videoItem.images[sprite.imageKey].scale, self.videoItem.images[sprite.imageKey].size.height * self.videoItem.images[sprite.imageKey].scale);
CGSize size = [text boundingRectWithSize:bitmapSize
options:NSStringDrawingUsesLineFragmentOrigin
context:NULL].size;
CATextLayer *textLayer = [CATextLayer layer];
textLayer.contentsScale = [[UIScreen mainScreen] scale];
[textLayer setString:self.dynamicTexts[sprite.imageKey]];
textLayer.frame = CGRectMake(0, 0, size.width, size.height);
[contentLayer addSublayer:textLayer];
contentLayer.textLayer = textLayer;
[contentLayer resetTextLayerProperties:text];
}
// Check whether this layer should be hidden dynamically
if (self.dynamicHiddens[sprite.imageKey] != nil &&
[self.dynamicHiddens[sprite.imageKey] boolValue] == YES) {
contentLayer.dynamicHidden = YES;
}
if (self.dynamicDrawings[sprite.imageKey] != nil) {
contentLayer.dynamicDrawingBlock = self.dynamicDrawings[sprite.imageKey];
}
}
}];
self.contentLayers = tempContentLayers;
[self.layer addSublayer:self.drawLayer];
NSMutableArray *audioLayers = [NSMutableArray array];
[self.videoItem.audios enumerateObjectsUsingBlock:^(SVGAAudioEntity * _Nonnull obj, NSUInteger idx, BOOL * _Nonnull stop) {
SVGAAudioLayer *audioLayer = [[SVGAAudioLayer alloc] initWithAudioItem:obj videoItem:self.videoItem];
[audioLayers addObject:audioLayer];
}];
self.audioLayers = audioLayers;
[self update];
[self resize];
}
</pre>
As you can see, the layers are created from the array of SVGAVideoSpriteEntity objects stored in SVGAVideoEntity.
In requestLayerWithBitmap:, each sprite's bitmap and per-frame position data (frames) are stored in the layer.
SVGAContentLayer itself is a subclass of CALayer.
The update method
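The screenshot of update is omitted, but the gist is simple. Here is a minimal sketch of the idea (assuming the content layer exposes something like stepToFrame:; this is not the library's verbatim source): every content layer is told to display the state recorded for currentFrame, with implicit animations disabled so the change applies instantly.
<pre data-language="objectivec">// Sketch only: push every content layer to the state recorded for the current frame.
- (void)update {
    [CATransaction setDisableActions:YES];   // apply instantly, no implicit animation
    for (SVGAContentLayer *layer in self.contentLayers) {
        [layer stepToFrame:self.currentFrame];
    }
    [CATransaction setDisableActions:NO];
}
</pre>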
The resize method
Depending on the content mode, the drawLayer is scaled and positioned differently:
<pre data-language="objectivec">- (void)resize {
if (self.contentMode == UIViewContentModeScaleAspectFit) {
CGFloat videoRatio = self.videoItem.videoSize.width / self.videoItem.videoSize.height;
CGFloat layerRatio = self.bounds.size.width / self.bounds.size.height;
if (videoRatio > layerRatio) {
CGFloat ratio = self.bounds.size.width / self.videoItem.videoSize.width;
CGPoint offset = CGPointMake(
(1.0 - ratio) / 2.0 * self.videoItem.videoSize.width,
(1.0 - ratio) / 2.0 * self.videoItem.videoSize.height
- (self.bounds.size.height - self.videoItem.videoSize.height * ratio) / 2.0
);
self.drawLayer.transform = CATransform3DMakeAffineTransform(CGAffineTransformMake(ratio, 0, 0, ratio, -offset.x, -offset.y));
}
else {
CGFloat ratio = self.bounds.size.height / self.videoItem.videoSize.height;
CGPoint offset = CGPointMake(
(1.0 - ratio) / 2.0 * self.videoItem.videoSize.width - (self.bounds.size.width - self.videoItem.videoSize.width * ratio) / 2.0,
(1.0 - ratio) / 2.0 * self.videoItem.videoSize.height);
self.drawLayer.transform = CATransform3DMakeAffineTransform(CGAffineTransformMake(ratio, 0, 0, ratio, -offset.x, -offset.y));
}
}
else if (self.contentMode == UIViewContentModeScaleAspectFill) {
CGFloat videoRatio = self.videoItem.videoSize.width / self.videoItem.videoSize.height;
CGFloat layerRatio = self.bounds.size.width / self.bounds.size.height;
if (videoRatio < layerRatio) {
CGFloat ratio = self.bounds.size.width / self.videoItem.videoSize.width;
CGPoint offset = CGPointMake(
(1.0 - ratio) / 2.0 * self.videoItem.videoSize.width,
(1.0 - ratio) / 2.0 * self.videoItem.videoSize.height
- (self.bounds.size.height - self.videoItem.videoSize.height * ratio) / 2.0
);
self.drawLayer.transform = CATransform3DMakeAffineTransform(CGAffineTransformMake(ratio, 0, 0, ratio, -offset.x, -offset.y));
}
else {
CGFloat ratio = self.bounds.size.height / self.videoItem.videoSize.height;
CGPoint offset = CGPointMake(
(1.0 - ratio) / 2.0 * self.videoItem.videoSize.width - (self.bounds.size.width - self.videoItem.videoSize.width * ratio) / 2.0,
(1.0 - ratio) / 2.0 * self.videoItem.videoSize.height);
self.drawLayer.transform = CATransform3DMakeAffineTransform(CGAffineTransformMake(ratio, 0, 0, ratio, -offset.x, -offset.y));
}
}
else if (self.contentMode == UIViewContentModeTop) {
CGFloat scaleX = self.frame.size.width / self.videoItem.videoSize.width;
CGPoint offset = CGPointMake((1.0 - scaleX) / 2.0 * self.videoItem.videoSize.width, (1 - scaleX) / 2.0 * self.videoItem.videoSize.height);
self.drawLayer.transform = CATransform3DMakeAffineTransform(CGAffineTransformMake(scaleX, 0, 0, scaleX, -offset.x, -offset.y));
}
else if (self.contentMode == UIViewContentModeBottom) {
CGFloat scaleX = self.frame.size.width / self.videoItem.videoSize.width;
CGPoint offset = CGPointMake(
(1.0 - scaleX) / 2.0 * self.videoItem.videoSize.width,
(1.0 - scaleX) / 2.0 * self.videoItem.videoSize.height);
self.drawLayer.transform = CATransform3DMakeAffineTransform(CGAffineTransformMake(scaleX, 0, 0, scaleX, -offset.x, -offset.y + self.frame.size.height - self.videoItem.videoSize.height * scaleX));
}
else if (self.contentMode == UIViewContentModeLeft) {
CGFloat scaleY = self.frame.size.height / self.videoItem.videoSize.height;
CGPoint offset = CGPointMake((1.0 - scaleY) / 2.0 * self.videoItem.videoSize.width, (1 - scaleY) / 2.0 * self.videoItem.videoSize.height);
self.drawLayer.transform = CATransform3DMakeAffineTransform(CGAffineTransformMake(scaleY, 0, 0, scaleY, -offset.x, -offset.y));
}
else if (self.contentMode == UIViewContentModeRight) {
CGFloat scaleY = self.frame.size.height / self.videoItem.videoSize.height;
CGPoint offset = CGPointMake(
(1.0 - scaleY) / 2.0 * self.videoItem.videoSize.width,
(1.0 - scaleY) / 2.0 * self.videoItem.videoSize.height);
self.drawLayer.transform = CATransform3DMakeAffineTransform(CGAffineTransformMake(scaleY, 0, 0, scaleY, -offset.x + self.frame.size.width - self.videoItem.videoSize.width * scaleY, -offset.y));
}
else {
CGFloat scaleX = self.frame.size.width / self.videoItem.videoSize.width;
CGFloat scaleY = self.frame.size.height / self.videoItem.videoSize.height;
CGPoint offset = CGPointMake((1.0 - scaleX) / 2.0 * self.videoItem.videoSize.width, (1 - scaleY) / 2.0 * self.videoItem.videoSize.height);
self.drawLayer.transform = CATransform3DMakeAffineTransform(CGAffineTransformMake(scaleX, 0, 0, scaleY, -offset.x, -offset.y));
}
}
</pre>
layoutSubviews
<pre data-language="objectivec">// When the view's layout changes, resize must be applied again
- (void)layoutSubviews {
[super layoutSubviews];
[self resize];
}</pre>
The timer method
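The screenshot of the timer setup is omitted. Playback is driven by a timer that repeatedly calls the next method shown further below; a rough sketch of such a CADisplayLink-based setup (illustrative, not the library's exact source; videoItem.FPS is assumed to hold the declared frame rate):
<pre data-language="objectivec">// Sketch of a display-link driven frame stepper (illustrative, not the library's exact code).
- (void)startAnimation {
    [self stopAnimation];   // invalidate any previous display link first
    self.loopCount = 0;
    self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(next)];
    if (self.videoItem.FPS > 0) {
        // fire at the animation's declared frame rate instead of the default 60 Hz
        self.displayLink.preferredFramesPerSecond = self.videoItem.FPS;
    }
    [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}
</pre>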
Summary
As you can see, SVGA playback is quite simple: every layer has a key, an array of per-frame data (frames), and the corresponding bitmap. To play the animation we just step through the frames at the video's FPS and apply each key's recorded frame data. Since a key's position often does not change from one frame to the next, the format can mark the current frame as "same as the previous frame" instead of storing the position again.
<pre data-language="objectivec">- (void)next {
if (self.reversing) { // playing in reverse
self.currentFrame--;
if (self.currentFrame < (NSInteger)MAX(0, self.currentRange.location)) {
self.currentFrame = MIN(self.videoItem.frames - 1, self.currentRange.location + self.currentRange.length - 1);
self.loopCount++;
}
}
else {
self.currentFrame++;
if (self.currentFrame >= MIN(self.videoItem.frames, self.currentRange.location + self.currentRange.length)) {
self.currentFrame = MAX(0, self.currentRange.location);
[self clearAudios];
self.loopCount++;
}
}
if (self.loops > 0 && self.loopCount >= self.loops) {
[self stopAnimation];
if (!self.clearsAfterStop && [self.fillMode isEqualToString:@"Backward"]) {
[self stepToFrame:MAX(0, self.currentRange.location) andPlay:NO];
}
else if (!self.clearsAfterStop && [self.fillMode isEqualToString:@"Forward"]) {
[self stepToFrame:MIN(self.videoItem.frames - 1, self.currentRange.location + self.currentRange.length - 1) andPlay:NO];
}
id delegate = self.delegate;
if (delegate != nil && [delegate respondsToSelector:@selector(svgaPlayerDidFinishedAnimation:)]) {
[delegate svgaPlayerDidFinishedAnimation:self];
}
return;
}
[self update];
id delegate = self.delegate;
if (delegate != nil) { // notify the delegate of playback progress
if ([delegate respondsToSelector:@selector(svgaPlayer:didAnimatedToFrame:)]) {
[delegate svgaPlayer:self didAnimatedToFrame:self.currentFrame];
} else if ([delegate respondsToSelector:@selector(svgaPlayerDidAnimatedToFrame:)]){
[delegate svgaPlayerDidAnimatedToFrame:self.currentFrame];
}
if (self.videoItem.frames > 0) {
if ([delegate respondsToSelector:@selector(svgaPlayer:didAnimatedToPercentage:)]) {
[delegate svgaPlayer:self didAnimatedToPercentage:(CGFloat)(self.currentFrame + 1) / (CGFloat)self.videoItem.frames];
} else if ([delegate respondsToSelector:@selector(svgaPlayerDidAnimatedToPercentage:)]) {
[delegate svgaPlayerDidAnimatedToPercentage:(CGFloat)(self.currentFrame + 1) / (CGFloat)self.videoItem.frames];
}
}
}
}</pre>
SVGAParser
SVGAParser is the parser class the outside world uses to build the data; it returns an SVGAVideoEntity, which is what feeds SVGAPlayer.
You can see that when we pass in a URL or raw data, it reports success or failure, and on success it hands back an SVGAVideoEntity.
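Loading from the network looks roughly like this (a usage sketch; the URL and the self.player property are made-up examples):
<pre data-language="objectivec">// Sketch of loading an animation from a remote URL.
SVGAParser *parser = [[SVGAParser alloc] init];
NSURL *url = [NSURL URLWithString:@"https://example.com/animations/gift.svga"];
[parser parseWithURL:url completionBlock:^(SVGAVideoEntity * _Nullable videoItem) {
    // The completion block is delivered on the main queue; hand the entity to a player.
    self.player.videoItem = videoItem;
    [self.player startAnimation];
} failureBlock:^(NSError * _Nullable error) {
    NSLog(@"Failed to load SVGA: %@", error);
}];
</pre>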
<pre data-language="objectivec">- (void)parseWithData:(nonnull NSData *)data
cacheKey:(nonnull NSString *)cacheKey
completionBlock:(void ( ^ _Nullable)(SVGAVideoEntity * _Nonnull videoItem))completionBlock
failureBlock:(void ( ^ _Nullable)(NSError * _Nonnull error))failureBlock {
SVGAVideoEntity *cacheItem = [SVGAVideoEntity readCache:cacheKey];
if (cacheItem != nil) {
if (completionBlock) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
completionBlock(cacheItem);
}];
}
return;
}
if (!data || data.length < 4) {
return;
}
if (![SVGAParser isZIPData:data]) { // not a ZIP archive, so treat the data as zlib-compressed protobuf
[parseQueue addOperationWithBlock:^{
NSData *inflateData = [self zlibInflate:data]; // inflate the compressed data
NSError *err;
// SVGAProtoMovieEntity subclasses GPBMessage, the base class generated by the protobuf framework;
// conceptually it plays the same role as JSON, the difference is just the (much more compact) encoding
SVGAProtoMovieEntity *protoObject = [SVGAProtoMovieEntity parseFromData:inflateData error:&err];
if (!err && [protoObject isKindOfClass:[SVGAProtoMovieEntity class]]) {
// Initialize the basic parameters and copy some fields from SVGAProtoMovieEntity into SVGAVideoEntity
SVGAVideoEntity *videoItem = [[SVGAVideoEntity alloc] initWithProtoObject:protoObject cacheDir:@""];
// It is a little odd that the following three calls are not folded into the initializer
// Build the images and the MP3 data, stored in two dictionaries keyed by the image/node name
[videoItem resetImagesWithProtoObject:protoObject];
// Associate each image with its animation data, keyed by the image/node name
[videoItem resetSpritesWithProtoObject:protoObject];
// Build the audio entries and store them in an array
[videoItem resetAudiosWithProtoObject:protoObject];
// Decide whether to cache strongly or only via a weak reference
if (self.enabledMemoryCache) {
[videoItem saveCache:cacheKey];
} else {
[videoItem saveWeakCache:cacheKey];
}
if (completionBlock) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
completionBlock(videoItem);
}];
}
}
}];
return ;
}
// ZIP branch: unzip the archive
[unzipQueue addOperationWithBlock:^{
if ([[NSFileManager defaultManager] fileExistsAtPath:[self cacheDirectory:cacheKey]]) {
[self parseWithCacheKey:cacheKey completionBlock:^(SVGAVideoEntity * _Nonnull videoItem) {
if (completionBlock) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
completionBlock(videoItem);
}];
}
} failureBlock:^(NSError * _Nonnull error) {
[self clearCache:cacheKey];
if (failureBlock) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
failureBlock(error);
}];
}
}];
return;
}
// The temp file name is just a random number, which is not ideal
NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingFormat:@"%u.svga", arc4random()];
if (data != nil) {
[data writeToFile:tmpPath atomically:YES];
NSString *cacheDir = [self cacheDirectory:cacheKey];
if ([cacheDir isKindOfClass:[NSString class]]) {
[[NSFileManager defaultManager] createDirectoryAtPath:cacheDir withIntermediateDirectories:NO attributes:nil error:nil];
[SSZipArchive unzipFileAtPath:tmpPath toDestination:[self cacheDirectory:cacheKey] progressHandler:^(NSString * _Nonnull entry, unz_file_info zipInfo, long entryNumber, long total) {
} completionHandler:^(NSString *path, BOOL succeeded, NSError *error) {
if (error != nil) {
if (failureBlock) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
failureBlock(error);
}];
}
}
else {
if ([[NSFileManager defaultManager] fileExistsAtPath:[cacheDir stringByAppendingString:@"/movie.binary"]]) {
NSError *err;
NSData *protoData = [NSData dataWithContentsOfFile:[cacheDir stringByAppendingString:@"/movie.binary"]];
SVGAProtoMovieEntity *protoObject = [SVGAProtoMovieEntity parseFromData:protoData error:&err];
if (!err) {
SVGAVideoEntity *videoItem = [[SVGAVideoEntity alloc] initWithProtoObject:protoObject cacheDir:cacheDir];
[videoItem resetImagesWithProtoObject:protoObject];
[videoItem resetSpritesWithProtoObject:protoObject];
if (self.enabledMemoryCache) {
[videoItem saveCache:cacheKey];
} else {
[videoItem saveWeakCache:cacheKey];
}
if (completionBlock) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
completionBlock(videoItem);
}];
}
}
else {
if (failureBlock) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
failureBlock([NSError errorWithDomain:NSFilePathErrorKey code:-1 userInfo:nil]);
}];
}
}
}
else {
NSError *err;
NSData *JSONData = [NSData dataWithContentsOfFile:[cacheDir stringByAppendingString:@"/movie.spec"]];
if (JSONData != nil) {
NSDictionary *JSONObject = [NSJSONSerialization JSONObjectWithData:JSONData options:kNilOptions error:&err];
if ([JSONObject isKindOfClass:[NSDictionary class]]) {
SVGAVideoEntity *videoItem = [[SVGAVideoEntity alloc] initWithJSONObject:JSONObject cacheDir:cacheDir];
[videoItem resetImagesWithJSONObject:JSONObject];
[videoItem resetSpritesWithJSONObject:JSONObject];
if (self.enabledMemoryCache) {
[videoItem saveCache:cacheKey];
} else {
[videoItem saveWeakCache:cacheKey];
}
if (completionBlock) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
completionBlock(videoItem);
}];
}
}
}
else {
if (failureBlock) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
failureBlock([NSError errorWithDomain:NSFilePathErrorKey code:-1 userInfo:nil]);
}];
}
}
}
}
}];
}
else {
if (failureBlock) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
failureBlock([NSError errorWithDomain:NSFilePathErrorKey code:-1 userInfo:nil]);
}];
}
}
}
else {
if (failureBlock) {
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
failureBlock([NSError errorWithDomain:@"Data Error" code:-1 userInfo:nil]);
}];
}
}
}];
}</pre>
Core code
From this we can see that SVGAVideoEntity is built from the data supplied by SVGAProtoMovieEntity. They are kept as two separate classes partly because SVGAProtoMovieEntity is a protobuf object (it subclasses GPBMessage), and partly because SVGAVideoEntity additionally supports caching.
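The saveCache / saveWeakCache / readCache calls seen in the parser can be pictured roughly like this. This is an illustrative sketch of a strong cache plus a weak-value cache (a hypothetical helper class, not the library's implementation):
<pre data-language="objectivec">#import &lt;Foundation/Foundation.h&gt;

// Hypothetical helper illustrating the caching idea. The weak-value map keeps an entity
// only while something else -- e.g. a playing SVGAPlayer -- still retains it, so memory
// is reclaimed automatically once playback is done.
@interface VideoEntityCache : NSObject
@end

@implementation VideoEntityCache

static NSMutableDictionary<NSString *, id> *strongCache;
static NSMapTable<NSString *, id> *weakCache;

+ (void)initialize {
    if (self == [VideoEntityCache class]) {
        strongCache = [NSMutableDictionary dictionary];
        weakCache = [NSMapTable strongToWeakObjectsMapTable];
    }
}

+ (void)saveCache:(id)entity forKey:(NSString *)key {       // strong: lives until removed
    strongCache[key] = entity;
}

+ (void)saveWeakCache:(id)entity forKey:(NSString *)key {   // weak: lives only while in use
    [weakCache setObject:entity forKey:key];
}

+ (id)readCacheForKey:(NSString *)key {
    return strongCache[key] ?: [weakCache objectForKey:key];
}

@end
</pre>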
The individual methods are analysed below.
SVGAVideoEntity
resetImagesWithProtoObject
This method mainly builds the images and MP3 data and stores them in dictionaries, keyed by node name.
<pre data-language="objectivec">- (void)resetImagesWithProtoObject:(SVGAProtoMovieEntity *)protoObject {
NSMutableDictionary<NSString *, UIImage *> *images = [[NSMutableDictionary alloc] init];
NSMutableDictionary<NSString *, NSData *> *audiosData = [[NSMutableDictionary alloc] init];
NSDictionary *protoImages = [protoObject.images copy];
for (NSString *key in protoImages) {
NSString *fileName = [[NSString alloc] initWithData:protoImages[key] encoding:NSUTF8StringEncoding];
if (fileName != nil) {
NSString *filePath = [self.cacheDir stringByAppendingFormat:@"/%@.png", fileName];
if (![[NSFileManager defaultManager] fileExistsAtPath:filePath]) {
filePath = [self.cacheDir stringByAppendingFormat:@"/%@", fileName];
}
if ([[NSFileManager defaultManager] fileExistsAtPath:filePath]) {
// NSData *imageData = [NSData dataWithContentsOfFile:filePath];
NSData *imageData = [NSData dataWithContentsOfFile:filePath options:NSDataReadingMappedIfSafe error:NULL];
if (imageData != nil) {
UIImage *image = [[UIImage alloc] initWithData:imageData scale:2.0];
if (image != nil) {
[images setObject:image forKey:key];
}
}
}
}
else if ([protoImages[key] isKindOfClass:[NSData class]]) {
if ([SVGAVideoEntity isMP3Data:protoImages[key]]) {
// mp3
[audiosData setObject:protoImages[key] forKey:key];
} else {
UIImage *image = [[UIImage alloc] initWithData:protoImages[key] scale:2.0];
if (image != nil) {
[images setObject:image forKey:key];
}
}
}
}
self.images = images;
self.audiosData = audiosData;
}
</pre>
resetSpritesWithProtoObject
This method links each node to its per-frame data; matteKey identifies the mask (matte) layer that applies to a sprite.
If you set a breakpoint inside resetSpritesWithProtoObject and also inspect the same .svga resource with an online preview tool, you can see that every node name matches up, and the frames array inside each spriteEntity holds the position data for every frame.
SVGAProtoMovieEntity
SVGAProtoMovieEntity is the main data source of SVGAVideoEntity.
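The screenshot of its generated header is gone; based on the SVGA 2.0 proto schema it looks roughly like the sketch below. This is a reconstruction, not copied from the source, so treat the property names (which follow the GPB conventions seen elsewhere in this post) as approximate.
<pre data-language="objectivec">// Rough sketch of the generated interface, reconstructed from the SVGA 2.0 proto schema.
@interface SVGAProtoMovieEntity : GPBMessage
/** SVGA format version */
@property(nonatomic, readwrite, copy, null_resettable) NSString *version;
/** Animation parameters (canvas size, fps, frame count) */
@property(nonatomic, readwrite, strong, null_resettable) SVGAProtoMovieParams *params;
/** Raw image (or embedded audio) bytes, keyed by name */
@property(nonatomic, readwrite, strong, null_resettable) NSMutableDictionary<NSString *, NSData *> *images;
/** Sprite (element) list */
@property(nonatomic, readwrite, strong, null_resettable) NSMutableArray<SVGAProtoSpriteEntity *> *spritesArray;
/** Audio list */
@property(nonatomic, readwrite, strong, null_resettable) NSMutableArray<SVGAProtoAudioEntity *> *audiosArray;
@end
</pre>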
As you can see, SVGAProtoMovieEntity subclasses GPBMessage and is therefore a protobuf object; the authors clearly went to some lengths to keep the file size down.
SVGAProtoMovieParams
SVGAProtoMovieParams holds the animation parameters of SVGAProtoMovieEntity.
You can see that the allowed frame rates are all divisors of 60, although 4 is oddly missing.
SVGAProtoSpriteEntity
The sprite (element) list in SVGAProtoMovieEntity stores objects of type SVGAProtoSpriteEntity.
<pre data-language="objectivec">@interface SVGAProtoSpriteEntity : GPBMessage
/** The bitmap key this element refers to. If imageKey has a .vector suffix the sprite is a vector layer; if it has a .matte suffix the sprite is a mask (matte) layer. */
@property(nonatomic, readwrite, copy, null_resettable) NSString *imageKey;
/** Frame list */
@property(nonatomic, readwrite, strong, null_resettable) NSMutableArray<SVGAProtoFrameEntity *> *framesArray;
/** The number of items in @c framesArray without causing the array to be created. */
@property(nonatomic, readonly) NSUInteger framesArray_Count;
/** A masked layer's matteKey is the imageKey of its mask layer. */
@property(nonatomic, readwrite, copy, null_resettable) NSString *matteKey;
@end</pre>
SVGAVideoSpriteEntity
This is the animation element model used by SVGAVideoEntity; its data comes from SVGAProtoSpriteEntity.
You can see that it has an initializer - (instancetype)initWithProtoObject:(SVGAProtoSpriteEntity *)protoObject;
In other words, its fields are populated from the SVGAProtoSpriteEntity.
SVGAProtoSpriteEntity
Each imageKey is associated with its full list of frames (framesArray), as the generated descriptor below shows.
<pre data-language="objectivec">// This method is threadsafe because it is initially called
// in +initialize for each subclass.
+ (GPBDescriptor *)descriptor {
static GPBDescriptor *descriptor = nil;
if (!descriptor) {
static GPBMessageFieldDescription fields[] = {
{
.name = "imageKey",
.dataTypeSpecific.className = NULL,
.number = SVGAProtoSpriteEntity_FieldNumber_ImageKey,
.hasIndex = 0,
.offset = (uint32_t)offsetof(SVGAProtoSpriteEntity__storage_, imageKey),
.flags = (GPBFieldFlags)(GPBFieldOptional | GPBFieldTextFormatNameCustom),
.dataType = GPBDataTypeString,
},
{ // only framesArray has a className, set to SVGAProtoFrameEntity
.name = "framesArray",
.dataTypeSpecific.className = GPBStringifySymbol(SVGAProtoFrameEntity),
.number = SVGAProtoSpriteEntity_FieldNumber_FramesArray,
.hasIndex = GPBNoHasBit,
.offset = (uint32_t)offsetof(SVGAProtoSpriteEntity__storage_, framesArray),
.flags = GPBFieldRepeated,
.dataType = GPBDataTypeMessage,
},
{
.name = "matteKey",
.dataTypeSpecific.className = NULL,
.number = SVGAProtoSpriteEntity_FieldNumber_MatteKey,
.hasIndex = 1,
.offset = (uint32_t)offsetof(SVGAProtoSpriteEntity__storage_, matteKey),
.flags = (GPBFieldFlags)(GPBFieldOptional | GPBFieldTextFormatNameCustom),
.dataType = GPBDataTypeString,
},
};
GPBDescriptor *localDescriptor =
[GPBDescriptor allocDescriptorForClass:[SVGAProtoSpriteEntity class]
rootClass:[SVGAProtoSvgaRoot class]
file:SVGAProtoSvgaRoot_FileDescriptor()
fields:fields
fieldCount:(uint32_t)(sizeof(fields) / sizeof(GPBMessageFieldDescription))
storageSize:sizeof(SVGAProtoSpriteEntity__storage_)
flags:GPBDescriptorInitializationFlag_None];
#if !GPBOBJC_SKIP_MESSAGE_TEXTFORMAT_EXTRAS
    static const char *extraTextFormatInfo =
        "\002\001\010\000\003\010\000";
    [localDescriptor setupExtraTextInfo:extraTextFormatInfo];
#endif // !GPBOBJC_SKIP_MESSAGE_TEXTFORMAT_EXTRAS
NSAssert(descriptor == nil, @"Startup recursed!");
descriptor = localDescriptor;
}
return descriptor;
}</pre>
SVGAProtoFrameEntity
The frame-list array in SVGAProtoSpriteEntity stores objects of type SVGAProtoFrameEntity.
Each frame records the layout, the 2D transform, and the vector shapes described below.
SVGAProtoLayout: the initial layout (constraint) size
SVGAProtoTransform: the 2D transform matrix
SVGAProtoShapeEntity: vector elements
The vector-shape list stored in SVGAProtoFrameEntity holds objects of type SVGAProtoShapeEntity.
SVGAProtoShapeEntity_ShapeType
SVGAProtoShapeEntity_ShapeArgs
SVGAProtoShapeEntity_RectArgs
SVGAProtoShapeEntity_EllipseArgs
SVGAProtoShapeEntity_ShapeStyle
SVGAProtoAudioEntity
Adding text
As you can see, a node is first looked up by its key, and the size of the CATextLayer is then determined from the bitmap size and the rendered text size.
The content itself comes from the NSAttributedString that is passed in.
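From the player's public API this looks roughly as follows (a usage sketch; the node keys "banner", "avatar_slot" and "watermark" are made-up examples that would have to exist in the .svga file):
<pre data-language="objectivec">// Sketch: replace a node's content at runtime, keyed by node name.
NSAttributedString *text =
    [[NSAttributedString alloc] initWithString:@"Hello SVGA"
                                    attributes:@{NSFontAttributeName: [UIFont boldSystemFontOfSize:24],
                                                 NSForegroundColorAttributeName: [UIColor whiteColor]}];
[self.player setAttributedText:text forKey:@"banner"];

// Dynamic bitmaps and visibility work the same way.
[self.player setImage:[UIImage imageNamed:@"avatar"] forKey:@"avatar_slot"];
[self.player setHidden:YES forKey:@"watermark"];
</pre>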
Code structure diagram
A brief look at how the animation works
2D transforms
Matrix operations
UIView's transform property is of type CGAffineTransform and is used for rotation, scaling, and translation in two-dimensional space. A CGAffineTransform is a 3x2 matrix that can be multiplied with a two-dimensional vector such as a CGPoint.
Multiplying each component of the CGPoint by the corresponding elements of the CGAffineTransform and summing the products produces a new CGPoint. For the multiplication to be defined, the number of columns of the left matrix must equal the number of rows of the right matrix, so the matrix is padded with fixed placeholder values: they make the multiplication possible without changing the result, and since they never change there is no need to store them, but they do take part in the calculation.
For this reason a 3x3 (rather than 2x3) matrix is usually used for two-dimensional transforms. You may also see the matrix written with 3 rows and 2 columns, the so-called column-major form; what is shown here is the row-major form. Either is fine as long as you stay consistent.
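Written out in row-major form, the multiplication is:
<pre data-language="latex">\begin{bmatrix} x' & y' & 1 \end{bmatrix}
=
\begin{bmatrix} x & y & 1 \end{bmatrix}
\begin{bmatrix}
a & b & 0 \\
c & d & 0 \\
t_x & t_y & 1
\end{bmatrix}
=
\begin{bmatrix} ax + cy + t_x & bx + dy + t_y & 1 \end{bmatrix}
</pre>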
The transformed coordinate is therefore (aX + cY + tx, bX + dY + ty, 1). Comparing cases, we can see the following.
Translation
1. Let a = d = 1 and b = c = 0:
(aX + cY + tx, bX + dY + ty, 1) = (X + tx, Y + ty, 1)
So in this case the point is translated by the vector (tx, ty),
which is exactly what CGAffineTransform CGAffineTransformMakeTranslation(CGFloat tx, CGFloat ty) computes.
Scaling
2. Let b = c = tx = ty = 0:
(aX + cY + tx, bX + dY + ty, 1) = (aX, dY, 1)
So in this case X is scaled by a and Y by d; a and d are the scale factors for X and Y (a corresponds to sx and d to sy),
which is what CGAffineTransform CGAffineTransformMakeScale(CGFloat sx, CGFloat sy) computes.
Rotation
3. Let tx = ty = 0, a = cosβ, b = sinβ, c = -sinβ, d = cosβ:
(aX + cY + tx, bX + dY + ty, 1) = (Xcosβ - Ysinβ, Xsinβ + Ycosβ, 1)
So β is the rotation angle: counter-clockwise is positive, clockwise is negative.
This is what CGAffineTransform CGAffineTransformMakeRotation(CGFloat angle) computes,
where angle is β expressed in radians.
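A small self-contained check of the three cases in plain Core Graphics (nothing SVGA-specific):
<pre data-language="objectivec">#import &lt;UIKit/UIKit.h&gt;

// Each convenience constructor is just the explicit matrix from the cases above.
static void AffineTransformExamples(void) {
    // Translation: CGAffineTransformMake(1, 0, 0, 1, tx, ty)
    CGAffineTransform translate = CGAffineTransformMakeTranslation(10, 20);

    // Scaling: CGAffineTransformMake(sx, 0, 0, sy, 0, 0)
    CGAffineTransform scale = CGAffineTransformMakeScale(2, 3);

    // Rotation: CGAffineTransformMake(cos(b), sin(b), -sin(b), cos(b), 0, 0)
    CGFloat beta = M_PI_4; // 45 degrees in radians
    CGAffineTransform rotate = CGAffineTransformMakeRotation(beta);

    // Applying a transform to a point reproduces (aX + cY + tx, bX + dY + ty).
    CGPoint p = CGPointApplyAffineTransform(CGPointMake(1, 0), rotate);
    NSLog(@"%@ %@ %@ rotated point: %@",
          NSStringFromCGAffineTransform(translate),
          NSStringFromCGAffineTransform(scale),
          NSStringFromCGAffineTransform(rotate),
          NSStringFromCGPoint(p)); // rotated point is roughly (0.707, 0.707)
}
</pre>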
3D transforms
Matrix operations
Like CGAffineTransform, CATransform3D is also a matrix, but instead of a 3x2 matrix it is a 4x4 matrix that can transform points in three-dimensional space.
Perspective projection
In the real world, objects look smaller as they move away from us. In theory the edges of a view that are farther away should therefore appear shorter than the edges closer to the viewer, but by default that does not happen: the current projection is isometric, meaning parallel lines stay parallel under the 3D transform, much like the affine transforms discussed above.
In an isometric projection, distant and nearby objects keep the same scale. That kind of projection has its own uses (architectural drawings and pseudo-3D games, for example), but it is not what we want here.
To fix this we need to apply a perspective projection (sometimes called the z transform) on top of the transform matrix. Core Animation does not provide a function for configuring perspective, so we have to modify the matrix values by hand. Fortunately, it is simple:
the perspective effect of a CATransform3D is controlled by a single matrix element, m34. m34 scales the X and Y values in proportion to how far the point is from the camera.
The default value of m34 is 0. We can apply perspective by setting m34 to -1.0 / d, where d is the distance, in pixels, between the imaginary camera and the screen. How should we calculate this distance? We don't have to; a rough estimate is fine.
Because the camera does not actually exist, we are free to place it wherever gives the best result on screen. A value between 500 and 1000 usually works well, although for particular layers a smaller or larger value can look better. Reducing the distance strengthens the perspective effect, so a very small value makes things look noticeably distorted, while a very large value makes the result look almost flat again.
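A small example of the m34 trick in ordinary Core Animation (unrelated to SVGA):
<pre data-language="objectivec">#import &lt;QuartzCore/QuartzCore.h&gt;

// Rotate a layer around the Y axis with perspective applied via m34.
static void ApplyPerspectiveRotation(CALayer *layer) {
    CATransform3D transform = CATransform3DIdentity;
    transform.m34 = -1.0 / 500.0;                       // camera distance d = 500
    transform = CATransform3DRotate(transform, M_PI_4,  // 45 degrees
                                    0, 1, 0);           // around the Y axis
    layer.transform = transform;
}
</pre>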