iOS: How an Image Is Rendered to the Screen, and How to Improve Performance

Author: huxinwen | Published 2019-07-17 10:53

How the CPU and GPU cooperate during iOS rendering:

[Figure: the CPU–GPU cooperation process during rendering]

The image loading process:

  1. Suppose we load an image from disk with +imageWithContentsOfFile:; at this point the image has not yet been decompressed.
  2. The resulting UIImage is then assigned to a UIImageView.
  3. An implicit CATransaction captures the change to the UIImageView's layer tree.
  4. When the next runloop arrives on the main thread, Core Animation commits this implicit transaction. The commit may copy the image, and depending on factors such as whether the image data is byte-aligned, the copy can involve some or all of the following steps:
    • allocate a memory buffer to manage file I/O and decompression;
    • read the file data from disk into memory;
    • decode the compressed image data into an uncompressed bitmap, which is a very time-consuming CPU operation;
    • finally, the CALayer in Core Animation renders the UIImageView's layer using the uncompressed bitmap data;
    • once the CPU has computed the image's frame and decompressed it, the bitmap is handed to the GPU for rendering.
  5. Rendering pipeline:
    • the GPU fetches the image's coordinates;
    • the coordinates are passed to the vertex shader (vertex computation);
    • the image is rasterized (mapped to the pixels it covers on screen);
    • the fragment shader runs (computing the final color of each pixel);
    • the result is rendered from the framebuffer to the screen.
      As mentioned above, image decompression is a very time-consuming CPU operation, and by default it runs on the main thread. When many images need to be loaded, this severely hurts the app's responsiveness, and the problem is most visible in fast-scrolling lists.

Why image decompression is necessary

The images we load during development usually end in .PNG or .jpeg, and both are compressed (encoded) image formats; the difference is that the former is lossless compression while the latter is lossy. The GPU can only display an image once it has the complete, raw bitmap data, and turning .PNG or .jpeg data into that raw bitmap is exactly the decompression (decoding) step. That is why loading an image requires decompression.

How image decompression works, and how to tune its performance

  • How decompression works
    Normally, decompression happens right before an image is rendered to the screen: the system decompresses a not-yet-decompressed image on the main thread; if the image has already been decompressed, it is not decompressed again.
  • Performance tuning
    As noted above, decompressing an image is very time-consuming, and the larger the image, the longer it takes. Since it runs on the main thread, the cost is especially visible in UIs that scroll through images; handled poorly, scrolling stutters. How do we solve this?
    Given how decompression works, the approach commonly used in the industry is to force decompression ahead of time, on a background thread.
    The core function used for forced decompression is:
/**
- data: if non-NULL, it must point to a block of memory of at least bytesPerRow * height bytes; if NULL, the system allocates and frees the required memory for us, so passing NULL is usually fine;
- width and height: the bitmap's width and height; pass the image's pixel width and pixel height;
- bitsPerComponent: the number of bits used per color component of a pixel; pass 8 for the RGB color space;
- bytesPerRow: the number of bytes used per row of the bitmap, at least width * bytes-per-pixel. When we pass 0, the system not only computes it for us but also applies a cache line alignment optimization;
- space: the color space mentioned earlier; RGB is usually fine;
- bitmapInfo: the bitmap's layout info, e.g. kCGImageAlphaPremultipliedFirst.
*/
CG_EXTERN CGContextRef __nullable CGBitmapContextCreate(void * __nullable data,
    size_t width, size_t height, size_t bitsPerComponent, size_t bytesPerRow,
    CGColorSpaceRef cg_nullable space, uint32_t bitmapInfo)
    CG_AVAILABLE_STARTING(__MAC_10_0, __IPHONE_2_0);
  • SDWebImage's decoding source:
- (nullable UIImage *)sd_decompressedAndScaledDownImageWithImage:(nullable UIImage *)image {
    if (![[self class] shouldDecodeImage:image]) {
        return image;
    }
    
    if (![[self class] shouldScaleDownImage:image]) {
        return [self sd_decompressedImageWithImage:image];
    }
    
    CGContextRef destContext;
    
    // autorelease the bitmap context and all vars to help system to free memory when there are memory warning.
    // on iOS7, do not forget to call [[SDImageCache sharedImageCache] clearMemory];
    @autoreleasepool {
        CGImageRef sourceImageRef = image.CGImage;
        
        CGSize sourceResolution = CGSizeZero;
        sourceResolution.width = CGImageGetWidth(sourceImageRef);
        sourceResolution.height = CGImageGetHeight(sourceImageRef);
        float sourceTotalPixels = sourceResolution.width * sourceResolution.height;
        // Determine the scale ratio to apply to the input image
        // that results in an output image of the defined size.
        // see kDestImageSizeMB, and how it relates to destTotalPixels.
        float imageScale = kDestTotalPixels / sourceTotalPixels;
        CGSize destResolution = CGSizeZero;
        destResolution.width = (int)(sourceResolution.width*imageScale);
        destResolution.height = (int)(sourceResolution.height*imageScale);
        
        // current color space
        CGColorSpaceRef colorspaceRef = [[self class] colorSpaceForImageRef:sourceImageRef];
        
        // kCGImageAlphaNone is not supported in CGBitmapContextCreate.
        // Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
        // to create bitmap graphics contexts without alpha info.
        destContext = CGBitmapContextCreate(NULL,
                                            destResolution.width,
                                            destResolution.height,
                                            kBitsPerComponent,
                                            0,
                                            colorspaceRef,
                                            kCGBitmapByteOrderDefault|kCGImageAlphaNoneSkipLast);
        
        if (destContext == NULL) {
            return image;
        }
        CGContextSetInterpolationQuality(destContext, kCGInterpolationHigh);
        
        // Now define the size of the rectangle to be used for the
        // incremental blits from the input image to the output image.
        // we use a source tile width equal to the width of the source
        // image due to the way that iOS retrieves image data from disk.
        // iOS must decode an image from disk in full width 'bands', even
        // if current graphics context is clipped to a subrect within that
        // band. Therefore we fully utilize all of the pixel data that results
        // from a decoding operation by anchoring our tile size to the full
        // width of the input image.
        CGRect sourceTile = CGRectZero;
        sourceTile.size.width = sourceResolution.width;
        // The source tile height is dynamic. Since we specified the size
        // of the source tile in MB, see how many rows of pixels high it
        // can be given the input image width.
        sourceTile.size.height = (int)(kTileTotalPixels / sourceTile.size.width );
        sourceTile.origin.x = 0.0f;
        // The output tile is the same proportions as the input tile, but
        // scaled to image scale.
        CGRect destTile;
        destTile.size.width = destResolution.width;
        destTile.size.height = sourceTile.size.height * imageScale;
        destTile.origin.x = 0.0f;
        // The source seam overlap is proportionate to the destination seam overlap.
        // this is the amount of pixels to overlap each tile as we assemble the output image.
        float sourceSeemOverlap = (int)((kDestSeemOverlap/destResolution.height)*sourceResolution.height);
        CGImageRef sourceTileImageRef;
        // calculate the number of read/write operations required to assemble the
        // output image.
        int iterations = (int)( sourceResolution.height / sourceTile.size.height );
        // If tile height doesn't divide the image height evenly, add another iteration
        // to account for the remaining pixels.
        int remainder = (int)sourceResolution.height % (int)sourceTile.size.height;
        if(remainder) {
            iterations++;
        }
        // Add seam overlaps to the tiles, but save the original tile height for y coordinate calculations.
        float sourceTileHeightMinusOverlap = sourceTile.size.height;
        sourceTile.size.height += sourceSeemOverlap;
        destTile.size.height += kDestSeemOverlap;
        for( int y = 0; y < iterations; ++y ) {
            @autoreleasepool {
                sourceTile.origin.y = y * sourceTileHeightMinusOverlap + sourceSeemOverlap;
                destTile.origin.y = destResolution.height - (( y + 1 ) * sourceTileHeightMinusOverlap * imageScale + kDestSeemOverlap);
                sourceTileImageRef = CGImageCreateWithImageInRect( sourceImageRef, sourceTile );
                if( y == iterations - 1 && remainder ) {
                    float dify = destTile.size.height;
                    destTile.size.height = CGImageGetHeight( sourceTileImageRef ) * imageScale;
                    dify -= destTile.size.height;
                    destTile.origin.y += dify;
                }
                CGContextDrawImage( destContext, destTile, sourceTileImageRef );
                CGImageRelease( sourceTileImageRef );
            }
        }
        
        CGImageRef destImageRef = CGBitmapContextCreateImage(destContext);
        CGContextRelease(destContext);
        if (destImageRef == NULL) {
            return image;
        }
        UIImage *destImage = [[UIImage alloc] initWithCGImage:destImageRef scale:image.scale orientation:image.imageOrientation];
        CGImageRelease(destImageRef);
        if (destImage == nil) {
            return image;
        }
        return destImage;
    }
}

The overall flow is:

  • get the CGImageRef and its parameters from the source image;
  • create a bitmap context with the decoding function CGBitmapContextCreate;
  • draw the image into that context with CGContextDrawImage, which forces the decode;
  • obtain the decoded CGImageRef via CGBitmapContextCreateImage.

YYImage follows roughly the same flow as SDWebImage; in decoding performance they differ as follows:

  • for PNG decompression, SDWebImage outperforms YYImage;
  • for jpeg decompression, YYImage outperforms SDWebImage.

Reference: CC老师_HelloCoder, "探讨iOS 中图片的解压缩到渲染过程" (a discussion of the decompression-to-rendering process for images in iOS).

Original link: https://www.haomeiwen.com/subject/yaotlctx.html