
Exploring Image Decoding on iOS

Author: 杨柳小易 | Published 2017-09-04 17:37

    Preface

    Our project wraps local images in a thin layer: in short, it uses imageWithContentsOfFile to initialize images, and every call site fetches images through this wrapper. Because there is no decoding step in that path, this post first studies how third-party libraries handle decoding, to work out whether loading local images this way at scale, without decoding, actually helps or hurts performance...

    UIImage provides:

    + (nullable UIImage *)imageWithContentsOfFile:(NSString *)path;
    

    This loads an image from a file path, and the returned image is not yet decoded.

    Why decode at all? Hmm? You ask why we need decoding.

    In fact, both JPEG and PNG are compressed bitmap formats. PNG is lossless and supports an alpha channel, while JPEG is lossy with a configurable 0-100% compression ratio. Either way, before an image on disk can be rendered to the screen, its raw pixel data must first be obtained; that is what the drawing operations work on, and that is exactly why images need to be decompressed. See 谈谈 iOS 中图片的解压缩 (linked at the end of this post) for details.
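    To make the decode step concrete, here is a minimal force-decode sketch of my own (an illustration built on standard Core Graphics APIs, not code from the libraries discussed below): draw the compressed image into a bitmap context once, ideally off the main thread, and keep the bitmap-backed result.

    // Minimal force-decode sketch (illustration only): render the image into a
    // bitmap context so its pixels are decompressed up front, not at display time.
    static UIImage * ForceDecodedImage(UIImage *image) {
        CGImageRef imageRef = image.CGImage;
        if (!imageRef) return image;

        size_t width  = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        // bytesPerRow = 0 lets Core Graphics choose a properly aligned value.
        CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);
        if (!context) return image;

        CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
        CGImageRef decodedRef = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        if (!decodedRef) return image;

        UIImage *decoded = [UIImage imageWithCGImage:decodedRef
                                               scale:image.scale
                                         orientation:image.imageOrientation];
        CGImageRelease(decodedRef);
        return decoded;
    }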

    How AFNetworking handles images

    AFNetworking's image handling covers downloading and caching. For the download side, simply import:

    #import "UIImageView+AFNetworking.h"
    
    

    and downloading a remote image becomes a one-liner, for example:

    - (void)setImageWithURL:(NSURL *)url
           placeholderImage:(nullable UIImage *)placeholderImage;
    
    

    This downloads the remote image, showing placeholderImage as the default in the meantime, much like SDWebImage. The difference from SDWebImage is that AFNetworking's image cache is built on NSCache.

    On caching: in our project, user avatars are about the only images whose cache stays valid. Everything else, such as live-stream cover thumbnails, changes once a minute, so do those images really need to be written to disk? With SDWebImage, the home-screen images saved during this app launch are nearly all stale within minutes, yet they sit on disk until the user clears them manually or the cache exceeds its size limit.
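    If the disk store is unwanted for such short-lived images, SDWebImage itself offers a per-request escape hatch; a sketch (url and placeholder are assumed to be defined by the caller):

    // Sketch: cache this image in memory only, skipping the disk store.
    [self.imageView sd_setImageWithURL:url
                      placeholderImage:placeholder
                               options:SDWebImageCacheMemoryOnly];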

    AFNetworking's author has made a related point:

    NSURLCache reminds us how important it is to be familiar with the systems we are building on. When developing for iOS or OS X, none of those systems matters more than the URL Loading System.
    Countless developers hack together a crude, fragile system of their own for network caching, not realizing that NSURLCache can do it in two lines of code and a hundred times better. Even more developers never learn the benefits of network caching at all, so their apps hammer the server with unnecessary requests.
    
    

    http://nshipster.cn/nsurlcache/

    We won't second-guess what the author meant, but for a live-streaming app, or for images valid for only a few minutes, is storing them on disk really necessary?
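    For reference, the "two lines" the quote alludes to look roughly like this (a sketch; the 20 MB / 100 MB capacities are arbitrary example values):

    // Sketch: install a shared memory + disk URL cache early in app launch,
    // e.g. in application:didFinishLaunchingWithOptions:.
    NSURLCache *URLCache = [[NSURLCache alloc] initWithMemoryCapacity:20 * 1024 * 1024
                                                         diskCapacity:100 * 1024 * 1024
                                                             diskPath:nil];
    [NSURLCache setSharedURLCache:URLCache];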

    In this post we focus only on AFNetworking's decoding logic after an image has been downloaded.

    The post-download decoding logic lives in the AFImageResponseSerializer class.

    AFImageResponseSerializer inherits from AFHTTPResponseSerializer; when the downloaded content is an image, the relevant AFImageResponseSerializer methods are called to decode it.

    It adds two properties on top of AFHTTPResponseSerializer:

    @property (nonatomic, assign) CGFloat imageScale;
    @property (nonatomic, assign) BOOL automaticallyInflatesResponseImage;
    
    

    The first is the scale factor; the second controls whether the image is decoded after download. According to the header comments, setting automaticallyInflatesResponseImage to YES can significantly improve drawing performance.
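    A usage sketch, assuming a plain AFHTTPSessionManager setup (the manager wiring is my own example, not from the article):

    // Sketch: configure the image response serializer explicitly.
    AFImageResponseSerializer *serializer = [AFImageResponseSerializer serializer];
    serializer.imageScale = [[UIScreen mainScreen] scale];   // match the device's scale
    serializer.automaticallyInflatesResponseImage = YES;     // decode (inflate) off the main thread

    AFHTTPSessionManager *manager = [AFHTTPSessionManager manager];
    manager.responseSerializer = serializer;

    With that in place, the serializer's response handling decides which decode path a downloaded image takes: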

    - (id)responseObjectForResponse:(NSURLResponse *)response
                               data:(NSData *)data
                              error:(NSError *__autoreleasing *)error
    {
        if (![self validateResponse:(NSHTTPURLResponse *)response data:data error:error]) {
            if (!error || AFErrorOrUnderlyingErrorHasCodeInDomain(*error, NSURLErrorCannotDecodeContentData, AFURLResponseSerializationErrorDomain)) {
                return nil;
            }
        }
    
    #if TARGET_OS_IOS || TARGET_OS_TV || TARGET_OS_WATCH
        if (self.automaticallyInflatesResponseImage) {
            return AFInflatedImageFromResponseWithDataAtScale((NSHTTPURLResponse *)response, data, self.imageScale);
        } else {
            return AFImageWithDataAtScale(data, self.imageScale);
        }
    ....
    
        return nil;
    }
    
    

    As you can see, this method runs when an HTTP response carrying an image comes back. If automatic inflation is on, it calls AFInflatedImageFromResponseWithDataAtScale; if no decoding is wanted, it returns the image via AFImageWithDataAtScale.

    static UIImage * AFImageWithDataAtScale(NSData *data, CGFloat scale) {
        UIImage *image = [UIImage af_safeImageWithData:data];
        if (image.images) {
            return image;
        }
        
        return [[UIImage alloc] initWithCGImage:[image CGImage] scale:scale orientation:image.imageOrientation];
    }
    

    Without decoding, the image is created straight from the system APIs:

    static NSLock* imageLock = nil;   // file-scope lock, declared alongside the category in AFNetworking

    + (UIImage *)af_safeImageWithData:(NSData *)data {
        UIImage* image = nil;
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            imageLock = [[NSLock alloc] init];
        });
        
        [imageLock lock];
        image = [UIImage imageWithData:data];
        [imageLock unlock];
        return image;
    }
    
    

    The lock is taken here because imageWithData is reportedly not thread-safe (TODO: write code to verify this!). As we know, imageWithData does not decode the image, so returning it directly means decoding happens later, when the image is displayed, on the main thread, which is a real performance tax.
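    As a first stab at that verification, a hedged test sketch (my own harness, not from either library; test.jpg is a placeholder asset): hammer imageWithData from many threads at once and watch for nil results or crashes.

    // Hypothetical harness for probing imageWithData's thread safety.
    - (void)testImageWithDataConcurrency {
        NSString *path = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"jpg"]; // placeholder asset
        NSData *data = [NSData dataWithContentsOfFile:path];
        dispatch_group_t group = dispatch_group_create();
        dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);
        for (NSInteger i = 0; i < 1000; i++) {
            dispatch_group_async(group, queue, ^{
                UIImage *image = [UIImage imageWithData:data];
                NSAssert(image != nil, @"imageWithData returned nil under concurrency");
            });
        }
        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    }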

    Now let's look at the decoding path:

    static UIImage * AFInflatedImageFromResponseWithDataAtScale(NSHTTPURLResponse *response, NSData *data, CGFloat scale) {
        if (!data || [data length] == 0) {
            return nil;
        }
    
        CGImageRef imageRef = NULL;
        CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    
        if ([response.MIMEType isEqualToString:@"image/png"]) {
            imageRef = CGImageCreateWithPNGDataProvider(dataProvider,  NULL, true, kCGRenderingIntentDefault);
        } else if ([response.MIMEType isEqualToString:@"image/jpeg"]) {
            imageRef = CGImageCreateWithJPEGDataProvider(dataProvider, NULL, true, kCGRenderingIntentDefault);
    
            if (imageRef) {
                CGColorSpaceRef imageColorSpace = CGImageGetColorSpace(imageRef);
                CGColorSpaceModel imageColorSpaceModel = CGColorSpaceGetModel(imageColorSpace);
    
            // CGImageCreateWithJPEGDataProvider does not properly handle CMYK, so fall back to AFImageWithDataAtScale
                if (imageColorSpaceModel == kCGColorSpaceModelCMYK) {
                    CGImageRelease(imageRef);
                    imageRef = NULL;
                }
            }
        }
    
        CGDataProviderRelease(dataProvider);
    
        UIImage *image = AFImageWithDataAtScale(data, scale);
        if (!imageRef) {
            if (image.images || !image) {
                return image;
            }
    
            imageRef = CGImageCreateCopy([image CGImage]);
            if (!imageRef) {
                return nil;
            }
        }
    
        size_t width = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
        size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
    
        if (width * height > 1024 * 1024 || bitsPerComponent > 8) {
            CGImageRelease(imageRef);
    
            return image;
        }
    
        // CGImageGetBytesPerRow() calculates incorrectly in iOS 5.0, so defer to CGBitmapContextCreate
        size_t bytesPerRow = 0;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGColorSpaceModel colorSpaceModel = CGColorSpaceGetModel(colorSpace);
        CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    
        if (colorSpaceModel == kCGColorSpaceModelRGB) {
            uint32_t alpha = (bitmapInfo & kCGBitmapAlphaInfoMask);
    #pragma clang diagnostic push
    #pragma clang diagnostic ignored "-Wassign-enum"
            if (alpha == kCGImageAlphaNone) {
                bitmapInfo &= ~kCGBitmapAlphaInfoMask;
                bitmapInfo |= kCGImageAlphaNoneSkipFirst;
            } else if (!(alpha == kCGImageAlphaNoneSkipFirst || alpha == kCGImageAlphaNoneSkipLast)) {
                bitmapInfo &= ~kCGBitmapAlphaInfoMask;
                bitmapInfo |= kCGImageAlphaPremultipliedFirst;
            }
    #pragma clang diagnostic pop
        }
    
        CGContextRef context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);
    
        CGColorSpaceRelease(colorSpace);
    
        if (!context) {
            CGImageRelease(imageRef);
    
            return image;
        }
    
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), imageRef);
        CGImageRef inflatedImageRef = CGBitmapContextCreateImage(context);
    
        CGContextRelease(context);
    
        UIImage *inflatedImage = [[UIImage alloc] initWithCGImage:inflatedImageRef scale:scale orientation:image.imageOrientation];
    
        CGImageRelease(inflatedImageRef);
        CGImageRelease(imageRef);
    
        return inflatedImage;
    }
    
    

    Because all of this decoding runs when the network response returns, on a background thread, the image can be drawn directly when displayed, sparing the main thread the decode cost at render time.

    SDWebImage's decoding process

    SDWebImage's decoding lives in:

    #import "SDWebImageDecoder.h"
    

    Let's look at what it exposes:

    @interface UIImage (ForceDecode)
    
    + (nullable UIImage *)decodedImageWithImage:(nullable UIImage *)image;
    
    + (nullable UIImage *)decodedAndScaledDownImageWithImage:(nullable UIImage *)image;
    
    @end
    
    

    It exposes two class methods: one decodes an image; the other decodes a large image, scaling it down in the process.
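    Called by hand, it would look something like this (a sketch; imageData and self.imageView are assumed to exist, and normally SDWebImage invokes these for you after a download):

    // Sketch: force-decode on a background queue, hand the bitmap to the main thread.
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        UIImage *raw = [UIImage imageWithData:imageData];
        UIImage *decoded = [UIImage decodedImageWithImage:raw];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = decoded;
        });
    });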

    decodedAndScaledDownImageWithImage comes into play at download time: if SDWebImageDownloaderScaleDownLargeImages is set, decoding goes through decodedAndScaledDownImageWithImage. For example:

    [self.imageView sd_setImageWithURL:[NSURL URLWithString:item.picture] placeholderImage:[UIImage imageNamed:@"PTV_Normal_Default_Icon"] options:SDWebImageScaleDownLargeImages | SDWebImageRetryFailed];
    

    Here the download options are set to:

    SDWebImageScaleDownLargeImages | SDWebImageRetryFailed
    

    Then at decode time, if the image qualifies as large, the large-image decode is invoked. This happens in:

    - (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task didCompleteWithError:(NSError *)error
    

    where we find:

    if (self.shouldDecompressImages) {
        if (self.options & SDWebImageDownloaderScaleDownLargeImages) {
    #if SD_UIKIT || SD_WATCH
            image = [UIImage decodedAndScaledDownImageWithImage:image];
            [self.imageData setData:UIImagePNGRepresentation(image)];
    #endif
        } else {
            image = [UIImage decodedImageWithImage:image];
        }
    }
    

    Tip: the default download option is SDWebImageRetryFailed, which means a failed download will be retried next time. If the options do not include SDWebImageRetryFailed, the failed URL is added to a blacklist and never downloaded again!

    Now let's look at the large-image handling SDWebImage defines (AFNetworking has no equivalent):

    
    + (nullable UIImage *)decodedAndScaledDownImageWithImage:(nullable UIImage *)image {
        if (![UIImage shouldDecodeImage:image]) {
            return image;
        }
        
        if (![UIImage shouldScaleDownImage:image]) {
            return [UIImage decodedImageWithImage:image];
        }
        ……
    

    The method first checks whether the image can be decoded (shouldDecodeImage), then whether it needs scaling down (shouldScaleDownImage); if no scaling is needed, it simply falls back to the ordinary decode.
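    For context, shouldDecodeImage in this version of SDWebImage boils down to checks along these lines (a paraphrased sketch from my reading of the source, not verbatim):

    // Paraphrased sketch of SDWebImage's shouldDecodeImage: gatekeeping.
    + (BOOL)shouldDecodeImage:(nullable UIImage *)image {
        if (image == nil) {
            return NO;               // nothing to decode
        }
        if (image.images != nil) {
            return NO;               // skip animated images
        }
        // SDWebImage also declines images whose CGImage carries an alpha
        // channel, since redrawing them could mishandle transparency.
        return YES;
    }

    shouldScaleDownImage is the size check: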

    + (BOOL)shouldScaleDownImage:(nonnull UIImage *)image {
        BOOL shouldScaleDown = YES;
            
        CGImageRef sourceImageRef = image.CGImage;
        CGSize sourceResolution = CGSizeZero;
        sourceResolution.width = CGImageGetWidth(sourceImageRef);
        sourceResolution.height = CGImageGetHeight(sourceImageRef);
        float sourceTotalPixels = sourceResolution.width * sourceResolution.height;
        float imageScale = kDestTotalPixels / sourceTotalPixels;
        if (imageScale < 1) {
            shouldScaleDown = YES;
        } else {
            shouldScaleDown = NO;
        }
        
        return shouldScaleDown;
    }
    
    

    shouldScaleDownImage does just two things: read the source image's width and height, and compute the total pixel count. If it exceeds the maximum, the image needs the special treatment; otherwise it does not.

    The maximum is defined as follows; for context I've also included the supporting constants (kBytesPerPixel and friends) from the same SDWebImage source file, since the quoted code refers to them:

    static const size_t kBytesPerPixel = 4;
    static const size_t kBitsPerComponent = 8;
    
    static const CGFloat kBytesPerMB = 1024.0f * 1024.0f;
    static const CGFloat kPixelsPerMB = kBytesPerMB / kBytesPerPixel;
    
    static const CGFloat kDestImageSizeMB = 60.0f;
    static const CGFloat kDestTotalPixels = kDestImageSizeMB * kPixelsPerMB;
    
    static const CGFloat kSourceImageTileSizeMB = 20.0f;
    static const CGFloat kTileTotalPixels = kSourceImageTileSizeMB * kPixelsPerMB;
    
    static const CGFloat kDestSeemOverlap = 2.0f;
    
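    Plugging in the numbers: at 4 bytes per pixel, kPixelsPerMB = 1,048,576 / 4 = 262,144 pixels, so kDestTotalPixels = 60 × 262,144 = 15,728,640 pixels, roughly a 3966 × 3966 image. Anything larger takes the tiled path below.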

    The large-image handling itself looks like this:

    + (nullable UIImage *)decodedAndScaledDownImageWithImage:(nullable UIImage *)image {
        if (![UIImage shouldDecodeImage:image]) {
            return image;
        }
        
        if (![UIImage shouldScaleDownImage:image]) {
            return [UIImage decodedImageWithImage:image];
        }
        
        ///######## large-image processing starts here
        
        CGContextRef destContext;
        
        // autorelease the bitmap context and all vars to help system to free memory when there are memory warning.
        // on iOS7, do not forget to call [[SDImageCache sharedImageCache] clearMemory];
        @autoreleasepool {
            CGImageRef sourceImageRef = image.CGImage;
            
            CGSize sourceResolution = CGSizeZero;
            sourceResolution.width = CGImageGetWidth(sourceImageRef);
            sourceResolution.height = CGImageGetHeight(sourceImageRef);
            float sourceTotalPixels = sourceResolution.width * sourceResolution.height;
            // Determine the scale ratio to apply to the input image
            // that results in an output image of the defined size.
            // see kDestImageSizeMB, and how it relates to destTotalPixels.
            float imageScale = kDestTotalPixels / sourceTotalPixels;
            CGSize destResolution = CGSizeZero;
            destResolution.width = (int)(sourceResolution.width*imageScale);
            destResolution.height = (int)(sourceResolution.height*imageScale);
            
            // current color space
            CGColorSpaceRef colorspaceRef = [UIImage colorSpaceForImageRef:sourceImageRef];
            
            size_t bytesPerRow = kBytesPerPixel * destResolution.width;
            
            // Allocate enough pixel data to hold the output image.
            void* destBitmapData = malloc( bytesPerRow * destResolution.height );
            if (destBitmapData == NULL) {
                return image;
            }
            
            // kCGImageAlphaNone is not supported in CGBitmapContextCreate.
            // Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
            // to create bitmap graphics contexts without alpha info.
            destContext = CGBitmapContextCreate(destBitmapData,
                                                destResolution.width,
                                                destResolution.height,
                                                kBitsPerComponent,
                                                bytesPerRow,
                                                colorspaceRef,
                                                kCGBitmapByteOrderDefault|kCGImageAlphaNoneSkipLast);
            
            if (destContext == NULL) {
                free(destBitmapData);
                return image;
            }
            CGContextSetInterpolationQuality(destContext, kCGInterpolationHigh);
            
            // Now define the size of the rectangle to be used for the
            // incremental blits from the input image to the output image.
            // we use a source tile width equal to the width of the source
            // image due to the way that iOS retrieves image data from disk.
            // iOS must decode an image from disk in full width 'bands', even
            // if current graphics context is clipped to a subrect within that
            // band. Therefore we fully utilize all of the pixel data that results
            // from a decoding operation by anchoring our tile size to the full
            // width of the input image.
            CGRect sourceTile = CGRectZero;
            sourceTile.size.width = sourceResolution.width;
            // The source tile height is dynamic. Since we specified the size
            // of the source tile in MB, see how many rows of pixels high it
            // can be given the input image width.
            sourceTile.size.height = (int)(kTileTotalPixels / sourceTile.size.width );
            sourceTile.origin.x = 0.0f;
            // The output tile is the same proportions as the input tile, but
            // scaled to image scale.
            CGRect destTile;
            destTile.size.width = destResolution.width;
            destTile.size.height = sourceTile.size.height * imageScale;
            destTile.origin.x = 0.0f;
            // The source seam overlap is proportionate to the destination seam overlap.
            // this is the amount of pixels to overlap each tile as we assemble the output image.
            float sourceSeemOverlap = (int)((kDestSeemOverlap/destResolution.height)*sourceResolution.height);
            CGImageRef sourceTileImageRef;
            // calculate the number of read/write operations required to assemble the
            // output image.
            int iterations = (int)( sourceResolution.height / sourceTile.size.height );
            // If tile height doesn't divide the image height evenly, add another iteration
            // to account for the remaining pixels.
            int remainder = (int)sourceResolution.height % (int)sourceTile.size.height;
            if(remainder) {
                iterations++;
            }
            // Add seam overlaps to the tiles, but save the original tile height for y coordinate calculations.
            float sourceTileHeightMinusOverlap = sourceTile.size.height;
            sourceTile.size.height += sourceSeemOverlap;
            destTile.size.height += kDestSeemOverlap;
            for( int y = 0; y < iterations; ++y ) {
                @autoreleasepool {
                    sourceTile.origin.y = y * sourceTileHeightMinusOverlap + sourceSeemOverlap;
                    destTile.origin.y = destResolution.height - (( y + 1 ) * sourceTileHeightMinusOverlap * imageScale + kDestSeemOverlap);
                    sourceTileImageRef = CGImageCreateWithImageInRect( sourceImageRef, sourceTile );
                    if( y == iterations - 1 && remainder ) {
                        float dify = destTile.size.height;
                        destTile.size.height = CGImageGetHeight( sourceTileImageRef ) * imageScale;
                        dify -= destTile.size.height;
                        destTile.origin.y += dify;
                    }
                    CGContextDrawImage( destContext, destTile, sourceTileImageRef );
                    CGImageRelease( sourceTileImageRef );
                }
            }
            
            CGImageRef destImageRef = CGBitmapContextCreateImage(destContext);
            CGContextRelease(destContext);
            if (destImageRef == NULL) {
                return image;
            }
            UIImage *destImage = [UIImage imageWithCGImage:destImageRef scale:image.scale orientation:image.imageOrientation];
            CGImageRelease(destImageRef);
            if (destImage == nil) {
                return image;
            }
            return destImage;
        }
    }
    
    

    The decodedAndScaledDownImageWithImage large-image pipeline

    It wraps the work in @autoreleasepool to release memory promptly, for example:

    for( int y = 0; y < iterations; ++y ) {
        @autoreleasepool {
            // the per-tile drawing shown above runs here, inside its own
            // autorelease pool, so temporaries are freed after every pass
            // TODO: add a diagram
        }
    }
    

    1: First, compute the size of the output image.

    Because the output is sized against the maximum pixel budget, its dimensions are derived from that maximum:

    CGImageRef sourceImageRef = image.CGImage;
    
    CGSize sourceResolution = CGSizeZero;
    sourceResolution.width = CGImageGetWidth(sourceImageRef);
    sourceResolution.height = CGImageGetHeight(sourceImageRef);
    float sourceTotalPixels = sourceResolution.width * sourceResolution.height;
    
    float imageScale = kDestTotalPixels / sourceTotalPixels;
    CGSize destResolution = CGSizeZero;
    destResolution.width = (int)(sourceResolution.width*imageScale);
    destResolution.height = (int)(sourceResolution.height*imageScale);
    
    

    This reads the source image's dimensions and then derives the scale factor:

    float imageScale = kDestTotalPixels / sourceTotalPixels;
            
    

    By the time execution reaches this point, the scale factor is guaranteed to be less than 1.
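    For example, a 6000 × 6000 source is 36,000,000 pixels, so imageScale = 15,728,640 / 36,000,000 ≈ 0.44, and the output comes out around 2621 × 2621. (Note the ratio is one of pixel counts but is applied to side lengths, so the output actually lands well under the pixel budget.)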

    2: Use CGBitmapContextCreate to create a suitably sized CGContextRef for the tiled drawing that follows.
    3: Compute the iteration count (the number of loop passes).
    4: Draw tile by tile: CGImageCreateWithImageInRect extracts one band of the source image, and CGContextDrawImage draws it onto the canvas created in step 2.
    5: Use CGBitmapContextCreateImage to obtain the decoded image.
    6: Release the related resources.

    To be continued......

    External links on image decoding

    如何避免图片解压缩开销 (How to avoid image decompression overhead)

    图片处理的tips (Image-processing tips)

    谈谈iOS图片的解压缩 (On image decompression in iOS)

    改变图片尺寸的方法和性能对比 (Methods for resizing images, with performance comparisons)
