SDWebImage Main Flow: Image Decoding

Author: ChinaChong | Published 2020-01-15 17:41

    SDWebImage's main flow involves roughly two kinds of decoding: ordinary decoding and progressive decoding. This article covers only ordinary decoding.

    Ordinary decoding further splits into normal decoding and large-image decoding.


    Ordinary decoding

    Ordinary decoding starts from -[SDWebImageDownloaderOperation URLSession:task:didCompleteWithError:]

    For more detail, see SDWebImage主线梳理(二) (Part 2 of this series)

    1. Under a synchronization lock, post two notifications: SDWebImageDownloadStopNotification and SDWebImageDownloadFinishNotification

    2. If self.callbackBlocks contains a LoaderCompletedBlock (key = kCompletedCallbackKey), continue

    3. self.imageData (NSMutableData) is the mutable data accumulated chunk by chunk in didReceiveData:

    4. On an asynchronous serial queue (self.coderQueue), call SDImageLoaderDecodeImageData() to decode imageData into a UIImage

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    The decode function SDImageLoaderDecodeImageData()

    SDImageLoaderDecodeImageData() is implemented in SDImageLoader.m

    1. SDImageScaleFactorForKey(NSString *key) returns a screen scale factor.

      1. If the key is a plain file name, check whether it contains "@2x." or "@3x."; if it does, return that factor.
      2. If the key is a URL, '@' is percent-encoded as %40, so check whether the key contains "%402x." or "%403x.".
    2. If the options do not say "decode first frame only" and the image is animated, load all of the animation's frames

    3. If the image is not animated, call decodedImageWithData:options: on [SDImageCodersManager sharedManager]; see Detail 1

    4. Animated images are not decoded

    5. If the image should be decoded, check whether it should also be scaled down

    6. Scale down: +[SDImageCoderHelper decodedAndScaledDownImageWithImage:limitBytes:]; see Detail 2

    7. No scale-down: +[SDImageCoderHelper decodedImageWithImage:]; see Detail 3

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    Detail 1

    The +[SDImageCodersManager sharedManager] method

    It calls +[SDImageCodersManager new], which in turn calls -[SDImageCodersManager init]

    + (nonnull instancetype)sharedManager {
        static dispatch_once_t once;
        static id instance;
        dispatch_once(&once, ^{
            instance = [self new];
        });
        return instance;
    }
    
    The -[SDImageCodersManager init] method
    1. Creates a mutable array initialized with SDImageIOCoder, SDImageGIFCoder, and SDImageAPNGCoder, each obtained via its sharedCoder method (expanded below)
    2. Creates a semaphore lock, stored in SDImageCodersManager's _codersLock ivar
    - (instancetype)init {
        if (self = [super init]) {
            // initialize with default coders
            _imageCoders = [NSMutableArray arrayWithArray:@[[SDImageIOCoder sharedCoder], [SDImageGIFCoder sharedCoder], [SDImageAPNGCoder sharedCoder]]];
            _codersLock = dispatch_semaphore_create(1);
        }
        return self;
    }
    
    The -[SDImageCodersManager decodedImageWithData:options:] method
    1. Fetch _imageCoders (under the lock): SDImageIOCoder, SDImageGIFCoder, SDImageAPNGCoder

    2. Iterate _imageCoders in reverse order: SDImageAPNGCoder -> SDImageGIFCoder -> SDImageIOCoder

    3. Call each coder's canDecodeFromData: to check whether it can decode this format

      1. -[SDImageIOAnimatedCoder canDecodeFromData:] compares two values (image formats)
        1. +[NSData(ImageContentType) sd_imageFormatForImageData:] reads the image format contained in the data
          1. -[NSData getBytes:length:] reads a single byte (the first one), which is enough to distinguish the formats
        2. self.class.imageFormat (here, +[SDImageAPNGCoder imageFormat]) == SDImageFormatPNG
    4. If a coder returns YES, its decodedImageWithData:options: is called to produce the image.

      1. -[SDImageIOAnimatedCoder decodedImageWithData:options:] ::: this is not where decoding happens; it merely wraps the compressed image bytes in a UIImage.
    5. Break out of the for loop and return the image

    // -[SDImageCodersManager decodedImageWithData:options:]
    - (UIImage *)decodedImageWithData:(NSData *)data options:(nullable SDImageCoderOptions *)options {
        if (!data) {
            return nil;
        }
        UIImage *image;
        NSArray<id<SDImageCoder>> *coders = self.coders;
        for (id<SDImageCoder> coder in coders.reverseObjectEnumerator) {
            if ([coder canDecodeFromData:data]) {
                image = [coder decodedImageWithData:data options:options];
                break;
            }
        }
        
        return image;
    }
    
    // -[SDImageIOAnimatedCoder canDecodeFromData:]
    - (BOOL)canDecodeFromData:(nullable NSData *)data {
        return ([NSData sd_imageFormatForImageData:data] == self.class.imageFormat);
    }
    
    @implementation NSData (ImageContentType)
    
    + (SDImageFormat)sd_imageFormatForImageData:(nullable NSData *)data {
        if (!data) {
            return SDImageFormatUndefined;
        }
        // File signatures table: http://www.garykessler.net/library/file_sigs.html
        uint8_t c;
        [data getBytes:&c length:1];
        switch (c) {
            case 0xFF:
                return SDImageFormatJPEG;
            case 0x89:
                return SDImageFormatPNG;
            case 0x47:
                return SDImageFormatGIF;
            case 0x49:
            case 0x4D:
                return SDImageFormatTIFF;
            case 0x52: {
                if (data.length >= 12) {
                    //RIFF....WEBP
                    NSString *testString = [[NSString alloc] initWithData:[data subdataWithRange:NSMakeRange(0, 12)] encoding:NSASCIIStringEncoding];
                    if ([testString hasPrefix:@"RIFF"] && [testString hasSuffix:@"WEBP"]) {
                        return SDImageFormatWebP;
                    }
                }
                break;
            }
            case 0x00: {
                if (data.length >= 12) {
                    //....ftypheic ....ftypheix ....ftyphevc ....ftyphevx
                    NSString *testString = [[NSString alloc] initWithData:[data subdataWithRange:NSMakeRange(4, 8)] encoding:NSASCIIStringEncoding];
                    if ([testString isEqualToString:@"ftypheic"]
                        || [testString isEqualToString:@"ftypheix"]
                        || [testString isEqualToString:@"ftyphevc"]
                        || [testString isEqualToString:@"ftyphevx"]) {
                        return SDImageFormatHEIC;
                    }
                    //....ftypmif1 ....ftypmsf1
                    if ([testString isEqualToString:@"ftypmif1"] || [testString isEqualToString:@"ftypmsf1"]) {
                        return SDImageFormatHEIF;
                    }
                }
                break;
            }
        }
        return SDImageFormatUndefined;
    }
    
    // -[SDImageIOAnimatedCoder decodedImageWithData:options:]
    - (UIImage *)decodedImageWithData:(NSData *)data options:(nullable SDImageCoderOptions *)options {
        if (!data) {
            return nil;
        }
        CGFloat scale = 1;
        NSNumber *scaleFactor = options[SDImageCoderDecodeScaleFactor];
        if (scaleFactor != nil) {
            scale = MAX([scaleFactor doubleValue], 1);
        }
        
    #if SD_MAC
        SDAnimatedImageRep *imageRep = [[SDAnimatedImageRep alloc] initWithData:data];
        NSSize size = NSMakeSize(imageRep.pixelsWide / scale, imageRep.pixelsHigh / scale);
        imageRep.size = size;
        NSImage *animatedImage = [[NSImage alloc] initWithSize:size];
        [animatedImage addRepresentation:imageRep];
        return animatedImage;
    #else
        
        CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
        if (!source) {
            return nil;
        }
        size_t count = CGImageSourceGetCount(source);
        UIImage *animatedImage;
        
        BOOL decodeFirstFrame = [options[SDImageCoderDecodeFirstFrameOnly] boolValue];
        if (decodeFirstFrame || count <= 1) {
            animatedImage = [[UIImage alloc] initWithData:data scale:scale];
        } else {
            NSMutableArray<SDImageFrame *> *frames = [NSMutableArray array];
            
            for (size_t i = 0; i < count; i++) {
                CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, i, NULL);
                if (!imageRef) {
                    continue;
                }
                
                NSTimeInterval duration = [self.class frameDurationAtIndex:i source:source];
                UIImage *image = [[UIImage alloc] initWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
                CGImageRelease(imageRef);
                
                SDImageFrame *frame = [SDImageFrame frameWithImage:image duration:duration];
                [frames addObject:frame];
            }
            
            NSUInteger loopCount = [self.class imageLoopCountWithSource:source];
            
            animatedImage = [SDImageCoderHelper animatedImageWithFrames:frames];
            animatedImage.sd_imageLoopCount = loopCount;
        }
        animatedImage.sd_imageFormat = self.class.imageFormat;
        CFRelease(source);
        
        return animatedImage;
    #endif
    }
    

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    Detail 2: Decoding Large Images

    The +[SDImageCoderHelper decodedAndScaledDownImageWithImage:limitBytes:] method
    1. +[SDImageCoderHelper shouldDecodeImage:]

      • Images already decoded (checked via the associated property sd_isDecoded) are not decoded again;
      • A nil image is not decoded;
      • Animated images are not decoded;
    2. +[SDImageCoderHelper shouldScaleDownImage:limitBytes:] decides whether to scale down; if not, just decode the image normally

      1. The byte limit above which an image is scaled down can be set manually or fall back to SD's default
      2. By default, SD scales down when the image's total pixel count exceeds that of a 60 MB decoded bitmap, i.e. 60 × 262,144 = 15,728,640 pixels (262,144 is the pixel count per MB at 4 bytes per pixel)
      3. Scaling down is needed whenever the pixel limit is smaller than the source pixel count. PS: the limit can't be too small either; it must exceed 1 MB worth of pixels
    3. Initialize the destination total pixel count (destTotalPixels, the pixels of 60 MB) and the per-tile pixel count of the destination image (tileTotalPixels, the pixels of 20 MB)

    For everything else, see the comments added in the source below

    + (UIImage *)decodedAndScaledDownImageWithImage:(UIImage *)image limitBytes:(NSUInteger)bytes {
    #if SD_MAC
        return image;
    #else
        if (![self shouldDecodeImage:image]) {
            return image;
        }
        
        if (![self shouldScaleDownImage:image limitBytes:bytes]) {
            return [self decodedImageWithImage:image];
        }
        
        CGFloat destTotalPixels;
        CGFloat tileTotalPixels;
        if (bytes > 0) {
            destTotalPixels = bytes / kBytesPerPixel;
            tileTotalPixels = destTotalPixels / 3;
        } else {
            destTotalPixels = kDestTotalPixels;
            tileTotalPixels = kTileTotalPixels;
        }
        CGContextRef destContext;
        
        // autorelease the bitmap context and all vars to help system to free memory when there are memory warning.
        // on iOS7, do not forget to call [[SDImageCache sharedImageCache] clearMemory];
        @autoreleasepool {
            CGImageRef sourceImageRef = image.CGImage;
            
            CGSize sourceResolution = CGSizeZero;
            sourceResolution.width = CGImageGetWidth(sourceImageRef);
            sourceResolution.height = CGImageGetHeight(sourceImageRef);
            CGFloat sourceTotalPixels = sourceResolution.width * sourceResolution.height;
            // Determine the scale ratio to apply to the input image
            // that results in an output image of the defined size.
            // see kDestImageSizeMB, and how it relates to destTotalPixels.
            // Square root here: if, say, destTotalPixels corresponds to 60 MB of pixels and sourceTotalPixels to 240 MB, then imageScale = 1/2.
            // Total pixels come from width * height, so the per-axis ratio is the square root of the pixel ratio.
            CGFloat imageScale = sqrt(destTotalPixels / sourceTotalPixels);
            CGSize destResolution = CGSizeZero;
            destResolution.width = (int)(sourceResolution.width * imageScale);
            destResolution.height = (int)(sourceResolution.height * imageScale);
            
            // device color space
            CGColorSpaceRef colorspaceRef = [self colorSpaceGetDeviceRGB];
            BOOL hasAlpha = [self CGImageContainsAlpha:sourceImageRef];
            // iOS display alpha info (BGRA8888/BGRX8888)
            CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
            bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;
            
            // kCGImageAlphaNone is not supported in CGBitmapContextCreate.
            // Since the original image here has no alpha info, use kCGImageAlphaNoneSkipFirst
            // to create bitmap graphics contexts without alpha info.
            destContext = CGBitmapContextCreate(NULL,
                                                destResolution.width,
                                                destResolution.height,
                                                kBitsPerComponent,
                                                0,
                                                colorspaceRef,
                                                bitmapInfo);
            
            if (destContext == NULL) {
                return image;
            }
            CGContextSetInterpolationQuality(destContext, kCGInterpolationHigh);
            
            // Now define the size of the rectangle to be used for the
            // incremental blits from the input image to the output image.
            // we use a source tile width equal to the width of the source
            // image due to the way that iOS retrieves image data from disk.
            // iOS must decode an image from disk in full width 'bands', even
            // if current graphics context is clipped to a subrect within that
            // band. Therefore we fully utilize all of the pixel data that results
            // from a decoding operation by anchoring our tile size to the full
            // width of the input image.
            CGRect sourceTile = CGRectZero;
            sourceTile.size.width = sourceResolution.width;
            // The source tile height is dynamic. Since we specified the size
            // of the source tile in MB, see how many rows of pixels high it
            // can be given the input image width.
            sourceTile.size.height = (int)(tileTotalPixels / sourceTile.size.width );
            sourceTile.origin.x = 0.0f;
            // The output tile is the same proportions as the input tile, but
            // scaled to image scale.
            CGRect destTile;
            destTile.size.width = destResolution.width;
            destTile.size.height = sourceTile.size.height * imageScale;
            destTile.origin.x = 0.0f;
            // The source seem overlap is proportionate to the destination seem overlap.
            // this is the amount of pixels to overlap each tile as we assemble the output image.
            float sourceSeemOverlap = (int)((kDestSeemOverlap/destResolution.height)*sourceResolution.height);
            CGImageRef sourceTileImageRef;
            // calculate the number of read/write operations required to assemble the
            // output image.
            int iterations = (int)( sourceResolution.height / sourceTile.size.height );
            // If tile height doesn't divide the image height evenly, add another iteration
            // to account for the remaining pixels.
            int remainder = (int)sourceResolution.height % (int)sourceTile.size.height;
            if(remainder) {
                iterations++;
            }
            // Add seem overlaps to the tiles, but save the original tile height for y coordinate calculations.
            float sourceTileHeightMinusOverlap = sourceTile.size.height;
            sourceTile.size.height += sourceSeemOverlap;
            destTile.size.height += kDestSeemOverlap;
            
            for( int y = 0; y < iterations; ++y ) {
                @autoreleasepool {
                    sourceTile.origin.y = y * sourceTileHeightMinusOverlap + sourceSeemOverlap;
                    destTile.origin.y = destResolution.height - (( y + 1 ) * sourceTileHeightMinusOverlap * imageScale + kDestSeemOverlap); // converting between user-space and device-space coordinates (flipped y-axis)
                    sourceTileImageRef = CGImageCreateWithImageInRect( sourceImageRef, sourceTile );
                    if( y == iterations - 1 && remainder ) { // last iteration, and the height didn't divide evenly
                        float dify = destTile.size.height;
                        // real height of the final leftover slice
                        destTile.size.height = CGImageGetHeight( sourceTileImageRef ) * imageScale;
                        // adjust the y origin accordingly
                        dify -= destTile.size.height;
                        destTile.origin.y += dify;
                    }
                    CGContextDrawImage( destContext, destTile, sourceTileImageRef );
                    CGImageRelease( sourceTileImageRef );
                }
            }
            
            CGImageRef destImageRef = CGBitmapContextCreateImage(destContext);
            CGContextRelease(destContext);
            if (destImageRef == NULL) {
                return image;
            }
            UIImage *destImage = [[UIImage alloc] initWithCGImage:destImageRef scale:image.scale orientation:image.imageOrientation];
            CGImageRelease(destImageRef);
            if (destImage == nil) {
                return image;
            }
            destImage.sd_isDecoded = YES;
            destImage.sd_imageFormat = image.sd_imageFormat;
            return destImage;
        }
    #endif
    }
    

    This large-image decoding is not SD's invention; it is Apple's official solution, which comes with a downloadable demo. To learn more, set sail for Apple's "Large Image Downsizing" sample!
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    Detail 3: Normal Decoding

    The +[SDImageCoderHelper decodedImageWithImage:] method
    1. +[SDImageCoderHelper shouldDecodeImage:]: images already decoded (associated property sd_isDecoded) are skipped; nil images are skipped; animated images are skipped

    2. +[SDImageCoderHelper CGImageCreateDecoded:]

    3. +[SDImageCoderHelper CGImageCreateDecoded:orientation:]: takes image.CGImage and outputs a decoded CGImageRef; the core decode call is CGContextDrawImage()

      1. Get the image's width and height; if the orientation is rotated left or right, swap them

      2. Check whether the image carries alpha information

      3. Get the 32-bit byte order (the kCGBitmapByteOrder32Host macro spares us from worrying about endianness) and store it in the bitmap info, bitmapInfo

      4. Bitwise-OR the pixel format (alpha info) into bitmapInfo: kCGImageAlphaPremultipliedFirst (BGRA8888) when there is alpha, kCGImageAlphaNoneSkipFirst when there is not

      5. Call +[SDImageCoderHelper colorSpaceGetDeviceRGB] to get the color space, a singleton

      6. Call CGBitmapContextCreate() to create the bitmap context

      7. Call SDCGContextTransformFromOrientation() to get the CGAffineTransform for the orientation, and apply it to the context via CGContextConcatCTM (a coordinate-system conversion)

      8. ❗️The decode: CGContextDrawImage()❗️

        1. The bytesPerRow argument passed in is 0; normally it would be width * bytesPerPixel, but with 0 the system computes it for us, adding a small optimization of its own.
        2. The effect is clearly visible in Xcode's memory and CPU gauges
      9. Grab the bitmap: CGBitmapContextCreateImage() takes the context and outputs the decoded CGImageRef.

      10. Release the context

    4. -[UIImage initWithCGImage:scale:orientation:] turns the CGImage into a UIImage

    5. Set the UIImage's associated properties: decodedImage.sd_isDecoded = YES; decodedImage.sd_imageFormat = image.sd_imageFormat;

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    One more thing

    SDImageCacheDecodeImageData() is also plain normal decoding; its logic is essentially the same as the SDImageLoaderDecodeImageData() function discussed above.

    It is implemented in SDImageCacheDefine.m, a file that contains only this one function and nothing else.

    UIImage * _Nullable SDImageCacheDecodeImageData(NSData * _Nonnull imageData, NSString * _Nonnull cacheKey, SDWebImageOptions options, SDWebImageContext * _Nullable context) {
        UIImage *image;
        BOOL decodeFirstFrame = SD_OPTIONS_CONTAINS(options, SDWebImageDecodeFirstFrameOnly);
        NSNumber *scaleValue = context[SDWebImageContextImageScaleFactor];
        CGFloat scale = scaleValue.doubleValue >= 1 ? scaleValue.doubleValue : SDImageScaleFactorForKey(cacheKey);
        SDImageCoderOptions *coderOptions = @{SDImageCoderDecodeFirstFrameOnly : @(decodeFirstFrame), SDImageCoderDecodeScaleFactor : @(scale)};
        if (context) {
            SDImageCoderMutableOptions *mutableCoderOptions = [coderOptions mutableCopy];
            [mutableCoderOptions setValue:context forKey:SDImageCoderWebImageContext];
            coderOptions = [mutableCoderOptions copy];
        }
        
        if (!decodeFirstFrame) {
            Class animatedImageClass = context[SDWebImageContextAnimatedImageClass];
            // check whether we should use `SDAnimatedImage`
            if ([animatedImageClass isSubclassOfClass:[UIImage class]] && [animatedImageClass conformsToProtocol:@protocol(SDAnimatedImage)]) {
                image = [[animatedImageClass alloc] initWithData:imageData scale:scale options:coderOptions];
                if (image) {
                    // Preload frames if supported
                    if (options & SDWebImagePreloadAllFrames && [image respondsToSelector:@selector(preloadAllFrames)]) {
                        [((id<SDAnimatedImage>)image) preloadAllFrames];
                    }
                } else {
                    // Check image class matching
                    if (options & SDWebImageMatchAnimatedImageClass) {
                        return nil;
                    }
                }
            }
        }
        if (!image) {
            image = [[SDImageCodersManager sharedManager] decodedImageWithData:imageData options:coderOptions];
        }
        if (image) {
            BOOL shouldDecode = !SD_OPTIONS_CONTAINS(options, SDWebImageAvoidDecodeImage);
            if ([image.class conformsToProtocol:@protocol(SDAnimatedImage)]) {
                // `SDAnimatedImage` do not decode
                shouldDecode = NO;
            } else if (image.sd_isAnimated) {
                // animated image do not decode
                shouldDecode = NO;
            }
            if (shouldDecode) {
                BOOL shouldScaleDown = SD_OPTIONS_CONTAINS(options, SDWebImageScaleDownLargeImages);
                if (shouldScaleDown) {
                    image = [SDImageCoderHelper decodedAndScaledDownImageWithImage:image limitBytes:0];
                } else {
                    image = [SDImageCoderHelper decodedImageWithImage:image];
                }
            }
        }
        
        return image;
    }
    

    FAQ

    -[SDWebImageDownloaderDecryptor decryptedDataWithData:response:] internally invokes a private block to process imageData and returns NSData that is assigned back to imageData.

    • Q1: Where is decryptor initialized, and where is its block implemented?
    • A1: Nowhere. By default decryptor is nil; it is a property left for the developer to set

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    • Q2: What do the SDImageLoader.h and SDImageLoader.m files consist of?
    • A2: SDImageLoader.h holds only the EXPORT declarations of two functions plus one protocol declaration; SDImageLoader.m holds just those two function implementations.

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    • Q3: The SDImageCacheDefine files vs. the SDImageLoader files
    • A3: Their decode code is essentially identical; the protocols they declare differ.
      Decoding:
      SDImageLoader's decode implementation is SDImageLoaderDecodeImageData()
      SDImageCacheDefine's decode implementation is SDImageCacheDecodeImageData()
      Protocols:
      SDImageLoader declares @protocol SDImageLoader
      SDImageCacheDefine declares @protocol SDImageCache

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    • Q4: How many kinds of coder are there?
    • A4: Coders fall roughly into two camps, Apple built-in and SD's own; five in total.
    1. SDImageIOCoder: Apple built-in; decodes PNG, JPEG, and TIFF, plus a static image of a GIF's first frame.

      • Decodes with UIImage *image = [[UIImage alloc] initWithData:data scale:scale];
    2. SDImageIOAnimatedCoder: the base class of the three below

    3. SDImageGIFCoder: subclass of SDImageIOAnimatedCoder

    4. SDImageAPNGCoder: subclass of SDImageIOAnimatedCoder

    5. SDImageHEICCoder: subclass of SDImageIOAnimatedCoder

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

    • Q5: What does CGContextSetInterpolationQuality() set?
    • A5: The interpolation quality. When an image is resized, its pixels cannot map one-to-one onto the new size, so interpolation happens while generating the new image: producing new pixels from the source pixels according to the ratio between the source and destination sizes. Common interpolation algorithms include linear, bilinear, and cubic convolution interpolation.

    A Few Image Fundamentals

    Everything below comes from Apple's official documentation and samples; see the Quartz 2D Programming Guide.

    First, the key image basics:

    Color space (ColorSpace, CS)

    A color space is, put plainly, things like RGB and CMYK
    Color space documentation

    Pixel format

    Made up of three values: bits per pixel (bpp), bits per component (bpc), and bytes per row
    Pixel format documentation

    Color component

    A color component is one channel of a color space. In the RGB color space, R is a color component, and likewise G and B.


    (Figure: color components and color spaces)
    Pixel formats for each color space

    The pixel format for each color space is fixed, i.e. its bits per pixel (bpp) and bits per component (bpc) are prescribed; the table below lists the pixel format for each color space:

    ColorSpace Pixel format bitmap information constant Availability
    Null 8 bpp, 8 bpc kCGImageAlphaOnly Mac OS X, iOS
    Gray 8 bpp, 8 bpc kCGImageAlphaNone Mac OS X, iOS
    Gray 8 bpp, 8 bpc kCGImageAlphaOnly Mac OS X, iOS
    RGB 16 bpp, 5 bpc kCGImageAlphaNoneSkipFirst Mac OS X, iOS
    RGB 32 bpp, 8 bpc kCGImageAlphaNoneSkipFirst Mac OS X, iOS
    RGB 32 bpp, 8 bpc kCGImageAlphaNoneSkipLast Mac OS X, iOS
    RGB 32 bpp, 8 bpc kCGImageAlphaPremultipliedFirst Mac OS X, iOS
    RGB 32 bpp, 8 bpc kCGImageAlphaPremultipliedLast Mac OS X, iOS

    Why does the table omit bytes per row? Because bytes per row is derived by calculation: bytes per row = width * bytes per pixel. So once you have bpp, you also have bytes per row

    Macros from the official large-image decoding sample

    The official large-image decoding sample defines a number of macros that make it easy to see which image quantities are fixed and which are computed.

    #define bytesPerMB 1048576.0f 
    #define bytesPerPixel 4.0f
    #define pixelsPerMB ( bytesPerMB / bytesPerPixel ) // 262144 pixels, for 4 bytes per pixel.
    #define destTotalPixels kDestImageSizeMB * pixelsPerMB
    #define tileTotalPixels kSourceImageTileSizeMB * pixelsPerMB
    #define destSeemOverlap 2.0f // the numbers of pixels to overlap the seems where tiles meet.
    #define kDestImageSizeMB 60.0f // The resulting image will be (x)MB of uncompressed image data. 
    #define kSourceImageTileSizeMB 20.0f // The tile size will be (x)MB of uncompressed image data. 
    
    The core decode function CGBitmapContextCreate()

    Key points from the CGBitmapContextCreate() header comment:

    • The number of components for each pixel is specified by space, which may also specify a destination color profile.

    • Note that the only legal case when space can be NULL is when alpha is specified as kCGImageAlphaOnly.

    • The number of bits for each component of a pixel is specified by bitsPerComponent.

    • The number of bytes per pixel is equal to (bitsPerComponent * number of components + 7)/8.

    • Each row of the bitmap consists of bytesPerRow bytes, which must be at least width * bytes per pixel bytes; in addition, bytesPerRow must be an integer multiple of the number of bytes per pixel.

    • data, if non-NULL, points to a block of memory at least bytesPerRow * height bytes.

    • If data is NULL, the data for context is allocated automatically and freed when the context is deallocated.

    • bitmapInfo specifies whether the bitmap should contain an alpha channel and how it's to be generated, along with whether the components are floating-point or integer.



        Source: https://www.haomeiwen.com/subject/oxqaactx.html