
Optimizing YUV-to-RGB Conversion for CVPixelBuffer

Author: BohrIsLay | Published 2021-01-17 18:42

While integrating YouTu's 3D face-detection SDK, we found that its detection interface requires an RGB-format CVPixelBuffer. The platform team asked our business side to quickly measure how long YouTu's 3D face detection takes, but the camera in our project outputs YUV-format CVPixelBuffers, so we had to convert them to RGB-format CVPixelBuffers first.

Taking the iPhone 7 Plus as an example, at a 720p capture resolution YouTu's face detection takes roughly 9 ms, while the CVPixelBuffer conversion itself, if written carelessly, can take as much as 10 ms.

The essence of YUV-to-RGB conversion is to read each pixel's YUV components and compute the corresponding RGB components, along the lines of the pseudocode below:

// Per-pixel conversion with BT.601-style coefficients; the float results are
// clamped to the valid [0, 255] range before being stored.
static inline uint8_t clamp255(float v) {
    return (uint8_t)(v < 0.f ? 0.f : (v > 255.f ? 255.f : v));
}

void YUVImage::yuv2rgb(uint8_t yValue, uint8_t uValue, uint8_t vValue,
        uint8_t *r, uint8_t *g, uint8_t *b) const {
    *r = clamp255(yValue + (1.370705f * (vValue - 128)));
    *g = clamp255(yValue - (0.698001f * (vValue - 128)) - (0.337633f * (uValue - 128)));
    *b = clamp255(yValue + (1.732446f * (uValue - 128)));
}

There are several ways to do the conversion:

Approach 1: the Core Image framework

Step 1: wrap the input CVPixelBuffer in a CIImage

CIImage *image = [CIImage imageWithCVPixelBuffer:inputPixelBuffer];

Step 2: create a CIContext backed by an OpenGL ES context

_glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
_CIRenderContext = [CIContext contextWithEAGLContext:_glContext];

Step 3: call CIContext's render method to draw the CIImage into an RGB-format CVPixelBuffer

// newPixelBuffer:size: is a custom helper that wraps CVPixelBufferCreate
CVPixelBufferRef bgra_pixel_buffer = [self newPixelBuffer:kCVPixelFormatType_32BGRA size:CGSizeMake(iWidth, iHeight)];

[_CIRenderContext render:image toCVPixelBuffer:bgra_pixel_buffer];
 

If bgra_pixel_buffer is created anew every time, the measured time increases by about 1-2 ms. Caching the buffer, and only creating a new one when the input pixel buffer's width or height changes, is more efficient; with the cache in place the conversion takes about 4-5 ms. A sketch of the cached path is shown below.
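
A minimal sketch of the cached Core Image path, under the assumption that _glContext, _CIRenderContext and _bgra_pixel_buffer are instance variables as in the snippets above (the size check is the cache-invalidation rule just described):

// Sketch: Core Image YUV-to-BGRA conversion with a cached output buffer.
- (CVPixelBufferRef)coreImageConvert:(CVPixelBufferRef)inputPixelBuffer {
    size_t iWidth  = CVPixelBufferGetWidth(inputPixelBuffer);
    size_t iHeight = CVPixelBufferGetHeight(inputPixelBuffer);

    // Recreate the cached BGRA buffer only when the input size changes.
    if (!_bgra_pixel_buffer ||
        CVPixelBufferGetWidth(_bgra_pixel_buffer) != iWidth ||
        CVPixelBufferGetHeight(_bgra_pixel_buffer) != iHeight) {
        if (_bgra_pixel_buffer) {
            CVPixelBufferRelease(_bgra_pixel_buffer);
        }
        NSDictionary *attrs = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
        CVPixelBufferCreate(kCFAllocatorDefault, iWidth, iHeight,
                            kCVPixelFormatType_32BGRA,
                            (__bridge CFDictionaryRef)attrs, &_bgra_pixel_buffer);
    }

    CIImage *image = [CIImage imageWithCVPixelBuffer:inputPixelBuffer];
    // Core Image performs the YUV-to-BGRA conversion while rendering into the buffer.
    [_CIRenderContext render:image toCVPixelBuffer:_bgra_pixel_buffer];
    return _bgra_pixel_buffer;
}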

Although Core Image here sits on top of OpenGL, its internals are opaque, and it performs worse than the third approach, which uses the GPU directly.

Approach 2: libyuv

The idea is to take the plane base addresses of the input CVPixelBuffer and fill the output CVPixelBuffer's bytes with RGB data. Internally, libyuv presumably does the matrix math with SIMD, so performance is good: about 1-2 ms. This is still a CPU-based approach, though; in our live-streaming scenario, broadcasting at 1080p, CPU usage hovered around 88%.

#import "libyuv.h"   // NV12ToARGB

// Convert an NV12 (YUV 4:2:0 bi-planar) CVPixelBuffer into a cached BGRA CVPixelBuffer.
- (CVPixelBufferRef)convertVideoSampleBufferToBGRAData:(CVImageBufferRef)pixelBuffer {

    // Image width and height in pixels
    size_t pixelWidth = CVPixelBufferGetWidth(pixelBuffer);
    size_t pixelHeight = CVPixelBufferGetHeight(pixelBuffer);

    int src_stride_y = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    int src_stride_uv = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);

    // Keep the input locked until the conversion has read its planes
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLockFlagReadOnly);
    // Y plane of the input buffer
    uint8_t *y_frame = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    // Interleaved UV plane of the input buffer
    uint8_t *uv_frame = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);

    if (!_bgra_pixel_buffer) {
        // Lazily create an empty 32BGRA CVPixelBuffer and reuse it across frames
        NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
        CVPixelBufferCreate(kCFAllocatorDefault,
                            pixelWidth, pixelHeight, kCVPixelFormatType_32BGRA,
                            (__bridge CFDictionaryRef)pixelAttributes, &_bgra_pixel_buffer);
    }

    int bgra_stride = (int)CVPixelBufferGetBytesPerRow(_bgra_pixel_buffer);

    CVReturn result = CVPixelBufferLockBaseAddress(_bgra_pixel_buffer, 0);
    if (result != kCVReturnSuccess) {
        //NSLog(@"Failed to lock base address: %d", result);
        CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLockFlagReadOnly);
        if (NULL != _bgra_pixel_buffer) {
            CFRelease(_bgra_pixel_buffer);
            _bgra_pixel_buffer = NULL; // drop the cache so it gets recreated next time
        }
        return NULL;
    }

    // Base address of the cached BGRA buffer
    uint8_t *rgb_data = (uint8_t *)CVPixelBufferGetBaseAddress(_bgra_pixel_buffer);

    // Let libyuv fill rgb_data, converting NV12 to BGRA
    // (libyuv's "ARGB" is B,G,R,A byte order in memory, i.e. 32BGRA)
    int ret = NV12ToARGB(y_frame, src_stride_y, uv_frame, src_stride_uv,
                         rgb_data, bgra_stride, (int)pixelWidth, (int)pixelHeight);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLockFlagReadOnly);
    CVPixelBufferUnlockBaseAddress(_bgra_pixel_buffer, 0);

    if (ret) {
        //NSLog(@"Error converting NV12 VideoFrame to BGRA: %d", ret);
        if (NULL != _bgra_pixel_buffer) {
            CFRelease(_bgra_pixel_buffer);
            _bgra_pixel_buffer = NULL;
        }
        return NULL;
    }

    return _bgra_pixel_buffer;
}


Approach 3: the GPU (OpenGL ES)

Use OpenGL to wrap the CVPixelBuffer in a Core Video OpenGL ES texture, then draw into a framebuffer. The texture attached to that framebuffer holds the image data, and the CVPixelBuffer backing the texture is the RGB-format output, because the YUV-to-RGB conversion is done in the shader.
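
For reference, the per-pixel work in the fragment shader has roughly the following shape. This is only a sketch in the spirit of GPUImage's kGPUImageYUVFullRangeConversionForLAFragmentShaderString used further down, not a verbatim copy of it:

// Rough sketch of an NV12 fragment shader: Y comes from a LUMINANCE texture,
// CbCr from the .r/.a channels of a LUMINANCE_ALPHA texture, and the conversion
// matrix (e.g. kColorConversion601FullRange) maps the centered YUV vector to RGB.
NSString *const kNV12ToRGBFragmentShaderSketch =
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D luminanceTexture;\n"
    @"uniform sampler2D chrominanceTexture;\n"
    @"uniform mediump mat3 colorConversionMatrix;\n"
    @"void main()\n"
    @"{\n"
    @"    mediump vec3 yuv;\n"
    @"    yuv.x  = texture2D(luminanceTexture, textureCoordinate).r;\n"
    @"    yuv.yz = texture2D(chrominanceTexture, textureCoordinate).ra - vec2(0.5, 0.5);\n"
    @"    gl_FragColor = vec4(colorConversionMatrix * yuv, 1.0);\n"
    @"}";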

Create a GPUImageFramebuffer (which wraps the OpenGL framebuffer and its texture). The texture is created backed by a CVPixelBuffer through the Core Video texture cache, so the texture and the CVPixelBuffer share the same storage:

CVOpenGLESTextureCacheCreateTextureFromImage

This approach performs well, taking about 1-2 ms. Compared with the second approach (libyuv) under the same conditions, CPU usage while broadcasting hovers around 75%.

- (void)generateFramebuffer;
{
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];
    
        glGenFramebuffers(1, &framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
        
        // By default, all framebuffers on iOS 5.0+ devices are backed by texture caches, using one shared cache
        if ([GPUImageContext supportsFastTextureUpload])
        {
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
            CVOpenGLESTextureCacheRef coreVideoTextureCache = [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache];
            // Code originally sourced from http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/
            
            CFDictionaryRef empty; // empty value for attr value.
            CFMutableDictionaryRef attrs;
            empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); // our empty IOSurface properties dictionary
            attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
            CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);
            
            CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault, (int)_size.width, (int)_size.height, kCVPixelFormatType_32BGRA, attrs, &renderTarget);
            if (err)
            {
                NSLog(@"FBO size: %f, %f", _size.width, _size.height);
                NSAssert(NO, @"Error at CVPixelBufferCreate %d", err);
            }
            
            err = CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault, coreVideoTextureCache, renderTarget,
                                                                NULL, // texture attributes
                                                                GL_TEXTURE_2D,
                                                                _textureOptions.internalFormat, // opengl format
                                                                (int)_size.width,
                                                                (int)_size.height,
                                                                _textureOptions.format, // native iOS format
                                                                _textureOptions.type,
                                                                0,
                                                                &renderTexture);
            if (err)
            {
                NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
            }
            
            CFRelease(attrs);
            CFRelease(empty);
            
            glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
            _texture = CVOpenGLESTextureGetName(renderTexture);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, _textureOptions.wrapS);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, _textureOptions.wrapT);
            
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
#endif
        }
        else
        {
            [self generateTexture];

            glBindTexture(GL_TEXTURE_2D, _texture);
            
            glTexImage2D(GL_TEXTURE_2D, 0, _textureOptions.internalFormat, (int)_size.width, (int)_size.height, 0, _textureOptions.format, _textureOptions.type, 0);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _texture, 0);
        }
        
        #ifndef NS_BLOCK_ASSERTIONS
        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        NSAssert(status == GL_FRAMEBUFFER_COMPLETE, @"Incomplete filter FBO: %d", status);
        #endif
        
        glBindTexture(GL_TEXTURE_2D, 0);
    });
}

Below is the GPU-based code that converts a YUV CVPixelBuffer into an RGB CVPixelBuffer:

@interface GPUYUV2RGBBufferTransfer : NSObject

// Convert to BGRA
- (CVPixelBufferRef)convertYUV2BGRAPixelBuffer:(CVPixelBufferRef)pixelBuffer;

@end


@interface GPUYUV2RGBBufferTransfer () {
    GLProgram * nv12Program;
    GLint positionAttribute, textureCoordinateAttribute;
    GLint yTextureUniform, uvTextureUniform;
    GLint yuvConversionMatrixUniform;
    CVOpenGLESTextureRef _yTextureRef;
    CVOpenGLESTextureRef _uvTextureRef;
    GPUImageFramebuffer *outputFramebuffer;
}

@end


@implementation GPUYUV2RGBBufferTransfer



#pragma mark - init

- (instancetype)init {
    self = [super init];
    if (self) {
        [self initYuvConversion];
    }
    return self;
}

- (void)initYuvConversion
{
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];
        self->nv12Program = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImageYUVFullRangeConversionForLAFragmentShaderString];

        if (!self->nv12Program.initialized) {
            [self->nv12Program addAttribute:@"position"];
            [self->nv12Program addAttribute:@"inputTextureCoordinate"];

            if (![self->nv12Program link]) {
                NSAssert(NO, @"Filter shader link failed");
                KWSLogWarn(@"Live:XAV:GPUImageNV12Filter initYuvConversion failed!");
                return;
            }
        }

        self->positionAttribute = [self->nv12Program attributeIndex:@"position"];
        self->textureCoordinateAttribute = [self->nv12Program attributeIndex:@"inputTextureCoordinate"];
        self->yTextureUniform = [self->nv12Program uniformIndex:@"luminanceTexture"];
        self->uvTextureUniform = [self->nv12Program uniformIndex:@"chrominanceTexture"];
        self->yuvConversionMatrixUniform = [self->nv12Program uniformIndex:@"colorConversionMatrix"];

        [GPUImageContext setActiveShaderProgram:self->nv12Program];
    });
}

#pragma mark - public

// Convert to BGRA
- (CVPixelBufferRef)convertYUV2BGRAPixelBuffer:(CVPixelBufferRef)pixelBuffer {
    BOOL newVideoFrame = NO;
    @synchronized (self)
    {
        CVPixelBufferRef imageBuffer = pixelBuffer;
        
        static double nvTimebase = 0;
        if (CACurrentMediaTime() - nvTimebase > 5.f)
        {
            KWSLogInfo(@"live:[GPUImageNV12Filter] pixbuffer status:%@ width:%d height:%d planeCnt:%d",imageBuffer ? @"image OK": @"image Empty", (int)CVPixelBufferGetWidth(imageBuffer), (int)CVPixelBufferGetHeight(imageBuffer), (int)CVPixelBufferGetPlaneCount(imageBuffer));
            nvTimebase = CACurrentMediaTime();
        }
        
        if (CVPixelBufferGetPlaneCount(imageBuffer) > 0)
        {
            newVideoFrame = [self loadTexture:&_yTextureRef pixelBuffer:imageBuffer planeIndex:0 pixelFormat:GL_LUMINANCE];
            newVideoFrame &= [self loadTexture:&_uvTextureRef pixelBuffer:imageBuffer planeIndex:1 pixelFormat:GL_LUMINANCE_ALPHA];
        }
        
        [GPUImageContext setActiveShaderProgram:nv12Program];
        
        int bufferWidth = (int) CVPixelBufferGetWidth(pixelBuffer);
        int bufferHeight = (int) CVPixelBufferGetHeight(pixelBuffer);
        int bytesPerRow = (int) CVPixelBufferGetBytesPerRow(pixelBuffer);

        if (!outputFramebuffer || outputFramebuffer.size.width != bufferWidth || outputFramebuffer.size.height != bufferHeight) {
            outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:CGSizeMake(bufferWidth, bufferHeight) onlyTexture:NO];
        }
        
        [outputFramebuffer activateFramebuffer];
        
        if (newVideoFrame)
        {
            static const GLfloat squareVertices[] = {
                -1.0f, -1.0f,
                1.0f, -1.0f,
                -1.0f,  1.0f,
                1.0f,  1.0f,
            };

            static const GLfloat textureCoordinates[] = {
                0.0f, 0.0f,
                1.0f, 0.0f,
                0.0f, 1.0f,
                1.0f, 1.0f,
            };
           
            [self convertYuvToRgb:kColorConversion601FullRange withVertices:squareVertices textureCoordinates:textureCoordinates];
            
            NSLog(@"newVideoFrame");
        }
      
        return outputFramebuffer.pixelBuffer;
    }
}


#pragma mark - other

- (BOOL)loadTexture:(CVOpenGLESTextureRef *)textureOut
        pixelBuffer:(CVPixelBufferRef)pixelBuffer
         planeIndex:(int)planeIndex
        pixelFormat:(GLenum)pixelFormat {
    
    const int width = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, planeIndex);
    const int height = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, planeIndex);
    if (*textureOut) {
        CFRelease(*textureOut);
        *textureOut = nil;
    }
    CVReturn ret = CVOpenGLESTextureCacheCreateTextureFromImage(
                                                                kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], pixelBuffer, NULL, GL_TEXTURE_2D, pixelFormat, width,
                                                                height, pixelFormat, GL_UNSIGNED_BYTE, planeIndex, textureOut);
    if (ret != kCVReturnSuccess) {
        if (nil != *textureOut) {
            CFRelease(*textureOut);
            *textureOut = nil;
        }
        KWSLogError(@"live:[GPUImageNV12Filter] loadTexture failed!");
        return NO;
    }
    
    NSAssert(CVOpenGLESTextureGetTarget(*textureOut) == GL_TEXTURE_2D, @"Unexpected GLES texture target");
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(CVOpenGLESTextureGetTarget(*textureOut), CVOpenGLESTextureGetName(*textureOut));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    return YES;
}

- (void)convertYuvToRgb:(const GLfloat*)preferredConversion withVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, CVOpenGLESTextureGetName(_yTextureRef));
    glUniform1i(yTextureUniform, 1);
    
    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, CVOpenGLESTextureGetName(_uvTextureRef));
    glUniform1i(uvTextureUniform, 2);
    
    glUniformMatrix3fv(yuvConversionMatrixUniform, 1, GL_FALSE, preferredConversion);
    
    glEnableVertexAttribArray(positionAttribute);
    glEnableVertexAttribArray(textureCoordinateAttribute);
    glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, 0, 0, vertices);
    glVertexAttribPointer(textureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);
    
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableVertexAttribArray(positionAttribute);
    glDisableVertexAttribArray(textureCoordinateAttribute);
    
    glBindTexture(GL_TEXTURE_2D, 0);
    
    // Wait for the GPU to finish rendering so the backing pixel buffer is ready to read
    glFinish();
}

@end

Approach 4: Apple's Accelerate framework (vImage)

In testing, this takes 1-6 ms, fluctuating quite a bit.

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    
    var cgImageFormat = vImage_CGImageFormat(bitsPerComponent: 8,
                                             bitsPerPixel: 32,
                                             colorSpace: nil,
                                             bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue),
                                             version: 0,
                                             decode: nil,
                                             renderingIntent: .defaultIntent)
    
    var destinationBuffer = vImage_Buffer()
    
    private let dataOutputQueue = DispatchQueue(label: "video data queue",
                                                qos: .userInitiated,
                                                attributes: [],
                                                autoreleaseFrequency: .workItem)
    
    let captureSession = AVCaptureSession()
    
    @IBOutlet var imageView: UIImageView!
    
    override func viewDidLoad() {
        super.viewDidLoad()
        
        guard configureYpCbCrToARGBInfo() == kvImageNoError else {
            return
        }
        
        configureSession()
    }
    
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        
        captureSession.startRunning()
    }
    
    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        
        captureSession.stopRunning()
    }
    
    deinit {
        free(destinationBuffer.data)
    }
    
    var infoYpCbCrToARGB = vImage_YpCbCrToARGB()
    
    func configureYpCbCrToARGBInfo() -> vImage_Error {
        var pixelRange = vImage_YpCbCrPixelRange(Yp_bias: 16,
                                                 CbCr_bias: 128,
                                                 YpRangeMax: 235,
                                                 CbCrRangeMax: 240,
                                                 YpMax: 235,
                                                 YpMin: 16,
                                                 CbCrMax: 240,
                                                 CbCrMin: 16)
        
        let error = vImageConvert_YpCbCrToARGB_GenerateConversion(
            kvImage_YpCbCrToARGBMatrix_ITU_R_601_4!,
            &pixelRange,
            &infoYpCbCrToARGB,
            kvImage420Yp8_CbCr8, // must match vImageConvert_420Yp8_CbCr8ToARGB8888 used below
            kvImageARGB8888,
            vImage_Flags(kvImageNoFlags))
        
        return error
    }
    
    func configureSession() {
        
        captureSession.sessionPreset = AVCaptureSession.Preset.photo
        
        guard let backCamera = AVCaptureDevice.default(for: .video) else {
                print("can't discover camera")
                return
        }
        
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            
            captureSession.addInput(input)
        } catch {
            print("can't access camera")
            return
        }
        
        let videoOutput = AVCaptureVideoDataOutput()
        
        videoOutput.setSampleBufferDelegate(self,
                                            queue: dataOutputQueue)
        
        if captureSession.canAddOutput(videoOutput) {
            captureSession.addOutput(videoOutput)
            captureSession.startRunning()
        }
    }
    
    // AVCaptureVideoDataOutputSampleBufferDelegate
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }
        
        CVPixelBufferLockBaseAddress(pixelBuffer,
                                     CVPixelBufferLockFlags.readOnly)
        
        displayYpCbCrToRGB(pixelBuffer: pixelBuffer)
        
        CVPixelBufferUnlockBaseAddress(pixelBuffer,
                                       CVPixelBufferLockFlags.readOnly)
    }
    
    func displayYpCbCrToRGB(pixelBuffer: CVPixelBuffer) {
        let startInterval = CFAbsoluteTimeGetCurrent() * 1000
        assert(CVPixelBufferGetPlaneCount(pixelBuffer) == 2, "Pixel buffer should have 2 planes")
        
        let lumaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
        let lumaWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
        let lumaHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
        let lumaRowBytes = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
        
        var sourceLumaBuffer = vImage_Buffer(data: lumaBaseAddress,
                                             height: vImagePixelCount(lumaHeight),
                                             width: vImagePixelCount(lumaWidth),
                                             rowBytes: lumaRowBytes)
        
        let chromaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)
        let chromaWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 1)
        let chromaHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1)
        let chromaRowBytes = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
        
        var sourceChromaBuffer = vImage_Buffer(data: chromaBaseAddress,
                                               height: vImagePixelCount(chromaHeight),
                                               width: vImagePixelCount(chromaWidth),
                                               rowBytes: chromaRowBytes)
        
        var error = kvImageNoError
        if destinationBuffer.data == nil {
            error = vImageBuffer_Init(&destinationBuffer,
                                      sourceLumaBuffer.height,
                                      sourceLumaBuffer.width,
                                      cgImageFormat.bitsPerPixel,
                                      vImage_Flags(kvImageNoFlags))
            
            guard error == kvImageNoError else {
                return
            }
        }
        
        error = vImageConvert_420Yp8_CbCr8ToARGB8888(&sourceLumaBuffer,
                                                     &sourceChromaBuffer,
                                                     &destinationBuffer,
                                                     &infoYpCbCrToARGB,
                                                     nil,
                                                     255,
                                                     vImage_Flags(kvImagePrintDiagnosticsToConsole))
        let endTimeInterval = CFAbsoluteTimeGetCurrent() * 1000
        print("transfer time = \(endTimeInterval - startInterval) ms")
        
        guard error == kvImageNoError else {
            return
        }
        
        let cgImage = vImageCreateCGImageFromBuffer(&destinationBuffer,
                                                    &cgImageFormat,
                                                    nil,
                                                    nil,
                                                    vImage_Flags(kvImageNoFlags),
                                                    &error)
        
        if let cgImage = cgImage, error == kvImageNoError {
            DispatchQueue.main.async {
                self.imageView.image = UIImage(cgImage: cgImage.takeRetainedValue())
            }
        }
    }
    
}

Of course, the best option is to talk to the YouTu team and have their SDK accept both YUV and RGB.

Since they initially supported only RGB, we even suspected they were running face detection directly on RGB data. Because the Y (luma) plane of YUV is already enough for face detection, we also suggested that they run detection directly on the Y plane, which would improve performance even further; a rough sketch of what that would look like on the caller's side follows.
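
If the SDK ever accepts 8-bit grayscale input, the conversion could be skipped entirely by handing over just the luma plane. A minimal sketch, where DetectFacesOnLuma is a hypothetical detector entry point (it is not the YouTu API):

// Hypothetical grayscale detector prototype, for illustration only.
extern void DetectFacesOnLuma(const uint8_t *gray, int width, int height, int stride);

// Pass only the Y (luma) plane of an NV12 buffer to a grayscale face detector.
static void DetectOnLumaPlane(CVPixelBufferRef nv12Buffer) {
    CVPixelBufferLockBaseAddress(nv12Buffer, kCVPixelBufferLockFlagReadOnly);
    const uint8_t *luma = (const uint8_t *)CVPixelBufferGetBaseAddressOfPlane(nv12Buffer, 0);
    int width  = (int)CVPixelBufferGetWidthOfPlane(nv12Buffer, 0);
    int height = (int)CVPixelBufferGetHeightOfPlane(nv12Buffer, 0);
    int stride = (int)CVPixelBufferGetBytesPerRowOfPlane(nv12Buffer, 0);
    DetectFacesOnLuma(luma, width, height, stride);
    CVPixelBufferUnlockBaseAddress(nv12Buffer, kCVPixelBufferLockFlagReadOnly);
}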
