iOS: Filter Processing with CVPixelBufferRef

Author: 陆离o | Published 2021-01-25 00:01

I. Introduction

CVPixelBufferRef shows up constantly in iOS audio/video development. Much like AVFrame in ffmpeg, it is a data structure that holds a frame's raw image data.
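
A minimal sketch (not from the original post) of inspecting that raw data; every call here is standard CoreVideo API:

    #import <Foundation/Foundation.h>
    #import <CoreVideo/CoreVideo.h>

    // Read out the basic properties of a CVPixelBufferRef.
    static void InspectPixelBuffer(CVPixelBufferRef pixelBuffer) {
        CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
        size_t width       = CVPixelBufferGetWidth(pixelBuffer);
        size_t height      = CVPixelBufferGetHeight(pixelBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
        OSType format      = CVPixelBufferGetPixelFormatType(pixelBuffer); // e.g. kCVPixelFormatType_32BGRA
        void  *base        = CVPixelBufferGetBaseAddress(pixelBuffer);     // raw pixels, like AVFrame's data
        NSLog(@"%zux%zu, %zu bytes/row, base address %p, format %u",
              width, height, bytesPerRow, base, (unsigned)format);
        CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    }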

In some scenarios, handing a CVPixelBufferRef to a filter SDK is enough to change the rendered filter effect, even though the SDK never returns a processed CVPixelBufferRef; the scenario pictured below is one example.

[Figure: CVPixelBuffer usage scenario]

This works when two conditions hold (a minimal usage sketch follows the list):

1. The filter SDK processes the CVPixelBufferRef synchronously.
2. The CVPixelBufferRef outside the SDK and the one inside it share the same block of memory.
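
A minimal sketch of the calling pattern these two conditions enable; renderItemsToPixelBuffer: is the demo's entry point shown later in section IV:

    // `input` is assumed to be a BGRA, IOSurface-backed CVPixelBufferRef.
    CVPixelBufferRef output = [[HYRenderManager shareManager] renderItemsToPixelBuffer:input];
    // Synchronous processing + shared memory: the same buffer now holds the
    // filtered pixels, so many callers never even look at the return value.
    NSAssert(output == input, @"in-place filtering returns the original buffer");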

II. Implementation Flow

[Figure: implementation flow diagram]
1. Feed the original CVPixelBufferRef into the GPUImage filter chain; the output is the processed texture A.
2. Create texture B from the original CVPixelBufferRef and attach it to a frame buffer object as its texture (color) attachment.
3. Draw texture A into the frame buffer object; this updates the contents of texture B and, with it, the image data of the CVPixelBufferRef.
4. Output the filtered CVPixelBufferRef. Its memory address is identical to the original's; this zero-copy sharing requires an IOSurface-backed buffer, as sketched below.
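
A sketch, not from the original post, of creating a compatible BGRA pixel buffer for step 4 (the 1280x720 size is an arbitrary assumption):

    NSDictionary *attrs = @{
        (id)kCVPixelBufferIOSurfacePropertiesKey   : @{},   // IOSurface-backed, shareable with GL
        (id)kCVPixelBufferOpenGLESCompatibilityKey : @YES,
    };
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, 1280, 720,
                                       kCVPixelFormatType_32BGRA,
                                       (__bridge CFDictionaryRef)attrs,
                                       &pixelBuffer);
    NSAssert(ret == kCVReturnSuccess, @"CVPixelBufferCreate failed: %d", ret);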

III. Key Code

1. Two ways to create a texture object from a CVPixelBufferRef:

The CoreVideo approach: this creates a CVOpenGLESTextureRef texture whose storage is shared with the pixel buffer; the GL texture id is then obtained with CVOpenGLESTextureGetName(texture).

    - (GLuint)convertRGBPixelBufferToTexture:(CVPixelBufferRef)pixelBuffer {
        if (!pixelBuffer) {
            return 0;
        }
        CGSize textureSize = CGSizeMake(CVPixelBufferGetWidth(pixelBuffer),
                                        CVPixelBufferGetHeight(pixelBuffer));
        CVOpenGLESTextureRef texture = NULL;
        // Zero-copy mapping: the new texture's storage is the pixel buffer itself.
        CVReturn status = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                                       [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache],
                                                                       pixelBuffer,
                                                                       NULL,
                                                                       GL_TEXTURE_2D,
                                                                       GL_RGBA,
                                                                       textureSize.width,
                                                                       textureSize.height,
                                                                       GL_BGRA,
                                                                       GL_UNSIGNED_BYTE,
                                                                       0,
                                                                       &texture);
        if (status != kCVReturnSuccess || !texture) {
            NSLog(@"Can't create texture");
            return 0;
        }
        // Keep a reference so the texture stays alive until cleanUpTextures releases it.
        self.renderTexture = texture;
        return CVOpenGLESTextureGetName(texture);
    }
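
The texture cache above is GPUImage's shared one. Without GPUImage, a cache can be created directly; a short sketch, where eaglContext stands for your existing EAGLContext:

    CVOpenGLESTextureCacheRef textureCache = NULL;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);
    // ... create textures from pixel buffers as in the method above ...
    // Once the CVOpenGLESTextureRefs have been released, flush so the cache
    // can recycle their backing stores:
    CVOpenGLESTextureCacheFlush(textureCache, 0);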
    

The OpenGL approach: create a texture object, then upload the image data from the CVPixelBufferRef into it with glTexImage2D. The snippet below is the upload step; a fuller sketch follows it.

        // Note: this path copies the bytes into GL-owned texture storage.
        glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
        glTexImage2D(GL_TEXTURE_2D, 0, _pixelFormat == GPUPixelFormatRGB ? GL_RGB : GL_RGBA,
                     (int)uploadedImageSize.width, (int)uploadedImageSize.height, 0,
                     (GLint)_pixelFormat, (GLenum)_pixelType, bytesToUpload);
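
For context, a self-contained sketch of this path (texture creation plus upload). Unlike the texture-cache path, glTexImage2D copies the bytes, so later GPU writes to the texture do not propagate back to the CVPixelBufferRef; the sketch also assumes bytesPerRow equals width * 4:

    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                 (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                 0, GL_BGRA, GL_UNSIGNED_BYTE,   // BGRA upload via GL_APPLE_texture_format_BGRA8888
                 CVPixelBufferGetBaseAddress(pixelBuffer));
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);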
    

2. The demo uses GPUImageRawDataInput as the head of the filter chain, feeding in the CVPixelBufferRef's image data, and GPUImageTextureOutput as its tail, producing the filtered texture id. A sketch of wiring the chain together follows the code.

    - (CVPixelBufferRef)renderPixelBuffer:(CVPixelBufferRef)pixelBuffer {
        if (!pixelBuffer) {
            return NULL;
        }
        // Defensive retain while the buffer is used on the video processing queue.
        CVPixelBufferRetain(pixelBuffer);
        runSynchronouslyOnVideoProcessingQueue(^{
            [GPUImageContext useImageProcessingContext];

            CGSize size = CGSizeMake(CVPixelBufferGetWidth(pixelBuffer),
                                     CVPixelBufferGetHeight(pixelBuffer));

            // Feed the raw pixels into the head of the filter chain.
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);
            void *bytes = CVPixelBufferGetBaseAddress(pixelBuffer);
            [self.dataInput updateDataFromBytes:bytes size:size];
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

            // Run the chain, then draw the filtered texture back into the buffer.
            [self.dataInput processData];
            GLuint textureId = self.textureOutput.texture;
            [self convertTextureId:textureId textureSize:size pixelBuffer:pixelBuffer];
        });
        CVPixelBufferRelease(pixelBuffer);
        // Same buffer, same memory address: only its contents were rewritten.
        return pixelBuffer;
    }
    
    // GPUImageTextureOutputDelegate callback: hand the texture back so the output can reuse it.
    - (void)newFrameReadyFromTextureOutput:(GPUImageTextureOutput *)callbackTextureOutput {
        [self.textureOutput doneWithTexture];
    }
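
For completeness, a sketch of how such a chain might be wired up once at initialization (the 1280x720 size and the sepia filter are arbitrary stand-ins; dataInput/textureOutput match the properties used above):

    // NULL bytes just allocates texture storage; real frames arrive later
    // through updateDataFromBytes:size:.
    self.dataInput = [[GPUImageRawDataInput alloc] initWithBytes:NULL
                                                            size:CGSizeMake(1280, 720)
                                                     pixelFormat:GPUPixelFormatBGRA];
    GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
    self.textureOutput = [[GPUImageTextureOutput alloc] init];
    self.textureOutput.delegate = self; // receives newFrameReadyFromTextureOutput:

    // Chain: raw data in -> filter -> texture out.
    [self.dataInput addTarget:filter];
    [filter addTarget:self.textureOutput];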
    

3. Create a texture from the original CVPixelBufferRef and attach it to a frame buffer object as its texture attachment, then draw the filtered texture into that framebuffer (cleanUpTextures is sketched after the code):

    - (CVPixelBufferRef)convertTextureId:(GLuint)textureId
                             textureSize:(CGSize)textureSize
                             pixelBuffer:(CVPixelBufferRef)pixelBuffer {

        [GPUImageContext useImageProcessingContext];
        [self cleanUpTextures];

        // Temporary FBO whose color attachment is backed by the pixel buffer.
        GLuint frameBuffer;
        glGenFramebuffers(1, &frameBuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);

        GLuint targetTextureID = [self convertRGBPixelBufferToTexture:pixelBuffer];
        glBindTexture(GL_TEXTURE_2D, targetTextureID);
        // Do NOT call glTexImage2D here: the texture's storage is already the pixel
        // buffer's IOSurface, and re-specifying it would detach that shared storage.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, targetTextureID, 0);
        glViewport(0, 0, textureSize.width, textureSize.height);

        // Drawing into the FBO rewrites the pixel buffer's contents in place.
        [self renderTextureWithId:textureId];

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &frameBuffer);

        // Submit the GL commands so the new contents are visible to later consumers.
        glFlush();

        return pixelBuffer;
    }
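
cleanUpTextures is not shown in the post; a plausible implementation releases the previous frame's texture and flushes GPUImage's texture cache so its storage can be recycled:

    - (void)cleanUpTextures {
        if (self.renderTexture) {
            CFRelease(self.renderTexture);   // balances the create in convertRGBPixelBufferToTexture:
            self.renderTexture = NULL;
        }
        CVOpenGLESTextureCacheFlush([[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], 0);
    }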
    

Activate and bind the filtered texture, hand the vertex and texture coordinates to the vertex shader, and draw (the one-time shader program setup is sketched after the code):

    - (void)renderTextureWithId:(GLuint)textureId {
        [GPUImageContext setActiveShaderProgram:self->normalProgram];

        // Bind the filtered texture to texture unit 0.
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, textureId);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glUniform1i(self->inputTextureUniform, 0);

        // Full-screen quad drawn as a triangle strip.
        static const GLfloat squareVertices[] = {
            -1.0f, -1.0f,
             1.0f, -1.0f,
            -1.0f,  1.0f,
             1.0f,  1.0f,
        };
        glVertexAttribPointer(self->positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
        glVertexAttribPointer(self->textureCoordinateAttribute, 2, GL_FLOAT, GL_FALSE, 0, [GPUImageFilter textureCoordinatesForRotation:kGPUImageNoRotation]);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }
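
The post does not show how normalProgram and its handles are created. A sketch of the one-time setup, following the pattern GPUImageFilter itself uses, with GPUImage's stock pass-through shaders:

    normalProgram = [[GPUImageContext sharedImageProcessingContext]
                        programForVertexShaderString:kGPUImageVertexShaderString
                                fragmentShaderString:kGPUImagePassthroughFragmentShaderString];
    if (!normalProgram.initialized) {
        [normalProgram addAttribute:@"position"];
        [normalProgram addAttribute:@"inputTextureCoordinate"];
        [normalProgram link];
    }
    positionAttribute          = [normalProgram attributeIndex:@"position"];
    textureCoordinateAttribute = [normalProgram attributeIndex:@"inputTextureCoordinate"];
    inputTextureUniform        = [normalProgram uniformIndex:@"inputImageTexture"];

    // glVertexAttribPointer above only takes effect for enabled attribute arrays.
    [GPUImageContext setActiveShaderProgram:normalProgram];
    glEnableVertexAttribArray(positionAttribute);
    glEnableVertexAttribArray(textureCoordinateAttribute);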
    

IV. Summary

Once these properties of CVPixelBufferRef are understood, a short-video SDK can be built around modular, pluggable filter components that integrate quickly into video capture, editing, transcoding, and similar pipelines.

The demo also covers two simple scenarios:

1. Adding a filter during video capture: take the CVPixelBufferRef out of GPUImageVideoCamera's delegate method and run it through the filter (a capture-setup sketch follows the code).

    #pragma mark - GPUImageVideoCameraDelegate
    - (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        // Filter the camera frame in place before it continues down the pipeline.
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        [[HYRenderManager shareManager] renderItemsToPixelBuffer:pixelBuffer];
    }
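
A sketch of the capture setup this delegate method implies (the preset and orientation are assumptions):

    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                            cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    videoCamera.delegate = self;        // receives willOutputSampleBuffer:
    [videoCamera startCameraCapture];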
    
[Figure: adding a filter during video capture]

2. Adding a filter during video playback: while an AVPlayer is playing, take the CVPixelBufferRef out of a method of the class implementing the AVVideoCompositing protocol and run it through the filter (a compositor sketch follows the code).

    #pragma mark - EditorCompositionInstructionDelegete
    - (CVPixelBufferRef)renderPixelBuffer:(CVPixelBufferRef)pixelBuffer
    {
        // In-place filtering: the returned buffer is the same object that came in.
        return [[HYRenderManager shareManager] renderItemsToPixelBuffer:pixelBuffer];
    }
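
A sketch of where that delegate call sits inside a custom AVVideoCompositing compositor; self.renderDelegate is an assumed property conforming to the demo's EditorCompositionInstructionDelegete protocol:

    - (void)startVideoCompositionRequest:(AVAsynchronousVideoCompositionRequest *)request {
        CMPersistentTrackID trackID = request.sourceTrackIDs.firstObject.intValue;
        CVPixelBufferRef source = [request sourceFrameByTrackID:trackID];
        // In-place filtering: the "composed" frame is the source buffer itself.
        CVPixelBufferRef filtered = [self.renderDelegate renderPixelBuffer:source];
        [request finishWithComposedVideoFrame:filtered];
    }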
    
[Figure: adding a filter during video playback]

Source code

GitHub: demo project
Comments, private messages, questions, and stars are all welcome. Thanks!

References:
在 iOS 中给视频添加滤镜 (Adding filters to video on iOS)
深入理解 CVPixelBufferRef (Understanding CVPixelBufferRef in depth)
