CVPixelBufferRef -> Filter -> CVPixelBufferRef

Author: BeautyWang | Published 2016-11-18 09:50

    Because I've been working on live streaming, I recently had the idea of building my own filters. I experimented with quite a few approaches; below I'll share what I learned from around the web, combined with my own thinking.

    First, when it comes to filters, the famous iOS open-source library GPUImage naturally comes to mind. Anyone familiar with GPUImage knows that you implement a filter by setting up a GPUImageVideoCamera as the capture source and adding a filter as its target. But our streaming is built on Qiniu's SDK, and after going through the Qiniu documentation I found no way to plug in a custom camera. So the only option left is to process the CVPixelBufferRef directly.
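    For reference, here's the usual GPUImage chain that is off the table in this setup, since Qiniu's SDK owns the camera. A minimal sketch; the sepia filter and the preview view are placeholder choices of mine:

    #import <GPUImage/GPUImage.h>

    // The standard GPUImage pipeline: camera -> filter -> view.
    GPUImageVideoCamera *camera =
            [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                                cameraPosition:AVCaptureDevicePositionFront];
    GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
    GPUImageView *previewView = [[GPUImageView alloc] initWithFrame:CGRectZero];

    [camera addTarget:filter];      // the filter receives every camera frame
    [filter addTarget:previewView]; // filtered frames go to the on-screen view
    [camera startCameraCapture];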

    Results first: processing with Core Image:

    CVPixelBufferRef -> CIImage -> CIFilter -> CIImage -> CVPixelBufferRef

    // Create the context once up front; building a CIContext per frame is expensive.
    _coreImageContext = [CIContext contextWithEAGLContext:self.openGLESContext options:options];

    - (CVPixelBufferRef)coreImageHandle:(CVPixelBufferRef)pixelBuffer
    {
            CFAbsoluteTime elapsedTime, startTime = CFAbsoluteTimeGetCurrent();
            // Wraps the pixel buffer without copying its bytes.
            CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
            [_coreImageFilter setValue:inputImage forKey:kCIInputImageKey];
            CIImage *outputImage = [_coreImageFilter outputImage];
            elapsedTime = CFAbsoluteTimeGetCurrent() - startTime;
            NSLog(@"Core Image frame time: %f", elapsedTime * 1000.0);
            // Render the filtered image back into the same buffer, in place.
            [_coreImageContext render:outputImage toCVPixelBuffer:pixelBuffer];
            return pixelBuffer;
    }
    

    _coreImageContext here renders with OpenGL ES; it could also run on the CPU, but for performance the GPU is the way to go. With this setup a frame takes roughly 1 ms to process, easily enough to sustain 24 fps.
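    The snippet above leaves out how options, self.openGLESContext, and _coreImageFilter are set up. A minimal sketch, assuming a sepia filter; the filter choice and the disabled color management are my assumptions, not from the original:

    // GPU-backed context: rendering stays on the GPU end to end.
    EAGLContext *glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    self.openGLESContext = glContext;
    // Skipping color management saves time on every frame.
    NSDictionary *options = @{ kCIContextWorkingColorSpace : [NSNull null] };
    _coreImageContext = [CIContext contextWithEAGLContext:glContext options:options];

    // Any CIFilter works here; sepia is just an example.
    _coreImageFilter = [CIFilter filterWithName:@"CISepiaTone"];
    [_coreImageFilter setValue:@0.8 forKey:kCIInputIntensityKey];

    // The CPU-rendered alternative mentioned above, for comparison (much slower):
    // _coreImageContext = [CIContext contextWithOptions:@{ kCIContextUseSoftwareRenderer : @YES }];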

    First approach:

    Extract a texture from the CVPixelBufferRef, hand it to a GPUImage filter, then turn the filtered texture back into a CVPixelBufferRef and pass that to Qiniu. My OpenGL foundation wasn't strong enough, so this didn't succeed; shelved for now.
    I modeled my code on GPUImageVideoCamera's
    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
    method to turn the pixel buffer into a texture. Note that the buffer Qiniu's delegate hands back has pixelFormat = BGRA, so there is only the single BGRA plane to extract (CVPixelBufferGetPlaneCount(cameraFrame) reports no separate planes), unlike the bi-planar YUV path GPUImageVideoCamera also handles. A sketch of the texture upload follows. (To be continued.)
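    A minimal sketch of that texture upload, assuming the EAGL context is current and a texture cache has been created against it; the _textureCache name and the omitted error handling are mine:

    // Created once, e.g. in setup:
    // CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, glContext, NULL, &_textureCache);

    // Zero-copy upload of the BGRA pixel buffer into a GL_TEXTURE_2D.
    CVOpenGLESTextureRef texture = NULL;
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                                _textureCache,
                                                                pixelBuffer,
                                                                NULL,
                                                                GL_TEXTURE_2D,
                                                                GL_RGBA,          // internal format
                                                                (GLsizei)width,
                                                                (GLsizei)height,
                                                                GL_BGRA,          // matches the buffer's pixel format
                                                                GL_UNSIGNED_BYTE,
                                                                0,                // plane 0: BGRA has only one
                                                                &texture);
    if (err == kCVReturnSuccess) {
            glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
            // ...run the GPUImage filter against this texture, then read back...
            CFRelease(texture);
    }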

    Second approach:

    Process the RGBA bytes directly on the CPU: the performance cost is far too high, so I dropped it.
    Even cut down to a grayscale conversion, CPU usage sat at 50-80%. The phone ran way too hot.

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CFAbsoluteTime elapsedTime, startTime = CFAbsoluteTimeGetCurrent();
    unsigned char *data = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
    size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
    // Rows can be padded, so walk them with the real stride
    // rather than assuming width * 4 bytes per row.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    for (size_t row = 0; row < bufferHeight; row++)
    {
            unsigned char *pixel = data + row * bytesPerRow;
            for (size_t col = 0; col < bufferWidth; col++, pixel += 4)
            {
                    // The buffer is BGRA: byte 0 is blue, 1 is green, 2 is red.
                    UInt8 b_pixel = pixel[0];
                    UInt8 g_pixel = pixel[1];
                    UInt8 r_pixel = pixel[2];
                    // Gray = R*0.299 + G*0.587 + B*0.114; the weights sum to 1,
                    // so the result can never exceed 255 and needs no clamping.
                    UInt8 gray = (UInt8)(r_pixel * 0.299 + g_pixel * 0.587 + b_pixel * 0.114);
                    pixel[0] = gray;
                    pixel[1] = gray;
                    pixel[2] = gray;
            }
    }
    elapsedTime = CFAbsoluteTimeGetCurrent() - startTime;
    NSLog(@"CPU frame time: %f", elapsedTime * 1000.0);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    

    Third approach:

    The Core Image pipeline from the results at the top. Wiring it into the streaming callback looks like the sketch below.
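    A minimal sketch of the call site, assuming Qiniu's frame callback; the exact delegate method name varies by PLCameraStreamingKit version, so treat the signature as an assumption:

    // Assumed Qiniu delegate callback: filter the frame in place and hand it back.
    - (CVPixelBufferRef)cameraStreamingSession:(PLCameraStreamingSession *)session
                  cameraSourceDidGetPixelBuffer:(CVPixelBufferRef)pixelBuffer
    {
            return [self coreImageHandle:pixelBuffer];
    }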
