I recently needed to add a beauty-filter feature to a live-streaming project. After evaluating many SDKs and open-source projects (视决, 涂图, Qiniu, Kingsoft Cloud, VideoCore, and others) and weighing cost, visual quality, and how invasive each is to the project, I settled on BeautifyFaceDemo, a beauty filter built on GPUImage.
The filter's code and implementation notes are available in the BeautifyFace GitHub repository and in the author 琨君's posts on 简书 (Jianshu).
Integrating GPUImageBeautifyFilter and the GPUImage framework
First you need GPUImage itself in the project; looking at the current iOS ecosystem, the vast majority of beauty-filter solutions (easily over 90%) are built on this framework.
Replace the project's original AVCaptureDevice setup with GPUImageVideoCamera, and delete the parts that GPUImage now handles for you, such as AVCaptureSession/AVCaptureDeviceInput/AVCaptureVideoDataOutput. Then adjust the related logic for lifecycle, camera switching, and portrait/landscape rotation so behavior stays consistent before and after the migration.
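For reference, here is a minimal sketch of the orientation, mirroring, camera-switching, and lifecycle handling mentioned above, using GPUImageVideoCamera's own API (the surrounding method names are illustrative, but rotateCamera, pauseCameraCapture, resumeCameraCapture, and the two properties are real GPUImage API):

```objectivec
- (void)configureCamera {
    // Orientation/mirroring that the old AVCaptureConnection code handled:
    self.videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    self.videoCamera.horizontallyMirrorFrontFacingCamera = YES;
}

- (void)switchCamera {
    // Replaces the old AVCaptureSession begin/commitConfiguration dance.
    [self.videoCamera rotateCamera];
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    // Pause instead of tearing the session down; resume with
    // -resumeCameraCapture in viewWillAppear:.
    [self.videoCamera pauseCameraCapture];
}
```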
Declare the properties you need
@property (nonatomic, strong) GPUImageVideoCamera *videoCamera;
//the view shown on screen
@property (nonatomic, strong) GPUImageView *filterView;
//the BeautifyFace beauty filter
@property (nonatomic, strong) GPUImageBeautifyFilter *beautifyFilter;
Then initialize them
self.sessionPreset = AVCaptureSessionPreset1280x720;
self.videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:self.sessionPreset cameraPosition:AVCaptureDevicePositionBack];
self.filterView = [[GPUImageView alloc] init];
[self.view insertSubview:self.filterView atIndex:1]; //frame setup omitted
//I added an initializer to GPUImageBeautifyFilter to set the beauty intensity
self.beautifyFilter = [[GPUImageBeautifyFilter alloc] initWithIntensity:0.6];
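Note that initWithIntensity: is not part of the stock GPUImageBeautifyFilter; it is the initializer I added. A sketch of one way to add it (the mapping from intensity to the internal bilateral filter's parameter is an assumption, not the exact code; distanceNormalizationFactor is the real GPUImageBilateralFilter tuning property):

```objectivec
// In GPUImageBeautifyFilter.h (added declaration)
- (instancetype)initWithIntensity:(CGFloat)intensity;

// In GPUImageBeautifyFilter.m (hypothetical mapping):
// 0.0 = no smoothing, 1.0 = strongest smoothing.
- (instancetype)initWithIntensity:(CGFloat)intensity {
    if (self = [self init]) {
        // bilateralFilter is the GPUImageBilateralFilter this group already owns.
        bilateralFilter.distanceNormalizationFactor = 8.0 - 4.0 * intensity;
    }
    return self;
}
```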
Wire the beauty filter into the chain ahead of filterView
[self.videoCamera addTarget:self.beautifyFilter];
[self.beautifyFilter addTarget:self.filterView];
Then call startCameraCapture and you can see the effect
[self.videoCamera startCameraCapture];
At this point only the on-screen preview carries the filter effect; a live-streaming app still needs to output a video stream with the beauty effect applied.
Outputting a video stream with the beauty effect
I hit a pitfall at the start of the integration. The original logic obtained raw frames by implementing the AVCaptureVideoDataOutputSampleBufferDelegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
GPUImageVideoCamera provides a similar delegate:
@protocol GPUImageVideoCameraDelegate <NSObject>
@optional
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer;
@end
But after switching over, the output stream was still the unfiltered image. Reading the implementation confirmed the suspicion: GPUImageVideoCameraDelegate simply hands back the data straight from AVCaptureVideoDataOutputSampleBufferDelegate, before any filter has run. So to output a filtered stream you have to turn to GPUImageRawDataOutput:
CGSize outputSize = {720, 1280};
GPUImageRawDataOutput *rawDataOutput = [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(outputSize.width, outputSize.height) resultsInBGRAFormat:YES];
[self.beautifyFilter addTarget:rawDataOutput];
This GPUImageRawDataOutput effectively serves as beautifyFilter's output sink: inside the block passed to setNewFrameAvailableBlock you can read the frame data with the filter applied.
__weak GPUImageRawDataOutput *weakOutput = rawDataOutput;
__weak typeof(self) weakSelf = self;
[rawDataOutput setNewFrameAvailableBlock:^{
    __strong GPUImageRawDataOutput *strongOutput = weakOutput;
    [strongOutput lockFramebufferForReading];

    // The filtered pixel data is available here
    GLubyte *outputBytes = [strongOutput rawBytesForImage];
    NSInteger bytesPerRow = [strongOutput bytesPerRowInOutput];
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, outputSize.width, outputSize.height, kCVPixelFormatType_32BGRA, outputBytes, bytesPerRow, nil, nil, nil, &pixelBuffer);

    // From here the frame can be hardware-encoded with VideoToolbox and sent out over RTMP
    [weakSelf encodeWithCVPixelBufferRef:pixelBuffer];

    [strongOutput unlockFramebufferAfterReading];
    CFRelease(pixelBuffer);
}];
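The encodeWithCVPixelBufferRef: call above is where the hardware encoding happens. A minimal VideoToolbox sketch, assuming _compressionSession and _frameCount ivars, with error handling omitted and the output callback left as a stub:

```objectivec
#import <VideoToolbox/VideoToolbox.h>

static void compressionOutputCallback(void *refcon, void *sourceFrameRefCon,
                                      OSStatus status, VTEncodeInfoFlags infoFlags,
                                      CMSampleBufferRef sampleBuffer) {
    // Extract the H.264 NAL units from sampleBuffer here and pack them
    // into RTMP/FLV tags for transmission.
}

- (void)setupEncoder {
    VTCompressionSessionCreate(kCFAllocatorDefault, 720, 1280,
                               kCMVideoCodecType_H264,
                               NULL, NULL, NULL,
                               compressionOutputCallback,
                               (__bridge void *)self,
                               &_compressionSession);
    VTSessionSetProperty(_compressionSession,
                         kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    VTCompressionSessionPrepareToEncodeFrames(_compressionSession);
}

- (void)encodeWithCVPixelBufferRef:(CVPixelBufferRef)pixelBuffer {
    CMTime pts = CMTimeMake(_frameCount++, 30); // 30 fps timebase
    VTCompressionSessionEncodeFrame(_compressionSession, pixelBuffer, pts,
                                    kCMTimeInvalid, NULL, NULL, NULL);
}
```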
Remaining issues
Compared against other products, GPUImageBeautifyFilter's skin smoothing looks most similar to Huajiao's (花椒). BeautifyFace uses bilateral filtering here, while Huajiao appears to use Gaussian blur. Compared with inke (印客), the whitening effect is mediocre.
There are also some performance problems:
1. After calling setNewFrameAvailableBlock, many device models run at exactly 15 fps, no more and no less.
2. On the iPhone 6s generation the device gets very hot; the frame rate can reach 30 fps but is unstable.
Update (8-13)
-
On the performance problem: replacing the GPUImageCannyEdgeDetectionFilter used inside the integrated beauty filter (BeautifyFace) with GPUImageSobelEdgeDetectionFilter brings a big improvement, with nearly identical visual results; after extended testing, the 6s no longer produced high-temperature warnings. (The swap is trivial: just change the class name and variable name in two places.)
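The swap inside the BeautifyFace source amounts to changing the class at its two mentions (the variable name shown is illustrative; it may differ in the actual source):

```objectivec
// Before: Canny runs a multi-pass pipeline (blur, gradients, non-maximum
// suppression, hysteresis), which is expensive per frame.
// GPUImageCannyEdgeDetectionFilter *edgeFilter =
//         [[GPUImageCannyEdgeDetectionFilter alloc] init];

// After: Sobel is a single-pass gradient detector, much cheaper and
// visually close enough for the edge mask used here.
GPUImageSobelEdgeDetectionFilter *edgeFilter =
        [[GPUImageSobelEdgeDetectionFilter alloc] init];
```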
-
Sharing a bug: I recently found that with beauty enabled, memory was not released after closing the broadcast. Analysis showed that the block passed to GPUImageRawDataOutput's setNewFrameAvailableBlock was still keeping self alive, so the fix is to detach the GPUImageRawDataOutput.
Here is the original release code:
[self.videoCamera stopCameraCapture];
[self.videoCamera removeInputsAndOutputs];
[self.videoCamera removeAllTargets];
I initially assumed that calling removeAllTargets on the camera would also release the filter attached to it, along with the filter's output, but in practice the camera does not 'helpfully' remove the filter's own targets, so one more line is needed:
[self.beautifyFilter removeAllTargets]; //fixes memory not being released with beauty enabled
With beauty disabled, the output is attached directly to the camera, so removeAllTargets on the camera alone is enough;
with beauty enabled, the output is attached to the filter, so both the camera and the filter need removeAllTargets.
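Putting the pieces together, a teardown that is safe whether or not beauty is enabled might look like:

```objectivec
- (void)teardownCapture {
    [self.videoCamera stopCameraCapture];

    // Detach the rawDataOutput from the filter first; this drops the
    // newFrameAvailableBlock and breaks its retain on self.
    [self.beautifyFilter removeAllTargets];

    // Then detach the filter (or, with beauty off, the output) from the camera.
    [self.videoCamera removeAllTargets];
    [self.videoCamera removeInputsAndOutputs];
}
```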
Reader comments
/** Callback notifying the external filter to process a frame */
- (int)onProcess:(AlivcLivePusher *)pusher texture:(int)texture textureWidth:(int)width textureHeight:(int)height extra:(long)extra;
This is Alibaba Cloud's external-filter processing callback. How should the texture be processed again here? All the sample code I have seen adds a BeautifyFilter to a GPUImageVideoCamera.
bool RCDVideoFrameObserver::onCaptureVideoFrame(agora::media::IVideoFrameObserver::VideoFrame &videoFrame)
{
    CGSize size = CGSizeMake(videoFrame.width, videoFrame.height);
    mInput = [[GPUImageRawDataInput alloc] initWithBytes:bgra size:size];
    filter = [[GPUImageBeautifyFilter alloc] init];
    mOutput = [[GPUImageRawDataOutput alloc] initWithImageSize:size resultsInBGRAFormat:YES];
    [mInput addTarget:filter];
    [filter addTarget:mOutput];

    __block CVPixelBufferRef pixelBuffer = NULL;
    __weak GPUImageRawDataOutput *weakOutput = mOutput;
    mOutput.newFrameAvailableBlock = ^{
        __strong GPUImageRawDataOutput *strongOutput = weakOutput;
        [strongOutput lockFramebufferForReading];
        GLubyte *outputBytes = [strongOutput rawBytesForImage];
        NSInteger bytesPerRow = [strongOutput bytesPerRowInOutput];
        NSLog(@"Bytes per row: %ld", (unsigned long)bytesPerRow);
        CVReturn ret = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, 640, 480, kCVPixelFormatType_32BGRA, outputBytes, bytesPerRow, nil, nil, nil, &pixelBuffer);
        if (ret != kCVReturnSuccess) {
            NSLog(@"status %d", ret);
        }
        [strongOutput unlockFramebufferAfterReading];
        CFRelease(pixelBuffer);
    };
    [mInput processData];
    libyuv::ARGBToI420(bgra, bgra_stride, (uint8_t *)videoFrame.yBuffer, videoFrame.yStride, (uint8_t *)videoFrame.uBuffer, videoFrame.uStride, (uint8_t *)videoFrame.vBuffer, videoFrame.vStride, videoFrame.width, videoFrame.height);
    return true;
}
Is there anything wrong with my handling here?
The hardware-encoding step then has a problem as well.
Why create a separate block of memory instead of using the one GPUImageFramebuffer already holds?
[rawDataInput updateDataFromBytes:outputBytes size:size];
[rawDataInput processData];
[rawDataInput notifyTargetsAboutNewOutputTexture];
Previewing with a GPUImageView as the target and taking photos both work fine, but recording with movieWriter crashes while writing audio, at this line:
else if (assetWriter.status == AVAssetWriterStatusWriting)
{
    if (![assetWriterAudioInput appendSampleBuffer:audioBuffer])
        NSLog(@"Problem appending audio buffer at time: %@", CFBridgingRelease(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime)));
}
@Lucifron
/** GPUImage's base source object
Images or frames of video are uploaded from source objects, which are subclasses of GPUImageOutput. These include:
- GPUImageVideoCamera (for live video from an iOS camera)
- GPUImageStillCamera (for taking photos with the camera)
- GPUImagePicture (for still images)
- GPUImageMovie (for movies)
Source objects upload still image frames to OpenGL ES as textures, then hand those textures off to the next objects in the processing chain.
*/
The comment shows it is the superclass of GPUImageVideoCamera and many other commonly used classes; my own sense is that this is the class to build on when implementing a custom capture source.