The project uses GPUImage to record video, including recording with a watermark overlay. Recording tutorials are everywhere, but the library has been unmaintained since 2015, so plenty of its pitfalls were never fixed. For some of them I found answers in the GitHub issues without fully understanding why they work, and since other people keep running into the same problems, I'm opening this post to write them down for anyone who needs a reference.
When I wrote my own recording code by following a demo found online, I hit the first pit right away: the app crashed with the error: Tried to overrelease a framebuffer, did you forget to call -useNextFrameForImageCapture before using -imageFromCurrentFramebuffer?
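For context, the crash shows up in the usual GPUImage watermark-recording pipeline, which looks roughly like this (a minimal sketch; watermarkView, movieURL and the session preset are illustrative names of my own, not code from the demo):

GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

// The watermark view is rendered into the pipeline through GPUImageUIElement.
GPUImageUIElement *uiElement = [[GPUImageUIElement alloc] initWithView:watermarkView];
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
blendFilter.mix = 1.0;

// Camera frames are the first input of the blend filter, the watermark is the second.
[videoCamera addTarget:blendFilter];
[uiElement addTarget:blendFilter];

GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(720.0, 1280.0)];
movieWriter.encodingLiveVideo = YES;
[blendFilter addTarget:movieWriter];
videoCamera.audioEncodingTarget = movieWriter;

// Redraw the watermark for every camera frame; updateWithTimestamp: here is where the
// overrelease assert fires on the unpatched library.
__weak GPUImageUIElement *weakElement = uiElement;
[blendFilter setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime time) {
    [weakElement updateWithTimestamp:time];
}];

[videoCamera startCameraCapture];
[movieWriter startRecording];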
I asked a lot of people and dug through a lot of material. GPUImageFramebuffer is the class that manages video frame buffers, and the crash happens because its framebufferReferenceCount property has already dropped to 0. framebufferReferenceCount works like the retain count in ARC: every lock increments it, every unlock decrements it, and the assert exists to catch a framebuffer being released more times than it was locked.
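To make the counting concrete: lock, unlock and disableReferenceCounting are the real GPUImageFramebuffer methods, and the call sequence below is only an illustration of how the count moves and where the assert trips.

// A framebuffer fetched from the shared cache comes back already locked once
// (framebufferReferenceCount == 1).
GPUImageFramebuffer *fb = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:CGSizeMake(720.0, 1280.0) onlyTexture:YES];

[fb lock];    // count -> 2
[fb unlock];  // count -> 1
[fb unlock];  // count -> 0, the buffer is returned to the cache

// One unlock too many: the NSAssert(framebufferReferenceCount > 0, ...) fires here.
// [fb unlock];

// The patch below sidesteps this by calling disableReferenceCounting, which turns
// lock/unlock into no-ops for that buffer.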
The simplest fix is obviously to comment out the assert itself:

NSAssert(framebufferReferenceCount > 0, @"Tried to overrelease a framebuffer, did you forget to call -useNextFrameForImageCapture before using -imageFromCurrentFramebuffer?");

But after going through every issue on GitHub, I found that someone there proposed a better fix: in the library source, open GPUImageUIElement.m (the method body lives in the .m file) and add the following line to updateWithTimestamp:, right after the framebuffer is fetched:

#pragma mark GPUImage source change - crash fix
[outputFramebuffer disableReferenceCounting]; // Add this line, because GPUImageTwoInputFilter.m frametime updatedMovieFrameOppositeStillImage is YES, but the secondbuffer not lock (roughly: the second frame buffer never gets locked)

and this line inside the loop over targets:

#pragma mark GPUImage source change - crash fix
[currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget]; // add this line, because the outputFramebuffer is updated above

The complete method after the changes:
- (void)updateWithTimestamp:(CMTime)frameTime;
{
    [GPUImageContext useImageProcessingContext];

    CGSize layerPixelSize = [self layerSizeInPixels];

    GLubyte *imageData = (GLubyte *)calloc(1, (int)layerPixelSize.width * (int)layerPixelSize.height * 4);

    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)layerPixelSize.width, (int)layerPixelSize.height, 8, (int)layerPixelSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
//    CGContextRotateCTM(imageContext, M_PI_2);
    CGContextTranslateCTM(imageContext, 0.0f, layerPixelSize.height);
    CGContextScaleCTM(imageContext, layer.contentsScale, -layer.contentsScale);
//    CGContextSetBlendMode(imageContext, kCGBlendModeCopy); // From Technical Q&A QA1708: http://developer.apple.com/library/ios/#qa/qa1708/_index.html

    [layer renderInContext:imageContext];

    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);

    // TODO: This may not work
    outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:layerPixelSize textureOptions:self.outputTextureOptions onlyTexture:YES];

#pragma mark GPUImage source change - crash fix
    [outputFramebuffer disableReferenceCounting]; // Add this line, because GPUImageTwoInputFilter.m frametime updatedMovieFrameOppositeStillImage is YES, but the secondbuffer not lock

    glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
    // no need to use self.outputTextureOptions here, we always need these texture options
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)layerPixelSize.width, (int)layerPixelSize.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);

    free(imageData);

    for (id<GPUImageInput> currentTarget in targets)
    {
        if (currentTarget != self.targetToIgnoreForUpdates)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

            [currentTarget setInputSize:layerPixelSize atIndex:textureIndexOfTarget];
#pragma mark GPUImage source change - crash fix
            [currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget]; // add this line, because the outputFramebuffer is updated above
            [currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndexOfTarget];
        }
    }
}
With the fix above the crash is gone, but occasionally a new bug shows up where only about 5 seconds of video get recorded. My workaround is simply to re-record whenever frames get dropped like this. A bit dumb, I know, but it's the best I have for now.
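For what it's worth, this is roughly how I check for the truncated clip and trigger the re-record (a sketch only; expectedDuration, movieURL and -restartRecording are hypothetical names, and it assumes AVFoundation is imported):

[movieWriter finishRecordingWithCompletionHandler:^{
    // Compare the duration actually written to disk with the duration we expected.
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];
    Float64 writtenSeconds = CMTimeGetSeconds(asset.duration);

    if (writtenSeconds < expectedDuration * 0.5) {
        // Frames were dropped and only a few seconds made it into the file:
        // throw the clip away and start over.
        [[NSFileManager defaultManager] removeItemAtURL:movieURL error:nil];
        dispatch_async(dispatch_get_main_queue(), ^{
            [self restartRecording];
        });
    }
}];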