Reversing videos efficiently with AVFoundation
The problem
One of the features I wanted to add to my app GIF Grabber in the next version was the ability to set different loop types. In order to support "reverse" and "forward-reverse" style loops, we needed a way to reverse a video (AVAsset) in Objective-C.
Note: GIF Grabber, despite its name, keeps all recordings in MP4 video format until the user actually wants to save one as a GIF. Manipulating video and adding effects is much easier and more efficient than working with GIFs directly.
Example of a forward (normal) looping GIF:
The same GIF with a reversed loop:
Again, with a forward-reverse (ping-pong) style loop:
Existing solutions
Most of the answers and tutorials I found online suggested using AVAssetImageGenerator to output frames as images and then compositing them back in reverse order into a video. Because of the way AVAssetImageGenerator works, there are some major drawbacks to this solution:
It is resource-intensive and slow: there is the overhead of capturing each frame, writing it out to disk, reading the images back from disk, compositing them, and then writing out to disk again.
Frame timing is unreliable: you specify the interval at which screenshots should be taken, but it is not guaranteed and is non-deterministic, since capture is bounded by CPU and disk I/O.
The quality of the frames will differ from the original once each frame is converted to an image.
Since the reversed video was going to be concatenated with the original, any difference in quality or timing would be very noticeable. We needed them to be exactly the same.
A more efficient solution
Since we deal with relatively short videos (30 seconds or less), we wanted to perform the whole procedure in memory.
This can be achieved as follows:
Use AVAssetReader to read in the video as an array of CMSampleBufferRef structs (each contains the raw pixel data along with the timing info for a frame).
Extract the image/pixel data for each frame and pair it with the timing info of its mirror frame. (This step is necessary because we can't simply append the CMSampleBufferRef structs in reverse order; the timing info is embedded in each struct.)
Use AVAssetWriter to write the frames back out to a video file.
You can find the source code here. In the sections below, we'll walk through some of the more complicated parts.
1 Read in the video samples
// Initialize the reader
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] lastObject];
NSDictionary *readerOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange],
    (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey,
    nil];
AVAssetReaderTrackOutput *readerOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                               outputSettings:readerOutputSettings];
[reader addOutput:readerOutput];
[reader startReading];
First, we initialize the AVAssetReader object that will be used to read in the video as a series of samples (frames). We also configure the pixel format for the frames. You can read more about the different pixel format types here.
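The biplanar YCbCr format above is a compact format commonly used for camera output, so it is a reasonable default. If you instead plan to run per-frame processing (Core Image filters, for example), a BGRA buffer can be more convenient. This variant of the settings dictionary is a hypothetical alternative, not what the walkthrough uses:

// Hypothetical alternative: read frames as 32-bit BGRA instead of biplanar YCbCr.
NSDictionary *bgraOutputSettings = @{
    (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};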
// Read in the samples
NSMutableArray *samples = [[NSMutableArray alloc] init];
CMSampleBufferRef sample;
while ((sample = [readerOutput copyNextSampleBuffer])) {
    [samples addObject:(__bridge id)sample];
    CFRelease(sample);
}
Next, we read the samples into an array. Note that because CMSampleBufferRef is a native Core Foundation type, we cast each one to the Objective-C type id using __bridge.
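One defensive addition worth making here (not in the original snippet): after copyNextSampleBuffer returns NULL, check the reader's status so a mid-stream failure isn't silently treated as end-of-file.

// Sanity check after the read loop (assumes the `reader` from step 1).
if (reader.status != AVAssetReaderStatusCompleted) {
    NSLog(@"Reading failed: %@", reader.error);
    return; // or propagate the error to the caller
}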
2 Prepare the writer that will convert the frames back to video
// Initialize the writer
AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                                  fileType:AVFileTypeMPEG4
                                                     error:&error];
This part is pretty straightforward: the AVAssetWriter object takes an output URL and the file type of the output file.
NSDictionary *videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
    @(videoTrack.estimatedDataRate), AVVideoAverageBitRateKey,
    nil];
NSDictionary *writerOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:videoTrack.naturalSize.width], AVVideoWidthKey,
    [NSNumber numberWithInt:videoTrack.naturalSize.height], AVVideoHeightKey,
    videoCompressionProps, AVVideoCompressionPropertiesKey,
    nil];
AVAssetWriterInput *writerInput =
    [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                   outputSettings:writerOutputSettings
                                 sourceFormatHint:(__bridge CMFormatDescriptionRef)[videoTrack.formatDescriptions lastObject]];
[writerInput setExpectsMediaDataInRealTime:NO];
Next, we create the AVAssetWriterInput object that will feed the frames to the AVAssetWriter. The configuration will depend on your source video; here, we specify the codec, dimensions, and compression properties.
We set the expectsMediaDataInRealTime property to NO since we are not processing a live video stream; the writer can therefore take its time without dropping frames.
3 Reverse the frames and save to file
// Initialize an input adaptor so that we can append pixel buffers
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor =
    [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:writerInput
                                               sourcePixelBufferAttributes:nil];
[writer addInput:writerInput];
[writer startWriting];
[writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(
                                     (__bridge CMSampleBufferRef)samples[0])];
First, we create an AVAssetWriterInputPixelBufferAdaptor object that acts as an adaptor for the writer input; it allows us to append the pixel buffer of each frame. We then start the writer and begin the session at the presentation timestamp of the first sample.
// Append the frames to the output.
// Notice we append the frames from the tail end, using the timing of the frames from the front.
for (NSInteger i = 0; i < samples.count; i++) {
    // Get the presentation time for the frame at this position
    CMTime presentationTime =
        CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[i]);

    // Take the image/pixel buffer from the tail end of the array
    CVPixelBufferRef imageBufferRef =
        CMSampleBufferGetImageBuffer((__bridge CMSampleBufferRef)samples[samples.count - i - 1]);

    // Wait until the writer input can accept more data
    while (!writerInput.readyForMoreMediaData) {
        [NSThread sleepForTimeInterval:0.1];
    }

    [pixelBufferAdaptor appendPixelBuffer:imageBufferRef withPresentationTime:presentationTime];
}
[writer finishWriting];
Note: Each sample (CMSampleBufferRef) contains two key pieces of information: a pixel buffer (CVPixelBufferRef) holding the pixel data for the frame, and a presentation timestamp that indicates when it should be displayed.
Finally, we loop through all the frames: for each index i, we take the presentation timestamp of frame i and pair it with the pixel buffer of its mirror frame (count - i - 1). We pass each pair to the pixelBufferAdaptor we created earlier, which feeds it into the writer, and we make sure the writerInput is ready before passing it the next frame. Once every frame has been appended, we write the output to disk.
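The sleep-polling loop above works, but AVFoundation also offers a pull model: with requestMediaDataWhenReadyOnQueue:usingBlock:, the writer input calls you back whenever it can accept more data. Here is a minimal sketch of that variant, assuming the samples array, pixelBufferAdaptor, writerInput, and writer from the snippets above; the queue label is an arbitrary placeholder, and this is not the approach used in the downloadable source.

// Callback-driven alternative to the sleep-polling loop (sketch).
dispatch_queue_t mediaQueue = dispatch_queue_create("com.example.reverse-writer", NULL);
__block NSInteger i = 0;
[writerInput requestMediaDataWhenReadyOnQueue:mediaQueue usingBlock:^{
    while (writerInput.readyForMoreMediaData && i < samples.count) {
        CMTime presentationTime =
            CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[i]);
        CVPixelBufferRef imageBufferRef =
            CMSampleBufferGetImageBuffer((__bridge CMSampleBufferRef)samples[samples.count - i - 1]);
        [pixelBufferAdaptor appendPixelBuffer:imageBufferRef withPresentationTime:presentationTime];
        i++;
    }
    if (i >= samples.count) {
        [writerInput markAsFinished];
        // finishWritingWithCompletionHandler: is the asynchronous counterpart
        // of the finishWriting call used above.
        [writer finishWritingWithCompletionHandler:^{
            // The reversed file is now complete at outputURL.
        }];
    }
}];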
That's it! Your reversed video should be saved and accessible at the output path you specified when initializing the writer.
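For completeness, a call site might look like the sketch below if you wrap the three steps in a helper. The VideoReverser class and reverseAsset:outputURL:error: method are illustrative names only, not the API of the downloadable source.

// Hypothetical call site; class and method names are placeholders.
NSURL *outputURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"reversed.mp4"]];
AVAsset *asset = [AVAsset assetWithURL:recordingURL]; // recordingURL: your source MP4
NSError *error = nil;
BOOL ok = [VideoReverser reverseAsset:asset outputURL:outputURL error:&error];
if (!ok) {
    NSLog(@"Could not reverse video: %@", error);
}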
Download the source code
You can download the final source code here.