I am trying to display a sequence of frames from a video, much like the scrubber bar in the Photos app.
Here is the code:
self.imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:self.videoAsset];
self.imageGenerator.maximumSize = CGSizeMake(frameDisplaySize.width, frameDisplaySize.height);
self.imageGenerator.appliesPreferredTrackTransform = YES;
// Infinite tolerances should let the generator return the nearest convenient frame
// (typically a keyframe) instead of decoding to the exact requested time.
self.imageGenerator.requestedTimeToleranceBefore = kCMTimePositiveInfinity;
self.imageGenerator.requestedTimeToleranceAfter = kCMTimePositiveInfinity;

NSMutableArray *videoFrames = [NSMutableArray array];
[self.imageGenerator generateCGImagesAsynchronouslyForTimes:times
                                          completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded) {
        UIImage *frame = [[UIImage alloc] initWithCGImage:image];
        [videoFrames addObject:frame];
    } else if (result == AVAssetImageGeneratorFailed) {
        NSLog(@"Failed with error: %@", [error localizedDescription]);
    } else {
        NSLog(@"Cancelled");
    }
}];
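For reference, times is an array of NSValue-wrapped CMTimes built elsewhere in my code; a simplified sketch of how it is constructed is below (the frame count of 10 is just illustrative, the real value depends on the scrubber width):

// Sketch only: build evenly spaced request times across the asset's duration.
NSUInteger frameCount = 10; // illustrative; the real count depends on the scrubber width
CMTime duration = self.videoAsset.duration;
NSMutableArray *times = [NSMutableArray arrayWithCapacity:frameCount];
for (NSUInteger i = 0; i < frameCount; i++) {
    CMTime time = CMTimeMultiplyByRatio(duration, (int32_t)i, (int32_t)frameCount);
    [times addObject:[NSValue valueWithCMTime:time]];
}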
The problem is that when I run this code, it gives me frames with the highest accuracy, identical to what I get when the tolerances are set to kCMTimeZero. In fact, no matter what tolerance values I try, I always get the same frames. Because of this high accuracy, the generation process takes about 4 seconds. The Photos app only takes about 1.5 seconds to show its frames, and those frames are clearly of lower accuracy, pretty much just the I-frames of the video. So, is this a bug in AVAssetImageGenerator, or is it a problem in my code?