I have a large NSView that shows a custom Quartz2D drawing which changes repeatedly at high frame rates. Only some parts of the drawing may change from frame to frame, though. My approach so far is to first draw into an offscreen bitmap context, then create an image from that context, and finally update the contents of the view's CoreAnimation layer with that image.
My first question is whether that approach generally makes sense and whether it's the right way to go in terms of performance.
Drawing into the offscreen bitmap context is fast enough and is optimised to redraw dirty areas only. So after that step I have a set of rectangles that mark the regions of the offscreen buffer that should be displayed on the screen. For now I simply update the contents of the CoreAnimation layer with an image created from the offscreen bitmap context, which basically works, but I get flickering: it looks like new frames are briefly displayed on the screen while they are not yet completely (or not at all) drawn. I have played around with CATransaction lock/unlock/begin/end/flush, NSView lockFocus/unlockFocus, and NSDisableScreenUpdates/NSEnableScreenUpdates, but haven't found a way to get rid of the flickering yet. So I was wondering: what is actually the correct sequence to get the synchronisation right?
Here is a sketch of the initialisation code:
NSView* theView = ...
CALayer* layer = [[CALayer new] autorelease];
// Disable the implicit animation for the layer's "contents" property.
layer.actions = [NSDictionary dictionaryWithObject:[NSNull null] forKey:@"contents"];
// Set the layer before enabling wantsLayer so the view becomes layer-hosting.
[theView setLayer: layer];
[theView setWantsLayer: YES];
// bitmapContext gets re-created when the view size increases.
CGContextRef bitmapContext = CGBitmapContextCreate(...);
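A minimal sketch of how such a bitmap context could be created (the pixel format and the use of the view's bounds here are assumptions, not the exact code I use):

size_t width  = (size_t)NSWidth([theView bounds]);
size_t height = (size_t)NSHeight([theView bounds]);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext =
    CGBitmapContextCreate(NULL,       // let Quartz allocate the backing store
                          width,
                          height,
                          8,          // bits per component
                          0,          // bytes per row, 0 = automatic
                          colorSpace,
                          kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
CGColorSpaceRelease(colorSpace);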
And here is a sketch of the drawing code:
CGRect dirtyRegions[] = ...
NSDisableScreenUpdates();
[CATransaction begin];
[CATransaction setDisableActions: YES];
// draw into dirty regions of bitmapContext
// ...
// create image from bitmap context
void* buffer = CGBitmapContextGetData(bitmapContext);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, ...);
CGImageRef image = CGImageCreate(..., provider, ...);
CGDataProviderRelease(provider); // the image retains the provider
// update layer contents, dirty regions are ignored
layer.contents = (id)image;
CGImageRelease(image); // the layer retains its contents
[CATransaction commit];
NSEnableScreenUpdates();
I would also like to take advantage of the knowledge about the dirty regions. Is there a way to update only the dirty regions on the screen using this approach?
Thanks for your help!
UPDATE: I think I found the problem that causes the flickering. I create the image with the pixel buffer from the bitmap context using CGImageCreate(...). If I use CGBitmapContextCreateImage(...) instead, it works. CGBitmapContextCreateImage does copy-on-write: if I understand correctly, the pixels are only actually copied once the bitmap context is drawn into again, so the image is a consistent snapshot instead of a live reference to the buffer I'm still drawing into. That would explain why it didn't work earlier. I've read somewhere that CGBitmapContextCreateImage should be used carefully because it makes calls into the kernel that might affect performance, so I guess I will simply copy the relevant pixels into a new image buffer, taking the dirty regions into account. Does this make sense?
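A minimal sketch of the drawing step with CGBitmapContextCreateImage instead of CGImageCreate (same structure as the sketch above):

[CATransaction begin];
[CATransaction setDisableActions: YES];
// draw into dirty regions of bitmapContext
// ...
// CGBitmapContextCreateImage takes a (copy-on-write) snapshot of the
// context's pixels, so the layer never shows a half-drawn frame.
CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
layer.contents = (id)image;
CGImageRelease(image); // the layer retains its contents
[CATransaction commit];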
After trying out a lot of different approaches, I dropped CoreAnimation for uploading pixel data and decided to use CoreVideo pixel buffers (CVPixelBufferRef) in combination with OpenGL for moving the pixels onto the screen instead. CoreVideo provides convenient functions to create OpenGL textures from pixel buffers (CVOpenGLTextureCacheCreateTextureFromImage), to manage them in a texture cache (CVOpenGLTextureCacheRef), and to draw into the buffers safely (CVPixelBufferLockBaseAddress/CVPixelBufferUnlockBaseAddress). Uploading the dirty rectangles to the window back-buffer can then be done with normal OpenGL texture mapping commands (glTexCoord2fv).
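A minimal sketch of how these pieces could fit together (width/height and the cglContext/cglPixelFormat of the view's OpenGL context are assumptions, error checking omitted):

// Create a pixel buffer that can be turned into an OpenGL texture.
// Requesting IOSurface backing is an assumption here; it lets the
// texture cache avoid extra copies.
NSDictionary* attributes =
    [NSDictionary dictionaryWithObject: [NSDictionary dictionary]
                                forKey: (NSString*)kCVPixelBufferIOSurfacePropertiesKey];
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA,
                    (CFDictionaryRef)attributes, &pixelBuffer);

// Draw into the buffer between lock/unlock; base and bytesPerRow
// describe the pixel memory to draw into.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void* base = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
// ... draw the dirty regions into the buffer ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// Create a texture from the buffer via the texture cache.
CVOpenGLTextureCacheRef textureCache = NULL;
CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL,
                           cglContext, cglPixelFormat, NULL, &textureCache);
CVOpenGLTextureRef texture = NULL;
CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                           pixelBuffer, NULL, &texture);
glBindTexture(CVOpenGLTextureGetTarget(texture), CVOpenGLTextureGetName(texture));
// ... draw textured quads covering the dirty rectangles ...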
Another approach that works equally well and has a similar API is IOSurface; more information about it is here.
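And a minimal sketch of the corresponding IOSurface setup (again with assumed width/height; the GL upload would go through CGLTexImageIOSurface2D):

#import <IOSurface/IOSurface.h>

NSDictionary* properties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt: width],  (NSString*)kIOSurfaceWidth,
    [NSNumber numberWithInt: height], (NSString*)kIOSurfaceHeight,
    [NSNumber numberWithInt: 4],      (NSString*)kIOSurfaceBytesPerElement,
    nil];
IOSurfaceRef surface = IOSurfaceCreate((CFDictionaryRef)properties);

// Drawing works analogously to CVPixelBufferLock/UnlockBaseAddress.
IOSurfaceLock(surface, 0, NULL);
void* base = IOSurfaceGetBaseAddress(surface);
// ... draw the dirty regions ...
IOSurfaceUnlock(surface, 0, NULL);

// The surface can then be bound to an OpenGL texture with
// CGLTexImageIOSurface2D(...) and drawn like the CoreVideo textures above.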