On iOS, what is the fastest way to cache a drawn screen image and display it?


Instead of letting drawRect: redraw thousands of points every time, I think there are several ways to "cache the image on screen" so that any additional drawing is added to that cached image, and the cached image is simply shown when it is time for drawRect: to run:

  1. Use a bitmap context (CGBitmapContext): draw into the bitmap, and in drawRect: draw that bitmap.

  2. Use a CGLayer and draw that CGLayer in drawRect:. This may be faster than method 1, since the image may be cached on the graphics card (and would it then not count toward the RAM usage that triggers a "memory warning" on iOS?).

  3. Draw to a CGImage, and set it as the view's layer contents: view.layer.contents = (id) cgimage;

So there seem to be three methods, and I think the CALayer in method (3) can only do this by being given a CGImage; a CALayer by itself cannot cache a screen image, unlike the CGLayer in (2).
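For concreteness, here is roughly what I mean by method (3). This is just a sketch, assuming it runs in a view controller under ARC, with placeholder drawing:

    // Method (3) sketch: render once into an offscreen image context,
    // grab a CGImage, and hand it to the view's backing CALayer.
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, YES, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0);
    CGContextFillRect(ctx, self.view.bounds);
    CGContextSetRGBFillColor(ctx, 0.0, 0.0, 1.0, 1.0);
    for (NSUInteger i = 0; i < 1000; i++) {
        // stand-in for the thousands of points normally drawn in drawRect:
        CGContextFillRect(ctx, CGRectMake(arc4random_uniform(300), arc4random_uniform(400), 1.0, 1.0));
    }

    UIImage *cached = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // drawRect: no longer needs to run; the layer just shows the cached bitmap.
    self.view.layer.contents = (__bridge id)cached.CGImage;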

Is method (2) the fastest of the three, and are there other methods that can accomplish this? I actually plan to animate a few screen images (looping over 5 or 6 of them), and will try using CADisplayLink to target the highest frame rate of 60 fps. Will any of methods (1), (2), or (3) use memory on the graphics card rather than RAM, and therefore be less likely to trigger a memory warning from iOS?
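And this is roughly how I was planning to drive the 5-or-6-image loop with CADisplayLink. Again just a sketch; self.frames is assumed to be an NSArray of pre-rendered UIImages, and self.frameIndex and self.displayLink are assumed properties:

    // Cycle the layer's contents through a handful of pre-rendered frames.
    // CADisplayLink fires in step with the display refresh (60 Hz by default).
    - (void)startAnimating {
        self.frameIndex = 0;
        self.displayLink = [CADisplayLink displayLinkWithTarget:self
                                                       selector:@selector(showNextFrame:)];
        [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop]
                               forMode:NSRunLoopCommonModes];
    }

    - (void)showNextFrame:(CADisplayLink *)link {
        UIImage *frame = self.frames[self.frameIndex];
        self.view.layer.contents = (__bridge id)frame.CGImage;   // no drawRect: involved
        self.frameIndex = (self.frameIndex + 1) % self.frames.count;
    }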


There are 2 answers

2
Brad Larson

Based on the last several questions you've asked, it looks like you are completely confusing CGLayers and CALayers. They are different concepts, and are not really related to one another. A CGLayer is a Core Graphics construct which aids in the rendering of content repeatedly within the canvas of a Core Graphics context, and is confined within a single view, bitmap, or PDF context. Rarely have I had the need to work with a CGLayer.

A CALayer is a Core Animation layer, and there is one backing every UIView within iOS (and layer-backed NSViews on the Mac). You deal with these all the time on iOS, because they are a fundamental piece of the UI architecture. Each UIView is effectively a lightweight wrapper around a CALayer, and each CALayer in turn is effectively a wrapper around a textured quad on the GPU.

When a UIView is displayed onscreen, the very first time its content needs to be rendered (or when a complete redraw is triggered), Core Graphics is used to take your lines, arcs, and other vector drawing (sometimes including raster bitmaps as well) and rasterize them to a bitmap. This bitmap is then uploaded and cached on the GPU via your CALayer.
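To make that concrete, a minimal sketch of what this looks like from the view's side: drawRect: runs to produce the layer's bitmap, and is not called again until you explicitly invalidate the view with setNeedsDisplay.

    @interface PointsView : UIView
    @end

    @implementation PointsView
    // Called only when the layer's cached bitmap needs to be (re)created,
    // i.e. on first display or after -setNeedsDisplay. Moving or animating
    // this view afterwards does not call this again.
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetRGBFillColor(ctx, 0.0, 0.0, 0.0, 1.0);
        CGContextFillEllipseInRect(ctx, CGRectMake(10.0, 10.0, 4.0, 4.0)); // placeholder drawing
    }
    @end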

For changes in the interface, such as views being moved around, rotated, scaled, etc., these views or layers do not need to be redrawn again, which is an expensive process. Instead, they are just transformed on the GPU and composited in their new location. This is what enables the smooth animation and scrolling seen throughout the iOS interface.
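For example (a minimal sketch, assuming a view called pointsView like the one above), a move like this never re-invokes drawRect:; the layer's cached texture is simply transformed and recomposited:

    // Animate position only: the existing GPU texture is reused,
    // and Core Graphics never runs again.
    CABasicAnimation *move = [CABasicAnimation animationWithKeyPath:@"position"];
    move.fromValue = [NSValue valueWithCGPoint:pointsView.layer.position];
    move.toValue   = [NSValue valueWithCGPoint:CGPointMake(200.0, 300.0)];
    move.duration  = 0.5;
    [pointsView.layer addAnimation:move forKey:@"move"];
    pointsView.layer.position = CGPointMake(200.0, 300.0); // keep the model value in sync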

Therefore, you'll want to avoid using Core Graphics to redraw anything if you want to have the best performance. Cache what parts of the scene you can within CALayers or UIViews. Think about how older-style animation used cels to contain portions of the scene that they would move, instead of having animators redraw every single change in the scene.
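In code, the cel idea looks roughly like this (a sketch, assuming backgroundImage and spriteImage are pre-rendered UIImages): each cel is its own layer, rendered once, and only the small one ever moves.

    // Static "background cel": rendered once, never redrawn.
    CALayer *backgroundLayer = [CALayer layer];
    backgroundLayer.frame = self.view.bounds;
    backgroundLayer.contents = (__bridge id)backgroundImage.CGImage;
    [self.view.layer addSublayer:backgroundLayer];

    // Small "character cel": also rendered once; animating its position is
    // cheap because only the compositing changes, not the pixels.
    CALayer *spriteLayer = [CALayer layer];
    spriteLayer.frame = CGRectMake(0.0, 0.0, 40.0, 40.0);
    spriteLayer.contents = (__bridge id)spriteImage.CGImage;
    [self.view.layer addSublayer:spriteLayer];
    spriteLayer.position = CGPointMake(100.0, 100.0);   // move freely without any redrawing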

You can easily get hundreds of CALayers to animate about the screen smoothly on modern iOS devices. However, if you want to do thousands of points for something like a particle system, you're going to be better served by moving to OpenGL ES for that and rendering using GL_POINTS. That will take much more code to set up, but it may be the only way to get acceptable performance for the "thousands of points" you ask about.

3
hotpaw2

One fast method that allows both caching graphics and modifying those cached contents is a mash-up of your methods (1) and (3).

(1) Create your own bitmap-backed graphics context, draw into it, and later modify it at any time (incrementally add one point or thousands of points every now and then, etc.) as needed. Unfortunately, it will be invisible, because there is no way to get a bitmap directly to the display on an iOS device.
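A sketch of step (1), assuming ARC, with _bitmapContext and _bitmapDirty kept as ivars so the bitmap survives between incremental updates:

    // Step (1): a persistent bitmap context you can keep drawing into.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    size_t width  = 320;
    size_t height = 480;
    _bitmapContext = CGBitmapContextCreate(NULL, width, height,
                                           8,            // bits per component
                                           width * 4,    // bytes per row (RGBA)
                                           colorSpace,
                                           kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // ...later, at any time, add a few more points and mark the bitmap dirty:
    CGContextSetRGBFillColor(_bitmapContext, 1.0, 0.0, 0.0, 1.0);
    CGContextFillRect(_bitmapContext, CGRectMake(17.0, 42.0, 1.0, 1.0));
    _bitmapDirty = YES;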

So, in addition,

(3) At some frame rate (60 Hz, 30 Hz, etc.), if the bitmap is dirty (has been modified), convert the bitmap context into a CGImage and assign that image to the contents of a CALayer. That converts and copies your entire bitmap's memory to the GPU's texture cache (this is the slow part). Then use Core Animation to do whatever you want with the layer (flush it, composite it, fly it around the window, etc.) to display the texture made from your bitmap. Behind the scenes, Core Animation will eventually let the GPU throw a quad using that texture onto some composited window tiles, which will eventually be sent to the device's display (this description probably leaves out a whole bunch of stages in the graphics and GPU pipelines). Rinse and repeat as needed in the main UI run loop. My blog post on this method is here.
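A sketch of step (3), as a CADisplayLink callback (same assumed ivars as above, plus a cachedLayer property for the layer being displayed):

    // Step (3): when the bitmap has changed, convert it to a CGImage and
    // hand it to the layer. The CGImage creation and texture upload is the
    // expensive copy; everything after that is GPU compositing.
    - (void)refreshLayer:(CADisplayLink *)link {
        if (!_bitmapDirty) {
            return;                        // nothing changed, skip the upload
        }
        CGImageRef image = CGBitmapContextCreateImage(_bitmapContext);
        self.cachedLayer.contents = (__bridge id)image;
        CGImageRelease(image);             // the layer keeps what it needs
        _bitmapDirty = NO;
    }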

There is no way to partially modify the contents of an existing GPU texture that is in use. You either have to replace it with a complete new texture upload, or composite another layer on top of the texture's layer. So you will end up keeping double the memory in use: some in the CPU's address space, some in the GPU texture cache.
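If only a small region changes each frame, one variant of that second option (a sketch, assuming a small _patchContext bitmap context covering just that region) is to leave the big texture alone and swap out only a small overlay layer's contents:

    // Composite a small overlay on top of the big cached layer instead of
    // re-uploading the whole bitmap every frame.
    CALayer *overlay = [CALayer layer];
    overlay.frame = CGRectMake(20.0, 20.0, 64.0, 64.0);   // just the region that changes
    [self.cachedLayer addSublayer:overlay];

    CGImageRef patch = CGBitmapContextCreateImage(_patchContext);
    overlay.contents = (__bridge id)patch;
    CGImageRelease(patch);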