Instead of letting drawRect redraw thousands of points every time, I think there are several ways to "cache the image on screen": any additional drawing gets added to that cached image, and we just show that image when it is time for drawRect. For example:

1. Use a CGBitmapContext and draw to a bitmap, then in drawRect, draw this bitmap.
2. Use a CGLayer and draw that CGLayer in drawRect. This may be faster than method (1), as the image is cached on the graphics card (and so it may not count towards the RAM usage that triggers the "memory warning" on iOS?).
3. Draw to a CGImage, and set it as the view's layer contents: view.layer.contents = (id) cgimage;
So there seem to be three methods, and I think the CALayer in method (3) can only achieve this with a CGImage. A CALayer by itself cannot cache a screen image the way a CGLayer does in method (2).
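For example, what I have in mind for method (3) is roughly the following sketch; the point-drawing loop, the 1000-point count, and the function name are just placeholders for my real drawing code:

```objc
// Sketch of method (3): render the points once into a bitmap context,
// grab a CGImage, and assign it to the view's backing layer.
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

static void CachePointsIntoLayer(UIView *view)
{
    CGFloat scale = [UIScreen mainScreen].scale;
    CGSize size = view.bounds.size;

    UIGraphicsBeginImageContextWithOptions(size, NO, scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // The expensive point drawing happens once, outside of drawRect:.
    CGContextSetFillColorWithColor(ctx, [UIColor blackColor].CGColor);
    for (NSUInteger i = 0; i < 1000; i++) {
        CGFloat x = arc4random_uniform((uint32_t)size.width);
        CGFloat y = arc4random_uniform((uint32_t)size.height);
        CGContextFillEllipseInRect(ctx, CGRectMake(x, y, 2.0, 2.0));
    }

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // The cached bitmap becomes the layer contents; no drawRect: override needed.
    // (__bridge cast under ARC; a plain (id) cast under manual reference counting.)
    view.layer.contents = (__bridge id)image.CGImage;
    view.layer.contentsScale = scale;
}
```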
Is method (2) the fastest of the three, and are there other methods that can accomplish this? I actually plan to animate a few screen images (looping over 5 or 6 of them), and will try using CADisplayLink to target the maximum frame rate of 60 fps. Will any of methods (1), (2), or (3) use memory on the graphics card rather than RAM, and therefore be less likely to trigger a memory warning from iOS?
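What I'm planning for that animation is roughly the sketch below: a CADisplayLink fires each frame and swaps the next pre-rendered image into the layer's contents. The class name FrameLooper and the frameImages array are just made-up names for the 5 or 6 cached images:

```objc
// Rough sketch of the looping animation driven by CADisplayLink.
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface FrameLooper : NSObject
@property (nonatomic, strong) NSArray *frameImages;     // pre-rendered UIImage objects
@property (nonatomic, weak)   CALayer *targetLayer;
@property (nonatomic, assign) NSUInteger frameIndex;
@property (nonatomic, strong) CADisplayLink *displayLink;
@end

@implementation FrameLooper

- (void)start
{
    self.displayLink = [CADisplayLink displayLinkWithTarget:self
                                                   selector:@selector(step:)];
    // A frame interval of 1 asks for every vsync, i.e. up to 60 fps.
    self.displayLink.frameInterval = 1;
    [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop]
                           forMode:NSRunLoopCommonModes];
}

- (void)step:(CADisplayLink *)link
{
    // Swap in the next cached image; no Core Graphics drawing per frame.
    UIImage *image = [self.frameImages objectAtIndex:self.frameIndex];
    self.targetLayer.contents = (__bridge id)image.CGImage;
    self.frameIndex = (self.frameIndex + 1) % self.frameImages.count;
}

- (void)stop
{
    [self.displayLink invalidate];
    self.displayLink = nil;
}

@end
```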
Based on the last several questions you've asked, it looks like you are completely confusing CGLayers and CALayers. They are different concepts, and are not really related to one another. A CGLayer is a Core Graphics construct which aids in the rendering of content repeatedly within the canvas of a Core Graphics context, and is confined within a single view, bitmap, or PDF context. Rarely have I had the need to work with a CGLayer.
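To be concrete about what a CGLayer is for, the basic pattern looks something like the following sketch inside a drawRect: override of a UIView subclass (the sizes, colors, and loop are made up for illustration); the layer is drawn into once and then stamped repeatedly into the destination context:

```objc
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Create a CGLayer tied to this context and draw its content once.
    CGLayerRef layer = CGLayerCreateWithContext(context,
                                                CGSizeMake(40.0, 40.0), NULL);
    CGContextRef layerContext = CGLayerGetContext(layer);
    CGContextSetFillColorWithColor(layerContext, [UIColor redColor].CGColor);
    CGContextFillEllipseInRect(layerContext, CGRectMake(0.0, 0.0, 40.0, 40.0));

    // Stamp the cached layer repeatedly into the view's context.
    for (NSUInteger i = 0; i < 5; i++) {
        CGContextDrawLayerAtPoint(context, CGPointMake(i * 50.0, 20.0), layer);
    }

    CGLayerRelease(layer);
}
```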
A CALayer is a Core Animation layer, and there is one backing every UIView within iOS (and layer-backed NSViews on the Mac). You deal with these all the time on iOS, because they are a fundamental piece of the UI architecture. Each UIView is effectively a lightweight wrapper around a CALayer, and each CALayer in turn is effectively a wrapper around a textured quad on the GPU.
When displaying a UIView onscreen, the very first time content needs to be rendered (or when a complete redraw is triggered), Core Graphics is used to take your lines, arcs, and other vector drawing (sometimes including raster bitmaps, as well) and rasterize them to a bitmap. This bitmap is then uploaded and cached on the GPU via your CALayer.
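To make that concrete, here is a sketch of a view (the class name and drawing are just for illustration) whose drawRect: only fires when the content genuinely needs re-rasterizing, not when the view is later moved or animated:

```objc
// The NSLog shows when Core Graphics actually rasterizes this view's content.
// Expect it once when the view first appears, and again only after
// setNeedsDisplay, not during moves, scales, or animations.
@interface VectorBadgeView : UIView
@end

@implementation VectorBadgeView

- (void)drawRect:(CGRect)rect
{
    NSLog(@"Rasterizing %@", self);   // expensive, CPU-side Core Graphics path

    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
    CGContextSetLineWidth(context, 2.0);
    CGContextStrokeEllipseInRect(context, CGRectInset(self.bounds, 4.0, 4.0));
}

@end
```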
For changes in the interface, such as views being moved around, rotated, scaled, etc., these views or layers do not need to be redrawn again, which is an expensive process. Instead, they are just transformed on the GPU and composited in their new location. This is what enables the smooth animation and scrolling seen throughout the iOS interface.
Therefore, you'll want to avoid using Core Graphics to redraw anything if you want to have the best performance. Cache what parts of the scene you can within CALayers or UIViews. Think about how older-style animation used cels to contain portions of the scene that they would move, instead of having animators redraw every single change in the scene.
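In code, that cel-style approach looks something like the following sketch, run from a view controller (the image name "sprite" and the coordinates are placeholders): the layer's contents are rendered once, and afterwards only its position is animated.

```objc
- (void)slideCachedSprite
{
    // Contents are set once; Core Animation keeps the rasterized texture on the GPU.
    CALayer *spriteLayer = [CALayer layer];
    spriteLayer.bounds = CGRectMake(0.0, 0.0, 64.0, 64.0);
    spriteLayer.position = CGPointMake(50.0, 100.0);
    spriteLayer.contents = (__bridge id)[UIImage imageNamed:@"sprite"].CGImage;
    [self.view.layer addSublayer:spriteLayer];

    // Animating position only re-composites the cached texture; drawRect: is never involved.
    CABasicAnimation *slide = [CABasicAnimation animationWithKeyPath:@"position"];
    slide.fromValue = [NSValue valueWithCGPoint:CGPointMake(50.0, 100.0)];
    slide.toValue   = [NSValue valueWithCGPoint:CGPointMake(250.0, 100.0)];
    slide.duration  = 1.0;
    [spriteLayer addAnimation:slide forKey:@"slide"];
    spriteLayer.position = CGPointMake(250.0, 100.0);   // final model value
}
```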
You can easily get hundreds of CALayers to animate about the screen smoothly on modern iOS devices. However, if you want to do thousands of points for something like a particle system, you're going to be better served by moving to OpenGL ES for that and rendering using GL_POINTS. That will take much more code to set up, but it may be the only way to get acceptable performance for the "thousands of points" you ask about.