Is the concept of a Graphics Context very similar to a file handle? (on iOS and other systems)


Sometimes the term Graphics Context is a little bit abstract. Is a Graphics Context actually a system resource, but a resource from the graphics card, just as a file handle is a system resource from the hard drive or any other permanent storage device?

Just as a file handle has state about whether it is read-only or read/write, and about the current position for the next read operation, a Graphics Context has state about the current stroke color, stroke width, and other relevant data. (Update: and in write mode we can seek to any point in a 200 MB file and change the data there, just as we can draw on top of whatever is already on the Graphics Context's canvas.)
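For example (if I understand correctly), in Core Graphics the state travels with the context, so identical path code draws differently depending on what was set beforehand. A minimal sketch in Objective-C (the helper name is made up):

    #import <CoreGraphics/CoreGraphics.h>

    // The context carries the stroke state, so the two CGContextStrokePath
    // calls below produce different lines even though the path code is
    // identical -- much like a file handle carrying its mode and position.
    static void DrawTwoLines(CGContextRef ctx)
    {
        CGContextSaveGState(ctx);                     // push the current state

        CGContextSetRGBStrokeColor(ctx, 1, 0, 0, 1);  // state: red stroke
        CGContextSetLineWidth(ctx, 2.0);              // state: 2pt lines
        CGContextMoveToPoint(ctx, 10, 10);
        CGContextAddLineToPoint(ctx, 100, 10);
        CGContextStrokePath(ctx);                     // drawn red, 2pt

        CGContextSetRGBStrokeColor(ctx, 0, 0, 1, 1);  // state changed: blue
        CGContextSetLineWidth(ctx, 6.0);              // state changed: 6pt
        CGContextMoveToPoint(ctx, 10, 30);
        CGContextAddLineToPoint(ctx, 100, 30);
        CGContextStrokePath(ctx);                     // same path calls, blue, 6pt

        CGContextRestoreGState(ctx);                  // pop back to the saved state
    }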

So a Graphics Context is actually a global, system-wide resource. It is not part of the application singleton or anything like that, just as a file or file handle is not (necessarily) part of the application singleton.

And if there is no powerful graphics card (or if the graphics card has already run out of resources), then the operating system can simulate a Graphics Context with low-level graphics routines that draw into bitmaps, instead of letting the graphics card handle it.

Is this how a Graphics Context actually works, on iOS and on most other common operating systems in general?


There are 2 answers

Aaron Hayman

I think it's best not to think of a Graphics Context in terms of a specific system resource. As far as I know, a graphics context doesn't correspond to any specific resource any more than any class 'object' does, besides memory of course. Really, the graphics context is designed to provide a 'canvas' for the Core Graphics functions to operate on. The truth is, Apple doesn't give us the specific details of how a graphics context works internally. But there are several things we do know about it:

  1. The graphics context is basically a 'state' more than anything else. It holds information such as stroke/fill color, line width, etc., for a particular set of drawing routines.

  2. It doesn't process on the GPU. Instead, it processes (does all its drawing) on the CPU and 'passes' the resulting image (some form of bitmap) to the GPU for display/animation (actually, it renders the image directly into the GPU's buffers). This is why the 'renderInContext' method doesn't work so well on the new iPad 3. renderInContext gives you the image first, which involves rendering and copying the image. If you then wish to display it, it must be passed back to Core Graphics, which then writes the image back out. On the iPad 3, this involves a lot of memory (depending on the size of the view) and can easily overflow buffers. (A sketch of the renderInContext pattern follows this list.)

  3. The graphics context given to the 'drawRect' method of a UIView is designed to be as efficient as possible. This is why you can't draw anything in a view outside a context, nor can you create your own context for a view to draw into. The actual drawing is handled in the run loop, which is why we flag a UIView as needing to be drawn with [view setNeedsDisplay] (see the drawRect sketch after this list).

  4. The graphics contexts for UIViews are drawn on the main thread and, yes, again processed on the CPU. This does mean overly complex drawings can tie up your main application, but nowadays, with multi-core processors, that's not so much of a problem.

  5. You can create a graphics context yourself, but only to draw to an image. This is exactly what a UIView context does, except that it's meant to be used by you rather than drawn to the screen or animated. Since iOS 4, you can process these image contexts on threads other than the main thread (see the image-context sketch after this list).

    If you're looking to do GPU drawing, I believe the only way to do that on iOS is to use OpenGL. If you're using MacOS, I think you can actually enable Quartz (Core Graphics -- same thing) drawing on the GPU using QuartzGL. But it may not be worth the effort; see this article: Mac QuartzGL (2D drawing on the graphics card) performance
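To make item 2 concrete, here's a minimal sketch of the usual renderInContext pattern (the helper name is made up). The layer tree is rendered into a CPU-side image context and copied out as a bitmap before anything goes back to the GPU, which is exactly the expensive round trip described above:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // Render a view's layer tree into an image context on the CPU and
    // copy the result out as a UIImage.
    static UIImage *SnapshotOfView(UIView *view)
    {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return snapshot;
    }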
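For item 3, a sketch with a hypothetical UIView subclass (the class name and the drawing are illustrative). You never create the context yourself; UIKit sets one up and makes it current before calling drawRect:, and you request a redraw rather than drawing directly:

    #import <UIKit/UIKit.h>

    @interface BadgeView : UIView
    @end

    @implementation BadgeView

    - (void)drawRect:(CGRect)rect
    {
        // UIKit provides the context; we just draw into it.
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetRGBFillColor(ctx, 0.2, 0.6, 1.0, 1.0);
        CGContextFillEllipseInRect(ctx, CGRectInset(self.bounds, 4, 4));
    }

    @end

    // Elsewhere: never call drawRect: yourself. Flag the view and let the
    // run loop redraw it on the next drawing pass.
    // [badgeView setNeedsDisplay];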
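And for item 5, a sketch of a self-created image context (the function name and thumbnail size are made up); since iOS 4 this kind of rendering can run off the main thread:

    #import <UIKit/UIKit.h>

    // Draw into an image context we created ourselves, then pull the
    // result out as a UIImage.
    static UIImage *RenderThumbnail(void)
    {
        CGSize size = CGSizeMake(64, 64);
        UIGraphicsBeginImageContextWithOptions(size, NO, 0); // 0 = device scale

        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetRGBFillColor(ctx, 1, 0.5, 0, 1);
        CGContextFillRect(ctx, CGRectMake(0, 0, size.width, size.height));

        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return image;
    }

    // Usage: render on a background queue, hand the result to the main thread.
    // dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    //     UIImage *thumb = RenderThumbnail();
    //     dispatch_async(dispatch_get_main_queue(), ^{ imageView.image = thumb; });
    // });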

Update

As you can see in the comments below, the current arrangement Apple has for Quartz drawing is probably the best, especially since views are drawn directly into the GPU's buffers. There is a temptation to think that anything visual should be processed on the GPU, but the truth is, GPUs aren't designed for vector drawing. They're designed to handle massive transforms, lighting, texture mapping, etc. By using the CPU for vector drawing and leaving everything else to the GPU, Apple has split the graphics processing appropriately. Moreover, you're not losing any efficiency in the data transfer between the CPU and GPU, since Quartz draws directly into the GPU's buffer (which avoids that onerous memcpy).

Kurt Revis

Sometimes the term Graphics Context is a little bit abstract.

Yes, intentionally so. Quartz is meant to be an abstraction, a general-purpose drawing system. It may or may not perform some optimizations with the graphics hardware, internally, but you don't get to have much visibility into that. And the kinds of optimizations it makes may change over time and with different kinds of graphics hardware.

Is a Graphics Context actually a system resource, but a resource from the graphics card

No, absolutely not. Quartz is a software renderer -- it works even when there is no graphics hardware present, and it can draw to things like PDFs, where graphics hardware wouldn't be of any use.

Internally, Quartz (and its interfaces with the rest of the OS) may have a few "fast paths" that take advantage of the GPU in some situations. But that's by no means the common case.
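For instance, the very same drawing calls can target a PDF context, where a GPU is no help at all. A minimal sketch (the path, page size, and drawing are illustrative):

    #import <UIKit/UIKit.h>

    // Draw a page into a PDF file -- pure software rendering; no graphics
    // hardware is involved at any point.
    static void WriteSamplePDF(NSString *path)
    {
        CGRect pageRect = CGRectMake(0, 0, 612, 792);  // US Letter, in points
        UIGraphicsBeginPDFContextToFile(path, pageRect, nil);
        UIGraphicsBeginPDFPage();

        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetRGBStrokeColor(ctx, 0, 0, 0, 1);
        CGContextStrokeRect(ctx, CGRectInset(pageRect, 72, 72)); // 1in margin

        UIGraphicsEndPDFContext();                     // flush and close the file
    }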

Just as a file handle has state about whether it is read-only or read/write, and about the current position for the next read operation, a Graphics Context has state about the current stroke color, stroke width, and other relevant data.

This is correct.

So a Graphics Context is actually a global, system-wide resource.

No. Quartz is just a library that runs code within your app. If you make a new CGContext, only your app is affected -- exactly the same way as if your code created a new instance of one of your own classes.
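To make that concrete: creating a bitmap context is just an allocation inside your own process, no different in kind from alloc/init on one of your own classes. A minimal sketch (the size and pixel format are illustrative):

    #import <CoreGraphics/CoreGraphics.h>

    // Allocate a 100x100 RGBA bitmap context. Nothing system-wide happens;
    // it's ordinary memory owned by this process.
    static CGContextRef MakeLocalContext(void)
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(
            NULL,               // let Quartz allocate the pixel buffer
            100, 100,           // width, height in pixels
            8,                  // bits per component
            0,                  // bytes per row (0 = computed automatically)
            colorSpace,
            kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        return ctx;             // caller releases it with CGContextRelease()
    }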

And if there is no powerful graphics card (or if the graphics card has already run out of resources), then the operating system can simulate a Graphics Context with low-level graphics routines that draw into bitmaps, instead of letting the graphics card handle it.

You have the two cases flipped. In general, Quartz works in software, with bitmaps. In a few cases, it may use the GPU to get those bitmaps onto the screen faster, if everything is lined up exactly right.