My iOS game currently uses CADisplayLink to time its OpenGL rendering. I've got a GCD dispatch queue running on a second thread that issues all the OpenGL rendering and state calls to the GPU. Everything works fine, except the timing isn't perfect: I'm seeing occasional frame skips and glitches, even when I change the frame interval from 1 (60 Hz) to 2 (30 Hz).
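
To make that concrete, here's a minimal sketch of the setup (in Swift for brevity; the class, queue label, and method names are placeholders rather than my actual code):

```swift
import UIKit

// Minimal sketch of the setup described above: a CADisplayLink on the main
// run loop hands each frame off to a serial GCD queue that owns all GL calls.
final class Renderer: NSObject {

    // Serial queue on which all OpenGL state and draw calls are issued.
    private let glQueue = DispatchQueue(label: "com.example.gl-render")
    private var displayLink: CADisplayLink?

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.frameInterval = 1                 // 1 = every vsync (60 Hz), 2 = every other vsync (30 Hz)
        link.add(to: .main, forMode: .common)
        displayLink = link                     // call invalidate() later to break the retain cycle
    }

    @objc private func tick(_ link: CADisplayLink) {
        // Fires on the main run loop; hand the frame off to the GL queue.
        glQueue.async {
            self.drawFrame()
        }
    }

    private func drawFrame() {
        // ... bind the context/framebuffer, issue GL calls, presentRenderbuffer ...
    }
}
```
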
CADisplayLink calls your selector from the run loop on the main thread, which means it can only fire in between the other dispatched jobs and input events the main thread is processing (I've confirmed this by logging those jobs/events). If those operations take several milliseconds, CADisplayLink will never be perfectly punctual, because it can't interrupt whatever is currently running on the main thread. Like most games, I imagine, I'm running game simulation, physics, and scene culling on the main thread.
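
This is roughly the kind of logging I used to confirm it, shown as a variant of the tick(_:) callback from the sketch above (the ~4 ms threshold is an arbitrary choice):

```swift
@objc private func tick(_ link: CADisplayLink) {
    // link.timestamp is when the previous frame was displayed; CACurrentMediaTime()
    // is when this callback actually ran. The gap grows whenever the main thread
    // is busy with other dispatched jobs or input events.
    let lateness = CACurrentMediaTime() - link.timestamp
    if lateness > 0.004 {                      // arbitrary ~4 ms threshold
        print(String(format: "display link fired %.1f ms after vsync", lateness * 1000))
    }
    glQueue.async { self.drawFrame() }         // hand off to the GL queue as before
}
```
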
So what I'm thinking is that I should move all the game simulation and physics off the main thread, so that touch events and the CADisplayLink callback can fire as close to on time as possible. But I'm not sure this would actually solve anything, and it's not a trivial amount of work.
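
Roughly what I have in mind is sketched below (the simulation queue label, stepPhysics, and cullScene are placeholder names, and the hand-off to the GL queue is simplified):

```swift
// Serial queue for game simulation, physics, and scene culling, so the main
// run loop only has to service touch input and the display link callback.
private let simulationQueue = DispatchQueue(label: "com.example.simulation",
                                            qos: .userInteractive)

@objc private func tick(_ link: CADisplayLink) {
    let dt = link.duration * Double(link.frameInterval)   // frame delta time
    simulationQueue.async {
        self.stepPhysics(dt)          // game simulation + physics
        self.cullScene()              // scene culling
        self.glQueue.async {          // then render on the GL queue
            self.drawFrame()
        }
    }
}

private func stepPhysics(_ dt: CFTimeInterval) { /* ... simulation + physics ... */ }
private func cullScene() { /* ... scene culling ... */ }
```
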
I'm wondering: since the presentRenderbuffer call is really the thing that synchronizes your frame with the actual hardware display, maybe all I need is a really accurate timer that runs on its own thread (perhaps at a higher priority) and triggers the rendering from there. Then I could keep everything else on the main thread. As far as I can tell, all CADisplayLink provides is a way to wait when my code runs faster than 60 Hz, and that kind of delay seems easy enough to code myself on a separate thread. What am I missing here?
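
For concreteness, the kind of thing I mean is a GCD timer source on a high-priority queue, something like this sketch (the 1/60 s interval, QoS, and zero leeway are assumed values, not tuned ones; glQueue and drawFrame are from the first sketch):

```swift
// Sketch of the timer-driven alternative: a strict GCD timer fires every
// 1/60 s on its own high-priority queue and kicks off rendering,
// independent of whatever the main run loop is doing.
private let timerQueue = DispatchQueue(label: "com.example.frame-timer",
                                       qos: .userInteractive)
private var frameTimer: DispatchSourceTimer?

func startTimerDrivenRendering() {
    let timer = DispatchSource.makeTimerSource(flags: .strict, queue: timerQueue)
    timer.schedule(deadline: .now(),
                   repeating: .nanoseconds(Int(1_000_000_000.0 / 60.0)),
                   leeway: .nanoseconds(0))
    timer.setEventHandler { [weak self] in
        guard let self = self else { return }
        self.glQueue.async {
            self.drawFrame()   // presentRenderbuffer still does the actual sync with the display
        }
    }
    timer.resume()
    frameTimer = timer
}
```
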