Limits of Windows Queue Timers


I am implementing a timer that needs to fire every 50 ms or so, and I would like its resolution to be 1 ms or better. I started by reading these two articles:

http://www.codeproject.com/Articles/1236/Timers-Tutorial

http://www.virtualdub.org/blog/pivot/entry.php?id=272

Oddly enough, they seem to contradict one another: one says queue timers are good for high resolution, while the other posts results from a Windows 7 system showing a resolution of around 15 ms (not good enough for my application).

So I ran a test on my system (Win7 64-bit, i7-4770 CPU @ 3.4 GHz). I started with a period of 50 ms and this is what I see (time since start on the left, gap between executions on the right; all in ms):

150   50.00
200   50.01
250   50.00
...
450   49.93
500   50.00
550   50.03
...
2250  50.10
2300  50.01

I see that the maximum error is about 100 us and that the average error is probably around 30 us. This makes me fairly happy.

So I started shortening the period to see at what point it becomes unreliable. Results started going bad once I decreased the period to 5 ms or below.

With a 5 ms period it was not uncommon to see individual gaps jump between 3 and 6 ms every few seconds. At a 1 ms period, gaps of 5, 10, even 40 ms show up. I presume the jumps up to 40 ms may be because I'm printing stuff to the screen, I dunno.

This is my timer callback code:

// d_start, d_last_tick and d_frequency are globals, initialized elsewhere
// via QueryPerformanceCounter / QueryPerformanceFrequency.
VOID CALLBACK timer_execute(PVOID p_parameter, 
   BOOLEAN p_timer_or_wait_fired)
{ 
   LARGE_INTEGER l_now_tick;

   QueryPerformanceCounter(&l_now_tick);

   // Cast to double before dividing; all-integer arithmetic here would
   // truncate the sub-tick fraction (and risks overflow for long runs).
   double now = ((double)(l_now_tick.QuadPart - d_start.QuadPart) * 1000000.0) / (double)d_frequency.QuadPart;
   double us = ((double)(l_now_tick.QuadPart - d_last_tick.QuadPart) * 1000000.0) / (double)d_frequency.QuadPart;

   //printf("\n%.0f\t%.2f", now / 1000.0, us / 1000.0);

   // Only report intervals that deviate badly from the expected period.
   if (us > 2000 ||
       us < 100)
   {
      printf("\n%.2f", us / 1000.0);
   }

   d_last_tick = l_now_tick;
}

Anyway, it looks to me as if queue timers are very good tools as long as you're executing at 100 Hz or slower. Could the bad results posted in the second article I linked to (accuracy of about 15 ms) be due to a slower CPU, or a different config?

I'm wondering whether I can expect this kind of performance across multiple machines (all as fast as or faster than my machine running 64-bit Win7)? Also, I noticed that if your callback doesn't return before the period elapses, the OS fires the callback again on another thread-pool thread, so two invocations can run concurrently. That may be obvious, but it didn't stand out to me in any documentation and has significant implications for client code.


There are 2 answers

Arno (best answer)

The Windows default timer resolution is 15.625 ms. That is the roughly 15 ms granularity reported in the second article. However, the system timer resolution can be modified as described by MSDN: Obtaining and Setting Timer Resolution. This allows the granularity to be reduced to about 1 ms on most platforms. This SO answer shows how to obtain the current system timer resolution.
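The documented route from that MSDN article is the multimedia timer API: request 1 ms granularity for the lifetime of your timing-sensitive code and release it afterwards. A minimal sketch, assuming a Windows build environment linking against winmm.lib; this will not compile elsewhere:

```c
#include <windows.h>
#include <mmsystem.h>   /* timeBeginPeriod / timeEndPeriod; link winmm.lib */
#include <stdio.h>

int main(void)
{
    /* Ask the OS for 1 ms timer granularity. Other processes may have
       already requested an even finer resolution; the system honours
       the smallest outstanding request. */
    if (timeBeginPeriod(1) != TIMERR_NOERROR)
    {
        fprintf(stderr, "1 ms resolution not supported\n");
        return 1;
    }

    /* ... create and run the timer queue here ... */
    Sleep(500);

    /* Every timeBeginPeriod must be paired with a matching
       timeEndPeriod, or the raised interrupt rate persists. */
    timeEndPeriod(1);
    return 0;
}
```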

The undocumented function NtSetTimerResolution(...) even allows setting the timer resolution to 0.5 ms when the platform supports it. See this SO answer to the question "How to set the timer resolution to 0.5 ms?"
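Since there is no import library for it, the call has to be resolved dynamically from ntdll. A sketch under the assumption that the commonly reported signature is correct (it is an undocumented API, so this may change between Windows versions; use at your own risk):

```c
#include <windows.h>
#include <stdio.h>

/* Commonly reported signature of the undocumented call; resolutions
   are in 100 ns units, so 5000 corresponds to 0.5 ms. */
typedef LONG (NTAPI *NtSetTimerResolution_t)(ULONG DesiredResolution,
                                             BOOLEAN SetResolution,
                                             PULONG CurrentResolution);

int main(void)
{
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
    NtSetTimerResolution_t set_res = (NtSetTimerResolution_t)
        GetProcAddress(ntdll, "NtSetTimerResolution");
    if (set_res == NULL)
        return 1;

    ULONG actual = 0;
    /* Request 0.5 ms; 'actual' receives what the platform granted. */
    set_res(5000, TRUE, &actual);
    printf("granted resolution: %.2f ms\n", actual / 10000.0);
    return 0;
}
```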

...a different config? It depends on the underlying hardware and the OS version. Check the timer resolution with the tools mentioned above.

...all as fast or faster than my machine running 64bit Win7)? Yes, you can. However, other applications are also allowed to set the timer resolution; Google Chrome is a well-known example. Such applications may also change the timer resolution only temporarily. Therefore you can never rely on the timer resolution being constant across platforms or over time. The only way to be sure the timer resolution is under your application's control is to set the granularity to the minimum of 1 ms (or 0.5 ms) yourself.

Note: Reducing the system timer granularity causes the system's interrupt frequency to increase. It also shortens the thread quantum (time slice) and increases power consumption.

Nathanael Moh

I believe the differences are due to how the system manages resources; I just learned about this in a presentation I had to do for my operating systems class. Since many processes are running, the system might not be able to schedule your process quickly enough when the period is too short. On the other hand, when there is more time, the process gets scheduled on time. Priority also plays a role. I hope this was somewhat helpful.