What is the meaning of claims about clock precision/accuracy?


I've seen many discussions of system clocks claiming that, for example, standard PC clocks under Windows are precise to only ±10 ms, whereas clocks on a real-time system have sub-millisecond precision. But what do these claims actually mean? How significant this timing variability is depends entirely on the interval over which clock timing is measured. If two successive clock calls returned timestamps that differed by 10 ms, that would be a disaster; fortunately this isn't the case. But if a clock only loses or gains 10 ms over the course of a month, that's virtually perfect timing for any practical purpose. To pose the question another way: if I make two clock calls that are 1 second apart, what degree of inaccuracy could I expect on, say, a standard Windows PC, a real-time PC (e.g. QNX on a motherboard that supports it), and a Mac?


There are 2 answers

Jerry On

Your question(s) may lead to a larger discussion. The rate at which a clock gains or loses time over a given interval, relative to a reference, is usually called drift. If the timestamps from two successive clock calls differed by 10 ms, there are several possible explanations: maybe the calls themselves take that long to process, maybe an interrupt fired in between, maybe the clock really does drift that badly, maybe the clock's reporting precision is in units of 10 ms, maybe there is round-off error, etc. The reporting precision of a system clock depends on the speed of the underlying timer (e.g. a 1 GHz timer ticks once per nanosecond), on hardware support, and on OS support. Sorry, I don't know how Windows compares with the Mac.
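As a rough way to see how these effects show up over the 1-second interval you asked about (this is just a sketch, and the numbers will vary by machine and OS), you can compare the elapsed time reported by two different Java clocks over the same interval; over 1 second any discrepancy is typically dominated by the coarse clock's reporting granularity rather than by drift:

```java
public class OneSecondComparison {
    public static void main(String[] args) throws InterruptedException {
        // Sample both clocks, wait roughly one second, sample both again.
        long wallStart = System.currentTimeMillis();
        long monoStart = System.nanoTime();

        Thread.sleep(1000); // the exact sleep length doesn't matter here

        long wallElapsedMs = System.currentTimeMillis() - wallStart;
        long monoElapsedMs = (System.nanoTime() - monoStart) / 1_000_000;

        // If the two disagree by more than a tick or so of the coarse clock,
        // something else (drift, an NTP adjustment) is showing up.
        System.out.println("currentTimeMillis elapsed: " + wallElapsedMs + " ms");
        System.out.println("nanoTime elapsed:          " + monoElapsedMs + " ms");
        System.out.println("difference:                " + (wallElapsedMs - monoElapsedMs) + " ms");
    }
}
```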

Joachim Sauer On

Since you don't link to any specific discussions, I can only relay what little experience I have with this topic from the Java side:

The granularity of the classical System.currentTimeMillis() used to be pretty bad (15 ms on Windows XP). This means that the smallest possible difference between two adjacent System.currentTimeMillis() calls that don't return the same value is 15 ms. So if you measure an event that takes 8 ms, you get either 0 ms or 15 ms as the result.
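If you want to see the granularity on your own machine, a quick (and unscientific) probe looks something like this; it just spins until the reported value changes and records the smallest step it observed:

```java
public class MillisGranularityProbe {
    public static void main(String[] args) {
        long smallestStep = Long.MAX_VALUE;

        // Take a handful of samples of "how far does the clock jump
        // when it finally changes value?"
        for (int i = 0; i < 10; i++) {
            long before = System.currentTimeMillis();
            long after;
            do {
                after = System.currentTimeMillis();
            } while (after == before); // spin until the clock ticks over

            smallestStep = Math.min(smallestStep, after - before);
        }

        System.out.println("Smallest observed step: " + smallestStep + " ms");
    }
}
```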

For measuring small time spans that's obviously disastrous. For measuring longer time spans, that's not really a problem.

That's one of the primary reasons why Java introduced System.nanoTime, which was specifically designed to measure small time spans and usually (i.e. when the OS supports it) has a significantly finer granularity (on all the systems I've tested it on, it never returned the same value twice, even when called twice in a row with no calculation in between).
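As a rough illustration of the contrast (the 8 ms figure is just the example from above, and actual results depend on your machine and OS), here is the same short event timed with both APIs:

```java
public class ShortEventTiming {
    public static void main(String[] args) throws InterruptedException {
        // Time the same ~8 ms event with both clocks.
        long wallStart = System.currentTimeMillis();
        long monoStart = System.nanoTime();

        Thread.sleep(8); // stand-in for the short event being measured

        long wallElapsedMs = System.currentTimeMillis() - wallStart;
        double monoElapsedMs = (System.nanoTime() - monoStart) / 1_000_000.0;

        // On a system with a coarse millisecond clock, the first number can
        // come out as 0 or a full tick; nanoTime should report roughly 8 ms.
        System.out.println("currentTimeMillis: " + wallElapsedMs + " ms");
        System.out.printf("nanoTime:          %.3f ms%n", monoElapsedMs);
    }
}
```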

So modern computers can usually provide pretty fine-grained and pretty precise time measurements, provided you use the correct APIs.