Are the values returned by steady_clock::now() from multiple threads consistent with memory ordering?

Within one thread, steady_clock::now() is guaranteed to return monotonically increasing values. How does this interact with memory ordering and reads observed by multiple threads?

atomic<int> arg{0};
steady_clock::time_point a, b, c, d;
int e;
thread t1([&](){
    a = steady_clock::now();
    arg.store(1, memory_order_release);
    b = steady_clock::now();
});
thread t2([&](){
    c = steady_clock::now();
    e = arg.load(memory_order_acquire);
    d = steady_clock::now();
});
t1.join();
t2.join();
assert(a <= b);
assert(c <= d);

Here's the important bit:

if (e) {
    assert(a <= d);
} else {
    assert(c <= b);
}

Can these asserts ever fail? Or have I misunderstood something about acquire-release memory ordering?

What follows is mostly an explanation and elaboration of my code example.

Thread t1 writes to the atomic arg. It also records the current time before and after the write in a and b respectively. steady_clock guarantees that a <= b.

Thread t2 reads from the atomic arg and saves the value read in e. It also records the current time before and after the read in c and d respectively. steady_clock guarantees that c <= d.

Both threads are then joined. At this point e could be 0 or 1.

If e is 0, then t2's load did not observe the value t1 stored. Does this also imply that c = now() in t2 happened before b = now() in t1?

If e is 1, then t2's load did observe the value t1 stored. Does this also imply that a = now() in t1 happened before d = now() in t2?
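
For the e == 1 case, the chain of guarantees I am relying on is roughly this (sketched as comments):

// Intended reasoning when e == 1:
//   a = now()              is sequenced-before  arg.store(1, release)   (within t1)
//   arg.store(1, release)  synchronizes-with    arg.load(acquire) == 1  (between threads)
//   arg.load(acquire)      is sequenced-before  d = now()               (within t2)
// Chaining these gives: a = now() happens-before d = now().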


Here are some existing questions that don't answer what I'm asking:

Is there any std::chrono thread safety guarantee even with multicore context?

I'm not asking whether now() is thread-safe. I know it is.

Is steady_clock monotonic across threads?

This one is much closer, but that example uses a mutex (roughly the pattern sketched below). Can I make the same assumptions about memory orderings weaker than seq_cst?
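
For reference, a mutex-based version of my experiment would look roughly like this; this is my own sketch for comparison, not the code from that question:

mutex m;        // lock/unlock provides acquire/release synchronization
int value = 0;  // plain int protected by m
thread t1([&](){
    a = steady_clock::now();
    { lock_guard<mutex> g(m); value = 1; }
    b = steady_clock::now();
});
thread t2([&](){
    c = steady_clock::now();
    { lock_guard<mutex> g(m); e = value; }
    d = steady_clock::now();
});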

1 Answer

Answered by Filipp:

The question is unfortunately incomplete. Acquire and release memory orders prevent reordering in one direction only. Consider t1:

a = steady_clock::now();
arg.store(1, memory_order_release);
b = steady_clock::now();

a is indeed assigned before arg.store. But the compiler is free to move the assignment to b above the store as well: a release operation only prevents earlier operations from being reordered after it, not later ones from being reordered before it.

Similarly, in t2:

c = steady_clock::now();
e = arg.load(memory_order_acquire);
d = steady_clock::now();

d is indeed assigned after arg.load. But the compiler is free to move the assignment to c below the load as well: an acquire operation only prevents later operations from being reordered before it, not earlier ones from being reordered after it.

Therefore assert(c <= b) may fail.
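
Concretely, a transformation the compiler would be allowed to make looks roughly like this (a sketch, under the assumption discussed below that now() has no observable side effects):

// t1 after a legal reordering: the release store only keeps a = now() above it,
// so b = now() may be hoisted past the store.
a = steady_clock::now();
b = steady_clock::now();
arg.store(1, memory_order_release);

// t2 after a legal reordering: the acquire load only keeps d = now() below it,
// so c = now() may be sunk past the load.
e = arg.load(memory_order_acquire);
c = steady_clock::now();
d = steady_clock::now();

If t1 evaluates both of its now() calls and is then delayed before its store becomes visible, while t2 loads 0 and only afterwards evaluates its now() calls, the result is e == 0 with c > b, and assert(c <= b) fires.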

There is, however, one caveat: this is only true if steady_clock::now() is known to the compiler to have no observable effects. On the platforms where I have looked at its implementation, it results in a call to some opaque library function. Compilers must assume that such a call could have observable effects (such as modifying a volatile variable or terminating the program), and therefore will not reorder the calls.

The question would be valid if the important asserts consisted solely of

if (e) { assert (a <= d); }

or if both memory orders were acq_rel or seq_cst.
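
For instance, strengthening both operations as suggested would look like this (my sketch of that fix, reusing the variables from the question):

// Same experiment with both atomic operations strengthened to seq_cst.
// acq_rel is only valid for read-modify-write operations, so for a plain
// store/load pair seq_cst is the applicable choice.
thread t1([&](){
    a = steady_clock::now();
    arg.store(1, memory_order_seq_cst);
    b = steady_clock::now();
});
thread t2([&](){
    c = steady_clock::now();
    e = arg.load(memory_order_seq_cst);
    d = steady_clock::now();
});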

So, given that either the memory orders are corrected or the assert is checked only when e != 0, I can be sure that the relevant calls to steady_clock::now() actually happen in the correct order, even across threads.

But does this also imply that the values returned are consistent with that order across both t1 and t2, rather than just within t1 and t2 in isolation?

I don't know, which is why I won't accept this answer.