I'm trying to gain a deeper understanding of relaxed memory ordering. Per cppreference, there is no synchronization; however, atomicity is still guaranteed. Doesn't atomicity in this case require some form of sync? E.g., how does `fetch_add()` below guarantee that only one thread will update the value from `y` to `y+1`, particularly if writes can become visible to different threads out of order? Is there an implicit sync associated with `fetch_add`?
> `memory_order_relaxed`
>
> Relaxed operation: there are no synchronization or ordering constraints imposed on other reads or writes, only this operation's atomicity is guaranteed (see Relaxed ordering below)
```cpp
#include <thread>
#include <iostream>
#include <atomic>
#include <vector>
#include <cassert>
#include <cstdint>

using namespace std;

static uint64_t incr = 100000000ULL;
atomic<uint64_t> x; // static storage duration, so zero-initialized

void g()
{
    for (uint64_t i = 0; i < incr; ++i)
    {
        x.fetch_add(1, std::memory_order_relaxed);
    }
}

int main()
{
    int Nthreads = 4;
    vector<thread> vec;
    vec.reserve(Nthreads);
    for (auto idx = 0; idx < Nthreads; ++idx)
        vec.push_back(thread(g));
    for (auto &el : vec)
        el.join();
    // Does not trigger
    assert(x.load() == incr * Nthreads);
}
```
"Synchronization" has a very specific meaning in C++.
It refers to following. Let's say:
Thread A reads/writes to memory X. (doesn't have to be atomic)
Thread A writes to atomic variable Y. (must be a
release
orseq_cst
write)Thread B reads the variable Y, and sees the value previously written by A. (must be an
acquire
orseq_cst
read)At this point, operations (2) and (3) are said to synchronize with each other.
Thread B reads/writes to memory X. (doesn't have to be atomic)
Normally this would cause a data race with thread A (undefined behavior), but it doesn't because of the synchronization.
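To make the four steps concrete, here is a minimal sketch (names like `payload` and `ready` are mine, not from your example): the release store and the acquire load that observes it form the synchronizes-with edge, and that edge is what makes the non-atomic accesses in thread B legal.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                 // "memory X": plain, non-atomic data
std::atomic<bool> ready{false};  // "atomic variable Y"

void thread_a()
{
    payload = 42;                                  // (1) non-atomic write to X
    ready.store(true, std::memory_order_release);  // (2) release write to Y
}

void thread_b()
{
    // (3) acquire read of Y; spin until we see the value written by A
    while (!ready.load(std::memory_order_acquire))
        ;
    // (2) and (3) synchronize with each other, so A's write to payload
    // happens-before this read:
    assert(payload == 42);                         // (4) non-atomic read of X, no data race
}

int main()
{
    std::thread a(thread_a), b(thread_b);
    a.join();
    b.join();
}
```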
This only works with `release`/`acquire`/`seq_cst` operations, and not `relaxed` operations. That's what the quote means. Atomicity is a separate, weaker guarantee: even a `relaxed` `fetch_add` is a single indivisible read-modify-write, so no two threads can both turn the same `y` into `y+1` and lose an increment; what `relaxed` gives up is only the ordering of the surrounding memory accesses.
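For contrast, a sketch of the same publishing pattern with `relaxed` (again with made-up names): the store to `ready` is still atomic, but with no release/acquire pairing there is no synchronizes-with edge, so the access to `payload` becomes a data race.

```cpp
#include <atomic>
#include <thread>

int payload = 0;
std::atomic<bool> ready{false};

void writer()
{
    payload = 42;
    ready.store(true, std::memory_order_relaxed);  // still atomic, but publishes nothing
}

void reader()
{
    while (!ready.load(std::memory_order_relaxed))
        ;
    // Data race: nothing orders the write to payload before this read.
    // This is undefined behavior, and even on hardware that tolerates the
    // race, payload could still be observed as 0 here.
    int seen = payload;
    (void)seen;
}

int main()
{
    std::thread w(writer), r(reader);
    w.join();
    r.join();
}
```

The counter in your example never uses `x` to publish any other memory, so it never needs a synchronizes-with edge; the per-operation atomicity of `fetch_add` alone is enough to make the final count exact.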