In C++ and the Perils of Double-Checked Locking, the authors suggest pseudocode that implements the pattern correctly. See below:
Singleton* Singleton::instance () {
    Singleton* tmp = pInstance;
    ...                      // insert memory barrier (1)
    if (tmp == 0) {
        Lock lock;
        tmp = pInstance;
        if (tmp == 0) {
            tmp = new Singleton;
            ...              // insert memory barrier (2)
            pInstance = tmp;
        }
    }
    return tmp;
}
I just wonder whether the first memory barrier can be moved right above the return statement.
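For reference, here is a minimal sketch (my own, not from the paper) of how the two barriers map onto C++11 atomics, assuming pInstance is made a std::atomic<Singleton*> and a mutex guards construction; the acquire load plays the role of barrier (1) and the release store plays the role of barrier (2):

#include <atomic>
#include <mutex>

class Singleton {
public:
    static Singleton* instance();
private:
    Singleton() = default;
    static std::atomic<Singleton*> pInstance;  // assumption: the pointer is made atomic
    static std::mutex mtx;                     // assumption: a mutex guards construction
};

std::atomic<Singleton*> Singleton::pInstance{nullptr};
std::mutex Singleton::mtx;

Singleton* Singleton::instance() {
    // Acquire load: plays the role of barrier (1) in the paper's pseudocode.
    Singleton* tmp = pInstance.load(std::memory_order_acquire);
    if (tmp == nullptr) {
        std::lock_guard<std::mutex> lock(mtx);
        tmp = pInstance.load(std::memory_order_relaxed);   // re-check under the lock
        if (tmp == nullptr) {
            tmp = new Singleton;
            // Release store: plays the role of barrier (2); the constructor's
            // writes cannot be reordered after this store.
            pInstance.store(tmp, std::memory_order_release);
        }
    }
    return tmp;
}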
EDIT: Another question: In the linked article, as vidstige quoted
Technically, you don’t need full bidirectional barriers. The first barrier must prevent downwards migration of Singleton’s construction (by another thread); the second barrier must prevent upwards migration of pInstance’s initialization. These are called "acquire" and "release" operations, and may yield better performance than full barriers on hardware (such as Itanium) that makes the distinction.
It says that the second barrier doesn't need to be bidirectional, so how can it prevent the assignment to pInstance from being moved before that barrier? Even though the first barrier can prevent upwards migration, another thread may still have a chance to see the uninitialized members.
EDIT: I think I almost understand the purpose of the first barrier. As sonicoder noted, branch prediction may cause tmp to be NULL when the if condition is true. To avoid that problem, there must be an acquire barrier that prevents the read of tmp in the return statement from happening before the read in the if.
The first barrier is paired with the second barrier to achieve a synchronizes-with relationship, so it can be moved down.
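To make that concrete, here is a sketch (my own, reusing the std::atomic<Singleton*> pInstance and mutex assumed above) with standalone C++11 fences, where barrier (1) is moved down so it sits right before the return; the acquire fence still pairs with the writer's release fence:

Singleton* Singleton::instance() {
    Singleton* tmp = pInstance.load(std::memory_order_relaxed);
    if (tmp == nullptr) {
        std::lock_guard<std::mutex> lock(mtx);
        tmp = pInstance.load(std::memory_order_relaxed);
        if (tmp == nullptr) {
            tmp = new Singleton;
            // Barrier (2): release fence before publishing the pointer keeps the
            // constructor's writes from moving past the store below.
            std::atomic_thread_fence(std::memory_order_release);
            pInstance.store(tmp, std::memory_order_relaxed);
        }
    }
    // Barrier (1) moved down: the acquire fence orders the earlier load of
    // pInstance before any later reads of *tmp done by the caller.
    std::atomic_thread_fence(std::memory_order_acquire);
    return tmp;
}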
EDIT: For those who are interested in this question, I strongly recommend reading memory-barriers.txt.
I didn't see any correct answer here related to your question, so I decided to post one even after more than three years ;)
Yes, it can.
It's for threads that won't enter the if statement, i.e., pInstance has already been constructed, initialized correctly, and is visible.

The second barrier (the one right before pInstance = tmp;) guarantees that the initialization of the singleton's member fields is committed to memory before pInstance = tmp; is committed. But this does NOT necessarily mean that other threads (on other cores) will see these memory effects in the same order (counter-intuitive, right?). A second thread may see the new value of the pointer in its cache but not those member fields yet. When it accesses a member by dereferencing the pointer (e.g., p->data), the address of that member may already be in its cache, but not the data that's desired. Bang! Wrong data is read. Note that this is more than theoretical: there are systems on which you need to perform a cache-coherence instruction (e.g., a memory barrier) to pull new data from memory.

That's why the first barrier is there. It also explains why it's OK to place it right before the return statement (but it has to be after Singleton* tmp = pInstance;).

A write barrier guarantees that every write preceding it will effectively happen before every write following it. It's a stop sign, and no write can cross it to the other side. For a more detailed description, refer to here.
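As a side note, here is a minimal writer/reader sketch (my own example, not from the article; data and ready are hypothetical names) showing how such a pair of barriers works together:

#include <atomic>
#include <cassert>

int data = 0;                    // hypothetical payload written before publishing
std::atomic<bool> ready{false};  // hypothetical flag used to publish it

void writer() {
    data = 42;                                             // write A
    std::atomic_thread_fence(std::memory_order_release);   // no write above may move below
    ready.store(true, std::memory_order_relaxed);          // write B: publish
}

void reader() {
    if (ready.load(std::memory_order_relaxed)) {              // saw write B...
        std::atomic_thread_fence(std::memory_order_acquire);  // pairs with the writer's fence
        assert(data == 42);                                    // ...so write A is visible too
    }
}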