How does a processor know which copy of a cache line is the latest in a multiprocessor system?


In a multiprocessor system where each processor has its own cache, how does a processor know where to get a copy of the data? The data may be present in its own cache, in the caches of other processors, or in main memory, so how does it know which copy is the latest one?


There are 2 answers

Answered by Basile Starynkevitch

Most processors (in particular the x86 processors in our laptops, desktops, and servers) have hardware-provided cache coherence.

In addition, synchronization instructions such as memory barriers are usually provided.

Reportedly, some of these synchronization instructions can be quite slow.

The recent C++11 and C11 standards have specific wording and atomic data types to deal with this, such as C++11's std::atomic.

In practice, you should use a well-established threading library such as pthreads (or the C++11 std::thread facilities), as in the sketch below.
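Here is a minimal sketch of how std::atomic and std::thread fit together; the counter name, thread count, and iteration count are purely illustrative:

```cpp
#include <atomic>
#include <thread>
#include <vector>
#include <iostream>

int main() {
    // Illustrative only: an atomic counter shared by several threads.
    // The hardware cache-coherence protocol keeps every core's view of this
    // variable consistent; std::atomic adds atomicity and ordering guarantees.
    std::atomic<long> counter{0};

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([&counter] {
            for (int j = 0; j < 100000; ++j)
                counter.fetch_add(1, std::memory_order_relaxed);
        });
    }
    for (auto& t : workers) t.join();

    std::cout << counter.load() << '\n';  // always prints 400000
    return 0;
}
```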

Answered by AudioBubble

In a typical modern cache-coherent system, if the contents of a memory address are present in multiple caches, their contents will be the same. Using the typical invalidation-based coherence mechanism, in order for a processor to change the content, it must gain exclusive ownership of that block of memory. This is done by invalidating any other cached copies. Any subsequent request from a processor that previously had the block cached will result in a miss (the block was invalidated), and a coherence action will find the updated content in the writing processor's cache.
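The invalidation flow can be sketched as a toy state machine. This is a simplified MSI-style model for one cache line, not any real processor's protocol; the names (Line, write, read) and the four-cache setup are made up for illustration:

```cpp
#include <array>
#include <cstdio>

// Toy MSI-style model: one cache line, four caches, no real memory system.
enum class State { Invalid, Shared, Modified };

struct Line { State state = State::Invalid; int data = 0; };

std::array<Line, 4> caches;   // one copy of the line per "processor"
int memory = 0;               // backing store for the line

// Processor `p` writes: it must gain exclusive ownership, so every other
// copy is invalidated before the write is performed.
void write(int p, int value) {
    for (int i = 0; i < 4; ++i)
        if (i != p) caches[i].state = State::Invalid;
    caches[p] = {State::Modified, value};
}

// Processor `p` reads: on a miss, the up-to-date data is found either in
// another cache holding the line Modified, or in memory.
int read(int p) {
    if (caches[p].state == State::Invalid) {           // miss
        for (auto& c : caches)
            if (c.state == State::Modified) {          // fetch from the owner
                memory = c.data;                       // write back
                c.state = State::Shared;
            }
        caches[p] = {State::Shared, memory};
    }
    return caches[p].data;
}

int main() {
    write(0, 42);                 // P0 owns the line; other copies invalidated
    std::printf("%d\n", read(2)); // P2 misses and gets 42 from P0's cache
}
```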

(In earlier implementations of cache coherence with write-through caches, a common bus to memory could be snooped to grab any content changes. Similarly, a processor changing content could broadcast or multicast the changes to any sharers. These methods would keep cached contents the same.)

A more subtle aspect of this process is memory consistency: how different processors see the ordering of memory accesses to different addresses. Under sequential consistency, all processors see a single ordering of every read and write in the system. This is the easiest consistency model to understand, but supporting greater parallelism increases hardware complexity (e.g., rather than waiting to confirm that no ordering conflict exists, a processor can speculatively continue execution and roll back to a previous known-correct state if an ordering conflict occurs).
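The classic store-buffering litmus test shows what sequential consistency rules out. This sketch assumes C++ std::atomic with its default sequentially consistent ordering; the variable names are arbitrary:

```cpp
#include <atomic>
#include <thread>
#include <cassert>

std::atomic<int> x{0}, y{0};
int r1 = -1, r2 = -1;

// Under sequential consistency (the default for std::atomic), r1 == 0 and
// r2 == 0 cannot both happen: there must be a single interleaving of all
// four accesses, and in any such interleaving at least one store precedes
// the other thread's load.
int main() {
    std::thread t1([] { x.store(1); r1 = y.load(); });
    std::thread t2([] { y.store(1); r2 = x.load(); });
    t1.join(); t2.join();
    assert(!(r1 == 0 && r2 == 0));  // the forbidden outcome under seq_cst
}
```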

A relaxed consistency model allows reads and writes to be observed in inconsistent orders by different processors. To provide stronger ordering guarantees, memory barrier operations are provided. These operations guarantee that certain types of memory accesses issued after the barrier (in the program order of the processor performing it) will be observed by all other processors as occurring after the barrier, and that certain types of accesses issued before the barrier will be observed as occurring before it.
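A minimal sketch of the usual message-passing pattern, using C++ release/acquire operations as the barrier-like primitives (explicit fences such as std::atomic_thread_fence could be used the same way); the payload and flag names are illustrative:

```cpp
#include <atomic>
#include <thread>
#include <cassert>

int payload = 0;                   // ordinary, non-atomic data
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                  // (1) write the data
    ready.store(true, std::memory_order_release);  // (2) publish: the release acts
                                                   //     like a barrier, so (1) cannot
                                                   //     be observed after (2)
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) { }  // (3) wait for the flag
    assert(payload == 42);                               // (4) guaranteed to see the data
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join(); t2.join();
}
```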

A system using a relaxed consistency model could provide the same behavior as a sequentially consistent system by issuing a memory barrier after every memory access. However, systems using a relaxed model will generally not handle such heavy use of barriers well, since they are designed to exploit the relaxed demands on memory ordering.