I am trying to understand why cache coherence protocols are designed the way they are. The goal of cache coherence is to serialize reads/writes to a particular memory location across all cores.
Suppose the writes to memory location A are serialized as A1, A2, A3. Then, once a core reads value A2, it can never read A1 in the future, but it may read A3 at some later point.
I understand this is the goal of the cache coherence protocols.
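For concreteness, here is a minimal sketch of that guarantee using C11 relaxed atomics (relaxed so that only per-location coherence, and not a stronger memory model, is in play). The reader thread should never observe the values of A move backwards in the write order:

```c
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

atomic_int A = 0;   /* a single shared memory location */

void *writer(void *arg) {
    (void)arg;
    /* The coherence protocol serializes these stores as A1, A2, A3. */
    atomic_store_explicit(&A, 1, memory_order_relaxed);
    atomic_store_explicit(&A, 2, memory_order_relaxed);
    atomic_store_explicit(&A, 3, memory_order_relaxed);
    return NULL;
}

void *reader(void *arg) {
    (void)arg;
    int prev = 0;
    for (int i = 0; i < 1000; i++) {
        int cur = atomic_load_explicit(&A, memory_order_relaxed);
        /* Coherence guarantees reads of A never go backwards in the
           write order: once 2 has been read, 1 can never be read again. */
        if (cur < prev)
            printf("coherence violation: read %d after %d\n", cur, prev);
        prev = cur;
    }
    return NULL;
}

int main(void) {
    pthread_t w, r;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}
```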
The current protocols (the standard ones I studied, like MSI, MESI, etc.) involve communication among cores on every read/write, or at least on every few of them. This introduces cache coherence traffic.
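To illustrate where that traffic comes from, here is a minimal sketch of the requesting-side transitions of a plain MSI protocol, as I understand it from textbooks. The message names (BusRd, BusRdX, BusUpgr) follow the usual convention; snoop responses and writebacks are omitted, so this is only an illustration, not a full implementation:

```c
#include <stdio.h>

/* MSI state of one cache line, as seen by a single core's cache. */
typedef enum { INVALID, SHARED, MODIFIED } msi_state;

/* Bus messages a local access may have to broadcast before it completes. */
typedef enum { MSG_NONE, MSG_BUS_RD, MSG_BUS_RDX, MSG_BUS_UPGR } bus_msg;

/* Returns the bus message generated by a local read/write and updates the
 * line's state. The message is sent *before* the access is allowed to
 * complete -- this is the "proactive" part of the protocol. */
bus_msg local_access(msi_state *s, int is_write) {
    switch (*s) {
    case INVALID:
        *s = is_write ? MODIFIED : SHARED;
        return is_write ? MSG_BUS_RDX   /* fetch line and invalidate others */
                        : MSG_BUS_RD;   /* fetch a shared copy */
    case SHARED:
        if (is_write) {
            *s = MODIFIED;
            return MSG_BUS_UPGR;        /* invalidate the other sharers */
        }
        return MSG_NONE;                /* read hit: no traffic */
    case MODIFIED:
        return MSG_NONE;                /* read/write hit: no traffic */
    }
    return MSG_NONE;
}

int main(void) {
    const char *names[] = { "none", "BusRd", "BusRdX", "BusUpgr" };
    msi_state s = INVALID;
    printf("read miss   -> %s\n", names[local_access(&s, 0)]);  /* BusRd   */
    printf("write to S  -> %s\n", names[local_access(&s, 1)]);  /* BusUpgr */
    printf("write to M  -> %s\n", names[local_access(&s, 1)]);  /* none    */
    return 0;
}
```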
Why don't cache coherence protocols communicate only when
- a dirty cache line is evicted, or
- another core wants to read a cache line that is dirty in some other core's cache?
Why are cache coherence protocols "proactive" rather than "passive"? The strategy I suggest (sketched below) would, I believe, still serialize reads/writes to a particular memory location while saving needless coherence traffic.
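For reference, this is roughly the "passive" scheme I have in mind, written against a hypothetical directory-like record per line (all names here are made up for illustration). Only the two communication points from the list above are modeled; everything else, including what happens when two different cores write the same line, is left unspecified here just as it is in the question:

```c
#include <stdio.h>

#define NO_OWNER -1

/* Per-line record: memory's copy plus which core (if any) holds a dirty copy. */
typedef struct {
    int mem_value;    /* value in main memory                  */
    int dirty_owner;  /* core id holding a dirty copy, or -1   */
    int dirty_value;  /* that core's private (dirty) value     */
} line_t;

/* Local write: just mark ourselves as the dirty owner. No messages are sent
 * to other cores at this point -- that is the "passive" part. */
void local_write(line_t *l, int core, int value) {
    l->dirty_owner = core;
    l->dirty_value = value;
}

/* Communication point 1: evicting a dirty line writes it back to memory. */
void evict_dirty(line_t *l, int core) {
    if (l->dirty_owner == core) {
        l->mem_value   = l->dirty_value;   /* writeback message */
        l->dirty_owner = NO_OWNER;
    }
}

/* Communication point 2: another core reads a line that is dirty elsewhere,
 * so the dirty owner is asked to supply (and write back) its value. */
int remote_read(line_t *l, int core) {
    if (l->dirty_owner != NO_OWNER && l->dirty_owner != core) {
        l->mem_value   = l->dirty_value;   /* forward + writeback message */
        l->dirty_owner = NO_OWNER;
    }
    return (l->dirty_owner == core) ? l->dirty_value : l->mem_value;
}

int main(void) {
    line_t A = { .mem_value = 0, .dirty_owner = NO_OWNER, .dirty_value = 0 };
    local_write(&A, /*core=*/0, 42);                  /* no coherence traffic */
    printf("core 1 reads %d\n", remote_read(&A, 1));  /* traffic only here    */
    return 0;
}
```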