Is CLMUL constant time?


Does the carry-less multiplication instruction run in constant time? Said differently, is the time it takes to execute independent of its arguments?
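For context, here is a minimal Python sketch of what a 64-bit carry-less multiply computes (the operation PCLMULQDQ performs on one qword from each source): each set bit of one operand XORs a shifted copy of the other into the product, with no carry propagation. This is a model for illustration, not how the hardware implements it.

```python
def clmul64(a: int, b: int) -> int:
    """Carry-less (GF(2) polynomial) multiply of two 64-bit values,
    producing a 128-bit result. XOR replaces addition, so no carries."""
    result = 0
    for i in range(64):
        if (b >> i) & 1:
            result ^= a << i   # shifted partial product, combined with XOR
    return result

# In ordinary arithmetic 3 * 3 = 9, but carry-lessly
# 0b11 clmul 0b11 = 0b11 ^ 0b110 = 0b101 = 5:
print(clmul64(0b11, 0b11))  # 5
```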


Answered by Peter Cordes (score 13):

According to https://agner.org/optimize/, PCLMULQDQ has fixed latency on any given CPU. (http://www.uops.info/table.html doesn't list a latency for it, but has good data for most instructions.)

There's no reason to expect it to be data-dependent: in modern high-performance CPUs, typically only division / sqrt has data-dependent performance. Regular multiply doesn't; instead, designers just make it fast for the general case with lots of hardware parallelism inside the execution unit.

Out-of-order instruction scheduling is a lot easier when uops have fixed latency, and so is building fully-pipelined execution units for them. The scheduler (reservation station) can avoid having 2 operations finish at the same time on the same port and create a write-back conflict, or worse, finish in the same execution unit and cause stalls within it. This is why fixed latency is very common.

(A microcoded multi-uop pclmulqdq with branching could potentially have variable latency, or more plausibly latency that depends on the immediate operand: maybe an extra shuffle uop or two when the immediate is non-zero. So the fixed-latency argument for a single uop doesn't necessarily apply to a micro-coded instruction, but pclmulqdq is still simple enough that you wouldn't expect it to actually branch internally the way rep movsb has to.)


As @fuz points out, PCLMUL was made for crypto, so data-dependent performance would make it vulnerable to timing attacks. So there's a very strong reason to make PCLMUL constant time. (Or at worst, dependent on the immediate, but not the register/memory sources. e.g. an immediate other than 0 could cost extra shift uops to get the high halves of the sources fed to a 64x64 => 128 carryless-multiply unit.)
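To make the immediate's role concrete, here's a Python model of PCLMULQDQ's operand selection: per the instruction's definition, bit 0 of the immediate picks the low or high qword of the first 128-bit source, and bit 4 picks the qword of the second. This is an illustrative model of the architectural behavior, not a claim about the hardware's internal uop breakdown.

```python
MASK64 = (1 << 64) - 1

def pclmulqdq(src1: int, src2: int, imm8: int) -> int:
    """Model of PCLMULQDQ on two 128-bit sources.
    imm8 bit 0 selects low (0) or high (1) qword of src1;
    imm8 bit 4 selects low (0) or high (1) qword of src2."""
    a = (src1 >> 64) if (imm8 & 0x01) else (src1 & MASK64)
    b = (src2 >> 64) if (imm8 & 0x10) else (src2 & MASK64)
    # 64x64 => 128-bit carry-less multiply of the selected qwords
    result = 0
    for i in range(64):
        if (b >> i) & 1:
            result ^= a << i
    return result
```

Note that whichever qwords the immediate selects, the same 64x64 carry-less multiply runs on them, which is consistent with latency depending (at most) on the immediate rather than on the data.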


Numbers from Agner Fog's tables

On Intel since Broadwell, pclmulqdq is 1 uop. On Skylake, it's 7 cycle latency, 1 per clock throughput. (So you need to keep 7 independent PCLMUL operations in flight to saturate the execution unit on port 5.) Broadwell has 5 cycle latency. With a memory source operand, it's 1 extra uop.
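That in-flight count is just latency times throughput (Little's law). A quick sketch, using the Skylake and Haswell numbers from Agner Fog's tables quoted here:

```python
def ops_in_flight(latency_cycles: float, ops_per_cycle: float) -> float:
    """Independent operations needed in flight to saturate a
    fully-pipelined execution unit (Little's law: L = latency * rate)."""
    return latency_cycles * ops_per_cycle

print(ops_in_flight(7, 1.0))  # Skylake: 7.0 independent PCLMULs
print(ops_in_flight(7, 0.5))  # Haswell: 3.5, so ~4 independent chains
```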

On Haswell, it's 3 uops (2p0 p5) with 7 cycle latency and one per 2 clock throughput.

On Sandybridge/IvyBridge it's 18 uops, 14c latency, one per 8 clock throughput.

On Westmere (2nd Gen Nehalem) it's 12c latency, one per 8c throughput. (Unknown number of uops; neither Agner Fog nor uops.info has it, but we can safely assume it's microcoded.) This was the first generation to support the instruction, one of the very few differences from Nehalem to Westmere.


On Ryzen it's 4 uops, 4c latency, one per 2 clock throughput. http://instlatx64.atw.hu/ shows 4.5 cycle latency; I'm not sure what accounts for the difference between their testing and Agner's.

On Piledriver it's 5 uops, 12c latency, one per 7 clock throughput.


On Jaguar it's 1 uop, 3c latency, one per 1 clock throughput!

On Silvermont it's 8 uops, 10c latency/throughput. On Goldmont it's 3 uops, 6c latency, one per 3c throughput.


See also "What considerations go into predicting latency for operations on modern superscalar processors and how can I calculate them by hand?" and Agner Fog's optimization guide to understand how latency and throughput (and front-end bottlenecks) matter for performance on out-of-order CPUs, depending on the surrounding code.