I need to implement a prefix sum algorithm, and it needs to be as fast as possible.
Ex:
[3, 1, 7, 0, 4, 1, 6, 3]
should give:
[3, 4, 11, 11, 15, 16, 22, 25]
Is there a way to do this using SSE SIMD CPU instructions?
My first idea is to sum each pair in parallel, recursively, until all sums have been computed, as shown below:
// in parallel do
for (int i = 0; i < z.length; i++) {
    z[i] = x[i << 1] + x[(i << 1) + 1];
}
To make the algorithm a little more clear: z is not the final output, but is instead used to compute the output.
int[] w = computePrefixSum(z);
output[0] = x[0]; // first element of the prefix sum is just x[0]
for (int i = 1; i < output.length; i++) {
    output[i] = (i % 2 == 0) ? (x[i] + output[i - 1]) : w[(i - 1) >> 1];
}
The fastest parallel prefix sum algorithm I know of is to run over the data in two passes in parallel and use SSE as well in the second pass.
In the first pass you calculate partial sums in parallel and store the total sum for each partial sum. In the second pass you add the total sum from the preceding partial sum to the next partial sum. You can run both passes in parallel using multiple threads (e.g. with OpenMP). In the second pass you can also use SIMD, since a constant value is being added to each partial sum.
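A minimal scalar sketch of the two passes with OpenMP might look like this (the function name prefix_sum_two_pass and the explicit block partitioning are only illustrative, not the tuned code linked below):

#include <omp.h>
#include <stdlib.h>

// Minimal sketch: two-pass inclusive prefix sum of n floats from a[] into s[].
void prefix_sum_two_pass(const float *a, float *s, int n) {
    int nthreads = omp_get_max_threads();
    float *offset = calloc(nthreads + 1, sizeof(float)); // offset[0] = 0
    #pragma omp parallel num_threads(nthreads)
    {
        int t = omp_get_thread_num();
        int lo = (int)((long long)n * t / nthreads);
        int hi = (int)((long long)n * (t + 1) / nthreads);
        float sum = 0.0f;
        for (int i = lo; i < hi; i++) {        // pass 1: local prefix sum
            sum += a[i];
            s[i] = sum;
        }
        offset[t + 1] = sum;                   // total of this thread's block
        #pragma omp barrier
        #pragma omp single
        for (int k = 1; k <= nthreads; k++)    // scan the block totals (cheap)
            offset[k] += offset[k - 1];
        for (int i = lo; i < hi; i++)          // pass 2: add the block offset
            s[i] += offset[t];
    }
    free(offset);
}

The second-pass loop just adds the same constant to every element of a block, which is why it vectorizes trivially with SSE.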
Assuming n elements of an array, m cores, and a SIMD width of w, the time cost should be

n/m + n/(m*w) = (n/m)*(1 + 1/w)

Since the first pass does not use SIMD, the time cost will always be greater than n/m.

For example, for four cores with a SIMD width of 4 (four 32-bit floats with SSE) the cost would be 5n/16, or about 3.2 times faster than sequential code, which has a time cost of n. Using hyper-threading the speed-up will be greater still.

In special cases it's possible to use SIMD on the first pass as well. Then the time cost is simply 2*n/(m*w).
I posted the code for the general case, which uses OpenMP for the threading and intrinsics for the SSE code, and I discuss details about the special case at the following link: parallel-prefix-cumulative-sum-with-sse
Edit: I managed to find a SIMD version for the first pass which is about twice as fast as sequential code. Now I get a total boost of about 7 on my four-core Ivy Bridge system.
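The core building block for a SIMD first pass is an in-register prefix sum of four floats built from byte shifts and adds; a sketch of that standard SSE trick (not necessarily the exact first-pass code referred to above) looks like this:

#include <emmintrin.h>   // SSE2 intrinsics

// In-register inclusive prefix sum of 4 packed floats.
// After the two shift-and-add steps, lane i holds x[0] + ... + x[i].
static inline __m128 scan_sse(__m128 x) {
    x = _mm_add_ps(x, _mm_castsi128_ps(_mm_slli_si128(_mm_castps_si128(x), 4)));
    x = _mm_add_ps(x, _mm_castsi128_ps(_mm_slli_si128(_mm_castps_si128(x), 8)));
    return x;
}

A whole block can then be scanned by keeping a running total in a register, adding it to each scanned group of four, and refreshing it by broadcasting the last lane of the result.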
Edit: For larger arrays, one problem is that after the first pass most values have been evicted from the cache. I came up with a solution which runs in parallel inside a chunk but runs each chunk serially. The chunk_size is a value that should be tuned; for example, I set it to 1MB = 256K floats. Now the second pass is done while the values are still inside the level-2 cache. Doing this gives a big improvement for large arrays.

Here is the code for SSE. The AVX code is about the same speed, so I did not post it here. The function which does the prefix sum is scan_omp_SSEp2_SSEp1_chunk. Pass it an array a of floats and it fills the array s with the cumulative sum.
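Since the full code lives at the link above, here is only a rough sketch of how a function with that interface could be structured (assuming n is a multiple of 4; the hard-coded chunk_size and the block partitioning are simplifications of my own):

#include <emmintrin.h>   // SSE2 intrinsics
#include <omp.h>
#include <stdlib.h>

// Rough sketch only: inclusive prefix sum of n floats from a[] into s[].
// Chunks are processed serially so pass 2 reuses data still in cache;
// inside a chunk, both passes run in parallel and use SSE.
void scan_omp_SSEp2_SSEp1_chunk(const float *a, float *s, int n) {
    const int chunk_size = 1 << 18;                 // 256K floats = 1 MB, should be tuned
    int nthreads = omp_get_max_threads();
    float *offset = malloc((nthreads + 1) * sizeof(float));
    float carry = 0.0f;                             // running total across chunks
    for (int start = 0; start < n; start += chunk_size) {
        int len = (n - start < chunk_size) ? n - start : chunk_size;
        const float *ac = a + start;
        float *sc = s + start;
        offset[0] = carry;
        #pragma omp parallel num_threads(nthreads)  // assumes the team really has nthreads threads
        {
            int t = omp_get_thread_num();
            int lo = (int)((long long)len * t / nthreads) & ~3;
            int hi = (int)((long long)len * (t + 1) / nthreads) & ~3;
            if (t == nthreads - 1) hi = len;
            // Pass 1 (SSE): local prefix sum of this thread's block.
            __m128 run = _mm_setzero_ps();
            for (int i = lo; i < hi; i += 4) {
                __m128 x = _mm_loadu_ps(&ac[i]);
                // in-register scan of the 4 lanes (shift-and-add trick from above)
                x = _mm_add_ps(x, _mm_castsi128_ps(_mm_slli_si128(_mm_castps_si128(x), 4)));
                x = _mm_add_ps(x, _mm_castsi128_ps(_mm_slli_si128(_mm_castps_si128(x), 8)));
                x = _mm_add_ps(x, run);             // add running total of the block
                _mm_storeu_ps(&sc[i], x);
                run = _mm_shuffle_ps(x, x, _MM_SHUFFLE(3, 3, 3, 3)); // broadcast last lane
            }
            offset[t + 1] = _mm_cvtss_f32(run);     // total of this block
            #pragma omp barrier
            #pragma omp single
            for (int k = 1; k <= nthreads; k++)     // scan the block totals (cheap, serial)
                offset[k] += offset[k - 1];
            // Pass 2 (SSE): add this block's offset to every element.
            __m128 off = _mm_set1_ps(offset[t]);
            for (int i = lo; i < hi; i += 4)
                _mm_storeu_ps(&sc[i], _mm_add_ps(_mm_loadu_ps(&sc[i]), off));
        }
        carry = offset[nthreads];                   // feeds the next chunk
    }
    free(offset);
}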