I am trying to find the sum reduction of 32 elements (each 1 byte of data) on an Intel i3 processor. I did this:
int s = 0;
for (int i = 0; i < 32; i++)   /* a[] holds 32 unsigned bytes */
{
    s = s + a[i];
}
However, it's taking too much time; my application is a real-time application that needs this to be much faster. Please note that the final sum could be more than 255.
Is there a way I can implement this using low-level SIMD SSE2 instructions? Unfortunately I have never used SSE. I tried searching for an SSE2 function for this purpose, but couldn't find one. Is SSE guaranteed to reduce the computation time for such a small-sized problem?
Any suggestions??
Note: I have implemented similar algorithms using OpenCL and CUDA, and they worked great, but only when the problem size was big; for small problems the overhead cost was too high. I'm not sure how that works out with SSE.
You can abuse PSADBW to calculate horizontal sums of bytes without overflow: a sum of absolute differences against an all-zero vector leaves the sum of each group of 8 bytes in the corresponding 64-bit element of the result.

Intrinsics version:
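A minimal sketch of what that intrinsics version can look like (the function name sum32 and the choice of unaligned loads are assumptions here, not taken verbatim from the original):

#include <immintrin.h>
#include <cstdint>

uint32_t sum32(const uint8_t a[32])
{
    const __m128i zero = _mm_setzero_si128();
    // psadbw against zero: each 64-bit half gets the sum of its 8 bytes
    __m128i sad0 = _mm_sad_epu8(_mm_loadu_si128(reinterpret_cast<const __m128i*>(&a[0])), zero);
    __m128i sad1 = _mm_sad_epu8(_mm_loadu_si128(reinterpret_cast<const __m128i*>(&a[16])), zero);
    __m128i sum  = _mm_add_epi32(sad0, sad1);          // combine the two vectors of partial sums
    __m128i hi   = _mm_unpackhi_epi64(sum, sum);       // bring the high 64-bit half down
    return _mm_cvtsi128_si32(_mm_add_epi32(sum, hi));  // movd: extract the low 32 bits
}

The whole thing is only two loads, two psadbw, two paddd, one shuffle and a movd, with no loop.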
This compiles to the same psadbw-based asm portably across compilers, as you can see on Godbolt.
The reinterpret_cast<const __m128i*> is necessary because Intel intrinsics before AVX-512 for integer vector load/store take __m128i* pointer args, instead of a more convenient void*. Some prefer more compact C-style casts like _mm_loadu_si128( (const __m128i*) &a[16] ) as a style choice.

16 vs. 32 vs. 64-bit SIMD element size doesn't matter much: 16 and 32 are equally efficient on all machines, and 32-bit will avoid overflow even if you use this for summing much larger arrays. (paddq is slower on some old CPUs like Core 2; see https://agner.org/optimize/ and https://uops.info/.) Extracting as 32-bit is definitely more efficient than _mm_extract_epi16 (pextrw).
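To illustrate that last point about larger arrays, here is a hedged sketch of the same trick with a 32-bit vertical accumulator (the name sum_bytes and the requirement that the length be a multiple of 16 are my assumptions):

#include <immintrin.h>
#include <cstdint>
#include <cstddef>

// Sum n bytes (n assumed to be a multiple of 16 for this sketch).
uint32_t sum_bytes(const uint8_t* a, size_t n)
{
    const __m128i zero = _mm_setzero_si128();
    __m128i acc = _mm_setzero_si128();
    for (size_t i = 0; i < n; i += 16) {
        __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(&a[i]));
        acc = _mm_add_epi32(acc, _mm_sad_epu8(v, zero));  // paddd of psadbw results
    }
    __m128i hi = _mm_unpackhi_epi64(acc, acc);            // high half down to the low half
    return _mm_cvtsi128_si32(_mm_add_epi32(acc, hi));
}

Each psadbw result is at most 8 * 255 = 2040 per 64-bit half, so the 32-bit accumulator lanes can't wrap until the array is tens of megabytes of 0xFF bytes.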