I've got a program that's SIGSEGV'ing in library code. Nothing jumps out at me when I look at the statement that's causing the SIGSEGV (see below). But the code uses Intel's AES-NI, and I'm not that familiar with it.
I issued handle all in hopes of catching the trap that's causing the SIGSEGV, but the program still just crashes rather than telling me which trap fired.
How can I get GDB to display the CPU trap that's causing the SIGSEGV?
Program received signal SIGSEGV, Segmentation fault.
0x00000000004ddf0b in CryptoPP::AESNI_Dec_Block(long long __vector&, long long __vector const*, unsigned int) (block=..., subkeys=0x7fffffffdc60, rounds=0x0)
at rijndael.cpp:1040
1040 block = _mm_aesdec_si128(block, subkeys[i+1]);
(gdb) p block
$1 = (__m128i &) @0x7fffffffcec0: {0x2e37c840668d6030, 0x431362358943e432}
(gdb) x/16b 0x7fffffffcec0
0x7fffffffcec0: 0x30 0x60 0x8d 0x66 0x40 0xc8 0x37 0x2e
0x7fffffffcec8: 0x32 0xe4 0x43 0x89 0x35 0x62 0x13 0x43
You can't: GDB doesn't get to see the trap, only the OS does.
What you can see is the instruction that caused the trap:
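For example, from the faulting frame (these are standard GDB commands; the exact output depends on your build):

(gdb) x/i $pc
(gdb) info registers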
It's likely that the problem is alignment. I don't know what long long __vector is, but if it's not a 16-byte entity, then subkeys[i+1] is not going to be 16-byte aligned, which would be a problem for _mm_aesdec_si128, since it requires 16-byte alignment for both arguments.
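If misalignment does turn out to be the cause, one possible workaround is to load each round key with an unaligned load instead of indexing the __m128i array directly, so the key load never becomes an alignment-checked memory operand. This is only a minimal sketch under that assumption, not Crypto++'s actual code; AESNI_DecRound is a hypothetical helper introduced for illustration:

#include <emmintrin.h>   // _mm_loadu_si128 (no alignment requirement)
#include <wmmintrin.h>   // _mm_aesdec_si128 (AES-NI)

// Hypothetical helper: pull the round key in with an unaligned load, then
// perform one AES decryption round entirely on register operands.
static inline __m128i AESNI_DecRound(__m128i block, const void* subkey)
{
    const __m128i rk = _mm_loadu_si128(static_cast<const __m128i*>(subkey));
    return _mm_aesdec_si128(block, rk);
}

You can also check the address directly from the faulting frame in GDB, e.g. p ((unsigned long)&subkeys[i+1]) % 16, which should print 0 for a properly aligned key.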