I am debugging production code written in C; its simplest form can be shown as:
void
test_fun(int sr)
{
    int hr = 0;
#define ME 65535
#define SE 256
    sr = sr/SE;         /* <-- This should yield 0 */
    if (sr == 1)
        hr = ME;
    else
        hr = (ME+1)/sr; /* <-- We should crash here. */
}
We are passing sr as 128, which should ideally result in a divide-by-zero error on the processor. Instead, I see that the division completes successfully with a quotient of 0x7fffffff (hr ends up with this value).
This does not happen when I compile and run the same code with gcc on an Intel platform; there it crashes when it attempts the division by zero.
I want to know the principle behind this large quotient, and whether it is just some other bug I still need to uncover. Can someone help me with another program that does the same?
Division by zero is undefined behaviour, see C11 standard 6.5.5#5 (final draft).
Getting a trap or SIGFPE is just a courtesy of the CPU/OS. PowerPC, as a typical RISC CPU, does not trap on integer division by zero, since the condition can safely be detected by a simple check of the divisor right before the actual division. x86, on the other hand, does trap, which is typical CISC behaviour.
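A minimal standalone program to observe the difference might look like the sketch below. This is only an illustration under the assumptions stated in the comments: the division by zero is undefined behaviour, so on PowerPC the printed value can be anything, while on x86/Linux the process typically dies with SIGFPE before reaching the printf.

#include <stdio.h>

int main(void)
{
    volatile int sr = 128;            /* same input as passed to test_fun() */
    volatile int divisor = sr / 256;  /* 0; volatile keeps the compiler from folding the UB away */

    /* Undefined behaviour: x86/Linux typically delivers SIGFPE here,
       while a PowerPC divide instruction completes and leaves an
       unspecified value in the destination register. */
    volatile int hr = 65536 / divisor;

    printf("hr = 0x%x\n", (unsigned)hr);
    return 0;
}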
If such a check is required by a higher-layer standard, you have probably missed a compiler option that emits it automatically. POSIX, for instance, does not mandate SIGFPE for this case; it is optional.
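If no such option is available, an explicit divisor check is the portable fix. Below is a minimal sketch; checked_div is a hypothetical helper, and raising SIGFPE is an assumption about the desired behaviour (returning an error code instead would be equally valid).

#include <signal.h>
#include <stdio.h>

/* Hypothetical helper: performs the check a trapping CPU would do in
   hardware. Raises SIGFPE on a zero divisor so the behaviour matches
   x86 even on CPUs that do not trap. */
int checked_div(int num, int den)
{
    if (den == 0) {
        raise(SIGFPE);  /* default action terminates the process */
        return 0;       /* only reached if SIGFPE is caught or ignored */
    }
    return num / den;
}

int main(void)
{
    int sr = 128 / 256;                   /* 0, as in the original code */
    int hr = checked_div(65535 + 1, sr);  /* raises SIGFPE instead of silently continuing */
    printf("hr = %d\n", hr);
    return 0;
}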