From mboxrd@z Thu Jan 1 00:00:00 1970
From: kenner@vlsi1.ultra.nyu.edu (Richard Kenner)
To: gcc@gcc.gnu.org
Subject: Pathological divides
Date: Thu, 21 Sep 2000 12:12:00 -0000
Message-id: <10009211926.AA26210@vlsi1.ultra.nyu.edu>
X-SW-Source: 2000-09/msg00504.html

Consider the following program on x86:

    #include <stdio.h>

    int
    rem (int a, int b)
    {
      return a % b;
    }

    int
    main ()
    {
      printf ("%d\n", rem (0x80000000, -1));
      return 0;
    }

When run, rather than producing zero as expected, it gets a SIGFPE.
This is because dividing the most negative integer by negative one
overflows: the quotient is not representable as an int, and the x86
IDIV instruction, which computes the quotient and remainder together,
traps on that overflow even though the remainder itself is zero.

So the first question is whether this is valid C behavior.

Next, compile the above with -O3 on an x86 and notice that GCC itself
gets a SIGFPE while constant-folding the call.

Finally, consider:

    int
    foo (int a, int b)
    {
      return (a - ((a == 0x80000000 && b == -1) ? 0 : a % b)) / b;
    }

When passed "normal" arguments, this function does not overflow.  But
GCC pulls the conditional out of the subtraction and division, which
makes the compiler run into the SIGFPE above.

I think the compiler crash needs to be fixed.  We can do it either by
protecting the integer part of simplify_binary_operation against
SIGFPE, just as we already do for floating point, or by explicitly
testing for this case, just as we check for division by zero.

Any thoughts on whether we need a run-time test for this case in the
"%" operator?
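
To make the explicit-test option concrete, here is a minimal
standalone sketch of such a guard, assuming a two's-complement host
and target.  The helper fold_signed_divmod and its interface are
hypothetical illustrations, not the actual simplify_binary_operation
code; the point is only that the new test sits next to the existing
divide-by-zero test.

    #include <limits.h>

    /* Try to fold a signed integer division or remainder at compile
       time.  Returns 1 and stores the value in *res when the
       operation can be evaluated safely on the host; returns 0 when
       it cannot be folded.  */
    static int
    fold_signed_divmod (int is_mod, int a, int b, int *res)
    {
      if (b == 0)
        return 0;                  /* divide by zero: never fold */
      if (a == INT_MIN && b == -1)
        {
          /* Evaluating a / b on the host would trap -- exactly the
             SIGFPE seen above.  The remainder is mathematically
             zero, and on a two's-complement target the quotient
             wraps to INT_MIN, so fold to those values directly.  */
          *res = is_mod ? 0 : INT_MIN;
          return 1;
        }
      *res = is_mod ? a % b : a / b;
      return 1;
    }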
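
And as a sketch of what a run-time test in the "%" operator would
amount to at the source level (again a hypothetical illustration, not
generated code), note that checking the divisor alone is enough,
since a % -1 is zero for every a:

    /* Guarded remainder: skip the IDIV when the divisor is -1 so
       that INT_MIN % -1 cannot trap.  Division by zero is left
       undefined, as before.  */
    static int
    safe_rem (int a, int b)
    {
      if (b == -1)
        return 0;          /* a % -1 == 0 for all a */
      return a % b;
    }

The cost would be a compare and branch on every signed "%" whose
divisor isn't known at compile time, which is what the question above
is really asking about.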