From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (qmail 965 invoked by alias); 26 Jul 2010 14:47:45 -0000
Received: (qmail 515 invoked by uid 48); 26 Jul 2010 14:47:18 -0000
Date: Mon, 26 Jul 2010 14:47:00 -0000
Message-ID: <20100726144718.514.qmail@sourceware.org>
X-Bugzilla-Reason: CC
References: 
Subject: [Bug tree-optimization/45034] [4.3/4.4/4.5/4.6 Regression] "safe"
 conversion from unsigned to signed char gives broken code
In-Reply-To: 
Reply-To: gcc-bugzilla@gcc.gnu.org
To: gcc-bugs@gcc.gnu.org
From: "rakdver at gcc dot gnu dot org" 
Mailing-List: contact gcc-bugs-help@gcc.gnu.org; run by ezmlm
Precedence: bulk
List-Id: 
List-Archive: 
List-Post: 
List-Help: 
Sender: gcc-bugs-owner@gcc.gnu.org
X-SW-Source: 2010-07/txt/msg02869.txt.bz2

------- Comment #7 from rakdver at gcc dot gnu dot org  2010-07-26 14:47 -------
By the time the code reaches ivopts, it looks (modulo SSA form) like this:

  signed char x = -128, tmp;

  for (;;)
    {
      tmp = -x;
      foo ((int) x, (int) tmp, x == -128);
      ...
      if (x == 127)
        break;
      x++;
    }

Note that all the careful handling of -x for the case x == -128 has
disappeared.  Then ivopts trusts that signed arithmetic does not overflow,
and miscompiles the program.

In fact, it seems that the error is already there at the very beginning:
the .original dump shows

  fixnum_neg
    {
      ux = (unsigned char) x;
      uy = (unsigned char) -(signed char) ux;
      ...
    }

That is, the negation of an unsigned char value is implemented by casting
it to signed char, which introduces a signed overflow if the value of x
is -128.  As far as I understand the C standard, this seems incorrect.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45034