Date: Tue, 02 May 2006 18:21:00 -0000
From: "amylaar at gcc dot gnu dot org"
Reply-To: gcc-bugzilla@gcc.gnu.org
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/27394] double -> char conversion varies with optimization level

------- Comment #5 from amylaar at gcc dot gnu dot org  2006-05-02 18:21 -------
(In reply to comment #4)
> (In reply to comment #1)
> > In 3.x, double -> char/int conversion was done consistently with the
> > documented behaviour of integer -> signed integer type conversion.
> > http://gcc.gnu.org/onlinedocs/gcc-4.1.0/gcc/Integers-implementation.html#Integers-implementation.
>
> That has nothing to do with float -> integer type conversion.

Actually, it does, in two ways:

- The wording is inexact.  You could argue that 128. is an integer in
  floating-point representation and thus covered by this clause, although
  from the context it appears that this was not the intent.

- When the return statement is changed to "return (signed char)(int) d;",
  the clause applies, and indeed the behaviour becomes consistent.  Having
  different semantics when you add an intermediate cast to int before
  casting to signed char is somewhat surprising.  (I.e. although this is a
  conforming implementation, it does not follow the rule of least
  surprise.)

--
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27394
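
For illustration, a minimal sketch of the two conversions being discussed.
The function names and the test driver are made up for this example (the
bug's actual test case is not quoted here); the value 128.0 comes from the
comment above.

    #include <stdio.h>

    /* Direct double -> signed char conversion.  For a value such as 128.0
       that is outside the range of signed char, the C standard leaves the
       behaviour undefined, so the result may differ between optimization
       levels. */
    signed char f(double d)
    {
        return (signed char) d;
    }

    /* With an intermediate cast to int: double -> int is well defined for
       128.0, and int -> signed char then follows GCC's documented
       implementation-defined rule (reduction modulo 2^8), consistently
       yielding -128. */
    signed char g(double d)
    {
        return (signed char)(int) d;
    }

    int main(void)
    {
        printf("f(128.0) = %d\n", (int) f(128.0));
        printf("g(128.0) = %d\n", (int) g(128.0));
        return 0;
    }

Under this sketch, g(128.0) should print -128 at every optimization level,
while f(128.0) is the case whose result may vary, as reported in the bug.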