Date: Thu, 25 Nov 2004 00:52:00 -0000
From: "joseph at codesourcery dot com"
To: gcc-bugs@gcc.gnu.org
In-Reply-To: <20041125003250.18666.jakub@gcc.gnu.org>
References: <20041125003250.18666.jakub@gcc.gnu.org>
Subject: [Bug c/18666] Conversion of floating point into bit-fields

------- Additional Comments From joseph at codesourcery dot com 2004-11-25 00:52 -------
Subject: Re: New: Conversion of floating point into bit-fields

On Thu, 25 Nov 2004, jakub at gcc dot gnu dot org wrote:

> a valid test or not? This worked with 3.4.x and earlier, but doesn't any
> longer. The question is mainly if the type of a.i for the 6.3.1.4/1 purposes
> is unsigned int (in this case it would be well-defined, 16 is representable
> in unsigned int and storing 16 into unsigned int i : 1 bitfield is defined),
> or if the type is integer type with precision 1.

There are at least three DRs affirming that the type is unsigned:1, i.e. a
type with precision 1.

--
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18666
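
For reference, a minimal sketch of the testcase shape under discussion (the
struct tag and the exact floating constant are assumptions; the member a.i
of type unsigned int : 1 and the value 16 come from the quoted text):

  /* Hypothetical reconstruction; the PR's own testcase is not quoted
     above and may differ in detail.  */
  struct S { unsigned int i : 1; } a;

  int
  main (void)
  {
    /* Floating -> integer conversion into a 1-bit bit-field.  */
    a.i = 16.0;
    /* If the conversion target is taken to be unsigned int, 16.0 -> 16
       is well defined, and the store into the 1-bit field then reduces
       modulo 2, yielding 0.  If the target is the bit-field's own
       unsigned:1 type (the reading the DRs affirm), 16.0 is not
       representable in it and the conversion is undefined per
       C99 6.3.1.4/1.  */
    return a.i;
  }

On the unsigned:1 reading, the testcase invokes undefined behaviour, so the
change from the 3.4.x result would not be a regression on valid code.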