From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 12 Mar 2012 09:38:00 -0000
From: Richard Guenther
To: Eric Botcazou
Cc: gcc-patches@gcc.gnu.org
Subject: Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield
 expansion, enable the C++ memory model wrt bitfields everywhere
In-Reply-To: <201203091001.39976.ebotcazou@adacore.com>
References: <201203091001.39976.ebotcazou@adacore.com>

On Fri, 9 Mar 2012, Eric Botcazou wrote:

> > This patch also completely replaces get_bit_range (which is where
> > PR52097 ICEs) by a trivial implementation.
>
> How does it short-circuit the decision made by get_best_mode exactly?
> By making get_bit_range return non-zero in more cases?

It will make get_bit_range return non-zero in all cases (well, in all
cases where there is a COMPONENT_REF with a FIELD_DECL that was part
of a RECORD_TYPE at the time of finish_record_layout).
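Concretely, modulo the handling of variable offsets, that trivial
implementation boils down to something like the following sketch (not
the exact patch text - it assumes the constant-offset case, takes the
bit position of the access as computed by get_inner_reference, and
reads everything off the new DECL_BIT_FIELD_REPRESENTATIVE field):

  /* Compute in *BITSTART and *BITEND the bit range that a store to
     the bitfield EXP (a COMPONENT_REF at bit position BITPOS) may
     legally touch.  */

  static void
  get_bit_range (unsigned HOST_WIDE_INT *bitstart,
                 unsigned HOST_WIDE_INT *bitend,
                 tree exp, HOST_WIDE_INT bitpos)
  {
    tree field = TREE_OPERAND (exp, 1);
    tree repr = DECL_BIT_FIELD_REPRESENTATIVE (field);
    HOST_WIDE_INT bitoffset;

    /* If no representative was recorded at layout time there is no
       need to restrict the access range.  */
    if (!repr)
      {
        *bitstart = *bitend = 0;
        return;
      }

    /* Distance in bits from the start of the representative to the
       field; within one group both share DECL_FIELD_OFFSET, so the
       DECL_FIELD_BIT_OFFSET difference is all that remains.  */
    bitoffset = (tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
                 - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));

    /* The accessible region is the representative, positioned
       relative to the access.  */
    *bitstart = bitpos - bitoffset;
    *bitend = *bitstart + tree_low_cst (DECL_SIZE (repr), 1) - 1;
  }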
> > There is PR52134 which will make this patch cause 1 gnat regression.
>
> This looks rather benign to me.

Yeah, it should be easy to fix - the question is whether we can really
rely on TYPE_SIZE_UNIT (DECL_CONTEXT (field)) - DECL_FIELD_OFFSET (field)
to fold to a constant if field is the first field of a bitfield group.
We could also easily avoid the ICE by simply not computing a
DECL_BIT_FIELD_REPRESENTATIVE for such a field.  In the end, how we
divide bitfield groups will probably get some control via a langhook.

> > 	* gimplify.c (gimplify_expr): Translate bitfield accesses
> > 	to BIT_FIELD_REFs of the representative.
>
> This part isn't in the patch.

Oops.  And it should not be (I did that for experimentation only);
ChangeLog piece dropped.

> > + /* Return a new underlying object for a bitfield started with FIELD.  */
> > +
> > + static tree
> > + start_bitfield_representative (tree field)
> > + {
> > +   tree repr = make_node (FIELD_DECL);
> > +   DECL_FIELD_OFFSET (repr) = DECL_FIELD_OFFSET (field);
> > +   /* Force the representative to begin at an BITS_PER_UNIT aligned
>
> ...at a BITS_PER_UNIT aligned...

Fixed.

> > +      boundary - C++ may use tail-padding of a base object to
> > +      continue packing bits so the bitfield region does not start
> > +      at bit zero (see g++.dg/abi/bitfield5.C for example).
> > +      Unallocated bits may happen for other reasons as well,
> > +      for example Ada which allows explicit bit-granular structure
> > +      layout.  */
> > +   DECL_FIELD_BIT_OFFSET (repr)
> > +     = size_binop (BIT_AND_EXPR,
> > +                   DECL_FIELD_BIT_OFFSET (field),
> > +                   bitsize_int (~(BITS_PER_UNIT - 1)));
> > +   SET_DECL_OFFSET_ALIGN (repr, DECL_OFFSET_ALIGN (field));
> > +   DECL_SIZE (repr) = DECL_SIZE (field);
> > +   DECL_PACKED (repr) = DECL_PACKED (field);
> > +   DECL_CONTEXT (repr) = DECL_CONTEXT (field);
> > +   return repr;
>
> Any particular reason to set DECL_SIZE but not DECL_SIZE_UNIT here?
> They are generally set together.

No particular reason, fixed.  (I only set the fields I use while
updating 'repr'; the other fields are set once the representative has
its final size.)

> > +
> > + /* Finish up a bitfield group that was started by creating the underlying
> > +    object REPR with the last fied in the bitfield group FIELD.  */
>
> ...the last field...

Fixed.

> > +   /* Round up bitsize to multiples of BITS_PER_UNIT.  */
> > +   bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
> > +
> > +   /* Find the smallest nice mode to use.
> > +      ??? Possibly use get_best_mode with appropriate arguments instead
> > +      (which would eventually require splitting representatives here).  */
> > +   for (modesize = bitsize; modesize <= maxbitsize;
> > +        modesize += BITS_PER_UNIT)
> > +     {
> > +       mode = mode_for_size (modesize, MODE_INT, 1);
> > +       if (mode != BLKmode)
> > +         break;
> > +     }
>
> The double loop looks superfluous.  Why not reuse the implementation of
> smallest_mode_for_size?

Ah, indeed.  Matching the current implementation would be

> for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
>      mode = GET_MODE_WIDER_MODE (mode))
>   if (GET_MODE_PRECISION (mode) >= bitsize)
>     break;
>
> if (mode != VOIDmode
>     && (GET_MODE_PRECISION (mode) > maxbitsize
>         || GET_MODE_PRECISION (mode) > MAX_FIXED_MODE_SIZE))
>   mode = VOIDmode;
>
> if (mode == VOIDmode)
>   ...

Btw, I _think_ I want GET_MODE_BITSIZE here - we cannot allow
GET_MODE_BITSIZE > GET_MODE_PRECISION, as that would possibly access
memory we are not allowed to touch.  Thus, which GET_MODE_* macro
identifies the access size used for a MEM of that mode?

Anyway, fixed as you suggested, with the MAX_FIXED_MODE_SIZE check.

I'll split out the tree-sra.c piece, re-test and re-post.

Thanks,
Richard.
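For reference, combining Eric's loop with the GET_MODE_BITSIZE variant
and the MAX_FIXED_MODE_SIZE check discussed above gives a mode
selection along these lines (an illustrative sketch only, not
necessarily the exact re-posted hunk; bitsize, maxbitsize, mode and
the BLKmode fallback are assumed to come from the surrounding
finish_bitfield_representative):

  /* Round up bitsize to a multiple of BITS_PER_UNIT.  */
  bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);

  /* Find the smallest nice integer mode covering the group, as in
     smallest_mode_for_size, but bail out instead of aborting when
     no mode fits.  */
  for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
       mode = GET_MODE_WIDER_MODE (mode))
    if (GET_MODE_BITSIZE (mode) >= bitsize)
      break;

  /* Reject the mode if it would touch bits outside of the region the
     representative may legally access, or if it is wider than the
     largest supported fixed-size mode.  */
  if (mode != VOIDmode
      && (GET_MODE_BITSIZE (mode) > maxbitsize
          || GET_MODE_BITSIZE (mode) > MAX_FIXED_MODE_SIZE))
    mode = VOIDmode;

  if (mode == VOIDmode)
    /* Fall back to a BLKmode representative of bitsize bits.  */
    ...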