Date: Thu, 17 Dec 2015 12:02:00 -0000
From: Richard Biener
To: Jakub Jelinek
Cc: Richard Sandiford, gcc-patches@gcc.gnu.org
Subject: Re: [PATCH] Fix INTEGER_CST handling for > 64 bits wide bitfields (PR tree-optimization/68835)
In-Reply-To: <20151217115507.GT18720@tucnak.redhat.com>
References: <20151216220751.GQ18720@tucnak.redhat.com> <20151217115507.GT18720@tucnak.redhat.com>

On Thu, 17 Dec 2015, Jakub Jelinek wrote:

> On Thu, Dec 17, 2015 at 12:33:22PM +0100, Richard Biener wrote:
> > > @@ -1245,11 +1245,9 @@ static unsigned int
> > >  get_int_cst_ext_nunits (tree type, const wide_int &cst)
> > >  {
> > >    gcc_checking_assert (cst.get_precision () == TYPE_PRECISION (type));
> > > -  /* We need an extra zero HWI if CST is an unsigned integer with its
> > > -     upper bit set, and if CST occupies a whole number of HWIs.  */
> > > -  if (TYPE_UNSIGNED (type)
> > > -      && wi::neg_p (cst)
> > > -      && (cst.get_precision () % HOST_BITS_PER_WIDE_INT) == 0)
> > > +  /* We need extra HWIs if CST is an unsigned integer with its
> > > +     upper bit set.  */
> > > +  if (TYPE_UNSIGNED (type) && wi::neg_p (cst))
> > >      return cst.get_precision () / HOST_BITS_PER_WIDE_INT + 1;
> > >    return cst.get_len ();
> > >  }
> >
> > I don't see why this is necessary - if there are enough bits to
> > always have a zero MSB for unsigned types then we don't need an
> > extra element.
>
> But there aren't enough bits in that case.
> Let's consider how -2 cast to an unsigned type with precision 64, 65, 127,
> 128 and 129 is encoded.
> cst.get_len () is in each case 1, and the first element is -2.
> So, what get_int_cst_ext_nunits does before my change is return 2 and 3
> for precision 64 and 128, and 1 for all other precisions (talking just
> about < 129).

Doh, yes.

> So, before my patch we have:
> precision  TREE_INT_CST_NUNITS  TREE_INT_CST_EXT_NUNITS  TREE_INT_CST_ELT
> 64         1                    2                        -2, 0
> 65         1                    1                        -2
> 127        1                    1                        -2
> 128        1                    3                        -2, -1, 0
> 129        1                    1                        -2
> and with the patch we have:
> 64         1                    2                        -2, 0
> 65         1                    2                        -2, 1
> 127        1                    2                        -2, -1ULL / 2
> 128        1                    3                        -2, -1, 0
> 129        1                    3                        -2, -1, 1
>
> Thus, before the patch there was nothing in the ext units that would tell
> the users the value is actually a large unsigned one rather than negative.
> I really don't see what is so special about precisions divisible by
> HOST_BITS_PER_WIDE_INT.
> The elements above TREE_INT_CST_NUNITS aren't guaranteed to be all 0s
> anyway; they are all -1s, and for the precisions divisible by
> HOST_BITS_PER_WIDE_INT the last one is 0.  That is what is special
> about those; with my patch, for the other precisions the last one is
> simply always some power of two - 1 (other than -1).
>
> Now, if the value is not in between maximum and maximum - 0x8000000000000000ULL,
> TREE_INT_CST_NUNITS will not be 1, but say 2 or more, and perhaps again
> the len is still smaller than precision / HOST_BITS_PER_WIDE_INT + 1, but
> if it is again negative, we want to have some way to add the ext elements
> (a series of -1s, followed by 0 or another power of two - 1 other than -1).
>
> > > @@ -1266,7 +1264,8 @@ build_new_int_cst (tree type, const wide
> > >    if (len < ext_len)
> > >      {
> > >        --ext_len;
> > > -      TREE_INT_CST_ELT (nt, ext_len) = 0;
> > > +      TREE_INT_CST_ELT (nt, ext_len)
> > > +        = zext_hwi (-1, cst.get_precision () % HOST_BITS_PER_WIDE_INT);
> >
> > isn't this enough?
>
> No.  This is the len < ext_len case; without the first hunk,
> if len < ext_len then cst.get_precision () % HOST_BITS_PER_WIDE_INT is
> guaranteed to be 0, and thus zext_hwi will return 0.

Patch is ok.

Thanks,
Richard.
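
For reference, a minimal standalone sketch of the rule the two hunks
implement together.  This is plain C, not the GCC code: zext_hwi is
re-implemented locally just for illustration, a 64-bit HOST_WIDE_INT is
assumed, and the local names (precisions, elt, ext_len) only mirror the
discussion above.  It recomputes the "with the patch" rows of the table
for -2 cast to unsigned precisions 64, 65, 127, 128 and 129.

#include <stdint.h>
#include <stdio.h>

#define HOST_BITS_PER_WIDE_INT 64

/* Local stand-in for GCC's zext_hwi: zero-extend SRC from PREC bits.
   PREC == 64 keeps SRC unchanged; PREC == 0 yields 0.  */
static uint64_t
zext_hwi (uint64_t src, unsigned int prec)
{
  if (prec == HOST_BITS_PER_WIDE_INT)
    return src;
  return src & ((UINT64_C (1) << prec) - 1);
}

int
main (void)
{
  static const unsigned int precisions[] = { 64, 65, 127, 128, 129 };

  for (unsigned int i = 0; i < sizeof precisions / sizeof precisions[0]; i++)
    {
      unsigned int prec = precisions[i];
      unsigned int len = 1;     /* -2 has a single significant element.  */
      int64_t elt[4] = { -2, 0, 0, 0 };
      /* First hunk: an unsigned value with its MSB set always gets
         precision / 64 + 1 elements, not only when 64 divides the
         precision.  */
      unsigned int ext_len = prec / HOST_BITS_PER_WIDE_INT + 1;
      /* Middle extension elements are all ones ...  */
      for (unsigned int j = len; j + 1 < ext_len; j++)
        elt[j] = -1;
      /* ... and the top one (second hunk) keeps only the bits below
         the precision.  */
      elt[ext_len - 1] = (int64_t) zext_hwi ((uint64_t) -1,
                                             prec % HOST_BITS_PER_WIDE_INT);

      printf ("precision %3u: ext_len %u, elements:", prec, ext_len);
      for (unsigned int j = 0; j < ext_len; j++)
        printf (" %lld", (long long) elt[j]);
      printf ("\n");
    }
  return 0;
}

The top element is exactly the zext_hwi (-1, precision % HOST_BITS_PER_WIDE_INT)
value from the second hunk; it is 0 only when the precision is a multiple of
HOST_BITS_PER_WIDE_INT, which matches Jakub's point that without the first hunk
len < ext_len could only happen for those precisions, so the old "= 0" was
equivalent there.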