From: Richard Guenther
To: Michael Matz
Cc: gcc-patches@gcc.gnu.org
Subject: Re: [PATCH][RFC] Bit CCP and pointer alignment propagation
Date: Fri, 30 Jul 2010 13:39:00 -0000

On Fri, 30 Jul 2010, Michael Matz wrote:

> Hi,
>
> On Fri, 30 Jul 2010, Richard Guenther wrote:
>
> > The pass is heavily copied from CCP and then massaged to only track
>
> I think they both should be unified.  You duplicate quite some amount
> of code.  Possibly

I thought about it - it shouldn't be terribly hard.  The core pieces
are inside the new folders.

> > +  /* Bitwise conditional constant propagation (CCP) is based on the SSA
>
> This whole block comment describes CCP, not the bitwise piece of it,
> which would actually be more interesting to quickly describe as an
> overview.

I'll adjust that.

> > + typedef struct bit_prop_value_d {
> > +   /* Lattice value.  */
> > +   unsigned lattice_val;
> > +
> > +   /* Bit lattice value.  Zero for constant, one for varying.  */
> > +   double_int bit_lattice_val;
>
> I always think of this member as a mask, why not call it so?  At
> runtime for a value X that has MASK and VALUE in the lattice this
> holds: (X & ~MASK) == VALUE.  Actually I'm not sure you ensure this,
> as you sometimes only adjust the mask but not the value, so we might
> know only this: (X & ~MASK) == (VALUE & ~MASK).

That's true, varying value bits have undefined content.

> > + get_value (tree var)
> > + {
> > +   bit_prop_value_t *val;
> > +
> > +   if (const_val == NULL)
> > +     return NULL;
>
> const_val can't be NULL here I think.

Copied from CCP, where get_value is used in fold_const_aggregate_ref ...
Fixed.

> > +
> > +     case GIMPLE_TERNARY_RHS:
> > +       {
> > +         /* Handle binary operators that can appear in GIMPLE form.  */
>
> ternary

Copied from CCP.  Fixed.

> > + get_value_for_expr (tree expr)
> > + {
> > +   else if (TREE_CODE (expr) == ADDR_EXPR
> > +            && ((align = get_object_alignment (TREE_OPERAND (expr, 0),
> > +                                               BITS_PER_UNIT,
> > +                                               BIGGEST_ALIGNMENT))
> > +                > BITS_PER_UNIT))
> > +     {
> > +       val.lattice_val = CONSTANT;
> > +       val.bit_lattice_val
> > +         = double_int_and_not (double_int_mask (TYPE_PRECISION
> > +                                                  (TREE_TYPE (expr))),
> > +                               uhwi_to_double_int (align / BITS_PER_UNIT
> > +                                                   - 1));
> > +       val.value = double_int_zero;
>
> So here your mask contains zeros for bits outside the type precision ...

Assuming pointers are unsigned.  Indeed.
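
(Aside for readers of the archive: a minimal standalone sketch, in
plain C rather than the patch's double_int API, of the lattice entry
built above for an aligned address - the variable names and example
values here are invented for illustration.  The low log2(align) bits
of the address are known zeros; the remaining bits within the type
precision are varying, i.e. set in the mask:)

  #include <stdio.h>

  int
  main (void)
  {
    /* An object aligned to ALIGN bytes: the low log2(ALIGN) bits of
       its address are known to be zero, everything else is varying.
       A set mask bit means "varying", mirroring bit_lattice_val.  */
    unsigned int align = 16;              /* alignment in bytes */
    unsigned int prec_mask = ~0u;         /* all bits of the type */
    unsigned int mask = prec_mask & ~(align - 1);
    unsigned int value = 0;               /* known bits are all zero */

    /* Check the invariant from the discussion on a 16-byte aligned
       address: (X & ~MASK) == (VALUE & ~MASK).  */
    unsigned int x = 0x1230;
    printf ("%d\n", (x & ~mask) == (value & ~mask));   /* prints 1 */
    return 0;
  }
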
> > + bit_prop_value_convert (tree type, bit_prop_value_t rval, tree rval_type)
> > + {
> > +   bit_prop_value_t val;
> > +   bool uns;
> > +
> > +   /* First extend lattice and value according to the original type.  */
> > +   uns = (TREE_CODE (rval_type) == INTEGER_TYPE
> > +          && TYPE_IS_SIZETYPE (rval_type)
> > +          ? 0 : TYPE_UNSIGNED (rval_type));
> > +   val.bit_lattice_val = double_int_ext (rval.bit_lattice_val,
> > +                                         TYPE_PRECISION (rval_type), uns);
>
> ... but here you extend until the width of double_int.  Perhaps you
> want to mask out the upper bits again?
> (And a lattice can't be sign/zero-extended :)  A mask could be.)

Well - if the value is signed and we do not know its sign bit then we
have to sign-extend the lattice.

> > + evaluate_stmt (gimple stmt)
> > + {
> ...
> > +     case LT_EXPR:
> > +       /* If the most significant bits are not known we know nothing.  */
> > +       if (double_int_negative_p (r1val.bit_lattice_val)
> > +           || double_int_negative_p (r2val.bit_lattice_val))
> > +         break;
>
> Here you test the high bit of the mask, but above you fill the mask
> only up to TYPE_PRECISION.  I'm not sure if it matters for
> correctness, though.

Where exactly?  The lattice should always be extended the same way the
value is extended.

> > +       tem = 1;
> > +       while (tem < (1u << (sizeof (unsigned int) * 8 - 1))
> > +              && !(val->bit_lattice_val.low & tem))
> > +         tem <<= 1;
> > +       if (tem == 1)
> > +         continue;
>
> "(x ^ (x-1)) >> 1" will give you the mask of the trailing zero bits
> (that plus one is your align).

Thanks,
Richard.
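
(Aside for readers of the archive: a quick standalone check of the
trailing-zero-bits identity Michi suggests - plain C, example value
mine:)

  #include <stdio.h>

  int
  main (void)
  {
    /* For x != 0, x ^ (x - 1) sets the lowest set bit of x plus all
       bits below it; shifting right by one leaves exactly the mask
       of the trailing zero bits.  That mask plus one is the largest
       power of two dividing x, i.e. the alignment the quoted loop
       derives.  */
    unsigned int x = 0x1230;
    unsigned int mask = (x ^ (x - 1)) >> 1;
    printf ("mask = %#x, align = %#x\n", mask, mask + 1);
    /* prints: mask = 0xf, align = 0x10 */
    return 0;
  }
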