Date: Fri, 30 Jul 2010 13:02:00 -0000
From: Richard Guenther
To: Paolo Bonzini
Cc: gcc-patches@gcc.gnu.org
Subject: Re: [PATCH][RFC] Bit CCP and pointer alignment propagation
In-Reply-To: <4C52C01E.4010202@gnu.org>
References: <4C52C01E.4010202@gnu.org>

On Fri, 30 Jul 2010, Paolo Bonzini wrote:

> > +   CONSTANT    ->  V_i has been found to hold a constant
> > +                   value C.
>
> "Some bits of V_i have been found to..."

Done.

> Or just remove/rewrite the head comment and move that comment here:
>
> > + /* Possible lattice values.  */
> > + typedef enum
> > + {
> > +   UNINITIALIZED,
> > +   UNDEFINED,
> > +   CONSTANT,
> > +   VARYING
> > + } ccp_lattice_t;
>
> > +   val.bit_lattice_val
> > +     = double_int_and
> > +         (double_int_ior (r1val.bit_lattice_val,
> > +                          r2val.bit_lattice_val),
> > +          double_int_and (double_int_ior (r1val.value,
> > +                                          r1val.bit_lattice_val),
> > +                          double_int_ior (r2val.value,
> > +                                          r2val.bit_lattice_val)));
> > +   if (!double_int_minus_one_p (val.bit_lattice_val))
> > +     val.lattice_val = CONSTANT;
>
> In other places of this function you just set it to CONSTANT without
> the if.  You can just set the lattice_val to CONSTANT here:
>
> > +   bit_prop_value_t val = { VARYING, { -1, -1 }, { 0, 0 } };
>
> and change it to VARYING ("if minus one then VARYING") at the end of
> the function.  If no computation is done in the function, the
> bit_lattice_val will stay -1.

True.  I changed it to an initial { CONSTANT, { -1, -1 }, { 0, 0 } }
and drop to VARYING at the end.  So I don't need to set the lattice
value at all.

> > +     case LSHIFT_EXPR:
> > +     case RSHIFT_EXPR:
> > +       if (r2val.lattice_val == CONSTANT
> > +           && double_int_zero_p (r2val.bit_lattice_val))
>
> Even if some bits are varying, the result will have at least
> r2val.value & ~r2val.bit_lattice_val bits shifted in.  So you can do
> something like
>
>   if (r2val.lattice_val != CONSTANT)
>     break;
>   if (double_int_zero_p (r2val.bit_lattice_val)) {
>     in_mask = r1val.bit_lattice_val;
>     in_value = r1val.value;
>   } else {
>     in_mask = double_int_minus_one;
>     in_value = double_int_minus_one;
>   }
>
>   shift = r2val.value & ~r2val.bit_lattice_val;
>   if (SHIFT_COUNT_TRUNCATED)
>     shift &= GET_MODE_BITSIZE (TYPE_MODE (type)) - 1;
>
>   val.lattice_val = CONSTANT;
>   val.bit_lattice_val
>     = double_int_lshift (in_mask, shift,
>                          TYPE_PRECISION (type), false);
>   val.value = double_int_lshift (in_value, shift,
>                                  TYPE_PRECISION (type), false);

Hmm.  I don't quite understand.  Are you saying that only the lower
bits of r2val are important if SHIFT_COUNT_TRUNCATED?  At least we
need to know the sign of the shift amount to be able to tell whether
we're shifting in zeros from the right or ones from the left.
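To make that concrete, here is roughly the encoding we are talking
about, as a throw-away sketch on plain uint64_t instead of double_int
(the names and helpers are made up for illustration only, this is not
the patch itself); a set bit in the mask means "this bit is unknown":

  #include <stdint.h>

  /* Toy bit lattice: bit i is known iff (mask >> i & 1) == 0, in
     which case (value >> i & 1) is its known value.  */
  typedef struct { uint64_t value; uint64_t mask; } bitlat;

  /* Bitwise AND.  A result bit is known zero as soon as either
     operand bit is known zero, so it is only unknown where some
     operand bit is unknown and neither operand bit is known zero:
     (m1 | m2) & (v1 | m1) & (v2 | m2).  */
  static bitlat
  bitlat_and (bitlat a, bitlat b)
  {
    bitlat r;
    r.mask = (a.mask | b.mask) & (a.value | a.mask) & (b.value | b.mask);
    r.value = a.value & b.value & ~r.mask;
    return r;
  }

  /* Left shift by a known, non-negative amount (< 64).  Value and
     mask shift together; the bits shifted in from the right are
     known zeros.  */
  static bitlat
  bitlat_lshift (bitlat a, unsigned shift)
  {
    bitlat r;
    r.value = a.value << shift;
    r.mask = a.mask << shift;
    return r;
  }

With a varying shift amount, or an unknown sign of the shift amount,
the bits shifted in are no longer known zeros - which is exactly the
question above.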
> > +       else if (shift < 0)
> > +         {
> > +           val.lattice_val = CONSTANT;
> > +           val.bit_lattice_val
> > +             = double_int_rshift (r1val.bit_lattice_val,
> > +                                  -shift, TYPE_PRECISION (type), true);
> > +           val.value = double_int_rshift (r1val.value, shift,
> > +                                          TYPE_PRECISION (type), true);
> > +         }
>
> Here shifted in bits are varying for signed types (unless the sign
> bit is constant, but that's not handled here either).

That should be handled fine by using an arithmetic shift for both the
lattice and the value.  So for a varying lattice "sign" bit we shift
in varying.  Hm, but indeed for unsigned constants we shift in the
sign bit - fixed to

        else if (shift < 0)
          {
            val.bit_lattice_val
              = double_int_rshift (r1val.bit_lattice_val,
                                   -shift, TYPE_PRECISION (type), true);
            val.value = double_int_rshift (r1val.value, -shift,
                                           TYPE_PRECISION (type), !uns);

(the typo of using 'shift' instead of '-shift' is fixed as well)

> > +     unsigned r1ctz = 0, r2ctz = 0;
> > +     while (r1ctz < HOST_BITS_PER_WIDE_INT
> > +            && !(r1val.bit_lattice_val.low & (1 << r1ctz))
> > +            && !(r1val.value.low & (1 << r1ctz)))
> > +       r1ctz++;
> > +     while (r2ctz < HOST_BITS_PER_WIDE_INT
> > +            && !(r2val.bit_lattice_val.low & (1 << r2ctz))
> > +            && !(r2val.value.low & (1 << r2ctz)))
> > +       r2ctz++;
>
> This is just a ctz on (v | m), no?  This makes it easier to track
> high bits as well.

Yes.  Probably worth adding double_int_ctz.

> I'm sure that you can optimize the addition, I'll think about it.

I thought about it as well and decided to be safe for now ;)  But
iterating something with full-width operations should be possible.

Thanks for the review,
Richard.
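P.S.: The ctz-on-(v | m) observation amounts to something like the
following, again on plain uint64_t rather than double_int
(known_trailing_zeros is a made-up name; double_int_ctz would be the
real helper to add):

  #include <stdint.h>

  /* Number of low bits known to be zero.  A bit is known zero when
     both its value bit and its mask ("unknown") bit are clear, so
     this is simply a count of trailing zeros of (value | mask),
     which is what the two hand-written loops above compute.  */
  static unsigned
  known_trailing_zeros (uint64_t value, uint64_t mask)
  {
    uint64_t bits = value | mask;
    unsigned n = 0;
    while (n < 64 && !(bits & ((uint64_t) 1 << n)))
      n++;
    return n;
  }

The count of trailing known-zero bits is also the quantity the
pointer alignment propagation cares about.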