From: Paolo Bonzini <bonzini@gnu.org>
To: Richard Guenther <rguenther@suse.de>
Cc: gcc-patches@gcc.gnu.org
Subject: Re: [PATCH][RFC] Bit CCP and pointer alignment propagation
Date: Fri, 30 Jul 2010 13:16:00 -0000
Message-ID: <4C52CB87.6060409@gnu.org>
In-Reply-To: <alpine.LNX.2.00.1007301427520.25856@zhemvz.fhfr.qr>

On 07/30/2010 02:39 PM, Richard Guenther wrote:
>>> + case LSHIFT_EXPR:
>>> + case RSHIFT_EXPR:
>>> + if (r2val.lattice_val == CONSTANT
>>> + && double_int_zero_p (r2val.bit_lattice_val))
>>
>> Even if some bits are varying, the result will have at least
>> r2val.value & ~r2val.bit_lattice_val bits shifted in.  So you can do
>> something like
>>
>>   if (r2val.lattice_val != CONSTANT)
>>     break;
>>   if (double_int_zero_p (r2val.bit_lattice_val))
>>     {
>>       in_mask = r1val.bit_lattice_val;
>>       in_value = r1val.value;
>>     }
>>   else
>>     {
>>       in_mask = double_int_minus_one;
>>       in_value = double_int_minus_one;
>>     }
>>
>>   shift = r2val.value & ~r2val.bit_lattice_val;
>>   if (SHIFT_COUNT_TRUNCATED)
>>     shift &= GET_MODE_BITSIZE (TYPE_MODE (type)) - 1;
>>
>>   val.lattice_val = CONSTANT;
>>   val.bit_lattice_val
>>     = double_int_lshift (in_mask, shift,
>>                          TYPE_PRECISION (type), false);
>>   val.value = double_int_lshift (in_value, shift,
>>                                  TYPE_PRECISION (type), false);
>
> Hmm. I don't quite understand. Are you saying that only the
> lower bits of r2val are important if SHIFT_COUNT_TRUNCATED?
> At least we need to know the sign of the shift amount
> to be able to tell if we're shifting in zeros from the right
> or ones from the left.
Well, yes. :)
I wonder if using negative shift counts is just a useless complication,
since that's what got me confused. If you just make "shift" a positive
shift count and test the tree code rather than the sign of shift, the
code I gave above to handle SHIFT_COUNT_TRUNCATED magically starts to work.
>>> +	else if (shift < 0)
>>> +	  {
>>> +	    val.lattice_val = CONSTANT;
>>> +	    val.bit_lattice_val
>>> +	      = double_int_rshift (r1val.bit_lattice_val,
>>> +				   -shift, TYPE_PRECISION (type), true);
>>> +	    val.value = double_int_rshift (r1val.value, -shift,
>>> +					   TYPE_PRECISION (type), true);
>>> +	  }
>>
>> Here the shifted-in bits are varying for signed types (unless the sign
>> bit is constant, but that's not handled here either).
>
> That should be handled fine by using an arithmetic shift for both
> the lattice and the value.
Aha, right.
> So for varying lattice "sign" bit we
> shift in varying. Hm, but indeed for unsigned constants we shift
> in the sign bit - fixed to
>
>   else if (shift < 0)
>     {
>       val.bit_lattice_val
>         = double_int_rshift (r1val.bit_lattice_val,
>                              -shift, TYPE_PRECISION (type), true);
This should be !uns too, since for unsigned values the shifted-in bits
are constant.  But it's much more clever than I thought: if the topmost
bit is constant, the newly computed lattice is also constant.  Very nice.

This unfortunately wouldn't work if r1val.bit_lattice_val is changed to
in_mask as in my example above.  To handle that you could use

  msb_constant = uns || !double_bit_set_p (r1val.bit_lattice_val,
                                           TYPE_PRECISION (type) - 1)

and then pass !msb_constant to this right shift.  I'll let you decide
whether it's overkill, but in any case the current clever code needs
documentation.
Just remember to add testcases covering shifting of unsigned values,
positive signed values, and negative signed values. Also file a missing
optimization bug for e.g. (x << (y | 8)) & 255 if you don't include my
suggestions.
>>> +	  unsigned r1ctz = 0, r2ctz = 0;
>>> +	  while (r1ctz < HOST_BITS_PER_WIDE_INT
>>> +		 && !(r1val.bit_lattice_val.low & (1 << r1ctz))
>>> +		 && !(r1val.value.low & (1 << r1ctz)))
>>> +	    r1ctz++;
>>> +	  while (r2ctz < HOST_BITS_PER_WIDE_INT
>>> +		 && !(r2val.bit_lattice_val.low & (1 << r2ctz))
>>> +		 && !(r2val.value.low & (1 << r2ctz)))
>>> +	    r2ctz++;
>>
>> This is just a ctz on (v | m), no? This makes it easier to track high bits as
>> well.
>
> Yes. Probably worth adding double_int_ctz.
Agreed.
Paolo