public inbox for gcc-patches@gcc.gnu.org
 help / color / mirror / Atom feed
* Re: [RFA] New pass for sign/zero extension elimination
@ 2023-11-27 18:19 Joern Rennecke
  2023-11-28  5:51 ` Jeff Law
  0 siblings, 1 reply; 35+ messages in thread
From: Joern Rennecke @ 2023-11-27 18:19 UTC (permalink / raw)
  To: GCC Patches, jlaw

[-- Attachment #1: Type: text/plain, Size: 42 bytes --]

You are applying PATTERN to an INSN_LIST.

[-- Attachment #2: tmp.txt --]
[-- Type: text/plain, Size: 2461 bytes --]

diff --git a/gcc/ext-dce.cc b/gcc/ext-dce.cc
index 52032b50951..4523654538c 100644
--- a/gcc/ext-dce.cc
+++ b/gcc/ext-dce.cc
@@ -122,10 +122,9 @@ safe_for_live_propagation (rtx_code code)
    optimziation phase during use handling will be.  */
 
 static void
-ext_dce_process_sets (rtx insn, bitmap livenow, bitmap live_tmp)
+ext_dce_process_sets (rtx insn, rtx pat, bitmap livenow, bitmap live_tmp)
 {
   subrtx_iterator::array_type array;
-  rtx pat = PATTERN (insn);
   FOR_EACH_SUBRTX (iter, array, pat, NONCONST)
     {
       const_rtx x = *iter;
@@ -377,7 +376,7 @@ binop_implies_op2_fully_live (rtx_code code)
    eliminated in CHANGED_PSEUDOS.  */
 
 static void
-ext_dce_process_uses (rtx insn, bitmap livenow, bitmap live_tmp,
+ext_dce_process_uses (rtx insn, rtx pat, bitmap livenow, bitmap live_tmp,
 		      bool modify, bitmap changed_pseudos)
 {
   /* A nonlocal goto implicitly uses the frame pointer.  */
@@ -389,7 +388,6 @@ ext_dce_process_uses (rtx insn, bitmap livenow, bitmap live_tmp,
     }
 
   subrtx_var_iterator::array_type array_var;
-  rtx pat = PATTERN (insn);
   FOR_EACH_SUBRTX_VAR (iter, array_var, pat, NONCONST)
     {
       /* An EXPR_LIST (from call fusage) ends in NULL_RTX.  */
@@ -640,15 +638,16 @@ ext_dce_process_bb (basic_block bb, bitmap livenow,
       bitmap live_tmp = BITMAP_ALLOC (NULL);
 
       /* First process any sets/clobbers in INSN.  */
-      ext_dce_process_sets (insn, livenow, live_tmp);
+      ext_dce_process_sets (insn, PATTERN (insn), livenow, live_tmp);
 
       /* CALL_INSNs need processing their fusage data.  */
       if (GET_CODE (insn) == CALL_INSN)
-	ext_dce_process_sets (CALL_INSN_FUNCTION_USAGE (insn),
+	ext_dce_process_sets (insn, CALL_INSN_FUNCTION_USAGE (insn),
 			      livenow, live_tmp);
 
       /* And now uses, optimizing away SIGN/ZERO extensions as we go.  */
-      ext_dce_process_uses (insn, livenow, live_tmp, modify, changed_pseudos);
+      ext_dce_process_uses (insn, PATTERN (insn), livenow, live_tmp, modify,
+			    changed_pseudos);
 
       /* And process fusage data for the use as well.  */
       if (GET_CODE (insn) == CALL_INSN)
@@ -663,7 +662,7 @@ ext_dce_process_bb (basic_block bb, bitmap livenow,
 	      if (global_regs[i])
 		bitmap_set_range (livenow, i * 4, 4);
 
-	  ext_dce_process_uses (CALL_INSN_FUNCTION_USAGE (insn),
+	  ext_dce_process_uses (insn, CALL_INSN_FUNCTION_USAGE (insn),
 				livenow, live_tmp, modify, changed_pseudos);
 	}
 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-27 18:19 [RFA] New pass for sign/zero extension elimination Joern Rennecke
@ 2023-11-28  5:51 ` Jeff Law
  0 siblings, 0 replies; 35+ messages in thread
From: Jeff Law @ 2023-11-28  5:51 UTC (permalink / raw)
  To: Joern Rennecke, GCC Patches, jlaw



On 11/27/23 11:19, Joern Rennecke wrote:
> You are applying PATTERN to an INSN_LIST.
I know :-)  That was the late change to clean up some of the horrific 
control flow in the code.

jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-12-01 15:09   ` Jeff Law
@ 2023-12-01 16:17     ` Hans-Peter Nilsson
  0 siblings, 0 replies; 35+ messages in thread
From: Hans-Peter Nilsson @ 2023-12-01 16:17 UTC (permalink / raw)
  To: Jeff Law; +Cc: jlaw, gcc-patches, jivanhakobyan9

> Date: Fri, 1 Dec 2023 08:09:08 -0700
> From: Jeff Law <jeffreyalaw@gmail.com>

> On 11/30/23 18:08, Hans-Peter Nilsson wrote:
> >> Date: Sun, 19 Nov 2023 17:47:56 -0700
> >> From: Jeff Law <jlaw@ventanamicro.com>
> > 
> >> Locally we have had this enabled at -O1 and above to encourage testing,
> >> but I'm thinking that for the trunk enabling at -O2 and above is the
> >> right thing to do.
> > 
> > Yes.
> > 
> >> Thoughts, comments, recommendations?
> > 
> > Sounds great!
> > 
> > It'd be nice if its framework can be re-used for
> > target-specific passes, doing quirky sign- or zero-extend-
> > related optimizations (those that are not just sign- or
> > zero-extend removal).  Perhaps most of those opportunities
> > can be implemented as target hooks in this pass.  Definitely
> > not asking for a change, just imagining future improvements.
> > 
> > Also, I haven't followed the thread and its branches, just
> > offering a word encouragement.
> What kind of quirky target things did you have in mind?  If there's 
> overlap with things we need I might be able to find someone to take it 
> on.  Or might be able to suggest how they can be handled.

Sorry, I was hoping I'd not need to substantiate that part
outside the "not just sign- or zero-extend removal". :)

But perhaps: somewhat trivial would be where the
sign/zero-extension is hidden in an unspec, so the target
needs to be consulted regarding possible elimination and how
to do it.

If that doesn't do it, just ignore that part of the comment.
I have nothing substantial besides this pass sounding like
it'd be a great stepping-stone.  I'm having trouble making
up hypothetical cases here, and it probably wouldn't help.

I hope I'll find time to try the latest version but
definitely no promises.

brgds, H-P

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-12-01  1:08 ` Hans-Peter Nilsson
@ 2023-12-01 15:09   ` Jeff Law
  2023-12-01 16:17     ` Hans-Peter Nilsson
  0 siblings, 1 reply; 35+ messages in thread
From: Jeff Law @ 2023-12-01 15:09 UTC (permalink / raw)
  To: Hans-Peter Nilsson, Jeff Law; +Cc: gcc-patches, jivanhakobyan9



On 11/30/23 18:08, Hans-Peter Nilsson wrote:
>> Date: Sun, 19 Nov 2023 17:47:56 -0700
>> From: Jeff Law <jlaw@ventanamicro.com>
> 
>> Locally we have had this enabled at -O1 and above to encourage testing,
>> but I'm thinking that for the trunk enabling at -O2 and above is the
>> right thing to do.
> 
> Yes.
> 
>> Thoughts, comments, recommendations?
> 
> Sounds great!
> 
> It'd be nice if its framework can be re-used for
> target-specific passes, doing quirky sign- or zero-extend-
> related optimizations (those that are not just sign- or
> zero-extend removal).  Perhaps most of those opportunities
> can be implemented as target hooks in this pass.  Definitely
> not asking for a change, just imagining future improvements.
> 
> Also, I haven't followed the thread and its branches, just
> offering a word encouragement.
What kind of quirky target things did you have in mind?  If there's 
overlap with things we need I might be able to find someone to take it 
on.  Or might be able to suggest how they can be handled.


jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  0:47 Jeff Law
                   ` (4 preceding siblings ...)
  2023-11-27 11:30 ` Andrew Stubbs
@ 2023-12-01  1:08 ` Hans-Peter Nilsson
  2023-12-01 15:09   ` Jeff Law
  5 siblings, 1 reply; 35+ messages in thread
From: Hans-Peter Nilsson @ 2023-12-01  1:08 UTC (permalink / raw)
  To: Jeff Law; +Cc: gcc-patches, jivanhakobyan9

> Date: Sun, 19 Nov 2023 17:47:56 -0700
> From: Jeff Law <jlaw@ventanamicro.com>

> Locally we have had this enabled at -O1 and above to encourage testing, 
> but I'm thinking that for the trunk enabling at -O2 and above is the 
> right thing to do.

Yes.

> Thoughts, comments, recommendations?

Sounds great!

It'd be nice if its framework can be re-used for
target-specific passes, doing quirky sign- or zero-extend-
related optimizations (those that are not just sign- or
zero-extend removal).  Perhaps most of those opportunities
can be implemented as target hooks in this pass.  Definitely
not asking for a change, just imagining future improvements.

Also, I haven't followed the thread and its branches, just
offering a word encouragement.

brgds, H-P

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-28 13:36       ` Joern Rennecke
  2023-11-28 14:09         ` Joern Rennecke
@ 2023-11-30 17:33         ` Jeff Law
  1 sibling, 0 replies; 35+ messages in thread
From: Jeff Law @ 2023-11-30 17:33 UTC (permalink / raw)
  To: Joern Rennecke, Jeff Law; +Cc: GCC Patches, richard.sandiford



On 11/28/23 06:36, Joern Rennecke wrote:
> On Mon, 27 Nov 2023 at 20:18, Jeff Law <jlaw@ventanamicro.com> wrote:
>>
>>
>>
>> On 11/27/23 13:03, Richard Sandiford wrote:
>>> Joern Rennecke <joern.rennecke@embecosm.com> writes:
>>>>    On 11/20/23 11:26, Richard Sandiford wrote:
>>>>>> +      /* ?!? What is the point of this adjustment to DST_MASK?  */
>>>>>> +      if (code == PLUS || code == MINUS
>>>>>> +  || code == MULT || code == ASHIFT)
>>>>>> + dst_mask
>>>>>> +  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
>>>>>
>>>>> Yeah, sympathise with the ?!? here :)
>>>> Jeff Law:
>>>>> Inherited.  Like the other bit of magic I think I'll do a test with them
>>>>> pulled out to see if I can make something undesirable trigger.
>>>>
>>>> This represents the carry effect.  Even if the destination only cares about
>>>> some high order bits, you have to consider all lower order bits of the inputs.
>>>>
>>>> For ASHIFT, you could refine this in the case of a constant shift count.
>>>
>>> Ah, right.  Think it would be worth a comment.
>> Definitely.  Wouldn't SIGN_EXTEND have a similar problem?  While we
>> don't care about all the low bits, we do care about that MSB.
> 
> Yes, if bits outside of the MODE_MASK of the input (i.e. higher bits) are
> life in the output, than we want the high bit of the SIGN_EXTEND input live.
So I had a hack in my tree to do this for a while, but it was early in 
the effort to test this across more targets.  It wasn't clear at that 
point in the effort if it was correct or just working around bugs 
elsewhere.  After we had everything working across the various targets I 
pulled it out without ill effects.  It looks like you're handling it in 
your carry_backpropagate changes.

Jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-29 17:37 Joern Rennecke
  2023-11-29 19:13 ` Jivan Hakobyan
@ 2023-11-30 15:37 ` Jeff Law
  1 sibling, 0 replies; 35+ messages in thread
From: Jeff Law @ 2023-11-30 15:37 UTC (permalink / raw)
  To: Joern Rennecke, Jeff Law, GCC Patches



On 11/29/23 10:37, Joern Rennecke wrote:
> Why did you leave out MINUS from safe_for_live_propagation ?
As Jivan noted, mistake on my part.

jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-29 17:37 Joern Rennecke
@ 2023-11-29 19:13 ` Jivan Hakobyan
  2023-11-30 15:37 ` Jeff Law
  1 sibling, 0 replies; 35+ messages in thread
From: Jivan Hakobyan @ 2023-11-29 19:13 UTC (permalink / raw)
  To: Joern Rennecke; +Cc: Jeff Law, GCC Patches

We already noticed it and will roll back in V3



With the best regards
Jivan Hakobyan

> On 29 Nov 2023, at 21:37, Joern Rennecke <joern.rennecke@embecosm.com> wrote:
> 
> Why did you leave out MINUS from safe_for_live_propagation ?

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
@ 2023-11-29 17:37 Joern Rennecke
  2023-11-29 19:13 ` Jivan Hakobyan
  2023-11-30 15:37 ` Jeff Law
  0 siblings, 2 replies; 35+ messages in thread
From: Joern Rennecke @ 2023-11-29 17:37 UTC (permalink / raw)
  To: Jeff Law, GCC Patches

Why did you leave out MINUS from safe_for_live_propagation ?

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-28 13:36       ` Joern Rennecke
@ 2023-11-28 14:09         ` Joern Rennecke
  2023-11-30 17:33         ` Jeff Law
  1 sibling, 0 replies; 35+ messages in thread
From: Joern Rennecke @ 2023-11-28 14:09 UTC (permalink / raw)
  To: Jeff Law; +Cc: GCC Patches, richard.sandiford

On Tue, 28 Nov 2023 at 13:36, Joern Rennecke
<joern.rennecke@embecosm.com> wrote:
 > For the saturating truncation operations, we have the high-to-low
propagation,
> but no low-to-high propagation, so that would be something separate to model.

P.S.:
For unsigned saturating truncation, the propagation from higher to
lower bits only
happens for bits that are truncated off.
e.g. if we truncate a 64 bit value to a 32 bit value, and only the
lower 16 bit of the
result are live, we got an output live mask
0x000000000000ffff     implying an input live mask:
0xffffffff0000ffff

For signed saturating truncation, we got an extra corner case.  For
the same data widths
as above, the value
0xffffffff80000000
truncates to:
0x80000000
but
0x0000000080000000
truncates to:
0x7fffffff

so the top bit that is included in the truncated mode propagates to
all the lower bits
(irrespective if it itself is live in the output), so it is live in
the input if any bit is live in
the output - just like all the truncated-off bits.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-27 20:18     ` Jeff Law
@ 2023-11-28 13:36       ` Joern Rennecke
  2023-11-28 14:09         ` Joern Rennecke
  2023-11-30 17:33         ` Jeff Law
  0 siblings, 2 replies; 35+ messages in thread
From: Joern Rennecke @ 2023-11-28 13:36 UTC (permalink / raw)
  To: Jeff Law; +Cc: GCC Patches, richard.sandiford

On Mon, 27 Nov 2023 at 20:18, Jeff Law <jlaw@ventanamicro.com> wrote:
>
>
>
> On 11/27/23 13:03, Richard Sandiford wrote:
> > Joern Rennecke <joern.rennecke@embecosm.com> writes:
> >>   On 11/20/23 11:26, Richard Sandiford wrote:
> >>>> +      /* ?!? What is the point of this adjustment to DST_MASK?  */
> >>>> +      if (code == PLUS || code == MINUS
> >>>> +  || code == MULT || code == ASHIFT)
> >>>> + dst_mask
> >>>> +  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
> >>>
> >>> Yeah, sympathise with the ?!? here :)
> >> Jeff Law:
> >>> Inherited.  Like the other bit of magic I think I'll do a test with them
> >>> pulled out to see if I can make something undesirable trigger.
> >>
> >> This represents the carry effect.  Even if the destination only cares about
> >> some high order bits, you have to consider all lower order bits of the inputs.
> >>
> >> For ASHIFT, you could refine this in the case of a constant shift count.
> >
> > Ah, right.  Think it would be worth a comment.
> Definitely.  Wouldn't SIGN_EXTEND have a similar problem?  While we
> don't care about all the low bits, we do care about that MSB.

Yes, if bits outside of the MODE_MASK of the input (i.e. higher bits) are
life in the output, than we want the high bit of the SIGN_EXTEND input live.

OTOH, if the output is not wider, then the high bit of the input is
only life if the
same bit of the output is.  That latter point is important because chains of
same-width sign extends are a prime target for this optimization.

SMUL_HIGHPART / UMUL_HIGHPART also have carry-propagation.

With the saturating operations, we also have propagations from high bit
 into lower bits in the saturating case.  I don't think we can do anything
useful for the saturating addition / multiplication operators safely.

For the saturating truncation operations, we have the high-to-low propagation,
but no low-to-high propagation, so that would be something separate to model.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-27 20:03   ` Richard Sandiford
  2023-11-27 20:18     ` Jeff Law
@ 2023-11-28 13:13     ` Joern Rennecke
  1 sibling, 0 replies; 35+ messages in thread
From: Joern Rennecke @ 2023-11-28 13:13 UTC (permalink / raw)
  To: Joern Rennecke, GCC Patches, jlaw, richard.sandiford

On Mon, 27 Nov 2023 at 20:03, Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Joern Rennecke <joern.rennecke@embecosm.com> writes:
> >  On 11/20/23 11:26, Richard Sandiford wrote:
> >>> +      /* ?!? What is the point of this adjustment to DST_MASK?  */
> >>> +      if (code == PLUS || code == MINUS
> >>> +  || code == MULT || code == ASHIFT)
> >>> + dst_mask
> >>> +  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
> >>
> >> Yeah, sympathise with the ?!? here :)
> > Jeff Law:
> >> Inherited.  Like the other bit of magic I think I'll do a test with them
> >> pulled out to see if I can make something undesirable trigger.
> >
> > This represents the carry effect.  Even if the destination only cares about
> > some high order bits, you have to consider all lower order bits of the inputs.
> >
> > For ASHIFT, you could refine this in the case of a constant shift count.
>
> Ah, right.  Think it would be worth a comment.
>
> But I wonder whether we should centralise all this code-specific
> information into a single place.  I.e. rather than having one switch to
> say "PLUS is OK" or "AND is OK", and then having code-specific handling
> elsewhere, we could enumerate how to handle a code.

This carry-back-propagation code is used only in that one place, so I
saw no need to put it in a separate function.
But if we need to add to it (handle SIGN_EXTEND, maybe handle
ASHIFT better) and add lots of comments, it makes sense to put it
into an inlinable function so it doesn't disrupt the flow of reading the
code.

Maybe something like this?

/* X, with code CODE, is an operation for which
safe_for_live_propagation holds true,
   and bits set in MASK are live in the result.  Compute a make of (potentially)
   live bits in the non-constant inputs.  In case of
binop_implies_op2_fully_live
   (e.g. shifts), the computed mask may exclusively pertain to the
first operand.  */

HOST_WIDE_INT
carry_backpropagate (HOST_WIDE_INT mask, enum rtx_code code, rtx x)

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-27 17:36 Joern Rennecke
  2023-11-27 17:57 ` Joern Rennecke
@ 2023-11-28  5:50 ` Jeff Law
  1 sibling, 0 replies; 35+ messages in thread
From: Jeff Law @ 2023-11-28  5:50 UTC (permalink / raw)
  To: Joern Rennecke, GCC Patches; +Cc: jlaw, richard.sandiford



On 11/27/23 10:36, Joern Rennecke wrote:
> On 11/20/23 11:26, Richard Sandiford wrote:
> 
>>> +
>>> +      mask = GET_MODE_MASK (GET_MODE (SUBREG_REG (x))) << bit;
>>> +      if (!mask)
>>> + mask = -0x100000000ULL;
>>
>> Not sure I follow this.  What does the -0x100000000ULL constant indicate?
>> Also, isn't it the mask of the outer register that is shifted, rather
>> than the mask of the inner mode?  E.g. if we have:
> Jeff Law:
>> Inherited.  I should have marked it like the other one as needing
>> investigation.  Probably the fastest way is to just rip it out for a
>> test to see what breaks.
> 
> This is for support of types wider than DImode.
> 
> You unsupported tracking of these values in various places, though.
Because we don't track liveness beyond DImode in this code at all and if 
you don't filter out the larger modes, things will go wonky in a bad 
way.  It may be less of an issue after fixing the big endian correction 
code which was totally broken.

Supporting TI, OI, etc would certainly be possible, but it didn't seem 
worth the effort on current architectures.

Jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-27 20:03   ` Richard Sandiford
@ 2023-11-27 20:18     ` Jeff Law
  2023-11-28 13:36       ` Joern Rennecke
  2023-11-28 13:13     ` Joern Rennecke
  1 sibling, 1 reply; 35+ messages in thread
From: Jeff Law @ 2023-11-27 20:18 UTC (permalink / raw)
  To: Joern Rennecke, GCC Patches, richard.sandiford



On 11/27/23 13:03, Richard Sandiford wrote:
> Joern Rennecke <joern.rennecke@embecosm.com> writes:
>>   On 11/20/23 11:26, Richard Sandiford wrote:
>>>> +      /* ?!? What is the point of this adjustment to DST_MASK?  */
>>>> +      if (code == PLUS || code == MINUS
>>>> +  || code == MULT || code == ASHIFT)
>>>> + dst_mask
>>>> +  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
>>>
>>> Yeah, sympathise with the ?!? here :)
>> Jeff Law:
>>> Inherited.  Like the other bit of magic I think I'll do a test with them
>>> pulled out to see if I can make something undesirable trigger.
>>
>> This represents the carry effect.  Even if the destination only cares about
>> some high order bits, you have to consider all lower order bits of the inputs.
>>
>> For ASHIFT, you could refine this in the case of a constant shift count.
> 
> Ah, right.  Think it would be worth a comment.
Definitely.  Wouldn't SIGN_EXTEND have a similar problem?  While we 
don't care about all the low bits, we do care about that MSB.


> 
> But I wonder whether we should centralise all this code-specific
> information into a single place.  I.e. rather than having one switch to
> say "PLUS is OK" or "AND is OK", and then having code-specific handling
> elsewhere, we could enumerate how to handle a code.
Yea.  That's where I was starting to go with the code which indicates we 
can't necessarily narrow a shift count.  ie, what are the properties of 
the opcodes and how do they translate into the bits we need clear from 
LIVENOW (for sets) and the bits we need to make live (for uses).

Jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-22 17:59   ` Jeff Law
@ 2023-11-27 20:15     ` Richard Sandiford
  0 siblings, 0 replies; 35+ messages in thread
From: Richard Sandiford @ 2023-11-27 20:15 UTC (permalink / raw)
  To: Jeff Law; +Cc: gcc-patches, Jivan Hakobyan

[Sorry for the slow response]

Jeff Law <jlaw@ventanamicro.com> writes:
> On 11/20/23 11:26, Richard Sandiford wrote:
>> 
>> 	  scalar_int_mode outer_mode;
>> 	  if (!is_a<scalar_int_mode> (GET_MODE (x), &outer_mode)
>> 	      || GET_MODE_BITSIZE (outer_mode) > 64)
>> 	    continue;
> Wouldn't we also want to verify that the size is constant, or is it the 
> case that all the variable cases are vector (and would we want to 
> actually depend on that)?

Yeah, all the variable cases are vectors.  We don't support variable-length
scalars at the moment.  (And I hope that never changes. :))

>>> +	      /* We will handle the other operand of a binary operator
>>> +		 at the bottom of the loop by resetting Y.  */
>>> +	      if (BINARY_P (src))
>>> +		y = XEXP (src, 0);
>> 
>> What about UNARY_P, given that NOT is included in the codes above?
> We'll break that inner for(;;) then iterate into the subobject, marking 
> the relevant bits live.  FWIW, the control flow of this code continues 
> to be my biggest concern from a maintenance standpoint.  Figuring it out 
> was a major pain and I've tried to document what is and what is not 
> safe.  But it's still painful to walk through.
>
> I pondered if note_uses/note_stores would be better, but concluded we'd 
> just end up with a ton of state objects to carry around and reasoning 
> about that would be just as hard.

Feels like it would be good to handle the top-level structure explicitly,
(PARALLELs, SETs, SET_SRCs, etc.), then fall back to iteration at the
point that we can no longer do better then "all registers in this expression
are fully live".

If we do that, rtx_properties might be an alternative to explicit
iteration.  The advantage of that is that it can handle destination
and sources as the top-level expression, and records whether each
register is itself a destination or source.

Thanks,
Richard

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-27 17:57 ` Joern Rennecke
@ 2023-11-27 20:03   ` Richard Sandiford
  2023-11-27 20:18     ` Jeff Law
  2023-11-28 13:13     ` Joern Rennecke
  0 siblings, 2 replies; 35+ messages in thread
From: Richard Sandiford @ 2023-11-27 20:03 UTC (permalink / raw)
  To: Joern Rennecke; +Cc: GCC Patches, jlaw

Joern Rennecke <joern.rennecke@embecosm.com> writes:
>  On 11/20/23 11:26, Richard Sandiford wrote:
>>> +      /* ?!? What is the point of this adjustment to DST_MASK?  */
>>> +      if (code == PLUS || code == MINUS
>>> +  || code == MULT || code == ASHIFT)
>>> + dst_mask
>>> +  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
>>
>> Yeah, sympathise with the ?!? here :)
> Jeff Law:
>> Inherited.  Like the other bit of magic I think I'll do a test with them
>> pulled out to see if I can make something undesirable trigger.
>
> This represents the carry effect.  Even if the destination only cares about
> some high order bits, you have to consider all lower order bits of the inputs.
>
> For ASHIFT, you could refine this in the case of a constant shift count.

Ah, right.  Think it would be worth a comment.

But I wonder whether we should centralise all this code-specific
information into a single place.  I.e. rather than having one switch to
say "PLUS is OK" or "AND is OK", and then having code-specific handling
elsewhere, we could enumerate how to handle a code.

Thanks,
Richard

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-27 17:36 Joern Rennecke
@ 2023-11-27 17:57 ` Joern Rennecke
  2023-11-27 20:03   ` Richard Sandiford
  2023-11-28  5:50 ` Jeff Law
  1 sibling, 1 reply; 35+ messages in thread
From: Joern Rennecke @ 2023-11-27 17:57 UTC (permalink / raw)
  To: GCC Patches; +Cc: jlaw, richard.sandiford

 On 11/20/23 11:26, Richard Sandiford wrote:
>> +      /* ?!? What is the point of this adjustment to DST_MASK?  */
>> +      if (code == PLUS || code == MINUS
>> +  || code == MULT || code == ASHIFT)
>> + dst_mask
>> +  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
>
> Yeah, sympathise with the ?!? here :)
Jeff Law:
> Inherited.  Like the other bit of magic I think I'll do a test with them
> pulled out to see if I can make something undesirable trigger.

This represents the carry effect.  Even if the destination only cares about
some high order bits, you have to consider all lower order bits of the inputs.

For ASHIFT, you could refine this in the case of a constant shift count.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
@ 2023-11-27 17:36 Joern Rennecke
  2023-11-27 17:57 ` Joern Rennecke
  2023-11-28  5:50 ` Jeff Law
  0 siblings, 2 replies; 35+ messages in thread
From: Joern Rennecke @ 2023-11-27 17:36 UTC (permalink / raw)
  To: GCC Patches; +Cc: jlaw, richard.sandiford

On 11/20/23 11:26, Richard Sandiford wrote:

>> +
>> +      mask = GET_MODE_MASK (GET_MODE (SUBREG_REG (x))) << bit;
>> +      if (!mask)
>> + mask = -0x100000000ULL;
>
> Not sure I follow this.  What does the -0x100000000ULL constant indicate?
> Also, isn't it the mask of the outer register that is shifted, rather
> than the mask of the inner mode?  E.g. if we have:
Jeff Law:
> Inherited.  I should have marked it like the other one as needing
> investigation.  Probably the fastest way is to just rip it out for a
> test to see what breaks.

This is for support of types wider than DImode.

You unsupported tracking of these values in various places, though.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-27 11:30 ` Andrew Stubbs
@ 2023-11-27 16:16   ` Jeff Law
  0 siblings, 0 replies; 35+ messages in thread
From: Jeff Law @ 2023-11-27 16:16 UTC (permalink / raw)
  To: Andrew Stubbs, Jeff Law, gcc-patches; +Cc: Jivan Hakobyan



On 11/27/23 04:30, Andrew Stubbs wrote:
> I tried this patch for AMD GCN. We have a similar problem with excess 
> extends, but also for vector modes. Each lane has a minimum 32 bits and 
> GCC's normal assumption is that vector registers have precisely the 
> number of bits they need, so the amdgcn backend patterns have explicit 
> sign/zero extends for QImode and HImode for the instructions that might 
> need it. It would be cool if this pass could eliminate some of those, 
> but at this point I just wanted to check it didn't break anything.
> 
> Unfortunately I get a crash building libgcc:
I strongly suspect this is the same thing that was originally reported 
by Xi Ruoyao.  Just getting back on top of things after the holiday. 
I'll get the V2 posted today.

Jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-26 16:42     ` rep.dot.nop
@ 2023-11-27 16:14       ` Jeff Law
  0 siblings, 0 replies; 35+ messages in thread
From: Jeff Law @ 2023-11-27 16:14 UTC (permalink / raw)
  To: rep.dot.nop, gcc-patches, Dimitar Dimitrov, Jeff Law; +Cc: Jivan Hakobyan



On 11/26/23 09:42, rep.dot.nop@gmail.com wrote:
> On 22 November 2023 23:23:41 CET, Jeff Law <jeffreyalaw@gmail.com> wrote:
>>
>>
>> On 11/20/23 11:56, Dimitar Dimitrov wrote:
>>> On Sun, Nov 19, 2023 at 05:47:56PM -0700, Jeff Law wrote:
>>> ...
> 
>>>> +      enum rtx_code xcode = GET_CODE (x);
>>>> +      if (xcode == SET)
>>>> +	{
>>>> +	  const_rtx dst = SET_DEST (x);
>>>> +	  rtx src = SET_SRC (x);
>>>> +	  const_rtx y;
>>>> +	  unsigned HOST_WIDE_INT bit = 0;
>>>> +
>>>> +	  /* The code of the RHS of a SET.  */
>>>> +	  enum rtx_code code = GET_CODE (src);
>>>> +
>>>> +	  /* ?!? How much of this should mirror SET handling, potentially
>>>> +	     being shared?   */
>>>> +	  if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))
>>>
>>> Shouldn't SUBREG_P be checked first like:
>>> 	  if (SUBREG_P (dst) && SUBREG_BYTE (dst).is_constant ())
>> Yes, absolutely. It'll be fixed in the next update.
>>
>> This also highlighted that I never added pru-elf to the configurations in my tester.  I remember thinking that it needed to be added, but obviously that mental TODO got lost.  I've just fixed that.
> 
> 
> And please drop the superfluous enum from rtx_code while at it?
Sure.
jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  0:47 Jeff Law
                   ` (3 preceding siblings ...)
  2023-11-20 18:56 ` Dimitar Dimitrov
@ 2023-11-27 11:30 ` Andrew Stubbs
  2023-11-27 16:16   ` Jeff Law
  2023-12-01  1:08 ` Hans-Peter Nilsson
  5 siblings, 1 reply; 35+ messages in thread
From: Andrew Stubbs @ 2023-11-27 11:30 UTC (permalink / raw)
  To: Jeff Law, gcc-patches; +Cc: Jivan Hakobyan

I tried this patch for AMD GCN. We have a similar problem with excess 
extends, but also for vector modes. Each lane has a minimum 32 bits and 
GCC's normal assumption is that vector registers have precisely the 
number of bits they need, so the amdgcn backend patterns have explicit 
sign/zero extends for QImode and HImode for the instructions that might 
need it. It would be cool if this pass could eliminate some of those, 
but at this point I just wanted to check it didn't break anything.

Unfortunately I get a crash building libgcc:

> during RTL pass: ext_dce
> conftest.c: In function 'main':
> conftest.c:16:1: internal compiler error: RTL check: expected code 'subreg', have 'reg' in ext_dce_process_uses, at ext-dce.cc:421
>    16 | }
>       | ^
> 0x8c7aa3 rtl_check_failed_code1(rtx_def const*, rtx_code, char const*, int, char const*)
>         /scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/rtl.cc:770
> 0xa76a27 ext_dce_process_uses
>         /scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:421
> 0x1aeca5c ext_dce_process_bb
>         /scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:651
> 0x1aeca5c ext_dce
>         /scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:802
> 0x1aeca5c execute
>         /scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:868
> Please submit a full bug report, with preprocessed source (by using -freport-bug).
> Please include the complete backtrace with any bug report.
> See <https://gcc.gnu.org/bugs/> for instructions.
> configure:3812: $? = 1
> configure: failed program was:
> | /* confdefs.h */
> | #define PACKAGE_NAME "GNU C Runtime Library"
> | #define PACKAGE_TARNAME "libgcc"
> | #define PACKAGE_VERSION "1.0"
> | #define PACKAGE_STRING "GNU C Runtime Library 1.0"
> | #define PACKAGE_BUGREPORT ""
> | #define PACKAGE_URL "http://www.gnu.org/software/libgcc/"
> | /* end confdefs.h.  */
> |
> | int
> | main ()
> | {
> |
> |   ;
> |   return 0;
> | }

I have no idea if this is an unhandled case or a case that shouldn't 
exist, but it's trying to do "SUBREG_BYTE (dst).is_constant ()" for a 
very simple instruction:

(set (reg/i:SI 168 v8)
     (const_int 0 [0]))

This seems pretty basic to me, but there is some hidden complexity. It's 
possible that the pass has correctly identified that "v8" can hold more 
that just a single integer: in this case we're using a single lane of a 
vector register. No extend is needed here though. The register has 2048 
bits, but only 32 are active in SImode.

Andrew

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-22 22:23   ` Jeff Law
@ 2023-11-26 16:42     ` rep.dot.nop
  2023-11-27 16:14       ` Jeff Law
  0 siblings, 1 reply; 35+ messages in thread
From: rep.dot.nop @ 2023-11-26 16:42 UTC (permalink / raw)
  To: gcc-patches, Jeff Law, Dimitar Dimitrov, Jeff Law
  Cc: gcc-patches, Jivan Hakobyan

On 22 November 2023 23:23:41 CET, Jeff Law <jeffreyalaw@gmail.com> wrote:
>
>
>On 11/20/23 11:56, Dimitar Dimitrov wrote:
>> On Sun, Nov 19, 2023 at 05:47:56PM -0700, Jeff Law wrote:
>> ...

>>> +      enum rtx_code xcode = GET_CODE (x);
>>> +      if (xcode == SET)
>>> +	{
>>> +	  const_rtx dst = SET_DEST (x);
>>> +	  rtx src = SET_SRC (x);
>>> +	  const_rtx y;
>>> +	  unsigned HOST_WIDE_INT bit = 0;
>>> +
>>> +	  /* The code of the RHS of a SET.  */
>>> +	  enum rtx_code code = GET_CODE (src);
>>> +
>>> +	  /* ?!? How much of this should mirror SET handling, potentially
>>> +	     being shared?   */
>>> +	  if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))
>> 
>> Shouldn't SUBREG_P be checked first like:
>> 	  if (SUBREG_P (dst) && SUBREG_BYTE (dst).is_constant ())
>Yes, absolutely. It'll be fixed in the next update.
>
>This also highlighted that I never added pru-elf to the configurations in my tester.  I remember thinking that it needed to be added, but obviously that mental TODO got lost.  I've just fixed that.


And please drop the superfluous enum from rtx_code while at it?

TIA

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20 18:56 ` Dimitar Dimitrov
@ 2023-11-22 22:23   ` Jeff Law
  2023-11-26 16:42     ` rep.dot.nop
  0 siblings, 1 reply; 35+ messages in thread
From: Jeff Law @ 2023-11-22 22:23 UTC (permalink / raw)
  To: Dimitar Dimitrov, Jeff Law; +Cc: gcc-patches, Jivan Hakobyan



On 11/20/23 11:56, Dimitar Dimitrov wrote:
> On Sun, Nov 19, 2023 at 05:47:56PM -0700, Jeff Law wrote:
> ...
>> +/* Process uses in INSN.  Set appropriate bits in LIVENOW for any chunks of
>> +   pseudos that become live, potentially filtering using bits from LIVE_TMP.
>> +
>> +   If MODIFIED is true, then optimize sign/zero extensions to SUBREGs when
>> +   the extended bits are never read and mark pseudos which had extensions
>> +   eliminated in CHANGED_PSEUDOS.  */
>> +
>> +static void
>> +ext_dce_process_uses (rtx insn, bitmap livenow, bitmap live_tmp,
>> +		      bool modify, bitmap changed_pseudos)
>> +{
>> +  /* A nonlocal goto implicitly uses the frame pointer.  */
>> +  if (JUMP_P (insn) && find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
>> +    {
>> +      bitmap_set_range (livenow, FRAME_POINTER_REGNUM * 4, 4);
>> +      if (!HARD_FRAME_POINTER_IS_FRAME_POINTER)
>> +	bitmap_set_range (livenow, HARD_FRAME_POINTER_REGNUM * 4, 4);
>> +    }
>> +
>> +  subrtx_var_iterator::array_type array_var;
>> +  rtx pat = PATTERN (insn);
>> +  FOR_EACH_SUBRTX_VAR (iter, array_var, pat, NONCONST)
>> +    {
>> +      /* An EXPR_LIST (from call fusage) ends in NULL_RTX.  */
>> +      rtx x = *iter;
>> +      if (x == NULL_RTX)
>> +	continue;
>> +
>> +      /* So the basic idea in this FOR_EACH_SUBRTX_VAR loop is to
>> +	 handle SETs explicitly, possibly propagating live information
>> +	 into the uses.
>> +
>> +	 We may continue the loop at various points which will cause
>> +	 iteration into the next level of RTL.  Breaking from the loop
>> +	 is never safe as it can lead us to fail to process some of the
>> +	 RTL and thus not make objects live when necessary.  */
>> +      enum rtx_code xcode = GET_CODE (x);
>> +      if (xcode == SET)
>> +	{
>> +	  const_rtx dst = SET_DEST (x);
>> +	  rtx src = SET_SRC (x);
>> +	  const_rtx y;
>> +	  unsigned HOST_WIDE_INT bit = 0;
>> +
>> +	  /* The code of the RHS of a SET.  */
>> +	  enum rtx_code code = GET_CODE (src);
>> +
>> +	  /* ?!? How much of this should mirror SET handling, potentially
>> +	     being shared?   */
>> +	  if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))
> 
> Shouldn't SUBREG_P be checked first like:
> 	  if (SUBREG_P (dst) && SUBREG_BYTE (dst).is_constant ())
Yes, absolutely. It'll be fixed in the next update.

This also highlighted that I never added pru-elf to the configurations 
in my tester.  I remember thinking that it needed to be added, but 
obviously that mental TODO got lost.  I've just fixed that.

jeff


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20 18:26 ` Richard Sandiford
@ 2023-11-22 17:59   ` Jeff Law
  2023-11-27 20:15     ` Richard Sandiford
  0 siblings, 1 reply; 35+ messages in thread
From: Jeff Law @ 2023-11-22 17:59 UTC (permalink / raw)
  To: gcc-patches, Jivan Hakobyan, richard.sandiford



On 11/20/23 11:26, Richard Sandiford wrote:

>> +
>> +/* If we know the destination of CODE only uses some low bits
>> +   (say just the QI bits of an SI operation), then return true
>> +   if we can propagate the need for just the subset of bits
>> +   from the destination to the sources.  */
>> +
>> +static bool
>> +safe_for_live_propagation (rtx_code code)
>> +{
>> +  /* First handle rtx classes which as a whole are known to
>> +     be either safe or unsafe.  */
>> +  switch (GET_RTX_CLASS (code))
>> +    {
>> +      case RTX_OBJ:
>> +	return true;
>> +
>> +      case RTX_COMPARE:
>> +      case RTX_COMM_COMPARE:
>> +      case RTX_TERNARY:
> 
> I suppose operands 1 and 2 of an IF_THEN_ELSE would be safe.
Yes.  The only downside is we'd need to special case IF_THEN_ELSE 
because it doesn't apply to operand 0.  Right now we're pretty 
conservative with anything other than binary codes.  Comment added about 
the possibility of handling I-T-E as well.



> 
> This made me wonder: is this safe for !TRULY_NOOP_TRUNCATION?  But I
> suppose it is.  What !TRULY_NOOP_TRUNCATION models is that the target
> mode has a canonical form that must be maintained, and wouldn't be by
> a plain subreg.  So TRULY_NOOP_TRUNCATION is more of an issue for
> consumers of the liveness information, rather than the computing the
> liveness information itself.
Really interesting question.  I think ext-dce is safe.  As you note this 
is more a consumer side question and on the consumer side we don't muck 
with TRUNCATE at all.




>> +    case SS_TRUNCATE:
>> +    case US_TRUNCATE:
>> +    case PLUS:
>> +    case MULT:
>> +    case SS_MULT:
>> +    case US_MULT:
>> +    case SMUL_HIGHPART:
>> +    case UMUL_HIGHPART:
>> +    case AND:
>> +    case IOR:
>> +    case XOR:
>> +    case SS_PLUS:
>> +    case US_PLUS:
> 
> I don't think it's safe to propagate through saturating ops.
> They don't have the property that (x op y)%z == (x%z op x%z)%z
Yea, you're probably right.  Removed.



>> +
>> +	  /* We don't support vector destinations or destinations
>> +	     wider than DImode.   It is safe to continue this loop.
>> +	     At worst, it will leave things live which could have
>> +	     been made dead.  */
>> +	  if (VECTOR_MODE_P (GET_MODE (x)) || GET_MODE (x) > E_DImode)
>> +	    continue;
> 
> The E_DImode comparison hard-codes an assumption about the order of
> the mode enum.  How about using something like:
Guilty as charged :-)  Not surprised you called that out.



> 
> 	  scalar_int_mode outer_mode;
> 	  if (!is_a<scalar_int_mode> (GET_MODE (x), &outer_mode)
> 	      || GET_MODE_BITSIZE (outer_mode) > 64)
> 	    continue;
Wouldn't we also want to verify that the size is constant, or is it the 
case that all the variable cases are vector (and would we want to 
actually depend on that)?

> 
> The other continues use iter.skip_subrtxes (); when continuing.
> I don't think it matters for correctness whether we do that or not,
> since SETs and CLOBBERs shouldn't be nested.  But skipping should
> be faster.
My thought on not skipping the sub-rtxs in this case was to make sure we 
processed things like memory addresses which could have embedded side 
effects.  It probably doesn't matter in practice though.

> 
> Maybe it would be worth splitting the SET/CLOBBER code out into > a subfunction, to make the loop iteration easier to handle?
Yea, it could use another round of such work.  In the originalm set and 
use handling were one big function which drove me nuts.



>> +	      /* Transfer all the LIVENOW bits for X into LIVE_TMP.  */
>> +	      HOST_WIDE_INT rn = REGNO (SUBREG_REG (x));
>> +	      for (HOST_WIDE_INT i = 4 * rn; i < 4 * rn + 4; i++)
>> +		if (bitmap_bit_p (livenow, i))
>> +		  bitmap_set_bit (live_tmp, i);
>> +
>> +	      /* The mode of the SUBREG tells us how many bits we can
>> +		 clear.  */
>> +	      machine_mode mode = GET_MODE (x);
>> +	      HOST_WIDE_INT size = GET_MODE_SIZE (mode).to_constant ();
>> +	      bitmap_clear_range (livenow, 4 * rn, size);
> 
> Is clearing SIZE bytes correct?  Feels like it should be clearing
> something like log2 (size) + 1 instead.
Yea, I think you're right.  Fixed.



>> +	      bit = SUBREG_BYTE (x).to_constant () * BITS_PER_UNIT;
>> +	      if (WORDS_BIG_ENDIAN)
>> +		bit = (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (x))).to_constant ()
>> +		       - GET_MODE_BITSIZE (GET_MODE (x)).to_constant () - bit);
>> +
>> +	      /* Catch big endian correctness issues rather than triggering
>> +		 undefined behavior.  */
>> +	      gcc_assert (bit < sizeof (HOST_WIDE_INT) * 8);
> 
> This could probably use subreg_lsb, to avoid the inline endianness adjustment.
That's the routine I was looking for!  The original totally mucked up 
the endianness adjustment and I kept thinking we must have an existing 
routine to do this for us but didn't find it immediately, so I just 
banged out a trivial endianness adjustment.

> 
>> +
>> +	      mask = GET_MODE_MASK (GET_MODE (SUBREG_REG (x))) << bit;
>> +	      if (!mask)
>> +		mask = -0x100000000ULL;
> 
> Not sure I follow this.  What does the -0x100000000ULL constant indicate?
> Also, isn't it the mask of the outer register that is shifted, rather
> than the mask of the inner mode?  E.g. if we have:
Inherited.  I should have marked it like the other one as needing 
investigation.  Probably the fastest way is to just rip it out for a 
test to see what breaks.

>> +	  /* Some ports generate (clobber (const_int)).  */
>> +	  else if (CONST_INT_P (x))
> 
> Ugh.  What on earth do they do that for?  I thought that (with const0_rtx)
> was reserved as combine's "something went wrong" representation.
I stumbled over it on avr I think.  I didn't dig into why other than to 
note it occurred in an insn pattern as opposed to rtl templates for 
generation from define_expands, splitters, etc.




>> +/* INSN has a sign/zero extended source inside SET that we will
>> +   try to turn into a SUBREG.  */
>> +static void
>> +ext_dce_try_optimize_insn (rtx_insn *insn, rtx set, bitmap changed_pseudos)
>> +{
>> +  rtx src = SET_SRC (set);
>> +  rtx inner = XEXP (src, 0);
>> +
>> +  /* Avoid (subreg (mem)) and other constructs which may are valid RTL, but
> 
> s/are// or s/may are/may be/
Fixed.  "may be".


>> +
>> +/* Process uses in INSN.  Set appropriate bits in LIVENOW for any chunks of
>> +   pseudos that become live, potentially filtering using bits from LIVE_TMP.
>> +
>> +   If MODIFIED is true, then optimize sign/zero extensions to SUBREGs when
> 
> MODIFY
Fixed.


>> +
>> +	  /* ?!? How much of this should mirror SET handling, potentially
>> +	     being shared?   */
>> +	  if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))
>> +	    {
>> +	      bit = SUBREG_BYTE (dst).to_constant () * BITS_PER_UNIT;
>> +	      if (WORDS_BIG_ENDIAN)
>> +		bit = (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (dst))).to_constant ()
>> +		       - GET_MODE_BITSIZE (GET_MODE (dst)).to_constant () - bit);
> 
> Same subreg_lsb thing here.
Yea, I'll fix them all.  There's 2 or 3 IIRC.


>> +
>> +	      /* ?!? What is the point of this adjustment to DST_MASK?  */
>> +	      if (code == PLUS || code == MINUS
>> +		  || code == MULT || code == ASHIFT)
>> +		dst_mask
>> +		  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
> 
> Yeah, sympathise with the ?!? here :)
Inherited.  Like the other bit of magic I think I'll do a test with them 
pulled out to see if I can make something undesirable trigger.

> 
>> +	      /* We will handle the other operand of a binary operator
>> +		 at the bottom of the loop by resetting Y.  */
>> +	      if (BINARY_P (src))
>> +		y = XEXP (src, 0);
> 
> What about UNARY_P, given that NOT is included in the codes above?
We'll break that inner for(;;) then iterate into the subobject, marking 
the relevant bits live.  FWIW, the control flow of this code continues 
to be my biggest concern from a maintenance standpoint.  Figuring it out 
was a major pain and I've tried to document what is and what is not 
safe.  But it's still painful to walk through.

I pondered if note_uses/note_stores would be better, but concluded we'd 
just end up with a ton of state objects to carry around and reasoning 
about that would be just as hard.


> 
>> +	      else
>> +		y = src;
>> +
>> +	      /* We're inside a SET and want to process the source operands
>> +		 making things live.  Breaking from this loop will cause
>> +		 the iterator to work on sub-rtxs, so it is safe to break
>> +		 if we see something we don't know how to handle.  */
>> +	      for (;;)
>> +		{
>> +		  /* Strip an outer STRICT_LOW_PART or paradoxical subreg.
>> +		     That has the effect of making the whole referenced
>> +		     register live.  We might be able to avoid that for
>> +		     STRICT_LOW_PART at some point.  */
>> +		  if (GET_CODE (x) == STRICT_LOW_PART
>> +		      || paradoxical_subreg_p (x))
>> +		    x = XEXP (x, 0);
> 
> Isn't "x" still the set at this point?  If it was intended to be "y",
> then I thought STRICT_LOW_PART was for destinations only.
I'll have to go back through the history.  Something definitely does not 
look right here.

> 

>> +		  else if (!CONSTANT_P (y))
>> +		    break;
>> +		  /* We might have (ashift (const_int 1) (reg...)) */
>> +		  else if (CONSTANT_P (y)
>> +			   && binop_implies_op2_fully_live (GET_CODE (src)))
> 
> The CONSTANT_P (y) condition is redundant given the above.
> But couldn't the binop_implies_op2_fully_live be shared by both paths?
> Maybe it could be tested...
Yea, that was a fun little twisty maze.  THe CONSTANT_P is definitely 
reundant.  I had a bunch of fixes related to shifts in the code 
initially, but Jivan and I slowly cleaned up and refactored over time, 
but I think there's still some work to do here.



>> +
>> +  do
>> +    {
>> +      qin = qout = worklist;
>> +
>> +      /* Put every block on the worklist.  */
>> +      int *rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
>> +      int n = inverted_rev_post_order_compute (cfun, rpo);
> 
> Does this need to be recalculated each iteration, or could it be done
> outside the loop?
I think it can/should move outside the loop.  Done.


> 
> Was also wondering if this could use a DF_BACKWARD df_simple_dataflow
> (with its precomputed order), rather than doing the worklist management
> manually.  If df_simple_dataflow isn't efficient enough then we should
> try to fix that...
I'll have to take a look.  I actually never really had to look at this 
code as all the fixes/cleanups were in the sets/uses processing side. 
The liveness iteration and such "just worked".

Thanks a ton for all the feedback.  I've got several of the fixes done 
and I'll throw them into testing today and get a V2 out as a few people 
are playing with this.  V2 won't have fixes for everything you've 
pointed out, so it probably won't be worth the time to review the V2 
knowing a V3 is inevitable.

jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  0:47 Jeff Law
                   ` (2 preceding siblings ...)
  2023-11-20 18:26 ` Richard Sandiford
@ 2023-11-20 18:56 ` Dimitar Dimitrov
  2023-11-22 22:23   ` Jeff Law
  2023-11-27 11:30 ` Andrew Stubbs
  2023-12-01  1:08 ` Hans-Peter Nilsson
  5 siblings, 1 reply; 35+ messages in thread
From: Dimitar Dimitrov @ 2023-11-20 18:56 UTC (permalink / raw)
  To: Jeff Law; +Cc: gcc-patches, Jivan Hakobyan

On Sun, Nov 19, 2023 at 05:47:56PM -0700, Jeff Law wrote:
...
> +/* Process uses in INSN.  Set appropriate bits in LIVENOW for any chunks of
> +   pseudos that become live, potentially filtering using bits from LIVE_TMP.
> +
> +   If MODIFIED is true, then optimize sign/zero extensions to SUBREGs when
> +   the extended bits are never read and mark pseudos which had extensions
> +   eliminated in CHANGED_PSEUDOS.  */
> +
> +static void
> +ext_dce_process_uses (rtx insn, bitmap livenow, bitmap live_tmp,
> +		      bool modify, bitmap changed_pseudos)
> +{
> +  /* A nonlocal goto implicitly uses the frame pointer.  */
> +  if (JUMP_P (insn) && find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
> +    {
> +      bitmap_set_range (livenow, FRAME_POINTER_REGNUM * 4, 4);
> +      if (!HARD_FRAME_POINTER_IS_FRAME_POINTER)
> +	bitmap_set_range (livenow, HARD_FRAME_POINTER_REGNUM * 4, 4);
> +    }
> +
> +  subrtx_var_iterator::array_type array_var;
> +  rtx pat = PATTERN (insn);
> +  FOR_EACH_SUBRTX_VAR (iter, array_var, pat, NONCONST)
> +    {
> +      /* An EXPR_LIST (from call fusage) ends in NULL_RTX.  */
> +      rtx x = *iter;
> +      if (x == NULL_RTX)
> +	continue;
> +
> +      /* So the basic idea in this FOR_EACH_SUBRTX_VAR loop is to
> +	 handle SETs explicitly, possibly propagating live information
> +	 into the uses.
> +
> +	 We may continue the loop at various points which will cause
> +	 iteration into the next level of RTL.  Breaking from the loop
> +	 is never safe as it can lead us to fail to process some of the
> +	 RTL and thus not make objects live when necessary.  */
> +      enum rtx_code xcode = GET_CODE (x);
> +      if (xcode == SET)
> +	{
> +	  const_rtx dst = SET_DEST (x);
> +	  rtx src = SET_SRC (x);
> +	  const_rtx y;
> +	  unsigned HOST_WIDE_INT bit = 0;
> +
> +	  /* The code of the RHS of a SET.  */
> +	  enum rtx_code code = GET_CODE (src);
> +
> +	  /* ?!? How much of this should mirror SET handling, potentially
> +	     being shared?   */
> +	  if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))

Shouldn't SUBREG_P be checked first like:
	  if (SUBREG_P (dst) && SUBREG_BYTE (dst).is_constant ())

On pru-unknown-elf with RTL checking I get:

conftest.c:16:1: internal compiler error: RTL check: expected code 'subreg', have 'reg' in ext_dce_process_uses, at ext-dce.cc:421
   16 | }
      | ^
0x158b39e rtl_check_failed_code1(rtx_def const*, rtx_code, char const*, int, char const*)
        /mnt/nvme/dinux/local-workspace/gcc/gcc/rtl.cc:770
0x223a486 ext_dce_process_uses
        /mnt/nvme/dinux/local-workspace/gcc/gcc/ext-dce.cc:421
0x223b5ba ext_dce_process_bb
        /mnt/nvme/dinux/local-workspace/gcc/gcc/ext-dce.cc:651
0x223bdd3 ext_dce
        /mnt/nvme/dinux/local-workspace/gcc/gcc/ext-dce.cc:802
0x223c0ac execute
        /mnt/nvme/dinux/local-workspace/gcc/gcc/ext-dce.cc:868


Regards,
Dimitar

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  0:47 Jeff Law
  2023-11-20  1:22 ` Oleg Endo
  2023-11-20  2:23 ` Xi Ruoyao
@ 2023-11-20 18:26 ` Richard Sandiford
  2023-11-22 17:59   ` Jeff Law
  2023-11-20 18:56 ` Dimitar Dimitrov
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 35+ messages in thread
From: Richard Sandiford @ 2023-11-20 18:26 UTC (permalink / raw)
  To: Jeff Law; +Cc: gcc-patches, Jivan Hakobyan

Jeff Law <jlaw@ventanamicro.com> writes:
> This is work originally started by Joern @ Embecosm.
>
> There's been a long standing sense that we're generating too many 
> sign/zero extensions on the RISC-V port.  REE is useful, but it's really 
> focused on a relatively narrow part of the extension problem.
>
> What Joern's patch does is introduce a new pass which tracks liveness of 
> chunks of pseudo regs.  Specifically it tracks bits 0..7, 8..15, 16..31 
> and 32..63.
>
> If it encounters a sign/zero extend that sets bits that are never read, 
> then it replaces the sign/zero extension with a narrowing subreg.  The 
> narrowing subreg usually gets eliminated by subsequent passes (it's just 
> a copy after all).
>
> Jivan has done some analysis and found that it eliminates roughly 1% of 
> the dynamic instruction stream for x264 as well as some redundant 
> extensions in the coremark benchmark (both on rv64).  In my own testing 
> as I worked through issues on other architectures I clearly saw it 
> helping in various places within GCC itself or in the testsuite.
>
> The basic structure is to first do a fairly standard liveness analysis 
> on the chunks, seeding original state with the liveness data from DF. 
> Once that's stable, we do a final pass to identify the useless 
> extensions and transform them into narrowing subregs.
>
> A few key points to remember.
>
> For destination processing it is always safe to ignore a destination. 
> Ignoring a destination merely means that whatever was live after the 
> given insn will continue to be live before the insn.  What is not safe 
> is to clear a bit in the LIVENOW bitmap for a destination chunk that is 
> not set.  This comes into play with things like STRICT_LOW_PART.
>
> For source processing the safe thing to do is to set all the chunks in a 
> register as live.  It is never safe to fail to process a source operand.
>
> When a destination object is not fully live, we try to transfer that 
> limited liveness to the source operands.  So for example if bits 16..63 
> are dead in a destination of a PLUS, we need not mark bits 16..63 as 
> live for the source operands.  We have to be careful -- consider a shift 
> count on a target without SHIFT_COUNT_TRUNCATED set.  So we have both a 
> list of RTL codes where we can transfer liveness and a few codes where 
> one of the operands may need to be fully live (ex, a shift count) while 
> the other input may not need to be fully live (value left shifted).
>
> Locally we have had this enabled at -O1 and above to encourage testing, 
> but I'm thinking that for the trunk enabling at -O2 and above is the 
> right thing to do.
>
> This has (of course) been tested on rv64.  It's also been bootstrapped 
> and regression tested on x86.  Bootstrap and regression tested (C only) 
> for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped 
> and regression tested on ppc, hppa and s390x (C only for those as well). 
>   It's also been tested on the various crosses in my tester.  So we've 
> got reasonable coverage of 16, 32 and 64 bit targets, big and little 
> endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other 
> oddities.
>
> The included tests are for RISC-V only because not all targets are going 
> to have extraneous extensions.   There's tests from coremark, x264 and 
> GCC's bz database.  It probably wouldn't be hard to add aarch64 
> testscases.  The BZs listed are improved by this patch for aarch64.
>
> Given the amount of work Jivan and I have done, I'm not comfortable 
> self-approving at this time.  I'd much rather have another set of eyes 
> on the code.  Hopefully the code is documented well enough for that to 
> be useful exercise.
>
> So, no need to work from Pago Pago for this patch.  I may make another 
> attempt at the eswin conditional move work while working virtually in 
> Pago Pago though.

Very nice!

> Thoughts, comments, recommendations?
>
> Jeff
>
>
> 	PR target/95650
> 	PR rtl-optimization/96031
> 	PR rtl-optimization/104387
> 	PR rtl-optimization/111384
>
> gcc/
> 	* Makefile.in (OBJS): Add ext-dce.o.
> 	* common.opt (ext-dce): Add new option.
> 	* df-scan.cc (df_get_exit_block_use_set): No longer static.
> 	* df.h (df_get_exit_block_use_set): Prototype.
> 	* ext-dce.cc: New file.
> 	* passes.def: Add ext-dce before combine.
> 	* tree-pass.h (make_pass_ext_dce): Prototype..
>
> gcc/testsuite
> 	* gcc.target/riscv/core_bench_list.c: New test.
> 	* gcc.target/riscv/core_init_matrix.c: New test.
> 	* gcc.target/riscv/core_list_init.c: New test.
> 	* gcc.target/riscv/matrix_add_const.c: New test.
> 	* gcc.target/riscv/mem-extend.c: New test.
> 	* gcc.target/riscv/pr111384.c: New test.
> diff --git a/gcc/Makefile.in b/gcc/Makefile.in
> index 753f2f36618..af6f1415507 100644
> --- a/gcc/Makefile.in
> +++ b/gcc/Makefile.in
> @@ -1451,6 +1451,7 @@ OBJS = \
>  	explow.o \
>  	expmed.o \
>  	expr.o \
> +	ext-dce.o \
>  	fibonacci_heap.o \
>  	file-prefix-map.o \
>  	final.o \
> diff --git a/gcc/common.opt b/gcc/common.opt
> index d21db5d4a20..141dfdf14fd 100644
> --- a/gcc/common.opt
> +++ b/gcc/common.opt
> @@ -3766,4 +3766,8 @@ fipa-ra
>  Common Var(flag_ipa_ra) Optimization
>  Use caller save register across calls if possible.
>  
> +fext-dce
> +Common Var(flag_ext_dce, 1) Optimization Init(0)
> +Perform dead code elimination on zero and sign extensions with special dataflow analysis.
> +
>  ; This comment is to ensure we retain the blank line above.
> diff --git a/gcc/df-scan.cc b/gcc/df-scan.cc
> index 9515740728c..87729ab0f44 100644
> --- a/gcc/df-scan.cc
> +++ b/gcc/df-scan.cc
> @@ -78,7 +78,6 @@ static void df_get_eh_block_artificial_uses (bitmap);
>  
>  static void df_record_entry_block_defs (bitmap);
>  static void df_record_exit_block_uses (bitmap);
> -static void df_get_exit_block_use_set (bitmap);
>  static void df_get_entry_block_def_set (bitmap);
>  static void df_grow_ref_info (struct df_ref_info *, unsigned int);
>  static void df_ref_chain_delete_du_chain (df_ref);
> @@ -3642,7 +3641,7 @@ df_epilogue_uses_p (unsigned int regno)
>  
>  /* Set the bit for regs that are considered being used at the exit. */
>  
> -static void
> +void
>  df_get_exit_block_use_set (bitmap exit_block_uses)
>  {
>    unsigned int i;
> diff --git a/gcc/df.h b/gcc/df.h
> index 402657a7076..abcbb097734 100644
> --- a/gcc/df.h
> +++ b/gcc/df.h
> @@ -1091,6 +1091,7 @@ extern bool df_epilogue_uses_p (unsigned int);
>  extern void df_set_regs_ever_live (unsigned int, bool);
>  extern void df_compute_regs_ever_live (bool);
>  extern void df_scan_verify (void);
> +extern void df_get_exit_block_use_set (bitmap);
>  
>  \f
>  /*----------------------------------------------------------------------------
> diff --git a/gcc/ext-dce.cc b/gcc/ext-dce.cc
> new file mode 100644
> index 00000000000..a6fe2683dcd
> --- /dev/null
> +++ b/gcc/ext-dce.cc
> @@ -0,0 +1,880 @@
> +/* RTL dead zero/sign extension (code) elimination.
> +   Copyright (C) 2000-2022 Free Software Foundation, Inc.
> +
> +This file is part of GCC.
> +
> +GCC is free software; you can redistribute it and/or modify it under
> +the terms of the GNU General Public License as published by the Free
> +Software Foundation; either version 3, or (at your option) any later
> +version.
> +
> +GCC is distributed in the hope that it will be useful, but WITHOUT ANY
> +WARRANTY; without even the implied warranty of MERCHANTABILITY or
> +FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
> +for more details.
> +
> +You should have received a copy of the GNU General Public License
> +along with GCC; see the file COPYING3.  If not see
> +<http://www.gnu.org/licenses/>.  */
> +
> +#include "config.h"
> +#include "system.h"
> +#include "coretypes.h"
> +#include "backend.h"
> +#include "rtl.h"
> +#include "tree.h"
> +#include "memmodel.h"
> +#include "insn-config.h"
> +#include "emit-rtl.h"
> +#include "recog.h"
> +#include "cfganal.h"
> +#include "tree-pass.h"
> +#include "cfgrtl.h"
> +#include "rtl-iter.h"
> +#include "df.h"
> +#include "print-rtl.h"
> +
> +/* We consider four bit groups for liveness:
> +   bit 0..7   (least significant byte)
> +   bit 8..15  (second least significant byte)
> +   bit 16..31
> +   bit 32..BITS_PER_WORD-1  */
> +
> +/* Note this pass could be used to narrow memory loads too.  It's
> +   not clear if that's profitable or not in general.  */
> +
> +#define UNSPEC_P(X) (GET_CODE (X) == UNSPEC || GET_CODE (X) == UNSPEC_VOLATILE)
> +
> +/* If we know the destination of CODE only uses some low bits
> +   (say just the QI bits of an SI operation), then return true
> +   if we can propagate the need for just the subset of bits
> +   from the destination to the sources.  */
> +
> +static bool
> +safe_for_live_propagation (rtx_code code)
> +{
> +  /* First handle rtx classes which as a whole are known to
> +     be either safe or unsafe.  */
> +  switch (GET_RTX_CLASS (code))
> +    {
> +      case RTX_OBJ:
> +	return true;
> +
> +      case RTX_COMPARE:
> +      case RTX_COMM_COMPARE:
> +      case RTX_TERNARY:

I suppose operands 1 and 2 of an IF_THEN_ELSE would be safe.

> +	return false;
> +
> +      default:
> +	break;
> +    }
> +
> +  /* What's left are specific codes.  We only need to identify those
> +     which are safe.   */
> +  switch (code)
> +    {
> +    /* These are trivially safe.  */
> +    case SUBREG:
> +    case NOT:
> +    case ZERO_EXTEND:
> +    case SIGN_EXTEND:
> +    case TRUNCATE:

This made me wonder: is this safe for !TRULY_NOOP_TRUNCATION?  But I
suppose it is.  What !TRULY_NOOP_TRUNCATION models is that the target
mode has a canonical form that must be maintained, and wouldn't be by
a plain subreg.  So TRULY_NOOP_TRUNCATION is more of an issue for
consumers of the liveness information, rather than the computing the
liveness information itself.

> +    case SS_TRUNCATE:
> +    case US_TRUNCATE:
> +    case PLUS:
> +    case MULT:
> +    case SS_MULT:
> +    case US_MULT:
> +    case SMUL_HIGHPART:
> +    case UMUL_HIGHPART:
> +    case AND:
> +    case IOR:
> +    case XOR:
> +    case SS_PLUS:
> +    case US_PLUS:

I don't think it's safe to propagate through saturating ops.
They don't have the property that (x op y)%z == (x%z op x%z)%z

> +      return true;
> +
> +    /* We can propagate for the shifted operand, but not the shift
> +       count.  The count is handled specially.  */
> +    case SS_ASHIFT:
> +    case US_ASHIFT:
> +    case ASHIFT:
> +      return true;
> +
> +    /* There may be other safe codes.  If so they can be added
> +       individually when discovered.  */
> +    default:
> +      return false;
> +    }
> +}
> +
> +/* Clear bits in LIVENOW and set bits in LIVE_TMP for objects
> +   set/clobbered by INSN.
> +
> +   Conceptually it is always safe to ignore a particular destination
> +   here as that will result in more chunks of data being considered
> +   live.  That's what happens when we "continue" the main loop when
> +   we see something we don't know how to handle such as a vector
> +   mode destination.
> +
> +   The more accurate we are in identifying what objects (and chunks
> +   within an object) are set by INSN, the more aggressive the
> +   optimziation phase during use handling will be.  */
> +
> +static void
> +ext_dce_process_sets (rtx insn, bitmap livenow, bitmap live_tmp)
> +{
> +  subrtx_iterator::array_type array;
> +  rtx pat = PATTERN (insn);
> +  FOR_EACH_SUBRTX (iter, array, pat, NONCONST)
> +    {
> +      const_rtx x = *iter;
> +
> +      /* An EXPR_LIST (from call fusage) ends in NULL_RTX.  */
> +      if (x == NULL_RTX)
> +	continue;
> +
> +      if (UNSPEC_P (x))
> +	continue;
> +
> +      if (GET_CODE (x) == SET || GET_CODE (x) == CLOBBER)
> +	{
> +	  unsigned bit = 0;
> +	  x = SET_DEST (x);
> +
> +	  /* We don't support vector destinations or destinations
> +	     wider than DImode.   It is safe to continue this loop.
> +	     At worst, it will leave things live which could have
> +	     been made dead.  */
> +	  if (VECTOR_MODE_P (GET_MODE (x)) || GET_MODE (x) > E_DImode)
> +	    continue;

The E_DImode comparison hard-codes an assumption about the order of
the mode enum.  How about using something like:

	  scalar_int_mode outer_mode;
	  if (!is_a<scalar_int_mode> (GET_MODE (x), &outer_mode)
	      || GET_MODE_BITSIZE (outer_mode) > 64)
	    continue;

The other continues use iter.skip_subrtxes (); when continuing.
I don't think it matters for correctness whether we do that or not,
since SETs and CLOBBERs shouldn't be nested.  But skipping should
be faster.

Maybe it would be worth splitting the SET/CLOBBER code out into
a subfunction, to make the loop iteration easier to handle?

> +
> +	  /* We could have (strict_low_part (subreg ...)).  We can not just
> +	     strip the STRICT_LOW_PART as that would result in clearing
> +	     some bits in LIVENOW that are still live.  So process the
> +	     STRICT_LOW_PART specially.  */
> +	  if (GET_CODE (x) == STRICT_LOW_PART)
> +	    {
> +	      x = XEXP (x, 0);
> +
> +	      /* The only valid operand of a STRICT_LOW_PART is a non
> +		 paradoxical SUBREG.  */
> +	      gcc_assert (SUBREG_P (x)
> +			  && !paradoxical_subreg_p (x)
> +			  && SUBREG_BYTE (x).is_constant ());
> +
> +	      /* I think we should always see a REG here.  But let's
> +		 be sure.  */
> +	      gcc_assert (REG_P (SUBREG_REG (x)));
> +
> +	      /* We don't track values larger than DImode.  */
> +	      gcc_assert (GET_MODE (x) <= E_DImode);
> +
> +	      /* But the inner mode might be larger, just punt for
> +		 that case.  Remember, we can not just continue to process
> +		 the inner RTXs due to the STRICT_LOW_PART.  */
> +	      if (GET_MODE (SUBREG_REG (x)) > E_DImode)

Same comment here about the enum comparisons.

> +		{
> +		  /* Skip the subrtxs of the STRICT_LOW_PART.  We can't
> +		     process them because it'll set objects as no longer
> +		     live when they are in fact still live.  */
> +		  iter.skip_subrtxes ();
> +		  continue;
> +		}
> +
> +	      /* Transfer all the LIVENOW bits for X into LIVE_TMP.  */
> +	      HOST_WIDE_INT rn = REGNO (SUBREG_REG (x));
> +	      for (HOST_WIDE_INT i = 4 * rn; i < 4 * rn + 4; i++)
> +		if (bitmap_bit_p (livenow, i))
> +		  bitmap_set_bit (live_tmp, i);
> +
> +	      /* The mode of the SUBREG tells us how many bits we can
> +		 clear.  */
> +	      machine_mode mode = GET_MODE (x);
> +	      HOST_WIDE_INT size = GET_MODE_SIZE (mode).to_constant ();
> +	      bitmap_clear_range (livenow, 4 * rn, size);

Is clearing SIZE bytes correct?  Feels like it should be clearing
something like log2 (size) + 1 instead.

> +
> +	      /* We have fully processed this destination.  */
> +	      iter.skip_subrtxes ();
> +	      continue;
> +	    }
> +
> +	  /* We can safely strip a paradoxical subreg.  The inner mode will
> +	     be narrower than the outer mode.  We'll clear fewer bits in
> +	     LIVENOW than we'd like, but that's always safe.  */
> +	  if (paradoxical_subreg_p (x))
> +	    x = XEXP (x, 0);
> +
> +	  /* If we have a SUBREG that is too wide, just continue the loop
> +	     and let the iterator go down into SUBREG_REG.  */
> +	  if (SUBREG_P (x) && GET_MODE (SUBREG_REG (x)) > E_DImode)
> +	    continue;
> +
> +	  /* Phase one of destination handling.  First remove any wrapper
> +	     such as SUBREG or ZERO_EXTRACT.  */
> +	  unsigned HOST_WIDE_INT mask = GET_MODE_MASK (GET_MODE (x));
> +	  if (SUBREG_P (x)
> +	      && !paradoxical_subreg_p (x)
> +	      && SUBREG_BYTE (x).is_constant ())
> +	    {
> +	      bit = SUBREG_BYTE (x).to_constant () * BITS_PER_UNIT;
> +	      if (WORDS_BIG_ENDIAN)
> +		bit = (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (x))).to_constant ()
> +		       - GET_MODE_BITSIZE (GET_MODE (x)).to_constant () - bit);
> +
> +	      /* Catch big endian correctness issues rather than triggering
> +		 undefined behavior.  */
> +	      gcc_assert (bit < sizeof (HOST_WIDE_INT) * 8);

This could probably use subreg_lsb, to avoid the inline endianness adjustment.

> +
> +	      mask = GET_MODE_MASK (GET_MODE (SUBREG_REG (x))) << bit;
> +	      if (!mask)
> +		mask = -0x100000000ULL;

Not sure I follow this.  What does the -0x100000000ULL constant indicate?
Also, isn't it the mask of the outer register that is shifted, rather
than the mask of the inner mode?  E.g. if we have:

   (subreg:QI (reg:DI R) 4)

on a little-endian 32-bit target, the bits being changed are
0xff << 4*8 and the discarded bits are 0xffffff00 << 4*8.

> +	      x = SUBREG_REG (x);
> +	    }
> +
> +	  if (GET_CODE (x) == ZERO_EXTRACT)
> +	    {
> +	      /* If either the size or the start position is unknown,
> +		 then assume we know nothing about what is overwritten.
> +		 This is overly conservative, but safe.  */
> +	      if (!CONST_INT_P (XEXP (x, 1)) || !CONST_INT_P (XEXP (x, 2)))
> +		continue;
> +	      mask = (1ULL << INTVAL (XEXP (x, 1))) - 1;
> +	      bit = INTVAL (XEXP (x, 2));
> +	      if (BITS_BIG_ENDIAN)
> +		bit = (GET_MODE_BITSIZE (GET_MODE (x))
> +		       - INTVAL (XEXP (x, 1)) - bit).to_constant ();
> +	      x = XEXP (x, 0);
> +
> +	      /* We can certainly get (zero_extract (subreg ...)).  The
> +		 mode of the zero_extract and location should be sufficient
> +		 and we can just strip the SUBREG.  */
> +	      if (GET_CODE (x) == SUBREG)
> +		x = SUBREG_REG (x);
> +	    }
> +
> +	  /* BIT >= 64 indicates something went horribly wrong.  */
> +	  gcc_assert (bit <= 63);
> +
> +	  /* Now handle the actual object that was changed.  */
> +	  if (REG_P (x))
> +	    {
> +	      /* Transfer the appropriate bits from LIVENOW into
> +		 LIVE_TMP.  */
> +	      HOST_WIDE_INT rn = REGNO (x);
> +	      for (HOST_WIDE_INT i = 4 * rn; i < 4 * rn + 4; i++)
> +		if (bitmap_bit_p (livenow, i))
> +		  bitmap_set_bit (live_tmp, i);
> +
> +	      /* Now clear the bits known written by this instruction.
> +		 Note that BIT need not be a power of two, consider a
> +		 ZERO_EXTRACT destination.  */
> +	      int start = (bit < 8 ? 0 : bit < 16 ? 1 : bit < 32 ? 2 : 3);
> +	      int end = ((mask & ~0xffffffffULL) ? 4
> +			 : (mask & 0xffff0000ULL) ? 3
> +			 : (mask & 0xff00) ? 2 : 1);
> +	      bitmap_clear_range (livenow, 4 * rn + start, end - start);
> +	    }
> +	  /* Some ports generate (clobber (const_int)).  */
> +	  else if (CONST_INT_P (x))

Ugh.  What on earth do they do that for?  I thought that (with const0_rtx)
was reserved as combine's "something went wrong" representation.

> +	    continue;
> +	  else
> +	    gcc_assert (CALL_P (insn)
> +			|| MEM_P (x)
> +			|| x == pc_rtx
> +			|| GET_CODE (x) == SCRATCH);
> +
> +	  iter.skip_subrtxes ();
> +	}
> +      else if (GET_CODE (x) == COND_EXEC)
> +	{
> +	  /* This isn't ideal, but may not be so bad in practice.  */
> +	  iter.skip_subrtxes ();
> +	}
> +    }
> +}
> +
> +/* INSN has a sign/zero extended source inside SET that we will
> +   try to turn into a SUBREG.  */
> +static void
> +ext_dce_try_optimize_insn (rtx_insn *insn, rtx set, bitmap changed_pseudos)
> +{
> +  rtx src = SET_SRC (set);
> +  rtx inner = XEXP (src, 0);
> +
> +  /* Avoid (subreg (mem)) and other constructs which may are valid RTL, but

s/are// or s/may are/may be/

> +     not useful for this optimization.  */
> +  if (!REG_P (inner) && !SUBREG_P (inner))
> +    return;
> +
> +  rtx new_pattern;
> +  if (dump_file)
> +    {
> +      fprintf (dump_file, "Processing insn:\n");
> +      dump_insn_slim (dump_file, insn);
> +      fprintf (dump_file, "Trying to simplify pattern:\n");
> +      print_rtl_single (dump_file, SET_SRC (set));
> +    }
> +
> +  new_pattern = simplify_gen_subreg (GET_MODE (src), inner,
> +				     GET_MODE (inner), 0);
> +  /* simplify_gen_subreg may fail in which case NEW_PATTERN will be NULL.
> +     We must not pass that as a replacement pattern to validate_change.  */
> +  if (new_pattern)
> +    {
> +      int ok = validate_change (insn, &SET_SRC (set), new_pattern, false);
> +
> +      if (ok)
> +	bitmap_set_bit (changed_pseudos, REGNO (SET_DEST (set)));
> +
> +      if (dump_file)
> +	{
> +	  if (ok)
> +	    fprintf (dump_file, "Successfully transformed to:\n");
> +	  else
> +	    fprintf (dump_file, "Failed transformation to:\n");
> +
> +	  print_rtl_single (dump_file, new_pattern);
> +	  fprintf (dump_file, "\n");
> +	}
> +    }
> +  else
> +    {
> +      if (dump_file)
> +	fprintf (dump_file, "Unable to generate valid SUBREG expression.\n");
> +    }
> +}
> +
> +/* Some operators imply that their second operand is fully live,
> +   regardless of how many bits in the output are live.  An example
> +   would be the shift count on a target without SHIFT_COUNT_TRUCATED
> +   defined.
> +
> +   Return TRUE if CODE is such an operator.  FALSE otherwise.  */
> +
> +static bool
> +binop_implies_op2_fully_live (rtx_code code)
> +{
> +  switch (code)
> +    {
> +      case ASHIFT:
> +      case LSHIFTRT:
> +      case ASHIFTRT:
> +      case ROTATE:
> +      case ROTATERT:
> +	return !SHIFT_COUNT_TRUNCATED;
> +
> +      default:
> +	return false;
> +    }
> +}
> +
> +/* Process uses in INSN.  Set appropriate bits in LIVENOW for any chunks of
> +   pseudos that become live, potentially filtering using bits from LIVE_TMP.
> +
> +   If MODIFIED is true, then optimize sign/zero extensions to SUBREGs when

MODIFY

> +   the extended bits are never read and mark pseudos which had extensions
> +   eliminated in CHANGED_PSEUDOS.  */
> +
> +static void
> +ext_dce_process_uses (rtx insn, bitmap livenow, bitmap live_tmp,
> +		      bool modify, bitmap changed_pseudos)
> +{
> +  /* A nonlocal goto implicitly uses the frame pointer.  */
> +  if (JUMP_P (insn) && find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
> +    {
> +      bitmap_set_range (livenow, FRAME_POINTER_REGNUM * 4, 4);
> +      if (!HARD_FRAME_POINTER_IS_FRAME_POINTER)
> +	bitmap_set_range (livenow, HARD_FRAME_POINTER_REGNUM * 4, 4);
> +    }
> +
> +  subrtx_var_iterator::array_type array_var;
> +  rtx pat = PATTERN (insn);
> +  FOR_EACH_SUBRTX_VAR (iter, array_var, pat, NONCONST)
> +    {
> +      /* An EXPR_LIST (from call fusage) ends in NULL_RTX.  */
> +      rtx x = *iter;
> +      if (x == NULL_RTX)
> +	continue;
> +
> +      /* So the basic idea in this FOR_EACH_SUBRTX_VAR loop is to
> +	 handle SETs explicitly, possibly propagating live information
> +	 into the uses.
> +
> +	 We may continue the loop at various points which will cause
> +	 iteration into the next level of RTL.  Breaking from the loop
> +	 is never safe as it can lead us to fail to process some of the
> +	 RTL and thus not make objects live when necessary.  */
> +      enum rtx_code xcode = GET_CODE (x);
> +      if (xcode == SET)
> +	{
> +	  const_rtx dst = SET_DEST (x);
> +	  rtx src = SET_SRC (x);
> +	  const_rtx y;
> +	  unsigned HOST_WIDE_INT bit = 0;
> +
> +	  /* The code of the RHS of a SET.  */
> +	  enum rtx_code code = GET_CODE (src);
> +
> +	  /* ?!? How much of this should mirror SET handling, potentially
> +	     being shared?   */
> +	  if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))
> +	    {
> +	      bit = SUBREG_BYTE (dst).to_constant () * BITS_PER_UNIT;
> +	      if (WORDS_BIG_ENDIAN)
> +		bit = (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (dst))).to_constant ()
> +		       - GET_MODE_BITSIZE (GET_MODE (dst)).to_constant () - bit);

Same subreg_lsb thing here.

> +	      if (bit >= HOST_BITS_PER_WIDE_INT)
> +		bit = HOST_BITS_PER_WIDE_INT - 1;
> +	      dst = SUBREG_REG (dst);
> +	    }
> +	  else if (GET_CODE (dst) == ZERO_EXTRACT
> +		   || GET_CODE (dst) == STRICT_LOW_PART)
> +	    dst = XEXP (dst, 0);
> +
> +	  /* Main processing of the uses.  Two major goals here.
> +
> +	     First, we want to try and propagate liveness (or the lack
> +	     thereof) from the destination register to the source
> +	     register(s).
> +
> +	     Second, if the source is an extension, try to optimize
> +	     it into a SUBREG.  The SUBREG form indicates we don't
> +	     care about the upper bits and will usually be copy
> +	     propagated away.
> +
> +	     If we fail to handle something in here, the expectation
> +	     is the iterator will dive into the sub-components and
> +	     mark all the chunks in any found REGs as live.  */
> +	  if (REG_P (dst) && safe_for_live_propagation (code))
> +	    {
> +	      /* Create a mask representing the bits of this output
> +		 operand that are live after this insn.  We can use
> +		 this information to refine the live in state of
> +		 inputs to this insn in many cases.
> +
> +		 We have to do this on a per SET basis, we might have
> +		 an INSN with multiple SETS, some of which can narrow
> +		 the source operand liveness, some of which may not.  */
> +	      unsigned HOST_WIDE_INT dst_mask = 0;
> +	      HOST_WIDE_INT rn = REGNO (dst);
> +	      unsigned HOST_WIDE_INT mask_array[]
> +		= { 0xff, 0xff00, 0xffff0000ULL, -0x100000000ULL };
> +	      for (int i = 0; i < 4; i++)
> +		if (bitmap_bit_p (live_tmp, 4 * rn + i))
> +		  dst_mask |= mask_array[i];
> +	      dst_mask >>= bit;
> +
> +	      /* ??? Could also handle ZERO_EXTRACT / SIGN_EXTRACT
> +		 of the source specially to improve optimization.  */
> +	      if (code == SIGN_EXTEND || code == ZERO_EXTEND)
> +		{
> +		  rtx inner = XEXP (src, 0);
> +		  unsigned HOST_WIDE_INT src_mask
> +		    = GET_MODE_MASK (GET_MODE (inner));
> +
> +		  /* DST_MASK could be zero if we had something in the SET
> +		     that we couldn't handle.  */
> +		  if (modify && dst_mask && (dst_mask & ~src_mask) == 0)
> +		    ext_dce_try_optimize_insn (as_a <rtx_insn *> (insn),
> +					       x, changed_pseudos);
> +
> +		  dst_mask &= src_mask;
> +		  src = XEXP (src, 0);
> +		  code = GET_CODE (src);
> +		}
> +
> +	      /* Optimization is done at this point.  We just want to make
> +		 sure everything that should get marked as live is marked
> +		 from here onward.  */
> +
> +	      /* ?!? What is the point of this adjustment to DST_MASK?  */
> +	      if (code == PLUS || code == MINUS
> +		  || code == MULT || code == ASHIFT)
> +		dst_mask
> +		  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;

Yeah, sympathise with the ?!? here :)

> +	      /* We will handle the other operand of a binary operator
> +		 at the bottom of the loop by resetting Y.  */
> +	      if (BINARY_P (src))
> +		y = XEXP (src, 0);

What about UNARY_P, given that NOT is included in the codes above?

> +	      else
> +		y = src;
> +
> +	      /* We're inside a SET and want to process the source operands
> +		 making things live.  Breaking from this loop will cause
> +		 the iterator to work on sub-rtxs, so it is safe to break
> +		 if we see something we don't know how to handle.  */
> +	      for (;;)
> +		{
> +		  /* Strip an outer STRICT_LOW_PART or paradoxical subreg.
> +		     That has the effect of making the whole referenced
> +		     register live.  We might be able to avoid that for
> +		     STRICT_LOW_PART at some point.  */
> +		  if (GET_CODE (x) == STRICT_LOW_PART
> +		      || paradoxical_subreg_p (x))
> +		    x = XEXP (x, 0);

Isn't "x" still the set at this point?  If it was intended to be "y",
then I thought STRICT_LOW_PART was for destinations only.

> +		  else if (SUBREG_P (y) && SUBREG_BYTE (y).is_constant ())
> +		    {
> +		      /* For anything but (subreg (reg)), break the inner loop
> +			 and process normally (conservatively).  */
> +		      if (!REG_P (SUBREG_REG (y)))
> +			break;
> +		      bit = (SUBREG_BYTE (y).to_constant () * BITS_PER_UNIT);
> +		      if (WORDS_BIG_ENDIAN)
> +			bit = (GET_MODE_BITSIZE
> +			       (GET_MODE (SUBREG_REG (y))).to_constant ()
> +				- GET_MODE_BITSIZE (GET_MODE (y)).to_constant () - bit);
> +		      if (dst_mask)
> +			{
> +			  dst_mask <<= bit;
> +			  if (!dst_mask)
> +			    dst_mask = -0x100000000ULL;
> +			}
> +		      y = SUBREG_REG (y);
> +		    }
> +
> +		  if (REG_P (y))
> +		    {
> +		      /* We have found the use of a register.  We need to mark
> +			 the appropriate chunks of the register live.  The mode
> +			 of the REG is a starting point.  We may refine that
> +			 based on what chunks in the output were live.  */
> +		      rn = 4 * REGNO (y);
> +		      unsigned HOST_WIDE_INT tmp_mask = dst_mask;
> +
> +		      /* If the RTX code for the SET_SRC is not one we can
> +			 propagate destination liveness through, then just
> +			 set the mask to the mode's mask.  */
> +		      if (!safe_for_live_propagation (code))
> +			tmp_mask = GET_MODE_MASK (GET_MODE (y));
> +
> +		      if (tmp_mask & 0xff)
> +			bitmap_set_bit (livenow, rn);
> +		      if (tmp_mask & 0xff00)
> +			bitmap_set_bit (livenow, rn + 1);
> +		      if (tmp_mask & 0xffff0000ULL)
> +			bitmap_set_bit (livenow, rn + 2);
> +		      if (tmp_mask & -0x100000000ULL)
> +			bitmap_set_bit (livenow, rn + 3);
> +
> +		      /* Some operators imply their second operand
> +			 is fully live, break this inner loop which
> +			 will cause the iterator to descent into the
> +			 sub-rtxs outside the SET processing.  */
> +		      if (binop_implies_op2_fully_live (code))
> +			break;
> +		    }
> +		  else if (!CONSTANT_P (y))
> +		    break;
> +		  /* We might have (ashift (const_int 1) (reg...)) */
> +		  else if (CONSTANT_P (y)
> +			   && binop_implies_op2_fully_live (GET_CODE (src)))

The CONSTANT_P (y) condition is redundant given the above.
But couldn't the binop_implies_op2_fully_live be shared by both paths?
Maybe it could be tested...

> +		    break;
> +
> +		  /* If this was anything but a binary operand, break the inner
> +		     loop.  This is conservatively correct as it will cause the
> +		     iterator to look at the sub-rtxs outside the SET context.  */
> +		  if (!BINARY_P (src))
> +		    break;

...here, after we've checked for binary ops.

> +		  /* We processed the first operand of a binary operator.  Now
> +		     handle the second.  */
> +		  y = XEXP (src, 1), src = pc_rtx;
> +		}
> +
> +	      /* These are leaf nodes, no need to iterate down into them.  */
> +	      if (REG_P (y) || CONSTANT_P (y))
> +		iter.skip_subrtxes ();
> +	    }
> +	}
> +      /* If we are reading the low part of a SUBREG, then we can
> +	 refine liveness of the input register, otherwise let the
> +	 iterator continue into SUBREG_REG.  */
> +      else if (xcode == SUBREG
> +	       && REG_P (SUBREG_REG (x))
> +	       && subreg_lowpart_p (x)
> +	       && GET_MODE_BITSIZE (GET_MODE (x)).is_constant ()
> +	       && GET_MODE_BITSIZE (GET_MODE (x)).to_constant () <= 32)
> +	{
> +	  HOST_WIDE_INT size = GET_MODE_BITSIZE (GET_MODE  (x)).to_constant ();
> +	  HOST_WIDE_INT rn = 4 * REGNO (SUBREG_REG (x));
> +
> +	  bitmap_set_bit (livenow, rn);
> +	  if (size > 8)
> +	    bitmap_set_bit (livenow, rn + 1);
> +	  if (size > 16)
> +	    bitmap_set_bit (livenow, rn + 2);
> +	  if (size > 32)
> +	    bitmap_set_bit (livenow, rn + 3);
> +	  iter.skip_subrtxes ();
> +	}
> +      /* If we have a register reference that is not otherwise handled,
> +	 just assume all the chunks are live.  */
> +      else if (REG_P (x))
> +	bitmap_set_range (livenow, REGNO (x) * 4, 4);
> +    }
> +}
> +
> +/* Process a single basic block BB with current liveness information
> +   in LIVENOW, returning updated liveness information.
> +
> +   If MODIFY is true, then this is the last pass and unnecessary
> +   extensions should be eliminated when possible.  If an extension
> +   is removed, the source pseudo is marked in CHANGED_PSEUDOS.  */
> +
> +static bitmap
> +ext_dce_process_bb (basic_block bb, bitmap livenow,
> +		    bool modify, bitmap changed_pseudos)
> +{
> +  rtx_insn *insn;
> +
> +  FOR_BB_INSNS_REVERSE (bb, insn)
> +    {
> +      if (!NONDEBUG_INSN_P (insn))
> +	continue;
> +
> +      /* Live-out state of the destination of this insn.  We can
> +	 use this to refine the live-in state of the sources of
> +	 this insn in many cases.  */
> +      bitmap live_tmp = BITMAP_ALLOC (NULL);
> +
> +      /* First process any sets/clobbers in INSN.  */
> +      ext_dce_process_sets (insn, livenow, live_tmp);
> +
> +      /* CALL_INSNs need processing their fusage data.  */
> +      if (GET_CODE (insn) == CALL_INSN)
> +	ext_dce_process_sets (CALL_INSN_FUNCTION_USAGE (insn),
> +			      livenow, live_tmp);
> +
> +      /* And now uses, optimizing away SIGN/ZERO extensions as we go.  */
> +      ext_dce_process_uses (insn, livenow, live_tmp, modify, changed_pseudos);
> +
> +      /* And process fusage data for the use as well.  */
> +      if (GET_CODE (insn) == CALL_INSN)
> +	{
> +	  if (!FAKE_CALL_P (insn))
> +	    bitmap_set_range (livenow, STACK_POINTER_REGNUM * 4, 4);
> +
> +	  /* If this is not a call to a const fucntion, then assume it
> +	     can read any global register.  */
> +	  if (!RTL_CONST_CALL_P (insn))
> +	    for (unsigned i = 0; i < FIRST_PSEUDO_REGISTER; i++)
> +	      if (global_regs[i])
> +		bitmap_set_range (livenow, i * 4, 4);
> +
> +	  ext_dce_process_uses (CALL_INSN_FUNCTION_USAGE (insn),
> +				livenow, live_tmp, modify, changed_pseudos);
> +	}
> +
> +      BITMAP_FREE (live_tmp);
> +    }
> +  return livenow;
> +}
> +
> +/* We optimize away sign/zero extensions in this pass and replace
> +   them with SUBREGs indicating certain bits are don't cares.
> +
> +   This changes the SUBREG_PROMOTED_VAR_P state of the object.
> +   It is fairly painful to fix this on the fly, so we have
> +   recorded which pseudos are affected and we look for SUBREGs
> +   of those pseudos and fix them up.  */
> +
> +static void
> +reset_subreg_promoted_p (bitmap changed_pseudos)
> +{
> +  /* If we removed an extension, that changed the promoted state
> +     of the destination of that extension.  Thus we need to go
> +     find any SUBREGs that reference that pseudo and adjust their
> +     SUBREG_PROMOTED_P state.  */
> +  for (rtx_insn *insn = get_insns(); insn; insn = NEXT_INSN (insn))
> +    {
> +      if (!NONDEBUG_INSN_P (insn))
> +	continue;
> +
> +      rtx pat = PATTERN (insn);
> +      subrtx_var_iterator::array_type array;
> +      FOR_EACH_SUBRTX_VAR (iter, array, pat, NONCONST)
> +	{
> +	  rtx sub = *iter;
> +
> +	  /* We only care about SUBREGs.  */
> +	  if (GET_CODE (sub) != SUBREG)
> +	    continue;
> +
> +	  const_rtx x = SUBREG_REG (sub);
> +
> +	  /* We only care if the inner object is a REG.  */
> +	  if (!REG_P (x))
> +	    continue;
> +
> +	  /* And only if the SUBREG is a promoted var.  */
> +	  if (!SUBREG_PROMOTED_VAR_P (sub))
> +	    continue;
> +
> +	  if (bitmap_bit_p (changed_pseudos, REGNO (x)))
> +	    SUBREG_PROMOTED_VAR_P (sub) = 0;
> +	}
> +    }
> +}
> +
> +/* Use lifetime analyis to identify extensions that set bits that
> +   are never read.  Turn such extensions into SUBREGs instead which
> +   can often be propagated away.  */
> +
> +static void
> +ext_dce (void)
> +{
> +  basic_block bb, *worklist, *qin, *qout, *qend;
> +  unsigned int qlen;
> +  vec<bitmap_head> livein;
> +  bitmap livenow;
> +  bitmap changed_pseudos;
> +
> +  livein.create (last_basic_block_for_fn (cfun));
> +  livein.quick_grow_cleared (last_basic_block_for_fn (cfun));
> +  for (int i = 0; i < last_basic_block_for_fn (cfun); i++)
> +    bitmap_initialize (&livein[i], &bitmap_default_obstack);
> +
> +  auto_bitmap refs (&bitmap_default_obstack);
> +  df_get_exit_block_use_set (refs);
> +
> +  unsigned i;
> +  bitmap_iterator bi;
> +  EXECUTE_IF_SET_IN_BITMAP (refs, 0, i, bi)
> +    {
> +      for (int j = 0; j < 4; j++)
> +	bitmap_set_bit (&livein[EXIT_BLOCK], i * 4 + j);
> +    }
> +
> +  livenow = BITMAP_ALLOC (NULL);
> +  changed_pseudos = BITMAP_ALLOC (NULL);
> +
> +  worklist
> +    = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
> +
> +  int modify = 0;
> +
> +  do
> +    {
> +      qin = qout = worklist;
> +
> +      /* Put every block on the worklist.  */
> +      int *rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
> +      int n = inverted_rev_post_order_compute (cfun, rpo);

Does this need to be recalculated each iteration, or could it be done
outside the loop?

Was also wondering if this could use a DF_BACKWARD df_simple_dataflow
(with its precomputed order), rather than doing the worklist management
manually.  If df_simple_dataflow isn't efficient enough then we should
try to fix that...

Thanks,
Richard

> +      for (int i = 0; i < n; ++i)
> +	{
> +	  bb = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
> +	  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
> +	      || bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
> +	    continue;
> +	  *qin++ = bb;
> +	  bb->aux = bb;
> +	}
> +      free (rpo);
> +
> +      qin = worklist;
> +      qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
> +      qlen = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
> +
> +      /* Iterate until the worklist is empty.  */
> +      while (qlen)
> +	{
> +	  /* Take the first entry off the worklist.  */
> +	  bb = *qout++;
> +	  qlen--;
> +
> +	  if (qout >= qend)
> +	    qout = worklist;
> +
> +	  /* Clear the aux field of this block so that it can be added to
> +	     the worklist again if necessary.  */
> +	  bb->aux = NULL;
> +
> +	  bitmap_clear (livenow);
> +	  /* Make everything live that's live in the successors.  */
> +	  edge_iterator ei;
> +	  edge e;
> +
> +	  FOR_EACH_EDGE (e, ei, bb->succs)
> +	    bitmap_ior_into (livenow, &livein[e->dest->index]);
> +
> +	  livenow = ext_dce_process_bb (bb, livenow,
> +					modify > 0, changed_pseudos);
> +
> +	  if (!bitmap_equal_p (&livein[bb->index], livenow))
> +	    {
> +	      gcc_assert (!modify);
> +	      bitmap tmp = BITMAP_ALLOC (NULL);
> +	      gcc_assert (!bitmap_and_compl (tmp, &livein[bb->index], livenow));
> +
> +	      bitmap_copy (&livein[bb->index], livenow);
> +
> +	      edge_iterator ei;
> +	      edge e;
> +
> +	      FOR_EACH_EDGE (e, ei, bb->preds)
> +		if (!e->src->aux && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
> +		  {
> +		    *qin++ = e->src;
> +		    e->src->aux = e;
> +		    qlen++;
> +		    if (qin >= qend)
> +		      qin = worklist;
> +		  }
> +	    }
> +	}
> +    } while (!modify++);
> +
> +  reset_subreg_promoted_p (changed_pseudos);
> +
> +  /* Clean up.  */
> +  BITMAP_FREE (changed_pseudos);
> +  BITMAP_FREE (livenow);
> +  unsigned len = livein.length ();
> +  for (unsigned i = 0; i < len; i++)
> +    bitmap_clear (&livein[i]);
> +  livein.release ();
> +  clear_aux_for_blocks ();
> +  free (worklist);
> +}
> +
> +namespace {
> +
> +const pass_data pass_data_ext_dce =
> +{
> +  RTL_PASS, /* type */
> +  "ext_dce", /* name */
> +  OPTGROUP_NONE, /* optinfo_flags */
> +  TV_NONE, /* tv_id */
> +  PROP_cfglayout, /* properties_required */
> +  0, /* properties_provided */
> +  0, /* properties_destroyed */
> +  0, /* todo_flags_start */
> +  TODO_df_finish, /* todo_flags_finish */
> +};
> +
> +class pass_ext_dce : public rtl_opt_pass
> +{
> +public:
> +  pass_ext_dce (gcc::context *ctxt)
> +    : rtl_opt_pass (pass_data_ext_dce, ctxt)
> +  {}
> +
> +  /* opt_pass methods: */
> +  virtual bool gate (function *) { return optimize > 0; }
> +  virtual unsigned int execute (function *)
> +    {
> +      ext_dce ();
> +      return 0;
> +    }
> +
> +}; // class pass_combine
> +
> +} // anon namespace
> +
> +rtl_opt_pass *
> +make_pass_ext_dce (gcc::context *ctxt)
> +{
> +  return new pass_ext_dce (ctxt);
> +}
> diff --git a/gcc/passes.def b/gcc/passes.def
> index 1e1950bdb39..c075c70d42c 100644
> --- a/gcc/passes.def
> +++ b/gcc/passes.def
> @@ -487,6 +487,7 @@ along with GCC; see the file COPYING3.  If not see
>        NEXT_PASS (pass_inc_dec);
>        NEXT_PASS (pass_initialize_regs);
>        NEXT_PASS (pass_ud_rtl_dce);
> +      NEXT_PASS (pass_ext_dce);
>        NEXT_PASS (pass_combine);
>        NEXT_PASS (pass_if_after_combine);
>        NEXT_PASS (pass_jump_after_combine);
> diff --git a/gcc/testsuite/gcc.target/riscv/core_bench_list.c b/gcc/testsuite/gcc.target/riscv/core_bench_list.c
> new file mode 100644
> index 00000000000..957e9c841ed
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/core_bench_list.c
> @@ -0,0 +1,15 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -fdump-rtl-ext_dce" } */
> +/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
> +
> +short
> +core_bench_list (int N) {
> +
> +  short a = 0;
> +  for (int i = 0; i < 4; i++) {
> +    if (i > N) {
> +      a++;
> +    }
> +  }
> +  return a * 4;
> +}
> diff --git a/gcc/testsuite/gcc.target/riscv/core_init_matrix.c b/gcc/testsuite/gcc.target/riscv/core_init_matrix.c
> new file mode 100644
> index 00000000000..9289244c71f
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/core_init_matrix.c
> @@ -0,0 +1,17 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
> +/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
> +
> +void
> +core_init_matrix(short* A, short* B, int seed) {
> +  int  order = 1;
> +
> +  for (int i = 0; i < seed; i++) {
> +    for (int j = 0; j < seed; j++) {
> +      short val = seed + order;
> +      B[i] = val;
> +      A[i] = val;
> +      order++;
> +    }
> +  }
> +}
> diff --git a/gcc/testsuite/gcc.target/riscv/core_list_init.c b/gcc/testsuite/gcc.target/riscv/core_list_init.c
> new file mode 100644
> index 00000000000..2f36dae85aa
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/core_list_init.c
> @@ -0,0 +1,18 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
> +/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
> +
> +unsigned short
> +core_list_init (int size, short seed) {
> +
> +  for (int i = 0; i < size; i++) {
> +    unsigned short datpat = ((unsigned short)(seed ^ i) & 0xf);
> +    unsigned short dat = (datpat << 3) | (i & 0x7);
> +    if (i > seed) {
> +      return dat;
> +    }
> +  }
> +
> +  return 0;
> +
> +}
> diff --git a/gcc/testsuite/gcc.target/riscv/matrix_add_const.c b/gcc/testsuite/gcc.target/riscv/matrix_add_const.c
> new file mode 100644
> index 00000000000..9a2dd53b17a
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/matrix_add_const.c
> @@ -0,0 +1,11 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -fdump-rtl-ext_dce" } */
> +/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
> +
> +void
> +matrix_add_const(int N, short *A, short val)
> +{
> +    for (int j = 0; j < N; j++) {
> +      A[j] += val;
> +    }
> +}
> diff --git a/gcc/testsuite/gcc.target/riscv/mem-extend.c b/gcc/testsuite/gcc.target/riscv/mem-extend.c
> new file mode 100644
> index 00000000000..c67f12dfc35
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/mem-extend.c
> @@ -0,0 +1,13 @@
> +/* { dg-do compile } */
> +/* { dg-options "-march=rv64gc_zbb" } */
> +/* { dg-skip-if "" { *-*-* } { "-O0" } } */
> +
> +void
> +foo(short *d, short *tmp) {
> +    int x = d[0] + d[1];
> +    int y = d[2] + d[3];
> +    tmp[0] = x + y;
> +    tmp[1] = x - y;
> +}
> +
> +/* { dg-final { scan-assembler-not {\mzext\.h\M} } } */
> diff --git a/gcc/testsuite/gcc.target/riscv/pr111384.c b/gcc/testsuite/gcc.target/riscv/pr111384.c
> new file mode 100644
> index 00000000000..a4e77d4aeb6
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/pr111384.c
> @@ -0,0 +1,11 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
> +/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
> +
> +void
> +foo(unsigned int src, unsigned short *dst1, unsigned short *dst2)
> +{
> +    *dst1 = src;
> +    *dst2 = src;
> +}
> +
> diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
> index 09e6ada5b2f..773301d731f 100644
> --- a/gcc/tree-pass.h
> +++ b/gcc/tree-pass.h
> @@ -591,6 +591,7 @@ extern rtl_opt_pass *make_pass_reginfo_init (gcc::context *ctxt);
>  extern rtl_opt_pass *make_pass_inc_dec (gcc::context *ctxt);
>  extern rtl_opt_pass *make_pass_stack_ptr_mod (gcc::context *ctxt);
>  extern rtl_opt_pass *make_pass_initialize_regs (gcc::context *ctxt);
> +extern rtl_opt_pass *make_pass_ext_dce (gcc::context *ctxt);
>  extern rtl_opt_pass *make_pass_combine (gcc::context *ctxt);
>  extern rtl_opt_pass *make_pass_if_after_combine (gcc::context *ctxt);
>  extern rtl_opt_pass *make_pass_jump_after_combine (gcc::context *ctxt);

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  3:32     ` Xi Ruoyao
@ 2023-11-20  3:48       ` Jeff Law
  0 siblings, 0 replies; 35+ messages in thread
From: Jeff Law @ 2023-11-20  3:48 UTC (permalink / raw)
  To: Xi Ruoyao, gcc-patches; +Cc: Jivan Hakobyan



On 11/19/23 20:32, Xi Ruoyao wrote:
> On Sun, 2023-11-19 at 19:52 -0700, Jeff Law wrote:
> 
> /* snip */
> 
>>
>>> Unfortunately, I get some ICE building stage 1 libgcc with this patch on
>>> loongarch64-linux-gnu:
>>>
>>> during RTL pass: ext_dce
>>> ../../../gcc/libgcc/libgcc2.c: In function ‘__absvdi2’:
>>> ../../../gcc/libgcc/libgcc2.c:224:1: internal compiler error: Segmentation fault
>>>     224 | }
>>>         | ^
>>> 0x120baa477 crash_signal
>>> 	../../gcc/gcc/toplev.cc:316
>>> 0x1216aeeb4 ext_dce_process_sets
>>> 	../../gcc/gcc/ext-dce.cc:128
>>> 0x1216afbaf ext_dce_process_bb
>>> 	../../gcc/gcc/ext-dce.cc:647
>>> 0x1216afbaf ext_dce
>>> 	../../gcc/gcc/ext-dce.cc:802
>>> 0x1216afbaf execute
>>> 	../../gcc/gcc/ext-dce.cc:868
>>> Please submit a full bug report, with preprocessed source (by using -freport-bug).
>>> Please include the complete backtrace with any bug report.
>>> See <https://gcc.gnu.org/bugs/> for instructions.
>> I think I know what's going on here.
> 
> The gzip-compressed preprocessed file attached, in case you still need
> it.
Thanks.  I'll double check with it in a bit.   Basically there was a 
late cleanup that could result in walking down the wrong part of a chunk 
of RTL.  My tester is about 30-60 minutes from being able to start a run 
with the fix.

Jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  2:52   ` Jeff Law
@ 2023-11-20  3:32     ` Xi Ruoyao
  2023-11-20  3:48       ` Jeff Law
  0 siblings, 1 reply; 35+ messages in thread
From: Xi Ruoyao @ 2023-11-20  3:32 UTC (permalink / raw)
  To: Jeff Law, gcc-patches; +Cc: Jivan Hakobyan

[-- Attachment #1: Type: text/plain, Size: 1161 bytes --]

On Sun, 2023-11-19 at 19:52 -0700, Jeff Law wrote:

/* snip */

> 
> > Unfortunately, I get some ICE building stage 1 libgcc with this patch on
> > loongarch64-linux-gnu:
> > 
> > during RTL pass: ext_dce
> > ../../../gcc/libgcc/libgcc2.c: In function ‘__absvdi2’:
> > ../../../gcc/libgcc/libgcc2.c:224:1: internal compiler error: Segmentation fault
> >    224 | }
> >        | ^
> > 0x120baa477 crash_signal
> > 	../../gcc/gcc/toplev.cc:316
> > 0x1216aeeb4 ext_dce_process_sets
> > 	../../gcc/gcc/ext-dce.cc:128
> > 0x1216afbaf ext_dce_process_bb
> > 	../../gcc/gcc/ext-dce.cc:647
> > 0x1216afbaf ext_dce
> > 	../../gcc/gcc/ext-dce.cc:802
> > 0x1216afbaf execute
> > 	../../gcc/gcc/ext-dce.cc:868
> > Please submit a full bug report, with preprocessed source (by using -freport-bug).
> > Please include the complete backtrace with any bug report.
> > See <https://gcc.gnu.org/bugs/> for instructions.
> I think I know what's going on here.

The gzip-compressed preprocessed file attached, in case you still need
it.

-- 
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University

[-- Attachment #2: _absvsi2.i.gz --]
[-- Type: application/gzip, Size: 39525 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  2:51   ` Jeff Law
@ 2023-11-20  2:57     ` Oleg Endo
  0 siblings, 0 replies; 35+ messages in thread
From: Oleg Endo @ 2023-11-20  2:57 UTC (permalink / raw)
  To: Jeff Law, gcc-patches; +Cc: Jivan Hakobyan

On Sun, 2023-11-19 at 19:51 -0700, Jeff Law wrote:
> 
> On 11/19/23 18:22, Oleg Endo wrote:
> > 
> > On Sun, 2023-11-19 at 17:47 -0700, Jeff Law wrote:
> > > This is work originally started by Joern @ Embecosm.
> > > 
> > > There's been a long standing sense that we're generating too many
> > > sign/zero extensions on the RISC-V port.  REE is useful, but it's really
> > > focused on a relatively narrow part of the extension problem.
> > > 
> > > What Joern's patch does is introduce a new pass which tracks liveness of
> > > chunks of pseudo regs.  Specifically it tracks bits 0..7, 8..15, 16..31
> > > and 32..63.
> > > 
> > > If it encounters a sign/zero extend that sets bits that are never read,
> > > then it replaces the sign/zero extension with a narrowing subreg.  The
> > > narrowing subreg usually gets eliminated by subsequent passes (it's just
> > > a copy after all).
> > > 
> > 
> > Have you tried it on SH, too?  (and if so any numbers?)


> Just bootstrap with C regression testing on sh4/sh4eb.  No data on 
> improvements.
> 

Alright.  I'll check what it does for SH once it's in.

Cheers,
Oleg

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  2:23 ` Xi Ruoyao
  2023-11-20  2:46   ` Jeff Law
@ 2023-11-20  2:52   ` Jeff Law
  2023-11-20  3:32     ` Xi Ruoyao
  1 sibling, 1 reply; 35+ messages in thread
From: Jeff Law @ 2023-11-20  2:52 UTC (permalink / raw)
  To: Xi Ruoyao, gcc-patches; +Cc: Jivan Hakobyan



On 11/19/23 19:23, Xi Ruoyao wrote:
> On Sun, 2023-11-19 at 17:47 -0700, Jeff Law wrote:
>> This is work originally started by Joern @ Embecosm.
>>
>> There's been a long standing sense that we're generating too many
>> sign/zero extensions on the RISC-V port.  REE is useful, but it's really
>> focused on a relatively narrow part of the extension problem.
>>
>> What Joern's patch does is introduce a new pass which tracks liveness of
>> chunks of pseudo regs.  Specifically it tracks bits 0..7, 8..15, 16..31
>> and 32..63.
>>
>> If it encounters a sign/zero extend that sets bits that are never read,
>> then it replaces the sign/zero extension with a narrowing subreg.  The
>> narrowing subreg usually gets eliminated by subsequent passes (it's just
>> a copy after all).
>>
>> Jivan has done some analysis and found that it eliminates roughly 1% of
>> the dynamic instruction stream for x264 as well as some redundant
>> extensions in the coremark benchmark (both on rv64).  In my own testing
>> as I worked through issues on other architectures I clearly saw it
>> helping in various places within GCC itself or in the testsuite.
>>
>> The basic structure is to first do a fairly standard liveness analysis
>> on the chunks, seeding original state with the liveness data from DF.
>> Once that's stable, we do a final pass to identify the useless
>> extensions and transform them into narrowing subregs.
>>
>> A few key points to remember.
>>
>> For destination processing it is always safe to ignore a destination.
>> Ignoring a destination merely means that whatever was live after the
>> given insn will continue to be live before the insn.  What is not safe
>> is to clear a bit in the LIVENOW bitmap for a destination chunk that is
>> not set.  This comes into play with things like STRICT_LOW_PART.
>>
>> For source processing the safe thing to do is to set all the chunks in a
>> register as live.  It is never safe to fail to process a source operand.
>>
>> When a destination object is not fully live, we try to transfer that
>> limited liveness to the source operands.  So for example if bits 16..63
>> are dead in a destination of a PLUS, we need not mark bits 16..63 as
>> live for the source operands.  We have to be careful -- consider a shift
>> count on a target without SHIFT_COUNT_TRUNCATED set.  So we have both a
>> list of RTL codes where we can transfer liveness and a few codes where
>> one of the operands may need to be fully live (ex, a shift count) while
>> the other input may not need to be fully live (value left shifted).
>>
>> Locally we have had this enabled at -O1 and above to encourage testing,
>> but I'm thinking that for the trunk enabling at -O2 and above is the
>> right thing to do.
>>
>> This has (of course) been tested on rv64.  It's also been bootstrapped
>> and regression tested on x86.  Bootstrap and regression tested (C only)
>> for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped
>> and regression tested on ppc, hppa and s390x (C only for those as well).
>>    It's also been tested on the various crosses in my tester.  So we've
>> got reasonable coverage of 16, 32 and 64 bit targets, big and little
>> endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other
>> oddities.
>>
>> The included tests are for RISC-V only because not all targets are going
>> to have extraneous extensions.   There's tests from coremark, x264 and
>> GCC's bz database.  It probably wouldn't be hard to add aarch64
>> testscases.  The BZs listed are improved by this patch for aarch64.
>>
>> Given the amount of work Jivan and I have done, I'm not comfortable
>> self-approving at this time.  I'd much rather have another set of eyes
>> on the code.  Hopefully the code is documented well enough for that to
>> be useful exercise.
>>
>> So, no need to work from Pago Pago for this patch.  I may make another
>> attempt at the eswin conditional move work while working virtually in
>> Pago Pago though.
>>
>> Thoughts, comments, recommendations?
> 
> Unfortunately, I get some ICE building stage 1 libgcc with this patch on
> loongarch64-linux-gnu:
> 
> during RTL pass: ext_dce
> ../../../gcc/libgcc/libgcc2.c: In function ‘__absvdi2’:
> ../../../gcc/libgcc/libgcc2.c:224:1: internal compiler error: Segmentation fault
>    224 | }
>        | ^
> 0x120baa477 crash_signal
> 	../../gcc/gcc/toplev.cc:316
> 0x1216aeeb4 ext_dce_process_sets
> 	../../gcc/gcc/ext-dce.cc:128
> 0x1216afbaf ext_dce_process_bb
> 	../../gcc/gcc/ext-dce.cc:647
> 0x1216afbaf ext_dce
> 	../../gcc/gcc/ext-dce.cc:802
> 0x1216afbaf execute
> 	../../gcc/gcc/ext-dce.cc:868
> Please submit a full bug report, with preprocessed source (by using -freport-bug).
> Please include the complete backtrace with any bug report.
> See <https://gcc.gnu.org/bugs/> for instructions.
I think I know what's going on here.

jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  1:22 ` Oleg Endo
@ 2023-11-20  2:51   ` Jeff Law
  2023-11-20  2:57     ` Oleg Endo
  0 siblings, 1 reply; 35+ messages in thread
From: Jeff Law @ 2023-11-20  2:51 UTC (permalink / raw)
  To: Oleg Endo, gcc-patches; +Cc: Jivan Hakobyan



On 11/19/23 18:22, Oleg Endo wrote:
> 
> On Sun, 2023-11-19 at 17:47 -0700, Jeff Law wrote:
>> This is work originally started by Joern @ Embecosm.
>>
>> There's been a long standing sense that we're generating too many
>> sign/zero extensions on the RISC-V port.  REE is useful, but it's really
>> focused on a relatively narrow part of the extension problem.
>>
>> What Joern's patch does is introduce a new pass which tracks liveness of
>> chunks of pseudo regs.  Specifically it tracks bits 0..7, 8..15, 16..31
>> and 32..63.
>>
>> If it encounters a sign/zero extend that sets bits that are never read,
>> then it replaces the sign/zero extension with a narrowing subreg.  The
>> narrowing subreg usually gets eliminated by subsequent passes (it's just
>> a copy after all).
>>
> 
> Have you tried it on SH, too?  (and if so any numbers?)
Just bootstrap with C regression testing on sh4/sh4eb.  No data on 
improvements.

Jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  2:23 ` Xi Ruoyao
@ 2023-11-20  2:46   ` Jeff Law
  2023-11-20  2:52   ` Jeff Law
  1 sibling, 0 replies; 35+ messages in thread
From: Jeff Law @ 2023-11-20  2:46 UTC (permalink / raw)
  To: gcc-patches



On 11/19/23 19:23, Xi Ruoyao wrote:
> On Sun, 2023-11-19 at 17:47 -0700, Jeff Law wrote:
>> This is work originally started by Joern @ Embecosm.
>>
>> There's been a long standing sense that we're generating too many
>> sign/zero extensions on the RISC-V port.  REE is useful, but it's really
>> focused on a relatively narrow part of the extension problem.
>>
>> What Joern's patch does is introduce a new pass which tracks liveness of
>> chunks of pseudo regs.  Specifically it tracks bits 0..7, 8..15, 16..31
>> and 32..63.
>>
>> If it encounters a sign/zero extend that sets bits that are never read,
>> then it replaces the sign/zero extension with a narrowing subreg.  The
>> narrowing subreg usually gets eliminated by subsequent passes (it's just
>> a copy after all).
>>
>> Jivan has done some analysis and found that it eliminates roughly 1% of
>> the dynamic instruction stream for x264 as well as some redundant
>> extensions in the coremark benchmark (both on rv64).  In my own testing
>> as I worked through issues on other architectures I clearly saw it
>> helping in various places within GCC itself or in the testsuite.
>>
>> The basic structure is to first do a fairly standard liveness analysis
>> on the chunks, seeding original state with the liveness data from DF.
>> Once that's stable, we do a final pass to identify the useless
>> extensions and transform them into narrowing subregs.
>>
>> A few key points to remember.
>>
>> For destination processing it is always safe to ignore a destination.
>> Ignoring a destination merely means that whatever was live after the
>> given insn will continue to be live before the insn.  What is not safe
>> is to clear a bit in the LIVENOW bitmap for a destination chunk that is
>> not set.  This comes into play with things like STRICT_LOW_PART.
>>
>> For source processing the safe thing to do is to set all the chunks in a
>> register as live.  It is never safe to fail to process a source operand.
>>
>> When a destination object is not fully live, we try to transfer that
>> limited liveness to the source operands.  So for example if bits 16..63
>> are dead in a destination of a PLUS, we need not mark bits 16..63 as
>> live for the source operands.  We have to be careful -- consider a shift
>> count on a target without SHIFT_COUNT_TRUNCATED set.  So we have both a
>> list of RTL codes where we can transfer liveness and a few codes where
>> one of the operands may need to be fully live (ex, a shift count) while
>> the other input may not need to be fully live (value left shifted).
>>
>> Locally we have had this enabled at -O1 and above to encourage testing,
>> but I'm thinking that for the trunk enabling at -O2 and above is the
>> right thing to do.
>>
>> This has (of course) been tested on rv64.  It's also been bootstrapped
>> and regression tested on x86.  Bootstrap and regression tested (C only)
>> for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped
>> and regression tested on ppc, hppa and s390x (C only for those as well).
>>    It's also been tested on the various crosses in my tester.  So we've
>> got reasonable coverage of 16, 32 and 64 bit targets, big and little
>> endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other
>> oddities.
>>
>> The included tests are for RISC-V only because not all targets are going
>> to have extraneous extensions.   There's tests from coremark, x264 and
>> GCC's bz database.  It probably wouldn't be hard to add aarch64
>> testscases.  The BZs listed are improved by this patch for aarch64.
>>
>> Given the amount of work Jivan and I have done, I'm not comfortable
>> self-approving at this time.  I'd much rather have another set of eyes
>> on the code.  Hopefully the code is documented well enough for that to
>> be useful exercise.
>>
>> So, no need to work from Pago Pago for this patch.  I may make another
>> attempt at the eswin conditional move work while working virtually in
>> Pago Pago though.
>>
>> Thoughts, comments, recommendations?
> 
> Unfortunately, I get some ICE building stage 1 libgcc with this patch on
> loongarch64-linux-gnu:
> 
> during RTL pass: ext_dce
> ../../../gcc/libgcc/libgcc2.c: In function ‘__absvdi2’:
> ../../../gcc/libgcc/libgcc2.c:224:1: internal compiler error: Segmentation fault
>    224 | }
>        | ^
> 0x120baa477 crash_signal
> 	../../gcc/gcc/toplev.cc:316
> 0x1216aeeb4 ext_dce_process_sets
> 	../../gcc/gcc/ext-dce.cc:128
> 0x1216afbaf ext_dce_process_bb
> 	../../gcc/gcc/ext-dce.cc:647
> 0x1216afbaf ext_dce
> 	../../gcc/gcc/ext-dce.cc:802
> 0x1216afbaf execute
> 	../../gcc/gcc/ext-dce.cc:868
> Please submit a full bug report, with preprocessed source (by using -freport-bug).
> Please include the complete backtrace with any bug report.
> See <https://gcc.gnu.org/bugs/> for instructions.
Can you pass along the .i file?  I do regularly build loongarch 
(including earlier today) so this is totally unexpected.

jeff

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  0:47 Jeff Law
  2023-11-20  1:22 ` Oleg Endo
@ 2023-11-20  2:23 ` Xi Ruoyao
  2023-11-20  2:46   ` Jeff Law
  2023-11-20  2:52   ` Jeff Law
  2023-11-20 18:26 ` Richard Sandiford
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 35+ messages in thread
From: Xi Ruoyao @ 2023-11-20  2:23 UTC (permalink / raw)
  To: Jeff Law, gcc-patches; +Cc: Jivan Hakobyan

On Sun, 2023-11-19 at 17:47 -0700, Jeff Law wrote:
> This is work originally started by Joern @ Embecosm.
> 
> There's been a long standing sense that we're generating too many 
> sign/zero extensions on the RISC-V port.  REE is useful, but it's really 
> focused on a relatively narrow part of the extension problem.
> 
> What Joern's patch does is introduce a new pass which tracks liveness of 
> chunks of pseudo regs.  Specifically it tracks bits 0..7, 8..15, 16..31 
> and 32..63.
> 
> If it encounters a sign/zero extend that sets bits that are never read, 
> then it replaces the sign/zero extension with a narrowing subreg.  The
> narrowing subreg usually gets eliminated by subsequent passes (it's just 
> a copy after all).
> 
> Jivan has done some analysis and found that it eliminates roughly 1% of 
> the dynamic instruction stream for x264 as well as some redundant 
> extensions in the coremark benchmark (both on rv64).  In my own testing 
> as I worked through issues on other architectures I clearly saw it 
> helping in various places within GCC itself or in the testsuite.
> 
> The basic structure is to first do a fairly standard liveness analysis
> on the chunks, seeding original state with the liveness data from DF. 
> Once that's stable, we do a final pass to identify the useless 
> extensions and transform them into narrowing subregs.
> 
> A few key points to remember.
> 
> For destination processing it is always safe to ignore a destination. 
> Ignoring a destination merely means that whatever was live after the 
> given insn will continue to be live before the insn.  What is not safe
> is to clear a bit in the LIVENOW bitmap for a destination chunk that is 
> not set.  This comes into play with things like STRICT_LOW_PART.
> 
> For source processing the safe thing to do is to set all the chunks in a 
> register as live.  It is never safe to fail to process a source operand.
> 
> When a destination object is not fully live, we try to transfer that 
> limited liveness to the source operands.  So for example if bits 16..63 
> are dead in a destination of a PLUS, we need not mark bits 16..63 as 
> live for the source operands.  We have to be careful -- consider a shift 
> count on a target without SHIFT_COUNT_TRUNCATED set.  So we have both a 
> list of RTL codes where we can transfer liveness and a few codes where
> one of the operands may need to be fully live (ex, a shift count) while 
> the other input may not need to be fully live (value left shifted).
> 
> Locally we have had this enabled at -O1 and above to encourage testing, 
> but I'm thinking that for the trunk enabling at -O2 and above is the 
> right thing to do.
> 
> This has (of course) been tested on rv64.  It's also been bootstrapped
> and regression tested on x86.  Bootstrap and regression tested (C only) 
> for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped 
> and regression tested on ppc, hppa and s390x (C only for those as well). 
>   It's also been tested on the various crosses in my tester.  So we've
> got reasonable coverage of 16, 32 and 64 bit targets, big and little 
> endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other 
> oddities.
> 
> The included tests are for RISC-V only because not all targets are going 
> to have extraneous extensions.   There's tests from coremark, x264 and
> GCC's bz database.  It probably wouldn't be hard to add aarch64 
> testscases.  The BZs listed are improved by this patch for aarch64.
> 
> Given the amount of work Jivan and I have done, I'm not comfortable 
> self-approving at this time.  I'd much rather have another set of eyes
> on the code.  Hopefully the code is documented well enough for that to
> be useful exercise.
> 
> So, no need to work from Pago Pago for this patch.  I may make another
> attempt at the eswin conditional move work while working virtually in 
> Pago Pago though.
> 
> Thoughts, comments, recommendations?

Unfortunately, I get some ICE building stage 1 libgcc with this patch on
loongarch64-linux-gnu:

during RTL pass: ext_dce
../../../gcc/libgcc/libgcc2.c: In function ‘__absvdi2’:
../../../gcc/libgcc/libgcc2.c:224:1: internal compiler error: Segmentation fault
  224 | }
      | ^
0x120baa477 crash_signal
	../../gcc/gcc/toplev.cc:316
0x1216aeeb4 ext_dce_process_sets
	../../gcc/gcc/ext-dce.cc:128
0x1216afbaf ext_dce_process_bb
	../../gcc/gcc/ext-dce.cc:647
0x1216afbaf ext_dce
	../../gcc/gcc/ext-dce.cc:802
0x1216afbaf execute
	../../gcc/gcc/ext-dce.cc:868
Please submit a full bug report, with preprocessed source (by using -freport-bug).
Please include the complete backtrace with any bug report.
See <https://gcc.gnu.org/bugs/> for instructions.

-- 
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFA] New pass for sign/zero extension elimination
  2023-11-20  0:47 Jeff Law
@ 2023-11-20  1:22 ` Oleg Endo
  2023-11-20  2:51   ` Jeff Law
  2023-11-20  2:23 ` Xi Ruoyao
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 35+ messages in thread
From: Oleg Endo @ 2023-11-20  1:22 UTC (permalink / raw)
  To: Jeff Law, gcc-patches; +Cc: Jivan Hakobyan


On Sun, 2023-11-19 at 17:47 -0700, Jeff Law wrote:
> This is work originally started by Joern @ Embecosm.
> 
> There's been a long standing sense that we're generating too many 
> sign/zero extensions on the RISC-V port.  REE is useful, but it's really 
> focused on a relatively narrow part of the extension problem.
> 
> What Joern's patch does is introduce a new pass which tracks liveness of 
> chunks of pseudo regs.  Specifically it tracks bits 0..7, 8..15, 16..31 
> and 32..63.
> 
> If it encounters a sign/zero extend that sets bits that are never read, 
> then it replaces the sign/zero extension with a narrowing subreg.  The 
> narrowing subreg usually gets eliminated by subsequent passes (it's just 
> a copy after all).
> 

Have you tried it on SH, too?  (and if so any numbers?)

It sounds like this one would be great to remove some of the sign/zero
extension removal hackery that I've accumulated in the SH backend.

Cheers,
Oleg

^ permalink raw reply	[flat|nested] 35+ messages in thread

* [RFA] New pass for sign/zero extension elimination
@ 2023-11-20  0:47 Jeff Law
  2023-11-20  1:22 ` Oleg Endo
                   ` (5 more replies)
  0 siblings, 6 replies; 35+ messages in thread
From: Jeff Law @ 2023-11-20  0:47 UTC (permalink / raw)
  To: gcc-patches; +Cc: Jivan Hakobyan

[-- Attachment #1: Type: text/plain, Size: 3769 bytes --]

This is work originally started by Joern @ Embecosm.

There's been a long standing sense that we're generating too many 
sign/zero extensions on the RISC-V port.  REE is useful, but it's really 
focused on a relatively narrow part of the extension problem.

What Joern's patch does is introduce a new pass which tracks liveness of 
chunks of pseudo regs.  Specifically it tracks bits 0..7, 8..15, 16..31 
and 32..63.

If it encounters a sign/zero extend that sets bits that are never read, 
then it replaces the sign/zero extension with a narrowing subreg.  The 
narrowing subreg usually gets eliminated by subsequent passes (it's just 
a copy after all).

Jivan has done some analysis and found that it eliminates roughly 1% of 
the dynamic instruction stream for x264 as well as some redundant 
extensions in the coremark benchmark (both on rv64).  In my own testing 
as I worked through issues on other architectures I clearly saw it 
helping in various places within GCC itself or in the testsuite.

The basic structure is to first do a fairly standard liveness analysis 
on the chunks, seeding original state with the liveness data from DF. 
Once that's stable, we do a final pass to identify the useless 
extensions and transform them into narrowing subregs.

A few key points to remember.

For destination processing it is always safe to ignore a destination. 
Ignoring a destination merely means that whatever was live after the 
given insn will continue to be live before the insn.  What is not safe 
is to clear a bit in the LIVENOW bitmap for a destination chunk that is 
not set.  This comes into play with things like STRICT_LOW_PART.

For source processing the safe thing to do is to set all the chunks in a 
register as live.  It is never safe to fail to process a source operand.

When a destination object is not fully live, we try to transfer that 
limited liveness to the source operands.  So for example if bits 16..63 
are dead in a destination of a PLUS, we need not mark bits 16..63 as 
live for the source operands.  We have to be careful -- consider a shift 
count on a target without SHIFT_COUNT_TRUNCATED set.  So we have both a 
list of RTL codes where we can transfer liveness and a few codes where 
one of the operands may need to be fully live (ex, a shift count) while 
the other input may not need to be fully live (value left shifted).

Locally we have had this enabled at -O1 and above to encourage testing, 
but I'm thinking that for the trunk enabling at -O2 and above is the 
right thing to do.

This has (of course) been tested on rv64.  It's also been bootstrapped 
and regression tested on x86.  Bootstrap and regression tested (C only) 
for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped 
and regression tested on ppc, hppa and s390x (C only for those as well). 
  It's also been tested on the various crosses in my tester.  So we've 
got reasonable coverage of 16, 32 and 64 bit targets, big and little 
endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other 
oddities.

The included tests are for RISC-V only because not all targets are going 
to have extraneous extensions.   There's tests from coremark, x264 and 
GCC's bz database.  It probably wouldn't be hard to add aarch64 
testscases.  The BZs listed are improved by this patch for aarch64.

Given the amount of work Jivan and I have done, I'm not comfortable 
self-approving at this time.  I'd much rather have another set of eyes 
on the code.  Hopefully the code is documented well enough for that to 
be useful exercise.

So, no need to work from Pago Pago for this patch.  I may make another 
attempt at the eswin conditional move work while working virtually in 
Pago Pago though.

Thoughts, comments, recommendations?

Jeff



[-- Attachment #2: 0006-extdce.save --]
[-- Type: text/plain, Size: 35226 bytes --]

	PR target/95650
	PR rtl-optimization/96031
	PR rtl-optimization/104387
	PR rtl-optimization/111384

gcc/
	* Makefile.in (OBJS): Add ext-dce.o.
	* common.opt (ext-dce): Add new option.
	* df-scan.cc (df_get_exit_block_use_set): No longer static.
	* df.h (df_get_exit_block_use_set): Prototype.
	* ext-dce.cc: New file.
	* passes.def: Add ext-dce before combine.
	* tree-pass.h (make_pass_ext_dce): Prototype..

gcc/testsuite
	* gcc.target/riscv/core_bench_list.c: New test.
	* gcc.target/riscv/core_init_matrix.c: New test.
	* gcc.target/riscv/core_list_init.c: New test.
	* gcc.target/riscv/matrix_add_const.c: New test.
	* gcc.target/riscv/mem-extend.c: New test.
	* gcc.target/riscv/pr111384.c: New test.

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 753f2f36618..af6f1415507 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1451,6 +1451,7 @@ OBJS = \
 	explow.o \
 	expmed.o \
 	expr.o \
+	ext-dce.o \
 	fibonacci_heap.o \
 	file-prefix-map.o \
 	final.o \
diff --git a/gcc/common.opt b/gcc/common.opt
index d21db5d4a20..141dfdf14fd 100644
--- a/gcc/common.opt
+++ b/gcc/common.opt
@@ -3766,4 +3766,8 @@ fipa-ra
 Common Var(flag_ipa_ra) Optimization
 Use caller save register across calls if possible.
 
+fext-dce
+Common Var(flag_ext_dce, 1) Optimization Init(0)
+Perform dead code elimination on zero and sign extensions with special dataflow analysis.
+
 ; This comment is to ensure we retain the blank line above.
diff --git a/gcc/df-scan.cc b/gcc/df-scan.cc
index 9515740728c..87729ab0f44 100644
--- a/gcc/df-scan.cc
+++ b/gcc/df-scan.cc
@@ -78,7 +78,6 @@ static void df_get_eh_block_artificial_uses (bitmap);
 
 static void df_record_entry_block_defs (bitmap);
 static void df_record_exit_block_uses (bitmap);
-static void df_get_exit_block_use_set (bitmap);
 static void df_get_entry_block_def_set (bitmap);
 static void df_grow_ref_info (struct df_ref_info *, unsigned int);
 static void df_ref_chain_delete_du_chain (df_ref);
@@ -3642,7 +3641,7 @@ df_epilogue_uses_p (unsigned int regno)
 
 /* Set the bit for regs that are considered being used at the exit. */
 
-static void
+void
 df_get_exit_block_use_set (bitmap exit_block_uses)
 {
   unsigned int i;
diff --git a/gcc/df.h b/gcc/df.h
index 402657a7076..abcbb097734 100644
--- a/gcc/df.h
+++ b/gcc/df.h
@@ -1091,6 +1091,7 @@ extern bool df_epilogue_uses_p (unsigned int);
 extern void df_set_regs_ever_live (unsigned int, bool);
 extern void df_compute_regs_ever_live (bool);
 extern void df_scan_verify (void);
+extern void df_get_exit_block_use_set (bitmap);
 
 \f
 /*----------------------------------------------------------------------------
diff --git a/gcc/ext-dce.cc b/gcc/ext-dce.cc
new file mode 100644
index 00000000000..a6fe2683dcd
--- /dev/null
+++ b/gcc/ext-dce.cc
@@ -0,0 +1,880 @@
+/* RTL dead zero/sign extension (code) elimination.
+   Copyright (C) 2000-2022 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "rtl.h"
+#include "tree.h"
+#include "memmodel.h"
+#include "insn-config.h"
+#include "emit-rtl.h"
+#include "recog.h"
+#include "cfganal.h"
+#include "tree-pass.h"
+#include "cfgrtl.h"
+#include "rtl-iter.h"
+#include "df.h"
+#include "print-rtl.h"
+
+/* We consider four bit groups for liveness:
+   bit 0..7   (least significant byte)
+   bit 8..15  (second least significant byte)
+   bit 16..31
+   bit 32..BITS_PER_WORD-1  */
+
+/* Note this pass could be used to narrow memory loads too.  It's
+   not clear if that's profitable or not in general.  */
+
+#define UNSPEC_P(X) (GET_CODE (X) == UNSPEC || GET_CODE (X) == UNSPEC_VOLATILE)
+
+/* If we know the destination of CODE only uses some low bits
+   (say just the QI bits of an SI operation), then return true
+   if we can propagate the need for just the subset of bits
+   from the destination to the sources.  */
+
+static bool
+safe_for_live_propagation (rtx_code code)
+{
+  /* First handle rtx classes which as a whole are known to
+     be either safe or unsafe.  */
+  switch (GET_RTX_CLASS (code))
+    {
+      case RTX_OBJ:
+	return true;
+
+      case RTX_COMPARE:
+      case RTX_COMM_COMPARE:
+      case RTX_TERNARY:
+	return false;
+
+      default:
+	break;
+    }
+
+  /* What's left are specific codes.  We only need to identify those
+     which are safe.   */
+  switch (code)
+    {
+    /* These are trivially safe.  */
+    case SUBREG:
+    case NOT:
+    case ZERO_EXTEND:
+    case SIGN_EXTEND:
+    case TRUNCATE:
+    case SS_TRUNCATE:
+    case US_TRUNCATE:
+    case PLUS:
+    case MULT:
+    case SS_MULT:
+    case US_MULT:
+    case SMUL_HIGHPART:
+    case UMUL_HIGHPART:
+    case AND:
+    case IOR:
+    case XOR:
+    case SS_PLUS:
+    case US_PLUS:
+      return true;
+
+    /* We can propagate for the shifted operand, but not the shift
+       count.  The count is handled specially.  */
+    case SS_ASHIFT:
+    case US_ASHIFT:
+    case ASHIFT:
+      return true;
+
+    /* There may be other safe codes.  If so they can be added
+       individually when discovered.  */
+    default:
+      return false;
+    }
+}
+
+/* Clear bits in LIVENOW and set bits in LIVE_TMP for objects
+   set/clobbered by INSN.
+
+   Conceptually it is always safe to ignore a particular destination
+   here as that will result in more chunks of data being considered
+   live.  That's what happens when we "continue" the main loop when
+   we see something we don't know how to handle such as a vector
+   mode destination.
+
+   The more accurate we are in identifying what objects (and chunks
+   within an object) are set by INSN, the more aggressive the
+   optimziation phase during use handling will be.  */
+
+static void
+ext_dce_process_sets (rtx insn, bitmap livenow, bitmap live_tmp)
+{
+  subrtx_iterator::array_type array;
+  rtx pat = PATTERN (insn);
+  FOR_EACH_SUBRTX (iter, array, pat, NONCONST)
+    {
+      const_rtx x = *iter;
+
+      /* An EXPR_LIST (from call fusage) ends in NULL_RTX.  */
+      if (x == NULL_RTX)
+	continue;
+
+      if (UNSPEC_P (x))
+	continue;
+
+      if (GET_CODE (x) == SET || GET_CODE (x) == CLOBBER)
+	{
+	  unsigned bit = 0;
+	  x = SET_DEST (x);
+
+	  /* We don't support vector destinations or destinations
+	     wider than DImode.   It is safe to continue this loop.
+	     At worst, it will leave things live which could have
+	     been made dead.  */
+	  if (VECTOR_MODE_P (GET_MODE (x)) || GET_MODE (x) > E_DImode)
+	    continue;
+
+	  /* We could have (strict_low_part (subreg ...)).  We can not just
+	     strip the STRICT_LOW_PART as that would result in clearing
+	     some bits in LIVENOW that are still live.  So process the
+	     STRICT_LOW_PART specially.  */
+	  if (GET_CODE (x) == STRICT_LOW_PART)
+	    {
+	      x = XEXP (x, 0);
+
+	      /* The only valid operand of a STRICT_LOW_PART is a non
+		 paradoxical SUBREG.  */
+	      gcc_assert (SUBREG_P (x)
+			  && !paradoxical_subreg_p (x)
+			  && SUBREG_BYTE (x).is_constant ());
+
+	      /* I think we should always see a REG here.  But let's
+		 be sure.  */
+	      gcc_assert (REG_P (SUBREG_REG (x)));
+
+	      /* We don't track values larger than DImode.  */
+	      gcc_assert (GET_MODE (x) <= E_DImode);
+
+	      /* But the inner mode might be larger, just punt for
+		 that case.  Remember, we can not just continue to process
+		 the inner RTXs due to the STRICT_LOW_PART.  */
+	      if (GET_MODE (SUBREG_REG (x)) > E_DImode)
+		{
+		  /* Skip the subrtxs of the STRICT_LOW_PART.  We can't
+		     process them because it'll set objects as no longer
+		     live when they are in fact still live.  */
+		  iter.skip_subrtxes ();
+		  continue;
+		}
+
+	      /* Transfer all the LIVENOW bits for X into LIVE_TMP.  */
+	      HOST_WIDE_INT rn = REGNO (SUBREG_REG (x));
+	      for (HOST_WIDE_INT i = 4 * rn; i < 4 * rn + 4; i++)
+		if (bitmap_bit_p (livenow, i))
+		  bitmap_set_bit (live_tmp, i);
+
+	      /* The mode of the SUBREG tells us how many bits we can
+		 clear.  */
+	      machine_mode mode = GET_MODE (x);
+	      HOST_WIDE_INT size = GET_MODE_SIZE (mode).to_constant ();
+	      bitmap_clear_range (livenow, 4 * rn, size);
+
+	      /* We have fully processed this destination.  */
+	      iter.skip_subrtxes ();
+	      continue;
+	    }
+
+	  /* We can safely strip a paradoxical subreg.  The inner mode will
+	     be narrower than the outer mode.  We'll clear fewer bits in
+	     LIVENOW than we'd like, but that's always safe.  */
+	  if (paradoxical_subreg_p (x))
+	    x = XEXP (x, 0);
+
+	  /* If we have a SUBREG that is too wide, just continue the loop
+	     and let the iterator go down into SUBREG_REG.  */
+	  if (SUBREG_P (x) && GET_MODE (SUBREG_REG (x)) > E_DImode)
+	    continue;
+
+	  /* Phase one of destination handling.  First remove any wrapper
+	     such as SUBREG or ZERO_EXTRACT.  */
+	  unsigned HOST_WIDE_INT mask = GET_MODE_MASK (GET_MODE (x));
+	  if (SUBREG_P (x)
+	      && !paradoxical_subreg_p (x)
+	      && SUBREG_BYTE (x).is_constant ())
+	    {
+	      bit = SUBREG_BYTE (x).to_constant () * BITS_PER_UNIT;
+	      if (WORDS_BIG_ENDIAN)
+		bit = (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (x))).to_constant ()
+		       - GET_MODE_BITSIZE (GET_MODE (x)).to_constant () - bit);
+
+	      /* Catch big endian correctness issues rather than triggering
+		 undefined behavior.  */
+	      gcc_assert (bit < sizeof (HOST_WIDE_INT) * 8);
+
+	      mask = GET_MODE_MASK (GET_MODE (SUBREG_REG (x))) << bit;
+	      if (!mask)
+		mask = -0x100000000ULL;
+	      x = SUBREG_REG (x);
+	    }
+
+	  if (GET_CODE (x) == ZERO_EXTRACT)
+	    {
+	      /* If either the size or the start position is unknown,
+		 then assume we know nothing about what is overwritten.
+		 This is overly conservative, but safe.  */
+	      if (!CONST_INT_P (XEXP (x, 1)) || !CONST_INT_P (XEXP (x, 2)))
+		continue;
+	      mask = (1ULL << INTVAL (XEXP (x, 1))) - 1;
+	      bit = INTVAL (XEXP (x, 2));
+	      if (BITS_BIG_ENDIAN)
+		bit = (GET_MODE_BITSIZE (GET_MODE (x))
+		       - INTVAL (XEXP (x, 1)) - bit).to_constant ();
+	      x = XEXP (x, 0);
+
+	      /* We can certainly get (zero_extract (subreg ...)).  The
+		 mode of the zero_extract and location should be sufficient
+		 and we can just strip the SUBREG.  */
+	      if (GET_CODE (x) == SUBREG)
+		x = SUBREG_REG (x);
+	    }
+
+	  /* BIT >= 64 indicates something went horribly wrong.  */
+	  gcc_assert (bit <= 63);
+
+	  /* Now handle the actual object that was changed.  */
+	  if (REG_P (x))
+	    {
+	      /* Transfer the appropriate bits from LIVENOW into
+		 LIVE_TMP.  */
+	      HOST_WIDE_INT rn = REGNO (x);
+	      for (HOST_WIDE_INT i = 4 * rn; i < 4 * rn + 4; i++)
+		if (bitmap_bit_p (livenow, i))
+		  bitmap_set_bit (live_tmp, i);
+
+	      /* Now clear the bits known written by this instruction.
+		 Note that BIT need not be a power of two, consider a
+		 ZERO_EXTRACT destination.  */
+	      int start = (bit < 8 ? 0 : bit < 16 ? 1 : bit < 32 ? 2 : 3);
+	      int end = ((mask & ~0xffffffffULL) ? 4
+			 : (mask & 0xffff0000ULL) ? 3
+			 : (mask & 0xff00) ? 2 : 1);
+	      bitmap_clear_range (livenow, 4 * rn + start, end - start);
+	    }
+	  /* Some ports generate (clobber (const_int)).  */
+	  else if (CONST_INT_P (x))
+	    continue;
+	  else
+	    gcc_assert (CALL_P (insn)
+			|| MEM_P (x)
+			|| x == pc_rtx
+			|| GET_CODE (x) == SCRATCH);
+
+	  iter.skip_subrtxes ();
+	}
+      else if (GET_CODE (x) == COND_EXEC)
+	{
+	  /* This isn't ideal, but may not be so bad in practice.  */
+	  iter.skip_subrtxes ();
+	}
+    }
+}
+
+/* INSN has a sign/zero extended source inside SET that we will
+   try to turn into a SUBREG.  */
+static void
+ext_dce_try_optimize_insn (rtx_insn *insn, rtx set, bitmap changed_pseudos)
+{
+  rtx src = SET_SRC (set);
+  rtx inner = XEXP (src, 0);
+
+  /* Avoid (subreg (mem)) and other constructs which may are valid RTL, but
+     not useful for this optimization.  */
+  if (!REG_P (inner) && !SUBREG_P (inner))
+    return;
+
+  rtx new_pattern;
+  if (dump_file)
+    {
+      fprintf (dump_file, "Processing insn:\n");
+      dump_insn_slim (dump_file, insn);
+      fprintf (dump_file, "Trying to simplify pattern:\n");
+      print_rtl_single (dump_file, SET_SRC (set));
+    }
+
+  new_pattern = simplify_gen_subreg (GET_MODE (src), inner,
+				     GET_MODE (inner), 0);
+  /* simplify_gen_subreg may fail in which case NEW_PATTERN will be NULL.
+     We must not pass that as a replacement pattern to validate_change.  */
+  if (new_pattern)
+    {
+      int ok = validate_change (insn, &SET_SRC (set), new_pattern, false);
+
+      if (ok)
+	bitmap_set_bit (changed_pseudos, REGNO (SET_DEST (set)));
+
+      if (dump_file)
+	{
+	  if (ok)
+	    fprintf (dump_file, "Successfully transformed to:\n");
+	  else
+	    fprintf (dump_file, "Failed transformation to:\n");
+
+	  print_rtl_single (dump_file, new_pattern);
+	  fprintf (dump_file, "\n");
+	}
+    }
+  else
+    {
+      if (dump_file)
+	fprintf (dump_file, "Unable to generate valid SUBREG expression.\n");
+    }
+}
+
+/* Some operators imply that their second operand is fully live,
+   regardless of how many bits in the output are live.  An example
+   would be the shift count on a target without SHIFT_COUNT_TRUCATED
+   defined.
+
+   Return TRUE if CODE is such an operator.  FALSE otherwise.  */
+
+static bool
+binop_implies_op2_fully_live (rtx_code code)
+{
+  switch (code)
+    {
+      case ASHIFT:
+      case LSHIFTRT:
+      case ASHIFTRT:
+      case ROTATE:
+      case ROTATERT:
+	return !SHIFT_COUNT_TRUNCATED;
+
+      default:
+	return false;
+    }
+}
+
+/* Process uses in INSN.  Set appropriate bits in LIVENOW for any chunks of
+   pseudos that become live, potentially filtering using bits from LIVE_TMP.
+
+   If MODIFIED is true, then optimize sign/zero extensions to SUBREGs when
+   the extended bits are never read and mark pseudos which had extensions
+   eliminated in CHANGED_PSEUDOS.  */
+
+static void
+ext_dce_process_uses (rtx insn, bitmap livenow, bitmap live_tmp,
+		      bool modify, bitmap changed_pseudos)
+{
+  /* A nonlocal goto implicitly uses the frame pointer.  */
+  if (JUMP_P (insn) && find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
+    {
+      bitmap_set_range (livenow, FRAME_POINTER_REGNUM * 4, 4);
+      if (!HARD_FRAME_POINTER_IS_FRAME_POINTER)
+	bitmap_set_range (livenow, HARD_FRAME_POINTER_REGNUM * 4, 4);
+    }
+
+  subrtx_var_iterator::array_type array_var;
+  rtx pat = PATTERN (insn);
+  FOR_EACH_SUBRTX_VAR (iter, array_var, pat, NONCONST)
+    {
+      /* An EXPR_LIST (from call fusage) ends in NULL_RTX.  */
+      rtx x = *iter;
+      if (x == NULL_RTX)
+	continue;
+
+      /* So the basic idea in this FOR_EACH_SUBRTX_VAR loop is to
+	 handle SETs explicitly, possibly propagating live information
+	 into the uses.
+
+	 We may continue the loop at various points which will cause
+	 iteration into the next level of RTL.  Breaking from the loop
+	 is never safe as it can lead us to fail to process some of the
+	 RTL and thus not make objects live when necessary.  */
+      enum rtx_code xcode = GET_CODE (x);
+      if (xcode == SET)
+	{
+	  const_rtx dst = SET_DEST (x);
+	  rtx src = SET_SRC (x);
+	  const_rtx y;
+	  unsigned HOST_WIDE_INT bit = 0;
+
+	  /* The code of the RHS of a SET.  */
+	  enum rtx_code code = GET_CODE (src);
+
+	  /* ?!? How much of this should mirror SET handling, potentially
+	     being shared?   */
+	  if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))
+	    {
+	      bit = SUBREG_BYTE (dst).to_constant () * BITS_PER_UNIT;
+	      if (WORDS_BIG_ENDIAN)
+		bit = (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (dst))).to_constant ()
+		       - GET_MODE_BITSIZE (GET_MODE (dst)).to_constant () - bit);
+	      if (bit >= HOST_BITS_PER_WIDE_INT)
+		bit = HOST_BITS_PER_WIDE_INT - 1;
+	      dst = SUBREG_REG (dst);
+	    }
+	  else if (GET_CODE (dst) == ZERO_EXTRACT
+		   || GET_CODE (dst) == STRICT_LOW_PART)
+	    dst = XEXP (dst, 0);
+
+	  /* Main processing of the uses.  Two major goals here.
+
+	     First, we want to try and propagate liveness (or the lack
+	     thereof) from the destination register to the source
+	     register(s).
+
+	     Second, if the source is an extension, try to optimize
+	     it into a SUBREG.  The SUBREG form indicates we don't
+	     care about the upper bits and will usually be copy
+	     propagated away.
+
+	     If we fail to handle something in here, the expectation
+	     is the iterator will dive into the sub-components and
+	     mark all the chunks in any found REGs as live.  */
+	  if (REG_P (dst) && safe_for_live_propagation (code))
+	    {
+	      /* Create a mask representing the bits of this output
+		 operand that are live after this insn.  We can use
+		 this information to refine the live in state of
+		 inputs to this insn in many cases.
+
+		 We have to do this on a per SET basis, we might have
+		 an INSN with multiple SETS, some of which can narrow
+		 the source operand liveness, some of which may not.  */
+	      unsigned HOST_WIDE_INT dst_mask = 0;
+	      HOST_WIDE_INT rn = REGNO (dst);
+	      unsigned HOST_WIDE_INT mask_array[]
+		= { 0xff, 0xff00, 0xffff0000ULL, -0x100000000ULL };
+	      for (int i = 0; i < 4; i++)
+		if (bitmap_bit_p (live_tmp, 4 * rn + i))
+		  dst_mask |= mask_array[i];
+	      dst_mask >>= bit;
+
+	      /* ??? Could also handle ZERO_EXTRACT / SIGN_EXTRACT
+		 of the source specially to improve optimization.  */
+	      if (code == SIGN_EXTEND || code == ZERO_EXTEND)
+		{
+		  rtx inner = XEXP (src, 0);
+		  unsigned HOST_WIDE_INT src_mask
+		    = GET_MODE_MASK (GET_MODE (inner));
+
+		  /* DST_MASK could be zero if we had something in the SET
+		     that we couldn't handle.  */
+		  if (modify && dst_mask && (dst_mask & ~src_mask) == 0)
+		    ext_dce_try_optimize_insn (as_a <rtx_insn *> (insn),
+					       x, changed_pseudos);
+
+		  dst_mask &= src_mask;
+		  src = XEXP (src, 0);
+		  code = GET_CODE (src);
+		}
+
+	      /* Optimization is done at this point.  We just want to make
+		 sure everything that should get marked as live is marked
+		 from here onward.  */
+
+	      /* ?!? What is the point of this adjustment to DST_MASK?  */
+	      if (code == PLUS || code == MINUS
+		  || code == MULT || code == ASHIFT)
+		dst_mask
+		  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
+
+	      /* We will handle the other operand of a binary operator
+		 at the bottom of the loop by resetting Y.  */
+	      if (BINARY_P (src))
+		y = XEXP (src, 0);
+	      else
+		y = src;
+
+	      /* We're inside a SET and want to process the source operands
+		 making things live.  Breaking from this loop will cause
+		 the iterator to work on sub-rtxs, so it is safe to break
+		 if we see something we don't know how to handle.  */
+	      for (;;)
+		{
+		  /* Strip an outer STRICT_LOW_PART or paradoxical subreg.
+		     That has the effect of making the whole referenced
+		     register live.  We might be able to avoid that for
+		     STRICT_LOW_PART at some point.  */
+		  if (GET_CODE (x) == STRICT_LOW_PART
+		      || paradoxical_subreg_p (x))
+		    x = XEXP (x, 0);
+		  else if (SUBREG_P (y) && SUBREG_BYTE (y).is_constant ())
+		    {
+		      /* For anything but (subreg (reg)), break the inner loop
+			 and process normally (conservatively).  */
+		      if (!REG_P (SUBREG_REG (y)))
+			break;
+		      bit = (SUBREG_BYTE (y).to_constant () * BITS_PER_UNIT);
+		      if (WORDS_BIG_ENDIAN)
+			bit = (GET_MODE_BITSIZE
+			       (GET_MODE (SUBREG_REG (y))).to_constant ()
+				- GET_MODE_BITSIZE (GET_MODE (y)).to_constant () - bit);
+		      if (dst_mask)
+			{
+			  dst_mask <<= bit;
+			  if (!dst_mask)
+			    dst_mask = -0x100000000ULL;
+			}
+		      y = SUBREG_REG (y);
+		    }
+
+		  if (REG_P (y))
+		    {
+		      /* We have found the use of a register.  We need to mark
+			 the appropriate chunks of the register live.  The mode
+			 of the REG is a starting point.  We may refine that
+			 based on what chunks in the output were live.  */
+		      rn = 4 * REGNO (y);
+		      unsigned HOST_WIDE_INT tmp_mask = dst_mask;
+
+		      /* If the RTX code for the SET_SRC is not one we can
+			 propagate destination liveness through, then just
+			 set the mask to the mode's mask.  */
+		      if (!safe_for_live_propagation (code))
+			tmp_mask = GET_MODE_MASK (GET_MODE (y));
+
+		      if (tmp_mask & 0xff)
+			bitmap_set_bit (livenow, rn);
+		      if (tmp_mask & 0xff00)
+			bitmap_set_bit (livenow, rn + 1);
+		      if (tmp_mask & 0xffff0000ULL)
+			bitmap_set_bit (livenow, rn + 2);
+		      if (tmp_mask & -0x100000000ULL)
+			bitmap_set_bit (livenow, rn + 3);
+
+		      /* Some operators imply their second operand
+			 is fully live, break this inner loop which
+			 will cause the iterator to descent into the
+			 sub-rtxs outside the SET processing.  */
+		      if (binop_implies_op2_fully_live (code))
+			break;
+		    }
+		  else if (!CONSTANT_P (y))
+		    break;
+		  /* We might have (ashift (const_int 1) (reg...)) */
+		  else if (CONSTANT_P (y)
+			   && binop_implies_op2_fully_live (GET_CODE (src)))
+		    break;
+
+		  /* If this was anything but a binary operand, break the inner
+		     loop.  This is conservatively correct as it will cause the
+		     iterator to look at the sub-rtxs outside the SET context.  */
+		  if (!BINARY_P (src))
+		    break;
+
+		  /* We processed the first operand of a binary operator.  Now
+		     handle the second.  */
+		  y = XEXP (src, 1), src = pc_rtx;
+		}
+
+	      /* These are leaf nodes, no need to iterate down into them.  */
+	      if (REG_P (y) || CONSTANT_P (y))
+		iter.skip_subrtxes ();
+	    }
+	}
+      /* If we are reading the low part of a SUBREG, then we can
+	 refine liveness of the input register, otherwise let the
+	 iterator continue into SUBREG_REG.  */
+      else if (xcode == SUBREG
+	       && REG_P (SUBREG_REG (x))
+	       && subreg_lowpart_p (x)
+	       && GET_MODE_BITSIZE (GET_MODE (x)).is_constant ()
+	       && GET_MODE_BITSIZE (GET_MODE (x)).to_constant () <= 32)
+	{
+	  HOST_WIDE_INT size = GET_MODE_BITSIZE (GET_MODE  (x)).to_constant ();
+	  HOST_WIDE_INT rn = 4 * REGNO (SUBREG_REG (x));
+
+	  bitmap_set_bit (livenow, rn);
+	  if (size > 8)
+	    bitmap_set_bit (livenow, rn + 1);
+	  if (size > 16)
+	    bitmap_set_bit (livenow, rn + 2);
+	  if (size > 32)
+	    bitmap_set_bit (livenow, rn + 3);
+	  iter.skip_subrtxes ();
+	}
+      /* If we have a register reference that is not otherwise handled,
+	 just assume all the chunks are live.  */
+      else if (REG_P (x))
+	bitmap_set_range (livenow, REGNO (x) * 4, 4);
+    }
+}
+
+/* Process a single basic block BB with current liveness information
+   in LIVENOW, returning updated liveness information.
+
+   If MODIFY is true, then this is the last pass and unnecessary
+   extensions should be eliminated when possible.  If an extension
+   is removed, the source pseudo is marked in CHANGED_PSEUDOS.  */
+
+static bitmap
+ext_dce_process_bb (basic_block bb, bitmap livenow,
+		    bool modify, bitmap changed_pseudos)
+{
+  rtx_insn *insn;
+
+  FOR_BB_INSNS_REVERSE (bb, insn)
+    {
+      if (!NONDEBUG_INSN_P (insn))
+	continue;
+
+      /* Live-out state of the destination of this insn.  We can
+	 use this to refine the live-in state of the sources of
+	 this insn in many cases.  */
+      bitmap live_tmp = BITMAP_ALLOC (NULL);
+
+      /* First process any sets/clobbers in INSN.  */
+      ext_dce_process_sets (insn, livenow, live_tmp);
+
+      /* CALL_INSNs need processing their fusage data.  */
+      if (GET_CODE (insn) == CALL_INSN)
+	ext_dce_process_sets (CALL_INSN_FUNCTION_USAGE (insn),
+			      livenow, live_tmp);
+
+      /* And now uses, optimizing away SIGN/ZERO extensions as we go.  */
+      ext_dce_process_uses (insn, livenow, live_tmp, modify, changed_pseudos);
+
+      /* And process fusage data for the use as well.  */
+      if (GET_CODE (insn) == CALL_INSN)
+	{
+	  if (!FAKE_CALL_P (insn))
+	    bitmap_set_range (livenow, STACK_POINTER_REGNUM * 4, 4);
+
+	  /* If this is not a call to a const fucntion, then assume it
+	     can read any global register.  */
+	  if (!RTL_CONST_CALL_P (insn))
+	    for (unsigned i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+	      if (global_regs[i])
+		bitmap_set_range (livenow, i * 4, 4);
+
+	  ext_dce_process_uses (CALL_INSN_FUNCTION_USAGE (insn),
+				livenow, live_tmp, modify, changed_pseudos);
+	}
+
+      BITMAP_FREE (live_tmp);
+    }
+  return livenow;
+}
+
+/* We optimize away sign/zero extensions in this pass and replace
+   them with SUBREGs indicating certain bits are don't cares.
+
+   This changes the SUBREG_PROMOTED_VAR_P state of the object.
+   It is fairly painful to fix this on the fly, so we have
+   recorded which pseudos are affected and we look for SUBREGs
+   of those pseudos and fix them up.  */
+
+static void
+reset_subreg_promoted_p (bitmap changed_pseudos)
+{
+  /* If we removed an extension, that changed the promoted state
+     of the destination of that extension.  Thus we need to go
+     find any SUBREGs that reference that pseudo and adjust their
+     SUBREG_PROMOTED_P state.  */
+  for (rtx_insn *insn = get_insns(); insn; insn = NEXT_INSN (insn))
+    {
+      if (!NONDEBUG_INSN_P (insn))
+	continue;
+
+      rtx pat = PATTERN (insn);
+      subrtx_var_iterator::array_type array;
+      FOR_EACH_SUBRTX_VAR (iter, array, pat, NONCONST)
+	{
+	  rtx sub = *iter;
+
+	  /* We only care about SUBREGs.  */
+	  if (GET_CODE (sub) != SUBREG)
+	    continue;
+
+	  const_rtx x = SUBREG_REG (sub);
+
+	  /* We only care if the inner object is a REG.  */
+	  if (!REG_P (x))
+	    continue;
+
+	  /* And only if the SUBREG is a promoted var.  */
+	  if (!SUBREG_PROMOTED_VAR_P (sub))
+	    continue;
+
+	  if (bitmap_bit_p (changed_pseudos, REGNO (x)))
+	    SUBREG_PROMOTED_VAR_P (sub) = 0;
+	}
+    }
+}
+
+/* Use lifetime analyis to identify extensions that set bits that
+   are never read.  Turn such extensions into SUBREGs instead which
+   can often be propagated away.  */
+
+static void
+ext_dce (void)
+{
+  basic_block bb, *worklist, *qin, *qout, *qend;
+  unsigned int qlen;
+  vec<bitmap_head> livein;
+  bitmap livenow;
+  bitmap changed_pseudos;
+
+  livein.create (last_basic_block_for_fn (cfun));
+  livein.quick_grow_cleared (last_basic_block_for_fn (cfun));
+  for (int i = 0; i < last_basic_block_for_fn (cfun); i++)
+    bitmap_initialize (&livein[i], &bitmap_default_obstack);
+
+  auto_bitmap refs (&bitmap_default_obstack);
+  df_get_exit_block_use_set (refs);
+
+  unsigned i;
+  bitmap_iterator bi;
+  EXECUTE_IF_SET_IN_BITMAP (refs, 0, i, bi)
+    {
+      for (int j = 0; j < 4; j++)
+	bitmap_set_bit (&livein[EXIT_BLOCK], i * 4 + j);
+    }
+
+  livenow = BITMAP_ALLOC (NULL);
+  changed_pseudos = BITMAP_ALLOC (NULL);
+
+  worklist
+    = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
+
+  int modify = 0;
+
+  do
+    {
+      qin = qout = worklist;
+
+      /* Put every block on the worklist.  */
+      int *rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
+      int n = inverted_rev_post_order_compute (cfun, rpo);
+      for (int i = 0; i < n; ++i)
+	{
+	  bb = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
+	  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
+	      || bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
+	    continue;
+	  *qin++ = bb;
+	  bb->aux = bb;
+	}
+      free (rpo);
+
+      qin = worklist;
+      qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
+      qlen = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
+
+      /* Iterate until the worklist is empty.  */
+      while (qlen)
+	{
+	  /* Take the first entry off the worklist.  */
+	  bb = *qout++;
+	  qlen--;
+
+	  if (qout >= qend)
+	    qout = worklist;
+
+	  /* Clear the aux field of this block so that it can be added to
+	     the worklist again if necessary.  */
+	  bb->aux = NULL;
+
+	  bitmap_clear (livenow);
+	  /* Make everything live that's live in the successors.  */
+	  edge_iterator ei;
+	  edge e;
+
+	  FOR_EACH_EDGE (e, ei, bb->succs)
+	    bitmap_ior_into (livenow, &livein[e->dest->index]);
+
+	  livenow = ext_dce_process_bb (bb, livenow,
+					modify > 0, changed_pseudos);
+
+	  if (!bitmap_equal_p (&livein[bb->index], livenow))
+	    {
+	      gcc_assert (!modify);
+	      bitmap tmp = BITMAP_ALLOC (NULL);
+	      gcc_assert (!bitmap_and_compl (tmp, &livein[bb->index], livenow));
+
+	      bitmap_copy (&livein[bb->index], livenow);
+
+	      edge_iterator ei;
+	      edge e;
+
+	      FOR_EACH_EDGE (e, ei, bb->preds)
+		if (!e->src->aux && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
+		  {
+		    *qin++ = e->src;
+		    e->src->aux = e;
+		    qlen++;
+		    if (qin >= qend)
+		      qin = worklist;
+		  }
+	    }
+	}
+    } while (!modify++);
+
+  reset_subreg_promoted_p (changed_pseudos);
+
+  /* Clean up.  */
+  BITMAP_FREE (changed_pseudos);
+  BITMAP_FREE (livenow);
+  unsigned len = livein.length ();
+  for (unsigned i = 0; i < len; i++)
+    bitmap_clear (&livein[i]);
+  livein.release ();
+  clear_aux_for_blocks ();
+  free (worklist);
+}
+
+namespace {
+
+const pass_data pass_data_ext_dce =
+{
+  RTL_PASS, /* type */
+  "ext_dce", /* name */
+  OPTGROUP_NONE, /* optinfo_flags */
+  TV_NONE, /* tv_id */
+  PROP_cfglayout, /* properties_required */
+  0, /* properties_provided */
+  0, /* properties_destroyed */
+  0, /* todo_flags_start */
+  TODO_df_finish, /* todo_flags_finish */
+};
+
+class pass_ext_dce : public rtl_opt_pass
+{
+public:
+  pass_ext_dce (gcc::context *ctxt)
+    : rtl_opt_pass (pass_data_ext_dce, ctxt)
+  {}
+
+  /* opt_pass methods: */
+  virtual bool gate (function *) { return optimize > 0; }
+  virtual unsigned int execute (function *)
+    {
+      ext_dce ();
+      return 0;
+    }
+
+}; // class pass_combine
+
+} // anon namespace
+
+rtl_opt_pass *
+make_pass_ext_dce (gcc::context *ctxt)
+{
+  return new pass_ext_dce (ctxt);
+}
diff --git a/gcc/passes.def b/gcc/passes.def
index 1e1950bdb39..c075c70d42c 100644
--- a/gcc/passes.def
+++ b/gcc/passes.def
@@ -487,6 +487,7 @@ along with GCC; see the file COPYING3.  If not see
       NEXT_PASS (pass_inc_dec);
       NEXT_PASS (pass_initialize_regs);
       NEXT_PASS (pass_ud_rtl_dce);
+      NEXT_PASS (pass_ext_dce);
       NEXT_PASS (pass_combine);
       NEXT_PASS (pass_if_after_combine);
       NEXT_PASS (pass_jump_after_combine);
diff --git a/gcc/testsuite/gcc.target/riscv/core_bench_list.c b/gcc/testsuite/gcc.target/riscv/core_bench_list.c
new file mode 100644
index 00000000000..957e9c841ed
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/core_bench_list.c
@@ -0,0 +1,15 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+short
+core_bench_list (int N) {
+
+  short a = 0;
+  for (int i = 0; i < 4; i++) {
+    if (i > N) {
+      a++;
+    }
+  }
+  return a * 4;
+}
diff --git a/gcc/testsuite/gcc.target/riscv/core_init_matrix.c b/gcc/testsuite/gcc.target/riscv/core_init_matrix.c
new file mode 100644
index 00000000000..9289244c71f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/core_init_matrix.c
@@ -0,0 +1,17 @@
+/* { dg-do compile } */
+/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+void
+core_init_matrix(short* A, short* B, int seed) {
+  int  order = 1;
+
+  for (int i = 0; i < seed; i++) {
+    for (int j = 0; j < seed; j++) {
+      short val = seed + order;
+      B[i] = val;
+      A[i] = val;
+      order++;
+    }
+  }
+}
diff --git a/gcc/testsuite/gcc.target/riscv/core_list_init.c b/gcc/testsuite/gcc.target/riscv/core_list_init.c
new file mode 100644
index 00000000000..2f36dae85aa
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/core_list_init.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+unsigned short
+core_list_init (int size, short seed) {
+
+  for (int i = 0; i < size; i++) {
+    unsigned short datpat = ((unsigned short)(seed ^ i) & 0xf);
+    unsigned short dat = (datpat << 3) | (i & 0x7);
+    if (i > seed) {
+      return dat;
+    }
+  }
+
+  return 0;
+
+}
diff --git a/gcc/testsuite/gcc.target/riscv/matrix_add_const.c b/gcc/testsuite/gcc.target/riscv/matrix_add_const.c
new file mode 100644
index 00000000000..9a2dd53b17a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/matrix_add_const.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+void
+matrix_add_const(int N, short *A, short val)
+{
+    for (int j = 0; j < N; j++) {
+      A[j] += val;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/riscv/mem-extend.c b/gcc/testsuite/gcc.target/riscv/mem-extend.c
new file mode 100644
index 00000000000..c67f12dfc35
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/mem-extend.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zbb" } */
+/* { dg-skip-if "" { *-*-* } { "-O0" } } */
+
+void
+foo(short *d, short *tmp) {
+    int x = d[0] + d[1];
+    int y = d[2] + d[3];
+    tmp[0] = x + y;
+    tmp[1] = x - y;
+}
+
+/* { dg-final { scan-assembler-not {\mzext\.h\M} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/pr111384.c b/gcc/testsuite/gcc.target/riscv/pr111384.c
new file mode 100644
index 00000000000..a4e77d4aeb6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/pr111384.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+void
+foo(unsigned int src, unsigned short *dst1, unsigned short *dst2)
+{
+    *dst1 = src;
+    *dst2 = src;
+}
+
diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
index 09e6ada5b2f..773301d731f 100644
--- a/gcc/tree-pass.h
+++ b/gcc/tree-pass.h
@@ -591,6 +591,7 @@ extern rtl_opt_pass *make_pass_reginfo_init (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_inc_dec (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_stack_ptr_mod (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_initialize_regs (gcc::context *ctxt);
+extern rtl_opt_pass *make_pass_ext_dce (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_combine (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_if_after_combine (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_jump_after_combine (gcc::context *ctxt);

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2023-12-01 16:17 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-27 18:19 [RFA] New pass for sign/zero extension elimination Joern Rennecke
2023-11-28  5:51 ` Jeff Law
  -- strict thread matches above, loose matches on Subject: below --
2023-11-29 17:37 Joern Rennecke
2023-11-29 19:13 ` Jivan Hakobyan
2023-11-30 15:37 ` Jeff Law
2023-11-27 17:36 Joern Rennecke
2023-11-27 17:57 ` Joern Rennecke
2023-11-27 20:03   ` Richard Sandiford
2023-11-27 20:18     ` Jeff Law
2023-11-28 13:36       ` Joern Rennecke
2023-11-28 14:09         ` Joern Rennecke
2023-11-30 17:33         ` Jeff Law
2023-11-28 13:13     ` Joern Rennecke
2023-11-28  5:50 ` Jeff Law
2023-11-20  0:47 Jeff Law
2023-11-20  1:22 ` Oleg Endo
2023-11-20  2:51   ` Jeff Law
2023-11-20  2:57     ` Oleg Endo
2023-11-20  2:23 ` Xi Ruoyao
2023-11-20  2:46   ` Jeff Law
2023-11-20  2:52   ` Jeff Law
2023-11-20  3:32     ` Xi Ruoyao
2023-11-20  3:48       ` Jeff Law
2023-11-20 18:26 ` Richard Sandiford
2023-11-22 17:59   ` Jeff Law
2023-11-27 20:15     ` Richard Sandiford
2023-11-20 18:56 ` Dimitar Dimitrov
2023-11-22 22:23   ` Jeff Law
2023-11-26 16:42     ` rep.dot.nop
2023-11-27 16:14       ` Jeff Law
2023-11-27 11:30 ` Andrew Stubbs
2023-11-27 16:16   ` Jeff Law
2023-12-01  1:08 ` Hans-Peter Nilsson
2023-12-01 15:09   ` Jeff Law
2023-12-01 16:17     ` Hans-Peter Nilsson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).