public inbox for gcc-patches@gcc.gnu.org
* [V2] New pass for sign/zero extension elimination -- not ready for "final" review
@ 2023-11-29 19:57 Joern Rennecke
  2023-11-29 20:05 ` Joern Rennecke
  0 siblings, 1 reply; 22+ messages in thread
From: Joern Rennecke @ 2023-11-29 19:57 UTC (permalink / raw)
  To: Jivan Hakobyan, Jeff Law, GCC Patches

[-- Attachment #1: Type: text/plain, Size: 249 bytes --]

Attached is what I have for carry_backpropagate.

The utility of special handling for SS_ASHIFT / US_ASHIFT seems
somewhat marginal.

I suspect it'd be more useful to add handling of LSHIFTRT and
ASHIFTRT.  Some ports do a lot of static shifting.
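
For reference, here is the core carry rule from the patch, restated as
a standalone sketch (illustrative only, not the GCC sources): carries
only propagate toward more significant bits, so when only some bits of
a PLUS / MINUS / MULT result are live, the inputs are live only up to
the highest live result bit.

  /* Widen a live-result mask to a contiguous mask up to its highest
     live bit, e.g. 0x0100 -> 0x01ff.  Mirrors the patch's
     (2ULL << floor_log2 (mask)) - 1.  */
  unsigned long long
  carry_mask_sketch (unsigned long long mask)
  {
    return mask ? ((2ULL << (63 - __builtin_clzll (mask))) - 1) : 0;
  }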

[-- Attachment #2: tmp.txt --]
[-- Type: text/plain, Size: 3706 bytes --]

commit ed47c3d0d38f85c9b4e22bdbd079e0665465ef9c
Author: Joern Rennecke <joern.rennecke@embecosm.com>
Date:   Wed Nov 29 18:46:06 2023 +0000

    * ext-dce.c: Fixes for carry handling.
    
    * ext-dce.c (safe_for_live_propagation): Handle MINUS.
      (ext_dce_process_uses): Break out carry handling into ..
      (carry_backpropagate): This new function.
      Better handling of ASHIFT.
      Add handling of SMUL_HIGHPART, UMUL_HIGHPART, SIGN_EXTEND, SS_ASHIFT and
      US_ASHIFT.

diff --git a/gcc/ext-dce.cc b/gcc/ext-dce.cc
index 590656f72c7..2a4508181a1 100644
--- a/gcc/ext-dce.cc
+++ b/gcc/ext-dce.cc
@@ -83,6 +83,7 @@ safe_for_live_propagation (rtx_code code)
     case SIGN_EXTEND:
     case TRUNCATE:
     case PLUS:
+    case MINUS:
     case MULT:
     case SMUL_HIGHPART:
     case UMUL_HIGHPART:
@@ -365,6 +366,67 @@ binop_implies_op2_fully_live (rtx_code code)
     }
 }
 
+/* X, with code CODE, is an operation for which
+safe_for_live_propagation holds true,
+   and bits set in MASK are live in the result.  Compute a mask of (potentially)
+   live bits in the non-constant inputs.  In case of
+binop_implies_op2_fully_live
+   (e.g. shifts), the computed mask may exclusively pertain to the
+first operand.  */
+
+HOST_WIDE_INT
+carry_backpropagate (HOST_WIDE_INT mask, enum rtx_code code, rtx x)
+{
+  enum machine_mode mode = GET_MODE (x);
+  HOST_WIDE_INT mmask = GET_MODE_MASK (mode);
+  switch (code)
+    {
+    case ASHIFT:
+      if (CONSTANT_P (XEXP (x, 1))
+	  && known_lt (UINTVAL (XEXP (x, 1)), GET_MODE_BITSIZE (mode)))
+	return mask >> INTVAL (XEXP (x, 1));
+      /* Fall through.  */
+    case PLUS: case MINUS:
+    case MULT:
+      return mask ? ((2ULL << floor_log2 (mask)) - 1) : 0;
+    case SMUL_HIGHPART: case UMUL_HIGHPART:
+      if (!mask || XEXP (x, 1) == const0_rtx)
+	return 0;
+      if (CONSTANT_P (XEXP (x, 1)))
+	{
+	  if (pow2p_hwi (INTVAL (XEXP (x, 1))))
+	    return mmask & (mask << (GET_MODE_BITSIZE (mode).to_constant ()
+				     - exact_log2 (INTVAL (XEXP (x, 1)))));
+
+	  int bits = (2 * GET_MODE_BITSIZE (mode).to_constant ()
+		      - clz_hwi (mask) - ctz_hwi (INTVAL (XEXP (x, 1))));
+	  if (bits < GET_MODE_BITSIZE (mode).to_constant ())
+	    return (1ULL << bits) - 1;
+	}
+      return mmask;
+    case SIGN_EXTEND:
+      if (mask & ~mmask)
+	mask |= 1ULL << (GET_MODE_BITSIZE (mode).to_constant () - 1);
+      return mask;
+
+    /* We propagate for the shifted operand, but not the shift
+       count.  The count is handled specially.  */
+    case SS_ASHIFT:
+    case US_ASHIFT:
+      if (!mask || XEXP (x, 1) == const0_rtx)
+	return 0;
+      if (CONSTANT_P (XEXP (x, 1))
+	  && UINTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (mode).to_constant ())
+	{
+	  return ((mmask & ~((unsigned HOST_WIDE_INT)mmask
+			     >> (INTVAL (XEXP (x, 1)) + (code == SS_ASHIFT))))
+		  | (mask >> INTVAL (XEXP (x, 1))));
+	}
+      return mmask;
+    default:
+      return mask;
+    }
+}
 /* Process uses in INSN contained in OBJ.  Set appropriate bits in LIVENOW
    for any chunks of pseudos that become live, potentially filtering using
    bits from LIVE_TMP.
@@ -480,11 +542,7 @@ ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
 		 sure everything that should get marked as live is marked
 		 from here onward.  */
 
-	      /* ?!? What is the point of this adjustment to DST_MASK?  */
-	      if (code == PLUS || code == MINUS
-		  || code == MULT || code == ASHIFT)
-		dst_mask
-		  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
+	      dst_mask = carry_backpropagate (dst_mask, code, src);
 
 	      /* We will handle the other operand of a binary operator
 		 at the bottom of the loop by resetting Y.  */


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-29 19:57 [V2] New pass for sign/zero extension elimination -- not ready for "final" review Joern Rennecke
@ 2023-11-29 20:05 ` Joern Rennecke
  2023-11-30  2:39   ` Joern Rennecke
  0 siblings, 1 reply; 22+ messages in thread
From: Joern Rennecke @ 2023-11-29 20:05 UTC (permalink / raw)
  To: Jivan Hakobyan, Jeff Law, GCC Patches

On Wed, 29 Nov 2023 at 19:57, Joern Rennecke
<joern.rennecke@embecosm.com> wrote:
>
> Attached is what I have for carry_backpropagate.
>
> The utility of special handling for SS_ASHIFT / US_ASHIFT seems
> somewhat marginal.
>
> I suspect it'd be more useful to add handling of LSHIFTRT and
> ASHIFTRT.  Some ports do a lot of static shifting.

> +    case SS_ASHIFT:
> +    case US_ASHIFT:
> +      if (!mask || XEXP (x, 1) == const0_rtx)
> +       return 0;

P.S.: I just realized that this is a pasto: in the case of a const0_rtx
shift count, returning 0 will usually be wrong.  OTOH the code below
will handle this case almost perfectly - the one imperfection being that
SS_ASHIFT will see the sign bit set live if anything is live.  Not that
it actually matters if we track liveness in 8 / 8 / 16 / 32 bit chunks.
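
Concretely, the fix (as it appears in the revised patch downthread)
drops the const0_rtx early return and makes the extra sign-bit term
conditional on a nonzero count for SS_ASHIFT:

  if (CONSTANT_P (XEXP (x, 1))
      && UINTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (mode).to_constant ())
    return ((mmask & ~((unsigned HOST_WIDE_INT) mmask
                       >> (INTVAL (XEXP (x, 1))
                           + (XEXP (x, 1) != const0_rtx
                              && code == SS_ASHIFT))))
            | (mask >> INTVAL (XEXP (x, 1))));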


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-29 20:05 ` Joern Rennecke
@ 2023-11-30  2:39   ` Joern Rennecke
  2023-11-30  4:10     ` Joern Rennecke
                       ` (2 more replies)
  0 siblings, 3 replies; 22+ messages in thread
From: Joern Rennecke @ 2023-11-30  2:39 UTC (permalink / raw)
  To: Jivan Hakobyan, Jeff Law, GCC Patches

[-- Attachment #1: Type: text/plain, Size: 505 bytes --]

On Wed, 29 Nov 2023 at 20:05, Joern Rennecke
<joern.rennecke@embecosm.com> wrote:

> > I suspect it'd be more useful to add handling of LSHIFTRT and
> > ASHIFTRT.  Some ports do a lot of static shifting.
>
> > +    case SS_ASHIFT:
> > +    case US_ASHIFT:
> > +      if (!mask || XEXP (x, 1) == const0_rtx)
> > +       return 0;
>
> P.S.: I just realized that this is a pasto: in the case of a const0_rtx
> shift count, returning 0 will usually be wrong.

I've attached my current patch version.

[-- Attachment #2: tmp.txt --]
[-- Type: text/plain, Size: 9620 bytes --]

    ext-dce.cc: handle vector modes.
    
    * ext-dce.cc: Amend comment to explain how liveness of vectors is tracked.
      (carry_backpropagate): Use GET_MODE_INNER.
      (ext_dce_process_sets): Likewise.  Only apply big endian correction for
      subregs if they don't have a vector mode.
      (ext_dce_process_uses): Likewise.

    * ext-dce.cc: carry_backpropagate: [US]S_ASHIFT fix, handle [LA]SHIFTRT
    
    * ext-dce.cc (safe_for_live_propagation): Add LSHIFTRT and ASHIFTRT.
      (carry_backpropagate): Reformat top comment.
      Add handling of LSHIFTRT and ASHIFTRT.
      Fix bit count for [SU]MUL_HIGHPART.
      Fix pasto for [SU]S_ASHIFT.

    * ext-dce.c: Fixes for carry handling.
    
    * ext-dce.c (safe_for_live_propagation): Handle MINUS.
      (ext_dce_process_uses): Break out carry handling into ..
      (carry_backpropagate): This new function.
      Better handling of ASHIFT.
      Add handling of SMUL_HIGHPART, UMUL_HIGHPART, SIGN_EXTEND, SS_ASHIFT and
      US_ASHIFT.

    * ext-dce.c: fix SUBREG_BYTE test
    
    As mentioned in
    https://gcc.gnu.org/pipermail/gcc-patches/2023-November/637486.html
    and
    https://gcc.gnu.org/pipermail/gcc-patches/2023-November/638473.html


diff --git a/gcc/ext-dce.cc b/gcc/ext-dce.cc
index 4e4c57de117..228c50e8b73 100644
--- a/gcc/ext-dce.cc
+++ b/gcc/ext-dce.cc
@@ -38,7 +38,10 @@ along with GCC; see the file COPYING3.  If not see
    bit 0..7   (least significant byte)
    bit 8..15  (second least significant byte)
    bit 16..31
-   bit 32..BITS_PER_WORD-1  */
+   bit 32..BITS_PER_WORD-1
+
+   For vector modes, we apply these bit groups to every lane; if any of the
+   bits in the group are live in any lane, we consider this group live.  */
 
 /* Note this pass could be used to narrow memory loads too.  It's
    not clear if that's profitable or not in general.  */
@@ -83,6 +86,7 @@ safe_for_live_propagation (rtx_code code)
     case SIGN_EXTEND:
     case TRUNCATE:
     case PLUS:
+    case MINUS:
     case MULT:
     case SMUL_HIGHPART:
     case UMUL_HIGHPART:
@@ -96,6 +100,8 @@ safe_for_live_propagation (rtx_code code)
     case SS_ASHIFT:
     case US_ASHIFT:
     case ASHIFT:
+    case LSHIFTRT:
+    case ASHIFTRT:
       return true;
 
     /* There may be other safe codes.  If so they can be added
@@ -215,13 +221,22 @@ ext_dce_process_sets (rtx_insn *insn, rtx obj, bitmap livenow, bitmap live_tmp)
 
 	  /* Phase one of destination handling.  First remove any wrapper
 	     such as SUBREG or ZERO_EXTRACT.  */
-	  unsigned HOST_WIDE_INT mask = GET_MODE_MASK (GET_MODE (x));
+	  unsigned HOST_WIDE_INT mask
+	    = GET_MODE_MASK (GET_MODE_INNER (GET_MODE (x)));
 	  if (SUBREG_P (x)
 	      && !paradoxical_subreg_p (x)
 	      && SUBREG_BYTE (x).is_constant ())
 	    {
-	      bit = subreg_lsb (x).to_constant ();
-	      mask = GET_MODE_MASK (GET_MODE (SUBREG_REG (x))) << bit;
+	      enum machine_mode omode = GET_MODE_INNER (GET_MODE (x));
+	      enum machine_mode imode = GET_MODE (SUBREG_REG (x));
+	      bit = 0;
+	      if (!VECTOR_MODE_P (GET_MODE (x))
+		  || (GET_MODE_SIZE (imode).is_constant ()
+		      && (GET_MODE_SIZE (omode).to_constant ()
+			  > GET_MODE_SIZE (imode).to_constant ())))
+		bit = subreg_lsb (x).to_constant ();
+	      mask = (GET_MODE_MASK (GET_MODE_INNER (GET_MODE (SUBREG_REG (x))))
+		      << bit);
 	      gcc_assert (mask);
 	      if (!mask)
 		mask = -0x100000000ULL;
@@ -365,6 +380,84 @@ binop_implies_op2_fully_live (rtx_code code)
     }
 }
 
+/* X, with code CODE, is an operation for which safe_for_live_propagation
+   holds true, and bits set in MASK are live in the result.  Compute a
+   mask of (potentially) live bits in the non-constant inputs.  In case of
+   binop_implies_op2_fully_live (e.g. shifts), the computed mask may
+   exclusively pertain to the first operand.  */
+
+HOST_WIDE_INT
+carry_backpropagate (HOST_WIDE_INT mask, enum rtx_code code, rtx x)
+{
+  enum machine_mode mode = GET_MODE_INNER (GET_MODE (x));
+  HOST_WIDE_INT mmask = GET_MODE_MASK (mode);
+  switch (code)
+    {
+    case ASHIFT:
+      if (CONSTANT_P (XEXP (x, 1))
+	  && known_lt (UINTVAL (XEXP (x, 1)), GET_MODE_BITSIZE (mode)))
+	return mask >> INTVAL (XEXP (x, 1));
+      /* Fall through.  */
+    case PLUS: case MINUS:
+    case MULT:
+      return mask ? ((2ULL << floor_log2 (mask)) - 1) : 0;
+    case LSHIFTRT:
+      if (CONSTANT_P (XEXP (x, 1))
+	  && known_lt (UINTVAL (XEXP (x, 1)), GET_MODE_BITSIZE (mode)))
+	return mmask & (mask << INTVAL (XEXP (x, 1)));
+      return mmask;
+    case ASHIFTRT:
+      if (CONSTANT_P (XEXP (x, 1))
+	  && known_lt (UINTVAL (XEXP (x, 1)), GET_MODE_BITSIZE (mode)))
+	{
+	  HOST_WIDE_INT sign = 0;
+	  if (HOST_BITS_PER_WIDE_INT - clz_hwi (mask) + INTVAL (XEXP (x, 1))
+	      > GET_MODE_BITSIZE (mode).to_constant ())
+	    sign = (1ULL << GET_MODE_BITSIZE (mode).to_constant ()) - 1;
+	  return sign | (mmask & (mask << INTVAL (XEXP (x, 1))));
+	}
+      return mmask;
+    case SMUL_HIGHPART: case UMUL_HIGHPART:
+      if (!mask || XEXP (x, 1) == const0_rtx)
+	return 0;
+      if (CONSTANT_P (XEXP (x, 1)))
+	{
+	  if (pow2p_hwi (INTVAL (XEXP (x, 1))))
+	    return mmask & (mask << (GET_MODE_BITSIZE (mode).to_constant ()
+				     - exact_log2 (INTVAL (XEXP (x, 1)))));
+
+	  int bits = (HOST_BITS_PER_WIDE_INT
+		      + GET_MODE_BITSIZE (mode).to_constant ()
+		      - clz_hwi (mask) - ctz_hwi (INTVAL (XEXP (x, 1))));
+	  if (bits < GET_MODE_BITSIZE (mode).to_constant ())
+	    return (1ULL << bits) - 1;
+	}
+      return mmask;
+    case SIGN_EXTEND:
+      if (mask & ~mmask)
+	mask |= 1ULL << (GET_MODE_BITSIZE (mode).to_constant () - 1);
+      return mask;
+
+    /* We propagate for the shifted operand, but not the shift
+       count.  The count is handled specially.  */
+    case SS_ASHIFT:
+    case US_ASHIFT:
+      if (!mask)
+	return 0;
+      if (CONSTANT_P (XEXP (x, 1))
+	  && UINTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (mode).to_constant ())
+	{
+	  return ((mmask & ~((unsigned HOST_WIDE_INT)mmask
+			     >> (INTVAL (XEXP (x, 1))
+				 + (XEXP (x, 1) != const0_rtx
+				    && code == SS_ASHIFT))))
+		  | (mask >> INTVAL (XEXP (x, 1))));
+	}
+      return mmask;
+    default:
+      return mask;
+    }
+}
 /* Process uses in INSN contained in OBJ.  Set appropriate bits in LIVENOW
    for any chunks of pseudos that become live, potentially filtering using
    bits from LIVE_TMP.
@@ -414,11 +507,19 @@ ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
 
 	  /* ?!? How much of this should mirror SET handling, potentially
 	     being shared?   */
-	  if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))
+	  if (SUBREG_P (dst) && SUBREG_BYTE (dst).is_constant ())
 	    {
-	      bit = subreg_lsb (dst).to_constant ();
-	      if (bit >= HOST_BITS_PER_WIDE_INT)
-		bit = HOST_BITS_PER_WIDE_INT - 1;
+	      enum machine_mode omode = GET_MODE_INNER (GET_MODE (dst));
+	      enum machine_mode imode = GET_MODE (SUBREG_REG (dst));
+	      if (!VECTOR_MODE_P (GET_MODE (dst))
+		  || (GET_MODE_SIZE (imode).is_constant ()
+		      && (GET_MODE_SIZE (omode).to_constant ()
+			  > GET_MODE_SIZE (imode).to_constant ())))
+		{
+		  bit = subreg_lsb (dst).to_constant ();
+		  if (bit >= HOST_BITS_PER_WIDE_INT)
+		    bit = HOST_BITS_PER_WIDE_INT - 1;
+		}
 	      dst = SUBREG_REG (dst);
 	    }
 	  else if (GET_CODE (dst) == ZERO_EXTRACT
@@ -464,7 +565,7 @@ ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
 		{
 		  rtx inner = XEXP (src, 0);
 		  unsigned HOST_WIDE_INT src_mask
-		    = GET_MODE_MASK (GET_MODE (inner));
+		    = GET_MODE_MASK (GET_MODE_INNER (GET_MODE (inner)));
 
 		  /* DST_MASK could be zero if we had something in the SET
 		     that we couldn't handle.  */
@@ -480,11 +581,7 @@ ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
 		 sure everything that should get marked as live is marked
 		 from here onward.  */
 
-	      /* ?!? What is the point of this adjustment to DST_MASK?  */
-	      if (code == PLUS || code == MINUS
-		  || code == MULT || code == ASHIFT)
-		dst_mask
-		  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
+	      dst_mask = carry_backpropagate (dst_mask, code, src);
 
 	      /* We will handle the other operand of a binary operator
 		 at the bottom of the loop by resetting Y.  */
@@ -516,12 +613,20 @@ ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
 			 and process normally (conservatively).  */
 		      if (!REG_P (SUBREG_REG (y)))
 			break;
-		      bit = subreg_lsb (y).to_constant ();
-		      if (dst_mask)
+		      enum machine_mode omode = GET_MODE_INNER (GET_MODE (y));
+		      enum machine_mode imode = GET_MODE (SUBREG_REG (y));
+		      if (!VECTOR_MODE_P (GET_MODE (y))
+			  || (GET_MODE_SIZE (imode).is_constant ()
+			      && (GET_MODE_SIZE (omode).to_constant ()
+				  > GET_MODE_SIZE (imode).to_constant ())))
 			{
-			  dst_mask <<= bit;
-			  if (!dst_mask)
-			    dst_mask = -0x100000000ULL;
+			  bit = subreg_lsb (y).to_constant ();
+			  if (dst_mask)
+			    {
+			      dst_mask <<= bit;
+			      if (!dst_mask)
+				dst_mask = -0x100000000ULL;
+			    }
 			}
 		      y = SUBREG_REG (y);
 		    }
@@ -539,7 +644,8 @@ ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
 			 propagate destination liveness through, then just
 			 set the mask to the mode's mask.  */
 		      if (!safe_for_live_propagation (code))
-			tmp_mask = GET_MODE_MASK (GET_MODE (y));
+			tmp_mask
+			  = GET_MODE_MASK (GET_MODE_INNER (GET_MODE (y)));
 
 		      if (tmp_mask & 0xff)
 			bitmap_set_bit (livenow, rn);


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-30  2:39   ` Joern Rennecke
@ 2023-11-30  4:10     ` Joern Rennecke
       [not found]       ` <734a2733-b55c-4b0e-92c0-21f0b9fb41a7@gmail.com>
  2023-12-12 17:18       ` Jeff Law
  2023-11-30 15:46     ` Jeff Law
  2023-11-30 17:53     ` Jeff Law
  2 siblings, 2 replies; 22+ messages in thread
From: Joern Rennecke @ 2023-11-30  4:10 UTC (permalink / raw)
  To: Jivan Hakobyan, Jeff Law, GCC Patches

[-- Attachment #1: Type: text/plain, Size: 871 bytes --]

I originally computed mmask in carry_backpropagate from XEXP (x, 0),
but abandoned that when I realized we also get called for RTX_OBJ
things.  I forgot to adjust the SIGN_EXTEND code, though.  Fixed
in the attached revised patch.  Also made sure to not make inputs
of LSHIFTRT / ASHIFTRT live if the output is dead (and commoned
the checks for (mask == 0) in the process).

Something that could be done to further simplify the code is to make
carry_backpropagate do all the rtx_code-dependent propagation
decisions.  I.e. it would have cases for RTX_OBJ, AND, IOR, etc. that
propagate the mask, and the default action would be to make the input
fully live (after first checking not to make any bits in the input
live if the output is dead).
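
A rough sketch of that direction (hypothetical, not part of the
attached patch):

  HOST_WIDE_INT
  backpropagate_mask (HOST_WIDE_INT mask, enum rtx_code code, rtx x)
  {
    if (mask == 0)
      return 0;   /* Dead output: no input bits become live.  */
    switch (code)
      {
      /* Codes that pass the mask through unchanged.  */
      case REG: case AND: case IOR: case XOR:
        return mask;
      /* ... plus the arithmetic cases already in carry_backpropagate ... */
      default:
        /* Unhandled code: conservatively make the whole input live.  */
        return GET_MODE_MASK (GET_MODE_INNER (GET_MODE (x)));
      }
  }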

Then we wouldn't need safe_for_live_propagation any more.

Not sure if carry_backpropagate would still be a suitable name then,
though.

[-- Attachment #2: tmp.txt --]
[-- Type: text/plain, Size: 9762 bytes --]

    * ext-dce.cc (carry_backpropagate): Always return 0 when output is dead.  Fix SIGN_EXTEND input mask.

    * ext-dce.cc: handle vector modes.
    
    * ext-dce.cc: Amend comment to explain how liveness of vectors is tracked.
      (carry_backpropagate): Use GET_MODE_INNER.
      (ext_dce_process_sets): Likewise.  Only apply big endian correction for
      subregs if they don't have a vector mode.
      (ext_dce_process_uses): Likewise.

    * ext-dce.cc: carry_backpropagate: [US]S_ASHIFT fix, handle [LA]SHIFTRT
    
    * ext-dce.cc (safe_for_live_propagation): Add LSHIFTRT and ASHIFTRT.
      (carry_backpropagate): Reformat top comment.
      Add handling of LSHIFTRT and ASHIFTRT.
      Fix bit count for [SU]MUL_HIGHPART.
      Fix pasto for [SU]S_ASHIFT.

    * ext-dce.c: Fixes for carry handling.
    
    * ext-dce.c (safe_for_live_propagation): Handle MINUS.
      (ext_dce_process_uses): Break out carry handling into ..
      (carry_backpropagate): This new function.
      Better handling of ASHIFT.
      Add handling of SMUL_HIGHPART, UMUL_HIGHPART, SIGN_EXTEND, SS_ASHIFT and
      US_ASHIFT.

    * ext-dce.c: fix SUBREG_BYTE test
    
    As mentioned in
    https://gcc.gnu.org/pipermail/gcc-patches/2023-November/637486.html
    and
    https://gcc.gnu.org/pipermail/gcc-patches/2023-November/638473.html

diff --git a/gcc/ext-dce.cc b/gcc/ext-dce.cc
index 4e4c57de117..fd80052ad75 100644
--- a/gcc/ext-dce.cc
+++ b/gcc/ext-dce.cc
@@ -38,7 +38,10 @@ along with GCC; see the file COPYING3.  If not see
    bit 0..7   (least significant byte)
    bit 8..15  (second least significant byte)
    bit 16..31
-   bit 32..BITS_PER_WORD-1  */
+   bit 32..BITS_PER_WORD-1
+
+   For vector modes, we apply these bit groups to every lane; if any of the
+   bits in the group are live in any lane, we consider this group live.  */
 
 /* Note this pass could be used to narrow memory loads too.  It's
    not clear if that's profitable or not in general.  */
@@ -83,6 +86,7 @@ safe_for_live_propagation (rtx_code code)
     case SIGN_EXTEND:
     case TRUNCATE:
     case PLUS:
+    case MINUS:
     case MULT:
     case SMUL_HIGHPART:
     case UMUL_HIGHPART:
@@ -96,6 +100,8 @@ safe_for_live_propagation (rtx_code code)
     case SS_ASHIFT:
     case US_ASHIFT:
     case ASHIFT:
+    case LSHIFTRT:
+    case ASHIFTRT:
       return true;
 
     /* There may be other safe codes.  If so they can be added
@@ -215,13 +221,22 @@ ext_dce_process_sets (rtx_insn *insn, rtx obj, bitmap livenow, bitmap live_tmp)
 
 	  /* Phase one of destination handling.  First remove any wrapper
 	     such as SUBREG or ZERO_EXTRACT.  */
-	  unsigned HOST_WIDE_INT mask = GET_MODE_MASK (GET_MODE (x));
+	  unsigned HOST_WIDE_INT mask
+	    = GET_MODE_MASK (GET_MODE_INNER (GET_MODE (x)));
 	  if (SUBREG_P (x)
 	      && !paradoxical_subreg_p (x)
 	      && SUBREG_BYTE (x).is_constant ())
 	    {
-	      bit = subreg_lsb (x).to_constant ();
-	      mask = GET_MODE_MASK (GET_MODE (SUBREG_REG (x))) << bit;
+	      enum machine_mode omode = GET_MODE_INNER (GET_MODE (x));
+	      enum machine_mode imode = GET_MODE (SUBREG_REG (x));
+	      bit = 0;
+	      if (!VECTOR_MODE_P (GET_MODE (x))
+		  || (GET_MODE_SIZE (imode).is_constant ()
+		      && (GET_MODE_SIZE (omode).to_constant ()
+			  > GET_MODE_SIZE (imode).to_constant ())))
+		bit = subreg_lsb (x).to_constant ();
+	      mask = (GET_MODE_MASK (GET_MODE_INNER (GET_MODE (SUBREG_REG (x))))
+		      << bit);
 	      gcc_assert (mask);
 	      if (!mask)
 		mask = -0x100000000ULL;
@@ -365,6 +380,85 @@ binop_implies_op2_fully_live (rtx_code code)
     }
 }
 
+/* X, with code CODE, is an operation for which safe_for_live_propagation
+   holds true, and bits set in MASK are live in the result.  Compute a
+   mask of (potentially) live bits in the non-constant inputs.  In case of
+   binop_implies_op2_fully_live (e.g. shifts), the computed mask may
+   exclusively pertain to the first operand.  */
+
+HOST_WIDE_INT
+carry_backpropagate (HOST_WIDE_INT mask, enum rtx_code code, rtx x)
+{
+  if (mask == 0)
+    return 0;
+
+  enum machine_mode mode = GET_MODE_INNER (GET_MODE (x));
+  HOST_WIDE_INT mmask = GET_MODE_MASK (mode);
+  switch (code)
+    {
+    case ASHIFT:
+      if (CONSTANT_P (XEXP (x, 1))
+	  && known_lt (UINTVAL (XEXP (x, 1)), GET_MODE_BITSIZE (mode)))
+	return mask >> INTVAL (XEXP (x, 1));
+      /* Fall through.  */
+    case PLUS: case MINUS:
+    case MULT:
+      return (2ULL << floor_log2 (mask)) - 1;
+    case LSHIFTRT:
+      if (CONSTANT_P (XEXP (x, 1))
+	  && known_lt (UINTVAL (XEXP (x, 1)), GET_MODE_BITSIZE (mode)))
+	return mmask & (mask << INTVAL (XEXP (x, 1)));
+      return mmask;
+    case ASHIFTRT:
+      if (CONSTANT_P (XEXP (x, 1))
+	  && known_lt (UINTVAL (XEXP (x, 1)), GET_MODE_BITSIZE (mode)))
+	{
+	  HOST_WIDE_INT sign = 0;
+	  if (HOST_BITS_PER_WIDE_INT - clz_hwi (mask) + INTVAL (XEXP (x, 1))
+	      > GET_MODE_BITSIZE (mode).to_constant ())
+	    sign = (1ULL << GET_MODE_BITSIZE (mode).to_constant ()) - 1;
+	  return sign | (mmask & (mask << INTVAL (XEXP (x, 1))));
+	}
+      return mmask;
+    case SMUL_HIGHPART: case UMUL_HIGHPART:
+      if (XEXP (x, 1) == const0_rtx)
+	return 0;
+      if (CONSTANT_P (XEXP (x, 1)))
+	{
+	  if (pow2p_hwi (INTVAL (XEXP (x, 1))))
+	    return mmask & (mask << (GET_MODE_BITSIZE (mode).to_constant ()
+				     - exact_log2 (INTVAL (XEXP (x, 1)))));
+
+	  int bits = (HOST_BITS_PER_WIDE_INT
+		      + GET_MODE_BITSIZE (mode).to_constant ()
+		      - clz_hwi (mask) - ctz_hwi (INTVAL (XEXP (x, 1))));
+	  if (bits < GET_MODE_BITSIZE (mode).to_constant ())
+	    return (1ULL << bits) - 1;
+	}
+      return mmask;
+    case SIGN_EXTEND:
+      if (mask & ~GET_MODE_MASK (GET_MODE_INNER (GET_MODE (XEXP (x, 0)))))
+	mask |= 1ULL << (GET_MODE_BITSIZE (mode).to_constant () - 1);
+      return mask;
+
+    /* We propagate for the shifted operand, but not the shift
+       count.  The count is handled specially.  */
+    case SS_ASHIFT:
+    case US_ASHIFT:
+      if (CONSTANT_P (XEXP (x, 1))
+	  && UINTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (mode).to_constant ())
+	{
+	  return ((mmask & ~((unsigned HOST_WIDE_INT)mmask
+			     >> (INTVAL (XEXP (x, 1))
+				 + (XEXP (x, 1) != const0_rtx
+				    && code == SS_ASHIFT))))
+		  | (mask >> INTVAL (XEXP (x, 1))));
+	}
+      return mmask;
+    default:
+      return mask;
+    }
+}
 /* Process uses in INSN contained in OBJ.  Set appropriate bits in LIVENOW
    for any chunks of pseudos that become live, potentially filtering using
    bits from LIVE_TMP.
@@ -414,11 +508,19 @@ ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
 
 	  /* ?!? How much of this should mirror SET handling, potentially
 	     being shared?   */
-	  if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))
+	  if (SUBREG_P (dst) && SUBREG_BYTE (dst).is_constant ())
 	    {
-	      bit = subreg_lsb (dst).to_constant ();
-	      if (bit >= HOST_BITS_PER_WIDE_INT)
-		bit = HOST_BITS_PER_WIDE_INT - 1;
+	      enum machine_mode omode = GET_MODE_INNER (GET_MODE (dst));
+	      enum machine_mode imode = GET_MODE (SUBREG_REG (dst));
+	      if (!VECTOR_MODE_P (GET_MODE (dst))
+		  || (GET_MODE_SIZE (imode).is_constant ()
+		      && (GET_MODE_SIZE (omode).to_constant ()
+			  > GET_MODE_SIZE (imode).to_constant ())))
+		{
+		  bit = subreg_lsb (dst).to_constant ();
+		  if (bit >= HOST_BITS_PER_WIDE_INT)
+		    bit = HOST_BITS_PER_WIDE_INT - 1;
+		}
 	      dst = SUBREG_REG (dst);
 	    }
 	  else if (GET_CODE (dst) == ZERO_EXTRACT
@@ -464,7 +566,7 @@ ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
 		{
 		  rtx inner = XEXP (src, 0);
 		  unsigned HOST_WIDE_INT src_mask
-		    = GET_MODE_MASK (GET_MODE (inner));
+		    = GET_MODE_MASK (GET_MODE_INNER (GET_MODE (inner)));
 
 		  /* DST_MASK could be zero if we had something in the SET
 		     that we couldn't handle.  */
@@ -480,11 +582,7 @@ ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
 		 sure everything that should get marked as live is marked
 		 from here onward.  */
 
-	      /* ?!? What is the point of this adjustment to DST_MASK?  */
-	      if (code == PLUS || code == MINUS
-		  || code == MULT || code == ASHIFT)
-		dst_mask
-		  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
+	      dst_mask = carry_backpropagate (dst_mask, code, src);
 
 	      /* We will handle the other operand of a binary operator
 		 at the bottom of the loop by resetting Y.  */
@@ -516,12 +614,20 @@ ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
 			 and process normally (conservatively).  */
 		      if (!REG_P (SUBREG_REG (y)))
 			break;
-		      bit = subreg_lsb (y).to_constant ();
-		      if (dst_mask)
+		      enum machine_mode omode = GET_MODE_INNER (GET_MODE (y));
+		      enum machine_mode imode = GET_MODE (SUBREG_REG (y));
+		      if (!VECTOR_MODE_P (GET_MODE (y))
+			  || (GET_MODE_SIZE (imode).is_constant ()
+			      && (GET_MODE_SIZE (omode).to_constant ()
+				  > GET_MODE_SIZE (imode).to_constant ())))
 			{
-			  dst_mask <<= bit;
-			  if (!dst_mask)
-			    dst_mask = -0x100000000ULL;
+			  bit = subreg_lsb (y).to_constant ();
+			  if (dst_mask)
+			    {
+			      dst_mask <<= bit;
+			      if (!dst_mask)
+				dst_mask = -0x100000000ULL;
+			    }
 			}
 		      y = SUBREG_REG (y);
 		    }
@@ -539,7 +645,8 @@ ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
 			 propagate destination liveness through, then just
 			 set the mask to the mode's mask.  */
 		      if (!safe_for_live_propagation (code))
-			tmp_mask = GET_MODE_MASK (GET_MODE (y));
+			tmp_mask
+			  = GET_MODE_MASK (GET_MODE_INNER (GET_MODE (y)));
 
 		      if (tmp_mask & 0xff)
 			bitmap_set_bit (livenow, rn);


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-30  2:39   ` Joern Rennecke
  2023-11-30  4:10     ` Joern Rennecke
@ 2023-11-30 15:46     ` Jeff Law
  2023-11-30 17:53     ` Jeff Law
  2 siblings, 0 replies; 22+ messages in thread
From: Jeff Law @ 2023-11-30 15:46 UTC (permalink / raw)
  To: Joern Rennecke, Jivan Hakobyan, Jeff Law, GCC Patches



On 11/29/23 19:39, Joern Rennecke wrote:
> On Wed, 29 Nov 2023 at 20:05, Joern Rennecke
> <joern.rennecke@embecosm.com> wrote:
> 
>>> I suspect it'd be more useful to add handling of LSHIFTRT and
>>> ASHIFTRT.  Some ports do a lot of static shifting.
>>
>>> +    case SS_ASHIFT:
>>> +    case US_ASHIFT:
>>> +      if (!mask || XEXP (x, 1) == const0_rtx)
>>> +       return 0;
>>
>> P.S.: I just realized that this is a pasto: in the case of a const0_rtx
>> shift count, returning 0 will usually be wrong.
> 
> I've attached my current patch version.
I would strongly suggest we not start adding a lot of new capabilities 
here, deferring such work until gcc-15.  Our focus should be getting the 
existing code working well and cleanly implemented.

Jeff


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-30  2:39   ` Joern Rennecke
  2023-11-30  4:10     ` Joern Rennecke
  2023-11-30 15:46     ` Jeff Law
@ 2023-11-30 17:53     ` Jeff Law
  2023-11-30 18:31       ` Joern Rennecke
  2 siblings, 1 reply; 22+ messages in thread
From: Jeff Law @ 2023-11-30 17:53 UTC (permalink / raw)
  To: Joern Rennecke, Jivan Hakobyan, Jeff Law, GCC Patches



On 11/29/23 19:39, Joern Rennecke wrote:
> On Wed, 29 Nov 2023 at 20:05, Joern Rennecke
> <joern.rennecke@embecosm.com>  wrote:
> 
>>> I suspect it'd be more useful to add handling of LSHIFTRT and
>>> ASHIFTRT.  Some ports do a lot of static shifting.
>>> +    case SS_ASHIFT:
>>> +    case US_ASHIFT:
>>> +      if (!mask || XEXP (x, 1) == const0_rtx)
>>> +       return 0;
>> P.S.: I just realized that this is a pasto: in the case of a const0_rtx
>> shift count, returning 0 will usually be wrong.
> I've attached my current patch version.
> 
> 
> tmp.txt
> 
>      ext-dce.cc: handle vector modes.
>      
>      * ext-dce.cc: Amend comment to explain how liveness of vectors is tracked.
>        (carry_backpropagate): Use GET_MODE_INNER.
>        (ext_dce_process_sets): Likewise.  Only apply big endian correction for
>        subregs if they don't have a vector mode.
>        (ext_dce_process_uses): Likewise.
> 
>      * ext-dce.cc: carry_backpropagate: [US]S_ASHIFT fix, handle [LA]SHIFTRT
>      
>      * ext-dce.cc (safe_for_live_propagation): Add LSHIFTRT and ASHIFTRT.
>        (carry_backpropagate): Reformat top comment.
>        Add handling of LSHIFTRT and ASHIFTRT.
>        Fix bit count for [SU]MUL_HIGHPART.
>        Fix pasto for [SU]S_ASHIFT.
> 
>      * ext-dce.c: Fixes for carry handling.
>      
>      * ext-dce.c (safe_for_live_propagation): Handle MINUS.
>        (ext_dce_process_uses): Break out carry handling into ..
>        (carry_backpropagate): This new function.
>        Better handling of ASHIFT.
>        Add handling of SMUL_HIGHPART, UMUL_HIGHPART, SIGN_EXTEND, SS_ASHIFT and
>        US_ASHIFT.
> 
>      * ext-dce.c: fix SUBREG_BYTE test
>      
>      As mentioned in
>      https://gcc.gnu.org/pipermail/gcc-patches/2023-November/637486.html
>      and
>      https://gcc.gnu.org/pipermail/gcc-patches/2023-November/638473.html
> 
> 
> diff --git a/gcc/ext-dce.cc b/gcc/ext-dce.cc
> index 4e4c57de117..228c50e8b73 100644
> --- a/gcc/ext-dce.cc
> +++ b/gcc/ext-dce.cc
> @@ -38,7 +38,10 @@ along with GCC; see the file COPYING3.  If not see
>      bit 0..7   (least significant byte)
>      bit 8..15  (second least significant byte)
>      bit 16..31
> -   bit 32..BITS_PER_WORD-1  */
> +   bit 32..BITS_PER_WORD-1
> +
> +   For vector modes, we apply these bit groups to every lane; if any of the
> +   bits in the group are live in any lane, we consider this group live.  */
Why add vector modes now?  I realize it might help a vectorized sub*_dct 
from x264, but I was thinking that would be more of a gcc-15 improvement.

Was this in fact to help sub*_dct or something else concrete?  Unless 
it's concrete and significant, I'd lean towards deferring the 
enhancement until gcc-15.

>   
>   /* Note this pass could be used to narrow memory loads too.  It's
>      not clear if that's profitable or not in general.  */

> @@ -96,6 +100,8 @@ safe_for_live_propagation (rtx_code code)
>       case SS_ASHIFT:
>       case US_ASHIFT:
>       case ASHIFT:
> +    case LSHIFTRT:
> +    case ASHIFTRT:
>         return true;
So this starts to touch on a cleanup Richard mentioned.  The codes in 
there until now were supposed to be safe across the board.  As we add 
things like LSHIFTRT, we need to describe how to handle liveness 
transfer from the destination into the source(s).  I think what Richard 
is asking for is to just have one place which handles both.
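
For a constant-count LSHIFTRT, for example, the transfer rule is a
one-liner (restating the case from Joern's patch as a standalone
sketch): result bits come from input bits COUNT positions higher, so
the live input mask is the live result mask shifted back up and
clipped to the mode mask:

  /* If bits 0..7 of (lshiftrt X 8) are live, then bits 8..15 of X
     are live: 0xff << 8 == 0xff00.  */
  input_mask = mode_mask & (result_mask << count);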

Anyway, my current plan would be to pull in the formatting fixes, the 
back propagation without the vector enhancement.  We've already fixed 
the subreg thingie locally.


jeff



* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-30 17:53     ` Jeff Law
@ 2023-11-30 18:31       ` Joern Rennecke
  2023-11-30 19:15         ` Jeff Law
  0 siblings, 1 reply; 22+ messages in thread
From: Joern Rennecke @ 2023-11-30 18:31 UTC (permalink / raw)
  To: Jeff Law; +Cc: Jivan Hakobyan, Jeff Law, GCC Patches

On Thu, 30 Nov 2023 at 17:53, Jeff Law <jeffreyalaw@gmail.com> wrote:
> >      * ext-dce.c: Fixes for carry handling.
> >
> >      * ext-dce.c (safe_for_live_propagation): Handle MINUS.
> >        (ext_dce_process_uses): Break out carry handling into ..
> >        (carry_backpropagate): This new function.
> >        Better handling of ASHIFT.
> >        Add handling of SMUL_HIGHPART, UMUL_HIGHPART, SIGN_EXTEND, SS_ASHIFT and
> >        US_ASHIFT.
> >
> >      * ext-dce.c: fix SUBREG_BYTE test
> >
> >      As mentioned in
> >      https://gcc.gnu.org/pipermail/gcc-patches/2023-November/637486.html
> >      and
> >      https://gcc.gnu.org/pipermail/gcc-patches/2023-November/638473.html
> >
> >
> > diff --git a/gcc/ext-dce.cc b/gcc/ext-dce.cc
> > index 4e4c57de117..228c50e8b73 100644
> > --- a/gcc/ext-dce.cc
> > +++ b/gcc/ext-dce.cc
> > @@ -38,7 +38,10 @@ along with GCC; see the file COPYING3.  If not see
> >      bit 0..7   (least significant byte)
> >      bit 8..15  (second least significant byte)
> >      bit 16..31
> > -   bit 32..BITS_PER_WORD-1  */
> > +   bit 32..BITS_PER_WORD-1
> > +
> > +   For vector modes, we apply these bit groups to every lane; if any of the
> > +   bits in the group are live in any lane, we consider this group live.  */
> Why add vector modes now?  I realize it might help a vectorized sub*_dct
> from x264, but I was thinking that would be more of a gcc-15 improvement.

Actually, we already did, but because it was unintentional, it wasn't
done properly.

I've been using GET_MODE_BITSIZE (GET_MODE (x)).to_constant (), thinking
a mode should just have a constant size that can easily fit into an int.
I was wrong.  Debugging found that it was a scalable vector mode.
SUBREGs, shifts and other stuff have vector modes and go through the
code.  Instead of adding code to bail out, I thought it would be a good
idea to think about how vector modes can be supported without ballooning
the computation time or memory.  And keeping in mind the original
intention of the patch - eliminating redundant sign/zero extension -
that can actually be applied to vectors as well, and that means we
should consider how these operations work on each lane.

By looking at the inner mode of a vector, we also conveniently get a
sane size.  For complex numbers, it's also saner to treat them as
two-element vectors than to try to apply the bit groups while ignoring
the structure, so it makes sense to use GET_MODE_INNER in general.
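
An illustrative fragment (using the same internal API as the patch;
assume X is the rtx being examined):

  /* For V4SImode this yields SImode; for a scalar mode it is the
     identity.  Either way the lane size is a compile-time constant,
     even when the vector length itself is scalable.  */
  machine_mode lane_mode = GET_MODE_INNER (GET_MODE (x));
  unsigned HOST_WIDE_INT lane_mask = GET_MODE_MASK (lane_mode);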

Something that could be done for further improvement but seems too complex
for gcc 14 would be to handle vector constants as shift counts.

Come to think of it, I actually applied the wrong test for the integer
shift counts - it should be CONST_INT_P, not CONSTANT_P.
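
I.e. the guards would presumably become:

  if (CONST_INT_P (XEXP (x, 1))
      && UINTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (mode).to_constant ())
    ...

since CONSTANT_P also accepts CONST_VECTOR, CONST_DOUBLE, symbolic
constants and the like, for which INTVAL / UINTVAL must not be used.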

> >
> >   /* Note this pass could be used to narrow memory loads too.  It's
> >      not clear if that's profitable or not in general.  */
>
> > @@ -96,6 +100,8 @@ safe_for_live_propagation (rtx_code code)
> >       case SS_ASHIFT:
> >       case US_ASHIFT:
> >       case ASHIFT:
> > +    case LSHIFTRT:
> > +    case ASHIFTRT:
> >         return true;
> So this starts to touch on a cleanup Richard mentioned.  The codes in
> there until now were supposed to be safe across the board.

Saturating operations are not safe at all without explicitly computing
the liveness propagation.

>  As we add
> things like LSHIFTRT, we need to describe how to handle liveness
> transfer from the destination into the source(s).  I think what Richard
> is asking for is to just have one place which handles both.

LSHIFTRT is much simpler than the saturating operations.

> Anyway, my current plan would be to pull in the formatting fixes, the
> back propagation without the vector enhancement.

Pretending the vector modes don't happen is not making the code safe.
We have to handle them somehow, so we might as well do that in a way
that is consistent and gives more potential for optimization.


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-30 18:31       ` Joern Rennecke
@ 2023-11-30 19:15         ` Jeff Law
  0 siblings, 0 replies; 22+ messages in thread
From: Jeff Law @ 2023-11-30 19:15 UTC (permalink / raw)
  To: Joern Rennecke; +Cc: Jivan Hakobyan, Jeff Law, GCC Patches



On 11/30/23 11:31, Joern Rennecke wrote:

> 
> Pretending the vector modes don't happen is not making the code safe.
> We have to handle them somehow, so we might as well do that in a way
> that is consistent and gives more potential for optimization.
We're not pretending they don't happen.  Quite the opposite.  When we 
see them we need to take appropriate action.

For a set, you can ignore them, since that means we'll keep objects live
longer than they normally would have been -- which is the safe thing for
this pass.

For a use, you can't ignore them, ever.  You must always make live
anything that is potentially used or you run the risk of incorrect code
generation.

If that's not what we're doing now, then let's fix that without 
introducing a whole new set of optimizations that need to be analyzed 
and debugged.

I would have been all for this work a month ago, but we're past stage1 
close so the focus needs to be on cleaning up what we've got for gcc-14.

Jeff


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
       [not found]       ` <734a2733-b55c-4b0e-92c0-21f0b9fb41a7@gmail.com>
@ 2023-11-30 21:32         ` Jeff Law
  0 siblings, 0 replies; 22+ messages in thread
From: Jeff Law @ 2023-11-30 21:32 UTC (permalink / raw)
  To: Joern Rennecke, Jivan Hakobyan, Jeff Law, GCC Patches



On 11/30/23 14:22, Jeff Law wrote:

> So if you're going to add opcodes in here, don't they also need to be in
> safe_for_live_propagation as well?  Otherwise they won't get used, right?
> I'm thinking about the HIGHPART cases for example.  As well as the
> saturating shifts which we indicated we were going to take out of
> safe_for_live_propagation IIRC.
Ignore my comment about HIGHPART :-)

Jeff


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-30  4:10     ` Joern Rennecke
       [not found]       ` <734a2733-b55c-4b0e-92c0-21f0b9fb41a7@gmail.com>
@ 2023-12-12 17:18       ` Jeff Law
  1 sibling, 0 replies; 22+ messages in thread
From: Jeff Law @ 2023-12-12 17:18 UTC (permalink / raw)
  To: Joern Rennecke, Jivan Hakobyan, Jeff Law, GCC Patches



On 11/29/23 21:10, Joern Rennecke wrote:
>   I originally computed mmask in carry_backpropagate from XEXP (x, 0),
> but abandoned that when I realized we also get called for RTX_OBJ
> things.  I forgot to adjust the SIGN_EXTEND code, though.  Fixed
> in the attached revised patch.  Also made sure to not make inputs
> of LSHIFTRT / ASHIFTRT live if the output is dead (and commoned
> the checks for (mask == 0) in the process).
> 
> Something that could be done to further simplify the code is to make
> carry_backpropagate do all the rtx_code-dependent propagation
> decisions.  I.e. it would have cases for RTX_OBJ, AND, IOR, etc. that
> propagate the mask, and the default action would be to make the input
> fully live (after first checking not to make any bits in the input
> live if the output is dead).
> 
> Then we wouldn't need safe_for_live_propagation any more.
> 
> Not sure if carry_backpropagate would still be a suitable name then,
> though.
> 
> 
> tmp.txt
> 
>      * ext-dce.cc (carry_backpropagate): Always return 0 when output is dead.  Fix SIGN_EXTEND input mask.
> 
>      * ext-dce.cc: handle vector modes.
>      
>      * ext-dce.cc: Amend comment to explain how liveness of vectors is tracked.
>        (carry_backpropagate): Use GET_MODE_INNER.
>        (ext_dce_process_sets): Likewise.  Only apply big endian correction for
>        subregs if they don't have a vector mode.
>        (ext_dce_process_uses): Likewise.
> 
>      * ext-dce.cc: carry_backpropagate: [US]S_ASHIFT fix, handle [LA]SHIFTRT
>      
>      * ext-dce.cc (safe_for_live_propagation): Add LSHIFTRT and ASHIFTRT.
>        (carry_backpropagate): Reformat top comment.
>        Add handling of LSHIFTRT and ASHIFTRT.
>        Fix bit count for [SU]MUL_HIGHPART.
>        Fix pasto for [SU]S_ASHIFT.
> 
>      * ext-dce.c: Fixes for carry handling.
>      
>      * ext-dce.c (safe_for_live_propagation): Handle MINUS.
>        (ext_dce_process_uses): Break out carry handling into ..
>        (carry_backpropagate): This new function.
>        Better handling of ASHIFT.
>        Add handling of SMUL_HIGHPART, UMUL_HIGHPART, SIGN_EXTEND, SS_ASHIFT and
>        US_ASHIFT.
> 
>      * ext-dce.c: fix SUBREG_BYTE test
I haven't done an update in a little while.  My tester spun this without 
the vector bits which I'm still pondering.  It did flag one issue.

Specifically, on alpha, pr53645.c failed due to the ASHIFTRT handling.


> +    case ASHIFTRT:
> +      if (CONSTANT_P (XEXP (x, 1))
> +	  && known_lt (UINTVAL (XEXP (x, 1)), GET_MODE_BITSIZE (mode)))
> +	{
> +	  HOST_WIDE_INT sign = 0;
> +	  if (HOST_BITS_PER_WIDE_INT - clz_hwi (mask) + INTVAL (XEXP (x, 1))
> +	      > GET_MODE_BITSIZE (mode).to_constant ())
> +	    sign = (1ULL << GET_MODE_BITSIZE (mode).to_constant ()) - 1;
> +	  return sign | (mmask & (mask << INTVAL (XEXP (x, 1))));
> +	}
The "-1" when computing the sign bit is meant to apply to the shift count.
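
I.e., presumably the intent is the sign-bit mask rather than an
all-ones value (the latter also shifts 1ULL by the full mode width,
which is undefined for 64-bit modes):

  /* Hypothetical correction - move the "-1" inside the shift count:  */
  sign = 1ULL << (GET_MODE_BITSIZE (mode).to_constant () - 1);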

Jeff


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-30 15:44   ` Jeff Law
@ 2023-12-01  2:05     ` Xi Ruoyao
  0 siblings, 0 replies; 22+ messages in thread
From: Xi Ruoyao @ 2023-12-01  2:05 UTC (permalink / raw)
  To: Jeff Law, gcc-patches

On Thu, 2023-11-30 at 08:44 -0700, Jeff Law wrote:
> 
> 
> On 11/29/23 02:33, Xi Ruoyao wrote:
> > On Mon, 2023-11-27 at 23:06 -0700, Jeff Law wrote:
> > > This has (of course) been tested on rv64.  It's also been bootstrapped
> > > and regression tested on x86.  Bootstrap and regression tested (C only)
> > > for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped
> > > and regression tested on ppc, hppa and s390x (C only for those as well).
> > >    It's also been tested on the various crosses in my tester.  So we've
> > > got reasonable coverage of 16, 32 and 64 bit targets, big and little
> > > endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other
> > > oddities.
> > > 
> > > The included tests are for RISC-V only because not all targets are going
> > > to have extraneous extensions.   There's tests from coremark, x264 and
> > > GCC's bz database.  It probably wouldn't be hard to add aarch64
> > > testscases.  The BZs listed are improved by this patch for aarch64.
> > 
> > I've successfully bootstrapped this on loongarch64-linux-gnu and tried
> > the added test cases.  For loongarch64 the redundant extensions are
> > removed for core_bench_list.c, core_init_matrix.c, core_list_init.c,
> > matrix_add_const.c, and pr111384.c, but not mem-extend.c.
> > 
> > Should I change something in LoongArch backend in order to make ext_dce
> > work for mem-extend.c too?  If yes then any pointers?
> I'd bet it was my goof removing MINUS from the list of supported opcodes 
> where we can use narrowing life information from the destination to 
> narrow the lifetime of the sources.
> 
> Try adding MINUS back into safe_for_live_propagation.

I can confirm it works for this this case, and bootstrap & regtest still
fine on LoongArch.

-- 
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-29  9:33 ` Xi Ruoyao
  2023-11-29 12:37   ` Xi Ruoyao
@ 2023-11-30 15:44   ` Jeff Law
  2023-12-01  2:05     ` Xi Ruoyao
  1 sibling, 1 reply; 22+ messages in thread
From: Jeff Law @ 2023-11-30 15:44 UTC (permalink / raw)
  To: Xi Ruoyao, gcc-patches



On 11/29/23 02:33, Xi Ruoyao wrote:
> On Mon, 2023-11-27 at 23:06 -0700, Jeff Law wrote:
>> This has (of course) been tested on rv64.  It's also been bootstrapped
>> and regression tested on x86.  Bootstrap and regression tested (C only)
>> for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped
>> and regression tested on ppc, hppa and s390x (C only for those as well).
>>    It's also been tested on the various crosses in my tester.  So we've
>> got reasonable coverage of 16, 32 and 64 bit targets, big and little
>> endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other
>> oddities.
>>
>> The included tests are for RISC-V only because not all targets are going
>> to have extraneous extensions.   There's tests from coremark, x264 and
>> GCC's bz database.  It probably wouldn't be hard to add aarch64
>> testcases.  The BZs listed are improved by this patch for aarch64.
> 
> I've successfully bootstrapped this on loongarch64-linux-gnu and tried
> the added test cases.  For loongarch64 the redundant extensions are
> removed for core_bench_list.c, core_init_matrix.c, core_list_init.c,
> matrix_add_const.c, and pr111384.c, but not mem-extend.c.
> 
> Should I change something in LoongArch backend in order to make ext_dce
> work for mem-extend.c too?  If yes then any pointers?
I'd bet it was my goof removing MINUS from the list of supported opcodes 
where we can use narrowing life information from the destination to 
narrow the lifetime of the sources.

Try adding MINUS back into safe_for_live_propagation.
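
I.e., presumably just restoring this hunk from Joern's earlier patch:

       case PLUS:
  +    case MINUS:
       case MULT: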

Jeff


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-29 15:46     ` Xi Ruoyao
@ 2023-11-29 19:19       ` Jivan Hakobyan
  0 siblings, 0 replies; 22+ messages in thread
From: Jivan Hakobyan @ 2023-11-29 19:19 UTC (permalink / raw)
  To: Xi Ruoyao; +Cc: Jeff Law, gcc-patches


The reason is the removal of MINUS from safe_for_live_propagation.
We did not do it on purpose and will roll it back in V3.




> On 29 Nov 2023, at 19:46, Xi Ruoyao <xry111@xry111.site> wrote:
> 
> On Wed, 2023-11-29 at 20:37 +0800, Xi Ruoyao wrote:
>>> On Wed, 2023-11-29 at 17:33 +0800, Xi Ruoyao wrote:
>>> On Mon, 2023-11-27 at 23:06 -0700, Jeff Law wrote:
>>>> This has (of course) been tested on rv64.  It's also been bootstrapped
>>>> and regression tested on x86.  Bootstrap and regression tested (C only)
>>>> for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped
>>>> and regression tested on ppc, hppa and s390x (C only for those as well).
>>>>   It's also been tested on the various crosses in my tester.  So we've
>>>> got reasonable coverage of 16, 32 and 64 bit targets, big and little
>>>> endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other
>>>> oddities.
>>>> 
>>>> The included tests are for RISC-V only because not all targets are going
>>>> to have extraneous extensions.   There's tests from coremark, x264 and
>>>> GCC's bz database.  It probably wouldn't be hard to add aarch64
>>>> testcases.  The BZs listed are improved by this patch for aarch64.
>>> 
>>> I've successfully bootstrapped this on loongarch64-linux-gnu and tried
>>> the added test cases.  For loongarch64 the redundant extensions are
>>> removed for core_bench_list.c, core_init_matrix.c, core_list_init.c,
>>> matrix_add_const.c, and pr111384.c, but not mem-extend.c.
> 
>> Follow up: no regression in GCC test suite on LoongArch.
>> 
>>> Should I change something in LoongArch backend in order to make ext_dce
>>> work for mem-extend.c too?  If yes then any pointers?
> 
> Hmm... This test doesn't seem to work even for RISC-V:
> 
> $ ./gcc/cc1 -O2 ../gcc/gcc/testsuite/gcc.target/riscv/mem-extend.c  -nostdinc -fdump-rtl-ext_dce -march=rv64gc_zbb -mabi=lp64d -o- 2>&1 | grep -F zext.h
>    zext.h    a5,a5
>    zext.h    a4,a4
> 
> and the 294r.ext_dce file does not contain "Successfully transformed
> to:" lines.
> 
> --
> Xi Ruoyao <xry111@xry111.site>
> School of Aerospace Science and Technology, Xidian University


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-29 12:37   ` Xi Ruoyao
@ 2023-11-29 15:46     ` Xi Ruoyao
  2023-11-29 19:19       ` Jivan Hakobyan
  0 siblings, 1 reply; 22+ messages in thread
From: Xi Ruoyao @ 2023-11-29 15:46 UTC (permalink / raw)
  To: Jeff Law, gcc-patches

On Wed, 2023-11-29 at 20:37 +0800, Xi Ruoyao wrote:
> On Wed, 2023-11-29 at 17:33 +0800, Xi Ruoyao wrote:
> > On Mon, 2023-11-27 at 23:06 -0700, Jeff Law wrote:
> > > This has (of course) been tested on rv64.  It's also been bootstrapped
> > > and regression tested on x86.  Bootstrap and regression tested (C only) 
> > > for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped 
> > > and regression tested on ppc, hppa and s390x (C only for those as well). 
> > >   It's also been tested on the various crosses in my tester.  So we've
> > > got reasonable coverage of 16, 32 and 64 bit targets, big and little
> > > endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other 
> > > oddities.
> > > 
> > > The included tests are for RISC-V only because not all targets are going 
> > > to have extraneous extensions.   There's tests from coremark, x264 and
> > > GCC's bz database.  It probably wouldn't be hard to add aarch64 
> > > testcases.  The BZs listed are improved by this patch for aarch64.
> > 
> > I've successfully bootstrapped this on loongarch64-linux-gnu and tried
> > the added test cases.  For loongarch64 the redundant extensions are
> > removed for core_bench_list.c, core_init_matrix.c, core_list_init.c,
> > matrix_add_const.c, and pr111384.c, but not mem-extend.c.

> Follow up: no regression in GCC test suite on LoongArch.
> 
> > Should I change something in LoongArch backend in order to make ext_dce
> > work for mem-extend.c too?  If yes then any pointers?

Hmm... This test doesn't seem to work even for RISC-V:

$ ./gcc/cc1 -O2 ../gcc/gcc/testsuite/gcc.target/riscv/mem-extend.c  -nostdinc -fdump-rtl-ext_dce -march=rv64gc_zbb -mabi=lp64d -o- 2>&1 | grep -F zext.h
	zext.h	a5,a5
	zext.h	a4,a4

and the 294r.ext_dce file does not contain "Successfully transformed
to:" lines.

-- 
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-29  9:33 ` Xi Ruoyao
@ 2023-11-29 12:37   ` Xi Ruoyao
  2023-11-29 15:46     ` Xi Ruoyao
  2023-11-30 15:44   ` Jeff Law
  1 sibling, 1 reply; 22+ messages in thread
From: Xi Ruoyao @ 2023-11-29 12:37 UTC (permalink / raw)
  To: Jeff Law, gcc-patches

On Wed, 2023-11-29 at 17:33 +0800, Xi Ruoyao wrote:
> On Mon, 2023-11-27 at 23:06 -0700, Jeff Law wrote:
> > This has (of course) been tested on rv64.  It's also been bootstrapped
> > and regression tested on x86.  Bootstrap and regression tested (C only) 
> > for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped 
> > and regression tested on ppc, hppa and s390x (C only for those as well). 
> >   It's also been tested on the various crosses in my tester.  So we've
> > got reasonable coverage of 16, 32 and 64 bit targets, big and little
> > endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other 
> > oddities.
> > 
> > The included tests are for RISC-V only because not all targets are going 
> > to have extraneous extensions.   There's tests from coremark, x264 and
> > GCC's bz database.  It probably wouldn't be hard to add aarch64 
> > testcases.  The BZs listed are improved by this patch for aarch64.
> 
> I've successfully bootstrapped this on loongarch64-linux-gnu and tried
> the added test cases.  For loongarch64 the redundant extensions are
> removed for core_bench_list.c, core_init_matrix.c, core_list_init.c,
> matrix_add_const.c, and pr111384.c, but not mem-extend.c.

Follow up: no regression in GCC test suite on LoongArch.

> Should I change something in LoongArch backend in order to make ext_dce
> work for mem-extend.c too?  If yes then any pointers?

-- 
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-28 22:18   ` Jivan Hakobyan
  2023-11-28 23:26     ` Jeff Law
@ 2023-11-29 10:25     ` Richard Sandiford
  1 sibling, 0 replies; 22+ messages in thread
From: Richard Sandiford @ 2023-11-29 10:25 UTC (permalink / raw)
  To: Jivan Hakobyan; +Cc: Andrew Stubbs, Jeff Law, gcc-patches

Jivan Hakobyan <jivanhakobyan9@gmail.com> writes:
>>
>> The amdgcn ICE I reported still exists:
>
>
Can you send a build command to reproduce the ICE?
I built on x86-64 and RV32/64, and did not get any faults.

The ICE that Andrew reported relies on configuring with:

  --enable-checking=yes,extra,rtl

since rtl checking isn't enabled by default.

Thanks,
Richard

>
> On Tue, Nov 28, 2023 at 7:08 PM Andrew Stubbs <ams@codesourcery.com> wrote:
>
>> On 28/11/2023 06:06, Jeff Law wrote:
>> > - Verify we have a SUBREG before looking at SUBREG_BYTE.
>>
>> The amdgcn ICE I reported still exists:
>>
>> > conftest.c:16:1: internal compiler error: RTL check: expected code
>> 'subreg', have 'reg' in ext_dce_process_uses, at ext-dce.cc:417
>> >    16 | }
>> >       | ^
>> > 0x8c7b21 rtl_check_failed_code1(rtx_def const*, rtx_code, char const*,
>> int, char const*)
>> >>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/rtl.cc:770
>> > 0xa768e0 ext_dce_process_uses
>>
>> >>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:417
>> > 0x1aed4bc ext_dce_process_bb
>>
>> >>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:643
>> > 0x1aed4bc ext_dce
>>
>> >>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:794
>> > 0x1aed4bc execute
>>
>> >>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:862
>> > Please submit a full bug report, with preprocessed source (by using
>> -freport-bug).
>> > Please include the complete backtrace with any bug report.
>> > See <https://gcc.gnu.org/bugs/> for instructions.
>> > configure:3812: $? = 1
>> > configure: failed program was:
>> > | /* confdefs.h */
>> > | #define PACKAGE_NAME "GNU C Runtime Library"
>> > | #define PACKAGE_TARNAME "libgcc"
>> > | #define PACKAGE_VERSION "1.0"
>> > | #define PACKAGE_STRING "GNU C Runtime Library 1.0"
>> > | #define PACKAGE_BUGREPORT ""
>> > | #define PACKAGE_URL "http://www.gnu.org/software/libgcc/"
>> > | /* end confdefs.h.  */
>> > |
>> > | int
>> > | main ()
>> > | {
>> > |
>> > |   ;
>> > |   return 0;
>> > | }
>>
>> I think the test is maybe backwards?
>>
>>    /* ?!? How much of this should mirror SET handling, potentially
>>       being shared?   */
>>    if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))
>>
>> Andrew
>>


* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-28  6:06 Jeff Law
  2023-11-28 15:08 ` Andrew Stubbs
@ 2023-11-29  9:33 ` Xi Ruoyao
  2023-11-29 12:37   ` Xi Ruoyao
  2023-11-30 15:44   ` Jeff Law
  1 sibling, 2 replies; 22+ messages in thread
From: Xi Ruoyao @ 2023-11-29  9:33 UTC (permalink / raw)
  To: Jeff Law, gcc-patches

On Mon, 2023-11-27 at 23:06 -0700, Jeff Law wrote:
> This has (of course) been tested on rv64.  It's also been bootstrapped
> and regression tested on x86.  Bootstrap and regression tested (C only) 
> for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped 
> and regression tested on ppc, hppa and s390x (C only for those as well). 
>   It's also been tested on the various crosses in my tester.  So we've
> got reasonable coverage of 16, 32 and 64 bit targets, big and little 
> endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other 
> oddities.
> 
> The included tests are for RISC-V only because not all targets are going 
> to have extraneous extensions.   There are tests from coremark, x264 and
> GCC's bz database.  It probably wouldn't be hard to add aarch64
> testcases.  The BZs listed are improved by this patch for aarch64.

I've successfully bootstrapped this on loongarch64-linux-gnu and tried
the added test cases.  For loongarch64 the redundant extensions are
removed for core_bench_list.c, core_init_matrix.c, core_list_init.c,
matrix_add_const.c, and pr111384.c, but not mem-extend.c.

Should I change something in the LoongArch backend in order to make ext_dce
work for mem-extend.c too?  If yes, then any pointers?

-- 
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-28 23:26     ` Jeff Law
@ 2023-11-29  9:28       ` Andrew Stubbs
  0 siblings, 0 replies; 22+ messages in thread
From: Andrew Stubbs @ 2023-11-29  9:28 UTC (permalink / raw)
  To: Jeff Law, Jivan Hakobyan; +Cc: gcc-patches

On 28/11/2023 23:26, Jeff Law wrote:
> 
> 
> On 11/28/23 15:18, Jivan Hakobyan wrote:
>>     The amdgcn ICE I reported still exists:
>>
>>
>> Can you send a build command to reproduce the ICE?
>> I built on x86-64, RV32/64, and did not get any faults.
> The code is clearly wrong though.  We need to test that we have a subreg
> before we look at the subreg_byte.  I fixed one of those elsewhere; this
> may ultimately be a paste-o.  Anyway, I'll fix it for V3.

I have confirmed that swapping the conditions fixes the ICE.
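
That is, the swapped condition would read roughly:

   if (SUBREG_P (dst) && SUBREG_BYTE (dst).is_constant ())

so the SUBREG_P check guards the SUBREG_BYTE access.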

I haven't yet observed the pass having any effect on amdgcn, but it
also hasn't broken anything in the limited testing I did.

Andrew

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-28 22:18   ` Jivan Hakobyan
@ 2023-11-28 23:26     ` Jeff Law
  2023-11-29  9:28       ` Andrew Stubbs
  2023-11-29 10:25     ` Richard Sandiford
  1 sibling, 1 reply; 22+ messages in thread
From: Jeff Law @ 2023-11-28 23:26 UTC (permalink / raw)
  To: Jivan Hakobyan, Andrew Stubbs; +Cc: gcc-patches



On 11/28/23 15:18, Jivan Hakobyan wrote:
>     The amdgcn ICE I reported still exists:
> 
> 
> Can you send a build command to reproduce the ICE?
> I built on x86-64, RV32/64, and did not get any faults.
The code is clearly wrong though.  We need to test that we have a subreg
before we look at the subreg_byte.  I fixed one of those elsewhere; this
may ultimately be a paste-o.  Anyway, I'll fix it for V3.
jeff


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-28 15:08 ` Andrew Stubbs
@ 2023-11-28 22:18   ` Jivan Hakobyan
  2023-11-28 23:26     ` Jeff Law
  2023-11-29 10:25     ` Richard Sandiford
  0 siblings, 2 replies; 22+ messages in thread
From: Jivan Hakobyan @ 2023-11-28 22:18 UTC (permalink / raw)
  To: Andrew Stubbs; +Cc: Jeff Law, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 2162 bytes --]

>
> The amdgcn ICE I reported still exists:


Can you send a build command to reproduce the ICE?
I built on x86-64, RV32/64, and did not get any faults.

On Tue, Nov 28, 2023 at 7:08 PM Andrew Stubbs <ams@codesourcery.com> wrote:

> On 28/11/2023 06:06, Jeff Law wrote:
> > - Verify we have a SUBREG before looking at SUBREG_BYTE.
>
> The amdgcn ICE I reported still exists:
>
> > conftest.c:16:1: internal compiler error: RTL check: expected code
> 'subreg', have 'reg' in ext_dce_process_uses, at ext-dce.cc:417
> >    16 | }
> >       | ^
> > 0x8c7b21 rtl_check_failed_code1(rtx_def const*, rtx_code, char const*,
> int, char const*)
> >>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/rtl.cc:770
> > 0xa768e0 ext_dce_process_uses
>
> >>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:417
> > 0x1aed4bc ext_dce_process_bb
>
> >>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:643
> > 0x1aed4bc ext_dce
>
> >>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:794
> > 0x1aed4bc execute
>
> >>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:862
> > Please submit a full bug report, with preprocessed source (by using
> -freport-bug).
> > Please include the complete backtrace with any bug report.
> > See <https://gcc.gnu.org/bugs/> for instructions.
> > configure:3812: $? = 1
> > configure: failed program was:
> > | /* confdefs.h */
> > | #define PACKAGE_NAME "GNU C Runtime Library"
> > | #define PACKAGE_TARNAME "libgcc"
> > | #define PACKAGE_VERSION "1.0"
> > | #define PACKAGE_STRING "GNU C Runtime Library 1.0"
> > | #define PACKAGE_BUGREPORT ""
> > | #define PACKAGE_URL "http://www.gnu.org/software/libgcc/"
> > | /* end confdefs.h.  */
> > |
> > | int
> > | main ()
> > | {
> > |
> > |   ;
> > |   return 0;
> > | }
>
> I think the test is maybe backwards?
>
>    /* ?!? How much of this should mirror SET handling, potentially
>       being shared?   */
>    if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))
>
> Andrew
>


-- 
With the best regards
Jivan Hakobyan

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [V2] New pass for sign/zero extension elimination -- not ready for "final" review
  2023-11-28  6:06 Jeff Law
@ 2023-11-28 15:08 ` Andrew Stubbs
  2023-11-28 22:18   ` Jivan Hakobyan
  2023-11-29  9:33 ` Xi Ruoyao
  1 sibling, 1 reply; 22+ messages in thread
From: Andrew Stubbs @ 2023-11-28 15:08 UTC (permalink / raw)
  To: Jeff Law, gcc-patches

On 28/11/2023 06:06, Jeff Law wrote:
> - Verify we have a SUBREG before looking at SUBREG_BYTE.

The amdgcn ICE I reported still exists:

> conftest.c:16:1: internal compiler error: RTL check: expected code 'subreg', have 'reg' in ext_dce_process_uses, at ext-dce.cc:417
>    16 | }
>       | ^
> 0x8c7b21 rtl_check_failed_code1(rtx_def const*, rtx_code, char const*, int, char const*)
>>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/rtl.cc:770
> 0xa768e0 ext_dce_process_uses
>>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:417
> 0x1aed4bc ext_dce_process_bb
>>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:643
> 0x1aed4bc ext_dce
>>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:794
> 0x1aed4bc execute
>>......./scratch/astubbs/omp/upA/gcnbuild/src/gcc-mainline/gcc/ext-dce.cc:862
> Please submit a full bug report, with preprocessed source (by using -freport-bug).
> Please include the complete backtrace with any bug report.
> See <https://gcc.gnu.org/bugs/> for instructions.
> configure:3812: $? = 1
> configure: failed program was:
> | /* confdefs.h */
> | #define PACKAGE_NAME "GNU C Runtime Library"
> | #define PACKAGE_TARNAME "libgcc"
> | #define PACKAGE_VERSION "1.0"
> | #define PACKAGE_STRING "GNU C Runtime Library 1.0"
> | #define PACKAGE_BUGREPORT ""
> | #define PACKAGE_URL "http://www.gnu.org/software/libgcc/"
> | /* end confdefs.h.  */
> |
> | int
> | main ()
> | {
> |
> |   ;
> |   return 0;
> | }

I think the test is maybe backwards?

   /* ?!? How much of this should mirror SET handling, potentially
      being shared?   */
   if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))

Andrew

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [V2] New pass for sign/zero extension elimination -- not ready for "final" review
@ 2023-11-28  6:06 Jeff Law
  2023-11-28 15:08 ` Andrew Stubbs
  2023-11-29  9:33 ` Xi Ruoyao
  0 siblings, 2 replies; 22+ messages in thread
From: Jeff Law @ 2023-11-28  6:06 UTC (permalink / raw)
  To: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 4702 bytes --]


I've still got some comments from Richard S to work through, but some 
folks are trying to play with this and thus I want to get the fixes to 
date in their hands.

Changes since V1:

- Fix handling of CALL_INSN_FUNCTION_USAGE so we don't apply PATTERN to 
an EXPR_LIST.

- Various comments and comment fixes based on feedback from Richard S.

- Remove saturating ops from safe_for_live_propagation

- Adjust checks for modes that are too large.  Still not completely fixed.

- Fix computation of size from mode of SUBREG when handling sets.

- Use subreg_lsb rather than an inline variant.

- Remove a redundant CONSTANT_P check.

- Move calculation of inverted_rev_post_order_compute out of loop

- Verify we have a SUBREG before looking at SUBREG_BYTE.


Given I've still got some feedback from Richard that needs work, 
there'll definitely be a V3.  So I wouldn't lose any sleep if this 
didn't get a deep dive from a review standpoint.


--

This is work originally started by Joern @ Embecosm.

There's been a long-standing sense that we're generating too many
sign/zero extensions on the RISC-V port.  REE is useful, but it's really 
focused on a relatively narrow part of the extension problem.

What Joern's patch does is introduce a new pass which tracks liveness of 
chunks of pseudo regs.  Specifically it tracks bits 0..7, 8..15, 16..31 
and 32..63.
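(In the liveness bitmaps each pseudo R therefore occupies four bits,
4*R .. 4*R+3, one bit per chunk.)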

If it encounters a sign/zero extend that sets bits that are never read, 
then it replaces the sign/zero extension with a narrowing subreg.  The 
narrowing subreg usually gets eliminated by subsequent passes (it's just 
a copy after all).
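
As RTL, roughly (a sketch; the pass generates the SUBREG with
simplify_gen_subreg, so the exact form may differ):

   ;; Bits 32..63 of x are never read, so ...
   (set (reg:DI x) (zero_extend:DI (reg:SI y)))
   ;; ... can become a plain copy that later passes propagate away:
   (set (reg:DI x) (subreg:DI (reg:SI y) 0))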

Jivan has done some analysis and found that it eliminates roughly 1% of 
the dynamic instruction stream for x264 as well as some redundant 
extensions in the coremark benchmark (both on rv64).  In my own testing 
as I worked through issues on other architectures I clearly saw it 
helping in various places within GCC itself or in the testsuite.

The basic structure is to first do a fairly standard liveness analysis 
on the chunks, seeding original state with the liveness data from DF. 
Once that's stable, we do a final pass to identify the useless 
extensions and transform them into narrowing subregs.

A few key points to remember.

For destination processing it is always safe to ignore a destination. 
Ignoring a destination merely means that whatever was live after the 
given insn will continue to be live before the insn.  What is not safe 
is to clear a bit in the LIVENOW bitmap for a destination chunk that is 
not set.  This comes into play with things like STRICT_LOW_PART.
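
For instance, a sketch:

   (set (strict_low_part (subreg:HI (reg:SI r) 0)) ...)

only writes bits 0..15 of r; bits 16..31 keep their old value, so
whatever liveness they had must be preserved, not cleared.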

For source processing the safe thing to do is to set all the chunks in a 
register as live.  It is never safe to fail to process a source operand.

When a destination object is not fully live, we try to transfer that 
limited liveness to the source operands.  So for example if bits 16..63 
are dead in a destination of a PLUS, we need not mark bits 16..63 as 
live for the source operands.  We have to be careful -- consider a shift 
count on a target without SHIFT_COUNT_TRUNCATED set.  So we have both a 
list of RTL codes where we can transfer liveness and a few codes where 
one of the operands may need to be fully live (e.g., a shift count) while
the other input may not need to be fully live (value left shifted).
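
A sketch of the shift case:

   (set (reg:DI t) (ashift:DI (reg:DI x) (reg:QI n)))

If only bits 0..15 of t are live, then only bits 0..15 of x need to be
marked live, but without SHIFT_COUNT_TRUNCATED every bit of the count
n must stay live.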

Locally we have had this enabled at -O1 and above to encourage testing, 
but I'm thinking that for the trunk enabling at -O2 and above is the 
right thing to do.

This has (of course) been tested on rv64.  It's also been bootstrapped 
and regression tested on x86.  Bootstrap and regression tested (C only) 
for m68k, sh4, sh4eb, alpha.  Earlier versions were also bootstrapped 
and regression tested on ppc, hppa and s390x (C only for those as well). 
  It's also been tested on the various crosses in my tester.  So we've 
got reasonable coverage of 16, 32 and 64 bit targets, big and little 
endian, with and without SHIFT_COUNT_TRUNCATED and all kinds of other 
oddities.

The included tests are for RISC-V only because not all targets are going 
to have extraneous extensions.   There are tests from coremark, x264 and
GCC's bz database.  It probably wouldn't be hard to add aarch64
testcases.  The BZs listed are improved by this patch for aarch64.

Given the amount of work Jivan and I have done, I'm not comfortable 
self-approving at this time.  I'd much rather have another set of eyes 
on the code.  Hopefully the code is documented well enough for that to
be a useful exercise.

So, no need to work from Pago Pago for this patch.  I may make another 
attempt at the eswin conditional move work while working virtually in 
Pago Pago though.

Thoughts, comments, recommendations?

[-- Attachment #2: P --]
[-- Type: text/plain, Size: 35031 bytes --]

	PR target/95650
	PR rtl-optimization/96031
	PR rtl-optimization/104387
	PR rtl-optimization/111384

gcc/
	* Makefile.in (OBJS): Add ext-dce.o.
	* common.opt (ext-dce): Add new option.
	* df-scan.cc (df_get_exit_block_use_set): No longer static.
	* df.h (df_get_exit_block_use_set): Prototype.
	* ext-dce.cc: New file.
	* passes.def: Add ext-dce before combine.
	* tree-pass.h (make_pass_ext_dce): Prototype.

gcc/testsuite
	* gcc.target/riscv/core_bench_list.c: New test.
	* gcc.target/riscv/core_init_matrix.c: New test.
	* gcc.target/riscv/core_list_init.c: New test.
	* gcc.target/riscv/matrix_add_const.c: New test.
	* gcc.target/riscv/mem-extend.c: New test.
	* gcc.target/riscv/pr111384.c: New test.

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 753f2f36618..af6f1415507 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1451,6 +1451,7 @@ OBJS = \
 	explow.o \
 	expmed.o \
 	expr.o \
+	ext-dce.o \
 	fibonacci_heap.o \
 	file-prefix-map.o \
 	final.o \
diff --git a/gcc/common.opt b/gcc/common.opt
index 736a4653578..1ab622270f9 100644
--- a/gcc/common.opt
+++ b/gcc/common.opt
@@ -3778,4 +3778,8 @@ fipa-ra
 Common Var(flag_ipa_ra) Optimization
 Use caller save register across calls if possible.
 
+fext-dce
+Common Var(flag_ext_dce, 1) Optimization Init(0)
+Perform dead code elimination on zero and sign extensions with special dataflow analysis.
+
 ; This comment is to ensure we retain the blank line above.
diff --git a/gcc/df-scan.cc b/gcc/df-scan.cc
index 934c9ca2d81..93c0ba4e15c 100644
--- a/gcc/df-scan.cc
+++ b/gcc/df-scan.cc
@@ -78,7 +78,6 @@ static void df_get_eh_block_artificial_uses (bitmap);
 
 static void df_record_entry_block_defs (bitmap);
 static void df_record_exit_block_uses (bitmap);
-static void df_get_exit_block_use_set (bitmap);
 static void df_get_entry_block_def_set (bitmap);
 static void df_grow_ref_info (struct df_ref_info *, unsigned int);
 static void df_ref_chain_delete_du_chain (df_ref);
@@ -3642,7 +3641,7 @@ df_epilogue_uses_p (unsigned int regno)
 
 /* Set the bit for regs that are considered being used at the exit. */
 
-static void
+void
 df_get_exit_block_use_set (bitmap exit_block_uses)
 {
   unsigned int i;
diff --git a/gcc/df.h b/gcc/df.h
index 402657a7076..abcbb097734 100644
--- a/gcc/df.h
+++ b/gcc/df.h
@@ -1091,6 +1091,7 @@ extern bool df_epilogue_uses_p (unsigned int);
 extern void df_set_regs_ever_live (unsigned int, bool);
 extern void df_compute_regs_ever_live (bool);
 extern void df_scan_verify (void);
+extern void df_get_exit_block_use_set (bitmap);
 
 \f
 /*----------------------------------------------------------------------------
diff --git a/gcc/ext-dce.cc b/gcc/ext-dce.cc
new file mode 100644
index 00000000000..e5989a282c9
--- /dev/null
+++ b/gcc/ext-dce.cc
@@ -0,0 +1,874 @@
+/* RTL dead zero/sign extension (code) elimination.
+   Copyright (C) 2000-2022 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "rtl.h"
+#include "tree.h"
+#include "memmodel.h"
+#include "insn-config.h"
+#include "emit-rtl.h"
+#include "recog.h"
+#include "cfganal.h"
+#include "tree-pass.h"
+#include "cfgrtl.h"
+#include "rtl-iter.h"
+#include "df.h"
+#include "print-rtl.h"
+
+/* We consider four bit groups for liveness:
+   bit 0..7   (least significant byte)
+   bit 8..15  (second least significant byte)
+   bit 16..31
+   bit 32..BITS_PER_WORD-1  */
+
+/* Note this pass could be used to narrow memory loads too.  It's
+   not clear if that's profitable or not in general.  */
+
+#define UNSPEC_P(X) (GET_CODE (X) == UNSPEC || GET_CODE (X) == UNSPEC_VOLATILE)
+
+/* If we know the destination of CODE only uses some low bits
+   (say just the QI bits of an SI operation), then return true
+   if we can propagate the need for just the subset of bits
+   from the destination to the sources.
+
+   FIXME: This is safe for operands 1 and 2 of an IF_THEN_ELSE, but not
+   operand 0.  Thus it likely would need some special casing to handle.  */
+
+static bool
+safe_for_live_propagation (rtx_code code)
+{
+  /* First handle rtx classes which as a whole are known to
+     be either safe or unsafe.  */
+  switch (GET_RTX_CLASS (code))
+    {
+      case RTX_OBJ:
+	return true;
+
+      case RTX_COMPARE:
+      case RTX_COMM_COMPARE:
+      case RTX_TERNARY:
+	return false;
+
+      default:
+	break;
+    }
+
+  /* What's left are specific codes.  We only need to identify those
+     which are safe.   */
+  switch (code)
+    {
+    /* These are trivially safe.  */
+    case SUBREG:
+    case NOT:
+    case ZERO_EXTEND:
+    case SIGN_EXTEND:
+    case TRUNCATE:
+    case PLUS:
+    case MULT:
+    case SMUL_HIGHPART:
+    case UMUL_HIGHPART:
+    case AND:
+    case IOR:
+    case XOR:
+      return true;
+
+    /* We can propagate for the shifted operand, but not the shift
+       count.  The count is handled specially.  */
+    case SS_ASHIFT:
+    case US_ASHIFT:
+    case ASHIFT:
+      return true;
+
+    /* There may be other safe codes.  If so they can be added
+       individually when discovered.  */
+    default:
+      return false;
+    }
+}
+
+/* Clear bits in LIVENOW and set bits in LIVE_TMP for objects
+   set/clobbered by OBJ contained in INSN.
+
+   Conceptually it is always safe to ignore a particular destination
+   here as that will result in more chunks of data being considered
+   live.  That's what happens when we "continue" the main loop when
+   we see something we don't know how to handle such as a vector
+   mode destination.
+
+   The more accurate we are in identifying what objects (and chunks
+   within an object) are set by INSN, the more aggressive the
+   optimization phase during use handling will be.  */
+
+static void
+ext_dce_process_sets (rtx_insn *insn, rtx obj, bitmap livenow, bitmap live_tmp)
+{
+  subrtx_iterator::array_type array;
+  FOR_EACH_SUBRTX (iter, array, obj, NONCONST)
+    {
+      const_rtx x = *iter;
+
+      /* An EXPR_LIST (from call fusage) ends in NULL_RTX.  */
+      if (x == NULL_RTX)
+	continue;
+
+      if (UNSPEC_P (x))
+	continue;
+
+      if (GET_CODE (x) == SET || GET_CODE (x) == CLOBBER)
+	{
+	  unsigned bit = 0;
+	  x = SET_DEST (x);
+
+	  /* We don't support vector destinations or destinations
+	     wider than DImode.  */
+	  scalar_int_mode outer_mode;
+	  if (!is_a <scalar_int_mode> (GET_MODE (x), &outer_mode)
+	      || GET_MODE_BITSIZE (outer_mode) > 64)
+	    {
+	      /* Skip the subrtxs of this destination.  There is
+		 little value in iterating into the subobjects, so
+		 just skip them for a bit of efficiency.  */
+	      iter.skip_subrtxes ();
+	      continue;
+	    }
+
+	  /* We could have (strict_low_part (subreg ...)).  We can not just
+	     strip the STRICT_LOW_PART as that would result in clearing
+	     some bits in LIVENOW that are still live.  So process the
+	     STRICT_LOW_PART specially.  */
+	  if (GET_CODE (x) == STRICT_LOW_PART)
+	    {
+	      x = XEXP (x, 0);
+
+	      /* The only valid operand of a STRICT_LOW_PART is a
+		 non-paradoxical SUBREG.  */
+	      gcc_assert (SUBREG_P (x)
+			  && !paradoxical_subreg_p (x)
+			  && SUBREG_BYTE (x).is_constant ());
+
+	      /* I think we should always see a REG here.  But let's
+		 be sure.  */
+	      gcc_assert (REG_P (SUBREG_REG (x)));
+
+	      /* The inner mode might be larger, just punt for
+		 that case.  Remember, we can not just continue to process
+		 the inner RTXs due to the STRICT_LOW_PART.  */
+	      if (!is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (x)), &outer_mode)
+		  || GET_MODE_BITSIZE (outer_mode) > 64)
+		{
+		  /* Skip the subrtxs of the STRICT_LOW_PART.  We can't
+		     process them because it'll set objects as no longer
+		     live when they are in fact still live.  */
+		  iter.skip_subrtxes ();
+		  continue;
+		}
+
+	      /* Transfer all the LIVENOW bits for X into LIVE_TMP.  */
+	      HOST_WIDE_INT rn = REGNO (SUBREG_REG (x));
+	      for (HOST_WIDE_INT i = 4 * rn; i < 4 * rn + 4; i++)
+		if (bitmap_bit_p (livenow, i))
+		  bitmap_set_bit (live_tmp, i);
+
+	      /* The mode of the SUBREG tells us how many bits we can
+		 clear.  */
+	      machine_mode mode = GET_MODE (x);
+	      HOST_WIDE_INT size
+		= exact_log2 (GET_MODE_SIZE (mode).to_constant ()) + 1;
+	      bitmap_clear_range (livenow, 4 * rn, size);
+
+	      /* We have fully processed this destination.  */
+	      iter.skip_subrtxes ();
+	      continue;
+	    }
+
+	  /* We can safely strip a paradoxical subreg.  The inner mode will
+	     be narrower than the outer mode.  We'll clear fewer bits in
+	     LIVENOW than we'd like, but that's always safe.  */
+	  if (paradoxical_subreg_p (x))
+	    x = XEXP (x, 0);
+
+	  /* If we have a SUBREG that is too wide, just continue the loop
+	     and let the iterator go down into SUBREG_REG.  */
+	  if (SUBREG_P (x)
+	      && (!is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (x)), &outer_mode)
+		  || GET_MODE_BITSIZE (outer_mode) > 64))
+	    continue;
+
+	  /* Phase one of destination handling.  First remove any wrapper
+	     such as SUBREG or ZERO_EXTRACT.  */
+	  unsigned HOST_WIDE_INT mask = GET_MODE_MASK (GET_MODE (x));
+	  if (SUBREG_P (x)
+	      && !paradoxical_subreg_p (x)
+	      && SUBREG_BYTE (x).is_constant ())
+	    {
+	      bit = subreg_lsb (x).to_constant ();
+	      mask = GET_MODE_MASK (GET_MODE (SUBREG_REG (x))) << bit;
+	      gcc_assert (mask);
+	      if (!mask)
+		mask = -0x100000000ULL;
+	      x = SUBREG_REG (x);
+	    }
+
+	  if (GET_CODE (x) == ZERO_EXTRACT)
+	    {
+	      /* If either the size or the start position is unknown,
+		 then assume we know nothing about what is overwritten.
+		 This is overly conservative, but safe.  */
+	      if (!CONST_INT_P (XEXP (x, 1)) || !CONST_INT_P (XEXP (x, 2)))
+		continue;
+	      mask = (1ULL << INTVAL (XEXP (x, 1))) - 1;
+	      bit = INTVAL (XEXP (x, 2));
+	      if (BITS_BIG_ENDIAN)
+		bit = (GET_MODE_BITSIZE (GET_MODE (x))
+		       - INTVAL (XEXP (x, 1)) - bit).to_constant ();
+	      x = XEXP (x, 0);
+
+	      /* We can certainly get (zero_extract (subreg ...)).  The
+		 mode of the zero_extract and location should be sufficient
+		 and we can just strip the SUBREG.  */
+	      if (GET_CODE (x) == SUBREG)
+		x = SUBREG_REG (x);
+	    }
+
+	  /* BIT >= 64 indicates something went horribly wrong.  */
+	  gcc_assert (bit <= 63);
+
+	  /* Now handle the actual object that was changed.  */
+	  if (REG_P (x))
+	    {
+	      /* Transfer the appropriate bits from LIVENOW into
+		 LIVE_TMP.  */
+	      HOST_WIDE_INT rn = REGNO (x);
+	      for (HOST_WIDE_INT i = 4 * rn; i < 4 * rn + 4; i++)
+		if (bitmap_bit_p (livenow, i))
+		  bitmap_set_bit (live_tmp, i);
+
+	      /* Now clear the bits known written by this instruction.
+		 Note that BIT need not be a power of two, consider a
+		 ZERO_EXTRACT destination.  */
+	      int start = (bit < 8 ? 0 : bit < 16 ? 1 : bit < 32 ? 2 : 3);
+	      int end = ((mask & ~0xffffffffULL) ? 4
+			 : (mask & 0xffff0000ULL) ? 3
+			 : (mask & 0xff00) ? 2 : 1);
+	      bitmap_clear_range (livenow, 4 * rn + start, end - start);
+	    }
+	  /* Some ports generate (clobber (const_int)).  */
+	  else if (CONST_INT_P (x))
+	    continue;
+	  else
+	    gcc_assert (CALL_P (insn)
+			|| MEM_P (x)
+			|| x == pc_rtx
+			|| GET_CODE (x) == SCRATCH);
+
+	  iter.skip_subrtxes ();
+	}
+      else if (GET_CODE (x) == COND_EXEC)
+	{
+	  /* This isn't ideal, but may not be so bad in practice.  */
+	  iter.skip_subrtxes ();
+	}
+    }
+}
+
+/* INSN has a sign/zero extended source inside SET that we will
+   try to turn into a SUBREG.  */
+static void
+ext_dce_try_optimize_insn (rtx_insn *insn, rtx set, bitmap changed_pseudos)
+{
+  rtx src = SET_SRC (set);
+  rtx inner = XEXP (src, 0);
+
+  /* Avoid (subreg (mem)) and other constructs which may be valid RTL, but
+     not useful for this optimization.  */
+  if (!REG_P (inner) && !SUBREG_P (inner))
+    return;
+
+  rtx new_pattern;
+  if (dump_file)
+    {
+      fprintf (dump_file, "Processing insn:\n");
+      dump_insn_slim (dump_file, insn);
+      fprintf (dump_file, "Trying to simplify pattern:\n");
+      print_rtl_single (dump_file, SET_SRC (set));
+    }
+
+  new_pattern = simplify_gen_subreg (GET_MODE (src), inner,
+				     GET_MODE (inner), 0);
+  /* simplify_gen_subreg may fail in which case NEW_PATTERN will be NULL.
+     We must not pass that as a replacement pattern to validate_change.  */
+  if (new_pattern)
+    {
+      int ok = validate_change (insn, &SET_SRC (set), new_pattern, false);
+
+      if (ok)
+	bitmap_set_bit (changed_pseudos, REGNO (SET_DEST (set)));
+
+      if (dump_file)
+	{
+	  if (ok)
+	    fprintf (dump_file, "Successfully transformed to:\n");
+	  else
+	    fprintf (dump_file, "Failed transformation to:\n");
+
+	  print_rtl_single (dump_file, new_pattern);
+	  fprintf (dump_file, "\n");
+	}
+    }
+  else
+    {
+      if (dump_file)
+	fprintf (dump_file, "Unable to generate valid SUBREG expression.\n");
+    }
+}
+
+/* Some operators imply that their second operand is fully live,
+   regardless of how many bits in the output are live.  An example
+   would be the shift count on a target without SHIFT_COUNT_TRUNCATED
+   defined.
+
+   Return TRUE if CODE is such an operator.  FALSE otherwise.  */
+
+static bool
+binop_implies_op2_fully_live (rtx_code code)
+{
+  switch (code)
+    {
+      case ASHIFT:
+      case LSHIFTRT:
+      case ASHIFTRT:
+      case ROTATE:
+      case ROTATERT:
+	return !SHIFT_COUNT_TRUNCATED;
+
+      default:
+	return false;
+    }
+}
+
+/* Process uses in INSN contained in OBJ.  Set appropriate bits in LIVENOW
+   for any chunks of pseudos that become live, potentially filtering using
+   bits from LIVE_TMP.
+
+   If MODIFY is true, then optimize sign/zero extensions to SUBREGs when
+   the extended bits are never read and mark pseudos which had extensions
+   eliminated in CHANGED_PSEUDOS.  */
+
+static void
+ext_dce_process_uses (rtx_insn *insn, rtx obj, bitmap livenow,
+		      bitmap live_tmp, bool modify, bitmap changed_pseudos)
+{
+  /* A nonlocal goto implicitly uses the frame pointer.  */
+  if (JUMP_P (insn) && find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
+    {
+      bitmap_set_range (livenow, FRAME_POINTER_REGNUM * 4, 4);
+      if (!HARD_FRAME_POINTER_IS_FRAME_POINTER)
+	bitmap_set_range (livenow, HARD_FRAME_POINTER_REGNUM * 4, 4);
+    }
+
+  subrtx_var_iterator::array_type array_var;
+  FOR_EACH_SUBRTX_VAR (iter, array_var, obj, NONCONST)
+    {
+      /* An EXPR_LIST (from call fusage) ends in NULL_RTX.  */
+      rtx x = *iter;
+      if (x == NULL_RTX)
+	continue;
+
+      /* So the basic idea in this FOR_EACH_SUBRTX_VAR loop is to
+	 handle SETs explicitly, possibly propagating live information
+	 into the uses.
+
+	 We may continue the loop at various points which will cause
+	 iteration into the next level of RTL.  Breaking from the loop
+	 is never safe as it can lead us to fail to process some of the
+	 RTL and thus not make objects live when necessary.  */
+      rtx_code xcode = GET_CODE (x);
+      if (xcode == SET)
+	{
+	  const_rtx dst = SET_DEST (x);
+	  rtx src = SET_SRC (x);
+	  const_rtx y;
+	  unsigned HOST_WIDE_INT bit = 0;
+
+	  /* The code of the RHS of a SET.  */
+	  rtx_code code = GET_CODE (src);
+
+	  /* ?!? How much of this should mirror SET handling, potentially
+	     being shared?   */
+	  if (SUBREG_BYTE (dst).is_constant () && SUBREG_P (dst))
+	    {
+	      bit = subreg_lsb (dst).to_constant ();
+	      if (bit >= HOST_BITS_PER_WIDE_INT)
+		bit = HOST_BITS_PER_WIDE_INT - 1;
+	      dst = SUBREG_REG (dst);
+	    }
+	  else if (GET_CODE (dst) == ZERO_EXTRACT
+		   || GET_CODE (dst) == STRICT_LOW_PART)
+	    dst = XEXP (dst, 0);
+
+	  /* Main processing of the uses.  Two major goals here.
+
+	     First, we want to try and propagate liveness (or the lack
+	     thereof) from the destination register to the source
+	     register(s).
+
+	     Second, if the source is an extension, try to optimize
+	     it into a SUBREG.  The SUBREG form indicates we don't
+	     care about the upper bits and will usually be copy
+	     propagated away.
+
+	     If we fail to handle something in here, the expectation
+	     is the iterator will dive into the sub-components and
+	     mark all the chunks in any found REGs as live.  */
+	  if (REG_P (dst) && safe_for_live_propagation (code))
+	    {
+	      /* Create a mask representing the bits of this output
+		 operand that are live after this insn.  We can use
+		 this information to refine the live in state of
+		 inputs to this insn in many cases.
+
+		 We have to do this on a per SET basis, we might have
+		 an INSN with multiple SETS, some of which can narrow
+		 the source operand liveness, some of which may not.  */
+	      unsigned HOST_WIDE_INT dst_mask = 0;
+	      HOST_WIDE_INT rn = REGNO (dst);
+	      unsigned HOST_WIDE_INT mask_array[]
+		= { 0xff, 0xff00, 0xffff0000ULL, -0x100000000ULL };
+	      for (int i = 0; i < 4; i++)
+		if (bitmap_bit_p (live_tmp, 4 * rn + i))
+		  dst_mask |= mask_array[i];
+	      dst_mask >>= bit;
+
+	      /* ??? Could also handle ZERO_EXTRACT / SIGN_EXTRACT
+		 of the source specially to improve optimization.  */
+	      if (code == SIGN_EXTEND || code == ZERO_EXTEND)
+		{
+		  rtx inner = XEXP (src, 0);
+		  unsigned HOST_WIDE_INT src_mask
+		    = GET_MODE_MASK (GET_MODE (inner));
+
+		  /* DST_MASK could be zero if we had something in the SET
+		     that we couldn't handle.  */
+		  if (modify && dst_mask && (dst_mask & ~src_mask) == 0)
+		    ext_dce_try_optimize_insn (insn, x, changed_pseudos);
+
+		  dst_mask &= src_mask;
+		  src = XEXP (src, 0);
+		  code = GET_CODE (src);
+		}
+
+	      /* Optimization is done at this point.  We just want to make
+		 sure everything that should get marked as live is marked
+		 from here onward.  */
+
+	      /* ?!? What is the point of this adjustment to DST_MASK?  */
+	      if (code == PLUS || code == MINUS
+		  || code == MULT || code == ASHIFT)
+		dst_mask
+		  = dst_mask ? ((2ULL << floor_log2 (dst_mask)) - 1) : 0;
+
+	      /* We will handle the other operand of a binary operator
+		 at the bottom of the loop by resetting Y.  */
+	      if (BINARY_P (src))
+		y = XEXP (src, 0);
+	      else
+		y = src;
+
+	      /* We're inside a SET and want to process the source operands
+		 making things live.  Breaking from this loop will cause
+		 the iterator to work on sub-rtxs, so it is safe to break
+		 if we see something we don't know how to handle.  */
+	      for (;;)
+		{
+		  /* Strip an outer STRICT_LOW_PART or paradoxical subreg.
+		     That has the effect of making the whole referenced
+		     register live.  We might be able to avoid that for
+		     STRICT_LOW_PART at some point.  */
+		  /* XXX This all looks wrong.  Note the STRICT_LOW_PART.
+		     If we're processing a dest, then why look at Y in the
+		     else clause.  If processing a src, then STRICT_LOW_PART
+		     shouldn't happen.  */
+		  if (GET_CODE (x) == STRICT_LOW_PART
+		      || paradoxical_subreg_p (x))
+		    x = XEXP (x, 0);
+		  else if (SUBREG_P (y) && SUBREG_BYTE (y).is_constant ())
+		    {
+		      /* For anything but (subreg (reg)), break the inner loop
+			 and process normally (conservatively).  */
+		      if (!REG_P (SUBREG_REG (y)))
+			break;
+		      bit = subreg_lsb (y).to_constant ();
+		      if (dst_mask)
+			{
+			  dst_mask <<= bit;
+			  if (!dst_mask)
+			    dst_mask = -0x100000000ULL;
+			}
+		      y = SUBREG_REG (y);
+		    }
+
+		  if (REG_P (y))
+		    {
+		      /* We have found the use of a register.  We need to mark
+			 the appropriate chunks of the register live.  The mode
+			 of the REG is a starting point.  We may refine that
+			 based on what chunks in the output were live.  */
+		      rn = 4 * REGNO (y);
+		      unsigned HOST_WIDE_INT tmp_mask = dst_mask;
+
+		      /* If the RTX code for the SET_SRC is not one we can
+			 propagate destination liveness through, then just
+			 set the mask to the mode's mask.  */
+		      if (!safe_for_live_propagation (code))
+			tmp_mask = GET_MODE_MASK (GET_MODE (y));
+
+		      if (tmp_mask & 0xff)
+			bitmap_set_bit (livenow, rn);
+		      if (tmp_mask & 0xff00)
+			bitmap_set_bit (livenow, rn + 1);
+		      if (tmp_mask & 0xffff0000ULL)
+			bitmap_set_bit (livenow, rn + 2);
+		      if (tmp_mask & -0x100000000ULL)
+			bitmap_set_bit (livenow, rn + 3);
+
+		      /* Some operators imply their second operand
+			 is fully live; break this inner loop, which
+			 will cause the iterator to descend into the
+			 sub-rtxs outside the SET processing.  */
+		      if (binop_implies_op2_fully_live (code))
+			break;
+		    }
+		  else if (!CONSTANT_P (y))
+		    break;
+		  /* We might have (ashift (const_int 1) (reg...)) */
+		  /* XXX share this logic with code below.  */
+		  else if (binop_implies_op2_fully_live (GET_CODE (src)))
+		    break;
+
+		  /* If this was anything but a binary operand, break the inner
+		     loop.  This is conservatively correct as it will cause the
+		     iterator to look at the sub-rtxs outside the SET context.  */
+		  if (!BINARY_P (src))
+		    break;
+
+		  /* We processed the first operand of a binary operator.  Now
+		     handle the second.  */
+		  y = XEXP (src, 1), src = pc_rtx;
+		}
+
+	      /* These are leaf nodes, no need to iterate down into them.  */
+	      if (REG_P (y) || CONSTANT_P (y))
+		iter.skip_subrtxes ();
+	    }
+	}
+      /* If we are reading the low part of a SUBREG, then we can
+	 refine liveness of the input register, otherwise let the
+	 iterator continue into SUBREG_REG.  */
+      else if (xcode == SUBREG
+	       && REG_P (SUBREG_REG (x))
+	       && subreg_lowpart_p (x)
+	       && GET_MODE_BITSIZE (GET_MODE (x)).is_constant ()
+	       && GET_MODE_BITSIZE (GET_MODE (x)).to_constant () <= 32)
+	{
+	  HOST_WIDE_INT size = GET_MODE_BITSIZE (GET_MODE  (x)).to_constant ();
+	  HOST_WIDE_INT rn = 4 * REGNO (SUBREG_REG (x));
+
+	  bitmap_set_bit (livenow, rn);
+	  if (size > 8)
+	    bitmap_set_bit (livenow, rn + 1);
+	  if (size > 16)
+	    bitmap_set_bit (livenow, rn + 2);
+	  if (size > 32)
+	    bitmap_set_bit (livenow, rn + 3);
+	  iter.skip_subrtxes ();
+	}
+      /* If we have a register reference that is not otherwise handled,
+	 just assume all the chunks are live.  */
+      else if (REG_P (x))
+	bitmap_set_range (livenow, REGNO (x) * 4, 4);
+    }
+}
+
+/* Process a single basic block BB with current liveness information
+   in LIVENOW, returning updated liveness information.
+
+   If MODIFY is true, then this is the last pass and unnecessary
+   extensions should be eliminated when possible.  If an extension
+   is removed, the source pseudo is marked in CHANGED_PSEUDOS.  */
+
+static bitmap
+ext_dce_process_bb (basic_block bb, bitmap livenow,
+		    bool modify, bitmap changed_pseudos)
+{
+  rtx_insn *insn;
+
+  FOR_BB_INSNS_REVERSE (bb, insn)
+    {
+      if (!NONDEBUG_INSN_P (insn))
+	continue;
+
+      /* Live-out state of the destination of this insn.  We can
+	 use this to refine the live-in state of the sources of
+	 this insn in many cases.  */
+      bitmap live_tmp = BITMAP_ALLOC (NULL);
+
+      /* First process any sets/clobbers in INSN.  */
+      ext_dce_process_sets (insn, PATTERN (insn), livenow, live_tmp);
+
+      /* CALL_INSNs need processing their fusage data.  */
+      if (GET_CODE (insn) == CALL_INSN)
+	ext_dce_process_sets (insn, CALL_INSN_FUNCTION_USAGE (insn),
+			      livenow, live_tmp);
+
+      /* And now uses, optimizing away SIGN/ZERO extensions as we go.  */
+      ext_dce_process_uses (insn, PATTERN (insn), livenow, live_tmp,
+			    modify, changed_pseudos);
+
+      /* And process fusage data for the use as well.  */
+      if (GET_CODE (insn) == CALL_INSN)
+	{
+	  if (!FAKE_CALL_P (insn))
+	    bitmap_set_range (livenow, STACK_POINTER_REGNUM * 4, 4);
+
+	  /* If this is not a call to a const function, then assume it
+	     can read any global register.  */
+	  if (!RTL_CONST_CALL_P (insn))
+	    for (unsigned i = 0; i < FIRST_PSEUDO_REGISTER; i++)
+	      if (global_regs[i])
+		bitmap_set_range (livenow, i * 4, 4);
+
+	  ext_dce_process_uses (insn, CALL_INSN_FUNCTION_USAGE (insn),
+				livenow, live_tmp, modify, changed_pseudos);
+	}
+
+      BITMAP_FREE (live_tmp);
+    }
+  return livenow;
+}
+
+/* We optimize away sign/zero extensions in this pass and replace
+   them with SUBREGs indicating certain bits are don't cares.
+
+   This changes the SUBREG_PROMOTED_VAR_P state of the object.
+   It is fairly painful to fix this on the fly, so we have
+   recorded which pseudos are affected and we look for SUBREGs
+   of those pseudos and fix them up.  */
+
+static void
+reset_subreg_promoted_p (bitmap changed_pseudos)
+{
+  /* If we removed an extension, that changed the promoted state
+     of the destination of that extension.  Thus we need to go
+     find any SUBREGs that reference that pseudo and adjust their
+     SUBREG_PROMOTED_P state.  */
+  for (rtx_insn *insn = get_insns(); insn; insn = NEXT_INSN (insn))
+    {
+      if (!NONDEBUG_INSN_P (insn))
+	continue;
+
+      rtx pat = PATTERN (insn);
+      subrtx_var_iterator::array_type array;
+      FOR_EACH_SUBRTX_VAR (iter, array, pat, NONCONST)
+	{
+	  rtx sub = *iter;
+
+	  /* We only care about SUBREGs.  */
+	  if (GET_CODE (sub) != SUBREG)
+	    continue;
+
+	  const_rtx x = SUBREG_REG (sub);
+
+	  /* We only care if the inner object is a REG.  */
+	  if (!REG_P (x))
+	    continue;
+
+	  /* And only if the SUBREG is a promoted var.  */
+	  if (!SUBREG_PROMOTED_VAR_P (sub))
+	    continue;
+
+	  if (bitmap_bit_p (changed_pseudos, REGNO (x)))
+	    SUBREG_PROMOTED_VAR_P (sub) = 0;
+	}
+    }
+}
+
+/* Use lifetime analysis to identify extensions that set bits that
+   are never read.  Turn such extensions into SUBREGs instead which
+   can often be propagated away.  */
+
+static void
+ext_dce (void)
+{
+  basic_block bb, *worklist, *qin, *qout, *qend;
+  unsigned int qlen;
+  vec<bitmap_head> livein;
+  bitmap livenow;
+  bitmap changed_pseudos;
+
+  livein.create (last_basic_block_for_fn (cfun));
+  livein.quick_grow_cleared (last_basic_block_for_fn (cfun));
+  for (int i = 0; i < last_basic_block_for_fn (cfun); i++)
+    bitmap_initialize (&livein[i], &bitmap_default_obstack);
+
+  auto_bitmap refs (&bitmap_default_obstack);
+  df_get_exit_block_use_set (refs);
+
+  unsigned i;
+  bitmap_iterator bi;
+  EXECUTE_IF_SET_IN_BITMAP (refs, 0, i, bi)
+    {
+      for (int j = 0; j < 4; j++)
+	bitmap_set_bit (&livein[EXIT_BLOCK], i * 4 + j);
+    }
+
+  livenow = BITMAP_ALLOC (NULL);
+  changed_pseudos = BITMAP_ALLOC (NULL);
+
+  worklist
+    = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
+
+  int modify = 0;
+
+  int *rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
+  int n = inverted_rev_post_order_compute (cfun, rpo);
+  do
+    {
+      qin = qout = worklist;
+
+      /* Put every block on the worklist.  */
+      for (int i = 0; i < n; ++i)
+	{
+	  bb = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
+	  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
+	      || bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
+	    continue;
+	  *qin++ = bb;
+	  bb->aux = bb;
+	}
+
+      qin = worklist;
+      qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
+      qlen = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
+
+      /* Iterate until the worklist is empty.  */
+      while (qlen)
+	{
+	  /* Take the first entry off the worklist.  */
+	  bb = *qout++;
+	  qlen--;
+
+	  if (qout >= qend)
+	    qout = worklist;
+
+	  /* Clear the aux field of this block so that it can be added to
+	     the worklist again if necessary.  */
+	  bb->aux = NULL;
+
+	  bitmap_clear (livenow);
+	  /* Make everything live that's live in the successors.  */
+	  edge_iterator ei;
+	  edge e;
+
+	  FOR_EACH_EDGE (e, ei, bb->succs)
+	    bitmap_ior_into (livenow, &livein[e->dest->index]);
+
+	  livenow = ext_dce_process_bb (bb, livenow,
+					modify > 0, changed_pseudos);
+
+	  if (!bitmap_equal_p (&livein[bb->index], livenow))
+	    {
+	      gcc_assert (!modify);
+	      bitmap tmp = BITMAP_ALLOC (NULL);
+	      gcc_assert (!bitmap_and_compl (tmp, &livein[bb->index], livenow));
+
+	      bitmap_copy (&livein[bb->index], livenow);
+
+	      edge_iterator ei;
+	      edge e;
+
+	      FOR_EACH_EDGE (e, ei, bb->preds)
+		if (!e->src->aux && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
+		  {
+		    *qin++ = e->src;
+		    e->src->aux = e;
+		    qlen++;
+		    if (qin >= qend)
+		      qin = worklist;
+		  }
+	    }
+	}
+    } while (!modify++);
+
+
+  reset_subreg_promoted_p (changed_pseudos);
+
+  /* Clean up.  */
+  free (rpo);
+  BITMAP_FREE (changed_pseudos);
+  BITMAP_FREE (livenow);
+  unsigned len = livein.length ();
+  for (unsigned i = 0; i < len; i++)
+    bitmap_clear (&livein[i]);
+  livein.release ();
+  clear_aux_for_blocks ();
+  free (worklist);
+}
+
+namespace {
+
+const pass_data pass_data_ext_dce =
+{
+  RTL_PASS, /* type */
+  "ext_dce", /* name */
+  OPTGROUP_NONE, /* optinfo_flags */
+  TV_NONE, /* tv_id */
+  PROP_cfglayout, /* properties_required */
+  0, /* properties_provided */
+  0, /* properties_destroyed */
+  0, /* todo_flags_start */
+  TODO_df_finish, /* todo_flags_finish */
+};
+
+class pass_ext_dce : public rtl_opt_pass
+{
+public:
+  pass_ext_dce (gcc::context *ctxt)
+    : rtl_opt_pass (pass_data_ext_dce, ctxt)
+  {}
+
+  /* opt_pass methods: */
+  virtual bool gate (function *) { return optimize > 0; }
+  virtual unsigned int execute (function *)
+    {
+      ext_dce ();
+      return 0;
+    }
+
+}; // class pass_ext_dce
+
+} // anon namespace
+
+rtl_opt_pass *
+make_pass_ext_dce (gcc::context *ctxt)
+{
+  return new pass_ext_dce (ctxt);
+}
diff --git a/gcc/passes.def b/gcc/passes.def
index 1e1950bdb39..c075c70d42c 100644
--- a/gcc/passes.def
+++ b/gcc/passes.def
@@ -487,6 +487,7 @@ along with GCC; see the file COPYING3.  If not see
       NEXT_PASS (pass_inc_dec);
       NEXT_PASS (pass_initialize_regs);
       NEXT_PASS (pass_ud_rtl_dce);
+      NEXT_PASS (pass_ext_dce);
       NEXT_PASS (pass_combine);
       NEXT_PASS (pass_if_after_combine);
       NEXT_PASS (pass_jump_after_combine);
diff --git a/gcc/testsuite/gcc.target/riscv/core_bench_list.c b/gcc/testsuite/gcc.target/riscv/core_bench_list.c
new file mode 100644
index 00000000000..957e9c841ed
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/core_bench_list.c
@@ -0,0 +1,15 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+short
+core_bench_list (int N) {
+
+  short a = 0;
+  for (int i = 0; i < 4; i++) {
+    if (i > N) {
+      a++;
+    }
+  }
+  return a * 4;
+}
diff --git a/gcc/testsuite/gcc.target/riscv/core_init_matrix.c b/gcc/testsuite/gcc.target/riscv/core_init_matrix.c
new file mode 100644
index 00000000000..9289244c71f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/core_init_matrix.c
@@ -0,0 +1,17 @@
+/* { dg-do compile } */
+/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+void
+core_init_matrix(short* A, short* B, int seed) {
+  int  order = 1;
+
+  for (int i = 0; i < seed; i++) {
+    for (int j = 0; j < seed; j++) {
+      short val = seed + order;
+      B[i] = val;
+      A[i] = val;
+      order++;
+    }
+  }
+}
diff --git a/gcc/testsuite/gcc.target/riscv/core_list_init.c b/gcc/testsuite/gcc.target/riscv/core_list_init.c
new file mode 100644
index 00000000000..2f36dae85aa
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/core_list_init.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+unsigned short
+core_list_init (int size, short seed) {
+
+  for (int i = 0; i < size; i++) {
+    unsigned short datpat = ((unsigned short)(seed ^ i) & 0xf);
+    unsigned short dat = (datpat << 3) | (i & 0x7);
+    if (i > seed) {
+      return dat;
+    }
+  }
+
+  return 0;
+
+}
diff --git a/gcc/testsuite/gcc.target/riscv/matrix_add_const.c b/gcc/testsuite/gcc.target/riscv/matrix_add_const.c
new file mode 100644
index 00000000000..9a2dd53b17a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/matrix_add_const.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+void
+matrix_add_const(int N, short *A, short val)
+{
+    for (int j = 0; j < N; j++) {
+      A[j] += val;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/riscv/mem-extend.c b/gcc/testsuite/gcc.target/riscv/mem-extend.c
new file mode 100644
index 00000000000..c67f12dfc35
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/mem-extend.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gc_zbb" } */
+/* { dg-skip-if "" { *-*-* } { "-O0" } } */
+
+void
+foo(short *d, short *tmp) {
+    int x = d[0] + d[1];
+    int y = d[2] + d[3];
+    tmp[0] = x + y;
+    tmp[1] = x - y;
+}
+
+/* { dg-final { scan-assembler-not {\mzext\.h\M} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/pr111384.c b/gcc/testsuite/gcc.target/riscv/pr111384.c
new file mode 100644
index 00000000000..a4e77d4aeb6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/pr111384.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-O1 -fdump-rtl-ext_dce" } */
+/* { dg-final { scan-rtl-dump {Successfully transformed} "ext_dce" } } */
+
+void
+foo(unsigned int src, unsigned short *dst1, unsigned short *dst2)
+{
+    *dst1 = src;
+    *dst2 = src;
+}
+
diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
index 09e6ada5b2f..773301d731f 100644
--- a/gcc/tree-pass.h
+++ b/gcc/tree-pass.h
@@ -591,6 +591,7 @@ extern rtl_opt_pass *make_pass_reginfo_init (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_inc_dec (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_stack_ptr_mod (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_initialize_regs (gcc::context *ctxt);
+extern rtl_opt_pass *make_pass_ext_dce (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_combine (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_if_after_combine (gcc::context *ctxt);
 extern rtl_opt_pass *make_pass_jump_after_combine (gcc::context *ctxt);

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2023-12-12 17:19 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-29 19:57 [V2] New pass for sign/zero extension elimination -- not ready for "final" review Joern Rennecke
2023-11-29 20:05 ` Joern Rennecke
2023-11-30  2:39   ` Joern Rennecke
2023-11-30  4:10     ` Joern Rennecke
     [not found]       ` <734a2733-b55c-4b0e-92c0-21f0b9fb41a7@gmail.com>
2023-11-30 21:32         ` Jeff Law
2023-12-12 17:18       ` Jeff Law
2023-11-30 15:46     ` Jeff Law
2023-11-30 17:53     ` Jeff Law
2023-11-30 18:31       ` Joern Rennecke
2023-11-30 19:15         ` Jeff Law
  -- strict thread matches above, loose matches on Subject: below --
2023-11-28  6:06 Jeff Law
2023-11-28 15:08 ` Andrew Stubbs
2023-11-28 22:18   ` Jivan Hakobyan
2023-11-28 23:26     ` Jeff Law
2023-11-29  9:28       ` Andrew Stubbs
2023-11-29 10:25     ` Richard Sandiford
2023-11-29  9:33 ` Xi Ruoyao
2023-11-29 12:37   ` Xi Ruoyao
2023-11-29 15:46     ` Xi Ruoyao
2023-11-29 19:19       ` Jivan Hakobyan
2023-11-30 15:44   ` Jeff Law
2023-12-01  2:05     ` Xi Ruoyao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).