public inbox for gcc-patches@gcc.gnu.org
* [PATCH][AArch64] Properly cost zero_extend+ashift forms of ubfi[xz]
@ 2015-12-04  9:30 Kyrill Tkachov
  2015-12-11  9:53 ` Kyrill Tkachov
  2015-12-16 14:59 ` James Greenhalgh
  0 siblings, 2 replies; 3+ messages in thread
From: Kyrill Tkachov @ 2015-12-04  9:30 UTC (permalink / raw)
  To: GCC Patches; +Cc: Marcus Shawcroft, Richard Earnshaw, James Greenhalgh

[-- Attachment #1: Type: text/plain, Size: 862 bytes --]

Hi all,

We don't properly handle the patterns for the [us]bfiz and [us]bfx instructions when they
have an extend+ashift form, for example the *<ANY_EXTEND:optab><GPI:mode>_ashl<SHORT:mode> pattern.
This leads to the rtx costs recursing into the extend and assigning these patterns a cost that is
too large.
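
For reference, that pattern matches RTL of roughly this shape (schematic, showing the
zero_extend/QImode case; the sign_extend and HImode variants are analogous):

  (set (reg:DI x0)
       (zero_extend:DI (ashift:QI (reg:QI w1)
                                  (const_int 3))))

which is emitted as a single UBFIZ, so it should be costed as one bfx-type ALU operation
rather than as an extend plus a shift.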

This patch fixes that oversight.
I stumbled across this when working on a different combine patch and ended up matching the above
pattern, only to have it rejected for -mcpu=cortex-a53 due to the erroneous cost.
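
As an illustration (not from the patch or its testsuite, and the function name is made up),
something like the following should exercise the zero_extend+ashift form:

  /* The narrow shift is kept in 8 bits and then widened, so combine
     should form (zero_extend:DI (ashift:QI ...)) and we should get a
     single ubfiz (something like "ubfiz x0, x0, 3, 5") instead of an
     extend plus a shift.  */
  unsigned long long
  shift_extend (unsigned char x)
  {
    return (unsigned char) (x << 3);
  }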

Bootstrapped and tested on aarch64.

Ok for trunk?

Thanks,
Kyrill

2015-12-04  Kyrylo Tkachov  <kyrylo.tkachov@arm.com>

     * config/aarch64/aarch64.c (aarch64_extend_bitfield_pattern_p):
     New function.
     (aarch64_rtx_costs, ZERO_EXTEND, SIGN_EXTEND cases): Use the above
     to handle extend+shift rtxes.

[-- Attachment #2: aarch64-extend-bfiz.patch --]
[-- Type: text/x-patch; name=aarch64-extend-bfiz.patch, Size: 2489 bytes --]

diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index c97ecdc0859e0a24792a57aeb18b2e4ea35918f4..d180f6f2d37a280ad77f34caad8496ddaa6e01b2 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -5833,6 +5833,50 @@ aarch64_if_then_else_costs (rtx op0, rtx op1, rtx op2, int *cost, bool speed)
   return false;
 }
 
+/* Check whether X is a bitfield operation of the form shift + extend that
+   maps down to a UBFIZ/SBFIZ/UBFX/SBFX instruction.  If so, return the
+   operand to which the bitfield operation is applied to.  Otherwise return
+   NULL_RTX.  */
+
+static rtx
+aarch64_extend_bitfield_pattern_p (rtx x)
+{
+  rtx_code outer_code = GET_CODE (x);
+  machine_mode outer_mode = GET_MODE (x);
+
+  if (outer_code != ZERO_EXTEND && outer_code != SIGN_EXTEND
+      && outer_mode != SImode && outer_mode != DImode)
+    return NULL_RTX;
+
+  rtx inner = XEXP (x, 0);
+  rtx_code inner_code = GET_CODE (inner);
+  machine_mode inner_mode = GET_MODE (inner);
+  rtx op = NULL_RTX;
+
+  switch (inner_code)
+    {
+      case ASHIFT:
+	if (CONST_INT_P (XEXP (inner, 1))
+	    && (inner_mode == QImode || inner_mode == HImode))
+	  op = XEXP (inner, 0);
+	break;
+      case LSHIFTRT:
+	if (outer_code == ZERO_EXTEND && CONST_INT_P (XEXP (inner, 1))
+	    && (inner_mode == QImode || inner_mode == HImode))
+	  op = XEXP (inner, 0);
+	break;
+      case ASHIFTRT:
+	if (outer_code == SIGN_EXTEND && CONST_INT_P (XEXP (inner, 1))
+	    && (inner_mode == QImode || inner_mode == HImode))
+	  op = XEXP (inner, 0);
+	break;
+      default:
+	break;
+    }
+
+  return op;
+}
+
 /* Calculate the cost of calculating X, storing it in *COST.  Result
    is true if the total cost of the operation has now been calculated.  */
 static bool
@@ -6521,6 +6565,14 @@ cost_plus:
 	  return true;
 	}
 
+      op0 = aarch64_extend_bitfield_pattern_p (x);
+      if (op0)
+	{
+	  *cost += rtx_cost (op0, mode, ZERO_EXTEND, 0, speed);
+	  if (speed)
+	    *cost += extra_cost->alu.bfx;
+	  return true;
+	}
       if (speed)
 	{
 	  if (VECTOR_MODE_P (mode))
@@ -6552,6 +6604,14 @@ cost_plus:
 	  return true;
 	}
 
+      op0 = aarch64_extend_bitfield_pattern_p (x);
+      if (op0)
+	{
+	  *cost += rtx_cost (op0, mode, SIGN_EXTEND, 0, speed);
+	  if (speed)
+	    *cost += extra_cost->alu.bfx;
+	  return true;
+	}
       if (speed)
 	{
 	  if (VECTOR_MODE_P (mode))


* Re: [PATCH][AArch64] Properly cost zero_extend+ashift forms of ubfi[xz]
  2015-12-04  9:30 [PATCH][AArch64] Properly cost zero_extend+ashift forms of ubfi[xz] Kyrill Tkachov
@ 2015-12-11  9:53 ` Kyrill Tkachov
  2015-12-16 14:59 ` James Greenhalgh
  1 sibling, 0 replies; 3+ messages in thread
From: Kyrill Tkachov @ 2015-12-11  9:53 UTC (permalink / raw)
  To: GCC Patches; +Cc: Marcus Shawcroft, Richard Earnshaw, James Greenhalgh

Ping.
https://gcc.gnu.org/ml/gcc-patches/2015-12/msg00526.html

Thanks,
Kyrill

On 04/12/15 09:30, Kyrill Tkachov wrote:
> Hi all,
>
> We don't properly handle the patterns for the [us]bfiz and [us]bfx instructions when they
> have an extend+ashift form, for example the *<ANY_EXTEND:optab><GPI:mode>_ashl<SHORT:mode> pattern.
> This leads to the rtx costs recursing into the extend and assigning these patterns a cost that is
> too large.
>
> This patch fixes that oversight.
> I stumbled across this when working on a different combine patch and ended up matching the above
> pattern, only to have it rejected for -mcpu=cortex-a53 due to the erroneous cost.
>
> Bootstrapped and tested on aarch64.
>
> Ok for trunk?
>
> Thanks,
> Kyrill
>
> 2015-12-04  Kyrylo Tkachov  <kyrylo.tkachov@arm.com>
>
>     * config/aarch64/aarch64.c (aarch64_extend_bitfield_pattern_p):
>     New function.
>     (aarch64_rtx_costs, ZERO_EXTEND, SIGN_EXTEND cases): Use the above
>     to handle extend+shift rtxes.


* Re: [PATCH][AArch64] Properly cost zero_extend+ashift forms of ubfi[xz]
  2015-12-04  9:30 [PATCH][AArch64] Properly cost zero_extend+ashift forms of ubfi[xz] Kyrill Tkachov
  2015-12-11  9:53 ` Kyrill Tkachov
@ 2015-12-16 14:59 ` James Greenhalgh
  1 sibling, 0 replies; 3+ messages in thread
From: James Greenhalgh @ 2015-12-16 14:59 UTC (permalink / raw)
  To: Kyrill Tkachov; +Cc: GCC Patches, Marcus Shawcroft, Richard Earnshaw

On Fri, Dec 04, 2015 at 09:30:45AM +0000, Kyrill Tkachov wrote:
> Hi all,
> 
> We don't properly handle the patterns for the [us]bfiz and [us]bfx instructions when they
> have an extend+ashift form, for example the *<ANY_EXTEND:optab><GPI:mode>_ashl<SHORT:mode> pattern.
> This leads to the rtx costs recursing into the extend and assigning these patterns a cost that is
> too large.
> 
> This patch fixes that oversight.
> I stumbled across this when working on a different combine patch and ended up matching the above
> pattern, only to have it rejected for -mcpu=cortex-a53 due to the erroneous cost.
> 
> Bootstrapped and tested on aarch64.
> 
> Ok for trunk?
> 
> Thanks,
> Kyrill
> 
> 2015-12-04  Kyrylo Tkachov  <kyrylo.tkachov@arm.com>
> 
>     * config/aarch64/aarch64.c (aarch64_extend_bitfield_pattern_p):
>     New function.
>     (aarch64_rtx_costs, ZERO_EXTEND, SIGN_EXTEND cases): Use the above
>     to handle extend+shift rtxes.

> diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
> index c97ecdc0859e0a24792a57aeb18b2e4ea35918f4..d180f6f2d37a280ad77f34caad8496ddaa6e01b2 100644
> --- a/gcc/config/aarch64/aarch64.c
> +++ b/gcc/config/aarch64/aarch64.c
> @@ -5833,6 +5833,50 @@ aarch64_if_then_else_costs (rtx op0, rtx op1, rtx op2, int *cost, bool speed)
>    return false;
>  }
>  
> +/* Check whether X is a bitfield operation of the form shift + extend that
> +   maps down to a UBFIZ/SBFIZ/UBFX/SBFX instruction.  If so, return the
> +   operand to which the bitfield operation is applied to.  Otherwise return

No need for that second "to" at the end of the sentence.

> +   NULL_RTX.  */
> +
> +static rtx
> +aarch64_extend_bitfield_pattern_p (rtx x)
> +{
> +  rtx_code outer_code = GET_CODE (x);
> +  machine_mode outer_mode = GET_MODE (x);
> +
> +  if (outer_code != ZERO_EXTEND && outer_code != SIGN_EXTEND
> +      && outer_mode != SImode && outer_mode != DImode)
> +    return NULL_RTX;
> +
> +  rtx inner = XEXP (x, 0);
> +  rtx_code inner_code = GET_CODE (inner);
> +  machine_mode inner_mode = GET_MODE (inner);
> +  rtx op = NULL_RTX;
> +
> +  switch (inner_code)
> +    {
> +      case ASHIFT:
> +	if (CONST_INT_P (XEXP (inner, 1))
> +	    && (inner_mode == QImode || inner_mode == HImode))
> +	  op = XEXP (inner, 0);
> +	break;
> +      case LSHIFTRT:
> +	if (outer_code == ZERO_EXTEND && CONST_INT_P (XEXP (inner, 1))
> +	    && (inner_mode == QImode || inner_mode == HImode))
> +	  op = XEXP (inner, 0);
> +	break;
> +      case ASHIFTRT:
> +	if (outer_code == SIGN_EXTEND && CONST_INT_P (XEXP (inner, 1))
> +	    && (inner_mode == QImode || inner_mode == HImode))
> +	  op = XEXP (inner, 0);
> +	break;
> +      default:
> +	break;
> +    }
> +
> +  return op;
> +}
> +
>  /* Calculate the cost of calculating X, storing it in *COST.  Result
>     is true if the total cost of the operation has now been calculated.  */
>  static bool
> @@ -6521,6 +6565,14 @@ cost_plus:
>  	  return true;
>  	}
>  
> +      op0 = aarch64_extend_bitfield_pattern_p (x);
> +      if (op0)
> +	{
> +	  *cost += rtx_cost (op0, mode, ZERO_EXTEND, 0, speed);
> +	  if (speed)
> +	    *cost += extra_cost->alu.bfx;
> +	  return true;
> +	}

Newline here.

>        if (speed)
>  	{
>  	  if (VECTOR_MODE_P (mode))
> @@ -6552,6 +6604,14 @@ cost_plus:
>  	  return true;
>  	}
>  
> +      op0 = aarch64_extend_bitfield_pattern_p (x);
> +      if (op0)
> +	{
> +	  *cost += rtx_cost (op0, mode, SIGN_EXTEND, 0, speed);
> +	  if (speed)
> +	    *cost += extra_cost->alu.bfx;
> +	  return true;
> +	}

And here.

>        if (speed)
>  	{
>  	  if (VECTOR_MODE_P (mode))

OK with those changes.

Thanks,
James


