public inbox for gcc-patches@gcc.gnu.org
* [PATCH] vect: Try folding first for shifted value generation [PR107240]
@ 2022-10-19  3:18 Kewen.Lin
  2022-10-19  7:43 ` Richard Biener
  0 siblings, 1 reply; 3+ messages in thread
From: Kewen.Lin @ 2022-10-19  3:18 UTC (permalink / raw)
  To: GCC Patches
  Cc: Richard Biener, Andre Vieira (lists),
	Segher Boessenkool, Richard Sandiford

Hi,

As PR107240 shows, when both the value to be shifted and the
shift count are constants, we don't actually need the target to
support vector shift operations.

This patch tries fold_build2 first when generating the shifted
value: if the expression folds, the shift statement is gone;
otherwise the generated code is the same as before.
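
For example (made-up constants, just for illustration), when both
operands are constants the fold gives back an INTEGER_CST directly
and no shift statement needs to be emitted at all:

  tree val = build_int_cst (container_type, 5);
  tree cnt = build_int_cst (container_type, 3);
  tree res = fold_build2 (LSHIFT_EXPR, container_type, val, cnt);
  /* res is the INTEGER_CST 40, not a LSHIFT_EXPR tree.  */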

This makes the failures of vect-bitfield-write-{2,3}.c on Power
go away.

Bootstrapped and regtested on x86_64-redhat-linux,
aarch64-linux-gnu and powerpc64{,le}-linux-gnu.

Is it ok for trunk?

BR,
Kewen
-----
	PR tree-optimization/107240

gcc/ChangeLog:

	* tree-vect-patterns.cc (vect_recog_bit_insert_pattern): Attempt to
	fold shifted value.
---
 gcc/tree-vect-patterns.cc | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
index 6afd57a50c4..3beda774ec3 100644
--- a/gcc/tree-vect-patterns.cc
+++ b/gcc/tree-vect-patterns.cc
@@ -2098,9 +2098,11 @@ vect_recog_bit_insert_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
   tree shifted = value;
   if (shift_n)
     {
+      tree shifted_tmp
+	= fold_build2 (LSHIFT_EXPR, container_type, value, shift);
       pattern_stmt
 	= gimple_build_assign (vect_recog_temp_ssa_var (container_type),
-			       LSHIFT_EXPR, value, shift);
+			       shifted_tmp);
       append_pattern_def_seq (vinfo, stmt_info, pattern_stmt);
       shifted = gimple_get_lhs (pattern_stmt);
     }
--
2.27.0


* Re: [PATCH] vect: Try folding first for shifted value generation [PR107240]
  2022-10-19  3:18 [PATCH] vect: Try folding first for shifted value generation [PR107240] Kewen.Lin
@ 2022-10-19  7:43 ` Richard Biener
  2022-10-19  8:51   ` Kewen.Lin
  0 siblings, 1 reply; 3+ messages in thread
From: Richard Biener @ 2022-10-19  7:43 UTC (permalink / raw)
  To: Kewen.Lin
  Cc: GCC Patches, Andre Vieira (lists), Segher Boessenkool, Richard Sandiford

On Wed, Oct 19, 2022 at 5:18 AM Kewen.Lin <linkw@linux.ibm.com> wrote:
>
> Hi,
>
> As PR107240 shows, when both the value to be shifted and the
> shift count are constants, we don't actually need the target to
> support vector shift operations.
>
> This patch tries fold_build2 first when generating the shifted
> value: if the expression folds, the shift statement is gone;
> otherwise the generated code is the same as before.
>
> This makes the failures of vect-bitfield-write-{2,3}.c on Power
> go away.
>
> Bootstrapped and regtested on x86_64-redhat-linux,
> aarch64-linux-gnu and powerpc64{,le}-linux-gnu.
>
> Is it ok for trunk?
>
> BR,
> Kewen
> -----
>         PR tree-optimization/107240
>
> gcc/ChangeLog:
>
>         * tree-vect-patterns.cc (vect_recog_bit_insert_pattern): Attempt to
>         fold shifted value.
> ---
>  gcc/tree-vect-patterns.cc | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
> index 6afd57a50c4..3beda774ec3 100644
> --- a/gcc/tree-vect-patterns.cc
> +++ b/gcc/tree-vect-patterns.cc
> @@ -2098,9 +2098,11 @@ vect_recog_bit_insert_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
>    tree shifted = value;
>    if (shift_n)
>      {
> +      tree shifted_tmp
> +       = fold_build2 (LSHIFT_EXPR, container_type, value, shift);
>        pattern_stmt
>         = gimple_build_assign (vect_recog_temp_ssa_var (container_type),
> -                              LSHIFT_EXPR, value, shift);
> +                              shifted_tmp);

The canonical way would be to use

         gimple_seq stmts = NULL;
         shifted = gimple_build (&stmts, LSHIFT_EXPR, container_type,
                                 value, shift);
         if (!gimple_seq_empty_p (stmts))
           append_pattern_def_seq (vinfo, stmt_info,
                                   gimple_seq_first_stmt (stmts));

That also avoids the spurious  val = constant;  statement your patch
would emit when the fold succeeds.
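
(I.e. with fold_build2 + gimple_build_assign a successful fold still
appends a statement like

       patt_N = 40;

to the pattern def seq, just a copy of the already-folded constant,
whereas gimple_build hands the constant back and emits nothing.)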

OK if that works.

thanks,
Richard.

>        append_pattern_def_seq (vinfo, stmt_info, pattern_stmt);
>        shifted = gimple_get_lhs (pattern_stmt);
>      }
> --
> 2.27.0


* Re: [PATCH] vect: Try folding first for shifted value generation [PR107240]
  2022-10-19  7:43 ` Richard Biener
@ 2022-10-19  8:51   ` Kewen.Lin
  0 siblings, 0 replies; 3+ messages in thread
From: Kewen.Lin @ 2022-10-19  8:51 UTC (permalink / raw)
  To: Richard Biener
  Cc: GCC Patches, Andre Vieira (lists), Segher Boessenkool, Richard Sandiford

Hi Richi,

on 2022/10/19 15:43, Richard Biener wrote:
> On Wed, Oct 19, 2022 at 5:18 AM Kewen.Lin <linkw@linux.ibm.com> wrote:
>>
>> Hi,
>>
>> As PR107240 shows, when both the value to be shifted and the
>> shift count are constants, we don't actually need the target to
>> support vector shift operations.
>>
>> This patch tries fold_build2 first when generating the shifted
>> value: if the expression folds, the shift statement is gone;
>> otherwise the generated code is the same as before.
>>
>> This makes the failures of vect-bitfield-write-{2,3}.c on Power
>> go away.
>>
>> Bootstrapped and regtested on x86_64-redhat-linux,
>> aarch64-linux-gnu and powerpc64{,le}-linux-gnu.
>>
>> Is it ok for trunk?
>>
>> BR,
>> Kewen
>> -----
>>         PR tree-optimization/107240
>>
>> gcc/ChangeLog:
>>
>>         * tree-vect-patterns.cc (vect_recog_bit_insert_pattern): Attempt to
>>         fold shifted value.
>> ---
>>  gcc/tree-vect-patterns.cc | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
>> index 6afd57a50c4..3beda774ec3 100644
>> --- a/gcc/tree-vect-patterns.cc
>> +++ b/gcc/tree-vect-patterns.cc
>> @@ -2098,9 +2098,11 @@ vect_recog_bit_insert_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
>>    tree shifted = value;
>>    if (shift_n)
>>      {
>> +      tree shifted_tmp
>> +       = fold_build2 (LSHIFT_EXPR, container_type, value, shift);
>>        pattern_stmt
>>         = gimple_build_assign (vect_recog_temp_ssa_var (container_type),
>> -                              LSHIFT_EXPR, value, shift);
>> +                              shifted_tmp);
> 
> The canonical way would be to use
> 
>          gimple_seq stmts = NULL;
>          shifted = gimple_build (&stmts, LSHIFT_EXPR, container_type,
>                                  value, shift);
>          if (!gimple_seq_empty_p (stmts))
>            append_pattern_def_seq (vinfo, stmt_info,
>                                    gimple_seq_first_stmt (stmts));
> 
> That also avoids the spurious  val = constant;  statement your patch
> would emit when the fold succeeds.
> 

Thanks for the suggestion!  I tested it on those two cases locally
and it works well.

I searched around and it seems gimple_build doesn't provide an
interface that lets callers specify an SSA name for the result;
previously we used vect_recog_temp_ssa_var for the shifted
result, but I think that difference is trivial.
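
For reference, the hunk with your suggestion applied looks roughly
like this (only lightly tested on the two cases so far):

      tree shifted = value;
      if (shift_n)
        {
          /* Let gimple_build fold the shift when VALUE and SHIFT are
             constants; only append a pattern def stmt when one was
             actually created.  */
          gimple_seq stmts = NULL;
          shifted = gimple_build (&stmts, LSHIFT_EXPR, container_type,
                                  value, shift);
          if (!gimple_seq_empty_p (stmts))
            append_pattern_def_seq (vinfo, stmt_info,
                                    gimple_seq_first_stmt (stmts));
        }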

I'll do a full round of bootstrap and regression testing as before
and push it if everything goes well.  Thanks again.

BR,
Kewen

> OK if that works.
> 
> thanks,
> Richard.
> 
>>        append_pattern_def_seq (vinfo, stmt_info, pattern_stmt);
>>        shifted = gimple_get_lhs (pattern_stmt);
>>      }
>> --
>> 2.27.0

