From: Tamar Christina <Tamar.Christina@arm.com>
To: "pan2.li@intel.com" <pan2.li@intel.com>,
	"gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>
Cc: "juzhe.zhong@rivai.ai" <juzhe.zhong@rivai.ai>,
	"kito.cheng@gmail.com" <kito.cheng@gmail.com>,
	"richard.guenther@gmail.com" <richard.guenther@gmail.com>,
	"hongtao.liu@intel.com" <hongtao.liu@intel.com>
Subject: RE: [PATCH v3] Internal-fn: Introduce new internal function SAT_ADD
Date: Wed, 1 May 2024 17:06:00 +0000	[thread overview]
Message-ID: <VI1PR08MB53251FB73B0BCB33D028C86BFF192@VI1PR08MB5325.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <20240429075322.1587986-1-pan2.li@intel.com>

Hi,

> From: Pan Li <pan2.li@intel.com>
> 
> Update in v3:
> * Rebase upstream for conflict.
> 
> Update in v2:
> * Fix one failure for x86 bootstrap.
> 
> Original log:
> 
> This patch would like to add the middle-end representation for the
> saturation add, i.e. set the result of the add to the maximum value
> when it overflows.  It matches a pattern similar to the one below.
> 
> SAT_ADD (x, y) => (x + y) | (-(TYPE)((TYPE)(x + y) < x))
> 
> Taking uint8_t as an example, we will have:
> 
> * SAT_ADD (1, 254)   => 255.
> * SAT_ADD (1, 255)   => 255.
> * SAT_ADD (2, 255)   => 255.
> * SAT_ADD (255, 255) => 255.
> 
> The patch also implements SAT_ADD in the riscv backend as a sample
> for both scalar and vector.  Given the below example:
> 
> uint64_t sat_add_u64 (uint64_t x, uint64_t y)
> {
>   return (x + y) | (- (uint64_t)((uint64_t)(x + y) < x));
> }
> 
> Before this patch:
> uint64_t sat_add_uint64_t (uint64_t x, uint64_t y)
> {
>   long unsigned int _1;
>   _Bool _2;
>   long unsigned int _3;
>   long unsigned int _4;
>   uint64_t _7;
>   long unsigned int _10;
>   __complex__ long unsigned int _11;
> 
> ;;   basic block 2, loop depth 0
> ;;    pred:       ENTRY
>   _11 = .ADD_OVERFLOW (x_5(D), y_6(D));
>   _1 = REALPART_EXPR <_11>;
>   _10 = IMAGPART_EXPR <_11>;
>   _2 = _10 != 0;
>   _3 = (long unsigned int) _2;
>   _4 = -_3;
>   _7 = _1 | _4;
>   return _7;
> ;;    succ:       EXIT
> 
> }
> 
> After this patch:
> uint64_t sat_add_uint64_t (uint64_t x, uint64_t y)
> {
>   uint64_t _7;
> 
> ;;   basic block 2, loop depth 0
> ;;    pred:       ENTRY
>   _7 = .SAT_ADD (x_5(D), y_6(D)); [tail call]
>   return _7;
> ;;    succ:       EXIT
> }
> 
> For vectorization, we leverage the existing vect pattern recog to find
> the pattern similar to the scalar one and let the vectorizer perform
> the rest for the standard name usadd<mode>3 in vector mode.
> The riscv vector backend has the insn "Vector Single-Width Saturating
> Add and Subtract" which can be leveraged when expanding usadd<mode>3
> in vector mode.  For example:
> 
> void vec_sat_add_u64 (uint64_t *out, uint64_t *x, uint64_t *y, unsigned n)
> {
>   unsigned i;
> 
>   for (i = 0; i < n; i++)
>     out[i] = (x[i] + y[i]) | (- (uint64_t)((uint64_t)(x[i] + y[i]) < x[i]));
> }
> 
> Before this patch:
> void vec_sat_add_u64 (uint64_t *out, uint64_t *x, uint64_t *y, unsigned n)
> {
>   ...
>   _80 = .SELECT_VL (ivtmp_78, POLY_INT_CST [2, 2]);
>   ivtmp_58 = _80 * 8;
>   vect__4.7_61 = .MASK_LEN_LOAD (vectp_x.5_59, 64B, { -1, ... }, _80, 0);
>   vect__6.10_65 = .MASK_LEN_LOAD (vectp_y.8_63, 64B, { -1, ... }, _80, 0);
>   vect__7.11_66 = vect__4.7_61 + vect__6.10_65;
>   mask__8.12_67 = vect__4.7_61 > vect__7.11_66;
>   vect__12.15_72 = .VCOND_MASK (mask__8.12_67, { 18446744073709551615, ... }, vect__7.11_66);
>   .MASK_LEN_STORE (vectp_out.16_74, 64B, { -1, ... }, _80, 0, vect__12.15_72);
>   vectp_x.5_60 = vectp_x.5_59 + ivtmp_58;
>   vectp_y.8_64 = vectp_y.8_63 + ivtmp_58;
>   vectp_out.16_75 = vectp_out.16_74 + ivtmp_58;
>   ivtmp_79 = ivtmp_78 - _80;
>   ...
> }
> 
> vec_sat_add_u64:
>   ...
>   vsetvli a5,a3,e64,m1,ta,ma
>   vle64.v v0,0(a1)
>   vle64.v v1,0(a2)
>   slli    a4,a5,3
>   sub     a3,a3,a5
>   add     a1,a1,a4
>   add     a2,a2,a4
>   vadd.vv v1,v0,v1
>   vmsgtu.vv       v0,v0,v1
>   vmerge.vim      v1,v1,-1,v0
>   vse64.v v1,0(a0)
>   ...
> 
> After this patch:
> void vec_sat_add_u64 (uint64_t *out, uint64_t *x, uint64_t *y, unsigned n)
> {
>   ...
>   _62 = .SELECT_VL (ivtmp_60, POLY_INT_CST [2, 2]);
>   ivtmp_46 = _62 * 8;
>   vect__4.7_49 = .MASK_LEN_LOAD (vectp_x.5_47, 64B, { -1, ... }, _62, 0);
>   vect__6.10_53 = .MASK_LEN_LOAD (vectp_y.8_51, 64B, { -1, ... }, _62, 0);
>   vect__12.11_54 = .SAT_ADD (vect__4.7_49, vect__6.10_53);
>   .MASK_LEN_STORE (vectp_out.12_56, 64B, { -1, ... }, _62, 0, vect__12.11_54);
>   ...
> }
> 
> vec_sat_add_u64:
>   ...
>   vsetvli a5,a3,e64,m1,ta,ma
>   vle64.v v1,0(a1)
>   vle64.v v2,0(a2)
>   slli    a4,a5,3
>   sub     a3,a3,a5
>   add     a1,a1,a4
>   add     a2,a2,a4
>   vsaddu.vv       v1,v1,v2
>   vse64.v v1,0(a0)
>   ...
> 
> To limit the patch size for review, only the unsigned version of
> usadd<mode>3 is involved here.  The signed version will be covered
> in follow-up patch(es).
> 
> The below test suites pass for this patch.
> * The riscv fully regression tests.
> * The aarch64 fully regression tests.
> * The x86 bootstrap tests.
> * The x86 fully regression tests.
> 
> 	PR target/51492
> 	PR target/112600
> 
> gcc/ChangeLog:
> 
> 	* config/riscv/autovec.md (usadd<mode>3): New pattern expand
> 	for unsigned SAT_ADD vector.
> 	* config/riscv/riscv-protos.h (riscv_expand_usadd): New func
> 	decl to expand usadd<mode>3 pattern.
> 	(expand_vec_usadd): Ditto but for vector.
> 	* config/riscv/riscv-v.cc (emit_vec_saddu): New func impl to
> 	emit the vsadd insn.
> 	(expand_vec_usadd): New func impl to expand usadd<mode>3 for
> 	vector.
> 	* config/riscv/riscv.cc (riscv_expand_usadd): New func impl
> 	to expand usadd<mode>3 for scalar.
> 	* config/riscv/riscv.md (usadd<mode>3): New pattern expand
> 	for unsigned SAT_ADD scalar.
> 	* config/riscv/vector.md: Allow VLS mode for vsaddu.
> 	* internal-fn.cc (commutative_binary_fn_p): Add type IFN_SAT_ADD.
> 	* internal-fn.def (SAT_ADD): Add new signed optab SAT_ADD.
> 	* match.pd: Add unsigned SAT_ADD match and simplify.
> 	* optabs.def (OPTAB_NL): Remove fixed-point limitation for us/ssadd.
> 	* tree-vect-patterns.cc (vect_sat_add_build_call): New func impl
> 	to build the IFN_SAT_ADD gimple call.
> 	(vect_recog_sat_add_pattern): New func impl to recog the pattern
> 	for unsigned SAT_ADD.
> 

Could you split the generic changes off from the RISC-V changes? The RISC-V changes need to be reviewed by the backend maintainer.

Could you also split off the vectorizer change from the scalar recog one? Typically I would structure a change like this as:

1. Create types/structures + scalar recog
2. Vector recog code
3. Backend changes

That makes review and bisecting easier. I'll only focus on the generic bits.

> diff --git a/gcc/internal-fn.cc b/gcc/internal-fn.cc
> index 2c764441cde..1104bb03b41 100644
> --- a/gcc/internal-fn.cc
> +++ b/gcc/internal-fn.cc
> @@ -4200,6 +4200,7 @@ commutative_binary_fn_p (internal_fn fn)
>      case IFN_UBSAN_CHECK_MUL:
>      case IFN_ADD_OVERFLOW:
>      case IFN_MUL_OVERFLOW:
> +    case IFN_SAT_ADD:
>      case IFN_VEC_WIDEN_PLUS:
>      case IFN_VEC_WIDEN_PLUS_LO:
>      case IFN_VEC_WIDEN_PLUS_HI:
> diff --git a/gcc/internal-fn.def b/gcc/internal-fn.def
> index 848bb9dbff3..47326b7033c 100644
> --- a/gcc/internal-fn.def
> +++ b/gcc/internal-fn.def
> @@ -275,6 +275,9 @@ DEF_INTERNAL_SIGNED_OPTAB_FN (MULHS, ECF_CONST | ECF_NOTHROW, first,
>  DEF_INTERNAL_SIGNED_OPTAB_FN (MULHRS, ECF_CONST | ECF_NOTHROW, first,
>  			      smulhrs, umulhrs, binary)
> 
> +DEF_INTERNAL_SIGNED_OPTAB_FN (SAT_ADD, ECF_CONST | ECF_NOTHROW, first,
> +			      ssadd, usadd, binary)
> +

Is ECF_NOTHROW correct here? At least on most targets I believe the scalar version
can set flags / raise exceptions when saturation happens?

>  DEF_INTERNAL_COND_FN (ADD, ECF_CONST, add, binary)
>  DEF_INTERNAL_COND_FN (SUB, ECF_CONST, sub, binary)
>  DEF_INTERNAL_COND_FN (MUL, ECF_CONST, smul, binary)
> diff --git a/gcc/match.pd b/gcc/match.pd
> index d401e7503e6..0b0298df829 100644
> --- a/gcc/match.pd
> +++ b/gcc/match.pd
> @@ -3043,6 +3043,70 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>         || POINTER_TYPE_P (itype))
>        && wi::eq_p (wi::to_wide (int_cst), wi::max_value (itype))))))
> 

Hmm, I believe Richi mentioned that he wanted the recognition done in isel?

The problem with doing it in match.pd is that it replaces the operations quite
early in the pipeline.  Did I miss an email perhaps?  The early replacement means
we lose optimizations and things such as range calculations etc., since e.g.
ranger doesn't know these internal functions.

I think Richi will want this in isel or mult widening, but I'll continue with the
match.pd review just in case.
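
To make the concern concrete, a contrived illustration (hypothetical example,
the function name is made up):

  #include <stdint.h>

  uint64_t f (uint64_t x, uint64_t y)
  {
    uint64_t sum = (x + y) | (- (uint64_t)((uint64_t)(x + y) < x));
    /* With the match.pd rule the statement above is already rewritten to
         sum = .SAT_ADD (x, y);
       during early folding, so every GIMPLE pass that runs before expand
       (ranger/VRP, CSE, etc.) only sees an opaque internal call instead of
       the individual PLUS_EXPR/LT_EXPR/BIT_IOR_EXPR statements it knows
       how to reason about.  */
    return sum;
  }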

> +/* Unsigned Saturation Add */
> +(match (usadd_left_part_1 @0 @1)
> + (plus:c @0 @1)
> + (if (INTEGRAL_TYPE_P (type)
> +      && TYPE_UNSIGNED (TREE_TYPE (@0))
> +      && types_match (type, TREE_TYPE (@0))
> +      && types_match (type, TREE_TYPE (@1)))))
> +
> +(match (usadd_right_part_1 @0 @1)
> + (negate (convert (lt (plus:c @0 @1) @0)))
> + (if (INTEGRAL_TYPE_P (type)
> +      && TYPE_UNSIGNED (TREE_TYPE (@0))
> +      && types_match (type, TREE_TYPE (@0))
> +      && types_match (type, TREE_TYPE (@1)))))
> +
> +(match (usadd_right_part_2 @0 @1)
> + (negate (convert (gt @0 (plus:c @0 @1))))
> + (if (INTEGRAL_TYPE_P (type)
> +      && TYPE_UNSIGNED (TREE_TYPE (@0))
> +      && types_match (type, TREE_TYPE (@0))
> +      && types_match (type, TREE_TYPE (@1)))))

Predicates can be overloaded, so these two can just be usadd_right_part which then...

> +
> +/* Unsigned saturation add. Case 1 (branchless):
> +   SAT_U_ADD = (X + Y) | - ((X + Y) < X) or
> +   SAT_U_ADD = (X + Y) | - (X > (X + Y)).  */
> +(simplify
> + (bit_ior:c
> +  (usadd_left_part_1 @0 @1)
> +  (usadd_right_part_1 @0 @1))
> + (if (optimize) (IFN_SAT_ADD @0 @1)))


The optimize checks in the match.pd file are odd, as they seem to check whether
optimizations are enabled?

We don't typically need to do this.

> +(simplify
> + (bit_ior:c
> +  (usadd_left_part_1 @0 @1)
> +  (usadd_right_part_2 @0 @1))
> + (if (optimize) (IFN_SAT_ADD @0 @1)))
> +

Allows you to collapse rules like these into one line. Similarly for below.
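
Something along these lines (untested, just to illustrate the overloading;
I'm leaving the question of the guarding conditions aside here):

  (match (usadd_right_part @0 @1)
   (negate (convert (lt (plus:c @0 @1) @0)))
   (if (INTEGRAL_TYPE_P (type)
        && TYPE_UNSIGNED (TREE_TYPE (@0))
        && types_match (type, TREE_TYPE (@0))
        && types_match (type, TREE_TYPE (@1)))))

  (match (usadd_right_part @0 @1)
   (negate (convert (gt @0 (plus:c @0 @1))))
   (if (INTEGRAL_TYPE_P (type)
        && TYPE_UNSIGNED (TREE_TYPE (@0))
        && types_match (type, TREE_TYPE (@0))
        && types_match (type, TREE_TYPE (@1)))))

  /* A single rule now covers both of the bit_ior forms.  */
  (simplify
   (bit_ior:c (usadd_left_part_1 @0 @1) (usadd_right_part @0 @1))
   (IFN_SAT_ADD @0 @1))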

Note that even when moving to gimple-isel you can reuse the match.pd code by
leveraging it to build the predicates for you and calling them from another pass.
See how ctz_table_index is used, for example.

Doing this, moving it to gimple-isel.cc should be easy.
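
As a rough, untested sketch of what that could look like (the function name
maybe_rewrite_to_sat_add is purely illustrative, not existing gimple-isel code):

  extern bool gimple_unsigned_integer_sat_add (tree, tree *, tree (*)(tree));

  static void
  maybe_rewrite_to_sat_add (gimple_stmt_iterator *gsi)
  {
    gimple *stmt = gsi_stmt (*gsi);
    if (!is_gimple_assign (stmt))
      return;

    tree lhs = gimple_assign_lhs (stmt);
    tree ops[2];
    /* gimple_unsigned_integer_sat_add is the predicate genmatch generates
       from the (match (unsigned_integer_sat_add ...)) entries above.  */
    if (gimple_unsigned_integer_sat_add (lhs, ops, NULL)
        && direct_internal_fn_supported_p (IFN_SAT_ADD, TREE_TYPE (lhs),
                                           OPTIMIZE_FOR_BOTH))
      {
        gcall *call = gimple_build_call_internal (IFN_SAT_ADD, 2, ops[0], ops[1]);
        gimple_call_set_lhs (call, lhs);
        gsi_replace (gsi, call, /* update_eh_info = */ true);
      }
  }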

> +/* Unsigned saturation add. Case 2 (branch):
> +   SAT_U_ADD = (X + Y) >= x ? (X + Y) : -1 or
> +   SAT_U_ADD = x <= (X + Y) ? (X + Y) : -1.  */
> +(simplify
> + (cond (ge (usadd_left_part_1@2 @0 @1) @0) @2 integer_minus_onep)
> + (if (optimize) (IFN_SAT_ADD @0 @1)))
> +(simplify
> + (cond (le @0 (usadd_left_part_1@2 @0 @1)) @2 integer_minus_onep)
> + (if (optimize) (IFN_SAT_ADD @0 @1)))
> +
> +/* Vect recog pattern will leverage unsigned_integer_sat_add.  */
> +(match (unsigned_integer_sat_add @0 @1)
> + (bit_ior:c
> +  (usadd_left_part_1 @0 @1)
> +  (usadd_right_part_1 @0 @1))
> + (if (optimize)))
> +(match (unsigned_integer_sat_add @0 @1)
> + (bit_ior:c
> +  (usadd_left_part_1 @0 @1)
> +  (usadd_right_part_2 @0 @1))
> + (if (optimize)))
> +(match (unsigned_integer_sat_add @0 @1)
> + (cond (ge (usadd_left_part_1@2 @0 @1) @0) @2 integer_minus_onep)
> + (if (optimize)))
> +(match (unsigned_integer_sat_add @0 @1)
> + (cond (le @0 (usadd_left_part_1@2 @0 @1)) @2 integer_minus_onep)
> + (if (optimize)))
> +
>  /* x >  y  &&  x != XXX_MIN  -->  x > y
>     x >  y  &&  x == XXX_MIN  -->  false . */
>  (for eqne (eq ne)
> diff --git a/gcc/optabs.def b/gcc/optabs.def
> index ad14f9328b9..3f2cb46aff8 100644
> --- a/gcc/optabs.def
> +++ b/gcc/optabs.def
> @@ -111,8 +111,8 @@ OPTAB_NX(add_optab, "add$F$a3")
>  OPTAB_NX(add_optab, "add$Q$a3")
>  OPTAB_VL(addv_optab, "addv$I$a3", PLUS, "add", '3', gen_intv_fp_libfunc)
>  OPTAB_VX(addv_optab, "add$F$a3")
> -OPTAB_NL(ssadd_optab, "ssadd$Q$a3", SS_PLUS, "ssadd", '3', gen_signed_fixed_libfunc)
> -OPTAB_NL(usadd_optab, "usadd$Q$a3", US_PLUS, "usadd", '3', gen_unsigned_fixed_libfunc)
> +OPTAB_NL(ssadd_optab, "ssadd$a3", SS_PLUS, "ssadd", '3', gen_signed_fixed_libfunc)
> +OPTAB_NL(usadd_optab, "usadd$a3", US_PLUS, "usadd", '3', gen_unsigned_fixed_libfunc)
>  OPTAB_NL(sub_optab, "sub$P$a3", MINUS, "sub", '3', gen_int_fp_fixed_libfunc)
>  OPTAB_NX(sub_optab, "sub$F$a3")
>  OPTAB_NX(sub_optab, "sub$Q$a3")
...
> diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
> index 87c2acff386..77924cf10f8 100644
> --- a/gcc/tree-vect-patterns.cc
> +++ b/gcc/tree-vect-patterns.cc
> @@ -4487,6 +4487,67 @@ vect_recog_mult_pattern (vec_info *vinfo,
>    return pattern_stmt;
>  }
> 
> +static gimple *
> +vect_sat_add_build_call (vec_info *vinfo, gimple *last_stmt, tree *type_out,
> +			 tree op_0, tree op_1)
> +{
> +  tree itype = TREE_TYPE (op_0);
> +  tree vtype = get_vectype_for_scalar_type (vinfo, itype);
> +
> +  if (vtype == NULL_TREE)
> +    return NULL;
> +
> +  if (!direct_internal_fn_supported_p (IFN_SAT_ADD, vtype, OPTIMIZE_FOR_SPEED))
> +    return NULL;
> +
> +  *type_out = vtype;
> +
> +  gcall *call = gimple_build_call_internal (IFN_SAT_ADD, 2, op_0, op_1);
> +  gimple_call_set_lhs (call, vect_recog_temp_ssa_var (itype, NULL));
> +  gimple_call_set_nothrow (call, /* nothrow_p */ true);
> +  gimple_set_location (call, gimple_location (last_stmt));
> +
> +  vect_pattern_detected ("vect_recog_sat_add_pattern", last_stmt);
> +
> +  return call;
> +}

The function has only one caller; you should just inline it into the pattern.

> +/*
> + * Try to detect saturation add pattern (SAT_ADD), aka below gimple:
> + *   _7 = _4 + _6;
> + *   _8 = _4 > _7;
> + *   _9 = (long unsigned int) _8;
> + *   _10 = -_9;
> + *   _12 = _7 | _10;
> + *
> + * And then simplied to
> + *   _12 = .SAT_ADD (_4, _6);
> + */
> +extern bool gimple_unsigned_integer_sat_add (tree, tree*, tree (*)(tree));
> +
> +static gimple *
> +vect_recog_sat_add_pattern (vec_info *vinfo, stmt_vec_info stmt_vinfo,
> +			    tree *type_out)
> +{
> +  gimple *last_stmt = stmt_vinfo->stmt;
> +

Use STMT_VINFO_STMT (stmt_vinfo) here.

> +  if (!is_gimple_assign (last_stmt))
> +    return NULL;
> +
> +  tree res_ops[2];
> +  tree lhs = gimple_assign_lhs (last_stmt);

Once you inline vect_sat_add_build_call you can do the check for
vtype here; it is the cheaper check, so perform it early.
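
I.e. roughly the following (untested rearrangement of your two functions,
just to show the ordering I mean):

  static gimple *
  vect_recog_sat_add_pattern (vec_info *vinfo, stmt_vec_info stmt_vinfo,
                              tree *type_out)
  {
    gimple *last_stmt = STMT_VINFO_STMT (stmt_vinfo);

    if (!is_gimple_assign (last_stmt))
      return NULL;

    tree lhs = gimple_assign_lhs (last_stmt);
    tree itype = TREE_TYPE (lhs);
    tree vtype = get_vectype_for_scalar_type (vinfo, itype);

    /* Do the cheap vectype/optab check before running the matcher.  */
    if (vtype == NULL_TREE
        || !direct_internal_fn_supported_p (IFN_SAT_ADD, vtype,
                                            OPTIMIZE_FOR_SPEED))
      return NULL;

    tree ops[2];
    if (!gimple_unsigned_integer_sat_add (lhs, ops, NULL))
      return NULL;

    vect_pattern_detected ("vect_recog_sat_add_pattern", last_stmt);

    *type_out = vtype;
    gcall *call = gimple_build_call_internal (IFN_SAT_ADD, 2, ops[0], ops[1]);
    gimple_call_set_lhs (call, vect_recog_temp_ssa_var (itype, NULL));
    gimple_call_set_nothrow (call, /* nothrow_p */ true);
    gimple_set_location (call, gimple_location (last_stmt));
    return call;
  }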

Otherwise this looks really good!

Thanks for working on it,

Tamar

> +
> +  if (gimple_unsigned_integer_sat_add (lhs, res_ops, NULL))
> +    {
> +      gimple *call = vect_sat_add_build_call (vinfo, last_stmt, type_out,
> +					      res_ops[0], res_ops[1]);
> +      if (call)
> +	return call;
> +    }
> +
> +  return NULL;
> +}
> +
>  /* Detect a signed division by a constant that wouldn't be
>     otherwise vectorized:
> 
> @@ -6987,6 +7048,7 @@ static vect_recog_func vect_vect_recog_func_ptrs[] = {
>    { vect_recog_vector_vector_shift_pattern, "vector_vector_shift" },
>    { vect_recog_divmod_pattern, "divmod" },
>    { vect_recog_mult_pattern, "mult" },
> +  { vect_recog_sat_add_pattern, "sat_add" },
>    { vect_recog_mixed_size_cond_pattern, "mixed_size_cond" },
>    { vect_recog_gcond_pattern, "gcond" },
>    { vect_recog_bool_pattern, "bool" },
> --
> 2.34.1


Thread overview: 21+ messages
2024-04-06 12:07 [PATCH v1] " pan2.li
2024-04-07  7:03 ` [PATCH v2] " pan2.li
2024-04-28 12:10   ` Li, Pan2
2024-04-29  7:53 ` [PATCH v3] " pan2.li
2024-05-01 17:06   ` Tamar Christina [this message]
2024-05-02  3:10     ` Li, Pan2
2024-05-02  3:25       ` Tamar Christina
2024-05-02 10:57         ` Li, Pan2
2024-05-02 12:57           ` Tamar Christina
2024-05-03  1:45             ` Li, Pan2
2024-05-06 14:48 ` [PATCH v4 1/3] Internal-fn: Support new IFN SAT_ADD for unsigned scalar int pan2.li
2024-05-13  9:09   ` Tamar Christina
2024-05-13 13:36     ` Li, Pan2
2024-05-13 15:03       ` Tamar Christina
2024-05-14  1:50         ` Li, Pan2
2024-05-14 13:18   ` Richard Biener
2024-05-14 14:14     ` Li, Pan2
2024-05-06 14:49 ` [PATCH v4 2/3] VECT: Support new IFN SAT_ADD for unsigned vector int pan2.li
2024-05-13  9:10   ` Tamar Christina
2024-05-14 13:21   ` Richard Biener
2024-05-06 14:50 ` [PATCH v4 3/3] RISC-V: Implement IFN SAT_ADD for both the scalar and vector pan2.li
