public inbox for gcc-patches@gcc.gnu.org
From: Tamar Christina <Tamar.Christina@arm.com>
To: "Li, Pan2" <pan2.li@intel.com>,
	"gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>
Cc: "juzhe.zhong@rivai.ai" <juzhe.zhong@rivai.ai>,
	"kito.cheng@gmail.com" <kito.cheng@gmail.com>,
	"richard.guenther@gmail.com" <richard.guenther@gmail.com>,
	"Liu, Hongtao" <hongtao.liu@intel.com>
Subject: RE: [PATCH v3] Internal-fn: Introduce new internal function SAT_ADD
Date: Thu, 2 May 2024 12:57:31 +0000	[thread overview]
Message-ID: <VI1PR08MB5325A9C6C13B4AB1B21C1610FF182@VI1PR08MB5325.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <MW5PR11MB590886D7425828D04919FE8CA9182@MW5PR11MB5908.namprd11.prod.outlook.com>

> > So he was responding for how to do it for the vectorizer and scalar parts.
> > Remember that the goal is not to introduce new gimple IL that can block other
> optimizations.
> > The vectorizer already introduces new IL (various IFN) but this is fine as we don't
> track things like ranges for
> > vector instructions.  So we don't loose any information here.
> 
> > Now for the scalar, if we do an early replacement like in match.pd we prevent a
> lot of other optimizations
> > because they don't know what IFN_SAT_ADD does. gimple-isel runs pretty late,
> and so at this point we don't
> > expect many more optimizations to happen, so it's a safe spot to insert more IL
> with "unknown semantics".
> 
> > Was that your intention Richi?
> 
> Thanks Tamar for clear explanation, does that mean both the scalar and vector will
> go isel approach? If so I may
> misunderstand in previous that it is only for vectorize.

No, the isel approach would only be for the scalar; the vectorizer will still use the vect_pattern.
It needs to, both so we can cost the operation correctly and because, depending on how
the saturation is written, you are sometimes unable to vectorize at all.  The pattern allows us
to catch these cases and still vectorize.

But you should be able to use the same match.pd predicate for both the vectorizer pattern
and isel.
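As a rough pseudocode sketch of what sharing the predicate could look like (the function
name `maybe_isel_sat_add` and the exact replacement logic are my own illustration, not part
of the patch; only `gimple_unsigned_integer_sat_add` is the predicate the patch already
declares), gimple-isel could do something along these lines:

```c
/* Pseudocode sketch against GCC internals, not a drop-in patch.
   The match.pd (match ...) definitions generate the predicate the
   patch already declares for the vectorizer:

     extern bool gimple_unsigned_integer_sat_add (tree, tree*, tree (*)(tree));

   The same predicate could then drive the scalar replacement in
   gimple-isel (or widening_mul).  */

static void
maybe_isel_sat_add (gimple_stmt_iterator *gsi)
{
  tree res_ops[2];
  tree lhs = gimple_assign_lhs (gsi_stmt (*gsi));

  /* Match the saturating-add shape and check the target supports it.  */
  if (gimple_unsigned_integer_sat_add (lhs, res_ops, NULL)
      && direct_internal_fn_supported_p (IFN_SAT_ADD, TREE_TYPE (lhs),
					 OPTIMIZE_FOR_BOTH))
    {
      gcall *call = gimple_build_call_internal (IFN_SAT_ADD, 2,
						res_ops[0], res_ops[1]);
      gimple_call_set_lhs (call, lhs);
      gsi_replace (gsi, call, /* update_eh_info */ true);
    }
}
```

That way the pattern is written once in match.pd and both consumers stay in sync.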

> 
> Understand the point that we would like to put the pattern match late but I may
> have a question here.
> Given SAT_ADD related pattern is sort of complicated, it is possible that the sub-
> expression of SAT_ADD is optimized
> In early pass by others and we can hardly catch the shapes later.
> 
> For example, there is a plus expression in SAT_ADD, and in early pass it may be
> optimized to .ADD_OVERFLOW, and
> then the pattern is quite different to aware of that in later pass.
> 

Yeah, it looks like this transformation is done in widening_mul, which is the other
place Richi suggested recognizing SAT_ADD.  widening_mul already runs quite
late as well, so it's also OK.

If you put the recognition there, before the code that transforms the sequence to
.ADD_OVERFLOW, it should work.

Eventually we do need to recognize this variant since:

uint64_t
add_sat (uint64_t x, uint64_t y) noexcept
{
    uint64_t z;
    if (!__builtin_add_overflow (x, y, &z))
        return z;
    return -1;  /* -1 converts to UINT64_MAX.  */
}

is a valid and common way to express saturation too.
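For reference, a small standalone sketch (plain C; the function names are mine, not from
the patch) showing that the branchless form the current patterns recognize and the
__builtin_add_overflow form above compute the same saturating add:

```c
#include <stdint.h>

/* Branchless form the current match.pd patterns recognize:
   (x + y) | -(uint64_t)((x + y) < x).  */
uint64_t
sat_add_branchless (uint64_t x, uint64_t y)
{
  return (x + y) | (-(uint64_t)((uint64_t)(x + y) < x));
}

/* __builtin_add_overflow form; saturates to UINT64_MAX as well.  */
uint64_t
sat_add_overflow (uint64_t x, uint64_t y)
{
  uint64_t z;
  if (!__builtin_add_overflow (x, y, &z))
    return z;
  return UINT64_MAX;
}
```

Both return the wrapped sum masked to all-ones exactly when the addition overflows,
so a later pass recognizing either shape should emit the same .SAT_ADD.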

But for now, it's fine.

Cheers,
Tamar

> Sorry not sure if my understanding is correct, feel free to correct me.
> 
> Pan
> 
> -----Original Message-----
> From: Tamar Christina <Tamar.Christina@arm.com>
> Sent: Thursday, May 2, 2024 11:26 AM
> To: Li, Pan2 <pan2.li@intel.com>; gcc-patches@gcc.gnu.org
> Cc: juzhe.zhong@rivai.ai; kito.cheng@gmail.com; richard.guenther@gmail.com;
> Liu, Hongtao <hongtao.liu@intel.com>
> Subject: RE: [PATCH v3] Internal-fn: Introduce new internal function SAT_ADD
> 
> > -----Original Message-----
> > From: Li, Pan2 <pan2.li@intel.com>
> > Sent: Thursday, May 2, 2024 4:11 AM
> > To: Tamar Christina <Tamar.Christina@arm.com>; gcc-patches@gcc.gnu.org
> > Cc: juzhe.zhong@rivai.ai; kito.cheng@gmail.com; richard.guenther@gmail.com;
> > Liu, Hongtao <hongtao.liu@intel.com>
> > Subject: RE: [PATCH v3] Internal-fn: Introduce new internal function SAT_ADD
> >
> > Thanks Tamar
> >
> > > Could you also split off the vectorizer change from scalar recog one? Typically I
> > would structure a change like this as:
> >
> > > 1. create types/structures + scalar recogn
> > > 2. Vector recog code
> > > 3. Backend changes
> >
> > Sure thing, will rearrange the patch like this.
> >
> > > Is ECF_NOTHROW correct here? At least on most targets I believe the scalar
> > version
> > > can set flags/throw exceptions if the saturation happens?
> >
> > I see, will remove that.
> >
> > > Hmm I believe Richi mentioned that he wanted the recognition done in isel?
> >
> > > The problem with doing it in match.pd is that it replaces the operations quite
> > > early the pipeline. Did I miss an email perhaps? The early replacement means
> we
> > > lose optimizations and things such as range calculations etc, since e.g. ranger
> > > doesn't know these internal functions.
> >
> > > I think Richi will want this in islet or mult widening but I'll continue with
> match.pd
> > > review just in case.
> >
> > If I understand is correct, Richard suggested try vectorizer patterns first and then
> > possible isel.
> > Thus, I don't have a try for SAT_ADD in ISEL as vectorizer patterns works well for
> > SAT_ADD.
> > Let's wait the confirmation from Richard. Below are the original words from
> > previous mail for reference.
> >
> 
> I think the comment he made was this
> 
> > > Given we have saturating integer alu like below, could you help to coach me the
> most reasonable way to represent
> > > It in scalar as well as vectorize part? Sorry not familiar with this part and still dig
> into how it works...
> >
> > As in your v2, .SAT_ADD for both sat_uadd and sat_sadd, similar for
> > the other cases.
> >
> > As I said, use vectorizer patterns and possibly do instruction
> > selection at ISEL/widen_mult time.
> 
> So he was responding for how to do it for the vectorizer and scalar parts.
> Remember that the goal is not to introduce new gimple IL that can block other
> optimizations.
> The vectorizer already introduces new IL (various IFN) but this is fine as we don't
> track things like ranges for
> vector instructions.  So we don't loose any information here.
> 
> Now for the scalar, if we do an early replacement like in match.pd we prevent a lot
> of other optimizations
> because they don't know what IFN_SAT_ADD does. gimple-isel runs pretty late,
> and so at this point we don't
> expect many more optimizations to happen, so it's a safe spot to insert more IL
> with "unknown semantics".
> 
> Was that your intention Richi?
> 
> Thanks,
> Tamar
> 
> > >> As I said, use vectorizer patterns and possibly do instruction
> > >> selection at ISEL/widen_mult time.
> >
> > > The optimize checks in the match.pd file are weird as it seems to check if we
> have
> > > optimizations enabled?
> >
> > > We don't typically need to do this.
> >
> > Sure, will remove this.
> >
> > > The function has only one caller, you should just inline it into the pattern.
> >
> > Sure thing.
> >
> > > Once you inline vect_sat_add_build_call you can do the check for
> > > vtype here, which is the cheaper check so perform it early.
> >
> > Sure thing.
> >
> > Thanks again and will send the v4 with all comments addressed, as well as the
> test
> > results.
> >
> > Pan
> >
> > -----Original Message-----
> > From: Tamar Christina <Tamar.Christina@arm.com>
> > Sent: Thursday, May 2, 2024 1:06 AM
> > To: Li, Pan2 <pan2.li@intel.com>; gcc-patches@gcc.gnu.org
> > Cc: juzhe.zhong@rivai.ai; kito.cheng@gmail.com; richard.guenther@gmail.com;
> > Liu, Hongtao <hongtao.liu@intel.com>
> > Subject: RE: [PATCH v3] Internal-fn: Introduce new internal function SAT_ADD
> >
> > Hi,
> >
> > > From: Pan Li <pan2.li@intel.com>
> > >
> > > Update in v3:
> > > * Rebase upstream for conflict.
> > >
> > > Update in v2:
> > > * Fix one failure for x86 bootstrap.
> > >
> > > Original log:
> > >
> > > This patch would like to add the middle-end presentation for the
> > > saturation add.  Aka set the result of add to the max when overflow.
> > > It will take the pattern similar as below.
> > >
> > > SAT_ADD (x, y) => (x + y) | (-(TYPE)((TYPE)(x + y) < x))
> > >
> > > Take uint8_t as example, we will have:
> > >
> > > * SAT_ADD (1, 254)   => 255.
> > > * SAT_ADD (1, 255)   => 255.
> > > * SAT_ADD (2, 255)   => 255.
> > > * SAT_ADD (255, 255) => 255.
> > >
> > > The patch also implement the SAT_ADD in the riscv backend as
> > > the sample for both the scalar and vector.  Given below example:
> > >
> > > uint64_t sat_add_u64 (uint64_t x, uint64_t y)
> > > {
> > >   return (x + y) | (- (uint64_t)((uint64_t)(x + y) < x));
> > > }
> > >
> > > Before this patch:
> > > uint64_t sat_add_uint64_t (uint64_t x, uint64_t y)
> > > {
> > >   long unsigned int _1;
> > >   _Bool _2;
> > >   long unsigned int _3;
> > >   long unsigned int _4;
> > >   uint64_t _7;
> > >   long unsigned int _10;
> > >   __complex__ long unsigned int _11;
> > >
> > > ;;   basic block 2, loop depth 0
> > > ;;    pred:       ENTRY
> > >   _11 = .ADD_OVERFLOW (x_5(D), y_6(D));
> > >   _1 = REALPART_EXPR <_11>;
> > >   _10 = IMAGPART_EXPR <_11>;
> > >   _2 = _10 != 0;
> > >   _3 = (long unsigned int) _2;
> > >   _4 = -_3;
> > >   _7 = _1 | _4;
> > >   return _7;
> > > ;;    succ:       EXIT
> > >
> > > }
> > >
> > > After this patch:
> > > uint64_t sat_add_uint64_t (uint64_t x, uint64_t y)
> > > {
> > >   uint64_t _7;
> > >
> > > ;;   basic block 2, loop depth 0
> > > ;;    pred:       ENTRY
> > >   _7 = .SAT_ADD (x_5(D), y_6(D)); [tail call]
> > >   return _7;
> > > ;;    succ:       EXIT
> > > }
> > >
> > > For vectorize, we leverage the existing vect pattern recog to find
> > > the pattern similar to scalar and let the vectorizer to perform
> > > the rest part for standard name usadd<mode>3 in vector mode.
> > > The riscv vector backend have insn "Vector Single-Width Saturating
> > > Add and Subtract" which can be leveraged when expand the usadd<mode>3
> > > in vector mode.  For example:
> > >
> > > void vec_sat_add_u64 (uint64_t *out, uint64_t *x, uint64_t *y, unsigned n)
> > > {
> > >   unsigned i;
> > >
> > >   for (i = 0; i < n; i++)
> > >     out[i] = (x[i] + y[i]) | (- (uint64_t)((uint64_t)(x[i] + y[i]) < x[i]));
> > > }
> > >
> > > Before this patch:
> > > void vec_sat_add_u64 (uint64_t *out, uint64_t *x, uint64_t *y, unsigned n)
> > > {
> > >   ...
> > >   _80 = .SELECT_VL (ivtmp_78, POLY_INT_CST [2, 2]);
> > >   ivtmp_58 = _80 * 8;
> > >   vect__4.7_61 = .MASK_LEN_LOAD (vectp_x.5_59, 64B, { -1, ... }, _80, 0);
> > >   vect__6.10_65 = .MASK_LEN_LOAD (vectp_y.8_63, 64B, { -1, ... }, _80, 0);
> > >   vect__7.11_66 = vect__4.7_61 + vect__6.10_65;
> > >   mask__8.12_67 = vect__4.7_61 > vect__7.11_66;
> > >   vect__12.15_72 = .VCOND_MASK (mask__8.12_67, {
> > 18446744073709551615,
> > > ... }, vect__7.11_66);
> > >   .MASK_LEN_STORE (vectp_out.16_74, 64B, { -1, ... }, _80, 0,
> vect__12.15_72);
> > >   vectp_x.5_60 = vectp_x.5_59 + ivtmp_58;
> > >   vectp_y.8_64 = vectp_y.8_63 + ivtmp_58;
> > >   vectp_out.16_75 = vectp_out.16_74 + ivtmp_58;
> > >   ivtmp_79 = ivtmp_78 - _80;
> > >   ...
> > > }
> > >
> > > vec_sat_add_u64:
> > >   ...
> > >   vsetvli a5,a3,e64,m1,ta,ma
> > >   vle64.v v0,0(a1)
> > >   vle64.v v1,0(a2)
> > >   slli    a4,a5,3
> > >   sub     a3,a3,a5
> > >   add     a1,a1,a4
> > >   add     a2,a2,a4
> > >   vadd.vv v1,v0,v1
> > >   vmsgtu.vv       v0,v0,v1
> > >   vmerge.vim      v1,v1,-1,v0
> > >   vse64.v v1,0(a0)
> > >   ...
> > >
> > > After this patch:
> > > void vec_sat_add_u64 (uint64_t *out, uint64_t *x, uint64_t *y, unsigned n)
> > > {
> > >   ...
> > >   _62 = .SELECT_VL (ivtmp_60, POLY_INT_CST [2, 2]);
> > >   ivtmp_46 = _62 * 8;
> > >   vect__4.7_49 = .MASK_LEN_LOAD (vectp_x.5_47, 64B, { -1, ... }, _62, 0);
> > >   vect__6.10_53 = .MASK_LEN_LOAD (vectp_y.8_51, 64B, { -1, ... }, _62, 0);
> > >   vect__12.11_54 = .SAT_ADD (vect__4.7_49, vect__6.10_53);
> > >   .MASK_LEN_STORE (vectp_out.12_56, 64B, { -1, ... }, _62, 0,
> vect__12.11_54);
> > >   ...
> > > }
> > >
> > > vec_sat_add_u64:
> > >   ...
> > >   vsetvli a5,a3,e64,m1,ta,ma
> > >   vle64.v v1,0(a1)
> > >   vle64.v v2,0(a2)
> > >   slli    a4,a5,3
> > >   sub     a3,a3,a5
> > >   add     a1,a1,a4
> > >   add     a2,a2,a4
> > >   vsaddu.vv       v1,v1,v2
> > >   vse64.v v1,0(a0)
> > >   ...
> > >
> > > To limit the patch size for review, only unsigned version of
> > > usadd<mode>3 are involved here. The signed version will be covered
> > > in the underlying patch(es).
> > >
> > > The below test suites are passed for this patch.
> > > * The riscv fully regression tests.
> > > * The aarch64 fully regression tests.
> > > * The x86 bootstrap tests.
> > > * The x86 fully regression tests.
> > >
> > > 	PR target/51492
> > > 	PR target/112600
> > >
> > > gcc/ChangeLog:
> > >
> > > 	* config/riscv/autovec.md (usadd<mode>3): New pattern expand
> > > 	for unsigned SAT_ADD vector.
> > > 	* config/riscv/riscv-protos.h (riscv_expand_usadd): New func
> > > 	decl to expand usadd<mode>3 pattern.
> > > 	(expand_vec_usadd): Ditto but for vector.
> > > 	* config/riscv/riscv-v.cc (emit_vec_saddu): New func impl to
> > > 	emit the vsadd insn.
> > > 	(expand_vec_usadd): New func impl to expand usadd<mode>3 for
> > > 	vector.
> > > 	* config/riscv/riscv.cc (riscv_expand_usadd): New func impl
> > > 	to expand usadd<mode>3 for scalar.
> > > 	* config/riscv/riscv.md (usadd<mode>3): New pattern expand
> > > 	for unsigned SAT_ADD scalar.
> > > 	* config/riscv/vector.md: Allow VLS mode for vsaddu.
> > > 	* internal-fn.cc (commutative_binary_fn_p): Add type IFN_SAT_ADD.
> > > 	* internal-fn.def (SAT_ADD): Add new signed optab SAT_ADD.
> > > 	* match.pd: Add unsigned SAT_ADD match and simply.
> > > 	* optabs.def (OPTAB_NL): Remove fixed-point limitation for us/ssadd.
> > > 	* tree-vect-patterns.cc (vect_sat_add_build_call): New func impl
> > > 	to build the IFN_SAT_ADD gimple call.
> > > 	(vect_recog_sat_add_pattern): New func impl to recog the pattern
> > > 	for unsigned SAT_ADD.
> > >
> >
> > Could you split the generic changes off from the RISCV changes? The RISCV
> > changes need to be reviewed by the backend maintainer.
> >
> > Could you also split off the vectorizer change from scalar recog one? Typically I
> > would structure a change like this as:
> >
> > 1. create types/structures + scalar recogn
> > 2. Vector recog code
> > 3. Backend changes
> >
> > Which makes review and bisect easier. I'll only focus on the generic bits.
> >
> > > diff --git a/gcc/internal-fn.cc b/gcc/internal-fn.cc
> > > index 2c764441cde..1104bb03b41 100644
> > > --- a/gcc/internal-fn.cc
> > > +++ b/gcc/internal-fn.cc
> > > @@ -4200,6 +4200,7 @@ commutative_binary_fn_p (internal_fn fn)
> > >      case IFN_UBSAN_CHECK_MUL:
> > >      case IFN_ADD_OVERFLOW:
> > >      case IFN_MUL_OVERFLOW:
> > > +    case IFN_SAT_ADD:
> > >      case IFN_VEC_WIDEN_PLUS:
> > >      case IFN_VEC_WIDEN_PLUS_LO:
> > >      case IFN_VEC_WIDEN_PLUS_HI:
> > > diff --git a/gcc/internal-fn.def b/gcc/internal-fn.def
> > > index 848bb9dbff3..47326b7033c 100644
> > > --- a/gcc/internal-fn.def
> > > +++ b/gcc/internal-fn.def
> > > @@ -275,6 +275,9 @@ DEF_INTERNAL_SIGNED_OPTAB_FN (MULHS,
> > ECF_CONST
> > > | ECF_NOTHROW, first,
> > >  DEF_INTERNAL_SIGNED_OPTAB_FN (MULHRS, ECF_CONST | ECF_NOTHROW,
> > > first,
> > >  			      smulhrs, umulhrs, binary)
> > >
> > > +DEF_INTERNAL_SIGNED_OPTAB_FN (SAT_ADD, ECF_CONST |
> ECF_NOTHROW,
> > > first,
> > > +			      ssadd, usadd, binary)
> > > +
> >
> > Is ECF_NOTHROW correct here? At least on most targets I believe the scalar
> version
> > can set flags/throw exceptions if the saturation happens?
> >
> > >  DEF_INTERNAL_COND_FN (ADD, ECF_CONST, add, binary)
> > >  DEF_INTERNAL_COND_FN (SUB, ECF_CONST, sub, binary)
> > >  DEF_INTERNAL_COND_FN (MUL, ECF_CONST, smul, binary)
> > > diff --git a/gcc/match.pd b/gcc/match.pd
> > > index d401e7503e6..0b0298df829 100644
> > > --- a/gcc/match.pd
> > > +++ b/gcc/match.pd
> > > @@ -3043,6 +3043,70 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
> > >         || POINTER_TYPE_P (itype))
> > >        && wi::eq_p (wi::to_wide (int_cst), wi::max_value (itype))))))
> > >
> >
> > Hmm I believe Richi mentioned that he wanted the recognition done in isel?
> >
> > The problem with doing it in match.pd is that it replaces the operations quite
> > early the pipeline. Did I miss an email perhaps? The early replacement means we
> > lose optimizations and things such as range calculations etc, since e.g. ranger
> > doesn't know these internal functions.
> >
> > I think Richi will want this in islet or mult widening but I'll continue with match.pd
> > review just in case.
> >
> > > +/* Unsigned Saturation Add */
> > > +(match (usadd_left_part_1 @0 @1)
> > > + (plus:c @0 @1)
> > > + (if (INTEGRAL_TYPE_P (type)
> > > +      && TYPE_UNSIGNED (TREE_TYPE (@0))
> > > +      && types_match (type, TREE_TYPE (@0))
> > > +      && types_match (type, TREE_TYPE (@1)))))
> > > +
> > > +(match (usadd_right_part_1 @0 @1)
> > > + (negate (convert (lt (plus:c @0 @1) @0)))
> > > + (if (INTEGRAL_TYPE_P (type)
> > > +      && TYPE_UNSIGNED (TREE_TYPE (@0))
> > > +      && types_match (type, TREE_TYPE (@0))
> > > +      && types_match (type, TREE_TYPE (@1)))))
> > > +
> > > +(match (usadd_right_part_2 @0 @1)
> > > + (negate (convert (gt @0 (plus:c @0 @1))))
> > > + (if (INTEGRAL_TYPE_P (type)
> > > +      && TYPE_UNSIGNED (TREE_TYPE (@0))
> > > +      && types_match (type, TREE_TYPE (@0))
> > > +      && types_match (type, TREE_TYPE (@1)))))
> >
> > Predicates can be overloaded, so these two can just be usadd_right_part which
> > then...
> >
> > > +
> > > +/* Unsigned saturation add. Case 1 (branchless):
> > > +   SAT_U_ADD = (X + Y) | - ((X + Y) < X) or
> > > +   SAT_U_ADD = (X + Y) | - (X > (X + Y)).  */
> > > +(simplify
> > > + (bit_ior:c
> > > +  (usadd_left_part_1 @0 @1)
> > > +  (usadd_right_part_1 @0 @1))
> > > + (if (optimize) (IFN_SAT_ADD @0 @1)))
> >
> >
> > The optimize checks in the match.pd file are weird as it seems to check if we have
> > optimizations enabled?
> >
> > We don't typically need to do this.
> >
> > > +(simplify
> > > + (bit_ior:c
> > > +  (usadd_left_part_1 @0 @1)
> > > +  (usadd_right_part_2 @0 @1))
> > > + (if (optimize) (IFN_SAT_ADD @0 @1)))
> > > +
> >
> > Allows you to collapse rules like these into one line. Similarly for below.
> >
> > Note  that even when moving to gimple-isel you can reuse the match.pd code by
> > Leveraging it to build the predicates for you and call them from another pass.
> > See how ctz_table_index is used for example.
> >
> > Doing this, moving it to gimple-isel.cc should be easy.
> >
> > > +/* Unsigned saturation add. Case 2 (branch):
> > > +   SAT_U_ADD = (X + Y) >= x ? (X + Y) : -1 or
> > > +   SAT_U_ADD = x <= (X + Y) ? (X + Y) : -1.  */
> > > +(simplify
> > > + (cond (ge (usadd_left_part_1@2 @0 @1) @0) @2 integer_minus_onep)
> > > + (if (optimize) (IFN_SAT_ADD @0 @1)))
> > > +(simplify
> > > + (cond (le @0 (usadd_left_part_1@2 @0 @1)) @2 integer_minus_onep)
> > > + (if (optimize) (IFN_SAT_ADD @0 @1)))
> > > +
> > > +/* Vect recog pattern will leverage unsigned_integer_sat_add.  */
> > > +(match (unsigned_integer_sat_add @0 @1)
> > > + (bit_ior:c
> > > +  (usadd_left_part_1 @0 @1)
> > > +  (usadd_right_part_1 @0 @1))
> > > + (if (optimize)))
> > > +(match (unsigned_integer_sat_add @0 @1)
> > > + (bit_ior:c
> > > +  (usadd_left_part_1 @0 @1)
> > > +  (usadd_right_part_2 @0 @1))
> > > + (if (optimize)))
> > > +(match (unsigned_integer_sat_add @0 @1)
> > > + (cond (ge (usadd_left_part_1@2 @0 @1) @0) @2 integer_minus_onep)
> > > + (if (optimize)))
> > > +(match (unsigned_integer_sat_add @0 @1)
> > > + (cond (le @0 (usadd_left_part_1@2 @0 @1)) @2 integer_minus_onep)
> > > + (if (optimize)))
> > > +
> > >  /* x >  y  &&  x != XXX_MIN  -->  x > y
> > >     x >  y  &&  x == XXX_MIN  -->  false . */
> > >  (for eqne (eq ne)
> > > diff --git a/gcc/optabs.def b/gcc/optabs.def
> > > index ad14f9328b9..3f2cb46aff8 100644
> > > --- a/gcc/optabs.def
> > > +++ b/gcc/optabs.def
> > > @@ -111,8 +111,8 @@ OPTAB_NX(add_optab, "add$F$a3")
> > >  OPTAB_NX(add_optab, "add$Q$a3")
> > >  OPTAB_VL(addv_optab, "addv$I$a3", PLUS, "add", '3', gen_intv_fp_libfunc)
> > >  OPTAB_VX(addv_optab, "add$F$a3")
> > > -OPTAB_NL(ssadd_optab, "ssadd$Q$a3", SS_PLUS, "ssadd", '3',
> > > gen_signed_fixed_libfunc)
> > > -OPTAB_NL(usadd_optab, "usadd$Q$a3", US_PLUS, "usadd", '3',
> > > gen_unsigned_fixed_libfunc)
> > > +OPTAB_NL(ssadd_optab, "ssadd$a3", SS_PLUS, "ssadd", '3',
> > > gen_signed_fixed_libfunc)
> > > +OPTAB_NL(usadd_optab, "usadd$a3", US_PLUS, "usadd", '3',
> > > gen_unsigned_fixed_libfunc)
> > >  OPTAB_NL(sub_optab, "sub$P$a3", MINUS, "sub", '3',
> > gen_int_fp_fixed_libfunc)
> > >  OPTAB_NX(sub_optab, "sub$F$a3")
> > >  OPTAB_NX(sub_optab, "sub$Q$a3")
> > ...
> > > diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
> > > index 87c2acff386..77924cf10f8 100644
> > > --- a/gcc/tree-vect-patterns.cc
> > > +++ b/gcc/tree-vect-patterns.cc
> > > @@ -4487,6 +4487,67 @@ vect_recog_mult_pattern (vec_info *vinfo,
> > >    return pattern_stmt;
> > >  }
> > >
> > > +static gimple *
> > > +vect_sat_add_build_call (vec_info *vinfo, gimple *last_stmt, tree *type_out,
> > > +			 tree op_0, tree op_1)
> > > +{
> > > +  tree itype = TREE_TYPE (op_0);
> > > +  tree vtype = get_vectype_for_scalar_type (vinfo, itype);
> > > +
> > > +  if (vtype == NULL_TREE)
> > > +    return NULL;
> > > +
> > > +  if (!direct_internal_fn_supported_p (IFN_SAT_ADD, vtype,
> > > OPTIMIZE_FOR_SPEED))
> > > +    return NULL;
> > > +
> > > +  *type_out = vtype;
> > > +
> > > +  gcall *call = gimple_build_call_internal (IFN_SAT_ADD, 2, op_0, op_1);
> > > +  gimple_call_set_lhs (call, vect_recog_temp_ssa_var (itype, NULL));
> > > +  gimple_call_set_nothrow (call, /* nothrow_p */ true);
> > > +  gimple_set_location (call, gimple_location (last_stmt));
> > > +
> > > +  vect_pattern_detected ("vect_recog_sat_add_pattern", last_stmt);
> > > +
> > > +  return call;
> > > +}
> >
> > The function has only one caller, you should just inline it into the pattern.
> >
> > > +/*
> > > + * Try to detect saturation add pattern (SAT_ADD), aka below gimple:
> > > + *   _7 = _4 + _6;
> > > + *   _8 = _4 > _7;
> > > + *   _9 = (long unsigned int) _8;
> > > + *   _10 = -_9;
> > > + *   _12 = _7 | _10;
> > > + *
> > > + * And then simplied to
> > > + *   _12 = .SAT_ADD (_4, _6);
> > > + */
> > > +extern bool gimple_unsigned_integer_sat_add (tree, tree*, tree (*)(tree));
> > > +
> > > +static gimple *
> > > +vect_recog_sat_add_pattern (vec_info *vinfo, stmt_vec_info stmt_vinfo,
> > > +			    tree *type_out)
> > > +{
> > > +  gimple *last_stmt = stmt_vinfo->stmt;
> > > +
> >
> > STMT_VINFO_STMT (stmt_vinfo);
> >
> > > +  if (!is_gimple_assign (last_stmt))
> > > +    return NULL;
> > > +
> > > +  tree res_ops[2];
> > > +  tree lhs = gimple_assign_lhs (last_stmt);
> >
> > Once you inline vect_sat_add_build_call you can do the check for
> > vtype here, which is the cheaper check so perform it early.
> >
> > Otherwise this looks really good!
> >
> > Thanks for working on it,
> >
> > Tamar
> >
> > > +
> > > +  if (gimple_unsigned_integer_sat_add (lhs, res_ops, NULL))
> > > +    {
> > > +      gimple *call = vect_sat_add_build_call (vinfo, last_stmt, type_out,
> > > +					      res_ops[0], res_ops[1]);
> > > +      if (call)
> > > +	return call;
> > > +    }
> > > +
> > > +  return NULL;
> > > +}
> > > +
> > >  /* Detect a signed division by a constant that wouldn't be
> > >     otherwise vectorized:
> > >
> > > @@ -6987,6 +7048,7 @@ static vect_recog_func vect_vect_recog_func_ptrs[]
> =
> > {
> > >    { vect_recog_vector_vector_shift_pattern, "vector_vector_shift" },
> > >    { vect_recog_divmod_pattern, "divmod" },
> > >    { vect_recog_mult_pattern, "mult" },
> > > +  { vect_recog_sat_add_pattern, "sat_add" },
> > >    { vect_recog_mixed_size_cond_pattern, "mixed_size_cond" },
> > >    { vect_recog_gcond_pattern, "gcond" },
> > >    { vect_recog_bool_pattern, "bool" },
> > > --
> > > 2.34.1


  reply	other threads:[~2024-05-02 12:57 UTC|newest]

Thread overview: 21+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2024-04-06 12:07 [PATCH v1] " pan2.li
2024-04-07  7:03 ` [PATCH v2] " pan2.li
2024-04-28 12:10   ` Li, Pan2
2024-04-29  7:53 ` [PATCH v3] " pan2.li
2024-05-01 17:06   ` Tamar Christina
2024-05-02  3:10     ` Li, Pan2
2024-05-02  3:25       ` Tamar Christina
2024-05-02 10:57         ` Li, Pan2
2024-05-02 12:57           ` Tamar Christina [this message]
2024-05-03  1:45             ` Li, Pan2
2024-05-06 14:48 ` [PATCH v4 1/3] Internal-fn: Support new IFN SAT_ADD for unsigned scalar int pan2.li
2024-05-13  9:09   ` Tamar Christina
2024-05-13 13:36     ` Li, Pan2
2024-05-13 15:03       ` Tamar Christina
2024-05-14  1:50         ` Li, Pan2
2024-05-14 13:18   ` Richard Biener
2024-05-14 14:14     ` Li, Pan2
2024-05-06 14:49 ` [PATCH v4 2/3] VECT: Support new IFN SAT_ADD for unsigned vector int pan2.li
2024-05-13  9:10   ` Tamar Christina
2024-05-14 13:21   ` Richard Biener
2024-05-06 14:50 ` [PATCH v4 3/3] RISC-V: Implement IFN SAT_ADD for both the scalar and vector pan2.li
