From: Aldy Hernandez <aldyh@redhat.com>
To: Tamar Christina <Tamar.Christina@arm.com>
Cc: "gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>,
	nd <nd@arm.com>,  "amacleod@redhat.com" <amacleod@redhat.com>
Subject: Re: [PATCH 2/4][ranger]: Add range-ops for widen addition and widen multiplication [PR108583]
Date: Wed, 8 Mar 2023 09:57:06 +0100	[thread overview]
Message-ID: <CAGm3qMU=HxugRQQyW8c6uSuMYBrRhWdA25h3aHdMo8Buf=gk9w@mail.gmail.com> (raw)
In-Reply-To: <VI1PR08MB5325450B32D107197024C14EFFB69@VI1PR08MB5325.eurprd08.prod.outlook.com>

As Andrew has been advising on this one, I'd prefer for him to review
it.  However, he's on vacation this week.  FYI...

Aldy

On Mon, Mar 6, 2023 at 12:22 PM Tamar Christina <Tamar.Christina@arm.com> wrote:
>
> Ping.
>
> And I've updated the patch to reject cases that we don't expect or can't handle cleanly for now.
>
> Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.
>
> Ok for master?
>
> Thanks,
> Tamar
>
> gcc/ChangeLog:
>
>         PR target/108583
>         * gimple-range-op.h (gimple_range_op_handler): Add maybe_non_standard.
>         * gimple-range-op.cc (gimple_range_op_handler::gimple_range_op_handler):
>         Use it.
>         (gimple_range_op_handler::maybe_non_standard): New.
>         * range-op.cc (class operator_widen_plus_signed,
>         operator_widen_plus_signed::wi_fold, class operator_widen_plus_unsigned,
>         operator_widen_plus_unsigned::wi_fold, class operator_widen_mult_signed,
>         operator_widen_mult_signed::wi_fold, class operator_widen_mult_unsigned,
>         operator_widen_mult_unsigned::wi_fold,
>         ptr_op_widen_mult_signed, ptr_op_widen_mult_unsigned,
>         ptr_op_widen_plus_signed, ptr_op_widen_plus_unsigned): New.
>         * range-op.h (ptr_op_widen_mult_signed, ptr_op_widen_mult_unsigned,
>         ptr_op_widen_plus_signed, ptr_op_widen_plus_unsigned): New.
>
> Co-Authored-By: Andrew MacLeod <amacleod@redhat.com>
>
> --- Inline copy of patch ---
>
> diff --git a/gcc/gimple-range-op.h b/gcc/gimple-range-op.h
> index 743b858126e333ea9590c0f175aacb476260c048..1bf63c5ce6f5db924a1f5907ab4539e376281bd0 100644
> --- a/gcc/gimple-range-op.h
> +++ b/gcc/gimple-range-op.h
> @@ -41,6 +41,7 @@ public:
>                  relation_trio = TRIO_VARYING);
>  private:
>    void maybe_builtin_call ();
> +  void maybe_non_standard ();
>    gimple *m_stmt;
>    tree m_op1, m_op2;
>  };
> diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
> index d9dfdc56939bb62ade72726b15c3d5e87e4ddcd1..a5d625387e712c170e1e68f6a7d494027f6ef0d0 100644
> --- a/gcc/gimple-range-op.cc
> +++ b/gcc/gimple-range-op.cc
> @@ -179,6 +179,8 @@ gimple_range_op_handler::gimple_range_op_handler (gimple *s)
>    // statements.
>    if (is_a <gcall *> (m_stmt))
>      maybe_builtin_call ();
> +  else
> +    maybe_non_standard ();
>  }
>
>  // Calculate what we can determine of the range of this unary
> @@ -764,6 +766,57 @@ public:
>    }
>  } op_cfn_parity;
>
> +// Set up a gimple_range_op_handler for any non-standard operation which can
> +// be supported via range-ops.
> +
> +void
> +gimple_range_op_handler::maybe_non_standard ()
> +{
> +  range_operator *signed_op = ptr_op_widen_mult_signed;
> +  range_operator *unsigned_op = ptr_op_widen_mult_unsigned;
> +  if (gimple_code (m_stmt) == GIMPLE_ASSIGN)
> +    switch (gimple_assign_rhs_code (m_stmt))
> +      {
> +       case WIDEN_PLUS_EXPR:
> +       {
> +         signed_op = ptr_op_widen_plus_signed;
> +         unsigned_op = ptr_op_widen_plus_unsigned;
> +       }
> +       gcc_fallthrough ();
> +       case WIDEN_MULT_EXPR:
> +       {
> +         m_valid = false;
> +         m_op1 = gimple_assign_rhs1 (m_stmt);
> +         m_op2 = gimple_assign_rhs2 (m_stmt);
> +         tree ret = gimple_assign_lhs (m_stmt);
> +         bool signed1 = TYPE_SIGN (TREE_TYPE (m_op1)) == SIGNED;
> +         bool signed2 = TYPE_SIGN (TREE_TYPE (m_op2)) == SIGNED;
> +         bool signed_ret = TYPE_SIGN (TREE_TYPE (ret)) == SIGNED;
> +
> +         /* Normally these operands should all have the same sign, but
> +            some passes can violate this by taking mismatched sign args.  At
> +            the moment the only case that's possible is mismatched inputs
> +            and an unsigned output.  Once ranger supports signs for the
> +            operands we can fix this properly; for now only accept the case
> +            we can handle correctly.  */
> +         if ((signed1 ^ signed2) && signed_ret)
> +           return;
> +
> +         m_valid = true;
> +         if (signed2 && !signed1)
> +           std::swap (m_op1, m_op2);
> +
> +         if (signed1 || signed2)
> +           m_int = signed_op;
> +         else
> +           m_int = unsigned_op;
> +         break;
> +       }
> +       default:
> +         break;
> +      }
> +}
> +
>  // Set up a gimple_range_op_handler for any built in function which can be
>  // supported via range-ops.
>
> diff --git a/gcc/range-op.h b/gcc/range-op.h
> index f00b747f08a1fa8404c63bfe5a931b4048008b03..b1eeac70df81f2bdf228af7adff5399e7ac5e5d6 100644
> --- a/gcc/range-op.h
> +++ b/gcc/range-op.h
> @@ -311,4 +311,8 @@ private:
>  // This holds the range op table for floating point operations.
>  extern floating_op_table *floating_tree_table;
>
> +extern range_operator *ptr_op_widen_mult_signed;
> +extern range_operator *ptr_op_widen_mult_unsigned;
> +extern range_operator *ptr_op_widen_plus_signed;
> +extern range_operator *ptr_op_widen_plus_unsigned;
>  #endif // GCC_RANGE_OP_H
> diff --git a/gcc/range-op.cc b/gcc/range-op.cc
> index 5c67bce6d3aab81ad3186b902e09d6a96878d9bb..718ccb6f074e1a2a9ef1b7a5d4e879898d4a7fc3 100644
> --- a/gcc/range-op.cc
> +++ b/gcc/range-op.cc
> @@ -1556,6 +1556,73 @@ operator_plus::op2_range (irange &r, tree type,
>    return op1_range (r, type, lhs, op1, rel.swap_op1_op2 ());
>  }
>
> +class operator_widen_plus_signed : public range_operator
> +{
> +public:
> +  virtual void wi_fold (irange &r, tree type,
> +                       const wide_int &lh_lb,
> +                       const wide_int &lh_ub,
> +                       const wide_int &rh_lb,
> +                       const wide_int &rh_ub) const;
> +} op_widen_plus_signed;
> +range_operator *ptr_op_widen_plus_signed = &op_widen_plus_signed;
> +
> +void
> +operator_widen_plus_signed::wi_fold (irange &r, tree type,
> +                                    const wide_int &lh_lb,
> +                                    const wide_int &lh_ub,
> +                                    const wide_int &rh_lb,
> +                                    const wide_int &rh_ub) const
> +{
> +   wi::overflow_type ov_lb, ov_ub;
> +   signop s = TYPE_SIGN (type);
> +
> +   wide_int lh_wlb
> +     = wide_int::from (lh_lb, wi::get_precision (lh_lb) * 2, SIGNED);
> +   wide_int lh_wub
> +     = wide_int::from (lh_ub, wi::get_precision (lh_ub) * 2, SIGNED);
> +   wide_int rh_wlb = wide_int::from (rh_lb, wi::get_precision (rh_lb) * 2, s);
> +   wide_int rh_wub = wide_int::from (rh_ub, wi::get_precision (rh_ub) * 2, s);
> +
> +   wide_int new_lb = wi::add (lh_wlb, rh_wlb, s, &ov_lb);
> +   wide_int new_ub = wi::add (lh_wub, rh_wub, s, &ov_ub);
> +
> +   r = int_range<2> (type, new_lb, new_ub);
> +}
> +
> +class operator_widen_plus_unsigned : public range_operator
> +{
> +public:
> +  virtual void wi_fold (irange &r, tree type,
> +                       const wide_int &lh_lb,
> +                       const wide_int &lh_ub,
> +                       const wide_int &rh_lb,
> +                       const wide_int &rh_ub) const;
> +} op_widen_plus_unsigned;
> +range_operator *ptr_op_widen_plus_unsigned = &op_widen_plus_unsigned;
> +
> +void
> +operator_widen_plus_unsigned::wi_fold (irange &r, tree type,
> +                                      const wide_int &lh_lb,
> +                                      const wide_int &lh_ub,
> +                                      const wide_int &rh_lb,
> +                                      const wide_int &rh_ub) const
> +{
> +   wi::overflow_type ov_lb, ov_ub;
> +   signop s = TYPE_SIGN (type);
> +
> +   wide_int lh_wlb
> +     = wide_int::from (lh_lb, wi::get_precision (lh_lb) * 2, UNSIGNED);
> +   wide_int lh_wub
> +     = wide_int::from (lh_ub, wi::get_precision (lh_ub) * 2, UNSIGNED);
> +   wide_int rh_wlb = wide_int::from (rh_lb, wi::get_precision (rh_lb) * 2, s);
> +   wide_int rh_wub = wide_int::from (rh_ub, wi::get_precision (rh_ub) * 2, s);
> +
> +   wide_int new_lb = wi::add (lh_wlb, rh_wlb, s, &ov_lb);
> +   wide_int new_ub = wi::add (lh_wub, rh_wub, s, &ov_ub);
> +
> +   r = int_range<2> (type, new_lb, new_ub);
> +}
>
>  class operator_minus : public range_operator
>  {
> @@ -2031,6 +2098,70 @@ operator_mult::wi_fold (irange &r, tree type,
>      }
>  }
>
> +class operator_widen_mult_signed : public range_operator
> +{
> +public:
> +  virtual void wi_fold (irange &r, tree type,
> +                       const wide_int &lh_lb,
> +                       const wide_int &lh_ub,
> +                       const wide_int &rh_lb,
> +                       const wide_int &rh_ub)
> +    const;
> +} op_widen_mult_signed;
> +range_operator *ptr_op_widen_mult_signed = &op_widen_mult_signed;
> +
> +void
> +operator_widen_mult_signed::wi_fold (irange &r, tree type,
> +                                    const wide_int &lh_lb,
> +                                    const wide_int &lh_ub,
> +                                    const wide_int &rh_lb,
> +                                    const wide_int &rh_ub) const
> +{
> +  signop s = TYPE_SIGN (type);
> +
> +  wide_int lh_wlb = wide_int::from (lh_lb, wi::get_precision (lh_lb) * 2, SIGNED);
> +  wide_int lh_wub = wide_int::from (lh_ub, wi::get_precision (lh_ub) * 2, SIGNED);
> +  wide_int rh_wlb = wide_int::from (rh_lb, wi::get_precision (rh_lb) * 2, s);
> +  wide_int rh_wub = wide_int::from (rh_ub, wi::get_precision (rh_ub) * 2, s);
> +
> +  /* We don't expect a widening multiplication to be able to overflow, but
> +     range calculations for multiplications are complicated.  After widening
> +     the operands, let the regular multiplication fold handle it.  */
> +  return op_mult.wi_fold (r, type, lh_wlb, lh_wub, rh_wlb, rh_wub);
> +}
> +
> +
> +class operator_widen_mult_unsigned : public range_operator
> +{
> +public:
> +  virtual void wi_fold (irange &r, tree type,
> +                       const wide_int &lh_lb,
> +                       const wide_int &lh_ub,
> +                       const wide_int &rh_lb,
> +                       const wide_int &rh_ub)
> +    const;
> +} op_widen_mult_unsigned;
> +range_operator *ptr_op_widen_mult_unsigned = &op_widen_mult_unsigned;
> +
> +void
> +operator_widen_mult_unsigned::wi_fold (irange &r, tree type,
> +                                      const wide_int &lh_lb,
> +                                      const wide_int &lh_ub,
> +                                      const wide_int &rh_lb,
> +                                      const wide_int &rh_ub) const
> +{
> +  signop s = TYPE_SIGN (type);
> +
> +  wide_int lh_wlb = wide_int::from (lh_lb, wi::get_precision (lh_lb) * 2, UNSIGNED);
> +  wide_int lh_wub = wide_int::from (lh_ub, wi::get_precision (lh_ub) * 2, UNSIGNED);
> +  wide_int rh_wlb = wide_int::from (rh_lb, wi::get_precision (rh_lb) * 2, s);
> +  wide_int rh_wub = wide_int::from (rh_ub, wi::get_precision (rh_ub) * 2, s);
> +
> +  /* We don't expect a widening multiplication to be able to overflow, but
> +     range calculations for multiplications are complicated.  After widening
> +     the operands, let the regular multiplication fold handle it.  */
> +  return op_mult.wi_fold (r, type, lh_wlb, lh_wub, rh_wlb, rh_wub);
> +}
>
>  class operator_div : public cross_product_operator
>  {
>

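For illustration, here is a minimal, self-contained sketch (plain C++, not the
ranger API; the helper name pick_widen_op is made up for this example) of the
sign-dispatch logic that maybe_non_standard implements above: mismatched-sign
operands with a signed result are rejected, otherwise the signed or unsigned
operator variant is chosen and the operands are canonicalized so a signed one
comes first.

  #include <cstdio>

  enum class widen_op { none, widen_signed, widen_unsigned };

  static widen_op
  pick_widen_op (bool &swap_ops, bool signed1, bool signed2, bool signed_ret)
  {
    swap_ops = false;
    /* Mismatched operand signs with a signed result are not handled yet.  */
    if ((signed1 ^ signed2) && signed_ret)
      return widen_op::none;
    /* Canonicalize so that a signed operand, if any, is operand 1.  */
    if (signed2 && !signed1)
      swap_ops = true;
    return (signed1 || signed2) ? widen_op::widen_signed
                                : widen_op::widen_unsigned;
  }

  int
  main ()
  {
    bool swap;
    /* unsigned x unsigned -> unsigned variant (prints 2).  */
    printf ("%d\n", (int) pick_widen_op (swap, false, false, false));
    /* signed x unsigned with a signed result -> rejected (prints 0).  */
    printf ("%d\n", (int) pick_widen_op (swap, true, false, true));
    return 0;
  }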

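And a rough numeric illustration (ordinary C++ integers rather than wide_int)
of the wi_fold approach used by the new operators: each operand bound is first
widened to twice the precision, and only then are the bounds added or
multiplied, so the computation cannot wrap in the wider type.

  #include <cstdint>
  #include <cstdio>

  int
  main ()
  {
    /* Ranges of two uint8_t operands.  */
    uint8_t a_lb = 200, a_ub = 255;
    uint8_t b_lb = 100, b_ub = 255;

    /* WIDEN_PLUS_EXPR-style fold: promote to 16 bits, then add the bounds.  */
    uint16_t plus_lb = (uint16_t) a_lb + (uint16_t) b_lb;  /* 300 */
    uint16_t plus_ub = (uint16_t) a_ub + (uint16_t) b_ub;  /* 510 */

    /* WIDEN_MULT_EXPR-style fold: promote to 16 bits, then multiply.  For
       unsigned ranges the products of the bounds already give the extremes.  */
    uint16_t mult_lb = (uint16_t) a_lb * (uint16_t) b_lb;  /* 20000 */
    uint16_t mult_ub = (uint16_t) a_ub * (uint16_t) b_ub;  /* 65025 */

    printf ("widen-plus range: [%u, %u]\n", (unsigned) plus_lb, (unsigned) plus_ub);
    printf ("widen-mult range: [%u, %u]\n", (unsigned) mult_lb, (unsigned) mult_ub);
    return 0;
  }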