* [gcc r13-3901] [range-ops] Update known bitmasks using CCP for all operators.
@ 2022-11-11 13:53 Aldy Hernandez
From: Aldy Hernandez @ 2022-11-11 13:53 UTC
  To: gcc-cvs

https://gcc.gnu.org/g:c16c40808331a02947b1ad962e85e1b40e30a707

commit r13-3901-gc16c40808331a02947b1ad962e85e1b40e30a707
Author: Aldy Hernandez <aldyh@redhat.com>
Date:   Thu Nov 10 11:24:48 2022 +0100

    [range-ops] Update known bitmasks using CCP for all operators.
    
    Use bit-CCP to calculate bitmasks for all integer operators, instead
    of the piecemeal handling we previously had for just a handful of
    operators.
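
    As a rough illustration of the bit-CCP idea (a minimal, self-contained
    sketch in plain C++; the struct and function names are invented here
    and are not GCC internals): each value carries a (val, mask) pair,
    where a bit with mask set is unknown, and a bit with mask clear is
    known and equal to the corresponding bit of val.

      #include <cstdint>

      struct bits
      {
        uint64_t val;   // Known bit values (meaningful where mask is 0).
        uint64_t mask;  // 1 = bit unknown, 0 = bit known.
      };

      // AND: a result bit is known 0 if either input bit is known 0,
      // known 1 if both input bits are known 1, and unknown otherwise.
      static bits
      ccp_and (bits a, bits b)
      {
        bits r;
        r.val = a.val & b.val;
        r.mask = (a.mask | b.mask) & (a.val | a.mask) & (b.val | b.mask);
        return r;
      }

      // OR: a result bit is known 1 if either input bit is known 1,
      // known 0 if both input bits are known 0, and unknown otherwise.
      static bits
      ccp_or (bits a, bits b)
      {
        bits r;
        r.val = a.val | b.val;
        r.mask = (a.mask | b.mask)
                 & ~((a.val & ~a.mask) | (b.val & ~b.mask));
        return r;
      }

      // The "maybe nonzero" bits are those known 1 or unknown, i.e.
      // val | mask -- the quantity a range's nonzero-bits mask tracks.
      static uint64_t
      maybe_nonzero (bits x)
      {
        return x.val | x.mask;
      }

    For example, if one operand is known even (bit 0 known zero) and the
    other is completely unknown, ccp_and yields a result whose bit 0 is
    known zero, so maybe_nonzero reports bit 0 as clear.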
    
    This sets us up nicely for tracking known-one bitmasks in the next
    release, as all we'll have to do is store them in the irange.
    
    All in all, this series of patches incurs a 1.9% penalty to VRP, with
    no measurable difference in overall compile time.  The reason is
    threefold:
    
    (a) There's double dispatch going on: first the virtual dispatch
    through the range-op entries, and then the switch in bit_value_binop
    (see the toy sketch after this list).
    
    (b) The maybe-nonzero mask is stored as a tree, so there is constant
    back and forth between trees and wide_ints.  This will be a non-issue
    next release, when we convert irange to wide_ints.
    
    (c) New functionality has a cost.  We were handling 2 cases (plus
    casts).  Now we handle 20.
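
    A toy illustration of the double dispatch in (a), with invented names
    (this is not the actual GCC code): one virtual dispatch selects the
    per-operator entry, and then a shared helper re-dispatches on the
    operation code with a switch, the way bit_value_binop does.

      enum op_code { OP_PLUS, OP_BIT_AND };

      // Second dispatch: a shared helper switching on the code.
      static void
      update_bits (op_code code)
      {
        switch (code)
          {
          case OP_PLUS:    /* addition transfer function */    break;
          case OP_BIT_AND: /* bitwise-and transfer function */ break;
          }
      }

      // First dispatch: the virtual per-operator entry point.
      struct toy_range_operator
      {
        virtual ~toy_range_operator () {}
        virtual void fold_range (op_code code)
        {
          // ... fold the range itself ...
          update_bits (code);  // then dispatch a second time
        }
      };

    Inlining the relevant bit_value_binop case into each operator's entry
    would collapse the two dispatches into one, which is the experiment
    described below.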
    
    I can play around with moving the bit_value_binop cases into inlined
    methods in the different range-op entries and see if that improves
    anything, but I doubt addressing (a) buys us that much.  It's certainly
    something that can be done in stage3 if the overhead proves measurable.
    
    P.S. It would be nice in the future to teach the op[12]_range methods
    about the masks.
    
    gcc/ChangeLog:
    
            * range-op.cc (range_operator::fold_range): Call
            update_known_bitmask.
            (operator_bitwise_and::fold_range): Avoid setting nonzero bits
            when range is undefined.

Diff:
---
 gcc/range-op.cc | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 00a736e983d..9eec46441a3 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -245,6 +245,7 @@ range_operator::fold_range (irange &r, tree type,
       wi_fold_in_parts (r, type, lh.lower_bound (), lh.upper_bound (),
 			rh.lower_bound (), rh.upper_bound ());
       op1_op2_relation_effect (r, type, lh, rh, rel);
+      update_known_bitmask (r, m_code, lh, rh);
       return true;
     }
 
@@ -262,10 +263,12 @@ range_operator::fold_range (irange &r, tree type,
 	if (r.varying_p ())
 	  {
 	    op1_op2_relation_effect (r, type, lh, rh, rel);
+	    update_known_bitmask (r, m_code, lh, rh);
 	    return true;
 	  }
       }
   op1_op2_relation_effect (r, type, lh, rh, rel);
+  update_known_bitmask (r, m_code, lh, rh);
   return true;
 }
 
@@ -2873,7 +2876,7 @@ operator_bitwise_and::fold_range (irange &r, tree type,
 {
   if (range_operator::fold_range (r, type, lh, rh))
     {
-      if (!lh.undefined_p () && !rh.undefined_p ())
+      if (!r.undefined_p () && !lh.undefined_p () && !rh.undefined_p ())
 	r.set_nonzero_bits (wi::bit_and (lh.get_nonzero_bits (),
 					 rh.get_nonzero_bits ()));
       return true;
