public inbox for gcc-patches@gcc.gnu.org
* Project Ranger Submission / On Demand VRP
@ 2020-10-02 16:56 Andrew MacLeod
  2020-10-02 16:58 ` [PATCH 1/6] Ranger patches Andrew MacLeod
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Andrew MacLeod @ 2020-10-02 16:56 UTC (permalink / raw)
  To: gcc-patches

Where to start?

   This is the culmination of numerous years work on the ranger for 
generating accurate on-demand ranges in GCC.
There are 2 patch sets as part of this submission:

1)  The ranger & enhancements to EVRP
2)  Other passes converted to the ranger

There is still some missing functionality in the ranger which is 
required for it to be a full replacement for either EVRP or VRP, but it 
is complete enough to offer some enhanced functionality which we think 
is worthwhile.

It is still purely range based, which means it does not currently handle 
equivalences, nor other relations.  I have the relation oracle 
prototyped, and may still be able to get at least some of it in before 
the end of stage 1.

I will go into more details with the "hybrid" EVRP mode in the intro to 
that patchset, but basically you can run EVRP in one of 3 modes:
   - classic EVRP mode,
   - Ranger only (RVRP) or
   - hybrid mode where both engines are used.  This allows us to perform 
extra optimizations we could not perform before, all while continuing to 
exercise both engines fully, as well as articulating in dump files what 
the differences are.

performance:

Performance has been the primary holdup, and it is still not as good as 
it was.  The original irange class was wide_int based and allowed quick 
and efficient manipulation of ranges.  Back then, we were comparable to 
EVRP in speed.  When we merged value_range with irange last year to 
produce int_range, we changed the internal representation to be tree 
based.  This has noticeably slowed down the multi-range code, since the 
various sub-range pairs are now allocated as trees and looked up in a 
hash table each time.  That, plus various other tweaks, has slowed us 
down.  We have plans for addressing at least some of this discrepancy.

Regardless, the net effect is that our performance runs show the ranger 
now takes about 70% longer than EVRP when compiling.  Since the new 
hybrid mode uses both engines, we see a 170% increase in the time 
required to execute EVRP.  EVRP starts off as a very fast pass; in a 
compile of 242 GCC source files, this amounts to about a 3 second 
increase out of 280, or about 1% overall.

This is offset by the speedup we get by converting the -Wrestrict pass 
to use the ranger.  That pass now runs in about 15% of its original 
time.  Yes, that is an 85% reduction, because it only calculates the few 
ranges it requires.

The end result of all patches being applied is that we find overall 
compile time to typically be slightly faster with these changes than 
before, plus we are solving more cases within EVRP.  This data is backed 
up by a callgrind analysis of a compile, where fewer instructions end up 
being executed with this patch.



Regardless, our goal with this submission is to enable other passes to 
begin using the ranger's functionality now, as well as getting it 
regularly exercised while we move toward a complete VRP replacement.  
The more we replace, the faster things will get :-)

The idea is that we run in hybrid mode for the rest of stage 1, which 
fully exercises both engines, and see where we end up early in the next 
stage.  We can then assess, based on functionality vs. speed, whether we 
want EVRP to run in classic mode, hybrid mode, or RVRP-only mode for the 
next release.

During the remainder of stage 1, we will work on a combination of 
performance and functionality, and then once stage 1 closes, we'll focus 
primarily on performance.  We have a laundry list of things to attack 
and we hope to get most of the lost time back.

I will provide the ranger and the hybrid EVRP patches, and Aldy will 
follow on with the different optimization conversions.

These patches bootstrap, pass all regressions, and have additionally 
gone through an entire build of Fedora in the past week, in hybrid mode.  
At least on our branch :-)

More details in each of the patchsets.

I'll let these sit here for a few days and, barring issues, will 
re-post the latest versions early next week along with any required 
testcase tweaks for hybrid mode, and check them in.  (We're still 
tracking down a Fedora build failure, and I have a couple of testcases 
to look at with the latest merge from trunk.)

Andrew

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/6] Ranger patches.
  2020-10-02 16:56 Project Ranger Submission / On Demand VRP Andrew MacLeod
@ 2020-10-02 16:58 ` Andrew MacLeod
  2020-10-02 16:59 ` [PATCH 2/6] gimple-range-edge Andrew MacLeod
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Andrew MacLeod @ 2020-10-02 16:58 UTC (permalink / raw)
  To: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 1138 bytes --]

This patch set contains the various components that make up a ranger.

Ranger files are prefixed with "gimple-range".

gimple-range-cache.{h,cc} :  Various caches used by the ranger.
gimple-range-edge.{h,cc} :   Outgoing edge range calculations, 
particularly switch edge ranges.
gimple-range-gori.{h,cc} :   "Generate Outgoing Range Info" module 
which calculates ranges on exit from basic blocks.
gimple-range.{h,cc} :        gimple_ranger, which pulls together the 
other components and provides on-demand ranges.


These are pretty much self contained and do not require changing any 
other files.

Consumers need only include "gimple-range.h" and then invoke a ranger 
for use within the pass:

gimple_ranger ranger;
<..>
  if (ranger.range_of_expr (r, name, stmt))
     use_range (r);

Follow-on patches from Aldy will convert some of the passes, which will 
better demonstrate usage once we're up and running.

I plan to update the documentation over the next couple of weeks.

I will let these patches sit here for a few days, and barring any 
issues, will then commit them early next week.


Andrew





[-- Attachment #2: ranger1.diff --]
[-- Type: text/x-patch, Size: 449 bytes --]


	* Makefile.in (OBJS): Add gimple-range*.o.

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 50d6c83eb76..5a8fb0d7612 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1369,6 +1369,10 @@ OBJS = \
 	gimple-loop-versioning.o \
 	gimple-low.o \
 	gimple-pretty-print.o \
+	gimple-range.o \
+	gimple-range-cache.o \
+	gimple-range-edge.o \
+	gimple-range-gori.o \
 	gimple-ssa-backprop.o \
 	gimple-ssa-evrp.o \
 	gimple-ssa-evrp-analyze.o \


* [PATCH 2/6] gimple-range-edge
  2020-10-02 16:56 Project Ranger Submission / On Demand VRP Andrew MacLeod
  2020-10-02 16:58 ` [PATCH 1/6] Ranger patches Andrew MacLeod
@ 2020-10-02 16:59 ` Andrew MacLeod
  2020-10-05 12:09   ` Jakub Jelinek
  2020-10-02 17:01 ` [PATCH 3/6] gimple-range-gori Andrew MacLeod
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 9+ messages in thread
From: Andrew MacLeod @ 2020-10-02 16:59 UTC (permalink / raw)
  To: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 2249 bytes --]

This file produces constant edge ranges.  It provides a class which can 
be instantiated and will return the range on any edge.

This was originally supposed to be trivial, but switches mucked that up :-)

There are 2 basic expressions that can result in a constant range on an 
edge.  Note there is nothing related to SSA or anything else; these are 
ranges which are forced by the edge.

a gimple condition:

BB2:
     if (_3 != 0)
       goto <bb 3>; [INV]
     else
       goto <bb 5>; [INV]

A query for the range on edge 2->3 returns [1,1] (TRUE).
A query for the range on edge 2->5 returns [0,0] (FALSE).

The other case, and the reason this file exists, is switches.  There is 
no simple way to map the values implied by the labels onto the switch 
edges without walking through the switch statement and calculating them.  
The on-demand ranger was spending a lot of time recalculating these 
values for multiple requests, so this class was created to process a 
switch once and cache the values.

y = x + 4
switch (x)
      {
          case 0:
          case 1:
              return 1;
          case 2:
              return y;
          case 3:
          case 10:
              return 3;
      default:
         return x;
      }

Any request for the range on any edge leaving the switch will calculate 
all the edge ranges at once and cache them.  Subsequent queries will 
then just return the cached value.

<bb 4> :
      switch (x_2(D)) <default: <L7> [INV], case 0 ... 1: <L9> [INV], 
case 2: <L4> [INV], case 3: <L5> [INV], case 10: <L5> [INV]>

4->7      x_2(D) :      unsigned int [4, 9][11, +INF]         // default 
edge
4->8      x_2(D) :      unsigned int [0, 1]
4->5      x_2(D) :      unsigned int [2, 2]
4->6      x_2(D) :      unsigned int [3, 3][10, 10]

There is no ranger dependency, this can be used on any valid gimple IL 
to get the ranges implied by the edge.

The ranger is needed to map those values to the switch variable, and 
also apply any previous ranges or derived values (ie, if you ask for the 
range of 'y' in case 2, it will return unsigned int [6, 6]).

[-- Attachment #2: ranger2.diff --]
[-- Type: text/x-patch, Size: 8637 bytes --]


	* gimple-range-edge.h: New File.
	(outgoing_range): Calculate/cache constant outgoing edge ranges.

	* gimple-range-edge.cc: New file.
	(gimple_outgoing_range_stmt_p): New.  Find control statement.
	(outgoing_range::outgoing_range): New.
	(outgoing_range::~outgoing_range): New.
	(outgoing_range::get_edge_range): New.  Internal switch edge query.
	(outgoing_range::calc_switch_ranges): New.  Calculate switch ranges.
	(outgoing_range::edge_range_p): New.  Find constant range on edge.

diff --git a/gcc/gimple-range-edge.h b/gcc/gimple-range-edge.h
new file mode 100644
index 00000000000..400c814ac7e
--- /dev/null
+++ b/gcc/gimple-range-edge.h
@@ -0,0 +1,55 @@
+/* Gimple range edge header file.
+   Copyright (C) 2020 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod <amacleod@redhat.com>
+   and Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef GIMPLE_RANGE_EDGE_H
+#define GIMPLE_RANGE_EDGE_H
+
+// This class is used to query ranges on constant edges in GIMPLE.
+//
+// For a COND_EXPR, the TRUE edge will return [1,1] and the false edge a [0,0].
+//
+// For SWITCH_EXPR, it is awkward to calculate ranges.  When a request
+// is made, the entire switch is evaluated and the results cached.
+// Any future requests to that switch will use the cached value, providing
+// dramatic decrease in computation time.
+//
+// The API is simple, just ask for the range on the edge.
+// The return value is NULL for no range, or the branch statement which the
+// edge gets the range from, along with the range.
+
+class outgoing_range
+{
+public:
+  outgoing_range ();
+  ~outgoing_range ();
+  gimple *edge_range_p (irange &r, edge e);
+private:
+  void calc_switch_ranges (gswitch *sw);
+  bool get_edge_range (irange &r, gimple *s, edge e);
+
+  hash_map<edge, irange *> *m_edge_table;
+  irange_allocator m_range_allocator;
+}; 
+
+// If there is a range control statement at the end of block BB, return it.
+gimple *gimple_outgoing_range_stmt_p (basic_block bb);
+
+#endif  // GIMPLE_RANGE_EDGE_H
diff --git a/gcc/gimple-range-edge.cc b/gcc/gimple-range-edge.cc
new file mode 100644
index 00000000000..c5ee54fec51
--- /dev/null
+++ b/gcc/gimple-range-edge.cc
@@ -0,0 +1,197 @@
+/* Gimple range edge functionality.
+   Copyright (C) 2020 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod <amacleod@redhat.com>
+   and Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "tree.h"
+#include "gimple.h"
+#include "ssa.h"
+#include "gimple-pretty-print.h"
+#include "gimple-iterator.h"
+#include "tree-cfg.h"
+#include "gimple-range.h"
+
+// If there is a range control statement at the end of block BB, return it.
+// Otherwise return NULL.
+
+gimple *
+gimple_outgoing_range_stmt_p (basic_block bb)
+{
+  gimple_stmt_iterator gsi = gsi_last_nondebug_bb (bb);
+  if (!gsi_end_p (gsi))
+    {
+      gimple *s = gsi_stmt (gsi);
+      if (is_a<gcond *> (s) && gimple_range_handler (s))
+	return gsi_stmt (gsi);
+      gswitch *sw = dyn_cast<gswitch *> (s);
+      if (sw && irange::supports_type_p (TREE_TYPE (gimple_switch_index (sw))))
+	return gsi_stmt (gsi);
+    }
+  return NULL;
+}
+
+
+outgoing_range::outgoing_range ()
+{
+  m_edge_table = NULL;
+}
+
+outgoing_range::~outgoing_range ()
+{
+  if (m_edge_table)
+    delete m_edge_table;
+}
+
+
+// Get a range for a switch edge E from statement S and return it in R.
+// Use a cached value if it exists, or calculate it if not.
+
+bool
+outgoing_range::get_edge_range (irange &r, gimple *s, edge e)
+{
+  gcc_checking_assert (is_a<gswitch *> (s));
+  gswitch *sw = as_a<gswitch *> (s);
+
+  // ADA currently has cases where the index is 64 bits and the case
+  // arguments are 32 bit, causing a trap when we create a case_range.
+  // Until this is resolved (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87798)
+  // punt on switches where the labels don't match the argument.
+  if (gimple_switch_num_labels (sw) > 1 && 
+      TYPE_PRECISION (TREE_TYPE (CASE_LOW (gimple_switch_label (sw, 1)))) !=
+      TYPE_PRECISION (TREE_TYPE (gimple_switch_index (sw))))
+    return false;
+
+   if (!m_edge_table)
+     m_edge_table = new hash_map<edge, irange *> (n_edges_for_fn (cfun));
+
+   irange **val = m_edge_table->get (e);
+   if (!val)
+     {
+       calc_switch_ranges (sw);
+       val = m_edge_table->get (e);
+       gcc_checking_assert (val);
+     }
+    r = **val;
+  return true;
+}
+
+
+// Calculate all switch edges from SW and cache them in the hash table.
+
+void
+outgoing_range::calc_switch_ranges (gswitch *sw)
+{
+  bool existed;
+  unsigned x, lim;
+  lim = gimple_switch_num_labels (sw);
+  tree type = TREE_TYPE (gimple_switch_index (sw));
+  
+  edge default_edge = gimple_switch_default_edge (cfun, sw);
+  irange *&default_slot = m_edge_table->get_or_insert (default_edge, &existed);
+
+  // This should be the first call into this switch.  For the default
+  // range case, start with varying and intersect each other case from
+  // it.
+
+  gcc_checking_assert (!existed);
+
+  // Allocate an int_range_max for default case.
+  default_slot = m_range_allocator.allocate (255);
+  default_slot->set_varying (type);
+
+  for (x = 1; x < lim; x++)
+    {
+      edge e = gimple_switch_edge (cfun, sw, x);
+
+      // If this edge is the same as the default edge, do nothing else.
+      if (e == default_edge)
+	continue;
+
+      tree low = CASE_LOW (gimple_switch_label (sw, x));
+      tree high = CASE_HIGH (gimple_switch_label (sw, x));
+      if (!high)
+	high = low;
+
+      // Remove the case range from the default case.
+      int_range_max def_range (low, high);
+      range_cast (def_range, type);
+      def_range.invert ();
+      default_slot->intersect (def_range);
+
+      // Create/union this case with anything else on the edge.
+      int_range_max case_range (low, high);
+      range_cast (case_range, type);
+      irange *&slot = m_edge_table->get_or_insert (e, &existed);
+      if (existed)
+	{
+	  case_range.union_ (*slot);
+	  if (slot->fits_p (case_range))
+	    {
+	      *slot = case_range;
+	      continue;
+	    }
+	}
+      // If there was an existing range and it doesn't fit, we lose the memory.
+      // It'll get reclaimed when the obstack is freed.  This seems less
+      // intrusive than allocating max ranges for each case.
+      slot = m_range_allocator.allocate (case_range);
+    }
+}
+
+
+// Calculate the range forced on edge E by control flow, and return it
+// in R.  Return the statement which defines the range, otherwise
+// return NULL.
+
+gimple *
+outgoing_range::edge_range_p (irange &r, edge e)
+{
+  // Determine if there is an outgoing edge.
+  gimple *s = gimple_outgoing_range_stmt_p (e->src);
+  if (!s)
+    return NULL;
+
+  if (is_a<gcond *> (s))
+    {
+      if (e->flags & EDGE_TRUE_VALUE)
+	r = int_range<2> (boolean_true_node, boolean_true_node);
+      else if (e->flags & EDGE_FALSE_VALUE)
+	r = int_range<2> (boolean_false_node, boolean_false_node);
+      else
+	gcc_unreachable ();
+      return s;
+    }
+
+  gcc_checking_assert (is_a<gswitch *> (s));
+  gswitch *sw = as_a<gswitch *> (s);
+  tree type = TREE_TYPE (gimple_switch_index (sw));
+
+  if (!irange::supports_type_p (type))
+    return NULL;
+
+  if (get_edge_range (r, sw, e))
+    return s;
+
+  return NULL;
+}


* [PATCH 3/6] gimple-range-gori
  2020-10-02 16:56 Project Ranger Submission / On Demand VRP Andrew MacLeod
  2020-10-02 16:58 ` [PATCH 1/6] Ranger patches Andrew MacLeod
  2020-10-02 16:59 ` [PATCH 2/6] gimple-range-edge Andrew MacLeod
@ 2020-10-02 17:01 ` Andrew MacLeod
  2020-10-02 17:02 ` [PATCH 4/6] gimple-range-cache Andrew MacLeod
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Andrew MacLeod @ 2020-10-02 17:01 UTC (permalink / raw)
  To: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 1861 bytes --]

This is the true core of the ranger.

The GORI (Generates Outgoing Range Information) engine contains all the 
smarts to utilize the functionality provided via range-ops in order to 
calculate outgoing ranges on an edge, based on the control flow at the exit.

It functions *only* at the basic block level.  Based on the values on 
the control edges, it can calculate dependent ranges within the block, 
as well as restrictions implied on values coming into the block.

That's a mouthful.  A simple example is better.


  int t = a - 4;
  if (a > 3 && t < 11)
    return t;

This maps to gimple:

=========== BB 2 ============
a_5(D)  int VARYING
     <bb 2> :
     t_6 = a_5(D) + -4;
     _1 = a_5(D) > 3;
     _2 = t_6 <= 10;
     _3 = _1 & _2;
     if (_3 != 0)
       goto <bb 3>; [INV]
     else
       goto <bb 4>; [INV]

For this block, the GORI engine can figure out the following ranges:

globally true:
t_6 : int [-INF, 2147483643]

On the true edge 2->3:
2->3  (T) _1 :  _Bool [1, 1]
2->3  (T) _2 :  _Bool [1, 1]
2->3  (T) _3 :  _Bool [1, 1]
2->3  (T) a_5(D) :      int [4, 14]
2->3  (T) t_6 :         int [0, 10]

And on the false edge 2->4:
2->4  (F) _3 :  _Bool [0, 0]
2->4  (F) a_5(D) :      int [-2147483644, 3][15, +INF]
2->4  (F) t_6 :         int [-INF, -1][11, 2147483643]


As long as there are entries in range-ops for the gimple operation, it 
can wind back through an arbitrary number of expressions and, utilizing 
the expression-solving abilities of range-ops, come up with calculations.

The API for this class is pretty straightforward: simply ask for the 
range of X on an edge:
   bool outgoing_edge_range_p (irange &r, edge e, tree name);
   bool has_edge_range_p (edge e, tree name);

Everything else is hidden under the covers.

Andrew


[-- Attachment #2: ranger3.diff --]
[-- Type: text/x-patch, Size: 48810 bytes --]


	* gimple-range-gori.h: New File.
	(class gori_compute): New.  Generates Outgoing Range Info computation.
	(class gori_compute_cache): New.  Adds logical stmt cache.

	* gimple-range-gori.cc: New File.
	(class range_def_chain): New.  Definition chain calculator.
	(range_def_chain::range_def_chain): New.
	(range_def_chain::~range_def_chain): New.
	(range_def_chain::in_chain_p): New.  Query chain contents.
	(range_def_chain::build_def_chain): New.  Build a new def chain.
	(range_def_chain::has_def_chain): New.  Query existence of a chain.
	(range_def_chain::get_def_chain): New.  Get or calculate a chain.
	(class gori_map): New.  Add GORI information to def_chains.
	(gori_map::gori_map): New.
	(gori_map::~gori_map): New.
	(gori_map::exports): New.  Return export bitmap for a block.
	(gori_map::is_export_p): New.  Is SSA_NAME an export.
	(gori_map::def_chain_in_export_p): New.  Is any chain member an export.
	(gori_map::maybe_add_gori): New.  Add def chain exports if appropriate.
	(gori_map::calculate_gori): New.  Calculate summary for BB.
	(gori_map::dump): New. Dump contents.
	(gori_compute::gori_compute): New.
	(gori_compute::~gori_compute): New.
	(gori_compute::ssa_range_in_bb): New.  Get incoming ssa-name value.
	(gori_compute::expr_range_in_bb): New.  Get incoming expression value.
	(gori_compute::compute_name_range_op): New.  Calculate outgoing range
	for an ssa-name in a statement.
	(gori_compute::compute_operand_range_switch): New.  Handle switch.
	(is_gimple_logical_p): New.
	(gori_compute::compute_operand_range): New.
	(range_is_either_true_or_false): New.
	(gori_compute::logical_combine): New.  Combine ranges through a
	logical expression.
	(gori_compute::optimize_logical_operands): New.
	(gori_compute::compute_logical_operands_in_chain): New.
	(gori_compute::compute_logical_operands): New.  Calculate incoming
	ranges for both sides of a logical expression when appropriate.
	(gori_compute::compute_operand1_range): New.  Compute op1_range.
	(gori_compute::compute_operand2_range): New.  Compute op2_range.
	(gori_compute::compute_operand1_and_operand2_range): New.  op1_range
	and op2_range are the same SSA_NAME.
	(gori_compute::has_edge_range_p): New.  Can ssa-name be calculated.
	(gori_compute::dump): New.
	(gori_compute::outgoing_edge_range_p): New.  Calculate an outgoing edge
	range for ssa-name entry point.
	(class logical_stmt_cache): New.  logical evaluation cache.
	(logical_stmt_cache::logical_stmt_cache): New.
	(logical_stmt_cache::~logical_stmt_cache): New.
	(logical_stmt_cache::set_range): New.
	(logical_stmt_cache::get_range): New.
	(logical_stmt_cache::cached_name): New.  
	(logical_stmt_cache::same_cached_name): New.  Do cached names match.
	(logical_stmt_cache::cacheable_p): New.  Is statement cacheable.
	(logical_stmt_cache::slot_diagnostics): New.  Debugging diagnostics.
	(logical_stmt_cache::dump): New.
	(gori_compute_cache::gori_compute_cache): New.
	(gori_compute_cache::~gori_compute_cache): New.
	(gori_compute_cache::compute_operand_range): New.  Check cache first.
	(gori_compute_cache::cache_stmt): New.  Create entry if possible.

diff --git a/gcc/gimple-range-gori.h b/gcc/gimple-range-gori.h
new file mode 100644
index 00000000000..8ef452bf433
--- /dev/null
+++ b/gcc/gimple-range-gori.h
@@ -0,0 +1,138 @@
+/* Header file for gimple range GORI structures.
+   Copyright (C) 2017-2020 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod <amacleod@redhat.com>
+   and Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_GIMPLE_RANGE_GORI_H
+#define GCC_GIMPLE_RANGE_GORI_H
+
+
+// This class is used to determine which SSA_NAMES can have ranges
+// calculated for them on outgoing edges from basic blocks.  This represents
+// ONLY the effect of the basic block edge->src on a range.
+//
+// There are 2 primary entry points:
+//
+// has_edge_range_p (edge e, tree name)  
+//   returns true if the outgoing edge *may* be able to produce range
+//   information for ssa_name NAME on edge E.
+//   FALSE is returned if this edge does not affect the range of NAME.
+//
+// outgoing_edge_range_p (irange &range, edge e, tree name)
+//   Actually does the calculation of RANGE for name on E
+//   This represents application of whatever static range effect edge E
+//   may have on NAME, not any cumulative effect.
+
+// There are also some internal APIs
+//
+// ssa_range_in_bb ()  is an internal routine which is used to start any
+// calculation chain using SSA_NAMES which come from outside the block. ie
+//      a_2 = b_4 - 8
+//      if (a_2 < 30)
+// on the true edge, a_2 is known to be [0, 29]
+// b_4 can be calculated as [8, 37]
+// during this calculation, b_4 is considered an "import" and ssa_range_in_bb
+// is queried for a starting range which is used in the calculation.
+// A default value of VARYING provides the raw static info for the edge.
+//
+// If there is any known range for b_4 coming into this block, it can refine
+// the results.  This allows for cascading results to be propagated.
+// if b_4 is [100, 200] on entry to the block, feeds into the calculation
+// of a_2 = [92, 192], and finally on the true edge the range would be 
+// an empty range [] because it is not possible for the true edge to be taken.
+//
+// expr_range_in_bb is simply a wrapper which calls ssa_range_in_bb for 
+// SSA_NAMES and otherwise simply calculates the range of the expression.
+//
+// The remaining routines are internal use only.
+
+class gori_compute 
+{
+public:
+  gori_compute ();
+  ~gori_compute ();
+  bool outgoing_edge_range_p (irange &r, edge e, tree name);
+  bool has_edge_range_p (edge e, tree name);
+  void dump (FILE *f);
+protected:
+  virtual void ssa_range_in_bb (irange &r, tree name, basic_block bb);
+  virtual bool compute_operand_range (irange &r, gimple *stmt,
+				      const irange &lhs, tree name);
+
+  void expr_range_in_bb (irange &r, tree expr, basic_block bb);
+  bool compute_logical_operands (irange &r, gimple *stmt,
+				 const irange &lhs,
+				 tree name);
+  void compute_logical_operands_in_chain (class tf_range &range,
+					  gimple *stmt, const irange &lhs,
+					  tree name, tree op,
+					  bool op_in_chain);
+  bool optimize_logical_operands (tf_range &range, gimple *stmt,
+				  const irange &lhs, tree name, tree op);
+  bool logical_combine (irange &r, enum tree_code code, const irange &lhs,
+			const class tf_range &op1_range,
+			const class tf_range &op2_range);
+  int_range<2> m_bool_zero;           // Boolean false cached.
+  int_range<2> m_bool_one;            // Boolean true cached.
+
+private:
+  bool compute_operand_range_switch (irange &r, gswitch *stmt,
+				     const irange &lhs, tree name);
+  bool compute_name_range_op (irange &r, gimple *stmt, const irange &lhs,
+			      tree name);
+  bool compute_operand1_range (irange &r, gimple *stmt, const irange &lhs,
+			       tree name);
+  bool compute_operand2_range (irange &r, gimple *stmt, const irange &lhs,
+			       tree name);
+  bool compute_operand1_and_operand2_range (irange &r, gimple *stmt,
+					    const irange &lhs, tree name);
+
+  class gori_map *m_gori_map;
+  outgoing_range outgoing;	// Edge values for COND_EXPR & SWITCH_EXPR.
+};
+
+
+// This class adds a cache to gori_computes for logical expressions.
+//       bool result = x && y
+// requires calculation of both X and Y for both true and false results.
+// There are 4 combinations [0,0][0,0] [0,0][1,1] [1,1][0,0] and [1,1][1,1].
+// Note that each pair of possible results for X and Y is used twice, and
+// the calculation of those results is the same each time.
+//
+// The cache simply checks if a stmt is cacheable, and if so, saves both the
+// true and false results for the next time the query is made.
+//
+// This is used to speed up long chains of logical operations which
+// quickly become exponential.
+
+class gori_compute_cache : public gori_compute
+{
+public:
+  gori_compute_cache ();
+  ~gori_compute_cache ();
+protected:
+  virtual bool compute_operand_range (irange &r, gimple *stmt,
+				      const irange &lhs, tree name);
+private:
+  void cache_stmt (gimple *);
+  typedef gori_compute super;
+  class logical_stmt_cache *m_cache;
+};
+
+#endif // GCC_GIMPLE_RANGE_GORI_H
diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
new file mode 100644
index 00000000000..eaf1a445c25
--- /dev/null
+++ b/gcc/gimple-range-gori.cc
@@ -0,0 +1,1321 @@
+/* Gimple range GORI functions.
+   Copyright (C) 2017-2020 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod <amacleod@redhat.com>
+   and Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "tree.h"
+#include "gimple.h"
+#include "ssa.h"
+#include "gimple-pretty-print.h"
+#include "gimple-range.h"
+
+
+/* RANGE_DEF_CHAIN is used to determine what SSA names in a block can
+   have range information calculated for them, and what the
+   dependencies on each other are.
+
+   Information for a basic block is calculated once and stored.  It is
+   only calculated the first time a query is made, so if no queries
+   are made, there is little overhead.
+
+   The def_chain bitmap is indexed by SSA_NAME_VERSION.  Bits are set
+   within this bitmap to indicate SSA names that are defined in the
+   SAME block and used to calculate this SSA name.
+
+
+    <bb 2> :
+      _1 = x_4(D) + -2;
+      _2 = _1 * 4;
+      j_7 = foo ();
+      q_5 = _2 + 3;
+      if (q_5 <= 13)
+
+    _1  : x_4(D)
+    _2  : _1  x_4(D)
+    q_5  : _1  _2  x_4(D)
+
+    This dump indicates the bits set in the def_chain vector, as
+    well as demonstrating the def_chain bits for the related ssa_names.
+
+    Checking the chain for _2 indicates that _1 and x_4 are used in
+    its evaluation.
+
+    Def chains also only include statements which are valid gimple,
+    so a def chain will only span statements for which the range
+    engine implements operations.
+
+
+class range_def_chain
+{
+public:
+  range_def_chain ();
+  ~range_def_chain ();
+  bool has_def_chain (tree name);
+  bitmap get_def_chain (tree name);
+  bool in_chain_p (tree name, tree def);
+private:
+  vec<bitmap> m_def_chain;	// SSA_NAME : def chain components.
+  void build_def_chain (tree name, bitmap result, basic_block bb);
+};
+
+
+// Construct a range_def_chain.
+
+range_def_chain::range_def_chain ()
+{
+  m_def_chain.create (0);
+  m_def_chain.safe_grow_cleared (num_ssa_names);
+}
+
+// Destruct a range_def_chain.
+
+range_def_chain::~range_def_chain ()
+{
+  unsigned x;
+  for (x = 0; x < m_def_chain.length (); ++x)
+    if (m_def_chain[x])
+      BITMAP_FREE (m_def_chain[x]);
+  m_def_chain.release ();
+}
+
+// Return true if NAME is in the def chain of DEF.
+
+bool
+range_def_chain::in_chain_p (tree name, tree def)
+{
+  gcc_checking_assert (gimple_range_ssa_p (def));
+  gcc_checking_assert (gimple_range_ssa_p (name));
+
+  // Get the definition chain for DEF.
+  bitmap chain = get_def_chain (def);
+
+  if (chain == NULL)
+    return false;
+  return bitmap_bit_p (chain, SSA_NAME_VERSION (name));
+}
+
+// Add NAME, and the def chain of NAME if its defining statement is
+// in BB, into RESULT.
+
+void
+range_def_chain::build_def_chain (tree name, bitmap result, basic_block bb)
+{
+  bitmap b;
+  gimple *def_stmt = SSA_NAME_DEF_STMT (name);
+  // Add this operand into the result.
+  bitmap_set_bit (result, SSA_NAME_VERSION (name));
+
+  if (gimple_bb (def_stmt) == bb && !is_a<gphi *>(def_stmt))
+    {
+      // Get the def chain for the operand.
+      b = get_def_chain (name);
+      // If there was one, copy it into result.
+      if (b)
+	bitmap_ior_into (result, b);
+    }
+}
+
+// Return TRUE if NAME has been processed for a def_chain.
+
+inline bool
+range_def_chain::has_def_chain (tree name)
+{
+  // Ensure there is an entry in the internal vector.
+  unsigned v = SSA_NAME_VERSION (name);
+  if (v >= m_def_chain.length ())
+    m_def_chain.safe_grow_cleared (num_ssa_names + 1);
+  return (m_def_chain[v] != NULL);
+}
+
+// Calculate the def chain for NAME and all of its dependent
+// operands, using only names defined in the same BB.  Return the
+// bitmap of all names in the m_def_chain.  This works only for
+// supported range statements.
+
+bitmap
+range_def_chain::get_def_chain (tree name)
+{
+  tree ssa1, ssa2, ssa3;
+  unsigned v = SSA_NAME_VERSION (name);
+
+  // If it has already been processed, just return the cached value.
+  if (has_def_chain (name))
+    return m_def_chain[v];
+
+  // No definition chain for default defs.
+  if (SSA_NAME_IS_DEFAULT_DEF (name))
+    return NULL;
+
+  gimple *stmt = SSA_NAME_DEF_STMT (name);
+  if (gimple_range_handler (stmt))
+    {
+      ssa1 = gimple_range_ssa_p (gimple_range_operand1 (stmt));
+      ssa2 = gimple_range_ssa_p (gimple_range_operand2 (stmt));
+      ssa3 = NULL_TREE;
+    }
+  else if (is_a<gassign *> (stmt)
+	   && gimple_assign_rhs_code (stmt) == COND_EXPR)
+    {
+      gassign *st = as_a<gassign *> (stmt);
+      ssa1 = gimple_range_ssa_p (gimple_assign_rhs1 (st));
+      ssa2 = gimple_range_ssa_p (gimple_assign_rhs2 (st));
+      ssa3 = gimple_range_ssa_p (gimple_assign_rhs3 (st));
+    }
+  else
+    return NULL;
+
+  basic_block bb = gimple_bb (stmt);
+
+  m_def_chain[v] = BITMAP_ALLOC (NULL);
+
+  if (ssa1)
+    build_def_chain (ssa1, m_def_chain[v], bb);
+  if (ssa2)
+    build_def_chain (ssa2, m_def_chain[v], bb);
+  if (ssa3)
+    build_def_chain (ssa3, m_def_chain[v], bb);
+
+  // If we run into pathological cases where the definition chains
+  // are huge (i.e. a huge basic block fully unrolled), we might be
+  // able to limit this by deciding here that if some criteria is
+  // satisfied, we change the def_chain back to be just the
+  // ssa-names.  That will help prevent chains of a_2 = b_6 + a_8
+  // from creating a pathological case.
+  return m_def_chain[v];
+}
+
+// -------------------------------------------------------------------
+
+/* GORI_MAP is used to accumulate what SSA names in a block can
+   generate range information, and provides tools for the block ranger
+   to enable it to efficiently calculate these ranges.
+
+   GORI stands for "Generates Outgoing Range Information."
+
+   It utilizes the range_def_chain class to construct def_chains.
+   Information for a basic block is calculated once and stored.  It is
+   only calculated the first time a query is made.  If no queries are
+   made, there is little overhead.
+
+   One bitmap is maintained for each basic block:
+   m_outgoing  : a set bit indicates a range can be generated for a name.
+
+   Generally speaking, the m_outgoing vector is the union of the
+   entire def_chain of all SSA names used in the last statement of the
+   block which generate ranges.  */
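+
+/* As an illustration (reusing the example from the range_def_chain
+   comment above):
+
+    <bb 2> :
+      _1 = x_4(D) + -2;
+      _2 = _1 * 4;
+      j_7 = foo ();
+      q_5 = _2 + 3;
+      if (q_5 <= 13)
+
+   the exports for bb 2 are q_5 together with its def chain (_1, _2
+   and x_4(D)), since a range for any of those names can be
+   recalculated from the range of q_5 on either outgoing edge.  j_7
+   is not an export, as the branch reveals nothing about it.  */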
+
+class gori_map : public range_def_chain
+{
+public:
+  gori_map ();
+  ~gori_map ();
+
+  bool is_export_p (tree name, basic_block bb);
+  bool def_chain_in_export_p (tree name, basic_block bb);
+
+  void dump (FILE *f);
+  void dump (FILE *f, basic_block bb);
+private:
+  bitmap_obstack m_bitmaps;
+  vec<bitmap> m_outgoing;	// BB: Outgoing ranges calculatable on edges
+  void maybe_add_gori (tree name, basic_block bb);
+  void calculate_gori (basic_block bb);
+  bitmap exports (basic_block bb);
+};
+
+
+// Initialize a gori-map structure.
+
+gori_map::gori_map ()
+{
+  m_outgoing.create (0);
+  m_outgoing.safe_grow_cleared (last_basic_block_for_fn (cfun));
+  bitmap_obstack_initialize (&m_bitmaps);
+}
+
+// Free any memory the GORI map allocated.
+
+gori_map::~gori_map ()
+{
+  bitmap_obstack_release (&m_bitmaps);
+  m_outgoing.release ();
+}
+
+// Return the export bitmap for BB.  Calculate it if necessary.
+
+bitmap
+gori_map::exports (basic_block bb)
+{
+  if (!m_outgoing[bb->index])
+    calculate_gori (bb);
+  return m_outgoing[bb->index];
+}
+
+// Return true if a range can be generated for NAME from basic
+// block BB.
+
+bool
+gori_map::is_export_p (tree name, basic_block bb)
+{
+  return bitmap_bit_p (exports (bb), SSA_NAME_VERSION (name));
+}
+
+// Return true if any element in the def chain of NAME is in the
+// export list for BB.
+
+bool
+gori_map::def_chain_in_export_p (tree name, basic_block bb)
+{
+  bitmap a = exports (bb);
+  bitmap b = get_def_chain (name);
+  if (a && b)
+    return bitmap_intersect_p (a, b);
+  return false;
+}
+
+// If NAME is non-NULL and defined in block BB, calculate the def
+// chain and add it to m_outgoing.
+
+void
+gori_map::maybe_add_gori (tree name, basic_block bb)
+{
+  if (name)
+    {
+      gimple *s = SSA_NAME_DEF_STMT (name);
+      bitmap r = get_def_chain (name);
+      // Check if there is a def chain, and it is in this block.
+      if (r && gimple_bb (s) == bb)
+	bitmap_ior_into (m_outgoing[bb->index], r);
+      // Def chain doesn't include itself, and even if there isn't a
+      // def chain, this name should be added to exports.
+      bitmap_set_bit (m_outgoing[bb->index], SSA_NAME_VERSION (name));
+    }
+}
+
+// Calculate all the required information for BB.
+
+void
+gori_map::calculate_gori (basic_block bb)
+{
+  tree name;
+  if (bb->index >= (signed int)m_outgoing.length ())
+    m_outgoing.safe_grow_cleared (last_basic_block_for_fn (cfun));
+  gcc_checking_assert (m_outgoing[bb->index] == NULL);
+  m_outgoing[bb->index] = BITMAP_ALLOC (&m_bitmaps);
+
+  // If this block's last statement may generate range information, go
+  // calculate it.
+  gimple *stmt = gimple_outgoing_range_stmt_p (bb);
+  if (!stmt)
+    return;
+  if (is_a<gcond *> (stmt))
+    {
+      gcond *gc = as_a<gcond *>(stmt);
+      name = gimple_range_ssa_p (gimple_cond_lhs (gc));
+      maybe_add_gori (name, gimple_bb (stmt));
+
+      name = gimple_range_ssa_p (gimple_cond_rhs (gc));
+      maybe_add_gori (name, gimple_bb (stmt));
+    }
+  else
+    {
+      gswitch *gs = as_a<gswitch *>(stmt);
+      name = gimple_range_ssa_p (gimple_switch_index (gs));
+      maybe_add_gori (name, gimple_bb (stmt));
+    }
+}
+
+// Dump the table information for BB to file F.
+
+void
+gori_map::dump (FILE *f, basic_block bb)
+{
+  bool header = false;
+  const char *header_string = "bb%-4d ";
+  const char *header2 = "       ";
+  bool printed_something = false;
+  unsigned x, y;
+  bitmap_iterator bi;
+
+  // BB was not processed.
+  if (!m_outgoing[bb->index])
+    return;
+
+  // Dump the def chain for each SSA_NAME defined in BB.
+  for (x = 1; x < num_ssa_names; x++)
+    {
+      tree name = ssa_name (x);
+      if (!name)
+	continue;
+      gimple *stmt = SSA_NAME_DEF_STMT (name);
+      bitmap chain = (has_def_chain (name) ? get_def_chain (name) : NULL);
+      if (stmt && gimple_bb (stmt) == bb && chain && !bitmap_empty_p (chain))
+        {
+	  fprintf (f, header_string, bb->index);
+	  header_string = header2;
+	  header = true;
+	  print_generic_expr (f, name, TDF_SLIM);
+	  fprintf (f, " : ");
+	  EXECUTE_IF_SET_IN_BITMAP (chain, 0, y, bi)
+	    {
+	      print_generic_expr (f, ssa_name (y), TDF_SLIM);
+	      fprintf (f, "  ");
+	    }
+	  fprintf (f, "\n");
+	}
+    }
+
+  printed_something |= header;
+
+  // Now dump the export vector.
+  header = false;
+  EXECUTE_IF_SET_IN_BITMAP (m_outgoing[bb->index], 0, y, bi)
+    {
+      if (!header)
+        {
+	  fprintf (f, header_string, bb->index);
+	  fprintf (f, "exports: ");
+	  header_string = header2;
+	  header = true;
+	}
+      print_generic_expr (f, ssa_name (y), TDF_SLIM);
+      fprintf (f, "  ");
+    }
+  if (header)
+    fputc ('\n', f);
+
+  printed_something |= header;
+  if (printed_something)
+    fprintf (f, "\n");
+}
+
+// Dump the entire GORI map structure to file F.
+
+void
+gori_map::dump (FILE *f)
+{
+  basic_block bb;
+  FOR_EACH_BB_FN (bb, cfun)
+    {
+      dump (f, bb);
+      if (m_outgoing[bb->index])
+	fprintf (f, "\n");
+    }
+}
+
+DEBUG_FUNCTION void
+debug (gori_map &g)
+{
+  g.dump (stderr);
+}
+
+// -------------------------------------------------------------------
+
+// Construct a gori_compute object.
+
+gori_compute::gori_compute ()
+{
+  // Create a boolean_type true and false range.
+  m_bool_zero = int_range<2> (boolean_false_node, boolean_false_node);
+  m_bool_one = int_range<2> (boolean_true_node, boolean_true_node);
+  m_gori_map = new gori_map;
+}
+
+// Destruct a gori_compute object.
+
+gori_compute::~gori_compute ()
+{
+  delete m_gori_map;
+}
+
+// Provide a default of VARYING for all incoming SSA names.
+
+void
+gori_compute::ssa_range_in_bb (irange &r, tree name, basic_block)
+{
+  r.set_varying (TREE_TYPE (name));
+}
+
+// Calculate a range for EXPR in basic block BB and return it in R.
+
+void
+gori_compute::expr_range_in_bb (irange &r, tree expr, basic_block bb)
+{
+  if (gimple_range_ssa_p (expr))
+    ssa_range_in_bb (r, expr, bb);
+  else
+    get_tree_range (r, expr);
+}
+
+// Calculate the range for NAME if the lhs of STMT has the range
+// LHS.  Return the result in R.  Return false if no range can be
+// calculated.
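+//
+// For example, given q_5 = _2 + 3 and an LHS range of [0, 13] for
+// q_5, the range computed for _2 would be [-3, 10], which is then
+// intersected with anything already known about _2.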
+
+bool
+gori_compute::compute_name_range_op (irange &r, gimple *stmt,
+				     const irange &lhs, tree name)
+{
+  int_range_max op1_range, op2_range;
+
+  tree op1 = gimple_range_operand1 (stmt);
+  tree op2 = gimple_range_operand2 (stmt);
+
+  // Operand 1 is the name being looked for, evaluate it.
+  if (op1 == name)
+    {
+      expr_range_in_bb (op1_range, op1, gimple_bb (stmt));
+      if (!op2)
+	{
+	  // The second parameter to a unary operation is the range
+	  // for the type of operand1, but if it can be reduced
+	  // further, the results will be better.  Start with what we
+	  // know of the range of OP1 instead of the full type.
+	  return gimple_range_calc_op1 (r, stmt, lhs, op1_range);
+	}
+      // If we need the second operand, get a value and evaluate.
+      expr_range_in_bb (op2_range, op2, gimple_bb (stmt));
+      if (gimple_range_calc_op1 (r, stmt, lhs, op2_range))
+	r.intersect (op1_range);
+      else
+        r = op1_range;
+      return true;
+    }
+
+  if (op2 == name)
+    {
+      expr_range_in_bb (op1_range, op1, gimple_bb (stmt));
+      expr_range_in_bb (r, op2, gimple_bb (stmt));
+      if (gimple_range_calc_op2 (op2_range, stmt, lhs, op1_range))
+        r.intersect (op2_range);
+      return true;
+    }
+  return false;
+}
+
+// Given the switch S, return an evaluation in R for NAME when the lhs
+// evaluates to LHS.  Returning false means the name being looked for
+// was not resolvable.
+
+bool
+gori_compute::compute_operand_range_switch (irange &r, gswitch *s,
+					    const irange &lhs,
+					    tree name)
+{
+  tree op1 = gimple_switch_index (s);
+
+  // If name matches, the range is simply the range from the edge.
+  // Empty ranges are viral as they are on a path which isn't
+  // executable.
+  if (op1 == name || lhs.undefined_p ())
+    {
+      r = lhs;
+      return true;
+    }
+
+  // If op1 is in the definition chain, pass lhs back.
+  if (gimple_range_ssa_p (op1) && m_gori_map->in_chain_p (name, op1))
+    return compute_operand_range (r, SSA_NAME_DEF_STMT (op1), lhs, name);
+
+  return false;
+}
+
+// Return TRUE if GS is a logical && or || expression.
+
+static inline bool
+is_gimple_logical_p (const gimple *gs)
+{
+  // Look for boolean and/or condition.
+  if (gimple_code (gs) == GIMPLE_ASSIGN)
+    switch (gimple_expr_code (gs))
+      {
+	case TRUTH_AND_EXPR:
+	case TRUTH_OR_EXPR:
+	  return true;
+
+	case BIT_AND_EXPR:
+	case BIT_IOR_EXPR:
+	  // Bitwise operations on single bits are logical too.
+	  if (types_compatible_p (TREE_TYPE (gimple_assign_rhs1 (gs)),
+				  boolean_type_node))
+	    return true;
+	  break;
+
+	default:
+	  break;
+      }
+  return false;
+}
+
+// Return an evaluation for NAME as it would appear in STMT when the
+// statement's lhs evaluates to LHS.  If successful, return TRUE and
+// store the evaluation in R, otherwise return FALSE.
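+//
+// For example, given the chain q_5 = _2 + 3 and _2 = _1 * 4 from the
+// range_def_chain comment, a request for the range of _1 from the
+// statement defining q_5 recurses down the def chain: the LHS range
+// of q_5 yields a range for _2, which is then fed back in as the LHS
+// of _2's defining statement to yield a range for _1.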
+
+bool
+gori_compute::compute_operand_range (irange &r, gimple *stmt,
+				     const irange &lhs, tree name)
+{
+  // Empty ranges are viral as they are on an unexecutable path.
+  if (lhs.undefined_p ())
+    {
+      r.set_undefined ();
+      return true;
+    }
+  if (is_a<gswitch *> (stmt))
+    return compute_operand_range_switch (r, as_a<gswitch *> (stmt), lhs, name);
+  if (!gimple_range_handler (stmt))
+    return false;
+
+  tree op1 = gimple_range_ssa_p (gimple_range_operand1 (stmt));
+  tree op2 = gimple_range_ssa_p (gimple_range_operand2 (stmt));
+
+  // The base ranger handles NAME on this statement.
+  if (op1 == name || op2 == name)
+    return compute_name_range_op (r, stmt, lhs, name);
+
+  if (is_gimple_logical_p (stmt))
+    return compute_logical_operands (r, stmt, lhs, name);
+
+  // NAME is not in this stmt, but one of the names in it ought to be
+  // derived from it.
+  bool op1_in_chain = op1 && m_gori_map->in_chain_p (name, op1);
+  bool op2_in_chain = op2 && m_gori_map->in_chain_p (name, op2);
+  if (op1_in_chain && op2_in_chain)
+    return compute_operand1_and_operand2_range (r, stmt, lhs, name);
+  if (op1_in_chain)
+    return compute_operand1_range (r, stmt, lhs, name);
+  if (op2_in_chain)
+    return compute_operand2_range (r, stmt, lhs, name);
+
+  // If neither operand is derived, this statement tells us nothing.
+  return false;
+}
+
+// Return TRUE if range R is either a true or false compatible range.
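+//
+// For example, [0, 0], [1, 1] and ~[0, 0] (i.e. [1, MAX] for a
+// multi-bit boolean) all qualify, while [0, 1] does not, as it
+// allows both outcomes.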
+
+static bool
+range_is_either_true_or_false (const irange &r)
+{
+  if (r.undefined_p ())
+    return false;
+
+  // This is complicated by the fact that Ada has multi-bit booleans,
+  // so true can be ~[0, 0] (i.e. [1,MAX]).
+  tree type = r.type ();
+  gcc_checking_assert (types_compatible_p (type, boolean_type_node));
+  return (r.singleton_p () || !r.contains_p (build_zero_cst (type)));
+}
+
+// A pair of ranges for true/false paths.
+
+struct tf_range
+{
+  tf_range () { }
+  tf_range (const irange &t_range, const irange &f_range)
+  {
+    true_range = t_range;
+    false_range = f_range;
+  }
+  int_range_max true_range, false_range;
+};
+
+// Evaluate a binary logical expression by combining the true and
+// false ranges for each of the operands based on the result value in
+// the LHS.
+
+bool
+gori_compute::logical_combine (irange &r, enum tree_code code,
+			       const irange &lhs,
+			       const tf_range &op1, const tf_range &op2)
+{
+  if (op1.true_range.varying_p ()
+      && op1.false_range.varying_p ()
+      && op2.true_range.varying_p ()
+      && op2.false_range.varying_p ())
+    return false;
+
+  // This is not a simple fold of a logical expression, rather it
+  // determines ranges which flow through the logical expression.
+  //
+  // Assuming x_8 is an unsigned char, and relational statements:
+  //	      b_1 = x_8 < 20
+  //	      b_2 = x_8 > 5
+  // consider the logical expression and branch:
+  //          c_2 = b_1 && b_2
+  //          if (c_2)
+  //
+  // To determine the range of x_8 on either edge of the branch, one
+  // must first determine what the range of x_8 is when the boolean
+  // values of b_1 and b_2 are both true and false.
+  //    b_1 TRUE      x_8 = [0, 19]
+  //    b_1 FALSE     x_8 = [20, 255]
+  //    b_2 TRUE      x_8 = [6, 255]
+  //    b_2 FALSE     x_8 = [0,5].
+  //
+  // These ranges are then combined based on the expected outcome of
+  // the branch.  The range on the TRUE side of the branch must satisfy
+  //     b_1 == true && b_2 == true
+  //
+  // In terms of x_8, that means both x_8 == [0, 19] and x_8 = [6, 255]
+  // must be true.  The range of x_8 on the true side must be the
+  // intersection of both ranges since both must be true.  Thus the
+  // range of x_8 on the true side is [6, 19].
+  //
+  // To determine the ranges on the FALSE side, all 3 combinations of
+  // failing ranges must be considered, and combined as any of them
+  // can cause the false result.
+  //
+  // If the LHS can be TRUE or FALSE, then evaluate both a TRUE and
+  // FALSE results and combine them.  If we fell back to VARYING any
+  // range restrictions that have been discovered up to this point
+  // would be lost.
+  if (!range_is_either_true_or_false (lhs))
+    {
+      int_range_max r1;
+      if (logical_combine (r1, code, m_bool_zero, op1, op2)
+	  && logical_combine (r, code, m_bool_one, op1, op2))
+	{
+	  r.union_ (r1);
+	  return true;
+	}
+      return false;
+    }
+
+  switch (code)
+    {
+      //  A logical AND combines ranges from 2 boolean conditions.
+      //       c_2 = b_1 && b_2
+      case TRUTH_AND_EXPR:
+      case BIT_AND_EXPR:
+        if (!lhs.zero_p ())
+	  {
+	    // The TRUE side is the intersection of the 2 true ranges.
+	    r = op1.true_range;
+	    r.intersect (op2.true_range);
+	  }
+	else
+	  {
+	    // The FALSE side is the union of the other 3 cases.
+	    int_range_max ff (op1.false_range);
+	    ff.intersect (op2.false_range);
+	    int_range_max tf (op1.true_range);
+	    tf.intersect (op2.false_range);
+	    int_range_max ft (op1.false_range);
+	    ft.intersect (op2.true_range);
+	    r = ff;
+	    r.union_ (tf);
+	    r.union_ (ft);
+	  }
+        break;
+      //  A logical OR combines ranges from 2 boolean conditions.
+      // 	c_2 = b_1 || b_2
+      case TRUTH_OR_EXPR:
+      case BIT_IOR_EXPR:
+        if (lhs.zero_p ())
+	  {
+	    // An OR operation will only take the FALSE path if both
+	    // operands are false, so the resulting range is the
+	    // intersection of the 2 false ranges.
+	    r = op1.false_range;
+	    r.intersect (op2.false_range);
+	  }
+	else
+	  {
+	    // The TRUE side of an OR operation will be the union of
+	    // the other three combinations.
+	    int_range_max tt (op1.true_range);
+	    tt.intersect (op2.true_range);
+	    int_range_max tf (op1.true_range);
+	    tf.intersect (op2.false_range);
+	    int_range_max ft (op1.false_range);
+	    ft.intersect (op2.true_range);
+	    r = tt;
+	    r.union_ (tf);
+	    r.union_ (ft);
+	  }
+	break;
+      default:
+        gcc_unreachable ();
+    }
+
+  return true;
+}
+
+// Helper function for compute_logical_operands_in_chain that computes
+// the range of logical statements that can be computed without
+// chasing down operands.  These are things like [0 = x | y] where we
+// know neither operand can be non-zero, or [1 = x & y] where we know
+// neither operand can be zero.
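+//
+// For example, given b_1 = x_3 > 7, b_2 = x_3 < 2 and c_4 = b_1 | b_2,
+// a LHS of [0, 0] for c_4 means both b_1 and b_2 must be false, so
+// the range of NAME through the operand's chain only needs to be
+// computed once for the false result, and can be reused for the true
+// entry of the resulting tf_range.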
+
+bool
+gori_compute::optimize_logical_operands (tf_range &range,
+					 gimple *stmt,
+					 const irange &lhs,
+					 tree name,
+					 tree op)
+{
+  enum tree_code code = gimple_expr_code (stmt);
+
+  // Optimize [0 = x | y], since neither operand can ever be non-zero.
+  if ((code == BIT_IOR_EXPR || code == TRUTH_OR_EXPR) && lhs.zero_p ())
+    {
+      if (!compute_operand_range (range.false_range, SSA_NAME_DEF_STMT (op),
+				  m_bool_zero, name))
+	expr_range_in_bb (range.false_range, name, gimple_bb (stmt));
+      range.true_range = range.false_range;
+      return true;
+    }
+  // Optimize [1 = x & y], since neither operand can ever be zero.
+  if ((code == BIT_AND_EXPR || code == TRUTH_AND_EXPR) && lhs == m_bool_one)
+    {
+      if (!compute_operand_range (range.true_range, SSA_NAME_DEF_STMT (op),
+				  m_bool_one, name))
+	expr_range_in_bb (range.true_range, name, gimple_bb (stmt));
+      range.false_range = range.true_range;
+      return true;
+    }
+  return false;
+}
+
+// Given a logical STMT, calculate true and false ranges for each
+// potential path of NAME, assuming NAME came through the OP chain if
+// OP_IN_CHAIN is true.
+
+void
+gori_compute::compute_logical_operands_in_chain (tf_range &range,
+						 gimple *stmt,
+						 const irange &lhs,
+						 tree name,
+						 tree op, bool op_in_chain)
+{
+  if (!op_in_chain)
+    {
+      // If op is not in chain, use its known value.
+      expr_range_in_bb (range.true_range, name, gimple_bb (stmt));
+      range.false_range = range.true_range;
+      return;
+    }
+  if (optimize_logical_operands (range, stmt, lhs, name, op))
+    return;
+
+  // Calculate ranges for true and false on both sides, since the false
+  // path is not always a simple inversion of the true side.
+  if (!compute_operand_range (range.true_range, SSA_NAME_DEF_STMT (op),
+			      m_bool_one, name))
+    expr_range_in_bb (range.true_range, name, gimple_bb (stmt));
+  if (!compute_operand_range (range.false_range, SSA_NAME_DEF_STMT (op),
+			      m_bool_zero, name))
+    expr_range_in_bb (range.false_range, name, gimple_bb (stmt));
+}
+
+// Given a logical STMT, calculate true and false for each potential
+// path using NAME, and resolve the outcome based on the logical
+// operator.
+
+bool
+gori_compute::compute_logical_operands (irange &r, gimple *stmt,
+					const irange &lhs,
+					tree name)
+{
+  // Reaching this point means NAME is not in this stmt, but one of
+  // the names in it ought to be derived from it.
+  tree op1 = gimple_range_operand1 (stmt);
+  tree op2 = gimple_range_operand2 (stmt);
+  gcc_checking_assert (op1 != name && op2 != name);
+
+  bool op1_in_chain = (gimple_range_ssa_p (op1)
+		       && m_gori_map->in_chain_p (name, op1));
+  bool op2_in_chain = (gimple_range_ssa_p (op2)
+		       && m_gori_map->in_chain_p (name, op2));
+
+  // If neither operand is derived, then this stmt tells us nothing.
+  if (!op1_in_chain && !op2_in_chain)
+    return false;
+
+  tf_range op1_range, op2_range;
+  compute_logical_operands_in_chain (op1_range, stmt, lhs,
+				     name, op1, op1_in_chain);
+  compute_logical_operands_in_chain (op2_range, stmt, lhs,
+				     name, op2, op2_in_chain);
+  return logical_combine (r, gimple_expr_code (stmt), lhs,
+			  op1_range, op2_range);
+}
+
+// Calculate a range for NAME from the operand 1 position of STMT
+// assuming the result of the statement is LHS.  Return the range in
+// R, or false if no range could be calculated.
+
+bool
+gori_compute::compute_operand1_range (irange &r, gimple *stmt,
+				      const irange &lhs, tree name)
+{
+  int_range_max op1_range, op2_range;
+  tree op1 = gimple_range_operand1 (stmt);
+  tree op2 = gimple_range_operand2 (stmt);
+
+  expr_range_in_bb (op1_range, op1, gimple_bb (stmt));
+
+  // Now calculate the operand range and put the result in R.
+  if (op2)
+    {
+      expr_range_in_bb (op2_range, op2, gimple_bb (stmt));
+      if (!gimple_range_calc_op1 (r, stmt, lhs, op2_range))
+	return false;
+    }
+  else
+    {
+      // We pass op1_range to the unary operation.  Normally it's a
+      // hidden range_for_type parameter, but sometimes having the
+      // actual range can result in better information.
+      if (!gimple_range_calc_op1 (r, stmt, lhs, op1_range))
+	return false;
+    }
+
+  // Intersect the calculated result with the known result.
+  op1_range.intersect (r);
+
+  gimple *src_stmt = SSA_NAME_DEF_STMT (op1);
+  // If def stmt is outside of this BB, then name must be an import.
+  if (!src_stmt || (gimple_bb (src_stmt) != gimple_bb (stmt)))
+    {
+      // If this isn't the right import statement, then abort calculation.
+      if (!src_stmt || gimple_get_lhs (src_stmt) != name)
+        return false;
+      return compute_name_range_op (r, src_stmt, op1_range, name);
+    }
+  // Then feed this range back as the LHS of the defining statement.
+  return compute_operand_range (r, src_stmt, op1_range, name);
+}
+
+
+// Calculate a range for NAME from the operand 2 position of STMT
+// assuming the result of the statement is LHS.  Return the range in
+// R, or false if no range could be calculated.
+
+bool
+gori_compute::compute_operand2_range (irange &r, gimple *stmt,
+				      const irange &lhs, tree name)
+{
+  int_range_max op1_range, op2_range;
+  tree op1 = gimple_range_operand1 (stmt);
+  tree op2 = gimple_range_operand2 (stmt);
+
+  expr_range_in_bb (op1_range, op1, gimple_bb (stmt));
+  expr_range_in_bb (op2_range, op2, gimple_bb (stmt));
+
+  // Intersect with range for op2 based on lhs and op1.
+  if (gimple_range_calc_op2 (r, stmt, lhs, op1_range))
+    op2_range.intersect (r);
+
+  gimple *src_stmt = SSA_NAME_DEF_STMT (op2);
+  // If def stmt is outside of this BB, then name must be an import.
+  if (!src_stmt || (gimple_bb (src_stmt) != gimple_bb (stmt)))
+    {
+      // If this isn't the right src statement, then abort calculation.
+      if (!src_stmt || gimple_get_lhs (src_stmt) != name)
+        return false;
+      return compute_name_range_op (r, src_stmt, op2_range, name);
+    }
+  // Then feed this range back as the LHS of the defining statement.
+  return compute_operand_range (r, src_stmt, op2_range, name);
+}
+
+// Calculate a range for NAME from both operand positions of STMT
+// assuming the result of the statement is LHS.  Return the range in
+// R, or false if no range could be calculated.
+
+bool
+gori_compute::compute_operand1_and_operand2_range
+					(irange &r,
+					 gimple *stmt,
+					 const irange &lhs,
+					 tree name)
+{
+  int_range_max op_range;
+
+  // Calculate as good a range as possible for op2.  Since op1 ==
+  // op2, this will have already included whatever the actual range
+  // of name is.
+  if (!compute_operand2_range (op_range, stmt, lhs, name))
+    return false;
+
+  // Now get the range thru op1.
+  if (!compute_operand1_range (r, stmt, lhs, name))
+    return false;
+
+  // Whichever range is the most permissive is the one we need to
+  // use.  (TODO: determine whether an intersection would be correct
+  // here instead.)
+  r.union_ (op_range);
+  return true;
+}
+
+// Return TRUE if a range can be calculated for NAME on edge E.
+
+bool
+gori_compute::has_edge_range_p (edge e, tree name)
+{
+  return (m_gori_map->is_export_p (name, e->src)
+	  || m_gori_map->def_chain_in_export_p (name, e->src));
+}
+
+// Dump what is known to GORI computes to listing file F.
+
+void
+gori_compute::dump (FILE *f)
+{
+  m_gori_map->dump (f);
+}
+
+// Calculate a range on edge E and return it in R.  Try to evaluate a
+// range for NAME on this edge.  Return FALSE if this is either not a
+// control edge or NAME is not defined by this edge.
+
+bool
+gori_compute::outgoing_edge_range_p (irange &r, edge e, tree name)
+{
+  int_range_max lhs;
+
+  gcc_checking_assert (gimple_range_ssa_p (name));
+  // Determine if this block's last statement generates outgoing ranges.
+  gimple *stmt = outgoing.edge_range_p (lhs, e);
+  if (!stmt)
+    return false;
+
+  // If NAME can be calculated on the edge, use that.
+  if (m_gori_map->is_export_p (name, e->src))
+    return compute_operand_range (r, stmt, lhs, name);
+
+  // Otherwise, NAME is not derived from anything that can be
+  // calculated on this edge.  Reaching this conclusion performs no
+  // dynamic lookups whatsoever, so it is low cost.
+  return false;
+}
+
+// --------------------------------------------------------------------------
+
+// Cache for SSAs that appear on the RHS of a boolean assignment.
+//
+// Boolean assignments of logical expressions (i.e. LHS = j_5 > 999)
+// have SSA operands whose ranges depend on the LHS of the assignment.
+// That is, the range of j_5 when LHS is true is different than when
+// LHS is false.
+//
+// This class caches the TRUE/FALSE ranges of such SSAs to avoid
+// recomputing.
+
+class logical_stmt_cache
+{
+public:
+  logical_stmt_cache ();
+  ~logical_stmt_cache ();
+  void set_range (tree lhs, tree name, const tf_range &);
+  bool get_range (tf_range &r, tree lhs, tree name) const;
+  bool cacheable_p (gimple *, const irange *lhs_range = NULL) const;
+  void dump (FILE *, gimple *stmt) const;
+  tree same_cached_name (tree lhs1, tree lhs2) const;
+private:
+  tree cached_name (tree lhs) const;
+  void slot_diagnostics (tree lhs, const tf_range &range) const;
+  struct cache_entry
+  {
+    cache_entry (tree name, const irange &t_range, const irange &f_range);
+    void dump (FILE *out) const;
+    tree name;
+    tf_range range;
+  };
+  vec<cache_entry *> m_ssa_cache;
+};
+
+logical_stmt_cache::cache_entry::cache_entry (tree name,
+					      const irange &t_range,
+					      const irange &f_range)
+  : name (name), range (t_range, f_range)
+{
+}
+
+logical_stmt_cache::logical_stmt_cache ()
+{
+  m_ssa_cache.create (num_ssa_names + num_ssa_names / 10);
+  m_ssa_cache.safe_grow_cleared (num_ssa_names);
+}
+
+logical_stmt_cache::~logical_stmt_cache ()
+{
+  for (unsigned i = 0; i < m_ssa_cache.length (); ++i)
+    if (m_ssa_cache[i])
+      delete m_ssa_cache[i];
+  m_ssa_cache.release ();
+}
+
+// Dump cache_entry to OUT.
+
+void
+logical_stmt_cache::cache_entry::dump (FILE *out) const
+{
+  fprintf (out, "name=");
+  print_generic_expr (out, name, TDF_SLIM);
+  fprintf (out, " ");
+  range.true_range.dump (out);
+  fprintf (out, ", ");
+  range.false_range.dump (out);
+  fprintf (out, "\n");
+}
+
+// Update range for cache entry of NAME as it appears in the defining
+// statement of LHS.
+
+void
+logical_stmt_cache::set_range (tree lhs, tree name, const tf_range &range)
+{
+  unsigned version = SSA_NAME_VERSION (lhs);
+  if (version >= m_ssa_cache.length ())
+    m_ssa_cache.safe_grow_cleared (num_ssa_names + num_ssa_names / 10);
+
+  cache_entry *slot = m_ssa_cache[version];
+  slot_diagnostics (lhs, range);
+  if (slot)
+    {
+      // The IL must have changed.  Update the carried SSA name for
+      // consistency.  Testcase is libgomp.fortran/doacross1.f90.
+      if (slot->name != name)
+	slot->name = name;
+      return;
+    }
+  m_ssa_cache[version]
+    = new cache_entry (name, range.true_range, range.false_range);
+}
+
+// If there is a cached entry of NAME, set it in R and return TRUE,
+// otherwise return FALSE.  LHS is the result of the defining
+// statement in which NAME appeared.
+
+bool
+logical_stmt_cache::get_range (tf_range &r, tree lhs, tree name) const
+{
+  gcc_checking_assert (cacheable_p (SSA_NAME_DEF_STMT (lhs)));
+  if (cached_name (lhs) == name)
+    {
+      unsigned version = SSA_NAME_VERSION (lhs);
+      if (m_ssa_cache[version])
+	{
+	  r = m_ssa_cache[version]->range;
+	  return true;
+	}
+    }
+  return false;
+}
+
+// If the defining statement of LHS is in the cache, return the SSA
+// operand being cached.  That is, return SSA for LHS = SSA .RELOP. OP2.
+
+tree
+logical_stmt_cache::cached_name (tree lhs) const
+{
+  unsigned version = SSA_NAME_VERSION (lhs);
+
+  if (version >= m_ssa_cache.length ())
+    return NULL;
+
+  if (m_ssa_cache[version])
+    return m_ssa_cache[version]->name;
+  return NULL;
+}
+
+// If the cached name for LHS1 is the same as the cached name for
+// LHS2, return it, otherwise return NULL.
+
+tree
+logical_stmt_cache::same_cached_name (tree lhs1, tree lhs2) const
+{
+  tree name = cached_name (lhs1);
+  if (name && name == cached_name (lhs2))
+    return name;
+  return NULL;
+}
+
+// Return TRUE if STMT is a statement we are interested in caching.
+// LHS_RANGE is any known range for the LHS of STMT.
+
+bool
+logical_stmt_cache::cacheable_p (gimple *stmt, const irange *lhs_range) const
+{
+  if (gimple_code (stmt) == GIMPLE_ASSIGN
+      && types_compatible_p (TREE_TYPE (gimple_assign_lhs (stmt)),
+			     boolean_type_node)
+      && TREE_CODE (gimple_assign_rhs1 (stmt)) == SSA_NAME)
+    {
+      switch (gimple_expr_code (stmt))
+	{
+	case LT_EXPR:
+	case LE_EXPR:
+	case GT_EXPR:
+	case GE_EXPR:
+	case EQ_EXPR:
+	case NE_EXPR:
+	case TRUTH_AND_EXPR:
+	case BIT_AND_EXPR:
+	case TRUTH_OR_EXPR:
+	case BIT_IOR_EXPR:
+	  return !lhs_range || range_is_either_true_or_false (*lhs_range);
+	default:
+	  return false;
+	}
+    }
+  return false;
+}
+
+// Output debugging diagnostics for the cache entry for LHS.  RANGE is
+// the new range that is being cached.
+
+void
+logical_stmt_cache::slot_diagnostics (tree lhs, const tf_range &range) const
+{
+  gimple *stmt = SSA_NAME_DEF_STMT (lhs);
+  unsigned version = SSA_NAME_VERSION (lhs);
+  cache_entry *slot = m_ssa_cache[version];
+
+  if (!slot)
+    {
+      if (DEBUG_RANGE_CACHE)
+	{
+	  fprintf (dump_file ? dump_file : stderr, "registering range for: ");
+	  dump (dump_file ? dump_file : stderr, stmt);
+	}
+      return;
+    }
+  if (DEBUG_RANGE_CACHE)
+    fprintf (dump_file ? dump_file : stderr,
+	     "reusing range for SSA #%d\n", version);
+  if (CHECKING_P && (slot->range.true_range != range.true_range
+		     || slot->range.false_range != range.false_range))
+    {
+      fprintf (stderr, "FATAL: range altered for cached: ");
+      dump (stderr, stmt);
+      fprintf (stderr, "Attempt to change to:\n");
+      fprintf (stderr, "TRUE=");
+      range.true_range.dump (stderr);
+      fprintf (stderr, ", FALSE=");
+      range.false_range.dump (stderr);
+      fprintf (stderr, "\n");
+      gcc_unreachable ();
+    }
+}
+
+// Dump the cache information for STMT.
+
+void
+logical_stmt_cache::dump (FILE *out, gimple *stmt) const
+{
+  tree lhs = gimple_assign_lhs (stmt);
+  cache_entry *entry = m_ssa_cache[SSA_NAME_VERSION (lhs)];
+
+  print_gimple_stmt (out, stmt, 0, TDF_SLIM);
+  if (entry)
+    {
+      fprintf (out, "\tname = ");
+      print_generic_expr (out, entry->name);
+      fprintf (out, " lhs(%d)= ", SSA_NAME_VERSION (lhs));
+      print_generic_expr (out, lhs);
+      fprintf (out, "\n\tTRUE=");
+      entry->range.true_range.dump (out);
+      fprintf (out, ", FALSE=");
+      entry->range.false_range.dump (out);
+      fprintf (out, "\n");
+    }
+  else
+    fprintf (out, "[EMPTY]\n");
+}
+
+gori_compute_cache::gori_compute_cache ()
+{
+  m_cache = new logical_stmt_cache;
+}
+
+gori_compute_cache::~gori_compute_cache ()
+{
+  delete m_cache;
+}
+
+// Caching version of compute_operand_range.  If NAME, as it appears
+// in STMT, has already been cached return it from the cache,
+// otherwise compute the operand range as normal and cache it.
+
+bool
+gori_compute_cache::compute_operand_range (irange &r, gimple *stmt,
+					   const irange &lhs_range, tree name)
+{
+  bool cacheable = m_cache->cacheable_p (stmt, &lhs_range);
+  if (cacheable)
+    {
+      tree lhs = gimple_assign_lhs (stmt);
+      tf_range range;
+      if (m_cache->get_range (range, lhs, name))
+	{
+	  if (lhs_range.zero_p ())
+	    r = range.false_range;
+	  else
+	    r = range.true_range;
+	  return true;
+	}
+    }
+  if (super::compute_operand_range (r, stmt, lhs_range, name))
+    {
+      if (cacheable)
+	cache_stmt (stmt);
+      return true;
+    }
+  return false;
+}
+
+// Cache STMT if possible.
+
+void
+gori_compute_cache::cache_stmt (gimple *stmt)
+{
+  gcc_checking_assert (m_cache->cacheable_p (stmt));
+  enum tree_code code = gimple_expr_code (stmt);
+  tree lhs = gimple_assign_lhs (stmt);
+  tree op1 = gimple_range_operand1 (stmt);
+  tree op2 = gimple_range_operand2 (stmt);
+  int_range_max r_true_side, r_false_side;
+
+  // LHS = s_5 > 999.
+  if (TREE_CODE (op2) == INTEGER_CST)
+    {
+      range_operator *handler = range_op_handler (code, TREE_TYPE (lhs));
+      int_range_max op2_range;
+      expr_range_in_bb (op2_range, op2, gimple_bb (stmt));
+      tree type = TREE_TYPE (op1);
+      handler->op1_range (r_true_side, type, m_bool_one, op2_range);
+      handler->op1_range (r_false_side, type, m_bool_zero, op2_range);
+      m_cache->set_range (lhs, op1, tf_range (r_true_side, r_false_side));
+    }
+  // LHS = s_5 > b_8.
+  else if (tree cached_name = m_cache->same_cached_name (op1, op2))
+    {
+      tf_range op1_range, op2_range;
+      gcc_assert (m_cache->get_range (op1_range, op1, cached_name));
+      gcc_assert (m_cache->get_range (op2_range, op2, cached_name));
+      gcc_assert (logical_combine (r_true_side, code, m_bool_one,
+				   op1_range, op2_range));
+      gcc_assert (logical_combine (r_false_side, code, m_bool_zero,
+				   op1_range, op2_range));
+      m_cache->set_range (lhs, cached_name,
+			  tf_range (r_true_side, r_false_side));
+    }
+}


* [PATCH 4/6] gimple-range-cache
  2020-10-02 16:56 Project Ranger Submission / On Demand VRP Andrew MacLeod
                   ` (2 preceding siblings ...)
  2020-10-02 17:01 ` [PATCH 3/6] gimple-range-gori Andrew MacLeod
@ 2020-10-02 17:02 ` Andrew MacLeod
  2020-10-02 17:04 ` [PATCH 5/6] gimple-range Andrew MacLeod
  2020-10-02 17:06 ` [PATCH 6/6] Hybrid EVRP Andrew MacLeod
  5 siblings, 0 replies; 9+ messages in thread
From: Andrew MacLeod @ 2020-10-02 17:02 UTC (permalink / raw)
  To: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 2397 bytes --]

These are the various caches used by the ranger.

- non-null-ref :  Tracks non-null pointer dereferences in blocks so we 
can properly calculate non-null pointer ranges.
- on-entry cache :  Tracks the calculated values of ssa-names on entry 
to each basic block.
- global cache :  Stores what is globally known.  Also used to determine 
if a stmt has been evaluated already.

And finally, the "Ranger Cache" which combines all those caches together.

It does far more than that, though.

It links them with a gori-computes component so that when it is 
calculating an outgoing range, it can "seed" the value with whatever has 
already been computed.  I.e., if a_4 is used within a basic block, its 
value can be seeded with whatever the range-on-entry cache says a_4 is 
for that block.

Furthermore, the ranger cache is also responsible for populating the 
on-entry cache.  When the ranger asks the cache for the range of a_4 on 
entry to BB9, the ranger cache is responsible for either retrieving the 
stored value, or computing it the first time.

The first-time computation simply looks back to the definition (or 
previously populated entries) and applies any intervening GORI edge 
calculations between the definition and this block, then propagates 
those values along the edges until the cache is filled.  Note that no 
new range calculations are requested; it is all based on what is 
already known between all the caches and what GORI can do with those 
values.

It is the ranger's responsibility to ensure, before asking for the 
cache to be filled, that any global ranges have been calculated.

This component is also where the iterative update engine is located.  
It is purely based on injecting a new range into the on-entry cache, 
checking whether propagating that range to successor blocks changes 
anything, and if so propagating those changes.  The update engine is 
currently only used within the cache fill for handling back edges and 
certain "poor" values that were used.

The API for this component is again pretty simple.  It exposes the 
caches it maintains, as well as two entry points:

  ssa_range_in_bb :  a virtual function used by the gori-computes 
component to access the on-entry or global cache for an ssa-name.
  block_range :  the entry point the ranger uses to ask for the 
on-entry range of a name, which may trigger a cache fill if needed.
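To make the lookup order concrete, here is a toy model of the 
ssa_range_in_bb behaviour: in the definition block the global cache 
answers; elsewhere the on-entry cache answers, falling back to the 
global value or the full type range.  Maps and plain ints stand in for 
the caches and ranges; these are illustrative names, not the GCC API:

```cpp
#include <map>
#include <utility>

// Lookup order model: (version, bb) keys for the on-entry cache,
// version keys for the global cache, ranges collapsed to one int.
static int
ssa_range_in_bb_model (int version, int bb, int def_bb,
                       const std::map<int, int> &global_cache,
                       const std::map<std::pair<int, int>, int> &on_entry,
                       int type_range)
{
  auto g = global_cache.find (version);
  // In the definition block, the current global value answers.
  if (bb == def_bb)
    return g != global_cache.end () ? g->second : type_range;
  // Otherwise prefer the cached range on entry to BB.
  auto e = on_entry.find ({ version, bb });
  if (e != on_entry.end ())
    return e->second;
  // No entry: best guess is the global value, else the type range.
  return g != global_cache.end () ? g->second : type_range;
}
```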

[-- Attachment #2: ranger4.diff --]
[-- Type: text/x-patch, Size: 31439 bytes --]


	* gimple-range-cache.h: New File.
	(class non_null_ref): New.  Non-null reference tracker.
	(class block_range_cache): New.  Block range on-entry cache.
	(class ssa_global_cache): New.  Global range cache.
	(class ranger_cache): New.  GORI computes plus other caches combined.

	* gimple-range-cache.cc: New File.
	(non_null_ref::non_null_ref): New.
	(non_null_ref::~non_null_ref): New.
	(non_null_ref::non_null_deref_p): New.  Non-null in block query.
	(non_null_ref::process_name): New.  Find non-null block references.
	(class ssa_block_ranges): New.  Track ranges for a single SSA_NAME.
	(ssa_block_ranges::ssa_block_ranges): New.
	(ssa_block_ranges::~ssa_block_ranges): New.
	(ssa_block_ranges::set_bb_range): New.
	(ssa_block_ranges::set_bb_varying): New.
	(ssa_block_ranges::get_bb_range): New.
	(ssa_block_ranges::bb_range_p): New.
	(ssa_block_ranges::dump): New.
	(block_range_cache::block_range_cache): New.  Manage ssa_block_ranges.
	(block_range_cache::~block_range_cache): New.
	(block_range_cache::get_block_ranges): New.  Get an ssa_block_range.
	(block_range_cache::set_bb_range): New.
	(block_range_cache::set_bb_varying): New.
	(block_range_cache::get_bb_range): New.
	(block_range_cache::bb_range_p): New.
	(block_range_cache::dump): New.
	(ssa_global_cache::ssa_global_cache): New.
	(ssa_global_cache::~ssa_global_cache): New.
	(ssa_global_cache::get_global_range): New.
	(ssa_global_cache::set_global_range): New.
	(ssa_global_cache::clear_global_range): New.
	(ssa_global_cache::clear): New. Clear everything.
	(ssa_global_cache::dump): New.
	(ranger_cache::ranger_cache): New.
	(ranger_cache::~ranger_cache): New.
	(ranger_cache::push_poor_value): New.  Add value to recalculation queue.
	(ranger_cache::ssa_range_in_bb): New.  Provide best available range
	without any calculations for an ssa_name.
	(ranger_cache::block_range): New.  Fill block cache for ssa_name from
	definition block to this block, and return range.
	(ranger_cache::add_to_update): New.  Add to list of blocks to update.
	(ranger_cache::iterative_cache_update): New.  Empty update list by 
	updating block ranges until done.
	(ranger_cache::fill_block_cache): New.  Create list of blocks to
	update.

diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
new file mode 100644
index 00000000000..29ab01e2a98
--- /dev/null
+++ b/gcc/gimple-range-cache.h
@@ -0,0 +1,120 @@
+/* Header file for gimple ranger SSA cache.
+   Copyright (C) 2017-2020 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod <amacleod@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_SSA_RANGE_CACHE_H
+#define GCC_SSA_RANGE_CACHE_H
+
+#include "gimple-range-gori.h" 
+
+// Class used to track non-null references of an SSA name.  A vector
+// of bitmaps indexed by SSA name is maintained.  When indexed by
+// basic block, an on-bit indicates there is a non-null dereference
+// for that SSA in that block.
+
+class non_null_ref
+{
+public:
+  non_null_ref ();
+  ~non_null_ref ();
+  bool non_null_deref_p (tree name, basic_block bb);
+private:
+  vec <bitmap> m_nn;
+  void process_name (tree name);
+  bitmap_obstack m_bitmaps;
+};
+
+// This class manages a vector of pointers to ssa_block ranges.  It
+// provides the basis for the "range on entry" cache for all
+// SSA names.
+
+class block_range_cache
+{
+public:
+  block_range_cache ();
+  ~block_range_cache ();
+
+  void set_bb_range (tree name, const basic_block bb, const irange &r);
+  void set_bb_varying (tree name, const basic_block bb);
+  bool get_bb_range (irange &r, tree name, const basic_block bb);
+  bool bb_range_p (tree name, const basic_block bb);
+
+  void dump (FILE *f);
+  void dump (FILE *f, basic_block bb, bool print_varying = true);
+private:
+  vec<class ssa_block_ranges *> m_ssa_ranges;
+  ssa_block_ranges &get_block_ranges (tree name);
+  irange_allocator *m_irange_allocator;
+};
+
+// This global cache is used with the range engine as markers for what
+// has been visited during this incarnation.  Once the ranger evaluates
+// a name, it is typically not re-evaluated again.
+
+class ssa_global_cache
+{
+public:
+  ssa_global_cache ();
+  ~ssa_global_cache ();
+  bool get_global_range (irange &r, tree name) const;
+  void set_global_range (tree name, const irange &r);
+  void clear_global_range (tree name);
+  void clear ();
+  void dump (FILE *f = stderr);
+private:
+  vec<irange *> m_tab;
+  class irange_allocator *m_irange_allocator;
+};
+
+// This class provides all the caches a global ranger may need, and makes 
+// them available for gori-computes to query so outgoing edges can be
+// properly calculated.
+
+class ranger_cache : public gori_compute_cache
+{
+public:
+  ranger_cache (class range_query &q);
+  ~ranger_cache ();
+
+  virtual void ssa_range_in_bb (irange &r, tree name, basic_block bb);
+  bool block_range (irange &r, basic_block bb, tree name, bool calc = true);
+
+  ssa_global_cache m_globals;
+  block_range_cache m_on_entry;
+  non_null_ref m_non_null;
+private:
+  void add_to_update (basic_block bb);
+  void fill_block_cache (tree name, basic_block bb, basic_block def_bb);
+  void iterative_cache_update (tree name);
+
+  vec<basic_block> m_workback;
+  vec<basic_block> m_update_list;
+
+  // Iterative "poor value" calculations.
+  struct update_record
+  {
+    basic_block bb;	// Block which value needs to be calculated in.
+    tree calc;		// SSA_NAME which needs its value calculated.
+  };
+  bool push_poor_value (basic_block bb, tree name);
+  vec<update_record> m_poor_value_list;
+  class range_query &query;
+};
+
+#endif // GCC_SSA_RANGE_CACHE_H
diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
new file mode 100644
index 00000000000..13b9933cc01
--- /dev/null
+++ b/gcc/gimple-range-cache.cc
@@ -0,0 +1,877 @@
+/* Gimple ranger SSA cache implementation.
+   Copyright (C) 2017-2020 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod <amacleod@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "insn-codes.h"
+#include "tree.h"
+#include "gimple.h"
+#include "ssa.h"
+#include "gimple-pretty-print.h"
+#include "gimple-range.h"
+
+// In the constructor, allocate the vector of ssa_names.
+
+non_null_ref::non_null_ref ()
+{
+  m_nn.create (0);
+  m_nn.safe_grow_cleared (num_ssa_names);
+  bitmap_obstack_initialize (&m_bitmaps);
+}
+
+// Free any bitmaps which were allocated, as well as the vector itself.
+
+non_null_ref::~non_null_ref ()
+{
+  bitmap_obstack_release (&m_bitmaps);
+  m_nn.release ();
+}
+
+// Return true if NAME has a non-null dereference in block BB.  If this is the
+// first query for NAME, calculate the summary first.
+
+bool
+non_null_ref::non_null_deref_p (tree name, basic_block bb)
+{
+  if (!POINTER_TYPE_P (TREE_TYPE (name)))
+    return false;
+
+  unsigned v = SSA_NAME_VERSION (name);
+  if (!m_nn[v])
+    process_name (name);
+
+  return bitmap_bit_p (m_nn[v], bb->index);
+}
+
+// Allocate and populate the bitmap for NAME.  An ON bit for a block
+// index indicates there is a non-null reference in that block.  To
+// populate the bitmap, a quick pass is made over all the immediate
+// uses, checking each use statement to see if it makes a non-null
+// dereference.
+
+void
+non_null_ref::process_name (tree name)
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  use_operand_p use_p;
+  imm_use_iterator iter;
+  bitmap b;
+
+  // Only tracked for pointers.
+  if (!POINTER_TYPE_P (TREE_TYPE (name)))
+    return;
+
+  // Already processed if a bitmap has been allocated.
+  if (m_nn[v])
+    return;
+
+  b = BITMAP_ALLOC (&m_bitmaps);
+
+  // Loop over each immediate use and see if it implies a non-null value.
+  FOR_EACH_IMM_USE_FAST (use_p, iter, name)
+    {
+      gimple *s = USE_STMT (use_p);
+      unsigned index = gimple_bb (s)->index;
+      tree value;
+      enum tree_code comp_code;
+
+      // If bit is already set for this block, don't bother looking again.
+      if (bitmap_bit_p (b, index))
+	continue;
+
+      // If we can infer a != 0 range, then set the bit for this BB.
+      if (infer_value_range (s, name, &comp_code, &value))
+	{
+	  if (comp_code == NE_EXPR && integer_zerop (value))
+	    bitmap_set_bit (b, index);
+	}
+    }
+
+  m_nn[v] = b;
+}
+
+// -------------------------------------------------------------------------
+
+// This class implements a cache of ranges indexed by basic block.  It
+// represents all that is known about an SSA_NAME on entry to each
+// block.  It caches a range-for-type varying range so it doesn't need
+// to be reformed all the time.  If a range is ever always associated
+// with a type, we can use that instead.  Whenever varying is being
+// set for a block, the cache simply points to this cached one rather
+// than create a new one each time.
+
+class ssa_block_ranges
+{
+public:
+  ssa_block_ranges (tree t, irange_allocator *allocator);
+  ~ssa_block_ranges ();
+
+  void set_bb_range (const basic_block bb, const irange &r);
+  void set_bb_varying (const basic_block bb);
+  bool get_bb_range (irange &r, const basic_block bb);
+  bool bb_range_p (const basic_block bb);
+
+  void dump(FILE *f);
+private:
+  vec<irange *> m_tab;
+  irange *m_type_range;
+  tree m_type;
+  irange_allocator *m_irange_allocator;
+};
+
+
+// Initialize a block cache for an ssa_name of type T.
+
+ssa_block_ranges::ssa_block_ranges (tree t, irange_allocator *allocator)
+{
+  gcc_checking_assert (TYPE_P (t));
+  m_type = t;
+  m_irange_allocator = allocator;
+
+  m_tab.create (0);
+  m_tab.safe_grow_cleared (last_basic_block_for_fn (cfun));
+
+  // Create the cached type range.
+  m_type_range = m_irange_allocator->allocate (2);
+  m_type_range->set_varying (t);
+
+  m_tab[ENTRY_BLOCK_PTR_FOR_FN (cfun)->index] = m_type_range;
+}
+
+// Destruct block range.
+
+ssa_block_ranges::~ssa_block_ranges ()
+{
+  m_tab.release ();
+}
+
+// Set the range for block BB to be R.
+
+void
+ssa_block_ranges::set_bb_range (const basic_block bb, const irange &r)
+{
+  irange *m = m_irange_allocator->allocate (r);
+  m_tab[bb->index] = m;
+}
+
+// Set the range for block BB to the range for the type.
+
+void
+ssa_block_ranges::set_bb_varying (const basic_block bb)
+{
+  m_tab[bb->index] = m_type_range;
+}
+
+// Return the range associated with block BB in R.  Return false if
+// there is no range.
+
+bool
+ssa_block_ranges::get_bb_range (irange &r, const basic_block bb)
+{
+  irange *m = m_tab[bb->index];
+  if (m)
+    {
+      r = *m;
+      return true;
+    }
+  return false;
+}
+
+// Return true if a range is present.
+
+bool
+ssa_block_ranges::bb_range_p (const basic_block bb)
+{
+  return m_tab[bb->index] != NULL;
+}
+
+
+// Print the list of known ranges for file F in a nice format.
+
+void
+ssa_block_ranges::dump (FILE *f)
+{
+  basic_block bb;
+  int_range_max r;
+
+  FOR_EACH_BB_FN (bb, cfun)
+    if (get_bb_range (r, bb))
+      {
+	fprintf (f, "BB%d  -> ", bb->index);
+	r.dump (f);
+	fprintf (f, "\n");
+      }
+}
+
+// -------------------------------------------------------------------------
+
+// Initialize the block cache.
+
+block_range_cache::block_range_cache ()
+{
+  m_ssa_ranges.create (0);
+  m_ssa_ranges.safe_grow_cleared (num_ssa_names);
+  m_irange_allocator = new irange_allocator;
+}
+
+// Remove any m_block_caches which have been created.
+
+block_range_cache::~block_range_cache ()
+{
+  unsigned x;
+  for (x = 0; x < m_ssa_ranges.length (); ++x)
+    {
+      if (m_ssa_ranges[x])
+	delete m_ssa_ranges[x];
+    }
+  delete m_irange_allocator;
+  // Release the vector itself.
+  m_ssa_ranges.release ();
+}
+
+// Return a reference to the m_block_cache for NAME.  If it has not been
+// accessed yet, allocate it.
+
+ssa_block_ranges &
+block_range_cache::get_block_ranges (tree name)
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (v >= m_ssa_ranges.length ())
+    m_ssa_ranges.safe_grow_cleared (num_ssa_names + 1);
+
+  if (!m_ssa_ranges[v])
+    m_ssa_ranges[v] = new ssa_block_ranges (TREE_TYPE (name), m_irange_allocator);
+
+  return *(m_ssa_ranges[v]);
+}
+
+// Set the range for NAME on entry to block BB to R.
+
+void
+block_range_cache::set_bb_range (tree name, const basic_block bb,
+				 const irange &r)
+{
+  return get_block_ranges (name).set_bb_range (bb, r);
+}
+
+// Set the range for NAME on entry to block BB to varying.
+
+void
+block_range_cache::set_bb_varying (tree name, const basic_block bb)
+{
+  return get_block_ranges (name).set_bb_varying (bb);
+}
+
+// Return the range for NAME on entry to BB in R.  Return true if there
+// is one.
+
+bool
+block_range_cache::get_bb_range (irange &r, tree name, const basic_block bb)
+{
+  return get_block_ranges (name).get_bb_range (r, bb);
+}
+
+// Return true if NAME has a range set in block BB.
+
+bool
+block_range_cache::bb_range_p (tree name, const basic_block bb)
+{
+  return get_block_ranges (name).bb_range_p (bb);
+}
+
+// Print all known block caches to file F.
+
+void
+block_range_cache::dump (FILE *f)
+{
+  unsigned x;
+  for (x = 0; x < m_ssa_ranges.length (); ++x)
+    {
+      if (m_ssa_ranges[x])
+	{
+	  fprintf (f, " Ranges for ");
+	  print_generic_expr (f, ssa_name (x), TDF_NONE);
+	  fprintf (f, ":\n");
+	  m_ssa_ranges[x]->dump (f);
+	  fprintf (f, "\n");
+	}
+    }
+}
+
+// Print all known ranges on entry to block BB to file F.
+
+void
+block_range_cache::dump (FILE *f, basic_block bb, bool print_varying)
+{
+  unsigned x;
+  int_range_max r;
+  bool summarize_varying = false;
+  for (x = 1; x < m_ssa_ranges.length (); ++x)
+    {
+      if (!gimple_range_ssa_p (ssa_name (x)))
+	continue;
+      if (m_ssa_ranges[x] && m_ssa_ranges[x]->get_bb_range (r, bb))
+	{
+	  if (!print_varying && r.varying_p ())
+	    {
+	      summarize_varying = true;
+	      continue;
+	    }
+	  print_generic_expr (f, ssa_name (x), TDF_NONE);
+	  fprintf (f, "\t");
+	  r.dump(f);
+	  fprintf (f, "\n");
+	}
+    }
+  // If there were any varying entries, lump them all together.
+  if (summarize_varying)
+    {
+      fprintf (f, "VARYING_P on entry : ");
+      for (x = 1; x < num_ssa_names; ++x)
+	{
+	  if (!gimple_range_ssa_p (ssa_name (x)))
+	    continue;
+	  if (m_ssa_ranges[x] && m_ssa_ranges[x]->get_bb_range (r, bb))
+	    {
+	      if (r.varying_p ())
+		{
+		  print_generic_expr (f, ssa_name (x), TDF_NONE);
+		  fprintf (f, "  ");
+		}
+	    }
+	}
+      fprintf (f, "\n");
+    }
+}
+
+// -------------------------------------------------------------------------
+
+// Initialize a global cache.
+
+ssa_global_cache::ssa_global_cache ()
+{
+  m_tab.create (0);
+  m_tab.safe_grow_cleared (num_ssa_names);
+  m_irange_allocator = new irange_allocator;
+}
+
+// Deconstruct a global cache.
+
+ssa_global_cache::~ssa_global_cache ()
+{
+  m_tab.release ();
+  delete m_irange_allocator;
+}
+
+// Retrieve the global range of NAME from cache memory if it exists. 
+// Return the value in R.
+
+bool
+ssa_global_cache::get_global_range (irange &r, tree name) const
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (v >= m_tab.length ())
+    return false;
+
+  irange *stow = m_tab[v];
+  if (!stow)
+    return false;
+  r = *stow;
+  return true;
+}
+
+// Set the range for NAME to R in the global cache.
+
+void
+ssa_global_cache::set_global_range (tree name, const irange &r)
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (v >= m_tab.length ())
+    m_tab.safe_grow_cleared (num_ssa_names + 1);
+
+  irange *m = m_tab[v];
+  if (m && m->fits_p (r))
+    *m = r;
+  else
+    m_tab[v] = m_irange_allocator->allocate (r);
+}
+
+// Clear the range for NAME in the global cache.
+
+void
+ssa_global_cache::clear_global_range (tree name)
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (v >= m_tab.length ())
+    m_tab.safe_grow_cleared (num_ssa_names + 1);
+  m_tab[v] = NULL;
+}
+
+// Clear the global cache.
+
+void
+ssa_global_cache::clear ()
+{
+  memset (m_tab.address(), 0, m_tab.length () * sizeof (irange *));
+}
+
+// Dump the contents of the global cache to F.
+
+void
+ssa_global_cache::dump (FILE *f)
+{
+  unsigned x;
+  int_range_max r;
+  fprintf (f, "Non-varying global ranges:\n");
+  fprintf (f, "=========================:\n");
+  for ( x = 1; x < num_ssa_names; x++)
+    if (gimple_range_ssa_p (ssa_name (x)) &&
+	get_global_range (r, ssa_name (x))  && !r.varying_p ())
+      {
+	print_generic_expr (f, ssa_name (x), TDF_NONE);
+	fprintf (f, "  : ");
+	r.dump (f);
+	fprintf (f, "\n");
+      }
+  fputc ('\n', f);
+}
+
+// --------------------------------------------------------------------------
+
+ranger_cache::ranger_cache (range_query &q) : query (q)
+{
+  m_workback.create (0);
+  m_workback.safe_grow_cleared (last_basic_block_for_fn (cfun));
+  m_update_list.create (0);
+  m_update_list.safe_grow_cleared (last_basic_block_for_fn (cfun));
+  m_update_list.truncate (0);
+  m_poor_value_list.create (0);
+  m_poor_value_list.safe_grow_cleared (20);
+  m_poor_value_list.truncate (0);
+}
+
+ranger_cache::~ranger_cache ()
+{
+  m_poor_value_list.release ();
+  m_workback.release ();
+  m_update_list.release ();
+}
+
+// Push a request for a new lookup in block BB of name.  Return true if
+// the request is actually made (ie, isn't a duplicate).
+
+bool
+ranger_cache::push_poor_value (basic_block bb, tree name)
+{
+  if (m_poor_value_list.length ())
+    {
+      // Don't push anything else to the same block.  If multiple
+      // things are required, another request will come during a later
+      // evaluation and this prevents oscillation building unnecessary
+      // depth.
+      if ((m_poor_value_list.last ()).bb == bb)
+	return false;
+    }
+
+  struct update_record rec;
+  rec.bb = bb;
+  rec.calc = name;
+  m_poor_value_list.safe_push (rec);
+  return true;
+}
+
+//  Provide a lookup for the gori-computes class to access the best known
+//  range of an ssa_name in any given basic block.  Note, this does no
+//  additional lookups; it just accesses the data that is already known.
+
+void
+ranger_cache::ssa_range_in_bb (irange &r, tree name, basic_block bb)
+{
+  gimple *s = SSA_NAME_DEF_STMT (name);
+  basic_block def_bb = ((s && gimple_bb (s)) ? gimple_bb (s) :
+					       ENTRY_BLOCK_PTR_FOR_FN (cfun));
+  if (bb == def_bb)
+    {
+      // NAME is defined in this block, so request its current value.
+      if (!m_globals.get_global_range (r, name))
+	{
+	  // If it doesn't have a value calculated, it means it's a
+	  // "poor" value being used in some calculation.  Queue it up
+	  // as a poor value to be improved later.
+	  r = gimple_range_global (name);
+	  if (push_poor_value (bb, name))
+	    {
+	      if (DEBUG_RANGE_CACHE)
+		{
+		  fprintf (dump_file,
+			   "*CACHE* no global def in bb %d for ", bb->index);
+		  print_generic_expr (dump_file, name, TDF_SLIM);
+		  fprintf (dump_file, " depth : %d\n",
+			   m_poor_value_list.length ());
+		}
+	    }
+	 }
+    }
+  // Look for the on-entry value of name in BB from the cache.
+  else if (!m_on_entry.get_bb_range (r, name, bb))
+    {
+      // If it has no entry then mark this as a poor value.
+      if (push_poor_value (bb, name))
+	{
+	  if (DEBUG_RANGE_CACHE)
+	    {
+	      fprintf (dump_file,
+		       "*CACHE* no on entry range in bb %d for ", bb->index);
+	      print_generic_expr (dump_file, name, TDF_SLIM);
+	      fprintf (dump_file, " depth : %d\n", m_poor_value_list.length ());
+	    }
+	}
+      // Try to pick up any known global value as a best guess for now.
+      if (!m_globals.get_global_range (r, name))
+	r = gimple_range_global (name);
+    }
+
+  // Check if pointers have any non-null dereferences.  Non-call
+  // exceptions mean we could throw in the middle of the block, so just
+  // punt for now on those.
+  if (r.varying_p () && m_non_null.non_null_deref_p (name, bb)
+      && !cfun->can_throw_non_call_exceptions)
+    r = range_nonzero (TREE_TYPE (name));
+}
+
+// Return a static range for NAME on entry to basic block BB in R.  If
+// calc is true, fill any cache entries required between BB and the
+// def block for NAME.  Otherwise, return false if the cache is empty.
+
+bool
+ranger_cache::block_range (irange &r, basic_block bb, tree name, bool calc)
+{
+  gcc_checking_assert (gimple_range_ssa_p (name));
+
+  if (calc)
+    {
+      gimple *def_stmt = SSA_NAME_DEF_STMT (name);
+      basic_block def_bb = NULL;
+      if (def_stmt)
+	def_bb = gimple_bb (def_stmt);
+      if (!def_bb)
+	{
+	  // If we get to the entry block, this better be a default def
+	  // or range_on_entry was called for a block not dominated by
+	  // the def.  
+	  gcc_checking_assert (SSA_NAME_IS_DEFAULT_DEF (name));
+	  def_bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
+	}
+
+      // There is no range on entry for the definition block.
+      if (def_bb == bb)
+	return false;
+
+      // Otherwise, go figure out what is known in predecessor blocks.
+      fill_block_cache (name, bb, def_bb);
+      gcc_checking_assert (m_on_entry.bb_range_p (name, bb));
+    }
+  return m_on_entry.get_bb_range (r, name, bb);
+}
+
+// Add BB to the list of blocks to update, unless it's already in the list.
+
+void
+ranger_cache::add_to_update (basic_block bb)
+{
+  if (!m_update_list.contains (bb))
+    m_update_list.quick_push (bb);
+}
+
+// If there is anything in the iterative update_list, continue
+// processing NAME until the list of blocks is empty.
+
+void
+ranger_cache::iterative_cache_update (tree name)
+{
+  basic_block bb;
+  edge_iterator ei;
+  edge e;
+  int_range_max new_range;
+  int_range_max current_range;
+  int_range_max e_range;
+
+  // Process each block by seeing if its calculated range on entry is
+  // the same as its cached value. If there is a difference, update
+  // the cache to reflect the new value, and check to see if any
+  // successors have cache entries which may need to be checked for
+  // updates.
+
+  while (m_update_list.length () > 0)
+    {
+      bb = m_update_list.pop ();
+      gcc_checking_assert (m_on_entry.bb_range_p (name, bb));
+      m_on_entry.get_bb_range (current_range, name, bb);
+
+      // Calculate the "new" range on entry by unioning the pred edges.
+      new_range.set_undefined ();
+      FOR_EACH_EDGE (e, ei, bb->preds)
+	{
+	  if (DEBUG_RANGE_CACHE)
+	    fprintf (dump_file, "   edge %d->%d :", e->src->index, bb->index);
+	  // Get whatever range we can for this edge.
+	  if (!outgoing_edge_range_p (e_range, e, name))
+	    {
+	      ssa_range_in_bb (e_range, name, e->src);
+	      if (DEBUG_RANGE_CACHE)
+		{
+		  fprintf (dump_file, "No outgoing edge range, picked up ");
+		  e_range.dump(dump_file);
+		  fprintf (dump_file, "\n");
+		}
+	    }
+	  else
+	    {
+	      if (DEBUG_RANGE_CACHE)
+		{
+		  fprintf (dump_file, "outgoing range :");
+		  e_range.dump(dump_file);
+		  fprintf (dump_file, "\n");
+		}
+	    }
+	  new_range.union_ (e_range);
+	  if (new_range.varying_p ())
+	    break;
+	}
+
+      if (DEBUG_RANGE_CACHE)
+	{
+	  fprintf (dump_file, "FWD visiting block %d for ", bb->index);
+	  print_generic_expr (dump_file, name, TDF_SLIM);
+	  fprintf (dump_file, "  starting range : ");
+	  current_range.dump (dump_file);
+	  fprintf (dump_file, "\n");
+	}
+
+      // If the range on entry has changed, update it.
+      if (new_range != current_range)
+	{
+	  if (DEBUG_RANGE_CACHE) 
+	    {
+	      fprintf (dump_file, "      Updating range to ");
+	      new_range.dump (dump_file);
+	      fprintf (dump_file, "\n      Updating blocks :");
+	    }
+	  m_on_entry.set_bb_range (name, bb, new_range);
+	  // Mark each successor that has a range to re-check its range
+	  FOR_EACH_EDGE (e, ei, bb->succs)
+	    if (m_on_entry.bb_range_p (name, e->dest))
+	      {
+		if (DEBUG_RANGE_CACHE) 
+		  fprintf (dump_file, " bb%d",e->dest->index);
+		add_to_update (e->dest);
+	      }
+	  if (DEBUG_RANGE_CACHE) 
+	    fprintf (dump_file, "\n");
+	}
+    }
+    if (DEBUG_RANGE_CACHE)
+      {
+	fprintf (dump_file, "DONE visiting blocks for ");
+	print_generic_expr (dump_file, name, TDF_SLIM);
+	fprintf (dump_file, "\n");
+      }
+}
+
+// Make sure that the range-on-entry cache for NAME is set for block BB.
+// Work back through the CFG to DEF_BB ensuring the range is calculated
+// on the block/edges leading back to that point.
+
+void
+ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
+{
+  edge_iterator ei;
+  edge e;
+  int_range_max block_result;
+  int_range_max undefined;
+  unsigned poor_list_start = m_poor_value_list.length ();  
+
+  // At this point we shouldn't be looking at the def, entry or exit block.
+  gcc_checking_assert (bb != def_bb && bb != ENTRY_BLOCK_PTR_FOR_FN (cfun) &&
+		       bb != EXIT_BLOCK_PTR_FOR_FN (cfun));
+
+  // If the block cache is set, then we've already visited this block.
+  if (m_on_entry.bb_range_p (name, bb))
+    return;
+
+  // Visit each block back to the DEF.  Initialize each one to UNDEFINED.
+  // By the end, the on-entry cache will be set for every block that was
+  // visited on the way back to the definition.
+  m_workback.truncate (0);
+  m_workback.quick_push (bb);
+  undefined.set_undefined ();
+  m_on_entry.set_bb_range (name, bb, undefined);
+  gcc_checking_assert (m_update_list.length () == 0);
+
+  if (DEBUG_RANGE_CACHE)
+    {
+      fprintf (dump_file, "\n");
+      print_generic_expr (dump_file, name, TDF_SLIM);
+      fprintf (dump_file, " : ");
+    }
+
+  while (m_workback.length () > 0)
+    {
+      basic_block node = m_workback.pop ();
+      if (DEBUG_RANGE_CACHE)
+	{
+	  fprintf (dump_file, "BACK visiting block %d for ", node->index);
+	  print_generic_expr (dump_file, name, TDF_SLIM);
+	  fprintf (dump_file, "\n");
+	}
+
+      FOR_EACH_EDGE (e, ei, node->preds)
+	{
+	  basic_block pred = e->src;
+	  int_range_max r;
+
+	  if (DEBUG_RANGE_CACHE)
+	    fprintf (dump_file, "  %d->%d ",e->src->index, e->dest->index);
+
+	  // If the pred block is the def block, add this BB to the update list.
+	  if (pred == def_bb)
+	    {
+	      add_to_update (node);
+	      continue;
+	    }
+
+	  // If the pred is the entry block but NOT the def, the name is used
+	  // before being defined; it stays undefined ([]) and needs no update.
+	  if (pred == ENTRY_BLOCK_PTR_FOR_FN (cfun))
+	    {
+	      if (DEBUG_RANGE_CACHE)
+		fprintf (dump_file, "entry: bail.");
+	      continue;
+	    }
+
+	  // Regardless of whether we have visited pred or not, if the
+	  // pred has a non-null reference, revisit this block.
+	  if (m_non_null.non_null_deref_p (name, pred))
+	    {
+	      if (DEBUG_RANGE_CACHE)
+		fprintf (dump_file, "nonnull: update ");
+	      add_to_update (node);
+	    }
+
+	  // If the pred block already has a cached range, re-queue this block
+	  // if that range isn't undefined or the edge generates a range.
+	  if (m_on_entry.get_bb_range (r, name, pred))
+	    {
+	      if (DEBUG_RANGE_CACHE)
+		fprintf (dump_file, "has cache, ");
+	      if (!r.undefined_p () || has_edge_range_p (e, name))
+		{
+		  add_to_update (node);
+		  if (DEBUG_RANGE_CACHE)
+		    fprintf (dump_file, "update. ");
+		}
+	      continue;
+	    }
+
+	  if (DEBUG_RANGE_CACHE)
+	    fprintf (dump_file, "pushing undefined pred block. ");
+	  // If the pred hasn't been visited (has no range), add it to
+	  // the list.
+	  gcc_checking_assert (!m_on_entry.bb_range_p (name, pred));
+	  m_on_entry.set_bb_range (name, pred, undefined);
+	  m_workback.quick_push (pred);
+	}
+    }
+
+  if (DEBUG_RANGE_CACHE)
+    fprintf (dump_file, "\n");
+
+  // Now fill in the marked blocks with values.
+  iterative_cache_update (name);
+  if (DEBUG_RANGE_CACHE)
+    fprintf (dump_file, "  iterative update done.\n");
+
+  // Now that the cache has been updated, check to see if there were any 
+  // SSA_NAMES used in filling the cache which were "poor values".
+  // We can evaluate them, and inject any new values into the iteration 
+  // list, and see if it improves any on-entry values.
+  if (poor_list_start != m_poor_value_list.length ())
+    {
+      gcc_checking_assert (poor_list_start < m_poor_value_list.length ());
+      while (poor_list_start < m_poor_value_list.length ())
+	{
+	  // Find a range for this unresolved value.   
+	  // Note, this may spawn new cache filling cycles, but by the time it
+	  // is finished, the work vectors will all be back to the same state
+	  // as before the call.  The update record vector will always be
+	  // returned to the current state upon return.
+	  struct update_record rec = m_poor_value_list.pop ();
+	  basic_block calc_bb = rec.bb;
+	  int_range_max tmp;
+
+	  // The update work list should be empty at this point.
+	  gcc_checking_assert (m_update_list.length () == 0);
+
+	  if (DEBUG_RANGE_CACHE)
+	    {
+	      fprintf (dump_file, "(%d:%d)Calculating ",
+		       m_poor_value_list.length () + 1, poor_list_start);
+	      print_generic_expr (dump_file, name, TDF_SLIM);
+	      fprintf (dump_file, " used poor value for ");
+	      print_generic_expr (dump_file, rec.calc, TDF_SLIM);
+	      fprintf (dump_file, " in bb%d, trying to improve:\n",
+		       calc_bb->index);
+	    }
+
+	  // It must have at least one edge, pick edge 0.  We just want to
+	  // calculate a range at the exit from the block so the caches
+	  // feeding this block will be filled in.
+	  gcc_checking_assert (EDGE_SUCC (calc_bb, 0));
+	  query.range_on_edge (tmp, EDGE_SUCC (calc_bb, 0), rec.calc);
+	  
+	  if (DEBUG_RANGE_CACHE)
+	    fprintf (dump_file, "    Checking successors of bb%d :",
+		     calc_bb->index);
+
+	  // Try recalculating any successor blocks with the new value.
+	  // Note that even if this value is refined from the initial value,
+	  // it may not affect the calculation, but the iterative update
+	  // will resolve that efficiently.
+	  FOR_EACH_EDGE (e, ei, calc_bb->succs)
+	    {
+	      if (DEBUG_RANGE_CACHE)
+		fprintf (dump_file, "bb%d: ", e->dest->index);
+	      // Only update active cache entries.
+	      if (m_on_entry.bb_range_p (name, e->dest))
+		{
+		  if (DEBUG_RANGE_CACHE)
+		    fprintf (dump_file, "update ");
+		  add_to_update (e->dest);
+		}
+	    }
+	  if (DEBUG_RANGE_CACHE)
+	    fprintf (dump_file, "\n");
+	  // Now see if there is a new value.
+	  iterative_cache_update (name);
+	}
+    }
+ 
+}


* [PATCH 5/6] gimple-range
  2020-10-02 16:56 Project Ranger Submission / On Demand VRP Andrew MacLeod
                   ` (3 preceding siblings ...)
  2020-10-02 17:02 ` [PATCH 4/6] gimple-range-cache Andrew MacLeod
@ 2020-10-02 17:04 ` Andrew MacLeod
  2020-10-02 17:06 ` [PATCH 6/6] Hybrid EVRP Andrew MacLeod
  5 siblings, 0 replies; 9+ messages in thread
From: Andrew MacLeod @ 2020-10-02 17:04 UTC (permalink / raw)
  To: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 1434 bytes --]

This is the ranger component that pulls all the other bits together.

Its API is basically the range_query we've already pushed into the
compiler.  The main entry points a client is likely to use are

bool range_of_stmt (irange &r, gimple *, tree name = NULL)
bool range_of_expr (irange &r, tree name, gimple * = NULL)
bool range_on_edge (irange &r, edge e, tree name)

These calculate the range of an SSA name as the result of a statement,
as used on a statement, or as it appears on an edge, respectively.

The ranger is responsible for calculating the range of any gimple
statement, but before doing so it first resolves any dependencies the
statement has.

Given

a_2 = b_3 + c_4

the ranger will first use its own API to resolve range_of_expr (b_3),
then range_of_expr (c_4), and finally calculate a range for a_2 using
the values it gets back.

A lot of the heavy lifting is done by the earlier components; the ranger
is mostly responsible for actually calculating a global value for an
ssa_name based on the stmt, as well as resolving dependencies as much as
possible before asking for a cache fill.  Cycles are handled by breaking
the request cycle if we get back to the same definition, and then using
a best guess up to that point.

Future enhancements will involve updating back edges with more accurate
values when they become available, but that's more of an issue for when
we go to replace the ASSERT driven VRP pass.

[-- Attachment #2: ranger5.diff --]
[-- Type: text/x-patch, Size: 46583 bytes --]


	* gimple-range.h: New File.
	(class gimple_ranger): New.  Ranger base class.
	(gimple_range_handler): New.  Get range-ops handler.
	(gimple_range_ssa_p): New.  Is this a range compatible ssa_name.
	(gimple_range_global): New.  Retrieve legacy global value.
	(class trace_ranger): New.  Ranger with tracing capabilities.

	* gimple-range.cc: New File.
	(adjust_pointer_diff_expr): New.  Ptrdiff adjustments.
	(gimple_range_adjustment): New.  Multi-statement range adjustments.
	(get_tree_range): New.  What range is known about a tree node.
	(gimple_range_fold): New.  Gimple interface to range_op::fold.
	(gimple_range_base_of_assignment): New.  Base name of an assignment.
	(gimple_range_operand1): New.  Symbolic name of operand 1.
	(gimple_range_operand2): New.  Symbolic name of operand 2.
	(gimple_range_calc_op1): New.  Gimple interface to range-op::op1_range.
	(gimple_range_calc_op2): New.  Gimple interface to range-op::op2_range.
	(gimple_ranger::calc_stmt): New.  Calculate a range for a statement.
	(gimple_ranger::range_of_range_op): New.  Handle range_op statements.
	(gimple_ranger::range_of_non_trivial_assignment): New.  Handle pointer
	expressions on ssa_names.
	(gimple_ranger::range_of_phi): New.
	(gimple_ranger::range_of_call): New.
	(gimple_ranger::range_of_builtin_ubsan_call): New.
	(gimple_ranger::range_of_builtin_call): New.
	(gimple_ranger::range_of_cond_expr): New.  Handle x ? y : z.
	(gimple_ranger::range_of_expr): New.  Calculate range on stmt.
	(gimple_ranger::range_on_entry): New.  Calculate range at block entry.
	(gimple_ranger::range_on_exit): New.  Calculate range at block exit.
	(gimple_ranger::range_on_edge): New.  Calculate range on an edge.
	(gimple_ranger::range_of_stmt): New.  Calculate LHS of statement.
	(gimple_ranger::export_global_ranges): New.  Export known ranges to
	the legacy cache SSA_NAME_RANGE_INFO.
	(gimple_ranger::dump): New.
	(gimple_ranger::range_of_ssa_name_with_loop_info): New. SCEV hook.
	(trace_ranger::trace_ranger): New.
	(trace_ranger::dumping): New.  Tracing prefix handler.
	(trace_ranger::trailer): New.  Tracing suffix handler.
	(trace_ranger::range_on_edge): New.  Provide tracing.
	(trace_ranger::range_on_entry): New.  Provide tracing.
	(trace_ranger::range_on_exit): New.  Provide tracing.
	(trace_ranger::range_of_stmt): New.  Provide tracing.
	(trace_ranger::range_of_expr): New.  Provide tracing.

diff --git a/gcc/gimple-range.h b/gcc/gimple-range.h
new file mode 100644
index 00000000000..c45760af393
--- /dev/null
+++ b/gcc/gimple-range.h
@@ -0,0 +1,170 @@
+/* Header file for the GIMPLE range interface.
+   Copyright (C) 2019-2020 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod <amacleod@redhat.com>
+   and Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_GIMPLE_RANGE_STMT_H
+#define GCC_GIMPLE_RANGE_STMT_H
+
+
+#include "range.h"
+#include "range-op.h"
+#include "gimple-range-edge.h"
+#include "gimple-range-gori.h"
+#include "gimple-range-cache.h"
+#include "value-query.h"
+
+// This is the basic range generator interface.
+//
+// This base class provides all the API entry points, but only provides
+// functionality at the statement level.  That is, it can calculate ranges
+// on statements, but does no additional lookup.
+//
+// All the range_of_* methods will return a range if the type is
+// supported by the range engine.  It may be the full range for the
+// type (i.e. varying_p ()) or it may be a refined range.  If the range
+// type is not supported, then false is returned.  Non-statement
+// related methods return whatever the current global value is.
+
+
+class gimple_ranger : public range_query
+{
+public:
+  gimple_ranger () : m_cache (*this) { }
+  virtual bool range_of_stmt (irange &r, gimple *, tree name = NULL) OVERRIDE;
+  virtual bool range_of_expr (irange &r, tree name, gimple * = NULL) OVERRIDE;
+  virtual bool range_on_edge (irange &r, edge e, tree name) OVERRIDE;
+  virtual void range_on_entry (irange &r, basic_block bb, tree name);
+  virtual void range_on_exit (irange &r, basic_block bb, tree name);
+  void export_global_ranges ();
+  void dump (FILE *f);
+protected:
+  bool calc_stmt (irange &r, gimple *s, tree name = NULL_TREE);
+  bool range_of_range_op (irange &r, gimple *s);
+  bool range_of_call (irange &r, gcall *call);
+  bool range_of_cond_expr (irange &r, gassign* cond);
+  ranger_cache m_cache;
+private:
+  bool range_of_phi (irange &r, gphi *phi);
+  bool range_of_non_trivial_assignment (irange &r, gimple *s);
+  bool range_of_builtin_call (irange &r, gcall *call);
+  void range_of_builtin_ubsan_call (irange &r, gcall *call, tree_code code);
+  bool range_with_loop_info (irange &r, tree name);
+  void range_of_ssa_name_with_loop_info (irange &, tree, class loop *,
+					 gphi *);
+};
+
+// Calculate a basic range for a tree expression.
+extern bool get_tree_range (irange &r, tree expr);
+
+// These routines provide a GIMPLE interface to the range-ops code.
+extern tree gimple_range_operand1 (const gimple *s);
+extern tree gimple_range_operand2 (const gimple *s);
+extern tree gimple_range_base_of_assignment (const gimple *s);
+extern bool gimple_range_fold (irange &res, const gimple *s,
+			       const irange &r1);
+extern bool gimple_range_fold (irange &res, const gimple *s,
+			       const irange &r1,
+			       const irange &r2);
+extern bool gimple_range_calc_op1 (irange &r, const gimple *s,
+				   const irange &lhs_range);
+extern bool gimple_range_calc_op1 (irange &r, const gimple *s,
+				   const irange &lhs_range,
+				   const irange &op2_range);
+extern bool gimple_range_calc_op2 (irange &r, const gimple *s,
+				   const irange &lhs_range,
+				   const irange &op1_range);
+
+
+// Return the range_operator pointer for this statement.  This routine
+// can also be used to gate whether a routine is range-ops enabled.
+
+static inline range_operator *
+gimple_range_handler (const gimple *s)
+{
+  if ((gimple_code (s) == GIMPLE_ASSIGN) || (gimple_code (s) == GIMPLE_COND))
+    return range_op_handler (gimple_expr_code (s), gimple_expr_type (s));
+  return NULL;
+}
+
+// Return EXP if it is an SSA_NAME with a type supported by gimple ranges.
+
+static inline tree
+gimple_range_ssa_p (tree exp)
+{
+  if (exp && TREE_CODE (exp) == SSA_NAME &&
+      !SSA_NAME_IS_VIRTUAL_OPERAND (exp) &&
+      irange::supports_type_p (TREE_TYPE (exp)))
+    return exp;
+  return NULL_TREE;
+}
+
+// Return the legacy GCC global range for NAME if it has one, otherwise
+// return VARYING.
+
+static inline value_range
+gimple_range_global (tree name)
+{
+  gcc_checking_assert (gimple_range_ssa_p (name));
+  tree type = TREE_TYPE (name);
+#if 0
+  // Reenable picking up global ranges when we are OK failing tests that look
+  // for builtin_unreachable in the code, like
+  // RUNTESTFLAGS=dg.exp=pr61034.C check-g++
+  // pre-optimizations (inlining) set a global range which causes the ranger
+  // to remove the condition which leads to builtin_unreachable.
+  if (!POINTER_TYPE_P (type) && SSA_NAME_RANGE_INFO (name))
+    {
+      // Return a range from an SSA_NAME's available range.
+      wide_int min, max;
+      enum value_range_kind kind = get_range_info (name, &min, &max);
+      return value_range (type, min, max, kind);
+    }
+#endif
+ // Otherwise return range for the type.
+ return value_range (type);
+}
+
+
+// This class overloads the ranger routines to provide tracing facilities.
+// Entry and exit values for each of the APIs are placed in the dump file.
+
+class trace_ranger : public gimple_ranger
+{
+public:
+  trace_ranger ();
+  virtual bool range_of_stmt (irange &r, gimple *s, tree name = NULL_TREE);
+  virtual bool range_of_expr (irange &r, tree name, gimple *s = NULL);
+  virtual bool range_on_edge (irange &r, edge e, tree name);
+  virtual void range_on_entry (irange &r, basic_block bb, tree name);
+  virtual void range_on_exit (irange &r, basic_block bb, tree name);
+private:
+  static const unsigned bump = 2;
+  unsigned indent;
+  unsigned trace_count;		// Current trace index count.
+
+  bool dumping (unsigned counter, bool trailing = false);
+  bool trailer (unsigned counter, const char *caller, bool result, tree name,
+		const irange &r);
+};
+
+// Flag to enable debugging the various internal Caches.
+#define DEBUG_RANGE_CACHE (0 && dump_file)
+
+#endif // GCC_GIMPLE_RANGE_STMT_H
diff --git a/gcc/gimple-range.cc b/gcc/gimple-range.cc
new file mode 100644
index 00000000000..75c03d6610b
--- /dev/null
+++ b/gcc/gimple-range.cc
@@ -0,0 +1,1284 @@
+/* Code for GIMPLE range related routines.
+   Copyright (C) 2019-2020 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod <amacleod@redhat.com>
+   and Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "insn-codes.h"
+#include "rtl.h"
+#include "tree.h"
+#include "gimple.h"
+#include "ssa.h"
+#include "gimple-pretty-print.h"
+#include "gimple-iterator.h"
+#include "optabs-tree.h"
+#include "gimple-fold.h"
+#include "tree-cfg.h"
+#include "fold-const.h"
+#include "tree-cfg.h"
+#include "wide-int.h"
+#include "fold-const.h"
+#include "case-cfn-macros.h"
+#include "omp-general.h"
+#include "cfgloop.h"
+#include "tree-ssa-loop.h"
+#include "tree-scalar-evolution.h"
+#include "dbgcnt.h"
+#include "alloc-pool.h"
+#include "vr-values.h"
+#include "gimple-range.h"
+
+
+// Adjust the range for a pointer difference where the operands came
+// from a memchr.
+//
+// This notices the following sequence:
+//
+//	def = __builtin_memchr (arg, 0, sz)
+//	n = def - arg
+//
+// The range for N can be narrowed to [0, PTRDIFF_MAX - 1].
+
+static void
+adjust_pointer_diff_expr (irange &res, const gimple *diff_stmt)
+{
+  tree op0 = gimple_assign_rhs1 (diff_stmt);
+  tree op1 = gimple_assign_rhs2 (diff_stmt);
+  tree op0_ptype = TREE_TYPE (TREE_TYPE (op0));
+  tree op1_ptype = TREE_TYPE (TREE_TYPE (op1));
+  gimple *call;
+
+  if (TREE_CODE (op0) == SSA_NAME
+      && TREE_CODE (op1) == SSA_NAME
+      && (call = SSA_NAME_DEF_STMT (op0))
+      && is_gimple_call (call)
+      && gimple_call_builtin_p (call, BUILT_IN_MEMCHR)
+      && TYPE_MODE (op0_ptype) == TYPE_MODE (char_type_node)
+      && TYPE_PRECISION (op0_ptype) == TYPE_PRECISION (char_type_node)
+      && TYPE_MODE (op1_ptype) == TYPE_MODE (char_type_node)
+      && TYPE_PRECISION (op1_ptype) == TYPE_PRECISION (char_type_node)
+      && gimple_call_builtin_p (call, BUILT_IN_MEMCHR)
+      && vrp_operand_equal_p (op1, gimple_call_arg (call, 0))
+      && integer_zerop (gimple_call_arg (call, 1)))
+    {
+      tree max = vrp_val_max (ptrdiff_type_node);
+      wide_int wmax = wi::to_wide (max, TYPE_PRECISION (TREE_TYPE (max)));
+      tree expr_type = gimple_expr_type (diff_stmt);
+      tree range_min = build_zero_cst (expr_type);
+      tree range_max = wide_int_to_tree (expr_type, wmax - 1);
+      int_range<2> r (range_min, range_max);
+      res.intersect (r);
+    }
+}
+
+// This function looks for situations when walking the use/def chains
+// may provide additional contextual range information not exposed on
+// this statement.  Like knowing the IMAGPART return value from a
+// builtin function is a boolean result.
+
+// We should rework how we're called, as we have an op_unknown entry
+// for IMAGPART_EXPR and POINTER_DIFF_EXPR in range-ops just so this
+// function gets called.
+
+static void
+gimple_range_adjustment (irange &res, const gimple *stmt)
+{
+  switch (gimple_expr_code (stmt))
+    {
+    case POINTER_DIFF_EXPR:
+      adjust_pointer_diff_expr (res, stmt);
+      return;
+
+    case IMAGPART_EXPR:
+      {
+	tree name = TREE_OPERAND (gimple_assign_rhs1 (stmt), 0);
+	if (TREE_CODE (name) == SSA_NAME)
+	  {
+	    gimple *def_stmt = SSA_NAME_DEF_STMT (name);
+	    if (def_stmt && is_gimple_call (def_stmt)
+		&& gimple_call_internal_p (def_stmt))
+	      {
+		switch (gimple_call_internal_fn (def_stmt))
+		  {
+		  case IFN_ADD_OVERFLOW:
+		  case IFN_SUB_OVERFLOW:
+		  case IFN_MUL_OVERFLOW:
+		  case IFN_ATOMIC_COMPARE_EXCHANGE:
+		    {
+		      int_range<2> r;
+		      r.set_varying (boolean_type_node);
+		      tree type = TREE_TYPE (gimple_assign_lhs (stmt));
+		      range_cast (r, type);
+		      res.intersect (r);
+		    }
+		  default:
+		    break;
+		  }
+	      }
+	  }
+	break;
+      }
+
+    default:
+      break;
+    }
+}
+
+// Return a range in R for the tree EXPR.  Return true if a range is
+// representable.
+
+bool
+get_tree_range (irange &r, tree expr)
+{
+  tree type;
+  if (TYPE_P (expr))
+    type = expr;
+  else
+    type = TREE_TYPE (expr);
+
+  // Return false if the type isn't supported.
+  if (!irange::supports_type_p (type))
+    return false;
+
+  switch (TREE_CODE (expr))
+    {
+      case INTEGER_CST:
+	r.set (expr, expr);
+	return true;
+
+      case SSA_NAME:
+	r = gimple_range_global (expr);
+	return true;
+
+      case ADDR_EXPR:
+        {
+	  // Handle &var which can show up in phi arguments.
+	  bool ov;
+	  if (tree_single_nonzero_warnv_p (expr, &ov))
+	    {
+	      r = range_nonzero (type);
+	      return true;
+	    }
+	  break;
+	}
+
+      default:
+        break;
+    }
+  r.set_varying (type);
+  return true;
+}
+
+// Fold this unary statement using R1 as operand1's range, returning
+// the result in RES.  Return false if the operation fails.
+
+bool
+gimple_range_fold (irange &res, const gimple *stmt, const irange &r1)
+{
+  gcc_checking_assert (gimple_range_handler (stmt));
+
+  tree type = gimple_expr_type (stmt);
+  // Unary SSA operations require the LHS type as the second range.
+  int_range<2> r2 (type);
+
+  return gimple_range_fold (res, stmt, r1, r2);
+}
+
+// Fold this binary statement using R1 and R2 as the operands ranges,
+// returning the result in RES.  Return false if the operation fails.
+
+bool
+gimple_range_fold (irange &res, const gimple *stmt,
+		   const irange &r1, const irange &r2)
+{
+  gcc_checking_assert (gimple_range_handler (stmt));
+
+  gimple_range_handler (stmt)->fold_range (res, gimple_expr_type (stmt),
+					   r1, r2);
+
+  // If there are any gimple lookups, do those now.
+  gimple_range_adjustment (res, stmt);
+  return true;
+}
+
+// Return the base of the RHS of an assignment.
+
+tree
+gimple_range_base_of_assignment (const gimple *stmt)
+{
+  gcc_checking_assert (gimple_code (stmt) == GIMPLE_ASSIGN);
+  tree op1 = gimple_assign_rhs1 (stmt);
+  if (gimple_assign_rhs_code (stmt) == ADDR_EXPR)
+    return get_base_address (TREE_OPERAND (op1, 0));
+  return op1;
+}
+
+// Return the first operand of this statement if it is a valid operand
+// supported by ranges, otherwise return NULL_TREE.  Special case is
+// &(SSA_NAME expr), return the SSA_NAME instead of the ADDR expr.
+
+tree
+gimple_range_operand1 (const gimple *stmt)
+{
+  gcc_checking_assert (gimple_range_handler (stmt));
+
+  switch (gimple_code (stmt))
+    {
+      case GIMPLE_COND:
+	return gimple_cond_lhs (stmt);
+      case GIMPLE_ASSIGN:
+	{
+	  tree base = gimple_range_base_of_assignment (stmt);
+	  if (base && TREE_CODE (base) == MEM_REF)
+	    {
+	      // If the base address is an SSA_NAME, we return it
+	      // here.  This allows processing of the range of that
+	      // name, while the rest of the expression is simply
+	      // ignored.  The code in range_ops will see the
+	      // ADDR_EXPR and do the right thing.
+	      tree ssa = TREE_OPERAND (base, 0);
+	      if (TREE_CODE (ssa) == SSA_NAME)
+		return ssa;
+	    }
+	  return base;
+	}
+      default:
+	break;
+    }
+  return NULL;
+}
+
+// Return the second operand of statement STMT, otherwise return NULL_TREE.
+
+tree
+gimple_range_operand2 (const gimple *stmt)
+{
+  gcc_checking_assert (gimple_range_handler (stmt));
+
+  switch (gimple_code (stmt))
+    {
+    case GIMPLE_COND:
+      return gimple_cond_rhs (stmt);
+    case GIMPLE_ASSIGN:
+      if (gimple_num_ops (stmt) >= 3)
+	return gimple_assign_rhs2 (stmt);
+    default:
+      break;
+    }
+  return NULL_TREE;
+}
+
+// Calculate what we can determine of the range of this unary
+// statement's operand if the lhs of the expression has the range
+// LHS_RANGE.  Return false if nothing can be determined.
+
+bool
+gimple_range_calc_op1 (irange &r, const gimple *stmt, const irange &lhs_range)
+{
+  gcc_checking_assert (gimple_num_ops (stmt) < 3);
+
+  // An empty range is viral.
+  tree type = TREE_TYPE (gimple_range_operand1 (stmt));
+  if (lhs_range.undefined_p ())
+    {
+      r.set_undefined ();
+      return true;
+    }
+  // Unary operations require the type of the first operand in the
+  // second range position.
+  int_range<2> type_range (type);
+  return gimple_range_handler (stmt)->op1_range (r, type, lhs_range,
+						 type_range);
+}
+
+// Calculate what we can determine of the range of this statement's
+// first operand if the lhs of the expression has the range LHS_RANGE
+// and the second operand has the range OP2_RANGE.  Return false if
+// nothing can be determined.
+
+bool
+gimple_range_calc_op1 (irange &r, const gimple *stmt,
+		       const irange &lhs_range, const irange &op2_range)
+{
+  // Unary operations are allowed to pass a range in for the second operand
+  // as there are often additional restrictions beyond the type which
+  // can be imposed.  See operator_cast::op1_range().
+  tree type = TREE_TYPE (gimple_range_operand1 (stmt));
+  // An empty range is viral.
+  if (op2_range.undefined_p () || lhs_range.undefined_p ())
+    {
+      r.set_undefined ();
+      return true;
+    }
+  return gimple_range_handler (stmt)->op1_range (r, type, lhs_range,
+						 op2_range);
+}
+
+// Calculate what we can determine of the range of this statement's
+// second operand if the lhs of the expression has the range LHS_RANGE
+// and the first operand has the range OP1_RANGE.  Return false if
+// nothing can be determined.
+
+bool
+gimple_range_calc_op2 (irange &r, const gimple *stmt,
+		       const irange &lhs_range, const irange &op1_range)
+{
+  tree type = TREE_TYPE (gimple_range_operand2 (stmt));
+  // An empty range is viral.
+  if (op1_range.undefined_p () || lhs_range.undefined_p ())
+    {
+      r.set_undefined ();
+      return true;
+    }
+  return gimple_range_handler (stmt)->op2_range (r, type, lhs_range,
+						 op1_range);
+}
+
+// Calculate a range for statement S and return it in R. If NAME is provided it
+// represents the SSA_NAME on the LHS of the statement. It is only required
+// if there is more than one lhs/output.  If a range cannot
+// be calculated, return false.
+
+bool
+gimple_ranger::calc_stmt (irange &r, gimple *s, tree name)
+{
+  bool res = false;
+  // If name is specified, make sure it is an LHS of S.
+  gcc_checking_assert (name ? SSA_NAME_DEF_STMT (name) == s : true);
+
+  if (gimple_range_handler (s))
+    res = range_of_range_op (r, s);
+  else if (is_a<gphi *>(s))
+    res = range_of_phi (r, as_a<gphi *> (s));
+  else if (is_a<gcall *>(s))
+    res = range_of_call (r, as_a<gcall *> (s));
+  else if (is_a<gassign *> (s) && gimple_assign_rhs_code (s) == COND_EXPR)
+    res = range_of_cond_expr (r, as_a<gassign *> (s));
+  else
+    {
+      // If no name is specified, try the expression kind.
+      if (!name)
+	{
+	  tree t = gimple_expr_type (s);
+	  if (!irange::supports_type_p (t))
+	    return false;
+	  r.set_varying (t);
+	  return true;
+	}
+      // We don't understand the stmt, so return the global range.
+      r = gimple_range_global (name);
+      return true;
+    }
+  if (res)
+    {
+      if (r.undefined_p ())
+	return true;
+      if (name && TREE_TYPE (name) != r.type ())
+	range_cast (r, TREE_TYPE (name));
+      return true;
+    }
+  return false;
+}
+
+// Calculate a range for range_op statement S and return it in R.
+// If a range cannot be calculated, return false.
+
+bool
+gimple_ranger::range_of_range_op (irange &r, gimple *s)
+{
+  int_range_max range1, range2;
+  tree type = gimple_expr_type (s);
+  gcc_checking_assert (irange::supports_type_p (type));
+
+  tree op1 = gimple_range_operand1 (s);
+  tree op2 = gimple_range_operand2 (s);
+
+  if (range_of_non_trivial_assignment (r, s))
+    return true;
+
+  if (range_of_expr (range1, op1, s))
+    {
+      if (!op2)
+	return gimple_range_fold (r, s, range1);
+
+      if (range_of_expr (range2, op2, s))
+	return gimple_range_fold (r, s, range1, range2);
+    }
+  r.set_varying (type);
+  return true;
+}
+
+// Calculate the range of a non-trivial assignment; that is, one
+// involving arithmetic on an SSA name (for example, an ADDR_EXPR).
+// Return the range in R.
+//
+// If a range cannot be calculated, return false.
+
+bool
+gimple_ranger::range_of_non_trivial_assignment (irange &r, gimple *stmt)
+{
+  if (gimple_code (stmt) != GIMPLE_ASSIGN)
+    return false;
+
+  tree base = gimple_range_base_of_assignment (stmt);
+  if (base && TREE_CODE (base) == MEM_REF
+      && TREE_CODE (TREE_OPERAND (base, 0)) == SSA_NAME)
+    {
+      int_range_max range1;
+      tree ssa = TREE_OPERAND (base, 0);
+      if (range_of_expr (range1, ssa, stmt))
+	{
+	  tree type = TREE_TYPE (ssa);
+	  range_operator *op = range_op_handler (POINTER_PLUS_EXPR, type);
+	  int_range<2> offset (TREE_OPERAND (base, 1), TREE_OPERAND (base, 1));
+	  op->fold_range (r, type, range1, offset);
+	  return true;
+	}
+    }
+  return false;
+}
+
+// Calculate a range for phi statement S and return it in R.
+// If a range cannot be calculated, return false.
+
+bool
+gimple_ranger::range_of_phi (irange &r, gphi *phi)
+{
+  tree phi_def = gimple_phi_result (phi);
+  tree type = TREE_TYPE (phi_def);
+  int_range_max arg_range;
+  unsigned x;
+
+  if (!irange::supports_type_p (type))
+    return false;
+
+  // Start with an empty range, unioning in each argument's range.
+  r.set_undefined ();
+  for (x = 0; x < gimple_phi_num_args (phi); x++)
+    {
+      tree arg = gimple_phi_arg_def (phi, x);
+      edge e = gimple_phi_arg_edge (phi, x);
+
+      range_on_edge (arg_range, e, arg);
+      r.union_ (arg_range);
+      // Once the value reaches varying, stop looking.
+      if (r.varying_p ())
+	break;
+    }
+
+  // If SCEV is available, query if this PHI has any known values.
+  if (scev_initialized_p () && !POINTER_TYPE_P (TREE_TYPE (phi_def)))
+    {
+      value_range loop_range;
+      class loop *l = loop_containing_stmt (phi);
+      if (l)
+        {
+	  range_of_ssa_name_with_loop_info (loop_range, phi_def, l, phi);
+	  if (!loop_range.varying_p ())
+	    {
+	      if (dump_file && (dump_flags & TDF_DETAILS))
+		{
+		  fprintf (dump_file, "   Loops range found for ");
+		  print_generic_expr (dump_file, phi_def, TDF_SLIM);
+		  fprintf (dump_file, ": ");
+		  loop_range.dump (dump_file);
+		  fprintf (dump_file, " and calculated range :");
+		  r.dump (dump_file);
+		  fprintf (dump_file, "\n");
+		}
+	      r.intersect (loop_range);
+	    }
+	}
+    }
+
+  return true;
+}
+
+// Calculate a range for call statement S and return it in R.
+// If a range cannot be calculated, return false.
+
+bool
+gimple_ranger::range_of_call (irange &r, gcall *call)
+{
+  tree type = gimple_call_return_type (call);
+  tree lhs = gimple_call_lhs (call);
+  bool strict_overflow_p;
+
+  if (!irange::supports_type_p (type))
+    return false;
+
+  if (range_of_builtin_call (r, call))
+    ;
+  else if (gimple_stmt_nonnegative_warnv_p (call, &strict_overflow_p))
+    r.set (build_int_cst (type, 0), TYPE_MAX_VALUE (type));
+  else if (gimple_call_nonnull_result_p (call)
+	   || gimple_call_nonnull_arg (call))
+    r = range_nonzero (type);
+  else
+    r.set_varying (type);
+
+  // If there is an LHS, intersect that with what is known.
+  if (lhs)
+    {
+      value_range def;
+      def = gimple_range_global (lhs);
+      r.intersect (def);
+    }
+  return true;
+}
+
+
+void
+gimple_ranger::range_of_builtin_ubsan_call (irange &r, gcall *call,
+					    tree_code code)
+{
+  gcc_checking_assert (code == PLUS_EXPR || code == MINUS_EXPR
+		       || code == MULT_EXPR);
+  tree type = gimple_call_return_type (call);
+  range_operator *op = range_op_handler (code, type);
+  gcc_checking_assert (op);
+  int_range_max ir0, ir1;
+  tree arg0 = gimple_call_arg (call, 0);
+  tree arg1 = gimple_call_arg (call, 1);
+  gcc_assert (range_of_expr (ir0, arg0, call));
+  gcc_assert (range_of_expr (ir1, arg1, call));
+
+  bool saved_flag_wrapv = flag_wrapv;
+  // Pretend the arithmetic is wrapping.  If there is any overflow,
+  // we'll complain, but will actually do a wrapping operation.
+  flag_wrapv = 1;
+  op->fold_range (r, type, ir0, ir1);
+  flag_wrapv = saved_flag_wrapv;
+
+  // If for both arguments vrp_valueize returned non-NULL, this should
+  // have been already folded and if not, it wasn't folded because of
+  // overflow.  Avoid removing the UBSAN_CHECK_* calls in that case.
+  if (r.singleton_p ())
+    r.set_varying (type);
+}
+
+
+bool
+gimple_ranger::range_of_builtin_call (irange &r, gcall *call)
+{
+  combined_fn func = gimple_call_combined_fn (call);
+  if (func == CFN_LAST)
+    return false;
+
+  tree type = gimple_call_return_type (call);
+  tree arg;
+  int mini, maxi, zerov, prec;
+  scalar_int_mode mode;
+
+  switch (func)
+    {
+    case CFN_BUILT_IN_CONSTANT_P:
+      if (cfun->after_inlining)
+	{
+	  r.set_zero (type);
+	  // r.equiv_clear ();
+	  return true;
+	}
+      arg = gimple_call_arg (call, 0);
+      if (range_of_expr (r, arg, call) && r.singleton_p ())
+	{
+	  r.set (build_one_cst (type), build_one_cst (type));
+	  return true;
+	}
+      break;
+
+    CASE_CFN_FFS:
+    CASE_CFN_POPCOUNT:
+      // __builtin_ffs* and __builtin_popcount* return [0, prec].
+      arg = gimple_call_arg (call, 0);
+      prec = TYPE_PRECISION (TREE_TYPE (arg));
+      mini = 0;
+      maxi = prec;
+      gcc_assert (range_of_expr (r, arg, call));
+      // If arg is non-zero, then ffs or popcount are non-zero.
+      if (!range_includes_zero_p (&r))
+	mini = 1;
+      // If some high bits are known to be zero, decrease the maximum.
+      if (!r.undefined_p ())
+	{
+	  wide_int max = r.upper_bound ();
+	  maxi = wi::floor_log2 (max) + 1;
+	}
+      r.set (build_int_cst (type, mini), build_int_cst (type, maxi));
+      return true;
+
+    CASE_CFN_PARITY:
+      r.set (build_zero_cst (type), build_one_cst (type));
+      return true;
+
+    CASE_CFN_CLZ:
+      // __builtin_c[lt]z* return [0, prec-1], except when the
+      // argument is 0, but that is undefined behavior.
+      //
+      // On many targets where the CLZ RTL or optab value is defined
+      // for 0, the value is prec, so include that in the range by
+      // default.
+      arg = gimple_call_arg (call, 0);
+      prec = TYPE_PRECISION (TREE_TYPE (arg));
+      mini = 0;
+      maxi = prec;
+      mode = SCALAR_INT_TYPE_MODE (TREE_TYPE (arg));
+      if (optab_handler (clz_optab, mode) != CODE_FOR_nothing
+	  && CLZ_DEFINED_VALUE_AT_ZERO (mode, zerov)
+	  // Only handle the single common value.
+	  && zerov != prec)
+	// Magic value to give up, unless we can prove arg is non-zero.
+	mini = -2;
+
+      gcc_assert (range_of_expr (r, arg, call));
+      // From clz of minimum we can compute result maximum.
+      if (r.constant_p ())
+	{
+	  maxi = prec - 1 - wi::floor_log2 (r.lower_bound ());
+	  if (maxi != prec)
+	    mini = 0;
+	}
+      else if (!range_includes_zero_p (&r))
+	{
+	  maxi = prec - 1;
+	  mini = 0;
+	}
+      if (mini == -2)
+	break;
+      // From clz of maximum we can compute result minimum.
+      if (r.constant_p ())
+	{
+	  mini = prec - 1 - wi::floor_log2 (r.upper_bound ());
+	  if (mini == prec)
+	    break;
+	}
+      if (mini == -2)
+	break;
+      r.set (build_int_cst (type, mini), build_int_cst (type, maxi));
+      return true;
+
+    CASE_CFN_CTZ:
+      // __builtin_ctz* return [0, prec-1], except for when the
+      // argument is 0, but that is undefined behavior.
+      //
+      // If there is a ctz optab for this mode and
+      // CTZ_DEFINED_VALUE_AT_ZERO, include that in the range,
+      // otherwise just assume 0 won't be seen.
+      arg = gimple_call_arg (call, 0);
+      prec = TYPE_PRECISION (TREE_TYPE (arg));
+      mini = 0;
+      maxi = prec - 1;
+      mode = SCALAR_INT_TYPE_MODE (TREE_TYPE (arg));
+      if (optab_handler (ctz_optab, mode) != CODE_FOR_nothing
+	  && CTZ_DEFINED_VALUE_AT_ZERO (mode, zerov))
+	{
+	  // Handle only the two common values.
+	  if (zerov == -1)
+	    mini = -1;
+	  else if (zerov == prec)
+	    maxi = prec;
+	  else
+	    // Magic value to give up, unless we can prove arg is non-zero.
+	    mini = -2;
+	}
+      gcc_assert (range_of_expr (r, arg, call));
+      if (!r.undefined_p ())
+	{
+	  if (r.lower_bound () != 0)
+	    {
+	      mini = 0;
+	      maxi = prec - 1;
+	    }
+	  // If some high bits are known to be zero, we can decrease
+	  // the maximum.
+	  wide_int max = r.upper_bound ();
+	  if (max == 0)
+	    break;
+	  maxi = wi::floor_log2 (max);
+	}
+      if (mini == -2)
+	break;
+      r.set (build_int_cst (type, mini), build_int_cst (type, maxi));
+      return true;
+
+    CASE_CFN_CLRSB:
+      arg = gimple_call_arg (call, 0);
+      prec = TYPE_PRECISION (TREE_TYPE (arg));
+      r.set (build_int_cst (type, 0), build_int_cst (type, prec - 1));
+      return true;
+    case CFN_UBSAN_CHECK_ADD:
+      range_of_builtin_ubsan_call (r, call, PLUS_EXPR);
+      return true;
+    case CFN_UBSAN_CHECK_SUB:
+      range_of_builtin_ubsan_call (r, call, MINUS_EXPR);
+      return true;
+    case CFN_UBSAN_CHECK_MUL:
+      range_of_builtin_ubsan_call (r, call, MULT_EXPR);
+      return true;
+
+    case CFN_GOACC_DIM_SIZE:
+    case CFN_GOACC_DIM_POS:
+      // Optimizing these two internal functions helps the loop
+      // optimizer eliminate outer comparisons.  Size is [1,N]
+      // and pos is [0,N-1].
+      {
+	bool is_pos = func == CFN_GOACC_DIM_POS;
+	int axis = oacc_get_ifn_dim_arg (call);
+	int size = oacc_get_fn_dim_size (current_function_decl, axis);
+	if (!size)
+	  // If it's dynamic, the backend might know a hardware limitation.
+	  size = targetm.goacc.dim_limit (axis);
+
+	r.set (build_int_cst (type, is_pos ? 0 : 1),
+	       size
+	       ? build_int_cst (type, size - is_pos) : vrp_val_max (type));
+	return true;
+      }
+
+    case CFN_BUILT_IN_STRLEN:
+      if (tree lhs = gimple_call_lhs (call))
+	if (ptrdiff_type_node
+	    && (TYPE_PRECISION (ptrdiff_type_node)
+		== TYPE_PRECISION (TREE_TYPE (lhs))))
+	  {
+	    tree type = TREE_TYPE (lhs);
+	    tree max = vrp_val_max (ptrdiff_type_node);
+	    wide_int wmax
+	      = wi::to_wide (max, TYPE_PRECISION (TREE_TYPE (max)));
+	    tree range_min = build_zero_cst (type);
+	    // To account for the terminating NULL, the maximum length
+	    // is one less than the maximum array size, which in turn
+	    // is one less than PTRDIFF_MAX (or SIZE_MAX where it's
+	    // smaller than the former type).
+	    // FIXME: Use max_object_size() - 1 here.
+	    tree range_max = wide_int_to_tree (type, wmax - 2);
+	    r.set (range_min, range_max);
+	    return true;
+	  }
+      break;
+    default:
+      break;
+    }
+  return false;
+}
+
+
+
+// Calculate a range for COND_EXPR statement S and return it in R.
+// If a range cannot be calculated, return false.
+
+bool
+gimple_ranger::range_of_cond_expr (irange &r, gassign *s)
+{
+  int_range_max cond_range, range1, range2;
+  tree cond = gimple_assign_rhs1 (s);
+  tree op1 = gimple_assign_rhs2 (s);
+  tree op2 = gimple_assign_rhs3 (s);
+
+  gcc_checking_assert (gimple_assign_rhs_code (s) == COND_EXPR);
+  gcc_checking_assert (useless_type_conversion_p  (TREE_TYPE (op1),
+						   TREE_TYPE (op2)));
+  if (!irange::supports_type_p (TREE_TYPE (op1)))
+    return false;
+
+  gcc_assert (range_of_expr (cond_range, cond, s));
+  gcc_assert (range_of_expr (range1, op1, s));
+  gcc_assert (range_of_expr (range2, op2, s));
+
+  // If the condition is known, choose the appropriate expression.
+  if (cond_range.singleton_p ())
+    {
+      // False, pick second operand.
+      if (cond_range.zero_p ())
+	r = range2;
+      else
+	r = range1;
+    }
+  else
+    {
+      r = range1;
+      r.union_ (range2);
+    }
+  return true;
+}
+
+bool
+gimple_ranger::range_of_expr (irange &r, tree expr, gimple *stmt)
+{
+  if (!gimple_range_ssa_p (expr))
+    return get_tree_range (r, expr);
+
+  // If there is no statement, just get the global value.
+  if (!stmt)
+    {
+      if (!m_cache.m_globals.get_global_range (r, expr))
+        r = gimple_range_global (expr);
+      return true;
+    }
+
+  basic_block bb = gimple_bb (stmt);
+  gimple *def_stmt = SSA_NAME_DEF_STMT (expr);
+
+  // If name is defined in this block, try to get a range from S.
+  if (def_stmt && gimple_bb (def_stmt) == bb)
+    gcc_assert (range_of_stmt (r, def_stmt, expr));
+  else
+    // Otherwise OP comes from outside this block, use range on entry.
+    range_on_entry (r, bb, expr);
+
+  // No range yet, see if there is a dereference in the block.
+  // We don't care if it's between the def and a use within a block
+  // because the entire block must be executed anyway.
+  // FIXME: For non-call exceptions we could have a statement throw
+  // which causes an early block exit, in which case we may need to
+  // walk from S back to the def/top of block to make sure the deref
+  // happens between S and there before claiming there is a deref.
+  // Punt for now.
+  if (!cfun->can_throw_non_call_exceptions && r.varying_p ()
+      && m_cache.m_non_null.non_null_deref_p (expr, bb))
+    r = range_nonzero (TREE_TYPE (expr));
+
+  return true;
+}
+
+// Return the range of NAME on entry to block BB in R.
+
+void
+gimple_ranger::range_on_entry (irange &r, basic_block bb, tree name)
+{
+  int_range_max entry_range;
+  gcc_checking_assert (gimple_range_ssa_p (name));
+
+  // Start with any known range
+  gcc_assert (range_of_stmt (r, SSA_NAME_DEF_STMT (name), name));
+
+  // Now see if there is any on_entry value which may refine it.
+  if (m_cache.block_range (entry_range, bb, name))
+    r.intersect (entry_range);
+}
+
+// Calculate the range for NAME at the end of block BB and return it in R.
+
+void
+gimple_ranger::range_on_exit (irange &r, basic_block bb, tree name)
+{
+  // on-exit from the exit block?
+  gcc_checking_assert (bb != EXIT_BLOCK_PTR_FOR_FN (cfun));
+
+  gimple *s = last_stmt (bb);
+  // If there is no statement in the block and this isn't the entry
+  // block, go get the range_on_entry for this block.  For the entry
+  // block, a NULL stmt will return the global value for NAME.
+  if (!s && bb != ENTRY_BLOCK_PTR_FOR_FN (cfun))
+    range_on_entry (r, bb, name);
+  else
+    gcc_assert (range_of_expr (r, name, s));
+  gcc_checking_assert (r.undefined_p ()
+		       || types_compatible_p (r.type(), TREE_TYPE (name)));
+}
+
+// Calculate a range for NAME on edge E and return it in R.
+
+bool
+gimple_ranger::range_on_edge (irange &r, edge e, tree name)
+{
+  int_range_max edge_range;
+  gcc_checking_assert (irange::supports_type_p (TREE_TYPE (name)));
+
+  // PHI arguments can be constants, catch these here.
+  if (!gimple_range_ssa_p (name))
+    {
+      gcc_assert (range_of_expr (r, name));
+      return true;
+    }
+
+  range_on_exit (r, e->src, name);
+  gcc_checking_assert  (r.undefined_p ()
+			|| types_compatible_p (r.type(), TREE_TYPE (name)));
+
+  // Check to see if NAME is defined on edge e.
+  if (m_cache.outgoing_edge_range_p (edge_range, e, name))
+    r.intersect (edge_range);
+
+  return true;
+}
+
+// Calculate a range for statement S and return it in R.  If NAME is
+// provided it represents the SSA_NAME on the LHS of the statement.
+// It is only required if there is more than one lhs/output.  Check
+// the global cache for NAME first to see if the evaluation can be
+// avoided.  If a range cannot be calculated, return false.
+
+bool
+gimple_ranger::range_of_stmt (irange &r, gimple *s, tree name)
+{
+  // If no name, simply call the base routine.
+  if (!name)
+    name = gimple_get_lhs (s);
+
+  if (!name)
+    return calc_stmt (r, s, NULL_TREE);
+
+  gcc_checking_assert (TREE_CODE (name) == SSA_NAME
+		       && irange::supports_type_p (TREE_TYPE (name)));
+
+  // If this STMT has already been processed, return that value.
+  if (m_cache.m_globals.get_global_range (r, name))
+    return true;
+  // Avoid infinite recursion by initializing global cache
+  int_range_max tmp = gimple_range_global (name);
+  m_cache.m_globals.set_global_range (name, tmp);
+
+  gcc_assert (calc_stmt (r, s, name));
+
+  if (is_a<gphi *> (s))
+    r.intersect (tmp);
+  m_cache.m_globals.set_global_range (name, r);
+  return true;
+}
+
+// This routine will export whatever global ranges are known to GCC's
+// SSA_NAME_RANGE_INFO fields.
+
+void
+gimple_ranger::export_global_ranges ()
+{
+  unsigned x;
+  int_range_max r;
+  if (dump_file)
+    {
+      fprintf (dump_file, "Exported global range table\n");
+      fprintf (dump_file, "===========================\n");
+    }
+
+  for (x = 1; x < num_ssa_names; x++)
+    {
+      tree name = ssa_name (x);
+      if (name && !SSA_NAME_IN_FREE_LIST (name)
+	  && gimple_range_ssa_p (name)
+	  && m_cache.m_globals.get_global_range (r, name)
+	  && !r.varying_p())
+	{
+	  // Make sure the new range is a subset of the old range.
+	  int_range_max old_range;
+	  old_range = gimple_range_global (name);
+	  old_range.intersect (r);
+	  /* Disable this while we fix tree-ssa/pr61743-2.c.  */
+	  //gcc_checking_assert (old_range == r);
+
+	  // ?? set_range_info currently cannot record non-null pointer ranges.
+	  if (!POINTER_TYPE_P (TREE_TYPE (name)) && !r.undefined_p ())
+	    {
+	      value_range vr = r;
+	      set_range_info (name, vr);
+	      if (dump_file)
+		{
+		  print_generic_expr (dump_file, name , TDF_SLIM);
+		  fprintf (dump_file, " --> ");
+		  vr.dump (dump_file);
+		  fprintf (dump_file, "\n");
+		  fprintf (dump_file, "         irange : ");
+		  r.dump (dump_file);
+		  fprintf (dump_file, "\n");
+		}
+	    }
+	}
+    }
+}
+
+// Print the known table values to file F.
+
+void
+gimple_ranger::dump (FILE *f)
+{
+  basic_block bb;
+
+  FOR_EACH_BB_FN (bb, cfun)
+    {
+      unsigned x;
+      edge_iterator ei;
+      edge e;
+      int_range_max range;
+      fprintf (f, "\n=========== BB %d ============\n", bb->index);
+      m_cache.m_on_entry.dump (f, bb);
+
+      dump_bb (f, bb, 4, TDF_NONE);
+
+      // Now find any globals defined in this block.
+      for (x = 1; x < num_ssa_names; x++)
+	{
+	  tree name = ssa_name (x);
+	  if (gimple_range_ssa_p (name) && SSA_NAME_DEF_STMT (name)
+	      && gimple_bb (SSA_NAME_DEF_STMT (name)) == bb
+	      && m_cache.m_globals.get_global_range (range, name))
+	    {
+	      if (!range.varying_p ())
+	       {
+		 print_generic_expr (f, name, TDF_SLIM);
+		 fprintf (f, " : ");
+		 range.dump (f);
+		 fprintf (f, "\n");
+	       }
+
+	    }
+	}
+
+      // And now outgoing edges, if they define anything.
+      FOR_EACH_EDGE (e, ei, bb->succs)
+	{
+	  for (x = 1; x < num_ssa_names; x++)
+	    {
+	      tree name = gimple_range_ssa_p (ssa_name (x));
+	      if (name && m_cache.outgoing_edge_range_p (range, e, name))
+		{
+		  gimple *s = SSA_NAME_DEF_STMT (name);
+		  // Only print the range if this is the def block, or
+		  // the on entry cache for either end of the edge is
+		  // set.
+		  if ((s && bb == gimple_bb (s))
+		      || m_cache.block_range (range, bb, name, false)
+		      || m_cache.block_range (range, e->dest, name, false))
+		    {
+		      range_on_edge (range, e, name);
+		      if (!range.varying_p ())
+			{
+			  fprintf (f, "%d->%d ", e->src->index,
+				   e->dest->index);
+			  char c = ' ';
+			  if (e->flags & EDGE_TRUE_VALUE)
+			    fprintf (f, " (T)%c", c);
+			  else if (e->flags & EDGE_FALSE_VALUE)
+			    fprintf (f, " (F)%c", c);
+			  else
+			    fprintf (f, "     ");
+			  print_generic_expr (f, name, TDF_SLIM);
+			  fprintf (f, " : \t");
+			  range.dump (f);
+			  fprintf (f, "\n");
+			}
+		    }
+		}
+	    }
+	}
+    }
+
+  m_cache.m_globals.dump (dump_file);
+  fprintf (f, "\n");
+
+  if (dump_flags & TDF_DETAILS)
+    {
+      fprintf (f, "\nDUMPING GORI MAP\n");
+      m_cache.dump (f);
+      fprintf (f, "\n");
+    }
+}
+
+// If SCEV has any information about phi node NAME, return it as a range in R.
+
+void
+gimple_ranger::range_of_ssa_name_with_loop_info (irange &r, tree name,
+						 class loop *l, gphi *phi)
+{
+  gcc_checking_assert (TREE_CODE (name) == SSA_NAME);
+  tree min, max, type = TREE_TYPE (name);
+  if (bounds_of_var_in_loop (&min, &max, this, l, phi, name))
+    {
+      // ?? We could do better here.  Since MIN/MAX can only be an
+      // SSA, SSA +- INTEGER_CST, or INTEGER_CST, we could easily call
+      // the ranger and solve anything not an integer.
+      if (TREE_CODE (min) != INTEGER_CST)
+	min = vrp_val_min (type);
+      if (TREE_CODE (max) != INTEGER_CST)
+	max = vrp_val_max (type);
+      r.set (min, max);
+    }
+  else
+    r.set_varying (type);
+}
+
+// --------------------------------------------------------------------------
+// trace_ranger implementation.
+
+
+trace_ranger::trace_ranger ()
+{
+  indent = 0;
+  trace_count = 0;
+}
+
+// If dumping, return true and print the prefix for the next output line.
+
+bool
+trace_ranger::dumping (unsigned counter, bool trailing)
+{
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      // Print counter index as well as INDENT spaces.
+      if (!trailing)
+	fprintf (dump_file, " %-7u ", counter);
+      else
+	fprintf (dump_file, "         ");
+      unsigned x;
+      for (x = 0; x < indent; x++)
+	fputc (' ', dump_file);
+      return true;
+    }
+  return false;
+}
+
+// After calling a routine, if dumping, print the CALLER, NAME, and RESULT,
+// returning RESULT.
+
+bool
+trace_ranger::trailer (unsigned counter, const char *caller, bool result,
+		       tree name, const irange &r)
+{
+  if (dumping (counter, true))
+    {
+      indent -= bump;
+      fputs (result ? "TRUE : " : "FALSE : ", dump_file);
+      fprintf (dump_file, "(%u) ", counter);
+      fputs (caller, dump_file);
+      fputs (" (", dump_file);
+      if (name)
+	print_generic_expr (dump_file, name, TDF_SLIM);
+      fputs (") ", dump_file);
+      if (result)
+	{
+	  r.dump (dump_file);
+	  fputc ('\n', dump_file);
+	}
+      else
+	fputc ('\n', dump_file);
+      // Marks the end of a request.
+      if (indent == 0)
+	fputc ('\n', dump_file);
+    }
+  return result;
+}
+
+// Tracing version of range_on_edge.  Wraps the base version with trace output.
+
+bool
+trace_ranger::range_on_edge (irange &r, edge e, tree name)
+{
+  unsigned idx = ++trace_count;
+  if (dumping (idx))
+    {
+      fprintf (dump_file, "range_on_edge (");
+      print_generic_expr (dump_file, name, TDF_SLIM);
+      fprintf (dump_file, ") on edge %d->%d\n", e->src->index, e->dest->index);
+      indent += bump;
+    }
+
+  bool res = gimple_ranger::range_on_edge (r, e, name);
+  trailer (idx, "range_on_edge", true, name, r);
+  return res;
+}
+
+// Tracing version of range_on_entry.  Wraps the base version with trace output.
+
+void
+trace_ranger::range_on_entry (irange &r, basic_block bb, tree name)
+{
+  unsigned idx = ++trace_count;
+  if (dumping (idx))
+    {
+      fprintf (dump_file, "range_on_entry (");
+      print_generic_expr (dump_file, name, TDF_SLIM);
+      fprintf (dump_file, ") to BB %d\n", bb->index);
+      indent += bump;
+    }
+
+  gimple_ranger::range_on_entry (r, bb, name);
+
+  trailer (idx, "range_on_entry", true, name, r);
+}
+
+// Tracing version of range_on_exit.  Wraps the base version with trace output.
+
+void
+trace_ranger::range_on_exit (irange &r, basic_block bb, tree name)
+{
+  unsigned idx = ++trace_count;
+  if (dumping (idx))
+    {
+      fprintf (dump_file, "range_on_exit (");
+      print_generic_expr (dump_file, name, TDF_SLIM);
+      fprintf (dump_file, ") from BB %d\n", bb->index);
+      indent += bump;
+    }
+
+  gimple_ranger::range_on_exit (r, bb, name);
+
+  trailer (idx, "range_on_exit", true, name, r);
+}
+
+// Tracing version of range_of_stmt.  Wraps the base version with trace output.
+
+bool
+trace_ranger::range_of_stmt (irange &r, gimple *s, tree name)
+{
+  bool res;
+  unsigned idx = ++trace_count;
+  if (dumping (idx))
+    {
+      fprintf (dump_file, "range_of_stmt (");
+      if (name)
+	print_generic_expr (dump_file, name, TDF_SLIM);
+      fputs (") at stmt ", dump_file);
+      print_gimple_stmt (dump_file, s, 0, TDF_SLIM);
+      indent += bump;
+    }
+
+  res = gimple_ranger::range_of_stmt (r, s, name);
+
+  return trailer (idx, "range_of_stmt", res, name, r);
+}
+
+// Tracing version of range_of_expr.  Wraps the base version with trace output.
+
+bool
+trace_ranger::range_of_expr (irange &r, tree name, gimple *s)
+{
+  bool res;
+  unsigned idx = ++trace_count;
+  if (dumping (idx))
+    {
+      fprintf (dump_file, "range_of_expr(");
+      print_generic_expr (dump_file, name, TDF_SLIM);
+      fputs (")", dump_file);
+      if (s)
+	{
+	  fputs (" at stmt ", dump_file);
+	  print_gimple_stmt (dump_file, s, 0, TDF_SLIM);
+	}
+      else
+	fputs ("\n", dump_file);
+      indent += bump;
+    }
+
+  res = gimple_ranger::range_of_expr (r, name, s);
+
+  return trailer (idx, "range_of_expr", res, name, r);
+}

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 6/6] Hybrid EVRP
  2020-10-02 16:56 Project Ranger Submission / On Demand VRP Andrew MacLeod
                   ` (4 preceding siblings ...)
  2020-10-02 17:04 ` [PATCH 5/6] gimple-range Andrew MacLeod
@ 2020-10-02 17:06 ` Andrew MacLeod
  5 siblings, 0 replies; 9+ messages in thread
From: Andrew MacLeod @ 2020-10-02 17:06 UTC (permalink / raw)
  To: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 2752 bytes --]

The patch switches EVRP to a hybrid pass which utilizes both the ranger
and the classic EVRP machinery.

It introduces a new undocumented option:

-fevrp-mode=   which can be one of the following:

evrp-only   : Classic EVRP mode, identical to what is in trunk now.
rvrp-only   : Runs EVRP using *only* the ranger for ranges.
evrp-first  : A hybrid mode which first tries to satisfy simplifications
with EVRP, and if that fails, tries again with the ranger.
rvrp-first  : Another hybrid mode; this one tries to simplify with the
ranger first, then with EVRP.
rvrp-trace  : Same as rvrp-only, except a lot of tracing information is
also dumped to the listing file.
rvrp-debug  : Similar to rvrp-trace, except it also includes a lot of
debug information for the on-entry caches.
trace       : Runs in evrp-first mode, but turns on tracing in the
ranger.

The default mode is evrp-first.
This gives us similar functionality to what trunk currently has, except
it is enhanced by trying to use the ranger to find additional cases.

We see numerous places where the ranger provides enhanced results; the
primary cases are:
   a) When switches are involved.  We provide very precise ranges to
each switch case, including the default, and we see cases where we can
eliminate branches because of that.
   b) We track ranges on edges quite accurately, and are not limited to
single-entry blocks.  In particular, we are seeing a number of places
where ranges are being propagated into PHIs that were not before:

For example, from PR 81192:

   if (j_8(D) != 2147483647)
     goto <bb 4>; [50.00%]
   else
     goto <bb 5>; [50.00%]
<bb 4> :
   iftmp.2_11 = j_8(D) + 1;
<bb 5> :
   # iftmp.2_12 = PHI <j_8(D)(3), iftmp.2_11(4)>

Hybrid mode now recognizes that a constant can be propagated into the
3->5 edge and produces:
   # iftmp.2_12 = PHI <2147483647(3), iftmp.2_11(4)>

As a result, we're finding that a lot of jump-threading opportunities
are exposed earlier.


The patch provides 3 EVRP passes, and uses the option to choose which of
the 3 is invoked.  You can see from the patch how interchangeable we
have managed to make the range engines.

The goal here is to continue exercising both engines regularly, while
making it easy to detect when one engine is better.  When a dump file is
requested for the pass, any variance in results between the two engines
is highlighted by lines such as:

    EVRP:hybrid: RVRP found singleton 3
    EVRP:hybrid: EVRP found singleton -1B
    EVRP:hybrid: Second query simplifed stmt

We'll be using these to work on identifying differences/issues as we
move towards replacing EVRP/VRP fully.

Andrew

[-- Attachment #2: evrp.diff --]
[-- Type: text/x-patch, Size: 11240 bytes --]


	* flag-types.h (enum evrp_mode): New enumerated type EVRP_MODE_*.
	* common.opt (fevrp-mode): New undocumented flag.
	* vr-values.c (simplify_using_ranges::fold_cond): Try range_of_stmt
	first to see if it returns a value.
	(simplify_using_ranges::simplify_switch_using_ranges): Return true if
	any changes were made to the switch.
	* gimple-ssa-evrp.c: Include gimple-range.h
	(class rvrp_folder): EVRP folding using ranger exclusively.
	(rvrp_folder::rvrp_folder): New.
	(rvrp_folder::~rvrp_folder): New.
	(rvrp_folder::value_of_expr): New.  Use the ranger's value_of_expr.
	(rvrp_folder::value_on_edge): New.  Use the ranger's value_on_edge.
	(rvrp_folder::value_of_stmt): New.  Use the ranger's value_of_stmt.
	(rvrp_folder::fold_stmt): New.  Call the simplifier.
	(class hybrid_folder): EVRP folding using both engines.
	(hybrid_folder::hybrid_folder): New.
	(hybrid_folder::~hybrid_folder): New.
	(hybrid_folder::fold_stmt): New.  Simplify with one engine, then the
	other.
	(hybrid_folder::value_of_expr): New.  Use both value routines.
	(hybrid_folder::value_on_edge): New.  Use both value routines.
	(hybrid_folder::value_of_stmt): New.  Use both value routines.
	(hybrid_folder::choose_value): New.  Choose between range_analyzer's
	and the ranger's values.
	(execute_early_vrp): Choose a folder based on flag_evrp_mode.

diff --git a/gcc/flag-types.h b/gcc/flag-types.h
index 852ea76eaa2..0ca2a1cff46 100644
--- a/gcc/flag-types.h
+++ b/gcc/flag-types.h
@@ -382,4 +382,16 @@ enum parloops_schedule_type
   PARLOOPS_SCHEDULE_RUNTIME
 };
 
+/* EVRP mode.  */
+enum evrp_mode
+{
+  EVRP_MODE_EVRP_ONLY,
+  EVRP_MODE_RVRP_ONLY,
+  EVRP_MODE_EVRP_FIRST,
+  EVRP_MODE_RVRP_FIRST,
+  EVRP_MODE_RVRP_TRACE,
+  EVRP_MODE_RVRP_DEBUG,
+  EVRP_MODE_TRACE
+};
+
 #endif /* ! GCC_FLAG_TYPES_H */
diff --git a/gcc/common.opt b/gcc/common.opt
index 292c2de694e..71b9292a22e 100644
--- a/gcc/common.opt
+++ b/gcc/common.opt
@@ -2870,6 +2870,34 @@ ftree-vrp
 Common Report Var(flag_tree_vrp) Init(0) Optimization
 Perform Value Range Propagation on trees.
 
+fevrp-mode=
+Common Undocumented Joined RejectNegative Enum(evrp_mode) Var(flag_evrp_mode) Init(EVRP_MODE_EVRP_FIRST) Optimization
+-fevrp-mode=[evrp-only|rvrp-only|evrp-first|rvrp-first|rvrp-trace|rvrp-debug|trace] Specifies the mode Early VRP should operate in.
+
+Enum
+Name(evrp_mode) Type(enum evrp_mode) UnknownError(unknown evrp mode %qs)
+
+EnumValue
+Enum(evrp_mode) String(evrp-only) Value(EVRP_MODE_EVRP_ONLY)
+
+EnumValue
+Enum(evrp_mode) String(rvrp-only) Value(EVRP_MODE_RVRP_ONLY)
+
+EnumValue
+Enum(evrp_mode) String(evrp-first) Value(EVRP_MODE_EVRP_FIRST)
+
+EnumValue
+Enum(evrp_mode) String(rvrp-first) Value(EVRP_MODE_RVRP_FIRST)
+
+EnumValue
+Enum(evrp_mode) String(rvrp-trace) Value(EVRP_MODE_RVRP_TRACE)
+
+EnumValue
+Enum(evrp_mode) String(rvrp-debug) Value(EVRP_MODE_RVRP_DEBUG)
+
+EnumValue
+Enum(evrp_mode) String(hybrid-trace) Value(EVRP_MODE_TRACE)
+
 fsplit-paths
 Common Report Var(flag_split_paths) Init(0) Optimization
 Split paths leading to loop backedges.
diff --git a/gcc/vr-values.c b/gcc/vr-values.c
index 4d7dfd0b4bf..88aa672466c 100644
--- a/gcc/vr-values.c
+++ b/gcc/vr-values.c
@@ -3606,6 +3606,35 @@ simplify_using_ranges::fold_cond (gcond *cond)
      some point we should merge all variants of this code.  */
   edge taken_edge;
   vrp_visit_cond_stmt (cond, &taken_edge);
+
+  int_range_max r;
+  if (query->range_of_stmt (r, cond) && r.singleton_p ())
+    {
+      // COND has already been folded if arguments are constant.
+      if (TREE_CODE (gimple_cond_lhs (cond)) != SSA_NAME
+	  && TREE_CODE (gimple_cond_rhs (cond)) != SSA_NAME)
+	return false;
+
+      if (r.zero_p ())
+	{
+	  gcc_checking_assert (!taken_edge
+			       || taken_edge->flags & EDGE_FALSE_VALUE);
+	  if (dump_file && (dump_flags & TDF_DETAILS) && !taken_edge)
+	    fprintf (dump_file, "\nPredicate evaluates to: 0\n");
+	  gimple_cond_make_false (cond);
+	}
+      else
+	{
+	  gcc_checking_assert (!taken_edge
+			       || taken_edge->flags & EDGE_TRUE_VALUE);
+	  if (dump_file && (dump_flags & TDF_DETAILS) && !taken_edge)
+	    fprintf (dump_file, "\nPredicate evaluates to: 1\n");
+	  gimple_cond_make_true (cond);
+	}
+      update_stmt (cond);
+      return true;
+    }
+
   if (taken_edge)
     {
       if (taken_edge->flags & EDGE_TRUE_VALUE)
@@ -3947,7 +3976,7 @@ simplify_using_ranges::simplify_switch_using_ranges (gswitch *stmt)
   su.stmt = stmt;
   su.vec = vec2;
   to_update_switch_stmts.safe_push (su);
-  return false;
+  return true;
 }
 
 void
diff --git a/gcc/gimple-ssa-evrp.c b/gcc/gimple-ssa-evrp.c
index 60bf82a6805..29680b05855 100644
--- a/gcc/gimple-ssa-evrp.c
+++ b/gcc/gimple-ssa-evrp.c
@@ -41,6 +41,10 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-cfgcleanup.h"
 #include "vr-values.h"
 #include "gimple-ssa-evrp-analyze.h"
+#include "gimple-range.h"
+
+// This is the classic EVRP folder which uses a dominator walk and pushes
+// ranges into the next block if it is a single predecessor block.
 
 class evrp_folder : public substitute_and_fold_engine
 {
@@ -98,12 +102,196 @@ public:
     m_range_analyzer.set_defs_to_varying (stmt);
   }
 
-private:
+protected:
   DISABLE_COPY_AND_ASSIGN (evrp_folder);
   evrp_range_analyzer m_range_analyzer;
   simplify_using_ranges simplifier;
 };
 
+// This is a ranger based folder which continues to use the dominator
+// walk to access the substitute and fold machinery.  Ranges are calculated
+// on demand.
+
+class rvrp_folder : public substitute_and_fold_engine
+{
+public:
+
+  rvrp_folder () : substitute_and_fold_engine (), m_simplifier ()
+  { 
+    if (flag_evrp_mode == EVRP_MODE_RVRP_TRACE
+	|| flag_evrp_mode == EVRP_MODE_RVRP_DEBUG)
+      m_ranger = new trace_ranger ();
+    else
+      m_ranger = new gimple_ranger ();
+    m_simplifier.set_range_query (m_ranger);
+  }
+      
+  ~rvrp_folder ()
+  {
+    if (dump_file && (dump_flags & TDF_DETAILS))
+      m_ranger->dump (dump_file);
+    delete m_ranger;
+  }
+
+  tree value_of_expr (tree name, gimple *s = NULL) OVERRIDE
+  {
+    return m_ranger->value_of_expr (name, s);
+  }
+
+  tree value_on_edge (edge e, tree name) OVERRIDE
+  {
+    return m_ranger->value_on_edge (e, name);
+  }
+
+  tree value_of_stmt (gimple *s, tree name = NULL) OVERRIDE
+  {
+    return m_ranger->value_of_stmt (s, name);
+  }
+
+  bool fold_stmt (gimple_stmt_iterator *gsi) OVERRIDE
+  {
+    return m_simplifier.simplify (gsi);
+  }
+
+private:
+  DISABLE_COPY_AND_ASSIGN (rvrp_folder);
+  gimple_ranger *m_ranger;
+  simplify_using_ranges m_simplifier;
+};
+
+// In a hybrid folder, start with an EVRP folder, and add the required
+// fold_stmt bits to either try the ranger first or second.
+//
+// The 3 value_* routines will always query both EVRP and the ranger for
+// a result, and ensure they return the same value.  If either returns a value
+// when the other doesn't, it is flagged in the listing, and the discovered
+// value is returned.
+//
+// The simplifier is unable to process 2 different sources, so we try
+// one engine, and if it fails to simplify, try using the other engine.
+// It is reported when the first attempt fails and the second succeeds.
+
+class hybrid_folder : public evrp_folder
+{
+public:
+  hybrid_folder (bool evrp_first)
+  {
+    if (flag_evrp_mode == EVRP_MODE_TRACE)
+      m_ranger = new trace_ranger ();
+    else
+      m_ranger = new gimple_ranger ();
+
+    if (evrp_first)
+      {
+	first = &m_range_analyzer;
+	second = m_ranger;
+      }
+     else
+      {
+	first = m_ranger;
+	second = &m_range_analyzer;
+      }
+  }
+
+  ~hybrid_folder ()
+  {
+    if (dump_file && (dump_flags & TDF_DETAILS))
+      m_ranger->dump (dump_file);
+    delete m_ranger;
+  }
+
+  bool fold_stmt (gimple_stmt_iterator *gsi) OVERRIDE
+    {
+      simplifier.set_range_query (first);
+      if (simplifier.simplify (gsi))
+	return true;
+
+      simplifier.set_range_query (second);
+      if (simplifier.simplify (gsi))
+	{
+	  if (dump_file)
+	    fprintf (dump_file, "EVRP:hybrid: Second query simplified stmt\n");
+	  return true;
+	}
+      return false;
+    }
+
+  tree value_of_expr (tree name, gimple *) OVERRIDE;
+  tree value_on_edge (edge, tree name) OVERRIDE;
+  tree value_of_stmt (gimple *, tree name) OVERRIDE;
+
+private:
+  DISABLE_COPY_AND_ASSIGN (hybrid_folder);
+  gimple_ranger *m_ranger;
+  range_query *first;
+  range_query *second;
+  tree choose_value (tree evrp_val, tree ranger_val);
+};
+
+
+tree
+hybrid_folder::value_of_expr (tree op, gimple *stmt)
+{
+  tree evrp_ret = evrp_folder::value_of_expr (op, stmt);
+  tree ranger_ret = m_ranger->value_of_expr (op, stmt);
+  return choose_value (evrp_ret, ranger_ret);
+}
+
+tree
+hybrid_folder::value_on_edge (edge e, tree op)
+{
+  tree evrp_ret = evrp_folder::value_on_edge (e, op);
+  tree ranger_ret = m_ranger->value_on_edge (e, op);
+  return choose_value (evrp_ret, ranger_ret);
+}
+
+tree
+hybrid_folder::value_of_stmt (gimple *stmt, tree op) 
+{
+  tree evrp_ret = evrp_folder::value_of_stmt (stmt, op);
+  tree ranger_ret = m_ranger->value_of_stmt (stmt, op);
+  return choose_value (evrp_ret, ranger_ret);
+}
+
+// Given trees returned by EVRP and Ranger, choose/report the value to use
+// by the folder.
+
+tree
+hybrid_folder::choose_value (tree evrp_val, tree ranger_val)
+{
+  if (!ranger_val)
+    {
+      // If neither returned a value, return NULL_TREE.
+      if (!evrp_val)
+	return NULL_TREE;
+
+      // Otherwise EVRP found something.
+      if (dump_file)
+	{
+	  fprintf (dump_file, "EVRP:hybrid: EVRP found singleton ");
+	  print_generic_expr (dump_file, evrp_val);
+	  fprintf (dump_file, "\n");
+	}
+      return evrp_val;
+    }
+
+  // Otherwise ranger found a value; if they match, we're good.
+  if (evrp_val && !compare_values (evrp_val, ranger_val))
+    return evrp_val;
+
+  // We should never get different singletons.
+  gcc_checking_assert (!evrp_val);
+
+  // Now ranger found a value, but EVRP did not.
+  if (dump_file)
+    {
+      fprintf (dump_file, "EVRP:hybrid: RVRP found singleton ");
+      print_generic_expr (dump_file, ranger_val);
+      fprintf (dump_file, "\n");
+    }
+  return ranger_val;
+}
+
 /* Main entry point for the early vrp pass which is a simplified non-iterative
    version of vrp where basic blocks are visited in dominance order.  Value
    ranges discovered in early vrp will also be used by ipa-vrp.  */
@@ -120,8 +308,33 @@ execute_early_vrp ()
   scev_initialize ();
   calculate_dominance_info (CDI_DOMINATORS);
 
-  evrp_folder folder;
-  folder.substitute_and_fold ();
+  switch (flag_evrp_mode)
+    {
+    case EVRP_MODE_EVRP_ONLY:
+      {
+	evrp_folder folder;
+	folder.substitute_and_fold ();
+	break;
+      }
+    case EVRP_MODE_RVRP_ONLY:
+    case EVRP_MODE_RVRP_TRACE:
+    case EVRP_MODE_RVRP_DEBUG:
+      {
+	rvrp_folder folder;
+	folder.substitute_and_fold ();
+	break;
+      }
+    case EVRP_MODE_EVRP_FIRST:
+    case EVRP_MODE_RVRP_FIRST:
+    case EVRP_MODE_TRACE:
+      {
+	hybrid_folder folder (flag_evrp_mode != EVRP_MODE_RVRP_FIRST);
+	folder.substitute_and_fold ();
+	break;
+      }
+    default:
+      gcc_unreachable ();
+    }
 
   scev_finalize ();
   loop_optimizer_finalize ();


* Re: [PATCH 2/6] gimple-range-edge
  2020-10-02 16:59 ` [PATCH 2/6] gimple-range-edge Andrew MacLeod
@ 2020-10-05 12:09   ` Jakub Jelinek
  2020-10-05 13:15     ` Andrew MacLeod
  0 siblings, 1 reply; 9+ messages in thread
From: Jakub Jelinek @ 2020-10-05 12:09 UTC (permalink / raw)
  To: Andrew MacLeod; +Cc: gcc-patches

On Fri, Oct 02, 2020 at 12:59:54PM -0400, Andrew MacLeod via Gcc-patches wrote:
> 
> The ranger is needed to map those values to the switch variable, and also
> apply any previous ranges or derived values (ie, if you ask for the range of
> 'y' in case 2, it will return unsigned int [6,6]).

> 
> 	* gimple-range-edge.h: New File.
> 	(outgoing_range): Calculate/cache constant outgoing edge ranges.
> 
> 	* gimple-range-edge.cc: New file.
> 	(gimple_outgoing_range_stmt_p): New.  Find control statement.
> 	(outgoing_range::outgoing_range): New.
> 	(outgoing_range::~outgoing_range): New.
> 	(outgoing_range::get_edge_range): New.  Internal switch edge query.
> 	(outgoing_range::calc_switch_ranges): New.  Calculate switch ranges.
> 	(outgoing_range::edge_range_p): New.  Find constant range on edge.

Just a ChangeLog comment (ditto for several other patches).
When you add a new file, just say that and nothing else, i.e.
	* gimple-range-edge.h: New File.
	* gimple-range-edge.cc: New file.
and that's it.  Everything in the new file is new, no need to state it
explicitly.

	Jakub



* Re: [PATCH 2/6] gimple-range-edge
  2020-10-05 12:09   ` Jakub Jelinek
@ 2020-10-05 13:15     ` Andrew MacLeod
  0 siblings, 0 replies; 9+ messages in thread
From: Andrew MacLeod @ 2020-10-05 13:15 UTC (permalink / raw)
  To: Jakub Jelinek; +Cc: gcc-patches

On 10/5/20 8:09 AM, Jakub Jelinek wrote:
> On Fri, Oct 02, 2020 at 12:59:54PM -0400, Andrew MacLeod via Gcc-patches wrote:
>> The ranger is needed to map those values to the switch variable, and also
>> apply any previous ranges or derived values (ie, if you ask for the range of
>> 'y' in case 2, it will return unsigned int [6,6]).
>> 	* gimple-range-edge.h: New File.
>> 	(outgoing_range): Calculate/cache constant outgoing edge ranges.
>>
>> 	* gimple-range-edge.cc: New file.
>> 	(gimple_outgoing_range_stmt_p): New.  Find control statement.
>> 	(outgoing_range::outgoing_range): New.
>> 	(outgoing_range::~outgoing_range): New.
>> 	(outgoing_range::get_edge_range): New.  Internal switch edge query.
>> 	(outgoing_range::calc_switch_ranges): New.  Calculate switch ranges.
>> 	(outgoing_range::edge_range_p): New.  Find constant range on edge.
> Just a ChangeLog comment (ditto for several other patches).
> When you add a new file, just say that and nothing else, i.e.
> 	* gimple-range-edge.h: New File.
> 	* gimple-range-edge.cc: New file.
> and that's it.  Everything in the new file is new, no need to state it
> explicitly.
>
> 	Jakub
Really? huh. ok.  Figured it was useful for anyone looking up a routine 
name.     That will dramatically shrink these changelogs :-)

Andrew

