public inbox for gcc-patches@gcc.gnu.org
* [COMMITTED] vrange_storage overhaul
@ 2023-05-01  6:28 Aldy Hernandez
  2023-05-01  6:28 ` [COMMITTED] Remove irange::{min,max,kind} Aldy Hernandez
                   ` (10 more replies)
  0 siblings, 11 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:28 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

[tl;dr: This is a rewrite of value-range-storage.* such that global
ranges and the internal ranger cache can use the same efficient
storage mechanism.  It is optimized such that when wide_ints are
dropped into irange, the copying back and forth from storage will be
very fast, while being able to hold any number of sub-ranges
dynamically allocated at run-time.  This replaces the global storage
mechanism, which was limited to 6 sub-ranges.]

Previously we had a vrange allocator for use in the ranger cache.  It
worked with trees and could be used in place (fast), but it was not
memory efficient.  With the upcoming switch to wide_ints for irange,
we can't afford to allocate ranges that can be used in place, because
an irange will be significantly larger, as it will hold full
wide_ints.  We need a trailing_wide_int mechanism similar to what we
use for global ranges, but fast enough to use in the ranger's cache.

The global ranges had another allocation mechanism that was
trailing_wide_int based.  It was memory efficient but slow given the
constant conversions from trees to wide_ints.

This patch gets us the best of both worlds by providing a storage
mechanism with a custom trailing wide int interface, while at the same
time being fast enough to use in the ranger cache.

We use a custom trailing wide_int mechanism that is more flexible than
trailing_wide_ints, since the latter has a compile-time fixed number of
wide_ints.  The original TWI structure keeps the current length of each
wide_int in a static portion preceding the variable-length part:

template <int N>
struct GTY((user)) trailing_wide_ints
{
...
...
  /* The current length of each number.
     that will, in turn, turn off TBAA on gimple, trees and RTL.  */
  struct {unsigned char len;} m_len[N];

  /* The variable-length part of the structure, which always contains
     at least one HWI.  Element I starts at index I * M_MAX_LEN.  */
  HOST_WIDE_INT m_val[1];
};

We need both m_len[] and m_val[] to be variable-length at run-time.
In the previous incarnation of the storage mechanism, the static
m_len[] meant that we were limited to whatever [N] would fit in the
unused bits of the TWI control word.  In practice this meant we were
limited to 6 sub-ranges.  This worked fine for global ranges, but is a
no-go for our internal cache, where we must represent things exactly
(ranges for switches, etc.).

The new implementation removes this restriction by making both m_len[]
and m_val[] variable-length.  Also, rolling our own allows future
optimizations by using some of the leftover bits in the control word.
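
Roughly, the new layout looks like the condensed sketch below (taken
from the irange_storage declaration in value-range-storage.h further
down; the trailing length bytes are not a declared member, they simply
live in memory past the HOST_WIDE_INTs):

class irange_storage : public vrange_storage
{
  ...
  // The shared precision of each number.
  unsigned short m_precision;
  // The max number of sub-ranges that fit in this storage.
  const unsigned char m_max_ranges;
  // The number of stored sub-ranges.
  unsigned char m_num_ranges;
  enum value_range_kind m_kind : 3;

  // The variable-length part: m_num_ranges * 2 bounds plus one
  // wide_int for the nonzero bits.
  HOST_WIDE_INT m_val[1];

  // Followed in memory by one length byte per wide_int:
  //   unsigned char m_len[m_num_ranges * 2 + 1];
};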

Also, in preparation for the wide_int conversion, vrange_storage is
now optimized to blast the bits directly into the destination irange
instead of going through the irange API.  Ultimately, copying back
and forth between the ranger cache and the storage mechanism is just a
matter of copying a few bits for the control word, and copying an
array of HOST_WIDE_INTs.  These changes were heavily profiled, and
yielded a good chunk of the overall speedup for the wide_int
conversion.
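
To illustrate, each wide_int is stored as its length byte plus its
significant HOST_WIDE_INT elements.  Below is the writer half,
condensed from the write_wide_int/read_wide_int helpers added to
value-range-storage.cc by this patch (read_wide_int reverses the
process by wrapping the elements in a trailing_wide_int_storage):

static inline void
write_wide_int (HOST_WIDE_INT *&val, unsigned char *&len, const wide_int &w)
{
  // Record how many HWIs this wide_int uses, then copy them.
  *len = w.get_len ();
  for (unsigned i = 0; i < *len; ++i)
    *val++ = w.elt (i);
  ++len;
}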

Finally, vrange_storage is now a first-class structure with GTY
markers and all, thus eliminating the void * hack in struct
tree_ssa_name and friends.  This removes a few warts in the API and
looks cleaner overall.

gcc/ChangeLog:

	* gimple-fold.cc (maybe_fold_comparisons_from_match_pd): Adjust
	for vrange_storage.
	* gimple-range-cache.cc (sbr_vector::sbr_vector): Same.
	(sbr_vector::grow): Same.
	(sbr_vector::set_bb_range): Same.
	(sbr_vector::get_bb_range): Same.
	(sbr_sparse_bitmap::sbr_sparse_bitmap): Same.
	(sbr_sparse_bitmap::set_bb_range): Same.
	(sbr_sparse_bitmap::get_bb_range): Same.
	(block_range_cache::block_range_cache): Same.
	(ssa_global_cache::ssa_global_cache): Same.
	(ssa_global_cache::get_global_range): Same.
	(ssa_global_cache::set_global_range): Same.
	* gimple-range-cache.h: Same.
	* gimple-range-edge.cc
	(gimple_outgoing_range::gimple_outgoing_range): Same.
	(gimple_outgoing_range::switch_edge_range): Same.
	(gimple_outgoing_range::calc_switch_ranges): Same.
	* gimple-range-edge.h: Same.
	* gimple-range-infer.cc
	(infer_range_manager::infer_range_manager): Same.
	(infer_range_manager::get_nonzero): Same.
	(infer_range_manager::maybe_adjust_range): Same.
	(infer_range_manager::add_range): Same.
	* gimple-range-infer.h: Rename obstack_vrange_allocator to
	vrange_allocator.
	* tree-core.h (struct irange_storage_slot): Remove.
	(struct tree_ssa_name): Remove irange_info and frange_info.  Make
	range_info a pointer to vrange_storage.
	* tree-ssanames.cc (range_info_fits_p): Adjust for vrange_storage.
	(range_info_alloc): Same.
	(range_info_free): Same.
	(range_info_get_range): Same.
	(range_info_set_range): Same.
	(get_nonzero_bits): Same.
	* value-query.cc (get_ssa_name_range_info): Same.
	* value-range-storage.cc (class vrange_internal_alloc): New.
	(class vrange_obstack_alloc): New.
	(class vrange_ggc_alloc): New.
	(vrange_allocator::vrange_allocator): New.
	(vrange_allocator::~vrange_allocator): New.
	(vrange_storage::alloc_slot): New.
	(vrange_allocator::alloc): New.
	(vrange_allocator::free): New.
	(vrange_allocator::clone): New.
	(vrange_allocator::clone_varying): New.
	(vrange_allocator::clone_undefined): New.
	(vrange_storage::alloc): New.
	(vrange_storage::set_vrange): Remove slot argument.
	(vrange_storage::get_vrange): Same.
	(vrange_storage::fits_p): Same.
	(vrange_storage::equal_p): New.
	(irange_storage::write_lengths_address): New.
	(irange_storage::lengths_address): New.
	(irange_storage_slot::alloc_slot): Remove.
	(irange_storage::alloc): New.
	(irange_storage_slot::irange_storage_slot): Remove.
	(irange_storage::irange_storage): New.
	(write_wide_int): New.
	(irange_storage_slot::set_irange): Remove.
	(irange_storage::set_irange): New.
	(read_wide_int): New.
	(irange_storage_slot::get_irange): Remove.
	(irange_storage::get_irange): New.
	(irange_storage_slot::size): Remove.
	(irange_storage::equal_p): New.
	(irange_storage_slot::num_wide_ints_needed): Remove.
	(irange_storage::size): New.
	(irange_storage_slot::fits_p): Remove.
	(irange_storage::fits_p): New.
	(irange_storage_slot::dump): Remove.
	(irange_storage::dump): New.
	(frange_storage_slot::alloc_slot): Remove.
	(frange_storage::alloc): New.
	(frange_storage_slot::set_frange): Remove.
	(frange_storage::set_frange): New.
	(frange_storage_slot::get_frange): Remove.
	(frange_storage::get_frange): New.
	(frange_storage_slot::fits_p): Remove.
	(frange_storage::equal_p): New.
	(frange_storage::fits_p): New.
	(ggc_vrange_allocator): New.
	(ggc_alloc_vrange_storage): New.
	* value-range-storage.h (class vrange_storage): Rewrite.
	(class irange_storage): Rewrite.
	(class frange_storage): Rewrite.
	(class obstack_vrange_allocator): Remove.
	(class ggc_vrange_allocator): Remove.
	(vrange_allocator::alloc_vrange): Remove.
	(vrange_allocator::alloc_irange): Remove.
	(vrange_allocator::alloc_frange): Remove.
	(ggc_alloc_vrange_storage): New.
	* value-range.h (class irange): Rename vrange_allocator to
	irange_storage.
	(class frange): Same.
---
 gcc/gimple-fold.cc         |   4 +-
 gcc/gimple-range-cache.cc  |  61 +++--
 gcc/gimple-range-cache.h   |   2 +-
 gcc/gimple-range-edge.cc   |  23 +-
 gcc/gimple-range-edge.h    |   4 +-
 gcc/gimple-range-infer.cc  |  25 +-
 gcc/gimple-range-infer.h   |   2 +-
 gcc/tree-core.h            |  16 +-
 gcc/tree-ssanames.cc       |  28 +--
 gcc/value-query.cc         |   7 +-
 gcc/value-range-storage.cc | 478 +++++++++++++++++++++++++++++--------
 gcc/value-range-storage.h  | 226 +++++-------------
 gcc/value-range.h          |   4 +-
 13 files changed, 518 insertions(+), 362 deletions(-)

diff --git a/gcc/gimple-fold.cc b/gcc/gimple-fold.cc
index 2b6855d1205..1d0e4c32c40 100644
--- a/gcc/gimple-fold.cc
+++ b/gcc/gimple-fold.cc
@@ -6919,7 +6919,7 @@ and_comparisons_1 (tree type, enum tree_code code1, tree op1a, tree op1b,
 }
 
 static basic_block fosa_bb;
-static vec<std::pair<tree, void *> > *fosa_unwind;
+static vec<std::pair<tree, vrange_storage *> > *fosa_unwind;
 static tree
 follow_outer_ssa_edges (tree val)
 {
@@ -7006,7 +7006,7 @@ maybe_fold_comparisons_from_match_pd (tree type, enum tree_code code,
 		      type, gimple_assign_lhs (stmt1),
 		      gimple_assign_lhs (stmt2));
   fosa_bb = outer_cond_bb;
-  auto_vec<std::pair<tree, void *>, 8> unwind_stack;
+  auto_vec<std::pair<tree, vrange_storage *>, 8> unwind_stack;
   fosa_unwind = &unwind_stack;
   if (op.resimplify (NULL, (!outer_cond_bb
 			    ? follow_all_ssa_edges : follow_outer_ssa_edges)))
diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 5510efba1ca..92622fc5000 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -85,10 +85,10 @@ public:
   virtual bool get_bb_range (vrange &r, const_basic_block bb) override;
   virtual bool bb_range_p (const_basic_block bb) override;
 protected:
-  vrange **m_tab;	// Non growing vector.
+  vrange_storage **m_tab;	// Non growing vector.
   int m_tab_size;
-  vrange *m_varying;
-  vrange *m_undefined;
+  vrange_storage *m_varying;
+  vrange_storage *m_undefined;
   tree m_type;
   vrange_allocator *m_range_allocator;
   bool m_zero_p;
@@ -106,16 +106,14 @@ sbr_vector::sbr_vector (tree t, vrange_allocator *allocator, bool zero_p)
   m_zero_p = zero_p;
   m_range_allocator = allocator;
   m_tab_size = last_basic_block_for_fn (cfun) + 1;
-  m_tab = static_cast <vrange **>
-    (allocator->alloc (m_tab_size * sizeof (vrange *)));
+  m_tab = static_cast <vrange_storage **>
+    (allocator->alloc (m_tab_size * sizeof (vrange_storage *)));
   if (zero_p)
     memset (m_tab, 0, m_tab_size * sizeof (vrange *));
 
   // Create the cached type range.
-  m_varying = m_range_allocator->alloc_vrange (t);
-  m_undefined = m_range_allocator->alloc_vrange (t);
-  m_varying->set_varying (t);
-  m_undefined->set_undefined ();
+  m_varying = m_range_allocator->clone_varying (t);
+  m_undefined = m_range_allocator->clone_undefined (t);
 }
 
 // Grow the vector when the CFG has increased in size.
@@ -132,11 +130,11 @@ sbr_vector::grow ()
   int new_size = inc + curr_bb_size;
 
   // Allocate new memory, copy the old vector and clear the new space.
-  vrange **t = static_cast <vrange **>
-    (m_range_allocator->alloc (new_size * sizeof (vrange *)));
-  memcpy (t, m_tab, m_tab_size * sizeof (vrange *));
+  vrange_storage **t = static_cast <vrange_storage **>
+    (m_range_allocator->alloc (new_size * sizeof (vrange_storage *)));
+  memcpy (t, m_tab, m_tab_size * sizeof (vrange_storage *));
   if (m_zero_p)
-    memset (t + m_tab_size, 0, (new_size - m_tab_size) * sizeof (vrange *));
+    memset (t + m_tab_size, 0, (new_size - m_tab_size) * sizeof (vrange_storage *));
 
   m_tab = t;
   m_tab_size = new_size;
@@ -147,7 +145,7 @@ sbr_vector::grow ()
 bool
 sbr_vector::set_bb_range (const_basic_block bb, const vrange &r)
 {
-  vrange *m;
+  vrange_storage *m;
   if (bb->index >= m_tab_size)
     grow ();
   if (r.varying_p ())
@@ -168,10 +166,10 @@ sbr_vector::get_bb_range (vrange &r, const_basic_block bb)
 {
   if (bb->index >= m_tab_size)
     return false;
-  vrange *m = m_tab[bb->index];
+  vrange_storage *m = m_tab[bb->index];
   if (m)
     {
-      r = *m;
+      m->get_vrange (r, m_type);
       return true;
     }
   return false;
@@ -255,7 +253,7 @@ private:
   void bitmap_set_quad (bitmap head, int quad, int quad_value);
   int bitmap_get_quad (const_bitmap head, int quad);
   vrange_allocator *m_range_allocator;
-  vrange *m_range[SBR_NUM];
+  vrange_storage *m_range[SBR_NUM];
   bitmap_head bitvec;
   tree m_type;
 };
@@ -272,15 +270,16 @@ sbr_sparse_bitmap::sbr_sparse_bitmap (tree t, vrange_allocator *allocator,
   bitmap_tree_view (&bitvec);
   m_range_allocator = allocator;
   // Pre-cache varying.
-  m_range[0] = m_range_allocator->alloc_vrange (t);
-  m_range[0]->set_varying (t);
+  m_range[0] = m_range_allocator->clone_varying (t);
   // Pre-cache zero and non-zero values for pointers.
   if (POINTER_TYPE_P (t))
     {
-      m_range[1] = m_range_allocator->alloc_vrange (t);
-      m_range[1]->set_nonzero (t);
-      m_range[2] = m_range_allocator->alloc_vrange (t);
-      m_range[2]->set_zero (t);
+      int_range<2> nonzero;
+      nonzero.set_nonzero (t);
+      m_range[1] = m_range_allocator->clone (nonzero);
+      int_range<2> zero;
+      zero.set_zero (t);
+      m_range[2] = m_range_allocator->clone (zero);
     }
   else
     m_range[1] = m_range[2] = NULL;
@@ -321,7 +320,7 @@ sbr_sparse_bitmap::set_bb_range (const_basic_block bb, const vrange &r)
 
   // Loop thru the values to see if R is already present.
   for (int x = 0; x < SBR_NUM; x++)
-    if (!m_range[x] || r == *(m_range[x]))
+    if (!m_range[x] || m_range[x]->equal_p (r, m_type))
       {
 	if (!m_range[x])
 	  m_range[x] = m_range_allocator->clone (r);
@@ -348,7 +347,7 @@ sbr_sparse_bitmap::get_bb_range (vrange &r, const_basic_block bb)
   if (value == SBR_UNDEF)
     r.set_undefined ();
   else
-    r = *(m_range[value - 1]);
+    m_range[value - 1]->get_vrange (r, m_type);
   return true;
 }
 
@@ -369,7 +368,7 @@ block_range_cache::block_range_cache ()
   bitmap_obstack_initialize (&m_bitmaps);
   m_ssa_ranges.create (0);
   m_ssa_ranges.safe_grow_cleared (num_ssa_names);
-  m_range_allocator = new obstack_vrange_allocator;
+  m_range_allocator = new vrange_allocator;
 }
 
 // Remove any m_block_caches which have been created.
@@ -535,7 +534,7 @@ block_range_cache::dump (FILE *f, basic_block bb, bool print_varying)
 ssa_cache::ssa_cache ()
 {
   m_tab.create (0);
-  m_range_allocator = new obstack_vrange_allocator;
+  m_range_allocator = new vrange_allocator;
 }
 
 // Deconstruct an ssa cache.
@@ -567,10 +566,10 @@ ssa_cache::get_range (vrange &r, tree name) const
   if (v >= m_tab.length ())
     return false;
 
-  vrange *stow = m_tab[v];
+  vrange_storage *stow = m_tab[v];
   if (!stow)
     return false;
-  r = *stow;
+  stow->get_vrange (r, TREE_TYPE (name));
   return true;
 }
 
@@ -584,9 +583,9 @@ ssa_cache::set_range (tree name, const vrange &r)
   if (v >= m_tab.length ())
     m_tab.safe_grow_cleared (num_ssa_names + 1);
 
-  vrange *m = m_tab[v];
+  vrange_storage *m = m_tab[v];
   if (m && m->fits_p (r))
-    *m = r;
+    m->set_vrange (r);
   else
     m_tab[v] = m_range_allocator->clone (r);
   return m != NULL;
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index 9032df9e3e3..946fbc51465 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -65,7 +65,7 @@ public:
   void dump (FILE *f = stderr);
 protected:
   virtual bool dump_range_query (vrange &r, tree name) const;
-  vec<vrange *> m_tab;
+  vec<vrange_storage *> m_tab;
   vrange_allocator *m_range_allocator;
 };
 
diff --git a/gcc/gimple-range-edge.cc b/gcc/gimple-range-edge.cc
index 8fedac58fe6..22fb709c9b1 100644
--- a/gcc/gimple-range-edge.cc
+++ b/gcc/gimple-range-edge.cc
@@ -69,7 +69,7 @@ gimple_outgoing_range::gimple_outgoing_range (int max_sw_edges)
 {
   m_edge_table = NULL;
   m_max_edges = max_sw_edges;
-  m_range_allocator = new obstack_vrange_allocator;
+  m_range_allocator = new vrange_allocator;
 }
 
 
@@ -97,16 +97,16 @@ gimple_outgoing_range::switch_edge_range (irange &r, gswitch *sw, edge e)
     return false;
 
    if (!m_edge_table)
-     m_edge_table = new hash_map<edge, irange *> (n_edges_for_fn (cfun));
+     m_edge_table = new hash_map<edge, vrange_storage *> (n_edges_for_fn (cfun));
 
-   irange **val = m_edge_table->get (e);
+   vrange_storage **val = m_edge_table->get (e);
    if (!val)
      {
        calc_switch_ranges (sw);
        val = m_edge_table->get (e);
        gcc_checking_assert (val);
      }
-    r = **val;
+   (*val)->get_vrange (r, TREE_TYPE (gimple_switch_index (sw)));
   return true;
 }
 
@@ -150,29 +150,30 @@ gimple_outgoing_range::calc_switch_ranges (gswitch *sw)
       // Create/union this case with anything on else on the edge.
       int_range_max case_range (low, high);
       range_cast (case_range, type);
-      irange *&slot = m_edge_table->get_or_insert (e, &existed);
+      vrange_storage *&slot = m_edge_table->get_or_insert (e, &existed);
       if (existed)
 	{
 	  // If this doesn't change the value, move on.
-	  if (!case_range.union_ (*slot))
+	  int_range_max tmp;
+	  slot->get_vrange (tmp, type);
+	  if (!case_range.union_ (tmp))
 	   continue;
 	  if (slot->fits_p (case_range))
 	    {
-	      *slot = case_range;
+	      slot->set_vrange (case_range);
 	      continue;
 	    }
 	}
       // If there was an existing range and it doesn't fit, we lose the memory.
       // It'll get reclaimed when the obstack is freed.  This seems less
       // intrusive than allocating max ranges for each case.
-      slot = m_range_allocator->clone <irange> (case_range);
+      slot = m_range_allocator->clone (case_range);
     }
 
-  irange *&slot = m_edge_table->get_or_insert (default_edge, &existed);
+  vrange_storage *&slot = m_edge_table->get_or_insert (default_edge, &existed);
   // This should be the first call into this switch.
   gcc_checking_assert (!existed);
-  irange *dr = m_range_allocator->clone <irange> (default_range);
-  slot = dr;
+  slot = m_range_allocator->clone (default_range);
 }
 
 
diff --git a/gcc/gimple-range-edge.h b/gcc/gimple-range-edge.h
index bb0de1b1d3e..86b9c0cbcb1 100644
--- a/gcc/gimple-range-edge.h
+++ b/gcc/gimple-range-edge.h
@@ -46,8 +46,8 @@ private:
   bool switch_edge_range (irange &r, gswitch *sw, edge e);
 
   int m_max_edges;
-  hash_map<edge, irange *> *m_edge_table;
-  class obstack_vrange_allocator *m_range_allocator;
+  hash_map<edge, vrange_storage *> *m_edge_table;
+  class vrange_allocator *m_range_allocator;
 };
 
 // If there is a range control statement at the end of block BB, return it.
diff --git a/gcc/gimple-range-infer.cc b/gcc/gimple-range-infer.cc
index 14ccd7347e6..a6f7d4e7991 100644
--- a/gcc/gimple-range-infer.cc
+++ b/gcc/gimple-range-infer.cc
@@ -181,7 +181,7 @@ class exit_range
 {
 public:
   tree name;
-  vrange *range;
+  vrange_storage *range;
   exit_range *next;
 };
 
@@ -221,7 +221,7 @@ infer_range_manager::infer_range_manager (bool do_search)
   // Non-zero elements are very common, so cache them for each ssa-name.
   m_nonzero.create (0);
   m_nonzero.safe_grow_cleared (num_ssa_names + 1);
-  m_range_allocator = new obstack_vrange_allocator;
+  m_range_allocator = new vrange_allocator;
 }
 
 // Destruct a range infer manager.
@@ -246,7 +246,8 @@ infer_range_manager::get_nonzero (tree name)
     m_nonzero.safe_grow_cleared (num_ssa_names + 20);
   if (!m_nonzero[v])
     {
-      m_nonzero[v] = m_range_allocator->alloc_vrange (TREE_TYPE (name));
+      m_nonzero[v]
+	= (irange *) m_range_allocator->alloc (sizeof (int_range <2>));
       m_nonzero[v]->set_nonzero (TREE_TYPE (name));
     }
   return *(m_nonzero[v]);
@@ -292,7 +293,10 @@ infer_range_manager::maybe_adjust_range (vrange &r, tree name, basic_block bb)
   exit_range *ptr = m_on_exit[bb->index].find_ptr (name);
   gcc_checking_assert (ptr);
   // Return true if this exit range changes R, otherwise false.
-  return r.intersect (*(ptr->range));
+  tree type = TREE_TYPE (name);
+  Value_Range tmp (type);
+  ptr->range->get_vrange (tmp, type);
+  return r.intersect (tmp);
 }
 
 // Add range R as an inferred range for NAME in block BB.
@@ -320,17 +324,16 @@ infer_range_manager::add_range (tree name, basic_block bb, const vrange &r)
   exit_range *ptr = m_on_exit[bb->index].find_ptr (name);
   if (ptr)
     {
-      Value_Range cur (r);
+      tree type = TREE_TYPE (name);
+      Value_Range cur (r), name_range (type);
+      ptr->range->get_vrange (name_range, type);
       // If no new info is added, just return.
-      if (!cur.intersect (*(ptr->range)))
+      if (!cur.intersect (name_range))
 	return;
       if (ptr->range->fits_p (cur))
-	*(ptr->range) = cur;
+	ptr->range->set_vrange (cur);
       else
-	{
-	  vrange &v = cur;
-	  ptr->range = m_range_allocator->clone (v);
-	}
+	ptr->range = m_range_allocator->clone (cur);
       return;
     }
 
diff --git a/gcc/gimple-range-infer.h b/gcc/gimple-range-infer.h
index 3c85e29c0bd..34716ca6402 100644
--- a/gcc/gimple-range-infer.h
+++ b/gcc/gimple-range-infer.h
@@ -80,7 +80,7 @@ private:
   bitmap m_seen;
   bitmap_obstack m_bitmaps;
   struct obstack m_list_obstack;
-  class obstack_vrange_allocator *m_range_allocator;
+  class vrange_allocator *m_range_allocator;
 };
 
 #endif // GCC_GIMPLE_RANGE_SIDE_H
diff --git a/gcc/tree-core.h b/gcc/tree-core.h
index fd2be57b78c..847f0b1e994 100644
--- a/gcc/tree-core.h
+++ b/gcc/tree-core.h
@@ -33,7 +33,6 @@ struct function;
 struct real_value;
 struct fixed_value;
 struct ptr_info_def;
-struct irange_storage_slot;
 struct die_struct;
 
 
@@ -1605,17 +1604,12 @@ struct GTY(()) tree_ssa_name {
 
   /* Value range information.  */
   union ssa_name_info_type {
-    /* Ranges for integers.  */
-    struct GTY ((tag ("0"))) irange_storage_slot *irange_info;
-    /* Ranges for floating point numbers.  */
-    struct GTY ((tag ("1"))) frange_storage_slot *frange_info;
-    /* Pointer attributes used for alias analysis.  */
-    struct GTY ((tag ("2"))) ptr_info_def *ptr_info;
-    /* This holds any range info supported by ranger (except ptr_info
-       above) and is managed by vrange_storage.  */
-    void * GTY ((skip)) range_info;
+    /* Range and aliasing info for pointers.  */
+    struct GTY ((tag ("0"))) ptr_info_def *ptr_info;
+    /* Range info for everything else.  */
+    struct GTY ((tag ("1"))) vrange_storage * range_info;
   } GTY ((desc ("%1.typed.type ?" \
-		"(POINTER_TYPE_P (TREE_TYPE ((tree)&%1)) ? 2 : SCALAR_FLOAT_TYPE_P (TREE_TYPE ((tree)&%1))) : 3"))) info;
+		"!POINTER_TYPE_P (TREE_TYPE ((tree)&%1)) : 2"))) info;
   /* Immediate uses list for this SSA_NAME.  */
   struct ssa_use_operand_t imm_uses;
 };
diff --git a/gcc/tree-ssanames.cc b/gcc/tree-ssanames.cc
index 08aa166ef17..b6cbf97b878 100644
--- a/gcc/tree-ssanames.cc
+++ b/gcc/tree-ssanames.cc
@@ -72,9 +72,6 @@ unsigned int ssa_name_nodes_created;
 #define FREE_SSANAMES(fun) (fun)->gimple_df->free_ssanames
 #define FREE_SSANAMES_QUEUE(fun) (fun)->gimple_df->free_ssanames_queue
 
-static ggc_vrange_allocator ggc_allocator;
-static vrange_storage vstore (&ggc_allocator);
-
 /* Return TRUE if NAME has global range info.  */
 
 inline bool
@@ -89,8 +86,8 @@ inline bool
 range_info_fits_p (tree name, const vrange &r)
 {
   gcc_checking_assert (range_info_p (name));
-  void *mem = SSA_NAME_RANGE_INFO (name);
-  return vrange_storage::fits_p (mem, r);
+  vrange_storage *mem = SSA_NAME_RANGE_INFO (name);
+  return mem->fits_p (r);
 }
 
 /* Allocate a new global range for NAME and set it to R.  Return the
@@ -99,7 +96,7 @@ range_info_fits_p (tree name, const vrange &r)
 inline void *
 range_info_alloc (tree name, const vrange &r)
 {
-  void *mem = vstore.alloc_slot (r);
+  vrange_storage *mem = ggc_alloc_vrange_storage (r);
   SSA_NAME_RANGE_INFO (name) = mem;
   return mem;
 }
@@ -109,16 +106,16 @@ range_info_alloc (tree name, const vrange &r)
 inline void
 range_info_free (tree name)
 {
-  void *mem = SSA_NAME_RANGE_INFO (name);
-  vstore.free (mem);
+  vrange_storage *mem = SSA_NAME_RANGE_INFO (name);
+  ggc_free (mem);
 }
 
 /* Return the global range for NAME in R.  */
 
 inline void
-range_info_get_range (tree name, vrange &r)
+range_info_get_range (const_tree name, vrange &r)
 {
-  vstore.get_vrange (SSA_NAME_RANGE_INFO (name), r, TREE_TYPE (name));
+  SSA_NAME_RANGE_INFO (name)->get_vrange (r, TREE_TYPE (name));
 }
 
 /* Set the global range for NAME from R.  Return TRUE if successfull,
@@ -136,7 +133,7 @@ range_info_set_range (tree name, const vrange &r)
     }
   else
     {
-      vstore.set_vrange (SSA_NAME_RANGE_INFO (name), r);
+      SSA_NAME_RANGE_INFO (name)->set_vrange (r);
       return true;
     }
 }
@@ -492,12 +489,9 @@ get_nonzero_bits (const_tree name)
   if (!range_info_p (name) || !irange::supports_p (TREE_TYPE (name)))
     return wi::shwi (-1, precision);
 
-  /* Optimization to get at the nonzero bits because we know the
-     storage type.  This saves us measurable time compared to going
-     through vrange_storage.  */
-  irange_storage_slot *ri
-    = static_cast <irange_storage_slot *> (SSA_NAME_RANGE_INFO (name));
-  return ri->get_nonzero_bits ();
+  int_range_max tmp;
+  range_info_get_range (name, tmp);
+  return tmp.get_nonzero_bits ();
 }
 
 /* Return TRUE is OP, an SSA_NAME has a range of values [0..1], false
diff --git a/gcc/value-query.cc b/gcc/value-query.cc
index 538cfad19b1..8ccdc9f8852 100644
--- a/gcc/value-query.cc
+++ b/gcc/value-query.cc
@@ -261,13 +261,10 @@ get_ssa_name_range_info (vrange &r, const_tree name)
   gcc_checking_assert (!POINTER_TYPE_P (type));
   gcc_checking_assert (TREE_CODE (name) == SSA_NAME);
 
-  void *ri = SSA_NAME_RANGE_INFO (name);
+  vrange_storage *ri = SSA_NAME_RANGE_INFO (name);
 
   if (ri)
-    {
-      vrange_storage vstore (NULL);
-      vstore.get_vrange (ri, r, TREE_TYPE (name));
-    }
+    ri->get_vrange (r, TREE_TYPE (name));
   else
     r.set_varying (type);
 }
diff --git a/gcc/value-range-storage.cc b/gcc/value-range-storage.cc
index bf23f6dd476..98a6d99af78 100644
--- a/gcc/value-range-storage.cc
+++ b/gcc/value-range-storage.cc
@@ -30,35 +30,137 @@ along with GCC; see the file COPYING3.  If not see
 #include "gimple-range.h"
 #include "value-range-storage.h"
 
-// Return a newly allocated slot holding R, or NULL if storing a range
-// of R's type is not supported.
+// Generic memory allocator to share one interface between GC and
+// obstack allocators.
+
+class vrange_internal_alloc
+{
+public:
+  vrange_internal_alloc () { }
+  virtual ~vrange_internal_alloc () { }
+  virtual void *alloc (size_t size) = 0;
+  virtual void free (void *) = 0;
+private:
+  DISABLE_COPY_AND_ASSIGN (vrange_internal_alloc);
+};
+
+class vrange_obstack_alloc final: public vrange_internal_alloc
+{
+public:
+  vrange_obstack_alloc ()
+  {
+    obstack_init (&m_obstack);
+  }
+  virtual ~vrange_obstack_alloc () final override
+  {
+    obstack_free (&m_obstack, NULL);
+  }
+  virtual void *alloc (size_t size) final override
+  {
+    return obstack_alloc (&m_obstack, size);
+  }
+  virtual void free (void *) final override { }
+private:
+  obstack m_obstack;
+};
+
+class vrange_ggc_alloc final: public vrange_internal_alloc
+{
+public:
+  vrange_ggc_alloc () { }
+  virtual ~vrange_ggc_alloc () final override { }
+  virtual void *alloc (size_t size) final override
+  {
+    return ggc_internal_alloc (size);
+  }
+  virtual void free (void *p) final override
+  {
+    return ggc_free (p);
+  }
+};
+
+vrange_allocator::vrange_allocator (bool gc)
+  : m_gc (gc)
+{
+  if (gc)
+    m_alloc = new vrange_ggc_alloc;
+  else
+    m_alloc = new vrange_obstack_alloc;
+}
+
+vrange_allocator::~vrange_allocator ()
+{
+  delete m_alloc;
+}
 
 void *
-vrange_storage::alloc_slot (const vrange &r)
+vrange_allocator::alloc (size_t size)
+{
+  return m_alloc->alloc (size);
+}
+
+void
+vrange_allocator::free (void *p)
+{
+  m_alloc->free (p);
+}
+
+// Allocate a new vrange_storage object initialized to R and return
+// it.
+
+vrange_storage *
+vrange_allocator::clone (const vrange &r)
+{
+  return vrange_storage::alloc (*m_alloc, r);
+}
+
+vrange_storage *
+vrange_allocator::clone_varying (tree type)
 {
-  gcc_checking_assert (m_alloc);
+  if (irange::supports_p (type))
+    return irange_storage::alloc (*m_alloc, int_range <1> (type));
+  if (frange::supports_p (type))
+    return frange_storage::alloc (*m_alloc, frange (type));
+  return NULL;
+}
 
+vrange_storage *
+vrange_allocator::clone_undefined (tree type)
+{
+  if (irange::supports_p (type))
+    return irange_storage::alloc (*m_alloc, int_range<1> ());
+  if (frange::supports_p (type))
+    return frange_storage::alloc  (*m_alloc, frange ());
+  return NULL;
+}
+
+// Allocate a new vrange_storage object initialized to R and return
+// it.  Return NULL if R is unsupported.
+
+vrange_storage *
+vrange_storage::alloc (vrange_internal_alloc &allocator, const vrange &r)
+{
   if (is_a <irange> (r))
-    return irange_storage_slot::alloc_slot (*m_alloc, as_a <irange> (r));
+    return irange_storage::alloc (allocator, as_a <irange> (r));
   if (is_a <frange> (r))
-    return frange_storage_slot::alloc_slot (*m_alloc, as_a <frange> (r));
+    return frange_storage::alloc (allocator, as_a <frange> (r));
   return NULL;
 }
 
-// Set SLOT to R.
+// Set storage to R.
 
 void
-vrange_storage::set_vrange (void *slot, const vrange &r)
+vrange_storage::set_vrange (const vrange &r)
 {
   if (is_a <irange> (r))
     {
-      irange_storage_slot *s = static_cast <irange_storage_slot *> (slot);
+      irange_storage *s = static_cast <irange_storage *> (this);
       gcc_checking_assert (s->fits_p (as_a <irange> (r)));
       s->set_irange (as_a <irange> (r));
     }
   else if (is_a <frange> (r))
     {
-      frange_storage_slot *s = static_cast <frange_storage_slot *> (slot);
+      frange_storage *s = static_cast <frange_storage *> (this);
       gcc_checking_assert (s->fits_p (as_a <frange> (r)));
       s->set_frange (as_a <frange> (r));
     }
@@ -66,188 +168,324 @@ vrange_storage::set_vrange (void *slot, const vrange &r)
     gcc_unreachable ();
 }
 
-// Restore R from SLOT.  TYPE is the type of R.
+// Restore R from storage.
 
 void
-vrange_storage::get_vrange (const void *slot, vrange &r, tree type)
+vrange_storage::get_vrange (vrange &r, tree type) const
 {
   if (is_a <irange> (r))
     {
-      const irange_storage_slot *s
-	= static_cast <const irange_storage_slot *> (slot);
+      const irange_storage *s = static_cast <const irange_storage *> (this);
       s->get_irange (as_a <irange> (r), type);
     }
   else if (is_a <frange> (r))
     {
-      const frange_storage_slot *s
-	= static_cast <const frange_storage_slot *> (slot);
+      const frange_storage *s = static_cast <const frange_storage *> (this);
       s->get_frange (as_a <frange> (r), type);
     }
   else
     gcc_unreachable ();
 }
 
-// Return TRUE if SLOT can fit R.
+// Return TRUE if storage can fit R.
 
 bool
-vrange_storage::fits_p (const void *slot, const vrange &r)
+vrange_storage::fits_p (const vrange &r) const
 {
   if (is_a <irange> (r))
     {
-      const irange_storage_slot *s
-	= static_cast <const irange_storage_slot *> (slot);
+      const irange_storage *s = static_cast <const irange_storage *> (this);
       return s->fits_p (as_a <irange> (r));
     }
   if (is_a <frange> (r))
     {
-      const frange_storage_slot *s
-	= static_cast <const frange_storage_slot *> (slot);
+      const frange_storage *s = static_cast <const frange_storage *> (this);
       return s->fits_p (as_a <frange> (r));
     }
   gcc_unreachable ();
   return false;
 }
 
-// Factory that creates a new irange_storage_slot object containing R.
-// This is the only way to construct an irange slot as stack creation
-// is disallowed.
+// Return TRUE if the range in storage is equal to R.
+
+bool
+vrange_storage::equal_p (const vrange &r, tree type) const
+{
+  if (is_a <irange> (r))
+    {
+      const irange_storage *s = static_cast <const irange_storage *> (this);
+      return s->equal_p (as_a <irange> (r), type);
+    }
+  if (is_a <frange> (r))
+    {
+      const frange_storage *s = static_cast <const frange_storage *> (this);
+      return s->equal_p (as_a <frange> (r), type);
+    }
+  gcc_unreachable ();
+}
+
+//============================================================================
+// irange_storage implementation
+//============================================================================
+
+unsigned char *
+irange_storage::write_lengths_address ()
+{
+  return (unsigned char *)&m_val[(m_num_ranges * 2 + 1)
+				 * WIDE_INT_MAX_HWIS (m_precision)];
+}
+
+const unsigned char *
+irange_storage::lengths_address () const
+{
+  return const_cast <irange_storage *> (this)->write_lengths_address ();
+}
 
-irange_storage_slot *
-irange_storage_slot::alloc_slot (vrange_allocator &allocator, const irange &r)
+// Allocate a new irange_storage object initialized to R.
+
+irange_storage *
+irange_storage::alloc (vrange_internal_alloc &allocator, const irange &r)
 {
-  size_t size = irange_storage_slot::size (r);
-  irange_storage_slot *p
-    = static_cast <irange_storage_slot *> (allocator.alloc (size));
-  new (p) irange_storage_slot (r);
+  size_t size = irange_storage::size (r);
+  irange_storage *p = static_cast <irange_storage *> (allocator.alloc (size));
+  new (p) irange_storage (r);
   return p;
 }
 
-// Initialize the current slot with R.
+// Initialize the storage with R.
 
-irange_storage_slot::irange_storage_slot (const irange &r)
+irange_storage::irange_storage (const irange &r)
+  : m_max_ranges (r.num_pairs ())
 {
-  gcc_checking_assert (!r.undefined_p ());
+  m_num_ranges = m_max_ranges;
+  set_irange (r);
+}
 
-  unsigned prec = TYPE_PRECISION (r.type ());
-  unsigned n = num_wide_ints_needed (r);
-  if (n > MAX_INTS)
-    {
-      int_range<MAX_PAIRS> squash (r);
-      m_ints.set_precision (prec, num_wide_ints_needed (squash));
-      set_irange (squash);
-    }
-  else
-    {
-      m_ints.set_precision (prec, n);
-      set_irange (r);
-    }
+static inline void
+write_wide_int (HOST_WIDE_INT *&val, unsigned char *&len, const wide_int &w)
+{
+  *len = w.get_len ();
+  for (unsigned i = 0; i < *len; ++i)
+    *val++ = w.elt (i);
+  ++len;
 }
 
-// Store R into the current slot.
+// Store R into the current storage.
 
 void
-irange_storage_slot::set_irange (const irange &r)
+irange_storage::set_irange (const irange &r)
 {
   gcc_checking_assert (fits_p (r));
 
-  m_ints[0] = r.get_nonzero_bits ();
+  if (r.undefined_p ())
+    {
+      m_kind = VR_UNDEFINED;
+      return;
+    }
+  if (r.varying_p ())
+    {
+      m_kind = VR_VARYING;
+      return;
+    }
+
+  m_precision = TYPE_PRECISION (r.type ());
+  m_num_ranges = r.num_pairs ();
+  m_kind = VR_RANGE;
+
+  HOST_WIDE_INT *val = &m_val[0];
+  unsigned char *len = write_lengths_address ();
+
+  for (unsigned i = 0; i < r.num_pairs (); ++i)
+    {
+      write_wide_int (val, len, r.lower_bound (i));
+      write_wide_int (val, len, r.upper_bound (i));
+    }
+  if (r.m_nonzero_mask)
+    write_wide_int (val, len, wi::to_wide (r.m_nonzero_mask));
+  else
+    write_wide_int (val, len, wi::minus_one (m_precision));
 
-  unsigned pairs = r.num_pairs ();
-  for (unsigned i = 0; i < pairs; ++i)
+  if (flag_checking)
     {
-      m_ints[i*2 + 1] = r.lower_bound (i);
-      m_ints[i*2 + 2] = r.upper_bound (i);
+      int_range_max tmp;
+      get_irange (tmp, r.type ());
+      gcc_checking_assert (tmp == r);
     }
 }
 
-// Restore a range of TYPE from the current slot into R.
+static inline void
+read_wide_int (wide_int &w,
+	       const HOST_WIDE_INT *val, unsigned char len, unsigned prec)
+{
+  trailing_wide_int_storage stow (prec, &len,
+				  const_cast <HOST_WIDE_INT *> (val));
+  w = trailing_wide_int (stow);
+}
+
+// Restore a range of TYPE from storage into R.
 
 void
-irange_storage_slot::get_irange (irange &r, tree type) const
+irange_storage::get_irange (irange &r, tree type) const
 {
-  gcc_checking_assert (TYPE_PRECISION (type) == m_ints.get_precision ());
+  if (m_kind == VR_UNDEFINED)
+    {
+      r.set_undefined ();
+      return;
+    }
+  if (m_kind == VR_VARYING)
+    {
+      r.set_varying (type);
+      return;
+    }
 
-  r.set_undefined ();
-  unsigned nelements = m_ints.num_elements ();
-  for (unsigned i = 1; i < nelements; i += 2)
+  gcc_checking_assert (TYPE_PRECISION (type) == m_precision);
+  const HOST_WIDE_INT *val = &m_val[0];
+  const unsigned char *len = lengths_address ();
+  wide_int w;
+
+  // Handle the common case where R can fit the new range.
+  if (r.m_max_ranges >= m_num_ranges)
     {
-      int_range<2> tmp (type, m_ints[i], m_ints[i + 1]);
-      r.union_ (tmp);
+      r.m_kind = VR_RANGE;
+      r.m_num_ranges = m_num_ranges;
+      for (unsigned i = 0; i < m_num_ranges * 2; ++i)
+	{
+	  read_wide_int (w, val, *len, m_precision);
+	  r.m_base[i] = wide_int_to_tree (type, w);
+	  val += *len++;
+	}
+    }
+  // Otherwise build the range piecewise.
+  else
+    {
+      r.set_undefined ();
+      for (unsigned i = 0; i < m_num_ranges; ++i)
+	{
+	  wide_int lb, ub;
+	  read_wide_int (lb, val, *len, m_precision);
+	  val += *len++;
+	  read_wide_int (ub, val, *len, m_precision);
+	  val += *len++;
+	  int_range<1> tmp (type, lb, ub);
+	  r.union_ (tmp);
+	}
+    }
+  read_wide_int (w, val, *len, m_precision);
+  if (w == -1)
+    r.m_nonzero_mask = NULL;
+  else
+    {
+      r.m_nonzero_mask = wide_int_to_tree (type, w);
+      if (r.m_kind == VR_VARYING)
+	r.m_kind = VR_RANGE;
     }
-  r.set_nonzero_bits (get_nonzero_bits ());
-}
 
-// Return the size in bytes to allocate a slot that can hold R.
+  if (flag_checking)
+    r.verify_range ();
+}
 
-size_t
-irange_storage_slot::size (const irange &r)
+bool
+irange_storage::equal_p (const irange &r, tree type) const
 {
-  gcc_checking_assert (!r.undefined_p ());
-
-  unsigned prec = TYPE_PRECISION (r.type ());
-  unsigned n = num_wide_ints_needed (r);
-  if (n > MAX_INTS)
-    n = MAX_INTS;
-  return (sizeof (irange_storage_slot)
-	  + trailing_wide_ints<MAX_INTS>::extra_size (prec, n));
+  if (m_kind == VR_UNDEFINED || r.undefined_p ())
+    return m_kind == r.m_kind;
+  if (m_kind == VR_VARYING || r.varying_p ())
+    return m_kind == r.m_kind && types_compatible_p (r.type (), type);
+
+  tree rtype = r.type ();
+  if (!types_compatible_p (rtype, type))
+    return false;
+
+  // ?? We could make this faster by doing the comparison in place,
+  // without going through get_irange.
+  int_range_max tmp;
+  get_irange (tmp, rtype);
+  return tmp == r;
 }
 
-// Return the number of wide ints needed to represent R.
+// Return the size in bytes to allocate storage that can hold R.
 
-unsigned int
-irange_storage_slot::num_wide_ints_needed (const irange &r)
+size_t
+irange_storage::size (const irange &r)
 {
-  return r.num_pairs () * 2 + 1;
+  if (r.undefined_p ())
+    return sizeof (irange_storage);
+
+  unsigned prec = TYPE_PRECISION (r.type ());
+  unsigned n = r.num_pairs () * 2 + 1;
+  unsigned hwi_size = ((n * WIDE_INT_MAX_HWIS (prec) - 1)
+		       * sizeof (HOST_WIDE_INT));
+  unsigned len_size = n;
+  return sizeof (irange_storage) + hwi_size + len_size;
 }
 
-// Return TRUE if R fits in the current slot.
+// Return TRUE if R fits in the current storage.
 
 bool
-irange_storage_slot::fits_p (const irange &r) const
+irange_storage::fits_p (const irange &r) const
 {
-  return m_ints.num_elements () >= num_wide_ints_needed (r);
+  return m_max_ranges >= r.num_pairs ();
 }
 
-// Dump the current slot.
-
 void
-irange_storage_slot::dump () const
+irange_storage::dump () const
 {
-  fprintf (stderr, "raw irange_storage_slot:\n");
-  for (unsigned i = 1; i < m_ints.num_elements (); i += 2)
+  fprintf (stderr, "irange_storage (prec=%d, ranges=%d):\n",
+	   m_precision, m_num_ranges);
+
+  if (m_num_ranges == 0)
+    return;
+
+  const HOST_WIDE_INT *val = &m_val[0];
+  const unsigned char *len = lengths_address ();
+  int i, j;
+
+  fprintf (stderr, "  lengths = [ ");
+  for (i = 0; i < m_num_ranges * 2 + 1; ++i)
+    fprintf (stderr, "%d ", len[i]);
+  fprintf (stderr, "]\n");
+
+  for (i = 0; i < m_num_ranges; ++i)
     {
-      m_ints[i].dump ();
-      m_ints[i + 1].dump ();
+      for (j = 0; j < *len; ++j)
+	fprintf (stderr, "  [PAIR %d] LB " HOST_WIDE_INT_PRINT_DEC "\n", i,
+		 *val++);
+      ++len;
+      for (j = 0; j < *len; ++j)
+	fprintf (stderr, "  [PAIR %d] UB " HOST_WIDE_INT_PRINT_DEC "\n", i,
+		 *val++);
+      ++len;
     }
-  fprintf (stderr, "NONZERO ");
-  wide_int nz = get_nonzero_bits ();
-  nz.dump ();
+  for (j = 0; j < *len; ++j)
+    fprintf (stderr, "  [NZ] " HOST_WIDE_INT_PRINT_DEC "\n", *val++);
 }
 
 DEBUG_FUNCTION void
-debug (const irange_storage_slot &storage)
+debug (const irange_storage &storage)
 {
   storage.dump ();
   fprintf (stderr, "\n");
 }
 
-// Implementation of frange_storage_slot.
+//============================================================================
+// frange_storage implementation
+//============================================================================
 
-frange_storage_slot *
-frange_storage_slot::alloc_slot (vrange_allocator &allocator, const frange &r)
+// Allocate a new frange_storage object initialized to R.
+
+frange_storage *
+frange_storage::alloc (vrange_internal_alloc &allocator, const frange &r)
 {
-  size_t size = sizeof (frange_storage_slot);
-  frange_storage_slot *p
-    = static_cast <frange_storage_slot *> (allocator.alloc (size));
-  new (p) frange_storage_slot (r);
+  size_t size = sizeof (frange_storage);
+  frange_storage *p = static_cast <frange_storage *> (allocator.alloc (size));
+  new (p) frange_storage (r);
   return p;
 }
 
 void
-frange_storage_slot::set_frange (const frange &r)
+frange_storage::set_frange (const frange &r)
 {
   gcc_checking_assert (fits_p (r));
-  gcc_checking_assert (!r.undefined_p ());
 
   m_kind = r.m_kind;
   m_min = r.m_min;
@@ -257,7 +495,7 @@ frange_storage_slot::set_frange (const frange &r)
 }
 
 void
-frange_storage_slot::get_frange (frange &r, tree type) const
+frange_storage::get_frange (frange &r, tree type) const
 {
   gcc_checking_assert (r.supports_type_p (type));
 
@@ -275,6 +513,11 @@ frange_storage_slot::get_frange (frange &r, tree type) const
 	r.set_undefined ();
       return;
     }
+  if (m_kind == VR_UNDEFINED)
+    {
+      r.set_undefined ();
+      return;
+    }
 
   // We use the constructor to create the new range instead of writing
   // out the bits into the frange directly, because the global range
@@ -293,7 +536,34 @@ frange_storage_slot::get_frange (frange &r, tree type) const
 }
 
 bool
-frange_storage_slot::fits_p (const frange &) const
+frange_storage::equal_p (const frange &r, tree type) const
+{
+  if (r.undefined_p ())
+    return m_kind == VR_UNDEFINED;
+
+  tree rtype = type;
+  if (!types_compatible_p (rtype, type))
+    return false;
+
+  frange tmp;
+  get_frange (tmp, rtype);
+  return tmp == r;
+}
+
+bool
+frange_storage::fits_p (const frange &) const
 {
   return true;
 }
+
+static vrange_allocator ggc_vrange_allocator (true);
+
+vrange_storage *ggc_alloc_vrange_storage (tree type)
+{
+  return ggc_vrange_allocator.clone_varying (type);
+}
+
+vrange_storage *ggc_alloc_vrange_storage (const vrange &r)
+{
+  return ggc_vrange_allocator.clone (r);
+}
diff --git a/gcc/value-range-storage.h b/gcc/value-range-storage.h
index 070b85c5739..4ec0da73059 100644
--- a/gcc/value-range-storage.h
+++ b/gcc/value-range-storage.h
@@ -21,97 +21,101 @@ along with GCC; see the file COPYING3.  If not see
 #ifndef GCC_VALUE_RANGE_STORAGE_H
 #define GCC_VALUE_RANGE_STORAGE_H
 
-// This class is used to allocate the minimum amount of storage needed
-// for a given range.  Storage is automatically freed at destruction
-// of the class.
+// This class is used to allocate chunks of memory that can store
+// ranges as memory efficiently as possible.
 
 class vrange_allocator
 {
 public:
-  vrange_allocator () { }
-  virtual ~vrange_allocator () { }
-  // Allocate a range of TYPE.
-  vrange *alloc_vrange (tree type);
-  // Allocate a memory block of BYTES.
-  virtual void *alloc (unsigned bytes) = 0;
-  virtual void free (void *p) = 0;
-  // Return a clone of SRC.
-  template <typename T> T *clone (const T &src);
+  // Use GC memory when GC is true, otherwise use obstacks.
+  vrange_allocator (bool gc = false);
+  ~vrange_allocator ();
+  class vrange_storage *clone (const vrange &r);
+  vrange_storage *clone_varying (tree type);
+  vrange_storage *clone_undefined (tree type);
+  void *alloc (size_t size);
+  void free (void *);
 private:
-  irange *alloc_irange (unsigned pairs);
-  frange *alloc_frange ();
-  void operator= (const vrange_allocator &) = delete;
+  DISABLE_COPY_AND_ASSIGN (vrange_allocator);
+  class vrange_internal_alloc *m_alloc;
+  bool m_gc;
 };
 
-// This class is used to allocate chunks of memory that can store
-// ranges as memory efficiently as possible.  It is meant to be used
-// when long term storage of a range is needed.  The class can be used
-// with any vrange_allocator (i.e. alloca or GC).
+// Efficient memory storage for a vrange.
+//
+// The GTY marker here does nothing but get gengtype to generate the
+// ggc_test_and_set_mark calls.  We ignore the derived classes, since
+// they don't contain any pointers.
 
-class vrange_storage
+class GTY(()) vrange_storage
 {
 public:
-  vrange_storage (vrange_allocator *alloc) : m_alloc (alloc) { }
-  void *alloc_slot (const vrange &r);
-  void free (void *slot) { m_alloc->free (slot); }
-  void get_vrange (const void *slot, vrange &r, tree type);
-  void set_vrange (void *slot, const vrange &r);
-  static bool fits_p (const void *slot, const vrange &r);
-private:
-  DISABLE_COPY_AND_ASSIGN (vrange_storage);
-  vrange_allocator *m_alloc;
+  static vrange_storage *alloc (vrange_internal_alloc &, const vrange &);
+  void get_vrange (vrange &r, tree type) const;
+  void set_vrange (const vrange &r);
+  bool fits_p (const vrange &r) const;
+  bool equal_p (const vrange &r, tree type) const;
+protected:
+  // Stack initialization disallowed.
+  vrange_storage () { }
 };
 
-// A chunk of memory pointing to an irange storage.
+// Efficient memory storage for an irange.
 
-class GTY ((variable_size)) irange_storage_slot
+class irange_storage : public vrange_storage
 {
 public:
-  static irange_storage_slot *alloc_slot (vrange_allocator &, const irange &r);
+  static irange_storage *alloc (vrange_internal_alloc &, const irange &);
   void set_irange (const irange &r);
   void get_irange (irange &r, tree type) const;
-  wide_int get_nonzero_bits () const { return m_ints[0]; }
+  bool equal_p (const irange &r, tree type) const;
   bool fits_p (const irange &r) const;
-  static size_t size (const irange &r);
   void dump () const;
 private:
-  DISABLE_COPY_AND_ASSIGN (irange_storage_slot);
-  friend void gt_ggc_mx_irange_storage_slot (void *);
-  friend void gt_pch_p_19irange_storage_slot (void *, void *,
+  DISABLE_COPY_AND_ASSIGN (irange_storage);
+  static size_t size (const irange &r);
+  const unsigned char *lengths_address () const;
+  unsigned char *write_lengths_address ();
+  friend void gt_ggc_mx_irange_storage (void *);
+  friend void gt_pch_p_14irange_storage (void *, void *,
 					      gt_pointer_operator, void *);
-  friend void gt_pch_nx_irange_storage_slot (void *);
+  friend void gt_pch_nx_irange_storage (void *);
+
+  // The shared precision of each number.
+  unsigned short m_precision;
 
-  // This is the maximum number of wide_int's allowed in the trailing
-  // ints structure, without going over 16 bytes (128 bits) in the
-  // control word that precedes the HOST_WIDE_INTs in
-  // trailing_wide_ints::m_val[].
-  static const unsigned MAX_INTS = 12;
+  // The max number of sub-ranges that fit in this storage.
+  const unsigned char m_max_ranges;
 
-  // Maximum number of range pairs we can handle, considering the
-  // nonzero bits take one wide_int.
-  static const unsigned MAX_PAIRS = (MAX_INTS - 1) / 2;
+  // The number of stored sub-ranges.
+  unsigned char m_num_ranges;
 
-  // Constructor is private to disallow stack initialization.  Use
-  // alloc_slot() to create objects.
-  irange_storage_slot (const irange &r);
+  enum value_range_kind m_kind : 3;
 
-  static unsigned num_wide_ints_needed (const irange &r);
+  // The length of this is m_num_ranges * 2 + 1 to accomodate the nonzero bits.
+  HOST_WIDE_INT m_val[1];
 
-  trailing_wide_ints<MAX_INTS> m_ints;
+  // Another variable-length part of the structure following the HWIs.
+  // This is the length of each wide_int in m_val.
+  //
+  // unsigned char m_len[];
+
+  irange_storage (const irange &r);
 };
 
-// A chunk of memory to store an frange to long term memory.
+// Efficient memory storage for an frange.
 
-class GTY (()) frange_storage_slot
+class frange_storage : public vrange_storage
 {
  public:
-  static frange_storage_slot *alloc_slot (vrange_allocator &, const frange &r);
+  static frange_storage *alloc (vrange_internal_alloc &, const frange &r);
   void set_frange (const frange &r);
   void get_frange (frange &r, tree type) const;
+  bool equal_p (const frange &r, tree type) const;
   bool fits_p (const frange &) const;
  private:
-  frange_storage_slot (const frange &r) { set_frange (r); }
-  DISABLE_COPY_AND_ASSIGN (frange_storage_slot);
+  frange_storage (const frange &r) { set_frange (r); }
+  DISABLE_COPY_AND_ASSIGN (frange_storage);
 
   enum value_range_kind m_kind;
   REAL_VALUE_TYPE m_min;
@@ -120,113 +124,7 @@ class GTY (()) frange_storage_slot
   bool m_neg_nan;
 };
 
-class obstack_vrange_allocator final: public vrange_allocator
-{
-public:
-  obstack_vrange_allocator ()
-  {
-    obstack_init (&m_obstack);
-  }
-  virtual ~obstack_vrange_allocator () final override
-  {
-    obstack_free (&m_obstack, NULL);
-  }
-  virtual void *alloc (unsigned bytes) final override
-  {
-    return obstack_alloc (&m_obstack, bytes);
-  }
-  virtual void free (void *) final override { }
-private:
-  obstack m_obstack;
-};
-
-class ggc_vrange_allocator final: public vrange_allocator
-{
-public:
-  ggc_vrange_allocator () { }
-  virtual ~ggc_vrange_allocator () final override { }
-  virtual void *alloc (unsigned bytes) final override
-  {
-    return ggc_internal_alloc (bytes);
-  }
-  virtual void free (void *p) final override
-  {
-    return ggc_free (p);
-  }
-};
-
-// Return a new range to hold ranges of TYPE.  The newly allocated
-// range is initialized to VR_UNDEFINED.
-
-inline vrange *
-vrange_allocator::alloc_vrange (tree type)
-{
-  if (irange::supports_p (type))
-    return alloc_irange (2);
-  if (frange::supports_p (type))
-    return alloc_frange ();
-  return NULL;
-  gcc_unreachable ();
-}
-
-// Return a new range with NUM_PAIRS.
-
-inline irange *
-vrange_allocator::alloc_irange (unsigned num_pairs)
-{
-  // Never allocate 0 pairs.
-  if (num_pairs < 1)
-    num_pairs = 2;
-
-  size_t nbytes = sizeof (tree) * 2 * num_pairs;
-
-  // Allocate the irange and required memory for the vector.
-  void *r = alloc (sizeof (irange));
-  tree *mem = static_cast <tree *> (alloc (nbytes));
-  return new (r) irange (mem, num_pairs);
-}
-
-inline frange *
-vrange_allocator::alloc_frange ()
-{
-  void *r = alloc (sizeof (frange));
-  return new (r) frange ();
-}
-
-// Return a clone of an irange.
-
-template <>
-inline irange *
-vrange_allocator::clone <irange> (const irange &src)
-{
-  irange *r = alloc_irange (src.num_pairs ());
-  *r = src;
-  return r;
-}
-
-// Return a clone of an frange.
-
-template <>
-inline frange *
-vrange_allocator::clone <frange> (const frange &src)
-{
-  frange *r = alloc_frange ();
-  *r = src;
-  return r;
-}
-
-// Return a clone of a vrange.
-
-template <>
-inline vrange *
-vrange_allocator::clone <vrange> (const vrange &src)
-{
-  if (is_a <irange> (src))
-    return clone <irange> (as_a <irange> (src));
-  if (is_a <frange> (src))
-    return clone <frange> (as_a <frange> (src));
-  return NULL;
-  gcc_unreachable ();
-}
+extern vrange_storage *ggc_alloc_vrange_storage (tree type);
+extern vrange_storage *ggc_alloc_vrange_storage (const vrange &);
 
 #endif // GCC_VALUE_RANGE_STORAGE_H
diff --git a/gcc/value-range.h b/gcc/value-range.h
index 0b61341e5c4..9d485fbbe77 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -119,7 +119,7 @@ namespace inchash
 class GTY((user)) irange : public vrange
 {
   friend value_range_kind get_legacy_range (const irange &, tree &, tree &);
-  friend class vrange_allocator;
+  friend class irange_storage;
 public:
   // In-place setters.
   virtual void set (tree, tree, value_range_kind = VR_RANGE) override;
@@ -310,7 +310,7 @@ nan_state::neg_p () const
 
 class GTY((user)) frange : public vrange
 {
-  friend class frange_storage_slot;
+  friend class frange_storage;
   friend class vrange_printer;
   friend void gt_ggc_mx (frange *);
   friend void gt_pch_nx (frange *);
-- 
2.40.0



* [COMMITTED] Remove irange::{min,max,kind}.
  2023-05-01  6:28 [COMMITTED] vrange_storage overhaul Aldy Hernandez
@ 2023-05-01  6:28 ` Aldy Hernandez
  2023-05-01  6:28 ` [COMMITTED] Remove irange::tree_{lower,upper}_bound Aldy Hernandez
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:28 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

gcc/ChangeLog:

	* tree-ssa-loop-niter.cc (refine_value_range_using_guard): Remove
	kind() call.
	(determine_value_range): Same.
	(record_nonwrapping_iv): Same.
	(infer_loop_bounds_from_signedness): Same.
	(scev_var_range_cant_overflow): Same.
	* tree-vrp.cc (operand_less_p): Delete.
	* tree-vrp.h (operand_less_p): Delete.
	* value-range.cc (get_legacy_range): Remove uses of deprecated API.
	(irange::value_inside_range): Delete.
	* value-range.h (vrange::kind): Delete.
	(irange::num_pairs): Remove check of m_kind.
	(irange::min): Delete.
	(irange::max): Delete.
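
In a nutshell, callers that previously asked for r.kind () now test
the predicates directly.  For example, the pattern used in
tree-ssa-loop-niter.cc below becomes:

  Value_Range r (TREE_TYPE (def));
  get_range_query (cfun)->range_of_expr (r, def);
  if (!r.varying_p () && !r.undefined_p ())	// was: r.kind () == VR_RANGE
    {
      low = wide_int_to_tree (type, r.lower_bound ());
      high = wide_int_to_tree (type, r.upper_bound ());
    }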
---
 gcc/tree-ssa-loop-niter.cc | 26 +++++++++++++-------
 gcc/tree-vrp.cc            | 24 -------------------
 gcc/tree-vrp.h             |  1 -
 gcc/value-range.cc         | 49 --------------------------------------
 gcc/value-range.h          | 37 +---------------------------
 5 files changed, 19 insertions(+), 118 deletions(-)

diff --git a/gcc/tree-ssa-loop-niter.cc b/gcc/tree-ssa-loop-niter.cc
index 33233979ba0..c0ed6573409 100644
--- a/gcc/tree-ssa-loop-niter.cc
+++ b/gcc/tree-ssa-loop-niter.cc
@@ -223,7 +223,8 @@ refine_value_range_using_guard (tree type, tree var,
   else if (TREE_CODE (varc1) == SSA_NAME
 	   && INTEGRAL_TYPE_P (type)
 	   && get_range_query (cfun)->range_of_expr (r, varc1)
-	   && r.kind () == VR_RANGE)
+	   && !r.undefined_p ()
+	   && !r.varying_p ())
     {
       gcc_assert (wi::le_p (r.lower_bound (), r.upper_bound (), sgn));
       wi::to_mpz (r.lower_bound (), minc1, sgn);
@@ -368,7 +369,10 @@ determine_value_range (class loop *loop, tree type, tree var, mpz_t off,
       /* Either for VAR itself...  */
       Value_Range var_range (TREE_TYPE (var));
       get_range_query (cfun)->range_of_expr (var_range, var);
-      rtype = var_range.kind ();
+      if (var_range.varying_p () || var_range.undefined_p ())
+	rtype = VR_VARYING;
+      else
+	rtype = VR_RANGE;
       if (!var_range.undefined_p ())
 	{
 	  minv = var_range.lower_bound ();
@@ -384,7 +388,8 @@ determine_value_range (class loop *loop, tree type, tree var, mpz_t off,
 	  if (PHI_ARG_DEF_FROM_EDGE (phi, e) == var
 	      && get_range_query (cfun)->range_of_expr (phi_range,
 						    gimple_phi_result (phi))
-	      && phi_range.kind () == VR_RANGE)
+	      && !phi_range.varying_p ()
+	      && !phi_range.undefined_p ())
 	    {
 	      if (rtype != VR_RANGE)
 		{
@@ -404,7 +409,10 @@ determine_value_range (class loop *loop, tree type, tree var, mpz_t off,
 		    {
 		      Value_Range vr (TREE_TYPE (var));
 		      get_range_query (cfun)->range_of_expr (vr, var);
-		      rtype = vr.kind ();
+		      if (vr.varying_p () || vr.undefined_p ())
+			rtype = VR_VARYING;
+		      else
+			rtype = VR_RANGE;
 		      if (!vr.undefined_p ())
 			{
 			  minv = vr.lower_bound ();
@@ -4045,7 +4053,8 @@ record_nonwrapping_iv (class loop *loop, tree base, tree step, gimple *stmt,
       if (TREE_CODE (orig_base) == SSA_NAME
 	  && TREE_CODE (high) == INTEGER_CST
 	  && INTEGRAL_TYPE_P (TREE_TYPE (orig_base))
-	  && (base_range.kind () == VR_RANGE
+	  && ((!base_range.varying_p ()
+	       && !base_range.undefined_p ())
 	      || get_cst_init_from_scev (orig_base, &max, false))
 	  && wi::gts_p (wi::to_wide (high), max))
 	base = wide_int_to_tree (unsigned_type, max);
@@ -4067,7 +4076,8 @@ record_nonwrapping_iv (class loop *loop, tree base, tree step, gimple *stmt,
       if (TREE_CODE (orig_base) == SSA_NAME
 	  && TREE_CODE (low) == INTEGER_CST
 	  && INTEGRAL_TYPE_P (TREE_TYPE (orig_base))
-	  && (base_range.kind () == VR_RANGE
+	  && ((!base_range.varying_p ()
+	       && !base_range.undefined_p ())
 	      || get_cst_init_from_scev (orig_base, &min, true))
 	  && wi::gts_p (min, wi::to_wide (low)))
 	base = wide_int_to_tree (unsigned_type, min);
@@ -4335,7 +4345,7 @@ infer_loop_bounds_from_signedness (class loop *loop, gimple *stmt)
   high = upper_bound_in_type (type, type);
   Value_Range r (TREE_TYPE (def));
   get_range_query (cfun)->range_of_expr (r, def);
-  if (r.kind () == VR_RANGE)
+  if (!r.varying_p () && !r.undefined_p ())
     {
       low = wide_int_to_tree (type, r.lower_bound ());
       high = wide_int_to_tree (type, r.upper_bound ());
@@ -5385,7 +5395,7 @@ scev_var_range_cant_overflow (tree var, tree step, class loop *loop)
 
   Value_Range r (TREE_TYPE (var));
   get_range_query (cfun)->range_of_expr (r, var);
-  if (r.kind () != VR_RANGE)
+  if (r.varying_p () || r.undefined_p ())
     return false;
 
   /* VAR is a scev whose evolution part is STEP and value range info
diff --git a/gcc/tree-vrp.cc b/gcc/tree-vrp.cc
index 6c6e0382809..c0dcd50ee01 100644
--- a/gcc/tree-vrp.cc
+++ b/gcc/tree-vrp.cc
@@ -367,30 +367,6 @@ get_single_symbol (tree t, bool *neg, tree *inv)
   return t;
 }
 
-/* Return
-   1 if VAL < VAL2
-   0 if !(VAL < VAL2)
-   -2 if those are incomparable.  */
-int
-operand_less_p (tree val, tree val2)
-{
-  /* LT is folded faster than GE and others.  Inline the common case.  */
-  if (TREE_CODE (val) == INTEGER_CST && TREE_CODE (val2) == INTEGER_CST)
-    return tree_int_cst_lt (val, val2);
-  else if (TREE_CODE (val) == SSA_NAME && TREE_CODE (val2) == SSA_NAME)
-    return val == val2 ? 0 : -2;
-  else
-    {
-      int cmp = compare_values (val, val2);
-      if (cmp == -1)
-	return 1;
-      else if (cmp == 0 || cmp == 1)
-	return 0;
-      else
-	return -2;
-    }
-}
-
 /* Compare two values VAL1 and VAL2.  Return
 
    	-2 if VAL1 and VAL2 cannot be compared at compile-time,
diff --git a/gcc/tree-vrp.h b/gcc/tree-vrp.h
index 58216388ee6..ba0a314d510 100644
--- a/gcc/tree-vrp.h
+++ b/gcc/tree-vrp.h
@@ -24,7 +24,6 @@ along with GCC; see the file COPYING3.  If not see
 
 extern int compare_values (tree, tree);
 extern int compare_values_warnv (tree, tree, bool *);
-extern int operand_less_p (tree, tree);
 
 extern enum value_range_kind intersect_range_with_nonzero_bits
   (enum value_range_kind, wide_int *, wide_int *, const wide_int &, signop);
diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index c11c3f58d2c..ee43efa1ab5 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -907,15 +907,10 @@ irange::operator= (const irange &src)
 value_range_kind
 get_legacy_range (const irange &r, tree &min, tree &max)
 {
-  value_range_kind old_kind = r.kind ();
-  tree old_min = r.min ();
-  tree old_max = r.max ();
-
   if (r.undefined_p ())
     {
       min = NULL_TREE;
       max = NULL_TREE;
-      gcc_checking_assert (old_kind == VR_UNDEFINED);
       return VR_UNDEFINED;
     }
 
@@ -924,9 +919,6 @@ get_legacy_range (const irange &r, tree &min, tree &max)
     {
       min = wide_int_to_tree (type, r.lower_bound ());
       max = wide_int_to_tree (type, r.upper_bound ());
-      gcc_checking_assert (old_kind == VR_VARYING);
-      gcc_checking_assert (vrp_operand_equal_p (old_min, min));
-      gcc_checking_assert (vrp_operand_equal_p (old_max, max));
       return VR_VARYING;
     }
 
@@ -946,9 +938,6 @@ get_legacy_range (const irange &r, tree &min, tree &max)
 
   min = wide_int_to_tree (type, r.lower_bound ());
   max = wide_int_to_tree (type, r.upper_bound ());
-  gcc_checking_assert (old_kind == VR_RANGE);
-  gcc_checking_assert (vrp_operand_equal_p (old_min, min));
-  gcc_checking_assert (vrp_operand_equal_p (old_max, max));
   return VR_RANGE;
 }
 
@@ -1165,44 +1154,6 @@ irange::singleton_p (tree *result) const
   return false;
 }
 
-/* Return 1 if VAL is inside value range.
-	  0 if VAL is not inside value range.
-	 -2 if we cannot tell either way.
-
-   Benchmark compile/20001226-1.c compilation time after changing this
-   function.  */
-
-int
-irange::value_inside_range (tree val) const
-{
-  if (varying_p ())
-    return 1;
-
-  if (undefined_p ())
-    return 0;
-
-  gcc_checking_assert (TREE_CODE (val) == INTEGER_CST);
-
-  // FIXME:
-  if (TREE_CODE (val) == INTEGER_CST)
-    return contains_p (val);
-
-  int cmp1 = operand_less_p (val, min ());
-  if (cmp1 == -2)
-    return -2;
-  if (cmp1 == 1)
-    return m_kind != VR_RANGE;
-
-  int cmp2 = operand_less_p (max (), val);
-  if (cmp2 == -2)
-    return -2;
-
-  if (m_kind == VR_RANGE)
-    return !cmp2;
-  else
-    return !!cmp2;
-}
-
 /* Return TRUE if range contains INTEGER_CST.  */
 /* Return 1 if VAL is inside value range.
 	  0 if VAL is not inside value range.
diff --git a/gcc/value-range.h b/gcc/value-range.h
index 9d485fbbe77..68f380a2dbb 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -166,10 +166,6 @@ public:
   wide_int get_nonzero_bits () const;
   void set_nonzero_bits (const wide_int_ref &bits);
 
-  // Deprecated legacy public methods.
-  tree min () const;				// DEPRECATED
-  tree max () const;				// DEPRECATED
-
 protected:
   irange (tree *, unsigned);
   // potential promotion to public?
@@ -188,7 +184,6 @@ protected:
   void normalize_kind ();
 
   void verify_range ();
-  int value_inside_range (tree) const;
 
 private:
   friend void gt_ggc_mx (irange *);
@@ -499,7 +494,6 @@ public:
   void set (tree min, tree max, value_range_kind kind = VR_RANGE)
     { return m_vrange->set (min, max, kind); }
   tree type () { return m_vrange->type (); }
-  enum value_range_kind kind () { return m_vrange->kind (); }
   bool varying_p () const { return m_vrange->varying_p (); }
   bool undefined_p () const { return m_vrange->undefined_p (); }
   void set_varying (tree type) { m_vrange->set_varying (type); }
@@ -645,26 +639,12 @@ extern bool vrp_operand_equal_p (const_tree, const_tree);
 inline REAL_VALUE_TYPE frange_val_min (const_tree type);
 inline REAL_VALUE_TYPE frange_val_max (const_tree type);
 
-inline value_range_kind
-vrange::kind () const
-{
-  return m_kind;
-}
-
 // Number of sub-ranges in a range.
 
 inline unsigned
 irange::num_pairs () const
 {
-  if (m_kind == VR_ANTI_RANGE)
-    {
-      bool constant_p = (TREE_CODE (min ()) == INTEGER_CST
-			 && TREE_CODE (max ()) == INTEGER_CST);
-      gcc_checking_assert (constant_p);
-      return 2;
-    }
-  else
-    return m_num_ranges;
+  return m_num_ranges;
 }
 
 inline tree
@@ -701,21 +681,6 @@ irange::tree_upper_bound () const
   return tree_upper_bound (m_num_ranges - 1);
 }
 
-inline tree
-irange::min () const
-{
-  return tree_lower_bound (0);
-}
-
-inline tree
-irange::max () const
-{
-  if (m_num_ranges)
-    return tree_upper_bound ();
-  else
-    return NULL;
-}
-
 inline bool
 irange::varying_compatible_p () const
 {
-- 
2.40.0



* [COMMITTED] Remove irange::tree_{lower,upper}_bound.
  2023-05-01  6:28 [COMMITTED] vrange_storage overhaul Aldy Hernandez
  2023-05-01  6:28 ` [COMMITTED] Remove irange::{min,max,kind} Aldy Hernandez
@ 2023-05-01  6:28 ` Aldy Hernandez
  2023-05-01  6:28 ` [COMMITTED] Various cleanups in vr-values.cc towards ranger API Aldy Hernandez
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:28 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

gcc/ChangeLog:

	* value-range.cc (irange::irange_set_anti_range): Remove uses of
	tree_lower_bound and tree_upper_bound.
	(irange::verify_range): Same.
	(irange::operator==): Same.
	(irange::singleton_p): Same.
	* value-range.h (irange::tree_lower_bound): Delete.
	(irange::tree_upper_bound): Delete.
	(irange::lower_bound): Use m_base directly instead of
	tree_lower_bound.
	(irange::upper_bound): Same, but with tree_upper_bound.
	(irange::zero_p): Remove uses of tree_lower_bound and
	tree_upper_bound.
---
 gcc/value-range.cc | 36 ++++++++++++++++++------------------
 gcc/value-range.h  | 39 ++++-----------------------------------
 2 files changed, 22 insertions(+), 53 deletions(-)

diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index ee43efa1ab5..a0e49df28f3 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -1010,7 +1010,7 @@ irange::irange_set_anti_range (tree min, tree max)
     {
       wide_int lim1 = wi::sub (w_min, 1, sign, &ovf);
       gcc_checking_assert (ovf != wi::OVF_OVERFLOW);
-      m_base[0] = type_range.tree_lower_bound (0);
+      m_base[0] = wide_int_to_tree (type, type_range.lower_bound (0));
       m_base[1] = wide_int_to_tree (type, lim1);
       m_num_ranges = 1;
     }
@@ -1025,7 +1025,8 @@ irange::irange_set_anti_range (tree min, tree max)
       wide_int lim2 = wi::add (w_max, 1, sign, &ovf);
       gcc_checking_assert (ovf != wi::OVF_OVERFLOW);
       m_base[m_num_ranges * 2] = wide_int_to_tree (type, lim2);
-      m_base[m_num_ranges * 2 + 1] = type_range.tree_upper_bound (0);
+      m_base[m_num_ranges * 2 + 1]
+	= wide_int_to_tree (type, type_range.upper_bound (0));
       ++m_num_ranges;
     }
 
@@ -1104,9 +1105,9 @@ irange::verify_range ()
   gcc_checking_assert (!varying_compatible_p ());
   for (unsigned i = 0; i < m_num_ranges; ++i)
     {
-      tree lb = tree_lower_bound (i);
-      tree ub = tree_upper_bound (i);
-      int c = compare_values (lb, ub);
+      wide_int lb = lower_bound (i);
+      wide_int ub = upper_bound (i);
+      int c = wi::cmp (lb, ub, TYPE_SIGN (type ()));
       gcc_checking_assert (c == 0 || c == -1);
     }
 }
@@ -1120,20 +1121,20 @@ irange::operator== (const irange &other) const
   if (m_num_ranges == 0)
     return true;
 
+  signop sign1 = TYPE_SIGN (type ());
+  signop sign2 = TYPE_SIGN (other.type ());
+
   for (unsigned i = 0; i < m_num_ranges; ++i)
     {
-      tree lb = tree_lower_bound (i);
-      tree ub = tree_upper_bound (i);
-      tree lb_other = other.tree_lower_bound (i);
-      tree ub_other = other.tree_upper_bound (i);
-      if (!operand_equal_p (lb, lb_other, 0)
-	  || !operand_equal_p (ub, ub_other, 0))
+      widest_int lb = widest_int::from (lower_bound (i), sign1);
+      widest_int ub = widest_int::from (upper_bound (i), sign1);
+      widest_int lb_other = widest_int::from (other.lower_bound (i), sign2);
+      widest_int ub_other = widest_int::from (other.upper_bound (i), sign2);
+      if (lb != lb_other || ub != ub_other)
 	return false;
     }
-  widest_int nz1 = widest_int::from (get_nonzero_bits (),
-				     TYPE_SIGN (type ()));
-  widest_int nz2 = widest_int::from (other.get_nonzero_bits (),
-				     TYPE_SIGN (other.type ()));
+  widest_int nz1 = widest_int::from (get_nonzero_bits (), sign1);
+  widest_int nz2 = widest_int::from (other.get_nonzero_bits (), sign2);
   return nz1 == nz2;
 }
 
@@ -1144,11 +1145,10 @@ irange::operator== (const irange &other) const
 bool
 irange::singleton_p (tree *result) const
 {
-  if (num_pairs () == 1 && (wi::to_wide (tree_lower_bound ())
-			    == wi::to_wide (tree_upper_bound ())))
+  if (num_pairs () == 1 && lower_bound () == upper_bound ())
     {
       if (result)
-	*result = tree_lower_bound ();
+	*result = wide_int_to_tree (type (), lower_bound ());
       return true;
     }
   return false;
diff --git a/gcc/value-range.h b/gcc/value-range.h
index 68f380a2dbb..10c44c5c062 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -168,10 +168,6 @@ public:
 
 protected:
   irange (tree *, unsigned);
-  // potential promotion to public?
-  tree tree_lower_bound (unsigned = 0) const;
-  tree tree_upper_bound (unsigned) const;
-  tree tree_upper_bound () const;
 
    // In-place operators.
   bool irange_union (const irange &);
@@ -654,33 +650,6 @@ irange::type () const
   return TREE_TYPE (m_base[0]);
 }
 
-// Return the lower bound of a sub-range expressed as a tree.  PAIR is
-// the sub-range in question.
-
-inline tree
-irange::tree_lower_bound (unsigned pair) const
-{
-  return m_base[pair * 2];
-}
-
-// Return the upper bound of a sub-range expressed as a tree.  PAIR is
-// the sub-range in question.
-
-inline tree
-irange::tree_upper_bound (unsigned pair) const
-{
-  return m_base[pair * 2 + 1];
-}
-
-// Return the highest bound of a range expressed as a tree.
-
-inline tree
-irange::tree_upper_bound () const
-{
-  gcc_checking_assert (m_num_ranges);
-  return tree_upper_bound (m_num_ranges - 1);
-}
-
 inline bool
 irange::varying_compatible_p () const
 {
@@ -730,8 +699,8 @@ inline bool
 irange::zero_p () const
 {
   return (m_kind == VR_RANGE && m_num_ranges == 1
-	  && integer_zerop (tree_lower_bound (0))
-	  && integer_zerop (tree_upper_bound (0)));
+	  && lower_bound (0) == 0
+	  && upper_bound (0) == 0);
 }
 
 inline bool
@@ -910,7 +879,7 @@ irange::lower_bound (unsigned pair) const
 {
   gcc_checking_assert (m_num_ranges > 0);
   gcc_checking_assert (pair + 1 <= num_pairs ());
-  return wi::to_wide (tree_lower_bound (pair));
+  return wi::to_wide (m_base[pair * 2]);
 }
 
 // Return the upper bound of a sub-range.  PAIR is the sub-range in
@@ -921,7 +890,7 @@ irange::upper_bound (unsigned pair) const
 {
   gcc_checking_assert (m_num_ranges > 0);
   gcc_checking_assert (pair + 1 <= num_pairs ());
-  return wi::to_wide (tree_upper_bound (pair));
+  return wi::to_wide (m_base[pair * 2 + 1]);
 }
 
 // Return the highest bound of a range.
-- 
2.40.0



* [COMMITTED] Various cleanups in vr-values.cc towards ranger API.
  2023-05-01  6:28 [COMMITTED] vrange_storage overhaul Aldy Hernandez
  2023-05-01  6:28 ` [COMMITTED] Remove irange::{min,max,kind} Aldy Hernandez
  2023-05-01  6:28 ` [COMMITTED] Remove irange::tree_{lower,upper}_bound Aldy Hernandez
@ 2023-05-01  6:28 ` Aldy Hernandez
  2023-05-01  6:28 ` [COMMITTED] Convert get_legacy_range in bounds_of_var_in_loop to irange API Aldy Hernandez
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:28 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

gcc/ChangeLog:

	* vr-values.cc (check_for_binary_op_overflow): Tidy up by using
	ranger API.
	(compare_ranges): Delete.
	(compare_range_with_value): Delete.
	(bounds_of_var_in_loop): Tidy up by using ranger API.
	(simplify_using_ranges::fold_cond_with_ops): Cleanup and rename
	from vrp_evaluate_conditional_warnv_with_ops_using_ranges.
	(simplify_using_ranges::legacy_fold_cond_overflow): Remove
	strict_overflow_p and only_ranges.
	(simplify_using_ranges::legacy_fold_cond): Adjust call to
	legacy_fold_cond_overflow.
	(simplify_using_ranges::simplify_div_or_mod_using_ranges): Use
	fold_cond_with_ops.
	(simplify_using_ranges::simplify_min_or_max_using_ranges): Same.
	(simplify_using_ranges::simplify_abs_using_ranges): Same.
	(simplify_using_ranges::simplify_bit_ops_using_ranges): Use
	range_of_expr directly for both operands.
	(range_fits_type_p): Rename value_range to irange.
	* vr-values.h (range_fits_type_p): Adjust prototype.
---
 gcc/vr-values.cc | 531 ++++++++---------------------------------------
 gcc/vr-values.h  |   8 +-
 2 files changed, 87 insertions(+), 452 deletions(-)

diff --git a/gcc/vr-values.cc b/gcc/vr-values.cc
index 2ee234c4d88..7f623102ac6 100644
--- a/gcc/vr-values.cc
+++ b/gcc/vr-values.cc
@@ -104,43 +104,34 @@ check_for_binary_op_overflow (range_query *query,
 			      tree op0, tree op1, bool *ovf, gimple *s = NULL)
 {
   value_range vr0, vr1;
-  if (TREE_CODE (op0) == SSA_NAME)
-    {
-      if (!query->range_of_expr (vr0, op0, s))
-	vr0.set_varying (TREE_TYPE (op0));
-    }
-  else if (TREE_CODE (op0) == INTEGER_CST)
-    vr0.set (op0, op0);
-  else
+  if (!query->range_of_expr (vr0, op0, s))
     vr0.set_varying (TREE_TYPE (op0));
-
-  if (TREE_CODE (op1) == SSA_NAME)
-    {
-      if (!query->range_of_expr (vr1, op1, s))
-	vr1.set_varying (TREE_TYPE (op1));
-    }
-  else if (TREE_CODE (op1) == INTEGER_CST)
-    vr1.set (op1, op1);
-  else
+  if (!query->range_of_expr (vr1, op1, s))
     vr1.set_varying (TREE_TYPE (op1));
 
   tree vr0min, vr0max, vr1min, vr1max;
-  value_range_kind kind0 = get_legacy_range (vr0, vr0min, vr0max);
-  value_range_kind kind1 = get_legacy_range (vr1, vr1min, vr1max);
-  if (kind0 != VR_RANGE
-      || TREE_OVERFLOW (vr0min)
-      || TREE_OVERFLOW (vr0max))
+  if (vr0.undefined_p () || vr0.varying_p ())
     {
       vr0min = vrp_val_min (TREE_TYPE (op0));
       vr0max = vrp_val_max (TREE_TYPE (op0));
     }
-  if (kind1 != VR_RANGE
-      || TREE_OVERFLOW (vr1min)
-      || TREE_OVERFLOW (vr1max))
+  else
+    {
+      tree type = vr0.type ();
+      vr0min = wide_int_to_tree (type, vr0.lower_bound ());
+      vr0max = wide_int_to_tree (type, vr0.upper_bound ());
+    }
+  if (vr1.undefined_p () || vr1.varying_p ())
     {
       vr1min = vrp_val_min (TREE_TYPE (op1));
       vr1max = vrp_val_max (TREE_TYPE (op1));
     }
+  else
+    {
+      tree type = vr1.type ();
+      vr1min = wide_int_to_tree (type, vr1.lower_bound ());
+      vr1max = wide_int_to_tree (type, vr1.upper_bound ());
+    }
   *ovf = arith_overflowed_p (subcode, type, vr0min,
 			     subcode == MINUS_EXPR ? vr1max : vr1min);
   if (arith_overflowed_p (subcode, type, vr0max,
@@ -208,269 +199,6 @@ check_for_binary_op_overflow (range_query *query,
   return true;
 }
 
-/* Given two numeric value ranges VR0, VR1 and a comparison code COMP:
-
-   - Return BOOLEAN_TRUE_NODE if VR0 COMP VR1 always returns true for
-     all the values in the ranges.
-
-   - Return BOOLEAN_FALSE_NODE if the comparison always returns false.
-
-   - Return NULL_TREE if it is not always possible to determine the
-     value of the comparison.
-
-   Also set *STRICT_OVERFLOW_P to indicate whether comparision evaluation
-   assumed signed overflow is undefined.  */
-
-
-static tree
-compare_ranges (enum tree_code comp, const value_range *vr0,
-		const value_range *vr1, bool *strict_overflow_p)
-{
-  /* VARYING or UNDEFINED ranges cannot be compared.  */
-  if (vr0->varying_p ()
-      || vr0->undefined_p ()
-      || vr1->varying_p ()
-      || vr1->undefined_p ())
-    return NULL_TREE;
-
-  /* Anti-ranges need to be handled separately.  */
-  tree vr0min, vr0max, vr1min, vr1max;
-  value_range_kind kind0 = get_legacy_range (*vr0, vr0min, vr0max);
-  value_range_kind kind1 = get_legacy_range (*vr1, vr1min, vr1max);
-  if (kind0 == VR_ANTI_RANGE || kind1 == VR_ANTI_RANGE)
-    {
-      /* If both are anti-ranges, then we cannot compute any
-	 comparison.  */
-      if (kind0 == VR_ANTI_RANGE && kind1 == VR_ANTI_RANGE)
-	return NULL_TREE;
-
-      /* These comparisons are never statically computable.  */
-      if (comp == GT_EXPR
-	  || comp == GE_EXPR
-	  || comp == LT_EXPR
-	  || comp == LE_EXPR)
-	return NULL_TREE;
-
-      /* Equality can be computed only between a range and an
-	 anti-range.  ~[VAL1, VAL2] == [VAL1, VAL2] is always false.  */
-      if (kind0 == VR_RANGE)
-	{
-	  /* To simplify processing, make VR0 the anti-range.  */
-	  kind0 = kind1;
-	  vr0min = vr1min;
-	  vr0max = vr1max;
-	  kind1 = get_legacy_range (*vr0, vr1min, vr1max);
-	}
-
-      gcc_assert (comp == NE_EXPR || comp == EQ_EXPR);
-
-      if (compare_values_warnv (vr0min, vr1min, strict_overflow_p) == 0
-	  && compare_values_warnv (vr0max, vr1max, strict_overflow_p) == 0)
-	return (comp == NE_EXPR) ? boolean_true_node : boolean_false_node;
-
-      return NULL_TREE;
-    }
-
-  /* Simplify processing.  If COMP is GT_EXPR or GE_EXPR, switch the
-     operands around and change the comparison code.  */
-  if (comp == GT_EXPR || comp == GE_EXPR)
-    {
-      comp = (comp == GT_EXPR) ? LT_EXPR : LE_EXPR;
-      kind0 = kind1;
-      vr0min = vr1min;
-      vr0max = vr1max;
-      kind1 = get_legacy_range (*vr0, vr1min, vr1max);
-    }
-
-  if (comp == EQ_EXPR)
-    {
-      /* Equality may only be computed if both ranges represent
-	 exactly one value.  */
-      if (compare_values_warnv (vr0min, vr0max, strict_overflow_p) == 0
-	  && compare_values_warnv (vr1min, vr1max, strict_overflow_p) == 0)
-	{
-	  int cmp_min = compare_values_warnv (vr0min, vr1min,
-					      strict_overflow_p);
-	  int cmp_max = compare_values_warnv (vr0max, vr1max,
-					      strict_overflow_p);
-	  if (cmp_min == 0 && cmp_max == 0)
-	    return boolean_true_node;
-	  else if (cmp_min != -2 && cmp_max != -2)
-	    return boolean_false_node;
-	}
-      /* If [V0_MIN, V1_MAX] < [V1_MIN, V1_MAX] then V0 != V1.  */
-      else if (compare_values_warnv (vr0min, vr1max,
-				     strict_overflow_p) == 1
-	       || compare_values_warnv (vr1min, vr0max,
-					strict_overflow_p) == 1)
-	return boolean_false_node;
-
-      return NULL_TREE;
-    }
-  else if (comp == NE_EXPR)
-    {
-      int cmp1, cmp2;
-
-      /* If VR0 is completely to the left or completely to the right
-	 of VR1, they are always different.  Notice that we need to
-	 make sure that both comparisons yield similar results to
-	 avoid comparing values that cannot be compared at
-	 compile-time.  */
-      cmp1 = compare_values_warnv (vr0max, vr1min, strict_overflow_p);
-      cmp2 = compare_values_warnv (vr0min, vr1max, strict_overflow_p);
-      if ((cmp1 == -1 && cmp2 == -1) || (cmp1 == 1 && cmp2 == 1))
-	return boolean_true_node;
-
-      /* If VR0 and VR1 represent a single value and are identical,
-	 return false.  */
-      else if (compare_values_warnv (vr0min, vr0max,
-				     strict_overflow_p) == 0
-	       && compare_values_warnv (vr1min, vr1max,
-					strict_overflow_p) == 0
-	       && compare_values_warnv (vr0min, vr1min,
-					strict_overflow_p) == 0
-	       && compare_values_warnv (vr0max, vr1max,
-					strict_overflow_p) == 0)
-	return boolean_false_node;
-
-      /* Otherwise, they may or may not be different.  */
-      else
-	return NULL_TREE;
-    }
-  else if (comp == LT_EXPR || comp == LE_EXPR)
-    {
-      int tst;
-
-      /* If VR0 is to the left of VR1, return true.  */
-      tst = compare_values_warnv (vr0max, vr1min, strict_overflow_p);
-      if ((comp == LT_EXPR && tst == -1)
-	  || (comp == LE_EXPR && (tst == -1 || tst == 0)))
-	return boolean_true_node;
-
-      /* If VR0 is to the right of VR1, return false.  */
-      tst = compare_values_warnv (vr0min, vr1max, strict_overflow_p);
-      if ((comp == LT_EXPR && (tst == 0 || tst == 1))
-	  || (comp == LE_EXPR && tst == 1))
-	return boolean_false_node;
-
-      /* Otherwise, we don't know.  */
-      return NULL_TREE;
-    }
-
-  gcc_unreachable ();
-}
-
-/* Given a value range VR, a value VAL and a comparison code COMP, return
-   BOOLEAN_TRUE_NODE if VR COMP VAL always returns true for all the
-   values in VR.  Return BOOLEAN_FALSE_NODE if the comparison
-   always returns false.  Return NULL_TREE if it is not always
-   possible to determine the value of the comparison.  Also set
-   *STRICT_OVERFLOW_P to indicate whether comparision evaluation
-   assumed signed overflow is undefined.  */
-
-static tree
-compare_range_with_value (enum tree_code comp, const value_range *vr,
-			  tree val, bool *strict_overflow_p)
-{
-  if (vr->varying_p () || vr->undefined_p ())
-    return NULL_TREE;
-
-  /* Anti-ranges need to be handled separately.  */
-  tree min, max;
-  if (get_legacy_range (*vr, min, max) == VR_ANTI_RANGE)
-    {
-      /* For anti-ranges, the only predicates that we can compute at
-	 compile time are equality and inequality.  */
-      if (comp == GT_EXPR
-	  || comp == GE_EXPR
-	  || comp == LT_EXPR
-	  || comp == LE_EXPR)
-	return NULL_TREE;
-
-      /* ~[VAL_1, VAL_2] OP VAL is known if VAL_1 <= VAL <= VAL_2.  */
-      bool contains_p = TREE_CODE (val) != INTEGER_CST || vr->contains_p (val);
-      if (!contains_p)
-	return (comp == NE_EXPR) ? boolean_true_node : boolean_false_node;
-
-      return NULL_TREE;
-    }
-
-  if (comp == EQ_EXPR)
-    {
-      /* EQ_EXPR may only be computed if VR represents exactly
-	 one value.  */
-      if (compare_values_warnv (min, max, strict_overflow_p) == 0)
-	{
-	  int cmp = compare_values_warnv (min, val, strict_overflow_p);
-	  if (cmp == 0)
-	    return boolean_true_node;
-	  else if (cmp == -1 || cmp == 1 || cmp == 2)
-	    return boolean_false_node;
-	}
-      else if (compare_values_warnv (val, min, strict_overflow_p) == -1
-	       || compare_values_warnv (max, val, strict_overflow_p) == -1)
-	return boolean_false_node;
-
-      return NULL_TREE;
-    }
-  else if (comp == NE_EXPR)
-    {
-      /* If VAL is not inside VR, then they are always different.  */
-      if (compare_values_warnv (max, val, strict_overflow_p) == -1
-	  || compare_values_warnv (min, val, strict_overflow_p) == 1)
-	return boolean_true_node;
-
-      /* If VR represents exactly one value equal to VAL, then return
-	 false.  */
-      if (compare_values_warnv (min, max, strict_overflow_p) == 0
-	  && compare_values_warnv (min, val, strict_overflow_p) == 0)
-	return boolean_false_node;
-
-      /* Otherwise, they may or may not be different.  */
-      return NULL_TREE;
-    }
-  else if (comp == LT_EXPR || comp == LE_EXPR)
-    {
-      int tst;
-
-      /* If VR is to the left of VAL, return true.  */
-      tst = compare_values_warnv (max, val, strict_overflow_p);
-      if ((comp == LT_EXPR && tst == -1)
-	  || (comp == LE_EXPR && (tst == -1 || tst == 0)))
-	return boolean_true_node;
-
-      /* If VR is to the right of VAL, return false.  */
-      tst = compare_values_warnv (min, val, strict_overflow_p);
-      if ((comp == LT_EXPR && (tst == 0 || tst == 1))
-	  || (comp == LE_EXPR && tst == 1))
-	return boolean_false_node;
-
-      /* Otherwise, we don't know.  */
-      return NULL_TREE;
-    }
-  else if (comp == GT_EXPR || comp == GE_EXPR)
-    {
-      int tst;
-
-      /* If VR is to the right of VAL, return true.  */
-      tst = compare_values_warnv (min, val, strict_overflow_p);
-      if ((comp == GT_EXPR && tst == 1)
-	  || (comp == GE_EXPR && (tst == 0 || tst == 1)))
-	return boolean_true_node;
-
-      /* If VR is to the left of VAL, return false.  */
-      tst = compare_values_warnv (max, val, strict_overflow_p);
-      if ((comp == GT_EXPR && (tst == -1 || tst == 0))
-	  || (comp == GE_EXPR && tst == -1))
-	return boolean_false_node;
-
-      /* Otherwise, we don't know.  */
-      return NULL_TREE;
-    }
-
-  gcc_unreachable ();
-}
-
 static inline void
 fix_overflow (tree *min, tree *max)
 {
@@ -557,12 +285,12 @@ bounds_of_var_in_loop (tree *min, tree *max, range_query *query,
 
   /* Try to use estimated number of iterations for the loop to constrain the
      final value in the evolution.  */
-  tree rmin, rmax;
   if (TREE_CODE (step) == INTEGER_CST
       && is_gimple_val (init)
       && (TREE_CODE (init) != SSA_NAME
 	  || (query->range_of_expr (r, init, stmt)
-	      && get_legacy_range (r, rmin, rmax) == VR_RANGE)))
+	      && !r.varying_p ()
+	      && !r.undefined_p ())))
     {
       widest_int nit;
 
@@ -586,35 +314,29 @@ bounds_of_var_in_loop (tree *min, tree *max, range_query *query,
 		  || wi::gts_p (wtmp, 0) == wi::gts_p (wi::to_wide (step), 0)))
 	    {
 	      value_range maxvr, vr0, vr1;
-	      if (TREE_CODE (init) == SSA_NAME)
-		query->range_of_expr (vr0, init, stmt);
-	      else if (is_gimple_min_invariant (init))
-		vr0.set (init, init);
-	      else
+	      if (!query->range_of_expr (vr0, init, stmt))
 		vr0.set_varying (TREE_TYPE (init));
-	      tree tem = wide_int_to_tree (TREE_TYPE (init), wtmp);
-	      vr1.set (tem, tem);
+	      vr1.set (TREE_TYPE (init), wtmp, wtmp);
 
 	      range_op_handler handler (PLUS_EXPR, TREE_TYPE (init));
 	      if (!handler.fold_range (maxvr, TREE_TYPE (init), vr0, vr1))
 		maxvr.set_varying (TREE_TYPE (init));
-	      tree maxvr_min, maxvr_max;
-	      value_range_kind maxvr_kind
-		= get_legacy_range (maxvr, maxvr_min, maxvr_max);
 
 	      /* Likewise if the addition did.  */
-	      if (maxvr_kind == VR_RANGE)
+	      if (!maxvr.varying_p () && !maxvr.undefined_p ())
 		{
 		  int_range<2> initvr;
 
-		  if (TREE_CODE (init) == SSA_NAME)
-		    query->range_of_expr (initvr, init, stmt);
-		  else if (is_gimple_min_invariant (init))
-		    initvr.set (init, init);
-		  else
+		  if (!query->range_of_expr (initvr, init, stmt)
+		      || initvr.undefined_p ())
 		    return false;
 
 		  tree initvr_min, initvr_max;
+		  tree maxvr_type = maxvr.type ();
+		  tree maxvr_min = wide_int_to_tree (maxvr_type,
+						     maxvr.lower_bound ());
+		  tree maxvr_max = wide_int_to_tree (maxvr_type,
+						     maxvr.upper_bound ());
 		  get_legacy_range (initvr, initvr_min, initvr_max);
 
 		  /* Check if init + nit * step overflows.  Though we checked
@@ -649,40 +371,33 @@ bounds_of_var_in_loop (tree *min, tree *max, range_query *query,
    optimizers.  */
 
 tree
-simplify_using_ranges::vrp_evaluate_conditional_warnv_with_ops_using_ranges
-    (enum tree_code code, tree op0, tree op1, bool * strict_overflow_p,
-     gimple *s)
+simplify_using_ranges::fold_cond_with_ops (enum tree_code code,
+					   tree op0, tree op1, gimple *s)
 {
-  bool ssa0 = TREE_CODE (op0) == SSA_NAME;
-  bool ssa1 = TREE_CODE (op1) == SSA_NAME;
-  value_range vr0, vr1;
-  if (ssa0 && !query->range_of_expr (vr0, op0, s))
-    vr0.set_varying (TREE_TYPE (op0));
-  if (ssa1 && !query->range_of_expr (vr1, op1, s))
-    vr1.set_varying (TREE_TYPE (op1));
+  int_range_max r0, r1;
+  if (!query->range_of_expr (r0, op0, s)
+      || !query->range_of_expr (r1, op1, s))
+    return NULL_TREE;
 
-  tree res = NULL_TREE;
-  if (ssa0 && ssa1)
-    res = compare_ranges (code, &vr0, &vr1, strict_overflow_p);
-  if (!res && ssa0)
-    res = compare_range_with_value (code, &vr0, op1, strict_overflow_p);
-  if (!res && ssa1)
-    res = (compare_range_with_value
-	    (swap_tree_comparison (code), &vr1, op0, strict_overflow_p));
-  return res;
+  tree type = TREE_TYPE (op0);
+  int_range<1> res;
+  range_op_handler handler (code, type);
+  if (handler && handler.fold_range (res, type, r0, r1))
+    {
+      if (res == range_true (type))
+	return boolean_true_node;
+      if (res == range_false (type))
+	return boolean_false_node;
+    }
+  return NULL;
 }
 
 /* Helper function for legacy_fold_cond.  */
 
 tree
-simplify_using_ranges::legacy_fold_cond_overflow (gimple *stmt,
-						  bool *strict_overflow_p,
-						  bool *only_ranges)
+simplify_using_ranges::legacy_fold_cond_overflow (gimple *stmt)
 {
   tree ret;
-  if (only_ranges)
-    *only_ranges = true;
-
   tree_code code = gimple_cond_code (stmt);
   tree op0 = gimple_cond_lhs (stmt);
   tree op1 = gimple_cond_rhs (stmt);
@@ -760,11 +475,8 @@ simplify_using_ranges::legacy_fold_cond_overflow (gimple *stmt,
 	}
     }
 
-  if ((ret = vrp_evaluate_conditional_warnv_with_ops_using_ranges
-	       (code, op0, op1, strict_overflow_p, stmt)))
+  if ((ret = fold_cond_with_ops (code, op0, op1, stmt)))
     return ret;
-  if (only_ranges)
-    *only_ranges = false;
   return NULL_TREE;
 }
 
@@ -801,8 +513,7 @@ simplify_using_ranges::legacy_fold_cond (gcond *stmt, edge *taken_edge_p)
       fprintf (dump_file, "\n");
     }
 
-  bool sop;
-  val = legacy_fold_cond_overflow (stmt, &sop, NULL);
+  val = legacy_fold_cond_overflow (stmt);
   if (val)
     *taken_edge_p = find_taken_edge (gimple_bb (stmt), val);
 
@@ -1054,25 +765,8 @@ simplify_using_ranges::simplify_div_or_mod_using_ranges
     val = integer_one_node;
   else
     {
-      bool sop = false;
-
-      val = compare_range_with_value (GE_EXPR, &vr, integer_zero_node, &sop);
-
-      if (val
-	  && sop
-	  && integer_onep (val)
-	  && issue_strict_overflow_warning (WARN_STRICT_OVERFLOW_MISC))
-	{
-	  location_t location;
-
-	  if (!gimple_has_location (stmt))
-	    location = input_location;
-	  else
-	    location = gimple_location (stmt);
-	  warning_at (location, OPT_Wstrict_overflow,
-		      "assuming signed overflow does not occur when "
-		      "simplifying %</%> or %<%%%> to %<>>%> or %<&%>");
-	}
+      tree zero = build_zero_cst (TREE_TYPE (op0));
+      val = fold_cond_with_ops (GE_EXPR, op0, zero, stmt);
     }
 
   if (val && integer_onep (val))
@@ -1115,33 +809,14 @@ simplify_using_ranges::simplify_min_or_max_using_ranges
 {
   tree op0 = gimple_assign_rhs1 (stmt);
   tree op1 = gimple_assign_rhs2 (stmt);
-  bool sop = false;
   tree val;
 
-  val = (vrp_evaluate_conditional_warnv_with_ops_using_ranges
-	 (LE_EXPR, op0, op1, &sop, stmt));
+  val = fold_cond_with_ops (LE_EXPR, op0, op1, stmt);
   if (!val)
-    {
-      sop = false;
-      val = (vrp_evaluate_conditional_warnv_with_ops_using_ranges
-	     (LT_EXPR, op0, op1, &sop, stmt));
-    }
+    val = fold_cond_with_ops (LT_EXPR, op0, op1, stmt);
 
   if (val)
     {
-      if (sop && issue_strict_overflow_warning (WARN_STRICT_OVERFLOW_MISC))
-	{
-	  location_t location;
-
-	  if (!gimple_has_location (stmt))
-	    location = input_location;
-	  else
-	    location = gimple_location (stmt);
-	  warning_at (location, OPT_Wstrict_overflow,
-		      "assuming signed overflow does not occur when "
-		      "simplifying %<min/max (X,Y)%> to %<X%> or %<Y%>");
-	}
-
       /* VAL == TRUE -> OP0 < or <= op1
 	 VAL == FALSE -> OP0 > or >= op1.  */
       tree res = ((gimple_assign_rhs_code (stmt) == MAX_EXPR)
@@ -1162,52 +837,26 @@ simplify_using_ranges::simplify_abs_using_ranges (gimple_stmt_iterator *gsi,
 						  gimple *stmt)
 {
   tree op = gimple_assign_rhs1 (stmt);
-  value_range vr;
+  tree zero = build_zero_cst (TREE_TYPE (op));
+  tree val = fold_cond_with_ops (LE_EXPR, op, zero, stmt);
 
-  if (!query->range_of_expr (vr, op, stmt))
-    vr.set_undefined ();
-
-  if (!vr.undefined_p () && !vr.varying_p ())
+  if (!val)
     {
-      tree val = NULL;
-      bool sop = false;
-
-      val = compare_range_with_value (LE_EXPR, &vr, integer_zero_node, &sop);
-      if (!val)
-	{
-	  /* The range is neither <= 0 nor > 0.  Now see if it is
-	     either < 0 or >= 0.  */
-	  sop = false;
-	  val = compare_range_with_value (LT_EXPR, &vr, integer_zero_node,
-					  &sop);
-	}
-
-      if (val)
-	{
-	  if (sop && issue_strict_overflow_warning (WARN_STRICT_OVERFLOW_MISC))
-	    {
-	      location_t location;
-
-	      if (!gimple_has_location (stmt))
-		location = input_location;
-	      else
-		location = gimple_location (stmt);
-	      warning_at (location, OPT_Wstrict_overflow,
-			  "assuming signed overflow does not occur when "
-			  "simplifying %<abs (X)%> to %<X%> or %<-X%>");
-	    }
-
-	  gimple_assign_set_rhs1 (stmt, op);
-	  if (integer_zerop (val))
-	    gimple_assign_set_rhs_code (stmt, SSA_NAME);
-	  else
-	    gimple_assign_set_rhs_code (stmt, NEGATE_EXPR);
-	  update_stmt (stmt);
-	  fold_stmt (gsi, follow_single_use_edges);
-	  return true;
-	}
+      /* The range is neither <= 0 nor > 0.  Now see if it is
+	 either < 0 or >= 0.  */
+      val = fold_cond_with_ops (LT_EXPR, op, zero, stmt);
+    }
+  if (val)
+    {
+      gimple_assign_set_rhs1 (stmt, op);
+      if (integer_zerop (val))
+	gimple_assign_set_rhs_code (stmt, SSA_NAME);
+      else
+	gimple_assign_set_rhs_code (stmt, NEGATE_EXPR);
+      update_stmt (stmt);
+      fold_stmt (gsi, follow_single_use_edges);
+      return true;
     }
-
   return false;
 }
 
@@ -1252,24 +901,11 @@ simplify_using_ranges::simplify_bit_ops_using_ranges
   wide_int must_be_nonzero0, must_be_nonzero1;
   wide_int mask;
 
-  if (TREE_CODE (op0) == SSA_NAME)
-    {
-      if (!query->range_of_expr (vr0, op0, stmt))
-	vr0.set_varying (TREE_TYPE (op0));
-    }
-  else if (is_gimple_min_invariant (op0))
-    vr0.set (op0, op0);
-  else
+  if (!query->range_of_expr (vr0, op0, stmt)
+      || vr0.undefined_p ())
     return false;
-
-  if (TREE_CODE (op1) == SSA_NAME)
-    {
-      if (!query->range_of_expr (vr1, op1, stmt))
-	vr1.set_varying (TREE_TYPE (op1));
-    }
-  else if (is_gimple_min_invariant (op1))
-    vr1.set (op1, op1);
-  else
+  if (!query->range_of_expr (vr1, op1, stmt)
+      || vr1.undefined_p ())
     return false;
 
   if (!vr_set_zero_nonzero_bits (TREE_TYPE (op0), &vr0, &may_be_nonzero0,
@@ -1393,7 +1029,7 @@ test_for_singularity (enum tree_code cond_code, tree op0,
    by PRECISION and UNSIGNED_P.  */
 
 bool
-range_fits_type_p (const value_range *vr,
+range_fits_type_p (const irange *vr,
 		   unsigned dest_precision, signop dest_sgn)
 {
   tree src_type;
@@ -1417,27 +1053,28 @@ range_fits_type_p (const value_range *vr,
     return true;
 
   /* Now we can only handle ranges with constant bounds.  */
-  tree vrmin, vrmax;
-  value_range_kind kind = get_legacy_range (*vr, vrmin, vrmax);
-  if (kind != VR_RANGE)
+  if (vr->undefined_p () || vr->varying_p ())
     return false;
 
+  wide_int vrmin = vr->lower_bound ();
+  wide_int vrmax = vr->upper_bound ();
+
   /* For sign changes, the MSB of the wide_int has to be clear.
      An unsigned value with its MSB set cannot be represented by
      a signed wide_int, while a negative value cannot be represented
      by an unsigned wide_int.  */
   if (src_sgn != dest_sgn
-      && (wi::lts_p (wi::to_wide (vrmin), 0)
-	  || wi::lts_p (wi::to_wide (vrmax), 0)))
+      && (wi::lts_p (vrmin, 0) || wi::lts_p (vrmax, 0)))
     return false;
 
   /* Then we can perform the conversion on both ends and compare
      the result for equality.  */
-  tem = wi::ext (wi::to_widest (vrmin), dest_precision, dest_sgn);
-  if (tem != wi::to_widest (vrmin))
+  signop sign = TYPE_SIGN (vr->type ());
+  tem = wi::ext (widest_int::from (vrmin, sign), dest_precision, dest_sgn);
+  if (tem != widest_int::from (vrmin, sign))
     return false;
-  tem = wi::ext (wi::to_widest (vrmax), dest_precision, dest_sgn);
-  if (tem != wi::to_widest (vrmax))
+  tem = wi::ext (widest_int::from (vrmax, sign), dest_precision, dest_sgn);
+  if (tem != widest_int::from (vrmax, sign))
     return false;
 
   return true;
diff --git a/gcc/vr-values.h b/gcc/vr-values.h
index ff814155881..dc0c22df4d8 100644
--- a/gcc/vr-values.h
+++ b/gcc/vr-values.h
@@ -36,7 +36,8 @@ public:
   bool fold_cond (gcond *);
 private:
   void legacy_fold_cond (gcond *, edge *);
-  tree legacy_fold_cond_overflow (gimple *stmt, bool *, bool *);
+  tree legacy_fold_cond_overflow (gimple *stmt);
+  tree fold_cond_with_ops (tree_code, tree, tree, gimple *s);
   bool simplify_casted_cond (gcond *);
   bool simplify_truth_ops_using_ranges (gimple_stmt_iterator *, gimple *);
   bool simplify_div_or_mod_using_ranges (gimple_stmt_iterator *, gimple *);
@@ -51,9 +52,6 @@ private:
 
   bool two_valued_val_range_p (tree, tree *, tree *, gimple *);
   bool op_with_boolean_value_range_p (tree, gimple *);
-  tree vrp_evaluate_conditional_warnv_with_ops_using_ranges (enum tree_code,
-							     tree, tree,
-							     bool *, gimple *s);
   void set_and_propagate_unexecutable (edge e);
   void cleanup_edges_and_switches (void);
 
@@ -74,7 +72,7 @@ private:
   vec<edge> m_flag_set_edges;  // List of edges with flag to be cleared.
 };
 
-extern bool range_fits_type_p (const value_range *vr,
+extern bool range_fits_type_p (const irange *vr,
 			       unsigned dest_precision, signop dest_sgn);
 extern bool bounds_of_var_in_loop (tree *min, tree *max, range_query *,
 				   class loop *loop, gimple *stmt, tree var);
-- 
2.40.0



* [COMMITTED] Convert get_legacy_range in bounds_of_var_in_loop to irange API.
  2023-05-01  6:28 [COMMITTED] vrange_storage overhaul Aldy Hernandez
                   ` (2 preceding siblings ...)
  2023-05-01  6:28 ` [COMMITTED] Various cleanups in vr-values.cc towards ranger API Aldy Hernandez
@ 2023-05-01  6:28 ` Aldy Hernandez
  2023-05-01  6:29 ` [COMMITTED] Merge irange::union/intersect into irange_union/intersect Aldy Hernandez
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:28 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

gcc/ChangeLog:

	* vr-values.cc (bounds_of_var_in_loop): Convert to irange API.
---
 gcc/vr-values.cc | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/gcc/vr-values.cc b/gcc/vr-values.cc
index 7f623102ac6..3d28198f9f5 100644
--- a/gcc/vr-values.cc
+++ b/gcc/vr-values.cc
@@ -331,13 +331,16 @@ bounds_of_var_in_loop (tree *min, tree *max, range_query *query,
 		      || initvr.undefined_p ())
 		    return false;
 
-		  tree initvr_min, initvr_max;
+		  tree initvr_type = initvr.type ();
+		  tree initvr_min = wide_int_to_tree (initvr_type,
+						      initvr.lower_bound ());
+		  tree initvr_max = wide_int_to_tree (initvr_type,
+						      initvr.upper_bound ());
 		  tree maxvr_type = maxvr.type ();
 		  tree maxvr_min = wide_int_to_tree (maxvr_type,
 						     maxvr.lower_bound ());
 		  tree maxvr_max = wide_int_to_tree (maxvr_type,
 						     maxvr.upper_bound ());
-		  get_legacy_range (initvr, initvr_min, initvr_max);
 
 		  /* Check if init + nit * step overflows.  Though we checked
 		     scev {init, step}_loop doesn't wrap, it is not enough
-- 
2.40.0



* [COMMITTED] Merge irange::union/intersect into irange_union/intersect.
  2023-05-01  6:28 [COMMITTED] vrange_storage overhaul Aldy Hernandez
                   ` (3 preceding siblings ...)
  2023-05-01  6:28 ` [COMMITTED] Convert get_legacy_range in bounds_of_var_in_loop to irange API Aldy Hernandez
@ 2023-05-01  6:29 ` Aldy Hernandez
  2023-05-01  6:29 ` [COMMITTED] Conversion to irange wide_int API Aldy Hernandez
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:29 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

gcc/ChangeLog:

	* value-range.cc (irange::irange_union): Rename to...
	(irange::union_): ...this.
	(irange::irange_intersect): Rename to...
	(irange::intersect): ...this.
	* value-range.h (irange::union_): Delete.
	(irange::intersect): Delete.
---
 gcc/value-range.cc | 11 +++++++----
 gcc/value-range.h  | 14 --------------
 2 files changed, 7 insertions(+), 18 deletions(-)

diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index a0e49df28f3..69b214ecc06 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -1246,11 +1246,13 @@ irange::irange_single_pair_union (const irange &r)
   return true;
 }
 
-// union_ for multi-ranges.
+// Return TRUE if anything changes.
 
 bool
-irange::irange_union (const irange &r)
+irange::union_ (const vrange &v)
 {
+  const irange &r = as_a <irange> (v);
+
   if (r.undefined_p ())
     return false;
 
@@ -1415,11 +1417,12 @@ irange::irange_contains_p (const irange &r) const
 }
 
 
-// Intersect for multi-ranges.  Return TRUE if anything changes.
+// Return TRUE if anything changes.
 
 bool
-irange::irange_intersect (const irange &r)
+irange::intersect (const vrange &v)
 {
+  const irange &r = as_a <irange> (v);
   gcc_checking_assert (undefined_p () || r.undefined_p ()
 		       || range_compatible_p (type (), r.type ()));
 
diff --git a/gcc/value-range.h b/gcc/value-range.h
index 10c44c5c062..6d108154dc1 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -170,8 +170,6 @@ protected:
   irange (tree *, unsigned);
 
    // In-place operators.
-  bool irange_union (const irange &);
-  bool irange_intersect (const irange &);
   void irange_set (tree, tree);
   void irange_set_anti_range (tree, tree);
   bool irange_contains_p (const irange &) const;
@@ -903,18 +901,6 @@ irange::upper_bound () const
   return upper_bound (pairs - 1);
 }
 
-inline bool
-irange::union_ (const vrange &r)
-{
-  return irange_union (as_a <irange> (r));
-}
-
-inline bool
-irange::intersect (const vrange &r)
-{
-  return irange_intersect (as_a <irange> (r));
-}
-
 // Set value range VR to a nonzero range of type TYPE.
 
 inline void
-- 
2.40.0



* [COMMITTED] Conversion to irange wide_int API.
  2023-05-01  6:28 [COMMITTED] vrange_storage overhaul Aldy Hernandez
                   ` (4 preceding siblings ...)
  2023-05-01  6:29 ` [COMMITTED] Merge irange::union/intersect into irange_union/intersect Aldy Hernandez
@ 2023-05-01  6:29 ` Aldy Hernandez
  2023-05-01  6:29 ` [COMMITTED] Replace vrp_val* with wide_ints Aldy Hernandez
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:29 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

This converts the irange API, along with its users, to use wide_ints
exclusively.

This patch will temporarily slow down VRP, as it introduces additional,
unnecessary wide_int to tree conversions.  A follow-up patch will
convert the internal representation of irange to wide_ints as well,
yielding a net overall gain in performance.
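
As an illustration of the API shift (a minimal sketch assuming the
GCC-internal tree, wide_int and irange headers; "type" stands for some
integral tree type, and the snippet is not an excerpt from this patch),
a caller that previously built tree endpoints now passes the type and
wide_int endpoints directly:

  /* Before: endpoints and membership tests used trees.  */
  int_range<2> r1 (build_int_cst (type, 0), build_int_cst (type, 42));
  bool has_zero1 = r1.contains_p (build_zero_cst (type));

  /* After: endpoints are wide_ints and the type is passed explicitly.  */
  unsigned prec = TYPE_PRECISION (type);
  int_range<2> r2 (type, wi::zero (prec), wi::shwi (42, prec));
  bool has_zero2 = r2.contains_p (wi::zero (prec));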

gcc/ChangeLog:

	* fold-const.cc (expr_not_equal_to): Convert to irange wide_int API.
	* gimple-fold.cc (size_must_be_zero_p): Same.
	* gimple-loop-versioning.cc
	(loop_versioning::prune_loop_conditions): Same.
	* gimple-range-edge.cc (gcond_edge_range): Same.
	(gimple_outgoing_range::calc_switch_ranges): Same.
	* gimple-range-fold.cc (adjust_imagpart_expr): Same.
	(adjust_realpart_expr): Same.
	(fold_using_range::range_of_address): Same.
	(fold_using_range::relation_fold_and_or): Same.
	* gimple-range-gori.cc (gori_compute::gori_compute): Same.
	(range_is_either_true_or_false): Same.
	* gimple-range-op.cc (cfn_toupper_tolower::get_letter_range): Same.
	(cfn_clz::fold_range): Same.
	(cfn_ctz::fold_range): Same.
	* gimple-range-tests.cc (class test_expr_eval): Same.
	* gimple-ssa-warn-alloca.cc (alloca_call_type): Same.
	* ipa-cp.cc (ipa_value_range_from_jfunc): Same.
	(propagate_vr_across_jump_function): Same.
	(decide_whether_version_node): Same.
	* ipa-fnsummary.cc (evaluate_conditions_for_known_args): Same.
	* ipa-prop.cc (ipa_get_value_range): Same.
	* range-op.cc (get_shift_range): Same.
	(value_range_from_overflowed_bounds): Same.
	(value_range_with_overflow): Same.
	(create_possibly_reversed_range): Same.
	(equal_op1_op2_relation): Same.
	(not_equal_op1_op2_relation): Same.
	(lt_op1_op2_relation): Same.
	(le_op1_op2_relation): Same.
	(gt_op1_op2_relation): Same.
	(ge_op1_op2_relation): Same.
	(operator_mult::op1_range): Same.
	(operator_exact_divide::op1_range): Same.
	(operator_lshift::op1_range): Same.
	(operator_rshift::op1_range): Same.
	(operator_cast::op1_range): Same.
	(operator_logical_and::fold_range): Same.
	(set_nonzero_range_from_mask): Same.
	(operator_bitwise_or::op1_range): Same.
	(operator_bitwise_xor::op1_range): Same.
	(operator_addr_expr::fold_range): Same.
	(pointer_plus_operator::wi_fold): Same.
	(pointer_or_operator::op1_range): Same.
	(INT): Same.
	(UINT): Same.
	(INT16): Same.
	(UINT16): Same.
	(SCHAR): Same.
	(UCHAR): Same.
	(range_op_cast_tests): Same.
	(range_op_lshift_tests): Same.
	(range_op_rshift_tests): Same.
	(range_op_bitwise_and_tests): Same.
	(range_relational_tests): Same.
	* range.cc (range_zero): Same.
	(range_nonzero): Same.
	* range.h (range_true): Same.
	(range_false): Same.
	(range_true_and_false): Same.
	* tree-data-ref.cc (split_constant_offset_1): Same.
	* tree-ssa-loop-ch.cc (entry_loop_condition_is_static): Same.
	* tree-ssa-loop-unswitch.cc (struct unswitch_predicate): Same.
	(find_unswitching_predicates_for_bb): Same.
	* tree-ssa-phiopt.cc (value_replacement): Same.
	* tree-ssa-threadbackward.cc
	(back_threader::find_taken_edge_cond): Same.
	* tree-ssanames.cc (ssa_name_has_boolean_range): Same.
	* tree-vrp.cc (find_case_label_range): Same.
	* value-query.cc (range_query::get_tree_range): Same.
	* value-range.cc (irange::set_nonnegative): Same.
	(frange::contains_p): Same.
	(frange::singleton_p): Same.
	(frange::internal_singleton_p): Same.
	(irange::irange_set): Same.
	(irange::irange_set_1bit_anti_range): Same.
	(irange::irange_set_anti_range): Same.
	(irange::set): Same.
	(irange::operator==): Same.
	(irange::singleton_p): Same.
	(irange::contains_p): Same.
	(irange::set_range_from_nonzero_bits): Same.
	(DEFINE_INT_RANGE_INSTANCE): Same.
	(INT): Same.
	(UINT): Same.
	(SCHAR): Same.
	(UINT128): Same.
	(UCHAR): Same.
	(range): New.
	(tree_range): New.
	(range_int): New.
	(range_uint): New.
	(range_uint128): New.
	(range_uchar): New.
	(range_char): New.
	(build_range3): Convert to irange wide_int API.
	(range_tests_irange3): Same.
	(range_tests_int_range_max): Same.
	(range_tests_strict_enum): Same.
	(range_tests_misc): Same.
	(range_tests_nonzero_bits): Same.
	(range_tests_nan): Same.
	(range_tests_signed_zeros): Same.
	* value-range.h (Value_Range::Value_Range): Same.
	(irange::set): Same.
	(irange::nonzero_p): Same.
	(irange::contains_p): Same.
	(range_includes_zero_p): Same.
	(irange::set_nonzero): Same.
	(irange::set_zero): Same.
	(contains_zero_p): Same.
	(frange::contains_p): Same.
	* vr-values.cc
	(simplify_using_ranges::op_with_boolean_value_range_p): Same.
	(bounds_of_var_in_loop): Same.
	(simplify_using_ranges::legacy_fold_cond_overflow): Same.
---
 gcc/fold-const.cc              |   3 +-
 gcc/gimple-fold.cc             |   4 +-
 gcc/gimple-loop-versioning.cc  |   2 +-
 gcc/gimple-range-edge.cc       |  17 +-
 gcc/gimple-range-fold.cc       |  14 +-
 gcc/gimple-range-gori.cc       |   7 +-
 gcc/gimple-range-op.cc         |  42 +--
 gcc/gimple-range-tests.cc      |   9 +-
 gcc/gimple-ssa-warn-alloca.cc  |   5 +-
 gcc/ipa-cp.cc                  |  10 +-
 gcc/ipa-prop.cc                |   3 +-
 gcc/ipa-prop.h                 |   5 +-
 gcc/range-op.cc                | 263 +++++++++---------
 gcc/range.cc                   |   7 +-
 gcc/range.h                    |  14 +-
 gcc/tree-data-ref.cc           |   7 +-
 gcc/tree-ssa-loop-ch.cc        |   8 +-
 gcc/tree-ssa-loop-unswitch.cc  |  17 +-
 gcc/tree-ssa-phiopt.cc         |   3 +-
 gcc/tree-ssa-threadbackward.cc |   4 +-
 gcc/tree-ssanames.cc           |   5 +-
 gcc/tree-vrp.cc                |   8 +-
 gcc/value-query.cc             |  17 +-
 gcc/value-range.cc             | 472 ++++++++++++++++++++-------------
 gcc/value-range.h              |  75 ++++--
 gcc/vr-values.cc               |  27 +-
 26 files changed, 608 insertions(+), 440 deletions(-)

diff --git a/gcc/fold-const.cc b/gcc/fold-const.cc
index 7d2352dbcdd..db54bfc5662 100644
--- a/gcc/fold-const.cc
+++ b/gcc/fold-const.cc
@@ -10882,8 +10882,7 @@ expr_not_equal_to (tree t, const wide_int &w)
       else
 	get_global_range_query ()->range_of_expr (vr, t);
 
-      if (!vr.undefined_p ()
-	  && !vr.contains_p (wide_int_to_tree (TREE_TYPE (t), w)))
+      if (!vr.undefined_p () && !vr.contains_p (w))
 	return true;
       /* If T has some known zero bits and W has any of those bits set,
 	 then T is known not to be equal to W.  */
diff --git a/gcc/gimple-fold.cc b/gcc/gimple-fold.cc
index 1d0e4c32c40..581575b65ec 100644
--- a/gcc/gimple-fold.cc
+++ b/gcc/gimple-fold.cc
@@ -873,8 +873,8 @@ size_must_be_zero_p (tree size)
   /* Compute the value of SSIZE_MAX, the largest positive value that
      can be stored in ssize_t, the signed counterpart of size_t.  */
   wide_int ssize_max = wi::lshift (wi::one (prec), prec - 1) - 1;
-  value_range valid_range (build_int_cst (type, 0),
-			   wide_int_to_tree (type, ssize_max));
+  wide_int zero = wi::zero (TYPE_PRECISION (type));
+  value_range valid_range (type, zero, ssize_max);
   value_range vr;
   if (cfun)
     get_range_query (cfun)->range_of_expr (vr, size);
diff --git a/gcc/gimple-loop-versioning.cc b/gcc/gimple-loop-versioning.cc
index 640bb28016f..7b55129baa7 100644
--- a/gcc/gimple-loop-versioning.cc
+++ b/gcc/gimple-loop-versioning.cc
@@ -1476,7 +1476,7 @@ loop_versioning::prune_loop_conditions (class loop *loop)
       gimple *stmt = first_stmt (loop->header);
 
       if (get_range_query (cfun)->range_of_expr (r, name, stmt)
-	  && !r.contains_p (build_one_cst (TREE_TYPE (name))))
+	  && !r.contains_p (wi::one (TYPE_PRECISION (TREE_TYPE (name)))))
 	{
 	  if (dump_enabled_p ())
 	    dump_printf_loc (MSG_NOTE, find_loop_location (loop),
diff --git a/gcc/gimple-range-edge.cc b/gcc/gimple-range-edge.cc
index 22fb709c9b1..5fc7e791c1b 100644
--- a/gcc/gimple-range-edge.cc
+++ b/gcc/gimple-range-edge.cc
@@ -59,9 +59,9 @@ gcond_edge_range (irange &r, edge e)
 {
   gcc_checking_assert (e->flags & (EDGE_TRUE_VALUE | EDGE_FALSE_VALUE));
   if (e->flags & EDGE_TRUE_VALUE)
-    r = int_range<2> (boolean_true_node, boolean_true_node);
+    r = range_true ();
   else
-    r = int_range<2> (boolean_false_node, boolean_false_node);
+    r = range_false ();
 }
 
 
@@ -136,19 +136,22 @@ gimple_outgoing_range::calc_switch_ranges (gswitch *sw)
       if (e == default_edge)
 	continue;
 
-      tree low = CASE_LOW (gimple_switch_label (sw, x));
-      tree high = CASE_HIGH (gimple_switch_label (sw, x));
-      if (!high)
+      wide_int low = wi::to_wide (CASE_LOW (gimple_switch_label (sw, x)));
+      wide_int high;
+      tree tree_high = CASE_HIGH (gimple_switch_label (sw, x));
+      if (tree_high)
+	high = wi::to_wide (tree_high);
+      else
 	high = low;
 
       // Remove the case range from the default case.
-      int_range_max def_range (low, high);
+      int_range_max def_range (type, low, high);
       range_cast (def_range, type);
       def_range.invert ();
       default_range.intersect (def_range);
 
       // Create/union this case with anything on else on the edge.
-      int_range_max case_range (low, high);
+      int_range_max case_range (type, low, high);
       range_cast (case_range, type);
       vrange_storage *&slot = m_edge_table->get_or_insert (e, &existed);
       if (existed)
diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 180f349eda9..62875a35038 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -404,7 +404,8 @@ adjust_imagpart_expr (vrange &res, const gimple *stmt)
       tree cst = gimple_assign_rhs1 (def_stmt);
       if (TREE_CODE (cst) == COMPLEX_CST)
 	{
-	  int_range<2> imag (TREE_IMAGPART (cst), TREE_IMAGPART (cst));
+	  wide_int w = wi::to_wide (TREE_IMAGPART (cst));
+	  int_range<1> imag (TREE_TYPE (TREE_IMAGPART (cst)), w, w);
 	  res.intersect (imag);
 	}
     }
@@ -430,8 +431,8 @@ adjust_realpart_expr (vrange &res, const gimple *stmt)
       tree cst = gimple_assign_rhs1 (def_stmt);
       if (TREE_CODE (cst) == COMPLEX_CST)
 	{
-	  tree imag = TREE_REALPART (cst);
-	  int_range<2> tmp (imag, imag);
+	  wide_int imag = wi::to_wide (TREE_REALPART (cst));
+	  int_range<2> tmp (TREE_TYPE (TREE_REALPART (cst)), imag, imag);
 	  res.intersect (tmp);
 	}
     }
@@ -689,7 +690,8 @@ fold_using_range::range_of_address (irange &r, gimple *stmt, fur_source &src)
 	{
 	  /* For -fdelete-null-pointer-checks -fno-wrapv-pointer we don't
 	     allow going from non-NULL pointer to NULL.  */
-	  if (r.undefined_p () || !r.contains_p (build_zero_cst (r.type ())))
+	  if (r.undefined_p ()
+	      || !r.contains_p (wi::zero (TYPE_PRECISION (TREE_TYPE (expr)))))
 	    {
 	      /* We could here instead adjust r by off >> LOG2_BITS_PER_UNIT
 		 using POINTER_PLUS_EXPR if off_cst and just fall back to
@@ -1069,7 +1071,7 @@ fold_using_range::relation_fold_and_or (irange& lhs_range, gimple *s,
   else if (ssa1_dep1 != ssa2_dep2 || ssa1_dep2 != ssa2_dep1)
     return;
 
-  int_range<2> bool_one (boolean_true_node, boolean_true_node);
+  int_range<2> bool_one = range_true ();
 
   relation_kind relation1 = handler1.op1_op2_relation (bool_one);
   relation_kind relation2 = handler2.op1_op2_relation (bool_one);
@@ -1081,7 +1083,7 @@ fold_using_range::relation_fold_and_or (irange& lhs_range, gimple *s,
 
   // x && y is false if the relation intersection of the true cases is NULL.
   if (is_and && relation_intersect (relation1, relation2) == VREL_UNDEFINED)
-    lhs_range = int_range<2> (boolean_false_node, boolean_false_node);
+    lhs_range = range_false (boolean_type_node);
   // x || y is true if the union of the true cases is NO-RELATION..
   // ie, one or the other being true covers the full range of possibilities.
   else if (!is_and && relation_union (relation1, relation2) == VREL_VARYING)
diff --git a/gcc/gimple-range-gori.cc b/gcc/gimple-range-gori.cc
index 9d0cc97bf8c..a1c8d51e484 100644
--- a/gcc/gimple-range-gori.cc
+++ b/gcc/gimple-range-gori.cc
@@ -562,8 +562,8 @@ gori_compute::gori_compute (int not_executable_flag)
 {
   m_not_executable_flag = not_executable_flag;
   // Create a boolean_type true and false range.
-  m_bool_zero = int_range<2> (boolean_false_node, boolean_false_node);
-  m_bool_one = int_range<2> (boolean_true_node, boolean_true_node);
+  m_bool_zero = range_false ();
+  m_bool_one = range_true ();
   if (dump_file && (param_ranger_debug & RANGER_DEBUG_GORI))
     tracer.enable_trace ();
 }
@@ -731,7 +731,8 @@ range_is_either_true_or_false (const irange &r)
   // so true can be ~[0, 0] (i.e. [1,MAX]).
   tree type = r.type ();
   gcc_checking_assert (range_compatible_p (type, boolean_type_node));
-  return (r.singleton_p () || !r.contains_p (build_zero_cst (type)));
+  return (r.singleton_p ()
+	  || !r.contains_p (wi::zero (TYPE_PRECISION (type))));
 }
 
 // Evaluate a binary logical expression by combining the true and
diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index 6fa26f5d3a2..29c7c776a2c 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -276,7 +276,8 @@ public:
   {
     if (lh.singleton_p ())
       {
-	r.set (build_one_cst (type), build_one_cst (type));
+	wide_int one = wi::one (TYPE_PRECISION (type));
+	r.set (type, one, one);
 	return true;
       }
     if (cfun->after_inlining)
@@ -298,7 +299,8 @@ public:
   {
     if (lh.singleton_p ())
       {
-	r.set (build_one_cst (type), build_one_cst (type));
+	wide_int one = wi::one (TYPE_PRECISION (type));
+	r.set (type, one, one);
 	return true;
       }
     if (cfun->after_inlining)
@@ -359,7 +361,7 @@ public:
 	r.update_nan (false);
 	return true;
       }
-    if (!lhs.contains_p (build_zero_cst (lhs.type ())))
+    if (!lhs.contains_p (wi::zero (TYPE_PRECISION (lhs.type ()))))
       {
 	r.set (type, frange_val_min (type), dconstm0);
 	r.update_nan (true);
@@ -589,8 +591,12 @@ cfn_toupper_tolower::get_letter_range (tree type, irange &lowers,
 
   if ((z - a == 25) && (Z - A == 25))
     {
-      lowers = int_range<2> (build_int_cst (type, a), build_int_cst (type, z));
-      uppers = int_range<2> (build_int_cst (type, A), build_int_cst (type, Z));
+      lowers = int_range<2> (type,
+			     wi::shwi (a, TYPE_PRECISION (type)),
+			     wi::shwi (z, TYPE_PRECISION (type)));
+      uppers = int_range<2> (type,
+			     wi::shwi (A, TYPE_PRECISION (type)),
+			     wi::shwi (Z, TYPE_PRECISION (type)));
       return true;
     }
   // Unknown character set.
@@ -648,7 +654,7 @@ public:
       range_cast (tmp, unsigned_type_for (tmp.type ()));
     wide_int max = tmp.upper_bound ();
     maxi = wi::floor_log2 (max) + 1;
-    r.set (build_int_cst (type, mini), build_int_cst (type, maxi));
+    r.set (type, wi::shwi (mini, prec), wi::shwi (maxi, prec));
     return true;
   }
 } op_cfn_ffs;
@@ -753,7 +759,9 @@ cfn_clz::fold_range (irange &r, tree type, const irange &lh,
 
   if (mini == -2)
     return false;
-  r.set (build_int_cst (type, mini), build_int_cst (type, maxi));
+  r.set (type,
+	 wi::shwi (mini, TYPE_PRECISION (type)),
+	 wi::shwi (maxi, TYPE_PRECISION (type)));
   return true;
 }
 
@@ -823,7 +831,9 @@ cfn_ctz::fold_range (irange &r, tree type, const irange &lh,
 
   if (mini == -2)
     return false;
-  r.set (build_int_cst (type, mini), build_int_cst (type, maxi));
+  r.set (type,
+	 wi::shwi (mini, TYPE_PRECISION (type)),
+	 wi::shwi (maxi, TYPE_PRECISION (type)));
   return true;
 }
 
@@ -839,7 +849,9 @@ public:
     if (lh.undefined_p ())
       return false;
     int prec = TYPE_PRECISION (lh.type ());
-    r.set (build_int_cst (type, 0), build_int_cst (type, prec - 1));
+    r.set (type,
+	   wi::zero (TYPE_PRECISION (type)),
+	   wi::shwi (prec - 1, TYPE_PRECISION (type)));
     return true;
   }
 } op_cfn_clrsb;
@@ -891,14 +903,12 @@ public:
     tree max = vrp_val_max (ptrdiff_type_node);
     wide_int wmax
       = wi::to_wide (max, TYPE_PRECISION (TREE_TYPE (max)));
-    tree range_min = build_zero_cst (type);
     // To account for the terminating NULL, the maximum length
     // is one less than the maximum array size, which in turn
     // is one less than PTRDIFF_MAX (or SIZE_MAX where it's
     // smaller than the former type).
     // FIXME: Use max_object_size() - 1 here.
-    tree range_max = wide_int_to_tree (type, wmax - 2);
-    r.set (range_min, range_max);
+    r.set (type, wi::zero (TYPE_PRECISION (type)), wmax - 2);
     return true;
   }
 } op_cfn_strlen;
@@ -922,9 +932,11 @@ public:
       // If it's dynamic, the backend might know a hardware limitation.
       size = targetm.goacc.dim_limit (axis);
 
-    r.set (build_int_cst (type, m_is_pos ? 0 : 1),
+    r.set (type,
+	   wi::shwi (m_is_pos ? 0 : 1, TYPE_PRECISION (type)),
 	   size
-	   ? build_int_cst (type, size - m_is_pos) : vrp_val_max (type));
+	   ? wi::shwi (size - m_is_pos, TYPE_PRECISION (type))
+	   : wi::to_wide (vrp_val_max (type)));
     return true;
   }
 private:
@@ -940,7 +952,7 @@ public:
   virtual bool fold_range (irange &r, tree type, const irange &,
 			   const irange &, relation_trio) const
   {
-    r.set (build_zero_cst (type), build_one_cst (type));
+    r = range_true_and_false (type);
     return true;
   }
 } op_cfn_parity;
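
The builtin folders above (ffs, clz, ctz, clrsb, parity, ...) now build their
literal bounds with wi::shwi/wi::uhwi at the result type's precision rather
than with build_int_cst.  A wide_int carries no type of its own, so the
precision must be supplied explicitly and the type is passed separately to
irange::set.  A small sketch of the pattern (illustrative only; the bounds
are hypothetical):

  unsigned prec = TYPE_PRECISION (type);
  HOST_WIDE_INT mini = 0, maxi = prec - 1;   // hypothetical computed bounds
  r.set (type, wi::shwi (mini, prec), wi::shwi (maxi, prec));
  // Unsigned constants use wi::uhwi at the same precision:
  r.set (type, wi::uhwi (0, prec), wi::uhwi (255, prec));
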
diff --git a/gcc/gimple-range-tests.cc b/gcc/gimple-range-tests.cc
index 7e4d234ddda..c325a7bcebd 100644
--- a/gcc/gimple-range-tests.cc
+++ b/gcc/gimple-range-tests.cc
@@ -35,7 +35,9 @@ public:
 
     // [5,10] + [15,20] => [20, 30]
     tree expr = fold_build2 (PLUS_EXPR, type, op0, op1);
-    int_range<2> expect (build_int_cst (type, 20), build_int_cst (type, 30));
+    int_range<1> expect (type,
+			 wi::shwi (20, TYPE_PRECISION (type)),
+			 wi::shwi (30, TYPE_PRECISION (type)));
     int_range_max r;
 
     ASSERT_TRUE (range_of_expr (r, expr));
@@ -45,14 +47,15 @@ public:
   virtual bool range_of_expr (vrange &v, tree expr, gimple * = NULL) override
   {
     irange &r = as_a <irange> (v);
+    unsigned prec = TYPE_PRECISION (type);
     if (expr == op0)
       {
-	r.set (build_int_cst (type, 5), build_int_cst (type, 10));
+	r.set (type, wi::shwi (5, prec), wi::shwi (10, prec));
 	return true;
       }
     if (expr == op1)
       {
-	r.set (build_int_cst (type, 15), build_int_cst (type, 20));
+	r.set (type, wi::shwi (15, prec), wi::shwi (20, prec));
 	return true;
       }
     return gimple_ranger::range_of_expr (r, expr);
diff --git a/gcc/gimple-ssa-warn-alloca.cc b/gcc/gimple-ssa-warn-alloca.cc
index 4374f572cd9..c129aca16e2 100644
--- a/gcc/gimple-ssa-warn-alloca.cc
+++ b/gcc/gimple-ssa-warn-alloca.cc
@@ -222,8 +222,9 @@ alloca_call_type (gimple *stmt, bool is_vla)
       && !r.varying_p ())
     {
       // The invalid bits are anything outside of [0, MAX_SIZE].
-      int_range<2> invalid_range (build_int_cst (size_type_node, 0),
-				  build_int_cst (size_type_node, max_size),
+      int_range<1> invalid_range (size_type_node,
+				  wi::shwi (0, TYPE_PRECISION (size_type_node)),
+				  wi::shwi (max_size, TYPE_PRECISION (size_type_node)),
 				  VR_ANTI_RANGE);
 
       r.intersect (invalid_range);
diff --git a/gcc/ipa-cp.cc b/gcc/ipa-cp.cc
index a5b45a8e6b9..1f5e0e13872 100644
--- a/gcc/ipa-cp.cc
+++ b/gcc/ipa-cp.cc
@@ -1943,8 +1943,9 @@ ipa_value_range_from_jfunc (ipa_node_params *info, cgraph_edge *cs,
       if (!(*sum->m_vr)[idx].known)
 	return vr;
       tree vr_type = ipa_get_type (info, idx);
-      value_range srcvr (wide_int_to_tree (vr_type, (*sum->m_vr)[idx].min),
-			 wide_int_to_tree (vr_type, (*sum->m_vr)[idx].max),
+      value_range srcvr (vr_type,
+			 (*sum->m_vr)[idx].min,
+			 (*sum->m_vr)[idx].max,
 			 (*sum->m_vr)[idx].type);
 
       enum tree_code operation = ipa_get_jf_pass_through_operation (jfunc);
@@ -2799,7 +2800,8 @@ propagate_vr_across_jump_function (cgraph_edge *cs, ipa_jump_func *jfunc,
 	  if (TREE_OVERFLOW_P (val))
 	    val = drop_tree_overflow (val);
 
-	  value_range tmpvr (val, val);
+	  value_range tmpvr (TREE_TYPE (val),
+			     wi::to_wide (val), wi::to_wide (val));
 	  return dest_lat->meet_with (&tmpvr);
 	}
     }
@@ -6204,7 +6206,7 @@ decide_about_value (struct cgraph_node *node, int index, HOST_WIDE_INT offset,
    necessary.  */
 
 static inline bool
-ipa_range_contains_p (const irange &r, tree val)
+ipa_range_contains_p (const vrange &r, tree val)
 {
   if (r.undefined_p ())
     return false;
diff --git a/gcc/ipa-prop.cc b/gcc/ipa-prop.cc
index c6d4585aed1..0f3cb3dd9f9 100644
--- a/gcc/ipa-prop.cc
+++ b/gcc/ipa-prop.cc
@@ -2220,7 +2220,8 @@ ipa_get_value_range (value_range *tmp)
 static value_range *
 ipa_get_value_range (enum value_range_kind kind, tree min, tree max)
 {
-  value_range tmp (min, max, kind);
+  value_range tmp (TREE_TYPE (min),
+		   wi::to_wide (min), wi::to_wide (max), kind);
   return ipa_get_value_range (&tmp);
 }
 
diff --git a/gcc/ipa-prop.h b/gcc/ipa-prop.h
index 93785a6a8e6..d4936d4eaff 100644
--- a/gcc/ipa-prop.h
+++ b/gcc/ipa-prop.h
@@ -1208,7 +1208,10 @@ inline void
 ipa_range_set_and_normalize (irange &r, tree val)
 {
   if (TREE_CODE (val) == INTEGER_CST)
-    r.set (val, val);
+    {
+      wide_int w = wi::to_wide (val);
+      r.set (TREE_TYPE (val), w, w);
+    }
   else if (TREE_CODE (val) == ADDR_EXPR)
     r.set_nonzero (TREE_TYPE (val));
   else
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 215a1613b38..224a561c170 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -119,8 +119,9 @@ get_shift_range (irange &r, tree type, const irange &op)
     return false;
 
   // Build valid range and intersect it with the shift range.
-  r = value_range (build_int_cst_type (op.type (), 0),
-		   build_int_cst_type (op.type (), TYPE_PRECISION (type) - 1));
+  r = value_range (op.type (),
+		   wi::shwi (0, TYPE_PRECISION (op.type ())),
+		   wi::shwi (TYPE_PRECISION (type) - 1, TYPE_PRECISION (op.type ())));
   r.intersect (op);
 
   // If there are no valid ranges in the shift range, return false.
@@ -414,11 +415,7 @@ value_range_from_overflowed_bounds (irange &r, tree type,
   if (covers || wi::cmp (tmin, tmax, sgn) > 0)
     r.set_varying (type);
   else
-    {
-      tree tree_min = wide_int_to_tree (type, tmin);
-      tree tree_max = wide_int_to_tree (type, tmax);
-      r.set (tree_min, tree_max, VR_ANTI_RANGE);
-    }
+    r.set (type, tmin, tmax, VR_ANTI_RANGE);
 }
 
 // Create and return a range from a pair of wide-ints.  MIN_OVF and
@@ -458,8 +455,7 @@ value_range_with_overflow (irange &r, tree type,
 	  else
 	    // No overflow or both overflow or underflow.  The range
 	    // kind stays normal.
-	    r.set (wide_int_to_tree (type, tmin),
-		   wide_int_to_tree (type, tmax));
+	    r.set (type, tmin, tmax);
 	  return;
 	}
 
@@ -497,8 +493,7 @@ value_range_with_overflow (irange &r, tree type,
       else
         new_ub = wmax;
 
-      r.set (wide_int_to_tree (type, new_lb),
-	     wide_int_to_tree (type, new_ub));
+      r.set (type, new_lb, new_ub);
     }
 }
 
@@ -516,7 +511,7 @@ create_possibly_reversed_range (irange &r, tree type,
     value_range_from_overflowed_bounds (r, type, new_lb, new_ub);
   else
     // Otherwise it's just a normal range.
-    r.set (wide_int_to_tree (type, new_lb), wide_int_to_tree (type, new_ub));
+    r.set (type, new_lb, new_ub);
 }
 
 // Return the summary information about boolean range LHS.  If EMPTY/FULL,
@@ -581,7 +576,7 @@ equal_op1_op2_relation (const irange &lhs)
     return VREL_NE;
 
   // TRUE = op1 == op2 indicates EQ_EXPR.
-  if (!lhs.contains_p (build_zero_cst (lhs.type ())))
+  if (lhs.undefined_p () || !contains_zero_p (lhs))
     return VREL_EQ;
   return VREL_VARYING;
 }
@@ -701,7 +696,7 @@ not_equal_op1_op2_relation (const irange &lhs)
     return VREL_EQ;
 
   // TRUE = op1 != op2  indicates NE_EXPR.
-  if (!lhs.contains_p (build_zero_cst (lhs.type ())))
+  if (lhs.undefined_p () || !contains_zero_p (lhs))
     return VREL_NE;
   return VREL_VARYING;
 }
@@ -881,7 +876,7 @@ lt_op1_op2_relation (const irange &lhs)
     return VREL_GE;
 
   // TRUE = op1 < op2 indicates LT_EXPR.
-  if (!lhs.contains_p (build_zero_cst (lhs.type ())))
+  if (lhs.undefined_p () || !contains_zero_p (lhs))
     return VREL_LT;
   return VREL_VARYING;
 }
@@ -1001,7 +996,7 @@ le_op1_op2_relation (const irange &lhs)
     return VREL_GT;
 
   // TRUE = op1 <= op2 indicates LE_EXPR.
-  if (!lhs.contains_p (build_zero_cst (lhs.type ())))
+  if (lhs.undefined_p () || !contains_zero_p (lhs))
     return VREL_LE;
   return VREL_VARYING;
 }
@@ -1118,7 +1113,7 @@ gt_op1_op2_relation (const irange &lhs)
     return VREL_LE;
 
   // TRUE = op1 > op2 indicates GT_EXPR.
-  if (!lhs.contains_p (build_zero_cst (lhs.type ())))
+  if (!contains_zero_p (lhs))
     return VREL_GT;
   return VREL_VARYING;
 }
@@ -1234,7 +1229,7 @@ ge_op1_op2_relation (const irange &lhs)
     return VREL_LT;
 
   // TRUE = op1 >= op2 indicates GE_EXPR.
-  if (!lhs.contains_p (build_zero_cst (lhs.type ())))
+  if (!contains_zero_p (lhs))
     return VREL_GE;
   return VREL_VARYING;
 }
@@ -1963,7 +1958,6 @@ operator_mult::op1_range (irange &r, tree type,
 			  const irange &lhs, const irange &op2,
 			  relation_trio) const
 {
-  tree offset;
   if (lhs.undefined_p ())
     return false;
 
@@ -1973,7 +1967,8 @@ operator_mult::op1_range (irange &r, tree type,
   if (TYPE_OVERFLOW_WRAPS (type))
     return false;
 
-  if (op2.singleton_p (&offset) && !integer_zerop (offset))
+  wide_int offset;
+  if (op2.singleton_p (offset) && offset != 0)
     return range_op_handler (TRUNC_DIV_EXPR, type).fold_range (r, type,
 							       lhs, op2);
   return false;
@@ -2284,15 +2279,14 @@ operator_exact_divide::op1_range (irange &r, tree type,
 {
   if (lhs.undefined_p ())
     return false;
-  tree offset;
+  wide_int offset;
   // [2, 4] = op1 / [3,3]   since it's an exact divide, no need to worry about
   // remainders in the endpoints, so op1 = [2,4] * [3,3] = [6,12].
   // We won't bother trying to enumerate all the in between stuff :-P
   // TRUE accuracy is [6,6][9,9][12,12].  This is unlikely to matter most of
   // the time however.
   // If op2 is a multiple of 2, we would be able to set some non-zero bits.
-  if (op2.singleton_p (&offset)
-      && !integer_zerop (offset))
+  if (op2.singleton_p (offset) && offset != 0)
     return range_op_handler (MULT_EXPR, type).fold_range (r, type, lhs, op2);
   return false;
 }
@@ -2495,16 +2489,15 @@ operator_lshift::op1_range (irange &r,
 {
   if (lhs.undefined_p ())
     return false;
-  tree shift_amount;
 
-  if (!lhs.contains_p (build_zero_cst (type)))
+  if (!contains_zero_p (lhs))
     r.set_nonzero (type);
   else
     r.set_varying (type);
 
-  if (op2.singleton_p (&shift_amount))
+  wide_int shift;
+  if (op2.singleton_p (shift))
     {
-      wide_int shift = wi::to_wide (shift_amount);
       if (wi::lt_p (shift, 0, SIGNED))
 	return false;
       if (wi::ge_p (shift, wi::uhwi (TYPE_PRECISION (type),
@@ -2541,8 +2534,7 @@ operator_lshift::op1_range (irange &r,
       // This would be [0x42, 0xFC] aka [01000010, 11111100].
 
       // Ideally we do this for each subrange, but just lump them all for now.
-      unsigned low_bits = TYPE_PRECISION (utype)
-			  - TREE_INT_CST_LOW (shift_amount);
+      unsigned low_bits = TYPE_PRECISION (utype) - shift.to_uhwi ();
       wide_int up_mask = wi::mask (low_bits, true, TYPE_PRECISION (utype));
       wide_int new_ub = wi::bit_or (up_mask, tmp_range.upper_bound ());
       wide_int new_lb = wi::set_bit (tmp_range.lower_bound (), low_bits);
@@ -2566,18 +2558,18 @@ operator_rshift::op1_range (irange &r,
 			    const irange &op2,
 			    relation_trio) const
 {
-  tree shift;
   if (lhs.undefined_p ())
     return false;
-  if (op2.singleton_p (&shift))
+  wide_int shift;
+  if (op2.singleton_p (shift))
     {
       // Ignore nonsensical shifts.
       unsigned prec = TYPE_PRECISION (type);
-      if (wi::ge_p (wi::to_wide (shift),
-		    wi::uhwi (prec, TYPE_PRECISION (TREE_TYPE (shift))),
+      if (wi::ge_p (shift,
+		    wi::uhwi (prec, TYPE_PRECISION (op2.type ())),
 		    UNSIGNED))
 	return false;
-      if (wi::to_wide (shift) == 0)
+      if (shift == 0)
 	{
 	  r = lhs;
 	  return true;
@@ -2593,7 +2585,7 @@ operator_rshift::op1_range (irange &r,
 	  r.set_undefined ();
 	  return true;
 	}
-      int_range_max shift_range (shift, shift);
+      int_range_max shift_range (op2.type (), shift, shift);
       int_range_max lb, ub;
       op_lshift.fold_range (lb, type, lhs_refined, shift_range);
       //    LHS
@@ -2605,12 +2597,14 @@ operator_rshift::op1_range (irange &r,
       tree mask = fold_build1 (BIT_NOT_EXPR, type,
 			       fold_build2 (LSHIFT_EXPR, type,
 					    build_minus_one_cst (type),
-					    shift));
-      int_range_max mask_range (build_zero_cst (type), mask);
+					    wide_int_to_tree (op2.type (), shift)));
+      int_range_max mask_range (type,
+				wi::zero (TYPE_PRECISION (type)),
+				wi::to_wide (mask));
       op_plus.fold_range (ub, type, lb, mask_range);
       r = lb;
       r.union_ (ub);
-      if (!lhs_refined.contains_p (build_zero_cst (type)))
+      if (!contains_zero_p (lhs_refined))
 	{
 	  mask_range.invert ();
 	  r.intersect (mask_range);
@@ -2853,7 +2847,7 @@ operator_cast::op1_range (irange &r, tree type,
 	{
 	  // If the LHS is not a pointer nor a singleton, then it is
 	  // either VARYING or non-zero.
-	  if (!lhs.contains_p (build_zero_cst (lhs.type ())))
+	  if (!contains_zero_p (lhs))
 	    r.set_nonzero (type);
 	  else
 	    r.set_varying (type);
@@ -2971,8 +2965,7 @@ operator_logical_and::fold_range (irange &r, tree type,
   if ((wi::eq_p (lh.lower_bound (), 0) && wi::eq_p (lh.upper_bound (), 0))
       || (wi::eq_p (lh.lower_bound (), 0) && wi::eq_p (rh.upper_bound (), 0)))
     r = range_false (type);
-  else if (lh.contains_p (build_zero_cst (lh.type ()))
-	   || rh.contains_p (build_zero_cst (rh.type ())))
+  else if (contains_zero_p (lh) || contains_zero_p (rh))
     // To reach this point, there must be a logical 1 on each side, and
     // the only remaining question is whether there is a zero or not.
     r = range_true_and_false (type);
@@ -3288,7 +3281,7 @@ operator_bitwise_and::wi_fold (irange &r, tree type,
 static void
 set_nonzero_range_from_mask (irange &r, tree type, const irange &lhs)
 {
-  if (!lhs.contains_p (build_zero_cst (type)))
+  if (!contains_zero_p (lhs))
     r = range_nonzero (type);
   else
     r.set_varying (type);
@@ -3605,8 +3598,7 @@ operator_bitwise_or::op1_range (irange &r, tree type,
 
   if (lhs.zero_p ())
     {
-      tree zero = build_zero_cst (type);
-      r = int_range<1> (zero, zero);
+      r.set_zero (type);
       return true;
     }
   r.set_varying (type);
@@ -3743,7 +3735,7 @@ operator_bitwise_xor::op1_range (irange &r, tree type,
 	  else if (op2.zero_p ())
 	    r = range_true (type);
 	  // See get_bool_state for the rationale
-	  else if (op2.contains_p (build_zero_cst (op2.type ())))
+	  else if (contains_zero_p (op2))
 	    r = range_true_and_false (type);
 	  else
 	    r = range_false (type);
@@ -4346,7 +4338,7 @@ operator_addr_expr::fold_range (irange &r, tree type,
   // Return a non-null pointer of the LHS type (passed in op2).
   if (lh.zero_p ())
     r = range_zero (type);
-  else if (!lh.contains_p (build_zero_cst (lh.type ())))
+  else if (!contains_zero_p (lh))
     r = range_nonzero (type);
   else
     r.set_varying (type);
@@ -4387,8 +4379,7 @@ pointer_plus_operator::wi_fold (irange &r, tree type,
   // Check for [0,0] + const, and simply return the const.
   if (lh_lb == 0 && lh_ub == 0 && rh_lb == rh_ub)
     {
-      tree val = wide_int_to_tree (type, rh_lb);
-      r.set (val, val);
+      r.set (type, rh_lb, rh_lb);
       return;
     }
 
@@ -4522,8 +4513,7 @@ pointer_or_operator::op1_range (irange &r, tree type,
     return false;
   if (lhs.zero_p ())
     {
-      tree zero = build_zero_cst (type);
-      r = int_range<1> (zero, zero);
+      r.set_zero (type);
       return true;
     }
   r.set_varying (type);
@@ -4880,112 +4870,120 @@ range_cast (vrange &r, tree type)
 
 namespace selftest
 {
-#define INT(N) build_int_cst (integer_type_node, (N))
-#define UINT(N) build_int_cstu (unsigned_type_node, (N))
-#define INT16(N) build_int_cst (short_integer_type_node, (N))
-#define UINT16(N) build_int_cstu (short_unsigned_type_node, (N))
-#define SCHAR(N) build_int_cst (signed_char_type_node, (N))
-#define UCHAR(N) build_int_cstu (unsigned_char_type_node, (N))
+#define INT(x) wi::shwi ((x), TYPE_PRECISION (integer_type_node))
+#define UINT(x) wi::uhwi ((x), TYPE_PRECISION (unsigned_type_node))
+#define INT16(x) wi::shwi ((x), TYPE_PRECISION (short_integer_type_node))
+#define UINT16(x) wi::uhwi ((x), TYPE_PRECISION (short_unsigned_type_node))
+#define SCHAR(x) wi::shwi ((x), TYPE_PRECISION (signed_char_type_node))
+#define UCHAR(x) wi::uhwi ((x), TYPE_PRECISION (unsigned_char_type_node))
 
 static void
 range_op_cast_tests ()
 {
   int_range<2> r0, r1, r2, rold;
   r0.set_varying (integer_type_node);
-  tree maxint = wide_int_to_tree (integer_type_node, r0.upper_bound ());
+  wide_int maxint = r0.upper_bound ();
 
   // If a range is in any way outside of the range for the converted
   // to range, default to the range for the new type.
   r0.set_varying (short_integer_type_node);
-  tree minshort = wide_int_to_tree (short_integer_type_node, r0.lower_bound ());
-  tree maxshort = wide_int_to_tree (short_integer_type_node, r0.upper_bound ());
-  if (TYPE_PRECISION (TREE_TYPE (maxint))
+  wide_int minshort = r0.lower_bound ();
+  wide_int maxshort = r0.upper_bound ();
+  if (TYPE_PRECISION (integer_type_node)
       > TYPE_PRECISION (short_integer_type_node))
     {
-      r1 = int_range<1> (integer_zero_node, maxint);
+      r1 = int_range<1> (integer_type_node,
+			 wi::zero (TYPE_PRECISION (integer_type_node)),
+			 maxint);
       range_cast (r1, short_integer_type_node);
-      ASSERT_TRUE (r1.lower_bound () == wi::to_wide (minshort)
-		   && r1.upper_bound() == wi::to_wide (maxshort));
+      ASSERT_TRUE (r1.lower_bound () == minshort
+		   && r1.upper_bound() == maxshort);
     }
 
   // (unsigned char)[-5,-1] => [251,255].
-  r0 = rold = int_range<1> (SCHAR (-5), SCHAR (-1));
+  r0 = rold = int_range<1> (signed_char_type_node, SCHAR (-5), SCHAR (-1));
   range_cast (r0, unsigned_char_type_node);
-  ASSERT_TRUE (r0 == int_range<1> (UCHAR (251), UCHAR (255)));
+  ASSERT_TRUE (r0 == int_range<1> (unsigned_char_type_node,
+				   UCHAR (251), UCHAR (255)));
   range_cast (r0, signed_char_type_node);
   ASSERT_TRUE (r0 == rold);
 
   // (signed char)[15, 150] => [-128,-106][15,127].
-  r0 = rold = int_range<1> (UCHAR (15), UCHAR (150));
+  r0 = rold = int_range<1> (unsigned_char_type_node, UCHAR (15), UCHAR (150));
   range_cast (r0, signed_char_type_node);
-  r1 = int_range<1> (SCHAR (15), SCHAR (127));
-  r2 = int_range<1> (SCHAR (-128), SCHAR (-106));
+  r1 = int_range<1> (signed_char_type_node, SCHAR (15), SCHAR (127));
+  r2 = int_range<1> (signed_char_type_node, SCHAR (-128), SCHAR (-106));
   r1.union_ (r2);
   ASSERT_TRUE (r1 == r0);
   range_cast (r0, unsigned_char_type_node);
   ASSERT_TRUE (r0 == rold);
 
   // (unsigned char)[-5, 5] => [0,5][251,255].
-  r0 = rold = int_range<1> (SCHAR (-5), SCHAR (5));
+  r0 = rold = int_range<1> (signed_char_type_node, SCHAR (-5), SCHAR (5));
   range_cast (r0, unsigned_char_type_node);
-  r1 = int_range<1> (UCHAR (251), UCHAR (255));
-  r2 = int_range<1> (UCHAR (0), UCHAR (5));
+  r1 = int_range<1> (unsigned_char_type_node, UCHAR (251), UCHAR (255));
+  r2 = int_range<1> (unsigned_char_type_node, UCHAR (0), UCHAR (5));
   r1.union_ (r2);
   ASSERT_TRUE (r0 == r1);
   range_cast (r0, signed_char_type_node);
   ASSERT_TRUE (r0 == rold);
 
   // (unsigned char)[-5,5] => [0,5][251,255].
-  r0 = int_range<1> (INT (-5), INT (5));
+  r0 = int_range<1> (integer_type_node, INT (-5), INT (5));
   range_cast (r0, unsigned_char_type_node);
-  r1 = int_range<1> (UCHAR (0), UCHAR (5));
-  r1.union_ (int_range<1> (UCHAR (251), UCHAR (255)));
+  r1 = int_range<1> (unsigned_char_type_node, UCHAR (0), UCHAR (5));
+  r1.union_ (int_range<1> (unsigned_char_type_node, UCHAR (251), UCHAR (255)));
   ASSERT_TRUE (r0 == r1);
 
   // (unsigned char)[5U,1974U] => [0,255].
-  r0 = int_range<1> (UINT (5), UINT (1974));
+  r0 = int_range<1> (unsigned_type_node, UINT (5), UINT (1974));
   range_cast (r0, unsigned_char_type_node);
-  ASSERT_TRUE (r0 == int_range<1> (UCHAR (0), UCHAR (255)));
+  ASSERT_TRUE (r0 == int_range<1> (unsigned_char_type_node, UCHAR (0), UCHAR (255)));
   range_cast (r0, integer_type_node);
   // Going to a wider range should not sign extend.
-  ASSERT_TRUE (r0 == int_range<1> (INT (0), INT (255)));
+  ASSERT_TRUE (r0 == int_range<1> (integer_type_node, INT (0), INT (255)));
 
   // (unsigned char)[-350,15] => [0,255].
-  r0 = int_range<1> (INT (-350), INT (15));
+  r0 = int_range<1> (integer_type_node, INT (-350), INT (15));
   range_cast (r0, unsigned_char_type_node);
   ASSERT_TRUE (r0 == (int_range<1>
-		      (TYPE_MIN_VALUE (unsigned_char_type_node),
-		       TYPE_MAX_VALUE (unsigned_char_type_node))));
+		      (unsigned_char_type_node,
+		       min_limit (unsigned_char_type_node),
+		       max_limit (unsigned_char_type_node))));
 
   // Casting [-120,20] from signed char to unsigned short.
   // => [0, 20][0xff88, 0xffff].
-  r0 = int_range<1> (SCHAR (-120), SCHAR (20));
+  r0 = int_range<1> (signed_char_type_node, SCHAR (-120), SCHAR (20));
   range_cast (r0, short_unsigned_type_node);
-  r1 = int_range<1> (UINT16 (0), UINT16 (20));
-  r2 = int_range<1> (UINT16 (0xff88), UINT16 (0xffff));
+  r1 = int_range<1> (short_unsigned_type_node, UINT16 (0), UINT16 (20));
+  r2 = int_range<1> (short_unsigned_type_node,
+		     UINT16 (0xff88), UINT16 (0xffff));
   r1.union_ (r2);
   ASSERT_TRUE (r0 == r1);
   // A truncating cast back to signed char will work because [-120, 20]
   // is representable in signed char.
   range_cast (r0, signed_char_type_node);
-  ASSERT_TRUE (r0 == int_range<1> (SCHAR (-120), SCHAR (20)));
+  ASSERT_TRUE (r0 == int_range<1> (signed_char_type_node,
+				   SCHAR (-120), SCHAR (20)));
 
   // unsigned char -> signed short
   //	(signed short)[(unsigned char)25, (unsigned char)250]
   // => [(signed short)25, (signed short)250]
-  r0 = rold = int_range<1> (UCHAR (25), UCHAR (250));
+  r0 = rold = int_range<1> (unsigned_char_type_node, UCHAR (25), UCHAR (250));
   range_cast (r0, short_integer_type_node);
-  r1 = int_range<1> (INT16 (25), INT16 (250));
+  r1 = int_range<1> (short_integer_type_node, INT16 (25), INT16 (250));
   ASSERT_TRUE (r0 == r1);
   range_cast (r0, unsigned_char_type_node);
   ASSERT_TRUE (r0 == rold);
 
   // Test casting a wider signed [-MIN,MAX] to a narrower unsigned.
-  r0 = int_range<1> (TYPE_MIN_VALUE (long_long_integer_type_node),
-	       TYPE_MAX_VALUE (long_long_integer_type_node));
+  r0 = int_range<1> (long_long_integer_type_node,
+		     min_limit (long_long_integer_type_node),
+		     max_limit (long_long_integer_type_node));
   range_cast (r0, short_unsigned_type_node);
-  r1 = int_range<1> (TYPE_MIN_VALUE (short_unsigned_type_node),
-	       TYPE_MAX_VALUE (short_unsigned_type_node));
+  r1 = int_range<1> (short_unsigned_type_node,
+		     min_limit (short_unsigned_type_node),
+		     max_limit (short_unsigned_type_node));
   ASSERT_TRUE (r0 == r1);
 
   // Casting NONZERO to a narrower type will wrap/overflow so
@@ -4999,8 +4997,9 @@ range_op_cast_tests ()
     {
       r0 = range_nonzero (integer_type_node);
       range_cast (r0, short_integer_type_node);
-      r1 = int_range<1> (TYPE_MIN_VALUE (short_integer_type_node),
-			 TYPE_MAX_VALUE (short_integer_type_node));
+      r1 = int_range<1> (short_integer_type_node,
+			 min_limit (short_integer_type_node),
+			 max_limit (short_integer_type_node));
       ASSERT_TRUE (r0 == r1);
     }
 
@@ -5010,8 +5009,8 @@ range_op_cast_tests ()
   // Converting this to 32-bits signed is [-MIN_16,-1][1, +MAX_16].
   r0 = range_nonzero (short_integer_type_node);
   range_cast (r0, integer_type_node);
-  r1 = int_range<1> (INT (-32768), INT (-1));
-  r2 = int_range<1> (INT (1), INT (32767));
+  r1 = int_range<1> (integer_type_node, INT (-32768), INT (-1));
+  r2 = int_range<1> (integer_type_node, INT (1), INT (32767));
   r1.union_ (r2);
   ASSERT_TRUE (r0 == r1);
 }
@@ -5024,17 +5023,16 @@ range_op_lshift_tests ()
   {
     int_range_max res;
     tree big_type = long_long_unsigned_type_node;
+    unsigned big_prec = TYPE_PRECISION (big_type);
     // big_num = 0x808,0000,0000,0000
-    tree big_num = fold_build2 (LSHIFT_EXPR, big_type,
-				build_int_cst (big_type, 0x808),
-				build_int_cst (big_type, 48));
+    wide_int big_num = wi::lshift (wi::uhwi (0x808, big_prec),
+				   wi::uhwi (48, big_prec));
     op_bitwise_and.fold_range (res, big_type,
 			       int_range <1> (big_type),
-			       int_range <1> (big_num, big_num));
+			       int_range <1> (big_type, big_num, big_num));
     // val = 0x8,0000,0000,0000
-    tree val = fold_build2 (LSHIFT_EXPR, big_type,
-			    build_int_cst (big_type, 0x8),
-			    build_int_cst (big_type, 48));
+    wide_int val = wi::lshift (wi::uhwi (8, big_prec),
+			       wi::uhwi (48, big_prec));
     ASSERT_TRUE (res.contains_p (val));
   }
 
@@ -5042,13 +5040,13 @@ range_op_lshift_tests ()
     {
       // unsigned VARYING = op1 << 1 should be VARYING.
       int_range<2> lhs (unsigned_type_node);
-      int_range<2> shift (INT (1), INT (1));
+      int_range<2> shift (unsigned_type_node, INT (1), INT (1));
       int_range_max op1;
       op_lshift.op1_range (op1, unsigned_type_node, lhs, shift);
       ASSERT_TRUE (op1.varying_p ());
 
       // 0 = op1 << 1  should be [0,0], [0x8000000, 0x8000000].
-      int_range<2> zero (UINT (0), UINT (0));
+      int_range<2> zero (unsigned_type_node, UINT (0), UINT (0));
       op_lshift.op1_range (op1, unsigned_type_node, zero, shift);
       ASSERT_TRUE (op1.num_pairs () == 2);
       // Remove the [0,0] range.
@@ -5065,13 +5063,13 @@ range_op_lshift_tests ()
     {
       // unsigned VARYING = op1 << 1 should be VARYING.
       int_range<2> lhs (integer_type_node);
-      int_range<2> shift (INT (1), INT (1));
+      int_range<2> shift (integer_type_node, INT (1), INT (1));
       int_range_max op1;
       op_lshift.op1_range (op1, integer_type_node, lhs, shift);
       ASSERT_TRUE (op1.varying_p ());
 
       //  0 = op1 << 1  should be [0,0], [0x8000000, 0x8000000].
-      int_range<2> zero (INT (0), INT (0));
+      int_range<2> zero (integer_type_node, INT (0), INT (0));
       op_lshift.op1_range (op1, integer_type_node, zero, shift);
       ASSERT_TRUE (op1.num_pairs () == 2);
       // Remove the [0,0] range.
@@ -5090,10 +5088,11 @@ range_op_rshift_tests ()
 {
   // unsigned: [3, MAX] = OP1 >> 1
   {
-    int_range_max lhs (build_int_cst (unsigned_type_node, 3),
-		       TYPE_MAX_VALUE (unsigned_type_node));
-    int_range_max one (build_one_cst (unsigned_type_node),
-		       build_one_cst (unsigned_type_node));
+    int_range_max lhs (unsigned_type_node,
+		       UINT (3), max_limit (unsigned_type_node));
+    int_range_max one (unsigned_type_node,
+		       wi::one (TYPE_PRECISION (unsigned_type_node)),
+		       wi::one (TYPE_PRECISION (unsigned_type_node)));
     int_range_max op1;
     op_rshift.op1_range (op1, unsigned_type_node, lhs, one);
     ASSERT_FALSE (op1.contains_p (UINT (3)));
@@ -5101,8 +5100,9 @@ range_op_rshift_tests ()
 
   // signed: [3, MAX] = OP1 >> 1
   {
-    int_range_max lhs (INT (3), TYPE_MAX_VALUE (integer_type_node));
-    int_range_max one (INT (1), INT (1));
+    int_range_max lhs (integer_type_node,
+		       INT (3), max_limit (integer_type_node));
+    int_range_max one (integer_type_node, INT (1), INT (1));
     int_range_max op1;
     op_rshift.op1_range (op1, integer_type_node, lhs, one);
     ASSERT_FALSE (op1.contains_p (INT (-2)));
@@ -5111,9 +5111,10 @@ range_op_rshift_tests ()
   // This is impossible, so OP1 should be [].
   // signed: [MIN, MIN] = OP1 >> 1
   {
-    int_range_max lhs (TYPE_MIN_VALUE (integer_type_node),
-		       TYPE_MIN_VALUE (integer_type_node));
-    int_range_max one (INT (1), INT (1));
+    int_range_max lhs (integer_type_node,
+		       min_limit (integer_type_node),
+		       min_limit (integer_type_node));
+    int_range_max one (integer_type_node, INT (1), INT (1));
     int_range_max op1;
     op_rshift.op1_range (op1, integer_type_node, lhs, one);
     ASSERT_TRUE (op1.undefined_p ());
@@ -5122,8 +5123,8 @@ range_op_rshift_tests ()
   // signed: ~[-1] = OP1 >> 31
   if (TYPE_PRECISION (integer_type_node) > 31)
     {
-      int_range_max lhs (INT (-1), INT (-1), VR_ANTI_RANGE);
-      int_range_max shift (INT (31), INT (31));
+      int_range_max lhs (integer_type_node, INT (-1), INT (-1), VR_ANTI_RANGE);
+      int_range_max shift (integer_type_node, INT (31), INT (31));
       int_range_max op1;
       op_rshift.op1_range (op1, integer_type_node, lhs, shift);
       int_range_max negatives = range_negatives (integer_type_node);
@@ -5136,13 +5137,11 @@ static void
 range_op_bitwise_and_tests ()
 {
   int_range_max res;
-  tree min = vrp_val_min (integer_type_node);
-  tree max = vrp_val_max (integer_type_node);
-  tree tiny = fold_build2 (PLUS_EXPR, integer_type_node, min,
-			   build_one_cst (integer_type_node));
-  int_range_max i1 (tiny, max);
-  int_range_max i2 (build_int_cst (integer_type_node, 255),
-		    build_int_cst (integer_type_node, 255));
+  wide_int min = min_limit (integer_type_node);
+  wide_int max = max_limit (integer_type_node);
+  wide_int tiny = wi::add (min, wi::one (TYPE_PRECISION (integer_type_node)));
+  int_range_max i1 (integer_type_node, tiny, max);
+  int_range_max i2 (integer_type_node, INT (255), INT (255));
 
   // [MIN+1, MAX] = OP1 & 255: OP1 is VARYING
   op_bitwise_and.op1_range (res, integer_type_node, i1, i2);
@@ -5155,8 +5154,8 @@ range_op_bitwise_and_tests ()
 
   // For 0 = x & MASK, x is ~MASK.
   {
-    int_range<2> zero (integer_zero_node, integer_zero_node);
-    int_range<2> mask = int_range<2> (INT (7), INT (7));
+    int_range<2> zero (integer_type_node, INT (0), INT (0));
+    int_range<2> mask = int_range<2> (integer_type_node, INT (7), INT (7));
     op_bitwise_and.op1_range (res, integer_type_node, zero, mask);
     wide_int inv = wi::shwi (~7U, TYPE_PRECISION (integer_type_node));
     ASSERT_TRUE (res.get_nonzero_bits () == inv);
@@ -5169,7 +5168,7 @@ range_op_bitwise_and_tests ()
   ASSERT_TRUE (res.nonzero_p ());
 
   // (NEGATIVE | X) is nonzero.
-  i1 = int_range<1> (INT (-5), INT (-3));
+  i1 = int_range<1> (integer_type_node, INT (-5), INT (-3));
   i2.set_varying (integer_type_node);
   op_bitwise_or.fold_range (res, integer_type_node, i1, i2);
   ASSERT_FALSE (res.contains_p (INT (0)));
@@ -5179,22 +5178,22 @@ static void
 range_relational_tests ()
 {
   int_range<2> lhs (unsigned_char_type_node);
-  int_range<2> op1 (UCHAR (8), UCHAR (10));
-  int_range<2> op2 (UCHAR (20), UCHAR (20));
+  int_range<2> op1 (unsigned_char_type_node, UCHAR (8), UCHAR (10));
+  int_range<2> op2 (unsigned_char_type_node, UCHAR (20), UCHAR (20));
 
   // Never wrapping additions mean LHS > OP1.
   relation_kind code = op_plus.lhs_op1_relation (lhs, op1, op2, VREL_VARYING);
   ASSERT_TRUE (code == VREL_GT);
 
   // Most wrapping additions mean nothing...
-  op1 = int_range<2> (UCHAR (8), UCHAR (10));
-  op2 = int_range<2> (UCHAR (0), UCHAR (255));
+  op1 = int_range<2> (unsigned_char_type_node, UCHAR (8), UCHAR (10));
+  op2 = int_range<2> (unsigned_char_type_node, UCHAR (0), UCHAR (255));
   code = op_plus.lhs_op1_relation (lhs, op1, op2, VREL_VARYING);
   ASSERT_TRUE (code == VREL_VARYING);
 
   // However, always wrapping additions mean LHS < OP1.
-  op1 = int_range<2> (UCHAR (1), UCHAR (255));
-  op2 = int_range<2> (UCHAR (255), UCHAR (255));
+  op1 = int_range<2> (unsigned_char_type_node, UCHAR (1), UCHAR (255));
+  op2 = int_range<2> (unsigned_char_type_node, UCHAR (255), UCHAR (255));
   code = op_plus.lhs_op1_relation (lhs, op1, op2, VREL_VARYING);
   ASSERT_TRUE (code == VREL_LT);
 }
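
Two helpers recur throughout the range-op.cc changes above: irange now has a
wide_int overload of singleton_p, so operators that used to extract a tree
singleton and test it with integer_zerop stay in wide_int space, and
contains_zero_p replaces contains_p (build_zero_cst (...)).  A short sketch
of both idioms (illustrative only, not part of the patch):

  wide_int offset;
  if (op2.singleton_p (offset) && offset != 0)
    {
      // op2 is a known non-zero constant; pure wide_int comparison, no trees.
    }
  if (!lhs.undefined_p () && !contains_zero_p (lhs))
    {
      // lhs is known to exclude zero.
    }
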
diff --git a/gcc/range.cc b/gcc/range.cc
index d9e9e0c788c..7c4a83032d7 100644
--- a/gcc/range.cc
+++ b/gcc/range.cc
@@ -32,14 +32,15 @@ along with GCC; see the file COPYING3.  If not see
 value_range
 range_zero (tree type)
 {
-  return value_range (build_zero_cst (type), build_zero_cst (type));
+  wide_int zero = wi::zero (TYPE_PRECISION (type));
+  return value_range (type, zero, zero);
 }
 
 value_range
 range_nonzero (tree type)
 {
-  return value_range (build_zero_cst (type), build_zero_cst (type),
-		      VR_ANTI_RANGE);
+  wide_int zero = wi::zero (TYPE_PRECISION (type));
+  return value_range (type, zero, zero, VR_ANTI_RANGE);
 }
 
 value_range
diff --git a/gcc/range.h b/gcc/range.h
index 3b0e9efffbf..f6a55baf80e 100644
--- a/gcc/range.h
+++ b/gcc/range.h
@@ -29,30 +29,30 @@ value_range range_negatives (tree type);
 // Return an irange instance that is a boolean TRUE.
 
 inline int_range<1>
-range_true (tree type)
+range_true (tree type = boolean_type_node)
 {
   unsigned prec = TYPE_PRECISION (type);
-  return int_range<2> (type, wi::one (prec), wi::one (prec));
+  return int_range<1> (type, wi::one (prec), wi::one (prec));
 }
 
 // Return an irange instance that is a boolean FALSE.
 
 inline int_range<1>
-range_false (tree type)
+range_false (tree type = boolean_type_node)
 {
   unsigned prec = TYPE_PRECISION (type);
-  return int_range<2> (type, wi::zero (prec), wi::zero (prec));
+  return int_range<1> (type, wi::zero (prec), wi::zero (prec));
 }
 
 // Return an irange that covers both true and false.
 
 inline int_range<1>
-range_true_and_false (tree type)
+range_true_and_false (tree type = boolean_type_node)
 {
   unsigned prec = TYPE_PRECISION (type);
   if (prec == 1)
-    return int_range<2> (type);
-  return int_range<2> (type, wi::zero (prec), wi::one (prec));
+    return int_range<1> (type);
+  return int_range<1> (type, wi::zero (prec), wi::one (prec));
 }
 
 #endif // GCC_RANGE_H
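
With the boolean_type_node default arguments above, most call sites can now
drop the explicit type, and the helpers return single-pair int_range<1>
values.  A usage sketch (illustrative only):

  int_range<1> t = range_true ();    // [1, 1] in boolean_type_node
  int_range<1> f = range_false ();   // [0, 0] in boolean_type_node
  // An explicit type is still accepted for 0/1 ranges of other types.
  int_range<1> tf = range_true_and_false (integer_type_node);   // [0, 1]
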
diff --git a/gcc/tree-data-ref.cc b/gcc/tree-data-ref.cc
index b3a1d410cbd..b576cce6db6 100644
--- a/gcc/tree-data-ref.cc
+++ b/gcc/tree-data-ref.cc
@@ -769,7 +769,10 @@ split_constant_offset_1 (tree type, tree op0, enum tree_code code, tree op1,
       *var = size_int (0);
       *off = fold_convert (ssizetype, op0);
       if (result_range)
-	result_range->set (op0, op0);
+	{
+	  wide_int w = wi::to_wide (op0);
+	  result_range->set (TREE_TYPE (op0), w, w);
+	}
       return true;
 
     case POINTER_PLUS_EXPR:
@@ -795,7 +798,7 @@ split_constant_offset_1 (tree type, tree op0, enum tree_code code, tree op1,
 	return false;
 
       split_constant_offset (op0, &var0, &off0, &op0_range, cache, limit);
-      op1_range.set (op1, op1);
+      op1_range.set (TREE_TYPE (op1), wi::to_wide (op1), wi::to_wide (op1));
       *off = size_binop (MULT_EXPR, off0, fold_convert (ssizetype, op1));
       if (!compute_distributive_range (type, op0_range, code, op1_range,
 				       off, result_range))
diff --git a/gcc/tree-ssa-loop-ch.cc b/gcc/tree-ssa-loop-ch.cc
index 83c2c1c6792..692e8ce7c38 100644
--- a/gcc/tree-ssa-loop-ch.cc
+++ b/gcc/tree-ssa-loop-ch.cc
@@ -79,15 +79,15 @@ entry_loop_condition_is_static (class loop *l, gimple_ranger *ranger)
   if (!loop_exit_edge_p (l, true_e) && !loop_exit_edge_p (l, false_e))
     return false;
 
-  tree desired_static_value;
+  int_range<1> desired_static_range;
   if (loop_exit_edge_p (l, true_e))
-    desired_static_value = boolean_false_node;
+    desired_static_range = range_false ();
   else
-    desired_static_value = boolean_true_node;
+    desired_static_range = range_true ();
 
   int_range<2> r;
   edge_range_query (r, e, last, *ranger);
-  return r == int_range<2> (desired_static_value, desired_static_value);
+  return r == desired_static_range;
 }
 
 /* Check whether we should duplicate HEADER of LOOP.  At most *LIMIT
diff --git a/gcc/tree-ssa-loop-unswitch.cc b/gcc/tree-ssa-loop-unswitch.cc
index 588610eaa47..081fb42ba54 100644
--- a/gcc/tree-ssa-loop-unswitch.cc
+++ b/gcc/tree-ssa-loop-unswitch.cc
@@ -142,14 +142,14 @@ struct unswitch_predicate
 	auto range_op = range_op_handler (code, TREE_TYPE (lhs));
 	int_range<2> rhs_range (TREE_TYPE (rhs));
 	if (CONSTANT_CLASS_P (rhs))
-	  rhs_range.set (rhs, rhs);
+	  {
+	    wide_int w = wi::to_wide (rhs);
+	    rhs_range.set (TREE_TYPE (rhs), w, w);
+	  }
 	if (!range_op.op1_range (true_range, TREE_TYPE (lhs),
-				 int_range<2> (boolean_true_node,
-					       boolean_true_node), rhs_range)
+				 range_true (), rhs_range)
 	    || !range_op.op1_range (false_range, TREE_TYPE (lhs),
-				    int_range<2> (boolean_false_node,
-						  boolean_false_node),
-				    rhs_range))
+				    range_false (), rhs_range))
 	  {
 	    true_range.set_varying (TREE_TYPE (lhs));
 	    false_range.set_varying (TREE_TYPE (lhs));
@@ -605,12 +605,13 @@ find_unswitching_predicates_for_bb (basic_block bb, class loop *loop,
 	      tree cmp1 = fold_build2 (GE_EXPR, boolean_type_node, idx, low);
 	      tree cmp2 = fold_build2 (LE_EXPR, boolean_type_node, idx, high);
 	      cmp = fold_build2 (BIT_AND_EXPR, boolean_type_node, cmp1, cmp2);
-	      lab_range.set (low, high);
+	      lab_range.set (idx_type, wi::to_wide (low), wi::to_wide (high));
 	    }
 	  else
 	    {
 	      cmp = fold_build2 (EQ_EXPR, boolean_type_node, idx, low);
-	      lab_range.set (low, low);
+	      wide_int w = wi::to_wide (low);
+	      lab_range.set (idx_type, w, w);
 	    }
 
 	  /* Combine the expression with the existing one.  */
diff --git a/gcc/tree-ssa-phiopt.cc b/gcc/tree-ssa-phiopt.cc
index 4b43f1abdbc..874526f0baa 100644
--- a/gcc/tree-ssa-phiopt.cc
+++ b/gcc/tree-ssa-phiopt.cc
@@ -1138,7 +1138,8 @@ value_replacement (basic_block cond_bb, basic_block middle_bb,
 		      if (get_global_range_query ()->range_of_expr (r, phires,
 								    phi))
 			{
-			  int_range<2> tmp (carg, carg);
+			  wide_int warg = wi::to_wide (carg);
+			  int_range<2> tmp (TREE_TYPE (carg), warg, warg);
 			  r.union_ (tmp);
 			  reset_flow_sensitive_info (phires);
 			  set_range_info (phires, r);
diff --git a/gcc/tree-ssa-threadbackward.cc b/gcc/tree-ssa-threadbackward.cc
index 962b33d88da..d5da4b0c1b1 100644
--- a/gcc/tree-ssa-threadbackward.cc
+++ b/gcc/tree-ssa-threadbackward.cc
@@ -327,8 +327,8 @@ back_threader::find_taken_edge_cond (const vec<basic_block> &path,
   if (solver.unreachable_path_p ())
     return UNREACHABLE_EDGE;
 
-  int_range<2> true_range (boolean_true_node, boolean_true_node);
-  int_range<2> false_range (boolean_false_node, boolean_false_node);
+  int_range<2> true_range = range_true ();
+  int_range<2> false_range = range_false ();
 
   if (r == true_range || r == false_range)
     {
diff --git a/gcc/tree-ssanames.cc b/gcc/tree-ssanames.cc
index b6cbf97b878..a510dfa031a 100644
--- a/gcc/tree-ssanames.cc
+++ b/gcc/tree-ssanames.cc
@@ -522,10 +522,9 @@ ssa_name_has_boolean_range (tree op)
   if (INTEGRAL_TYPE_P (TREE_TYPE (op))
       && (TYPE_PRECISION (TREE_TYPE (op)) > 1))
     {
-      int_range<2> onezero (build_zero_cst (TREE_TYPE (op)),
-			    build_one_cst (TREE_TYPE (op)));
       int_range<2> r;
-      if (get_range_query (cfun)->range_of_expr (r, op) && r == onezero)
+      if (get_range_query (cfun)->range_of_expr (r, op)
+	  && r == range_true_and_false (TREE_TYPE (op)))
 	return true;
 
       if (wi::eq_p (get_nonzero_bits (op), 1))
diff --git a/gcc/tree-vrp.cc b/gcc/tree-vrp.cc
index c0dcd50ee01..d28637b1918 100644
--- a/gcc/tree-vrp.cc
+++ b/gcc/tree-vrp.cc
@@ -837,7 +837,9 @@ find_case_label_range (gswitch *switch_stmt, const irange *range_of_op)
       tree label = gimple_switch_label (switch_stmt, i);
       tree case_high
 	= CASE_HIGH (label) ? CASE_HIGH (label) : CASE_LOW (label);
-      int_range_max label_range (CASE_LOW (label), case_high);
+      int_range_max label_range (type,
+				 wi::to_wide (CASE_LOW (label)),
+				 wi::to_wide (case_high));
       if (!types_compatible_p (label_range.type (), range_of_op->type ()))
 	range_cast (label_range, range_of_op->type ());
       label_range.intersect (*range_of_op);
@@ -861,7 +863,9 @@ find_case_label_range (gswitch *switch_stmt, const irange *range_of_op)
       tree case_high = CASE_HIGH (max_label);
       if (!case_high)
 	case_high = CASE_LOW (max_label);
-      int_range_max label_range (CASE_LOW (min_label), case_high);
+      int_range_max label_range (TREE_TYPE (CASE_LOW (min_label)),
+				 wi::to_wide (CASE_LOW (min_label)),
+				 wi::to_wide (case_high));
       if (!types_compatible_p (label_range.type (), range_of_op->type ()))
 	range_cast (label_range, range_of_op->type ());
       label_range.intersect (*range_of_op);
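
In the value-query.cc hunk below, get_tree_range now sets the result through
the concrete range class: INTEGER_CSTs are dropped into the irange as a
wide_int and REAL_CSTs into the frange as a REAL_VALUE_TYPE, matching the new
frange interface further down.  A reduced sketch of the dispatch
(illustrative only, NaN handling elided):

  if (TREE_CODE (expr) == INTEGER_CST)
    {
      irange &i = as_a <irange> (r);   // r is the caller's vrange
      wide_int w = wi::to_wide (expr);
      i.set (TREE_TYPE (expr), w, w);
    }
  else if (TREE_CODE (expr) == REAL_CST)
    {
      frange &f = as_a <frange> (r);
      REAL_VALUE_TYPE *rv = TREE_REAL_CST_PTR (expr);
      f.set (TREE_TYPE (expr), *rv, *rv);
    }
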
diff --git a/gcc/value-query.cc b/gcc/value-query.cc
index 8ccdc9f8852..43297f17c39 100644
--- a/gcc/value-query.cc
+++ b/gcc/value-query.cc
@@ -176,16 +176,21 @@ range_query::get_tree_range (vrange &r, tree expr, gimple *stmt)
   switch (TREE_CODE (expr))
     {
     case INTEGER_CST:
-      if (TREE_OVERFLOW_P (expr))
-	expr = drop_tree_overflow (expr);
-      r.set (expr, expr);
-      return true;
+      {
+	irange &i = as_a <irange> (r);
+	if (TREE_OVERFLOW_P (expr))
+	  expr = drop_tree_overflow (expr);
+	wide_int w = wi::to_wide (expr);
+	i.set (TREE_TYPE (expr), w, w);
+	return true;
+      }
 
     case REAL_CST:
       {
 	frange &f = as_a <frange> (r);
-	f.set (expr, expr);
-	if (!real_isnan (TREE_REAL_CST_PTR (expr)))
+	REAL_VALUE_TYPE *rv = TREE_REAL_CST_PTR (expr);
+	f.set (TREE_TYPE (expr), *rv, *rv);
+	if (!real_isnan (rv))
 	  f.clear_nan ();
 	return true;
       }
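
The value-range.cc changes below add a wide_int overload of irange::set and
rewrite the anti-range canonicalization to work directly on wide_ints: the
inverse of [I,J] becomes [TYPE_MIN, I-1][J+1, TYPE_MAX], dropping either
piece that would be empty.  A worked example (illustrative only, not part of
the patch) for an 8-bit signed type:

  // ~[5, 10] on signed char canonicalizes to the pairs [-128, 4][11, 127].
  unsigned prec = TYPE_PRECISION (signed_char_type_node);   // 8 bits
  int_range<2> r (signed_char_type_node,
                  wi::shwi (5, prec), wi::shwi (10, prec),
                  VR_ANTI_RANGE);
  // r.num_pairs () == 2:  pair 0 is [-128, 4], pair 1 is [11, 127].
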
diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index 69b214ecc06..f2148722a3a 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -301,7 +301,9 @@ irange::fits_p (const vrange &r) const
 void
 irange::set_nonnegative (tree type)
 {
-  set (build_int_cst (type, 0), TYPE_MAX_VALUE (type));
+  set (type,
+       wi::zero (TYPE_PRECISION (type)),
+       wi::to_wide (TYPE_MAX_VALUE (type)));
 }
 
 void
@@ -700,13 +702,12 @@ frange::operator== (const frange &src) const
   return false;
 }
 
-// Return TRUE if range contains the TREE_REAL_CST_PTR in CST.
+// Return TRUE if range contains R.
 
 bool
-frange::contains_p (tree cst) const
+frange::contains_p (const REAL_VALUE_TYPE &r) const
 {
   gcc_checking_assert (m_kind != VR_ANTI_RANGE);
-  const REAL_VALUE_TYPE *rv = TREE_REAL_CST_PTR (cst);
 
   if (undefined_p ())
     return false;
@@ -714,7 +715,7 @@ frange::contains_p (tree cst) const
   if (varying_p ())
     return true;
 
-  if (real_isnan (rv))
+  if (real_isnan (&r))
     {
       // No NAN in range.
       if (!m_pos_nan && !m_neg_nan)
@@ -722,16 +723,16 @@ frange::contains_p (tree cst) const
       // Both +NAN and -NAN are present.
       if (m_pos_nan && m_neg_nan)
 	return true;
-      return m_neg_nan == rv->sign;
+      return m_neg_nan == r.sign;
     }
   if (known_isnan ())
     return false;
 
-  if (real_compare (GE_EXPR, rv, &m_min) && real_compare (LE_EXPR, rv, &m_max))
+  if (real_compare (GE_EXPR, &r, &m_min) && real_compare (LE_EXPR, &r, &m_max))
     {
       // Make sure the signs are equal for signed zeros.
-      if (HONOR_SIGNED_ZEROS (m_type) && real_iszero (rv))
-	return rv->sign == m_min.sign || rv->sign == m_max.sign;
+      if (HONOR_SIGNED_ZEROS (m_type) && real_iszero (&r))
+	return r.sign == m_min.sign || r.sign == m_max.sign;
       return true;
     }
   return false;
@@ -743,7 +744,7 @@ frange::contains_p (tree cst) const
 // A NAN can never be a singleton.
 
 bool
-frange::singleton_p (tree *result) const
+frange::internal_singleton_p (REAL_VALUE_TYPE *result) const
 {
   if (m_kind == VR_RANGE && real_identical (&m_min, &m_max))
     {
@@ -766,6 +767,18 @@ frange::singleton_p (tree *result) const
 	    return false;
 	}
 
+      if (result)
+	*result = m_min;
+      return true;
+    }
+  return false;
+}
+
+bool
+frange::singleton_p (tree *result) const
+{
+  if (internal_singleton_p ())
+    {
       if (result)
 	*result = build_real (m_type, m_min);
       return true;
@@ -773,6 +786,12 @@ frange::singleton_p (tree *result) const
   return false;
 }
 
+bool
+frange::singleton_p (REAL_VALUE_TYPE &r) const
+{
+  return internal_singleton_p (&r);
+}
+
 bool
 frange::supports_type_p (const_tree type) const
 {
@@ -942,13 +961,10 @@ get_legacy_range (const irange &r, tree &min, tree &max)
 }
 
 void
-irange::irange_set (tree min, tree max)
+irange::irange_set (tree type, const wide_int &min, const wide_int &max)
 {
-  gcc_checking_assert (!POLY_INT_CST_P (min));
-  gcc_checking_assert (!POLY_INT_CST_P (max));
-
-  m_base[0] = min;
-  m_base[1] = max;
+  m_base[0] = wide_int_to_tree (type, min);
+  m_base[1] = wide_int_to_tree (type, max);
   m_num_ranges = 1;
   m_kind = VR_RANGE;
   m_nonzero_mask = NULL;
@@ -959,26 +975,31 @@ irange::irange_set (tree min, tree max)
 }
 
 void
-irange::irange_set_1bit_anti_range (tree min, tree max)
+irange::irange_set_1bit_anti_range (tree type,
+				    const wide_int &min, const wide_int &max)
 {
-  tree type = TREE_TYPE (min);
   gcc_checking_assert (TYPE_PRECISION (type) == 1);
 
-  if (operand_equal_p (min, max))
+  if (min == max)
     {
       // Since these are 1-bit quantities, they can only be [MIN,MIN]
       // or [MAX,MAX].
-      if (vrp_val_is_min (min))
-	min = max = vrp_val_max (type);
+      if (min == wi::to_wide (TYPE_MIN_VALUE (type)))
+	{
+	  wide_int tmp = wi::to_wide (TYPE_MAX_VALUE (type));
+	  set (type, tmp, tmp);
+	}
       else
-	min = max = vrp_val_min (type);
-      set (min, max);
+	{
+	  wide_int tmp = wi::to_wide (TYPE_MIN_VALUE (type));
+	  set (type, tmp, tmp);
+	}
     }
   else
     {
       // The only alternative is [MIN,MAX], which is the empty range.
-      gcc_checking_assert (vrp_val_is_min (min));
-      gcc_checking_assert (vrp_val_is_max (max));
+      gcc_checking_assert (min == wi::to_wide (TYPE_MIN_VALUE (type)));
+      gcc_checking_assert (max == wi::to_wide (TYPE_MAX_VALUE (type)));
       set_undefined ();
     }
   if (flag_checking)
@@ -986,43 +1007,38 @@ irange::irange_set_1bit_anti_range (tree min, tree max)
 }
 
 void
-irange::irange_set_anti_range (tree min, tree max)
+irange::irange_set_anti_range (tree type,
+			       const wide_int &min, const wide_int &max)
 {
-  gcc_checking_assert (!POLY_INT_CST_P (min));
-  gcc_checking_assert (!POLY_INT_CST_P (max));
-
-  if (TYPE_PRECISION (TREE_TYPE (min)) == 1)
+  if (TYPE_PRECISION (type) == 1)
     {
-      irange_set_1bit_anti_range (min, max);
+      irange_set_1bit_anti_range (type, min, max);
       return;
     }
 
   // set an anti-range
-  tree type = TREE_TYPE (min);
   signop sign = TYPE_SIGN (type);
   int_range<2> type_range (type);
   // Calculate INVERSE([I,J]) as [-MIN, I-1][J+1, +MAX].
   m_num_ranges = 0;
   wi::overflow_type ovf;
 
-  wide_int w_min = wi::to_wide (min);
-  if (wi::ne_p (w_min, type_range.lower_bound ()))
+  if (wi::ne_p (min, type_range.lower_bound ()))
     {
-      wide_int lim1 = wi::sub (w_min, 1, sign, &ovf);
+      wide_int lim1 = wi::sub (min, 1, sign, &ovf);
       gcc_checking_assert (ovf != wi::OVF_OVERFLOW);
       m_base[0] = wide_int_to_tree (type, type_range.lower_bound (0));
       m_base[1] = wide_int_to_tree (type, lim1);
       m_num_ranges = 1;
     }
-  wide_int w_max = wi::to_wide (max);
-  if (wi::ne_p (w_max, type_range.upper_bound ()))
+  if (wi::ne_p (max, type_range.upper_bound ()))
     {
       if (m_max_ranges == 1 && m_num_ranges)
 	{
 	  set_varying (type);
 	  return;
 	}
-      wide_int lim2 = wi::add (w_max, 1, sign, &ovf);
+      wide_int lim2 = wi::add (max, 1, sign, &ovf);
       gcc_checking_assert (ovf != wi::OVF_OVERFLOW);
       m_base[m_num_ranges * 2] = wide_int_to_tree (type, lim2);
       m_base[m_num_ranges * 2 + 1]
@@ -1047,6 +1063,36 @@ irange::irange_set_anti_range (tree min, tree max)
    This routine exists to ease canonicalization in the case where we
    extract ranges from var + CST op limit.  */
 
+void
+irange::set (tree type, const wide_int &rmin, const wide_int &rmax,
+	     value_range_kind kind)
+{
+  if (kind == VR_UNDEFINED)
+    {
+      irange::set_undefined ();
+      return;
+    }
+
+  if (kind == VR_VARYING)
+    {
+      set_varying (type);
+      return;
+    }
+
+  signop sign = TYPE_SIGN (type);
+  unsigned prec = TYPE_PRECISION (type);
+  wide_int min = wide_int::from (rmin, prec, sign);
+  wide_int max = wide_int::from (rmax, prec, sign);
+
+  if (kind == VR_RANGE)
+    irange_set (type, min, max);
+  else
+    {
+      gcc_checking_assert (kind == VR_ANTI_RANGE);
+      irange_set_anti_range (type, min, max);
+    }
+}
+
 void
 irange::set (tree min, tree max, value_range_kind kind)
 {
@@ -1072,13 +1118,8 @@ irange::set (tree min, tree max, value_range_kind kind)
   if (TREE_OVERFLOW_P (max))
     max = drop_tree_overflow (max);
 
-  if (kind == VR_RANGE)
-    irange_set (min, max);
-  else
-    {
-      gcc_checking_assert (kind == VR_ANTI_RANGE);
-      irange_set_anti_range (min, max);
-    }
+  return set (TREE_TYPE (min),
+	      wi::to_wide (min), wi::to_wide (max), kind);
 }
 
 // Check the validity of the range.
@@ -1138,9 +1179,7 @@ irange::operator== (const irange &other) const
   return nz1 == nz2;
 }
 
-/* If range is a singleton, place it in RESULT and return TRUE.
-   Note: A singleton can be any gimple invariant, not just constants.
-   So, [&x, &x] counts as a singleton.  */
+/* If range is a singleton, place it in RESULT and return TRUE.  */
 
 bool
 irange::singleton_p (tree *result) const
@@ -1154,37 +1193,41 @@ irange::singleton_p (tree *result) const
   return false;
 }
 
-/* Return TRUE if range contains INTEGER_CST.  */
-/* Return 1 if VAL is inside value range.
-	  0 if VAL is not inside value range.
+bool
+irange::singleton_p (wide_int &w) const
+{
+  if (num_pairs () == 1 && lower_bound () == upper_bound ())
+    {
+      w = lower_bound ();
+      return true;
+    }
+  return false;
+}
+
+/* Return 1 if CST is inside value range.
+	  0 if CST is not inside value range.
 
    Benchmark compile/20001226-1.c compilation time after changing this
    function.  */
 
-
 bool
-irange::contains_p (tree cst) const
+irange::contains_p (const wide_int &cst) const
 {
   if (undefined_p ())
     return false;
 
-  gcc_checking_assert (TREE_CODE (cst) == INTEGER_CST);
-
   // See if we can exclude CST based on the nonzero bits.
-  if (m_nonzero_mask)
-    {
-      wide_int cstw = wi::to_wide (cst);
-      if (cstw != 0 && wi::bit_and (wi::to_wide (m_nonzero_mask), cstw) == 0)
-	return false;
-    }
+  if (m_nonzero_mask
+      && cst != 0
+      && wi::bit_and (wi::to_wide (m_nonzero_mask), cst) == 0)
+    return false;
 
-  signop sign = TYPE_SIGN (TREE_TYPE (cst));
-  wide_int v = wi::to_wide (cst);
+  signop sign = TYPE_SIGN (type ());
   for (unsigned r = 0; r < m_num_ranges; ++r)
     {
-      if (wi::lt_p (v, lower_bound (r), sign))
+      if (wi::lt_p (cst, lower_bound (r), sign))
 	return false;
-      if (wi::le_p (v, upper_bound (r), sign))
+      if (wi::le_p (cst, upper_bound (r), sign))
 	return true;
     }
 
@@ -1760,10 +1803,10 @@ irange::set_range_from_nonzero_bits ()
   if (popcount == 1)
     {
       // Make sure we don't pessimize the range.
-      if (!contains_p (m_nonzero_mask))
+      if (!contains_p (wi::to_wide (m_nonzero_mask)))
 	return false;
 
-      bool has_zero = contains_p (build_zero_cst (type ()));
+      bool has_zero = contains_zero_p (*this);
       tree nz = m_nonzero_mask;
       set (nz, nz);
       m_nonzero_mask = nz;
@@ -2085,7 +2128,6 @@ gt_ggc_mx (int_range<2> *&x)
 }
 
 #define DEFINE_INT_RANGE_INSTANCE(N)					\
-  template int_range<N>::int_range(tree, tree, value_range_kind);	\
   template int_range<N>::int_range(tree_node *,				\
 				   const wide_int &,			\
 				   const wide_int &,			\
@@ -2103,20 +2145,73 @@ DEFINE_INT_RANGE_INSTANCE(255)
 #if CHECKING_P
 #include "selftest.h"
 
+#define INT(x) wi::shwi ((x), TYPE_PRECISION (integer_type_node))
+#define UINT(x) wi::uhwi ((x), TYPE_PRECISION (unsigned_type_node))
+#define SCHAR(x) wi::shwi ((x), TYPE_PRECISION (signed_char_type_node))
+
 namespace selftest
 {
-#define INT(N) build_int_cst (integer_type_node, (N))
-#define UINT(N) build_int_cstu (unsigned_type_node, (N))
-#define UINT128(N) build_int_cstu (u128_type, (N))
-#define UCHAR(N) build_int_cstu (unsigned_char_type_node, (N))
-#define SCHAR(N) build_int_cst (signed_char_type_node, (N))
+
+static int_range<2>
+range (tree type, int a, int b, value_range_kind kind = VR_RANGE)
+{
+  wide_int w1, w2;
+  if (TYPE_UNSIGNED (type))
+    {
+      w1 = wi::uhwi (a, TYPE_PRECISION (type));
+      w2 = wi::uhwi (b, TYPE_PRECISION (type));
+    }
+  else
+    {
+      w1 = wi::shwi (a, TYPE_PRECISION (type));
+      w2 = wi::shwi (b, TYPE_PRECISION (type));
+    }
+  return int_range<2> (type, w1, w2, kind);
+}
+
+static int_range<2>
+tree_range (tree a, tree b, value_range_kind kind = VR_RANGE)
+{
+  return int_range<2> (TREE_TYPE (a), wi::to_wide (a), wi::to_wide (b), kind);
+}
+
+static int_range<2>
+range_int (int a, int b, value_range_kind kind = VR_RANGE)
+{
+  return range (integer_type_node, a, b, kind);
+}
+
+static int_range<2>
+range_uint (int a, int b, value_range_kind kind = VR_RANGE)
+{
+  return range (unsigned_type_node, a, b, kind);
+}
+
+static int_range<2>
+range_uint128 (int a, int b, value_range_kind kind = VR_RANGE)
+{
+  tree u128_type_node = build_nonstandard_integer_type (128, 1);
+  return range (u128_type_node, a, b, kind);
+}
+
+static int_range<2>
+range_uchar (int a, int b, value_range_kind kind = VR_RANGE)
+{
+  return range (unsigned_char_type_node, a, b, kind);
+}
+
+static int_range<2>
+range_char (int a, int b, value_range_kind kind = VR_RANGE)
+{
+  return range (signed_char_type_node, a, b, kind);
+}
 
 static int_range<3>
 build_range3 (int a, int b, int c, int d, int e, int f)
 {
-  int_range<3> i1 (INT (a), INT (b));
-  int_range<3> i2 (INT (c), INT (d));
-  int_range<3> i3 (INT (e), INT (f));
+  int_range<3> i1 = range_int (a, b);
+  int_range<3> i2 = range_int (c, d);
+  int_range<3> i3 = range_int (e, f);
   i1.union_ (i2);
   i1.union_ (i3);
   return i1;
@@ -2125,76 +2220,75 @@ build_range3 (int a, int b, int c, int d, int e, int f)
 static void
 range_tests_irange3 ()
 {
-  typedef int_range<3> int_range3;
-  int_range3 r0, r1, r2;
-  int_range3 i1, i2, i3;
+  int_range<3> r0, r1, r2;
+  int_range<3> i1, i2, i3;
 
   // ([10,20] U [5,8]) U [1,3] ==> [1,3][5,8][10,20].
-  r0 = int_range3 (INT (10), INT (20));
-  r1 = int_range3 (INT (5), INT (8));
+  r0 = range_int (10, 20);
+  r1 = range_int (5, 8);
   r0.union_ (r1);
-  r1 = int_range3 (INT (1), INT (3));
+  r1 = range_int (1, 3);
   r0.union_ (r1);
   ASSERT_TRUE (r0 == build_range3 (1, 3, 5, 8, 10, 20));
 
   // [1,3][5,8][10,20] U [-5,0] => [-5,3][5,8][10,20].
-  r1 = int_range3 (INT (-5), INT (0));
+  r1 = range_int (-5, 0);
   r0.union_ (r1);
   ASSERT_TRUE (r0 == build_range3 (-5, 3, 5, 8, 10, 20));
 
   // [10,20][30,40] U [50,60] ==> [10,20][30,40][50,60].
-  r1 = int_range3 (INT (50), INT (60));
-  r0 = int_range3 (INT (10), INT (20));
-  r0.union_ (int_range3 (INT (30), INT (40)));
+  r1 = range_int (50, 60);
+  r0 = range_int (10, 20);
+  r0.union_ (range_int (30, 40));
   r0.union_ (r1);
   ASSERT_TRUE (r0 == build_range3 (10, 20, 30, 40, 50, 60));
   // [10,20][30,40][50,60] U [70, 80] ==> [10,20][30,40][50,60][70,80].
-  r1 = int_range3 (INT (70), INT (80));
+  r1 = range_int (70, 80);
   r0.union_ (r1);
 
   r2 = build_range3 (10, 20, 30, 40, 50, 60);
-  r2.union_ (int_range3 (INT (70), INT (80)));
+  r2.union_ (range_int (70, 80));
   ASSERT_TRUE (r0 == r2);
 
   // [10,20][30,40][50,60] U [6,35] => [6,40][50,60].
   r0 = build_range3 (10, 20, 30, 40, 50, 60);
-  r1 = int_range3 (INT (6), INT (35));
+  r1 = range_int (6, 35);
   r0.union_ (r1);
-  r1 = int_range3 (INT (6), INT (40));
-  r1.union_ (int_range3 (INT (50), INT (60)));
+  r1 = range_int (6, 40);
+  r1.union_ (range_int (50, 60));
   ASSERT_TRUE (r0 == r1);
 
   // [10,20][30,40][50,60] U [6,60] => [6,60].
   r0 = build_range3 (10, 20, 30, 40, 50, 60);
-  r1 = int_range3 (INT (6), INT (60));
+  r1 = range_int (6, 60);
   r0.union_ (r1);
-  ASSERT_TRUE (r0 == int_range3 (INT (6), INT (60)));
+  ASSERT_TRUE (r0 == range_int (6, 60));
 
   // [10,20][30,40][50,60] U [6,70] => [6,70].
   r0 = build_range3 (10, 20, 30, 40, 50, 60);
-  r1 = int_range3 (INT (6), INT (70));
+  r1 = range_int (6, 70);
   r0.union_ (r1);
-  ASSERT_TRUE (r0 == int_range3 (INT (6), INT (70)));
+  ASSERT_TRUE (r0 == range_int (6, 70));
 
   // [10,20][30,40][50,60] U [35,70] => [10,20][30,70].
   r0 = build_range3 (10, 20, 30, 40, 50, 60);
-  r1 = int_range3 (INT (35), INT (70));
+  r1 = range_int (35, 70);
   r0.union_ (r1);
-  r1 = int_range3 (INT (10), INT (20));
-  r1.union_ (int_range3 (INT (30), INT (70)));
+  r1 = range_int (10, 20);
+  r1.union_ (range_int (30, 70));
   ASSERT_TRUE (r0 == r1);
 
   // [10,20][30,40][50,60] U [15,35] => [10,40][50,60].
   r0 = build_range3 (10, 20, 30, 40, 50, 60);
-  r1 = int_range3 (INT (15), INT (35));
+  r1 = range_int (15, 35);
   r0.union_ (r1);
-  r1 = int_range3 (INT (10), INT (40));
-  r1.union_ (int_range3 (INT (50), INT (60)));
+  r1 = range_int (10, 40);
+  r1.union_ (range_int (50, 60));
   ASSERT_TRUE (r0 == r1);
 
   // [10,20][30,40][50,60] U [35,35] => [10,20][30,40][50,60].
   r0 = build_range3 (10, 20, 30, 40, 50, 60);
-  r1 = int_range3 (INT (35), INT (35));
+  r1 = range_int (35, 35);
   r0.union_ (r1);
   ASSERT_TRUE (r0 == build_range3 (10, 20, 30, 40, 50, 60));
 }
@@ -2208,7 +2302,7 @@ range_tests_int_range_max ()
   // Build a huge multi-range range.
   for (nrange = 0; nrange < 50; ++nrange)
     {
-      int_range<1> tmp (INT (nrange*10), INT (nrange*10 + 5));
+      int_range<1> tmp = range_int (nrange*10, nrange *10 + 5);
       big.union_ (tmp);
     }
   ASSERT_TRUE (big.num_pairs () == nrange);
@@ -2221,18 +2315,16 @@ range_tests_int_range_max ()
   big.invert ();
   ASSERT_TRUE (big.num_pairs () == nrange + 1);
 
-  int_range<1> tmp (INT (5), INT (37));
+  int_range<1> tmp = range_int (5, 37);
   big.intersect (tmp);
   ASSERT_TRUE (big.num_pairs () == 4);
 
   // Test that [10,10][20,20] does NOT contain 15.
   {
-    int_range_max i1 (build_int_cst (integer_type_node, 10),
-		      build_int_cst (integer_type_node, 10));
-    int_range_max i2 (build_int_cst (integer_type_node, 20),
-		      build_int_cst (integer_type_node, 20));
+    int_range_max i1 = range_int (10, 10);
+    int_range_max i2 = range_int (20, 20);
     i1.union_ (i2);
-    ASSERT_FALSE (i1.contains_p (build_int_cst (integer_type_node, 15)));
+    ASSERT_FALSE (i1.contains_p (INT (15)));
   }
 }
 
@@ -2249,11 +2341,10 @@ range_tests_strict_enum ()
 
   // Test that even though vr1 covers the strict enum domain ([0, 3]),
   // it does not cover the domain of the underlying type.
-  int_range<1> vr1 (build_int_cstu (rtype, 0), build_int_cstu (rtype, 1));
-  int_range<1> vr2 (build_int_cstu (rtype, 2), build_int_cstu (rtype, 3));
+  int_range<1> vr1 = range (rtype, 0, 1);
+  int_range<1> vr2 = range (rtype, 2, 3);
   vr1.union_ (vr2);
-  ASSERT_TRUE (vr1 == int_range<1> (build_int_cstu (rtype, 0),
-				    build_int_cstu (rtype, 3)));
+  ASSERT_TRUE (vr1 == range (rtype, 0, 3));
   ASSERT_FALSE (vr1.varying_p ());
 
   // Test that copying to a multi-range does not change things.
@@ -2262,7 +2353,7 @@ range_tests_strict_enum ()
   ASSERT_FALSE (ir1.varying_p ());
 
   // The same test as above, but using TYPE_{MIN,MAX}_VALUE instead of [0,3].
-  vr1 = int_range<1> (TYPE_MIN_VALUE (rtype), TYPE_MAX_VALUE (rtype));
+  vr1 = tree_range (TYPE_MIN_VALUE (rtype), TYPE_MAX_VALUE (rtype));
   ir1 = vr1;
   ASSERT_TRUE (ir1 == vr1);
   ASSERT_FALSE (ir1.varying_p ());
@@ -2281,8 +2372,8 @@ range_tests_misc ()
   tree one_bit_min = vrp_val_min (one_bit_type);
   tree one_bit_max = vrp_val_max (one_bit_type);
   {
-    int_range<2> min (one_bit_min, one_bit_min);
-    int_range<2> max (one_bit_max, one_bit_max);
+    int_range<2> min = tree_range (one_bit_min, one_bit_min);
+    int_range<2> max = tree_range (one_bit_max, one_bit_max);
     max.union_ (min);
     ASSERT_TRUE (max.varying_p ());
   }
@@ -2291,8 +2382,8 @@ range_tests_misc ()
 
   // Test inversion of 1-bit signed integers.
   {
-    int_range<2> min (one_bit_min, one_bit_min);
-    int_range<2> max (one_bit_max, one_bit_max);
+    int_range<2> min = tree_range (one_bit_min, one_bit_min);
+    int_range<2> max = tree_range (one_bit_max, one_bit_max);
     int_range<2> t;
     t = min;
     t.invert ();
@@ -2303,79 +2394,81 @@ range_tests_misc ()
   }
 
   // Test that NOT(255) is [0..254] in 8-bit land.
-  int_range<1> not_255 (UCHAR (255), UCHAR (255), VR_ANTI_RANGE);
-  ASSERT_TRUE (not_255 == int_range<1> (UCHAR (0), UCHAR (254)));
+  int_range<1> not_255 = range_uchar (255, 255, VR_ANTI_RANGE);
+  ASSERT_TRUE (not_255 == range_uchar (0, 254));
 
   // Test that NOT(0) is [1..255] in 8-bit land.
   int_range<2> not_zero = range_nonzero (unsigned_char_type_node);
-  ASSERT_TRUE (not_zero == int_range<1> (UCHAR (1), UCHAR (255)));
+  ASSERT_TRUE (not_zero == range_uchar (1, 255));
 
   // Check that [0,127][0x..ffffff80,0x..ffffff]
   //  => ~[128, 0x..ffffff7f].
-  r0 = int_range<1> (UINT128 (0), UINT128 (127));
-  tree high = build_minus_one_cst (u128_type);
+  r0 = range_uint128 (0, 127);
+  wide_int high = wi::minus_one (128);
   // low = -1 - 127 => 0x..ffffff80.
-  tree low = fold_build2 (MINUS_EXPR, u128_type, high, UINT128(127));
-  r1 = int_range<1> (low, high); // [0x..ffffff80, 0x..ffffffff]
+  wide_int low = wi::sub (high, wi::uhwi (127, 128));
+  r1 = int_range<1> (u128_type, low, high); // [0x..ffffff80, 0x..ffffffff]
   // r0 = [0,127][0x..ffffff80,0x..fffffff].
   r0.union_ (r1);
   // r1 = [128, 0x..ffffff7f].
-  r1 = int_range<1> (UINT128(128),
-		     fold_build2 (MINUS_EXPR, u128_type,
-				  build_minus_one_cst (u128_type),
-				  UINT128(128)));
+  r1 = int_range<1> (u128_type,
+		     wi::uhwi (128, 128),
+		     wi::sub (wi::minus_one (128), wi::uhwi (128, 128)));
   r0.invert ();
   ASSERT_TRUE (r0 == r1);
 
   r0.set_varying (integer_type_node);
-  tree minint = wide_int_to_tree (integer_type_node, r0.lower_bound ());
-  tree maxint = wide_int_to_tree (integer_type_node, r0.upper_bound ());
+  wide_int minint = r0.lower_bound ();
+  wide_int maxint = r0.upper_bound ();
 
   r0.set_varying (short_integer_type_node);
 
   r0.set_varying (unsigned_type_node);
-  tree maxuint = wide_int_to_tree (unsigned_type_node, r0.upper_bound ());
+  wide_int maxuint = r0.upper_bound ();
 
   // Check that ~[0,5] => [6,MAX] for unsigned int.
-  r0 = int_range<1> (UINT (0), UINT (5));
+  r0 = range_uint (0, 5);
   r0.invert ();
-  ASSERT_TRUE (r0 == int_range<1> (UINT(6), maxuint));
+  ASSERT_TRUE (r0 == int_range<1> (unsigned_type_node,
+				   wi::uhwi (6, TYPE_PRECISION (unsigned_type_node)),
+				   maxuint));
 
   // Check that ~[10,MAX] => [0,9] for unsigned int.
-  r0 = int_range<1> (UINT(10), maxuint);
+  r0 = int_range<1> (unsigned_type_node,
+		     wi::uhwi (10, TYPE_PRECISION (unsigned_type_node)),
+		     maxuint);
   r0.invert ();
-  ASSERT_TRUE (r0 == int_range<1> (UINT (0), UINT (9)));
+  ASSERT_TRUE (r0 == range_uint (0, 9));
 
   // Check that ~[0,5] => [6,MAX] for unsigned 128-bit numbers.
-  r0 = int_range<1> (UINT128 (0), UINT128 (5), VR_ANTI_RANGE);
-  r1 = int_range<1> (UINT128(6), build_minus_one_cst (u128_type));
+  r0 = range_uint128 (0, 5, VR_ANTI_RANGE);
+  r1 = int_range<1> (u128_type, wi::uhwi (6, 128), wi::minus_one (128));
   ASSERT_TRUE (r0 == r1);
 
   // Check that [~5] is really [-MIN,4][6,MAX].
-  r0 = int_range<2> (INT (5), INT (5), VR_ANTI_RANGE);
-  r1 = int_range<1> (minint, INT (4));
-  r1.union_ (int_range<1> (INT (6), maxint));
+  r0 = range_int (5, 5, VR_ANTI_RANGE);
+  r1 = int_range<1> (integer_type_node, minint, INT (4));
+  r1.union_ (int_range<1> (integer_type_node, INT (6), maxint));
   ASSERT_FALSE (r1.undefined_p ());
   ASSERT_TRUE (r0 == r1);
 
-  r1 = int_range<1> (INT (5), INT (5));
+  r1 = range_int (5, 5);
   int_range<2> r2 (r1);
   ASSERT_TRUE (r1 == r2);
 
-  r1 = int_range<1> (INT (5), INT (10));
+  r1 = range_int (5, 10);
 
-  r1 = int_range<1> (integer_type_node,
-		     wi::to_wide (INT (5)), wi::to_wide (INT (10)));
+  r1 = range_int (5, 10);
   ASSERT_TRUE (r1.contains_p (INT (7)));
 
-  r1 = int_range<1> (SCHAR (0), SCHAR (20));
+  r1 = range_char (0, 20);
   ASSERT_TRUE (r1.contains_p (SCHAR(15)));
   ASSERT_FALSE (r1.contains_p (SCHAR(300)));
 
   // NOT([10,20]) ==> [-MIN,9][21,MAX].
-  r0 = r1 = int_range<1> (INT (10), INT (20));
-  r2 = int_range<1> (minint, INT(9));
-  r2.union_ (int_range<1> (INT(21), maxint));
+  r0 = r1 = range_int (10, 20);
+  r2 = int_range<1> (integer_type_node, minint, INT(9));
+  r2.union_ (int_range<1> (integer_type_node, INT(21), maxint));
   ASSERT_FALSE (r2.undefined_p ());
   r1.invert ();
   ASSERT_TRUE (r1 == r2);
@@ -2385,11 +2478,9 @@ range_tests_misc ()
 
   // Test that booleans and their inverse work as expected.
   r0 = range_zero (boolean_type_node);
-  ASSERT_TRUE (r0 == int_range<1> (build_zero_cst (boolean_type_node),
-				   build_zero_cst (boolean_type_node)));
+  ASSERT_TRUE (r0 == range_false ());
   r0.invert ();
-  ASSERT_TRUE (r0 == int_range<1> (build_one_cst (boolean_type_node),
-				   build_one_cst (boolean_type_node)));
+  ASSERT_TRUE (r0 == range_true ());
 
   // Make sure NULL and non-NULL of pointer types work, and that
   // inverses of them are consistent.
@@ -2401,34 +2492,34 @@ range_tests_misc ()
   ASSERT_TRUE (r0 == r1);
 
   // [10,20] U [15, 30] => [10, 30].
-  r0 = int_range<1> (INT (10), INT (20));
-  r1 = int_range<1> (INT (15), INT (30));
+  r0 = range_int (10, 20);
+  r1 = range_int (15, 30);
   r0.union_ (r1);
-  ASSERT_TRUE (r0 == int_range<1> (INT (10), INT (30)));
+  ASSERT_TRUE (r0 == range_int (10, 30));
 
   // [15,40] U [] => [15,40].
-  r0 = int_range<1> (INT (15), INT (40));
+  r0 = range_int (15, 40);
   r1.set_undefined ();
   r0.union_ (r1);
-  ASSERT_TRUE (r0 == int_range<1> (INT (15), INT (40)));
+  ASSERT_TRUE (r0 == range_int (15, 40));
 
   // [10,20] U [10,10] => [10,20].
-  r0 = int_range<1> (INT (10), INT (20));
-  r1 = int_range<1> (INT (10), INT (10));
+  r0 = range_int (10, 20);
+  r1 = range_int (10, 10);
   r0.union_ (r1);
-  ASSERT_TRUE (r0 == int_range<1> (INT (10), INT (20)));
+  ASSERT_TRUE (r0 == range_int (10, 20));
 
   // [10,20] U [9,9] => [9,20].
-  r0 = int_range<1> (INT (10), INT (20));
-  r1 = int_range<1> (INT (9), INT (9));
+  r0 = range_int (10, 20);
+  r1 = range_int (9, 9);
   r0.union_ (r1);
-  ASSERT_TRUE (r0 == int_range<1> (INT (9), INT (20)));
+  ASSERT_TRUE (r0 == range_int (9, 20));
 
   // [10,20] ^ [15,30] => [15,20].
-  r0 = int_range<1> (INT (10), INT (20));
-  r1 = int_range<1> (INT (15), INT (30));
+  r0 = range_int (10, 20);
+  r1 = range_int (15, 30);
   r0.intersect (r1);
-  ASSERT_TRUE (r0 == int_range<1> (INT (15), INT (20)));
+  ASSERT_TRUE (r0 == range_int (15, 20));
 
   // Test the internal sanity of wide_int's wrt HWIs.
   ASSERT_TRUE (wi::max_value (TYPE_PRECISION (boolean_type_node),
@@ -2436,18 +2527,18 @@ range_tests_misc ()
 	       == wi::uhwi (1, TYPE_PRECISION (boolean_type_node)));
 
   // Test zero_p().
-  r0 = int_range<1> (INT (0), INT (0));
+  r0 = range_int (0, 0);
   ASSERT_TRUE (r0.zero_p ());
 
   // Test nonzero_p().
-  r0 = int_range<1> (INT (0), INT (0));
+  r0 = range_int (0, 0);
   r0.invert ();
   ASSERT_TRUE (r0.nonzero_p ());
 
   // r0 = ~[1,1]
-  r0 = int_range<2> (UINT (1), UINT (1), VR_ANTI_RANGE);
+  r0 = range_int (1, 1, VR_ANTI_RANGE);
   // r1 = ~[3,3]
-  r1 = int_range<2> (UINT (3), UINT (3), VR_ANTI_RANGE);
+  r1 = range_int (3, 3, VR_ANTI_RANGE);
 
   // vv = [0,0][2,2][4, MAX]
   int_range<3> vv = r0;
@@ -2456,7 +2547,7 @@ range_tests_misc ()
   ASSERT_TRUE (vv.contains_p (UINT (2)));
   ASSERT_TRUE (vv.num_pairs () == 3);
 
-  r0 = int_range<1> (UINT (1), UINT (1));
+  r0 = range_uint (1, 1);
   // And union it with  [0,0][2,2][4,MAX] multi range
   r0.union_ (vv);
   // The result should be [0,2][4,MAX], or ~[3,3]  but it must contain 2
@@ -2493,7 +2584,7 @@ range_tests_nonzero_bits ()
   ASSERT_TRUE (r0.get_nonzero_bits () == 0xff);
 
   // Intersect of nonzero bits.
-  r0.set (INT (0), INT (255));
+  r0 = range_int (0, 255);
   r0.set_nonzero_bits (0xfe);
   r1.set_varying (integer_type_node);
   r1.set_nonzero_bits (0xf0);
@@ -2502,7 +2593,7 @@ range_tests_nonzero_bits ()
 
   // Intersect where the mask of nonzero bits is implicit from the range.
   r0.set_varying (integer_type_node);
-  r1.set (INT (0), INT (255));
+  r1 = range_int (0, 255);
   r0.intersect (r1);
   ASSERT_TRUE (r0.get_nonzero_bits () == 0xff);
 
@@ -2631,13 +2722,13 @@ range_tests_nan ()
   // NAN is in a VARYING.
   r0.set_varying (float_type_node);
   real_nan (&r, "", 1, TYPE_MODE (float_type_node));
-  tree nan = build_real (float_type_node, r);
+  REAL_VALUE_TYPE nan = r;
   ASSERT_TRUE (r0.contains_p (nan));
 
   // -NAN is in a VARYING.
   r0.set_varying (float_type_node);
   q = real_value_negate (&r);
-  tree neg_nan = build_real (float_type_node, q);
+  REAL_VALUE_TYPE neg_nan = q;
   ASSERT_TRUE (r0.contains_p (neg_nan));
 
   // Clearing the NAN on a [] NAN is the empty set.
@@ -2669,28 +2760,29 @@ range_tests_nan ()
 static void
 range_tests_signed_zeros ()
 {
-  tree zero = build_zero_cst (float_type_node);
-  tree neg_zero = fold_build1 (NEGATE_EXPR, float_type_node, zero);
+  REAL_VALUE_TYPE zero = dconst0;
+  REAL_VALUE_TYPE neg_zero = zero;
+  neg_zero.sign = 1;
   frange r0, r1;
   bool signbit;
 
   // [0,0] contains [0,0] but not [-0,-0] and vice versa.
-  r0 = frange (zero, zero);
-  r1 = frange (neg_zero, neg_zero);
+  r0 = frange_float ("0.0", "0.0");
+  r1 = frange_float ("-0.0", "-0.0");
   ASSERT_TRUE (r0.contains_p (zero));
   ASSERT_TRUE (!r0.contains_p (neg_zero));
   ASSERT_TRUE (r1.contains_p (neg_zero));
   ASSERT_TRUE (!r1.contains_p (zero));
 
   // Test contains_p() when we know the sign of the zero.
-  r0 = frange (zero, zero);
+  r0 = frange_float ("0.0", "0.0");
   ASSERT_TRUE (r0.contains_p (zero));
   ASSERT_FALSE (r0.contains_p (neg_zero));
-  r0 = frange (neg_zero, neg_zero);
+  r0 = frange_float ("-0.0", "-0.0");
   ASSERT_TRUE (r0.contains_p (neg_zero));
   ASSERT_FALSE (r0.contains_p (zero));
 
-  r0 = frange (neg_zero, zero);
+  r0 = frange_float ("-0.0", "0.0");
   ASSERT_TRUE (r0.contains_p (neg_zero));
   ASSERT_TRUE (r0.contains_p (zero));
 
@@ -2700,8 +2792,8 @@ range_tests_signed_zeros ()
 
   // The intersection of zeros that differ in sign is a NAN (or
   // undefined if not honoring NANs).
-  r0 = frange (neg_zero, neg_zero);
-  r1 = frange (zero, zero);
+  r0 = frange_float ("-0.0", "-0.0");
+  r1 = frange_float ("0.0", "0.0");
   r0.intersect (r1);
   if (HONOR_NANS (float_type_node))
     ASSERT_TRUE (r0.known_isnan ());
@@ -2709,18 +2801,18 @@ range_tests_signed_zeros ()
     ASSERT_TRUE (r0.undefined_p ());
 
   // The union of zeros that differ in sign is a zero with unknown sign.
-  r0 = frange (zero, zero);
-  r1 = frange (neg_zero, neg_zero);
+  r0 = frange_float ("0.0", "0.0");
+  r1 = frange_float ("-0.0", "-0.0");
   r0.union_ (r1);
   ASSERT_TRUE (r0.zero_p () && !r0.signbit_p (signbit));
 
   // [-0, +0] has an unknown sign.
-  r0 = frange (neg_zero, zero);
+  r0 = frange_float ("-0.0", "0.0");
   ASSERT_TRUE (r0.zero_p () && !r0.signbit_p (signbit));
 
   // [-0, +0] ^ [0, 0] is [0, 0]
-  r0 = frange (neg_zero, zero);
-  r1 = frange (zero, zero);
+  r0 = frange_float ("-0.0", "0.0");
+  r1 = frange_float ("0.0", "0.0");
   r0.intersect (r1);
   ASSERT_TRUE (r0.zero_p ());
 
diff --git a/gcc/value-range.h b/gcc/value-range.h
index 6d108154dc1..633a234d41f 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -122,8 +122,7 @@ class GTY((user)) irange : public vrange
   friend class irange_storage;
 public:
   // In-place setters.
-  virtual void set (tree, tree, value_range_kind = VR_RANGE) override;
-  void set (tree type, const wide_int_ref &, const wide_int_ref &,
+  void set (tree type, const wide_int &, const wide_int &,
 	    value_range_kind = VR_RANGE);
   virtual void set_nonzero (tree type) override;
   virtual void set_zero (tree type) override;
@@ -146,7 +145,8 @@ public:
   virtual bool zero_p () const override;
   virtual bool nonzero_p () const override;
   virtual bool singleton_p (tree *result = NULL) const override;
-  virtual bool contains_p (tree cst) const override;
+  bool singleton_p (wide_int &) const;
+  bool contains_p (const wide_int &) const;
 
   // In-place operators.
   virtual bool union_ (const vrange &) override;
@@ -167,11 +167,13 @@ public:
   void set_nonzero_bits (const wide_int_ref &bits);
 
 protected:
+  virtual void set (tree, tree, value_range_kind = VR_RANGE) override;
+  virtual bool contains_p (tree cst) const override;
   irange (tree *, unsigned);
 
    // In-place operators.
-  void irange_set (tree, tree);
-  void irange_set_anti_range (tree, tree);
+  void irange_set (tree type, const wide_int &, const wide_int &);
+  void irange_set_anti_range (tree type, const wide_int &, const wide_int &);
   bool irange_contains_p (const irange &) const;
   bool irange_single_pair_union (const irange &r);
 
@@ -184,7 +186,8 @@ private:
   friend void gt_pch_nx (irange *);
   friend void gt_pch_nx (irange *, gt_pointer_operator, void *);
 
-  void irange_set_1bit_anti_range (tree, tree);
+  void irange_set_1bit_anti_range (tree type,
+				   const wide_int &, const wide_int &);
   bool varying_compatible_p () const;
   bool intersect_nonzero_bits (const irange &r);
   bool union_nonzero_bits (const irange &r);
@@ -206,7 +209,6 @@ class GTY((user)) int_range : public irange
 {
 public:
   int_range ();
-  int_range (tree, tree, value_range_kind = VR_RANGE);
   int_range (tree type, const wide_int &, const wide_int &,
 	     value_range_kind = VR_RANGE);
   int_range (tree type);
@@ -214,6 +216,8 @@ public:
   int_range (const irange &);
   virtual ~int_range () = default;
   int_range& operator= (const int_range &);
+protected:
+  int_range (tree, tree, value_range_kind = VR_RANGE);
 private:
   template <unsigned X> friend void gt_ggc_mx (int_range<X> *);
   template <unsigned X> friend void gt_pch_nx (int_range<X> *);
@@ -319,7 +323,6 @@ public:
     return SCALAR_FLOAT_TYPE_P (type) && !DECIMAL_FLOAT_TYPE_P (type);
   }
   virtual tree type () const override;
-  virtual void set (tree, tree, value_range_kind = VR_RANGE) override;
   void set (tree type, const REAL_VALUE_TYPE &, const REAL_VALUE_TYPE &,
 	    value_range_kind = VR_RANGE);
   void set (tree type, const REAL_VALUE_TYPE &, const REAL_VALUE_TYPE &,
@@ -330,8 +333,9 @@ public:
   virtual void set_undefined () override;
   virtual bool union_ (const vrange &) override;
   virtual bool intersect (const vrange &) override;
-  virtual bool contains_p (tree) const override;
+  bool contains_p (const REAL_VALUE_TYPE &) const;
   virtual bool singleton_p (tree *result = NULL) const override;
+  bool singleton_p (REAL_VALUE_TYPE &r) const;
   virtual bool supports_type_p (const_tree type) const override;
   virtual void accept (const vrange_visitor &v) const override;
   virtual bool zero_p () const override;
@@ -361,7 +365,13 @@ public:
   bool maybe_isinf () const;
   bool signbit_p (bool &signbit) const;
   bool nan_signbit_p (bool &signbit) const;
+
+protected:
+  virtual bool contains_p (tree cst) const override;
+  virtual void set (tree, tree, value_range_kind = VR_RANGE) override;
+
 private:
+  bool internal_singleton_p (REAL_VALUE_TYPE * = NULL) const;
   void verify_range ();
   bool normalize_kind ();
   bool union_nans (const frange &);
@@ -485,8 +495,6 @@ public:
   static bool supports_type_p (const_tree type);
 
   // Convenience methods for vrange compatibility.
-  void set (tree min, tree max, value_range_kind kind = VR_RANGE)
-    { return m_vrange->set (min, max, kind); }
   tree type () { return m_vrange->type (); }
   bool varying_p () const { return m_vrange->varying_p (); }
   bool undefined_p () const { return m_vrange->undefined_p (); }
@@ -536,7 +544,7 @@ inline
 Value_Range::Value_Range (tree min, tree max, value_range_kind kind)
 {
   init (TREE_TYPE (min));
-  set (min, max, kind);
+  m_vrange->set (min, max, kind);
 }
 
 inline
@@ -674,13 +682,6 @@ irange::varying_compatible_p () const
   return true;
 }
 
-inline void
-irange::set (tree type, const wide_int_ref &min, const wide_int_ref &max,
-	     value_range_kind kind)
-{
-  set (wide_int_to_tree (type, min), wide_int_to_tree (type, max), kind);
-}
-
 inline bool
 vrange::varying_p () const
 {
@@ -707,8 +708,8 @@ irange::nonzero_p () const
   if (undefined_p ())
     return false;
 
-  tree zero = build_zero_cst (type ());
-  return *this == int_range<2> (zero, zero, VR_ANTI_RANGE);
+  wide_int zero = wi::zero (TYPE_PRECISION (type ()));
+  return *this == int_range<2> (type (), zero, zero, VR_ANTI_RANGE);
 }
 
 inline bool
@@ -717,6 +718,12 @@ irange::supports_p (const_tree type)
   return INTEGRAL_TYPE_P (type) || POINTER_TYPE_P (type);
 }
 
+inline bool
+irange::contains_p (tree cst) const
+{
+  return contains_p (wi::to_wide (cst));
+}
+
 inline bool
 range_includes_zero_p (const irange *vr)
 {
@@ -726,7 +733,7 @@ range_includes_zero_p (const irange *vr)
   if (vr->varying_p ())
     return true;
 
-  tree zero = build_zero_cst (vr->type ());
+  wide_int zero = wi::zero (TYPE_PRECISION (vr->type ()));
   return vr->contains_p (zero);
 }
 
@@ -906,8 +913,8 @@ irange::upper_bound () const
 inline void
 irange::set_nonzero (tree type)
 {
-  tree zero = build_int_cst (type, 0);
-  irange_set_anti_range (zero, zero);
+  wide_int zero = wi::zero (TYPE_PRECISION (type));
+  set (type, zero, zero, VR_ANTI_RANGE);
 }
 
 // Set value range VR to a ZERO range of type TYPE.
@@ -915,8 +922,8 @@ irange::set_nonzero (tree type)
 inline void
 irange::set_zero (tree type)
 {
-  tree z = build_int_cst (type, 0);
-  irange_set (z, z);
+  wide_int zero = wi::zero (TYPE_PRECISION (type));
+  set (type, zero, zero);
 }
 
 // Normalize a range to VARYING or UNDEFINED if possible.
@@ -935,6 +942,16 @@ irange::normalize_kind ()
     }
 }
 
+inline bool
+contains_zero_p (const irange &r)
+{
+  if (r.undefined_p ())
+    return true;
+
+  wide_int zero = wi::zero (TYPE_PRECISION (r.type ()));
+  return r.contains_p (zero);
+}
+
 // Return the maximum value for TYPE.
 
 inline tree
@@ -1083,6 +1100,12 @@ frange::update_nan (bool sign)
     }
 }
 
+inline bool
+frange::contains_p (tree cst) const
+{
+  return contains_p (*TREE_REAL_CST_PTR (cst));
+}
+
 // Clear the NAN bit and adjust the range.
 
 inline void
diff --git a/gcc/vr-values.cc b/gcc/vr-values.cc
index 3d28198f9f5..49ae324419a 100644
--- a/gcc/vr-values.cc
+++ b/gcc/vr-values.cc
@@ -88,8 +88,7 @@ simplify_using_ranges::op_with_boolean_value_range_p (tree op, gimple *s)
      as [0,1].  */
   value_range vr;
   return (query->range_of_expr (vr, op, s)
-	  && vr == value_range (build_zero_cst (TREE_TYPE (op)),
-				build_one_cst (TREE_TYPE (op))));
+	  && vr == range_true_and_false (TREE_TYPE (op)));
 }
 
 /* Helper function for simplify_internal_call_using_ranges and
@@ -316,7 +315,11 @@ bounds_of_var_in_loop (tree *min, tree *max, range_query *query,
 	      value_range maxvr, vr0, vr1;
 	      if (!query->range_of_expr (vr0, init, stmt))
 		vr0.set_varying (TREE_TYPE (init));
-	      vr1.set (TREE_TYPE (init), wtmp, wtmp);
+	      tree tinit = TREE_TYPE (init);
+	      wide_int winit = wide_int::from (wtmp,
+					       TYPE_PRECISION (tinit),
+					       TYPE_SIGN (tinit));
+	      vr1.set (TREE_TYPE (init), winit, winit);
 
 	      range_op_handler handler (PLUS_EXPR, TREE_TYPE (init));
 	      if (!handler.fold_range (maxvr, TREE_TYPE (init), vr0, vr1))
@@ -444,15 +447,25 @@ simplify_using_ranges::legacy_fold_cond_overflow (gimple *stmt)
       else
 	{
 	  value_range vro, vri;
+	  tree type = TREE_TYPE (op0);
 	  if (code == GT_EXPR || code == GE_EXPR)
 	    {
-	      vro.set (TYPE_MIN_VALUE (TREE_TYPE (op0)), x, VR_ANTI_RANGE);
-	      vri.set (TYPE_MIN_VALUE (TREE_TYPE (op0)), x);
+	      vro.set (type,
+		       wi::to_wide (TYPE_MIN_VALUE (type)),
+		       wi::to_wide (x), VR_ANTI_RANGE);
+	      vri.set (type,
+		       wi::to_wide (TYPE_MIN_VALUE (type)),
+		       wi::to_wide (x));
 	    }
 	  else if (code == LT_EXPR || code == LE_EXPR)
 	    {
-	      vro.set (TYPE_MIN_VALUE (TREE_TYPE (op0)), x);
-	      vri.set (TYPE_MIN_VALUE (TREE_TYPE (op0)), x, VR_ANTI_RANGE);
+	      vro.set (type,
+		       wi::to_wide (TYPE_MIN_VALUE (type)),
+		       wi::to_wide (x));
+	      vri.set (type,
+		       wi::to_wide (TYPE_MIN_VALUE (type)),
+		       wi::to_wide (x),
+		       VR_ANTI_RANGE);
 	    }
 	  else
 	    gcc_unreachable ();
-- 
2.40.0


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [COMMITTED] Replace vrp_val* with wide_ints.
  2023-05-01  6:28 [COMMITTED] vrange_storage overhaul Aldy Hernandez
                   ` (5 preceding siblings ...)
  2023-05-01  6:29 ` [COMMITTED] Conversion to irange wide_int API Aldy Hernandez
@ 2023-05-01  6:29 ` Aldy Hernandez
  2023-05-01  6:29 ` [COMMITTED] Rewrite bounds_of_var_in_loop() to use ranges Aldy Hernandez
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:29 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

This patch removes all uses of vrp_val_{min,max} in favor of
irange_val_*, which are wide_int based.  This leaves only one use of
vrp_val_*, returning trees, in range_of_ssa_name_with_loop_info(),
because that function needs to work with non-integers (floats, etc).
In a follow-up patch, this function will also be cleaned up so that
vrp_val_* can be deleted.

The functions min_limit and max_limit in range-op.cc are now
redundant, as they're basically irange_val_*.  I didn't rename them
yet to avoid churn; I'll do it in a later patch.
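
For illustration only (not part of the patch), here is a minimal
sketch of the caller-side change this enables, modeled on the
adjust_pointer_diff_expr hunk below.  The helper name
build_ptrdiff_range is made up for the example; irange_val_max and the
int_range<> wide_int constructor are the real interfaces:

  // Before: materialize a tree just to convert it back to a wide_int:
  //   tree max = vrp_val_max (ptrdiff_type_node);
  //   wide_int ub = wi::to_wide (max, TYPE_PRECISION (TREE_TYPE (max))) - 1;
  // After: stay in wide_ints the whole way.
  static int_range<2>
  build_ptrdiff_range ()
  {
    unsigned prec = TYPE_PRECISION (ptrdiff_type_node);
    wide_int lb = wi::zero (prec);
    wide_int ub = irange_val_max (ptrdiff_type_node) - 1;
    return int_range<2> (ptrdiff_type_node, lb, ub);
  }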

gcc/ChangeLog:

	* gimple-range-fold.cc (adjust_pointer_diff_expr): Rewrite with
	irange_val*.
	(vrp_val_max): New.
	(vrp_val_min): New.
	* gimple-range-op.cc (cfn_strlen::fold_range): Use irange_val_*.
	* range-op.cc (max_limit): Same.
	(min_limit): Same.
	(plus_minus_ranges): Same.
	(operator_rshift::op1_range): Same.
	(operator_cast::inside_domain_p): Same.
	* value-range.cc (vrp_val_is_max): Delete.
	(vrp_val_is_min): Delete.
	(range_tests_misc): Use irange_val_*.
	* value-range.h (vrp_val_is_min): Delete.
	(vrp_val_is_max): Delete.
	(vrp_val_max): Delete.
	(irange_val_min): New.
	(vrp_val_min): Delete.
	(irange_val_max): New.
	* vr-values.cc (check_for_binary_op_overflow): Use irange_val_*.
---
 gcc/gimple-range-fold.cc | 40 +++++++++++++++++++++++++++++----
 gcc/gimple-range-op.cc   |  8 +++----
 gcc/range-op.cc          | 19 +++++++---------
 gcc/value-range.cc       | 37 +++++--------------------------
 gcc/value-range.h        | 41 +++++++---------------------------
 gcc/vr-values.cc         | 48 ++++++++++++++--------------------------
 6 files changed, 78 insertions(+), 115 deletions(-)

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 62875a35038..1b76e6e02a3 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -360,10 +360,10 @@ adjust_pointer_diff_expr (irange &res, const gimple *diff_stmt)
       && vrp_operand_equal_p (op1, gimple_call_arg (call, 0))
       && integer_zerop (gimple_call_arg (call, 1)))
     {
-      tree max = vrp_val_max (ptrdiff_type_node);
-      unsigned prec = TYPE_PRECISION (TREE_TYPE (max));
-      wide_int wmaxm1 = wi::to_wide (max, prec) - 1;
-      res.intersect (int_range<2> (TREE_TYPE (max), wi::zero (prec), wmaxm1));
+      wide_int maxm1 = irange_val_max (ptrdiff_type_node) - 1;
+      res.intersect (int_range<2> (ptrdiff_type_node,
+				   wi::zero (TYPE_PRECISION (ptrdiff_type_node)),
+				   maxm1));
     }
 }
 
@@ -966,6 +966,38 @@ tree_upper_bound (const vrange &r, tree type)
   return NULL;
 }
 
+// Return the maximum value for TYPE.
+
+static inline tree
+vrp_val_max (const_tree type)
+{
+  if (INTEGRAL_TYPE_P (type)
+      || POINTER_TYPE_P (type))
+    return wide_int_to_tree (const_cast <tree> (type), irange_val_max (type));
+  if (frange::supports_p (type))
+    {
+      REAL_VALUE_TYPE r = frange_val_max (type);
+      return build_real (const_cast <tree> (type), r);
+    }
+  return NULL_TREE;
+}
+
+// Return the minimum value for TYPE.
+
+static inline tree
+vrp_val_min (const_tree type)
+{
+  if (INTEGRAL_TYPE_P (type)
+      || POINTER_TYPE_P (type))
+    return wide_int_to_tree (const_cast <tree> (type), irange_val_min (type));
+  if (frange::supports_p (type))
+    {
+      REAL_VALUE_TYPE r = frange_val_min (type);
+      return build_real (const_cast <tree> (type), r);
+    }
+  return NULL_TREE;
+}
+
 // If SCEV has any information about phi node NAME, return it as a range in R.
 
 void
diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index 29c7c776a2c..3aef8357d8d 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -900,15 +900,13 @@ public:
   virtual bool fold_range (irange &r, tree type, const irange &,
 			   const irange &, relation_trio) const
   {
-    tree max = vrp_val_max (ptrdiff_type_node);
-    wide_int wmax
-      = wi::to_wide (max, TYPE_PRECISION (TREE_TYPE (max)));
+    wide_int max = irange_val_max (ptrdiff_type_node);
     // To account for the terminating NULL, the maximum length
     // is one less than the maximum array size, which in turn
     // is one less than PTRDIFF_MAX (or SIZE_MAX where it's
     // smaller than the former type).
     // FIXME: Use max_object_size() - 1 here.
-    r.set (type, wi::zero (TYPE_PRECISION (type)), wmax - 2);
+    r.set (type, wi::zero (TYPE_PRECISION (type)), max - 2);
     return true;
   }
 } op_cfn_strlen;
@@ -936,7 +934,7 @@ public:
 	   wi::shwi (m_is_pos ? 0 : 1, TYPE_PRECISION (type)),
 	   size
 	   ? wi::shwi (size - m_is_pos, TYPE_PRECISION (type))
-	   : wi::to_wide (vrp_val_max (type)));
+	   : irange_val_max (type));
     return true;
   }
 private:
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 224a561c170..fc0eef998e4 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -97,7 +97,7 @@ update_known_bitmask (irange &r, tree_code code,
 static inline wide_int
 max_limit (const_tree type)
 {
-  return wi::max_value (TYPE_PRECISION (type) , TYPE_SIGN (type));
+  return irange_val_max (type);
 }
 
 // Return the lower limit for a type.
@@ -105,7 +105,7 @@ max_limit (const_tree type)
 static inline wide_int
 min_limit (const_tree type)
 {
-  return wi::min_value (TYPE_PRECISION (type) , TYPE_SIGN (type));
+  return irange_val_min (type);
 }
 
 // Return false if shifting by OP is undefined behavior.  Otherwise, return
@@ -1463,14 +1463,14 @@ plus_minus_ranges (irange &r_ov, irange &r_normal, const irange &offset,
     {
       //  [ 0 , INF - OFF]
       lb = wi::zero (prec);
-      ub = wi::sub (wi::to_wide (vrp_val_max (type)), off, UNSIGNED, &ov);
+      ub = wi::sub (irange_val_max (type), off, UNSIGNED, &ov);
       kind = VREL_GT;
     }
   else
     {
       //  [ OFF, INF ]
       lb = off;
-      ub = wi::to_wide (vrp_val_max (type));
+      ub = irange_val_max (type);
       kind = VREL_LT;
     }
   int_range<2> normal_range (type, lb, ub);
@@ -2594,13 +2594,10 @@ operator_rshift::op1_range (irange &r,
       // OP1 is anything from 0011 1000 to 0011 1111.  That is, a
       // range from LHS<<3 plus a mask of the 3 bits we shifted on the
       // right hand side (0x07).
-      tree mask = fold_build1 (BIT_NOT_EXPR, type,
-			       fold_build2 (LSHIFT_EXPR, type,
-					    build_minus_one_cst (type),
-					    wide_int_to_tree (op2.type (), shift)));
+      wide_int mask = wi::bit_not (wi::lshift (wi::minus_one (prec), shift));
       int_range_max mask_range (type,
 				wi::zero (TYPE_PRECISION (type)),
-				wi::to_wide (mask));
+				mask);
       op_plus.fold_range (ub, type, lb, mask_range);
       r = lb;
       r.union_ (ub);
@@ -2731,8 +2728,8 @@ operator_cast::inside_domain_p (const wide_int &min,
 				const wide_int &max,
 				const irange &range) const
 {
-  wide_int domain_min = wi::to_wide (vrp_val_min (range.type ()));
-  wide_int domain_max = wi::to_wide (vrp_val_max (range.type ()));
+  wide_int domain_min = irange_val_min (range.type ());
+  wide_int domain_max = irange_val_max (range.type ());
   signop domain_sign = TYPE_SIGN (range.type ());
   return (wi::le_p (min, domain_max, domain_sign)
 	  && wi::le_p (max, domain_max, domain_sign)
diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index f2148722a3a..cf694ccaa28 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -1990,31 +1990,6 @@ debug (const value_range &vr)
   fprintf (stderr, "\n");
 }
 
-/* Return whether VAL is equal to the maximum value of its type.
-   We can't do a simple equality comparison with TYPE_MAX_VALUE because
-   C typedefs and Ada subtypes can produce types whose TYPE_MAX_VALUE
-   is not == to the integer constant with the same value in the type.  */
-
-bool
-vrp_val_is_max (const_tree val)
-{
-  tree type_max = vrp_val_max (TREE_TYPE (val));
-  return (val == type_max
-	  || (type_max != NULL_TREE
-	      && operand_equal_p (val, type_max, 0)));
-}
-
-/* Return whether VAL is equal to the minimum value of its type.  */
-
-bool
-vrp_val_is_min (const_tree val)
-{
-  tree type_min = vrp_val_min (TREE_TYPE (val));
-  return (val == type_min
-	  || (type_min != NULL_TREE
-	      && operand_equal_p (val, type_min, 0)));
-}
-
 /* Return true, if VAL1 and VAL2 are equal values for VRP purposes.  */
 
 bool
@@ -2369,11 +2344,11 @@ range_tests_misc ()
   // Test 1-bit signed integer union.
   // [-1,-1] U [0,0] = VARYING.
   tree one_bit_type = build_nonstandard_integer_type (1, 0);
-  tree one_bit_min = vrp_val_min (one_bit_type);
-  tree one_bit_max = vrp_val_max (one_bit_type);
+  wide_int one_bit_min = irange_val_min (one_bit_type);
+  wide_int one_bit_max = irange_val_max (one_bit_type);
   {
-    int_range<2> min = tree_range (one_bit_min, one_bit_min);
-    int_range<2> max = tree_range (one_bit_max, one_bit_max);
+    int_range<2> min = int_range<2> (one_bit_type, one_bit_min, one_bit_min);
+    int_range<2> max = int_range<2> (one_bit_type, one_bit_max, one_bit_max);
     max.union_ (min);
     ASSERT_TRUE (max.varying_p ());
   }
@@ -2382,8 +2357,8 @@ range_tests_misc ()
 
   // Test inversion of 1-bit signed integers.
   {
-    int_range<2> min = tree_range (one_bit_min, one_bit_min);
-    int_range<2> max = tree_range (one_bit_max, one_bit_max);
+    int_range<2> min = int_range<2> (one_bit_type, one_bit_min, one_bit_min);
+    int_range<2> max = int_range<2> (one_bit_type, one_bit_max, one_bit_max);
     int_range<2> t;
     t = min;
     t.invert ();
diff --git a/gcc/value-range.h b/gcc/value-range.h
index 633a234d41f..b040e2f254f 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -635,8 +635,6 @@ Value_Range::supports_type_p (const_tree type)
 
 extern value_range_kind get_legacy_range (const irange &, tree &min, tree &max);
 extern void dump_value_range (FILE *, const vrange *);
-extern bool vrp_val_is_min (const_tree);
-extern bool vrp_val_is_max (const_tree);
 extern bool vrp_operand_equal_p (const_tree, const_tree);
 inline REAL_VALUE_TYPE frange_val_min (const_tree type);
 inline REAL_VALUE_TYPE frange_val_max (const_tree type);
@@ -952,41 +950,18 @@ contains_zero_p (const irange &r)
   return r.contains_p (zero);
 }
 
-// Return the maximum value for TYPE.
-
-inline tree
-vrp_val_max (const_tree type)
+inline wide_int
+irange_val_min (const_tree type)
 {
-  if (INTEGRAL_TYPE_P (type))
-    return TYPE_MAX_VALUE (type);
-  if (POINTER_TYPE_P (type))
-    {
-      wide_int max = wi::max_value (TYPE_PRECISION (type), TYPE_SIGN (type));
-      return wide_int_to_tree (const_cast<tree> (type), max);
-    }
-  if (frange::supports_p (type))
-    {
-      REAL_VALUE_TYPE r = frange_val_max (type);
-      return build_real (const_cast <tree> (type), r);
-    }
-  return NULL_TREE;
+  gcc_checking_assert (irange::supports_p (type));
+  return wi::min_value (TYPE_PRECISION (type), TYPE_SIGN (type));
 }
 
-// Return the minimum value for TYPE.
-
-inline tree
-vrp_val_min (const_tree type)
+inline wide_int
+irange_val_max (const_tree type)
 {
-  if (INTEGRAL_TYPE_P (type))
-    return TYPE_MIN_VALUE (type);
-  if (POINTER_TYPE_P (type))
-    return build_zero_cst (const_cast<tree> (type));
-  if (frange::supports_p (type))
-    {
-      REAL_VALUE_TYPE r = frange_val_min (type);
-      return build_real (const_cast <tree> (type), r);
-    }
-  return NULL_TREE;
+  gcc_checking_assert (irange::supports_p (type));
+  return wi::max_value (TYPE_PRECISION (type), TYPE_SIGN (type));
 }
 
 inline
diff --git a/gcc/vr-values.cc b/gcc/vr-values.cc
index 49ae324419a..31df6b85ce6 100644
--- a/gcc/vr-values.cc
+++ b/gcc/vr-values.cc
@@ -103,34 +103,16 @@ check_for_binary_op_overflow (range_query *query,
 			      tree op0, tree op1, bool *ovf, gimple *s = NULL)
 {
   value_range vr0, vr1;
-  if (!query->range_of_expr (vr0, op0, s))
+  if (!query->range_of_expr (vr0, op0, s) || vr0.undefined_p ())
     vr0.set_varying (TREE_TYPE (op0));
-  if (!query->range_of_expr (vr1, op1, s))
+  if (!query->range_of_expr (vr1, op1, s) || vr1.undefined_p ())
     vr1.set_varying (TREE_TYPE (op1));
 
-  tree vr0min, vr0max, vr1min, vr1max;
-  if (vr0.undefined_p () || vr0.varying_p ())
-    {
-      vr0min = vrp_val_min (TREE_TYPE (op0));
-      vr0max = vrp_val_max (TREE_TYPE (op0));
-    }
-  else
-    {
-      tree type = vr0.type ();
-      vr0min = wide_int_to_tree (type, vr0.lower_bound ());
-      vr0max = wide_int_to_tree (type, vr0.upper_bound ());
-    }
-  if (vr1.undefined_p () || vr1.varying_p ())
-    {
-      vr1min = vrp_val_min (TREE_TYPE (op1));
-      vr1max = vrp_val_max (TREE_TYPE (op1));
-    }
-  else
-    {
-      tree type = vr1.type ();
-      vr1min = wide_int_to_tree (type, vr1.lower_bound ());
-      vr1max = wide_int_to_tree (type, vr1.upper_bound ());
-    }
+  tree vr0min = wide_int_to_tree (TREE_TYPE (op0), vr0.lower_bound ());
+  tree vr0max = wide_int_to_tree (TREE_TYPE (op0), vr0.upper_bound ());
+  tree vr1min = wide_int_to_tree (TREE_TYPE (op1), vr1.lower_bound ());
+  tree vr1max = wide_int_to_tree (TREE_TYPE (op1), vr1.upper_bound ());
+
   *ovf = arith_overflowed_p (subcode, type, vr0min,
 			     subcode == MINUS_EXPR ? vr1max : vr1min);
   if (arith_overflowed_p (subcode, type, vr0max,
@@ -152,10 +134,12 @@ check_for_binary_op_overflow (range_query *query,
       widest_int wmin, wmax;
       widest_int w[4];
       int i;
-      w[0] = wi::to_widest (vr0min);
-      w[1] = wi::to_widest (vr0max);
-      w[2] = wi::to_widest (vr1min);
-      w[3] = wi::to_widest (vr1max);
+      signop sign0 = TYPE_SIGN (TREE_TYPE (op0));
+      signop sign1 = TYPE_SIGN (TREE_TYPE (op1));
+      w[0] = widest_int::from (vr0.lower_bound (), sign0);
+      w[1] = widest_int::from (vr0.upper_bound (), sign0);
+      w[2] = widest_int::from (vr1.lower_bound (), sign1);
+      w[3] = widest_int::from (vr1.upper_bound (), sign1);
       for (i = 0; i < 4; i++)
 	{
 	  widest_int wt;
@@ -186,8 +170,10 @@ check_for_binary_op_overflow (range_query *query,
 	}
       /* The result of op0 CODE op1 is known to be in range
 	 [wmin, wmax].  */
-      widest_int wtmin = wi::to_widest (vrp_val_min (type));
-      widest_int wtmax = wi::to_widest (vrp_val_max (type));
+      widest_int wtmin
+	= widest_int::from (irange_val_min (type), TYPE_SIGN (type));
+      widest_int wtmax
+	= widest_int::from (irange_val_max (type), TYPE_SIGN (type));
       /* If all values in [wmin, wmax] are smaller than
 	 [wtmin, wtmax] or all are larger than [wtmin, wtmax],
 	 the arithmetic operation will always overflow.  */
-- 
2.40.0


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [COMMITTED] Rewrite bounds_of_var_in_loop() to use ranges.
  2023-05-01  6:28 [COMMITTED] vrange_storage overhaul Aldy Hernandez
                   ` (6 preceding siblings ...)
  2023-05-01  6:29 ` [COMMITTED] Replace vrp_val* with wide_ints Aldy Hernandez
@ 2023-05-01  6:29 ` Aldy Hernandez
  2023-05-01  6:29 ` [COMMITTED] Convert internal representation of irange to wide_ints Aldy Hernandez
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:29 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

Little by little, bounds_of_var_in_loop() has grown into an
unmaintainable mess.  This patch rewrites the code to use the relevant
range APIs and refactors it to make it more readable.
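
As a rough sketch (illustration only, not part of the patch) of how
the new entry point is meant to be used, mirroring the caller in
gimple-range-fold.cc below; the wrapper function name is hypothetical:

  // Ask range_of_var_in_loop for the range of NAME at PHI inside
  // LOOP, falling back to VARYING when SCEV has nothing useful.
  static void
  iv_range_or_varying (vrange &r, tree name, class loop *loop,
                       gimple *phi, range_query *query)
  {
    if (!range_of_var_in_loop (r, name, loop, phi, query))
      r.set_varying (TREE_TYPE (name));
  }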

gcc/ChangeLog:

	* gimple-range-fold.cc (tree_lower_bound): Delete.
	(tree_upper_bound): Delete.
	(vrp_val_max): Delete.
	(vrp_val_min): Delete.
	(fold_using_range::range_of_ssa_name_with_loop_info): Call
	range_of_var_in_loop.
	* vr-values.cc (valid_value_p): Delete.
	(fix_overflow): Delete.
	(get_scev_info): New.
	(bounds_of_var_in_loop): Refactor into...
	(induction_variable_may_overflow_p): ...this,
	(range_from_loop_direction): ...and this,
	(range_of_var_in_loop): ...and this.
	* vr-values.h (bounds_of_var_in_loop): Delete.
	(range_of_var_in_loop): New.
---
 gcc/gimple-range-fold.cc |  80 +----------
 gcc/vr-values.cc         | 282 ++++++++++++++++-----------------------
 gcc/vr-values.h          |   4 +-
 3 files changed, 117 insertions(+), 249 deletions(-)

diff --git a/gcc/gimple-range-fold.cc b/gcc/gimple-range-fold.cc
index 1b76e6e02a3..96cbd799488 100644
--- a/gcc/gimple-range-fold.cc
+++ b/gcc/gimple-range-fold.cc
@@ -944,60 +944,6 @@ fold_using_range::range_of_cond_expr  (vrange &r, gassign *s, fur_source &src)
   return true;
 }
 
-// Return the lower bound of R as a tree.
-
-static inline tree
-tree_lower_bound (const vrange &r, tree type)
-{
-  if (is_a <irange> (r))
-    return wide_int_to_tree (type, as_a <irange> (r).lower_bound ());
-  // ?? Handle floats when they contain endpoints.
-  return NULL;
-}
-
-// Return the upper bound of R as a tree.
-
-static inline tree
-tree_upper_bound (const vrange &r, tree type)
-{
-  if (is_a <irange> (r))
-    return wide_int_to_tree (type, as_a <irange> (r).upper_bound ());
-  // ?? Handle floats when they contain endpoints.
-  return NULL;
-}
-
-// Return the maximum value for TYPE.
-
-static inline tree
-vrp_val_max (const_tree type)
-{
-  if (INTEGRAL_TYPE_P (type)
-      || POINTER_TYPE_P (type))
-    return wide_int_to_tree (const_cast <tree> (type), irange_val_max (type));
-  if (frange::supports_p (type))
-    {
-      REAL_VALUE_TYPE r = frange_val_max (type);
-      return build_real (const_cast <tree> (type), r);
-    }
-  return NULL_TREE;
-}
-
-// Return the minimum value for TYPE.
-
-static inline tree
-vrp_val_min (const_tree type)
-{
-  if (INTEGRAL_TYPE_P (type)
-      || POINTER_TYPE_P (type))
-    return wide_int_to_tree (const_cast <tree> (type), irange_val_min (type));
-  if (frange::supports_p (type))
-    {
-      REAL_VALUE_TYPE r = frange_val_min (type);
-      return build_real (const_cast <tree> (type), r);
-    }
-  return NULL_TREE;
-}
-
 // If SCEV has any information about phi node NAME, return it as a range in R.
 
 void
@@ -1006,30 +952,8 @@ fold_using_range::range_of_ssa_name_with_loop_info (vrange &r, tree name,
 						    fur_source &src)
 {
   gcc_checking_assert (TREE_CODE (name) == SSA_NAME);
-  tree min, max, type = TREE_TYPE (name);
-  if (bounds_of_var_in_loop (&min, &max, src.query (), l, phi, name))
-    {
-      if (!is_gimple_constant (min))
-	{
-	  if (src.query ()->range_of_expr (r, min, phi) && !r.undefined_p ())
-	    min = tree_lower_bound (r, type);
-	  else
-	    min = vrp_val_min (type);
-	}
-      if (!is_gimple_constant (max))
-	{
-	  if (src.query ()->range_of_expr (r, max, phi) && !r.undefined_p ())
-	    max = tree_upper_bound (r, type);
-	  else
-	    max = vrp_val_max (type);
-	}
-      if (min && max)
-	{
-	  r.set (min, max);
-	  return;
-	}
-    }
-  r.set_varying (type);
+  if (!range_of_var_in_loop (r, name, l, phi, src.query ()))
+    r.set_varying (TREE_TYPE (name));
 }
 
 // -----------------------------------------------------------------------
diff --git a/gcc/vr-values.cc b/gcc/vr-values.cc
index 31df6b85ce6..86c1bf8ebc6 100644
--- a/gcc/vr-values.cc
+++ b/gcc/vr-values.cc
@@ -52,23 +52,6 @@ along with GCC; see the file COPYING3.  If not see
 #include "range-op.h"
 #include "gimple-range.h"
 
-/* Returns true if EXPR is a valid value (as expected by compare_values) --
-   a gimple invariant, or SSA_NAME +- CST.  */
-
-static bool
-valid_value_p (tree expr)
-{
-  if (TREE_CODE (expr) == SSA_NAME)
-    return true;
-
-  if (TREE_CODE (expr) == PLUS_EXPR
-      || TREE_CODE (expr) == MINUS_EXPR)
-    return (TREE_CODE (TREE_OPERAND (expr, 0)) == SSA_NAME
-	    && TREE_CODE (TREE_OPERAND (expr, 1)) == INTEGER_CST);
-
-  return is_gimple_min_invariant (expr);
-}
-
 /* Return true if op is in a boolean [0, 1] value-range.  */
 
 bool
@@ -184,178 +167,139 @@ check_for_binary_op_overflow (range_query *query,
   return true;
 }
 
-static inline void
-fix_overflow (tree *min, tree *max)
+/* Set INIT, STEP, and DIRECTION to the corresponding values of NAME
+   within LOOP, and return TRUE.  Otherwise return FALSE, and set R to
+   the conservative range of NAME within the loop.  */
+
+static bool
+get_scev_info (vrange &r, tree name, gimple *stmt, class loop *l,
+	       tree &init, tree &step, enum ev_direction &dir)
 {
-  /* Even for valid range info, sometimes overflow flag will leak in.
-     As GIMPLE IL should have no constants with TREE_OVERFLOW set, we
-     drop them.  */
-  if (TREE_OVERFLOW_P (*min))
-    *min = drop_tree_overflow (*min);
-  if (TREE_OVERFLOW_P (*max))
-    *max = drop_tree_overflow (*max);
-
-  gcc_checking_assert (compare_values (*min, *max) != 1);
+  tree ev = analyze_scalar_evolution (l, name);
+  tree chrec = instantiate_parameters (l, ev);
+  tree type = TREE_TYPE (name);
+  if (TREE_CODE (chrec) != POLYNOMIAL_CHREC)
+    {
+      r.set_varying (type);
+      return false;
+    }
+  if (is_gimple_min_invariant (chrec))
+    {
+      if (is_gimple_constant (chrec))
+	r.set (chrec, chrec);
+      else
+	r.set_varying (type);
+      return false;
+    }
+
+  init = initial_condition_in_loop_num (chrec, l->num);
+  step = evolution_part_in_loop_num (chrec, l->num);
+  if (!init || !step)
+    {
+      r.set_varying (type);
+      return false;
+    }
+  dir = scev_direction (chrec);
+  if (dir == EV_DIR_UNKNOWN
+      || scev_probably_wraps_p (NULL, init, step, stmt,
+				get_chrec_loop (chrec), true))
+    {
+      r.set_varying (type);
+      return false;
+    }
+  return true;
 }
 
-/* Given a VAR in STMT within LOOP, determine the bounds of the
-   variable and store it in MIN/MAX and return TRUE.  If no bounds
-   could be determined, return FALSE.  */
+/* Return TRUE if STEP * NIT may overflow when calculated in TYPE.  */
 
-bool
-bounds_of_var_in_loop (tree *min, tree *max, range_query *query,
-		       class loop *loop, gimple *stmt, tree var)
+static bool
+induction_variable_may_overflow_p (tree type,
+				   const wide_int &step, const widest_int &nit)
 {
-  tree init, step, chrec, tmin, tmax, type = TREE_TYPE (var);
-  enum ev_direction dir;
-  int_range<2> r;
+  wi::overflow_type ovf;
+  signop sign = TYPE_SIGN (type);
+  widest_int max_step = wi::mul (widest_int::from (step, sign),
+				 nit, sign, &ovf);
 
-  chrec = instantiate_parameters (loop, analyze_scalar_evolution (loop, var));
+  if (ovf || !wi::fits_to_tree_p (max_step, type))
+    return true;
 
-  /* Like in PR19590, scev can return a constant function.  */
-  if (is_gimple_min_invariant (chrec))
-    {
-      *min = *max = chrec;
-      fix_overflow (min, max);
-      return true;
-    }
+  /* For a signed type we have to check whether the result has the
+     expected signedness which is that of the step as number of
+     iterations is unsigned.  */
+  return (sign == SIGNED
+	  && wi::gts_p (max_step, 0) != wi::gts_p (step, 0));
+}
 
-  if (TREE_CODE (chrec) != POLYNOMIAL_CHREC)
-    return false;
+/* Set R to the range from BEGIN to END, assuming the direction of the
+   loop is DIR.  */
 
-  init = initial_condition_in_loop_num (chrec, loop->num);
-  step = evolution_part_in_loop_num (chrec, loop->num);
+static void
+range_from_loop_direction (irange &r, tree type,
+			   const irange &begin, const irange &end,
+			   ev_direction dir)
+{
+  signop sign = TYPE_SIGN (type);
 
-  if (!init || !step)
-    return false;
+  if (begin.undefined_p () || end.undefined_p ())
+    r.set_varying (type);
+  else if (dir == EV_DIR_GROWS)
+    {
+      if (wi::gt_p (begin.lower_bound (), end.upper_bound (), sign))
+	r.set_varying (type);
+      else
+	r = int_range<1> (type, begin.lower_bound (), end.upper_bound ());
+    }
+  else
+    {
+      if (wi::gt_p (end.lower_bound (), begin.upper_bound (), sign))
+	r.set_varying (type);
+      else
+	r = int_range<1> (type, end.lower_bound (), begin.upper_bound ());
+    }
+}
 
-  Value_Range rinit (TREE_TYPE (init));
-  Value_Range rstep (TREE_TYPE (step));
-  /* If INIT is an SSA with a singleton range, set INIT to said
-     singleton, otherwise leave INIT alone.  */
-  if (TREE_CODE (init) == SSA_NAME
-      && query->range_of_expr (rinit, init, stmt))
-    rinit.singleton_p (&init);
-  /* Likewise for step.  */
-  if (TREE_CODE (step) == SSA_NAME
-      && query->range_of_expr (rstep, step, stmt))
-    rstep.singleton_p (&step);
-
-  /* If STEP is symbolic, we can't know whether INIT will be the
-     minimum or maximum value in the range.  Also, unless INIT is
-     a simple expression, compare_values and possibly other functions
-     in tree-vrp won't be able to handle it.  */
-  if (step == NULL_TREE
-      || !is_gimple_min_invariant (step)
-      || !valid_value_p (init))
-    return false;
+/* Set V to the range of NAME in STMT within LOOP.  Return TRUE if a
+   range was found.  */
 
-  dir = scev_direction (chrec);
-  if (/* Do not adjust ranges if we do not know whether the iv increases
-	 or decreases,  ... */
-      dir == EV_DIR_UNKNOWN
-      /* ... or if it may wrap.  */
-      || scev_probably_wraps_p (NULL_TREE, init, step, stmt,
-				get_chrec_loop (chrec), true))
+bool
+range_of_var_in_loop (vrange &v, tree name, class loop *l, gimple *stmt,
+		      range_query *query)
+{
+  tree init, step;
+  enum ev_direction dir;
+  if (!get_scev_info (v, name, stmt, l, init, step, dir))
+    return true;
+
+  // Calculate ranges for the values from SCEV.
+  irange &r = as_a <irange> (v);
+  tree type = TREE_TYPE (init);
+  int_range<2> rinit (type), rstep (type), max_init (type);
+  if (!query->range_of_expr (rinit, init, stmt)
+      || !query->range_of_expr (rstep, step, stmt))
     return false;
 
-  if (POINTER_TYPE_P (type) || !TYPE_MIN_VALUE (type))
-    tmin = lower_bound_in_type (type, type);
-  else
-    tmin = TYPE_MIN_VALUE (type);
-  if (POINTER_TYPE_P (type) || !TYPE_MAX_VALUE (type))
-    tmax = upper_bound_in_type (type, type);
-  else
-    tmax = TYPE_MAX_VALUE (type);
-
-  /* Try to use estimated number of iterations for the loop to constrain the
-     final value in the evolution.  */
-  if (TREE_CODE (step) == INTEGER_CST
-      && is_gimple_val (init)
-      && (TREE_CODE (init) != SSA_NAME
-	  || (query->range_of_expr (r, init, stmt)
-	      && !r.varying_p ()
-	      && !r.undefined_p ())))
+  // Calculate the final range of NAME if possible.
+  if (rinit.singleton_p () && rstep.singleton_p ())
     {
       widest_int nit;
+      if (!max_loop_iterations (l, &nit))
+	return false;
 
-      /* We are only entering here for loop header PHI nodes, so using
-	 the number of latch executions is the correct thing to use.  */
-      if (max_loop_iterations (loop, &nit))
+      if (!induction_variable_may_overflow_p (type, rstep.lower_bound (), nit))
 	{
-	  signop sgn = TYPE_SIGN (TREE_TYPE (step));
-	  wi::overflow_type overflow;
-
-	  widest_int wtmp = wi::mul (wi::to_widest (step), nit, sgn,
-				     &overflow);
-	  /* If the multiplication overflowed we can't do a meaningful
-	     adjustment.  Likewise if the result doesn't fit in the type
-	     of the induction variable.  For a signed type we have to
-	     check whether the result has the expected signedness which
-	     is that of the step as number of iterations is unsigned.  */
-	  if (!overflow
-	      && wi::fits_to_tree_p (wtmp, TREE_TYPE (init))
-	      && (sgn == UNSIGNED
-		  || wi::gts_p (wtmp, 0) == wi::gts_p (wi::to_wide (step), 0)))
-	    {
-	      value_range maxvr, vr0, vr1;
-	      if (!query->range_of_expr (vr0, init, stmt))
-		vr0.set_varying (TREE_TYPE (init));
-	      tree tinit = TREE_TYPE (init);
-	      wide_int winit = wide_int::from (wtmp,
-					       TYPE_PRECISION (tinit),
-					       TYPE_SIGN (tinit));
-	      vr1.set (TREE_TYPE (init), winit, winit);
-
-	      range_op_handler handler (PLUS_EXPR, TREE_TYPE (init));
-	      if (!handler.fold_range (maxvr, TREE_TYPE (init), vr0, vr1))
-		maxvr.set_varying (TREE_TYPE (init));
-
-	      /* Likewise if the addition did.  */
-	      if (!maxvr.varying_p () && !maxvr.undefined_p ())
-		{
-		  int_range<2> initvr;
-
-		  if (!query->range_of_expr (initvr, init, stmt)
-		      || initvr.undefined_p ())
-		    return false;
-
-		  tree initvr_type = initvr.type ();
-		  tree initvr_min = wide_int_to_tree (initvr_type,
-						      initvr.lower_bound ());
-		  tree initvr_max = wide_int_to_tree (initvr_type,
-						      initvr.upper_bound ());
-		  tree maxvr_type = maxvr.type ();
-		  tree maxvr_min = wide_int_to_tree (maxvr_type,
-						     maxvr.lower_bound ());
-		  tree maxvr_max = wide_int_to_tree (maxvr_type,
-						     maxvr.upper_bound ());
-
-		  /* Check if init + nit * step overflows.  Though we checked
-		     scev {init, step}_loop doesn't wrap, it is not enough
-		     because the loop may exit immediately.  Overflow could
-		     happen in the plus expression in this case.  */
-		  if ((dir == EV_DIR_DECREASES
-		       && compare_values (maxvr_min, initvr_min) != -1)
-		      || (dir == EV_DIR_GROWS
-			  && compare_values (maxvr_max, initvr_max) != 1))
-		    return false;
-
-		  tmin = maxvr_min;
-		  tmax = maxvr_max;
-		}
-	    }
+	  // Calculate the max bounds for init (init + niter * step).
+	  wide_int w = wide_int::from (nit, TYPE_PRECISION (type), TYPE_SIGN (type));
+	  int_range<1> niter (type, w, w);
+	  int_range_max max_step;
+	  range_op_handler mult_handler (MULT_EXPR, type);
+	  range_op_handler plus_handler (PLUS_EXPR, type);
+	  if (!mult_handler.fold_range (max_step, type, niter, rstep)
+	      || !plus_handler.fold_range (max_init, type, rinit, max_step))
+	    return false;
 	}
     }
-
-  *min = tmin;
-  *max = tmax;
-  if (dir == EV_DIR_DECREASES)
-    *max = init;
-  else
-    *min = init;
-
-  fix_overflow (min, max);
+  range_from_loop_direction (r, type, rinit, max_init, dir);
   return true;
 }
 
diff --git a/gcc/vr-values.h b/gcc/vr-values.h
index dc0c22df4d8..df79a3a570b 100644
--- a/gcc/vr-values.h
+++ b/gcc/vr-values.h
@@ -74,7 +74,7 @@ private:
 
 extern bool range_fits_type_p (const irange *vr,
 			       unsigned dest_precision, signop dest_sgn);
-extern bool bounds_of_var_in_loop (tree *min, tree *max, range_query *,
-				   class loop *loop, gimple *stmt, tree var);
+extern bool range_of_var_in_loop (vrange &, tree var, class loop *, gimple *,
+				  range_query *);
 
 #endif /* GCC_VR_VALUES_H */
-- 
2.40.0


* [COMMITTED] Convert internal representation of irange to wide_ints.
  2023-05-01  6:28 [COMMITTED] vrange_storage overhaul Aldy Hernandez
                   ` (7 preceding siblings ...)
  2023-05-01  6:29 ` [COMMITTED] Rewrite bounds_of_var_in_loop() to use ranges Aldy Hernandez
@ 2023-05-01  6:29 ` Aldy Hernandez
  2023-05-01  6:29 ` [COMMITTED] Cleanup irange::set Aldy Hernandez
  2023-05-01  6:29 ` [COMMITTED] Inline irange::set_nonzero Aldy Hernandez
  10 siblings, 0 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:29 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

gcc/ChangeLog:

	* range-op.cc (update_known_bitmask): Adjust for irange containing
	wide_ints internally.
	* tree-ssanames.cc (set_nonzero_bits): Same.
	* tree-ssanames.h (set_nonzero_bits): Same.
	* value-range-storage.cc (irange_storage::set_irange): Same.
	(irange_storage::get_irange): Same.
	* value-range.cc (irange::operator=): Same.
	(irange::irange_set): Same.
	(irange::irange_set_1bit_anti_range): Same.
	(irange::irange_set_anti_range): Same.
	(irange::set): Same.
	(irange::verify_range): Same.
	(irange::contains_p): Same.
	(irange::irange_single_pair_union): Same.
	(irange::union_): Same.
	(irange::irange_contains_p): Same.
	(irange::intersect): Same.
	(irange::invert): Same.
	(irange::set_range_from_nonzero_bits): Same.
	(irange::set_nonzero_bits): Same.
	(mask_to_wi): Same.
	(irange::intersect_nonzero_bits): Same.
	(irange::union_nonzero_bits): Same.
	(gt_ggc_mx): Same.
	(gt_pch_nx): Same.
	(tree_range): Same.
	(range_tests_strict_enum): Same.
	(range_tests_misc): Same.
	(range_tests_nonzero_bits): Same.
	* value-range.h (irange::type): Same.
	(irange::varying_compatible_p): Same.
	(irange::irange): Same.
	(int_range::int_range): Same.
	(irange::set_undefined): Same.
	(irange::set_varying): Same.
	(irange::lower_bound): Same.
	(irange::upper_bound): Same.
---
 gcc/range-op.cc            |   3 +-
 gcc/tree-ssanames.cc       |   2 +-
 gcc/tree-ssanames.h        |   2 +-
 gcc/value-range-storage.cc |  22 +--
 gcc/value-range.cc         | 267 ++++++++++++++++---------------------
 gcc/value-range.h          |  70 ++++------
 6 files changed, 153 insertions(+), 213 deletions(-)

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index fc0eef998e4..3ab2c665901 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -89,7 +89,8 @@ update_known_bitmask (irange &r, tree_code code,
   bit_value_binop (code, sign, prec, &value, &mask,
 		   lh_sign, lh_prec, lh_value, lh_mask,
 		   rh_sign, rh_prec, rh_value, rh_mask);
-  r.set_nonzero_bits (value | mask);
+  wide_int tmp = wide_int::from (value | mask, prec, sign);
+  r.set_nonzero_bits (tmp);
 }
 
 // Return the upper limit for a type.
diff --git a/gcc/tree-ssanames.cc b/gcc/tree-ssanames.cc
index a510dfa031a..5fdb6a37e9f 100644
--- a/gcc/tree-ssanames.cc
+++ b/gcc/tree-ssanames.cc
@@ -456,7 +456,7 @@ set_ptr_nonnull (tree name)
 /* Update the non-zero bits bitmask of NAME.  */
 
 void
-set_nonzero_bits (tree name, const wide_int_ref &mask)
+set_nonzero_bits (tree name, const wide_int &mask)
 {
   gcc_assert (!POINTER_TYPE_P (TREE_TYPE (name)));
 
diff --git a/gcc/tree-ssanames.h b/gcc/tree-ssanames.h
index b09e71bf779..f3fa609208a 100644
--- a/gcc/tree-ssanames.h
+++ b/gcc/tree-ssanames.h
@@ -58,7 +58,7 @@ struct GTY(()) ptr_info_def
 
 /* Sets the value range to SSA.  */
 extern bool set_range_info (tree, const vrange &);
-extern void set_nonzero_bits (tree, const wide_int_ref &);
+extern void set_nonzero_bits (tree, const wide_int &);
 extern wide_int get_nonzero_bits (const_tree);
 extern bool ssa_name_has_boolean_range (tree);
 extern void init_ssanames (struct function *, int);
diff --git a/gcc/value-range-storage.cc b/gcc/value-range-storage.cc
index 98a6d99af78..7d2de5e8384 100644
--- a/gcc/value-range-storage.cc
+++ b/gcc/value-range-storage.cc
@@ -300,10 +300,7 @@ irange_storage::set_irange (const irange &r)
       write_wide_int (val, len, r.lower_bound (i));
       write_wide_int (val, len, r.upper_bound (i));
     }
-  if (r.m_nonzero_mask)
-    write_wide_int (val, len, wi::to_wide (r.m_nonzero_mask));
-  else
-    write_wide_int (val, len, wi::minus_one (m_precision));
+  write_wide_int (val, len, r.m_nonzero_mask);
 
   if (flag_checking)
     {
@@ -341,17 +338,16 @@ irange_storage::get_irange (irange &r, tree type) const
   gcc_checking_assert (TYPE_PRECISION (type) == m_precision);
   const HOST_WIDE_INT *val = &m_val[0];
   const unsigned char *len = lengths_address ();
-  wide_int w;
 
   // Handle the common case where R can fit the new range.
   if (r.m_max_ranges >= m_num_ranges)
     {
       r.m_kind = VR_RANGE;
       r.m_num_ranges = m_num_ranges;
+      r.m_type = type;
       for (unsigned i = 0; i < m_num_ranges * 2; ++i)
 	{
-	  read_wide_int (w, val, *len, m_precision);
-	  r.m_base[i] = wide_int_to_tree (type, w);
+	  read_wide_int (r.m_base[i], val, *len, m_precision);
 	  val += *len++;
 	}
     }
@@ -370,15 +366,9 @@ irange_storage::get_irange (irange &r, tree type) const
 	  r.union_ (tmp);
 	}
     }
-  read_wide_int (w, val, *len, m_precision);
-  if (w == -1)
-    r.m_nonzero_mask = NULL;
-  else
-    {
-      r.m_nonzero_mask = wide_int_to_tree (type, w);
-      if (r.m_kind == VR_VARYING)
-	r.m_kind = VR_RANGE;
-    }
+  read_wide_int (r.m_nonzero_mask, val, *len, m_precision);
+  if (r.m_kind == VR_VARYING)
+    r.m_kind = VR_RANGE;
 
   if (flag_checking)
     r.verify_range ();
diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index cf694ccaa28..2dc6b98bc63 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -914,6 +914,7 @@ irange::operator= (const irange &src)
     m_base[x - 1] = src.m_base[src.m_num_ranges * 2 - 1];
 
   m_num_ranges = lim;
+  m_type = src.m_type;
   m_kind = src.m_kind;
   m_nonzero_mask = src.m_nonzero_mask;
   if (m_max_ranges == 1)
@@ -963,11 +964,12 @@ get_legacy_range (const irange &r, tree &min, tree &max)
 void
 irange::irange_set (tree type, const wide_int &min, const wide_int &max)
 {
-  m_base[0] = wide_int_to_tree (type, min);
-  m_base[1] = wide_int_to_tree (type, max);
+  m_type = type;
+  m_base[0] = min;
+  m_base[1] = max;
   m_num_ranges = 1;
   m_kind = VR_RANGE;
-  m_nonzero_mask = NULL;
+  m_nonzero_mask = wi::minus_one (TYPE_PRECISION (type));
   normalize_kind ();
 
   if (flag_checking)
@@ -978,28 +980,26 @@ void
 irange::irange_set_1bit_anti_range (tree type,
 				    const wide_int &min, const wide_int &max)
 {
-  gcc_checking_assert (TYPE_PRECISION (type) == 1);
+  unsigned prec = TYPE_PRECISION (type);
+  signop sign = TYPE_SIGN (type);
+  gcc_checking_assert (prec == 1);
 
   if (min == max)
     {
+      wide_int tmp;
       // Since these are 1-bit quantities, they can only be [MIN,MIN]
       // or [MAX,MAX].
-      if (min == wi::to_wide (TYPE_MIN_VALUE (type)))
-	{
-	  wide_int tmp = wi::to_wide (TYPE_MAX_VALUE (type));
-	  set (type, tmp, tmp);
-	}
+      if (min == wi::min_value (prec, sign))
+	tmp = wi::max_value (prec, sign);
       else
-	{
-	  wide_int tmp = wi::to_wide (TYPE_MIN_VALUE (type));
-	  set (type, tmp, tmp);
-	}
+	tmp = wi::min_value (prec, sign);
+      set (type, tmp, tmp);
     }
   else
     {
       // The only alternative is [MIN,MAX], which is the empty range.
-      gcc_checking_assert (min == wi::to_wide (TYPE_MIN_VALUE (type)));
-      gcc_checking_assert (max == wi::to_wide (TYPE_MAX_VALUE (type)));
+      gcc_checking_assert (min == wi::min_value (prec, sign));
+      gcc_checking_assert (max == wi::max_value (prec, sign));
       set_undefined ();
     }
   if (flag_checking)
@@ -1027,8 +1027,8 @@ irange::irange_set_anti_range (tree type,
     {
       wide_int lim1 = wi::sub (min, 1, sign, &ovf);
       gcc_checking_assert (ovf != wi::OVF_OVERFLOW);
-      m_base[0] = wide_int_to_tree (type, type_range.lower_bound (0));
-      m_base[1] = wide_int_to_tree (type, lim1);
+      m_base[0] = type_range.lower_bound (0);
+      m_base[1] = lim1;
       m_num_ranges = 1;
     }
   if (wi::ne_p (max, type_range.upper_bound ()))
@@ -1040,14 +1040,13 @@ irange::irange_set_anti_range (tree type,
 	}
       wide_int lim2 = wi::add (max, 1, sign, &ovf);
       gcc_checking_assert (ovf != wi::OVF_OVERFLOW);
-      m_base[m_num_ranges * 2] = wide_int_to_tree (type, lim2);
-      m_base[m_num_ranges * 2 + 1]
-	= wide_int_to_tree (type, type_range.upper_bound (0));
+      m_base[m_num_ranges * 2] = lim2;
+      m_base[m_num_ranges * 2 + 1] = type_range.upper_bound (0);
       ++m_num_ranges;
     }
 
   m_kind = VR_RANGE;
-  m_nonzero_mask = NULL;
+  m_nonzero_mask = wi::minus_one (TYPE_PRECISION (type));
   normalize_kind ();
 
   if (flag_checking)
@@ -1079,6 +1078,7 @@ irange::set (tree type, const wide_int &rmin, const wide_int &rmax,
       return;
     }
 
+  m_type = type;
   signop sign = TYPE_SIGN (type);
   unsigned prec = TYPE_PRECISION (type);
   wide_int min = wide_int::from (rmin, prec, sign);
@@ -1134,12 +1134,14 @@ irange::verify_range ()
       return;
     }
   gcc_checking_assert (m_num_ranges <= m_max_ranges);
+  unsigned prec = TYPE_PRECISION (m_type);
   if (m_kind == VR_VARYING)
     {
-      gcc_checking_assert (!m_nonzero_mask
-			   || wi::to_wide (m_nonzero_mask) == -1);
+      gcc_checking_assert (m_nonzero_mask == -1);
       gcc_checking_assert (m_num_ranges == 1);
       gcc_checking_assert (varying_compatible_p ());
+      gcc_checking_assert (lower_bound ().get_precision () == prec);
+      gcc_checking_assert (upper_bound ().get_precision () == prec);
       return;
     }
   gcc_checking_assert (m_num_ranges != 0);
@@ -1148,9 +1150,12 @@ irange::verify_range ()
     {
       wide_int lb = lower_bound (i);
       wide_int ub = upper_bound (i);
-      int c = wi::cmp (lb, ub, TYPE_SIGN (type ()));
+      gcc_checking_assert (lb.get_precision () == prec);
+      gcc_checking_assert (ub.get_precision () == prec);
+      int c = wi::cmp (lb, ub, TYPE_SIGN (m_type));
       gcc_checking_assert (c == 0 || c == -1);
     }
+  gcc_checking_assert (m_nonzero_mask.get_precision () == prec);
 }
 
 bool
@@ -1217,9 +1222,9 @@ irange::contains_p (const wide_int &cst) const
     return false;
 
   // See if we can exclude CST based on the nonzero bits.
-  if (m_nonzero_mask
+  if (m_nonzero_mask != -1
       && cst != 0
-      && wi::bit_and (wi::to_wide (m_nonzero_mask), cst) == 0)
+      && wi::bit_and (m_nonzero_mask, cst) == 0)
     return false;
 
   signop sign = TYPE_SIGN (type ());
@@ -1243,17 +1248,18 @@ irange::irange_single_pair_union (const irange &r)
   gcc_checking_assert (!undefined_p () && !varying_p ());
   gcc_checking_assert (!r.undefined_p () && !varying_p ());
 
-  signop sign = TYPE_SIGN (TREE_TYPE (m_base[0]));
+  signop sign = TYPE_SIGN (m_type);
   // Check if current lower bound is also the new lower bound.
-  if (wi::le_p (wi::to_wide (m_base[0]), wi::to_wide (r.m_base[0]), sign))
+  if (wi::le_p (m_base[0], r.m_base[0], sign))
     {
       // If current upper bound is new upper bound, we're done.
-      if (wi::le_p (wi::to_wide (r.m_base[1]), wi::to_wide (m_base[1]), sign))
+      if (wi::le_p (r.m_base[1], m_base[1], sign))
 	return union_nonzero_bits (r);
       // Otherwise R has the new upper bound.
       // Check for overlap/touching ranges, or single target range.
       if (m_max_ranges == 1
-	  || wi::to_widest (m_base[1]) + 1 >= wi::to_widest (r.m_base[0]))
+	  || (widest_int::from (m_base[1], sign) + 1
+	      >= widest_int::from (r.m_base[0], TYPE_SIGN (r.m_type))))
 	m_base[1] = r.m_base[1];
       else
 	{
@@ -1267,15 +1273,16 @@ irange::irange_single_pair_union (const irange &r)
     }
 
   // Set the new lower bound to R's lower bound.
-  tree lb = m_base[0];
+  wide_int lb = m_base[0];
   m_base[0] = r.m_base[0];
 
   // If R fully contains THIS range, just set the upper bound.
-  if (wi::ge_p (wi::to_wide (r.m_base[1]), wi::to_wide (m_base[1]), sign))
+  if (wi::ge_p (r.m_base[1], m_base[1], sign))
     m_base[1] = r.m_base[1];
   // Check for overlapping ranges, or target limited to a single range.
   else if (m_max_ranges == 1
-	   || wi::to_widest (r.m_base[1]) + 1 >= wi::to_widest (lb))
+	   || (widest_int::from (r.m_base[1], TYPE_SIGN (r.m_type)) + 1
+	       >= widest_int::from (lb, sign)))
     ;
   else
     {
@@ -1336,13 +1343,15 @@ irange::union_ (const vrange &v)
   // the merge is performed.
   //
   // [Xi,Yi]..[Xn,Yn]  U  [Xj,Yj]..[Xm,Ym]   -->  [Xk,Yk]..[Xp,Yp]
-  auto_vec<tree, 20> res (m_num_ranges * 2 + r.m_num_ranges * 2);
+  auto_vec<wide_int, 20> res (m_num_ranges * 2 + r.m_num_ranges * 2);
   unsigned i = 0, j = 0, k = 0;
+  signop sign = TYPE_SIGN (m_type);
 
   while (i < m_num_ranges * 2 && j < r.m_num_ranges * 2)
     {
       // lower of Xi and Xj is the lowest point.
-      if (wi::to_widest (m_base[i]) <= wi::to_widest (r.m_base[j]))
+      if (widest_int::from (m_base[i], sign)
+	  <= widest_int::from (r.m_base[j], sign))
 	{
 	  res.quick_push (m_base[i]);
 	  res.quick_push (m_base[i + 1]);
@@ -1375,10 +1384,12 @@ irange::union_ (const vrange &v)
   for (j = 2; j < k ; j += 2)
     {
       // Current upper+1 is >= lower bound next pair, then we merge ranges.
-      if (wi::to_widest (res[i - 1]) + 1 >= wi::to_widest (res[j]))
+      if (widest_int::from (res[i - 1], sign) + 1
+	  >= widest_int::from (res[j], sign))
 	{
 	  // New upper bounds is greater of current or the next one.
-	  if (wi::to_widest (res[j + 1]) > wi::to_widest (res[i - 1]))
+	  if (widest_int::from (res[j + 1], sign)
+	      > widest_int::from (res[i - 1], sign))
 	    res[i - 1] = res[j + 1];
 	}
       else
@@ -1424,18 +1435,18 @@ irange::irange_contains_p (const irange &r) const
 
   // In order for THIS to fully contain R, all of the pairs within R must
   // be fully contained by the pairs in this object.
-  signop sign = TYPE_SIGN (TREE_TYPE(m_base[0]));
+  signop sign = TYPE_SIGN (m_type);
   unsigned ri = 0;
   unsigned i = 0;
-  tree rl = r.m_base[0];
-  tree ru = r.m_base[1];
-  tree l = m_base[0];
-  tree u = m_base[1];
+  wide_int rl = r.m_base[0];
+  wide_int ru = r.m_base[1];
+  wide_int l = m_base[0];
+  wide_int u = m_base[1];
   while (1)
     {
       // If r is contained within this range, move to the next R
-      if (wi::ge_p (wi::to_wide (rl), wi::to_wide (l), sign)
-	  && wi::le_p (wi::to_wide (ru), wi::to_wide (u), sign))
+      if (wi::ge_p (rl, l, sign)
+	  && wi::le_p (ru, u, sign))
 	{
 	  // This pair is OK, Either done, or bump to the next.
 	  if (++ri >= r.num_pairs ())
@@ -1445,7 +1456,7 @@ irange::irange_contains_p (const irange &r) const
 	  continue;
 	}
       // Otherwise, check if this's pair occurs before R's.
-      if (wi::lt_p (wi::to_wide (u), wi::to_wide (rl), sign))
+      if (wi::lt_p (u, rl, sign))
 	{
 	  // There's still at least one pair of R left.
 	  if (++i >= num_pairs ())
@@ -1498,7 +1509,7 @@ irange::intersect (const vrange &v)
   if (r.irange_contains_p (*this))
     return intersect_nonzero_bits (r);
 
-  signop sign = TYPE_SIGN (TREE_TYPE(m_base[0]));
+  signop sign = TYPE_SIGN (m_type);
   unsigned bld_pair = 0;
   unsigned bld_lim = m_max_ranges;
   int_range_max r2 (*this);
@@ -1507,17 +1518,17 @@ irange::intersect (const vrange &v)
   for (unsigned i = 0; i < r.num_pairs (); )
     {
       // If r1's upper is < r2's lower, we can skip r1's pair.
-      tree ru = r.m_base[i * 2 + 1];
-      tree r2l = r2.m_base[i2 * 2];
-      if (wi::lt_p (wi::to_wide (ru), wi::to_wide (r2l), sign))
+      wide_int ru = r.m_base[i * 2 + 1];
+      wide_int r2l = r2.m_base[i2 * 2];
+      if (wi::lt_p (ru, r2l, sign))
 	{
 	  i++;
 	  continue;
 	}
       // Likewise, skip r2's pair if its excluded.
-      tree r2u = r2.m_base[i2 * 2 + 1];
-      tree rl = r.m_base[i * 2];
-      if (wi::lt_p (wi::to_wide (r2u), wi::to_wide (rl), sign))
+      wide_int r2u = r2.m_base[i2 * 2 + 1];
+      wide_int rl = r.m_base[i * 2];
+      if (wi::lt_p (r2u, rl, sign))
 	{
 	  i2++;
 	  if (i2 < r2_lim)
@@ -1531,7 +1542,7 @@ irange::intersect (const vrange &v)
       // set.
       if (bld_pair < bld_lim)
 	{
-	  if (wi::ge_p (wi::to_wide (rl), wi::to_wide (r2l), sign))
+	  if (wi::ge_p (rl, r2l, sign))
 	    m_base[bld_pair * 2] = rl;
 	  else
 	    m_base[bld_pair * 2] = r2l;
@@ -1541,7 +1552,7 @@ irange::intersect (const vrange &v)
 	bld_pair--;
 
       // ...and choose the lower of the upper bounds.
-      if (wi::le_p (wi::to_wide (ru), wi::to_wide (r2u), sign))
+      if (wi::le_p (ru, r2u, sign))
 	{
 	  m_base[bld_pair * 2 + 1] = ru;
 	  bld_pair++;
@@ -1604,27 +1615,27 @@ irange::intersect (const wide_int& lb, const wide_int& ub)
   unsigned pair_lim = num_pairs ();
   for (unsigned i = 0; i < pair_lim; i++)
     {
-      tree pairl = m_base[i * 2];
-      tree pairu = m_base[i * 2 + 1];
+      wide_int pairl = m_base[i * 2];
+      wide_int pairu = m_base[i * 2 + 1];
       // Once UB is less than a pairs lower bound, we're done.
-      if (wi::lt_p (ub, wi::to_wide (pairl), sign))
+      if (wi::lt_p (ub, pairl, sign))
 	break;
       // if LB is greater than this pairs upper, this pair is excluded.
-      if (wi::lt_p (wi::to_wide (pairu), lb, sign))
+      if (wi::lt_p (pairu, lb, sign))
 	continue;
 
       // Must be some overlap.  Find the highest of the lower bounds,
       // and set it
-      if (wi::gt_p (lb, wi::to_wide (pairl), sign))
-	m_base[bld_index * 2] = wide_int_to_tree (range_type, lb);
+      if (wi::gt_p (lb, pairl, sign))
+	m_base[bld_index * 2] = lb;
       else
 	m_base[bld_index * 2] = pairl;
 
       // ...and choose the lower of the upper bounds and if the base pair
       // has the lower upper bound, need to check next pair too.
-      if (wi::lt_p (ub, wi::to_wide (pairu), sign))
+      if (wi::lt_p (ub, pairu, sign))
 	{
-	  m_base[bld_index++ * 2 + 1] = wide_int_to_tree (range_type, ub);
+	  m_base[bld_index++ * 2 + 1] = ub;
 	  break;
 	}
       else
@@ -1696,12 +1707,12 @@ irange::invert ()
   signop sign = TYPE_SIGN (ttype);
   wide_int type_min = wi::min_value (prec, sign);
   wide_int type_max = wi::max_value (prec, sign);
-  m_nonzero_mask = NULL;
+  m_nonzero_mask = wi::minus_one (prec);
   if (m_num_ranges == m_max_ranges
       && lower_bound () != type_min
       && upper_bound () != type_max)
     {
-      m_base[1] = wide_int_to_tree (ttype, type_max);
+      m_base[1] = type_max;
       m_num_ranges = 1;
       return;
     }
@@ -1723,9 +1734,9 @@ irange::invert ()
   // which doesn't set the underflow bit.
   if (type_min != orig_range.lower_bound ())
     {
-      m_base[nitems++] = wide_int_to_tree (ttype, type_min);
+      m_base[nitems++] = type_min;
       tmp = subtract_one (orig_range.lower_bound (), ttype, ovf);
-      m_base[nitems++] = wide_int_to_tree (ttype, tmp);
+      m_base[nitems++] = tmp;
       if (ovf)
 	nitems = 0;
     }
@@ -1738,11 +1749,10 @@ irange::invert ()
 	{
 	  // The middle ranges cannot have MAX/MIN, so there's no need
 	  // to check for unsigned overflow on the +1 and -1 here.
-	  tmp = wi::add (wi::to_wide (orig_range.m_base[j]), 1, sign, &ovf);
-	  m_base[nitems++] = wide_int_to_tree (ttype, tmp);
-	  tmp = subtract_one (wi::to_wide (orig_range.m_base[j + 1]),
-			      ttype, ovf);
-	  m_base[nitems++] = wide_int_to_tree (ttype, tmp);
+	  tmp = wi::add (orig_range.m_base[j], 1, sign, &ovf);
+	  m_base[nitems++] = tmp;
+	  tmp = subtract_one (orig_range.m_base[j + 1], ttype, ovf);
+	  m_base[nitems++] = tmp;
 	  if (ovf)
 	    nitems -= 2;
 	}
@@ -1753,11 +1763,11 @@ irange::invert ()
   // However, if this will overflow on the PLUS 1, don't even bother.
   // This also handles adding one to an unsigned MAX, which doesn't
   // set the overflow bit.
-  if (type_max != wi::to_wide (orig_range.m_base[i]))
+  if (type_max != orig_range.m_base[i])
     {
-      tmp = add_one (wi::to_wide (orig_range.m_base[i]), ttype, ovf);
-      m_base[nitems++] = wide_int_to_tree (ttype, tmp);
-      m_base[nitems++] = wide_int_to_tree (ttype, type_max);
+      tmp = add_one (orig_range.m_base[i], ttype, ovf);
+      m_base[nitems++] = tmp;
+      m_base[nitems++] = type_max;
       if (ovf)
 	nitems -= 2;
     }
@@ -1794,21 +1804,21 @@ bool
 irange::set_range_from_nonzero_bits ()
 {
   gcc_checking_assert (!undefined_p ());
-  if (!m_nonzero_mask)
+  if (m_nonzero_mask == -1)
     return false;
-  unsigned popcount = wi::popcount (wi::to_wide (m_nonzero_mask));
+  unsigned popcount = wi::popcount (m_nonzero_mask);
 
   // If we have only one bit set in the mask, we can figure out the
   // range immediately.
   if (popcount == 1)
     {
       // Make sure we don't pessimize the range.
-      if (!contains_p (wi::to_wide (m_nonzero_mask)))
+      if (!contains_p (m_nonzero_mask))
 	return false;
 
       bool has_zero = contains_zero_p (*this);
-      tree nz = m_nonzero_mask;
-      set (nz, nz);
+      wide_int nz = m_nonzero_mask;
+      set (m_type, nz, nz);
       m_nonzero_mask = nz;
       if (has_zero)
 	{
@@ -1827,26 +1837,15 @@ irange::set_range_from_nonzero_bits ()
 }
 
 void
-irange::set_nonzero_bits (const wide_int_ref &bits)
+irange::set_nonzero_bits (const wide_int &bits)
 {
   gcc_checking_assert (!undefined_p ());
-  unsigned prec = TYPE_PRECISION (type ());
-
-  if (bits == -1)
-    {
-      m_nonzero_mask = NULL;
-      normalize_kind ();
-      if (flag_checking)
-	verify_range ();
-      return;
-    }
 
   // Drop VARYINGs with a nonzero mask to a plain range.
   if (m_kind == VR_VARYING && bits != -1)
     m_kind = VR_RANGE;
 
-  wide_int nz = wide_int::from (bits, prec, TYPE_SIGN (type ()));
-  m_nonzero_mask = wide_int_to_tree (type (), nz);
+  m_nonzero_mask = bits;
   if (set_range_from_nonzero_bits ())
     return;
 
@@ -1870,21 +1869,10 @@ irange::get_nonzero_bits () const
   // the mask precisely up to date at all times.  Instead, we default
   // to -1 and set it when explicitly requested.  However, this
   // function will always return the correct mask.
-  if (m_nonzero_mask)
-    return wi::to_wide (m_nonzero_mask) & get_nonzero_bits_from_range ();
-  else
+  if (m_nonzero_mask == -1)
     return get_nonzero_bits_from_range ();
-}
-
-// Convert tree mask to wide_int.  Returns -1 for NULL masks.
-
-inline wide_int
-mask_to_wi (tree mask, tree type)
-{
-  if (mask)
-    return wi::to_wide (mask);
   else
-    return wi::shwi (-1, TYPE_PRECISION (type));
+    return m_nonzero_mask & get_nonzero_bits_from_range ();
 }
 
 // Intersect the nonzero bits in R into THIS and normalize the range.
@@ -1895,7 +1883,7 @@ irange::intersect_nonzero_bits (const irange &r)
 {
   gcc_checking_assert (!undefined_p () && !r.undefined_p ());
 
-  if (!m_nonzero_mask && !r.m_nonzero_mask)
+  if (m_nonzero_mask == -1 && r.m_nonzero_mask == -1)
     {
       normalize_kind ();
       if (flag_checking)
@@ -1904,15 +1892,14 @@ irange::intersect_nonzero_bits (const irange &r)
     }
 
   bool changed = false;
-  tree t = type ();
-  if (mask_to_wi (m_nonzero_mask, t) != mask_to_wi (r.m_nonzero_mask, t))
+  if (m_nonzero_mask != r.m_nonzero_mask)
     {
       wide_int nz = get_nonzero_bits () & r.get_nonzero_bits ();
       // If the nonzero bits did not change, return false.
       if (nz == get_nonzero_bits ())
 	return false;
 
-      m_nonzero_mask = wide_int_to_tree (t, nz);
+      m_nonzero_mask = nz;
       if (set_range_from_nonzero_bits ())
 	return true;
       changed = true;
@@ -1931,7 +1918,7 @@ irange::union_nonzero_bits (const irange &r)
 {
   gcc_checking_assert (!undefined_p () && !r.undefined_p ());
 
-  if (!m_nonzero_mask && !r.m_nonzero_mask)
+  if (m_nonzero_mask == -1 && r.m_nonzero_mask == -1)
     {
       normalize_kind ();
       if (flag_checking)
@@ -1940,11 +1927,9 @@ irange::union_nonzero_bits (const irange &r)
     }
 
   bool changed = false;
-  tree t = type ();
-  if (mask_to_wi (m_nonzero_mask, t) != mask_to_wi (r.m_nonzero_mask, t))
+  if (m_nonzero_mask != r.m_nonzero_mask)
     {
-      wide_int nz = get_nonzero_bits () | r.get_nonzero_bits ();
-      m_nonzero_mask = wide_int_to_tree (t, nz);
+      m_nonzero_mask = get_nonzero_bits () | r.get_nonzero_bits ();
       // No need to call set_range_from_nonzero_bits, because we'll
       // never narrow the range.  Besides, it would cause endless
       // recursion because of the union_ in
@@ -2005,25 +1990,15 @@ vrp_operand_equal_p (const_tree val1, const_tree val2)
 void
 gt_ggc_mx (irange *x)
 {
-  for (unsigned i = 0; i < x->m_num_ranges; ++i)
-    {
-      gt_ggc_mx (x->m_base[i * 2]);
-      gt_ggc_mx (x->m_base[i * 2 + 1]);
-    }
-  if (x->m_nonzero_mask)
-    gt_ggc_mx (x->m_nonzero_mask);
+  if (!x->undefined_p ())
+    gt_ggc_mx (x->m_type);
 }
 
 void
 gt_pch_nx (irange *x)
 {
-  for (unsigned i = 0; i < x->m_num_ranges; ++i)
-    {
-      gt_pch_nx (x->m_base[i * 2]);
-      gt_pch_nx (x->m_base[i * 2 + 1]);
-    }
-  if (x->m_nonzero_mask)
-    gt_pch_nx (x->m_nonzero_mask);
+  if (!x->undefined_p ())
+    gt_pch_nx (x->m_type);
 }
 
 void
@@ -2034,8 +2009,6 @@ gt_pch_nx (irange *x, gt_pointer_operator op, void *cookie)
       op (&x->m_base[i * 2], NULL, cookie);
       op (&x->m_base[i * 2 + 1], NULL, cookie);
     }
-  if (x->m_nonzero_mask)
-    op (&x->m_nonzero_mask, NULL, cookie);
 }
 
 void
@@ -2144,12 +2117,6 @@ range (tree type, int a, int b, value_range_kind kind = VR_RANGE)
   return int_range<2> (type, w1, w2, kind);
 }
 
-static int_range<2>
-tree_range (tree a, tree b, value_range_kind kind = VR_RANGE)
-{
-  return int_range<2> (TREE_TYPE (a), wi::to_wide (a), wi::to_wide (b), kind);
-}
-
 static int_range<2>
 range_int (int a, int b, value_range_kind kind = VR_RANGE)
 {
@@ -2328,7 +2295,9 @@ range_tests_strict_enum ()
   ASSERT_FALSE (ir1.varying_p ());
 
   // The same test as above, but using TYPE_{MIN,MAX}_VALUE instead of [0,3].
-  vr1 = tree_range (TYPE_MIN_VALUE (rtype), TYPE_MAX_VALUE (rtype));
+  vr1 = int_range<2> (rtype,
+		      wi::to_wide (TYPE_MIN_VALUE (rtype)),
+		      wi::to_wide (TYPE_MAX_VALUE (rtype)));
   ir1 = vr1;
   ASSERT_TRUE (ir1 == vr1);
   ASSERT_FALSE (ir1.varying_p ());
@@ -2522,11 +2491,11 @@ range_tests_misc ()
   ASSERT_TRUE (vv.contains_p (UINT (2)));
   ASSERT_TRUE (vv.num_pairs () == 3);
 
-  r0 = range_uint (1, 1);
+  r0 = range_int (1, 1);
   // And union it with  [0,0][2,2][4,MAX] multi range
   r0.union_ (vv);
   // The result should be [0,2][4,MAX], or ~[3,3]  but it must contain 2
-  ASSERT_TRUE (r0.contains_p (UINT (2)));
+  ASSERT_TRUE (r0.contains_p (INT (2)));
 }
 
 static void
@@ -2536,33 +2505,33 @@ range_tests_nonzero_bits ()
 
   // Adding nonzero bits to a varying drops the varying.
   r0.set_varying (integer_type_node);
-  r0.set_nonzero_bits (255);
+  r0.set_nonzero_bits (INT (255));
   ASSERT_TRUE (!r0.varying_p ());
   // Dropping the nonzero bits brings us back to varying.
-  r0.set_nonzero_bits (-1);
+  r0.set_nonzero_bits (INT (-1));
   ASSERT_TRUE (r0.varying_p ());
 
   // Test contains_p with nonzero bits.
   r0.set_zero (integer_type_node);
   ASSERT_TRUE (r0.contains_p (INT (0)));
   ASSERT_FALSE (r0.contains_p (INT (1)));
-  r0.set_nonzero_bits (0xfe);
+  r0.set_nonzero_bits (INT (0xfe));
   ASSERT_FALSE (r0.contains_p (INT (0x100)));
   ASSERT_FALSE (r0.contains_p (INT (0x3)));
 
   // Union of nonzero bits.
   r0.set_varying (integer_type_node);
-  r0.set_nonzero_bits (0xf0);
+  r0.set_nonzero_bits (INT (0xf0));
   r1.set_varying (integer_type_node);
-  r1.set_nonzero_bits (0xf);
+  r1.set_nonzero_bits (INT (0xf));
   r0.union_ (r1);
   ASSERT_TRUE (r0.get_nonzero_bits () == 0xff);
 
   // Intersect of nonzero bits.
   r0 = range_int (0, 255);
-  r0.set_nonzero_bits (0xfe);
+  r0.set_nonzero_bits (INT (0xfe));
   r1.set_varying (integer_type_node);
-  r1.set_nonzero_bits (0xf0);
+  r1.set_nonzero_bits (INT (0xf0));
   r0.intersect (r1);
   ASSERT_TRUE (r0.get_nonzero_bits () == 0xf0);
 
@@ -2579,13 +2548,13 @@ range_tests_nonzero_bits ()
   x = wi::bit_not (x);
   r0.set_nonzero_bits (x); 	// 0xff..ff00
   r1.set_varying (integer_type_node);
-  r1.set_nonzero_bits (0xff);
+  r1.set_nonzero_bits (INT (0xff));
   r0.union_ (r1);
   ASSERT_TRUE (r0.varying_p ());
 
   // Test that setting a nonzero bit of 1 does not pessimize the range.
   r0.set_zero (integer_type_node);
-  r0.set_nonzero_bits (1);
+  r0.set_nonzero_bits (INT (1));
   ASSERT_TRUE (r0.zero_p ());
 }
 
diff --git a/gcc/value-range.h b/gcc/value-range.h
index b040e2f254f..9f82b0011c7 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -164,12 +164,12 @@ public:
 
   // Nonzero masks.
   wide_int get_nonzero_bits () const;
-  void set_nonzero_bits (const wide_int_ref &bits);
+  void set_nonzero_bits (const wide_int &bits);
 
 protected:
   virtual void set (tree, tree, value_range_kind = VR_RANGE) override;
   virtual bool contains_p (tree cst) const override;
-  irange (tree *, unsigned);
+  irange (wide_int *, unsigned);
 
    // In-place operators.
   void irange_set (tree type, const wide_int &, const wide_int &);
@@ -197,8 +197,9 @@ private:
   bool intersect (const wide_int& lb, const wide_int& ub);
   unsigned char m_num_ranges;
   const unsigned char m_max_ranges;
-  tree m_nonzero_mask;
-  tree *m_base;
+  tree m_type;
+  wide_int m_nonzero_mask;
+  wide_int *m_base;
 };
 
 // Here we describe an irange with N pairs of ranges.  The storage for
@@ -224,7 +225,7 @@ private:
   template <unsigned X> friend void gt_pch_nx (int_range<X> *,
 					       gt_pointer_operator, void *);
 
-  tree m_ranges[N*2];
+  wide_int m_ranges[N*2];
 };
 
 // Unsupported temporaries may be created by ranger before it's known
@@ -651,7 +652,7 @@ inline tree
 irange::type () const
 {
   gcc_checking_assert (m_num_ranges > 0);
-  return TREE_TYPE (m_base[0]);
+  return m_type;
 }
 
 inline bool
@@ -660,23 +661,19 @@ irange::varying_compatible_p () const
   if (m_num_ranges != 1)
     return false;
 
-  tree l = m_base[0];
-  tree u = m_base[1];
-  tree t = TREE_TYPE (l);
+  const wide_int &l = m_base[0];
+  const wide_int &u = m_base[1];
+  tree t = m_type;
 
   if (m_kind == VR_VARYING && t == error_mark_node)
     return true;
 
   unsigned prec = TYPE_PRECISION (t);
   signop sign = TYPE_SIGN (t);
-  if (INTEGRAL_TYPE_P (t))
-    return (wi::to_wide (l) == wi::min_value (prec, sign)
-	    && wi::to_wide (u) == wi::max_value (prec, sign)
-	    && (!m_nonzero_mask || wi::to_wide (m_nonzero_mask) == -1));
-  if (POINTER_TYPE_P (t))
-    return (wi::to_wide (l) == 0
-	    && wi::to_wide (u) == wi::max_value (prec, sign)
-	    && (!m_nonzero_mask || wi::to_wide (m_nonzero_mask) == -1));
+  if (INTEGRAL_TYPE_P (t) || POINTER_TYPE_P (t))
+    return (l == wi::min_value (prec, sign)
+	    && u == wi::max_value (prec, sign)
+	    && m_nonzero_mask == -1);
   return true;
 }
 
@@ -769,7 +766,7 @@ gt_pch_nx (int_range<N> *x, gt_pointer_operator op, void *cookie)
 // Constructors for irange
 
 inline
-irange::irange (tree *base, unsigned nranges)
+irange::irange (wide_int *base, unsigned nranges)
   : vrange (VR_IRANGE),
     m_max_ranges (nranges)
 {
@@ -812,9 +809,7 @@ int_range<N>::int_range (tree type, const wide_int &wmin, const wide_int &wmax,
 			 value_range_kind kind)
   : irange (m_ranges, N)
 {
-  tree min = wide_int_to_tree (type, wmin);
-  tree max = wide_int_to_tree (type, wmax);
-  set (min, max, kind);
+  set (type, wmin, wmax, kind);
 }
 
 template<unsigned N>
@@ -836,8 +831,8 @@ inline void
 irange::set_undefined ()
 {
   m_kind = VR_UNDEFINED;
+  m_type = NULL;
   m_num_ranges = 0;
-  m_nonzero_mask = NULL;
 }
 
 inline void
@@ -845,33 +840,18 @@ irange::set_varying (tree type)
 {
   m_kind = VR_VARYING;
   m_num_ranges = 1;
-  m_nonzero_mask = NULL;
+  m_nonzero_mask = wi::minus_one (TYPE_PRECISION (type));
 
-  if (INTEGRAL_TYPE_P (type))
+  if (INTEGRAL_TYPE_P (type) || POINTER_TYPE_P (type))
     {
+      m_type = type;
       // Strict enum's require varying to be not TYPE_MIN/MAX, but rather
       // min_value and max_value.
-      wide_int min = wi::min_value (TYPE_PRECISION (type), TYPE_SIGN (type));
-      wide_int max = wi::max_value (TYPE_PRECISION (type), TYPE_SIGN (type));
-      if (wi::eq_p (max, wi::to_wide (TYPE_MAX_VALUE (type)))
-	  && wi::eq_p (min, wi::to_wide (TYPE_MIN_VALUE (type))))
-	{
-	  m_base[0] = TYPE_MIN_VALUE (type);
-	  m_base[1] = TYPE_MAX_VALUE (type);
-	}
-      else
-	{
-	  m_base[0] = wide_int_to_tree (type, min);
-	  m_base[1] = wide_int_to_tree (type, max);
-	}
-    }
-  else if (POINTER_TYPE_P (type))
-    {
-      m_base[0] = build_int_cst (type, 0);
-      m_base[1] = build_int_cst (type, -1);
+      m_base[0] = wi::min_value (TYPE_PRECISION (type), TYPE_SIGN (type));
+      m_base[1] = wi::max_value (TYPE_PRECISION (type), TYPE_SIGN (type));
     }
   else
-    m_base[0] = m_base[1] = error_mark_node;
+    m_type = error_mark_node;
 }
 
 // Return the lower bound of a sub-range.  PAIR is the sub-range in
@@ -882,7 +862,7 @@ irange::lower_bound (unsigned pair) const
 {
   gcc_checking_assert (m_num_ranges > 0);
   gcc_checking_assert (pair + 1 <= num_pairs ());
-  return wi::to_wide (m_base[pair * 2]);
+  return m_base[pair * 2];
 }
 
 // Return the upper bound of a sub-range.  PAIR is the sub-range in
@@ -893,7 +873,7 @@ irange::upper_bound (unsigned pair) const
 {
   gcc_checking_assert (m_num_ranges > 0);
   gcc_checking_assert (pair + 1 <= num_pairs ());
-  return wi::to_wide (m_base[pair * 2 + 1]);
+  return m_base[pair * 2 + 1];
 }
 
 // Return the highest bound of a range.
-- 
2.40.0


* [COMMITTED] Cleanup irange::set.
  2023-05-01  6:28 [COMMITTED] vrange_storage overhaul Aldy Hernandez
                   ` (8 preceding siblings ...)
  2023-05-01  6:29 ` [COMMITTED] Convert internal representation of irange to wide_ints Aldy Hernandez
@ 2023-05-01  6:29 ` Aldy Hernandez
  2023-05-01  6:29 ` [COMMITTED] Inline irange::set_nonzero Aldy Hernandez
  10 siblings, 0 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:29 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

Now that anti-ranges are no more and iranges contain wide_ints instead
of trees, various cleanups are possible.  This is one of a handful of
patches improving the performance of irange::set(), which is not on
any one hot path, but is quite performance sensitive because it is
called so pervasively.
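
A minimal standalone sketch of the anti-range decomposition that
survives inside set(): the inverse of [MIN,MAX] over a type becomes
the pieces [TYPE_MIN, MIN-1] and [MAX+1, TYPE_MAX], with a piece
dropped when the +1/-1 would step outside the type.  This is only an
illustration in plain C++ (decompose_anti_range and the int64_t
bounds are invented for the example; the committed code works on
wide_ints, checks overflow via wi::add/wi::sub, and writes the
endpoints straight into m_base[]):

#include <cassert>
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

// Decompose ~[min,max] into the sub-ranges of [type_min,type_max]
// that remain, i.e. [type_min, min-1] and [max+1, type_max].  A
// piece is dropped when min == type_min or max == type_max,
// mirroring the overflow checks on the wi::add/wi::sub calls.
static std::vector<std::pair<int64_t, int64_t>>
decompose_anti_range (int64_t min, int64_t max,
		      int64_t type_min, int64_t type_max)
{
  assert (type_min <= min && min <= max && max <= type_max);
  std::vector<std::pair<int64_t, int64_t>> res;
  if (min != type_min)
    res.push_back ({type_min, min - 1});
  if (max != type_max)
    res.push_back ({max + 1, type_max});
  return res;	// An empty result corresponds to an undefined range.
}

int
main ()
{
  // ~[0,0] for a signed 8-bit type becomes [-128,-1][1,127].
  for (auto [lo, hi] : decompose_anti_range (0, 0, -128, 127))
    std::cout << "[" << lo << "," << hi << "]";
  std::cout << "\n";
}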

gcc/ChangeLog:

	* gimple-range-op.cc (cfn_ffs::fold_range): Use the correct
	precision.
	* gimple-ssa-warn-alloca.cc (alloca_call_type): Use <2> for
	invalid_range, as it is an inverse range.
	* tree-vrp.cc (find_case_label_range): Avoid trees.
	* value-range.cc (irange::irange_set): Delete.
	(irange::irange_set_1bit_anti_range): Delete.
	(irange::irange_set_anti_range): Delete.
	(irange::set): Cleanup.
	* value-range.h (class irange): Remove irange_set,
	irange_set_anti_range, irange_set_1bit_anti_range.
	(irange::set_undefined): Remove set to m_type.
---
 gcc/gimple-range-op.cc        |   4 +-
 gcc/gimple-ssa-warn-alloca.cc |   2 +-
 gcc/tree-vrp.cc               |   8 +-
 gcc/value-range.cc            | 175 ++++++++++------------------------
 gcc/value-range.h             |   5 -
 5 files changed, 59 insertions(+), 135 deletions(-)

diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index 3aef8357d8d..5d1f921ba40 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -654,7 +654,9 @@ public:
       range_cast (tmp, unsigned_type_for (tmp.type ()));
     wide_int max = tmp.upper_bound ();
     maxi = wi::floor_log2 (max) + 1;
-    r.set (type, wi::shwi (mini, prec), wi::shwi (maxi, prec));
+    r.set (type,
+	   wi::shwi (mini, TYPE_PRECISION (type)),
+	   wi::shwi (maxi, TYPE_PRECISION (type)));
     return true;
   }
 } op_cfn_ffs;
diff --git a/gcc/gimple-ssa-warn-alloca.cc b/gcc/gimple-ssa-warn-alloca.cc
index c129aca16e2..2d8ab93a81d 100644
--- a/gcc/gimple-ssa-warn-alloca.cc
+++ b/gcc/gimple-ssa-warn-alloca.cc
@@ -222,7 +222,7 @@ alloca_call_type (gimple *stmt, bool is_vla)
       && !r.varying_p ())
     {
       // The invalid bits are anything outside of [0, MAX_SIZE].
-      int_range<1> invalid_range (size_type_node,
+      int_range<2> invalid_range (size_type_node,
 				  wi::shwi (0, TYPE_PRECISION (size_type_node)),
 				  wi::shwi (max_size, TYPE_PRECISION (size_type_node)),
 				  VR_ANTI_RANGE);
diff --git a/gcc/tree-vrp.cc b/gcc/tree-vrp.cc
index d28637b1918..0761b6896fe 100644
--- a/gcc/tree-vrp.cc
+++ b/gcc/tree-vrp.cc
@@ -827,6 +827,8 @@ find_case_label_range (gswitch *switch_stmt, const irange *range_of_op)
   size_t i, j;
   tree op = gimple_switch_index (switch_stmt);
   tree type = TREE_TYPE (op);
+  unsigned prec = TYPE_PRECISION (type);
+  signop sign = TYPE_SIGN (type);
   tree tmin = wide_int_to_tree (type, range_of_op->lower_bound ());
   tree tmax = wide_int_to_tree (type, range_of_op->upper_bound ());
   find_case_label_range (switch_stmt, tmin, tmax, &i, &j);
@@ -837,9 +839,11 @@ find_case_label_range (gswitch *switch_stmt, const irange *range_of_op)
       tree label = gimple_switch_label (switch_stmt, i);
       tree case_high
 	= CASE_HIGH (label) ? CASE_HIGH (label) : CASE_LOW (label);
+      wide_int wlow = wi::to_wide (CASE_LOW (label));
+      wide_int whigh = wi::to_wide (case_high);
       int_range_max label_range (type,
-				 wi::to_wide (CASE_LOW (label)),
-				 wi::to_wide (case_high));
+				 wide_int::from (wlow, prec, sign),
+				 wide_int::from (whigh, prec, sign));
       if (!types_compatible_p (label_range.type (), range_of_op->type ()))
 	range_cast (label_range, range_of_op->type ());
       label_range.intersect (*range_of_op);
diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index 2dc6b98bc63..655ffc2d6d4 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -961,98 +961,6 @@ get_legacy_range (const irange &r, tree &min, tree &max)
   return VR_RANGE;
 }
 
-void
-irange::irange_set (tree type, const wide_int &min, const wide_int &max)
-{
-  m_type = type;
-  m_base[0] = min;
-  m_base[1] = max;
-  m_num_ranges = 1;
-  m_kind = VR_RANGE;
-  m_nonzero_mask = wi::minus_one (TYPE_PRECISION (type));
-  normalize_kind ();
-
-  if (flag_checking)
-    verify_range ();
-}
-
-void
-irange::irange_set_1bit_anti_range (tree type,
-				    const wide_int &min, const wide_int &max)
-{
-  unsigned prec = TYPE_PRECISION (type);
-  signop sign = TYPE_SIGN (type);
-  gcc_checking_assert (prec == 1);
-
-  if (min == max)
-    {
-      wide_int tmp;
-      // Since these are 1-bit quantities, they can only be [MIN,MIN]
-      // or [MAX,MAX].
-      if (min == wi::min_value (prec, sign))
-	tmp = wi::max_value (prec, sign);
-      else
-	tmp = wi::min_value (prec, sign);
-      set (type, tmp, tmp);
-    }
-  else
-    {
-      // The only alternative is [MIN,MAX], which is the empty range.
-      gcc_checking_assert (min == wi::min_value (prec, sign));
-      gcc_checking_assert (max == wi::max_value (prec, sign));
-      set_undefined ();
-    }
-  if (flag_checking)
-    verify_range ();
-}
-
-void
-irange::irange_set_anti_range (tree type,
-			       const wide_int &min, const wide_int &max)
-{
-  if (TYPE_PRECISION (type) == 1)
-    {
-      irange_set_1bit_anti_range (type, min, max);
-      return;
-    }
-
-  // set an anti-range
-  signop sign = TYPE_SIGN (type);
-  int_range<2> type_range (type);
-  // Calculate INVERSE([I,J]) as [-MIN, I-1][J+1, +MAX].
-  m_num_ranges = 0;
-  wi::overflow_type ovf;
-
-  if (wi::ne_p (min, type_range.lower_bound ()))
-    {
-      wide_int lim1 = wi::sub (min, 1, sign, &ovf);
-      gcc_checking_assert (ovf != wi::OVF_OVERFLOW);
-      m_base[0] = type_range.lower_bound (0);
-      m_base[1] = lim1;
-      m_num_ranges = 1;
-    }
-  if (wi::ne_p (max, type_range.upper_bound ()))
-    {
-      if (m_max_ranges == 1 && m_num_ranges)
-	{
-	  set_varying (type);
-	  return;
-	}
-      wide_int lim2 = wi::add (max, 1, sign, &ovf);
-      gcc_checking_assert (ovf != wi::OVF_OVERFLOW);
-      m_base[m_num_ranges * 2] = lim2;
-      m_base[m_num_ranges * 2 + 1] = type_range.upper_bound (0);
-      ++m_num_ranges;
-    }
-
-  m_kind = VR_RANGE;
-  m_nonzero_mask = wi::minus_one (TYPE_PRECISION (type));
-  normalize_kind ();
-
-  if (flag_checking)
-    verify_range ();
-}
-
 /* Set value range to the canonical form of {VRTYPE, MIN, MAX, EQUIV}.
    This means adjusting VRTYPE, MIN and MAX representing the case of a
    wrapping range with MAX < MIN covering [MIN, type_max] U [type_min, MAX]
@@ -1063,48 +971,69 @@ irange::irange_set_anti_range (tree type,
    extract ranges from var + CST op limit.  */
 
 void
-irange::set (tree type, const wide_int &rmin, const wide_int &rmax,
+irange::set (tree type, const wide_int &min, const wide_int &max,
 	     value_range_kind kind)
 {
-  if (kind == VR_UNDEFINED)
-    {
-      irange::set_undefined ();
-      return;
-    }
-
-  if (kind == VR_VARYING)
-    {
-      set_varying (type);
-      return;
-    }
+  unsigned prec = TYPE_PRECISION (type);
+  signop sign = TYPE_SIGN (type);
+  wide_int min_value = wi::min_value (prec, sign);
+  wide_int max_value = wi::max_value (prec, sign);
 
   m_type = type;
-  signop sign = TYPE_SIGN (type);
-  unsigned prec = TYPE_PRECISION (type);
-  wide_int min = wide_int::from (rmin, prec, sign);
-  wide_int max = wide_int::from (rmax, prec, sign);
+  m_nonzero_mask = wi::minus_one (prec);
 
   if (kind == VR_RANGE)
-    irange_set (type, min, max);
+    {
+      m_base[0] = min;
+      m_base[1] = max;
+      m_num_ranges = 1;
+      if (min == min_value && max == max_value)
+	m_kind = VR_VARYING;
+      else
+	m_kind = VR_RANGE;
+    }
   else
     {
       gcc_checking_assert (kind == VR_ANTI_RANGE);
-      irange_set_anti_range (type, min, max);
+      gcc_checking_assert (m_max_ranges > 1);
+
+      m_kind = VR_UNDEFINED;
+      m_num_ranges = 0;
+      wi::overflow_type ovf;
+      wide_int lim;
+      if (sign == SIGNED)
+	lim = wi::add (min, -1, sign, &ovf);
+      else
+	lim = wi::sub (min, 1, sign, &ovf);
+
+      if (!ovf)
+	{
+	  m_kind = VR_RANGE;
+	  m_base[0] = min_value;
+	  m_base[1] = lim;
+	  ++m_num_ranges;
+	}
+      if (sign == SIGNED)
+	lim = wi::sub (max, -1, sign, &ovf);
+      else
+	lim = wi::add (max, 1, sign, &ovf);
+      if (!ovf)
+	{
+	  m_kind = VR_RANGE;
+	  m_base[m_num_ranges * 2] = lim;
+	  m_base[m_num_ranges * 2 + 1] = max_value;
+	  ++m_num_ranges;
+	}
     }
+
+  if (flag_checking)
+    verify_range ();
 }
 
 void
 irange::set (tree min, tree max, value_range_kind kind)
 {
-  if (kind == VR_UNDEFINED)
-    {
-      irange::set_undefined ();
-      return;
-    }
-
-  if (kind == VR_VARYING
-      || POLY_INT_CST_P (min)
-      || POLY_INT_CST_P (max))
+  if (POLY_INT_CST_P (min) || POLY_INT_CST_P (max))
     {
       set_varying (TREE_TYPE (min));
       return;
@@ -1113,13 +1042,7 @@ irange::set (tree min, tree max, value_range_kind kind)
   gcc_checking_assert (TREE_CODE (min) == INTEGER_CST);
   gcc_checking_assert (TREE_CODE (max) == INTEGER_CST);
 
-  if (TREE_OVERFLOW_P (min))
-    min = drop_tree_overflow (min);
-  if (TREE_OVERFLOW_P (max))
-    max = drop_tree_overflow (max);
-
-  return set (TREE_TYPE (min),
-	      wi::to_wide (min), wi::to_wide (max), kind);
+  return set (TREE_TYPE (min), wi::to_wide (min), wi::to_wide (max), kind);
 }
 
 // Check the validity of the range.
diff --git a/gcc/value-range.h b/gcc/value-range.h
index 9f82b0011c7..9a834c91b17 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -172,8 +172,6 @@ protected:
   irange (wide_int *, unsigned);
 
    // In-place operators.
-  void irange_set (tree type, const wide_int &, const wide_int &);
-  void irange_set_anti_range (tree type, const wide_int &, const wide_int &);
   bool irange_contains_p (const irange &) const;
   bool irange_single_pair_union (const irange &r);
 
@@ -186,8 +184,6 @@ private:
   friend void gt_pch_nx (irange *);
   friend void gt_pch_nx (irange *, gt_pointer_operator, void *);
 
-  void irange_set_1bit_anti_range (tree type,
-				   const wide_int &, const wide_int &);
   bool varying_compatible_p () const;
   bool intersect_nonzero_bits (const irange &r);
   bool union_nonzero_bits (const irange &r);
@@ -831,7 +827,6 @@ inline void
 irange::set_undefined ()
 {
   m_kind = VR_UNDEFINED;
-  m_type = NULL;
   m_num_ranges = 0;
 }
 
-- 
2.40.0


* [COMMITTED] Inline irange::set_nonzero.
  2023-05-01  6:28 [COMMITTED] vrange_storage overhaul Aldy Hernandez
                   ` (9 preceding siblings ...)
  2023-05-01  6:29 ` [COMMITTED] Cleanup irange::set Aldy Hernandez
@ 2023-05-01  6:29 ` Aldy Hernandez
  10 siblings, 0 replies; 12+ messages in thread
From: Aldy Hernandez @ 2023-05-01  6:29 UTC (permalink / raw)
  To: GCC patches; +Cc: Andrew MacLeod, Aldy Hernandez

irange::set_nonzero is used everywhere and benefits immensely from
inlining.
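
For unsigned types "nonzero" is just the single pair [1, MAX] with an
all-ones mask, which is why the fast path added below needs none of
the anti-range machinery; only signed types still go through
set (type, 0, 0, VR_ANTI_RANGE).  A rough standalone illustration in
plain C++ (nonzero_bounds is a made-up helper; the real code builds
wide_ints of the type's precision):

#include <cassert>
#include <cstdint>

// For an unsigned type of PREC bits the nonzero range is the single
// pair [1, 2^PREC - 1].  Signed types instead use ~[0,0], which
// splits into the two pairs [MIN,-1][1,MAX].
static void
nonzero_bounds (unsigned prec, uint64_t &lb, uint64_t &ub)
{
  assert (prec >= 1 && prec <= 64);
  lb = 1;
  ub = (prec == 64) ? UINT64_MAX : (((uint64_t) 1 << prec) - 1);
}

int
main ()
{
  uint64_t lb, ub;
  nonzero_bounds (16, lb, ub);
  assert (lb == 1 && ub == 0xffff);
  return 0;
}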

gcc/ChangeLog:

	* value-range.h (irange::set_nonzero): Inline.
---
 gcc/value-range.h | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/gcc/value-range.h b/gcc/value-range.h
index 9a834c91b17..5cff50e6d03 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -886,8 +886,24 @@ irange::upper_bound () const
 inline void
 irange::set_nonzero (tree type)
 {
-  wide_int zero = wi::zero (TYPE_PRECISION (type));
-  set (type, zero, zero, VR_ANTI_RANGE);
+  unsigned prec = TYPE_PRECISION (type);
+
+  if (TYPE_UNSIGNED (type))
+    {
+      m_type = type;
+      m_kind = VR_RANGE;
+      m_base[0] = wi::one (prec);
+      m_base[1] = m_nonzero_mask = wi::minus_one (prec);
+      m_num_ranges = 1;
+
+      if (flag_checking)
+	verify_range ();
+    }
+  else
+    {
+      wide_int zero = wi::zero (prec);
+      set (type, zero, zero, VR_ANTI_RANGE);
+    }
 }
 
 // Set value range VR to a ZERO range of type TYPE.
-- 
2.40.0


