public inbox for gcc-patches@gcc.gnu.org
* wide-int branch now up for public comment and review
@ 2013-08-13 20:57 Kenneth Zadeck
  2013-08-22  8:25 ` Richard Sandiford
                   ` (2 more replies)
  0 siblings, 3 replies; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-13 20:57 UTC (permalink / raw)
  To: rguenther, gcc-patches, Mike Stump, r.sandiford

Richi and everyone else who may be interested,

Congrats on your first child.  They are a lot of fun, but are very
high maintenance.

Today we put up the wide-int branch for all to see and play with. See

svn+ssh://gcc.gnu.org/svn/gcc/branches/wide-int

At this point, we have completed testing it on x86-64.  Not only is it
regression free, but for code that uses only 64 bit or smaller data
types, it produces identical machine language (if a couple of changes
are made to the trunk - see the patch below).  We are currently
working on the PPC and expect to get this platform to the same
position very soon.

From a high level view, the branch looks somewhat closer to what you
asked for than I would have expected.  There are now three
implementations of wide-int as a template.  The default is the one you
saw before and takes its precision from the input mode or type. There
are two other template instances which have fixed precisions that are
defined to be large enough to be assumed to be infinite (just like
your favorite use of double-int).  Both are used in places where there
is no notion of precision correctness of the operands.  One is
used for all addressing arithmetic and the other is used mostly in the
vectorizer and loop optimizations.  The bottom line is that both a
finite and infinite precision model are really necessary in the
current structure of GCC.
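For readers who want a picture of this arrangement, the three instantiations might be sketched as follows (all names here are invented stand-ins, not the branch's actual classes):

```cpp
#include <cassert>
#include <cstdint>

// wi_default models the default template: each value carries a runtime
// precision taken from the mode or type of its input.
struct wi_default
{
  int64_t val;          // a real version holds an array of HWIs
  unsigned precision;   // taken from the input mode/type
};

// wi_fixed models the two "infinite" instantiations: the precision is a
// compile-time constant chosen large enough that running off the top
// can be assumed never to happen.
template <unsigned BITS>
struct wi_fixed
{
  static const unsigned precision = BITS;
  int64_t val;          // likewise an array of HWIs in the real thing
};

// Kept as distinct types so the two representations can diverge later:
// one for addressing arithmetic, one for the vectorizer/loop passes.
struct addr_wide_int_sketch : wi_fixed<256> {};
struct max_wide_int_sketch : wi_fixed<256> {};
```

The 256-bit width above is illustrative only; as described later, the real width is derived from the largest mode on the target.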

The two infinite precision classes are not exactly the storage classes
that you proposed because they are implemented using the same storage
model as the default template but they do provide a different view of
the math which I assume was your primary concern.  You may also decide
that there is no reason to have a separate class for the addressing
arithmetic since they work substantially the same way.  We did it so
that we have the option in the future to allow the two reps to
diverge.

The one place where I can see changing which template is used is in
tree-ssa-ccp.  This is the only one of the many GCC constant
propagators that does not use the default template.  I did not convert
this pass to use the default template because, for testing purposes
(at your suggestion), we tried to minimize the improvements so
that we get the same code out with wide-int.  When I convert it to use
the default template, the pass will run slightly faster and will find
slightly more constants: both very desirable features, but not in the
context of getting this large patch into GCC.

As I said earlier, we get the same code as long as the program uses
only 64 bit or smaller types.  For code that uses larger types, we do
not.  The problem actually stems from one of the assertions that you
made when we were arguing about fixed vs infinite precision.  You had
said that a lot of the code depended on double ints behaving like
infinite precision.  You were right!!!  However, what this really
meant is that when that code was subjected to a 128 bit type, it just
produced bogus results!!!!  All of this has been fixed now on the
branch.  The code that uses the default template works within its
precision.  The code that uses one of the infinite precision templates
can be guaranteed that there is always enough head room because we
sniff out the largest mode on the target and multiply that by 4.
However, the net result is that programs that use 128 bit types now
get better code that is more likely to be correct.
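The headroom rule can be made concrete with invented numbers (the real constants depend on the modes sniffed out on the target):

```cpp
#include <cassert>

// Suppose the widest integer mode on the target is 128 bits and a HWI
// is 64 bits; both values are illustrative stand-ins only.
const unsigned max_bitsize_mode_any_int_sketch = 128;
const unsigned host_bits_per_wide_int_sketch = 64;

// Multiplying the largest mode by 4 gives the fixed width used by the
// "infinite" precision templates, and hence the HWI element count.
const unsigned wide_int_max_bitsize_sketch
  = max_bitsize_mode_any_int_sketch * 4;
const unsigned wide_int_max_elts_sketch
  = wide_int_max_bitsize_sketch / host_bits_per_wide_int_sketch;
```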

The vast majority of the patch falls into two types of code:

1) The 4 files that hold the wide-int code itself.  You have seen a
    lot of this code before except for the infinite precision
    templates.  Also the classes are more C++ than C in their flavor.
    In particular, the integration with trees is very tight in that an
    int-cst or regular integers can be the operands of any wide-int
    operation.

2) The code that encapsulates the representation of a TREE_INT_CST.
    For the latter, I introduced a series of abstractions to hide the
    access so that I could change the representation of TREE_INT_CST
    away from having exactly two HWIs.  I do not really like these
    abstractions, but the good news is that most of them can/will go
    away after this branch is integrated into the trunk.  These
    abstractions allow the code to do the same function, without
    exposing the change in the data structures.  However, they preserve
    the fact that for the most part, the middle end of the compiler
    tries to do no optimization on anything larger than a single HWI.
    But this preserves the basic behavior of the compiler which is what
    you asked us to do.

    The abstractions that I have put in to hide the rep of TREE_INT_CST are:

    host_integerp (x, 1) -> tree_fits_uhwi_p (x)
    host_integerp (x, 0) -> tree_fits_shwi_p (x)
    host_integerp (x, TYPE_UNSIGNED (y)) -> tree_fits_hwi_p (x, TYPE_SIGN (y))
    host_integerp (x, TYPE_UNSIGNED (x)) -> tree_fits_hwi_p (x)

    TREE_INT_CST_HIGH (x) == 0 || TREE_INT_CST_HIGH (x) == -1 -> cst_fits_shwi_p (x)
    TREE_INT_CST_HIGH (x) + (tree_int_cst_sgn (x) < 0) -> cst_fits_shwi_p (x)
    cst_and_fits_in_hwi (x) -> cst_fits_shwi_p (x)

    TREE_INT_CST_HIGH (x) == 0 -> cst_fits_uhwi_p (x)

    tree_low_cst (x, 1) -> tree_to_uhwi (x)
    tree_low_cst (x, 0) -> tree_to_shwi (x)
    TREE_INT_CST_LOW (x) -> tree_to_uhwi (x), tree_to_shwi (x) or tree_to_hwi (x)

    Code that used TREE_INT_CST_HIGH in ways beyond checking to see
    if it contained 0 or -1 was converted directly to wide-int.
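As a toy illustration of what a wrapper like cst_fits_shwi_p hides, here is a model of the old two-HWI rep and the check such a wrapper encapsulates (the struct and the exact test below are an invented sketch, not GCC's code):

```cpp
#include <cassert>
#include <cstdint>

// Toy stand-in for the old TREE_INT_CST rep: exactly two HWIs.
struct toy_int_cst
{
  uint64_t low;   // plays the role of TREE_INT_CST_LOW
  int64_t high;   // plays the role of TREE_INT_CST_HIGH
};

// The value fits in a signed HWI when the high word is just the sign
// extension of the low word's top bit; hiding this check behind a
// function is what lets the rep later stop being exactly two HWIs.
static bool
toy_cst_fits_shwi_p (const toy_int_cst &x)
{
  return x.high == (int64_t) x.low >> 63;
}
```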


You had proposed that one of the ways that we should/could test the
non single HWI paths in wide-int was to change the size of the element
of the array used to represent the value in wide-int.   I believe that
there are better ways to do this testing.   For one, the infinite
precision templates do not use the fast pathway anyway because
currently those pathways are only triggered for precisions that fit in
a single HWI.   (There is the possibility that some of the infinite
precision functions could use this fast path, but they currently do
not.)   However, what we are planning to do when the ppc gets stable
is to build a 64 bit compiler for the x86 that uses a 32 bit HWI.
This is no longer a supported path, but fixing the bugs on it would
shake out the remaining places where the compiler (as well as the
wide-int code) gets the wrong answer for larger types.

The code still has our tracing in it.   We will remove it before the
branch is committed, but for large scale debugging, we find this
very useful.

I am not going to close with the typical "ok to commit?" closing
because I know you will have a lot to say.   But I do think that you
will find that this is a lot closer to what you envisioned than what
you saw before.

kenny

=====================================

The two patches for the trunk below are necessary to get identical
code between the wide-int branch and the trunk.   The first patch has
been submitted for review and fixes a bug.   The second patch will not
be submitted as it is just for compatibility.   The second patch
slightly changes the hash function that the rtl gcse passes use.  Code
is modified based on the traversal of a hash table, so if the hash
functions are not identical, the code is slightly different between
the two branches.


=====================================
diff --git a/gcc/expr.c b/gcc/expr.c
index 923f59b..f5744b0 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -4815,7 +4815,8 @@ expand_assignment (tree to, tree from, bool nontemporal)
                    bitregion_start, bitregion_end,
                    mode1, from,
                    get_alias_set (to), nontemporal);
-      else if (bitpos >= mode_bitsize / 2)
+      else if (bitpos >= mode_bitsize / 2
+           && bitpos+bitsize <= mode_bitsize)
          result = store_field (XEXP (to_rtx, 1), bitsize,
                    bitpos - mode_bitsize / 2,
                    bitregion_start, bitregion_end,
@@ -4834,8 +4835,12 @@ expand_assignment (tree to, tree from, bool nontemporal)
          }
        else
          {
+          HOST_WIDE_INT extra = 0;
+          if (bitpos+bitsize > mode_bitsize)
+        extra = bitpos+bitsize - mode_bitsize;
            rtx temp = assign_stack_temp (GET_MODE (to_rtx),
-                        GET_MODE_SIZE (GET_MODE (to_rtx)));
+                        GET_MODE_SIZE (GET_MODE (to_rtx))
+                        + extra);
            write_complex_part (temp, XEXP (to_rtx, 0), false);
            write_complex_part (temp, XEXP (to_rtx, 1), true);
            result = store_field (temp, bitsize, bitpos,
diff --git a/gcc/rtl.def b/gcc/rtl.def
index b4ce1b9..5ed015c 100644
--- a/gcc/rtl.def
+++ b/gcc/rtl.def
@@ -342,6 +342,8 @@ DEF_RTL_EXPR(TRAP_IF, "trap_if", "ee", RTX_EXTRA)
  /* numeric integer constant */
  DEF_RTL_EXPR(CONST_INT, "const_int", "w", RTX_CONST_OBJ)

+DEF_RTL_EXPR(CONST_WIDE_INT, "const_wide_int", "", RTX_CONST_OBJ)
+
  /* fixed-point constant */
  DEF_RTL_EXPR(CONST_FIXED, "const_fixed", "www", RTX_CONST_OBJ)


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-13 20:57 wide-int branch now up for public comment and review Kenneth Zadeck
@ 2013-08-22  8:25 ` Richard Sandiford
  2013-08-23 15:03 ` Richard Sandiford
  2013-08-24 18:42 ` Florian Weimer
  2 siblings, 0 replies; 50+ messages in thread
From: Richard Sandiford @ 2013-08-22  8:25 UTC (permalink / raw)
  To: Kenneth Zadeck; +Cc: rguenther, gcc-patches, Mike Stump

Thanks for doing this.  I haven't had chance to look at the branch yet
(hope to soon), but the results on mips64-linux-gnu look pretty good:

  http://gcc.gnu.org/ml/gcc-testresults/2013-08/msg02033.html

The only failures that stand out as being possibly wide-int-related are:

  FAIL: gcc.dg/fixed-point/int-warning.c  (test for warnings, line ...)

which isn't something that x86_64-linux-gnu or powerpc64-linux-gnu
would test AFAIK.  Haven't had chance to look into why it's failing yet,
but FWIW it's a compile-only test so should be reproducible with a cc1 cross.

I'll run a test with the corresponding trunk version too.

Thanks,
Richard


* Re: wide-int branch now up for public comment and review
  2013-08-13 20:57 wide-int branch now up for public comment and review Kenneth Zadeck
  2013-08-22  8:25 ` Richard Sandiford
@ 2013-08-23 15:03 ` Richard Sandiford
  2013-08-23 21:01   ` Kenneth Zadeck
                     ` (7 more replies)
  2013-08-24 18:42 ` Florian Weimer
  2 siblings, 8 replies; 50+ messages in thread
From: Richard Sandiford @ 2013-08-23 15:03 UTC (permalink / raw)
  To: Kenneth Zadeck; +Cc: rguenther, gcc-patches, Mike Stump, r.sandiford

[-- Attachment #1: Type: text/plain, Size: 12260 bytes --]

Hi Kenny,

This is the first time I've looked at the implementation of wide-int.h
(rather than just looking at the rtl changes, which as you know I like
in general), so FWIW here are some comments on wide-int.h.  I expect
a lot of them overlap with Richard B.'s comments.

I also expect many of them are going to be annoying, sorry, but this
first one definitely will.  The coding conventions say that functions
should be defined outside the class:

    http://gcc.gnu.org/codingconventions.html

and that opening braces should be on their own line, so most of the file
needs to be reformatted.  I went through and made that change with the
patch below, in the process of reading through.  I also removed "SGN
must be SIGNED or UNSIGNED." because it seemed redundant when those are
the only two values available.  The patch fixes a few other coding standard
problems and typos, but I've not made any actual code changes (or at least,
I didn't mean to).

Does it look OK to install?

I'm still unsure about these "infinite" precision types, but I understand
the motivation and I have no objections.  However:

>     * Code that does widening conversions.  The canonical way that
>       this is performed is to sign or zero extend the input value to
>       the max width based on the sign of the type of the source and
>       then to truncate that value to the target type.  This is in
>       preference to using the sign of the target type to extend the
>       value directly (which gets the wrong value for the conversion
>       of large unsigned numbers to larger signed types).

I don't understand this particular reason.  Using the sign of the source
type is obviously right, but why does that mean we need "infinite" precision,
rather than just doubling the precision of the source?

>   * When a constant that has an integer type is converted to a
>     wide-int it comes in with precision 0.  For these constants the
>     top bit does accurately reflect the sign of that constant; this
>     is an exception to the normal rule that the signedness is not
>     represented.  When used in a binary operation, the wide-int
>     implementation properly extends these constants so that they
>     properly match the other operand of the computation.  This allows
>     you to write:
>
>                tree t = ...
>                wide_int x = t + 6;
>
>     assuming t is an int_cst.

This seems dangerous.  Not all code that uses "unsigned HOST_WIDE_INT"
actually wants it to be an unsigned value.  Some code uses it to avoid
the undefinedness of signed overflow.  So these overloads could lead
to us accidentally zero-extending what's conceptually a signed value
without any obvious indication that that's happening.  Also, hex constants
are unsigned int, but it doesn't seem safe to assume that 0x80000000 was
meant to be zero-extended.

I realise the same thing can happen if you mix "unsigned int" with
HOST_WIDE_INT, but the point is that you shouldn't really do that
in general, whereas we're defining these overloads precisely so that
a mixture can be used.

I'd prefer some explicit indication of the sign, at least for anything
other than plain "int" (so that the compiler will complain about uses
of "unsigned int" and above).
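The hex-constant point can be seen with plain host arithmetic, no wide-int involved (variable names here are just for illustration):

```cpp
#include <cassert>
#include <cstdint>

// 0x80000000 has type unsigned int, so widening it zero-extends; only
// an explicit detour through a signed type gets sign extension.  An
// implicit tree-plus-constant overload would silently pick one of these.
const int64_t widened_as_written = 0x80000000;           // 2147483648
const int64_t widened_as_signed = (int32_t) 0x80000000;  // -2147483648
```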

>   Note that the bits above the precision are not defined and the
>   algorithms used here are careful not to depend on their value.  In
>   particular, values that come in from rtx constants may have random
>   bits.

I have a feeling I'm rehashing a past debate, sorry, but rtx constants can't
have random bits.  The upper bits must be a sign extension of the value.
There's exactly one valid rtx for each (value, mode) pair.  If you saw
something different then that sounds like a bug.  The rules should already
be fairly well enforced though, since something like (const_int 128) --
or (const_int 256) -- will not match a QImode operand.

This is probably the part of the representation that I disagree most with.
There seem to be two main ways we could handle the extension to whole HWIs:

(1) leave the stored upper bits undefined and extend them on read
(2) keep the stored upper bits in extended form

The patch goes for (1) but (2) seems better to me, for a few reasons:

* As above, constants coming from rtl are already in the right form,
  so if you create a wide_int from an rtx and only query it, no explicit
  extension is needed.

* Things like logical operations and right shifts naturally preserve
  the sign-extended form, so only a subset of write operations need
  to take special measures.

* You have a public interface that exposes the underlying HWIs
  (which is fine with me FWIW), so it seems better to expose a fully-defined
  HWI rather than only a partially-defined HWI.

E.g. zero_p is:

  HOST_WIDE_INT x;

  if (precision && precision < HOST_BITS_PER_WIDE_INT)
    x = sext_hwi (val[0], precision);
  else if (len == 0)
    {
      gcc_assert (precision == 0);
      return true;
    }
  else
    x = val[0];

  return len == 1 && x == 0;

but I think it really ought to be just:

  return len == 1 && val[0] == 0;

>   When the precision is 0, all the bits in the LEN elements of
>   VEC are significant with no undefined bits.  Precisionless
>   constants are limited to being one or two HOST_WIDE_INTs.  When two
>   are used the upper value is 0, and the high order bit of the first
>   value is set.  (Note that this may need to be generalized if it is
>   ever necessary to support 32bit HWIs again).

I didn't understand this.  When are two HOST_WIDE_INTs needed for
"precision 0"?

> #define addr_max_bitsize (64)
> #define addr_max_precision \

These should either be lower-case C++ constants or upper-case macros.

>  /* VAL is set to a size that is capable of computing a full
>     multiplication on the largest mode that is represented on the
>     target.  Currently there is a part of tree-vrp that requires 2x +
>     2 bits of precision where x is the precision of the variables
>     being optimized.  */
>  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];

This comment seems redundant with the one above WIDE_INT_MAX_ELTS
and likely to get out of date.

>     So real hardware only looks at a small part of the shift amount.
>     On IBM machines, this tends to be 1 more than what is necessary
>     to encode the shift amount.  The rest of the world looks at only
>     the minimum number of bits.  This means that only 3 gate delays
>     are necessary to set up the shifter.

I agree that it makes sense for wide_int to provide a form of shift
in which the shift amount is truncated.  However, I strongly believe
wide-int.h should not test SHIFT_COUNT_TRUNCATED directly.  It should
be up to the callers to decide when truncation is needed (and to what width).
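A caller-controlled truncating shift of the kind being suggested might look like this (invented names, not the branch's API):

```cpp
#include <cassert>
#include <cstdint>

// The caller states the truncation width explicitly, so the shift
// routine never consults SHIFT_COUNT_TRUNCATED itself.
static uint64_t
lshift_truncated (uint64_t x, unsigned amount, unsigned width)
{
  return x << (amount % width);  // look only at the low bits, as hardware does
}
```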

We really need to get rid of the #include "tm.h" in wide-int.h.
MAX_BITSIZE_MODE_ANY_INT should be the only partially-target-dependent
thing in there.  If that comes from tm.h then perhaps we should put it
into a new header file instead.

> /* Return THIS as a signed HOST_WIDE_INT.  If THIS does not fit in
>    PREC, the information is lost. */
> HOST_WIDE_INT
> to_shwi (unsigned int prec = 0) const

Just dropping the excess bits seems dangerous.  I think we should assert
instead, at least when prec is 0.

> /* Return true if THIS is negative based on the interpretation of SGN.
>    For UNSIGNED, this is always false.  This is correct even if
>    precision is 0.  */
> inline bool
> wide_int::neg_p (signop sgn) const

It seems odd that you have to pass SIGNED here.  I assume you were doing
it so that the caller is forced to confirm signedness in the cases where
a tree type is involved, but:

* neg_p kind of implies signedness anyway
* you don't require this for minus_one_p, so the interface isn't consistent
* at the rtl level signedness isn't a property of the "type" (mode),
  so it seems strange to add an extra hoop there

> /* Return true if THIS fits in an unsigned HOST_WIDE_INT with no
>    loss of precision.  */
> inline bool
> wide_int_ro::fits_uhwi_p () const
> {
>   return len == 1 || (len == 2 && val[1] == 0);
> }

This doesn't look right, since len == 1 could mean that you have a
gazillion-bit all-ones number.  Also, the val[1] == 0 check seems
to assume the upper bits are defined when the precision isn't a multiple
of the HWI size (although as above I think that's a good thing and should be
enforced).

sign_mask has:

>   gcc_unreachable ();
> #if 0
>   return val[len - 1] >> (HOST_BITS_PER_WIDE_INT - 1);
> #endif

Maybe remove this?

The inline div routines do:

>   if (overflow)
>     *overflow = false;

but then just pass overflow to divmod_internal.  Seems better to initialise
*overflow there instead.

div_floor has:

>     return divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>			    &remainder, false, overflow);
>
>     if (quotient.neg_p (sgn) && !remainder.zero_p ())
>       return quotient - 1;
>     return quotient;
>   }

where the last bit is unreachable.

> /* Divide DIVISOR into THIS producing the remainder.  The result is
>    the same size as the operands.  The sign is specified in SGN.
>    The output is floor truncated.  OVERFLOW is set to true if the
>    result overflows, false otherwise.  */
> template <typename T>
> inline wide_int_ro
> wide_int_ro::mod_floor (const T &c, signop sgn, bool *overflow = 0) const

It's really the quotient that's floor-truncated, not the output
(the remainder).  I was a bit confused at first why you'd need to
truncate an integer remainder.  Same for the other functions.

> #ifdef DEBUG_WIDE_INT
>   debug_vwa ("wide_int_ro:: %d = (%s == %s)\n", result, *this, s, cl, p2);
> #endif

I think these are going to bitrot quickly if we #if 0 them out.
I think we should either use:

  if (DEBUG_WIDE_INT)
    debug_vwa ("wide_int_ro:: %d = (%s == %s)\n", result, *this, s, cl, p2);

or drop them.

The implementations of the general to_shwi1 and to_shwi2 functions look
identical.  I think the comment should explain why two functions are needed.

> /* Negate THIS.  */
> inline wide_int_ro
> wide_int_ro::operator - () const
> {
>   wide_int_ro r;
>   r = wide_int_ro (0) - *this;
>   return r;
> }
>
> /* Negate THIS.  */
> inline wide_int_ro
> wide_int_ro::neg () const
> {
>   wide_int_ro z = wide_int_ro::from_shwi (0, precision);
>
>   gcc_checking_assert (precision);
>   return z - *this;
> }

Why do we need both of these, and why are they implemented slightly
differently?

> template <int bitsize>
> inline bool
> fixed_wide_int <bitsize>::multiple_of_p (const wide_int_ro &factor,
>					 signop sgn,
>					 fixed_wide_int *multiple) const
> {
>   return wide_int_ro::multiple_of_p (factor, sgn,
>				     reinterpret_cast <wide_int *> (multiple));
> }

The patch has several instances of this kind of reinterpret_cast.
It looks like an aliasing violation.

The main thing that's changed since the early patches is that we now
have a mixture of wide-int types.  This seems to have led to a lot of
boiler-plate forwarding functions (or at least it felt like that while
moving them all out the class).  And that in turn seems to be because
you're trying to keep everything as member functions.  E.g. a lot of the
forwarders are from a member function to a static function.

Wouldn't it be better to have the actual classes be light-weight,
with little more than accessors, and do the actual work with non-member
template functions?  There seems to be 3 grades of wide-int:

  (1) read-only, constant precision  (from int, etc.)
  (2) read-write, constant precision  (fixed_wide_int)
  (3) read-write, variable precision  (wide_int proper)

but we should be able to hide that behind templates, with compiler errors
if you try to write to (1), etc.

To take one example, the reason we can't simply use things like
std::min on wide ints is because signedness needs to be specified
explicitly, but there's a good reason why the standard defined
std::min (x, y) rather than x.min (y).  It seems like we ought
to have smin and umin functions alongside std::min, rather than
make them member functions.  We could put them in a separate namespace
if necessary.
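On int64_t stand-ins rather than real wide ints, that free-function style would read like this (smin/umin as named above; the bodies are a sketch):

```cpp
#include <cassert>
#include <cstdint>

// Signedness lives in the function name, std::min-style, instead of in
// a signop argument on a member function.
static int64_t
smin_sketch (int64_t a, int64_t b)
{
  return a < b ? a : b;                        // signed comparison
}

static int64_t
umin_sketch (int64_t a, int64_t b)
{
  return (uint64_t) a < (uint64_t) b ? a : b;  // unsigned comparison
}
```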

I might have a go at trying this last part next week, unless Richard is
already in the process of rewriting things :-)

Thanks,
Richard



[-- Attachment #2: reformat-wide-int.diff.bz2 --]
[-- Type: application/x-bzip2, Size: 19230 bytes --]


* Re: wide-int branch now up for public comment and review
  2013-08-23 15:03 ` Richard Sandiford
@ 2013-08-23 21:01   ` Kenneth Zadeck
  2013-08-24 10:44     ` Richard Sandiford
  2013-08-24  0:03   ` Mike Stump
                     ` (6 subsequent siblings)
  7 siblings, 1 reply; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-23 21:01 UTC (permalink / raw)
  To: rguenther, gcc-patches, Mike Stump, r.sandiford, rdsandiford,
	Kenneth Zadeck

On 08/23/2013 11:02 AM, Richard Sandiford wrote:
> Hi Kenny,
>
> This is the first time I've looked at the implementation of wide-int.h
> (rather than just looking at the rtl changes, which as you know I like
> in general), so FWIW here are some comments on wide-int.h.  I expect
> a lot of them overlap with Richard B.'s comments.
>
> I also expect many of them are going to be annoying, sorry, but this
> first one definitely will.  The coding conventions say that functions
> should be defined outside the class:
>
>      http://gcc.gnu.org/codingconventions.html
>
> and that opening braces should be on their own line, so most of the file
> needs to be reformatted.  I went through and made that change with the
> patch below, in the process of reading through.  I also removed "SGN
> must be SIGNED or UNSIGNED." because it seemed redundant when those are
> the only two values available.  The patch fixes a few other coding standard
> problems and typos, but I've not made any actual code changes (or at least,
> I didn't mean to).
I had started the file with the functions outside of the class and Mike 
had started putting them in the class, and so i went with putting them in 
the class since many of them were one liners and so having them out of 
line just doubled the size of everything.  However, we did not look at 
the coding conventions and that really settles the argument.  Thanks for 
doing this.
> Does it look OK to install?
you can check it in.
> I'm still unsure about these "infinite" precision types, but I understand
> the motivation and I have no objections.  However:
>
>>      * Code that does widening conversions.  The canonical way that
>>        this is performed is to sign or zero extend the input value to
>>        the max width based on the sign of the type of the source and
>>        then to truncate that value to the target type.  This is in
>>        preference to using the sign of the target type to extend the
>>        value directly (which gets the wrong value for the conversion
>>        of large unsigned numbers to larger signed types).


> I don't understand this particular reason.  Using the sign of the source
> type is obviously right, but why does that mean we need "infinite" precision,
> rather than just doubling the precision of the source?
in a sense, "infinite" does not really mean infinite, it really just 
means large enough so that you never lose any information from the 
top.    For widening, all that "infinite" really needs to be is one 
bit larger than the destination type.   We could have had an abi 
where you specified the precision of every operation based on the 
precisions of the inputs.   Instead, for these kinds of operations, we 
decided to sniff the port and determine a fixed width that was large 
enough for everything that was needed for the port. We call that number 
infinite.  This sort of follows the convention that double-int was used 
with, where infinite was 128 bits, but with our design/implementation, we 
(hopefully) have no bugs where the size of the datatypes needed ever 
runs into implementation limits.

>>    * When a constant that has an integer type is converted to a
>>      wide-int it comes in with precision 0.  For these constants the
>>      top bit does accurately reflect the sign of that constant; this
>>      is an exception to the normal rule that the signedness is not
>>      represented.  When used in a binary operation, the wide-int
>>      implementation properly extends these constants so that they
>>      properly match the other operand of the computation.  This allows
>>      you to write:
>>
>>                 tree t = ...
>>                 wide_int x = t + 6;
>>
>>      assuming t is an int_cst.
> This seems dangerous.  Not all code that uses "unsigned HOST_WIDE_INT"
> actually wants it to be an unsigned value.  Some code uses it to avoid
> the undefinedness of signed overflow.  So these overloads could lead
> to us accidentally zero-extending what's conceptually a signed value
> without any obvious indication that that's happening.  Also, hex constants
> are unsigned int, but it doesn't seem safe to assume that 0x80000000 was
> meant to be zero-extended.
>
> I realise the same thing can happen if you mix "unsigned int" with
> HOST_WIDE_INT, but the point is that you shouldn't really do that
> in general, whereas we're defining these overloads precisely so that
> a mixture can be used.
>
> I'd prefer some explicit indication of the sign, at least for anything
> other than plain "int" (so that the compiler will complain about uses
> of "unsigned int" and above).

There is a part of me that finds this scary and a part of me that feels 
that the concern is largely theoretical.    It does make it much easier 
to read and understand the code to be able to write "t + 6" rather than 
"wide_int (t) + wide_int::from_uhwi (6)", but of course you lose some 
control over how 6 is converted.

>>    Note that the bits above the precision are not defined and the
>>    algorithms used here are careful not to depend on their value.  In
>>    particular, values that come in from rtx constants may have random
>>    bits.
> I have a feeling I'm rehashing a past debate, sorry, but rtx constants can't
> have random bits.  The upper bits must be a sign extension of the value.
> There's exactly one valid rtx for each (value, mode) pair.  If you saw
> something different then that sounds like a bug.  The rules should already
> be fairly well enforced though, since something like (const_int 128) --
> or (const_int 256) -- will not match a QImode operand.
>
> This is probably the part of the representation that I disagree most with.
> There seem to be two main ways we could handle the extension to whole HWIs:
>
> (1) leave the stored upper bits undefined and extend them on read
> (2) keep the stored upper bits in extended form
It is not a matter of opening old wounds.   I had run some tests on 
x86-64 and was never able to assume that the bits above the precision 
had always been canonized.   I will admit that i did not try to run down 
the bugs because i thought that since the rtx constructors did not have 
a mode associated with them, nor was one required in the constructors, 
this was not an easily solvable problem.   So i have no idea if i 
hit the one and only bug or was about to start drinking from a fire 
hose.   But the back ends are full of GEN_INT (a) where a came from god 
knows where and you almost never see it properly canonized.   I think 
that until GEN_INT takes a mandatory non VOIDmode mode parameter, and 
that constructor canonizes it, you are doomed chasing this bug 
forever.    My/our experience on the dataflow branch was that unless you 
go clean things up AND put in a bunch of traps to keep people honest, 
you are never going to be able to make this assumption.
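The canonical form being argued about can be shown with a toy version of what a mode-taking, canonizing constructor does (GCC's real helpers here are gen_int_mode/trunc_int_for_mode; this stand-alone sketch works on a bit width instead of a mode):

```cpp
#include <cassert>
#include <cstdint>

// Sign-extend C from WIDTH bits (1 <= WIDTH <= 64) so that there is
// exactly one representation per (value, width) pair -- the invariant
// a mandatory-mode GEN_INT could enforce.
static int64_t
toy_trunc_int_for_width (int64_t c, unsigned width)
{
  if (width >= 64)
    return c;
  uint64_t mask = ((uint64_t) 1 << width) - 1;
  uint64_t low = (uint64_t) c & mask;      // keep only the WIDTH low bits
  uint64_t sign = (uint64_t) 1 << (width - 1);
  return (int64_t) ((low ^ sign) - sign);  // extend from bit WIDTH-1
}
```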

Having said that, we actually do neither (1) nor (2) inside of 
wide-int.  For rtl to wide-int, we leave the upper bits undefined and 
never allow you to look at them because the constructor has a mode and 
that mode allows you to draw a line in the sand.   There is no 
constructor for the "infinite" wide ints from rtl so you have no way to 
look.

Doing this allows us to do something that richi really wanted: avoiding 
copying.   We do not get to do as much as richi would like and when he 
comes back from leave, he may have other places where can apply it, but 
right now if you say w = t + 6 as above, it "borrows" the rep from t to 
do the add, it does not really build a wide-int. We also do this if t is 
an rtx const.   But if we had to canonize the number, then we could not 
borrow the rep.
> The patch goes for (1) but (2) seems better to me, for a few reasons:
>
> * As above, constants coming from rtl are already in the right form,
>    so if you create a wide_int from an rtx and only query it, no explicit
>    extension is needed.
>
> * Things like logical operations and right shifts naturally preserve
>    the sign-extended form, so only a subset of write operations need
>    to take special measures.
Right now the internals of wide-int do not keep the bits above the 
precision clean.   As you point out, this could be fixed by changing 
lshift, add, sub, mul, div (and anything else I have forgotten about) 
and removing the code that cleans this up on exit.   I do not really 
care which way we go here, but if we do go with keeping the bits above 
the precision clean inside wide-int, we are going to have to clean the 
bits in the constructors from rtl, or fix some/a lot of bugs.

But if you want to go with the stay-clean plan you have to start clean, 
so at the rtl level this means copying.  And it is this no-copy trick 
that pushed me in the direction we went.

At the tree level, this is not an issue.   There are no constructors for 
tree-csts that do not have a type, and before we copy the rep from the 
wide-int to the tree, we clean the top bits.   (BTW - if I had to guess 
what the bug is with the missing messages on the mips port, it would be 
that the front ends HAD a bad habit of creating constants that did 
not fit into a type and then later checking to see if there were any 
interesting bits above the precision in the int-cst.  This no longer 
works because we clean out those top bits on construction, but it would 
not surprise me if we missed the fixed-point constant path.)   So at the 
tree level we could easily go either way here, but there is a cost at 
the rtl level to doing (2).


> * You have a public interface that exposes the underlying HWIs
>    (which is fine with me FWIW), so it seems better to expose a fully-defined
>    HWI rather than only a partially-defined HWI.
>
> E.g. zero_p is:
>
>    HOST_WIDE_INT x;
>
>    if (precision && precision < HOST_BITS_PER_WIDE_INT)
>      x = sext_hwi (val[0], precision);
>    else if (len == 0)
>      {
>        gcc_assert (precision == 0);
>        return true;
>      }
The above test for len == 0 has been removed because it is bit-rot.

>    else
>      x = val[0];
>
>    return len == 1 && x == 0;
>
> but I think it really ought to be just:
>
>    return len == 1 && val[0] == 0;
If we did your (2), it would be this way.
>>    When the precision is 0, all the bits in the LEN elements of
>>    VEC are significant with no undefined bits.  Precisionless
>>    constants are limited to being one or two HOST_WIDE_INTs.  When two
>>    are used the upper value is 0, and the high order bit of the first
>>    value is set.  (Note that this may need to be generalized if it is
>>    ever necessary to support 32bit HWIs again).
> I didn't understand this.  When are two HOST_WIDE_INTs needed for
> "precision 0"?
If a large unsigned constant comes in with the top bit set, the 
canonized value takes two HWIs, the top HWI being 0.
>> #define addr_max_bitsize (64)
>> #define addr_max_precision \
> These should either be lower-case C++ constants or upper-case macros.
This will be fixed.
>>   /* VAL is set to a size that is capable of computing a full
>>      multiplication on the largest mode that is represented on the
>>      target.  Currently there is a part of tree-vrp that requires 2x +
>>      2 bits of precision where x is the precision of the variables
>>      being optimized.  */
>>   HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
> This comment seems redundant with the one above WIDE_INT_MAX_ELTS
> and likely to get out of date.
This will be fixed.
>>      So real hardware only looks at a small part of the shift amount.
>>      On IBM machines, this tends to be 1 more than what is necessary
>>      to encode the shift amount.  The rest of the world looks at only
>>      the minimum number of bits.  This means that only 3 gate delays
>>      are necessary to set up the shifter.
> I agree that it makes sense for wide_int to provide a form of shift
> in which the shift amount is truncated.  However, I strongly believe
> wide-int.h should not test SHIFT_COUNT_TRUNCATED directly.  It should
> be up to the callers to decide when truncation is needed (and to what width).
Richi does not like this either, so I will get rid of it.
>
> We really need to get rid of the #include "tm.h" in wide-int.h.
> MAX_BITSIZE_MODE_ANY_INT should be the only partially-target-dependent
> thing in there.  If that comes from tm.h then perhaps we should put it
> into a new header file instead.
I will talk to Mike about fixing this.
>> /* Return THIS as a signed HOST_WIDE_INT.  If THIS does not fit in
>>     PREC, the information is lost. */
>> HOST_WIDE_INT
>> to_shwi (unsigned int prec = 0) const
> Just dropping the excess bits seems dangerous.  I think we should assert
> instead, at least when prec is 0.
There are times when this is useful.   There is a lot of code that just 
wants to look at the bottom bits to do some alignment stuff. I guess 
that code could just grab the bottom elt of the array, but that has not 
generally been how this has been done.
>> /* Return true if THIS is negative based on the interpretation of SGN.
>>     For UNSIGNED, this is always false.  This is correct even if
>>     precision is 0.  */
>> inline bool
>> wide_int::neg_p (signop sgn) const
> It seems odd that you have to pass SIGNED here.  I assume you were doing
> it so that the caller is forced to confirm signedness in the cases where
> a tree type is involved, but:
>
> * neg_p kind of implies signedness anyway
> * you don't require this for minus_one_p, so the interface isn't consistent
> * at the rtl level signedness isn't a property of the "type" (mode),
>    so it seems strange to add an extra hoop there
It was done this way so that you can pass TYPE_SIGN (t) in as the 
second parameter.   We could default the parameter to SIGNED and that 
would solve both cases.   I will look into minus_one_p.


>> /* Return true if THIS fits in an unsigned HOST_WIDE_INT with no
>>     loss of precision.  */
>> inline bool
>> wide_int_ro::fits_uhwi_p () const
>> {
>>    return len == 1 || (len == 2 && val[1] == 0);
>> }
> This doesn't look right, since len == 1 could mean that you have a
> gazillion-bit all-ones number.  Also, the val[1] == 0 check seems
> to assume the upper bits are defined when the precision isn't a multiple
> of the HWI size (although as above I that's a good thing and should be
> enforced).
You are correct.
> sign_mask has:

>>    gcc_unreachable ();
>> #if 0
>>    return val[len - 1] >> (HOST_BITS_PER_WIDE_INT - 1);
>> #endif
> Maybe remove this?
>
> The inline div routines do:
I will work on this this weekend.   tree-vrp has not been our friend and 
sometimes does not like to compile this function.

>>    if (overflow)
>>      *overflow = false;
> but then just pass overflow to divmod_internal.  Seems better to initialise
> *overflow there instead.
>
> div_floor has:
>
>>      return divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>> 			    &remainder, false, overflow);
>>
>>      if (quotient.neg_p (sgn) && !remainder.zero_p ())
>>        return quotient - 1;
>>      return quotient;
>>    }
> where the last bit is unreachable.
Not to mention that the compiler never complained.
>
>> /* Divide DIVISOR into THIS producing the remainder.  The result is
>>     the same size as the operands.  The sign is specified in SGN.
>>     The output is floor truncated.  OVERFLOW is set to true if the
>>     result overflows, false otherwise.  */
>> template <typename T>
>> inline wide_int_ro
>> wide_int_ro::mod_floor (const T &c, signop sgn, bool *overflow = 0) const
> It's really the quotient that's floor-truncated, not the output
> (the remainder).  I was a bit confused at first why you'd need to
> truncate an integer remainder.  Same for the other functions.
The comment needs work, not the code.   You do have to adjust the 
remainder in some cases, but it is not truncation.
>> #ifdef DEBUG_WIDE_INT
>>    debug_vwa ("wide_int_ro:: %d = (%s == %s)\n", result, *this, s, cl, p2);
>> #endif
> I think these are going to bitrot quickly if we #if 0 then out.
> I think we should either use:
>
>    if (DEBUG_WIDE_INT)
>      debug_vwa ("wide_int_ro:: %d = (%s == %s)\n", result, *this, s, cl, p2);
>
> or drop them.
My plan is to leave these in while the branch is still being developed 
and then get rid of them before it is merged.    My guess is that I am 
still going to need them when I try the 32-bit HWI test.

> The implementations of the general to_shwi1 and to_shwi2 functions look
> identical.  I think the comment should explain why two functions are needed.
I will check this.
>> /* Negate THIS.  */
>> inline wide_int_ro
>> wide_int_ro::operator - () const
>> {
>>    wide_int_ro r;
>>    r = wide_int_ro (0) - *this;
>>    return r;
>> }
>>
>> /* Negate THIS.  */
>> inline wide_int_ro
>> wide_int_ro::neg () const
>> {
>>    wide_int_ro z = wide_int_ro::from_shwi (0, precision);
>>
>>    gcc_checking_assert (precision);
>>    return z - *this;
>> }
> Why do we need both of these, and why are they implemented slightly
> differently?
neg should go away.
>> template <int bitsize>
>> inline bool
>> fixed_wide_int <bitsize>::multiple_of_p (const wide_int_ro &factor,
>> 					 signop sgn,
>> 					 fixed_wide_int *multiple) const
>> {
>>    return wide_int_ro::multiple_of_p (factor, sgn,
>> 				     reinterpret_cast <wide_int *> (multiple));
>> }
> The patch has several instances of this kind of reinterpret_cast.
> It looks like an aliasing violation.
>
> The main thing that's changed since the early patches is that we now
> have a mixture of wide-int types.  This seems to have led to a lot of
> boiler-plate forwarding functions (or at least it felt like that while
> moving them all out the class).  And that in turn seems to be because
> you're trying to keep everything as member functions.  E.g. a lot of the
> forwarders are from a member function to a static function.
>
> Wouldn't it be better to have the actual classes be light-weight,
> with little more than accessors, and do the actual work with non-member
> template functions?  There seems to be 3 grades of wide-int:
>
>    (1) read-only, constant precision  (from int, etc.)
>    (2) read-write, constant precision  (fixed_wide_int)
>    (3) read-write, variable precision  (wide_int proper)
>
> but we should be able to hide that behind templates, with compiler errors
> if you try to write to (1), etc.
>
> To take one example, the reason we can't simply use things like
> std::min on wide ints is because signedness needs to be specified
> explicitly, but there's a good reason why the standard defined
> std::min (x, y) rather than x.min (y).  It seems like we ought
> to have smin and umin functions alongside std::min, rather than
> make them member functions.  We could put them in a separate namespace
> if necessary.
>
> I might have a go at trying this last part next week, unless Richard is
> already in the process of rewriting things :-)
Mike will answer this.
> Thanks,
> Richard
>
>

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-23 15:03 ` Richard Sandiford
  2013-08-23 21:01   ` Kenneth Zadeck
@ 2013-08-24  0:03   ` Mike Stump
  2013-08-24  1:59   ` Mike Stump
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 50+ messages in thread
From: Mike Stump @ 2013-08-24  0:03 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Kenneth Zadeck, rguenther, gcc-patches, r.sandiford

On Aug 23, 2013, at 8:02 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
>> /* Negate THIS.  */
>> inline wide_int_ro
>> wide_int_ro::operator - () const
>> {
>>  wide_int_ro r;
>>  r = wide_int_ro (0) - *this;
>>  return r;
>> }
>> 
>> /* Negate THIS.  */
>> inline wide_int_ro
>> wide_int_ro::neg () const
>> {
>>  wide_int_ro z = wide_int_ro::from_shwi (0, precision);
>> 
>>  gcc_checking_assert (precision);
>>  return z - *this;
>> }
> 
> Why do we need both of these, and why are they implemented slightly
> differently?

Thanks for the review.

neg can go away.  There was a time when operator names weren't used, and I added them to make client code more natural.  I've removed neg () and updated neg (overflow) to match the style of operator -().

Index: wide-int.cc
===================================================================
--- wide-int.cc	(revision 201894)
+++ wide-int.cc	(working copy)
@@ -1545,7 +1545,7 @@ wide_int_ro::abs () const
   gcc_checking_assert (precision);
 
   if (sign_mask ())
-    result = neg ();
+    result = -*this;
   else
     result = *this;
 
Index: wide-int.h
===================================================================
--- wide-int.h	(revision 201950)
+++ wide-int.h	(working copy)
@@ -1800,19 +1800,7 @@ class GTY(()) wide_int_ro {
 
   /* Negate this.  */
   inline wide_int_ro operator - () const {
-    wide_int_ro r;
-    r = wide_int_ro (0) - *this;
-    return r;
-  }
-
-  /* Negate THIS.  */
-  inline wide_int_ro
-  neg () const
-  {
-    wide_int_ro z = wide_int_ro::from_shwi (0, precision);
-
-    gcc_checking_assert (precision);
-    return z - *this;
+    return wide_int_ro (0) - *this;
   }
 
   /* Negate THIS.  OVERFLOW is set true if the value cannot be
@@ -1820,12 +1808,11 @@ class GTY(()) wide_int_ro {
   inline wide_int_ro
   neg (bool *overflow) const
   {
-    wide_int_ro z = wide_int_ro::from_shwi (0, precision);
-
     gcc_checking_assert (precision);
+
     *overflow = only_sign_bit_p ();
 
-    return z - *this;
+    return wide_int_ro (0) - *this;
   }
 
   wide_int_ro parity () const;

>> template <int bitsize>
>> inline bool
>> fixed_wide_int <bitsize>::multiple_of_p (const wide_int_ro &factor,
>> 					 signop sgn,
>> 					 fixed_wide_int *multiple) const
>> {
>>  return wide_int_ro::multiple_of_p (factor, sgn,
>> 				     reinterpret_cast <wide_int *> (multiple));
>> }
> 
> The patch has several instances of this kind of reinterpret_cast.
> It looks like an aliasing violation.

The cast is completely unneeded, so I removed it.  sdivmod_floor was of the same class, so I removed it as well.  These two uses of reinterpret_cast are a bit different from the rest of them.  I'll see about addressing the remaining ones in a followup.

diff --git a/gcc/wide-int.h b/gcc/wide-int.h
index 0a906a9..3fef0d5 100644
--- a/gcc/wide-int.h
+++ b/gcc/wide-int.h
@@ -3081,9 +3081,7 @@ public:
   bool multiple_of_p (const wide_int_ro &factor,
                      signop sgn,
                      fixed_wide_int *multiple) const {
-    return wide_int_ro::multiple_of_p (factor,
-                                      sgn,
-                                      reinterpret_cast <wide_int *> (multiple));
+    return wide_int_ro::multiple_of_p (factor, sgn, multiple);
   }
 
   /* Conversion to and from GMP integer representations.  */
@@ -3336,7 +3334,7 @@ public:
   }
   template <typename T>
   inline fixed_wide_int sdivmod_floor (const T &c, fixed_wide_int *mod) const {
-    return wide_int_ro::sdivmod_floor (c, reinterpret_cast <wide_int *> (mod));
+    return wide_int_ro::sdivmod_floor (c, mod);
   }
 
   /* Shifting rotating and extracting.  */




* Re: wide-int branch now up for public comment and review
  2013-08-23 15:03 ` Richard Sandiford
  2013-08-23 21:01   ` Kenneth Zadeck
  2013-08-24  0:03   ` Mike Stump
@ 2013-08-24  1:59   ` Mike Stump
  2013-08-24  3:34   ` Mike Stump
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 50+ messages in thread
From: Mike Stump @ 2013-08-24  1:59 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Kenneth Zadeck, rguenther, gcc-patches, r.sandiford

On Aug 23, 2013, at 8:02 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
>> #define addr_max_bitsize (64)
>> #define addr_max_precision \
> 
> These should either be lower-case C++ constants or upper-case macros.

Fixed:

diff --git a/gcc/wide-int.h b/gcc/wide-int.h
index 9ccdf7c..b40962c 100644
--- a/gcc/wide-int.h
+++ b/gcc/wide-int.h
@@ -247,15 +247,15 @@ along with GCC; see the file COPYING3.  If not see
    on any platform is 64 bits.  When that changes, then it is likely
    that a target hook should be defined so that targets can make this
    value larger for those targets.  */
-#define addr_max_bitsize (64)
+const int addr_max_bitsize = 64;
 
 /* This is the internal precision used when doing any address
    arithmetic.  The '4' is really 3 + 1.  Three of the bits are for
    the number of extra bits needed to do bit addresses and single bit is
    allow everything to be signed without loosing any precision.  Then
    everything is rounded up to the next HWI for efficiency.  */
-#define addr_max_precision \
-  ((addr_max_bitsize + 4 + HOST_BITS_PER_WIDE_INT - 1) & ~(HOST_BITS_PER_WIDE_INT - 1))
+const int addr_max_precision
+  = ((addr_max_bitsize + 4 + HOST_BITS_PER_WIDE_INT - 1) & ~(HOST_BITS_PER_WIDE_INT - 1));
 
 enum ShiftOp {
   NONE,



* Re: wide-int branch now up for public comment and review
  2013-08-23 15:03 ` Richard Sandiford
                     ` (2 preceding siblings ...)
  2013-08-24  1:59   ` Mike Stump
@ 2013-08-24  3:34   ` Mike Stump
  2013-08-24  9:04     ` Richard Sandiford
  2013-08-24 20:46   ` Kenneth Zadeck
                     ` (3 subsequent siblings)
  7 siblings, 1 reply; 50+ messages in thread
From: Mike Stump @ 2013-08-24  3:34 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Kenneth Zadeck, rguenther, gcc-patches, r.sandiford

On Aug 23, 2013, at 8:02 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
>>  * When a constant that has an integer type is converted to a
>>    wide-int it comes in with precision 0.  For these constants the
>>    top bit does accurately reflect the sign of that constant; this
>>    is an exception to the normal rule that the signedness is not
>>    represented.  When used in a binary operation, the wide-int
>>    implementation properly extends these constants so that they
>>    properly match the other operand of the computation.  This allows
>>    you write:
>> 
>>               tree t = ...
>>               wide_int x = t + 6;
>> 
>>    assuming t is a int_cst.
> 
> This seems dangerous.  Not all code that uses "unsigned HOST_WIDE_INT"
> actually wants it to be an unsigned value.  Some code uses it to avoid
> the undefinedness of signed overflow.  So these overloads could lead
> to us accidentally zero-extending what's conceptually a signed value
> without any obvious indication that that's happening.  Also, hex constants
> are unsigned int, but it doesn't seem safe to assume that 0x80000000 was
> meant to be zero-extended.
> 
> I realise the same thing can happen if you mix "unsigned int" with
> HOST_WIDE_INT, but the point is that you shouldn't really do that
> in general, whereas we're defining these overloads precisely so that
> a mixture can be used.

So, I don't like penalizing all users, because one user might write incorrect code.  We have the simple operators so that users can retain some of the simplicity and beauty that is the underlying language.  Those semantics are well known and reasonable.  We reasonably match those semantics.  At the end of the day, we have to be able to trust what the user writes.  Now, a user that doesn't trust themselves, can elect to not use these functions; they aren't required to use them.


* Re: wide-int branch now up for public comment and review
  2013-08-24  3:34   ` Mike Stump
@ 2013-08-24  9:04     ` Richard Sandiford
  0 siblings, 0 replies; 50+ messages in thread
From: Richard Sandiford @ 2013-08-24  9:04 UTC (permalink / raw)
  To: Mike Stump; +Cc: Kenneth Zadeck, rguenther, gcc-patches, r.sandiford

Mike Stump <mikestump@comcast.net> writes:
> On Aug 23, 2013, at 8:02 AM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>>>  * When a constant that has an integer type is converted to a
>>>    wide-int it comes in with precision 0.  For these constants the
>>>    top bit does accurately reflect the sign of that constant; this
>>>    is an exception to the normal rule that the signedness is not
>>>    represented.  When used in a binary operation, the wide-int
>>>    implementation properly extends these constants so that they
>>>    properly match the other operand of the computation.  This allows
>>>    you write:
>>> 
>>>               tree t = ...
>>>               wide_int x = t + 6;
>>> 
>>>    assuming t is a int_cst.
>> 
>> This seems dangerous.  Not all code that uses "unsigned HOST_WIDE_INT"
>> actually wants it to be an unsigned value.  Some code uses it to avoid
>> the undefinedness of signed overflow.  So these overloads could lead
>> to us accidentally zero-extending what's conceptually a signed value
>> without any obvious indication that that's happening.  Also, hex constants
>> are unsigned int, but it doesn't seem safe to assume that 0x80000000 was
>> meant to be zero-extended.
>> 
>> I realise the same thing can happen if you mix "unsigned int" with
>> HOST_WIDE_INT, but the point is that you shouldn't really do that
>> in general, whereas we're defining these overloads precisely so that
>> a mixture can be used.
>
> So, I don't like penalizing all users, because one user might write
> incorrect code.  We have the simple operators so that users can retain
> some of the simplicity and beauty that is the underlying language.
> Those semantics are well known and reasonable.  We reasonably match
> those semantics.  At the end of the day, we have to be able to trust
> what the user writes.  Now, a user that doesn't trust themselves, can
> elect to not use these functions; they aren't required to use them.

But the point of my "unsigned HOST_WIDE_INT" example is that the
semantics of the language can also get in the way.  If you're adding
or multiplying two smallish constants, you should generally use
"unsigned HOST_WIDE_INT" in order to avoid the undefinedness of signed
overflow.  In that kind of case the "unsigned" in no way implies that
you want the value to be extended to the wider types that we're
adding with this patch.  I'm worried about making that assumption
just to provide a bit of syntactic sugar, especially when any
wrong-code bug would be highly value-dependent.  When this kind of
thing is done at the rtl level, the original constant (in a CONST_INT)
is also HOST_WIDE_INT-sized, so the kind of code that I'm talking about
did not do any promotion before the wide-int conversion.  That changes
if we generalise one of the values to a wide_int.

It isn't just a case of being confident when writing new code.
It's a case of us changing a large body of existing code in one go,
some of which uses unsigned values for the reason above.

Like I say, all this objection is for "unsigned int and above".
If we provide the syntactic sugar for plain "int" and use templates
to force a compiler error for "higher" types then that would still allow
things like "x + 2" but force a bit of analysis for "x + y" in cases
where "x" and "y" are not the same type and one is wide_int.

Thanks,
Richard


* Re: wide-int branch now up for public comment and review
  2013-08-23 21:01   ` Kenneth Zadeck
@ 2013-08-24 10:44     ` Richard Sandiford
  2013-08-24 13:10       ` Richard Sandiford
  2013-08-24 21:22       ` Kenneth Zadeck
  0 siblings, 2 replies; 50+ messages in thread
From: Richard Sandiford @ 2013-08-24 10:44 UTC (permalink / raw)
  To: Kenneth Zadeck; +Cc: rguenther, gcc-patches, Mike Stump, r.sandiford

Kenneth Zadeck <zadeck@naturalbridge.com> writes:
>>>      * Code that does widening conversions.  The canonical way that
>>>        this is performed is to sign or zero extend the input value to
>>>        the max width based on the sign of the type of the source and
>>>        then to truncate that value to the target type.  This is in
>>>        preference to using the sign of the target type to extend the
>>>        value directly (which gets the wrong value for the conversion
>>>        of large unsigned numbers to larger signed types).
>
>
>> I don't understand this particular reason.  Using the sign of the source
>> type is obviously right, but why does that mean we need "infinite" precision,
>> rather than just doubling the precision of the source?
> in a sense, "infinite" does not really mean infinite, it really just 
> means large enough so that you never loose any information from the 
> top.    For widening all that you really need to be "infinite" is one 
> more bit larger than the destination type.

I'm still being clueless, sorry, but why does the intermediate int need
to be wider than the destination type?  If you're going from an N1-bit
value to an N2>N1-bit value, I don't see why you ever need more than
N2 bits.  Maybe I'm misunderstanding what the comment means by
"widening conversions".

>> This seems dangerous.  Not all code that uses "unsigned HOST_WIDE_INT"
>> actually wants it to be an unsigned value.  Some code uses it to avoid
>> the undefinedness of signed overflow.  So these overloads could lead
>> to us accidentally zero-extending what's conceptually a signed value
>> without any obvious indication that that's happening.  Also, hex constants
>> are unsigned int, but it doesn't seem safe to assume that 0x80000000 was
>> meant to be zero-extended.
>>
>> I realise the same thing can happen if you mix "unsigned int" with
>> HOST_WIDE_INT, but the point is that you shouldn't really do that
>> in general, whereas we're defining these overloads precisely so that
>> a mixture can be used.
>>
>> I'd prefer some explicit indication of the sign, at least for anything
>> other than plain "int" (so that the compiler will complain about uses
>> of "unsigned int" and above).
>
> There is a part of me that finds this scary and a part of me that feels 
> that the concern is largely theoretical.    It does make it much easier 
> to read and understand the code to be able to write "t + 6" rather than 
> "wide_int (t) + wide_int::from_uhwi" (6) but of course you loose some 
> control over how 6 is converted.

I replied in more detail to Mike's comment, but the reason I suggested
only allowing this for plain "int" (and using templates to forbid
"unsigned int" and wider types) was that you could still write "t + 6".
You just couldn't write "t + 0x80000000" or "t + x", where "x" is a
HOST_WIDE_INT or similar.

Code can store values in "HOST_WIDE_INT" or "unsigned HOST_WIDE_INT"
if we know at GCC compile time that HOST_WIDE_INT is big enough.
But code doesn't necessarily store values in a "HOST_WIDE_INT"
because we know at GCC compile time that the value is signed,
or in a "unsigned HOST_WIDE_INT" because we know at GCC compile
time that the value is unsigned.  The signedness often depends
on the input.  The language still forces us to pick one or the
other though.  I don't feel comfortable with having syntactic
sugar that reads too much into the choice.

>>>    Note that the bits above the precision are not defined and the
>>>    algorithms used here are careful not to depend on their value.  In
>>>    particular, values that come in from rtx constants may have random
>>>    bits.
>> I have a feeling I'm rehashing a past debate, sorry, but rtx constants can't
>> have random bits.  The upper bits must be a sign extension of the value.
>> There's exactly one valid rtx for each (value, mode) pair.  If you saw
>> something different then that sounds like a bug.  The rules should already
>> be fairly well enforced though, since something like (const_int 128) --
>> or (const_int 256) -- will not match a QImode operand.
>>
>> This is probably the part of the representation that I disagree most with.
>> There seem to be two main ways we could hande the extension to whole HWIs:
>>
>> (1) leave the stored upper bits undefined and extend them on read
>> (2) keep the stored upper bits in extended form
> It is not a matter of opening old wounds.   I had run some tests on 
> x86-64 and was never able to assume that the bits above the precision 
> had always been canonized.   I will admit that i did not try to run down 
> the bugs because i thought that since the rtx constructors did not have 
> a mode associated with them now was one required to in the constructors, 
> that this was not an easily solvable problem.   So i have no idea if i 
> hit the one and only bug or was about to start drinking from a fire 
> hose.

Hopefully the first one. :-)   Would you mind going back and trying again,
so that we at least have some idea what kind of bugs they were?

> But the back ends are full of GEN_INT (a) where a came from god 
> knows where and you almost never see it properly canonized.   I think 
> that until GEN_INT takes a mandatory non VOIDmode mode parameter, and 
> that constructor canonizes it, you are doomed chasing this bug 
> forever.

Well, sure, the bug crops up now and then (I fixed another instance
a week or so ago).  But generally this bug shows up as an ICE --
usually an unrecognisable instruction ICE -- precisely because this
is something that the code is already quite picky about.

Most of the GEN_INTs in the backend will be correct.  They're in
a position to make asumptions about instructions and target modes.

> Having said that, we actually do neither of (1) or (2) inside of 
> wide-int.  For rtl to wide-int, we leave the upper bits undefined and 
> never allow you to look at them because the constructor has a mode and 
> that mode allows you to draw a line in the sand.   There is no 
> constructor for the "infinite" wide ints from rtl so you have no way to 
> look.

Sorry, I don't understand the distinction you're making between this
and (1).  The precision is taken from the mode, and you flesh out the
upper bits on read if you care.

> Doing this allows us to do something that richi really wanted: avoiding 
> copying.   We do not get to do as much richi would like and when he 
> comes back from leave, he may have other places where can apply it, but 
> right now if you say w = t + 6 as above, it "borrows" the rep from t to 
> do the add, it does not really build a wide-int. We also do this if t is 
> an rtx const.   But if we had to canonize the number, then we could not 
> borrow the rep.

But the point is that the number is already canonical for rtx, so you
should do nothing.  I.e. (2) will be doing less work.  If we have other
input types besides rtx that don't define the upper bits, then under (2)
their wide_int accessor functions (to_shwi[12] in the current scheme)
should do the extension.

>> The patch goes for (1) but (2) seems better to me, for a few reasons:
>>
>> * As above, constants coming from rtl are already in the right form,
>>    so if you create a wide_int from an rtx and only query it, no explicit
>>    extension is needed.
>>
>> * Things like logical operations and right shifts naturally preserve
>>    the sign-extended form, so only a subset of write operations need
>>    to take special measures.
> right now the internals of wide-int do not keep the bits above the 
> precision clean.   as you point out this could be fixed by changing 
> lshift, add, sub, mul, div (and anything else i have forgotten about) 
> and removing the code that cleans this up on exit.   I actually do not 
> really care which way we go here but if we do go on keeping the bits 
> clean above the precision inside wide-int, we are going to have to clean 
> the bits in the constructors from rtl, or fix some/a lot of bugs.
>
> But if you want to go with the stay clean plan you have to start clean, 
> so at the rtl level this means copying. and it is the not copying trick 
> that pushed me in the direction we went.
>
> At the tree level, this is not an issue.   There are no constructors for 
> tree-csts that do not have a type, and before we copy the rep from the 
> wide-int to the tree, we clean the top bits.   (BTW - If i had to guess 
> what the bug is with the missing messages on the mips port, it would be 
> because the front ends HAD a bad habit of creating constants that did 
> not fit into a type and then later checking to see if there were any 
> interesting bits above the precision in the int-cst.  This now does not 
> work because we clean out those top bits on construction, but it would 
> not surprise me if we missed the fixed-point constant path.)   So at the 
> tree level, we could easily go either way here, but there is a cost at 
> the rtl level with doing (2).

TBH, I think we should do (2) and fix whatever bugs you saw with invalid
rtx constants.

>>>    When the precision is 0, all the bits in the LEN elements of
>>>    VEC are significant with no undefined bits.  Precisionless
>>>    constants are limited to being one or two HOST_WIDE_INTs.  When two
>>>    are used the upper value is 0, and the high order bit of the first
>>>    value is set.  (Note that this may need to be generalized if it is
>>>    ever necessary to support 32bit HWIs again).
>> I didn't understand this.  When are two HOST_WIDE_INTs needed for
>> "precision 0"?
> if a large unsigned constant comes in with the top bit set, the 
> canonized value takes 2 hwis, the top hwi being 0.

Ah, yeah.  But then that sounds like another reason to only allow
"int" to have zero precision.

>>> /* Return THIS as a signed HOST_WIDE_INT.  If THIS does not fit in
>>>     PREC, the information is lost. */
>>> HOST_WIDE_INT
>>> to_shwi (unsigned int prec = 0) const
>> Just dropping the excess bits seems dangerous.  I think we should assert
>> instead, at least when prec is 0.
> there are times when this is useful.   there is a lot of code that just 
> wants to look at the bottom bits to do some alignment stuff.  I guess 
> that code could just grab the bottom elt of the array, but that has not 
> generally been how this has been done.

Yeah, using elt to mean "get the low HOST_WIDE_INT" sounds better to me FWIW.
"to_shwi" sounds like a type conversion whereas "elt" is more obviously
accessing only part of the value.

>>>    gcc_unreachable ();
>>> #if 0
>>>    return val[len - 1] >> (HOST_BITS_PER_WIDE_INT - 1);
>>> #endif
>> Maybe remove this?
>>
>> The inline div routines do:
> i will work on this this weekend.   tree-vrp has not been our friend 
> and sometimes does not like to compile this function.

OK.  If it's just temporary then never mind, but a comment might help.
I remember you mentioned tree-vrp in a reply to one of Richard's reviews,
so this is probably something you've been asked before.  And will probably
be asked again if there's no comment :-)

>> I think these are going to bitrot quickly if we #if 0 then out.
>> I think we should either use:
>>
>>    if (DEBUG_WIDE_INT)
>>      debug_vwa ("wide_int_ro:: %d = (%s == %s)\n", result, *this, s, cl, p2);
>>
>> or drop them.
> my plan is to leave these in while the branch is still being developed 
> and then get rid of them before it is merged.    My guess is that i am 
> still going to need them when i try the 32-bit hwi test.

Ah, OK, sounds good.

One thing I hit yesterday while trying out the "lightweight classes"
idea from the end of the mail is that, as things stand, most operations
end up with machine instructions to do:

  if (*p1 == 0)
    *p1 = *p2;

  if (*p2 == 0)
    *p2 = *p1;

E.g. for wide_int + wide_int, wide_int + tree and wide_int + rtx,
where the compiler has no way of statically proving that the precisions
are nonzero.  Considering all the template goo we're adding to avoid
a copy (even though that copy is going to be 24 bytes in the worst case
on most targets, and could be easily SRAed), this seems like a relatively
high overhead for syntactic sugar, especially on hosts that need branches
rather than conditional moves.

Either all this is premature optimisation (including avoiding the
copying) or we should probably do something to avoid these checks too.
I wonder how easy it would be to restrict this use of "zero precision"
(i.e. flexible precision) to those where primitive types like "int" are
used as template arguments to operators, and require a precision when
constructing a wide_int.  I wouldn't have expected "real" precision 0
(from zero-width bitfields or whatever) to need any special handling
compared to precision 1 or 2.

Thanks,
Richard

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-24 10:44     ` Richard Sandiford
@ 2013-08-24 13:10       ` Richard Sandiford
  2013-08-24 18:16         ` Kenneth Zadeck
  2013-08-24 21:22       ` Kenneth Zadeck
  1 sibling, 1 reply; 50+ messages in thread
From: Richard Sandiford @ 2013-08-24 13:10 UTC (permalink / raw)
  To: Kenneth Zadeck; +Cc: rguenther, gcc-patches, Mike Stump, r.sandiford

Richard Sandiford <rdsandiford@googlemail.com> writes:
> I wonder how easy it would be to restrict this use of "zero precision"
> (i.e. flexible precision) to those where primitive types like "int" are
> used as template arguments to operators, and require a precision when
> constructing a wide_int.  I wouldn't have expected "real" precision 0
> (from zero-width bitfields or whatever) to need any special handling
> compared to precision 1 or 2.

I tried the last bit -- requiring a precision when constructing a
wide_int -- and it seemed surprisingly easy.  What do you think of
the attached?  Most of the forced knock-on changes seem like improvements,
but the java part is a bit ugly.  I also went with "wide_int (0, prec).cmp"
for now, although I'd like to add static cmp, cmps and cmpu alongside
leu_p, etc., if that's OK.  It would then be possible to write
"wide_int::cmp (0, ...)" and avoid the wide_int construction altogether.

I wondered whether you might also want to get rid of the build_int_cst*
functions, but that still looks a long way off, so I hope using them in
these two places doesn't seem too bad.

This is just an incremental step.  I've also only run it through a
subset of the testsuite so far, but full tests are in progress...

Thanks,
Richard


Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/fold-const.c	2013-08-24 01:00:00.000000000 +0100
@@ -8865,15 +8865,16 @@ pointer_may_wrap_p (tree base, tree offs
   if (bitpos < 0)
     return true;
 
+  int precision = TYPE_PRECISION (TREE_TYPE (base));
   if (offset == NULL_TREE)
-    wi_offset = wide_int::zero (TYPE_PRECISION (TREE_TYPE (base)));
+    wi_offset = wide_int::zero (precision);
   else if (TREE_CODE (offset) != INTEGER_CST || TREE_OVERFLOW (offset))
     return true;
   else
     wi_offset = offset;
 
   bool overflow;
-  wide_int units = wide_int::from_shwi (bitpos / BITS_PER_UNIT);
+  wide_int units = wide_int::from_shwi (bitpos / BITS_PER_UNIT, precision);
   total = wi_offset.add (units, UNSIGNED, &overflow);
   if (overflow)
     return true;
Index: gcc/gimple-ssa-strength-reduction.c
===================================================================
--- gcc/gimple-ssa-strength-reduction.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/gimple-ssa-strength-reduction.c	2013-08-24 01:00:00.000000000 +0100
@@ -777,7 +777,6 @@ restructure_reference (tree *pbase, tree
 {
   tree base = *pbase, offset = *poffset;
   max_wide_int index = *pindex;
-  wide_int bpu = BITS_PER_UNIT;
   tree mult_op0, t1, t2, type;
   max_wide_int c1, c2, c3, c4;
 
@@ -786,7 +785,7 @@ restructure_reference (tree *pbase, tree
       || TREE_CODE (base) != MEM_REF
       || TREE_CODE (offset) != MULT_EXPR
       || TREE_CODE (TREE_OPERAND (offset, 1)) != INTEGER_CST
-      || !index.umod_floor (bpu).zero_p ())
+      || !index.umod_floor (BITS_PER_UNIT).zero_p ())
     return false;
 
   t1 = TREE_OPERAND (base, 0);
@@ -822,7 +821,7 @@ restructure_reference (tree *pbase, tree
       c2 = 0;
     }
 
-  c4 = index.udiv_floor (bpu);
+  c4 = index.udiv_floor (BITS_PER_UNIT);
 
   *pbase = t1;
   *poffset = fold_build2 (MULT_EXPR, sizetype, t2,
Index: gcc/java/jcf-parse.c
===================================================================
--- gcc/java/jcf-parse.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/java/jcf-parse.c	2013-08-24 01:00:00.000000000 +0100
@@ -1043,9 +1043,10 @@ get_constant (JCF *jcf, int index)
 	wide_int val;
 
 	num = JPOOL_UINT (jcf, index);
-	val = wide_int (num).sforce_to_size (32).lshift_widen (32, 64);
+	val = wide_int::from_hwi (num, long_type_node)
+	  .sforce_to_size (32).lshift_widen (32, 64);
 	num = JPOOL_UINT (jcf, index + 1);
-	val |= wide_int (num);
+	val |= wide_int::from_hwi (num, long_type_node);
 
 	value = wide_int_to_tree (long_type_node, val);
 	break;
Index: gcc/loop-unroll.c
===================================================================
--- gcc/loop-unroll.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/loop-unroll.c	2013-08-24 01:00:00.000000000 +0100
@@ -816,8 +816,7 @@ unroll_loop_constant_iterations (struct
 	  desc->niter -= exit_mod;
 	  loop->nb_iterations_upper_bound -= exit_mod;
 	  if (loop->any_estimate
-	      && wide_int (exit_mod).leu_p
-	           (loop->nb_iterations_estimate))
+	      && wide_int::leu_p (exit_mod, loop->nb_iterations_estimate))
 	    loop->nb_iterations_estimate -= exit_mod;
 	  else
 	    loop->any_estimate = false;
@@ -860,8 +859,7 @@ unroll_loop_constant_iterations (struct
 	  desc->niter -= exit_mod + 1;
 	  loop->nb_iterations_upper_bound -= exit_mod + 1;
 	  if (loop->any_estimate
-	      && wide_int (exit_mod + 1).leu_p
-	           (loop->nb_iterations_estimate))
+	      && wide_int::leu_p (exit_mod + 1, loop->nb_iterations_estimate))
 	    loop->nb_iterations_estimate -= exit_mod + 1;
 	  else
 	    loop->any_estimate = false;
@@ -1381,7 +1379,7 @@ decide_peel_simple (struct loop *loop, i
   if (estimated_loop_iterations (loop, &iterations))
     {
       /* TODO: unsigned/signed confusion */
-      if (wide_int::from_shwi (npeel).leu_p (iterations))
+      if (wide_int::leu_p (npeel, iterations))
 	{
 	  if (dump_file)
 	    {
Index: gcc/real.c
===================================================================
--- gcc/real.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/real.c	2013-08-24 01:00:00.000000000 +0100
@@ -2401,7 +2401,7 @@ real_digit (int n)
   gcc_assert (n <= 9);
 
   if (n > 0 && num[n].cl == rvc_zero)
-    real_from_integer (&num[n], VOIDmode, wide_int (n), UNSIGNED);
+    real_from_integer (&num[n], VOIDmode, n, UNSIGNED);
 
   return &num[n];
 }
Index: gcc/tree-predcom.c
===================================================================
--- gcc/tree-predcom.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/tree-predcom.c	2013-08-24 01:00:00.000000000 +0100
@@ -923,7 +923,7 @@ add_ref_to_chain (chain_p chain, dref re
 
   gcc_assert (root->offset.les_p (ref->offset));
   dist = ref->offset - root->offset;
-  if (max_wide_int::from_uhwi (MAX_DISTANCE).leu_p (dist))
+  if (wide_int::leu_p (MAX_DISTANCE, dist))
     {
       free (ref);
       return;
Index: gcc/tree-pretty-print.c
===================================================================
--- gcc/tree-pretty-print.c	2013-08-24 12:48:00.091379339 +0100
+++ gcc/tree-pretty-print.c	2013-08-24 01:00:00.000000000 +0100
@@ -1295,7 +1295,7 @@ dump_generic_node (pretty_printer *buffe
 	tree field, val;
 	bool is_struct_init = false;
 	bool is_array_init = false;
-	wide_int curidx = 0;
+	wide_int curidx;
 	pp_left_brace (buffer);
 	if (TREE_CLOBBER_P (node))
 	  pp_string (buffer, "CLOBBER");
Index: gcc/tree-ssa-ccp.c
===================================================================
--- gcc/tree-ssa-ccp.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/tree-ssa-ccp.c	2013-08-24 01:00:00.000000000 +0100
@@ -526,7 +526,7 @@ get_value_from_alignment (tree expr)
 	      : -1).and_not (align / BITS_PER_UNIT - 1);
   val.lattice_val = val.mask.minus_one_p () ? VARYING : CONSTANT;
   if (val.lattice_val == CONSTANT)
-    val.value = wide_int_to_tree (type, bitpos / BITS_PER_UNIT);
+    val.value = build_int_cstu (type, bitpos / BITS_PER_UNIT);
   else
     val.value = NULL_TREE;
 
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c	2013-08-24 12:48:00.093379358 +0100
+++ gcc/tree-vrp.c	2013-08-24 01:00:00.000000000 +0100
@@ -2420,9 +2420,9 @@ extract_range_from_binary_expr_1 (value_
 	      wmin = min0 - max1;
 	      wmax = max0 - min1;
 
-	      if (wide_int (0).cmp (max1, sgn) != wmin.cmp (min0, sgn))
+	      if (wide_int (0, prec).cmp (max1, sgn) != wmin.cmp (min0, sgn))
 		min_ovf = min0.cmp (max1, sgn);
-	      if (wide_int (0).cmp (min1, sgn) != wmax.cmp (max0, sgn))
+	      if (wide_int (0, prec).cmp (min1, sgn) != wmax.cmp (max0, sgn))
 		max_ovf = max0.cmp (min1, sgn);
 	    }
 
@@ -4911,8 +4911,8 @@ register_edge_assert_for_2 (tree name, e
       gimple def_stmt = SSA_NAME_DEF_STMT (name);
       tree name2 = NULL_TREE, names[2], cst2 = NULL_TREE;
       tree val2 = NULL_TREE;
-      wide_int mask = 0;
       unsigned int prec = TYPE_PRECISION (TREE_TYPE (val));
+      wide_int mask (0, prec);
       unsigned int nprec = prec;
       enum tree_code rhs_code = ERROR_MARK;
 
@@ -5101,7 +5101,7 @@ register_edge_assert_for_2 (tree name, e
 	}
       if (names[0] || names[1])
 	{
-	  wide_int minv, maxv = 0, valv, cst2v;
+	  wide_int minv, maxv, valv, cst2v;
 	  wide_int tem, sgnbit;
 	  bool valid_p = false, valn = false, cst2n = false;
 	  enum tree_code ccode = comp_code;
@@ -5170,7 +5170,7 @@ register_edge_assert_for_2 (tree name, e
 		      goto lt_expr;
 		    }
 		  if (!cst2n)
-		    sgnbit = 0;
+		    sgnbit = wide_int::zero (nprec);
 		}
 	      break;
 
Index: gcc/tree.c
===================================================================
--- gcc/tree.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/tree.c	2013-08-24 01:00:00.000000000 +0100
@@ -1048,13 +1048,13 @@ build_int_cst (tree type, HOST_WIDE_INT
   if (!type)
     type = integer_type_node;
 
-  return wide_int_to_tree (type, low);
+  return wide_int_to_tree (type, wide_int::from_hwi (low, type));
 }
 
 /* static inline */ tree
 build_int_cstu (tree type, unsigned HOST_WIDE_INT cst)
 {
-  return wide_int_to_tree (type, cst);
+  return wide_int_to_tree (type, wide_int::from_hwi (cst, type));
 }
 
 /* Create an INT_CST node with a LOW value sign extended to TYPE.  */
@@ -1064,7 +1064,7 @@ build_int_cst_type (tree type, HOST_WIDE
 {
   gcc_assert (type);
 
-  return wide_int_to_tree (type, low);
+  return wide_int_to_tree (type, wide_int::from_hwi (low, type));
 }
 
 /* Constructs tree in type TYPE from with value given by CST.  Signedness
@@ -10688,7 +10688,7 @@ lower_bound_in_type (tree outer, tree in
 	 contains all values of INNER type.  In particular, both INNER
 	 and OUTER types have zero in common.  */
       || (oprec > iprec && TYPE_UNSIGNED (inner)))
-    return wide_int_to_tree (outer, 0);
+    return build_int_cst (outer, 0);
   else
     {
       /* If we are widening a signed type to another signed type, we
Index: gcc/wide-int.cc
===================================================================
--- gcc/wide-int.cc	2013-08-24 12:48:00.096379386 +0100
+++ gcc/wide-int.cc	2013-08-24 01:00:00.000000000 +0100
@@ -32,6 +32,8 @@ along with GCC; see the file COPYING3.
 const int MAX_SIZE = 4 * (MAX_BITSIZE_MODE_ANY_INT / 4
 		     + MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT + 32);
 
+static const HOST_WIDE_INT zeros[WIDE_INT_MAX_ELTS] = {};
+
 /*
  * Internal utilities.
  */
@@ -2517,7 +2519,7 @@ wide_int_ro::divmod_internal (bool compu
     {
       if (top_bit_of (dividend, dividend_len, dividend_prec))
 	{
-	  u0 = sub_large (wide_int (0).val, 1,
+	  u0 = sub_large (zeros, 1,
 			  dividend_prec, dividend, dividend_len, UNSIGNED);
 	  dividend = u0.val;
 	  dividend_len = u0.len;
@@ -2525,7 +2527,7 @@ wide_int_ro::divmod_internal (bool compu
 	}
       if (top_bit_of (divisor, divisor_len, divisor_prec))
 	{
-	  u1 = sub_large (wide_int (0).val, 1,
+	  u1 = sub_large (zeros, 1,
 			  divisor_prec, divisor, divisor_len, UNSIGNED);
 	  divisor = u1.val;
 	  divisor_len = u1.len;
Index: gcc/wide-int.h
===================================================================
--- gcc/wide-int.h	2013-08-24 12:14:20.979479335 +0100
+++ gcc/wide-int.h	2013-08-24 01:00:00.000000000 +0100
@@ -230,6 +230,11 @@ #define WIDE_INT_H
 #define DEBUG_WIDE_INT
 #endif
 
+/* Used for overloaded functions in which the only other acceptable
+   scalar type is const_tree.  It stops a plain 0 from being treated
+   as a null tree.  */
+struct never_used {};
+
 /* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
    early examination of the target's mode file.  Thus it is safe that
    some small multiple of this number is easily larger than any number
@@ -324,15 +329,16 @@ class GTY(()) wide_int_ro
 public:
   wide_int_ro ();
   wide_int_ro (const_tree);
-  wide_int_ro (HOST_WIDE_INT);
-  wide_int_ro (int);
-  wide_int_ro (unsigned HOST_WIDE_INT);
-  wide_int_ro (unsigned int);
+  wide_int_ro (never_used *);
+  wide_int_ro (HOST_WIDE_INT, unsigned int);
+  wide_int_ro (int, unsigned int);
+  wide_int_ro (unsigned HOST_WIDE_INT, unsigned int);
+  wide_int_ro (unsigned int, unsigned int);
   wide_int_ro (const rtx_mode_t &);
 
   /* Conversions.  */
-  static wide_int_ro from_shwi (HOST_WIDE_INT, unsigned int = 0);
-  static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, unsigned int = 0);
+  static wide_int_ro from_shwi (HOST_WIDE_INT, unsigned int);
+  static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, unsigned int);
   static wide_int_ro from_hwi (HOST_WIDE_INT, const_tree);
   static wide_int_ro from_shwi (HOST_WIDE_INT, enum machine_mode);
   static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, enum machine_mode);
@@ -349,9 +355,11 @@ class GTY(()) wide_int_ro
 
   static wide_int_ro max_value (unsigned int, signop, unsigned int = 0);
   static wide_int_ro max_value (const_tree);
+  static wide_int_ro max_value (never_used *);
   static wide_int_ro max_value (enum machine_mode, signop);
   static wide_int_ro min_value (unsigned int, signop, unsigned int = 0);
   static wide_int_ro min_value (const_tree);
+  static wide_int_ro min_value (never_used *);
   static wide_int_ro min_value (enum machine_mode, signop);
 
   /* Small constants.  These are generally only needed in the places
@@ -842,18 +850,16 @@ class GTY(()) wide_int : public wide_int
   wide_int ();
   wide_int (const wide_int_ro &);
   wide_int (const_tree);
-  wide_int (HOST_WIDE_INT);
-  wide_int (int);
-  wide_int (unsigned HOST_WIDE_INT);
-  wide_int (unsigned int);
+  wide_int (never_used *);
+  wide_int (HOST_WIDE_INT, unsigned int);
+  wide_int (int, unsigned int);
+  wide_int (unsigned HOST_WIDE_INT, unsigned int);
+  wide_int (unsigned int, unsigned int);
   wide_int (const rtx_mode_t &);
 
   wide_int &operator = (const wide_int_ro &);
   wide_int &operator = (const_tree);
-  wide_int &operator = (HOST_WIDE_INT);
-  wide_int &operator = (int);
-  wide_int &operator = (unsigned HOST_WIDE_INT);
-  wide_int &operator = (unsigned int);
+  wide_int &operator = (never_used *);
   wide_int &operator = (const rtx_mode_t &);
 
   wide_int &operator ++ ();
@@ -904,28 +910,28 @@ inline wide_int_ro::wide_int_ro (const_t
 		      TYPE_PRECISION (TREE_TYPE (tcst)), false);
 }
 
-inline wide_int_ro::wide_int_ro (HOST_WIDE_INT op0)
+inline wide_int_ro::wide_int_ro (HOST_WIDE_INT op0, unsigned int prec)
 {
-  precision = 0;
+  precision = prec;
   val[0] = op0;
   len = 1;
 }
 
-inline wide_int_ro::wide_int_ro (int op0)
+inline wide_int_ro::wide_int_ro (int op0, unsigned int prec)
 {
-  precision = 0;
+  precision = prec;
   val[0] = op0;
   len = 1;
 }
 
-inline wide_int_ro::wide_int_ro (unsigned HOST_WIDE_INT op0)
+inline wide_int_ro::wide_int_ro (unsigned HOST_WIDE_INT op0, unsigned int prec)
 {
-  *this = from_uhwi (op0);
+  *this = from_uhwi (op0, prec);
 }
 
-inline wide_int_ro::wide_int_ro (unsigned int op0)
+inline wide_int_ro::wide_int_ro (unsigned int op0, unsigned int prec)
 {
-  *this = from_uhwi (op0);
+  *this = from_uhwi (op0, prec);
 }
 
 inline wide_int_ro::wide_int_ro (const rtx_mode_t &op0)
@@ -2264,7 +2270,7 @@ wide_int_ro::mul_high (const T &c, signo
 wide_int_ro::operator - () const
 {
   wide_int_ro r;
-  r = wide_int_ro (0) - *this;
+  r = zero (precision) - *this;
   return r;
 }
 
@@ -2277,7 +2283,7 @@ wide_int_ro::neg (bool *overflow) const
 
   *overflow = only_sign_bit_p ();
 
-  return wide_int_ro (0) - *this;
+  return zero (precision) - *this;
 }
 
 /* Return THIS - C.  */
@@ -3147,28 +3153,28 @@ inline wide_int::wide_int (const_tree tc
 		      TYPE_PRECISION (TREE_TYPE (tcst)), false);
 }
 
-inline wide_int::wide_int (HOST_WIDE_INT op0)
+inline wide_int::wide_int (HOST_WIDE_INT op0, unsigned int prec)
 {
-  precision = 0;
+  precision = prec;
   val[0] = op0;
   len = 1;
 }
 
-inline wide_int::wide_int (int op0)
+inline wide_int::wide_int (int op0, unsigned int prec)
 {
-  precision = 0;
+  precision = prec;
   val[0] = op0;
   len = 1;
 }
 
-inline wide_int::wide_int (unsigned HOST_WIDE_INT op0)
+inline wide_int::wide_int (unsigned HOST_WIDE_INT op0, unsigned int prec)
 {
-  *this = wide_int_ro::from_uhwi (op0);
+  *this = wide_int_ro::from_uhwi (op0, prec);
 }
 
-inline wide_int::wide_int (unsigned int op0)
+inline wide_int::wide_int (unsigned int op0, unsigned int prec)
 {
-  *this = wide_int_ro::from_uhwi (op0);
+  *this = wide_int_ro::from_uhwi (op0, prec);
 }
 
 inline wide_int::wide_int (const rtx_mode_t &op0)
@@ -3567,31 +3573,28 @@ inline fixed_wide_int <bitsize>::fixed_w
 
 template <int bitsize>
 inline fixed_wide_int <bitsize>::fixed_wide_int (HOST_WIDE_INT op0)
-  : wide_int_ro (op0)
+  : wide_int_ro (op0, bitsize)
 {
-  precision = bitsize;
 }
 
 template <int bitsize>
-inline fixed_wide_int <bitsize>::fixed_wide_int (int op0) : wide_int_ro (op0)
+inline fixed_wide_int <bitsize>::fixed_wide_int (int op0)
+  : wide_int_ro (op0, bitsize)
 {
-  precision = bitsize;
 }
 
 template <int bitsize>
 inline fixed_wide_int <bitsize>::fixed_wide_int (unsigned HOST_WIDE_INT op0)
-  : wide_int_ro (op0)
+  : wide_int_ro (op0, bitsize)
 {
-  precision = bitsize;
   if (neg_p (SIGNED))
     static_cast <wide_int_ro &> (*this) = zext (HOST_BITS_PER_WIDE_INT);
 }
 
 template <int bitsize>
 inline fixed_wide_int <bitsize>::fixed_wide_int (unsigned int op0)
-  : wide_int_ro (op0)
+  : wide_int_ro (op0, bitsize)
 {
-  precision = bitsize;
   if (sizeof (int) == sizeof (HOST_WIDE_INT)
       && neg_p (SIGNED))
     *this = zext (HOST_BITS_PER_WIDE_INT);
@@ -3661,9 +3664,7 @@ fixed_wide_int <bitsize>::operator = (co
 inline fixed_wide_int <bitsize> &
 fixed_wide_int <bitsize>::operator = (HOST_WIDE_INT op0)
 {
-  static_cast <wide_int_ro &> (*this) = op0;
-  precision = bitsize;
-
+  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
   return *this;
 }
 
@@ -3671,9 +3672,7 @@ fixed_wide_int <bitsize>::operator = (HO
 inline fixed_wide_int <bitsize> &
 fixed_wide_int <bitsize>::operator = (int op0)
 {
-  static_cast <wide_int_ro &> (*this) = op0;
-  precision = bitsize;
-
+  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
   return *this;
 }
 
@@ -3681,8 +3680,7 @@ fixed_wide_int <bitsize>::operator = (in
 inline fixed_wide_int <bitsize> &
 fixed_wide_int <bitsize>::operator = (unsigned HOST_WIDE_INT op0)
 {
-  static_cast <wide_int_ro &> (*this) = op0;
-  precision = bitsize;
+  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
 
   /* This is logically top_bit_set_p.  */
   if (neg_p (SIGNED))
@@ -3695,8 +3693,7 @@ fixed_wide_int <bitsize>::operator = (un
 inline fixed_wide_int <bitsize> &
 fixed_wide_int <bitsize>::operator = (unsigned int op0)
 {
-  static_cast <wide_int_ro &> (*this) = op0;
-  precision = bitsize;
+  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
 
   if (sizeof (int) == sizeof (HOST_WIDE_INT)
       && neg_p (SIGNED))


* Re: wide-int branch now up for public comment and review
  2013-08-24 13:10       ` Richard Sandiford
@ 2013-08-24 18:16         ` Kenneth Zadeck
  2013-08-25  7:27           ` Richard Sandiford
  0 siblings, 1 reply; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-24 18:16 UTC (permalink / raw)
  To: rguenther, gcc-patches, Mike Stump, r.sandiford, rdsandiford

On 08/24/2013 08:05 AM, Richard Sandiford wrote:
> Richard Sandiford <rdsandiford@googlemail.com> writes:
>> I wonder how easy it would be to restrict this use of "zero precision"
>> (i.e. flexible precision) to those where primitive types like "int" are
>> used as template arguments to operators, and require a precision when
>> constructing a wide_int.  I wouldn't have expected "real" precision 0
>> (from zero-width bitfields or whatever) to need any special handling
>> compared to precision 1 or 2.
> I tried the last bit -- requiring a precision when constructing a
> wide_int -- and it seemed surprisingly easy.  What do you think of
> the attached?  Most of the forced knock-on changes seem like improvements,
> but the java part is a bit ugly.  I also went with "wide_int (0, prec).cmp"
> for now, although I'd like to add static cmp, cmps and cmpu alongside
> leu_p, etc., if that's OK.  It would then be possible to write
> "wide_int::cmp (0, ...)" and avoid the wide_int construction altogether.
>
> I wondered whether you might also want to get rid of the build_int_cst*
> functions, but that still looks a long way off, so I hope using them in
> these two places doesn't seem too bad.
>
> This is just an incremental step.  I've also only run it through a
> subset of the testsuite so far, but full tests are in progress...
So i am going to make two high-level comments here and then i am going 
to leave the ultimate decision to the community.   (1) I am mildly in 
favor of leaving the prec 0 stuff the way that it is, and (2) my guess 
is that richi will also favor this.   My justification for (2) is that 
he had a lot of comments about this before he went on leave and this is 
substantially the way that it was when he left.  Also, remember that one 
of his biggest dislikes was having to specify precisions.

However, this question is really bigger than this branch, which is why i 
hope others will join in, because it really comes down to how we want 
the compiler to look when it is fully converted to c++.   It has taken 
me a while to get used to writing and reading code like this, where 
there is a lot of c++ magic going on behind the scenes.   And with that 
magic comes the responsibility of the programmer to get it right.  There 
were/are a lot of people in the gcc community who did not want to go 
down the c++ pathway for exactly this reason.  However, i am being converted.

The rest of my comments are small ones about the patch; some of them 
apply no matter how the decision is made.
=====
It is perfectly fine to add the static versions of the cmp functions and 
the usage of those functions in this patch looks perfectly reasonable.

>
> Thanks,
> Richard
>
>
> Index: gcc/fold-const.c
> ===================================================================
> --- gcc/fold-const.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/fold-const.c	2013-08-24 01:00:00.000000000 +0100
> @@ -8865,15 +8865,16 @@ pointer_may_wrap_p (tree base, tree offs
>     if (bitpos < 0)
>       return true;
>   
> +  int precision = TYPE_PRECISION (TREE_TYPE (base));
>     if (offset == NULL_TREE)
> -    wi_offset = wide_int::zero (TYPE_PRECISION (TREE_TYPE (base)));
> +    wi_offset = wide_int::zero (precision);
>     else if (TREE_CODE (offset) != INTEGER_CST || TREE_OVERFLOW (offset))
>       return true;
>     else
>       wi_offset = offset;
>   
>     bool overflow;
> -  wide_int units = wide_int::from_shwi (bitpos / BITS_PER_UNIT);
> +  wide_int units = wide_int::from_shwi (bitpos / BITS_PER_UNIT, precision);
>     total = wi_offset.add (units, UNSIGNED, &overflow);
>     if (overflow)
>       return true;
So this is a part of the code that really should have been using 
addr_wide_int rather than wide_int.  It is doing address arithmetic with 
bit positions.    Because of this, the calculations should have been 
done with a precision of 3 + what comes out of the type.   The 
addr_wide_int has a fixed precision that is guaranteed to be large 
enough for any address math on the machine.

> Index: gcc/gimple-ssa-strength-reduction.c
> ===================================================================
> --- gcc/gimple-ssa-strength-reduction.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/gimple-ssa-strength-reduction.c	2013-08-24 01:00:00.000000000 +0100
> @@ -777,7 +777,6 @@ restructure_reference (tree *pbase, tree
>   {
>     tree base = *pbase, offset = *poffset;
>     max_wide_int index = *pindex;
> -  wide_int bpu = BITS_PER_UNIT;
>     tree mult_op0, t1, t2, type;
>     max_wide_int c1, c2, c3, c4;
>   
> @@ -786,7 +785,7 @@ restructure_reference (tree *pbase, tree
>         || TREE_CODE (base) != MEM_REF
>         || TREE_CODE (offset) != MULT_EXPR
>         || TREE_CODE (TREE_OPERAND (offset, 1)) != INTEGER_CST
> -      || !index.umod_floor (bpu).zero_p ())
> +      || !index.umod_floor (BITS_PER_UNIT).zero_p ())
>       return false;
>   
>     t1 = TREE_OPERAND (base, 0);
> @@ -822,7 +821,7 @@ restructure_reference (tree *pbase, tree
>         c2 = 0;
>       }
>   
> -  c4 = index.udiv_floor (bpu);
> +  c4 = index.udiv_floor (BITS_PER_UNIT);
>   
this is just coding style and i like your way better; however, richi 
asked me to go easy on the cleanups because it makes it more difficult 
to review a patch this big.    use your judgment here.
>     *pbase = t1;
>     *poffset = fold_build2 (MULT_EXPR, sizetype, t2,
> Index: gcc/java/jcf-parse.c
> ===================================================================
> --- gcc/java/jcf-parse.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/java/jcf-parse.c	2013-08-24 01:00:00.000000000 +0100
> @@ -1043,9 +1043,10 @@ get_constant (JCF *jcf, int index)
>   	wide_int val;
>   
>   	num = JPOOL_UINT (jcf, index);
> -	val = wide_int (num).sforce_to_size (32).lshift_widen (32, 64);
> +	val = wide_int::from_hwi (num, long_type_node)
> +	  .sforce_to_size (32).lshift_widen (32, 64);
>   	num = JPOOL_UINT (jcf, index + 1);
> -	val |= wide_int (num);
> +	val |= wide_int::from_hwi (num, long_type_node);
>   
>   	value = wide_int_to_tree (long_type_node, val);
>   	break;
This is a somewhat older patch, from before we got aggressive about this.
If i were coding this now, it would be

val |= num;

> Index: gcc/loop-unroll.c
> ===================================================================
> --- gcc/loop-unroll.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/loop-unroll.c	2013-08-24 01:00:00.000000000 +0100
> @@ -816,8 +816,7 @@ unroll_loop_constant_iterations (struct
>   	  desc->niter -= exit_mod;
>   	  loop->nb_iterations_upper_bound -= exit_mod;
>   	  if (loop->any_estimate
> -	      && wide_int (exit_mod).leu_p
> -	           (loop->nb_iterations_estimate))
> +	      && wide_int::leu_p (exit_mod, loop->nb_iterations_estimate))
>   	    loop->nb_iterations_estimate -= exit_mod;
>   	  else
>   	    loop->any_estimate = false;
> @@ -860,8 +859,7 @@ unroll_loop_constant_iterations (struct
>   	  desc->niter -= exit_mod + 1;
>   	  loop->nb_iterations_upper_bound -= exit_mod + 1;
>   	  if (loop->any_estimate
> -	      && wide_int (exit_mod + 1).leu_p
> -	           (loop->nb_iterations_estimate))
> +	      && wide_int::leu_p (exit_mod + 1, loop->nb_iterations_estimate))
>   	    loop->nb_iterations_estimate -= exit_mod + 1;
>   	  else
>   	    loop->any_estimate = false;
> @@ -1381,7 +1379,7 @@ decide_peel_simple (struct loop *loop, i
>     if (estimated_loop_iterations (loop, &iterations))
>       {
>         /* TODO: unsigned/signed confusion */
> -      if (wide_int::from_shwi (npeel).leu_p (iterations))
> +      if (wide_int::leu_p (npeel, iterations))
>   	{
>   	  if (dump_file)
>   	    {

All of this is perfectly fine.  It reflects the fact that, after the
static functions were put in, we did not go back and use them to the
full extent.

> Index: gcc/real.c
> ===================================================================
> --- gcc/real.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/real.c	2013-08-24 01:00:00.000000000 +0100
> @@ -2401,7 +2401,7 @@ real_digit (int n)
>     gcc_assert (n <= 9);
>   
>     if (n > 0 && num[n].cl == rvc_zero)
> -    real_from_integer (&num[n], VOIDmode, wide_int (n), UNSIGNED);
> +    real_from_integer (&num[n], VOIDmode, n, UNSIGNED);
>   
>     return &num[n];
>   }
> Index: gcc/tree-predcom.c
> ===================================================================
> --- gcc/tree-predcom.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/tree-predcom.c	2013-08-24 01:00:00.000000000 +0100
> @@ -923,7 +923,7 @@ add_ref_to_chain (chain_p chain, dref re
>   
>     gcc_assert (root->offset.les_p (ref->offset));
>     dist = ref->offset - root->offset;
> -  if (max_wide_int::from_uhwi (MAX_DISTANCE).leu_p (dist))
> +  if (wide_int::leu_p (MAX_DISTANCE, dist))
>       {
>         free (ref);
>         return;
> Index: gcc/tree-pretty-print.c
> ===================================================================
> --- gcc/tree-pretty-print.c	2013-08-24 12:48:00.091379339 +0100
> +++ gcc/tree-pretty-print.c	2013-08-24 01:00:00.000000000 +0100
> @@ -1295,7 +1295,7 @@ dump_generic_node (pretty_printer *buffe
>   	tree field, val;
>   	bool is_struct_init = false;
>   	bool is_array_init = false;
> -	wide_int curidx = 0;
> +	wide_int curidx;
>   	pp_left_brace (buffer);
>   	if (TREE_CLOBBER_P (node))
>   	  pp_string (buffer, "CLOBBER");
> Index: gcc/tree-ssa-ccp.c
> ===================================================================
> --- gcc/tree-ssa-ccp.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/tree-ssa-ccp.c	2013-08-24 01:00:00.000000000 +0100
> @@ -526,7 +526,7 @@ get_value_from_alignment (tree expr)
>   	      : -1).and_not (align / BITS_PER_UNIT - 1);
>     val.lattice_val = val.mask.minus_one_p () ? VARYING : CONSTANT;
>     if (val.lattice_val == CONSTANT)
> -    val.value = wide_int_to_tree (type, bitpos / BITS_PER_UNIT);
> +    val.value = build_int_cstu (type, bitpos / BITS_PER_UNIT);
>     else
>       val.value = NULL_TREE;
>   
> Index: gcc/tree-vrp.c
> ===================================================================
> --- gcc/tree-vrp.c	2013-08-24 12:48:00.093379358 +0100
> +++ gcc/tree-vrp.c	2013-08-24 01:00:00.000000000 +0100
> @@ -2420,9 +2420,9 @@ extract_range_from_binary_expr_1 (value_
>   	      wmin = min0 - max1;
>   	      wmax = max0 - min1;
>   
> -	      if (wide_int (0).cmp (max1, sgn) != wmin.cmp (min0, sgn))
> +	      if (wide_int (0, prec).cmp (max1, sgn) != wmin.cmp (min0, sgn))
>   		min_ovf = min0.cmp (max1, sgn);
> -	      if (wide_int (0).cmp (min1, sgn) != wmax.cmp (max0, sgn))
> +	      if (wide_int (0, prec).cmp (min1, sgn) != wmax.cmp (max0, sgn))
>   		max_ovf = max0.cmp (min1, sgn);
>   	    }
>   
Of course, this is really a place for the static functions.
> @@ -4911,8 +4911,8 @@ register_edge_assert_for_2 (tree name, e
>         gimple def_stmt = SSA_NAME_DEF_STMT (name);
>         tree name2 = NULL_TREE, names[2], cst2 = NULL_TREE;
>         tree val2 = NULL_TREE;
> -      wide_int mask = 0;
>         unsigned int prec = TYPE_PRECISION (TREE_TYPE (val));
> +      wide_int mask (0, prec);
>         unsigned int nprec = prec;
>         enum tree_code rhs_code = ERROR_MARK;
>   
> @@ -5101,7 +5101,7 @@ register_edge_assert_for_2 (tree name, e
>   	}
>         if (names[0] || names[1])
>   	{
> -	  wide_int minv, maxv = 0, valv, cst2v;
> +	  wide_int minv, maxv, valv, cst2v;
>   	  wide_int tem, sgnbit;
>   	  bool valid_p = false, valn = false, cst2n = false;
>   	  enum tree_code ccode = comp_code;
> @@ -5170,7 +5170,7 @@ register_edge_assert_for_2 (tree name, e
>   		      goto lt_expr;
>   		    }
>   		  if (!cst2n)
> -		    sgnbit = 0;
> +		    sgnbit = wide_int::zero (nprec);
>   		}
>   	      break;
>   
> Index: gcc/tree.c
> ===================================================================
> --- gcc/tree.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/tree.c	2013-08-24 01:00:00.000000000 +0100
> @@ -1048,13 +1048,13 @@ build_int_cst (tree type, HOST_WIDE_INT
>     if (!type)
>       type = integer_type_node;
>   
> -  return wide_int_to_tree (type, low);
> +  return wide_int_to_tree (type, wide_int::from_hwi (low, type));
>   }
>   
>   /* static inline */ tree
>   build_int_cstu (tree type, unsigned HOST_WIDE_INT cst)
>   {
> -  return wide_int_to_tree (type, cst);
> +  return wide_int_to_tree (type, wide_int::from_hwi (cst, type));
>   }
>   
>   /* Create an INT_CST node with a LOW value sign extended to TYPE.  */
> @@ -1064,7 +1064,7 @@ build_int_cst_type (tree type, HOST_WIDE
>   {
>     gcc_assert (type);
>   
> -  return wide_int_to_tree (type, low);
> +  return wide_int_to_tree (type, wide_int::from_hwi (low, type));
>   }
>   
>   /* Constructs tree in type TYPE from with value given by CST.  Signedness
> @@ -10688,7 +10688,7 @@ lower_bound_in_type (tree outer, tree in
>   	 contains all values of INNER type.  In particular, both INNER
>   	 and OUTER types have zero in common.  */
>         || (oprec > iprec && TYPE_UNSIGNED (inner)))
> -    return wide_int_to_tree (outer, 0);
> +    return build_int_cst (outer, 0);
>     else
>       {
>         /* If we are widening a signed type to another signed type, we
> Index: gcc/wide-int.cc
> ===================================================================
> --- gcc/wide-int.cc	2013-08-24 12:48:00.096379386 +0100
> +++ gcc/wide-int.cc	2013-08-24 01:00:00.000000000 +0100
> @@ -32,6 +32,8 @@ along with GCC; see the file COPYING3.
>   const int MAX_SIZE = 4 * (MAX_BITSIZE_MODE_ANY_INT / 4
>   		     + MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT + 32);
>   
> +static const HOST_WIDE_INT zeros[WIDE_INT_MAX_ELTS] = {};
> +
>   /*
>    * Internal utilities.
>    */
> @@ -2517,7 +2519,7 @@ wide_int_ro::divmod_internal (bool compu
>       {
>         if (top_bit_of (dividend, dividend_len, dividend_prec))
>   	{
> -	  u0 = sub_large (wide_int (0).val, 1,
> +	  u0 = sub_large (zeros, 1,
>   			  dividend_prec, dividend, dividend_len, UNSIGNED);
>   	  dividend = u0.val;
>   	  dividend_len = u0.len;
> @@ -2525,7 +2527,7 @@ wide_int_ro::divmod_internal (bool compu
>   	}
>         if (top_bit_of (divisor, divisor_len, divisor_prec))
>   	{
> -	  u1 = sub_large (wide_int (0).val, 1,
> +	  u1 = sub_large (zeros, 1,
>   			  divisor_prec, divisor, divisor_len, UNSIGNED);
>   	  divisor = u1.val;
>   	  divisor_len = u1.len;
> Index: gcc/wide-int.h
> ===================================================================
> --- gcc/wide-int.h	2013-08-24 12:14:20.979479335 +0100
> +++ gcc/wide-int.h	2013-08-24 01:00:00.000000000 +0100
> @@ -230,6 +230,11 @@ #define WIDE_INT_H
>   #define DEBUG_WIDE_INT
>   #endif
>   
> +/* Used for overloaded functions in which the only other acceptable
> +   scalar type is const_tree.  It stops a plain 0 from being treated
> +   as a null tree.  */
> +struct never_used {};
> +
>   /* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
>      early examination of the target's mode file.  Thus it is safe that
>      some small multiple of this number is easily larger than any number
> @@ -324,15 +329,16 @@ class GTY(()) wide_int_ro
>   public:
>     wide_int_ro ();
>     wide_int_ro (const_tree);
> -  wide_int_ro (HOST_WIDE_INT);
> -  wide_int_ro (int);
> -  wide_int_ro (unsigned HOST_WIDE_INT);
> -  wide_int_ro (unsigned int);
> +  wide_int_ro (never_used *);
> +  wide_int_ro (HOST_WIDE_INT, unsigned int);
> +  wide_int_ro (int, unsigned int);
> +  wide_int_ro (unsigned HOST_WIDE_INT, unsigned int);
> +  wide_int_ro (unsigned int, unsigned int);
>     wide_int_ro (const rtx_mode_t &);
>   
>     /* Conversions.  */
> -  static wide_int_ro from_shwi (HOST_WIDE_INT, unsigned int = 0);
> -  static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, unsigned int = 0);
> +  static wide_int_ro from_shwi (HOST_WIDE_INT, unsigned int);
> +  static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, unsigned int);
>     static wide_int_ro from_hwi (HOST_WIDE_INT, const_tree);
>     static wide_int_ro from_shwi (HOST_WIDE_INT, enum machine_mode);
>     static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, enum machine_mode);
> @@ -349,9 +355,11 @@ class GTY(()) wide_int_ro
>   
>     static wide_int_ro max_value (unsigned int, signop, unsigned int = 0);
>     static wide_int_ro max_value (const_tree);
> +  static wide_int_ro max_value (never_used *);
>     static wide_int_ro max_value (enum machine_mode, signop);
>     static wide_int_ro min_value (unsigned int, signop, unsigned int = 0);
>     static wide_int_ro min_value (const_tree);
> +  static wide_int_ro min_value (never_used *);
>     static wide_int_ro min_value (enum machine_mode, signop);
>   
>     /* Small constants.  These are generally only needed in the places
> @@ -842,18 +850,16 @@ class GTY(()) wide_int : public wide_int
>     wide_int ();
>     wide_int (const wide_int_ro &);
>     wide_int (const_tree);
> -  wide_int (HOST_WIDE_INT);
> -  wide_int (int);
> -  wide_int (unsigned HOST_WIDE_INT);
> -  wide_int (unsigned int);
> +  wide_int (never_used *);
> +  wide_int (HOST_WIDE_INT, unsigned int);
> +  wide_int (int, unsigned int);
> +  wide_int (unsigned HOST_WIDE_INT, unsigned int);
> +  wide_int (unsigned int, unsigned int);
>     wide_int (const rtx_mode_t &);
>   
>     wide_int &operator = (const wide_int_ro &);
>     wide_int &operator = (const_tree);
> -  wide_int &operator = (HOST_WIDE_INT);
> -  wide_int &operator = (int);
> -  wide_int &operator = (unsigned HOST_WIDE_INT);
> -  wide_int &operator = (unsigned int);
> +  wide_int &operator = (never_used *);
>     wide_int &operator = (const rtx_mode_t &);
>   
>     wide_int &operator ++ ();
> @@ -904,28 +910,28 @@ inline wide_int_ro::wide_int_ro (const_t
>   		      TYPE_PRECISION (TREE_TYPE (tcst)), false);
>   }
>   
> -inline wide_int_ro::wide_int_ro (HOST_WIDE_INT op0)
> +inline wide_int_ro::wide_int_ro (HOST_WIDE_INT op0, unsigned int prec)
>   {
> -  precision = 0;
> +  precision = prec;
>     val[0] = op0;
>     len = 1;
>   }
>   
> -inline wide_int_ro::wide_int_ro (int op0)
> +inline wide_int_ro::wide_int_ro (int op0, unsigned int prec)
>   {
> -  precision = 0;
> +  precision = prec;
>     val[0] = op0;
>     len = 1;
>   }
>   
> -inline wide_int_ro::wide_int_ro (unsigned HOST_WIDE_INT op0)
> +inline wide_int_ro::wide_int_ro (unsigned HOST_WIDE_INT op0, unsigned int prec)
>   {
> -  *this = from_uhwi (op0);
> +  *this = from_uhwi (op0, prec);
>   }
>   
> -inline wide_int_ro::wide_int_ro (unsigned int op0)
> +inline wide_int_ro::wide_int_ro (unsigned int op0, unsigned int prec)
>   {
> -  *this = from_uhwi (op0);
> +  *this = from_uhwi (op0, prec);
>   }
>   
>   inline wide_int_ro::wide_int_ro (const rtx_mode_t &op0)
> @@ -2264,7 +2270,7 @@ wide_int_ro::mul_high (const T &c, signo
>   wide_int_ro::operator - () const
>   {
>     wide_int_ro r;
> -  r = wide_int_ro (0) - *this;
> +  r = zero (precision) - *this;
>     return r;
>   }
>   
> @@ -2277,7 +2283,7 @@ wide_int_ro::neg (bool *overflow) const
>   
>     *overflow = only_sign_bit_p ();
>   
> -  return wide_int_ro (0) - *this;
> +  return zero (precision) - *this;
>   }
>   
>   /* Return THIS - C.  */
> @@ -3147,28 +3153,28 @@ inline wide_int::wide_int (const_tree tc
>   		      TYPE_PRECISION (TREE_TYPE (tcst)), false);
>   }
>   
> -inline wide_int::wide_int (HOST_WIDE_INT op0)
> +inline wide_int::wide_int (HOST_WIDE_INT op0, unsigned int prec)
>   {
> -  precision = 0;
> +  precision = prec;
>     val[0] = op0;
>     len = 1;
>   }
>   
> -inline wide_int::wide_int (int op0)
> +inline wide_int::wide_int (int op0, unsigned int prec)
>   {
> -  precision = 0;
> +  precision = prec;
>     val[0] = op0;
>     len = 1;
>   }
>   
> -inline wide_int::wide_int (unsigned HOST_WIDE_INT op0)
> +inline wide_int::wide_int (unsigned HOST_WIDE_INT op0, unsigned int prec)
>   {
> -  *this = wide_int_ro::from_uhwi (op0);
> +  *this = wide_int_ro::from_uhwi (op0, prec);
>   }
>   
> -inline wide_int::wide_int (unsigned int op0)
> +inline wide_int::wide_int (unsigned int op0, unsigned int prec)
>   {
> -  *this = wide_int_ro::from_uhwi (op0);
> +  *this = wide_int_ro::from_uhwi (op0, prec);
>   }
>   
>   inline wide_int::wide_int (const rtx_mode_t &op0)
> @@ -3567,31 +3573,28 @@ inline fixed_wide_int <bitsize>::fixed_w
>   
>   template <int bitsize>
>   inline fixed_wide_int <bitsize>::fixed_wide_int (HOST_WIDE_INT op0)
> -  : wide_int_ro (op0)
> +  : wide_int_ro (op0, bitsize)
>   {
> -  precision = bitsize;
>   }
>   
>   template <int bitsize>
> -inline fixed_wide_int <bitsize>::fixed_wide_int (int op0) : wide_int_ro (op0)
> +inline fixed_wide_int <bitsize>::fixed_wide_int (int op0)
> +  : wide_int_ro (op0, bitsize)
>   {
> -  precision = bitsize;
>   }
>   
>   template <int bitsize>
>   inline fixed_wide_int <bitsize>::fixed_wide_int (unsigned HOST_WIDE_INT op0)
> -  : wide_int_ro (op0)
> +  : wide_int_ro (op0, bitsize)
>   {
> -  precision = bitsize;
>     if (neg_p (SIGNED))
>       static_cast <wide_int_ro &> (*this) = zext (HOST_BITS_PER_WIDE_INT);
>   }
>   
>   template <int bitsize>
>   inline fixed_wide_int <bitsize>::fixed_wide_int (unsigned int op0)
> -  : wide_int_ro (op0)
> +  : wide_int_ro (op0, bitsize)
>   {
> -  precision = bitsize;
>     if (sizeof (int) == sizeof (HOST_WIDE_INT)
>         && neg_p (SIGNED))
>       *this = zext (HOST_BITS_PER_WIDE_INT);
> @@ -3661,9 +3664,7 @@ fixed_wide_int <bitsize>::operator = (co
>   inline fixed_wide_int <bitsize> &
>   fixed_wide_int <bitsize>::operator = (HOST_WIDE_INT op0)
>   {
> -  static_cast <wide_int_ro &> (*this) = op0;
> -  precision = bitsize;
> -
> +  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
>     return *this;
>   }
>   
> @@ -3671,9 +3672,7 @@ fixed_wide_int <bitsize>::operator = (HO
>   inline fixed_wide_int <bitsize> &
>   fixed_wide_int <bitsize>::operator = (int op0)
>   {
> -  static_cast <wide_int_ro &> (*this) = op0;
> -  precision = bitsize;
> -
> +  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
>     return *this;
>   }
>   
> @@ -3681,8 +3680,7 @@ fixed_wide_int <bitsize>::operator = (in
>   inline fixed_wide_int <bitsize> &
>   fixed_wide_int <bitsize>::operator = (unsigned HOST_WIDE_INT op0)
>   {
> -  static_cast <wide_int_ro &> (*this) = op0;
> -  precision = bitsize;
> +  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
>   
>     /* This is logically top_bit_set_p.  */
>     if (neg_p (SIGNED))
> @@ -3695,8 +3693,7 @@ fixed_wide_int <bitsize>::operator = (un
>   inline fixed_wide_int <bitsize> &
>   fixed_wide_int <bitsize>::operator = (unsigned int op0)
>   {
> -  static_cast <wide_int_ro &> (*this) = op0;
> -  precision = bitsize;
> +  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
>   
>     if (sizeof (int) == sizeof (HOST_WIDE_INT)
>         && neg_p (SIGNED))

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-13 20:57 wide-int branch now up for public comment and review Kenneth Zadeck
  2013-08-22  8:25 ` Richard Sandiford
  2013-08-23 15:03 ` Richard Sandiford
@ 2013-08-24 18:42 ` Florian Weimer
  2013-08-24 19:48   ` Kenneth Zadeck
  2 siblings, 1 reply; 50+ messages in thread
From: Florian Weimer @ 2013-08-24 18:42 UTC (permalink / raw)
  To: Kenneth Zadeck; +Cc: gcc-patches

On 08/13/2013 10:57 PM, Kenneth Zadeck wrote:
> 1) The 4 files that hold the wide-int code itself.  You have seen a
>     lot of this code before except for the infinite precision
>     templates.  Also the classes are more C++ than C in their flavor.
>     In particular, the integration with trees is very tight in that an
>     int-cst or regular integers can be the operands of any wide-int
>     operation.

Are any of these conversions lossy?  Maybe some of these constructors 
should be made explicit?

-- 
Florian Weimer / Red Hat Product Security Team


* Re: wide-int branch now up for public comment and review
  2013-08-24 18:42 ` Florian Weimer
@ 2013-08-24 19:48   ` Kenneth Zadeck
  0 siblings, 0 replies; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-24 19:48 UTC (permalink / raw)
  To: Florian Weimer; +Cc: gcc-patches

On 08/24/2013 02:16 PM, Florian Weimer wrote:
> On 08/13/2013 10:57 PM, Kenneth Zadeck wrote:
>> 1) The 4 files that hold the wide-int code itself.  You have seen a
>>     lot of this code before except for the infinite precision
>>     templates.  Also the classes are more C++ than C in their flavor.
>>     In particular, the integration with trees is very tight in that an
>>     int-cst or regular integers can be the operands of any wide-int
>>     operation.
>
> Are any of these conversions lossy?  Maybe some of these constructors 
> should be made explicit?
>
It depends; there is nothing wrong with a lossy conversion as long as you
know what you are doing.


* Re: wide-int branch now up for public comment and review
  2013-08-23 15:03 ` Richard Sandiford
                     ` (3 preceding siblings ...)
  2013-08-24  3:34   ` Mike Stump
@ 2013-08-24 20:46   ` Kenneth Zadeck
  2013-08-25 10:52   ` Richard Sandiford
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-24 20:46 UTC (permalink / raw)
  To: rguenther, gcc-patches, Mike Stump, r.sandiford, rdsandiford

[-- Attachment #1: Type: text/plain, Size: 737 bytes --]

Fixed with the enclosed patch.

On 08/23/2013 11:02 AM, Richard Sandiford wrote:
>> /* Return true if THIS is negative based on the interpretation of SGN.
>>     For UNSIGNED, this is always false.  This is correct even if
>>     precision is 0.  */
>> inline bool
>> wide_int::neg_p (signop sgn) const
> It seems odd that you have to pass SIGNED here.  I assume you were doing
> it so that the caller is forced to confirm signedness in the cases where
> a tree type is involved, but:
>
> * neg_p kind of implies signedness anyway
> * you don't require this for minus_one_p, so the interface isn't consistent
> * at the rtl level signedness isn't a property of the "type" (mode),
>    so it seems strange to add an extra hoop there
>
>


[-- Attachment #2: xx.diff --]
[-- Type: text/x-patch, Size: 21874 bytes --]

Index: gcc/ada/gcc-interface/decl.c
===================================================================
--- gcc/ada/gcc-interface/decl.c	(revision 201967)
+++ gcc/ada/gcc-interface/decl.c	(working copy)
@@ -7479,7 +7479,7 @@ annotate_value (tree gnu_size)
 	  tree op1 = TREE_OPERAND (gnu_size, 1);
 	  wide_int signed_op1
 	    = wide_int::from_tree (op1).sforce_to_size (TYPE_PRECISION (sizetype));
-	  if (signed_op1.neg_p (SIGNED))
+	  if (signed_op1.neg_p ())
 	    {
 	      op1 = wide_int_to_tree (sizetype, -signed_op1);
 	      pre_op1 = annotate_value (build1 (NEGATE_EXPR, sizetype, op1));
Index: gcc/c-family/c-ada-spec.c
===================================================================
--- gcc/c-family/c-ada-spec.c	(revision 201967)
+++ gcc/c-family/c-ada-spec.c	(working copy)
@@ -2197,7 +2197,7 @@ dump_generic_ada_node (pretty_printer *b
 	{
 	  wide_int val = node;
 	  int i;
-	  if (val.neg_p (SIGNED))
+	  if (val.neg_p ())
 	    {
 	      pp_minus (buffer);
 	      val = -val;
Index: gcc/config/sparc/sparc.c
===================================================================
--- gcc/config/sparc/sparc.c	(revision 201967)
+++ gcc/config/sparc/sparc.c	(working copy)
@@ -10624,7 +10624,7 @@ sparc_fold_builtin (tree fndecl, int n_a
 	      overall_overflow |= overall_overflow;
 	      tmp = e0.add (tmp, SIGNED, &overflow);
 	      overall_overflow |= overall_overflow;
-	      if (tmp.neg_p (SIGNED))
+	      if (tmp.neg_p ())
 		{
 		  tmp = tmp.neg (&overflow);
 		  overall_overflow |= overall_overflow;
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	(revision 201967)
+++ gcc/expr.c	(working copy)
@@ -6718,7 +6718,7 @@ get_inner_reference (tree exp, HOST_WIDE
   if (offset)
     {
       /* Avoid returning a negative bitpos as this may wreak havoc later.  */
-      if (bit_offset.neg_p (SIGNED))
+      if (bit_offset.neg_p ())
         {
 	  addr_wide_int mask
 	    = addr_wide_int::mask (BITS_PER_UNIT == 8
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	(revision 201967)
+++ gcc/fold-const.c	(working copy)
@@ -183,13 +183,13 @@ div_if_zero_remainder (const_tree arg1,
 	 precision by 1 bit, iff the top bit is set.  */
       if (sgn == UNSIGNED)
 	{
-	  if (warg1.neg_p (SIGNED))
+	  if (warg1.neg_p ())
 	    warg1 = warg1.force_to_size (warg1.get_precision () + 1, sgn);
 	  sgn = SIGNED;
 	}
       else
 	{
-	  if (warg2.neg_p (SIGNED))
+	  if (warg2.neg_p ())
 	    warg2 = warg2.force_to_size (warg2.get_precision () + 1, sgn2);
 	}
     }
@@ -979,7 +979,7 @@ int_const_binop_1 (enum tree_code code,
 
     case RSHIFT_EXPR:
     case LSHIFT_EXPR:
-      if (arg2.neg_p (SIGNED))
+      if (arg2.neg_p ())
 	{
 	  arg2 = -arg2;
 	  if (code == RSHIFT_EXPR)
@@ -999,7 +999,7 @@ int_const_binop_1 (enum tree_code code,
       
     case RROTATE_EXPR:
     case LROTATE_EXPR:
-      if (arg2.neg_p (SIGNED))
+      if (arg2.neg_p ())
 	{
 	  arg2 = -arg2;
 	  if (code == RROTATE_EXPR)
@@ -7180,7 +7180,7 @@ fold_plusminus_mult_expr (location_t loc
       /* As we canonicalize A - 2 to A + -2 get rid of that sign for
 	 the purpose of this canonicalization.  */
       if (TYPE_SIGN (TREE_TYPE (arg1)) == SIGNED
-	  && wide_int (arg1).neg_p (SIGNED)
+	  && wide_int (arg1).neg_p ()
 	  && negate_expr_p (arg1)
 	  && code == PLUS_EXPR)
 	{
@@ -12323,7 +12323,7 @@ fold_binary_loc (location_t loc,
 	  && TYPE_SIGN (type) == SIGNED
 	  && TREE_CODE (arg1) == INTEGER_CST
 	  && !TREE_OVERFLOW (arg1)
-	  && wide_int (arg1).neg_p (SIGNED)
+	  && wide_int (arg1).neg_p ()
 	  && !TYPE_OVERFLOW_TRAPS (type)
 	  /* Avoid this transformation if C is INT_MIN, i.e. C == -C.  */
 	  && !sign_bit_p (arg1, arg1))
Index: gcc/gimple-ssa-strength-reduction.c
===================================================================
--- gcc/gimple-ssa-strength-reduction.c	(revision 201967)
+++ gcc/gimple-ssa-strength-reduction.c	(working copy)
@@ -1824,7 +1824,7 @@ cand_abs_increment (slsr_cand_t c)
 {
   max_wide_int increment = cand_increment (c);
 
-  if (!address_arithmetic_p && increment.neg_p (SIGNED))
+  if (!address_arithmetic_p && increment.neg_p ())
     increment = -increment;
 
   return increment;
@@ -1872,7 +1872,7 @@ replace_mult_candidate (slsr_cand_t c, t
 	 types, introduce a cast.  */
       if (!useless_type_conversion_p (target_type, TREE_TYPE (basis_name)))
 	basis_name = introduce_cast_before_cand (c, target_type, basis_name);
-      if (bump.neg_p (SIGNED)) 
+      if (bump.neg_p ()) 
 	{
 	  code = MINUS_EXPR;
 	  bump = -bump;
@@ -2005,7 +2005,7 @@ create_add_on_incoming_edge (slsr_cand_t
       tree bump_tree;
       enum tree_code code = PLUS_EXPR;
       max_wide_int bump = increment * c->stride;
-      if (bump.neg_p (SIGNED))
+      if (bump.neg_p ())
 	{
 	  code = MINUS_EXPR;
 	  bump = -bump;
@@ -2018,7 +2018,7 @@ create_add_on_incoming_edge (slsr_cand_t
   else
     {
       int i;
-      bool negate_incr = (!address_arithmetic_p && increment.neg_p (SIGNED));
+      bool negate_incr = (!address_arithmetic_p && increment.neg_p ());
       i = incr_vec_index (negate_incr ? -increment : increment);
       gcc_assert (i >= 0);
 
@@ -2312,7 +2312,7 @@ record_increment (slsr_cand_t c, const m
 
   /* Treat increments that differ only in sign as identical so as to
      share initializers, unless we are generating pointer arithmetic.  */
-  if (!address_arithmetic_p && increment.neg_p (SIGNED))
+  if (!address_arithmetic_p && increment.neg_p ())
     increment = -increment;
 
   for (i = 0; i < incr_vec_len; i++)
@@ -3044,7 +3044,7 @@ all_phi_incrs_profitable (slsr_cand_t c,
 	      slsr_cand_t arg_cand = base_cand_from_table (arg);
 	      max_wide_int increment = arg_cand->index - basis->index;
 
-	      if (!address_arithmetic_p && increment.neg_p (SIGNED))
+	      if (!address_arithmetic_p && increment.neg_p ())
 		increment = -increment;
 
 	      j = incr_vec_index (increment);
Index: gcc/predict.c
===================================================================
--- gcc/predict.c	(revision 201967)
+++ gcc/predict.c	(working copy)
@@ -1260,7 +1260,7 @@ predict_iv_comparison (struct loop *loop
       loop_count = tem.div_trunc (compare_step, SIGNED, &overflow);
       overall_overflow |= overflow;
 
-      if ((!compare_step.neg_p (SIGNED))
+      if ((!compare_step.neg_p ())
           ^ (compare_code == LT_EXPR || compare_code == LE_EXPR))
 	{
 	  /* (loop_bound - compare_bound) / compare_step */
@@ -1281,9 +1281,9 @@ predict_iv_comparison (struct loop *loop
 	++compare_count;
       if (loop_bound_code == LE_EXPR || loop_bound_code == GE_EXPR)
 	++loop_count;
-      if (compare_count.neg_p (SIGNED))
+      if (compare_count.neg_p ())
         compare_count = 0;
-      if (loop_count.neg_p (SIGNED))
+      if (loop_count.neg_p ())
         loop_count = 0;
       if (loop_count.zero_p ())
 	probability = 0;
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c	(revision 201967)
+++ gcc/simplify-rtx.c	(working copy)
@@ -3787,35 +3787,35 @@ simplify_const_binary_operation (enum rt
 	  break;
 
 	case LSHIFTRT:
-	  if (wide_int (std::make_pair (op1, mode)).neg_p (SIGNED))
+	  if (wide_int (std::make_pair (op1, mode)).neg_p ())
 	    return NULL_RTX;
 
 	  result = wop0.rshiftu (pop1, bitsize, TRUNC);
 	  break;
 	  
 	case ASHIFTRT:
-	  if (wide_int (std::make_pair (op1, mode)).neg_p (SIGNED))
+	  if (wide_int (std::make_pair (op1, mode)).neg_p ())
 	    return NULL_RTX;
 
 	  result = wop0.rshifts (pop1, bitsize, TRUNC);
 	  break;
 	  
 	case ASHIFT:
-	  if (wide_int (std::make_pair (op1, mode)).neg_p (SIGNED))
+	  if (wide_int (std::make_pair (op1, mode)).neg_p ())
 	    return NULL_RTX;
 
 	  result = wop0.lshift (pop1, bitsize, TRUNC);
 	  break;
 	  
 	case ROTATE:
-	  if (wide_int (std::make_pair (op1, mode)).neg_p (SIGNED))
+	  if (wide_int (std::make_pair (op1, mode)).neg_p ())
 	    return NULL_RTX;
 
 	  result = wop0.lrotate (pop1);
 	  break;
 	  
 	case ROTATERT:
-	  if (wide_int (std::make_pair (op1, mode)).neg_p (SIGNED))
+	  if (wide_int (std::make_pair (op1, mode)).neg_p ())
 	    return NULL_RTX;
 
 	  result = wop0.rrotate (pop1);
Index: gcc/tree-affine.c
===================================================================
--- gcc/tree-affine.c	(revision 201967)
+++ gcc/tree-affine.c	(working copy)
@@ -407,7 +407,7 @@ add_elt_to_tree (tree expr, tree type, t
 			 fold_build2 (MULT_EXPR, type1, elt,
 				      wide_int_to_tree (type1, scale)));
 
-  if (scale.neg_p (SIGNED))
+  if (scale.neg_p ())
     {
       code = MINUS_EXPR;
       scale = -scale;
@@ -450,7 +450,7 @@ aff_combination_to_tree (aff_tree *comb)
 
   /* Ensure that we get x - 1, not x + (-1) or x + 0xff..f if x is
      unsigned.  */
-  if (comb->offset.neg_p (SIGNED))
+  if (comb->offset.neg_p ())
     {
       off = -comb->offset;
       sgn = -1;
@@ -901,12 +901,12 @@ aff_comb_cannot_overlap_p (aff_tree *dif
     return false;
 
   d = diff->offset;
-  if (d.neg_p (SIGNED))
+  if (d.neg_p ())
     {
       /* The second object is before the first one, we succeed if the last
 	 element of the second object is before the start of the first one.  */
       bound = d + size2 - 1;
-      return bound.neg_p (SIGNED);
+      return bound.neg_p ();
     }
   else
     {
Index: gcc/tree-object-size.c
===================================================================
--- gcc/tree-object-size.c	(revision 201967)
+++ gcc/tree-object-size.c	(working copy)
@@ -192,7 +192,7 @@ addr_object_size (struct object_size_inf
       if (sz != unknown[object_size_type])
 	{
 	  addr_wide_int dsz = addr_wide_int (sz) - mem_ref_offset (pt_var);
-	  if (dsz.neg_p (SIGNED))
+	  if (dsz.neg_p ())
 	    sz = 0;
 	  else if (dsz.fits_uhwi_p ())
 	    sz = dsz.to_uhwi ();
Index: gcc/tree-ssa-alias.c
===================================================================
--- gcc/tree-ssa-alias.c	(revision 201967)
+++ gcc/tree-ssa-alias.c	(working copy)
@@ -883,7 +883,7 @@ indirect_ref_may_alias_decl_p (tree ref1
      so that the resulting offset adjustment is positive.  */
   moff = mem_ref_offset (base1);
   moff = moff.lshift (BITS_PER_UNIT == 8 ? 3 : exact_log2 (BITS_PER_UNIT));
-  if (moff.neg_p (SIGNED))
+  if (moff.neg_p ())
     offset2p += (-moff).to_short_addr ();
   else
     offset1p += moff.to_short_addr ();
@@ -959,7 +959,7 @@ indirect_ref_may_alias_decl_p (tree ref1
     {
       addr_wide_int moff = mem_ref_offset (dbase2);
       moff = moff.lshift (BITS_PER_UNIT == 8 ? 3 : exact_log2 (BITS_PER_UNIT));
-      if (moff.neg_p (SIGNED))
+      if (moff.neg_p ())
 	doffset1 -= (-moff).to_short_addr ();
       else
 	doffset2 -= moff.to_short_addr ();
@@ -1053,13 +1053,13 @@ indirect_refs_may_alias_p (tree ref1 ATT
 	 so that the resulting offset adjustment is positive.  */
       moff = mem_ref_offset (base1);
       moff = moff.lshift (BITS_PER_UNIT == 8 ? 3 : exact_log2 (BITS_PER_UNIT));
-      if (moff.neg_p (SIGNED))
+      if (moff.neg_p ())
 	offset2 += (-moff).to_short_addr ();
       else
 	offset1 += moff.to_shwi ();
       moff = mem_ref_offset (base2);
       moff = moff.lshift (BITS_PER_UNIT == 8 ? 3 : exact_log2 (BITS_PER_UNIT));
-      if (moff.neg_p (SIGNED))
+      if (moff.neg_p ())
 	offset1 += (-moff).to_short_addr ();
       else
 	offset2 += moff.to_short_addr ();
Index: gcc/tree-ssa-ccp.c
===================================================================
--- gcc/tree-ssa-ccp.c	(revision 201967)
+++ gcc/tree-ssa-ccp.c	(working copy)
@@ -1173,7 +1173,7 @@ bit_value_binop_1 (enum tree_code code,
 	    }
 	  else 
 	    {
-	      if (shift.neg_p (SIGNED))
+	      if (shift.neg_p ())
 		{
 		  shift = -shift;
 		  if (code == RROTATE_EXPR)
@@ -1210,7 +1210,7 @@ bit_value_binop_1 (enum tree_code code,
 	    }
 	  else 
 	    {
-	      if (shift.neg_p (SIGNED))
+	      if (shift.neg_p ())
 		{
 		  shift = -shift;
 		  if (code == RSHIFT_EXPR)
@@ -1327,7 +1327,7 @@ bit_value_binop_1 (enum tree_code code,
 	    o2mask = r2mask;
 	  }
 	/* If the most significant bits are not known we know nothing.  */
-	if (o1mask.neg_p (SIGNED) || o2mask.neg_p (SIGNED))
+	if (o1mask.neg_p () || o2mask.neg_p ())
 	  break;
 
 	/* For comparisons the signedness is in the comparison operands.  */
Index: gcc/tree-ssa-loop-niter.c
===================================================================
--- gcc/tree-ssa-loop-niter.c	(revision 201967)
+++ gcc/tree-ssa-loop-niter.c	(working copy)
@@ -2432,11 +2432,11 @@ derive_constant_upper_bound_ops (tree ty
 
       bnd = derive_constant_upper_bound (op0);
 
-      if (cst.neg_p (SIGNED))
+      if (cst.neg_p ())
 	{
 	  cst = -cst;
 	  /* Avoid CST == 0x80000...  */
-	  if (cst.neg_p (SIGNED))
+	  if (cst.neg_p ())
 	    return max;;
 
 	  /* OP0 + CST.  We need to check that
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c	(revision 201967)
+++ gcc/tree-vrp.c	(working copy)
@@ -5110,8 +5110,8 @@ register_edge_assert_for_2 (tree name, e
 	  cst2v = wide_int (cst2).zforce_to_size (nprec);
 	  if (TYPE_SIGN (TREE_TYPE (val)) == SIGNED)
 	    {
-	      valn = valv.sext (nprec).neg_p (SIGNED);
-	      cst2n = cst2v.sext (nprec).neg_p (SIGNED);
+	      valn = valv.sext (nprec).neg_p ();
+	      cst2n = cst2v.sext (nprec).neg_p ();
 	    }
 	  /* If CST2 doesn't have most significant bit set,
 	     but VAL is negative, we have comparison like
@@ -5154,7 +5154,7 @@ register_edge_assert_for_2 (tree name, e
 		  sgnbit = wide_int::zero (nprec);
 		  goto lt_expr;
 		}
-	      if (!cst2n && cst2v.sext (nprec).neg_p (SIGNED))
+	      if (!cst2n && cst2v.sext (nprec).neg_p ())
 		sgnbit = wide_int::set_bit_in_zero (nprec - 1, nprec);
 	      if (!sgnbit.zero_p ())
 		{
Index: gcc/tree.c
===================================================================
--- gcc/tree.c	(revision 201967)
+++ gcc/tree.c	(working copy)
@@ -1257,7 +1257,7 @@ wide_int_to_tree (tree type, const wide_
 
 	  if (cst.minus_one_p ())
 	    ix = 0;
-	  else if (!cst.neg_p (SIGNED))
+	  else if (!cst.neg_p ())
 	    {
 	      if (prec < HOST_BITS_PER_WIDE_INT)
 		{
@@ -1410,7 +1410,7 @@ cache_integer_cst (tree t)
 
 	  if (integer_minus_onep (t))
 	    ix = 0;
-	  else if (!wide_int (t).neg_p (SIGNED))
+	  else if (!wide_int (t).neg_p ())
 	    {
 	      if (prec < HOST_BITS_PER_WIDE_INT)
 		{
@@ -6842,7 +6842,7 @@ tree_int_cst_sgn (const_tree t)
     return 0;
   else if (TYPE_UNSIGNED (TREE_TYPE (t)))
     return 1;
-  else if (w.neg_p (SIGNED))
+  else if (w.neg_p ())
     return -1;
   else
     return 1;
@@ -8557,8 +8557,8 @@ retry:
       wd = type_low_bound;
       if (unsc != TYPE_UNSIGNED (TREE_TYPE (type_low_bound)))
 	{
-	  int c_neg = (!unsc && wc.neg_p (SIGNED));
-	  int t_neg = (unsc && wd.neg_p (SIGNED));
+	  int c_neg = (!unsc && wc.neg_p ());
+	  int t_neg = (unsc && wd.neg_p ());
 
 	  if (c_neg && !t_neg)
 	    return false;
@@ -8578,8 +8578,8 @@ retry:
       wd = type_high_bound;
       if (unsc != TYPE_UNSIGNED (TREE_TYPE (type_high_bound)))
 	{
-	  int c_neg = (!unsc && wc.neg_p (SIGNED));
-	  int t_neg = (unsc && wd.neg_p (SIGNED));
+	  int c_neg = (!unsc && wc.neg_p ());
+	  int t_neg = (unsc && wd.neg_p ());
 
 	  if (t_neg && !c_neg)
 	    return false;
@@ -8600,7 +8600,7 @@ retry:
   /* Perform some generic filtering which may allow making a decision
      even if the bounds are not constant.  First, negative integers
      never fit in unsigned types, */
-  if (TYPE_UNSIGNED (type) && !unsc && wc.neg_p (SIGNED))
+  if (TYPE_UNSIGNED (type) && !unsc && wc.neg_p ())
     return false;
 
   /* Second, narrower types always fit in wider ones.  */
@@ -8608,7 +8608,7 @@ retry:
     return true;
 
   /* Third, unsigned integers with top bit set never fit signed types.  */
-  if (! TYPE_UNSIGNED (type) && unsc && wc.neg_p (SIGNED))
+  if (! TYPE_UNSIGNED (type) && unsc && wc.neg_p ())
     return false;
 
   /* If we haven't been able to decide at this point, there nothing more we
Index: gcc/wide-int-print.cc
===================================================================
--- gcc/wide-int-print.cc	(revision 201967)
+++ gcc/wide-int-print.cc	(working copy)
@@ -61,7 +61,7 @@ print_decs (const wide_int &wi, char *bu
   if ((wi.get_precision () <= HOST_BITS_PER_WIDE_INT)
       || (wi.get_len () == 1))
     {
-      if (wi.neg_p (SIGNED))
+      if (wi.neg_p ())
       	sprintf (buf, "-" HOST_WIDE_INT_PRINT_UNSIGNED, -wi.to_shwi ());
       else
 	sprintf (buf, HOST_WIDE_INT_PRINT_DEC, wi.to_shwi ());
@@ -88,7 +88,7 @@ void
 print_decu (const wide_int &wi, char *buf)
 {
   if ((wi.get_precision () <= HOST_BITS_PER_WIDE_INT)
-      || (wi.get_len () == 1 && !wi.neg_p (SIGNED)))
+      || (wi.get_len () == 1 && !wi.neg_p ()))
     sprintf (buf, HOST_WIDE_INT_PRINT_UNSIGNED, wi.to_uhwi ());
   else
     print_hex (wi, buf);
@@ -114,7 +114,7 @@ print_hex (const wide_int &wi, char *buf
     buf += sprintf (buf, "0x0");
   else
     {
-      if (wi.neg_p (SIGNED))
+      if (wi.neg_p ())
 	{
 	  int j;
 	  /* If the number is negative, we may need to pad value with
Index: gcc/wide-int.cc
===================================================================
--- gcc/wide-int.cc	(revision 201967)
+++ gcc/wide-int.cc	(working copy)
@@ -1662,7 +1662,7 @@ wide_int_ro::clz () const
       else if (!CLZ_DEFINED_VALUE_AT_ZERO (mode, count))
 	count = precision;
     }
-  else if (neg_p (SIGNED))
+  else if (neg_p ())
     count = 0;
   else
     {
@@ -1712,7 +1712,7 @@ wide_int_ro::clrsb () const
 {
   gcc_checking_assert (precision);
 
-  if (neg_p (SIGNED))
+  if (neg_p ())
     return operator ~ ().clz () - 1;
 
   return clz () - 1;
Index: gcc/wide-int.h
===================================================================
--- gcc/wide-int.h	(revision 201967)
+++ gcc/wide-int.h	(working copy)
@@ -309,11 +309,6 @@ class GTY(()) wide_int_ro
   /* Internal representation.  */
 
 protected:
-  /* VAL is set to a size that is capable of computing a full
-     multiplication on the largest mode that is represented on the
-     target.  Currently there is a part of tree-vrp that requires 2x +
-     2 bits of precision where x is the precision of the variables
-     being optimized.  */
   HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
   unsigned short len;
   unsigned int precision;
@@ -372,7 +367,7 @@ public:
   bool minus_one_p () const;
   bool zero_p () const;
   bool one_p () const;
-  bool neg_p (signop) const;
+  bool neg_p (signop sgn = SIGNED) const;
   bool multiple_of_p (const wide_int_ro &, signop, wide_int_ro *) const;
 
   /* Comparisons, note that only equality is an operator.  The other
@@ -1117,11 +1112,6 @@ wide_int_ro::zero_p () const
 
   if (precision && precision < HOST_BITS_PER_WIDE_INT)
     x = sext_hwi (val[0], precision);
-  else if (len == 0)
-    {
-      gcc_assert (precision == 0);
-      return true;
-    }
   else
     x = val[0];
 
@@ -2495,13 +2485,13 @@ wide_int_ro::div_round (const T &c, sign
       if (sgn == SIGNED)
 	{
 	  wide_int_ro p_remainder
-	    = remainder.neg_p (SIGNED) ? -remainder : remainder;
-	  wide_int_ro p_divisor = divisor.neg_p (SIGNED) ? -divisor : divisor;
+	    = remainder.neg_p () ? -remainder : remainder;
+	  wide_int_ro p_divisor = divisor.neg_p () ? -divisor : divisor;
 	  p_divisor = p_divisor.rshiftu_large (1);
 
 	  if (p_divisor.gts_p (p_remainder))
 	    {
-	      if (quotient.neg_p (SIGNED))
+	      if (quotient.neg_p ())
 		return quotient - 1;
 	      else
 		return quotient + 1;
@@ -2726,14 +2716,14 @@ wide_int_ro::mod_round (const T &c, sign
       wide_int_ro divisor = wide_int_ro::from_array (s, cl, precision);
       if (sgn == SIGNED)
 	{
-	  wide_int_ro p_remainder = (remainder.neg_p (SIGNED)
+	  wide_int_ro p_remainder = (remainder.neg_p ()
 				     ? -remainder : remainder);
-	  wide_int_ro p_divisor = divisor.neg_p (SIGNED) ? -divisor : divisor;
+	  wide_int_ro p_divisor = divisor.neg_p () ? -divisor : divisor;
 	  p_divisor = p_divisor.rshiftu_large (1);
 
 	  if (p_divisor.gts_p (p_remainder))
 	    {
-	      if (quotient.neg_p (SIGNED))
+	      if (quotient.neg_p ())
 		return remainder + divisor;
 	      else
 		return remainder - divisor;
@@ -3542,7 +3532,7 @@ template <int bitsize>
 inline fixed_wide_int <bitsize>
 fixed_wide_int <bitsize>::from_wide_int (const wide_int &w)
 {
-  if (w.neg_p (SIGNED))
+  if (w.neg_p ())
     return w.sforce_to_size (bitsize);
   return w.zforce_to_size (bitsize);
 }
@@ -3583,7 +3573,7 @@ inline fixed_wide_int <bitsize>::fixed_w
   : wide_int_ro (op0)
 {
   precision = bitsize;
-  if (neg_p (SIGNED))
+  if (neg_p ())
     static_cast <wide_int_ro &> (*this) = zext (HOST_BITS_PER_WIDE_INT);
 }
 
@@ -3593,7 +3583,7 @@ inline fixed_wide_int <bitsize>::fixed_w
 {
   precision = bitsize;
   if (sizeof (int) == sizeof (HOST_WIDE_INT)
-      && neg_p (SIGNED))
+      && neg_p ())
     *this = zext (HOST_BITS_PER_WIDE_INT);
 }
 
@@ -3651,7 +3641,7 @@ fixed_wide_int <bitsize>::operator = (co
   precision = bitsize;
 
   /* This is logically top_bit_set_p.  */
-  if (TYPE_SIGN (type) == UNSIGNED && neg_p (SIGNED))
+  if (TYPE_SIGN (type) == UNSIGNED && neg_p ())
     static_cast <wide_int_ro &> (*this) = zext (TYPE_PRECISION (type));
 
   return *this;
@@ -3685,7 +3675,7 @@ fixed_wide_int <bitsize>::operator = (un
   precision = bitsize;
 
   /* This is logically top_bit_set_p.  */
-  if (neg_p (SIGNED))
+  if (neg_p ())
     static_cast <wide_int_ro &> (*this) = zext (HOST_BITS_PER_WIDE_INT);
 
   return *this;
@@ -3699,7 +3689,7 @@ fixed_wide_int <bitsize>::operator = (un
   precision = bitsize;
 
   if (sizeof (int) == sizeof (HOST_WIDE_INT)
-      && neg_p (SIGNED))
+      && neg_p ())
     *this = zext (HOST_BITS_PER_WIDE_INT);
 
   return *this;

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-24 10:44     ` Richard Sandiford
  2013-08-24 13:10       ` Richard Sandiford
@ 2013-08-24 21:22       ` Kenneth Zadeck
  1 sibling, 0 replies; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-24 21:22 UTC (permalink / raw)
  To: rguenther, gcc-patches, Mike Stump, r.sandiford, rdsandiford

[-- Attachment #1: Type: text/plain, Size: 2387 bytes --]


>>> The patch goes for (1) but (2) seems better to me, for a few reasons:
>>>
>>> * As above, constants coming from rtl are already in the right form,
>>>     so if you create a wide_int from an rtx and only query it, no explicit
>>>     extension is needed.
>>>
>>> * Things like logical operations and right shifts naturally preserve
>>>     the sign-extended form, so only a subset of write operations need
>>>     to take special measures.
>> Right now the internals of wide-int do not keep the bits above the
>> precision clean.   As you point out, this could be fixed by changing
>> lshift, add, sub, mul, div (and anything else I have forgotten about)
>> and removing the code that cleans this up on exit.   I actually do not
>> really care which way we go here, but if we do go on keeping the bits
>> clean above the precision inside wide-int, we are going to have to clean
>> the bits in the constructors from rtl, or fix some/a lot of bugs.
>>
>> But if you want to go with the stay-clean plan you have to start clean,
>> so at the rtl level this means copying.  And it is the not-copying trick
>> that pushed me in the direction we went.
>>
>> At the tree level, this is not an issue.   There are no constructors for
>> tree-csts that do not have a type and before we copy the rep from the
>> wide-int to the tree, we clean the top bits.   (BTW - If I had to guess
>> what the bug is with the missing messages on the mips port, it would be
>> because the front ends HAD a bad habit of creating constants that did
>> not fit into a type and then later checking to see if there were any
>> interesting bits above the precision in the int-cst.  This now does not
>> work because we clean out those top bits on construction but it would
>> not surprise me if we missed the fixed point constant path.)   So at the
>> tree level, we could easily go either way here, but there is a cost at
>> the rtl level with doing (2).
> TBH, I think we should do (2) and fix whatever bugs you saw with invalid
> rtx constants.
>
Luckily (or perhaps unluckily), if you try the test it fails pretty 
quickly - building libgcc on x86-64.   I have enclosed a patch to 
check this.    You can try it yourself and see if you really think this 
is the right path.

Good luck, I fear you may need it.

On the other hand, if it is just a few quick bugs, then I will agree 
that (2) is the right path.

kenny



[-- Attachment #2: xx.diff --]
[-- Type: text/x-patch, Size: 537 bytes --]

Index: gcc/wide-int.cc
===================================================================
--- gcc/wide-int.cc	(revision 201968)
+++ gcc/wide-int.cc	(working copy)
@@ -171,6 +171,10 @@ wide_int_ro::from_rtx (const rtx_mode_t
     case CONST_INT:
       result.val[0] = INTVAL (x);
       result.len = 1;
+
+      if (prec != HOST_BITS_PER_WIDE_INT)
+	gcc_assert (result.val[0] == sext_hwi (result.val[0], prec));
+
 #ifdef DEBUG_WIDE_INT
       debug_wh ("wide_int:: %s = from_rtx ("HOST_WIDE_INT_PRINT_HEX")\n",
 		result, INTVAL (x));


* Re: wide-int branch now up for public comment and review
  2013-08-24 18:16         ` Kenneth Zadeck
@ 2013-08-25  7:27           ` Richard Sandiford
  2013-08-25 13:21             ` Kenneth Zadeck
  0 siblings, 1 reply; 50+ messages in thread
From: Richard Sandiford @ 2013-08-25  7:27 UTC (permalink / raw)
  To: Kenneth Zadeck; +Cc: rguenther, gcc-patches, Mike Stump, r.sandiford

Kenneth Zadeck <zadeck@naturalbridge.com> writes:
> On 08/24/2013 08:05 AM, Richard Sandiford wrote:
>> Richard Sandiford <rdsandiford@googlemail.com> writes:
>>> I wonder how easy it would be to restrict this use of "zero precision"
>>> (i.e. flexible precision) to those where primitive types like "int" are
>>> used as template arguments to operators, and require a precision when
>>> constructing a wide_int.  I wouldn't have expected "real" precision 0
>>> (from zero-width bitfields or whatever) to need any special handling
>>> compared to precision 1 or 2.
>> I tried the last bit -- requiring a precision when constructing a
>> wide_int -- and it seemed surprisingly easy.  What do you think of
>> the attached?  Most of the forced knock-on changes seem like improvements,
>> but the java part is a bit ugly.  I also went with "wide_int (0, prec).cmp"
>> for now, although I'd like to add static cmp, cmps and cmpu alongside
>> leu_p, etc., if that's OK.  It would then be possible to write
>> "wide_int::cmp (0, ...)" and avoid the wide_int construction altogether.
>>
>> I wondered whether you might also want to get rid of the build_int_cst*
>> functions, but that still looks a long way off, so I hope using them in
>> these two places doesn't seem too bad.
>>
>> This is just an incremental step.  I've also only run it through a
>> subset of the testsuite so far, but full tests are in progress...
> So I am going to make two high-level comments here and then I am going 
> to leave the ultimate decision to the community:   (1) I am mildly in 
> favor of leaving the prec 0 stuff the way that it is, and (2) my guess 
> is that richi will also favor this.   My justification for (2) is that 
> he had a lot of comments about this before he went on leave, and this 
> is substantially the way that it was when he left.  Also, remember that 
> one of his biggest dislikes was having to specify precisions.

Hmm, but you seem to be talking about zero precision in general.
(I'm going to call it "flexible precision" to avoid confusion with
the zero-width bitfield stuff.)  Whereas this patch is specifically
about constructing flexible-precision _wide_int_ objects.  I think
wide_int objects should always have a known, fixed precision.

Note that fixed_wide_ints can still use primitive types in the
same way as before, since there the precision is inherent to the
fixed_wide_int.  The templated operators also work in the same
way as before.  Only the construction of wide_int proper is affected.

As it stands you have various wide_int operators that cannot handle two
flexible-precision inputs.  This means that innocent-looking code like:

  extern wide_int foo (wide_int);
  wide_int bar () { return foo (0); }

ICEs when combined with equally innocent-looking code like:

  wide_int foo (wide_int x) { return x + 1; }

So in practice you have to know when calling a function whether any
paths in that function will try applying an operator with a primitive type.
If so, you need to specify a precision when constructing the wide_int
argument.  If not, you can leave it out.  That seems really unclean.

The point of this template stuff is to avoid constructing wide_int objects
from primitive integers wherever possible.  And I think the fairly
small size of the patch shows that you've succeeded in doing that.
But I think we really should specify a precision in the handful of cases
where a wide_int does still need to be constructed directly from
a primitive type.

Thanks,
Richard


* Re: wide-int branch now up for public comment and review
  2013-08-23 15:03 ` Richard Sandiford
                     ` (4 preceding siblings ...)
  2013-08-24 20:46   ` Kenneth Zadeck
@ 2013-08-25 10:52   ` Richard Sandiford
  2013-08-25 15:14     ` Kenneth Zadeck
  2013-08-26  2:22     ` Mike Stump
  2013-08-25 18:12   ` Mike Stump
  2013-08-28  9:06   ` Richard Biener
  7 siblings, 2 replies; 50+ messages in thread
From: Richard Sandiford @ 2013-08-25 10:52 UTC (permalink / raw)
  To: Kenneth Zadeck; +Cc: rguenther, gcc-patches, Mike Stump, r.sandiford

Richard Sandiford <rdsandiford@googlemail.com> writes:
> The main thing that's changed since the early patches is that we now
> have a mixture of wide-int types.  This seems to have led to a lot of
> boiler-plate forwarding functions (or at least it felt like that while
> moving them all out the class).  And that in turn seems to be because
> you're trying to keep everything as member functions.  E.g. a lot of the
> forwarders are from a member function to a static function.
>
> Wouldn't it be better to have the actual classes be light-weight,
> with little more than accessors, and do the actual work with non-member
> template functions?  There seems to be 3 grades of wide-int:
>
>   (1) read-only, constant precision  (from int, etc.)
>   (2) read-write, constant precision  (fixed_wide_int)
>   (3) read-write, variable precision  (wide_int proper)
>
> but we should be able to hide that behind templates, with compiler errors
> if you try to write to (1), etc.
>
> To take one example, the reason we can't simply use things like
> std::min on wide ints is because signedness needs to be specified
> explicitly, but there's a good reason why the standard defined
> std::min (x, y) rather than x.min (y).  It seems like we ought
> to have smin and umin functions alongside std::min, rather than
> make them member functions.  We could put them in a separate namespace
> if necessary.

FWIW, here's a patch that shows the beginnings of what I mean.
The changes are:

(1) Using a new templated class, wide_int_accessors, to access the
    integer object.  For now this just contains a single function,
    to_shwi, but I expect more to follow...

(2) Adding a new namespace, wi, for the operators.  So far this
    just contains the previously-static comparison functions
    and whatever else was needed to avoid cross-dependencies
    between wi and wide_int_ro (except for the debug routines).

(3) Removing the comparison member functions and using the static
    ones everywhere.

The idea behind using a namespace rather than static functions
is that it makes it easier to separate the core, tree and rtx bits.
IMO wide-int.h shouldn't know about trees and rtxes, and all routines
related to them should be in tree.h and rtl.h instead.  But using
static functions means that you have to declare everything in one place.
Also, it feels odd for wide_int to be both an object and a home
of static functions that don't always operate on wide_ints, e.g. when
comparing a CONST_INT against 16.

The eventual aim is to use wide_int_accessors (via the wi interface
routines) to abstract away everything about the underlying object.
Then wide_int_ro should not need to have any fields.  wide_int can
have the fields that wide_int_ro has now, and fixed_wide_int will
just have an array and length.  The array can also be the right
size for the int template parameter, rather than always being
WIDE_INT_MAX_ELTS.

The aim is also to use wide_int_accessors to handle the flexible
precision case, so that it only kicks in when primitive types are
used as operator arguments.

I used a wide_int_accessors class rather than just using templated
wi functions because I think it's dangerous to have a default
implementation of things like to_shwi1 and to_shwi2.  The default
implementation we have now is only suitable for primitive types
(because of the sizeof), but could successfully match any type
that provides enough arithmetic to satisfy signedp and top_bit_set.
I admit that's only a theoretical problem though.

I realise I'm probably not being helpful here.  In fact I'm probably
being one cook too many and should really just leave this up to you
two and Richard.  But I realised while reading through wide-int.h
the other day that I have strong opinions about how this should
be done. :-(

Tested on x86_64-linux-gnu FWIW.  I expect this to remain local though.

Thanks,
Richard


Index: gcc/ada/gcc-interface/cuintp.c
===================================================================
--- gcc/ada/gcc-interface/cuintp.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/ada/gcc-interface/cuintp.c	2013-08-25 07:42:29.133616663 +0100
@@ -177,7 +177,7 @@ UI_From_gnu (tree Input)
      in a signed 64-bit integer.  */
   if (tree_fits_shwi_p (Input))
     return UI_From_Int (tree_to_shwi (Input));
-  else if (wide_int::lts_p (Input, 0) && TYPE_UNSIGNED (gnu_type))
+  else if (wi::lts_p (Input, 0) && TYPE_UNSIGNED (gnu_type))
     return No_Uint;
 #endif
 
Index: gcc/alias.c
===================================================================
--- gcc/alias.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/alias.c	2013-08-25 07:42:29.134616672 +0100
@@ -340,8 +340,8 @@ ao_ref_from_mem (ao_ref *ref, const_rtx
 	  || (DECL_P (ref->base)
 	      && (DECL_SIZE (ref->base) == NULL_TREE
 		  || TREE_CODE (DECL_SIZE (ref->base)) != INTEGER_CST
-		  || wide_int::ltu_p (DECL_SIZE (ref->base),
-				      ref->offset + ref->size)))))
+		  || wi::ltu_p (DECL_SIZE (ref->base),
+				ref->offset + ref->size)))))
     return false;
 
   return true;
Index: gcc/c-family/c-common.c
===================================================================
--- gcc/c-family/c-common.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/c-family/c-common.c	2013-08-25 07:42:29.138616711 +0100
@@ -7925,8 +7925,8 @@ handle_alloc_size_attribute (tree *node,
       wide_int p;
 
       if (TREE_CODE (position) != INTEGER_CST
-	  || (p = wide_int (position)).ltu_p (1)
-	  || p.gtu_p (arg_count) )
+	  || wi::ltu_p (p = wide_int (position), 1)
+	  || wi::gtu_p (p, arg_count))
 	{
 	  warning (OPT_Wattributes,
 	           "alloc_size parameter outside range");
Index: gcc/c-family/c-lex.c
===================================================================
--- gcc/c-family/c-lex.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/c-family/c-lex.c	2013-08-25 07:42:29.139616721 +0100
@@ -545,7 +545,7 @@ narrowest_unsigned_type (const wide_int
 	continue;
       upper = TYPE_MAX_VALUE (integer_types[itk]);
 
-      if (wide_int::geu_p (upper, val))
+      if (wi::geu_p (upper, val))
 	return (enum integer_type_kind) itk;
     }
 
@@ -573,7 +573,7 @@ narrowest_signed_type (const wide_int &v
 	continue;
       upper = TYPE_MAX_VALUE (integer_types[itk]);
 
-      if (wide_int::geu_p (upper, val))
+      if (wi::geu_p (upper, val))
 	return (enum integer_type_kind) itk;
     }
 
Index: gcc/c-family/c-pretty-print.c
===================================================================
--- gcc/c-family/c-pretty-print.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/c-family/c-pretty-print.c	2013-08-25 07:42:29.140616730 +0100
@@ -919,7 +919,7 @@ pp_c_integer_constant (c_pretty_printer
     {
       wide_int wi = i;
 
-      if (wi.lt_p (i, 0, TYPE_SIGN (TREE_TYPE (i))))
+      if (wi::lt_p (i, 0, TYPE_SIGN (TREE_TYPE (i))))
 	{
 	  pp_minus (pp);
 	  wi = -wi;
Index: gcc/cgraph.c
===================================================================
--- gcc/cgraph.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/cgraph.c	2013-08-25 07:42:29.141616740 +0100
@@ -624,7 +624,7 @@ cgraph_add_thunk (struct cgraph_node *de
   
   node = cgraph_create_node (alias);
   gcc_checking_assert (!virtual_offset
-		       || wide_int::eq_p (virtual_offset, virtual_value));
+		       || wi::eq_p (virtual_offset, virtual_value));
   node->thunk.fixed_offset = fixed_offset;
   node->thunk.this_adjusting = this_adjusting;
   node->thunk.virtual_value = virtual_value;
Index: gcc/config/bfin/bfin.c
===================================================================
--- gcc/config/bfin/bfin.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/config/bfin/bfin.c	2013-08-25 07:42:29.168617001 +0100
@@ -3285,7 +3285,7 @@ bfin_local_alignment (tree type, unsigne
      memcpy can use 32 bit loads/stores.  */
   if (TYPE_SIZE (type)
       && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
-      && (!wide_int::gtu_p (TYPE_SIZE (type), 8))
+      && !wi::gtu_p (TYPE_SIZE (type), 8)
       && align < 32)
     return 32;
   return align;
Index: gcc/config/i386/i386.c
===================================================================
--- gcc/config/i386/i386.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/config/i386/i386.c	2013-08-25 07:42:29.175617069 +0100
@@ -25695,7 +25695,7 @@ ix86_data_alignment (tree type, int alig
       && AGGREGATE_TYPE_P (type)
       && TYPE_SIZE (type)
       && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
-      && (wide_int::geu_p (TYPE_SIZE (type), max_align))
+      && wi::geu_p (TYPE_SIZE (type), max_align)
       && align < max_align)
     align = max_align;
 
@@ -25706,7 +25706,7 @@ ix86_data_alignment (tree type, int alig
       if ((opt ? AGGREGATE_TYPE_P (type) : TREE_CODE (type) == ARRAY_TYPE)
 	  && TYPE_SIZE (type)
 	  && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
-	  && (wide_int::geu_p (TYPE_SIZE (type), 128))
+	  && wi::geu_p (TYPE_SIZE (type), 128)
 	  && align < 128)
 	return 128;
     }
@@ -25821,7 +25821,7 @@ ix86_local_alignment (tree exp, enum mac
 		  != TYPE_MAIN_VARIANT (va_list_type_node)))
 	  && TYPE_SIZE (type)
 	  && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
-	  && (wide_int::geu_p (TYPE_SIZE (type), 16))
+	  && wi::geu_p (TYPE_SIZE (type), 16)
 	  && align < 128)
 	return 128;
     }
Index: gcc/config/rs6000/rs6000-c.c
===================================================================
--- gcc/config/rs6000/rs6000-c.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/config/rs6000/rs6000-c.c	2013-08-25 07:42:29.188617194 +0100
@@ -4196,7 +4196,7 @@ altivec_resolve_overloaded_builtin (loca
       mode = TYPE_MODE (arg1_type);
       if ((mode == V2DFmode || mode == V2DImode) && VECTOR_MEM_VSX_P (mode)
 	  && TREE_CODE (arg2) == INTEGER_CST
-	  && wide_int::ltu_p (arg2, 2))
+	  && wi::ltu_p (arg2, 2))
 	{
 	  tree call = NULL_TREE;
 
@@ -4281,7 +4281,7 @@ altivec_resolve_overloaded_builtin (loca
       mode = TYPE_MODE (arg1_type);
       if ((mode == V2DFmode || mode == V2DImode) && VECTOR_UNIT_VSX_P (mode)
 	  && tree_fits_uhwi_p (arg2)
-	  && wide_int::ltu_p (arg2, 2))
+	  && wi::ltu_p (arg2, 2))
 	{
 	  tree call = NULL_TREE;
 
Index: gcc/cp/init.c
===================================================================
--- gcc/cp/init.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/cp/init.c	2013-08-25 07:42:29.189617204 +0100
@@ -2381,7 +2381,7 @@ build_new_1 (vec<tree, va_gc> **placemen
       gcc_assert (TREE_CODE (size) == INTEGER_CST);
       cookie_size = targetm.cxx.get_cookie_size (elt_type);
       gcc_assert (TREE_CODE (cookie_size) == INTEGER_CST);
-      gcc_checking_assert (addr_wide_int (cookie_size).ltu_p(max_size));
+      gcc_checking_assert (wi::ltu_p (cookie_size, max_size));
       /* Unconditionally subtract the cookie size.  This decreases the
 	 maximum object size and is safe even if we choose not to use
 	 a cookie after all.  */
@@ -2389,7 +2389,7 @@ build_new_1 (vec<tree, va_gc> **placemen
       bool overflow;
       inner_size = addr_wide_int (size)
 		   .mul (inner_nelts_count, SIGNED, &overflow);
-      if (overflow || inner_size.gtu_p (max_size))
+      if (overflow || wi::gtu_p (inner_size, max_size))
 	{
 	  if (complain & tf_error)
 	    error ("size of array is too large");
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/dwarf2out.c	2013-08-25 07:42:29.192617233 +0100
@@ -14783,7 +14783,7 @@ field_byte_offset (const_tree decl)
       object_offset_in_bits
 	= round_up_to_align (object_offset_in_bits, type_align_in_bits);
 
-      if (object_offset_in_bits.gtu_p (bitpos_int))
+      if (wi::gtu_p (object_offset_in_bits, bitpos_int))
 	{
 	  object_offset_in_bits = deepest_bitpos - type_size_in_bits;
 
@@ -16218,7 +16218,7 @@ add_bound_info (dw_die_ref subrange_die,
 		  	     zext_hwi (tree_to_hwi (bound), prec));
 	  }
 	else if (prec == HOST_BITS_PER_WIDE_INT 
-		 || (cst_fits_uhwi_p (bound) && wide_int (bound).ges_p (0)))
+		 || (cst_fits_uhwi_p (bound) && wi::ges_p (bound, 0)))
 	  add_AT_unsigned (subrange_die, bound_attr, tree_to_hwi (bound));
 	else
 	  add_AT_wide (subrange_die, bound_attr, wide_int (bound));
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	2013-08-25 07:42:28.417609742 +0100
+++ gcc/fold-const.c	2013-08-25 07:42:29.194617252 +0100
@@ -510,7 +510,7 @@ negate_expr_p (tree t)
       if (TREE_CODE (TREE_OPERAND (t, 1)) == INTEGER_CST)
 	{
 	  tree op1 = TREE_OPERAND (t, 1);
-	  if (wide_int::eq_p (op1, TYPE_PRECISION (type) - 1))
+	  if (wi::eq_p (op1, TYPE_PRECISION (type) - 1))
 	    return true;
 	}
       break;
@@ -721,7 +721,7 @@ fold_negate_expr (location_t loc, tree t
       if (TREE_CODE (TREE_OPERAND (t, 1)) == INTEGER_CST)
 	{
 	  tree op1 = TREE_OPERAND (t, 1);
-	  if (wide_int::eq_p (op1, TYPE_PRECISION (type) - 1))
+	  if (wi::eq_p (op1, TYPE_PRECISION (type) - 1))
 	    {
 	      tree ntype = TYPE_UNSIGNED (type)
 			   ? signed_type_for (type)
@@ -5836,7 +5836,7 @@ extract_muldiv_1 (tree t, tree c, enum t
 	  && (tcode == RSHIFT_EXPR || TYPE_UNSIGNED (TREE_TYPE (op0)))
 	  /* const_binop may not detect overflow correctly,
 	     so check for it explicitly here.  */
-	  && wide_int::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1)
+	  && wi::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1)
 	  && 0 != (t1 = fold_convert (ctype,
 				      const_binop (LSHIFT_EXPR,
 						   size_one_node,
@@ -6602,7 +6602,8 @@ fold_single_bit_test (location_t loc, en
 	 not overflow, adjust BITNUM and INNER.  */
       if (TREE_CODE (inner) == RSHIFT_EXPR
 	  && TREE_CODE (TREE_OPERAND (inner, 1)) == INTEGER_CST
-	  && (wide_int (TREE_OPERAND (inner, 1) + bitnum).ltu_p (TYPE_PRECISION (type))))
+	  && wi::ltu_p (TREE_OPERAND (inner, 1) + bitnum,
+			TYPE_PRECISION (type)))
 	{
 	  bitnum += tree_to_hwi (TREE_OPERAND (inner, 1));
 	  inner = TREE_OPERAND (inner, 0);
@@ -12911,7 +12912,7 @@ fold_binary_loc (location_t loc,
 	  prec = TYPE_PRECISION (itype);
 
 	  /* Check for a valid shift count.  */
-	  if (wide_int::ltu_p (arg001, prec))
+	  if (wi::ltu_p (arg001, prec))
 	    {
 	      tree arg01 = TREE_OPERAND (arg0, 1);
 	      tree arg000 = TREE_OPERAND (TREE_OPERAND (arg0, 0), 0);
@@ -13036,7 +13037,7 @@ fold_binary_loc (location_t loc,
 	  tree arg00 = TREE_OPERAND (arg0, 0);
 	  tree arg01 = TREE_OPERAND (arg0, 1);
 	  tree itype = TREE_TYPE (arg00);
-	  if (wide_int::eq_p (arg01, TYPE_PRECISION (itype) - 1))
+	  if (wi::eq_p (arg01, TYPE_PRECISION (itype) - 1))
 	    {
 	      if (TYPE_UNSIGNED (itype))
 		{
@@ -14341,7 +14342,7 @@ fold_ternary_loc (location_t loc, enum t
 	      /* Make sure that the perm value is in an acceptable
 		 range.  */
 	      t = val;
-	      if (t.gtu_p (nelts_cnt))
+	      if (wi::gtu_p (t, nelts_cnt))
 		{
 		  need_mask_canon = true;
 		  sel[i] = t.to_uhwi () & (nelts_cnt - 1);
@@ -15163,7 +15164,7 @@ multiple_of_p (tree type, const_tree top
 	  op1 = TREE_OPERAND (top, 1);
 	  /* const_binop may not detect overflow correctly,
 	     so check for it explicitly here.  */
-	  if (wide_int::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1)
+	  if (wi::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1)
 	      && 0 != (t1 = fold_convert (type,
 					  const_binop (LSHIFT_EXPR,
 						       size_one_node,
Index: gcc/fortran/trans-intrinsic.c
===================================================================
--- gcc/fortran/trans-intrinsic.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/fortran/trans-intrinsic.c	2013-08-25 07:42:29.195617262 +0100
@@ -986,8 +986,9 @@ trans_this_image (gfc_se * se, gfc_expr
 	{
 	  wide_int wdim_arg = dim_arg;
 
-	  if (wdim_arg.ltu_p (1)
-	      || wdim_arg.gtu_p (GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc))))
+	  if (wi::ltu_p (wdim_arg, 1)
+	      || wi::gtu_p (wdim_arg,
+			    GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc))))
 	    gfc_error ("'dim' argument of %s intrinsic at %L is not a valid "
 		       "dimension index", expr->value.function.isym->name,
 		       &expr->where);
@@ -1346,8 +1347,8 @@ gfc_conv_intrinsic_bound (gfc_se * se, g
     {
       wide_int wbound = bound;
       if (((!as || as->type != AS_ASSUMED_RANK)
-	      && wbound.geu_p (GFC_TYPE_ARRAY_RANK (TREE_TYPE (desc))))
-	  || wbound.gtu_p (GFC_MAX_DIMENSIONS))
+	   && wi::geu_p (wbound, GFC_TYPE_ARRAY_RANK (TREE_TYPE (desc))))
+	  || wi::gtu_p (wbound, GFC_MAX_DIMENSIONS))
 	gfc_error ("'dim' argument of %s intrinsic at %L is not a valid "
 		   "dimension index", upper ? "UBOUND" : "LBOUND",
 		   &expr->where);
@@ -1543,7 +1544,8 @@ conv_intrinsic_cobound (gfc_se * se, gfc
       if (INTEGER_CST_P (bound))
 	{
 	  wide_int wbound = bound;
-	  if (wbound.ltu_p (1) || wbound.gtu_p (GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc))))
+	  if (wi::ltu_p (wbound, 1)
+	      || wi::gtu_p (wbound, GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc))))
 	    gfc_error ("'dim' argument of %s intrinsic at %L is not a valid "
 		       "dimension index", expr->value.function.isym->name,
 		       &expr->where);
Index: gcc/gimple-fold.c
===================================================================
--- gcc/gimple-fold.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/gimple-fold.c	2013-08-25 07:42:29.195617262 +0100
@@ -2799,7 +2799,7 @@ fold_array_ctor_reference (tree type, tr
      be larger than size of array element.  */
   if (!TYPE_SIZE_UNIT (type)
       || TREE_CODE (TYPE_SIZE_UNIT (type)) != INTEGER_CST
-      || elt_size.lts_p (addr_wide_int (TYPE_SIZE_UNIT (type))))
+      || wi::lts_p (elt_size, TYPE_SIZE_UNIT (type)))
     return NULL_TREE;
 
   /* Compute the array index we look for.  */
@@ -2902,7 +2902,7 @@ fold_nonarray_ctor_reference (tree type,
 	 [BITOFFSET, BITOFFSET_END)?  */
       if (access_end.cmps (bitoffset) > 0
 	  && (field_size == NULL_TREE
-	      || addr_wide_int (offset).lts_p (bitoffset_end)))
+	      || wi::lts_p (offset, bitoffset_end)))
 	{
 	  addr_wide_int inner_offset = addr_wide_int (offset) - bitoffset;
 	  /* We do have overlap.  Now see if field is large enough to
@@ -2910,7 +2910,7 @@ fold_nonarray_ctor_reference (tree type,
 	     fields.  */
 	  if (access_end.cmps (bitoffset_end) > 0)
 	    return NULL_TREE;
-	  if (addr_wide_int (offset).lts_p (bitoffset))
+	  if (wi::lts_p (offset, bitoffset))
 	    return NULL_TREE;
 	  return fold_ctor_reference (type, cval,
 				      inner_offset.to_uhwi (), size,
Index: gcc/gimple-ssa-strength-reduction.c
===================================================================
--- gcc/gimple-ssa-strength-reduction.c	2013-08-25 07:42:28.418609752 +0100
+++ gcc/gimple-ssa-strength-reduction.c	2013-08-25 07:42:29.196617272 +0100
@@ -2355,8 +2355,8 @@ record_increment (slsr_cand_t c, const m
       if (c->kind == CAND_ADD
 	  && !is_phi_adjust
 	  && c->index == increment
-	  && (increment.gts_p (1)
-	      || increment.lts_p (-1))
+	  && (wi::gts_p (increment, 1)
+	      || wi::lts_p (increment, -1))
 	  && (gimple_assign_rhs_code (c->cand_stmt) == PLUS_EXPR
 	      || gimple_assign_rhs_code (c->cand_stmt) == POINTER_PLUS_EXPR))
 	{
Index: gcc/loop-doloop.c
===================================================================
--- gcc/loop-doloop.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/loop-doloop.c	2013-08-25 07:42:29.196617272 +0100
@@ -461,9 +461,10 @@ doloop_modify (struct loop *loop, struct
       /* Determine if the iteration counter will be non-negative.
 	 Note that the maximum value loaded is iterations_max - 1.  */
       if (max_loop_iterations (loop, &iterations)
-	  && (iterations.leu_p (wide_int::set_bit_in_zero 
-				(GET_MODE_PRECISION (mode) - 1,
-				 GET_MODE_PRECISION (mode)))))
+	  && wi::leu_p (iterations,
+			wide_int::set_bit_in_zero
+			(GET_MODE_PRECISION (mode) - 1,
+			 GET_MODE_PRECISION (mode))))
 	nonneg = 1;
       break;
 
@@ -697,7 +698,7 @@ doloop_optimize (struct loop *loop)
 	 computed, we must be sure that the number of iterations fits into
 	 the new mode.  */
       && (word_mode_size >= GET_MODE_PRECISION (mode)
-	  || iter.leu_p (word_mode_max)))
+	  || wi::leu_p (iter, word_mode_max)))
     {
       if (word_mode_size > GET_MODE_PRECISION (mode))
 	{
Index: gcc/loop-unroll.c
===================================================================
--- gcc/loop-unroll.c	2013-08-25 07:42:28.420609771 +0100
+++ gcc/loop-unroll.c	2013-08-25 07:42:29.196617272 +0100
@@ -693,7 +693,7 @@ decide_unroll_constant_iterations (struc
   if (desc->niter < 2 * nunroll
       || ((estimated_loop_iterations (loop, &iterations)
 	   || max_loop_iterations (loop, &iterations))
-	  && iterations.ltu_p (2 * nunroll)))
+	  && wi::ltu_p (iterations, 2 * nunroll)))
     {
       if (dump_file)
 	fprintf (dump_file, ";; Not unrolling loop, doesn't roll\n");
@@ -816,7 +816,7 @@ unroll_loop_constant_iterations (struct
 	  desc->niter -= exit_mod;
 	  loop->nb_iterations_upper_bound -= exit_mod;
 	  if (loop->any_estimate
-	      && wide_int::leu_p (exit_mod, loop->nb_iterations_estimate))
+	      && wi::leu_p (exit_mod, loop->nb_iterations_estimate))
 	    loop->nb_iterations_estimate -= exit_mod;
 	  else
 	    loop->any_estimate = false;
@@ -859,7 +859,7 @@ unroll_loop_constant_iterations (struct
 	  desc->niter -= exit_mod + 1;
 	  loop->nb_iterations_upper_bound -= exit_mod + 1;
 	  if (loop->any_estimate
-	      && wide_int::leu_p (exit_mod + 1, loop->nb_iterations_estimate))
+	      && wi::leu_p (exit_mod + 1, loop->nb_iterations_estimate))
 	    loop->nb_iterations_estimate -= exit_mod + 1;
 	  else
 	    loop->any_estimate = false;
@@ -992,7 +992,7 @@ decide_unroll_runtime_iterations (struct
   /* Check whether the loop rolls.  */
   if ((estimated_loop_iterations (loop, &iterations)
        || max_loop_iterations (loop, &iterations))
-      && iterations.ltu_p (2 * nunroll))
+      && wi::ltu_p (iterations, 2 * nunroll))
     {
       if (dump_file)
 	fprintf (dump_file, ";; Not unrolling loop, doesn't roll\n");
@@ -1379,7 +1379,7 @@ decide_peel_simple (struct loop *loop, i
   if (estimated_loop_iterations (loop, &iterations))
     {
       /* TODO: unsigned/signed confusion */
-      if (wide_int::leu_p (npeel, iterations))
+      if (wi::leu_p (npeel, iterations))
 	{
 	  if (dump_file)
 	    {
@@ -1396,7 +1396,7 @@ decide_peel_simple (struct loop *loop, i
   /* If we have small enough bound on iterations, we can still peel (completely
      unroll).  */
   else if (max_loop_iterations (loop, &iterations)
-           && iterations.ltu_p (npeel))
+           && wi::ltu_p (iterations, npeel))
     npeel = iterations.to_shwi () + 1;
   else
     {
@@ -1547,7 +1547,7 @@ decide_unroll_stupid (struct loop *loop,
   /* Check whether the loop rolls.  */
   if ((estimated_loop_iterations (loop, &iterations)
        || max_loop_iterations (loop, &iterations))
-      && iterations.ltu_p (2 * nunroll))
+      && wi::ltu_p (iterations, 2 * nunroll))
     {
       if (dump_file)
 	fprintf (dump_file, ";; Not unrolling loop, doesn't roll\n");
Index: gcc/lto/lto.c
===================================================================
--- gcc/lto/lto.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/lto/lto.c	2013-08-25 07:42:29.206617368 +0100
@@ -1778,7 +1778,7 @@ #define compare_values(X) \
 
   if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
     {
-      if (!wide_int::eq_p (t1, t2))
+      if (!wi::eq_p (t1, t2))
 	return false;
     }
 
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h	2013-08-25 07:17:37.505554513 +0100
+++ gcc/rtl.h	2013-08-25 07:42:29.197617281 +0100
@@ -1402,10 +1402,10 @@ get_mode (const rtx_mode_t p)
 
 /* Specialization of to_shwi1 function in wide-int.h for rtl.  This
    cannot be in wide-int.h because of circular includes.  */
-template<>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, 
-		       unsigned int *l, unsigned int *p, const rtx_mode_t& rp)
+inline const HOST_WIDE_INT *
+wide_int_accessors <rtx_mode_t>::to_shwi (HOST_WIDE_INT *, unsigned int *l,
+					  unsigned int *p,
+					  const rtx_mode_t &rp)
 {
   const rtx rcst = get_rtx (rp);
   enum machine_mode mode = get_mode (rp);
@@ -1414,34 +1414,6 @@ wide_int_ro::to_shwi1 (HOST_WIDE_INT *s
 
   switch (GET_CODE (rcst))
     {
-    case CONST_INT:
-      *l = 1;
-      return &INTVAL (rcst);
-      
-    case CONST_WIDE_INT:
-      *l = CONST_WIDE_INT_NUNITS (rcst);
-      return &CONST_WIDE_INT_ELT (rcst, 0);
-      
-    case CONST_DOUBLE:
-      *l = 2;
-      return &CONST_DOUBLE_LOW (rcst);
-      
-    default:
-      gcc_unreachable ();
-    }
-}
-
-/* Specialization of to_shwi2 function in wide-int.h for rtl.  This
-   cannot be in wide-int.h because of circular includes.  */
-template<>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, 
-		       unsigned int *l, const rtx_mode_t& rp)
-{
-  const rtx rcst = get_rtx (rp);
-
-  switch (GET_CODE (rcst))
-    {
     case CONST_INT:
       *l = 1;
       return &INTVAL (rcst);
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/simplify-rtx.c	2013-08-25 07:42:29.198617291 +0100
@@ -4649,8 +4649,8 @@ simplify_const_relational_operation (enu
 	return comparison_result (code, CMP_EQ);
       else
 	{
-	  int cr = wo0.lts_p (ptrueop1) ? CMP_LT : CMP_GT;
-	  cr |= wo0.ltu_p (ptrueop1) ? CMP_LTU : CMP_GTU;
+	  int cr = wi::lts_p (wo0, ptrueop1) ? CMP_LT : CMP_GT;
+	  cr |= wi::ltu_p (wo0, ptrueop1) ? CMP_LTU : CMP_GTU;
 	  return comparison_result (code, cr);
 	}
     }
Index: gcc/tree-affine.c
===================================================================
--- gcc/tree-affine.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/tree-affine.c	2013-08-25 07:42:29.198617291 +0100
@@ -911,7 +911,7 @@ aff_comb_cannot_overlap_p (aff_tree *dif
   else
     {
       /* We succeed if the second object starts after the first one ends.  */
-      return size1.les_p (d);
+      return wi::les_p (size1, d);
     }
 }
 
Index: gcc/tree-chrec.c
===================================================================
--- gcc/tree-chrec.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/tree-chrec.c	2013-08-25 07:42:29.198617291 +0100
@@ -475,7 +475,7 @@ tree_fold_binomial (tree type, tree n, u
   num = n;
 
   /* Check that k <= n.  */
-  if (num.ltu_p (k))
+  if (wi::ltu_p (num, k))
     return NULL_TREE;
 
   /* Denominator = 2.  */
Index: gcc/tree-predcom.c
===================================================================
--- gcc/tree-predcom.c	2013-08-25 07:42:28.421609781 +0100
+++ gcc/tree-predcom.c	2013-08-25 07:42:29.199617301 +0100
@@ -921,9 +921,9 @@ add_ref_to_chain (chain_p chain, dref re
   dref root = get_chain_root (chain);
   max_wide_int dist;
 
-  gcc_assert (root->offset.les_p (ref->offset));
+  gcc_assert (wi::les_p (root->offset, ref->offset));
   dist = ref->offset - root->offset;
-  if (wide_int::leu_p (MAX_DISTANCE, dist))
+  if (wi::leu_p (MAX_DISTANCE, dist))
     {
       free (ref);
       return;
@@ -1194,7 +1194,7 @@ determine_roots_comp (struct loop *loop,
   FOR_EACH_VEC_ELT (comp->refs, i, a)
     {
       if (!chain || DR_IS_WRITE (a->ref)
-	  || max_wide_int (MAX_DISTANCE).leu_p (a->offset - last_ofs))
+	  || wi::leu_p (MAX_DISTANCE, a->offset - last_ofs))
 	{
 	  if (nontrivial_chain_p (chain))
 	    {
Index: gcc/tree-ssa-loop-ivcanon.c
===================================================================
--- gcc/tree-ssa-loop-ivcanon.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/tree-ssa-loop-ivcanon.c	2013-08-25 07:42:29.199617301 +0100
@@ -488,7 +488,7 @@ remove_exits_and_undefined_stmts (struct
 	 into unreachable (or trap when debugging experience is supposed
 	 to be good).  */
       if (!elt->is_exit
-	  && elt->bound.ltu_p (max_wide_int (npeeled)))
+	  && wi::ltu_p (elt->bound, npeeled))
 	{
 	  gimple_stmt_iterator gsi = gsi_for_stmt (elt->stmt);
 	  gimple stmt = gimple_build_call
@@ -505,7 +505,7 @@ remove_exits_and_undefined_stmts (struct
 	}
       /* If we know the exit will be taken after peeling, update.  */
       else if (elt->is_exit
-	       && elt->bound.leu_p (max_wide_int (npeeled)))
+	       && wi::leu_p (elt->bound, npeeled))
 	{
 	  basic_block bb = gimple_bb (elt->stmt);
 	  edge exit_edge = EDGE_SUCC (bb, 0);
@@ -545,7 +545,7 @@ remove_redundant_iv_tests (struct loop *
       /* Exit is pointless if it won't be taken before loop reaches
 	 upper bound.  */
       if (elt->is_exit && loop->any_upper_bound
-          && loop->nb_iterations_upper_bound.ltu_p (elt->bound))
+          && wi::ltu_p (loop->nb_iterations_upper_bound, elt->bound))
 	{
 	  basic_block bb = gimple_bb (elt->stmt);
 	  edge exit_edge = EDGE_SUCC (bb, 0);
@@ -562,7 +562,7 @@ remove_redundant_iv_tests (struct loop *
 	      || !integer_zerop (niter.may_be_zero)
 	      || !niter.niter
 	      || TREE_CODE (niter.niter) != INTEGER_CST
-	      || !loop->nb_iterations_upper_bound.ltu_p (niter.niter))
+	      || !wi::ltu_p (loop->nb_iterations_upper_bound, niter.niter))
 	    continue;
 	  
 	  if (dump_file && (dump_flags & TDF_DETAILS))
Index: gcc/tree-ssa-loop-ivopts.c
===================================================================
--- gcc/tree-ssa-loop-ivopts.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/tree-ssa-loop-ivopts.c	2013-08-25 07:42:29.200617310 +0100
@@ -4659,7 +4659,7 @@ may_eliminate_iv (struct ivopts_data *da
       if (stmt_after_increment (loop, cand, use->stmt))
         max_niter += 1;
       period_value = period;
-      if (max_niter.gtu_p (period_value))
+      if (wi::gtu_p (max_niter, period_value))
         {
           /* See if we can take advantage of inferred loop bound information.  */
           if (data->loop_single_exit_p)
@@ -4667,7 +4667,7 @@ may_eliminate_iv (struct ivopts_data *da
               if (!max_loop_iterations (loop, &max_niter))
                 return false;
               /* The loop bound is already adjusted by adding 1.  */
-              if (max_niter.gtu_p (period_value))
+              if (wi::gtu_p (max_niter, period_value))
                 return false;
             }
           else
Index: gcc/tree-ssa-loop-niter.c
===================================================================
--- gcc/tree-ssa-loop-niter.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/tree-ssa-loop-niter.c	2013-08-25 07:42:29.200617310 +0100
@@ -2410,7 +2410,7 @@ derive_constant_upper_bound_ops (tree ty
 
       /* If the bound does not fit in TYPE, max. value of TYPE could be
 	 attained.  */
-      if (max.ltu_p (bnd))
+      if (wi::ltu_p (max, bnd))
 	return max;
 
       return bnd;
@@ -2443,7 +2443,7 @@ derive_constant_upper_bound_ops (tree ty
 	     BND <= MAX (type) - CST.  */
 
 	  mmax -= cst;
-	  if (bnd.ltu_p (mmax))
+	  if (wi::ltu_p (bnd, mmax))
 	    return max;
 
 	  return bnd + cst;
@@ -2463,7 +2463,7 @@ derive_constant_upper_bound_ops (tree ty
 	  /* This should only happen if the type is unsigned; however, for
 	     buggy programs that use overflowing signed arithmetics even with
 	     -fno-wrapv, this condition may also be true for signed values.  */
-	  if (bnd.ltu_p (cst))
+	  if (wi::ltu_p (bnd, cst))
 	    return max;
 
 	  if (TYPE_UNSIGNED (type))
@@ -2519,14 +2519,14 @@ record_niter_bound (struct loop *loop, c
      current estimation is smaller.  */
   if (upper
       && (!loop->any_upper_bound
-	  || i_bound.ltu_p (loop->nb_iterations_upper_bound)))
+	  || wi::ltu_p (i_bound, loop->nb_iterations_upper_bound)))
     {
       loop->any_upper_bound = true;
       loop->nb_iterations_upper_bound = i_bound;
     }
   if (realistic
       && (!loop->any_estimate
-	  || i_bound.ltu_p (loop->nb_iterations_estimate)))
+	  || wi::ltu_p (i_bound, loop->nb_iterations_estimate)))
     {
       loop->any_estimate = true;
       loop->nb_iterations_estimate = i_bound;
@@ -2536,7 +2536,8 @@ record_niter_bound (struct loop *loop, c
      number of iterations, use the upper bound instead.  */
   if (loop->any_upper_bound
       && loop->any_estimate
-      && loop->nb_iterations_upper_bound.ltu_p (loop->nb_iterations_estimate))
+      && wi::ltu_p (loop->nb_iterations_upper_bound,
+		    loop->nb_iterations_estimate))
     loop->nb_iterations_estimate = loop->nb_iterations_upper_bound;
 }
 
@@ -2642,7 +2643,7 @@ record_estimate (struct loop *loop, tree
   i_bound += delta;
 
   /* If an overflow occurred, ignore the result.  */
-  if (i_bound.ltu_p (delta))
+  if (wi::ltu_p (i_bound, delta))
     return;
 
   if (upper && !is_exit)
@@ -3051,7 +3052,7 @@ bound_index (vec<max_wide_int> bounds, c
 
       if (index == bound)
 	return middle;
-      else if (index.ltu_p (bound))
+      else if (wi::ltu_p (index, bound))
 	begin = middle + 1;
       else
 	end = middle;
@@ -3093,7 +3094,7 @@ discover_iteration_bound_by_body_walk (s
 	}
 
       if (!loop->any_upper_bound
-	  || bound.ltu_p (loop->nb_iterations_upper_bound))
+	  || wi::ltu_p (bound, loop->nb_iterations_upper_bound))
         bounds.safe_push (bound);
     }
 
@@ -3124,7 +3125,7 @@ discover_iteration_bound_by_body_walk (s
 	}
 
       if (!loop->any_upper_bound
-	  || bound.ltu_p (loop->nb_iterations_upper_bound))
+	  || wi::ltu_p (bound, loop->nb_iterations_upper_bound))
 	{
 	  ptrdiff_t index = bound_index (bounds, bound);
 	  void **entry = pointer_map_contains (bb_bounds,
@@ -3259,7 +3260,7 @@ maybe_lower_iteration_bound (struct loop
   for (elt = loop->bounds; elt; elt = elt->next)
     {
       if (!elt->is_exit
-	  && elt->bound.ltu_p (loop->nb_iterations_upper_bound))
+	  && wi::ltu_p (elt->bound, loop->nb_iterations_upper_bound))
 	{
 	  if (!not_executed_last_iteration)
 	    not_executed_last_iteration = pointer_set_create ();
@@ -3556,7 +3557,7 @@ max_stmt_executions (struct loop *loop,
 
   *nit += 1;
 
-  return (*nit).gtu_p (nit_minus_one);
+  return wi::gtu_p (*nit, nit_minus_one);
 }
 
 /* Sets NIT to the estimated number of executions of the latch of the
@@ -3575,7 +3576,7 @@ estimated_stmt_executions (struct loop *
 
   *nit += 1;
 
-  return (*nit).gtu_p (nit_minus_one);
+  return wi::gtu_p (*nit, nit_minus_one);
 }
 
 /* Records estimates on numbers of iterations of loops.  */
Index: gcc/tree-ssa.c
===================================================================
--- gcc/tree-ssa.c	2013-08-25 07:17:37.505554513 +0100
+++ gcc/tree-ssa.c	2013-08-25 07:42:29.201617320 +0100
@@ -1829,8 +1829,8 @@ non_rewritable_mem_ref_base (tree ref)
 	  && useless_type_conversion_p (TREE_TYPE (base),
 					TREE_TYPE (TREE_TYPE (decl)))
 	  && mem_ref_offset (base).fits_uhwi_p ()
-	  && addr_wide_int (TYPE_SIZE_UNIT (TREE_TYPE (decl)))
-	     .gtu_p (mem_ref_offset (base))
+	  && wi::gtu_p (TYPE_SIZE_UNIT (TREE_TYPE (decl)),
+			mem_ref_offset (base))
 	  && multiple_of_p (sizetype, TREE_OPERAND (base, 1),
 			    TYPE_SIZE_UNIT (TREE_TYPE (base))))
 	return NULL_TREE;
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c	2013-08-25 07:42:28.470610254 +0100
+++ gcc/tree-vrp.c	2013-08-25 07:42:29.202617330 +0100
@@ -2652,13 +2652,13 @@ extract_range_from_binary_expr_1 (value_
 	  /* Canonicalize the intervals.  */
 	  if (sign == UNSIGNED)
 	    {
-	      if (size.ltu_p (min0 + max0))
+	      if (wi::ltu_p (size, min0 + max0))
 		{
 		  min0 -= size;
 		  max0 -= size;
 		}
 
-	      if (size.ltu_p (min1 + max1))
+	      if (wi::ltu_p (size, min1 + max1))
 		{
 		  min1 -= size;
 		  max1 -= size;
@@ -2673,7 +2673,7 @@ extract_range_from_binary_expr_1 (value_
 	  /* Sort the 4 products so that min is in prod0 and max is in
 	     prod3.  */
 	  /* min0min1 > max0max1 */
-	  if (prod0.gts_p (prod3))
+	  if (wi::gts_p (prod0, prod3))
 	    {
 	      wide_int tmp = prod3;
 	      prod3 = prod0;
@@ -2681,21 +2681,21 @@ extract_range_from_binary_expr_1 (value_
 	    }
 
 	  /* min0max1 > max0min1 */
-	  if (prod1.gts_p (prod2))
+	  if (wi::gts_p (prod1, prod2))
 	    {
 	      wide_int tmp = prod2;
 	      prod2 = prod1;
 	      prod1 = tmp;
 	    }
 
-	  if (prod0.gts_p (prod1))
+	  if (wi::gts_p (prod0, prod1))
 	    {
 	      wide_int tmp = prod1;
 	      prod1 = prod0;
 	      prod0 = tmp;
 	    }
 
-	  if (prod2.gts_p (prod3))
+	  if (wi::gts_p (prod2, prod3))
 	    {
 	      wide_int tmp = prod3;
 	      prod3 = prod2;
@@ -2704,7 +2704,7 @@ extract_range_from_binary_expr_1 (value_
 
 	  /* diff = max - min.  */
 	  prod2 = prod3 - prod0;
-	  if (prod2.geu_p (sizem1))
+	  if (wi::geu_p (prod2, sizem1))
 	    {
 	      /* the range covers all values.  */
 	      set_value_range_to_varying (vr);
@@ -2801,14 +2801,14 @@ extract_range_from_binary_expr_1 (value_
 		{
 		  low_bound = bound;
 		  high_bound = complement;
-		  if (wide_int::ltu_p (vr0.max, low_bound))
+		  if (wi::ltu_p (vr0.max, low_bound))
 		    {
 		      /* [5, 6] << [1, 2] == [10, 24].  */
 		      /* We're shifting out only zeroes, the value increases
 			 monotonically.  */
 		      in_bounds = true;
 		    }
-		  else if (high_bound.ltu_p (vr0.min))
+		  else if (wi::ltu_p (high_bound, vr0.min))
 		    {
 		      /* [0xffffff00, 0xffffffff] << [1, 2]
 		         == [0xfffffc00, 0xfffffffe].  */
@@ -2822,8 +2822,8 @@ extract_range_from_binary_expr_1 (value_
 		  /* [-1, 1] << [1, 2] == [-4, 4].  */
 		  low_bound = complement;
 		  high_bound = bound;
-		  if (wide_int::lts_p (vr0.max, high_bound)
-		      && low_bound.lts_p (wide_int (vr0.min)))
+		  if (wi::lts_p (vr0.max, high_bound)
+		      && wi::lts_p (low_bound, vr0.min))
 		    {
 		      /* For non-negative numbers, we're shifting out only
 			 zeroes, the value increases monotonically.
@@ -3844,7 +3844,7 @@ adjust_range_with_scev (value_range_t *v
 	  if (!overflow
 	      && wtmp.fits_to_tree_p (TREE_TYPE (init))
 	      && (sgn == UNSIGNED
-		  || (wtmp.gts_p (0) == wide_int::gts_p (step, 0))))
+		  || wi::gts_p (wtmp, 0) == wi::gts_p (step, 0)))
 	    {
 	      tem = wide_int_to_tree (TREE_TYPE (init), wtmp);
 	      extract_range_from_binary_expr (&maxvr, PLUS_EXPR,
@@ -4736,7 +4736,7 @@ masked_increment (wide_int val, wide_int
       res = bit - 1;
       res = (val + bit).and_not (res);
       res &= mask;
-      if (res.gtu_p (val))
+      if (wi::gtu_p (res, val))
 	return res ^ sgnbit;
     }
   return val ^ sgnbit;
@@ -6235,7 +6235,7 @@ search_for_addr_array (tree t, location_
 
       idx = mem_ref_offset (t);
       idx = idx.sdiv_trunc (addr_wide_int (el_sz));
-      if (idx.lts_p (0))
+      if (wi::lts_p (idx, 0))
 	{
 	  if (dump_file && (dump_flags & TDF_DETAILS))
 	    {
@@ -6247,9 +6247,7 @@ search_for_addr_array (tree t, location_
 		      "array subscript is below array bounds");
 	  TREE_NO_WARNING (t) = 1;
 	}
-      else if (idx.gts_p (addr_wide_int (up_bound)
-			  - low_bound
-			  + 1))
+      else if (wi::gts_p (idx, addr_wide_int (up_bound) - low_bound + 1))
 	{
 	  if (dump_file && (dump_flags & TDF_DETAILS))
 	    {
@@ -8681,7 +8679,7 @@ range_fits_type_p (value_range_t *vr, un
      a signed wide_int, while a negative value cannot be represented
      by an unsigned wide_int.  */
   if (src_sgn != dest_sgn
-      && (max_wide_int (vr->min).lts_p (0) || max_wide_int (vr->max).lts_p (0)))
+      && (wi::lts_p (vr->min, 0) || wi::lts_p (vr->max, 0)))
     return false;
 
   /* Then we can perform the conversion on both ends and compare
@@ -8985,7 +8983,7 @@ simplify_conversion_using_ranges (gimple
 
   /* If the first conversion is not injective, the second must not
      be widening.  */
-  if ((innermax - innermin).gtu_p (max_wide_int::mask (middle_prec, false))
+  if (wi::gtu_p (innermax - innermin, max_wide_int::mask (middle_prec, false))
       && middle_prec < final_prec)
     return false;
   /* We also want a medium value so that we can track the effect that
Index: gcc/tree.c
===================================================================
--- gcc/tree.c	2013-08-25 07:42:28.423609800 +0100
+++ gcc/tree.c	2013-08-25 07:42:29.203617339 +0100
@@ -1228,7 +1228,7 @@ wide_int_to_tree (tree type, const wide_
     case BOOLEAN_TYPE:
       /* Cache false or true.  */
       limit = 2;
-      if (cst.leu_p (1))
+      if (wi::leu_p (cst, 1))
 	ix = cst.to_uhwi ();
       break;
 
@@ -1247,7 +1247,7 @@ wide_int_to_tree (tree type, const wide_
 	      if (cst.to_uhwi () < (unsigned HOST_WIDE_INT) INTEGER_SHARE_LIMIT)
 		ix = cst.to_uhwi ();
 	    }
-	  else if (cst.ltu_p (INTEGER_SHARE_LIMIT))
+	  else if (wi::ltu_p (cst, INTEGER_SHARE_LIMIT))
 	    ix = cst.to_uhwi ();
 	}
       else
@@ -1264,7 +1264,7 @@ wide_int_to_tree (tree type, const wide_
 		  if (cst.to_shwi () < INTEGER_SHARE_LIMIT)
 		    ix = cst.to_shwi () + 1;
 		}
-	      else if (cst.lts_p (INTEGER_SHARE_LIMIT))
+	      else if (wi::lts_p (cst, INTEGER_SHARE_LIMIT))
 		ix = cst.to_shwi () + 1;
 	    }
 	}
@@ -1381,7 +1381,7 @@ cache_integer_cst (tree t)
     case BOOLEAN_TYPE:
       /* Cache false or true.  */
       limit = 2;
-      if (wide_int::ltu_p (t, 2))
+      if (wi::ltu_p (t, 2))
 	ix = TREE_INT_CST_ELT (t, 0);
       break;
 
@@ -1400,7 +1400,7 @@ cache_integer_cst (tree t)
 	      if (tree_to_uhwi (t) < (unsigned HOST_WIDE_INT) INTEGER_SHARE_LIMIT)
 		ix = tree_to_uhwi (t);
 	    }
-	  else if (wide_int::ltu_p (t, INTEGER_SHARE_LIMIT))
+	  else if (wi::ltu_p (t, INTEGER_SHARE_LIMIT))
 	    ix = tree_to_uhwi (t);
 	}
       else
@@ -1417,7 +1417,7 @@ cache_integer_cst (tree t)
 		  if (tree_to_shwi (t) < INTEGER_SHARE_LIMIT)
 		    ix = tree_to_shwi (t) + 1;
 		}
-	      else if (wide_int::ltu_p (t, INTEGER_SHARE_LIMIT))
+	      else if (wi::ltu_p (t, INTEGER_SHARE_LIMIT))
 		ix = tree_to_shwi (t) + 1;
 	    }
 	}
@@ -1451,7 +1451,7 @@ cache_integer_cst (tree t)
       /* If there is already an entry for the number verify it's the
          same.  */
       if (*slot)
-	gcc_assert (wide_int::eq_p (((tree)*slot), t));
+	gcc_assert (wi::eq_p ((tree) *slot, t));
       else
 	/* Otherwise insert this one into the hash table.  */
 	*slot = t;
@@ -6757,7 +6757,7 @@ tree_int_cst_equal (const_tree t1, const
   prec2 = TYPE_PRECISION (TREE_TYPE (t2));
 
   if (prec1 == prec2)
-    return wide_int::eq_p (t1, t2);
+    return wi::eq_p (t1, t2);
   else if (prec1 < prec2)
     return (wide_int (t1)).force_to_size (prec2, TYPE_SIGN (TREE_TYPE (t1))) == t2;
   else
@@ -8562,7 +8562,7 @@ int_fits_type_p (const_tree c, const_tre
 
 	  if (c_neg && !t_neg)
 	    return false;
-	  if ((c_neg || !t_neg) && wc.ltu_p (wd))
+	  if ((c_neg || !t_neg) && wi::ltu_p (wc, wd))
 	    return false;
 	}
       else if (wc.cmp (wd, TYPE_SIGN (TREE_TYPE (type_low_bound))) < 0)
@@ -8583,7 +8583,7 @@ int_fits_type_p (const_tree c, const_tre
 
 	  if (t_neg && !c_neg)
 	    return false;
-	  if ((t_neg || !c_neg) && wc.gtu_p (wd))
+	  if ((t_neg || !c_neg) && wi::gtu_p (wc, wd))
 	    return false;
 	}
       else if (wc.cmp (wd, TYPE_SIGN (TREE_TYPE (type_high_bound))) > 0)
Index: gcc/tree.h
===================================================================
--- gcc/tree.h	2013-08-25 07:17:37.505554513 +0100
+++ gcc/tree.h	2013-08-25 07:42:29.204617349 +0100
@@ -1411,10 +1411,10 @@ #define TREE_LANG_FLAG_6(NODE) \
 /* Define additional fields and accessors for nodes representing constants.  */
 
 #define INT_CST_LT(A, B)				\
-  (wide_int::lts_p (A, B))
+  (wi::lts_p (A, B))
 
 #define INT_CST_LT_UNSIGNED(A, B)			\
-  (wide_int::ltu_p (A, B))
+  (wi::ltu_p (A, B))
 
 #define TREE_INT_CST_NUNITS(NODE) (INTEGER_CST_CHECK (NODE)->base.u.length)
 #define TREE_INT_CST_ELT(NODE, I) TREE_INT_CST_ELT_CHECK (NODE, I)
Index: gcc/wide-int.cc
===================================================================
--- gcc/wide-int.cc	2013-08-25 07:42:28.471610264 +0100
+++ gcc/wide-int.cc	2013-08-25 07:42:29.205617359 +0100
@@ -598,9 +598,9 @@ top_bit_of (const HOST_WIDE_INT *a, unsi
 
 /* Return true if OP0 == OP1.  */
 bool
-wide_int_ro::eq_p_large (const HOST_WIDE_INT *op0, unsigned int op0len,
-			 unsigned int prec,
-			 const HOST_WIDE_INT *op1, unsigned int op1len)
+wi::eq_p_large (const HOST_WIDE_INT *op0, unsigned int op0len,
+		unsigned int prec,
+		const HOST_WIDE_INT *op1, unsigned int op1len)
 {
   int l0 = op0len - 1;
   unsigned int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
@@ -628,10 +628,10 @@ wide_int_ro::eq_p_large (const HOST_WIDE
 
 /* Return true if OP0 < OP1 using signed comparisons.  */
 bool
-wide_int_ro::lts_p_large (const HOST_WIDE_INT *op0, unsigned int op0len,
-			  unsigned int p0,
-			  const HOST_WIDE_INT *op1, unsigned int op1len,
-			  unsigned int p1)
+wi::lts_p_large (const HOST_WIDE_INT *op0, unsigned int op0len,
+		 unsigned int p0,
+		 const HOST_WIDE_INT *op1, unsigned int op1len,
+		 unsigned int p1)
 {
   HOST_WIDE_INT s0, s1;
   unsigned HOST_WIDE_INT u0, u1;
@@ -709,8 +709,8 @@ wide_int_ro::cmps_large (const HOST_WIDE
 
 /* Return true if OP0 < OP1 using unsigned comparisons.  */
 bool
-wide_int_ro::ltu_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, unsigned int p0,
-			  const HOST_WIDE_INT *op1, unsigned int op1len, unsigned int p1)
+wi::ltu_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, unsigned int p0,
+		 const HOST_WIDE_INT *op1, unsigned int op1len, unsigned int p1)
 {
   unsigned HOST_WIDE_INT x0;
   unsigned HOST_WIDE_INT x1;
Index: gcc/wide-int.h
===================================================================
--- gcc/wide-int.h	2013-08-25 07:42:28.424609809 +0100
+++ gcc/wide-int.h	2013-08-25 08:23:14.445592968 +0100
@@ -304,6 +304,95 @@ signedp <unsigned long> (unsigned long)
   return false;
 }
 
+/* This class, which has no default implementation, is expected to
+   provide the following routines:
+
+   HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p,
+			   ... x)
+     -- Decompose integer X into a length, precision and array of
+	HOST_WIDE_INTs.  Store the length in *L, the precision in *P
+	and return the array.  S is available as scratch space if needed.  */
+template <typename T> struct wide_int_accessors;
+
+namespace wi
+{
+  template <typename T1, typename T2>
+  bool eq_p (const T1 &, const T2 &);
+
+  template <typename T1, typename T2>
+  bool lt_p (const T1 &, const T2 &, signop);
+
+  template <typename T1, typename T2>
+  bool lts_p (const T1 &, const T2 &);
+
+  template <typename T1, typename T2>
+  bool ltu_p (const T1 &, const T2 &);
+
+  template <typename T1, typename T2>
+  bool le_p (const T1 &, const T2 &, signop);
+
+  template <typename T1, typename T2>
+  bool les_p (const T1 &, const T2 &);
+
+  template <typename T1, typename T2>
+  bool leu_p (const T1 &, const T2 &);
+
+  template <typename T1, typename T2>
+  bool gt_p (const T1 &, const T2 &, signop);
+
+  template <typename T1, typename T2>
+  bool gts_p (const T1 &, const T2 &);
+
+  template <typename T1, typename T2>
+  bool gtu_p (const T1 &, const T2 &);
+
+  template <typename T1, typename T2>
+  bool ge_p (const T1 &, const T2 &, signop);
+
+  template <typename T1, typename T2>
+  bool ges_p (const T1 &, const T2 &);
+
+  template <typename T1, typename T2>
+  bool geu_p (const T1 &, const T2 &);
+
+  /* Comparisons.  */
+  bool eq_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
+		   const HOST_WIDE_INT *, unsigned int);
+  bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
+		    const HOST_WIDE_INT *, unsigned int, unsigned int);
+  bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
+		    const HOST_WIDE_INT *, unsigned int, unsigned int);
+  void check_precision (unsigned int *, unsigned int *, bool, bool);
+
+  template <typename T>
+  const HOST_WIDE_INT *to_shwi1 (HOST_WIDE_INT *, unsigned int *,
+				 unsigned int *, const T &);
+
+  template <typename T>
+  const HOST_WIDE_INT *to_shwi2 (HOST_WIDE_INT *, unsigned int *, const T &);
+}
+
+/* Decompose integer X into a length, precision and array of HOST_WIDE_INTs.
+   Store the length in *L, the precision in *P and return the array.
+   S is available as a scratch array if needed, and can be used as
+   the return value.  */
+template <typename T>
+inline const HOST_WIDE_INT *
+wi::to_shwi1 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p,
+	      const T &x)
+{
+  return wide_int_accessors <T>::to_shwi (s, l, p, x);
+}
+
+/* Like to_shwi1, but without the precision.  */
+template <typename T>
+inline const HOST_WIDE_INT *
+wi::to_shwi2 (HOST_WIDE_INT *s, unsigned int *l, const T &x)
+{
+  unsigned int p;
+  return wide_int_accessors <T>::to_shwi (s, l, &p, x);
+}
+
 class wide_int;
 
 class GTY(()) wide_int_ro
@@ -323,7 +412,6 @@ class GTY(()) wide_int_ro
   unsigned short len;
   unsigned int precision;
 
-  const HOST_WIDE_INT *get_val () const;
   wide_int_ro &operator = (const wide_int_ro &);
 
 public:
@@ -374,6 +462,7 @@ class GTY(()) wide_int_ro
   /* Public accessors for the interior of a wide int.  */
   unsigned short get_len () const;
   unsigned int get_precision () const;
+  const HOST_WIDE_INT *get_val () const;
   HOST_WIDE_INT elt (unsigned int) const;
 
   /* Comparative functions.  */
@@ -389,85 +478,10 @@ class GTY(()) wide_int_ro
   template <typename T>
   bool operator == (const T &) const;
 
-  template <typename T1, typename T2>
-  static bool eq_p (const T1 &, const T2 &);
-
   template <typename T>
   bool operator != (const T &) const;
 
   template <typename T>
-  bool lt_p (const T &, signop) const;
-
-  template <typename T1, typename T2>
-  static bool lt_p (const T1 &, const T2 &, signop);
-
-  template <typename T>
-  bool lts_p (const T &) const;
-
-  template <typename T1, typename T2>
-  static bool lts_p (const T1 &, const T2 &);
-
-  template <typename T>
-  bool ltu_p (const T &) const;
-
-  template <typename T1, typename T2>
-  static bool ltu_p (const T1 &, const T2 &);
-
-  template <typename T>
-  bool le_p (const T &, signop) const;
-
-  template <typename T1, typename T2>
-  static bool le_p (const T1 &, const T2 &, signop);
-
-  template <typename T>
-  bool les_p (const T &) const;
-
-  template <typename T1, typename T2>
-  static bool les_p (const T1 &, const T2 &);
-
-  template <typename T>
-  bool leu_p (const T &) const;
-
-  template <typename T1, typename T2>
-  static bool leu_p (const T1 &, const T2 &);
-
-  template <typename T>
-  bool gt_p (const T &, signop) const;
-
-  template <typename T1, typename T2>
-  static bool gt_p (const T1 &, const T2 &, signop);
-
-  template <typename T>
-  bool gts_p (const T &) const;
-
-  template <typename T1, typename T2>
-  static bool gts_p (const T1 &, const T2 &);
-
-  template <typename T>
-  bool gtu_p (const T &) const;
-
-  template <typename T1, typename T2>
-  static bool gtu_p (const T1 &, const T2 &);
-
-  template <typename T>
-  bool ge_p (const T &, signop) const;
-
-  template <typename T1, typename T2>
-  static bool ge_p (const T1 &, const T2 &, signop);
-
-  template <typename T>
-  bool ges_p (const T &) const;
-
-  template <typename T1, typename T2>
-  static bool ges_p (const T1 &, const T2 &);
-
-  template <typename T>
-  bool geu_p (const T &) const;
-
-  template <typename T1, typename T2>
-  static bool geu_p (const T1 &, const T2 &);
-
-  template <typename T>
   int cmp (const T &, signop) const;
 
   template <typename T>
@@ -705,18 +719,10 @@ class GTY(()) wide_int_ro
   /* Internal versions that do the work if the values do not fit in a HWI.  */
 
   /* Comparisons */
-  static bool eq_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
-			  const HOST_WIDE_INT *, unsigned int);
-  static bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
-			   const HOST_WIDE_INT *, unsigned int, unsigned int);
   static int cmps_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
 			 const HOST_WIDE_INT *, unsigned int, unsigned int);
-  static bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
-			   const HOST_WIDE_INT *, unsigned int, unsigned int);
   static int cmpu_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
 			 const HOST_WIDE_INT *, unsigned int, unsigned int);
-  static void check_precision (unsigned int *, unsigned int *, bool, bool);
-
 
   /* Logicals.  */
   static wide_int_ro and_large (const HOST_WIDE_INT *, unsigned int,
@@ -769,17 +775,6 @@ class GTY(()) wide_int_ro
   int trunc_shift (const HOST_WIDE_INT *, unsigned int, unsigned int,
 		   ShiftOp) const;
 
-  template <typename T>
-  static bool top_bit_set (T);
-
-  template <typename T>
-  static const HOST_WIDE_INT *to_shwi1 (HOST_WIDE_INT *, unsigned int *,
-					unsigned int *, const T &);
-
-  template <typename T>
-  static const HOST_WIDE_INT *to_shwi2 (HOST_WIDE_INT *, unsigned int *,
-					const T &);
-
 #ifdef DEBUG_WIDE_INT
   /* Debugging routines.  */
   static void debug_wa (const char *, const wide_int_ro &,
@@ -1163,51 +1158,11 @@ wide_int_ro::neg_p (signop sgn) const
   return sign_mask () != 0;
 }
 
-/* Return true if THIS == C.  If both operands have nonzero precisions,
-   the precisions must be the same.  */
-template <typename T>
-inline bool
-wide_int_ro::operator == (const T &c) const
-{
-  bool result;
-  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
-  const HOST_WIDE_INT *s;
-  unsigned int cl;
-  unsigned int p1, p2;
-
-  p1 = precision;
-
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, false);
-
-  if (p1 == 0)
-    /* There are prec 0 types and we need to do this to check their
-       min and max values.  */
-    result = (len == cl) && (val[0] == s[0]);
-  else if (p1 < HOST_BITS_PER_WIDE_INT)
-    {
-      unsigned HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << p1) - 1;
-      result = (val[0] & mask) == (s[0] & mask);
-    }
-  else if (p1 == HOST_BITS_PER_WIDE_INT)
-    result = val[0] == s[0];
-  else
-    result = eq_p_large (val, len, p1, s, cl);
-
-  if (result)
-    gcc_assert (len == cl);
-
-#ifdef DEBUG_WIDE_INT
-  debug_vwa ("wide_int_ro:: %d = (%s == %s)\n", result, *this, s, cl, p2);
-#endif
-  return result;
-}
-
 /* Return true if C1 == C2.  If both parameters have nonzero precisions,
    then those precisions must be equal.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::eq_p (const T1 &c1, const T2 &c2)
+wi::eq_p (const T1 &c1, const T2 &c2)
 {
   bool result;
   HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS];
@@ -1237,51 +1192,28 @@ wide_int_ro::eq_p (const T1 &c1, const T
   return result;
 }
 
-/* Return true if THIS != C.  If both parameters have nonzero precisions,
-   then those precisions must be equal.  */
+/* Return true if THIS == C.  If both operands have nonzero precisions,
+   the precisions must be the same.  */
 template <typename T>
 inline bool
-wide_int_ro::operator != (const T &c) const
+wide_int_ro::operator == (const T &c) const
 {
-  return !(*this == c);
+  return wi::eq_p (*this, c);
 }
 
-/* Return true if THIS < C using signed comparisons.  */
+/* Return true if THIS != C.  If both parameters have nonzero precisions,
+   then those precisions must be equal.  */
 template <typename T>
 inline bool
-wide_int_ro::lts_p (const T &c) const
+wide_int_ro::operator != (const T &c) const
 {
-  bool result;
-  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
-  const HOST_WIDE_INT *s;
-  unsigned int cl;
-  unsigned int p1, p2;
-
-  p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
-
-  if (p1 <= HOST_BITS_PER_WIDE_INT
-      && p2 <= HOST_BITS_PER_WIDE_INT)
-    {
-      gcc_assert (cl != 0);
-      HOST_WIDE_INT x0 = sext_hwi (val[0], p1);
-      HOST_WIDE_INT x1 = sext_hwi (s[0], p2);
-      result = x0 < x1;
-    }
-  else
-    result = lts_p_large (val, len, p1, s, cl, p2);
-
-#ifdef DEBUG_WIDE_INT
-  debug_vwa ("wide_int_ro:: %d = (%s lts_p %s\n", result, *this, s, cl, p2);
-#endif
-  return result;
+  return !wi::eq_p (*this, c);
 }
 
 /* Return true if C1 < C2 using signed comparisons.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::lts_p (const T1 &c1, const T2 &c2)
+wi::lts_p (const T1 &c1, const T2 &c2)
 {
   bool result;
   HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS];
@@ -1305,38 +1237,8 @@ wide_int_ro::lts_p (const T1 &c1, const
     result = lts_p_large (s1, cl1, p1, s2, cl2, p2);
 
 #ifdef DEBUG_WIDE_INT
-  debug_vaa ("wide_int_ro:: %d = (%s lts_p %s\n", result, s1, cl1, p1, s2, cl2, p2);
-#endif
-  return result;
-}
-
-/* Return true if THIS < C using unsigned comparisons.  */
-template <typename T>
-inline bool
-wide_int_ro::ltu_p (const T &c) const
-{
-  bool result;
-  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
-  const HOST_WIDE_INT *s;
-  unsigned int cl;
-  unsigned int p1, p2;
-
-  p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
-
-  if (p1 <= HOST_BITS_PER_WIDE_INT
-      && p2 <= HOST_BITS_PER_WIDE_INT)
-    {
-      unsigned HOST_WIDE_INT x0 = zext_hwi (val[0], p1);
-      unsigned HOST_WIDE_INT x1 = zext_hwi (s[0], p2);
-      result = x0 < x1;
-    }
-  else
-    result = ltu_p_large (val, len, p1, s, cl, p2);
-
-#ifdef DEBUG_WIDE_INT
-  debug_vwa ("wide_int_ro:: %d = (%s ltu_p %s)\n", result, *this, s, cl, p2);
+  wide_int_ro::debug_vaa ("wide_int_ro:: %d = (%s lts_p %s)\n",
+			  result, s1, cl1, p1, s2, cl2, p2);
 #endif
   return result;
 }
@@ -1344,7 +1246,7 @@ wide_int_ro::ltu_p (const T &c) const
 /* Return true if C1 < C2 using unsigned comparisons.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::ltu_p (const T1 &c1, const T2 &c2)
+wi::ltu_p (const T1 &c1, const T2 &c2)
 {
   bool result;
   HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS];
@@ -1372,21 +1274,10 @@ wide_int_ro::ltu_p (const T1 &c1, const
   return result;
 }
 
-/* Return true if THIS < C.  Signedness is indicated by SGN.  */
-template <typename T>
-inline bool
-wide_int_ro::lt_p (const T &c, signop sgn) const
-{
-  if (sgn == SIGNED)
-    return lts_p (c);
-  else
-    return ltu_p (c);
-}
-
 /* Return true if C1 < C2.  Signedness is indicated by SGN.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::lt_p (const T1 &c1, const T2 &c2, signop sgn)
+wi::lt_p (const T1 &c1, const T2 &c2, signop sgn)
 {
   if (sgn == SIGNED)
     return lts_p (c1, c2);
@@ -1394,53 +1285,26 @@ wide_int_ro::lt_p (const T1 &c1, const T
     return ltu_p (c1, c2);
 }
 
-/* Return true if THIS <= C using signed comparisons.  */
-template <typename T>
-inline bool
-wide_int_ro::les_p (const T &c) const
-{
-  return !gts_p (c);
-}
-
 /* Return true if C1 <= C2 using signed comparisons.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::les_p (const T1 &c1, const T2 &c2)
+wi::les_p (const T1 &c1, const T2 &c2)
 {
   return !gts_p (c1, c2);
 }
 
-/* Return true if THIS <= C using unsigned comparisons.  */
-template <typename T>
-inline bool
-wide_int_ro::leu_p (const T &c) const
-{
-  return !gtu_p (c);
-}
-
 /* Return true if C1 <= C2 using unsigned comparisons.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::leu_p (const T1 &c1, const T2 &c2)
+wi::leu_p (const T1 &c1, const T2 &c2)
 {
   return !gtu_p (c1, c2);
 }
 
-/* Return true if THIS <= C.  Signedness is indicated by SGN.  */
-template <typename T>
-inline bool
-wide_int_ro::le_p (const T &c, signop sgn) const
-{
-  if (sgn == SIGNED)
-    return les_p (c);
-  else
-    return leu_p (c);
-}
-
 /* Return true if C1 <= C2.  Signedness is indicated by SGN.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::le_p (const T1 &c1, const T2 &c2, signop sgn)
+wi::le_p (const T1 &c1, const T2 &c2, signop sgn)
 {
   if (sgn == SIGNED)
     return les_p (c1, c2);
@@ -1448,53 +1312,26 @@ wide_int_ro::le_p (const T1 &c1, const T
     return leu_p (c1, c2);
 }
 
-/* Return true if THIS > C using signed comparisons.  */
-template <typename T>
-inline bool
-wide_int_ro::gts_p (const T &c) const
-{
-  return lts_p (c, *this);
-}
-
 /* Return true if C1 > C2 using signed comparisons.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::gts_p (const T1 &c1, const T2 &c2)
+wi::gts_p (const T1 &c1, const T2 &c2)
 {
   return lts_p (c2, c1);
 }
 
-/* Return true if THIS > C using unsigned comparisons.  */
-template <typename T>
-inline bool
-wide_int_ro::gtu_p (const T &c) const
-{
-  return ltu_p (c, *this);
-}
-
 /* Return true if C1 > C2 using unsigned comparisons.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::gtu_p (const T1 &c1, const T2 &c2)
+wi::gtu_p (const T1 &c1, const T2 &c2)
 {
   return ltu_p (c2, c1);
 }
 
-/* Return true if THIS > C.  Signedness is indicated by SGN.  */
-template <typename T>
-inline bool
-wide_int_ro::gt_p (const T &c, signop sgn) const
-{
-  if (sgn == SIGNED)
-    return gts_p (c);
-  else
-    return gtu_p (c);
-}
-
 /* Return true if C1 > C2.  Signedness is indicated by SGN.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::gt_p (const T1 &c1, const T2 &c2, signop sgn)
+wi::gt_p (const T1 &c1, const T2 &c2, signop sgn)
 {
   if (sgn == SIGNED)
     return gts_p (c1, c2);
@@ -1502,53 +1339,26 @@ wide_int_ro::gt_p (const T1 &c1, const T
     return gtu_p (c1, c2);
 }
 
-/* Return true if THIS >= C using signed comparisons.  */
-template <typename T>
-inline bool
-wide_int_ro::ges_p (const T &c) const
-{
-  return !lts_p (c);
-}
-
 /* Return true if C1 >= C2 using signed comparisons.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::ges_p (const T1 &c1, const T2 &c2)
+wi::ges_p (const T1 &c1, const T2 &c2)
 {
   return !lts_p (c1, c2);
 }
 
-/* Return true if THIS >= C using unsigned comparisons.  */
-template <typename T>
-inline bool
-wide_int_ro::geu_p (const T &c) const
-{
-  return !ltu_p (c);
-}
-
 /* Return true if C1 >= C2 using unsigned comparisons.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::geu_p (const T1 &c1, const T2 &c2)
+wi::geu_p (const T1 &c1, const T2 &c2)
 {
   return !ltu_p (c1, c2);
 }
 
-/* Return true if THIS >= C.  Signedness is indicated by SGN.  */
-template <typename T>
-inline bool
-wide_int_ro::ge_p (const T &c, signop sgn) const
-{
-  if (sgn == SIGNED)
-    return ges_p (c);
-  else
-    return geu_p (c);
-}
-
 /* Return true if C1 >= C2.  Signedness is indicated by SGN.  */
 template <typename T1, typename T2>
 inline bool
-wide_int_ro::ge_p (const T1 &c1, const T2 &c2, signop sgn)
+wi::ge_p (const T1 &c1, const T2 &c2, signop sgn)
 {
   if (sgn == SIGNED)
     return ges_p (c1, c2);
@@ -1568,7 +1378,7 @@ wide_int_ro::cmps (const T &c) const
   unsigned int cl;
   unsigned int prec;
 
-  s = to_shwi1 (ws, &cl, &prec, c);
+  s = wi::to_shwi1 (ws, &cl, &prec, c);
   if (prec == 0)
     prec = precision;
 
@@ -1606,7 +1416,7 @@ wide_int_ro::cmpu (const T &c) const
   unsigned int cl;
   unsigned int prec;
 
-  s = to_shwi1 (ws, &cl, &prec, c);
+  s = wi::to_shwi1 (ws, &cl, &prec, c);
   if (prec == 0)
     prec = precision;
 
@@ -1681,13 +1491,12 @@ wide_int_ro::min (const T &c, signop sgn
 
   p1 = precision;
 
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
-  if (sgn == SIGNED)
-    return lts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
-  else
-    return ltu_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
+  return (wi::lt_p (*this, c, sgn)
+	  ? *this
+	  : wide_int_ro::from_array (s, cl, p1, false));
 }
 
 /* Return the signed or unsigned min of THIS and OP1.  */
@@ -1695,9 +1504,9 @@ wide_int_ro::min (const T &c, signop sgn
 wide_int_ro::min (const wide_int_ro &op1, signop sgn) const
 {
   if (sgn == SIGNED)
-    return lts_p (op1) ? (*this) : op1;
+    return wi::lts_p (*this, op1) ? *this : op1;
   else
-    return ltu_p (op1) ? (*this) : op1;
+    return wi::ltu_p (*this, op1) ? *this : op1;
 }
 
 /* Return the signed or unsigned max of THIS and C.  */
@@ -1712,22 +1521,18 @@ wide_int_ro::max (const T &c, signop sgn
 
   p1 = precision;
 
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
-  if (sgn == SIGNED)
-    return gts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
-  else
-    return gtu_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
+  return (wi::gt_p (*this, c, sgn)
+	  ? *this
+	  : wide_int_ro::from_array (s, cl, p1, false));
 }
 
 /* Return the signed or unsigned max of THIS and OP1.  */
 inline wide_int_ro
 wide_int_ro::max (const wide_int_ro &op1, signop sgn) const
 {
-  if (sgn == SIGNED)
-    return gts_p (op1) ? (*this) : op1;
-  else
-    return gtu_p (op1) ? (*this) : op1;
+  return wi::gt_p (*this, op1, sgn) ? *this : op1;
 }
 
 /* Return the signed min of THIS and C.  */
@@ -1742,17 +1547,19 @@ wide_int_ro::smin (const T &c) const
 
   p1 = precision;
 
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
-  return lts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
+  return (wi::lts_p (*this, c)
+	  ? *this
+	  : wide_int_ro::from_array (s, cl, p1, false));
 }
 
 /* Return the signed min of THIS and OP1.  */
 inline wide_int_ro
 wide_int_ro::smin (const wide_int_ro &op1) const
 {
-  return lts_p (op1) ? (*this) : op1;
+  return wi::lts_p (*this, op1) ? *this : op1;
 }
 
 /* Return the signed max of THIS and C.  */
@@ -1767,17 +1574,19 @@ wide_int_ro::smax (const T &c) const
 
   p1 = precision;
 
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
-  return gts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
+  return (wi::gts_p (*this, c)
+	  ? *this
+	  : wide_int_ro::from_array (s, cl, p1, false));
 }
 
 /* Return the signed max of THIS and OP1.  */
 inline wide_int_ro
 wide_int_ro::smax (const wide_int_ro &op1) const
 {
-  return gts_p (op1) ? (*this) : op1;
+  return wi::gts_p (*this, op1) ? *this : op1;
 }
 
 /* Return the unsigned min of THIS and C.  */
@@ -1792,15 +1601,17 @@ wide_int_ro::umin (const T &c) const
 
   p1 = precision;
 
-  s = to_shwi1 (ws, &cl, &p2, c);
-  return ltu_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  return (wi::ltu_p (*this, c)
+	  ? *this
+	  : wide_int_ro::from_array (s, cl, p1, false));
 }
 
 /* Return the unsigned min of THIS and OP1.  */
 inline wide_int_ro
 wide_int_ro::umin (const wide_int_ro &op1) const
 {
-  return ltu_p (op1) ? (*this) : op1;
+  return wi::ltu_p (*this, op1) ? *this : op1;
 }
 
 /* Return the unsigned max of THIS and C.  */
@@ -1815,17 +1626,19 @@ wide_int_ro::umax (const T &c) const
 
   p1 = precision;
 
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
-  return gtu_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
+  return (wi::gtu_p (*this, c)
+	  ? *this
+	  : wide_int_ro::from_array (s, cl, p1, false));
 }
 
 /* Return the unsigned max of THIS and OP1.  */
 inline wide_int_ro
 wide_int_ro::umax (const wide_int_ro &op1) const
 {
-  return gtu_p (op1) ? (*this) : op1;
+  return wi::gtu_p (*this, op1) ? *this : op1;
 }
 
 /* Return THIS extended to PREC.  The signedness of the extension is
@@ -1891,8 +1704,8 @@ wide_int_ro::operator & (const T &c) con
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   if (p1 <= HOST_BITS_PER_WIDE_INT)
     {
@@ -1921,8 +1734,8 @@ wide_int_ro::and_not (const T &c) const
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   if (p1 <= HOST_BITS_PER_WIDE_INT)
     {
@@ -1973,8 +1786,8 @@ wide_int_ro::operator | (const T &c) con
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   if (p1 <= HOST_BITS_PER_WIDE_INT)
     {
@@ -2003,8 +1816,8 @@ wide_int_ro::or_not (const T &c) const
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   if (p1 <= HOST_BITS_PER_WIDE_INT)
     {
@@ -2033,8 +1846,8 @@ wide_int_ro::operator ^ (const T &c) con
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   if (p1 <= HOST_BITS_PER_WIDE_INT)
     {
@@ -2063,8 +1876,8 @@ wide_int_ro::operator + (const T &c) con
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   if (p1 <= HOST_BITS_PER_WIDE_INT)
     {
@@ -2096,8 +1909,8 @@ wide_int_ro::add (const T &c, signop sgn
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   if (p1 <= HOST_BITS_PER_WIDE_INT)
     {
@@ -2141,8 +1954,8 @@ wide_int_ro::operator * (const T &c) con
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   if (p1 <= HOST_BITS_PER_WIDE_INT)
     {
@@ -2176,8 +1989,8 @@ wide_int_ro::mul (const T &c, signop sgn
   if (overflow)
     *overflow = false;
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   return mul_internal (false, false,
 		       val, len, p1,
@@ -2217,8 +2030,8 @@ wide_int_ro::mul_full (const T &c, signo
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   return mul_internal (false, true,
 		       val, len, p1,
@@ -2257,8 +2070,8 @@ wide_int_ro::mul_high (const T &c, signo
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   return mul_internal (true, false,
 		       val, len, p1,
@@ -2298,8 +2111,8 @@ wide_int_ro::operator - (const T &c) con
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   if (p1 <= HOST_BITS_PER_WIDE_INT)
     {
@@ -2331,8 +2144,8 @@ wide_int_ro::sub (const T &c, signop sgn
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, true, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, true, true);
 
   if (p1 <= HOST_BITS_PER_WIDE_INT)
     {
@@ -2379,8 +2192,8 @@ wide_int_ro::div_trunc (const T &c, sign
   if (overflow)
     *overflow = false;
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, false, true);
 
   return divmod_internal (true, val, len, p1, s, cl, p2, sgn,
 			  &remainder, false, overflow);
@@ -2420,8 +2233,8 @@ wide_int_ro::div_floor (const T &c, sign
   if (overflow)
     *overflow = false;
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, false, true);
 
   return divmod_internal (true, val, len, p1, s, cl, p2, sgn,
 			  &remainder, false, overflow);
@@ -2461,8 +2274,8 @@ wide_int_ro::div_ceil (const T &c, signo
   if (overflow)
     *overflow = false;
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, false, true);
 
   quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
 			      &remainder, true, overflow);
@@ -2490,8 +2303,8 @@ wide_int_ro::div_round (const T &c, sign
   if (overflow)
     *overflow = false;
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, false, true);
 
   quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
 			      &remainder, true, overflow);
@@ -2505,7 +2318,7 @@ wide_int_ro::div_round (const T &c, sign
 	  wide_int_ro p_divisor = divisor.neg_p (SIGNED) ? -divisor : divisor;
 	  p_divisor = p_divisor.rshiftu_large (1);
 
-	  if (p_divisor.gts_p (p_remainder))
+	  if (wi::gts_p (p_divisor, p_remainder))
 	    {
 	      if (quotient.neg_p (SIGNED))
 		return quotient - 1;
@@ -2516,7 +2329,7 @@ wide_int_ro::div_round (const T &c, sign
       else
 	{
 	  wide_int_ro p_divisor = divisor.rshiftu_large (1);
-	  if (p_divisor.gtu_p (remainder))
+	  if (wi::gtu_p (p_divisor, remainder))
 	    return quotient + 1;
 	}
     }
@@ -2537,8 +2350,8 @@ wide_int_ro::divmod_trunc (const T &c, w
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, false, true);
 
   return divmod_internal (true, val, len, p1, s, cl, p2, sgn,
 			  remainder, true, 0);
@@ -2575,8 +2388,8 @@ wide_int_ro::divmod_floor (const T &c, w
   unsigned int p1, p2;
 
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, false, true);
 
   quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
 			      remainder, true, 0);
@@ -2613,8 +2426,8 @@ wide_int_ro::mod_trunc (const T &c, sign
   if (overflow)
     *overflow = false;
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, false, true);
 
   divmod_internal (false, val, len, p1, s, cl, p2, sgn,
 		   &remainder, true, overflow);
@@ -2655,8 +2468,8 @@ wide_int_ro::mod_floor (const T &c, sign
   if (overflow)
     *overflow = false;
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, false, true);
 
   quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
 			      &remainder, true, overflow);
@@ -2692,8 +2505,8 @@ wide_int_ro::mod_ceil (const T &c, signo
   if (overflow)
     *overflow = false;
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, false, true);
 
   quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
 			      &remainder, true, overflow);
@@ -2721,8 +2534,8 @@ wide_int_ro::mod_round (const T &c, sign
   if (overflow)
     *overflow = false;
   p1 = precision;
-  s = to_shwi1 (ws, &cl, &p2, c);
-  check_precision (&p1, &p2, false, true);
+  s = wi::to_shwi1 (ws, &cl, &p2, c);
+  wi::check_precision (&p1, &p2, false, true);
 
   quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
 			      &remainder, true, overflow);
@@ -2737,7 +2550,7 @@ wide_int_ro::mod_round (const T &c, sign
 	  wide_int_ro p_divisor = divisor.neg_p (SIGNED) ? -divisor : divisor;
 	  p_divisor = p_divisor.rshiftu_large (1);
 
-	  if (p_divisor.gts_p (p_remainder))
+	  if (wi::gts_p (p_divisor, p_remainder))
 	    {
 	      if (quotient.neg_p (SIGNED))
 		return remainder + divisor;
@@ -2748,7 +2561,7 @@ wide_int_ro::mod_round (const T &c, sign
       else
 	{
 	  wide_int_ro p_divisor = divisor.rshiftu_large (1);
-	  if (p_divisor.gtu_p (remainder))
+	  if (wi::gtu_p (p_divisor, remainder))
 	    return remainder - divisor;
 	}
     }
@@ -2768,7 +2581,7 @@ wide_int_ro::lshift (const T &c, unsigne
   unsigned int cl;
   HOST_WIDE_INT shift;
 
-  s = to_shwi2 (ws, &cl, c);
+  s = wi::to_shwi2 (ws, &cl, c);
 
   gcc_checking_assert (precision);
 
@@ -2806,7 +2619,7 @@ wide_int_ro::lshift_widen (const T &c, u
   unsigned int cl;
   HOST_WIDE_INT shift;
 
-  s = to_shwi2 (ws, &cl, c);
+  s = wi::to_shwi2 (ws, &cl, c);
 
   gcc_checking_assert (precision);
   gcc_checking_assert (res_prec);
@@ -2843,7 +2656,7 @@ wide_int_ro::lrotate (const T &c, unsign
   const HOST_WIDE_INT *s;
   unsigned int cl;
 
-  s = to_shwi2 (ws, &cl, c);
+  s = wi::to_shwi2 (ws, &cl, c);
 
   return lrotate ((unsigned HOST_WIDE_INT) s[0], prec);
 }
@@ -2901,7 +2714,7 @@ wide_int_ro::rshiftu (const T &c, unsign
   unsigned int cl;
   HOST_WIDE_INT shift;
 
-  s = to_shwi2 (ws, &cl, c);
+  s = wi::to_shwi2 (ws, &cl, c);
   gcc_checking_assert (precision);
   shift = trunc_shift (s, cl, bitsize, trunc_op);
 
@@ -2944,7 +2757,7 @@ wide_int_ro::rshifts (const T &c, unsign
   unsigned int cl;
   HOST_WIDE_INT shift;
 
-  s = to_shwi2 (ws, &cl, c);
+  s = wi::to_shwi2 (ws, &cl, c);
   gcc_checking_assert (precision);
   shift = trunc_shift (s, cl, bitsize, trunc_op);
 
@@ -2989,7 +2802,7 @@ wide_int_ro::rrotate (const T &c, unsign
   const HOST_WIDE_INT *s;
   unsigned int cl;
 
-  s = to_shwi2 (ws, &cl, c);
+  s = wi::to_shwi2 (ws, &cl, c);
   return rrotate ((unsigned HOST_WIDE_INT) s[0], prec);
 }
 
@@ -3080,25 +2893,26 @@ wide_int_ro::trunc_shift (const HOST_WID
     return cnt[0] & (bitsize - 1);
 }
 
+/* Implementation of wide_int_accessors for primitive integer types
+   like "int".  */
+template <typename T>
+struct primitive_wide_int_accessors
+{
+  static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *,
+				       unsigned int *, const T &);
+};
+
 template <typename T>
 inline bool
-wide_int_ro::top_bit_set (T x)
+top_bit_set (T x)
 {
-  return (x >> (sizeof (x)*8 - 1)) != 0;
+  return (x >> (sizeof (x) * 8 - 1)) != 0;
 }
 
-/* The following template and its overrides are used for the first
-   and second operand of static binary comparison functions.
-   These have been implemented so that pointer copying is done
-   from the rep of the operands rather than actual data copying.
-   This is safe even for garbage collected objects since the value
-   is immediately throw away.
-
-   This template matches all integers.  */
 template <typename T>
 inline const HOST_WIDE_INT *
-wide_int_ro::to_shwi1 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p,
-		       const T &x)
+primitive_wide_int_accessors <T>::to_shwi (HOST_WIDE_INT *s, unsigned int *l,
+					   unsigned int *p, const T &x)
 {
   s[0] = x;
   if (signedp (x)
@@ -3114,29 +2928,23 @@ wide_int_ro::to_shwi1 (HOST_WIDE_INT *s,
   return s;
 }
 
-/* The following template and its overrides are used for the second
-   operand of binary functions.  These have been implemented so that
-   pointer copying is done from the rep of the second operand rather
-   than actual data copying.  This is safe even for garbage collected
-   objects since the value is immediately throw away.
+template <>
+struct wide_int_accessors <int>
+  : public primitive_wide_int_accessors <int> {};
 
-   The next template matches all integers.  */
-template <typename T>
-inline const HOST_WIDE_INT *
-wide_int_ro::to_shwi2 (HOST_WIDE_INT *s, unsigned int *l, const T &x)
-{
-  s[0] = x;
-  if (signedp (x)
-      || sizeof (T) < sizeof (HOST_WIDE_INT)
-      || ! top_bit_set (x))
-    *l = 1;
-  else
-    {
-      s[1] = 0;
-      *l = 2;
-    }
-  return s;
-}
+template <>
+struct wide_int_accessors <unsigned int>
+  : public primitive_wide_int_accessors <unsigned int> {};
+
+#if HOST_BITS_PER_INT != HOST_BITS_PER_WIDE_INT
+template <>
+struct wide_int_accessors <HOST_WIDE_INT>
+  : public primitive_wide_int_accessors <HOST_WIDE_INT> {};
+
+template <>
+struct wide_int_accessors <unsigned HOST_WIDE_INT>
+  : public primitive_wide_int_accessors <unsigned HOST_WIDE_INT> {};
+#endif
 
 inline wide_int::wide_int () {}
 
@@ -3275,7 +3083,6 @@ class GTY(()) fixed_wide_int : public wi
 protected:
   fixed_wide_int &operator = (const wide_int &);
   fixed_wide_int (const wide_int_ro);
-  const HOST_WIDE_INT *get_val () const;
 
   using wide_int_ro::val;
 
@@ -3285,16 +3092,8 @@ class GTY(()) fixed_wide_int : public wi
   using wide_int_ro::to_short_addr;
   using wide_int_ro::fits_uhwi_p;
   using wide_int_ro::fits_shwi_p;
-  using wide_int_ro::gtu_p;
-  using wide_int_ro::gts_p;
-  using wide_int_ro::geu_p;
-  using wide_int_ro::ges_p;
   using wide_int_ro::to_shwi;
   using wide_int_ro::operator ==;
-  using wide_int_ro::ltu_p;
-  using wide_int_ro::lts_p;
-  using wide_int_ro::leu_p;
-  using wide_int_ro::les_p;
   using wide_int_ro::to_uhwi;
   using wide_int_ro::cmps;
   using wide_int_ro::neg_p;
@@ -3510,13 +3309,6 @@ inline fixed_wide_int <bitsize>::fixed_w
 }
 
 template <int bitsize>
-inline const HOST_WIDE_INT *
-fixed_wide_int <bitsize>::get_val () const
-{
-  return val;
-}
-
-template <int bitsize>
 inline fixed_wide_int <bitsize>
 fixed_wide_int <bitsize>::from_wide_int (const wide_int &w)
 {
@@ -4165,118 +3957,62 @@ extern void gt_pch_nx(max_wide_int*);
 
 extern addr_wide_int mem_ref_offset (const_tree);
 
-/* The wide-int overload templates.  */
-
 template <>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
-		       unsigned int *l, unsigned int *p,
-		       const wide_int_ro &y)
-{
-  *p = y.precision;
-  *l = y.len;
-  return y.val;
-}
-
-template <>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
-		       unsigned int *l, unsigned int *p,
-		       const wide_int &y)
-{
-  *p = y.precision;
-  *l = y.len;
-  return y.val;
-}
-
-
-template <>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
-		       unsigned int *l, unsigned int *p,
-		       const fixed_wide_int <addr_max_precision> &y)
+struct wide_int_accessors <wide_int_ro>
 {
-  *p = y.get_precision ();
-  *l = y.get_len ();
-  return y.get_val ();
-}
+  static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *,
+				       unsigned int *, const wide_int_ro &);
+};
 
-#if addr_max_precision != MAX_BITSIZE_MODE_ANY_INT
-template <>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
-		       unsigned int *l, unsigned int *p,
-		       const fixed_wide_int <MAX_BITSIZE_MODE_ANY_INT> &y)
+inline const HOST_WIDE_INT *
+wide_int_accessors <wide_int_ro>::to_shwi (HOST_WIDE_INT *, unsigned int *l,
+					   unsigned int *p,
+					   const wide_int_ro &y)
 {
   *p = y.get_precision ();
   *l = y.get_len ();
   return y.get_val ();
 }
-#endif
 
 template <>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
-		       unsigned int *l, const wide_int &y)
-{
-  *l = y.len;
-  return y.val;
-}
+struct wide_int_accessors <wide_int>
+  : public wide_int_accessors <wide_int_ro> {};
 
+template <>
+template <int N>
+struct wide_int_accessors <fixed_wide_int <N> >
+  : public wide_int_accessors <wide_int_ro> {};
 
 /* The tree and const_tree overload templates.   */
 template <>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
-		       unsigned int *l, unsigned int *p,
-		       const tree &tcst)
+struct wide_int_accessors <const_tree>
 {
-  tree type = TREE_TYPE (tcst);
-
-  *p = TYPE_PRECISION (type);
-  *l = TREE_INT_CST_NUNITS (tcst);
-  return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0);
-}
+  static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *,
+				       unsigned int *, const_tree);
+};
 
-template <>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
-		       unsigned int *l, unsigned int *p,
-		       const const_tree &tcst)
+inline const HOST_WIDE_INT *
+wide_int_accessors <const_tree>::to_shwi (HOST_WIDE_INT *, unsigned int *l,
+					  unsigned int *p, const_tree tcst)
 {
   tree type = TREE_TYPE (tcst);
 
   *p = TYPE_PRECISION (type);
   *l = TREE_INT_CST_NUNITS (tcst);
-  return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0);
+  return (const HOST_WIDE_INT *) &TREE_INT_CST_ELT (tcst, 0);
 }
 
 template <>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
-		       unsigned int *l, const tree &tcst)
-{
-  *l = TREE_INT_CST_NUNITS (tcst);
-  return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0);
-}
-
-template <>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
-		       unsigned int *l, const const_tree &tcst)
-{
-  *l = TREE_INT_CST_NUNITS (tcst);
-  return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0);
-}
+struct wide_int_accessors <tree> : public wide_int_accessors <const_tree> {};
 
 /* Checking for the functions that require that at least one of the
    operands have a nonzero precision.  If both of them have a precision,
    then if CHECK_EQUAL is true, require that the precision be the same.  */
 
 inline void
-wide_int_ro::check_precision (unsigned int *p1, unsigned int *p2,
-			      bool check_equal ATTRIBUTE_UNUSED,
-			      bool check_zero ATTRIBUTE_UNUSED)
+wi::check_precision (unsigned int *p1, unsigned int *p2,
+		     bool check_equal ATTRIBUTE_UNUSED,
+		     bool check_zero ATTRIBUTE_UNUSED)
 {
   gcc_checking_assert ((!check_zero) || *p1 != 0 || *p2 != 0);
 
@@ -4298,9 +4034,11 @@ typedef std::pair <rtx, enum machine_mod
 /* There should logically be an overload for rtl here, but it cannot
    be here because of circular include issues.  It is in rtl.h.  */
 template <>
-inline const HOST_WIDE_INT*
-wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
-		       unsigned int *l, const rtx_mode_t &rp);
+struct wide_int_accessors <rtx_mode_t>
+{
+  static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *,
+				       unsigned int *, const rtx_mode_t &);
+};
 
 /* tree related routines.  */
 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-25  7:27           ` Richard Sandiford
@ 2013-08-25 13:21             ` Kenneth Zadeck
  0 siblings, 0 replies; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-25 13:21 UTC (permalink / raw)
  To: rguenther, gcc-patches, Mike Stump, r.sandiford, rdsandiford

On 08/25/2013 02:42 AM, Richard Sandiford wrote:
> Kenneth Zadeck <zadeck@naturalbridge.com> writes:
>> On 08/24/2013 08:05 AM, Richard Sandiford wrote:
>>> Richard Sandiford <rdsandiford@googlemail.com> writes:
>>>> I wonder how easy it would be to restrict this use of "zero precision"
>>>> (i.e. flexible precision) to those where primitive types like "int" are
>>>> used as template arguments to operators, and require a precision when
>>>> constructing a wide_int.  I wouldn't have expected "real" precision 0
>>>> (from zero-width bitfields or whatever) to need any special handling
>>>> compared to precision 1 or 2.
>>> I tried the last bit -- requiring a precision when constructing a
>>> wide_int -- and it seemed surprisingly easy.  What do you think of
>>> the attached?  Most of the forced knock-on changes seem like improvements,
>>> but the java part is a bit ugly.  I also went with "wide_int (0, prec).cmp"
>>> for now, although I'd like to add static cmp, cmps and cmpu alongside
>>> leu_p, etc., if that's OK.  It would then be possible to write
>>> "wide_int::cmp (0, ...)" and avoid the wide_int construction altogether.
>>>
>>> I wondered whether you might also want to get rid of the build_int_cst*
>>> functions, but that still looks a long way off, so I hope using them in
>>> these two places doesn't seem too bad.
>>>
>>> This is just an incremental step.  I've also only run it through a
>>> subset of the testsuite so far, but full tests are in progress...
>> So I am going to make two high-level comments here and then I am going
>> to leave the ultimate decision to the community.  (1) I am mildly in
>> favor of leaving the prec 0 stuff the way that it is; (2) my guess is that
>> richi will also favor this.  My justification for (2) is that he had
>> a lot of comments about this before he went on leave and this is
>> substantially the way that it was when he left. Also, remember that one
>> of his biggest dislikes was having to specify precisions.
> Hmm, but you seem to be talking about zero precision in general.
> (I'm going to call it "flexible precision" to avoid confusion with
> the zero-width bitfield stuff.)
I have tried to purge the zero-width bitfield case from my mind.  It was
an ugly incident in the conversion.


> Whereas this patch is specifically
> about constructing flexible-precision _wide_int_ objects.  I think
> wide_int objects should always have a known, fixed precision.
This is where we differ.  I do not.  The top-level idea was really
motivated by richi, but I have come to appreciate his criticism.  Much of
the time, specifying the precision is simply redundant and it glops up
the code.

> Note that fixed_wide_ints can still use primitive types in the
> same way as before, since there the precision is inherent to the
> fixed_wide_int.  The templated operators also work in the same
> way as before.  Only the construction of wide_int proper is affected.
>
> As it stands you have various wide_int operators that cannot handle two
> flexible-precision inputs.  This means that innocent-looking code like:
>
>    extern wide_int foo (wide_int);
>    wide_int bar () { return foo (0); }
>
> ICEs when combined with equally innocent-looking code like:
>
>    wide_int foo (wide_int x) { return x + 1; }
>
> So in practice you have to know when calling a function whether any
> paths in that function will try applying an operator with a primitive type.
> If so, you need to specify a precision when constructing the wide_int
> argument.  If not you can leave it out.  That seems really unclean.
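The failure mode described above can be made concrete with a toy model
(the names here are hypothetical and stand in for the branch's actual
classes, which track multi-word values): precision 0 means "flexible",
so an operation whose operands are both flexible has no precision for
the result to adopt.

```cpp
#include <cassert>

// Toy model of "flexible" precision (hypothetical names, not the
// branch's actual classes): precision 0 means the value adopts the
// precision of the other operand.
struct toy_wide_int
{
  long val;
  unsigned int precision;   // 0 = flexible

  toy_wide_int (long v, unsigned int p = 0) : val (v), precision (p) {}
};

// Mimics the checking described above: at least one operand must
// carry a real precision, otherwise the result has none to adopt.
inline toy_wide_int
operator+ (const toy_wide_int &a, const toy_wide_int &b)
{
  assert (a.precision != 0 || b.precision != 0);
  unsigned int prec = a.precision ? a.precision : b.precision;
  return toy_wide_int (a.val + b.val, prec);
}
```

In this sketch, toy_wide_int (0) + toy_wide_int (1) trips the assert at
run time, which is the analogue of the ICE: neither operand supplies a
precision.  Giving either operand a real precision makes the operation
well-defined.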
My wife, who is a lawyer, likes to quote an old British chancellor:
"hard cases make bad law".
The fact that you occasionally have to specify a precision should not be
justification for throwing out the entire thing.

>
> The point of this template stuff is to avoid constructing wide_int objects
> from primitive integers wherever possible.  And I think the fairly
> small size of the patch shows that you've succeeded in doing that.
> But I think we really should specify a precision in the handful of cases
> where a wide_int does still need to be constructed directly from
> a primitive type.
>
> Thanks,
> Richard
As I said earlier, let's see what others in the community feel about this.


* Re: wide-int branch now up for public comment and review
  2013-08-25 10:52   ` Richard Sandiford
@ 2013-08-25 15:14     ` Kenneth Zadeck
  2013-08-26  2:22     ` Mike Stump
  1 sibling, 0 replies; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-25 15:14 UTC (permalink / raw)
  To: rguenther, gcc-patches, Mike Stump, r.sandiford, rdsandiford

On 08/25/2013 03:26 AM, Richard Sandiford wrote:
> Richard Sandiford <rdsandiford@googlemail.com> writes:
>> The main thing that's changed since the early patches is that we now
>> have a mixture of wide-int types.  This seems to have led to a lot of
>> boiler-plate forwarding functions (or at least it felt like that while
>> moving them all out the class).  And that in turn seems to be because
>> you're trying to keep everything as member functions.  E.g. a lot of the
>> forwarders are from a member function to a static function.
>>
>> Wouldn't it be better to have the actual classes be light-weight,
>> with little more than accessors, and do the actual work with non-member
>> template functions?  There seems to be 3 grades of wide-int:
>>
>>    (1) read-only, constant precision  (from int, etc.)
>>    (2) read-write, constant precision  (fixed_wide_int)
>>    (3) read-write, variable precision  (wide_int proper)
>>
>> but we should be able to hide that behind templates, with compiler errors
>> if you try to write to (1), etc.
>>
>> To take one example, the reason we can't simply use things like
>> std::min on wide ints is because signedness needs to be specified
>> explicitly, but there's a good reason why the standard defined
>> std::min (x, y) rather than x.min (y).  It seems like we ought
>> to have smin and umin functions alongside std::min, rather than
>> make them member functions.  We could put them in a separate namespace
>> if necessary.
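The smin/umin idea above can be sketched as free functions in a
namespace (illustrative only, under the stated assumption: the branch's
real wi:: routines operate on multi-word values, not plain ints, so the
namespace and signatures here are placeholders):

```cpp
// Signedness-explicit minimum as free functions in a namespace,
// alongside std::min, rather than as member functions x.min (y).
// Illustrative sketch only; the names and signatures are assumptions.
namespace wi_sketch
{
  inline int
  smin (int x, int y)                     // signed comparison
  {
    return x < y ? x : y;
  }

  inline unsigned int
  umin (unsigned int x, unsigned int y)   // unsigned comparison
  {
    return x < y ? x : y;
  }
}
```

The point of the free-function form is that the signedness is chosen by
picking the function, not by the dynamic type of a receiver object,
which mirrors why the standard defines std::min (x, y) rather than
x.min (y).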
> FWIW, here's a patch that shows the beginnings of what I mean.
> The changes are:
>
> (1) Using a new templated class, wide_int_accessors, to access the
>      integer object.  For now this just contains a single function,
>      to_shwi, but I expect more to follow...
>
> (2) Adding a new namespace, wi, for the operators.  So far this
>      just contains the previously-static comparison functions
>      and whatever else was needed to avoid cross-dependencies
>      between wi and wide_int_ro (except for the debug routines).
>
> (3) Removing the comparison member functions and using the static
>      ones everywhere.
>
> The idea behind using a namespace rather than static functions
> is that it makes it easier to separate the core, tree and rtx bits.
> IMO wide-int.h shouldn't know about trees and rtxes, and all routines
> related to them should be in tree.h and rtl.h instead.  But using
> static functions means that you have to declare everything in one place.
> Also, it feels odd for wide_int to be both an object and a home
> of static functions that don't always operate on wide_ints, e.g. when
> comparing a CONST_INT against 16.
>
> The eventual aim is to use wide_int_accessors (via the wi interface
> routines) to abstract away everything about the underlying object.
> Then wide_int_ro should not need to have any fields.  wide_int can
> have the fields that wide_int_ro has now, and fixed_wide_int will
> just have an array and length.  The array can also be the right
> size for the int template parameter, rather than always being
> WIDE_INT_MAX_ELTS.
>
> The aim is also to use wide_int_accessors to handle the flexible
> precision case, so that it only kicks in when primitive types are
> used as operator arguments.
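The accessor-trait shape being described can be sketched as follows
(names are illustrative assumptions; the real wide_int_accessors hands
back a pointer/length representation rather than a single integer):

```cpp
// Sketch of the accessor-trait pattern: a primary template that is
// declared but never defined, plus per-type specializations.  Types
// without a specialization fail to compile instead of silently
// matching a default -- the danger noted above for a generic
// implementation that any arithmetic-capable type could match.
template <typename T>
struct accessors;                 // intentionally undefined

template <>
struct accessors <int>
{
  static long to_hwi (const int &x) { return x; }
};

template <>
struct accessors <long>
{
  static long to_hwi (const long &x) { return x; }
};

// Generic comparison written against the trait, not the type itself.
template <typename T1, typename T2>
inline bool
lt_p (const T1 &a, const T2 &b)
{
  return accessors <T1>::to_hwi (a) < accessors <T2>::to_hwi (b);
}
```

A call such as lt_p (1, 2L) compiles because both int and long have
specializations; a type without one is rejected at compile time.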
>
> I used a wide_int_accessors class rather than just using templated
> wi functions because I think it's dangerous to have a default
> implementation of things like to_shwi1 and to_shwi2.  The default
> implementation we have now is only suitable for primitive types
> (because of the sizeof), but could successfully match any type
> that provides enough arithmetic to satisfy signedp and top_bit_set.
> I admit that's only a theoretical problem though.
>
> I realise I'm probably not being helpful here.  In fact I'm probably
> being one cook too many and should really just leave this up to you
> two and Richard.  But I realised while reading through wide-int.h
> the other day that I have strong opinions about how this should
> be done. :-(
>
> Tested on x86_64-linux-gnu FWIW.  I expect this to remain local though.
>
> Thanks,
> Richard
This is really mostly style.  I think that there are a lot of places
where the static functions are better than the OO cases.  However, if
you have a wide-int in your hand already, I like just making the OO
call.  It is a shame that there is no way to say I want both but only
want to specify one.  The problem is that you can take this too far:
with only static functions it begins to look more like badly written C
code.  However, I do agree that most of the uses of comparison
functions could be static - BUT NOT ALL.

However, I let Mike "design" the interface because I am just learning
C++.  So he gets to carry this conversation - at least until richi
returns!
>
> Index: gcc/ada/gcc-interface/cuintp.c
> ===================================================================
> --- gcc/ada/gcc-interface/cuintp.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/ada/gcc-interface/cuintp.c	2013-08-25 07:42:29.133616663 +0100
> @@ -177,7 +177,7 @@ UI_From_gnu (tree Input)
>        in a signed 64-bit integer.  */
>     if (tree_fits_shwi_p (Input))
>       return UI_From_Int (tree_to_shwi (Input));
> -  else if (wide_int::lts_p (Input, 0) && TYPE_UNSIGNED (gnu_type))
> +  else if (wi::lts_p (Input, 0) && TYPE_UNSIGNED (gnu_type))
>       return No_Uint;
>   #endif
>   
> Index: gcc/alias.c
> ===================================================================
> --- gcc/alias.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/alias.c	2013-08-25 07:42:29.134616672 +0100
> @@ -340,8 +340,8 @@ ao_ref_from_mem (ao_ref *ref, const_rtx
>   	  || (DECL_P (ref->base)
>   	      && (DECL_SIZE (ref->base) == NULL_TREE
>   		  || TREE_CODE (DECL_SIZE (ref->base)) != INTEGER_CST
> -		  || wide_int::ltu_p (DECL_SIZE (ref->base),
> -				      ref->offset + ref->size)))))
> +		  || wi::ltu_p (DECL_SIZE (ref->base),
> +				ref->offset + ref->size)))))
>       return false;
>   
>     return true;
> Index: gcc/c-family/c-common.c
> ===================================================================
> --- gcc/c-family/c-common.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/c-family/c-common.c	2013-08-25 07:42:29.138616711 +0100
> @@ -7925,8 +7925,8 @@ handle_alloc_size_attribute (tree *node,
>         wide_int p;
>   
>         if (TREE_CODE (position) != INTEGER_CST
> -	  || (p = wide_int (position)).ltu_p (1)
> -	  || p.gtu_p (arg_count) )
> +	  || wi::ltu_p (p = wide_int (position), 1)
> +	  || wi::gtu_p (p, arg_count))
>   	{
>   	  warning (OPT_Wattributes,
>   	           "alloc_size parameter outside range");
> Index: gcc/c-family/c-lex.c
> ===================================================================
> --- gcc/c-family/c-lex.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/c-family/c-lex.c	2013-08-25 07:42:29.139616721 +0100
> @@ -545,7 +545,7 @@ narrowest_unsigned_type (const wide_int
>   	continue;
>         upper = TYPE_MAX_VALUE (integer_types[itk]);
>   
> -      if (wide_int::geu_p (upper, val))
> +      if (wi::geu_p (upper, val))
>   	return (enum integer_type_kind) itk;
>       }
>   
> @@ -573,7 +573,7 @@ narrowest_signed_type (const wide_int &v
>   	continue;
>         upper = TYPE_MAX_VALUE (integer_types[itk]);
>   
> -      if (wide_int::geu_p (upper, val))
> +      if (wi::geu_p (upper, val))
>   	return (enum integer_type_kind) itk;
>       }
>   
> Index: gcc/c-family/c-pretty-print.c
> ===================================================================
> --- gcc/c-family/c-pretty-print.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/c-family/c-pretty-print.c	2013-08-25 07:42:29.140616730 +0100
> @@ -919,7 +919,7 @@ pp_c_integer_constant (c_pretty_printer
>       {
>         wide_int wi = i;
>   
> -      if (wi.lt_p (i, 0, TYPE_SIGN (TREE_TYPE (i))))
> +      if (wi::lt_p (i, 0, TYPE_SIGN (TREE_TYPE (i))))
>   	{
>   	  pp_minus (pp);
>   	  wi = -wi;
> Index: gcc/cgraph.c
> ===================================================================
> --- gcc/cgraph.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/cgraph.c	2013-08-25 07:42:29.141616740 +0100
> @@ -624,7 +624,7 @@ cgraph_add_thunk (struct cgraph_node *de
>     
>     node = cgraph_create_node (alias);
>     gcc_checking_assert (!virtual_offset
> -		       || wide_int::eq_p (virtual_offset, virtual_value));
> +		       || wi::eq_p (virtual_offset, virtual_value));
>     node->thunk.fixed_offset = fixed_offset;
>     node->thunk.this_adjusting = this_adjusting;
>     node->thunk.virtual_value = virtual_value;
> Index: gcc/config/bfin/bfin.c
> ===================================================================
> --- gcc/config/bfin/bfin.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/config/bfin/bfin.c	2013-08-25 07:42:29.168617001 +0100
> @@ -3285,7 +3285,7 @@ bfin_local_alignment (tree type, unsigne
>        memcpy can use 32 bit loads/stores.  */
>     if (TYPE_SIZE (type)
>         && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
> -      && (!wide_int::gtu_p (TYPE_SIZE (type), 8))
> +      && !wi::gtu_p (TYPE_SIZE (type), 8)
>         && align < 32)
>       return 32;
>     return align;
> Index: gcc/config/i386/i386.c
> ===================================================================
> --- gcc/config/i386/i386.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/config/i386/i386.c	2013-08-25 07:42:29.175617069 +0100
> @@ -25695,7 +25695,7 @@ ix86_data_alignment (tree type, int alig
>         && AGGREGATE_TYPE_P (type)
>         && TYPE_SIZE (type)
>         && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
> -      && (wide_int::geu_p (TYPE_SIZE (type), max_align))
> +      && wi::geu_p (TYPE_SIZE (type), max_align)
>         && align < max_align)
>       align = max_align;
>   
> @@ -25706,7 +25706,7 @@ ix86_data_alignment (tree type, int alig
>         if ((opt ? AGGREGATE_TYPE_P (type) : TREE_CODE (type) == ARRAY_TYPE)
>   	  && TYPE_SIZE (type)
>   	  && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
> -	  && (wide_int::geu_p (TYPE_SIZE (type), 128))
> +	  && wi::geu_p (TYPE_SIZE (type), 128)
>   	  && align < 128)
>   	return 128;
>       }
> @@ -25821,7 +25821,7 @@ ix86_local_alignment (tree exp, enum mac
>   		  != TYPE_MAIN_VARIANT (va_list_type_node)))
>   	  && TYPE_SIZE (type)
>   	  && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
> -	  && (wide_int::geu_p (TYPE_SIZE (type), 16))
> +	  && wi::geu_p (TYPE_SIZE (type), 16)
>   	  && align < 128)
>   	return 128;
>       }
> Index: gcc/config/rs6000/rs6000-c.c
> ===================================================================
> --- gcc/config/rs6000/rs6000-c.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/config/rs6000/rs6000-c.c	2013-08-25 07:42:29.188617194 +0100
> @@ -4196,7 +4196,7 @@ altivec_resolve_overloaded_builtin (loca
>         mode = TYPE_MODE (arg1_type);
>         if ((mode == V2DFmode || mode == V2DImode) && VECTOR_MEM_VSX_P (mode)
>   	  && TREE_CODE (arg2) == INTEGER_CST
> -	  && wide_int::ltu_p (arg2, 2))
> +	  && wi::ltu_p (arg2, 2))
>   	{
>   	  tree call = NULL_TREE;
>   
> @@ -4281,7 +4281,7 @@ altivec_resolve_overloaded_builtin (loca
>         mode = TYPE_MODE (arg1_type);
>         if ((mode == V2DFmode || mode == V2DImode) && VECTOR_UNIT_VSX_P (mode)
>   	  && tree_fits_uhwi_p (arg2)
> -	  && wide_int::ltu_p (arg2, 2))
> +	  && wi::ltu_p (arg2, 2))
>   	{
>   	  tree call = NULL_TREE;
>   
> Index: gcc/cp/init.c
> ===================================================================
> --- gcc/cp/init.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/cp/init.c	2013-08-25 07:42:29.189617204 +0100
> @@ -2381,7 +2381,7 @@ build_new_1 (vec<tree, va_gc> **placemen
>         gcc_assert (TREE_CODE (size) == INTEGER_CST);
>         cookie_size = targetm.cxx.get_cookie_size (elt_type);
>         gcc_assert (TREE_CODE (cookie_size) == INTEGER_CST);
> -      gcc_checking_assert (addr_wide_int (cookie_size).ltu_p(max_size));
> +      gcc_checking_assert (wi::ltu_p (cookie_size, max_size));
>         /* Unconditionally subtract the cookie size.  This decreases the
>   	 maximum object size and is safe even if we choose not to use
>   	 a cookie after all.  */
> @@ -2389,7 +2389,7 @@ build_new_1 (vec<tree, va_gc> **placemen
>         bool overflow;
>         inner_size = addr_wide_int (size)
>   		   .mul (inner_nelts_count, SIGNED, &overflow);
> -      if (overflow || inner_size.gtu_p (max_size))
> +      if (overflow || wi::gtu_p (inner_size, max_size))
>   	{
>   	  if (complain & tf_error)
>   	    error ("size of array is too large");
> Index: gcc/dwarf2out.c
> ===================================================================
> --- gcc/dwarf2out.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/dwarf2out.c	2013-08-25 07:42:29.192617233 +0100
> @@ -14783,7 +14783,7 @@ field_byte_offset (const_tree decl)
>         object_offset_in_bits
>   	= round_up_to_align (object_offset_in_bits, type_align_in_bits);
>   
> -      if (object_offset_in_bits.gtu_p (bitpos_int))
> +      if (wi::gtu_p (object_offset_in_bits, bitpos_int))
>   	{
>   	  object_offset_in_bits = deepest_bitpos - type_size_in_bits;
>   
> @@ -16218,7 +16218,7 @@ add_bound_info (dw_die_ref subrange_die,
>   		  	     zext_hwi (tree_to_hwi (bound), prec));
>   	  }
>   	else if (prec == HOST_BITS_PER_WIDE_INT
> -		 || (cst_fits_uhwi_p (bound) && wide_int (bound).ges_p (0)))
> +		 || (cst_fits_uhwi_p (bound) && wi::ges_p (bound, 0)))
>   	  add_AT_unsigned (subrange_die, bound_attr, tree_to_hwi (bound));
>   	else
>   	  add_AT_wide (subrange_die, bound_attr, wide_int (bound));
> Index: gcc/fold-const.c
> ===================================================================
> --- gcc/fold-const.c	2013-08-25 07:42:28.417609742 +0100
> +++ gcc/fold-const.c	2013-08-25 07:42:29.194617252 +0100
> @@ -510,7 +510,7 @@ negate_expr_p (tree t)
>         if (TREE_CODE (TREE_OPERAND (t, 1)) == INTEGER_CST)
>   	{
>   	  tree op1 = TREE_OPERAND (t, 1);
> -	  if (wide_int::eq_p (op1, TYPE_PRECISION (type) - 1))
> +	  if (wi::eq_p (op1, TYPE_PRECISION (type) - 1))
>   	    return true;
>   	}
>         break;
> @@ -721,7 +721,7 @@ fold_negate_expr (location_t loc, tree t
>         if (TREE_CODE (TREE_OPERAND (t, 1)) == INTEGER_CST)
>   	{
>   	  tree op1 = TREE_OPERAND (t, 1);
> -	  if (wide_int::eq_p (op1, TYPE_PRECISION (type) - 1))
> +	  if (wi::eq_p (op1, TYPE_PRECISION (type) - 1))
>   	    {
>   	      tree ntype = TYPE_UNSIGNED (type)
>   			   ? signed_type_for (type)
> @@ -5836,7 +5836,7 @@ extract_muldiv_1 (tree t, tree c, enum t
>   	  && (tcode == RSHIFT_EXPR || TYPE_UNSIGNED (TREE_TYPE (op0)))
>   	  /* const_binop may not detect overflow correctly,
>   	     so check for it explicitly here.  */
> -	  && wide_int::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1)
> +	  && wi::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1)
>   	  && 0 != (t1 = fold_convert (ctype,
>   				      const_binop (LSHIFT_EXPR,
>   						   size_one_node,
> @@ -6602,7 +6602,8 @@ fold_single_bit_test (location_t loc, en
>   	 not overflow, adjust BITNUM and INNER.  */
>         if (TREE_CODE (inner) == RSHIFT_EXPR
>   	  && TREE_CODE (TREE_OPERAND (inner, 1)) == INTEGER_CST
> -	  && (wide_int (TREE_OPERAND (inner, 1) + bitnum).ltu_p (TYPE_PRECISION (type))))
> +	  && wi::ltu_p (TREE_OPERAND (inner, 1) + bitnum,
> +			TYPE_PRECISION (type)))
>   	{
>   	  bitnum += tree_to_hwi (TREE_OPERAND (inner, 1));
>   	  inner = TREE_OPERAND (inner, 0);
> @@ -12911,7 +12912,7 @@ fold_binary_loc (location_t loc,
>   	  prec = TYPE_PRECISION (itype);
>   
>   	  /* Check for a valid shift count.  */
> -	  if (wide_int::ltu_p (arg001, prec))
> +	  if (wi::ltu_p (arg001, prec))
>   	    {
>   	      tree arg01 = TREE_OPERAND (arg0, 1);
>   	      tree arg000 = TREE_OPERAND (TREE_OPERAND (arg0, 0), 0);
> @@ -13036,7 +13037,7 @@ fold_binary_loc (location_t loc,
>   	  tree arg00 = TREE_OPERAND (arg0, 0);
>   	  tree arg01 = TREE_OPERAND (arg0, 1);
>   	  tree itype = TREE_TYPE (arg00);
> -	  if (wide_int::eq_p (arg01, TYPE_PRECISION (itype) - 1))
> +	  if (wi::eq_p (arg01, TYPE_PRECISION (itype) - 1))
>   	    {
>   	      if (TYPE_UNSIGNED (itype))
>   		{
> @@ -14341,7 +14342,7 @@ fold_ternary_loc (location_t loc, enum t
>   	      /* Make sure that the perm value is in an acceptable
>   		 range.  */
>   	      t = val;
> -	      if (t.gtu_p (nelts_cnt))
> +	      if (wi::gtu_p (t, nelts_cnt))
>   		{
>   		  need_mask_canon = true;
>   		  sel[i] = t.to_uhwi () & (nelts_cnt - 1);
> @@ -15163,7 +15164,7 @@ multiple_of_p (tree type, const_tree top
>   	  op1 = TREE_OPERAND (top, 1);
>   	  /* const_binop may not detect overflow correctly,
>   	     so check for it explicitly here.  */
> -	  if (wide_int::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1)
> +	  if (wi::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1)
>   	      && 0 != (t1 = fold_convert (type,
>   					  const_binop (LSHIFT_EXPR,
>   						       size_one_node,
> Index: gcc/fortran/trans-intrinsic.c
> ===================================================================
> --- gcc/fortran/trans-intrinsic.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/fortran/trans-intrinsic.c	2013-08-25 07:42:29.195617262 +0100
> @@ -986,8 +986,9 @@ trans_this_image (gfc_se * se, gfc_expr
>   	{
>   	  wide_int wdim_arg = dim_arg;
>   
> -	  if (wdim_arg.ltu_p (1)
> -	      || wdim_arg.gtu_p (GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc))))
> +	  if (wi::ltu_p (wdim_arg, 1)
> +	      || wi::gtu_p (wdim_arg,
> +			    GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc))))
>   	    gfc_error ("'dim' argument of %s intrinsic at %L is not a valid "
>   		       "dimension index", expr->value.function.isym->name,
>   		       &expr->where);
> @@ -1346,8 +1347,8 @@ gfc_conv_intrinsic_bound (gfc_se * se, g
>       {
>         wide_int wbound = bound;
>         if (((!as || as->type != AS_ASSUMED_RANK)
> -	      && wbound.geu_p (GFC_TYPE_ARRAY_RANK (TREE_TYPE (desc))))
> -	  || wbound.gtu_p (GFC_MAX_DIMENSIONS))
> +	   && wi::geu_p (wbound, GFC_TYPE_ARRAY_RANK (TREE_TYPE (desc))))
> +	  || wi::gtu_p (wbound, GFC_MAX_DIMENSIONS))
>   	gfc_error ("'dim' argument of %s intrinsic at %L is not a valid "
>   		   "dimension index", upper ? "UBOUND" : "LBOUND",
>   		   &expr->where);
> @@ -1543,7 +1544,8 @@ conv_intrinsic_cobound (gfc_se * se, gfc
>         if (INTEGER_CST_P (bound))
>   	{
>   	  wide_int wbound = bound;
> -	  if (wbound.ltu_p (1) || wbound.gtu_p (GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc))))
> +	  if (wi::ltu_p (wbound, 1)
> +	      || wi::gtu_p (wbound, GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc))))
>   	    gfc_error ("'dim' argument of %s intrinsic at %L is not a valid "
>   		       "dimension index", expr->value.function.isym->name,
>   		       &expr->where);
> Index: gcc/gimple-fold.c
> ===================================================================
> --- gcc/gimple-fold.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/gimple-fold.c	2013-08-25 07:42:29.195617262 +0100
> @@ -2799,7 +2799,7 @@ fold_array_ctor_reference (tree type, tr
>        be larger than size of array element.  */
>     if (!TYPE_SIZE_UNIT (type)
>         || TREE_CODE (TYPE_SIZE_UNIT (type)) != INTEGER_CST
> -      || elt_size.lts_p (addr_wide_int (TYPE_SIZE_UNIT (type))))
> +      || wi::lts_p (elt_size, TYPE_SIZE_UNIT (type)))
>       return NULL_TREE;
>   
>     /* Compute the array index we look for.  */
> @@ -2902,7 +2902,7 @@ fold_nonarray_ctor_reference (tree type,
>   	 [BITOFFSET, BITOFFSET_END)?  */
>         if (access_end.cmps (bitoffset) > 0
>   	  && (field_size == NULL_TREE
> -	      || addr_wide_int (offset).lts_p (bitoffset_end)))
> +	      || wi::lts_p (offset, bitoffset_end)))
>   	{
>   	  addr_wide_int inner_offset = addr_wide_int (offset) - bitoffset;
>   	  /* We do have overlap.  Now see if field is large enough to
> @@ -2910,7 +2910,7 @@ fold_nonarray_ctor_reference (tree type,
>   	     fields.  */
>   	  if (access_end.cmps (bitoffset_end) > 0)
>   	    return NULL_TREE;
> -	  if (addr_wide_int (offset).lts_p (bitoffset))
> +	  if (wi::lts_p (offset, bitoffset))
>   	    return NULL_TREE;
>   	  return fold_ctor_reference (type, cval,
>   				      inner_offset.to_uhwi (), size,
> Index: gcc/gimple-ssa-strength-reduction.c
> ===================================================================
> --- gcc/gimple-ssa-strength-reduction.c	2013-08-25 07:42:28.418609752 +0100
> +++ gcc/gimple-ssa-strength-reduction.c	2013-08-25 07:42:29.196617272 +0100
> @@ -2355,8 +2355,8 @@ record_increment (slsr_cand_t c, const m
>         if (c->kind == CAND_ADD
>   	  && !is_phi_adjust
>   	  && c->index == increment
> -	  && (increment.gts_p (1)
> -	      || increment.lts_p (-1))
> +	  && (wi::gts_p (increment, 1)
> +	      || wi::lts_p (increment, -1))
>   	  && (gimple_assign_rhs_code (c->cand_stmt) == PLUS_EXPR
>   	      || gimple_assign_rhs_code (c->cand_stmt) == POINTER_PLUS_EXPR))
>   	{
> Index: gcc/loop-doloop.c
> ===================================================================
> --- gcc/loop-doloop.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/loop-doloop.c	2013-08-25 07:42:29.196617272 +0100
> @@ -461,9 +461,10 @@ doloop_modify (struct loop *loop, struct
>         /* Determine if the iteration counter will be non-negative.
>   	 Note that the maximum value loaded is iterations_max - 1.  */
>         if (max_loop_iterations (loop, &iterations)
> -	  && (iterations.leu_p (wide_int::set_bit_in_zero
> -				(GET_MODE_PRECISION (mode) - 1,
> -				 GET_MODE_PRECISION (mode)))))
> +	  && wi::leu_p (iterations,
> +			wide_int::set_bit_in_zero
> +			(GET_MODE_PRECISION (mode) - 1,
> +			 GET_MODE_PRECISION (mode))))
>   	nonneg = 1;
>         break;
>   
> @@ -697,7 +698,7 @@ doloop_optimize (struct loop *loop)
>   	 computed, we must be sure that the number of iterations fits into
>   	 the new mode.  */
>         && (word_mode_size >= GET_MODE_PRECISION (mode)
> -	  || iter.leu_p (word_mode_max)))
> +	  || wi::leu_p (iter, word_mode_max)))
>       {
>         if (word_mode_size > GET_MODE_PRECISION (mode))
>   	{
> Index: gcc/loop-unroll.c
> ===================================================================
> --- gcc/loop-unroll.c	2013-08-25 07:42:28.420609771 +0100
> +++ gcc/loop-unroll.c	2013-08-25 07:42:29.196617272 +0100
> @@ -693,7 +693,7 @@ decide_unroll_constant_iterations (struc
>     if (desc->niter < 2 * nunroll
>         || ((estimated_loop_iterations (loop, &iterations)
>   	   || max_loop_iterations (loop, &iterations))
> -	  && iterations.ltu_p (2 * nunroll)))
> +	  && wi::ltu_p (iterations, 2 * nunroll)))
>       {
>         if (dump_file)
>   	fprintf (dump_file, ";; Not unrolling loop, doesn't roll\n");
> @@ -816,7 +816,7 @@ unroll_loop_constant_iterations (struct
>   	  desc->niter -= exit_mod;
>   	  loop->nb_iterations_upper_bound -= exit_mod;
>   	  if (loop->any_estimate
> -	      && wide_int::leu_p (exit_mod, loop->nb_iterations_estimate))
> +	      && wi::leu_p (exit_mod, loop->nb_iterations_estimate))
>   	    loop->nb_iterations_estimate -= exit_mod;
>   	  else
>   	    loop->any_estimate = false;
> @@ -859,7 +859,7 @@ unroll_loop_constant_iterations (struct
>   	  desc->niter -= exit_mod + 1;
>   	  loop->nb_iterations_upper_bound -= exit_mod + 1;
>   	  if (loop->any_estimate
> -	      && wide_int::leu_p (exit_mod + 1, loop->nb_iterations_estimate))
> +	      && wi::leu_p (exit_mod + 1, loop->nb_iterations_estimate))
>   	    loop->nb_iterations_estimate -= exit_mod + 1;
>   	  else
>   	    loop->any_estimate = false;
> @@ -992,7 +992,7 @@ decide_unroll_runtime_iterations (struct
>     /* Check whether the loop rolls.  */
>     if ((estimated_loop_iterations (loop, &iterations)
>          || max_loop_iterations (loop, &iterations))
> -      && iterations.ltu_p (2 * nunroll))
> +      && wi::ltu_p (iterations, 2 * nunroll))
>       {
>         if (dump_file)
>   	fprintf (dump_file, ";; Not unrolling loop, doesn't roll\n");
> @@ -1379,7 +1379,7 @@ decide_peel_simple (struct loop *loop, i
>     if (estimated_loop_iterations (loop, &iterations))
>       {
>         /* TODO: unsigned/signed confusion */
> -      if (wide_int::leu_p (npeel, iterations))
> +      if (wi::leu_p (npeel, iterations))
>   	{
>   	  if (dump_file)
>   	    {
> @@ -1396,7 +1396,7 @@ decide_peel_simple (struct loop *loop, i
>     /* If we have small enough bound on iterations, we can still peel (completely
>        unroll).  */
>     else if (max_loop_iterations (loop, &iterations)
> -           && iterations.ltu_p (npeel))
> +           && wi::ltu_p (iterations, npeel))
>       npeel = iterations.to_shwi () + 1;
>     else
>       {
> @@ -1547,7 +1547,7 @@ decide_unroll_stupid (struct loop *loop,
>     /* Check whether the loop rolls.  */
>     if ((estimated_loop_iterations (loop, &iterations)
>          || max_loop_iterations (loop, &iterations))
> -      && iterations.ltu_p (2 * nunroll))
> +      && wi::ltu_p (iterations, 2 * nunroll))
>       {
>         if (dump_file)
>   	fprintf (dump_file, ";; Not unrolling loop, doesn't roll\n");
> Index: gcc/lto/lto.c
> ===================================================================
> --- gcc/lto/lto.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/lto/lto.c	2013-08-25 07:42:29.206617368 +0100
> @@ -1778,7 +1778,7 @@ #define compare_values(X) \
>   
>     if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
>       {
> -      if (!wide_int::eq_p (t1, t2))
> +      if (!wi::eq_p (t1, t2))
>   	return false;
>       }
>   
> Index: gcc/rtl.h
> ===================================================================
> --- gcc/rtl.h	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/rtl.h	2013-08-25 07:42:29.197617281 +0100
> @@ -1402,10 +1402,10 @@ get_mode (const rtx_mode_t p)
>   
>   /* Specialization of to_shwi1 function in wide-int.h for rtl.  This
>      cannot be in wide-int.h because of circular includes.  */
> -template<>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, unsigned int *p, const rtx_mode_t& rp)
> +inline const HOST_WIDE_INT *
> +wide_int_accessors <rtx_mode_t>::to_shwi (HOST_WIDE_INT *, unsigned int *l,
> +					  unsigned int *p,
> +					  const rtx_mode_t &rp)
>   {
>     const rtx rcst = get_rtx (rp);
>     enum machine_mode mode = get_mode (rp);
> @@ -1414,34 +1414,6 @@ wide_int_ro::to_shwi1 (HOST_WIDE_INT *s
>   
>     switch (GET_CODE (rcst))
>       {
> -    case CONST_INT:
> -      *l = 1;
> -      return &INTVAL (rcst);
> -
> -    case CONST_WIDE_INT:
> -      *l = CONST_WIDE_INT_NUNITS (rcst);
> -      return &CONST_WIDE_INT_ELT (rcst, 0);
> -
> -    case CONST_DOUBLE:
> -      *l = 2;
> -      return &CONST_DOUBLE_LOW (rcst);
> -
> -    default:
> -      gcc_unreachable ();
> -    }
> -}
> -
> -/* Specialization of to_shwi2 function in wide-int.h for rtl.  This
> -   cannot be in wide-int.h because of circular includes.  */
> -template<>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, const rtx_mode_t& rp)
> -{
> -  const rtx rcst = get_rtx (rp);
> -
> -  switch (GET_CODE (rcst))
> -    {
>       case CONST_INT:
>         *l = 1;
>         return &INTVAL (rcst);
> Index: gcc/simplify-rtx.c
> ===================================================================
> --- gcc/simplify-rtx.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/simplify-rtx.c	2013-08-25 07:42:29.198617291 +0100
> @@ -4649,8 +4649,8 @@ simplify_const_relational_operation (enu
>   	return comparison_result (code, CMP_EQ);
>         else
>   	{
> -	  int cr = wo0.lts_p (ptrueop1) ? CMP_LT : CMP_GT;
> -	  cr |= wo0.ltu_p (ptrueop1) ? CMP_LTU : CMP_GTU;
> +	  int cr = wi::lts_p (wo0, ptrueop1) ? CMP_LT : CMP_GT;
> +	  cr |= wi::ltu_p (wo0, ptrueop1) ? CMP_LTU : CMP_GTU;
>   	  return comparison_result (code, cr);
>   	}
>       }
> Index: gcc/tree-affine.c
> ===================================================================
> --- gcc/tree-affine.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/tree-affine.c	2013-08-25 07:42:29.198617291 +0100
> @@ -911,7 +911,7 @@ aff_comb_cannot_overlap_p (aff_tree *dif
>     else
>       {
>         /* We succeed if the second object starts after the first one ends.  */
> -      return size1.les_p (d);
> +      return wi::les_p (size1, d);
>       }
>   }
>   
> Index: gcc/tree-chrec.c
> ===================================================================
> --- gcc/tree-chrec.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/tree-chrec.c	2013-08-25 07:42:29.198617291 +0100
> @@ -475,7 +475,7 @@ tree_fold_binomial (tree type, tree n, u
>     num = n;
>   
>     /* Check that k <= n.  */
> -  if (num.ltu_p (k))
> +  if (wi::ltu_p (num, k))
>       return NULL_TREE;
>   
>     /* Denominator = 2.  */
> Index: gcc/tree-predcom.c
> ===================================================================
> --- gcc/tree-predcom.c	2013-08-25 07:42:28.421609781 +0100
> +++ gcc/tree-predcom.c	2013-08-25 07:42:29.199617301 +0100
> @@ -921,9 +921,9 @@ add_ref_to_chain (chain_p chain, dref re
>     dref root = get_chain_root (chain);
>     max_wide_int dist;
>   
> -  gcc_assert (root->offset.les_p (ref->offset));
> +  gcc_assert (wi::les_p (root->offset, ref->offset));
>     dist = ref->offset - root->offset;
> -  if (wide_int::leu_p (MAX_DISTANCE, dist))
> +  if (wi::leu_p (MAX_DISTANCE, dist))
>       {
>         free (ref);
>         return;
> @@ -1194,7 +1194,7 @@ determine_roots_comp (struct loop *loop,
>     FOR_EACH_VEC_ELT (comp->refs, i, a)
>       {
>         if (!chain || DR_IS_WRITE (a->ref)
> -	  || max_wide_int (MAX_DISTANCE).leu_p (a->offset - last_ofs))
> +	  || wi::leu_p (MAX_DISTANCE, a->offset - last_ofs))
>   	{
>   	  if (nontrivial_chain_p (chain))
>   	    {
> Index: gcc/tree-ssa-loop-ivcanon.c
> ===================================================================
> --- gcc/tree-ssa-loop-ivcanon.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/tree-ssa-loop-ivcanon.c	2013-08-25 07:42:29.199617301 +0100
> @@ -488,7 +488,7 @@ remove_exits_and_undefined_stmts (struct
>   	 into unreachable (or trap when debugging experience is supposed
>   	 to be good).  */
>         if (!elt->is_exit
> -	  && elt->bound.ltu_p (max_wide_int (npeeled)))
> +	  && wi::ltu_p (elt->bound, npeeled))
>   	{
>   	  gimple_stmt_iterator gsi = gsi_for_stmt (elt->stmt);
>   	  gimple stmt = gimple_build_call
> @@ -505,7 +505,7 @@ remove_exits_and_undefined_stmts (struct
>   	}
>         /* If we know the exit will be taken after peeling, update.  */
>         else if (elt->is_exit
> -	       && elt->bound.leu_p (max_wide_int (npeeled)))
> +	       && wi::leu_p (elt->bound, npeeled))
>   	{
>   	  basic_block bb = gimple_bb (elt->stmt);
>   	  edge exit_edge = EDGE_SUCC (bb, 0);
> @@ -545,7 +545,7 @@ remove_redundant_iv_tests (struct loop *
>         /* Exit is pointless if it won't be taken before loop reaches
>   	 upper bound.  */
>         if (elt->is_exit && loop->any_upper_bound
> -          && loop->nb_iterations_upper_bound.ltu_p (elt->bound))
> +          && wi::ltu_p (loop->nb_iterations_upper_bound, elt->bound))
>   	{
>   	  basic_block bb = gimple_bb (elt->stmt);
>   	  edge exit_edge = EDGE_SUCC (bb, 0);
> @@ -562,7 +562,7 @@ remove_redundant_iv_tests (struct loop *
>   	      || !integer_zerop (niter.may_be_zero)
>   	      || !niter.niter
>   	      || TREE_CODE (niter.niter) != INTEGER_CST
> -	      || !loop->nb_iterations_upper_bound.ltu_p (niter.niter))
> +	      || !wi::ltu_p (loop->nb_iterations_upper_bound, niter.niter))
>   	    continue;
>   	
>   	  if (dump_file && (dump_flags & TDF_DETAILS))
> Index: gcc/tree-ssa-loop-ivopts.c
> ===================================================================
> --- gcc/tree-ssa-loop-ivopts.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/tree-ssa-loop-ivopts.c	2013-08-25 07:42:29.200617310 +0100
> @@ -4659,7 +4659,7 @@ may_eliminate_iv (struct ivopts_data *da
>         if (stmt_after_increment (loop, cand, use->stmt))
>           max_niter += 1;
>         period_value = period;
> -      if (max_niter.gtu_p (period_value))
> +      if (wi::gtu_p (max_niter, period_value))
>           {
>             /* See if we can take advantage of inferred loop bound information.  */
>             if (data->loop_single_exit_p)
> @@ -4667,7 +4667,7 @@ may_eliminate_iv (struct ivopts_data *da
>                 if (!max_loop_iterations (loop, &max_niter))
>                   return false;
>                 /* The loop bound is already adjusted by adding 1.  */
> -              if (max_niter.gtu_p (period_value))
> +              if (wi::gtu_p (max_niter, period_value))
>                   return false;
>               }
>             else
> Index: gcc/tree-ssa-loop-niter.c
> ===================================================================
> --- gcc/tree-ssa-loop-niter.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/tree-ssa-loop-niter.c	2013-08-25 07:42:29.200617310 +0100
> @@ -2410,7 +2410,7 @@ derive_constant_upper_bound_ops (tree ty
>   
>         /* If the bound does not fit in TYPE, max. value of TYPE could be
>   	 attained.  */
> -      if (max.ltu_p (bnd))
> +      if (wi::ltu_p (max, bnd))
>   	return max;
>   
>         return bnd;
> @@ -2443,7 +2443,7 @@ derive_constant_upper_bound_ops (tree ty
>   	     BND <= MAX (type) - CST.  */
>   
>   	  mmax -= cst;
> -	  if (bnd.ltu_p (mmax))
> +	  if (wi::ltu_p (bnd, mmax))
>   	    return max;
>   
>   	  return bnd + cst;
> @@ -2463,7 +2463,7 @@ derive_constant_upper_bound_ops (tree ty
>   	  /* This should only happen if the type is unsigned; however, for
>   	     buggy programs that use overflowing signed arithmetics even with
>   	     -fno-wrapv, this condition may also be true for signed values.  */
> -	  if (bnd.ltu_p (cst))
> +	  if (wi::ltu_p (bnd, cst))
>   	    return max;
>   
>   	  if (TYPE_UNSIGNED (type))
> @@ -2519,14 +2519,14 @@ record_niter_bound (struct loop *loop, c
>        current estimation is smaller.  */
>     if (upper
>         && (!loop->any_upper_bound
> -	  || i_bound.ltu_p (loop->nb_iterations_upper_bound)))
> +	  || wi::ltu_p (i_bound, loop->nb_iterations_upper_bound)))
>       {
>         loop->any_upper_bound = true;
>         loop->nb_iterations_upper_bound = i_bound;
>       }
>     if (realistic
>         && (!loop->any_estimate
> -	  || i_bound.ltu_p (loop->nb_iterations_estimate)))
> +	  || wi::ltu_p (i_bound, loop->nb_iterations_estimate)))
>       {
>         loop->any_estimate = true;
>         loop->nb_iterations_estimate = i_bound;
> @@ -2536,7 +2536,8 @@ record_niter_bound (struct loop *loop, c
>        number of iterations, use the upper bound instead.  */
>     if (loop->any_upper_bound
>         && loop->any_estimate
> -      && loop->nb_iterations_upper_bound.ltu_p (loop->nb_iterations_estimate))
> +      && wi::ltu_p (loop->nb_iterations_upper_bound,
> +		    loop->nb_iterations_estimate))
>       loop->nb_iterations_estimate = loop->nb_iterations_upper_bound;
>   }
>   
> @@ -2642,7 +2643,7 @@ record_estimate (struct loop *loop, tree
>     i_bound += delta;
>   
>     /* If an overflow occurred, ignore the result.  */
> -  if (i_bound.ltu_p (delta))
> +  if (wi::ltu_p (i_bound, delta))
>       return;
>   
>     if (upper && !is_exit)
> @@ -3051,7 +3052,7 @@ bound_index (vec<max_wide_int> bounds, c
>   
>         if (index == bound)
>   	return middle;
> -      else if (index.ltu_p (bound))
> +      else if (wi::ltu_p (index, bound))
>   	begin = middle + 1;
>         else
>   	end = middle;
> @@ -3093,7 +3094,7 @@ discover_iteration_bound_by_body_walk (s
>   	}
>   
>         if (!loop->any_upper_bound
> -	  || bound.ltu_p (loop->nb_iterations_upper_bound))
> +	  || wi::ltu_p (bound, loop->nb_iterations_upper_bound))
>           bounds.safe_push (bound);
>       }
>   
> @@ -3124,7 +3125,7 @@ discover_iteration_bound_by_body_walk (s
>   	}
>   
>         if (!loop->any_upper_bound
> -	  || bound.ltu_p (loop->nb_iterations_upper_bound))
> +	  || wi::ltu_p (bound, loop->nb_iterations_upper_bound))
>   	{
>   	  ptrdiff_t index = bound_index (bounds, bound);
>   	  void **entry = pointer_map_contains (bb_bounds,
> @@ -3259,7 +3260,7 @@ maybe_lower_iteration_bound (struct loop
>     for (elt = loop->bounds; elt; elt = elt->next)
>       {
>         if (!elt->is_exit
> -	  && elt->bound.ltu_p (loop->nb_iterations_upper_bound))
> +	  && wi::ltu_p (elt->bound, loop->nb_iterations_upper_bound))
>   	{
>   	  if (!not_executed_last_iteration)
>   	    not_executed_last_iteration = pointer_set_create ();
> @@ -3556,7 +3557,7 @@ max_stmt_executions (struct loop *loop,
>   
>     *nit += 1;
>   
> -  return (*nit).gtu_p (nit_minus_one);
> +  return wi::gtu_p (*nit, nit_minus_one);
>   }
>   
>   /* Sets NIT to the estimated number of executions of the latch of the
> @@ -3575,7 +3576,7 @@ estimated_stmt_executions (struct loop *
>   
>     *nit += 1;
>   
> -  return (*nit).gtu_p (nit_minus_one);
> +  return wi::gtu_p (*nit, nit_minus_one);
>   }
>   
>   /* Records estimates on numbers of iterations of loops.  */
> Index: gcc/tree-ssa.c
> ===================================================================
> --- gcc/tree-ssa.c	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/tree-ssa.c	2013-08-25 07:42:29.201617320 +0100
> @@ -1829,8 +1829,8 @@ non_rewritable_mem_ref_base (tree ref)
>   	  && useless_type_conversion_p (TREE_TYPE (base),
>   					TREE_TYPE (TREE_TYPE (decl)))
>   	  && mem_ref_offset (base).fits_uhwi_p ()
> -	  && addr_wide_int (TYPE_SIZE_UNIT (TREE_TYPE (decl)))
> -	     .gtu_p (mem_ref_offset (base))
> +	  && wi::gtu_p (TYPE_SIZE_UNIT (TREE_TYPE (decl)),
> +			mem_ref_offset (base))
>   	  && multiple_of_p (sizetype, TREE_OPERAND (base, 1),
>   			    TYPE_SIZE_UNIT (TREE_TYPE (base))))
>   	return NULL_TREE;
> Index: gcc/tree-vrp.c
> ===================================================================
> --- gcc/tree-vrp.c	2013-08-25 07:42:28.470610254 +0100
> +++ gcc/tree-vrp.c	2013-08-25 07:42:29.202617330 +0100
> @@ -2652,13 +2652,13 @@ extract_range_from_binary_expr_1 (value_
>   	  /* Canonicalize the intervals.  */
>   	  if (sign == UNSIGNED)
>   	    {
> -	      if (size.ltu_p (min0 + max0))
> +	      if (wi::ltu_p (size, min0 + max0))
>   		{
>   		  min0 -= size;
>   		  max0 -= size;
>   		}
>   
> -	      if (size.ltu_p (min1 + max1))
> +	      if (wi::ltu_p (size, min1 + max1))
>   		{
>   		  min1 -= size;
>   		  max1 -= size;
> @@ -2673,7 +2673,7 @@ extract_range_from_binary_expr_1 (value_
>   	  /* Sort the 4 products so that min is in prod0 and max is in
>   	     prod3.  */
>   	  /* min0min1 > max0max1 */
> -	  if (prod0.gts_p (prod3))
> +	  if (wi::gts_p (prod0, prod3))
>   	    {
>   	      wide_int tmp = prod3;
>   	      prod3 = prod0;
> @@ -2681,21 +2681,21 @@ extract_range_from_binary_expr_1 (value_
>   	    }
>   
>   	  /* min0max1 > max0min1 */
> -	  if (prod1.gts_p (prod2))
> +	  if (wi::gts_p (prod1, prod2))
>   	    {
>   	      wide_int tmp = prod2;
>   	      prod2 = prod1;
>   	      prod1 = tmp;
>   	    }
>   
> -	  if (prod0.gts_p (prod1))
> +	  if (wi::gts_p (prod0, prod1))
>   	    {
>   	      wide_int tmp = prod1;
>   	      prod1 = prod0;
>   	      prod0 = tmp;
>   	    }
>   
> -	  if (prod2.gts_p (prod3))
> +	  if (wi::gts_p (prod2, prod3))
>   	    {
>   	      wide_int tmp = prod3;
>   	      prod3 = prod2;
> @@ -2704,7 +2704,7 @@ extract_range_from_binary_expr_1 (value_
>   
>   	  /* diff = max - min.  */
>   	  prod2 = prod3 - prod0;
> -	  if (prod2.geu_p (sizem1))
> +	  if (wi::geu_p (prod2, sizem1))
>   	    {
>   	      /* the range covers all values.  */
>   	      set_value_range_to_varying (vr);
> @@ -2801,14 +2801,14 @@ extract_range_from_binary_expr_1 (value_
>   		{
>   		  low_bound = bound;
>   		  high_bound = complement;
> -		  if (wide_int::ltu_p (vr0.max, low_bound))
> +		  if (wi::ltu_p (vr0.max, low_bound))
>   		    {
>   		      /* [5, 6] << [1, 2] == [10, 24].  */
>   		      /* We're shifting out only zeroes, the value increases
>   			 monotonically.  */
>   		      in_bounds = true;
>   		    }
> -		  else if (high_bound.ltu_p (vr0.min))
> +		  else if (wi::ltu_p (high_bound, vr0.min))
>   		    {
>   		      /* [0xffffff00, 0xffffffff] << [1, 2]
>   		         == [0xfffffc00, 0xfffffffe].  */
> @@ -2822,8 +2822,8 @@ extract_range_from_binary_expr_1 (value_
>   		  /* [-1, 1] << [1, 2] == [-4, 4].  */
>   		  low_bound = complement;
>   		  high_bound = bound;
> -		  if (wide_int::lts_p (vr0.max, high_bound)
> -		      && low_bound.lts_p (wide_int (vr0.min)))
> +		  if (wi::lts_p (vr0.max, high_bound)
> +		      && wi::lts_p (low_bound, vr0.min))
>   		    {
>   		      /* For non-negative numbers, we're shifting out only
>   			 zeroes, the value increases monotonically.
> @@ -3844,7 +3844,7 @@ adjust_range_with_scev (value_range_t *v
>   	  if (!overflow
>   	      && wtmp.fits_to_tree_p (TREE_TYPE (init))
>   	      && (sgn == UNSIGNED
> -		  || (wtmp.gts_p (0) == wide_int::gts_p (step, 0))))
> +		  || wi::gts_p (wtmp, 0) == wi::gts_p (step, 0)))
>   	    {
>   	      tem = wide_int_to_tree (TREE_TYPE (init), wtmp);
>   	      extract_range_from_binary_expr (&maxvr, PLUS_EXPR,
> @@ -4736,7 +4736,7 @@ masked_increment (wide_int val, wide_int
>         res = bit - 1;
>         res = (val + bit).and_not (res);
>         res &= mask;
> -      if (res.gtu_p (val))
> +      if (wi::gtu_p (res, val))
>   	return res ^ sgnbit;
>       }
>     return val ^ sgnbit;
> @@ -6235,7 +6235,7 @@ search_for_addr_array (tree t, location_
>   
>         idx = mem_ref_offset (t);
>         idx = idx.sdiv_trunc (addr_wide_int (el_sz));
> -      if (idx.lts_p (0))
> +      if (wi::lts_p (idx, 0))
>   	{
>   	  if (dump_file && (dump_flags & TDF_DETAILS))
>   	    {
> @@ -6247,9 +6247,7 @@ search_for_addr_array (tree t, location_
>   		      "array subscript is below array bounds");
>   	  TREE_NO_WARNING (t) = 1;
>   	}
> -      else if (idx.gts_p (addr_wide_int (up_bound)
> -			  - low_bound
> -			  + 1))
> +      else if (wi::gts_p (idx, addr_wide_int (up_bound) - low_bound + 1))
>   	{
>   	  if (dump_file && (dump_flags & TDF_DETAILS))
>   	    {
> @@ -8681,7 +8679,7 @@ range_fits_type_p (value_range_t *vr, un
>        a signed wide_int, while a negative value cannot be represented
>        by an unsigned wide_int.  */
>     if (src_sgn != dest_sgn
> -      && (max_wide_int (vr->min).lts_p (0) || max_wide_int (vr->max).lts_p (0)))
> +      && (wi::lts_p (vr->min, 0) || wi::lts_p (vr->max, 0)))
>       return false;
>   
>     /* Then we can perform the conversion on both ends and compare
> @@ -8985,7 +8983,7 @@ simplify_conversion_using_ranges (gimple
>   
>     /* If the first conversion is not injective, the second must not
>        be widening.  */
> -  if ((innermax - innermin).gtu_p (max_wide_int::mask (middle_prec, false))
> +  if (wi::gtu_p (innermax - innermin, max_wide_int::mask (middle_prec, false))
>         && middle_prec < final_prec)
>       return false;
>     /* We also want a medium value so that we can track the effect that
> Index: gcc/tree.c
> ===================================================================
> --- gcc/tree.c	2013-08-25 07:42:28.423609800 +0100
> +++ gcc/tree.c	2013-08-25 07:42:29.203617339 +0100
> @@ -1228,7 +1228,7 @@ wide_int_to_tree (tree type, const wide_
>       case BOOLEAN_TYPE:
>         /* Cache false or true.  */
>         limit = 2;
> -      if (cst.leu_p (1))
> +      if (wi::leu_p (cst, 1))
>   	ix = cst.to_uhwi ();
>         break;
>   
> @@ -1247,7 +1247,7 @@ wide_int_to_tree (tree type, const wide_
>   	      if (cst.to_uhwi () < (unsigned HOST_WIDE_INT) INTEGER_SHARE_LIMIT)
>   		ix = cst.to_uhwi ();
>   	    }
> -	  else if (cst.ltu_p (INTEGER_SHARE_LIMIT))
> +	  else if (wi::ltu_p (cst, INTEGER_SHARE_LIMIT))
>   	    ix = cst.to_uhwi ();
>   	}
>         else
> @@ -1264,7 +1264,7 @@ wide_int_to_tree (tree type, const wide_
>   		  if (cst.to_shwi () < INTEGER_SHARE_LIMIT)
>   		    ix = cst.to_shwi () + 1;
>   		}
> -	      else if (cst.lts_p (INTEGER_SHARE_LIMIT))
> +	      else if (wi::lts_p (cst, INTEGER_SHARE_LIMIT))
>   		ix = cst.to_shwi () + 1;
>   	    }
>   	}
> @@ -1381,7 +1381,7 @@ cache_integer_cst (tree t)
>       case BOOLEAN_TYPE:
>         /* Cache false or true.  */
>         limit = 2;
> -      if (wide_int::ltu_p (t, 2))
> +      if (wi::ltu_p (t, 2))
>   	ix = TREE_INT_CST_ELT (t, 0);
>         break;
>   
> @@ -1400,7 +1400,7 @@ cache_integer_cst (tree t)
>   	      if (tree_to_uhwi (t) < (unsigned HOST_WIDE_INT) INTEGER_SHARE_LIMIT)
>   		ix = tree_to_uhwi (t);
>   	    }
> -	  else if (wide_int::ltu_p (t, INTEGER_SHARE_LIMIT))
> +	  else if (wi::ltu_p (t, INTEGER_SHARE_LIMIT))
>   	    ix = tree_to_uhwi (t);
>   	}
>         else
> @@ -1417,7 +1417,7 @@ cache_integer_cst (tree t)
>   		  if (tree_to_shwi (t) < INTEGER_SHARE_LIMIT)
>   		    ix = tree_to_shwi (t) + 1;
>   		}
> -	      else if (wide_int::ltu_p (t, INTEGER_SHARE_LIMIT))
> +	      else if (wi::ltu_p (t, INTEGER_SHARE_LIMIT))
>   		ix = tree_to_shwi (t) + 1;
>   	    }
>   	}
> @@ -1451,7 +1451,7 @@ cache_integer_cst (tree t)
>         /* If there is already an entry for the number verify it's the
>            same.  */
>         if (*slot)
> -	gcc_assert (wide_int::eq_p (((tree)*slot), t));
> +	gcc_assert (wi::eq_p (tree (*slot), t));
>         else
>   	/* Otherwise insert this one into the hash table.  */
>   	*slot = t;
> @@ -6757,7 +6757,7 @@ tree_int_cst_equal (const_tree t1, const
>     prec2 = TYPE_PRECISION (TREE_TYPE (t2));
>   
>     if (prec1 == prec2)
> -    return wide_int::eq_p (t1, t2);
> +    return wi::eq_p (t1, t2);
>     else if (prec1 < prec2)
>       return (wide_int (t1)).force_to_size (prec2, TYPE_SIGN (TREE_TYPE (t1))) == t2;
>     else
> @@ -8562,7 +8562,7 @@ int_fits_type_p (const_tree c, const_tre
>   
>   	  if (c_neg && !t_neg)
>   	    return false;
> -	  if ((c_neg || !t_neg) && wc.ltu_p (wd))
> +	  if ((c_neg || !t_neg) && wi::ltu_p (wc, wd))
>   	    return false;
>   	}
>         else if (wc.cmp (wd, TYPE_SIGN (TREE_TYPE (type_low_bound))) < 0)
> @@ -8583,7 +8583,7 @@ int_fits_type_p (const_tree c, const_tre
>   
>   	  if (t_neg && !c_neg)
>   	    return false;
> -	  if ((t_neg || !c_neg) && wc.gtu_p (wd))
> +	  if ((t_neg || !c_neg) && wi::gtu_p (wc, wd))
>   	    return false;
>   	}
>         else if (wc.cmp (wd, TYPE_SIGN (TREE_TYPE (type_high_bound))) > 0)
> Index: gcc/tree.h
> ===================================================================
> --- gcc/tree.h	2013-08-25 07:17:37.505554513 +0100
> +++ gcc/tree.h	2013-08-25 07:42:29.204617349 +0100
> @@ -1411,10 +1411,10 @@ #define TREE_LANG_FLAG_6(NODE) \
>   /* Define additional fields and accessors for nodes representing constants.  */
>   
>   #define INT_CST_LT(A, B)				\
> -  (wide_int::lts_p (A, B))
> +  (wi::lts_p (A, B))
>   
>   #define INT_CST_LT_UNSIGNED(A, B)			\
> -  (wide_int::ltu_p (A, B))
> +  (wi::ltu_p (A, B))
>   
>   #define TREE_INT_CST_NUNITS(NODE) (INTEGER_CST_CHECK (NODE)->base.u.length)
>   #define TREE_INT_CST_ELT(NODE, I) TREE_INT_CST_ELT_CHECK (NODE, I)
> Index: gcc/wide-int.cc
> ===================================================================
> --- gcc/wide-int.cc	2013-08-25 07:42:28.471610264 +0100
> +++ gcc/wide-int.cc	2013-08-25 07:42:29.205617359 +0100
> @@ -598,9 +598,9 @@ top_bit_of (const HOST_WIDE_INT *a, unsi
>   
>   /* Return true if OP0 == OP1.  */
>   bool
> -wide_int_ro::eq_p_large (const HOST_WIDE_INT *op0, unsigned int op0len,
> -			 unsigned int prec,
> -			 const HOST_WIDE_INT *op1, unsigned int op1len)
> +wi::eq_p_large (const HOST_WIDE_INT *op0, unsigned int op0len,
> +		unsigned int prec,
> +		const HOST_WIDE_INT *op1, unsigned int op1len)
>   {
>     int l0 = op0len - 1;
>     unsigned int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
> @@ -628,10 +628,10 @@ wide_int_ro::eq_p_large (const HOST_WIDE
>   
>   /* Return true if OP0 < OP1 using signed comparisons.  */
>   bool
> -wide_int_ro::lts_p_large (const HOST_WIDE_INT *op0, unsigned int op0len,
> -			  unsigned int p0,
> -			  const HOST_WIDE_INT *op1, unsigned int op1len,
> -			  unsigned int p1)
> +wi::lts_p_large (const HOST_WIDE_INT *op0, unsigned int op0len,
> +		 unsigned int p0,
> +		 const HOST_WIDE_INT *op1, unsigned int op1len,
> +		 unsigned int p1)
>   {
>     HOST_WIDE_INT s0, s1;
>     unsigned HOST_WIDE_INT u0, u1;
> @@ -709,8 +709,8 @@ wide_int_ro::cmps_large (const HOST_WIDE
>   
>   /* Return true if OP0 < OP1 using unsigned comparisons.  */
>   bool
> -wide_int_ro::ltu_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, unsigned int p0,
> -			  const HOST_WIDE_INT *op1, unsigned int op1len, unsigned int p1)
> +wi::ltu_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, unsigned int p0,
> +		 const HOST_WIDE_INT *op1, unsigned int op1len, unsigned int p1)
>   {
>     unsigned HOST_WIDE_INT x0;
>     unsigned HOST_WIDE_INT x1;
> Index: gcc/wide-int.h
> ===================================================================
> --- gcc/wide-int.h	2013-08-25 07:42:28.424609809 +0100
> +++ gcc/wide-int.h	2013-08-25 08:23:14.445592968 +0100
> @@ -304,6 +304,95 @@ signedp <unsigned long> (unsigned long)
>     return false;
>   }
>   
> +/* This class, which has no default implementation, is expected to
> +   provide the following routines:
> +
> +   HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p,
> +			   ... x)
> +     -- Decompose integer X into a length, precision and array of
> +	HOST_WIDE_INTs.  Store the length in *L, the precision in *P
> +	and return the array.  S is available as scratch space if needed.  */
> +template <typename T> struct wide_int_accessors;
> +
> +namespace wi
> +{
> +  template <typename T1, typename T2>
> +  bool eq_p (const T1 &, const T2 &);
> +
> +  template <typename T1, typename T2>
> +  bool lt_p (const T1 &, const T2 &, signop);
> +
> +  template <typename T1, typename T2>
> +  bool lts_p (const T1 &, const T2 &);
> +
> +  template <typename T1, typename T2>
> +  bool ltu_p (const T1 &, const T2 &);
> +
> +  template <typename T1, typename T2>
> +  bool le_p (const T1 &, const T2 &, signop);
> +
> +  template <typename T1, typename T2>
> +  bool les_p (const T1 &, const T2 &);
> +
> +  template <typename T1, typename T2>
> +  bool leu_p (const T1 &, const T2 &);
> +
> +  template <typename T1, typename T2>
> +  bool gt_p (const T1 &, const T2 &, signop);
> +
> +  template <typename T1, typename T2>
> +  bool gts_p (const T1 &, const T2 &);
> +
> +  template <typename T1, typename T2>
> +  bool gtu_p (const T1 &, const T2 &);
> +
> +  template <typename T1, typename T2>
> +  bool ge_p (const T1 &, const T2 &, signop);
> +
> +  template <typename T1, typename T2>
> +  bool ges_p (const T1 &, const T2 &);
> +
> +  template <typename T1, typename T2>
> +  bool geu_p (const T1 &, const T2 &);
> +
> +  /* Comparisons.  */
> +  bool eq_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
> +		   const HOST_WIDE_INT *, unsigned int);
> +  bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
> +		    const HOST_WIDE_INT *, unsigned int, unsigned int);
> +  bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
> +		    const HOST_WIDE_INT *, unsigned int, unsigned int);
> +  void check_precision (unsigned int *, unsigned int *, bool, bool);
> +
> +  template <typename T>
> +  const HOST_WIDE_INT *to_shwi1 (HOST_WIDE_INT *, unsigned int *,
> +				 unsigned int *, const T &);
> +
> +  template <typename T>
> +  const HOST_WIDE_INT *to_shwi2 (HOST_WIDE_INT *, unsigned int *, const T &);
> +}
> +
> +/* Decompose integer X into a length, precision and array of HOST_WIDE_INTs.
> +   Store the length in *L, the precision in *P and return the array.
> +   S is available as a scratch array if needed, and can be used as
> +   the return value.  */
> +template <typename T>
> +inline const HOST_WIDE_INT *
> +wi::to_shwi1 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p,
> +	      const T &x)
> +{
> +  return wide_int_accessors <T>::to_shwi (s, l, p, x);
> +}
> +
> +/* Like to_shwi1, but without the precision.  */
> +template <typename T>
> +inline const HOST_WIDE_INT *
> +wi::to_shwi2 (HOST_WIDE_INT *s, unsigned int *l, const T &x)
> +{
> +  unsigned int p;
> +  return wide_int_accessors <T>::to_shwi (s, l, &p, x);
> +}
> +
>   class wide_int;
>   
>   class GTY(()) wide_int_ro
> @@ -323,7 +412,6 @@ class GTY(()) wide_int_ro
>     unsigned short len;
>     unsigned int precision;
>   
> -  const HOST_WIDE_INT *get_val () const;
>     wide_int_ro &operator = (const wide_int_ro &);
>   
>   public:
> @@ -374,6 +462,7 @@ class GTY(()) wide_int_ro
>     /* Public accessors for the interior of a wide int.  */
>     unsigned short get_len () const;
>     unsigned int get_precision () const;
> +  const HOST_WIDE_INT *get_val () const;
>     HOST_WIDE_INT elt (unsigned int) const;
>   
>     /* Comparative functions.  */
> @@ -389,85 +478,10 @@ class GTY(()) wide_int_ro
>     template <typename T>
>     bool operator == (const T &) const;
>   
> -  template <typename T1, typename T2>
> -  static bool eq_p (const T1 &, const T2 &);
> -
>     template <typename T>
>     bool operator != (const T &) const;
>   
>     template <typename T>
> -  bool lt_p (const T &, signop) const;
> -
> -  template <typename T1, typename T2>
> -  static bool lt_p (const T1 &, const T2 &, signop);
> -
> -  template <typename T>
> -  bool lts_p (const T &) const;
> -
> -  template <typename T1, typename T2>
> -  static bool lts_p (const T1 &, const T2 &);
> -
> -  template <typename T>
> -  bool ltu_p (const T &) const;
> -
> -  template <typename T1, typename T2>
> -  static bool ltu_p (const T1 &, const T2 &);
> -
> -  template <typename T>
> -  bool le_p (const T &, signop) const;
> -
> -  template <typename T1, typename T2>
> -  static bool le_p (const T1 &, const T2 &, signop);
> -
> -  template <typename T>
> -  bool les_p (const T &) const;
> -
> -  template <typename T1, typename T2>
> -  static bool les_p (const T1 &, const T2 &);
> -
> -  template <typename T>
> -  bool leu_p (const T &) const;
> -
> -  template <typename T1, typename T2>
> -  static bool leu_p (const T1 &, const T2 &);
> -
> -  template <typename T>
> -  bool gt_p (const T &, signop) const;
> -
> -  template <typename T1, typename T2>
> -  static bool gt_p (const T1 &, const T2 &, signop);
> -
> -  template <typename T>
> -  bool gts_p (const T &) const;
> -
> -  template <typename T1, typename T2>
> -  static bool gts_p (const T1 &, const T2 &);
> -
> -  template <typename T>
> -  bool gtu_p (const T &) const;
> -
> -  template <typename T1, typename T2>
> -  static bool gtu_p (const T1 &, const T2 &);
> -
> -  template <typename T>
> -  bool ge_p (const T &, signop) const;
> -
> -  template <typename T1, typename T2>
> -  static bool ge_p (const T1 &, const T2 &, signop);
> -
> -  template <typename T>
> -  bool ges_p (const T &) const;
> -
> -  template <typename T1, typename T2>
> -  static bool ges_p (const T1 &, const T2 &);
> -
> -  template <typename T>
> -  bool geu_p (const T &) const;
> -
> -  template <typename T1, typename T2>
> -  static bool geu_p (const T1 &, const T2 &);
> -
> -  template <typename T>
>     int cmp (const T &, signop) const;
>   
>     template <typename T>
> @@ -705,18 +719,10 @@ class GTY(()) wide_int_ro
>     /* Internal versions that do the work if the values do not fit in a HWI.  */
>   
>     /* Comparisons */
> -  static bool eq_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
> -			  const HOST_WIDE_INT *, unsigned int);
> -  static bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
> -			   const HOST_WIDE_INT *, unsigned int, unsigned int);
>     static int cmps_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
>   			 const HOST_WIDE_INT *, unsigned int, unsigned int);
> -  static bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
> -			   const HOST_WIDE_INT *, unsigned int, unsigned int);
>     static int cmpu_large (const HOST_WIDE_INT *, unsigned int, unsigned int,
>   			 const HOST_WIDE_INT *, unsigned int, unsigned int);
> -  static void check_precision (unsigned int *, unsigned int *, bool, bool);
> -
>   
>     /* Logicals.  */
>     static wide_int_ro and_large (const HOST_WIDE_INT *, unsigned int,
> @@ -769,17 +775,6 @@ class GTY(()) wide_int_ro
>     int trunc_shift (const HOST_WIDE_INT *, unsigned int, unsigned int,
>   		   ShiftOp) const;
>   
> -  template <typename T>
> -  static bool top_bit_set (T);
> -
> -  template <typename T>
> -  static const HOST_WIDE_INT *to_shwi1 (HOST_WIDE_INT *, unsigned int *,
> -					unsigned int *, const T &);
> -
> -  template <typename T>
> -  static const HOST_WIDE_INT *to_shwi2 (HOST_WIDE_INT *, unsigned int *,
> -					const T &);
> -
>   #ifdef DEBUG_WIDE_INT
>     /* Debugging routines.  */
>     static void debug_wa (const char *, const wide_int_ro &,
> @@ -1163,51 +1158,11 @@ wide_int_ro::neg_p (signop sgn) const
>     return sign_mask () != 0;
>   }
>   
> -/* Return true if THIS == C.  If both operands have nonzero precisions,
> -   the precisions must be the same.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::operator == (const T &c) const
> -{
> -  bool result;
> -  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
> -  const HOST_WIDE_INT *s;
> -  unsigned int cl;
> -  unsigned int p1, p2;
> -
> -  p1 = precision;
> -
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, false);
> -
> -  if (p1 == 0)
> -    /* There are prec 0 types and we need to do this to check their
> -       min and max values.  */
> -    result = (len == cl) && (val[0] == s[0]);
> -  else if (p1 < HOST_BITS_PER_WIDE_INT)
> -    {
> -      unsigned HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << p1) - 1;
> -      result = (val[0] & mask) == (s[0] & mask);
> -    }
> -  else if (p1 == HOST_BITS_PER_WIDE_INT)
> -    result = val[0] == s[0];
> -  else
> -    result = eq_p_large (val, len, p1, s, cl);
> -
> -  if (result)
> -    gcc_assert (len == cl);
> -
> -#ifdef DEBUG_WIDE_INT
> -  debug_vwa ("wide_int_ro:: %d = (%s == %s)\n", result, *this, s, cl, p2);
> -#endif
> -  return result;
> -}
> -
>   /* Return true if C1 == C2.  If both parameters have nonzero precisions,
>      then those precisions must be equal.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::eq_p (const T1 &c1, const T2 &c2)
> +wi::eq_p (const T1 &c1, const T2 &c2)
>   {
>     bool result;
>     HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS];
> @@ -1237,51 +1192,28 @@ wide_int_ro::eq_p (const T1 &c1, const T
>     return result;
>   }
>   
> -/* Return true if THIS != C.  If both parameters have nonzero precisions,
> -   then those precisions must be equal.  */
> +/* Return true if THIS == C.  If both operands have nonzero precisions,
> +   the precisions must be the same.  */
>   template <typename T>
>   inline bool
> -wide_int_ro::operator != (const T &c) const
> +wide_int_ro::operator == (const T &c) const
>   {
> -  return !(*this == c);
> +  return wi::eq_p (*this, c);
>   }
>   
> -/* Return true if THIS < C using signed comparisons.  */
> +/* Return true if THIS != C.  If both parameters have nonzero precisions,
> +   then those precisions must be equal.  */
>   template <typename T>
>   inline bool
> -wide_int_ro::lts_p (const T &c) const
> +wide_int_ro::operator != (const T &c) const
>   {
> -  bool result;
> -  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
> -  const HOST_WIDE_INT *s;
> -  unsigned int cl;
> -  unsigned int p1, p2;
> -
> -  p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> -
> -  if (p1 <= HOST_BITS_PER_WIDE_INT
> -      && p2 <= HOST_BITS_PER_WIDE_INT)
> -    {
> -      gcc_assert (cl != 0);
> -      HOST_WIDE_INT x0 = sext_hwi (val[0], p1);
> -      HOST_WIDE_INT x1 = sext_hwi (s[0], p2);
> -      result = x0 < x1;
> -    }
> -  else
> -    result = lts_p_large (val, len, p1, s, cl, p2);
> -
> -#ifdef DEBUG_WIDE_INT
> -  debug_vwa ("wide_int_ro:: %d = (%s lts_p %s\n", result, *this, s, cl, p2);
> -#endif
> -  return result;
> +  return !wi::eq_p (*this, c);
>   }
>   
>   /* Return true if C1 < C2 using signed comparisons.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::lts_p (const T1 &c1, const T2 &c2)
> +wi::lts_p (const T1 &c1, const T2 &c2)
>   {
>     bool result;
>     HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS];
> @@ -1305,38 +1237,8 @@ wide_int_ro::lts_p (const T1 &c1, const
>       result = lts_p_large (s1, cl1, p1, s2, cl2, p2);
>   
>   #ifdef DEBUG_WIDE_INT
> -  debug_vaa ("wide_int_ro:: %d = (%s lts_p %s\n", result, s1, cl1, p1, s2, cl2, p2);
> -#endif
> -  return result;
> -}
> -
> -/* Return true if THIS < C using unsigned comparisons.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::ltu_p (const T &c) const
> -{
> -  bool result;
> -  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
> -  const HOST_WIDE_INT *s;
> -  unsigned int cl;
> -  unsigned int p1, p2;
> -
> -  p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> -
> -  if (p1 <= HOST_BITS_PER_WIDE_INT
> -      && p2 <= HOST_BITS_PER_WIDE_INT)
> -    {
> -      unsigned HOST_WIDE_INT x0 = zext_hwi (val[0], p1);
> -      unsigned HOST_WIDE_INT x1 = zext_hwi (s[0], p2);
> -      result = x0 < x1;
> -    }
> -  else
> -    result = ltu_p_large (val, len, p1, s, cl, p2);
> -
> -#ifdef DEBUG_WIDE_INT
> -  debug_vwa ("wide_int_ro:: %d = (%s ltu_p %s)\n", result, *this, s, cl, p2);
> +  wide_int_ro::debug_vaa ("wide_int_ro:: %d = (%s lts_p %s)\n",
> +			  result, s1, cl1, p1, s2, cl2, p2);
>   #endif
>     return result;
>   }
> @@ -1344,7 +1246,7 @@ wide_int_ro::ltu_p (const T &c) const
>   /* Return true if C1 < C2 using unsigned comparisons.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::ltu_p (const T1 &c1, const T2 &c2)
> +wi::ltu_p (const T1 &c1, const T2 &c2)
>   {
>     bool result;
>     HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS];
> @@ -1372,21 +1274,10 @@ wide_int_ro::ltu_p (const T1 &c1, const
>     return result;
>   }
>   
> -/* Return true if THIS < C.  Signedness is indicated by SGN.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::lt_p (const T &c, signop sgn) const
> -{
> -  if (sgn == SIGNED)
> -    return lts_p (c);
> -  else
> -    return ltu_p (c);
> -}
> -
>   /* Return true if C1 < C2.  Signedness is indicated by SGN.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::lt_p (const T1 &c1, const T2 &c2, signop sgn)
> +wi::lt_p (const T1 &c1, const T2 &c2, signop sgn)
>   {
>     if (sgn == SIGNED)
>       return lts_p (c1, c2);
> @@ -1394,53 +1285,26 @@ wide_int_ro::lt_p (const T1 &c1, const T
>       return ltu_p (c1, c2);
>   }
>   
> -/* Return true if THIS <= C using signed comparisons.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::les_p (const T &c) const
> -{
> -  return !gts_p (c);
> -}
> -
>   /* Return true if C1 <= C2 using signed comparisons.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::les_p (const T1 &c1, const T2 &c2)
> +wi::les_p (const T1 &c1, const T2 &c2)
>   {
>     return !gts_p (c1, c2);
>   }
>   
> -/* Return true if THIS <= C using unsigned comparisons.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::leu_p (const T &c) const
> -{
> -  return !gtu_p (c);
> -}
> -
>   /* Return true if C1 <= C2 using unsigned comparisons.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::leu_p (const T1 &c1, const T2 &c2)
> +wi::leu_p (const T1 &c1, const T2 &c2)
>   {
>     return !gtu_p (c1, c2);
>   }
>   
> -/* Return true if THIS <= C.  Signedness is indicated by SGN.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::le_p (const T &c, signop sgn) const
> -{
> -  if (sgn == SIGNED)
> -    return les_p (c);
> -  else
> -    return leu_p (c);
> -}
> -
>   /* Return true if C1 <= C2.  Signedness is indicated by SGN.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::le_p (const T1 &c1, const T2 &c2, signop sgn)
> +wi::le_p (const T1 &c1, const T2 &c2, signop sgn)
>   {
>     if (sgn == SIGNED)
>       return les_p (c1, c2);
> @@ -1448,53 +1312,26 @@ wide_int_ro::le_p (const T1 &c1, const T
>       return leu_p (c1, c2);
>   }
>   
> -/* Return true if THIS > C using signed comparisons.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::gts_p (const T &c) const
> -{
> -  return lts_p (c, *this);
> -}
> -
>   /* Return true if C1 > C2 using signed comparisons.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::gts_p (const T1 &c1, const T2 &c2)
> +wi::gts_p (const T1 &c1, const T2 &c2)
>   {
>     return lts_p (c2, c1);
>   }
>   
> -/* Return true if THIS > C using unsigned comparisons.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::gtu_p (const T &c) const
> -{
> -  return ltu_p (c, *this);
> -}
> -
>   /* Return true if C1 > C2 using unsigned comparisons.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::gtu_p (const T1 &c1, const T2 &c2)
> +wi::gtu_p (const T1 &c1, const T2 &c2)
>   {
>     return ltu_p (c2, c1);
>   }
>   
> -/* Return true if THIS > C.  Signedness is indicated by SGN.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::gt_p (const T &c, signop sgn) const
> -{
> -  if (sgn == SIGNED)
> -    return gts_p (c);
> -  else
> -    return gtu_p (c);
> -}
> -
>   /* Return true if C1 > C2.  Signedness is indicated by SGN.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::gt_p (const T1 &c1, const T2 &c2, signop sgn)
> +wi::gt_p (const T1 &c1, const T2 &c2, signop sgn)
>   {
>     if (sgn == SIGNED)
>       return gts_p (c1, c2);
> @@ -1502,53 +1339,26 @@ wide_int_ro::gt_p (const T1 &c1, const T
>       return gtu_p (c1, c2);
>   }
>   
> -/* Return true if THIS >= C using signed comparisons.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::ges_p (const T &c) const
> -{
> -  return !lts_p (c);
> -}
> -
>   /* Return true if C1 >= C2 using signed comparisons.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::ges_p (const T1 &c1, const T2 &c2)
> +wi::ges_p (const T1 &c1, const T2 &c2)
>   {
>     return !lts_p (c1, c2);
>   }
>   
> -/* Return true if THIS >= C using unsigned comparisons.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::geu_p (const T &c) const
> -{
> -  return !ltu_p (c);
> -}
> -
>   /* Return true if C1 >= C2 using unsigned comparisons.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::geu_p (const T1 &c1, const T2 &c2)
> +wi::geu_p (const T1 &c1, const T2 &c2)
>   {
>     return !ltu_p (c1, c2);
>   }
>   
> -/* Return true if THIS >= C.  Signedness is indicated by SGN.  */
> -template <typename T>
> -inline bool
> -wide_int_ro::ge_p (const T &c, signop sgn) const
> -{
> -  if (sgn == SIGNED)
> -    return ges_p (c);
> -  else
> -    return geu_p (c);
> -}
> -
>   /* Return true if C1 >= C2.  Signedness is indicated by SGN.  */
>   template <typename T1, typename T2>
>   inline bool
> -wide_int_ro::ge_p (const T1 &c1, const T2 &c2, signop sgn)
> +wi::ge_p (const T1 &c1, const T2 &c2, signop sgn)
>   {
>     if (sgn == SIGNED)
>       return ges_p (c1, c2);
> @@ -1568,7 +1378,7 @@ wide_int_ro::cmps (const T &c) const
>     unsigned int cl;
>     unsigned int prec;
>   
> -  s = to_shwi1 (ws, &cl, &prec, c);
> +  s = wi::to_shwi1 (ws, &cl, &prec, c);
>     if (prec == 0)
>       prec = precision;
>   
> @@ -1606,7 +1416,7 @@ wide_int_ro::cmpu (const T &c) const
>     unsigned int cl;
>     unsigned int prec;
>   
> -  s = to_shwi1 (ws, &cl, &prec, c);
> +  s = wi::to_shwi1 (ws, &cl, &prec, c);
>     if (prec == 0)
>       prec = precision;
>   
> @@ -1681,13 +1491,12 @@ wide_int_ro::min (const T &c, signop sgn
>   
>     p1 = precision;
>   
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
> -  if (sgn == SIGNED)
> -    return lts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
> -  else
> -    return ltu_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
> +  return (wi::lt_p (*this, c, sgn)
> +	  ? *this
> +	  : wide_int_ro::from_array (s, cl, p1, false));
>   }
>   
>   /* Return the signed or unsigned min of THIS and OP1.  */
> @@ -1695,9 +1504,9 @@ wide_int_ro::min (const T &c, signop sgn
>   wide_int_ro::min (const wide_int_ro &op1, signop sgn) const
>   {
>     if (sgn == SIGNED)
> -    return lts_p (op1) ? (*this) : op1;
> +    return wi::lts_p (*this, op1) ? *this : op1;
>     else
> -    return ltu_p (op1) ? (*this) : op1;
> +    return wi::ltu_p (*this, op1) ? *this : op1;
>   }
>   
>   /* Return the signed or unsigned max of THIS and C.  */
> @@ -1712,22 +1521,18 @@ wide_int_ro::max (const T &c, signop sgn
>   
>     p1 = precision;
>   
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> -  if (sgn == SIGNED)
> -    return gts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
> -  else
> -    return gtu_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
> +  return (wi::gt_p (*this, c, sgn)
> +	  ? *this
> +	  : wide_int_ro::from_array (s, cl, p1, false));
>   }
>   
>   /* Return the signed or unsigned max of THIS and OP1.  */
>   inline wide_int_ro
>   wide_int_ro::max (const wide_int_ro &op1, signop sgn) const
>   {
> -  if (sgn == SIGNED)
> -    return gts_p (op1) ? (*this) : op1;
> -  else
> -    return gtu_p (op1) ? (*this) : op1;
> +  return wi::gt_p (*this, op1, sgn) ? *this : op1;
>   }
>   
>   /* Return the signed min of THIS and C.  */
> @@ -1742,17 +1547,19 @@ wide_int_ro::smin (const T &c) const
>   
>     p1 = precision;
>   
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
> -  return lts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
> +  return (wi::lts_p (*this, c)
> +	  ? *this
> +	  : wide_int_ro::from_array (s, cl, p1, false));
>   }
>   
>   /* Return the signed min of THIS and OP1.  */
>   inline wide_int_ro
>   wide_int_ro::smin (const wide_int_ro &op1) const
>   {
> -  return lts_p (op1) ? (*this) : op1;
> +  return wi::lts_p (*this, op1) ? *this : op1;
>   }
>   
>   /* Return the signed max of THIS and C.  */
> @@ -1767,17 +1574,19 @@ wide_int_ro::smax (const T &c) const
>   
>     p1 = precision;
>   
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
> -  return gts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
> +  return (wi::gts_p (*this, c)
> +	  ? *this
> +	  : wide_int_ro::from_array (s, cl, p1, false));
>   }
>   
>   /* Return the signed max of THIS and OP1.  */
>   inline wide_int_ro
>   wide_int_ro::smax (const wide_int_ro &op1) const
>   {
> -  return gts_p (op1) ? (*this) : op1;
> +  return wi::gts_p (*this, op1) ? *this : op1;
>   }
>   
>   /* Return the unsigned min of THIS and C.  */
> @@ -1792,15 +1601,17 @@ wide_int_ro::umin (const T &c) const
>   
>     p1 = precision;
>   
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  return ltu_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  return (wi::ltu_p (*this, c)
> +	  ? *this
> +	  : wide_int_ro::from_array (s, cl, p1, false));
>   }
>   
>   /* Return the unsigned min of THIS and OP1.  */
>   inline wide_int_ro
>   wide_int_ro::umin (const wide_int_ro &op1) const
>   {
> -  return ltu_p (op1) ? (*this) : op1;
> +  return wi::ltu_p (*this, op1) ? *this : op1;
>   }
>   
>   /* Return the unsigned max of THIS and C.  */
> @@ -1815,17 +1626,19 @@ wide_int_ro::umax (const T &c) const
>   
>     p1 = precision;
>   
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
> -  return gtu_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false);
> +  return (wi::gtu_p (*this, c)
> +	  ? *this
> +	  : wide_int_ro::from_array (s, cl, p1, false));
>   }
>   
>   /* Return the unsigned max of THIS and OP1.  */
>   inline wide_int_ro
>   wide_int_ro::umax (const wide_int_ro &op1) const
>   {
> -  return gtu_p (op1) ? (*this) : op1;
> +  return wi::gtu_p (*this, op1) ? *this : op1;
>   }
>   
>   /* Return THIS extended to PREC.  The signedness of the extension is
> @@ -1891,8 +1704,8 @@ wide_int_ro::operator & (const T &c) con
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     if (p1 <= HOST_BITS_PER_WIDE_INT)
>       {
> @@ -1921,8 +1734,8 @@ wide_int_ro::and_not (const T &c) const
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     if (p1 <= HOST_BITS_PER_WIDE_INT)
>       {
> @@ -1973,8 +1786,8 @@ wide_int_ro::operator | (const T &c) con
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     if (p1 <= HOST_BITS_PER_WIDE_INT)
>       {
> @@ -2003,8 +1816,8 @@ wide_int_ro::or_not (const T &c) const
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     if (p1 <= HOST_BITS_PER_WIDE_INT)
>       {
> @@ -2033,8 +1846,8 @@ wide_int_ro::operator ^ (const T &c) con
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     if (p1 <= HOST_BITS_PER_WIDE_INT)
>       {
> @@ -2063,8 +1876,8 @@ wide_int_ro::operator + (const T &c) con
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     if (p1 <= HOST_BITS_PER_WIDE_INT)
>       {
> @@ -2096,8 +1909,8 @@ wide_int_ro::add (const T &c, signop sgn
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     if (p1 <= HOST_BITS_PER_WIDE_INT)
>       {
> @@ -2141,8 +1954,8 @@ wide_int_ro::operator * (const T &c) con
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     if (p1 <= HOST_BITS_PER_WIDE_INT)
>       {
> @@ -2176,8 +1989,8 @@ wide_int_ro::mul (const T &c, signop sgn
>     if (overflow)
>       *overflow = false;
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     return mul_internal (false, false,
>   		       val, len, p1,
> @@ -2217,8 +2030,8 @@ wide_int_ro::mul_full (const T &c, signo
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     return mul_internal (false, true,
>   		       val, len, p1,
> @@ -2257,8 +2070,8 @@ wide_int_ro::mul_high (const T &c, signo
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     return mul_internal (true, false,
>   		       val, len, p1,
> @@ -2298,8 +2111,8 @@ wide_int_ro::operator - (const T &c) con
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     if (p1 <= HOST_BITS_PER_WIDE_INT)
>       {
> @@ -2331,8 +2144,8 @@ wide_int_ro::sub (const T &c, signop sgn
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, true, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, true, true);
>   
>     if (p1 <= HOST_BITS_PER_WIDE_INT)
>       {
> @@ -2379,8 +2192,8 @@ wide_int_ro::div_trunc (const T &c, sign
>     if (overflow)
>       *overflow = false;
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, false, true);
>   
>     return divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>   			  &remainder, false, overflow);
> @@ -2420,8 +2233,8 @@ wide_int_ro::div_floor (const T &c, sign
>     if (overflow)
>       *overflow = false;
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, false, true);
>   
>     return divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>   			  &remainder, false, overflow);
> @@ -2461,8 +2274,8 @@ wide_int_ro::div_ceil (const T &c, signo
>     if (overflow)
>       *overflow = false;
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, false, true);
>   
>     quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>   			      &remainder, true, overflow);
> @@ -2490,8 +2303,8 @@ wide_int_ro::div_round (const T &c, sign
>     if (overflow)
>       *overflow = false;
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, false, true);
>   
>     quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>   			      &remainder, true, overflow);
> @@ -2505,7 +2318,7 @@ wide_int_ro::div_round (const T &c, sign
>   	  wide_int_ro p_divisor = divisor.neg_p (SIGNED) ? -divisor : divisor;
>   	  p_divisor = p_divisor.rshiftu_large (1);
>   
> -	  if (p_divisor.gts_p (p_remainder))
> +	  if (wi::gts_p (p_divisor, p_remainder))
>   	    {
>   	      if (quotient.neg_p (SIGNED))
>   		return quotient - 1;
> @@ -2516,7 +2329,7 @@ wide_int_ro::div_round (const T &c, sign
>         else
>   	{
>   	  wide_int_ro p_divisor = divisor.rshiftu_large (1);
> -	  if (p_divisor.gtu_p (remainder))
> +	  if (wi::gtu_p (p_divisor, remainder))
>   	    return quotient + 1;
>   	}
>       }
> @@ -2537,8 +2350,8 @@ wide_int_ro::divmod_trunc (const T &c, w
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, false, true);
>   
>     return divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>   			  remainder, true, 0);
> @@ -2575,8 +2388,8 @@ wide_int_ro::divmod_floor (const T &c, w
>     unsigned int p1, p2;
>   
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, false, true);
>   
>     quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>   			      remainder, true, 0);
> @@ -2613,8 +2426,8 @@ wide_int_ro::mod_trunc (const T &c, sign
>     if (overflow)
>       *overflow = false;
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, false, true);
>   
>     divmod_internal (false, val, len, p1, s, cl, p2, sgn,
>   		   &remainder, true, overflow);
> @@ -2655,8 +2468,8 @@ wide_int_ro::mod_floor (const T &c, sign
>     if (overflow)
>       *overflow = false;
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, false, true);
>   
>     quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>   			      &remainder, true, overflow);
> @@ -2692,8 +2505,8 @@ wide_int_ro::mod_ceil (const T &c, signo
>     if (overflow)
>       *overflow = false;
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, false, true);
>   
>     quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>   			      &remainder, true, overflow);
> @@ -2721,8 +2534,8 @@ wide_int_ro::mod_round (const T &c, sign
>     if (overflow)
>       *overflow = false;
>     p1 = precision;
> -  s = to_shwi1 (ws, &cl, &p2, c);
> -  check_precision (&p1, &p2, false, true);
> +  s = wi::to_shwi1 (ws, &cl, &p2, c);
> +  wi::check_precision (&p1, &p2, false, true);
>   
>     quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>   			      &remainder, true, overflow);
> @@ -2737,7 +2550,7 @@ wide_int_ro::mod_round (const T &c, sign
>   	  wide_int_ro p_divisor = divisor.neg_p (SIGNED) ? -divisor : divisor;
>   	  p_divisor = p_divisor.rshiftu_large (1);
>   
> -	  if (p_divisor.gts_p (p_remainder))
> +	  if (wi::gts_p (p_divisor, p_remainder))
>   	    {
>   	      if (quotient.neg_p (SIGNED))
>   		return remainder + divisor;
> @@ -2748,7 +2561,7 @@ wide_int_ro::mod_round (const T &c, sign
>         else
>   	{
>   	  wide_int_ro p_divisor = divisor.rshiftu_large (1);
> -	  if (p_divisor.gtu_p (remainder))
> +	  if (wi::gtu_p (p_divisor, remainder))
>   	    return remainder - divisor;
>   	}
>       }
> @@ -2768,7 +2581,7 @@ wide_int_ro::lshift (const T &c, unsigne
>     unsigned int cl;
>     HOST_WIDE_INT shift;
>   
> -  s = to_shwi2 (ws, &cl, c);
> +  s = wi::to_shwi2 (ws, &cl, c);
>   
>     gcc_checking_assert (precision);
>   
> @@ -2806,7 +2619,7 @@ wide_int_ro::lshift_widen (const T &c, u
>     unsigned int cl;
>     HOST_WIDE_INT shift;
>   
> -  s = to_shwi2 (ws, &cl, c);
> +  s = wi::to_shwi2 (ws, &cl, c);
>   
>     gcc_checking_assert (precision);
>     gcc_checking_assert (res_prec);
> @@ -2843,7 +2656,7 @@ wide_int_ro::lrotate (const T &c, unsign
>     const HOST_WIDE_INT *s;
>     unsigned int cl;
>   
> -  s = to_shwi2 (ws, &cl, c);
> +  s = wi::to_shwi2 (ws, &cl, c);
>   
>     return lrotate ((unsigned HOST_WIDE_INT) s[0], prec);
>   }
> @@ -2901,7 +2714,7 @@ wide_int_ro::rshiftu (const T &c, unsign
>     unsigned int cl;
>     HOST_WIDE_INT shift;
>   
> -  s = to_shwi2 (ws, &cl, c);
> +  s = wi::to_shwi2 (ws, &cl, c);
>     gcc_checking_assert (precision);
>     shift = trunc_shift (s, cl, bitsize, trunc_op);
>   
> @@ -2944,7 +2757,7 @@ wide_int_ro::rshifts (const T &c, unsign
>     unsigned int cl;
>     HOST_WIDE_INT shift;
>   
> -  s = to_shwi2 (ws, &cl, c);
> +  s = wi::to_shwi2 (ws, &cl, c);
>     gcc_checking_assert (precision);
>     shift = trunc_shift (s, cl, bitsize, trunc_op);
>   
> @@ -2989,7 +2802,7 @@ wide_int_ro::rrotate (const T &c, unsign
>     const HOST_WIDE_INT *s;
>     unsigned int cl;
>   
> -  s = to_shwi2 (ws, &cl, c);
> +  s = wi::to_shwi2 (ws, &cl, c);
>     return rrotate ((unsigned HOST_WIDE_INT) s[0], prec);
>   }
>   
> @@ -3080,25 +2893,26 @@ wide_int_ro::trunc_shift (const HOST_WID
>       return cnt[0] & (bitsize - 1);
>   }
>   
> +/* Implementation of wide_int_accessors for primitive integer types
> +   like "int".  */
> +template <typename T>
> +struct primitive_wide_int_accessors
> +{
> +  static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *,
> +				       unsigned int *, const T &);
> +};
> +
>   template <typename T>
>   inline bool
> -wide_int_ro::top_bit_set (T x)
> +top_bit_set (T x)
>   {
> -  return (x >> (sizeof (x)*8 - 1)) != 0;
> +  return (x >> (sizeof (x) * 8 - 1)) != 0;
>   }
>   
> -/* The following template and its overrides are used for the first
> -   and second operand of static binary comparison functions.
> -   These have been implemented so that pointer copying is done
> -   from the rep of the operands rather than actual data copying.
> -   This is safe even for garbage collected objects since the value
> -   is immediately throw away.
> -
> -   This template matches all integers.  */
>   template <typename T>
>   inline const HOST_WIDE_INT *
> -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p,
> -		       const T &x)
> +primitive_wide_int_accessors <T>::to_shwi (HOST_WIDE_INT *s, unsigned int *l,
> +					   unsigned int *p, const T &x)
>   {
>     s[0] = x;
>     if (signedp (x)
> @@ -3114,29 +2928,23 @@ wide_int_ro::to_shwi1 (HOST_WIDE_INT *s,
>     return s;
>   }
>   
> -/* The following template and its overrides are used for the second
> -   operand of binary functions.  These have been implemented so that
> -   pointer copying is done from the rep of the second operand rather
> -   than actual data copying.  This is safe even for garbage collected
> -   objects since the value is immediately throw away.
> +template <>
> +struct wide_int_accessors <int>
> +  : public primitive_wide_int_accessors <int> {};
>   
> -   The next template matches all integers.  */
> -template <typename T>
> -inline const HOST_WIDE_INT *
> -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s, unsigned int *l, const T &x)
> -{
> -  s[0] = x;
> -  if (signedp (x)
> -      || sizeof (T) < sizeof (HOST_WIDE_INT)
> -      || ! top_bit_set (x))
> -    *l = 1;
> -  else
> -    {
> -      s[1] = 0;
> -      *l = 2;
> -    }
> -  return s;
> -}
> +template <>
> +struct wide_int_accessors <unsigned int>
> +  : public primitive_wide_int_accessors <unsigned int> {};
> +
> +#if HOST_BITS_PER_INT != HOST_BITS_PER_WIDE_INT
> +template <>
> +struct wide_int_accessors <HOST_WIDE_INT>
> +  : public primitive_wide_int_accessors <HOST_WIDE_INT> {};
> +
> +template <>
> +struct wide_int_accessors <unsigned HOST_WIDE_INT>
> +  : public primitive_wide_int_accessors <unsigned HOST_WIDE_INT> {};
> +#endif
>   
>   inline wide_int::wide_int () {}
>   
> @@ -3275,7 +3083,6 @@ class GTY(()) fixed_wide_int : public wi
>   protected:
>     fixed_wide_int &operator = (const wide_int &);
>     fixed_wide_int (const wide_int_ro);
> -  const HOST_WIDE_INT *get_val () const;
>   
>     using wide_int_ro::val;
>   
> @@ -3285,16 +3092,8 @@ class GTY(()) fixed_wide_int : public wi
>     using wide_int_ro::to_short_addr;
>     using wide_int_ro::fits_uhwi_p;
>     using wide_int_ro::fits_shwi_p;
> -  using wide_int_ro::gtu_p;
> -  using wide_int_ro::gts_p;
> -  using wide_int_ro::geu_p;
> -  using wide_int_ro::ges_p;
>     using wide_int_ro::to_shwi;
>     using wide_int_ro::operator ==;
> -  using wide_int_ro::ltu_p;
> -  using wide_int_ro::lts_p;
> -  using wide_int_ro::leu_p;
> -  using wide_int_ro::les_p;
>     using wide_int_ro::to_uhwi;
>     using wide_int_ro::cmps;
>     using wide_int_ro::neg_p;
> @@ -3510,13 +3309,6 @@ inline fixed_wide_int <bitsize>::fixed_w
>   }
>   
>   template <int bitsize>
> -inline const HOST_WIDE_INT *
> -fixed_wide_int <bitsize>::get_val () const
> -{
> -  return val;
> -}
> -
> -template <int bitsize>
>   inline fixed_wide_int <bitsize>
>   fixed_wide_int <bitsize>::from_wide_int (const wide_int &w)
>   {
> @@ -4165,118 +3957,62 @@ extern void gt_pch_nx(max_wide_int*);
>   
>   extern addr_wide_int mem_ref_offset (const_tree);
>   
> -/* The wide-int overload templates.  */
> -
>   template <>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, unsigned int *p,
> -		       const wide_int_ro &y)
> -{
> -  *p = y.precision;
> -  *l = y.len;
> -  return y.val;
> -}
> -
> -template <>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, unsigned int *p,
> -		       const wide_int &y)
> -{
> -  *p = y.precision;
> -  *l = y.len;
> -  return y.val;
> -}
> -
> -
> -template <>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, unsigned int *p,
> -		       const fixed_wide_int <addr_max_precision> &y)
> +struct wide_int_accessors <wide_int_ro>
>   {
> -  *p = y.get_precision ();
> -  *l = y.get_len ();
> -  return y.get_val ();
> -}
> +  static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *,
> +				       unsigned int *, const wide_int_ro &);
> +};
>   
> -#if addr_max_precision != MAX_BITSIZE_MODE_ANY_INT
> -template <>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, unsigned int *p,
> -		       const fixed_wide_int <MAX_BITSIZE_MODE_ANY_INT> &y)
> +inline const HOST_WIDE_INT *
> +wide_int_accessors <wide_int_ro>::to_shwi (HOST_WIDE_INT *, unsigned int *l,
> +					   unsigned int *p,
> +					   const wide_int_ro &y)
>   {
>     *p = y.get_precision ();
>     *l = y.get_len ();
>     return y.get_val ();
>   }
> -#endif
>   
>   template <>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, const wide_int &y)
> -{
> -  *l = y.len;
> -  return y.val;
> -}
> +struct wide_int_accessors <wide_int>
> +  : public wide_int_accessors <wide_int_ro> {};
>   
> +template <>
> +template <int N>
> +struct wide_int_accessors <fixed_wide_int <N> >
> +  : public wide_int_accessors <wide_int_ro> {};
>   
>   /* The tree and const_tree overload templates.   */
>   template <>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, unsigned int *p,
> -		       const tree &tcst)
> +struct wide_int_accessors <const_tree>
>   {
> -  tree type = TREE_TYPE (tcst);
> -
> -  *p = TYPE_PRECISION (type);
> -  *l = TREE_INT_CST_NUNITS (tcst);
> -  return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0);
> -}
> +  static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *,
> +				       unsigned int *, const_tree);
> +};
>   
> -template <>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, unsigned int *p,
> -		       const const_tree &tcst)
> +inline const HOST_WIDE_INT *
> +wide_int_accessors <const_tree>::to_shwi (HOST_WIDE_INT *, unsigned int *l,
> +					  unsigned int *p, const_tree tcst)
>   {
>     tree type = TREE_TYPE (tcst);
>   
>     *p = TYPE_PRECISION (type);
>     *l = TREE_INT_CST_NUNITS (tcst);
> -  return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0);
> +  return (const HOST_WIDE_INT *) &TREE_INT_CST_ELT (tcst, 0);
>   }
>   
>   template <>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, const tree &tcst)
> -{
> -  *l = TREE_INT_CST_NUNITS (tcst);
> -  return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0);
> -}
> -
> -template <>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, const const_tree &tcst)
> -{
> -  *l = TREE_INT_CST_NUNITS (tcst);
> -  return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0);
> -}
> +struct wide_int_accessors <tree> : public wide_int_accessors <const_tree> {};
>   
>   /* Checking for the functions that require that at least one of the
>      operands have a nonzero precision.  If both of them have a precision,
>      then if CHECK_EQUAL is true, require that the precision be the same.  */
>   
>   inline void
> -wide_int_ro::check_precision (unsigned int *p1, unsigned int *p2,
> -			      bool check_equal ATTRIBUTE_UNUSED,
> -			      bool check_zero ATTRIBUTE_UNUSED)
> +wi::check_precision (unsigned int *p1, unsigned int *p2,
> +		     bool check_equal ATTRIBUTE_UNUSED,
> +		     bool check_zero ATTRIBUTE_UNUSED)
>   {
>     gcc_checking_assert ((!check_zero) || *p1 != 0 || *p2 != 0);
>   
> @@ -4298,9 +4034,11 @@ typedef std::pair <rtx, enum machine_mod
>   /* There should logically be an overload for rtl here, but it cannot
>      be here because of circular include issues.  It is in rtl.h.  */
>   template <>
> -inline const HOST_WIDE_INT*
> -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
> -		       unsigned int *l, const rtx_mode_t &rp);
> +struct wide_int_accessors <rtx_mode_t>
> +{
> +  static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *,
> +				       unsigned int *, const rtx_mode_t &);
> +};
>   
>   /* tree related routines.  */
>   

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-23 15:03 ` Richard Sandiford
                     ` (5 preceding siblings ...)
  2013-08-25 10:52   ` Richard Sandiford
@ 2013-08-25 18:12   ` Mike Stump
  2013-08-25 18:57     ` Richard Sandiford
  2013-08-25 21:38     ` Joseph S. Myers
  2013-08-28  9:06   ` Richard Biener
  7 siblings, 2 replies; 50+ messages in thread
From: Mike Stump @ 2013-08-25 18:12 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Kenneth Zadeck, rguenther, gcc-patches, r.sandiford

On Aug 23, 2013, at 8:02 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
> We really need to get rid of the #include "tm.h" in wide-int.h.
> MAX_BITSIZE_MODE_ANY_INT should be the only partially-target-dependent
> thing in there.  If that comes from tm.h then perhaps we should put it
> into a new header file instead.

BITS_PER_UNIT comes from there as well, and I'd need both.  Grabbing the #defines we generate is easy enough, but BITS_PER_UNIT would be more annoying.  No port in the tree uses a value other than 8 yet.  So, do we just assume BITS_PER_UNIT is 8?


* Re: wide-int branch now up for public comment and review
  2013-08-25 18:12   ` Mike Stump
@ 2013-08-25 18:57     ` Richard Sandiford
  2013-08-25 19:59       ` Mike Stump
  2013-08-25 20:11       ` Mike Stump
  2013-08-25 21:38     ` Joseph S. Myers
  1 sibling, 2 replies; 50+ messages in thread
From: Richard Sandiford @ 2013-08-25 18:57 UTC (permalink / raw)
  To: Mike Stump; +Cc: Kenneth Zadeck, rguenther, gcc-patches, r.sandiford

Mike Stump <mikestump@comcast.net> writes:
> On Aug 23, 2013, at 8:02 AM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>> We really need to get rid of the #include "tm.h" in wide-int.h.
>> MAX_BITSIZE_MODE_ANY_INT should be the only partially-target-dependent
>> thing in there.  If that comes from tm.h then perhaps we should put it
>> into a new header file instead.
>
> BITS_PER_UNIT comes from there as well, and I'd need both.  Grabbing the
> #defines we generate is easy enough, but BITS_PER_UNIT would be more
> annoying.  No port in the tree makes use of it yet (other than 8).  So,
> do we just assume BITS_PER_UNIT is 8?

Looks like wide-int is just using BITS_PER_UNIT to get the number of
bits in "char".  That's a host thing, so it should be CHAR_BIT instead.

Thanks,
Richard


* Re: wide-int branch now up for public comment and review
  2013-08-25 18:57     ` Richard Sandiford
@ 2013-08-25 19:59       ` Mike Stump
  2013-08-25 20:11       ` Mike Stump
  1 sibling, 0 replies; 50+ messages in thread
From: Mike Stump @ 2013-08-25 19:59 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Kenneth Zadeck, rguenther, gcc-patches, r.sandiford

On Aug 25, 2013, at 11:29 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
> Mike Stump <mikestump@comcast.net> writes:
>> On Aug 23, 2013, at 8:02 AM, Richard Sandiford
>> <rdsandiford@googlemail.com> wrote:
>>> We really need to get rid of the #include "tm.h" in wide-int.h.
>>> MAX_BITSIZE_MODE_ANY_INT should be the only partially-target-dependent
>>> thing in there.  If that comes from tm.h then perhaps we should put it
>>> into a new header file instead.
>> 
>> BITS_PER_UNIT comes from there as well, and I'd need both.  Grabbing the
>> #defines we generate is easy enough, but BITS_PER_UNIT would be more
>> annoying.  No port in the tree makes use of it yet (other than 8).  So,
>> do we just assume BITS_PER_UNIT is 8?
> 
> Looks like wide-int is just using BITS_PER_UNIT to get the number of
> bits in "char".  That's a host thing, so it should be CHAR_BIT instead.

?  What?  No.  BITS_PER_UNIT is a feature of the target machine, so it is absolutely wrong to use a property of the host machine or the build machine.  We don't use sizeof (int) to set the size of int on the target, for exactly the same reason.


* Re: wide-int branch now up for public comment and review
  2013-08-25 18:57     ` Richard Sandiford
  2013-08-25 19:59       ` Mike Stump
@ 2013-08-25 20:11       ` Mike Stump
  1 sibling, 0 replies; 50+ messages in thread
From: Mike Stump @ 2013-08-25 20:11 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Kenneth Zadeck, rguenther, gcc-patches, r.sandiford

On Aug 25, 2013, at 11:29 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
> Looks like wide-int is just using BITS_PER_UNIT to get the number of
> bits in "char".  That's a host thing, so it should be CHAR_BIT instead.

Oh, Kenny did point out one sin:

diff --git a/gcc/wide-int.cc b/gcc/wide-int.cc
index 37ce5b3..891c227 100644
--- a/gcc/wide-int.cc
+++ b/gcc/wide-int.cc
@@ -2056,7 +2056,7 @@ wide_int_ro::mul_internal (bool high, bool full,
 
   /* The 2 is for a full mult.  */
   memset (r, 0, half_blocks_needed * 2
-         * HOST_BITS_PER_HALF_WIDE_INT / BITS_PER_UNIT);
+         * HOST_BITS_PER_HALF_WIDE_INT / CHAR_BIT);
 
   for (j = 0; j < half_blocks_needed; j++)
     {

which I fixed.


* Re: wide-int branch now up for public comment and review
  2013-08-25 18:12   ` Mike Stump
  2013-08-25 18:57     ` Richard Sandiford
@ 2013-08-25 21:38     ` Joseph S. Myers
  2013-08-25 21:53       ` Mike Stump
  1 sibling, 1 reply; 50+ messages in thread
From: Joseph S. Myers @ 2013-08-25 21:38 UTC (permalink / raw)
  To: Mike Stump
  Cc: Richard Sandiford, Kenneth Zadeck, rguenther, gcc-patches, r.sandiford

On Sun, 25 Aug 2013, Mike Stump wrote:

> On Aug 23, 2013, at 8:02 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
> > We really need to get rid of the #include "tm.h" in wide-int.h.
> > MAX_BITSIZE_MODE_ANY_INT should be the only partially-target-dependent
> > thing in there.  If that comes from tm.h then perhaps we should put it
> > into a new header file instead.
> 
> BITS_PER_UNIT comes from there as well, and I'd need both.  Grabbing the 
> #defines we generate is easy enough, but BITS_PER_UNIT would be more 
> annoying.  No port in the tree makes use of it yet (other than 8).  So, 
> do we just assume BITS_PER_UNIT is 8?

Regarding avoiding tm.h dependence through BITS_PER_UNIT (without actually 
converting it from a target macro to a target hook), see my suggestions at 
<http://gcc.gnu.org/ml/gcc-patches/2010-11/msg02617.html>.  It would seem 
fairly reasonable, if in future other macros are converted to hooks and 
it's possible to build multiple back ends into a single compiler binary, 
to require that all such back ends share a value of BITS_PER_UNIT.

BITS_PER_UNIT describes the number of bits in QImode - the RTL-level byte.  
I don't think wide-int should care about that at all.  As I've previously 
noted, many front-end uses of BITS_PER_UNIT really care about the C-level 
char and so should be TYPE_PRECISION (char_type_node).  Generally, before 
thinking about how to get BITS_PER_UNIT somewhere, consider whether the 
code is actually correct to be using BITS_PER_UNIT at all - whether it's 
the RTL-level QImode that is really what's relevant to the code.

-- 
Joseph S. Myers
joseph@codesourcery.com


* Re: wide-int branch now up for public comment and review
  2013-08-25 21:38     ` Joseph S. Myers
@ 2013-08-25 21:53       ` Mike Stump
  0 siblings, 0 replies; 50+ messages in thread
From: Mike Stump @ 2013-08-25 21:53 UTC (permalink / raw)
  To: Joseph S. Myers
  Cc: Richard Sandiford, Kenneth Zadeck, rguenther, gcc-patches, r.sandiford

On Aug 25, 2013, at 1:11 PM, "Joseph S. Myers" <joseph@codesourcery.com> wrote:
> On Sun, 25 Aug 2013, Mike Stump wrote:
>> On Aug 23, 2013, at 8:02 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
>>> We really need to get rid of the #include "tm.h" in wide-int.h.
>>> MAX_BITSIZE_MODE_ANY_INT should be the only partially-target-dependent
>>> thing in there.  If that comes from tm.h then perhaps we should put it
>>> into a new header file instead.
>> 
>> BITS_PER_UNIT comes from there as well, and I'd need both.  Grabbing the 
>> #defines we generate is easy enough, but BITS_PER_UNIT would be more 
>> annoying.  No port in the tree makes use of it yet (other than 8).  So, 
>> do we just assume BITS_PER_UNIT is 8?
> 
> Regarding avoiding tm.h dependence through BITS_PER_UNIT (without actually 
> converting it from a target macro to a target hook), see my suggestions at 
> <http://gcc.gnu.org/ml/gcc-patches/2010-11/msg02617.html>.  It would seem 
> fairly reasonable, if in future other macros are converted to hooks and 
> it's possible to build multiple back ends into a single compiler binary, 
> to require that all such back ends share a value of BITS_PER_UNIT.

Ick.  I don't see the beauty of this direction.  If one wants to move toward "we can generate code for any target", fine, let's do that.  If someone wants to make a target just a dynamically loaded shared library, let's do that.  There are all sorts of directions to move in, but an intermediate system that works for some targets and not for others is, I think, short sighted, and we ought not target that.

Having a separate tm-blabla.h for some of the selections of the target is fine, but mandating that as the form for doing a target is, well, bad.

I'd love to see the entire hook and target selection mechanism brought up to a 1990s level of sophistication; the 1970s are over.  Sigh.  Anyway, all this is well beyond the scope of the work at hand.

> As I've previously 
> noted, many front-end uses of BITS_PER_UNIT really care about the C-level 
> char and so should be TYPE_PRECISION (char_type_node).  Generally, before 
> thinking about how to get BITS_PER_UNIT somewhere, consider whether the 
> code is actually correct to be using BITS_PER_UNIT at all - whether it's 
> the RTL-level QImode that is really what's relevant to the code.

I think we got all the uses correct.  Let us know if any are wrong.


* Re: wide-int branch now up for public comment and review
  2013-08-25 10:52   ` Richard Sandiford
  2013-08-25 15:14     ` Kenneth Zadeck
@ 2013-08-26  2:22     ` Mike Stump
  2013-08-26  5:40       ` Kenneth Zadeck
  2013-08-28  9:11       ` Richard Biener
  1 sibling, 2 replies; 50+ messages in thread
From: Mike Stump @ 2013-08-26  2:22 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Kenneth Zadeck, rguenther, gcc-patches, r.sandiford

On Aug 25, 2013, at 12:26 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
> (2) Adding a new namespace, wi, for the operators.  So far this
>    just contains the previously-static comparison functions
>    and whatever else was needed to avoid cross-dependencies
>    between wi and wide_int_ro (except for the debug routines).

It seems reasonable; I don't see anything I object to.  Seems like most of the time, the code is shorter (though, you use wi, which is fairly short).  It doesn't seem any more complex, though knowing how to spell the operation (wide_int:: vs. wi::) is confusing on the client side.  I'm torn between this and the nice things that come with the patch.

> (3) Removing the comparison member functions and using the static
>    ones everywhere.

I'd love to have richi weigh in (or someone else that wants to play the role of C++ coding expert)…  I'd defer to them…

> The idea behind using a namespace rather than static functions
> is that it makes it easier to separate the core, tree and rtx bits.

Being able to separate core, tree and rtx bits gets a +1 in my book.  I do understand the beauty of this.

> IMO wide-int.h shouldn't know about trees and rtxes, and all routines
> related to them should be in tree.h and rtl.h instead.  But using
> static functions means that you have to declare everything in one place.
> Also, it feels odd for wide_int to be both an object and a home
> of static functions that don't always operate on wide_ints, e.g. when
> comparing a CONST_INT against 16.

Yes, though, does wi feel odd being a home for comparing a CONST_INT and 16?  :-)

> I realise I'm probably not being helpful here.

Iterating on how we want the code to look is reasonable.  Prettying it up where it needs it is good.

Indeed, if the code is as you like, and as richi likes, well, then our mission is just about complete.  :-)  For this patch, I'd love to defer to richi (or someone who has a stronger opinion than I do) to say better or worse…


* Re: wide-int branch now up for public comment and review
  2013-08-26  2:22     ` Mike Stump
@ 2013-08-26  5:40       ` Kenneth Zadeck
  2013-08-28  9:11       ` Richard Biener
  1 sibling, 0 replies; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-26  5:40 UTC (permalink / raw)
  To: Mike Stump; +Cc: Richard Sandiford, rguenther, gcc-patches, r.sandiford

On 08/25/2013 06:55 PM, Mike Stump wrote:
> On Aug 25, 2013, at 12:26 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
>> (2) Adding a new namespace, wi, for the operators.  So far this
>>     just contains the previously-static comparison functions
>>     and whatever else was needed to avoid cross-dependencies
>>     between wi and wide_int_ro (except for the debug routines).
> It seems reasonable; I don't see anything I object to.  Seems like most of the time, the code is shorter (though, you use wi, which is fairly short).  It doesn't seem any more complex, though, knowing how to spell the operation wide_int:: v wi:: is confusing on the client side.  I'm torn between this and the nice things that come with the patch.
>
>> (3) Removing the comparison member functions and using the static
>>     ones everywhere.
> I'd love to have richi weigh in (or someone else that wants to play the role of C++ coding expert)…  I'd defer to them…
>
>> The idea behind using a namespace rather than static functions
>> is that it makes it easier to separate the core, tree and rtx bits.
> Being able to separate core, tree and rtx bits gets a +1 in my book.  I do understand the beauty of this.
>
>> IMO wide-int.h shouldn't know about trees and rtxes, and all routines
>> related to them should be in tree.h and rtl.h instead.  But using
>> static functions means that you have to declare everything in one place.
>> Also, it feels odd for wide_int to be both an object and a home
>> of static functions that don't always operate on wide_ints, e.g. when
>> comparing a CONST_INT against 16.
> Yes, though, does wi feel odd being a home for comparing a CONST_INT and 16?  :-)
On the other hand, how else are you going to do this?  I have not seen
anyone sign up to make an OO version of rtl, and even if they did that,
the consts are just a small part of it.

I agree that it is odd, but then again, it is actually nice to have a
largely similar interface for trees and rtl.

>
>> I realise I'm probably not being helpful here.
> Iterating on how we want to code to look like is reasonable.  Prettying it up where it needs it, is good.
>
> Indeed, if the code is as you like, and as richi likes, well, then our mission is just about complete.  :-)  For this patch, I'd love to defer to richi (or someone who has a stronger opinion than I do) to say better or worse…


* Re: wide-int branch now up for public comment and review
  2013-08-23 15:03 ` Richard Sandiford
                     ` (6 preceding siblings ...)
  2013-08-25 18:12   ` Mike Stump
@ 2013-08-28  9:06   ` Richard Biener
  2013-08-28  9:51     ` Richard Sandiford
                       ` (2 more replies)
  7 siblings, 3 replies; 50+ messages in thread
From: Richard Biener @ 2013-08-28  9:06 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Kenneth Zadeck, gcc-patches, Mike Stump, r.sandiford

On Fri, 23 Aug 2013, Richard Sandiford wrote:

> Hi Kenny,
> 
> This is the first time I've looked at the implementation of wide-int.h
> (rather than just looking at the rtl changes, which as you know I like
> in general), so FWIW here are some comments on wide-int.h.  I expect
> a lot of them overlap with Richard B.'s comments.
> 
> I also expect many of them are going to be annoying, sorry, but this
> first one definitely will.  The coding conventions say that functions
> should be defined outside the class:
> 
>     http://gcc.gnu.org/codingconventions.html
> 
> and that opening braces should be on their own line, so most of the file
> needs to be reformatted.  I went through and made that change with the
> patch below, in the process of reading through.  I also removed "SGN
> must be SIGNED or UNSIGNED." because it seemed redundant when those are
> the only two values available.  The patch fixes a few other coding standard
> problems and typos, but I've not made any actual code changes (or at least,
> I didn't mean to).
> 
> Does it look OK to install?
> 
> I'm still unsure about these "infinite" precision types, but I understand
> the motivation and I have no objections.  However:
> 
> >     * Code that does widening conversions.  The canonical way that
> >       this is performed is to sign or zero extend the input value to
> >       the max width based on the sign of the type of the source and
> >       then to truncate that value to the target type.  This is in
> >       preference to using the sign of the target type to extend the
> >       value directly (which gets the wrong value for the conversion
> >       of large unsigned numbers to larger signed types).
> 
> I don't understand this particular reason.  Using the sign of the source
> type is obviously right, but why does that mean we need "infinite" precision,
> rather than just doubling the precision of the source?

The comment indeed looks redundant - of course it is not correct
to extend the value "directly".

> >   * When a constant that has an integer type is converted to a
> >     wide-int it comes in with precision 0.  For these constants the
> >     top bit does accurately reflect the sign of that constant; this
> >     is an exception to the normal rule that the signedness is not
> >     represented.  When used in a binary operation, the wide-int
> >     implementation properly extends these constants so that they
> >     properly match the other operand of the computation.  This allows
> >     you to write:
> >
> >                tree t = ...
> >                wide_int x = t + 6;
> >
> >     assuming t is an int_cst.
> 
> This seems dangerous.  Not all code that uses "unsigned HOST_WIDE_INT"
> actually wants it to be an unsigned value.  Some code uses it to avoid
> the undefinedness of signed overflow.  So these overloads could lead
> to us accidentally zero-extending what's conceptually a signed value
> without any obvious indication that that's happening.  Also, hex constants
> are unsigned int, but it doesn't seem safe to assume that 0x80000000 was
> meant to be zero-extended.
>
> I realise the same thing can happen if you mix "unsigned int" with
> HOST_WIDE_INT, but the point is that you shouldn't really do that
> in general, whereas we're defining these overloads precisely so that
> a mixture can be used.
> 
> I'd prefer some explicit indication of the sign, at least for anything
> other than plain "int" (so that the compiler will complain about uses
> of "unsigned int" and above).

I prefer the automatic promotion - it is exactly what regular C types
do.  Now, consider

  wide_int x = ... construct 5 with precision 16 ...
  wide_int y = x + 6;

now, '6' is 'int' (precision 32), but with wide-int we treat it
as precision '0' ('infinite').  For x + 6 we then _truncate_ its
precision to that of 'x' (?), not exactly matching C behavior
(where we'd promote 'x' to 'int', perform the add and then truncate
to the precision of 'y' - which for wide-int gets its precision
from the result of x + 6).

Mimicking C would support dropping those 'require equal precision'
asserts but also would require us to properly track a sign to be
able to promote properly (or as I argued all the time always
properly sign-extend values so we effectively have infinite precision
anyway).

The fits_uhwi_p implementation changes scare me off that
"upper bits are undefined" thing a lot again... (I hate introducing
'undefinedness' into the compiler by 'design')

> >   Note that the bits above the precision are not defined and the
> >   algorithms used here are careful not to depend on their value.  In
> >   particular, values that come in from rtx constants may have random
> >   bits.

Which is a red herring.  It should be fixed.  I cannot even believe
that sentence given the uses of CONST_DOUBLE_LOW/CONST_DOUBLE_HIGH
or INTVAL/UINTVAL.  I don't see accesses masking out 'undefined' bits
anywhere.

> I have a feeling I'm rehashing a past debate, sorry, but rtx constants can't
> have random bits.  The upper bits must be a sign extension of the value.
> There's exactly one valid rtx for each (value, mode) pair.  If you saw
> something different then that sounds like a bug.  The rules should already
> be fairly well enforced though, since something like (const_int 128) --
> or (const_int 256) -- will not match a QImode operand.

See.  We're saved ;)

> This is probably the part of the representation that I disagree most with.
> There seem to be two main ways we could hande the extension to whole HWIs:
> 
> (1) leave the stored upper bits undefined and extend them on read
> (2) keep the stored upper bits in extended form
> 
> The patch goes for (1) but (2) seems better to me, for a few reasons:

I agree whole-heartedly.

> * As above, constants coming from rtl are already in the right form,
>   so if you create a wide_int from an rtx and only query it, no explicit
>   extension is needed.
> 
> * Things like logical operations and right shifts naturally preserve
>   the sign-extended form, so only a subset of write operations need
>   to take special measures.
> 
> * You have a public interface that exposes the underlying HWIs
>   (which is fine with me FWIW), so it seems better to expose a fully-defined
>   HWI rather than only a partially-defined HWI.
> 
> E.g. zero_p is:
> 
>   HOST_WIDE_INT x;
> 
>   if (precision && precision < HOST_BITS_PER_WIDE_INT)
>     x = sext_hwi (val[0], precision);
>   else if (len == 0)
>     {
>       gcc_assert (precision == 0);
>       return true;
>     }
>   else
>     x = val[0];
> 
>   return len == 1 && x == 0;
> 
> but I think it really ought to be just:
> 
>   return len == 1 && val[0] == 0;

Yes!

But then - what value does keeping track of a 'precision' have
in this case?  It seems to me it's only a "convenient carrier"
for

  wide_int x = wide-int-from-RTX (y);
  machine_mode saved_mode = mode-available? GET_MODE (y) : magic-mode;
  ... process x ...
  RTX = RTX-from-wide_int (x, saved_mode);

that is, wide-int doesn't do anything with 'precision' but you
can extract it later to not need to remember a mode you were
interested in?

Oh, and of course some operations require a 'precision', like rotate.

> >   When the precision is 0, all the bits in the LEN elements of
> >   VEC are significant with no undefined bits.  Precisionless
> >   constants are limited to being one or two HOST_WIDE_INTs.  When two
> >   are used the upper value is 0, and the high order bit of the first
> >   value is set.  (Note that this may need to be generalized if it is
> >   ever necessary to support 32bit HWIs again).
> 
> I didn't understand this.  When are two HOST_WIDE_INTs needed for
> "precision 0"?

For the wide_int containing unsigned HOST_WIDE_INT ~0.  As we
sign-extend the representation (heh, yes, we do or should!) we
require an extra HWI to store the fact that ~0 is unsigned.

> The main thing that's changed since the early patches is that we now
> have a mixture of wide-int types.  This seems to have led to a lot of
> boiler-plate forwarding functions (or at least it felt like that while
> moving them all out of the class).  And that in turn seems to be because
> you're trying to keep everything as member functions.  E.g. a lot of the
> forwarders are from a member function to a static function.
> 
> Wouldn't it be better to have the actual classes be light-weight,
> with little more than accessors, and do the actual work with non-member
> template functions?  There seems to be 3 grades of wide-int:
> 
>   (1) read-only, constant precision  (from int, etc.)
>   (2) read-write, constant precision  (fixed_wide_int)
>   (3) read-write, variable precision  (wide_int proper)
> 
> but we should be able to hide that behind templates, with compiler errors
> if you try to write to (1), etc.

Yeah, I'm probably trying to clean up the implementation once I
got past recovering from two months without GCC ...

Richard.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-26  2:22     ` Mike Stump
  2013-08-26  5:40       ` Kenneth Zadeck
@ 2013-08-28  9:11       ` Richard Biener
  2013-08-29 13:34         ` Kenneth Zadeck
  1 sibling, 1 reply; 50+ messages in thread
From: Richard Biener @ 2013-08-28  9:11 UTC (permalink / raw)
  To: Mike Stump; +Cc: Richard Sandiford, Kenneth Zadeck, gcc-patches, r.sandiford

On Sun, 25 Aug 2013, Mike Stump wrote:

> On Aug 25, 2013, at 12:26 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
> > (2) Adding a new namespace, wi, for the operators.  So far this
> >    just contains the previously-static comparison functions
> >    and whatever else was needed to avoid cross-dependencies
> >    between wi and wide_int_ro (except for the debug routines).
> 
> It seems reasonable; I don't see anything I object to.  Seems like most of the time, the code is shorter (though, you use wi, which is fairly short).  It doesn't seem any more complex, though knowing how to spell the operation (wide_int:: vs. wi::) is confusing on the client side.  I'm torn between this and the nice things that come with the patch.
> 
> > (3) Removing the comparison member functions and using the static
> >    ones everywhere.
> 
> I'd love to have richi weigh in (or someone else that wants to play the 
> role of C++ coding expert)?  I'd defer to them?

Yeah - wi::lt (a, b) is much better than a.lt (b) IMHO.  It mimics how
the standard library works.

> > The idea behind using a namespace rather than static functions
> > is that it makes it easier to separate the core, tree and rtx bits.
> 
> Being able to separate core, tree and rtx bits gets a +1 in my book.  I 
> do understand the beauty of this.

Now, if you look back in the discussions, I wanted a storage 
abstraction anyway.  Basically the interface is

class wide_int_storage
{
  int precision ();
  int len ();
  element_t get (unsigned);
  void set (unsigned, element_t);
};

and wide_int is then templated like

template <class storage>
class wide_int : public storage
{
};

where RTX / tree storage classes provide read-only access to their
storage and a rvalue integer rep to its value.

You can look at my example draft implementation I posted some
months ago.  But I'll gladly wiggle on the branch to make it
more like above (easy step one: don't access the wide-int members
directly but via accessor functions)

> > IMO wide-int.h shouldn't know about trees and rtxes, and all routines
> > related to them should be in tree.h and rtl.h instead.  But using
> > static functions means that you have to declare everything in one place.
> > Also, it feels odd for wide_int to be both an object and a home
> > of static functions that don't always operate on wide_ints, e.g. when
> > comparing a CONST_INT against 16.

Indeed - in my sample the wide-int-rtx-storage and wide-int-tree-storage
storage models were declared in rtl.h and tree.h and wide-int.h did
know nothing about them.

> Yes, though, does wi feel odd being a home for comparing a CONST_INT and 
> 16?  :-)
> 
> > I realise I'm probably not being helpful here.
> 
> Iterating on how we want the code to look is reasonable.  Prettying 
> it up where it needs it is good.
> 
> Indeed, if the code is as you like, and as richi likes, well, then our 
> mission is just about complete.  :-)  For this patch, I'd love to defer 
> to richi (or someone that has a stronger opinion than I do) to say, 
> better, worse?

The comparisons?  Better.

Thanks,
Richard.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-28  9:06   ` Richard Biener
@ 2013-08-28  9:51     ` Richard Sandiford
  2013-08-28 10:40       ` Richard Biener
  2013-08-28 13:11     ` Kenneth Zadeck
  2013-08-29  0:15     ` Kenneth Zadeck
  2 siblings, 1 reply; 50+ messages in thread
From: Richard Sandiford @ 2013-08-28  9:51 UTC (permalink / raw)
  To: Richard Biener; +Cc: Kenneth Zadeck, gcc-patches, Mike Stump

Richard Biener <rguenther@suse.de> writes:
>> * As above, constants coming from rtl are already in the right form,
>>   so if you create a wide_int from an rtx and only query it, no explicit
>>   extension is needed.
>> 
>> * Things like logical operations and right shifts naturally preserve
>>   the sign-extended form, so only a subset of write operations need
>>   to take special measures.
>> 
>> * You have a public interface that exposes the underlying HWIs
>>   (which is fine with me FWIW), so it seems better to expose a fully-defined
>>   HWI rather than only a partially-defined HWI.
>> 
>> E.g. zero_p is:
>> 
>>   HOST_WIDE_INT x;
>> 
>>   if (precision && precision < HOST_BITS_PER_WIDE_INT)
>>     x = sext_hwi (val[0], precision);
>>   else if (len == 0)
>>     {
>>       gcc_assert (precision == 0);
>>       return true;
>>     }
>>   else
>>     x = val[0];
>> 
>>   return len == 1 && x == 0;
>> 
>> but I think it really ought to be just:
>> 
>>   return len == 1 && val[0] == 0;
>
> Yes!
>
> But then - what value does keeping track of a 'precision' have
> in this case?  It seems to me it's only a "convenient carrier"
> for
>
>   wide_int x = wide-int-from-RTX (y);
>   machine_mode saved_mode = mode-available? GET_MODE (y) : magic-mode;
>   ... process x ...
>   RTX = RTX-from-wide_int (x, saved_mode);
>
> that is, wide-int doesn't do anything with 'precision' but you
> can extract it later to not need to remember a mode you were
> interested in?

I can see why you like the constant-precision, very wide integers for trees,
where the constants have an inherent sign.  But (and I think this might be
old ground too :-)), that isn't the case with rtl.  At the tree level,
using constant-precision, very wide integers allows you to add a 32-bit
signed INTEGER_CST to a 16-bit unsigned INTEGER_CST.  And that has an
obvious meaning, both as a 32-bit result or as a wider result, depending
on how you choose to use it.  But in rtl there is no meaning to adding
an SImode and an HImode value together, since we don't know how to
extend the HImode value beyond its precision.  You must explicitly sign-
or zero-extend the value first.  (The fact that we choose to sign-extend
rtl constants when storing them in HWIs is just a representation detail,
to avoid having undefined bits in the HWIs.  It doesn't mean that rtx
values themselves are signed.  We could have used a zero-extending
representation instead without changing the semantics.)

So the precision variable is good for the rtl level in several ways:

- As you say, it avoids adding the explicit truncations that (in practice)
  every rtl operation would need

- It's more efficient in that case, since we don't calculate high values
  and then discard them immediately.  The common GET_MODE_PRECISION (mode)
  <= HOST_BITS_PER_WIDE_INT case stays a pure HWI operation, despite all
  the wide-int trappings.

- It's a good way of checking type safety and making sure that excess
  bits aren't accidentally given a semantic meaning.  This is the most
  important reason IMO.

The branch has both the constant-precision, very wide integers that we
want for trees and the variable-precision integers we want for rtl,
so it's not an "either or".  With the accessor-based implementation,
there should be very little cost to having both.

>> The main thing that's changed since the early patches is that we now
>> have a mixture of wide-int types.  This seems to have led to a lot of
>> boiler-plate forwarding functions (or at least it felt like that while
> >> moving them all out of the class).  And that in turn seems to be because
>> you're trying to keep everything as member functions.  E.g. a lot of the
>> forwarders are from a member function to a static function.
>> 
>> Wouldn't it be better to have the actual classes be light-weight,
>> with little more than accessors, and do the actual work with non-member
>> template functions?  There seems to be 3 grades of wide-int:
>> 
>>   (1) read-only, constant precision  (from int, etc.)
>>   (2) read-write, constant precision  (fixed_wide_int)
>>   (3) read-write, variable precision  (wide_int proper)
>> 
>> but we should be able to hide that behind templates, with compiler errors
>> if you try to write to (1), etc.
>
> Yeah, I'm probably trying to clean up the implementation once I
> got past recovering from two months without GCC ...

FWIW, I've been plugging away at a version that uses accessors.
I hope to have it vaguely presentable by the middle of next week,
in case your recovery takes that long...

Thanks,
Richard

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-28  9:51     ` Richard Sandiford
@ 2013-08-28 10:40       ` Richard Biener
  2013-08-28 11:52         ` Richard Sandiford
  2013-08-28 16:08         ` Mike Stump
  0 siblings, 2 replies; 50+ messages in thread
From: Richard Biener @ 2013-08-28 10:40 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Kenneth Zadeck, gcc-patches, Mike Stump

On Wed, 28 Aug 2013, Richard Sandiford wrote:

> Richard Biener <rguenther@suse.de> writes:
> >> * As above, constants coming from rtl are already in the right form,
> >>   so if you create a wide_int from an rtx and only query it, no explicit
> >>   extension is needed.
> >> 
> >> * Things like logical operations and right shifts naturally preserve
> >>   the sign-extended form, so only a subset of write operations need
> >>   to take special measures.
> >> 
> >> * You have a public interface that exposes the underlying HWIs
> >>   (which is fine with me FWIW), so it seems better to expose a fully-defined
> >>   HWI rather than only a partially-defined HWI.
> >> 
> >> E.g. zero_p is:
> >> 
> >>   HOST_WIDE_INT x;
> >> 
> >>   if (precision && precision < HOST_BITS_PER_WIDE_INT)
> >>     x = sext_hwi (val[0], precision);
> >>   else if (len == 0)
> >>     {
> >>       gcc_assert (precision == 0);
> >>       return true;
> >>     }
> >>   else
> >>     x = val[0];
> >> 
> >>   return len == 1 && x == 0;
> >> 
> >> but I think it really ought to be just:
> >> 
> >>   return len == 1 && val[0] == 0;
> >
> > Yes!
> >
> > But then - what value does keeping track of a 'precision' have
> > in this case?  It seems to me it's only a "convenient carrier"
> > for
> >
> >   wide_int x = wide-int-from-RTX (y);
> >   machine_mode saved_mode = mode-available? GET_MODE (y) : magic-mode;
> >   ... process x ...
> >   RTX = RTX-from-wide_int (x, saved_mode);
> >
> > that is, wide-int doesn't do anything with 'precision' but you
> > can extract it later to not need to remember a mode you were
> > interested in?
> 
> I can see why you like the constant-precision, very wide integers for trees,
> where the constants have an inherent sign.  But (and I think this might be
> old ground too :-)), that isn't the case with rtl.  At the tree level,
> using constant-precision, very wide integers allows you to add a 32-bit
> signed INTEGER_CST to a 16-bit unsigned INTEGER_CST.  And that has an
> obvious meaning, both as a 32-bit result or as a wider result, depending
> on how you choose to use it.  But in rtl there is no meaning to adding
> an SImode and an HImode value together, since we don't know how to
> extend the HImode value beyond its precision.  You must explicitly sign-
> or zero-extend the value first.  (The fact that we choose to sign-extend
> rtl constants when storing them in HWIs is just a representation detail,
> to avoid having undefined bits in the HWIs.  It doesn't mean that rtx
> values themselves are signed.  We could have used a zero-extending
> representation instead without changing the semantics.)

Yeah, that was my understanding.

> So the precision variable is good for the rtl level in several ways:
> 
> - As you say, it avoids adding the explicit truncations that (in practice)
>   every rtl operation would need
> 
> - It's more efficient in that case, since we don't calculate high values
>   and then discard them immediately.  The common GET_MODE_PRECISION (mode)
>   <= HOST_BITS_PER_WIDE_INT case stays a pure HWI operation, despite all
>   the wide-int trappings.
> 
> - It's a good way of checking type safety and making sure that excess
>   bits aren't accidentally given a semantic meaning.  This is the most
>   important reason IMO.
> 
> The branch has both the constant-precision, very wide integers that we
> want for trees and the variable-precision integers we want for rtl,
> so it's not an "either or".  With the accessor-based implementation,
> there should be very little cost to having both.

So what I wonder (and where we maybe disagree) is how much code
wants to inspect "intermediate" results.  Say originally you have

rtx foo (rtx x, rtx y)
{
  rtx tem = simplify_const_binary_operation (PLUS, GET_MODE (x), x, 
GEN_INT (1));
  rtx res = simplify_const_binary_operation (MINUS, GET_MODE (tem), tem, 
y);
  return res;
}

and with wide-int you want to change that to

rtx foo (rtx x, rtx y)
{
  wide_int tem = wide_int (x) + 1;
  wide_int res = tem - y;
  return res.to_rtx ();
}

how much code ever wants to inspect 'tem' or 'res'?
That is, does it matter
if 'tem' and 'res' would have been calculated in "infinite precision"
and only to_rtx () would do the truncation to the desired mode?

I think not.  The amount of code performing multiple operations on
_constants_ in sequence is extremely low (if it even exists).

So I'd rather have to_rtx get a mode argument (or a precision) and
perform the required truncation / sign-extension at RTX construction
time (which is an expensive operation anyway).

So where does this "break"?  It only breaks where previous code
broke (or where previous code had measures to not break that we
don't carry over to the wide-int case).  Obvious case is unsigned
division on the sign-extended rep of RTL constants for example.

> >> The main thing that's changed since the early patches is that we now
> >> have a mixture of wide-int types.  This seems to have led to a lot of
> >> boiler-plate forwarding functions (or at least it felt like that while
> > >> moving them all out of the class).  And that in turn seems to be because
> >> you're trying to keep everything as member functions.  E.g. a lot of the
> >> forwarders are from a member function to a static function.
> >> 
> >> Wouldn't it be better to have the actual classes be light-weight,
> >> with little more than accessors, and do the actual work with non-member
> >> template functions?  There seems to be 3 grades of wide-int:
> >> 
> >>   (1) read-only, constant precision  (from int, etc.)
> >>   (2) read-write, constant precision  (fixed_wide_int)
> >>   (3) read-write, variable precision  (wide_int proper)
> >> 
> >> but we should be able to hide that behind templates, with compiler errors
> >> if you try to write to (1), etc.
> >
> > Yeah, I'm probably trying to clean up the implementation once I
> > got past recovering from two months without GCC ...
> 
> FWIW, I've been plugging away at a version that uses accessors.
> I hope to have it vaguely presentable by the middle of next week,
> in case your recovery takes that long...

Depends on my priorities ;)

Btw, rtl.h still wastes space with

struct GTY((variable_size)) hwivec_def {
  int num_elem;         /* number of elements */
  HOST_WIDE_INT elem[1];
};

struct GTY((chain_next ("RTX_NEXT (&%h)"),
            chain_prev ("RTX_PREV (&%h)"), variable_size)) rtx_def {
...
  /* The first element of the operands of this rtx.
     The number of operands and their types are controlled
     by the `code' field, according to rtl.def.  */
  union u {
    rtunion fld[1];
    HOST_WIDE_INT hwint[1];
    struct block_symbol block_sym;
    struct real_value rv;
    struct fixed_value fv;
    struct hwivec_def hwiv;
  } GTY ((special ("rtx_def"), desc ("GET_CODE (&%0)"))) u;
};

there are 32bits available before the union.  If you don't use
those for num_elem then all wide-ints will at least take as
much space as DOUBLE_INTs originally took - and large ints
that would have required DOUBLE_INTs in the past will now
require more space than before.  Which means your math
motivating the 'num_elem' encoding stuff is wrong.  With
moving 'num_elem' before u you can even re-use the hwint
field in the union as the existing double-int code does
(which in fact could simply do the encoding trick in the
old CONST_DOUBLE scheme, similar for the tree INTEGER_CST
container).

Richard.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-28 10:40       ` Richard Biener
@ 2013-08-28 11:52         ` Richard Sandiford
  2013-08-28 12:04           ` Richard Biener
  2013-08-28 16:08         ` Mike Stump
  1 sibling, 1 reply; 50+ messages in thread
From: Richard Sandiford @ 2013-08-28 11:52 UTC (permalink / raw)
  To: Richard Biener; +Cc: Kenneth Zadeck, gcc-patches, Mike Stump

Richard Biener <rguenther@suse.de> writes:
>> So the precision variable is good for the rtl level in several ways:
>> 
>> - As you say, it avoids adding the explicit truncations that (in practice)
>>   every rtl operation would need
>> 
>> - It's more efficient in that case, since we don't calculate high values
>>   and then discard them immediately.  The common GET_MODE_PRECISION (mode)
>>   <= HOST_BITS_PER_WIDE_INT case stays a pure HWI operation, despite all
>>   the wide-int trappings.
>> 
>> - It's a good way of checking type safety and making sure that excess
>>   bits aren't accidentally given a semantic meaning.  This is the most
>>   important reason IMO.
>> 
>> The branch has both the constant-precision, very wide integers that we
>> want for trees and the variable-precision integers we want for rtl,
>> so it's not an "either or".  With the accessor-based implementation,
>> there should be very little cost to having both.
>
> So what I wonder (and where we maybe disagree) is how much code
> wants to inspect "intermediate" results.  Say originally you have
>
> rtx foo (rtx x, rtx y)
> {
>   rtx tem = simplify_const_binary_operation (PLUS, GET_MODE (x), x, 
> GEN_INT (1));
>   rtx res = simplify_const_binary_operation (MINUS, GET_MODE (tem), tem, 
> y);
>   return res;
> }
>
> and with wide-int you want to change that to
>
> rtx foo (rtx x, rtx y)
> {
>   wide_int tem = wide_int (x) + 1;
>   wide_int res = tem - y;
>   return res.to_rtx ();
> }
>
> how much code ever wants to inspect 'tem' or 'res'?
> That is, does it matter
> if 'tem' and 'res' would have been calculated in "infinite precision"
> and only to_rtx () would do the truncation to the desired mode?
>
> I think not.  The amount of code performing multiple operations on
> _constants_ in sequence is extremely low (if it even exists).
>
> So I'd rather have to_rtx get a mode argument (or a precision) and
> perform the required truncation / sign-extension at RTX construction
> time (which is an expensive operation anyway).

I agree this is where we disagree.  I don't understand why you think
the above is better.  Why do we want to do "infinite precision"
addition of two values when only the lowest N bits of those values
have a (semantically) defined meaning?  Earlier in the thread it sounded
like we both agreed that having undefined bits in the _representation_
was bad.  So why do we want to do calculations on parts of values that
are undefined in the (rtx) semantics?

E.g. say we're adding two rtx values whose mode just happens to be
HOST_BITS_PER_WIDE_INT in size.  Why does it make sense to calculate
the carry from adding the two HWIs, only to add it to an upper HWI
that has no semantically-defined value?  It's garbage in, garbage out.

Providing this for rtl doesn't affect the tree-level operations in any
way.  Although things like addition require both arguments to have the
same precision after promotion, that's trivially true for trees, since
(a) the inputs used there -- C integers and tree constants -- can be
promoted and (b) tree-level code uses fixed_wide_int <N>, where every
fixed_wide_int <N> has the same precision.

And you can still do "infinite-precision" arithmetic on rtx constants
if you want.  You just have to say how you want the constant to be
extended (sign or zero), so that the value of all bits is meaningful.

Thanks,
Richard

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-28 11:52         ` Richard Sandiford
@ 2013-08-28 12:04           ` Richard Biener
  2013-08-28 12:32             ` Richard Sandiford
  0 siblings, 1 reply; 50+ messages in thread
From: Richard Biener @ 2013-08-28 12:04 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Kenneth Zadeck, gcc-patches, Mike Stump

On Wed, 28 Aug 2013, Richard Sandiford wrote:

> Richard Biener <rguenther@suse.de> writes:
> >> So the precision variable is good for the rtl level in several ways:
> >> 
> >> - As you say, it avoids adding the explicit truncations that (in practice)
> >>   every rtl operation would need
> >> 
> >> - It's more efficient in that case, since we don't calculate high values
> >>   and then discard them immediately.  The common GET_MODE_PRECISION (mode)
> >>   <= HOST_BITS_PER_WIDE_INT case stays a pure HWI operation, despite all
> >>   the wide-int trappings.
> >> 
> >> - It's a good way of checking type safety and making sure that excess
> >>   bits aren't accidentally given a semantic meaning.  This is the most
> >>   important reason IMO.
> >> 
> >> The branch has both the constant-precision, very wide integers that we
> >> want for trees and the variable-precision integers we want for rtl,
> >> so it's not an "either or".  With the accessor-based implementation,
> >> there should be very little cost to having both.
> >
> > So what I wonder (and where we maybe disagree) is how much code
> > wants to inspect "intermediate" results.  Say originally you have
> >
> > rtx foo (rtx x, rtx y)
> > {
> >   rtx tem = simplify_const_binary_operation (PLUS, GET_MODE (x), x, 
> > GEN_INT (1));
> >   rtx res = simplify_const_binary_operation (MINUS, GET_MODE (tem), tem, 
> > y);
> >   return res;
> > }
> >
> > and with wide-int you want to change that to
> >
> > rtx foo (rtx x, rtx y)
> > {
> >   wide_int tem = wide_int (x) + 1;
> >   wide_int res = tem - y;
> >   return res.to_rtx ();
> > }
> >
> > how much code ever wants to inspect 'tem' or 'res'?
> > That is, does it matter
> > if 'tem' and 'res' would have been calculated in "infinite precision"
> > and only to_rtx () would do the truncation to the desired mode?
> >
> > I think not.  The amount of code performing multiple operations on
> > _constants_ in sequence is extremely low (if it even exists).
> >
> > So I'd rather have to_rtx get a mode argument (or a precision) and
> > perform the required truncation / sign-extension at RTX construction
> > time (which is an expensive operation anyway).
> 
> I agree this is where we disagree.  I don't understand why you think
> the above is better.  Why do we want to do "infinite precision"
> addition of two values when only the lowest N bits of those values
> have a (semantically) defined meaning?  Earlier in the thread it sounded
> like we both agreed that having undefined bits in the _representation_
> was bad.  So why do we want to do calculations on parts of values that
> are undefined in the (rtx) semantics?
> 
> E.g. say we're adding two rtx values whose mode just happens to be
> HOST_BITS_PER_WIDE_INT in size.  Why does it make sense to calculate
> the carry from adding the two HWIs, only to add it to an upper HWI
> that has no semantically-defined value?  It's garbage in, garbage out.

Not garbage in, and not garbage out (just wasted work).  That's
the possible downside - the upside is to get rid of the notion of
a 'precision'.

But yes, it's good that we agree on the fact that undefined bits
in the _representation_ are wrong.

OTOH they still will be in some ways "undefined" if you consider

  wide_int xw = from_rtx (xr, mode);
  tree xt = to_tree (xw, type);
  wide_int xw2 = from_tree (xt);

with an unsigned type, xw and xw2 will not be equal (in the
'extension' bits) for a value with MSB set.  That is, RTL
chooses to always sign-extend, tree chooses to extend according
to sign information.  wide-int chooses to ... ?  (it seems the
wide-int overall comment lost the part that defined its encoding,
but it seems that we still sign-extend val[len-1], so
(unsigned HOST_WIDE_INT)-1 is { -1U, 0 } with len == 2 and
(HOST_WIDE_INT)-1 is { -1 } with len == 1.  In RTL both
would be encoded with len == 1 (no distinction between a signed
and unsigned number with all bits set), on the current
tree representation the encoding would be with len == 1, too,
as we have TYPE_UNSIGNED to tell us the sign.)

So we still need to somehow "map" between those representations.

Coming from the tree side (as opposed to from the RTL side) I'd
have argued you need a 'signed' flag ;)  We side-stepped that
by doing the extension trick in a way that preserves sign information.

Looking at the RTL representation from that wide-int representation
makes RTL look as if all constants are signed.  That's fine if
code that wants to do unsigned stuff properly extends.  So - do
we need a from_rtx_unsigned () constructor that does this?
I'm worried about all the extensions done in operations like add ():

  if (p1 <= HOST_BITS_PER_WIDE_INT)
    {
      result.len = 1;
      result.precision = p1;
      result.val[0] = val[0] + s[0];
      if (p1 < HOST_BITS_PER_WIDE_INT)
        result.val[0] = sext_hwi (result.val[0], p1);
      if (sgn == SIGNED)
        {
          HOST_WIDE_INT x
            = (((result.val[0] ^ val[0]) & (result.val[0] ^ s[0]))
               >> (p1 - 1)) & 1;
          *overflow = (x != 0);
        }
      else
        *overflow = ((unsigned HOST_WIDE_INT) result.val[0]
                     < (unsigned HOST_WIDE_INT) val[0]);
      }

that's supposed to be a cheap operation ... and as far as I can see
it even computes the wrong outcome :/  Given precision 32 and the
unsigned value 0xfffffffe, thus { 0x00000000fffffffe }, and
adding 0 it will produce { 0xfffffffffffffffe }, the signed value -2.

So that

      if (p1 < HOST_BITS_PER_WIDE_INT)
        result.val[0] = sext_hwi (result.val[0], p1);

is clearly wrong.  Yes, it may be needed when constructing a
RTX with that value.

Richard.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-28 12:04           ` Richard Biener
@ 2013-08-28 12:32             ` Richard Sandiford
  2013-08-28 12:49               ` Richard Biener
  0 siblings, 1 reply; 50+ messages in thread
From: Richard Sandiford @ 2013-08-28 12:32 UTC (permalink / raw)
  To: Richard Biener; +Cc: Kenneth Zadeck, gcc-patches, Mike Stump

Richard Biener <rguenther@suse.de> writes:

> On Wed, 28 Aug 2013, Richard Sandiford wrote:
>
>> Richard Biener <rguenther@suse.de> writes:
>> >> So the precision variable is good for the rtl level in several ways:
>> >> 
>> >> - As you say, it avoids adding the explicit truncations that (in practice)
>> >>   every rtl operation would need
>> >> 
>> >> - It's more efficient in that case, since we don't calculate high values
>> >>   and then discard them immediately.  The common GET_MODE_PRECISION (mode)
>> >>   <= HOST_BITS_PER_WIDE_INT case stays a pure HWI operation, despite all
>> >>   the wide-int trappings.
>> >> 
>> >> - It's a good way of checking type safety and making sure that excess
>> >>   bits aren't accidentally given a semantic meaning.  This is the most
>> >>   important reason IMO.
>> >> 
>> >> The branch has both the constant-precision, very wide integers that we
>> >> want for trees and the variable-precision integers we want for rtl,
>> >> so it's not an "either or".  With the accessor-based implementation,
>> >> there should be very little cost to having both.
>> >
>> > So what I wonder (and where we maybe disagree) is how much code
>> > wants to inspect "intermediate" results.  Say originally you have
>> >
>> > rtx foo (rtx x, rtx y)
>> > {
>> >   rtx tem = simplify_const_binary_operation (PLUS, GET_MODE (x), x, 
>> > GEN_INT (1));
>> >   rtx res = simplify_const_binary_operation (MINUS, GET_MODE (tem), tem, 
>> > y);
>> >   return res;
>> > }
>> >
>> > and with wide-int you want to change that to
>> >
>> > rtx foo (rtx x, rtx y)
>> > {
>> >   wide_int tem = wide_int (x) + 1;
>> >   wide_int res = tem - y;
>> >   return res.to_rtx ();
>> > }
>> >
>> > how much code ever wants to inspect 'tem' or 'res'?
>> > That is, does it matter
>> > if 'tem' and 'res' would have been calculated in "infinite precision"
>> > and only to_rtx () would do the truncation to the desired mode?
>> >
>> > I think not.  The amount of code performing multiple operations on
>> > _constants_ in sequence is extremely low (if it even exists).
>> >
>> > So I'd rather have to_rtx get a mode argument (or a precision) and
>> > perform the required truncation / sign-extension at RTX construction
>> > time (which is an expensive operation anyway).
>> 
>> I agree this is where we disagree.  I don't understand why you think
>> the above is better.  Why do we want to do "infinite precision"
>> addition of two values when only the lowest N bits of those values
>> have a (semantically) defined meaning?  Earlier in the thread it sounded
>> like we both agreed that having undefined bits in the _representation_
>> was bad.  So why do we want to do calculations on parts of values that
>> are undefined in the (rtx) semantics?
>> 
>> E.g. say we're adding two rtx values whose mode just happens to be
>> HOST_BITS_PER_WIDE_INT in size.  Why does it make sense to calculate
>> the carry from adding the two HWIs, only to add it to an upper HWI
>> that has no semantically-defined value?  It's garbage in, garbage out.
>
> Not garbage in, and not garbage out (just wasted work).

Well, it's not garbage in the sense of an uninitialised HWI detected
by valgrind (say).  But it's semantic garbage.

> That's the possible downside - the upside is to get rid of the notion
> of a 'precision'.

No, it's still there, just in a different place.

> OTOH they still will be in some ways "undefined" if you consider
>
>   wide_int xw = from_rtx (xr, mode);
>   tree xt = to_tree (xw, type);
>   wide_int xw2 = from_tree (xt);
>
> with an unsigned type xw and xw2 will not be equal (in the
> 'extension' bits) for a value with MSB set.

Do you mean it's undefined as things stand, or when using "infinite
precision" for rtl?  It shouldn't lead to anything undefined at
the moment.  Only the low GET_MODE_BITSIZE (mode) bits of xw are
meaningful, but those are also the only bits that would be used.

> That is, RTL chooses to always sign-extend, tree chooses to extend
> according to sign information.  wide-int chooses to ... ?  (it seems
> the wide-int overall comment lost the part that defined its encoding,
> but it seems that we still sign-extend val[len-1], so (unsigned
> HOST_WIDE_INT)-1 is { -1U, 0 } with len == 2 and (HOST_WIDE_INT)-1 is
> { -1 } with len == 1.

Only if the precision is > HOST_BITS_PER_WIDE_INT.  If the precision
is HOST_BITS_PER_WIDE_INT then both are { -1U }.  "len" is never
greater than precision / HOST_BITS_PER_WIDE_INT, rounded up.
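
To make the encoding under discussion concrete, here is a toy sketch (not the branch's actual code; all names here are invented) of how a sign-extended, length-compressed representation distinguishes (HOST_WIDE_INT)-1 from (unsigned HOST_WIDE_INT)-1 at a 128-bit precision:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy model: a value of PRECISION bits stored as sign-extended 64-bit
// blocks, with blocks beyond LEN implied by sign-extending val[len-1].
typedef int64_t hwi;		// stand-in for HOST_WIDE_INT

struct toy_wide_int
{
  std::vector<hwi> val;
  unsigned precision;
};

// A signed HWI always fits in one block: the implied blocks are
// exactly its sign-extension.
static toy_wide_int
from_shwi (hwi x, unsigned precision)
{
  return { { x }, precision };
}

// An unsigned HWI at a wider precision needs an extra zero block when
// its top bit is set, to record that the value is positive.
static toy_wide_int
from_uhwi (uint64_t x, unsigned precision)
{
  toy_wide_int r = { { (hwi) x }, precision };
  if (precision > 64 && (hwi) x < 0)
    r.val.push_back (0);
  return r;
}
```

So (HOST_WIDE_INT)-1 encodes as { -1 } with len == 1 at any precision, while (unsigned HOST_WIDE_INT)-1 encodes as { -1, 0 } with len == 2 at 128 bits but collapses to a single all-ones block when the precision is exactly 64.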

> In RTL both would be encoded with len == 1 (no
> distinction between a signed and unsigned number with all bits set),

Same again: both are -1 if the mode is HOST_BITS_PER_WIDE_INT or smaller.
If the mode is wider then RTL too uses { -1, 0 }.  So the current wide_int
representation matches the RTL representation pretty closely, except for
the part about wide_int leaving excess bits undefined.  But that's just
a convenience; it isn't important in terms of what the operators do.

> on the current tree representation the encoding would be with len ==
> 1, too, as we have TYPE_UNSIGNED to tell us the sign.

OK.

> So we still need to somehow "map" between those representations.

Right, that's what the constructors, from_* and to_* routines do.

> Looking at the RTL representation from that wide-int representation
> makes RTL look as if all constants are signed.

Well, except for direct accessors like elt(), the wide-int representation
is shielded from the interface (as it should be).  It doesn't affect the
result of arithmetic.  The representation should have no visible effect
for rtl or tree users who avoid elt().

Thanks,
Richard

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: wide-int branch now up for public comment and review
  2013-08-28 12:32             ` Richard Sandiford
@ 2013-08-28 12:49               ` Richard Biener
  2013-08-28 16:58                 ` Mike Stump
  0 siblings, 1 reply; 50+ messages in thread
From: Richard Biener @ 2013-08-28 12:49 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Kenneth Zadeck, gcc-patches, Mike Stump

On Wed, 28 Aug 2013, Richard Sandiford wrote:

> Richard Biener <rguenther@suse.de> writes:
> 
> > On Wed, 28 Aug 2013, Richard Sandiford wrote:
> >
> >> Richard Biener <rguenther@suse.de> writes:
> >> >> So the precision variable is good for the rtl level in several ways:
> >> >> 
> >> >> - As you say, it avoids adding the explicit truncations that (in practice)
> >> >>   every rtl operation would need
> >> >> 
> >> >> - It's more efficient in that case, since we don't calculate high values
> >> >>   and then discard them immediately.  The common GET_MODE_PRECISION (mode)
> >> >>   <= HOST_BITS_PER_WIDE_INT case stays a pure HWI operation, despite all
> >> >>   the wide-int trappings.
> >> >> 
> >> >> - It's a good way of checking type safety and making sure that excess
> >> >>   bits aren't accidentally given a semantic meaning.  This is the most
> >> >>   important reason IMO.
> >> >> 
> >> >> The branch has both the constant-precision, very wide integers that we
> >> >> want for trees and the variable-precision integers we want for rtl,
> >> >> so it's not an "either or".  With the accessor-based implementation,
> >> >> there should be very little cost to having both.
> >> >
> >> > So what I wonder (and where we maybe disagree) is how much code
> >> > wants to inspect "intermediate" results.  Say originally you have
> >> >
> >> > rtx foo (rtx x, rtx y)
> >> > {
> >> >   rtx tem = simplify_const_binary_operation (PLUS, GET_MODE (x), x, 
> >> > GEN_INT (1));
> >> >   rtx res = simplify_const_binary_operation (MINUS, GET_MODE (tem), tem, 
> >> > y);
> >> >   return res;
> >> > }
> >> >
> >> > and with wide-int you want to change that to
> >> >
> >> > rtx foo (rtx x, rtx y)
> >> > {
> >> >   wide_int tem = wide_int (x) + 1;
> >> >   wide_int res = tem - y;
> >> >   return res.to_rtx ();
> >> > }
> >> >
> >> > how much code ever wants to inspect 'tem' or 'res'?
> >> > That is, does it matter
> >> > if 'tem' and 'res' would have been calculated in "infinite precision"
> >> > and only to_rtx () would do the truncation to the desired mode?
> >> >
> >> > I think not.  The amount of code performing multiple operations on
> >> > _constants_ in sequence is extremely low (if it even exists).
> >> >
> >> > So I'd rather have to_rtx get a mode argument (or a precision) and
> >> > perform the required truncation / sign-extension at RTX construction
> >> > time (which is an expensive operation anyway).
> >> 
> >> I agree this is where we disagree.  I don't understand why you think
> >> the above is better.  Why do we want to do "infinite precision"
> >> addition of two values when only the lowest N bits of those values
> >> have a (semantically) defined meaning?  Earlier in the thread it sounded
> >> like we both agreed that having undefined bits in the _representation_
> >> was bad.  So why do we want to do calculations on parts of values that
> >> are undefined in the (rtx) semantics?
> >> 
> >> E.g. say we're adding two rtx values whose mode just happens to be
> >> HOST_BITS_PER_WIDE_INT in size.  Why does it make sense to calculate
> >> the carry from adding the two HWIs, only to add it to an upper HWI
> >> that has no semantically-defined value?  It's garbage in, garbage out.
> >
> > Not garbage in, and not garbage out (just wasted work).
> 
> Well, it's not garbage in the sense of an uninitialised HWI detected
> by valgrind (say).  But it's semantic garbage.
> 
> > That's the possible downside - the upside is to get rid of the notion
> > of a 'precision'.
> 
> No, it's still there, just in a different place.
> 
> > OTOH they still will be in some ways "undefined" if you consider
> >
> >   wide_int xw = from_rtx (xr, mode);
> >   tree xt = to_tree (xw, type);
> >   wide_int xw2 = from_tree (xt);
> >
> > with an unsigned type xw and xw2 will not be equal (in the
> > 'extension' bits) for a value with MSB set.
> 
> Do you mean it's undefined as things stand, or when using "infinite
> precision" for rtl?  It shouldn't lead to anything undefined at
> the moment.  Only the low GET_MODE_BITSIZE (mode) bits of xw are
> meaningful, but those are also the only bits that would be used.
> 
> > That is, RTL chooses to always sign-extend, tree chooses to extend
> > according to sign information.  wide-int chooses to ... ?  (it seems
> > the wide-int overall comment lost the part that defined its encoding,
> > but it seems that we still sign-extend val[len-1], so (unsigned
> > HOST_WIDE_INT)-1 is { -1U, 0 } with len == 2 and (HOST_WIDE_INT)-1 is
> > { -1 } with len == 1.
> 
> Only if the precision is > HOST_BITS_PER_WIDE_INT.  If the precision
> is HOST_BITS_PER_WIDE_INT then both are { -1U }.

That wasn't my understanding of how things work.

> "len" is never
> greater than precision / HOST_BITS_PER_WIDE_INT, rounded up.

"len" can be one larger than precision * HOST_BITS_PER_WIDE_INT as
I originally designed the encoding scheme.  It was supposed to
be able to capture the difference between a positive and a negative
number (unlike the RTL rep).

I see canonize() truncates to blocks_needed.

That said, I'm still missing one of my most important requests:

 - all references to HOST_WIDE_INT (and HOST_BITS_PER_WIDE_INT and
   friends) need to go and be replaced with a private typedef
   and proper constants (apart from in the _hwi interface API of course)
 - wide_int needs to work with the storage not being HOST_WIDE_INT,
   in the end it should be HOST_WIDEST_FAST_INT, but for testing coverage
   it ideally should work for 'signed char' as well (at _least_ it needs
   to work for plain 'int')
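
A rough sketch of what that request might look like (purely illustrative; the type and function names are invented): the element type becomes a template parameter, so the same block-splitting logic can be exercised with 'signed char' storage in tests and a wide fast type in production:

```cpp
#include <cassert>

// Parameterize the storage element type so the same logic can run
// over 'signed char' in tests and HOST_WIDEST_FAST_INT in production.
template <typename STORAGE, int MAX_ELTS>
struct toy_wide_int
{
  static const int elt_bits = (int) sizeof (STORAGE) * 8;
  STORAGE val[MAX_ELTS];
  unsigned len;

  // Split X into sign-extended STORAGE-sized blocks, dropping blocks
  // that are implied by sign-extension of the last kept one.
  // (The STORAGE cast relies on GCC's modular conversion behavior.)
  static toy_wide_int
  from_hwi (long long x)
  {
    toy_wide_int r;
    r.len = 0;
    long long rest = x;
    for (;;)
      {
	STORAGE elt = (STORAGE) rest;
	r.val[r.len++] = elt;
	rest >>= elt_bits - 1;
	rest >>= 1;		/* two shifts: no UB if elt_bits == 64 */
	if ((elt < 0 && rest == -1) || (elt >= 0 && rest == 0))
	  return r;
      }
  }
};
```

With 8-bit storage, 300 splits into two blocks { 44, 1 }, -1 compresses to a single block, and 128 needs a trailing zero block to stay positive, mirroring the 64-bit case above.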

> > In RTL both would be encoded with len == 1 (no
> > distinction between a signed and unsigned number with all bits set),
> 
> Same again: both are -1 if the mode is HOST_BITS_PER_WIDE_INT or smaller.
> If the mode is wider then RTL too uses { -1, 0 }.  So the current wide_int
> representation matches the RTL representation pretty closely, except for
> the part about wide_int leaving excess bits undefined.  But that's just
> a convenience; it isn't important in terms of what the operators do.
> 
> > on the current tree representation the encoding would be with len ==
> > 1, too, as we have TYPE_UNSIGNED to tell us the sign.
> 
> OK.
> 
> > So we still need to somehow "map" between those representations.
> 
> Right, that's what the constructors, from_* and to_* routines do.

I wonder where the from_tree and to_tree ones are?  Are they
from_double_int / wide_int_to_tree (what's wide_int_to_infinite_tree?)

> > Looking at the RTL representation from that wide-int representation
> > makes RTL look as if all constants are signed.
> 
> Well, except for direct accessors like elt(), the wide-int representation
> is shielded from the interface (as it should be).  It doesn't affect the
> result of arithmetic.  The representation should have no visible effect
> for rtl or tree users who avoid elt().

True.  Though it should be one that allows an efficient implementation.

Richard.


* Re: wide-int branch now up for public comment and review
  2013-08-28  9:06   ` Richard Biener
  2013-08-28  9:51     ` Richard Sandiford
@ 2013-08-28 13:11     ` Kenneth Zadeck
  2013-08-29  0:15     ` Kenneth Zadeck
  2 siblings, 0 replies; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-28 13:11 UTC (permalink / raw)
  To: Richard Biener; +Cc: Richard Sandiford, gcc-patches, Mike Stump, r.sandiford


>>>    Note that the bits above the precision are not defined and the
>>>    algorithms used here are careful not to depend on their value.  In
>>>    particular, values that come in from rtx constants may have random
>>>    bits.
> Which is a red herring.  It should be fixed.  I cannot even believe
> that sentence given the uses of CONST_DOUBLE_LOW/CONST_DOUBLE_HIGH
> or INTVAL/UINTVAL.  I don't see accesses masking out 'undefined' bits
> anywhere.

Richi,

you asked for the entire patch as a branch so that you could look at the 
whole picture.
It is now time for you to do that.    I understand it is very large and 
it will take some time for you to get your head around the whole 
thing.   But remember this:

The vast majority of the clients are dealing with intermediate code that 
has explicit promotions.    Not only the rtl level but also the majority 
of the tree level takes inputs whose explicit precisions have been 
matched, and wants to see an explicit-precision result.   For this code, 
doing the fixed-precision thing, where you never ask about what is 
behind the curtain, is a very good match.

However, there are parts of the compiler, all within the tree or gimple 
level, that do not have this view.   For these, there are two templates 
that export an interface that behaves in a manner very similar to what 
double-int does when the precision is smaller than 128 bits.  (We 
discovered a large number of bugs when using double-int for TImode 
because they made an infinite-precision assumption, but that is 
another story.)  All numbers are converted to signed numbers that are 
extended based on their input type, and the math is performed in a 
field large enough that they never push near the end.   We know what 
the end is because we sniff the port.

At this point we looked at the pass we were converting and used the 
appropriate implementation that matched the style of coding and the 
algorithm, i.e. we made no substantive changes.  As mentioned in my 
earlier mail, I plan to change tree-ssa-ccp in the future to use the 
fixed-precision form, but that change is motivated by being able to 
find more constants, not by which representation is more beautiful.   
But this is the only case where I think the rep should be 
substantially changed.

I know you find the fixed-precision stuff unappealing.   But the truth 
is that you wanted us to make this patch so that it did as little damage 
as possible to the way the compiler worked, and given that so much of the 
compiler actually does fixed-precision math, this is the path of least 
resistance.

If it is reasonable to change the rtl, we may change that, but the truth 
is that the clients never see this, so it is not as much of an issue as 
you are making it.   Now that you can see all of the clients, you can 
judge this for yourself.

Kenny



* Re: wide-int branch now up for public comment and review
  2013-08-28 10:40       ` Richard Biener
  2013-08-28 11:52         ` Richard Sandiford
@ 2013-08-28 16:08         ` Mike Stump
  2013-08-29  7:42           ` Richard Biener
  1 sibling, 1 reply; 50+ messages in thread
From: Mike Stump @ 2013-08-28 16:08 UTC (permalink / raw)
  To: Richard Biener; +Cc: Richard Sandiford, Kenneth Zadeck, gcc-patches

On Aug 28, 2013, at 3:22 AM, Richard Biener <rguenther@suse.de> wrote:
> Btw, rtl.h still wastes space with
> 
> struct GTY((variable_size)) hwivec_def {
>  int num_elem;         /* number of elements */
>  HOST_WIDE_INT elem[1];
> };
> 
> struct GTY((chain_next ("RTX_NEXT (&%h)"),
>            chain_prev ("RTX_PREV (&%h)"), variable_size)) rtx_def {
> ...
>  /* The first element of the operands of this rtx.
>     The number of operands and their types are controlled
>     by the `code' field, according to rtl.def.  */
>  union u {
>    rtunion fld[1];
>    HOST_WIDE_INT hwint[1];
>    struct block_symbol block_sym;
>    struct real_value rv;
>    struct fixed_value fv;
>    struct hwivec_def hwiv;
>  } GTY ((special ("rtx_def"), desc ("GET_CODE (&%0)"))) u;
> };
> 
> there are 32bits available before the union.  If you don't use
> those for num_elem then all wide-ints will at least take as
> much space as DOUBLE_INTs originally took - and large ints
> that would have required DOUBLE_INTs in the past will now
> require more space than before.  Which means your math
> motivating the 'num_elem' encoding stuff is wrong.  With
> moving 'num_elem' before u you can even re-use the hwint
> field in the union as the existing double-int code does
> (which in fact could simply do the encoding trick in the
> old CONST_DOUBLE scheme, similar for the tree INTEGER_CST
> container).

So, HOST_WIDE_INT is likely 64 bits, and likely is 64-bit aligned.  The base (stuff before the union) is 32 bits.  There is a 32-bit gap before the HOST_WIDE_INT elem, even if it is not used.  We place num_elem in this gap.  Even if the field were removed, the size would not change, nor the placement of elem.  So, short of packing, a 32-bit HWI host, or going with a 32-bit type instead of a HOST_WIDE_INT, I'm not sure I follow you?  I tend to discount 32-bit hosted compilers as a thing of the past.


* Re: wide-int branch now up for public comment and review
  2013-08-28 12:49               ` Richard Biener
@ 2013-08-28 16:58                 ` Mike Stump
  2013-08-28 21:15                   ` Kenneth Zadeck
  0 siblings, 1 reply; 50+ messages in thread
From: Mike Stump @ 2013-08-28 16:58 UTC (permalink / raw)
  To: Richard Biener; +Cc: Richard Sandiford, Kenneth Zadeck, gcc-patches

On Aug 28, 2013, at 5:48 AM, Richard Biener <rguenther@suse.de> wrote:
>> Only if the precision is > HOST_BITS_PER_WIDE_INT.  If the precision
>> is HOST_BITS_PER_WIDE_INT then both are { -1U }.
> 
> That wasn't my understanding of how things work.

You are thinking about prec==0 numbers.  These are useful and important for ease of use of the wide-int package.  They allow one to do:

  wide_int w = …;

  w = w + 6;
  w = w - 3;
  w = w + (unsigned HOST_WIDE_INT)~0;

and extend the constant out to the precision of the other side.  This is a very narrow feature and not a general property of a wide_int.  In general, signedness of a wide_int is an external feature of wide_int.  We only permit prec==0 numbers for ease of use, and ease of use needs to track the sign in the general case.  Now, one is free to have a precision that allows the sign to be stored; this is available to the user if they want it.  They merely are not forced to do this.  For example, RTL largely doesn't want or need a sign.
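
A toy illustration of that convenience (hypothetical names, not the branch's API): the bare C constant has no precision of its own, so it takes on the precision of the wide operand, and the result is truncated back to that precision:

```cpp
#include <cassert>
#include <cstdint>

// Toy model restricted to precisions <= 64 bits.
struct toy_wide_int
{
  uint64_t low;
  unsigned precision;
};

static uint64_t
mask (unsigned precision)
{
  return precision >= 64 ? ~(uint64_t) 0 : ((uint64_t) 1 << precision) - 1;
}

// The plain integer operand adopts the wide operand's precision: the
// cast sign-extends it to 64 bits, then the sum is truncated back to
// the wide operand's precision.
static toy_wide_int
operator+ (toy_wide_int w, int64_t c)
{
  return { (w.low + (uint64_t) c) & mask (w.precision), w.precision };
}
```

So with an 8-bit w holding 250, w + 6 wraps to 0 and w + -1 gives 249: the constant behaved as if extended to w's precision before the arithmetic.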

>> Right, that's what the constructors, from_* and to_* routines do.
> 
> I wonder where the from_tree and to_tree ones are?

tree t;
wide_int w = t;

wide_int_to_tree needs an additional type, so, the spelling is not as short out of necessity.

> Are they
> from_double_int / wide_int_to_tree (what's wide_int_to_infinite_tree?)

I think wide_int_to_infinite_tree is leftover junk.  I removed it:

diff --git a/gcc/wide-int.h b/gcc/wide-int.h
index 86be20a..83c2170 100644
--- a/gcc/wide-int.h
+++ b/gcc/wide-int.h
@@ -4203,8 +4203,6 @@ wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
 /* tree related routines.  */
 
 extern tree wide_int_to_tree (tree type, const wide_int_ro &cst);
-extern tree wide_int_to_infinite_tree (tree type, const wide_int_ro &cst,
-                                      unsigned int prec);
 extern tree force_fit_type_wide (tree, const wide_int_ro &, int, bool);
 
 /* real related routines.  */


* Re: wide-int branch now up for public comment and review
  2013-08-28 16:58                 ` Mike Stump
@ 2013-08-28 21:15                   ` Kenneth Zadeck
  2013-08-29  3:18                     ` Mike Stump
  0 siblings, 1 reply; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-28 21:15 UTC (permalink / raw)
  To: Mike Stump; +Cc: Richard Biener, Richard Sandiford, gcc-patches

On 08/28/2013 12:45 PM, Mike Stump wrote:
> On Aug 28, 2013, at 5:48 AM, Richard Biener <rguenther@suse.de> wrote:
>>> Only if the precision is > HOST_BITS_PER_WIDE_INT.  If the precision
>>> is HOST_BITS_PER_WIDE_INT then both are { -1U }.
>> That wasn't my understanding of how things work.
> You are thinking about prec==0 numbers.  These are useful and important for ease of use of the wide-int package.  They allow one to do:
>
>    wide_int w = …;
>
>    w = w + 6;
>    w = w - 3;
>    w = w + (unsigned HOST_WIDE_INT)~0;
>
> and extend the constant out to the precision of the other side.  This is a very narrow feature and not a general property of a wide_int.  In general, signedness of a wide_int is an external feature of wide_int.  We only permit prec==0 numbers for ease of use, and ease of use needs to track the sign, in the general case.  Now, one is free to have a precision that allows the sign to be stored, this is available to the user, if they want.  They merely are not forced to do this.  For example, RTL largely doesn't want or need a sign.
>
>>> Right, that's what the constructors, from_* and to_* routines do.
>> I wonder where the from_tree and to_tree ones are?
> tree t;
> wide_int w = t;
>
> wide_int_to_tree needs an additional type, so, the spelling is not as short out of necessity.
>
I made wide_int_to_tree a function that lives in tree.[ch], not a member 
function of wide-int.    This seemed to be consistent with the way other 
things were done.    If you want it to be a member function, that is 
certainly doable.


>> Are they
>> from_double_int / wide_int_to_tree (what's wide_int_to_infinite_tree?)
> I think wide_int_to_infinite_tree is leftover junk.  I removed it:
>
> diff --git a/gcc/wide-int.h b/gcc/wide-int.h
> index 86be20a..83c2170 100644
> --- a/gcc/wide-int.h
> +++ b/gcc/wide-int.h
> @@ -4203,8 +4203,6 @@ wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
>   /* tree related routines.  */
>   
>   extern tree wide_int_to_tree (tree type, const wide_int_ro &cst);
> -extern tree wide_int_to_infinite_tree (tree type, const wide_int_ro &cst,
> -                                      unsigned int prec);
>   extern tree force_fit_type_wide (tree, const wide_int_ro &, int, bool);
>   
>   /* real related routines.  */
>


* Re: wide-int branch now up for public comment and review
  2013-08-28  9:06   ` Richard Biener
  2013-08-28  9:51     ` Richard Sandiford
  2013-08-28 13:11     ` Kenneth Zadeck
@ 2013-08-29  0:15     ` Kenneth Zadeck
  2013-08-29  9:13       ` Richard Biener
  2 siblings, 1 reply; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-29  0:15 UTC (permalink / raw)
  To: Richard Biener; +Cc: Richard Sandiford, gcc-patches, Mike Stump, r.sandiford


>>>    Note that the bits above the precision are not defined and the
>>>    algorithms used here are careful not to depend on their value.  In
>>>    particular, values that come in from rtx constants may have random
>>>    bits.
> Which is a red herring.  It should be fixed.  I cannot even believe
> that sentence given the uses of CONST_DOUBLE_LOW/CONST_DOUBLE_HIGH
> or INTVAL/UINTVAL.  I don't see accesses masking out 'undefined' bits
> anywhere.
>
I can agree with you that this could be fixed.   But it is not necessary 
to fix it.   The rtl level and most of the tree level have existed for a 
long time by doing math within the precision.

You do not see the masking at the rtl level because the masking is not 
necessary: if you never look beyond the precision, you just do not 
care.    There is the issue with divide and mod, and quite frankly the 
code on the trunk scares me to death.   My code at the rtl level makes 
sure that everything is clean, since seeing whether it is a udiv or a 
div is enough info to clean the value up.
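
A sketch of that clean-up (illustrative only: the branch has a real sext_hwi, but this toy version and clean_after_div are invented to show the idea): the opcode tells us whether to zero- or sign-extend the low PRECISION bits back into canonical form.

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend the low PRECISION bits of X.  The right shift of a
   negative value relies on arithmetic-shift behavior, which GCC
   guarantees on its host compilers.  */
static int64_t
sext_hwi (int64_t x, unsigned precision)
{
  int shift = 64 - (int) precision;
  return (int64_t) ((uint64_t) x << shift) >> shift;
}

/* Zero-extend the low PRECISION bits of X.  */
static int64_t
zext_hwi (int64_t x, unsigned precision)
{
  return precision >= 64 ? x : (int64_t) (x & (((uint64_t) 1 << precision) - 1));
}

/* After a division, re-extend the quotient within PRECISION bits:
   zero-extend for udiv, sign-extend for div.  */
static int64_t
clean_after_div (int64_t quotient, unsigned precision, int is_udiv)
{
  return is_udiv ? zext_hwi (quotient, precision)
		 : sext_hwi (quotient, precision);
}
```

For example, an 8-bit quotient with bit pattern 0xFF cleans to -1 after a signed divide but to 255 after an unsigned one.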

>> I have a feeling I'm rehashing a past debate, sorry, but rtx constants can't
>> have random bits.  The upper bits must be a sign extension of the value.
>> There's exactly one valid rtx for each (value, mode) pair.  If you saw
>> something different then that sounds like a bug.  The rules should already
>> be fairly well enforced though, since something like (const_int 128) --
>> or (const_int 256) -- will not match a QImode operand.
> See.  We're saved ;)
This is Richard's theory.   There is a block of code at wide-int.cc:175 
that is ifdefed out that checks whether the rtl consts are 
canonical.   If you bring it in, building the libraries fails very 
quickly.   The million-dollar question is whether this is the only bug 
or the first of a thousand.   Your comment about not seeing any 
masking cuts both ways: there is very little code on the way in, if you 
create ints with GEN_INT, that makes sure someone has done the right 
thing on that side.   So my guess is that there are a lot of failures, 
and a large number of them will be in the ports.

If Richard is right, then there will be changes.  The internals of 
wide-int will be changed so that everything is maintained in canonical 
form rather than just doing the canonization on the outside.   This will 
clean up things like fits_uhwi_p and a lot more of the wide-int internals.

But I think that if you want to change the compiler to use infinite 
precision arithmetic, you really ought to have a better reason than 
that you think it is cleaner.   Because it buys you nothing for most of 
the compiler, it is slower, AND it is a much bigger change to the 
compiler than the one we want to make.


>> This is probably the part of the representation that I disagree most with.
>> There seem to be two main ways we could hande the extension to whole HWIs:
>>
>> (1) leave the stored upper bits undefined and extend them on read
>> (2) keep the stored upper bits in extended form
>>
>> The patch goes for (1) but (2) seems better to me, for a few reasons:
> I agree whole-heartedly.
My statement is above: if we can do (2) then we should.   But I do not 
think that it is worth several person-years to do this.
>
>> * As above, constants coming from rtl are already in the right form,
>>    so if you create a wide_int from an rtx and only query it, no explicit
>>    extension is needed.
>>
>> * Things like logical operations and right shifts naturally preserve
>>    the sign-extended form, so only a subset of write operations need
>>    to take special measures.
>>
>> * You have a public interface that exposes the underlying HWIs
>>    (which is fine with me FWIW), so it seems better to expose a fully-defined
>>    HWI rather than only a partially-defined HWI.
>>
>> E.g. zero_p is:
>>
>>    HOST_WIDE_INT x;
>>
>>    if (precision && precision < HOST_BITS_PER_WIDE_INT)
>>      x = sext_hwi (val[0], precision);
>>    else if (len == 0)
>>      {
>>        gcc_assert (precision == 0);
>>        return true;
>>      }
>>    else
>>      x = val[0];
>>
>>    return len == 1 && x == 0;
>>
>> but I think it really ought to be just:
>>
>>    return len == 1 && val[0] == 0;
> Yes!
>
> But then - what value does keeping track of a 'precision' have
> in this case?  It seems to me it's only a "convenient carrier"
> for
>
>    wide_int x = wide-int-from-RTX (y);
>    machine_mode saved_mode = mode-available? GET_MODE (y) : magic-mode;
>    ... process x ...
>    RTX = RTX-from-wide_int (x, saved_mode);
>
> that is, wide-int doesn't do anything with 'precision' but you
> can extract it later to not need to remember a mode you were
> interested in?
>
> Oh, and of course some operations require a 'precision', like rotate.
As I have said, when the operands and result are all precision correct, 
as they generally are in both rtl and tree, the default fixed-precision 
interface is really very clean.    I know you object to some 
implementation issues, but this is why we put the branch up, so you can 
see what you get from the other side.


>>>    When the precision is 0, all the bits in the LEN elements of
>>>    VEC are significant with no undefined bits.  Precisionless
>>>    constants are limited to being one or two HOST_WIDE_INTs.  When two
>>>    are used the upper value is 0, and the high order bit of the first
>>>    value is set.  (Note that this may need to be generalized if it is
>>>    ever necessary to support 32bit HWIs again).
>> I didn't understand this.  When are two HOST_WIDE_INTs needed for
>> "precision 0"?
> For the wide_int containing unsigned HOST_WIDE_INT ~0.  As we
> sign-extend the representation (heh, yes, we do or should!) we
> require an extra HWI to store the fact that ~0 is unsigned.
>
>> The main thing that's changed since the early patches is that we now
>> have a mixture of wide-int types.  This seems to have led to a lot of
>> boiler-plate forwarding functions (or at least it felt like that while
>> moving them all out the class).  And that in turn seems to be because
>> you're trying to keep everything as member functions.  E.g. a lot of the
>> forwarders are from a member function to a static function.
>>
>> Wouldn't it be better to have the actual classes be light-weight,
>> with little more than accessors, and do the actual work with non-member
>> template functions?  There seems to be 3 grades of wide-int:
>>
>>    (1) read-only, constant precision  (from int, etc.)
>>    (2) read-write, constant precision  (fixed_wide_int)
>>    (3) read-write, variable precision  (wide_int proper)
>>
>> but we should be able to hide that behind templates, with compiler errors
>> if you try to write to (1), etc.
> Yeah, I'm probably trying to clean up the implementation once I
> got past recovering from two months without GCC ...

> Richard.


* Re: wide-int branch now up for public comment and review
  2013-08-28 21:15                   ` Kenneth Zadeck
@ 2013-08-29  3:18                     ` Mike Stump
  0 siblings, 0 replies; 50+ messages in thread
From: Mike Stump @ 2013-08-29  3:18 UTC (permalink / raw)
  To: Kenneth Zadeck; +Cc: Richard Biener, Richard Sandiford, gcc-patches

On Aug 28, 2013, at 1:41 PM, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
> On 08/28/2013 12:45 PM, Mike Stump wrote:
>> 
>> tree t;
>> wide_int w = t;
>> 
>> wide_int_to_tree needs an additional type, so, the spelling is not as short out of necessity.

> I made wide_int_to_tree a function that lives in tree.[ch], not a member function of wide-int.    This seemed to be consistent with the way other things were done.    If you want it to be a member function, that is certainly doable.

EOUTOFDATE:

   There are constructors to create the various forms of wide-int from
   trees, rtl and constants.  For trees and constants, you can simply say:

             tree t = ...;
             wide_int x = t;
             wide_int y = 6;
public:
  wide_int_ro ();
  wide_int_ro (const_tree);

:-)


* Re: wide-int branch now up for public comment and review
  2013-08-28 16:08         ` Mike Stump
@ 2013-08-29  7:42           ` Richard Biener
  2013-08-29 19:34             ` Mike Stump
  0 siblings, 1 reply; 50+ messages in thread
From: Richard Biener @ 2013-08-29  7:42 UTC (permalink / raw)
  To: Mike Stump; +Cc: Richard Sandiford, Kenneth Zadeck, gcc-patches

On Wed, 28 Aug 2013, Mike Stump wrote:

> On Aug 28, 2013, at 3:22 AM, Richard Biener <rguenther@suse.de> wrote:
> > Btw, rtl.h still wastes space with
> > 
> > struct GTY((variable_size)) hwivec_def {
> >  int num_elem;         /* number of elements */
> >  HOST_WIDE_INT elem[1];
> > };
> > 
> > struct GTY((chain_next ("RTX_NEXT (&%h)"),
> >            chain_prev ("RTX_PREV (&%h)"), variable_size)) rtx_def {
> > ...
> >  /* The first element of the operands of this rtx.
> >     The number of operands and their types are controlled
> >     by the `code' field, according to rtl.def.  */
> >  union u {
> >    rtunion fld[1];
> >    HOST_WIDE_INT hwint[1];
> >    struct block_symbol block_sym;
> >    struct real_value rv;
> >    struct fixed_value fv;
> >    struct hwivec_def hwiv;
> >  } GTY ((special ("rtx_def"), desc ("GET_CODE (&%0)"))) u;
> > };
> > 
> > there are 32bits available before the union.  If you don't use
> > those for num_elem then all wide-ints will at least take as
> > much space as DOUBLE_INTs originally took - and large ints
> > that would have required DOUBLE_INTs in the past will now
> > require more space than before.  Which means your math
> > motivating the 'num_elem' encoding stuff is wrong.  With
> > moving 'num_elem' before u you can even re-use the hwint
> > field in the union as the existing double-int code does
> > (which in fact could simply do the encoding trick in the
> > old CONST_DOUBLE scheme, similar for the tree INTEGER_CST
> > container).
> 
> So, HOST_WIDE_INT is likely 64 bits, and likely is 64 bit aligned.  The 
> base (stuff before the union) is 32 bits.  There is a 32 bit gap, even 
> if not used before the HOST_WIDE_INT elem.  We place the num_elem is 
> this gap.

No, you don't.  You place num_elem 64bit aligned _after_ the gap.
And you have another 32bit gap, as you say, before elem.

> Even if the field were removed, the size would not change, 
> nor the placement of elem.  So, short of packing, a 32-bit HWI host or 
> going with a 32-bit type instead of a HOST_WIDE_INT, I'm not sure I 
> follow you?  I tend to discount 32-bit hosted compilers as a thing of 
> the past.

Me, too.  On 32bit hosts nothing would change as 'u' is 32bit aligned
there (ok, on 32bit hosts putting num_elem before 'u' would actually
increase memory usage - but as you say, 32bit hosted compilers are
a thing of the past ;)).

Richard.


* Re: wide-int branch now up for public comment and review
  2013-08-29  0:15     ` Kenneth Zadeck
@ 2013-08-29  9:13       ` Richard Biener
  2013-08-29 12:38         ` Kenneth Zadeck
  0 siblings, 1 reply; 50+ messages in thread
From: Richard Biener @ 2013-08-29  9:13 UTC (permalink / raw)
  To: Kenneth Zadeck; +Cc: Richard Sandiford, gcc-patches, Mike Stump, r.sandiford

On Wed, 28 Aug 2013, Kenneth Zadeck wrote:

> 
> > > >    Note that the bits above the precision are not defined and the
> > > >    algorithms used here are careful not to depend on their value.  In
> > > >    particular, values that come in from rtx constants may have random
> > > >    bits.
> > Which is a red herring.  It should be fixed.  I cannot even believe
> > that sentence given the uses of CONST_DOUBLE_LOW/CONST_DOUBLE_HIGH
> > or INTVAL/UINTVAL.  I don't see accesses masking out 'undefined' bits
> > anywhere.
> > 
> I can agree with you that this could be fixed.   But it is not necessary to
> fix it.   The rtl level and most of the tree level has existed for a long time
> by doing math within the precision.
> 
> you do not see the masking at the rtl level because the masking is not
> necessary.    if you never look beyond the precision you just do not care.
> There is the issue with divide and mod and quite frankly the code on the trunk
> scares me to death.   my code at the rtl level makes sure that everything is
> clean since i can see if it is a udiv or a div that is enough info to clean
> the value up.
> 
> > > I have a feeling I'm rehashing a past debate, sorry, but rtx constants
> > > can't
> > > have random bits.  The upper bits must be a sign extension of the value.
> > > There's exactly one valid rtx for each (value, mode) pair.  If you saw
> > > something different then that sounds like a bug.  The rules should already
> > > be fairly well enforced though, since something like (const_int 128) --
> > > or (const_int 256) -- will not match a QImode operand.
> > See.  We're saved ;)
> this is richard's theory.   There is a block of code at wide-int.cc:175 that
> is ifdefed out that checks to see if the rtl consts are canonical.   if you
> bring it in, building the libraries fails very quickly.   The million dollar
> question is this the only bug or is this the first bug of a 1000 bugs.   Your
> comment about not seeing any masking cuts both ways.   There is very little
> code on the way in if you create ints with GEN_INT that makes sure someone has
> done the right thing on that side.   So my guess is that there are a lot of
> failures and a large number of them will be in the ports.

Well, clearly RTL code _expects_ constants to be properly extended.  I
can reproduce the bug and I simply guess that's a matter of mismatching
modes here (or performing an implicit truncation without properly
extending the constant).

    at /space/rguenther/src/svn/wide-int/gcc/combine.c:10086
10086                                                      GEN_INT (count));
(gdb) l
10081
10082                 mask_rtx = GEN_INT (nonzero_bits (varop, GET_MODE (varop)));
10083
10084                 mask_rtx
10085                   = simplify_const_binary_operation (code, result_mode, mask_rtx,
10086                                                      GEN_INT (count));

uses of GEN_INT are frowned upon ... for exactly that reason - the
mask_rtx is not a proper RTL constant for SImode.

Btw, all this isn't a reason to not have a well-defined wide-int
representation.  You just have (as you have for trees) to properly
canonize the representation at the time you convert from RTL
constants to wide-int constants.

In the long run we want a uniform representation of constants
so we can do zero-copying - but it looks like we now have
three different representations - the tree one (sign or zero
extended dependent on sign), RTL (garbage as you show) and
wide-int (always sign-extended).

That's why I was looking at at least matching what tree does
(because tree constants _are_ properly canonized).

Richard.


* Re: wide-int branch now up for public comment and review
  2013-08-29  9:13       ` Richard Biener
@ 2013-08-29 12:38         ` Kenneth Zadeck
  0 siblings, 0 replies; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-29 12:38 UTC (permalink / raw)
  To: Richard Biener; +Cc: Richard Sandiford, gcc-patches, Mike Stump, r.sandiford

On 08/29/2013 04:42 AM, Richard Biener wrote:
> On Wed, 28 Aug 2013, Kenneth Zadeck wrote:
>
>>>>>     Note that the bits above the precision are not defined and the
>>>>>     algorithms used here are careful not to depend on their value.  In
>>>>>     particular, values that come in from rtx constants may have random
>>>>>     bits.
>>> Which is a red herring.  It should be fixed.  I cannot even believe
>>> that sentence given the uses of CONST_DOUBLE_LOW/CONST_DOUBLE_HIGH
>>> or INTVAL/UINTVAL.  I don't see accesses masking out 'undefined' bits
>>> anywhere.
>>>
>> I can agree with you that this could be fixed.   But it is not necessary to
>> fix it.   The rtl level and most of the tree level has existed for a long time
>> by doing math within the precision.
>>
>> you do not see the masking at the rtl level because the masking is not
>> necessary.    if you never look beyond the precision you just do not care.
>> There is the issue with divide and mod and quite frankly the code on the trunk
>> scares me to death.   my code at the rtl level makes sure that everything is
>> clean since i can see if it is a udiv or a div that is enough info to clean
>> the value up.
>>
>>>> I have a feeling I'm rehashing a past debate, sorry, but rtx constants
>>>> can't
>>>> have random bits.  The upper bits must be a sign extension of the value.
>>>> There's exactly one valid rtx for each (value, mode) pair.  If you saw
>>>> something different then that sounds like a bug.  The rules should already
>>>> be fairly well enforced though, since something like (const_int 128) --
>>>> or (const_int 256) -- will not match a QImode operand.
>>> See.  We're saved ;)
>> this is richard's theory.   There is a block of code at wide-int.cc:175 that
>> is ifdefed out that checks to see if the rtl consts are canonical.   if you
>> bring it in, building the libraries fails very quickly.   The million dollar
>> question is this the only bug or is this the first bug of a 1000 bugs.   Your
>> comment about not seeing any masking cuts both ways.   There is very little
>> code on the way in if you create ints with GEN_INT that makes sure someone has
>> done the right thing on that side.   So my guess is that there are a lot of
>> failures and a large number of them will be in the ports.
> Well, clearly RTL code _expects_ constants to be properly extended.  I
> can reproduce the bug and I simply guess that's a matter of mismatching
> modes here (or performing an implicit truncation without properly
> extending the constant).
>
>      at /space/rguenther/src/svn/wide-int/gcc/combine.c:10086
> 10086                                                      GEN_INT (count));
> (gdb) l
> 10081
> 10082                 mask_rtx = GEN_INT (nonzero_bits (varop, GET_MODE (varop)));
> 10083
> 10084                 mask_rtx
> 10085                   = simplify_const_binary_operation (code, result_mode, mask_rtx,
> 10086                                                      GEN_INT (count));
>
> uses of GEN_INT are frowned upon ... for exactly that reason - the
> mask_rtx is not a proper RTL constant for SImode.
Over time, the GEN_INTs will go away at the portable rtl level as more 
of the code is transitioned to use wide-int.    The port story is not so 
good.   Any port that uses TI or beyond will likely evolve to using 
wide-int for the math and to unifying the cases between CONST_INT and 
CONST_WIDE_INT.  (The wide-int constructor from rtl takes either, and the 
constructor to rtl looks at the constant.)  But a port like MIPS that 
has no TI will likely never change.
>
> Btw, all this isn't a reason to not have a well-defined wide-int
> representation.  You just have (as you have for trees) to properly
> canonize the representation at the time you convert from RTL
> constants to wide-int constants.
The wide-int representation is completely well defined; I cannot see 
why you do not understand this.   You do not need to look above 
the precision, so there is nothing undefined!!!!!!   I know this bothers 
you very badly, and I will agree that if it is easy to clean up rtl to 
match this, then it would simplify the code somewhat.    But the 
representation is not undefined or random.   Furthermore, cleaning rtl up 
will not change the ABI, so if rtl gets cleaner, we can change this.

> In the long run we want a uniform representation of constants
> so we can do zero-copying - but it looks like we now have
> three different representations - the tree one (sign or zero
> extended dependent on sign), RTL (garbage as you show) and
> wide-int (always sign-extended).
You wanted this to look more like double-int; now that it does, you 
see the flaws in it.    Do I ever get to win?   I do not think anyone 
would have been served well by trying to make trees have only signed 
constants in the way that double-int did.  I will admit that the 
implementation would be cleaner if we could just change everything else 
in the compiler to match a super-clean wide-int design.   I took the 
other approach: I bottled up as much of the ugliness behind the API and 
tried to keep the rest of the compiler looking mostly the way that it does.

To a first approximation, ugliness is a conserved force in the universe.

>
> That's why I was looking at at least matching what tree does
> (because tree constants _are_ properly canonized).
Well, they are very close.    I am likely working on the "last" bug 
for MIPS now.
>
> Richard.


* Re: wide-int branch now up for public comment and review
  2013-08-28  9:11       ` Richard Biener
@ 2013-08-29 13:34         ` Kenneth Zadeck
  0 siblings, 0 replies; 50+ messages in thread
From: Kenneth Zadeck @ 2013-08-29 13:34 UTC (permalink / raw)
  To: Richard Biener; +Cc: Mike Stump, Richard Sandiford, gcc-patches, r.sandiford

On 08/28/2013 05:08 AM, Richard Biener wrote:
> On Sun, 25 Aug 2013, Mike Stump wrote:
>
>> On Aug 25, 2013, at 12:26 AM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
>>> (2) Adding a new namespace, wi, for the operators.  So far this
>>>     just contains the previously-static comparison functions
>>>     and whatever else was needed to avoid cross-dependencies
>>>     between wi and wide_int_ro (except for the debug routines).
>> It seems reasonable; I don't see anything I object to.  Seems like most of the time, the code is shorter (though, you use wi, which is fairly short).  It doesn't seem any more complex, though, knowing how to spell the operation wide_int:: v wi:: is confusing on the client side.  I'm torn between this and the nice things that come with the patch.
>>
>>> (3) Removing the comparison member functions and using the static
>>>     ones everywhere.
>> I've love to have richi weigh in (or someone else that wants to play the
>> role of C++ coding expert)?  I'd defer to them?
> Yeah - wi::lt (a, b) is much better than a.lt (b) IMHO.  It mimics how
> the standard library works.
>
>>> The idea behind using a namespace rather than static functions
>>> is that it makes it easier to separate the core, tree and rtx bits.
>> Being able to separate core, tree and rtx bits gets a +1 in my book.  I
>> do understand the beauty of this.
> Now, if you look back in discussions I wanted a storage
> abstraction anyway.  Basically the interface is
>
> class wide_int_storage
> {
>    int precision ();
>    int len ();
>    element_t get (unsigned);
>    void set (unsigned, element_t);
> };
>
> and wide_int is then templated like
>
> template <class storage>
> class wide_int : public storage
> {
> };
>
> where RTX / tree storage classes provide read-only access to their
> storage and a rvalue integer rep to its value.
>
> You can look at my example draft implementation I posted some
> months ago.  But I'll gladly wiggle on the branch to make it
> more like above (easy step one: don't access the wide-int members
> directly but via accessor functions)
You are of course welcome to do this.   But there were two questions 
that I never got an answer to:

1) How does doing this help in any way?   Are there clients that will 
use this?

2) Isn't this just going to slow wide-int down?  It seems clear that 
there is a real performance cost in that there is now going to be an 
extra step of indirection on each access to the underlying data structures.

I will point out that we have cut down on the copying by sharing the 
underlying value from trees or rtl when the value is short-lived. But 
given that most constants take only a single HWI to represent, getting 
rid of the copying seems like a very small payback for the increase in 
access costs.
>
>>> IMO wide-int.h shouldn't know about trees and rtxes, and all routines
>>> related to them should be in tree.h and rtl.h instead.  But using
>>> static functions means that you have to declare everything in one place.
>>> Also, it feels odd for wide_int to be both an object and a home
>>> of static functions that don't always operate on wide_ints, e.g. when
>>> comparing a CONST_INT against 16.
> Indeed - in my sample the wide-int-rtx-storage and wide-int-tree-storage
> storage models were declared in rtl.h and tree.h and wide-int.h did
> know nothing about them.
This could be done for us too; it has nothing to do with the storage model.
>> Yes, though, does wi feel odd being a home for comparing a CONST_INT and
>> 16?  :-)
>>
>>> I realise I'm probably not being helpful here.
>> Iterating on how we want to code to look like is reasonable.  Prettying
>> it up where it needs it, is good.
>>
>> Indeed, if the code is as you like, and as richi likes, we'll then our
>> mission is just about complete.  :-)  For this patch, I'd love to defer
>> to richi (or someone that has a stronger opinion than I do) to say,
>> better, worse?
> The comparisons?  Better.
Richard S. is making these static.
> Thanks,
> Richard.


* Re: wide-int branch now up for public comment and review
  2013-08-29  7:42           ` Richard Biener
@ 2013-08-29 19:34             ` Mike Stump
  2013-08-30  8:51               ` Richard Biener
  2013-09-01 19:26               ` Richard Sandiford
  0 siblings, 2 replies; 50+ messages in thread
From: Mike Stump @ 2013-08-29 19:34 UTC (permalink / raw)
  To: Richard Biener; +Cc: Richard Sandiford, Kenneth Zadeck, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 2121 bytes --]

On Aug 29, 2013, at 12:36 AM, Richard Biener <rguenther@suse.de> wrote:
> On Wed, 28 Aug 2013, Mike Stump wrote:
> 
>> On Aug 28, 2013, at 3:22 AM, Richard Biener <rguenther@suse.de> wrote:
>>> Btw, rtl.h still wastes space with
>>> 
>>> struct GTY((variable_size)) hwivec_def {
>>> int num_elem;         /* number of elements */
>>> HOST_WIDE_INT elem[1];
>>> };
>>> 
>>> struct GTY((chain_next ("RTX_NEXT (&%h)"),
>>>           chain_prev ("RTX_PREV (&%h)"), variable_size)) rtx_def {
>>> ...
>>> /* The first element of the operands of this rtx.
>>>    The number of operands and their types are controlled
>>>    by the `code' field, according to rtl.def.  */
>>> union u {
>>>   rtunion fld[1];
>>>   HOST_WIDE_INT hwint[1];
>>>   struct block_symbol block_sym;
>>>   struct real_value rv;
>>>   struct fixed_value fv;
>>>   struct hwivec_def hwiv;
>>> } GTY ((special ("rtx_def"), desc ("GET_CODE (&%0)"))) u;
>>> };
>>> 
>>> there are 32bits available before the union.  If you don't use
>>> those for num_elem then all wide-ints will at least take as
>>> much space as DOUBLE_INTs originally took - and large ints
>>> that would have required DOUBLE_INTs in the past will now
>>> require more space than before.  Which means your math
>>> motivating the 'num_elem' encoding stuff is wrong.  With
>>> moving 'num_elem' before u you can even re-use the hwint
>>> field in the union as the existing double-int code does
>>> (which in fact could simply do the encoding trick in the
>>> old CONST_DOUBLE scheme, similar for the tree INTEGER_CST
>>> container).
>> 
>> So, HOST_WIDE_INT is likely 64 bits, and likely is 64 bit aligned.  The 
>> base (stuff before the union) is 32 bits.  There is a 32 bit gap, even 
>> if not used before the HOST_WIDE_INT elem.  We place the num_elem is 
>> this gap.
> 
> No, you don't.  You place num_elem 64bit aligned _after_ the gap.
> And you have another 32bit gap, as you say, before elem.

Ah, ok, I get it, thanks for the explanation.  This removes the second gap creator and puts the field into the gap before the u union.


[-- Attachment #2: p.diffs.txt --]
[-- Type: text/plain, Size: 7896 bytes --]

diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index ce40347..143f298 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -594,7 +594,7 @@ immed_wide_int_const (const wide_int &v, enum machine_mode mode)
     /* It is so tempting to just put the mode in here.  Must control
        myself ... */
     PUT_MODE (value, VOIDmode);
-    HWI_PUT_NUM_ELEM (CONST_WIDE_INT_VEC (value), len);
+    CWI_PUT_NUM_ELEM (value, len);
 
     for (i = 0; i < len; i++)
       CONST_WIDE_INT_ELT (value, i) = v.elt (i);
diff --git a/gcc/print-rtl.c b/gcc/print-rtl.c
index 3620bd6..8dad9f6 100644
--- a/gcc/print-rtl.c
+++ b/gcc/print-rtl.c
@@ -616,7 +616,7 @@ print_rtx (const_rtx in_rtx)
     case CONST_WIDE_INT:
       if (! flag_simple)
 	fprintf (outfile, " ");
-      hwivec_output_hex (outfile, CONST_WIDE_INT_VEC (in_rtx));
+      cwi_output_hex (outfile, in_rtx);
       break;
 #endif
 
diff --git a/gcc/read-rtl.c b/gcc/read-rtl.c
index 707ef3f..c198b5b 100644
--- a/gcc/read-rtl.c
+++ b/gcc/read-rtl.c
@@ -1352,7 +1352,6 @@ read_rtx_code (const char *code_name)
       read_name (&name);
       validate_const_wide_int (name.string);
       {
-	hwivec hwiv;
 	const char *s = name.string;
 	int len;
 	int index = 0;
@@ -1377,7 +1376,6 @@ read_rtx_code (const char *code_name)
 
 	return_rtx = const_wide_int_alloc (wlen);
 
-	hwiv = CONST_WIDE_INT_VEC (return_rtx);
 	while (pos > 0)
 	  {
 #if HOST_BITS_PER_WIDE_INT == 64
@@ -1385,13 +1383,13 @@ read_rtx_code (const char *code_name)
 #else
 	    sscanf (s + pos, "%8" HOST_WIDE_INT_PRINT "x", &wi);
 #endif
-	    XHWIVEC_ELT (hwiv, index++) = wi;
+	    CWI_ELT (return_rtx, index++) = wi;
 	    pos -= gs;
 	  }
 	strncpy (buf, s, gs - pos);
 	buf [gs - pos] = 0;
 	sscanf (buf, "%" HOST_WIDE_INT_PRINT "x", &wi);
-	XHWIVEC_ELT (hwiv, index++) = wi;
+	CWI_ELT (return_rtx, index++) = wi;
 	/* TODO: After reading, do we want to canonicalize with:
 	   value = lookup_const_wide_int (value); ? */
       }
diff --git a/gcc/rtl.c b/gcc/rtl.c
index 074e425..b913d0d 100644
--- a/gcc/rtl.c
+++ b/gcc/rtl.c
@@ -225,18 +225,18 @@ rtx_alloc_stat (RTX_CODE code MEM_STAT_DECL)
   return rtx_alloc_stat_v (code PASS_MEM_STAT, 0);
 }
 
-/* Write the wide constant OP0 to OUTFILE.  */
+/* Write the wide constant X to OUTFILE.  */
 
 void
-hwivec_output_hex (FILE *outfile, const_hwivec op0)
+cwi_output_hex (FILE *outfile, const_rtx x)
 {
-  int i = HWI_GET_NUM_ELEM (op0);
+  int i = CWI_GET_NUM_ELEM (x);
   gcc_assert (i > 0);
-  if (XHWIVEC_ELT (op0, i-1) == 0)
+  if (CWI_ELT (x, i-1) == 0)
     fprintf (outfile, "0x");
-  fprintf (outfile, HOST_WIDE_INT_PRINT_HEX, XHWIVEC_ELT (op0, --i));
+  fprintf (outfile, HOST_WIDE_INT_PRINT_HEX, CWI_ELT (x, --i));
   while (--i >= 0)
-    fprintf (outfile, HOST_WIDE_INT_PRINT_PADDED_HEX, XHWIVEC_ELT (op0, i));
+    fprintf (outfile, HOST_WIDE_INT_PRINT_PADDED_HEX, CWI_ELT (x, i));
 }
 
 \f
@@ -843,12 +843,12 @@ rtl_check_failed_block_symbol (const char *file, int line, const char *func)
 
 /* XXX Maybe print the vector?  */
 void
-hwivec_check_failed_bounds (const_hwivec r, int n, const char *file, int line,
-			    const char *func)
+cwi_check_failed_bounds (const_rtx x, int n, const char *file, int line,
+			 const char *func)
 {
   internal_error
     ("RTL check: access of hwi elt %d of vector with last elt %d in %s, at %s:%d",
-     n, GET_NUM_ELEM (r) - 1, func, trim_filename (file), line);
+     n, CWI_GET_NUM_ELEM (x) - 1, func, trim_filename (file), line);
 }
 
 /* XXX Maybe print the vector?  */
diff --git a/gcc/rtl.h b/gcc/rtl.h
index 6b45b41..a218ee9 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -252,12 +252,14 @@ struct GTY(()) object_block {
 };
 
 struct GTY((variable_size)) hwivec_def {
-  int num_elem;		/* number of elements */
   HOST_WIDE_INT elem[1];
 };
 
-#define HWI_GET_NUM_ELEM(HWIVEC)	((HWIVEC)->num_elem)
-#define HWI_PUT_NUM_ELEM(HWIVEC, NUM)	((HWIVEC)->num_elem = (NUM))
+/* Number of elements of the HWIVEC if RTX is a CONST_WIDE_INT.  */
+#define CWI_GET_NUM_ELEM(RTX)					\
+  ((int)RTL_FLAG_CHECK1("CWI_GET_NUM_ELEM", (RTX), CONST_WIDE_INT)->u2.num_elem)
+#define CWI_PUT_NUM_ELEM(RTX, NUM)					\
+  (RTL_FLAG_CHECK1("CWI_PUT_NUM_ELEM", (RTX), CONST_WIDE_INT)->u2.num_elem = (NUM))
 
 /* RTL expression ("rtx").  */
 
@@ -345,6 +347,14 @@ struct GTY((chain_next ("RTX_NEXT (&%h)"),
      1 in a VALUE or DEBUG_EXPR is NO_LOC_P in var-tracking.c.  */
   unsigned return_val : 1;
 
+  union {
+    /* RTXs are free to use up to 32 bit from here.  */
+
+    /* In a CONST_WIDE_INT (aka hwivec_def), this is the number of HOST_WIDE_INTs
+       in the hwivec_def.  */
+    unsigned  GTY ((tag ("CONST_WIDE_INT"))) num_elem:32;
+  } GTY ((desc ("GET_CODE (&%0)"))) u2;
+
   /* The first element of the operands of this rtx.
      The number of operands and their types are controlled
      by the `code' field, according to rtl.def.  */
@@ -643,12 +653,14 @@ equality.  */
 			       __FUNCTION__);				\
      &_rtx->u.hwint[_n]; }))
 
-#define XHWIVEC_ELT(HWIVEC, I) __extension__				\
-(*({ __typeof (HWIVEC) const _hwivec = (HWIVEC); const int _i = (I);	\
-     if (_i < 0 || _i >= HWI_GET_NUM_ELEM (_hwivec))			\
-       hwivec_check_failed_bounds (_hwivec, _i, __FILE__, __LINE__,	\
-				  __FUNCTION__);			\
-     &_hwivec->elem[_i]; }))
+#define CWI_ELT(RTX, I) __extension__					\
+(*({ __typeof (RTX) const _rtx = (RTX);					\
+     int _max = CWI_GET_NUM_ELEM (_rtx);				\
+     const int _i = (I);						\
+     if (_i < 0 || _i >= _max)						\
+       cwi_check_failed_bounds (_rtx, _i, __FILE__, __LINE__,	\
+				__FUNCTION__);				\
+     &_rtx->u.hwiv.elem[_i]; }))
 
 #define XCWINT(RTX, N, C) __extension__					\
 (*({ __typeof (RTX) const _rtx = (RTX);					\
@@ -711,8 +723,8 @@ extern void rtl_check_failed_code_mode (const_rtx, enum rtx_code, enum machine_m
     ATTRIBUTE_NORETURN;
 extern void rtl_check_failed_block_symbol (const char *, int, const char *)
     ATTRIBUTE_NORETURN;
-extern void hwivec_check_failed_bounds (const_hwivec, int, const char *, int,
-					const char *)
+extern void cwi_check_failed_bounds (const_rtx, int, const char *, int,
+				     const char *)
     ATTRIBUTE_NORETURN;
 extern void rtvec_check_failed_bounds (const_rtvec, int, const char *, int,
 				       const char *)
@@ -726,7 +738,7 @@ extern void rtvec_check_failed_bounds (const_rtvec, int, const char *, int,
 #define RTL_CHECKC2(RTX, N, C1, C2) ((RTX)->u.fld[N])
 #define RTVEC_ELT(RTVEC, I)	    ((RTVEC)->elem[I])
 #define XWINT(RTX, N)		    ((RTX)->u.hwint[N])
-#define XHWIVEC_ELT(HWIVEC, I)	    ((HWIVEC)->elem[I])
+#define CWI_ELT(RTX, I)		    ((RTX)->u.hwiv.elem[I])
 #define XCWINT(RTX, N, C)	    ((RTX)->u.hwint[N])
 #define XCMWINT(RTX, N, C, M)	    ((RTX)->u.hwint[N])
 #define XCNMWINT(RTX, N, C, M)	    ((RTX)->u.hwint[N])
@@ -1223,8 +1235,8 @@ rhs_regno (const_rtx x)
    CONST_WIDE_INT_ELT gets one of the elements.  0 is the least
    significant HOST_WIDE_INT.  */
 #define CONST_WIDE_INT_VEC(RTX) HWIVEC_CHECK (RTX, CONST_WIDE_INT)
-#define CONST_WIDE_INT_NUNITS(RTX) HWI_GET_NUM_ELEM (CONST_WIDE_INT_VEC (RTX))
-#define CONST_WIDE_INT_ELT(RTX, N) XHWIVEC_ELT (CONST_WIDE_INT_VEC (RTX), N) 
+#define CONST_WIDE_INT_NUNITS(RTX) CWI_GET_NUM_ELEM (RTX)
+#define CONST_WIDE_INT_ELT(RTX, N) CWI_ELT (RTX, N)
 
 /* For a CONST_DOUBLE:
 #if TARGET_SUPPORTS_WIDE_INT == 0
@@ -1982,7 +1994,7 @@ extern void end_sequence (void);
 #if TARGET_SUPPORTS_WIDE_INT == 0
 extern double_int rtx_to_double_int (const_rtx);
 #endif
-extern void hwivec_output_hex (FILE *, const_hwivec);
+extern void cwi_output_hex (FILE *, const_rtx);
 #ifndef GENERATOR_FILE
 extern rtx immed_wide_int_const (const wide_int &cst, enum machine_mode mode);
 #endif


* Re: wide-int branch now up for public comment and review
  2013-08-29 19:34             ` Mike Stump
@ 2013-08-30  8:51               ` Richard Biener
  2013-09-01 19:26               ` Richard Sandiford
  1 sibling, 0 replies; 50+ messages in thread
From: Richard Biener @ 2013-08-30  8:51 UTC (permalink / raw)
  To: Mike Stump; +Cc: Richard Sandiford, Kenneth Zadeck, gcc-patches

On Thu, 29 Aug 2013, Mike Stump wrote:

> On Aug 29, 2013, at 12:36 AM, Richard Biener <rguenther@suse.de> wrote:
> > On Wed, 28 Aug 2013, Mike Stump wrote:
> > 
> >> On Aug 28, 2013, at 3:22 AM, Richard Biener <rguenther@suse.de> wrote:
> >>> Btw, rtl.h still wastes space with
> >>> 
> >>> struct GTY((variable_size)) hwivec_def {
> >>> int num_elem;         /* number of elements */
> >>> HOST_WIDE_INT elem[1];
> >>> };
> >>> 
> >>> struct GTY((chain_next ("RTX_NEXT (&%h)"),
> >>>           chain_prev ("RTX_PREV (&%h)"), variable_size)) rtx_def {
> >>> ...
> >>> /* The first element of the operands of this rtx.
> >>>    The number of operands and their types are controlled
> >>>    by the `code' field, according to rtl.def.  */
> >>> union u {
> >>>   rtunion fld[1];
> >>>   HOST_WIDE_INT hwint[1];
> >>>   struct block_symbol block_sym;
> >>>   struct real_value rv;
> >>>   struct fixed_value fv;
> >>>   struct hwivec_def hwiv;
> >>> } GTY ((special ("rtx_def"), desc ("GET_CODE (&%0)"))) u;
> >>> };
> >>> 
> >>> there are 32bits available before the union.  If you don't use
> >>> those for num_elem then all wide-ints will at least take as
> >>> much space as DOUBLE_INTs originally took - and large ints
> >>> that would have required DOUBLE_INTs in the past will now
> >>> require more space than before.  Which means your math
> >>> motivating the 'num_elem' encoding stuff is wrong.  With
> >>> moving 'num_elem' before u you can even re-use the hwint
> >>> field in the union as the existing double-int code does
> >>> (which in fact could simply do the encoding trick in the
> >>> old CONST_DOUBLE scheme, similar for the tree INTEGER_CST
> >>> container).
> >> 
> >> So, HOST_WIDE_INT is likely 64 bits, and likely is 64 bit aligned.  The 
> >> base (stuff before the union) is 32 bits.  There is a 32 bit gap, even 
> >> if not used before the HOST_WIDE_INT elem.  We place the num_elem is 
> >> this gap.
> > 
> > No, you don't.  You place num_elem 64bit aligned _after_ the gap.
> > And you have another 32bit gap, as you say, before elem.
> 
> Ah, ok, I get it, thanks for the explanation.  This removes the second 
> gap creator and puts the field into the gap before the u union.

 struct GTY((variable_size)) hwivec_def {
-  int num_elem;		/* number of elements */
   HOST_WIDE_INT elem[1];
 };

no need to wrap this in an extra struct type.  In fact you can
re-use the hwint member and its accessors in

  union u {
    rtunion fld[1];
    HOST_WIDE_INT hwint[1];
    struct block_symbol block_sym;
    struct real_value rv;
    struct fixed_value fv;
  } GTY ((special ("rtx_def"), desc ("GET_CODE (&%0)"))) u;

Richard.





* Re: wide-int branch now up for public comment and review
  2013-08-29 19:34             ` Mike Stump
  2013-08-30  8:51               ` Richard Biener
@ 2013-09-01 19:26               ` Richard Sandiford
  2013-09-05 21:00                 ` Richard Sandiford
  1 sibling, 1 reply; 50+ messages in thread
From: Richard Sandiford @ 2013-09-01 19:26 UTC (permalink / raw)
  To: Mike Stump; +Cc: Richard Biener, Kenneth Zadeck, gcc-patches

Mike Stump <mikestump@comcast.net> writes:
> @@ -643,12 +653,14 @@ equality.  */
>  			       __FUNCTION__);				\
>       &_rtx->u.hwint[_n]; }))
>  
> -#define XHWIVEC_ELT(HWIVEC, I) __extension__				\
> -(*({ __typeof (HWIVEC) const _hwivec = (HWIVEC); const int _i = (I);	\
> -     if (_i < 0 || _i >= HWI_GET_NUM_ELEM (_hwivec))			\
> -       hwivec_check_failed_bounds (_hwivec, _i, __FILE__, __LINE__,	\
> -				  __FUNCTION__);			\
> -     &_hwivec->elem[_i]; }))
> +#define CWI_ELT(RTX, I) __extension__					\
> +(*({ __typeof (RTX) const _rtx = (RTX);					\
> +     int _max = CWI_GET_NUM_ELEM (_rtx);				\

CWI_GET_NUM_ELEM also uses "_rtx" for its temporary variable, so the
last line includes the equivalent of:

  __typeof (_rtx) _rtx = _rtx;

Is the fix below OK?  We do a similar thing for block symbols, etc.

Thanks,
Richard


Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h	2013-09-01 14:00:21.032885857 +0100
+++ gcc/rtl.h	2013-09-01 17:41:49.474023618 +0100
@@ -654,13 +654,13 @@ #define XWINT(RTX, N) __extension__
      &_rtx->u.hwint[_n]; }))
 
 #define CWI_ELT(RTX, I) __extension__					\
-(*({ __typeof (RTX) const _rtx = (RTX);					\
-     int _max = CWI_GET_NUM_ELEM (_rtx);				\
+(*({ __typeof (RTX) const _cwi = (RTX);					\
+     int _max = CWI_GET_NUM_ELEM (_cwi);				\
      const int _i = (I);						\
      if (_i < 0 || _i >= _max)						\
-       cwi_check_failed_bounds (_rtx, _i, __FILE__, __LINE__,	\
+       cwi_check_failed_bounds (_cwi, _i, __FILE__, __LINE__,		\
 				__FUNCTION__);				\
-     &_rtx->u.hwiv.elem[_i]; }))
+     &_cwi->u.hwiv.elem[_i]; }))
 
 #define XCWINT(RTX, N, C) __extension__					\
 (*({ __typeof (RTX) const _rtx = (RTX);					\


* Re: wide-int branch now up for public comment and review
  2013-09-01 19:26               ` Richard Sandiford
@ 2013-09-05 21:00                 ` Richard Sandiford
  2013-09-06  0:10                   ` Mike Stump
  0 siblings, 1 reply; 50+ messages in thread
From: Richard Sandiford @ 2013-09-05 21:00 UTC (permalink / raw)
  To: Mike Stump; +Cc: Richard Biener, Kenneth Zadeck, gcc-patches

Ping.  I should have said that bootstrapping with rtl checking enabled
is broken as things stand.

Thanks,
Richard

Richard Sandiford <rdsandiford@googlemail.com> writes:
> Mike Stump <mikestump@comcast.net> writes:
>> @@ -643,12 +653,14 @@ equality.  */
>>  			       __FUNCTION__);				\
>>       &_rtx->u.hwint[_n]; }))
>>  
>> -#define XHWIVEC_ELT(HWIVEC, I) __extension__				\
>> -(*({ __typeof (HWIVEC) const _hwivec = (HWIVEC); const int _i = (I);	\
>> -     if (_i < 0 || _i >= HWI_GET_NUM_ELEM (_hwivec))			\
>> -       hwivec_check_failed_bounds (_hwivec, _i, __FILE__, __LINE__,	\
>> -				  __FUNCTION__);			\
>> -     &_hwivec->elem[_i]; }))
>> +#define CWI_ELT(RTX, I) __extension__					\
>> +(*({ __typeof (RTX) const _rtx = (RTX);					\
>> +     int _max = CWI_GET_NUM_ELEM (_rtx);				\
>
> CWI_GET_NUM_ELEM also uses "_rtx" for its temporary variable, so the
> last line includes the equivalent of:
>
>   __typeof (_rtx) _rtx = _rtx;
>
> Is the fix below OK?  We do a similar thing for block symbols, etc.
>
> Thanks,
> Richard
>
>
> Index: gcc/rtl.h
> ===================================================================
> --- gcc/rtl.h	2013-09-01 14:00:21.032885857 +0100
> +++ gcc/rtl.h	2013-09-01 17:41:49.474023618 +0100
> @@ -654,13 +654,13 @@ #define XWINT(RTX, N) __extension__
>       &_rtx->u.hwint[_n]; }))
>  
>  #define CWI_ELT(RTX, I) __extension__					\
> -(*({ __typeof (RTX) const _rtx = (RTX);					\
> -     int _max = CWI_GET_NUM_ELEM (_rtx);				\
> +(*({ __typeof (RTX) const _cwi = (RTX);					\
> +     int _max = CWI_GET_NUM_ELEM (_cwi);				\
>       const int _i = (I);						\
>       if (_i < 0 || _i >= _max)						\
> -       cwi_check_failed_bounds (_rtx, _i, __FILE__, __LINE__,	\
> +       cwi_check_failed_bounds (_cwi, _i, __FILE__, __LINE__,		\
>  				__FUNCTION__);				\
> -     &_rtx->u.hwiv.elem[_i]; }))
> +     &_cwi->u.hwiv.elem[_i]; }))
>  
>  #define XCWINT(RTX, N, C) __extension__					\
>  (*({ __typeof (RTX) const _rtx = (RTX);					\


* Re: wide-int branch now up for public comment and review
  2013-09-05 21:00                 ` Richard Sandiford
@ 2013-09-06  0:10                   ` Mike Stump
  0 siblings, 0 replies; 50+ messages in thread
From: Mike Stump @ 2013-09-06  0:10 UTC (permalink / raw)
  To: Richard Sandiford; +Cc: Richard Biener, Kenneth Zadeck, gcc-patches

On Sep 5, 2013, at 2:00 PM, Richard Sandiford <rdsandiford@googlemail.com> wrote:
> Ping.  I should have said that bootstrapping with rtl checking enabled
> is broken as things stand.

Yes, this is fine.

> Richard Sandiford <rdsandiford@googlemail.com> writes:
>> Mike Stump <mikestump@comcast.net> writes:
>>> @@ -643,12 +653,14 @@ equality.  */
>>> 			       __FUNCTION__);				\
>>>      &_rtx->u.hwint[_n]; }))
>>> 
>>> -#define XHWIVEC_ELT(HWIVEC, I) __extension__				\
>>> -(*({ __typeof (HWIVEC) const _hwivec = (HWIVEC); const int _i = (I);	\
>>> -     if (_i < 0 || _i >= HWI_GET_NUM_ELEM (_hwivec))			\
>>> -       hwivec_check_failed_bounds (_hwivec, _i, __FILE__, __LINE__,	\
>>> -				  __FUNCTION__);			\
>>> -     &_hwivec->elem[_i]; }))
>>> +#define CWI_ELT(RTX, I) __extension__					\
>>> +(*({ __typeof (RTX) const _rtx = (RTX);					\
>>> +     int _max = CWI_GET_NUM_ELEM (_rtx);				\
>> 
>> CWI_GET_NUM_ELEM also uses "_rtx" for its temporary variable, so the
>> last line includes the equivalent of:
>> 
>>  __typeof (_rtx) _rtx = _rtx;
>> 
>> Is the fix below OK?  We do a similar thing for block symbols, etc.
>> 
>> Thanks,
>> Richard
>> 
>> 
>> Index: gcc/rtl.h
>> ===================================================================
>> --- gcc/rtl.h	2013-09-01 14:00:21.032885857 +0100
>> +++ gcc/rtl.h	2013-09-01 17:41:49.474023618 +0100
>> @@ -654,13 +654,13 @@ #define XWINT(RTX, N) __extension__
>>      &_rtx->u.hwint[_n]; }))
>> 
>> #define CWI_ELT(RTX, I) __extension__					\
>> -(*({ __typeof (RTX) const _rtx = (RTX);					\
>> -     int _max = CWI_GET_NUM_ELEM (_rtx);				\
>> +(*({ __typeof (RTX) const _cwi = (RTX);					\
>> +     int _max = CWI_GET_NUM_ELEM (_cwi);				\
>>      const int _i = (I);						\
>>      if (_i < 0 || _i >= _max)						\
>> -       cwi_check_failed_bounds (_rtx, _i, __FILE__, __LINE__,	\
>> +       cwi_check_failed_bounds (_cwi, _i, __FILE__, __LINE__,		\
>> 				__FUNCTION__);				\
>> -     &_rtx->u.hwiv.elem[_i]; }))
>> +     &_cwi->u.hwiv.elem[_i]; }))
>> 
>> #define XCWINT(RTX, N, C) __extension__					\
>> (*({ __typeof (RTX) const _rtx = (RTX);					\


Thread overview: 50+ messages
2013-08-13 20:57 wide-int branch now up for public comment and review Kenneth Zadeck
2013-08-22  8:25 ` Richard Sandiford
2013-08-23 15:03 ` Richard Sandiford
2013-08-23 21:01   ` Kenneth Zadeck
2013-08-24 10:44     ` Richard Sandiford
2013-08-24 13:10       ` Richard Sandiford
2013-08-24 18:16         ` Kenneth Zadeck
2013-08-25  7:27           ` Richard Sandiford
2013-08-25 13:21             ` Kenneth Zadeck
2013-08-24 21:22       ` Kenneth Zadeck
2013-08-24  0:03   ` Mike Stump
2013-08-24  1:59   ` Mike Stump
2013-08-24  3:34   ` Mike Stump
2013-08-24  9:04     ` Richard Sandiford
2013-08-24 20:46   ` Kenneth Zadeck
2013-08-25 10:52   ` Richard Sandiford
2013-08-25 15:14     ` Kenneth Zadeck
2013-08-26  2:22     ` Mike Stump
2013-08-26  5:40       ` Kenneth Zadeck
2013-08-28  9:11       ` Richard Biener
2013-08-29 13:34         ` Kenneth Zadeck
2013-08-25 18:12   ` Mike Stump
2013-08-25 18:57     ` Richard Sandiford
2013-08-25 19:59       ` Mike Stump
2013-08-25 20:11       ` Mike Stump
2013-08-25 21:38     ` Joseph S. Myers
2013-08-25 21:53       ` Mike Stump
2013-08-28  9:06   ` Richard Biener
2013-08-28  9:51     ` Richard Sandiford
2013-08-28 10:40       ` Richard Biener
2013-08-28 11:52         ` Richard Sandiford
2013-08-28 12:04           ` Richard Biener
2013-08-28 12:32             ` Richard Sandiford
2013-08-28 12:49               ` Richard Biener
2013-08-28 16:58                 ` Mike Stump
2013-08-28 21:15                   ` Kenneth Zadeck
2013-08-29  3:18                     ` Mike Stump
2013-08-28 16:08         ` Mike Stump
2013-08-29  7:42           ` Richard Biener
2013-08-29 19:34             ` Mike Stump
2013-08-30  8:51               ` Richard Biener
2013-09-01 19:26               ` Richard Sandiford
2013-09-05 21:00                 ` Richard Sandiford
2013-09-06  0:10                   ` Mike Stump
2013-08-28 13:11     ` Kenneth Zadeck
2013-08-29  0:15     ` Kenneth Zadeck
2013-08-29  9:13       ` Richard Biener
2013-08-29 12:38         ` Kenneth Zadeck
2013-08-24 18:42 ` Florian Weimer
2013-08-24 19:48   ` Kenneth Zadeck
