public inbox for gcc-patches@gcc.gnu.org
* [RFC] > WIDE_INT_MAX_PREC support in wide-int
@ 2023-08-28 14:34 Jakub Jelinek
  2023-08-29  9:49 ` Richard Biener
  0 siblings, 1 reply; 16+ messages in thread
From: Jakub Jelinek @ 2023-08-28 14:34 UTC (permalink / raw)
  To: Richard Biener, Richard Sandiford; +Cc: gcc-patches

Hi!

While the _BitInt series isn't committed yet, I had a quick look at
lifting the current lowest limitation on maximum _BitInt precision,
namely that wide_int can only support precisions up to
WIDE_INT_MAX_PRECISION - 1.

Note, if that limit is lifted, the next limits are INTEGER_CST
currently using 3 unsigned char members and so only being able to hold
up to 255 * 64 = 16320 bit numbers, and then TYPE_PRECISION being
16-bit, limiting us to 65535 bits.  The INTEGER_CST limit could be
dealt with by dropping the int_length.offset "cache" and making the
int_length.extended and int_length.unextended members unsigned short
rather than unsigned char.

The following patch (so far just compile tested) changes
wide_int_storage into a union; precisions up to WIDE_INT_MAX_PRECISION
inclusive work as before (except that the type is no longer trivially
copyable and has an inline destructor), while larger precisions
instead use a pointer to a heap allocated array.
For wide_int this is fairly easy (of course, I'd need to see what the
patch does to gcc code size and compile time performance, some
growth/slowdown is certain), but I'd like to brainstorm on
widest_int/widest2_int.
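To make the inline-array-or-heap-pointer idea concrete, here is a
minimal standalone sketch with my own simplified names
(small_or_heap_storage, MAX_ELTS, int64_t standing in for the actual
wide_int_storage, WIDE_INT_MAX_ELTS and HOST_WIDE_INT) -- not the
patch itself, just the shape of the scheme:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the proposed storage: an inline array for small
// precisions, a heap allocation beyond a fixed element limit.
// Copying is intentionally not supported in this toy version.
struct small_or_heap_storage
{
  static const unsigned MAX_ELTS = 2;	// stand-in for WIDE_INT_MAX_ELTS

  union
  {
    int64_t val[MAX_ELTS];	// inline storage for small precisions
    int64_t *valp;		// heap storage for large precisions
  } u;
  unsigned precision;

  explicit small_or_heap_storage (unsigned prec) : precision (prec)
  {
    if (elts () > MAX_ELTS)
      u.valp = new int64_t[elts ()];
  }
  ~small_or_heap_storage ()
  {
    if (elts () > MAX_ELTS)
      delete[] u.valp;
  }
  small_or_heap_storage (const small_or_heap_storage &) = delete;
  small_or_heap_storage &operator= (const small_or_heap_storage &) = delete;

  // Number of 64-bit elements needed for this precision.
  unsigned elts () const { return (precision + 63) / 64; }

  int64_t *write_val () { return elts () > MAX_ELTS ? u.valp : u.val; }
  const int64_t *get_val () const
  { return elts () > MAX_ELTS ? u.valp : u.val; }
};
```

The point of discriminating on precision rather than on a separate
flag is that the layout for small precisions stays exactly what it is
today; only the > WIDE_INT_MAX_PRECISION path pays for the indirection.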

Currently it is a constant precision storage with
WIDE_INT_MAX_PRECISION precision (widest2_int twice that), so memory
layout-wise identical to wide_int on at least 64-bit hosts; it just
doesn't have a precision member and so is 32 bits smaller on 32-bit
hosts.  It is used in lots of places.

I think the most common use is what is done e.g. in the tree_int_cst*
comparisons and similar, using wi::to_widest () just to compare
INTEGER_CSTs.  That case actually doesn't even use wide_int but
widest_extended_tree as storage, unless stored into a widest_int in
between (that happens in various spots as well).  For comparisons, it
would be fine if the widest_int_storage/widest_extended_tree storages
had a dynamic precision: WIDE_INT_MAX_PRECISION for most cases (if
only precision < WIDE_INT_MAX_PRECISION is involved), otherwise the
needed precision (e.g. for binary ops), i.e. whatever we have in, say,
an INTEGER_CST or some type, rounded up to a whole multiple of
HOST_WIDE_INTs; and, if unsigned with a precision that is a multiple
of HOST_WIDE_INT, with another HWI added so that the value is always
sign-extended.
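The rounding rule above can be written down as a small helper (my
naming; the constant 64 stands in for HOST_BITS_PER_WIDE_INT):

```cpp
#include <cassert>

// Sketch of the dynamic widest_int precision rule described above:
// round the needed precision up to a whole multiple of 64-bit host
// words, and if the value is unsigned and exactly fills those words,
// add one more word so the top bit always acts as a sign bit.
static unsigned
dynamic_widest_precision (unsigned precision, bool unsigned_p)
{
  const unsigned hwi_bits = 64;	// stand-in for HOST_BITS_PER_WIDE_INT
  unsigned rounded = (precision + hwi_bits - 1) / hwi_bits * hwi_bits;
  if (unsigned_p && precision % hwi_bits == 0)
    rounded += hwi_bits;	// extra HWI keeps the value sign-extended
  return rounded;
}
```

So e.g. an unsigned 128-bit operand would get 192 bits of storage,
while a signed one keeps 128.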

Another common case is how e.g. tree-ssa-ccp.cc uses them, which is
mostly for bitwise ops, so I think the above would be just fine for
that case.

Another case is how tree-ssa-loop-niter.cc uses it; I think such a
usage really wants something widest.  Perhaps we could just punt there
for _BitInt(N) with N >= WIDE_INT_MAX_PRECISION, so that we never have
to care about bits beyond that limit?

Some passes only use widest_int after the bitint lowering spot, we don't
really need to care about those.

I think another possibility would be to make widest_int_storage etc.
always pretend it has 65536 bit precision (or something similarly
large) and base the decision on whether the inline array or the
pointer is used in the storage on len.  Unfortunately, the set_len
method is usually called after filling the array, not before it (in
some cases it even sign-extends, so it has to be done that late).

Or, e.g. for binary ops, compute the widest_int precision based on the
.len of the 2 (for binary) or 1 (for unary) operands involved?

Thoughts on this?

Note, the wide-int.cc change is just to show that the patch does
something; it would be a waste to keep that in the self-tests once
_BitInt can exercise such sizes.

2023-08-28  Jakub Jelinek  <jakub@redhat.com>

	* wide-int.h (wide_int_storage): Replace val member with a union of
	val and valp.  Declare destructor.
	(wide_int_storage::wide_int_storage): Initialize precision to 0
	in default ctor.  Allocate u.valp if needed in copy ctor.
	(wide_int_storage::~wide_int_storage): New.
	(wide_int_storage::operator =): Delete and/or allocate u.valp if
	needed.
	(wide_int_storage::get_val, wide_int_storage::write_val): Return
	u.valp for precision > WIDE_INT_MAX_PRECISION, otherwise u.val.
	(wide_int_storage::set_len): Use write_val instead of accessing
	val directly.
	(wide_int_storage::create): Allocate u.valp if needed.
	* value-range.h (irange::maybe_resize): Use a loop instead of
	memcpy.
	* wide-int.cc (wide_int_cc_tests): Add a test for 4096 bit wide_int
	addition.

--- gcc/wide-int.h.jj	2023-06-07 09:42:14.997126190 +0200
+++ gcc/wide-int.h	2023-08-28 15:09:06.498448770 +0200
@@ -1065,7 +1065,11 @@ namespace wi
 class GTY(()) wide_int_storage
 {
 private:
-  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
+  union
+  {
+    HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
+    HOST_WIDE_INT *valp;
+  } GTY((skip)) u;
   unsigned int len;
   unsigned int precision;
 
@@ -1073,6 +1077,7 @@ public:
   wide_int_storage ();
   template <typename T>
   wide_int_storage (const T &);
+  ~wide_int_storage ();
 
   /* The standard generic_wide_int storage methods.  */
   unsigned int get_precision () const;
@@ -1104,7 +1109,7 @@ namespace wi
   };
 }
 
-inline wide_int_storage::wide_int_storage () {}
+inline wide_int_storage::wide_int_storage () : precision (0) {}
 
 /* Initialize the storage from integer X, in its natural precision.
    Note that we do not allow integers with host-dependent precision
@@ -1117,9 +1122,17 @@ inline wide_int_storage::wide_int_storag
   { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
   WIDE_INT_REF_FOR (T) xi (x);
   precision = xi.precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+    u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
   wi::copy (*this, xi);
 }
 
+inline wide_int_storage::~wide_int_storage ()
+{
+  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+    XDELETEVEC (u.valp);
+}
+
 template <typename T>
 inline wide_int_storage&
 wide_int_storage::operator = (const T &x)
@@ -1127,7 +1140,15 @@ wide_int_storage::operator = (const T &x
   { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
   { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
   WIDE_INT_REF_FOR (T) xi (x);
-  precision = xi.precision;
+  if (UNLIKELY (precision != xi.precision))
+    {
+      if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+	XDELETEVEC (u.valp);
+      precision = xi.precision;
+      if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+	u.valp = XNEWVEC (HOST_WIDE_INT,
+			  CEIL (precision, HOST_BITS_PER_WIDE_INT));
+    }
   wi::copy (*this, xi);
   return *this;
 }
@@ -1141,7 +1162,7 @@ wide_int_storage::get_precision () const
 inline const HOST_WIDE_INT *
 wide_int_storage::get_val () const
 {
-  return val;
+  return UNLIKELY (precision > WIDE_INT_MAX_PRECISION) ? u.valp : u.val;
 }
 
 inline unsigned int
@@ -1153,7 +1174,7 @@ wide_int_storage::get_len () const
 inline HOST_WIDE_INT *
 wide_int_storage::write_val ()
 {
-  return val;
+  return UNLIKELY (precision > WIDE_INT_MAX_PRECISION) ? u.valp : u.val;
 }
 
 inline void
@@ -1161,8 +1182,10 @@ wide_int_storage::set_len (unsigned int
 {
   len = l;
   if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
-    val[len - 1] = sext_hwi (val[len - 1],
-			     precision % HOST_BITS_PER_WIDE_INT);
+    {
+      HOST_WIDE_INT &v = write_val ()[len - 1];
+      v = sext_hwi (v, precision % HOST_BITS_PER_WIDE_INT);
+    }
 }
 
 /* Treat X as having signedness SGN and convert it to a PRECISION-bit
@@ -1196,6 +1219,9 @@ wide_int_storage::create (unsigned int p
 {
   wide_int x;
   x.precision = precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+    x.u.valp = XNEWVEC (HOST_WIDE_INT,
+			CEIL (precision, HOST_BITS_PER_WIDE_INT));
   return x;
 }
 
--- gcc/value-range.h.jj	2023-08-08 15:55:09.619120863 +0200
+++ gcc/value-range.h	2023-08-28 15:08:51.295648228 +0200
@@ -624,7 +624,9 @@ irange::maybe_resize (int needed)
     {
       m_max_ranges = HARD_MAX_RANGES;
       wide_int *newmem = new wide_int[m_max_ranges * 2];
-      memcpy (newmem, m_base, sizeof (wide_int) * num_pairs () * 2);
+      unsigned n = num_pairs () * 2;
+      for (unsigned i = 0; i < n; ++i)
+	newmem[i] = m_base[i];
       m_base = newmem;
     }
 }
--- gcc/wide-int.cc.jj	2023-08-08 15:55:09.621120835 +0200
+++ gcc/wide-int.cc	2023-08-28 15:29:48.620086813 +0200
@@ -2617,6 +2617,110 @@ wide_int_cc_tests ()
 	     wi::shifted_mask (0, 128, false, 128));
   ASSERT_EQ (wi::mask (128, true, 128),
 	     wi::shifted_mask (0, 128, true, 128));
+  const HOST_WIDE_INT a[192] = {
+    HOST_WIDE_INT_UC (0x0b2b03862d1fbe27), HOST_WIDE_INT_UC (0x444bb6ac06835f26),
+    HOST_WIDE_INT_UC (0x9d930632270edc17), HOST_WIDE_INT_UC (0xf9f1f7b1a4d298d3),
+    HOST_WIDE_INT_UC (0x1f87ccdd7b021c38), HOST_WIDE_INT_UC (0xf0366f9e68bbcdb2),
+    HOST_WIDE_INT_UC (0x2fcbfa32959408aa), HOST_WIDE_INT_UC (0xbdf7d4beb7b3dbe7),
+    HOST_WIDE_INT_UC (0x4a64ba19bdf3363f), HOST_WIDE_INT_UC (0x145c2ec5ae314c2f),
+    HOST_WIDE_INT_UC (0x307bf01303ca99d5), HOST_WIDE_INT_UC (0x82cac09501c0df1c),
+    HOST_WIDE_INT_UC (0x8119188bcf59391d), HOST_WIDE_INT_UC (0xd24ac359b0510387),
+    HOST_WIDE_INT_UC (0x1b6f9cd3e388da86), HOST_WIDE_INT_UC (0x4e2990a31337004a),
+    HOST_WIDE_INT_UC (0xd62f16910ab640ae), HOST_WIDE_INT_UC (0xa34a6f3c87668eaa),
+    HOST_WIDE_INT_UC (0x37e46f52b873eb07), HOST_WIDE_INT_UC (0xa498e8e255eaa65c),
+    HOST_WIDE_INT_UC (0x2370a16cbbdaa0af), HOST_WIDE_INT_UC (0x4f305e68993df752),
+    HOST_WIDE_INT_UC (0x074d3e131bd30499), HOST_WIDE_INT_UC (0xf4caf8393dbd01c4),
+    HOST_WIDE_INT_UC (0xb9e6794f494b3934), HOST_WIDE_INT_UC (0x7d7b2cc51969de8e),
+    HOST_WIDE_INT_UC (0x87494b790cce95f1), HOST_WIDE_INT_UC (0xeba990c44573c5c8),
+    HOST_WIDE_INT_UC (0x755007ea9663d2ea), HOST_WIDE_INT_UC (0xe5afe63b489e3d19),
+    HOST_WIDE_INT_UC (0x82138483f2c2831c), HOST_WIDE_INT_UC (0x5488d7a6d99ce301),
+    HOST_WIDE_INT_UC (0xd1d713ee75465be7), HOST_WIDE_INT_UC (0x29222cca5699b802),
+    HOST_WIDE_INT_UC (0x28e6308201df3eff), HOST_WIDE_INT_UC (0x720e2cef5151c53d),
+    HOST_WIDE_INT_UC (0xac381f111d9e336d), HOST_WIDE_INT_UC (0xfe4ae42ca0336dee),
+    HOST_WIDE_INT_UC (0xebd720f35a1baebc), HOST_WIDE_INT_UC (0x4fd3dbbf7d4324d6),
+    HOST_WIDE_INT_UC (0x4d78cb3165e57f22), HOST_WIDE_INT_UC (0x62e39c282e564f40),
+    HOST_WIDE_INT_UC (0x58a8b34a0882fabb), HOST_WIDE_INT_UC (0xbd6a54b970aa6765),
+    HOST_WIDE_INT_UC (0x12f7298ae3ec1a4e), HOST_WIDE_INT_UC (0xb3dfe9e1c64aba27),
+    HOST_WIDE_INT_UC (0xf5ae414ef25fcfb0), HOST_WIDE_INT_UC (0x6bd04f05fc0656ae),
+    HOST_WIDE_INT_UC (0x61c83d0178ecc390), HOST_WIDE_INT_UC (0xbe5310392ee661d9),
+    HOST_WIDE_INT_UC (0xb1ef589359431e81), HOST_WIDE_INT_UC (0x187f0dbf9a2cb650),
+    HOST_WIDE_INT_UC (0xab7b6664a0b0aec2), HOST_WIDE_INT_UC (0x287a358e7bdad628),
+    HOST_WIDE_INT_UC (0xb6853808e16aeb8b), HOST_WIDE_INT_UC (0x2268d04ba71b1ff7),
+    HOST_WIDE_INT_UC (0xadd0a43eb925494a), HOST_WIDE_INT_UC (0xaabe8fa96600a548),
+    HOST_WIDE_INT_UC (0x4f9a6641525c31e3), HOST_WIDE_INT_UC (0x90fd1e86293f4bd4),
+    HOST_WIDE_INT_UC (0xe2ad2b5c90e9800b), HOST_WIDE_INT_UC (0x914e8dacffa771fc),
+    HOST_WIDE_INT_UC (0xab104f92f2b7f5f0), HOST_WIDE_INT_UC (0x7ba77c13f62c21c4),
+
+    HOST_WIDE_INT_UC (0x004eb118946c8b0a), HOST_WIDE_INT_UC (0xcd10ba90ac387e24),
+    HOST_WIDE_INT_UC (0x3165a4c40640630e), HOST_WIDE_INT_UC (0x76dbccb2bb28b589),
+    HOST_WIDE_INT_UC (0x78c7d08d1846ba72), HOST_WIDE_INT_UC (0x088dadabc29b7eee),
+    HOST_WIDE_INT_UC (0xce09b01b92a09c9f), HOST_WIDE_INT_UC (0x5a3020593ce05c03),
+    HOST_WIDE_INT_UC (0x2bdc49e21551752d), HOST_WIDE_INT_UC (0x0c68f10ea335eed3),
+    HOST_WIDE_INT_UC (0xc7eeacac4c89f081), HOST_WIDE_INT_UC (0x1709baf3ff0cbf03),
+    HOST_WIDE_INT_UC (0x30f6ee76b7390893), HOST_WIDE_INT_UC (0x34837770023b44df),
+    HOST_WIDE_INT_UC (0x03bb2fa9e55edd44), HOST_WIDE_INT_UC (0xdcde0127dcf651cc),
+    HOST_WIDE_INT_UC (0xddf5b10f46c14a92), HOST_WIDE_INT_UC (0x5fd6a6333b7fc3d4),
+    HOST_WIDE_INT_UC (0xf00d6a63a6292f33), HOST_WIDE_INT_UC (0x4c1b946f4bfdf52a),
+    HOST_WIDE_INT_UC (0x995e31dd31510f3b), HOST_WIDE_INT_UC (0x8d35a772d465d990),
+    HOST_WIDE_INT_UC (0xdef217407399bfcc), HOST_WIDE_INT_UC (0x0afb0b5823306986),
+    HOST_WIDE_INT_UC (0xbb3485a144d31f32), HOST_WIDE_INT_UC (0x59f476dbe59fbd66),
+    HOST_WIDE_INT_UC (0x63ae89916180817f), HOST_WIDE_INT_UC (0xee37dbd94e282511),
+    HOST_WIDE_INT_UC (0x811c761fe6104d7e), HOST_WIDE_INT_UC (0x1ed873f682f029e2),
+    HOST_WIDE_INT_UC (0xc23b89782db3f7f0), HOST_WIDE_INT_UC (0x98dee95dea174c4c),
+    HOST_WIDE_INT_UC (0x5f91f3949dc9992e), HOST_WIDE_INT_UC (0xc36ae182d8aa7d32),
+    HOST_WIDE_INT_UC (0x61abef3db5f22a7f), HOST_WIDE_INT_UC (0x91ce45bbc50c2eef),
+    HOST_WIDE_INT_UC (0x5ab513c1350cd605), HOST_WIDE_INT_UC (0xcad14061bf6ec9fb),
+    HOST_WIDE_INT_UC (0x29557d00db0a03ed), HOST_WIDE_INT_UC (0xd084f8402af7c773),
+    HOST_WIDE_INT_UC (0x3becec18677ff915), HOST_WIDE_INT_UC (0x12bfce5297ee2e67),
+    HOST_WIDE_INT_UC (0x49328e0ad6868d03), HOST_WIDE_INT_UC (0xae508be370a5fe87),
+    HOST_WIDE_INT_UC (0xd04dbe85dd7b93e0), HOST_WIDE_INT_UC (0x2c8c32cb40d820db),
+    HOST_WIDE_INT_UC (0x17a33407c1a4f783), HOST_WIDE_INT_UC (0x0333bdab351f1a1b),
+    HOST_WIDE_INT_UC (0x9bf82ce2b590bd0e), HOST_WIDE_INT_UC (0xc28894ae9eb4a655),
+    HOST_WIDE_INT_UC (0xe5f78919f01d70f0), HOST_WIDE_INT_UC (0x597376afa702626f),
+    HOST_WIDE_INT_UC (0xb7e652d747bd63da), HOST_WIDE_INT_UC (0xffa518a4ec1620f7),
+    HOST_WIDE_INT_UC (0xc7e3951a33f99457), HOST_WIDE_INT_UC (0x939c109b56348cb2),
+    HOST_WIDE_INT_UC (0x0ba1c65a20616b8c), HOST_WIDE_INT_UC (0x230611c1a547fd7b),
+    HOST_WIDE_INT_UC (0x5c9356e353506379), HOST_WIDE_INT_UC (0x6c32318308ba24f1),
+    HOST_WIDE_INT_UC (0xd6c4a34b4f7f9a10), HOST_WIDE_INT_UC (0x26d3a3979e9e363c),
+    HOST_WIDE_INT_UC (0xe8a16f6587bffa80), HOST_WIDE_INT_UC (0xc1ed972017d689a0),
+
+    HOST_WIDE_INT_UC (0x0b79b49ec18c4931), HOST_WIDE_INT_UC (0x115c713cb2bbdd4a),
+    HOST_WIDE_INT_UC (0xcef8aaf62d4f3f26), HOST_WIDE_INT_UC (0x70cdc4645ffb4e5c),
+    HOST_WIDE_INT_UC (0x984f9d6a9348d6ab), HOST_WIDE_INT_UC (0xf8c41d4a2b574ca0),
+    HOST_WIDE_INT_UC (0xfdd5aa4e2834a549), HOST_WIDE_INT_UC (0x1827f517f49437ea),
+    HOST_WIDE_INT_UC (0x764103fbd344ab6d), HOST_WIDE_INT_UC (0x20c51fd451673b02),
+    HOST_WIDE_INT_UC (0xf86a9cbf50548a56), HOST_WIDE_INT_UC (0x99d47b8900cd9e1f),
+    HOST_WIDE_INT_UC (0xb2100702869241b0), HOST_WIDE_INT_UC (0x06ce3ac9b28c4866),
+    HOST_WIDE_INT_UC (0x1f2acc7dc8e7b7cb), HOST_WIDE_INT_UC (0x2b0791caf02d5216),
+    HOST_WIDE_INT_UC (0xb424c7a051778b41), HOST_WIDE_INT_UC (0x0321156fc2e6527f),
+    HOST_WIDE_INT_UC (0x27f1d9b65e9d1a3b), HOST_WIDE_INT_UC (0xf0b47d51a1e89b87),
+    HOST_WIDE_INT_UC (0xbcced349ed2bafea), HOST_WIDE_INT_UC (0xdc6605db6da3d0e2),
+    HOST_WIDE_INT_UC (0xe63f55538f6cc465), HOST_WIDE_INT_UC (0xffc6039160ed6b4a),
+    HOST_WIDE_INT_UC (0x751afef08e1e5866), HOST_WIDE_INT_UC (0xd76fa3a0ff099bf5),
+    HOST_WIDE_INT_UC (0xeaf7d50a6e4f1770), HOST_WIDE_INT_UC (0xd9e16c9d939bead9),
+    HOST_WIDE_INT_UC (0xf66c7e0a7c742069), HOST_WIDE_INT_UC (0x04885a31cb8e66fb),
+    HOST_WIDE_INT_UC (0x444f0dfc20767b0d), HOST_WIDE_INT_UC (0xed67c104c3b42f4e),
+    HOST_WIDE_INT_UC (0x31690783130ff515), HOST_WIDE_INT_UC (0xec8d0e4d2f443535),
+    HOST_WIDE_INT_UC (0x8a921fbfb7d1697e), HOST_WIDE_INT_UC (0x03dc72ab165df42c),
+    HOST_WIDE_INT_UC (0x06ed32d252ab0973), HOST_WIDE_INT_UC (0xc91c248e5fa237ea),
+    HOST_WIDE_INT_UC (0x152c9df43525b2aa), HOST_WIDE_INT_UC (0x2058d3ffa83aec4a),
+    HOST_WIDE_INT_UC (0x8965b749cd657838), HOST_WIDE_INT_UC (0x75a36a7ac6447da7),
+    HOST_WIDE_INT_UC (0xa1db4154df0987be), HOST_WIDE_INT_UC (0x6bbae09ce15065ec),
+    HOST_WIDE_INT_UC (0xe344e810c167ae2f), HOST_WIDE_INT_UC (0xe06c1cad0722db02),
+    HOST_WIDE_INT_UC (0x0d517556b404c733), HOST_WIDE_INT_UC (0x6f040cb1312570ca),
+    HOST_WIDE_INT_UC (0xfdc069e42e7d809e), HOST_WIDE_INT_UC (0x80dba4e7cd9b082e),
+    HOST_WIDE_INT_UC (0x97e6e1ad49608f72), HOST_WIDE_INT_UC (0x71f2846f412f18c0),
+    HOST_WIDE_INT_UC (0x6361b93be86e129c), HOST_WIDE_INT_UC (0x281f4e3367f0f720),
+    HOST_WIDE_INT_UC (0x7e68cd2315647fe3), HOST_WIDE_INT_UC (0xb604e0e6fd4facaa),
+    HOST_WIDE_INT_UC (0xb9726a98d986b4d6), HOST_WIDE_INT_UC (0xcdc4a16b0b48a2c3),
+    HOST_WIDE_INT_UC (0xac2dbd24a5ac955c), HOST_WIDE_INT_UC (0xfd2f500931f970c5),
+    HOST_WIDE_INT_UC (0xb971cea7e0691a1b), HOST_WIDE_INT_UC (0xb82231449e45a839),
+    HOST_WIDE_INT_UC (0x93b1bef87a77f070), HOST_WIDE_INT_UC (0x3d9513340e02ab65)
+  };
+  wide_int b = wide_int::from_array (&a[0], 64, 4096);
+  wide_int c = wide_int::from_array (&a[64], 64, 4096);
+  wide_int d = wide_int::from_array (&a[128], 64, 4096);
+  ASSERT_EQ (b + c, d);
 }
 
 } // namespace selftest

	Jakub



* Re: [RFC] > WIDE_INT_MAX_PREC support in wide-int
  2023-08-28 14:34 [RFC] > WIDE_INT_MAX_PREC support in wide-int Jakub Jelinek
@ 2023-08-29  9:49 ` Richard Biener
  2023-08-29 10:42   ` Richard Sandiford
  2023-08-29 14:46   ` [RFC] > WIDE_INT_MAX_PREC support in wide-int Jakub Jelinek
  0 siblings, 2 replies; 16+ messages in thread
From: Richard Biener @ 2023-08-29  9:49 UTC (permalink / raw)
  To: Jakub Jelinek; +Cc: Richard Sandiford, gcc-patches

On Mon, 28 Aug 2023, Jakub Jelinek wrote:

> Hi!
> 
> While the _BitInt series isn't committed yet, I had a quick look at
> lifting the current lowest limitation on maximum _BitInt precision,
> that wide_int can only support wide_int until WIDE_INT_MAX_PRECISION - 1.
> 
> Note, other limits if that is lifted are INTEGER_CST currently using 3
> unsigned char members and so being able to only hold up to 255 * 64 = 16320
> bit numbers and then TYPE_PRECISION being 16-bit, so limiting us to 65535
> bits.  The INTEGER_CST limit could be dealt with by dropping the
> int_length.offset "cache" and making int_length.extended and
> int_length.unextended members unsinged short rather than unsigned char.
> 
> The following so far just compile tested patch changes wide_int_storage
> to be a union, for precisions up to WIDE_INT_MAX_PRECISION inclusive it
> will work as before (just being no longer trivially copyable type and
> having an inline destructor), while larger precision instead use a pointer
> to heap allocated array.
> For wide_int this is fairly easy (of course, I'd need to see what the
> patch does to gcc code size and compile time performance, some
> growth/slowdown is certain), but I'd like to brainstorm on
> widest_int/widest2_int.
> 
> Currently it is a constant precision storage with WIDE_INT_MAX_PRECISION
> precision (widest2_int twice that), so memory layout-wide on at least 64-bit
> hosts identical to wide_int, just it doesn't have precision member and so
> 32 bits smaller on 32-bit hosts.  It is used in lots of places.
> 
> I think the most common is what is done e.g. in tree_int_cst* comparisons
> and similarly, using wi::to_widest () to just compare INTEGER_CSTs.
> That case actually doesn't even use wide_int but widest_extended_tree
> as storage, unless stored into widest_int in between (that happens in
> various spots as well).  For comparisons, it would be fine if
> widest_int_storage/widest_extended_tree storages had a dynamic precision,
> WIDE_INT_MAX_PRECISION for most of the cases (if only
> precision < WIDE_INT_MAX_PRECISION is involved), otherwise the needed
> precision (e.g. for binary ops) which would be what we say have in
> INTEGER_CST or some type, rounded up to whole multiples of HOST_WIDE_INTs
> and if unsigned with multiple of HOST_WIDE_INT precision, have another
> HWI to make it always sign-extended.
> 
> Another common case is how e.g. tree-ssa-ccp.cc uses them, that is mostly
> for bitwise ops and so I think the above would be just fine for that case.
> 
> Another case is how tree-ssa-loop-niter.cc uses it, I think for such a usage
> it really wants something widest, perhaps we could just try to punt for
> _BitInt(N) for N >= WIDE_INT_MAX_PRECISION in there, so that we never care
> about bits beyond that limit?

I'll note tree-ssa-loop-niter.cc also uses GMP in some cases;
widest_int is really trying to be a poor man's GMP by limiting the
maximum precision.

> Some passes only use widest_int after the bitint lowering spot, we don't
> really need to care about those.
> 
> I think another possibility could be to make widest_int_storage etc. always
> pretend it has 65536 bit precision or something similarly large and make the
> decision on whether inline array or pointer is used in the storage be done
> using len.  Unfortunately, set_len method is usually called after filling
> the array, not before it (it even sign-extends some cases, so it has to be
> done that late).
> 
> Or for e.g. binary ops compute widest_int precision based on the 2 (for
> binary) or 1 (for unary) operand's .len involved?
> 
> Thoughts on this?

The simplest way would probably be to keep widest_int at 
WIDE_INT_MAX_PRECISION like we have now and assert that this is
enough at ::to_widest time (we probably do already).  And then
declare that uses with more precision need to use GMP.
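As a minimal illustration of that policy (identifiers and the limit
value are mine, not GCC's; the real names are WIDE_INT_MAX_PRECISION
and wi::to_widest): the fixed-size conversion asserts the precision
fits, and wider inputs are expected to take a GMP-based path instead.

```cpp
#include <cassert>

// Illustrative limit only, not the real WIDE_INT_MAX_PRECISION.
const unsigned widest_max_precision = 576;

// Callers check this first and fall back to GMP when it returns
// false; the conversion itself just asserts.
bool
fits_widest_p (unsigned precision)
{
  return precision <= widest_max_precision;
}

unsigned
checked_to_widest (unsigned precision)
{
  assert (fits_widest_p (precision));	// wider uses must go via GMP
  return precision;
}
```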

Not sure if that's not also a viable way for wide_int - we're
only losing optimization here, no?

Richard.

> +    HOST_WIDE_INT_UC (0xa1db4154df0987be), HOST_WIDE_INT_UC (0x6bbae09ce15065ec),
> +    HOST_WIDE_INT_UC (0xe344e810c167ae2f), HOST_WIDE_INT_UC (0xe06c1cad0722db02),
> +    HOST_WIDE_INT_UC (0x0d517556b404c733), HOST_WIDE_INT_UC (0x6f040cb1312570ca),
> +    HOST_WIDE_INT_UC (0xfdc069e42e7d809e), HOST_WIDE_INT_UC (0x80dba4e7cd9b082e),
> +    HOST_WIDE_INT_UC (0x97e6e1ad49608f72), HOST_WIDE_INT_UC (0x71f2846f412f18c0),
> +    HOST_WIDE_INT_UC (0x6361b93be86e129c), HOST_WIDE_INT_UC (0x281f4e3367f0f720),
> +    HOST_WIDE_INT_UC (0x7e68cd2315647fe3), HOST_WIDE_INT_UC (0xb604e0e6fd4facaa),
> +    HOST_WIDE_INT_UC (0xb9726a98d986b4d6), HOST_WIDE_INT_UC (0xcdc4a16b0b48a2c3),
> +    HOST_WIDE_INT_UC (0xac2dbd24a5ac955c), HOST_WIDE_INT_UC (0xfd2f500931f970c5),
> +    HOST_WIDE_INT_UC (0xb971cea7e0691a1b), HOST_WIDE_INT_UC (0xb82231449e45a839),
> +    HOST_WIDE_INT_UC (0x93b1bef87a77f070), HOST_WIDE_INT_UC (0x3d9513340e02ab65)
> +  };
> +  wide_int b = wide_int::from_array (&a[0], 64, 4096);
> +  wide_int c = wide_int::from_array (&a[64], 64, 4096);
> +  wide_int d = wide_int::from_array (&a[128], 64, 4096);
> +  ASSERT_EQ (b + c, d);
>  }
>  
>  } // namespace selftest
> 
> 	Jakub
> 
> 

-- 
Richard Biener <rguenther@suse.de>
SUSE Software Solutions Germany GmbH,
Frankenstrasse 146, 90461 Nuernberg, Germany;
GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC] > WIDE_INT_MAX_PREC support in wide-int
  2023-08-29  9:49 ` Richard Biener
@ 2023-08-29 10:42   ` Richard Sandiford
  2023-08-29 15:09     ` Jakub Jelinek
  2023-08-29 14:46   ` [RFC] > WIDE_INT_MAX_PREC support in wide-int Jakub Jelinek
  1 sibling, 1 reply; 16+ messages in thread
From: Richard Sandiford @ 2023-08-29 10:42 UTC (permalink / raw)
  To: Richard Biener; +Cc: Jakub Jelinek, gcc-patches

Just some off-the-cuff thoughts.  Might think differently when
I've had more time...

Richard Biener <rguenther@suse.de> writes:
> On Mon, 28 Aug 2023, Jakub Jelinek wrote:
>
>> Hi!
>> 
>> While the _BitInt series isn't committed yet, I had a quick look at
>> lifting the current lowest limitation on maximum _BitInt precision,
>> that wide_int can only support precisions up to WIDE_INT_MAX_PRECISION - 1.
>> 
>> Note, other limits if that is lifted are INTEGER_CST currently using 3
>> unsigned char members and so being able to only hold up to 255 * 64 = 16320
>> bit numbers and then TYPE_PRECISION being 16-bit, so limiting us to 65535
>> bits.  The INTEGER_CST limit could be dealt with by dropping the
>> int_length.offset "cache" and making int_length.extended and
>> int_length.unextended members unsigned short rather than unsigned char.
>> 
>> The following so far just compile tested patch changes wide_int_storage
>> to be a union, for precisions up to WIDE_INT_MAX_PRECISION inclusive it
>> will work as before (just being no longer trivially copyable type and
>> having an inline destructor), while larger precision instead use a pointer
>> to heap allocated array.
>> For wide_int this is fairly easy (of course, I'd need to see what the
>> patch does to gcc code size and compile time performance, some
>> growth/slowdown is certain), but I'd like to brainstorm on
>> widest_int/widest2_int.
>> 
>> Currently it is a constant precision storage with WIDE_INT_MAX_PRECISION
>> precision (widest2_int twice that), so memory layout-wise on at least 64-bit
>> hosts identical to wide_int, just it doesn't have precision member and so
>> 32 bits smaller on 32-bit hosts.  It is used in lots of places.
>> 
>> I think the most common is what is done e.g. in tree_int_cst* comparisons
>> and similarly, using wi::to_widest () to just compare INTEGER_CSTs.
>> That case actually doesn't even use wide_int but widest_extended_tree
>> as storage, unless stored into widest_int in between (that happens in
>> various spots as well).  For comparisons, it would be fine if
>> widest_int_storage/widest_extended_tree storages had a dynamic precision,
>> WIDE_INT_MAX_PRECISION for most of the cases (if only
>> precision < WIDE_INT_MAX_PRECISION is involved), otherwise the needed
>> precision (e.g. for binary ops) which would be what we say have in
>> INTEGER_CST or some type, rounded up to whole multiples of HOST_WIDE_INTs
>> and if unsigned with multiple of HOST_WIDE_INT precision, have another
>> HWI to make it always sign-extended.
>> 
>> Another common case is how e.g. tree-ssa-ccp.cc uses them, that is mostly
>> for bitwise ops and so I think the above would be just fine for that case.
>> 
>> Another case is how tree-ssa-loop-niter.cc uses it, I think for such a usage
>> it really wants something widest, perhaps we could just try to punt for
>> _BitInt(N) for N >= WIDE_INT_MAX_PRECISION in there, so that we never care
>> about bits beyond that limit?
>
> I'll note tree-ssa-loop-niter.cc also uses GMP in some cases, widest_int
> is really trying to be poor-mans GMP by limiting the maximum precision.

I'd characterise widest_int as "a wide_int that is big enough to hold
all supported integer types, without losing sign information".  It's
not big enough to do arbitrary arithmetic without losing precision
(in the way that GMP is).

If the new limit on integer sizes is 65535 bits for all targets,
then I think that means that widest_int needs to become a 65536-bit type.
(But not with all bits represented all the time, of course.)

[ And at that point I think widest_int should ideally become a GMP wrapper.
  The wide_int stuff isn't optimised for such large sizes, even accepting
  that large sizes will be a worst case.  That might not be easy to do with
  the current infrastructure though.  Especially not if widest_ints are
  stored in GC-ed structures. ]

That seems like it would stand the biggest chance of preserving
existing semantics.  But we might want to define new typedefs for
narrower limits.  E.g. the current widest_int limit probably still
makes sense for operations on scalar_int_modes.  (But then most
RTL arithmetic should use wide_int rather than widest_int.)

Perhaps some widest_int uses are really restricted to address-like
things and could instead use offset_int.  Until now there hasn't been
much incentive to make the distinction.

And perhaps we could identify other similar cases where the limit is
known (statically) to be the current limit, rather than 65536.

I think one of the worst things we could do is push the requirement
up to users of the API to have one path for _BitInts and one for "normal"
integers.  That's bound to lead to a whack-a-mole effect.

Thanks,
Richard


* Re: [RFC] > WIDE_INT_MAX_PREC support in wide-int
  2023-08-29  9:49 ` Richard Biener
  2023-08-29 10:42   ` Richard Sandiford
@ 2023-08-29 14:46   ` Jakub Jelinek
  1 sibling, 0 replies; 16+ messages in thread
From: Jakub Jelinek @ 2023-08-29 14:46 UTC (permalink / raw)
  To: Richard Biener; +Cc: Richard Sandiford, gcc-patches

On Tue, Aug 29, 2023 at 09:49:59AM +0000, Richard Biener wrote:
> The simplest way would probably be to keep widest_int at 
> WIDE_INT_MAX_PRECISION like we have now and assert that this is
> enough at ::to_widest time (we probably do already).  And then
> declare uses with more precision need to use GMP.
> 
> Not sure if that's not also a viable way for wide_int - we're
> only losing optimization here, no?

No.  In lots of places it is essential (constant evaluation), in other
places very much desirable optimization even or especially on _BitInt (e.g.
value range optimizations, ccp, ...) and then sure, spots where just punting
on larger _BitInt for optimization is something we can live with (and yet
other spots which only happen after the lowering and so will never see that
at all).

I think if we go with allowing _BitInt (N) for N >= WIDE_INT_MAX_PRECISION,
the most important thing is we want to slow down the compiler as little as
possible for the common case, no _BitInt at all or _BitInt smaller than that,
and furthermore from a maintenance POV, we want most of the code to work as
is.  Adding assertion that precision is < WIDE_INT_MAX_PRECISION on
widest_int construction from wide_int/INTEGER_CST etc. wouldn't achieve
those goals.  Grepping outside of wide-int.{cc,h} for widest_int, I see
around 450 matches, sure, that doesn't mean 450 places where we'd need to
add some guard to punt for the larger _BitInt (if it is possible to punt at
all), but I'm sure it would be more than hundred of places.  And having
similarly say 50% of those 100-200 spots have a fallback using gmp would be
also a maintainance nightmare, because those spots for larger precisions
would be much less tested than normal code.  Sure, we can add some guards
in some places and the niter stuff could very well be one of them, or simply
also use there wide_int with carefully chosen precision, say
WIDE_INT_MAX_PRECISION if _BitInt isn't involved or at most 128 bits,
and otherwise something wider.  But I'm worried about the maintenance
nightmare etc. if we have to touch hundreds of places and figure out how to
punt or what else to do in each of those spots.

Compared to this, I see only 15 hits for widest2_int, in 2 places,
arith_overflowed_p and operator_mult::wi_fold, that is something that could
very well be done either always using appropriate wider precision wide_int,
or even do it twice, once for < WIDE_INT_MAX_PRECISION input precision and
another time wider.

	Jakub



* Re: [RFC] > WIDE_INT_MAX_PREC support in wide-int
  2023-08-29 10:42   ` Richard Sandiford
@ 2023-08-29 15:09     ` Jakub Jelinek
  2023-09-28 14:03       ` [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int Jakub Jelinek
  0 siblings, 1 reply; 16+ messages in thread
From: Jakub Jelinek @ 2023-08-29 15:09 UTC (permalink / raw)
  To: Richard Biener, gcc-patches, richard.sandiford

On Tue, Aug 29, 2023 at 11:42:48AM +0100, Richard Sandiford wrote:
> > I'll note tree-ssa-loop-niter.cc also uses GMP in some cases, widest_int
> > is really trying to be poor-mans GMP by limiting the maximum precision.
> 
> I'd characterise widest_int as "a wide_int that is big enough to hold
> all supported integer types, without losing sign information".  It's
> not big enough to do arbitrary arithmetic without losing precision
> (in the way that GMP is).
> 
> If the new limit on integer sizes is 65535 bits for all targets,
> then I think that means that widest_int needs to become a 65536-bit type.
> (But not with all bits represented all the time, of course.)

If the widest_int storage would be dependent on the len rather than
precision for how it is stored, then I think we'd need a new method which
would be called at the start of filling the limbs where we'd tell how many
limbs there would be (i.e. what will set_len be called with later on), and
do nothing for all storages but the new widest_int_storage.

> [ And at that point I think widest_int should ideally become a GMP wrapper.
>   The wide_int stuff isn't optimised for such large sizes, even accepting
>   that large sizes will be a worst case.  That might not be easy to do with
>   the current infrastructure though.  Especially not if widest_ints are
>   stored in GC-ed structures. ]

I strongly hope widest_ints aren't stored in GC-ed structures, we should
be using INTEGER_CSTs or RTXes in GC-ed structures, not wide_int/widest_int
IMNSHO.
nm cc1plus | grep gt_ggc.*widest_int
doesn't show anything,
nm cc1plus | grep gt_ggc.*wide_int
0000000000b2c88d T _Z43gt_ggc_mx_hash_table_const_wide_int_hasher_Pv
0000000000d00b3a T _Z44gt_ggc_mx_generic_wide_int_wide_int_storage_Pv
0000000000d1c8bb W _Z9gt_ggc_mxI16wide_int_storageEvP16generic_wide_intIT_E
0000000000b2eecb W _Z9gt_ggc_mxI21const_wide_int_hasherEvP10hash_tableIT_Lb0E11xcallocatorE
0000000001596e00 T _Z9gt_ggc_mxP16generic_wide_intI22fixed_wide_int_storageILi576EEE
0000000000d00b8e T _Z9gt_ggc_mxR16wide_int_storage
0000000000b2c8e1 T _Z9gt_ggc_mxR21const_wide_int_hasher
From those symbols, only the second has any undefined references to that
(in dwarf2out.o) plus gtype-desc.o defines them and has some references.

In dwarf2out it is dw_val_node's v.wide_int_ptr.  Already the (apparently
incomplete) wide_int patch would need to deal with that, either by making
those say fixed_wide_int_storage <WIDE_INT_MAX_PRECISION> or something
similar, so it would never store anything larger, or run the destructor
when actually freeing it (dunno if that is possible right now in GC).

> That seems like it would stand the biggest chance of preserving
> existing semantics.  But we might want to define new typedefs for
> narrower limits.  E.g. the current widest_int limit probably still
> makes sense for operations on scalar_int_modes.  (But then most
> RTL arithmetic should use wide_int rather than widest_int.)
> 
> Perhaps some widest_int uses are really restricted to address-like
> things and could instead use offset_int.  Until now there hasn't been
> much incentive to make the distinction.

Sure.

> And perhaps we could identify other similar cases where the limit is
> known (statically) to be the current limit, rather than 65536.
> 
> I think one of the worst things we could do is push the requirement
> up to users of the API to have one path for _BitInts and one for "normal"
> integers.  That's bound to lead to a whack-a-mole effect.

Yeah.
As for uses of GMP, I think actually most wide-int.{h,cc} operations aren't
that bad compared to GMP, it is most important to have a fast path for the
most common case and for something rare being say some small constant times
slower than GMP is acceptable (say because GMP will have some assembly
hand-written inner loop while wide-int doesn't).  Something different
is say multiplication/division, perhaps for those in the out of line
wide-int.cc helpers we could check for precision above certain boundary and
in that case convert to mpn, perform multiplication/division/modulo using
that and convert back, because the gmp algorithms for those are much better.

	Jakub



* [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int
  2023-08-29 15:09     ` Jakub Jelinek
@ 2023-09-28 14:03       ` Jakub Jelinek
  2023-09-28 15:53         ` Aldy Hernandez
                           ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Jakub Jelinek @ 2023-09-28 14:03 UTC (permalink / raw)
  To: Richard Biener, Richard Sandiford, Aldy Hernandez; +Cc: gcc-patches

Hi!

On Tue, Aug 29, 2023 at 05:09:52PM +0200, Jakub Jelinek via Gcc-patches wrote:
> On Tue, Aug 29, 2023 at 11:42:48AM +0100, Richard Sandiford wrote:
> > > I'll note tree-ssa-loop-niter.cc also uses GMP in some cases, widest_int
> > > is really trying to be poor-mans GMP by limiting the maximum precision.
> > 
> > I'd characterise widest_int as "a wide_int that is big enough to hold
> > all supported integer types, without losing sign information".  It's
> > not big enough to do arbitrary arithmetic without losing precision
> > (in the way that GMP is).
> > 
> > If the new limit on integer sizes is 65535 bits for all targets,
> > then I think that means that widest_int needs to become a 65536-bit type.
> > (But not with all bits represented all the time, of course.)
> 
> If the widest_int storage would be dependent on the len rather than
> precision for how it is stored, then I think we'd need a new method which
> would be called at the start of filling the limbs where we'd tell how many
> limbs there would be (i.e. what will set_len be called with later on), and
> do nothing for all storages but the new widest_int_storage.

So, I've spent some time on this.  In the patch, wide_int is a storage with
a fixed or variable number of limbs (aka len) depending on precision
(precision > WIDE_INT_MAX_PRECISION means a heap allocated limb array,
otherwise it is inline), while widest_int always has a very large precision
(WIDEST_INT_MAX_PRECISION, currently defined to the INTEGER_CST imposed
limitation of 255 64-bit limbs) but uses an inline array for lengths
corresponding to up to WIDE_INT_MAX_PRECISION bits, and for larger ones,
similarly to wide_int, a heap allocated array of limbs.
These changes make both wide_int and widest_int non-POD: no longer
trivially default constructible, trivially copy constructible, trivially
destructible or trivially copyable, so not a good fit for GC and some vec
operations.
One common use of wide_int in GC structures was in dwarf2out.{h,cc}; but as
large _BitInt constants don't appear in RTL, we really don't need such large
precisions there.
So, for wide_int the patch introduces rwide_int, restricted wide_int, which
acts like the old wide_int (except that it is now trivially default
constructible and has assertions precision isn't set above
WIDE_INT_MAX_PRECISION).
For widest_int, the nastiness is that because it always has huge precision
of 16320 right now,
a) we need to be told upfront in wide-int.h before calling the large
   value internal functions in wide-int.cc how many elements we'll need for
   the result (some reasonable upper estimate is fine)
b) various of the wide-int.cc functions were lazy and assumed precision is
   small enough and often used up to that many elements, which is
   undesirable; so, it now tries to decrease that and use xi.len etc. based
   estimates instead if possible (sometimes only if precision is above
   WIDE_INT_MAX_PRECISION)
c) with the higher precision, behavior changes for lrshift (-1, 2) etc. or
   unsigned division with dividend having most significant bit set in
   widest_int - while such values were considered to be above or equal to
   1 << (WIDE_INT_MAX_PRECISION - 2), now they are with
   WIDEST_INT_MAX_PRECISION and so much larger; but lrshift on widest_int
   is I think only done in ccp and I'd strongly hope that we treat the
   values as unsigned and so usually much smaller length; so it is just
   when we call wi::lrshift (-1, 2) or similar that results change.
I've noticed that for wide_int or widest_int references even simple
operations like eq_p liked to allocate and immediately free huge buffers,
which was caused by wide_int doing allocation on creation with a particular
precision and e.g. get_binary_precision running into that.  So, I've
duplicated that to avoid the allocations when all we need is just a
precision.

The patch below doesn't actually build anymore since the vec.h asserts
(which point to useful stuff though), so temporarily I've applied it also
with
--- gcc/vec.h.xx	2023-09-28 12:56:09.055786055 +0200
+++ gcc/vec.h	2023-09-28 13:15:31.760487111 +0200
@@ -1197,7 +1197,7 @@ template<typename T, typename A>
 inline void
 vec<T, A, vl_embed>::qsort (int (*cmp) (const void *, const void *))
 {
-  static_assert (vec_detail::is_trivially_copyable_or_pair <T>::value, "");
+//  static_assert (vec_detail::is_trivially_copyable_or_pair <T>::value, "");
   if (length () > 1)
     gcc_qsort (address (), length (), sizeof (T), cmp);
 }
@@ -1422,7 +1422,7 @@ template<typename T>
 void
 gt_ggc_mx (vec<T, va_gc> *v)
 {
-  static_assert (std::is_trivially_destructible <T>::value, "");
+//  static_assert (std::is_trivially_destructible <T>::value, "");
   extern void gt_ggc_mx (T &);
   for (unsigned i = 0; i < v->length (); i++)
     gt_ggc_mx ((*v)[i]);
hack.  The two spots that trigger are tree-ssa-loop-niter.cc doing qsort on
widest_int vector (to be exact, swapping elements in the vector of
widest_int or wide_int by memcpy actually would work, the reason it has
non-trivial destructor and copy assignment/copy constructor is to make sure
distinct objects have (if needed) distinct heap allocations and that those
are freed in the end, but the bitwise memcpy swapping preserves that), and
omp_general.cc using two widest_int members in a GC structure.  For some
reason, a more important problem isn't diagnosed: loop and nb_iter_bound
structs (also GC) having widest_int members (first one 2, second one just
one).  And then there is e.g. another issue with slsr, which allocates
structs containing widest_int in obstack, not expecting it would need to
construct those (and where to destruct them).  Also, ipa_bits contains
2 widest_int members in GC allocated structure.  Actually the reason
is quite obvious, my assert has been added just for GC vec of non-trivially
destructible types; neither loop nor ipa_bits is used in vectors.  Bet
we should make wide_int_storage and widest_int_storage GTY ((user)) and
just declare but don't define the handlers or something similar.

And, now the question is what to do about this.  I guess for omp_general
I could just use generic_wide_int <fixed_wide_int_storage <1024> > or
something similar, after all the widest_int wasn't really great when it
had maximum precision of WIDE_INT_MAX_PRECISION, different values on
different targets, it has very few uses and is easy to change (thinking
about this, makes me wonder what we do for offloading if offload host
has different WIDE_INT_MAX_PRECISION from offload target).

But the more important question is what to do about loop/niters analysis.
I think for number of iteration analysis it might be ok to punt somehow
(if there is a way to tell that number of iterations is unknown) if we
get some bound which is too large to be expressible in some reasonably small
fixed precision (whether it is WIDE_INT_MAX_PRECISION, or something
different is a question).  We could either introduce yet another widest_int
like storage which would have still WIDEST_INT_MAX_PRECISION precision, but
would ICE if length is set to something above its fixed width.  One problem
is that the write_val estimations are often just conservatively larger and
could trigger even if the value fits in the end.  Or we could use
generic_wide_int <fixed_wide_int_storage <WIDE_INT_MAX_PRECISION> > (perhaps
call that rwidest_int), the drawback would be that it would be slightly harder
to use as it has a different precision from widest_int; we'd need to do some
conversion from it or the like.  Plus I don't know the niters code well
enough to know
how to punt.

ipa_bits is even worse, because unlike niter analysis, I think it is very
much desirable to support IPA VRP of all supported _BitInt sizes.  Shall
we perhaps use trailing_wide_int storage in there, or conditionally
rwidest_int vs. INTEGER_CSTs for stuff that doesn't fit, something else?

What about slsr?  This is after bitint lowering, so it shouldn't be
performing opts on larger BITINT_TYPEs and so could also go with the
rwidest_int.

With the above vec.h hack the short (in number of lines, otherwise
it is large, each 16319 bit decimal constant is huge) testcase below works,
but even make check-gcc RUNTESTFLAGS=dg.exp=bitint* (i.e. the compile only
tests) show some ICEs, some of them due to widest_int in loop, others in
slsr, others to be debugged.

As for the qsort in niters, if we change niters to use some rwidest_int,
either fixed or something new, then the sorting problem could go away.
Another option would be to rename vec_detail::is_trivially_copyable_or_pair
trait to say vec_detail::is_qsort_sortable and allow code to amend that
trait on a type by type basis when needed after analysing it works correctly
for some further type (like wide_int or widest_int).  But am not sure it
would work if widest-int.h is included before vec.h etc.

Your thoughts on all of this?

--- gcc/wide-int.h.jj	2023-09-27 10:37:39.456836804 +0200
+++ gcc/wide-int.h	2023-09-28 14:55:40.059632413 +0200
@@ -27,7 +27,7 @@ along with GCC; see the file COPYING3.
    other longer storage GCC representations (rtl and tree).
 
    The actual precision of a wide_int depends on the flavor.  There
-   are three predefined flavors:
+   are four predefined flavors:
 
      1) wide_int (the default).  This flavor does the math in the
      precision of its input arguments.  It is assumed (and checked)
@@ -53,6 +53,10 @@ along with GCC; see the file COPYING3.
      multiply, division, shifts, comparisons, and operations that need
      overflow detected), the signedness must be specified separately.
 
+     For precisions up to WIDE_INT_MAX_PRECISION, it uses an inline
+     buffer in the type, for larger precisions up to WIDEST_INT_MAX_PRECISION
+     it uses a pointer to heap allocated buffer.
+
      2) offset_int.  This is a fixed-precision integer that can hold
      any address offset, measured in either bits or bytes, with at
      least one extra sign bit.  At the moment the maximum address
@@ -76,11 +80,15 @@ along with GCC; see the file COPYING3.
        wi::leu_p (a, b) as a more efficient short-hand for
        "a >= 0 && a <= b". ]
 
+     3) rwide_int.  Restricted wide_int.  This is similar to
+     wide_int, but maximum possible precision is WIDE_INT_MAX_PRECISION
+     and it always uses an inline buffer.  offset_int and rwide_int are
+     GC-friendly, wide_int and widest_int are not.
+
      3) widest_int.  This representation is an approximation of
      infinite precision math.  However, it is not really infinite
      precision math as in the GMP library.  It is really finite
-     precision math where the precision is 4 times the size of the
-     largest integer that the target port can represent.
+     precision math where the precision is WIDEST_INT_MAX_PRECISION.
 
      Like offset_int, widest_int is wider than all the values that
      it needs to represent, so the integers are logically signed.
@@ -242,6 +250,13 @@ along with GCC; see the file COPYING3.
 
 #define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
 
+/* Precision of widest_int and largest _BitInt precision + 1 we can
+   support.  */
+#define WIDEST_INT_MAX_ELTS 255
+#define WIDEST_INT_MAX_PRECISION (WIDEST_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
+
+STATIC_ASSERT (WIDE_INT_MAX_ELTS < WIDEST_INT_MAX_ELTS);
+
 /* This is the max size of any pointer on any machine.  It does not
    seem to be as easy to sniff this out of the machine description as
    it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
@@ -307,17 +322,19 @@ along with GCC; see the file COPYING3.
 #define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
   WI_BINARY_RESULT (T1, T2) RESULT = \
     wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
-  HOST_WIDE_INT *VAL = RESULT.write_val ()
+  HOST_WIDE_INT *VAL = RESULT.write_val (0)
 
 /* Similar for the result of a unary operation on X, which has type T.  */
 #define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
   WI_UNARY_RESULT (T) RESULT = \
     wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
-  HOST_WIDE_INT *VAL = RESULT.write_val ()
+  HOST_WIDE_INT *VAL = RESULT.write_val (0)
 
 template <typename T> class generic_wide_int;
 template <int N> class fixed_wide_int_storage;
 class wide_int_storage;
+class rwide_int_storage;
+template <int N> class widest_int_storage;
 
 /* An N-bit integer.  Until we can use typedef templates, use this instead.  */
 #define FIXED_WIDE_INT(N) \
@@ -325,10 +342,9 @@ class wide_int_storage;
 
 typedef generic_wide_int <wide_int_storage> wide_int;
 typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
-typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int;
-/* Spelled out explicitly (rather than through FIXED_WIDE_INT)
-   so as not to confuse gengtype.  */
-typedef generic_wide_int < fixed_wide_int_storage <WIDE_INT_MAX_PRECISION * 2> > widest2_int;
+typedef generic_wide_int <rwide_int_storage> rwide_int;
+typedef generic_wide_int <widest_int_storage <WIDE_INT_MAX_PRECISION> > widest_int;
+typedef generic_wide_int <widest_int_storage <WIDE_INT_MAX_PRECISION * 2> > widest2_int;
 
 /* wi::storage_ref can be a reference to a primitive type,
    so this is the conservatively-correct setting.  */
@@ -380,7 +396,11 @@ namespace wi
 
     /* The integer has a constant precision (known at GCC compile time)
        and is signed.  */
-    CONST_PRECISION
+    CONST_PRECISION,
+
+    /* Like CONST_PRECISION, but with WIDEST_INT_MAX_PRECISION or larger
+       precision where not all elements of arrays are always present.  */
+    WIDEST_CONST_PRECISION
   };
 
   /* This class, which has no default implementation, is expected to
@@ -390,9 +410,15 @@ namespace wi
        Classifies the type of T.
 
      static const unsigned int precision;
-       Only defined if precision_type == CONST_PRECISION.  Specifies the
+       Only defined if precision_type == CONST_PRECISION or
+       precision_type == WIDEST_CONST_PRECISION.  Specifies the
        precision of all integers of type T.
 
+     static const unsigned int inl_precision;
+       Only defined if precision_type == WIDEST_CONST_PRECISION.
+       Specifies precision which is represented in the inline
+       arrays.
+
      static const bool host_dependent_precision;
        True if the precision of T depends (or can depend) on the host.
 
@@ -415,9 +441,10 @@ namespace wi
   struct binary_traits;
 
   /* Specify the result type for each supported combination of binary
-     inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
-     mixed, in order to give stronger type checking.  When both inputs
-     are CONST_PRECISION, they must have the same precision.  */
+     inputs.  Note that CONST_PRECISION, WIDEST_CONST_PRECISION and
+     VAR_PRECISION cannot be mixed, in order to give stronger type
+     checking.  When both inputs are CONST_PRECISION or both are
+     WIDEST_CONST_PRECISION, they must have the same precision.  */
   template <typename T1, typename T2>
   struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
   {
@@ -447,6 +474,17 @@ namespace wi
   };
 
   template <typename T1, typename T2>
+  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, WIDEST_CONST_PRECISION>
+  {
+    typedef generic_wide_int < widest_int_storage
+			       <int_traits <T2>::inl_precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
+    typedef result_type signed_shift_result_type;
+    typedef bool signed_predicate_result;
+  };
+
+  template <typename T1, typename T2>
   struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
   {
     typedef wide_int result_type;
@@ -468,6 +506,17 @@ namespace wi
   };
 
   template <typename T1, typename T2>
+  struct binary_traits <T1, T2, WIDEST_CONST_PRECISION, FLEXIBLE_PRECISION>
+  {
+    typedef generic_wide_int < widest_int_storage
+			       <int_traits <T1>::inl_precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
+    typedef result_type signed_shift_result_type;
+    typedef bool signed_predicate_result;
+  };
+
+  template <typename T1, typename T2>
   struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
   {
     STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
@@ -482,6 +531,18 @@ namespace wi
   };
 
   template <typename T1, typename T2>
+  struct binary_traits <T1, T2, WIDEST_CONST_PRECISION, WIDEST_CONST_PRECISION>
+  {
+    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
+    typedef generic_wide_int < widest_int_storage
+			       <int_traits <T1>::inl_precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
+    typedef result_type signed_shift_result_type;
+    typedef bool signed_predicate_result;
+  };
+
+  template <typename T1, typename T2>
   struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
   {
     typedef wide_int result_type;
@@ -709,8 +770,10 @@ wi::storage_ref::get_val () const
    Although not required by generic_wide_int itself, writable storage
    classes can also provide the following functions:
 
-   HOST_WIDE_INT *write_val ()
-     Get a modifiable version of get_val ()
+   HOST_WIDE_INT *write_val (unsigned int)
+     Get a modifiable version of get_val ().  The argument should be
+     an upper estimate of LEN (it is ignored by all storages except
+     widest_int_storage).
 
    unsigned int set_len (unsigned int len)
      Set the value returned by get_len () to LEN.  */
@@ -777,6 +840,8 @@ public:
 
   static const bool is_sign_extended
     = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
+  static const bool needs_write_val_arg
+    = wi::int_traits <generic_wide_int <storage> >::needs_write_val_arg;
 };
 
 template <typename storage>
@@ -1049,6 +1114,7 @@ namespace wi
     static const enum precision_type precision_type = VAR_PRECISION;
     static const bool host_dependent_precision = HDP;
     static const bool is_sign_extended = SE;
+    static const bool needs_write_val_arg = false;
   };
 }
 
@@ -1065,7 +1131,11 @@ namespace wi
 class GTY(()) wide_int_storage
 {
 private:
-  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
+  union
+  {
+    HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
+    HOST_WIDE_INT *valp;
+  } GTY((skip)) u;
   unsigned int len;
   unsigned int precision;
 
@@ -1073,14 +1143,17 @@ public:
   wide_int_storage ();
   template <typename T>
   wide_int_storage (const T &);
+  wide_int_storage (const wide_int_storage &);
+  ~wide_int_storage ();
 
   /* The standard generic_wide_int storage methods.  */
   unsigned int get_precision () const;
   const HOST_WIDE_INT *get_val () const;
   unsigned int get_len () const;
-  HOST_WIDE_INT *write_val ();
+  HOST_WIDE_INT *write_val (unsigned int);
   void set_len (unsigned int, bool = false);
 
+  wide_int_storage &operator = (const wide_int_storage &);
   template <typename T>
   wide_int_storage &operator = (const T &);
 
@@ -1099,12 +1172,15 @@ namespace wi
     /* Guaranteed by a static assert in the wide_int_storage constructor.  */
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     template <typename T1, typename T2>
     static wide_int get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
   };
 }
 
-inline wide_int_storage::wide_int_storage () {}
+inline wide_int_storage::wide_int_storage () : precision (0) {}
 
 /* Initialize the storage from integer X, in its natural precision.
    Note that we do not allow integers with host-dependent precision
@@ -1113,21 +1189,75 @@ inline wide_int_storage::wide_int_storag
 template <typename T>
 inline wide_int_storage::wide_int_storage (const T &x)
 {
-  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
-  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
   WIDE_INT_REF_FOR (T) xi (x);
   precision = xi.precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+    u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
   wi::copy (*this, xi);
 }
 
+inline wide_int_storage::wide_int_storage (const wide_int_storage &x)
+{
+  len = x.len;
+  precision = x.precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+}
+
+inline wide_int_storage::~wide_int_storage ()
+{
+  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+    XDELETEVEC (u.valp);
+}
+
+inline wide_int_storage&
+wide_int_storage::operator = (const wide_int_storage &x)
+{
+  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+    {
+      if (this == &x)
+	return *this;
+      XDELETEVEC (u.valp);
+    }
+  len = x.len;
+  precision = x.precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+  return *this;
+}
+
 template <typename T>
 inline wide_int_storage&
 wide_int_storage::operator = (const T &x)
 {
-  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
-  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
   WIDE_INT_REF_FOR (T) xi (x);
-  precision = xi.precision;
+  if (UNLIKELY (precision != xi.precision))
+    {
+      if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+	XDELETEVEC (u.valp);
+      precision = xi.precision;
+      if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+	u.valp = XNEWVEC (HOST_WIDE_INT,
+			  CEIL (precision, HOST_BITS_PER_WIDE_INT));
+    }
   wi::copy (*this, xi);
   return *this;
 }
@@ -1141,7 +1271,7 @@ wide_int_storage::get_precision () const
 inline const HOST_WIDE_INT *
 wide_int_storage::get_val () const
 {
-  return val;
+  return UNLIKELY (precision > WIDE_INT_MAX_PRECISION) ? u.valp : u.val;
 }
 
 inline unsigned int
@@ -1151,9 +1281,9 @@ wide_int_storage::get_len () const
 }
 
 inline HOST_WIDE_INT *
-wide_int_storage::write_val ()
+wide_int_storage::write_val (unsigned int)
 {
-  return val;
+  return UNLIKELY (precision > WIDE_INT_MAX_PRECISION) ? u.valp : u.val;
 }
 
 inline void
@@ -1161,8 +1291,10 @@ wide_int_storage::set_len (unsigned int
 {
   len = l;
   if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
-    val[len - 1] = sext_hwi (val[len - 1],
-			     precision % HOST_BITS_PER_WIDE_INT);
+    {
+      HOST_WIDE_INT &v = write_val (len)[len - 1];
+      v = sext_hwi (v, precision % HOST_BITS_PER_WIDE_INT);
+    }
 }
 
 /* Treat X as having signedness SGN and convert it to a PRECISION-bit
@@ -1172,7 +1304,7 @@ wide_int_storage::from (const wide_int_r
 			signop sgn)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
+  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
 				     x.precision, precision, sgn));
   return result;
 }
@@ -1185,7 +1317,7 @@ wide_int_storage::from_array (const HOST
 			      unsigned int precision, bool need_canon_p)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (wi::from_array (result.write_val (), val, len, precision,
+  result.set_len (wi::from_array (result.write_val (len), val, len, precision,
 				  need_canon_p));
   return result;
 }
@@ -1196,6 +1328,9 @@ wide_int_storage::create (unsigned int p
 {
   wide_int x;
   x.precision = precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
+    x.u.valp = XNEWVEC (HOST_WIDE_INT,
+			CEIL (precision, HOST_BITS_PER_WIDE_INT));
   return x;
 }
 
@@ -1212,6 +1347,194 @@ wi::int_traits <wide_int_storage>::get_b
     return wide_int::create (wi::get_precision (x));
 }
 
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits <wide_int_storage>::get_binary_precision (const T1 &x,
+							 const T2 &y)
+{
+  /* This shouldn't be used for two flexible-precision inputs.  */
+  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
+		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
+  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
+    return wi::get_precision (y);
+  else
+    return wi::get_precision (x);
+}
+
+/* The storage used by rwide_int.  */
+class GTY(()) rwide_int_storage
+{
+private:
+  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
+  unsigned int len;
+  unsigned int precision;
+
+public:
+  rwide_int_storage () = default;
+  template <typename T>
+  rwide_int_storage (const T &);
+
+  /* The standard generic_rwide_int storage methods.  */
+  unsigned int get_precision () const;
+  const HOST_WIDE_INT *get_val () const;
+  unsigned int get_len () const;
+  HOST_WIDE_INT *write_val (unsigned int);
+  void set_len (unsigned int, bool = false);
+
+  template <typename T>
+  rwide_int_storage &operator = (const T &);
+
+  static rwide_int from (const wide_int_ref &, unsigned int, signop);
+  static rwide_int from_array (const HOST_WIDE_INT *, unsigned int,
+			       unsigned int, bool = true);
+  static rwide_int create (unsigned int);
+};
+
+namespace wi
+{
+  template <>
+  struct int_traits <rwide_int_storage>
+  {
+    static const enum precision_type precision_type = VAR_PRECISION;
+    /* Guaranteed by a static assert in the rwide_int_storage constructor.  */
+    static const bool host_dependent_precision = false;
+    static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
+    template <typename T1, typename T2>
+    static rwide_int get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
+  };
+}
+
+/* Initialize the storage from integer X, in its natural precision.
+   Note that we do not allow integers with host-dependent precision
+   to become rwide_ints; rwide_ints must always be logically independent
+   of the host.  */
+template <typename T>
+inline rwide_int_storage::rwide_int_storage (const T &x)
+{
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
+  WIDE_INT_REF_FOR (T) xi (x);
+  precision = xi.precision;
+  gcc_assert (precision <= WIDE_INT_MAX_PRECISION);
+  wi::copy (*this, xi);
+}
+
+template <typename T>
+inline rwide_int_storage&
+rwide_int_storage::operator = (const T &x)
+{
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
+  WIDE_INT_REF_FOR (T) xi (x);
+  precision = xi.precision;
+  gcc_assert (precision <= WIDE_INT_MAX_PRECISION);
+  wi::copy (*this, xi);
+  return *this;
+}
+
+inline unsigned int
+rwide_int_storage::get_precision () const
+{
+  return precision;
+}
+
+inline const HOST_WIDE_INT *
+rwide_int_storage::get_val () const
+{
+  return val;
+}
+
+inline unsigned int
+rwide_int_storage::get_len () const
+{
+  return len;
+}
+
+inline HOST_WIDE_INT *
+rwide_int_storage::write_val (unsigned int)
+{
+  return val;
+}
+
+inline void
+rwide_int_storage::set_len (unsigned int l, bool is_sign_extended)
+{
+  len = l;
+  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
+    val[len - 1] = sext_hwi (val[len - 1],
+			     precision % HOST_BITS_PER_WIDE_INT);
+}
+
+/* Treat X as having signedness SGN and convert it to a PRECISION-bit
+   number.  */
+inline rwide_int
+rwide_int_storage::from (const wide_int_ref &x, unsigned int precision,
+			 signop sgn)
+{
+  rwide_int result = rwide_int::create (precision);
+  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
+				     x.precision, precision, sgn));
+  return result;
+}
+
+/* Create an rwide_int from the explicit block encoding given by VAL and
+   LEN.  PRECISION is the precision of the integer.  NEED_CANON_P is
+   true if the encoding may have redundant trailing blocks.  */
+inline rwide_int
+rwide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
+			       unsigned int precision, bool need_canon_p)
+{
+  rwide_int result = rwide_int::create (precision);
+  result.set_len (wi::from_array (result.write_val (len), val, len, precision,
+				  need_canon_p));
+  return result;
+}
+
+/* Return an uninitialized rwide_int with precision PRECISION.  */
+inline rwide_int
+rwide_int_storage::create (unsigned int precision)
+{
+  rwide_int x;
+  gcc_assert (precision <= WIDE_INT_MAX_PRECISION);
+  x.precision = precision;
+  return x;
+}
+
+template <typename T1, typename T2>
+inline rwide_int
+wi::int_traits <rwide_int_storage>::get_binary_result (const T1 &x,
+						       const T2 &y)
+{
+  /* This shouldn't be used for two flexible-precision inputs.  */
+  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
+		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
+  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
+    return rwide_int::create (wi::get_precision (y));
+  else
+    return rwide_int::create (wi::get_precision (x));
+}
+
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits <rwide_int_storage>::get_binary_precision (const T1 &x,
+							  const T2 &y)
+{
+  /* This shouldn't be used for two flexible-precision inputs.  */
+  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
+		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
+  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
+    return wi::get_precision (y);
+  else
+    return wi::get_precision (x);
+}
+
 /* The storage used by FIXED_WIDE_INT (N).  */
 template <int N>
 class GTY(()) fixed_wide_int_storage
@@ -1221,7 +1544,7 @@ private:
   unsigned int len;
 
 public:
-  fixed_wide_int_storage ();
+  fixed_wide_int_storage () = default;
   template <typename T>
   fixed_wide_int_storage (const T &);
 
@@ -1229,7 +1552,7 @@ public:
   unsigned int get_precision () const;
   const HOST_WIDE_INT *get_val () const;
   unsigned int get_len () const;
-  HOST_WIDE_INT *write_val ();
+  HOST_WIDE_INT *write_val (unsigned int);
   void set_len (unsigned int, bool = false);
 
   static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
@@ -1245,15 +1568,15 @@ namespace wi
     static const enum precision_type precision_type = CONST_PRECISION;
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     static const unsigned int precision = N;
     template <typename T1, typename T2>
     static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
   };
 }
 
-template <int N>
-inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}
-
 /* Initialize the storage from integer X, in precision N.  */
 template <int N>
 template <typename T>
@@ -1288,7 +1611,7 @@ fixed_wide_int_storage <N>::get_len () c
 
 template <int N>
 inline HOST_WIDE_INT *
-fixed_wide_int_storage <N>::write_val ()
+fixed_wide_int_storage <N>::write_val (unsigned int)
 {
   return val;
 }
@@ -1308,7 +1631,7 @@ inline FIXED_WIDE_INT (N)
 fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
 {
   FIXED_WIDE_INT (N) result;
-  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
+  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
 				     x.precision, N, sgn));
   return result;
 }
@@ -1323,7 +1646,7 @@ fixed_wide_int_storage <N>::from_array (
 					bool need_canon_p)
 {
   FIXED_WIDE_INT (N) result;
-  result.set_len (wi::from_array (result.write_val (), val, len,
+  result.set_len (wi::from_array (result.write_val (len), val, len,
 				  N, need_canon_p));
   return result;
 }
@@ -1337,6 +1660,244 @@ get_binary_result (const T1 &, const T2
   return FIXED_WIDE_INT (N) ();
 }
 
+template <int N>
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits < fixed_wide_int_storage <N> >::
+get_binary_precision (const T1 &, const T2 &)
+{
+  return N;
+}
+
+#define WIDEST_INT(N) generic_wide_int < widest_int_storage <N> >
+
+/* The storage used by widest_int.  */
+template <int N>
+class GTY(()) widest_int_storage
+{
+private:
+  union
+  {
+    HOST_WIDE_INT val[WIDE_INT_MAX_HWIS (N)];
+    HOST_WIDE_INT *valp;
+  } GTY((skip)) u;
+  unsigned int len;
+
+public:
+  widest_int_storage ();
+  widest_int_storage (const widest_int_storage &);
+  template <typename T>
+  widest_int_storage (const T &);
+  ~widest_int_storage ();
+  widest_int_storage &operator = (const widest_int_storage &);
+  template <typename T>
+  inline widest_int_storage& operator = (const T &);
+
+  /* The standard generic_wide_int storage methods.  */
+  unsigned int get_precision () const;
+  const HOST_WIDE_INT *get_val () const;
+  unsigned int get_len () const;
+  HOST_WIDE_INT *write_val (unsigned int);
+  void set_len (unsigned int, bool = false);
+
+  static WIDEST_INT (N) from (const wide_int_ref &, signop);
+  static WIDEST_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
+				    bool = true);
+};
+
+namespace wi
+{
+  template <int N>
+  struct int_traits < widest_int_storage <N> >
+  {
+    static const enum precision_type precision_type = WIDEST_CONST_PRECISION;
+    static const bool host_dependent_precision = false;
+    static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = true;
+    static const unsigned int precision
+      = N / WIDE_INT_MAX_PRECISION * WIDEST_INT_MAX_PRECISION;
+    static const unsigned int inl_precision = N;
+    template <typename T1, typename T2>
+    static WIDEST_INT (N) get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
+  };
+}
+
+template <int N>
+inline widest_int_storage <N>::widest_int_storage () : len (0) {}
+
+/* Initialize the storage from integer X, in the precision of
+   WIDEST_INT (N).  */
+template <int N>
+template <typename T>
+inline widest_int_storage <N>::widest_int_storage (const T &x) : len (0)
+{
+  /* Check for type compatibility.  We don't want to initialize a
+     widest integer from something like a wide_int.  */
+  WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
+  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N / WIDE_INT_MAX_PRECISION
+					    * WIDEST_INT_MAX_PRECISION));
+}
+
+template <int N>
+inline
+widest_int_storage <N>::widest_int_storage (const widest_int_storage <N> &x)
+{
+  len = x.len;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, len);
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+}
+
+template <int N>
+inline widest_int_storage <N>::~widest_int_storage ()
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+}
+
+template <int N>
+inline widest_int_storage <N>&
+widest_int_storage <N>::operator = (const widest_int_storage <N> &x)
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      if (this == &x)
+	return *this;
+      XDELETEVEC (u.valp);
+    }
+  len = x.len;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, len);
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+  return *this;
+}
+
+template <int N>
+template <typename T>
+inline widest_int_storage <N>&
+widest_int_storage <N>::operator = (const T &x)
+{
+  /* Check for type compatibility.  We don't want to assign a
+     widest integer from something like a wide_int.  */
+  WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+  len = 0;
+  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N / WIDE_INT_MAX_PRECISION
+					    * WIDEST_INT_MAX_PRECISION));
+  return *this;
+}
+
+template <int N>
+inline unsigned int
+widest_int_storage <N>::get_precision () const
+{
+  return N / WIDE_INT_MAX_PRECISION * WIDEST_INT_MAX_PRECISION;
+}
+
+template <int N>
+inline const HOST_WIDE_INT *
+widest_int_storage <N>::get_val () const
+{
+  return UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT) ? u.valp : u.val;
+}
+
+template <int N>
+inline unsigned int
+widest_int_storage <N>::get_len () const
+{
+  return len;
+}
+
+template <int N>
+inline HOST_WIDE_INT *
+widest_int_storage <N>::write_val (unsigned int l)
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+  len = l;
+  if (UNLIKELY (l > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, l);
+      return u.valp;
+    }
+  return u.val;
+}
+
+template <int N>
+inline void
+widest_int_storage <N>::set_len (unsigned int l, bool)
+{
+  gcc_checking_assert (l <= len);
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT)
+      && l <= N / HOST_BITS_PER_WIDE_INT)
+    {
+      HOST_WIDE_INT *valp = u.valp;
+      memcpy (u.val, valp, len * sizeof (u.val[0]));
+      XDELETEVEC (valp);
+    }
+  len = l;
+  /* There are no excess bits in val[len - 1].  */
+  STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
+}
+
+/* Treat X as having signedness SGN and convert it to the precision of
+   WIDEST_INT (N).  */
+template <int N>
+inline WIDEST_INT (N)
+widest_int_storage <N>::from (const wide_int_ref &x, signop sgn)
+{
+  WIDEST_INT (N) result;
+  unsigned int exp_len = x.len;
+  unsigned int prec = result.get_precision ();
+  if (sgn == UNSIGNED && prec > x.precision && x.val[x.len - 1] < 0)
+    exp_len = CEIL (x.precision, HOST_BITS_PER_WIDE_INT) + 1;
+  result.set_len (wi::force_to_size (result.write_val (exp_len), x.val, x.len,
+				     x.precision, prec, sgn));
+  return result;
+}
+
+/* Create a WIDEST_INT (N) from the explicit block encoding given by
+   VAL and LEN.  NEED_CANON_P is true if the encoding may have redundant
+   trailing blocks.  */
+template <int N>
+inline WIDEST_INT (N)
+widest_int_storage <N>::from_array (const HOST_WIDE_INT *val,
+				    unsigned int len,
+				    bool need_canon_p)
+{
+  WIDEST_INT (N) result;
+  result.set_len (wi::from_array (result.write_val (len), val, len,
+				  result.get_precision (), need_canon_p));
+  return result;
+}
+
+template <int N>
+template <typename T1, typename T2>
+inline WIDEST_INT (N)
+wi::int_traits < widest_int_storage <N> >::
+get_binary_result (const T1 &, const T2 &)
+{
+  return WIDEST_INT (N) ();
+}
+
+template <int N>
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits < widest_int_storage <N> >::
+get_binary_precision (const T1 &, const T2 &)
+{
+  return N / WIDE_INT_MAX_PRECISION * WIDEST_INT_MAX_PRECISION;
+}
+
 /* A reference to one element of a trailing_wide_ints structure.  */
 class trailing_wide_int_storage
 {
@@ -1359,7 +1920,7 @@ public:
   unsigned int get_len () const;
   unsigned int get_precision () const;
   const HOST_WIDE_INT *get_val () const;
-  HOST_WIDE_INT *write_val ();
+  HOST_WIDE_INT *write_val (unsigned int);
   void set_len (unsigned int, bool = false);
 
   template <typename T>
@@ -1445,7 +2006,7 @@ trailing_wide_int_storage::get_val () co
 }
 
 inline HOST_WIDE_INT *
-trailing_wide_int_storage::write_val ()
+trailing_wide_int_storage::write_val (unsigned int)
 {
   return m_val;
 }
@@ -1528,6 +2089,7 @@ namespace wi
     static const enum precision_type precision_type = FLEXIBLE_PRECISION;
     static const bool host_dependent_precision = true;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     static unsigned int get_precision (T);
     static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
   };
@@ -1699,6 +2261,7 @@ namespace wi
        precision of HOST_WIDE_INT.  */
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     static unsigned int get_precision (const wi::hwi_with_prec &);
     static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
 				      const wi::hwi_with_prec &);
@@ -1804,8 +2367,8 @@ template <typename T1, typename T2>
 inline unsigned int
 wi::get_binary_precision (const T1 &x, const T2 &y)
 {
-  return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
-			get_binary_result (x, y));
+  return wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_precision (x,
+									   y);
 }
 
 /* Copy the contents of Y to X, but keeping X's current precision.  */
@@ -1813,9 +2376,9 @@ template <typename T1, typename T2>
 inline void
 wi::copy (T1 &x, const T2 &y)
 {
-  HOST_WIDE_INT *xval = x.write_val ();
-  const HOST_WIDE_INT *yval = y.get_val ();
   unsigned int len = y.get_len ();
+  HOST_WIDE_INT *xval = x.write_val (len);
+  const HOST_WIDE_INT *yval = y.get_val ();
   unsigned int i = 0;
   do
     xval[i] = yval[i];
@@ -2162,6 +2725,8 @@ wi::bit_not (const T &x)
 {
   WI_UNARY_RESULT_VAR (result, val, T, x);
   WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len);
   for (unsigned int i = 0; i < xi.len; ++i)
     val[i] = ~xi.val[i];
   result.set_len (xi.len);
@@ -2203,6 +2768,8 @@ wi::sext (const T &x, unsigned int offse
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
 
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len);
   if (offset <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = sext_hwi (xi.ulow (), offset);
@@ -2230,6 +2797,9 @@ wi::zext (const T &x, unsigned int offse
       return result;
     }
 
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len,
+				 CEIL (offset, HOST_BITS_PER_WIDE_INT)));
   /* In these cases we know that at least the top bit will be clear,
      so no sign extension is necessary.  */
   if (offset < HOST_BITS_PER_WIDE_INT)
@@ -2259,6 +2829,9 @@ wi::set_bit (const T &x, unsigned int bi
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len,
+				 bit / HOST_BITS_PER_WIDE_INT + 1));
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () | (HOST_WIDE_INT_1U << bit);
@@ -2280,6 +2853,8 @@ wi::bswap (const T &x)
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* bswap on widest_int makes no sense.  */
   result.set_len (bswap_large (val, xi.val, xi.len, precision));
   return result;
 }
@@ -2292,6 +2867,8 @@ wi::bitreverse (const T &x)
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* bitreverse on widest_int makes no sense.  */
   result.set_len (bitreverse_large (val, xi.val, xi.len, precision));
   return result;
 }
@@ -2368,6 +2945,8 @@ wi::bit_and (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () & yi.ulow ();
@@ -2389,6 +2968,8 @@ wi::bit_and_not (const T1 &x, const T2 &
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () & ~yi.ulow ();
@@ -2410,6 +2991,8 @@ wi::bit_or (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () | yi.ulow ();
@@ -2431,6 +3014,8 @@ wi::bit_or_not (const T1 &x, const T2 &y
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () | ~yi.ulow ();
@@ -2452,6 +3037,8 @@ wi::bit_xor (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () ^ yi.ulow ();
@@ -2472,6 +3059,8 @@ wi::add (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () + yi.ulow ();
@@ -2515,6 +3104,8 @@ wi::add (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       unsigned HOST_WIDE_INT xl = xi.ulow ();
@@ -2558,6 +3149,8 @@ wi::sub (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () - yi.ulow ();
@@ -2601,6 +3194,8 @@ wi::sub (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       unsigned HOST_WIDE_INT xl = xi.ulow ();
@@ -2643,6 +3238,8 @@ wi::mul (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len + yi.len + 2);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () * yi.ulow ();
@@ -2664,6 +3261,8 @@ wi::mul (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len + yi.len + 2);
   result.set_len (mul_internal (val, xi.val, xi.len,
 				yi.val, yi.len, precision,
 				sgn, overflow, false));
@@ -2698,6 +3297,8 @@ wi::mul_high (const T1 &x, const T2 &y,
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* mul_high on widest_int doesn't make sense.  */
   result.set_len (mul_internal (val, xi.val, xi.len,
 				yi.val, yi.len, precision,
 				sgn, 0, true));
@@ -2716,6 +3317,12 @@ wi::div_trunc (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y);
 
+  if (quotient.needs_write_val_arg)
+    quotient_val = quotient.write_val ((sgn == UNSIGNED
+					&& xi.val[xi.len - 1] < 0)
+				       ? CEIL (precision,
+					       HOST_BITS_PER_WIDE_INT) + 1
+				       : xi.len + 1);
   quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
 				     precision,
 				     yi.val, yi.len, yi.precision,
@@ -2753,6 +3360,15 @@ wi::div_floor (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2795,6 +3411,15 @@ wi::div_ceil (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2828,6 +3453,15 @@ wi::div_round (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2871,6 +3505,15 @@ wi::divmod_trunc (const T1 &x, const T2
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2915,6 +3558,8 @@ wi::mod_trunc (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (remainder.needs_write_val_arg)
+    remainder_val = remainder.write_val (yi.len);
   divmod_internal (0, &remainder_len, remainder_val,
 		   xi.val, xi.len, precision,
 		   yi.val, yi.len, yi.precision, sgn, overflow);
@@ -2955,6 +3600,15 @@ wi::mod_floor (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2991,6 +3645,15 @@ wi::mod_ceil (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -3017,6 +3680,15 @@ wi::mod_round (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -3086,12 +3758,16 @@ wi::lshift (const T1 &x, const T2 &y)
   /* Handle the simple cases quickly.   */
   if (geu_p (yi, precision))
     {
+      if (result.needs_write_val_arg)
+	val = result.write_val (1);
       val[0] = 0;
       result.set_len (1);
     }
   else
     {
       unsigned int shift = yi.to_uhwi ();
+      if (result.needs_write_val_arg)
+	val = result.write_val (xi.len + shift / HOST_BITS_PER_WIDE_INT + 1);
       /* For fixed-precision integers like offset_int and widest_int,
 	 handle the case where the shift value is constant and the
 	 result is a single nonnegative HWI (meaning that we don't
@@ -3130,12 +3806,23 @@ wi::lrshift (const T1 &x, const T2 &y)
   /* Handle the simple cases quickly.   */
   if (geu_p (yi, xi.precision))
     {
+      if (result.needs_write_val_arg)
+	val = result.write_val (1);
       val[0] = 0;
       result.set_len (1);
     }
   else
     {
       unsigned int shift = yi.to_uhwi ();
+      if (result.needs_write_val_arg)
+	{
+	  unsigned int est_len = xi.len;
+	  if (xi.val[xi.len - 1] < 0 && shift)
+	    /* Logical right shift of sign-extended value might need a very
+	       large precision e.g. for widest_int.  */
+	    est_len = CEIL (xi.precision - shift, HOST_BITS_PER_WIDE_INT) + 1;
+	  val = result.write_val (est_len);
+	}
       /* For fixed-precision integers like offset_int and widest_int,
 	 handle the case where the shift value is constant and the
 	 shifted value is a single nonnegative HWI (meaning that all
@@ -3171,6 +3858,8 @@ wi::arshift (const T1 &x, const T2 &y)
      since the result can be no larger than that.  */
   WIDE_INT_REF_FOR (T1) xi (x);
   WIDE_INT_REF_FOR (T2) yi (y);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len);
   /* Handle the simple cases quickly.   */
   if (geu_p (yi, xi.precision))
     {
@@ -3465,7 +4154,7 @@ inline wide_int
 wi::mask (unsigned int width, bool negate_p, unsigned int precision)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (mask (result.write_val (), width, negate_p, precision));
+  result.set_len (mask (result.write_val (0), width, negate_p, precision));
   return result;
 }
 
@@ -3477,7 +4166,7 @@ wi::shifted_mask (unsigned int start, un
 		  unsigned int precision)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
+  result.set_len (shifted_mask (result.write_val (0), start, width, negate_p,
 				precision));
   return result;
 }
@@ -3498,8 +4187,8 @@ wi::mask (unsigned int width, bool negat
 {
   STATIC_ASSERT (wi::int_traits<T>::precision);
   T result;
-  result.set_len (mask (result.write_val (), width, negate_p,
-			wi::int_traits <T>::precision));
+  result.set_len (mask (result.write_val (width / HOST_BITS_PER_WIDE_INT + 1),
+			width, negate_p, wi::int_traits <T>::precision));
   return result;
 }
 
@@ -3512,9 +4201,13 @@ wi::shifted_mask (unsigned int start, un
 {
   STATIC_ASSERT (wi::int_traits<T>::precision);
   T result;
-  result.set_len (shifted_mask (result.write_val (), start, width,
-				negate_p,
-				wi::int_traits <T>::precision));
+  unsigned int prec = wi::int_traits <T>::precision;
+  unsigned int est_len
+    = result.needs_write_val_arg
+      ? ((start + (width > prec - start ? prec - start : width))
+	 / HOST_BITS_PER_WIDE_INT + 1) : 0;
+  result.set_len (shifted_mask (result.write_val (est_len), start, width,
+				negate_p, prec));
   return result;
 }
 
--- gcc/wide-int.cc.jj	2023-09-27 10:37:39.429837179 +0200
+++ gcc/wide-int.cc	2023-09-28 14:59:04.121819198 +0200
@@ -51,7 +51,7 @@ typedef unsigned int UDWtype __attribute
 #include "longlong.h"
 #endif
 
-static const HOST_WIDE_INT zeros[WIDE_INT_MAX_ELTS] = {};
+static const HOST_WIDE_INT zeros[1] = {};
 
 /*
  * Internal utilities.
@@ -62,8 +62,7 @@ static const HOST_WIDE_INT zeros[WIDE_IN
 #define HALF_INT_MASK ((HOST_WIDE_INT_1 << HOST_BITS_PER_HALF_WIDE_INT) - 1)
 
 #define BLOCK_OF(TARGET) ((TARGET) / HOST_BITS_PER_WIDE_INT)
-#define BLOCKS_NEEDED(PREC) \
-  (PREC ? (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT) : 1)
+#define BLOCKS_NEEDED(PREC) (PREC ? CEIL (PREC, HOST_BITS_PER_WIDE_INT) : 1)
 #define SIGN_MASK(X) ((HOST_WIDE_INT) (X) < 0 ? -1 : 0)
 
 /* Return the value a VAL[I] if I < LEN, otherwise, return 0 or -1
@@ -96,7 +95,7 @@ canonize (HOST_WIDE_INT *val, unsigned i
   top = val[len - 1];
   if (len * HOST_BITS_PER_WIDE_INT > precision)
     val[len - 1] = top = sext_hwi (top, precision % HOST_BITS_PER_WIDE_INT);
-  if (top != 0 && top != (HOST_WIDE_INT)-1)
+  if (top != 0 && top != HOST_WIDE_INT_M1)
     return len;
 
   /* At this point we know that the top is either 0 or -1.  Find the
@@ -163,7 +162,7 @@ wi::from_buffer (const unsigned char *bu
   /* We have to clear all the bits ourself, as we merely or in values
      below.  */
   unsigned int len = BLOCKS_NEEDED (precision);
-  HOST_WIDE_INT *val = result.write_val ();
+  HOST_WIDE_INT *val = result.write_val (0);
   for (unsigned int i = 0; i < len; ++i)
     val[i] = 0;
 
@@ -232,8 +231,7 @@ wi::to_mpz (const wide_int_ref &x, mpz_t
     }
   else if (excess < 0 && wi::neg_p (x))
     {
-      int extra
-	= (-excess + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT;
+      int extra = CEIL (-excess, HOST_BITS_PER_WIDE_INT);
       HOST_WIDE_INT *t = XALLOCAVEC (HOST_WIDE_INT, len + extra);
       for (int i = 0; i < len; i++)
 	t[i] = v[i];
@@ -280,8 +278,8 @@ wi::from_mpz (const_tree type, mpz_t x,
      extracted from the GMP manual, section "Integer Import and Export":
      http://gmplib.org/manual/Integer-Import-and-Export.html  */
   numb = CHAR_BIT * sizeof (HOST_WIDE_INT);
-  count = (mpz_sizeinbase (x, 2) + numb - 1) / numb;
-  HOST_WIDE_INT *val = res.write_val ();
+  count = CEIL (mpz_sizeinbase (x, 2), numb);
+  HOST_WIDE_INT *val = res.write_val (0);
   /* Read the absolute value.
 
      Write directly to the wide_int storage if possible, otherwise leave
@@ -1334,21 +1332,6 @@ wi::mul_internal (HOST_WIDE_INT *val, co
   unsigned HOST_WIDE_INT o0, o1, k, t;
   unsigned int i;
   unsigned int j;
-  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
-  unsigned int half_blocks_needed = blocks_needed * 2;
-  /* The sizes here are scaled to support a 2x largest mode by 2x
-     largest mode yielding a 4x largest mode result.  This is what is
-     needed by vpn.  */
-
-  unsigned HOST_HALF_WIDE_INT
-    u[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    v[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  /* The '2' in 'R' is because we are internally doing a full
-     multiply.  */
-  unsigned HOST_HALF_WIDE_INT
-    r[2 * 4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
 
   /* If the top level routine did not really pass in an overflow, then
      just make sure that we never attempt to set it.  */
@@ -1469,6 +1452,35 @@ wi::mul_internal (HOST_WIDE_INT *val, co
       return 1;
     }
 
+  /* The sizes here are scaled to support a 2x WIDE_INT_MAX_PRECISION by 2x
+     WIDE_INT_MAX_PRECISION yielding a 4x WIDE_INT_MAX_PRECISION result.  */
+
+  unsigned HOST_HALF_WIDE_INT
+    ubuf[4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    vbuf[4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  /* The '2' in 'R' is because we are internally doing a full
+     multiply.  */
+  unsigned HOST_HALF_WIDE_INT
+    rbuf[2 * 4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  const HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
+  unsigned HOST_HALF_WIDE_INT *u = ubuf;
+  unsigned HOST_HALF_WIDE_INT *v = vbuf;
+  unsigned HOST_HALF_WIDE_INT *r = rbuf;
+
+  if (prec > WIDE_INT_MAX_PRECISION && !high)
+    prec = (op1len + op2len + 1) * HOST_BITS_PER_WIDE_INT;
+  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
+  unsigned int half_blocks_needed = blocks_needed * 2;
+  if (UNLIKELY (prec > WIDE_INT_MAX_PRECISION))
+    {
+      unsigned HOST_HALF_WIDE_INT *buf
+	= XALLOCAVEC (unsigned HOST_HALF_WIDE_INT, 4 * 4 * blocks_needed);
+      u = buf;
+      v = u + 4 * blocks_needed;
+      r = v + 4 * blocks_needed;
+    }
+
   /* We do unsigned mul and then correct it.  */
   wi_unpack (u, op1val, op1len, half_blocks_needed, prec, SIGNED);
   wi_unpack (v, op2val, op2len, half_blocks_needed, prec, SIGNED);
@@ -1782,16 +1794,6 @@ wi::divmod_internal (HOST_WIDE_INT *quot
 		     unsigned int divisor_prec, signop sgn,
 		     wi::overflow_type *oflow)
 {
-  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
-  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
-  unsigned HOST_HALF_WIDE_INT
-    b_quotient[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    b_remainder[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    b_dividend[(4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT) + 1];
-  unsigned HOST_HALF_WIDE_INT
-    b_divisor[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
   unsigned int m, n;
   bool dividend_neg = false;
   bool divisor_neg = false;
@@ -1910,6 +1912,41 @@ wi::divmod_internal (HOST_WIDE_INT *quot
 	}
     }
 
+  unsigned HOST_HALF_WIDE_INT
+    b_quotient_buf[4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    b_remainder_buf[4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    b_dividend_buf[(4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT)
+		   + 1];
+  unsigned HOST_HALF_WIDE_INT
+    b_divisor_buf[4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT *b_quotient = b_quotient_buf;
+  unsigned HOST_HALF_WIDE_INT *b_remainder = b_remainder_buf;
+  unsigned HOST_HALF_WIDE_INT *b_dividend = b_dividend_buf;
+  unsigned HOST_HALF_WIDE_INT *b_divisor = b_divisor_buf;
+
+  if (dividend_prec > WIDE_INT_MAX_PRECISION
+      && (sgn == SIGNED || dividend_val[dividend_len - 1] >= 0))
+    dividend_prec = (dividend_len + 1) * HOST_BITS_PER_WIDE_INT;
+  if (divisor_prec > WIDE_INT_MAX_PRECISION)
+    divisor_prec = divisor_len * HOST_BITS_PER_WIDE_INT;
+  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
+  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
+  if (UNLIKELY (dividend_prec > WIDE_INT_MAX_PRECISION)
+      || UNLIKELY (divisor_prec > WIDE_INT_MAX_PRECISION))
+    {
+      unsigned HOST_HALF_WIDE_INT *buf
+        = XALLOCAVEC (unsigned HOST_HALF_WIDE_INT,
+		      12 * dividend_blocks_needed
+		      + 4 * divisor_blocks_needed + 1);
+      b_quotient = buf;
+      b_remainder = b_quotient + 4 * dividend_blocks_needed;
+      b_dividend = b_remainder + 4 * dividend_blocks_needed;
+      b_divisor = b_dividend + 4 * dividend_blocks_needed + 1;
+      memset (b_quotient, 0,
+	      4 * dividend_blocks_needed * sizeof (HOST_HALF_WIDE_INT));
+    }
   wi_unpack (b_dividend, dividend.get_val (), dividend.get_len (),
 	     dividend_blocks_needed, dividend_prec, UNSIGNED);
   wi_unpack (b_divisor, divisor.get_val (), divisor.get_len (),
@@ -1924,7 +1961,8 @@ wi::divmod_internal (HOST_WIDE_INT *quot
   while (n > 1 && b_divisor[n - 1] == 0)
     n--;
 
-  memset (b_quotient, 0, sizeof (b_quotient));
+  if (b_quotient == b_quotient_buf)
+    memset (b_quotient_buf, 0, sizeof (b_quotient_buf));
 
   divmod_internal_2 (b_quotient, b_remainder, b_dividend, b_divisor, m, n);
 
@@ -1970,6 +2008,8 @@ wi::lshift_large (HOST_WIDE_INT *val, co
 
   /* The whole-block shift fills with zeros.  */
   unsigned int len = BLOCKS_NEEDED (precision);
+  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
+    len = xlen + skip + 1;
   for (unsigned int i = 0; i < skip; ++i)
     val[i] = 0;
 
@@ -1993,22 +2033,17 @@ wi::lshift_large (HOST_WIDE_INT *val, co
   return canonize (val, len, precision);
 }
 
-/* Right shift XVAL by SHIFT and store the result in VAL.  Return the
+/* Right shift XVAL by SHIFT and store the result in VAL.  LEN is the
    number of blocks in VAL.  The input has XPRECISION bits and the
    output has XPRECISION - SHIFT bits.  */
-static unsigned int
+static void
 rshift_large_common (HOST_WIDE_INT *val, const HOST_WIDE_INT *xval,
-		     unsigned int xlen, unsigned int xprecision,
-		     unsigned int shift)
+		     unsigned int xlen, unsigned int shift, unsigned int len)
 {
   /* Split the shift into a whole-block shift and a subblock shift.  */
   unsigned int skip = shift / HOST_BITS_PER_WIDE_INT;
   unsigned int small_shift = shift % HOST_BITS_PER_WIDE_INT;
 
-  /* Work out how many blocks are needed to store the significant bits
-     (excluding the upper zeros or signs).  */
-  unsigned int len = BLOCKS_NEEDED (xprecision - shift);
-
   /* It's easier to handle the simple block case specially.  */
   if (small_shift == 0)
     for (unsigned int i = 0; i < len; ++i)
@@ -2025,7 +2060,6 @@ rshift_large_common (HOST_WIDE_INT *val,
 	  val[i] |= curr << (-small_shift % HOST_BITS_PER_WIDE_INT);
 	}
     }
-  return len;
 }
 
 /* Logically right shift XVAL by SHIFT and store the result in VAL.
@@ -2036,11 +2070,18 @@ wi::lrshift_large (HOST_WIDE_INT *val, c
 		   unsigned int xlen, unsigned int xprecision,
 		   unsigned int precision, unsigned int shift)
 {
-  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
+  /* Work out how many blocks are needed to store the significant bits
+     (excluding the upper zeros or signs).  */
+  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
+  unsigned int len = blocks_needed;
+  if (UNLIKELY (len > WIDE_INT_MAX_ELTS) && len > xlen && xval[xlen - 1] >= 0)
+    len = xlen;
+
+  rshift_large_common (val, xval, xlen, shift, len);
 
   /* The value we just created has precision XPRECISION - SHIFT.
      Zero-extend it to wider precisions.  */
-  if (precision > xprecision - shift)
+  if (precision > xprecision - shift && len == blocks_needed)
     {
       unsigned int small_prec = (xprecision - shift) % HOST_BITS_PER_WIDE_INT;
       if (small_prec)
@@ -2063,11 +2104,18 @@ wi::arshift_large (HOST_WIDE_INT *val, c
 		   unsigned int xlen, unsigned int xprecision,
 		   unsigned int precision, unsigned int shift)
 {
-  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
+  /* Work out how many blocks are needed to store the significant bits
+     (excluding the upper zeros or signs).  */
+  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
+  unsigned int len = blocks_needed;
+  if (UNLIKELY (len > WIDE_INT_MAX_ELTS) && len > xlen)
+    len = xlen;
+
+  rshift_large_common (val, xval, xlen, shift, len);
 
   /* The value we just created has precision XPRECISION - SHIFT.
      Sign-extend it to wider types.  */
-  if (precision > xprecision - shift)
+  if (precision > xprecision - shift && len == blocks_needed)
     {
       unsigned int small_prec = (xprecision - shift) % HOST_BITS_PER_WIDE_INT;
       if (small_prec)
@@ -2399,9 +2447,12 @@ from_int (int i)
 static void
 assert_deceq (const char *expected, const wide_int_ref &wi, signop sgn)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_dec (wi, buf, sgn);
-  ASSERT_STREQ (expected, buf);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_dec (wi, p, sgn);
+  ASSERT_STREQ (expected, p);
 }
 
 /* Likewise for base 16.  */
@@ -2409,9 +2460,12 @@ assert_deceq (const char *expected, cons
 static void
 assert_hexeq (const char *expected, const wide_int_ref &wi)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_hex (wi, buf);
-  ASSERT_STREQ (expected, buf);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_hex (wi, p);
+  ASSERT_STREQ (expected, p);
 }
 
 /* Test cases.  */
--- gcc/print-tree.cc.jj	2023-07-11 13:40:39.000000000 +0200
+++ gcc/print-tree.cc	2023-09-28 14:12:40.257284557 +0200
@@ -365,13 +365,13 @@ print_node (FILE *file, const char *pref
     fputs (code == CALL_EXPR ? " must-tail-call" : " static", file);
   if (TREE_DEPRECATED (node))
     fputs (" deprecated", file);
-  if (TREE_UNAVAILABLE (node))
-    fputs (" unavailable", file);
   if (TREE_VISITED (node))
     fputs (" visited", file);
 
   if (code != TREE_VEC && code != INTEGER_CST && code != SSA_NAME)
     {
+      if (TREE_UNAVAILABLE (node))
+	fputs (" unavailable", file);
       if (TREE_LANG_FLAG_0 (node))
 	fputs (" tree_0", file);
       if (TREE_LANG_FLAG_1 (node))
--- gcc/dwarf2out.cc.jj	2023-09-28 12:05:50.905151340 +0200
+++ gcc/dwarf2out.cc	2023-09-28 13:06:34.492017940 +0200
@@ -397,7 +397,7 @@ dump_struct_debug (tree type, enum debug
    of the number.  */
 
 static unsigned int
-get_full_len (const wide_int &op)
+get_full_len (const rwide_int &op)
 {
   int prec = wi::get_precision (op);
   return ((prec + HOST_BITS_PER_WIDE_INT - 1)
@@ -3900,7 +3900,7 @@ static void add_data_member_location_att
 						struct vlr_context *);
 static bool add_const_value_attribute (dw_die_ref, machine_mode, rtx);
 static void insert_int (HOST_WIDE_INT, unsigned, unsigned char *);
-static void insert_wide_int (const wide_int &, unsigned char *, int);
+static void insert_wide_int (const rwide_int &, unsigned char *, int);
 static unsigned insert_float (const_rtx, unsigned char *);
 static rtx rtl_for_decl_location (tree);
 static bool add_location_or_const_value_attribute (dw_die_ref, tree, bool);
@@ -4598,14 +4598,14 @@ AT_unsigned (dw_attr_node *a)
 
 static inline void
 add_AT_wide (dw_die_ref die, enum dwarf_attribute attr_kind,
-	     const wide_int& w)
+	     const rwide_int& w)
 {
   dw_attr_node attr;
 
   attr.dw_attr = attr_kind;
   attr.dw_attr_val.val_class = dw_val_class_wide_int;
   attr.dw_attr_val.val_entry = NULL;
-  attr.dw_attr_val.v.val_wide = ggc_alloc<wide_int> ();
+  attr.dw_attr_val.v.val_wide = ggc_alloc<rwide_int> ();
   *attr.dw_attr_val.v.val_wide = w;
   add_dwarf_attr (die, &attr);
 }
@@ -16714,7 +16714,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
 	  mem_loc_result->dw_loc_oprnd1.v.val_die_ref.external = 0;
 	  mem_loc_result->dw_loc_oprnd2.val_class
 	    = dw_val_class_wide_int;
-	  mem_loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<wide_int> ();
+	  mem_loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<rwide_int> ();
 	  *mem_loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, mode);
 	}
       break;
@@ -17288,7 +17288,7 @@ loc_descriptor (rtx rtl, machine_mode mo
 	  loc_result = new_loc_descr (DW_OP_implicit_value,
 				      GET_MODE_SIZE (int_mode), 0);
 	  loc_result->dw_loc_oprnd2.val_class = dw_val_class_wide_int;
-	  loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<wide_int> ();
+	  loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<rwide_int> ();
 	  *loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, int_mode);
 	}
       break;
@@ -20189,7 +20189,7 @@ extract_int (const unsigned char *src, u
 /* Writes wide_int values to dw_vec_const array.  */
 
 static void
-insert_wide_int (const wide_int &val, unsigned char *dest, int elt_size)
+insert_wide_int (const rwide_int &val, unsigned char *dest, int elt_size)
 {
   int i;
 
@@ -20274,7 +20274,7 @@ add_const_value_attribute (dw_die_ref di
 	  && (GET_MODE_PRECISION (int_mode)
 	      & (HOST_BITS_PER_WIDE_INT - 1)) == 0)
 	{
-	  wide_int w = rtx_mode_t (rtl, int_mode);
+	  rwide_int w = rtx_mode_t (rtl, int_mode);
 	  add_AT_wide (die, DW_AT_const_value, w);
 	  return true;
 	}
--- gcc/dwarf2out.h.jj	2023-09-27 10:37:38.536849616 +0200
+++ gcc/dwarf2out.h	2023-09-28 13:06:34.492017940 +0200
@@ -30,7 +30,7 @@ typedef struct dw_cfi_node *dw_cfi_ref;
 typedef struct dw_loc_descr_node *dw_loc_descr_ref;
 typedef struct dw_loc_list_struct *dw_loc_list_ref;
 typedef struct dw_discr_list_node *dw_discr_list_ref;
-typedef wide_int *wide_int_ptr;
+typedef rwide_int *rwide_int_ptr;
 
 
 /* Call frames are described using a sequence of Call Frame
@@ -252,7 +252,7 @@ struct GTY(()) dw_val_node {
       unsigned HOST_WIDE_INT
 	GTY ((tag ("dw_val_class_unsigned_const"))) val_unsigned;
       double_int GTY ((tag ("dw_val_class_const_double"))) val_double;
-      wide_int_ptr GTY ((tag ("dw_val_class_wide_int"))) val_wide;
+      rwide_int_ptr GTY ((tag ("dw_val_class_wide_int"))) val_wide;
       dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec;
       struct dw_val_die_union
 	{
--- gcc/tree.h.jj	2023-09-27 10:37:39.114841566 +0200
+++ gcc/tree.h	2023-09-28 13:06:34.506017744 +0200
@@ -6258,13 +6258,17 @@ namespace wi
   template <int N>
   struct int_traits <extended_tree <N> >
   {
-    static const enum precision_type precision_type = CONST_PRECISION;
+    static const enum precision_type precision_type
+      = N == ADDR_MAX_PRECISION ? CONST_PRECISION : WIDEST_CONST_PRECISION;
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
     static const unsigned int precision = N;
+    static const unsigned int inl_precision
+      = N == ADDR_MAX_PRECISION ? 0
+	     : N / WIDEST_INT_MAX_PRECISION * WIDE_INT_MAX_PRECISION;
   };
 
-  typedef extended_tree <WIDE_INT_MAX_PRECISION> widest_extended_tree;
+  typedef extended_tree <WIDEST_INT_MAX_PRECISION> widest_extended_tree;
   typedef extended_tree <ADDR_MAX_PRECISION> offset_extended_tree;
 
   typedef const generic_wide_int <widest_extended_tree> tree_to_widest_ref;
@@ -6292,7 +6296,8 @@ namespace wi
   tree_to_poly_wide_ref to_poly_wide (const_tree);
 
   template <int N>
-  struct ints_for <generic_wide_int <extended_tree <N> >, CONST_PRECISION>
+  struct ints_for <generic_wide_int <extended_tree <N> >,
+		   int_traits <extended_tree <N> >::precision_type>
   {
     typedef generic_wide_int <extended_tree <N> > extended;
     static extended zero (const extended &);
@@ -6308,7 +6313,7 @@ namespace wi
 
 /* Used to convert a tree to a widest2_int like this:
    widest2_int foo = widest2_int_cst (some_tree).  */
-typedef generic_wide_int <wi::extended_tree <WIDE_INT_MAX_PRECISION * 2> >
+typedef generic_wide_int <wi::extended_tree <WIDEST_INT_MAX_PRECISION * 2> >
   widest2_int_cst;
 
 /* Refer to INTEGER_CST T as though it were a widest_int.
@@ -6444,7 +6449,7 @@ wi::extended_tree <N>::get_len () const
 {
   if (N == ADDR_MAX_PRECISION)
     return TREE_INT_CST_OFFSET_NUNITS (m_t);
-  else if (N >= WIDE_INT_MAX_PRECISION)
+  else if (N >= WIDEST_INT_MAX_PRECISION)
     return TREE_INT_CST_EXT_NUNITS (m_t);
   else
     /* This class is designed to be used for specific output precisions
@@ -6530,7 +6535,8 @@ wi::to_poly_wide (const_tree t)
 template <int N>
 inline generic_wide_int <wi::extended_tree <N> >
 wi::ints_for <generic_wide_int <wi::extended_tree <N> >,
-	      wi::CONST_PRECISION>::zero (const extended &x)
+	      wi::int_traits <wi::extended_tree <N> >::precision_type
+	     >::zero (const extended &x)
 {
   return build_zero_cst (TREE_TYPE (x.get_tree ()));
 }
--- gcc/value-range.cc.jj	2023-09-27 10:37:39.240839811 +0200
+++ gcc/value-range.cc	2023-09-28 13:06:34.512017660 +0200
@@ -245,17 +245,24 @@ vrange::dump (FILE *file) const
 void
 irange_bitmask::dump (FILE *file) const
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
   pretty_printer buffer;
 
   pp_needs_newline (&buffer) = true;
   buffer.buffer->stream = file;
   pp_string (&buffer, "MASK ");
-  print_hex (m_mask, buf);
-  pp_string (&buffer, buf);
+  unsigned len_mask = m_mask.get_len ();
+  unsigned len_val = m_value.get_len ();
+  unsigned len = MAX (len_mask, len_val);
+  if (len > WIDE_INT_MAX_ELTS)
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  else
+    p = buf;
+  print_hex (m_mask, p);
+  pp_string (&buffer, p);
   pp_string (&buffer, " VALUE ");
-  print_hex (m_value, buf);
-  pp_string (&buffer, buf);
+  print_hex (m_value, p);
+  pp_string (&buffer, p);
   pp_flush (&buffer);
 }
 
--- gcc/c/c-decl.cc.jj	2023-09-27 10:37:38.428851119 +0200
+++ gcc/c/c-decl.cc	2023-09-28 13:06:34.514017632 +0200
@@ -12355,11 +12355,11 @@ declspecs_add_type (location_t loc, stru
 				spec.expr);
 		      return specs;
 		    }
-		  if (wi::to_widest (spec.expr) > WIDE_INT_MAX_PRECISION - 1)
+		  if (wi::to_widest (spec.expr) > WIDEST_INT_MAX_PRECISION - 1)
 		    {
 		      error_at (loc, "%<_BitInt%> argument %qE is larger than "
 				     "%<BITINT_MAXWIDTH%> %qd",
-				spec.expr, (int) WIDE_INT_MAX_PRECISION - 1);
+				spec.expr, (int) WIDEST_INT_MAX_PRECISION - 1);
 		      return specs;
 		    }
 		  specs->u.bitint_prec = tree_to_uhwi (spec.expr);
--- gcc/gengtype.cc.jj	2023-09-27 10:37:38.751846621 +0200
+++ gcc/gengtype.cc	2023-09-28 13:06:34.515017618 +0200
@@ -5236,7 +5236,6 @@ main (int argc, char **argv)
       POS_HERE (do_scalar_typedef ("double_int", &pos));
       POS_HERE (do_scalar_typedef ("poly_int64_pod", &pos));
       POS_HERE (do_scalar_typedef ("offset_int", &pos));
-      POS_HERE (do_scalar_typedef ("widest_int", &pos));
       POS_HERE (do_scalar_typedef ("int64_t", &pos));
       POS_HERE (do_scalar_typedef ("poly_int64", &pos));
       POS_HERE (do_scalar_typedef ("poly_uint64", &pos));
--- gcc/tree-ssa-loop-niter.cc.jj	2023-09-27 10:37:39.072842151 +0200
+++ gcc/tree-ssa-loop-niter.cc	2023-09-28 13:06:34.515017618 +0200
@@ -3873,12 +3873,17 @@ do_warn_aggressive_loop_optimizations (c
     return;
 
   gimple *estmt = last_nondebug_stmt (e->src);
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_dec (i_bound, buf, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations))
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
+  unsigned len = i_bound.get_len ();
+  if (len > WIDE_INT_MAX_ELTS)
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  else
+    p = buf;
+  print_dec (i_bound, p, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations))
 	     ? UNSIGNED : SIGNED);
   auto_diagnostic_group d;
   if (warning_at (gimple_location (stmt), OPT_Waggressive_loop_optimizations,
-		  "iteration %s invokes undefined behavior", buf))
+		  "iteration %s invokes undefined behavior", p))
     inform (gimple_location (estmt), "within this loop");
   loop->warned_aggressive_loop_optimizations = true;
 }
--- gcc/c-family/c-warn.cc.jj	2023-09-27 10:37:38.334852428 +0200
+++ gcc/c-family/c-warn.cc	2023-09-28 13:06:34.524017491 +0200
@@ -1517,13 +1517,15 @@ match_case_to_enum_1 (tree key, tree typ
     return;
 
   char buf[WIDE_INT_PRINT_BUFFER_SIZE];
+  wide_int w = wi::to_wide (key);
 
+  gcc_assert (w.get_len () <= WIDE_INT_MAX_ELTS);
   if (tree_fits_uhwi_p (key))
-    print_dec (wi::to_wide (key), buf, UNSIGNED);
+    print_dec (w, buf, UNSIGNED);
   else if (tree_fits_shwi_p (key))
-    print_dec (wi::to_wide (key), buf, SIGNED);
+    print_dec (w, buf, SIGNED);
   else
-    print_hex (wi::to_wide (key), buf);
+    print_hex (w, buf);
 
   if (TYPE_NAME (type) == NULL_TREE)
     warning_at (DECL_SOURCE_LOCATION (CASE_LABEL (label)),
--- gcc/c-family/c-cppbuiltin.cc.jj	2023-09-27 10:37:38.226853933 +0200
+++ gcc/c-family/c-cppbuiltin.cc	2023-09-28 13:06:34.541017253 +0200
@@ -1195,10 +1195,10 @@ c_cpp_builtins (cpp_reader *pfile)
       struct bitint_info info;
       /* For now, restrict __BITINT_MAXWIDTH__ to what can be represented in
 	 wide_int and widest_int.  */
-      if (targetm.c.bitint_type_info (WIDE_INT_MAX_PRECISION - 1, &info))
+      if (targetm.c.bitint_type_info (WIDEST_INT_MAX_PRECISION - 1, &info))
 	{
 	  cpp_define_formatted (pfile, "__BITINT_MAXWIDTH__=%d",
-				(int) WIDE_INT_MAX_PRECISION - 1);
+				(int) WIDEST_INT_MAX_PRECISION - 1);
 	  if (flag_building_libgcc)
 	    {
 	      scalar_int_mode limb_mode
--- gcc/c-family/c-lex.cc.jj	2023-09-27 10:37:38.272853292 +0200
+++ gcc/c-family/c-lex.cc	2023-09-28 13:06:34.550017127 +0200
@@ -843,7 +843,7 @@ interpret_integer (const cpp_token *toke
       int max_bits_per_digit = 4; // ceil (log2 (10))
       unsigned int prefix_len = 0;
       bool hex = false;
-      const int bitint_maxwidth = WIDE_INT_MAX_PRECISION - 1;
+      const int bitint_maxwidth = WIDEST_INT_MAX_PRECISION - 1;
       if ((flags & CPP_N_RADIX) == CPP_N_OCTAL)
 	{
 	  max_bits_per_digit = 3;
--- gcc/value-range-pretty-print.cc.jj	2023-09-27 10:37:39.170840787 +0200
+++ gcc/value-range-pretty-print.cc	2023-09-28 13:06:34.550017127 +0200
@@ -99,12 +99,19 @@ vrange_printer::print_irange_bitmasks (c
     return;
 
   pp_string (pp, " MASK ");
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_hex (bm.mask (), buf);
-  pp_string (pp, buf);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
+  unsigned len_mask = bm.mask ().get_len ();
+  unsigned len_val = bm.value ().get_len ();
+  unsigned len = MAX (len_mask, len_val);
+  if (len > WIDE_INT_MAX_ELTS)
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  else
+    p = buf;
+  print_hex (bm.mask (), p);
+  pp_string (pp, p);
   pp_string (pp, " VALUE ");
-  print_hex (bm.value (), buf);
-  pp_string (pp, buf);
+  print_hex (bm.value (), p);
+  pp_string (pp, p);
 }
 
 void
--- gcc/poly-int.h.jj	2023-09-27 10:37:38.874844909 +0200
+++ gcc/poly-int.h	2023-09-28 13:06:34.551017113 +0200
@@ -97,6 +97,18 @@ struct poly_coeff_traits<T, wi::CONST_PR
   static const int rank = precision * 2 / CHAR_BIT;
 };
 
+template<typename T>
+struct poly_coeff_traits<T, wi::WIDEST_CONST_PRECISION>
+{
+  typedef WI_UNARY_RESULT (T) result;
+  typedef int int_type;
+  /* These types are always signed.  */
+  static const int signedness = 1;
+  static const int precision = wi::int_traits<T>::precision;
+  static const int inl_precision = wi::int_traits<T>::inl_precision;
+  static const int rank = precision * 2 / CHAR_BIT;
+};
+
 /* Information about a pair of coefficient types.  */
 template<typename T1, typename T2>
 struct poly_coeff_pair_traits
--- gcc/godump.cc.jj	2023-09-27 10:37:38.805845870 +0200
+++ gcc/godump.cc	2023-09-28 13:06:34.551017113 +0200
@@ -1154,7 +1154,11 @@ go_output_typedef (class godump_containe
 	    snprintf (buf, sizeof buf, HOST_WIDE_INT_PRINT_UNSIGNED,
 		      tree_to_uhwi (value));
 	  else
-	    print_hex (wi::to_wide (element), buf);
+	    {
+	      wide_int w = wi::to_wide (element);
+	      gcc_assert (w.get_len () <= WIDE_INT_MAX_ELTS);
+	      print_hex (w, buf);
+	    }
 
 	  mhval->value = xstrdup (buf);
 	  *slot = mhval;
--- gcc/value-range.h.jj	2023-09-27 10:37:39.268839422 +0200
+++ gcc/value-range.h	2023-09-28 13:06:34.555017057 +0200
@@ -626,7 +626,10 @@ irange::maybe_resize (int needed)
     {
       m_max_ranges = HARD_MAX_RANGES;
       wide_int *newmem = new wide_int[m_max_ranges * 2];
-      memcpy (newmem, m_base, sizeof (wide_int) * num_pairs () * 2);
+      /* wide_int is no longer trivially copyable; copy elementwise.  */
+      unsigned n = num_pairs () * 2;
+      for (unsigned i = 0; i < n; ++i)
+	newmem[i] = m_base[i];
       m_base = newmem;
     }
 }
--- gcc/stor-layout.cc.jj	2023-09-27 10:37:38.951843836 +0200
+++ gcc/stor-layout.cc	2023-09-28 13:06:34.560016987 +0200
@@ -2946,7 +2946,7 @@ set_min_and_max_values_for_integral_type
   if (precision < 1)
     return;
 
-  gcc_assert (precision <= WIDE_INT_MAX_PRECISION);
+  gcc_assert (precision <= WIDEST_INT_MAX_PRECISION);
 
   TYPE_MIN_VALUE (type)
     = wide_int_to_tree (type, wi::min_value (precision, sgn));
--- gcc/wide-int-print.cc.jj	2023-09-27 10:37:39.379837876 +0200
+++ gcc/wide-int-print.cc	2023-09-28 14:24:04.824794192 +0200
@@ -74,9 +74,14 @@ print_decs (const wide_int_ref &wi, char
 void
 print_decs (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_decs (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
+    /* Values this large are printed in hex: one digit per 4 bits of
+       each limb, plus room for "0x", a sign and the terminating NUL.  */
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_decs (wi, p);
+  fputs (p, file);
 }
 
 /* Try to print the unsigned self in decimal to BUF if the number fits
@@ -98,9 +101,12 @@ print_decu (const wide_int_ref &wi, char
 void
 print_decu (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_decu (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_decu (wi, p);
+  fputs (p, file);
 }
 
 void
@@ -134,9 +140,12 @@ print_hex (const wide_int_ref &val, char
 void
 print_hex (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_hex (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_hex (wi, p);
+  fputs (p, file);
 }
 
 /* Print larger precision wide_int.  Not defined as inline in a header
--- gcc/testsuite/gcc.dg/bitint-38.c.jj	2023-09-28 15:02:23.182069788 +0200
+++ gcc/testsuite/gcc.dg/bitint-38.c	2023-09-28 15:02:39.168848976 +0200
@@ -0,0 +1,17 @@
+/* PR c/102989 */
+/* { dg-do compile { target { bitint } } } */
+
+#if __BITINT_MAXWIDTH__ >= 16319
+constexpr unsigned _BitInt(16319) a
+  = 468098567701677261276215481936770442254383643766995378241600227179396283432916865881332215867106489159251577495372085663487092317743244770597287633199005374998455333587280357490149993101811392051483761495987108264964738337118155155862715438910721661230332533185335581757600511846854115932637261969633134365868695363914570578110064471868475841348589366933645410987699979080140212849909081188170910464967486231358935212897096260626033055536141835599284498474737858487658470115144771923114826312283863035503700600141440724426364699636330240414271275626021294939422483250619629005959992243418661230122132667769781183790338759345884903821695590991577228520523725302048215447841573113840811593638413425054938213262961448317898574140533090004992732688525115004782973893244091427000396890427152225308661078954671066069234453757593181753900865203439035402480306413572239610467142591920809187367438071170100969567440044691427487959785637338381651309916782063670286046547585240837892307170928849485877186793280707600840866783471799148179250818387716183127323346199533387463363442356218803779697005759324410376476855222420876262425985571982818180353870410149824214544313013285199544193496624223219986402944849622489422007678564946174797892795089330899535624727777525330789492703574564112252955147770942929761545604350869404246558274752353510370157229485004402131043153454290397929387276374054938578976878606467217359398684275050519104413914286024106808116340712273059427362293703151355498336213170698894448405369398757188523160460292714875857879968173578328191358215972493513271297875634400793301929250052822258636015650857683023900709845410838487936778533250407886180954576046340697908584020951295048844938047865657029072850797442976146895294184993736999505485665742811313795405530674199848055802759901786376822069529342971261963119332476504064285869362049662083405789828433132154933242817432809415810548180658750393692272729586232842065658490971201927780014258815333115459695117942273551876646844821076723
66404028277283451141989135127816901710398709480382959428635234046834661872608878149262681618865733135910417181982267380585631782849903908808822313725829737392904330767357009039694778959879992292864384353261701216481107461888177462262894353903797488381268913080186091509003587024406100581941813006839098647031467785360508010331341183790435828783740154625741324046693989352750893154106524192987230720387644388210619326254465229013236469167191033200612786414699140401536668356931724805794959607035492936115832695555160023607526843504410588016279838079916160798736528245866203159909692182517620270789073002369870685576293269168825936535896407659582457777527599118314911837204720605511846311286460406385389482040724983787136893494143811968060552854688725693433424607559674641029795445863235817142871414182091818338443568133237931754104825239171071219662340633870206119521372456930328540224285367138611314821153569168546183645829503753803437831805510824008241444120530040152673239995922834692652858685274338949097873478792672199985538879471183716442300771962610917900546611370645076526968758081982277218930108450362729738967513422822233728686764111051106198023124788453349244289893674342964195831413532907340649577636920815803211588385069101056904898394112677147799097609225239197281269166984744679850724410612166788542302561376925810277385553750973329580501331393740228280489721384722107264711160517234946456408991490649350813385538962717766342605776325208628632534381125475768180306827627804875799742528433471319022681846302307446190017695801005557243498313517114536524233927332698446518106428726464547083209111510064058410437557730405695196945620013848531356000927233822810363776386328926167325872673675340704414366407947949697258056053449480617081046930477300587359062628007238799966852254674798570159961397510118854385785214155925163405867671830800032486980962819944268156561566291262602279606441449610634423643128569768835770799298996656155717172997209353300747694786221592258320481118901555050564208
2475400647639520782187776825395598257421714106473869797642678266380755873356747812273977691604147842741151722919464734890326772594979022403228191075586910464204870254674290437668861177639713112762996390246102030994917186957826982084194156870398312336059100521566034092740694642613192909850644003933745129291062576341213874815510099835708723355432970090139671120232910747665906191360160259512198160849784197597300106223945960886603127136037120000864968668651452411048372895607382907494278810971475663944948791458618662250238375166523484847507342040066801856222328988662049579299600545682490412754483621051190231623196265549391964259780178070495642538883789503379406531279338866955157646654913405181879254189185904298325865503395688786311067669273609670603076582607253527084977744533187145642686236350165593980428575119329911921382240780504527422630654086941060242757131313184709635181001199631726283364158943337968797uwb
+    + 9935443518057456429927126655222257817207511311671335832560065573055276678747990652907348839741818562757939084649073348172108397183827020377941725983107513636287406530526358253508437290241937276908386282904353079102904535675608604576486162998319427702851278408213641454837223079616401615875672453250148421679223829417834227518133091055180270249266161676677176149675164257640812344297935650729629801878758059944090168862730519817203352341458310363811482318083270232434329317323822818991134500601669868922396013512969477839456472345812312321924215241849772147687455760224559240952737319009348540894966363568158349501355229264646770018071590502441702787269097973979899837683122194103110089728425676690246091146993955037918425772840022288222832932542516091501149477160856564464376910293230091963573119230648026667896399352790982611957569978972038178519570278447540707502861678502657905192743225893225663994807568918644898273702285483676385717651104042002105352993176512166420085064452431753181365805833548922676748890412420332694609096819779765600345216390394307257556778223743443958983962113723193551247897995423762348092103893683711373897139168289420267660611409947644548715007787832959251167553175096639147674776117973100447903243626902892382263767591328038235708593401563793019418124453166386471792468421003855894206584354731489363668134077946203546067237235657746480296831651791790385981397558458905904641394246279782746736009101862366868068363411976388557697921914317179371206444085390779634831369723370050764678852846779369497232374780691905280992368079762747352245519607264154197148958896955661904214909184952289996142050604821608749900417845137727596903100452350067551305840998280482775209883278873071895588751811462342517825753493814997918418437455474992422243919549967371964423457440287296270855605850954685912644303354019058716916735522533065323057755479803668782530250381988211075034655760123250249441440684338450953823290346909689822527652698723502872312570305261196768477498898020793
07180875890338179687386868237885092521162939276062868522274507354411661563555791080535762359021802371583271637253251937286209382854579732556780369199805178515606586156688887146113013352203932184343901796438203008075247670939873134117306243027500311195490762783720848834868666690476571065691770647092431843216015545072600766803549457177979312921224210129327485323785084880615277446368924342668329588464868079024036309701521834796639916638009037062859128871230513317186963967992285406649307677316697019048298882801703101689156197198627967537196302093246933726406131778633056683938398938476093559029928796354686384811999945173954840512400151403309669560558076612161144063854998889597026242513321815984806172721716348713180648168676684378997146524790353485383795141384578666712242718264898915659952964743941955378515856161311402326730386992756517050778178236644701134085125817853410158595008142343770377849234744823047389764350577395738550411218244669058503382374717596692909129369320106185867014120912909145286129227627601291062407124116540208916160694442382624546160859493573248190019824086229340944230880069001955083163047988300057988461460190696172301135444980457679433982605698695768009091604684867341972352969438465380940037721854507526914876612919463703940822551567801333218807499721766783549494004301491787743835490267310745316427528001025104036004093730873892568947572513163903201197900964271354229289421905935297293315111237619738381492536328867099555626944780499492508679172813690669324950711509780706036587211099821076833607838950872418486359728598773691207307198013716259077966467503342911932785530782717467374925746298305422163179752700998759573246022219736760844097348821189847143930205138880681852165968587367238382802132984815341020492660771097167826854167758442169523801178435138604786915878715663463069387242806786498032006329343588757474585906702498848574235327854870446754429879351158358765971371167706579237119932941937239272032198186226989002483234899986544933985633922038685316264
1984444934998176248821703154774794026863423846665361147912580310179333239849314145158103813724371277156031826070213656189218428551171492579367736652650240510840524479280661922149370381404863668038229922105064658335083314946842545978050497021795217124947959575065471749872278802756371390871441004232633252611825748658593540667831098874027223327541523742857750954119615708541514145110863925049204517574000824797900817585376961462754521495100198829675100958066639531958106704159717265035205597161047879510849900587565746603225763129877434317949842105742386965886137117798642168190733367414126797929434627532307855448841035433795229031275545885872876848846666666475465866905332293095381494096702328649920740506658930503053162777944821433383407283155178707970906458023827141681140372968356084617001053870499079884384019820875585843129082894687740533946763756846924952825251383026364635539377880784234770789463152435704464616uwb;
+constexpr unsigned _BitInt(16319) b
+  = 201297445670930272757410050706289982624491660465170269036956837558544487568343601665131324050787963146027819983307053684073674820301566372069948774255822501245951067183970281991127738921057274780296261225407186724668122441725219688250048125966841905344001692912450198866643346323472031729064718300479187798706672968308261087690363842676049695093363984215164826771706973231448072373451307677338614156650375912499484900858673561833191015411671765861950517217665521946675304171422505561338956884416634006130147812768253943589754589674751478065890135065694159454968411311007381804262384649506292683797740132856270496215291920477368030890927518915139926054190865025882333320572966385672903060939108787420935008738642771747194101836407658215805878319677167083639762255359053179081377804972674444167601766477058340469960108202124942440832222540377006995297899910334489799121285077103435004667868393510710457882392002319712888793520623296276540834303175498324831486965141663548707027165707832577079609274275294762496264442399518122931004650389638079392976399014560864084596772922490782305816240341600831984373745397286779063062899608736010837062018829992435540254299570916198129450184325033096743494275130577671607546912273653322418451757971067132955930636352026553442736954388106857124510033514694600855827527404147232640946659622051407638206917730907808664237279907113237485127665225378509765905986583979798452155950297827505371406035885922153636089924339222895422334581026342592757576904407543080095938552381372273517984464869811516727665137169980276022157512567193704293971295494591202772023271187887430809984834704361926253983400578503914789096681852906353804239554046072177109586360503737308384693363708450394319455433267005792709190528859753641414223310872888744622858586371766212551416982644129035226780333179891701158800815162840975593001335077994718953264573368151724211559955251687816351311439911364166420167449490823212046898398613762667954855321719238269424865029134002869639403094845074841
29423576156798044985198780159055788525538310878089397895175129162099671894337526801235280427428321205321530735108239848594278720839317921782831352363541199919557577597546876704462612904924694431903072332864341465745291866718067601041404212430941956177407763481845568339170224196193106463030409080073136605433869775860974939991008596874978506245689726966715206639438259724689301019692258116991317695012205036157177039536905494005833948384397446492918129185274359806145454148241131925838562069991934872329314452016900728948186477387223161994145551216156032211038319475270853818660079065895119923373317496777184177315345923787700803986965175033224375435249224949151191006574511519055220741174631165879299688118138728380219550143006894817522270338472413899079751917314505754802052988622174392135207139715960212346858882422543222621408433817817181595201086403368301839080592455115463829425708132345811270911456928961301265223101989524481521721969838980208647528038509328501705428950749820080720418776718084142086501267418284241370398868561282277848391673847937247873117719906103441015578245152673184719538896073697272475250261227685660058944107087333786104761624391816175414338999215260190162551489343436332492645887029551964578826432156700872459216605843463884228343167159924792752429816064841479438134662749621639560203443871326810129872763539114284811330805213188716333471069710270583945841626338361700846410927750916663908367683188084193258384935122236639934335284160522042065088923421928660724095726039642836343542211473282392554371973074108770797447448654428325845253304889062021031599531436606775029315849674756213988932349651640552571880780461452187094400408403309806507698230071584809861634596000425300485805174853406774961321055086995665513868382285048348264250174388793184093524675621762558537763747237314473883173686633576273836946507237880619627632543093619281096675643877749217588495383292078713230253993525326209732859301842016440189010027733234997657748351253359664018894197346327201303258
090754079801393874104215986193719394144148559622409051961205332355846077533183278890738832391535561074612724819789952480872328880408266970201766239451001690274739141595541572957753788951050043026811943691163688663710637928472363177936029259448725818579129920714382357882142208643606823754520733994646572586821541644398149238544337745998203264678454665487925173493921777764033537269522992103115842823750405588538846833724101543165897489915300004787110814394934465518176677482202804123781727309993329004830726928892557850582806559007396866888620985629055058474721708813614135721948922060211334334572381348586196886746758900465692833094336637178459072850215866106799456460266354416689624866015411034238864944123721969568161372557215009049887790769403406590484422511214573790761107726077762451440539965975955360773797196902546431341823788555069435728043202455375041817472821677779625286961992491729576392881089462100341878uwb
+    / 42uwb;
+constexpr unsigned _BitInt(16319) c
+  = 262772323820284473459352821003644139764422411204918486837801083183457749203973664525969244213356053746866592786123128016048873703760763864445114503188955456955707845772855989066509019294443022960334121996325949983760641247142204149139232137794443068332773889957035522194305750809271111954170469111770190707138471288264478300964320039624034636565586004311152732488771778750633811114778880597988580160502134204758516204130167934455175392270199736826994479523223887488609819475934329857306847460881835832251843478251106973279732948262052275644257699505034234355971659692999756814069746199415385028271937427604552452694831343609400239339863442175771021148001342538795308900643625203684755357388547418062925426243864734612746209878913555419878736641570225221679085911646547875018545464577373415267635167050327052540461729262689689973023792615829332644754020631915483439822012304455046590388687863476677106582400888258695751882270133355592985798459486903168566116933869906917828218475354926392234272233607129940335769903981971600517858890331250342237329544510764256814566282019040777844540893801961789123268871488227791986576892380104923938791704866048044372027912868520359825841599785417114170807870223388931011161719748522720320811145703270983059278809336716442271249901612983418413206535882717985866477493463706170671753161673938844141119218776382013036180674790251674465269642307327902615665909933158872905512486123491504175169187008138763888621316225940379555090163930685146452571795273177151730190907365145536386080045768561881185234343837026482568190685463450476530687199101655731545213024055527892355543331123801646920740920170836024409173000942382114507982743057738905942428815972332215822161005162124025696815718888433218512843696138793197099063690985358041680653942137749706271250646655360784441505334367960884910877260518796488043060864898940042147097262156826895049510698891917558183311555325743705729285921033441413668905528160312669220288936162529994523234178690669415796673063471613572
54079241809644500681547267163742601555111699376923690500014172294337681007418735910341792131377741308586228268385825579773985382339854821729670313925456724869607910114957040810377671394779834675225181536565444830551924417794139736686594557660483813045525089850285373756403594900392226296617656189774567019900237644329891280192776067340109751100025818473155267503490628146429306493520953677660612094758307190480072039980575323428994009982415676875786338343681850769724258724712947129844865182522700509869810541147515988955709784790248266593581532414091983670376426534289079098742549505127694160521110700035496658932724007621759500091227595477831200325335242614162624218010753586306794482732500765136299548052958345872488446969032973871418565484570096440609125401439516349061951073344772753817168731533186740449206533184858409824331269879276752302819075938894191764603880669059804914705202932220114574769307945938446355744093058483466098741029671133305308451601510124097336668044362140994842230895354232007936193610666215236351383330719496758577095102466235782700820575938453736277546445932135116947993404356975890051717304128693125699951445791328843668647245439797933691355015781238038148597339831348341049751957204680813855138272253234219030458164179195368888878989362640509486440530112337687890165646824152338885218611665567933423652236621168833497594762922586523151554244316284075364923316223457798336995440229801638249044555841786652868778333857626201712694823945146208412572567947403078655159448178467488335673853886982143607843369103504905837049147006413324087204923968347162406372146304110247436210704329838033967549296094708909042352807942165389054391217609084676765464997803900415653278041220586434133698802658726748950122980183615091029049242919298428066745937148593879994539254240070220900694662200741796632687373414952817000938093930497338259168439649970963774406833411431113922194082765390241161715106142638681072839764035976877223152727829248475639970029777900589595383604989099084081251
802305001465530685587689066710306032849298712531664047230963409638484129598076118133347670029704549206295184751171783054889490211218045322681317529569999778899567668829982207035948032411418382057247326141072264502161892285323531743728756335449414720326329614400327415751813608405440522389476951223717685562226240221655814783640319063683104993438443847695342093582440489676230855515734722099028773790309518629302472390856918840009781940193713784596688294176313226823907143925396584175086934911386332502448539920116580493698106175151294846382915609543814748269873022997601962804377576934064368480060369871027634248583037300264157126892396407333810094970488786868749240778818119777818968060847669660858189435863648299750130319878885182309492320093569553086644726783916663680961005542160003603514646606310756647257217877792590840884087816175376150368236330721380807047180835128240716072193739218623529235235449408073833764uwb
+    >> 171;
+static_assert (a == 10403542085759133691203342137159028259461894955438331210801665800234672962180907518788681055608925051917190662144445433835595489501570265148539013616306519011285861864113638610998587283343748668959870044400340187367869274012726759732348878437230149364081610941398977036594823591463255731808309715219781556045092524781748798096243155527048746090614751043610821560662864236720952557147844731917800712343725546175449104075627616077829385396994452199410766816558008090921987787438967590914249326913953731957899714113110918563882837045448642562338486517475793442626878243475178869958697311252767202125088496235928130685145568023992654921893286093433280015789621699281948053130963767216950901322064090115301029360256916486236324346980555378227825665231041206505932451054100655891377307183657244188881780309602697733965633806548575793711470844175477213922050584861112947113328821094578714380110663964395764964375008963336325761662071121014767368961020824065775639039724097407257977371623360602667242992626829630277589757195892131842788347638167481783472539736593840645020141666099662762763659119482517961624374850646183224354529879255694192077493038699570091875155722960929748259201284457182471153956119946261637096783796538046622701136421992223281799392319105563566498086105138357131671079600937329401554014025354725298453142629483842874038291307431207948198280389112036878226218928165845324560374437065373122000792930554833265840423016148390974876479752688661617125284208020330726704780298561478529279775092768807953202013307072084373090254748865483609183726295735240865516817482898554990450888147008484162850924835809973020042760450232447237837196378388135483084055028396408249214425019231777824054821326738728924661602608905318664721047678808734917923923121217803736039325080641571812479260200189082647677675380297657174607422686495562781202604884582727406463545308236800937463493199421020490845203940782000643133713413924683795888948837880891750307666957538835987772265423203470320
35414574284186979547279918615463138528857373012909422873337985543251481703142588458496225428399958685025040640668104719182054435234204666795014637429636465589191513531008252999490487456244155152708131163812176636766180791464709291728778401761311579569137381404108683872031696801034926377670277500977166273712460099270941863047012857961274813880798361769748750007950283953226647831778869968028339523030866861316819185255723412246929027776300025653153107176228096059741657645212457588500636349217131455102636923732511984414715497258261712763724042132378125212581931326849887204868306878922887098308630658611179300717869357056255497576238443123666448936047810969252018335604211279458975692203610202538088824608276391191562203757073696967785062170828190965207077645042211077228565992138341353272513710762151477095836158124047196854299729444640258484491817995688121997840577278571340204647190310340487135232427710908989164055898392215935947996406899492353849050050179882511623818838126733061802609316029020559666979598183484235227101106393963262392662996011392632602995214345235464061406104943893266546792844311323221449810177452317812902015501722880222190146954807223407333468105246132783226895592370110973287436098400249313002547075386196743249310239576627971781511313576381088621649177026572416088768888751528229344728712103954532377792828687671126704913554776077365584595062267632797228062234548625308462612124788589175745830897425946644128496776582456147835142105192308184259479161624968276859479641318474200750454038214177355609892946123384279797856646673424043603226912290805743831431941048957524484573932069376479868739894227531433336183856035827858376698321012608104602023146970583654461125207518773311256077812556022556580334995315188080060189038264821637573707701574468414213230386449408323768030689813403357075840113173581923773028020942423195412197015419557507072887665318792842391889421161709356709485792607969400395014296276348072890732240933895427749371183436342303230929686208137192306115
0409402403668284066920335645815769603890931600189625120845560771835017710222988445713995722670892970377791415975424998772977793133120924108755323766471601770964843725827421304729349535336212587039242582503381150992918495310760366078232133800372960134691178665615437284018675587037783965019497398984583781291648236566997741116811234934754542646608973862932050896956712947890625239848619289180051302224085308716715734850608995498117691600907423641124622236235949675965926735290984369155077055324647942699875972019355174794849379024365265476001505043957802797349447782453767742359446787304217770032967959809288342189111153359045680464231699344620995535326063943372491385550455978845273436611631962336651743357242055102619760848116407351488643448217122169718350824452317641509534606434395208225350712889271762643740106849245478364448395994915755050465135468245061369394410933866013068008514339549345174558881983866497072827311379042433413uwb);
+static_assert (b == 47927963254983398275573821596735710148688490586945302151656389894891544659129428967888410488282848368101861900787393734303255909595611040969035422441862500296655015996183400474078033076442208281022919339382663505873362486125052306726201934754009977462857545931535761634915082457969531364063028166780758999692064992454347878021515200637154689307943805765515434945644517436059064850821739923175860513488184741071305928775874657579331194145636134729035837432777505225398881945576787038414037353432531906221463764944822367521370140398750351920450032158498609394040097931192233762919615392739593496142319079251492975289355219161278102593077980694080934774807348815686269838231658663255453109747406854147841668747295898035046214722954204337096637695161230258009467203656917423590804239279208200992286134875490081064276216238601176771626719652470715951261404740555830904552686923119865477301873427026359632829140952933264973522266815070542033531976946547220197973086938491321120720753739960137399906970065546372022920105333218600697858250092770971284041999765371634305856374505354948168051485795619245710565177475544471205491166573508574008824290197617246557203404659741951935583377220245975415117684554899445620844502922298410099631370945492174513316818179053941295889751044787346934407150836832047822816077953368388724034918957631287532906408983549478253389828549312675591697063148899645182358568234280904393370464356625596517001437115695750865735696271243546529127281196748236370851643906557876213318702947945779409043920207097980173253604088090541910037502992188977212850308451093143517148397901877959716663055881990934822377000137739027330737305203072941381961798582398137407044371548508882948736515168678665314156055539763283978378697347560390812910312112592558243537758659944336321765948248602151244471507899974214561619241705438327522143175018570171179348707944798029574180941726592337226502723788420039623849392735910288582594856812800635227346505171247207005920245031905445152
23883210597020030815137180190010710761614323584711553699597828116523308375030752880874260556554000294114387482933620314650175025771392522447314485551886138769369610366952361799423237511161120110145929743974864738826745920081301367926634932873238343191479150224275280335181781391801985516720046712644395959621209541223001293778518062136890474049665922613930058497554039694096818913871363021262147545775742140789927383858341942185009413548927144246178186761296784028125996493895191939393844819317125199657635712365445792693917146881125940044399377910276665272750289560960050247218922683536623490495015689314267469837499232662899360796648520881143806420279769815327484583148797416950239660597980727433509803483610923642782885271125804814178605477832099410064366302955690257083789836787084476679283005279617175049318979990526749252114862510291100335341385194567046476449143659119485495379155979872340339454317225193159740823078324119348862643330839162267076659485471478249411437740316309929864035892814304933433042075734319544405063671020057469142587752686256630569446154270773303123266644310343098947201226826948742747356208023160113154824101829919061653358830317568120181339140908613193890237908395283372036068891294364879201401673702848709244388608738302966480144248443781959129325514267807798197575253533685580508253035624199895286534255077811935683991318836734478888286955521122936540730883397758082343244366276595439621649464503967597230400759067665061520222648151580936746496228695724301211648433792538267641839533248294367510050350781522036755231684311612094630344917721029963155548783110005007523697961096851197456154684465765235460083250390607755209709633679092165333430572216620597071007159901145205151094285815547734715517822239708324124060734998967979492471972630559110535755806855520022267779909943466318515177913646303305517544436565779484987263628066814197055367403242685975398962828035527997260805545733026959584284172696716603061738533813438140240482793627380394701988393657062861641475
55864933364363287875097138128425573909904433183795098670203800533548856219174579901097084123411402160448390274656216062207733804522678116007830485911118338137291415500040244636646228465275546613185451215477214924093897408659253897872331630294361379429268082112519489979283826532913282908147824847781517964779380824918394924322420104717839012960422523766744397106063463998218416521947089619846125464833145312281971994057275917591591279145274837283273569411904875883590818927011083766111368623876288661469697856984023924541117354584710728162060928747544449729071086406072820826707352705098469570212430005031769870770984490147544922541878582516496026055634218534739829767044431114272772863484628968800592047985977005687260574374332608765746965647976405949709304033414442630581488362251756922883517287565772653346189666094175256518980878632057889091042584644510374477219106080358138511257658994752983022904583136418485544787844335722425uwb);
+static_assert (c == 87791074236971898375693906050841211797859249085219857442103255912236679245196526258183737200195092459037070061326325721733862550642013557351987594406882625147809841117910427395663017848973163739949221920509632722884340603422885119715696976800265237608112255164300526997540446828188926798191319956002162809660627367323847324113616574443996958838650961034287596228138677355472599785293194368898640136872193905676042833180111007999534515209684412648660318139544886280584751143487292754141431589178747095995562471836958538385523210889734458760880425568104799106614493746619996750828111038144533532941948866129614927372632772715518890386107307604784595692561493219983504140230663638149893111097283117129890229962472801825879214491853539228859378776045004007387742400087099452897916050111777396577201816014535122598820045644624158286527149042897272352105372777213898166876433661452000011777121121975156955788874837929887554354013884561458544888805370883603979946432160148284956624602056864485481132298410979556139584409013754162565328645118522986963276115172332413247990709194912864261597887926317238337174515380434373640171852377431824028356700876831256026403188874515966503235287201281881985472704629716121576034879585267050059555804094416707718493880164380358501945858703270134092362367309142177220256553194722311416667902879556857136362746535655774542758385903508061686391652646764704409303516129925189046646477158058659410384237683768466978175431224095175917172922387459403459005304585514685192457678645317421021786288543765245133679832091869745757657072739737753868400812388038803350957408363865272082673118089735224503911890557398289369373596931672405246606249458569070420412573471920869840096409845093226225038902560463247683416326435464557790353760020616911131212342731649379841717742423277699156887425640494541631583181218185827647752680912924708890884455751080226880692716971982831514696454008705070066637993306617027027474432542204783110564072207496481031234354733815835208
73055218734115120978678440455896458852497569989966723235965608706826593607128847630137618509151255834742636438796285569873869967729341871213521030011427372987388572674228441333458857512226049283243347521457804912008781036966786374760325341492033297848368160903260470019067535330611645909560888797451907088389764190403007998305168673029446934012245138838180596098559442570696150011296218144186387024615885302290744905340666905921743970013779813332493771192048043297281423248489056841417013807670308191095732464221451376997270745468459702152796818222745730565721202663103043121160101459833683249558684459108862536961994308535039970814557821268170388745941980378838969910592895670554291811739768771829941043857819603751246957962236091154755893962038363120690483862423001038948620681611253867149296463690417828034303547922792249098522404751428960713875050463906134150846089705714470303918299012691600285355859412924847760497076978432722446602521825089097454542343354847347396045079587757210635356999268706465425788833311190517623061860675230010994127196459030322166751571656642321690787471906609473496034789643710478162255664092991251446787887635351852933826820719781733754578161073401362668109819113924252291125741395271474342305574536974918273938513597418963787308994593434191890687730302495910686072338836413159162281072263542758257699588089838677469397467899348065293581751035844389848387161847435160327276066603683131703246410409122832793376751512688745195564021646069245992363396468100513536211651450610523315211697125774638845313243973083536417692075962486918844667432144353019722959653638632948294049984266861870151255315023346724671430499257993958049088066160870545025276597975154855537620265690354041028742742755074396597631965320380782500944568424053420038357524917125099241334990032189526465838192972110970861380060986802081948044345526414857158569939005895236672306344348212805851269920711043891306875873016330601673973249327072503571873518366750575070091051288590764788630190966776854031578
939382690709022667421734442841784680826494146620589862829612704279521637740421694195051400095278084716974624615208392585573200182664157066813849346058321763156523965698465901396025152159642193562900743812715885811057212579017860488539960334406702752688595217360219470968738009774067915037157027492209108801337707562571266897723911401203374308490793226200974353356835311756384895692909802720948968131504604855466961987314701846460342135201914356152591684810924688350929140120187693089324255924634578576427004426339299493833434502951593902551451002292839635000904253250021884625417628756439862964325562720709528784964868687330847894476999577326582332350213148861205413652337499383416531545707272907994755638339630221576707954964236210962693804639714754668679841134928393081284209158098202683744650513918920168330598432362389777471870631039488408769354863001967531729415686631571754649uwb);
+#endif

	Jakub



* Re: [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int
  2023-09-28 14:03       ` [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int Jakub Jelinek
@ 2023-09-28 15:53         ` Aldy Hernandez
  2023-09-29  8:37           ` Jakub Jelinek
  2023-09-29  8:24         ` Jakub Jelinek
                           ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Aldy Hernandez @ 2023-09-28 15:53 UTC (permalink / raw)
  To: Jakub Jelinek, Richard Biener, Richard Sandiford, Andrew MacLeod
  Cc: gcc-patches



On 9/28/23 10:03, Jakub Jelinek wrote:
> Hi!
> 
> On Tue, Aug 29, 2023 at 05:09:52PM +0200, Jakub Jelinek via Gcc-patches wrote:
>> On Tue, Aug 29, 2023 at 11:42:48AM +0100, Richard Sandiford wrote:
>>>> I'll note tree-ssa-loop-niter.cc also uses GMP in some cases, widest_int
>>>> is really trying to be poor-mans GMP by limiting the maximum precision.
>>>
>>> I'd characterise widest_int as "a wide_int that is big enough to hold
>>> all supported integer types, without losing sign information".  It's
>>> not big enough to do arbitrary arithmetic without losing precision
>>> (in the way that GMP is).
>>>
>>> If the new limit on integer sizes is 65535 bits for all targets,
>>> then I think that means that widest_int needs to become a 65536-bit type.
>>> (But not with all bits represented all the time, of course.)
>>
>> If the widest_int storage would be dependent on the len rather than
>> precision for how it is stored, then I think we'd need a new method which
>> would be called at the start of filling the limbs where we'd tell how many
>> limbs there would be (i.e. what will set_len be called with later on), and
>> do nothing for all storages but the new widest_int_storage.
> 
> So, I've spent some time on this.  While in the patch wide_int is storage
> with a fixed or variable number of limbs (aka len) depending on precision
> (precision > WIDE_INT_MAX_PRECISION means a heap allocated limb array,
> otherwise it is inline), widest_int always has a very large precision
> (WIDEST_INT_MAX_PRECISION, currently defined to the INTEGER_CST imposed
> limitation of 255 64-bit limbs) but uses an inline array for lengths
> corresponding to up to WIDE_INT_MAX_PRECISION bits and, for larger ones,
> similarly to wide_int a heap allocated array of limbs.
> These changes make both wide_int and widest_int obviously non-POD: not
> trivially default constructible, copy constructible, destructible or
> copyable, and so not a good fit for GC and some vec operations.
> One common use of wide_int in GC structures was in dwarf2out.{h,cc}; but as
> large _BitInt constants don't appear in RTL, we really don't need such large
> precisions there.
> So, for wide_int the patch introduces rwide_int, restricted wide_int, which
> acts like the old wide_int (except that it is now trivially default
> constructible and has assertions precision isn't set above
> WIDE_INT_MAX_PRECISION).
> For widest_int, the nastiness is that, because it always has a huge
> precision of 16320 bits right now,
> a) we need to be told upfront in wide-int.h before calling the large
>     value internal functions in wide-int.cc how many elements we'll need for
>     the result (some reasonable upper estimate is fine)
> b) various of the wide-int.cc functions were lazy and assumed precision is
>     small enough and often used up to that many elements, which is
>     undesirable; so, it now tries to decrease that and use xi.len etc. based
>     estimates instead if possible (sometimes only if precision is above
>     WIDE_INT_MAX_PRECISION)
> c) with the higher precision, behavior changes for lrshift (-1, 2) etc. or
>     unsigned division with dividend having most significant bit set in
>     widest_int - while such values were considered to be above or equal to
>     1 << (WIDE_INT_MAX_PRECISION - 2), now they are with
>     WIDEST_INT_MAX_PRECISION and so much larger; but lrshift on widest_int
>     is I think only done in ccp and I'd strongly hope that we treat the
>     values as unsigned and so usually much smaller length; so it is just
>     when we call wi::lrshift (-1, 2) or similar that results change.
> I've noticed that for wide_int or widest_int references even simple
> operations like eq_p liked to allocate and immediately free huge buffers,
> which was caused by wide_int doing allocation on creation with a particular
> precision and e.g. get_binary_precision running into that.  So, I've
> duplicated that to avoid the allocations when all we need is just a
> precision.
> 
> The patch below doesn't actually build anymore since the vec.h asserts
> (which point to useful stuff though), so temporarily I've applied it also
> with
> --- gcc/vec.h.xx	2023-09-28 12:56:09.055786055 +0200
> +++ gcc/vec.h	2023-09-28 13:15:31.760487111 +0200
> @@ -1197,7 +1197,7 @@ template<typename T, typename A>
>   inline void
>   vec<T, A, vl_embed>::qsort (int (*cmp) (const void *, const void *))
>   {
> -  static_assert (vec_detail::is_trivially_copyable_or_pair <T>::value, "");
> +//  static_assert (vec_detail::is_trivially_copyable_or_pair <T>::value, "");
>     if (length () > 1)
>       gcc_qsort (address (), length (), sizeof (T), cmp);
>   }
> @@ -1422,7 +1422,7 @@ template<typename T>
>   void
>   gt_ggc_mx (vec<T, va_gc> *v)
>   {
> -  static_assert (std::is_trivially_destructible <T>::value, "");
> +//  static_assert (std::is_trivially_destructible <T>::value, "");
>     extern void gt_ggc_mx (T &);
>     for (unsigned i = 0; i < v->length (); i++)
>       gt_ggc_mx ((*v)[i]);
> hack.  The two spots that trigger are tree-ssa-loop-niter.cc doing qsort on a
> widest_int vector (to be exact, swapping elements in a vector of widest_int
> or wide_int by memcpy would actually work; the reason the types have a
> non-trivial destructor and copy assignment/copy constructor is to make sure
> distinct objects have (if needed) distinct heap allocations and that those
> are freed in the end, and bitwise memcpy swapping preserves that), and
> omp_general.cc using two widest_int members in a GC structure.  For some
> reason, a more important problem isn't diagnosed: the loop and nb_iter_bound
> structs (also GC) have widest_int members (the first one two, the second just
> one).  And then there is e.g. another issue with slsr, which allocates
> structs containing widest_int in obstack, not expecting it would need to
> construct those (and where to destruct them).  Also, ipa_bits contains
> 2 widest_int members in GC allocated structure.  Actually the reason
> is quite obvious, my assert has been added just for GC vec of non-trivially
> destructible types; neither loop nor ipa_bits is used in vectors.  I bet
> we should make wide_int_storage and widest_int_storage GTY ((user)) and
> just declare but not define the handlers, or something similar.
> 
> And, now the question is what to do about this.  I guess for omp_general
> I could just use generic_wide_int <fixed_wide_int_storage <1024> > or
> something similar, after all the widest_int wasn't really great when it
> had maximum precision of WIDE_INT_MAX_PRECISION, different values on
> different targets, it has very few uses and is easy to change (thinking
> about this, makes me wonder what we do for offloading if offload host
> has different WIDE_INT_MAX_PRECISION from offload target).
> 
> But the more important question is what to do about loop/niters analysis.
> I think for number of iteration analysis it might be ok to punt somehow
> (if there is a way to tell that number of iterations is unknown) if we
> get some bound which is too large to be expressible in some reasonably small
> fixed precision (whether it is WIDE_INT_MAX_PRECISION, or something
> different is a question).  We could either introduce yet another widest_int
> like storage which would have still WIDEST_INT_MAX_PRECISION precision, but
> would ICE if length is set to something above its fixed width.  One problem
> is that the write_val estimations are often just conservatively larger and
> could trigger even if the value fits in the end.  Or we could use
> generic_wide_int <fixed_wide_int_storage <WIDE_INT_MAX_PRECISION> > (perhaps
> call that rwidest_int), the drawback would be that it would be slightly harder
> to use as it has different precision from widest_int, we'd need to do some
> conversion from it or the like.  Plus I really don't know the niters code to know
> how to punt.
> 
> ipa_bits is even worse, because unlike niter analysis, I think it is very
> much desirable to support IPA VRP of all supported _BitInt sizes.  Shall
> we perhaps use trailing_wide_int storage in there, or conditionally
> rwidest_int vs. INTEGER_CSTs for stuff that doesn't fit, something else?

BTW, we already track value/mask pairs in the irange, so I think 
ipa_bits should ultimately disappear.  Doing so would probably simplify 
the code base.

The value-range* changes look reasonable to me.

Aldy

> 
> What about slsr?  This is after bitint lowering, so it shouldn't be
> performing opts on larger BITINT_TYPEs and so could also go with the
> rwidest_int.
> 
> With the above vec.h hack the testcase below (short in number of lines,
> though otherwise large, as each 16319-bit decimal constant is huge) works,
> but even make check-gcc RUNTESTFLAGS=dg.exp=bitint* (i.e. the compile only
> tests) show some ICEs, some of them due to widest_int in loop, others in
> slsr, others to be debugged.
> 
> As for the qsort in niters, if we change niters to use some rwidest_int,
> either fixed or something new, then the sorting problem could go away.
> Another option would be to rename vec_detail::is_trivially_copyable_or_pair
> trait to say vec_detail::is_qsort_sortable and allow code to amend that
> trait on a type by type basis when needed after analysing it works correctly
> for some further type (like wide_int or widest_int).  But I am not sure it
> would work if widest-int.h is included before vec.h etc.
> 
> Your thoughts on all of this?
> 
> --- gcc/wide-int.h.jj	2023-09-27 10:37:39.456836804 +0200
> +++ gcc/wide-int.h	2023-09-28 14:55:40.059632413 +0200
> @@ -27,7 +27,7 @@ along with GCC; see the file COPYING3.
>      other longer storage GCC representations (rtl and tree).
>   
>      The actual precision of a wide_int depends on the flavor.  There
> -   are three predefined flavors:
> +   are four predefined flavors:
>   
>        1) wide_int (the default).  This flavor does the math in the
>        precision of its input arguments.  It is assumed (and checked)
> @@ -53,6 +53,10 @@ along with GCC; see the file COPYING3.
>        multiply, division, shifts, comparisons, and operations that need
>        overflow detected), the signedness must be specified separately.
>   
> +     For precisions up to WIDE_INT_MAX_PRECISION, it uses an inline
> +     buffer in the type, for larger precisions up to WIDEST_INT_MAX_PRECISION
> +     it uses a pointer to heap allocated buffer.
> +
>        2) offset_int.  This is a fixed-precision integer that can hold
>        any address offset, measured in either bits or bytes, with at
>        least one extra sign bit.  At the moment the maximum address
> @@ -76,11 +80,15 @@ along with GCC; see the file COPYING3.
>          wi::leu_p (a, b) as a more efficient short-hand for
>          "a >= 0 && a <= b". ]
>   
> +     3) rwide_int.  Restricted wide_int.  This is similar to
> +     wide_int, but maximum possible precision is WIDE_INT_MAX_PRECISION
> +     and it always uses an inline buffer.  offset_int and rwide_int are
> +     GC-friendly, wide_int and widest_int are not.
> +
>        3) widest_int.  This representation is an approximation of
>        infinite precision math.  However, it is not really infinite
>        precision math as in the GMP library.  It is really finite
> -     precision math where the precision is 4 times the size of the
> -     largest integer that the target port can represent.
> +     precision math where the precision is WIDEST_INT_MAX_PRECISION.
>   
>        Like offset_int, widest_int is wider than all the values that
>        it needs to represent, so the integers are logically signed.
> @@ -242,6 +250,13 @@ along with GCC; see the file COPYING3.
>   
>   #define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
>   
> +/* Precision of widest_int and largest _BitInt precision + 1 we can
> +   support.  */
> +#define WIDEST_INT_MAX_ELTS 255
> +#define WIDEST_INT_MAX_PRECISION (WIDEST_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
> +
> +STATIC_ASSERT (WIDE_INT_MAX_ELTS < WIDEST_INT_MAX_ELTS);
> +
>   /* This is the max size of any pointer on any machine.  It does not
>      seem to be as easy to sniff this out of the machine description as
>      it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
> @@ -307,17 +322,19 @@ along with GCC; see the file COPYING3.
>   #define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
>     WI_BINARY_RESULT (T1, T2) RESULT = \
>       wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
> -  HOST_WIDE_INT *VAL = RESULT.write_val ()
> +  HOST_WIDE_INT *VAL = RESULT.write_val (0)
>   
>   /* Similar for the result of a unary operation on X, which has type T.  */
>   #define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
>     WI_UNARY_RESULT (T) RESULT = \
>       wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
> -  HOST_WIDE_INT *VAL = RESULT.write_val ()
> +  HOST_WIDE_INT *VAL = RESULT.write_val (0)
>   
>   template <typename T> class generic_wide_int;
>   template <int N> class fixed_wide_int_storage;
>   class wide_int_storage;
> +class rwide_int_storage;
> +template <int N> class widest_int_storage;
>   
>   /* An N-bit integer.  Until we can use typedef templates, use this instead.  */
>   #define FIXED_WIDE_INT(N) \
> @@ -325,10 +342,9 @@ class wide_int_storage;
>   
>   typedef generic_wide_int <wide_int_storage> wide_int;
>   typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
> -typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int;
> -/* Spelled out explicitly (rather than through FIXED_WIDE_INT)
> -   so as not to confuse gengtype.  */
> -typedef generic_wide_int < fixed_wide_int_storage <WIDE_INT_MAX_PRECISION * 2> > widest2_int;
> +typedef generic_wide_int <rwide_int_storage> rwide_int;
> +typedef generic_wide_int <widest_int_storage <WIDE_INT_MAX_PRECISION> > widest_int;
> +typedef generic_wide_int <widest_int_storage <WIDE_INT_MAX_PRECISION * 2> > widest2_int;
>   
>   /* wi::storage_ref can be a reference to a primitive type,
>      so this is the conservatively-correct setting.  */
> @@ -380,7 +396,11 @@ namespace wi
>   
>       /* The integer has a constant precision (known at GCC compile time)
>          and is signed.  */
> -    CONST_PRECISION
> +    CONST_PRECISION,
> +
> +    /* Like CONST_PRECISION, but with WIDEST_INT_MAX_PRECISION or larger
> +       precision where not all elements of arrays are always present.  */
> +    WIDEST_CONST_PRECISION
>     };
>   
>     /* This class, which has no default implementation, is expected to
> @@ -390,9 +410,15 @@ namespace wi
>          Classifies the type of T.
>   
>        static const unsigned int precision;
> -       Only defined if precision_type == CONST_PRECISION.  Specifies the
> +       Only defined if precision_type == CONST_PRECISION or
> +       precision_type == WIDEST_CONST_PRECISION.  Specifies the
>          precision of all integers of type T.
>   
> +     static const unsigned int inl_precision;
> +       Only defined if precision_type == WIDEST_CONST_PRECISION.
> +       Specifies precision which is represented in the inline
> +       arrays.
> +
>        static const bool host_dependent_precision;
>          True if the precision of T depends (or can depend) on the host.
>   
> @@ -415,9 +441,10 @@ namespace wi
>     struct binary_traits;
>   
>     /* Specify the result type for each supported combination of binary
> -     inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
> -     mixed, in order to give stronger type checking.  When both inputs
> -     are CONST_PRECISION, they must have the same precision.  */
> +     inputs.  Note that CONST_PRECISION, WIDEST_CONST_PRECISION and
> +     VAR_PRECISION cannot be mixed, in order to give stronger type
> +     checking.  When both inputs are CONST_PRECISION or both are
> +     WIDEST_CONST_PRECISION, they must have the same precision.  */
>     template <typename T1, typename T2>
>     struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
>     {
> @@ -447,6 +474,17 @@ namespace wi
>     };
>   
>     template <typename T1, typename T2>
> +  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, WIDEST_CONST_PRECISION>
> +  {
> +    typedef generic_wide_int < widest_int_storage
> +			       <int_traits <T2>::inl_precision> > result_type;
> +    typedef result_type operator_result;
> +    typedef bool predicate_result;
> +    typedef result_type signed_shift_result_type;
> +    typedef bool signed_predicate_result;
> +  };
> +
> +  template <typename T1, typename T2>
>     struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
>     {
>       typedef wide_int result_type;
> @@ -468,6 +506,17 @@ namespace wi
>     };
>   
>     template <typename T1, typename T2>
> +  struct binary_traits <T1, T2, WIDEST_CONST_PRECISION, FLEXIBLE_PRECISION>
> +  {
> +    typedef generic_wide_int < widest_int_storage
> +			       <int_traits <T1>::inl_precision> > result_type;
> +    typedef result_type operator_result;
> +    typedef bool predicate_result;
> +    typedef result_type signed_shift_result_type;
> +    typedef bool signed_predicate_result;
> +  };
> +
> +  template <typename T1, typename T2>
>     struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
>     {
>       STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
> @@ -482,6 +531,18 @@ namespace wi
>     };
>   
>     template <typename T1, typename T2>
> +  struct binary_traits <T1, T2, WIDEST_CONST_PRECISION, WIDEST_CONST_PRECISION>
> +  {
> +    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
> +    typedef generic_wide_int < widest_int_storage
> +			       <int_traits <T1>::inl_precision> > result_type;
> +    typedef result_type operator_result;
> +    typedef bool predicate_result;
> +    typedef result_type signed_shift_result_type;
> +    typedef bool signed_predicate_result;
> +  };
> +
> +  template <typename T1, typename T2>
>     struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
>     {
>       typedef wide_int result_type;
> @@ -709,8 +770,10 @@ wi::storage_ref::get_val () const
>      Although not required by generic_wide_int itself, writable storage
>      classes can also provide the following functions:
>   
> -   HOST_WIDE_INT *write_val ()
> -     Get a modifiable version of get_val ()
> +   HOST_WIDE_INT *write_val (unsigned int)
> +     Get a modifiable version of get_val ().  The argument should be
> +     upper estimation for LEN (ignored by all storages but
> +     widest_int_storage).
>   
>      unsigned int set_len (unsigned int len)
>        Set the value returned by get_len () to LEN.  */
> @@ -777,6 +840,8 @@ public:
>   
>     static const bool is_sign_extended
>       = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
> +  static const bool needs_write_val_arg
> +    = wi::int_traits <generic_wide_int <storage> >::needs_write_val_arg;
>   };
>   
>   template <typename storage>
> @@ -1049,6 +1114,7 @@ namespace wi
>       static const enum precision_type precision_type = VAR_PRECISION;
>       static const bool host_dependent_precision = HDP;
>       static const bool is_sign_extended = SE;
> +    static const bool needs_write_val_arg = false;
>     };
>   }
>   
> @@ -1065,7 +1131,11 @@ namespace wi
>   class GTY(()) wide_int_storage
>   {
>   private:
> -  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
> +  union
> +  {
> +    HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
> +    HOST_WIDE_INT *valp;
> +  } GTY((skip)) u;
>     unsigned int len;
>     unsigned int precision;
>   
> @@ -1073,14 +1143,17 @@ public:
>     wide_int_storage ();
>     template <typename T>
>     wide_int_storage (const T &);
> +  wide_int_storage (const wide_int_storage &);
> +  ~wide_int_storage ();
>   
>     /* The standard generic_wide_int storage methods.  */
>     unsigned int get_precision () const;
>     const HOST_WIDE_INT *get_val () const;
>     unsigned int get_len () const;
> -  HOST_WIDE_INT *write_val ();
> +  HOST_WIDE_INT *write_val (unsigned int);
>     void set_len (unsigned int, bool = false);
>   
> +  wide_int_storage &operator = (const wide_int_storage &);
>     template <typename T>
>     wide_int_storage &operator = (const T &);
>   
> @@ -1099,12 +1172,15 @@ namespace wi
>       /* Guaranteed by a static assert in the wide_int_storage constructor.  */
>       static const bool host_dependent_precision = false;
>       static const bool is_sign_extended = true;
> +    static const bool needs_write_val_arg = false;
>       template <typename T1, typename T2>
>       static wide_int get_binary_result (const T1 &, const T2 &);
> +    template <typename T1, typename T2>
> +    static unsigned int get_binary_precision (const T1 &, const T2 &);
>     };
>   }
>   
> -inline wide_int_storage::wide_int_storage () {}
> +inline wide_int_storage::wide_int_storage () : precision (0) {}
>   
>   /* Initialize the storage from integer X, in its natural precision.
>      Note that we do not allow integers with host-dependent precision
> @@ -1113,21 +1189,75 @@ inline wide_int_storage::wide_int_storag
>   template <typename T>
>   inline wide_int_storage::wide_int_storage (const T &x)
>   {
> -  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
> -  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
> +  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
> +  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
> +  STATIC_ASSERT (wi::int_traits<T>::precision_type
> +		 != wi::WIDEST_CONST_PRECISION);
>     WIDE_INT_REF_FOR (T) xi (x);
>     precision = xi.precision;
> +  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
> +    u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
>     wi::copy (*this, xi);
>   }
>   
> +inline wide_int_storage::wide_int_storage (const wide_int_storage &x)
> +{
> +  len = x.len;
> +  precision = x.precision;
> +  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
> +    {
> +      u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
> +      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
> +    }
> +  else
> +    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
> +}
> +
> +inline wide_int_storage::~wide_int_storage ()
> +{
> +  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
> +    XDELETEVEC (u.valp);
> +}
> +
> +inline wide_int_storage&
> +wide_int_storage::operator = (const wide_int_storage &x)
> +{
> +  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
> +    {
> +      if (this == &x)
> +	return *this;
> +      XDELETEVEC (u.valp);
> +    }
> +  len = x.len;
> +  precision = x.precision;
> +  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
> +    {
> +      u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
> +      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
> +    }
> +  else
> +    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
> +  return *this;
> +}
> +
>   template <typename T>
>   inline wide_int_storage&
>   wide_int_storage::operator = (const T &x)
>   {
> -  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
> -  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
> +  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
> +  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
> +  STATIC_ASSERT (wi::int_traits<T>::precision_type
> +		 != wi::WIDEST_CONST_PRECISION);
>     WIDE_INT_REF_FOR (T) xi (x);
> -  precision = xi.precision;
> +  if (UNLIKELY (precision != xi.precision))
> +    {
> +      if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
> +	XDELETEVEC (u.valp);
> +      precision = xi.precision;
> +      if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
> +	u.valp = XNEWVEC (HOST_WIDE_INT,
> +			  CEIL (precision, HOST_BITS_PER_WIDE_INT));
> +    }
>     wi::copy (*this, xi);
>     return *this;
>   }
> @@ -1141,7 +1271,7 @@ wide_int_storage::get_precision () const
>   inline const HOST_WIDE_INT *
>   wide_int_storage::get_val () const
>   {
> -  return val;
> +  return UNLIKELY (precision > WIDE_INT_MAX_PRECISION) ? u.valp : u.val;
>   }
>   
>   inline unsigned int
> @@ -1151,9 +1281,9 @@ wide_int_storage::get_len () const
>   }
>   
>   inline HOST_WIDE_INT *
> -wide_int_storage::write_val ()
> +wide_int_storage::write_val (unsigned int)
>   {
> -  return val;
> +  return UNLIKELY (precision > WIDE_INT_MAX_PRECISION) ? u.valp : u.val;
>   }
>   
>   inline void
> @@ -1161,8 +1291,10 @@ wide_int_storage::set_len (unsigned int
>   {
>     len = l;
>     if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
> -    val[len - 1] = sext_hwi (val[len - 1],
> -			     precision % HOST_BITS_PER_WIDE_INT);
> +    {
> +      HOST_WIDE_INT &v = write_val (len)[len - 1];
> +      v = sext_hwi (v, precision % HOST_BITS_PER_WIDE_INT);
> +    }
>   }
>   
>   /* Treat X as having signedness SGN and convert it to a PRECISION-bit
> @@ -1172,7 +1304,7 @@ wide_int_storage::from (const wide_int_r
>   			signop sgn)
>   {
>     wide_int result = wide_int::create (precision);
> -  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
> +  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
>   				     x.precision, precision, sgn));
>     return result;
>   }
> @@ -1185,7 +1317,7 @@ wide_int_storage::from_array (const HOST
>   			      unsigned int precision, bool need_canon_p)
>   {
>     wide_int result = wide_int::create (precision);
> -  result.set_len (wi::from_array (result.write_val (), val, len, precision,
> +  result.set_len (wi::from_array (result.write_val (len), val, len, precision,
>   				  need_canon_p));
>     return result;
>   }
> @@ -1196,6 +1328,9 @@ wide_int_storage::create (unsigned int p
>   {
>     wide_int x;
>     x.precision = precision;
> +  if (UNLIKELY (precision > WIDE_INT_MAX_PRECISION))
> +    x.u.valp = XNEWVEC (HOST_WIDE_INT,
> +			CEIL (precision, HOST_BITS_PER_WIDE_INT));
>     return x;
>   }
>   
> @@ -1212,6 +1347,194 @@ wi::int_traits <wide_int_storage>::get_b
>       return wide_int::create (wi::get_precision (x));
>   }
>   
> +template <typename T1, typename T2>
> +inline unsigned int
> +wi::int_traits <wide_int_storage>::get_binary_precision (const T1 &x,
> +							 const T2 &y)
> +{
> +  /* This shouldn't be used for two flexible-precision inputs.  */
> +  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
> +		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
> +  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
> +    return wi::get_precision (y);
> +  else
> +    return wi::get_precision (x);
> +}
> +
> +/* The storage used by rwide_int.  */
> +class GTY(()) rwide_int_storage
> +{
> +private:
> +  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
> +  unsigned int len;
> +  unsigned int precision;
> +
> +public:
> +  rwide_int_storage () = default;
> +  template <typename T>
> +  rwide_int_storage (const T &);
> +
> +  /* The standard generic_rwide_int storage methods.  */
> +  unsigned int get_precision () const;
> +  const HOST_WIDE_INT *get_val () const;
> +  unsigned int get_len () const;
> +  HOST_WIDE_INT *write_val (unsigned int);
> +  void set_len (unsigned int, bool = false);
> +
> +  template <typename T>
> +  rwide_int_storage &operator = (const T &);
> +
> +  static rwide_int from (const wide_int_ref &, unsigned int, signop);
> +  static rwide_int from_array (const HOST_WIDE_INT *, unsigned int,
> +			       unsigned int, bool = true);
> +  static rwide_int create (unsigned int);
> +};
> +
> +namespace wi
> +{
> +  template <>
> +  struct int_traits <rwide_int_storage>
> +  {
> +    static const enum precision_type precision_type = VAR_PRECISION;
> +    /* Guaranteed by a static assert in the rwide_int_storage constructor.  */
> +    static const bool host_dependent_precision = false;
> +    static const bool is_sign_extended = true;
> +    static const bool needs_write_val_arg = false;
> +    template <typename T1, typename T2>
> +    static rwide_int get_binary_result (const T1 &, const T2 &);
> +    template <typename T1, typename T2>
> +    static unsigned int get_binary_precision (const T1 &, const T2 &);
> +  };
> +}
> +
> +/* Initialize the storage from integer X, in its natural precision.
> +   Note that we do not allow integers with host-dependent precision
> +   to become rwide_ints; rwide_ints must always be logically independent
> +   of the host.  */
> +template <typename T>
> +inline rwide_int_storage::rwide_int_storage (const T &x)
> +{
> +  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
> +  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
> +  STATIC_ASSERT (wi::int_traits<T>::precision_type
> +		 != wi::WIDEST_CONST_PRECISION);
> +  WIDE_INT_REF_FOR (T) xi (x);
> +  precision = xi.precision;
> +  gcc_assert (precision <= WIDE_INT_MAX_PRECISION);
> +  wi::copy (*this, xi);
> +}
> +
> +template <typename T>
> +inline rwide_int_storage&
> +rwide_int_storage::operator = (const T &x)
> +{
> +  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
> +  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
> +  STATIC_ASSERT (wi::int_traits<T>::precision_type
> +		 != wi::WIDEST_CONST_PRECISION);
> +  WIDE_INT_REF_FOR (T) xi (x);
> +  precision = xi.precision;
> +  gcc_assert (precision <= WIDE_INT_MAX_PRECISION);
> +  wi::copy (*this, xi);
> +  return *this;
> +}
> +
> +inline unsigned int
> +rwide_int_storage::get_precision () const
> +{
> +  return precision;
> +}
> +
> +inline const HOST_WIDE_INT *
> +rwide_int_storage::get_val () const
> +{
> +  return val;
> +}
> +
> +inline unsigned int
> +rwide_int_storage::get_len () const
> +{
> +  return len;
> +}
> +
> +inline HOST_WIDE_INT *
> +rwide_int_storage::write_val (unsigned int)
> +{
> +  return val;
> +}
> +
> +inline void
> +rwide_int_storage::set_len (unsigned int l, bool is_sign_extended)
> +{
> +  len = l;
> +  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
> +    val[len - 1] = sext_hwi (val[len - 1],
> +			     precision % HOST_BITS_PER_WIDE_INT);
> +}
> +
> +/* Treat X as having signedness SGN and convert it to a PRECISION-bit
> +   number.  */
> +inline rwide_int
> +rwide_int_storage::from (const wide_int_ref &x, unsigned int precision,
> +			 signop sgn)
> +{
> +  rwide_int result = rwide_int::create (precision);
> +  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
> +				     x.precision, precision, sgn));
> +  return result;
> +}
> +
> +/* Create a rwide_int from the explicit block encoding given by VAL and
> +   LEN.  PRECISION is the precision of the integer.  NEED_CANON_P is
> +   true if the encoding may have redundant trailing blocks.  */
> +inline rwide_int
> +rwide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
> +			       unsigned int precision, bool need_canon_p)
> +{
> +  rwide_int result = rwide_int::create (precision);
> +  result.set_len (wi::from_array (result.write_val (len), val, len, precision,
> +				  need_canon_p));
> +  return result;
> +}
> +
> +/* Return an uninitialized rwide_int with precision PRECISION.  */
> +inline rwide_int
> +rwide_int_storage::create (unsigned int precision)
> +{
> +  rwide_int x;
> +  gcc_assert (precision <= WIDE_INT_MAX_PRECISION);
> +  x.precision = precision;
> +  return x;
> +}
> +
> +template <typename T1, typename T2>
> +inline rwide_int
> +wi::int_traits <rwide_int_storage>::get_binary_result (const T1 &x,
> +						       const T2 &y)
> +{
> +  /* This shouldn't be used for two flexible-precision inputs.  */
> +  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
> +		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
> +  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
> +    return rwide_int::create (wi::get_precision (y));
> +  else
> +    return rwide_int::create (wi::get_precision (x));
> +}
> +
> +template <typename T1, typename T2>
> +inline unsigned int
> +wi::int_traits <rwide_int_storage>::get_binary_precision (const T1 &x,
> +							  const T2 &y)
> +{
> +  /* This shouldn't be used for two flexible-precision inputs.  */
> +  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
> +		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
> +  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
> +    return wi::get_precision (y);
> +  else
> +    return wi::get_precision (x);
> +}
> +
>   /* The storage used by FIXED_WIDE_INT (N).  */
>   template <int N>
>   class GTY(()) fixed_wide_int_storage
> @@ -1221,7 +1544,7 @@ private:
>     unsigned int len;
>   
>   public:
> -  fixed_wide_int_storage ();
> +  fixed_wide_int_storage () = default;
>     template <typename T>
>     fixed_wide_int_storage (const T &);
>   
> @@ -1229,7 +1552,7 @@ public:
>     unsigned int get_precision () const;
>     const HOST_WIDE_INT *get_val () const;
>     unsigned int get_len () const;
> -  HOST_WIDE_INT *write_val ();
> +  HOST_WIDE_INT *write_val (unsigned int);
>     void set_len (unsigned int, bool = false);
>   
>     static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
> @@ -1245,15 +1568,15 @@ namespace wi
>       static const enum precision_type precision_type = CONST_PRECISION;
>       static const bool host_dependent_precision = false;
>       static const bool is_sign_extended = true;
> +    static const bool needs_write_val_arg = false;
>       static const unsigned int precision = N;
>       template <typename T1, typename T2>
>       static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
> +    template <typename T1, typename T2>
> +    static unsigned int get_binary_precision (const T1 &, const T2 &);
>     };
>   }
>   
> -template <int N>
> -inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}
> -
>   /* Initialize the storage from integer X, in precision N.  */
>   template <int N>
>   template <typename T>
> @@ -1288,7 +1611,7 @@ fixed_wide_int_storage <N>::get_len () c
>   
>   template <int N>
>   inline HOST_WIDE_INT *
> -fixed_wide_int_storage <N>::write_val ()
> +fixed_wide_int_storage <N>::write_val (unsigned int)
>   {
>     return val;
>   }
> @@ -1308,7 +1631,7 @@ inline FIXED_WIDE_INT (N)
>   fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
>   {
>     FIXED_WIDE_INT (N) result;
> -  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
> +  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
>   				     x.precision, N, sgn));
>     return result;
>   }
> @@ -1323,7 +1646,7 @@ fixed_wide_int_storage <N>::from_array (
>   					bool need_canon_p)
>   {
>     FIXED_WIDE_INT (N) result;
> -  result.set_len (wi::from_array (result.write_val (), val, len,
> +  result.set_len (wi::from_array (result.write_val (len), val, len,
>   				  N, need_canon_p));
>     return result;
>   }
> @@ -1337,6 +1660,244 @@ get_binary_result (const T1 &, const T2
>     return FIXED_WIDE_INT (N) ();
>   }
>   
> +template <int N>
> +template <typename T1, typename T2>
> +inline unsigned int
> +wi::int_traits < fixed_wide_int_storage <N> >::
> +get_binary_precision (const T1 &, const T2 &)
> +{
> +  return N;
> +}
> +
> +#define WIDEST_INT(N) generic_wide_int < widest_int_storage <N> >
> +
> +/* The storage used by widest_int.  */
> +template <int N>
> +class GTY(()) widest_int_storage
> +{
> +private:
> +  union
> +  {
> +    HOST_WIDE_INT val[WIDE_INT_MAX_HWIS (N)];
> +    HOST_WIDE_INT *valp;
> +  } GTY((skip)) u;
> +  unsigned int len;
> +
> +public:
> +  widest_int_storage ();
> +  widest_int_storage (const widest_int_storage &);
> +  template <typename T>
> +  widest_int_storage (const T &);
> +  ~widest_int_storage ();
> +  widest_int_storage &operator = (const widest_int_storage &);
> +  template <typename T>
> +  inline widest_int_storage& operator = (const T &);
> +
> +  /* The standard generic_wide_int storage methods.  */
> +  unsigned int get_precision () const;
> +  const HOST_WIDE_INT *get_val () const;
> +  unsigned int get_len () const;
> +  HOST_WIDE_INT *write_val (unsigned int);
> +  void set_len (unsigned int, bool = false);
> +
> +  static WIDEST_INT (N) from (const wide_int_ref &, signop);
> +  static WIDEST_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
> +				    bool = true);
> +};
> +
> +namespace wi
> +{
> +  template <int N>
> +  struct int_traits < widest_int_storage <N> >
> +  {
> +    static const enum precision_type precision_type = WIDEST_CONST_PRECISION;
> +    static const bool host_dependent_precision = false;
> +    static const bool is_sign_extended = true;
> +    static const bool needs_write_val_arg = true;
> +    static const unsigned int precision
> +      = N / WIDE_INT_MAX_PRECISION * WIDEST_INT_MAX_PRECISION;
> +    static const unsigned int inl_precision = N;
> +    template <typename T1, typename T2>
> +    static WIDEST_INT (N) get_binary_result (const T1 &, const T2 &);
> +    template <typename T1, typename T2>
> +    static unsigned int get_binary_precision (const T1 &, const T2 &);
> +  };
> +}
> +
> +template <int N>
> +inline widest_int_storage <N>::widest_int_storage () : len (0) {}
> +
> +/* Initialize the storage from integer X, in precision N.  */
> +template <int N>
> +template <typename T>
> +inline widest_int_storage <N>::widest_int_storage (const T &x) : len (0)
> +{
> +  /* Check for type compatibility.  We don't want to initialize a
> +     widest integer from something like a wide_int.  */
> +  WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
> +  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N / WIDE_INT_MAX_PRECISION
> +					    * WIDEST_INT_MAX_PRECISION));
> +}
> +
> +template <int N>
> +inline
> +widest_int_storage <N>::widest_int_storage (const widest_int_storage <N> &x)
> +{
> +  len = x.len;
> +  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
> +    {
> +      u.valp = XNEWVEC (HOST_WIDE_INT, len);
> +      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
> +    }
> +  else
> +    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
> +}
> +
> +template <int N>
> +inline widest_int_storage <N>::~widest_int_storage ()
> +{
> +  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
> +    XDELETEVEC (u.valp);
> +}
> +
> +template <int N>
> +inline widest_int_storage <N>&
> +widest_int_storage <N>::operator = (const widest_int_storage <N> &x)
> +{
> +  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
> +    {
> +      if (this == &x)
> +	return *this;
> +      XDELETEVEC (u.valp);
> +    }
> +  len = x.len;
> +  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
> +    {
> +      u.valp = XNEWVEC (HOST_WIDE_INT, len);
> +      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
> +    }
> +  else
> +    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
> +  return *this;
> +}
> +
> +template <int N>
> +template <typename T>
> +inline widest_int_storage <N>&
> +widest_int_storage <N>::operator = (const T &x)
> +{
> +  /* Check for type compatibility.  We don't want to assign a
> +     widest integer from something like a wide_int.  */
> +  WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
> +  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
> +    XDELETEVEC (u.valp);
> +  len = 0;
> +  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N / WIDE_INT_MAX_PRECISION
> +					    * WIDEST_INT_MAX_PRECISION));
> +  return *this;
> +}
> +
> +template <int N>
> +inline unsigned int
> +widest_int_storage <N>::get_precision () const
> +{
> +  return N / WIDE_INT_MAX_PRECISION * WIDEST_INT_MAX_PRECISION;
> +}
> +
> +template <int N>
> +inline const HOST_WIDE_INT *
> +widest_int_storage <N>::get_val () const
> +{
> +  return UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT) ? u.valp : u.val;
> +}
> +
> +template <int N>
> +inline unsigned int
> +widest_int_storage <N>::get_len () const
> +{
> +  return len;
> +}
> +
> +template <int N>
> +inline HOST_WIDE_INT *
> +widest_int_storage <N>::write_val (unsigned int l)
> +{
> +  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
> +    XDELETEVEC (u.valp);
> +  len = l;
> +  if (UNLIKELY (l > N / HOST_BITS_PER_WIDE_INT))
> +    {
> +      u.valp = XNEWVEC (HOST_WIDE_INT, l);
> +      return u.valp;
> +    }
> +  return u.val;
> +}
> +
> +template <int N>
> +inline void
> +widest_int_storage <N>::set_len (unsigned int l, bool)
> +{
> +  gcc_checking_assert (l <= len);
> +  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT)
> +      && l <= N / HOST_BITS_PER_WIDE_INT)
> +    {
> +      HOST_WIDE_INT *valp = u.valp;
> +      memcpy (u.val, valp, l * sizeof (u.val[0]));
> +      XDELETEVEC (valp);
> +    }
> +  len = l;
> +  /* There are no excess bits in val[len - 1].  */
> +  STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
> +}
> +
> +/* Treat X as having signedness SGN and convert it to an N-bit number.  */
> +template <int N>
> +inline WIDEST_INT (N)
> +widest_int_storage <N>::from (const wide_int_ref &x, signop sgn)
> +{
> +  WIDEST_INT (N) result;
> +  unsigned int exp_len = x.len;
> +  unsigned int prec = result.get_precision ();
> +  if (sgn == UNSIGNED && prec > x.precision && x.val[x.len - 1] < 0)
> +    exp_len = CEIL (x.precision, HOST_BITS_PER_WIDE_INT) + 1;
> +  result.set_len (wi::force_to_size (result.write_val (exp_len), x.val, x.len,
> +				     x.precision, prec, sgn));
> +  return result;
> +}
> +
> +/* Create a WIDEST_INT (N) from the explicit block encoding given by
> +   VAL and LEN.  NEED_CANON_P is true if the encoding may have redundant
> +   trailing blocks.  */
> +template <int N>
> +inline WIDEST_INT (N)
> +widest_int_storage <N>::from_array (const HOST_WIDE_INT *val,
> +				    unsigned int len,
> +				    bool need_canon_p)
> +{
> +  WIDEST_INT (N) result;
> +  result.set_len (wi::from_array (result.write_val (len), val, len,
> +				  result.get_precision (), need_canon_p));
> +  return result;
> +}
> +
> +template <int N>
> +template <typename T1, typename T2>
> +inline WIDEST_INT (N)
> +wi::int_traits < widest_int_storage <N> >::
> +get_binary_result (const T1 &, const T2 &)
> +{
> +  return WIDEST_INT (N) ();
> +}
> +
> +template <int N>
> +template <typename T1, typename T2>
> +inline unsigned int
> +wi::int_traits < widest_int_storage <N> >::
> +get_binary_precision (const T1 &, const T2 &)
> +{
> +  return N / WIDE_INT_MAX_PRECISION * WIDEST_INT_MAX_PRECISION;
> +}
> +
>   /* A reference to one element of a trailing_wide_ints structure.  */
>   class trailing_wide_int_storage
>   {
> @@ -1359,7 +1920,7 @@ public:
>     unsigned int get_len () const;
>     unsigned int get_precision () const;
>     const HOST_WIDE_INT *get_val () const;
> -  HOST_WIDE_INT *write_val ();
> +  HOST_WIDE_INT *write_val (unsigned int);
>     void set_len (unsigned int, bool = false);
>   
>     template <typename T>
> @@ -1445,7 +2006,7 @@ trailing_wide_int_storage::get_val () co
>   }
>   
>   inline HOST_WIDE_INT *
> -trailing_wide_int_storage::write_val ()
> +trailing_wide_int_storage::write_val (unsigned int)
>   {
>     return m_val;
>   }
> @@ -1528,6 +2089,7 @@ namespace wi
>       static const enum precision_type precision_type = FLEXIBLE_PRECISION;
>       static const bool host_dependent_precision = true;
>       static const bool is_sign_extended = true;
> +    static const bool needs_write_val_arg = false;
>       static unsigned int get_precision (T);
>       static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
>     };
> @@ -1699,6 +2261,7 @@ namespace wi
>          precision of HOST_WIDE_INT.  */
>       static const bool host_dependent_precision = false;
>       static const bool is_sign_extended = true;
> +    static const bool needs_write_val_arg = false;
>       static unsigned int get_precision (const wi::hwi_with_prec &);
>       static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
>   				      const wi::hwi_with_prec &);
> @@ -1804,8 +2367,8 @@ template <typename T1, typename T2>
>   inline unsigned int
>   wi::get_binary_precision (const T1 &x, const T2 &y)
>   {
> -  return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
> -			get_binary_result (x, y));
> +  return wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_precision (x,
> +									   y);
>   }
>   
>   /* Copy the contents of Y to X, but keeping X's current precision.  */
> @@ -1813,9 +2376,9 @@ template <typename T1, typename T2>
>   inline void
>   wi::copy (T1 &x, const T2 &y)
>   {
> -  HOST_WIDE_INT *xval = x.write_val ();
> -  const HOST_WIDE_INT *yval = y.get_val ();
>     unsigned int len = y.get_len ();
> +  HOST_WIDE_INT *xval = x.write_val (len);
> +  const HOST_WIDE_INT *yval = y.get_val ();
>     unsigned int i = 0;
>     do
>       xval[i] = yval[i];
> @@ -2162,6 +2725,8 @@ wi::bit_not (const T &x)
>   {
>     WI_UNARY_RESULT_VAR (result, val, T, x);
>     WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (xi.len);
>     for (unsigned int i = 0; i < xi.len; ++i)
>       val[i] = ~xi.val[i];
>     result.set_len (xi.len);
> @@ -2203,6 +2768,8 @@ wi::sext (const T &x, unsigned int offse
>     unsigned int precision = get_precision (result);
>     WIDE_INT_REF_FOR (T) xi (x, precision);
>   
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (xi.len);
>     if (offset <= HOST_BITS_PER_WIDE_INT)
>       {
>         val[0] = sext_hwi (xi.ulow (), offset);
> @@ -2230,6 +2797,9 @@ wi::zext (const T &x, unsigned int offse
>         return result;
>       }
>   
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (MAX (xi.len,
> +				 CEIL (offset, HOST_BITS_PER_WIDE_INT)));
>     /* In these cases we know that at least the top bit will be clear,
>        so no sign extension is necessary.  */
>     if (offset < HOST_BITS_PER_WIDE_INT)
> @@ -2259,6 +2829,9 @@ wi::set_bit (const T &x, unsigned int bi
>     WI_UNARY_RESULT_VAR (result, val, T, x);
>     unsigned int precision = get_precision (result);
>     WIDE_INT_REF_FOR (T) xi (x, precision);
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (MAX (xi.len,
> +				 bit / HOST_BITS_PER_WIDE_INT + 1));
>     if (precision <= HOST_BITS_PER_WIDE_INT)
>       {
>         val[0] = xi.ulow () | (HOST_WIDE_INT_1U << bit);
> @@ -2280,6 +2853,8 @@ wi::bswap (const T &x)
>     WI_UNARY_RESULT_VAR (result, val, T, x);
>     unsigned int precision = get_precision (result);
>     WIDE_INT_REF_FOR (T) xi (x, precision);
> +  if (result.needs_write_val_arg)
> +    gcc_unreachable (); /* bswap on widest_int makes no sense.  */
>     result.set_len (bswap_large (val, xi.val, xi.len, precision));
>     return result;
>   }
> @@ -2292,6 +2867,8 @@ wi::bitreverse (const T &x)
>     WI_UNARY_RESULT_VAR (result, val, T, x);
>     unsigned int precision = get_precision (result);
>     WIDE_INT_REF_FOR (T) xi (x, precision);
> +  if (result.needs_write_val_arg)
> +    gcc_unreachable (); /* bitreverse on widest_int makes no sense.  */
>     result.set_len (bitreverse_large (val, xi.val, xi.len, precision));
>     return result;
>   }
> @@ -2368,6 +2945,8 @@ wi::bit_and (const T1 &x, const T2 &y)
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
>     bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (MAX (xi.len, yi.len));
>     if (LIKELY (xi.len + yi.len == 2))
>       {
>         val[0] = xi.ulow () & yi.ulow ();
> @@ -2389,6 +2968,8 @@ wi::bit_and_not (const T1 &x, const T2 &
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
>     bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (MAX (xi.len, yi.len));
>     if (LIKELY (xi.len + yi.len == 2))
>       {
>         val[0] = xi.ulow () & ~yi.ulow ();
> @@ -2410,6 +2991,8 @@ wi::bit_or (const T1 &x, const T2 &y)
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
>     bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (MAX (xi.len, yi.len));
>     if (LIKELY (xi.len + yi.len == 2))
>       {
>         val[0] = xi.ulow () | yi.ulow ();
> @@ -2431,6 +3014,8 @@ wi::bit_or_not (const T1 &x, const T2 &y
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
>     bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (MAX (xi.len, yi.len));
>     if (LIKELY (xi.len + yi.len == 2))
>       {
>         val[0] = xi.ulow () | ~yi.ulow ();
> @@ -2452,6 +3037,8 @@ wi::bit_xor (const T1 &x, const T2 &y)
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
>     bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (MAX (xi.len, yi.len));
>     if (LIKELY (xi.len + yi.len == 2))
>       {
>         val[0] = xi.ulow () ^ yi.ulow ();
> @@ -2472,6 +3059,8 @@ wi::add (const T1 &x, const T2 &y)
>     unsigned int precision = get_precision (result);
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (MAX (xi.len, yi.len) + 1);
>     if (precision <= HOST_BITS_PER_WIDE_INT)
>       {
>         val[0] = xi.ulow () + yi.ulow ();
> @@ -2515,6 +3104,8 @@ wi::add (const T1 &x, const T2 &y, signo
>     unsigned int precision = get_precision (result);
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (MAX (xi.len, yi.len) + 1);
>     if (precision <= HOST_BITS_PER_WIDE_INT)
>       {
>         unsigned HOST_WIDE_INT xl = xi.ulow ();
> @@ -2558,6 +3149,8 @@ wi::sub (const T1 &x, const T2 &y)
>     unsigned int precision = get_precision (result);
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (MAX (xi.len, yi.len) + 1);
>     if (precision <= HOST_BITS_PER_WIDE_INT)
>       {
>         val[0] = xi.ulow () - yi.ulow ();
> @@ -2601,6 +3194,8 @@ wi::sub (const T1 &x, const T2 &y, signo
>     unsigned int precision = get_precision (result);
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (MAX (xi.len, yi.len) + 1);
>     if (precision <= HOST_BITS_PER_WIDE_INT)
>       {
>         unsigned HOST_WIDE_INT xl = xi.ulow ();
> @@ -2643,6 +3238,8 @@ wi::mul (const T1 &x, const T2 &y)
>     unsigned int precision = get_precision (result);
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (xi.len + yi.len + 2);
>     if (precision <= HOST_BITS_PER_WIDE_INT)
>       {
>         val[0] = xi.ulow () * yi.ulow ();
> @@ -2664,6 +3261,8 @@ wi::mul (const T1 &x, const T2 &y, signo
>     unsigned int precision = get_precision (result);
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (xi.len + yi.len + 2);
>     result.set_len (mul_internal (val, xi.val, xi.len,
>   				yi.val, yi.len, precision,
>   				sgn, overflow, false));
> @@ -2698,6 +3297,8 @@ wi::mul_high (const T1 &x, const T2 &y,
>     unsigned int precision = get_precision (result);
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y, precision);
> +  if (result.needs_write_val_arg)
> +    gcc_unreachable (); /* mul_high on widest_int doesn't make sense.  */
>     result.set_len (mul_internal (val, xi.val, xi.len,
>   				yi.val, yi.len, precision,
>   				sgn, 0, true));
> @@ -2716,6 +3317,12 @@ wi::div_trunc (const T1 &x, const T2 &y,
>     WIDE_INT_REF_FOR (T1) xi (x, precision);
>     WIDE_INT_REF_FOR (T2) yi (y);
>   
> +  if (quotient.needs_write_val_arg)
> +    quotient_val = quotient.write_val ((sgn == UNSIGNED
> +					&& xi.val[xi.len - 1] < 0)
> +				       ? CEIL (precision,
> +					       HOST_BITS_PER_WIDE_INT) + 1
> +				       : xi.len + 1);
>     quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
>   				     precision,
>   				     yi.val, yi.len, yi.precision,
> @@ -2753,6 +3360,15 @@ wi::div_floor (const T1 &x, const T2 &y,
>     WIDE_INT_REF_FOR (T2) yi (y);
>   
>     unsigned int remainder_len;
> +  if (quotient.needs_write_val_arg)
> +    {
> +      quotient_val = quotient.write_val ((sgn == UNSIGNED
> +					  && xi.val[xi.len - 1] < 0)
> +					 ? CEIL (precision,
> +						 HOST_BITS_PER_WIDE_INT) + 1
> +					 : xi.len + 1);
> +      remainder_val = remainder.write_val (yi.len);
> +    }
>     quotient.set_len (divmod_internal (quotient_val,
>   				     &remainder_len, remainder_val,
>   				     xi.val, xi.len, precision,
> @@ -2795,6 +3411,15 @@ wi::div_ceil (const T1 &x, const T2 &y,
>     WIDE_INT_REF_FOR (T2) yi (y);
>   
>     unsigned int remainder_len;
> +  if (quotient.needs_write_val_arg)
> +    {
> +      quotient_val = quotient.write_val ((sgn == UNSIGNED
> +					  && xi.val[xi.len - 1] < 0)
> +					 ? CEIL (precision,
> +						 HOST_BITS_PER_WIDE_INT) + 1
> +					 : xi.len + 1);
> +      remainder_val = remainder.write_val (yi.len);
> +    }
>     quotient.set_len (divmod_internal (quotient_val,
>   				     &remainder_len, remainder_val,
>   				     xi.val, xi.len, precision,
> @@ -2828,6 +3453,15 @@ wi::div_round (const T1 &x, const T2 &y,
>     WIDE_INT_REF_FOR (T2) yi (y);
>   
>     unsigned int remainder_len;
> +  if (quotient.needs_write_val_arg)
> +    {
> +      quotient_val = quotient.write_val ((sgn == UNSIGNED
> +					  && xi.val[xi.len - 1] < 0)
> +					 ? CEIL (precision,
> +						 HOST_BITS_PER_WIDE_INT) + 1
> +					 : xi.len + 1);
> +      remainder_val = remainder.write_val (yi.len);
> +    }
>     quotient.set_len (divmod_internal (quotient_val,
>   				     &remainder_len, remainder_val,
>   				     xi.val, xi.len, precision,
> @@ -2871,6 +3505,15 @@ wi::divmod_trunc (const T1 &x, const T2
>     WIDE_INT_REF_FOR (T2) yi (y);
>   
>     unsigned int remainder_len;
> +  if (quotient.needs_write_val_arg)
> +    {
> +      quotient_val = quotient.write_val ((sgn == UNSIGNED
> +					  && xi.val[xi.len - 1] < 0)
> +					 ? CEIL (precision,
> +						 HOST_BITS_PER_WIDE_INT) + 1
> +					 : xi.len + 1);
> +      remainder_val = remainder.write_val (yi.len);
> +    }
>     quotient.set_len (divmod_internal (quotient_val,
>   				     &remainder_len, remainder_val,
>   				     xi.val, xi.len, precision,
> @@ -2915,6 +3558,8 @@ wi::mod_trunc (const T1 &x, const T2 &y,
>     WIDE_INT_REF_FOR (T2) yi (y);
>   
>     unsigned int remainder_len;
> +  if (remainder.needs_write_val_arg)
> +    remainder_val = remainder.write_val (yi.len);
>     divmod_internal (0, &remainder_len, remainder_val,
>   		   xi.val, xi.len, precision,
>   		   yi.val, yi.len, yi.precision, sgn, overflow);
> @@ -2955,6 +3600,15 @@ wi::mod_floor (const T1 &x, const T2 &y,
>     WIDE_INT_REF_FOR (T2) yi (y);
>   
>     unsigned int remainder_len;
> +  if (quotient.needs_write_val_arg)
> +    {
> +      quotient_val = quotient.write_val ((sgn == UNSIGNED
> +					  && xi.val[xi.len - 1] < 0)
> +					 ? CEIL (precision,
> +						 HOST_BITS_PER_WIDE_INT) + 1
> +					 : xi.len + 1);
> +      remainder_val = remainder.write_val (yi.len);
> +    }
>     quotient.set_len (divmod_internal (quotient_val,
>   				     &remainder_len, remainder_val,
>   				     xi.val, xi.len, precision,
> @@ -2991,6 +3645,15 @@ wi::mod_ceil (const T1 &x, const T2 &y,
>     WIDE_INT_REF_FOR (T2) yi (y);
>   
>     unsigned int remainder_len;
> +  if (quotient.needs_write_val_arg)
> +    {
> +      quotient_val = quotient.write_val ((sgn == UNSIGNED
> +					  && xi.val[xi.len - 1] < 0)
> +					 ? CEIL (precision,
> +						 HOST_BITS_PER_WIDE_INT) + 1
> +					 : xi.len + 1);
> +      remainder_val = remainder.write_val (yi.len);
> +    }
>     quotient.set_len (divmod_internal (quotient_val,
>   				     &remainder_len, remainder_val,
>   				     xi.val, xi.len, precision,
> @@ -3017,6 +3680,15 @@ wi::mod_round (const T1 &x, const T2 &y,
>     WIDE_INT_REF_FOR (T2) yi (y);
>   
>     unsigned int remainder_len;
> +  if (quotient.needs_write_val_arg)
> +    {
> +      quotient_val = quotient.write_val ((sgn == UNSIGNED
> +					  && xi.val[xi.len - 1] < 0)
> +					 ? CEIL (precision,
> +						 HOST_BITS_PER_WIDE_INT) + 1
> +					 : xi.len + 1);
> +      remainder_val = remainder.write_val (yi.len);
> +    }
>     quotient.set_len (divmod_internal (quotient_val,
>   				     &remainder_len, remainder_val,
>   				     xi.val, xi.len, precision,
> @@ -3086,12 +3758,16 @@ wi::lshift (const T1 &x, const T2 &y)
>     /* Handle the simple cases quickly.   */
>     if (geu_p (yi, precision))
>       {
> +      if (result.needs_write_val_arg)
> +	val = result.write_val (1);
>         val[0] = 0;
>         result.set_len (1);
>       }
>     else
>       {
>         unsigned int shift = yi.to_uhwi ();
> +      if (result.needs_write_val_arg)
> +	val = result.write_val (xi.len + shift / HOST_BITS_PER_WIDE_INT + 1);
>         /* For fixed-precision integers like offset_int and widest_int,
>   	 handle the case where the shift value is constant and the
>   	 result is a single nonnegative HWI (meaning that we don't
> @@ -3130,12 +3806,23 @@ wi::lrshift (const T1 &x, const T2 &y)
>     /* Handle the simple cases quickly.   */
>     if (geu_p (yi, xi.precision))
>       {
> +      if (result.needs_write_val_arg)
> +	val = result.write_val (1);
>         val[0] = 0;
>         result.set_len (1);
>       }
>     else
>       {
>         unsigned int shift = yi.to_uhwi ();
> +      if (result.needs_write_val_arg)
> +	{
> +	  unsigned int est_len = xi.len;
> +	  if (xi.val[xi.len - 1] < 0 && shift)
> +	    /* Logical right shift of sign-extended value might need a very
> +	       large precision e.g. for widest_int.  */
> +	    est_len = CEIL (xi.precision - shift, HOST_BITS_PER_WIDE_INT) + 1;
> +	  val = result.write_val (est_len);
> +	}
>         /* For fixed-precision integers like offset_int and widest_int,
>   	 handle the case where the shift value is constant and the
>   	 shifted value is a single nonnegative HWI (meaning that all
> @@ -3171,6 +3858,8 @@ wi::arshift (const T1 &x, const T2 &y)
>        since the result can be no larger than that.  */
>     WIDE_INT_REF_FOR (T1) xi (x);
>     WIDE_INT_REF_FOR (T2) yi (y);
> +  if (result.needs_write_val_arg)
> +    val = result.write_val (xi.len);
>     /* Handle the simple cases quickly.   */
>     if (geu_p (yi, xi.precision))
>       {
> @@ -3465,7 +4154,7 @@ inline wide_int
>   wi::mask (unsigned int width, bool negate_p, unsigned int precision)
>   {
>     wide_int result = wide_int::create (precision);
> -  result.set_len (mask (result.write_val (), width, negate_p, precision));
> +  result.set_len (mask (result.write_val (0), width, negate_p, precision));
>     return result;
>   }
>   
> @@ -3477,7 +4166,7 @@ wi::shifted_mask (unsigned int start, un
>   		  unsigned int precision)
>   {
>     wide_int result = wide_int::create (precision);
> -  result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
> +  result.set_len (shifted_mask (result.write_val (0), start, width, negate_p,
>   				precision));
>     return result;
>   }
> @@ -3498,8 +4187,8 @@ wi::mask (unsigned int width, bool negat
>   {
>     STATIC_ASSERT (wi::int_traits<T>::precision);
>     T result;
> -  result.set_len (mask (result.write_val (), width, negate_p,
> -			wi::int_traits <T>::precision));
> +  result.set_len (mask (result.write_val (width / HOST_BITS_PER_WIDE_INT + 1),
> +			width, negate_p, wi::int_traits <T>::precision));
>     return result;
>   }
>   
> @@ -3512,9 +4201,13 @@ wi::shifted_mask (unsigned int start, un
>   {
>     STATIC_ASSERT (wi::int_traits<T>::precision);
>     T result;
> -  result.set_len (shifted_mask (result.write_val (), start, width,
> -				negate_p,
> -				wi::int_traits <T>::precision));
> +  unsigned int prec = wi::int_traits <T>::precision;
> +  unsigned int est_len
> +    = result.needs_write_val_arg
> +      ? ((start + (width > prec - start ? prec - start : width))
> +	 / HOST_BITS_PER_WIDE_INT + 1) : 0;
> +  result.set_len (shifted_mask (result.write_val (est_len), start, width,
> +				negate_p, prec));
>     return result;
>   }
>   
> --- gcc/wide-int.cc.jj	2023-09-27 10:37:39.429837179 +0200
> +++ gcc/wide-int.cc	2023-09-28 14:59:04.121819198 +0200
> @@ -51,7 +51,7 @@ typedef unsigned int UDWtype __attribute
>   #include "longlong.h"
>   #endif
>   
> -static const HOST_WIDE_INT zeros[WIDE_INT_MAX_ELTS] = {};
> +static const HOST_WIDE_INT zeros[1] = {};
>   
>   /*
>    * Internal utilities.
> @@ -62,8 +62,7 @@ static const HOST_WIDE_INT zeros[WIDE_IN
>   #define HALF_INT_MASK ((HOST_WIDE_INT_1 << HOST_BITS_PER_HALF_WIDE_INT) - 1)
>   
>   #define BLOCK_OF(TARGET) ((TARGET) / HOST_BITS_PER_WIDE_INT)
> -#define BLOCKS_NEEDED(PREC) \
> -  (PREC ? (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT) : 1)
> +#define BLOCKS_NEEDED(PREC) (PREC ? CEIL (PREC, HOST_BITS_PER_WIDE_INT) : 1)
>   #define SIGN_MASK(X) ((HOST_WIDE_INT) (X) < 0 ? -1 : 0)
>   
>   /* Return the value a VAL[I] if I < LEN, otherwise, return 0 or -1
> @@ -96,7 +95,7 @@ canonize (HOST_WIDE_INT *val, unsigned i
>     top = val[len - 1];
>     if (len * HOST_BITS_PER_WIDE_INT > precision)
>       val[len - 1] = top = sext_hwi (top, precision % HOST_BITS_PER_WIDE_INT);
> -  if (top != 0 && top != (HOST_WIDE_INT)-1)
> +  if (top != 0 && top != HOST_WIDE_INT_M1)
>       return len;
>   
>     /* At this point we know that the top is either 0 or -1.  Find the
> @@ -163,7 +162,7 @@ wi::from_buffer (const unsigned char *bu
>     /* We have to clear all the bits ourself, as we merely or in values
>        below.  */
>     unsigned int len = BLOCKS_NEEDED (precision);
> -  HOST_WIDE_INT *val = result.write_val ();
> +  HOST_WIDE_INT *val = result.write_val (0);
>     for (unsigned int i = 0; i < len; ++i)
>       val[i] = 0;
>   
> @@ -232,8 +231,7 @@ wi::to_mpz (const wide_int_ref &x, mpz_t
>       }
>     else if (excess < 0 && wi::neg_p (x))
>       {
> -      int extra
> -	= (-excess + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT;
> +      int extra = CEIL (-excess, HOST_BITS_PER_WIDE_INT);
>         HOST_WIDE_INT *t = XALLOCAVEC (HOST_WIDE_INT, len + extra);
>         for (int i = 0; i < len; i++)
>   	t[i] = v[i];
> @@ -280,8 +278,8 @@ wi::from_mpz (const_tree type, mpz_t x,
>        extracted from the GMP manual, section "Integer Import and Export":
>        http://gmplib.org/manual/Integer-Import-and-Export.html  */
>     numb = CHAR_BIT * sizeof (HOST_WIDE_INT);
> -  count = (mpz_sizeinbase (x, 2) + numb - 1) / numb;
> -  HOST_WIDE_INT *val = res.write_val ();
> +  count = CEIL (mpz_sizeinbase (x, 2), numb);
> +  HOST_WIDE_INT *val = res.write_val (0);
>     /* Read the absolute value.
>   
>        Write directly to the wide_int storage if possible, otherwise leave
> @@ -1334,21 +1332,6 @@ wi::mul_internal (HOST_WIDE_INT *val, co
>     unsigned HOST_WIDE_INT o0, o1, k, t;
>     unsigned int i;
>     unsigned int j;
> -  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
> -  unsigned int half_blocks_needed = blocks_needed * 2;
> -  /* The sizes here are scaled to support a 2x largest mode by 2x
> -     largest mode yielding a 4x largest mode result.  This is what is
> -     needed by vpn.  */
> -
> -  unsigned HOST_HALF_WIDE_INT
> -    u[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
> -  unsigned HOST_HALF_WIDE_INT
> -    v[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
> -  /* The '2' in 'R' is because we are internally doing a full
> -     multiply.  */
> -  unsigned HOST_HALF_WIDE_INT
> -    r[2 * 4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
> -  HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
>   
>     /* If the top level routine did not really pass in an overflow, then
>        just make sure that we never attempt to set it.  */
> @@ -1469,6 +1452,35 @@ wi::mul_internal (HOST_WIDE_INT *val, co
>         return 1;
>       }
>   
> +  /* The sizes here are scaled to support a 2x WIDE_INT_MAX_PRECISION by 2x
> +     WIDE_INT_MAX_PRECISION yielding a 4x WIDE_INT_MAX_PRECISION result.  */
> +
> +  unsigned HOST_HALF_WIDE_INT
> +    ubuf[4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
> +  unsigned HOST_HALF_WIDE_INT
> +    vbuf[4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
> +  /* The '2' in 'R' is because we are internally doing a full
> +     multiply.  */
> +  unsigned HOST_HALF_WIDE_INT
> +    rbuf[2 * 4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
> +  const HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
> +  unsigned HOST_HALF_WIDE_INT *u = ubuf;
> +  unsigned HOST_HALF_WIDE_INT *v = vbuf;
> +  unsigned HOST_HALF_WIDE_INT *r = rbuf;
> +
> +  if (prec > WIDE_INT_MAX_PRECISION && !high)
> +    prec = (op1len + op2len + 1) * HOST_BITS_PER_WIDE_INT;
> +  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
> +  unsigned int half_blocks_needed = blocks_needed * 2;
> +  if (UNLIKELY (prec > WIDE_INT_MAX_PRECISION))
> +    {
> +      unsigned HOST_HALF_WIDE_INT *buf
> +	= XALLOCAVEC (unsigned HOST_HALF_WIDE_INT, 4 * 4 * blocks_needed);
> +      u = buf;
> +      v = u + 4 * blocks_needed;
> +      r = v + 4 * blocks_needed;
> +    }
> +
>     /* We do unsigned mul and then correct it.  */
>     wi_unpack (u, op1val, op1len, half_blocks_needed, prec, SIGNED);
>     wi_unpack (v, op2val, op2len, half_blocks_needed, prec, SIGNED);
> @@ -1782,16 +1794,6 @@ wi::divmod_internal (HOST_WIDE_INT *quot
>   		     unsigned int divisor_prec, signop sgn,
>   		     wi::overflow_type *oflow)
>   {
> -  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
> -  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
> -  unsigned HOST_HALF_WIDE_INT
> -    b_quotient[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
> -  unsigned HOST_HALF_WIDE_INT
> -    b_remainder[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
> -  unsigned HOST_HALF_WIDE_INT
> -    b_dividend[(4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT) + 1];
> -  unsigned HOST_HALF_WIDE_INT
> -    b_divisor[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
>     unsigned int m, n;
>     bool dividend_neg = false;
>     bool divisor_neg = false;
> @@ -1910,6 +1912,41 @@ wi::divmod_internal (HOST_WIDE_INT *quot
>   	}
>       }
>   
> +  unsigned HOST_HALF_WIDE_INT
> +    b_quotient_buf[4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
> +  unsigned HOST_HALF_WIDE_INT
> +    b_remainder_buf[4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
> +  unsigned HOST_HALF_WIDE_INT
> +    b_dividend_buf[(4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT)
> +		   + 1];
> +  unsigned HOST_HALF_WIDE_INT
> +    b_divisor_buf[4 * WIDE_INT_MAX_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
> +  unsigned HOST_HALF_WIDE_INT *b_quotient = b_quotient_buf;
> +  unsigned HOST_HALF_WIDE_INT *b_remainder = b_remainder_buf;
> +  unsigned HOST_HALF_WIDE_INT *b_dividend = b_dividend_buf;
> +  unsigned HOST_HALF_WIDE_INT *b_divisor = b_divisor_buf;
> +
> +  if (dividend_prec > WIDE_INT_MAX_PRECISION
> +      && (sgn == SIGNED || dividend_val[dividend_len - 1] >= 0))
> +    dividend_prec = (dividend_len + 1) * HOST_BITS_PER_WIDE_INT;
> +  if (divisor_prec > WIDE_INT_MAX_PRECISION)
> +    divisor_prec = divisor_len * HOST_BITS_PER_WIDE_INT;
> +  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
> +  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
> +  if (UNLIKELY (dividend_prec > WIDE_INT_MAX_PRECISION)
> +      || UNLIKELY (divisor_prec > WIDE_INT_MAX_PRECISION))
> +    {
> +      unsigned HOST_HALF_WIDE_INT *buf
> +        = XALLOCAVEC (unsigned HOST_HALF_WIDE_INT,
> +		      12 * dividend_blocks_needed
> +		      + 4 * divisor_blocks_needed + 1);
> +      b_quotient = buf;
> +      b_remainder = b_quotient + 4 * dividend_blocks_needed;
> +      b_dividend = b_remainder + 4 * dividend_blocks_needed;
> +      b_divisor = b_dividend + 4 * dividend_blocks_needed + 1;
> +      memset (b_quotient, 0,
> +	      4 * dividend_blocks_needed * sizeof (HOST_HALF_WIDE_INT));
> +    }
>     wi_unpack (b_dividend, dividend.get_val (), dividend.get_len (),
>   	     dividend_blocks_needed, dividend_prec, UNSIGNED);
>     wi_unpack (b_divisor, divisor.get_val (), divisor.get_len (),
> @@ -1924,7 +1961,8 @@ wi::divmod_internal (HOST_WIDE_INT *quot
>     while (n > 1 && b_divisor[n - 1] == 0)
>       n--;
>   
> -  memset (b_quotient, 0, sizeof (b_quotient));
> +  if (b_quotient == b_quotient_buf)
> +    memset (b_quotient_buf, 0, sizeof (b_quotient_buf));
>   
>     divmod_internal_2 (b_quotient, b_remainder, b_dividend, b_divisor, m, n);
>   
> @@ -1970,6 +2008,8 @@ wi::lshift_large (HOST_WIDE_INT *val, co
>   
>     /* The whole-block shift fills with zeros.  */
>     unsigned int len = BLOCKS_NEEDED (precision);
> +  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
> +    len = xlen + skip + 1;
>     for (unsigned int i = 0; i < skip; ++i)
>       val[i] = 0;
>   
> @@ -1993,22 +2033,17 @@ wi::lshift_large (HOST_WIDE_INT *val, co
>     return canonize (val, len, precision);
>   }
>   
> -/* Right shift XVAL by SHIFT and store the result in VAL.  Return the
> +/* Right shift XVAL by SHIFT and store the result in VAL.  LEN is the
>      number of blocks in VAL.  The input has XPRECISION bits and the
>      output has XPRECISION - SHIFT bits.  */
> -static unsigned int
> +static void
>   rshift_large_common (HOST_WIDE_INT *val, const HOST_WIDE_INT *xval,
> -		     unsigned int xlen, unsigned int xprecision,
> -		     unsigned int shift)
> +		     unsigned int xlen, unsigned int shift, unsigned int len)
>   {
>     /* Split the shift into a whole-block shift and a subblock shift.  */
>     unsigned int skip = shift / HOST_BITS_PER_WIDE_INT;
>     unsigned int small_shift = shift % HOST_BITS_PER_WIDE_INT;
>   
> -  /* Work out how many blocks are needed to store the significant bits
> -     (excluding the upper zeros or signs).  */
> -  unsigned int len = BLOCKS_NEEDED (xprecision - shift);
> -
>     /* It's easier to handle the simple block case specially.  */
>     if (small_shift == 0)
>       for (unsigned int i = 0; i < len; ++i)
> @@ -2025,7 +2060,6 @@ rshift_large_common (HOST_WIDE_INT *val,
>   	  val[i] |= curr << (-small_shift % HOST_BITS_PER_WIDE_INT);
>   	}
>       }
> -  return len;
>   }
>   
>   /* Logically right shift XVAL by SHIFT and store the result in VAL.
> @@ -2036,11 +2070,18 @@ wi::lrshift_large (HOST_WIDE_INT *val, c
>   		   unsigned int xlen, unsigned int xprecision,
>   		   unsigned int precision, unsigned int shift)
>   {
> -  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
> +  /* Work out how many blocks are needed to store the significant bits
> +     (excluding the upper zeros or signs).  */
> +  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
> +  unsigned int len = blocks_needed;
> +  if (UNLIKELY (len > WIDE_INT_MAX_ELTS) && len > xlen && xval[xlen - 1] >= 0)
> +    len = xlen;
> +
> +  rshift_large_common (val, xval, xlen, shift, len);
>   
>     /* The value we just created has precision XPRECISION - SHIFT.
>        Zero-extend it to wider precisions.  */
> -  if (precision > xprecision - shift)
> +  if (precision > xprecision - shift && len == blocks_needed)
>       {
>         unsigned int small_prec = (xprecision - shift) % HOST_BITS_PER_WIDE_INT;
>         if (small_prec)
> @@ -2063,11 +2104,18 @@ wi::arshift_large (HOST_WIDE_INT *val, c
>   		   unsigned int xlen, unsigned int xprecision,
>   		   unsigned int precision, unsigned int shift)
>   {
> -  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
> +  /* Work out how many blocks are needed to store the significant bits
> +     (excluding the upper zeros or signs).  */
> +  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
> +  unsigned int len = blocks_needed;
> +  if (UNLIKELY (len > WIDE_INT_MAX_ELTS) && len > xlen)
> +    len = xlen;
> +
> +  rshift_large_common (val, xval, xlen, shift, len);
>   
>     /* The value we just created has precision XPRECISION - SHIFT.
>        Sign-extend it to wider types.  */
> -  if (precision > xprecision - shift)
> +  if (precision > xprecision - shift && len == blocks_needed)
>       {
>         unsigned int small_prec = (xprecision - shift) % HOST_BITS_PER_WIDE_INT;
>         if (small_prec)
> @@ -2399,9 +2447,12 @@ from_int (int i)
>   static void
>   assert_deceq (const char *expected, const wide_int_ref &wi, signop sgn)
>   {
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_dec (wi, buf, sgn);
> -  ASSERT_STREQ (expected, buf);
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
> +  unsigned len = wi.get_len ();
> +  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  print_dec (wi, p, sgn);
> +  ASSERT_STREQ (expected, p);
>   }
>   
>   /* Likewise for base 16.  */
> @@ -2409,9 +2460,12 @@ assert_deceq (const char *expected, cons
>   static void
>   assert_hexeq (const char *expected, const wide_int_ref &wi)
>   {
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_hex (wi, buf);
> -  ASSERT_STREQ (expected, buf);
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
> +  unsigned len = wi.get_len ();
> +  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  print_hex (wi, p);
> +  ASSERT_STREQ (expected, p);
>   }
>   
>   /* Test cases.  */
> --- gcc/print-tree.cc.jj	2023-07-11 13:40:39.000000000 +0200
> +++ gcc/print-tree.cc	2023-09-28 14:12:40.257284557 +0200
> @@ -365,13 +365,13 @@ print_node (FILE *file, const char *pref
>       fputs (code == CALL_EXPR ? " must-tail-call" : " static", file);
>     if (TREE_DEPRECATED (node))
>       fputs (" deprecated", file);
> -  if (TREE_UNAVAILABLE (node))
> -    fputs (" unavailable", file);
>     if (TREE_VISITED (node))
>       fputs (" visited", file);
>   
>     if (code != TREE_VEC && code != INTEGER_CST && code != SSA_NAME)
>       {
> +      if (TREE_UNAVAILABLE (node))
> +	fputs (" unavailable", file);
>         if (TREE_LANG_FLAG_0 (node))
>   	fputs (" tree_0", file);
>         if (TREE_LANG_FLAG_1 (node))
> --- gcc/dwarf2out.cc.jj	2023-09-28 12:05:50.905151340 +0200
> +++ gcc/dwarf2out.cc	2023-09-28 13:06:34.492017940 +0200
> @@ -397,7 +397,7 @@ dump_struct_debug (tree type, enum debug
>      of the number.  */
>   
>   static unsigned int
> -get_full_len (const wide_int &op)
> +get_full_len (const rwide_int &op)
>   {
>     int prec = wi::get_precision (op);
>     return ((prec + HOST_BITS_PER_WIDE_INT - 1)
> @@ -3900,7 +3900,7 @@ static void add_data_member_location_att
>   						struct vlr_context *);
>   static bool add_const_value_attribute (dw_die_ref, machine_mode, rtx);
>   static void insert_int (HOST_WIDE_INT, unsigned, unsigned char *);
> -static void insert_wide_int (const wide_int &, unsigned char *, int);
> +static void insert_wide_int (const rwide_int &, unsigned char *, int);
>   static unsigned insert_float (const_rtx, unsigned char *);
>   static rtx rtl_for_decl_location (tree);
>   static bool add_location_or_const_value_attribute (dw_die_ref, tree, bool);
> @@ -4598,14 +4598,14 @@ AT_unsigned (dw_attr_node *a)
>   
>   static inline void
>   add_AT_wide (dw_die_ref die, enum dwarf_attribute attr_kind,
> -	     const wide_int& w)
> +	     const rwide_int& w)
>   {
>     dw_attr_node attr;
>   
>     attr.dw_attr = attr_kind;
>     attr.dw_attr_val.val_class = dw_val_class_wide_int;
>     attr.dw_attr_val.val_entry = NULL;
> -  attr.dw_attr_val.v.val_wide = ggc_alloc<wide_int> ();
> +  attr.dw_attr_val.v.val_wide = ggc_alloc<rwide_int> ();
>     *attr.dw_attr_val.v.val_wide = w;
>     add_dwarf_attr (die, &attr);
>   }
> @@ -16714,7 +16714,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
>   	  mem_loc_result->dw_loc_oprnd1.v.val_die_ref.external = 0;
>   	  mem_loc_result->dw_loc_oprnd2.val_class
>   	    = dw_val_class_wide_int;
> -	  mem_loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<wide_int> ();
> +	  mem_loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<rwide_int> ();
>   	  *mem_loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, mode);
>   	}
>         break;
> @@ -17288,7 +17288,7 @@ loc_descriptor (rtx rtl, machine_mode mo
>   	  loc_result = new_loc_descr (DW_OP_implicit_value,
>   				      GET_MODE_SIZE (int_mode), 0);
>   	  loc_result->dw_loc_oprnd2.val_class = dw_val_class_wide_int;
> -	  loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<wide_int> ();
> +	  loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<rwide_int> ();
>   	  *loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, int_mode);
>   	}
>         break;
> @@ -20189,7 +20189,7 @@ extract_int (const unsigned char *src, u
>   /* Writes wide_int values to dw_vec_const array.  */
>   
>   static void
> -insert_wide_int (const wide_int &val, unsigned char *dest, int elt_size)
> +insert_wide_int (const rwide_int &val, unsigned char *dest, int elt_size)
>   {
>     int i;
>   
> @@ -20274,7 +20274,7 @@ add_const_value_attribute (dw_die_ref di
>   	  && (GET_MODE_PRECISION (int_mode)
>   	      & (HOST_BITS_PER_WIDE_INT - 1)) == 0)
>   	{
> -	  wide_int w = rtx_mode_t (rtl, int_mode);
> +	  rwide_int w = rtx_mode_t (rtl, int_mode);
>   	  add_AT_wide (die, DW_AT_const_value, w);
>   	  return true;
>   	}
> --- gcc/dwarf2out.h.jj	2023-09-27 10:37:38.536849616 +0200
> +++ gcc/dwarf2out.h	2023-09-28 13:06:34.492017940 +0200
> @@ -30,7 +30,7 @@ typedef struct dw_cfi_node *dw_cfi_ref;
>   typedef struct dw_loc_descr_node *dw_loc_descr_ref;
>   typedef struct dw_loc_list_struct *dw_loc_list_ref;
>   typedef struct dw_discr_list_node *dw_discr_list_ref;
> -typedef wide_int *wide_int_ptr;
> +typedef rwide_int *rwide_int_ptr;
>   
>   
>   /* Call frames are described using a sequence of Call Frame
> @@ -252,7 +252,7 @@ struct GTY(()) dw_val_node {
>         unsigned HOST_WIDE_INT
>   	GTY ((tag ("dw_val_class_unsigned_const"))) val_unsigned;
>         double_int GTY ((tag ("dw_val_class_const_double"))) val_double;
> -      wide_int_ptr GTY ((tag ("dw_val_class_wide_int"))) val_wide;
> +      rwide_int_ptr GTY ((tag ("dw_val_class_wide_int"))) val_wide;
>         dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec;
>         struct dw_val_die_union
>   	{
> --- gcc/tree.h.jj	2023-09-27 10:37:39.114841566 +0200
> +++ gcc/tree.h	2023-09-28 13:06:34.506017744 +0200
> @@ -6258,13 +6258,17 @@ namespace wi
>     template <int N>
>     struct int_traits <extended_tree <N> >
>     {
> -    static const enum precision_type precision_type = CONST_PRECISION;
> +    static const enum precision_type precision_type
> +      = N == ADDR_MAX_PRECISION ? CONST_PRECISION : WIDEST_CONST_PRECISION;
>       static const bool host_dependent_precision = false;
>       static const bool is_sign_extended = true;
>       static const unsigned int precision = N;
> +    static const unsigned int inl_precision
> +      = N == ADDR_MAX_PRECISION ? 0
> +	     : N / WIDEST_INT_MAX_PRECISION * WIDE_INT_MAX_PRECISION;
>     };
>   
> -  typedef extended_tree <WIDE_INT_MAX_PRECISION> widest_extended_tree;
> +  typedef extended_tree <WIDEST_INT_MAX_PRECISION> widest_extended_tree;
>     typedef extended_tree <ADDR_MAX_PRECISION> offset_extended_tree;
>   
>     typedef const generic_wide_int <widest_extended_tree> tree_to_widest_ref;
> @@ -6292,7 +6296,8 @@ namespace wi
>     tree_to_poly_wide_ref to_poly_wide (const_tree);
>   
>     template <int N>
> -  struct ints_for <generic_wide_int <extended_tree <N> >, CONST_PRECISION>
> +  struct ints_for <generic_wide_int <extended_tree <N> >,
> +		   int_traits <extended_tree <N> >::precision_type>
>     {
>       typedef generic_wide_int <extended_tree <N> > extended;
>       static extended zero (const extended &);
> @@ -6308,7 +6313,7 @@ namespace wi
>   
>   /* Used to convert a tree to a widest2_int like this:
>      widest2_int foo = widest2_int_cst (some_tree).  */
> -typedef generic_wide_int <wi::extended_tree <WIDE_INT_MAX_PRECISION * 2> >
> +typedef generic_wide_int <wi::extended_tree <WIDEST_INT_MAX_PRECISION * 2> >
>     widest2_int_cst;
>   
>   /* Refer to INTEGER_CST T as though it were a widest_int.
> @@ -6444,7 +6449,7 @@ wi::extended_tree <N>::get_len () const
>   {
>     if (N == ADDR_MAX_PRECISION)
>       return TREE_INT_CST_OFFSET_NUNITS (m_t);
> -  else if (N >= WIDE_INT_MAX_PRECISION)
> +  else if (N >= WIDEST_INT_MAX_PRECISION)
>       return TREE_INT_CST_EXT_NUNITS (m_t);
>     else
>       /* This class is designed to be used for specific output precisions
> @@ -6530,7 +6535,8 @@ wi::to_poly_wide (const_tree t)
>   template <int N>
>   inline generic_wide_int <wi::extended_tree <N> >
>   wi::ints_for <generic_wide_int <wi::extended_tree <N> >,
> -	      wi::CONST_PRECISION>::zero (const extended &x)
> +	      wi::int_traits <wi::extended_tree <N> >::precision_type
> +	     >::zero (const extended &x)
>   {
>     return build_zero_cst (TREE_TYPE (x.get_tree ()));
>   }
> --- gcc/value-range.cc.jj	2023-09-27 10:37:39.240839811 +0200
> +++ gcc/value-range.cc	2023-09-28 13:06:34.512017660 +0200
> @@ -245,17 +245,24 @@ vrange::dump (FILE *file) const
>   void
>   irange_bitmask::dump (FILE *file) const
>   {
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
>     pretty_printer buffer;
>   
>     pp_needs_newline (&buffer) = true;
>     buffer.buffer->stream = file;
>     pp_string (&buffer, "MASK ");
> -  print_hex (m_mask, buf);
> -  pp_string (&buffer, buf);
> +  unsigned len_mask = m_mask.get_len ();
> +  unsigned len_val = m_value.get_len ();
> +  unsigned len = MAX (len_mask, len_val);
> +  if (len > WIDE_INT_MAX_ELTS)
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  else
> +    p = buf;
> +  print_hex (m_mask, p);
> +  pp_string (&buffer, p);
>     pp_string (&buffer, " VALUE ");
> -  print_hex (m_value, buf);
> -  pp_string (&buffer, buf);
> +  print_hex (m_value, p);
> +  pp_string (&buffer, p);
>     pp_flush (&buffer);
>   }
>   
> --- gcc/c/c-decl.cc.jj	2023-09-27 10:37:38.428851119 +0200
> +++ gcc/c/c-decl.cc	2023-09-28 13:06:34.514017632 +0200
> @@ -12355,11 +12355,11 @@ declspecs_add_type (location_t loc, stru
>   				spec.expr);
>   		      return specs;
>   		    }
> -		  if (wi::to_widest (spec.expr) > WIDE_INT_MAX_PRECISION - 1)
> +		  if (wi::to_widest (spec.expr) > WIDEST_INT_MAX_PRECISION - 1)
>   		    {
>   		      error_at (loc, "%<_BitInt%> argument %qE is larger than "
>   				     "%<BITINT_MAXWIDTH%> %qd",
> -				spec.expr, (int) WIDE_INT_MAX_PRECISION - 1);
> +				spec.expr, (int) WIDEST_INT_MAX_PRECISION - 1);
>   		      return specs;
>   		    }
>   		  specs->u.bitint_prec = tree_to_uhwi (spec.expr);
> --- gcc/gengtype.cc.jj	2023-09-27 10:37:38.751846621 +0200
> +++ gcc/gengtype.cc	2023-09-28 13:06:34.515017618 +0200
> @@ -5236,7 +5236,6 @@ main (int argc, char **argv)
>         POS_HERE (do_scalar_typedef ("double_int", &pos));
>         POS_HERE (do_scalar_typedef ("poly_int64_pod", &pos));
>         POS_HERE (do_scalar_typedef ("offset_int", &pos));
> -      POS_HERE (do_scalar_typedef ("widest_int", &pos));
>         POS_HERE (do_scalar_typedef ("int64_t", &pos));
>         POS_HERE (do_scalar_typedef ("poly_int64", &pos));
>         POS_HERE (do_scalar_typedef ("poly_uint64", &pos));
> --- gcc/tree-ssa-loop-niter.cc.jj	2023-09-27 10:37:39.072842151 +0200
> +++ gcc/tree-ssa-loop-niter.cc	2023-09-28 13:06:34.515017618 +0200
> @@ -3873,12 +3873,17 @@ do_warn_aggressive_loop_optimizations (c
>       return;
>   
>     gimple *estmt = last_nondebug_stmt (e->src);
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_dec (i_bound, buf, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations))
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
> +  unsigned len = i_bound.get_len ();
> +  if (len > WIDE_INT_MAX_ELTS)
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  else
> +    p = buf;
> +  print_dec (i_bound, p, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations))
>   	     ? UNSIGNED : SIGNED);
>     auto_diagnostic_group d;
>     if (warning_at (gimple_location (stmt), OPT_Waggressive_loop_optimizations,
> -		  "iteration %s invokes undefined behavior", buf))
> +		  "iteration %s invokes undefined behavior", p))
>       inform (gimple_location (estmt), "within this loop");
>     loop->warned_aggressive_loop_optimizations = true;
>   }
> --- gcc/c-family/c-warn.cc.jj	2023-09-27 10:37:38.334852428 +0200
> +++ gcc/c-family/c-warn.cc	2023-09-28 13:06:34.524017491 +0200
> @@ -1517,13 +1517,15 @@ match_case_to_enum_1 (tree key, tree typ
>       return;
>   
>     char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> +  wide_int w = wi::to_wide (key);
>   
> +  gcc_assert (w.get_len () <= WIDE_INT_MAX_ELTS);
>     if (tree_fits_uhwi_p (key))
> -    print_dec (wi::to_wide (key), buf, UNSIGNED);
> +    print_dec (w, buf, UNSIGNED);
>     else if (tree_fits_shwi_p (key))
> -    print_dec (wi::to_wide (key), buf, SIGNED);
> +    print_dec (w, buf, SIGNED);
>     else
> -    print_hex (wi::to_wide (key), buf);
> +    print_hex (w, buf);
>   
>     if (TYPE_NAME (type) == NULL_TREE)
>       warning_at (DECL_SOURCE_LOCATION (CASE_LABEL (label)),
> --- gcc/c-family/c-cppbuiltin.cc.jj	2023-09-27 10:37:38.226853933 +0200
> +++ gcc/c-family/c-cppbuiltin.cc	2023-09-28 13:06:34.541017253 +0200
> @@ -1195,10 +1195,10 @@ c_cpp_builtins (cpp_reader *pfile)
>         struct bitint_info info;
>         /* For now, restrict __BITINT_MAXWIDTH__ to what can be represented in
>   	 wide_int and widest_int.  */
> -      if (targetm.c.bitint_type_info (WIDE_INT_MAX_PRECISION - 1, &info))
> +      if (targetm.c.bitint_type_info (WIDEST_INT_MAX_PRECISION - 1, &info))
>   	{
>   	  cpp_define_formatted (pfile, "__BITINT_MAXWIDTH__=%d",
> -				(int) WIDE_INT_MAX_PRECISION - 1);
> +				(int) WIDEST_INT_MAX_PRECISION - 1);
>   	  if (flag_building_libgcc)
>   	    {
>   	      scalar_int_mode limb_mode
> --- gcc/c-family/c-lex.cc.jj	2023-09-27 10:37:38.272853292 +0200
> +++ gcc/c-family/c-lex.cc	2023-09-28 13:06:34.550017127 +0200
> @@ -843,7 +843,7 @@ interpret_integer (const cpp_token *toke
>         int max_bits_per_digit = 4; // ceil (log2 (10))
>         unsigned int prefix_len = 0;
>         bool hex = false;
> -      const int bitint_maxwidth = WIDE_INT_MAX_PRECISION - 1;
> +      const int bitint_maxwidth = WIDEST_INT_MAX_PRECISION - 1;
>         if ((flags & CPP_N_RADIX) == CPP_N_OCTAL)
>   	{
>   	  max_bits_per_digit = 3;
> --- gcc/value-range-pretty-print.cc.jj	2023-09-27 10:37:39.170840787 +0200
> +++ gcc/value-range-pretty-print.cc	2023-09-28 13:06:34.550017127 +0200
> @@ -99,12 +99,19 @@ vrange_printer::print_irange_bitmasks (c
>       return;
>   
>     pp_string (pp, " MASK ");
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_hex (bm.mask (), buf);
> -  pp_string (pp, buf);
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
> +  unsigned len_mask = bm.mask ().get_len ();
> +  unsigned len_val = bm.value ().get_len ();
> +  unsigned len = MAX (len_mask, len_val);
> +  if (len > WIDE_INT_MAX_ELTS)
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  else
> +    p = buf;
> +  print_hex (bm.mask (), p);
> +  pp_string (pp, p);
>     pp_string (pp, " VALUE ");
> -  print_hex (bm.value (), buf);
> -  pp_string (pp, buf);
> +  print_hex (bm.value (), p);
> +  pp_string (pp, p);
>   }
>   
>   void
> --- gcc/poly-int.h.jj	2023-09-27 10:37:38.874844909 +0200
> +++ gcc/poly-int.h	2023-09-28 13:06:34.551017113 +0200
> @@ -97,6 +97,18 @@ struct poly_coeff_traits<T, wi::CONST_PR
>     static const int rank = precision * 2 / CHAR_BIT;
>   };
>   
> +template<typename T>
> +struct poly_coeff_traits<T, wi::WIDEST_CONST_PRECISION>
> +{
> +  typedef WI_UNARY_RESULT (T) result;
> +  typedef int int_type;
> +  /* These types are always signed.  */
> +  static const int signedness = 1;
> +  static const int precision = wi::int_traits<T>::precision;
> +  static const int inl_precision = wi::int_traits<T>::inl_precision;
> +  static const int rank = precision * 2 / CHAR_BIT;
> +};
> +
>   /* Information about a pair of coefficient types.  */
>   template<typename T1, typename T2>
>   struct poly_coeff_pair_traits
> --- gcc/godump.cc.jj	2023-09-27 10:37:38.805845870 +0200
> +++ gcc/godump.cc	2023-09-28 13:06:34.551017113 +0200
> @@ -1154,7 +1154,11 @@ go_output_typedef (class godump_containe
>   	    snprintf (buf, sizeof buf, HOST_WIDE_INT_PRINT_UNSIGNED,
>   		      tree_to_uhwi (value));
>   	  else
> -	    print_hex (wi::to_wide (element), buf);
> +	    {
> +	      wide_int w = wi::to_wide (element);
> +	      gcc_assert (w.get_len () <= WIDE_INT_MAX_ELTS);
> +	      print_hex (w, buf);
> +	    }
>   
>   	  mhval->value = xstrdup (buf);
>   	  *slot = mhval;
> --- gcc/value-range.h.jj	2023-09-27 10:37:39.268839422 +0200
> +++ gcc/value-range.h	2023-09-28 13:06:34.555017057 +0200
> @@ -626,7 +626,9 @@ irange::maybe_resize (int needed)
>       {
>         m_max_ranges = HARD_MAX_RANGES;
>         wide_int *newmem = new wide_int[m_max_ranges * 2];
> -      memcpy (newmem, m_base, sizeof (wide_int) * num_pairs () * 2);
> +      unsigned n = num_pairs () * 2;
> +      for (unsigned i = 0; i < n; ++i)
> +	newmem[i] = m_base[i];
>         m_base = newmem;
>       }
>   }
> --- gcc/stor-layout.cc.jj	2023-09-27 10:37:38.951843836 +0200
> +++ gcc/stor-layout.cc	2023-09-28 13:06:34.560016987 +0200
> @@ -2946,7 +2946,7 @@ set_min_and_max_values_for_integral_type
>     if (precision < 1)
>       return;
>   
> -  gcc_assert (precision <= WIDE_INT_MAX_PRECISION);
> +  gcc_assert (precision <= WIDEST_INT_MAX_PRECISION);
>   
>     TYPE_MIN_VALUE (type)
>       = wide_int_to_tree (type, wi::min_value (precision, sgn));
> --- gcc/wide-int-print.cc.jj	2023-09-27 10:37:39.379837876 +0200
> +++ gcc/wide-int-print.cc	2023-09-28 14:24:04.824794192 +0200
> @@ -74,9 +74,12 @@ print_decs (const wide_int_ref &wi, char
>   void
>   print_decs (const wide_int_ref &wi, FILE *file)
>   {
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_decs (wi, buf);
> -  fputs (buf, file);
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
> +  unsigned len = wi.get_len ();
> +  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  print_decs (wi, p);
> +  fputs (p, file);
>   }
>   
>   /* Try to print the unsigned self in decimal to BUF if the number fits
> @@ -98,9 +101,12 @@ print_decu (const wide_int_ref &wi, char
>   void
>   print_decu (const wide_int_ref &wi, FILE *file)
>   {
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_decu (wi, buf);
> -  fputs (buf, file);
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
> +  unsigned len = wi.get_len ();
> +  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  print_decu (wi, p);
> +  fputs (p, file);
>   }
>   
>   void
> @@ -134,9 +140,12 @@ print_hex (const wide_int_ref &val, char
>   void
>   print_hex (const wide_int_ref &wi, FILE *file)
>   {
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_hex (wi, buf);
> -  fputs (buf, file);
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
> +  unsigned len = wi.get_len ();
> +  if (UNLIKELY (len > WIDE_INT_MAX_ELTS))
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  print_hex (wi, p);
> +  fputs (p, file);
>   }
>   
>   /* Print larger precision wide_int.  Not defined as inline in a header
> --- gcc/testsuite/gcc.dg/bitint-38.c.jj	2023-09-28 15:02:23.182069788 +0200
> +++ gcc/testsuite/gcc.dg/bitint-38.c	2023-09-28 15:02:39.168848976 +0200
> @@ -0,0 +1,17 @@
> +/* PR c/102989 */
> +/* { dg-do compile { target { bitint } } } */
> +
> +#if __BITINT_MAXWIDTH__ >= 16319
> +constexpr unsigned _BitInt(16319) a
> +  = 4680985677016772612762154819367704422543836437669953782416002271793962834329168658813322158671064891592515774953720856634870923177432447705972876331990053749984553335872803574901499931018113920514837614959871082649647383371181551558627154389107216612303325331853355817576005118468541159326372619696331343658686953639145705781100644718684758413485893669336454109876999790801402128499090811881709104649674862313589352128970962606260330555361418355992844984747378584876584701151447719231148263122838630355037006001414407244263646996363302404142712756260212949394224832506196290059599922434186612301221326677697811837903387593458849038216955909915772285205237253020482154478415731138408115936384134250549382132629614483178985741405330900049927326885251150047829738932440914270003968904271522253086610789546710660692344537575931817539008652034390354024803064135722396104671425919208091873674380711701009695674400446914274879597856373383816513099167820636702860465475852408378923071709288494858771867932807076008408667834717991481792508183877161831273233461995333874633634423562188037796970057593244103764768552224208762624259855719828181803538704101498242145443130132851995441934966242232199864029448496224894220076785649461747978927950893308995356247277775253307894927035745641122529551477709429297615456043508694042465582747523535103701572294850044021310431534542903979293872763740549385789768786064672173593986842750505191044139142860241068081163407122730594273622937031513554983362131706988944484053693987571885231604602927148758578799681735783281913582159724935132712978756344007933019292500528222586360156508576830239007098454108384879367785332504078861809545760463406979085840209512950488449380478656570290728507974429761468952941849937369995054856657428113137954055306741998480558027599017863768220695293429712619631193324765040642858693620496620834057898284331321549332428174328094158105481806587503936922727295862328420656584909712019277800142588153331154596951179422735518766468448210767
23664040282772834511419891351278169017103987094803829594286352340468346618726088781492626816188657331359104171819822673805856317828499039088088223137258297373929043307673570090396947789598799922928643843532617012164811074618881774622628943539037974883812689130801860915090035870244061005819418130068390986470314677853605080103313411837904358287837401546257413240466939893527508931541065241929872307203876443882106193262544652290132364691671910332006127864146991404015366683569317248057949596070354929361158326955551600236075268435044105880162798380799161607987365282458662031599096921825176202707890730023698706855762932691688259365358964076595824577775275991183149118372047206055118463112864604063853894820407249837871368934941438119680605528546887256934334246075596746410297954458632358171428714141820918183384435681332379317541048252391710712196623406338702061195213724569303285402242853671386113148211535691685461836458295037538034378318055108240082414441205300401526732399959228346926528586852743389490978734787926721999855388794711837164423007719626109179005466113706450765269687580819822772189301084503627297389675134228222337286867641110511061980231247884533492442898936743429641958314135329073406495776369208158032115883850691010569048983941126771477990976092252391972812691669847446798507244106121667885423025613769258102773855537509733295805013313937402282804897213847221072647111605172349464564089914906493508133855389627177663426057763252086286325343811254757681803068276278048757997425284334713190226818463023074461900176958010055572434983135171145365242339273326984465181064287264645470832091115100640584104375577304056951969456200138485313560009272338228103637763863289261673258726736753407044143664079479496972580560534494806170810469304773005873590626280072387999668522546747985701599613975101188543857852141559251634058676718308000324869809628199442681565615662912626022796064414496106344236431285697688357707992989966561557171729972093533007476947862215922583204811189015550505642
082475400647639520782187776825395598257421714106473869797642678266380755873356747812273977691604147842741151722919464734890326772594979022403228191075586910464204870254674290437668861177639713112762996390246102030994917186957826982084194156870398312336059100521566034092740694642613192909850644003933745129291062576341213874815510099835708723355432970090139671120232910747665906191360160259512198160849784197597300106223945960886603127136037120000864968668651452411048372895607382907494278810971475663944948791458618662250238375166523484847507342040066801856222328988662049579299600545682490412754483621051190231623196265549391964259780178070495642538883789503379406531279338866955157646654913405181879254189185904298325865503395688786311067669273609670603076582607253527084977744533187145642686236350165593980428575119329911921382240780504527422630654086941060242757131313184709635181001199631726283364158943337968797uwb
> +    + 99354435180574564299271266552222578172075113116713358325600655730552766787479906529073488397418185627579390846490733481721083971838270203779417259831075136362874065305263582535084372902419372769083862829043530791029045356756086045764861629983194277028512784082136414548372230796164016158756724532501484216792238294178342275181330910551802702492661616766771761496751642576408123442979356507296298018787580599440901688627305198172033523414583103638114823180832702324343293173238228189911345006016698689223960135129694778394564723458123123219242152418497721476874557602245592409527373190093485408949663635681583495013552292646467700180715905024417027872690979739798998376831221941031100897284256766902460911469939550379184257728400222882228329325425160915011494771608565644643769102932300919635731192306480266678963993527909826119575699789720381785195702784475407075028616785026579051927432258932256639948075689186448982737022854836763857176511040420021053529931765121664200850644524317531813658058335489226767488904124203326946090968197797656003452163903943072575567782237434439589839621137231935512478979954237623480921038936837113738971391682894202676606114099476445487150077878329592511675531750966391476747761179731004479032436269028923822637675913280382357085934015637930194181244531663864717924684210038558942065843547314893636681340779462035460672372356577464802968316517917903859813975584589059046413942462797827467360091018623668680683634119763885576979219143171793712064440853907796348313697233700507646788528467793694972323747806919052809923680797627473522455196072641541971489588969556619042149091849522899961420506048216087499004178451377275969031004523500675513058409982804827752098832788730718955887518114623425178257534938149979184184374554749924222439195499673719644234574402872962708556058509546859126443033540190587169167355225330653230577554798036687825302503819882110750346557601232502494414406843384509538232903469096898225276526987235028723125703052611967684774988980207
93071808758903381796873868682378850925211629392760628685222745073544116615635557910805357623590218023715832716372532519372862093828545797325567803691998051785156065861566888871461130133522039321843439017964382030080752476709398731341173062430275003111954907627837208488348686666904765710656917706470924318432160155450726007668035494571779793129212242101293274853237850848806152774463689243426683295884648680790240363097015218347966399166380090370628591288712305133171869639679922854066493076773166970190482988828017031016891561971986279675371963020932469337264061317786330566839383989384760935590299287963546863848119999451739548405124001514033096695605580766121611440638549988895970262425133218159848061727217163487131806481686766843789971465247903534853837951413845786667122427182648989156599529647439419553785158561613114023267303869927565170507781782366447011340851258178534101585950081423437703778492347448230473897643505773957385504112182446690585033823747175966929091293693201061858670141209129091452861292276276012910624071241165402089161606944423826245461608594935732481900198240862293409442308800690019550831630479883000579884614601906961723011354449804576794339826056986957680090916046848673419723529694384653809400377218545075269148766129194637039408225515678013332188074997217667835494940043014917877438354902673107453164275280010251040360040937308738925689475725131639032011979009642713542292894219059352972933151112376197383814925363288670995556269447804994925086791728136906693249507115097807060365872110998210768336078389508724184863597285987736912073071980137162590779664675033429119327855307827174673749257462983054221631797527009987595732460222197367608440973488211898471439302051388806818521659685873672383828021329848153410204926607710971678268541677584421695238011784351386047869158787156634630693872428067864980320063293435887574745859067024988485742353278548704467544298793511583587659713711677065792371199329419372392720321981862269890024832348999865449339856339220386853162
641984444934998176248821703154774794026863423846665361147912580310179333239849314145158103813724371277156031826070213656189218428551171492579367736652650240510840524479280661922149370381404863668038229922105064658335083314946842545978050497021795217124947959575065471749872278802756371390871441004232633252611825748658593540667831098874027223327541523742857750954119615708541514145110863925049204517574000824797900817585376961462754521495100198829675100958066639531958106704159717265035205597161047879510849900587565746603225763129877434317949842105742386965886137117798642168190733367414126797929434627532307855448841035433795229031275545885872876848846666666475465866905332293095381494096702328649920740506658930503053162777944821433383407283155178707970906458023827141681140372968356084617001053870499079884384019820875585843129082894687740533946763756846924952825251383026364635539377880784234770789463152435704464616uwb;
> +constexpr unsigned _BitInt(16319) b
> +  = 2012974456709302727574100507062899826244916604651702690369568375585444875683436016651313240507879631460278199833070536840736748203015663720699487742558225012459510671839702819911277389210572747802962612254071867246681224417252196882500481259668419053440016929124501988666433463234720317290647183004791877987066729683082610876903638426760496950933639842151648267717069732314480723734513076773386141566503759124994849008586735618331910154116717658619505172176655219466753041714225055613389568844166340061301478127682539435897545896747514780658901350656941594549684113110073818042623846495062926837977401328562704962152919204773680308909275189151399260541908650258823333205729663856729030609391087874209350087386427717471941018364076582158058783196771670836397622553590531790813778049726744441676017664770583404699601082021249424408322225403770069952978999103344897991212850771034350046678683935107104578823920023197128887935206232962765408343031754983248314869651416635487070271657078325770796092742752947624962644423995181229310046503896380793929763990145608640845967729224907823058162403416008319843737453972867790630628996087360108370620188299924355402542995709161981294501843250330967434942751305776716075469122736533224184517579710671329559306363520265534427369543881068571245100335146946008558275274041472326409466596220514076382069177309078086642372799071132374851276652253785097659059865839797984521559502978275053714060358859221536360899243392228954223345810263425927575769044075430800959385523813722735179844648698115167276651371699802760221575125671937042939712954945912027720232711878874308099848347043619262539834005785039147890966818529063538042395540460721771095863605037373083846933637084503943194554332670057927091905288597536414142233108728887446228585863717662125514169826441290352267803331798917011588008151628409755930013350779947189532645733681517242115599552516878163513114399113641664201674494908232120468983986137626679548553217192382694248650291340028696394030948450748
41294235761567980449851987801590557885255383108780893978951751291620996718943375268012352804274283212053215307351082398485942787208393179217828313523635411999195575775975468767044626129049246944319030723328643414657452918667180676010414042124309419561774077634818455683391702241961931064630304090800731366054338697758609749399910085968749785062456897269667152066394382597246893010196922581169913176950122050361571770395369054940058339483843974464929181291852743598061454541482411319258385620699919348723293144520169007289481864773872231619941455512161560322110383194752708538186600790658951199233733174967771841773153459237877008039869651750332243754352492249491511910065745115190552207411746311658792996881181387283802195501430068948175222703384724138990797519173145057548020529886221743921352071397159602123468588824225432226214084338178171815952010864033683018390805924551154638294257081323458112709114569289613012652231019895244815217219698389802086475280385093285017054289507498200807204187767180841420865012674182842413703988685612822778483916738479372478731177199061034410155782451526731847195388960736972724752502612276856600589441070873337861047616243918161754143389992152601901625514893434363324926458870295519645788264321567008724592166058434638842283431671599247927524298160648414794381346627496216395602034438713268101298727635391142848113308052131887163334710697102705839458416263383617008464109277509166639083676831880841932583849351222366399343352841605220420650889234219286607240957260396428363435422114732823925543719730741087707974474486544283258452533048890620210315995314366067750293158496747562139889323496516405525718807804614521870944004084033098065076982300715848098616345960004253004858051748534067749613210550869956655138683822850483482642501743887931840935246756217625585377637472373144738831736866335762738369465072378806196276325430936192810966756438777492175884953832920787132302539935253262097328593018420164401890100277332349976577483512533596640188941973463272013032
58090754079801393874104215986193719394144148559622409051961205332355846077533183278890738832391535561074612724819789952480872328880408266970201766239451001690274739141595541572957753788951050043026811943691163688663710637928472363177936029259448725818579129920714382357882142208643606823754520733994646572586821541644398149238544337745998203264678454665487925173493921777764033537269522992103115842823750405588538846833724101543165897489915300004787110814394934465518176677482202804123781727309993329004830726928892557850582806559007396866888620985629055058474721708813614135721948922060211334334572381348586196886746758900465692833094336637178459072850215866106799456460266354416689624866015411034238864944123721969568161372557215009049887790769403406590484422511214573790761107726077762451440539965975955360773797196902546431341823788555069435728043202455375041817472821677779625286961992491729576392881089462100341878uwb
> +    / 42uwb;
> +constexpr unsigned _BitInt(16319) c
> +  = 2627723238202844734593528210036441397644224112049184868378010831834577492039736645259692442133560537468665927861231280160488737037607638644451145031889554569557078457728559890665090192944430229603341219963259499837606412471422041491392321377944430683327738899570355221943057508092711119541704691117701907071384712882644783009643200396240346365655860043111527324887717787506338111147788805979885801605021342047585162041301679344551753922701997368269944795232238874886098194759343298573068474608818358322518434782511069732797329482620522756442576995050342343559716596929997568140697461994153850282719374276045524526948313436094002393398634421757710211480013425387953089006436252036847553573885474180629254262438647346127462098789135554198787366415702252216790859116465478750185454645773734152676351670503270525404617292626896899730237926158293326447540206319154834398220123044550465903886878634766771065824008882586957518822701333555929857984594869031685661169338699069178282184753549263922342722336071299403357699039819716005178588903312503422373295445107642568145662820190407778445408938019617891232688714882277919865768923801049239387917048660480443720279128685203598258415997854171141708078702233889310111617197485227203208111457032709830592788093367164422712499016129834184132065358827179858664774934637061706717531616739388441411192187763820130361806747902516744652696423073279026156659099331588729055124861234915041751691870081387638886213162259403795550901639306851464525717952731771517301909073651455363860800457685618811852343438370264825681906854634504765306871991016557315452130240555278923555433311238016469207409201708360244091730009423821145079827430577389059424288159723322158221610051621240256968157188884332185128436961387931970990636909853580416806539421377497062712506466553607844415053343679608849108772605187964880430608648989400421470972621568268950495106988919175581833115553257437057292859210334414136689055281603126692202889361625299945232341786906694157966730634716135
72540792418096445006815472671637426015551116993769236905000141722943376810074187359103417921313777413085862282683858255797739853823398548217296703139254567248696079101149570408103776713947798346752251815365654448305519244177941397366865945576604838130455250898502853737564035949003922262966176561897745670199002376443298912801927760673401097511000258184731552675034906281464293064935209536776606120947583071904800720399805753234289940099824156768757863383436818507697242587247129471298448651825227005098698105411475159889557097847902482665935815324140919836703764265342890790987425495051276941605211107000354966589327240076217595000912275954778312003253352426141626242180107535863067944827325007651362995480529583458724884469690329738714185654845700964406091254014395163490619510733447727538171687315331867404492065331848584098243312698792767523028190759388941917646038806690598049147052029322201145747693079459384463557440930584834660987410296711333053084516015101240973366680443621409948422308953542320079361936106662152363513833307194967585770951024662357827008205759384537362775464459321351169479934043569758900517173041286931256999514457913288436686472454397979336913550157812380381485973398313483410497519572046808138551382722532342190304581641791953688888789893626405094864405301123376878901656468241523388852186116655679334236522366211688334975947629225865231515542443162840753649233162234577983369954402298016382490445558417866528687783338576262017126948239451462084125725679474030786551594481784674883356738538869821436078433691035049058370491470064133240872049239683471624063721463041102474362107043298380339675492960947089090423528079421653890543912176090846767654649978039004156532780412205864341336988026587267489501229801836150910290492429192984280667459371485938799945392542400702209006946622007417966326873734149528170009380939304973382591684396499709637744068334114311139221940827653902411617151061426386810728397640359768772231527278292484756399700297779005895953836049890990840812
51802305001465530685587689066710306032849298712531664047230963409638484129598076118133347670029704549206295184751171783054889490211218045322681317529569999778899567668829982207035948032411418382057247326141072264502161892285323531743728756335449414720326329614400327415751813608405440522389476951223717685562226240221655814783640319063683104993438443847695342093582440489676230855515734722099028773790309518629302472390856918840009781940193713784596688294176313226823907143925396584175086934911386332502448539920116580493698106175151294846382915609543814748269873022997601962804377576934064368480060369871027634248583037300264157126892396407333810094970488786868749240778818119777818968060847669660858189435863648299750130319878885182309492320093569553086644726783916663680961005542160003603514646606310756647257217877792590840884087816175376150368236330721380807047180835128240716072193739218623529235235449408073833764uwb
> +    >> 171;
> +static_assert (a == 104035420857591336912033421371590282594618949554383312108016658002346729621809075187886810556089250519171906621444454338355954895015702651485390136163065190112858618641136386109985872833437486689598700444003401873678692740127267597323488784372301493640816109413989770365948235914632557318083097152197815560450925247817487980962431555270487460906147510436108215606628642367209525571478447319178007123437255461754491040756276160778293853969944521994107668165580080909219877874389675909142493269139537319578997141131109185638828370454486425623384865174757934426268782434751788699586973112527672021250884962359281306851455680239926549218932860934332800157896216992819480531309637672169509013220640901153010293602569164862363243469805553782278256652310412065059324510541006558913773071836572441888817803096026977339656338065485757937114708441754772139220505848611129471133288210945787143801106639643957649643750089633363257616620711210147673689610208240657756390397240974072579773716233606026672429926268296302775897571958921318427883476381674817834725397365938406450201416660996627627636591194825179616243748506461832243545298792556941920774930386995700918751557229609297482592012844571824711539561199462616370967837965380466227011364219922232817993923191055635664980861051383571316710796009373294015540140253547252984531426294838428740382913074312079481982803891120368782262189281658453245603744370653731220007929305548332658404230161483909748764797526886616171252842080203307267047802985614785292797750927688079532020133070720843730902547488654836091837262957352408655168174828985549904508881470084841628509248358099730200427604502324472378371963783881354830840550283964082492144250192317778240548213267387289246616026089053186647210476788087349179239231212178037360393250806415718124792602001890826476776753802976571746074226864955627812026048845827274064635453082368009374634931994210204908452039407820006431337134139246837958889488378808917503076669575388359877722654232034703
20354145742841869795472799186154631385288573730129094228733379855432514817031425884584962254283999586850250406406681047191820544352342046667950146374296364655891915135310082529994904874562441551527081311638121766367661807914647092917287784017613115795691373814041086838720316968010349263776702775009771662737124600992709418630470128579612748138807983617697487500079502839532266478317788699680283395230308668613168191852557234122469290277763000256531531071762280960597416576452124575885006363492171314551026369237325119844147154972582617127637240421323781252125819313268498872048683068789228870983086306586111793007178693570562554975762384431236664489360478109692520183356042112794589756922036102025380888246082763911915622037570736969677850621708281909652070776450422110772285659921383413532725137107621514770958361581240471968542997294446402584844918179956881219978405772785713402046471903103404871352324277109089891640558983922159359479964068994923538490500501798825116238188381267330618026093160290205596669795981834842352271011063939632623926629960113926326029952143452354640614061049438932665467928443113232214498101774523178129020155017228802221901469548072234073334681052461327832268955923701109732874360984002493130025470753861967432493102395766279717815113135763810886216491770265724160887688887515282293447287121039545323777928286876711267049135547760773655845950622676327972280622345486253084626121247885891757458308974259466441284967765824561478351421051923081842594791616249682768594796413184742007504540382141773556098929461233842797978566466734240436032269122908057438314319410489575244845739320693764798687398942275314333361838560358278583766983210126081046020231469705836544611252075187733112560778125560225565803349953151880800601890382648216375737077015744684142132303864494083237680306898134033570758401131735819237730280209424231954121970154195575070728876653187928423918894211617093567094857926079694003950142962763480728907322409338954277493711834363423032309296862081371923061
150409402403668284066920335645815769603890931600189625120845560771835017710222988445713995722670892970377791415975424998772977793133120924108755323766471601770964843725827421304729349535336212587039242582503381150992918495310760366078232133800372960134691178665615437284018675587037783965019497398984583781291648236566997741116811234934754542646608973862932050896956712947890625239848619289180051302224085308716715734850608995498117691600907423641124622236235949675965926735290984369155077055324647942699875972019355174794849379024365265476001505043957802797349447782453767742359446787304217770032967959809288342189111153359045680464231699344620995535326063943372491385550455978845273436611631962336651743357242055102619760848116407351488643448217122169718350824452317641509534606434395208225350712889271762643740106849245478364448395994915755050465135468245061369394410933866013068008514339549345174558881983866497072827311379042433413uwb);
> +static_assert (b == 479279632549833982755738215967357101486884905869453021516563898948915446591294289678884104882828483681018619007873937343032559095956110409690354224418625002966550159961834004740780330764422082810229193393826635058733624861250523067262019347540099774628575459315357616349150824579695313640630281667807589996920649924543478780215152006371546893079438057655154349456445174360590648508217399231758605134881847410713059287758746575793311941456361347290358374327775052253988819455767870384140373534325319062214637649448223675213701403987503519204500321584986093940400979311922337629196153927395934961423190792514929752893552191612781025930779806940809347748073488156862698382316586632554531097474068541478416687472958980350462147229542043370966376951612302580094672036569174235908042392792082009922861348754900810642762162386011767716267196524707159512614047405558309045526869231198654773018734270263596328291409529332649735222668150705420335319769465472201979730869384913211207207537399601373999069700655463720229201053332186006978582500927709712840419997653716343058563745053549481680514857956192457105651774755444712054911665735085740088242901976172465572034046597419519355833772202459754151176845548994456208445029222984100996313709454921745133168181790539412958897510447873469344071508368320478228160779533683887240349189576312875329064089835494782533898285493126755916970631488996451823585682342809043933704643566255965170014371156957508657356962712435465291272811967482363708516439065578762133187029479457794090439202070979801732536040880905419100375029921889772128503084510931435171483979018779597166630558819909348223770001377390273307373052030729413819617985823981374070443715485088829487365151686786653141560555397632839783786973475603908129103121125925582435377586599443363217659482486021512444715078999742145616192417054383275221431750185701711793487079447980295741809417265923372265027237884200396238493927359102885825948568128006352273465051712472070059202450319054451
52238832105970200308151371801900107107616143235847115536995978281165233083750307528808742605565540002941143874829336203146501750257713925224473144855518861387693696103669523617994232375111611201101459297439748647388267459200813013679266349328732383431914791502242752803351817813918019855167200467126443959596212095412230012937785180621368904740496659226139300584975540396940968189138713630212621475457757421407899273838583419421850094135489271442461781867612967840281259964938951919393938448193171251996576357123654457926939171468811259400443993779102766652727502895609600502472189226835366234904950156893142674698374992326628993607966485208811438064202797698153274845831487974169502396605979807274335098034836109236427828852711258048141786054778320994100643663029556902570837898367870844766792830052796171750493189799905267492521148625102911003353413851945670464764491436591194854953791559798723403394543172251931597408230783241193488626433308391622670766594854714782494114377403163099298640358928143049334330420757343195444050636710200574691425877526862566305694461542707733031232666443103430989472012268269487427473562080231601131548241018299190616533588303175681201813391409086131938902379083952833720360688912943648792014016737028487092443886087383029664801442484437819591293255142678077981975752535336855805082530356241998952865342550778119356839913188367344788882869555211229365407308833977580823432443662765954396216494645039675972304007590676650615202226481515809367464962286957243012116484337925382676418395332482943675100503507815220367552316843116120946303449177210299631555487831100050075236979610968511974561546844657652354600832503906077552097096336790921653334305722166205970710071599011452051510942858155477347155178222397083241240607349989679794924719726305591105357558068555200222677799099434663185151779136463033055175444365657794849872636280668141970553674032426859753989628280355279972608055457330269595842841726967166030617385338134381402404827936273803947019883936570628616414
7555864933364363287875097138128425573909904433183795098670203800533548856219174579901097084123411402160448390274656216062207733804522678116007830485911118338137291415500040244636646228465275546613185451215477214924093897408659253897872331630294361379429268082112519489979283826532913282908147824847781517964779380824918394924322420104717839012960422523766744397106063463998218416521947089619846125464833145312281971994057275917591591279145274837283273569411904875883590818927011083766111368623876288661469697856984023924541117354584710728162060928747544449729071086406072820826707352705098469570212430005031769870770984490147544922541878582516496026055634218534739829767044431114272772863484628968800592047985977005687260574374332608765746965647976405949709304033414442630581488362251756922883517287565772653346189666094175256518980878632057889091042584644510374477219106080358138511257658994752983022904583136418485544787844335722425uwb);
> +static_assert (c == 877910742369718983756939060508412117978592490852198574421032559122366792451965262581837372001950924590370700613263257217338625506420135573519875944068826251478098411179104273956630178489731637399492219205096327228843406034228851197156969768002652376081122551643005269975404468281889267981913199560021628096606273673238473241136165744439969588386509610342875962281386773554725997852931943688986401368721939056760428331801110079995345152096844126486603181395448862805847511434872927541414315891787470959955624718369585383855232108897344587608804255681047991066144937466199967508281110381445335329419488661296149273726327727155188903861073076047845956925614932199835041402306636381498931110972831171298902299624728018258792144918535392288593787760450040073877424000870994528979160501117773965772018160145351225988200456446241582865271490428972723521053727772138981668764336614520000117771211219751569557888748379298875543540138845614585448888053708836039799464321601482849566246020568644854811322984109795561395844090137541625653286451185229869632761151723324132479907091949128642615978879263172383371745153804343736401718523774318240283567008768312560264031888745159665032352872012818819854727046297161215760348795852670500595558040944167077184938801643803585019458587032701340923623673091421772202565531947223114166679028795568571363627465356557745427583859035080616863916526467647044093035161299251890466464771580586594103842376837684669781754312240951759171729223874594034590053045855146851924576786453174210217862885437652451336798320918697457576570727397377538684008123880388033509574083638652720826731180897352245039118905573982893693735969316724052466062494585690704204125734719208698400964098450932262250389025604632476834163264354645577903537600206169111312123427316493798417177424232776991568874256404945416315831812181858276477526809129247088908844557510802268806927169719828315146964540087050700666379933066170270274744325422047831105640722074964810312343547338158352
08730552187341151209786784404558964588524975699899667232359656087068265936071288476301376185091512558347426364387962855698738699677293418712135210300114273729873885726742284413334588575122260492832433475214578049120087810369667863747603253414920332978483681609032604700190675353306116459095608887974519070883897641904030079983051686730294469340122451388381805960985594425706961500112962181441863870246158853022907449053406669059217439700137798133324937711920480432972814232484890568414170138076703081910957324642214513769972707454684597021527968182227457305657212026631030431211601014598336832495586844591088625369619943085350399708145578212681703887459419803788389699105928956705542918117397687718299410438578196037512469579622360911547558939620383631206904838624230010389486206816112538671492964636904178280343035479227922490985224047514289607138750504639061341508460897057144703039182990126916002853558594129248477604970769784327224466025218250890974545423433548473473960450795877572106353569992687064654257888333111905176230618606752300109941271964590303221667515716566423216907874719066094734960347896437104781622556640929912514467878876353518529338268207197817337545781610734013626681098191139242522911257413952714743423055745369749182739385135974189637873089945934341918906877303024959106860723388364131591622810722635427582576995880898386774693974678993480652935817510358443898483871618474351603272760666036831317032464104091228327933767515126887451955640216460692459923633964681005135362116514506105233152116971257746388453132439730835364176920759624869188446674321443530197229596536386329482940499842668618701512553150233467246714304992579939580490880661608705450252765979751548555376202656903540410287427427550743965976319653203807825009445684240534200383575249171250992413349900321895264658381929721109708613800609868020819480443455264148571585699390058952366723063443482128058512699207110438913068758730163306016739732493270725035718735183667505750700910512885907647886301909667768540315
78939382690709022667421734442841784680826494146620589862829612704279521637740421694195051400095278084716974624615208392585573200182664157066813849346058321763156523965698465901396025152159642193562900743812715885811057212579017860488539960334406702752688595217360219470968738009774067915037157027492209108801337707562571266897723911401203374308490793226200974353356835311756384895692909802720948968131504604855466961987314701846460342135201914356152591684810924688350929140120187693089324255924634578576427004426339299493833434502951593902551451002292839635000904253250021884625417628756439862964325562720709528784964868687330847894476999577326582332350213148861205413652337499383416531545707272907994755638339630221576707954964236210962693804639714754668679841134928393081284209158098202683744650513918920168330598432362389777471870631039488408769354863001967531729415686631571754649uwb);
> +#endif
> 
> 	Jakub
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int
  2023-09-28 14:03       ` [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int Jakub Jelinek
  2023-09-28 15:53         ` Aldy Hernandez
@ 2023-09-29  8:24         ` Jakub Jelinek
  2023-09-29  9:25           ` Richard Biener
  2023-09-29  9:49         ` Richard Biener
  2023-10-05 15:11         ` Jakub Jelinek
  3 siblings, 1 reply; 16+ messages in thread
From: Jakub Jelinek @ 2023-09-29  8:24 UTC (permalink / raw)
  To: Richard Biener, Richard Sandiford, Aldy Hernandez, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 2825 bytes --]

On Thu, Sep 28, 2023 at 04:03:55PM +0200, Jakub Jelinek wrote:
> Bet we should make wide_int_storage and widest_int_storage GTY ((user)) and
> just declare but don't define the handlers or something similar.

That doesn't catch anything, but the following incremental patch compiles
just fine, proving we don't have any wide_int in GC memory anymore after
the wide_int -> rwide_int change in dwarf2out.h.
And the attached incremental patch on top of it which deletes even
widest_int from GC shows that we use widest_int in GC in:
omp_declare_variant_entry::score
omp_declare_variant_entry::score_in_declare_simd_clone
nb_iter_bound::member
loop::nb_iterations_upper_bound
loop::nb_iterations_likely_upper_bound
loop::nb_iterations_estimate
ipa_bits::value
ipa_bits::mask
so pretty much everything I spoke about (except I thought loop has
2 such members when it has 3).
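The "= delete" overload trick relied on here can be modeled in a few self-contained C++ lines (the types and names below are illustrative, not GCC's actual GTY machinery):

```cpp
#include <cassert>

// GC marking is dispatched through per-type overloads; deleting the
// overload for a type turns any template-instantiated attempt to mark
// that type into a compile-time error, while other types keep working.
static int marked = 0;

struct gc_ok {};	// fine to put in GC memory
struct gc_banned {};	// must never appear in GC memory

inline void mark (gc_ok *) { ++marked; }
void mark (gc_banned *) = delete;

template <typename T>
void
mark_in_gc (T *p)
{
  mark (p);	// fails to compile when instantiated with gc_banned *
}
```

Calling mark_in_gc on a gc_banned pointer anywhere in the program is rejected at compile time, which is exactly how the deleted gt_ggc_mx/gt_pch_nx declarations "prove" no such type is reachable from GC memory.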

--- gcc/wide-int.h	2023-09-28 14:55:40.059632413 +0200
+++ gcc/wide-int.h	2023-09-29 09:59:58.703931879 +0200
@@ -85,7 +85,7 @@
      and it always uses an inline buffer.  offset_int and rwide_int are
      GC-friendly, wide_int and widest_int are not.
 
-     3) widest_int.  This representation is an approximation of
+     4) widest_int.  This representation is an approximation of
      infinite precision math.  However, it is not really infinite
      precision math as in the GMP library.  It is really finite
      precision math where the precision is WIDEST_INT_MAX_PRECISION.
@@ -4063,21 +4063,61 @@
   return wi::smod_trunc (x, y);
 }
 
-template<typename T>
+void gt_ggc_mx (generic_wide_int <wide_int_storage> *) = delete;
+void gt_pch_nx (generic_wide_int <wide_int_storage> *) = delete;
+void gt_pch_nx (generic_wide_int <wide_int_storage> *,
+		gt_pointer_operator, void *) = delete;
+
+inline void
+gt_ggc_mx (generic_wide_int <rwide_int_storage> *)
+{
+}
+
+inline void
+gt_pch_nx (generic_wide_int <rwide_int_storage> *)
+{
+}
+
+inline void
+gt_pch_nx (generic_wide_int <rwide_int_storage> *, gt_pointer_operator, void *)
+{
+}
+
+template<int N>
+void
+gt_ggc_mx (generic_wide_int <fixed_wide_int_storage <N> > *)
+{
+}
+
+template<int N>
+void
+gt_pch_nx (generic_wide_int <fixed_wide_int_storage <N> > *)
+{
+}
+
+template<int N>
+void
+gt_pch_nx (generic_wide_int <fixed_wide_int_storage <N> > *,
+	   gt_pointer_operator, void *)
+{
+}
+
+template<int N>
 void
-gt_ggc_mx (generic_wide_int <T> *)
+gt_ggc_mx (generic_wide_int <widest_int_storage <N> > *)
 {
 }
 
-template<typename T>
+template<int N>
 void
-gt_pch_nx (generic_wide_int <T> *)
+gt_pch_nx (generic_wide_int <widest_int_storage <N> > *)
 {
 }
 
-template<typename T>
+template<int N>
 void
-gt_pch_nx (generic_wide_int <T> *, gt_pointer_operator, void *)
+gt_pch_nx (generic_wide_int <widest_int_storage <N> > *,
+	   gt_pointer_operator, void *)
 {
 }
 


	Jakub

[-- Attachment #2: 2 --]
[-- Type: text/plain, Size: 758 bytes --]

--- gcc/wide-int.h.jj	2023-09-29 09:59:58.703931879 +0200
+++ gcc/wide-int.h	2023-09-29 10:05:27.653317149 +0200
@@ -4103,23 +4103,12 @@ gt_pch_nx (generic_wide_int <fixed_wide_
 }
 
 template<int N>
-void
-gt_ggc_mx (generic_wide_int <widest_int_storage <N> > *)
-{
-}
-
+void gt_ggc_mx (generic_wide_int <widest_int_storage <N> > *) = delete;
 template<int N>
-void
-gt_pch_nx (generic_wide_int <widest_int_storage <N> > *)
-{
-}
-
+void gt_pch_nx (generic_wide_int <widest_int_storage <N> > *) = delete;
 template<int N>
-void
-gt_pch_nx (generic_wide_int <widest_int_storage <N> > *,
-	   gt_pointer_operator, void *)
-{
-}
+void gt_pch_nx (generic_wide_int <widest_int_storage <N> > *,
+		gt_pointer_operator, void *) = delete;
 
 template<int N>
 void

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int
  2023-09-28 15:53         ` Aldy Hernandez
@ 2023-09-29  8:37           ` Jakub Jelinek
  2023-09-29 12:04             ` Aldy Hernandez
  0 siblings, 1 reply; 16+ messages in thread
From: Jakub Jelinek @ 2023-09-29  8:37 UTC (permalink / raw)
  To: Aldy Hernandez
  Cc: Richard Biener, Richard Sandiford, Andrew MacLeod, gcc-patches

On Thu, Sep 28, 2023 at 11:53:53AM -0400, Aldy Hernandez wrote:
> > ipa_bits is even worse, because unlike niter analysis, I think it is very
> > much desirable to support IPA VRP of all supported _BitInt sizes.  Shall
> > we perhaps use trailing_wide_int storage in there, or conditionally
> > rwidest_int vs. INTEGER_CSTs for stuff that doesn't fit, something else?
> 
> BTW, we already track value/mask pairs in the irange, so I think ipa_bits
> should ultimately disappear.  Doing so would probably simplify the code
> base.

Well, having irange in GC memory would be equally bad; it has
non-trivial destructors (plus it isn't meant to be space efficient either,
right?).
Though, perhaps we should use value-range-storage.h for that now that it
can store a value/mask pair as well?  Either tweak it on the IPA side
such that everything is stored together (both the IPA VRP and IPA bit CCP
results) or, say, use vrange_storage with zero (or one dummy) ranges plus
the value/mask pair.
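For context, the value/mask pair mentioned above can be modeled in a few standalone lines.  The semantics sketched here (a set mask bit means "unknown") follow the usual bit-CCP convention; the struct and function names are illustrative, not GCC's actual irange_bitmask API:

```cpp
#include <cassert>
#include <cstdint>

// A value/mask ("known bits") pair: a set bit in `mask' means that bit
// is unknown; where `mask' is clear, `value' gives the known bit.
struct bit_pair { uint64_t value, mask; };

// Combine two pairs across a bitwise AND: a result bit is known 1 only
// if it is known 1 in both operands, and known 0 if it is known 0 in
// either operand.
static bit_pair
bit_and (bit_pair a, bit_pair b)
{
  uint64_t known_one = (a.value & ~a.mask) & (b.value & ~b.mask);
  uint64_t known_zero = (~a.value & ~a.mask) | (~b.value & ~b.mask);
  uint64_t mask = ~(known_one | known_zero);
  return { known_one, mask };
}
```

For fully known operands this degenerates to a plain AND; an all-unknown operand ANDed with a known zero still yields a known zero, which is the kind of information a range alone cannot express.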

	Jakub


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int
  2023-09-29  8:24         ` Jakub Jelinek
@ 2023-09-29  9:25           ` Richard Biener
  0 siblings, 0 replies; 16+ messages in thread
From: Richard Biener @ 2023-09-29  9:25 UTC (permalink / raw)
  To: Jakub Jelinek; +Cc: Richard Sandiford, Aldy Hernandez, gcc-patches

On Fri, 29 Sep 2023, Jakub Jelinek wrote:

> On Thu, Sep 28, 2023 at 04:03:55PM +0200, Jakub Jelinek wrote:
> > Bet we should make wide_int_storage and widest_int_storage GTY ((user)) and
> > just declare but don't define the handlers or something similar.
> 
> That doesn't catch anything, but the following incremental patch compiles
> just fine, proving we don't have any wide_int in GC memory anymore after
> the wide_int -> rwide_int change in dwarf2out.h.
> And the attached incremental patch on top of it which deletes even
> widest_int from GC shows that we use widest_int in GC in:
[..]

> nb_iter_bound::member
> loop::nb_iterations_upper_bound
> loop::nb_iterations_likely_upper_bound
> loop::nb_iterations_estimate

I think those would better be bound to max-fixed-mode; they were
HWI at some point (even that should be OK, but of course the
non-likely upper_bound needs to be conservative).  Using
widest_int here, esp. on non-x86, is quite wasteful.  The functions
setting these then need to be careful with overflows.
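The overflow caution can be illustrated with a small hedged sketch (plain C++; unsigned __int128 is a GCC/Clang extension standing in for the wider widest_int value, and the function name is made up for illustration):

```cpp
#include <cassert>
#include <cstdint>

// When recording an iteration upper bound in a narrower fixed-precision
// type, saturate rather than truncate, so the stored bound stays
// conservative (never smaller than the true bound).
static uint64_t
record_upper_bound (unsigned __int128 bound)
{
  const uint64_t max = UINT64_MAX;
  return bound > max ? max : (uint64_t) bound;
}
```

Truncating instead of saturating would silently turn a huge bound into a small one, i.e. a non-conservative (wrong) upper bound.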

> so pretty much everything I spoke about (except I thought loop has
> 2 such members when it has 3).
> 
> --- gcc/wide-int.h	2023-09-28 14:55:40.059632413 +0200
> +++ gcc/wide-int.h	2023-09-29 09:59:58.703931879 +0200
> @@ -85,7 +85,7 @@
>       and it always uses an inline buffer.  offset_int and rwide_int are
>       GC-friendly, wide_int and widest_int are not.
>  
> -     3) widest_int.  This representation is an approximation of
> +     4) widest_int.  This representation is an approximation of
>       infinite precision math.  However, it is not really infinite
>       precision math as in the GMP library.  It is really finite
>       precision math where the precision is WIDEST_INT_MAX_PRECISION.
> @@ -4063,21 +4063,61 @@
>    return wi::smod_trunc (x, y);
>  }
>  
> -template<typename T>
> +void gt_ggc_mx (generic_wide_int <wide_int_storage> *) = delete;
> +void gt_pch_nx (generic_wide_int <wide_int_storage> *) = delete;
> +void gt_pch_nx (generic_wide_int <wide_int_storage> *,
> +		gt_pointer_operator, void *) = delete;
> +
> +inline void
> +gt_ggc_mx (generic_wide_int <rwide_int_storage> *)
> +{
> +}
> +
> +inline void
> +gt_pch_nx (generic_wide_int <rwide_int_storage> *)
> +{
> +}
> +
> +inline void
> +gt_pch_nx (generic_wide_int <rwide_int_storage> *, gt_pointer_operator, void *)
> +{
> +}
> +
> +template<int N>
> +void
> +gt_ggc_mx (generic_wide_int <fixed_wide_int_storage <N> > *)
> +{
> +}
> +
> +template<int N>
> +void
> +gt_pch_nx (generic_wide_int <fixed_wide_int_storage <N> > *)
> +{
> +}
> +
> +template<int N>
> +void
> +gt_pch_nx (generic_wide_int <fixed_wide_int_storage <N> > *,
> +	   gt_pointer_operator, void *)
> +{
> +}
> +
> +template<int N>
>  void
> -gt_ggc_mx (generic_wide_int <T> *)
> +gt_ggc_mx (generic_wide_int <widest_int_storage <N> > *)
>  {
>  }
>  
> -template<typename T>
> +template<int N>
>  void
> -gt_pch_nx (generic_wide_int <T> *)
> +gt_pch_nx (generic_wide_int <widest_int_storage <N> > *)
>  {
>  }
>  
> -template<typename T>
> +template<int N>
>  void
> -gt_pch_nx (generic_wide_int <T> *, gt_pointer_operator, void *)
> +gt_pch_nx (generic_wide_int <widest_int_storage <N> > *,
> +	   gt_pointer_operator, void *)
>  {
>  }
>  
> 
> 
> 	Jakub
> 

-- 
Richard Biener <rguenther@suse.de>
SUSE Software Solutions Germany GmbH,
Frankenstrasse 146, 90461 Nuernberg, Germany;
GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int
  2023-09-28 14:03       ` [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int Jakub Jelinek
  2023-09-28 15:53         ` Aldy Hernandez
  2023-09-29  8:24         ` Jakub Jelinek
@ 2023-09-29  9:49         ` Richard Biener
  2023-09-29 10:30           ` Richard Sandiford
  2023-10-05 15:11         ` Jakub Jelinek
  3 siblings, 1 reply; 16+ messages in thread
From: Richard Biener @ 2023-09-29  9:49 UTC (permalink / raw)
  To: Jakub Jelinek; +Cc: Richard Sandiford, Aldy Hernandez, gcc-patches

On Thu, 28 Sep 2023, Jakub Jelinek wrote:

> Hi!
> 
> On Tue, Aug 29, 2023 at 05:09:52PM +0200, Jakub Jelinek via Gcc-patches wrote:
> > On Tue, Aug 29, 2023 at 11:42:48AM +0100, Richard Sandiford wrote:
> > > > I'll note tree-ssa-loop-niter.cc also uses GMP in some cases, widest_int
> > > > is really trying to be poor-mans GMP by limiting the maximum precision.
> > > 
> > > I'd characterise widest_int as "a wide_int that is big enough to hold
> > > all supported integer types, without losing sign information".  It's
> > > not big enough to do arbitrary arithmetic without losing precision
> > > (in the way that GMP is).
> > > 
> > > If the new limit on integer sizes is 65535 bits for all targets,
> > > then I think that means that widest_int needs to become a 65536-bit type.
> > > (But not with all bits represented all the time, of course.)
> > 
> > If the widest_int storage would be dependent on the len rather than
> > precision for how it is stored, then I think we'd need a new method which
> > would be called at the start of filling the limbs where we'd tell how many
> > limbs there would be (i.e. what will set_len be called with later on), and
> > do nothing for all storages but the new widest_int_storage.
> 
> So, I've spent some time on this.  In the patch, wide_int is a storage with a
> fixed or variable number of limbs (aka len) depending on precision (precision >
> WIDE_INT_MAX_PRECISION means a heap-allocated limb array, otherwise it is
> inline), while widest_int always has a very large precision
> (WIDEST_INT_MAX_PRECISION, currently defined to the INTEGER_CST-imposed
> limitation of 255 64-bit limbs) but uses an inline array for lengths of up to
> WIDE_INT_MAX_PRECISION bits and, for larger ones, uses a heap-allocated array
> of limbs, similarly to wide_int.
> These changes obviously make both wide_int and widest_int non-POD: not
> trivially default constructible, copy constructible, destructible, or
> copyable, so not a good fit for GC and some vec operations.
> One common use of wide_int in GC structures was in dwarf2out.{h,cc}; but as
> large _BitInt constants don't appear in RTL, we really don't need such large
> precisions there.
> So, for wide_int the patch introduces rwide_int, restricted wide_int, which
> acts like the old wide_int (except that it is now trivially default
> constructible and has assertions that the precision isn't set above
> WIDE_INT_MAX_PRECISION).
> For widest_int, the nastiness is that because it always has huge precision
> of 16320 right now,
> a) we need to be told upfront in wide-int.h before calling the large
>    value internal functions in wide-int.cc how many elements we'll need for
>    the result (some reasonable upper estimate is fine)
> b) various of the wide-int.cc functions were lazy, assumed the precision is
>    small enough and often used up to that many elements, which is
>    undesirable; so the patch now tries to decrease that and use xi.len
>    etc. based estimates instead where possible (sometimes only if precision
>    is above WIDE_INT_MAX_PRECISION)
> c) with the higher precision, behavior changes for lrshift (-1, 2) etc. or
>    unsigned division with dividend having most significant bit set in
>    widest_int - while such values were considered to be above or equal to
>    1 << (WIDE_INT_MAX_PRECISION - 2), now they are with
>    WIDEST_INT_MAX_PRECISION and so much larger; but lrshift on widest_int
>    is I think only done in ccp and I'd strongly hope that we treat the
>    values as unsigned and so usually much smaller length; so it is just
>    when we call wi::lrshift (-1, 2) or similar that results change.
> I've noticed that for wide_int or widest_int references even simple
> operations like eq_p liked to allocate and immediately free huge buffers,
> which was caused by wide_int doing allocation on creation with a particular
> precision and e.g. get_binary_precision running into that.  So, I've
> duplicated that to avoid the allocations when all we need is just a
> precision.
> 
> The patch below doesn't actually build anymore since the vec.h asserts
> (which point to useful stuff though), so temporarily I've applied it also
> with
> --- gcc/vec.h.xx	2023-09-28 12:56:09.055786055 +0200
> +++ gcc/vec.h	2023-09-28 13:15:31.760487111 +0200
> @@ -1197,7 +1197,7 @@ template<typename T, typename A>
>  inline void
>  vec<T, A, vl_embed>::qsort (int (*cmp) (const void *, const void *))
>  {
> -  static_assert (vec_detail::is_trivially_copyable_or_pair <T>::value, "");
> +//  static_assert (vec_detail::is_trivially_copyable_or_pair <T>::value, "");
>    if (length () > 1)
>      gcc_qsort (address (), length (), sizeof (T), cmp);
>  }
> @@ -1422,7 +1422,7 @@ template<typename T>
>  void
>  gt_ggc_mx (vec<T, va_gc> *v)
>  {
> -  static_assert (std::is_trivially_destructible <T>::value, "");
> +//  static_assert (std::is_trivially_destructible <T>::value, "");
>    extern void gt_ggc_mx (T &);
>    for (unsigned i = 0; i < v->length (); i++)
>      gt_ggc_mx ((*v)[i]);
> hack.  The two spots that trigger are tree-ssa-loop-niter.cc doing qsort on
> widest_int vector (to be exact, swapping elements in the vector of

For this (besides choosing a fixed smaller widest_int as indicated in
the other mail) sorting could be done indirectly by sorting a
[0, 1, 2, ..., n-1] index vector instead.
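The indirect-sort idea can be sketched in standalone C++ (std::sort standing in for gcc_qsort; the function name is illustrative):

```cpp
#include <algorithm>
#include <numeric>
#include <string>
#include <vector>

// Sort a vector of non-trivially-copyable values without moving the
// values themselves: sort a [0, 1, ..., n-1] index vector instead and
// read (or permute) through the indices afterwards.
static std::vector<size_t>
indirect_sort_order (const std::vector<std::string> &v)
{
  std::vector<size_t> idx (v.size ());
  std::iota (idx.begin (), idx.end (), 0);
  std::sort (idx.begin (), idx.end (),
	     [&v] (size_t a, size_t b) { return v[a] < v[b]; });
  return idx;
}
```

Since only size_t indices are swapped, the trivially-copyable requirement of the qsort assert is satisfied even when the element type is not.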

> And, now the question is what to do about this.  I guess for omp_general
> I could just use generic_wide_int <fixed_wide_int_storage <1024> > or
> something similar, after all the widest_int wasn't really great when it
> had maximum precision of WIDE_INT_MAX_PRECISION, different values on
> different targets, it has very few uses and is easy to change (thinking
> about this, makes me wonder what we do for offloading if offload host
> has different WIDE_INT_MAX_PRECISION from offload target).
> 
> But the more important question is what to do about loop/niters analysis.
> I think for number of iteration analysis it might be ok to punt somehow
> (if there is a way to tell that number of iterations is unknown) if we
> get some bound which is too large to be expressible in some reasonably small
> fixed precision (whether it is WIDE_INT_MAX_PRECISION, or something
> different is a question).  We could either introduce yet another widest_int
> like storage which would have still WIDEST_INT_MAX_PRECISION precision, but
> would ICE if length is set to something above its fixed width.  One problem
> is that the write_val estimations are often just conservatively larger and
> could trigger even if the value fits in the end.  Or we could use
> generic_wide_int <fixed_wide_int_storage <WIDE_INT_MAX_PRECISION> > (perhaps
> call that rwidest_int), the drawback would be that it would be slightly harder
> to use as it has different precision from widest_int, we'd need to do some
> from on it or the like.  Plus I really don't know the niters code to know
> how to punt.

I think when widest_int is no longer bound by something like the
largest integer mode but now has to cater for arbitrarily large _BitInt,
we have to get rid of widest_int or we have to make it variable-precision
and reallocate it like auto_vec<T, n>.

For GC we can have the storage still heap allocated but of course
CTOR/DTOR is going to be a pain (so better not use widest_int in GC).
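The auto_vec-like storage suggested above (inline small buffer with a heap fallback) can be sketched as follows; this is purely illustrative, not the patch's actual widest_int_storage, and it echoes the thread's point that write_val must be told the needed length up front:

```cpp
#include <cstdint>

// Hypothetical limb storage: lengths up to inline_len live in an
// inline buffer, larger ones in a heap-allocated array.  Not
// copy-safe as written; a real implementation would also define
// copy/move operations.
struct limb_storage
{
  static const unsigned inline_len = 8;
  unsigned len = 0;
  uint64_t *heap_buf = nullptr;
  uint64_t inline_buf[inline_len];

  // Caller announces the needed number of limbs before filling them.
  uint64_t *
  write_val (unsigned n)
  {
    len = n;
    if (n <= inline_len)
      return inline_buf;
    delete[] heap_buf;
    heap_buf = new uint64_t[n];
    return heap_buf;
  }

  const uint64_t *
  val () const
  { return len <= inline_len ? inline_buf : heap_buf; }

  ~limb_storage () { delete[] heap_buf; }
};
```

The non-trivial destructor is exactly what makes such a type unsuitable for GC memory, as discussed above.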

> ipa_bits is even worse, because unlike niter analysis, I think it is very
> much desirable to support IPA VRP of all supported _BitInt sizes.  Shall
> we perhaps use trailing_wide_int storage in there, or conditionally
> rwidest_int vs. INTEGER_CSTs for stuff that doesn't fit, something else?

trailing_wide_int storage is the way to go here
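The "trailing" storage idea can be modeled with the classic trailing-array idiom (a hedged sketch, not GCC's actual trailing_wide_int API):

```cpp
#include <cstddef>
#include <cstdint>
#include <new>

// The limbs live directly after the header in a single allocation, so
// there is no embedded pointer for GC/PCH to chase and no destructor
// is needed.  Indexing past val[0] is the usual trailing-array idiom.
struct trailing_ints
{
  unsigned short precision;
  unsigned char len;
  uint64_t val[1];	/* really 'len' limbs follow the header */

  static trailing_ints *
  make (unsigned short prec, unsigned char n)
  {
    size_t sz = offsetof (trailing_ints, val) + n * sizeof (uint64_t);
    void *mem = ::operator new (sz);
    trailing_ints *p = new (mem) trailing_ints;
    p->precision = prec;
    p->len = n;
    return p;
  }
};
```

Because the whole object is one flat allocation with no pointers to follow, the GTY markers for such a type can be empty, which is what makes it GC-friendly.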

> What about slsr?  This is after bitint lowering, so it shouldn't be
> performing opts on larger BITINT_TYPEs and so could also go with the
> rwidest_int.

Just to say I don't really like adding another "widest" int, but
slsr shouldn't need to GC any of that so widest_int should be fine?

Richard.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int
  2023-09-29  9:49         ` Richard Biener
@ 2023-09-29 10:30           ` Richard Sandiford
  2023-09-29 10:58             ` Jakub Jelinek
  0 siblings, 1 reply; 16+ messages in thread
From: Richard Sandiford @ 2023-09-29 10:30 UTC (permalink / raw)
  To: Richard Biener; +Cc: Jakub Jelinek, Aldy Hernandez, gcc-patches

Richard Biener <rguenther@suse.de> writes:
> On Thu, 28 Sep 2023, Jakub Jelinek wrote:
>
>> Hi!
>> 
>> On Tue, Aug 29, 2023 at 05:09:52PM +0200, Jakub Jelinek via Gcc-patches wrote:
>> > On Tue, Aug 29, 2023 at 11:42:48AM +0100, Richard Sandiford wrote:
>> > > > I'll note tree-ssa-loop-niter.cc also uses GMP in some cases, widest_int
>> > > > is really trying to be poor-mans GMP by limiting the maximum precision.
>> > > 
>> > > I'd characterise widest_int as "a wide_int that is big enough to hold
>> > > all supported integer types, without losing sign information".  It's
>> > > not big enough to do arbitrary arithmetic without losing precision
>> > > (in the way that GMP is).
>> > > 
>> > > If the new limit on integer sizes is 65535 bits for all targets,
>> > > then I think that means that widest_int needs to become a 65536-bit type.
>> > > (But not with all bits represented all the time, of course.)
>> > 
>> > If the widest_int storage would be dependent on the len rather than
>> > precision for how it is stored, then I think we'd need a new method which
>> > would be called at the start of filling the limbs where we'd tell how many
>> > limbs there would be (i.e. what will set_len be called with later on), and
>> > do nothing for all storages but the new widest_int_storage.
>> 
>> So, I've spent some time on this.  While wide_int is in the patch a fixed/variable
>> number of limbs (aka len) storage depending on precision (precision >
>> WIDE_INT_MAX_PRECISION means heap allocated limb array, otherwise it is
>> inline), widest_int has always very large precision
>> (WIDEST_INT_MAX_PRECISION, currently defined to the INTEGER_CST imposed
>> limitation of 255 64-bit limbs) but uses inline array for length
>> corresponding up to WIDE_INT_MAX_PRECISION bits and for larger one uses
>> similarly to wide_int a heap allocated array of limbs.
>> These changes make both wide_int and widest_int obviously non-POD, not
>> trivially default constructible, nor trivially copy constructible, trivially
>> destructible, trivially copyable, so not a good fit for GC and some vec
>> operations.
>> One common use of wide_int in GC structures was in dwarf2out.{h,cc}; but as
>> large _BitInt constants don't appear in RTL, we really don't need such large
>> precisions there.
>> So, for wide_int the patch introduces rwide_int, restricted wide_int, which
>> acts like the old wide_int (except that it is now trivially default
>> constructible and has assertions precision isn't set above
>> WIDE_INT_MAX_PRECISION).
>> For widest_int, the nastiness is that because it always has huge precision
>> of 16320 right now,
>> a) we need to be told upfront in wide-int.h before calling the large
>>    value internal functions in wide-int.cc how many elements we'll need for
>>    the result (some reasonable upper estimate is fine)
>> b) various of the wide-int.cc functions were lazy and assumed precision is
>>    small enough and often used up to that many elements, which is
>>    undesirable; so, it now tries to decrease that and use xi.len etc. based
>>    estimates instead if possible (sometimes only if precision is above
>>    WIDE_INT_MAX_PRECISION)
>> c) with the higher precision, behavior changes for lrshift (-1, 2) etc. or
>>    unsigned division with dividend having most significant bit set in
>>    widest_int - while such values were considered to be above or equal to
>>    1 << (WIDE_INT_MAX_PRECISION - 2), now they are with
>>    WIDEST_INT_MAX_PRECISION and so much larger; but lrshift on widest_int
>>    is I think only done in ccp and I'd strongly hope that we treat the
>>    values as unsigned and so usually much smaller length; so it is just
>>    when we call wi::lrshift (-1, 2) or similar that results change.
>> I've noticed that for wide_int or widest_int references even simple
>> operations like eq_p liked to allocate and immediately free huge buffers,
>> which was caused by wide_int doing allocation on creation with a particular
>> precision and e.g. get_binary_precision running into that.  So, I've
>> duplicated that to avoid the allocations when all we need is just a
>> precision.
>> 
>> The patch below doesn't actually build anymore since the vec.h asserts
>> (which point to useful stuff though), so temporarily I've applied it also
>> with
>> --- gcc/vec.h.xx	2023-09-28 12:56:09.055786055 +0200
>> +++ gcc/vec.h	2023-09-28 13:15:31.760487111 +0200
>> @@ -1197,7 +1197,7 @@ template<typename T, typename A>
>>  inline void
>>  vec<T, A, vl_embed>::qsort (int (*cmp) (const void *, const void *))
>>  {
>> -  static_assert (vec_detail::is_trivially_copyable_or_pair <T>::value, "");
>> +//  static_assert (vec_detail::is_trivially_copyable_or_pair <T>::value, "");
>>    if (length () > 1)
>>      gcc_qsort (address (), length (), sizeof (T), cmp);
>>  }
>> @@ -1422,7 +1422,7 @@ template<typename T>
>>  void
>>  gt_ggc_mx (vec<T, va_gc> *v)
>>  {
>> -  static_assert (std::is_trivially_destructible <T>::value, "");
>> +//  static_assert (std::is_trivially_destructible <T>::value, "");
>>    extern void gt_ggc_mx (T &);
>>    for (unsigned i = 0; i < v->length (); i++)
>>      gt_ggc_mx ((*v)[i]);
>> hack.  The two spots that trigger are tree-ssa-loop-niter.cc doing qsort on
>> widest_int vector (to be exact, swapping elements in the vector of
>
> For this (besides choosing a fixed smaller widest_int as indicated in
> the other mail) sorting could be done indirectly by sorting a
> [0, 1, 2 ... n-1 ] vector instead.
>
>> And, now the question is what to do about this.  I guess for omp_general
>> I could just use generic_wide_int <fixed_wide_int_storage <1024> > or
>> something similar, after all the widest_int wasn't really great when it
>> had maximum precision of WIDE_INT_MAX_PRECISION, different values on
>> different targets, it has very few uses and is easy to change (thinking
>> about this makes me wonder what we do for offloading if the offload host
>> has different WIDE_INT_MAX_PRECISION from offload target).
>> 
>> But the more important question is what to do about loop/niters analysis.
>> I think for number of iteration analysis it might be ok to punt somehow
>> (if there is a way to tell that number of iterations is unknown) if we
>> get some bound which is too large to be expressible in some reasonably small
>> fixed precision (whether it is WIDE_INT_MAX_PRECISION, or something
>> different is a question).  We could either introduce yet another widest_int
>> like storage which would have still WIDEST_INT_MAX_PRECISION precision, but
>> would ICE if length is set to something above its fixed width.  One problem
>> is that the write_val estimations are often just conservatively larger and
>> could trigger even if the value fits in the end.  Or we could use
>> generic_wide_int <fixed_wide_int_storage <WIDE_INT_MAX_PRECISION> > (perhaps
>> call that rwidest_int), the drawback would be that it would be slightly harder
>> to use as it has different precision from widest_int, we'd need to do
>> some ::from conversions on it or the like.  Plus I really don't know the
>> niters code to know how to punt.
>
> I think when widest_int is no longer bound by something like the
> largest integer mode but now has to cater for arbitrary large _BitInt
> we have to get rid of widest_int or we have to make it variable-precision
> and reallocate it like auto_vec<T, n>.
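The auto_vec<T, n>-style scheme could be sketched roughly as below (a minimal sketch only: the type and member names, and the inline element count, are illustrative, not what any actual patch uses):

```cpp
#include <cassert>

// Up to INL_ELTS limbs live inline in the object; anything larger goes to
// the heap, in the spirit of auto_vec<T, N>.
typedef unsigned long long limb_t;
const unsigned INL_ELTS = 9;

struct var_int_storage
{
  unsigned len;
  union { limb_t val[INL_ELTS]; limb_t *valp; } u;

  var_int_storage () : len (0) {}
  ~var_int_storage () { if (len > INL_ELTS) delete[] u.valp; }
  // Copying would need a deep copy; forbid it in the sketch.
  var_int_storage (const var_int_storage &) = delete;
  var_int_storage &operator= (const var_int_storage &) = delete;

  // Return a writable buffer for N limbs, allocating only when the
  // inline array is too small.
  limb_t *write_val (unsigned n)
  {
    if (len > INL_ELTS)
      delete[] u.valp;
    len = n;
    return n > INL_ELTS ? (u.valp = new limb_t[n]) : u.val;
  }
  const limb_t *get_val () const
  { return len > INL_ELTS ? u.valp : u.val; }
};
```

The non-trivial destructor is exactly what makes such a type painful in GC memory and in vec qsort, as discussed elsewhere in the thread.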

Yeah, think I agree with this.  widest_int really combined two things:

(a) a way of storing any integer IL value without loss of precision

(b) a way of attaching sign information

Arithmetic on widest_int is dubious, because it wraps at an essentially
arbitrary point.

_BitInt means that (a) is no longer a sensible concept.  And (b) could
be achieved by having a variant of wide_int with sign information
(like LLVM's APSInt).
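A sign-carrying variant in the spirit of LLVM's APSInt might look like this (the limb array of a real wide_int is elided and the names are invented; the point is only that comparisons consult the stored sign instead of a caller-supplied SIGNED/UNSIGNED argument):

```cpp
#include <cassert>

// Sketch of (b): a wide value that carries its own signedness.
struct signed_wide
{
  long long val;     // placeholder for the real limb representation
  bool is_signed;

  // Less-than that interprets the bits according to the stored sign,
  // rather than taking a signop parameter at each call site.
  bool lt (const signed_wide &o) const
  {
    assert (is_signed == o.is_signed);
    if (is_signed)
      return val < o.val;
    return (unsigned long long) val < (unsigned long long) o.val;
  }
};
```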

In a way, _BitInt undermines the whole premise of the wide_int
representation.  Originally the idea was to have a structure that
was entirely self-contained and GC-friendly.  We're having to give
up on both of those (rightly and understandably).  So we might want
to reconsider how many elements are stored inline, and whether other
aspects of the design could be tweaked.

We could probably also clean up a lot of stuff now that we have
access to C++11.  (Or maybe even C++14. :) )

But that could end up being a large rewrite.

The approach in the patch looks good to me from a quick scan FWIW.
Will try to review over the weekend.

Thanks,
Richard

> For GC we can have the storage still heap allocated but of course
> CTOR/DTOR is going to be a pain (so better not use widest_int in GC).
>
>> ipa_bits is even worse, because unlike niter analysis, I think it is very
>> much desirable to support IPA VRP of all supported _BitInt sizes.  Shall
>> we perhaps use trailing_wide_int storage in there, or conditionally
>> rwidest_int vs. INTEGER_CSTs for stuff that doesn't fit, something else?
>
> trailing_wide_int storage is the way to go here
>
>> What about slsr?  This is after bitint lowering, so it shouldn't be
>> performing opts on larger BITINT_TYPEs and so could also go with the
>> rwidest_int.
>
> Just to say I don't really like adding another "widest" int, but
> slsr shouldn't need to GC any of that so widest_int should be fine?
>
> Richard.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int
  2023-09-29 10:30           ` Richard Sandiford
@ 2023-09-29 10:58             ` Jakub Jelinek
  0 siblings, 0 replies; 16+ messages in thread
From: Jakub Jelinek @ 2023-09-29 10:58 UTC (permalink / raw)
  To: Richard Biener, Aldy Hernandez, gcc-patches, richard.sandiford

On Fri, Sep 29, 2023 at 11:30:06AM +0100, Richard Sandiford wrote:
> Yeah, think I agree with this.  widest_int really combined two things:
> 
> (a) a way of storing any integer IL value without loss of precision
> 
> (b) a way of attaching sign information
> 
> Arithmetic on widest_int is dubious, because it wraps at an essentially
> arbitrary point.

Yeah, but some operations are more sensible than others.  Logical ops make sense at any
precision, addition/subtraction might better have for widest_int a few bits
of extra maximum precision over wide_int maximum supported precision such
that overflows are caught, but say for multiplication that would be much
more.  Of course, we already document that bswap/rotates don't make any
sense on widest_int, and as I wrote, e.g. lrshift or unsigned division
of values with MSB set are very questionable too.

> The approach in the patch looks good to me from a quick scan FWIW.
> Will try to review over the weekend.

For the actual patch I have another worry (though it can't easily be
verified until the GTY widest_int uses, slsr, etc. are addressed first).
wide_int_ref_storage has VAR_PRECISION like wide_int, while I've hacked up
get_binary_precision not to allocate uselessly for it a lot of memory,
I'm afraid any time we perform some operation on wide_int_refs created from
widest_int (so, they get in most cases reasonably small get_len () but
huge get_precision ()) we'd uselessly allocate 255 HOST_WIDE_INTs of memory
from heap.  So maybe wide_int should also like widest_int in the patch
have u.val vs. u.valp decided based on estimated or later real get_len ()
rather than get_precision ().  In the end, I think we should make sure that
unless _BitInt is seen in the sources, we don't really ever allocate any
heap memory in wide_int/widest_int.  At least unless we change the number
of inline elements of the arrays for wide_int/widest_int; if we lower that
to some hardcoded number of limbs on all arches (say 4, 6 or 8; the current
9 on x86-64 is kind of weird), allocations would happen only very rarely.
Normal 128-bit precision math certainly shouldn't trigger them.
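To put rough numbers on the waste described above (helper names invented; figures assume 64-bit HOST_WIDE_INT and the 255-limb INTEGER_CST limit mentioned earlier in the thread): sizing a temporary buffer from get_precision () of a widest_int-backed reference costs the full 255-limb array, while sizing it from get_len () of a small value costs a single limb.

```cpp
#include <cassert>

const unsigned HOST_BITS_PER_WIDE_INT = 64;

// Bytes needed if the buffer is sized from the precision.
unsigned
bytes_from_precision (unsigned precision)
{
  unsigned limbs = (precision + HOST_BITS_PER_WIDE_INT - 1)
		   / HOST_BITS_PER_WIDE_INT;
  return limbs * sizeof (unsigned long long);
}

// Bytes needed if the buffer is sized from the actual length.
unsigned
bytes_from_len (unsigned len)
{
  return len * sizeof (unsigned long long);
}
```

So a simple eq_p on two small widest_int values could allocate ~2 KiB per operand if the precision is used, versus 8 bytes if the length is.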

	Jakub



* Re: [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int
  2023-09-29  8:37           ` Jakub Jelinek
@ 2023-09-29 12:04             ` Aldy Hernandez
  0 siblings, 0 replies; 16+ messages in thread
From: Aldy Hernandez @ 2023-09-29 12:04 UTC (permalink / raw)
  To: Jakub Jelinek
  Cc: Richard Biener, Richard Sandiford, Andrew MacLeod, gcc-patches



On 9/29/23 04:37, Jakub Jelinek wrote:
> On Thu, Sep 28, 2023 at 11:53:53AM -0400, Aldy Hernandez wrote:
>>> ipa_bits is even worse, because unlike niter analysis, I think it is very
>>> much desirable to support IPA VRP of all supported _BitInt sizes.  Shall
>>> we perhaps use trailing_wide_int storage in there, or conditionally
>>> rwidest_int vs. INTEGER_CSTs for stuff that doesn't fit, something else?
>>
>> BTW, we already track value/mask pairs in the irange, so I think ipa_bits
>> should ultimately disappear.  Doing so would probably simplify the code
>> base.
> 
Well, having irange in GC memory would be equally bad, since it has
non-trivial destructors (plus it isn't meant to be space efficient either,
right?).

Correct, irange is not space efficient by a long shot.  Any GC and long 
term requirements should be stored through the value-range-storage.h 
mechanism.

I already converted the ipa_vr ranges that live in GC memory to 
vrange_storage.  See the ipa_vr class.  So I think you could just nuke 
the ipa_bits and use the ranges already in ipa_vr.

> Though, perhaps we should use value-range-storage.h for that now that it
> can store value/mask pair as well?  Either tweak it on the IPA side
> such that everything is stored together (both the IPA VRP and IPA bit CCP)

Right.

Aldy

> or say use vrange_storage with zero (or one dummy) ranges + the value/mask
> pair.
> 
> 	Jakub
> 



* [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int
  2023-09-28 14:03       ` [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int Jakub Jelinek
                           ` (2 preceding siblings ...)
  2023-09-29  9:49         ` Richard Biener
@ 2023-10-05 15:11         ` Jakub Jelinek
  2023-10-06 17:41           ` Jakub Jelinek
  3 siblings, 1 reply; 16+ messages in thread
From: Jakub Jelinek @ 2023-10-05 15:11 UTC (permalink / raw)
  To: Richard Biener, Richard Sandiford; +Cc: gcc-patches

Hi!

On Thu, Sep 28, 2023 at 04:03:55PM +0200, Jakub Jelinek wrote:
> Your thoughts on all of this?

So, here is some further progress on the patch (on top of the ipa_bits
removal patch).
The most important changes since the last patch:
1) it now builds cc1 and passes self-tests without problems and without any
   hacks; omp-general now uses fixed_wide_int_storage <1024>, which matches
   what it wants (a GC friendly, target independent, bitsize scoring wide
   integer), loop uses fixed_wide_int_storage <WIDE_INT_MAX_INL_PRECISION>
   with punting if some bound doesn't fit, the ipa_bits removal resolves the
   IPA side, and slsr uses offset_int, which I think should be good enough
   for address offset computations (whether in bytes or bits with enough
   extra bits)
2) WIDE_INT_MAX_PRECISION was a weird macro in the last patch, because it
   wasn't maximum wide_int precision, so I've change it, such that we have
   WIDE_INT_MAX_INL_{PRECISION,ELTS} - limit on what wide_int as well as
				       widest_int uses in inline arrays
   RWIDE_INT_MAX_{PRECISION,ELTS} - equal to above, maximum precision/len
				    of rwide_int
   WIDE_INT_MAX_{PRECISION,ELTS} - maximum wide_int precision, currently
				   derived from the INTEGER_CST limitation
				   of 255 HOST_WIDE_INT limbs; this is
				   also maximum supported _BitInt bitsize + 1
   WIDEST_INT_MAX_{PRECISION,ELTS} - this could be the same as above, but so
				     that it is really widest, I've picked
				     twice as large so that we catch small
				     addition/subtraction overflows and if
				     lucky also multiplication
3) some extra bugfixes
make check-gcc RUNTESTFLAGS=dg.exp=bitint*.c is clean,
make check-gcc RUNTESTFLAGS=dg-torture.exp=bitint* has a single ICE which
I'm still working on; tree-ssa.exp=*slsr* seems to have 2 ICEs, and I
haven't tried to bootstrap/regtest it yet.
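The macro relationships in 2) could be summarized as the following sketch (the inline element count of 9, matching the x86-64 value mentioned elsewhere in the thread, is illustrative, not necessarily what the final patch picks):

```cpp
#include <cassert>

#define HOST_BITS_PER_WIDE_INT 64
/* Inline array limit shared by wide_int and widest_int.  */
#define WIDE_INT_MAX_INL_ELTS 9
#define WIDE_INT_MAX_INL_PRECISION \
  (WIDE_INT_MAX_INL_ELTS * HOST_BITS_PER_WIDE_INT)
/* rwide_int is capped at the inline size.  */
#define RWIDE_INT_MAX_ELTS WIDE_INT_MAX_INL_ELTS
#define RWIDE_INT_MAX_PRECISION WIDE_INT_MAX_INL_PRECISION
/* Derived from the INTEGER_CST limit of 255 HOST_WIDE_INT limbs;
   also maximum supported _BitInt bitsize + 1.  */
#define WIDE_INT_MAX_ELTS 255
#define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
/* Twice as large so that widest_int is really widest and small
   addition/subtraction overflows are caught.  */
#define WIDEST_INT_MAX_ELTS (2 * WIDE_INT_MAX_ELTS)
#define WIDEST_INT_MAX_PRECISION (2 * WIDE_INT_MAX_PRECISION)
```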

--- gcc/tree-vect-loop.cc.jj	2023-10-04 16:28:04.354782008 +0200
+++ gcc/tree-vect-loop.cc	2023-10-05 11:52:25.001491397 +0200
@@ -11681,7 +11681,7 @@ vect_transform_loop (loop_vec_info loop_
 					LOOP_VINFO_VECT_FACTOR (loop_vinfo),
 					&bound))
 	    loop->nb_iterations_upper_bound
-	      = wi::umin ((widest_int) (bound - 1),
+	      = wi::umin ((bound_wide_int) (bound - 1),
 			  loop->nb_iterations_upper_bound);
       }
   }
--- gcc/wide-int-print.cc.jj	2023-10-04 16:28:04.447780740 +0200
+++ gcc/wide-int-print.cc	2023-10-05 11:36:55.265242917 +0200
@@ -74,9 +74,12 @@ print_decs (const wide_int_ref &wi, char
 void
 print_decs (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_decs (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_decs (wi, p);
+  fputs (p, file);
 }
 
 /* Try to print the unsigned self in decimal to BUF if the number fits
@@ -98,9 +101,12 @@ print_decu (const wide_int_ref &wi, char
 void
 print_decu (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_decu (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_decu (wi, p);
+  fputs (p, file);
 }
 
 void
@@ -134,9 +140,12 @@ print_hex (const wide_int_ref &val, char
 void
 print_hex (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_hex (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_hex (wi, p);
+  fputs (p, file);
 }
 
 /* Print larger precision wide_int.  Not defined as inline in a header
--- gcc/lto-streamer-out.cc.jj	2023-10-04 16:28:04.201784093 +0200
+++ gcc/lto-streamer-out.cc	2023-10-05 11:36:54.700250663 +0200
@@ -2173,13 +2173,26 @@ output_cfg (struct output_block *ob, str
 			   loop_estimation, EST_LAST, loop->estimate_state);
       streamer_write_hwi (ob, loop->any_upper_bound);
       if (loop->any_upper_bound)
-	streamer_write_widest_int (ob, loop->nb_iterations_upper_bound);
+	{
+	  widest_int w = widest_int::from (loop->nb_iterations_upper_bound,
+					   SIGNED);
+	  streamer_write_widest_int (ob, w);
+	}
       streamer_write_hwi (ob, loop->any_likely_upper_bound);
       if (loop->any_likely_upper_bound)
-	streamer_write_widest_int (ob, loop->nb_iterations_likely_upper_bound);
+	{
+	  widest_int w
+	    = widest_int::from (loop->nb_iterations_likely_upper_bound,
+				SIGNED);
+	  streamer_write_widest_int (ob, w);
+	}
       streamer_write_hwi (ob, loop->any_estimate);
       if (loop->any_estimate)
-	streamer_write_widest_int (ob, loop->nb_iterations_estimate);
+	{
+	  widest_int w = widest_int::from (loop->nb_iterations_estimate,
+					   SIGNED);
+	  streamer_write_widest_int (ob, w);
+	}
 
       /* Write OMP SIMD related info.  */
       streamer_write_hwi (ob, loop->safelen);
--- gcc/value-range.h.jj	2023-10-04 16:28:04.436780890 +0200
+++ gcc/value-range.h	2023-10-05 11:36:55.257243027 +0200
@@ -626,7 +626,9 @@ irange::maybe_resize (int needed)
     {
       m_max_ranges = HARD_MAX_RANGES;
       wide_int *newmem = new wide_int[m_max_ranges * 2];
-      memcpy (newmem, m_base, sizeof (wide_int) * num_pairs () * 2);
+      unsigned n = num_pairs () * 2;
+      for (unsigned i = 0; i < n; ++i)
+	newmem[i] = m_base[i];
       m_base = newmem;
     }
 }
--- gcc/tree-ssa-loop-ivopts.cc.jj	2023-09-29 18:58:47.317894622 +0200
+++ gcc/tree-ssa-loop-ivopts.cc	2023-10-05 15:44:56.457466394 +0200
@@ -1036,10 +1036,12 @@ niter_for_exit (struct ivopts_data *data
 	 names that appear in phi nodes on abnormal edges, so that we do not
 	 create overlapping life ranges for them (PR 27283).  */
       desc = XNEW (class tree_niter_desc);
+      ::new (static_cast<void*> (desc)) tree_niter_desc ();
       if (!number_of_iterations_exit (data->current_loop,
 				      exit, desc, true)
      	  || contains_abnormal_ssa_name_p (desc->niter))
 	{
+	  desc->~tree_niter_desc ();
 	  XDELETE (desc);
 	  desc = NULL;
 	}
@@ -7894,6 +7896,7 @@ remove_unused_ivs (struct ivopts_data *d
 bool
 free_tree_niter_desc (edge const &, tree_niter_desc *const &value, void *)
 {
+  value->~tree_niter_desc ();
   free (value);
   return true;
 }
--- gcc/lto-streamer-in.cc.jj	2023-10-04 16:28:04.178784406 +0200
+++ gcc/lto-streamer-in.cc	2023-10-05 11:36:54.730250251 +0200
@@ -1122,13 +1122,16 @@ input_cfg (class lto_input_block *ib, cl
       loop->estimate_state = streamer_read_enum (ib, loop_estimation, EST_LAST);
       loop->any_upper_bound = streamer_read_hwi (ib);
       if (loop->any_upper_bound)
-	loop->nb_iterations_upper_bound = streamer_read_widest_int (ib);
+	loop->nb_iterations_upper_bound
+	  = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED);
       loop->any_likely_upper_bound = streamer_read_hwi (ib);
       if (loop->any_likely_upper_bound)
-	loop->nb_iterations_likely_upper_bound = streamer_read_widest_int (ib);
+	loop->nb_iterations_likely_upper_bound
+	  = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED);
       loop->any_estimate = streamer_read_hwi (ib);
       if (loop->any_estimate)
-	loop->nb_iterations_estimate = streamer_read_widest_int (ib);
+	loop->nb_iterations_estimate
+	  = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED);
 
       /* Read OMP SIMD related info.  */
       loop->safelen = streamer_read_hwi (ib);
@@ -1888,13 +1891,17 @@ lto_input_tree_1 (class lto_input_block
       tree type = stream_read_tree_ref (ib, data_in);
       unsigned HOST_WIDE_INT len = streamer_read_uhwi (ib);
       unsigned HOST_WIDE_INT i;
-      HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+      HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf;
 
+      if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+	a = XALLOCAVEC (HOST_WIDE_INT, len);
       for (i = 0; i < len; i++)
 	a[i] = streamer_read_hwi (ib);
       gcc_assert (TYPE_PRECISION (type) <= WIDE_INT_MAX_PRECISION);
-      result = wide_int_to_tree (type, wide_int::from_array
-				 (a, len, TYPE_PRECISION (type)));
+      result
+	= wide_int_to_tree (type,
+			    wide_int::from_array (a, len,
+						  TYPE_PRECISION (type)));
       streamer_tree_cache_append (data_in->reader_cache, result, hash);
     }
   else if (tag == LTO_tree_scc || tag == LTO_trees)
--- gcc/value-range.cc.jj	2023-10-04 16:28:04.416781162 +0200
+++ gcc/value-range.cc	2023-10-05 11:36:54.835248812 +0200
@@ -245,17 +245,24 @@ vrange::dump (FILE *file) const
 void
 irange_bitmask::dump (FILE *file) const
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
   pretty_printer buffer;
 
   pp_needs_newline (&buffer) = true;
   buffer.buffer->stream = file;
   pp_string (&buffer, "MASK ");
-  print_hex (m_mask, buf);
-  pp_string (&buffer, buf);
+  unsigned len_mask = m_mask.get_len ();
+  unsigned len_val = m_value.get_len ();
+  unsigned len = MAX (len_mask, len_val);
+  if (len > WIDE_INT_MAX_INL_ELTS)
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  else
+    p = buf;
+  print_hex (m_mask, p);
+  pp_string (&buffer, p);
   pp_string (&buffer, " VALUE ");
-  print_hex (m_value, buf);
-  pp_string (&buffer, buf);
+  print_hex (m_value, p);
+  pp_string (&buffer, p);
   pp_flush (&buffer);
 }
 
--- gcc/testsuite/gcc.dg/bitint-38.c.jj	2023-10-05 11:36:54.667251115 +0200
+++ gcc/testsuite/gcc.dg/bitint-38.c	2023-10-05 12:57:07.941106025 +0200
@@ -0,0 +1,18 @@
+/* PR c/102989 */
+/* { dg-do compile { target { bitint } } } */
+/* { dg-options "-std=c2x" } */
+
+#if __BITINT_MAXWIDTH__ >= 16319
+constexpr unsigned _BitInt(16319) a
+  = 468098567701677261276215481936770442254383643766995378241600227179396283432916865881332215867106489159251577495372085663487092317743244770597287633199005374998455333587280357490149993101811392051483761495987108264964738337118155155862715438910721661230332533185335581757600511846854115932637261969633134365868695363914570578110064471868475841348589366933645410987699979080140212849909081188170910464967486231358935212897096260626033055536141835599284498474737858487658470115144771923114826312283863035503700600141440724426364699636330240414271275626021294939422483250619629005959992243418661230122132667769781183790338759345884903821695590991577228520523725302048215447841573113840811593638413425054938213262961448317898574140533090004992732688525115004782973893244091427000396890427152225308661078954671066069234453757593181753900865203439035402480306413572239610467142591920809187367438071170100969567440044691427487959785637338381651309916782063670286046547585240837892307170928849485877186793280707600840866783471799148179250818387716183127323346199533387463363442356218803779697005759324410376476855222420876262425985571982818180353870410149824214544313013285199544193496624223219986402944849622489422007678564946174797892795089330899535624727777525330789492703574564112252955147770942929761545604350869404246558274752353510370157229485004402131043153454290397929387276374054938578976878606467217359398684275050519104413914286024106808116340712273059427362293703151355498336213170698894448405369398757188523160460292714875857879968173578328191358215972493513271297875634400793301929250052822258636015650857683023900709845410838487936778533250407886180954576046340697908584020951295048844938047865657029072850797442976146895294184993736999505485665742811313795405530674199848055802759901786376822069529342971261963119332476504064285869362049662083405789828433132154933242817432809415810548180658750393692272729586232842065658490971201927780014258815333115459695117942273551876646844821076723
66404028277283451141989135127816901710398709480382959428635234046834661872608878149262681618865733135910417181982267380585631782849903908808822313725829737392904330767357009039694778959879992292864384353261701216481107461888177462262894353903797488381268913080186091509003587024406100581941813006839098647031467785360508010331341183790435828783740154625741324046693989352750893154106524192987230720387644388210619326254465229013236469167191033200612786414699140401536668356931724805794959607035492936115832695555160023607526843504410588016279838079916160798736528245866203159909692182517620270789073002369870685576293269168825936535896407659582457777527599118314911837204720605511846311286460406385389482040724983787136893494143811968060552854688725693433424607559674641029795445863235817142871414182091818338443568133237931754104825239171071219662340633870206119521372456930328540224285367138611314821153569168546183645829503753803437831805510824008241444120530040152673239995922834692652858685274338949097873478792672199985538879471183716442300771962610917900546611370645076526968758081982277218930108450362729738967513422822233728686764111051106198023124788453349244289893674342964195831413532907340649577636920815803211588385069101056904898394112677147799097609225239197281269166984744679850724410612166788542302561376925810277385553750973329580501331393740228280489721384722107264711160517234946456408991490649350813385538962717766342605776325208628632534381125475768180306827627804875799742528433471319022681846302307446190017695801005557243498313517114536524233927332698446518106428726464547083209111510064058410437557730405695196945620013848531356000927233822810363776386328926167325872673675340704414366407947949697258056053449480617081046930477300587359062628007238799966852254674798570159961397510118854385785214155925163405867671830800032486980962819944268156561566291262602279606441449610634423643128569768835770799298996656155717172997209353300747694786221592258320481118901555050564208
2475400647639520782187776825395598257421714106473869797642678266380755873356747812273977691604147842741151722919464734890326772594979022403228191075586910464204870254674290437668861177639713112762996390246102030994917186957826982084194156870398312336059100521566034092740694642613192909850644003933745129291062576341213874815510099835708723355432970090139671120232910747665906191360160259512198160849784197597300106223945960886603127136037120000864968668651452411048372895607382907494278810971475663944948791458618662250238375166523484847507342040066801856222328988662049579299600545682490412754483621051190231623196265549391964259780178070495642538883789503379406531279338866955157646654913405181879254189185904298325865503395688786311067669273609670603076582607253527084977744533187145642686236350165593980428575119329911921382240780504527422630654086941060242757131313184709635181001199631726283364158943337968797uwb
+    + 9935443518057456429927126655222257817207511311671335832560065573055276678747990652907348839741818562757939084649073348172108397183827020377941725983107513636287406530526358253508437290241937276908386282904353079102904535675608604576486162998319427702851278408213641454837223079616401615875672453250148421679223829417834227518133091055180270249266161676677176149675164257640812344297935650729629801878758059944090168862730519817203352341458310363811482318083270232434329317323822818991134500601669868922396013512969477839456472345812312321924215241849772147687455760224559240952737319009348540894966363568158349501355229264646770018071590502441702787269097973979899837683122194103110089728425676690246091146993955037918425772840022288222832932542516091501149477160856564464376910293230091963573119230648026667896399352790982611957569978972038178519570278447540707502861678502657905192743225893225663994807568918644898273702285483676385717651104042002105352993176512166420085064452431753181365805833548922676748890412420332694609096819779765600345216390394307257556778223743443958983962113723193551247897995423762348092103893683711373897139168289420267660611409947644548715007787832959251167553175096639147674776117973100447903243626902892382263767591328038235708593401563793019418124453166386471792468421003855894206584354731489363668134077946203546067237235657746480296831651791790385981397558458905904641394246279782746736009101862366868068363411976388557697921914317179371206444085390779634831369723370050764678852846779369497232374780691905280992368079762747352245519607264154197148958896955661904214909184952289996142050604821608749900417845137727596903100452350067551305840998280482775209883278873071895588751811462342517825753493814997918418437455474992422243919549967371964423457440287296270855605850954685912644303354019058716916735522533065323057755479803668782530250381988211075034655760123250249441440684338450953823290346909689822527652698723502872312570305261196768477498898020793
07180875890338179687386868237885092521162939276062868522274507354411661563555791080535762359021802371583271637253251937286209382854579732556780369199805178515606586156688887146113013352203932184343901796438203008075247670939873134117306243027500311195490762783720848834868666690476571065691770647092431843216015545072600766803549457177979312921224210129327485323785084880615277446368924342668329588464868079024036309701521834796639916638009037062859128871230513317186963967992285406649307677316697019048298882801703101689156197198627967537196302093246933726406131778633056683938398938476093559029928796354686384811999945173954840512400151403309669560558076612161144063854998889597026242513321815984806172721716348713180648168676684378997146524790353485383795141384578666712242718264898915659952964743941955378515856161311402326730386992756517050778178236644701134085125817853410158595008142343770377849234744823047389764350577395738550411218244669058503382374717596692909129369320106185867014120912909145286129227627601291062407124116540208916160694442382624546160859493573248190019824086229340944230880069001955083163047988300057988461460190696172301135444980457679433982605698695768009091604684867341972352969438465380940037721854507526914876612919463703940822551567801333218807499721766783549494004301491787743835490267310745316427528001025104036004093730873892568947572513163903201197900964271354229289421905935297293315111237619738381492536328867099555626944780499492508679172813690669324950711509780706036587211099821076833607838950872418486359728598773691207307198013716259077966467503342911932785530782717467374925746298305422163179752700998759573246022219736760844097348821189847143930205138880681852165968587367238382802132984815341020492660771097167826854167758442169523801178435138604786915878715663463069387242806786498032006329343588757474585906702498848574235327854870446754429879351158358765971371167706579237119932941937239272032198186226989002483234899986544933985633922038685316264
1984444934998176248821703154774794026863423846665361147912580310179333239849314145158103813724371277156031826070213656189218428551171492579367736652650240510840524479280661922149370381404863668038229922105064658335083314946842545978050497021795217124947959575065471749872278802756371390871441004232633252611825748658593540667831098874027223327541523742857750954119615708541514145110863925049204517574000824797900817585376961462754521495100198829675100958066639531958106704159717265035205597161047879510849900587565746603225763129877434317949842105742386965886137117798642168190733367414126797929434627532307855448841035433795229031275545885872876848846666666475465866905332293095381494096702328649920740506658930503053162777944821433383407283155178707970906458023827141681140372968356084617001053870499079884384019820875585843129082894687740533946763756846924952825251383026364635539377880784234770789463152435704464616uwb;
+constexpr unsigned _BitInt(16319) b
+  = 201297445670930272757410050706289982624491660465170269036956837558544487568343601665131324050787963146027819983307053684073674820301566372069948774255822501245951067183970281991127738921057274780296261225407186724668122441725219688250048125966841905344001692912450198866643346323472031729064718300479187798706672968308261087690363842676049695093363984215164826771706973231448072373451307677338614156650375912499484900858673561833191015411671765861950517217665521946675304171422505561338956884416634006130147812768253943589754589674751478065890135065694159454968411311007381804262384649506292683797740132856270496215291920477368030890927518915139926054190865025882333320572966385672903060939108787420935008738642771747194101836407658215805878319677167083639762255359053179081377804972674444167601766477058340469960108202124942440832222540377006995297899910334489799121285077103435004667868393510710457882392002319712888793520623296276540834303175498324831486965141663548707027165707832577079609274275294762496264442399518122931004650389638079392976399014560864084596772922490782305816240341600831984373745397286779063062899608736010837062018829992435540254299570916198129450184325033096743494275130577671607546912273653322418451757971067132955930636352026553442736954388106857124510033514694600855827527404147232640946659622051407638206917730907808664237279907113237485127665225378509765905986583979798452155950297827505371406035885922153636089924339222895422334581026342592757576904407543080095938552381372273517984464869811516727665137169980276022157512567193704293971295494591202772023271187887430809984834704361926253983400578503914789096681852906353804239554046072177109586360503737308384693363708450394319455433267005792709190528859753641414223310872888744622858586371766212551416982644129035226780333179891701158800815162840975593001335077994718953264573368151724211559955251687816351311439911364166420167449490823212046898398613762667954855321719238269424865029134002869639403094845074841
29423576156798044985198780159055788525538310878089397895175129162099671894337526801235280427428321205321530735108239848594278720839317921782831352363541199919557577597546876704462612904924694431903072332864341465745291866718067601041404212430941956177407763481845568339170224196193106463030409080073136605433869775860974939991008596874978506245689726966715206639438259724689301019692258116991317695012205036157177039536905494005833948384397446492918129185274359806145454148241131925838562069991934872329314452016900728948186477387223161994145551216156032211038319475270853818660079065895119923373317496777184177315345923787700803986965175033224375435249224949151191006574511519055220741174631165879299688118138728380219550143006894817522270338472413899079751917314505754802052988622174392135207139715960212346858882422543222621408433817817181595201086403368301839080592455115463829425708132345811270911456928961301265223101989524481521721969838980208647528038509328501705428950749820080720418776718084142086501267418284241370398868561282277848391673847937247873117719906103441015578245152673184719538896073697272475250261227685660058944107087333786104761624391816175414338999215260190162551489343436332492645887029551964578826432156700872459216605843463884228343167159924792752429816064841479438134662749621639560203443871326810129872763539114284811330805213188716333471069710270583945841626338361700846410927750916663908367683188084193258384935122236639934335284160522042065088923421928660724095726039642836343542211473282392554371973074108770797447448654428325845253304889062021031599531436606775029315849674756213988932349651640552571880780461452187094400408403309806507698230071584809861634596000425300485805174853406774961321055086995665513868382285048348264250174388793184093524675621762558537763747237314473883173686633576273836946507237880619627632543093619281096675643877749217588495383292078713230253993525326209732859301842016440189010027733234997657748351253359664018894197346327201303258
090754079801393874104215986193719394144148559622409051961205332355846077533183278890738832391535561074612724819789952480872328880408266970201766239451001690274739141595541572957753788951050043026811943691163688663710637928472363177936029259448725818579129920714382357882142208643606823754520733994646572586821541644398149238544337745998203264678454665487925173493921777764033537269522992103115842823750405588538846833724101543165897489915300004787110814394934465518176677482202804123781727309993329004830726928892557850582806559007396866888620985629055058474721708813614135721948922060211334334572381348586196886746758900465692833094336637178459072850215866106799456460266354416689624866015411034238864944123721969568161372557215009049887790769403406590484422511214573790761107726077762451440539965975955360773797196902546431341823788555069435728043202455375041817472821677779625286961992491729576392881089462100341878uwb
+    / 42uwb;
+constexpr unsigned _BitInt(16319) c
+  = 262772323820284473459352821003644139764422411204918486837801083183457749203973664525969244213356053746866592786123128016048873703760763864445114503188955456955707845772855989066509019294443022960334121996325949983760641247142204149139232137794443068332773889957035522194305750809271111954170469111770190707138471288264478300964320039624034636565586004311152732488771778750633811114778880597988580160502134204758516204130167934455175392270199736826994479523223887488609819475934329857306847460881835832251843478251106973279732948262052275644257699505034234355971659692999756814069746199415385028271937427604552452694831343609400239339863442175771021148001342538795308900643625203684755357388547418062925426243864734612746209878913555419878736641570225221679085911646547875018545464577373415267635167050327052540461729262689689973023792615829332644754020631915483439822012304455046590388687863476677106582400888258695751882270133355592985798459486903168566116933869906917828218475354926392234272233607129940335769903981971600517858890331250342237329544510764256814566282019040777844540893801961789123268871488227791986576892380104923938791704866048044372027912868520359825841599785417114170807870223388931011161719748522720320811145703270983059278809336716442271249901612983418413206535882717985866477493463706170671753161673938844141119218776382013036180674790251674465269642307327902615665909933158872905512486123491504175169187008138763888621316225940379555090163930685146452571795273177151730190907365145536386080045768561881185234343837026482568190685463450476530687199101655731545213024055527892355543331123801646920740920170836024409173000942382114507982743057738905942428815972332215822161005162124025696815718888433218512843696138793197099063690985358041680653942137749706271250646655360784441505334367960884910877260518796488043060864898940042147097262156826895049510698891917558183311555325743705729285921033441413668905528160312669220288936162529994523234178690669415796673063471613572
54079241809644500681547267163742601555111699376923690500014172294337681007418735910341792131377741308586228268385825579773985382339854821729670313925456724869607910114957040810377671394779834675225181536565444830551924417794139736686594557660483813045525089850285373756403594900392226296617656189774567019900237644329891280192776067340109751100025818473155267503490628146429306493520953677660612094758307190480072039980575323428994009982415676875786338343681850769724258724712947129844865182522700509869810541147515988955709784790248266593581532414091983670376426534289079098742549505127694160521110700035496658932724007621759500091227595477831200325335242614162624218010753586306794482732500765136299548052958345872488446969032973871418565484570096440609125401439516349061951073344772753817168731533186740449206533184858409824331269879276752302819075938894191764603880669059804914705202932220114574769307945938446355744093058483466098741029671133305308451601510124097336668044362140994842230895354232007936193610666215236351383330719496758577095102466235782700820575938453736277546445932135116947993404356975890051717304128693125699951445791328843668647245439797933691355015781238038148597339831348341049751957204680813855138272253234219030458164179195368888878989362640509486440530112337687890165646824152338885218611665567933423652236621168833497594762922586523151554244316284075364923316223457798336995440229801638249044555841786652868778333857626201712694823945146208412572567947403078655159448178467488335673853886982143607843369103504905837049147006413324087204923968347162406372146304110247436210704329838033967549296094708909042352807942165389054391217609084676765464997803900415653278041220586434133698802658726748950122980183615091029049242919298428066745937148593879994539254240070220900694662200741796632687373414952817000938093930497338259168439649970963774406833411431113922194082765390241161715106142638681072839764035976877223152727829248475639970029777900589595383604989099084081251
802305001465530685587689066710306032849298712531664047230963409638484129598076118133347670029704549206295184751171783054889490211218045322681317529569999778899567668829982207035948032411418382057247326141072264502161892285323531743728756335449414720326329614400327415751813608405440522389476951223717685562226240221655814783640319063683104993438443847695342093582440489676230855515734722099028773790309518629302472390856918840009781940193713784596688294176313226823907143925396584175086934911386332502448539920116580493698106175151294846382915609543814748269873022997601962804377576934064368480060369871027634248583037300264157126892396407333810094970488786868749240778818119777818968060847669660858189435863648299750130319878885182309492320093569553086644726783916663680961005542160003603514646606310756647257217877792590840884087816175376150368236330721380807047180835128240716072193739218623529235235449408073833764uwb
+    >> 171;
+static_assert (a == 10403542085759133691203342137159028259461894955438331210801665800234672962180907518788681055608925051917190662144445433835595489501570265148539013616306519011285861864113638610998587283343748668959870044400340187367869274012726759732348878437230149364081610941398977036594823591463255731808309715219781556045092524781748798096243155527048746090614751043610821560662864236720952557147844731917800712343725546175449104075627616077829385396994452199410766816558008090921987787438967590914249326913953731957899714113110918563882837045448642562338486517475793442626878243475178869958697311252767202125088496235928130685145568023992654921893286093433280015789621699281948053130963767216950901322064090115301029360256916486236324346980555378227825665231041206505932451054100655891377307183657244188881780309602697733965633806548575793711470844175477213922050584861112947113328821094578714380110663964395764964375008963336325761662071121014767368961020824065775639039724097407257977371623360602667242992626829630277589757195892131842788347638167481783472539736593840645020141666099662762763659119482517961624374850646183224354529879255694192077493038699570091875155722960929748259201284457182471153956119946261637096783796538046622701136421992223281799392319105563566498086105138357131671079600937329401554014025354725298453142629483842874038291307431207948198280389112036878226218928165845324560374437065373122000792930554833265840423016148390974876479752688661617125284208020330726704780298561478529279775092768807953202013307072084373090254748865483609183726295735240865516817482898554990450888147008484162850924835809973020042760450232447237837196378388135483084055028396408249214425019231777824054821326738728924661602608905318664721047678808734917923923121217803736039325080641571812479260200189082647677675380297657174607422686495562781202604884582727406463545308236800937463493199421020490845203940782000643133713413924683795888948837880891750307666957538835987772265423203470320
35414574284186979547279918615463138528857373012909422873337985543251481703142588458496225428399958685025040640668104719182054435234204666795014637429636465589191513531008252999490487456244155152708131163812176636766180791464709291728778401761311579569137381404108683872031696801034926377670277500977166273712460099270941863047012857961274813880798361769748750007950283953226647831778869968028339523030866861316819185255723412246929027776300025653153107176228096059741657645212457588500636349217131455102636923732511984414715497258261712763724042132378125212581931326849887204868306878922887098308630658611179300717869357056255497576238443123666448936047810969252018335604211279458975692203610202538088824608276391191562203757073696967785062170828190965207077645042211077228565992138341353272513710762151477095836158124047196854299729444640258484491817995688121997840577278571340204647190310340487135232427710908989164055898392215935947996406899492353849050050179882511623818838126733061802609316029020559666979598183484235227101106393963262392662996011392632602995214345235464061406104943893266546792844311323221449810177452317812902015501722880222190146954807223407333468105246132783226895592370110973287436098400249313002547075386196743249310239576627971781511313576381088621649177026572416088768888751528229344728712103954532377792828687671126704913554776077365584595062267632797228062234548625308462612124788589175745830897425946644128496776582456147835142105192308184259479161624968276859479641318474200750454038214177355609892946123384279797856646673424043603226912290805743831431941048957524484573932069376479868739894227531433336183856035827858376698321012608104602023146970583654461125207518773311256077812556022556580334995315188080060189038264821637573707701574468414213230386449408323768030689813403357075840113173581923773028020942423195412197015419557507072887665318792842391889421161709356709485792607969400395014296276348072890732240933895427749371183436342303230929686208137192306115
0409402403668284066920335645815769603890931600189625120845560771835017710222988445713995722670892970377791415975424998772977793133120924108755323766471601770964843725827421304729349535336212587039242582503381150992918495310760366078232133800372960134691178665615437284018675587037783965019497398984583781291648236566997741116811234934754542646608973862932050896956712947890625239848619289180051302224085308716715734850608995498117691600907423641124622236235949675965926735290984369155077055324647942699875972019355174794849379024365265476001505043957802797349447782453767742359446787304217770032967959809288342189111153359045680464231699344620995535326063943372491385550455978845273436611631962336651743357242055102619760848116407351488643448217122169718350824452317641509534606434395208225350712889271762643740106849245478364448395994915755050465135468245061369394410933866013068008514339549345174558881983866497072827311379042433413uwb);
+static_assert (b == 47927963254983398275573821596735710148688490586945302151656389894891544659129428967888410488282848368101861900787393734303255909595611040969035422441862500296655015996183400474078033076442208281022919339382663505873362486125052306726201934754009977462857545931535761634915082457969531364063028166780758999692064992454347878021515200637154689307943805765515434945644517436059064850821739923175860513488184741071305928775874657579331194145636134729035837432777505225398881945576787038414037353432531906221463764944822367521370140398750351920450032158498609394040097931192233762919615392739593496142319079251492975289355219161278102593077980694080934774807348815686269838231658663255453109747406854147841668747295898035046214722954204337096637695161230258009467203656917423590804239279208200992286134875490081064276216238601176771626719652470715951261404740555830904552686923119865477301873427026359632829140952933264973522266815070542033531976946547220197973086938491321120720753739960137399906970065546372022920105333218600697858250092770971284041999765371634305856374505354948168051485795619245710565177475544471205491166573508574008824290197617246557203404659741951935583377220245975415117684554899445620844502922298410099631370945492174513316818179053941295889751044787346934407150836832047822816077953368388724034918957631287532906408983549478253389828549312675591697063148899645182358568234280904393370464356625596517001437115695750865735696271243546529127281196748236370851643906557876213318702947945779409043920207097980173253604088090541910037502992188977212850308451093143517148397901877959716663055881990934822377000137739027330737305203072941381961798582398137407044371548508882948736515168678665314156055539763283978378697347560390812910312112592558243537758659944336321765948248602151244471507899974214561619241705438327522143175018570171179348707944798029574180941726592337226502723788420039623849392735910288582594856812800635227346505171247207005920245031905445152
23883210597020030815137180190010710761614323584711553699597828116523308375030752880874260556554000294114387482933620314650175025771392522447314485551886138769369610366952361799423237511161120110145929743974864738826745920081301367926634932873238343191479150224275280335181781391801985516720046712644395959621209541223001293778518062136890474049665922613930058497554039694096818913871363021262147545775742140789927383858341942185009413548927144246178186761296784028125996493895191939393844819317125199657635712365445792693917146881125940044399377910276665272750289560960050247218922683536623490495015689314267469837499232662899360796648520881143806420279769815327484583148797416950239660597980727433509803483610923642782885271125804814178605477832099410064366302955690257083789836787084476679283005279617175049318979990526749252114862510291100335341385194567046476449143659119485495379155979872340339454317225193159740823078324119348862643330839162267076659485471478249411437740316309929864035892814304933433042075734319544405063671020057469142587752686256630569446154270773303123266644310343098947201226826948742747356208023160113154824101829919061653358830317568120181339140908613193890237908395283372036068891294364879201401673702848709244388608738302966480144248443781959129325514267807798197575253533685580508253035624199895286534255077811935683991318836734478888286955521122936540730883397758082343244366276595439621649464503967597230400759067665061520222648151580936746496228695724301211648433792538267641839533248294367510050350781522036755231684311612094630344917721029963155548783110005007523697961096851197456154684465765235460083250390607755209709633679092165333430572216620597071007159901145205151094285815547734715517822239708324124060734998967979492471972630559110535755806855520022267779909943466318515177913646303305517544436565779484987263628066814197055367403242685975398962828035527997260805545733026959584284172696716603061738533813438140240482793627380394701988393657062861641475
55864933364363287875097138128425573909904433183795098670203800533548856219174579901097084123411402160448390274656216062207733804522678116007830485911118338137291415500040244636646228465275546613185451215477214924093897408659253897872331630294361379429268082112519489979283826532913282908147824847781517964779380824918394924322420104717839012960422523766744397106063463998218416521947089619846125464833145312281971994057275917591591279145274837283273569411904875883590818927011083766111368623876288661469697856984023924541117354584710728162060928747544449729071086406072820826707352705098469570212430005031769870770984490147544922541878582516496026055634218534739829767044431114272772863484628968800592047985977005687260574374332608765746965647976405949709304033414442630581488362251756922883517287565772653346189666094175256518980878632057889091042584644510374477219106080358138511257658994752983022904583136418485544787844335722425uwb);
+static_assert (c == 87791074236971898375693906050841211797859249085219857442103255912236679245196526258183737200195092459037070061326325721733862550642013557351987594406882625147809841117910427395663017848973163739949221920509632722884340603422885119715696976800265237608112255164300526997540446828188926798191319956002162809660627367323847324113616574443996958838650961034287596228138677355472599785293194368898640136872193905676042833180111007999534515209684412648660318139544886280584751143487292754141431589178747095995562471836958538385523210889734458760880425568104799106614493746619996750828111038144533532941948866129614927372632772715518890386107307604784595692561493219983504140230663638149893111097283117129890229962472801825879214491853539228859378776045004007387742400087099452897916050111777396577201816014535122598820045644624158286527149042897272352105372777213898166876433661452000011777121121975156955788874837929887554354013884561458544888805370883603979946432160148284956624602056864485481132298410979556139584409013754162565328645118522986963276115172332413247990709194912864261597887926317238337174515380434373640171852377431824028356700876831256026403188874515966503235287201281881985472704629716121576034879585267050059555804094416707718493880164380358501945858703270134092362367309142177220256553194722311416667902879556857136362746535655774542758385903508061686391652646764704409303516129925189046646477158058659410384237683768466978175431224095175917172922387459403459005304585514685192457678645317421021786288543765245133679832091869745757657072739737753868400812388038803350957408363865272082673118089735224503911890557398289369373596931672405246606249458569070420412573471920869840096409845093226225038902560463247683416326435464557790353760020616911131212342731649379841717742423277699156887425640494541631583181218185827647752680912924708890884455751080226880692716971982831514696454008705070066637993306617027027474432542204783110564072207496481031234354733815835208
73055218734115120978678440455896458852497569989966723235965608706826593607128847630137618509151255834742636438796285569873869967729341871213521030011427372987388572674228441333458857512226049283243347521457804912008781036966786374760325341492033297848368160903260470019067535330611645909560888797451907088389764190403007998305168673029446934012245138838180596098559442570696150011296218144186387024615885302290744905340666905921743970013779813332493771192048043297281423248489056841417013807670308191095732464221451376997270745468459702152796818222745730565721202663103043121160101459833683249558684459108862536961994308535039970814557821268170388745941980378838969910592895670554291811739768771829941043857819603751246957962236091154755893962038363120690483862423001038948620681611253867149296463690417828034303547922792249098522404751428960713875050463906134150846089705714470303918299012691600285355859412924847760497076978432722446602521825089097454542343354847347396045079587757210635356999268706465425788833311190517623061860675230010994127196459030322166751571656642321690787471906609473496034789643710478162255664092991251446787887635351852933826820719781733754578161073401362668109819113924252291125741395271474342305574536974918273938513597418963787308994593434191890687730302495910686072338836413159162281072263542758257699588089838677469397467899348065293581751035844389848387161847435160327276066603683131703246410409122832793376751512688745195564021646069245992363396468100513536211651450610523315211697125774638845313243973083536417692075962486918844667432144353019722959653638632948294049984266861870151255315023346724671430499257993958049088066160870545025276597975154855537620265690354041028742742755074396597631965320380782500944568424053420038357524917125099241334990032189526465838192972110970861380060986802081948044345526414857158569939005895236672306344348212805851269920711043891306875873016330601673973249327072503571873518366750575070091051288590764788630190966776854031578
939382690709022667421734442841784680826494146620589862829612704279521637740421694195051400095278084716974624615208392585573200182664157066813849346058321763156523965698465901396025152159642193562900743812715885811057212579017860488539960334406702752688595217360219470968738009774067915037157027492209108801337707562571266897723911401203374308490793226200974353356835311756384895692909802720948968131504604855466961987314701846460342135201914356152591684810924688350929140120187693089324255924634578576427004426339299493833434502951593902551451002292839635000904253250021884625417628756439862964325562720709528784964868687330847894476999577326582332350213148861205413652337499383416531545707272907994755638339630221576707954964236210962693804639714754668679841134928393081284209158098202683744650513918920168330598432362389777471870631039488408769354863001967531729415686631571754649uwb);
+#endif
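The tree-ssa-loop-niter.cc hunk that follows stops trusting the fixed `buf[WIDE_INT_PRINT_BUFFER_SIZE]` once `i_bound` can have more than `WIDE_INT_MAX_INL_ELTS` limbs, and falls back to a dynamically sized buffer for `print_dec`. A minimal standalone sketch of that inline-buffer-with-fallback pattern (plain C++, all names here are illustrative stand-ins; GCC's version uses `XALLOCAVEC`, i.e. `alloca`, rather than a `std::vector`):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical stand-in for GCC's WIDE_INT_PRINT_BUFFER_SIZE.
static const unsigned INLINE_BUF_SIZE = 32;

// Format V into a fixed stack buffer when NEEDED fits, otherwise into
// a dynamically sized buffer of NEEDED bytes.  The caller computes
// NEEDED from the value's length, as the hunk does with
// len * HOST_BITS_PER_WIDE_INT / 4 + 4.
std::string
format_value (unsigned long long v, unsigned needed)
{
  char inline_buf[INLINE_BUF_SIZE];
  std::vector<char> big_buf;
  char *p;
  unsigned avail;
  if (needed > INLINE_BUF_SIZE)
    {
      big_buf.resize (needed);   // GCC: p = XALLOCAVEC (char, needed);
      p = big_buf.data ();
      avail = needed;
    }
  else
    {
      p = inline_buf;
      avail = INLINE_BUF_SIZE;
    }
  std::snprintf (p, avail, "%llu", v);
  return std::string (p);
}
```

The point of the pattern is that the common (small) case stays allocation-free while arbitrarily large values still print correctly.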
--- gcc/tree-ssa-loop-niter.cc.jj	2023-10-04 16:28:04.329782348 +0200
+++ gcc/tree-ssa-loop-niter.cc	2023-10-05 11:36:54.982246797 +0200
@@ -3873,12 +3873,17 @@ do_warn_aggressive_loop_optimizations (c
     return;
 
   gimple *estmt = last_nondebug_stmt (e->src);
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_dec (i_bound, buf, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations))
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
+  unsigned len = i_bound.get_len ();
+  if (len > WIDE_INT_MAX_INL_ELTS)
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  else
+    p = buf;
+  print_dec (i_bound, p, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations))
 	     ? UNSIGNED : SIGNED);
   auto_diagnostic_group d;
   if (warning_at (gimple_location (stmt), OPT_Waggressive_loop_optimizations,
-		  "iteration %s invokes undefined behavior", buf))
+		  "iteration %s invokes undefined behavior", p))
     inform (gimple_location (estmt), "within this loop");
   loop->warned_aggressive_loop_optimizations = true;
 }
@@ -3915,6 +3920,9 @@ record_estimate (class loop *loop, tree
   else
     gcc_checking_assert (i_bound == wi::to_widest (bound));
 
+  if (wi::min_precision (i_bound, SIGNED) > bound_wide_int ().get_precision ())
+    return;
+
   /* If we have a guaranteed upper bound, record it in the appropriate
      list, unless this is an !is_exit bound (i.e. undefined behavior in
      at_stmt) in a loop with known constant number of iterations.  */
@@ -3925,7 +3933,7 @@ record_estimate (class loop *loop, tree
     {
       class nb_iter_bound *elt = ggc_alloc<nb_iter_bound> ();
 
-      elt->bound = i_bound;
+      elt->bound = bound_wide_int::from (i_bound, SIGNED);
       elt->stmt = at_stmt;
       elt->is_exit = is_exit;
       elt->next = loop->bounds;
@@ -4410,8 +4418,8 @@ infer_loop_bounds_from_undefined (class
 static int
 wide_int_cmp (const void *p1, const void *p2)
 {
-  const widest_int *d1 = (const widest_int *) p1;
-  const widest_int *d2 = (const widest_int *) p2;
+  const bound_wide_int *d1 = (const bound_wide_int *) p1;
+  const bound_wide_int *d2 = (const bound_wide_int *) p2;
   return wi::cmpu (*d1, *d2);
 }
 
@@ -4419,7 +4427,7 @@ wide_int_cmp (const void *p1, const void
    Lookup by binary search.  */
 
 static int
-bound_index (const vec<widest_int> &bounds, const widest_int &bound)
+bound_index (const vec<bound_wide_int> &bounds, const bound_wide_int &bound)
 {
   unsigned int end = bounds.length ();
   unsigned int begin = 0;
@@ -4428,7 +4436,7 @@ bound_index (const vec<widest_int> &boun
   while (begin != end)
     {
       unsigned int middle = (begin + end) / 2;
-      widest_int index = bounds[middle];
+      bound_wide_int index = bounds[middle];
 
       if (index == bound)
 	return middle;
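The `bound_index` hunk above only retypes the elements; the algorithm itself is an ordinary binary search over a sorted vector. The same logic, with `std::vector<uint64_t>` standing in for `vec<bound_wide_int>` and `operator<` for `wi::cmpu`:

```cpp
#include <cstdint>
#include <vector>

// Binary search for BOUND in the sorted vector BOUNDS, returning its
// index.  Returns -1 where GCC's version calls gcc_unreachable (),
// since there the bound is guaranteed to be present.
static int
bound_index (const std::vector<uint64_t> &bounds, uint64_t bound)
{
  unsigned begin = 0;
  unsigned end = (unsigned) bounds.size ();
  while (begin != end)
    {
      unsigned middle = (begin + end) / 2;
      uint64_t index = bounds[middle];
      if (index == bound)
	return middle;
      else if (index < bound)	// wi::cmpu: unsigned comparison
	begin = middle + 1;
      else
	end = middle;
    }
  return -1;
}
```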
@@ -4450,7 +4458,7 @@ static void
 discover_iteration_bound_by_body_walk (class loop *loop)
 {
   class nb_iter_bound *elt;
-  auto_vec<widest_int> bounds;
+  auto_vec<bound_wide_int> bounds;
   vec<vec<basic_block> > queues = vNULL;
   vec<basic_block> queue = vNULL;
   ptrdiff_t queue_index;
@@ -4459,7 +4467,7 @@ discover_iteration_bound_by_body_walk (c
   /* Discover what bounds may interest us.  */
   for (elt = loop->bounds; elt; elt = elt->next)
     {
-      widest_int bound = elt->bound;
+      bound_wide_int bound = elt->bound;
 
       /* Exit terminates loop at given iteration, while non-exits produce undefined
 	 effect on the next iteration.  */
@@ -4492,7 +4500,7 @@ discover_iteration_bound_by_body_walk (c
   hash_map<basic_block, ptrdiff_t> bb_bounds;
   for (elt = loop->bounds; elt; elt = elt->next)
     {
-      widest_int bound = elt->bound;
+      bound_wide_int bound = elt->bound;
       if (!elt->is_exit)
 	{
 	  bound += 1;
@@ -4601,7 +4609,8 @@ discover_iteration_bound_by_body_walk (c
 	  print_decu (bounds[latch_index], dump_file);
 	  fprintf (dump_file, "\n");
 	}
-      record_niter_bound (loop, bounds[latch_index], false, true);
+      record_niter_bound (loop, widest_int::from (bounds[latch_index],
+						  SIGNED), false, true);
     }
 
   queues.release ();
@@ -4704,7 +4713,8 @@ maybe_lower_iteration_bound (class loop
       if (dump_file && (dump_flags & TDF_DETAILS))
 	fprintf (dump_file, "Reducing loop iteration estimate by 1; "
 		 "undefined statement must be executed at the last iteration.\n");
-      record_niter_bound (loop, loop->nb_iterations_upper_bound - 1,
+      record_niter_bound (loop, widest_int::from (loop->nb_iterations_upper_bound,
+						  SIGNED) - 1,
 			  false, true);
     }
 
@@ -4860,10 +4870,13 @@ estimate_numbers_of_iterations (class lo
      not break code with undefined behavior by not recording smaller
      maximum number of iterations.  */
   if (loop->nb_iterations
-      && TREE_CODE (loop->nb_iterations) == INTEGER_CST)
+      && TREE_CODE (loop->nb_iterations) == INTEGER_CST
+      && (wi::min_precision (wi::to_widest (loop->nb_iterations), SIGNED)
+	  <= bound_wide_int ().get_precision ()))
     {
       loop->any_upper_bound = true;
-      loop->nb_iterations_upper_bound = wi::to_widest (loop->nb_iterations);
+      loop->nb_iterations_upper_bound
+        = bound_wide_int::from (wi::to_widest (loop->nb_iterations), SIGNED);
     }
 }
 
@@ -5114,7 +5127,7 @@ n_of_executions_at_most (gimple *stmt,
 			 class nb_iter_bound *niter_bound,
 			 tree niter)
 {
-  widest_int bound = niter_bound->bound;
+  widest_int bound = widest_int::from (niter_bound->bound, SIGNED);
   tree nit_type = TREE_TYPE (niter), e;
   enum tree_code cmp;
 
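The guards added above in `record_estimate` and `estimate_numbers_of_iterations` follow one rule: a bound computed in the wide type is recorded only if `wi::min_precision` says it fits the narrower `bound_wide_int` storage, and is silently dropped otherwise, which can only cost an optimization opportunity, never correctness. A plain-integer sketch of that clamp-or-drop pattern (here `int64_t` stands in for `widest_int` and `int32_t` for `bound_wide_int`; all names are illustrative, not GCC code):

```cpp
#include <cstdint>
#include <limits>

struct loop_info
{
  bool any_upper_bound = false;
  int32_t nb_iterations_upper_bound = 0;  // narrow "bound_wide_int" stand-in
};

// Record I_BOUND as an upper bound on the iteration count of LOOP,
// keeping only the smallest bound seen, and dropping bounds that do
// not fit the narrow storage type.
static void
record_upper_bound (loop_info &loop, int64_t i_bound)
{
  // Analogue of: if (wi::min_precision (i_bound, SIGNED)
  //                  > bound_wide_int ().get_precision ()) return;
  if (i_bound > std::numeric_limits<int32_t>::max ()
      || i_bound < std::numeric_limits<int32_t>::min ())
    return;
  int32_t bound = static_cast<int32_t> (i_bound);  // ~ bound_wide_int::from
  if (!loop.any_upper_bound || bound < loop.nb_iterations_upper_bound)
    {
      loop.any_upper_bound = true;
      loop.nb_iterations_upper_bound = bound;
    }
}
```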
--- gcc/cfgloop.h.jj	2023-10-04 16:28:04.010786695 +0200
+++ gcc/cfgloop.h	2023-10-05 11:36:55.065245659 +0200
@@ -44,6 +44,9 @@ enum iv_extend_code
   IV_UNKNOWN_EXTEND
 };
 
+typedef generic_wide_int <fixed_wide_int_storage <WIDE_INT_MAX_INL_PRECISION> >
+  bound_wide_int;
+
 /* The structure describing a bound on number of iterations of a loop.  */
 
 class GTY ((chain_next ("%h.next"))) nb_iter_bound {
@@ -58,7 +61,7 @@ public:
         overflows (as MAX + 1 is sometimes produced as the estimate on number
 	of executions of STMT).
      b) it is consistent with the result of number_of_iterations_exit.  */
-  widest_int bound;
+  bound_wide_int bound;
 
   /* True if, after executing the statement BOUND + 1 times, we will
      leave the loop; that is, all the statements after it are executed at most
@@ -161,14 +164,14 @@ public:
 
   /* An integer guaranteed to be greater or equal to nb_iterations.  Only
      valid if any_upper_bound is true.  */
-  widest_int nb_iterations_upper_bound;
+  bound_wide_int nb_iterations_upper_bound;
 
-  widest_int nb_iterations_likely_upper_bound;
+  bound_wide_int nb_iterations_likely_upper_bound;
 
   /* An integer giving an estimate on nb_iterations.  Unlike
      nb_iterations_upper_bound, there is no guarantee that it is at least
      nb_iterations.  */
-  widest_int nb_iterations_estimate;
+  bound_wide_int nb_iterations_estimate;
 
   /* If > 0, an integer, where the user asserted that for any
      I in [ 0, nb_iterations ) and for any J in
--- gcc/tree.h.jj	2023-10-04 16:28:04.403781340 +0200
+++ gcc/tree.h	2023-10-05 11:36:54.793249388 +0200
@@ -6258,13 +6258,17 @@ namespace wi
   template <int N>
   struct int_traits <extended_tree <N> >
   {
-    static const enum precision_type precision_type = CONST_PRECISION;
+    static const enum precision_type precision_type
+      = N == ADDR_MAX_PRECISION ? CONST_PRECISION : WIDEST_CONST_PRECISION;
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
     static const unsigned int precision = N;
+    static const unsigned int inl_precision
+      = N == ADDR_MAX_PRECISION ? 0
+	     : N / WIDEST_INT_MAX_PRECISION * WIDE_INT_MAX_INL_PRECISION;
   };
 
-  typedef extended_tree <WIDE_INT_MAX_PRECISION> widest_extended_tree;
+  typedef extended_tree <WIDEST_INT_MAX_PRECISION> widest_extended_tree;
   typedef extended_tree <ADDR_MAX_PRECISION> offset_extended_tree;
 
   typedef const generic_wide_int <widest_extended_tree> tree_to_widest_ref;
@@ -6292,7 +6296,8 @@ namespace wi
   tree_to_poly_wide_ref to_poly_wide (const_tree);
 
   template <int N>
-  struct ints_for <generic_wide_int <extended_tree <N> >, CONST_PRECISION>
+  struct ints_for <generic_wide_int <extended_tree <N> >,
+		   int_traits <extended_tree <N> >::precision_type>
   {
     typedef generic_wide_int <extended_tree <N> > extended;
     static extended zero (const extended &);
@@ -6308,7 +6313,7 @@ namespace wi
 
 /* Used to convert a tree to a widest2_int like this:
    widest2_int foo = widest2_int_cst (some_tree).  */
-typedef generic_wide_int <wi::extended_tree <WIDE_INT_MAX_PRECISION * 2> >
+typedef generic_wide_int <wi::extended_tree <WIDEST_INT_MAX_PRECISION * 2> >
   widest2_int_cst;
 
 /* Refer to INTEGER_CST T as though it were a widest_int.
@@ -6444,7 +6449,7 @@ wi::extended_tree <N>::get_len () const
 {
   if (N == ADDR_MAX_PRECISION)
     return TREE_INT_CST_OFFSET_NUNITS (m_t);
-  else if (N >= WIDE_INT_MAX_PRECISION)
+  else if (N >= WIDEST_INT_MAX_PRECISION)
     return TREE_INT_CST_EXT_NUNITS (m_t);
   else
     /* This class is designed to be used for specific output precisions
@@ -6530,7 +6535,8 @@ wi::to_poly_wide (const_tree t)
 template <int N>
 inline generic_wide_int <wi::extended_tree <N> >
 wi::ints_for <generic_wide_int <wi::extended_tree <N> >,
-	      wi::CONST_PRECISION>::zero (const extended &x)
+	      wi::int_traits <wi::extended_tree <N> >::precision_type
+	     >::zero (const extended &x)
 {
   return build_zero_cst (TREE_TYPE (x.get_tree ()));
 }
--- gcc/cfgloop.cc.jj	2023-10-04 16:28:03.991786955 +0200
+++ gcc/cfgloop.cc	2023-10-05 11:36:55.157244398 +0200
@@ -1895,33 +1895,38 @@ void
 record_niter_bound (class loop *loop, const widest_int &i_bound,
 		    bool realistic, bool upper)
 {
+  if (wi::min_precision (i_bound, SIGNED) > bound_wide_int ().get_precision ())
+    return;
+
+  bound_wide_int bound = bound_wide_int::from (i_bound, SIGNED);
+
   /* Update the bounds only when there is no previous estimation, or when the
      current estimation is smaller.  */
   if (upper
       && (!loop->any_upper_bound
-	  || wi::ltu_p (i_bound, loop->nb_iterations_upper_bound)))
+	  || wi::ltu_p (bound, loop->nb_iterations_upper_bound)))
     {
       loop->any_upper_bound = true;
-      loop->nb_iterations_upper_bound = i_bound;
+      loop->nb_iterations_upper_bound = bound;
       if (!loop->any_likely_upper_bound)
 	{
 	  loop->any_likely_upper_bound = true;
-	  loop->nb_iterations_likely_upper_bound = i_bound;
+	  loop->nb_iterations_likely_upper_bound = bound;
 	}
     }
   if (realistic
       && (!loop->any_estimate
-	  || wi::ltu_p (i_bound, loop->nb_iterations_estimate)))
+	  || wi::ltu_p (bound, loop->nb_iterations_estimate)))
     {
       loop->any_estimate = true;
-      loop->nb_iterations_estimate = i_bound;
+      loop->nb_iterations_estimate = bound;
     }
   if (!realistic
       && (!loop->any_likely_upper_bound
-          || wi::ltu_p (i_bound, loop->nb_iterations_likely_upper_bound)))
+          || wi::ltu_p (bound, loop->nb_iterations_likely_upper_bound)))
     {
       loop->any_likely_upper_bound = true;
-      loop->nb_iterations_likely_upper_bound = i_bound;
+      loop->nb_iterations_likely_upper_bound = bound;
     }
 
   /* If an upper bound is smaller than the realistic estimate of the
@@ -2018,7 +2023,7 @@ get_estimated_loop_iterations (class loo
       return false;
     }
 
-  *nit = loop->nb_iterations_estimate;
+  *nit = widest_int::from (loop->nb_iterations_estimate, SIGNED);
   return true;
 }
 
@@ -2032,7 +2037,7 @@ get_max_loop_iterations (const class loo
   if (!loop->any_upper_bound)
     return false;
 
-  *nit = loop->nb_iterations_upper_bound;
+  *nit = widest_int::from (loop->nb_iterations_upper_bound, SIGNED);
   return true;
 }
 
@@ -2066,7 +2071,7 @@ get_likely_max_loop_iterations (class lo
   if (!loop->any_likely_upper_bound)
     return false;
 
-  *nit = loop->nb_iterations_likely_upper_bound;
+  *nit = widest_int::from (loop->nb_iterations_likely_upper_bound, SIGNED);
   return true;
 }
 
--- gcc/gimple-ssa-strength-reduction.cc.jj	2023-01-02 09:32:29.884176934 +0100
+++ gcc/gimple-ssa-strength-reduction.cc	2023-10-05 14:45:14.554340423 +0200
@@ -238,7 +238,7 @@ public:
   tree stride;
 
   /* The index constant i.  */
-  widest_int index;
+  offset_int index;
 
   /* The type of the candidate.  This is normally the type of base_expr,
      but casts may have occurred when combining feeding instructions.
@@ -333,7 +333,7 @@ class incr_info_d
 {
 public:
   /* The increment that relates a candidate to its basis.  */
-  widest_int incr;
+  offset_int incr;
 
   /* How many times the increment occurs in the candidate tree.  */
   unsigned count;
@@ -677,7 +677,7 @@ record_potential_basis (slsr_cand_t c, t
 
 static slsr_cand_t
 alloc_cand_and_find_basis (enum cand_kind kind, gimple *gs, tree base,
-			   const widest_int &index, tree stride, tree ctype,
+			   const offset_int &index, tree stride, tree ctype,
 			   tree stype, unsigned savings)
 {
   slsr_cand_t c = (slsr_cand_t) obstack_alloc (&cand_obstack,
@@ -893,7 +893,7 @@ slsr_process_phi (gphi *phi, bool speed)
    int (i * S).
    Otherwise, just return double int zero.  */
 
-static widest_int
+static offset_int
 backtrace_base_for_ref (tree *pbase)
 {
   tree base_in = *pbase;
@@ -922,7 +922,7 @@ backtrace_base_for_ref (tree *pbase)
 	{
 	  /* X = B + (1 * S), S is integer constant.  */
 	  *pbase = base_cand->base_expr;
-	  return wi::to_widest (base_cand->stride);
+	  return wi::to_offset (base_cand->stride);
 	}
       else if (base_cand->kind == CAND_ADD
 	       && TREE_CODE (base_cand->stride) == INTEGER_CST
@@ -966,13 +966,13 @@ backtrace_base_for_ref (tree *pbase)
     *PINDEX:   C1 + (C2 * C3) + C4 + (C5 * C3)  */
 
 static bool
-restructure_reference (tree *pbase, tree *poffset, widest_int *pindex,
+restructure_reference (tree *pbase, tree *poffset, offset_int *pindex,
 		       tree *ptype)
 {
   tree base = *pbase, offset = *poffset;
-  widest_int index = *pindex;
+  offset_int index = *pindex;
   tree mult_op0, t1, t2, type;
-  widest_int c1, c2, c3, c4, c5;
+  offset_int c1, c2, c3, c4, c5;
   offset_int mem_offset;
 
   if (!base
@@ -985,18 +985,18 @@ restructure_reference (tree *pbase, tree
     return false;
 
   t1 = TREE_OPERAND (base, 0);
-  c1 = widest_int::from (mem_offset, SIGNED);
+  c1 = offset_int::from (mem_offset, SIGNED);
   type = TREE_TYPE (TREE_OPERAND (base, 1));
 
   mult_op0 = TREE_OPERAND (offset, 0);
-  c3 = wi::to_widest (TREE_OPERAND (offset, 1));
+  c3 = wi::to_offset (TREE_OPERAND (offset, 1));
 
   if (TREE_CODE (mult_op0) == PLUS_EXPR)
 
     if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) == INTEGER_CST)
       {
 	t2 = TREE_OPERAND (mult_op0, 0);
-	c2 = wi::to_widest (TREE_OPERAND (mult_op0, 1));
+	c2 = wi::to_offset (TREE_OPERAND (mult_op0, 1));
       }
     else
       return false;
@@ -1006,7 +1006,7 @@ restructure_reference (tree *pbase, tree
     if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) == INTEGER_CST)
       {
 	t2 = TREE_OPERAND (mult_op0, 0);
-	c2 = -wi::to_widest (TREE_OPERAND (mult_op0, 1));
+	c2 = -wi::to_offset (TREE_OPERAND (mult_op0, 1));
       }
     else
       return false;
@@ -1057,7 +1057,7 @@ slsr_process_ref (gimple *gs)
   HOST_WIDE_INT cbitpos;
   if (reversep || !bitpos.is_constant (&cbitpos))
     return;
-  widest_int index = cbitpos;
+  offset_int index = cbitpos;
 
   if (!restructure_reference (&base, &offset, &index, &type))
     return;
@@ -1079,7 +1079,7 @@ create_mul_ssa_cand (gimple *gs, tree ba
 {
   tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE;
   tree stype = NULL_TREE;
-  widest_int index;
+  offset_int index;
   unsigned savings = 0;
   slsr_cand_t c;
   slsr_cand_t base_cand = base_cand_from_table (base_in);
@@ -1112,7 +1112,7 @@ create_mul_ssa_cand (gimple *gs, tree ba
 	     ============================
 	     X = B + ((i' * S) * Z)  */
 	  base = base_cand->base_expr;
-	  index = base_cand->index * wi::to_widest (base_cand->stride);
+	  index = base_cand->index * wi::to_offset (base_cand->stride);
 	  stride = stride_in;
 	  ctype = base_cand->cand_type;
 	  stype = TREE_TYPE (stride_in);
@@ -1149,7 +1149,7 @@ static slsr_cand_t
 create_mul_imm_cand (gimple *gs, tree base_in, tree stride_in, bool speed)
 {
   tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE;
-  widest_int index, temp;
+  offset_int index, temp;
   unsigned savings = 0;
   slsr_cand_t c;
   slsr_cand_t base_cand = base_cand_from_table (base_in);
@@ -1165,7 +1165,7 @@ create_mul_imm_cand (gimple *gs, tree ba
 	     X = Y * c
 	     ============================
 	     X = (B + i') * (S * c)  */
-	  temp = wi::to_widest (base_cand->stride) * wi::to_widest (stride_in);
+	  temp = wi::to_offset (base_cand->stride) * wi::to_offset (stride_in);
 	  if (wi::fits_to_tree_p (temp, TREE_TYPE (stride_in)))
 	    {
 	      base = base_cand->base_expr;
@@ -1200,7 +1200,7 @@ create_mul_imm_cand (gimple *gs, tree ba
 	     ===========================
 	     X = (B + S) * c  */
 	  base = base_cand->base_expr;
-	  index = wi::to_widest (base_cand->stride);
+	  index = wi::to_offset (base_cand->stride);
 	  stride = stride_in;
 	  ctype = base_cand->cand_type;
 	  if (has_single_use (base_in))
@@ -1281,7 +1281,7 @@ create_add_ssa_cand (gimple *gs, tree ba
 {
   tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE;
   tree stype = NULL_TREE;
-  widest_int index;
+  offset_int index;
   unsigned savings = 0;
   slsr_cand_t c;
   slsr_cand_t base_cand = base_cand_from_table (base_in);
@@ -1300,7 +1300,7 @@ create_add_ssa_cand (gimple *gs, tree ba
 	     ===========================
 	     X = Y + ((+/-1 * S) * B)  */
 	  base = base_in;
-	  index = wi::to_widest (addend_cand->stride);
+	  index = wi::to_offset (addend_cand->stride);
 	  if (subtract_p)
 	    index = -index;
 	  stride = addend_cand->base_expr;
@@ -1350,7 +1350,7 @@ create_add_ssa_cand (gimple *gs, tree ba
 		     ===========================
 		     Value:  X = Y + ((-1 * S) * B)  */
 		  base = base_in;
-		  index = wi::to_widest (subtrahend_cand->stride);
+		  index = wi::to_offset (subtrahend_cand->stride);
 		  index = -index;
 		  stride = subtrahend_cand->base_expr;
 		  ctype = TREE_TYPE (base_in);
@@ -1389,13 +1389,13 @@ create_add_ssa_cand (gimple *gs, tree ba
    about BASE_IN into the new candidate.  Return the new candidate.  */
 
 static slsr_cand_t
-create_add_imm_cand (gimple *gs, tree base_in, const widest_int &index_in,
+create_add_imm_cand (gimple *gs, tree base_in, const offset_int &index_in,
 		     bool speed)
 {
   enum cand_kind kind = CAND_ADD;
   tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE;
   tree stype = NULL_TREE;
-  widest_int index, multiple;
+  offset_int index, multiple;
   unsigned savings = 0;
   slsr_cand_t c;
   slsr_cand_t base_cand = base_cand_from_table (base_in);
@@ -1405,7 +1405,7 @@ create_add_imm_cand (gimple *gs, tree ba
       signop sign = TYPE_SIGN (TREE_TYPE (base_cand->stride));
 
       if (TREE_CODE (base_cand->stride) == INTEGER_CST
-	  && wi::multiple_of_p (index_in, wi::to_widest (base_cand->stride),
+	  && wi::multiple_of_p (index_in, wi::to_offset (base_cand->stride),
 				sign, &multiple))
 	{
 	  /* Y = (B + i') * S, S constant, c = kS for some integer k
@@ -1494,7 +1494,7 @@ slsr_process_add (gimple *gs, tree rhs1,
   else if (TREE_CODE (rhs2) == INTEGER_CST)
     {
       /* Record an interpretation for the add-immediate.  */
-      widest_int index = wi::to_widest (rhs2);
+      offset_int index = wi::to_offset (rhs2);
       if (subtract_p)
 	index = -index;
 
@@ -2079,7 +2079,7 @@ phi_dependent_cand_p (slsr_cand_t c)
 /* Calculate the increment required for candidate C relative to 
    its basis.  */
 
-static widest_int
+static offset_int
 cand_increment (slsr_cand_t c)
 {
   slsr_cand_t basis;
@@ -2102,10 +2102,10 @@ cand_increment (slsr_cand_t c)
    for this candidate, return the absolute value of that increment
    instead.  */
 
-static inline widest_int
+static inline offset_int
 cand_abs_increment (slsr_cand_t c)
 {
-  widest_int increment = cand_increment (c);
+  offset_int increment = cand_increment (c);
 
   if (!address_arithmetic_p && wi::neg_p (increment))
     increment = -increment;
@@ -2126,7 +2126,7 @@ cand_already_replaced (slsr_cand_t c)
    replace_conditional_candidate.  */
 
 static void
-replace_mult_candidate (slsr_cand_t c, tree basis_name, widest_int bump)
+replace_mult_candidate (slsr_cand_t c, tree basis_name, offset_int bump)
 {
   tree target_type = TREE_TYPE (gimple_assign_lhs (c->cand_stmt));
   enum tree_code cand_code = gimple_assign_rhs_code (c->cand_stmt);
@@ -2245,7 +2245,7 @@ replace_unconditional_candidate (slsr_ca
     return;
 
   basis = lookup_cand (c->basis);
-  widest_int bump = cand_increment (c) * wi::to_widest (c->stride);
+  offset_int bump = cand_increment (c) * wi::to_offset (c->stride);
 
   replace_mult_candidate (c, gimple_assign_lhs (basis->cand_stmt), bump);
 }
@@ -2255,7 +2255,7 @@ replace_unconditional_candidate (slsr_ca
    MAX_INCR_VEC_LEN increments have been found.  */
 
 static inline int
-incr_vec_index (const widest_int &increment)
+incr_vec_index (const offset_int &increment)
 {
   unsigned i;
   
@@ -2275,7 +2275,7 @@ incr_vec_index (const widest_int &increm
 
 static tree
 create_add_on_incoming_edge (slsr_cand_t c, tree basis_name,
-			     widest_int increment, edge e, location_t loc,
+			     offset_int increment, edge e, location_t loc,
 			     bool known_stride)
 {
   tree lhs, basis_type;
@@ -2299,7 +2299,7 @@ create_add_on_incoming_edge (slsr_cand_t
     {
       tree bump_tree;
       enum tree_code code = plus_code;
-      widest_int bump = increment * wi::to_widest (c->stride);
+      offset_int bump = increment * wi::to_offset (c->stride);
       if (wi::neg_p (bump) && !POINTER_TYPE_P (basis_type))
 	{
 	  code = MINUS_EXPR;
@@ -2427,7 +2427,7 @@ create_phi_basis_1 (slsr_cand_t c, gimpl
 	  feeding_def = gimple_assign_lhs (basis->cand_stmt);
 	else
 	  {
-	    widest_int incr = -basis->index;
+	    offset_int incr = -basis->index;
 	    feeding_def = create_add_on_incoming_edge (c, basis_name, incr,
 						       e, loc, known_stride);
 	  }
@@ -2444,7 +2444,7 @@ create_phi_basis_1 (slsr_cand_t c, gimpl
 	  else
 	    {
 	      slsr_cand_t arg_cand = base_cand_from_table (arg);
-	      widest_int diff = arg_cand->index - basis->index;
+	      offset_int diff = arg_cand->index - basis->index;
 	      feeding_def = create_add_on_incoming_edge (c, basis_name, diff,
 							 e, loc, known_stride);
 	    }
@@ -2525,7 +2525,7 @@ replace_conditional_candidate (slsr_cand
 			   basis_name, loc, KNOWN_STRIDE);
 
   /* Replace C with an add of the new basis phi and a constant.  */
-  widest_int bump = c->index * wi::to_widest (c->stride);
+  offset_int bump = c->index * wi::to_offset (c->stride);
 
   replace_mult_candidate (c, name, bump);
 }
@@ -2614,7 +2614,7 @@ replace_uncond_cands_and_profitable_phis
     {
       /* A multiply candidate with a stride of 1 is just an artifice
 	 of a copy or cast; there is no value in replacing it.  */
-      if (c->kind == CAND_MULT && wi::to_widest (c->stride) != 1)
+      if (c->kind == CAND_MULT && wi::to_offset (c->stride) != 1)
 	{
 	  /* A candidate dependent upon a phi will replace a multiply by 
 	     a constant with an add, and will insert at most one add for
@@ -2681,7 +2681,7 @@ count_candidates (slsr_cand_t c)
    candidates with the same increment, also record T_0 for subsequent use.  */
 
 static void
-record_increment (slsr_cand_t c, widest_int increment, bool is_phi_adjust)
+record_increment (slsr_cand_t c, offset_int increment, bool is_phi_adjust)
 {
   bool found = false;
   unsigned i;
@@ -2786,7 +2786,7 @@ record_phi_increments_1 (slsr_cand_t bas
 	record_phi_increments_1 (basis, arg_def);
       else
 	{
-	  widest_int diff;
+	  offset_int diff;
 
 	  if (operand_equal_p (arg, phi_cand->base_expr, 0))
 	    {
@@ -2856,7 +2856,7 @@ record_increments (slsr_cand_t c)
 /* Recursive helper function for phi_incr_cost.  */
 
 static int
-phi_incr_cost_1 (slsr_cand_t c, const widest_int &incr, gimple *phi,
+phi_incr_cost_1 (slsr_cand_t c, const offset_int &incr, gimple *phi,
 		 int *savings)
 {
   unsigned i;
@@ -2883,7 +2883,7 @@ phi_incr_cost_1 (slsr_cand_t c, const wi
 	}
       else
 	{
-	  widest_int diff;
+	  offset_int diff;
 	  slsr_cand_t arg_cand;
 
 	  /* When the PHI argument is just a pass-through to the base
@@ -2925,7 +2925,7 @@ phi_incr_cost_1 (slsr_cand_t c, const wi
    uses.  */
 
 static int
-phi_incr_cost (slsr_cand_t c, const widest_int &incr, gimple *phi,
+phi_incr_cost (slsr_cand_t c, const offset_int &incr, gimple *phi,
 	       int *savings)
 {
   int retval = phi_incr_cost_1 (c, incr, phi, savings);
@@ -2981,10 +2981,10 @@ optimize_cands_for_speed_p (slsr_cand_t
 
 static int
 lowest_cost_path (int cost_in, int repl_savings, slsr_cand_t c,
-		  const widest_int &incr, bool count_phis)
+		  const offset_int &incr, bool count_phis)
 {
   int local_cost, sib_cost, savings = 0;
-  widest_int cand_incr = cand_abs_increment (c);
+  offset_int cand_incr = cand_abs_increment (c);
 
   if (cand_already_replaced (c))
     local_cost = cost_in;
@@ -3027,11 +3027,11 @@ lowest_cost_path (int cost_in, int repl_
    would go dead.  */
 
 static int
-total_savings (int repl_savings, slsr_cand_t c, const widest_int &incr,
+total_savings (int repl_savings, slsr_cand_t c, const offset_int &incr,
 	       bool count_phis)
 {
   int savings = 0;
-  widest_int cand_incr = cand_abs_increment (c);
+  offset_int cand_incr = cand_abs_increment (c);
 
   if (incr == cand_incr && !cand_already_replaced (c))
     savings += repl_savings + c->dead_savings;
@@ -3239,7 +3239,7 @@ ncd_for_two_cands (basic_block bb1, basi
    candidates, return the earliest candidate in the block in *WHERE.  */
 
 static basic_block
-ncd_with_phi (slsr_cand_t c, const widest_int &incr, gphi *phi,
+ncd_with_phi (slsr_cand_t c, const offset_int &incr, gphi *phi,
 	      basic_block ncd, slsr_cand_t *where)
 {
   unsigned i;
@@ -3255,7 +3255,7 @@ ncd_with_phi (slsr_cand_t c, const wides
 	ncd = ncd_with_phi (c, incr, as_a <gphi *> (arg_def), ncd, where);
       else 
 	{
-	  widest_int diff;
+	  offset_int diff;
 
 	  if (operand_equal_p (arg, phi_cand->base_expr, 0))
 	    diff = -basis->index;
@@ -3282,7 +3282,7 @@ ncd_with_phi (slsr_cand_t c, const wides
    return the earliest candidate in the block in *WHERE.  */
 
 static basic_block
-ncd_of_cand_and_phis (slsr_cand_t c, const widest_int &incr, slsr_cand_t *where)
+ncd_of_cand_and_phis (slsr_cand_t c, const offset_int &incr, slsr_cand_t *where)
 {
   basic_block ncd = NULL;
 
@@ -3308,7 +3308,7 @@ ncd_of_cand_and_phis (slsr_cand_t c, con
    *WHERE.  */
 
 static basic_block
-nearest_common_dominator_for_cands (slsr_cand_t c, const widest_int &incr,
+nearest_common_dominator_for_cands (slsr_cand_t c, const offset_int &incr,
 				    slsr_cand_t *where)
 {
   basic_block sib_ncd = NULL, dep_ncd = NULL, this_ncd = NULL, ncd;
@@ -3385,7 +3385,7 @@ insert_initializers (slsr_cand_t c)
       gassign *init_stmt;
       gassign *cast_stmt = NULL;
       tree new_name, incr_tree, init_stride;
-      widest_int incr = incr_vec[i].incr;
+      offset_int incr = incr_vec[i].incr;
 
       if (!profitable_increment_p (i)
 	  || incr == 1
@@ -3550,7 +3550,7 @@ all_phi_incrs_profitable_1 (slsr_cand_t
       else
 	{
 	  int j;
-	  widest_int increment;
+	  offset_int increment;
 
 	  if (operand_equal_p (arg, phi_cand->base_expr, 0))
 	    increment = -basis->index;
@@ -3681,7 +3681,7 @@ replace_one_candidate (slsr_cand_t c, un
   tree orig_rhs1, orig_rhs2;
   tree rhs2;
   enum tree_code orig_code, repl_code;
-  widest_int cand_incr;
+  offset_int cand_incr;
 
   orig_code = gimple_assign_rhs_code (c->cand_stmt);
   orig_rhs1 = gimple_assign_rhs1 (c->cand_stmt);
@@ -3839,7 +3839,7 @@ replace_profitable_candidates (slsr_cand
 {
   if (!cand_already_replaced (c))
     {
-      widest_int increment = cand_abs_increment (c);
+      offset_int increment = cand_abs_increment (c);
       enum tree_code orig_code = gimple_assign_rhs_code (c->cand_stmt);
       int i;
 
--- gcc/real.cc.jj	2023-10-04 16:28:04.263783248 +0200
+++ gcc/real.cc	2023-10-05 11:36:54.902247893 +0200
@@ -1477,7 +1477,7 @@ real_to_integer (const REAL_VALUE_TYPE *
 wide_int
 real_to_integer (const REAL_VALUE_TYPE *r, bool *fail, int precision)
 {
-  HOST_WIDE_INT val[2 * WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT valb[WIDE_INT_MAX_INL_ELTS], *val;
   int exp;
   int words, w;
   wide_int result;
@@ -1516,7 +1516,11 @@ real_to_integer (const REAL_VALUE_TYPE *
 	 is the smallest HWI-multiple that has at least PRECISION bits.
 	 This ensures that the top bit of the significand is in the
 	 top bit of the wide_int.  */
-      words = (precision + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT;
+      words = ((precision + HOST_BITS_PER_WIDE_INT - 1)
+	       / HOST_BITS_PER_WIDE_INT);
+      val = valb;
+      if (UNLIKELY (words > WIDE_INT_MAX_INL_ELTS))
+	val = XALLOCAVEC (HOST_WIDE_INT, words);
       w = words * HOST_BITS_PER_WIDE_INT;
 
 #if (HOST_BITS_PER_WIDE_INT == HOST_BITS_PER_LONG)
--- gcc/omp-general.cc.jj	2023-10-04 16:28:04.218783861 +0200
+++ gcc/omp-general.cc	2023-10-05 11:36:55.169244233 +0200
@@ -1986,13 +1986,17 @@ omp_get_context_selector (tree ctx, cons
   return NULL_TREE;
 }
 
+/* Needs to be a GC-friendly widest_int variant, but the precision should
+   be the same on all targets.  */
+typedef generic_wide_int <fixed_wide_int_storage <1024> > score_wide_int;
+
 /* Compute *SCORE for context selector CTX.  Return true if the score
    would be different depending on whether it is a declare simd clone or
    not.  DECLARE_SIMD should be true for the case when it would be
    a declare simd clone.  */
 
 static bool
-omp_context_compute_score (tree ctx, widest_int *score, bool declare_simd)
+omp_context_compute_score (tree ctx, score_wide_int *score, bool declare_simd)
 {
   tree construct = omp_get_context_selector (ctx, "construct", NULL);
   bool has_kind = omp_get_context_selector (ctx, "device", "kind");
@@ -2007,7 +2011,8 @@ omp_context_compute_score (tree ctx, wid
 	  if (TREE_PURPOSE (t3)
 	      && strcmp (IDENTIFIER_POINTER (TREE_PURPOSE (t3)), " score") == 0
 	      && TREE_CODE (TREE_VALUE (t3)) == INTEGER_CST)
-	    *score += wi::to_widest (TREE_VALUE (t3));
+	    *score += score_wide_int::from (wi::to_wide (TREE_VALUE (t3)),
+					    TYPE_SIGN (TREE_TYPE (t3)));
   if (construct || has_kind || has_arch || has_isa)
     {
       int scores[12];
@@ -2028,16 +2033,16 @@ omp_context_compute_score (tree ctx, wid
 		  *score = -1;
 		  return ret;
 		}
-	      *score += wi::shifted_mask <widest_int> (scores[b + n], 1, false);
+	      *score += wi::shifted_mask <score_wide_int> (scores[b + n], 1, false);
 	    }
 	  if (has_kind)
-	    *score += wi::shifted_mask <widest_int> (scores[b + nconstructs],
+	    *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs],
 						     1, false);
 	  if (has_arch)
-	    *score += wi::shifted_mask <widest_int> (scores[b + nconstructs] + 1,
+	    *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs] + 1,
 						     1, false);
 	  if (has_isa)
-	    *score += wi::shifted_mask <widest_int> (scores[b + nconstructs] + 2,
+	    *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs] + 2,
 						     1, false);
 	}
       else /* FIXME: Implement this.  */
@@ -2051,9 +2056,9 @@ struct GTY(()) omp_declare_variant_entry
   /* NODE of the variant.  */
   cgraph_node *variant;
   /* Score if not in declare simd clone.  */
-  widest_int score;
+  score_wide_int score;
   /* Score if in declare simd clone.  */
-  widest_int score_in_declare_simd_clone;
+  score_wide_int score_in_declare_simd_clone;
   /* Context selector for the variant.  */
   tree ctx;
   /* True if the context selector is known to match already.  */
@@ -2214,12 +2219,12 @@ omp_resolve_late_declare_variant (tree a
 	    }
       }
 
-  widest_int max_score = -1;
+  score_wide_int max_score = -1;
   varentry2 = NULL;
   FOR_EACH_VEC_SAFE_ELT (entryp->variants, i, varentry1)
     if (matches[i])
       {
-	widest_int score
+	score_wide_int score
 	  = (cur_node->simdclone ? varentry1->score_in_declare_simd_clone
 	     : varentry1->score);
 	if (score > max_score)
@@ -2300,8 +2305,8 @@ omp_resolve_declare_variant (tree base)
 
   if (any_deferred)
     {
-      widest_int max_score1 = 0;
-      widest_int max_score2 = 0;
+      score_wide_int max_score1 = 0;
+      score_wide_int max_score2 = 0;
       bool first = true;
       unsigned int i;
       tree attr1, attr2;
@@ -2311,8 +2316,8 @@ omp_resolve_declare_variant (tree base)
       vec_alloc (entry.variants, variants.length ());
       FOR_EACH_VEC_ELT (variants, i, attr1)
 	{
-	  widest_int score1;
-	  widest_int score2;
+	  score_wide_int score1;
+	  score_wide_int score2;
 	  bool need_two;
 	  tree ctx = TREE_VALUE (TREE_VALUE (attr1));
 	  need_two = omp_context_compute_score (ctx, &score1, false);
@@ -2471,16 +2476,16 @@ omp_resolve_declare_variant (tree base)
 		variants[j] = NULL_TREE;
 	    }
       }
-  widest_int max_score1 = 0;
-  widest_int max_score2 = 0;
+  score_wide_int max_score1 = 0;
+  score_wide_int max_score2 = 0;
   bool first = true;
   FOR_EACH_VEC_ELT (variants, i, attr1)
     if (attr1)
       {
 	if (variant1)
 	  {
-	    widest_int score1;
-	    widest_int score2;
+	    score_wide_int score1;
+	    score_wide_int score2;
 	    bool need_two;
 	    tree ctx;
 	    if (first)
@@ -2552,7 +2557,7 @@ omp_lto_output_declare_variant_alt (lto_
       gcc_assert (nvar != LCC_NOT_FOUND);
       streamer_write_hwi_stream (ob->main_stream, nvar);
 
-      for (widest_int *w = &varentry->score; ;
+      for (score_wide_int *w = &varentry->score; ;
 	   w = &varentry->score_in_declare_simd_clone)
 	{
 	  unsigned len = w->get_len ();
@@ -2602,15 +2607,15 @@ omp_lto_input_declare_variant_alt (lto_i
       omp_declare_variant_entry varentry;
       varentry.variant
 	= dyn_cast<cgraph_node *> (nodes[streamer_read_hwi (ib)]);
-      for (widest_int *w = &varentry.score; ;
+      for (score_wide_int *w = &varentry.score; ;
 	   w = &varentry.score_in_declare_simd_clone)
 	{
 	  unsigned len2 = streamer_read_hwi (ib);
-	  HOST_WIDE_INT arr[WIDE_INT_MAX_ELTS];
-	  gcc_assert (len2 <= WIDE_INT_MAX_ELTS);
+	  HOST_WIDE_INT arr[WIDE_INT_MAX_HWIS (1024)];
+	  gcc_assert (len2 <= WIDE_INT_MAX_HWIS (1024));
 	  for (unsigned int j = 0; j < len2; j++)
 	    arr[j] = streamer_read_hwi (ib);
-	  *w = widest_int::from_array (arr, len2, true);
+	  *w = score_wide_int::from_array (arr, len2, true);
 	  if (w == &varentry.score_in_declare_simd_clone)
 	    break;
 	}
--- gcc/graphite-isl-ast-to-gimple.cc.jj	2023-10-04 16:28:04.164784597 +0200
+++ gcc/graphite-isl-ast-to-gimple.cc	2023-10-05 11:36:55.064245673 +0200
@@ -274,7 +274,7 @@ widest_int_from_isl_expr_int (__isl_keep
   isl_val *val = isl_ast_expr_get_val (expr);
   size_t n = isl_val_n_abs_num_chunks (val, sizeof (HOST_WIDE_INT));
   HOST_WIDE_INT *chunks = XALLOCAVEC (HOST_WIDE_INT, n);
-  if (n > WIDE_INT_MAX_ELTS
+  if (n > WIDEST_INT_MAX_ELTS
       || isl_val_get_abs_num_chunks (val, sizeof (HOST_WIDE_INT), chunks) == -1)
     {
       isl_val_free (val);
--- gcc/poly-int.h.jj	2023-10-04 16:28:04.242783534 +0200
+++ gcc/poly-int.h	2023-10-05 11:36:55.194243890 +0200
@@ -109,6 +109,21 @@ struct poly_coeff_traits<T, wi::CONST_PR
   struct init_cast { using type = const Arg &; };
 };
 
+template<typename T>
+struct poly_coeff_traits<T, wi::WIDEST_CONST_PRECISION>
+{
+  typedef WI_UNARY_RESULT (T) result;
+  typedef int int_type;
+  /* These types are always signed.  */
+  static const int signedness = 1;
+  static const int precision = wi::int_traits<T>::precision;
+  static const int inl_precision = wi::int_traits<T>::inl_precision;
+  static const int rank = precision * 2 / CHAR_BIT;
+
+  template<typename Arg>
+  struct init_cast { using type = const Arg &; };
+};
+
 /* Information about a pair of coefficient types.  */
 template<typename T1, typename T2>
 struct poly_coeff_pair_traits
--- gcc/gimple-ssa-warn-alloca.cc.jj	2023-10-04 16:28:04.126785115 +0200
+++ gcc/gimple-ssa-warn-alloca.cc	2023-10-05 11:36:55.126244823 +0200
@@ -310,7 +310,7 @@ pass_walloca::execute (function *fun)
 
 	  enum opt_code wcode
 	    = is_vla ? OPT_Wvla_larger_than_ : OPT_Walloca_larger_than_;
-	  char buff[WIDE_INT_MAX_PRECISION / 4 + 4];
+	  char buff[WIDE_INT_MAX_INL_PRECISION / 4 + 4];
 	  switch (t.type)
 	    {
 	    case ALLOCA_OK:
@@ -329,6 +329,7 @@ pass_walloca::execute (function *fun)
 				      "large")))
 		    && t.limit != 0)
 		  {
+		    gcc_assert (t.limit.get_len () < WIDE_INT_MAX_INL_ELTS);
 		    print_decu (t.limit, buff);
 		    inform (loc, "limit is %wu bytes, but argument "
 				 "may be as large as %s",
@@ -347,6 +348,7 @@ pass_walloca::execute (function *fun)
 				 : G_("argument to %<alloca%> is too large")))
 		    && t.limit != 0)
 		  {
+		    gcc_assert (t.limit.get_len () < WIDE_INT_MAX_INL_ELTS);
 		    print_decu (t.limit, buff);
 		    inform (loc, "limit is %wu bytes, but argument is %s",
 			    is_vla ? warn_vla_limit : adjusted_alloca_limit,
--- gcc/tree.cc.jj	2023-10-04 16:28:04.399781394 +0200
+++ gcc/tree.cc	2023-10-05 11:36:54.618251787 +0200
@@ -2676,13 +2676,13 @@ build_zero_cst (tree type)
 tree
 build_replicated_int_cst (tree type, unsigned int width, HOST_WIDE_INT value)
 {
-  int n = (TYPE_PRECISION (type) + HOST_BITS_PER_WIDE_INT - 1)
-    / HOST_BITS_PER_WIDE_INT;
+  int n = ((TYPE_PRECISION (type) + HOST_BITS_PER_WIDE_INT - 1)
+	   / HOST_BITS_PER_WIDE_INT);
   unsigned HOST_WIDE_INT low, mask;
-  HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT a[WIDE_INT_MAX_INL_ELTS];
   int i;
 
-  gcc_assert (n && n <= WIDE_INT_MAX_ELTS);
+  gcc_assert (n && n <= WIDE_INT_MAX_INL_ELTS);
 
   if (width == HOST_BITS_PER_WIDE_INT)
     low = value;
@@ -2696,8 +2696,8 @@ build_replicated_int_cst (tree type, uns
     a[i] = low;
 
   gcc_assert (TYPE_PRECISION (type) <= MAX_BITSIZE_MODE_ANY_INT);
-  return wide_int_to_tree
-    (type, wide_int::from_array (a, n, TYPE_PRECISION (type)));
+  return wide_int_to_tree (type, wide_int::from_array (a, n,
+						       TYPE_PRECISION (type)));
 }
 
 /* If floating-point type TYPE has an IEEE-style sign bit, return an
--- gcc/gengtype.cc.jj	2023-10-04 16:28:04.102785442 +0200
+++ gcc/gengtype.cc	2023-10-05 11:36:54.966247016 +0200
@@ -5235,7 +5235,6 @@ main (int argc, char **argv)
       POS_HERE (do_scalar_typedef ("FIXED_VALUE_TYPE", &pos));
       POS_HERE (do_scalar_typedef ("double_int", &pos));
       POS_HERE (do_scalar_typedef ("offset_int", &pos));
-      POS_HERE (do_scalar_typedef ("widest_int", &pos));
       POS_HERE (do_scalar_typedef ("int64_t", &pos));
       POS_HERE (do_scalar_typedef ("poly_int64", &pos));
       POS_HERE (do_scalar_typedef ("poly_uint64", &pos));
--- gcc/dwarf2out.cc.jj	2023-10-04 16:28:04.065785946 +0200
+++ gcc/dwarf2out.cc	2023-10-05 11:36:54.656251266 +0200
@@ -397,7 +397,7 @@ dump_struct_debug (tree type, enum debug
    of the number.  */
 
 static unsigned int
-get_full_len (const wide_int &op)
+get_full_len (const rwide_int &op)
 {
   int prec = wi::get_precision (op);
   return ((prec + HOST_BITS_PER_WIDE_INT - 1)
@@ -3900,7 +3900,7 @@ static void add_data_member_location_att
 						struct vlr_context *);
 static bool add_const_value_attribute (dw_die_ref, machine_mode, rtx);
 static void insert_int (HOST_WIDE_INT, unsigned, unsigned char *);
-static void insert_wide_int (const wide_int &, unsigned char *, int);
+static void insert_wide_int (const rwide_int &, unsigned char *, int);
 static unsigned insert_float (const_rtx, unsigned char *);
 static rtx rtl_for_decl_location (tree);
 static bool add_location_or_const_value_attribute (dw_die_ref, tree, bool);
@@ -4598,14 +4598,14 @@ AT_unsigned (dw_attr_node *a)
 
 static inline void
 add_AT_wide (dw_die_ref die, enum dwarf_attribute attr_kind,
-	     const wide_int& w)
+	     const rwide_int& w)
 {
   dw_attr_node attr;
 
   attr.dw_attr = attr_kind;
   attr.dw_attr_val.val_class = dw_val_class_wide_int;
   attr.dw_attr_val.val_entry = NULL;
-  attr.dw_attr_val.v.val_wide = ggc_alloc<wide_int> ();
+  attr.dw_attr_val.v.val_wide = ggc_alloc<rwide_int> ();
   *attr.dw_attr_val.v.val_wide = w;
   add_dwarf_attr (die, &attr);
 }
@@ -16714,7 +16714,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
 	  mem_loc_result->dw_loc_oprnd1.v.val_die_ref.external = 0;
 	  mem_loc_result->dw_loc_oprnd2.val_class
 	    = dw_val_class_wide_int;
-	  mem_loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<wide_int> ();
+	  mem_loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<rwide_int> ();
 	  *mem_loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, mode);
 	}
       break;
@@ -17288,7 +17288,7 @@ loc_descriptor (rtx rtl, machine_mode mo
 	  loc_result = new_loc_descr (DW_OP_implicit_value,
 				      GET_MODE_SIZE (int_mode), 0);
 	  loc_result->dw_loc_oprnd2.val_class = dw_val_class_wide_int;
-	  loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<wide_int> ();
+	  loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<rwide_int> ();
 	  *loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, int_mode);
 	}
       break;
@@ -20189,7 +20189,7 @@ extract_int (const unsigned char *src, u
 /* Writes wide_int values to dw_vec_const array.  */
 
 static void
-insert_wide_int (const wide_int &val, unsigned char *dest, int elt_size)
+insert_wide_int (const rwide_int &val, unsigned char *dest, int elt_size)
 {
   int i;
 
@@ -20274,7 +20274,7 @@ add_const_value_attribute (dw_die_ref di
 	  && (GET_MODE_PRECISION (int_mode)
 	      & (HOST_BITS_PER_WIDE_INT - 1)) == 0)
 	{
-	  wide_int w = rtx_mode_t (rtl, int_mode);
+	  rwide_int w = rtx_mode_t (rtl, int_mode);
 	  add_AT_wide (die, DW_AT_const_value, w);
 	  return true;
 	}
--- gcc/wide-int.cc.jj	2023-10-04 16:28:04.466780481 +0200
+++ gcc/wide-int.cc	2023-10-05 11:36:55.054245810 +0200
@@ -51,7 +51,7 @@ typedef unsigned int UDWtype __attribute
 #include "longlong.h"
 #endif
 
-static const HOST_WIDE_INT zeros[WIDE_INT_MAX_ELTS] = {};
+static const HOST_WIDE_INT zeros[1] = {};
 
 /*
  * Internal utilities.
@@ -62,8 +62,7 @@ static const HOST_WIDE_INT zeros[WIDE_IN
 #define HALF_INT_MASK ((HOST_WIDE_INT_1 << HOST_BITS_PER_HALF_WIDE_INT) - 1)
 
 #define BLOCK_OF(TARGET) ((TARGET) / HOST_BITS_PER_WIDE_INT)
-#define BLOCKS_NEEDED(PREC) \
-  (PREC ? (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT) : 1)
+#define BLOCKS_NEEDED(PREC) (PREC ? CEIL (PREC, HOST_BITS_PER_WIDE_INT) : 1)
 #define SIGN_MASK(X) ((HOST_WIDE_INT) (X) < 0 ? -1 : 0)
 
 /* Return the value a VAL[I] if I < LEN, otherwise, return 0 or -1
@@ -96,7 +95,7 @@ canonize (HOST_WIDE_INT *val, unsigned i
   top = val[len - 1];
   if (len * HOST_BITS_PER_WIDE_INT > precision)
     val[len - 1] = top = sext_hwi (top, precision % HOST_BITS_PER_WIDE_INT);
-  if (top != 0 && top != (HOST_WIDE_INT)-1)
+  if (top != 0 && top != HOST_WIDE_INT_M1)
     return len;
 
   /* At this point we know that the top is either 0 or -1.  Find the
@@ -163,7 +162,7 @@ wi::from_buffer (const unsigned char *bu
   /* We have to clear all the bits ourself, as we merely or in values
      below.  */
   unsigned int len = BLOCKS_NEEDED (precision);
-  HOST_WIDE_INT *val = result.write_val ();
+  HOST_WIDE_INT *val = result.write_val (0);
   for (unsigned int i = 0; i < len; ++i)
     val[i] = 0;
 
@@ -232,8 +231,7 @@ wi::to_mpz (const wide_int_ref &x, mpz_t
     }
   else if (excess < 0 && wi::neg_p (x))
     {
-      int extra
-	= (-excess + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT;
+      int extra = CEIL (-excess, HOST_BITS_PER_WIDE_INT);
       HOST_WIDE_INT *t = XALLOCAVEC (HOST_WIDE_INT, len + extra);
       for (int i = 0; i < len; i++)
 	t[i] = v[i];
@@ -280,8 +278,8 @@ wi::from_mpz (const_tree type, mpz_t x,
      extracted from the GMP manual, section "Integer Import and Export":
      http://gmplib.org/manual/Integer-Import-and-Export.html  */
   numb = CHAR_BIT * sizeof (HOST_WIDE_INT);
-  count = (mpz_sizeinbase (x, 2) + numb - 1) / numb;
-  HOST_WIDE_INT *val = res.write_val ();
+  count = CEIL (mpz_sizeinbase (x, 2), numb);
+  HOST_WIDE_INT *val = res.write_val (0);
   /* Read the absolute value.
 
      Write directly to the wide_int storage if possible, otherwise leave
@@ -289,7 +287,7 @@ wi::from_mpz (const_tree type, mpz_t x,
      to use mpz_tdiv_r_2exp for the latter case, but the situation is
      pathological and it seems safer to operate on the original mpz value
      in all cases.  */
-  void *valres = mpz_export (count <= WIDE_INT_MAX_ELTS ? val : 0,
+  void *valres = mpz_export (count <= WIDE_INT_MAX_INL_ELTS ? val : 0,
 			     &count, -1, sizeof (HOST_WIDE_INT), 0, 0, x);
   if (count < 1)
     {
@@ -1334,21 +1332,6 @@ wi::mul_internal (HOST_WIDE_INT *val, co
   unsigned HOST_WIDE_INT o0, o1, k, t;
   unsigned int i;
   unsigned int j;
-  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
-  unsigned int half_blocks_needed = blocks_needed * 2;
-  /* The sizes here are scaled to support a 2x largest mode by 2x
-     largest mode yielding a 4x largest mode result.  This is what is
-     needed by vpn.  */
-
-  unsigned HOST_HALF_WIDE_INT
-    u[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    v[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  /* The '2' in 'R' is because we are internally doing a full
-     multiply.  */
-  unsigned HOST_HALF_WIDE_INT
-    r[2 * 4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
 
   /* If the top level routine did not really pass in an overflow, then
      just make sure that we never attempt to set it.  */
@@ -1469,6 +1452,36 @@ wi::mul_internal (HOST_WIDE_INT *val, co
       return 1;
     }
 
+  /* The sizes here are scaled to support a 2x WIDE_INT_MAX_INL_PRECISION by 2x
+     WIDE_INT_MAX_INL_PRECISION yielding a 4x WIDE_INT_MAX_INL_PRECISION
+     result.  */
+
+  unsigned HOST_HALF_WIDE_INT
+    ubuf[4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    vbuf[4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  /* The '2' in 'R' is because we are internally doing a full
+     multiply.  */
+  unsigned HOST_HALF_WIDE_INT
+    rbuf[2 * 4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  const HOST_WIDE_INT mask
+    = (HOST_WIDE_INT_1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
+  unsigned HOST_HALF_WIDE_INT *u = ubuf;
+  unsigned HOST_HALF_WIDE_INT *v = vbuf;
+  unsigned HOST_HALF_WIDE_INT *r = rbuf;
+
+  if (prec > WIDE_INT_MAX_INL_PRECISION && !high)
+    prec = (op1len + op2len + 1) * HOST_BITS_PER_WIDE_INT;
+  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
+  unsigned int half_blocks_needed = blocks_needed * 2;
+  if (UNLIKELY (prec > WIDE_INT_MAX_INL_PRECISION))
+    {
+      unsigned HOST_HALF_WIDE_INT *buf
+	= XALLOCAVEC (unsigned HOST_HALF_WIDE_INT, 4 * 4 * blocks_needed);
+      u = buf;
+      v = u + 4 * blocks_needed;
+      r = v + 4 * blocks_needed;
+    }
+
   /* We do unsigned mul and then correct it.  */
   wi_unpack (u, op1val, op1len, half_blocks_needed, prec, SIGNED);
   wi_unpack (v, op2val, op2len, half_blocks_needed, prec, SIGNED);
@@ -1782,16 +1795,6 @@ wi::divmod_internal (HOST_WIDE_INT *quot
 		     unsigned int divisor_prec, signop sgn,
 		     wi::overflow_type *oflow)
 {
-  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
-  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
-  unsigned HOST_HALF_WIDE_INT
-    b_quotient[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    b_remainder[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    b_dividend[(4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT) + 1];
-  unsigned HOST_HALF_WIDE_INT
-    b_divisor[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
   unsigned int m, n;
   bool dividend_neg = false;
   bool divisor_neg = false;
@@ -1910,6 +1913,44 @@ wi::divmod_internal (HOST_WIDE_INT *quot
 	}
     }
 
+  unsigned HOST_HALF_WIDE_INT
+    b_quotient_buf[4 * WIDE_INT_MAX_INL_PRECISION
+		   / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    b_remainder_buf[4 * WIDE_INT_MAX_INL_PRECISION
+		    / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    b_dividend_buf[(4 * WIDE_INT_MAX_INL_PRECISION
+		    / HOST_BITS_PER_HALF_WIDE_INT) + 1];
+  unsigned HOST_HALF_WIDE_INT
+    b_divisor_buf[4 * WIDE_INT_MAX_INL_PRECISION
+		  / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT *b_quotient = b_quotient_buf;
+  unsigned HOST_HALF_WIDE_INT *b_remainder = b_remainder_buf;
+  unsigned HOST_HALF_WIDE_INT *b_dividend = b_dividend_buf;
+  unsigned HOST_HALF_WIDE_INT *b_divisor = b_divisor_buf;
+
+  if (dividend_prec > WIDE_INT_MAX_INL_PRECISION
+      && (sgn == SIGNED || dividend_val[dividend_len - 1] >= 0))
+    dividend_prec = (dividend_len + 1) * HOST_BITS_PER_WIDE_INT;
+  if (divisor_prec > WIDE_INT_MAX_INL_PRECISION)
+    divisor_prec = divisor_len * HOST_BITS_PER_WIDE_INT;
+  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
+  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
+  if (UNLIKELY (dividend_prec > WIDE_INT_MAX_INL_PRECISION)
+      || UNLIKELY (divisor_prec > WIDE_INT_MAX_INL_PRECISION))
+    {
+      unsigned HOST_HALF_WIDE_INT *buf
+        = XALLOCAVEC (unsigned HOST_HALF_WIDE_INT,
+		      12 * dividend_blocks_needed
+		      + 4 * divisor_blocks_needed + 1);
+      b_quotient = buf;
+      b_remainder = b_quotient + 4 * dividend_blocks_needed;
+      b_dividend = b_remainder + 4 * dividend_blocks_needed;
+      b_divisor = b_dividend + 4 * dividend_blocks_needed + 1;
+      memset (b_quotient, 0,
+	      4 * dividend_blocks_needed * sizeof (HOST_HALF_WIDE_INT));
+    }
   wi_unpack (b_dividend, dividend.get_val (), dividend.get_len (),
 	     dividend_blocks_needed, dividend_prec, UNSIGNED);
   wi_unpack (b_divisor, divisor.get_val (), divisor.get_len (),
@@ -1924,7 +1965,8 @@ wi::divmod_internal (HOST_WIDE_INT *quot
   while (n > 1 && b_divisor[n - 1] == 0)
     n--;
 
-  memset (b_quotient, 0, sizeof (b_quotient));
+  if (b_quotient == b_quotient_buf)
+    memset (b_quotient_buf, 0, sizeof (b_quotient_buf));
 
   divmod_internal_2 (b_quotient, b_remainder, b_dividend, b_divisor, m, n);
 
@@ -1970,6 +2012,8 @@ wi::lshift_large (HOST_WIDE_INT *val, co
 
   /* The whole-block shift fills with zeros.  */
   unsigned int len = BLOCKS_NEEDED (precision);
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    len = xlen + skip + 1;
   for (unsigned int i = 0; i < skip; ++i)
     val[i] = 0;
 
@@ -1993,22 +2037,17 @@ wi::lshift_large (HOST_WIDE_INT *val, co
   return canonize (val, len, precision);
 }
 
-/* Right shift XVAL by SHIFT and store the result in VAL.  Return the
+/* Right shift XVAL by SHIFT and store the result in VAL.  LEN is the
    number of blocks in VAL.  The input has XPRECISION bits and the
    output has XPRECISION - SHIFT bits.  */
-static unsigned int
+static void
 rshift_large_common (HOST_WIDE_INT *val, const HOST_WIDE_INT *xval,
-		     unsigned int xlen, unsigned int xprecision,
-		     unsigned int shift)
+		     unsigned int xlen, unsigned int shift, unsigned int len)
 {
   /* Split the shift into a whole-block shift and a subblock shift.  */
   unsigned int skip = shift / HOST_BITS_PER_WIDE_INT;
   unsigned int small_shift = shift % HOST_BITS_PER_WIDE_INT;
 
-  /* Work out how many blocks are needed to store the significant bits
-     (excluding the upper zeros or signs).  */
-  unsigned int len = BLOCKS_NEEDED (xprecision - shift);
-
   /* It's easier to handle the simple block case specially.  */
   if (small_shift == 0)
     for (unsigned int i = 0; i < len; ++i)
@@ -2025,7 +2064,6 @@ rshift_large_common (HOST_WIDE_INT *val,
 	  val[i] |= curr << (-small_shift % HOST_BITS_PER_WIDE_INT);
 	}
     }
-  return len;
 }
 
 /* Logically right shift XVAL by SHIFT and store the result in VAL.
@@ -2036,11 +2074,20 @@ wi::lrshift_large (HOST_WIDE_INT *val, c
 		   unsigned int xlen, unsigned int xprecision,
 		   unsigned int precision, unsigned int shift)
 {
-  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
+  /* Work out how many blocks are needed to store the significant bits
+     (excluding the upper zeros or signs).  */
+  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
+  unsigned int len = blocks_needed;
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)
+      && len > xlen
+      && xval[xlen - 1] >= 0)
+    len = xlen;
+
+  rshift_large_common (val, xval, xlen, shift, len);
 
   /* The value we just created has precision XPRECISION - SHIFT.
      Zero-extend it to wider precisions.  */
-  if (precision > xprecision - shift)
+  if (precision > xprecision - shift && len == blocks_needed)
     {
       unsigned int small_prec = (xprecision - shift) % HOST_BITS_PER_WIDE_INT;
       if (small_prec)
@@ -2063,11 +2110,18 @@ wi::arshift_large (HOST_WIDE_INT *val, c
 		   unsigned int xlen, unsigned int xprecision,
 		   unsigned int precision, unsigned int shift)
 {
-  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
+  /* Work out how many blocks are needed to store the significant bits
+     (excluding the upper zeros or signs).  */
+  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
+  unsigned int len = blocks_needed;
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS) && len > xlen)
+    len = xlen;
+
+  rshift_large_common (val, xval, xlen, shift, len);
 
   /* The value we just created has precision XPRECISION - SHIFT.
      Sign-extend it to wider types.  */
-  if (precision > xprecision - shift)
+  if (precision > xprecision - shift && len == blocks_needed)
     {
       unsigned int small_prec = (xprecision - shift) % HOST_BITS_PER_WIDE_INT;
       if (small_prec)
@@ -2399,9 +2453,12 @@ from_int (int i)
 static void
 assert_deceq (const char *expected, const wide_int_ref &wi, signop sgn)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_dec (wi, buf, sgn);
-  ASSERT_STREQ (expected, buf);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_dec (wi, p, sgn);
+  ASSERT_STREQ (expected, p);
 }
 
 /* Likewise for base 16.  */
@@ -2409,9 +2466,12 @@ assert_deceq (const char *expected, cons
 static void
 assert_hexeq (const char *expected, const wide_int_ref &wi)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_hex (wi, buf);
-  ASSERT_STREQ (expected, buf);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_hex (wi, p);
+  ASSERT_STREQ (expected, p);
 }
 
 /* Test cases.  */
@@ -2428,7 +2488,7 @@ test_printing ()
   assert_hexeq ("0x1fffffffffffffffff", wi::shwi (-1, 69));
   assert_hexeq ("0xffffffffffffffff", wi::mask (64, false, 69));
   assert_hexeq ("0xffffffffffffffff", wi::mask <widest_int> (64, false));
-  if (WIDE_INT_MAX_PRECISION > 128)
+  if (WIDE_INT_MAX_INL_PRECISION > 128)
     {
       assert_hexeq ("0x20000000000000000fffffffffffffffe",
 		    wi::lshift (1, 129) + wi::lshift (1, 64) - 2);
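
(Again just an illustration, not part of the patch.)  The mul_internal and divmod_internal changes above share one pattern: fixed-size stack buffers sized for WIDE_INT_MAX_INL_PRECISION, with a fallback allocation taken only on the rare path where the operands are wider.  A minimal sketch of that pattern; the names are illustrative and std::vector stands in for XALLOCAVEC:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Hypothetical small limit, playing the role of WIDE_INT_MAX_INL_ELTS.
const unsigned int MAX_INL_ELTS = 9;

// Sum LEN elements, copying through the inline buffer when LEN fits
// and through a heap allocation (standing in for XALLOCAVEC) otherwise.
long
sum_with_fallback (const long *src, unsigned int len)
{
  long inline_buf[MAX_INL_ELTS];
  long *buf = inline_buf;
  std::vector<long> heap_buf;
  if (len > MAX_INL_ELTS)	// Rare path, like UNLIKELY (...) above.
    {
      heap_buf.resize (len);
      buf = heap_buf.data ();
    }
  std::memcpy (buf, src, len * sizeof (long));
  long sum = 0;
  for (unsigned int i = 0; i < len; ++i)
    sum += buf[i];
  return sum;
}
```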
--- gcc/c-family/c-warn.cc.jj	2023-10-04 16:28:03.935787718 +0200
+++ gcc/c-family/c-warn.cc	2023-10-05 11:36:55.090245316 +0200
@@ -1517,13 +1517,15 @@ match_case_to_enum_1 (tree key, tree typ
     return;
 
   char buf[WIDE_INT_PRINT_BUFFER_SIZE];
+  wide_int w = wi::to_wide (key);
 
+  gcc_assert (w.get_len () <= WIDE_INT_MAX_INL_ELTS);
   if (tree_fits_uhwi_p (key))
-    print_dec (wi::to_wide (key), buf, UNSIGNED);
+    print_dec (w, buf, UNSIGNED);
   else if (tree_fits_shwi_p (key))
-    print_dec (wi::to_wide (key), buf, SIGNED);
+    print_dec (w, buf, SIGNED);
   else
-    print_hex (wi::to_wide (key), buf);
+    print_hex (w, buf);
 
   if (TYPE_NAME (type) == NULL_TREE)
     warning_at (DECL_SOURCE_LOCATION (CASE_LABEL (label)),
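
(Illustration only.)  The wide-int.h changes below turn wide_int_storage into a union of an inline array and a heap pointer, which forces explicit copy-constructor, destructor, and assignment definitions.  A stripped-down sketch of that storage scheme, with illustrative names (small_or_heap, MAX_INL_ELTS) rather than the patch's own:

```cpp
#include <cassert>
#include <cstring>

const unsigned int MAX_INL_ELTS = 9;  // stands in for WIDE_INT_MAX_INL_ELTS

// Minimal value type using an inline buffer for small lengths and a
// heap allocation for larger ones, as the patched wide_int_storage does.
class small_or_heap
{
  union { long val[MAX_INL_ELTS]; long *valp; } u;
  unsigned int len;

public:
  small_or_heap (const long *src, unsigned int l) : len (l)
  {
    long *dst = len > MAX_INL_ELTS ? (u.valp = new long[len]) : u.val;
    std::memcpy (dst, src, len * sizeof (long));
  }
  small_or_heap (const small_or_heap &x) : len (x.len)
  {
    long *dst = len > MAX_INL_ELTS ? (u.valp = new long[len]) : u.val;
    std::memcpy (dst, x.get_val (), len * sizeof (long));
  }
  small_or_heap &operator= (const small_or_heap &x)
  {
    if (this == &x)
      return *this;
    if (len > MAX_INL_ELTS)	// Free the old heap buffer, if any.
      delete[] u.valp;
    len = x.len;
    long *dst = len > MAX_INL_ELTS ? (u.valp = new long[len]) : u.val;
    std::memcpy (dst, x.get_val (), len * sizeof (long));
    return *this;
  }
  ~small_or_heap ()
  {
    if (len > MAX_INL_ELTS)
      delete[] u.valp;
  }
  const long *get_val () const
  { return len > MAX_INL_ELTS ? u.valp : u.val; }
  unsigned int get_len () const { return len; }
};
```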
--- gcc/wide-int.h.jj	2023-10-04 16:28:04.468780454 +0200
+++ gcc/wide-int.h	2023-10-05 12:26:01.136193645 +0200
@@ -27,7 +27,7 @@ along with GCC; see the file COPYING3.
    other longer storage GCC representations (rtl and tree).
 
    The actual precision of a wide_int depends on the flavor.  There
-   are three predefined flavors:
+   are four predefined flavors:
 
      1) wide_int (the default).  This flavor does the math in the
      precision of its input arguments.  It is assumed (and checked)
@@ -53,6 +53,10 @@ along with GCC; see the file COPYING3.
      multiply, division, shifts, comparisons, and operations that need
      overflow detected), the signedness must be specified separately.
 
+     For precisions up to WIDE_INT_MAX_INL_PRECISION it uses an inline
+     buffer in the type; for larger precisions, up to WIDE_INT_MAX_PRECISION,
+     it uses a pointer to a heap-allocated buffer.
+
      2) offset_int.  This is a fixed-precision integer that can hold
      any address offset, measured in either bits or bytes, with at
      least one extra sign bit.  At the moment the maximum address
@@ -76,11 +80,15 @@ along with GCC; see the file COPYING3.
        wi::leu_p (a, b) as a more efficient short-hand for
        "a >= 0 && a <= b". ]
 
-     3) widest_int.  This representation is an approximation of
+     3) rwide_int.  Restricted wide_int.  This is similar to wide_int,
+     but the maximum possible precision is RWIDE_INT_MAX_PRECISION and
+     it always uses an inline buffer.  offset_int and rwide_int are
+     GC-friendly; wide_int and widest_int are not.
+
+     4) widest_int.  This representation is an approximation of
      infinite precision math.  However, it is not really infinite
      precision math as in the GMP library.  It is really finite
-     precision math where the precision is 4 times the size of the
-     largest integer that the target port can represent.
+     precision math where the precision is WIDEST_INT_MAX_PRECISION.
 
      Like offset_int, widest_int is wider than all the values that
      it needs to represent, so the integers are logically signed.
@@ -231,17 +239,34 @@ along with GCC; see the file COPYING3.
    can be arbitrarily different from X.  */
 
 /* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
-   early examination of the target's mode file.  The WIDE_INT_MAX_ELTS
+   early examination of the target's mode file.  The WIDE_INT_MAX_INL_ELTS
    can accommodate at least 1 more bit so that unsigned numbers of that
    mode can be represented as a signed value.  Note that it is still
    possible to create fixed_wide_ints that have precisions greater than
    MAX_BITSIZE_MODE_ANY_INT.  This can be useful when representing a
    double-width multiplication result, for example.  */
-#define WIDE_INT_MAX_ELTS \
-  ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) / HOST_BITS_PER_WIDE_INT)
-
+#define WIDE_INT_MAX_INL_ELTS \
+  ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) \
+   / HOST_BITS_PER_WIDE_INT)
+
+#define WIDE_INT_MAX_INL_PRECISION \
+  (WIDE_INT_MAX_INL_ELTS * HOST_BITS_PER_WIDE_INT)
+
+/* Maximum precision of wide_int, i.e. one more than the largest
+   _BitInt precision we can support.  */
+#define WIDE_INT_MAX_ELTS 255
 #define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
 
+#define RWIDE_INT_MAX_ELTS WIDE_INT_MAX_INL_ELTS
+#define RWIDE_INT_MAX_PRECISION WIDE_INT_MAX_INL_PRECISION
+
+/* Maximum precision of widest_int, i.e. one more than the largest
+   _BitInt precision we can support.  */
+#define WIDEST_INT_MAX_ELTS 510
+#define WIDEST_INT_MAX_PRECISION (WIDEST_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
+
+STATIC_ASSERT (WIDE_INT_MAX_INL_ELTS < WIDE_INT_MAX_ELTS);
+
 /* This is the max size of any pointer on any machine.  It does not
    seem to be as easy to sniff this out of the machine description as
    it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
@@ -307,17 +332,19 @@ along with GCC; see the file COPYING3.
 #define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
   WI_BINARY_RESULT (T1, T2) RESULT = \
     wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
-  HOST_WIDE_INT *VAL = RESULT.write_val ()
+  HOST_WIDE_INT *VAL = RESULT.write_val (0)
 
 /* Similar for the result of a unary operation on X, which has type T.  */
 #define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
   WI_UNARY_RESULT (T) RESULT = \
     wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
-  HOST_WIDE_INT *VAL = RESULT.write_val ()
+  HOST_WIDE_INT *VAL = RESULT.write_val (0)
 
 template <typename T> class generic_wide_int;
 template <int N> class fixed_wide_int_storage;
 class wide_int_storage;
+class rwide_int_storage;
+template <int N> class widest_int_storage;
 
 /* An N-bit integer.  Until we can use typedef templates, use this instead.  */
 #define FIXED_WIDE_INT(N) \
@@ -325,10 +352,9 @@ class wide_int_storage;
 
 typedef generic_wide_int <wide_int_storage> wide_int;
 typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
-typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int;
-/* Spelled out explicitly (rather than through FIXED_WIDE_INT)
-   so as not to confuse gengtype.  */
-typedef generic_wide_int < fixed_wide_int_storage <WIDE_INT_MAX_PRECISION * 2> > widest2_int;
+typedef generic_wide_int <rwide_int_storage> rwide_int;
+typedef generic_wide_int <widest_int_storage <WIDE_INT_MAX_INL_PRECISION> > widest_int;
+typedef generic_wide_int <widest_int_storage <WIDE_INT_MAX_INL_PRECISION * 2> > widest2_int;
 
 /* wi::storage_ref can be a reference to a primitive type,
    so this is the conservatively-correct setting.  */
@@ -380,7 +406,11 @@ namespace wi
 
     /* The integer has a constant precision (known at GCC compile time)
        and is signed.  */
-    CONST_PRECISION
+    CONST_PRECISION,
+
+    /* Like CONST_PRECISION, but for precisions of WIDEST_INT_MAX_PRECISION
+       or larger, where not all elements of the arrays are always
+       present.  */
+    WIDEST_CONST_PRECISION
   };
 
   /* This class, which has no default implementation, is expected to
@@ -390,9 +420,15 @@ namespace wi
        Classifies the type of T.
 
      static const unsigned int precision;
-       Only defined if precision_type == CONST_PRECISION.  Specifies the
+       Only defined if precision_type == CONST_PRECISION or
+       precision_type == WIDEST_CONST_PRECISION.  Specifies the
        precision of all integers of type T.
 
+     static const unsigned int inl_precision;
+       Only defined if precision_type == WIDEST_CONST_PRECISION.
+       Specifies the precision that is represented in the inline
+       arrays.
+
      static const bool host_dependent_precision;
        True if the precision of T depends (or can depend) on the host.
 
@@ -415,9 +451,10 @@ namespace wi
   struct binary_traits;
 
   /* Specify the result type for each supported combination of binary
-     inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
-     mixed, in order to give stronger type checking.  When both inputs
-     are CONST_PRECISION, they must have the same precision.  */
+     inputs.  Note that CONST_PRECISION, WIDEST_CONST_PRECISION and
+     VAR_PRECISION cannot be mixed, in order to give stronger type
+     checking.  When both inputs are CONST_PRECISION or both are
+     WIDEST_CONST_PRECISION, they must have the same precision.  */
   template <typename T1, typename T2>
   struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
   {
@@ -447,6 +484,17 @@ namespace wi
   };
 
   template <typename T1, typename T2>
+  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, WIDEST_CONST_PRECISION>
+  {
+    typedef generic_wide_int < widest_int_storage
+			       <int_traits <T2>::inl_precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
+    typedef result_type signed_shift_result_type;
+    typedef bool signed_predicate_result;
+  };
+
+  template <typename T1, typename T2>
   struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
   {
     typedef wide_int result_type;
@@ -468,6 +516,17 @@ namespace wi
   };
 
   template <typename T1, typename T2>
+  struct binary_traits <T1, T2, WIDEST_CONST_PRECISION, FLEXIBLE_PRECISION>
+  {
+    typedef generic_wide_int < widest_int_storage
+			       <int_traits <T1>::inl_precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
+    typedef result_type signed_shift_result_type;
+    typedef bool signed_predicate_result;
+  };
+
+  template <typename T1, typename T2>
   struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
   {
     STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
@@ -482,6 +541,18 @@ namespace wi
   };
 
   template <typename T1, typename T2>
+  struct binary_traits <T1, T2, WIDEST_CONST_PRECISION, WIDEST_CONST_PRECISION>
+  {
+    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
+    typedef generic_wide_int < widest_int_storage
+			       <int_traits <T1>::inl_precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
+    typedef result_type signed_shift_result_type;
+    typedef bool signed_predicate_result;
+  };
+
+  template <typename T1, typename T2>
   struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
   {
     typedef wide_int result_type;
@@ -709,8 +780,10 @@ wi::storage_ref::get_val () const
    Although not required by generic_wide_int itself, writable storage
    classes can also provide the following functions:
 
-   HOST_WIDE_INT *write_val ()
-     Get a modifiable version of get_val ()
+   HOST_WIDE_INT *write_val (unsigned int)
+     Get a modifiable version of get_val ().  The argument is an upper
+     estimate of LEN (it is ignored by all storages except
+     widest_int_storage).
 
    unsigned int set_len (unsigned int len)
      Set the value returned by get_len () to LEN.  */
@@ -777,6 +850,8 @@ public:
 
   static const bool is_sign_extended
     = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
+  static const bool needs_write_val_arg
+    = wi::int_traits <generic_wide_int <storage> >::needs_write_val_arg;
 };
 
 template <typename storage>
@@ -1049,6 +1124,7 @@ namespace wi
     static const enum precision_type precision_type = VAR_PRECISION;
     static const bool host_dependent_precision = HDP;
     static const bool is_sign_extended = SE;
+    static const bool needs_write_val_arg = false;
   };
 }
 
@@ -1065,7 +1141,11 @@ namespace wi
 class GTY(()) wide_int_storage
 {
 private:
-  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
+  union
+  {
+    HOST_WIDE_INT val[WIDE_INT_MAX_INL_ELTS];
+    HOST_WIDE_INT *valp;
+  } GTY((skip)) u;
   unsigned int len;
   unsigned int precision;
 
@@ -1073,14 +1153,17 @@ public:
   wide_int_storage ();
   template <typename T>
   wide_int_storage (const T &);
+  wide_int_storage (const wide_int_storage &);
+  ~wide_int_storage ();
 
   /* The standard generic_wide_int storage methods.  */
   unsigned int get_precision () const;
   const HOST_WIDE_INT *get_val () const;
   unsigned int get_len () const;
-  HOST_WIDE_INT *write_val ();
+  HOST_WIDE_INT *write_val (unsigned int);
   void set_len (unsigned int, bool = false);
 
+  wide_int_storage &operator = (const wide_int_storage &);
   template <typename T>
   wide_int_storage &operator = (const T &);
 
@@ -1099,12 +1182,15 @@ namespace wi
     /* Guaranteed by a static assert in the wide_int_storage constructor.  */
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     template <typename T1, typename T2>
     static wide_int get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
   };
 }
 
-inline wide_int_storage::wide_int_storage () {}
+inline wide_int_storage::wide_int_storage () : precision (0) {}
 
 /* Initialize the storage from integer X, in its natural precision.
    Note that we do not allow integers with host-dependent precision
@@ -1113,21 +1199,75 @@ inline wide_int_storage::wide_int_storag
 template <typename T>
 inline wide_int_storage::wide_int_storage (const T &x)
 {
-  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
-  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
   WIDE_INT_REF_FOR (T) xi (x);
   precision = xi.precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
   wi::copy (*this, xi);
 }
 
+inline wide_int_storage::wide_int_storage (const wide_int_storage &x)
+{
+  len = x.len;
+  precision = x.precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+}
+
+inline wide_int_storage::~wide_int_storage ()
+{
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    XDELETEVEC (u.valp);
+}
+
+inline wide_int_storage&
+wide_int_storage::operator = (const wide_int_storage &x)
+{
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    {
+      if (this == &x)
+	return *this;
+      XDELETEVEC (u.valp);
+    }
+  len = x.len;
+  precision = x.precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+  return *this;
+}
+
 template <typename T>
 inline wide_int_storage&
 wide_int_storage::operator = (const T &x)
 {
-  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
-  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
   WIDE_INT_REF_FOR (T) xi (x);
-  precision = xi.precision;
+  if (UNLIKELY (precision != xi.precision))
+    {
+      if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+	XDELETEVEC (u.valp);
+      precision = xi.precision;
+      if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+	u.valp = XNEWVEC (HOST_WIDE_INT,
+			  CEIL (precision, HOST_BITS_PER_WIDE_INT));
+    }
   wi::copy (*this, xi);
   return *this;
 }
@@ -1141,7 +1281,7 @@ wide_int_storage::get_precision () const
 inline const HOST_WIDE_INT *
 wide_int_storage::get_val () const
 {
-  return val;
+  return UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION) ? u.valp : u.val;
 }
 
 inline unsigned int
@@ -1151,9 +1291,9 @@ wide_int_storage::get_len () const
 }
 
 inline HOST_WIDE_INT *
-wide_int_storage::write_val ()
+wide_int_storage::write_val (unsigned int)
 {
-  return val;
+  return UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION) ? u.valp : u.val;
 }
 
 inline void
@@ -1161,8 +1301,10 @@ wide_int_storage::set_len (unsigned int
 {
   len = l;
   if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
-    val[len - 1] = sext_hwi (val[len - 1],
-			     precision % HOST_BITS_PER_WIDE_INT);
+    {
+      HOST_WIDE_INT &v = write_val (len)[len - 1];
+      v = sext_hwi (v, precision % HOST_BITS_PER_WIDE_INT);
+    }
 }
 
 /* Treat X as having signedness SGN and convert it to a PRECISION-bit
@@ -1172,7 +1314,7 @@ wide_int_storage::from (const wide_int_r
 			signop sgn)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
+  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
 				     x.precision, precision, sgn));
   return result;
 }
@@ -1185,7 +1327,7 @@ wide_int_storage::from_array (const HOST
 			      unsigned int precision, bool need_canon_p)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (wi::from_array (result.write_val (), val, len, precision,
+  result.set_len (wi::from_array (result.write_val (len), val, len, precision,
 				  need_canon_p));
   return result;
 }
@@ -1196,6 +1338,9 @@ wide_int_storage::create (unsigned int p
 {
   wide_int x;
   x.precision = precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    x.u.valp = XNEWVEC (HOST_WIDE_INT,
+			CEIL (precision, HOST_BITS_PER_WIDE_INT));
   return x;
 }
 
@@ -1212,6 +1357,194 @@ wi::int_traits <wide_int_storage>::get_b
     return wide_int::create (wi::get_precision (x));
 }
 
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits <wide_int_storage>::get_binary_precision (const T1 &x,
+							 const T2 &y)
+{
+  /* This shouldn't be used for two flexible-precision inputs.  */
+  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
+		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
+  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
+    return wi::get_precision (y);
+  else
+    return wi::get_precision (x);
+}
+
+/* The storage used by rwide_int.  */
+class GTY(()) rwide_int_storage
+{
+private:
+  HOST_WIDE_INT val[RWIDE_INT_MAX_ELTS];
+  unsigned int len;
+  unsigned int precision;
+
+public:
+  rwide_int_storage () = default;
+  template <typename T>
+  rwide_int_storage (const T &);
+
+  /* The standard generic_wide_int storage methods.  */
+  unsigned int get_precision () const;
+  const HOST_WIDE_INT *get_val () const;
+  unsigned int get_len () const;
+  HOST_WIDE_INT *write_val (unsigned int);
+  void set_len (unsigned int, bool = false);
+
+  template <typename T>
+  rwide_int_storage &operator = (const T &);
+
+  static rwide_int from (const wide_int_ref &, unsigned int, signop);
+  static rwide_int from_array (const HOST_WIDE_INT *, unsigned int,
+			       unsigned int, bool = true);
+  static rwide_int create (unsigned int);
+};
+
+namespace wi
+{
+  template <>
+  struct int_traits <rwide_int_storage>
+  {
+    static const enum precision_type precision_type = VAR_PRECISION;
+    /* Guaranteed by a static assert in the rwide_int_storage constructor.  */
+    static const bool host_dependent_precision = false;
+    static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
+    template <typename T1, typename T2>
+    static rwide_int get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
+  };
+}
+
+/* Initialize the storage from integer X, in its natural precision.
+   Note that we do not allow integers with host-dependent precision
+   to become rwide_ints; rwide_ints must always be logically independent
+   of the host.  */
+template <typename T>
+inline rwide_int_storage::rwide_int_storage (const T &x)
+{
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
+  WIDE_INT_REF_FOR (T) xi (x);
+  precision = xi.precision;
+  gcc_assert (precision <= RWIDE_INT_MAX_PRECISION);
+  wi::copy (*this, xi);
+}
+
+template <typename T>
+inline rwide_int_storage&
+rwide_int_storage::operator = (const T &x)
+{
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
+  WIDE_INT_REF_FOR (T) xi (x);
+  precision = xi.precision;
+  gcc_assert (precision <= RWIDE_INT_MAX_PRECISION);
+  wi::copy (*this, xi);
+  return *this;
+}
+
+inline unsigned int
+rwide_int_storage::get_precision () const
+{
+  return precision;
+}
+
+inline const HOST_WIDE_INT *
+rwide_int_storage::get_val () const
+{
+  return val;
+}
+
+inline unsigned int
+rwide_int_storage::get_len () const
+{
+  return len;
+}
+
+inline HOST_WIDE_INT *
+rwide_int_storage::write_val (unsigned int)
+{
+  return val;
+}
+
+inline void
+rwide_int_storage::set_len (unsigned int l, bool is_sign_extended)
+{
+  len = l;
+  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
+    val[len - 1] = sext_hwi (val[len - 1],
+			     precision % HOST_BITS_PER_WIDE_INT);
+}
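The set_len canonicalization above relies on sext_hwi to sign-extend the excess bits of the topmost element when the length was not produced sign-extended. A minimal standalone sketch of that helper (reimplemented here purely for illustration; GCC's real sext_hwi lives in hwint.h and HOST_WIDE_INT is assumed 64-bit):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative reimplementation of GCC's sext_hwi: sign-extend the low
// PREC bits of SRC.  Uses the xor/subtract trick to avoid the undefined
// behaviour of left-shifting a negative signed value.
static int64_t sext_hwi (int64_t src, unsigned int prec)
{
  if (prec == 64)
    return src;
  uint64_t mask = (uint64_t) 1 << (prec - 1);            // the sign bit
  uint64_t low = (uint64_t) src & (((uint64_t) 1 << prec) - 1);
  return (int64_t) ((low ^ mask) - mask);                // propagate it up
}
```

In set_len the argument is precision % HOST_BITS_PER_WIDE_INT, which the guard guarantees is nonzero there.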
+
+/* Treat X as having signedness SGN and convert it to a PRECISION-bit
+   number.  */
+inline rwide_int
+rwide_int_storage::from (const wide_int_ref &x, unsigned int precision,
+			 signop sgn)
+{
+  rwide_int result = rwide_int::create (precision);
+  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
+				     x.precision, precision, sgn));
+  return result;
+}
+
+/* Create a rwide_int from the explicit block encoding given by VAL and
+   LEN.  PRECISION is the precision of the integer.  NEED_CANON_P is
+   true if the encoding may have redundant trailing blocks.  */
+inline rwide_int
+rwide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
+			       unsigned int precision, bool need_canon_p)
+{
+  rwide_int result = rwide_int::create (precision);
+  result.set_len (wi::from_array (result.write_val (len), val, len, precision,
+				  need_canon_p));
+  return result;
+}
+
+/* Return an uninitialized rwide_int with precision PRECISION.  */
+inline rwide_int
+rwide_int_storage::create (unsigned int precision)
+{
+  rwide_int x;
+  gcc_assert (precision <= RWIDE_INT_MAX_PRECISION);
+  x.precision = precision;
+  return x;
+}
+
+template <typename T1, typename T2>
+inline rwide_int
+wi::int_traits <rwide_int_storage>::get_binary_result (const T1 &x,
+						       const T2 &y)
+{
+  /* This shouldn't be used for two flexible-precision inputs.  */
+  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
+		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
+  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
+    return rwide_int::create (wi::get_precision (y));
+  else
+    return rwide_int::create (wi::get_precision (x));
+}
+
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits <rwide_int_storage>::get_binary_precision (const T1 &x,
+							  const T2 &y)
+{
+  /* This shouldn't be used for two flexible-precision inputs.  */
+  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
+		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
+  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
+    return wi::get_precision (y);
+  else
+    return wi::get_precision (x);
+}
+
 /* The storage used by FIXED_WIDE_INT (N).  */
 template <int N>
 class GTY(()) fixed_wide_int_storage
@@ -1221,7 +1554,7 @@ private:
   unsigned int len;
 
 public:
-  fixed_wide_int_storage ();
+  fixed_wide_int_storage () = default;
   template <typename T>
   fixed_wide_int_storage (const T &);
 
@@ -1229,7 +1562,7 @@ public:
   unsigned int get_precision () const;
   const HOST_WIDE_INT *get_val () const;
   unsigned int get_len () const;
-  HOST_WIDE_INT *write_val ();
+  HOST_WIDE_INT *write_val (unsigned int);
   void set_len (unsigned int, bool = false);
 
   static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
@@ -1245,15 +1578,15 @@ namespace wi
     static const enum precision_type precision_type = CONST_PRECISION;
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     static const unsigned int precision = N;
     template <typename T1, typename T2>
     static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
   };
 }
 
-template <int N>
-inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}
-
 /* Initialize the storage from integer X, in precision N.  */
 template <int N>
 template <typename T>
@@ -1288,7 +1621,7 @@ fixed_wide_int_storage <N>::get_len () c
 
 template <int N>
 inline HOST_WIDE_INT *
-fixed_wide_int_storage <N>::write_val ()
+fixed_wide_int_storage <N>::write_val (unsigned int)
 {
   return val;
 }
@@ -1308,7 +1641,7 @@ inline FIXED_WIDE_INT (N)
 fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
 {
   FIXED_WIDE_INT (N) result;
-  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
+  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
 				     x.precision, N, sgn));
   return result;
 }
@@ -1323,7 +1656,7 @@ fixed_wide_int_storage <N>::from_array (
 					bool need_canon_p)
 {
   FIXED_WIDE_INT (N) result;
-  result.set_len (wi::from_array (result.write_val (), val, len,
+  result.set_len (wi::from_array (result.write_val (len), val, len,
 				  N, need_canon_p));
   return result;
 }
@@ -1337,6 +1670,244 @@ get_binary_result (const T1 &, const T2
   return FIXED_WIDE_INT (N) ();
 }
 
+template <int N>
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits < fixed_wide_int_storage <N> >::
+get_binary_precision (const T1 &, const T2 &)
+{
+  return N;
+}
+
+#define WIDEST_INT(N) generic_wide_int < widest_int_storage <N> >
+
+/* The storage used by widest_int.  */
+template <int N>
+class GTY(()) widest_int_storage
+{
+private:
+  union
+  {
+    HOST_WIDE_INT val[WIDE_INT_MAX_HWIS (N)];
+    HOST_WIDE_INT *valp;
+  } GTY((skip)) u;
+  unsigned int len;
+
+public:
+  widest_int_storage ();
+  widest_int_storage (const widest_int_storage &);
+  template <typename T>
+  widest_int_storage (const T &);
+  ~widest_int_storage ();
+  widest_int_storage &operator = (const widest_int_storage &);
+  template <typename T>
+  inline widest_int_storage& operator = (const T &);
+
+  /* The standard generic_wide_int storage methods.  */
+  unsigned int get_precision () const;
+  const HOST_WIDE_INT *get_val () const;
+  unsigned int get_len () const;
+  HOST_WIDE_INT *write_val (unsigned int);
+  void set_len (unsigned int, bool = false);
+
+  static WIDEST_INT (N) from (const wide_int_ref &, signop);
+  static WIDEST_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
+				    bool = true);
+};
+
+namespace wi
+{
+  template <int N>
+  struct int_traits < widest_int_storage <N> >
+  {
+    static const enum precision_type precision_type = WIDEST_CONST_PRECISION;
+    static const bool host_dependent_precision = false;
+    static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = true;
+    static const unsigned int precision
+      = N / WIDE_INT_MAX_INL_PRECISION * WIDEST_INT_MAX_PRECISION;
+    static const unsigned int inl_precision = N;
+    template <typename T1, typename T2>
+    static WIDEST_INT (N) get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
+  };
+}
+
+template <int N>
+inline widest_int_storage <N>::widest_int_storage () : len (0) {}
+
+/* Initialize the storage from integer X, in precision N.  */
+template <int N>
+template <typename T>
+inline widest_int_storage <N>::widest_int_storage (const T &x) : len (0)
+{
+  /* Check for type compatibility.  We don't want to initialize a
+     widest integer from something like a wide_int.  */
+  WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
+  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N / WIDE_INT_MAX_INL_PRECISION
+					    * WIDEST_INT_MAX_PRECISION));
+}
+
+template <int N>
+inline
+widest_int_storage <N>::widest_int_storage (const widest_int_storage <N> &x)
+{
+  len = x.len;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, len);
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+}
+
+template <int N>
+inline widest_int_storage <N>::~widest_int_storage ()
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+}
+
+template <int N>
+inline widest_int_storage <N>&
+widest_int_storage <N>::operator = (const widest_int_storage <N> &x)
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      if (this == &x)
+	return *this;
+      XDELETEVEC (u.valp);
+    }
+  len = x.len;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, len);
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+  return *this;
+}
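The copy constructor, destructor and assignment above implement a small-buffer optimization: values whose length fits the inline union array stay there, larger ones live in a heap allocation reached through u.valp, and len alone decides which union member is active. A self-contained sketch of the same rule-of-three pattern (names hypothetical, inline capacity fixed at 2 elements for brevity):

```cpp
#include <cassert>
#include <cstring>
#include <cstdint>

// Union-based storage sketch: inline array for len <= INL, heap pointer
// otherwise.  Mirrors the copy/assign/destroy logic of widest_int_storage.
struct sbo_int
{
  static const unsigned INL = 2;
  union { int64_t val[INL]; int64_t *valp; } u;
  unsigned len = 0;

  sbo_int () = default;
  sbo_int (const sbo_int &x) : len (x.len)
  {
    if (len > INL)
      {
	u.valp = new int64_t[len];
	std::memcpy (u.valp, x.u.valp, len * sizeof (int64_t));
      }
    else
      std::memcpy (u.val, x.u.val, len * sizeof (int64_t));
  }
  ~sbo_int () { if (len > INL) delete[] u.valp; }
  sbo_int &operator= (const sbo_int &x)
  {
    if (this == &x)
      return *this;
    if (len > INL)
      delete[] u.valp;			// drop old heap block first
    len = x.len;
    if (len > INL)
      {
	u.valp = new int64_t[len];
	std::memcpy (u.valp, x.u.valp, len * sizeof (int64_t));
      }
    else
      std::memcpy (u.val, x.u.val, len * sizeof (int64_t));
    return *this;
  }
  // write_val analogue: (re)allocate for L elements, return writable buffer.
  int64_t *write_val (unsigned l)
  {
    if (len > INL)
      delete[] u.valp;
    len = l;
    return len > INL ? (u.valp = new int64_t[l]) : u.val;
  }
  const int64_t *get_val () const { return len > INL ? u.valp : u.val; }
};
```

The real storage additionally shrinks a heap block back into the inline array in set_len when the canonicalized length turns out small enough.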
+
+template <int N>
+template <typename T>
+inline widest_int_storage <N>&
+widest_int_storage <N>::operator = (const T &x)
+{
+  /* Check for type compatibility.  We don't want to assign a
+     widest integer from something like a wide_int.  */
+  WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+  len = 0;
+  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N / WIDE_INT_MAX_INL_PRECISION
+					    * WIDEST_INT_MAX_PRECISION));
+  return *this;
+}
+
+template <int N>
+inline unsigned int
+widest_int_storage <N>::get_precision () const
+{
+  return N / WIDE_INT_MAX_INL_PRECISION * WIDEST_INT_MAX_PRECISION;
+}
+
+template <int N>
+inline const HOST_WIDE_INT *
+widest_int_storage <N>::get_val () const
+{
+  return UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT) ? u.valp : u.val;
+}
+
+template <int N>
+inline unsigned int
+widest_int_storage <N>::get_len () const
+{
+  return len;
+}
+
+template <int N>
+inline HOST_WIDE_INT *
+widest_int_storage <N>::write_val (unsigned int l)
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+  len = l;
+  if (UNLIKELY (l > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, l);
+      return u.valp;
+    }
+  return u.val;
+}
+
+template <int N>
+inline void
+widest_int_storage <N>::set_len (unsigned int l, bool)
+{
+  gcc_checking_assert (l <= len);
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT)
+      && l <= N / HOST_BITS_PER_WIDE_INT)
+    {
+      HOST_WIDE_INT *valp = u.valp;
+      memcpy (u.val, valp, len * sizeof (u.val[0]));
+      XDELETEVEC (valp);
+    }
+  len = l;
+  /* There are no excess bits in val[len - 1].  */
+  STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
+}
+
+/* Treat X as having signedness SGN and convert it to an N-bit number.  */
+template <int N>
+inline WIDEST_INT (N)
+widest_int_storage <N>::from (const wide_int_ref &x, signop sgn)
+{
+  WIDEST_INT (N) result;
+  unsigned int exp_len = x.len;
+  unsigned int prec = result.get_precision ();
+  if (sgn == UNSIGNED && prec > x.precision && x.val[x.len - 1] < 0)
+    exp_len = CEIL (x.precision, HOST_BITS_PER_WIDE_INT) + 1;
+  result.set_len (wi::force_to_size (result.write_val (exp_len), x.val, x.len,
+				     x.precision, prec, sgn));
+  return result;
+}
+
+/* Create a WIDEST_INT (N) from the explicit block encoding given by
+   VAL and LEN.  NEED_CANON_P is true if the encoding may have redundant
+   trailing blocks.  */
+template <int N>
+inline WIDEST_INT (N)
+widest_int_storage <N>::from_array (const HOST_WIDE_INT *val,
+				    unsigned int len,
+				    bool need_canon_p)
+{
+  WIDEST_INT (N) result;
+  result.set_len (wi::from_array (result.write_val (len), val, len,
+				  result.get_precision (), need_canon_p));
+  return result;
+}
+
+template <int N>
+template <typename T1, typename T2>
+inline WIDEST_INT (N)
+wi::int_traits < widest_int_storage <N> >::
+get_binary_result (const T1 &, const T2 &)
+{
+  return WIDEST_INT (N) ();
+}
+
+template <int N>
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits < widest_int_storage <N> >::
+get_binary_precision (const T1 &, const T2 &)
+{
+  return N / WIDE_INT_MAX_INL_PRECISION * WIDEST_INT_MAX_PRECISION;
+}
+
 /* A reference to one element of a trailing_wide_ints structure.  */
 class trailing_wide_int_storage
 {
@@ -1359,7 +1930,7 @@ public:
   unsigned int get_len () const;
   unsigned int get_precision () const;
   const HOST_WIDE_INT *get_val () const;
-  HOST_WIDE_INT *write_val ();
+  HOST_WIDE_INT *write_val (unsigned int);
   void set_len (unsigned int, bool = false);
 
   template <typename T>
@@ -1445,7 +2016,7 @@ trailing_wide_int_storage::get_val () co
 }
 
 inline HOST_WIDE_INT *
-trailing_wide_int_storage::write_val ()
+trailing_wide_int_storage::write_val (unsigned int)
 {
   return m_val;
 }
@@ -1528,6 +2099,7 @@ namespace wi
     static const enum precision_type precision_type = FLEXIBLE_PRECISION;
     static const bool host_dependent_precision = true;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     static unsigned int get_precision (T);
     static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
   };
@@ -1699,6 +2271,7 @@ namespace wi
        precision of HOST_WIDE_INT.  */
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     static unsigned int get_precision (const wi::hwi_with_prec &);
     static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
 				      const wi::hwi_with_prec &);
@@ -1804,8 +2377,8 @@ template <typename T1, typename T2>
 inline unsigned int
 wi::get_binary_precision (const T1 &x, const T2 &y)
 {
-  return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
-			get_binary_result (x, y));
+  return wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_precision (x,
+									   y);
 }
 
 /* Copy the contents of Y to X, but keeping X's current precision.  */
@@ -1813,9 +2386,9 @@ template <typename T1, typename T2>
 inline void
 wi::copy (T1 &x, const T2 &y)
 {
-  HOST_WIDE_INT *xval = x.write_val ();
-  const HOST_WIDE_INT *yval = y.get_val ();
   unsigned int len = y.get_len ();
+  HOST_WIDE_INT *xval = x.write_val (len);
+  const HOST_WIDE_INT *yval = y.get_val ();
   unsigned int i = 0;
   do
     xval[i] = yval[i];
@@ -2162,6 +2735,8 @@ wi::bit_not (const T &x)
 {
   WI_UNARY_RESULT_VAR (result, val, T, x);
   WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len);
   for (unsigned int i = 0; i < xi.len; ++i)
     val[i] = ~xi.val[i];
   result.set_len (xi.len);
@@ -2203,6 +2778,9 @@ wi::sext (const T &x, unsigned int offse
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
 
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len,
+				 CEIL (offset, HOST_BITS_PER_WIDE_INT)));
   if (offset <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = sext_hwi (xi.ulow (), offset);
@@ -2230,6 +2808,9 @@ wi::zext (const T &x, unsigned int offse
       return result;
     }
 
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len,
+				 offset / HOST_BITS_PER_WIDE_INT + 1));
   /* In these cases we know that at least the top bit will be clear,
      so no sign extension is necessary.  */
   if (offset < HOST_BITS_PER_WIDE_INT)
@@ -2259,6 +2840,9 @@ wi::set_bit (const T &x, unsigned int bi
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len,
+				 bit / HOST_BITS_PER_WIDE_INT + 1));
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () | (HOST_WIDE_INT_1U << bit);
@@ -2280,6 +2864,8 @@ wi::bswap (const T &x)
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* bswap on widest_int makes no sense.  */
   result.set_len (bswap_large (val, xi.val, xi.len, precision));
   return result;
 }
@@ -2292,6 +2878,8 @@ wi::bitreverse (const T &x)
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* bitreverse on widest_int makes no sense.  */
   result.set_len (bitreverse_large (val, xi.val, xi.len, precision));
   return result;
 }
@@ -2368,6 +2956,8 @@ wi::bit_and (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () & yi.ulow ();
@@ -2389,6 +2979,8 @@ wi::bit_and_not (const T1 &x, const T2 &
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () & ~yi.ulow ();
@@ -2410,6 +3002,8 @@ wi::bit_or (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () | yi.ulow ();
@@ -2431,6 +3025,8 @@ wi::bit_or_not (const T1 &x, const T2 &y
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () | ~yi.ulow ();
@@ -2452,6 +3048,8 @@ wi::bit_xor (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () ^ yi.ulow ();
@@ -2472,6 +3070,8 @@ wi::add (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () + yi.ulow ();
@@ -2515,6 +3115,8 @@ wi::add (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       unsigned HOST_WIDE_INT xl = xi.ulow ();
@@ -2558,6 +3160,8 @@ wi::sub (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () - yi.ulow ();
@@ -2601,6 +3205,8 @@ wi::sub (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       unsigned HOST_WIDE_INT xl = xi.ulow ();
@@ -2643,6 +3249,8 @@ wi::mul (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len + yi.len + 2);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () * yi.ulow ();
@@ -2664,6 +3272,8 @@ wi::mul (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len + yi.len + 2);
   result.set_len (mul_internal (val, xi.val, xi.len,
 				yi.val, yi.len, precision,
 				sgn, overflow, false));
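The upper bounds passed to write_val follow from limb arithmetic: adding two values of xi.len and yi.len limbs needs at most MAX (xi.len, yi.len) + 1 limbs, since the carry out of the top limb is at most 1, and a product of m- and n-limb values has at most m + n limbs (the + 2 leaves slack for sign/overflow handling). A quick check of the addition bound, mirroring a 64-bit HOST_WIDE_INT with uint64_t and using the GCC/Clang __int128 extension:

```cpp
#include <cassert>
#include <cstdint>

// Carry limb produced by adding two single-limb values: it is <= 1 even
// in the worst case, which is why MAX (m, n) + 1 limbs always suffice.
static uint64_t add_carry_limb (uint64_t x, uint64_t y)
{
  unsigned __int128 sum = (unsigned __int128) x + y;
  return (uint64_t) (sum >> 64);	// the one extra limb of the result
}
```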
@@ -2698,6 +3308,8 @@ wi::mul_high (const T1 &x, const T2 &y,
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* mul_high on widest_int doesn't make sense.  */
   result.set_len (mul_internal (val, xi.val, xi.len,
 				yi.val, yi.len, precision,
 				sgn, 0, true));
@@ -2716,6 +3328,12 @@ wi::div_trunc (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y);
 
+  if (quotient.needs_write_val_arg)
+    quotient_val = quotient.write_val ((sgn == UNSIGNED
+					&& xi.val[xi.len - 1] < 0)
+				       ? CEIL (precision,
+					       HOST_BITS_PER_WIDE_INT) + 1
+				       : xi.len + 1);
   quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
 				     precision,
 				     yi.val, yi.len, yi.precision,
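The CEIL (precision, HOST_BITS_PER_WIDE_INT) + 1 estimate covers the case where a compactly stored, sign-extended negative value is treated as UNSIGNED: decompressing it materializes every implicit all-ones limb up to the full precision. A sketch of that expansion, using 8-bit "limbs" for readability (the limb width is an illustrative assumption; GCC's limbs are HOST_WIDE_INT):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Decompress a sign-extended limb vector V of PREC bits to its full
// unsigned form: CEIL (prec, 8) explicit limbs, filling the implicit
// limbs with the sign pattern.
static std::vector<uint8_t> zext_limbs (const std::vector<int8_t> &v,
					unsigned prec)
{
  unsigned n = (prec + 7) / 8;			// CEIL (prec, 8)
  uint8_t sign = v.back () < 0 ? 0xff : 0x00;
  std::vector<uint8_t> out (n);
  for (unsigned i = 0; i < n; i++)
    out[i] = i < v.size () ? (uint8_t) v[i] : sign;
  return out;
}
```

So -1 stored as a single limb expands to four 0xff limbs at 32-bit precision, which is why the quotient buffer must be sized for the full precision rather than for xi.len.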
@@ -2753,6 +3371,15 @@ wi::div_floor (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2795,6 +3422,15 @@ wi::div_ceil (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2828,6 +3464,15 @@ wi::div_round (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2871,6 +3516,15 @@ wi::divmod_trunc (const T1 &x, const T2
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2915,6 +3569,8 @@ wi::mod_trunc (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (remainder.needs_write_val_arg)
+    remainder_val = remainder.write_val (yi.len);
   divmod_internal (0, &remainder_len, remainder_val,
 		   xi.val, xi.len, precision,
 		   yi.val, yi.len, yi.precision, sgn, overflow);
@@ -2955,6 +3611,15 @@ wi::mod_floor (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2991,6 +3656,15 @@ wi::mod_ceil (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -3017,6 +3691,15 @@ wi::mod_round (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -3086,12 +3769,16 @@ wi::lshift (const T1 &x, const T2 &y)
   /* Handle the simple cases quickly.   */
   if (geu_p (yi, precision))
     {
+      if (result.needs_write_val_arg)
+	val = result.write_val (1);
       val[0] = 0;
       result.set_len (1);
     }
   else
     {
       unsigned int shift = yi.to_uhwi ();
+      if (result.needs_write_val_arg)
+	val = result.write_val (xi.len + shift / HOST_BITS_PER_WIDE_INT + 1);
       /* For fixed-precision integers like offset_int and widest_int,
 	 handle the case where the shift value is constant and the
 	 result is a single nonnegative HWI (meaning that we don't
@@ -3130,12 +3817,23 @@ wi::lrshift (const T1 &x, const T2 &y)
   /* Handle the simple cases quickly.   */
   if (geu_p (yi, xi.precision))
     {
+      if (result.needs_write_val_arg)
+	val = result.write_val (1);
       val[0] = 0;
       result.set_len (1);
     }
   else
     {
       unsigned int shift = yi.to_uhwi ();
+      if (result.needs_write_val_arg)
+	{
+	  unsigned int est_len = xi.len;
+	  if (xi.val[xi.len - 1] < 0 && shift)
+	    /* A logical right shift of a sign-extended value can need far
+	       more result elements than the input, e.g. for widest_int.  */
+	    est_len = CEIL (xi.precision - shift, HOST_BITS_PER_WIDE_INT) + 1;
+	  val = result.write_val (est_len);
+	}
       /* For fixed-precision integers like offset_int and widest_int,
 	 handle the case where the shift value is constant and the
 	 shifted value is a single nonnegative HWI (meaning that all
@@ -3171,6 +3869,8 @@ wi::arshift (const T1 &x, const T2 &y)
      since the result can be no larger than that.  */
   WIDE_INT_REF_FOR (T1) xi (x);
   WIDE_INT_REF_FOR (T2) yi (y);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len);
   /* Handle the simple cases quickly.   */
   if (geu_p (yi, xi.precision))
     {
@@ -3374,25 +4074,56 @@ operator % (const T1 &x, const T2 &y)
   return wi::smod_trunc (x, y);
 }
 
-template<typename T>
+void gt_ggc_mx (generic_wide_int <wide_int_storage> *) = delete;
+void gt_pch_nx (generic_wide_int <wide_int_storage> *) = delete;
+void gt_pch_nx (generic_wide_int <wide_int_storage> *,
+		gt_pointer_operator, void *) = delete;
+
+inline void
+gt_ggc_mx (generic_wide_int <rwide_int_storage> *)
+{
+}
+
+inline void
+gt_pch_nx (generic_wide_int <rwide_int_storage> *)
+{
+}
+
+inline void
+gt_pch_nx (generic_wide_int <rwide_int_storage> *, gt_pointer_operator, void *)
+{
+}
+
+template<int N>
 void
-gt_ggc_mx (generic_wide_int <T> *)
+gt_ggc_mx (generic_wide_int <fixed_wide_int_storage <N> > *)
 {
 }
 
-template<typename T>
+template<int N>
 void
-gt_pch_nx (generic_wide_int <T> *)
+gt_pch_nx (generic_wide_int <fixed_wide_int_storage <N> > *)
 {
 }
 
-template<typename T>
+template<int N>
 void
-gt_pch_nx (generic_wide_int <T> *, gt_pointer_operator, void *)
+gt_pch_nx (generic_wide_int <fixed_wide_int_storage <N> > *,
+	   gt_pointer_operator, void *)
 {
 }
 
 template<int N>
+void gt_ggc_mx (generic_wide_int <widest_int_storage <N> > *) = delete;
+
+template<int N>
+void gt_pch_nx (generic_wide_int <widest_int_storage <N> > *) = delete;
+
+template<int N>
+void gt_pch_nx (generic_wide_int <widest_int_storage <N> > *,
+		gt_pointer_operator, void *) = delete;
+
+template<int N>
 void
 gt_ggc_mx (trailing_wide_ints <N> *)
 {
@@ -3465,7 +4196,7 @@ inline wide_int
 wi::mask (unsigned int width, bool negate_p, unsigned int precision)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (mask (result.write_val (), width, negate_p, precision));
+  result.set_len (mask (result.write_val (0), width, negate_p, precision));
   return result;
 }
 
@@ -3477,7 +4208,7 @@ wi::shifted_mask (unsigned int start, un
 		  unsigned int precision)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
+  result.set_len (shifted_mask (result.write_val (0), start, width, negate_p,
 				precision));
   return result;
 }
@@ -3498,8 +4229,8 @@ wi::mask (unsigned int width, bool negat
 {
   STATIC_ASSERT (wi::int_traits<T>::precision);
   T result;
-  result.set_len (mask (result.write_val (), width, negate_p,
-			wi::int_traits <T>::precision));
+  result.set_len (mask (result.write_val (width / HOST_BITS_PER_WIDE_INT + 1),
+			width, negate_p, wi::int_traits <T>::precision));
   return result;
 }
 
@@ -3512,9 +4243,13 @@ wi::shifted_mask (unsigned int start, un
 {
   STATIC_ASSERT (wi::int_traits<T>::precision);
   T result;
-  result.set_len (shifted_mask (result.write_val (), start, width,
-				negate_p,
-				wi::int_traits <T>::precision));
+  unsigned int prec = wi::int_traits <T>::precision;
+  unsigned int est_len
+    = result.needs_write_val_arg
+      ? ((start + (width > prec - start ? prec - start : width))
+	 / HOST_BITS_PER_WIDE_INT + 1) : 0;
+  result.set_len (shifted_mask (result.write_val (est_len), start, width,
+				negate_p, prec));
   return result;
 }
 
--- gcc/godump.cc.jj	2023-10-04 16:28:04.148784815 +0200
+++ gcc/godump.cc	2023-10-05 11:36:55.219243548 +0200
@@ -1154,7 +1154,11 @@ go_output_typedef (class godump_containe
 	    snprintf (buf, sizeof buf, HOST_WIDE_INT_PRINT_UNSIGNED,
 		      tree_to_uhwi (value));
 	  else
-	    print_hex (wi::to_wide (element), buf);
+	    {
+	      wide_int w = wi::to_wide (element);
+	      gcc_assert (w.get_len () <= WIDE_INT_MAX_INL_ELTS);
+	      print_hex (w, buf);
+	    }
 
 	  mhval->value = xstrdup (buf);
 	  *slot = mhval;
--- gcc/tree-ssa-loop-ivcanon.cc.jj	2023-10-04 16:28:04.310782607 +0200
+++ gcc/tree-ssa-loop-ivcanon.cc	2023-10-05 11:36:55.219243548 +0200
@@ -622,10 +622,11 @@ remove_redundant_iv_tests (class loop *l
 	      || !integer_zerop (niter.may_be_zero)
 	      || !niter.niter
 	      || TREE_CODE (niter.niter) != INTEGER_CST
-	      || !wi::ltu_p (loop->nb_iterations_upper_bound,
+	      || !wi::ltu_p (widest_int::from (loop->nb_iterations_upper_bound,
+					       SIGNED),
 			     wi::to_widest (niter.niter)))
 	    continue;
-	  
+
 	  if (dump_file && (dump_flags & TDF_DETAILS))
 	    {
 	      fprintf (dump_file, "Removed pointless exit: ");
--- gcc/value-range-pretty-print.cc.jj	2023-10-04 16:28:04.415781176 +0200
+++ gcc/value-range-pretty-print.cc	2023-10-05 11:36:55.142244603 +0200
@@ -99,12 +99,19 @@ vrange_printer::print_irange_bitmasks (c
     return;
 
   pp_string (pp, " MASK ");
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_hex (bm.mask (), buf);
-  pp_string (pp, buf);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
+  unsigned len_mask = bm.mask ().get_len ();
+  unsigned len_val = bm.value ().get_len ();
+  unsigned len = MAX (len_mask, len_val);
+  if (len > WIDE_INT_MAX_INL_ELTS)
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  else
+    p = buf;
+  print_hex (bm.mask (), p);
+  pp_string (pp, p);
   pp_string (pp, " VALUE ");
-  print_hex (bm.value (), buf);
-  pp_string (pp, buf);
+  print_hex (bm.value (), p);
+  pp_string (pp, p);
 }
 
 void
--- gcc/print-tree.cc.jj	2023-10-04 16:28:04.257783330 +0200
+++ gcc/print-tree.cc	2023-10-05 11:36:54.630251622 +0200
@@ -365,13 +365,13 @@ print_node (FILE *file, const char *pref
     fputs (code == CALL_EXPR ? " must-tail-call" : " static", file);
   if (TREE_DEPRECATED (node))
     fputs (" deprecated", file);
-  if (TREE_UNAVAILABLE (node))
-    fputs (" unavailable", file);
   if (TREE_VISITED (node))
     fputs (" visited", file);
 
   if (code != TREE_VEC && code != INTEGER_CST && code != SSA_NAME)
     {
+      if (TREE_UNAVAILABLE (node))
+	fputs (" unavailable", file);
       if (TREE_LANG_FLAG_0 (node))
 	fputs (" tree_0", file);
       if (TREE_LANG_FLAG_1 (node))
--- gcc/wide-int-print.h.jj	2023-10-04 16:28:04.448780726 +0200
+++ gcc/wide-int-print.h	2023-10-05 11:36:54.630251622 +0200
@@ -22,7 +22,7 @@ along with GCC; see the file COPYING3.
 
 #include <stdio.h>
 
-#define WIDE_INT_PRINT_BUFFER_SIZE (WIDE_INT_MAX_PRECISION / 4 + 4)
+#define WIDE_INT_PRINT_BUFFER_SIZE (WIDE_INT_MAX_INL_PRECISION / 4 + 4)
 
 /* Printing functions.  */
 
--- gcc/dwarf2out.h.jj	2023-10-04 16:28:04.095785537 +0200
+++ gcc/dwarf2out.h	2023-10-05 11:36:54.666251128 +0200
@@ -30,7 +30,7 @@ typedef struct dw_cfi_node *dw_cfi_ref;
 typedef struct dw_loc_descr_node *dw_loc_descr_ref;
 typedef struct dw_loc_list_struct *dw_loc_list_ref;
 typedef struct dw_discr_list_node *dw_discr_list_ref;
-typedef wide_int *wide_int_ptr;
+typedef rwide_int *rwide_int_ptr;
 
 
 /* Call frames are described using a sequence of Call Frame
@@ -252,7 +252,7 @@ struct GTY(()) dw_val_node {
       unsigned HOST_WIDE_INT
 	GTY ((tag ("dw_val_class_unsigned_const"))) val_unsigned;
       double_int GTY ((tag ("dw_val_class_const_double"))) val_double;
-      wide_int_ptr GTY ((tag ("dw_val_class_wide_int"))) val_wide;
+      rwide_int_ptr GTY ((tag ("dw_val_class_wide_int"))) val_wide;
       dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec;
       struct dw_val_die_union
 	{
--- gcc/data-streamer-in.cc.jj	2023-10-04 16:28:04.025786491 +0200
+++ gcc/data-streamer-in.cc	2023-10-05 11:36:54.843248702 +0200
@@ -277,10 +277,12 @@ streamer_read_value_range (class lto_inp
 wide_int
 streamer_read_wide_int (class lto_input_block *ib)
 {
-  HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf;
   int i;
   int prec = streamer_read_uhwi (ib);
   int len = streamer_read_uhwi (ib);
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    a = XALLOCAVEC (HOST_WIDE_INT, len);
   for (i = 0; i < len; i++)
     a[i] = streamer_read_hwi (ib);
   return wide_int::from_array (a, len, prec);
@@ -292,10 +294,12 @@ streamer_read_wide_int (class lto_input_
 widest_int
 streamer_read_widest_int (class lto_input_block *ib)
 {
-  HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf;
   int i;
   int prec ATTRIBUTE_UNUSED = streamer_read_uhwi (ib);
   int len = streamer_read_uhwi (ib);
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    a = XALLOCAVEC (HOST_WIDE_INT, len);
   for (i = 0; i < len; i++)
     a[i] = streamer_read_hwi (ib);
   return widest_int::from_array (a, len);


	Jakub



* [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int
  2023-10-05 15:11         ` Jakub Jelinek
@ 2023-10-06 17:41           ` Jakub Jelinek
  0 siblings, 0 replies; 16+ messages in thread
From: Jakub Jelinek @ 2023-10-06 17:41 UTC (permalink / raw)
  To: Richard Biener, Richard Sandiford; +Cc: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 162657 bytes --]

Hi!

On Thu, Oct 05, 2023 at 05:11:02PM +0200, Jakub Jelinek wrote:
> On Thu, Sep 28, 2023 at 04:03:55PM +0200, Jakub Jelinek wrote:
> > Your thoughts on all of this?
> 
> So, here is some further progress on the patch (on top of the ipa_bits
> removal patch).

Further progress: this patch now passes bootstrap/regtest on x86_64-linux.

As I've mentioned before, wide_int still allocates based on precision, not
on the actual needed length (while widest_int allocates on the needed
length), and the patch as is seems to do the very large allocations in
various cases even when no _BitInt is ever parsed in the source.

To see how often that happens, I've applied the first attached
incremental patch as a hack to gather statistics on such allocations and
then (in the same patch) attempted to tweak the largest offenders.
The most common problem with huge (usually exactly 510 limbs) allocations
has been that 5 spots in the sources do force_fit_type (type, wi::to_widest
(some_tree), ...), with a comment that it wants to ensure it is sign or zero
extended properly according to the original sign.  I've used
force_fit_type (type, wide_int::from (wi::to_wide (some_tree),
MAX (TYPE_PRECISION (type), TYPE_PRECISION (TREE_TYPE (some_tree))),
TYPE_SIGN (TREE_TYPE (some_tree))), ...);
for that instead; force_fit_type takes a const wide_int_ref &, so for
widest_int it sees something with the 32640-bit precision, but it wants to
create a wide_int rather than a widest_int as the unary/binary operation
result, and that is why we allocate large vectors when trying to wi::ext
it.  I think the maximum of the two precisions is all we need.

Another problem was with the bit-CCP TRUNC_DIV_EXPR UNSIGNED handling:
for some reason the widest_int mask is often sign-extended rather than
zero-extended, and trying to udiv_trunc something that is wi::neg_p again
results in 510-ish limbs.  I think just zero extending it before the
division, like we e.g. do for arithmetic right shifts, is the right thing.

With that, on make -j32 -k check-gcc I only saw allocations in _BitInt
tests, and except for the newly added bitint-38.c test, which tests
unsigned _BitInt(16319), everything was quite small.

So perhaps after cleaning up the force_fit_type + tree-ssa-ccp.cc hunks
of the hack patch we could get away with wide_int doing precision-based
allocations.

Another thing is that I've added #pragma GCC diagnostic to wide-int.h
to work around PR111715 false-positive warnings on tree-affine.o.
The second attached patch (just compile tested) removes those pragmas
again and adds a short hack where we know that write_val for widest_int
was passed exact length rather than approximate upper bound and so we don't
need to do anything at all in set_len.

--- gcc/tree-vect-loop.cc.jj	2023-10-04 16:28:04.354782008 +0200
+++ gcc/tree-vect-loop.cc	2023-10-05 11:52:25.001491397 +0200
@@ -11681,7 +11681,7 @@ vect_transform_loop (loop_vec_info loop_
 					LOOP_VINFO_VECT_FACTOR (loop_vinfo),
 					&bound))
 	    loop->nb_iterations_upper_bound
-	      = wi::umin ((widest_int) (bound - 1),
+	      = wi::umin ((bound_wide_int) (bound - 1),
 			  loop->nb_iterations_upper_bound);
       }
   }
--- gcc/wide-int-print.cc.jj	2023-10-04 16:28:04.447780740 +0200
+++ gcc/wide-int-print.cc	2023-10-05 11:36:55.265242917 +0200
@@ -74,9 +74,12 @@ print_decs (const wide_int_ref &wi, char
 void
 print_decs (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_decs (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_decs (wi, p);
+  fputs (p, file);
 }
 
 /* Try to print the unsigned self in decimal to BUF if the number fits
@@ -98,9 +101,12 @@ print_decu (const wide_int_ref &wi, char
 void
 print_decu (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_decu (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_decu (wi, p);
+  fputs (p, file);
 }
 
 void
@@ -134,9 +140,12 @@ print_hex (const wide_int_ref &val, char
 void
 print_hex (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_hex (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_hex (wi, p);
+  fputs (p, file);
 }
 
 /* Print larger precision wide_int.  Not defined as inline in a header
--- gcc/lto-streamer-out.cc.jj	2023-10-04 16:28:04.201784093 +0200
+++ gcc/lto-streamer-out.cc	2023-10-05 11:36:54.700250663 +0200
@@ -2173,13 +2173,26 @@ output_cfg (struct output_block *ob, str
 			   loop_estimation, EST_LAST, loop->estimate_state);
       streamer_write_hwi (ob, loop->any_upper_bound);
       if (loop->any_upper_bound)
-	streamer_write_widest_int (ob, loop->nb_iterations_upper_bound);
+	{
+	  widest_int w = widest_int::from (loop->nb_iterations_upper_bound,
+					   SIGNED);
+	  streamer_write_widest_int (ob, w);
+	}
       streamer_write_hwi (ob, loop->any_likely_upper_bound);
       if (loop->any_likely_upper_bound)
-	streamer_write_widest_int (ob, loop->nb_iterations_likely_upper_bound);
+	{
+	  widest_int w
+	    = widest_int::from (loop->nb_iterations_likely_upper_bound,
+				SIGNED);
+	  streamer_write_widest_int (ob, w);
+	}
       streamer_write_hwi (ob, loop->any_estimate);
       if (loop->any_estimate)
-	streamer_write_widest_int (ob, loop->nb_iterations_estimate);
+	{
+	  widest_int w = widest_int::from (loop->nb_iterations_estimate,
+					   SIGNED);
+	  streamer_write_widest_int (ob, w);
+	}
 
       /* Write OMP SIMD related info.  */
       streamer_write_hwi (ob, loop->safelen);
--- gcc/value-range.h.jj	2023-10-04 16:28:04.436780890 +0200
+++ gcc/value-range.h	2023-10-05 11:36:55.257243027 +0200
@@ -626,7 +626,9 @@ irange::maybe_resize (int needed)
     {
       m_max_ranges = HARD_MAX_RANGES;
       wide_int *newmem = new wide_int[m_max_ranges * 2];
-      memcpy (newmem, m_base, sizeof (wide_int) * num_pairs () * 2);
+      unsigned n = num_pairs () * 2;
+      for (unsigned i = 0; i < n; ++i)
+	newmem[i] = m_base[i];
       m_base = newmem;
     }
 }
--- gcc/tree-ssa-loop-ivopts.cc.jj	2023-09-29 18:58:47.317894622 +0200
+++ gcc/tree-ssa-loop-ivopts.cc	2023-10-06 12:40:49.512169963 +0200
@@ -1036,10 +1036,12 @@ niter_for_exit (struct ivopts_data *data
 	 names that appear in phi nodes on abnormal edges, so that we do not
 	 create overlapping life ranges for them (PR 27283).  */
       desc = XNEW (class tree_niter_desc);
+      ::new (static_cast<void*> (desc)) tree_niter_desc ();
       if (!number_of_iterations_exit (data->current_loop,
 				      exit, desc, true)
      	  || contains_abnormal_ssa_name_p (desc->niter))
 	{
+	  desc->~tree_niter_desc ();
 	  XDELETE (desc);
 	  desc = NULL;
 	}
@@ -7894,7 +7896,11 @@ remove_unused_ivs (struct ivopts_data *d
 bool
 free_tree_niter_desc (edge const &, tree_niter_desc *const &value, void *)
 {
-  free (value);
+  if (value)
+    {
+      value->~tree_niter_desc ();
+      free (value);
+    }
   return true;
 }
 
--- gcc/lto-streamer-in.cc.jj	2023-10-04 16:28:04.178784406 +0200
+++ gcc/lto-streamer-in.cc	2023-10-05 11:36:54.730250251 +0200
@@ -1122,13 +1122,16 @@ input_cfg (class lto_input_block *ib, cl
       loop->estimate_state = streamer_read_enum (ib, loop_estimation, EST_LAST);
       loop->any_upper_bound = streamer_read_hwi (ib);
       if (loop->any_upper_bound)
-	loop->nb_iterations_upper_bound = streamer_read_widest_int (ib);
+	loop->nb_iterations_upper_bound
+	  = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED);
       loop->any_likely_upper_bound = streamer_read_hwi (ib);
       if (loop->any_likely_upper_bound)
-	loop->nb_iterations_likely_upper_bound = streamer_read_widest_int (ib);
+	loop->nb_iterations_likely_upper_bound
+	  = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED);
       loop->any_estimate = streamer_read_hwi (ib);
       if (loop->any_estimate)
-	loop->nb_iterations_estimate = streamer_read_widest_int (ib);
+	loop->nb_iterations_estimate
+	  = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED);
 
       /* Read OMP SIMD related info.  */
       loop->safelen = streamer_read_hwi (ib);
@@ -1888,13 +1891,17 @@ lto_input_tree_1 (class lto_input_block
       tree type = stream_read_tree_ref (ib, data_in);
       unsigned HOST_WIDE_INT len = streamer_read_uhwi (ib);
       unsigned HOST_WIDE_INT i;
-      HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+      HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf;
 
+      if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+	a = XALLOCAVEC (HOST_WIDE_INT, len);
       for (i = 0; i < len; i++)
 	a[i] = streamer_read_hwi (ib);
       gcc_assert (TYPE_PRECISION (type) <= WIDE_INT_MAX_PRECISION);
-      result = wide_int_to_tree (type, wide_int::from_array
-				 (a, len, TYPE_PRECISION (type)));
+      result
+	= wide_int_to_tree (type,
+			    wide_int::from_array (a, len,
+						  TYPE_PRECISION (type)));
       streamer_tree_cache_append (data_in->reader_cache, result, hash);
     }
   else if (tag == LTO_tree_scc || tag == LTO_trees)
--- gcc/value-range.cc.jj	2023-10-04 16:28:04.416781162 +0200
+++ gcc/value-range.cc	2023-10-05 11:36:54.835248812 +0200
@@ -245,17 +245,24 @@ vrange::dump (FILE *file) const
 void
 irange_bitmask::dump (FILE *file) const
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
   pretty_printer buffer;
 
   pp_needs_newline (&buffer) = true;
   buffer.buffer->stream = file;
   pp_string (&buffer, "MASK ");
-  print_hex (m_mask, buf);
-  pp_string (&buffer, buf);
+  unsigned len_mask = m_mask.get_len ();
+  unsigned len_val = m_value.get_len ();
+  unsigned len = MAX (len_mask, len_val);
+  if (len > WIDE_INT_MAX_INL_ELTS)
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  else
+    p = buf;
+  print_hex (m_mask, p);
+  pp_string (&buffer, p);
   pp_string (&buffer, " VALUE ");
-  print_hex (m_value, buf);
-  pp_string (&buffer, buf);
+  print_hex (m_value, p);
+  pp_string (&buffer, p);
   pp_flush (&buffer);
 }
 
--- gcc/testsuite/gcc.dg/bitint-38.c.jj	2023-10-05 11:36:54.667251115 +0200
+++ gcc/testsuite/gcc.dg/bitint-38.c	2023-10-05 12:57:07.941106025 +0200
@@ -0,0 +1,18 @@
+/* PR c/102989 */
+/* { dg-do compile { target { bitint } } } */
+/* { dg-options "-std=c2x" } */
+
+#if __BITINT_MAXWIDTH__ >= 16319
+constexpr unsigned _BitInt(16319) a
+  = 468098567701677261276215481936770442254383643766995378241600227179396283432916865881332215867106489159251577495372085663487092317743244770597287633199005374998455333587280357490149993101811392051483761495987108264964738337118155155862715438910721661230332533185335581757600511846854115932637261969633134365868695363914570578110064471868475841348589366933645410987699979080140212849909081188170910464967486231358935212897096260626033055536141835599284498474737858487658470115144771923114826312283863035503700600141440724426364699636330240414271275626021294939422483250619629005959992243418661230122132667769781183790338759345884903821695590991577228520523725302048215447841573113840811593638413425054938213262961448317898574140533090004992732688525115004782973893244091427000396890427152225308661078954671066069234453757593181753900865203439035402480306413572239610467142591920809187367438071170100969567440044691427487959785637338381651309916782063670286046547585240837892307170928849485877186793280707600840866783471799148179250818387716183127323346199533387463363442356218803779697005759324410376476855222420876262425985571982818180353870410149824214544313013285199544193496624223219986402944849622489422007678564946174797892795089330899535624727777525330789492703574564112252955147770942929761545604350869404246558274752353510370157229485004402131043153454290397929387276374054938578976878606467217359398684275050519104413914286024106808116340712273059427362293703151355498336213170698894448405369398757188523160460292714875857879968173578328191358215972493513271297875634400793301929250052822258636015650857683023900709845410838487936778533250407886180954576046340697908584020951295048844938047865657029072850797442976146895294184993736999505485665742811313795405530674199848055802759901786376822069529342971261963119332476504064285869362049662083405789828433132154933242817432809415810548180658750393692272729586232842065658490971201927780014258815333115459695117942273551876646844821076723
66404028277283451141989135127816901710398709480382959428635234046834661872608878149262681618865733135910417181982267380585631782849903908808822313725829737392904330767357009039694778959879992292864384353261701216481107461888177462262894353903797488381268913080186091509003587024406100581941813006839098647031467785360508010331341183790435828783740154625741324046693989352750893154106524192987230720387644388210619326254465229013236469167191033200612786414699140401536668356931724805794959607035492936115832695555160023607526843504410588016279838079916160798736528245866203159909692182517620270789073002369870685576293269168825936535896407659582457777527599118314911837204720605511846311286460406385389482040724983787136893494143811968060552854688725693433424607559674641029795445863235817142871414182091818338443568133237931754104825239171071219662340633870206119521372456930328540224285367138611314821153569168546183645829503753803437831805510824008241444120530040152673239995922834692652858685274338949097873478792672199985538879471183716442300771962610917900546611370645076526968758081982277218930108450362729738967513422822233728686764111051106198023124788453349244289893674342964195831413532907340649577636920815803211588385069101056904898394112677147799097609225239197281269166984744679850724410612166788542302561376925810277385553750973329580501331393740228280489721384722107264711160517234946456408991490649350813385538962717766342605776325208628632534381125475768180306827627804875799742528433471319022681846302307446190017695801005557243498313517114536524233927332698446518106428726464547083209111510064058410437557730405695196945620013848531356000927233822810363776386328926167325872673675340704414366407947949697258056053449480617081046930477300587359062628007238799966852254674798570159961397510118854385785214155925163405867671830800032486980962819944268156561566291262602279606441449610634423643128569768835770799298996656155717172997209353300747694786221592258320481118901555050564208
2475400647639520782187776825395598257421714106473869797642678266380755873356747812273977691604147842741151722919464734890326772594979022403228191075586910464204870254674290437668861177639713112762996390246102030994917186957826982084194156870398312336059100521566034092740694642613192909850644003933745129291062576341213874815510099835708723355432970090139671120232910747665906191360160259512198160849784197597300106223945960886603127136037120000864968668651452411048372895607382907494278810971475663944948791458618662250238375166523484847507342040066801856222328988662049579299600545682490412754483621051190231623196265549391964259780178070495642538883789503379406531279338866955157646654913405181879254189185904298325865503395688786311067669273609670603076582607253527084977744533187145642686236350165593980428575119329911921382240780504527422630654086941060242757131313184709635181001199631726283364158943337968797uwb
+    + 9935443518057456429927126655222257817207511311671335832560065573055276678747990652907348839741818562757939084649073348172108397183827020377941725983107513636287406530526358253508437290241937276908386282904353079102904535675608604576486162998319427702851278408213641454837223079616401615875672453250148421679223829417834227518133091055180270249266161676677176149675164257640812344297935650729629801878758059944090168862730519817203352341458310363811482318083270232434329317323822818991134500601669868922396013512969477839456472345812312321924215241849772147687455760224559240952737319009348540894966363568158349501355229264646770018071590502441702787269097973979899837683122194103110089728425676690246091146993955037918425772840022288222832932542516091501149477160856564464376910293230091963573119230648026667896399352790982611957569978972038178519570278447540707502861678502657905192743225893225663994807568918644898273702285483676385717651104042002105352993176512166420085064452431753181365805833548922676748890412420332694609096819779765600345216390394307257556778223743443958983962113723193551247897995423762348092103893683711373897139168289420267660611409947644548715007787832959251167553175096639147674776117973100447903243626902892382263767591328038235708593401563793019418124453166386471792468421003855894206584354731489363668134077946203546067237235657746480296831651791790385981397558458905904641394246279782746736009101862366868068363411976388557697921914317179371206444085390779634831369723370050764678852846779369497232374780691905280992368079762747352245519607264154197148958896955661904214909184952289996142050604821608749900417845137727596903100452350067551305840998280482775209883278873071895588751811462342517825753493814997918418437455474992422243919549967371964423457440287296270855605850954685912644303354019058716916735522533065323057755479803668782530250381988211075034655760123250249441440684338450953823290346909689822527652698723502872312570305261196768477498898020793
07180875890338179687386868237885092521162939276062868522274507354411661563555791080535762359021802371583271637253251937286209382854579732556780369199805178515606586156688887146113013352203932184343901796438203008075247670939873134117306243027500311195490762783720848834868666690476571065691770647092431843216015545072600766803549457177979312921224210129327485323785084880615277446368924342668329588464868079024036309701521834796639916638009037062859128871230513317186963967992285406649307677316697019048298882801703101689156197198627967537196302093246933726406131778633056683938398938476093559029928796354686384811999945173954840512400151403309669560558076612161144063854998889597026242513321815984806172721716348713180648168676684378997146524790353485383795141384578666712242718264898915659952964743941955378515856161311402326730386992756517050778178236644701134085125817853410158595008142343770377849234744823047389764350577395738550411218244669058503382374717596692909129369320106185867014120912909145286129227627601291062407124116540208916160694442382624546160859493573248190019824086229340944230880069001955083163047988300057988461460190696172301135444980457679433982605698695768009091604684867341972352969438465380940037721854507526914876612919463703940822551567801333218807499721766783549494004301491787743835490267310745316427528001025104036004093730873892568947572513163903201197900964271354229289421905935297293315111237619738381492536328867099555626944780499492508679172813690669324950711509780706036587211099821076833607838950872418486359728598773691207307198013716259077966467503342911932785530782717467374925746298305422163179752700998759573246022219736760844097348821189847143930205138880681852165968587367238382802132984815341020492660771097167826854167758442169523801178435138604786915878715663463069387242806786498032006329343588757474585906702498848574235327854870446754429879351158358765971371167706579237119932941937239272032198186226989002483234899986544933985633922038685316264
1984444934998176248821703154774794026863423846665361147912580310179333239849314145158103813724371277156031826070213656189218428551171492579367736652650240510840524479280661922149370381404863668038229922105064658335083314946842545978050497021795217124947959575065471749872278802756371390871441004232633252611825748658593540667831098874027223327541523742857750954119615708541514145110863925049204517574000824797900817585376961462754521495100198829675100958066639531958106704159717265035205597161047879510849900587565746603225763129877434317949842105742386965886137117798642168190733367414126797929434627532307855448841035433795229031275545885872876848846666666475465866905332293095381494096702328649920740506658930503053162777944821433383407283155178707970906458023827141681140372968356084617001053870499079884384019820875585843129082894687740533946763756846924952825251383026364635539377880784234770789463152435704464616uwb;
+constexpr unsigned _BitInt(16319) b
+  = 201297445670930272757410050706289982624491660465170269036956837558544487568343601665131324050787963146027819983307053684073674820301566372069948774255822501245951067183970281991127738921057274780296261225407186724668122441725219688250048125966841905344001692912450198866643346323472031729064718300479187798706672968308261087690363842676049695093363984215164826771706973231448072373451307677338614156650375912499484900858673561833191015411671765861950517217665521946675304171422505561338956884416634006130147812768253943589754589674751478065890135065694159454968411311007381804262384649506292683797740132856270496215291920477368030890927518915139926054190865025882333320572966385672903060939108787420935008738642771747194101836407658215805878319677167083639762255359053179081377804972674444167601766477058340469960108202124942440832222540377006995297899910334489799121285077103435004667868393510710457882392002319712888793520623296276540834303175498324831486965141663548707027165707832577079609274275294762496264442399518122931004650389638079392976399014560864084596772922490782305816240341600831984373745397286779063062899608736010837062018829992435540254299570916198129450184325033096743494275130577671607546912273653322418451757971067132955930636352026553442736954388106857124510033514694600855827527404147232640946659622051407638206917730907808664237279907113237485127665225378509765905986583979798452155950297827505371406035885922153636089924339222895422334581026342592757576904407543080095938552381372273517984464869811516727665137169980276022157512567193704293971295494591202772023271187887430809984834704361926253983400578503914789096681852906353804239554046072177109586360503737308384693363708450394319455433267005792709190528859753641414223310872888744622858586371766212551416982644129035226780333179891701158800815162840975593001335077994718953264573368151724211559955251687816351311439911364166420167449490823212046898398613762667954855321719238269424865029134002869639403094845074841
29423576156798044985198780159055788525538310878089397895175129162099671894337526801235280427428321205321530735108239848594278720839317921782831352363541199919557577597546876704462612904924694431903072332864341465745291866718067601041404212430941956177407763481845568339170224196193106463030409080073136605433869775860974939991008596874978506245689726966715206639438259724689301019692258116991317695012205036157177039536905494005833948384397446492918129185274359806145454148241131925838562069991934872329314452016900728948186477387223161994145551216156032211038319475270853818660079065895119923373317496777184177315345923787700803986965175033224375435249224949151191006574511519055220741174631165879299688118138728380219550143006894817522270338472413899079751917314505754802052988622174392135207139715960212346858882422543222621408433817817181595201086403368301839080592455115463829425708132345811270911456928961301265223101989524481521721969838980208647528038509328501705428950749820080720418776718084142086501267418284241370398868561282277848391673847937247873117719906103441015578245152673184719538896073697272475250261227685660058944107087333786104761624391816175414338999215260190162551489343436332492645887029551964578826432156700872459216605843463884228343167159924792752429816064841479438134662749621639560203443871326810129872763539114284811330805213188716333471069710270583945841626338361700846410927750916663908367683188084193258384935122236639934335284160522042065088923421928660724095726039642836343542211473282392554371973074108770797447448654428325845253304889062021031599531436606775029315849674756213988932349651640552571880780461452187094400408403309806507698230071584809861634596000425300485805174853406774961321055086995665513868382285048348264250174388793184093524675621762558537763747237314473883173686633576273836946507237880619627632543093619281096675643877749217588495383292078713230253993525326209732859301842016440189010027733234997657748351253359664018894197346327201303258
090754079801393874104215986193719394144148559622409051961205332355846077533183278890738832391535561074612724819789952480872328880408266970201766239451001690274739141595541572957753788951050043026811943691163688663710637928472363177936029259448725818579129920714382357882142208643606823754520733994646572586821541644398149238544337745998203264678454665487925173493921777764033537269522992103115842823750405588538846833724101543165897489915300004787110814394934465518176677482202804123781727309993329004830726928892557850582806559007396866888620985629055058474721708813614135721948922060211334334572381348586196886746758900465692833094336637178459072850215866106799456460266354416689624866015411034238864944123721969568161372557215009049887790769403406590484422511214573790761107726077762451440539965975955360773797196902546431341823788555069435728043202455375041817472821677779625286961992491729576392881089462100341878uwb
+    / 42uwb;
+constexpr unsigned _BitInt(16319) c
+  = 262772323820284473459352821003644139764422411204918486837801083183457749203973664525969244213356053746866592786123128016048873703760763864445114503188955456955707845772855989066509019294443022960334121996325949983760641247142204149139232137794443068332773889957035522194305750809271111954170469111770190707138471288264478300964320039624034636565586004311152732488771778750633811114778880597988580160502134204758516204130167934455175392270199736826994479523223887488609819475934329857306847460881835832251843478251106973279732948262052275644257699505034234355971659692999756814069746199415385028271937427604552452694831343609400239339863442175771021148001342538795308900643625203684755357388547418062925426243864734612746209878913555419878736641570225221679085911646547875018545464577373415267635167050327052540461729262689689973023792615829332644754020631915483439822012304455046590388687863476677106582400888258695751882270133355592985798459486903168566116933869906917828218475354926392234272233607129940335769903981971600517858890331250342237329544510764256814566282019040777844540893801961789123268871488227791986576892380104923938791704866048044372027912868520359825841599785417114170807870223388931011161719748522720320811145703270983059278809336716442271249901612983418413206535882717985866477493463706170671753161673938844141119218776382013036180674790251674465269642307327902615665909933158872905512486123491504175169187008138763888621316225940379555090163930685146452571795273177151730190907365145536386080045768561881185234343837026482568190685463450476530687199101655731545213024055527892355543331123801646920740920170836024409173000942382114507982743057738905942428815972332215822161005162124025696815718888433218512843696138793197099063690985358041680653942137749706271250646655360784441505334367960884910877260518796488043060864898940042147097262156826895049510698891917558183311555325743705729285921033441413668905528160312669220288936162529994523234178690669415796673063471613572
54079241809644500681547267163742601555111699376923690500014172294337681007418735910341792131377741308586228268385825579773985382339854821729670313925456724869607910114957040810377671394779834675225181536565444830551924417794139736686594557660483813045525089850285373756403594900392226296617656189774567019900237644329891280192776067340109751100025818473155267503490628146429306493520953677660612094758307190480072039980575323428994009982415676875786338343681850769724258724712947129844865182522700509869810541147515988955709784790248266593581532414091983670376426534289079098742549505127694160521110700035496658932724007621759500091227595477831200325335242614162624218010753586306794482732500765136299548052958345872488446969032973871418565484570096440609125401439516349061951073344772753817168731533186740449206533184858409824331269879276752302819075938894191764603880669059804914705202932220114574769307945938446355744093058483466098741029671133305308451601510124097336668044362140994842230895354232007936193610666215236351383330719496758577095102466235782700820575938453736277546445932135116947993404356975890051717304128693125699951445791328843668647245439797933691355015781238038148597339831348341049751957204680813855138272253234219030458164179195368888878989362640509486440530112337687890165646824152338885218611665567933423652236621168833497594762922586523151554244316284075364923316223457798336995440229801638249044555841786652868778333857626201712694823945146208412572567947403078655159448178467488335673853886982143607843369103504905837049147006413324087204923968347162406372146304110247436210704329838033967549296094708909042352807942165389054391217609084676765464997803900415653278041220586434133698802658726748950122980183615091029049242919298428066745937148593879994539254240070220900694662200741796632687373414952817000938093930497338259168439649970963774406833411431113922194082765390241161715106142638681072839764035976877223152727829248475639970029777900589595383604989099084081251
802305001465530685587689066710306032849298712531664047230963409638484129598076118133347670029704549206295184751171783054889490211218045322681317529569999778899567668829982207035948032411418382057247326141072264502161892285323531743728756335449414720326329614400327415751813608405440522389476951223717685562226240221655814783640319063683104993438443847695342093582440489676230855515734722099028773790309518629302472390856918840009781940193713784596688294176313226823907143925396584175086934911386332502448539920116580493698106175151294846382915609543814748269873022997601962804377576934064368480060369871027634248583037300264157126892396407333810094970488786868749240778818119777818968060847669660858189435863648299750130319878885182309492320093569553086644726783916663680961005542160003603514646606310756647257217877792590840884087816175376150368236330721380807047180835128240716072193739218623529235235449408073833764uwb
+    >> 171;
+static_assert (a == 10403542085759133691203342137159028259461894955438331210801665800234672962180907518788681055608925051917190662144445433835595489501570265148539013616306519011285861864113638610998587283343748668959870044400340187367869274012726759732348878437230149364081610941398977036594823591463255731808309715219781556045092524781748798096243155527048746090614751043610821560662864236720952557147844731917800712343725546175449104075627616077829385396994452199410766816558008090921987787438967590914249326913953731957899714113110918563882837045448642562338486517475793442626878243475178869958697311252767202125088496235928130685145568023992654921893286093433280015789621699281948053130963767216950901322064090115301029360256916486236324346980555378227825665231041206505932451054100655891377307183657244188881780309602697733965633806548575793711470844175477213922050584861112947113328821094578714380110663964395764964375008963336325761662071121014767368961020824065775639039724097407257977371623360602667242992626829630277589757195892131842788347638167481783472539736593840645020141666099662762763659119482517961624374850646183224354529879255694192077493038699570091875155722960929748259201284457182471153956119946261637096783796538046622701136421992223281799392319105563566498086105138357131671079600937329401554014025354725298453142629483842874038291307431207948198280389112036878226218928165845324560374437065373122000792930554833265840423016148390974876479752688661617125284208020330726704780298561478529279775092768807953202013307072084373090254748865483609183726295735240865516817482898554990450888147008484162850924835809973020042760450232447237837196378388135483084055028396408249214425019231777824054821326738728924661602608905318664721047678808734917923923121217803736039325080641571812479260200189082647677675380297657174607422686495562781202604884582727406463545308236800937463493199421020490845203940782000643133713413924683795888948837880891750307666957538835987772265423203470320
35414574284186979547279918615463138528857373012909422873337985543251481703142588458496225428399958685025040640668104719182054435234204666795014637429636465589191513531008252999490487456244155152708131163812176636766180791464709291728778401761311579569137381404108683872031696801034926377670277500977166273712460099270941863047012857961274813880798361769748750007950283953226647831778869968028339523030866861316819185255723412246929027776300025653153107176228096059741657645212457588500636349217131455102636923732511984414715497258261712763724042132378125212581931326849887204868306878922887098308630658611179300717869357056255497576238443123666448936047810969252018335604211279458975692203610202538088824608276391191562203757073696967785062170828190965207077645042211077228565992138341353272513710762151477095836158124047196854299729444640258484491817995688121997840577278571340204647190310340487135232427710908989164055898392215935947996406899492353849050050179882511623818838126733061802609316029020559666979598183484235227101106393963262392662996011392632602995214345235464061406104943893266546792844311323221449810177452317812902015501722880222190146954807223407333468105246132783226895592370110973287436098400249313002547075386196743249310239576627971781511313576381088621649177026572416088768888751528229344728712103954532377792828687671126704913554776077365584595062267632797228062234548625308462612124788589175745830897425946644128496776582456147835142105192308184259479161624968276859479641318474200750454038214177355609892946123384279797856646673424043603226912290805743831431941048957524484573932069376479868739894227531433336183856035827858376698321012608104602023146970583654461125207518773311256077812556022556580334995315188080060189038264821637573707701574468414213230386449408323768030689813403357075840113173581923773028020942423195412197015419557507072887665318792842391889421161709356709485792607969400395014296276348072890732240933895427749371183436342303230929686208137192306115
0409402403668284066920335645815769603890931600189625120845560771835017710222988445713995722670892970377791415975424998772977793133120924108755323766471601770964843725827421304729349535336212587039242582503381150992918495310760366078232133800372960134691178665615437284018675587037783965019497398984583781291648236566997741116811234934754542646608973862932050896956712947890625239848619289180051302224085308716715734850608995498117691600907423641124622236235949675965926735290984369155077055324647942699875972019355174794849379024365265476001505043957802797349447782453767742359446787304217770032967959809288342189111153359045680464231699344620995535326063943372491385550455978845273436611631962336651743357242055102619760848116407351488643448217122169718350824452317641509534606434395208225350712889271762643740106849245478364448395994915755050465135468245061369394410933866013068008514339549345174558881983866497072827311379042433413uwb);
+static_assert (b == 47927963254983398275573821596735710148688490586945302151656389894891544659129428967888410488282848368101861900787393734303255909595611040969035422441862500296655015996183400474078033076442208281022919339382663505873362486125052306726201934754009977462857545931535761634915082457969531364063028166780758999692064992454347878021515200637154689307943805765515434945644517436059064850821739923175860513488184741071305928775874657579331194145636134729035837432777505225398881945576787038414037353432531906221463764944822367521370140398750351920450032158498609394040097931192233762919615392739593496142319079251492975289355219161278102593077980694080934774807348815686269838231658663255453109747406854147841668747295898035046214722954204337096637695161230258009467203656917423590804239279208200992286134875490081064276216238601176771626719652470715951261404740555830904552686923119865477301873427026359632829140952933264973522266815070542033531976946547220197973086938491321120720753739960137399906970065546372022920105333218600697858250092770971284041999765371634305856374505354948168051485795619245710565177475544471205491166573508574008824290197617246557203404659741951935583377220245975415117684554899445620844502922298410099631370945492174513316818179053941295889751044787346934407150836832047822816077953368388724034918957631287532906408983549478253389828549312675591697063148899645182358568234280904393370464356625596517001437115695750865735696271243546529127281196748236370851643906557876213318702947945779409043920207097980173253604088090541910037502992188977212850308451093143517148397901877959716663055881990934822377000137739027330737305203072941381961798582398137407044371548508882948736515168678665314156055539763283978378697347560390812910312112592558243537758659944336321765948248602151244471507899974214561619241705438327522143175018570171179348707944798029574180941726592337226502723788420039623849392735910288582594856812800635227346505171247207005920245031905445152
23883210597020030815137180190010710761614323584711553699597828116523308375030752880874260556554000294114387482933620314650175025771392522447314485551886138769369610366952361799423237511161120110145929743974864738826745920081301367926634932873238343191479150224275280335181781391801985516720046712644395959621209541223001293778518062136890474049665922613930058497554039694096818913871363021262147545775742140789927383858341942185009413548927144246178186761296784028125996493895191939393844819317125199657635712365445792693917146881125940044399377910276665272750289560960050247218922683536623490495015689314267469837499232662899360796648520881143806420279769815327484583148797416950239660597980727433509803483610923642782885271125804814178605477832099410064366302955690257083789836787084476679283005279617175049318979990526749252114862510291100335341385194567046476449143659119485495379155979872340339454317225193159740823078324119348862643330839162267076659485471478249411437740316309929864035892814304933433042075734319544405063671020057469142587752686256630569446154270773303123266644310343098947201226826948742747356208023160113154824101829919061653358830317568120181339140908613193890237908395283372036068891294364879201401673702848709244388608738302966480144248443781959129325514267807798197575253533685580508253035624199895286534255077811935683991318836734478888286955521122936540730883397758082343244366276595439621649464503967597230400759067665061520222648151580936746496228695724301211648433792538267641839533248294367510050350781522036755231684311612094630344917721029963155548783110005007523697961096851197456154684465765235460083250390607755209709633679092165333430572216620597071007159901145205151094285815547734715517822239708324124060734998967979492471972630559110535755806855520022267779909943466318515177913646303305517544436565779484987263628066814197055367403242685975398962828035527997260805545733026959584284172696716603061738533813438140240482793627380394701988393657062861641475
55864933364363287875097138128425573909904433183795098670203800533548856219174579901097084123411402160448390274656216062207733804522678116007830485911118338137291415500040244636646228465275546613185451215477214924093897408659253897872331630294361379429268082112519489979283826532913282908147824847781517964779380824918394924322420104717839012960422523766744397106063463998218416521947089619846125464833145312281971994057275917591591279145274837283273569411904875883590818927011083766111368623876288661469697856984023924541117354584710728162060928747544449729071086406072820826707352705098469570212430005031769870770984490147544922541878582516496026055634218534739829767044431114272772863484628968800592047985977005687260574374332608765746965647976405949709304033414442630581488362251756922883517287565772653346189666094175256518980878632057889091042584644510374477219106080358138511257658994752983022904583136418485544787844335722425uwb);
+static_assert (c == 87791074236971898375693906050841211797859249085219857442103255912236679245196526258183737200195092459037070061326325721733862550642013557351987594406882625147809841117910427395663017848973163739949221920509632722884340603422885119715696976800265237608112255164300526997540446828188926798191319956002162809660627367323847324113616574443996958838650961034287596228138677355472599785293194368898640136872193905676042833180111007999534515209684412648660318139544886280584751143487292754141431589178747095995562471836958538385523210889734458760880425568104799106614493746619996750828111038144533532941948866129614927372632772715518890386107307604784595692561493219983504140230663638149893111097283117129890229962472801825879214491853539228859378776045004007387742400087099452897916050111777396577201816014535122598820045644624158286527149042897272352105372777213898166876433661452000011777121121975156955788874837929887554354013884561458544888805370883603979946432160148284956624602056864485481132298410979556139584409013754162565328645118522986963276115172332413247990709194912864261597887926317238337174515380434373640171852377431824028356700876831256026403188874515966503235287201281881985472704629716121576034879585267050059555804094416707718493880164380358501945858703270134092362367309142177220256553194722311416667902879556857136362746535655774542758385903508061686391652646764704409303516129925189046646477158058659410384237683768466978175431224095175917172922387459403459005304585514685192457678645317421021786288543765245133679832091869745757657072739737753868400812388038803350957408363865272082673118089735224503911890557398289369373596931672405246606249458569070420412573471920869840096409845093226225038902560463247683416326435464557790353760020616911131212342731649379841717742423277699156887425640494541631583181218185827647752680912924708890884455751080226880692716971982831514696454008705070066637993306617027027474432542204783110564072207496481031234354733815835208
73055218734115120978678440455896458852497569989966723235965608706826593607128847630137618509151255834742636438796285569873869967729341871213521030011427372987388572674228441333458857512226049283243347521457804912008781036966786374760325341492033297848368160903260470019067535330611645909560888797451907088389764190403007998305168673029446934012245138838180596098559442570696150011296218144186387024615885302290744905340666905921743970013779813332493771192048043297281423248489056841417013807670308191095732464221451376997270745468459702152796818222745730565721202663103043121160101459833683249558684459108862536961994308535039970814557821268170388745941980378838969910592895670554291811739768771829941043857819603751246957962236091154755893962038363120690483862423001038948620681611253867149296463690417828034303547922792249098522404751428960713875050463906134150846089705714470303918299012691600285355859412924847760497076978432722446602521825089097454542343354847347396045079587757210635356999268706465425788833311190517623061860675230010994127196459030322166751571656642321690787471906609473496034789643710478162255664092991251446787887635351852933826820719781733754578161073401362668109819113924252291125741395271474342305574536974918273938513597418963787308994593434191890687730302495910686072338836413159162281072263542758257699588089838677469397467899348065293581751035844389848387161847435160327276066603683131703246410409122832793376751512688745195564021646069245992363396468100513536211651450610523315211697125774638845313243973083536417692075962486918844667432144353019722959653638632948294049984266861870151255315023346724671430499257993958049088066160870545025276597975154855537620265690354041028742742755074396597631965320380782500944568424053420038357524917125099241334990032189526465838192972110970861380060986802081948044345526414857158569939005895236672306344348212805851269920711043891306875873016330601673973249327072503571873518366750575070091051288590764788630190966776854031578
939382690709022667421734442841784680826494146620589862829612704279521637740421694195051400095278084716974624615208392585573200182664157066813849346058321763156523965698465901396025152159642193562900743812715885811057212579017860488539960334406702752688595217360219470968738009774067915037157027492209108801337707562571266897723911401203374308490793226200974353356835311756384895692909802720948968131504604855466961987314701846460342135201914356152591684810924688350929140120187693089324255924634578576427004426339299493833434502951593902551451002292839635000904253250021884625417628756439862964325562720709528784964868687330847894476999577326582332350213148861205413652337499383416531545707272907994755638339630221576707954964236210962693804639714754668679841134928393081284209158098202683744650513918920168330598432362389777471870631039488408769354863001967531729415686631571754649uwb);
+#endif
--- gcc/tree-ssa-loop-niter.cc.jj	2023-10-04 16:28:04.329782348 +0200
+++ gcc/tree-ssa-loop-niter.cc	2023-10-05 11:36:54.982246797 +0200
@@ -3873,12 +3873,17 @@ do_warn_aggressive_loop_optimizations (c
     return;
 
   gimple *estmt = last_nondebug_stmt (e->src);
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_dec (i_bound, buf, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations))
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
+  unsigned len = i_bound.get_len ();
+  if (len > WIDE_INT_MAX_INL_ELTS)
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  else
+    p = buf;
+  print_dec (i_bound, p, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations))
 	     ? UNSIGNED : SIGNED);
   auto_diagnostic_group d;
   if (warning_at (gimple_location (stmt), OPT_Waggressive_loop_optimizations,
-		  "iteration %s invokes undefined behavior", buf))
+		  "iteration %s invokes undefined behavior", p))
     inform (gimple_location (estmt), "within this loop");
   loop->warned_aggressive_loop_optimizations = true;
 }
@@ -3915,6 +3920,9 @@ record_estimate (class loop *loop, tree
   else
     gcc_checking_assert (i_bound == wi::to_widest (bound));
 
+  if (wi::min_precision (i_bound, SIGNED) > bound_wide_int ().get_precision ())
+    return;
+
   /* If we have a guaranteed upper bound, record it in the appropriate
      list, unless this is an !is_exit bound (i.e. undefined behavior in
      at_stmt) in a loop with known constant number of iterations.  */
@@ -3925,7 +3933,7 @@ record_estimate (class loop *loop, tree
     {
       class nb_iter_bound *elt = ggc_alloc<nb_iter_bound> ();
 
-      elt->bound = i_bound;
+      elt->bound = bound_wide_int::from (i_bound, SIGNED);
       elt->stmt = at_stmt;
       elt->is_exit = is_exit;
       elt->next = loop->bounds;
@@ -4410,8 +4418,8 @@ infer_loop_bounds_from_undefined (class
 static int
 wide_int_cmp (const void *p1, const void *p2)
 {
-  const widest_int *d1 = (const widest_int *) p1;
-  const widest_int *d2 = (const widest_int *) p2;
+  const bound_wide_int *d1 = (const bound_wide_int *) p1;
+  const bound_wide_int *d2 = (const bound_wide_int *) p2;
   return wi::cmpu (*d1, *d2);
 }
 
@@ -4419,7 +4427,7 @@ wide_int_cmp (const void *p1, const void
    Lookup by binary search.  */
 
 static int
-bound_index (const vec<widest_int> &bounds, const widest_int &bound)
+bound_index (const vec<bound_wide_int> &bounds, const bound_wide_int &bound)
 {
   unsigned int end = bounds.length ();
   unsigned int begin = 0;
@@ -4428,7 +4436,7 @@ bound_index (const vec<widest_int> &boun
   while (begin != end)
     {
       unsigned int middle = (begin + end) / 2;
-      widest_int index = bounds[middle];
+      bound_wide_int index = bounds[middle];
 
       if (index == bound)
 	return middle;
@@ -4450,7 +4458,7 @@ static void
 discover_iteration_bound_by_body_walk (class loop *loop)
 {
   class nb_iter_bound *elt;
-  auto_vec<widest_int> bounds;
+  auto_vec<bound_wide_int> bounds;
   vec<vec<basic_block> > queues = vNULL;
   vec<basic_block> queue = vNULL;
   ptrdiff_t queue_index;
@@ -4459,7 +4467,7 @@ discover_iteration_bound_by_body_walk (c
   /* Discover what bounds may interest us.  */
   for (elt = loop->bounds; elt; elt = elt->next)
     {
-      widest_int bound = elt->bound;
+      bound_wide_int bound = elt->bound;
 
       /* Exit terminates loop at given iteration, while non-exits produce undefined
 	 effect on the next iteration.  */
@@ -4492,7 +4500,7 @@ discover_iteration_bound_by_body_walk (c
   hash_map<basic_block, ptrdiff_t> bb_bounds;
   for (elt = loop->bounds; elt; elt = elt->next)
     {
-      widest_int bound = elt->bound;
+      bound_wide_int bound = elt->bound;
       if (!elt->is_exit)
 	{
 	  bound += 1;
@@ -4601,7 +4609,8 @@ discover_iteration_bound_by_body_walk (c
 	  print_decu (bounds[latch_index], dump_file);
 	  fprintf (dump_file, "\n");
 	}
-      record_niter_bound (loop, bounds[latch_index], false, true);
+      record_niter_bound (loop, widest_int::from (bounds[latch_index],
+						  SIGNED), false, true);
     }
 
   queues.release ();
@@ -4704,7 +4713,8 @@ maybe_lower_iteration_bound (class loop
       if (dump_file && (dump_flags & TDF_DETAILS))
 	fprintf (dump_file, "Reducing loop iteration estimate by 1; "
 		 "undefined statement must be executed at the last iteration.\n");
-      record_niter_bound (loop, loop->nb_iterations_upper_bound - 1,
+      record_niter_bound (loop, widest_int::from (loop->nb_iterations_upper_bound,
+						  SIGNED) - 1,
 			  false, true);
     }
 
@@ -4860,10 +4870,13 @@ estimate_numbers_of_iterations (class lo
      not break code with undefined behavior by not recording smaller
      maximum number of iterations.  */
   if (loop->nb_iterations
-      && TREE_CODE (loop->nb_iterations) == INTEGER_CST)
+      && TREE_CODE (loop->nb_iterations) == INTEGER_CST
+      && (wi::min_precision (wi::to_widest (loop->nb_iterations), SIGNED)
+	  <= bound_wide_int ().get_precision ()))
     {
       loop->any_upper_bound = true;
-      loop->nb_iterations_upper_bound = wi::to_widest (loop->nb_iterations);
+      loop->nb_iterations_upper_bound
+        = bound_wide_int::from (wi::to_widest (loop->nb_iterations), SIGNED);
     }
 }
 
@@ -5114,7 +5127,7 @@ n_of_executions_at_most (gimple *stmt,
 			 class nb_iter_bound *niter_bound,
 			 tree niter)
 {
-  widest_int bound = niter_bound->bound;
+  widest_int bound = widest_int::from (niter_bound->bound, SIGNED);
   tree nit_type = TREE_TYPE (niter), e;
   enum tree_code cmp;
 
--- gcc/cfgloop.h.jj	2023-10-04 16:28:04.010786695 +0200
+++ gcc/cfgloop.h	2023-10-05 11:36:55.065245659 +0200
@@ -44,6 +44,9 @@ enum iv_extend_code
   IV_UNKNOWN_EXTEND
 };
 
+typedef generic_wide_int <fixed_wide_int_storage <WIDE_INT_MAX_INL_PRECISION> >
+  bound_wide_int;
+
 /* The structure describing a bound on number of iterations of a loop.  */
 
 class GTY ((chain_next ("%h.next"))) nb_iter_bound {
@@ -58,7 +61,7 @@ public:
         overflows (as MAX + 1 is sometimes produced as the estimate on number
 	of executions of STMT).
      b) it is consistent with the result of number_of_iterations_exit.  */
-  widest_int bound;
+  bound_wide_int bound;
 
   /* True if, after executing the statement BOUND + 1 times, we will
      leave the loop; that is, all the statements after it are executed at most
@@ -161,14 +164,14 @@ public:
 
   /* An integer guaranteed to be greater or equal to nb_iterations.  Only
      valid if any_upper_bound is true.  */
-  widest_int nb_iterations_upper_bound;
+  bound_wide_int nb_iterations_upper_bound;
 
-  widest_int nb_iterations_likely_upper_bound;
+  bound_wide_int nb_iterations_likely_upper_bound;
 
   /* An integer giving an estimate on nb_iterations.  Unlike
      nb_iterations_upper_bound, there is no guarantee that it is at least
      nb_iterations.  */
-  widest_int nb_iterations_estimate;
+  bound_wide_int nb_iterations_estimate;
 
   /* If > 0, an integer, where the user asserted that for any
      I in [ 0, nb_iterations ) and for any J in
--- gcc/tree.h.jj	2023-10-04 16:28:04.403781340 +0200
+++ gcc/tree.h	2023-10-05 11:36:54.793249388 +0200
@@ -6258,13 +6258,17 @@ namespace wi
   template <int N>
   struct int_traits <extended_tree <N> >
   {
-    static const enum precision_type precision_type = CONST_PRECISION;
+    static const enum precision_type precision_type
+      = N == ADDR_MAX_PRECISION ? CONST_PRECISION : WIDEST_CONST_PRECISION;
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
     static const unsigned int precision = N;
+    static const unsigned int inl_precision
+      = N == ADDR_MAX_PRECISION ? 0
+	     : N / WIDEST_INT_MAX_PRECISION * WIDE_INT_MAX_INL_PRECISION;
   };
 
-  typedef extended_tree <WIDE_INT_MAX_PRECISION> widest_extended_tree;
+  typedef extended_tree <WIDEST_INT_MAX_PRECISION> widest_extended_tree;
   typedef extended_tree <ADDR_MAX_PRECISION> offset_extended_tree;
 
   typedef const generic_wide_int <widest_extended_tree> tree_to_widest_ref;
@@ -6292,7 +6296,8 @@ namespace wi
   tree_to_poly_wide_ref to_poly_wide (const_tree);
 
   template <int N>
-  struct ints_for <generic_wide_int <extended_tree <N> >, CONST_PRECISION>
+  struct ints_for <generic_wide_int <extended_tree <N> >,
+		   int_traits <extended_tree <N> >::precision_type>
   {
     typedef generic_wide_int <extended_tree <N> > extended;
     static extended zero (const extended &);
@@ -6308,7 +6313,7 @@ namespace wi
 
 /* Used to convert a tree to a widest2_int like this:
    widest2_int foo = widest2_int_cst (some_tree).  */
-typedef generic_wide_int <wi::extended_tree <WIDE_INT_MAX_PRECISION * 2> >
+typedef generic_wide_int <wi::extended_tree <WIDEST_INT_MAX_PRECISION * 2> >
   widest2_int_cst;
 
 /* Refer to INTEGER_CST T as though it were a widest_int.
@@ -6444,7 +6449,7 @@ wi::extended_tree <N>::get_len () const
 {
   if (N == ADDR_MAX_PRECISION)
     return TREE_INT_CST_OFFSET_NUNITS (m_t);
-  else if (N >= WIDE_INT_MAX_PRECISION)
+  else if (N >= WIDEST_INT_MAX_PRECISION)
     return TREE_INT_CST_EXT_NUNITS (m_t);
   else
     /* This class is designed to be used for specific output precisions
@@ -6530,7 +6535,8 @@ wi::to_poly_wide (const_tree t)
 template <int N>
 inline generic_wide_int <wi::extended_tree <N> >
 wi::ints_for <generic_wide_int <wi::extended_tree <N> >,
-	      wi::CONST_PRECISION>::zero (const extended &x)
+	      wi::int_traits <wi::extended_tree <N> >::precision_type
+	     >::zero (const extended &x)
 {
   return build_zero_cst (TREE_TYPE (x.get_tree ()));
 }
--- gcc/cfgloop.cc.jj	2023-10-04 16:28:03.991786955 +0200
+++ gcc/cfgloop.cc	2023-10-05 11:36:55.157244398 +0200
@@ -1895,33 +1895,38 @@ void
 record_niter_bound (class loop *loop, const widest_int &i_bound,
 		    bool realistic, bool upper)
 {
+  if (wi::min_precision (i_bound, SIGNED) > bound_wide_int ().get_precision ())
+    return;
+
+  bound_wide_int bound = bound_wide_int::from (i_bound, SIGNED);
+
   /* Update the bounds only when there is no previous estimation, or when the
      current estimation is smaller.  */
   if (upper
       && (!loop->any_upper_bound
-	  || wi::ltu_p (i_bound, loop->nb_iterations_upper_bound)))
+	  || wi::ltu_p (bound, loop->nb_iterations_upper_bound)))
     {
       loop->any_upper_bound = true;
-      loop->nb_iterations_upper_bound = i_bound;
+      loop->nb_iterations_upper_bound = bound;
       if (!loop->any_likely_upper_bound)
 	{
 	  loop->any_likely_upper_bound = true;
-	  loop->nb_iterations_likely_upper_bound = i_bound;
+	  loop->nb_iterations_likely_upper_bound = bound;
 	}
     }
   if (realistic
       && (!loop->any_estimate
-	  || wi::ltu_p (i_bound, loop->nb_iterations_estimate)))
+	  || wi::ltu_p (bound, loop->nb_iterations_estimate)))
     {
       loop->any_estimate = true;
-      loop->nb_iterations_estimate = i_bound;
+      loop->nb_iterations_estimate = bound;
     }
   if (!realistic
       && (!loop->any_likely_upper_bound
-          || wi::ltu_p (i_bound, loop->nb_iterations_likely_upper_bound)))
+          || wi::ltu_p (bound, loop->nb_iterations_likely_upper_bound)))
     {
       loop->any_likely_upper_bound = true;
-      loop->nb_iterations_likely_upper_bound = i_bound;
+      loop->nb_iterations_likely_upper_bound = bound;
     }
 
   /* If an upper bound is smaller than the realistic estimate of the
@@ -2018,7 +2023,7 @@ get_estimated_loop_iterations (class loo
       return false;
     }
 
-  *nit = loop->nb_iterations_estimate;
+  *nit = widest_int::from (loop->nb_iterations_estimate, SIGNED);
   return true;
 }
 
@@ -2032,7 +2037,7 @@ get_max_loop_iterations (const class loo
   if (!loop->any_upper_bound)
     return false;
 
-  *nit = loop->nb_iterations_upper_bound;
+  *nit = widest_int::from (loop->nb_iterations_upper_bound, SIGNED);
   return true;
 }
 
@@ -2066,7 +2071,7 @@ get_likely_max_loop_iterations (class lo
   if (!loop->any_likely_upper_bound)
     return false;
 
-  *nit = loop->nb_iterations_likely_upper_bound;
+  *nit = widest_int::from (loop->nb_iterations_likely_upper_bound, SIGNED);
   return true;
 }
 
--- gcc/gimple-ssa-strength-reduction.cc.jj	2023-01-02 09:32:29.884176934 +0100
+++ gcc/gimple-ssa-strength-reduction.cc	2023-10-05 14:45:14.554340423 +0200
@@ -238,7 +238,7 @@ public:
   tree stride;
 
   /* The index constant i.  */
-  widest_int index;
+  offset_int index;
 
   /* The type of the candidate.  This is normally the type of base_expr,
      but casts may have occurred when combining feeding instructions.
@@ -333,7 +333,7 @@ class incr_info_d
 {
 public:
   /* The increment that relates a candidate to its basis.  */
-  widest_int incr;
+  offset_int incr;
 
   /* How many times the increment occurs in the candidate tree.  */
   unsigned count;
@@ -677,7 +677,7 @@ record_potential_basis (slsr_cand_t c, t
 
 static slsr_cand_t
 alloc_cand_and_find_basis (enum cand_kind kind, gimple *gs, tree base,
-			   const widest_int &index, tree stride, tree ctype,
+			   const offset_int &index, tree stride, tree ctype,
 			   tree stype, unsigned savings)
 {
   slsr_cand_t c = (slsr_cand_t) obstack_alloc (&cand_obstack,
@@ -893,7 +893,7 @@ slsr_process_phi (gphi *phi, bool speed)
    int (i * S).
    Otherwise, just return double int zero.  */
 
-static widest_int
+static offset_int
 backtrace_base_for_ref (tree *pbase)
 {
   tree base_in = *pbase;
@@ -922,7 +922,7 @@ backtrace_base_for_ref (tree *pbase)
 	{
 	  /* X = B + (1 * S), S is integer constant.  */
 	  *pbase = base_cand->base_expr;
-	  return wi::to_widest (base_cand->stride);
+	  return wi::to_offset (base_cand->stride);
 	}
       else if (base_cand->kind == CAND_ADD
 	       && TREE_CODE (base_cand->stride) == INTEGER_CST
@@ -966,13 +966,13 @@ backtrace_base_for_ref (tree *pbase)
     *PINDEX:   C1 + (C2 * C3) + C4 + (C5 * C3)  */
 
 static bool
-restructure_reference (tree *pbase, tree *poffset, widest_int *pindex,
+restructure_reference (tree *pbase, tree *poffset, offset_int *pindex,
 		       tree *ptype)
 {
   tree base = *pbase, offset = *poffset;
-  widest_int index = *pindex;
+  offset_int index = *pindex;
   tree mult_op0, t1, t2, type;
-  widest_int c1, c2, c3, c4, c5;
+  offset_int c1, c2, c3, c4, c5;
   offset_int mem_offset;
 
   if (!base
@@ -985,18 +985,18 @@ restructure_reference (tree *pbase, tree
     return false;
 
   t1 = TREE_OPERAND (base, 0);
-  c1 = widest_int::from (mem_offset, SIGNED);
+  c1 = offset_int::from (mem_offset, SIGNED);
   type = TREE_TYPE (TREE_OPERAND (base, 1));
 
   mult_op0 = TREE_OPERAND (offset, 0);
-  c3 = wi::to_widest (TREE_OPERAND (offset, 1));
+  c3 = wi::to_offset (TREE_OPERAND (offset, 1));
 
   if (TREE_CODE (mult_op0) == PLUS_EXPR)
 
     if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) == INTEGER_CST)
       {
 	t2 = TREE_OPERAND (mult_op0, 0);
-	c2 = wi::to_widest (TREE_OPERAND (mult_op0, 1));
+	c2 = wi::to_offset (TREE_OPERAND (mult_op0, 1));
       }
     else
       return false;
@@ -1006,7 +1006,7 @@ restructure_reference (tree *pbase, tree
     if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) == INTEGER_CST)
       {
 	t2 = TREE_OPERAND (mult_op0, 0);
-	c2 = -wi::to_widest (TREE_OPERAND (mult_op0, 1));
+	c2 = -wi::to_offset (TREE_OPERAND (mult_op0, 1));
       }
     else
       return false;
@@ -1057,7 +1057,7 @@ slsr_process_ref (gimple *gs)
   HOST_WIDE_INT cbitpos;
   if (reversep || !bitpos.is_constant (&cbitpos))
     return;
-  widest_int index = cbitpos;
+  offset_int index = cbitpos;
 
   if (!restructure_reference (&base, &offset, &index, &type))
     return;
@@ -1079,7 +1079,7 @@ create_mul_ssa_cand (gimple *gs, tree ba
 {
   tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE;
   tree stype = NULL_TREE;
-  widest_int index;
+  offset_int index;
   unsigned savings = 0;
   slsr_cand_t c;
   slsr_cand_t base_cand = base_cand_from_table (base_in);
@@ -1112,7 +1112,7 @@ create_mul_ssa_cand (gimple *gs, tree ba
 	     ============================
 	     X = B + ((i' * S) * Z)  */
 	  base = base_cand->base_expr;
-	  index = base_cand->index * wi::to_widest (base_cand->stride);
+	  index = base_cand->index * wi::to_offset (base_cand->stride);
 	  stride = stride_in;
 	  ctype = base_cand->cand_type;
 	  stype = TREE_TYPE (stride_in);
@@ -1149,7 +1149,7 @@ static slsr_cand_t
 create_mul_imm_cand (gimple *gs, tree base_in, tree stride_in, bool speed)
 {
   tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE;
-  widest_int index, temp;
+  offset_int index, temp;
   unsigned savings = 0;
   slsr_cand_t c;
   slsr_cand_t base_cand = base_cand_from_table (base_in);
@@ -1165,7 +1165,7 @@ create_mul_imm_cand (gimple *gs, tree ba
 	     X = Y * c
 	     ============================
 	     X = (B + i') * (S * c)  */
-	  temp = wi::to_widest (base_cand->stride) * wi::to_widest (stride_in);
+	  temp = wi::to_offset (base_cand->stride) * wi::to_offset (stride_in);
 	  if (wi::fits_to_tree_p (temp, TREE_TYPE (stride_in)))
 	    {
 	      base = base_cand->base_expr;
@@ -1200,7 +1200,7 @@ create_mul_imm_cand (gimple *gs, tree ba
 	     ===========================
 	     X = (B + S) * c  */
 	  base = base_cand->base_expr;
-	  index = wi::to_widest (base_cand->stride);
+	  index = wi::to_offset (base_cand->stride);
 	  stride = stride_in;
 	  ctype = base_cand->cand_type;
 	  if (has_single_use (base_in))
@@ -1281,7 +1281,7 @@ create_add_ssa_cand (gimple *gs, tree ba
 {
   tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE;
   tree stype = NULL_TREE;
-  widest_int index;
+  offset_int index;
   unsigned savings = 0;
   slsr_cand_t c;
   slsr_cand_t base_cand = base_cand_from_table (base_in);
@@ -1300,7 +1300,7 @@ create_add_ssa_cand (gimple *gs, tree ba
 	     ===========================
 	     X = Y + ((+/-1 * S) * B)  */
 	  base = base_in;
-	  index = wi::to_widest (addend_cand->stride);
+	  index = wi::to_offset (addend_cand->stride);
 	  if (subtract_p)
 	    index = -index;
 	  stride = addend_cand->base_expr;
@@ -1350,7 +1350,7 @@ create_add_ssa_cand (gimple *gs, tree ba
 		     ===========================
 		     Value:  X = Y + ((-1 * S) * B)  */
 		  base = base_in;
-		  index = wi::to_widest (subtrahend_cand->stride);
+		  index = wi::to_offset (subtrahend_cand->stride);
 		  index = -index;
 		  stride = subtrahend_cand->base_expr;
 		  ctype = TREE_TYPE (base_in);
@@ -1389,13 +1389,13 @@ create_add_ssa_cand (gimple *gs, tree ba
    about BASE_IN into the new candidate.  Return the new candidate.  */
 
 static slsr_cand_t
-create_add_imm_cand (gimple *gs, tree base_in, const widest_int &index_in,
+create_add_imm_cand (gimple *gs, tree base_in, const offset_int &index_in,
 		     bool speed)
 {
   enum cand_kind kind = CAND_ADD;
   tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE;
   tree stype = NULL_TREE;
-  widest_int index, multiple;
+  offset_int index, multiple;
   unsigned savings = 0;
   slsr_cand_t c;
   slsr_cand_t base_cand = base_cand_from_table (base_in);
@@ -1405,7 +1405,7 @@ create_add_imm_cand (gimple *gs, tree ba
       signop sign = TYPE_SIGN (TREE_TYPE (base_cand->stride));
 
       if (TREE_CODE (base_cand->stride) == INTEGER_CST
-	  && wi::multiple_of_p (index_in, wi::to_widest (base_cand->stride),
+	  && wi::multiple_of_p (index_in, wi::to_offset (base_cand->stride),
 				sign, &multiple))
 	{
 	  /* Y = (B + i') * S, S constant, c = kS for some integer k
@@ -1494,7 +1494,7 @@ slsr_process_add (gimple *gs, tree rhs1,
   else if (TREE_CODE (rhs2) == INTEGER_CST)
     {
       /* Record an interpretation for the add-immediate.  */
-      widest_int index = wi::to_widest (rhs2);
+      offset_int index = wi::to_offset (rhs2);
       if (subtract_p)
 	index = -index;
 
@@ -2079,7 +2079,7 @@ phi_dependent_cand_p (slsr_cand_t c)
 /* Calculate the increment required for candidate C relative to 
    its basis.  */
 
-static widest_int
+static offset_int
 cand_increment (slsr_cand_t c)
 {
   slsr_cand_t basis;
@@ -2102,10 +2102,10 @@ cand_increment (slsr_cand_t c)
    for this candidate, return the absolute value of that increment
    instead.  */
 
-static inline widest_int
+static inline offset_int
 cand_abs_increment (slsr_cand_t c)
 {
-  widest_int increment = cand_increment (c);
+  offset_int increment = cand_increment (c);
 
   if (!address_arithmetic_p && wi::neg_p (increment))
     increment = -increment;
@@ -2126,7 +2126,7 @@ cand_already_replaced (slsr_cand_t c)
    replace_conditional_candidate.  */
 
 static void
-replace_mult_candidate (slsr_cand_t c, tree basis_name, widest_int bump)
+replace_mult_candidate (slsr_cand_t c, tree basis_name, offset_int bump)
 {
   tree target_type = TREE_TYPE (gimple_assign_lhs (c->cand_stmt));
   enum tree_code cand_code = gimple_assign_rhs_code (c->cand_stmt);
@@ -2245,7 +2245,7 @@ replace_unconditional_candidate (slsr_ca
     return;
 
   basis = lookup_cand (c->basis);
-  widest_int bump = cand_increment (c) * wi::to_widest (c->stride);
+  offset_int bump = cand_increment (c) * wi::to_offset (c->stride);
 
   replace_mult_candidate (c, gimple_assign_lhs (basis->cand_stmt), bump);
 }
@@ -2255,7 +2255,7 @@ replace_unconditional_candidate (slsr_ca
    MAX_INCR_VEC_LEN increments have been found.  */
 
 static inline int
-incr_vec_index (const widest_int &increment)
+incr_vec_index (const offset_int &increment)
 {
   unsigned i;
   
@@ -2275,7 +2275,7 @@ incr_vec_index (const widest_int &increm
 
 static tree
 create_add_on_incoming_edge (slsr_cand_t c, tree basis_name,
-			     widest_int increment, edge e, location_t loc,
+			     offset_int increment, edge e, location_t loc,
 			     bool known_stride)
 {
   tree lhs, basis_type;
@@ -2299,7 +2299,7 @@ create_add_on_incoming_edge (slsr_cand_t
     {
       tree bump_tree;
       enum tree_code code = plus_code;
-      widest_int bump = increment * wi::to_widest (c->stride);
+      offset_int bump = increment * wi::to_offset (c->stride);
       if (wi::neg_p (bump) && !POINTER_TYPE_P (basis_type))
 	{
 	  code = MINUS_EXPR;
@@ -2427,7 +2427,7 @@ create_phi_basis_1 (slsr_cand_t c, gimpl
 	  feeding_def = gimple_assign_lhs (basis->cand_stmt);
 	else
 	  {
-	    widest_int incr = -basis->index;
+	    offset_int incr = -basis->index;
 	    feeding_def = create_add_on_incoming_edge (c, basis_name, incr,
 						       e, loc, known_stride);
 	  }
@@ -2444,7 +2444,7 @@ create_phi_basis_1 (slsr_cand_t c, gimpl
 	  else
 	    {
 	      slsr_cand_t arg_cand = base_cand_from_table (arg);
-	      widest_int diff = arg_cand->index - basis->index;
+	      offset_int diff = arg_cand->index - basis->index;
 	      feeding_def = create_add_on_incoming_edge (c, basis_name, diff,
 							 e, loc, known_stride);
 	    }
@@ -2525,7 +2525,7 @@ replace_conditional_candidate (slsr_cand
 			   basis_name, loc, KNOWN_STRIDE);
 
   /* Replace C with an add of the new basis phi and a constant.  */
-  widest_int bump = c->index * wi::to_widest (c->stride);
+  offset_int bump = c->index * wi::to_offset (c->stride);
 
   replace_mult_candidate (c, name, bump);
 }
@@ -2614,7 +2614,7 @@ replace_uncond_cands_and_profitable_phis
     {
       /* A multiply candidate with a stride of 1 is just an artifice
 	 of a copy or cast; there is no value in replacing it.  */
-      if (c->kind == CAND_MULT && wi::to_widest (c->stride) != 1)
+      if (c->kind == CAND_MULT && wi::to_offset (c->stride) != 1)
 	{
 	  /* A candidate dependent upon a phi will replace a multiply by 
 	     a constant with an add, and will insert at most one add for
@@ -2681,7 +2681,7 @@ count_candidates (slsr_cand_t c)
    candidates with the same increment, also record T_0 for subsequent use.  */
 
 static void
-record_increment (slsr_cand_t c, widest_int increment, bool is_phi_adjust)
+record_increment (slsr_cand_t c, offset_int increment, bool is_phi_adjust)
 {
   bool found = false;
   unsigned i;
@@ -2786,7 +2786,7 @@ record_phi_increments_1 (slsr_cand_t bas
 	record_phi_increments_1 (basis, arg_def);
       else
 	{
-	  widest_int diff;
+	  offset_int diff;
 
 	  if (operand_equal_p (arg, phi_cand->base_expr, 0))
 	    {
@@ -2856,7 +2856,7 @@ record_increments (slsr_cand_t c)
 /* Recursive helper function for phi_incr_cost.  */
 
 static int
-phi_incr_cost_1 (slsr_cand_t c, const widest_int &incr, gimple *phi,
+phi_incr_cost_1 (slsr_cand_t c, const offset_int &incr, gimple *phi,
 		 int *savings)
 {
   unsigned i;
@@ -2883,7 +2883,7 @@ phi_incr_cost_1 (slsr_cand_t c, const wi
 	}
       else
 	{
-	  widest_int diff;
+	  offset_int diff;
 	  slsr_cand_t arg_cand;
 
 	  /* When the PHI argument is just a pass-through to the base
@@ -2925,7 +2925,7 @@ phi_incr_cost_1 (slsr_cand_t c, const wi
    uses.  */
 
 static int
-phi_incr_cost (slsr_cand_t c, const widest_int &incr, gimple *phi,
+phi_incr_cost (slsr_cand_t c, const offset_int &incr, gimple *phi,
 	       int *savings)
 {
   int retval = phi_incr_cost_1 (c, incr, phi, savings);
@@ -2981,10 +2981,10 @@ optimize_cands_for_speed_p (slsr_cand_t
 
 static int
 lowest_cost_path (int cost_in, int repl_savings, slsr_cand_t c,
-		  const widest_int &incr, bool count_phis)
+		  const offset_int &incr, bool count_phis)
 {
   int local_cost, sib_cost, savings = 0;
-  widest_int cand_incr = cand_abs_increment (c);
+  offset_int cand_incr = cand_abs_increment (c);
 
   if (cand_already_replaced (c))
     local_cost = cost_in;
@@ -3027,11 +3027,11 @@ lowest_cost_path (int cost_in, int repl_
    would go dead.  */
 
 static int
-total_savings (int repl_savings, slsr_cand_t c, const widest_int &incr,
+total_savings (int repl_savings, slsr_cand_t c, const offset_int &incr,
 	       bool count_phis)
 {
   int savings = 0;
-  widest_int cand_incr = cand_abs_increment (c);
+  offset_int cand_incr = cand_abs_increment (c);
 
   if (incr == cand_incr && !cand_already_replaced (c))
     savings += repl_savings + c->dead_savings;
@@ -3239,7 +3239,7 @@ ncd_for_two_cands (basic_block bb1, basi
    candidates, return the earliest candidate in the block in *WHERE.  */
 
 static basic_block
-ncd_with_phi (slsr_cand_t c, const widest_int &incr, gphi *phi,
+ncd_with_phi (slsr_cand_t c, const offset_int &incr, gphi *phi,
 	      basic_block ncd, slsr_cand_t *where)
 {
   unsigned i;
@@ -3255,7 +3255,7 @@ ncd_with_phi (slsr_cand_t c, const wides
 	ncd = ncd_with_phi (c, incr, as_a <gphi *> (arg_def), ncd, where);
       else 
 	{
-	  widest_int diff;
+	  offset_int diff;
 
 	  if (operand_equal_p (arg, phi_cand->base_expr, 0))
 	    diff = -basis->index;
@@ -3282,7 +3282,7 @@ ncd_with_phi (slsr_cand_t c, const wides
    return the earliest candidate in the block in *WHERE.  */
 
 static basic_block
-ncd_of_cand_and_phis (slsr_cand_t c, const widest_int &incr, slsr_cand_t *where)
+ncd_of_cand_and_phis (slsr_cand_t c, const offset_int &incr, slsr_cand_t *where)
 {
   basic_block ncd = NULL;
 
@@ -3308,7 +3308,7 @@ ncd_of_cand_and_phis (slsr_cand_t c, con
    *WHERE.  */
 
 static basic_block
-nearest_common_dominator_for_cands (slsr_cand_t c, const widest_int &incr,
+nearest_common_dominator_for_cands (slsr_cand_t c, const offset_int &incr,
 				    slsr_cand_t *where)
 {
   basic_block sib_ncd = NULL, dep_ncd = NULL, this_ncd = NULL, ncd;
@@ -3385,7 +3385,7 @@ insert_initializers (slsr_cand_t c)
       gassign *init_stmt;
       gassign *cast_stmt = NULL;
       tree new_name, incr_tree, init_stride;
-      widest_int incr = incr_vec[i].incr;
+      offset_int incr = incr_vec[i].incr;
 
       if (!profitable_increment_p (i)
 	  || incr == 1
@@ -3550,7 +3550,7 @@ all_phi_incrs_profitable_1 (slsr_cand_t
       else
 	{
 	  int j;
-	  widest_int increment;
+	  offset_int increment;
 
 	  if (operand_equal_p (arg, phi_cand->base_expr, 0))
 	    increment = -basis->index;
@@ -3681,7 +3681,7 @@ replace_one_candidate (slsr_cand_t c, un
   tree orig_rhs1, orig_rhs2;
   tree rhs2;
   enum tree_code orig_code, repl_code;
-  widest_int cand_incr;
+  offset_int cand_incr;
 
   orig_code = gimple_assign_rhs_code (c->cand_stmt);
   orig_rhs1 = gimple_assign_rhs1 (c->cand_stmt);
@@ -3839,7 +3839,7 @@ replace_profitable_candidates (slsr_cand
 {
   if (!cand_already_replaced (c))
     {
-      widest_int increment = cand_abs_increment (c);
+      offset_int increment = cand_abs_increment (c);
       enum tree_code orig_code = gimple_assign_rhs_code (c->cand_stmt);
       int i;
 
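For reference, the candidate algebra this pass manipulates is unaffected by the widest_int to offset_int switch as long as the values fit in the narrower type; it can be sketched with plain 64-bit arithmetic (the struct and function names below are illustrative stand-ins, not GCC's types):

```cpp
#include <cstdint>

/* Miniature of an SLSR candidate: X = (base + index) * stride.  */
struct cand { int64_t base, index, stride; };

/* Given Y = (B + i') * S and X = Y * c, fold to X = (B + i') * (S * c),
   mirroring the CAND_MULT case handled in create_mul_imm_cand.  */
inline cand
fold_mul_imm (const cand &y, int64_t c)
{
  return cand { y.base, y.index, y.stride * c };
}

inline int64_t
eval (const cand &c)
{
  return (c.base + c.index) * c.stride;
}
```

The folded candidate evaluates to exactly c times the original, which is the identity the pass depends on.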
--- gcc/real.cc.jj	2023-10-04 16:28:04.263783248 +0200
+++ gcc/real.cc	2023-10-05 11:36:54.902247893 +0200
@@ -1477,7 +1477,7 @@ real_to_integer (const REAL_VALUE_TYPE *
 wide_int
 real_to_integer (const REAL_VALUE_TYPE *r, bool *fail, int precision)
 {
-  HOST_WIDE_INT val[2 * WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT valb[WIDE_INT_MAX_INL_ELTS], *val;
   int exp;
   int words, w;
   wide_int result;
@@ -1516,7 +1516,11 @@ real_to_integer (const REAL_VALUE_TYPE *
 	 is the smallest HWI-multiple that has at least PRECISION bits.
 	 This ensures that the top bit of the significand is in the
 	 top bit of the wide_int.  */
-      words = (precision + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT;
+      words = ((precision + HOST_BITS_PER_WIDE_INT - 1)
+	       / HOST_BITS_PER_WIDE_INT);
+      val = valb;
+      if (UNLIKELY (words > WIDE_INT_MAX_INL_ELTS))
+	val = XALLOCAVEC (HOST_WIDE_INT, words);
       w = words * HOST_BITS_PER_WIDE_INT;
 
 #if (HOST_BITS_PER_WIDE_INT == HOST_BITS_PER_LONG)
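The hunk above is one instance of the inline-buffer-with-heap-fallback idiom the series uses throughout: precisions that fit the inline element count stay on the stack, larger ones spill to allocated storage. A portable sketch of the pattern, where MAX_INL_ELTS and std::vector stand in for WIDE_INT_MAX_INL_ELTS and XALLOCAVEC:

```cpp
#include <cstdint>
#include <vector>

constexpr unsigned MAX_INL_ELTS = 4;   /* stand-in for WIDE_INT_MAX_INL_ELTS */

/* Number of 64-bit elements needed for PRECISION bits (round up).  */
unsigned
elts_for_precision (unsigned precision, unsigned bits_per_elt)
{
  return (precision + bits_per_elt - 1) / bits_per_elt;
}

/* Fill WORDS elements with 0..WORDS-1 and sum them, using the inline
   buffer when it fits and heap storage in the (rare) large case.  */
uint64_t
sum_words (unsigned precision)
{
  uint64_t inline_buf[MAX_INL_ELTS];
  std::vector<uint64_t> heap_buf;
  unsigned words = elts_for_precision (precision, 64);
  uint64_t *val = inline_buf;
  if (words > MAX_INL_ELTS)
    {
      heap_buf.resize (words);   /* spill to the heap, like XALLOCAVEC */
      val = heap_buf.data ();
    }
  for (unsigned i = 0; i < words; i++)
    val[i] = i;
  uint64_t s = 0;
  for (unsigned i = 0; i < words; i++)
    s += val[i];
  return s;
}
```

The common small case pays nothing beyond the fixed array; only out-of-line precisions take the allocation.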
--- gcc/omp-general.cc.jj	2023-10-04 16:28:04.218783861 +0200
+++ gcc/omp-general.cc	2023-10-06 13:39:37.002609538 +0200
@@ -1986,13 +1986,17 @@ omp_get_context_selector (tree ctx, cons
   return NULL_TREE;
 }
 
+/* This needs to be a GC-friendly widest_int variant, and it is desirable
+   for its precision to be the same on all targets.  */
+typedef generic_wide_int <fixed_wide_int_storage <1024> > score_wide_int;
+
 /* Compute *SCORE for context selector CTX.  Return true if the score
    would be different depending on whether it is a declare simd clone or
    not.  DECLARE_SIMD should be true for the case when it would be
    a declare simd clone.  */
 
 static bool
-omp_context_compute_score (tree ctx, widest_int *score, bool declare_simd)
+omp_context_compute_score (tree ctx, score_wide_int *score, bool declare_simd)
 {
   tree construct = omp_get_context_selector (ctx, "construct", NULL);
   bool has_kind = omp_get_context_selector (ctx, "device", "kind");
@@ -2007,7 +2011,11 @@ omp_context_compute_score (tree ctx, wid
 	  if (TREE_PURPOSE (t3)
 	      && strcmp (IDENTIFIER_POINTER (TREE_PURPOSE (t3)), " score") == 0
 	      && TREE_CODE (TREE_VALUE (t3)) == INTEGER_CST)
-	    *score += wi::to_widest (TREE_VALUE (t3));
+	    {
+	      tree t4 = TREE_VALUE (t3);
+	      *score += score_wide_int::from (wi::to_wide (t4),
+					      TYPE_SIGN (TREE_TYPE (t4)));
+	    }
   if (construct || has_kind || has_arch || has_isa)
     {
       int scores[12];
@@ -2028,16 +2036,16 @@ omp_context_compute_score (tree ctx, wid
 		  *score = -1;
 		  return ret;
 		}
-	      *score += wi::shifted_mask <widest_int> (scores[b + n], 1, false);
+	      *score += wi::shifted_mask <score_wide_int> (scores[b + n], 1, false);
 	    }
 	  if (has_kind)
-	    *score += wi::shifted_mask <widest_int> (scores[b + nconstructs],
+	    *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs],
 						     1, false);
 	  if (has_arch)
-	    *score += wi::shifted_mask <widest_int> (scores[b + nconstructs] + 1,
+	    *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs] + 1,
 						     1, false);
 	  if (has_isa)
-	    *score += wi::shifted_mask <widest_int> (scores[b + nconstructs] + 2,
+	    *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs] + 2,
 						     1, false);
 	}
       else /* FIXME: Implement this.  */
@@ -2051,9 +2059,9 @@ struct GTY(()) omp_declare_variant_entry
   /* NODE of the variant.  */
   cgraph_node *variant;
   /* Score if not in declare simd clone.  */
-  widest_int score;
+  score_wide_int score;
   /* Score if in declare simd clone.  */
-  widest_int score_in_declare_simd_clone;
+  score_wide_int score_in_declare_simd_clone;
   /* Context selector for the variant.  */
   tree ctx;
   /* True if the context selector is known to match already.  */
@@ -2214,12 +2222,12 @@ omp_resolve_late_declare_variant (tree a
 	    }
       }
 
-  widest_int max_score = -1;
+  score_wide_int max_score = -1;
   varentry2 = NULL;
   FOR_EACH_VEC_SAFE_ELT (entryp->variants, i, varentry1)
     if (matches[i])
       {
-	widest_int score
+	score_wide_int score
 	  = (cur_node->simdclone ? varentry1->score_in_declare_simd_clone
 	     : varentry1->score);
 	if (score > max_score)
@@ -2300,8 +2308,8 @@ omp_resolve_declare_variant (tree base)
 
   if (any_deferred)
     {
-      widest_int max_score1 = 0;
-      widest_int max_score2 = 0;
+      score_wide_int max_score1 = 0;
+      score_wide_int max_score2 = 0;
       bool first = true;
       unsigned int i;
       tree attr1, attr2;
@@ -2311,8 +2319,8 @@ omp_resolve_declare_variant (tree base)
       vec_alloc (entry.variants, variants.length ());
       FOR_EACH_VEC_ELT (variants, i, attr1)
 	{
-	  widest_int score1;
-	  widest_int score2;
+	  score_wide_int score1;
+	  score_wide_int score2;
 	  bool need_two;
 	  tree ctx = TREE_VALUE (TREE_VALUE (attr1));
 	  need_two = omp_context_compute_score (ctx, &score1, false);
@@ -2471,16 +2479,16 @@ omp_resolve_declare_variant (tree base)
 		variants[j] = NULL_TREE;
 	    }
       }
-  widest_int max_score1 = 0;
-  widest_int max_score2 = 0;
+  score_wide_int max_score1 = 0;
+  score_wide_int max_score2 = 0;
   bool first = true;
   FOR_EACH_VEC_ELT (variants, i, attr1)
     if (attr1)
       {
 	if (variant1)
 	  {
-	    widest_int score1;
-	    widest_int score2;
+	    score_wide_int score1;
+	    score_wide_int score2;
 	    bool need_two;
 	    tree ctx;
 	    if (first)
@@ -2552,7 +2560,7 @@ omp_lto_output_declare_variant_alt (lto_
       gcc_assert (nvar != LCC_NOT_FOUND);
       streamer_write_hwi_stream (ob->main_stream, nvar);
 
-      for (widest_int *w = &varentry->score; ;
+      for (score_wide_int *w = &varentry->score; ;
 	   w = &varentry->score_in_declare_simd_clone)
 	{
 	  unsigned len = w->get_len ();
@@ -2602,15 +2610,15 @@ omp_lto_input_declare_variant_alt (lto_i
       omp_declare_variant_entry varentry;
       varentry.variant
 	= dyn_cast<cgraph_node *> (nodes[streamer_read_hwi (ib)]);
-      for (widest_int *w = &varentry.score; ;
+      for (score_wide_int *w = &varentry.score; ;
 	   w = &varentry.score_in_declare_simd_clone)
 	{
 	  unsigned len2 = streamer_read_hwi (ib);
-	  HOST_WIDE_INT arr[WIDE_INT_MAX_ELTS];
-	  gcc_assert (len2 <= WIDE_INT_MAX_ELTS);
+	  HOST_WIDE_INT arr[WIDE_INT_MAX_HWIS (1024)];
+	  gcc_assert (len2 <= WIDE_INT_MAX_HWIS (1024));
 	  for (unsigned int j = 0; j < len2; j++)
 	    arr[j] = streamer_read_hwi (ib);
-	  *w = widest_int::from_array (arr, len2, true);
+	  *w = score_wide_int::from_array (arr, len2, true);
 	  if (w == &varentry.score_in_declare_simd_clone)
 	    break;
 	}
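The scoring scheme above relies on each matching selector contributing a single bit 2^p (what wi::shifted_mask with width 1 produces), so a higher-priority match always outweighs any combination of lower-priority ones; the 1024-bit score_wide_int is needed because the bit positions can exceed what a host word holds. A 64-bit sketch of the dominance property:

```cpp
#include <cstdint>

/* A width-1 shifted mask: a single bit at position POS, the 64-bit
   analogue of wi::shifted_mask <score_wide_int> (pos, 1, false).  */
inline uint64_t
shifted_mask_1 (unsigned pos)
{
  return uint64_t (1) << pos;
}

/* Sum of every lower-priority contribution: 2^0 + ... + 2^(pos-1).  */
inline uint64_t
all_lower (unsigned pos)
{
  return shifted_mask_1 (pos) - 1;
}
```

Since 2^p exceeds the sum of all smaller powers of two, comparing accumulated scores picks the variant with the highest-priority matching selector first.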
--- gcc/graphite-isl-ast-to-gimple.cc.jj	2023-10-04 16:28:04.164784597 +0200
+++ gcc/graphite-isl-ast-to-gimple.cc	2023-10-05 11:36:55.064245673 +0200
@@ -274,7 +274,7 @@ widest_int_from_isl_expr_int (__isl_keep
   isl_val *val = isl_ast_expr_get_val (expr);
   size_t n = isl_val_n_abs_num_chunks (val, sizeof (HOST_WIDE_INT));
   HOST_WIDE_INT *chunks = XALLOCAVEC (HOST_WIDE_INT, n);
-  if (n > WIDE_INT_MAX_ELTS
+  if (n > WIDEST_INT_MAX_ELTS
       || isl_val_get_abs_num_chunks (val, sizeof (HOST_WIDE_INT), chunks) == -1)
     {
       isl_val_free (val);
--- gcc/poly-int.h.jj	2023-10-04 16:28:04.242783534 +0200
+++ gcc/poly-int.h	2023-10-05 11:36:55.194243890 +0200
@@ -109,6 +109,21 @@ struct poly_coeff_traits<T, wi::CONST_PR
   struct init_cast { using type = const Arg &; };
 };
 
+template<typename T>
+struct poly_coeff_traits<T, wi::WIDEST_CONST_PRECISION>
+{
+  typedef WI_UNARY_RESULT (T) result;
+  typedef int int_type;
+  /* These types are always signed.  */
+  static const int signedness = 1;
+  static const int precision = wi::int_traits<T>::precision;
+  static const int inl_precision = wi::int_traits<T>::inl_precision;
+  static const int rank = precision * 2 / CHAR_BIT;
+
+  template<typename Arg>
+  struct init_cast { using type = const Arg &; };
+};
+
 /* Information about a pair of coefficient types.  */
 template<typename T1, typename T2>
 struct poly_coeff_pair_traits
--- gcc/gimple-ssa-warn-alloca.cc.jj	2023-10-04 16:28:04.126785115 +0200
+++ gcc/gimple-ssa-warn-alloca.cc	2023-10-05 11:36:55.126244823 +0200
@@ -310,7 +310,7 @@ pass_walloca::execute (function *fun)
 
 	  enum opt_code wcode
 	    = is_vla ? OPT_Wvla_larger_than_ : OPT_Walloca_larger_than_;
-	  char buff[WIDE_INT_MAX_PRECISION / 4 + 4];
+	  char buff[WIDE_INT_MAX_INL_PRECISION / 4 + 4];
 	  switch (t.type)
 	    {
 	    case ALLOCA_OK:
@@ -329,6 +329,7 @@ pass_walloca::execute (function *fun)
 				      "large")))
 		    && t.limit != 0)
 		  {
+		    gcc_assert (t.limit.get_len () < WIDE_INT_MAX_INL_ELTS);
 		    print_decu (t.limit, buff);
 		    inform (loc, "limit is %wu bytes, but argument "
 				 "may be as large as %s",
@@ -347,6 +348,7 @@ pass_walloca::execute (function *fun)
 				 : G_("argument to %<alloca%> is too large")))
 		    && t.limit != 0)
 		  {
+		    gcc_assert (t.limit.get_len () < WIDE_INT_MAX_INL_ELTS);
 		    print_decu (t.limit, buff);
 		    inform (loc, "limit is %wu bytes, but argument is %s",
 			    is_vla ? warn_vla_limit : adjusted_alloca_limit,
--- gcc/tree-affine.cc.jj	2023-09-28 12:05:50.975150358 +0200
+++ gcc/tree-affine.cc	2023-10-06 10:06:46.671895782 +0200
@@ -805,6 +805,7 @@ aff_combination_expand (aff_tree *comb A
 	      continue;
 	    }
 	  exp = XNEW (class name_expansion);
+	  ::new (static_cast<void *> (exp)) name_expansion ();
 	  exp->in_progress = 1;
 	  if (!*cache)
 	    *cache = new hash_map<tree, name_expansion *>;
@@ -860,6 +861,7 @@ tree_to_aff_combination_expand (tree exp
 bool
 free_name_expansion (tree const &, name_expansion **value, void *)
 {
+  (*value)->~name_expansion ();
   free (*value);
   return true;
 }
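The tree-affine change is the usual fix for objects with non-trivial constructors or destructors living in malloc-style storage (XNEW / obstacks): construct with placement new, destroy explicitly before freeing. A minimal standalone sketch of the idiom (the struct here is hypothetical, loosely modeled on name_expansion):

```cpp
#include <cstdlib>
#include <new>
#include <vector>

struct expansion
{
  std::vector<int> terms;   /* non-trivial member: needs real ctor/dtor */
  int in_progress = 0;
};

expansion *
alloc_expansion ()
{
  void *mem = malloc (sizeof (expansion));
  /* Run the constructor in the raw storage, as the patch does with
     ::new (static_cast<void *> (exp)) name_expansion ().  */
  return ::new (mem) expansion ();
}

void
free_expansion (expansion *e)
{
  e->~expansion ();   /* run the destructor explicitly before free */
  free (e);
}
```

Skipping either half leaks the vector's storage or leaves members uninitialized, which is exactly what the two hunks above guard against once the struct gains a wide_int-like member with a destructor.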
--- gcc/tree.cc.jj	2023-10-04 16:28:04.399781394 +0200
+++ gcc/tree.cc	2023-10-05 11:36:54.618251787 +0200
@@ -2676,13 +2676,13 @@ build_zero_cst (tree type)
 tree
 build_replicated_int_cst (tree type, unsigned int width, HOST_WIDE_INT value)
 {
-  int n = (TYPE_PRECISION (type) + HOST_BITS_PER_WIDE_INT - 1)
-    / HOST_BITS_PER_WIDE_INT;
+  int n = ((TYPE_PRECISION (type) + HOST_BITS_PER_WIDE_INT - 1)
+	   / HOST_BITS_PER_WIDE_INT);
   unsigned HOST_WIDE_INT low, mask;
-  HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT a[WIDE_INT_MAX_INL_ELTS];
   int i;
 
-  gcc_assert (n && n <= WIDE_INT_MAX_ELTS);
+  gcc_assert (n && n <= WIDE_INT_MAX_INL_ELTS);
 
   if (width == HOST_BITS_PER_WIDE_INT)
     low = value;
@@ -2696,8 +2696,8 @@ build_replicated_int_cst (tree type, uns
     a[i] = low;
 
   gcc_assert (TYPE_PRECISION (type) <= MAX_BITSIZE_MODE_ANY_INT);
-  return wide_int_to_tree
-    (type, wide_int::from_array (a, n, TYPE_PRECISION (type)));
+  return wide_int_to_tree (type, wide_int::from_array (a, n,
+						       TYPE_PRECISION (type)));
 }
 
 /* If floating-point type TYPE has an IEEE-style sign bit, return an
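The replication in build_replicated_int_cst fills each HOST_WIDE_INT with copies of a WIDTH-bit value. One standalone way to express the per-word step is the classic doubling trick (GCC's code builds `low` differently and then fills an array of HWIs, so this is only an illustration of the resulting bit pattern):

```cpp
#include <cstdint>

/* Replicate the low WIDTH bits of VALUE across a 64-bit word.
   WIDTH must be in [1, 63]; the pattern is truncated at the top
   when WIDTH does not divide 64.  */
uint64_t
replicate (unsigned width, uint64_t value)
{
  uint64_t mask = (uint64_t (1) << width) - 1;
  uint64_t low = value & mask;
  /* Each step doubles the number of already-filled bits.  */
  for (unsigned i = width; i < 64; i <<= 1)
    low |= low << i;
  return low;
}
```

With the patch, the assert bounds the element count by WIDE_INT_MAX_INL_ELTS instead of WIDE_INT_MAX_ELTS, since types reaching this function never exceed MAX_BITSIZE_MODE_ANY_INT.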
--- gcc/gengtype.cc.jj	2023-10-04 16:28:04.102785442 +0200
+++ gcc/gengtype.cc	2023-10-05 11:36:54.966247016 +0200
@@ -5235,7 +5235,6 @@ main (int argc, char **argv)
       POS_HERE (do_scalar_typedef ("FIXED_VALUE_TYPE", &pos));
       POS_HERE (do_scalar_typedef ("double_int", &pos));
       POS_HERE (do_scalar_typedef ("offset_int", &pos));
-      POS_HERE (do_scalar_typedef ("widest_int", &pos));
       POS_HERE (do_scalar_typedef ("int64_t", &pos));
       POS_HERE (do_scalar_typedef ("poly_int64", &pos));
       POS_HERE (do_scalar_typedef ("poly_uint64", &pos));
--- gcc/dwarf2out.cc.jj	2023-10-04 16:28:04.065785946 +0200
+++ gcc/dwarf2out.cc	2023-10-05 11:36:54.656251266 +0200
@@ -397,7 +397,7 @@ dump_struct_debug (tree type, enum debug
    of the number.  */
 
 static unsigned int
-get_full_len (const wide_int &op)
+get_full_len (const rwide_int &op)
 {
   int prec = wi::get_precision (op);
   return ((prec + HOST_BITS_PER_WIDE_INT - 1)
@@ -3900,7 +3900,7 @@ static void add_data_member_location_att
 						struct vlr_context *);
 static bool add_const_value_attribute (dw_die_ref, machine_mode, rtx);
 static void insert_int (HOST_WIDE_INT, unsigned, unsigned char *);
-static void insert_wide_int (const wide_int &, unsigned char *, int);
+static void insert_wide_int (const rwide_int &, unsigned char *, int);
 static unsigned insert_float (const_rtx, unsigned char *);
 static rtx rtl_for_decl_location (tree);
 static bool add_location_or_const_value_attribute (dw_die_ref, tree, bool);
@@ -4598,14 +4598,14 @@ AT_unsigned (dw_attr_node *a)
 
 static inline void
 add_AT_wide (dw_die_ref die, enum dwarf_attribute attr_kind,
-	     const wide_int& w)
+	     const rwide_int& w)
 {
   dw_attr_node attr;
 
   attr.dw_attr = attr_kind;
   attr.dw_attr_val.val_class = dw_val_class_wide_int;
   attr.dw_attr_val.val_entry = NULL;
-  attr.dw_attr_val.v.val_wide = ggc_alloc<wide_int> ();
+  attr.dw_attr_val.v.val_wide = ggc_alloc<rwide_int> ();
   *attr.dw_attr_val.v.val_wide = w;
   add_dwarf_attr (die, &attr);
 }
@@ -16714,7 +16714,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
 	  mem_loc_result->dw_loc_oprnd1.v.val_die_ref.external = 0;
 	  mem_loc_result->dw_loc_oprnd2.val_class
 	    = dw_val_class_wide_int;
-	  mem_loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<wide_int> ();
+	  mem_loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<rwide_int> ();
 	  *mem_loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, mode);
 	}
       break;
@@ -17288,7 +17288,7 @@ loc_descriptor (rtx rtl, machine_mode mo
 	  loc_result = new_loc_descr (DW_OP_implicit_value,
 				      GET_MODE_SIZE (int_mode), 0);
 	  loc_result->dw_loc_oprnd2.val_class = dw_val_class_wide_int;
-	  loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<wide_int> ();
+	  loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<rwide_int> ();
 	  *loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, int_mode);
 	}
       break;
@@ -20189,7 +20189,7 @@ extract_int (const unsigned char *src, u
 /* Writes wide_int values to dw_vec_const array.  */
 
 static void
-insert_wide_int (const wide_int &val, unsigned char *dest, int elt_size)
+insert_wide_int (const rwide_int &val, unsigned char *dest, int elt_size)
 {
   int i;
 
@@ -20274,7 +20274,7 @@ add_const_value_attribute (dw_die_ref di
 	  && (GET_MODE_PRECISION (int_mode)
 	      & (HOST_BITS_PER_WIDE_INT - 1)) == 0)
 	{
-	  wide_int w = rtx_mode_t (rtl, int_mode);
+	  rwide_int w = rtx_mode_t (rtl, int_mode);
 	  add_AT_wide (die, DW_AT_const_value, w);
 	  return true;
 	}
--- gcc/wide-int.cc.jj	2023-10-04 16:28:04.466780481 +0200
+++ gcc/wide-int.cc	2023-10-06 12:31:56.841517949 +0200
@@ -51,7 +51,7 @@ typedef unsigned int UDWtype __attribute
 #include "longlong.h"
 #endif
 
-static const HOST_WIDE_INT zeros[WIDE_INT_MAX_ELTS] = {};
+static const HOST_WIDE_INT zeros[1] = {};
 
 /*
  * Internal utilities.
@@ -62,8 +62,7 @@ static const HOST_WIDE_INT zeros[WIDE_IN
 #define HALF_INT_MASK ((HOST_WIDE_INT_1 << HOST_BITS_PER_HALF_WIDE_INT) - 1)
 
 #define BLOCK_OF(TARGET) ((TARGET) / HOST_BITS_PER_WIDE_INT)
-#define BLOCKS_NEEDED(PREC) \
-  (PREC ? (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT) : 1)
+#define BLOCKS_NEEDED(PREC) (PREC ? CEIL (PREC, HOST_BITS_PER_WIDE_INT) : 1)
 #define SIGN_MASK(X) ((HOST_WIDE_INT) (X) < 0 ? -1 : 0)
 
 /* Return the value a VAL[I] if I < LEN, otherwise, return 0 or -1
@@ -96,7 +95,7 @@ canonize (HOST_WIDE_INT *val, unsigned i
   top = val[len - 1];
   if (len * HOST_BITS_PER_WIDE_INT > precision)
     val[len - 1] = top = sext_hwi (top, precision % HOST_BITS_PER_WIDE_INT);
-  if (top != 0 && top != (HOST_WIDE_INT)-1)
+  if (top != 0 && top != HOST_WIDE_INT_M1)
     return len;
 
   /* At this point we know that the top is either 0 or -1.  Find the
@@ -163,7 +162,7 @@ wi::from_buffer (const unsigned char *bu
   /* We have to clear all the bits ourself, as we merely or in values
      below.  */
   unsigned int len = BLOCKS_NEEDED (precision);
-  HOST_WIDE_INT *val = result.write_val ();
+  HOST_WIDE_INT *val = result.write_val (0);
   for (unsigned int i = 0; i < len; ++i)
     val[i] = 0;
 
@@ -232,8 +231,7 @@ wi::to_mpz (const wide_int_ref &x, mpz_t
     }
   else if (excess < 0 && wi::neg_p (x))
     {
-      int extra
-	= (-excess + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT;
+      int extra = CEIL (-excess, HOST_BITS_PER_WIDE_INT);
       HOST_WIDE_INT *t = XALLOCAVEC (HOST_WIDE_INT, len + extra);
       for (int i = 0; i < len; i++)
 	t[i] = v[i];
@@ -280,8 +278,8 @@ wi::from_mpz (const_tree type, mpz_t x,
      extracted from the GMP manual, section "Integer Import and Export":
      http://gmplib.org/manual/Integer-Import-and-Export.html  */
   numb = CHAR_BIT * sizeof (HOST_WIDE_INT);
-  count = (mpz_sizeinbase (x, 2) + numb - 1) / numb;
-  HOST_WIDE_INT *val = res.write_val ();
+  count = CEIL (mpz_sizeinbase (x, 2), numb);
+  HOST_WIDE_INT *val = res.write_val (0);
   /* Read the absolute value.
 
      Write directly to the wide_int storage if possible, otherwise leave
@@ -289,7 +287,7 @@ wi::from_mpz (const_tree type, mpz_t x,
      to use mpz_tdiv_r_2exp for the latter case, but the situation is
      pathological and it seems safer to operate on the original mpz value
      in all cases.  */
-  void *valres = mpz_export (count <= WIDE_INT_MAX_ELTS ? val : 0,
+  void *valres = mpz_export (count <= WIDE_INT_MAX_INL_ELTS ? val : 0,
 			     &count, -1, sizeof (HOST_WIDE_INT), 0, 0, x);
   if (count < 1)
     {
@@ -1334,21 +1332,6 @@ wi::mul_internal (HOST_WIDE_INT *val, co
   unsigned HOST_WIDE_INT o0, o1, k, t;
   unsigned int i;
   unsigned int j;
-  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
-  unsigned int half_blocks_needed = blocks_needed * 2;
-  /* The sizes here are scaled to support a 2x largest mode by 2x
-     largest mode yielding a 4x largest mode result.  This is what is
-     needed by vpn.  */
-
-  unsigned HOST_HALF_WIDE_INT
-    u[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    v[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  /* The '2' in 'R' is because we are internally doing a full
-     multiply.  */
-  unsigned HOST_HALF_WIDE_INT
-    r[2 * 4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
 
   /* If the top level routine did not really pass in an overflow, then
      just make sure that we never attempt to set it.  */
@@ -1469,6 +1452,36 @@ wi::mul_internal (HOST_WIDE_INT *val, co
       return 1;
     }
 
+  /* The sizes here are scaled to support a 2x WIDE_INT_MAX_INL_PRECISION by 2x
+     WIDE_INT_MAX_INL_PRECISION yielding a 4x WIDE_INT_MAX_INL_PRECISION
+     result.  */
+
+  unsigned HOST_HALF_WIDE_INT
+    ubuf[4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    vbuf[4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  /* The '2' in 'RBUF' is because we are internally doing a full
+     multiply.  */
+  unsigned HOST_HALF_WIDE_INT
+    rbuf[2 * 4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  const HOST_WIDE_INT mask
+    = ((HOST_WIDE_INT) 1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
+  unsigned HOST_HALF_WIDE_INT *u = ubuf;
+  unsigned HOST_HALF_WIDE_INT *v = vbuf;
+  unsigned HOST_HALF_WIDE_INT *r = rbuf;
+
+  if (prec > WIDE_INT_MAX_INL_PRECISION && !high)
+    prec = (op1len + op2len + 1) * HOST_BITS_PER_WIDE_INT;
+  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
+  unsigned int half_blocks_needed = blocks_needed * 2;
+  if (UNLIKELY (prec > WIDE_INT_MAX_INL_PRECISION))
+    {
+      unsigned HOST_HALF_WIDE_INT *buf
+	= XALLOCAVEC (unsigned HOST_HALF_WIDE_INT, 4 * 4 * blocks_needed);
+      u = buf;
+      v = u + 4 * blocks_needed;
+      r = v + 4 * blocks_needed;
+    }
+
   /* We do unsigned mul and then correct it.  */
   wi_unpack (u, op1val, op1len, half_blocks_needed, prec, SIGNED);
   wi_unpack (v, op2val, op2len, half_blocks_needed, prec, SIGNED);
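
[ For readers less familiar with mul_internal, the half-word buffers
  above feed a classic schoolbook multiply.  A minimal self-contained
  sketch of that scheme (a toy model with illustrative names, not the
  GCC internals) on a single 64-bit word pair:

```cpp
#include <cassert>
#include <cstdint>

/* Toy model of the wi_unpack + schoolbook multiply scheme: split
   64-bit words into 32-bit halves, do a full 2x2 half-word multiply
   with carry propagation, and recombine.  Names are illustrative,
   not the GCC internals.  */
static void
mul_full_64 (uint64_t a, uint64_t b, uint64_t &lo, uint64_t &hi)
{
  const uint64_t mask = ((uint64_t) 1 << 32) - 1;
  uint64_t u[2] = { a & mask, a >> 32 };	/* Unpacked halves of A.  */
  uint64_t v[2] = { b & mask, b >> 32 };	/* Unpacked halves of B.  */
  uint64_t r[4] = { 0, 0, 0, 0 };		/* Double-width result.  */

  for (int j = 0; j < 2; j++)
    {
      uint64_t k = 0;				/* Carry.  */
      for (int i = 0; i < 2; i++)
	{
	  uint64_t t = u[i] * v[j] + r[i + j] + k;
	  r[i + j] = t & mask;
	  k = t >> 32;
	}
      r[j + 2] = k;
    }
  lo = r[0] | (r[1] << 32);
  hi = r[2] | (r[3] << 32);
}
```

  The real routine does the same over blocks_needed words, which is why
  the u/v/r scratch arrays scale with the (possibly alloca'ed)
  precision. ]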
@@ -1782,16 +1795,6 @@ wi::divmod_internal (HOST_WIDE_INT *quot
 		     unsigned int divisor_prec, signop sgn,
 		     wi::overflow_type *oflow)
 {
-  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
-  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
-  unsigned HOST_HALF_WIDE_INT
-    b_quotient[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    b_remainder[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    b_dividend[(4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT) + 1];
-  unsigned HOST_HALF_WIDE_INT
-    b_divisor[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
   unsigned int m, n;
   bool dividend_neg = false;
   bool divisor_neg = false;
@@ -1910,6 +1913,44 @@ wi::divmod_internal (HOST_WIDE_INT *quot
 	}
     }
 
+  unsigned HOST_HALF_WIDE_INT
+    b_quotient_buf[4 * WIDE_INT_MAX_INL_PRECISION
+		   / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    b_remainder_buf[4 * WIDE_INT_MAX_INL_PRECISION
+		    / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    b_dividend_buf[(4 * WIDE_INT_MAX_INL_PRECISION
+		    / HOST_BITS_PER_HALF_WIDE_INT) + 1];
+  unsigned HOST_HALF_WIDE_INT
+    b_divisor_buf[4 * WIDE_INT_MAX_INL_PRECISION
+		  / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT *b_quotient = b_quotient_buf;
+  unsigned HOST_HALF_WIDE_INT *b_remainder = b_remainder_buf;
+  unsigned HOST_HALF_WIDE_INT *b_dividend = b_dividend_buf;
+  unsigned HOST_HALF_WIDE_INT *b_divisor = b_divisor_buf;
+
+  if (dividend_prec > WIDE_INT_MAX_INL_PRECISION
+      && (sgn == SIGNED || dividend_val[dividend_len - 1] >= 0))
+    dividend_prec = (dividend_len + 1) * HOST_BITS_PER_WIDE_INT;
+  if (divisor_prec > WIDE_INT_MAX_INL_PRECISION)
+    divisor_prec = divisor_len * HOST_BITS_PER_WIDE_INT;
+  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
+  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
+  if (UNLIKELY (dividend_prec > WIDE_INT_MAX_INL_PRECISION)
+      || UNLIKELY (divisor_prec > WIDE_INT_MAX_INL_PRECISION))
+    {
+      unsigned HOST_HALF_WIDE_INT *buf
+        = XALLOCAVEC (unsigned HOST_HALF_WIDE_INT,
+		      12 * dividend_blocks_needed
+		      + 4 * divisor_blocks_needed + 1);
+      b_quotient = buf;
+      b_remainder = b_quotient + 4 * dividend_blocks_needed;
+      b_dividend = b_remainder + 4 * dividend_blocks_needed;
+      b_divisor = b_dividend + 4 * dividend_blocks_needed + 1;
+      memset (b_quotient, 0,
+	      4 * dividend_blocks_needed * sizeof (HOST_HALF_WIDE_INT));
+    }
   wi_unpack (b_dividend, dividend.get_val (), dividend.get_len (),
 	     dividend_blocks_needed, dividend_prec, UNSIGNED);
   wi_unpack (b_divisor, divisor.get_val (), divisor.get_len (),
@@ -1924,7 +1965,8 @@ wi::divmod_internal (HOST_WIDE_INT *quot
   while (n > 1 && b_divisor[n - 1] == 0)
     n--;
 
-  memset (b_quotient, 0, sizeof (b_quotient));
+  if (b_quotient == b_quotient_buf)
+    memset (b_quotient_buf, 0, sizeof (b_quotient_buf));
 
   divmod_internal_2 (b_quotient, b_remainder, b_dividend, b_divisor, m, n);
 
@@ -1970,6 +2012,8 @@ wi::lshift_large (HOST_WIDE_INT *val, co
 
   /* The whole-block shift fills with zeros.  */
   unsigned int len = BLOCKS_NEEDED (precision);
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    len = xlen + skip + 1;
   for (unsigned int i = 0; i < skip; ++i)
     val[i] = 0;
 
@@ -1993,22 +2037,17 @@ wi::lshift_large (HOST_WIDE_INT *val, co
   return canonize (val, len, precision);
 }
 
-/* Right shift XVAL by SHIFT and store the result in VAL.  Return the
+/* Right shift XVAL by SHIFT and store the result in VAL.  LEN is the
    number of blocks in VAL.  The input has XPRECISION bits and the
    output has XPRECISION - SHIFT bits.  */
-static unsigned int
+static void
 rshift_large_common (HOST_WIDE_INT *val, const HOST_WIDE_INT *xval,
-		     unsigned int xlen, unsigned int xprecision,
-		     unsigned int shift)
+		     unsigned int xlen, unsigned int shift, unsigned int len)
 {
   /* Split the shift into a whole-block shift and a subblock shift.  */
   unsigned int skip = shift / HOST_BITS_PER_WIDE_INT;
   unsigned int small_shift = shift % HOST_BITS_PER_WIDE_INT;
 
-  /* Work out how many blocks are needed to store the significant bits
-     (excluding the upper zeros or signs).  */
-  unsigned int len = BLOCKS_NEEDED (xprecision - shift);
-
   /* It's easier to handle the simple block case specially.  */
   if (small_shift == 0)
     for (unsigned int i = 0; i < len; ++i)
@@ -2025,7 +2064,6 @@ rshift_large_common (HOST_WIDE_INT *val,
 	  val[i] |= curr << (-small_shift % HOST_BITS_PER_WIDE_INT);
 	}
     }
-  return len;
 }
 
 /* Logically right shift XVAL by SHIFT and store the result in VAL.
@@ -2036,11 +2074,20 @@ wi::lrshift_large (HOST_WIDE_INT *val, c
 		   unsigned int xlen, unsigned int xprecision,
 		   unsigned int precision, unsigned int shift)
 {
-  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
+  /* Work out how many blocks are needed to store the significant bits
+     (excluding the upper zeros or signs).  */
+  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
+  unsigned int len = blocks_needed;
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)
+      && len > xlen
+      && xval[xlen - 1] >= 0)
+    len = xlen;
+
+  rshift_large_common (val, xval, xlen, shift, len);
 
   /* The value we just created has precision XPRECISION - SHIFT.
      Zero-extend it to wider precisions.  */
-  if (precision > xprecision - shift)
+  if (precision > xprecision - shift && len == blocks_needed)
     {
       unsigned int small_prec = (xprecision - shift) % HOST_BITS_PER_WIDE_INT;
       if (small_prec)
@@ -2063,11 +2110,18 @@ wi::arshift_large (HOST_WIDE_INT *val, c
 		   unsigned int xlen, unsigned int xprecision,
 		   unsigned int precision, unsigned int shift)
 {
-  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
+  /* Work out how many blocks are needed to store the significant bits
+     (excluding the upper zeros or signs).  */
+  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
+  unsigned int len = blocks_needed;
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS) && len > xlen)
+    len = xlen;
+
+  rshift_large_common (val, xval, xlen, shift, len);
 
   /* The value we just created has precision XPRECISION - SHIFT.
      Sign-extend it to wider types.  */
-  if (precision > xprecision - shift)
+  if (precision > xprecision - shift && len == blocks_needed)
     {
       unsigned int small_prec = (xprecision - shift) % HOST_BITS_PER_WIDE_INT;
       if (small_prec)
@@ -2399,9 +2453,12 @@ from_int (int i)
 static void
 assert_deceq (const char *expected, const wide_int_ref &wi, signop sgn)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_dec (wi, buf, sgn);
-  ASSERT_STREQ (expected, buf);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_dec (wi, p, sgn);
+  ASSERT_STREQ (expected, p);
 }
 
 /* Likewise for base 16.  */
@@ -2409,9 +2466,12 @@ assert_deceq (const char *expected, cons
 static void
 assert_hexeq (const char *expected, const wide_int_ref &wi)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_hex (wi, buf);
-  ASSERT_STREQ (expected, buf);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_hex (wi, p);
+  ASSERT_STREQ (expected, p);
 }
 
 /* Test cases.  */
@@ -2428,7 +2488,7 @@ test_printing ()
   assert_hexeq ("0x1fffffffffffffffff", wi::shwi (-1, 69));
   assert_hexeq ("0xffffffffffffffff", wi::mask (64, false, 69));
   assert_hexeq ("0xffffffffffffffff", wi::mask <widest_int> (64, false));
-  if (WIDE_INT_MAX_PRECISION > 128)
+  if (WIDE_INT_MAX_INL_PRECISION > 128)
     {
       assert_hexeq ("0x20000000000000000fffffffffffffffe",
 		    wi::lshift (1, 129) + wi::lshift (1, 64) - 2);
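
[ The core change to wide_int_storage below is an inline-buffer-or-heap
  union, i.e. a small-buffer optimization with a manually managed copy
  constructor, assignment operator and destructor.  A minimal
  self-contained sketch of that pattern (illustrative names and sizes,
  not the GCC types):

```cpp
#include <algorithm>
#include <cassert>

/* Toy model of the union-based storage the patch gives
   wide_int_storage: values up to INL_ELTS words live inside the
   object, larger ones in a heap-allocated array.  */
struct small_vec
{
  static const unsigned inl_elts = 4;
  union { long val[inl_elts]; long *valp; } u;
  unsigned len;

  explicit small_vec (unsigned l) : len (l)
  {
    if (len > inl_elts)
      u.valp = new long[len];
  }
  small_vec (const small_vec &x) : len (x.len)
  {
    if (len > inl_elts)
      u.valp = new long[len];
    std::copy (x.data (), x.data () + len, data ());
  }
  small_vec &operator= (const small_vec &x)
  {
    if (this == &x)
      return *this;
    if (len > inl_elts)		/* Free the old heap buffer, if any.  */
      delete[] u.valp;
    len = x.len;
    if (len > inl_elts)
      u.valp = new long[len];
    std::copy (x.data (), x.data () + len, data ());
    return *this;
  }
  ~small_vec ()
  {
    if (len > inl_elts)
      delete[] u.valp;
  }
  long *data () { return len > inl_elts ? u.valp : u.val; }
  const long *data () const { return len > inl_elts ? u.valp : u.val; }
};
```

  Like the patched wide_int_storage, this makes the type non-trivially
  copyable, which is why rwide_int keeps a plain inline array for the
  GC-friendly cases. ]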
--- gcc/c-family/c-warn.cc.jj	2023-10-04 16:28:03.935787718 +0200
+++ gcc/c-family/c-warn.cc	2023-10-05 11:36:55.090245316 +0200
@@ -1517,13 +1517,15 @@ match_case_to_enum_1 (tree key, tree typ
     return;
 
   char buf[WIDE_INT_PRINT_BUFFER_SIZE];
+  wide_int w = wi::to_wide (key);
 
+  gcc_assert (w.get_len () <= WIDE_INT_MAX_INL_ELTS);
   if (tree_fits_uhwi_p (key))
-    print_dec (wi::to_wide (key), buf, UNSIGNED);
+    print_dec (w, buf, UNSIGNED);
   else if (tree_fits_shwi_p (key))
-    print_dec (wi::to_wide (key), buf, SIGNED);
+    print_dec (w, buf, SIGNED);
   else
-    print_hex (wi::to_wide (key), buf);
+    print_hex (w, buf);
 
   if (TYPE_NAME (type) == NULL_TREE)
     warning_at (DECL_SOURCE_LOCATION (CASE_LABEL (label)),
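
[ As a sanity check on the buffer sizes used above: the testsuite hunks
  allocate len * HOST_BITS_PER_WIDE_INT / 4 + 4 bytes, since LEN 64-bit
  words print as at most LEN * 64 / 4 hex digits plus "0x" and a NUL.
  A minimal sketch with a hypothetical helper (not GCC's print_hex):

```cpp
#include <cassert>
#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <cstring>

/* Print VAL (LEN 64-bit words, least significant first) in hex into
   BUF.  Hypothetical stand-in for print_hex, used only to illustrate
   the worst-case buffer bound.  */
static int
hex_print (const uint64_t *val, unsigned len, char *buf)
{
  int pos = sprintf (buf, "0x%" PRIx64, val[len - 1]);
  for (unsigned i = len - 1; i-- > 0;)
    pos += sprintf (buf + pos, "%016" PRIx64, val[i]);
  return pos;	/* Characters written, excluding the NUL.  */
}
```

  With len == 2 this stays within a 2 * 64 / 4 + 4 byte buffer, mirroring
  the gcc_assert that the match_case_to_enum_1 value still fits the
  inline WIDE_INT_PRINT_BUFFER_SIZE buffer. ]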
--- gcc/wide-int.h.jj	2023-10-04 16:28:04.468780454 +0200
+++ gcc/wide-int.h	2023-10-06 15:13:31.117547151 +0200
@@ -27,7 +27,7 @@ along with GCC; see the file COPYING3.
    other longer storage GCC representations (rtl and tree).
 
    The actual precision of a wide_int depends on the flavor.  There
-   are three predefined flavors:
+   are four predefined flavors:
 
      1) wide_int (the default).  This flavor does the math in the
      precision of its input arguments.  It is assumed (and checked)
@@ -53,6 +53,10 @@ along with GCC; see the file COPYING3.
      multiply, division, shifts, comparisons, and operations that need
      overflow detected), the signedness must be specified separately.
 
+     For precisions up to WIDE_INT_MAX_INL_PRECISION, it uses an inline
+     buffer in the type; for larger precisions up to WIDE_INT_MAX_PRECISION
+     it uses a pointer to a heap-allocated buffer.
+
      2) offset_int.  This is a fixed-precision integer that can hold
      any address offset, measured in either bits or bytes, with at
      least one extra sign bit.  At the moment the maximum address
@@ -76,11 +80,15 @@ along with GCC; see the file COPYING3.
        wi::leu_p (a, b) as a more efficient short-hand for
        "a >= 0 && a <= b". ]
 
-     3) widest_int.  This representation is an approximation of
+     3) rwide_int.  Restricted wide_int.  This is similar to wide_int,
+     but the maximum supported precision is RWIDE_INT_MAX_PRECISION and
+     it always uses an inline buffer.  offset_int and rwide_int are
+     GC-friendly; wide_int and widest_int are not.
+
+     4) widest_int.  This representation is an approximation of
      infinite precision math.  However, it is not really infinite
      precision math as in the GMP library.  It is really finite
-     precision math where the precision is 4 times the size of the
-     largest integer that the target port can represent.
+     precision math where the precision is WIDEST_INT_MAX_PRECISION.
 
      Like offset_int, widest_int is wider than all the values that
      it needs to represent, so the integers are logically signed.
@@ -231,17 +239,34 @@ along with GCC; see the file COPYING3.
    can be arbitrarily different from X.  */
 
 /* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
-   early examination of the target's mode file.  The WIDE_INT_MAX_ELTS
+   early examination of the target's mode file.  The WIDE_INT_MAX_INL_ELTS
   can accommodate at least 1 more bit so that unsigned numbers of that
    mode can be represented as a signed value.  Note that it is still
    possible to create fixed_wide_ints that have precisions greater than
    MAX_BITSIZE_MODE_ANY_INT.  This can be useful when representing a
    double-width multiplication result, for example.  */
-#define WIDE_INT_MAX_ELTS \
-  ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) / HOST_BITS_PER_WIDE_INT)
-
+#define WIDE_INT_MAX_INL_ELTS \
+  ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) \
+   / HOST_BITS_PER_WIDE_INT)
+
+#define WIDE_INT_MAX_INL_PRECISION \
+  (WIDE_INT_MAX_INL_ELTS * HOST_BITS_PER_WIDE_INT)
+
+/* Maximum precision of wide_int, and the largest _BitInt
+   precision + 1 we can support.  */
+#define WIDE_INT_MAX_ELTS 255
 #define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
 
+#define RWIDE_INT_MAX_ELTS WIDE_INT_MAX_INL_ELTS
+#define RWIDE_INT_MAX_PRECISION WIDE_INT_MAX_INL_PRECISION
+
+/* Precision of widest_int and largest _BitInt precision + 1 we can
+   support.  */
+#define WIDEST_INT_MAX_ELTS 510
+#define WIDEST_INT_MAX_PRECISION (WIDEST_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
+
+STATIC_ASSERT (WIDE_INT_MAX_INL_ELTS < WIDE_INT_MAX_ELTS);
+
 /* This is the max size of any pointer on any machine.  It does not
    seem to be as easy to sniff this out of the machine description as
    it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
@@ -307,17 +332,19 @@ along with GCC; see the file COPYING3.
 #define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
   WI_BINARY_RESULT (T1, T2) RESULT = \
     wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
-  HOST_WIDE_INT *VAL = RESULT.write_val ()
+  HOST_WIDE_INT *VAL = RESULT.write_val (0)
 
 /* Similar for the result of a unary operation on X, which has type T.  */
 #define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
   WI_UNARY_RESULT (T) RESULT = \
     wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
-  HOST_WIDE_INT *VAL = RESULT.write_val ()
+  HOST_WIDE_INT *VAL = RESULT.write_val (0)
 
 template <typename T> class generic_wide_int;
 template <int N> class fixed_wide_int_storage;
 class wide_int_storage;
+class rwide_int_storage;
+template <int N> class widest_int_storage;
 
 /* An N-bit integer.  Until we can use typedef templates, use this instead.  */
 #define FIXED_WIDE_INT(N) \
@@ -325,10 +352,9 @@ class wide_int_storage;
 
 typedef generic_wide_int <wide_int_storage> wide_int;
 typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
-typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int;
-/* Spelled out explicitly (rather than through FIXED_WIDE_INT)
-   so as not to confuse gengtype.  */
-typedef generic_wide_int < fixed_wide_int_storage <WIDE_INT_MAX_PRECISION * 2> > widest2_int;
+typedef generic_wide_int <rwide_int_storage> rwide_int;
+typedef generic_wide_int <widest_int_storage <WIDE_INT_MAX_INL_PRECISION> > widest_int;
+typedef generic_wide_int <widest_int_storage <WIDE_INT_MAX_INL_PRECISION * 2> > widest2_int;
 
 /* wi::storage_ref can be a reference to a primitive type,
    so this is the conservatively-correct setting.  */
@@ -380,7 +406,11 @@ namespace wi
 
     /* The integer has a constant precision (known at GCC compile time)
        and is signed.  */
-    CONST_PRECISION
+    CONST_PRECISION,
+
+    /* Like CONST_PRECISION, but with WIDEST_INT_MAX_PRECISION or larger
+       precision, where not all array elements are always present.  */
+    WIDEST_CONST_PRECISION
   };
 
   /* This class, which has no default implementation, is expected to
@@ -390,9 +420,15 @@ namespace wi
        Classifies the type of T.
 
      static const unsigned int precision;
-       Only defined if precision_type == CONST_PRECISION.  Specifies the
+       Only defined if precision_type == CONST_PRECISION or
+       precision_type == WIDEST_CONST_PRECISION.  Specifies the
        precision of all integers of type T.
 
+     static const unsigned int inl_precision;
+       Only defined if precision_type == WIDEST_CONST_PRECISION.
+       Specifies the precision which is represented in the inline
+       arrays.
+
      static const bool host_dependent_precision;
        True if the precision of T depends (or can depend) on the host.
 
@@ -415,9 +451,10 @@ namespace wi
   struct binary_traits;
 
   /* Specify the result type for each supported combination of binary
-     inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
-     mixed, in order to give stronger type checking.  When both inputs
-     are CONST_PRECISION, they must have the same precision.  */
+     inputs.  Note that CONST_PRECISION, WIDEST_CONST_PRECISION and
+     VAR_PRECISION cannot be mixed, in order to give stronger type
+     checking.  When both inputs are CONST_PRECISION or both are
+     WIDEST_CONST_PRECISION, they must have the same precision.  */
   template <typename T1, typename T2>
   struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
   {
@@ -447,6 +484,17 @@ namespace wi
   };
 
   template <typename T1, typename T2>
+  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, WIDEST_CONST_PRECISION>
+  {
+    typedef generic_wide_int < widest_int_storage
+			       <int_traits <T2>::inl_precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
+    typedef result_type signed_shift_result_type;
+    typedef bool signed_predicate_result;
+  };
+
+  template <typename T1, typename T2>
   struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
   {
     typedef wide_int result_type;
@@ -468,6 +516,17 @@ namespace wi
   };
 
   template <typename T1, typename T2>
+  struct binary_traits <T1, T2, WIDEST_CONST_PRECISION, FLEXIBLE_PRECISION>
+  {
+    typedef generic_wide_int < widest_int_storage
+			       <int_traits <T1>::inl_precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
+    typedef result_type signed_shift_result_type;
+    typedef bool signed_predicate_result;
+  };
+
+  template <typename T1, typename T2>
   struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
   {
     STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
@@ -482,6 +541,18 @@ namespace wi
   };
 
   template <typename T1, typename T2>
+  struct binary_traits <T1, T2, WIDEST_CONST_PRECISION, WIDEST_CONST_PRECISION>
+  {
+    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
+    typedef generic_wide_int < widest_int_storage
+			       <int_traits <T1>::inl_precision> > result_type;
+    typedef result_type operator_result;
+    typedef bool predicate_result;
+    typedef result_type signed_shift_result_type;
+    typedef bool signed_predicate_result;
+  };
+
+  template <typename T1, typename T2>
   struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
   {
     typedef wide_int result_type;
@@ -709,8 +780,10 @@ wi::storage_ref::get_val () const
    Although not required by generic_wide_int itself, writable storage
    classes can also provide the following functions:
 
-   HOST_WIDE_INT *write_val ()
-     Get a modifiable version of get_val ()
+   HOST_WIDE_INT *write_val (unsigned int)
+     Get a modifiable version of get_val ().  The argument should be
+     an upper estimate of LEN (it is ignored by all storages except
+     widest_int_storage).
 
    unsigned int set_len (unsigned int len)
      Set the value returned by get_len () to LEN.  */
@@ -777,6 +850,8 @@ public:
 
   static const bool is_sign_extended
     = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
+  static const bool needs_write_val_arg
+    = wi::int_traits <generic_wide_int <storage> >::needs_write_val_arg;
 };
 
 template <typename storage>
@@ -1049,6 +1124,7 @@ namespace wi
     static const enum precision_type precision_type = VAR_PRECISION;
     static const bool host_dependent_precision = HDP;
     static const bool is_sign_extended = SE;
+    static const bool needs_write_val_arg = false;
   };
 }
 
@@ -1065,7 +1141,11 @@ namespace wi
 class GTY(()) wide_int_storage
 {
 private:
-  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
+  union
+  {
+    HOST_WIDE_INT val[WIDE_INT_MAX_INL_ELTS];
+    HOST_WIDE_INT *valp;
+  } GTY((skip)) u;
   unsigned int len;
   unsigned int precision;
 
@@ -1073,14 +1153,17 @@ public:
   wide_int_storage ();
   template <typename T>
   wide_int_storage (const T &);
+  wide_int_storage (const wide_int_storage &);
+  ~wide_int_storage ();
 
   /* The standard generic_wide_int storage methods.  */
   unsigned int get_precision () const;
   const HOST_WIDE_INT *get_val () const;
   unsigned int get_len () const;
-  HOST_WIDE_INT *write_val ();
+  HOST_WIDE_INT *write_val (unsigned int);
   void set_len (unsigned int, bool = false);
 
+  wide_int_storage &operator = (const wide_int_storage &);
   template <typename T>
   wide_int_storage &operator = (const T &);
 
@@ -1099,12 +1182,15 @@ namespace wi
     /* Guaranteed by a static assert in the wide_int_storage constructor.  */
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     template <typename T1, typename T2>
     static wide_int get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
   };
 }
 
-inline wide_int_storage::wide_int_storage () {}
+inline wide_int_storage::wide_int_storage () : precision (0) {}
 
 /* Initialize the storage from integer X, in its natural precision.
    Note that we do not allow integers with host-dependent precision
@@ -1113,21 +1199,75 @@ inline wide_int_storage::wide_int_storag
 template <typename T>
 inline wide_int_storage::wide_int_storage (const T &x)
 {
-  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
-  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
   WIDE_INT_REF_FOR (T) xi (x);
   precision = xi.precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
   wi::copy (*this, xi);
 }
 
+inline wide_int_storage::wide_int_storage (const wide_int_storage &x)
+{
+  len = x.len;
+  precision = x.precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else if (LIKELY (precision))
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+}
+
+inline wide_int_storage::~wide_int_storage ()
+{
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    XDELETEVEC (u.valp);
+}
+
+inline wide_int_storage&
+wide_int_storage::operator = (const wide_int_storage &x)
+{
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    {
+      if (this == &x)
+	return *this;
+      XDELETEVEC (u.valp);
+    }
+  len = x.len;
+  precision = x.precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else if (LIKELY (precision))
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+  return *this;
+}
+
 template <typename T>
 inline wide_int_storage&
 wide_int_storage::operator = (const T &x)
 {
-  { STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision); }
-  { STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION); }
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
   WIDE_INT_REF_FOR (T) xi (x);
-  precision = xi.precision;
+  if (UNLIKELY (precision != xi.precision))
+    {
+      if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+	XDELETEVEC (u.valp);
+      precision = xi.precision;
+      if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+	u.valp = XNEWVEC (HOST_WIDE_INT,
+			  CEIL (precision, HOST_BITS_PER_WIDE_INT));
+    }
   wi::copy (*this, xi);
   return *this;
 }
@@ -1141,7 +1281,7 @@ wide_int_storage::get_precision () const
 inline const HOST_WIDE_INT *
 wide_int_storage::get_val () const
 {
-  return val;
+  return UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION) ? u.valp : u.val;
 }
 
 inline unsigned int
@@ -1151,9 +1291,9 @@ wide_int_storage::get_len () const
 }
 
 inline HOST_WIDE_INT *
-wide_int_storage::write_val ()
+wide_int_storage::write_val (unsigned int)
 {
-  return val;
+  return UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION) ? u.valp : u.val;
 }
 
 inline void
@@ -1161,8 +1301,10 @@ wide_int_storage::set_len (unsigned int
 {
   len = l;
   if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
-    val[len - 1] = sext_hwi (val[len - 1],
-			     precision % HOST_BITS_PER_WIDE_INT);
+    {
+      HOST_WIDE_INT &v = write_val (len)[len - 1];
+      v = sext_hwi (v, precision % HOST_BITS_PER_WIDE_INT);
+    }
 }
 
 /* Treat X as having signedness SGN and convert it to a PRECISION-bit
@@ -1172,7 +1314,7 @@ wide_int_storage::from (const wide_int_r
 			signop sgn)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
+  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
 				     x.precision, precision, sgn));
   return result;
 }
@@ -1185,7 +1327,7 @@ wide_int_storage::from_array (const HOST
 			      unsigned int precision, bool need_canon_p)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (wi::from_array (result.write_val (), val, len, precision,
+  result.set_len (wi::from_array (result.write_val (len), val, len, precision,
 				  need_canon_p));
   return result;
 }
@@ -1196,6 +1338,9 @@ wide_int_storage::create (unsigned int p
 {
   wide_int x;
   x.precision = precision;
+  if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+    x.u.valp = XNEWVEC (HOST_WIDE_INT,
+			CEIL (precision, HOST_BITS_PER_WIDE_INT));
   return x;
 }
 
@@ -1212,6 +1357,194 @@ wi::int_traits <wide_int_storage>::get_b
     return wide_int::create (wi::get_precision (x));
 }
 
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits <wide_int_storage>::get_binary_precision (const T1 &x,
+							 const T2 &y)
+{
+  /* This shouldn't be used for two flexible-precision inputs.  */
+  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
+		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
+  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
+    return wi::get_precision (y);
+  else
+    return wi::get_precision (x);
+}
+
+/* The storage used by rwide_int.  */
+class GTY(()) rwide_int_storage
+{
+private:
+  HOST_WIDE_INT val[RWIDE_INT_MAX_ELTS];
+  unsigned int len;
+  unsigned int precision;
+
+public:
+  rwide_int_storage () = default;
+  template <typename T>
+  rwide_int_storage (const T &);
+
+  /* The standard generic_wide_int storage methods.  */
+  unsigned int get_precision () const;
+  const HOST_WIDE_INT *get_val () const;
+  unsigned int get_len () const;
+  HOST_WIDE_INT *write_val (unsigned int);
+  void set_len (unsigned int, bool = false);
+
+  template <typename T>
+  rwide_int_storage &operator = (const T &);
+
+  static rwide_int from (const wide_int_ref &, unsigned int, signop);
+  static rwide_int from_array (const HOST_WIDE_INT *, unsigned int,
+			       unsigned int, bool = true);
+  static rwide_int create (unsigned int);
+};
+
+namespace wi
+{
+  template <>
+  struct int_traits <rwide_int_storage>
+  {
+    static const enum precision_type precision_type = VAR_PRECISION;
+    /* Guaranteed by a static assert in the rwide_int_storage constructor.  */
+    static const bool host_dependent_precision = false;
+    static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
+    template <typename T1, typename T2>
+    static rwide_int get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
+  };
+}
+
+/* Initialize the storage from integer X, in its natural precision.
+   Note that we do not allow integers with host-dependent precision
+   to become rwide_ints; rwide_ints must always be logically independent
+   of the host.  */
+template <typename T>
+inline rwide_int_storage::rwide_int_storage (const T &x)
+{
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
+  WIDE_INT_REF_FOR (T) xi (x);
+  precision = xi.precision;
+  gcc_assert (precision <= RWIDE_INT_MAX_PRECISION);
+  wi::copy (*this, xi);
+}
+
+template <typename T>
+inline rwide_int_storage&
+rwide_int_storage::operator = (const T &x)
+{
+  STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
+  STATIC_ASSERT (wi::int_traits<T>::precision_type
+		 != wi::WIDEST_CONST_PRECISION);
+  WIDE_INT_REF_FOR (T) xi (x);
+  precision = xi.precision;
+  gcc_assert (precision <= RWIDE_INT_MAX_PRECISION);
+  wi::copy (*this, xi);
+  return *this;
+}
+
+inline unsigned int
+rwide_int_storage::get_precision () const
+{
+  return precision;
+}
+
+inline const HOST_WIDE_INT *
+rwide_int_storage::get_val () const
+{
+  return val;
+}
+
+inline unsigned int
+rwide_int_storage::get_len () const
+{
+  return len;
+}
+
+inline HOST_WIDE_INT *
+rwide_int_storage::write_val (unsigned int)
+{
+  return val;
+}
+
+inline void
+rwide_int_storage::set_len (unsigned int l, bool is_sign_extended)
+{
+  len = l;
+  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
+    val[len - 1] = sext_hwi (val[len - 1],
+			     precision % HOST_BITS_PER_WIDE_INT);
+}
+
+/* Treat X as having signedness SGN and convert it to a PRECISION-bit
+   number.  */
+inline rwide_int
+rwide_int_storage::from (const wide_int_ref &x, unsigned int precision,
+			 signop sgn)
+{
+  rwide_int result = rwide_int::create (precision);
+  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
+				     x.precision, precision, sgn));
+  return result;
+}
+
+/* Create a rwide_int from the explicit block encoding given by VAL and
+   LEN.  PRECISION is the precision of the integer.  NEED_CANON_P is
+   true if the encoding may have redundant trailing blocks.  */
+inline rwide_int
+rwide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
+			       unsigned int precision, bool need_canon_p)
+{
+  rwide_int result = rwide_int::create (precision);
+  result.set_len (wi::from_array (result.write_val (len), val, len, precision,
+				  need_canon_p));
+  return result;
+}
+
+/* Return an uninitialized rwide_int with precision PRECISION.  */
+inline rwide_int
+rwide_int_storage::create (unsigned int precision)
+{
+  rwide_int x;
+  gcc_assert (precision <= RWIDE_INT_MAX_PRECISION);
+  x.precision = precision;
+  return x;
+}
+
+template <typename T1, typename T2>
+inline rwide_int
+wi::int_traits <rwide_int_storage>::get_binary_result (const T1 &x,
+						       const T2 &y)
+{
+  /* This shouldn't be used for two flexible-precision inputs.  */
+  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
+		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
+  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
+    return rwide_int::create (wi::get_precision (y));
+  else
+    return rwide_int::create (wi::get_precision (x));
+}
+
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits <rwide_int_storage>::get_binary_precision (const T1 &x,
+							  const T2 &y)
+{
+  /* This shouldn't be used for two flexible-precision inputs.  */
+  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
+		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
+  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
+    return wi::get_precision (y);
+  else
+    return wi::get_precision (x);
+}
+
 /* The storage used by FIXED_WIDE_INT (N).  */
 template <int N>
 class GTY(()) fixed_wide_int_storage
@@ -1221,7 +1554,7 @@ private:
   unsigned int len;
 
 public:
-  fixed_wide_int_storage ();
+  fixed_wide_int_storage () = default;
   template <typename T>
   fixed_wide_int_storage (const T &);
 
@@ -1229,7 +1562,7 @@ public:
   unsigned int get_precision () const;
   const HOST_WIDE_INT *get_val () const;
   unsigned int get_len () const;
-  HOST_WIDE_INT *write_val ();
+  HOST_WIDE_INT *write_val (unsigned int);
   void set_len (unsigned int, bool = false);
 
   static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
@@ -1245,15 +1578,15 @@ namespace wi
     static const enum precision_type precision_type = CONST_PRECISION;
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     static const unsigned int precision = N;
     template <typename T1, typename T2>
     static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
   };
 }
 
-template <int N>
-inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}
-
 /* Initialize the storage from integer X, in precision N.  */
 template <int N>
 template <typename T>
@@ -1288,7 +1621,7 @@ fixed_wide_int_storage <N>::get_len () c
 
 template <int N>
 inline HOST_WIDE_INT *
-fixed_wide_int_storage <N>::write_val ()
+fixed_wide_int_storage <N>::write_val (unsigned int)
 {
   return val;
 }
@@ -1308,7 +1641,7 @@ inline FIXED_WIDE_INT (N)
 fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
 {
   FIXED_WIDE_INT (N) result;
-  result.set_len (wi::force_to_size (result.write_val (), x.val, x.len,
+  result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
 				     x.precision, N, sgn));
   return result;
 }
@@ -1323,7 +1656,7 @@ fixed_wide_int_storage <N>::from_array (
 					bool need_canon_p)
 {
   FIXED_WIDE_INT (N) result;
-  result.set_len (wi::from_array (result.write_val (), val, len,
+  result.set_len (wi::from_array (result.write_val (len), val, len,
 				  N, need_canon_p));
   return result;
 }
@@ -1337,6 +1670,255 @@ get_binary_result (const T1 &, const T2
   return FIXED_WIDE_INT (N) ();
 }
 
+template <int N>
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits < fixed_wide_int_storage <N> >::
+get_binary_precision (const T1 &, const T2 &)
+{
+  return N;
+}
+
+#define WIDEST_INT(N) generic_wide_int < widest_int_storage <N> >
+
+/* The storage used by widest_int.  */
+template <int N>
+class GTY(()) widest_int_storage
+{
+private:
+  union
+  {
+    HOST_WIDE_INT val[WIDE_INT_MAX_HWIS (N)];
+    HOST_WIDE_INT *valp;
+  } GTY((skip)) u;
+  unsigned int len;
+
+public:
+  widest_int_storage ();
+  widest_int_storage (const widest_int_storage &);
+  template <typename T>
+  widest_int_storage (const T &);
+  ~widest_int_storage ();
+  widest_int_storage &operator = (const widest_int_storage &);
+  template <typename T>
+  inline widest_int_storage& operator = (const T &);
+
+  /* The standard generic_wide_int storage methods.  */
+  unsigned int get_precision () const;
+  const HOST_WIDE_INT *get_val () const;
+  unsigned int get_len () const;
+  HOST_WIDE_INT *write_val (unsigned int);
+  void set_len (unsigned int, bool = false);
+
+  static WIDEST_INT (N) from (const wide_int_ref &, signop);
+  static WIDEST_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
+				    bool = true);
+};
+
+namespace wi
+{
+  template <int N>
+  struct int_traits < widest_int_storage <N> >
+  {
+    static const enum precision_type precision_type = WIDEST_CONST_PRECISION;
+    static const bool host_dependent_precision = false;
+    static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = true;
+    static const unsigned int precision
+      = N / WIDE_INT_MAX_INL_PRECISION * WIDEST_INT_MAX_PRECISION;
+    static const unsigned int inl_precision = N;
+    template <typename T1, typename T2>
+    static WIDEST_INT (N) get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
+  };
+}
+
+template <int N>
+inline widest_int_storage <N>::widest_int_storage () : len (0) {}
+
+/* Initialize the storage from integer X, in the precision implied by N.  */
+template <int N>
+template <typename T>
+inline widest_int_storage <N>::widest_int_storage (const T &x) : len (0)
+{
+  /* Check for type compatibility.  We don't want to initialize a
+     widest integer from something like a wide_int.  */
+  WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
+  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N / WIDE_INT_MAX_INL_PRECISION
+					    * WIDEST_INT_MAX_PRECISION));
+}
+
+template <int N>
+inline
+widest_int_storage <N>::widest_int_storage (const widest_int_storage &x)
+{
+  len = x.len;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, len);
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+}
+
+template <int N>
+inline widest_int_storage <N>::~widest_int_storage ()
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+}
+
+template <int N>
+inline widest_int_storage <N>&
+widest_int_storage <N>::operator = (const widest_int_storage <N> &x)
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      if (this == &x)
+	return *this;
+      XDELETEVEC (u.valp);
+    }
+  len = x.len;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, len);
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+  return *this;
+}
+
+template <int N>
+template <typename T>
+inline widest_int_storage <N>&
+widest_int_storage <N>::operator = (const T &x)
+{
+  /* Check for type compatibility.  We don't want to assign a
+     widest integer from something like a wide_int.  */
+  WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+  len = 0;
+  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N / WIDE_INT_MAX_INL_PRECISION
+					    * WIDEST_INT_MAX_PRECISION));
+  return *this;
+}
+
+template <int N>
+inline unsigned int
+widest_int_storage <N>::get_precision () const
+{
+  return N / WIDE_INT_MAX_INL_PRECISION * WIDEST_INT_MAX_PRECISION;
+}
+
+template <int N>
+inline const HOST_WIDE_INT *
+widest_int_storage <N>::get_val () const
+{
+  return UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT) ? u.valp : u.val;
+}
+
+template <int N>
+inline unsigned int
+widest_int_storage <N>::get_len () const
+{
+  return len;
+}
+
+template <int N>
+inline HOST_WIDE_INT *
+widest_int_storage <N>::write_val (unsigned int l)
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+  len = l;
+  if (UNLIKELY (l > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, l);
+      return u.valp;
+    }
+  return u.val;
+}
+
+#if GCC_VERSION >= 4007
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wfree-nonheap-object"
+#pragma GCC diagnostic ignored "-Warray-bounds="
+#pragma GCC diagnostic ignored "-Wstringop-overread"
+#endif
+
+template <int N>
+inline void
+widest_int_storage <N>::set_len (unsigned int l, bool)
+{
+  gcc_checking_assert (l <= len);
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT)
+      && l <= N / HOST_BITS_PER_WIDE_INT)
+    {
+      HOST_WIDE_INT *valp = u.valp;
+      memcpy (u.val, valp, l * sizeof (u.val[0]));
+      XDELETEVEC (valp);
+    }
+  len = l;
+  /* There are no excess bits in val[len - 1].  */
+  STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
+}
+
+#if GCC_VERSION >= 4007
+#pragma GCC diagnostic pop
+#endif
+
+/* Treat X as having signedness SGN and convert it to a WIDEST_INT (N).  */
+template <int N>
+inline WIDEST_INT (N)
+widest_int_storage <N>::from (const wide_int_ref &x, signop sgn)
+{
+  WIDEST_INT (N) result;
+  unsigned int exp_len = x.len;
+  unsigned int prec = result.get_precision ();
+  if (sgn == UNSIGNED && prec > x.precision && x.val[x.len - 1] < 0)
+    exp_len = CEIL (x.precision, HOST_BITS_PER_WIDE_INT) + 1;
+  result.set_len (wi::force_to_size (result.write_val (exp_len), x.val, x.len,
+				     x.precision, prec, sgn));
+  return result;
+}
+
+/* Create a WIDEST_INT (N) from the explicit block encoding given by
+   VAL and LEN.  NEED_CANON_P is true if the encoding may have redundant
+   trailing blocks.  */
+template <int N>
+inline WIDEST_INT (N)
+widest_int_storage <N>::from_array (const HOST_WIDE_INT *val,
+				    unsigned int len,
+				    bool need_canon_p)
+{
+  WIDEST_INT (N) result;
+  result.set_len (wi::from_array (result.write_val (len), val, len,
+				  result.get_precision (), need_canon_p));
+  return result;
+}
+
+template <int N>
+template <typename T1, typename T2>
+inline WIDEST_INT (N)
+wi::int_traits < widest_int_storage <N> >::
+get_binary_result (const T1 &, const T2 &)
+{
+  return WIDEST_INT (N) ();
+}
+
+template <int N>
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits < widest_int_storage <N> >::
+get_binary_precision (const T1 &, const T2 &)
+{
+  return N / WIDE_INT_MAX_INL_PRECISION * WIDEST_INT_MAX_PRECISION;
+}
+
 /* A reference to one element of a trailing_wide_ints structure.  */
 class trailing_wide_int_storage
 {
@@ -1359,7 +1941,7 @@ public:
   unsigned int get_len () const;
   unsigned int get_precision () const;
   const HOST_WIDE_INT *get_val () const;
-  HOST_WIDE_INT *write_val ();
+  HOST_WIDE_INT *write_val (unsigned int);
   void set_len (unsigned int, bool = false);
 
   template <typename T>
@@ -1445,7 +2027,7 @@ trailing_wide_int_storage::get_val () co
 }
 
 inline HOST_WIDE_INT *
-trailing_wide_int_storage::write_val ()
+trailing_wide_int_storage::write_val (unsigned int)
 {
   return m_val;
 }
@@ -1528,6 +2110,7 @@ namespace wi
     static const enum precision_type precision_type = FLEXIBLE_PRECISION;
     static const bool host_dependent_precision = true;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     static unsigned int get_precision (T);
     static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
   };
@@ -1699,6 +2282,7 @@ namespace wi
        precision of HOST_WIDE_INT.  */
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     static unsigned int get_precision (const wi::hwi_with_prec &);
     static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
 				      const wi::hwi_with_prec &);
@@ -1804,8 +2388,8 @@ template <typename T1, typename T2>
 inline unsigned int
 wi::get_binary_precision (const T1 &x, const T2 &y)
 {
-  return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
-			get_binary_result (x, y));
+  return wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_precision (x,
+									   y);
 }
 
 /* Copy the contents of Y to X, but keeping X's current precision.  */
@@ -1813,9 +2397,9 @@ template <typename T1, typename T2>
 inline void
 wi::copy (T1 &x, const T2 &y)
 {
-  HOST_WIDE_INT *xval = x.write_val ();
-  const HOST_WIDE_INT *yval = y.get_val ();
   unsigned int len = y.get_len ();
+  HOST_WIDE_INT *xval = x.write_val (len);
+  const HOST_WIDE_INT *yval = y.get_val ();
   unsigned int i = 0;
   do
     xval[i] = yval[i];
@@ -2162,6 +2746,8 @@ wi::bit_not (const T &x)
 {
   WI_UNARY_RESULT_VAR (result, val, T, x);
   WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len);
   for (unsigned int i = 0; i < xi.len; ++i)
     val[i] = ~xi.val[i];
   result.set_len (xi.len);
@@ -2203,6 +2789,9 @@ wi::sext (const T &x, unsigned int offse
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
 
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len,
+				 CEIL (offset, HOST_BITS_PER_WIDE_INT)));
   if (offset <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = sext_hwi (xi.ulow (), offset);
@@ -2230,6 +2819,9 @@ wi::zext (const T &x, unsigned int offse
       return result;
     }
 
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len,
+				 offset / HOST_BITS_PER_WIDE_INT + 1));
   /* In these cases we know that at least the top bit will be clear,
      so no sign extension is necessary.  */
   if (offset < HOST_BITS_PER_WIDE_INT)
@@ -2259,6 +2851,9 @@ wi::set_bit (const T &x, unsigned int bi
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len,
+				 bit / HOST_BITS_PER_WIDE_INT + 1));
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () | (HOST_WIDE_INT_1U << bit);
@@ -2280,6 +2875,8 @@ wi::bswap (const T &x)
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* bswap on widest_int makes no sense.  */
   result.set_len (bswap_large (val, xi.val, xi.len, precision));
   return result;
 }
@@ -2292,6 +2889,8 @@ wi::bitreverse (const T &x)
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* bitreverse on widest_int makes no sense.  */
   result.set_len (bitreverse_large (val, xi.val, xi.len, precision));
   return result;
 }
@@ -2368,6 +2967,8 @@ wi::bit_and (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () & yi.ulow ();
@@ -2389,6 +2990,8 @@ wi::bit_and_not (const T1 &x, const T2 &
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () & ~yi.ulow ();
@@ -2410,6 +3013,8 @@ wi::bit_or (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () | yi.ulow ();
@@ -2431,6 +3036,8 @@ wi::bit_or_not (const T1 &x, const T2 &y
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () | ~yi.ulow ();
@@ -2452,6 +3059,8 @@ wi::bit_xor (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () ^ yi.ulow ();
@@ -2472,6 +3081,8 @@ wi::add (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () + yi.ulow ();
@@ -2515,6 +3126,8 @@ wi::add (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       unsigned HOST_WIDE_INT xl = xi.ulow ();
@@ -2558,6 +3171,8 @@ wi::sub (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () - yi.ulow ();
@@ -2601,6 +3216,8 @@ wi::sub (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       unsigned HOST_WIDE_INT xl = xi.ulow ();
@@ -2643,6 +3260,8 @@ wi::mul (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len + yi.len + 2);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () * yi.ulow ();
@@ -2664,6 +3283,8 @@ wi::mul (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len + yi.len + 2);
   result.set_len (mul_internal (val, xi.val, xi.len,
 				yi.val, yi.len, precision,
 				sgn, overflow, false));
@@ -2698,6 +3319,8 @@ wi::mul_high (const T1 &x, const T2 &y,
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* mul_high on widest_int doesn't make sense.  */
   result.set_len (mul_internal (val, xi.val, xi.len,
 				yi.val, yi.len, precision,
 				sgn, 0, true));
@@ -2716,6 +3339,12 @@ wi::div_trunc (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y);
 
+  if (quotient.needs_write_val_arg)
+    quotient_val = quotient.write_val ((sgn == UNSIGNED
+					&& xi.val[xi.len - 1] < 0)
+				       ? CEIL (precision,
+					       HOST_BITS_PER_WIDE_INT) + 1
+				       : xi.len + 1);
   quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
 				     precision,
 				     yi.val, yi.len, yi.precision,
@@ -2753,6 +3382,15 @@ wi::div_floor (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2795,6 +3433,15 @@ wi::div_ceil (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2828,6 +3475,15 @@ wi::div_round (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2871,6 +3527,15 @@ wi::divmod_trunc (const T1 &x, const T2
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2915,6 +3580,8 @@ wi::mod_trunc (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (remainder.needs_write_val_arg)
+    remainder_val = remainder.write_val (yi.len);
   divmod_internal (0, &remainder_len, remainder_val,
 		   xi.val, xi.len, precision,
 		   yi.val, yi.len, yi.precision, sgn, overflow);
@@ -2955,6 +3622,15 @@ wi::mod_floor (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -2991,6 +3667,15 @@ wi::mod_ceil (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -3017,6 +3702,15 @@ wi::mod_round (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
 				     &remainder_len, remainder_val,
 				     xi.val, xi.len, precision,
@@ -3086,12 +3780,16 @@ wi::lshift (const T1 &x, const T2 &y)
   /* Handle the simple cases quickly.   */
   if (geu_p (yi, precision))
     {
+      if (result.needs_write_val_arg)
+	val = result.write_val (1);
       val[0] = 0;
       result.set_len (1);
     }
   else
     {
       unsigned int shift = yi.to_uhwi ();
+      if (result.needs_write_val_arg)
+	val = result.write_val (xi.len + shift / HOST_BITS_PER_WIDE_INT + 1);
       /* For fixed-precision integers like offset_int and widest_int,
 	 handle the case where the shift value is constant and the
 	 result is a single nonnegative HWI (meaning that we don't
@@ -3130,12 +3828,23 @@ wi::lrshift (const T1 &x, const T2 &y)
   /* Handle the simple cases quickly.   */
   if (geu_p (yi, xi.precision))
     {
+      if (result.needs_write_val_arg)
+	val = result.write_val (1);
       val[0] = 0;
       result.set_len (1);
     }
   else
     {
       unsigned int shift = yi.to_uhwi ();
+      if (result.needs_write_val_arg)
+	{
+	  unsigned int est_len = xi.len;
+	  if (xi.val[xi.len - 1] < 0 && shift)
+	    /* Logical right shift of sign-extended value might need a very
+	       large precision e.g. for widest_int.  */
+	    est_len = CEIL (xi.precision - shift, HOST_BITS_PER_WIDE_INT) + 1;
+	  val = result.write_val (est_len);
+	}
       /* For fixed-precision integers like offset_int and widest_int,
 	 handle the case where the shift value is constant and the
 	 shifted value is a single nonnegative HWI (meaning that all
@@ -3171,6 +3880,8 @@ wi::arshift (const T1 &x, const T2 &y)
      since the result can be no larger than that.  */
   WIDE_INT_REF_FOR (T1) xi (x);
   WIDE_INT_REF_FOR (T2) yi (y);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len);
   /* Handle the simple cases quickly.   */
   if (geu_p (yi, xi.precision))
     {
@@ -3374,25 +4085,56 @@ operator % (const T1 &x, const T2 &y)
   return wi::smod_trunc (x, y);
 }
 
-template<typename T>
+void gt_ggc_mx (generic_wide_int <wide_int_storage> *) = delete;
+void gt_pch_nx (generic_wide_int <wide_int_storage> *) = delete;
+void gt_pch_nx (generic_wide_int <wide_int_storage> *,
+		gt_pointer_operator, void *) = delete;
+
+inline void
+gt_ggc_mx (generic_wide_int <rwide_int_storage> *)
+{
+}
+
+inline void
+gt_pch_nx (generic_wide_int <rwide_int_storage> *)
+{
+}
+
+inline void
+gt_pch_nx (generic_wide_int <rwide_int_storage> *, gt_pointer_operator, void *)
+{
+}
+
+template<int N>
 void
-gt_ggc_mx (generic_wide_int <T> *)
+gt_ggc_mx (generic_wide_int <fixed_wide_int_storage <N> > *)
 {
 }
 
-template<typename T>
+template<int N>
 void
-gt_pch_nx (generic_wide_int <T> *)
+gt_pch_nx (generic_wide_int <fixed_wide_int_storage <N> > *)
 {
 }
 
-template<typename T>
+template<int N>
 void
-gt_pch_nx (generic_wide_int <T> *, gt_pointer_operator, void *)
+gt_pch_nx (generic_wide_int <fixed_wide_int_storage <N> > *,
+	   gt_pointer_operator, void *)
 {
 }
 
 template<int N>
+void gt_ggc_mx (generic_wide_int <widest_int_storage <N> > *) = delete;
+
+template<int N>
+void gt_pch_nx (generic_wide_int <widest_int_storage <N> > *) = delete;
+
+template<int N>
+void gt_pch_nx (generic_wide_int <widest_int_storage <N> > *,
+		gt_pointer_operator, void *) = delete;
+
+template<int N>
 void
 gt_ggc_mx (trailing_wide_ints <N> *)
 {
@@ -3465,7 +4207,7 @@ inline wide_int
 wi::mask (unsigned int width, bool negate_p, unsigned int precision)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (mask (result.write_val (), width, negate_p, precision));
+  result.set_len (mask (result.write_val (0), width, negate_p, precision));
   return result;
 }
 
@@ -3477,7 +4219,7 @@ wi::shifted_mask (unsigned int start, un
 		  unsigned int precision)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
+  result.set_len (shifted_mask (result.write_val (0), start, width, negate_p,
 				precision));
   return result;
 }
@@ -3498,8 +4240,8 @@ wi::mask (unsigned int width, bool negat
 {
   STATIC_ASSERT (wi::int_traits<T>::precision);
   T result;
-  result.set_len (mask (result.write_val (), width, negate_p,
-			wi::int_traits <T>::precision));
+  result.set_len (mask (result.write_val (width / HOST_BITS_PER_WIDE_INT + 1),
+			width, negate_p, wi::int_traits <T>::precision));
   return result;
 }
 
@@ -3512,9 +4254,13 @@ wi::shifted_mask (unsigned int start, un
 {
   STATIC_ASSERT (wi::int_traits<T>::precision);
   T result;
-  result.set_len (shifted_mask (result.write_val (), start, width,
-				negate_p,
-				wi::int_traits <T>::precision));
+  unsigned int prec = wi::int_traits <T>::precision;
+  unsigned int est_len
+    = result.needs_write_val_arg
+      ? ((start + (width > prec - start ? prec - start : width))
+	 / HOST_BITS_PER_WIDE_INT + 1) : 0;
+  result.set_len (shifted_mask (result.write_val (est_len), start, width,
+				negate_p, prec));
   return result;
 }
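
(Aside for readers skimming the raw diff: the widest_int_storage added above is essentially a small-buffer optimization, with len as the discriminant between the inline union array and a heap allocation.  A minimal standalone sketch of that scheme, with hypothetical names and a hard-coded 64-bit HWI assumption, not the actual GCC types:)

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Sketch of inline-vs-heap storage discriminated by len: values with at
// most N / 64 elements live in u.val, longer ones in a heap array reached
// through u.valp (hypothetical stand-in for widest_int_storage).
template <int N>
class sketch_storage
{
  static const unsigned inl_elts = N / 64;
  union
  {
    int64_t val[inl_elts];	// used while len <= inl_elts
    int64_t *valp;		// heap array while len > inl_elts
  } u;
  unsigned len = 0;

public:
  ~sketch_storage () { if (len > inl_elts) delete[] u.valp; }

  // Like write_val (l): release any previous heap block, then hand back
  // a buffer with room for L elements, allocating only when L exceeds
  // the inline capacity.
  int64_t *write_val (unsigned l)
  {
    if (len > inl_elts)
      delete[] u.valp;
    len = l;
    return l > inl_elts ? (u.valp = new int64_t[l]) : u.val;
  }

  // Like set_len (l): if the canonicalized value shrank back into the
  // inline array, copy it there and free the heap block.
  void set_len (unsigned l)
  {
    assert (l <= len);
    if (len > inl_elts && l <= inl_elts)
      {
	int64_t *p = u.valp;
	std::memcpy (u.val, p, l * sizeof (int64_t));
	delete[] p;
      }
    len = l;
  }

  const int64_t *get_val () const { return len > inl_elts ? u.valp : u.val; }
  unsigned get_len () const { return len; }
};
```

(The copy constructor and assignment in the real patch branch on the same discriminant; the sketch omits them for brevity.)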
 
--- gcc/godump.cc.jj	2023-10-04 16:28:04.148784815 +0200
+++ gcc/godump.cc	2023-10-05 11:36:55.219243548 +0200
@@ -1154,7 +1154,11 @@ go_output_typedef (class godump_containe
 	    snprintf (buf, sizeof buf, HOST_WIDE_INT_PRINT_UNSIGNED,
 		      tree_to_uhwi (value));
 	  else
-	    print_hex (wi::to_wide (element), buf);
+	    {
+	      wide_int w = wi::to_wide (element);
+	      gcc_assert (w.get_len () <= WIDE_INT_MAX_INL_ELTS);
+	      print_hex (w, buf);
+	    }
 
 	  mhval->value = xstrdup (buf);
 	  *slot = mhval;
--- gcc/tree-ssa-loop-ivcanon.cc.jj	2023-10-04 16:28:04.310782607 +0200
+++ gcc/tree-ssa-loop-ivcanon.cc	2023-10-05 11:36:55.219243548 +0200
@@ -622,10 +622,11 @@ remove_redundant_iv_tests (class loop *l
 	      || !integer_zerop (niter.may_be_zero)
 	      || !niter.niter
 	      || TREE_CODE (niter.niter) != INTEGER_CST
-	      || !wi::ltu_p (loop->nb_iterations_upper_bound,
+	      || !wi::ltu_p (widest_int::from (loop->nb_iterations_upper_bound,
+					       SIGNED),
 			     wi::to_widest (niter.niter)))
 	    continue;
-	  
+
 	  if (dump_file && (dump_flags & TDF_DETAILS))
 	    {
 	      fprintf (dump_file, "Removed pointless exit: ");
--- gcc/value-range-pretty-print.cc.jj	2023-10-04 16:28:04.415781176 +0200
+++ gcc/value-range-pretty-print.cc	2023-10-05 11:36:55.142244603 +0200
@@ -99,12 +99,19 @@ vrange_printer::print_irange_bitmasks (c
     return;
 
   pp_string (pp, " MASK ");
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_hex (bm.mask (), buf);
-  pp_string (pp, buf);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
+  unsigned len_mask = bm.mask ().get_len ();
+  unsigned len_val = bm.value ().get_len ();
+  unsigned len = MAX (len_mask, len_val);
+  if (len > WIDE_INT_MAX_INL_ELTS)
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  else
+    p = buf;
+  print_hex (bm.mask (), p);
+  pp_string (pp, p);
   pp_string (pp, " VALUE ");
-  print_hex (bm.value (), buf);
-  pp_string (pp, buf);
+  print_hex (bm.value (), p);
+  pp_string (pp, p);
 }
 
 void
--- gcc/print-tree.cc.jj	2023-10-04 16:28:04.257783330 +0200
+++ gcc/print-tree.cc	2023-10-05 11:36:54.630251622 +0200
@@ -365,13 +365,13 @@ print_node (FILE *file, const char *pref
     fputs (code == CALL_EXPR ? " must-tail-call" : " static", file);
   if (TREE_DEPRECATED (node))
     fputs (" deprecated", file);
-  if (TREE_UNAVAILABLE (node))
-    fputs (" unavailable", file);
   if (TREE_VISITED (node))
     fputs (" visited", file);
 
   if (code != TREE_VEC && code != INTEGER_CST && code != SSA_NAME)
     {
+      if (TREE_UNAVAILABLE (node))
+	fputs (" unavailable", file);
       if (TREE_LANG_FLAG_0 (node))
 	fputs (" tree_0", file);
       if (TREE_LANG_FLAG_1 (node))
--- gcc/wide-int-print.h.jj	2023-10-04 16:28:04.448780726 +0200
+++ gcc/wide-int-print.h	2023-10-05 11:36:54.630251622 +0200
@@ -22,7 +22,7 @@ along with GCC; see the file COPYING3.
 
 #include <stdio.h>
 
-#define WIDE_INT_PRINT_BUFFER_SIZE (WIDE_INT_MAX_PRECISION / 4 + 4)
+#define WIDE_INT_PRINT_BUFFER_SIZE (WIDE_INT_MAX_INL_PRECISION / 4 + 4)
 
 /* Printing functions.  */
 
--- gcc/dwarf2out.h.jj	2023-10-04 16:28:04.095785537 +0200
+++ gcc/dwarf2out.h	2023-10-05 11:36:54.666251128 +0200
@@ -30,7 +30,7 @@ typedef struct dw_cfi_node *dw_cfi_ref;
 typedef struct dw_loc_descr_node *dw_loc_descr_ref;
 typedef struct dw_loc_list_struct *dw_loc_list_ref;
 typedef struct dw_discr_list_node *dw_discr_list_ref;
-typedef wide_int *wide_int_ptr;
+typedef rwide_int *rwide_int_ptr;
 
 
 /* Call frames are described using a sequence of Call Frame
@@ -252,7 +252,7 @@ struct GTY(()) dw_val_node {
       unsigned HOST_WIDE_INT
 	GTY ((tag ("dw_val_class_unsigned_const"))) val_unsigned;
       double_int GTY ((tag ("dw_val_class_const_double"))) val_double;
-      wide_int_ptr GTY ((tag ("dw_val_class_wide_int"))) val_wide;
+      rwide_int_ptr GTY ((tag ("dw_val_class_wide_int"))) val_wide;
       dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec;
       struct dw_val_die_union
 	{
--- gcc/data-streamer-in.cc.jj	2023-10-04 16:28:04.025786491 +0200
+++ gcc/data-streamer-in.cc	2023-10-05 11:36:54.843248702 +0200
@@ -277,10 +277,12 @@ streamer_read_value_range (class lto_inp
 wide_int
 streamer_read_wide_int (class lto_input_block *ib)
 {
-  HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf;
   int i;
   int prec = streamer_read_uhwi (ib);
   int len = streamer_read_uhwi (ib);
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    a = XALLOCAVEC (HOST_WIDE_INT, len);
   for (i = 0; i < len; i++)
     a[i] = streamer_read_hwi (ib);
   return wide_int::from_array (a, len, prec);
@@ -292,10 +294,12 @@ streamer_read_wide_int (class lto_input_
 widest_int
 streamer_read_widest_int (class lto_input_block *ib)
 {
-  HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf;
   int i;
   int prec ATTRIBUTE_UNUSED = streamer_read_uhwi (ib);
   int len = streamer_read_uhwi (ib);
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    a = XALLOCAVEC (HOST_WIDE_INT, len);
   for (i = 0; i < len; i++)
     a[i] = streamer_read_hwi (ib);
   return widest_int::from_array (a, len);

	Jakub
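
The `streamer_read_wide_int`, `streamer_read_widest_int` and `print_irange_bitmasks` hunks above all follow the same shape: use a fixed inline buffer for the common case and fall back to a temporary allocation only when the element count exceeds `WIDE_INT_MAX_INL_ELTS`. A minimal standalone sketch of that pattern (the constant and `std::vector` fallback are illustrative stand-ins for the GCC macro and `XALLOCAVEC`, not the real ones):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative stand-in for WIDE_INT_MAX_INL_ELTS.
constexpr unsigned MAX_INL_ELTS = 4;

// Copy LEN elements from SRC into a working buffer and sum them, using a
// small on-stack array for the common case and a heap-backed buffer only
// for rare oversized inputs, mirroring the XALLOCAVEC fallback above.
uint64_t
sum_elts (const uint64_t *src, unsigned len)
{
  uint64_t inl[MAX_INL_ELTS];
  std::vector<uint64_t> big;	// stands in for XALLOCAVEC storage
  uint64_t *a = inl;
  if (len > MAX_INL_ELTS)
    {
      big.resize (len);
      a = big.data ();
    }
  for (unsigned i = 0; i < len; i++)
    a[i] = src[i];		// stands in for streamer_read_hwi
  uint64_t s = 0;
  for (unsigned i = 0; i < len; i++)
    s += a[i];
  return s;
}
```

The point of the shape is that the fast path costs nothing extra: no allocation happens unless a large `_BitInt` value actually flows through.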

[-- Attachment #2: Q012ra --]
[-- Type: text/plain, Size: 7537 bytes --]

--- gcc/tree-ssa-ccp.cc.jj	2023-08-24 15:37:29.264410998 +0200
+++ gcc/tree-ssa-ccp.cc	2023-10-06 17:20:49.504965969 +0200
@@ -1966,7 +1966,8 @@ bit_value_binop (enum tree_code code, si
 		  }
 		else
 		  {
-		    widest_int upper = wi::udiv_trunc (r1max, r2min);
+		    widest_int upper
+		      = wi::udiv_trunc (wi::zext (r1max, width), r2min);
 		    unsigned int lzcount = wi::clz (upper);
 		    unsigned int bits = wi::get_precision (upper) - lzcount;
 		    *mask = wi::mask <widest_int> (bits, false);
--- gcc/wide-int.cc.jj	2023-10-06 12:31:56.841517949 +0200
+++ gcc/wide-int.cc	2023-10-06 17:21:59.930022075 +0200
@@ -2406,6 +2406,17 @@ debug (const widest_int *ptr)
     fprintf (stderr, "<nil>\n");
 }
 
+bool wide_int_bitint_seen = false;
+
+void
+wide_int_log (const char *p, int n)
+{
+  extern const char *current_function_name (void);
+  FILE *f = fopen ("/tmp/wis", "a");
+  fprintf (f, "%d %s %s %s %d %c\n", (int) BITS_PER_WORD, main_input_filename ? main_input_filename : "-", current_function_name (), p, n, wide_int_bitint_seen ? 'y' : 'n');
+  fclose (f);
+}
+
 #if CHECKING_P
 
 namespace selftest {
--- gcc/gimple-ssa-sprintf.cc.jj	2023-01-02 09:32:20.797308227 +0100
+++ gcc/gimple-ssa-sprintf.cc	2023-10-06 17:08:45.516732616 +0200
@@ -1181,8 +1181,15 @@ adjust_range_for_overflow (tree dirtype,
 							      *argmin),
 					     size_int (dirprec)))))
     {
-      *argmin = force_fit_type (dirtype, wi::to_widest (*argmin), 0, false);
-      *argmax = force_fit_type (dirtype, wi::to_widest (*argmax), 0, false);
+      unsigned int maxprec = MAX (argprec, dirprec);
+      *argmin = force_fit_type (dirtype,
+				wide_int::from (wi::to_wide (*argmin), maxprec,
+						TYPE_SIGN (argtype)),
+				0, false);
+      *argmax = force_fit_type (dirtype,
+				wide_int::from (wi::to_wide (*argmax), maxprec,
+						TYPE_SIGN (argtype)),
+				0, false);
 
       /* If *ARGMIN is still less than *ARGMAX the conversion above
 	 is safe.  Otherwise, it has overflowed and would be unsafe.  */
--- gcc/match.pd.jj	2023-10-04 10:26:45.861259889 +0200
+++ gcc/match.pd	2023-10-06 17:09:34.435070589 +0200
@@ -6431,8 +6431,12 @@ (define_operator_list SYNC_FETCH_AND_AND
        code and here to avoid a spurious overflow flag on the resulting
        constant which fold_convert produces.  */
     (if (TREE_CODE (@1) == INTEGER_CST)
-     (cmp @00 { force_fit_type (TREE_TYPE (@00), wi::to_widest (@1), 0,
-				TREE_OVERFLOW (@1)); })
+     (cmp @00 { force_fit_type (TREE_TYPE (@00),
+				wide_int::from (wi::to_wide (@1),
+						MAX (TYPE_PRECISION (TREE_TYPE (@1)),
+						     TYPE_PRECISION (TREE_TYPE (@00))),
+						TYPE_SIGN (TREE_TYPE (@1))),
+				0, TREE_OVERFLOW (@1)); })
      (cmp @00 (convert @1)))
 
     (if (TYPE_PRECISION (TREE_TYPE (@0)) > TYPE_PRECISION (TREE_TYPE (@00)))
--- gcc/tree.cc.jj	2023-10-05 11:36:54.618251787 +0200
+++ gcc/tree.cc	2023-10-06 17:23:07.321118844 +0200
@@ -7178,6 +7178,8 @@ tree
 build_bitint_type (unsigned HOST_WIDE_INT precision, int unsignedp)
 {
   tree itype, ret;
+extern bool wide_int_bitint_seen;
+if (precision > 128) wide_int_bitint_seen = true;
 
   gcc_checking_assert (precision >= 1 + !unsignedp);
 
--- gcc/wide-int.h.jj	2023-10-06 13:12:05.720338130 +0200
+++ gcc/wide-int.h	2023-10-06 17:42:59.980139497 +0200
@@ -1206,7 +1206,11 @@ inline wide_int_storage::wide_int_storag
   WIDE_INT_REF_FOR (T) xi (x);
   precision = xi.precision;
   if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+{
+extern void wide_int_log (const char *, int);
+wide_int_log ("ctor", precision);
     u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
+}
   wi::copy (*this, xi);
 }
 
@@ -1216,6 +1220,8 @@ inline wide_int_storage::wide_int_storag
   precision = x.precision;
   if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
     {
+extern void wide_int_log (const char *, int);
+wide_int_log ("copy ctor", precision);
       u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
       memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
     }
@@ -1242,6 +1248,8 @@ wide_int_storage::operator = (const wide
   precision = x.precision;
   if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
     {
+extern void wide_int_log (const char *, int);
+wide_int_log ("operator=1", precision);
       u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
       memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
     }
@@ -1265,8 +1273,12 @@ wide_int_storage::operator = (const T &x
 	XDELETEVEC (u.valp);
       precision = xi.precision;
       if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+{
+extern void wide_int_log (const char *, int);
+wide_int_log ("operator=2", precision);
 	u.valp = XNEWVEC (HOST_WIDE_INT,
 			  CEIL (precision, HOST_BITS_PER_WIDE_INT));
+}
     }
   wi::copy (*this, xi);
   return *this;
@@ -1339,8 +1351,12 @@ wide_int_storage::create (unsigned int p
   wide_int x;
   x.precision = precision;
   if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+{
+extern void wide_int_log (const char *, int);
+wide_int_log ("create", precision);
     x.u.valp = XNEWVEC (HOST_WIDE_INT,
 			CEIL (precision, HOST_BITS_PER_WIDE_INT));
+}
   return x;
 }
 
@@ -1756,6 +1772,8 @@ widest_int_storage <N>::widest_int_stora
   len = x.len;
   if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
     {
+extern void wide_int_log (const char *, int);
+wide_int_log ("wi copy ctor", len);
       u.valp = XNEWVEC (HOST_WIDE_INT, len);
       memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
     }
@@ -1783,6 +1801,8 @@ widest_int_storage <N>::operator = (cons
   len = x.len;
   if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
     {
+extern void wide_int_log (const char *, int);
+wide_int_log ("wi operator=1", len);
       u.valp = XNEWVEC (HOST_WIDE_INT, len);
       memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
     }
@@ -1837,6 +1857,8 @@ widest_int_storage <N>::write_val (unsig
   len = l;
   if (UNLIKELY (l > N / HOST_BITS_PER_WIDE_INT))
     {
+extern void wide_int_log (const char *, int);
+wide_int_log ("wi write_val", l);
       u.valp = XNEWVEC (HOST_WIDE_INT, l);
       return u.valp;
     }
--- gcc/fold-const.cc.jj	2023-09-29 18:58:47.252895500 +0200
+++ gcc/fold-const.cc	2023-10-06 17:03:24.561076214 +0200
@@ -2137,7 +2137,10 @@ fold_convert_const_int_from_int (tree ty
   /* Given an integer constant, make new constant with new type,
      appropriately sign-extended or truncated.  Use widest_int
      so that any extension is done according ARG1's type.  */
-  return force_fit_type (type, wi::to_widest (arg1),
+  tree arg1_type = TREE_TYPE (arg1);
+  unsigned prec = MAX (TYPE_PRECISION (arg1_type), TYPE_PRECISION (type));
+  return force_fit_type (type, wide_int::from (wi::to_wide (arg1), prec,
+					       TYPE_SIGN (arg1_type)),
 			 !POINTER_TYPE_P (TREE_TYPE (arg1)),
 			 TREE_OVERFLOW (arg1));
 }
@@ -9565,8 +9568,13 @@ fold_unary_loc (location_t loc, enum tre
 	    }
 	  if (change)
 	    {
-	      tem = force_fit_type (type, wi::to_widest (and1), 0,
-				    TREE_OVERFLOW (and1));
+	      tree and1_type = TREE_TYPE (and1);
+	      unsigned prec = MAX (TYPE_PRECISION (and1_type),
+				   TYPE_PRECISION (type));
+	      tem = force_fit_type (type, 
+				    wide_int::from (wi::to_wide (and1), prec,
+						    TYPE_SIGN (and1_type)),
+				    0, TREE_OVERFLOW (and1));
 	      return fold_build2_loc (loc, BIT_AND_EXPR, type,
 				      fold_convert_loc (loc, type, and0), tem);
 	    }
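
The `fold-const.cc`, `match.pd` and `gimple-ssa-sprintf.cc` hunks above replace `wi::to_widest` with an explicit `wide_int::from` extension to `MAX (source precision, target precision)` under the source type's sign. The underlying operation is ordinary sign/zero extension; a sketch on a plain 64-bit value, with all names illustrative:

```cpp
#include <cassert>
#include <cstdint>

// Extend VAL, known to occupy the low PREC bits, to 64 bits according to
// IS_SIGNED -- the same decision wide_int::from makes from TYPE_SIGN when
// widening a constant to MAX (source precision, target precision).
uint64_t
extend (uint64_t val, unsigned prec, bool is_signed)
{
  if (prec >= 64)
    return val;
  uint64_t mask = (uint64_t{1} << prec) - 1;
  val &= mask;
  if (is_signed && ((val >> (prec - 1)) & 1))
    val |= ~mask;		// sign-extend: replicate the top bit upward
  return val;
}
```

Extending only as far as the wider of the two precisions, rather than to a full `widest_int`, is what keeps these paths off the heap once `widest_int` no longer has a fixed maximal precision.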

[-- Attachment #3: Q012rb --]
[-- Type: text/plain, Size: 1256 bytes --]

--- gcc/wide-int.h.jj	2023-10-06 15:13:31.117547151 +0200
+++ gcc/wide-int.h	2023-10-06 18:31:35.031659272 +0200
@@ -1843,13 +1843,6 @@ widest_int_storage <N>::write_val (unsig
   return u.val;
 }
 
-#if GCC_VERSION >= 4007
-#pragma GCC diagnostic push
-#pragma GCC diagnostic ignored "-Wfree-nonheap-object"
-#pragma GCC diagnostic ignored "-Warray-bounds="
-#pragma GCC diagnostic ignored "-Wstringop-overread"
-#endif
-
 template <int N>
 inline void
 widest_int_storage <N>::set_len (unsigned int l, bool)
@@ -1867,10 +1860,6 @@ widest_int_storage <N>::set_len (unsigne
   STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
 }
 
-#if GCC_VERSION >= 4007
-#pragma GCC diagnostic pop
-#endif
-
 /* Treat X as having signedness SGN and convert it to an N-bit number.  */
 template <int N>
 inline WIDEST_INT (N)
@@ -2404,7 +2393,10 @@ wi::copy (T1 &x, const T2 &y)
   do
     xval[i] = yval[i];
   while (++i < len);
-  x.set_len (len, y.is_sign_extended);
+  /* For widest_int write_val is called with an exact value, not
+     upper bound for len, so nothing is needed further.  */
+  if (!wi::int_traits <T1>::needs_write_val_arg)
+    x.set_len (len, y.is_sign_extended);
 }
 
 /* Return true if X fits in a HOST_WIDE_INT with no loss of precision.  */
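
The storage scheme these hunks instrument and tune can be pictured as a small-size-optimized union: values whose length fits the inline array live in place, larger ones behind a heap pointer, with copy constructor and destructor dispatching on the length. A minimal sketch in that spirit (all names illustrative; the real `wide_int_storage`/`widest_int_storage` in wide-int.h are considerably richer):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

constexpr unsigned INL_ELTS = 2;  // stand-in for WIDE_INT_MAX_INL_ELTS

// Small-size-optimized storage: up to INL_ELTS elements inline, larger
// values on the heap, selected by len, as in the patched wide_int_storage.
struct storage
{
  unsigned len;
  union { uint64_t val[INL_ELTS]; uint64_t *valp; } u;

  storage (const uint64_t *src, unsigned l) : len (l)
  {
    uint64_t *p = len > INL_ELTS ? (u.valp = new uint64_t[len]) : u.val;
    std::memcpy (p, src, len * sizeof (uint64_t));
  }
  storage (const storage &x) : len (x.len)
  {
    uint64_t *p = len > INL_ELTS ? (u.valp = new uint64_t[len]) : u.val;
    std::memcpy (p, x.get (), len * sizeof (uint64_t));
  }
  storage &operator= (const storage &) = delete;  // kept minimal here
  ~storage () { if (len > INL_ELTS) delete[] u.valp; }

  const uint64_t *get () const
  { return len > INL_ELTS ? u.valp : u.val; }
};
```

The logging calls inserted above exist precisely to count how often the heap branch of such a type fires in practice, so the cost of lifting the `WIDE_INT_MAX_PRECISION` limit can be measured rather than guessed.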


end of thread, other threads:[~2023-10-06 17:41 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-28 14:34 [RFC] > WIDE_INT_MAX_PREC support in wide-int Jakub Jelinek
2023-08-29  9:49 ` Richard Biener
2023-08-29 10:42   ` Richard Sandiford
2023-08-29 15:09     ` Jakub Jelinek
2023-09-28 14:03       ` [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int Jakub Jelinek
2023-09-28 15:53         ` Aldy Hernandez
2023-09-29  8:37           ` Jakub Jelinek
2023-09-29 12:04             ` Aldy Hernandez
2023-09-29  8:24         ` Jakub Jelinek
2023-09-29  9:25           ` Richard Biener
2023-09-29  9:49         ` Richard Biener
2023-09-29 10:30           ` Richard Sandiford
2023-09-29 10:58             ` Jakub Jelinek
2023-10-05 15:11         ` Jakub Jelinek
2023-10-06 17:41           ` Jakub Jelinek
2023-08-29 14:46   ` [RFC] > WIDE_INT_MAX_PREC support in wide-int Jakub Jelinek
