Date: Thu, 12 Oct 2023 11:10:12 +0000 (UTC)
From: Richard Biener
To: Jakub Jelinek
cc: Richard Sandiford, gcc-patches@gcc.gnu.org
Subject: Re: [PATCH] wide-int: Allow up to 16320 bits wide_int and change widest_int precision to 32640 bits [PR102989]
On Wed, 11 Oct 2023, Jakub Jelinek wrote:

> Hi!
>
> Here is an updated wide_int/widest_int patch.  It is on top of the
> dwarf2out.{h,cc} patch, so that stuff has been removed from it and rwide*
> removed from wide-int.h; it has the first two patches from
> https://gcc.gnu.org/pipermail/gcc-patches/2023-October/632375.html
> incorporated as well (but not the third one) and has the requested
> wide-int.cc changes in as well.
> I've bootstrapped it together with the attached checking patch, and I
> wonder whether we want to check that in as well (basically,
> -fstack-protector-like canary checking to make sure the upper bound
> estimations are right, though only in effect when using the inline array
> and only if the needed length is not equal to the length of the inline
> array).  That change discovered a bug in the division wide-int.h inlines:
> unlike what I thought, divmod_internal uses the dividend-based precision
> rather than the divisor-based precision for the remainder (if any is
> computed), and so the added checking was actually flagging buffer
> overflows in certain divisions.
> In theory, for the remainder, we could change divmod_internal such that it
> would store the minimum of dividend_len + 1 and divisor_len + 1 or
> something like that (the + 1 in there always for possible sign changes
> and/or the need to add 0 as a limb above it), but I'd prefer to change
> that incrementally, if at all.
> On the other side, it seems I was wrong about the + 2 for multiplication;
> + 1 works fine even with the checking, so there is no need to add an extra
> comment explaining something so weird.
>
> What isn't done is the move ctors (and move assignment operators?); I'm
> afraid my C++ isn't good enough to play with that and I hope they can
> also be added incrementally.  And bound_wide_int is still using
> WIDE_INT_MAX_INL_ELTS, not reduced to 128 bits (at which point we could
> probably just use offset_int), but again I hope that, if we want to change
> that, we can do it incrementally and be able to differentiate those
> changes in bisection.
>
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk (and
> just the first patch or both)?

OK for both with the adjustments Richard asked for.

Thanks,
Richard.

> 2023-10-11  Jakub Jelinek
>
> 	PR c/102989
> 	* wide-int.h: Adjust file comment.
> 	(WIDE_INT_MAX_INL_ELTS): Define to former value of
> 	WIDE_INT_MAX_ELTS.
> 	(WIDE_INT_MAX_INL_PRECISION): Define.
> 	(WIDE_INT_MAX_ELTS): Change to 255.  Assert that
> 	WIDE_INT_MAX_INL_ELTS is smaller than WIDE_INT_MAX_ELTS.
> 	(RWIDE_INT_MAX_ELTS, RWIDE_INT_MAX_PRECISION, WIDEST_INT_MAX_ELTS,
> 	WIDEST_INT_MAX_PRECISION): Define.
> 	(WI_BINARY_RESULT_VAR, WI_UNARY_RESULT_VAR): Change write_val callers
> 	to pass 0 as a new argument.
> 	(class widest_int_storage): Likewise.
> 	(widest_int, widest2_int): Change typedefs to use widest_int_storage
> 	rather than fixed_wide_int_storage.
> 	(enum wi::precision_type): Add INL_CONST_PRECISION enumerator.
> 	(struct binary_traits): Add partial specializations for
> 	INL_CONST_PRECISION.
> 	(generic_wide_int): Add needs_write_val_arg static data member.
> 	(int_traits): Likewise.
> 	(wide_int_storage): Replace val non-static data member with a union
> 	u of it and HOST_WIDE_INT *valp.  Declare copy constructor, copy
> 	assignment operator and destructor.  Add unsigned int argument to
> 	write_val.
> 	(wide_int_storage::wide_int_storage): Initialize precision to 0
> 	in the default ctor.  Remove unnecessary {}s around STATIC_ASSERTs.
> 	Assert in non-default ctor T's precision_type is not
> 	INL_CONST_PRECISION and allocate u.valp for large precision.  Add
> 	copy constructor.
> 	(wide_int_storage::~wide_int_storage): New.
> 	(wide_int_storage::operator=): Add copy assignment operator.  In
> 	assignment operator remove unnecessary {}s around STATIC_ASSERTs,
> 	assert ctor T's precision_type is not INL_CONST_PRECISION and
> 	if precision changes, deallocate and/or allocate u.valp.
> 	(wide_int_storage::get_val): Return u.valp rather than u.val for
> 	large precision.
> 	(wide_int_storage::write_val): Likewise.  Add an unused unsigned int
> 	argument.
> 	(wide_int_storage::set_len): Use write_val instead of writing val
> 	directly.
> 	(wide_int_storage::from, wide_int_storage::from_array): Adjust
> 	write_val callers.
> 	(wide_int_storage::create): Allocate u.valp for large precisions.
> 	(wi::int_traits <wide_int_storage>::get_binary_precision): New.
> 	(fixed_wide_int_storage::fixed_wide_int_storage): Make default
> 	ctor defaulted.
> 	(fixed_wide_int_storage::write_val): Add unused unsigned int
> 	argument.
> 	(fixed_wide_int_storage::from, fixed_wide_int_storage::from_array):
> 	Adjust write_val callers.
> 	(wi::int_traits <fixed_wide_int_storage <N>>::get_binary_precision):
> 	New.
> 	(WIDEST_INT): Define.
> 	(widest_int_storage): New template class.
> 	(wi::int_traits <widest_int_storage <N>>): New.
> 	(trailing_wide_int_storage::write_val): Add unused unsigned int
> 	argument.
> 	(wi::get_binary_precision): Use
> 	wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_precision
> 	rather than get_precision on get_binary_result.
> 	(wi::copy): Adjust write_val callers.  Don't call set_len if
> 	needs_write_val_arg.
> 	(wi::bit_not): If result.needs_write_val_arg, call write_val
> 	again with upper bound estimate of len.
> 	(wi::sext, wi::zext, wi::set_bit): Likewise.
> 	(wi::bit_and, wi::bit_and_not, wi::bit_or, wi::bit_or_not,
> 	wi::bit_xor, wi::add, wi::sub, wi::mul, wi::mul_high, wi::div_trunc,
> 	wi::div_floor, wi::div_ceil, wi::div_round, wi::divmod_trunc,
> 	wi::mod_trunc, wi::mod_floor, wi::mod_ceil, wi::mod_round,
> 	wi::lshift, wi::lrshift, wi::arshift): Likewise.
> 	(wi::bswap, wi::bitreverse): Assert result.needs_write_val_arg
> 	is false.
> 	(gt_ggc_mx, gt_pch_nx): Remove generic template for all
> 	generic_wide_int, instead add functions and templates for each
> 	storage of generic_wide_int.  Make functions for
> 	generic_wide_int <wide_int_storage> and templates for
> 	generic_wide_int <widest_int_storage <N>> deleted.
> 	(wi::mask, wi::shifted_mask): Adjust write_val calls.
> 	* wide-int.cc (zeros): Decrease array size to 1.
> 	(BLOCKS_NEEDED): Use CEIL.
> 	(canonize): Use HOST_WIDE_INT_M1.
> 	(wi::from_buffer): Pass 0 to write_val.
> 	(wi::to_mpz): Use CEIL.
> 	(wi::from_mpz): Likewise.  Pass 0 to write_val.  Use
> 	WIDE_INT_MAX_INL_ELTS instead of WIDE_INT_MAX_ELTS.
> 	(wi::mul_internal): Use WIDE_INT_MAX_INL_PRECISION instead of
> 	MAX_BITSIZE_MODE_ANY_INT in automatic array sizes, for prec
> 	above WIDE_INT_MAX_INL_PRECISION estimate precision from
> 	lengths of operands.  Use XALLOCAVEC allocated buffers for
> 	prec above WIDE_INT_MAX_INL_PRECISION.
> 	(wi::divmod_internal): Likewise.
> 	(wi::lshift_large): For len > WIDE_INT_MAX_INL_ELTS estimate
> 	it from xlen and skip.
> 	(rshift_large_common): Remove xprecision argument, add len
> 	argument with len computed in caller.  Don't return anything.
> 	(wi::lrshift_large, wi::arshift_large): Compute len here
> 	and pass it to rshift_large_common, for lengths above
> 	WIDE_INT_MAX_INL_ELTS using estimations from xlen if possible.
> 	(assert_deceq, assert_hexeq): For lengths above
> 	WIDE_INT_MAX_INL_ELTS use XALLOCAVEC allocated buffer.
> 	(test_printing): Use WIDE_INT_MAX_INL_PRECISION instead of
> 	WIDE_INT_MAX_PRECISION.
> 	* wide-int-print.h (WIDE_INT_PRINT_BUFFER_SIZE): Use
> 	WIDE_INT_MAX_INL_PRECISION instead of WIDE_INT_MAX_PRECISION.
> 	* wide-int-print.cc (print_decs, print_decu, print_hex): For
> 	lengths above WIDE_INT_MAX_INL_ELTS use XALLOCAVEC allocated buffer.
> 	* tree.h (wi::int_traits <extended_tree <N>>): Change precision_type
> 	to INL_CONST_PRECISION for N == ADDR_MAX_PRECISION.
> 	(widest_extended_tree): Use WIDEST_INT_MAX_PRECISION instead of
> 	WIDE_INT_MAX_PRECISION.
> 	(wi::ints_for): Use int_traits <extended_tree <N>>::precision_type
> 	instead of hard coded CONST_PRECISION.
> 	(widest2_int_cst): Use WIDEST_INT_MAX_PRECISION instead of
> 	WIDE_INT_MAX_PRECISION.
> 	(wi::extended_tree <N>::get_len): Use WIDEST_INT_MAX_PRECISION
> 	rather than WIDE_INT_MAX_PRECISION.
> 	(wi::ints_for::zero): Use
> 	wi::int_traits <extended_tree <N>>::precision_type instead of
> 	wi::CONST_PRECISION.
> 	* tree.cc (build_replicated_int_cst): Formatting fix.  Use
> 	WIDE_INT_MAX_INL_ELTS rather than WIDE_INT_MAX_ELTS.
> 	* print-tree.cc (print_node): Don't print TREE_UNAVAILABLE on
> 	INTEGER_CSTs, TREE_VECs or SSA_NAMEs.
> 	* double-int.h (wi::int_traits <double_int>::precision_type): Change
> 	to INL_CONST_PRECISION from CONST_PRECISION.
> 	* poly-int.h (struct poly_coeff_traits): Add partial specialization
> 	for wi::INL_CONST_PRECISION.
> 	* cfgloop.h (bound_wide_int): New typedef.
> 	(struct nb_iter_bound): Change bound type from widest_int to
> 	bound_wide_int.
> 	(struct loop): Change nb_iterations_upper_bound,
> 	nb_iterations_likely_upper_bound and nb_iterations_estimate type
> 	from widest_int to bound_wide_int.
> 	* cfgloop.cc (record_niter_bound): Return early if wi::min_precision
> 	of i_bound is too large for bound_wide_int.  Adjustments for the
> 	widest_int to bound_wide_int type change in non-static data members.
> 	(get_estimated_loop_iterations, get_max_loop_iterations,
> 	get_likely_max_loop_iterations): Adjustments for the widest_int to
> 	bound_wide_int type change in non-static data members.
> 	* tree-vect-loop.cc (vect_transform_loop): Likewise.
> 	* tree-ssa-loop-niter.cc (do_warn_aggressive_loop_optimizations):
> 	Use XALLOCAVEC allocated buffer for i_bound len above
> 	WIDE_INT_MAX_INL_ELTS.
> 	(record_estimate): Return early if wi::min_precision of i_bound is
> 	too large for bound_wide_int.  Adjustments for the widest_int to
> 	bound_wide_int type change in non-static data members.
> 	(wide_int_cmp): Use bound_wide_int instead of widest_int.
> 	(bound_index): Use bound_wide_int instead of widest_int.
> 	(discover_iteration_bound_by_body_walk): Likewise.  Use
> 	widest_int::from to convert it to widest_int when passed to
> 	record_niter_bound.
> 	(maybe_lower_iteration_bound): Use widest_int::from to convert it to
> 	widest_int when passed to record_niter_bound.
> 	(estimate_numbers_of_iterations): Don't record upper bound if
> 	loop->nb_iterations has too large precision for bound_wide_int.
> 	(n_of_executions_at_most): Use widest_int::from.
> 	* tree-ssa-loop-ivcanon.cc (remove_redundant_iv_tests): Adjust for
> 	the widest_int to bound_wide_int changes.
> 	* match.pd (fold_sign_changed_comparison simplification): Use
> 	wide_int::from on wi::to_wide instead of wi::to_widest.
> 	* value-range.h (irange::maybe_resize): Avoid using memcpy on
> 	non-trivially copyable elements.
> 	* value-range.cc (irange_bitmask::dump): Use XALLOCAVEC allocated
> 	buffer for mask or value len above WIDE_INT_PRINT_BUFFER_SIZE.
> 	* fold-const.cc (fold_convert_const_int_from_int, fold_unary_loc):
> 	Use wide_int::from on wi::to_wide instead of wi::to_widest.
> 	* tree-ssa-ccp.cc (bit_value_binop): Zero extend r1max from width
> 	before calling wi::udiv_trunc.
> 	* lto-streamer-out.cc (output_cfg): Adjustments for the widest_int
> 	to bound_wide_int type change in non-static data members.
> 	* lto-streamer-in.cc (input_cfg): Likewise.
> 	(lto_input_tree_1): Use WIDE_INT_MAX_INL_ELTS rather than
> 	WIDE_INT_MAX_ELTS.  For length above WIDE_INT_MAX_INL_ELTS use
> 	XALLOCAVEC allocated buffer.  Formatting fix.
> 	* data-streamer-in.cc (streamer_read_wide_int,
> 	streamer_read_widest_int): Likewise.
> 	* tree-affine.cc (aff_combination_expand): Use placement new to
> 	construct name_expansion.
> 	(free_name_expansion): Destruct name_expansion.
> 	* gimple-ssa-strength-reduction.cc (struct slsr_cand_d): Change
> 	index type from widest_int to offset_int.
> 	(class incr_info_d): Change incr type from widest_int to offset_int.
> 	(alloc_cand_and_find_basis, backtrace_base_for_ref,
> 	restructure_reference, slsr_process_ref, create_mul_ssa_cand,
> 	create_mul_imm_cand, create_add_ssa_cand, create_add_imm_cand,
> 	slsr_process_add, cand_abs_increment, replace_mult_candidate,
> 	replace_unconditional_candidate, incr_vec_index,
> 	create_add_on_incoming_edge, create_phi_basis_1,
> 	replace_conditional_candidate, record_increment,
> 	record_phi_increments_1, phi_incr_cost_1, phi_incr_cost,
> 	lowest_cost_path, total_savings, ncd_with_phi, ncd_of_cand_and_phis,
> 	nearest_common_dominator_for_cands, insert_initializers,
> 	all_phi_incrs_profitable_1, replace_one_candidate,
> 	replace_profitable_candidates): Use offset_int rather than
> 	widest_int and wi::to_offset rather than wi::to_widest.
> 	* real.cc (real_to_integer): Use WIDE_INT_MAX_INL_ELTS rather than
> 	2 * WIDE_INT_MAX_ELTS and for words above that use XALLOCAVEC
> 	allocated buffer.
> 	* tree-ssa-loop-ivopts.cc (niter_for_exit): Use placement new
> 	to construct tree_niter_desc and destruct it on failure.
> 	(free_tree_niter_desc): Destruct tree_niter_desc if value is
> 	non-NULL.
> 	* gengtype.cc (main): Remove widest_int handling.
> 	* graphite-isl-ast-to-gimple.cc (widest_int_from_isl_expr_int): Use
> 	WIDEST_INT_MAX_ELTS instead of WIDE_INT_MAX_ELTS.
> 	* gimple-ssa-warn-alloca.cc (pass_walloca::execute): Use
> 	WIDE_INT_MAX_INL_PRECISION instead of WIDE_INT_MAX_PRECISION and
> 	assert get_len () fits into it.
> 	* value-range-pretty-print.cc
> 	(vrange_printer::print_irange_bitmasks): For mask or value lengths
> 	above WIDE_INT_MAX_INL_ELTS use XALLOCAVEC allocated buffer.
> 	* gimple-ssa-sprintf.cc (adjust_range_for_overflow): Use
> 	wide_int::from on wi::to_wide instead of wi::to_widest.
> 	* omp-general.cc (score_wide_int): New typedef.
> 	(omp_context_compute_score): Use score_wide_int instead of
> 	widest_int and adjust for those changes.
> 	(struct omp_declare_variant_entry): Change score and
> 	score_in_declare_simd_clone non-static data member type from
> 	widest_int to score_wide_int.
> 	(omp_resolve_late_declare_variant, omp_resolve_declare_variant):
> 	Use score_wide_int instead of widest_int and adjust for those
> 	changes.
> 	(omp_lto_output_declare_variant_alt): Likewise.
> 	(omp_lto_input_declare_variant_alt): Likewise.
> 	* godump.cc (go_output_typedef): Assert get_len () is smaller than
> 	WIDE_INT_MAX_INL_ELTS.
> gcc/c-family/
> 	* c-warn.cc (match_case_to_enum_1): Use wi::to_wide just once
> 	instead of 3 times, assert get_len () is smaller than
> 	WIDE_INT_MAX_INL_ELTS.
> gcc/testsuite/
> 	* gcc.dg/bitint-38.c: New test.
>
> --- gcc/wide-int.h.jj	2023-10-10 11:55:51.556417840 +0200
> +++ gcc/wide-int.h	2023-10-11 13:52:51.224806205 +0200
> @@ -53,6 +53,10 @@ along with GCC; see the file COPYING3.
>      multiply, division, shifts, comparisons, and operations that need
>      overflow detected), the signedness must be specified separately.
>  
> +   For precisions up to WIDE_INT_MAX_INL_PRECISION, it uses an inline
> +   buffer in the type, for larger precisions up to WIDEST_INT_MAX_PRECISION
> +   it uses a pointer to heap allocated buffer.
> +
>      2) offset_int.  This is a fixed-precision integer that can hold
>         any address offset, measured in either bits or bytes, with at
>         least one extra sign bit.
At the moment the maximum address > @@ -79,8 +83,7 @@ along with GCC; see the file COPYING3. > 3) widest_int. This representation is an approximation of > infinite precision math. However, it is not really infinite > precision math as in the GMP library. It is really finite > - precision math where the precision is 4 times the size of the > - largest integer that the target port can represent. > + precision math where the precision is WIDEST_INT_MAX_PRECISION. > > Like offset_int, widest_int is wider than all the values that > it needs to represent, so the integers are logically signed. > @@ -231,17 +234,31 @@ along with GCC; see the file COPYING3. > can be arbitrarily different from X. */ > > /* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very > - early examination of the target's mode file. The WIDE_INT_MAX_ELTS > + early examination of the target's mode file. The WIDE_INT_MAX_INL_ELTS > can accomodate at least 1 more bit so that unsigned numbers of that > mode can be represented as a signed value. Note that it is still > possible to create fixed_wide_ints that have precisions greater than > MAX_BITSIZE_MODE_ANY_INT. This can be useful when representing a > double-width multiplication result, for example. */ > -#define WIDE_INT_MAX_ELTS \ > - ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) / HOST_BITS_PER_WIDE_INT) > - > +#define WIDE_INT_MAX_INL_ELTS \ > + ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) \ > + / HOST_BITS_PER_WIDE_INT) > + > +#define WIDE_INT_MAX_INL_PRECISION \ > + (WIDE_INT_MAX_INL_ELTS * HOST_BITS_PER_WIDE_INT) > + > +/* Precision of wide_int and largest _BitInt precision + 1 we can > + support. */ > +#define WIDE_INT_MAX_ELTS 255 > #define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT) > > +/* Precision of widest_int and largest _BitInt precision + 1 we can > + support. 
*/ > +#define WIDEST_INT_MAX_ELTS 510 > +#define WIDEST_INT_MAX_PRECISION (WIDEST_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT) > + > +STATIC_ASSERT (WIDE_INT_MAX_INL_ELTS < WIDE_INT_MAX_ELTS); > + > /* This is the max size of any pointer on any machine. It does not > seem to be as easy to sniff this out of the machine description as > it is for MAX_BITSIZE_MODE_ANY_INT since targets may support > @@ -307,17 +324,18 @@ along with GCC; see the file COPYING3. > #define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \ > WI_BINARY_RESULT (T1, T2) RESULT = \ > wi::int_traits ::get_binary_result (X, Y); \ > - HOST_WIDE_INT *VAL = RESULT.write_val () > + HOST_WIDE_INT *VAL = RESULT.write_val (0) > > /* Similar for the result of a unary operation on X, which has type T. */ > #define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \ > WI_UNARY_RESULT (T) RESULT = \ > wi::int_traits ::get_binary_result (X, X); \ > - HOST_WIDE_INT *VAL = RESULT.write_val () > + HOST_WIDE_INT *VAL = RESULT.write_val (0) > > template class generic_wide_int; > template class fixed_wide_int_storage; > class wide_int_storage; > +template class widest_int_storage; > > /* An N-bit integer. Until we can use typedef templates, use this instead. */ > #define FIXED_WIDE_INT(N) \ > @@ -325,10 +343,8 @@ class wide_int_storage; > > typedef generic_wide_int wide_int; > typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int; > -typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int; > -/* Spelled out explicitly (rather than through FIXED_WIDE_INT) > - so as not to confuse gengtype. */ > -typedef generic_wide_int < fixed_wide_int_storage > widest2_int; > +typedef generic_wide_int > widest_int; > +typedef generic_wide_int > widest2_int; > > /* wi::storage_ref can be a reference to a primitive type, > so this is the conservatively-correct setting. */ > @@ -378,8 +394,12 @@ namespace wi > /* The integer has a variable precision but no defined signedness. 
*/ > VAR_PRECISION, > > - /* The integer has a constant precision (known at GCC compile time) > - and is signed. */ > + /* The integer has a constant precision (known at GCC compile time), > + is signed and all elements are in inline buffer. */ > + INL_CONST_PRECISION, > + > + /* Like INL_CONST_PRECISION, but elements can be heap allocated for > + larger lengths. */ > CONST_PRECISION > }; > > @@ -390,7 +410,8 @@ namespace wi > Classifies the type of T. > > static const unsigned int precision; > - Only defined if precision_type == CONST_PRECISION. Specifies the > + Only defined if precision_type == INL_CONST_PRECISION or > + precision_type == CONST_PRECISION. Specifies the > precision of all integers of type T. > > static const bool host_dependent_precision; > @@ -415,9 +436,10 @@ namespace wi > struct binary_traits; > > /* Specify the result type for each supported combination of binary > - inputs. Note that CONST_PRECISION and VAR_PRECISION cannot be > - mixed, in order to give stronger type checking. When both inputs > - are CONST_PRECISION, they must have the same precision. */ > + inputs. Note that INL_CONST_PRECISION, CONST_PRECISION and > + VAR_PRECISION cannot be mixed, in order to give stronger type > + checking. When both inputs are INL_CONST_PRECISION or both are > + CONST_PRECISION, they must have the same precision. */ > template > struct binary_traits > { > @@ -434,7 +456,7 @@ namespace wi > }; > > template > - struct binary_traits > + struct binary_traits > { > /* Spelled out explicitly (rather than through FIXED_WIDE_INT) > so as not to confuse gengtype. 
*/ > @@ -447,6 +469,17 @@ namespace wi > }; > > template > + struct binary_traits > + { > + typedef generic_wide_int < widest_int_storage > + ::precision> > result_type; > + typedef result_type operator_result; > + typedef bool predicate_result; > + typedef result_type signed_shift_result_type; > + typedef bool signed_predicate_result; > + }; > + > + template > struct binary_traits > { > typedef wide_int result_type; > @@ -455,7 +488,7 @@ namespace wi > }; > > template > - struct binary_traits > + struct binary_traits > { > /* Spelled out explicitly (rather than through FIXED_WIDE_INT) > so as not to confuse gengtype. */ > @@ -468,7 +501,18 @@ namespace wi > }; > > template > - struct binary_traits > + struct binary_traits > + { > + typedef generic_wide_int < widest_int_storage > + ::precision> > result_type; > + typedef result_type operator_result; > + typedef bool predicate_result; > + typedef result_type signed_shift_result_type; > + typedef bool signed_predicate_result; > + }; > + > + template > + struct binary_traits > { > STATIC_ASSERT (int_traits ::precision == int_traits ::precision); > /* Spelled out explicitly (rather than through FIXED_WIDE_INT) > @@ -482,6 +526,18 @@ namespace wi > }; > > template > + struct binary_traits > + { > + STATIC_ASSERT (int_traits ::precision == int_traits ::precision); > + typedef generic_wide_int < widest_int_storage > + ::precision> > result_type; > + typedef result_type operator_result; > + typedef bool predicate_result; > + typedef result_type signed_shift_result_type; > + typedef bool signed_predicate_result; > + }; > + > + template > struct binary_traits > { > typedef wide_int result_type; > @@ -709,8 +765,10 @@ wi::storage_ref::get_val () const > Although not required by generic_wide_int itself, writable storage > classes can also provide the following functions: > > - HOST_WIDE_INT *write_val () > - Get a modifiable version of get_val () > + HOST_WIDE_INT *write_val (unsigned int) > + Get a modifiable version of 
get_val (). The argument should be > + upper estimation for LEN (ignored by all storages but > + widest_int_storage). > > unsigned int set_len (unsigned int len) > Set the value returned by get_len () to LEN. */ > @@ -777,6 +835,8 @@ public: > > static const bool is_sign_extended > = wi::int_traits >::is_sign_extended; > + static const bool needs_write_val_arg > + = wi::int_traits >::needs_write_val_arg; > }; > > template > @@ -1049,6 +1109,7 @@ namespace wi > static const enum precision_type precision_type = VAR_PRECISION; > static const bool host_dependent_precision = HDP; > static const bool is_sign_extended = SE; > + static const bool needs_write_val_arg = false; > }; > } > > @@ -1065,7 +1126,11 @@ namespace wi > class GTY(()) wide_int_storage > { > private: > - HOST_WIDE_INT val[WIDE_INT_MAX_ELTS]; > + union > + { > + HOST_WIDE_INT val[WIDE_INT_MAX_INL_ELTS]; > + HOST_WIDE_INT *valp; > + } GTY((skip)) u; > unsigned int len; > unsigned int precision; > > @@ -1073,14 +1138,17 @@ public: > wide_int_storage (); > template > wide_int_storage (const T &); > + wide_int_storage (const wide_int_storage &); > + ~wide_int_storage (); > > /* The standard generic_wide_int storage methods. */ > unsigned int get_precision () const; > const HOST_WIDE_INT *get_val () const; > unsigned int get_len () const; > - HOST_WIDE_INT *write_val (); > + HOST_WIDE_INT *write_val (unsigned int); > void set_len (unsigned int, bool = false); > > + wide_int_storage &operator = (const wide_int_storage &); > template > wide_int_storage &operator = (const T &); > > @@ -1099,12 +1167,15 @@ namespace wi > /* Guaranteed by a static assert in the wide_int_storage constructor. 
*/ > static const bool host_dependent_precision = false; > static const bool is_sign_extended = true; > + static const bool needs_write_val_arg = false; > template > static wide_int get_binary_result (const T1 &, const T2 &); > + template > + static unsigned int get_binary_precision (const T1 &, const T2 &); > }; > } > > -inline wide_int_storage::wide_int_storage () {} > +inline wide_int_storage::wide_int_storage () : precision (0) {} > > /* Initialize the storage from integer X, in its natural precision. > Note that we do not allow integers with host-dependent precision > @@ -1113,21 +1184,67 @@ inline wide_int_storage::wide_int_storag > template > inline wide_int_storage::wide_int_storage (const T &x) > { > - { STATIC_ASSERT (!wi::int_traits::host_dependent_precision); } > - { STATIC_ASSERT (wi::int_traits::precision_type != wi::CONST_PRECISION); } > + STATIC_ASSERT (!wi::int_traits::host_dependent_precision); > + STATIC_ASSERT (wi::int_traits::precision_type != wi::CONST_PRECISION); > + STATIC_ASSERT (wi::int_traits::precision_type != wi::INL_CONST_PRECISION); > WIDE_INT_REF_FOR (T) xi (x); > precision = xi.precision; > + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) > + u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT)); > wi::copy (*this, xi); > } > > +inline wide_int_storage::wide_int_storage (const wide_int_storage &x) > +{ > + memcpy (this, &x, sizeof (wide_int_storage)); > + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) > + { > + u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT)); > + memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT)); > + } > +} > + > +inline wide_int_storage::~wide_int_storage () > +{ > + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) > + XDELETEVEC (u.valp); > +} > + > +inline wide_int_storage& > +wide_int_storage::operator = (const wide_int_storage &x) > +{ > + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) > + { > + if (this == &x) > + return *this; 
> + XDELETEVEC (u.valp); > + } > + memcpy (this, &x, sizeof (wide_int_storage)); > + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) > + { > + u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT)); > + memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT)); > + } > + return *this; > +} > + > template > inline wide_int_storage& > wide_int_storage::operator = (const T &x) > { > - { STATIC_ASSERT (!wi::int_traits::host_dependent_precision); } > - { STATIC_ASSERT (wi::int_traits::precision_type != wi::CONST_PRECISION); } > + STATIC_ASSERT (!wi::int_traits::host_dependent_precision); > + STATIC_ASSERT (wi::int_traits::precision_type != wi::CONST_PRECISION); > + STATIC_ASSERT (wi::int_traits::precision_type != wi::INL_CONST_PRECISION); > WIDE_INT_REF_FOR (T) xi (x); > - precision = xi.precision; > + if (UNLIKELY (precision != xi.precision)) > + { > + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) > + XDELETEVEC (u.valp); > + precision = xi.precision; > + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) > + u.valp = XNEWVEC (HOST_WIDE_INT, > + CEIL (precision, HOST_BITS_PER_WIDE_INT)); > + } > wi::copy (*this, xi); > return *this; > } > @@ -1141,7 +1258,7 @@ wide_int_storage::get_precision () const > inline const HOST_WIDE_INT * > wide_int_storage::get_val () const > { > - return val; > + return UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION) ? u.valp : u.val; > } > > inline unsigned int > @@ -1151,9 +1268,9 @@ wide_int_storage::get_len () const > } > > inline HOST_WIDE_INT * > -wide_int_storage::write_val () > +wide_int_storage::write_val (unsigned int) > { > - return val; > + return UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION) ? 
u.valp : u.val; > } > > inline void > @@ -1161,8 +1278,10 @@ wide_int_storage::set_len (unsigned int > { > len = l; > if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision) > - val[len - 1] = sext_hwi (val[len - 1], > - precision % HOST_BITS_PER_WIDE_INT); > + { > + HOST_WIDE_INT &v = write_val (len)[len - 1]; > + v = sext_hwi (v, precision % HOST_BITS_PER_WIDE_INT); > + } > } > > /* Treat X as having signedness SGN and convert it to a PRECISION-bit > @@ -1172,7 +1291,7 @@ wide_int_storage::from (const wide_int_r > signop sgn) > { > wide_int result = wide_int::create (precision); > - result.set_len (wi::force_to_size (result.write_val (), x.val, x.len, > + result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len, > x.precision, precision, sgn)); > return result; > } > @@ -1185,7 +1304,7 @@ wide_int_storage::from_array (const HOST > unsigned int precision, bool need_canon_p) > { > wide_int result = wide_int::create (precision); > - result.set_len (wi::from_array (result.write_val (), val, len, precision, > + result.set_len (wi::from_array (result.write_val (len), val, len, precision, > need_canon_p)); > return result; > } > @@ -1196,6 +1315,9 @@ wide_int_storage::create (unsigned int p > { > wide_int x; > x.precision = precision; > + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) > + x.u.valp = XNEWVEC (HOST_WIDE_INT, > + CEIL (precision, HOST_BITS_PER_WIDE_INT)); > return x; > } > > @@ -1212,6 +1334,20 @@ wi::int_traits ::get_b > return wide_int::create (wi::get_precision (x)); > } > > +template > +inline unsigned int > +wi::int_traits ::get_binary_precision (const T1 &x, > + const T2 &y) > +{ > + /* This shouldn't be used for two flexible-precision inputs. 
*/ > + STATIC_ASSERT (wi::int_traits ::precision_type != FLEXIBLE_PRECISION > + || wi::int_traits ::precision_type != FLEXIBLE_PRECISION); > + if (wi::int_traits ::precision_type == FLEXIBLE_PRECISION) > + return wi::get_precision (y); > + else > + return wi::get_precision (x); > +} > + > /* The storage used by FIXED_WIDE_INT (N). */ > template > class GTY(()) fixed_wide_int_storage > @@ -1221,7 +1357,7 @@ private: > unsigned int len; > > public: > - fixed_wide_int_storage (); > + fixed_wide_int_storage () = default; > template > fixed_wide_int_storage (const T &); > > @@ -1229,7 +1365,7 @@ public: > unsigned int get_precision () const; > const HOST_WIDE_INT *get_val () const; > unsigned int get_len () const; > - HOST_WIDE_INT *write_val (); > + HOST_WIDE_INT *write_val (unsigned int); > void set_len (unsigned int, bool = false); > > static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop); > @@ -1242,18 +1378,18 @@ namespace wi > template > struct int_traits < fixed_wide_int_storage > > { > - static const enum precision_type precision_type = CONST_PRECISION; > + static const enum precision_type precision_type = INL_CONST_PRECISION; > static const bool host_dependent_precision = false; > static const bool is_sign_extended = true; > + static const bool needs_write_val_arg = false; > static const unsigned int precision = N; > template > static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &); > + template > + static unsigned int get_binary_precision (const T1 &, const T2 &); > }; > } > > -template > -inline fixed_wide_int_storage ::fixed_wide_int_storage () {} > - > /* Initialize the storage from integer X, in precision N. 
*/ > template > template > @@ -1288,7 +1424,7 @@ fixed_wide_int_storage ::get_len () c > > template > inline HOST_WIDE_INT * > -fixed_wide_int_storage ::write_val () > +fixed_wide_int_storage ::write_val (unsigned int) > { > return val; > } > @@ -1308,7 +1444,7 @@ inline FIXED_WIDE_INT (N) > fixed_wide_int_storage ::from (const wide_int_ref &x, signop sgn) > { > FIXED_WIDE_INT (N) result; > - result.set_len (wi::force_to_size (result.write_val (), x.val, x.len, > + result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len, > x.precision, N, sgn)); > return result; > } > @@ -1323,7 +1459,7 @@ fixed_wide_int_storage ::from_array ( > bool need_canon_p) > { > FIXED_WIDE_INT (N) result; > - result.set_len (wi::from_array (result.write_val (), val, len, > + result.set_len (wi::from_array (result.write_val (len), val, len, > N, need_canon_p)); > return result; > } > @@ -1337,6 +1473,236 @@ get_binary_result (const T1 &, const T2 > return FIXED_WIDE_INT (N) (); > } > > +template > +template > +inline unsigned int > +wi::int_traits < fixed_wide_int_storage >:: > +get_binary_precision (const T1 &, const T2 &) > +{ > + return N; > +} > + > +#define WIDEST_INT(N) generic_wide_int < widest_int_storage > > + > +/* The storage used by widest_int. */ > +template > +class GTY(()) widest_int_storage > +{ > +private: > + union > + { > + HOST_WIDE_INT val[WIDE_INT_MAX_INL_ELTS]; > + HOST_WIDE_INT *valp; > + } GTY((skip)) u; > + unsigned int len; > + > +public: > + widest_int_storage (); > + widest_int_storage (const widest_int_storage &); > + template > + widest_int_storage (const T &); > + ~widest_int_storage (); > + widest_int_storage &operator = (const widest_int_storage &); > + template > + inline widest_int_storage& operator = (const T &); > + > + /* The standard generic_wide_int storage methods. 
*/ > + unsigned int get_precision () const; > + const HOST_WIDE_INT *get_val () const; > + unsigned int get_len () const; > + HOST_WIDE_INT *write_val (unsigned int); > + void set_len (unsigned int, bool = false); > + > + static WIDEST_INT (N) from (const wide_int_ref &, signop); > + static WIDEST_INT (N) from_array (const HOST_WIDE_INT *, unsigned int, > + bool = true); > +}; > + > +namespace wi > +{ > + template > + struct int_traits < widest_int_storage > > + { > + static const enum precision_type precision_type = CONST_PRECISION; > + static const bool host_dependent_precision = false; > + static const bool is_sign_extended = true; > + static const bool needs_write_val_arg = true; > + static const unsigned int precision = N; > + template > + static WIDEST_INT (N) get_binary_result (const T1 &, const T2 &); > + template > + static unsigned int get_binary_precision (const T1 &, const T2 &); > + }; > +} > + > +template > +inline widest_int_storage ::widest_int_storage () : len (0) {} > + > +/* Initialize the storage from integer X, in precision N. */ > +template > +template > +inline widest_int_storage ::widest_int_storage (const T &x) : len (0) > +{ > + /* Check for type compatibility. We don't want to initialize a > + widest integer from something like a wide_int. 
*/ > + WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED; > + wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N)); > +} > + > +template > +inline > +widest_int_storage ::widest_int_storage (const widest_int_storage &x) > +{ > + memcpy (this, &x, sizeof (widest_int_storage)); > + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) > + { > + u.valp = XNEWVEC (HOST_WIDE_INT, len); > + memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT)); > + } > +} > + > +template > +inline widest_int_storage ::~widest_int_storage () > +{ > + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) > + XDELETEVEC (u.valp); > +} > + > +template > +inline widest_int_storage & > +widest_int_storage ::operator = (const widest_int_storage &x) > +{ > + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) > + { > + if (this == &x) > + return *this; > + XDELETEVEC (u.valp); > + } > + memcpy (this, &x, sizeof (widest_int_storage)); > + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) > + { > + u.valp = XNEWVEC (HOST_WIDE_INT, len); > + memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT)); > + } > + return *this; > +} > + > +template > +template > +inline widest_int_storage & > +widest_int_storage ::operator = (const T &x) > +{ > + /* Check for type compatibility. We don't want to assign a > + widest integer from something like a wide_int. */ > + WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED; > + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) > + XDELETEVEC (u.valp); > + len = 0; > + wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N)); > + return *this; > +} > + > +template > +inline unsigned int > +widest_int_storage ::get_precision () const > +{ > + return N; > +} > + > +template > +inline const HOST_WIDE_INT * > +widest_int_storage ::get_val () const > +{ > + return UNLIKELY (len > WIDE_INT_MAX_INL_ELTS) ? 
u.valp : u.val; > +} > + > +template > +inline unsigned int > +widest_int_storage ::get_len () const > +{ > + return len; > +} > + > +template > +inline HOST_WIDE_INT * > +widest_int_storage ::write_val (unsigned int l) > +{ > + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) > + XDELETEVEC (u.valp); > + len = l; > + if (UNLIKELY (l > WIDE_INT_MAX_INL_ELTS)) > + { > + u.valp = XNEWVEC (HOST_WIDE_INT, l); > + return u.valp; > + } > + return u.val; > +} > + > +template > +inline void > +widest_int_storage ::set_len (unsigned int l, bool) > +{ > + gcc_checking_assert (l <= len); > + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS) > + && l <= WIDE_INT_MAX_INL_ELTS) > + { > + HOST_WIDE_INT *valp = u.valp; > + memcpy (u.val, valp, l * sizeof (u.val[0])); > + XDELETEVEC (valp); > + } > + len = l; > + /* There are no excess bits in val[len - 1]. */ > + STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0); > +} > + > +/* Treat X as having signedness SGN and convert it to an N-bit number. */ > +template > +inline WIDEST_INT (N) > +widest_int_storage ::from (const wide_int_ref &x, signop sgn) > +{ > + WIDEST_INT (N) result; > + unsigned int exp_len = x.len; > + unsigned int prec = result.get_precision (); > + if (sgn == UNSIGNED && prec > x.precision && x.val[x.len - 1] < 0) > + exp_len = CEIL (x.precision, HOST_BITS_PER_WIDE_INT) + 1; > + result.set_len (wi::force_to_size (result.write_val (exp_len), x.val, x.len, > + x.precision, prec, sgn)); > + return result; > +} > + > +/* Create a WIDEST_INT (N) from the explicit block encoding given by > + VAL and LEN. NEED_CANON_P is true if the encoding may have redundant > + trailing blocks. 
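For readers following along: the inline-array/heap-pointer scheme used by widest_int_storage above (a union of `val[]` and `valp`, discriminated by `len`) can be sketched in isolation roughly as below. All names here (`small_vec`, `INL_ELTS`) are illustrative, not GCC's, and plain `new`/`delete` stand in for `XNEWVEC`/`XDELETEVEC`; this is only a sketch of the ownership logic, not the actual implementation.

```cpp
#include <cassert>
#include <cstring>

/* Sketch: small-buffer storage with a heap fallback.  Up to INL_ELTS
   elements live in the in-object array; longer values spill to a heap
   allocation, so copy, assign and destroy must track which
   representation is active.  */
struct small_vec
{
  static const unsigned INL_ELTS = 4;
  union { long val[INL_ELTS]; long *valp; } u;
  unsigned len;

  small_vec () : len (0) {}
  ~small_vec () { if (len > INL_ELTS) delete[] u.valp; }

  small_vec (const small_vec &x) : len (x.len)
  {
    if (len > INL_ELTS)
      {
	u.valp = new long[len];
	std::memcpy (u.valp, x.u.valp, len * sizeof (long));
      }
    else
      std::memcpy (u.val, x.u.val, len * sizeof (long));
  }

  small_vec &operator= (const small_vec &x)
  {
    if (this == &x)
      return *this;
    if (len > INL_ELTS)
      delete[] u.valp;
    len = x.len;
    if (len > INL_ELTS)
      {
	u.valp = new long[len];
	std::memcpy (u.valp, x.u.valp, len * sizeof (long));
      }
    else
      std::memcpy (u.val, x.u.val, len * sizeof (long));
    return *this;
  }

  /* write_val analogue: the caller promises to store at most L elements;
     any previous heap buffer is released first.  */
  long *write_val (unsigned l)
  {
    if (len > INL_ELTS)
      delete[] u.valp;
    len = l;
    return l > INL_ELTS ? (u.valp = new long[l]) : u.val;
  }

  const long *get_val () const { return len > INL_ELTS ? u.valp : u.val; }
  unsigned get_len () const { return len; }
};
```

The key invariant mirrored from the patch: `len` alone decides whether `u.valp` owns memory, which is why `write_val` must free before it resets `len`.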
*/ > +template > +inline WIDEST_INT (N) > +widest_int_storage ::from_array (const HOST_WIDE_INT *val, > + unsigned int len, > + bool need_canon_p) > +{ > + WIDEST_INT (N) result; > + result.set_len (wi::from_array (result.write_val (len), val, len, > + result.get_precision (), need_canon_p)); > + return result; > +} > + > +template > +template > +inline WIDEST_INT (N) > +wi::int_traits < widest_int_storage >:: > +get_binary_result (const T1 &, const T2 &) > +{ > + return WIDEST_INT (N) (); > +} > + > +template > +template > +inline unsigned int > +wi::int_traits < widest_int_storage >:: > +get_binary_precision (const T1 &, const T2 &) > +{ > + return N; > +} > + > /* A reference to one element of a trailing_wide_ints structure. */ > class trailing_wide_int_storage > { > @@ -1359,7 +1725,7 @@ public: > unsigned int get_len () const; > unsigned int get_precision () const; > const HOST_WIDE_INT *get_val () const; > - HOST_WIDE_INT *write_val (); > + HOST_WIDE_INT *write_val (unsigned int); > void set_len (unsigned int, bool = false); > > template > @@ -1445,7 +1811,7 @@ trailing_wide_int_storage::get_val () co > } > > inline HOST_WIDE_INT * > -trailing_wide_int_storage::write_val () > +trailing_wide_int_storage::write_val (unsigned int) > { > return m_val; > } > @@ -1528,6 +1894,7 @@ namespace wi > static const enum precision_type precision_type = FLEXIBLE_PRECISION; > static const bool host_dependent_precision = true; > static const bool is_sign_extended = true; > + static const bool needs_write_val_arg = false; > static unsigned int get_precision (T); > static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T); > }; > @@ -1699,6 +2066,7 @@ namespace wi > precision of HOST_WIDE_INT. 
*/ > static const bool host_dependent_precision = false; > static const bool is_sign_extended = true; > + static const bool needs_write_val_arg = false; > static unsigned int get_precision (const wi::hwi_with_prec &); > static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, > const wi::hwi_with_prec &); > @@ -1804,8 +2172,8 @@ template > inline unsigned int > wi::get_binary_precision (const T1 &x, const T2 &y) > { > - return get_precision (wi::int_traits :: > - get_binary_result (x, y)); > + using res_traits = wi::int_traits ; > + return res_traits::get_binary_precision (x, y); > } > > /* Copy the contents of Y to X, but keeping X's current precision. */ > @@ -1813,14 +2181,17 @@ template > inline void > wi::copy (T1 &x, const T2 &y) > { > - HOST_WIDE_INT *xval = x.write_val (); > - const HOST_WIDE_INT *yval = y.get_val (); > unsigned int len = y.get_len (); > + HOST_WIDE_INT *xval = x.write_val (len); > + const HOST_WIDE_INT *yval = y.get_val (); > unsigned int i = 0; > do > xval[i] = yval[i]; > while (++i < len); > - x.set_len (len, y.is_sign_extended); > + /* For widest_int write_val is called with an exact value, not > + upper bound for len, so nothing is needed further. */ > + if (!wi::int_traits ::needs_write_val_arg) > + x.set_len (len, y.is_sign_extended); > } > > /* Return true if X fits in a HOST_WIDE_INT with no loss of precision. 
*/ > @@ -2162,6 +2533,8 @@ wi::bit_not (const T &x) > { > WI_UNARY_RESULT_VAR (result, val, T, x); > WIDE_INT_REF_FOR (T) xi (x, get_precision (result)); > + if (result.needs_write_val_arg) > + val = result.write_val (xi.len); > for (unsigned int i = 0; i < xi.len; ++i) > val[i] = ~xi.val[i]; > result.set_len (xi.len); > @@ -2203,6 +2576,9 @@ wi::sext (const T &x, unsigned int offse > unsigned int precision = get_precision (result); > WIDE_INT_REF_FOR (T) xi (x, precision); > > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, > + CEIL (offset, HOST_BITS_PER_WIDE_INT))); > if (offset <= HOST_BITS_PER_WIDE_INT) > { > val[0] = sext_hwi (xi.ulow (), offset); > @@ -2230,6 +2606,9 @@ wi::zext (const T &x, unsigned int offse > return result; > } > > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, > + offset / HOST_BITS_PER_WIDE_INT + 1)); > /* In these cases we know that at least the top bit will be clear, > so no sign extension is necessary. 
*/ > if (offset < HOST_BITS_PER_WIDE_INT) > @@ -2259,6 +2638,9 @@ wi::set_bit (const T &x, unsigned int bi > WI_UNARY_RESULT_VAR (result, val, T, x); > unsigned int precision = get_precision (result); > WIDE_INT_REF_FOR (T) xi (x, precision); > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, > + bit / HOST_BITS_PER_WIDE_INT + 1)); > if (precision <= HOST_BITS_PER_WIDE_INT) > { > val[0] = xi.ulow () | (HOST_WIDE_INT_1U << bit); > @@ -2280,6 +2662,8 @@ wi::bswap (const T &x) > WI_UNARY_RESULT_VAR (result, val, T, x); > unsigned int precision = get_precision (result); > WIDE_INT_REF_FOR (T) xi (x, precision); > + static_assert (!result.needs_write_val_arg, > + "bswap on widest_int makes no sense"); > result.set_len (bswap_large (val, xi.val, xi.len, precision)); > return result; > } > @@ -2292,6 +2676,8 @@ wi::bitreverse (const T &x) > WI_UNARY_RESULT_VAR (result, val, T, x); > unsigned int precision = get_precision (result); > WIDE_INT_REF_FOR (T) xi (x, precision); > + static_assert (!result.needs_write_val_arg, > + "bitreverse on widest_int makes no sense"); > result.set_len (bitreverse_large (val, xi.val, xi.len, precision)); > return result; > } > @@ -2368,6 +2754,8 @@ wi::bit_and (const T1 &x, const T2 &y) > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended; > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, yi.len)); > if (LIKELY (xi.len + yi.len == 2)) > { > val[0] = xi.ulow () & yi.ulow (); > @@ -2389,6 +2777,8 @@ wi::bit_and_not (const T1 &x, const T2 & > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended; > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, yi.len)); > if (LIKELY (xi.len + yi.len == 2)) > { > val[0] = xi.ulow () & ~yi.ulow (); > @@ -2410,6 +2800,8 @@ wi::bit_or 
(const T1 &x, const T2 &y) > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended; > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, yi.len)); > if (LIKELY (xi.len + yi.len == 2)) > { > val[0] = xi.ulow () | yi.ulow (); > @@ -2431,6 +2823,8 @@ wi::bit_or_not (const T1 &x, const T2 &y > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended; > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, yi.len)); > if (LIKELY (xi.len + yi.len == 2)) > { > val[0] = xi.ulow () | ~yi.ulow (); > @@ -2452,6 +2846,8 @@ wi::bit_xor (const T1 &x, const T2 &y) > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended; > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, yi.len)); > if (LIKELY (xi.len + yi.len == 2)) > { > val[0] = xi.ulow () ^ yi.ulow (); > @@ -2472,6 +2868,8 @@ wi::add (const T1 &x, const T2 &y) > unsigned int precision = get_precision (result); > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, yi.len) + 1); > if (precision <= HOST_BITS_PER_WIDE_INT) > { > val[0] = xi.ulow () + yi.ulow (); > @@ -2515,6 +2913,8 @@ wi::add (const T1 &x, const T2 &y, signo > unsigned int precision = get_precision (result); > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, yi.len) + 1); > if (precision <= HOST_BITS_PER_WIDE_INT) > { > unsigned HOST_WIDE_INT xl = xi.ulow (); > @@ -2558,6 +2958,8 @@ wi::sub (const T1 &x, const T2 &y) > unsigned int precision = get_precision (result); > WIDE_INT_REF_FOR 
(T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, yi.len) + 1); > if (precision <= HOST_BITS_PER_WIDE_INT) > { > val[0] = xi.ulow () - yi.ulow (); > @@ -2601,6 +3003,8 @@ wi::sub (const T1 &x, const T2 &y, signo > unsigned int precision = get_precision (result); > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > + if (result.needs_write_val_arg) > + val = result.write_val (MAX (xi.len, yi.len) + 1); > if (precision <= HOST_BITS_PER_WIDE_INT) > { > unsigned HOST_WIDE_INT xl = xi.ulow (); > @@ -2643,6 +3047,8 @@ wi::mul (const T1 &x, const T2 &y) > unsigned int precision = get_precision (result); > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > + if (result.needs_write_val_arg) > + val = result.write_val (xi.len + yi.len + 2); > if (precision <= HOST_BITS_PER_WIDE_INT) > { > val[0] = xi.ulow () * yi.ulow (); > @@ -2664,6 +3070,8 @@ wi::mul (const T1 &x, const T2 &y, signo > unsigned int precision = get_precision (result); > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > + if (result.needs_write_val_arg) > + val = result.write_val (xi.len + yi.len + 2); > result.set_len (mul_internal (val, xi.val, xi.len, > yi.val, yi.len, precision, > sgn, overflow, false)); > @@ -2698,6 +3106,8 @@ wi::mul_high (const T1 &x, const T2 &y, > unsigned int precision = get_precision (result); > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y, precision); > + static_assert (!result.needs_write_val_arg, > + "mul_high on widest_int doesn't make sense"); > result.set_len (mul_internal (val, xi.val, xi.len, > yi.val, yi.len, precision, > sgn, 0, true)); > @@ -2716,6 +3126,12 @@ wi::div_trunc (const T1 &x, const T2 &y, > WIDE_INT_REF_FOR (T1) xi (x, precision); > WIDE_INT_REF_FOR (T2) yi (y); > > + if (quotient.needs_write_val_arg) > + quotient_val = 
quotient.write_val ((sgn == UNSIGNED > + && xi.val[xi.len - 1] < 0) > + ? CEIL (precision, > + HOST_BITS_PER_WIDE_INT) + 1 > + : xi.len + 1); > quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len, > precision, > yi.val, yi.len, yi.precision, > @@ -2753,6 +3169,16 @@ wi::div_floor (const T1 &x, const T2 &y, > WIDE_INT_REF_FOR (T2) yi (y); > > unsigned int remainder_len; > + if (quotient.needs_write_val_arg) > + { > + unsigned int est_len; > + if (sgn == UNSIGNED && xi.val[xi.len - 1] < 0) > + est_len = CEIL (precision, HOST_BITS_PER_WIDE_INT) + 1; > + else > + est_len = xi.len + 1; > + quotient_val = quotient.write_val (est_len); > + remainder_val = remainder.write_val (est_len); > + } > quotient.set_len (divmod_internal (quotient_val, > &remainder_len, remainder_val, > xi.val, xi.len, precision, > @@ -2795,6 +3221,16 @@ wi::div_ceil (const T1 &x, const T2 &y, > WIDE_INT_REF_FOR (T2) yi (y); > > unsigned int remainder_len; > + if (quotient.needs_write_val_arg) > + { > + unsigned int est_len; > + if (sgn == UNSIGNED && xi.val[xi.len - 1] < 0) > + est_len = CEIL (precision, HOST_BITS_PER_WIDE_INT) + 1; > + else > + est_len = xi.len + 1; > + quotient_val = quotient.write_val (est_len); > + remainder_val = remainder.write_val (est_len); > + } > quotient.set_len (divmod_internal (quotient_val, > &remainder_len, remainder_val, > xi.val, xi.len, precision, > @@ -2828,6 +3264,16 @@ wi::div_round (const T1 &x, const T2 &y, > WIDE_INT_REF_FOR (T2) yi (y); > > unsigned int remainder_len; > + if (quotient.needs_write_val_arg) > + { > + unsigned int est_len; > + if (sgn == UNSIGNED && xi.val[xi.len - 1] < 0) > + est_len = CEIL (precision, HOST_BITS_PER_WIDE_INT) + 1; > + else > + est_len = xi.len + 1; > + quotient_val = quotient.write_val (est_len); > + remainder_val = remainder.write_val (est_len); > + } > quotient.set_len (divmod_internal (quotient_val, > &remainder_len, remainder_val, > xi.val, xi.len, precision, > @@ -2871,6 +3317,16 @@ wi::divmod_trunc 
(const T1 &x, const T2 > WIDE_INT_REF_FOR (T2) yi (y); > > unsigned int remainder_len; > + if (quotient.needs_write_val_arg) > + { > + unsigned int est_len; > + if (sgn == UNSIGNED && xi.val[xi.len - 1] < 0) > + est_len = CEIL (precision, HOST_BITS_PER_WIDE_INT) + 1; > + else > + est_len = xi.len + 1; > + quotient_val = quotient.write_val (est_len); > + remainder_val = remainder.write_val (est_len); > + } > quotient.set_len (divmod_internal (quotient_val, > &remainder_len, remainder_val, > xi.val, xi.len, precision, > @@ -2915,6 +3371,12 @@ wi::mod_trunc (const T1 &x, const T2 &y, > WIDE_INT_REF_FOR (T2) yi (y); > > unsigned int remainder_len; > + if (remainder.needs_write_val_arg) > + remainder_val = remainder.write_val ((sgn == UNSIGNED > + && xi.val[xi.len - 1] < 0) > + ? CEIL (precision, > + HOST_BITS_PER_WIDE_INT) + 1 > + : xi.len + 1); > divmod_internal (0, &remainder_len, remainder_val, > xi.val, xi.len, precision, > yi.val, yi.len, yi.precision, sgn, overflow); > @@ -2955,6 +3417,16 @@ wi::mod_floor (const T1 &x, const T2 &y, > WIDE_INT_REF_FOR (T2) yi (y); > > unsigned int remainder_len; > + if (quotient.needs_write_val_arg) > + { > + unsigned int est_len; > + if (sgn == UNSIGNED && xi.val[xi.len - 1] < 0) > + est_len = CEIL (precision, HOST_BITS_PER_WIDE_INT) + 1; > + else > + est_len = xi.len + 1; > + quotient_val = quotient.write_val (est_len); > + remainder_val = remainder.write_val (est_len); > + } > quotient.set_len (divmod_internal (quotient_val, > &remainder_len, remainder_val, > xi.val, xi.len, precision, > @@ -2991,6 +3463,16 @@ wi::mod_ceil (const T1 &x, const T2 &y, > WIDE_INT_REF_FOR (T2) yi (y); > > unsigned int remainder_len; > + if (quotient.needs_write_val_arg) > + { > + unsigned int est_len; > + if (sgn == UNSIGNED && xi.val[xi.len - 1] < 0) > + est_len = CEIL (precision, HOST_BITS_PER_WIDE_INT) + 1; > + else > + est_len = xi.len + 1; > + quotient_val = quotient.write_val (est_len); > + remainder_val = remainder.write_val (est_len); > + } 
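The est_len computation above is repeated verbatim in div_trunc, div_floor, div_ceil, div_round, divmod_trunc and the mod_* variants. Its effect can be captured in one helper (sketch only; the helper name is hypothetical, and HBPWI stands in for HOST_BITS_PER_WIDE_INT): a sign-extended negative top block reinterpreted as UNSIGNED may occupy every block of the full precision plus a guard block, while in all other cases `xi.len + 1` suffices. Note this is dividend-based, which matches the observation in the cover text that divmod_internal sizes the remainder from the dividend's precision.

```cpp
#include <cassert>

enum signop { SIGNED, UNSIGNED };
const unsigned HBPWI = 64;

/* Upper bound on the number of blocks divmod_internal may write for the
   quotient or remainder, per the repeated estimate in the patch.  */
unsigned
divmod_est_len (signop sgn, long top_block, unsigned xlen, unsigned precision)
{
  if (sgn == UNSIGNED && top_block < 0)
    /* Sign-extended negative value read as unsigned: the zero-extended
       form may need every block of the precision, plus one more.  */
    return (precision + HBPWI - 1) / HBPWI + 1;
  return xlen + 1;
}
```

For example, a one-block -1 at 192-bit precision treated as UNSIGNED needs up to CEIL (192, 64) + 1 = 4 blocks, whereas the same value treated as SIGNED needs only 2.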
> quotient.set_len (divmod_internal (quotient_val, > &remainder_len, remainder_val, > xi.val, xi.len, precision, > @@ -3017,6 +3499,16 @@ wi::mod_round (const T1 &x, const T2 &y, > WIDE_INT_REF_FOR (T2) yi (y); > > unsigned int remainder_len; > + if (quotient.needs_write_val_arg) > + { > + unsigned int est_len; > + if (sgn == UNSIGNED && xi.val[xi.len - 1] < 0) > + est_len = CEIL (precision, HOST_BITS_PER_WIDE_INT) + 1; > + else > + est_len = xi.len + 1; > + quotient_val = quotient.write_val (est_len); > + remainder_val = remainder.write_val (est_len); > + } > quotient.set_len (divmod_internal (quotient_val, > &remainder_len, remainder_val, > xi.val, xi.len, precision, > @@ -3086,12 +3578,16 @@ wi::lshift (const T1 &x, const T2 &y) > /* Handle the simple cases quickly. */ > if (geu_p (yi, precision)) > { > + if (result.needs_write_val_arg) > + val = result.write_val (1); > val[0] = 0; > result.set_len (1); > } > else > { > unsigned int shift = yi.to_uhwi (); > + if (result.needs_write_val_arg) > + val = result.write_val (xi.len + shift / HOST_BITS_PER_WIDE_INT + 1); > /* For fixed-precision integers like offset_int and widest_int, > handle the case where the shift value is constant and the > result is a single nonnegative HWI (meaning that we don't > @@ -3130,12 +3626,23 @@ wi::lrshift (const T1 &x, const T2 &y) > /* Handle the simple cases quickly. */ > if (geu_p (yi, xi.precision)) > { > + if (result.needs_write_val_arg) > + val = result.write_val (1); > val[0] = 0; > result.set_len (1); > } > else > { > unsigned int shift = yi.to_uhwi (); > + if (result.needs_write_val_arg) > + { > + unsigned int est_len = xi.len; > + if (xi.val[xi.len - 1] < 0 && shift) > + /* Logical right shift of sign-extended value might need a very > + large precision e.g. for widest_int. 
*/ > + est_len = CEIL (xi.precision - shift, HOST_BITS_PER_WIDE_INT) + 1; > + val = result.write_val (est_len); > + } > /* For fixed-precision integers like offset_int and widest_int, > handle the case where the shift value is constant and the > shifted value is a single nonnegative HWI (meaning that all > @@ -3171,6 +3678,8 @@ wi::arshift (const T1 &x, const T2 &y) > since the result can be no larger than that. */ > WIDE_INT_REF_FOR (T1) xi (x); > WIDE_INT_REF_FOR (T2) yi (y); > + if (result.needs_write_val_arg) > + val = result.write_val (xi.len); > /* Handle the simple cases quickly. */ > if (geu_p (yi, xi.precision)) > { > @@ -3374,25 +3883,41 @@ operator % (const T1 &x, const T2 &y) > return wi::smod_trunc (x, y); > } > > -template > +void gt_ggc_mx (generic_wide_int *) = delete; > +void gt_pch_nx (generic_wide_int *) = delete; > +void gt_pch_nx (generic_wide_int *, > + gt_pointer_operator, void *) = delete; > + > +template > void > -gt_ggc_mx (generic_wide_int *) > +gt_ggc_mx (generic_wide_int > *) > { > } > > -template > +template > void > -gt_pch_nx (generic_wide_int *) > +gt_pch_nx (generic_wide_int > *) > { > } > > -template > +template > void > -gt_pch_nx (generic_wide_int *, gt_pointer_operator, void *) > +gt_pch_nx (generic_wide_int > *, > + gt_pointer_operator, void *) > { > } > > template > +void gt_ggc_mx (generic_wide_int > *) = delete; > + > +template > +void gt_pch_nx (generic_wide_int > *) = delete; > + > +template > +void gt_pch_nx (generic_wide_int > *, > + gt_pointer_operator, void *) = delete; > + > +template > void > gt_ggc_mx (trailing_wide_ints *) > { > @@ -3465,7 +3990,7 @@ inline wide_int > wi::mask (unsigned int width, bool negate_p, unsigned int precision) > { > wide_int result = wide_int::create (precision); > - result.set_len (mask (result.write_val (), width, negate_p, precision)); > + result.set_len (mask (result.write_val (0), width, negate_p, precision)); > return result; > } > > @@ -3477,7 +4002,7 @@ wi::shifted_mask (unsigned 
int start, un > unsigned int precision) > { > wide_int result = wide_int::create (precision); > - result.set_len (shifted_mask (result.write_val (), start, width, negate_p, > + result.set_len (shifted_mask (result.write_val (0), start, width, negate_p, > precision)); > return result; > } > @@ -3498,8 +4023,8 @@ wi::mask (unsigned int width, bool negat > { > STATIC_ASSERT (wi::int_traits::precision); > T result; > - result.set_len (mask (result.write_val (), width, negate_p, > - wi::int_traits ::precision)); > + result.set_len (mask (result.write_val (width / HOST_BITS_PER_WIDE_INT + 1), > + width, negate_p, wi::int_traits ::precision)); > return result; > } > > @@ -3512,9 +4037,13 @@ wi::shifted_mask (unsigned int start, un > { > STATIC_ASSERT (wi::int_traits::precision); > T result; > - result.set_len (shifted_mask (result.write_val (), start, width, > - negate_p, > - wi::int_traits ::precision)); > + unsigned int prec = wi::int_traits ::precision; > + unsigned int est_len > + = result.needs_write_val_arg > + ? ((start + (width > prec - start ? prec - start : width)) > + / HOST_BITS_PER_WIDE_INT + 1) : 0; > + result.set_len (shifted_mask (result.write_val (est_len), start, width, > + negate_p, prec)); > return result; > } > > --- gcc/wide-int.cc.jj 2023-10-10 11:55:51.554417867 +0200 > +++ gcc/wide-int.cc 2023-10-11 14:41:23.719132402 +0200 > @@ -51,7 +51,7 @@ typedef unsigned int UDWtype __attribute > #include "longlong.h" > #endif > > -static const HOST_WIDE_INT zeros[WIDE_INT_MAX_ELTS] = {}; > +static const HOST_WIDE_INT zeros[1] = {}; > > /* > * Internal utilities. > @@ -62,8 +62,7 @@ static const HOST_WIDE_INT zeros[WIDE_IN > #define HALF_INT_MASK ((HOST_WIDE_INT_1 << HOST_BITS_PER_HALF_WIDE_INT) - 1) > > #define BLOCK_OF(TARGET) ((TARGET) / HOST_BITS_PER_WIDE_INT) > -#define BLOCKS_NEEDED(PREC) \ > - (PREC ? (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT) : 1) > +#define BLOCKS_NEEDED(PREC) (PREC ? 
CEIL (PREC, HOST_BITS_PER_WIDE_INT) : 1) > #define SIGN_MASK(X) ((HOST_WIDE_INT) (X) < 0 ? -1 : 0) > > /* Return the value a VAL[I] if I < LEN, otherwise, return 0 or -1 > @@ -96,7 +95,7 @@ canonize (HOST_WIDE_INT *val, unsigned i > top = val[len - 1]; > if (len * HOST_BITS_PER_WIDE_INT > precision) > val[len - 1] = top = sext_hwi (top, precision % HOST_BITS_PER_WIDE_INT); > - if (top != 0 && top != (HOST_WIDE_INT)-1) > + if (top != 0 && top != HOST_WIDE_INT_M1) > return len; > > /* At this point we know that the top is either 0 or -1. Find the > @@ -163,7 +162,7 @@ wi::from_buffer (const unsigned char *bu > /* We have to clear all the bits ourself, as we merely or in values > below. */ > unsigned int len = BLOCKS_NEEDED (precision); > - HOST_WIDE_INT *val = result.write_val (); > + HOST_WIDE_INT *val = result.write_val (0); > for (unsigned int i = 0; i < len; ++i) > val[i] = 0; > > @@ -232,8 +231,7 @@ wi::to_mpz (const wide_int_ref &x, mpz_t > } > else if (excess < 0 && wi::neg_p (x)) > { > - int extra > - = (-excess + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT; > + int extra = CEIL (-excess, HOST_BITS_PER_WIDE_INT); > HOST_WIDE_INT *t = XALLOCAVEC (HOST_WIDE_INT, len + extra); > for (int i = 0; i < len; i++) > t[i] = v[i]; > @@ -280,8 +278,8 @@ wi::from_mpz (const_tree type, mpz_t x, > extracted from the GMP manual, section "Integer Import and Export": > http://gmplib.org/manual/Integer-Import-and-Export.html */ > numb = CHAR_BIT * sizeof (HOST_WIDE_INT); > - count = (mpz_sizeinbase (x, 2) + numb - 1) / numb; > - HOST_WIDE_INT *val = res.write_val (); > + count = CEIL (mpz_sizeinbase (x, 2), numb); > + HOST_WIDE_INT *val = res.write_val (0); > /* Read the absolute value. > > Write directly to the wide_int storage if possible, otherwise leave > @@ -289,7 +287,7 @@ wi::from_mpz (const_tree type, mpz_t x, > to use mpz_tdiv_r_2exp for the latter case, but the situation is > pathological and it seems safer to operate on the original mpz value > in all cases. 
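Since canonize's trimming loop is what ultimately justifies all of the write_val upper bounds (the estimates may over-allocate; set_len then records the canonical length), here is a simplified sketch of that trimming. It ignores the partial-top-block sext step canonize performs first and is illustrative only:

```cpp
#include <cassert>
#include <cstdint>

/* Simplified canonize: drop a top block that merely repeats the sign of
   the block below it (pure sign extension), per wide-int's canonical
   little-endian, sign-extended block encoding.  */
unsigned
canon_len (const int64_t *val, unsigned len)
{
  while (len > 1)
    {
      int64_t top = val[len - 1];
      if (top != 0 && top != -1)
	return len;
      /* Top is 0 or -1: redundant iff the block below already has that
	 sign bit.  */
      int64_t next = val[len - 2];
      if ((top == 0) != (next >= 0))
	return len;
      len--;
    }
  return len;
}
```

So {5, 0} canonizes to one block, while {0x8000...0, 0} must keep both blocks, since dropping the zero would turn the value negative.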
*/ > - void *valres = mpz_export (count <= WIDE_INT_MAX_ELTS ? val : 0, > + void *valres = mpz_export (count <= WIDE_INT_MAX_INL_ELTS ? val : 0, > &count, -1, sizeof (HOST_WIDE_INT), 0, 0, x); > if (count < 1) > { > @@ -1334,21 +1332,6 @@ wi::mul_internal (HOST_WIDE_INT *val, co > unsigned HOST_WIDE_INT o0, o1, k, t; > unsigned int i; > unsigned int j; > - unsigned int blocks_needed = BLOCKS_NEEDED (prec); > - unsigned int half_blocks_needed = blocks_needed * 2; > - /* The sizes here are scaled to support a 2x largest mode by 2x > - largest mode yielding a 4x largest mode result. This is what is > - needed by vpn. */ > - > - unsigned HOST_HALF_WIDE_INT > - u[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT]; > - unsigned HOST_HALF_WIDE_INT > - v[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT]; > - /* The '2' in 'R' is because we are internally doing a full > - multiply. */ > - unsigned HOST_HALF_WIDE_INT > - r[2 * 4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT]; > - HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1; > > /* If the top level routine did not really pass in an overflow, then > just make sure that we never attempt to set it. */ > @@ -1469,6 +1452,37 @@ wi::mul_internal (HOST_WIDE_INT *val, co > return 1; > } > > + /* The sizes here are scaled to support a 2x WIDE_INT_MAX_INL_PRECISION by 2x > + WIDE_INT_MAX_INL_PRECISION yielding a 4x WIDE_INT_MAX_INL_PRECISION > + result. */ > + > + unsigned HOST_HALF_WIDE_INT > + ubuf[4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT]; > + unsigned HOST_HALF_WIDE_INT > + vbuf[4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT]; > + /* The '2' in 'R' is because we are internally doing a full > + multiply. 
*/ > + unsigned HOST_HALF_WIDE_INT > + rbuf[2 * 4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT]; > + const HOST_WIDE_INT mask > + = (HOST_WIDE_INT_1 << HOST_BITS_PER_HALF_WIDE_INT) - 1; > + unsigned HOST_HALF_WIDE_INT *u = ubuf; > + unsigned HOST_HALF_WIDE_INT *v = vbuf; > + unsigned HOST_HALF_WIDE_INT *r = rbuf; > + > + if (!high) > + prec = MIN ((op1len + op2len + 1) * HOST_BITS_PER_WIDE_INT, prec); > + unsigned int blocks_needed = BLOCKS_NEEDED (prec); > + unsigned int half_blocks_needed = blocks_needed * 2; > + if (UNLIKELY (prec > WIDE_INT_MAX_INL_PRECISION)) > + { > + unsigned HOST_HALF_WIDE_INT *buf > + = XALLOCAVEC (unsigned HOST_HALF_WIDE_INT, 4 * 4 * blocks_needed); > + u = buf; > + v = u + 4 * blocks_needed; > + r = v + 4 * blocks_needed; > + } > + > /* We do unsigned mul and then correct it. */ > wi_unpack (u, op1val, op1len, half_blocks_needed, prec, SIGNED); > wi_unpack (v, op2val, op2len, half_blocks_needed, prec, SIGNED); > @@ -1782,16 +1796,6 @@ wi::divmod_internal (HOST_WIDE_INT *quot > unsigned int divisor_prec, signop sgn, > wi::overflow_type *oflow) > { > - unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec); > - unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec); > - unsigned HOST_HALF_WIDE_INT > - b_quotient[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT]; > - unsigned HOST_HALF_WIDE_INT > - b_remainder[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT]; > - unsigned HOST_HALF_WIDE_INT > - b_dividend[(4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT) + 1]; > - unsigned HOST_HALF_WIDE_INT > - b_divisor[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT]; > unsigned int m, n; > bool dividend_neg = false; > bool divisor_neg = false; > @@ -1910,6 +1914,44 @@ wi::divmod_internal (HOST_WIDE_INT *quot > } > } > > + unsigned HOST_HALF_WIDE_INT > + b_quotient_buf[4 * WIDE_INT_MAX_INL_PRECISION > + / HOST_BITS_PER_HALF_WIDE_INT]; > + unsigned HOST_HALF_WIDE_INT 
> +    b_remainder_buf[4 * WIDE_INT_MAX_INL_PRECISION
> +                    / HOST_BITS_PER_HALF_WIDE_INT];
> +  unsigned HOST_HALF_WIDE_INT
> +    b_dividend_buf[(4 * WIDE_INT_MAX_INL_PRECISION
> +                    / HOST_BITS_PER_HALF_WIDE_INT) + 1];
> +  unsigned HOST_HALF_WIDE_INT
> +    b_divisor_buf[4 * WIDE_INT_MAX_INL_PRECISION
> +                  / HOST_BITS_PER_HALF_WIDE_INT];
> +  unsigned HOST_HALF_WIDE_INT *b_quotient = b_quotient_buf;
> +  unsigned HOST_HALF_WIDE_INT *b_remainder = b_remainder_buf;
> +  unsigned HOST_HALF_WIDE_INT *b_dividend = b_dividend_buf;
> +  unsigned HOST_HALF_WIDE_INT *b_divisor = b_divisor_buf;
> +
> +  if (sgn == SIGNED || dividend_val[dividend_len - 1] >= 0)
> +    dividend_prec = MIN ((dividend_len + 1) * HOST_BITS_PER_WIDE_INT,
> +                         dividend_prec);
> +  if (sgn == SIGNED || divisor_val[divisor_len - 1] >= 0)
> +    divisor_prec = MIN (divisor_len * HOST_BITS_PER_WIDE_INT, divisor_prec);
> +  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
> +  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
> +  if (UNLIKELY (dividend_prec > WIDE_INT_MAX_INL_PRECISION)
> +      || UNLIKELY (divisor_prec > WIDE_INT_MAX_INL_PRECISION))
> +    {
> +      unsigned HOST_HALF_WIDE_INT *buf
> +        = XALLOCAVEC (unsigned HOST_HALF_WIDE_INT,
> +                      12 * dividend_blocks_needed
> +                      + 4 * divisor_blocks_needed + 1);
> +      b_quotient = buf;
> +      b_remainder = b_quotient + 4 * dividend_blocks_needed;
> +      b_dividend = b_remainder + 4 * dividend_blocks_needed;
> +      b_divisor = b_dividend + 4 * dividend_blocks_needed + 1;
> +      memset (b_quotient, 0,
> +              4 * dividend_blocks_needed * sizeof (HOST_HALF_WIDE_INT));
> +    }
>    wi_unpack (b_dividend, dividend.get_val (), dividend.get_len (),
>               dividend_blocks_needed, dividend_prec, UNSIGNED);
>    wi_unpack (b_divisor, divisor.get_val (), divisor.get_len (),
> @@ -1924,7 +1966,8 @@ wi::divmod_internal (HOST_WIDE_INT *quot
>    while (n > 1 && b_divisor[n - 1] == 0)
>      n--;
> 
> -  memset (b_quotient, 0, sizeof (b_quotient));
> +  if (b_quotient == b_quotient_buf)
> +    memset (b_quotient_buf, 0, sizeof (b_quotient_buf));
> 
>    divmod_internal_2 (b_quotient, b_remainder, b_dividend, b_divisor, m, n);
> 
> @@ -1970,6 +2013,8 @@ wi::lshift_large (HOST_WIDE_INT *val, co
> 
>    /* The whole-block shift fills with zeros.  */
>    unsigned int len = BLOCKS_NEEDED (precision);
> +  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
> +    len = xlen + skip + 1;
>    for (unsigned int i = 0; i < skip; ++i)
>      val[i] = 0;
> 
> @@ -1993,22 +2038,17 @@ wi::lshift_large (HOST_WIDE_INT *val, co
>    return canonize (val, len, precision);
>  }
> 
> -/* Right shift XVAL by SHIFT and store the result in VAL.  Return the
> +/* Right shift XVAL by SHIFT and store the result in VAL.  LEN is the
>     number of blocks in VAL.  The input has XPRECISION bits and the
>     output has XPRECISION - SHIFT bits.  */
> -static unsigned int
> +static void
>  rshift_large_common (HOST_WIDE_INT *val, const HOST_WIDE_INT *xval,
> -                     unsigned int xlen, unsigned int xprecision,
> -                     unsigned int shift)
> +                     unsigned int xlen, unsigned int shift, unsigned int len)
>  {
>    /* Split the shift into a whole-block shift and a subblock shift.  */
>    unsigned int skip = shift / HOST_BITS_PER_WIDE_INT;
>    unsigned int small_shift = shift % HOST_BITS_PER_WIDE_INT;
> 
> -  /* Work out how many blocks are needed to store the significant bits
> -     (excluding the upper zeros or signs).  */
> -  unsigned int len = BLOCKS_NEEDED (xprecision - shift);
> -
>    /* It's easier to handle the simple block case specially.  */
>    if (small_shift == 0)
>      for (unsigned int i = 0; i < len; ++i)
> @@ -2025,7 +2065,6 @@ rshift_large_common (HOST_WIDE_INT *val,
>          val[i] |= curr << (-small_shift % HOST_BITS_PER_WIDE_INT);
>        }
>      }
> -  return len;
>  }
> 
> /* Logically right shift XVAL by SHIFT and store the result in VAL.
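The buffer handling in the hunks above follows one pattern throughout: try a fixed-size inline array first, and switch to separately allocated scratch space only for the rare oversized precisions. A minimal self-contained sketch of that pattern (the constant and names are illustrative stand-ins; the real code compares against WIDE_INT_MAX_INL_PRECISION and allocates with XALLOCAVEC, i.e. alloca, rather than the heap):

```cpp
#include <vector>

// Hypothetical stand-in for WIDE_INT_MAX_INL_ELTS.
static const unsigned MAX_INL_ELTS = 4;

// Double each input element through a scratch buffer, using the inline
// stack array for the common small case and falling back to dynamically
// allocated storage only when the needed length exceeds it.
static unsigned
sum_doubled (const unsigned *vals, unsigned len)
{
  unsigned inline_buf[MAX_INL_ELTS];
  std::vector<unsigned> big_buf;        // fallback storage (XALLOCAVEC in GCC)
  unsigned *work = inline_buf;
  if (len > MAX_INL_ELTS)
    {
      big_buf.resize (len);
      work = big_buf.data ();
    }
  unsigned total = 0;
  for (unsigned i = 0; i < len; i++)
    {
      work[i] = 2 * vals[i];            // stand-in for the real per-block work
      total += work[i];
    }
  return total;
}
```

The point of the pattern is that the fast path stays allocation-free, which matters because these helpers sit on hot arithmetic paths.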
> @@ -2036,11 +2075,20 @@ wi::lrshift_large (HOST_WIDE_INT *val, c
>                    unsigned int xlen, unsigned int xprecision,
>                    unsigned int precision, unsigned int shift)
>  {
> -  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
> +  /* Work out how many blocks are needed to store the significant bits
> +     (excluding the upper zeros or signs).  */
> +  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
> +  unsigned int len = blocks_needed;
> +  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)
> +      && len > xlen
> +      && xval[xlen - 1] >= 0)
> +    len = xlen;
> +
> +  rshift_large_common (val, xval, xlen, shift, len);
> 
>    /* The value we just created has precision XPRECISION - SHIFT.
>       Zero-extend it to wider precisions.  */
> -  if (precision > xprecision - shift)
> +  if (precision > xprecision - shift && len == blocks_needed)
>      {
>        unsigned int small_prec = (xprecision - shift) % HOST_BITS_PER_WIDE_INT;
>        if (small_prec)
> @@ -2063,11 +2111,18 @@ wi::arshift_large (HOST_WIDE_INT *val, c
>                    unsigned int xlen, unsigned int xprecision,
>                    unsigned int precision, unsigned int shift)
>  {
> -  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
> +  /* Work out how many blocks are needed to store the significant bits
> +     (excluding the upper zeros or signs).  */
> +  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
> +  unsigned int len = blocks_needed;
> +  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS) && len > xlen)
> +    len = xlen;
> +
> +  rshift_large_common (val, xval, xlen, shift, len);
> 
>    /* The value we just created has precision XPRECISION - SHIFT.
>       Sign-extend it to wider types.  */
> -  if (precision > xprecision - shift)
> +  if (precision > xprecision - shift && len == blocks_needed)
>      {
>        unsigned int small_prec = (xprecision - shift) % HOST_BITS_PER_WIDE_INT;
>        if (small_prec)
> @@ -2399,9 +2454,12 @@ from_int (int i)
>  static void
>  assert_deceq (const char *expected, const wide_int_ref &wi, signop sgn)
>  {
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_dec (wi, buf, sgn);
> -  ASSERT_STREQ (expected, buf);
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
> +  unsigned len = wi.get_len ();
> +  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  print_dec (wi, p, sgn);
> +  ASSERT_STREQ (expected, p);
>  }
> 
>  /* Likewise for base 16.  */
> @@ -2409,9 +2467,12 @@ assert_deceq (const char *expected, cons
>  static void
>  assert_hexeq (const char *expected, const wide_int_ref &wi)
>  {
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_hex (wi, buf);
> -  ASSERT_STREQ (expected, buf);
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
> +  unsigned len = wi.get_len ();
> +  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  print_hex (wi, p);
> +  ASSERT_STREQ (expected, p);
>  }
> 
>  /* Test cases.  */
> @@ -2428,7 +2489,7 @@ test_printing ()
>    assert_hexeq ("0x1fffffffffffffffff", wi::shwi (-1, 69));
>    assert_hexeq ("0xffffffffffffffff", wi::mask (64, false, 69));
>    assert_hexeq ("0xffffffffffffffff", wi::mask (64, false));
> -  if (WIDE_INT_MAX_PRECISION > 128)
> +  if (WIDE_INT_MAX_INL_PRECISION > 128)
>      {
>        assert_hexeq ("0x20000000000000000fffffffffffffffe",
>                      wi::lshift (1, 129) + wi::lshift (1, 64) - 2);
> --- gcc/wide-int-print.h.jj	2023-10-10 11:55:51.532418172 +0200
> +++ gcc/wide-int-print.h	2023-10-11 11:01:31.695235476 +0200
> @@ -22,7 +22,7 @@ along with GCC; see the file COPYING3.
> 
>  #include
> 
> -#define WIDE_INT_PRINT_BUFFER_SIZE (WIDE_INT_MAX_PRECISION / 4 + 4)
> +#define WIDE_INT_PRINT_BUFFER_SIZE (WIDE_INT_MAX_INL_PRECISION / 4 + 4)
> 
>  /* Printing functions.  */
> 
> --- gcc/wide-int-print.cc.jj	2023-10-10 11:55:51.513418435 +0200
> +++ gcc/wide-int-print.cc	2023-10-11 11:01:31.709235281 +0200
> @@ -74,9 +74,12 @@ print_decs (const wide_int_ref &wi, char
>  void
>  print_decs (const wide_int_ref &wi, FILE *file)
>  {
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_decs (wi, buf);
> -  fputs (buf, file);
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
> +  unsigned len = wi.get_len ();
> +  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  print_decs (wi, p);
> +  fputs (p, file);
>  }
> 
>  /* Try to print the unsigned self in decimal to BUF if the number fits
> @@ -98,9 +101,12 @@ print_decu (const wide_int_ref &wi, char
>  void
>  print_decu (const wide_int_ref &wi, FILE *file)
>  {
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_decu (wi, buf);
> -  fputs (buf, file);
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
> +  unsigned len = wi.get_len ();
> +  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  print_decu (wi, p);
> +  fputs (p, file);
>  }
> 
>  void
> @@ -134,9 +140,12 @@ print_hex (const wide_int_ref &val, char
>  void
>  print_hex (const wide_int_ref &wi, FILE *file)
>  {
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_hex (wi, buf);
> -  fputs (buf, file);
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
> +  unsigned len = wi.get_len ();
> +  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  print_hex (wi, p);
> +  fputs (p, file);
>  }
> 
> /* Print larger precision wide_int.  Not defined as inline in a header
> --- gcc/tree.h.jj	2023-10-10 11:55:51.467419072 +0200
> +++ gcc/tree.h	2023-10-11 11:07:34.152258432 +0200
> @@ -6258,13 +6258,14 @@ namespace wi
>    template <int N>
>    struct int_traits <extended_tree <N> >
>    {
> -    static const enum precision_type precision_type = CONST_PRECISION;
> +    static const enum precision_type precision_type
> +      = N == ADDR_MAX_PRECISION ? INL_CONST_PRECISION : CONST_PRECISION;
>      static const bool host_dependent_precision = false;
>      static const bool is_sign_extended = true;
>      static const unsigned int precision = N;
>    };
> 
> -  typedef extended_tree <WIDE_INT_MAX_PRECISION> widest_extended_tree;
> +  typedef extended_tree <WIDEST_INT_MAX_PRECISION> widest_extended_tree;
>    typedef extended_tree <ADDR_MAX_PRECISION> offset_extended_tree;
> 
>    typedef const generic_wide_int <widest_extended_tree> tree_to_widest_ref;
> @@ -6292,7 +6293,8 @@ namespace wi
>    tree_to_poly_wide_ref to_poly_wide (const_tree);
> 
>    template <int N>
> -  struct ints_for <generic_wide_int <extended_tree <N> >, CONST_PRECISION>
> +  struct ints_for <generic_wide_int <extended_tree <N> >,
> +                   int_traits <extended_tree <N> >::precision_type>
>    {
>      typedef generic_wide_int <extended_tree <N> > extended;
>      static extended zero (const extended &);
> @@ -6308,7 +6310,7 @@ namespace wi
> 
>  /* Used to convert a tree to a widest2_int like this:
>     widest2_int foo = widest2_int_cst (some_tree).  */
> -typedef generic_wide_int <wi::extended_tree <WIDE_INT_MAX_PRECISION * 2> >
> +typedef generic_wide_int <wi::extended_tree <WIDEST_INT_MAX_PRECISION * 2> >
>    widest2_int_cst;
> 
> /* Refer to INTEGER_CST T as though it were a widest_int.
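The tree.h hunk above makes the precision kind of a fixed-precision extended tree a compile-time function of N, so only the small address-sized precision keeps the inline representation. A minimal stand-alone sketch of that trait selection (all names are illustrative, and the real value of ADDR_MAX_PRECISION is configuration-dependent in GCC):

```cpp
// Illustrative stand-ins for wi::precision_type and ADDR_MAX_PRECISION.
enum precision_type { INL_CONST_PRECISION, CONST_PRECISION };

static const unsigned ADDR_MAX_PRECISION = 192;  // assumed value for the sketch

// Mirrors the int_traits change: the trait picks the inline kind only
// for the address-sized precision; any wider compile-time precision
// (e.g. the new 32640-bit widest_int) gets the out-of-line kind.
template <unsigned N>
struct int_traits_sketch
{
  static constexpr precision_type kind
    = N == ADDR_MAX_PRECISION ? INL_CONST_PRECISION : CONST_PRECISION;
};
```

Because the choice is a constant expression, dependent code such as the `ints_for` specialization can key on `int_traits <...>::precision_type` instead of hard-coding CONST_PRECISION, which is exactly what the later tree.h hunk does.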
> @@ -6444,7 +6446,7 @@ wi::extended_tree <N>::get_len () const
>  {
>    if (N == ADDR_MAX_PRECISION)
>      return TREE_INT_CST_OFFSET_NUNITS (m_t);
> -  else if (N >= WIDE_INT_MAX_PRECISION)
> +  else if (N >= WIDEST_INT_MAX_PRECISION)
>      return TREE_INT_CST_EXT_NUNITS (m_t);
>    else
>      /* This class is designed to be used for specific output precisions
> @@ -6530,7 +6532,8 @@ wi::to_poly_wide (const_tree t)
>  template <int N>
>  inline generic_wide_int <wi::extended_tree <N> >
>  wi::ints_for <generic_wide_int <wi::extended_tree <N> >,
> -             wi::CONST_PRECISION>::zero (const extended &x)
> +             wi::int_traits <wi::extended_tree <N> >::precision_type
> +             >::zero (const extended &x)
>  {
>    return build_zero_cst (TREE_TYPE (x.get_tree ()));
>  }
> --- gcc/tree.cc.jj	2023-10-10 11:55:51.463419128 +0200
> +++ gcc/tree.cc	2023-10-11 11:01:31.750234711 +0200
> @@ -2676,13 +2676,13 @@ build_zero_cst (tree type)
>  tree
>  build_replicated_int_cst (tree type, unsigned int width, HOST_WIDE_INT value)
>  {
> -  int n = (TYPE_PRECISION (type) + HOST_BITS_PER_WIDE_INT - 1)
> -          / HOST_BITS_PER_WIDE_INT;
> +  int n = ((TYPE_PRECISION (type) + HOST_BITS_PER_WIDE_INT - 1)
> +           / HOST_BITS_PER_WIDE_INT);
>    unsigned HOST_WIDE_INT low, mask;
> -  HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
> +  HOST_WIDE_INT a[WIDE_INT_MAX_INL_ELTS];
>    int i;
> 
> -  gcc_assert (n && n <= WIDE_INT_MAX_ELTS);
> +  gcc_assert (n && n <= WIDE_INT_MAX_INL_ELTS);
> 
>    if (width == HOST_BITS_PER_WIDE_INT)
>      low = value;
> @@ -2696,8 +2696,8 @@ build_replicated_int_cst (tree type, uns
>      a[i] = low;
> 
>    gcc_assert (TYPE_PRECISION (type) <= MAX_BITSIZE_MODE_ANY_INT);
> -  return wide_int_to_tree
> -    (type, wide_int::from_array (a, n, TYPE_PRECISION (type)));
> +  return wide_int_to_tree (type, wide_int::from_array (a, n,
> +                                                       TYPE_PRECISION (type)));
>  }
> 
>  /* If floating-point type TYPE has an IEEE-style sign bit, return an
> --- gcc/print-tree.cc.jj	2023-10-10 11:55:51.268421828 +0200
> +++ gcc/print-tree.cc	2023-10-11 11:01:31.758234599 +0200
> @@ -365,13 +365,13 @@ print_node (FILE *file, const char *pref
>      fputs (code == CALL_EXPR ? " must-tail-call" : " static", file);
>    if (TREE_DEPRECATED (node))
>      fputs (" deprecated", file);
> -  if (TREE_UNAVAILABLE (node))
> -    fputs (" unavailable", file);
>    if (TREE_VISITED (node))
>      fputs (" visited", file);
> 
>    if (code != TREE_VEC && code != INTEGER_CST && code != SSA_NAME)
>      {
> +      if (TREE_UNAVAILABLE (node))
> +        fputs (" unavailable", file);
>        if (TREE_LANG_FLAG_0 (node))
>          fputs (" tree_0", file);
>        if (TREE_LANG_FLAG_1 (node))
> --- gcc/double-int.h.jj	2023-10-10 11:55:51.042424958 +0200
> +++ gcc/double-int.h	2023-10-11 11:07:34.150258459 +0200
> @@ -440,7 +440,7 @@ namespace wi
>    template <>
>    struct int_traits <double_int>
>    {
> -    static const enum precision_type precision_type = CONST_PRECISION;
> +    static const enum precision_type precision_type = INL_CONST_PRECISION;
>      static const bool host_dependent_precision = true;
>      static const unsigned int precision = HOST_BITS_PER_DOUBLE_INT;
>      static unsigned int get_precision (const double_int &);
> --- gcc/poly-int.h.jj	2023-10-10 11:55:51.255422008 +0200
> +++ gcc/poly-int.h	2023-10-11 11:07:34.149258473 +0200
> @@ -96,6 +96,20 @@ struct poly_coeff_traits
>  };
> 
>  template <typename T>
> +struct poly_coeff_traits <T, wi::INL_CONST_PRECISION>
> +{
> +  typedef WI_UNARY_RESULT (T) result;
> +  typedef int int_type;
> +  /* These types are always signed.  */
> +  static const int signedness = 1;
> +  static const int precision = wi::int_traits <T>::precision;
> +  static const int rank = precision * 2 / CHAR_BIT;
> +
> +  template <typename Arg>
> +  struct init_cast { using type = const Arg &; };
> +};
> +
> +template <typename T>
>  struct poly_coeff_traits <T, wi::CONST_PRECISION>
>  {
>    typedef WI_UNARY_RESULT (T) result;
> --- gcc/cfgloop.h.jj	2023-10-10 11:55:51.020425263 +0200
> +++ gcc/cfgloop.h	2023-10-11 11:01:31.794234098 +0200
> @@ -44,6 +44,9 @@ enum iv_extend_code
>    IV_UNKNOWN_EXTEND
>  };
> 
> +typedef generic_wide_int <fixed_wide_int_storage <WIDE_INT_MAX_INL_PRECISION> >
> +  bound_wide_int;
> +
> /* The structure describing a bound on number of iterations of a loop.  */
> 
>  class GTY ((chain_next ("%h.next"))) nb_iter_bound {
> @@ -58,7 +61,7 @@ public:
>       overflows (as MAX + 1 is sometimes produced as the estimate on number
>       of executions of STMT).
>       b) it is consistent with the result of number_of_iterations_exit.  */
> -  widest_int bound;
> +  bound_wide_int bound;
> 
>    /* True if, after executing the statement BOUND + 1 times, we will
>       leave the loop; that is, all the statements after it are executed at most
> @@ -161,14 +164,14 @@ public:
> 
>    /* An integer guaranteed to be greater or equal to nb_iterations.  Only
>       valid if any_upper_bound is true.  */
> -  widest_int nb_iterations_upper_bound;
> +  bound_wide_int nb_iterations_upper_bound;
> 
> -  widest_int nb_iterations_likely_upper_bound;
> +  bound_wide_int nb_iterations_likely_upper_bound;
> 
>    /* An integer giving an estimate on nb_iterations.  Unlike
>       nb_iterations_upper_bound, there is no guarantee that it is at least
>       nb_iterations.  */
> -  widest_int nb_iterations_estimate;
> +  bound_wide_int nb_iterations_estimate;
> 
>    /* If > 0, an integer, where the user asserted that for any
>       I in [ 0, nb_iterations ) and for any J in
> --- gcc/cfgloop.cc.jj	2023-10-10 11:55:51.002425512 +0200
> +++ gcc/cfgloop.cc	2023-10-11 11:01:31.804233959 +0200
> @@ -1895,33 +1895,38 @@ void
>  record_niter_bound (class loop *loop, const widest_int &i_bound,
>                      bool realistic, bool upper)
>  {
> +  if (wi::min_precision (i_bound, SIGNED) > bound_wide_int ().get_precision ())
> +    return;
> +
> +  bound_wide_int bound = bound_wide_int::from (i_bound, SIGNED);
> +
>    /* Update the bounds only when there is no previous estimation, or when the
>       current estimation is smaller.  */
>    if (upper
>        && (!loop->any_upper_bound
> -          || wi::ltu_p (i_bound, loop->nb_iterations_upper_bound)))
> +          || wi::ltu_p (bound, loop->nb_iterations_upper_bound)))
>      {
>        loop->any_upper_bound = true;
> -      loop->nb_iterations_upper_bound = i_bound;
> +      loop->nb_iterations_upper_bound = bound;
>        if (!loop->any_likely_upper_bound)
>          {
>            loop->any_likely_upper_bound = true;
> -          loop->nb_iterations_likely_upper_bound = i_bound;
> +          loop->nb_iterations_likely_upper_bound = bound;
>          }
>      }
>    if (realistic
>        && (!loop->any_estimate
> -          || wi::ltu_p (i_bound, loop->nb_iterations_estimate)))
> +          || wi::ltu_p (bound, loop->nb_iterations_estimate)))
>      {
>        loop->any_estimate = true;
> -      loop->nb_iterations_estimate = i_bound;
> +      loop->nb_iterations_estimate = bound;
>      }
>    if (!realistic
>        && (!loop->any_likely_upper_bound
> -          || wi::ltu_p (i_bound, loop->nb_iterations_likely_upper_bound)))
> +          || wi::ltu_p (bound, loop->nb_iterations_likely_upper_bound)))
>      {
>        loop->any_likely_upper_bound = true;
> -      loop->nb_iterations_likely_upper_bound = i_bound;
> +      loop->nb_iterations_likely_upper_bound = bound;
>      }
> 
>    /* If an upper bound is smaller than the realistic estimate of the
> @@ -2018,7 +2023,7 @@ get_estimated_loop_iterations (class loo
>        return false;
>      }
> 
> -  *nit = loop->nb_iterations_estimate;
> +  *nit = widest_int::from (loop->nb_iterations_estimate, SIGNED);
>    return true;
>  }
> 
> @@ -2032,7 +2037,7 @@ get_max_loop_iterations (const class loo
>    if (!loop->any_upper_bound)
>      return false;
> 
> -  *nit = loop->nb_iterations_upper_bound;
> +  *nit = widest_int::from (loop->nb_iterations_upper_bound, SIGNED);
>    return true;
>  }
> 
> @@ -2066,7 +2071,7 @@ get_likely_max_loop_iterations (class lo
>    if (!loop->any_likely_upper_bound)
>      return false;
> 
> -  *nit = loop->nb_iterations_likely_upper_bound;
> +  *nit = widest_int::from (loop->nb_iterations_likely_upper_bound, SIGNED);
>    return true;
>  }
> 
> --- gcc/tree-vect-loop.cc.jj	2023-10-10 11:55:51.432419557 +0200
> +++ gcc/tree-vect-loop.cc	2023-10-11 11:01:31.832233571 +0200
> @@ -11681,7 +11681,7 @@ vect_transform_loop (loop_vec_info loop_
>                               LOOP_VINFO_VECT_FACTOR (loop_vinfo),
>                               &bound))
>          loop->nb_iterations_upper_bound
> -          = wi::umin ((widest_int) (bound - 1),
> +          = wi::umin ((bound_wide_int) (bound - 1),
>                        loop->nb_iterations_upper_bound);
>      }
>  }
> --- gcc/tree-ssa-loop-niter.cc.jj	2023-10-10 11:55:51.406419917 +0200
> +++ gcc/tree-ssa-loop-niter.cc	2023-10-11 11:01:31.843233418 +0200
> @@ -3873,12 +3873,17 @@ do_warn_aggressive_loop_optimizations (c
>      return;
> 
>    gimple *estmt = last_nondebug_stmt (e->src);
> -  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
> -  print_dec (i_bound, buf, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations))
> +  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
> +  unsigned len = i_bound.get_len ();
> +  if (len > WIDE_INT_MAX_INL_ELTS)
> +    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
> +  else
> +    p = buf;
> +  print_dec (i_bound, p, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations))
>               ? UNSIGNED : SIGNED);
>    auto_diagnostic_group d;
>    if (warning_at (gimple_location (stmt), OPT_Waggressive_loop_optimizations,
> -                  "iteration %s invokes undefined behavior", buf))
> +                  "iteration %s invokes undefined behavior", p))
>      inform (gimple_location (estmt), "within this loop");
>    loop->warned_aggressive_loop_optimizations = true;
>  }
> @@ -3915,6 +3920,9 @@ record_estimate (class loop *loop, tree
>    else
>      gcc_checking_assert (i_bound == wi::to_widest (bound));
> 
> +  if (wi::min_precision (i_bound, SIGNED) > bound_wide_int ().get_precision ())
> +    return;
> +
>    /* If we have a guaranteed upper bound, record it in the appropriate
>       list, unless this is an !is_exit bound (i.e. undefined behavior in
>       at_stmt) in a loop with known constant number of iterations.  */
> @@ -3925,7 +3933,7 @@ record_estimate (class loop *loop, tree
>      {
>        class nb_iter_bound *elt = ggc_alloc <nb_iter_bound> ();
> 
> -      elt->bound = i_bound;
> +      elt->bound = bound_wide_int::from (i_bound, SIGNED);
>        elt->stmt = at_stmt;
>        elt->is_exit = is_exit;
>        elt->next = loop->bounds;
> @@ -4410,8 +4418,8 @@ infer_loop_bounds_from_undefined (class
>  static int
>  wide_int_cmp (const void *p1, const void *p2)
>  {
> -  const widest_int *d1 = (const widest_int *) p1;
> -  const widest_int *d2 = (const widest_int *) p2;
> +  const bound_wide_int *d1 = (const bound_wide_int *) p1;
> +  const bound_wide_int *d2 = (const bound_wide_int *) p2;
>    return wi::cmpu (*d1, *d2);
>  }
> 
> @@ -4419,7 +4427,7 @@ wide_int_cmp (const void *p1, const void
>     Lookup by binary search.  */
> 
>  static int
> -bound_index (const vec <widest_int> &bounds, const widest_int &bound)
> +bound_index (const vec <bound_wide_int> &bounds, const bound_wide_int &bound)
>  {
>    unsigned int end = bounds.length ();
>    unsigned int begin = 0;
> @@ -4428,7 +4436,7 @@ bound_index (const vec &boun
>    while (begin != end)
>      {
>        unsigned int middle = (begin + end) / 2;
> -      widest_int index = bounds[middle];
> +      bound_wide_int index = bounds[middle];
> 
>        if (index == bound)
>          return middle;
> @@ -4450,7 +4458,7 @@ static void
>  discover_iteration_bound_by_body_walk (class loop *loop)
>  {
>    class nb_iter_bound *elt;
> -  auto_vec <widest_int> bounds;
> +  auto_vec <bound_wide_int> bounds;
>    vec <vec <basic_block> > queues = vNULL;
>    vec <basic_block> queue = vNULL;
>    ptrdiff_t queue_index;
> @@ -4459,7 +4467,7 @@ discover_iteration_bound_by_body_walk (c
>    /* Discover what bounds may interest us.  */
>    for (elt = loop->bounds; elt; elt = elt->next)
>      {
> -      widest_int bound = elt->bound;
> +      bound_wide_int bound = elt->bound;
> 
>        /* Exit terminates loop at given iteration, while non-exits produce undefined
>           effect on the next iteration.  */
> @@ -4492,7 +4500,7 @@ discover_iteration_bound_by_body_walk (c
>    hash_map bb_bounds;
>    for (elt = loop->bounds; elt; elt = elt->next)
>      {
> -      widest_int bound = elt->bound;
> +      bound_wide_int bound = elt->bound;
>        if (!elt->is_exit)
>          {
>            bound += 1;
> @@ -4601,7 +4609,8 @@ discover_iteration_bound_by_body_walk (c
>            print_decu (bounds[latch_index], dump_file);
>            fprintf (dump_file, "\n");
>          }
> -      record_niter_bound (loop, bounds[latch_index], false, true);
> +      record_niter_bound (loop, widest_int::from (bounds[latch_index],
> +                                                  SIGNED), false, true);
>      }
> 
>    queues.release ();
> @@ -4704,7 +4713,8 @@ maybe_lower_iteration_bound (class loop
>        if (dump_file && (dump_flags & TDF_DETAILS))
>          fprintf (dump_file, "Reducing loop iteration estimate by 1; "
>                   "undefined statement must be executed at the last iteration.\n");
> -      record_niter_bound (loop, loop->nb_iterations_upper_bound - 1,
> +      record_niter_bound (loop, widest_int::from (loop->nb_iterations_upper_bound,
> +                                                  SIGNED) - 1,
>                            false, true);
>      }
> 
> @@ -4860,10 +4870,13 @@ estimate_numbers_of_iterations (class lo
>       not break code with undefined behavior by not recording smaller
>       maximum number of iterations.  */
>    if (loop->nb_iterations
> -      && TREE_CODE (loop->nb_iterations) == INTEGER_CST)
> +      && TREE_CODE (loop->nb_iterations) == INTEGER_CST
> +      && (wi::min_precision (wi::to_widest (loop->nb_iterations), SIGNED)
> +          <= bound_wide_int ().get_precision ()))
>      {
>        loop->any_upper_bound = true;
> -      loop->nb_iterations_upper_bound = wi::to_widest (loop->nb_iterations);
> +      loop->nb_iterations_upper_bound
> +        = bound_wide_int::from (wi::to_widest (loop->nb_iterations), SIGNED);
>      }
>  }
> 
> @@ -5114,7 +5127,7 @@ n_of_executions_at_most (gimple *stmt,
>                           class nb_iter_bound *niter_bound,
>                           tree niter)
>  {
> -  widest_int bound = niter_bound->bound;
> +  widest_int bound = widest_int::from (niter_bound->bound, SIGNED);
>    tree nit_type = TREE_TYPE (niter), e;
>    enum tree_code cmp;
> 
> --- gcc/tree-ssa-loop-ivcanon.cc.jj	2023-10-10 11:55:51.353420651 +0200
> +++ gcc/tree-ssa-loop-ivcanon.cc	2023-10-11 11:01:31.868233071 +0200
> @@ -622,10 +622,11 @@ remove_redundant_iv_tests (class loop *l
>            || !integer_zerop (niter.may_be_zero)
>            || !niter.niter
>            || TREE_CODE (niter.niter) != INTEGER_CST
> -          || !wi::ltu_p (loop->nb_iterations_upper_bound,
> +          || !wi::ltu_p (widest_int::from (loop->nb_iterations_upper_bound,
> +                                           SIGNED),
>                           wi::to_widest (niter.niter)))
>          continue;
> -
> +
>        if (dump_file && (dump_flags & TDF_DETAILS))
>          {
>            fprintf (dump_file, "Removed pointless exit: ");
> --- gcc/match.pd.jj	2023-10-11 10:59:20.914051504 +0200
> +++ gcc/match.pd	2023-10-11 11:01:31.871233029 +0200
> @@ -6444,8 +6444,12 @@ (define_operator_list SYNC_FETCH_AND_AND
>        code and here to avoid a spurious overflow flag on the resulting
>        constant which fold_convert produces.  */
*/ > (if (TREE_CODE (@1) == INTEGER_CST) > - (cmp @00 { force_fit_type (TREE_TYPE (@00), wi::to_widest (@1), 0, > - TREE_OVERFLOW (@1)); }) > + (cmp @00 { force_fit_type (TREE_TYPE (@00), > + wide_int::from (wi::to_wide (@1), > + MAX (TYPE_PRECISION (TREE_TYPE (@1)), > + TYPE_PRECISION (TREE_TYPE (@00))), > + TYPE_SIGN (TREE_TYPE (@1))), > + 0, TREE_OVERFLOW (@1)); }) > (cmp @00 (convert @1))) > > (if (TYPE_PRECISION (TREE_TYPE (@0)) > TYPE_PRECISION (TREE_TYPE (@00))) > --- gcc/value-range.h.jj 2023-10-10 11:55:51.502418588 +0200 > +++ gcc/value-range.h 2023-10-11 11:01:31.894232710 +0200 > @@ -626,7 +626,9 @@ irange::maybe_resize (int needed) > { > m_max_ranges = HARD_MAX_RANGES; > wide_int *newmem = new wide_int[m_max_ranges * 2]; > - memcpy (newmem, m_base, sizeof (wide_int) * num_pairs () * 2); > + unsigned n = num_pairs () * 2; > + for (unsigned i = 0; i < n; ++i) > + newmem[i] = m_base[i]; > m_base = newmem; > } > } > --- gcc/value-range.cc.jj 2023-10-10 11:55:51.482418865 +0200 > +++ gcc/value-range.cc 2023-10-11 11:01:31.905232557 +0200 > @@ -245,17 +245,24 @@ vrange::dump (FILE *file) const > void > irange_bitmask::dump (FILE *file) const > { > - char buf[WIDE_INT_PRINT_BUFFER_SIZE]; > + char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p; > pretty_printer buffer; > > pp_needs_newline (&buffer) = true; > buffer.buffer->stream = file; > pp_string (&buffer, "MASK "); > - print_hex (m_mask, buf); > - pp_string (&buffer, buf); > + unsigned len_mask = m_mask.get_len (); > + unsigned len_val = m_value.get_len (); > + unsigned len = MAX (len_mask, len_val); > + if (len > WIDE_INT_MAX_INL_ELTS) > + p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4); > + else > + p = buf; > + print_hex (m_mask, p); > + pp_string (&buffer, p); > pp_string (&buffer, " VALUE "); > - print_hex (m_value, buf); > - pp_string (&buffer, buf); > + print_hex (m_value, p); > + pp_string (&buffer, p); > pp_flush (&buffer); > } > > --- gcc/fold-const.cc.jj 2023-10-10 16:08:05.345254593 +0200 > 
+++ gcc/fold-const.cc 2023-10-11 11:01:31.929232224 +0200 > @@ -2137,7 +2137,10 @@ fold_convert_const_int_from_int (tree ty > /* Given an integer constant, make new constant with new type, > appropriately sign-extended or truncated. Use widest_int > so that any extension is done according ARG1's type. */ > - return force_fit_type (type, wi::to_widest (arg1), > + tree arg1_type = TREE_TYPE (arg1); > + unsigned prec = MAX (TYPE_PRECISION (arg1_type), TYPE_PRECISION (type)); > + return force_fit_type (type, wide_int::from (wi::to_wide (arg1), prec, > + TYPE_SIGN (arg1_type)), > !POINTER_TYPE_P (TREE_TYPE (arg1)), > TREE_OVERFLOW (arg1)); > } > @@ -9565,8 +9568,13 @@ fold_unary_loc (location_t loc, enum tre > } > if (change) > { > - tem = force_fit_type (type, wi::to_widest (and1), 0, > - TREE_OVERFLOW (and1)); > + tree and1_type = TREE_TYPE (and1); > + unsigned prec = MAX (TYPE_PRECISION (and1_type), > + TYPE_PRECISION (type)); > + tem = force_fit_type (type, > + wide_int::from (wi::to_wide (and1), prec, > + TYPE_SIGN (and1_type)), > + 0, TREE_OVERFLOW (and1)); > return fold_build2_loc (loc, BIT_AND_EXPR, type, > fold_convert_loc (loc, type, and0), tem); > } > --- gcc/tree-ssa-ccp.cc.jj 2023-10-10 11:55:51.315421177 +0200 > +++ gcc/tree-ssa-ccp.cc 2023-10-11 11:01:31.991231363 +0200 > @@ -1966,7 +1966,8 @@ bit_value_binop (enum tree_code code, si > } > else > { > - widest_int upper = wi::udiv_trunc (r1max, r2min); > + widest_int upper > + = wi::udiv_trunc (wi::zext (r1max, width), r2min); > unsigned int lzcount = wi::clz (upper); > unsigned int bits = wi::get_precision (upper) - lzcount; > *mask = wi::mask (bits, false); > --- gcc/lto-streamer-out.cc.jj 2023-10-10 11:55:51.211422617 +0200 > +++ gcc/lto-streamer-out.cc 2023-10-11 11:01:32.004231182 +0200 > @@ -2173,13 +2173,26 @@ output_cfg (struct output_block *ob, str > loop_estimation, EST_LAST, loop->estimate_state); > streamer_write_hwi (ob, loop->any_upper_bound); > if (loop->any_upper_bound) > - 
streamer_write_widest_int (ob, loop->nb_iterations_upper_bound); > + { > + widest_int w = widest_int::from (loop->nb_iterations_upper_bound, > + SIGNED); > + streamer_write_widest_int (ob, w); > + } > streamer_write_hwi (ob, loop->any_likely_upper_bound); > if (loop->any_likely_upper_bound) > - streamer_write_widest_int (ob, loop->nb_iterations_likely_upper_bound); > + { > + widest_int w > + = widest_int::from (loop->nb_iterations_likely_upper_bound, > + SIGNED); > + streamer_write_widest_int (ob, w); > + } > streamer_write_hwi (ob, loop->any_estimate); > if (loop->any_estimate) > - streamer_write_widest_int (ob, loop->nb_iterations_estimate); > + { > + widest_int w = widest_int::from (loop->nb_iterations_estimate, > + SIGNED); > + streamer_write_widest_int (ob, w); > + } > > /* Write OMP SIMD related info. */ > streamer_write_hwi (ob, loop->safelen); > --- gcc/lto-streamer-in.cc.jj 2023-10-10 11:55:51.200422770 +0200 > +++ gcc/lto-streamer-in.cc 2023-10-11 11:01:32.031230808 +0200 > @@ -1122,13 +1122,16 @@ input_cfg (class lto_input_block *ib, cl > loop->estimate_state = streamer_read_enum (ib, loop_estimation, EST_LAST); > loop->any_upper_bound = streamer_read_hwi (ib); > if (loop->any_upper_bound) > - loop->nb_iterations_upper_bound = streamer_read_widest_int (ib); > + loop->nb_iterations_upper_bound > + = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED); > loop->any_likely_upper_bound = streamer_read_hwi (ib); > if (loop->any_likely_upper_bound) > - loop->nb_iterations_likely_upper_bound = streamer_read_widest_int (ib); > + loop->nb_iterations_likely_upper_bound > + = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED); > loop->any_estimate = streamer_read_hwi (ib); > if (loop->any_estimate) > - loop->nb_iterations_estimate = streamer_read_widest_int (ib); > + loop->nb_iterations_estimate > + = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED); > > /* Read OMP SIMD related info. 
*/ > loop->safelen = streamer_read_hwi (ib); > @@ -1888,13 +1891,17 @@ lto_input_tree_1 (class lto_input_block > tree type = stream_read_tree_ref (ib, data_in); > unsigned HOST_WIDE_INT len = streamer_read_uhwi (ib); > unsigned HOST_WIDE_INT i; > - HOST_WIDE_INT a[WIDE_INT_MAX_ELTS]; > + HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf; > > + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) > + a = XALLOCAVEC (HOST_WIDE_INT, len); > for (i = 0; i < len; i++) > a[i] = streamer_read_hwi (ib); > gcc_assert (TYPE_PRECISION (type) <= WIDE_INT_MAX_PRECISION); > - result = wide_int_to_tree (type, wide_int::from_array > - (a, len, TYPE_PRECISION (type))); > + result > + = wide_int_to_tree (type, > + wide_int::from_array (a, len, > + TYPE_PRECISION (type))); > streamer_tree_cache_append (data_in->reader_cache, result, hash); > } > else if (tag == LTO_tree_scc || tag == LTO_trees) > --- gcc/data-streamer-in.cc.jj 2023-10-10 11:55:51.036425041 +0200 > +++ gcc/data-streamer-in.cc 2023-10-11 11:01:32.031230808 +0200 > @@ -277,10 +277,12 @@ streamer_read_value_range (class lto_inp > wide_int > streamer_read_wide_int (class lto_input_block *ib) > { > - HOST_WIDE_INT a[WIDE_INT_MAX_ELTS]; > + HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf; > int i; > int prec = streamer_read_uhwi (ib); > int len = streamer_read_uhwi (ib); > + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) > + a = XALLOCAVEC (HOST_WIDE_INT, len); > for (i = 0; i < len; i++) > a[i] = streamer_read_hwi (ib); > return wide_int::from_array (a, len, prec); > @@ -292,10 +294,12 @@ streamer_read_wide_int (class lto_input_ > widest_int > streamer_read_widest_int (class lto_input_block *ib) > { > - HOST_WIDE_INT a[WIDE_INT_MAX_ELTS]; > + HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf; > int i; > int prec ATTRIBUTE_UNUSED = streamer_read_uhwi (ib); > int len = streamer_read_uhwi (ib); > + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) > + a = XALLOCAVEC (HOST_WIDE_INT, len); > for (i = 0; i < len; i++) > a[i] = 
streamer_read_hwi (ib); > return widest_int::from_array (a, len); > --- gcc/tree-affine.cc.jj 2023-10-10 11:55:51.294421468 +0200 > +++ gcc/tree-affine.cc 2023-10-11 11:01:32.050230544 +0200 > @@ -805,6 +805,7 @@ aff_combination_expand (aff_tree *comb A > continue; > } > exp = XNEW (class name_expansion); > + ::new (static_cast<void *> (exp)) name_expansion (); > exp->in_progress = 1; > if (!*cache) > *cache = new hash_map<tree, name_expansion *>; > @@ -860,6 +861,7 @@ tree_to_aff_combination_expand (tree exp > bool > free_name_expansion (tree const &, name_expansion **value, void *) > { > + (*value)->~name_expansion (); > free (*value); > return true; > } > --- gcc/gimple-ssa-strength-reduction.cc.jj 2023-10-10 11:55:51.150423462 +0200 > +++ gcc/gimple-ssa-strength-reduction.cc 2023-10-11 11:01:32.076230183 +0200 > @@ -238,7 +238,7 @@ public: > tree stride; > > /* The index constant i. */ > - widest_int index; > + offset_int index; > > /* The type of the candidate. This is normally the type of base_expr, > but casts may have occurred when combining feeding instructions. > @@ -333,7 +333,7 @@ class incr_info_d > { > public: > /* The increment that relates a candidate to its basis. */ > - widest_int incr; > + offset_int incr; > > /* How many times the increment occurs in the candidate tree. */ > unsigned count; > @@ -677,7 +677,7 @@ record_potential_basis (slsr_cand_t c, t > > static slsr_cand_t > alloc_cand_and_find_basis (enum cand_kind kind, gimple *gs, tree base, > - const widest_int &index, tree stride, tree ctype, > + const offset_int &index, tree stride, tree ctype, > tree stype, unsigned savings) > { > slsr_cand_t c = (slsr_cand_t) obstack_alloc (&cand_obstack, > @@ -893,7 +893,7 @@ slsr_process_phi (gphi *phi, bool speed) > int (i * S). > Otherwise, just return double int zero.
*/ > > -static widest_int > +static offset_int > backtrace_base_for_ref (tree *pbase) > { > tree base_in = *pbase; > @@ -922,7 +922,7 @@ backtrace_base_for_ref (tree *pbase) > { > /* X = B + (1 * S), S is integer constant. */ > *pbase = base_cand->base_expr; > - return wi::to_widest (base_cand->stride); > + return wi::to_offset (base_cand->stride); > } > else if (base_cand->kind == CAND_ADD > && TREE_CODE (base_cand->stride) == INTEGER_CST > @@ -966,13 +966,13 @@ backtrace_base_for_ref (tree *pbase) > *PINDEX: C1 + (C2 * C3) + C4 + (C5 * C3) */ > > static bool > -restructure_reference (tree *pbase, tree *poffset, widest_int *pindex, > +restructure_reference (tree *pbase, tree *poffset, offset_int *pindex, > tree *ptype) > { > tree base = *pbase, offset = *poffset; > - widest_int index = *pindex; > + offset_int index = *pindex; > tree mult_op0, t1, t2, type; > - widest_int c1, c2, c3, c4, c5; > + offset_int c1, c2, c3, c4, c5; > offset_int mem_offset; > > if (!base > @@ -985,18 +985,18 @@ restructure_reference (tree *pbase, tree > return false; > > t1 = TREE_OPERAND (base, 0); > - c1 = widest_int::from (mem_offset, SIGNED); > + c1 = offset_int::from (mem_offset, SIGNED); > type = TREE_TYPE (TREE_OPERAND (base, 1)); > > mult_op0 = TREE_OPERAND (offset, 0); > - c3 = wi::to_widest (TREE_OPERAND (offset, 1)); > + c3 = wi::to_offset (TREE_OPERAND (offset, 1)); > > if (TREE_CODE (mult_op0) == PLUS_EXPR) > > if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) == INTEGER_CST) > { > t2 = TREE_OPERAND (mult_op0, 0); > - c2 = wi::to_widest (TREE_OPERAND (mult_op0, 1)); > + c2 = wi::to_offset (TREE_OPERAND (mult_op0, 1)); > } > else > return false; > @@ -1006,7 +1006,7 @@ restructure_reference (tree *pbase, tree > if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) == INTEGER_CST) > { > t2 = TREE_OPERAND (mult_op0, 0); > - c2 = -wi::to_widest (TREE_OPERAND (mult_op0, 1)); > + c2 = -wi::to_offset (TREE_OPERAND (mult_op0, 1)); > } > else > return false; > @@ -1057,7 +1057,7 @@ slsr_process_ref 
(gimple *gs) > HOST_WIDE_INT cbitpos; > if (reversep || !bitpos.is_constant (&cbitpos)) > return; > - widest_int index = cbitpos; > + offset_int index = cbitpos; > > if (!restructure_reference (&base, &offset, &index, &type)) > return; > @@ -1079,7 +1079,7 @@ create_mul_ssa_cand (gimple *gs, tree ba > { > tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE; > tree stype = NULL_TREE; > - widest_int index; > + offset_int index; > unsigned savings = 0; > slsr_cand_t c; > slsr_cand_t base_cand = base_cand_from_table (base_in); > @@ -1112,7 +1112,7 @@ create_mul_ssa_cand (gimple *gs, tree ba > ============================ > X = B + ((i' * S) * Z) */ > base = base_cand->base_expr; > - index = base_cand->index * wi::to_widest (base_cand->stride); > + index = base_cand->index * wi::to_offset (base_cand->stride); > stride = stride_in; > ctype = base_cand->cand_type; > stype = TREE_TYPE (stride_in); > @@ -1149,7 +1149,7 @@ static slsr_cand_t > create_mul_imm_cand (gimple *gs, tree base_in, tree stride_in, bool speed) > { > tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE; > - widest_int index, temp; > + offset_int index, temp; > unsigned savings = 0; > slsr_cand_t c; > slsr_cand_t base_cand = base_cand_from_table (base_in); > @@ -1165,7 +1165,7 @@ create_mul_imm_cand (gimple *gs, tree ba > X = Y * c > ============================ > X = (B + i') * (S * c) */ > - temp = wi::to_widest (base_cand->stride) * wi::to_widest (stride_in); > + temp = wi::to_offset (base_cand->stride) * wi::to_offset (stride_in); > if (wi::fits_to_tree_p (temp, TREE_TYPE (stride_in))) > { > base = base_cand->base_expr; > @@ -1200,7 +1200,7 @@ create_mul_imm_cand (gimple *gs, tree ba > =========================== > X = (B + S) * c */ > base = base_cand->base_expr; > - index = wi::to_widest (base_cand->stride); > + index = wi::to_offset (base_cand->stride); > stride = stride_in; > ctype = base_cand->cand_type; > if (has_single_use (base_in)) > @@ -1281,7 +1281,7 @@ 
create_add_ssa_cand (gimple *gs, tree ba > { > tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE; > tree stype = NULL_TREE; > - widest_int index; > + offset_int index; > unsigned savings = 0; > slsr_cand_t c; > slsr_cand_t base_cand = base_cand_from_table (base_in); > @@ -1300,7 +1300,7 @@ create_add_ssa_cand (gimple *gs, tree ba > =========================== > X = Y + ((+/-1 * S) * B) */ > base = base_in; > - index = wi::to_widest (addend_cand->stride); > + index = wi::to_offset (addend_cand->stride); > if (subtract_p) > index = -index; > stride = addend_cand->base_expr; > @@ -1350,7 +1350,7 @@ create_add_ssa_cand (gimple *gs, tree ba > =========================== > Value: X = Y + ((-1 * S) * B) */ > base = base_in; > - index = wi::to_widest (subtrahend_cand->stride); > + index = wi::to_offset (subtrahend_cand->stride); > index = -index; > stride = subtrahend_cand->base_expr; > ctype = TREE_TYPE (base_in); > @@ -1389,13 +1389,13 @@ create_add_ssa_cand (gimple *gs, tree ba > about BASE_IN into the new candidate. Return the new candidate. 
*/ > > static slsr_cand_t > -create_add_imm_cand (gimple *gs, tree base_in, const widest_int &index_in, > +create_add_imm_cand (gimple *gs, tree base_in, const offset_int &index_in, > bool speed) > { > enum cand_kind kind = CAND_ADD; > tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE; > tree stype = NULL_TREE; > - widest_int index, multiple; > + offset_int index, multiple; > unsigned savings = 0; > slsr_cand_t c; > slsr_cand_t base_cand = base_cand_from_table (base_in); > @@ -1405,7 +1405,7 @@ create_add_imm_cand (gimple *gs, tree ba > signop sign = TYPE_SIGN (TREE_TYPE (base_cand->stride)); > > if (TREE_CODE (base_cand->stride) == INTEGER_CST > - && wi::multiple_of_p (index_in, wi::to_widest (base_cand->stride), > + && wi::multiple_of_p (index_in, wi::to_offset (base_cand->stride), > sign, &multiple)) > { > /* Y = (B + i') * S, S constant, c = kS for some integer k > @@ -1494,7 +1494,7 @@ slsr_process_add (gimple *gs, tree rhs1, > else if (TREE_CODE (rhs2) == INTEGER_CST) > { > /* Record an interpretation for the add-immediate. */ > - widest_int index = wi::to_widest (rhs2); > + offset_int index = wi::to_offset (rhs2); > if (subtract_p) > index = -index; > > @@ -2079,7 +2079,7 @@ phi_dependent_cand_p (slsr_cand_t c) > /* Calculate the increment required for candidate C relative to > its basis. */ > > -static widest_int > +static offset_int > cand_increment (slsr_cand_t c) > { > slsr_cand_t basis; > @@ -2102,10 +2102,10 @@ cand_increment (slsr_cand_t c) > for this candidate, return the absolute value of that increment > instead. */ > > -static inline widest_int > +static inline offset_int > cand_abs_increment (slsr_cand_t c) > { > - widest_int increment = cand_increment (c); > + offset_int increment = cand_increment (c); > > if (!address_arithmetic_p && wi::neg_p (increment)) > increment = -increment; > @@ -2126,7 +2126,7 @@ cand_already_replaced (slsr_cand_t c) > replace_conditional_candidate. 
*/ > > static void > -replace_mult_candidate (slsr_cand_t c, tree basis_name, widest_int bump) > +replace_mult_candidate (slsr_cand_t c, tree basis_name, offset_int bump) > { > tree target_type = TREE_TYPE (gimple_assign_lhs (c->cand_stmt)); > enum tree_code cand_code = gimple_assign_rhs_code (c->cand_stmt); > @@ -2245,7 +2245,7 @@ replace_unconditional_candidate (slsr_ca > return; > > basis = lookup_cand (c->basis); > - widest_int bump = cand_increment (c) * wi::to_widest (c->stride); > + offset_int bump = cand_increment (c) * wi::to_offset (c->stride); > > replace_mult_candidate (c, gimple_assign_lhs (basis->cand_stmt), bump); > } > @@ -2255,7 +2255,7 @@ replace_unconditional_candidate (slsr_ca > MAX_INCR_VEC_LEN increments have been found. */ > > static inline int > -incr_vec_index (const widest_int &increment) > +incr_vec_index (const offset_int &increment) > { > unsigned i; > > @@ -2275,7 +2275,7 @@ incr_vec_index (const widest_int &increm > > static tree > create_add_on_incoming_edge (slsr_cand_t c, tree basis_name, > - widest_int increment, edge e, location_t loc, > + offset_int increment, edge e, location_t loc, > bool known_stride) > { > tree lhs, basis_type; > @@ -2299,7 +2299,7 @@ create_add_on_incoming_edge (slsr_cand_t > { > tree bump_tree; > enum tree_code code = plus_code; > - widest_int bump = increment * wi::to_widest (c->stride); > + offset_int bump = increment * wi::to_offset (c->stride); > if (wi::neg_p (bump) && !POINTER_TYPE_P (basis_type)) > { > code = MINUS_EXPR; > @@ -2427,7 +2427,7 @@ create_phi_basis_1 (slsr_cand_t c, gimpl > feeding_def = gimple_assign_lhs (basis->cand_stmt); > else > { > - widest_int incr = -basis->index; > + offset_int incr = -basis->index; > feeding_def = create_add_on_incoming_edge (c, basis_name, incr, > e, loc, known_stride); > } > @@ -2444,7 +2444,7 @@ create_phi_basis_1 (slsr_cand_t c, gimpl > else > { > slsr_cand_t arg_cand = base_cand_from_table (arg); > - widest_int diff = arg_cand->index - basis->index; > + 
offset_int diff = arg_cand->index - basis->index; > feeding_def = create_add_on_incoming_edge (c, basis_name, diff, > e, loc, known_stride); > } > @@ -2525,7 +2525,7 @@ replace_conditional_candidate (slsr_cand > basis_name, loc, KNOWN_STRIDE); > > /* Replace C with an add of the new basis phi and a constant. */ > - widest_int bump = c->index * wi::to_widest (c->stride); > + offset_int bump = c->index * wi::to_offset (c->stride); > > replace_mult_candidate (c, name, bump); > } > @@ -2614,7 +2614,7 @@ replace_uncond_cands_and_profitable_phis > { > /* A multiply candidate with a stride of 1 is just an artifice > of a copy or cast; there is no value in replacing it. */ > - if (c->kind == CAND_MULT && wi::to_widest (c->stride) != 1) > + if (c->kind == CAND_MULT && wi::to_offset (c->stride) != 1) > { > /* A candidate dependent upon a phi will replace a multiply by > a constant with an add, and will insert at most one add for > @@ -2681,7 +2681,7 @@ count_candidates (slsr_cand_t c) > candidates with the same increment, also record T_0 for subsequent use. */ > > static void > -record_increment (slsr_cand_t c, widest_int increment, bool is_phi_adjust) > +record_increment (slsr_cand_t c, offset_int increment, bool is_phi_adjust) > { > bool found = false; > unsigned i; > @@ -2786,7 +2786,7 @@ record_phi_increments_1 (slsr_cand_t bas > record_phi_increments_1 (basis, arg_def); > else > { > - widest_int diff; > + offset_int diff; > > if (operand_equal_p (arg, phi_cand->base_expr, 0)) > { > @@ -2856,7 +2856,7 @@ record_increments (slsr_cand_t c) > /* Recursive helper function for phi_incr_cost. 
*/ > > static int > -phi_incr_cost_1 (slsr_cand_t c, const widest_int &incr, gimple *phi, > +phi_incr_cost_1 (slsr_cand_t c, const offset_int &incr, gimple *phi, > int *savings) > { > unsigned i; > @@ -2883,7 +2883,7 @@ phi_incr_cost_1 (slsr_cand_t c, const wi > } > else > { > - widest_int diff; > + offset_int diff; > slsr_cand_t arg_cand; > > /* When the PHI argument is just a pass-through to the base > @@ -2925,7 +2925,7 @@ phi_incr_cost_1 (slsr_cand_t c, const wi > uses. */ > > static int > -phi_incr_cost (slsr_cand_t c, const widest_int &incr, gimple *phi, > +phi_incr_cost (slsr_cand_t c, const offset_int &incr, gimple *phi, > int *savings) > { > int retval = phi_incr_cost_1 (c, incr, phi, savings); > @@ -2981,10 +2981,10 @@ optimize_cands_for_speed_p (slsr_cand_t > > static int > lowest_cost_path (int cost_in, int repl_savings, slsr_cand_t c, > - const widest_int &incr, bool count_phis) > + const offset_int &incr, bool count_phis) > { > int local_cost, sib_cost, savings = 0; > - widest_int cand_incr = cand_abs_increment (c); > + offset_int cand_incr = cand_abs_increment (c); > > if (cand_already_replaced (c)) > local_cost = cost_in; > @@ -3027,11 +3027,11 @@ lowest_cost_path (int cost_in, int repl_ > would go dead. */ > > static int > -total_savings (int repl_savings, slsr_cand_t c, const widest_int &incr, > +total_savings (int repl_savings, slsr_cand_t c, const offset_int &incr, > bool count_phis) > { > int savings = 0; > - widest_int cand_incr = cand_abs_increment (c); > + offset_int cand_incr = cand_abs_increment (c); > > if (incr == cand_incr && !cand_already_replaced (c)) > savings += repl_savings + c->dead_savings; > @@ -3239,7 +3239,7 @@ ncd_for_two_cands (basic_block bb1, basi > candidates, return the earliest candidate in the block in *WHERE. 
*/ > > static basic_block > -ncd_with_phi (slsr_cand_t c, const widest_int &incr, gphi *phi, > +ncd_with_phi (slsr_cand_t c, const offset_int &incr, gphi *phi, > basic_block ncd, slsr_cand_t *where) > { > unsigned i; > @@ -3255,7 +3255,7 @@ ncd_with_phi (slsr_cand_t c, const wides > ncd = ncd_with_phi (c, incr, as_a <gphi *> (arg_def), ncd, where); > else > { > - widest_int diff; > + offset_int diff; > > if (operand_equal_p (arg, phi_cand->base_expr, 0)) > diff = -basis->index; > @@ -3282,7 +3282,7 @@ ncd_with_phi (slsr_cand_t c, const wides > return the earliest candidate in the block in *WHERE. */ > > static basic_block > -ncd_of_cand_and_phis (slsr_cand_t c, const widest_int &incr, slsr_cand_t *where) > +ncd_of_cand_and_phis (slsr_cand_t c, const offset_int &incr, slsr_cand_t *where) > { > basic_block ncd = NULL; > > @@ -3308,7 +3308,7 @@ ncd_of_cand_and_phis (slsr_cand_t c, con > *WHERE. */ > > static basic_block > -nearest_common_dominator_for_cands (slsr_cand_t c, const widest_int &incr, > +nearest_common_dominator_for_cands (slsr_cand_t c, const offset_int &incr, > slsr_cand_t *where) > { > basic_block sib_ncd = NULL, dep_ncd = NULL, this_ncd = NULL, ncd; > @@ -3385,7 +3385,7 @@ insert_initializers (slsr_cand_t c) > gassign *init_stmt; > gassign *cast_stmt = NULL; > tree new_name, incr_tree, init_stride; > - widest_int incr = incr_vec[i].incr; > + offset_int incr = incr_vec[i].incr; > > if (!profitable_increment_p (i) > || incr == 1 > @@ -3550,7 +3550,7 @@ all_phi_incrs_profitable_1 (slsr_cand_t > else > { > int j; > - widest_int increment; > + offset_int increment; > > if (operand_equal_p (arg, phi_cand->base_expr, 0)) > increment = -basis->index; > @@ -3681,7 +3681,7 @@ replace_one_candidate (slsr_cand_t c, un > tree orig_rhs1, orig_rhs2; > tree rhs2; > enum tree_code orig_code, repl_code; > - widest_int cand_incr; > + offset_int cand_incr; > > orig_code = gimple_assign_rhs_code (c->cand_stmt); > orig_rhs1 = gimple_assign_rhs1 (c->cand_stmt); > @@ -3839,7 +3839,7
@@ replace_profitable_candidates (slsr_cand > { > if (!cand_already_replaced (c)) > { > - widest_int increment = cand_abs_increment (c); > + offset_int increment = cand_abs_increment (c); > enum tree_code orig_code = gimple_assign_rhs_code (c->cand_stmt); > int i; > > --- gcc/real.cc.jj 2023-10-10 11:55:51.273421759 +0200 > +++ gcc/real.cc 2023-10-11 11:01:32.105229780 +0200 > @@ -1477,7 +1477,7 @@ real_to_integer (const REAL_VALUE_TYPE * > wide_int > real_to_integer (const REAL_VALUE_TYPE *r, bool *fail, int precision) > { > - HOST_WIDE_INT val[2 * WIDE_INT_MAX_ELTS]; > + HOST_WIDE_INT valb[WIDE_INT_MAX_INL_ELTS], *val; > int exp; > int words, w; > wide_int result; > @@ -1516,7 +1516,11 @@ real_to_integer (const REAL_VALUE_TYPE * > is the smallest HWI-multiple that has at least PRECISION bits. > This ensures that the top bit of the significand is in the > top bit of the wide_int. */ > - words = (precision + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT; > + words = ((precision + HOST_BITS_PER_WIDE_INT - 1) > + / HOST_BITS_PER_WIDE_INT); > + val = valb; > + if (UNLIKELY (words > WIDE_INT_MAX_INL_ELTS)) > + val = XALLOCAVEC (HOST_WIDE_INT, words); > w = words * HOST_BITS_PER_WIDE_INT; > > #if (HOST_BITS_PER_WIDE_INT == HOST_BITS_PER_LONG) > --- gcc/tree-ssa-loop-ivopts.cc.jj 2023-10-10 11:55:51.384420222 +0200 > +++ gcc/tree-ssa-loop-ivopts.cc 2023-10-11 11:01:32.129229447 +0200 > @@ -1036,10 +1036,12 @@ niter_for_exit (struct ivopts_data *data > names that appear in phi nodes on abnormal edges, so that we do not > create overlapping life ranges for them (PR 27283). 
*/ > desc = XNEW (class tree_niter_desc); > + ::new (static_cast<void *> (desc)) tree_niter_desc (); > if (!number_of_iterations_exit (data->current_loop, > exit, desc, true) > || contains_abnormal_ssa_name_p (desc->niter)) > { > + desc->~tree_niter_desc (); > XDELETE (desc); > desc = NULL; > } > @@ -7894,7 +7896,11 @@ remove_unused_ivs (struct ivopts_data *d > bool > free_tree_niter_desc (edge const &, tree_niter_desc *const &value, void *) > { > - free (value); > + if (value) > + { > + value->~tree_niter_desc (); > + free (value); > + } > return true; > } > > --- gcc/gengtype.cc.jj 2023-10-10 11:55:51.129423753 +0200 > +++ gcc/gengtype.cc 2023-10-11 11:01:32.162228989 +0200 > @@ -5235,7 +5235,6 @@ main (int argc, char **argv) > POS_HERE (do_scalar_typedef ("FIXED_VALUE_TYPE", &pos)); > POS_HERE (do_scalar_typedef ("double_int", &pos)); > POS_HERE (do_scalar_typedef ("offset_int", &pos)); > - POS_HERE (do_scalar_typedef ("widest_int", &pos)); > POS_HERE (do_scalar_typedef ("int64_t", &pos)); > POS_HERE (do_scalar_typedef ("poly_int64", &pos)); > POS_HERE (do_scalar_typedef ("poly_uint64", &pos)); > --- gcc/graphite-isl-ast-to-gimple.cc.jj 2023-10-10 11:55:51.186422964 +0200 > +++ gcc/graphite-isl-ast-to-gimple.cc 2023-10-11 11:01:32.173228835 +0200 > @@ -274,7 +274,7 @@ widest_int_from_isl_expr_int (__isl_keep > isl_val *val = isl_ast_expr_get_val (expr); > size_t n = isl_val_n_abs_num_chunks (val, sizeof (HOST_WIDE_INT)); > HOST_WIDE_INT *chunks = XALLOCAVEC (HOST_WIDE_INT, n); > - if (n > WIDE_INT_MAX_ELTS > + if (n > WIDEST_INT_MAX_ELTS > || isl_val_get_abs_num_chunks (val, sizeof (HOST_WIDE_INT), chunks) == -1) > { > isl_val_free (val); > --- gcc/gimple-ssa-warn-alloca.cc.jj 2023-10-10 11:55:51.150423462 +0200 > +++ gcc/gimple-ssa-warn-alloca.cc 2023-10-11 11:01:32.179228752 +0200 > @@ -310,7 +310,7 @@ pass_walloca::execute (function *fun) > > enum opt_code wcode > = is_vla ?
OPT_Wvla_larger_than_ : OPT_Walloca_larger_than_; > - char buff[WIDE_INT_MAX_PRECISION / 4 + 4]; > + char buff[WIDE_INT_MAX_INL_PRECISION / 4 + 4]; > switch (t.type) > { > case ALLOCA_OK: > @@ -329,6 +329,7 @@ pass_walloca::execute (function *fun) > "large"))) > && t.limit != 0) > { > + gcc_assert (t.limit.get_len () < WIDE_INT_MAX_INL_ELTS); > print_decu (t.limit, buff); > inform (loc, "limit is %wu bytes, but argument " > "may be as large as %s", > @@ -347,6 +348,7 @@ > : G_("argument to %<alloca%> is too large"))) > && t.limit != 0) > { > + gcc_assert (t.limit.get_len () < WIDE_INT_MAX_INL_ELTS); > print_decu (t.limit, buff); > inform (loc, "limit is %wu bytes, but argument is %s", > is_vla ? warn_vla_limit : adjusted_alloca_limit, > --- gcc/value-range-pretty-print.cc.jj 2023-10-10 11:55:51.481418879 +0200 > +++ gcc/value-range-pretty-print.cc 2023-10-11 11:01:32.187228641 +0200 > @@ -99,12 +99,19 @@ vrange_printer::print_irange_bitmasks (c > return; > > pp_string (pp, " MASK "); > - char buf[WIDE_INT_PRINT_BUFFER_SIZE]; > - print_hex (bm.mask (), buf); > - pp_string (pp, buf); > + char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p; > + unsigned len_mask = bm.mask ().get_len (); > + unsigned len_val = bm.value ().get_len (); > + unsigned len = MAX (len_mask, len_val); > + if (len > WIDE_INT_MAX_INL_ELTS) > + p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4); > + else > + p = buf; > + print_hex (bm.mask (), p); > + pp_string (pp, p); > pp_string (pp, " VALUE "); > - print_hex (bm.value (), buf); > - pp_string (pp, buf); > + print_hex (bm.value (), p); > + pp_string (pp, p); > } > > void > --- gcc/gimple-ssa-sprintf.cc.jj 2023-10-10 11:55:51.148423490 +0200 > +++ gcc/gimple-ssa-sprintf.cc 2023-10-11 11:01:32.198228488 +0200 > @@ -1181,8 +1181,15 @@ adjust_range_for_overflow (tree dirtype, > *argmin), > size_int (dirprec))))) > { > - *argmin = force_fit_type (dirtype, wi::to_widest (*argmin), 0, false); > - *argmax = force_fit_type
(dirtype, wi::to_widest (*argmax), 0, false); > + unsigned int maxprec = MAX (argprec, dirprec); > + *argmin = force_fit_type (dirtype, > + wide_int::from (wi::to_wide (*argmin), maxprec, > + TYPE_SIGN (argtype)), > + 0, false); > + *argmax = force_fit_type (dirtype, > + wide_int::from (wi::to_wide (*argmax), maxprec, > + TYPE_SIGN (argtype)), > + 0, false); > > /* If *ARGMIN is still less than *ARGMAX the conversion above > is safe. Otherwise, it has overflowed and would be unsafe. */ > --- gcc/omp-general.cc.jj 2023-10-10 11:55:51.254422022 +0200 > +++ gcc/omp-general.cc 2023-10-11 11:01:32.233228002 +0200 > @@ -1986,13 +1986,17 @@ omp_get_context_selector (tree ctx, cons > return NULL_TREE; > } > > +/* Needs to be a GC-friendly widest_int variant, but precision is > + desirable to be the same on all targets. */ > +typedef generic_wide_int <fixed_wide_int_storage <1024> > score_wide_int; > + > /* Compute *SCORE for context selector CTX. Return true if the score > would be different depending on whether it is a declare simd clone or > not. DECLARE_SIMD should be true for the case when it would be > a declare simd clone.
*/ > > static bool > -omp_context_compute_score (tree ctx, widest_int *score, bool declare_simd) > +omp_context_compute_score (tree ctx, score_wide_int *score, bool declare_simd) > { > tree construct = omp_get_context_selector (ctx, "construct", NULL); > bool has_kind = omp_get_context_selector (ctx, "device", "kind"); > @@ -2007,7 +2011,11 @@ omp_context_compute_score (tree ctx, wid > if (TREE_PURPOSE (t3) > && strcmp (IDENTIFIER_POINTER (TREE_PURPOSE (t3)), " score") == 0 > && TREE_CODE (TREE_VALUE (t3)) == INTEGER_CST) > - *score += wi::to_widest (TREE_VALUE (t3)); > + { > + tree t4 = TREE_VALUE (t3); > + *score += score_wide_int::from (wi::to_wide (t4), > + TYPE_SIGN (TREE_TYPE (t4))); > + } > if (construct || has_kind || has_arch || has_isa) > { > int scores[12]; > @@ -2028,16 +2036,16 @@ omp_context_compute_score (tree ctx, wid > *score = -1; > return ret; > } > - *score += wi::shifted_mask <widest_int> (scores[b + n], 1, false); > + *score += wi::shifted_mask <score_wide_int> (scores[b + n], 1, false); > } > if (has_kind) > - *score += wi::shifted_mask <widest_int> (scores[b + nconstructs], > + *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs], > 1, false); > if (has_arch) > - *score += wi::shifted_mask <widest_int> (scores[b + nconstructs] + 1, > + *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs] + 1, > 1, false); > if (has_isa) > - *score += wi::shifted_mask <widest_int> (scores[b + nconstructs] + 2, > + *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs] + 2, > 1, false); > } > else /* FIXME: Implement this. */ > @@ -2051,9 +2059,9 @@ struct GTY(()) omp_declare_variant_entry > /* NODE of the variant. */ > cgraph_node *variant; > /* Score if not in declare simd clone. */ > - widest_int score; > + score_wide_int score; > /* Score if in declare simd clone. */ > - widest_int score_in_declare_simd_clone; > + score_wide_int score_in_declare_simd_clone; > /* Context selector for the variant. */ > tree ctx; > /* True if the context selector is known to match already.
*/ > @@ -2214,12 +2222,12 @@ omp_resolve_late_declare_variant (tree a > } > } > > - widest_int max_score = -1; > + score_wide_int max_score = -1; > varentry2 = NULL; > FOR_EACH_VEC_SAFE_ELT (entryp->variants, i, varentry1) > if (matches[i]) > { > - widest_int score > + score_wide_int score > = (cur_node->simdclone ? varentry1->score_in_declare_simd_clone > : varentry1->score); > if (score > max_score) > @@ -2300,8 +2308,8 @@ omp_resolve_declare_variant (tree base) > > if (any_deferred) > { > - widest_int max_score1 = 0; > - widest_int max_score2 = 0; > + score_wide_int max_score1 = 0; > + score_wide_int max_score2 = 0; > bool first = true; > unsigned int i; > tree attr1, attr2; > @@ -2311,8 +2319,8 @@ omp_resolve_declare_variant (tree base) > vec_alloc (entry.variants, variants.length ()); > FOR_EACH_VEC_ELT (variants, i, attr1) > { > - widest_int score1; > - widest_int score2; > + score_wide_int score1; > + score_wide_int score2; > bool need_two; > tree ctx = TREE_VALUE (TREE_VALUE (attr1)); > need_two = omp_context_compute_score (ctx, &score1, false); > @@ -2471,16 +2479,16 @@ omp_resolve_declare_variant (tree base) > variants[j] = NULL_TREE; > } > } > - widest_int max_score1 = 0; > - widest_int max_score2 = 0; > + score_wide_int max_score1 = 0; > + score_wide_int max_score2 = 0; > bool first = true; > FOR_EACH_VEC_ELT (variants, i, attr1) > if (attr1) > { > if (variant1) > { > - widest_int score1; > - widest_int score2; > + score_wide_int score1; > + score_wide_int score2; > bool need_two; > tree ctx; > if (first) > @@ -2552,7 +2560,7 @@ omp_lto_output_declare_variant_alt (lto_ > gcc_assert (nvar != LCC_NOT_FOUND); > streamer_write_hwi_stream (ob->main_stream, nvar); > > - for (widest_int *w = &varentry->score; ; > + for (score_wide_int *w = &varentry->score; ; > w = &varentry->score_in_declare_simd_clone) > { > unsigned len = w->get_len (); > @@ -2602,15 +2610,15 @@ omp_lto_input_declare_variant_alt (lto_i > omp_declare_variant_entry varentry; > 
varentry.variant > = dyn_cast <cgraph_node *> (nodes[streamer_read_hwi (ib)]); > - for (widest_int *w = &varentry.score; ; > + for (score_wide_int *w = &varentry.score; ; > w = &varentry.score_in_declare_simd_clone) > { > unsigned len2 = streamer_read_hwi (ib); > - HOST_WIDE_INT arr[WIDE_INT_MAX_ELTS]; > - gcc_assert (len2 <= WIDE_INT_MAX_ELTS); > + HOST_WIDE_INT arr[WIDE_INT_MAX_HWIS (1024)]; > + gcc_assert (len2 <= WIDE_INT_MAX_HWIS (1024)); > for (unsigned int j = 0; j < len2; j++) > arr[j] = streamer_read_hwi (ib); > - *w = widest_int::from_array (arr, len2, true); > + *w = score_wide_int::from_array (arr, len2, true); > if (w == &varentry.score_in_declare_simd_clone) > break; > } > --- gcc/godump.cc.jj 2023-10-10 11:55:51.169423199 +0200 > +++ gcc/godump.cc 2023-10-11 11:01:32.244227850 +0200 > @@ -1154,7 +1154,11 @@ go_output_typedef (class godump_containe > snprintf (buf, sizeof buf, HOST_WIDE_INT_PRINT_UNSIGNED, > tree_to_uhwi (value)); > else > - print_hex (wi::to_wide (element), buf); > + { > + wide_int w = wi::to_wide (element); > + gcc_assert (w.get_len () <= WIDE_INT_MAX_INL_ELTS); > + print_hex (w, buf); > + } > > mhval->value = xstrdup (buf); > *slot = mhval; > --- gcc/c-family/c-warn.cc.jj 2023-10-10 11:55:50.991425664 +0200 > +++ gcc/c-family/c-warn.cc 2023-10-11 11:01:32.259227642 +0200 > @@ -1517,13 +1517,15 @@ match_case_to_enum_1 (tree key, tree typ > return; > > char buf[WIDE_INT_PRINT_BUFFER_SIZE]; > + wide_int w = wi::to_wide (key); > > + gcc_assert (w.get_len () <= WIDE_INT_MAX_INL_ELTS); > if (tree_fits_uhwi_p (key)) > - print_dec (wi::to_wide (key), buf, UNSIGNED); > + print_dec (w, buf, UNSIGNED); > else if (tree_fits_shwi_p (key)) > - print_dec (wi::to_wide (key), buf, SIGNED); > + print_dec (w, buf, SIGNED); > else > - print_hex (wi::to_wide (key), buf); > + print_hex (w, buf); > > if (TYPE_NAME (type) == NULL_TREE) > warning_at (DECL_SOURCE_LOCATION (CASE_LABEL (label)), > --- gcc/testsuite/gcc.dg/bitint-38.c.jj 2023-10-11 11:01:32.260227628 +0200 >
+++ gcc/testsuite/gcc.dg/bitint-38.c 2023-10-11 11:01:32.260227628 +0200 > @@ -0,0 +1,18 @@ > +/* PR c/102989 */ > +/* { dg-do compile { target { bitint } } } */ > +/* { dg-options "-std=c2x" } */ > + > +#if __BITINT_MAXWIDTH__ >= 16319 > +constexpr unsigned _BitInt(16319) a > + = 4680985677016772612762154819367704422543836437669953782416002271793962834329168658813322158671064891592515774953720856634870923177432447705972876331990053749984553335872803574901499931018113920514837614959871082649647383371181551558627154389107216612303325331853355817576005118468541159326372619696331343658686953639145705781100644718684758413485893669336454109876999790801402128499090811881709104649674862313589352128970962606260330555361418355992844984747378584876584701151447719231148263122838630355037006001414407244263646996363302404142712756260212949394224832506196290059599922434186612301221326677697811837903387593458849038216955909915772285205237253020482154478415731138408115936384134250549382132629614483178985741405330900049927326885251150047829738932440914270003968904271522253086610789546710660692344537575931817539008652034390354024803064135722396104671425919208091873674380711701009695674400446914274879597856373383816513099167820636702860465475852408378923071709288494858771 
8679328070760084086678347179914817925081838771618312732334619953338746336344235621880377969700575932441037647685522242087626242598557198281818035387041014982421454431301328519954419349662422321998640294484962248942200767856494617479789279508933089953562472777752533078949270357456411225295514777094292976154560435086940424655827475235351037015722948500440213104315345429039792938727637405493857897687860646721735939868427505051910441391428602410680811634071227305942736229370315135549833621317069889444840536939875718852316046029271487585787996817357832819135821597249351327129787563440079330192925005282225863601565085768302390070984541083848793677853325040788618095457604634069790858402095129504884493804786565702907285079744297614689529418499373699950548566574281131379540553067419984805580275990178637682206952934297126196311933247650406428586936204966208340578982843313215493324281743280941581054818065875039369227272958623284206565849097120192778001425881533311545969511794227355187664684482 1076723664040282772834511419891351278169017103987094803829594286352340468346618726088781492626816188657331359104171819822673805856317828499039088088223137258297373929043307673570090396947789598799922928643843532617012164811074618881774622628943539037974883812689130801860915090035870244061005819418130068390986470314677853605080103313411837904358287837401546257413240466939893527508931541065241929872307203876443882106193262544652290132364691671910332006127864146991404015366683569317248057949596070354929361158326955551600236075268435044105880162798380799161607987365282458662031599096921825176202707890730023698706855762932691688259365358964076595824577775275991183149118372047206055118463112864604063853894820407249837871368934941438119680605528546887256934334246075596746410297954458632358171428714141820918183384435681332379317541048252391710712196623406338702061195213724569303285402242853671386113148211535691685461836458295037538034378318055108240082414441205300401526732399959228346926528 
5868527433894909787347879267219998553887947118371644230077196261091790054661137064507652696875808198227721893010845036272973896751342282223372868676411105110619802312478845334924428989367434296419583141353290734064957763692081580321158838506910105690489839411267714779909760922523919728126916698474467985072441061216678854230256137692581027738555375097332958050133139374022828048972138472210726471116051723494645640899149064935081338553896271776634260577632520862863253438112547576818030682762780487579974252843347131902268184630230744619001769580100555724349831351711453652423392733269844651810642872646454708320911151006405841043755773040569519694562001384853135600092723382281036377638632892616732587267367534070441436640794794969725805605344948061708104693047730058735906262800723879996685225467479857015996139751011885438578521415592516340586767183080003248698096281994426815656156629126260227960644144961063442364312856976883577079929899665615571717299720935330074769478622159225832048111890 15550505642082475400647639520782187776825395598257421714106473869797642678266380755873356747812273977691604147842741151722919464734890326772594979022403228191075586910464204870254674290437668861177639713112762996390246102030994917186957826982084194156870398312336059100521566034092740694642613192909850644003933745129291062576341213874815510099835708723355432970090139671120232910747665906191360160259512198160849784197597300106223945960886603127136037120000864968668651452411048372895607382907494278810971475663944948791458618662250238375166523484847507342040066801856222328988662049579299600545682490412754483621051190231623196265549391964259780178070495642538883789503379406531279338866955157646654913405181879254189185904298325865503395688786311067669273609670603076582607253527084977744533187145642686236350165593980428575119329911921382240780504527422630654086941060242757131313184709635181001199631726283364158943337968797uwb > + + 
99354435180574564299271266552222578172075113116713358325600655730552766787479906529073488397418185627579390846490733481721083971838270203779417259831075136362874065305263582535084372902419372769083862829043530791029045356756086045764861629983194277028512784082136414548372230796164016158756724532501484216792238294178342275181330910551802702492661616766771761496751642576408123442979356507296298018787580599440901688627305198172033523414583103638114823180832702324343293173238228189911345006016698689223960135129694778394564723458123123219242152418497721476874557602245592409527373190093485408949663635681583495013552292646467700180715905024417027872690979739798998376831221941031100897284256766902460911469939550379184257728400222882228329325425160915011494771608565644643769102932300919635731192306480266678963993527909826119575699789720381785195702784475407075028616785026579051927432258932256639948075689186448982737022854836763857176511040420021053529931765121664200850644524317531813 6580583354892267674889041242033269460909681977976560034521639039430725755677822374344395898396211372319355124789799542376234809210389368371137389713916828942026766061140994764454871500778783295925116755317509663914767477611797310044790324362690289238226376759132803823570859340156379301941812445316638647179246842100385589420658435473148936366813407794620354606723723565774648029683165179179038598139755845890590464139424627978274673600910186236686806836341197638855769792191431717937120644408539077963483136972337005076467885284677936949723237478069190528099236807976274735224551960726415419714895889695566190421490918495228999614205060482160874990041784513772759690310045235006755130584099828048277520988327887307189558875181146234251782575349381499791841843745547499242224391954996737196442345744028729627085560585095468591264430335401905871691673552253306532305775547980366878253025038198821107503465576012325024944144068433845095382329034690968982252765269872350287231257030526119676847749889 
8020793071808758903381796873868682378850925211629392760628685222745073544116615635557910805357623590218023715832716372532519372862093828545797325567803691998051785156065861566888871461130133522039321843439017964382030080752476709398731341173062430275003111954907627837208488348686666904765710656917706470924318432160155450726007668035494571779793129212242101293274853237850848806152774463689243426683295884648680790240363097015218347966399166380090370628591288712305133171869639679922854066493076773166970190482988828017031016891561971986279675371963020932469337264061317786330566839383989384760935590299287963546863848119999451739548405124001514033096695605580766121611440638549988895970262425133218159848061727217163487131806481686766843789971465247903534853837951413845786667122427182648989156599529647439419553785158561613114023267303869927565170507781782366447011340851258178534101585950081423437703778492347448230473897643505773957385504112182446690585033823747175966929091293693201061858670 1412091290914528612922762760129106240712411654020891616069444238262454616085949357324819001982408622934094423088006900195508316304798830005798846146019069617230113544498045767943398260569869576800909160468486734197235296943846538094003772185450752691487661291946370394082255156780133321880749972176678354949400430149178774383549026731074531642752800102510403600409373087389256894757251316390320119790096427135422928942190593529729331511123761973838149253632886709955562694478049949250867917281369066932495071150978070603658721109982107683360783895087241848635972859877369120730719801371625907796646750334291193278553078271746737492574629830542216317975270099875957324602221973676084409734882118984714393020513888068185216596858736723838280213298481534102049266077109716782685416775844216952380117843513860478691587871566346306938724280678649803200632934358875747458590670249884857423532785487044675442987935115835876597137116770657923711993294193723927203219818622698900248323489998654493398563392 
20386853162641984444934998176248821703154774794026863423846665361147912580310179333239849314145158103813724371277156031826070213656189218428551171492579367736652650240510840524479280661922149370381404863668038229922105064658335083314946842545978050497021795217124947959575065471749872278802756371390871441004232633252611825748658593540667831098874027223327541523742857750954119615708541514145110863925049204517574000824797900817585376961462754521495100198829675100958066639531958106704159717265035205597161047879510849900587565746603225763129877434317949842105742386965886137117798642168190733367414126797929434627532307855448841035433795229031275545885872876848846666666475465866905332293095381494096702328649920740506658930503053162777944821433383407283155178707970906458023827141681140372968356084617001053870499079884384019820875585843129082894687740533946763756846924952825251383026364635539377880784234770789463152435704464616uwb; > +constexpr unsigned _BitInt(16319) b > + = 2012974456709302727574100507062899826244916604651702690369568375585444875683436016651313240507879631460278199833070536840736748203015663720699487742558225012459510671839702819911277389210572747802962612254071867246681224417252196882500481259668419053440016929124501988666433463234720317290647183004791877987066729683082610876903638426760496950933639842151648267717069732314480723734513076773386141566503759124994849008586735618331910154116717658619505172176655219466753041714225055613389568844166340061301478127682539435897545896747514780658901350656941594549684113110073818042623846495062926837977401328562704962152919204773680308909275189151399260541908650258823333205729663856729030609391087874209350087386427717471941018364076582158058783196771670836397622553590531790813778049726744441676017664770583404699601082021249424408322225403770069952978999103344897991212850771034350046678683935107104578823920023197128887935206232962765408343031754983248314869651416635487070271657078325770796 
0927427529476249626444239951812293100465038963807939297639901456086408459677292249078230581624034160083198437374539728677906306289960873601083706201882999243554025429957091619812945018432503309674349427513057767160754691227365332241845175797106713295593063635202655344273695438810685712451003351469460085582752740414723264094665962205140763820691773090780866423727990711323748512766522537850976590598658397979845215595029782750537140603588592215363608992433922289542233458102634259275757690440754308009593855238137227351798446486981151672766513716998027602215751256719370429397129549459120277202327118788743080998483470436192625398340057850391478909668185290635380423955404607217710958636050373730838469336370845039431945543326700579270919052885975364141422331087288874462285858637176621255141698264412903522678033317989170115880081516284097559300133507799471895326457336815172421155995525168781635131143991136416642016744949082321204689839861376266795485532171923826942486502913400286963940309484 5074841294235761567980449851987801590557885255383108780893978951751291620996718943375268012352804274283212053215307351082398485942787208393179217828313523635411999195575775975468767044626129049246944319030723328643414657452918667180676010414042124309419561774077634818455683391702241961931064630304090800731366054338697758609749399910085968749785062456897269667152066394382597246893010196922581169913176950122050361571770395369054940058339483843974464929181291852743598061454541482411319258385620699919348723293144520169007289481864773872231619941455512161560322110383194752708538186600790658951199233733174967771841773153459237877008039869651750332243754352492249491511910065745115190552207411746311658792996881181387283802195501430068948175222703384724138990797519173145057548020529886221743921352071397159602123468588824225432226214084338178171815952010864033683018390805924551154638294257081323458112709114569289613012652231019895244815217219698389802086475280385093285017054289507498200807204 
1877671808414208650126741828424137039886856128227784839167384793724787311771990610344101557824515267318471953889607369727247525026122768566005894410708733378610476162439181617541433899921526019016255148934343633249264588702955196457882643215670087245921660584346388422834316715992479275242981606484147943813466274962163956020344387132681012987276353911428481133080521318871633347106971027058394584162633836170084641092775091666390836768318808419325838493512223663993433528416052204206508892342192866072409572603964283634354221147328239255437197307410877079744744865442832584525330488906202103159953143660677502931584967475621398893234965164055257188078046145218709440040840330980650769823007158480986163459600042530048580517485340677496132105508699566551386838228504834826425017438879318409352467562176255853776374723731447388317368663357627383694650723788061962763254309361928109667564387774921758849538329207871323025399352532620973285930184201644018901002773323499765774835125335966401889419734 6327201303258090754079801393874104215986193719394144148559622409051961205332355846077533183278890738832391535561074612724819789952480872328880408266970201766239451001690274739141595541572957753788951050043026811943691163688663710637928472363177936029259448725818579129920714382357882142208643606823754520733994646572586821541644398149238544337745998203264678454665487925173493921777764033537269522992103115842823750405588538846833724101543165897489915300004787110814394934465518176677482202804123781727309993329004830726928892557850582806559007396866888620985629055058474721708813614135721948922060211334334572381348586196886746758900465692833094336637178459072850215866106799456460266354416689624866015411034238864944123721969568161372557215009049887790769403406590484422511214573790761107726077762451440539965975955360773797196902546431341823788555069435728043202455375041817472821677779625286961992491729576392881089462100341878uwb > + / 42uwb; > +constexpr unsigned _BitInt(16319) c > + = 
2627723238202844734593528210036441397644224112049184868378010831834577492039736645259692442133560537468665927861231280160488737037607638644451145031889554569557078457728559890665090192944430229603341219963259499837606412471422041491392321377944430683327738899570355221943057508092711119541704691117701907071384712882644783009643200396240346365655860043111527324887717787506338111147788805979885801605021342047585162041301679344551753922701997368269944795232238874886098194759343298573068474608818358322518434782511069732797329482620522756442576995050342343559716596929997568140697461994153850282719374276045524526948313436094002393398634421757710211480013425387953089006436252036847553573885474180629254262438647346127462098789135554198787366415702252216790859116465478750185454645773734152676351670503270525404617292626896899730237926158293326447540206319154834398220123044550465903886878634766771065824008882586957518822701333555929857984594869031685661169338699069178282184753549263922342 7223360712994033576990398197160051785889033125034223732954451076425681456628201904077784454089380196178912326887148822779198657689238010492393879170486604804437202791286852035982584159978541711417080787022338893101116171974852272032081114570327098305927880933671644227124990161298341841320653588271798586647749346370617067175316167393884414111921877638201303618067479025167446526964230732790261566590993315887290551248612349150417516918700813876388862131622594037955509016393068514645257179527317715173019090736514553638608004576856188118523434383702648256819068546345047653068719910165573154521302405552789235554333112380164692074092017083602440917300094238211450798274305773890594242881597233221582216100516212402569681571888843321851284369613879319709906369098535804168065394213774970627125064665536078444150533436796088491087726051879648804306086489894004214709726215682689504951069889191755818331155532574370572928592103344141366890552816031266922028893616252999452323417869066941579667306347 
1613572540792418096445006815472671637426015551116993769236905000141722943376810074187359103417921313777413085862282683858255797739853823398548217296703139254567248696079101149570408103776713947798346752251815365654448305519244177941397366865945576604838130455250898502853737564035949003922262966176561897745670199002376443298912801927760673401097511000258184731552675034906281464293064935209536776606120947583071904800720399805753234289940099824156768757863383436818507697242587247129471298448651825227005098698105411475159889557097847902482665935815324140919836703764265342890790987425495051276941605211107000354966589327240076217595000912275954778312003253352426141626242180107535863067944827325007651362995480529583458724884469690329738714185654845700964406091254014395163490619510733447727538171687315331867404492065331848584098243312698792767523028190759388941917646038806690598049147052029322201145747693079459384463557440930584834660987410296711333053084516015101240973366680443621409948422 3089535423200793619361066621523635138333071949675857709510246623578270082057593845373627754644593213511694799340435697589005171730412869312569995144579132884366864724543979793369135501578123803814859733983134834104975195720468081385513827225323421903045816417919536888887898936264050948644053011233768789016564682415233888521861166556793342365223662116883349759476292258652315155424431628407536492331622345779833699544022980163824904455584178665286877833385762620171269482394514620841257256794740307865515944817846748833567385388698214360784336910350490583704914700641332408720492396834716240637214630411024743621070432983803396754929609470890904235280794216538905439121760908467676546499780390041565327804122058643413369880265872674895012298018361509102904924291929842806674593714859387999453925424007022090069466220074179663268737341495281700093809393049733825916843964997096377440683341143111392219408276539024116171510614263868107283976403597687722315272782924847563997002977790058959538360498 
9099084081251802305001465530685587689066710306032849298712531664047230963409638484129598076118133347670029704549206295184751171783054889490211218045322681317529569999778899567668829982207035948032411418382057247326141072264502161892285323531743728756335449414720326329614400327415751813608405440522389476951223717685562226240221655814783640319063683104993438443847695342093582440489676230855515734722099028773790309518629302472390856918840009781940193713784596688294176313226823907143925396584175086934911386332502448539920116580493698106175151294846382915609543814748269873022997601962804377576934064368480060369871027634248583037300264157126892396407333810094970488786868749240778818119777818968060847669660858189435863648299750130319878885182309492320093569553086644726783916663680961005542160003603514646606310756647257217877792590840884087816175376150368236330721380807047180835128240716072193739218623529235235449408073833764uwb > + >> 171; > +static_assert (a == 104035420857591336912033421371590282594618949554383312108016658002346729621809075187886810556089250519171906621444454338355954895015702651485390136163065190112858618641136386109985872833437486689598700444003401873678692740127267597323488784372301493640816109413989770365948235914632557318083097152197815560450925247817487980962431555270487460906147510436108215606628642367209525571478447319178007123437255461754491040756276160778293853969944521994107668165580080909219877874389675909142493269139537319578997141131109185638828370454486425623384865174757934426268782434751788699586973112527672021250884962359281306851455680239926549218932860934332800157896216992819480531309637672169509013220640901153010293602569164862363243469805553782278256652310412065059324510541006558913773071836572441888817803096026977339656338065485757937114708441754772139220505848611129471133288210945787143801106639643957649643750089633363257616620711210147673689610208240657756390397240974072579773 
7162336060266724299262682963027758975719589213184278834763816748178347253973659384064502014166609966276276365911948251796162437485064618322435452987925569419207749303869957009187515572296092974825920128445718247115395611994626163709678379653804662270113642199222328179939231910556356649808610513835713167107960093732940155401402535472529845314262948384287403829130743120794819828038911203687822621892816584532456037443706537312200079293055483326584042301614839097487647975268866161712528420802033072670478029856147852927977509276880795320201330707208437309025474886548360918372629573524086551681748289855499045088814700848416285092483580997302004276045023244723783719637838813548308405502839640824921442501923177782405482132673872892466160260890531866472104767880873491792392312121780373603932508064157181247926020018908264767767538029765717460742268649556278120260488458272740646354530823680093746349319942102049084520394078200064313371341392468379588894883788089175030766695753883598777226542320 3470320354145742841869795472799186154631385288573730129094228733379855432514817031425884584962254283999586850250406406681047191820544352342046667950146374296364655891915135310082529994904874562441551527081311638121766367661807914647092917287784017613115795691373814041086838720316968010349263776702775009771662737124600992709418630470128579612748138807983617697487500079502839532266478317788699680283395230308668613168191852557234122469290277763000256531531071762280960597416576452124575885006363492171314551026369237325119844147154972582617127637240421323781252125819313268498872048683068789228870983086306586111793007178693570562554975762384431236664489360478109692520183356042112794589756922036102025380888246082763911915622037570736969677850621708281909652070776450422110772285659921383413532725137107621514770958361581240471968542997294446402584844918179956881219978405772785713402046471903103404871352324277109089891640558983922159359479964068994923538490500501798825116238188381267330618026 
0931602902055966697959818348423522710110639396326239266299601139263260299521434523546406140610494389326654679284431132322144981017745231781290201550172288022219014695480722340733346810524613278322689559237011097328743609840024931300254707538619674324931023957662797178151131357638108862164917702657241608876888875152822934472871210395453237779282868767112670491355477607736558459506226763279722806223454862530846261212478858917574583089742594664412849677658245614783514210519230818425947916162496827685947964131847420075045403821417735560989294612338427979785664667342404360322691229080574383143194104895752448457393206937647986873989422753143333618385603582785837669832101260810460202314697058365446112520751877331125607781255602255658033499531518808006018903826482163757370770157446841421323038644940832376803068981340335707584011317358192377302802094242319541219701541955750707288766531879284239188942116170935670948579260796940039501429627634807289073224093389542774937118343634230323092968620 81371923061150409402403668284066920335645815769603890931600189625120845560771835017710222988445713995722670892970377791415975424998772977793133120924108755323766471601770964843725827421304729349535336212587039242582503381150992918495310760366078232133800372960134691178665615437284018675587037783965019497398984583781291648236566997741116811234934754542646608973862932050896956712947890625239848619289180051302224085308716715734850608995498117691600907423641124622236235949675965926735290984369155077055324647942699875972019355174794849379024365265476001505043957802797349447782453767742359446787304217770032967959809288342189111153359045680464231699344620995535326063943372491385550455978845273436611631962336651743357242055102619760848116407351488643448217122169718350824452317641509534606434395208225350712889271762643740106849245478364448395994915755050465135468245061369394410933866013068008514339549345174558881983866497072827311379042433413uwb); > +static_assert (b == 
479279632549833982755738215967357101486884905869453021516563898948915446591294289678884104882828483681018619007873937343032559095956110409690354224418625002966550159961834004740780330764422082810229193393826635058733624861250523067262019347540099774628575459315357616349150824579695313640630281667807589996920649924543478780215152006371546893079438057655154349456445174360590648508217399231758605134881847410713059287758746575793311941456361347290358374327775052253988819455767870384140373534325319062214637649448223675213701403987503519204500321584986093940400979311922337629196153927395934961423190792514929752893552191612781025930779806940809347748073488156862698382316586632554531097474068541478416687472958980350462147229542043370966376951612302580094672036569174235908042392792082009922861348754900810642762162386011767716267196524707159512614047405558309045526869231198654773018734270263596328291409529332649735222668150705420335319769465472201979730869384913211207207 5373996013739990697006554637202292010533321860069785825009277097128404199976537163430585637450535494816805148579561924571056517747554447120549116657350857400882429019761724655720340465974195193558337722024597541511768455489944562084450292229841009963137094549217451331681817905394129588975104478734693440715083683204782281607795336838872403491895763128753290640898354947825338982854931267559169706314889964518235856823428090439337046435662559651700143711569575086573569627124354652912728119674823637085164390655787621331870294794577940904392020709798017325360408809054191003750299218897721285030845109314351714839790187795971666305588199093482237700013773902733073730520307294138196179858239813740704437154850888294873651516867866531415605553976328397837869734756039081291031211259255824353775865994433632176594824860215124447150789997421456161924170543832752214317501857017117934870794479802957418094172659233722650272378842003962384939273591028858259485681280063522734650517124720700592024503190 
5445152238832105970200308151371801900107107616143235847115536995978281165233083750307528808742605565540002941143874829336203146501750257713925224473144855518861387693696103669523617994232375111611201101459297439748647388267459200813013679266349328732383431914791502242752803351817813918019855167200467126443959596212095412230012937785180621368904740496659226139300584975540396940968189138713630212621475457757421407899273838583419421850094135489271442461781867612967840281259964938951919393938448193171251996576357123654457926939171468811259400443993779102766652727502895609600502472189226835366234904950156893142674698374992326628993607966485208811438064202797698153274845831487974169502396605979807274335098034836109236427828852711258048141786054778320994100643663029556902570837898367870844766792830052796171750493189799905267492521148625102911003353413851945670464764491436591194854953791559798723403394543172251931597408230783241193488626433308391622670766594854714782494114377403163099298640 3589281430493343304207573431954440506367102005746914258775268625663056944615427077330312326664431034309894720122682694874274735620802316011315482410182991906165335883031756812018133914090861319389023790839528337203606889129436487920140167370284870924438860873830296648014424844378195912932551426780779819757525353368558050825303562419989528653425507781193568399131883673447888828695552112293654073088339775808234324436627659543962164946450396759723040075906766506152022264815158093674649622869572430121164843379253826764183953324829436751005035078152203675523168431161209463034491772102996315554878311000500752369796109685119745615468446576523546008325039060775520970963367909216533343057221662059707100715990114520515109428581554773471551782223970832412406073499896797949247197263055911053575580685552002226777990994346631851517791364630330551754443656577948498726362806681419705536740324268597539896282803552799726080554573302695958428417269671660306173853381343814024048279362738039470198839365 
706286164147555864933364363287875097138128425573909904433183795098670203800533548856219174579901097084123411402160448390274656216062207733804522678116007830485911118338137291415500040244636646228465275546613185451215477214924093897408659253897872331630294361379429268082112519489979283826532913282908147824847781517964779380824918394924322420104717839012960422523766744397106063463998218416521947089619846125464833145312281971994057275917591591279145274837283273569411904875883590818927011083766111368623876288661469697856984023924541117354584710728162060928747544449729071086406072820826707352705098469570212430005031769870770984490147544922541878582516496026055634218534739829767044431114272772863484628968800592047985977005687260574374332608765746965647976405949709304033414442630581488362251756922883517287565772653346189666094175256518980878632057889091042584644510374477219106080358138511257658994752983022904583136418485544787844335722425uwb); > +static_assert (c == 877910742369718983756939060508412117978592490852198574421032559122366792451965262581837372001950924590370700613263257217338625506420135573519875944068826251478098411179104273956630178489731637399492219205096327228843406034228851197156969768002652376081122551643005269975404468281889267981913199560021628096606273673238473241136165744439969588386509610342875962281386773554725997852931943688986401368721939056760428331801110079995345152096844126486603181395448862805847511434872927541414315891787470959955624718369585383855232108897344587608804255681047991066144937466199967508281110381445335329419488661296149273726327727155188903861073076047845956925614932199835041402306636381498931110972831171298902299624728018258792144918535392288593787760450040073877424000870994528979160501117773965772018160145351225988200456446241582865271490428972723521053727772138981668764336614520000117771211219751569557888748379298875543540138845614585448888053708836039799464321601482849566246 
0205686448548113229841097955613958440901375416256532864511852298696327611517233241324799070919491286426159788792631723833717451538043437364017185237743182402835670087683125602640318887451596650323528720128188198547270462971612157603487958526705005955580409441670771849388016438035850194585870327013409236236730914217722025655319472231141666790287955685713636274653565577454275838590350806168639165264676470440930351612992518904664647715805865941038423768376846697817543122409517591717292238745940345900530458551468519245767864531742102178628854376524513367983209186974575765707273973775386840081238803880335095740836386527208267311808973522450391189055739828936937359693167240524660624945856907042041257347192086984009640984509322622503890256046324768341632643546455779035376002061691113121234273164937984171774242327769915688742564049454163158318121818582764775268091292470889088445575108022688069271697198283151469645400870507006663799330661702702747443254220478311056407220749648103123435473381 5835208730552187341151209786784404558964588524975699899667232359656087068265936071288476301376185091512558347426364387962855698738699677293418712135210300114273729873885726742284413334588575122260492832433475214578049120087810369667863747603253414920332978483681609032604700190675353306116459095608887974519070883897641904030079983051686730294469340122451388381805960985594425706961500112962181441863870246158853022907449053406669059217439700137798133324937711920480432972814232484890568414170138076703081910957324642214513769972707454684597021527968182227457305657212026631030431211601014598336832495586844591088625369619943085350399708145578212681703887459419803788389699105928956705542918117397687718299410438578196037512469579622360911547558939620383631206904838624230010389486206816112538671492964636904178280343035479227922490985224047514289607138750504639061341508460897057144703039182990126916002853558594129248477604970769784327224466025218250890974545423433548473473960450795877572106353 
5699926870646542578883331119051762306186067523001099412719645903032216675157165664232169078747190660947349603478964371047816225566409299125144678788763535185293382682071978173375457816107340136266810981911392425229112574139527147434230557453697491827393851359741896378730899459343419189068773030249591068607233883641315916228107226354275825769958808983867746939746789934806529358175103584438984838716184743516032727606660368313170324641040912283279337675151268874519556402164606924599236339646810051353621165145061052331521169712577463884531324397308353641769207596248691884466743214435301972295965363863294829404998426686187015125531502334672467143049925799395804908806616087054502527659797515485553762026569035404102874274275507439659763196532038078250094456842405342003835752491712509924133499003218952646583819297211097086138006098680208194804434552641485715856993900589523667230634434821280585126992071104389130687587301633060167397324932707250357187351836675057507009105128859076478863019096 6776854031578939382690709022667421734442841784680826494146620589862829612704279521637740421694195051400095278084716974624615208392585573200182664157066813849346058321763156523965698465901396025152159642193562900743812715885811057212579017860488539960334406702752688595217360219470968738009774067915037157027492209108801337707562571266897723911401203374308490793226200974353356835311756384895692909802720948968131504604855466961987314701846460342135201914356152591684810924688350929140120187693089324255924634578576427004426339299493833434502951593902551451002292839635000904253250021884625417628756439862964325562720709528784964868687330847894476999577326582332350213148861205413652337499383416531545707272907994755638339630221576707954964236210962693804639714754668679841134928393081284209158098202683744650513918920168330598432362389777471870631039488408769354863001967531729415686631571754649uwb); > +#endif > > Jakub > -- Richard Biener SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 
90461 Nuernberg, Germany; GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)