Date: Fri, 6 Oct 2023 19:41:22 +0200
From: Jakub Jelinek
To: Richard Biener, Richard Sandiford
Cc: gcc-patches@gcc.gnu.org
Subject: [RFC] > WIDE_INT_MAX_PREC support in wide_int and widest_int

Hi!

On Thu, Oct 05, 2023 at 05:11:02PM +0200, Jakub Jelinek wrote:
> On Thu, Sep 28, 2023 at 04:03:55PM +0200, Jakub Jelinek wrote:
> > Your thoughts on all of this?
>
> So, here is some further progress on the patch (on top of the ipa_bits
> removal patch).

Further progress: this patch now passes bootstrap/regtest on x86_64-linux.

As I've mentioned before, wide_int still allocates based on precision rather
than on the length actually needed (while widest_int allocates based on
needed length), and the patch as is seems to do the very large allocations
in various cases even when no _BitInt is ever parsed in the source.  To see
how often that happens, I've applied the first attached incremental patch as
a hack to gather statistics on such allocations and then (in the same patch)
attempted to tweak the largest offenders.
The most common cause of huge (usually exactly 510 limb) allocations has
been that 5 spots in the sources do
force_fit_type (type, wi::to_widest (some_tree), ...), with a comment that
this is wanted to ensure the value is sign or zero extended properly
according to the original sign.  I've used
force_fit_type (type, wide_int::from (wi::to_wide (some_tree),
				      MAX (TYPE_PRECISION (type),
					   TYPE_PRECISION (TREE_TYPE (some_tree))),
				      TYPE_SIGN (TREE_TYPE (some_tree))), ...);
for that instead - force_fit_type takes const wide_int_ref &, so for
widest_int it is handed something with the 32640 bit precision, but it
wants to create wide_int rather than widest_int as the unary/binary
operation result, and that is why we allocate large vectors when trying to
wi::ext it.  I think the maximum of the two precisions is all we need.

Another problem was with the bit ccp TRUNC_DIV_EXPR UNSIGNED handling; for
some reason the widest_int mask is often sign extended rather than zero
extended, and trying to udiv_trunc something wi::neg_p again results in
510-ish limbs.  I think just zero extending it before the division, like we
do e.g. for arithmetic right shift, is the right thing.

With that, on make -j32 -k check-gcc I only saw such allocations in _BitInt
tests, and except for the newly added bitint-38.c test, which tests
unsigned _BitInt(16319), everything was quite small.  So perhaps with
cleaning up the force_fit_type + tree-ssa-ccp.cc hunks of the hack patch we
could get away with wide_int doing precision based allocations.

Another thing is that I've added #pragma GCC diagnostic to wide-int.h to
work around the PR111715 false positive warnings on tree-affine.o.  The
second attached patch (just compile tested) removes those pragmas again and
adds a short hack where we know that write_val for widest_int was passed
the exact length rather than an approximate upper bound, and so we don't
need to do anything at all in set_len.
--- gcc/tree-vect-loop.cc.jj	2023-10-04 16:28:04.354782008 +0200
+++ gcc/tree-vect-loop.cc	2023-10-05 11:52:25.001491397 +0200
@@ -11681,7 +11681,7 @@ vect_transform_loop (loop_vec_info loop_
 					LOOP_VINFO_VECT_FACTOR (loop_vinfo),
 					&bound))
 	  loop->nb_iterations_upper_bound
-	    = wi::umin ((widest_int) (bound - 1),
+	    = wi::umin ((bound_wide_int) (bound - 1),
 			loop->nb_iterations_upper_bound);
 	}
     }
--- gcc/wide-int-print.cc.jj	2023-10-04 16:28:04.447780740 +0200
+++ gcc/wide-int-print.cc	2023-10-05 11:36:55.265242917 +0200
@@ -74,9 +74,12 @@ print_decs (const wide_int_ref &wi, char
 void
 print_decs (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_decs (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_decs (wi, p);
+  fputs (p, file);
 }
 
@@ -98,9 +101,12 @@ print_decu (const wide_int_ref &wi, char
 void
 print_decu (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_decu (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_decu (wi, p);
+  fputs (p, file);
 }
 
 void
@@ -134,9 +140,12 @@ print_hex (const wide_int_ref &val, char
 void
 print_hex (const wide_int_ref &wi, FILE *file)
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_hex (wi, buf);
-  fputs (buf, file);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p = buf;
+  unsigned len = wi.get_len ();
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  print_hex (wi, p);
+  fputs (p, file);
 }
 
 /* Print larger precision wide_int.  Not defined as inline in a header
--- gcc/lto-streamer-out.cc.jj	2023-10-04 16:28:04.201784093 +0200
+++ gcc/lto-streamer-out.cc	2023-10-05 11:36:54.700250663 +0200
@@ -2173,13 +2173,26 @@ output_cfg (struct output_block *ob, str
 			   loop_estimation, EST_LAST, loop->estimate_state);
       streamer_write_hwi (ob, loop->any_upper_bound);
       if (loop->any_upper_bound)
-	streamer_write_widest_int (ob, loop->nb_iterations_upper_bound);
+	{
+	  widest_int w = widest_int::from (loop->nb_iterations_upper_bound,
+					   SIGNED);
+	  streamer_write_widest_int (ob, w);
+	}
       streamer_write_hwi (ob, loop->any_likely_upper_bound);
       if (loop->any_likely_upper_bound)
-	streamer_write_widest_int (ob, loop->nb_iterations_likely_upper_bound);
+	{
+	  widest_int w
+	    = widest_int::from (loop->nb_iterations_likely_upper_bound,
+				SIGNED);
+	  streamer_write_widest_int (ob, w);
+	}
       streamer_write_hwi (ob, loop->any_estimate);
       if (loop->any_estimate)
-	streamer_write_widest_int (ob, loop->nb_iterations_estimate);
+	{
+	  widest_int w = widest_int::from (loop->nb_iterations_estimate,
+					   SIGNED);
+	  streamer_write_widest_int (ob, w);
+	}
 
       /* Write OMP SIMD related info.  */
       streamer_write_hwi (ob, loop->safelen);
--- gcc/value-range.h.jj	2023-10-04 16:28:04.436780890 +0200
+++ gcc/value-range.h	2023-10-05 11:36:55.257243027 +0200
@@ -626,7 +626,9 @@ irange::maybe_resize (int needed)
     {
       m_max_ranges = HARD_MAX_RANGES;
       wide_int *newmem = new wide_int[m_max_ranges * 2];
-      memcpy (newmem, m_base, sizeof (wide_int) * num_pairs () * 2);
+      unsigned n = num_pairs () * 2;
+      for (unsigned i = 0; i < n; ++i)
+	newmem[i] = m_base[i];
       m_base = newmem;
     }
 }
--- gcc/tree-ssa-loop-ivopts.cc.jj	2023-09-29 18:58:47.317894622 +0200
+++ gcc/tree-ssa-loop-ivopts.cc	2023-10-06 12:40:49.512169963 +0200
@@ -1036,10 +1036,12 @@ niter_for_exit (struct ivopts_data *data
 	 names that appear in phi nodes on abnormal edges, so that we do not
 	 create overlapping life ranges for them (PR 27283).  */
       desc = XNEW (class tree_niter_desc);
+      ::new (static_cast (desc)) tree_niter_desc ();
       if (!number_of_iterations_exit (data->current_loop,
				      exit, desc, true)
	  || contains_abnormal_ssa_name_p (desc->niter))
	{
+	  desc->~tree_niter_desc ();
	  XDELETE (desc);
	  desc = NULL;
	}
@@ -7894,7 +7896,11 @@ remove_unused_ivs (struct ivopts_data *d
 bool
 free_tree_niter_desc (edge const &, tree_niter_desc *const &value, void *)
 {
-  free (value);
+  if (value)
+    {
+      value->~tree_niter_desc ();
+      free (value);
+    }
   return true;
 }
 
--- gcc/lto-streamer-in.cc.jj	2023-10-04 16:28:04.178784406 +0200
+++ gcc/lto-streamer-in.cc	2023-10-05 11:36:54.730250251 +0200
@@ -1122,13 +1122,16 @@ input_cfg (class lto_input_block *ib, cl
       loop->estimate_state
	= streamer_read_enum (ib, loop_estimation, EST_LAST);
       loop->any_upper_bound = streamer_read_hwi (ib);
       if (loop->any_upper_bound)
-	loop->nb_iterations_upper_bound = streamer_read_widest_int (ib);
+	loop->nb_iterations_upper_bound
+	  = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED);
       loop->any_likely_upper_bound = streamer_read_hwi (ib);
       if (loop->any_likely_upper_bound)
-	loop->nb_iterations_likely_upper_bound = streamer_read_widest_int (ib);
+	loop->nb_iterations_likely_upper_bound
+	  = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED);
       loop->any_estimate = streamer_read_hwi (ib);
       if (loop->any_estimate)
-	loop->nb_iterations_estimate = streamer_read_widest_int (ib);
+	loop->nb_iterations_estimate
+	  = bound_wide_int::from (streamer_read_widest_int (ib), SIGNED);
 
       /* Read OMP SIMD related info.  */
       loop->safelen = streamer_read_hwi (ib);
@@ -1888,13 +1891,17 @@ lto_input_tree_1 (class lto_input_block
       tree type = stream_read_tree_ref (ib, data_in);
       unsigned HOST_WIDE_INT len = streamer_read_uhwi (ib);
       unsigned HOST_WIDE_INT i;
-      HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+      HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf;
 
+      if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+	a = XALLOCAVEC (HOST_WIDE_INT, len);
       for (i = 0; i < len; i++)
	a[i] = streamer_read_hwi (ib);
       gcc_assert (TYPE_PRECISION (type) <= WIDE_INT_MAX_PRECISION);
-      result = wide_int_to_tree (type, wide_int::from_array
-				 (a, len, TYPE_PRECISION (type)));
+      result
+	= wide_int_to_tree (type,
+			    wide_int::from_array (a, len,
+						  TYPE_PRECISION (type)));
       streamer_tree_cache_append (data_in->reader_cache, result, hash);
     }
   else if (tag == LTO_tree_scc || tag == LTO_trees)
--- gcc/value-range.cc.jj	2023-10-04 16:28:04.416781162 +0200
+++ gcc/value-range.cc	2023-10-05 11:36:54.835248812 +0200
@@ -245,17 +245,24 @@ vrange::dump (FILE *file) const
 void
 irange_bitmask::dump (FILE *file) const
 {
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
   pretty_printer buffer;
 
   pp_needs_newline (&buffer) = true;
   buffer.buffer->stream = file;
   pp_string (&buffer, "MASK ");
-  print_hex (m_mask, buf);
-  pp_string (&buffer, buf);
+  unsigned len_mask = m_mask.get_len ();
+  unsigned len_val = m_value.get_len ();
+
unsigned len =3D MAX (len_mask, len_val); + if (len > WIDE_INT_MAX_INL_ELTS) + p =3D XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4); + else + p =3D buf; + print_hex (m_mask, p); + pp_string (&buffer, p); pp_string (&buffer, " VALUE "); - print_hex (m_value, buf); - pp_string (&buffer, buf); + print_hex (m_value, p); + pp_string (&buffer, p); pp_flush (&buffer); } =20 --- gcc/testsuite/gcc.dg/bitint-38.c.jj=092023-10-05 11:36:54.667251115 +02= 00 +++ gcc/testsuite/gcc.dg/bitint-38.c=092023-10-05 12:57:07.941106025 +0200 @@ -0,0 +1,18 @@ +/* PR c/102989 */ +/* { dg-do compile { target { bitint } } } */ +/* { dg-options "-std=3Dc2x" } */ + +#if __BITINT_MAXWIDTH__ >=3D 16319 +constexpr unsigned _BitInt(16319) a + =3D 46809856770167726127621548193677044225438364376699537824160022717939= 628343291686588133221586710648915925157749537208566348709231774324477059728= 763319900537499845533358728035749014999310181139205148376149598710826496473= 833711815515586271543891072166123033253318533558175760051184685411593263726= 196963313436586869536391457057811006447186847584134858936693364541098769997= 908014021284990908118817091046496748623135893521289709626062603305553614183= 559928449847473785848765847011514477192311482631228386303550370060014144072= 442636469963633024041427127562602129493942248325061962900595999224341866123= 012213266776978118379033875934588490382169559099157722852052372530204821544= 784157311384081159363841342505493821326296144831789857414053309000499273268= 852511500478297389324409142700039689042715222530866107895467106606923445375= 759318175390086520343903540248030641357223961046714259192080918736743807117= 010096956744004469142748795978563733838165130991678206367028604654758524083= 789230717092884948587718679328070760084086678347179914817925081838771618312= 732334619953338746336344235621880377969700575932441037647685522242087626242= 598557198281818035387041014982421454431301328519954419349662422321998640294= 
484962248942200767856494617479789279508933089953562472777752533078949270357= 456411225295514777094292976154560435086940424655827475235351037015722948500= 440213104315345429039792938727637405493857897687860646721735939868427505051= 910441391428602410680811634071227305942736229370315135549833621317069889444= 840536939875718852316046029271487585787996817357832819135821597249351327129= 787563440079330192925005282225863601565085768302390070984541083848793677853= 325040788618095457604634069790858402095129504884493804786565702907285079744= 297614689529418499373699950548566574281131379540553067419984805580275990178= 637682206952934297126196311933247650406428586936204966208340578982843313215= 493324281743280941581054818065875039369227272958623284206565849097120192778= 001425881533311545969511794227355187664684482107672366404028277283451141989= 135127816901710398709480382959428635234046834661872608878149262681618865733= 135910417181982267380585631782849903908808822313725829737392904330767357009= 039694778959879992292864384353261701216481107461888177462262894353903797488= 381268913080186091509003587024406100581941813006839098647031467785360508010= 331341183790435828783740154625741324046693989352750893154106524192987230720= 387644388210619326254465229013236469167191033200612786414699140401536668356= 931724805794959607035492936115832695555160023607526843504410588016279838079= 916160798736528245866203159909692182517620270789073002369870685576293269168= 825936535896407659582457777527599118314911837204720605511846311286460406385= 389482040724983787136893494143811968060552854688725693433424607559674641029= 795445863235817142871414182091818338443568133237931754104825239171071219662= 340633870206119521372456930328540224285367138611314821153569168546183645829= 503753803437831805510824008241444120530040152673239995922834692652858685274= 338949097873478792672199985538879471183716442300771962610917900546611370645= 
076526968758081982277218930108450362729738967513422822233728686764111051106= 198023124788453349244289893674342964195831413532907340649577636920815803211= 588385069101056904898394112677147799097609225239197281269166984744679850724= 410612166788542302561376925810277385553750973329580501331393740228280489721= 384722107264711160517234946456408991490649350813385538962717766342605776325= 208628632534381125475768180306827627804875799742528433471319022681846302307= 446190017695801005557243498313517114536524233927332698446518106428726464547= 083209111510064058410437557730405695196945620013848531356000927233822810363= 776386328926167325872673675340704414366407947949697258056053449480617081046= 930477300587359062628007238799966852254674798570159961397510118854385785214= 155925163405867671830800032486980962819944268156561566291262602279606441449= 610634423643128569768835770799298996656155717172997209353300747694786221592= 258320481118901555050564208247540064763952078218777682539559825742171410647= 386979764267826638075587335674781227397769160414784274115172291946473489032= 677259497902240322819107558691046420487025467429043766886117763971311276299= 639024610203099491718695782698208419415687039831233605910052156603409274069= 464261319290985064400393374512929106257634121387481551009983570872335543297= 009013967112023291074766590619136016025951219816084978419759730010622394596= 088660312713603712000086496866865145241104837289560738290749427881097147566= 394494879145861866225023837516652348484750734204006680185622232898866204957= 929960054568249041275448362105119023162319626554939196425978017807049564253= 888378950337940653127933886695515764665491340518187925418918590429832586550= 339568878631106766927360967060307658260725352708497774453318714564268623635= 016559398042857511932991192138224078050452742263065408694106024275713131318= 4709635181001199631726283364158943337968797uwb + + 99354435180574564299271266552222578172075113116713358325600655730552= 
766787479906529073488397418185627579390846490733481721083971838270203779417= 259831075136362874065305263582535084372902419372769083862829043530791029045= 356756086045764861629983194277028512784082136414548372230796164016158756724= 532501484216792238294178342275181330910551802702492661616766771761496751642= 576408123442979356507296298018787580599440901688627305198172033523414583103= 638114823180832702324343293173238228189911345006016698689223960135129694778= 394564723458123123219242152418497721476874557602245592409527373190093485408= 949663635681583495013552292646467700180715905024417027872690979739798998376= 831221941031100897284256766902460911469939550379184257728400222882228329325= 425160915011494771608565644643769102932300919635731192306480266678963993527= 909826119575699789720381785195702784475407075028616785026579051927432258932= 256639948075689186448982737022854836763857176511040420021053529931765121664= 200850644524317531813658058335489226767488904124203326946090968197797656003= 452163903943072575567782237434439589839621137231935512478979954237623480921= 038936837113738971391682894202676606114099476445487150077878329592511675531= 750966391476747761179731004479032436269028923822637675913280382357085934015= 637930194181244531663864717924684210038558942065843547314893636681340779462= 035460672372356577464802968316517917903859813975584589059046413942462797827= 467360091018623668680683634119763885576979219143171793712064440853907796348= 313697233700507646788528467793694972323747806919052809923680797627473522455= 196072641541971489588969556619042149091849522899961420506048216087499004178= 451377275969031004523500675513058409982804827752098832788730718955887518114= 623425178257534938149979184184374554749924222439195499673719644234574402872= 962708556058509546859126443033540190587169167355225330653230577554798036687= 825302503819882110750346557601232502494414406843384509538232903469096898225= 
276526987235028723125703052611967684774988980207930718087589033817968738686= 823788509252116293927606286852227450735441166156355579108053576235902180237= 158327163725325193728620938285457973255678036919980517851560658615668888714= 611301335220393218434390179643820300807524767093987313411730624302750031119= 549076278372084883486866669047657106569177064709243184321601554507260076680= 354945717797931292122421012932748532378508488061527744636892434266832958846= 486807902403630970152183479663991663800903706285912887123051331718696396799= 228540664930767731669701904829888280170310168915619719862796753719630209324= 693372640613177863305668393839893847609355902992879635468638481199994517395= 484051240015140330966956055807661216114406385499888959702624251332181598480= 617272171634871318064816867668437899714652479035348538379514138457866671224= 271826489891565995296474394195537851585616131140232673038699275651705077817= 823664470113408512581785341015859500814234377037784923474482304738976435057= 739573855041121824466905850338237471759669290912936932010618586701412091290= 914528612922762760129106240712411654020891616069444238262454616085949357324= 819001982408622934094423088006900195508316304798830005798846146019069617230= 113544498045767943398260569869576800909160468486734197235296943846538094003= 772185450752691487661291946370394082255156780133321880749972176678354949400= 430149178774383549026731074531642752800102510403600409373087389256894757251= 316390320119790096427135422928942190593529729331511123761973838149253632886= 709955562694478049949250867917281369066932495071150978070603658721109982107= 683360783895087241848635972859877369120730719801371625907796646750334291193= 278553078271746737492574629830542216317975270099875957324602221973676084409= 734882118984714393020513888068185216596858736723838280213298481534102049266= 077109716782685416775844216952380117843513860478691587871566346306938724280= 
678649803200632934358875747458590670249884857423532785487044675442987935115= 835876597137116770657923711993294193723927203219818622698900248323489998654= 493398563392203868531626419844449349981762488217031547747940268634238466653= 611479125803101793332398493141451581038137243712771560318260702136561892184= 285511714925793677366526502405108405244792806619221493703814048636680382299= 221050646583350833149468425459780504970217952171249479595750654717498722788= 027563713908714410042326332526118257486585935406678310988740272233275415237= 428577509541196157085415141451108639250492045175740008247979008175853769614= 627545214951001988296751009580666395319581067041597172650352055971610478795= 108499005875657466032257631298774343179498421057423869658861371177986421681= 907333674141267979294346275323078554488410354337952290312755458858728768488= 466666664754658669053322930953814940967023286499207405066589305030531627779= 448214333834072831551787079709064580238271416811403729683560846170010538704= 990798843840198208755858431290828946877405339467637568469249528252513830263= 64635539377880784234770789463152435704464616uwb; +constexpr unsigned _BitInt(16319) b + =3D 20129744567093027275741005070628998262449166046517026903695683755854= 448756834360166513132405078796314602781998330705368407367482030156637206994= 877425582250124595106718397028199112773892105727478029626122540718672466812= 244172521968825004812596684190534400169291245019886664334632347203172906471= 830047918779870667296830826108769036384267604969509336398421516482677170697= 323144807237345130767733861415665037591249948490085867356183319101541167176= 586195051721766552194667530417142250556133895688441663400613014781276825394= 358975458967475147806589013506569415945496841131100738180426238464950629268= 379774013285627049621529192047736803089092751891513992605419086502588233332= 057296638567290306093910878742093500873864277174719410183640765821580587831= 
967716708363976225535905317908137780497267444416760176647705834046996010820= 212494244083222254037700699529789991033448979912128507710343500466786839351= 071045788239200231971288879352062329627654083430317549832483148696514166354= 870702716570783257707960927427529476249626444239951812293100465038963807939= 297639901456086408459677292249078230581624034160083198437374539728677906306= 289960873601083706201882999243554025429957091619812945018432503309674349427= 513057767160754691227365332241845175797106713295593063635202655344273695438= 810685712451003351469460085582752740414723264094665962205140763820691773090= 780866423727990711323748512766522537850976590598658397979845215595029782750= 537140603588592215363608992433922289542233458102634259275757690440754308009= 593855238137227351798446486981151672766513716998027602215751256719370429397= 129549459120277202327118788743080998483470436192625398340057850391478909668= 185290635380423955404607217710958636050373730838469336370845039431945543326= 700579270919052885975364141422331087288874462285858637176621255141698264412= 903522678033317989170115880081516284097559300133507799471895326457336815172= 421155995525168781635131143991136416642016744949082321204689839861376266795= 485532171923826942486502913400286963940309484507484129423576156798044985198= 780159055788525538310878089397895175129162099671894337526801235280427428321= 205321530735108239848594278720839317921782831352363541199919557577597546876= 704462612904924694431903072332864341465745291866718067601041404212430941956= 177407763481845568339170224196193106463030409080073136605433869775860974939= 991008596874978506245689726966715206639438259724689301019692258116991317695= 012205036157177039536905494005833948384397446492918129185274359806145454148= 241131925838562069991934872329314452016900728948186477387223161994145551216= 156032211038319475270853818660079065895119923373317496777184177315345923787= 
700803986965175033224375435249224949151191006574511519055220741174631165879= 299688118138728380219550143006894817522270338472413899079751917314505754802= 052988622174392135207139715960212346858882422543222621408433817817181595201= 086403368301839080592455115463829425708132345811270911456928961301265223101= 989524481521721969838980208647528038509328501705428950749820080720418776718= 084142086501267418284241370398868561282277848391673847937247873117719906103= 441015578245152673184719538896073697272475250261227685660058944107087333786= 104761624391816175414338999215260190162551489343436332492645887029551964578= 826432156700872459216605843463884228343167159924792752429816064841479438134= 662749621639560203443871326810129872763539114284811330805213188716333471069= 710270583945841626338361700846410927750916663908367683188084193258384935122= 236639934335284160522042065088923421928660724095726039642836343542211473282= 392554371973074108770797447448654428325845253304889062021031599531436606775= 029315849674756213988932349651640552571880780461452187094400408403309806507= 698230071584809861634596000425300485805174853406774961321055086995665513868= 382285048348264250174388793184093524675621762558537763747237314473883173686= 633576273836946507237880619627632543093619281096675643877749217588495383292= 078713230253993525326209732859301842016440189010027733234997657748351253359= 664018894197346327201303258090754079801393874104215986193719394144148559622= 409051961205332355846077533183278890738832391535561074612724819789952480872= 328880408266970201766239451001690274739141595541572957753788951050043026811= 943691163688663710637928472363177936029259448725818579129920714382357882142= 208643606823754520733994646572586821541644398149238544337745998203264678454= 665487925173493921777764033537269522992103115842823750405588538846833724101= 543165897489915300004787110814394934465518176677482202804123781727309993329= 
004830726928892557850582806559007396866888620985629055058474721708813614135= 721948922060211334334572381348586196886746758900465692833094336637178459072= 850215866106799456460266354416689624866015411034238864944123721969568161372= 557215009049887790769403406590484422511214573790761107726077762451440539965= 975955360773797196902546431341823788555069435728043202455375041817472821677= 779625286961992491729576392881089462100341878uwb + / 42uwb; +constexpr unsigned _BitInt(16319) c + =3D 26277232382028447345935282100364413976442241120491848683780108318345= 774920397366452596924421335605374686659278612312801604887370376076386444511= 450318895545695570784577285598906650901929444302296033412199632594998376064= 124714220414913923213779444306833277388995703552219430575080927111195417046= 911177019070713847128826447830096432003962403463656558600431115273248877177= 875063381111477888059798858016050213420475851620413016793445517539227019973= 682699447952322388748860981947593432985730684746088183583225184347825110697= 327973294826205227564425769950503423435597165969299975681406974619941538502= 827193742760455245269483134360940023933986344217577102114800134253879530890= 064362520368475535738854741806292542624386473461274620987891355541987873664= 157022522167908591164654787501854546457737341526763516705032705254046172926= 268968997302379261582933264475402063191548343982201230445504659038868786347= 667710658240088825869575188227013335559298579845948690316856611693386990691= 782821847535492639223427223360712994033576990398197160051785889033125034223= 732954451076425681456628201904077784454089380196178912326887148822779198657= 689238010492393879170486604804437202791286852035982584159978541711417080787= 022338893101116171974852272032081114570327098305927880933671644227124990161= 298341841320653588271798586647749346370617067175316167393884414111921877638= 201303618067479025167446526964230732790261566590993315887290551248612349150= 
417516918700813876388862131622594037955509016393068514645257179527317715173= 019090736514553638608004576856188118523434383702648256819068546345047653068= 719910165573154521302405552789235554333112380164692074092017083602440917300= 094238211450798274305773890594242881597233221582216100516212402569681571888= 843321851284369613879319709906369098535804168065394213774970627125064665536= 078444150533436796088491087726051879648804306086489894004214709726215682689= 504951069889191755818331155532574370572928592103344141366890552816031266922= 028893616252999452323417869066941579667306347161357254079241809644500681547= 267163742601555111699376923690500014172294337681007418735910341792131377741= 308586228268385825579773985382339854821729670313925456724869607910114957040= 810377671394779834675225181536565444830551924417794139736686594557660483813= 045525089850285373756403594900392226296617656189774567019900237644329891280= 192776067340109751100025818473155267503490628146429306493520953677660612094= 758307190480072039980575323428994009982415676875786338343681850769724258724= 712947129844865182522700509869810541147515988955709784790248266593581532414= 091983670376426534289079098742549505127694160521110700035496658932724007621= 759500091227595477831200325335242614162624218010753586306794482732500765136= 299548052958345872488446969032973871418565484570096440609125401439516349061= 951073344772753817168731533186740449206533184858409824331269879276752302819= 075938894191764603880669059804914705202932220114574769307945938446355744093= 058483466098741029671133305308451601510124097336668044362140994842230895354= 232007936193610666215236351383330719496758577095102466235782700820575938453= 736277546445932135116947993404356975890051717304128693125699951445791328843= 668647245439797933691355015781238038148597339831348341049751957204680813855= 138272253234219030458164179195368888878989362640509486440530112337687890165= 
646824152338885218611665567933423652236621168833497594762922586523151554244= 316284075364923316223457798336995440229801638249044555841786652868778333857= 626201712694823945146208412572567947403078655159448178467488335673853886982= 143607843369103504905837049147006413324087204923968347162406372146304110247= 436210704329838033967549296094708909042352807942165389054391217609084676765= 464997803900415653278041220586434133698802658726748950122980183615091029049= 242919298428066745937148593879994539254240070220900694662200741796632687373= 414952817000938093930497338259168439649970963774406833411431113922194082765= 390241161715106142638681072839764035976877223152727829248475639970029777900= 589595383604989099084081251802305001465530685587689066710306032849298712531= 664047230963409638484129598076118133347670029704549206295184751171783054889= 490211218045322681317529569999778899567668829982207035948032411418382057247= 326141072264502161892285323531743728756335449414720326329614400327415751813= 608405440522389476951223717685562226240221655814783640319063683104993438443= 847695342093582440489676230855515734722099028773790309518629302472390856918= 840009781940193713784596688294176313226823907143925396584175086934911386332= 502448539920116580493698106175151294846382915609543814748269873022997601962= 804377576934064368480060369871027634248583037300264157126892396407333810094= 970488786868749240778818119777818968060847669660858189435863648299750130319= 878885182309492320093569553086644726783916663680961005542160003603514646606= 310756647257217877792590840884087816175376150368236330721380807047180835128= 240716072193739218623529235235449408073833764uwb + >> 171; +static_assert (a =3D=3D 10403542085759133691203342137159028259461894955438= 331210801665800234672962180907518788681055608925051917190662144445433835595= 489501570265148539013616306519011285861864113638610998587283343748668959870= 044400340187367869274012726759732348878437230149364081610941398977036594823= 
591463255731808309715219781556045092524781748798096243155527048746090614751= 043610821560662864236720952557147844731917800712343725546175449104075627616= 077829385396994452199410766816558008090921987787438967590914249326913953731= 957899714113110918563882837045448642562338486517475793442626878243475178869= 958697311252767202125088496235928130685145568023992654921893286093433280015= 789621699281948053130963767216950901322064090115301029360256916486236324346= 980555378227825665231041206505932451054100655891377307183657244188881780309= 602697733965633806548575793711470844175477213922050584861112947113328821094= 578714380110663964395764964375008963336325761662071121014767368961020824065= 775639039724097407257977371623360602667242992626829630277589757195892131842= 788347638167481783472539736593840645020141666099662762763659119482517961624= 374850646183224354529879255694192077493038699570091875155722960929748259201= 284457182471153956119946261637096783796538046622701136421992223281799392319= 105563566498086105138357131671079600937329401554014025354725298453142629483= 842874038291307431207948198280389112036878226218928165845324560374437065373= 122000792930554833265840423016148390974876479752688661617125284208020330726= 704780298561478529279775092768807953202013307072084373090254748865483609183= 726295735240865516817482898554990450888147008484162850924835809973020042760= 450232447237837196378388135483084055028396408249214425019231777824054821326= 738728924661602608905318664721047678808734917923923121217803736039325080641= 571812479260200189082647677675380297657174607422686495562781202604884582727= 406463545308236800937463493199421020490845203940782000643133713413924683795= 888948837880891750307666957538835987772265423203470320354145742841869795472= 799186154631385288573730129094228733379855432514817031425884584962254283999= 586850250406406681047191820544352342046667950146374296364655891915135310082= 
529994904874562441551527081311638121766367661807914647092917287784017613115= 795691373814041086838720316968010349263776702775009771662737124600992709418= 630470128579612748138807983617697487500079502839532266478317788699680283395= 230308668613168191852557234122469290277763000256531531071762280960597416576= 452124575885006363492171314551026369237325119844147154972582617127637240421= 323781252125819313268498872048683068789228870983086306586111793007178693570= 562554975762384431236664489360478109692520183356042112794589756922036102025= 380888246082763911915622037570736969677850621708281909652070776450422110772= 285659921383413532725137107621514770958361581240471968542997294446402584844= 918179956881219978405772785713402046471903103404871352324277109089891640558= 983922159359479964068994923538490500501798825116238188381267330618026093160= 290205596669795981834842352271011063939632623926629960113926326029952143452= 354640614061049438932665467928443113232214498101774523178129020155017228802= 221901469548072234073334681052461327832268955923701109732874360984002493130= 025470753861967432493102395766279717815113135763810886216491770265724160887= 688887515282293447287121039545323777928286876711267049135547760773655845950= 622676327972280622345486253084626121247885891757458308974259466441284967765= 824561478351421051923081842594791616249682768594796413184742007504540382141= 773556098929461233842797978566466734240436032269122908057438314319410489575= 244845739320693764798687398942275314333361838560358278583766983210126081046= 020231469705836544611252075187733112560778125560225565803349953151880800601= 890382648216375737077015744684142132303864494083237680306898134033570758401= 131735819237730280209424231954121970154195575070728876653187928423918894211= 617093567094857926079694003950142962763480728907322409338954277493711834363= 423032309296862081371923061150409402403668284066920335645815769603890931600= 
189625120845560771835017710222988445713995722670892970377791415975424998772= 977793133120924108755323766471601770964843725827421304729349535336212587039= 242582503381150992918495310760366078232133800372960134691178665615437284018= 675587037783965019497398984583781291648236566997741116811234934754542646608= 973862932050896956712947890625239848619289180051302224085308716715734850608= 995498117691600907423641124622236235949675965926735290984369155077055324647= 942699875972019355174794849379024365265476001505043957802797349447782453767= 742359446787304217770032967959809288342189111153359045680464231699344620995= 535326063943372491385550455978845273436611631962336651743357242055102619760= 848116407351488643448217122169718350824452317641509534606434395208225350712= 889271762643740106849245478364448395994915755050465135468245061369394410933= 866013068008514339549345174558881983866497072827311379042433413uwb); +static_assert (b =3D=3D 47927963254983398275573821596735710148688490586945= 302151656389894891544659129428967888410488282848368101861900787393734303255= 909595611040969035422441862500296655015996183400474078033076442208281022919= 339382663505873362486125052306726201934754009977462857545931535761634915082= 457969531364063028166780758999692064992454347878021515200637154689307943805= 765515434945644517436059064850821739923175860513488184741071305928775874657= 579331194145636134729035837432777505225398881945576787038414037353432531906= 221463764944822367521370140398750351920450032158498609394040097931192233762= 919615392739593496142319079251492975289355219161278102593077980694080934774= 807348815686269838231658663255453109747406854147841668747295898035046214722= 954204337096637695161230258009467203656917423590804239279208200992286134875= 490081064276216238601176771626719652470715951261404740555830904552686923119= 865477301873427026359632829140952933264973522266815070542033531976946547220= 197973086938491321120720753739960137399906970065546372022920105333218600697= 
858250092770971284041999765371634305856374505354948168051485795619245710565= 177475544471205491166573508574008824290197617246557203404659741951935583377= 220245975415117684554899445620844502922298410099631370945492174513316818179= 053941295889751044787346934407150836832047822816077953368388724034918957631= 287532906408983549478253389828549312675591697063148899645182358568234280904= 393370464356625596517001437115695750865735696271243546529127281196748236370= 851643906557876213318702947945779409043920207097980173253604088090541910037= 502992188977212850308451093143517148397901877959716663055881990934822377000= 137739027330737305203072941381961798582398137407044371548508882948736515168= 678665314156055539763283978378697347560390812910312112592558243537758659944= 336321765948248602151244471507899974214561619241705438327522143175018570171= 179348707944798029574180941726592337226502723788420039623849392735910288582= 594856812800635227346505171247207005920245031905445152238832105970200308151= 371801900107107616143235847115536995978281165233083750307528808742605565540= 002941143874829336203146501750257713925224473144855518861387693696103669523= 617994232375111611201101459297439748647388267459200813013679266349328732383= 431914791502242752803351817813918019855167200467126443959596212095412230012= 937785180621368904740496659226139300584975540396940968189138713630212621475= 457757421407899273838583419421850094135489271442461781867612967840281259964= 938951919393938448193171251996576357123654457926939171468811259400443993779= 102766652727502895609600502472189226835366234904950156893142674698374992326= 628993607966485208811438064202797698153274845831487974169502396605979807274= 335098034836109236427828852711258048141786054778320994100643663029556902570= 837898367870844766792830052796171750493189799905267492521148625102911003353= 413851945670464764491436591194854953791559798723403394543172251931597408230= 
783241193488626433308391622670766594854714782494114377403163099298640358928= 143049334330420757343195444050636710200574691425877526862566305694461542707= 733031232666443103430989472012268269487427473562080231601131548241018299190= 616533588303175681201813391409086131938902379083952833720360688912943648792= 014016737028487092443886087383029664801442484437819591293255142678077981975= 752535336855805082530356241998952865342550778119356839913188367344788882869= 555211229365407308833977580823432443662765954396216494645039675972304007590= 676650615202226481515809367464962286957243012116484337925382676418395332482= 943675100503507815220367552316843116120946303449177210299631555487831100050= 075236979610968511974561546844657652354600832503906077552097096336790921653= 334305722166205970710071599011452051510942858155477347155178222397083241240= 607349989679794924719726305591105357558068555200222677799099434663185151779= 136463033055175444365657794849872636280668141970553674032426859753989628280= 355279972608055457330269595842841726967166030617385338134381402404827936273= 803947019883936570628616414755586493336436328787509713812842557390990443318= 379509867020380053354885621917457990109708412341140216044839027465621606220= 773380452267811600783048591111833813729141550004024463664622846527554661318= 545121547721492409389740865925389787233163029436137942926808211251948997928= 382653291328290814782484778151796477938082491839492432242010471783901296042= 252376674439710606346399821841652194708961984612546483314531228197199405727= 591759159127914527483728327356941190487588359081892701108376611136862387628= 866146969785698402392454111735458471072816206092874754444972907108640607282= 082670735270509846957021243000503176987077098449014754492254187858251649602= 605563421853473982976704443111427277286348462896880059204798597700568726057= 437433260876574696564797640594970930403341444263058148836225175692288351728= 
756577265334618966609417525651898087863205788909104258464451037447721910608= 0358138511257658994752983022904583136418485544787844335722425uwb); +static_assert (c =3D=3D 87791074236971898375693906050841211797859249085219= 857442103255912236679245196526258183737200195092459037070061326325721733862= 550642013557351987594406882625147809841117910427395663017848973163739949221= 920509632722884340603422885119715696976800265237608112255164300526997540446= 828188926798191319956002162809660627367323847324113616574443996958838650961= 034287596228138677355472599785293194368898640136872193905676042833180111007= 999534515209684412648660318139544886280584751143487292754141431589178747095= 995562471836958538385523210889734458760880425568104799106614493746619996750= 828111038144533532941948866129614927372632772715518890386107307604784595692= 561493219983504140230663638149893111097283117129890229962472801825879214491= 853539228859378776045004007387742400087099452897916050111777396577201816014= 535122598820045644624158286527149042897272352105372777213898166876433661452= 000011777121121975156955788874837929887554354013884561458544888805370883603= 979946432160148284956624602056864485481132298410979556139584409013754162565= 328645118522986963276115172332413247990709194912864261597887926317238337174= 515380434373640171852377431824028356700876831256026403188874515966503235287= 201281881985472704629716121576034879585267050059555804094416707718493880164= 380358501945858703270134092362367309142177220256553194722311416667902879556= 857136362746535655774542758385903508061686391652646764704409303516129925189= 046646477158058659410384237683768466978175431224095175917172922387459403459= 005304585514685192457678645317421021786288543765245133679832091869745757657= 072739737753868400812388038803350957408363865272082673118089735224503911890= 557398289369373596931672405246606249458569070420412573471920869840096409845= 093226225038902560463247683416326435464557790353760020616911131212342731649= 
379841717742423277699156887425640494541631583181218185827647752680912924708= 890884455751080226880692716971982831514696454008705070066637993306617027027= 474432542204783110564072207496481031234354733815835208730552187341151209786= 784404558964588524975699899667232359656087068265936071288476301376185091512= 558347426364387962855698738699677293418712135210300114273729873885726742284= 413334588575122260492832433475214578049120087810369667863747603253414920332= 978483681609032604700190675353306116459095608887974519070883897641904030079= 983051686730294469340122451388381805960985594425706961500112962181441863870= 246158853022907449053406669059217439700137798133324937711920480432972814232= 484890568414170138076703081910957324642214513769972707454684597021527968182= 227457305657212026631030431211601014598336832495586844591088625369619943085= 350399708145578212681703887459419803788389699105928956705542918117397687718= 299410438578196037512469579622360911547558939620383631206904838624230010389= 486206816112538671492964636904178280343035479227922490985224047514289607138= 750504639061341508460897057144703039182990126916002853558594129248477604970= 769784327224466025218250890974545423433548473473960450795877572106353569992= 687064654257888333111905176230618606752300109941271964590303221667515716566= 423216907874719066094734960347896437104781622556640929912514467878876353518= 529338268207197817337545781610734013626681098191139242522911257413952714743= 423055745369749182739385135974189637873089945934341918906877303024959106860= 723388364131591622810722635427582576995880898386774693974678993480652935817= 510358443898483871618474351603272760666036831317032464104091228327933767515= 126887451955640216460692459923633964681005135362116514506105233152116971257= 746388453132439730835364176920759624869188446674321443530197229596536386329= 482940499842668618701512553150233467246714304992579939580490880661608705450= 
252765979751548555376202656903540410287427427550743965976319653203807825009= 445684240534200383575249171250992413349900321895264658381929721109708613800= 609868020819480443455264148571585699390058952366723063443482128058512699207= 110438913068758730163306016739732493270725035718735183667505750700910512885= 907647886301909667768540315789393826907090226674217344428417846808264941466= 205898628296127042795216377404216941950514000952780847169746246152083925855= 732001826641570668138493460583217631565239656984659013960251521596421935629= 007438127158858110572125790178604885399603344067027526885952173602194709687= 380097740679150371570274922091088013377075625712668977239114012033743084907= 932262009743533568353117563848956929098027209489681315046048554669619873147= 018464603421352019143561525916848109246883509291401201876930893242559246345= 785764270044263392994938334345029515939025514510022928396350009042532500218= 846254176287564398629643255627207095287849648686873308478944769995773265823= 323502131488612054136523374993834165315457072729079947556383396302215767079= 549642362109626938046397147546686798411349283930812842091580982026837446505= 139189201683305984323623897774718706310394884087693548630019675317294156866= 31571754649uwb); +#endif --- gcc/tree-ssa-loop-niter.cc.jj=092023-10-04 16:28:04.329782348 +0200 +++ gcc/tree-ssa-loop-niter.cc=092023-10-05 11:36:54.982246797 +0200 @@ -3873,12 +3873,17 @@ do_warn_aggressive_loop_optimizations (c return; =20 gimple *estmt =3D last_nondebug_stmt (e->src); - char buf[WIDE_INT_PRINT_BUFFER_SIZE]; - print_dec (i_bound, buf, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations)) + char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p; + unsigned len =3D i_bound.get_len (); + if (len > WIDE_INT_MAX_INL_ELTS) + p =3D XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4); + else + p =3D buf; + print_dec (i_bound, p, TYPE_UNSIGNED (TREE_TYPE (loop->nb_iterations)) =09 ? 
UNSIGNED : SIGNED); auto_diagnostic_group d; if (warning_at (gimple_location (stmt), OPT_Waggressive_loop_optimizatio= ns, -=09=09 "iteration %s invokes undefined behavior", buf)) +=09=09 "iteration %s invokes undefined behavior", p)) inform (gimple_location (estmt), "within this loop"); loop->warned_aggressive_loop_optimizations =3D true; } @@ -3915,6 +3920,9 @@ record_estimate (class loop *loop, tree else gcc_checking_assert (i_bound =3D=3D wi::to_widest (bound)); =20 + if (wi::min_precision (i_bound, SIGNED) > bound_wide_int ().get_precisio= n ()) + return; + /* If we have a guaranteed upper bound, record it in the appropriate list, unless this is an !is_exit bound (i.e. undefined behavior in at_stmt) in a loop with known constant number of iterations. */ @@ -3925,7 +3933,7 @@ record_estimate (class loop *loop, tree { class nb_iter_bound *elt =3D ggc_alloc (); =20 - elt->bound =3D i_bound; + elt->bound =3D bound_wide_int::from (i_bound, SIGNED); elt->stmt =3D at_stmt; elt->is_exit =3D is_exit; elt->next =3D loop->bounds; @@ -4410,8 +4418,8 @@ infer_loop_bounds_from_undefined (class static int wide_int_cmp (const void *p1, const void *p2) { - const widest_int *d1 =3D (const widest_int *) p1; - const widest_int *d2 =3D (const widest_int *) p2; + const bound_wide_int *d1 =3D (const bound_wide_int *) p1; + const bound_wide_int *d2 =3D (const bound_wide_int *) p2; return wi::cmpu (*d1, *d2); } =20 @@ -4419,7 +4427,7 @@ wide_int_cmp (const void *p1, const void Lookup by binary search. 
*/ =20 static int -bound_index (const vec &bounds, const widest_int &bound) +bound_index (const vec &bounds, const bound_wide_int &boun= d) { unsigned int end =3D bounds.length (); unsigned int begin =3D 0; @@ -4428,7 +4436,7 @@ bound_index (const vec &boun while (begin !=3D end) { unsigned int middle =3D (begin + end) / 2; - widest_int index =3D bounds[middle]; + bound_wide_int index =3D bounds[middle]; =20 if (index =3D=3D bound) =09return middle; @@ -4450,7 +4458,7 @@ static void discover_iteration_bound_by_body_walk (class loop *loop) { class nb_iter_bound *elt; - auto_vec bounds; + auto_vec bounds; vec > queues =3D vNULL; vec queue =3D vNULL; ptrdiff_t queue_index; @@ -4459,7 +4467,7 @@ discover_iteration_bound_by_body_walk (c /* Discover what bounds may interest us. */ for (elt =3D loop->bounds; elt; elt =3D elt->next) { - widest_int bound =3D elt->bound; + bound_wide_int bound =3D elt->bound; =20 /* Exit terminates loop at given iteration, while non-exits produce = undefined =09 effect on the next iteration. 
*/ @@ -4492,7 +4500,7 @@ discover_iteration_bound_by_body_walk (c hash_map bb_bounds; for (elt =3D loop->bounds; elt; elt =3D elt->next) { - widest_int bound =3D elt->bound; + bound_wide_int bound =3D elt->bound; if (!elt->is_exit) =09{ =09 bound +=3D 1; @@ -4601,7 +4609,8 @@ discover_iteration_bound_by_body_walk (c =09 print_decu (bounds[latch_index], dump_file); =09 fprintf (dump_file, "\n"); =09} - record_niter_bound (loop, bounds[latch_index], false, true); + record_niter_bound (loop, widest_int::from (bounds[latch_index], +=09=09=09=09=09=09 SIGNED), false, true); } =20 queues.release (); @@ -4704,7 +4713,8 @@ maybe_lower_iteration_bound (class loop if (dump_file && (dump_flags & TDF_DETAILS)) =09fprintf (dump_file, "Reducing loop iteration estimate by 1; " =09=09 "undefined statement must be executed at the last iteration.\n"); - record_niter_bound (loop, loop->nb_iterations_upper_bound - 1, + record_niter_bound (loop, widest_int::from (loop->nb_iterations_uppe= r_bound, +=09=09=09=09=09=09 SIGNED) - 1, =09=09=09 false, true); } =20 @@ -4860,10 +4870,13 @@ estimate_numbers_of_iterations (class lo not break code with undefined behavior by not recording smaller maximum number of iterations. 
*/ if (loop->nb_iterations - && TREE_CODE (loop->nb_iterations) =3D=3D INTEGER_CST) + && TREE_CODE (loop->nb_iterations) =3D=3D INTEGER_CST + && (wi::min_precision (wi::to_widest (loop->nb_iterations), SIGNED) +=09 <=3D bound_wide_int ().get_precision ())) { loop->any_upper_bound =3D true; - loop->nb_iterations_upper_bound =3D wi::to_widest (loop->nb_iteratio= ns); + loop->nb_iterations_upper_bound + =3D bound_wide_int::from (wi::to_widest (loop->nb_iterations), SIG= NED); } } =20 @@ -5114,7 +5127,7 @@ n_of_executions_at_most (gimple *stmt, =09=09=09 class nb_iter_bound *niter_bound, =09=09=09 tree niter) { - widest_int bound =3D niter_bound->bound; + widest_int bound =3D widest_int::from (niter_bound->bound, SIGNED); tree nit_type =3D TREE_TYPE (niter), e; enum tree_code cmp; =20 --- gcc/cfgloop.h.jj=092023-10-04 16:28:04.010786695 +0200 +++ gcc/cfgloop.h=092023-10-05 11:36:55.065245659 +0200 @@ -44,6 +44,9 @@ enum iv_extend_code IV_UNKNOWN_EXTEND }; =20 +typedef generic_wide_int > + bound_wide_int; + /* The structure describing a bound on number of iterations of a loop. */ =20 class GTY ((chain_next ("%h.next"))) nb_iter_bound { @@ -58,7 +61,7 @@ public: overflows (as MAX + 1 is sometimes produced as the estimate on num= ber =09of executions of STMT). b) it is consistent with the result of number_of_iterations_exit. */ - widest_int bound; + bound_wide_int bound; =20 /* True if, after executing the statement BOUND + 1 times, we will leave the loop; that is, all the statements after it are executed at = most @@ -161,14 +164,14 @@ public: =20 /* An integer guaranteed to be greater or equal to nb_iterations. Only valid if any_upper_bound is true. */ - widest_int nb_iterations_upper_bound; + bound_wide_int nb_iterations_upper_bound; =20 - widest_int nb_iterations_likely_upper_bound; + bound_wide_int nb_iterations_likely_upper_bound; =20 /* An integer giving an estimate on nb_iterations. 
Unlike nb_iterations_upper_bound, there is no guarantee that it is at least nb_iterations. */ - widest_int nb_iterations_estimate; + bound_wide_int nb_iterations_estimate; =20 /* If > 0, an integer, where the user asserted that for any I in [ 0, nb_iterations ) and for any J in --- gcc/tree.h.jj=092023-10-04 16:28:04.403781340 +0200 +++ gcc/tree.h=092023-10-05 11:36:54.793249388 +0200 @@ -6258,13 +6258,17 @@ namespace wi template struct int_traits > { - static const enum precision_type precision_type =3D CONST_PRECISION; + static const enum precision_type precision_type + =3D N =3D=3D ADDR_MAX_PRECISION ? CONST_PRECISION : WIDEST_CONST_PRE= CISION; static const bool host_dependent_precision =3D false; static const bool is_sign_extended =3D true; static const unsigned int precision =3D N; + static const unsigned int inl_precision + =3D N =3D=3D ADDR_MAX_PRECISION ? 0 +=09 : N / WIDEST_INT_MAX_PRECISION * WIDE_INT_MAX_INL_PRECISION; }; =20 - typedef extended_tree widest_extended_tree; + typedef extended_tree widest_extended_tree; typedef extended_tree offset_extended_tree; =20 typedef const generic_wide_int tree_to_widest_ref= ; @@ -6292,7 +6296,8 @@ namespace wi tree_to_poly_wide_ref to_poly_wide (const_tree); =20 template - struct ints_for >, CONST_PRECISION> + struct ints_for >, +=09=09 int_traits >::precision_type> { typedef generic_wide_int > extended; static extended zero (const extended &); @@ -6308,7 +6313,7 @@ namespace wi =20 /* Used to convert a tree to a widest2_int like this: widest2_int foo =3D widest2_int_cst (some_tree). */ -typedef generic_wide_int > +typedef generic_wide_int = > widest2_int_cst; =20 /* Refer to INTEGER_CST T as though it were a widest_int. 
@@ -6444,7 +6449,7 @@ wi::extended_tree ::get_len () const { if (N =3D=3D ADDR_MAX_PRECISION) return TREE_INT_CST_OFFSET_NUNITS (m_t); - else if (N >=3D WIDE_INT_MAX_PRECISION) + else if (N >=3D WIDEST_INT_MAX_PRECISION) return TREE_INT_CST_EXT_NUNITS (m_t); else /* This class is designed to be used for specific output precisions @@ -6530,7 +6535,8 @@ wi::to_poly_wide (const_tree t) template inline generic_wide_int > wi::ints_for >, -=09 wi::CONST_PRECISION>::zero (const extended &x) +=09 wi::int_traits >::precision_type +=09 >::zero (const extended &x) { return build_zero_cst (TREE_TYPE (x.get_tree ())); } --- gcc/cfgloop.cc.jj=092023-10-04 16:28:03.991786955 +0200 +++ gcc/cfgloop.cc=092023-10-05 11:36:55.157244398 +0200 @@ -1895,33 +1895,38 @@ void record_niter_bound (class loop *loop, const widest_int &i_bound, =09=09 bool realistic, bool upper) { + if (wi::min_precision (i_bound, SIGNED) > bound_wide_int ().get_precisio= n ()) + return; + + bound_wide_int bound =3D bound_wide_int::from (i_bound, SIGNED); + /* Update the bounds only when there is no previous estimation, or when = the current estimation is smaller. 
*/ if (upper && (!loop->any_upper_bound -=09 || wi::ltu_p (i_bound, loop->nb_iterations_upper_bound))) +=09 || wi::ltu_p (bound, loop->nb_iterations_upper_bound))) { loop->any_upper_bound =3D true; - loop->nb_iterations_upper_bound =3D i_bound; + loop->nb_iterations_upper_bound =3D bound; if (!loop->any_likely_upper_bound) =09{ =09 loop->any_likely_upper_bound =3D true; -=09 loop->nb_iterations_likely_upper_bound =3D i_bound; +=09 loop->nb_iterations_likely_upper_bound =3D bound; =09} } if (realistic && (!loop->any_estimate -=09 || wi::ltu_p (i_bound, loop->nb_iterations_estimate))) +=09 || wi::ltu_p (bound, loop->nb_iterations_estimate))) { loop->any_estimate =3D true; - loop->nb_iterations_estimate =3D i_bound; + loop->nb_iterations_estimate =3D bound; } if (!realistic && (!loop->any_likely_upper_bound - || wi::ltu_p (i_bound, loop->nb_iterations_likely_upper_bound))) + || wi::ltu_p (bound, loop->nb_iterations_likely_upper_bound))) { loop->any_likely_upper_bound =3D true; - loop->nb_iterations_likely_upper_bound =3D i_bound; + loop->nb_iterations_likely_upper_bound =3D bound; } =20 /* If an upper bound is smaller than the realistic estimate of the @@ -2018,7 +2023,7 @@ get_estimated_loop_iterations (class loo return false; } =20 - *nit =3D loop->nb_iterations_estimate; + *nit =3D widest_int::from (loop->nb_iterations_estimate, SIGNED); return true; } =20 @@ -2032,7 +2037,7 @@ get_max_loop_iterations (const class loo if (!loop->any_upper_bound) return false; =20 - *nit =3D loop->nb_iterations_upper_bound; + *nit =3D widest_int::from (loop->nb_iterations_upper_bound, SIGNED); return true; } =20 @@ -2066,7 +2071,7 @@ get_likely_max_loop_iterations (class lo if (!loop->any_likely_upper_bound) return false; =20 - *nit =3D loop->nb_iterations_likely_upper_bound; + *nit =3D widest_int::from (loop->nb_iterations_likely_upper_bound, SIGNE= D); return true; } =20 --- gcc/gimple-ssa-strength-reduction.cc.jj=092023-01-02 09:32:29.884176934= +0100 +++ 
gcc/gimple-ssa-strength-reduction.cc=092023-10-05 14:45:14.554340423 +0= 200 @@ -238,7 +238,7 @@ public: tree stride; =20 /* The index constant i. */ - widest_int index; + offset_int index; =20 /* The type of the candidate. This is normally the type of base_expr, but casts may have occurred when combining feeding instructions. @@ -333,7 +333,7 @@ class incr_info_d { public: /* The increment that relates a candidate to its basis. */ - widest_int incr; + offset_int incr; =20 /* How many times the increment occurs in the candidate tree. */ unsigned count; @@ -677,7 +677,7 @@ record_potential_basis (slsr_cand_t c, t =20 static slsr_cand_t alloc_cand_and_find_basis (enum cand_kind kind, gimple *gs, tree base, -=09=09=09 const widest_int &index, tree stride, tree ctype, +=09=09=09 const offset_int &index, tree stride, tree ctype, =09=09=09 tree stype, unsigned savings) { slsr_cand_t c =3D (slsr_cand_t) obstack_alloc (&cand_obstack, @@ -893,7 +893,7 @@ slsr_process_phi (gphi *phi, bool speed) int (i * S). Otherwise, just return double int zero. */ =20 -static widest_int +static offset_int backtrace_base_for_ref (tree *pbase) { tree base_in =3D *pbase; @@ -922,7 +922,7 @@ backtrace_base_for_ref (tree *pbase) =09{ =09 /* X =3D B + (1 * S), S is integer constant. 
*/ =09 *pbase =3D base_cand->base_expr; -=09 return wi::to_widest (base_cand->stride); +=09 return wi::to_offset (base_cand->stride); =09} else if (base_cand->kind =3D=3D CAND_ADD =09 && TREE_CODE (base_cand->stride) =3D=3D INTEGER_CST @@ -966,13 +966,13 @@ backtrace_base_for_ref (tree *pbase) *PINDEX: C1 + (C2 * C3) + C4 + (C5 * C3) */ =20 static bool -restructure_reference (tree *pbase, tree *poffset, widest_int *pindex, +restructure_reference (tree *pbase, tree *poffset, offset_int *pindex, =09=09 tree *ptype) { tree base =3D *pbase, offset =3D *poffset; - widest_int index =3D *pindex; + offset_int index =3D *pindex; tree mult_op0, t1, t2, type; - widest_int c1, c2, c3, c4, c5; + offset_int c1, c2, c3, c4, c5; offset_int mem_offset; =20 if (!base @@ -985,18 +985,18 @@ restructure_reference (tree *pbase, tree return false; =20 t1 =3D TREE_OPERAND (base, 0); - c1 =3D widest_int::from (mem_offset, SIGNED); + c1 =3D offset_int::from (mem_offset, SIGNED); type =3D TREE_TYPE (TREE_OPERAND (base, 1)); =20 mult_op0 =3D TREE_OPERAND (offset, 0); - c3 =3D wi::to_widest (TREE_OPERAND (offset, 1)); + c3 =3D wi::to_offset (TREE_OPERAND (offset, 1)); =20 if (TREE_CODE (mult_op0) =3D=3D PLUS_EXPR) =20 if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) =3D=3D INTEGER_CST) { =09t2 =3D TREE_OPERAND (mult_op0, 0); -=09c2 =3D wi::to_widest (TREE_OPERAND (mult_op0, 1)); +=09c2 =3D wi::to_offset (TREE_OPERAND (mult_op0, 1)); } else return false; @@ -1006,7 +1006,7 @@ restructure_reference (tree *pbase, tree if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) =3D=3D INTEGER_CST) { =09t2 =3D TREE_OPERAND (mult_op0, 0); -=09c2 =3D -wi::to_widest (TREE_OPERAND (mult_op0, 1)); +=09c2 =3D -wi::to_offset (TREE_OPERAND (mult_op0, 1)); } else return false; @@ -1057,7 +1057,7 @@ slsr_process_ref (gimple *gs) HOST_WIDE_INT cbitpos; if (reversep || !bitpos.is_constant (&cbitpos)) return; - widest_int index =3D cbitpos; + offset_int index =3D cbitpos; =20 if (!restructure_reference (&base, &offset, &index, 
&type)) return; @@ -1079,7 +1079,7 @@ create_mul_ssa_cand (gimple *gs, tree ba { tree base =3D NULL_TREE, stride =3D NULL_TREE, ctype =3D NULL_TREE; tree stype =3D NULL_TREE; - widest_int index; + offset_int index; unsigned savings =3D 0; slsr_cand_t c; slsr_cand_t base_cand =3D base_cand_from_table (base_in); @@ -1112,7 +1112,7 @@ create_mul_ssa_cand (gimple *gs, tree ba =09 =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D =09 X =3D B + ((i' * S) * Z) */ =09 base =3D base_cand->base_expr; -=09 index =3D base_cand->index * wi::to_widest (base_cand->stride); +=09 index =3D base_cand->index * wi::to_offset (base_cand->stride); =09 stride =3D stride_in; =09 ctype =3D base_cand->cand_type; =09 stype =3D TREE_TYPE (stride_in); @@ -1149,7 +1149,7 @@ static slsr_cand_t create_mul_imm_cand (gimple *gs, tree base_in, tree stride_in, bool speed) { tree base =3D NULL_TREE, stride =3D NULL_TREE, ctype =3D NULL_TREE; - widest_int index, temp; + offset_int index, temp; unsigned savings =3D 0; slsr_cand_t c; slsr_cand_t base_cand =3D base_cand_from_table (base_in); @@ -1165,7 +1165,7 @@ create_mul_imm_cand (gimple *gs, tree ba =09 X =3D Y * c =09 =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D =09 X =3D (B + i') * (S * c) */ -=09 temp =3D wi::to_widest (base_cand->stride) * wi::to_widest (stride_in= ); +=09 temp =3D wi::to_offset (base_cand->stride) * wi::to_offset (stride_in= ); =09 if (wi::fits_to_tree_p (temp, TREE_TYPE (stride_in))) =09 { =09 base =3D base_cand->base_expr; @@ -1200,7 +1200,7 @@ create_mul_imm_cand (gimple *gs, tree ba =09 =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D =09 X =3D (B + S) * c */ =09 base =3D base_cand->base_expr; -=09 index =3D wi::to_widest (base_cand->stride); +=09 index =3D wi::to_offset (base_cand->stride); =09 stride =3D stride_in; =09 ctype =3D base_cand->cand_type; =09 if (has_single_use (base_in)) @@ -1281,7 +1281,7 @@ 
create_add_ssa_cand (gimple *gs, tree ba
 {
   tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE;
   tree stype = NULL_TREE;
-  widest_int index;
+  offset_int index;
   unsigned savings = 0;
   slsr_cand_t c;
   slsr_cand_t base_cand = base_cand_from_table (base_in);
@@ -1300,7 +1300,7 @@ create_add_ssa_cand (gimple *gs, tree ba
	     ===========================
	     X = Y + ((+/-1 * S) * B)  */
	  base = base_in;
-	  index = wi::to_widest (addend_cand->stride);
+	  index = wi::to_offset (addend_cand->stride);
	  if (subtract_p)
	    index = -index;
	  stride = addend_cand->base_expr;
@@ -1350,7 +1350,7 @@ create_add_ssa_cand (gimple *gs, tree ba
		 ===========================
		 Value:  X = Y + ((-1 * S) * B)  */
	      base = base_in;
-	      index = wi::to_widest (subtrahend_cand->stride);
+	      index = wi::to_offset (subtrahend_cand->stride);
	      index = -index;
	      stride = subtrahend_cand->base_expr;
	      ctype = TREE_TYPE (base_in);
@@ -1389,13 +1389,13 @@ create_add_ssa_cand (gimple *gs, tree ba
    about BASE_IN into the new candidate.  Return the new candidate.
 */
 
 static slsr_cand_t
-create_add_imm_cand (gimple *gs, tree base_in, const widest_int &index_in,
+create_add_imm_cand (gimple *gs, tree base_in, const offset_int &index_in,
		     bool speed)
 {
   enum cand_kind kind = CAND_ADD;
   tree base = NULL_TREE, stride = NULL_TREE, ctype = NULL_TREE;
   tree stype = NULL_TREE;
-  widest_int index, multiple;
+  offset_int index, multiple;
   unsigned savings = 0;
   slsr_cand_t c;
   slsr_cand_t base_cand = base_cand_from_table (base_in);
@@ -1405,7 +1405,7 @@ create_add_imm_cand (gimple *gs, tree ba
       signop sign = TYPE_SIGN (TREE_TYPE (base_cand->stride));
 
       if (TREE_CODE (base_cand->stride) == INTEGER_CST
-	  && wi::multiple_of_p (index_in, wi::to_widest (base_cand->stride),
+	  && wi::multiple_of_p (index_in, wi::to_offset (base_cand->stride),
				sign, &multiple))
	{
	  /* Y = (B + i') * S, S constant, c = kS for some integer k
@@ -1494,7 +1494,7 @@ slsr_process_add (gimple *gs, tree rhs1,
   else if (TREE_CODE (rhs2) == INTEGER_CST)
     {
       /* Record an interpretation for the add-immediate.  */
-      widest_int index = wi::to_widest (rhs2);
+      offset_int index = wi::to_offset (rhs2);
       if (subtract_p)
	index = -index;
 
@@ -2079,7 +2079,7 @@ phi_dependent_cand_p (slsr_cand_t c)
 /* Calculate the increment required for candidate C relative to 
    its basis.  */
 
-static widest_int
+static offset_int
 cand_increment (slsr_cand_t c)
 {
   slsr_cand_t basis;
@@ -2102,10 +2102,10 @@ cand_increment (slsr_cand_t c)
    for this candidate, return the absolute value of that increment
    instead.  */
 
-static inline widest_int
+static inline offset_int
 cand_abs_increment (slsr_cand_t c)
 {
-  widest_int increment = cand_increment (c);
+  offset_int increment = cand_increment (c);
 
   if (!address_arithmetic_p && wi::neg_p (increment))
     increment = -increment;
@@ -2126,7 +2126,7 @@ cand_already_replaced (slsr_cand_t c)
    replace_conditional_candidate.
 */
 
 static void
-replace_mult_candidate (slsr_cand_t c, tree basis_name, widest_int bump)
+replace_mult_candidate (slsr_cand_t c, tree basis_name, offset_int bump)
 {
   tree target_type = TREE_TYPE (gimple_assign_lhs (c->cand_stmt));
   enum tree_code cand_code = gimple_assign_rhs_code (c->cand_stmt);
@@ -2245,7 +2245,7 @@ replace_unconditional_candidate (slsr_ca
     return;
 
   basis = lookup_cand (c->basis);
-  widest_int bump = cand_increment (c) * wi::to_widest (c->stride);
+  offset_int bump = cand_increment (c) * wi::to_offset (c->stride);
 
   replace_mult_candidate (c, gimple_assign_lhs (basis->cand_stmt), bump);
 }
@@ -2255,7 +2255,7 @@ replace_unconditional_candidate (slsr_ca
    MAX_INCR_VEC_LEN increments have been found.  */
 
 static inline int
-incr_vec_index (const widest_int &increment)
+incr_vec_index (const offset_int &increment)
 {
   unsigned i;
 
@@ -2275,7 +2275,7 @@ incr_vec_index (const widest_int &increm
 
 static tree
 create_add_on_incoming_edge (slsr_cand_t c, tree basis_name,
-			     widest_int increment, edge e, location_t loc,
+			     offset_int increment, edge e, location_t loc,
			     bool known_stride)
 {
   tree lhs, basis_type;
@@ -2299,7 +2299,7 @@ create_add_on_incoming_edge (slsr_cand_t
     {
       tree bump_tree;
       enum tree_code code = plus_code;
-      widest_int bump = increment * wi::to_widest (c->stride);
+      offset_int bump = increment * wi::to_offset (c->stride);
       if (wi::neg_p (bump) && !POINTER_TYPE_P (basis_type))
	{
	  code = MINUS_EXPR;
@@ -2427,7 +2427,7 @@ create_phi_basis_1 (slsr_cand_t c, gimpl
	  feeding_def = gimple_assign_lhs (basis->cand_stmt);
	else
	  {
-	    widest_int incr = -basis->index;
+	    offset_int incr = -basis->index;
	    feeding_def = create_add_on_incoming_edge (c, basis_name, incr,
						       e, loc, known_stride);
	  }
@@ -2444,7 +2444,7 @@ create_phi_basis_1 (slsr_cand_t c, gimpl
	  else
	    {
	      slsr_cand_t arg_cand = base_cand_from_table (arg);
-	      widest_int diff =
arg_cand->index - basis->index;
+	      offset_int diff = arg_cand->index - basis->index;
	      feeding_def = create_add_on_incoming_edge (c, basis_name, diff,
							 e, loc, known_stride);
	    }
@@ -2525,7 +2525,7 @@ replace_conditional_candidate (slsr_cand
			       basis_name, loc, KNOWN_STRIDE);
 
   /* Replace C with an add of the new basis phi and a constant.  */
-  widest_int bump = c->index * wi::to_widest (c->stride);
+  offset_int bump = c->index * wi::to_offset (c->stride);
 
   replace_mult_candidate (c, name, bump);
 }
@@ -2614,7 +2614,7 @@ replace_uncond_cands_and_profitable_phis
     {
       /* A multiply candidate with a stride of 1 is just an artifice
	 of a copy or cast; there is no value in replacing it.  */
-      if (c->kind == CAND_MULT && wi::to_widest (c->stride) != 1)
+      if (c->kind == CAND_MULT && wi::to_offset (c->stride) != 1)
	{
	  /* A candidate dependent upon a phi will replace a multiply by 
	     a constant with an add, and will insert at most one add for
@@ -2681,7 +2681,7 @@ count_candidates (slsr_cand_t c)
    candidates with the same increment, also record T_0 for subsequent use.  */
 
 static void
-record_increment (slsr_cand_t c, widest_int increment, bool is_phi_adjust)
+record_increment (slsr_cand_t c, offset_int increment, bool is_phi_adjust)
 {
   bool found = false;
   unsigned i;
@@ -2786,7 +2786,7 @@ record_phi_increments_1 (slsr_cand_t bas
	record_phi_increments_1 (basis, arg_def);
      else
	{
-	  widest_int diff;
+	  offset_int diff;
 
	  if (operand_equal_p (arg, phi_cand->base_expr, 0))
	    {
@@ -2856,7 +2856,7 @@ record_increments (slsr_cand_t c)
 /* Recursive helper function for phi_incr_cost.
 */
 
 static int
-phi_incr_cost_1 (slsr_cand_t c, const widest_int &incr, gimple *phi,
+phi_incr_cost_1 (slsr_cand_t c, const offset_int &incr, gimple *phi,
		 int *savings)
 {
   unsigned i;
@@ -2883,7 +2883,7 @@ phi_incr_cost_1 (slsr_cand_t c, const wi
	}
      else
	{
-	  widest_int diff;
+	  offset_int diff;
	  slsr_cand_t arg_cand;
 
	  /* When the PHI argument is just a pass-through to the base
@@ -2925,7 +2925,7 @@ phi_incr_cost_1 (slsr_cand_t c, const wi
    uses.  */
 
 static int
-phi_incr_cost (slsr_cand_t c, const widest_int &incr, gimple *phi,
+phi_incr_cost (slsr_cand_t c, const offset_int &incr, gimple *phi,
	       int *savings)
 {
   int retval = phi_incr_cost_1 (c, incr, phi, savings);
@@ -2981,10 +2981,10 @@ optimize_cands_for_speed_p (slsr_cand_t
 
 static int
 lowest_cost_path (int cost_in, int repl_savings, slsr_cand_t c,
-		  const widest_int &incr, bool count_phis)
+		  const offset_int &incr, bool count_phis)
 {
   int local_cost, sib_cost, savings = 0;
-  widest_int cand_incr = cand_abs_increment (c);
+  offset_int cand_incr = cand_abs_increment (c);
 
   if (cand_already_replaced (c))
     local_cost = cost_in;
@@ -3027,11 +3027,11 @@ lowest_cost_path (int cost_in, int repl_
    would go dead.  */
 
 static int
-total_savings (int repl_savings, slsr_cand_t c, const widest_int &incr,
+total_savings (int repl_savings, slsr_cand_t c, const offset_int &incr,
	       bool count_phis)
 {
   int savings = 0;
-  widest_int cand_incr = cand_abs_increment (c);
+  offset_int cand_incr = cand_abs_increment (c);
 
   if (incr == cand_incr && !cand_already_replaced (c))
     savings += repl_savings + c->dead_savings;
@@ -3239,7 +3239,7 @@ ncd_for_two_cands (basic_block bb1, basi
    candidates, return the earliest candidate in the block in *WHERE.
 */
 
 static basic_block
-ncd_with_phi (slsr_cand_t c, const widest_int &incr, gphi *phi,
+ncd_with_phi (slsr_cand_t c, const offset_int &incr, gphi *phi,
	      basic_block ncd, slsr_cand_t *where)
 {
   unsigned i;
@@ -3255,7 +3255,7 @@ ncd_with_phi (slsr_cand_t c, const wides
	ncd = ncd_with_phi (c, incr, as_a <gphi *> (arg_def), ncd, where);
      else 
	{
-	  widest_int diff;
+	  offset_int diff;
 
	  if (operand_equal_p (arg, phi_cand->base_expr, 0))
	    diff = -basis->index;
@@ -3282,7 +3282,7 @@ ncd_with_phi (slsr_cand_t c, const wides
    return the earliest candidate in the block in *WHERE.  */
 
 static basic_block
-ncd_of_cand_and_phis (slsr_cand_t c, const widest_int &incr, slsr_cand_t *where)
+ncd_of_cand_and_phis (slsr_cand_t c, const offset_int &incr, slsr_cand_t *where)
 {
   basic_block ncd = NULL;
 
@@ -3308,7 +3308,7 @@ ncd_of_cand_and_phis (slsr_cand_t c, con
    *WHERE.  */
 
 static basic_block
-nearest_common_dominator_for_cands (slsr_cand_t c, const widest_int &incr,
+nearest_common_dominator_for_cands (slsr_cand_t c, const offset_int &incr,
				    slsr_cand_t *where)
 {
   basic_block sib_ncd = NULL, dep_ncd = NULL, this_ncd = NULL, ncd;
@@ -3385,7 +3385,7 @@ insert_initializers (slsr_cand_t c)
   gassign *init_stmt;
   gassign *cast_stmt = NULL;
   tree new_name, incr_tree, init_stride;
-  widest_int incr = incr_vec[i].incr;
+  offset_int incr = incr_vec[i].incr;
 
   if (!profitable_increment_p (i)
	  || incr == 1
@@ -3550,7 +3550,7 @@ all_phi_incrs_profitable_1 (slsr_cand_t
      else
	{
	  int j;
-	  widest_int increment;
+	  offset_int increment;
 
	  if (operand_equal_p (arg, phi_cand->base_expr, 0))
	    increment = -basis->index;
@@ -3681,7 +3681,7 @@ replace_one_candidate (slsr_cand_t c, un
   tree orig_rhs1, orig_rhs2;
   tree rhs2;
   enum tree_code orig_code, repl_code;
-  widest_int cand_incr;
+  offset_int cand_incr;
 
   orig_code = gimple_assign_rhs_code (c->cand_stmt);
   orig_rhs1 = gimple_assign_rhs1 (c->cand_stmt);
@@ -3839,7
+3839,7 @@ replace_profitable_candidates (slsr_cand
     {
       if (!cand_already_replaced (c))
	{
-	  widest_int increment = cand_abs_increment (c);
+	  offset_int increment = cand_abs_increment (c);
	  enum tree_code orig_code = gimple_assign_rhs_code (c->cand_stmt);
	  int i;
 
--- gcc/real.cc.jj	2023-10-04 16:28:04.263783248 +0200
+++ gcc/real.cc	2023-10-05 11:36:54.902247893 +0200
@@ -1477,7 +1477,7 @@ real_to_integer (const REAL_VALUE_TYPE *
 wide_int
 real_to_integer (const REAL_VALUE_TYPE *r, bool *fail, int precision)
 {
-  HOST_WIDE_INT val[2 * WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT valb[WIDE_INT_MAX_INL_ELTS], *val;
   int exp;
   int words, w;
   wide_int result;
@@ -1516,7 +1516,11 @@ real_to_integer (const REAL_VALUE_TYPE *
	 is the smallest HWI-multiple that has at least PRECISION bits.
	 This ensures that the top bit of the significand is in the
	 top bit of the wide_int.  */
-      words = (precision + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT;
+      words = ((precision + HOST_BITS_PER_WIDE_INT - 1)
+	       / HOST_BITS_PER_WIDE_INT);
+      val = valb;
+      if (UNLIKELY (words > WIDE_INT_MAX_INL_ELTS))
+	val = XALLOCAVEC (HOST_WIDE_INT, words);
       w = words * HOST_BITS_PER_WIDE_INT;
 
 #if (HOST_BITS_PER_WIDE_INT == HOST_BITS_PER_LONG)
--- gcc/omp-general.cc.jj	2023-10-04 16:28:04.218783861 +0200
+++ gcc/omp-general.cc	2023-10-06 13:39:37.002609538 +0200
@@ -1986,13 +1986,17 @@ omp_get_context_selector (tree ctx, cons
   return NULL_TREE;
 }
 
+/* Needs to be a GC-friendly widest_int variant, but precision is
+   desirable to be the same on all targets.  */
+typedef generic_wide_int <fixed_wide_int_storage <1024> > score_wide_int;
+
 /* Compute *SCORE for context selector CTX.  Return true if the score
    would be different depending on whether it is a declare simd clone
    or not.  DECLARE_SIMD should be true for the case when it would be
    a declare simd clone.
 */
 
 static bool
-omp_context_compute_score (tree ctx, widest_int *score, bool declare_simd)
+omp_context_compute_score (tree ctx, score_wide_int *score, bool declare_simd)
 {
   tree construct = omp_get_context_selector (ctx, "construct", NULL);
   bool has_kind = omp_get_context_selector (ctx, "device", "kind");
@@ -2007,7 +2011,11 @@ omp_context_compute_score (tree ctx, wid
	  if (TREE_PURPOSE (t3)
	      && strcmp (IDENTIFIER_POINTER (TREE_PURPOSE (t3)), " score") == 0
	      && TREE_CODE (TREE_VALUE (t3)) == INTEGER_CST)
-	    *score += wi::to_widest (TREE_VALUE (t3));
+	    {
+	      tree t4 = TREE_VALUE (t3);
+	      *score += score_wide_int::from (wi::to_wide (t4),
+					      TYPE_SIGN (TREE_TYPE (t4)));
+	    }
       if (construct || has_kind || has_arch || has_isa)
	{
	  int scores[12];
@@ -2028,16 +2036,16 @@ omp_context_compute_score (tree ctx, wid
		      *score = -1;
		      return ret;
		    }
-		  *score += wi::shifted_mask <widest_int> (scores[b + n], 1, false);
+		  *score += wi::shifted_mask <score_wide_int> (scores[b + n], 1, false);
	      }
	  if (has_kind)
-	    *score += wi::shifted_mask <widest_int> (scores[b + nconstructs],
						     1, false);
+	    *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs],
							 1, false);
	  if (has_arch)
-	    *score += wi::shifted_mask <widest_int> (scores[b + nconstructs] + 1,
						     1, false);
+	    *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs] + 1,
							 1, false);
	  if (has_isa)
-	    *score += wi::shifted_mask <widest_int> (scores[b + nconstructs] + 2,
						     1, false);
+	    *score += wi::shifted_mask <score_wide_int> (scores[b + nconstructs] + 2,
							 1, false);
	}
      else /* FIXME: Implement this.  */
@@ -2051,9 +2059,9 @@ struct GTY(()) omp_declare_variant_entry
   /* NODE of the variant.  */
   cgraph_node *variant;
   /* Score if not in declare simd clone.  */
-  widest_int score;
+  score_wide_int score;
   /* Score if in declare simd clone.  */
-  widest_int score_in_declare_simd_clone;
+  score_wide_int score_in_declare_simd_clone;
   /* Context selector for the variant.
 */
   tree ctx;
   /* True if the context selector is known to match already.  */
@@ -2214,12 +2222,12 @@ omp_resolve_late_declare_variant (tree a
	  }
      }
 
-  widest_int max_score = -1;
+  score_wide_int max_score = -1;
   varentry2 = NULL;
   FOR_EACH_VEC_SAFE_ELT (entryp->variants, i, varentry1)
     if (matches[i])
       {
-	widest_int score
+	score_wide_int score
	  = (cur_node->simdclone ? varentry1->score_in_declare_simd_clone
	     : varentry1->score);
	if (score > max_score)
@@ -2300,8 +2308,8 @@ omp_resolve_declare_variant (tree base)
 
   if (any_deferred)
     {
-      widest_int max_score1 = 0;
-      widest_int max_score2 = 0;
+      score_wide_int max_score1 = 0;
+      score_wide_int max_score2 = 0;
       bool first = true;
       unsigned int i;
       tree attr1, attr2;
@@ -2311,8 +2319,8 @@ omp_resolve_declare_variant (tree base)
       vec_alloc (entry.variants, variants.length ());
       FOR_EACH_VEC_ELT (variants, i, attr1)
	{
-	  widest_int score1;
-	  widest_int score2;
+	  score_wide_int score1;
+	  score_wide_int score2;
	  bool need_two;
	  tree ctx = TREE_VALUE (TREE_VALUE (attr1));
	  need_two = omp_context_compute_score (ctx, &score1, false);
@@ -2471,16 +2479,16 @@ omp_resolve_declare_variant (tree base)
		variants[j] = NULL_TREE;
	    }
	}
-      widest_int max_score1 = 0;
-      widest_int max_score2 = 0;
+      score_wide_int max_score1 = 0;
+      score_wide_int max_score2 = 0;
       bool first = true;
       FOR_EACH_VEC_ELT (variants, i, attr1)
	if (attr1)
	  {
	    if (variant1)
	      {
-		widest_int score1;
-		widest_int score2;
+		score_wide_int score1;
+		score_wide_int score2;
		bool need_two;
		tree ctx;
		if (first)
@@ -2552,7 +2560,7 @@ omp_lto_output_declare_variant_alt (lto_
   gcc_assert (nvar != LCC_NOT_FOUND);
   streamer_write_hwi_stream (ob->main_stream, nvar);
 
-  for (widest_int *w = &varentry->score; ;
+  for (score_wide_int *w = &varentry->score; ;
       w = &varentry->score_in_declare_simd_clone)
	{
	  unsigned len = w->get_len ();
@@ -2602,15 +2610,15 @@
omp_lto_input_declare_variant_alt (lto_i
   omp_declare_variant_entry varentry;
   varentry.variant
	= dyn_cast <cgraph_node *> (nodes[streamer_read_hwi (ib)]);
-  for (widest_int *w = &varentry.score; ;
+  for (score_wide_int *w = &varentry.score; ;
       w = &varentry.score_in_declare_simd_clone)
	{
	  unsigned len2 = streamer_read_hwi (ib);
-	  HOST_WIDE_INT arr[WIDE_INT_MAX_ELTS];
-	  gcc_assert (len2 <= WIDE_INT_MAX_ELTS);
+	  HOST_WIDE_INT arr[WIDE_INT_MAX_HWIS (1024)];
+	  gcc_assert (len2 <= WIDE_INT_MAX_HWIS (1024));
	  for (unsigned int j = 0; j < len2; j++)
	    arr[j] = streamer_read_hwi (ib);
-	  *w = widest_int::from_array (arr, len2, true);
+	  *w = score_wide_int::from_array (arr, len2, true);
	  if (w == &varentry.score_in_declare_simd_clone)
	    break;
	}
--- gcc/graphite-isl-ast-to-gimple.cc.jj	2023-10-04 16:28:04.164784597 +0200
+++ gcc/graphite-isl-ast-to-gimple.cc	2023-10-05 11:36:55.064245673 +0200
@@ -274,7 +274,7 @@ widest_int_from_isl_expr_int (__isl_keep
   isl_val *val = isl_ast_expr_get_val (expr);
   size_t n = isl_val_n_abs_num_chunks (val, sizeof (HOST_WIDE_INT));
   HOST_WIDE_INT *chunks = XALLOCAVEC (HOST_WIDE_INT, n);
-  if (n > WIDE_INT_MAX_ELTS
+  if (n > WIDEST_INT_MAX_ELTS
      || isl_val_get_abs_num_chunks (val, sizeof (HOST_WIDE_INT), chunks) == -1)
    {
      isl_val_free (val);
--- gcc/poly-int.h.jj	2023-10-04 16:28:04.242783534 +0200
+++ gcc/poly-int.h	2023-10-05 11:36:55.194243890 +0200
@@ -109,6 +109,21 @@ struct poly_coeff_traits
+template<typename T>
+struct poly_coeff_traits<T, wi::WIDEST_CONST_PRECISION>
+{
+  typedef WI_UNARY_RESULT (T) result;
+  typedef int int_type;
+  /* These types are always signed.  */
+  static const int signedness = 1;
+  static const int precision = wi::int_traits<T>::precision;
+  static const int inl_precision = wi::int_traits<T>::inl_precision;
+  static const int rank = precision * 2 / CHAR_BIT;
+
+  template<typename Arg>
+  struct init_cast { using type = const Arg &; };
+};
+
 /* Information about a pair of coefficient types.
 */
 template<typename T1, typename T2>
 struct poly_coeff_pair_traits
--- gcc/gimple-ssa-warn-alloca.cc.jj	2023-10-04 16:28:04.126785115 +0200
+++ gcc/gimple-ssa-warn-alloca.cc	2023-10-05 11:36:55.126244823 +0200
@@ -310,7 +310,7 @@ pass_walloca::execute (function *fun)
 
	  enum opt_code wcode
	    = is_vla ? OPT_Wvla_larger_than_ : OPT_Walloca_larger_than_;
-	  char buff[WIDE_INT_MAX_PRECISION / 4 + 4];
+	  char buff[WIDE_INT_MAX_INL_PRECISION / 4 + 4];
	  switch (t.type)
	    {
	    case ALLOCA_OK:
@@ -329,6 +329,7 @@ pass_walloca::execute (function *fun)
				 "large")))
	      && t.limit != 0)
	    {
+	      gcc_assert (t.limit.get_len () < WIDE_INT_MAX_INL_ELTS);
	      print_decu (t.limit, buff);
	      inform (loc, "limit is %wu bytes, but argument "
			   "may be as large as %s",
@@ -347,6 +348,7 @@ pass_walloca::execute (function *fun)
			   : G_("argument to %<alloca%> is too large")))
	      && t.limit != 0)
	    {
+	      gcc_assert (t.limit.get_len () < WIDE_INT_MAX_INL_ELTS);
	      print_decu (t.limit, buff);
	      inform (loc, "limit is %wu bytes, but argument is %s",
		      is_vla ?
warn_vla_limit : adjusted_alloca_limit,
--- gcc/tree-affine.cc.jj	2023-09-28 12:05:50.975150358 +0200
+++ gcc/tree-affine.cc	2023-10-06 10:06:46.671895782 +0200
@@ -805,6 +805,7 @@ aff_combination_expand (aff_tree *comb A
	      continue;
	    }
	  exp = XNEW (class name_expansion);
+	  ::new (static_cast<void *> (exp)) name_expansion ();
	  exp->in_progress = 1;
	  if (!*cache)
	    *cache = new hash_map<tree, name_expansion *>;
@@ -860,6 +861,7 @@ tree_to_aff_combination_expand (tree exp
 bool
 free_name_expansion (tree const &, name_expansion **value, void *)
 {
+  (*value)->~name_expansion ();
   free (*value);
   return true;
 }
--- gcc/tree.cc.jj	2023-10-04 16:28:04.399781394 +0200
+++ gcc/tree.cc	2023-10-05 11:36:54.618251787 +0200
@@ -2676,13 +2676,13 @@ build_zero_cst (tree type)
 tree
 build_replicated_int_cst (tree type, unsigned int width, HOST_WIDE_INT value)
 {
-  int n = (TYPE_PRECISION (type) + HOST_BITS_PER_WIDE_INT - 1)
-    / HOST_BITS_PER_WIDE_INT;
+  int n = ((TYPE_PRECISION (type) + HOST_BITS_PER_WIDE_INT - 1)
+	   / HOST_BITS_PER_WIDE_INT);
   unsigned HOST_WIDE_INT low, mask;
-  HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT a[WIDE_INT_MAX_INL_ELTS];
   int i;
 
-  gcc_assert (n && n <= WIDE_INT_MAX_ELTS);
+  gcc_assert (n && n <= WIDE_INT_MAX_INL_ELTS);
 
   if (width == HOST_BITS_PER_WIDE_INT)
     low = value;
@@ -2696,8 +2696,8 @@ build_replicated_int_cst (tree type, uns
     a[i] = low;
 
   gcc_assert (TYPE_PRECISION (type) <= MAX_BITSIZE_MODE_ANY_INT);
-  return wide_int_to_tree
-    (type, wide_int::from_array (a, n, TYPE_PRECISION (type)));
+  return wide_int_to_tree (type, wide_int::from_array (a, n,
						       TYPE_PRECISION (type)));
 }
 
 /* If floating-point type TYPE has an IEEE-style sign bit, return an
--- gcc/gengtype.cc.jj	2023-10-04 16:28:04.102785442 +0200
+++ gcc/gengtype.cc	2023-10-05 11:36:54.966247016 +0200
@@ -5235,7 +5235,6 @@ main (int argc, char **argv)
   POS_HERE (do_scalar_typedef ("FIXED_VALUE_TYPE", &pos));
   POS_HERE (do_scalar_typedef
("double_int", &pos));
   POS_HERE (do_scalar_typedef ("offset_int", &pos));
-  POS_HERE (do_scalar_typedef ("widest_int", &pos));
   POS_HERE (do_scalar_typedef ("int64_t", &pos));
   POS_HERE (do_scalar_typedef ("poly_int64", &pos));
   POS_HERE (do_scalar_typedef ("poly_uint64", &pos));
--- gcc/dwarf2out.cc.jj	2023-10-04 16:28:04.065785946 +0200
+++ gcc/dwarf2out.cc	2023-10-05 11:36:54.656251266 +0200
@@ -397,7 +397,7 @@ dump_struct_debug (tree type, enum debug
    of the number.  */
 
 static unsigned int
-get_full_len (const wide_int &op)
+get_full_len (const rwide_int &op)
 {
   int prec = wi::get_precision (op);
   return ((prec + HOST_BITS_PER_WIDE_INT - 1)
@@ -3900,7 +3900,7 @@ static void add_data_member_location_att
						struct vlr_context *);
 static bool add_const_value_attribute (dw_die_ref, machine_mode, rtx);
 static void insert_int (HOST_WIDE_INT, unsigned, unsigned char *);
-static void insert_wide_int (const wide_int &, unsigned char *, int);
+static void insert_wide_int (const rwide_int &, unsigned char *, int);
 static unsigned insert_float (const_rtx, unsigned char *);
 static rtx rtl_for_decl_location (tree);
 static bool add_location_or_const_value_attribute (dw_die_ref, tree, bool);
@@ -4598,14 +4598,14 @@ AT_unsigned (dw_attr_node *a)
 
 static inline void
 add_AT_wide (dw_die_ref die, enum dwarf_attribute attr_kind,
-	     const wide_int& w)
+	     const rwide_int& w)
 {
   dw_attr_node attr;
 
   attr.dw_attr = attr_kind;
   attr.dw_attr_val.val_class = dw_val_class_wide_int;
   attr.dw_attr_val.val_entry = NULL;
-  attr.dw_attr_val.v.val_wide = ggc_alloc <wide_int> ();
+  attr.dw_attr_val.v.val_wide = ggc_alloc <rwide_int> ();
   *attr.dw_attr_val.v.val_wide = w;
   add_dwarf_attr (die, &attr);
 }
@@ -16714,7 +16714,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
	  mem_loc_result->dw_loc_oprnd1.v.val_die_ref.external = 0;
	  mem_loc_result->dw_loc_oprnd2.val_class
	    = dw_val_class_wide_int;
-	  mem_loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc <wide_int> ();
+	 
mem_loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc <rwide_int> ();
	  *mem_loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, mode);
	}
      break;
@@ -17288,7 +17288,7 @@ loc_descriptor (rtx rtl, machine_mode mo
	  loc_result = new_loc_descr (DW_OP_implicit_value,
				      GET_MODE_SIZE (int_mode), 0);
	  loc_result->dw_loc_oprnd2.val_class = dw_val_class_wide_int;
-	  loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc <wide_int> ();
+	  loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc <rwide_int> ();
	  *loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, int_mode);
	}
      break;
@@ -20189,7 +20189,7 @@ extract_int (const unsigned char *src, u
 /* Writes wide_int values to dw_vec_const array.  */
 
 static void
-insert_wide_int (const wide_int &val, unsigned char *dest, int elt_size)
+insert_wide_int (const rwide_int &val, unsigned char *dest, int elt_size)
 {
   int i;
 
@@ -20274,7 +20274,7 @@ add_const_value_attribute (dw_die_ref di
	  && (GET_MODE_PRECISION (int_mode)
	      & (HOST_BITS_PER_WIDE_INT - 1)) == 0)
	{
-	  wide_int w = rtx_mode_t (rtl, int_mode);
+	  rwide_int w = rtx_mode_t (rtl, int_mode);
	  add_AT_wide (die, DW_AT_const_value, w);
	  return true;
	}
--- gcc/wide-int.cc.jj	2023-10-04 16:28:04.466780481 +0200
+++ gcc/wide-int.cc	2023-10-06 12:31:56.841517949 +0200
@@ -51,7 +51,7 @@ typedef unsigned int UDWtype __attribute
 #include "longlong.h"
 #endif
 
-static const HOST_WIDE_INT zeros[WIDE_INT_MAX_ELTS] = {};
+static const HOST_WIDE_INT zeros[1] = {};
 
 /*
  * Internal utilities.
@@ -62,8 +62,7 @@ static const HOST_WIDE_INT zeros[WIDE_IN
 #define HALF_INT_MASK ((HOST_WIDE_INT_1 << HOST_BITS_PER_HALF_WIDE_INT) - 1)
 
 #define BLOCK_OF(TARGET) ((TARGET) / HOST_BITS_PER_WIDE_INT)
-#define BLOCKS_NEEDED(PREC) \
-  (PREC ? (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT) : 1)
+#define BLOCKS_NEEDED(PREC) (PREC ? CEIL (PREC, HOST_BITS_PER_WIDE_INT) : 1)
 #define SIGN_MASK(X) ((HOST_WIDE_INT) (X) < 0 ?
-1 : 0)
 
 /* Return the value a VAL[I] if I < LEN, otherwise, return 0 or -1
@@ -96,7 +95,7 @@ canonize (HOST_WIDE_INT *val, unsigned i
   top = val[len - 1];
   if (len * HOST_BITS_PER_WIDE_INT > precision)
     val[len - 1] = top = sext_hwi (top, precision % HOST_BITS_PER_WIDE_INT);
-  if (top != 0 && top != (HOST_WIDE_INT)-1)
+  if (top != 0 && top != HOST_WIDE_INT_M1)
     return len;
 
   /* At this point we know that the top is either 0 or -1.  Find the
@@ -163,7 +162,7 @@ wi::from_buffer (const unsigned char *bu
   /* We have to clear all the bits ourself, as we merely or in values
      below.  */
   unsigned int len = BLOCKS_NEEDED (precision);
-  HOST_WIDE_INT *val = result.write_val ();
+  HOST_WIDE_INT *val = result.write_val (0);
   for (unsigned int i = 0; i < len; ++i)
     val[i] = 0;
 
@@ -232,8 +231,7 @@ wi::to_mpz (const wide_int_ref &x, mpz_t
    }
   else if (excess < 0 && wi::neg_p (x))
    {
-      int extra
-	= (-excess + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT;
+      int extra = CEIL (-excess, HOST_BITS_PER_WIDE_INT);
      HOST_WIDE_INT *t = XALLOCAVEC (HOST_WIDE_INT, len + extra);
      for (int i = 0; i < len; i++)
	t[i] = v[i];
@@ -280,8 +278,8 @@ wi::from_mpz (const_tree type, mpz_t x,
     extracted from the GMP manual, section "Integer Import and Export":
     http://gmplib.org/manual/Integer-Import-and-Export.html  */
   numb = CHAR_BIT * sizeof (HOST_WIDE_INT);
-  count = (mpz_sizeinbase (x, 2) + numb - 1) / numb;
-  HOST_WIDE_INT *val = res.write_val ();
+  count = CEIL (mpz_sizeinbase (x, 2), numb);
+  HOST_WIDE_INT *val = res.write_val (0);
   /* Read the absolute value.
 
     Write directly to the wide_int storage if possible, otherwise leave
@@ -289,7 +287,7 @@ wi::from_mpz (const_tree type, mpz_t x,
     to use mpz_tdiv_r_2exp for the latter case, but the situation is
     pathological and it seems safer to operate on the original mpz value
     in all cases.  */
-  void *valres = mpz_export (count <= WIDE_INT_MAX_ELTS ?
val : 0,
+  void *valres = mpz_export (count <= WIDE_INT_MAX_INL_ELTS ? val : 0,
			     &count, -1, sizeof (HOST_WIDE_INT), 0, 0, x);
   if (count < 1)
    {
@@ -1334,21 +1332,6 @@ wi::mul_internal (HOST_WIDE_INT *val, co
   unsigned HOST_WIDE_INT o0, o1, k, t;
   unsigned int i;
   unsigned int j;
-  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
-  unsigned int half_blocks_needed = blocks_needed * 2;
-  /* The sizes here are scaled to support a 2x largest mode by 2x
-     largest mode yielding a 4x largest mode result.  This is what is
-     needed by vpn.  */
-
-  unsigned HOST_HALF_WIDE_INT
-    u[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    v[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  /* The '2' in 'R' is because we are internally doing a full
-     multiply.  */
-  unsigned HOST_HALF_WIDE_INT
-    r[2 * 4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
 
   /* If the top level routine did not really pass in an overflow, then
     just make sure that we never attempt to set it.  */
@@ -1469,6 +1452,36 @@ wi::mul_internal (HOST_WIDE_INT *val, co
      return 1;
    }
 
+  /* The sizes here are scaled to support a 2x WIDE_INT_MAX_INL_PRECISION by 2x
+     WIDE_INT_MAX_INL_PRECISION yielding a 4x WIDE_INT_MAX_INL_PRECISION
+     result.  */
+
+  unsigned HOST_HALF_WIDE_INT
+    ubuf[4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    vbuf[4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  /* The '2' in 'R' is because we are internally doing a full
+     multiply.
 */
+  unsigned HOST_HALF_WIDE_INT
+    rbuf[2 * 4 * WIDE_INT_MAX_INL_PRECISION / HOST_BITS_PER_HALF_WIDE_INT];
+  const HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
+  unsigned HOST_HALF_WIDE_INT *u = ubuf;
+  unsigned HOST_HALF_WIDE_INT *v = vbuf;
+  unsigned HOST_HALF_WIDE_INT *r = rbuf;
+
+  if (prec > WIDE_INT_MAX_INL_PRECISION && !high)
+    prec = (op1len + op2len + 1) * HOST_BITS_PER_WIDE_INT;
+  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
+  unsigned int half_blocks_needed = blocks_needed * 2;
+  if (UNLIKELY (prec > WIDE_INT_MAX_INL_PRECISION))
+    {
+      unsigned HOST_HALF_WIDE_INT *buf
+	= XALLOCAVEC (unsigned HOST_HALF_WIDE_INT, 4 * 4 * blocks_needed);
+      u = buf;
+      v = u + 4 * blocks_needed;
+      r = v + 4 * blocks_needed;
+    }
+
   /* We do unsigned mul and then correct it.  */
   wi_unpack (u, op1val, op1len, half_blocks_needed, prec, SIGNED);
   wi_unpack (v, op2val, op2len, half_blocks_needed, prec, SIGNED);
@@ -1782,16 +1795,6 @@ wi::divmod_internal (HOST_WIDE_INT *quot
		     unsigned int divisor_prec, signop sgn,
		     wi::overflow_type *oflow)
 {
-  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
-  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
-  unsigned HOST_HALF_WIDE_INT
-    b_quotient[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    b_remainder[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
-  unsigned HOST_HALF_WIDE_INT
-    b_dividend[(4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT) + 1];
-  unsigned HOST_HALF_WIDE_INT
-    b_divisor[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
   unsigned int m, n;
   bool dividend_neg = false;
   bool divisor_neg = false;
@@ -1910,6 +1913,44 @@ wi::divmod_internal (HOST_WIDE_INT *quot
	}
    }
 
+  unsigned HOST_HALF_WIDE_INT
+    b_quotient_buf[4 * WIDE_INT_MAX_INL_PRECISION
+		   / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+
b_remainder_buf[4 * WIDE_INT_MAX_INL_PRECISION
+		    / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    b_dividend_buf[(4 * WIDE_INT_MAX_INL_PRECISION
+		    / HOST_BITS_PER_HALF_WIDE_INT) + 1];
+  unsigned HOST_HALF_WIDE_INT
+    b_divisor_buf[4 * WIDE_INT_MAX_INL_PRECISION
+		  / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT *b_quotient = b_quotient_buf;
+  unsigned HOST_HALF_WIDE_INT *b_remainder = b_remainder_buf;
+  unsigned HOST_HALF_WIDE_INT *b_dividend = b_dividend_buf;
+  unsigned HOST_HALF_WIDE_INT *b_divisor = b_divisor_buf;
+
+  if (dividend_prec > WIDE_INT_MAX_INL_PRECISION
+      && (sgn == SIGNED || dividend_val[dividend_len - 1] >= 0))
+    dividend_prec = (dividend_len + 1) * HOST_BITS_PER_WIDE_INT;
+  if (divisor_prec > WIDE_INT_MAX_INL_PRECISION)
+    divisor_prec = divisor_len * HOST_BITS_PER_WIDE_INT;
+  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
+  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
+  if (UNLIKELY (dividend_prec > WIDE_INT_MAX_INL_PRECISION)
+      || UNLIKELY (divisor_prec > WIDE_INT_MAX_INL_PRECISION))
+    {
+      unsigned HOST_HALF_WIDE_INT *buf
+	= XALLOCAVEC (unsigned HOST_HALF_WIDE_INT,
+		      12 * dividend_blocks_needed
+		      + 4 * divisor_blocks_needed + 1);
+      b_quotient = buf;
+      b_remainder = b_quotient + 4 * dividend_blocks_needed;
+      b_dividend = b_remainder + 4 * dividend_blocks_needed;
+      b_divisor = b_dividend + 4 * dividend_blocks_needed + 1;
+      memset (b_quotient, 0,
+	      4 * dividend_blocks_needed * sizeof (HOST_HALF_WIDE_INT));
+    }
   wi_unpack (b_dividend, dividend.get_val (), dividend.get_len (),
	     dividend_blocks_needed, dividend_prec, UNSIGNED);
   wi_unpack (b_divisor, divisor.get_val (), divisor.get_len (),
@@ -1924,7 +1965,8 @@ wi::divmod_internal (HOST_WIDE_INT *quot
   while (n > 1 && b_divisor[n - 1] == 0)
     n--;
 
-  memset (b_quotient, 0, sizeof (b_quotient));
+  if (b_quotient == b_quotient_buf)
+    memset
(b_quotient_buf, 0, sizeof (b_quotient_buf)); =20 divmod_internal_2 (b_quotient, b_remainder, b_dividend, b_divisor, m, n)= ; =20 @@ -1970,6 +2012,8 @@ wi::lshift_large (HOST_WIDE_INT *val, co =20 /* The whole-block shift fills with zeros. */ unsigned int len =3D BLOCKS_NEEDED (precision); + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) + len =3D xlen + skip + 1; for (unsigned int i =3D 0; i < skip; ++i) val[i] =3D 0; =20 @@ -1993,22 +2037,17 @@ wi::lshift_large (HOST_WIDE_INT *val, co return canonize (val, len, precision); } =20 -/* Right shift XVAL by SHIFT and store the result in VAL. Return the +/* Right shift XVAL by SHIFT and store the result in VAL. LEN is the number of blocks in VAL. The input has XPRECISION bits and the output has XPRECISION - SHIFT bits. */ -static unsigned int +static void rshift_large_common (HOST_WIDE_INT *val, const HOST_WIDE_INT *xval, -=09=09 unsigned int xlen, unsigned int xprecision, -=09=09 unsigned int shift) +=09=09 unsigned int xlen, unsigned int shift, unsigned int len) { /* Split the shift into a whole-block shift and a subblock shift. */ unsigned int skip =3D shift / HOST_BITS_PER_WIDE_INT; unsigned int small_shift =3D shift % HOST_BITS_PER_WIDE_INT; =20 - /* Work out how many blocks are needed to store the significant bits - (excluding the upper zeros or signs). */ - unsigned int len =3D BLOCKS_NEEDED (xprecision - shift); - /* It's easier to handle the simple block case specially. */ if (small_shift =3D=3D 0) for (unsigned int i =3D 0; i < len; ++i) @@ -2025,7 +2064,6 @@ rshift_large_common (HOST_WIDE_INT *val, =09 val[i] |=3D curr << (-small_shift % HOST_BITS_PER_WIDE_INT); =09} } - return len; } =20 /* Logically right shift XVAL by SHIFT and store the result in VAL. 
@@ -2036,11 +2074,20 @@ wi::lrshift_large (HOST_WIDE_INT *val, c =09=09 unsigned int xlen, unsigned int xprecision, =09=09 unsigned int precision, unsigned int shift) { - unsigned int len =3D rshift_large_common (val, xval, xlen, xprecision, s= hift); + /* Work out how many blocks are needed to store the significant bits + (excluding the upper zeros or signs). */ + unsigned int blocks_needed =3D BLOCKS_NEEDED (xprecision - shift); + unsigned int len =3D blocks_needed; + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS) + && len > xlen + && xval[xlen - 1] >=3D 0) + len =3D xlen; + + rshift_large_common (val, xval, xlen, shift, len); =20 /* The value we just created has precision XPRECISION - SHIFT. Zero-extend it to wider precisions. */ - if (precision > xprecision - shift) + if (precision > xprecision - shift && len =3D=3D blocks_needed) { unsigned int small_prec =3D (xprecision - shift) % HOST_BITS_PER_WID= E_INT; if (small_prec) @@ -2063,11 +2110,18 @@ wi::arshift_large (HOST_WIDE_INT *val, c =09=09 unsigned int xlen, unsigned int xprecision, =09=09 unsigned int precision, unsigned int shift) { - unsigned int len =3D rshift_large_common (val, xval, xlen, xprecision, s= hift); + /* Work out how many blocks are needed to store the significant bits + (excluding the upper zeros or signs). */ + unsigned int blocks_needed =3D BLOCKS_NEEDED (xprecision - shift); + unsigned int len =3D blocks_needed; + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS) && len > xlen) + len =3D xlen; + + rshift_large_common (val, xval, xlen, shift, len); =20 /* The value we just created has precision XPRECISION - SHIFT. Sign-extend it to wider types. 
*/ - if (precision > xprecision - shift) + if (precision > xprecision - shift && len =3D=3D blocks_needed) { unsigned int small_prec =3D (xprecision - shift) % HOST_BITS_PER_WID= E_INT; if (small_prec) @@ -2399,9 +2453,12 @@ from_int (int i) static void assert_deceq (const char *expected, const wide_int_ref &wi, signop sgn) { - char buf[WIDE_INT_PRINT_BUFFER_SIZE]; - print_dec (wi, buf, sgn); - ASSERT_STREQ (expected, buf); + char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p =3D buf; + unsigned len =3D wi.get_len (); + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) + p =3D XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4); + print_dec (wi, p, sgn); + ASSERT_STREQ (expected, p); } =20 /* Likewise for base 16. */ @@ -2409,9 +2466,12 @@ assert_deceq (const char *expected, cons static void assert_hexeq (const char *expected, const wide_int_ref &wi) { - char buf[WIDE_INT_PRINT_BUFFER_SIZE]; - print_hex (wi, buf); - ASSERT_STREQ (expected, buf); + char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p =3D buf; + unsigned len =3D wi.get_len (); + if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)) + p =3D XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4); + print_hex (wi, p); + ASSERT_STREQ (expected, p); } =20 /* Test cases. 
*/ @@ -2428,7 +2488,7 @@ test_printing () assert_hexeq ("0x1fffffffffffffffff", wi::shwi (-1, 69)); assert_hexeq ("0xffffffffffffffff", wi::mask (64, false, 69)); assert_hexeq ("0xffffffffffffffff", wi::mask (64, false)); - if (WIDE_INT_MAX_PRECISION > 128) + if (WIDE_INT_MAX_INL_PRECISION > 128) { assert_hexeq ("0x20000000000000000fffffffffffffffe", =09=09 wi::lshift (1, 129) + wi::lshift (1, 64) - 2); --- gcc/c-family/c-warn.cc.jj=092023-10-04 16:28:03.935787718 +0200 +++ gcc/c-family/c-warn.cc=092023-10-05 11:36:55.090245316 +0200 @@ -1517,13 +1517,15 @@ match_case_to_enum_1 (tree key, tree typ return; =20 char buf[WIDE_INT_PRINT_BUFFER_SIZE]; + wide_int w =3D wi::to_wide (key); =20 + gcc_assert (w.get_len () <=3D WIDE_INT_MAX_INL_ELTS); if (tree_fits_uhwi_p (key)) - print_dec (wi::to_wide (key), buf, UNSIGNED); + print_dec (w, buf, UNSIGNED); else if (tree_fits_shwi_p (key)) - print_dec (wi::to_wide (key), buf, SIGNED); + print_dec (w, buf, SIGNED); else - print_hex (wi::to_wide (key), buf); + print_hex (w, buf); =20 if (TYPE_NAME (type) =3D=3D NULL_TREE) warning_at (DECL_SOURCE_LOCATION (CASE_LABEL (label)), --- gcc/wide-int.h.jj=092023-10-04 16:28:04.468780454 +0200 +++ gcc/wide-int.h=092023-10-06 15:13:31.117547151 +0200 @@ -27,7 +27,7 @@ along with GCC; see the file COPYING3. other longer storage GCC representations (rtl and tree). =20 The actual precision of a wide_int depends on the flavor. There - are three predefined flavors: + are four predefined flavors: =20 1) wide_int (the default). This flavor does the math in the precision of its input arguments. It is assumed (and checked) @@ -53,6 +53,10 @@ along with GCC; see the file COPYING3. multiply, division, shifts, comparisons, and operations that need overflow detected), the signedness must be specified separately. 
=20 + For precisions up to WIDE_INT_MAX_INL_PRECISION, it uses an inline + buffer in the type, for larger precisions up to WIDEST_INT_MAX_PRECIS= ION + it uses a pointer to heap allocated buffer. + 2) offset_int. This is a fixed-precision integer that can hold any address offset, measured in either bits or bytes, with at least one extra sign bit. At the moment the maximum address @@ -76,11 +80,15 @@ along with GCC; see the file COPYING3. wi::leu_p (a, b) as a more efficient short-hand for "a >=3D 0 && a <=3D b". ] =20 - 3) widest_int. This representation is an approximation of + 3) rwide_int. Restricted wide_int. This is similar to + wide_int, but maximum possible precision is RWIDE_INT_MAX_PRECISION + and it always uses an inline buffer. offset_int and rwide_int are + GC-friendly, wide_int and widest_int are not. + + 4) widest_int. This representation is an approximation of infinite precision math. However, it is not really infinite precision math as in the GMP library. It is really finite - precision math where the precision is 4 times the size of the - largest integer that the target port can represent. + precision math where the precision is WIDEST_INT_MAX_PRECISION. =20 Like offset_int, widest_int is wider than all the values that it needs to represent, so the integers are logically signed. @@ -231,17 +239,34 @@ along with GCC; see the file COPYING3. can be arbitrarily different from X. */ =20 /* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very - early examination of the target's mode file. The WIDE_INT_MAX_ELTS + early examination of the target's mode file. The WIDE_INT_MAX_INL_ELTS can accomodate at least 1 more bit so that unsigned numbers of that mode can be represented as a signed value. Note that it is still possible to create fixed_wide_ints that have precisions greater than MAX_BITSIZE_MODE_ANY_INT. This can be useful when representing a double-width multiplication result, for example. 
*/ -#define WIDE_INT_MAX_ELTS \ - ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) / HOST_BITS_PER_WID= E_INT) - +#define WIDE_INT_MAX_INL_ELTS \ + ((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) \ + / HOST_BITS_PER_WIDE_INT) + +#define WIDE_INT_MAX_INL_PRECISION \ + (WIDE_INT_MAX_INL_ELTS * HOST_BITS_PER_WIDE_INT) + +/* Precision of wide_int and largest _BitInt precision + 1 we can + support. */ +#define WIDE_INT_MAX_ELTS 255 #define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT= ) =20 +#define RWIDE_INT_MAX_ELTS WIDE_INT_MAX_INL_ELTS +#define RWIDE_INT_MAX_PRECISION WIDE_INT_MAX_INL_PRECISION + +/* Precision of widest_int and largest _BitInt precision + 1 we can + support. */ +#define WIDEST_INT_MAX_ELTS 510 +#define WIDEST_INT_MAX_PRECISION (WIDEST_INT_MAX_ELTS * HOST_BITS_PER_WIDE= _INT) + +STATIC_ASSERT (WIDE_INT_MAX_INL_ELTS < WIDE_INT_MAX_ELTS); + /* This is the max size of any pointer on any machine. It does not seem to be as easy to sniff this out of the machine description as it is for MAX_BITSIZE_MODE_ANY_INT since targets may support @@ -307,17 +332,19 @@ along with GCC; see the file COPYING3. #define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \ WI_BINARY_RESULT (T1, T2) RESULT =3D \ wi::int_traits ::get_binary_result (X, Y); = \ - HOST_WIDE_INT *VAL =3D RESULT.write_val () + HOST_WIDE_INT *VAL =3D RESULT.write_val (0) =20 /* Similar for the result of a unary operation on X, which has type T. */ #define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \ WI_UNARY_RESULT (T) RESULT =3D \ wi::int_traits ::get_binary_result (X, X); \ - HOST_WIDE_INT *VAL =3D RESULT.write_val () + HOST_WIDE_INT *VAL =3D RESULT.write_val (0) =20 template class generic_wide_int; template class fixed_wide_int_storage; class wide_int_storage; +class rwide_int_storage; +template class widest_int_storage; =20 /* An N-bit integer. Until we can use typedef templates, use this instead= . 
*/ #define FIXED_WIDE_INT(N) \ @@ -325,10 +352,9 @@ class wide_int_storage; =20 typedef generic_wide_int wide_int; typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int; -typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION) widest_int; -/* Spelled out explicitly (rather than through FIXED_WIDE_INT) - so as not to confuse gengtype. */ -typedef generic_wide_int < fixed_wide_int_storage > widest2_int; +typedef generic_wide_int rwide_int; +typedef generic_wide_int = > widest_int; +typedef generic_wide_int > widest2_int; =20 /* wi::storage_ref can be a reference to a primitive type, so this is the conservatively-correct setting. */ @@ -380,7 +406,11 @@ namespace wi =20 /* The integer has a constant precision (known at GCC compile time) and is signed. */ - CONST_PRECISION + CONST_PRECISION, + + /* Like CONST_PRECISION, but with WIDEST_INT_MAX_PRECISION or larger + precision where not all elements of arrays are always present. */ + WIDEST_CONST_PRECISION }; =20 /* This class, which has no default implementation, is expected to @@ -390,9 +420,15 @@ namespace wi Classifies the type of T. =20 static const unsigned int precision; - Only defined if precision_type =3D=3D CONST_PRECISION. Specifies t= he + Only defined if precision_type =3D=3D CONST_PRECISION or + precision_type =3D=3D WIDEST_CONST_PRECISION. Specifies the precision of all integers of type T. =20 + static const unsigned int inl_precision; + Only defined if precision_type =3D=3D WIDEST_CONST_PRECISION. + Specifies precision which is represented in the inline + arrays. + static const bool host_dependent_precision; True if the precision of T depends (or can depend) on the host. =20 @@ -415,9 +451,10 @@ namespace wi struct binary_traits; =20 /* Specify the result type for each supported combination of binary - inputs. Note that CONST_PRECISION and VAR_PRECISION cannot be - mixed, in order to give stronger type checking. When both inputs - are CONST_PRECISION, they must have the same precision. */ + inputs. 
Note that CONST_PRECISION, WIDEST_CONST_PRECISION and + VAR_PRECISION cannot be mixed, in order to give stronger type + checking. When both inputs are CONST_PRECISION or both are + WIDEST_CONST_PRECISION, they must have the same precision. */ template struct binary_traits { @@ -447,6 +484,17 @@ namespace wi }; =20 template + struct binary_traits + { + typedef generic_wide_int < widest_int_storage +=09=09=09 ::inl_precision> > result_type; + typedef result_type operator_result; + typedef bool predicate_result; + typedef result_type signed_shift_result_type; + typedef bool signed_predicate_result; + }; + + template struct binary_traits { typedef wide_int result_type; @@ -468,6 +516,17 @@ namespace wi }; =20 template + struct binary_traits + { + typedef generic_wide_int < widest_int_storage +=09=09=09 ::inl_precision> > result_type; + typedef result_type operator_result; + typedef bool predicate_result; + typedef result_type signed_shift_result_type; + typedef bool signed_predicate_result; + }; + + template struct binary_traits { STATIC_ASSERT (int_traits ::precision =3D=3D int_traits ::prec= ision); @@ -482,6 +541,18 @@ namespace wi }; =20 template + struct binary_traits + { + STATIC_ASSERT (int_traits ::precision =3D=3D int_traits ::prec= ision); + typedef generic_wide_int < widest_int_storage +=09=09=09 ::inl_precision> > result_type; + typedef result_type operator_result; + typedef bool predicate_result; + typedef result_type signed_shift_result_type; + typedef bool signed_predicate_result; + }; + + template struct binary_traits { typedef wide_int result_type; @@ -709,8 +780,10 @@ wi::storage_ref::get_val () const Although not required by generic_wide_int itself, writable storage classes can also provide the following functions: =20 - HOST_WIDE_INT *write_val () - Get a modifiable version of get_val () + HOST_WIDE_INT *write_val (unsigned int) + Get a modifiable version of get_val (). 
The argument should be + upper estimation for LEN (ignored by all storages but + widest_int_storage). =20 unsigned int set_len (unsigned int len) Set the value returned by get_len () to LEN. */ @@ -777,6 +850,8 @@ public: =20 static const bool is_sign_extended =3D wi::int_traits >::is_sign_extended; + static const bool needs_write_val_arg + =3D wi::int_traits >::needs_write_val_arg; }; =20 template @@ -1049,6 +1124,7 @@ namespace wi static const enum precision_type precision_type =3D VAR_PRECISION; static const bool host_dependent_precision =3D HDP; static const bool is_sign_extended =3D SE; + static const bool needs_write_val_arg =3D false; }; } =20 @@ -1065,7 +1141,11 @@ namespace wi class GTY(()) wide_int_storage { private: - HOST_WIDE_INT val[WIDE_INT_MAX_ELTS]; + union + { + HOST_WIDE_INT val[WIDE_INT_MAX_INL_ELTS]; + HOST_WIDE_INT *valp; + } GTY((skip)) u; unsigned int len; unsigned int precision; =20 @@ -1073,14 +1153,17 @@ public: wide_int_storage (); template wide_int_storage (const T &); + wide_int_storage (const wide_int_storage &); + ~wide_int_storage (); =20 /* The standard generic_wide_int storage methods. */ unsigned int get_precision () const; const HOST_WIDE_INT *get_val () const; unsigned int get_len () const; - HOST_WIDE_INT *write_val (); + HOST_WIDE_INT *write_val (unsigned int); void set_len (unsigned int, bool =3D false); =20 + wide_int_storage &operator =3D (const wide_int_storage &); template wide_int_storage &operator =3D (const T &); =20 @@ -1099,12 +1182,15 @@ namespace wi /* Guaranteed by a static assert in the wide_int_storage constructor. 
= */ static const bool host_dependent_precision =3D false; static const bool is_sign_extended =3D true; + static const bool needs_write_val_arg =3D false; template static wide_int get_binary_result (const T1 &, const T2 &); + template + static unsigned int get_binary_precision (const T1 &, const T2 &); }; } =20 -inline wide_int_storage::wide_int_storage () {} +inline wide_int_storage::wide_int_storage () : precision (0) {} =20 /* Initialize the storage from integer X, in its natural precision. Note that we do not allow integers with host-dependent precision @@ -1113,21 +1199,75 @@ inline wide_int_storage::wide_int_storag template inline wide_int_storage::wide_int_storage (const T &x) { - { STATIC_ASSERT (!wi::int_traits::host_dependent_precision); } - { STATIC_ASSERT (wi::int_traits::precision_type !=3D wi::CONST_PRECIS= ION); } + STATIC_ASSERT (!wi::int_traits::host_dependent_precision); + STATIC_ASSERT (wi::int_traits::precision_type !=3D wi::CONST_PRECISIO= N); + STATIC_ASSERT (wi::int_traits::precision_type +=09=09 !=3D wi::WIDEST_CONST_PRECISION); WIDE_INT_REF_FOR (T) xi (x); precision =3D xi.precision; + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) + u.valp =3D XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE= _INT)); wi::copy (*this, xi); } =20 +inline wide_int_storage::wide_int_storage (const wide_int_storage &x) +{ + len =3D x.len; + precision =3D x.precision; + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) + { + u.valp =3D XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WI= DE_INT)); + memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT)); + } + else if (LIKELY (precision)) + memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT)); +} + +inline wide_int_storage::~wide_int_storage () +{ + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) + XDELETEVEC (u.valp); +} + +inline wide_int_storage& +wide_int_storage::operator =3D (const wide_int_storage &x) +{ + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) + { + 
if (this =3D=3D &x) +=09return *this; + XDELETEVEC (u.valp); + } + len =3D x.len; + precision =3D x.precision; + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) + { + u.valp =3D XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WI= DE_INT)); + memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT)); + } + else if (LIKELY (precision)) + memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT)); + return *this; +} + template inline wide_int_storage& wide_int_storage::operator =3D (const T &x) { - { STATIC_ASSERT (!wi::int_traits::host_dependent_precision); } - { STATIC_ASSERT (wi::int_traits::precision_type !=3D wi::CONST_PRECIS= ION); } + STATIC_ASSERT (!wi::int_traits::host_dependent_precision); + STATIC_ASSERT (wi::int_traits::precision_type !=3D wi::CONST_PRECISIO= N); + STATIC_ASSERT (wi::int_traits::precision_type +=09=09 !=3D wi::WIDEST_CONST_PRECISION); WIDE_INT_REF_FOR (T) xi (x); - precision =3D xi.precision; + if (UNLIKELY (precision !=3D xi.precision)) + { + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) +=09XDELETEVEC (u.valp); + precision =3D xi.precision; + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) +=09u.valp =3D XNEWVEC (HOST_WIDE_INT, +=09=09=09 CEIL (precision, HOST_BITS_PER_WIDE_INT)); + } wi::copy (*this, xi); return *this; } @@ -1141,7 +1281,7 @@ wide_int_storage::get_precision () const inline const HOST_WIDE_INT * wide_int_storage::get_val () const { - return val; + return UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION) ? u.valp : u.va= l; } =20 inline unsigned int @@ -1151,9 +1291,9 @@ wide_int_storage::get_len () const } =20 inline HOST_WIDE_INT * -wide_int_storage::write_val () +wide_int_storage::write_val (unsigned int) { - return val; + return UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION) ? 
u.valp : u.va= l; } =20 inline void @@ -1161,8 +1301,10 @@ wide_int_storage::set_len (unsigned int { len =3D l; if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision) - val[len - 1] =3D sext_hwi (val[len - 1], -=09=09=09 precision % HOST_BITS_PER_WIDE_INT); + { + HOST_WIDE_INT &v =3D write_val (len)[len - 1]; + v =3D sext_hwi (v, precision % HOST_BITS_PER_WIDE_INT); + } } =20 /* Treat X as having signedness SGN and convert it to a PRECISION-bit @@ -1172,7 +1314,7 @@ wide_int_storage::from (const wide_int_r =09=09=09signop sgn) { wide_int result =3D wide_int::create (precision); - result.set_len (wi::force_to_size (result.write_val (), x.val, x.len, + result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.le= n, =09=09=09=09 x.precision, precision, sgn)); return result; } @@ -1185,7 +1327,7 @@ wide_int_storage::from_array (const HOST =09=09=09 unsigned int precision, bool need_canon_p) { wide_int result =3D wide_int::create (precision); - result.set_len (wi::from_array (result.write_val (), val, len, precision= , + result.set_len (wi::from_array (result.write_val (len), val, len, precis= ion, =09=09=09=09 need_canon_p)); return result; } @@ -1196,6 +1338,9 @@ wide_int_storage::create (unsigned int p { wide_int x; x.precision =3D precision; + if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION)) + x.u.valp =3D XNEWVEC (HOST_WIDE_INT, +=09=09=09CEIL (precision, HOST_BITS_PER_WIDE_INT)); return x; } =20 @@ -1212,6 +1357,194 @@ wi::int_traits ::get_b return wide_int::create (wi::get_precision (x)); } =20 +template +inline unsigned int +wi::int_traits ::get_binary_precision (const T1 &x, +=09=09=09=09=09=09=09 const T2 &y) +{ + /* This shouldn't be used for two flexible-precision inputs. 
*/ + STATIC_ASSERT (wi::int_traits ::precision_type !=3D FLEXIBLE_PRECISI= ON +=09=09 || wi::int_traits ::precision_type !=3D FLEXIBLE_PRECISION); + if (wi::int_traits ::precision_type =3D=3D FLEXIBLE_PRECISION) + return wi::get_precision (y); + else + return wi::get_precision (x); +} + +/* The storage used by rwide_int. */ +class GTY(()) rwide_int_storage +{ +private: + HOST_WIDE_INT val[RWIDE_INT_MAX_ELTS]; + unsigned int len; + unsigned int precision; + +public: + rwide_int_storage () =3D default; + template + rwide_int_storage (const T &); + + /* The standard generic_rwide_int storage methods. */ + unsigned int get_precision () const; + const HOST_WIDE_INT *get_val () const; + unsigned int get_len () const; + HOST_WIDE_INT *write_val (unsigned int); + void set_len (unsigned int, bool =3D false); + + template + rwide_int_storage &operator =3D (const T &); + + static rwide_int from (const wide_int_ref &, unsigned int, signop); + static rwide_int from_array (const HOST_WIDE_INT *, unsigned int, +=09=09=09 unsigned int, bool =3D true); + static rwide_int create (unsigned int); +}; + +namespace wi +{ + template <> + struct int_traits + { + static const enum precision_type precision_type =3D VAR_PRECISION; + /* Guaranteed by a static assert in the rwide_int_storage constructor.= */ + static const bool host_dependent_precision =3D false; + static const bool is_sign_extended =3D true; + static const bool needs_write_val_arg =3D false; + template + static rwide_int get_binary_result (const T1 &, const T2 &); + template + static unsigned int get_binary_precision (const T1 &, const T2 &); + }; +} + +/* Initialize the storage from integer X, in its natural precision. + Note that we do not allow integers with host-dependent precision + to become rwide_ints; rwide_ints must always be logically independent + of the host. 
*/ +template +inline rwide_int_storage::rwide_int_storage (const T &x) +{ + STATIC_ASSERT (!wi::int_traits::host_dependent_precision); + STATIC_ASSERT (wi::int_traits::precision_type !=3D wi::CONST_PRECISIO= N); + STATIC_ASSERT (wi::int_traits::precision_type +=09=09 !=3D wi::WIDEST_CONST_PRECISION); + WIDE_INT_REF_FOR (T) xi (x); + precision =3D xi.precision; + gcc_assert (precision <=3D RWIDE_INT_MAX_PRECISION); + wi::copy (*this, xi); +} + +template +inline rwide_int_storage& +rwide_int_storage::operator =3D (const T &x) +{ + STATIC_ASSERT (!wi::int_traits::host_dependent_precision); + STATIC_ASSERT (wi::int_traits::precision_type !=3D wi::CONST_PRECISIO= N); + STATIC_ASSERT (wi::int_traits::precision_type +=09=09 !=3D wi::WIDEST_CONST_PRECISION); + WIDE_INT_REF_FOR (T) xi (x); + precision =3D xi.precision; + gcc_assert (precision <=3D RWIDE_INT_MAX_PRECISION); + wi::copy (*this, xi); + return *this; +} + +inline unsigned int +rwide_int_storage::get_precision () const +{ + return precision; +} + +inline const HOST_WIDE_INT * +rwide_int_storage::get_val () const +{ + return val; +} + +inline unsigned int +rwide_int_storage::get_len () const +{ + return len; +} + +inline HOST_WIDE_INT * +rwide_int_storage::write_val (unsigned int) +{ + return val; +} + +inline void +rwide_int_storage::set_len (unsigned int l, bool is_sign_extended) +{ + len =3D l; + if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision) + val[len - 1] =3D sext_hwi (val[len - 1], +=09=09=09 precision % HOST_BITS_PER_WIDE_INT); +} + +/* Treat X as having signedness SGN and convert it to a PRECISION-bit + number. 
*/ +inline rwide_int +rwide_int_storage::from (const wide_int_ref &x, unsigned int precision, +=09=09=09 signop sgn) +{ + rwide_int result =3D rwide_int::create (precision); + result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.le= n, +=09=09=09=09 x.precision, precision, sgn)); + return result; +} + +/* Create a rwide_int from the explicit block encoding given by VAL and + LEN. PRECISION is the precision of the integer. NEED_CANON_P is + true if the encoding may have redundant trailing blocks. */ +inline rwide_int +rwide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len, +=09=09=09 unsigned int precision, bool need_canon_p) +{ + rwide_int result =3D rwide_int::create (precision); + result.set_len (wi::from_array (result.write_val (len), val, len, precis= ion, +=09=09=09=09 need_canon_p)); + return result; +} + +/* Return an uninitialized rwide_int with precision PRECISION. */ +inline rwide_int +rwide_int_storage::create (unsigned int precision) +{ + rwide_int x; + gcc_assert (precision <=3D RWIDE_INT_MAX_PRECISION); + x.precision =3D precision; + return x; +} + +template +inline rwide_int +wi::int_traits ::get_binary_result (const T1 &x, +=09=09=09=09=09=09 const T2 &y) +{ + /* This shouldn't be used for two flexible-precision inputs. */ + STATIC_ASSERT (wi::int_traits ::precision_type !=3D FLEXIBLE_PRECISI= ON +=09=09 || wi::int_traits ::precision_type !=3D FLEXIBLE_PRECISION); + if (wi::int_traits ::precision_type =3D=3D FLEXIBLE_PRECISION) + return rwide_int::create (wi::get_precision (y)); + else + return rwide_int::create (wi::get_precision (x)); +} + +template +inline unsigned int +wi::int_traits ::get_binary_precision (const T1 &x, +=09=09=09=09=09=09=09 const T2 &y) +{ + /* This shouldn't be used for two flexible-precision inputs. 
*/ + STATIC_ASSERT (wi::int_traits ::precision_type !=3D FLEXIBLE_PRECISI= ON +=09=09 || wi::int_traits ::precision_type !=3D FLEXIBLE_PRECISION); + if (wi::int_traits ::precision_type =3D=3D FLEXIBLE_PRECISION) + return wi::get_precision (y); + else + return wi::get_precision (x); +} + /* The storage used by FIXED_WIDE_INT (N). */ template class GTY(()) fixed_wide_int_storage @@ -1221,7 +1554,7 @@ private: unsigned int len; =20 public: - fixed_wide_int_storage (); + fixed_wide_int_storage () =3D default; template fixed_wide_int_storage (const T &); =20 @@ -1229,7 +1562,7 @@ public: unsigned int get_precision () const; const HOST_WIDE_INT *get_val () const; unsigned int get_len () const; - HOST_WIDE_INT *write_val (); + HOST_WIDE_INT *write_val (unsigned int); void set_len (unsigned int, bool =3D false); =20 static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop); @@ -1245,15 +1578,15 @@ namespace wi static const enum precision_type precision_type =3D CONST_PRECISION; static const bool host_dependent_precision =3D false; static const bool is_sign_extended =3D true; + static const bool needs_write_val_arg =3D false; static const unsigned int precision =3D N; template static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &); + template + static unsigned int get_binary_precision (const T1 &, const T2 &); }; } =20 -template -inline fixed_wide_int_storage ::fixed_wide_int_storage () {} - /* Initialize the storage from integer X, in precision N. 
*/ template template @@ -1288,7 +1621,7 @@ fixed_wide_int_storage ::get_len () c =20 template inline HOST_WIDE_INT * -fixed_wide_int_storage ::write_val () +fixed_wide_int_storage ::write_val (unsigned int) { return val; } @@ -1308,7 +1641,7 @@ inline FIXED_WIDE_INT (N) fixed_wide_int_storage ::from (const wide_int_ref &x, signop sgn) { FIXED_WIDE_INT (N) result; - result.set_len (wi::force_to_size (result.write_val (), x.val, x.len, + result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.le= n, =09=09=09=09 x.precision, N, sgn)); return result; } @@ -1323,7 +1656,7 @@ fixed_wide_int_storage ::from_array ( =09=09=09=09=09bool need_canon_p) { FIXED_WIDE_INT (N) result; - result.set_len (wi::from_array (result.write_val (), val, len, + result.set_len (wi::from_array (result.write_val (len), val, len, =09=09=09=09 N, need_canon_p)); return result; } @@ -1337,6 +1670,255 @@ get_binary_result (const T1 &, const T2 return FIXED_WIDE_INT (N) (); } =20 +template +template +inline unsigned int +wi::int_traits < fixed_wide_int_storage >:: +get_binary_precision (const T1 &, const T2 &) +{ + return N; +} + +#define WIDEST_INT(N) generic_wide_int < widest_int_storage > + +/* The storage used by widest_int. */ +template +class GTY(()) widest_int_storage +{ +private: + union + { + HOST_WIDE_INT val[WIDE_INT_MAX_HWIS (N)]; + HOST_WIDE_INT *valp; + } GTY((skip)) u; + unsigned int len; + +public: + widest_int_storage (); + widest_int_storage (const widest_int_storage &); + template + widest_int_storage (const T &); + ~widest_int_storage (); + widest_int_storage &operator =3D (const widest_int_storage &); + template + inline widest_int_storage& operator =3D (const T &); + + /* The standard generic_wide_int storage methods. 
 */
+  unsigned int get_precision () const;
+  const HOST_WIDE_INT *get_val () const;
+  unsigned int get_len () const;
+  HOST_WIDE_INT *write_val (unsigned int);
+  void set_len (unsigned int, bool = false);
+
+  static WIDEST_INT (N) from (const wide_int_ref &, signop);
+  static WIDEST_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
+				    bool = true);
+};
+
+namespace wi
+{
+  template <int N>
+  struct int_traits < widest_int_storage <N> >
+  {
+    static const enum precision_type precision_type = WIDEST_CONST_PRECISION;
+    static const bool host_dependent_precision = false;
+    static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = true;
+    static const unsigned int precision
+      = N / WIDE_INT_MAX_INL_PRECISION * WIDEST_INT_MAX_PRECISION;
+    static const unsigned int inl_precision = N;
+    template <typename T1, typename T2>
+    static WIDEST_INT (N) get_binary_result (const T1 &, const T2 &);
+    template <typename T1, typename T2>
+    static unsigned int get_binary_precision (const T1 &, const T2 &);
+  };
+}
+
+template <int N>
+inline widest_int_storage <N>::widest_int_storage () : len (0) {}
+
+/* Initialize the storage from integer X, in precision N.  */
+template <int N>
+template <typename T>
+inline widest_int_storage <N>::widest_int_storage (const T &x) : len (0)
+{
+  /* Check for type compatibility.  We don't want to initialize a
+     widest integer from something like a wide_int.  */
+  WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
+  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N / WIDE_INT_MAX_INL_PRECISION
+					 * WIDEST_INT_MAX_PRECISION));
+}
+
+template <int N>
+inline
+widest_int_storage <N>::widest_int_storage (const widest_int_storage &x)
+{
+  len = x.len;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, len);
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+}
+
+template <int N>
+inline widest_int_storage <N>::~widest_int_storage ()
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+}
+
+template <int N>
+inline widest_int_storage <N> &
+widest_int_storage <N>::operator = (const widest_int_storage <N> &x)
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      if (this == &x)
+	return *this;
+      XDELETEVEC (u.valp);
+    }
+  len = x.len;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, len);
+      memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
+    }
+  else
+    memcpy (u.val, x.u.val, len * sizeof (HOST_WIDE_INT));
+  return *this;
+}
+
+template <int N>
+template <typename T>
+inline widest_int_storage <N> &
+widest_int_storage <N>::operator = (const T &x)
+{
+  /* Check for type compatibility.  We don't want to assign a
+     widest integer from something like a wide_int.  */
+  WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+  len = 0;
+  wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N / WIDE_INT_MAX_INL_PRECISION
+					 * WIDEST_INT_MAX_PRECISION));
+  return *this;
+}
+
+template <int N>
+inline unsigned int
+widest_int_storage <N>::get_precision () const
+{
+  return N / WIDE_INT_MAX_INL_PRECISION * WIDEST_INT_MAX_PRECISION;
+}
+
+template <int N>
+inline const HOST_WIDE_INT *
+widest_int_storage <N>::get_val () const
+{
+  return UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT) ? u.valp : u.val;
+}
+
+template <int N>
+inline unsigned int
+widest_int_storage <N>::get_len () const
+{
+  return len;
+}
+
+template <int N>
+inline HOST_WIDE_INT *
+widest_int_storage <N>::write_val (unsigned int l)
+{
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
+    XDELETEVEC (u.valp);
+  len = l;
+  if (UNLIKELY (l > N / HOST_BITS_PER_WIDE_INT))
+    {
+      u.valp = XNEWVEC (HOST_WIDE_INT, l);
+      return u.valp;
+    }
+  return u.val;
+}
+
+#if GCC_VERSION >= 4007
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wfree-nonheap-object"
+#pragma GCC diagnostic ignored "-Warray-bounds="
+#pragma GCC diagnostic ignored "-Wstringop-overread"
+#endif
+
+template <int N>
+inline void
+widest_int_storage <N>::set_len (unsigned int l, bool)
+{
+  gcc_checking_assert (l <= len);
+  if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT)
+      && l <= N / HOST_BITS_PER_WIDE_INT)
+    {
+      HOST_WIDE_INT *valp = u.valp;
+      memcpy (u.val, valp, l * sizeof (u.val[0]));
+      XDELETEVEC (valp);
+    }
+  len = l;
+  /* There are no excess bits in val[len - 1].  */
+  STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
+}
+
+#if GCC_VERSION >= 4007
+#pragma GCC diagnostic pop
+#endif
+
+/* Treat X as having signedness SGN and convert it to an N-bit number.  */
+template <int N>
+inline WIDEST_INT (N)
+widest_int_storage <N>::from (const wide_int_ref &x, signop sgn)
+{
+  WIDEST_INT (N) result;
+  unsigned int exp_len = x.len;
+  unsigned int prec = result.get_precision ();
+  if (sgn == UNSIGNED && prec > x.precision && x.val[x.len - 1] < 0)
+    exp_len = CEIL (x.precision, HOST_BITS_PER_WIDE_INT) + 1;
+  result.set_len (wi::force_to_size (result.write_val (exp_len), x.val, x.len,
+				     x.precision, prec, sgn));
+  return result;
+}
+
+/* Create a WIDEST_INT (N) from the explicit block encoding given by
+   VAL and LEN.  NEED_CANON_P is true if the encoding may have redundant
+   trailing blocks.  */
+template <int N>
+inline WIDEST_INT (N)
+widest_int_storage <N>::from_array (const HOST_WIDE_INT *val,
+				    unsigned int len,
+				    bool need_canon_p)
+{
+  WIDEST_INT (N) result;
+  result.set_len (wi::from_array (result.write_val (len), val, len,
+				  result.get_precision (), need_canon_p));
+  return result;
+}
+
+template <int N>
+template <typename T1, typename T2>
+inline WIDEST_INT (N)
+wi::int_traits < widest_int_storage <N> >::
+get_binary_result (const T1 &, const T2 &)
+{
+  return WIDEST_INT (N) ();
+}
+
+template <int N>
+template <typename T1, typename T2>
+inline unsigned int
+wi::int_traits < widest_int_storage <N> >::
+get_binary_precision (const T1 &, const T2 &)
+{
+  return N / WIDE_INT_MAX_INL_PRECISION * WIDEST_INT_MAX_PRECISION;
+}
+
 /* A reference to one element of a trailing_wide_ints structure.  */
 class trailing_wide_int_storage
 {
@@ -1359,7 +1941,7 @@ public:
   unsigned int get_len () const;
   unsigned int get_precision () const;
   const HOST_WIDE_INT *get_val () const;
-  HOST_WIDE_INT *write_val ();
+  HOST_WIDE_INT *write_val (unsigned int);
   void set_len (unsigned int, bool = false);
 
   template <typename T>
@@ -1445,7 +2027,7 @@ trailing_wide_int_storage::get_val () co
 }
 
 inline HOST_WIDE_INT *
-trailing_wide_int_storage::write_val ()
+trailing_wide_int_storage::write_val (unsigned int)
 {
   return m_val;
 }
@@ -1528,6 +2110,7 @@ namespace wi
     static const enum precision_type precision_type = FLEXIBLE_PRECISION;
     static const bool host_dependent_precision = true;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
     static unsigned int get_precision (T);
     static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
   };
@@ -1699,6 +2282,7 @@ namespace wi
       precision of HOST_WIDE_INT.  */
     static const bool host_dependent_precision = false;
     static const bool is_sign_extended = true;
+    static const bool needs_write_val_arg = false;
    static unsigned int get_precision (const wi::hwi_with_prec &);
    static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
				      const wi::hwi_with_prec &);
@@ -1804,8 +2388,8 @@ template <typename T1, typename T2>
 inline unsigned int
 wi::get_binary_precision (const T1 &x, const T2 &y)
 {
-  return get_precision (wi::int_traits <WI_BINARY_RESULT (T1, T2)>::
-			get_binary_result (x, y));
+  return wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_precision (x,
									    y);
 }
 
 /* Copy the contents of Y to X, but keeping X's current precision.  */
@@ -1813,9 +2397,9 @@ template <typename T1, typename T2>
 inline void
 wi::copy (T1 &x, const T2 &y)
 {
-  HOST_WIDE_INT *xval = x.write_val ();
-  const HOST_WIDE_INT *yval = y.get_val ();
   unsigned int len = y.get_len ();
+  HOST_WIDE_INT *xval = x.write_val (len);
+  const HOST_WIDE_INT *yval = y.get_val ();
   unsigned int i = 0;
   do
     xval[i] = yval[i];
@@ -2162,6 +2746,8 @@ wi::bit_not (const T &x)
 {
   WI_UNARY_RESULT_VAR (result, val, T, x);
   WIDE_INT_REF_FOR (T) xi (x, get_precision (result));
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len);
   for (unsigned int i = 0; i < xi.len; ++i)
     val[i] = ~xi.val[i];
   result.set_len (xi.len);
@@ -2203,6 +2789,9 @@ wi::sext (const T &x, unsigned int offse
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
 
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len,
+				 CEIL (offset, HOST_BITS_PER_WIDE_INT)));
   if (offset <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = sext_hwi (xi.ulow (), offset);
@@ -2230,6 +2819,9 @@ wi::zext (const T &x, unsigned int offse
       return result;
     }
 
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len,
+				 offset / HOST_BITS_PER_WIDE_INT + 1));
   /* In these cases we know that at least the top bit will be clear,
      so no sign extension is necessary.  */
   if (offset < HOST_BITS_PER_WIDE_INT)
@@ -2259,6 +2851,9 @@ wi::set_bit (const T &x, unsigned int bi
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len,
+				 bit / HOST_BITS_PER_WIDE_INT + 1));
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () | (HOST_WIDE_INT_1U << bit);
@@ -2280,6 +2875,8 @@ wi::bswap (const T &x)
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* bswap on widest_int makes no sense.  */
   result.set_len (bswap_large (val, xi.val, xi.len, precision));
   return result;
 }
@@ -2292,6 +2889,8 @@ wi::bitreverse (const T &x)
   WI_UNARY_RESULT_VAR (result, val, T, x);
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T) xi (x, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* bitreverse on widest_int makes no sense.  */
   result.set_len (bitreverse_large (val, xi.val, xi.len, precision));
   return result;
 }
@@ -2368,6 +2967,8 @@ wi::bit_and (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () & yi.ulow ();
@@ -2389,6 +2990,8 @@ wi::bit_and_not (const T1 &x, const T2 &
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () & ~yi.ulow ();
@@ -2410,6 +3013,8 @@ wi::bit_or (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () | yi.ulow ();
@@ -2431,6 +3036,8 @@ wi::bit_or_not (const T1 &x, const T2 &y
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () | ~yi.ulow ();
@@ -2452,6 +3059,8 @@ wi::bit_xor (const T1 &x, const T2 &y)
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
   bool is_sign_extended = xi.is_sign_extended && yi.is_sign_extended;
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len));
   if (LIKELY (xi.len + yi.len == 2))
     {
       val[0] = xi.ulow () ^ yi.ulow ();
@@ -2472,6 +3081,8 @@ wi::add (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () + yi.ulow ();
@@ -2515,6 +3126,8 @@ wi::add (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       unsigned HOST_WIDE_INT xl = xi.ulow ();
@@ -2558,6 +3171,8 @@ wi::sub (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () - yi.ulow ();
@@ -2601,6 +3216,8 @@ wi::sub (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (MAX (xi.len, yi.len) + 1);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       unsigned HOST_WIDE_INT xl = xi.ulow ();
@@ -2643,6 +3260,8 @@ wi::mul (const T1 &x, const T2 &y)
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len + yi.len + 2);
   if (precision <= HOST_BITS_PER_WIDE_INT)
     {
       val[0] = xi.ulow () * yi.ulow ();
@@ -2664,6 +3283,8 @@ wi::mul (const T1 &x, const T2 &y, signo
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len + yi.len + 2);
   result.set_len (mul_internal (val, xi.val, xi.len,
				yi.val, yi.len, precision,
				sgn, overflow, false));
@@ -2698,6 +3319,8 @@ wi::mul_high (const T1 &x, const T2 &y,
   unsigned int precision = get_precision (result);
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y, precision);
+  if (result.needs_write_val_arg)
+    gcc_unreachable (); /* mul_high on widest_int doesn't make sense.  */
   result.set_len (mul_internal (val, xi.val, xi.len,
				yi.val, yi.len, precision,
				sgn, 0, true));
@@ -2716,6 +3339,12 @@ wi::div_trunc (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T1) xi (x, precision);
   WIDE_INT_REF_FOR (T2) yi (y);
 
+  if (quotient.needs_write_val_arg)
+    quotient_val = quotient.write_val ((sgn == UNSIGNED
+					&& xi.val[xi.len - 1] < 0)
+				       ? CEIL (precision,
+					       HOST_BITS_PER_WIDE_INT) + 1
+				       : xi.len + 1);
   quotient.set_len (divmod_internal (quotient_val, 0, 0, xi.val, xi.len,
				      precision,
				      yi.val, yi.len, yi.precision,
@@ -2753,6 +3382,15 @@ wi::div_floor (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
				      &remainder_len, remainder_val,
				      xi.val, xi.len, precision,
@@ -2795,6 +3433,15 @@ wi::div_ceil (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
				      &remainder_len, remainder_val,
				      xi.val, xi.len, precision,
@@ -2828,6 +3475,15 @@ wi::div_round (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
				      &remainder_len, remainder_val,
				      xi.val, xi.len, precision,
@@ -2871,6 +3527,15 @@ wi::divmod_trunc (const T1 &x, const T2
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
				      &remainder_len, remainder_val,
				      xi.val, xi.len, precision,
@@ -2915,6 +3580,8 @@ wi::mod_trunc (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (remainder.needs_write_val_arg)
+    remainder_val = remainder.write_val (yi.len);
   divmod_internal (0, &remainder_len, remainder_val,
		   xi.val, xi.len, precision,
		   yi.val, yi.len, yi.precision, sgn, overflow);
@@ -2955,6 +3622,15 @@ wi::mod_floor (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
				      &remainder_len, remainder_val,
				      xi.val, xi.len, precision,
@@ -2991,6 +3667,15 @@ wi::mod_ceil (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
				      &remainder_len, remainder_val,
				      xi.val, xi.len, precision,
@@ -3017,6 +3702,15 @@ wi::mod_round (const T1 &x, const T2 &y,
   WIDE_INT_REF_FOR (T2) yi (y);
 
   unsigned int remainder_len;
+  if (quotient.needs_write_val_arg)
+    {
+      quotient_val = quotient.write_val ((sgn == UNSIGNED
+					  && xi.val[xi.len - 1] < 0)
+					 ? CEIL (precision,
+						 HOST_BITS_PER_WIDE_INT) + 1
+					 : xi.len + 1);
+      remainder_val = remainder.write_val (yi.len);
+    }
   quotient.set_len (divmod_internal (quotient_val,
				      &remainder_len, remainder_val,
				      xi.val, xi.len, precision,
@@ -3086,12 +3780,16 @@ wi::lshift (const T1 &x, const T2 &y)
   /* Handle the simple cases quickly.  */
   if (geu_p (yi, precision))
     {
+      if (result.needs_write_val_arg)
+	val = result.write_val (1);
       val[0] = 0;
       result.set_len (1);
     }
   else
     {
       unsigned int shift = yi.to_uhwi ();
+      if (result.needs_write_val_arg)
+	val = result.write_val (xi.len + shift / HOST_BITS_PER_WIDE_INT + 1);
       /* For fixed-precision integers like offset_int and widest_int,
	 handle the case where the shift value is constant and the
	 result is a single nonnegative HWI (meaning that we don't
@@ -3130,12 +3828,23 @@ wi::lrshift (const T1 &x, const T2 &y)
   /* Handle the simple cases quickly.  */
   if (geu_p (yi, xi.precision))
     {
+      if (result.needs_write_val_arg)
+	val = result.write_val (1);
       val[0] = 0;
       result.set_len (1);
     }
   else
     {
       unsigned int shift = yi.to_uhwi ();
+      if (result.needs_write_val_arg)
+	{
+	  unsigned int est_len = xi.len;
+	  if (xi.val[xi.len - 1] < 0 && shift)
+	    /* Logical right shift of sign-extended value might need a very
+	       large precision e.g. for widest_int.  */
+	    est_len = CEIL (xi.precision - shift, HOST_BITS_PER_WIDE_INT) + 1;
+	  val = result.write_val (est_len);
+	}
       /* For fixed-precision integers like offset_int and widest_int,
	 handle the case where the shift value is constant and the
	 shifted value is a single nonnegative HWI (meaning that all
@@ -3171,6 +3880,8 @@ wi::arshift (const T1 &x, const T2 &y)
      since the result can be no larger than that.  */
   WIDE_INT_REF_FOR (T1) xi (x);
   WIDE_INT_REF_FOR (T2) yi (y);
+  if (result.needs_write_val_arg)
+    val = result.write_val (xi.len);
   /* Handle the simple cases quickly.  */
   if (geu_p (yi, xi.precision))
     {
@@ -3374,25 +4085,56 @@ operator % (const T1 &x, const T2 &y)
   return wi::smod_trunc (x, y);
 }
 
-template <typename T>
+void gt_ggc_mx (generic_wide_int <wide_int_storage> *) = delete;
+void gt_pch_nx (generic_wide_int <wide_int_storage> *) = delete;
+void gt_pch_nx (generic_wide_int <wide_int_storage> *,
+		gt_pointer_operator, void *) = delete;
+
+inline void
+gt_ggc_mx (generic_wide_int <rwide_int_storage> *)
+{
+}
+
+inline void
+gt_pch_nx (generic_wide_int <rwide_int_storage> *)
+{
+}
+
+inline void
+gt_pch_nx (generic_wide_int <rwide_int_storage> *, gt_pointer_operator, void *)
+{
+}
+
+template <int N>
 void
-gt_ggc_mx (generic_wide_int <T> *)
+gt_ggc_mx (generic_wide_int <fixed_wide_int_storage <N> > *)
 {
 }
 
-template <typename T>
+template <int N>
 void
-gt_pch_nx (generic_wide_int <T> *)
+gt_pch_nx (generic_wide_int <fixed_wide_int_storage <N> > *)
 {
 }
 
-template <typename T>
+template <int N>
 void
-gt_pch_nx (generic_wide_int <T> *, gt_pointer_operator, void *)
+gt_pch_nx (generic_wide_int <fixed_wide_int_storage <N> > *,
+	   gt_pointer_operator, void *)
 {
 }
 
 template <int N>
+void gt_ggc_mx (generic_wide_int <widest_int_storage <N> > *) = delete;
+
+template <int N>
+void gt_pch_nx (generic_wide_int <widest_int_storage <N> > *) = delete;
+
+template <int N>
+void gt_pch_nx (generic_wide_int <widest_int_storage <N> > *,
+		gt_pointer_operator, void *) = delete;
+
+template <int N>
 void
 gt_ggc_mx (trailing_wide_ints <N> *)
 {
@@ -3465,7 +4207,7 @@ inline wide_int
 wi::mask (unsigned int width, bool negate_p, unsigned int precision)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (mask (result.write_val (), width, negate_p, precision));
+  result.set_len (mask (result.write_val (0), width, negate_p, precision));
   return result;
 }
 
@@ -3477,7 +4219,7 @@ wi::shifted_mask (unsigned int start, un
		  unsigned int precision)
 {
   wide_int result = wide_int::create (precision);
-  result.set_len (shifted_mask (result.write_val (), start, width, negate_p,
+  result.set_len (shifted_mask (result.write_val (0), start, width, negate_p,
				precision));
   return result;
 }
@@ -3498,8 +4240,8 @@ wi::mask (unsigned int width, bool negat
 {
   STATIC_ASSERT (wi::int_traits<T>::precision);
   T result;
-  result.set_len (mask (result.write_val (), width, negate_p,
-			wi::int_traits <T>::precision));
+  result.set_len (mask (result.write_val (width / HOST_BITS_PER_WIDE_INT + 1),
+			width, negate_p, wi::int_traits <T>::precision));
   return result;
 }
 
@@ -3512,9 +4254,13 @@ wi::shifted_mask (unsigned int start, un
 {
   STATIC_ASSERT (wi::int_traits<T>::precision);
   T result;
-  result.set_len (shifted_mask (result.write_val (), start, width,
-				negate_p,
-				wi::int_traits <T>::precision));
+  unsigned int prec = wi::int_traits <T>::precision;
+  unsigned int est_len
+    = result.needs_write_val_arg
+      ? ((start + (width > prec - start ? prec - start : width))
+	 / HOST_BITS_PER_WIDE_INT + 1) : 0;
+  result.set_len (shifted_mask (result.write_val (est_len), start, width,
+				negate_p, prec));
   return result;
 }
 
--- gcc/godump.cc.jj	2023-10-04 16:28:04.148784815 +0200
+++ gcc/godump.cc	2023-10-05 11:36:55.219243548 +0200
@@ -1154,7 +1154,11 @@ go_output_typedef (class godump_containe
	    snprintf (buf, sizeof buf, HOST_WIDE_INT_PRINT_UNSIGNED,
		      tree_to_uhwi (value));
	  else
-	    print_hex (wi::to_wide (element), buf);
+	    {
+	      wide_int w = wi::to_wide (element);
+	      gcc_assert (w.get_len () <= WIDE_INT_MAX_INL_ELTS);
+	      print_hex (w, buf);
+	    }
 
	  mhval->value = xstrdup (buf);
	  *slot = mhval;
--- gcc/tree-ssa-loop-ivcanon.cc.jj	2023-10-04 16:28:04.310782607 +0200
+++ gcc/tree-ssa-loop-ivcanon.cc	2023-10-05 11:36:55.219243548 +0200
@@ -622,10 +622,11 @@ remove_redundant_iv_tests (class loop *l
	  || !integer_zerop (niter.may_be_zero)
	  || !niter.niter
	  || TREE_CODE (niter.niter) != INTEGER_CST
-	  || !wi::ltu_p (loop->nb_iterations_upper_bound,
+	  || !wi::ltu_p (widest_int::from (loop->nb_iterations_upper_bound,
+					   SIGNED),
			 wi::to_widest (niter.niter)))
	    continue;
-	  
+
	  if (dump_file && (dump_flags & TDF_DETAILS))
	    {
	      fprintf (dump_file, "Removed pointless exit: ");
--- gcc/value-range-pretty-print.cc.jj	2023-10-04 16:28:04.415781176 +0200
+++ gcc/value-range-pretty-print.cc	2023-10-05 11:36:55.142244603 +0200
@@ -99,12 +99,19 @@ vrange_printer::print_irange_bitmasks (c
     return;
 
   pp_string (pp, " MASK ");
-  char buf[WIDE_INT_PRINT_BUFFER_SIZE];
-  print_hex (bm.mask (), buf);
-  pp_string (pp, buf);
+  char buf[WIDE_INT_PRINT_BUFFER_SIZE], *p;
+  unsigned len_mask = bm.mask ().get_len ();
+  unsigned len_val = bm.value ().get_len ();
+  unsigned len = MAX (len_mask, len_val);
+  if (len > WIDE_INT_MAX_INL_ELTS)
+    p = XALLOCAVEC (char, len * HOST_BITS_PER_WIDE_INT / 4 + 4);
+  else
+    p = buf;
+  print_hex (bm.mask (), p);
+  pp_string (pp, p);
   pp_string (pp, " VALUE ");
-  print_hex (bm.value (), buf);
-  pp_string (pp, buf);
+  print_hex (bm.value (), p);
+  pp_string (pp, p);
 }
 
 void
--- gcc/print-tree.cc.jj	2023-10-04 16:28:04.257783330 +0200
+++ gcc/print-tree.cc	2023-10-05 11:36:54.630251622 +0200
@@ -365,13 +365,13 @@ print_node (FILE *file, const char *pref
     fputs (code == CALL_EXPR ? " must-tail-call" : " static", file);
   if (TREE_DEPRECATED (node))
     fputs (" deprecated", file);
-  if (TREE_UNAVAILABLE (node))
-    fputs (" unavailable", file);
   if (TREE_VISITED (node))
     fputs (" visited", file);
 
   if (code != TREE_VEC && code != INTEGER_CST && code != SSA_NAME)
     {
+      if (TREE_UNAVAILABLE (node))
+	fputs (" unavailable", file);
       if (TREE_LANG_FLAG_0 (node))
	fputs (" tree_0", file);
       if (TREE_LANG_FLAG_1 (node))
--- gcc/wide-int-print.h.jj	2023-10-04 16:28:04.448780726 +0200
+++ gcc/wide-int-print.h	2023-10-05 11:36:54.630251622 +0200
@@ -22,7 +22,7 @@ along with GCC; see the file COPYING3.
 
 #include <stdio.h>
 
-#define WIDE_INT_PRINT_BUFFER_SIZE (WIDE_INT_MAX_PRECISION / 4 + 4)
+#define WIDE_INT_PRINT_BUFFER_SIZE (WIDE_INT_MAX_INL_PRECISION / 4 + 4)
 
 /* Printing functions.  */
 
--- gcc/dwarf2out.h.jj	2023-10-04 16:28:04.095785537 +0200
+++ gcc/dwarf2out.h	2023-10-05 11:36:54.666251128 +0200
@@ -30,7 +30,7 @@ typedef struct dw_cfi_node *dw_cfi_ref;
 typedef struct dw_loc_descr_node *dw_loc_descr_ref;
 typedef struct dw_loc_list_struct *dw_loc_list_ref;
 typedef struct dw_discr_list_node *dw_discr_list_ref;
-typedef wide_int *wide_int_ptr;
+typedef rwide_int *rwide_int_ptr;
 
 
 /* Call frames are described using a sequence of Call Frame
@@ -252,7 +252,7 @@ struct GTY(()) dw_val_node {
       unsigned HOST_WIDE_INT
	GTY ((tag ("dw_val_class_unsigned_const"))) val_unsigned;
       double_int GTY ((tag ("dw_val_class_const_double"))) val_double;
-      wide_int_ptr GTY ((tag ("dw_val_class_wide_int"))) val_wide;
+      rwide_int_ptr GTY ((tag ("dw_val_class_wide_int"))) val_wide;
       dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec;
       struct dw_val_die_union
	{
--- gcc/data-streamer-in.cc.jj	2023-10-04 16:28:04.025786491 +0200
+++ gcc/data-streamer-in.cc	2023-10-05 11:36:54.843248702 +0200
@@ -277,10 +277,12 @@ streamer_read_value_range (class lto_inp
 wide_int
 streamer_read_wide_int (class lto_input_block *ib)
 {
-  HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf;
   int i;
   int prec = streamer_read_uhwi (ib);
   int len = streamer_read_uhwi (ib);
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    a = XALLOCAVEC (HOST_WIDE_INT, len);
   for (i = 0; i < len; i++)
     a[i] = streamer_read_hwi (ib);
   return wide_int::from_array (a, len, prec);
@@ -292,10 +294,12 @@ streamer_read_wide_int (class lto_input_
 widest_int
 streamer_read_widest_int (class lto_input_block *ib)
 {
-  HOST_WIDE_INT a[WIDE_INT_MAX_ELTS];
+  HOST_WIDE_INT abuf[WIDE_INT_MAX_INL_ELTS], *a = abuf;
   int i;
   int prec ATTRIBUTE_UNUSED = streamer_read_uhwi (ib);
   int len = streamer_read_uhwi (ib);
+  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
+    a = XALLOCAVEC (HOST_WIDE_INT, len);
   for (i = 0; i < len; i++)
     a[i] = streamer_read_hwi (ib);
   return widest_int::from_array (a, len);

	Jakub

--gUXedCAPDhdsc8Li
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=Q012ra

--- gcc/tree-ssa-ccp.cc.jj	2023-08-24 15:37:29.264410998 +0200
+++ gcc/tree-ssa-ccp.cc	2023-10-06 17:20:49.504965969 +0200
@@ -1966,7 +1966,8 @@ bit_value_binop (enum tree_code code, si
	    }
	  else
	    {
-	      widest_int upper = wi::udiv_trunc (r1max, r2min);
+	      widest_int upper
+		= wi::udiv_trunc (wi::zext (r1max, width), r2min);
	      unsigned int lzcount = wi::clz (upper);
	      unsigned int bits = wi::get_precision (upper) - lzcount;
	      *mask = wi::mask (bits, false);
--- gcc/wide-int.cc.jj	2023-10-06 12:31:56.841517949 +0200
+++ gcc/wide-int.cc	2023-10-06 17:21:59.930022075 +0200
@@ -2406,6 +2406,17 @@ debug (const widest_int *ptr)
   fprintf (stderr, "\n");
 }
 
+bool wide_int_bitint_seen = false;
+
+void
+wide_int_log (const char *p, int n)
+{
+  extern const char *current_function_name (void);
+  FILE *f = fopen ("/tmp/wis", "a");
+  fprintf (f, "%d %s %s %s %d %c\n", (int) BITS_PER_WORD, main_input_filename ? main_input_filename : "-", current_function_name (), p, n, wide_int_bitint_seen ? 'y' : 'n');
+  fclose (f);
+}
+
 #if CHECKING_P
 
 namespace selftest {
--- gcc/gimple-ssa-sprintf.cc.jj	2023-01-02 09:32:20.797308227 +0100
+++ gcc/gimple-ssa-sprintf.cc	2023-10-06 17:08:45.516732616 +0200
@@ -1181,8 +1181,15 @@ adjust_range_for_overflow (tree dirtype,
						     *argmin),
					 size_int (dirprec)))))
     {
-      *argmin = force_fit_type (dirtype, wi::to_widest (*argmin), 0, false);
-      *argmax = force_fit_type (dirtype, wi::to_widest (*argmax), 0, false);
+      unsigned int maxprec = MAX (argprec, dirprec);
+      *argmin = force_fit_type (dirtype,
+				wide_int::from (wi::to_wide (*argmin), maxprec,
+						TYPE_SIGN (argtype)),
+				0, false);
+      *argmax = force_fit_type (dirtype,
+				wide_int::from (wi::to_wide (*argmax), maxprec,
+						TYPE_SIGN (argtype)),
+				0, false);
 
       /* If *ARGMIN is still less than *ARGMAX the conversion above
	 is safe.  Otherwise, it has overflowed and would be unsafe.  */
--- gcc/match.pd.jj	2023-10-04 10:26:45.861259889 +0200
+++ gcc/match.pd	2023-10-06 17:09:34.435070589 +0200
@@ -6431,8 +6431,12 @@ (define_operator_list SYNC_FETCH_AND_AND
	 code and here to avoid a spurious overflow flag on the resulting
	 constant which fold_convert produces.  */
       (if (TREE_CODE (@1) == INTEGER_CST)
-	(cmp @00 { force_fit_type (TREE_TYPE (@00), wi::to_widest (@1), 0,
-				   TREE_OVERFLOW (@1)); })
+	(cmp @00 { force_fit_type (TREE_TYPE (@00),
+				   wide_int::from (wi::to_wide (@1),
+						   MAX (TYPE_PRECISION (TREE_TYPE (@1)),
+							TYPE_PRECISION (TREE_TYPE (@00))),
+						   TYPE_SIGN (TREE_TYPE (@1))),
+				   0, TREE_OVERFLOW (@1)); })
	(cmp @00 (convert @1))))
 
     (if (TYPE_PRECISION (TREE_TYPE (@0)) > TYPE_PRECISION (TREE_TYPE (@00)))
--- gcc/tree.cc.jj	2023-10-05 11:36:54.618251787 +0200
+++ gcc/tree.cc	2023-10-06 17:23:07.321118844 +0200
@@ -7178,6 +7178,8 @@ tree
 build_bitint_type (unsigned HOST_WIDE_INT precision, int unsignedp)
 {
   tree itype, ret;
+extern bool wide_int_bitint_seen;
+if (precision > 128) wide_int_bitint_seen = true;
 
   gcc_checking_assert (precision >= 1 + !unsignedp);
--- gcc/wide-int.h.jj	2023-10-06 13:12:05.720338130 +0200
+++ gcc/wide-int.h	2023-10-06 17:42:59.980139497 +0200
@@ -1206,7 +1206,11 @@ inline wide_int_storage::wide_int_storag
   WIDE_INT_REF_FOR (T) xi (x);
   precision = xi.precision;
   if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+{
+extern void wide_int_log (const char *, int);
+wide_int_log ("ctor", precision);
     u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
+}
   wi::copy (*this, xi);
 }
@@ -1216,6 +1220,8 @@ inline wide_int_storage::wide_int_storag
   precision = x.precision;
   if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
     {
+extern void wide_int_log (const char *, int);
+wide_int_log ("copy ctor", precision);
       u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
       memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
     }
@@ -1242,6 +1248,8 @@ wide_int_storage::operator = (const wide
   precision = x.precision;
   if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
     {
+extern void wide_int_log (const char *, int);
+wide_int_log ("operator=1", precision);
       u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
       memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
     }
@@ -1265,8 +1273,12 @@ wide_int_storage::operator = (const T &x
     XDELETEVEC (u.valp);
   precision = xi.precision;
   if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+{
+extern void wide_int_log (const char *, int);
+wide_int_log ("operator=2", precision);
     u.valp = XNEWVEC (HOST_WIDE_INT,
		      CEIL (precision, HOST_BITS_PER_WIDE_INT));
+}
   wi::copy (*this, xi);
   return *this;
@@ -1339,8 +1351,12 @@ wide_int_storage::create (unsigned int p
   wide_int x;
   x.precision = precision;
   if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
+{
+extern void wide_int_log (const char *, int);
+wide_int_log ("create", precision);
     x.u.valp = XNEWVEC (HOST_WIDE_INT,
			CEIL (precision, HOST_BITS_PER_WIDE_INT));
+}
   return x;
 }
@@ -1756,6 +1772,8 @@ widest_int_storage <N>::widest_int_stora
   len = x.len;
   if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
     {
+extern void wide_int_log (const char *, int);
+wide_int_log ("wi copy ctor", len);
       u.valp = XNEWVEC (HOST_WIDE_INT, len);
       memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
     }
@@ -1783,6 +1801,8 @@ widest_int_storage <N>::operator = (cons
   len = x.len;
   if (UNLIKELY (len > N / HOST_BITS_PER_WIDE_INT))
     {
+extern void wide_int_log (const char *, int);
+wide_int_log ("wi operator=1", len);
       u.valp = XNEWVEC (HOST_WIDE_INT, len);
       memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
     }
@@ -1837,6 +1857,8 @@ widest_int_storage <N>::write_val (unsig
   len = l;
   if (UNLIKELY (l > N / HOST_BITS_PER_WIDE_INT))
     {
+extern void wide_int_log (const char *, int);
+wide_int_log ("wi write_val", l);
       u.valp = XNEWVEC (HOST_WIDE_INT, l);
       return u.valp;
     }
--- gcc/fold-const.cc.jj	2023-09-29 18:58:47.252895500 +0200
+++ gcc/fold-const.cc	2023-10-06 17:03:24.561076214 +0200
@@ -2137,7 +2137,10 @@ fold_convert_const_int_from_int (tree ty
   /* Given an integer constant, make new constant with new type,
      appropriately sign-extended or truncated.  Use widest_int
      so that any extension is done according ARG1's type.  */
-  return force_fit_type (type, wi::to_widest (arg1),
+  tree arg1_type = TREE_TYPE (arg1);
+  unsigned prec = MAX (TYPE_PRECISION (arg1_type), TYPE_PRECISION (type));
+  return force_fit_type (type, wide_int::from (wi::to_wide (arg1), prec,
+					       TYPE_SIGN (arg1_type)),
			 !POINTER_TYPE_P (TREE_TYPE (arg1)),
			 TREE_OVERFLOW (arg1));
 }
@@ -9565,8 +9568,13 @@ fold_unary_loc (location_t loc, enum tre
	}
      if (change)
	{
-	  tem = force_fit_type (type, wi::to_widest (and1), 0,
-				TREE_OVERFLOW (and1));
+	  tree and1_type = TREE_TYPE (and1);
+	  unsigned prec = MAX (TYPE_PRECISION (and1_type),
+			       TYPE_PRECISION (type));
+	  tem = force_fit_type (type,
+				wide_int::from (wi::to_wide (and1), prec,
+						TYPE_SIGN (and1_type)),
+				0, TREE_OVERFLOW (and1));
	  return fold_build2_loc (loc, BIT_AND_EXPR, type,
				  fold_convert_loc (loc, type, and0), tem);
	}

--gUXedCAPDhdsc8Li
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=Q012rb

--- gcc/wide-int.h.jj	2023-10-06 15:13:31.117547151 +0200
+++ gcc/wide-int.h	2023-10-06 18:31:35.031659272 +0200
@@ -1843,13 +1843,6 @@ widest_int_storage <N>::write_val (unsig
   return u.val;
 }
 
-#if GCC_VERSION >= 4007
-#pragma GCC diagnostic push
-#pragma GCC diagnostic ignored "-Wfree-nonheap-object"
-#pragma GCC diagnostic ignored "-Warray-bounds="
-#pragma GCC diagnostic ignored "-Wstringop-overread"
-#endif
-
 template <int N>
 inline void
 widest_int_storage <N>::set_len (unsigned int l, bool)
@@ -1867,10 +1860,6 @@ widest_int_storage <N>::set_len (unsigne
   STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
 }
 
-#if GCC_VERSION >= 4007
-#pragma GCC diagnostic pop
-#endif
-
 /* Treat X as having signedness SGN and convert it to an N-bit number.  */
 template <int N>
 inline WIDEST_INT (N)
@@ -2404,7 +2393,10 @@ wi::copy (T1 &x, const T2 &y)
   do
     xval[i] = yval[i];
   while (++i < len);
-  x.set_len (len, y.is_sign_extended);
+  /* For widest_int write_val is called with an exact value, not
+     upper bound for len, so nothing is needed further.  */
+  if (!wi::int_traits <T1>::needs_write_val_arg)
+    x.set_len (len, y.is_sign_extended);
 }
 
 /* Return true if X fits in a HOST_WIDE_INT with no loss of precision.  */

--gUXedCAPDhdsc8Li--
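The core trick in the patch's widest_int_storage is a small-buffer optimization: a union holds the HWI blocks inline while the length fits in N bits, and spills to a heap buffer otherwise, with write_val allocating for a caller-supplied upper bound and set_len shrinking back inline when possible. The following standalone sketch (not GCC code; the name `small_int_storage` and the use of int64_t/new in place of HOST_WIDE_INT/XNEWVEC are illustrative assumptions) shows the same ownership rules in isolation:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical stand-in for the patch's widest_int_storage <N>:
// up to N_ELTS 64-bit blocks live inline in u.val; longer values
// spill to a heap buffer reached through u.valp.
template <int N_ELTS>
struct small_int_storage
{
  unsigned int len = 0;
  union { int64_t val[N_ELTS]; int64_t *valp; } u;

  // Like the new write_val (unsigned int): the caller passes an upper
  // bound on the length it will write, and storage is chosen up front.
  int64_t *write_val (unsigned int l)
  {
    if (len > N_ELTS)		// previous value was heap-allocated
      delete[] u.valp;
    len = l;
    if (l > N_ELTS)		// new value will not fit inline
      return u.valp = new int64_t[l];
    return u.val;
  }

  // Like set_len: if the final length shrank back under the inline
  // limit, move the blocks into the union and free the heap buffer.
  void set_len (unsigned int l)
  {
    if (len > N_ELTS && l <= N_ELTS)
      {
	int64_t *valp = u.valp;
	std::memcpy (u.val, valp, l * sizeof (int64_t));
	delete[] valp;
      }
    len = l;
  }

  const int64_t *get_val () const
  {
    return len > N_ELTS ? u.valp : u.val;
  }

  ~small_int_storage ()
  {
    if (len > N_ELTS)
      delete[] u.valp;
  }
};
```

The point the statistics patch is probing follows directly from this shape: no heap traffic ever happens unless some operation's length estimate exceeds the inline capacity, so keeping the estimates tight (xi.len + 1 rather than the full precision, say) keeps the common non-_BitInt case allocation-free.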