public inbox for gcc-bugs@sourceware.org
From: "cvs-commit at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/98474] [8/9/10/11 Regression] incorrect results using __uint128_t
Date: Thu, 31 Dec 2020 10:07:39 +0000
Message-ID: <bug-98474-4-WVrO6CfWLk@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-98474-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98474

--- Comment #5 from CVS Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Jakub Jelinek <jakub@gcc.gnu.org>:

https://gcc.gnu.org/g:9e603837f7ad886df62e02ac0cd395ec17b7d587

commit r11-6376-g9e603837f7ad886df62e02ac0cd395ec17b7d587
Author: Jakub Jelinek <jakub@redhat.com>
Date:   Thu Dec 31 11:06:56 2020 +0100

    wide-int: Fix wi::to_mpz [PR98474]

    The following testcase is miscompiled because niter analysis miscomputes
    the number of iterations as 0.  The problem is that niter analysis uses
    mpz_t (wonder why, wouldn't widest_int do the same job?), and when
    wi::to_mpz is called e.g. on the TYPE_MAX_VALUE of __uint128_t, it
    initializes the mpz_t result with the wrong value.
    wi::to_mpz has code to handle negative wide_ints in signed types by
    inverting all bits, importing that into the mpz, and complementing the
    result, which is fine, but it does not correctly handle the case where
    the wide_int's len (times HOST_BITS_PER_WIDE_INT) is smaller than the
    precision and wi::neg_p.  E.g. the 0xffffffffffffffffffffffffffffffff
    TYPE_MAX_VALUE is represented in wide_int as 0xffffffffffffffff with
    len 1, and wi::to_mpz would create the mpz_t value 0xffffffffffffffff
    from that.  This patch handles it by appending the needed -1 host wide
    int words (and also has code to deal with precisions that aren't a
    multiple of HOST_BITS_PER_WIDE_INT).

    2020-12-31  Jakub Jelinek  <jakub@redhat.com>

            PR tree-optimization/98474
            * wide-int.cc (wi::to_mpz): If wide_int has MSB set, but type
            is unsigned and excess negative, append set bits after len
            until precision.

            * gcc.c-torture/execute/pr98474.c: New test.
Thread overview: 15+ messages  [~2020-12-31 10:07 UTC | newest]

2020-12-29 21:57 [Bug c++/98474] New: incorrect results using __uint128_t  jeffhurchalla at gmail dot com
2020-12-29 22:17 ` [Bug c++/98474] incorrect results using __uint128_t  jeffhurchalla at gmail dot com
2020-12-29 22:27 ` [Bug tree-optimization/98474] [8/9/10/11 Regression] incorrect results using __uint128_t  jakub at gcc dot gnu.org
2020-12-29 22:27 ` jakub at gcc dot gnu.org
2020-12-30 10:03 ` jakub at gcc dot gnu.org
2020-12-31  2:59 ` jeffhurchalla at gmail dot com
2020-12-31 10:07 ` cvs-commit at gcc dot gnu.org  [this message]
2020-12-31 10:11 ` [Bug tree-optimization/98474] [8/9/10 Regression] incorrect results using __uint128_t  jakub at gcc dot gnu.org
2021-01-01  2:44 ` jeffhurchalla at gmail dot com
2021-01-05 10:44 ` rguenth at gcc dot gnu.org
2021-01-06  9:40 ` cvs-commit at gcc dot gnu.org
2021-01-06  9:46 ` [Bug tree-optimization/98474] [8/9 Regression] incorrect results using __uint128_t  jakub at gcc dot gnu.org
2021-04-20 23:31 ` cvs-commit at gcc dot gnu.org
2021-04-22 16:49 ` cvs-commit at gcc dot gnu.org
2021-04-22 17:09 ` jakub at gcc dot gnu.org