public inbox for gcc-cvs@sourceware.org
From: Michael Meissner <meissner@gcc.gnu.org>
To: gcc-cvs@gcc.gnu.org
Subject: [gcc(refs/users/meissner/heads/dmf007)] Bump up precision size to 11 bits.
Date: Wed,  1 Feb 2023 03:09:24 +0000 (GMT)
Message-ID: <20230201030924.B18653858D38@sourceware.org>

https://gcc.gnu.org/g:79fbfd2c5bc64e7e705f86cac8da29c04187000d

commit 79fbfd2c5bc64e7e705f86cac8da29c04187000d
Author: Michael Meissner <meissner@linux.ibm.com>
Date:   Tue Jan 31 22:08:53 2023 -0500

    Bump up precision size to 11 bits.
    
    The new __dmr type being added for a possible future PowerPC instruction
    set runs into a structure field size issue.  The size of the __dmr type
    is 1024 bits.  The precision field in tree_type_common is currently 10
    bits, so storing 1,024 into that field wraps around and you get 0 back.
    When the precision field is 0, the ccp pass passes this 0 to sext_hwi in
    hwint.h.  That function in turn computes a shift count equal to the host
    wide int bit size, and shifting by the full bit width of a type is
    undefined behavior in C/C++, so the result is machine dependent.
    
          int shift = HOST_BITS_PER_WIDE_INT - prec;
          return ((HOST_WIDE_INT) ((unsigned HOST_WIDE_INT) src << shift)) >> shift;
    
    It turns out that x86_64, where I first ran my tests, returns the original
    input unchanged after the two shifts, while PowerPC always returns 0.  In
    the ccp pass the original input is -1, so on x86_64 it happened to work.
    When I did the runs on PowerPC, the result was 0, which ultimately led to
    the failure.
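    
    As a minimal standalone sketch (not GCC source; the struct and field
    names here are made up for illustration), the wrap-around and the
    undefined shift count can be reproduced on a 64-bit host like this:
    
          #include <stdio.h>
    
          /* 10-bit unsigned bit-field, mirroring the old precision width.  */
          struct toy_type_common { unsigned int precision : 10; };
    
          int main (void)
          {
            struct toy_type_common t;
            t.precision = 1024;            /* unsigned bit-field wraps: stores 0 */
            int shift = 64 - t.precision;  /* 64 - 0 == 64 on a 64-bit host */
            printf ("precision = %u, shift = %d\n",
                    (unsigned) t.precision, shift);
            /* Shifting a 64-bit value by shift == 64, as sext_hwi would do
               here, is undefined behavior in C/C++, which is why x86_64 and
               PowerPC can legitimately give different answers.  */
            return 0;
          }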
    
    2023-01-31   Michael Meissner  <meissner@linux.ibm.com>
    
    gcc/
    
            * hwint.h (sext_hwi): Add assertion against precision 0.
            * tree-core.h (tree_type_common): Bump up precision field by 1 bit, and
            reduce contains_placeholder_bits to 1 bit.

Diff:
---
 gcc/hwint.h     | 1 +
 gcc/tree-core.h | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/gcc/hwint.h b/gcc/hwint.h
index e31aa006fa4..ba92efbfc25 100644
--- a/gcc/hwint.h
+++ b/gcc/hwint.h
@@ -277,6 +277,7 @@ ctz_or_zero (unsigned HOST_WIDE_INT x)
 static inline HOST_WIDE_INT
 sext_hwi (HOST_WIDE_INT src, unsigned int prec)
 {
+  gcc_checking_assert (prec != 0);
   if (prec == HOST_BITS_PER_WIDE_INT)
     return src;
   else
diff --git a/gcc/tree-core.h b/gcc/tree-core.h
index 8124a1328d4..e27eb1eb87f 100644
--- a/gcc/tree-core.h
+++ b/gcc/tree-core.h
@@ -1686,12 +1686,12 @@ struct GTY(()) tree_type_common {
   tree attributes;
   unsigned int uid;
 
-  unsigned int precision : 10;
+  unsigned int precision : 11;
   unsigned no_force_blk_flag : 1;
   unsigned needs_constructing_flag : 1;
   unsigned transparent_aggr_flag : 1;
   unsigned restrict_flag : 1;
-  unsigned contains_placeholder_bits : 2;
+  unsigned contains_placeholder_bits : 1;
 
   ENUM_BITFIELD(machine_mode) mode : 8;

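
As a quick check of the new width (again a standalone sketch, not GCC source),
an 11-bit unsigned bit-field holds values up to 2047, so a precision of 1024
is now stored without wrapping:

      #include <stdio.h>

      struct toy_type_common { unsigned int precision : 11; };

      int main (void)
      {
        struct toy_type_common t;
        t.precision = 1024;
        printf ("precision = %u\n", (unsigned) t.precision);  /* prints 1024 */
        return 0;
      }

The extra bit comes out of contains_placeholder_bits, which shrinks from 2
bits to 1 bit, so the total number of bits used by this group of bit-fields
is unchanged.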