* [gcc r9-9410] aarch64: Tighten up checks for ubfix [PR98681]
@ 2021-04-20 23:31 Jakub Jelinek
From: Jakub Jelinek @ 2021-04-20 23:31 UTC (permalink / raw)
  To: gcc-cvs

https://gcc.gnu.org/g:e3dc765eb4556b78fe52d32be9858f2805b4488d

commit r9-9410-ge3dc765eb4556b78fe52d32be9858f2805b4488d
Author: Jakub Jelinek <jakub@redhat.com>
Date:   Tue Jan 26 14:48:26 2021 +0100

    aarch64: Tighten up checks for ubfix [PR98681]
    
    The testcase in the patch doesn't assemble, because the instruction requires
    the penultimate operand (lsb) to be in the range [0, 32] (or [0, 64]) and the
    last operand to be in the range [1, 32 - lsb] (or [1, 64 - lsb]).
    The INTVAL (shft_amnt) < GET_MODE_BITSIZE (mode) check accepts an lsb operand
    anywhere in [MIN, 32] (or [MIN, 64]); we then invoke undefined behaviour in
    the compiler and the bogus operand sometimes makes it through.
    The patch changes all the INTVAL uses in that function to UINTVAL.  That isn't
    strictly necessary (e.g. after the
    UINTVAL (shft_amnt) < GET_MODE_BITSIZE (mode) check we know the value is not
    negative, so INTVAL (shft_amnt) and UINTVAL (shft_amnt) behave the same), but
    on top of that I had to add an INTVAL (mask) > 0 check, otherwise we would
    risk (hypothetically) emitting an instruction that doesn't assemble.
    The problem is with masks that have the MSB set.  The instruction itself can
    handle those, e.g.
    ubfiz w1, w0, 13, 19
    will do
    (w0 << 13) & 0xffffe000
    but in RTL we represent SImode constants with the MSB set as negative
    HOST_WIDE_INTs, so the mask is actually HOST_WIDE_INT_C (0xffffffffffffe000),
    and the instruction uses %P3 to print the last operand, which calls
    asm_fprintf (f, "%u", popcount_hwi (INTVAL (x)))
    to print it.  That prints 51 rather than 19, because it also counts all the
    copies of the sign bit.
    Not supporting masks with the MSB set isn't a big loss though; they shouldn't
    normally appear, as both GIMPLE and RTL optimizations should optimize them
    away (such a mask doesn't clear any bits beyond what the shift already
    cleared, so just w0 << 13 will do too).
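    A minimal standalone sketch of that pitfall (plain C, with long long standing
    in for HOST_WIDE_INT and __builtin_popcountll standing in for popcount_hwi):
    
    #include <stdio.h>
    
    int
    main (void)
    {
      /* The SImode mask 0xffffe000 sign-extends to 0xffffffffffffe000 when
         stored in a 64-bit value.  */
      long long mask = (int) 0xffffe000;
      /* Counting bits of the sign-extended value gives 51, not the intended
         field width of 19, because the copies of the sign bit are counted.  */
      printf ("%d\n", __builtin_popcountll (mask));                  /* 51 */
      printf ("%d\n", __builtin_popcountll (mask & 0xffffffffLL));   /* 19 */
      return 0;
    }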
    
    2021-01-26  Jakub Jelinek  <jakub@redhat.com>
    
            PR target/98681
            * config/aarch64/aarch64.c (aarch64_mask_and_shift_for_ubfiz_p):
            Use UINTVAL (shft_amnt) and UINTVAL (mask) instead of INTVAL (shft_amnt)
            and INTVAL (mask).  Add && INTVAL (mask) > 0 condition.
    
            * gcc.c-torture/execute/pr98681.c: New test.
    
    (cherry picked from commit fb09d7242a25971b275292332337a56b86637f2c)

Diff:
---
 gcc/config/aarch64/aarch64.c                  |  9 +++++----
 gcc/testsuite/gcc.c-torture/execute/pr98681.c | 18 ++++++++++++++++++
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index d71daf92d0b..857df74dea5 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -9545,10 +9545,11 @@ aarch64_mask_and_shift_for_ubfiz_p (scalar_int_mode mode, rtx mask,
 				    rtx shft_amnt)
 {
   return CONST_INT_P (mask) && CONST_INT_P (shft_amnt)
-	 && INTVAL (shft_amnt) < GET_MODE_BITSIZE (mode)
-	 && exact_log2 ((INTVAL (mask) >> INTVAL (shft_amnt)) + 1) >= 0
-	 && (INTVAL (mask)
-	     & ((HOST_WIDE_INT_1U << INTVAL (shft_amnt)) - 1)) == 0;
+	 && INTVAL (mask) > 0
+	 && UINTVAL (shft_amnt) < GET_MODE_BITSIZE (mode)
+	 && exact_log2 ((UINTVAL (mask) >> UINTVAL (shft_amnt)) + 1) >= 0
+	 && (UINTVAL (mask)
+	     & ((HOST_WIDE_INT_1U << UINTVAL (shft_amnt)) - 1)) == 0;
 }
 
 /* Return true if the masks and a shift amount from an RTX of the form
diff --git a/gcc/testsuite/gcc.c-torture/execute/pr98681.c b/gcc/testsuite/gcc.c-torture/execute/pr98681.c
new file mode 100644
index 00000000000..a256881342f
--- /dev/null
+++ b/gcc/testsuite/gcc.c-torture/execute/pr98681.c
@@ -0,0 +1,18 @@
+/* PR target/98681 */
+
+__attribute__((noipa)) int
+foo (int x)
+{
+  if (x > 32)
+    return (x << -64) & 255;
+  else
+    return x;
+}
+
+int
+main ()
+{
+  if (foo (32) != 32 || foo (-150) != -150)
+    __builtin_abort ();
+  return 0;
+}
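
For reference, a standalone sketch (plain C, invented names, long long standing
in for HOST_WIDE_INT) of why the old signed comparison accepted the testcase's
out-of-range shift amount while the new unsigned comparison rejects it:

/* Model of the guard on the shift amount: old signed compare vs. new
   unsigned compare.  The helper names are made up for this example.  */
#include <stdbool.h>
#include <stdio.h>

#define MODE_BITSIZE 32LL	/* GET_MODE_BITSIZE (SImode) */

static bool
old_guard (long long shft_amnt)
{
  /* Signed compare: -64 < 32 holds, so a negative shift amount slips
     through and later feeds an undefined negative shift count.  */
  return shft_amnt < MODE_BITSIZE;
}

static bool
new_guard (long long shft_amnt)
{
  /* Unsigned compare: (unsigned long long) -64 is huge, so it is rejected.  */
  return (unsigned long long) shft_amnt < (unsigned long long) MODE_BITSIZE;
}

int
main (void)
{
  printf ("-64: old %d, new %d\n", old_guard (-64), new_guard (-64));	/* old 1, new 0 */
  printf (" 13: old %d, new %d\n", old_guard (13), new_guard (13));	/* old 1, new 1 */
  return 0;
}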

