public inbox for gcc-cvs@sourceware.org
* [gcc r13-5482] aarch64: Correct the maximum shift amount for shifted operands
@ 2023-01-28 23:08 Philipp Tomsich
  0 siblings, 0 replies; only message in thread
From: Philipp Tomsich @ 2023-01-28 23:08 UTC (permalink / raw)
  To: gcc-cvs

https://gcc.gnu.org/g:2f2101c87ac88a9fa9f7b4a264fb7738118c7fc9

commit r13-5482-g2f2101c87ac88a9fa9f7b4a264fb7738118c7fc9
Author: Philipp Tomsich <philipp.tomsich@vrull.eu>
Date:   Sun Jan 29 00:07:09 2023 +0100

    aarch64: Correct the maximum shift amount for shifted operands
    
    The aarch64 ISA specification allows a left shift amount in the range
    of 0 to 4 (encoded in the imm3 field) to be applied after extension.
    
    This is true for at least the following instructions:
    
     * ADD (extended register)
     * ADDS (extended register)
     * SUB (extended register)
    
    The result of this patch can be seen when compiling the following code:
    
    uint64_t myadd(uint64_t a, uint64_t b)
    {
        return a+(((uint8_t)b)<<4);
    }
    
    Without the patch the following sequence will be generated:
    
    0000000000000000 <myadd>:
       0:   d37c1c21        ubfiz   x1, x1, #4, #8
       4:   8b000020        add     x0, x1, x0
       8:   d65f03c0        ret
    
    With the patch the ubfiz will be merged into the add instruction:
    
    0000000000000000 <myadd>:
       0:   8b211000        add     x0, x0, w1, uxtb #4
       4:   d65f03c0        ret
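    
    The operand transformation that the single add instruction performs
    (zero-extend the low byte, then shift left by up to 4) can be modeled
    in plain C; this is only an illustrative sketch, and the helper name
    add_uxtb_lsl is hypothetical:
    
    #include <assert.h>
    #include <stdint.h>
    
    /* Model of "ADD (extended register)" with a UXTB extension: the
       second operand is zero-extended from 8 bits and then left-shifted
       by 0..4 (the imm3 range) before the addition.  */
    static uint64_t add_uxtb_lsl(uint64_t a, uint64_t b, unsigned shift)
    {
        assert(shift <= 4);  /* imm3 can only encode shifts of 0..4 */
        return a + (((uint64_t)(uint8_t)b) << shift);
    }
    
    int main(void)
    {
        /* Matches myadd() above: a shift of 4 is the largest amount the
           instruction can fold.  (uint8_t)0x1FF is 0xFF, so the result
           is 100 + (0xFF << 4) = 4180.  */
        assert(add_uxtb_lsl(100, 0x1FF, 4) == 4180);
        return 0;
    }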
    
    gcc/ChangeLog:
    
            * config/aarch64/aarch64.cc (aarch64_uxt_size): Fix an
            off-by-one in checking the permissible shift amount.

Diff:
---
 gcc/config/aarch64/aarch64.cc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 089c1c85845..17c1e23e5b5 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -13022,7 +13022,7 @@ aarch64_output_casesi (rtx *operands)
 int
 aarch64_uxt_size (int shift, HOST_WIDE_INT mask)
 {
-  if (shift >= 0 && shift <= 3)
+  if (shift >= 0 && shift <= 4)
     {
       int size;
       for (size = 8; size <= 32; size *= 2)
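
 For context, a standalone sketch of the check that the corrected
 aarch64_uxt_size performs (the function name uxt_size_sketch and the use
 of long long in place of HOST_WIDE_INT are illustrative assumptions):

 #include <assert.h>

 /* Return the extension width (8, 16 or 32) when MASK is an all-ones
    mask of that width and SHIFT fits the imm3 range 0..4; return 0
    otherwise.  Before the fix, the bound was "shift <= 3", so a shift
    of exactly 4 was wrongly rejected.  */
 static int uxt_size_sketch(int shift, long long mask)
 {
     if (shift >= 0 && shift <= 4)
         for (int size = 8; size <= 32; size *= 2)
             if (mask == ((long long)1 << size) - 1)
                 return size;
     return 0;
 }

 int main(void)
 {
     assert(uxt_size_sketch(4, 0xFF) == 8);    /* accepted after the fix */
     assert(uxt_size_sketch(5, 0xFF) == 0);    /* outside the imm3 range */
     assert(uxt_size_sketch(2, 0xFFFF) == 16);
     return 0;
 }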
