From: Roger Sayle
To: gcc-cvs@gcc.gnu.org
Subject: [gcc r13-1945] PR target/47949: Use xchg to move from/to AX_REG with -Oz on x86.
X-Act-Checkin: gcc
X-Git-Author: Roger Sayle
X-Git-Refname: refs/heads/master
X-Git-Oldrev: e6b011bcfd52c245978ccd540e3f929571c59471
X-Git-Newrev: fc6ef90173478521982e9df3831a06ea85b4f41e
Message-Id: <20220803081118.B28573858439@sourceware.org>
Date: Wed, 3 Aug 2022 08:11:18 +0000 (GMT)
List-Id: Gcc-cvs mailing list

https://gcc.gnu.org/g:fc6ef90173478521982e9df3831a06ea85b4f41e

commit r13-1945-gfc6ef90173478521982e9df3831a06ea85b4f41e
Author: Roger Sayle
Date:   Wed Aug 3 09:07:36 2022 +0100

    PR target/47949: Use xchg to move from/to AX_REG with -Oz on x86.

    This patch adds a peephole2 to i386.md to implement the suggestion in
    PR target/47949, of using xchg instead of mov for moving values to/from
    the %rax/%eax register, controlled by -Oz, as the xchg instruction is
    one byte shorter than the move it is replacing.

    The new test case is taken from the PR:

    int foo(int x)
    {
      return x;
    }

    where previously we'd generate:

    foo:    mov %edi,%eax   // 2 bytes
            ret

    but with this patch, using -Oz, we generate:

    foo:    xchg %eax,%edi  // 1 byte
            ret

    On the CSiBE benchmark, this saves a total of 10238 bytes (reducing
    the -Oz total from 3661796 bytes to 3651558 bytes, a 0.28% saving).
    Interestingly, some modern microarchitectures (such as Zen 3)
    implement register-to-register xchg using zero-latency register
    renaming (just like mov), so in theory this transformation could
    also be enabled when optimizing for speed, if benchmarking shows
    that the improved code density consistently yields better
    performance.  However, this is microarchitecture dependent, and
    there may be interactions from using xchg (instead of a single_set)
    in the late RTL passes (such as cprop_hardreg), so for now I've
    restricted this to -Oz.

    2022-08-03  Roger Sayle
                Uroš Bizjak

    gcc/ChangeLog

            PR target/47949
            * config/i386/i386.md (peephole2): New peephole2 to convert
            SWI48 moves to/from %rax/%eax where the src is dead to xchg,
            when optimizing for minimal size with -Oz.

    gcc/testsuite/ChangeLog

            PR target/47949
            * gcc.target/i386/pr47949.c: New test case.

Diff:
---
 gcc/config/i386/i386.md                 | 12 ++++++++++++
 gcc/testsuite/gcc.target/i386/pr47949.c | 15 +++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/gcc/config/i386/i386.md b/gcc/config/i386/i386.md
index e8f3851be01..298e4b30348 100644
--- a/gcc/config/i386/i386.md
+++ b/gcc/config/i386/i386.md
@@ -3027,6 +3027,18 @@
   [(parallel [(set (match_dup 1) (match_dup 2))
	      (set (match_dup 2) (match_dup 1))])])
 
+;; Convert moves to/from AX_REG into xchg with -Oz.
+(define_peephole2
+  [(set (match_operand:SWI48 0 "general_reg_operand")
+	(match_operand:SWI48 1 "general_reg_operand"))]
+  "optimize_size > 1
+   && (REGNO (operands[0]) == AX_REG
+       || REGNO (operands[1]) == AX_REG)
+   && optimize_insn_for_size_p ()
+   && peep2_reg_dead_p (1, operands[1])"
+  [(parallel [(set (match_dup 0) (match_dup 1))
+	      (set (match_dup 1) (match_dup 0))])])
+
 (define_expand "movstrict<mode>"
   [(set (strict_low_part (match_operand:SWI12 0 "register_operand"))
	(match_operand:SWI12 1 "general_operand"))]
diff --git a/gcc/testsuite/gcc.target/i386/pr47949.c b/gcc/testsuite/gcc.target/i386/pr47949.c
new file mode 100644
index 00000000000..a0524b1f00d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr47949.c
@@ -0,0 +1,15 @@
+/* { dg-do compile } */
+/* { dg-options "-Oz" } */
+/* { dg-additional-options "-mregparm=2" { target ia32 } } */
+
+int foo(int x, int y)
+{
+  return y;
+}
+
+long bar(long x, long y)
+{
+  return y;
+}
+
+/* { dg-final { scan-assembler-times "xchg" 2 } } */