From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/98856] [11 Regression] botan AES-128/XTS is slower by ~17% since r11-6649-g285fa338b06b804e72997c4d876ecf08a9c083af
Date: Thu, 28 Jan 2021 11:57:35 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98856

--- Comment #7 from Richard Biener ---
OK, and the spill is likely because we expand as

(insn 7 6 0 (set (reg:TI 84 [ _9 ])
        (mem:TI (reg/v/f:DI 93 [ in ]) [0 MEM <__int128 unsigned> [(char * {ref-all})in_8(D)]+0 S16 A8])) -1
     (nil))
(insn 8 7 9 (parallel [
            (set (reg:DI 95)
                (lshiftrt:DI (subreg:DI (reg:TI 84 [ _9 ]) 8)
                    (const_int 63 [0x3f])))
            (clobber (reg:CC 17 flags))
        ]) "t.c":7:26 -1
     (nil))

^^^ (subreg:DI (reg:TI 84 [ _9 ]) 8)

...

(insn 12 11 13 (set (reg:V2DI 98 [ vect__5.3 ])
        (ashift:V2DI (subreg:V2DI (reg:TI 84 [ _9 ]) 0)
            (const_int 1 [0x1]))) "t.c":9:16 -1
     (nil))

^^^ (subreg:V2DI (reg:TI 84 [ _9 ]) 0)

LRA then does

      Choosing alt 4 in insn 7:  (0) v  (1) vm {*movti_internal}
      Creating newreg=103 from oldreg=84, assigning class ALL_SSE_REGS to r103
    7: r103:TI=[r101:DI]
      REG_DEAD r101:DI
    Inserting insn reload after:
   20: r84:TI=r103:TI

      Choosing alt 0 in insn 8:  (0) =rm  (1) 0  (2) cJ {*lshrdi3_1}
      Creating newreg=104 from oldreg=95, assigning class GENERAL_REGS to r104
    Inserting insn reload before:
   21: r104:DI=r84:TI#8

but somehow this means reload 20 is used for reload 21 instead of avoiding
reload 20 and doing a movhlps / movq combo?  (I guess there's no high-part
xmm extract to a gpr.)

As said, the assembly is a bit weird:

poly_double_le2:
.LFB0:
        .cfi_startproc
        vmovdqu (%rsi), %xmm2
        vmovdqa %xmm2, -24(%rsp)
        movq    -16(%rsp), %rax        ok, well ...
        vmovdqa -24(%rsp), %xmm3       ???
        shrq    $63, %rax
        imulq   $135, %rax, %rax
        vmovq   %rax, %xmm0
        movq    -24(%rsp), %rax        ??? movq %xmm2/3, %rax
        vpsllq  $1, %xmm3, %xmm1
        shrq    $63, %rax
        vpinsrq $1, %rax, %xmm0, %xmm0
        vpxor   %xmm1, %xmm0, %xmm0
        vmovdqu %xmm0, (%rdi)
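For reference, a minimal sketch of what the t.c kernel presumably looks like,
reconstructed from the RTL and assembly above (the unaligned __int128 load,
the >>63 / *135 carry and the per-lane shift-by-one); only the function name
comes from the assembly, the signature and body here are assumptions:

#include <stdint.h>
#include <string.h>

/* GF(2^128) doubling as used for the XTS tweak update; a sketch only.  */
void
poly_double_le2 (unsigned char *out, const unsigned char *in)
{
  unsigned __int128 x;
  memcpy (&x, in, 16);                  /* the mem:TI load (insn 7)  */

  uint64_t lo = (uint64_t) x;
  uint64_t hi = (uint64_t) (x >> 64);   /* the (subreg:DI (reg:TI 84) 8) access  */

  uint64_t carry = (hi >> 63) * 135;    /* shrq $63; imulq $135  */
  hi = (hi << 1) | (lo >> 63);          /* together with lo << 1 this is the  */
  lo = (lo << 1) ^ carry;               /* V2DI shift the vectorizer forms  */

  memcpy (out, &lo, 8);
  memcpy (out + 8, &hi, 8);
}

The vectorizer keeps the two lane shifts as the single vpsllq and builds the
{carry, lo>>63} xor operand with vmovq/vpinsrq, which is presumably why the
scalar high- and low-part extracts of the TI register end up going through
the stack slot.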
Note that even with -march=core-avx2 (and thus inter-unit moves not
pessimized) we get

poly_double_le2:
.LFB0:
        .cfi_startproc
        vmovdqu (%rsi), %xmm2
        vmovdqa %xmm2, -24(%rsp)
        movq    -16(%rsp), %rax
        vmovdqa -24(%rsp), %xmm3
        shrq    $63, %rax
        vpsllq  $1, %xmm3, %xmm1
        imulq   $135, %rax, %rax
        vmovq   %rax, %xmm0
        movq    -24(%rsp), %rax
        shrq    $63, %rax
        vpinsrq $1, %rax, %xmm0, %xmm0
        vpxor   %xmm1, %xmm0, %xmm0
        vmovdqu %xmm0, (%rdi)

with

.L56:
        .cfi_restore_state
        vmovdqu (%rsi), %xmm4
        movq    8(%rsi), %rdx
        shrq    $63, %rdx
        imulq   $135, %rdx, %rdi
        movq    8(%rsi), %rdx
        vmovq   %rdi, %xmm0
        vpsllq  $1, %xmm4, %xmm1
        shrq    $63, %rdx
        vpinsrq $1, %rdx, %xmm0, %xmm0
        vpxor   %xmm1, %xmm0, %xmm0
        vmovdqu %xmm0, (%rax)
        jmp     .L53

we arrive at

AES-128/XTS 672043 key schedule/sec; 0.00 ms/op 4978.00 cycles/op (2 ops in 0.00 ms)
AES-128/XTS encrypt buffer size 1024 bytes: 843.310 MiB/sec 4.18 cycles/byte (421.66 MiB in 500.00 ms)
AES-128/XTS decrypt buffer size 1024 bytes: 847.215 MiB/sec 4.16 cycles/byte (421.66 MiB in 497.70 ms)

A variant using movhlps isn't any faster than spilling, unfortunately :/
I guess re-materializing from a load is too much to ask of LRA.

On the vectorizer side the costing is 52 scalar vs. 40 vector (as usual the
vectorized store alone leads to a big boost).
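For illustration, the "movhlps / movq combo" mentioned above would, written
as intrinsics, look roughly like one of the following; this is only a sketch
of the alternative high-lane extraction (the helper names are made up), not
code the compiler currently emits for this testcase:

#include <immintrin.h>
#include <stdint.h>

/* Get the high 64-bit lane of an XMM register into a GPR without going
   through a stack slot.  */
static inline uint64_t
hi64_via_unpack (__m128i v)
{
  /* punpckhqdq (or movhlps) moves the high lane down, movq then goes to a GPR.  */
  return (uint64_t) _mm_cvtsi128_si64 (_mm_unpackhi_epi64 (v, v));
}

static inline uint64_t
hi64_via_pextr (__m128i v)
{
  /* SSE4.1 pextrq $1 is a direct high-lane xmm -> gpr extract.  */
  return (uint64_t) _mm_extract_epi64 (v, 1);
}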