From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/98856] [11 Regression] botan AES-128/XTS is
 slower by ~17% since r11-6649-g285fa338b06b804e72997c4d876ecf08a9c083af
Date: Thu, 28 Jan 2021 11:03:00 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98856

--- Comment #5 from Richard Biener ---
Looks like STLF issues.  There's a ls_stlf counter; with SLP vectorization
disabled I see

  34.39%     1417  botan  libbotan-2.so.17  [.] Botan::Block_Cipher_Fixed_Params<16ul, 16ul, 0ul, 1ul, Botan::BlockCip
  32.27%     1333  botan  libbotan-2.so.17  [.] Botan::Block_Cipher_Fixed_Params<16ul, 16ul, 0ul, 1ul, Botan::BlockCip
   7.31%      306  botan  libbotan-2.so.17  [.] Botan::poly_double_n_le

while with SLP vectorization enabled there's

Samples: 4K of event 'ls_stlf:u', Event count (approx.): 723886942
Overhead  Samples  Command  Shared Object     Symbol
  32.41%     1320  botan    libbotan-2.so.17  [.] Botan::poly_double_n_le
  27.23%     1114  botan    libbotan-2.so.17  [.] Botan::Block_Cipher_Fixed_Params<16ul, 16ul, 0ul, 1ul, Botan::BlockCip
  27.06%     1107  botan    libbotan-2.so.17  [.] Botan::Block_Cipher_Fixed_Params<16ul, 16ul, 0ul, 1ul, Botan::BlockCip

but then the register docs suggest that the unnamed cpu/event=0x24,umask=0x2/u
event is supposed to count forwarding failures due to incomplete/misaligned
data.

Unvectorized:

Samples: 4K of event 'cpu/event=0x24,umask=0x2/u', Event count (approx.): 1024347253
Overhead  Samples  Command  Shared Object     Symbol
  33.56%     1382  botan    libbotan-2.so.17  [.] Botan::Block_Cipher_Fixed_Params<16ul, 16ul, 0ul, 1ul, Botan::BlockCip
  30.32%     1246  botan    libbotan-2.so.17  [.] Botan::Block_Cipher_Fixed_Params<16ul, 16ul, 0ul, 1ul, Botan::BlockCip
  23.18%      953  botan    libbotan-2.so.17  [.] Botan::poly_double_n_le

vectorized:

Samples: 4K of event 'cpu/event=0x24,umask=0x2/u', Event count (approx.): 489384781
Overhead  Samples  Command  Shared Object     Symbol
  30.17%     1229  botan    libbotan-2.so.17  [.] Botan::poly_double_n_le
  29.40%     1203  botan    libbotan-2.so.17  [.] Botan::Block_Cipher_Fixed_Params<16ul, 16ul, 0ul, 1ul, Botan::BlockCip
  28.09%     1147  botan    libbotan-2.so.17  [.] Botan::Block_Cipher_Fixed_Params<16ul, 16ul, 0ul, 1ul, Botan::BlockCip

but the masking doesn't work as expected since I get hits for either bit on

  4.05 |       vmovdqa %xmm4,0x10(%rsp)                                   #
       |     const uint64_t carry = POLY * (W[LIMBS-1] >> 63);            #
 12.24 |       mov     0x18(%rsp),%rdx                                    #
       |     W[0] = (W[0] << 1) ^ carry;                                  #
 24.00 |       vmovdqa 0x10(%rsp),%xmm5

which should only happen for bit 2 (data not ready).  Of course this code-gen
is weird since 0x10(%rsp) is available in %xmm4.  Well, changing the above
doesn't make a difference.  I guess the event hit is just quite delayed - that
makes perf quite useless here.
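For reference (my illustration, not code from Botan or the PR), the kind of
access pattern these counters are about is a store-to-load-forwarding hazard
like the spill/reload above: a 128-bit store followed by a 64-bit reload of
part of it, or conversely two 64-bit stores followed by one 128-bit load.
Whether a narrower reload fully contained in a wider store can be forwarded is
microarchitecture dependent; a wide load that spans several narrower stores
generally cannot be forwarded and has to wait for the stores to drain.

  /* Sketch only - function names and the setup are made up.  */
  #include <immintrin.h>
  #include <stdint.h>
  #include <string.h>

  uint64_t
  wide_store_narrow_reload (__m128i v)
  {
    unsigned char buf[16] __attribute__ ((aligned (16)));
    _mm_store_si128 ((__m128i *) buf, v);   /* 16-byte store (the spill)  */
    uint64_t hi;
    memcpy (&hi, buf + 8, 8);               /* 8-byte reload at offset 8  */
    return hi >> 63;
  }

  __m128i
  narrow_stores_wide_reload (uint64_t lo, uint64_t hi)
  {
    unsigned char buf[16] __attribute__ ((aligned (16)));
    memcpy (buf, &lo, 8);                   /* two 8-byte stores ...      */
    memcpy (buf + 8, &hi, 8);
    return _mm_load_si128 ((const __m128i *) buf);  /* ... one 16-byte load */
  }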
As a general optimization remark we fail to scalarize 'W' in poly_double_le
for the larger sizes, but the relevant differences likely appear for the cases
where we expand the memcpy inline on GIMPLE, specifically

  [local count: 1431655747]:
  _60 = MEM <__int128 unsigned> [(char * {ref-all})in_6(D)];
  _61 = BIT_FIELD_REF <_60, 64, 64>;
  _62 = _61 >> 63;
  carry_63 = _62 * 135;
  _308 = _61 << 1;
  _228 = (long unsigned int) _60;
  _310 = _228 >> 63;
  _311 = _308 ^ _310;
  _71 = _228 << 1;
  _72 = carry_63 ^ _71;
  MEM [(char * {ref-all})out_5(D)] = _72;
  MEM [(char * {ref-all})out_5(D) + 8B] = _311;

This is turned into

  [local count: 1431655747]:
  _60 = MEM <__int128 unsigned> [(char * {ref-all})in_6(D)];
  _114 = VIEW_CONVERT_EXPR(_60);
  vect__71.335_298 = _114 << 1;
  _61 = BIT_FIELD_REF <_60, 64, 64>;
  _62 = _61 >> 63;
  carry_63 = _62 * 135;
  _228 = (long unsigned int) _60;
  _310 = _228 >> 63;
  _147 = {carry_63, _310};
  vect__72.336_173 = _147 ^ vect__71.335_298;
  MEM [(char * {ref-all})out_5(D)] = vect__72.336_173;

after the patch.
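For reference, the scalar form above is roughly the following C for the
16-byte case (a sketch reconstructed from the GIMPLE, not Botan's actual
source; LIMBS == 2 and POLY == 135 here, and the function name is made up).
It might also do as a basis for the small testcase mentioned further down.

  #include <stdint.h>
  #include <string.h>

  /* Doubling in GF(2^128): shift the 128-bit value left by one bit across
     the two 64-bit limbs and fold the shifted-out top bit back in,
     multiplied by the reduction polynomial 0x87 (135).  */
  void
  poly_double_le16 (uint8_t out[16], const uint8_t in[16])
  {
    uint64_t W[2];
    memcpy (W, in, 16);                              /* _60, expanded inline */
    const uint64_t carry = 135 * (W[1] >> 63);       /* carry_63 */
    const uint64_t r0 = (W[0] << 1) ^ carry;         /* _72 */
    const uint64_t r1 = (W[1] << 1) ^ (W[0] >> 63);  /* _311 */
    memcpy (out, &r0, 8);
    memcpy (out + 8, &r1, 8);
  }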
The SLP analysis for this is

build/include/botan/mem_ops.h:148:15: note: Basic block will be vectorized using SLP
build/include/botan/mem_ops.h:148:15: note: Vectorizing SLP tree:
build/include/botan/mem_ops.h:148:15: note: node 0x275d8e8 (max_nunits=2, refcnt=1)
build/include/botan/mem_ops.h:148:15: note: op template: MEM [(char * {ref-all})out_5(D)] = _72;
build/include/botan/mem_ops.h:148:15: note:     stmt 0 MEM [(char * {ref-all})out_5(D)] = _72;
build/include/botan/mem_ops.h:148:15: note:     stmt 1 MEM [(char * {ref-all})out_5(D) + 8B] = _311;
build/include/botan/mem_ops.h:148:15: note:     children 0x275d960
build/include/botan/mem_ops.h:148:15: note: node 0x275d960 (max_nunits=2, refcnt=1)
build/include/botan/mem_ops.h:148:15: note: op template: _72 = carry_63 ^ _71;
build/include/botan/mem_ops.h:148:15: note:     stmt 0 _72 = carry_63 ^ _71;
build/include/botan/mem_ops.h:148:15: note:     stmt 1 _311 = _308 ^ _310;
build/include/botan/mem_ops.h:148:15: note:     children 0x275d9d8 0x275da50
build/include/botan/mem_ops.h:148:15: note: node (external) 0x275d9d8 (max_nunits=1, refcnt=1)
build/include/botan/mem_ops.h:148:15: note:     { carry_63, _310 }
build/include/botan/mem_ops.h:148:15: note: node 0x275da50 (max_nunits=2, refcnt=1)
build/include/botan/mem_ops.h:148:15: note: op template: _71 = _228 << 1;
build/include/botan/mem_ops.h:148:15: note:     stmt 0 _71 = _228 << 1;
build/include/botan/mem_ops.h:148:15: note:     stmt 1 _308 = _61 << 1;
build/include/botan/mem_ops.h:148:15: note:     children 0x275dac8 0x275dbb8
build/include/botan/mem_ops.h:148:15: note: node 0x275dac8 (max_nunits=1, refcnt=1)
build/include/botan/mem_ops.h:148:15: note: op: VEC_PERM_EXPR
build/include/botan/mem_ops.h:148:15: note:     stmt 0 _228 = BIT_FIELD_REF <_60, 64, 0>;
build/include/botan/mem_ops.h:148:15: note:     stmt 1 _61 = BIT_FIELD_REF <_60, 64, 64>;
build/include/botan/mem_ops.h:148:15: note:     lane permutation { 0[0] 0[1] }
build/include/botan/mem_ops.h:148:15: note:     children 0x275db40
build/include/botan/mem_ops.h:148:15: note: node (external) 0x275db40 (max_nunits=1, refcnt=1)
build/include/botan/mem_ops.h:148:15: note:     { }
build/include/botan/mem_ops.h:148:15: note: node (constant) 0x275dbb8 (max_nunits=1, refcnt=1)
build/include/botan/mem_ops.h:148:15: note:     { 1, 1 }

with costs

build/include/botan/mem_ops.h:148:15: note: Cost model analysis:
  Vector inside of basic block cost: 24
  Vector prologue cost: 8
  Vector epilogue cost: 8
  Scalar cost of basic block: 52

The vectorization isn't too bad I think, it turns into

.L56:
        .cfi_restore_state
        vmovdqu (%rsi), %xmm4
        vmovdqa %xmm4, 16(%rsp)
        movq    24(%rsp), %rdx
        vmovdqa 16(%rsp), %xmm5
        shrq    $63, %rdx
        imulq   $135, %rdx, %rdi
        movq    16(%rsp), %rdx
        vmovq   %rdi, %xmm0
        vpsllq  $1, %xmm5, %xmm1
        shrq    $63, %rdx
        vpinsrq $1, %rdx, %xmm0, %xmm0
        vpxor   %xmm1, %xmm0, %xmm0
        vmovdqu %xmm0, (%rax)
        jmp     .L53

instead of

.L56:
        .cfi_restore_state
        movq    8(%rsi), %rdx
        movq    (%rsi), %rdi
        movq    %rdx, %rcx
        leaq    (%rdi,%rdi), %rsi
        addq    %rdx, %rdx
        shrq    $63, %rdi
        shrq    $63, %rcx
        xorq    %rdi, %rdx
        imulq   $135, %rcx, %rcx
        movq    %rdx, 8(%rax)
        xorq    %rsi, %rcx
        movq    %rcx, (%rax)
        jmp     .L53

but we see the 128-bit move split when using GPRs, possibly avoiding the STLF
issue.  I don't understand why we spill to extract the high part though.
Will see to create a small testcase for the above kernel.

With the vectorization disabled for just this kernel I get

AES-128/XTS 280780 key schedule/sec; 0.00 ms/op 12122 cycles/op (2 ops in 0 ms)
AES-128/XTS encrypt buffer size 1024 bytes: 852.401 MiB/sec 4.14 cycles/byte (426.20 MiB in 500.00 ms)
AES-128/XTS decrypt buffer size 1024 bytes: 854.461 MiB/sec 4.13 cycles/byte (426.20 MiB in 498.80 ms)

compared to

AES-128/XTS 286409 key schedule/sec; 0.00 ms/op 11761 cycles/op (2 ops in 0 ms)
AES-128/XTS encrypt buffer size 1024 bytes: 765.736 MiB/sec 4.62 cycles/byte (382.87 MiB in 500.00 ms)
AES-128/XTS decrypt buffer size 1024 bytes: 766.612 MiB/sec 4.61 cycles/byte (382.87 MiB in 499.43 ms)

so that seems to be it.
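As for the small testcase, a throwaway driver chaining the reduced kernel from
the sketch above might already be enough to compare the two code generation
strategies (again a sketch; poly_double_le16 is the made-up reconstruction,
and one of the two builds can simply use -fno-tree-slp-vectorize, or the GCC
optimize attribute on just that function, to get the scalar GPR variant):

  #include <stdint.h>
  #include <stdio.h>

  /* The reduced kernel from the sketch above; declared here, defined in a
     separate TU (or pasted in) so the call is not folded away.  */
  void poly_double_le16 (uint8_t out[16], const uint8_t in[16]);

  int
  main (void)
  {
    uint8_t tweak[16] = { 1 };
    /* Chain the doublings the way the XTS tweak is updated per block, so
       each call consumes the previous call's stores.  */
    for (long i = 0; i < 100000000L; i++)
      poly_double_le16 (tweak, tweak);
    printf ("%02x\n", (unsigned) tweak[0]);   /* keep the result live */
    return 0;
  }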