From: "liuhongt at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug middle-end/112824] Stack spills and vector splitting with vector builtins
Date: Tue, 05 Dec 2023 03:44:23 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112824

--- Comment #7 from Hongtao Liu ---
(In reply to Chris Elrod from comment #6)
> Hongtao Liu, I do think that one should ideally be able to get optimal
> codegen when using 512-bit builtin vectors or vector intrinsics, without
> needing to set `-mprefer-vector-width=512` (and, currently, also setting
> `-mtune-ctrl=avx512_move_by_pieces`).
>
> GCC respects the vector builtins and uses 512-bit ops, but then does
> splits and spills across function boundaries.
> So, what I'm arguing is, while it would be great to respect
> `-mprefer-vector-width=512`, it should ideally also be able to respect
> vector builtins/intrinsics, so that one can use full-width vectors without
> also having to set `-mprefer-vector-width=512
> -mtune-ctrl=avx512_move_by_pieces`.

If it were designed the way you want, another issue arises: should we lower
512-bit vector builtins/intrinsics to ymm/xmm when
-mprefer-vector-width=256? The answer is we'd rather not. An intrinsic
should stay close to a one-to-one correspondence with instructions (though
there are several intrinsics which are lowered to a sequence of
instructions). There are also users who use 512-bit intrinsics for specific
kernel loops but still want the compiler to generate xmm/ymm for the rest
of their code, i.e. -mprefer-vector-width=256.

Originally, -mprefer-vector-width=XXX was designed for auto-vectorization,
and -mtune-ctrl=avx512_move_by_pieces is for memory movement. Both are
orthogonal to codegen for builtins, intrinsics, or explicit vector types.
If the user explicitly uses a 512-bit vector type, builtin, or intrinsic,
gcc will generate zmm regardless of -mprefer-vector-width=.

And yes, there can be a mismatch between 512-bit intrinsics and the
architecture tuning when you use 512-bit intrinsics but also rely on the
compiler's own codegen to handle a wrapping struct
(struct Dual { Vector data; };). For such cases, an explicit
-mprefer-vector-width=512 is needed.