From mboxrd@z Thu Jan 1 00:00:00 1970
From: "ubizjak at gmail dot com"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/85048] [missed optimization] vector conversions
Date: Fri, 31 Mar 2023 07:28:13 +0000
X-Bugzilla-Reason: CC
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: gcc
X-Bugzilla-Component: target
X-Bugzilla-Version: 8.0.1
X-Bugzilla-Keywords: missed-optimization
X-Bugzilla-Severity: enhancement
X-Bugzilla-Who: ubizjak at gmail dot com
X-Bugzilla-Status: NEW
X-Bugzilla-Resolution:
X-Bugzilla-Priority: P3
X-Bugzilla-Assigned-To: unassigned at gcc dot gnu.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields:
Message-ID:
In-Reply-To:
References:
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/
Auto-Submitted: auto-generated
MIME-Version: 1.0
List-Id:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85048

--- Comment #12 from Uroš Bizjak ---
(In reply to Hongtao.liu from comment #9)
> With the patch, we can generate optimized code except for those 16 {u,}qq
> cases, since the ABI doesn't support 1024-bit vectors.

Can't these be vectorized using partial vectors?

GCC generates:

_Z9vcvtqq2psDv16_l:
        vmovq           56(%rsp), %xmm0
        vmovq           40(%rsp), %xmm1
        vmovq           88(%rsp), %xmm2
        vmovq           120(%rsp), %xmm3
        vpinsrq         $1, 64(%rsp), %xmm0, %xmm0
        vpinsrq         $1, 48(%rsp), %xmm1, %xmm1
        vpinsrq         $1, 96(%rsp), %xmm2, %xmm2
        vpinsrq         $1, 128(%rsp), %xmm3, %xmm3
        vinserti128     $0x1, %xmm0, %ymm1, %ymm1
        vcvtqq2psy      8(%rsp), %xmm0
        vcvtqq2psy      %ymm1, %xmm1
        vinsertf128     $0x1, %xmm1, %ymm0, %ymm0
        vmovq           72(%rsp), %xmm1
        vpinsrq         $1, 80(%rsp), %xmm1, %xmm1
        vinserti128     $0x1, %xmm2, %ymm1, %ymm1
        vmovq           104(%rsp), %xmm2
        vcvtqq2psy      %ymm1, %xmm1
        vpinsrq         $1, 112(%rsp), %xmm2, %xmm2
        vinserti128     $0x1, %xmm3, %ymm2, %ymm2
        vcvtqq2psy      %ymm2, %xmm2
        vinsertf128     $0x1, %xmm2, %ymm1, %ymm1
        vinsertf64x4    $0x1, %ymm1, %zmm0, %zmm0

whereas clang manages to vectorize the function to:

        vcvtqq2ps       16(%rbp), %ymm0
        vcvtqq2ps       80(%rbp), %ymm1
        vinsertf64x4    $1, %ymm1, %zmm0, %zmm0
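
For reference, a minimal sketch of the kind of source the symbol above corresponds to, reconstructed from the mangled name _Z9vcvtqq2psDv16_l (the typedef names and compiler flags are assumptions, not taken from the bug's testcase attachment):

        /* Hypothetical reconstruction, not the actual attachment:
           element-wise 16 x int64 -> 16 x float conversion using GNU
           vector extensions.  Compile with something like
           g++ -O2 -mavx512dq (VCVTQQ2PS is an AVX512DQ instruction).  */
        typedef long  v16di __attribute__ ((vector_size (128)));  /* 16 x 64-bit */
        typedef float v16sf __attribute__ ((vector_size (64)));   /* 16 x 32-bit */

        v16sf vcvtqq2ps (v16di x)
        {
          /* Ideally this lowers to two vcvtqq2ps instructions plus one
             vinsertf64x4, as in the clang output quoted above.  */
          return __builtin_convertvector (x, v16sf);
        }

Since the psABI has no 1024-bit vector type, the v16di argument is passed in memory, which is presumably why the GCC code above reassembles the value piecewise from the stack before converting it.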