From: Richard Sandiford
To: Christophe Lyon
Subject: Re: [PATCH 1/2] aarch64: fix ICE in aarch64_layout_arg [PR108411]
Date: Thu, 19 Jan 2023 09:22:37 +0000
References: <20230118201649.11612-1-christophe.lyon@arm.com>
In-Reply-To: <20230118201649.11612-1-christophe.lyon@arm.com> (Christophe Lyon's message of "Wed, 18 Jan 2023 21:16:48 +0100")

Christophe Lyon writes:
> The previous patch added an assert which should not be applied to PST
> types (Pure Scalable Types) because alignment does not matter in this
> case.  This patch moves the assert after the PST case is handled to
> avoid the ICE.
>
>         PR target/108411
>         gcc/
>         * config/aarch64/aarch64.cc (aarch64_layout_arg): Improve
>         comment.
>         Move assert about alignment a bit later.
> ---
>  gcc/config/aarch64/aarch64.cc | 28 +++++++++++++++++++++-------
>  1 file changed, 21 insertions(+), 7 deletions(-)
>
> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> index d36b57341b3..7175b453b3a 100644
> --- a/gcc/config/aarch64/aarch64.cc
> +++ b/gcc/config/aarch64/aarch64.cc
> @@ -7659,7 +7659,18 @@ aarch64_layout_arg (cumulative_args_t pcum_v, const function_arg_info &arg)
>               && (currently_expanding_function_start
>                   || currently_expanding_gimple_stmt));
>  
> -  /* There are several things to note here:
> +  /* HFAs and HVAs can have an alignment greater than 16 bytes.  For example:
> +
> +       typedef struct foo {
> +         __Int8x16_t foo[2] __attribute__((aligned(32)));
> +       } foo;
> +
> +     is still a HVA despite its larger-than-normal alignment.
> +     However, such over-aligned HFAs and HVAs are guaranteed to have
> +     no padding.
> +
> +     If we exclude HFAs and HVAs from the discussion below, then there
> +     are several things to note:
>  
>       - Both the C and AAPCS64 interpretations of a type's alignment should
>         give a value that is no greater than the type's size.
> @@ -7704,12 +7715,6 @@ aarch64_layout_arg (cumulative_args_t pcum_v, const function_arg_info &arg)
>       would treat the alignment as though it was *equal to* 16 bytes.
>  
>       Both behaviors were wrong, but in different cases.
>      */
> -  unsigned int alignment
> -    = aarch64_function_arg_alignment (mode, type, &abi_break,
> -                                      &abi_break_packed);
> -  gcc_assert (alignment <= 16 * BITS_PER_UNIT
> -              && (!alignment || abi_break < alignment)
> -              && (!abi_break_packed || alignment < abi_break_packed));
>  
>    pcum->aapcs_arg_processed = true;
>  
> @@ -7780,6 +7785,15 @@ aarch64_layout_arg (cumulative_args_t pcum_v, const function_arg_info &arg)
>                                                 &nregs);
>    gcc_assert (!sve_p || !allocate_nvrn);
>  
> +  unsigned int alignment
> +    = aarch64_function_arg_alignment (mode, type, &abi_break,
> +                                      &abi_break_packed);
> +
> +  gcc_assert (allocate_nvrn || (alignment <= 16 * BITS_PER_UNIT
> +                                && (!alignment || abi_break < alignment)
> +                                && (!abi_break_packed
> +                                    || alignment < abi_break_packed)));

I think allocate_nvrn should only circumvent the first part, so:

  gcc_assert ((allocate_nvrn || alignment <= 16 * BITS_PER_UNIT)
              && (!alignment || abi_break < alignment)
              && (!abi_break_packed || alignment < abi_break_packed));

OK with that change, and sorry for not thinking about this originally.

Richard

> +
>    /* allocate_ncrn may be false-positive, but allocate_nvrn is quite reliable.
>       The following code thus handles passing by SIMD/FP registers first.  */