From: Richard Sandiford <richard.sandiford@arm.com>
To: apinski--- via Gcc-patches <gcc-patches@gcc.gnu.org>
Cc: apinski@marvell.com
Subject: Re: [PATCH] Fix PR target/103100 -mstrict-align and memset on not aligned buffers
Date: Wed, 17 Nov 2021 09:38:19 +0000
In-Reply-To: <1636176325-17121-1-git-send-email-apinski@marvell.com> (apinski's message of "Fri, 5 Nov 2021 22:25:25 -0700")
References: <1636176325-17121-1-git-send-email-apinski@marvell.com>

apinski--- via Gcc-patches writes:
> From: Andrew Pinski
>
> The problem here is with -mstrict-align: aarch64_expand_setmem needs
> to check the alignment of the mode to make sure we can use it for
> doing the stores.
>
> gcc/ChangeLog:
>
> 	PR target/103100
> 	* config/aarch64/aarch64.c (aarch64_expand_setmem): Add check for
> 	alignment of the mode if STRICT_ALIGNMENT is true.
> ---
>  gcc/config/aarch64/aarch64.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
> index fdf05505846..2c00583e12c 100644
> --- a/gcc/config/aarch64/aarch64.c
> +++ b/gcc/config/aarch64/aarch64.c
> @@ -23738,7 +23738,9 @@ aarch64_expand_setmem (rtx *operands)
>        over writing.  */
>    opt_scalar_int_mode mode_iter;
>    FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
> -    if (GET_MODE_BITSIZE (mode_iter.require ()) <= MIN (n, copy_limit))
> +    if (GET_MODE_BITSIZE (mode_iter.require ()) <= MIN (n, copy_limit)
> +	&& (!STRICT_ALIGNMENT
> +	    || MEM_ALIGN (dst) >= GET_MODE_ALIGNMENT (mode_iter.require ())))

Sorry for the slow review.

I think instead we should keep track of the alignment of the start
byte.  This will be MEM_ALIGN for the first iteration but could
decrease after writing some bytes.  The net effect should be the same
in practice; it just seems more robust.

Thanks,
Richard

>        cur_mode = mode_iter.require ();
>
>        gcc_assert (cur_mode != BLKmode);
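The suggestion above (track the alignment of the next byte to be stored, rather than re-checking MEM_ALIGN of the original destination for every mode) can be sketched with a small standalone helper.  This is a hypothetical illustration, not the actual GCC implementation: `align_at_offset` and its interface are invented for this sketch, and alignments are in bytes and assumed to be powers of two.

```c
#include <assert.h>

/* Hypothetical sketch of the review suggestion: the guaranteed
   alignment at (start + offset) equals the starting alignment for the
   first store, but once some bytes have been written it can be no
   larger than the lowest set bit of the offset.  */
static unsigned int
align_at_offset (unsigned int start_align, unsigned int offset)
{
  if (offset == 0)
    return start_align;
  /* offset & -offset isolates the largest power of two dividing
     the offset.  */
  unsigned int off_align = offset & -offset;
  return start_align < off_align ? start_align : off_align;
}
```

Under STRICT_ALIGNMENT, a mode would then be usable for the next store only when `align_at_offset` at the current position is at least the mode's alignment; on the first iteration this reduces to the MEM_ALIGN check in the patch, but after an 8-byte store into a 16-byte-aligned buffer, for example, the known alignment drops to 8.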