From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <29e86ab3-76c0-1f80-acbb-9107e1131edc@linux.ibm.com>
Date: Tue, 19 Dec 2023 11:01:18 +0800
Subject: Re: [Patchv2, rs6000] Correct definition of macro of fixed point efficient unaligned
From: "Kewen.Lin"
To: HAO CHEN GUI
Cc: Segher Boessenkool, David, Peter Bergner, gcc-patches
References: <17f04e5b-da04-4303-874c-2596bcab4251@linux.ibm.com>
In-Reply-To: <17f04e5b-da04-4303-874c-2596bcab4251@linux.ibm.com>
Content-Type: text/plain; charset=UTF-8
MIME-Version: 1.0
List-Id: gcc-patches

Hi Haochen,

on 2023/12/18 10:43, HAO CHEN GUI wrote:
> Hi,
>   The patch corrects the definition of
> TARGET_EFFICIENT_OVERLAPPING_UNALIGNED and replaces it with a call to
> slow_unaligned_access.
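[For context: targetm.slow_unaligned_access (mode, align) is the target
hook that reports whether an unaligned access in the given mode at the
given alignment is slow on the current target.  The following is a rough,
hypothetical C model of the decision the string expander makes with it --
the names and the exact rule are illustrative only, not GCC's real
implementation.]

```c
#include <stdbool.h>

/* Hypothetical stand-in for targetm.slow_unaligned_access: report an
   access as slow when strict alignment is in force and the known
   alignment (in bits) is below the mode's natural alignment.  */
static bool slow_unaligned_access_model (unsigned mode_align,
					 unsigned align,
					 bool strict_align)
{
  return strict_align && align < mode_align;
}

/* The pattern the patch introduces: only pick a wide (possibly
   overlapping) load when the target says unaligned access is NOT slow;
   otherwise fall back to byte-wise loads.  */
static const char *pick_mode (unsigned align, bool strict_align)
{
  if (!slow_unaligned_access_model (32, align, strict_align))
    return "SImode";		/* one overlapping 4-byte load */
  return "QImode";		/* byte-wise loads */
}
```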
>
> Compared with last version,
> https://gcc.gnu.org/pipermail/gcc-patches/2023-December/640076.html
> the main change is to replace the macro with slow_unaligned_access.
>
> Bootstrapped and tested on x86 and powerpc64-linux BE and LE with no
> regressions.  Is this OK for trunk?
>
> Thanks
> Gui Haochen
>
> ChangeLog
> rs6000: Correct definition of macro of fixed point efficient unaligned
>
> Macro TARGET_EFFICIENT_OVERLAPPING_UNALIGNED is used in rs6000-string.cc
> to guard the platforms which are efficient on fixed point unaligned
> load/store.  It's originally defined by TARGET_EFFICIENT_UNALIGNED_VSX,
> which is enabled from P8 and can be disabled by the -mno-vsx option.  So
> the definition is wrong.  This patch corrects the problem and calls
> slow_unaligned_access to judge if fixed point unaligned load/store is
> efficient or not.
>
> gcc/
> 	* config/rs6000/rs6000.h (TARGET_EFFICIENT_OVERLAPPING_UNALIGNED):
> 	Remove.
> 	* config/rs6000/rs6000-string.cc (select_block_compare_mode):
> 	Replace TARGET_EFFICIENT_OVERLAPPING_UNALIGNED with
> 	targetm.slow_unaligned_access.
> 	(expand_block_compare_gpr): Likewise.
> 	(expand_block_compare): Likewise.
> 	(expand_strncmp_gpr_sequence): Likewise.
>
> gcc/testsuite/
> 	* gcc.target/powerpc/block-cmp-1.c: New.
> 	* gcc.target/powerpc/block-cmp-2.c: New.
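[The "overlapping unaligned" technique the removed macro guarded is the
one described in the comments of select_block_compare_mode below: when a
few tail bytes remain, move a wider load back so it ends exactly at the
end of the block instead of issuing several byte loads.  A standalone
sketch of that trick -- the helper name is made up and this is not GCC's
actual expansion:]

```c
#include <stdint.h>
#include <string.h>

/* Compare the last few (< 4) bytes of two blocks of length `len` with a
   single overlapping 4-byte load each, by backing the load up so it ends
   at the end of the block.  The bytes before the tail must already have
   compared equal, so re-reading them is harmless.  memcpy is used as an
   unaligned-safe load.  */
static int tail_bytes_equal (const unsigned char *a, const unsigned char *b,
			     size_t len)
{
  uint32_t wa, wb;
  memcpy (&wa, a + len - sizeof wa, sizeof wa);
  memcpy (&wb, b + len - sizeof wb, sizeof wb);
  return wa == wb;
}
```

This is only profitable when the target handles unaligned fixed-point
loads efficiently, which is exactly what the new
!targetm.slow_unaligned_access check asks.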
>
> patch.diff
> diff --git a/gcc/config/rs6000/rs6000-string.cc b/gcc/config/rs6000/rs6000-string.cc
> index 44a946cd453..cb9eeef05d8 100644
> --- a/gcc/config/rs6000/rs6000-string.cc
> +++ b/gcc/config/rs6000/rs6000-string.cc
> @@ -305,7 +305,7 @@ select_block_compare_mode (unsigned HOST_WIDE_INT offset,
>    else if (bytes == GET_MODE_SIZE (QImode))
>      return QImode;
>    else if (bytes < GET_MODE_SIZE (SImode)
> -	   && TARGET_EFFICIENT_OVERLAPPING_UNALIGNED
> +	   && !targetm.slow_unaligned_access (SImode, align)
> 	   && offset >= GET_MODE_SIZE (SImode) - bytes)
>      /* This matches the case were we have SImode and 3 bytes
>         and offset >= 1 and permits us to move back one and overlap
> @@ -313,7 +313,7 @@ select_block_compare_mode (unsigned HOST_WIDE_INT offset,
>         unwanted bytes off of the input. */
>      return SImode;
>    else if (word_mode_ok && bytes < UNITS_PER_WORD
> -	   && TARGET_EFFICIENT_OVERLAPPING_UNALIGNED
> +	   && !targetm.slow_unaligned_access (word_mode, align)
> 	   && offset >= UNITS_PER_WORD-bytes)
>      /* Similarly, if we can use DImode it will get matched here and
>         can do an overlapping read that ends at the end of the block. */
> @@ -1749,7 +1749,7 @@ expand_block_compare_gpr(unsigned HOST_WIDE_INT bytes, unsigned int base_align,
>        load_mode_size = GET_MODE_SIZE (load_mode);
>        if (bytes >= load_mode_size)
> 	cmp_bytes = load_mode_size;
> -      else if (TARGET_EFFICIENT_OVERLAPPING_UNALIGNED)
> +      else if (!targetm.slow_unaligned_access (load_mode, align))
> 	{
> 	  /* Move this load back so it doesn't go past the end.
> 	     P8/P9 can do this efficiently. */
> @@ -2026,7 +2026,7 @@ expand_block_compare (rtx operands[])
>    /* The code generated for p7 and older is not faster than glibc
>       memcmp if alignment is small and length is not short, so bail
>       out to avoid those conditions.
>       */
> -  if (!TARGET_EFFICIENT_OVERLAPPING_UNALIGNED
> +  if (targetm.slow_unaligned_access (word_mode, UINTVAL (align_rtx))

At first glance it looks like we could use base_align here instead, but I
noticed that base_align is computed with

  unsigned int base_align = UINTVAL (align_rtx) / BITS_PER_UNIT;

According to the internal documentation, the alignment is already passed
in bytes?  If so, the "/ BITS_PER_UNIT" looks unexpected, could you have a
check?  If that is the case, a separate patch for it is appreciated (and
please check some other related/similar places too).  Thanks!

>        && ((base_align == 1 && bytes > 16)
> 	  || (base_align == 2 && bytes > 32)))
>      return false;
> @@ -2168,7 +2168,7 @@ expand_strncmp_gpr_sequence (unsigned HOST_WIDE_INT bytes_to_compare,
>        load_mode_size = GET_MODE_SIZE (load_mode);
>        if (bytes_to_compare >= load_mode_size)
> 	cmp_bytes = load_mode_size;
> -      else if (TARGET_EFFICIENT_OVERLAPPING_UNALIGNED)
> +      else if (!targetm.slow_unaligned_access (load_mode, align))
> 	{
> 	  /* Move this load back so it doesn't go past the end.
> 	     P8/P9 can do this efficiently. */
> diff --git a/gcc/config/rs6000/rs6000.h b/gcc/config/rs6000/rs6000.h
> index 326c45221e9..3971a56c588 100644
> --- a/gcc/config/rs6000/rs6000.h
> +++ b/gcc/config/rs6000/rs6000.h
> @@ -483,10 +483,6 @@ extern int rs6000_vector_align[];
>  #define TARGET_NO_SF_SUBREG	TARGET_DIRECT_MOVE_64BIT
>  #define TARGET_ALLOW_SF_SUBREG	(!TARGET_DIRECT_MOVE_64BIT)
>
> -/* This wants to be set for p8 and newer.  On p7, overlapping unaligned
> -   loads are slow. */
> -#define TARGET_EFFICIENT_OVERLAPPING_UNALIGNED TARGET_EFFICIENT_UNALIGNED_VSX
> -
>  /* Byte/char syncs were added as phased in for ISA 2.06B, but are not present
>     in power7, so conditionalize them on p8 features.  TImode syncs need quad
>     memory support.
>     */
> diff --git a/gcc/testsuite/gcc.target/powerpc/block-cmp-1.c b/gcc/testsuite/gcc.target/powerpc/block-cmp-1.c
> new file mode 100644
> index 00000000000..bcf0cb2ab4f
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/powerpc/block-cmp-1.c
> @@ -0,0 +1,11 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -mdejagnu-cpu=power8 -mno-vsx" } */
> +/* { dg-final { scan-assembler-not {\mb[l]? memcmp\M} } } */
> +
> +/* Test that it can still expand memcmpsi instead of calling the library
> +   on P8 with VSX disabled.  */
> +
> +int foo (const char* s1, const char* s2)
> +{
> +  return __builtin_memcmp (s1, s2, 20);
> +}
> diff --git a/gcc/testsuite/gcc.target/powerpc/block-cmp-2.c b/gcc/testsuite/gcc.target/powerpc/block-cmp-2.c
> new file mode 100644
> index 00000000000..4f162dc0437
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/powerpc/block-cmp-2.c
> @@ -0,0 +1,11 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -mstrict-align" } */

There is an effective target opt_mstrict_align, we should check for it
first.

The others look good to me, thanks!

BR,
Kewen

> +/* { dg-final { scan-assembler-times {\mb[l]? memcmp\M} 1 } } */
> +
> +/* Test that it calls the library for block memory compare when
> +   strict-align is set.  The flag causes rs6000_slow_unaligned_access to
> +   return true.  */
> +
> +int foo (const char* s1, const char* s2)
> +{
> +  return __builtin_memcmp (s1, s2, 20);
> +}
>
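[With the opt_mstrict_align check folded in, block-cmp-2.c would look
roughly like the following -- a sketch of the suggested direction, not the
committed test:]

```c
/* { dg-do compile } */
/* { dg-require-effective-target opt_mstrict_align } */
/* { dg-options "-O2 -mstrict-align" } */
/* { dg-final { scan-assembler-times {\mb[l]? memcmp\M} 1 } } */

/* With -mstrict-align, rs6000_slow_unaligned_access returns true, so the
   block compare is not expanded inline and the memcmp library call
   remains.  */

int foo (const char* s1, const char* s2)
{
  return __builtin_memcmp (s1, s2, 20);
}
```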