From: Richard Sandiford
To: Tamar Christina
Cc: gcc-patches@gcc.gnu.org, nd@arm.com, Richard.Earnshaw@arm.com,
	Marcus.Shawcroft@arm.com, ktkachov@gcc.gnu.org
Subject: Re: [PATCH 3/4] AArch64: add new alternative with early clobber to patterns
Date: Wed, 22 May 2024 10:47:57 +0100
In-Reply-To: (Tamar Christina's message of "Wed, 22 May 2024 10:29:40 +0100")

Tamar Christina writes:
> Hi All,
>
> This patch adds new alternatives to the patterns which are affected.  The new
> alternatives with the conditional early clobbers are added before the normal
> ones in order for LRA to prefer them in the event that we have enough free
> registers to accommodate them.
>
> In case register pressure is too high the normal alternatives will be preferred
> before a reload is considered as we rather have the tie than a spill.
>
> Tests are in the next patch.
>
> Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.
>
> Ok for master?
>
> Thanks,
> Tamar
>
> gcc/ChangeLog:
>
> 	* config/aarch64/aarch64-sve.md (and3,
> 	@aarch64_pred__z, *3_cc,
> 	*3_ptest, aarch64_pred__z,
> 	*3_cc, *3_ptest,
> 	aarch64_pred__z, *3_cc,
> 	*3_ptest, @aarch64_pred_cmp,
> 	*cmp_cc, *cmp_ptest,
> 	@aarch64_pred_cmp_wide,
> 	*aarch64_pred_cmp_wide_cc,
> 	*aarch64_pred_cmp_wide_ptest, @aarch64_brk,
> 	*aarch64_brk_cc, *aarch64_brk_ptest,
> 	@aarch64_brk, *aarch64_brkn_cc, *aarch64_brkn_ptest,
> 	*aarch64_brk_cc, *aarch64_brk_ptest,
> 	aarch64_rdffr_z, *aarch64_rdffr_z_ptest, *aarch64_rdffr_ptest,
> 	*aarch64_rdffr_z_cc, *aarch64_rdffr_cc): Add new early clobber
> 	alternative.
> 	* config/aarch64/aarch64-sve2.md
> 	(@aarch64_pred_): Likewise.
>
> ---
> diff --git a/gcc/config/aarch64/aarch64-sve.md b/gcc/config/aarch64/aarch64-sve.md
> index e3085c0c636f1317409bbf3b5fbaf5342a2df1f6..8fdc1bc3cd43acfcd675a18350c297428c85fe46 100644
> --- a/gcc/config/aarch64/aarch64-sve.md
> +++ b/gcc/config/aarch64/aarch64-sve.md
> @@ -1161,8 +1161,10 @@ (define_insn "aarch64_rdffr_z"
> 	  (reg:VNx16BI FFRT_REGNUM)
> 	  (match_operand:VNx16BI 1 "register_operand")))]
>   "TARGET_SVE && TARGET_NON_STREAMING"
> -  {@ [ cons: =0, 1 ]
> -     [ Upa , Upa ] rdffr\t%0.b, %1/z
> +  {@ [ cons: =0, 1 ; attrs: pred_clobber ]
> +     [ &Upa , Upa; yes ] rdffr\t%0.b, %1/z
> +     [ ?Upa , Upa; yes ] ^
> +     [ Upa , Upa; * ] ^
>    }
> )

Sorry for not explaining it very well, but in the previous review I suggested:

> The gather-like approach would be something like:
>
>    [ &Upa , Upl , w , ; yes ] cmp\t%0., %1/z, %3., #%4
>    [ ?Upl , 0 , w , ; yes ] ^
>    [ Upa , Upl , w , ; no ] ^
>    [ &Upa , Upl , w , w ; yes ] cmp\t%0., %1/z, %3., %4.
>    [ ?Upl , 0 , w , w ; yes ] ^
>    [ Upa , Upl , w , w ; no ] ^
>
> with:
>
>   (define_attr "pred_clobber" "any,no,yes" (const_string "any"))

(with emphasis on the last line).  What I didn't say explicitly is that
"no" should require !TARGET_SVE_PRED_CLOBBER.

The premise of that review was that we shouldn't enable things like:

  [ Upa , Upl , w , w ; no ] ^

for TARGET_SVE_PRED_CLOBBER since it contradicts the earlyclobber
alternative.  So we should enable either the pred_clobber=yes
alternatives or the pred_clobber=no alternatives, but not both.
The default "any" is then for other non-predicate instructions that
don't care about TARGET_SVE_PRED_CLOBBER either way.

In contrast, this patch makes pred_clobber=yes enable the alternatives
that correctly describe the restriction (good!) but then also enables
the normal alternatives too, which IMO makes the semantics unclear.
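To make the either/or behaviour concrete, the wiring this implies is
something like the sketch below.  It is only a sketch, not part of the
patch under review: the "pred_clobber_enabled" name is purely
illustrative, and in practice the test would need to be folded into the
existing "enabled" attribute handling in aarch64.md rather than adding
a separate attribute:

  ;; "any" is the default and means the instruction does not care about
  ;; TARGET_SVE_PRED_CLOBBER either way.
  (define_attr "pred_clobber" "any,no,yes" (const_string "any"))

  ;; Disable the "yes" alternatives when TARGET_SVE_PRED_CLOBBER is false
  ;; and the "no" alternatives when it is true, so that at most one of the
  ;; two sets of alternatives is live for a given tuning.
  (define_attr "pred_clobber_enabled" "no,yes"
    (cond [(and (eq_attr "pred_clobber" "yes")
		(match_test "!TARGET_SVE_PRED_CLOBBER"))
	     (const_string "no")
	   (and (eq_attr "pred_clobber" "no")
		(match_test "TARGET_SVE_PRED_CLOBBER"))
	     (const_string "no")]
	  (const_string "yes")))

With something along those lines, an alternative marked "yes" only
exists for TARGET_SVE_PRED_CLOBBER targets and one marked "no" only
exists for the others, which gives the either/or split described above.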
Thanks,
Richard

>
> @@ -1179,8 +1181,10 @@ (define_insn "*aarch64_rdffr_z_ptest"
> 	   UNSPEC_PTEST))
>    (clobber (match_scratch:VNx16BI 0))]
>   "TARGET_SVE && TARGET_NON_STREAMING"
> -  {@ [ cons: =0, 1 ]
> -     [ Upa , Upa ] rdffrs\t%0.b, %1/z
> +  {@ [ cons: =0, 1 ; attrs: pred_clobber ]
> +     [ &Upa , Upa; yes ] rdffrs\t%0.b, %1/z
> +     [ ?Upa , Upa; yes ] ^
> +     [ Upa , Upa; * ] ^
>    }
> )
>
> @@ -1195,8 +1199,10 @@ (define_insn "*aarch64_rdffr_ptest"
> 	   UNSPEC_PTEST))
>    (clobber (match_scratch:VNx16BI 0))]
>   "TARGET_SVE && TARGET_NON_STREAMING"
> -  {@ [ cons: =0, 1 ]
> -     [ Upa , Upa ] rdffrs\t%0.b, %1/z
> +  {@ [ cons: =0, 1 ; attrs: pred_clobber ]
> +     [ &Upa , Upa; yes ] rdffrs\t%0.b, %1/z
> +     [ ?Upa , Upa; yes ] ^
> +     [ Upa , Upa; * ] ^
>    }
> )
>
> @@ -1216,8 +1222,10 @@ (define_insn "*aarch64_rdffr_z_cc"
> 	   (reg:VNx16BI FFRT_REGNUM)
> 	   (match_dup 1)))]
>   "TARGET_SVE && TARGET_NON_STREAMING"
> -  {@ [ cons: =0, 1 ]
> -     [ Upa , Upa ] rdffrs\t%0.b, %1/z
> +  {@ [ cons: =0, 1 ; attrs: pred_clobber ]
> +     [ &Upa , Upa; yes ] rdffrs\t%0.b, %1/z
> +     [ ?Upa , Upa; yes ] ^
> +     [ Upa , Upa; * ] ^
>    }
> )
>
> @@ -1233,8 +1241,10 @@ (define_insn "*aarch64_rdffr_cc"
>    (set (match_operand:VNx16BI 0 "register_operand")
> 	(reg:VNx16BI FFRT_REGNUM))]
>   "TARGET_SVE && TARGET_NON_STREAMING"
> -  {@ [ cons: =0, 1 ]
> -     [ Upa , Upa ] rdffrs\t%0.b, %1/z
> +  {@ [ cons: =0, 1 ; attrs: pred_clobber ]
> +     [ &Upa , Upa; yes ] rdffrs\t%0.b, %1/z
> +     [ ?Upa , Upa; yes ] ^
> +     [ Upa , Upa; * ] ^
>    }
> )
>
> @@ -6651,8 +6661,10 @@ (define_insn "and3"
> 	(and:PRED_ALL (match_operand:PRED_ALL 1 "register_operand")
> 		      (match_operand:PRED_ALL 2 "register_operand")))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 ]
> -     [ Upa , Upa, Upa ] and\t%0.b, %1/z, %2.b, %2.b
> +  {@ [ cons: =0, 1 , 2 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa; yes ] and\t%0.b, %1/z, %2.b, %2.b
> +     [ ?Upa , Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa; * ] ^
>    }
> )
>
> @@ -6679,8 +6691,10 @@ (define_insn "@aarch64_pred__z"
> 			  (match_operand:PRED_ALL 3 "register_operand"))
> 	  (match_operand:PRED_ALL 1 "register_operand")))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, Upa ] \t%0.b, %1/z, %2.b, %3.b
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, Upa; yes ] \t%0.b, %1/z, %2.b, %3.b
> +     [ ?Upa , Upa, Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa, Upa; * ] ^
>    }
> )
>
> @@ -6703,8 +6717,10 @@ (define_insn "*3_cc"
> 	(and:PRED_ALL (LOGICAL:PRED_ALL (match_dup 2) (match_dup 3))
> 		      (match_dup 4)))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, Upa ] s\t%0.b, %1/z, %2.b, %3.b
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, Upa; yes ] s\t%0.b, %1/z, %2.b, %3.b
> +     [ ?Upa , Upa, Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa, Upa; * ] ^
>    }
> )
>
> @@ -6723,8 +6739,10 @@ (define_insn "*3_ptest"
> 	   UNSPEC_PTEST))
>    (clobber (match_scratch:VNx16BI 0))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, Upa ] s\t%0.b, %1/z, %2.b, %3.b
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, Upa; yes ] s\t%0.b, %1/z, %2.b, %3.b
> +     [ ?Upa , Upa, Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa, Upa; * ] ^
>    }
> )
>
> @@ -6745,8 +6763,10 @@ (define_insn "aarch64_pred__z"
> 			(match_operand:PRED_ALL 2 "register_operand"))
> 	  (match_operand:PRED_ALL 1 "register_operand")))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, Upa ] \t%0.b, %1/z, %2.b, %3.b
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, Upa; yes ] \t%0.b, %1/z, %2.b, %3.b
> +     [ ?Upa , Upa, Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa, Upa; * ] ^
>    }
> )
>
> @@ -6770,8 +6790,10 @@ (define_insn "*3_cc"
> 		      (match_dup 2))
> 	 (match_dup 4)))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, Upa ] s\t%0.b, %1/z, %2.b, %3.b
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, Upa; yes ] s\t%0.b, %1/z, %2.b, %3.b
> +     [ ?Upa , Upa, Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa, Upa; * ] ^
>    }
> )
>
> @@ -6791,8 +6813,10 @@ (define_insn "*3_ptest"
> 	   UNSPEC_PTEST))
>    (clobber (match_scratch:VNx16BI 0))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, Upa ] s\t%0.b, %1/z, %2.b, %3.b
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, Upa; yes ] s\t%0.b, %1/z, %2.b, %3.b
> +     [ ?Upa , Upa, Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa, Upa; * ] ^
>    }
> )
>
> @@ -6813,8 +6837,10 @@ (define_insn "aarch64_pred__z"
> 			(not:PRED_ALL (match_operand:PRED_ALL 3 "register_operand")))
> 	  (match_operand:PRED_ALL 1 "register_operand")))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, Upa ] \t%0.b, %1/z, %2.b, %3.b
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, Upa; yes ] \t%0.b, %1/z, %2.b, %3.b
> +     [ ?Upa , Upa, Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa, Upa; * ] ^
>    }
> )
>
> @@ -6839,8 +6865,10 @@ (define_insn "*3_cc"
> 		      (not:PRED_ALL (match_dup 3)))
> 	 (match_dup 4)))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, Upa ] s\t%0.b, %1/z, %2.b, %3.b
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, Upa; yes ] s\t%0.b, %1/z, %2.b, %3.b
> +     [ ?Upa , Upa, Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa, Upa; * ] ^
>    }
> )
>
> @@ -6861,8 +6889,10 @@ (define_insn "*3_ptest"
> 	   UNSPEC_PTEST))
>    (clobber (match_scratch:VNx16BI 0))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, Upa ] s\t%0.b, %1/z, %2.b, %3.b
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, Upa; yes ] s\t%0.b, %1/z, %2.b, %3.b
> +     [ ?Upa , Upa, Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa, Upa; * ] ^
>    }
> )
>
> @@ -8104,9 +8134,13 @@ (define_insn "@aarch64_pred_cmp"
> 	   UNSPEC_PRED_Z))
>    (clobber (reg:CC_NZC CC_REGNUM))]
>   "TARGET_SVE"
> -  {@ [ cons: =0 , 1 , 3 , 4 ]
> -     [ Upa , Upl , w , ] cmp\t%0., %1/z, %3., #%4
> -     [ Upa , Upl , w , w ] cmp\t%0., %1/z, %3., %4.
> +  {@ [ cons: =0 , 1 , 3 , 4 ; attrs: pred_clobber ]
> +     [ &Upa , Upl , w , ; yes ] cmp\t%0., %1/z, %3., #%4
> +     [ ?Upa , Upl , w , ; yes ] ^
> +     [ Upa , Upl , w , ; * ] ^
> +     [ &Upa , Upl , w , w ; yes ] cmp\t%0., %1/z, %3., %4.
> +     [ ?Upa , Upl , w , w ; yes ] ^
> +     [ Upa , Upl , w , w ; * ] ^
>    }
> )
>
> @@ -8136,9 +8170,13 @@ (define_insn_and_rewrite "*cmp_cc"
> 	   UNSPEC_PRED_Z))]
>   "TARGET_SVE
>    && aarch64_sve_same_pred_for_ptest_p (&operands[4], &operands[6])"
> -  {@ [ cons: =0 , 1 , 2 , 3 ]
> -     [ Upa , Upl , w , ] cmp\t%0., %1/z, %2., #%3
> -     [ Upa , Upl , w , w ] cmp\t%0., %1/z, %2., %3.
> +  {@ [ cons: =0 , 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upl , w , ; yes ] cmp\t%0., %1/z, %2., #%3
> +     [ ?Upa , Upl , w , ; yes ] ^
> +     [ Upa , Upl , w , ; * ] ^
> +     [ &Upa , Upl , w , w ; yes ] cmp\t%0., %1/z, %2., %3.
> +     [ ?Upa , Upl , w , w ; yes ] ^
> +     [ Upa , Upl , w , w ; * ] ^
>    }
>   "&& !rtx_equal_p (operands[4], operands[6])"
>   {
> @@ -8166,9 +8204,13 @@ (define_insn_and_rewrite "*cmp_ptest"
>    (clobber (match_scratch: 0))]
>   "TARGET_SVE
>    && aarch64_sve_same_pred_for_ptest_p (&operands[4], &operands[6])"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upl, w , ] cmp\t%0., %1/z, %2., #%3
> -     [ Upa , Upl, w , w ] cmp\t%0., %1/z, %2., %3.
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upl, w , ; yes ] cmp\t%0., %1/z, %2., #%3
> +     [ ?Upa , Upl, w , ; yes ] ^
> +     [ Upa , Upl, w , ; * ] ^
> +     [ &Upa , Upl, w , w ; yes ] cmp\t%0., %1/z, %2., %3.
> +     [ ?Upa , Upl, w , w ; yes ] ^
> +     [ Upa , Upl, w , w ; * ] ^
>    }
>   "&& !rtx_equal_p (operands[4], operands[6])"
>   {
> @@ -8221,8 +8263,10 @@ (define_insn "@aarch64_pred_cmp_wide"
> 	   UNSPEC_PRED_Z))
>    (clobber (reg:CC_NZC CC_REGNUM))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2, 3, 4 ]
> -     [ Upa , Upl, , w, w ] cmp\t%0., %1/z, %3., %4.d
> +  {@ [ cons: =0, 1 , 2, 3, 4; attrs: pred_clobber ]
> +     [ &Upa , Upl, , w, w; yes ] cmp\t%0., %1/z, %3., %4.d
> +     [ ?Upa , Upl, , w, w; yes ] ^
> +     [ Upa , Upl, , w, w; * ] ^
>    }
> )
>
> @@ -8254,8 +8298,10 @@ (define_insn "*aarch64_pred_cmp_wide_cc"
> 	   UNSPEC_PRED_Z))]
>   "TARGET_SVE
>    && aarch64_sve_same_pred_for_ptest_p (&operands[4], &operands[6])"
> -  {@ [ cons: =0, 1 , 2, 3, 6 ]
> -     [ Upa , Upl, w, w, Upl ] cmp\t%0., %1/z, %2., %3.d
> +  {@ [ cons: =0, 1 , 2, 3, 6 ; attrs: pred_clobber ]
> +     [ &Upa , Upl, w, w, Upl; yes ] cmp\t%0., %1/z, %2., %3.d
> +     [ ?Upa , Upl, w, w, Upl; yes ] ^
> +     [ Upa , Upl, w, w, Upl; * ] ^
>    }
> )
>
> @@ -8279,8 +8325,10 @@ (define_insn "*aarch64_pred_cmp_wide_ptest"
>    (clobber (match_scratch: 0))]
>   "TARGET_SVE
>    && aarch64_sve_same_pred_for_ptest_p (&operands[4], &operands[6])"
> -  {@ [ cons: =0, 1 , 2, 3, 6 ]
> -     [ Upa , Upl, w, w, Upl ] cmp\t%0., %1/z, %2., %3.d
> +  {@ [ cons: =0, 1 , 2, 3, 6 ; attrs: pred_clobber ]
> +     [ &Upa , Upl, w, w, Upl; yes ] cmp\t%0., %1/z, %2., %3.d
> +     [ ?Upa , Upl, w, w, Upl; yes ] ^
> +     [ Upa , Upl, w, w, Upl; * ] ^
>    }
> )
>
> @@ -9948,9 +9996,13 @@ (define_insn "@aarch64_brk"
> 	   (match_operand:VNx16BI 3 "aarch64_simd_reg_or_zero")]
> 	  SVE_BRK_UNARY))]
>   "TARGET_SVE"
> -  {@ [ cons: =0 , 1 , 2 , 3 ]
> -     [ Upa , Upa , Upa , Dz ] brk\t%0.b, %1/z, %2.b
> -     [ Upa , Upa , Upa , 0 ] brk\t%0.b, %1/m, %2.b
> +  {@ [ cons: =0 , 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa , Upa , Dz; yes ] brk\t%0.b, %1/z, %2.b
> +     [ ?Upa , Upa , Upa , Dz; yes ] ^
> +     [ Upa , Upa , Upa , Dz; * ] ^
> +     [ &Upa , Upa , Upa , 0 ; yes ] brk\t%0.b, %1/m, %2.b
> +     [ ?Upa , Upa , Upa , 0 ; yes ] ^
> +     [ Upa , Upa , Upa , 0 ; * ] ^
>    }
> )
>
> @@ -9974,8 +10026,10 @@ (define_insn "*aarch64_brk_cc"
> 	   (match_dup 3)]
> 	  SVE_BRK_UNARY))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 ]
> -     [ Upa , Upa, Upa ] brks\t%0.b, %1/z, %2.b
> +  {@ [ cons: =0, 1 , 2 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa; yes ] brks\t%0.b, %1/z, %2.b
> +     [ ?Upa , Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa; * ] ^
>    }
> )
>
> @@ -9994,8 +10048,10 @@ (define_insn "*aarch64_brk_ptest"
> 	   UNSPEC_PTEST))
>    (clobber (match_scratch:VNx16BI 0))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 ]
> -     [ Upa , Upa, Upa ] brks\t%0.b, %1/z, %2.b
> +  {@ [ cons: =0, 1 , 2 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa; yes ] brks\t%0.b, %1/z, %2.b
> +     [ ?Upa , Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa; * ] ^
>    }
> )
>
> @@ -10020,8 +10076,10 @@ (define_insn "@aarch64_brk"
> 	   (match_operand:VNx16BI 3 "register_operand")]
> 	  SVE_BRK_BINARY))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, ] brk\t%0.b, %1/z, %2.b, %.b
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, ; yes ] brk\t%0.b, %1/z, %2.b, %.b
> +     [ ?Upa , Upa, Upa, ; yes ] ^
> +     [ Upa , Upa, Upa, ; * ] ^
>    }
> )
>
> @@ -10046,8 +10104,10 @@ (define_insn_and_rewrite "*aarch64_brkn_cc"
> 	   (match_dup 3)]
> 	  UNSPEC_BRKN))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, 0 ] brkns\t%0.b, %1/z, %2.b, %0.b
> +  {@ [ cons: =0, 1 , 2 , 3; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, 0; yes ] brkns\t%0.b, %1/z, %2.b, %0.b
> +     [ ?Upa , Upa, Upa, 0; yes ] ^
> +     [ Upa , Upa, Upa, 0; * ] ^
>    }
>   "&& (operands[4] != CONST0_RTX (VNx16BImode)
>        || operands[5] != CONST0_RTX (VNx16BImode))"
> @@ -10072,8 +10132,10 @@ (define_insn_and_rewrite "*aarch64_brkn_ptest"
> 	   UNSPEC_PTEST))
>    (clobber (match_scratch:VNx16BI 0))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, 0 ] brkns\t%0.b, %1/z, %2.b, %0.b
> +  {@ [ cons: =0, 1 , 2 , 3; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, 0; yes ] brkns\t%0.b, %1/z, %2.b, %0.b
> +     [ ?Upa , Upa, Upa, 0; yes ] ^
> +     [ Upa , Upa, Upa, 0; * ] ^
>    }
>   "&& (operands[4] != CONST0_RTX (VNx16BImode)
>        || operands[5] != CONST0_RTX (VNx16BImode))"
> @@ -10103,8 +10165,10 @@ (define_insn "*aarch64_brk_cc"
> 	   (match_dup 3)]
> 	  SVE_BRKP))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, Upa ] brks\t%0.b, %1/z, %2.b, %3.b
> +  {@ [ cons: =0, 1 , 2 , 3 , 4; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, Upa, ; yes ] brks\t%0.b, %1/z, %2.b, %3.b
> +     [ ?Upa , Upa, Upa, Upa, ; yes ] ^
> +     [ Upa , Upa, Upa, Upa, ; * ] ^
>    }
> )
>
> @@ -10123,8 +10187,10 @@ (define_insn "*aarch64_brk_ptest"
> 	   UNSPEC_PTEST))
>    (clobber (match_scratch:VNx16BI 0))]
>   "TARGET_SVE"
> -  {@ [ cons: =0, 1 , 2 , 3 ]
> -     [ Upa , Upa, Upa, Upa ] brks\t%0.b, %1/z, %2.b, %3.b
> +  {@ [ cons: =0, 1 , 2 , 3 ; attrs: pred_clobber ]
> +     [ &Upa , Upa, Upa, Upa; yes ] brks\t%0.b, %1/z, %2.b, %3.b
> +     [ ?Upa , Upa, Upa, Upa; yes ] ^
> +     [ Upa , Upa, Upa, Upa; * ] ^
>    }
> )
>
> diff --git a/gcc/config/aarch64/aarch64-sve2.md b/gcc/config/aarch64/aarch64-sve2.md
> index aa12baf48355358ca4fefe88157df3aac6eb09bd..1a49494a69d8335e5f7d3ef4bd3a90d0805bba84 100644
> --- a/gcc/config/aarch64/aarch64-sve2.md
> +++ b/gcc/config/aarch64/aarch64-sve2.md
> @@ -3349,8 +3349,10 @@ (define_insn "@aarch64_pred_"
> 	   UNSPEC_PRED_Z))
>    (clobber (reg:CC_NZC CC_REGNUM))]
>   "TARGET_SVE2 && TARGET_NON_STREAMING"
> -  {@ [ cons: =0, 1 , 2, 3, 4 ]
> -     [ Upa , Upl, , w, w ] \t%0., %1/z, %3., %4.
> +  {@ [ cons: =0, 1 , 2, 3, 4; attrs: pred_clobber ]
> +     [ &Upa , Upl, , w, w; yes ] \t%0., %1/z, %3., %4.
> +     [ ?Upa , Upl, , w, w; yes ] ^
> +     [ Upa , Upl, , w, w; * ] ^
>    }
> )