From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, richard.sandiford@arm.com
Subject: [PATCH 17/17] aarch64: Remove redundant TARGET_* checks
References:
Date: Thu, 29 Sep 2022 11:43:06 +0100
In-Reply-To: (Richard Sandiford's message of "Thu, 29 Sep 2022 11:39:11 +0100")
Message-ID:
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain

After previous patches, it's possible to remove TARGET_* options
that are redundant due to (IMO) obvious dependencies.

gcc/
	* config/aarch64/aarch64.h (TARGET_CRYPTO, TARGET_SHA3, TARGET_SM4)
	(TARGET_DOTPROD): Don't depend on TARGET_SIMD.
	(TARGET_AES, TARGET_SHA2): Likewise.  Remove TARGET_CRYPTO test.
	(TARGET_FP_F16INST): Don't depend on TARGET_FLOAT.
	(TARGET_SVE2, TARGET_SVE_F32MM, TARGET_SVE_F64MM): Don't depend
	on TARGET_SVE.
	(TARGET_SVE2_AES, TARGET_SVE2_BITPERM, TARGET_SVE2_SHA3)
	(TARGET_SVE2_SM4): Don't depend on TARGET_SVE2.
	(TARGET_F32MM, TARGET_F64MM): Delete.
	* config/aarch64/aarch64-c.cc (aarch64_update_cpp_builtins): Guard
	float macros with just TARGET_FLOAT rather than
	TARGET_FLOAT || TARGET_SIMD.
	* config/aarch64/aarch64-simd.md (copysign3): Depend
	only on TARGET_SIMD, rather than TARGET_FLOAT && TARGET_SIMD.
	(aarch64_crypto_aesv16qi): Depend only on TARGET_AES,
	rather than TARGET_SIMD && TARGET_AES.
	(aarch64_crypto_aesv16qi): Likewise.
	(*aarch64_crypto_aese_fused): Likewise.
	(*aarch64_crypto_aesd_fused): Likewise.
	(aarch64_crypto_pmulldi): Likewise.
	(aarch64_crypto_pmullv2di): Likewise.
	(aarch64_crypto_sha1hsi): Likewise TARGET_SHA2.
	(aarch64_crypto_sha1hv4si): Likewise.
	(aarch64_be_crypto_sha1hv4si): Likewise.
	(aarch64_crypto_sha1su1v4si): Likewise.
	(aarch64_crypto_sha1v4si): Likewise.
	(aarch64_crypto_sha1su0v4si): Likewise.
	(aarch64_crypto_sha256hv4si): Likewise.
	(aarch64_crypto_sha256su0v4si): Likewise.
	(aarch64_crypto_sha256su1v4si): Likewise.
	(aarch64_crypto_sha512hqv2di): Likewise TARGET_SHA3.
	(aarch64_crypto_sha512su0qv2di): Likewise.
	(aarch64_crypto_sha512su1qv2di, eor3q4): Likewise.
	(aarch64_rax1qv2di, aarch64_xarqv2di, bcaxq4): Likewise.
	(aarch64_sm3ss1qv4si): Likewise TARGET_SM4.
	(aarch64_sm3ttqv4si): Likewise.
	(aarch64_sm3partwqv4si): Likewise.
	(aarch64_sm4eqv4si, aarch64_sm4ekeyqv4si): Likewise.
	* config/aarch64/aarch64.md (dihf2)
	(copysign3, copysign3_insn)
	(xorsign3): Remove redundant TARGET_FLOAT condition.
---
 gcc/config/aarch64/aarch64-c.cc    |  2 +-
 gcc/config/aarch64/aarch64-simd.md | 56 +++++++++++++++----------
 gcc/config/aarch64/aarch64.h       | 30 ++++++++--------
 gcc/config/aarch64/aarch64.md      |  8 ++---
 4 files changed, 47 insertions(+), 49 deletions(-)

diff --git a/gcc/config/aarch64/aarch64-c.cc b/gcc/config/aarch64/aarch64-c.cc
index e066ca5f43c..592af8cd729 100644
--- a/gcc/config/aarch64/aarch64-c.cc
+++ b/gcc/config/aarch64/aarch64-c.cc
@@ -92,7 +92,7 @@ aarch64_update_cpp_builtins (cpp_reader *pfile)
 
   aarch64_def_or_undef (TARGET_FLOAT, "__ARM_FEATURE_FMA", pfile);
 
-  if (TARGET_FLOAT || TARGET_SIMD)
+  if (TARGET_FLOAT)
     {
       builtin_define_with_int_value ("__ARM_FP", 0x0E);
       builtin_define ("__ARM_FP16_FORMAT_IEEE");
diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index dc80f826100..5386043739a 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -716,7 +716,7 @@ (define_expand "copysign3"
   [(match_operand:VHSDF 0 "register_operand")
    (match_operand:VHSDF 1 "register_operand")
    (match_operand:VHSDF 2 "register_operand")]
-  "TARGET_FLOAT && TARGET_SIMD"
+  "TARGET_SIMD"
 {
   rtx v_bitmask = gen_reg_rtx (mode);
   int bits = GET_MODE_UNIT_BITSIZE (mode) - 1;
@@ -8097,7 +8097,7 @@ (define_insn "aarch64_crypto_aesv16qi"
 	   (match_operand:V16QI 1 "register_operand" "%0")
 	   (match_operand:V16QI 2 "register_operand" "w"))]
 	 CRYPTO_AES))]
-  "TARGET_SIMD && TARGET_AES"
+  "TARGET_AES"
   "aes\\t%0.16b, %2.16b"
   [(set_attr "type" "crypto_aese")]
 )
@@ -8106,7 +8106,7 @@ (define_insn "aarch64_crypto_aesv16qi"
   [(set (match_operand:V16QI 0 "register_operand" "=w")
 	(unspec:V16QI [(match_operand:V16QI 1 "register_operand" "w")]
 	 CRYPTO_AESMC))]
-  "TARGET_SIMD && TARGET_AES"
+  "TARGET_AES"
   "aes\\t%0.16b, %1.16b"
   [(set_attr "type" "crypto_aesmc")]
 )
@@ -8125,7 +8125,7 @@ (define_insn "*aarch64_crypto_aese_fused"
 	    (match_operand:V16QI 2 "register_operand" "w"))]
 	  UNSPEC_AESE)]
 	UNSPEC_AESMC))]
-  "TARGET_SIMD && TARGET_AES
+  "TARGET_AES
    && aarch64_fusion_enabled_p (AARCH64_FUSE_AES_AESMC)"
   "aese\\t%0.16b, %2.16b\;aesmc\\t%0.16b, %0.16b"
   [(set_attr "type" "crypto_aese")
@@ -8146,7 +8146,7 @@ (define_insn "*aarch64_crypto_aesd_fused"
 	    (match_operand:V16QI 2 "register_operand" "w"))]
 	  UNSPEC_AESD)]
 	UNSPEC_AESIMC))]
-  "TARGET_SIMD && TARGET_AES
+  "TARGET_AES
    && aarch64_fusion_enabled_p (AARCH64_FUSE_AES_AESMC)"
   "aesd\\t%0.16b, %2.16b\;aesimc\\t%0.16b, %0.16b"
   [(set_attr "type" "crypto_aese")
@@ -8160,7 +8160,7 @@ (define_insn "aarch64_crypto_sha1hsi"
 	(unspec:SI [(match_operand:SI 1 "register_operand" "w")]
	 UNSPEC_SHA1H))]
-  "TARGET_SIMD && TARGET_SHA2"
+  "TARGET_SHA2"
   "sha1h\\t%s0, %s1"
   [(set_attr "type" "crypto_sha1_fast")]
 )
@@ -8170,7 +8170,7 @@ (define_insn "aarch64_crypto_sha1hv4si"
 	(unspec:SI [(vec_select:SI (match_operand:V4SI 1 "register_operand" "w")
		     (parallel [(const_int 0)]))]
	 UNSPEC_SHA1H))]
-  "TARGET_SIMD && TARGET_SHA2 && !BYTES_BIG_ENDIAN"
+  "TARGET_SHA2 && !BYTES_BIG_ENDIAN"
   "sha1h\\t%s0, %s1"
   [(set_attr "type" "crypto_sha1_fast")]
 )
@@ -8180,7 +8180,7 @@ (define_insn "aarch64_be_crypto_sha1hv4si"
 	(unspec:SI [(vec_select:SI (match_operand:V4SI 1 "register_operand" "w")
		     (parallel [(const_int 3)]))]
	 UNSPEC_SHA1H))]
-  "TARGET_SIMD && TARGET_SHA2 && BYTES_BIG_ENDIAN"
+  "TARGET_SHA2 && BYTES_BIG_ENDIAN"
   "sha1h\\t%s0, %s1"
   [(set_attr "type" "crypto_sha1_fast")]
 )
@@ -8190,7 +8190,7 @@ (define_insn "aarch64_crypto_sha1su1v4si"
 	(unspec:V4SI [(match_operand:V4SI 1 "register_operand" "0")
		      (match_operand:V4SI 2 "register_operand" "w")]
	 UNSPEC_SHA1SU1))]
-  "TARGET_SIMD && TARGET_SHA2"
+  "TARGET_SHA2"
   "sha1su1\\t%0.4s, %2.4s"
   [(set_attr "type" "crypto_sha1_fast")]
 )
@@ -8201,7 +8201,7 @@ (define_insn "aarch64_crypto_sha1v4si"
		      (match_operand:SI 2 "register_operand" "w")
		      (match_operand:V4SI 3 "register_operand" "w")]
	 CRYPTO_SHA1))]
-  "TARGET_SIMD && TARGET_SHA2"
+  "TARGET_SHA2"
   "sha1\\t%q0, %s2, %3.4s"
   [(set_attr "type" "crypto_sha1_slow")]
 )
@@ -8212,7 +8212,7 @@ (define_insn "aarch64_crypto_sha1su0v4si"
		      (match_operand:V4SI 2 "register_operand" "w")
		      (match_operand:V4SI 3 "register_operand" "w")]
	 UNSPEC_SHA1SU0))]
-  "TARGET_SIMD && TARGET_SHA2"
+  "TARGET_SHA2"
   "sha1su0\\t%0.4s, %2.4s, %3.4s"
   [(set_attr "type" "crypto_sha1_xor")]
 )
@@ -8225,7 +8225,7 @@ (define_insn "aarch64_crypto_sha256hv4si"
		      (match_operand:V4SI 2 "register_operand" "w")
		      (match_operand:V4SI 3 "register_operand" "w")]
	 CRYPTO_SHA256))]
-  "TARGET_SIMD && TARGET_SHA2"
+  "TARGET_SHA2"
   "sha256h\\t%q0, %q2, %3.4s"
   [(set_attr "type" "crypto_sha256_slow")]
 )
@@ -8235,7 +8235,7 @@ (define_insn "aarch64_crypto_sha256su0v4si"
 	(unspec:V4SI [(match_operand:V4SI 1 "register_operand" "0")
		      (match_operand:V4SI 2 "register_operand" "w")]
	 UNSPEC_SHA256SU0))]
-  "TARGET_SIMD && TARGET_SHA2"
+  "TARGET_SHA2"
   "sha256su0\\t%0.4s, %2.4s"
   [(set_attr "type" "crypto_sha256_fast")]
 )
@@ -8246,7 +8246,7 @@ (define_insn "aarch64_crypto_sha256su1v4si"
		      (match_operand:V4SI 2 "register_operand" "w")
		      (match_operand:V4SI 3 "register_operand" "w")]
	 UNSPEC_SHA256SU1))]
-  "TARGET_SIMD && TARGET_SHA2"
+  "TARGET_SHA2"
   "sha256su1\\t%0.4s, %2.4s, %3.4s"
   [(set_attr "type" "crypto_sha256_slow")]
 )
@@ -8259,7 +8259,7 @@ (define_insn "aarch64_crypto_sha512hqv2di"
		      (match_operand:V2DI 2 "register_operand" "w")
		      (match_operand:V2DI 3 "register_operand" "w")]
	 CRYPTO_SHA512))]
-  "TARGET_SIMD && TARGET_SHA3"
+  "TARGET_SHA3"
   "sha512h\\t%q0, %q2, %3.2d"
   [(set_attr "type" "crypto_sha512")]
 )
@@ -8269,7 +8269,7 @@ (define_insn "aarch64_crypto_sha512su0qv2di"
 	(unspec:V2DI [(match_operand:V2DI 1 "register_operand" "0")
		      (match_operand:V2DI 2 "register_operand" "w")]
	 UNSPEC_SHA512SU0))]
-  "TARGET_SIMD && TARGET_SHA3"
+  "TARGET_SHA3"
   "sha512su0\\t%0.2d, %2.2d"
   [(set_attr "type" "crypto_sha512")]
 )
@@ -8280,7 +8280,7 @@ (define_insn "aarch64_crypto_sha512su1qv2di"
		      (match_operand:V2DI 2 "register_operand" "w")
		      (match_operand:V2DI 3 "register_operand" "w")]
	 UNSPEC_SHA512SU1))]
-  "TARGET_SIMD && TARGET_SHA3"
+  "TARGET_SHA3"
   "sha512su1\\t%0.2d, %2.2d, %3.2d"
   [(set_attr "type" "crypto_sha512")]
 )
@@ -8294,7 +8294,7 @@ (define_insn "eor3q4"
	   (match_operand:VQ_I 2 "register_operand" "w")
	   (match_operand:VQ_I 3 "register_operand" "w"))
	  (match_operand:VQ_I 1 "register_operand" "w")))]
-  "TARGET_SIMD && TARGET_SHA3"
+  "TARGET_SHA3"
   "eor3\\t%0.16b, %1.16b, %2.16b, %3.16b"
   [(set_attr "type" "crypto_sha3")]
 )
@@ -8306,7 +8306,7 @@ (define_insn "aarch64_rax1qv2di"
	   (match_operand:V2DI 2 "register_operand" "w")
	   (const_int 1))
	  (match_operand:V2DI 1 "register_operand" "w")))]
-  "TARGET_SIMD && TARGET_SHA3"
+  "TARGET_SHA3"
   "rax1\\t%0.2d, %1.2d, %2.2d"
   [(set_attr "type" "crypto_sha3")]
 )
@@ -8318,7 +8318,7 @@ (define_insn "aarch64_xarqv2di"
	   (match_operand:V2DI 1 "register_operand" "%w")
	   (match_operand:V2DI 2 "register_operand" "w"))
	  (match_operand:SI 3 "aarch64_simd_shift_imm_di" "Usd")))]
-  "TARGET_SIMD && TARGET_SHA3"
+  "TARGET_SHA3"
   "xar\\t%0.2d, %1.2d, %2.2d, %3"
   [(set_attr "type" "crypto_sha3")]
 )
@@ -8330,7 +8330,7 @@ (define_insn "bcaxq4"
	   (not:VQ_I (match_operand:VQ_I 3 "register_operand" "w"))
	   (match_operand:VQ_I 2 "register_operand" "w"))
	  (match_operand:VQ_I 1 "register_operand" "w")))]
-  "TARGET_SIMD && TARGET_SHA3"
+  "TARGET_SHA3"
   "bcax\\t%0.16b, %1.16b, %2.16b, %3.16b"
   [(set_attr "type" "crypto_sha3")]
 )
@@ -8343,7 +8343,7 @@ (define_insn "aarch64_sm3ss1qv4si"
		      (match_operand:V4SI 2 "register_operand" "w")
		      (match_operand:V4SI 3 "register_operand" "w")]
	 UNSPEC_SM3SS1))]
-  "TARGET_SIMD && TARGET_SM4"
+  "TARGET_SM4"
   "sm3ss1\\t%0.4s, %1.4s, %2.4s, %3.4s"
   [(set_attr "type" "crypto_sm3")]
 )
@@ -8356,7 +8356,7 @@ (define_insn "aarch64_sm3ttqv4si"
		      (match_operand:V4SI 3 "register_operand" "w")
		      (match_operand:SI 4 "aarch64_imm2" "Ui2")]
	 CRYPTO_SM3TT))]
-  "TARGET_SIMD && TARGET_SM4"
+  "TARGET_SM4"
   "sm3tt\\t%0.4s, %2.4s, %3.4s[%4]"
   [(set_attr "type" "crypto_sm3")]
 )
@@ -8367,7 +8367,7 @@ (define_insn "aarch64_sm3partwqv4si"
		      (match_operand:V4SI 2 "register_operand" "w")
		      (match_operand:V4SI 3 "register_operand" "w")]
	 CRYPTO_SM3PART))]
-  "TARGET_SIMD && TARGET_SM4"
+  "TARGET_SM4"
   "sm3partw\\t%0.4s, %2.4s, %3.4s"
   [(set_attr "type" "crypto_sm3")]
 )
@@ -8379,7 +8379,7 @@ (define_insn "aarch64_sm4eqv4si"
 	(unspec:V4SI [(match_operand:V4SI 1 "register_operand" "0")
		      (match_operand:V4SI 2 "register_operand" "w")]
	 UNSPEC_SM4E))]
-  "TARGET_SIMD && TARGET_SM4"
+  "TARGET_SM4"
   "sm4e\\t%0.4s, %2.4s"
   [(set_attr "type" "crypto_sm4")]
 )
@@ -8389,7 +8389,7 @@ (define_insn "aarch64_sm4ekeyqv4si"
 	(unspec:V4SI [(match_operand:V4SI 1 "register_operand" "w")
		      (match_operand:V4SI 2 "register_operand" "w")]
	 UNSPEC_SM4EKEY))]
-  "TARGET_SIMD && TARGET_SM4"
+  "TARGET_SM4"
   "sm4ekey\\t%0.4s, %1.4s, %2.4s"
   [(set_attr "type" "crypto_sm4")]
 )
@@ -8975,7 +8975,7 @@ (define_insn "aarch64_crypto_pmulldi"
 	(unspec:TI [(match_operand:DI 1 "register_operand" "w")
		    (match_operand:DI 2 "register_operand" "w")]
	 UNSPEC_PMULL))]
-  "TARGET_SIMD && TARGET_AES"
+  "TARGET_AES"
   "pmull\\t%0.1q, %1.1d, %2.1d"
   [(set_attr "type" "crypto_pmull")]
 )
@@ -8985,7 +8985,7 @@ (define_insn "aarch64_crypto_pmullv2di"
 	(unspec:TI [(match_operand:V2DI 1 "register_operand" "w")
		    (match_operand:V2DI 2 "register_operand" "w")]
	 UNSPEC_PMULL2))]
-  "TARGET_SIMD && TARGET_AES"
+  "TARGET_AES"
   "pmull2\\t%0.1q, %1.2d, %2.2d"
   [(set_attr "type" "crypto_pmull")]
 )
diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
index 6ee63570551..2d6221826bb 100644
--- a/gcc/config/aarch64/aarch64.h
+++ b/gcc/config/aarch64/aarch64.h
@@ -222,19 +222,19 @@ enum class aarch64_feature : unsigned char {
 #define AARCH64_ISA_LS64	   (aarch64_isa_flags & AARCH64_FL_LS64)
 
 /* Crypto is an optional extension to AdvSIMD.  */
-#define TARGET_CRYPTO (TARGET_SIMD && AARCH64_ISA_CRYPTO)
+#define TARGET_CRYPTO (AARCH64_ISA_CRYPTO)
 
 /* SHA2 is an optional extension to AdvSIMD.  */
-#define TARGET_SHA2 ((TARGET_SIMD && AARCH64_ISA_SHA2) || TARGET_CRYPTO)
+#define TARGET_SHA2 (AARCH64_ISA_SHA2)
 
 /* SHA3 is an optional extension to AdvSIMD.  */
-#define TARGET_SHA3 (TARGET_SIMD && AARCH64_ISA_SHA3)
+#define TARGET_SHA3 (AARCH64_ISA_SHA3)
 
 /* AES is an optional extension to AdvSIMD.  */
-#define TARGET_AES ((TARGET_SIMD && AARCH64_ISA_AES) || TARGET_CRYPTO)
+#define TARGET_AES (AARCH64_ISA_AES)
 
 /* SM is an optional extension to AdvSIMD.  */
-#define TARGET_SM4 (TARGET_SIMD && AARCH64_ISA_SM4)
+#define TARGET_SM4 (AARCH64_ISA_SM4)
 
 /* FP16FML is an optional extension to AdvSIMD.  */
 #define TARGET_F16FML (TARGET_SIMD && AARCH64_ISA_F16FML && TARGET_FP_F16INST)
@@ -246,29 +246,29 @@ enum class aarch64_feature : unsigned char {
 #define TARGET_LSE (AARCH64_ISA_LSE)
 
 /* ARMv8.2-A FP16 support that can be enabled through the +fp16 extension.  */
-#define TARGET_FP_F16INST (TARGET_FLOAT && AARCH64_ISA_F16)
+#define TARGET_FP_F16INST (AARCH64_ISA_F16)
 #define TARGET_SIMD_F16INST (TARGET_SIMD && AARCH64_ISA_F16)
 
 /* Dot Product is an optional extension to AdvSIMD enabled through +dotprod.  */
-#define TARGET_DOTPROD (TARGET_SIMD && AARCH64_ISA_DOTPROD)
+#define TARGET_DOTPROD (AARCH64_ISA_DOTPROD)
 
 /* SVE instructions, enabled through +sve.  */
 #define TARGET_SVE (AARCH64_ISA_SVE)
 
 /* SVE2 instructions, enabled through +sve2.  */
-#define TARGET_SVE2 (TARGET_SVE && AARCH64_ISA_SVE2)
+#define TARGET_SVE2 (AARCH64_ISA_SVE2)
 
 /* SVE2 AES instructions, enabled through +sve2-aes.  */
-#define TARGET_SVE2_AES (TARGET_SVE2 && AARCH64_ISA_SVE2_AES)
+#define TARGET_SVE2_AES (AARCH64_ISA_SVE2_AES)
 
 /* SVE2 BITPERM instructions, enabled through +sve2-bitperm.  */
-#define TARGET_SVE2_BITPERM (TARGET_SVE2 && AARCH64_ISA_SVE2_BITPERM)
+#define TARGET_SVE2_BITPERM (AARCH64_ISA_SVE2_BITPERM)
 
 /* SVE2 SHA3 instructions, enabled through +sve2-sha3.  */
-#define TARGET_SVE2_SHA3 (TARGET_SVE2 && AARCH64_ISA_SVE2_SHA3)
+#define TARGET_SVE2_SHA3 (AARCH64_ISA_SVE2_SHA3)
 
 /* SVE2 SM4 instructions, enabled through +sve2-sm4.  */
-#define TARGET_SVE2_SM4 (TARGET_SVE2 && AARCH64_ISA_SVE2_SM4)
+#define TARGET_SVE2_SM4 (AARCH64_ISA_SVE2_SM4)
 
 /* ARMv8.3-A features.  */
 #define TARGET_ARMV8_3	(AARCH64_ISA_V8_3A)
@@ -296,12 +296,10 @@ enum class aarch64_feature : unsigned char {
 #define TARGET_SVE_I8MM (TARGET_SVE && AARCH64_ISA_I8MM)
 
 /* F32MM instructions are enabled through +f32mm.  */
-#define TARGET_F32MM (AARCH64_ISA_F32MM)
-#define TARGET_SVE_F32MM (TARGET_SVE && AARCH64_ISA_F32MM)
+#define TARGET_SVE_F32MM (AARCH64_ISA_F32MM)
 
 /* F64MM instructions are enabled through +f64mm.  */
-#define TARGET_F64MM (AARCH64_ISA_F64MM)
-#define TARGET_SVE_F64MM (TARGET_SVE && AARCH64_ISA_F64MM)
+#define TARGET_SVE_F64MM (AARCH64_ISA_F64MM)
 
 /* BF16 instructions are enabled through +bf16.  */
 #define TARGET_BF16_FP (AARCH64_ISA_BF16)
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 3f8e40a48b5..0a7633e5dd6 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -6468,7 +6468,7 @@ (define_expand "sihf2"
 (define_expand "dihf2"
   [(set (match_operand:HF 0 "register_operand")
	(FLOATUORS:HF (match_operand:DI 1 "register_operand")))]
-  "TARGET_FLOAT && (TARGET_FP_F16INST || TARGET_SIMD)"
+  "TARGET_FP_F16INST || TARGET_SIMD"
 {
   if (TARGET_FP_F16INST)
     emit_insn (gen_aarch64_fp16_dihf2 (operands[0], operands[1]));
@@ -6727,7 +6727,7 @@ (define_expand "copysign3"
   [(match_operand:GPF 0 "register_operand")
    (match_operand:GPF 1 "register_operand")
    (match_operand:GPF 2 "register_operand")]
-  "TARGET_FLOAT && TARGET_SIMD"
+  "TARGET_SIMD"
 {
   rtx bitmask = gen_reg_rtx (mode);
   emit_move_insn (bitmask, GEN_INT (HOST_WIDE_INT_M1U
@@ -6744,7 +6744,7 @@ (define_insn "copysign3_insn"
	 (match_operand:GPF 2 "register_operand" "w,w,0,0")
	 (match_operand: 3 "register_operand" "0,w,w,X")]
	 UNSPEC_COPYSIGN))]
-  "TARGET_FLOAT && TARGET_SIMD"
+  "TARGET_SIMD"
   "@
   bsl\\t%0., %2., %1.
   bit\\t%0., %2., %3.
@@ -6765,7 +6765,7 @@ (define_expand "xorsign3"
   [(match_operand:GPF 0 "register_operand")
    (match_operand:GPF 1 "register_operand")
    (match_operand:GPF 2 "register_operand")]
-  "TARGET_FLOAT && TARGET_SIMD"
+  "TARGET_SIMD"
 {
   machine_mode imode = mode;
-- 
2.25.1