From: Richard Sandiford
To: Jennifer Schmitz
Cc: gcc-patches@gcc.gnu.org, Kyrylo Tkachov
Subject: Re: [PATCH 2/2] SVE intrinsics: Fold constant operands for svmul
In-Reply-To: <13227B95-046C-468F-AD78-C28DC4B8332A@nvidia.com> (Jennifer Schmitz's message of "Mon, 19 Aug 2024 07:04:21 +0000")
References: <13227B95-046C-468F-AD78-C28DC4B8332A@nvidia.com>
Date: Mon, 19 Aug 2024 13:59:53 +0100

Jennifer Schmitz writes:
> This patch implements constant folding for svmul. It uses the
> gimple_folder::const_fold function to fold constant integer operands.
> Additionally, if at least one of the operands is a zero vector, svmul is
> folded to a zero vector (in case of ptrue, _x, or _z).
> Tests were added to check the produced assembly for different
> predicates and signed and unsigned integers.
>
> The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression.
> OK for mainline?
>
> Signed-off-by: Jennifer Schmitz
>
> gcc/
>
> 	* config/aarch64/aarch64-sve-builtins-base.cc
> 	(svmul_impl::fold): Implement function and add constant folding.
>
> gcc/testsuite/
>
> 	* gcc.target/aarch64/sve/const_fold_mul_1.c: New test.
> 	* gcc.target/aarch64/sve/const_fold_mul_zero.c: Likewise.
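Just to make sure I've understood the intent: with the patch, a call whose
operands are both constant vectors, such as the s64_x_pg case from the new
test below, should now compile down to a single constant move (something
like "mov z0.d, #15") rather than an actual MUL:

  #include <arm_sve.h>

  /* Both operands are constant, so the multiplication is expected to be
     folded at compile time into a vector of 15s.  */
  svint64_t s64_x_pg (svbool_t pg)
  {
    return svmul_x (pg, svdup_s64 (5), svdup_s64 (3));
  }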
>
> From 42b98071845072bde7411d5a8be792513f601193 Mon Sep 17 00:00:00 2001
> From: Jennifer Schmitz
> Date: Thu, 15 Aug 2024 06:21:53 -0700
> Subject: [PATCH 2/2] SVE intrinsics: Fold constant operands for svmul
>
> This patch implements constant folding for svmul. It uses the
> gimple_folder::const_fold function to fold constant integer operands.
> Additionally, if at least one of the operands is a zero vector, svmul is
> folded to a zero vector (in case of ptrue, _x, or _z).
> Tests were added to check the produced assembly for different
> predicates and signed and unsigned integers.
>
> The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression.
> OK for mainline?
>
> Signed-off-by: Jennifer Schmitz
>
> gcc/
>
> 	* config/aarch64/aarch64-sve-builtins-base.cc
> 	(svmul_impl::fold): Implement function and add constant folding.
>
> gcc/testsuite/
>
> 	* gcc.target/aarch64/sve/const_fold_mul_1.c: New test.
> 	* gcc.target/aarch64/sve/const_fold_mul_zero.c: Likewise.
> ---
>  .../aarch64/aarch64-sve-builtins-base.cc      |  36 ++++-
>  .../gcc.target/aarch64/sve/const_fold_mul_1.c | 128 ++++++++++++++++++
>  .../aarch64/sve/const_fold_mul_zero.c         | 109 +++++++++++++--
>  3 files changed, 262 insertions(+), 11 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/const_fold_mul_1.c
>
> diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> index 7f948ecc0c7..ef0e11fe327 100644
> --- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> +++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> @@ -2019,6 +2019,40 @@ public:
>    }
>  };
>
> +class svmul_impl : public rtx_code_function
> +{
> +public:
> +  CONSTEXPR svmul_impl ()
> +    : rtx_code_function (MULT, MULT, UNSPEC_COND_FMUL) {}
> +
> +  gimple *
> +  fold (gimple_folder &f) const override
> +  {
> +    tree pg = gimple_call_arg (f.call, 0);
> +    tree op1 = gimple_call_arg (f.call, 1);
> +    tree op2 = gimple_call_arg (f.call, 2);
> +
> +    /* For integers, if one of the operands is a zero vector,
> +       fold to zero vector.  */
> +    int step = f.type_suffix (0).element_bytes;
> +    if (f.pred != PRED_m || is_ptrue (pg, step))
> +      {
> +	if (vector_cst_all_same (op1, step)
> +	    && integer_zerop (VECTOR_CST_ENCODED_ELT (op1, 0)))
> +	  return gimple_build_assign (f.lhs, op1);
> +	if (vector_cst_all_same (op2, step)
> +	    && integer_zerop (VECTOR_CST_ENCODED_ELT (op2, 0)))
> +	  return gimple_build_assign (f.lhs, op2);
> +      }

Similarly to part 1, I think we should drop this and just use...

> +
> +    /* Try to fold constant operands.  */
> +    if (gimple *new_stmt = f.const_fold (MULT_EXPR))
> +      return new_stmt;

...this.
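That is, the whole override would reduce to something like this (just a
sketch, untested):

  gimple *
  fold (gimple_folder &f) const override
  {
    /* Try to fold constant operands.  */
    if (gimple *new_stmt = f.const_fold (MULT_EXPR))
      return new_stmt;

    return NULL;
  }

with the rest of svmul_impl staying as in your patch.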
Thanks,
Richard

> +
> +    return NULL;
> +  }
> +};
> +
>  class svnand_impl : public function_base
>  {
>  public:
> @@ -3203,7 +3237,7 @@ FUNCTION (svmls_lane, svmls_lane_impl,)
>  FUNCTION (svmmla, svmmla_impl,)
>  FUNCTION (svmov, svmov_impl,)
>  FUNCTION (svmsb, svmsb_impl,)
> -FUNCTION (svmul, rtx_code_function, (MULT, MULT, UNSPEC_COND_FMUL))
> +FUNCTION (svmul, svmul_impl,)
>  FUNCTION (svmul_lane, CODE_FOR_MODE0 (aarch64_mul_lane),)
>  FUNCTION (svmulh, unspec_based_function, (UNSPEC_SMUL_HIGHPART,
> 					   UNSPEC_UMUL_HIGHPART, -1))
> diff --git a/gcc/testsuite/gcc.target/aarch64/sve/const_fold_mul_1.c b/gcc/testsuite/gcc.target/aarch64/sve/const_fold_mul_1.c
> new file mode 100644
> index 00000000000..95273e2e57d
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/sve/const_fold_mul_1.c
> @@ -0,0 +1,128 @@
> +/* { dg-final { check-function-bodies "**" "" } } */
> +/* { dg-options "-O2" } */
> +
> +#include "arm_sve.h"
> +
> +/*
> +** s64_x_pg:
> +**	mov	z[0-9]+\.d, #15
> +**	ret
> +*/
> +svint64_t s64_x_pg (svbool_t pg)
> +{
> +  return svmul_x (pg, svdup_s64 (5), svdup_s64 (3));
> +}
> +
> +/*
> +** s64_z_pg:
> +**	mov	z[0-9]+\.d, p[0-7]/z, #15
> +**	ret
> +*/
> +svint64_t s64_z_pg (svbool_t pg)
> +{
> +  return svmul_z (pg, svdup_s64 (5), svdup_s64 (3));
> +}
> +
> +/*
> +** s64_m_pg:
> +**	mov	(z[0-9]+\.d), #3
> +**	mov	(z[0-9]+\.d), #5
> +**	mul	\2, p[0-7]/m, \2, \1
> +**	ret
> +*/
> +svint64_t s64_m_pg (svbool_t pg)
> +{
> +  return svmul_m (pg, svdup_s64 (5), svdup_s64 (3));
> +}
> +
> +/*
> +** s64_x_ptrue:
> +**	mov	z[0-9]+\.d, #15
> +**	ret
> +*/
> +svint64_t s64_x_ptrue ()
> +{
> +  return svmul_x (svptrue_b64 (), svdup_s64 (5), svdup_s64 (3));
> +}
> +
> +/*
> +** s64_z_ptrue:
> +**	mov	z[0-9]+\.d, #15
> +**	ret
> +*/
> +svint64_t s64_z_ptrue ()
> +{
> +  return svmul_z (svptrue_b64 (), svdup_s64 (5), svdup_s64 (3));
> +}
> +
> +/*
> +** s64_m_ptrue:
> +**	mov	z[0-9]+\.d, #15
> +**	ret
> +*/
> +svint64_t s64_m_ptrue ()
> +{
> +  return svmul_m (svptrue_b64 (), svdup_s64 (5), svdup_s64 (3));
> +}
> +
> +/*
> +** u64_x_pg:
> +**	mov	z[0-9]+\.d, #15
> +**	ret
> +*/
> +svuint64_t u64_x_pg (svbool_t pg)
> +{
> +  return svmul_x (pg, svdup_u64 (5), svdup_u64 (3));
> +}
> +
> +/*
> +** u64_z_pg:
> +**	mov	z[0-9]+\.d, p[0-7]/z, #15
> +**	ret
> +*/
> +svuint64_t u64_z_pg (svbool_t pg)
> +{
> +  return svmul_z (pg, svdup_u64 (5), svdup_u64 (3));
> +}
> +
> +/*
> +** u64_m_pg:
> +**	mov	(z[0-9]+\.d), #3
> +**	mov	(z[0-9]+\.d), #5
> +**	mul	\2, p[0-7]/m, \2, \1
> +**	ret
> +*/
> +svuint64_t u64_m_pg (svbool_t pg)
> +{
> +  return svmul_m (pg, svdup_u64 (5), svdup_u64 (3));
> +}
> +
> +/*
> +** u64_x_ptrue:
> +**	mov	z[0-9]+\.d, #15
> +**	ret
> +*/
> +svuint64_t u64_x_ptrue ()
> +{
> +  return svmul_x (svptrue_b64 (), svdup_u64 (5), svdup_u64 (3));
> +}
> +
> +/*
> +** u64_z_ptrue:
> +**	mov	z[0-9]+\.d, #15
> +**	ret
> +*/
> +svuint64_t u64_z_ptrue ()
> +{
> +  return svmul_z (svptrue_b64 (), svdup_u64 (5), svdup_u64 (3));
> +}
> +
> +/*
> +** u64_m_ptrue:
> +**	mov	z[0-9]+\.d, #15
> +**	ret
> +*/
> +svuint64_t u64_m_ptrue ()
> +{
> +  return svmul_m (svptrue_b64 (), svdup_u64 (5), svdup_u64 (3));
> +}
> diff --git a/gcc/testsuite/gcc.target/aarch64/sve/const_fold_mul_zero.c b/gcc/testsuite/gcc.target/aarch64/sve/const_fold_mul_zero.c
> index 793291449c1..c6295bbc640 100644
> --- a/gcc/testsuite/gcc.target/aarch64/sve/const_fold_mul_zero.c
> +++ b/gcc/testsuite/gcc.target/aarch64/sve/const_fold_mul_zero.c
> @@ -20,7 +20,7 @@ svint64_t s64_x_pg_op1 (svbool_t pg, svint64_t op2)
>  */
>  svint64_t s64_z_pg_op1 (svbool_t pg, svint64_t op2)
>  {
> -  return svdiv_z (pg, svdup_s64 (0), op2);
> +  return svmul_z (pg, svdup_s64 (0), op2);
>  }
>
>  /*
> @@ -30,7 +30,7 @@ svint64_t s64_z_pg_op1 (svbool_t pg, svint64_t op2)
>  */
>  svint64_t s64_m_pg_op1 (svbool_t pg, svint64_t op2)
>  {
> -  return svdiv_m (pg, svdup_s64 (0), op2);
> +  return svmul_m (pg, svdup_s64 (0), op2);
>  }
>
>  /*
> @@ -40,7 +40,7 @@ svint64_t s64_m_pg_op1 (svbool_t pg, svint64_t op2)
>  */
>  svint64_t s64_x_pg_op2 (svbool_t pg, svint64_t op1)
>  {
> -  return svdiv_x (pg, op1, svdup_s64 (0));
> +  return svmul_x (pg, op1, svdup_s64 (0));
>  }
>
>  /*
> @@ -50,18 +50,17 @@ svint64_t s64_x_pg_op2 (svbool_t pg, svint64_t op1)
>  */
>  svint64_t s64_z_pg_op2 (svbool_t pg, svint64_t op1)
>  {
> -  return svdiv_z (pg, op1, svdup_s64 (0));
> +  return svmul_z (pg, op1, svdup_s64 (0));
>  }
>
>  /*
>  ** s64_m_pg_op2:
> -**	mov	(z[0-9]+)\.b, #0
> -**	mul	(z[0-9]+\.d), p[0-7]+/m, \2, \1\.d
> +**	mov	z[0-9]+\.d, p[0-7]/m, #0
>  **	ret
>  */
>  svint64_t s64_m_pg_op2 (svbool_t pg, svint64_t op1)
>  {
> -  return svdiv_m (pg, op1, svdup_s64 (0));
> +  return svmul_m (pg, op1, svdup_s64 (0));
>  }
>
>  /*
> @@ -71,7 +70,7 @@ svint64_t s64_m_pg_op2 (svbool_t pg, svint64_t op1)
>  */
>  svint64_t s64_m_ptrue_op1 (svint64_t op2)
>  {
> -  return svdiv_m (svptrue_b64 (), svdup_s64 (0), op2);
> +  return svmul_m (svptrue_b64 (), svdup_s64 (0), op2);
>  }
>
>  /*
> @@ -81,7 +80,7 @@ svint64_t s64_m_ptrue_op1 (svint64_t op2)
>  */
>  svint64_t s64_m_ptrue_op2 (svint64_t op1)
>  {
> -  return svdiv_m (svptrue_b64 (), op1, svdup_s64 (0));
> +  return svmul_m (svptrue_b64 (), op1, svdup_s64 (0));
>  }
>
>  /*
> @@ -91,5 +90,95 @@ svint64_t s64_m_ptrue_op2 (svint64_t op1)
>  */
>  svint64_t s64_m_ptrue_op1_op2 ()
>  {
> -  return svdiv_m (svptrue_b64 (), svdup_s64 (0), svdup_s64 (0));
> +  return svmul_m (svptrue_b64 (), svdup_s64 (0), svdup_s64 (0));
> +}
> +
> +/*
> +** u64_x_pg_op1:
> +**	mov	z[0-9]+\.b, #0
> +**	ret
> +*/
> +svuint64_t u64_x_pg_op1 (svbool_t pg, svuint64_t op2)
> +{
> +  return svmul_x (pg, svdup_u64 (0), op2);
> +}
> +
> +/*
> +** u64_z_pg_op1:
> +**	mov	z[0-9]+\.b, #0
> +**	ret
> +*/
> +svuint64_t u64_z_pg_op1 (svbool_t pg, svuint64_t op2)
> +{
> +  return svmul_z (pg, svdup_u64 (0), op2);
> +}
> +
> +/*
> +** u64_m_pg_op1:
> +**	mov	z[0-9]+\.d, p[0-7]/z, #0
> +**	ret
> +*/
> +svuint64_t u64_m_pg_op1 (svbool_t pg, svuint64_t op2)
> +{
> +  return svmul_m (pg, svdup_u64 (0), op2);
> +}
> +
> +/*
> +** u64_x_pg_op2:
> +**	mov	z[0-9]+\.b, #0
> +**	ret
> +*/
> +svuint64_t u64_x_pg_op2 (svbool_t pg, svuint64_t op1)
> +{
> +  return svmul_x (pg, op1, svdup_u64 (0));
> +}
> +
> +/*
> +** u64_z_pg_op2:
> +**	mov	z[0-9]+\.b, #0
> +**	ret
> +*/
> +svuint64_t u64_z_pg_op2 (svbool_t pg, svuint64_t op1)
> +{
> +  return svmul_z (pg, op1, svdup_u64 (0));
> +}
> +
> +/*
> +** u64_m_pg_op2:
> +**	mov	z[0-9]+\.d, p[0-7]/m, #0
> +**	ret
> +*/
> +svuint64_t u64_m_pg_op2 (svbool_t pg, svuint64_t op1)
> +{
> +  return svmul_m (pg, op1, svdup_u64 (0));
> +}
> +
> +/*
> +** u64_m_ptrue_op1:
> +**	mov	z[0-9]+\.b, #0
> +**	ret
> +*/
> +svuint64_t u64_m_ptrue_op1 (svuint64_t op2)
> +{
> +  return svmul_m (svptrue_b64 (), svdup_u64 (0), op2);
> +}
> +
> +/*
> +** u64_m_ptrue_op2:
> +**	mov	z[0-9]+\.b, #0
> +**	ret
> +*/
> +svuint64_t u64_m_ptrue_op2 (svuint64_t op1)
> +{
> +  return svmul_m (svptrue_b64 (), op1, svdup_u64 (0));
> +}
> +
> +/*
> +** u64_m_ptrue_op1_op2:
> +**	mov	z[0-9]+\.b, #0
> +**	ret
> +*/
> +svuint64_t u64_m_ptrue_op1_op2 ()
> +{
> +  return svmul_m (svptrue_b64 (), svdup_u64 (0), svdup_u64 (0));
>  }