From: Richard Sandiford <richard.sandiford@arm.com>
To: Roger Sayle
Cc: GCC Patches <gcc-patches@gcc.gnu.org>
Subject: Re: [PATCH] Introduce sh_mul and uh_mul RTX codes for high-part multiplications
Date: Mon, 27 Sep 2021 16:44:23 +0100

"Roger Sayle" writes:
> This patch introduces new RTX codes to allow the RTL passes and
> backends to consistently represent high-part multiplications.
> Currently, the RTL used by different backends for expanding
> smul<mode>3_highpart and umul<mode>3_highpart varies greatly,
> with many but not all choosing to express this as something like:
>
> (define_insn "smuldi3_highpart"
>   [(set (match_operand:DI 0 "nvptx_register_operand" "=R")
>         (truncate:DI
>          (lshiftrt:TI
>           (mult:TI (sign_extend:TI
>                     (match_operand:DI 1 "nvptx_register_operand" "R"))
>                    (sign_extend:TI
>                     (match_operand:DI 2 "nvptx_register_operand" "R")))
>           (const_int 64))))]
>   ""
>   "%.\\tmul.hi.s64\\t%0, %1, %2;")
>
> One complication with using this "widening multiplication" representation
> is that it requires an intermediate in a wider mode, making it difficult
> or impossible to encode a high-part multiplication of the widest supported
> integer mode.

Yeah.  It's also a problem when representing vector ops.
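For concreteness, the value both representations compute is just the
high half of the double-width product.  A rough C model on 32-bit
operands (illustrative helper names, not GCC code):

  #include <stdint.h>

  /* High part of the signed 32x32->64 product: widen, multiply,
     keep the top 32 bits.  Assumes arithmetic right shift of
     signed values, as GCC provides.  */
  static inline int32_t
  smul_highpart_32 (int32_t a, int32_t b)
  {
    return (int32_t) (((int64_t) a * (int64_t) b) >> 32);
  }

  /* Unsigned variant.  */
  static inline uint32_t
  umul_highpart_32 (uint32_t a, uint32_t b)
  {
    return (uint32_t) (((uint64_t) a * (uint64_t) b) >> 32);
  }

For DImode operands the same trick needs a 128-bit intermediate, which
is exactly the representational problem with the widening RTL above.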
> A second is that it can interfere with optimization; for
> example simplify-rtx.c contains the comment:
>
>     case TRUNCATE:
>       /* Don't optimize (lshiftrt (mult ...)) as it would interfere
>          with the umulXi3_highpart patterns.  */
>
> Hopefully these problems are solved (or reduced) by introducing a
> new canonical form for high-part multiplications in RTL passes.
> This also simplifies insn patterns when one operand is constant.
>
> Whilst implementing some constant folding simplifications and
> compile-time evaluation of these new RTX codes, I noticed that
> this functionality could also be added for the existing saturating
> arithmetic RTX codes.  Then likewise when documenting these new RTX
> codes, I also took the opportunity to silence the @xref warnings in
> invoke.texi.
>
> This patch has been tested on x86_64-pc-linux-gnu with "make bootstrap"
> and "make -k check" with no new failures.  Ok for mainline?
>
> 2021-09-25  Roger Sayle
>
> gcc/ChangeLog
>         * gcc/rtl.def (SH_MULT, UH_MULT): New RTX codes for representing
>         signed and unsigned high-part multiplication respectively.
>         * gcc/simplify-rtx.c (simplify_binary_operation_1) [SH_MULT,
>         UH_MULT]: Simplify high-part multiplications by zero.
>         [SS_PLUS, US_PLUS, SS_MINUS, US_MINUS, SS_MULT, US_MULT,
>         SS_DIV, US_DIV]: Similar simplifications for saturating
>         arithmetic.
>         (simplify_const_binary_operation) [SS_PLUS, US_PLUS, SS_MINUS,
>         US_MINUS, SS_MULT, US_MULT, SH_MULT, UH_MULT]: Implement
>         compile-time evaluation for constant operands.
>         * gcc/dwarf2out.c (mem_loc_descriptor): Skip SH_MULT and UH_MULT.
>         * doc/rtl.texi (sh_mult, uh_mult): Document new RTX codes.
>         * doc/md.texi (smul@var{m}3_highpart, umul@var{m}3_highpart):
>         Mention the new sh_mul and uh_mul RTX codes.
>         * doc/invoke.texi: Silence @xref "compilation" warnings.

Looks like a good idea to me.

Only real comment is on the naming: if possible, I think we should try
to avoid introducing yet more differences between optab names and rtl
codes.  How about umul_highpart for the unsigned code, to match both
the optab and the existing convention of adding "u" directly to the
front of non-saturating operations?

Things are more inconsistent for signed rtx codes: sometimes the "s"
is present and sometimes it isn't.  But since "smin" and "smax" have it,
I think we can justify having it here too.

So I think we should use smul_highpart and umul_highpart.  It's a bit
more wordy than sh_mul, but still a lot shorter than the status quo ;-)

> diff --git a/gcc/simplify-rtx.c b/gcc/simplify-rtx.c
> index ebad5cb..b4b04b9 100644
> --- a/gcc/simplify-rtx.c
> +++ b/gcc/simplify-rtx.c
> @@ -4142,11 +4142,40 @@ simplify_context::simplify_binary_operation_1 (rtx_code code,
>      case US_PLUS:
>      case SS_MINUS:
>      case US_MINUS:
> +      /* Simplify x + 0 to x, if possible.  */

Nit: +/-

> +      if (trueop1 == CONST0_RTX (mode) && !HONOR_SIGNED_ZEROS (mode))

The HONOR_SIGNED_ZEROS check is redundant, since these ops don't support
modes with signed zero.  Same for the other HONOR_* macros in the patch.
E.g. I don't think we should try to guess how infinities and saturation
work together.

> +        return op0;
> +      return 0;
> +
>      case SS_MULT:
>      case US_MULT:
> +      /* Simplify x * 0 to 0, if possible.  */
> +      if (trueop1 == CONST0_RTX (mode)
> +          && !HONOR_NANS (mode)
> +          && !HONOR_SIGNED_ZEROS (mode)
> +          && !side_effects_p (op0))
> +        return op1;
> +
> +      /* Simplify x * 1 to x, if possible.  */
> +      if (trueop1 == CONST1_RTX (mode) && !HONOR_SNANS (mode))
> +        return op0;
> +      return 0;
> +
> +    case SH_MULT:
> +    case UH_MULT:
> +      /* Simplify x * 0 to 0, if possible.  */
> +      if (trueop1 == CONST0_RTX (mode)
> +          && !HONOR_NANS (mode)
> +          && !HONOR_SIGNED_ZEROS (mode)
> +          && !side_effects_p (op0))
> +        return op1;
> +      return 0;
> +
>      case SS_DIV:
>      case US_DIV:
> -      /* ??? There are simplifications that can be done.  */
> +      /* Simplify x / 1 to x, if possible.  */
> +      if (trueop1 == CONST1_RTX (mode) && !HONOR_SNANS (mode))
> +        return op0;
>        return 0;
>
>      case VEC_SERIES:
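As a side note, the new folds are easy to sanity-check against a C
model of the saturating operations.  For example, unsigned saturating
multiply (hypothetical helper, not GCC code):

  #include <stdint.h>

  /* US_MULT analogue on 32 bits: form the wide product and clamp to
     the type maximum on overflow.  Under this definition x * 0 is
     always 0 and x * 1 is always x, so the simplifications above are
     safe (x * 0 additionally needs the !side_effects_p check because
     x is discarded).  */
  static uint32_t
  us_mult_32 (uint32_t a, uint32_t b)
  {
    uint64_t wide = (uint64_t) a * (uint64_t) b;
    return wide > UINT32_MAX ? UINT32_MAX : (uint32_t) wide;
  }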
> @@ -5011,6 +5040,63 @@ simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
>             }
>           break;
>         }
> +
> +    case SS_PLUS:
> +      result = wi::add (pop0, pop1, SIGNED, &overflow);

I think a goto label would be good here, so that later signed ops can
reuse this code instead of having to repeat it.  Same idea for the
unsigned case.

> +      if (overflow == wi::OVF_OVERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +      else if (overflow == wi::OVF_UNDERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);

Should be min_value.  Same for the other underflow handlers.

Like Andreas said, @pxref would be better where applicable.

Thanks,
Richard

> +      else if (overflow != wi::OVF_NONE)
> +        return NULL_RTX;
> +      break;
> +
> +    case US_PLUS:
> +      result = wi::add (pop0, pop1, UNSIGNED, &overflow);
> +      if (overflow != wi::OVF_NONE)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), UNSIGNED);
> +      break;
> +
> +    case SS_MINUS:
> +      result = wi::sub (pop0, pop1, SIGNED, &overflow);
> +      if (overflow == wi::OVF_OVERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +      else if (overflow == wi::OVF_UNDERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +      else if (overflow != wi::OVF_NONE)
> +        return NULL_RTX;
> +      break;
> +
> +    case US_MINUS:
> +      result = wi::sub (pop0, pop1, UNSIGNED, &overflow);
> +      if (overflow != wi::OVF_NONE)
> +        result = wi::min_value (GET_MODE_PRECISION (int_mode), UNSIGNED);
> +      break;
> +
> +    case SS_MULT:
> +      result = wi::mul (pop0, pop1, SIGNED, &overflow);
> +      if (overflow == wi::OVF_OVERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +      else if (overflow == wi::OVF_UNDERFLOW)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), SIGNED);
> +      else if (overflow != wi::OVF_NONE)
> +        return NULL_RTX;
> +      break;
> +
> +    case US_MULT:
> +      result = wi::mul (pop0, pop1, UNSIGNED, &overflow);
> +      if (overflow != wi::OVF_NONE)
> +        result = wi::max_value (GET_MODE_PRECISION (int_mode), UNSIGNED);
> +      break;
> +
> +    case SH_MULT:
> +      result = wi::mul_high (pop0, pop1, SIGNED);
> +      break;
> +
> +    case UH_MULT:
> +      result = wi::mul_high (pop0, pop1, UNSIGNED);
> +      break;
> +
>      default:
>        return NULL_RTX;
>      }
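For reference, the clamping behaviour the constant folding should
implement, modelled in C (hypothetical helper, not GCC code):

  #include <stdint.h>

  /* SS_PLUS analogue on 32 bits: saturate to INT32_MAX on overflow
     and to INT32_MIN on underflow -- hence min_value rather than
     max_value in the OVF_UNDERFLOW arms above.  The addition is done
     in 64 bits, so it cannot itself overflow for 32-bit inputs.  */
  static int32_t
  ss_plus_32 (int32_t a, int32_t b)
  {
    int64_t wide = (int64_t) a + (int64_t) b;
    if (wide > INT32_MAX)
      return INT32_MAX;
    if (wide < INT32_MIN)
      return INT32_MIN;
    return (int32_t) wide;
  }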