From: Philipp Tomsich
Date: Fri, 17 Mar 2023 14:15:43 +0100
Subject: Re: [PATCH v1] [RFC] Improve folding for comparisons with zero in tree-ssa-forwprop.
To: Richard Biener
Cc: Manolis Tsamis, Andrew MacLeod, gcc-patches@gcc.gnu.org
References: <20230316152706.2214124-1-manolis.tsamis@vrull.eu>

On Fri, 17 Mar 2023 at 09:31, Richard Biener wrote:
>
> On Thu, Mar 16, 2023 at 4:27 PM Manolis Tsamis wrote:
> >
> > For this C testcase:
> >
> > void g();
> > void f(unsigned int *a)
> > {
> >   if (++*a == 1)
> >     g();
> > }
> >
> > GCC will currently emit a comparison with 1 by using the value
> > of *a after the increment. This can be improved by comparing
> > against 0 and using the value before the increment. As a result
> > there is a potentially shorter dependency chain (no need to wait
> > for the result of +1) and on targets with compare-with-zero instructions
> > the generated code is one instruction shorter.
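
In source terms the rewrite amounts to roughly the following sketch
(illustrative only; "old" is simply a name for the pre-increment value,
and the equivalence relies on unsigned wrap-around arithmetic):

  void g(void);

  void f(unsigned int *a)
  {
    unsigned int old = *a;   /* value before the increment */
    *a = old + 1;            /* the increment still takes place */
    if (old == 0)            /* old + 1 == 1 exactly when old == 0 (unsigned) */
      g();
  }
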
>
> The downside is we now need two registers and their lifetime overlaps.
>
> Your patch mixes changing / inverting a parameter (which seems unneeded
> for the actual change) with preferring compares against zero.
>
> What's the reason to specifically prefer compares against zero?  On x86
> we have add that sets flags, so ++*a == 0 would be preferred, but
> for your sequence we'd need a test reg, reg; branch on zero, so we do
> not save any instruction.

AArch64, RISC-V and MIPS support a branch-on-(not-)equal-to-zero, while
comparing against any other constant requires loading that non-zero value
into a register first.

This feels a bit like we need to call into the backend to check
whether comparisons against 0 are cheaper.

Obviously, the underlying issue becomes worse if the immediate cannot
be built up in a single instruction.  Using RISC-V as an example
(primarily because RISC-V makes it particularly easy to run into
multi-instruction sequences for constants), we can construct the
following case:

  void f(unsigned int *a)
  {
    if ((*a += 0x900) == 0x900)
      g();
  }

which GCC 12.2.0 (trunk may already be smart enough to reuse the
constant once it is loaded into a register, but I did not check…)
with -O3 turns into:

f:
        lw      a4,0(a0)
        li      a5,4096
        addiw   a5,a5,-1792
        addw    a4,a5,a4
        li      a5,4096
        sw      a4,0(a0)
        addi    a5,a5,-1792
        beq     a4,a5,.L4
        ret
.L4:
        tail    g

Thanks,
Philipp.

On Fri, 17 Mar 2023 at 09:31, Richard Biener wrote:
>
> On Thu, Mar 16, 2023 at 4:27 PM Manolis Tsamis wrote:
> >
> > For this C testcase:
> >
> > void g();
> > void f(unsigned int *a)
> > {
> >   if (++*a == 1)
> >     g();
> > }
> >
> > GCC will currently emit a comparison with 1 by using the value
> > of *a after the increment. This can be improved by comparing
> > against 0 and using the value before the increment. As a result
> > there is a potentially shorter dependency chain (no need to wait
> > for the result of +1) and on targets with compare-with-zero instructions
> > the generated code is one instruction shorter.
>
> The downside is we now need two registers and their lifetime overlaps.
>
> Your patch mixes changing / inverting a parameter (which seems unneeded
> for the actual change) with preferring compares against zero.
>
> What's the reason to specifically prefer compares against zero?  On x86
> we have add that sets flags, so ++*a == 0 would be preferred, but
> for your sequence we'd need a test reg, reg; branch on zero, so we do
> not save any instruction.
>
> We do have quite some number of bug reports with regard to making VRP's
> life harder when splitting things this way.  It's easier for VRP to handle
>
>   _1 = _2 + 1;
>   if (_1 == 1)
>
> than it is
>
>   _1 = _2 + 1;
>   if (_2 == 0)
>
> where VRP fails to derive a range for _1 on the _2 == 0 branch.  So besides
> the life-range issue there are other side effects as well.  Maybe ranger
> meanwhile can handle the above case?
>
> What's the overall effect of the change on a larger code base?
>
> Thanks,
> Richard.
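
As an aside on the VRP point above, here is a small sketch (illustrative
only, not code from this thread) of where the range of _1 on the taken
branch can matter:

  /* With the original form of the test, the true branch knows v == 1 and
     the division below can fold to a constant; once the test is rewritten
     to compare the pre-increment value against 0, the value of v has to be
     derived indirectly through the addition.  */
  unsigned int h(unsigned int *a)
  {
    unsigned int v = ++*a;   /* _1 = _2 + 1 */
    if (v == 1)              /* vs. testing the pre-increment value against 0 */
      return 100 / v;        /* can fold to 100 only if v's value is known here */
    return 0;
  }
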
> >
> > Example from Aarch64:
> >
> > Before
> >         ldr     w1, [x0]
> >         add     w1, w1, 1
> >         str     w1, [x0]
> >         cmp     w1, 1
> >         beq     .L4
> >         ret
> >
> > After
> >         ldr     w1, [x0]
> >         add     w2, w1, 1
> >         str     w2, [x0]
> >         cbz     w1, .L4
> >         ret
> >
> > gcc/ChangeLog:
> >
> >         * tree-ssa-forwprop.cc (combine_cond_expr_cond):
> >         (forward_propagate_into_comparison_1): Optimize
> >         for zero comparisons.
> >
> > Signed-off-by: Manolis Tsamis
> > ---
> >
> >  gcc/tree-ssa-forwprop.cc | 41 ++++++++++++++++++++++++++++-------------
> >  1 file changed, 28 insertions(+), 13 deletions(-)
> >
> > diff --git a/gcc/tree-ssa-forwprop.cc b/gcc/tree-ssa-forwprop.cc
> > index e34f0888954..93d5043821b 100644
> > --- a/gcc/tree-ssa-forwprop.cc
> > +++ b/gcc/tree-ssa-forwprop.cc
> > @@ -373,12 +373,13 @@ rhs_to_tree (tree type, gimple *stmt)
> >  /* Combine OP0 CODE OP1 in the context of a COND_EXPR.  Returns
> >     the folded result in a form suitable for COND_EXPR_COND or
> >     NULL_TREE, if there is no suitable simplified form.  If
> > -   INVARIANT_ONLY is true only gimple_min_invariant results are
> > -   considered simplified.  */
> > +   ALWAYS_COMBINE is false then only combine it the resulting
> > +   expression is gimple_min_invariant or considered simplified
> > +   compared to the original.  */
> >
> >  static tree
> >  combine_cond_expr_cond (gimple *stmt, enum tree_code code, tree type,
> > -                       tree op0, tree op1, bool invariant_only)
> > +                       tree op0, tree op1, bool always_combine)
> >  {
> >    tree t;
> >
> > @@ -398,17 +399,31 @@ combine_cond_expr_cond (gimple *stmt, enum tree_code code, tree type,
> >    /* Canonicalize the combined condition for use in a COND_EXPR.  */
> >    t = canonicalize_cond_expr_cond (t);
> >
> > -  /* Bail out if we required an invariant but didn't get one.  */
> > -  if (!t || (invariant_only && !is_gimple_min_invariant (t)))
> > +  if (!t)
> >      {
> >        fold_undefer_overflow_warnings (false, NULL, 0);
> >        return NULL_TREE;
> >      }
> >
> > -  bool nowarn = warning_suppressed_p (stmt, OPT_Wstrict_overflow);
> > -  fold_undefer_overflow_warnings (!nowarn, stmt, 0);
> > +  if (always_combine || is_gimple_min_invariant (t))
> > +    {
> > +      bool nowarn = warning_suppressed_p (stmt, OPT_Wstrict_overflow);
> > +      fold_undefer_overflow_warnings (!nowarn, stmt, 0);
> > +      return t;
> > +    }
> >
> > -  return t;
> > +  /* If the result of folding is a zero comparison treat it preferentially.
> > +   */
> > +  if (TREE_CODE_CLASS (TREE_CODE (t)) == tcc_comparison
> > +      && integer_zerop (TREE_OPERAND (t, 1))
> > +      && !integer_zerop (op1))
> > +    {
> > +      bool nowarn = warning_suppressed_p (stmt, OPT_Wstrict_overflow);
> > +      fold_undefer_overflow_warnings (!nowarn, stmt, 0);
> > +      return t;
> > +    }
> > +
> > +  fold_undefer_overflow_warnings (false, NULL, 0);
> > +  return NULL_TREE;
> >  }
> >
> >  /* Combine the comparison OP0 CODE OP1 at LOC with the defining statements
> > @@ -432,7 +447,7 @@ forward_propagate_into_comparison_1 (gimple *stmt,
> >    if (def_stmt && can_propagate_from (def_stmt))
> >      {
> >        enum tree_code def_code = gimple_assign_rhs_code (def_stmt);
> > -      bool invariant_only_p = !single_use0_p;
> > +      bool always_combine = single_use0_p;
> >
> >        rhs0 = rhs_to_tree (TREE_TYPE (op1), def_stmt);
> >
> > @@ -442,10 +457,10 @@ forward_propagate_into_comparison_1 (gimple *stmt,
> >                 && TREE_CODE (TREE_TYPE (TREE_OPERAND (rhs0, 0)))
> >                    == BOOLEAN_TYPE)
> >                || TREE_CODE_CLASS (def_code) == tcc_comparison))
> > -           invariant_only_p = false;
> > +           always_combine = true;
> >
> >        tmp = combine_cond_expr_cond (stmt, code, type,
> > -                                    rhs0, op1, invariant_only_p);
> > +                                    rhs0, op1, always_combine);
> >        if (tmp)
> >          return tmp;
> >      }
> > @@ -459,7 +474,7 @@ forward_propagate_into_comparison_1 (gimple *stmt,
> >      {
> >        rhs1 = rhs_to_tree (TREE_TYPE (op0), def_stmt);
> >        tmp = combine_cond_expr_cond (stmt, code, type,
> > -                                    op0, rhs1, !single_use1_p);
> > +                                    op0, rhs1, single_use1_p);
> >        if (tmp)
> >          return tmp;
> >      }
> > @@ -470,7 +485,7 @@ forward_propagate_into_comparison_1 (gimple *stmt,
> >        && rhs1 != NULL_TREE)
> >      tmp = combine_cond_expr_cond (stmt, code, type,
> >                                    rhs0, rhs1,
> > -                                  !(single_use0_p && single_use1_p));
> > +                                  single_use0_p && single_use1_p);
> >
> >    return tmp;
> >  }
> > --
> > 2.34.1
> >
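
For what it's worth, an illustrative pair of cases (not part of the patch,
and not verified against it) for how the new ALWAYS_COMBINE plus the
zero-comparison preference is expected to behave when the incremented
value has more than one use:

  void g(void);

  unsigned int f1(unsigned int *a)
  {
    unsigned int v = ++*a;   /* the store and the return keep the result live */
    if (v == 1)              /* folds to a compare of the old value against 0,
                                so with the patch it should now be combined */
      g();
    return v;
  }

  unsigned int f2(unsigned int *a)
  {
    unsigned int v = ++*a;
    if (v == 5)              /* would fold to a compare against 4, which is
                                neither invariant nor a zero comparison, so
                                it is left alone as before */
      g();
    return v;
  }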