From: Richard Biener
Date: Mon, 8 Aug 2022 13:28:29 +0200
Subject: Re: [PATCH] middle-end: Optimize ((X >> C1) & C2) != C3 for more cases.
To: Roger Sayle
Cc: GCC Patches

On Sun, Aug 7, 2022 at 9:08 PM Roger Sayle wrote:
>
> Following my middle-end patch for PR tree-optimization/94026, I'd promised
> Jeff Law that I'd clean up the dead code in fold-const.cc now that these
> optimizations are handled in match.pd.  Alas, I discovered things aren't
> quite that simple, as the transformations I'd added avoided cases where
> C2 overlapped with the new bits introduced by the shift, but the original
> code handled any value of C2 provided that it had a single bit set (under
> the condition that C3 was always zero).
>
> This patch upgrades the transformations supported by match.pd to cover
> any values of C2 and C3, provided that C1 is a valid bit-shift constant,
> for all three shift types (logical right, arithmetic right and left).
> This then makes the code in fold-const.cc fully redundant, and adds
> support for some new (corner) cases not previously handled.  If the
> constant C1 is valid for the type's precision, the shift is now always
> eliminated (with C2 and C3 possibly updated to test the sign bit).
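
[Editor's illustration, not from the patch or its testsuite: the kind of
equivalence the upgraded match.pd patterns can now establish.  The signed
example assumes a 32-bit int and arithmetic right shift of negative
values, as on GCC targets.]

    /* Logical right shift: the shift folds into the mask and constant.  */
    int f1 (unsigned x) { return ((x >> 4) & 3) == 2; }  /* bits 5:4 of x == 0b10 */
    int f2 (unsigned x) { return (x & 0x30) == 0x20; }   /* same test, shift gone */

    /* Arithmetic right shift: a mask bit above the shifted field
       becomes a test of the sign bit.  */
    int g1 (int x) { return ((x >> 28) & 0x10) != 0; }
    int g2 (int x) { return (x & (1u << 31)) != 0; }     /* i.e. x < 0 */
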
>
> Interestingly, the fold-const.cc code that I'm now deleting was originally
> added by me back in 2006 to resolve PR middle-end/21137.  I've confirmed
> that those testcases remain resolved with this patch (and I'll close
> 21137 in Bugzilla).  This patch also implements most (but not all) of the
> examples mentioned in PR tree-optimization/98954, for which I have some
> follow-up patches.
>
> This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> and make -k check, both with and without --target_board=unix{-m32},
> with no new failures.  Ok for mainline?

+      (with { wide_int smask = wi::arshift (sb, c1); }
+       (if ((c2 & smask) == 0)
+        (cmp (bit_and @0 { wide_int_to_tree (t0, c2 << c1); })
+         { wide_int_to_tree (t0, c3 << c1); })
+        (if ((c3 & smask) == 0)
+         (cmp (bit_and @0 { wide_int_to_tree (t0, (c2 << c1) | sb); })
+          { wide_int_to_tree (t0, c3 << c1); })
+         (if ((c2 & smask) != (c3 & smask))

you can use

  (switch
   (if ((c2 & smask) == 0)
    (...)
   (if ((c3 & smask) == 0)
    (..)
   (if ((c2 & smask) != (c3 & smask))
    (..)))

to make this more readable (switch is basically an if / else-if /
else-if ... clause).  [A sketch of this rewrite follows at the end of
the message.]

OK with that change.

Thanks,
Richard.

>
> 2022-08-07  Roger Sayle
>
> gcc/ChangeLog
>         PR middle-end/21137
>         PR tree-optimization/98954
>         * fold-const.cc (fold_binary_loc): Remove optimizations to
>         optimize ((X >> C1) & C2) ==/!= 0.
>         * match.pd (cmp (bit_and (lshift @0 @1) @2) @3): Remove wi::ctz
>         check, and handle all values of INTEGER_CSTs @2 and @3.
>         (cmp (bit_and (rshift @0 @1) @2) @3): Likewise, remove wi::clz
>         checks, and handle all values of INTEGER_CSTs @2 and @3.
>
> gcc/testsuite/ChangeLog
>         PR middle-end/21137
>         PR tree-optimization/98954
>         * gcc.dg/fold-eqandshift-4.c: New test case.
>
> Thanks in advance,
> Roger
> --
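
[Editor's sketch: one possible shape of the (switch ...) rewrite of the
hunk quoted above, not the committed version.  The result of the third
arm is not shown in the quoted diff and stays elided here.]

    (with { wide_int smask = wi::arshift (sb, c1); }
     (switch
      (if ((c2 & smask) == 0)
       (cmp (bit_and @0 { wide_int_to_tree (t0, c2 << c1); })
        { wide_int_to_tree (t0, c3 << c1); }))
      (if ((c3 & smask) == 0)
       (cmp (bit_and @0 { wide_int_to_tree (t0, (c2 << c1) | sb); })
        { wide_int_to_tree (t0, c3 << c1); }))
      (if ((c2 & smask) != (c3 & smask))
       /* third result elided in the quoted hunk */ ...)))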