From: Richard Biener <richard.guenther@gmail.com>
To: Drew Ross <drross@redhat.com>
Cc: Jakub Jelinek <jakub@redhat.com>, Jeff Law <jeffreyalaw@gmail.com>,
 Andrew Pinski <pinskia@gmail.com>, gcc-patches@gcc.gnu.org
Subject: Re: [PATCH] match.pd: Implement missed optimization (x << c) >> c -> -(x & 1) [PR101955]
Date: Fri, 28 Jul 2023 08:30:36 +0200
Message-ID: <CAFiYyc1+rKid8Xg-NFrV6objWNa8k+COwb2_QvWV3J4+V2YoRQ@mail.gmail.com>
In-Reply-To: <CAEsMqOP0jA_OWeqvFLX4q9AO=qamjAKCtWt8a-t-Tsou5XkgCA@mail.gmail.com>

On Wed, Jul 26, 2023 at 8:19 PM Drew Ross <drross@redhat.com> wrote:
>
> Here is what I came up with for combining the two:
>
> /* For (x << c) >> c, optimize into x & ((unsigned)-1 >> c) for
>    unsigned x OR truncate into the precision(type) - c lowest bits
>    of signed x (if they have mode precision or a precision of 1).  */
> (simplify
>  (rshift (nop_convert? (lshift @0 INTEGER_CST@1)) @@1)
>  (if (wi::ltu_p (wi::to_wide (@1), element_precision (type)))
>   (if (TYPE_UNSIGNED (type))
>    (bit_and @0 (rshift { build_minus_one_cst (type); } @1))
>    (if (INTEGRAL_TYPE_P (type))
>     (with {
>       int width = element_precision (type) - tree_to_uhwi (@1);
>       tree stype = build_nonstandard_integer_type (width, 0);
>      }
>      (if (TYPE_PRECISION (stype) == 1 || type_has_mode_precision_p (stype))
>       (convert (convert:stype @0))))))))
>
> Let me know what you think.

Looks good to me.

Thanks,
Richard.

> > Btw, I wonder whether we can handle
> > some cases of widening/truncating converts between the shifts?
>
> I will look into this.
>
> Drew
>
> On Wed, Jul 26, 2023 at 4:40 AM Richard Biener <richard.guenther@gmail.com> wrote:
>>
>> On Tue, Jul 25, 2023 at 9:26 PM Drew Ross <drross@redhat.com> wrote:
>> >
>> > > With that fixed I think for non-vector integrals the above is the most suitable
>> > > canonical form of a sign-extension.
>> > > Note it should also work for any other
>> > > constant shift amount - just use the appropriate intermediate precision for
>> > > the truncating type.  We _might_ want
>> > > to consider to only use the converts when the intermediate type has
>> > > mode precision (and as a special case allow one bit as in your above case)
>> > > so it can expand to (sign_extend:<outer> (subreg:<inner> reg)).
>> >
>> > Here is a pattern that only matches truncations that result in mode precision (or a precision of 1):
>> >
>> > (simplify
>> >  (rshift (nop_convert? (lshift @0 INTEGER_CST@1)) @@1)
>> >  (if (INTEGRAL_TYPE_P (type)
>> >      && !TYPE_UNSIGNED (type)
>> >      && wi::gt_p (element_precision (type), wi::to_wide (@1), TYPE_SIGN (TREE_TYPE (@1))))
>> >   (with {
>> >     int width = element_precision (type) - tree_to_uhwi (@1);
>> >     tree stype = build_nonstandard_integer_type (width, 0);
>> >    }
>> >    (if (TYPE_PRECISION (stype) == 1 || type_has_mode_precision_p (stype))
>> >     (convert (convert:stype @0))))))
>> >
>> > Look ok?
>>
>> I suppose so.  Can you see to amend the existing
>>
>> /* Optimize (x << c) >> c into x & ((unsigned)-1 >> c) for unsigned
>>    types.  */
>> (simplify
>>  (rshift (lshift @0 INTEGER_CST@1) @1)
>>  (if (TYPE_UNSIGNED (type)
>>      && (wi::ltu_p (wi::to_wide (@1), element_precision (type))))
>>   (bit_and @0 (rshift { build_minus_one_cst (type); } @1))))
>>
>> pattern?  You will get a duplicate pattern diagnostic otherwise.  It
>> also looks like this one has the (nop_convert? ..) missing.  Btw, I
>> wonder whether we can handle some cases of widening/truncating
>> converts between the shifts?
>>
>> Richard.
>>
>> > > You might also want to verify what RTL expansion
>> > > produces before/after - it at least shouldn't be worse.
>> >
>> > The RTL is slightly better for the mode precision cases and slightly worse for the precision 1 case.
>> >
>> > > That said - do you have any testcase where the canonicalization is an enabler
>> > > for further transforms or was this requested stand-alone?
>> >
>> > No, I don't have any specific test cases.  This patch is just in response to PR101955.
>> >
>> > On Tue, Jul 25, 2023 at 2:55 AM Richard Biener <richard.guenther@gmail.com> wrote:
>> >>
>> >> On Mon, Jul 24, 2023 at 9:42 PM Jakub Jelinek <jakub@redhat.com> wrote:
>> >> >
>> >> > On Mon, Jul 24, 2023 at 03:29:54PM -0400, Drew Ross via Gcc-patches wrote:
>> >> > > So would something like
>> >> > >
>> >> > > (simplify
>> >> > >  (rshift (nop_convert? (lshift @0 INTEGER_CST@1)) @@1)
>> >> > >  (with { tree stype = build_nonstandard_integer_type (1, 0); }
>> >> > >   (if (INTEGRAL_TYPE_P (type)
>> >> > >       && !TYPE_UNSIGNED (type)
>> >> > >       && wi::eq_p (wi::to_wide (@1), element_precision (type) - 1))
>> >> > >    (convert (convert:stype @0)))))
>> >> > >
>> >> > > work?
>> >> >
>> >> > Certainly swap the if and with, and the (with should then be indented by 1
>> >> > column to the right of (if, and the (convert one further (the reason for the
>> >> > swapping is not to call build_nonstandard_integer_type when it will not be
>> >> > needed, which will probably be far more often than an actual match).
>> >>
>> >> With that fixed I think for non-vector integrals the above is the most suitable
>> >> canonical form of a sign-extension.  Note it should also work for any other
>> >> constant shift amount - just use the appropriate intermediate precision for
>> >> the truncating type.  You might also want to verify what RTL expansion
>> >> produces before/after - it at least shouldn't be worse.  We _might_ want
>> >> to consider to only use the converts when the intermediate type has
>> >> mode precision (and as a special case allow one bit as in your above case)
>> >> so it can expand to (sign_extend:<outer> (subreg:<inner> reg)).
>> >>
>> >> > As discussed privately, the above isn't what we want for vectors, and the 2
>> >> > shifts are probably best on most arches because even when using -(x & 1) the
>> >> > { 1, 1, 1, ... } vector would often need to be loaded from memory.
>> >>
>> >> I think for vectors a vpcmpgt {0,0,0,..}, %xmm is the cheapest way of
>> >> producing the result.  Note that to reflect this on GIMPLE you'd need
>> >>
>> >>   _2 = _1 < { 0, 0, ... };
>> >>   res = _2 ? { -1, -1, ... } : { 0, 0, ... };
>> >>
>> >> because whether the ISA has a way to produce all-ones masks isn't known.
>> >>
>> >> For scalars using -(T)(_1 < 0) would also be possible.
>> >>
>> >> That said - do you have any testcase where the canonicalization is an enabler
>> >> for further transforms or was this requested stand-alone?
>> >>
>> >> Thanks,
>> >> Richard.
>> >>
>> >> > Jakub

Thread overview: 14+ messages

2023-07-21 15:08 [PATCH] match.pd: Implement missed optimization (x << c) >> c -> -(x & 1) [PR101955] Drew Ross
2023-07-21 17:27 ` Andrew Pinski
2023-07-22  6:09 ` Jeff Law
2023-07-24  7:16 ` Richard Biener
2023-07-24 19:29 ` Drew Ross
2023-07-24 19:42 ` Jakub Jelinek
2023-07-25  6:54 ` Richard Biener
2023-07-25 19:25 ` Drew Ross
2023-07-25 19:43 ` Jakub Jelinek
2023-07-26  8:39 ` Richard Biener
2023-07-26 18:18 ` Drew Ross
2023-07-28  6:30 ` Richard Biener [this message]
2023-08-01 19:20 ` [PATCH] match.pd: Canonicalize (signed x << c) >> c [PR101955] Drew Ross
2023-08-01 21:36 ` Jakub Jelinek

