From: Drew Ross
Date: Wed, 26 Jul 2023 14:18:50 -0400
Subject: Re: [PATCH] match.pd: Implement missed optimization (x << c) >> c -> -(x & 1) [PR101955]
To: Richard Biener
Cc: Jakub Jelinek, Jeff Law, Andrew Pinski, gcc-patches@gcc.gnu.org
Here is what I came up with for combining the two:

/* For (x << c) >> c, optimize into x & ((unsigned)-1 >> c) for
   unsigned x, OR truncate into the precision (type) - c lowest bits
   of signed x (if they have mode precision or a precision of 1).  */
(simplify
 (rshift (nop_convert? (lshift @0 INTEGER_CST@1)) @@1)
 (if (wi::ltu_p (wi::to_wide (@1), element_precision (type)))
  (if (TYPE_UNSIGNED (type))
   (bit_and @0 (rshift { build_minus_one_cst (type); } @1))
   (if (INTEGRAL_TYPE_P (type))
    (with {
      int width = element_precision (type) - tree_to_uhwi (@1);
      tree stype = build_nonstandard_integer_type (width, 0);
     }
     (if (TYPE_PRECISION (stype) == 1 || type_has_mode_precision_p (stype))
      (convert (convert:stype @0))))))))
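
To make the three cases concrete, a small C sketch of what each arm is
meant to fold (illustrative only: the function names are invented here,
and GCC defines the behavior of these signed shifts even where ISO C
does not):

#include <stdint.h>

/* Unsigned arm: (x << c) >> c folds to x & ((unsigned)-1 >> c).  */
uint32_t mask_low8 (uint32_t x)
{
  return (x << 24) >> 24;   /* x & 0xff */
}

/* Signed arm: (x << c) >> c sign-extends the low 32 - c bits, i.e. a
   truncating convert followed by a widening one.  */
int32_t sext_low8 (int32_t x)
{
  return (x << 24) >> 24;   /* (int32_t)(int8_t) x */
}

/* The PR101955 case, c == precision - 1: only bit 0 survives.  */
int32_t sext_bit0 (int32_t x)
{
  return (x << 31) >> 31;   /* -(x & 1) */
}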
Let me know what you think.

> Btw, I wonder whether we can handle some cases of widening/truncating
> converts between the shifts?

I will look into this.

Drew

On Wed, Jul 26, 2023 at 4:40 AM Richard Biener wrote:

> On Tue, Jul 25, 2023 at 9:26 PM Drew Ross wrote:
> >
> > > With that fixed I think for non-vector integrals the above is the
> > > most suitable canonical form of a sign-extension. Note it should
> > > also work for any other constant shift amount - just use the
> > > appropriate intermediate precision for the truncating type.
> > > We _might_ want to consider to only use the converts when the
> > > intermediate type has mode precision (and as a special case allow
> > > one bit as in your above case) so it can expand to
> > > (sign_extend: (subreg: reg)).
> >
> > Here is a pattern that only matches truncations that result in
> > mode precision (or precision of 1):
> >
> > (simplify
> >  (rshift (nop_convert? (lshift @0 INTEGER_CST@1)) @@1)
> >  (if (INTEGRAL_TYPE_P (type)
> >       && !TYPE_UNSIGNED (type)
> >       && wi::gt_p (element_precision (type), wi::to_wide (@1),
> >                    TYPE_SIGN (TREE_TYPE (@1))))
> >   (with {
> >     int width = element_precision (type) - tree_to_uhwi (@1);
> >     tree stype = build_nonstandard_integer_type (width, 0);
> >    }
> >    (if (TYPE_PRECISION (stype) == 1 || type_has_mode_precision_p (stype))
> >     (convert (convert:stype @0))))))
> >
> > Look ok?
>
> I suppose so. Can you see to amend the existing
>
> /* Optimize (x << c) >> c into x & ((unsigned)-1 >> c) for unsigned
>    types. */
> (simplify
>  (rshift (lshift @0 INTEGER_CST@1) @1)
>  (if (TYPE_UNSIGNED (type)
>       && (wi::ltu_p (wi::to_wide (@1), element_precision (type))))
>   (bit_and @0 (rshift { build_minus_one_cst (type); } @1))))
>
> pattern? You will get a duplicate pattern diagnostic otherwise. It
> also looks like this one has the (nop_convert? ..) missing. Btw, I
> wonder whether we can handle some cases of widening/truncating
> converts between the shifts?
>
> Richard.
>
> > > You might also want to verify what RTL expansion produces
> > > before/after - it at least shouldn't be worse.
> >
> > The RTL is slightly better for the mode precision cases and slightly
> > worse for the precision 1 case.
> >
> > > That said - do you have any testcase where the canonicalization is
> > > an enabler for further transforms or was this requested stand-alone?
> >
> > No, I don't have any specific test cases. This patch is just in
> > response to pr101955.
> >
> > On Tue, Jul 25, 2023 at 2:55 AM Richard Biener <richard.guenther@gmail.com> wrote:
> >>
> >> On Mon, Jul 24, 2023 at 9:42 PM Jakub Jelinek wrote:
> >> >
> >> > On Mon, Jul 24, 2023 at 03:29:54PM -0400, Drew Ross via Gcc-patches wrote:
> >> > > So would something like
> >> > >
> >> > > (simplify
> >> > >  (rshift (nop_convert? (lshift @0 INTEGER_CST@1)) @@1)
> >> > >  (with { tree stype = build_nonstandard_integer_type (1, 0); }
> >> > >   (if (INTEGRAL_TYPE_P (type)
> >> > >        && !TYPE_UNSIGNED (type)
> >> > >        && wi::eq_p (wi::to_wide (@1), element_precision (type) - 1))
> >> > >    (convert (convert:stype @0)))))
> >> > >
> >> > > work?
> >> >
> >> > Certainly swap the if and with, and the (with then should be
> >> > indented by 1 column to the right of (if and the (convert one
> >> > further (the reason for the swapping is not to call
> >> > build_nonstandard_integer_type when it will not be needed, which
> >> > will probably be far more often than an actual match).
> >>
> >> With that fixed I think for non-vector integrals the above is the
> >> most suitable canonical form of a sign-extension. Note it should
> >> also work for any other constant shift amount - just use the
> >> appropriate intermediate precision for the truncating type. You
> >> might also want to verify what RTL expansion produces before/after
> >> - it at least shouldn't be worse. We _might_ want to consider to
> >> only use the converts when the intermediate type has mode precision
> >> (and as a special case allow one bit as in your above case) so it
> >> can expand to (sign_extend: (subreg: reg)).
> >>
> >> > As discussed privately, the above isn't what we want for vectors,
> >> > and the 2 shifts are probably best on most arches because even
> >> > when using -(x & 1) the { 1, 1, 1, ... } vector would often need
> >> > to be loaded from memory.
> >>
> >> I think for vectors a vpcmpgt {0,0,0,..}, %xmm is the cheapest way
> >> of producing the result. Note that to reflect this on GIMPLE you'd need
> >>
> >>   _2 = _1 < { 0, 0, ... };
> >>   res = _2 ? { -1, -1, ... } : { 0, 0, ... };
> >>
> >> because whether the ISA has a way to produce all-ones masks isn't known.
> >>
> >> For scalars using -(T)(_1 < 0) would also be possible.
> >>
> >> That said - do you have any testcase where the canonicalization is
> >> an enabler for further transforms or was this requested stand-alone?
> >>
> >> Thanks,
> >> Richard.
> >>
> >> > Jakub
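
As a concrete rendering of the comparison idiom above, a sketch using
GCC's vector extension (the typedef and function names are invented
here):

#include <stdint.h>

typedef int32_t v4si __attribute__ ((vector_size (16)));

/* Vector form: a lane-wise compare already yields 0 / -1 per lane,
   matching _2 = _1 < { 0, ... }; res = _2 ? { -1, ... } : { 0, ... }.  */
v4si
sign_mask_vec (v4si x)
{
  return x < (v4si) { 0, 0, 0, 0 };
}

/* Scalar form of -(T)(_1 < 0): same value as the arithmetic shift
   x >> 31 for a 32-bit int.  */
int32_t
sign_mask_scalar (int32_t x)
{
  return -(int32_t) (x < 0);
}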