From: Richard Biener
Date: Tue, 14 Jun 2022 15:41:55 +0200
Message-ID: 
Subject: Re: [PATCH take #2] Fold truncations of left shifts in match.pd
To: Roger Sayle
Cc: GCC Patches
List-Id: Gcc-patches mailing list
On Sun, Jun 5, 2022 at 1:12 PM Roger Sayle wrote:
>
>
> Hi Richard,
> Many thanks for taking the time to explain how vectorization is supposed
> to work.  I now see that vect_recog_rotate_pattern in tree-vect-patterns.cc
> is supposed to handle lowering of rotations to (vector) shifts, and I
> completely agree that adding support for signed types (using appropriate
> casts to unsigned_type_for and casting the result back to the original
> signed type) is a better approach to avoid the regression of pr98674.c.
>
> I've also implemented your suggestion of combining the proposed new
> (convert (lshift @1 INTEGER_CST@2)) transform with the existing one, and
> at the same time added support for folding valid shifts wider than the
> narrower type, such as (short)(x << 20), to constant zero.  Although this
> optimization is already performed during the tree-ssa passes, it's
> convenient to also catch it here during constant folding.
>
> This revised patch has been tested on x86_64-pc-linux-gnu with
> make bootstrap and make -k check, both with and without
> --target_board=unix{-m32}, with no new failures.  Ok for mainline?

OK.

Thanks,
Richard.

> 2022-06-05  Roger Sayle
>             Richard Biener
>
> gcc/ChangeLog
>         * match.pd (convert (lshift @1 INTEGER_CST@2)): Narrow integer
>         left shifts by a constant when the result is truncated, and the
>         shift constant is well-defined.
>         * tree-vect-patterns.cc (vect_recog_rotate_pattern): Add
>         support for rotations of signed integer types, by lowering
>         using unsigned vector shifts.
>
> gcc/testsuite/ChangeLog
>         * gcc.dg/fold-convlshift-4.c: New test case.
>         * gcc.dg/optimize-bswaphi-1.c: Update found bswap count.
>         * gcc.dg/tree-ssa/pr61839_3.c: Shift is now optimized before VRP.
>         * gcc.dg/vect/vect-over-widen-1-big-array.c: Remove obsolete tests.
>         * gcc.dg/vect/vect-over-widen-1.c: Likewise.
>         * gcc.dg/vect/vect-over-widen-3-big-array.c: Likewise.
>         * gcc.dg/vect/vect-over-widen-3.c: Likewise.
>         * gcc.dg/vect/vect-over-widen-4-big-array.c: Likewise.
>         * gcc.dg/vect/vect-over-widen-4.c: Likewise.
>
>
> Thanks again,
> Roger
> --
>
> > -----Original Message-----
> > From: Richard Biener
> > Sent: 02 June 2022 12:03
> > To: Roger Sayle
> > Cc: GCC Patches
> > Subject: Re: [PATCH] Fold truncations of left shifts in match.pd
> >
> > On Thu, Jun 2, 2022 at 12:55 PM Roger Sayle wrote:
> > >
> > >
> > > Hi Richard,
> > > > +  /* RTL expansion knows how to expand rotates using shift/or.  */
> > > > +  if (icode == CODE_FOR_nothing
> > > > +      && (code == LROTATE_EXPR || code == RROTATE_EXPR)
> > > > +      && optab_handler (ior_optab, vec_mode) != CODE_FOR_nothing
> > > > +      && optab_handler (ashl_optab, vec_mode) != CODE_FOR_nothing)
> > > > +    icode = (int) optab_handler (lshr_optab, vec_mode);
> > > >
> > > > but we then get the vector costing wrong.
> > >
> > > The issue is that we currently get the (relative) vector costing wrong.
> > > Currently for gcc.dg/vect/pr98674.c, the vectorizer thinks the scalar
> > > code requires two shifts and an ior, so believes it's profitable to
> > > vectorize this loop using two vector shifts and a vector ior.  But
> > > once match.pd simplifies the truncation and recognizes the HImode
> > > rotate, we end up with:
> > >
> > > pr98674.c:6:16: note:  ==> examining statement: _6 = _1 r>> 8;
> > > pr98674.c:6:16: note:  vect_is_simple_use: vectype vector(8) short int
> > > pr98674.c:6:16: note:  vect_is_simple_use: operand 8, type of def: constant
> > > pr98674.c:6:16: missed:  op not supported by target.
> > > pr98674.c:8:33: missed:  not vectorized: relevant stmt not supported: _6 = _1 r>> 8;
> > > pr98674.c:6:16: missed:  bad operation or unsupported loop bound.
> > >
> > >
> > > Clearly, it's a win to vectorize HImode rotates when the backend can
> > > perform 8 (or 16) rotations at a time, even at the cost of 3 vector
> > > instructions, and even though a scalar rotate can be performed in a
> > > single instruction.  Fundamentally, vectorization may still be
> > > desirable/profitable even when the backend doesn't provide an optab.
> >
> > Yes, as said it's tree-vect-patterns.cc's job to handle this not
> > natively supported rotate by re-writing it.  Can you check why
> > vect_recog_rotate_pattern does not do this?  Ah, the code only handles
> > !TYPE_UNSIGNED (type) - not sure why though (for rotates it should not
> > matter, and for the lowered sequence we can convert to the desired
> > signedness to get arithmetic/logical shifts)?
> >
> > > The current situation, where the i386 backend provides expanders to
> > > lower rotations (or vcond) into individual instruction sequences,
> > > also interferes with vector costing.  It's the vector cost function
> > > that needs to be fixed, not the generated code made worse (or the
> > > backend bloated performing its own RTL expansion workarounds).
> > >
> > > Is it instead ok to mark pr98674.c as XFAIL (a regression)?
> > > The tweak to tree-vect-stmts.cc was based on the assumption that we
> > > wished to continue vectorizing this loop.  Improving scalar code
> > > generation really shouldn't disable vectorization like this.
> >
> > Yes, see above where the fix needs to be.  The pattern will then expose
> > the shift and ior to the vectorizer, which are then properly costed.
> >
> > Richard.
> >
> > >
> > > Cheers,
> > > Roger
> > > --
> > >
> > >