public inbox for gcc-bugs@sourceware.org
From: "cvs-commit at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/113196] [14 Regression] Failure to use ushll{,2}
Date: Fri, 12 Jan 2024 12:38:25 +0000	[thread overview]
Message-ID: <bug-113196-4-Dp5jr8XVPy@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-113196-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113196

--- Comment #2 from GCC Commits <cvs-commit at gcc dot gnu.org> ---
The trunk branch has been updated by Richard Sandiford <rsandifo@gcc.gnu.org>:

https://gcc.gnu.org/g:74e3e839ab2d368413207455af2fdaaacc73842b

commit r14-7187-g74e3e839ab2d368413207455af2fdaaacc73842b
Author: Richard Sandiford <richard.sandiford@arm.com>
Date:   Fri Jan 12 12:38:01 2024 +0000

    aarch64: Rework uxtl->zip optimisation [PR113196]

    g:f26f92b534f9 implemented unsigned extensions using ZIPs rather than
    UXTL{,2}, since the former has a higher throughput than the latter on
    many cores.  The optimisation worked by lowering directly to ZIP during
    expand, so that the zero input could be hoisted and shared.
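
    As a minimal illustration (not from the patch or testsuite; register
    names and exact schedules are assumptions), the two lowerings of an
    unsigned widening move look roughly like this:

        /* Compile with something like -O2 on little-endian aarch64.  */
        #include <arm_neon.h>

        uint16x8_t
        widen_lo (uint8x16_t x)
        {
          /* Either:  uxtl v0.8h, v0.8b
             or:      movi v31.16b, #0
                      zip1 v0.16b, v0.16b, v31.16b  */
          return vmovl_u8 (vget_low_u8 (x));
        }

        uint16x8_t
        widen_hi (uint8x16_t x)
        {
          /* Either:  uxtl2 v0.8h, v0.16b
             or:      zip2 v0.16b, v0.16b, v31.16b  */
          return vmovl_high_u8 (x);
        }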

    However, changing to ZIP means that zero extensions no longer benefit
    from some existing combine patterns.  The patch included new patterns
    for UADDW and USUBW, but the PR shows that other patterns were affected
    as well.
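
    The subject of this PR is one such case: USHLL{,2} fuses a zero
    extension with a left shift, but once the extension has already been
    turned into a ZIP at expand time, combine cannot form the single
    instruction.  A hedged sketch (function name and shift count are
    arbitrary, and the "before" sequence is approximate):

        #include <arm_neon.h>

        uint16x8_t
        shift_widen (uint8x8_t x)
        {
          /* Wanted:              ushll v0.8h, v0.8b, #2
             Seen before the fix: movi + zip1 + shl (roughly)  */
          return vshll_n_u8 (x, 2);
        }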

    This patch instead introduces the ZIPs during a pre-reload split
    and forcibly hoists the zero move to the outermost scope.  This has
    the disadvantage of executing the move even for a shrink-wrapped
    function, which I suppose could be a problem if it causes a kernel
    to trap and enable Advanced SIMD unnecessarily.  In other circumstances,
    an unused move shouldn't affect things much.

    Also, the RA should be able to rematerialise the move at an
    appropriate point if necessary, such as if there is an intervening
    call.
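
    A sketch of the intended effect (the loop and instruction placement
    are illustrative, not taken from the testsuite): for a widening loop
    like the one below, the zero lives in the entry block and the loop
    body contains only the ZIPs.

        #include <arm_neon.h>
        #include <stddef.h>

        /* Assumes n is a multiple of 16.  */
        void
        widen_all (uint16_t *restrict dst, const uint8_t *restrict src,
                   size_t n)
        {
          /* movi v31.16b, #0 is expected once, before the loop.  */
          for (size_t i = 0; i < n; i += 16)
            {
              uint8x16_t x = vld1q_u8 (src + i);
              /* zip1/zip2 against the shared zero inside the loop.  */
              vst1q_u16 (dst + i, vmovl_u8 (vget_low_u8 (x)));
              vst1q_u16 (dst + i + 8, vmovl_high_u8 (x));
            }
        }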

    In https://gcc.gnu.org/pipermail/gcc-patches/2024-January/641948.html
    I'd then tried to allow a zero to be recombined back into a solitary
    ZIP.  However, that relied on late-combine, which didn't make it into
    GCC 14.  This version instead restricts the split to cases where
    the UXTL executes at least as frequently as the entry block (which
    is where we plan to put the zero).
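
    For example (illustrative only; the precise heuristic is the split
    condition added by this patch), a widening move on an unlikely path
    executes less often than the entry block, so hoisting a MOVI there
    would cost more than it saves and the plain UXTL2 form is preferred:

        #include <arm_neon.h>

        uint16x8_t
        maybe_widen (uint8x16_t x, int rare)
        {
          if (__builtin_expect (rare, 0))
            /* Cold block: keeping uxtl2 here avoids an always-executed
               movi in the entry block.  */
            return vmovl_high_u8 (x);
          return vdupq_n_u16 (0);
        }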

    Also, the original optimisation contained a big-endian correction
    that I don't think is needed/correct.  Even on big-endian targets,
    we want the ZIP to take the low half of an element from the input
    vector and the high half from the zero vector.  And the patterns
    map directly to the underlying Advanced SIMD instructions: the use
    of unspecs means that there's no need to adjust for the difference
    between GCC and Arm lane numbering.
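
    A tiny scalar model of that claim (purely illustrative): zero
    extension is exactly "input byte in the low half of the element,
    zero in the high half", independently of how lanes are numbered.

        #include <stdint.h>
        #include <stdio.h>

        int
        main (void)
        {
          uint8_t in[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
          uint16_t out[8];
          for (int i = 0; i < 8; i++)
            /* Low half from the input, high half from zero.  */
            out[i] = (uint16_t) in[i] | ((uint16_t) 0 << 8);
          for (int i = 0; i < 8; i++)
            printf ("%u ", out[i]);
          printf ("\n");
          return 0;
        }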

    gcc/
            PR target/113196
            * config/aarch64/aarch64.h (machine_function::advsimd_zero_insn):
            New member variable.
            * config/aarch64/aarch64-protos.h (aarch64_split_simd_shift_p):
            Declare.
            * config/aarch64/iterators.md (Vnarrowq2): New mode attribute.
            * config/aarch64/aarch64-simd.md
            (vec_unpacku_hi_<mode>, vec_unpacks_hi_<mode>): Recombine into...
            (vec_unpack<su>_hi_<mode>): ...this.  Move the generation of
            zip2 for zero-extends to...
            (aarch64_simd_vec_unpack<su>_hi_<mode>): ...a split of this
            instruction.  Fix big-endian handling.
            (vec_unpacku_lo_<mode>, vec_unpacks_lo_<mode>): Recombine into...
            (vec_unpack<su>_lo_<mode>): ...this.  Move the generation of
            zip1 for zero-extends to...
            (<optab><Vnarrowq><mode>2): ...a split of this instruction.
            Fix big-endian handling.
            (*aarch64_zip1_uxtl): New pattern.
            (aarch64_usubw<mode>_lo_zip, aarch64_uaddw<mode>_lo_zip): Delete.
            (aarch64_usubw<mode>_hi_zip, aarch64_uaddw<mode>_hi_zip): Likewise.
            * config/aarch64/aarch64.cc (aarch64_get_shareable_reg): New
            function.
            (aarch64_gen_shareable_zero): Use it.
            (aarch64_split_simd_shift_p): New function.

    gcc/testsuite/
            PR target/113196
            * gcc.target/aarch64/pr113196.c: New test.
            * gcc.target/aarch64/simd/vmovl_high_1.c: Remove double include.
            Expect uxtl2 rather than zip2.
            * gcc.target/aarch64/vect_mixed_sizes_8.c: Expect zip1 rather
            than uxtl.
            * gcc.target/aarch64/vect_mixed_sizes_9.c: Likewise.
            * gcc.target/aarch64/vect_mixed_sizes_10.c: Likewise.


Thread overview: 5+ messages
2024-01-02 10:06 [Bug target/113196] New: " rsandifo at gcc dot gnu.org
2024-01-02 10:07 ` [Bug target/113196] " rsandifo at gcc dot gnu.org
2024-01-04  6:22 ` pinskia at gcc dot gnu.org
2024-01-12 12:38 ` cvs-commit at gcc dot gnu.org [this message]
2024-01-12 12:41 ` rsandifo at gcc dot gnu.org
