From: "cvs-commit at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/106787] [13 Regression] ICE in vect_schedule_slp_node, at tree-vect-slp.cc:8648 since r13-2288-g61c4c989034548f4
Date: Fri, 02 Sep 2022 13:00:25 +0000
Message-ID: <bug-106787-4-wVN1wysarD@http.gcc.gnu.org/bugzilla/>
In-Reply-To: <bug-106787-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106787

--- Comment #3 from CVS Commits <cvs-commit at gcc dot gnu.org> ---
The trunk branch has been updated by Richard Sandiford <rsandifo@gcc.gnu.org>:

https://gcc.gnu.org/g:eab511df13ca6abb24c3c2abb0f420a89c91e310

commit r13-2377-geab511df13ca6abb24c3c2abb0f420a89c91e310
Author: Richard Sandiford <richard.sandiford@arm.com>
Date:   Fri Sep 2 14:00:14 2022 +0100

    vect: Ensure SLP nodes don't end up in multiple BB partitions [PR106787]

    In the PR we have two REDUC_PLUS SLP instances that share a common
    load of stride 4.  Each instance also has a unique contiguous load.

    Initially all three loads are out of order, so have a nontrivial
    load permutation.  The layout pass puts them in order instead.
    For the two contiguous loads it is possible to do this by adjusting the
    SLP_LOAD_PERMUTATION to be { 0, 1, 2, 3 }.  But a SLP_LOAD_PERMUTATION
    of { 0, 4, 8, 12 } is rejected as unsupported, so the pass creates a
    separate VEC_PERM_EXPR instead.

    Later the stride-4 load's initial SLP_LOAD_PERMUTATION is rejected too,
    so that the load gets replaced by an external node built from scalars.
    We then have an external node feeding a VEC_PERM_EXPR.
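    As a rough illustration only (this is not the actual
    gcc.dg/vect/bb-slp-layout-19.c testcase, just an approximation of the
    code shape described above), two dot-product style reductions that each
    read a contiguous array in reverse order while both reading b[] at
    stride 4 would give two REDUC_PLUS SLP instances, each with one unique
    contiguous load plus a shared strided load, all initially out of order:

        extern int a[4], b[16], c[4];
        extern int sum1, sum2;

        void
        f (void)
        {
          /* Each sum is one REDUC_PLUS SLP instance: a unique contiguous
             load (a[] or c[]) plus the shared stride-4 load of b[],
             written in reverse order so that every load starts out with
             a nontrivial load permutation.  */
          sum1 = a[3] * b[12] + a[2] * b[8] + a[1] * b[4] + a[0] * b[0];
          sum2 = c[3] * b[12] + c[2] * b[8] + c[1] * b[4] + c[0] * b[0];
        }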

    VEC_PERM_EXPRs created in this way do not have any associated
    SLP_TREE_SCALAR_STMTS.  This means that they do not affect the
    decision about which nodes should be in which subgraph for costing
    purposes.  If the VEC_PERM_EXPR is fed by a vect_external_def,
    then the VEC_PERM_EXPR's input doesn't affect that decision either.

    The net effect is that a shared VEC_PERM_EXPR fed by an external def
    can appear in more than one subgraph.  This triggered an ICE in
    vect_schedule_slp_node, which (rightly) expects to be called no more
    than once for the same internal def.

    There seemed to be many possible fixes, including:

    (1) Replace unsupported loads with external defs *before* doing
        the layout optimisation.  This would avoid the need for the
        VEC_PERM_EXPR altogether.

    (2) If the target doesn't support a load in its original layout,
        stop the layout optimisation from checking whether the target
        supports loads in any new candidate layout.  In other words,
        treat all layouts as if they were supported whenever the
        original layout is not in fact supported.

        I'd rather not do this.  In principle, the layout optimisation
        could convert an unsupported layout to a supported one.
        Selectively ignoring target support would work against that.

        We could try to look specifically for loads that will need
        to be decomposed, but that just seems like admitting that
        things are happening in the wrong order.

    (3) Add SLP_TREE_SCALAR_STMTS to VEC_PERM_EXPRs.

        That would be OK for this case, but wouldn't be possible
        for external defs that represent existing vectors.

    (4) Make vect_schedule_slp share SCC info between subgraphs.

        It feels like that's working around the partitioning problem
        rather than a real fix though.

    (5) Directly ensure that internal def nodes belong to a single
        subgraph.

    (1) is probably the best long-term fix, but (5) is much simpler.
    The subgraph partitioning code already has a hash set to record
    which nodes have been visited; we just need to convert that to a
    map from nodes to instances instead.
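
    As a simplified sketch of that idea (using std::unordered_map and
    placeholder Node/Instance types rather than GCC's hash_map, slp_tree
    and slp_instance; the real logic lives in vect_map_to_instance and
    vect_bb_partition_graph_r):

        #include <unordered_map>

        struct Node;       // stands in for slp_tree
        struct Instance;   // stands in for slp_instance

        // Record that NODE is being placed in INSTANCE's partition.
        // Returns true if NODE had not been visited yet, in which case the
        // caller should also visit NODE's children.  If NODE was already
        // claimed by a different instance, merge the two partitions by
        // recording that INSTANCE should share the other instance's
        // leader, so the node is never scheduled from two subgraphs.
        static bool
        map_node_to_instance (Instance *instance, Node *node,
                              std::unordered_map<Node *, Instance *> &node_to_instance,
                              std::unordered_map<Instance *, Instance *> &instance_leader)
        {
          auto [it, inserted] = node_to_instance.emplace (node, instance);
          if (!inserted && it->second != instance)
            instance_leader[instance] = it->second;
          return inserted;
        }

    The map costs no more than the original visited set, but the "already
    visited" answer now carries enough information to detect when a node
    would otherwise be shared between two partitions.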

    gcc/
            PR tree-optimization/106787
            * tree-vect-slp.cc (vect_map_to_instance): New function, split out
            from...
            (vect_bb_partition_graph_r): ...here.  Replace the visited set
            with a map from nodes to instances.  Ensure that a node only
            appears in one partition.
            (vect_bb_partition_graph): Update accordingly.

    gcc/testsuite/
            * gcc.dg/vect/bb-slp-layout-19.c: New test.

