public inbox for gcc-regression@sourceware.org
From: ci_notify@linaro.org
To: Richard Biener <rguenther@suse.de>
Cc: gcc-regression@gcc.gnu.org
Subject: [TCWG CI] Regression caused by gcc: Keep virtual SSA up-to-date in vectorizer
Date: Mon, 4 Jul 2022 15:54:46 +0000 (UTC)
Message-ID: <530146653.7114.1656950088270@jenkins.jenkins>

[TCWG CI] Regression caused by gcc: Keep virtual SSA up-to-date in vectorizer:
commit 10b502fb78351a4073b6682c026a92c82d3da6c5
Author: Richard Biener <rguenther@suse.de>

    Keep virtual SSA up-to-date in vectorizer

Results regressed to
# reset_artifacts:
-10
# true:
0
# build_abe binutils:
1
# First few build errors in logs:
# 00:08:41 make[2]: [Makefile:1786: aarch64-unknown-linux-gnu/bits/largefile-config.h] Error 1 (ignored)
# 00:08:41 make[2]: [Makefile:1787: aarch64-unknown-linux-gnu/bits/largefile-config.h] Error 1 (ignored)
# 00:08:50 /home/tcwg-buildslave/workspace/tcwg_gnu_8/abe/snapshots/gcc.git~master/libgfortran/generated/matmul_c4.c:2450:1: internal compiler error: in vect_do_peeling, at tree-vect-loop-manip.cc:2690
# 00:08:50 /home/tcwg-buildslave/workspace/tcwg_gnu_8/abe/snapshots/gcc.git~master/libgfortran/generated/matmul_c8.c:2450:1: internal compiler error: in vect_do_peeling, at tree-vect-loop-manip.cc:2690
# 00:08:51 make[3]: *** [Makefile:4675: matmul_c4.lo] Error 1
# 00:08:51 make[3]: *** [Makefile:4682: matmul_c8.lo] Error 1
# 00:08:54 make[2]: *** [Makefile:1693: all] Error 2
# 00:08:54 make[1]: *** [Makefile:15872: all-target-libgfortran] Error 2
# 00:09:55 make: *** [Makefile:1034: all] Error 2

from
# reset_artifacts:
-10
# true:
0
# build_abe binutils:
1
# build_abe gcc:
2
# build_abe linux:
4
# build_abe glibc:
5
# build_abe gdb:
6

THIS IS THE END OF INTERESTING STUFF.  BELOW ARE LINKS TO BUILDS, REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.

This commit has regressed these CI configurations:
 - tcwg_gnu_native_build/master-aarch64

First_bad build: https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/13/artifact/artifacts/build-10b502fb78351a4073b6682c026a92c82d3da6c5/
Last_good build: https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/13/artifact/artifacts/build-88b9d090aa1686ba52ce6016aeed66464fb0d4bb/
Baseline build: https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/13/artifact/artifacts/build-baseline/
Even more details: https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/13/artifact/artifacts/

Reproduce builds:
<cut>
mkdir investigate-gcc-10b502fb78351a4073b6682c026a92c82d3da6c5
cd investigate-gcc-10b502fb78351a4073b6682c026a92c82d3da6c5

# Fetch scripts
git clone https://git.linaro.org/toolchain/jenkins-scripts

# Fetch manifests and test.sh script
mkdir -p artifacts/manifests
curl -o artifacts/manifests/build-baseline.sh https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/13/artifact/artifacts/manifests/build-baseline.sh --fail
curl -o artifacts/manifests/build-parameters.sh https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/13/artifact/artifacts/manifests/build-parameters.sh --fail
curl -o artifacts/test.sh https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/13/artifact/artifacts/test.sh --fail
chmod +x artifacts/test.sh

# Reproduce the baseline build (build all pre-requisites)
./jenkins-scripts/tcwg_gnu-build.sh @@ artifacts/manifests/build-baseline.sh

# Save baseline build state (which is then restored in artifacts/test.sh)
mkdir -p ./bisect
rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ --exclude /gcc/ ./ ./bisect/baseline/

cd gcc

# Reproduce first_bad build
git checkout --detach 10b502fb78351a4073b6682c026a92c82d3da6c5
../artifacts/test.sh

# Reproduce last_good build
git checkout --detach 88b9d090aa1686ba52ce6016aeed66464fb0d4bb
../artifacts/test.sh

cd ..
</cut>
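
Once the first_bad build has been reproduced, the failing compile can be re-triggered in isolation.  This is only a sketch: GCC_OBJDIR stands for wherever abe/test.sh configured the GCC object tree (an assumption here, adjust to the actual layout), while the aarch64-unknown-linux-gnu/libgfortran subdirectory and the matmul_c4.lo target follow the standard GCC build layout and the make errors in the log above.

<cut>
# Assumption: GCC_OBJDIR points at the GCC object/build directory created by
# the reproduced build; adjust it to the actual abe/test.sh layout.
cd "$GCC_OBJDIR"/aarch64-unknown-linux-gnu/libgfortran

# Print the exact compiler invocation for the failing object without running it.
make -n matmul_c4.lo

# Re-run only the failing compile; this should reproduce the ICE at
# tree-vect-loop-manip.cc:2690 reported in the build log.
make matmul_c4.lo
</cut>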

Full commit (up to 1000 lines):
<cut>
commit 10b502fb78351a4073b6682c026a92c82d3da6c5
Author: Richard Biener <rguenther@suse.de>
Date:   Mon Jul 4 12:36:05 2022 +0200

    Keep virtual SSA up-to-date in vectorizer
    
    The following removes a FIXME where we fail(ed) to keep virtual
    SSA up-to-date, patching up the remaining two cases I managed to
    trigger.  I've left an assert so that we pick up any cases I
    wasn't able to trigger.
    
    2022-07-04  Richard Biener  <rguenther@suse.de>
    
            * tree-vect-loop-manip.cc (vect_do_peeling): Assert that
            no SSA update is needed instead of updating virtual SSA
            form.
            * tree-vect-stmts.cc (vectorizable_load): For hoisted
            invariant load use the loop entry virtual use.
            For emulated gather loads use the virtual use of the
            original stmt like vect_finish_stmt_generation would do.
---
 gcc/tree-vect-loop-manip.cc | 11 ++++-------
 gcc/tree-vect-stmts.cc      | 15 ++++++++++++---
 2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
index 47c4fe8de86..7b7af944dba 100644
--- a/gcc/tree-vect-loop-manip.cc
+++ b/gcc/tree-vect-loop-manip.cc
@@ -2683,14 +2683,11 @@ vect_do_peeling (loop_vec_info loop_vinfo, tree niters, tree nitersm1,
   class loop *first_loop = loop;
   bool irred_flag = loop_preheader_edge (loop)->flags & EDGE_IRREDUCIBLE_LOOP;
 
-  /* We might have a queued need to update virtual SSA form.  As we
-     delete the update SSA machinery below after doing a regular
+  /* Historically we might have a queued need to update virtual SSA form.
+     As we delete the update SSA machinery below after doing a regular
      incremental SSA update during loop copying make sure we don't
-     lose that fact.
-     ???  Needing to update virtual SSA form by renaming is unfortunate
-     but not all of the vectorizer code inserting new loads / stores
-     properly assigns virtual operands to those statements.  */
-  update_ssa (TODO_update_ssa_only_virtuals);
+     lose that fact.  */
+  gcc_assert (!need_ssa_update_p (cfun));
 
   create_lcssa_for_virtual_phi (loop);
 
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 346d8ce2804..d6a6fe3fb38 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -9024,9 +9024,16 @@ vectorizable_load (vec_info *vinfo,
 			     "hoisting out of the vectorized loop: %G", stmt);
 	  scalar_dest = copy_ssa_name (scalar_dest);
 	  tree rhs = unshare_expr (gimple_assign_rhs1 (stmt));
-	  gsi_insert_on_edge_immediate
-	    (loop_preheader_edge (loop),
-	     gimple_build_assign (scalar_dest, rhs));
+	  edge pe = loop_preheader_edge (loop);
+	  gphi *vphi = get_virtual_phi (loop->header);
+	  tree vuse;
+	  if (vphi)
+	    vuse = PHI_ARG_DEF_FROM_EDGE (vphi, pe);
+	  else
+	    vuse = gimple_vuse (gsi_stmt (*gsi));
+	  gimple *new_stmt = gimple_build_assign (scalar_dest, rhs);
+	  gimple_set_vuse (new_stmt, vuse);
+	  gsi_insert_on_edge_immediate (pe, new_stmt);
 	}
       /* These copies are all equivalent, but currently the representation
 	 requires a separate STMT_VINFO_VEC_STMT for each one.  */
@@ -9769,6 +9776,8 @@ vectorizable_load (vec_info *vinfo,
 			    tree ref = build2 (MEM_REF, ltype, ptr,
 					       build_int_cst (ref_type, 0));
 			    new_stmt = gimple_build_assign (elt, ref);
+			    gimple_set_vuse (new_stmt,
+					     gimple_vuse (gsi_stmt (*gsi)));
 			    gimple_seq_add_stmt (&stmts, new_stmt);
 			    CONSTRUCTOR_APPEND_ELT (ctor_elts, NULL_TREE, elt);
 			  }
</cut>
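
Here last_good and first_bad already bracket the single commit quoted above, so no further bisection is needed.  For reference, when a wider range has to be triaged locally, the same artifacts/test.sh can drive git bisect directly; this sketch assumes artifacts/test.sh exits non-zero when the build regresses.

<cut>
cd gcc

# git bisect takes the bad revision first, then the good one; the SHAs are
# the first_bad and last_good commits linked above.
git bisect start 10b502fb78351a4073b6682c026a92c82d3da6c5 88b9d090aa1686ba52ce6016aeed66464fb0d4bb
git bisect run ../artifacts/test.sh

# Restore the original checkout once bisection has finished.
git bisect reset
cd ..
</cut>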

From: "H.J. Lu" <hjl@sc.intel.com>
Date: Mon, 04 Jul 2022 10:52:53 -0700
To: skpgkp2@gmail.com, hjl.tools@gmail.com, gcc-regression@gcc.gnu.org
Subject: Regressions on master at commit r13-1459 vs commit r13-1415 on Linux/i686
Message-Id: <20220704175253.F1144180FD5@gnu-snb-1.sc.intel.com>

New failures:
FAIL: gcc.dg/auto-init-uninit-4.c (test for excess errors)

New passes:

