From: liuhongt
To: gcc-patches@gcc.gnu.org
Cc: crazylht@gmail.com, hjl.tools@gmail.com
Subject: [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
Date: Thu, 19 Oct 2023 14:14:22 +0800
Message-Id: <20231019061422.281377-1-hongtao.liu@intel.com>

> So the bugs were not fixed without this hunk?  IIRC in the audit
> trail we concluded the value is always positive ... (but of course
> a large unsigned value can appear negative if you test it this way?)

No, I added this in case there's a negative skip_niters in the future,
as you mentioned in the PR; it's just defensive programming.

> I think you can use one of the mpz_pow* functions and
> wi::to_mpz/from_mpz for this.  See tree-ssa-loop-niter.cc for the
> most heavy user of mpz (but not pow I think).

Changed.

Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
Ok for trunk?

There's a loop in vect_peel_nonlinear_iv_init that computes
init_expr * pow (step_expr, skip_niters).  When skip_niters is too big,
it hogs compile time.  To avoid that, optimize
init_expr * pow (step_expr, skip_niters) to
init_expr << (exact_log2 (step_expr) * skip_niters) when step_expr is a
power of 2; otherwise give up on vectorization when
skip_niters >= TYPE_PRECISION (TREE_TYPE (init_expr)).

Also give up on vectorization when niters_skip is negative, which will
be used for a fully masked loop.

gcc/ChangeLog:

	PR tree-optimization/111820
	PR tree-optimization/111833
	* tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
	up vectorization for nonlinear iv vect_step_op_mul when
	step_expr is not exact_log2 and niters is greater than
	TYPE_PRECISION (TREE_TYPE (step_expr)).  Also don't vectorize
	for negative niters_skip which will be used by fully masked
	loop.
	(vect_can_advance_ivs_p): Pass whole phi_info to
	vect_can_peel_nonlinear_iv_p.
	* tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
	init_expr * pow (step_expr, skipn) to
	init_expr << (log2 (step_expr) * skipn) when step_expr is
	exact_log2.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr111820-1.c: New test.
	* gcc.target/i386/pr111820-2.c: New test.
	* gcc.target/i386/pr111820-3.c: New test.
	* gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
	* gcc.target/i386/pr103144-mul-2.c: Adjust testcase.
---
 .../gcc.target/i386/pr103144-mul-1.c       |  8 ++---
 .../gcc.target/i386/pr103144-mul-2.c       |  8 ++---
 gcc/testsuite/gcc.target/i386/pr111820-1.c | 16 +++++++++
 gcc/testsuite/gcc.target/i386/pr111820-2.c | 16 +++++++++
 gcc/testsuite/gcc.target/i386/pr111820-3.c | 16 +++++++++
 gcc/tree-vect-loop-manip.cc                | 28 +++++++++++++--
 gcc/tree-vect-loop.cc                      | 34 ++++++++++++++++---
 7 files changed, 110 insertions(+), 16 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-3.c

diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
index 640c34fd959..913d7737dcd 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
@@ -11,7 +11,7 @@ foo_mul (int* a, int b)
   for (int i = 0; i != N; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -23,7 +23,7 @@ foo_mul_const (int* a)
   for (int i = 0; i != N; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
   for (int i = 0; i != 39; i++)
     {
      a[i] = b;
-      b *= 3;
+      b *= 4;
    }
 }
 
@@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
   for (int i = 0; i != 39; i++)
     {
      a[i] = b;
-      b *= 3;
+      b *= 4;
    }
 }
diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
index 39fdea3a69d..b2ff186e335 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-2.c
@@ -16,12 +16,12 @@ avx2_test (void)
   __builtin_memset (epi32_exp, 0, N * sizeof (int));
 
   int b = 8;
-  v8si init = __extension__(v8si) { b, b * 3, b * 9, b * 27, b * 81, b * 243, b * 729, b * 2187 };
+  v8si init = __extension__(v8si) { b, b * 4, b * 16, b * 64, b * 256, b * 1024, b * 4096, b * 16384 };
 
   for (int i = 0; i != N / 8; i++)
     {
       memcpy (epi32_exp + i * 8, &init, 32);
-      init *= 6561;
+      init *= 65536;
     }
 
   foo_mul (epi32_dst, b);
@@ -32,11 +32,11 @@ avx2_test (void)
   if (__builtin_memcmp (epi32_dst, epi32_exp, 39 * 4) != 0)
     __builtin_abort ();
 
-  init = __extension__(v8si) { 1, 3, 9, 27, 81, 243, 729, 2187 };
+  init = __extension__(v8si) { 1, 4, 16, 64, 256, 1024, 4096, 16384 };
 
   for (int i = 0; i != N / 8; i++)
     {
       memcpy (epi32_exp + i * 8, &init, 32);
-      init *= 6561;
+      init *= 65536;
     }
 
   foo_mul_const (epi32_dst);
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
new file mode 100644
index 00000000000..50e960c39d4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
+/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
+
+int r;
+int r_0;
+
+void f1 (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r;
+      r *= 3;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
new file mode 100644
index 00000000000..dbeceb228c3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r ;
+      r *= 2;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-3.c b/gcc/testsuite/gcc.target/i386/pr111820-3.c
new file mode 100644
index 00000000000..b778f517663
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-3.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+  int n = 14;
+  while (-- n)
+    {
+      r_0 += r ;
+      r *= 3;
+    }
+}
diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
index 2608c286e5d..a530088b61d 100644
--- a/gcc/tree-vect-loop-manip.cc
+++ b/gcc/tree-vect-loop-manip.cc
@@ -1783,8 +1783,10 @@ iv_phi_p (stmt_vec_info stmt_info)
 /* Return true if vectorizer can peel for nonlinear iv.  */
 static bool
 vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
-                              enum vect_induction_op_type induction_type)
+                              stmt_vec_info stmt_info)
 {
+  enum vect_induction_op_type induction_type
+    = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
   tree niters_skip;
   /* Init_expr will be update by vect_update_ivs_after_vectorizer,
      if niters or vf is unkown:
@@ -1805,11 +1807,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
       return false;
     }
 
+  /* Avoid compile time hog on vect_peel_nonlinear_iv_init.  */
+  if (induction_type == vect_step_op_mul)
+    {
+      tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
+      tree type = TREE_TYPE (step_expr);
+
+      if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
+          && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))
+        {
+          if (dump_enabled_p ())
+            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+                             "Avoid compile time hog on"
+                             " vect_peel_nonlinear_iv_init"
+                             " for nonlinear induction vec_step_op_mul"
+                             " when iteration count is too big.\n");
+          return false;
+        }
+    }
+
   /* Also doens't support peel for neg when niter is variable.
      ??? generate something like niter_expr & 1 ? init_expr : -init_expr?  */
   niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
   if ((niters_skip != NULL_TREE
-       && TREE_CODE (niters_skip) != INTEGER_CST)
+       && (TREE_CODE (niters_skip) != INTEGER_CST
+           || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))
       || (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
           && LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
     {
@@ -1870,7 +1892,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
       induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
       if (induction_type != vect_step_op_add)
        {
-         if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
+         if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
            return false;
 
          continue;
diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
index 89bdcaa0910..f0dbba50786 100644
--- a/gcc/tree-vect-loop.cc
+++ b/gcc/tree-vect-loop.cc
@@ -9134,11 +9134,35 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
       init_expr = gimple_convert (stmts, utype, init_expr);
       unsigned skipn = TREE_INT_CST_LOW (skip_niters);
       wide_int begin = wi::to_wide (step_expr);
-      for (unsigned i = 0; i != skipn - 1; i++)
-        begin = wi::mul (begin, wi::to_wide (step_expr));
-      tree mult_expr = wide_int_to_tree (utype, begin);
-      init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
-      init_expr = gimple_convert (stmts, type, init_expr);
+      int pow2_step = wi::exact_log2 (begin);
+      /* Optimize init_expr * pow (step_expr, skipn) to
+         init_expr << (log2 (step_expr) * skipn).  */
+      if (pow2_step != -1)
+        {
+          if (skipn >= TYPE_PRECISION (type)
+              || skipn > (UINT_MAX / (unsigned) pow2_step)
+              || skipn * (unsigned) pow2_step >= TYPE_PRECISION (type))
+            init_expr = build_zero_cst (type);
+          else
+            {
+              tree lshc = build_int_cst (utype, skipn * (unsigned) pow2_step);
+              init_expr = gimple_build (stmts, LSHIFT_EXPR, utype,
+                                        init_expr, lshc);
+            }
+        }
+      /* Any better way for init_expr * pow (step_expr, skipn)???.  */
+      else
+        {
+          gcc_assert (skipn < TYPE_PRECISION (type));
+          auto_mpz base, exp;
+          wi::to_mpz (begin, base, TYPE_SIGN (type));
+          mpz_pow_ui (exp, base, skipn);
+          begin = wi::from_mpz (type, exp, TYPE_SIGN (type));
+          tree mult_expr = wide_int_to_tree (utype, begin);
+          init_expr = gimple_build (stmts, MULT_EXPR, utype,
+                                    init_expr, mult_expr);
+        }
+      init_expr = gimple_convert (stmts, type, init_expr);
     }
     break;
-- 
2.31.1