From: "Roger Sayle"
To: "'Richard Biener'"
Cc: "'GCC Patches'"
Subject: [PATCH take #2] Fold truncations of left shifts in match.pd
Date: Sun, 5 Jun 2022 12:12:36 +0100

Hi Richard,

Many thanks for taking the time to explain how vectorization is supposed
to work.  I now see that vect_recog_rotate_pattern in tree-vect-patterns.cc
is supposed to handle the lowering of rotations to (vector) shifts, and I
completely agree that adding support for signed types (using appropriate
casts to unsigned_type_for, and casting the result back to the original
signed type) is a better approach that avoids the regression of pr98674.c.
I've also implemented your suggestion of combining the proposed new
(convert (lshift @1 INTEGER_CST@2)) transformation with the existing one,
and at the same time included support for folding valid shifts wider than
the narrower type, such as (short)(x << 20), to constant zero.  Although
this optimization is already performed during the tree-ssa passes, it is
convenient to also catch it here during constant folding.
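To make the intended folds concrete, here is a rough sketch (assuming the
usual 16-bit short and 32-bit int; the function names are purely for
exposition):

  short narrow_shift (short x)
  {
    /* (short)((int)x << 5): the shift count 5 is less than the precision
       of short, so the shift can be performed directly in the narrow
       type, i.e. x << 5 with no widening conversions.  */
    return x << 5;
  }

  short shift_to_zero (int x)
  {
    /* The shift count 20 is valid for int but no smaller than the
       precision of short, so every bit that survives the truncation is
       zero and the whole expression folds to the constant 0.  */
    return (short)(x << 20);
  }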
This revised patch has been tested on x86_64-pc-linux-gnu with make
bootstrap and make -k check, both with and without
--target_board=unix{-m32}, with no new failures.  Ok for mainline?

2022-06-05  Roger Sayle
            Richard Biener

gcc/ChangeLog
        * match.pd (convert (lshift @1 INTEGER_CST@2)): Narrow integer
        left shifts by a constant when the result is truncated, and the
        shift constant is well-defined.
        * tree-vect-patterns.cc (vect_recog_rotate_pattern): Add
        support for rotations of signed integer types, by lowering
        using unsigned vector shifts.

gcc/testsuite/ChangeLog
        * gcc.dg/fold-convlshift-4.c: New test case.
        * gcc.dg/optimize-bswaphi-1.c: Update found bswap count.
        * gcc.dg/tree-ssa/pr61839_3.c: Shift is now optimized before VRP.
        * gcc.dg/vect/vect-over-widen-1-big-array.c: Remove obsolete tests.
        * gcc.dg/vect/vect-over-widen-1.c: Likewise.
        * gcc.dg/vect/vect-over-widen-3-big-array.c: Likewise.
        * gcc.dg/vect/vect-over-widen-3.c: Likewise.
        * gcc.dg/vect/vect-over-widen-4-big-array.c: Likewise.
        * gcc.dg/vect/vect-over-widen-4.c: Likewise.

Thanks again,
Roger
--

> -----Original Message-----
> From: Richard Biener
> Sent: 02 June 2022 12:03
> To: Roger Sayle
> Cc: GCC Patches
> Subject: Re: [PATCH] Fold truncations of left shifts in match.pd
>
> On Thu, Jun 2, 2022 at 12:55 PM Roger Sayle wrote:
> >
> > Hi Richard,
> > > +      /* RTL expansion knows how to expand rotates using shift/or.  */
> > > +      if (icode == CODE_FOR_nothing
> > > +          && (code == LROTATE_EXPR || code == RROTATE_EXPR)
> > > +          && optab_handler (ior_optab, vec_mode) != CODE_FOR_nothing
> > > +          && optab_handler (ashl_optab, vec_mode) != CODE_FOR_nothing)
> > > +        icode = (int) optab_handler (lshr_optab, vec_mode);
> > >
> > > but we then get the vector costing wrong.
> >
> > The issue is that we currently get the (relative) vector costing wrong.
> > Currently for gcc.dg/vect/pr98674.c, the vectorizer thinks the scalar
> > code requires two shifts and an ior, so believes it's profitable to
> > vectorize this loop using two vector shifts and a vector ior.  But once
> > match.pd simplifies the truncate and recognizes the HImode rotate we
> > end up with:
> >
> > pr98674.c:6:16: note:  ==> examining statement: _6 = _1 r>> 8;
> > pr98674.c:6:16: note:  vect_is_simple_use: vectype vector(8) short int
> > pr98674.c:6:16: note:  vect_is_simple_use: operand 8, type of def: constant
> > pr98674.c:6:16: missed:  op not supported by target.
> > pr98674.c:8:33: missed:  not vectorized: relevant stmt not supported: _6 = _1 r>> 8;
> > pr98674.c:6:16: missed:  bad operation or unsupported loop bound.
> >
> > Clearly, it's a win to vectorize HImode rotates, when the backend can
> > perform 8 (or 16) rotations at a time, but using 3 vector instructions,
> > even when a scalar rotate can be performed in a single instruction.
> > Fundamentally, vectorization may still be desirable/profitable even
> > when the backend doesn't provide an optab.
>
> Yes, as said it's tree-vect-patterns.cc's job to handle this not natively
> supported rotate by re-writing it.  Can you check why
> vect_recog_rotate_pattern does not do this?  Ah, the code only handles
> !TYPE_UNSIGNED (type) - not sure why though (for rotates it should not
> matter, and for the lowered sequence we can convert to the desired
> signedness to get arithmetic/logical shifts)?
>
> > The current situation, where the i386 backend provides expanders to
> > lower rotations (or vcond) into individual instruction sequences, also
> > interferes with vector costing.  It's the vector cost function that
> > needs to be fixed, not the generated code made worse (or the backend
> > bloated performing its own RTL expansion workarounds).
> >
> > Is it instead ok to mark pr98674.c as XFAIL (a regression)?
> > The tweak to tree-vect-stmts.cc was based on the assumption that we
> > wished to continue vectorizing this loop.  Improving scalar code
> > generation really shouldn't disable vectorization like this.
>
> Yes, see above where the fix needs to be.  The pattern will then expose
> the shift and ior to the vectorizer, which are then properly costed.
>
> Richard.
>
> > Cheers,
> > Roger
> > --
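For reference, the lowering that vect_recog_rotate_pattern now performs
for the 16-bit signed rotate in pr98674.c (_6 = _1 r>> 8) can be modelled
in scalar C roughly as below.  This is only an illustrative sketch with an
invented function name; the actual transform emits the equivalent pattern
statements on GIMPLE:

  short rotate_right_8 (short x)
  {
    unsigned short ux = (unsigned short) x;  /* cast to unsigned_type_for */
    unsigned short hi = ux >> 8;             /* logical right shift */
    unsigned short lo = ux << 8;             /* left shift */
    return (short) (hi | lo);                /* ior, then cast back to the
                                                original signed type */
  }

The patch below implements this unsigned shift/ior lowering in
vect_recog_rotate_pattern, so the vectorizer again sees shifts and an ior
that it can cost and vectorize.
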
diff --git a/gcc/match.pd b/gcc/match.pd
index 2d3ffc4..bbcf9e2 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -3621,17 +3621,18 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
    (if (integer_zerop (@2) || integer_all_onesp (@2))
     (cmp @0 @2)))))
 
-/* Both signed and unsigned lshift produce the same result, so use
-   the form that minimizes the number of conversions.  Postpone this
-   transformation until after shifts by zero have been folded.  */
+/* Narrow a lshift by constant.  */
 (simplify
- (convert (lshift:s@0 (convert:s@1 @2) INTEGER_CST@3))
+ (convert (lshift:s@0 @1 INTEGER_CST@2))
  (if (INTEGRAL_TYPE_P (type)
-      && tree_nop_conversion_p (type, TREE_TYPE (@0))
-      && INTEGRAL_TYPE_P (TREE_TYPE (@2))
-      && TYPE_PRECISION (TREE_TYPE (@2)) <= TYPE_PRECISION (type)
-      && !integer_zerop (@3))
-  (lshift (convert @2) @3)))
+      && INTEGRAL_TYPE_P (TREE_TYPE (@0))
+      && !integer_zerop (@2)
+      && TYPE_PRECISION (type) <= TYPE_PRECISION (TREE_TYPE (@0)))
+  (if (TYPE_PRECISION (type) == TYPE_PRECISION (TREE_TYPE (@0))
+       || wi::ltu_p (wi::to_wide (@2), TYPE_PRECISION (type)))
+   (lshift (convert @1) @2)
+   (if (wi::ltu_p (wi::to_wide (@2), TYPE_PRECISION (TREE_TYPE (@0))))
+    { build_zero_cst (type); }))))
 
 /* Simplifications of conversions.  */
 
diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
index 0fad4db..8f62486 100644
--- a/gcc/tree-vect-patterns.cc
+++ b/gcc/tree-vect-patterns.cc
@@ -2614,8 +2614,7 @@ vect_recog_rotate_pattern (vec_info *vinfo,
       || TYPE_PRECISION (TREE_TYPE (lhs)) != 16
       || TYPE_PRECISION (type) <= 16
       || TREE_CODE (oprnd0) != SSA_NAME
-      || BITS_PER_UNIT != 8
-      || !TYPE_UNSIGNED (TREE_TYPE (lhs)))
+      || BITS_PER_UNIT != 8)
     return NULL;
 
   stmt_vec_info def_stmt_info;
@@ -2688,8 +2687,7 @@ vect_recog_rotate_pattern (vec_info *vinfo,
 
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TYPE_PRECISION (TREE_TYPE (lhs)) != TYPE_PRECISION (type)
-      || !INTEGRAL_TYPE_P (type)
-      || !TYPE_UNSIGNED (type))
+      || !INTEGRAL_TYPE_P (type))
     return NULL;
 
   stmt_vec_info def_stmt_info;
@@ -2745,31 +2743,36 @@ vect_recog_rotate_pattern (vec_info *vinfo,
       goto use_rotate;
     }
 
+  tree utype = unsigned_type_for (type);
+  tree uvectype = get_vectype_for_scalar_type (vinfo, utype);
+  if (!uvectype)
+    return NULL;
+
   /* If vector/vector or vector/scalar shifts aren't supported by the target,
      don't do anything here either.  */
-  optab1 = optab_for_tree_code (LSHIFT_EXPR, vectype, optab_vector);
-  optab2 = optab_for_tree_code (RSHIFT_EXPR, vectype, optab_vector);
+  optab1 = optab_for_tree_code (LSHIFT_EXPR, uvectype, optab_vector);
+  optab2 = optab_for_tree_code (RSHIFT_EXPR, uvectype, optab_vector);
   if (!optab1
-      || optab_handler (optab1, TYPE_MODE (vectype)) == CODE_FOR_nothing
+      || optab_handler (optab1, TYPE_MODE (uvectype)) == CODE_FOR_nothing
      || !optab2
-      || optab_handler (optab2, TYPE_MODE (vectype)) == CODE_FOR_nothing)
+      || optab_handler (optab2, TYPE_MODE (uvectype)) == CODE_FOR_nothing)
    {
      if (! is_a <bb_vec_info> (vinfo) && dt == vect_internal_def)
        return NULL;
-      optab1 = optab_for_tree_code (LSHIFT_EXPR, vectype, optab_scalar);
-      optab2 = optab_for_tree_code (RSHIFT_EXPR, vectype, optab_scalar);
+      optab1 = optab_for_tree_code (LSHIFT_EXPR, uvectype, optab_scalar);
+      optab2 = optab_for_tree_code (RSHIFT_EXPR, uvectype, optab_scalar);
      if (!optab1
-          || optab_handler (optab1, TYPE_MODE (vectype)) == CODE_FOR_nothing
+          || optab_handler (optab1, TYPE_MODE (uvectype)) == CODE_FOR_nothing
          || !optab2
-          || optab_handler (optab2, TYPE_MODE (vectype)) == CODE_FOR_nothing)
+          || optab_handler (optab2, TYPE_MODE (uvectype)) == CODE_FOR_nothing)
        return NULL;
    }
 
   *type_out = vectype;
 
-  if (bswap16_p && !useless_type_conversion_p (type, TREE_TYPE (oprnd0)))
+  if (!useless_type_conversion_p (utype, TREE_TYPE (oprnd0)))
    {
-      def = vect_recog_temp_ssa_var (type, NULL);
+      def = vect_recog_temp_ssa_var (utype, NULL);
      def_stmt = gimple_build_assign (def, NOP_EXPR, oprnd0);
      append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt);
      oprnd0 = def;
@@ -2779,7 +2782,7 @@ vect_recog_rotate_pattern (vec_info *vinfo,
    ext_def = vect_get_external_def_edge (vinfo, oprnd1);
 
   def = NULL_TREE;
-  scalar_int_mode mode = SCALAR_INT_TYPE_MODE (type);
+  scalar_int_mode mode = SCALAR_INT_TYPE_MODE (utype);
   if (dt != vect_internal_def || TYPE_MODE (TREE_TYPE (oprnd1)) == mode)
    def = oprnd1;
   else if (def_stmt && gimple_assign_cast_p (def_stmt))
@@ -2793,7 +2796,7 @@ vect_recog_rotate_pattern (vec_info *vinfo,
 
   if (def == NULL_TREE)
    {
-      def = vect_recog_temp_ssa_var (type, NULL);
+      def = vect_recog_temp_ssa_var (utype, NULL);
      def_stmt = gimple_build_assign (def, NOP_EXPR, oprnd1);
      append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt);
    }
@@ -2839,13 +2842,13 @@ vect_recog_rotate_pattern (vec_info *vinfo,
      append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt, vecstype);
    }
 
-  var1 = vect_recog_temp_ssa_var (type, NULL);
+  var1 = vect_recog_temp_ssa_var (utype, NULL);
   def_stmt = gimple_build_assign (var1, rhs_code == LROTATE_EXPR
                                        ? LSHIFT_EXPR : RSHIFT_EXPR,
                                  oprnd0, def);
   append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt);
 
-  var2 = vect_recog_temp_ssa_var (type, NULL);
+  var2 = vect_recog_temp_ssa_var (utype, NULL);
   def_stmt = gimple_build_assign (var2, rhs_code == LROTATE_EXPR
                                        ? RSHIFT_EXPR : LSHIFT_EXPR,
                                  oprnd0, def2);
@@ -2855,9 +2858,15 @@ vect_recog_rotate_pattern (vec_info *vinfo,
   vect_pattern_detected ("vect_recog_rotate_pattern", last_stmt);
 
   /* Pattern supported.  Create a stmt to be used to replace the pattern.  */
-  var = vect_recog_temp_ssa_var (type, NULL);
+  var = vect_recog_temp_ssa_var (utype, NULL);
   pattern_stmt = gimple_build_assign (var, BIT_IOR_EXPR, var1, var2);
 
+  if (!useless_type_conversion_p (type, utype))
+    {
+      append_pattern_def_seq (vinfo, stmt_vinfo, pattern_stmt);
+      tree result = vect_recog_temp_ssa_var (type, NULL);
+      pattern_stmt = gimple_build_assign (result, NOP_EXPR, var);
+    }
   return pattern_stmt;
 }
 
diff --git a/gcc/testsuite/gcc.dg/fold-convlshift-4.c b/gcc/testsuite/gcc.dg/fold-convlshift-4.c
new file mode 100644
index 0000000..001627f
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/fold-convlshift-4.c
@@ -0,0 +1,9 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+short foo(short x)
+{
+  return x << 5;
+}
+
+/* { dg-final { scan-tree-dump-not "\\(int\\)" "optimized" } } */
+/* { dg-final { scan-tree-dump-not "\\(short int\\)" "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c b/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c
index d045da9..a5d8bfd 100644
--- a/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c
+++ b/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c
@@ -68,4 +68,4 @@ get_unaligned_16_be (unsigned char *p)
 
 
 /* { dg-final { scan-tree-dump-times "16 bit load in target endianness found at" 4 "bswap" } } */
-/* { dg-final { scan-tree-dump-times "16 bit bswap implementation found at" 5 "bswap" } } */
+/* { dg-final { scan-tree-dump-times "16 bit bswap implementation found at" 4 "bswap" } } */
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c b/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c
index bc2126f..38cf792 100644
--- a/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c
+++ b/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c
@@ -1,6 +1,6 @@
 /* PR tree-optimization/61839.  */
 /* { dg-do run } */
-/* { dg-options "-O2 -fdump-tree-vrp -fdump-tree-optimized -fdisable-tree-ethread -fdisable-tree-threadfull1" } */
+/* { dg-options "-O2 -fdump-tree-optimized -fdisable-tree-ethread -fdisable-tree-threadfull1" } */
 
 __attribute__ ((noinline))
 int foo (int a, unsigned b)
@@ -21,6 +21,4 @@ int main ()
   foo (-1, b);
 }
 
-/* Scan for c [12, 13] << 8 in function foo.  */
-/* { dg-final { scan-tree-dump-times "3072 : 3328" 1 "vrp1" } } */
 /* { dg-final { scan-tree-dump-times "3072" 0 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c
index 9e5f464..9a5141ee 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c
@@ -58,9 +58,7 @@ int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
 
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c
index c2d0797..f2d284c 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c
@@ -62,9 +62,7 @@ int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
 
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c
index 37da7c9..6f89aac 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c
@@ -59,9 +59,7 @@ int main (void)
   return 0;
 }
 
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 8} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 9} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
 
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c
index 4138480..a1e1182 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c
@@ -57,9 +57,7 @@ int main (void)
   return 0;
 }
 
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 8} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 9} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
 
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c
index 514337c..03a6e67 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c
@@ -62,9 +62,7 @@ int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
 
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c
index 3d536d5..0ef377f 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c
@@ -66,9 +66,7 @@ int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */