* [PATCH v2] vect: Fix vectorized BIT_FIELD_REF for signed bit-fields [PR110557]
@ 2023-07-07 13:18 Xi Ruoyao
2023-07-10 10:33 ` Richard Biener
0 siblings, 1 reply; 5+ messages in thread
From: Xi Ruoyao @ 2023-07-07 13:18 UTC (permalink / raw)
To: gcc-patches
Cc: Andre Vieira, Richard Biener, Jakub Jelinek, Hongtao Liu, Xi Ruoyao
If a bit-field is signed and it's wider than the output type, we must
ensure the extracted result is sign-extended. But this was not handled
correctly.
For example:
int x : 8;
long y : 55;
bool z : 1;
The vectorized extraction of y was:
vect__ifc__49.29_110 =
MEM <vector(2) long unsigned int> [(struct Item *)vectp_a.27_108];
vect_patt_38.30_112 =
vect__ifc__49.29_110 & { 9223372036854775552, 9223372036854775552 };
vect_patt_39.31_113 = vect_patt_38.30_112 >> 8;
vect_patt_40.32_114 =
VIEW_CONVERT_EXPR<vector(2) long int>(vect_patt_39.31_113);
This is obviously incorrect. This patch implements it as:
vect__ifc__25.16_62 =
MEM <vector(2) long unsigned int> [(struct Item *)vectp_a.14_60];
vect_patt_31.17_63 =
VIEW_CONVERT_EXPR<vector(2) long int>(vect__ifc__25.16_62);
vect_patt_32.18_64 = vect_patt_31.17_63 << 1;
vect_patt_33.19_65 = vect_patt_32.18_64 >> 9;
gcc/ChangeLog:
PR tree-optimization/110557
* tree-vect-patterns.cc (vect_recog_bitfield_ref_pattern):
Ensure the output is sign-extended if necessary.
gcc/testsuite/ChangeLog:
PR tree-optimization/110557
* g++.dg/vect/pr110557.cc: New test.
---
Change v1 -> v2:
- Rename two variables for readability.
- Remove a redundant useless_type_conversion_p check.
- Edit the comment for early conversion to show the rationale of
"|| ref_sext".
Bootstrapped (with BOOT_CFLAGS="-O3 -mavx2") and regtested on
x86_64-linux-gnu. Ok for trunk and gcc-13?
gcc/testsuite/g++.dg/vect/pr110557.cc | 37 ++++++++++++++++
gcc/tree-vect-patterns.cc | 62 ++++++++++++++++++++-------
2 files changed, 83 insertions(+), 16 deletions(-)
create mode 100644 gcc/testsuite/g++.dg/vect/pr110557.cc
diff --git a/gcc/testsuite/g++.dg/vect/pr110557.cc b/gcc/testsuite/g++.dg/vect/pr110557.cc
new file mode 100644
index 00000000000..e1fbe1caac4
--- /dev/null
+++ b/gcc/testsuite/g++.dg/vect/pr110557.cc
@@ -0,0 +1,37 @@
+// { dg-additional-options "-mavx" { target { avx_runtime } } }
+
+static inline long
+min (long a, long b)
+{
+ return a < b ? a : b;
+}
+
+struct Item
+{
+ int x : 8;
+ long y : 55;
+ bool z : 1;
+};
+
+__attribute__ ((noipa)) long
+test (Item *a, int cnt)
+{
+ long size = 0;
+ for (int i = 0; i < cnt; i++)
+ size = min ((long)a[i].y, size);
+ return size;
+}
+
+int
+main ()
+{
+ struct Item items[] = {
+ { 1, -1 },
+ { 2, -2 },
+ { 3, -3 },
+ { 4, -4 },
+ };
+
+ if (test (items, 4) != -4)
+ __builtin_trap ();
+}
diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
index 1bc36b043a0..c0832e8679f 100644
--- a/gcc/tree-vect-patterns.cc
+++ b/gcc/tree-vect-patterns.cc
@@ -2566,7 +2566,7 @@ vect_recog_widen_sum_pattern (vec_info *vinfo,
Widening with mask first, shift later:
container = (type_out) container;
masked = container & (((1 << bitsize) - 1) << bitpos);
- result = patt2 >> masked;
+ result = masked >> bitpos;
Widening with shift first, mask last:
container = (type_out) container;
@@ -2578,6 +2578,15 @@ vect_recog_widen_sum_pattern (vec_info *vinfo,
result = masked >> bitpos;
result = (type_out) result;
+ If the bitfield is signed and it's wider than type_out, we need to
+ keep the result sign-extended:
+ container = (type) container;
+ masked = container << (prec - bitsize - bitpos);
+ result = (type_out) (masked >> (prec - bitsize));
+
+ Here type is the signed variant of the wider of type_out and the type
+ of container.
+
The shifting is always optional depending on whether bitpos != 0.
*/
@@ -2636,14 +2645,22 @@ vect_recog_bitfield_ref_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
if (BYTES_BIG_ENDIAN)
shift_n = prec - shift_n - mask_width;
+ bool ref_sext = (!TYPE_UNSIGNED (TREE_TYPE (bf_ref)) &&
+ TYPE_PRECISION (ret_type) > mask_width);
+ bool load_widen = (TYPE_PRECISION (TREE_TYPE (container)) <
+ TYPE_PRECISION (ret_type));
+
/* We move the conversion earlier if the loaded type is smaller than the
- return type to enable the use of widening loads. */
- if (TYPE_PRECISION (TREE_TYPE (container)) < TYPE_PRECISION (ret_type)
- && !useless_type_conversion_p (TREE_TYPE (container), ret_type))
- {
- pattern_stmt
- = gimple_build_assign (vect_recog_temp_ssa_var (ret_type),
- NOP_EXPR, container);
+ return type to enable the use of widening loads. And if we need a
+ sign extension, we need to convert the loaded value early to a signed
+ type as well. */
+ if (ref_sext || load_widen)
+ {
+ tree type = load_widen ? ret_type : container_type;
+ if (ref_sext)
+ type = gimple_signed_type (type);
+ pattern_stmt = gimple_build_assign (vect_recog_temp_ssa_var (type),
+ NOP_EXPR, container);
container = gimple_get_lhs (pattern_stmt);
container_type = TREE_TYPE (container);
prec = tree_to_uhwi (TYPE_SIZE (container_type));
@@ -2671,7 +2688,7 @@ vect_recog_bitfield_ref_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
shift_first = true;
tree result;
- if (shift_first)
+ if (shift_first && !ref_sext)
{
tree shifted = container;
if (shift_n)
@@ -2694,14 +2711,27 @@ vect_recog_bitfield_ref_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
}
else
{
- tree mask = wide_int_to_tree (container_type,
- wi::shifted_mask (shift_n, mask_width,
- false, prec));
- pattern_stmt
- = gimple_build_assign (vect_recog_temp_ssa_var (container_type),
- BIT_AND_EXPR, container, mask);
- tree masked = gimple_assign_lhs (pattern_stmt);
+ tree temp = vect_recog_temp_ssa_var (container_type);
+ if (!ref_sext)
+ {
+ tree mask = wide_int_to_tree (container_type,
+ wi::shifted_mask (shift_n,
+ mask_width,
+ false, prec));
+ pattern_stmt = gimple_build_assign (temp, BIT_AND_EXPR,
+ container, mask);
+ }
+ else
+ {
+ HOST_WIDE_INT shl = prec - shift_n - mask_width;
+ shift_n += shl;
+ pattern_stmt = gimple_build_assign (temp, LSHIFT_EXPR,
+ container,
+ build_int_cst (sizetype,
+ shl));
+ }
+ tree masked = gimple_assign_lhs (pattern_stmt);
append_pattern_def_seq (vinfo, stmt_info, pattern_stmt, vectype);
pattern_stmt
= gimple_build_assign (vect_recog_temp_ssa_var (container_type),
--
2.41.0
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [PATCH v2] vect: Fix vectorized BIT_FIELD_REF for signed bit-fields [PR110557]
2023-07-07 13:18 [PATCH v2] vect: Fix vectorized BIT_FIELD_REF for signed bit-fields [PR110557] Xi Ruoyao
@ 2023-07-10 10:33 ` Richard Biener
2023-07-10 11:12 ` Pushed: " Xi Ruoyao
0 siblings, 1 reply; 5+ messages in thread
From: Richard Biener @ 2023-07-10 10:33 UTC (permalink / raw)
To: Xi Ruoyao; +Cc: gcc-patches, Andre Vieira, Jakub Jelinek, Hongtao Liu
On Fri, 7 Jul 2023, Xi Ruoyao wrote:
> If a bit-field is signed and it's wider than the output type, we must
> ensure the extracted result is sign-extended. But this was not handled
> correctly.
>
> For example:
>
> int x : 8;
> long y : 55;
> bool z : 1;
>
> The vectorized extraction of y was:
>
> vect__ifc__49.29_110 =
> MEM <vector(2) long unsigned int> [(struct Item *)vectp_a.27_108];
> vect_patt_38.30_112 =
> vect__ifc__49.29_110 & { 9223372036854775552, 9223372036854775552 };
> vect_patt_39.31_113 = vect_patt_38.30_112 >> 8;
> vect_patt_40.32_114 =
> VIEW_CONVERT_EXPR<vector(2) long int>(vect_patt_39.31_113);
>
> This is obviously incorrect. This patch implements it as:
>
> vect__ifc__25.16_62 =
> MEM <vector(2) long unsigned int> [(struct Item *)vectp_a.14_60];
> vect_patt_31.17_63 =
> VIEW_CONVERT_EXPR<vector(2) long int>(vect__ifc__25.16_62);
> vect_patt_32.18_64 = vect_patt_31.17_63 << 1;
> vect_patt_33.19_65 = vect_patt_32.18_64 >> 9;
OK.
Thanks,
Richard.
> [...]
--
Richard Biener <rguenther@suse.de>
SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 90461 Nuernberg,
Germany; GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman;
HRB 36809 (AG Nuernberg)
* Pushed: [PATCH v2] vect: Fix vectorized BIT_FIELD_REF for signed bit-fields [PR110557]
2023-07-10 10:33 ` Richard Biener
@ 2023-07-10 11:12 ` Xi Ruoyao
2023-07-11 7:34 ` Prathamesh Kulkarni
0 siblings, 1 reply; 5+ messages in thread
From: Xi Ruoyao @ 2023-07-10 11:12 UTC (permalink / raw)
To: Richard Biener; +Cc: gcc-patches, Andre Vieira, Jakub Jelinek, Hongtao Liu
On Mon, 2023-07-10 at 10:33 +0000, Richard Biener wrote:
> On Fri, 7 Jul 2023, Xi Ruoyao wrote:
>
> > [...]
>
> OK.
Pushed r14-2407 and r13-7553.
--
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University
* Re: Pushed: [PATCH v2] vect: Fix vectorized BIT_FIELD_REF for signed bit-fields [PR110557]
2023-07-10 11:12 ` Pushed: " Xi Ruoyao
@ 2023-07-11 7:34 ` Prathamesh Kulkarni
2023-07-11 8:12 ` [PATCH pushed] testsuite: Unbreak pr110557.cc where long is 32-bit (was Re: Pushed: [PATCH v2] vect: Fix vectorized BIT_FIELD_REF for signed bit-fields [PR110557]) Xi Ruoyao
0 siblings, 1 reply; 5+ messages in thread
From: Prathamesh Kulkarni @ 2023-07-11 7:34 UTC (permalink / raw)
To: Xi Ruoyao
Cc: Richard Biener, gcc-patches, Andre Vieira, Jakub Jelinek, Hongtao Liu
On Mon, 10 Jul 2023 at 16:43, Xi Ruoyao via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> On Mon, 2023-07-10 at 10:33 +0000, Richard Biener wrote:
> > [...]
>
> Pushed r14-2407 and r13-7553.
Hi Xi,
Your commit:
https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=63ae6bc60c0f67fb2791991bf4b6e7e0a907d420,
seems to cause following regressions on arm-linux-gnueabihf:
FAIL: g++.dg/vect/pr110557.cc -std=c++98 (test for excess errors)
FAIL: g++.dg/vect/pr110557.cc -std=c++14 (test for excess errors)
FAIL: g++.dg/vect/pr110557.cc -std=c++17 (test for excess errors)
FAIL: g++.dg/vect/pr110557.cc -std=c++20 (test for excess errors)
Excess error:
gcc/testsuite/g++.dg/vect/pr110557.cc:12:8: warning: width of
'Item::y' exceeds its type
Thanks,
Prathamesh
>
> --
> Xi Ruoyao <xry111@xry111.site>
> School of Aerospace Science and Technology, Xidian University
* [PATCH pushed] testsuite: Unbreak pr110557.cc where long is 32-bit (was Re: Pushed: [PATCH v2] vect: Fix vectorized BIT_FIELD_REF for signed bit-fields [PR110557])
2023-07-11 7:34 ` Prathamesh Kulkarni
@ 2023-07-11 8:12 ` Xi Ruoyao
0 siblings, 0 replies; 5+ messages in thread
From: Xi Ruoyao @ 2023-07-11 8:12 UTC (permalink / raw)
To: Prathamesh Kulkarni
Cc: Richard Biener, gcc-patches, Andre Vieira, Jakub Jelinek, Hongtao Liu
[-- Attachment #1: Type: text/plain, Size: 920 bytes --]
On Tue, 2023-07-11 at 13:04 +0530, Prathamesh Kulkarni wrote:
/* snip */
> Hi Xi,
> Your commit:
> https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=63ae6bc60c0f67fb2791991bf4b6e7e0a907d420,
>
> seems to cause following regressions on arm-linux-gnueabihf:
> FAIL: g++.dg/vect/pr110557.cc -std=c++98 (test for excess errors)
> FAIL: g++.dg/vect/pr110557.cc -std=c++14 (test for excess errors)
> FAIL: g++.dg/vect/pr110557.cc -std=c++17 (test for excess errors)
> FAIL: g++.dg/vect/pr110557.cc -std=c++20 (test for excess errors)
>
> Excess error:
> gcc/testsuite/g++.dg/vect/pr110557.cc:12:8: warning: width of
> 'Item::y' exceeds its type
Ah sorry, I didn't consider ports with 32-bit long.
The attached patch should fix the issue. It has been tested and pushed
as r14-2427 and r13-7555.
--
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University
[-- Attachment #2: 0001-testsuite-Unbreak-pr110557.cc-where-long-is-32-bit.patch --]
[-- Type: text/x-patch, Size: 1512 bytes --]
From 312839653b8295599c63cae90278a87af528edad Mon Sep 17 00:00:00 2001
From: Xi Ruoyao <xry111@xry111.site>
Date: Tue, 11 Jul 2023 15:55:54 +0800
Subject: [PATCH] testsuite: Unbreak pr110557.cc where long is 32-bit
On ports with 32-bit long, the test produced excess errors:
gcc/testsuite/g++.dg/vect/pr110557.cc:12:8: warning: width of
'Item::y' exceeds its type
Reported-by: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>
gcc/testsuite/ChangeLog:
* g++.dg/vect/pr110557.cc: Use long long instead of long for
64-bit type.
(test): Remove an unnecessary cast.
---
gcc/testsuite/g++.dg/vect/pr110557.cc | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/gcc/testsuite/g++.dg/vect/pr110557.cc b/gcc/testsuite/g++.dg/vect/pr110557.cc
index e1fbe1caac4..effb67e2df3 100644
--- a/gcc/testsuite/g++.dg/vect/pr110557.cc
+++ b/gcc/testsuite/g++.dg/vect/pr110557.cc
@@ -1,7 +1,9 @@
// { dg-additional-options "-mavx" { target { avx_runtime } } }
-static inline long
-min (long a, long b)
+typedef long long i64;
+
+static inline i64
+min (i64 a, i64 b)
{
return a < b ? a : b;
}
@@ -9,16 +11,16 @@ min (long a, long b)
struct Item
{
int x : 8;
- long y : 55;
+ i64 y : 55;
bool z : 1;
};
-__attribute__ ((noipa)) long
+__attribute__ ((noipa)) i64
test (Item *a, int cnt)
{
- long size = 0;
+ i64 size = 0;
for (int i = 0; i < cnt; i++)
- size = min ((long)a[i].y, size);
+ size = min (a[i].y, size);
return size;
}
--
2.41.0