public inbox for gcc-cvs@sourceware.org
From: William Schmidt <wschmidt@gcc.gnu.org>
To: gcc-cvs@gcc.gnu.org
Subject: [gcc(refs/users/wschmidt/heads/builtins4)] rs6000: More bug fixes
Date: Wed, 13 Jan 2021 14:58:42 +0000 (GMT)
Message-ID: <20210113145842.93D0B385783D@sourceware.org> (raw)

https://gcc.gnu.org/g:c960c4d15be4a8bb91185c562eb97d23d08d34e2

commit c960c4d15be4a8bb91185c562eb97d23d08d34e2
Author: Bill Schmidt <wschmidt@linux.ibm.com>
Date:   Wed Jan 13 08:58:23 2021 -0600

    rs6000: More bug fixes

    2021-01-13  Bill Schmidt  <wschmidt@linux.ibm.com>

    gcc/
    	* config/rs6000/rs6000-builtin-new.def: Assorted fixes.
    	* config/rs6000/rs6000-c.c (altivec_resolve_new_overloaded_builtin):
    	Remove incorrect gcc_assert.
    	* config/rs6000/rs6000-call.c (rs6000_init_builtins): Fix
    	initialization of ptr_intTI_type_node and ptr_uintTI_type_node.
    	* config/rs6000/rs6000-gen-builtins.c (typeinfo): Remove isopaque.
    	(match_type): Remove vop handling.
    	(construct_fntype_id): Remove isopaque handling.
    	(parse_ovld_entry): Commentary update.
    	* config/rs6000/rs6000-overload.def: Assorted fixes.

    gcc/testsuite/
    	* gcc.dg/vmx/ops.c: Remove deprecated calls.
    	* gcc.target/powerpc/altivec-7.c: Likewise.
    	* gcc.target/powerpc/bfp/scalar-test-neg-2.c: Adjust.
    	* gcc.target/powerpc/bfp/scalar-test-neg-3.c: Adjust.
    	* gcc.target/powerpc/bfp/scalar-test-neg-5.c: Adjust.
    	* gcc.target/powerpc/builtins-3-p9-runnable.c: Adjust.
Diff:
---
 gcc/config/rs6000/rs6000-builtin-new.def           | 173 ++++++++++-----------
 gcc/config/rs6000/rs6000-c.c                       |   1 -
 gcc/config/rs6000/rs6000-call.c                    |   5 +-
 gcc/config/rs6000/rs6000-gen-builtins.c            |  49 ++----
 gcc/config/rs6000/rs6000-overload.def              |  47 +++---
 gcc/testsuite/gcc.dg/vmx/ops.c                     |  86 ----------
 gcc/testsuite/gcc.target/powerpc/altivec-7.c       |  29 ++--
 .../gcc.target/powerpc/bfp/scalar-test-neg-2.c     |   2 +-
 .../gcc.target/powerpc/bfp/scalar-test-neg-3.c     |   2 +-
 .../gcc.target/powerpc/bfp/scalar-test-neg-5.c     |   2 +-
 .../gcc.target/powerpc/builtins-3-p9-runnable.c    |   8 +-
 11 files changed, 139 insertions(+), 265 deletions(-)

diff --git a/gcc/config/rs6000/rs6000-builtin-new.def b/gcc/config/rs6000/rs6000-builtin-new.def
index 6fa121bb38d..68d1a7d361a 100644
--- a/gcc/config/rs6000/rs6000-builtin-new.def
+++ b/gcc/config/rs6000/rs6000-builtin-new.def
@@ -81,7 +81,6 @@
 ; vp    vector pixel
 ; vf    vector float
 ; vd    vector double
-; vop   opaque vector (matches all vectors)
 ;
 ; For simplicity, We don't support "short int" and "long long int".
 ; We don't currently support a <basetype> of "bool", "long double",
@@ -127,30 +126,20 @@
 ;
 ; It is important to note that each entry's <bif-name> must be
 ; unique.  The code generated from this file will call def_builtin
-; for each entry, and this can only happen once per name.  This
-; means that in some cases we currently retain some tricks from
-; the old builtin support to aid with overloading.  This
-; unfortunately seems to be necessary for backward compatibility.
+; for each entry, and this can only happen once per name.
 ;
-; The two tricks at our disposal are the void pointer and the "vop"
-; vector type.  We use void pointers anywhere that pointer types
-; are accepted (primarily for vector load/store built-ins).  In
-; practice this means that we accept pointers to anything, not
-; just to the types that we intend.  We use the "vop" vector type
-; anytime that a built-in must accept vector types that have
-; different modes.  This is an opaque type that will match any
-; vector type, which may mean matching vector types that we don't
-; intend.
+; The type signature for the builtin must match the modes of the RTL
+; pattern <bif-pattern>.  When a builtin is used only as a basis for
+; overloading, you can use an arbitrary type for each mode (for example,
+; for V8HImode, you could use vp, vss, vus, or vbs).  The overloading
+; machinery takes care of adding appropriate casts between vectors to
+; satisfy impedance matching.  The overloaded prototypes are the ones
+; that must match what users expect.  Thus you will often have a small
+; number of entries in this file that correspond to a much greater
+; number of entries in rs6000-overload.def.
 ;
-; We can improve on "vop" when a vector argument or return type is
-; limited to one mode.  For example, "vsll" and "vull" both map to
-; V2DImode.  In this case, we can arbitrarily pick one of the
-; acceptable types to use in the prototype.  The signature used by
-; def_builtin is based on modes, not types, so this works well.
-; Only use "vop" when there is no alternative.  When there is a
-; choice, best practice is to use the signed type ("vsll" in the
-; example above) unless the choices are unsigned and bool, in
-; which case the unsigned type should be used.
+; However, builtins in this file that are expected to be directly called
+; by users must have one version for each expected type combination.
 ;
 ; Eventually we want to automatically generate built-in documentation
 ; from the entries in this file.  Documenting of built-ins with more
@@ -354,12 +343,24 @@
   vuc __builtin_altivec_mask_for_load (const void *);
     MASK_FOR_LOAD altivec_lvsr_direct {ldstmask}
 
-  vus __builtin_altivec_mfvscr ();
+  vss __builtin_altivec_mfvscr ();
     MFVSCR altivec_mfvscr {}
 
-  void __builtin_altivec_mtvscr (vop);
+  void __builtin_altivec_mtvscr (vsi);
     MTVSCR altivec_mtvscr {}
 
+  const vsll __builtin_altivec_vmulesw (vsi, vsi);
+    VMULESW vec_widen_smult_even_v4si {}
+
+  const vull __builtin_altivec_vmuleuw (vui, vui);
+    VMULEUW vec_widen_umult_even_v4si {}
+
+  const vsll __builtin_altivec_vmulosw (vsi, vsi);
+    VMULOSW vec_widen_smult_odd_v4si {}
+
+  const vull __builtin_altivec_vmulouw (vui, vui);
+    VMULOUW vec_widen_umult_odd_v4si {}
+
   const vsc __builtin_altivec_nabs_v16qi (vsc);
     NABS_V16QI nabsv16qi2 {}
@@ -441,7 +442,7 @@
   const vus __builtin_altivec_vadduhs (vus, vus);
     VADDUHS altivec_vadduhs {}
 
-  const vui __builtin_altivec_vadduwm (vui, vui);
+  const vsi __builtin_altivec_vadduwm (vsi, vsi);
     VADDUWM addv4si3 {}
 
   const vui __builtin_altivec_vadduws (vui, vui);
@@ -528,19 +529,19 @@
   const vsc __builtin_altivec_vcmpequb (vuc, vuc);
     VCMPEQUB vector_eqv16qi {}
 
-  const int __builtin_altivec_vcmpequb_p (int, vuc, vuc);
+  const int __builtin_altivec_vcmpequb_p (int, vsc, vsc);
    VCMPEQUB_P vector_eq_v16qi_p {pred}
 
   const vss __builtin_altivec_vcmpequh (vus, vus);
     VCMPEQUH vector_eqv8hi {}
 
-  const int __builtin_altivec_vcmpequh_p (int, vus, vus);
+  const int __builtin_altivec_vcmpequh_p (int, vss, vss);
    VCMPEQUH_P vector_eq_v8hi_p {pred}
 
   const vsi __builtin_altivec_vcmpequw (vui, vui);
     VCMPEQUW vector_eqv4si {}
 
-  const int __builtin_altivec_vcmpequw_p (int, vui, vui);
+  const int __builtin_altivec_vcmpequw_p (int, vsi, vsi);
    VCMPEQUW_P vector_eq_v4si_p {pred}
 
   const vf __builtin_altivec_vcmpgefp (vf, vf);
@@ -576,19 +577,19 @@
   const vsc __builtin_altivec_vcmpgtub (vuc, vuc);
     VCMPGTUB vector_gtuv16qi {}
 
-  const int __builtin_altivec_vcmpgtub_p (int, vuc, vuc);
+  const int __builtin_altivec_vcmpgtub_p (int, vsc, vsc);
    VCMPGTUB_P vector_gtu_v16qi_p {pred}
 
   const vss __builtin_altivec_vcmpgtuh (vus, vus);
     VCMPGTUH vector_gtuv8hi {}
 
-  const int __builtin_altivec_vcmpgtuh_p (int, vus, vus);
+  const int __builtin_altivec_vcmpgtuh_p (int, vss, vss);
    VCMPGTUH_P vector_gtu_v8hi_p {pred}
 
   const vsi __builtin_altivec_vcmpgtuw (vui, vui);
     VCMPGTUW vector_gtuv4si {}
 
-  const int __builtin_altivec_vcmpgtuw_p (int, vui, vui);
+  const int __builtin_altivec_vcmpgtuw_p (int, vsi, vsi);
    VCMPGTUW_P vector_gtu_v4si_p {pred}
 
   const vsi __builtin_altivec_vctsxs (vf, const int<5>);
@@ -873,7 +874,7 @@
   const vuq __builtin_altivec_vsel_1ti_uns (vuq, vuq, vuq);
     VSEL_1TI_UNS vector_select_v1ti_uns {}
 
-  const vf __builtin_altivec_vsel_4sf (vf, vf, vui);
+  const vf __builtin_altivec_vsel_4sf (vf, vf, vf);
    VSEL_4SF vector_select_v4sf {}
 
   const vsi __builtin_altivec_vsel_4si (vsi, vsi, vui);
@@ -888,7 +889,7 @@
   const vus __builtin_altivec_vsel_8hi_uns (vus, vus, vus);
     VSEL_8HI_UNS vector_select_v8hi_uns {}
 
-  const vop __builtin_altivec_vsl (vop, vuc);
+  const vsi __builtin_altivec_vsl (vsi, vsi);
    VSL altivec_vsl {}
 
   const vsc __builtin_altivec_vslb (vsc, vuc);
@@ -909,7 +910,7 @@
   const vss __builtin_altivec_vslh (vss, vus);
     VSLH vashlv8hi3 {}
 
-  const vop __builtin_altivec_vslo (vop, vuc);
+  const vsi __builtin_altivec_vslo (vsi, vsi);
    VSLO altivec_vslo {}
 
   const vsi __builtin_altivec_vslw (vsi, vui);
@@ -933,7 +934,7 @@
   const vsi __builtin_altivec_vspltw (vsi, const int<2>);
     VSPLTW altivec_vspltw {}
 
-  const vop __builtin_altivec_vsr (vop, vop);
+  const vsi __builtin_altivec_vsr (vsi, vsi);
    VSR altivec_vsr {}
 
   const vsc __builtin_altivec_vsrab (vsc, vuc);
@@ -951,7 +952,7 @@
   const vss __builtin_altivec_vsrh (vss, vus);
     VSRH vlshrv8hi3 {}
 
-  const vop __builtin_altivec_vsro (vop, vop);
+  const vsi __builtin_altivec_vsro (vsi, vsi);
    VSRO altivec_vsro {}
 
   const vsi __builtin_altivec_vsrw (vsi, vui);
@@ -1086,28 +1087,28 @@
 ; Cell builtins.
 [cell]
-  pure vop __builtin_altivec_lvlx (signed long long, const void *);
+  pure vuc __builtin_altivec_lvlx (signed long long, const void *);
    LVLX altivec_lvlx {ldvec}
 
-  pure vop __builtin_altivec_lvlxl (signed long long, const void *);
+  pure vuc __builtin_altivec_lvlxl (signed long long, const void *);
    LVLXL altivec_lvlxl {ldvec}
 
-  pure vop __builtin_altivec_lvrx (signed long long, const void *);
+  pure vuc __builtin_altivec_lvrx (signed long long, const void *);
    LVRX altivec_lvrx {ldvec}
 
-  pure vop __builtin_altivec_lvrxl (signed long long, const void *);
+  pure vuc __builtin_altivec_lvrxl (signed long long, const void *);
    LVRXL altivec_lvrxl {ldvec}
 
-  void __builtin_altivec_stvlx (vop, signed long long, void *);
+  void __builtin_altivec_stvlx (vuc, signed long long, void *);
    STVLX altivec_stvlx {stvec}
 
-  void __builtin_altivec_stvlxl (vop, signed long long, void *);
+  void __builtin_altivec_stvlxl (vuc, signed long long, void *);
    STVLXL altivec_stvlxl {stvec}
 
-  void __builtin_altivec_stvrx (vop, signed long long, void *);
+  void __builtin_altivec_stvrx (vuc, signed long long, void *);
    STVRX altivec_stvrx {stvec}
 
-  void __builtin_altivec_stvrxl (vop, signed long long, void *);
+  void __builtin_altivec_stvrxl (vuc, signed long long, void *);
    STVRXL altivec_stvrxl {stvec}
@@ -1167,6 +1168,24 @@
   const vull __builtin_altivec_vandc_v2di_uns (vull, vull);
     VANDC_V2DI_UNS andcv2di3 {}
 
+  const vbll __builtin_altivec_vcmpequd (vull, vull);
+    VCMPEQUD vector_eqv2di {}
+
+  const int __builtin_altivec_vcmpequd_p (int, vsll, vsll);
+    VCMPEQUD_P vector_eq_v2di_p {pred}
+
+  const vsll __builtin_altivec_vcmpgtsd (vsll, vsll);
+    VCMPGTSD vector_gtv2di {}
+
+  const int __builtin_altivec_vcmpgtsd_p (int, vsll, vsll);
+    VCMPGTSD_P vector_gt_v2di_p {pred}
+
+  const vsll __builtin_altivec_vcmpgtud (vull, vull);
+    VCMPGTUD vector_gtuv2di {}
+
+  const int __builtin_altivec_vcmpgtud_p (int, vull, vull);
+    VCMPGTUD_P vector_gtu_v2di_p {pred}
+
   const vd __builtin_altivec_vnor_v2df (vd, vd);
     VNOR_V2DF norv2df3 {}
@@ -1717,11 +1736,11 @@
     XVCVSXWDP vsx_xvcvsxwdp {}
 
 ; Need to pick one or the other here!! ####
-; Second one is used in the overload table (old and new) for VEC_FLOAT.
-; const vf __builtin_vsx_xvcvsxwsp (vsi);
-;   XVCVSXWSP vsx_floatv4siv4sf2 {}
+; The first is needed to make vec_float work correctly.
   const vf __builtin_vsx_xvcvsxwsp (vsi);
-    XVCVSXWSP_V4SF vsx_xvcvsxwdp {}
+    XVCVSXWSP vsx_floatv4siv4sf2 {}
+; const vf __builtin_vsx_xvcvsxwsp (vsi);
+;   XVCVSXWSP_V4SF vsx_xvcvsxwdp {}
 
   const vd __builtin_vsx_xvcvuxddp (vull);
     XVCVUXDDP vsx_floatunsv2div2df2 {}
@@ -1740,11 +1759,11 @@
     XVCVUXWDP vsx_xvcvuxwdp {}
 
 ; Need to pick one or the other here!! ####
-; Second one is used in the overload table (old and new) for VEC_FLOAT.
-; const vf __builtin_vsx_xvcvuxwsp (vui);
-;   XVCVUXWSP vsx_floatunsv4siv4sf2 {}
+; The first is needed to make vec_float work correctly.
   const vf __builtin_vsx_xvcvuxwsp (vui);
-    XVCVUXWSP_V4SF vsx_xvcvuxwsp {}
+    XVCVUXWSP vsx_floatunsv4siv4sf2 {}
+; const vf __builtin_vsx_xvcvuxwsp (vui);
+;   XVCVUXWSP_V4SF vsx_xvcvuxwsp {}
 
   fpmath vd __builtin_vsx_xvdivdp (vd, vd);
     XVDIVDP divv2df3 {}
@@ -2199,24 +2218,6 @@
   const vuc __builtin_altivec_vbpermq2 (vuc, vuc);
     VBPERMQ2 altivec_vbpermq2 {}
 
-  const vbll __builtin_altivec_vcmpequd (vull, vull);
-    VCMPEQUD vector_eqv2di {}
-
-  const int __builtin_altivec_vcmpequd_p (int, vsll, vsll);
-    VCMPEQUD_P vector_eq_v2di_p {pred}
-
-  const vsll __builtin_altivec_vcmpgtsd (vsll, vsll);
-    VCMPGTSD vector_gtv2di {}
-
-  const int __builtin_altivec_vcmpgtsd_p (int, vsll, vsll);
-    VCMPGTSD_P vector_gt_v2di_p {pred}
-
-  const vsll __builtin_altivec_vcmpgtud (vull, vull);
-    VCMPGTUD vector_gtuv2di {}
-
-  const int __builtin_altivec_vcmpgtud_p (int, vull, vull);
-    VCMPGTUD_P vector_gtu_v2di_p {pred}
-
   const vsll __builtin_altivec_vmaxsd (vsll, vsll);
     VMAXSD smaxv2di3 {}
@@ -2253,18 +2254,6 @@
   const vsi __builtin_altivec_vmrgow_v4si (vsi, vsi);
     VMRGOW_V4SI p8_vmrgow_v4si {}
 
-  const vsll __builtin_altivec_vmulesw (vsi, vsi);
-    VMULESW vec_widen_smult_even_v4si {}
-
-  const vull __builtin_altivec_vmuleuw (vui, vui);
-    VMULEUW vec_widen_umult_even_v4si {}
-
-  const vsll __builtin_altivec_vmulosw (vsi, vsi);
-    VMULOSW vec_widen_smult_odd_v4si {}
-
-  const vull __builtin_altivec_vmulouw (vui, vui);
-    VMULOUW vec_widen_umult_odd_v4si {}
-
   const vsc __builtin_altivec_vpermxor (vsc, vsc, vsc);
     VPERMXOR altivec_vpermxor {}
@@ -2296,13 +2285,13 @@
 ; const vull __builtin_altivec_vpmsumw (vui, vui);
 ;   VPMSUMW crypto_vpmsumw {}
 
-  const vsc __builtin_altivec_vpopcntb (vsc);
+  const vuc __builtin_altivec_vpopcntb (vsc);
    VPOPCNTB popcountv16qi2 {}
 
-  const vsll __builtin_altivec_vpopcntd (vsll);
+  const vull __builtin_altivec_vpopcntd (vsll);
    VPOPCNTD popcountv2di2 {}
 
-  const vss __builtin_altivec_vpopcnth (vss);
+  const vus __builtin_altivec_vpopcnth (vss);
    VPOPCNTH popcountv8hi2 {}
 
   const vuc __builtin_altivec_vpopcntub (vuc);
@@ -2317,7 +2306,7 @@
   const vui __builtin_altivec_vpopcntuw (vui);
     VPOPCNTUW popcountv4si2 {}
 
-  const vsi __builtin_altivec_vpopcntw (vsi);
+  const vui __builtin_altivec_vpopcntw (vsi);
    VPOPCNTW popcountv4si2 {}
 
   const vsll __builtin_altivec_vrld (vsll, vull);
@@ -2716,10 +2705,10 @@
   const vuc __builtin_vsx_insert4b (vsi, vuc, const int[0,12]);
     INSERT4B insert4b {}
 
-  const vd __builtin_vsx_insert_exp_dp (vop, vull);
+  const vd __builtin_vsx_insert_exp_dp (vd, vd);
    VIEDP xviexpdp {}
 
-  const vf __builtin_vsx_insert_exp_sp (vop, vui);
+  const vf __builtin_vsx_insert_exp_sp (vf, vf);
    VIESP xviexpsp {}
 
   const signed int __builtin_vsx_scalar_cmp_exp_dp_eq (double, double);
@@ -2833,13 +2822,13 @@
   void __builtin_altivec_xst_len_r (vsc, void *, long long);
     XST_LEN_R xst_len_r {}
 
-  void __builtin_altivec_stxvl (vop, void *, long long);
+  void __builtin_altivec_stxvl (vuc, void *, long long);
    STXVL stxvl {}
 
   const signed int __builtin_scalar_byte_in_set (unsigned char, unsigned long long);
     CMPEQB cmpeqb {}
 
-  pure vop __builtin_vsx_lxvl (const void *, unsigned long long);
+  pure vuc __builtin_vsx_lxvl (const void *, unsigned long long);
    LXVL lxvl {}
 
   const unsigned int __builtin_vsx_scalar_extract_exp (double);
@@ -3374,7 +3363,7 @@
   const vus __builtin_vsx_xxblend_v8hi (vus, vus, vus);
     VXXBLEND_V8HI xxblend_v8hi {}
 
-  const vop __builtin_vsx_xxeval (vop, vop, vop, const int <8>);
+  const vull __builtin_vsx_xxeval (vull, vull, vull, const int <8>);
    XXEVAL xxeval {}
 
   const vuc __builtin_vsx_xxgenpcvm_v16qi (vuc, const int <2>);
diff --git a/gcc/config/rs6000/rs6000-c.c b/gcc/config/rs6000/rs6000-c.c
index 1de614ba2b3..4e535e535ac 100644
--- a/gcc/config/rs6000/rs6000-c.c
+++ b/gcc/config/rs6000/rs6000-c.c
@@ -2947,7 +2947,6 @@ altivec_resolve_new_overloaded_builtin (location_t loc, tree fndecl,
 	      break;
 	    }
 	}
-      gcc_assert (unsupported_builtin);
     }
 
   if (unsupported_builtin)
diff --git a/gcc/config/rs6000/rs6000-call.c b/gcc/config/rs6000/rs6000-call.c
index c96410b411b..fde516ab7cb 100644
--- a/gcc/config/rs6000/rs6000-call.c
+++ b/gcc/config/rs6000/rs6000-call.c
@@ -15624,6 +15624,7 @@ rs6000_init_builtins (void)
     = build_pointer_type (build_qualified_type (unsigned_V4SI_type_node,
 						TYPE_QUAL_CONST));
 
+  /* #### Should just always be long long??? */
   unsigned_V2DI_type_node
     = rs6000_vector_type (TARGET_POWERPC64
			   ? "__vector unsigned long"
			   : "__vector unsigned long long",
@@ -15711,10 +15712,10 @@ rs6000_init_builtins (void)
     = build_pointer_type (build_qualified_type (uintDI_type_internal_node,
 						TYPE_QUAL_CONST));
   ptr_intTI_type_node
-    = build_pointer_type (build_qualified_type (intDI_type_internal_node,
+    = build_pointer_type (build_qualified_type (intTI_type_internal_node,
 						TYPE_QUAL_CONST));
   ptr_uintTI_type_node
-    = build_pointer_type (build_qualified_type (uintDI_type_internal_node,
+    = build_pointer_type (build_qualified_type (uintTI_type_internal_node,
 						TYPE_QUAL_CONST));
   ptr_long_integer_type_node
     = build_pointer_type
diff --git a/gcc/config/rs6000/rs6000-gen-builtins.c b/gcc/config/rs6000/rs6000-gen-builtins.c
index 6405b0f7a56..8f129f47d8c 100644
--- a/gcc/config/rs6000/rs6000-gen-builtins.c
+++ b/gcc/config/rs6000/rs6000-gen-builtins.c
@@ -317,7 +317,6 @@ struct typeinfo {
   char isbool;
   char ispixel;
   char ispointer;
-  char isopaque;
   basetype base;
   restriction restr;
   int val1;
@@ -474,7 +473,6 @@ static typemap type_map[TYPE_MAP_SIZE] =
   { "if",	"ibm128_float" },
   { "ld",	"long_double" },
   { "lg",	"long_integer" },
-  { "opaque",	"opaque_V4SI" },
   { "pbv16qi",	"ptr_bool_V16QI" },
   { "pbv2di",	"ptr_bool_V2DI" },
   { "pbv4si",	"ptr_bool_V4SI" },
@@ -972,7 +970,6 @@ match_type (typeinfo *typedata, int voidok)
      vd      vector double
      v256    __vector_pair
      v512    __vector_quad
-     vop     opaque vector (matches all vectors)
 
    For simplicity, We don't support "short int" and "long long int".
    We don't support a <basetype> of "bool", "long double", or "_Float16",
@@ -1156,11 +1153,6 @@ match_type (typeinfo *typedata, int voidok)
       handle_pointer (typedata);
       return 1;
     }
-  else if (!strcmp (token, "vop"))
-    {
-      typedata->isopaque = 1;
-      return 1;
-    }
   else if (!strcmp (token, "signed"))
     typedata->issigned = 1;
   else if (!strcmp (token, "unsigned"))
@@ -1549,20 +1541,12 @@ construct_fntype_id (prototype *protoptr)
     buf[bufi++] = 'v';
   else
     {
-      if (protoptr->rettype.isopaque)
-	{
-	  memcpy (&buf[bufi], "opaque", 6);
-	  bufi += 6;
-	}
+      if (protoptr->rettype.isunsigned)
+	buf[bufi++] = 'u';
+      if (protoptr->rettype.isvector)
+	complete_vector_type (&protoptr->rettype, buf, &bufi);
       else
-	{
-	  if (protoptr->rettype.isunsigned)
-	    buf[bufi++] = 'u';
-	  if (protoptr->rettype.isvector)
-	    complete_vector_type (&protoptr->rettype, buf, &bufi);
-	  else
-	    complete_base_type (&protoptr->rettype, buf, &bufi);
-	}
+	complete_base_type (&protoptr->rettype, buf, &bufi);
     }
 
   memcpy (&buf[bufi], "_ftype", 6);
@@ -1608,21 +1592,13 @@
 	  else
 	    buf[bufi++] = 'p';
 	}
-      if (argptr->info.isopaque)
-	{
-	  assert (!argptr->info.ispointer);
-	  memcpy (&buf[bufi], "opaque", 6);
-	  bufi += 6;
-	}
+
+      if (argptr->info.isunsigned)
+	buf[bufi++] = 'u';
+      if (argptr->info.isvector)
+	complete_vector_type (&argptr->info, buf, &bufi);
       else
-	{
-	  if (argptr->info.isunsigned)
-	    buf[bufi++] = 'u';
-	  if (argptr->info.isvector)
-	    complete_vector_type (&argptr->info, buf, &bufi);
-	  else
-	    complete_base_type (&argptr->info, buf, &bufi);
-	}
+	complete_base_type (&argptr->info, buf, &bufi);
     }
   assert (!argptr);
 }
@@ -1965,8 +1941,7 @@ parse_ovld_entry ()
   /* Check for an optional overload id.  Usually we use the builtin
      function id for that purpose, but sometimes we need multiple
-     overload entries for the same builtin id when we use opaque
-     vector parameter and return types, and it needs to be unique.  */
+     overload entries for the same builtin id, and it needs to be unique.  */
   consume_whitespace ();
   if (linebuf[pos] != '\n')
     {
diff --git a/gcc/config/rs6000/rs6000-overload.def b/gcc/config/rs6000/rs6000-overload.def
index 66f5836c444..ec884b63388 100644
--- a/gcc/config/rs6000/rs6000-overload.def
+++ b/gcc/config/rs6000/rs6000-overload.def
@@ -257,7 +257,7 @@
     VADDUWM  VADDUWM_VSI_VBI
   vui __builtin_vec_add (vbi, vui);
     VADDUWM  VADDUWM_VBI_VUI
-  vui __builtin_vec_add (vbi, vui);
+  vui __builtin_vec_add (vui, vbi);
    VADDUWM  VADDUWM_VUI_VBI
   vsll __builtin_vec_add (vbll, vsll);
    VADDUDM  VADDUDM_VBLL_VSLL
@@ -382,7 +382,7 @@
     VAND_V16QI_UNS  VAND_VBC_VUC
   vss __builtin_vec_and (vss, vbs);
     VAND_V8HI  VAND_VSS_VBS
-  vss __builtin_vec_and (vss, vbs);
+  vss __builtin_vec_and (vbs, vss);
    VAND_V8HI  VAND_VBS_VSS
   vus __builtin_vec_and (vus, vbs);
    VAND_V8HI_UNS  VAND_VUS_VBS
@@ -1606,9 +1606,9 @@
 [VEC_FLOAT, vec_float, __builtin_vec_float]
   vf __builtin_vec_float (vsi);
-    XVCVSXWSP_V4SF
+    XVCVSXWSP
   vf __builtin_vec_float (vui);
-    XVCVUXWSP_V4SF
+    XVCVUXWSP
 
 [VEC_FLOAT2, vec_float2, __builtin_vec_float2]
   vf __builtin_vec_float2 (vsll, vsll);
@@ -1634,9 +1634,10 @@
   vf __builtin_vec_floato (vd);
     FLOATO_V2DF
 
+; #### XVRSPIM{TARGET_VSX};VRFIM
 [VEC_FLOOR, vec_floor, __builtin_vec_floor]
   vf __builtin_vec_floor (vf);
-    XVRSPIM
+    VRFIM
   vd __builtin_vec_floor (vd);
    XVRDPIM
@@ -2118,7 +2119,7 @@
     VMLADDUHM  VMLADDUHM_VSSVUS
   vss __builtin_vec_madd (vus, vss, vss);
     VMLADDUHM  VMLADDUHM_VUSVSS
-  vus __builtin_vec_madd (vss, vus, vus);
+  vus __builtin_vec_madd (vus, vus, vus);
    VMLADDUHM  VMLADDUHM_VUS
   vf __builtin_vec_madd (vf, vf, vf);
    VMADDFP
@@ -2896,19 +2897,19 @@
     VPMSUMD  VPMSUMD_V
 
 [VEC_POPCNT, vec_popcnt, __builtin_vec_vpopcnt, _ARCH_PWR8]
-  vsc __builtin_vec_vpopcnt (vsc);
+  vuc __builtin_vec_vpopcnt (vsc);
    VPOPCNTB
   vuc __builtin_vec_vpopcnt (vuc);
    VPOPCNTUB
-  vss __builtin_vec_vpopcnt (vss);
+  vus __builtin_vec_vpopcnt (vss);
    VPOPCNTH
   vus __builtin_vec_vpopcnt (vus);
    VPOPCNTUH
-  vsi __builtin_vec_vpopcnt (vsi);
+  vui __builtin_vec_vpopcnt (vsi);
    VPOPCNTW
   vui __builtin_vec_vpopcnt (vui);
    VPOPCNTUW
-  vsll __builtin_vec_vpopcnt (vsll);
+  vull __builtin_vec_vpopcnt (vsll);
    VPOPCNTD
   vull __builtin_vec_vpopcnt (vull);
    VPOPCNTUD
@@ -3082,9 +3083,10 @@
   vull __builtin_vec_rlnm (vull, vull);
     VRLDNM
 
+; #### XVRSPI{TARGET_VSX};VRFIN
 [VEC_ROUND, vec_round, __builtin_vec_round]
   vf __builtin_vec_round (vf);
-    XVRSPI
+    VRFIN
   vd __builtin_vec_round (vd);
    XVRDPI
@@ -3193,11 +3195,11 @@
     VEC_VSIGNED2_V2DF
 
 [VEC_SIGNEDE, vec_signede, __builtin_vec_vsignede]
-  vui __builtin_vec_vsignede (vd);
+  vsi __builtin_vec_vsignede (vd);
    VEC_VSIGNEDE_V2DF
 
 [VEC_SIGNEDO, vec_signedo, __builtin_vec_vsignedo]
-  vui __builtin_vec_vsignedo (vd);
+  vsi __builtin_vec_vsignedo (vd);
    VEC_VSIGNEDO_V2DF
 
 [VEC_SL, vec_sl, __builtin_vec_sl]
@@ -3356,8 +3358,8 @@
     VSL  VSL_VBLL_VUC
   vbll __builtin_vec_sll (vbll, vus);
     VSL  VSL_VBLL_VUS
-  vbll __builtin_vec_sll (vbll, vui);
-    VSL  VSL_VBLL_VUI
+  vbll __builtin_vec_sll (vbll, vull);
+    VSL  VSL_VBLL_VULL
 
 [VEC_SLO, vec_slo, __builtin_vec_slo]
   vsc __builtin_vec_slo (vsc, vsc);
@@ -3686,14 +3688,10 @@
     STVX_V4SI  STVX_VSI
   void __builtin_vec_st (vsi, signed long long, signed int *);
     STVX_V4SI  STVX_SI
-  void __builtin_vec_st (vsi, signed long long, signed long *);
-    STVX_V4SI  STVX_SL
   void __builtin_vec_st (vui, signed long long, vui *);
    STVX_V4SI  STVX_VUI
   void __builtin_vec_st (vui, signed long long, unsigned int *);
    STVX_V4SI  STVX_UI
-  void __builtin_vec_st (vui, signed long long, unsigned long *);
-    STVX_V4SI  STVX_UL
   void __builtin_vec_st (vbi, signed long long, vbi *);
    STVX_V4SI  STVX_VBI
   void __builtin_vec_st (vbi, signed long long, signed int *);
@@ -4190,7 +4188,7 @@
 [VEC_SUM4S, vec_sum4s, __builtin_vec_sum4s]
   vui __builtin_vec_sum4s (vuc, vui);
     VSUM4UBS
-  vsi __builtin_vec_sum4s (vsc, vui);
+  vsi __builtin_vec_sum4s (vsc, vsi);
    VSUM4SBS
   vsi __builtin_vec_sum4s (vss, vsi);
    VSUM4SHS
@@ -4219,9 +4217,10 @@
   signed int __builtin_vec_xvtlsbb_all_zeros (vuc);
     XVTLSBB_ZEROS
 
+; #### XVRSPIZ{TARGET_VSX};VRFIZ
 [VEC_TRUNC, vec_trunc, __builtin_vec_trunc]
   vf __builtin_vec_trunc (vf);
-    XVRSPIZ
+    VRFIZ
   vd __builtin_vec_trunc (vd);
    XVRDPIZ
@@ -4286,13 +4285,13 @@
     DOUBLEL_V4SF  VUPKLF
 
 [VEC_UNSIGNED, vec_unsigned, __builtin_vec_vunsigned]
-  vsi __builtin_vec_vunsigned (vf);
+  vui __builtin_vec_vunsigned (vf);
    VEC_VUNSIGNED_V4SF
-  vsll __builtin_vec_vunsigned (vd);
+  vull __builtin_vec_vunsigned (vd);
    VEC_VUNSIGNED_V2DF
 
 [VEC_UNSIGNED2, vec_unsigned2, __builtin_vec_vunsigned2]
-  vsi __builtin_vec_vunsigned2 (vd, vd);
+  vui __builtin_vec_vunsigned2 (vd, vd);
    VEC_VUNSIGNED2_V2DF
 
 [VEC_UNSIGNEDE, vec_unsignede, __builtin_vec_vunsignede]
diff --git a/gcc/testsuite/gcc.dg/vmx/ops.c b/gcc/testsuite/gcc.dg/vmx/ops.c
index 735710819f9..b8f80930078 100644
--- a/gcc/testsuite/gcc.dg/vmx/ops.c
+++ b/gcc/testsuite/gcc.dg/vmx/ops.c
@@ -317,8 +317,6 @@ void f2() {
   *var_vec_b16++ = vec_cmpgt(var_vec_u16[0], var_vec_u16[1]);
   *var_vec_b16++ = vec_ld(var_int[0], var_vec_b16_ptr[1]);
   *var_vec_b16++ = vec_ldl(var_int[0], var_vec_b16_ptr[1]);
-  *var_vec_b16++ = vec_lvx(var_int[0], var_vec_b16_ptr[1]);
-  *var_vec_b16++ = vec_lvxl(var_int[0], var_vec_b16_ptr[1]);
   *var_vec_b16++ = vec_mergeh(var_vec_b16[0], var_vec_b16[1]);
   *var_vec_b16++ = vec_mergel(var_vec_b16[0], var_vec_b16[1]);
   *var_vec_b16++ = vec_nor(var_vec_b16[0], var_vec_b16[1]);
@@ -357,8 +355,6 @@ void f3() {
   *var_vec_b32++ = vec_cmpgt(var_vec_u32[0], var_vec_u32[1]);
   *var_vec_b32++ = vec_ld(var_int[0], var_vec_b32_ptr[1]);
   *var_vec_b32++ = vec_ldl(var_int[0], var_vec_b32_ptr[1]);
-  *var_vec_b32++ = vec_lvx(var_int[0], var_vec_b32_ptr[1]);
-  *var_vec_b32++ = vec_lvxl(var_int[0], var_vec_b32_ptr[1]);
   *var_vec_b32++ = vec_mergeh(var_vec_b32[0], var_vec_b32[1]);
   *var_vec_b32++ = vec_mergel(var_vec_b32[0], var_vec_b32[1]);
   *var_vec_b32++ = vec_nor(var_vec_b32[0], var_vec_b32[1]);
@@ -389,8 +385,6 @@ void f4() {
   *var_vec_b8++ = vec_cmpgt(var_vec_u8[0], var_vec_u8[1]);
   *var_vec_b8++ = vec_ld(var_int[0], var_vec_b8_ptr[1]);
   *var_vec_b8++ = vec_ldl(var_int[0], var_vec_b8_ptr[1]);
-  *var_vec_b8++ = vec_lvx(var_int[0], var_vec_b8_ptr[1]);
-  *var_vec_b8++ = vec_lvxl(var_int[0], var_vec_b8_ptr[1]);
 }
 void f5() {
   *var_vec_b8++ = vec_mergeh(var_vec_b8[0], var_vec_b8[1]);
@@ -506,11 +500,6 @@ void f6() {
   *var_vec_f32++ = vec_ldl(var_int[0], var_float_ptr[1]);
   *var_vec_f32++ = vec_ldl(var_int[0], var_vec_f32_ptr[1]);
   *var_vec_f32++ = vec_loge(var_vec_f32[0]);
-  *var_vec_f32++ = vec_lvewx(var_int[0], var_float_ptr[1]);
-  *var_vec_f32++ = vec_lvx(var_int[0], var_float_ptr[1]);
-  *var_vec_f32++ = vec_lvx(var_int[0], var_vec_f32_ptr[1]);
-  *var_vec_f32++ = vec_lvxl(var_int[0], var_float_ptr[1]);
-  *var_vec_f32++ = vec_lvxl(var_int[0], var_vec_f32_ptr[1]);
   *var_vec_f32++ = vec_madd(var_vec_f32[0], var_vec_f32[1], var_vec_f32[2]);
   *var_vec_f32++ = vec_max(var_vec_f32[0], var_vec_f32[1]);
   *var_vec_f32++ = vec_mergeh(var_vec_f32[0], var_vec_f32[1]);
@@ -562,8 +551,6 @@ void f9() {
   *var_vec_f32++ = vec_xor(var_vec_f32[0], var_vec_f32[1]);
   *var_vec_p16++ = vec_ld(var_int[0], var_vec_p16_ptr[1]);
   *var_vec_p16++ = vec_ldl(var_int[0], var_vec_p16_ptr[1]);
-  *var_vec_p16++ = vec_lvx(var_int[0], var_vec_p16_ptr[1]);
-  *var_vec_p16++ = vec_lvxl(var_int[0], var_vec_p16_ptr[1]);
   *var_vec_p16++ = vec_mergeh(var_vec_p16[0], var_vec_p16[1]);
   *var_vec_p16++ = vec_mergel(var_vec_p16[0], var_vec_p16[1]);
   *var_vec_p16++ = vec_packpx(var_vec_u32[0], var_vec_u32[1]);
@@ -622,11 +609,6 @@ void f10() {
   *var_vec_s16++ = vec_lde(var_int[0], var_short_ptr[1]);
   *var_vec_s16++ = vec_ldl(var_int[0], var_short_ptr[1]);
   *var_vec_s16++ = vec_ldl(var_int[0], var_vec_s16_ptr[1]);
-  *var_vec_s16++ = vec_lvehx(var_int[0], var_short_ptr[1]);
-  *var_vec_s16++ = vec_lvx(var_int[0], var_short_ptr[1]);
-  *var_vec_s16++ = vec_lvx(var_int[0], var_vec_s16_ptr[1]);
-  *var_vec_s16++ = vec_lvxl(var_int[0], var_short_ptr[1]);
-  *var_vec_s16++ = vec_lvxl(var_int[0], var_vec_s16_ptr[1]);
   *var_vec_s16++ = vec_madds(var_vec_s16[0], var_vec_s16[1], var_vec_s16[2]);
   *var_vec_s16++ = vec_max(var_vec_b16[0], var_vec_s16[1]);
   *var_vec_s16++ = vec_max(var_vec_s16[0], var_vec_b16[1]);
@@ -787,11 +769,6 @@ void f13() {
   *var_vec_s32++ = vec_lde(var_int[0], var_int_ptr[1]);
   *var_vec_s32++ = vec_ldl(var_int[0], var_int_ptr[1]);
   *var_vec_s32++ = vec_ldl(var_int[0], var_vec_s32_ptr[1]);
-  *var_vec_s32++ = vec_lvewx(var_int[0], var_int_ptr[1]);
-  *var_vec_s32++ = vec_lvx(var_int[0], var_int_ptr[1]);
-  *var_vec_s32++ = vec_lvx(var_int[0], var_vec_s32_ptr[1]);
-  *var_vec_s32++ = vec_lvxl(var_int[0], var_int_ptr[1]);
-  *var_vec_s32++ = vec_lvxl(var_int[0], var_vec_s32_ptr[1]);
   *var_vec_s32++ = vec_max(var_vec_b32[0], var_vec_s32[1]);
   *var_vec_s32++ = vec_max(var_vec_s32[0], var_vec_b32[1]);
   *var_vec_s32++ = vec_max(var_vec_s32[0], var_vec_s32[1]);
@@ -919,11 +896,6 @@ void f17() {
   *var_vec_s8++ = vec_lde(var_int[0], var_signed_char_ptr[1]);
   *var_vec_s8++ = vec_ldl(var_int[0], var_signed_char_ptr[1]);
   *var_vec_s8++ = vec_ldl(var_int[0], var_vec_s8_ptr[1]);
-  *var_vec_s8++ = vec_lvebx(var_int[0], var_signed_char_ptr[1]);
-  *var_vec_s8++ = vec_lvx(var_int[0], var_signed_char_ptr[1]);
-  *var_vec_s8++ = vec_lvx(var_int[0], var_vec_s8_ptr[1]);
-  *var_vec_s8++ = vec_lvxl(var_int[0], var_signed_char_ptr[1]);
-  *var_vec_s8++ = vec_lvxl(var_int[0], var_vec_s8_ptr[1]);
   *var_vec_s8++ = vec_max(var_vec_b8[0], var_vec_s8[1]);
   *var_vec_s8++ = vec_max(var_vec_s8[0], var_vec_b8[1]);
   *var_vec_s8++ = vec_max(var_vec_s8[0], var_vec_s8[1]);
@@ -1050,11 +1022,6 @@ void f19() {
   *var_vec_u16++ = vec_lde(var_int[0], var_unsigned_short_ptr[1]);
   *var_vec_u16++ = vec_ldl(var_int[0], var_unsigned_short_ptr[1]);
   *var_vec_u16++ = vec_ldl(var_int[0], var_vec_u16_ptr[1]);
-  *var_vec_u16++ = vec_lvehx(var_int[0], var_unsigned_short_ptr[1]);
-  *var_vec_u16++ = vec_lvx(var_int[0], var_unsigned_short_ptr[1]);
-  *var_vec_u16++ = vec_lvx(var_int[0], var_vec_u16_ptr[1]);
-  *var_vec_u16++ = vec_lvxl(var_int[0], var_unsigned_short_ptr[1]);
-  *var_vec_u16++ = vec_lvxl(var_int[0], var_vec_u16_ptr[1]);
   *var_vec_u16++ = vec_max(var_vec_b16[0], var_vec_u16[1]);
   *var_vec_u16++ = vec_max(var_vec_u16[0], var_vec_b16[1]);
   *var_vec_u16++ = vec_max(var_vec_u16[0], var_vec_u16[1]);
@@ -1213,11 +1180,6 @@ void f22() {
   *var_vec_u32++ = vec_lde(var_int[0], var_unsigned_int_ptr[1]);
   *var_vec_u32++ = vec_ldl(var_int[0], var_unsigned_int_ptr[1]);
   *var_vec_u32++ = vec_ldl(var_int[0], var_vec_u32_ptr[1]);
-  *var_vec_u32++ = vec_lvewx(var_int[0], var_unsigned_int_ptr[1]);
-  *var_vec_u32++ = vec_lvx(var_int[0], var_unsigned_int_ptr[1]);
-  *var_vec_u32++ = vec_lvx(var_int[0], var_vec_u32_ptr[1]);
-  *var_vec_u32++ = vec_lvxl(var_int[0], var_unsigned_int_ptr[1]);
-  *var_vec_u32++ = vec_lvxl(var_int[0], var_vec_u32_ptr[1]);
   *var_vec_u32++ = vec_max(var_vec_b32[0], var_vec_u32[1]);
   *var_vec_u32++ = vec_max(var_vec_u32[0], var_vec_b32[1]);
   *var_vec_u32++ = vec_max(var_vec_u32[0], var_vec_u32[1]);
@@ -1341,7 +1303,6 @@ void f25() {
   *var_vec_u8++ = vec_lde(var_int[0], var_unsigned_char_ptr[1]);
   *var_vec_u8++ = vec_ldl(var_int[0], var_unsigned_char_ptr[1]);
   *var_vec_u8++ = vec_ldl(var_int[0], var_vec_u8_ptr[1]);
-  *var_vec_u8++ = vec_lvebx(var_int[0], var_unsigned_char_ptr[1]);
   *var_vec_u8++ = vec_lvsl(var_int[0], var_float_ptr[1]);
   *var_vec_u8++ = vec_lvsl(var_int[0], var_int_ptr[1]);
   *var_vec_u8++ = vec_lvsl(var_int[0], var_short_ptr[1]);
@@ -1356,12 +1317,8 @@
   *var_vec_u8++ = vec_lvsr(var_int[0], var_unsigned_char_ptr[1]);
   *var_vec_u8++ = vec_lvsr(var_int[0], var_unsigned_int_ptr[1]);
   *var_vec_u8++ = vec_lvsr(var_int[0], var_unsigned_short_ptr[1]);
-  *var_vec_u8++ = vec_lvx(var_int[0], var_unsigned_char_ptr[1]);
-  *var_vec_u8++ = vec_lvx(var_int[0], var_vec_u8_ptr[1]);
 }
 void f26() {
-  *var_vec_u8++ = vec_lvxl(var_int[0], var_unsigned_char_ptr[1]);
-  *var_vec_u8++ = vec_lvxl(var_int[0], var_vec_u8_ptr[1]);
   *var_vec_u8++ = vec_max(var_vec_b8[0], var_vec_u8[1]);
   *var_vec_u8++ = vec_max(var_vec_u8[0], var_vec_b8[1]);
   *var_vec_u8++ = vec_max(var_vec_u8[0], var_vec_u8[1]);
@@ -2353,47 +2310,4 @@ void f37() {
   vec_stl(var_vec_u32[0], var_int[1], var_vec_u32_ptr[2]);
   vec_stl(var_vec_u8[0], var_int[1], var_unsigned_char_ptr[2]);
   vec_stl(var_vec_u8[0], var_int[1], var_vec_u8_ptr[2]);
-  vec_stvebx(var_vec_s8[0], var_int[1], var_signed_char_ptr[2]);
-  vec_stvebx(var_vec_u8[0], var_int[1], var_unsigned_char_ptr[2]);
-  vec_stvehx(var_vec_s16[0], var_int[1], var_short_ptr[2]);
-  vec_stvehx(var_vec_u16[0], var_int[1], var_unsigned_short_ptr[2]);
-  vec_stvewx(var_vec_f32[0], var_int[1], var_float_ptr[2]);
-  vec_stvewx(var_vec_s32[0], var_int[1], var_int_ptr[2]);
-  vec_stvewx(var_vec_u32[0], var_int[1], var_unsigned_int_ptr[2]);
-  vec_stvx(var_vec_b16[0], var_int[1], var_vec_b16_ptr[2]);
-  vec_stvx(var_vec_b32[0], var_int[1], var_vec_b32_ptr[2]);
-  vec_stvx(var_vec_b8[0], var_int[1], var_vec_b8_ptr[2]);
-  vec_stvx(var_vec_f32[0], var_int[1], var_float_ptr[2]);
-  vec_stvx(var_vec_f32[0], var_int[1], var_vec_f32_ptr[2]);
-  vec_stvx(var_vec_p16[0], var_int[1], var_vec_p16_ptr[2]);
-  vec_stvx(var_vec_s16[0], var_int[1], var_short_ptr[2]);
-  vec_stvx(var_vec_s16[0], var_int[1], var_vec_s16_ptr[2]);
-  vec_stvx(var_vec_s32[0], var_int[1], var_int_ptr[2]);
-  vec_stvx(var_vec_s32[0], var_int[1], var_vec_s32_ptr[2]);
-  vec_stvx(var_vec_s8[0], var_int[1], var_signed_char_ptr[2]);
-  vec_stvx(var_vec_s8[0], var_int[1], var_vec_s8_ptr[2]);
-  vec_stvx(var_vec_u16[0], var_int[1], var_unsigned_short_ptr[2]);
-  vec_stvx(var_vec_u16[0], var_int[1], var_vec_u16_ptr[2]);
-  vec_stvx(var_vec_u32[0], var_int[1], var_unsigned_int_ptr[2]);
-  vec_stvx(var_vec_u32[0], var_int[1], var_vec_u32_ptr[2]);
-  vec_stvx(var_vec_u8[0], var_int[1], var_unsigned_char_ptr[2]);
-  vec_stvx(var_vec_u8[0], var_int[1], var_vec_u8_ptr[2]);
-  vec_stvxl(var_vec_b16[0], var_int[1], var_vec_b16_ptr[2]);
-  vec_stvxl(var_vec_b32[0], var_int[1], var_vec_b32_ptr[2]);
-  vec_stvxl(var_vec_b8[0], var_int[1], var_vec_b8_ptr[2]);
-  vec_stvxl(var_vec_f32[0], var_int[1], var_float_ptr[2]);
-  vec_stvxl(var_vec_f32[0], var_int[1], var_vec_f32_ptr[2]);
-  vec_stvxl(var_vec_p16[0], var_int[1], var_vec_p16_ptr[2]);
-  vec_stvxl(var_vec_s16[0], var_int[1], var_short_ptr[2]);
-  vec_stvxl(var_vec_s16[0], var_int[1], var_vec_s16_ptr[2]);
-  vec_stvxl(var_vec_s32[0], var_int[1], var_int_ptr[2]);
-  vec_stvxl(var_vec_s32[0], var_int[1], var_vec_s32_ptr[2]);
-  vec_stvxl(var_vec_s8[0], var_int[1], var_signed_char_ptr[2]);
-  vec_stvxl(var_vec_s8[0], var_int[1], var_vec_s8_ptr[2]);
-  vec_stvxl(var_vec_u16[0], var_int[1], var_unsigned_short_ptr[2]);
-  vec_stvxl(var_vec_u16[0], var_int[1], var_vec_u16_ptr[2]);
-  vec_stvxl(var_vec_u32[0], var_int[1], var_unsigned_int_ptr[2]);
-  vec_stvxl(var_vec_u32[0], var_int[1], var_vec_u32_ptr[2]);
-  vec_stvxl(var_vec_u8[0], var_int[1], var_unsigned_char_ptr[2]);
-  vec_stvxl(var_vec_u8[0], var_int[1], var_vec_u8_ptr[2]);
 }
diff --git a/gcc/testsuite/gcc.target/powerpc/altivec-7.c b/gcc/testsuite/gcc.target/powerpc/altivec-7.c
index 42c04a1ed79..0cef426fcd9 100644
--- a/gcc/testsuite/gcc.target/powerpc/altivec-7.c
+++ b/gcc/testsuite/gcc.target/powerpc/altivec-7.c
@@ -26,25 +26,23 @@ int main ()
 {
   *vecfloat++ = vec_andc((vector bool int)vecint[0], vecfloat[1]);
   *vecfloat++ = vec_andc(vecfloat[0], (vector bool int)vecint[1]);
-  *vecfloat++ = vec_vxor((vector bool int)vecint[0], vecfloat[1]);
-  *vecfloat++ = vec_vxor(vecfloat[0], (vector bool int)vecint[1]);
+  *vecfloat++ = vec_xor((vector bool int)vecint[0], vecfloat[1]);
+  *vecfloat++ = vec_xor(vecfloat[0], (vector bool int)vecint[1]);
   *varpixel++ = vec_packpx(vecuint[0], vecuint[1]);
-  *varpixel++ = vec_vpkpx(vecuint[0], vecuint[1]);
-  *vecshort++ = vec_vmulesb(vecchar[0], vecchar[1]);
-  *vecshort++ = vec_vmulosb(vecchar[0], vecchar[1]);
+  *vecshort++ = vec_mule(vecchar[0], vecchar[1]);
+  *vecshort++ = vec_mulo(vecchar[0], vecchar[1]);
   *vecint++ = vec_ld(var_int[0], intp[1]);
   *vecint++ = vec_lde(var_int[0], intp[1]);
   *vecint++ = vec_ldl(var_int[0], intp[1]);
-  *vecint++ = vec_lvewx(var_int[0], intp[1]);
   *vecint++ = vec_unpackh(vecshort[0]);
   *vecint++ = vec_unpackl(vecshort[0]);
   *vecushort++ = vec_andc((vector bool short)vecshort[0], vecushort[1]);
   *vecushort++ = vec_andc(vecushort[0], (vector bool short)vecshort[1]);
-  *vecushort++ = vec_vxor((vector bool short)vecshort[0], vecushort[1]);
-  *vecushort++ = vec_vxor(vecushort[0], (vector bool short)vecshort[1]);
+  *vecushort++ = vec_xor((vector bool short)vecshort[0], vecushort[1]);
+  *vecushort++ = vec_xor(vecushort[0], (vector bool short)vecshort[1]);
   *vecuint++ = vec_ld(var_int[0], uintp[1]);
   *vecuint++ = vec_lvx(var_int[0], uintp[1]);
-  *vecuint++ = vec_vmsumubm(vecuchar[0], vecuchar[1], vecuint[2]);
+  *vecuint++ = vec_msum(vecuchar[0], vecuchar[1], vecuint[2]);
   *vecuchar++ = vec_xor(vecuchar[0], (vector unsigned char)vecchar[1]);
 
   *vecubi++ = vec_unpackh(vecubsi[0]);
@@ -62,11 +60,10 @@
 /* Expected results:
      vec_packpx                   vpkpx
-     vec_vmulosb                  vmulesb
+     vec_mulo                     vmulesb
     vec_ld                       lxv2x
     vec_lde                      lvewx
     vec_ldl                      lxvl
-     vec_lvewx                    lvewx
     vec_unpackh                  vupklsh
     vec_unpackh                  vupklpx
     vec_unpackh                  vupklsb
@@ -75,10 +72,10 @@
     vec_unpackl                  vupkhsb
     vec_andc                     xxlnor (vnor AIX)
                                  xxland (vand AIX)
-     vec_vxor                     xxlxor
-     vec_vmsumubm                 vmsumubm
-     vec_vmulesb                  vmulosb
-     vec_vmulosb                  vmulesb
+     vec_xor                      xxlxor
+     vec_msum                     vmsumubm
+     vec_mule                     vmulosb
+     vec_mulo                     vmulesb
     vec_ld                       lvx
 */
 
@@ -89,7 +86,7 @@ int main ()
 /* { dg-final { scan-assembler-times {\mlxv} 0 { target { !
powerpc_vsx } } } } */ /* { dg-final { scan-assembler-times {\mlvx\M} 0 { target powerpc_vsx } } } */ /* { dg-final { scan-assembler-times {\mlxv} 42 { target powerpc_vsx } } } */ -/* { dg-final { scan-assembler-times "lvewx" 2 } } */ +/* { dg-final { scan-assembler-times "lvewx" 1 } } */ /* { dg-final { scan-assembler-times "lvxl" 1 } } */ /* { dg-final { scan-assembler-times "vupklsh" 2 } } */ /* { dg-final { scan-assembler-times "vupkhsh" 2 } } */ diff --git a/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-2.c b/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-2.c index 7a1e8e8bd30..46d743a899b 100644 --- a/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-2.c +++ b/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-2.c @@ -10,5 +10,5 @@ test_neg (float *p) { float source = *p; - return __builtin_vec_scalar_test_neg (source); /* { dg-error "'__builtin_vsx_scalar_test_neg' requires" } */ + return __builtin_vec_scalar_test_neg (source); /* { dg-error "'__builtin_vsx_scalar_test_neg_sp' requires" } */ } diff --git a/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-3.c b/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-3.c index c9b90927693..bfc892b116e 100644 --- a/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-3.c +++ b/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-3.c @@ -10,5 +10,5 @@ test_neg (double *p) { double source = *p; - return __builtin_vec_scalar_test_neg (source); /* { dg-error "'__builtin_vsx_scalar_test_neg' requires" } */ + return __builtin_vec_scalar_test_neg (source); /* { dg-error "'__builtin_vsx_scalar_test_neg_dp' requires" } */ } diff --git a/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-5.c b/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-5.c index e70eb2d46f8..8c55c1cfb5c 100644 --- a/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-5.c +++ b/gcc/testsuite/gcc.target/powerpc/bfp/scalar-test-neg-5.c @@ -10,5 +10,5 @@ test_neg (__ieee128 *p) { __ieee128 source = *p; - return 
__builtin_vec_scalar_test_neg (source); /* { dg-error "'__builtin_vsx_scalar_test_neg' requires" } */ + return __builtin_vec_scalar_test_neg (source); /* { dg-error "'__builtin_vsx_scalar_test_neg_qp' requires" } */ } diff --git a/gcc/testsuite/gcc.target/powerpc/builtins-3-p9-runnable.c b/gcc/testsuite/gcc.target/powerpc/builtins-3-p9-runnable.c index 44c0397c49a..e023076bac7 100644 --- a/gcc/testsuite/gcc.target/powerpc/builtins-3-p9-runnable.c +++ b/gcc/testsuite/gcc.target/powerpc/builtins-3-p9-runnable.c @@ -73,10 +73,10 @@ int main() { abort(); } vfexpt = (vector float){1.0, -2.0, 0.0, 8.5}; - vfr = vec_extract_fp_from_shorth(vusha); + vfr = vec_extract_fp32_from_shorth(vusha); #ifdef DEBUG - printf ("vec_extract_fp_from_shorth\n"); + printf ("vec_extract_fp32_from_shorth\n"); for (i=0; i<4; i++) printf("result[%d] = %f; expected[%d] = %f\n", i, vfr[i], i, vfexpt[i]); @@ -88,10 +88,10 @@ int main() { } vfexpt = (vector float){1.5, 0.5, 1.25, -0.25}; - vfr = vec_extract_fp_from_shortl(vusha); + vfr = vec_extract_fp32_from_shortl(vusha); #ifdef DEBUG - printf ("\nvec_extract_fp_from_shortl\n"); + printf ("\nvec_extract_fp32_from_shortl\n"); for (i=0; i<4; i++) printf("result[%d] = %f; expected[%d] = %f\n", i, vfr[i], i, vfexpt[i]);