From: Prathamesh Kulkarni
Date: Sun, 5 Jun 2022 15:45:13 +0530
Subject: Re: [1/2] PR96463 - aarch64 specific changes
To: Prathamesh Kulkarni, gcc Patches, richard.sandiford@arm.com
List-Id: Gcc-patches mailing list
Content-Type: multipart/mixed; boundary="000000000000bd8a1b05e0b0a38a"

--000000000000bd8a1b05e0b0a38a
Content-Type: text/plain; charset="UTF-8"

On Wed, 1 Jun 2022 at 14:12, Richard Sandiford wrote:
>
> Prathamesh Kulkarni writes:
> > On Thu, 12 May 2022 at 16:15, Richard Sandiford
> > wrote:
> >>
> >> Prathamesh Kulkarni writes:
> >> > On Wed, 11 May 2022 at 12:44, Richard Sandiford
> >> > wrote:
> >> >>
> >> >> Prathamesh Kulkarni writes:
> >> >> > On Fri, 6 May 2022 at 16:00, Richard Sandiford
> >> >> > wrote:
> >> >> >>
> >> >> >> Prathamesh Kulkarni writes:
> >> >> >> > diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> >> >> >> > index c24c0548724..1ef4ea2087b 100644
> >> >> >> > --- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> >> >> >> > +++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> >> >> >> > @@ -44,6 +44,14 @@
> >> >> >> >  #include "aarch64-sve-builtins-shapes.h"
> >> >> >> >  #include "aarch64-sve-builtins-base.h"
> >> >> >> >  #include "aarch64-sve-builtins-functions.h"
> >> >> >> > +#include "aarch64-builtins.h"
> >> >> >> > +#include "gimple-ssa.h"
> >> >> >> > +#include "tree-phinodes.h"
> >> >> >> > +#include "tree-ssa-operands.h"
> >> >> >> > +#include "ssa-iterators.h"
>> >> > +#include "stringpool.h" > >> >> >> > +#include "value-range.h" > >> >> >> > +#include "tree-ssanames.h" > >> >> >> > >> >> >> Minor, but: I think the preferred approach is to include "ssa.h" > >> >> >> rather than include some of these headers directly. > >> >> >> > >> >> >> > > >> >> >> > using namespace aarch64_sve; > >> >> >> > > >> >> >> > @@ -1207,6 +1215,56 @@ public: > >> >> >> > insn_code icode =3D code_for_aarch64_sve_ld1rq (e.vector_= mode (0)); > >> >> >> > return e.use_contiguous_load_insn (icode); > >> >> >> > } > >> >> >> > + > >> >> >> > + gimple * > >> >> >> > + fold (gimple_folder &f) const OVERRIDE > >> >> >> > + { > >> >> >> > + tree arg0 =3D gimple_call_arg (f.call, 0); > >> >> >> > + tree arg1 =3D gimple_call_arg (f.call, 1); > >> >> >> > + > >> >> >> > + /* Transform: > >> >> >> > + lhs =3D svld1rq ({-1, -1, ... }, arg1) > >> >> >> > + into: > >> >> >> > + tmp =3D mem_ref [(int * {ref-all}) arg1] > >> >> >> > + lhs =3D vec_perm_expr. > >> >> >> > + on little endian target. */ > >> >> >> > + > >> >> >> > + if (!BYTES_BIG_ENDIAN > >> >> >> > + && integer_all_onesp (arg0)) > >> >> >> > + { > >> >> >> > + tree lhs =3D gimple_call_lhs (f.call); > >> >> >> > + auto simd_type =3D aarch64_get_simd_info_for_type (Int32= x4_t); > >> >> >> > >> >> >> Does this work for other element sizes? I would have expected i= t > >> >> >> to be the (128-bit) Advanced SIMD vector associated with the sam= e > >> >> >> element type as the SVE vector. > >> >> >> > >> >> >> The testcase should cover more than just int32x4_t -> svint32_t, > >> >> >> just to be sure. > >> >> > In the attached patch, it obtains corresponding advsimd type with= : > >> >> > > >> >> > tree eltype =3D TREE_TYPE (lhs_type); > >> >> > unsigned nunits =3D 128 / TREE_INT_CST_LOW (TYPE_SIZE (eltype)); > >> >> > tree vectype =3D build_vector_type (eltype, nunits); > >> >> > > >> >> > While this seems to work with different element sizes, I am not s= ure if it's > >> >> > the correct approach ? > >> >> > >> >> Yeah, that looks correct. Other SVE code uses aarch64_vq_mode > >> >> to get the vector mode associated with a .Q =E2=80=9Celement=E2=80= =9D, so an > >> >> alternative would be: > >> >> > >> >> machine_mode vq_mode =3D aarch64_vq_mode (TYPE_MODE (eltype)).r= equire (); > >> >> tree vectype =3D build_vector_type_for_mode (eltype, vq_mode); > >> >> > >> >> which is more explicit about wanting an Advanced SIMD vector. > >> >> > >> >> >> > + > >> >> >> > + tree elt_ptr_type > >> >> >> > + =3D build_pointer_type_for_mode (simd_type.eltype, VOI= Dmode, true); > >> >> >> > + tree zero =3D build_zero_cst (elt_ptr_type); > >> >> >> > + > >> >> >> > + /* Use element type alignment. */ > >> >> >> > + tree access_type > >> >> >> > + =3D build_aligned_type (simd_type.itype, TYPE_ALIGN (s= imd_type.eltype)); > >> >> >> > + > >> >> >> > + tree tmp =3D make_ssa_name_fn (cfun, access_type, 0); > >> >> >> > + gimple *mem_ref_stmt > >> >> >> > + =3D gimple_build_assign (tmp, fold_build2 (MEM_REF, ac= cess_type, arg1, zero)); > >> >> >> > >> >> >> Long line. Might be easier to format by assigning the fold_buil= d2 result > >> >> >> to a temporary variable. > >> >> >> > >> >> >> > + gsi_insert_before (f.gsi, mem_ref_stmt, GSI_SAME_STMT); > >> >> >> > + > >> >> >> > + tree mem_ref_lhs =3D gimple_get_lhs (mem_ref_stmt); > >> >> >> > + tree vectype =3D TREE_TYPE (mem_ref_lhs); > >> >> >> > + tree lhs_type =3D TREE_TYPE (lhs); > >> >> >> > >> >> >> Is this necessary? 
> >> >> >> have expected them to change during the build process.
> >> >> >>
> >> >> >> > +
> >> >> >> > +        int source_nelts = TYPE_VECTOR_SUBPARTS (vectype).to_constant ();
> >> >> >> > +        vec_perm_builder sel (TYPE_VECTOR_SUBPARTS (lhs_type), source_nelts, 1);
> >> >> >> > +        for (int i = 0; i < source_nelts; i++)
> >> >> >> > +          sel.quick_push (i);
> >> >> >> > +
> >> >> >> > +        vec_perm_indices indices (sel, 1, source_nelts);
> >> >> >> > +        gcc_checking_assert (can_vec_perm_const_p (TYPE_MODE (lhs_type), indices));
> >> >> >> > +        tree mask = vec_perm_indices_to_tree (lhs_type, indices);
> >> >> >> > +        return gimple_build_assign (lhs, VEC_PERM_EXPR, mem_ref_lhs, mem_ref_lhs, mask);
> >> >> >>
> >> >> >> Nit: long line.
> >> >> >>
> >> >> >> > +      }
> >> >> >> > +
> >> >> >> > +    return NULL;
> >> >> >> > +  }
> >> >> >> >  };
> >> >> >> >
> >> >> >> >  class svld1ro_impl : public load_replicate
> >> >> >> > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> >> >> >> > index f650abbc4ce..47810fec804 100644
> >> >> >> > --- a/gcc/config/aarch64/aarch64.cc
> >> >> >> > +++ b/gcc/config/aarch64/aarch64.cc
> >> >> >> > @@ -23969,6 +23969,35 @@ aarch64_evpc_sve_tbl (struct expand_vec_perm_d *d)
> >> >> >> >    return true;
> >> >> >> >  }
> >> >> >> >
> >> >> >> > +/* Try to implement D using SVE dup instruction.  */
> >> >> >> > +
> >> >> >> > +static bool
> >> >> >> > +aarch64_evpc_sve_dup (struct expand_vec_perm_d *d)
> >> >> >> > +{
> >> >> >> > +  if (BYTES_BIG_ENDIAN
> >> >> >> > +      || d->perm.length ().is_constant ()
> >> >> >> > +      || !d->one_vector_p
> >> >> >> > +      || d->target == NULL
> >> >> >> > +      || d->op0 == NULL
> >> >> >>
> >> >> >> These last two lines mean that we always return false for d->testing.
> >> >> >> The idea instead is that the return value should be the same for both
> >> >> >> d->testing and !d->testing.  The difference is that for !d->testing we
> >> >> >> also emit code to do the permute.
> >> >>
> >> >> It doesn't look like the new patch addresses this.  There should be
> >> >> no checks for/uses of “d->target” and “d->op0” until after:
> >> >>
> >> >>   if (d->testing_p)
> >> >>     return true;
> >> >>
> >> >> This...
> >> >>
> >> >> >> > +      || GET_MODE_NUNITS (GET_MODE (d->target)).is_constant ()
> >> >> >>
> >> >> >> Sorry, I've forgotten the context now, but: these positive tests
> >> >> >> for is_constant surprised me.  Do we really only want to do this
> >> >> >> for variable-length SVE code generation, rather than fixed-length?
> >> >> >>
> >> >> >> > +      || !GET_MODE_NUNITS (GET_MODE (d->op0)).is_constant ())
> >> >> >> > +    return false;
> >> >> >> > +
> >> >> >> > +  if (d->testing_p)
> >> >> >> > +    return true;
> >> >> >>
> >> >> >> This should happen after the later tests, once we're sure that the
> >> >> >> permute vector has the right form.  If the issue is that op0 isn't
> >> >> >> provided for testing then I think the hook needs to be passed the
> >> >> >> input mode alongside the result mode.
> >> >>
> >> >> ...was my guess about why the checks were there.
> >> > Ah right sorry.  IIUC, if d->testing is true, then d->op0 could be NULL ?
> >> > In that case, how do we obtain input mode ?
> >>
> >> Well, like I say, I think we might need to extend the vec_perm_const
> >> hook interface so that it gets passed the input mode, now that that
> >> isn't necessarily the same as the output mode.
> >>
> >> It would be good to do that as a separate prepatch, since it would
> >> affect other targets too.  And for safety, that patch should make all
> >> existing implementations of the hook return false if the modes aren't
> >> equal, including for aarch64.  The current patch can then make the
> >> aarch64 hook treat the dup case as an exception.
> > Hi Richard,
> > I have attached updated patch, which tries to address above suggestions.
> > I had a question about couple of things:
> > (1) The patch resulted in ICE for float operands, because we were
> > using lhs_type to build mask, which is float vector type.
> > So I adjusted the patch to make mask vector of integer_type_node with
> > length == length(lhs_type) if lhs has float vector type.
> > Does that look OK ?
>
> Let's use:
>
>   build_vector_type (ssizetype, lhs_len)
>
> unconditionally, even for integers.
OK thanks, done in attached patch.
>
> > (2) Moved check for d->vmode != op_mode (and only checking for dup in
> > that case), inside vec_perm_const_1,
> > since it does some initial bookkeeping (like swapping operands),
> > before calling respective functions.
> > Does that look OK ?
> >
> > Thanks,
> > Prathamesh
> >>
> >> Thanks,
> >> Richard
> >
> > diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> > index bee410929bd..48e849bec34 100644
> > --- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> > +++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> > @@ -44,6 +44,7 @@
> >  #include "aarch64-sve-builtins-shapes.h"
> >  #include "aarch64-sve-builtins-base.h"
> >  #include "aarch64-sve-builtins-functions.h"
> > +#include "ssa.h"
> >
> >  using namespace aarch64_sve;
> >
> > @@ -1207,6 +1208,66 @@ public:
> >      insn_code icode = code_for_aarch64_sve_ld1rq (e.vector_mode (0));
> >      return e.use_contiguous_load_insn (icode);
> >    }
> > +
> > +  gimple *
> > +  fold (gimple_folder &f) const override
> > +  {
> > +    tree arg0 = gimple_call_arg (f.call, 0);
> > +    tree arg1 = gimple_call_arg (f.call, 1);
> > +
> > +    /* Transform:
> > +       lhs = svld1rq ({-1, -1, ... }, arg1)
> > +       into:
> > +       tmp = mem_ref<vectype> [(int * {ref-all}) arg1]
> > +       lhs = vec_perm_expr<tmp, tmp, {0, 1, 2, 3, ...}>.
> > +       on little endian target.
> > +       vectype is the corresponding ADVSIMD type.  */
> > +
> > +    if (!BYTES_BIG_ENDIAN
> > +        && integer_all_onesp (arg0))
> > +      {
> > +        tree lhs = gimple_call_lhs (f.call);
> > +        tree lhs_type = TREE_TYPE (lhs);
> > +        poly_uint64 lhs_len = TYPE_VECTOR_SUBPARTS (lhs_type);
> > +        tree eltype = TREE_TYPE (lhs_type);
> > +
> > +        scalar_mode elmode = GET_MODE_INNER (TYPE_MODE (lhs_type));
> > +        machine_mode vq_mode = aarch64_vq_mode (elmode).require ();
> > +        tree vectype = build_vector_type_for_mode (eltype, vq_mode);
> > +
> > +        tree elt_ptr_type
> > +          = build_pointer_type_for_mode (eltype, VOIDmode, true);
> > +        tree zero = build_zero_cst (elt_ptr_type);
> > +
> > +        /* Use element type alignment.  */
> > +        tree access_type
> > +          = build_aligned_type (vectype, TYPE_ALIGN (eltype));
> > +
> > +        tree mem_ref_lhs = make_ssa_name_fn (cfun, access_type, 0);
> > +        tree mem_ref_op = fold_build2 (MEM_REF, access_type, arg1, zero);
> > +        gimple *mem_ref_stmt
> > +          = gimple_build_assign (mem_ref_lhs, mem_ref_op);
> > +        gsi_insert_before (f.gsi, mem_ref_stmt, GSI_SAME_STMT);
> > +
> > +        int source_nelts = TYPE_VECTOR_SUBPARTS (access_type).to_constant ();
> > +        vec_perm_builder sel (lhs_len, source_nelts, 1);
> > +        for (int i = 0; i < source_nelts; i++)
> > +          sel.quick_push (i);
> > +
> > +        vec_perm_indices indices (sel, 1, source_nelts);
> > +        gcc_checking_assert (can_vec_perm_const_p (TYPE_MODE (lhs_type),
> > +                                                   TYPE_MODE (access_type),
> > +                                                   indices));
> > +        tree mask_type = (FLOAT_TYPE_P (eltype))
> > +                         ? build_vector_type (integer_type_node, lhs_len)
> > +                         : lhs_type;
> > +        tree mask = vec_perm_indices_to_tree (mask_type, indices);
> > +        return gimple_build_assign (lhs, VEC_PERM_EXPR,
> > +                                    mem_ref_lhs, mem_ref_lhs, mask);
> > +      }
> > +
> > +    return NULL;
> > +  }
> >  };
> >
> >  class svld1ro_impl : public load_replicate
> > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> > index d4c575ce976..ae8e913d525 100644
> > --- a/gcc/config/aarch64/aarch64.cc
> > +++ b/gcc/config/aarch64/aarch64.cc
> > @@ -23401,7 +23401,8 @@ struct expand_vec_perm_d
> >    bool testing_p;
> >  };
> >
> > -static bool aarch64_expand_vec_perm_const_1 (struct expand_vec_perm_d *d);
> > +static bool aarch64_expand_vec_perm_const_1 (struct expand_vec_perm_d *d,
> > +                                             machine_mode op_mode);
> >
> >  /* Generate a variable permutation.  */
> >
> > @@ -23638,7 +23639,7 @@ aarch64_evpc_reencode (struct expand_vec_perm_d *d)
> >    newd.one_vector_p = d->one_vector_p;
> >
> >    newd.perm.new_vector (newpermconst, newd.one_vector_p ? 1 : 2, nelt / 2);
> > -  return aarch64_expand_vec_perm_const_1 (&newd);
> > +  return aarch64_expand_vec_perm_const_1 (&newd, newd.vmode);
> >  }
> >
> >  /* Recognize patterns suitable for the UZP instructions.  */
> > @@ -23945,6 +23946,32 @@ aarch64_evpc_sve_tbl (struct expand_vec_perm_d *d)
> >    return true;
> >  }
> >
> > +/* Try to implement D using SVE dup instruction.  */
> > +
> > +static bool
> > +aarch64_evpc_sve_dup (struct expand_vec_perm_d *d, machine_mode op_mode)
> > +{
> > +  if (BYTES_BIG_ENDIAN
> > +      || d->perm.length ().is_constant ()
>
> Sorry, I've forgotten: why do we need this is_constant check?
Oh I guess I had put it there, to check if target vector is of variable
length, sorry.  I assume we don't need this.  Removed in the attached patch.
> > +      || !d->one_vector_p
> > +      || aarch64_classify_vector_mode (op_mode) != VEC_ADVSIMD)
> > +    return false;
>
> We need to check that nelts_per_pattern is 1 as well.
OK thanks, done.
> > +  int npatterns = d->perm.encoding ().npatterns ();
> > +  if (!known_eq (npatterns, GET_MODE_NUNITS (op_mode)))
> > +    return false;
> > +
> > +  for (int i = 0; i < npatterns; i++)
> > +    if (!known_eq (d->perm[i], i))
> > +      return false;
> > +
> > +  if (d->testing_p)
> > +    return true;
> > +
> > +  aarch64_expand_sve_dupq (d->target, GET_MODE (d->target), d->op0);
> > +  return true;
> > +}
> > +
> >  /* Try to implement D using SVE SEL instruction.  */
> >  static bool
> > @@ -24066,7 +24093,8 @@ aarch64_evpc_ins (struct expand_vec_perm_d *d)
> >  }
> >
> >  static bool
> > -aarch64_expand_vec_perm_const_1 (struct expand_vec_perm_d *d)
> > +aarch64_expand_vec_perm_const_1 (struct expand_vec_perm_d *d,
> > +                                 machine_mode op_mode)
>
> I think we should add op_mode to expand_vec_perm_d instead.
> Let's also add an op_vec_flags to cache the aarch64_classify_vector_mode
> result.
OK thanks, done.
>
> >  {
> >    /* The pattern matching functions above are written to look for a small
> >       number to begin the sequence (0, 1, N/2).  If we begin with an index
> > @@ -24084,6 +24112,12 @@ aarch64_expand_vec_perm_const_1 (struct expand_vec_perm_d *d)
> >        || d->vec_flags == VEC_SVE_PRED)
> >        && known_gt (nelt, 1))
> >      {
> > +      /* If operand and result modes differ, then only check
> > +         for dup case.  */
> > +      if (d->vmode != op_mode)
> > +        return (d->vec_flags == VEC_SVE_DATA)
> > +               ? aarch64_evpc_sve_dup (d, op_mode) : false;
> > +
>
> I think it'd be more future-proof to format this as:
>
>   if (d->vmod == d->op_mode)
>     {
>       …existing code…
>     }
>   else
>     {
>       if (aarch64_evpc_sve_dup (d))
>         return true;
>     }
>
> with the d->vec_flags == VEC_SVE_DATA check being in aarch64_evpc_sve_dup,
> alongside the op_mode check.  I think we'll be adding more checks here
> over time.
Um I was wondering if we should structure it as:

if (d->vmode == d->op_mode)
  {
     ...existing code...
  }
if (aarch64_evpc_sve_dup (d))
  return true;

So we check for dup irrespective of d->vmode == d->op_mode ?

Thanks,
Prathamesh
>
> >        if (aarch64_evpc_rev_local (d))
> >          return true;
> >        else if (aarch64_evpc_rev_global (d))
> > @@ -24105,7 +24139,12 @@ aarch64_expand_vec_perm_const_1 (struct expand_vec_perm_d *d)
> >        else if (aarch64_evpc_reencode (d))
> >          return true;
> >        if (d->vec_flags == VEC_SVE_DATA)
> > -        return aarch64_evpc_sve_tbl (d);
> > +        {
> > +          if (aarch64_evpc_sve_tbl (d))
> > +            return true;
> > +          else if (aarch64_evpc_sve_dup (d, op_mode))
> > +            return true;
> > +        }
> >        else if (d->vec_flags == VEC_ADVSIMD)
> >          return aarch64_evpc_tbl (d);
> >      }
>
> Is this part still needed, given the above?
>
> Thanks,
> Richard
>
> > @@ -24119,9 +24158,6 @@ aarch64_vectorize_vec_perm_const (machine_mode vmode, machine_mode op_mode,
> >                                    rtx target, rtx op0, rtx op1,
> >                                    const vec_perm_indices &sel)
> >  {
> > -  if (vmode != op_mode)
> > -    return false;
> > -
> >    struct expand_vec_perm_d d;
> >
> >    /* Check whether the mask can be applied to a single vector.  */
> > @@ -24154,10 +24190,10 @@ aarch64_vectorize_vec_perm_const (machine_mode vmode, machine_mode op_mode,
> >    d.testing_p = !target;
> >
> >    if (!d.testing_p)
> > -    return aarch64_expand_vec_perm_const_1 (&d);
> > +    return aarch64_expand_vec_perm_const_1 (&d, op_mode);
> >
> >    rtx_insn *last = get_last_insn ();
> > -  bool ret = aarch64_expand_vec_perm_const_1 (&d);
> > +  bool ret = aarch64_expand_vec_perm_const_1 (&d, op_mode);
> >    gcc_assert (last == get_last_insn ());
> >
> >    return ret;
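
For reference, here is a minimal sketch of the kind of source code the
svld1rq fold above applies to (the function names are illustrative and are
not taken from the patch's testcases).  With an all-true predicate on a
little-endian target, each call below becomes a 128-bit MEM_REF of the
matching Advanced SIMD type followed by a VEC_PERM_EXPR whose selector
repeats {0, 1, ..., n-1}, which also exercises more than one element size:

#include <arm_sve.h>

/* Compile with SVE enabled, e.g. -march=armv8-a+sve.  */

/* 32-bit elements: the folded form loads one 128-bit block (four
   elements) and repeats indices {0, 1, 2, 3} across the SVE result.  */
svint32_t
dup_s32 (const int32_t *x)
{
  return svld1rq_s32 (svptrue_b32 (), x);
}

/* 16-bit floating-point elements: same shape, but the 128-bit block
   holds eight elements, so the selector repeats {0, ..., 7}.  */
svfloat16_t
dup_f16 (const float16_t *x)
{
  return svld1rq_f16 (svptrue_b16 (), x);
}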
--000000000000bd8a1b05e0b0a38a
Content-Type: text/plain; charset="US-ASCII"; name="pr96463-10.txt"
Content-Disposition: attachment; filename="pr96463-10.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_l4158ug90

ZGlmZiAtLWdpdCBhL2djYy9jb25maWcvYWFyY2g2NC9hYXJjaDY0LXN2ZS1idWlsdGlucy1iYXNl
LmNjIGIvZ2NjL2NvbmZpZy9hYXJjaDY0L2FhcmNoNjQtc3ZlLWJ1aWx0aW5zLWJhc2UuY2MKaW5k
ZXggYmVlNDEwOTI5YmQuLjFhODA0YjFhYjczIDEwMDY0NAotLS0gYS9nY2MvY29uZmlnL2FhcmNo
NjQvYWFyY2g2NC1zdmUtYnVpbHRpbnMtYmFzZS5jYworKysgYi9nY2MvY29uZmlnL2FhcmNoNjQv
YWFyY2g2NC1zdmUtYnVpbHRpbnMtYmFzZS5jYwpAQCAtNDQsNiArNDQsNyBAQAogI2luY2x1ZGUg
ImFhcmNoNjQtc3ZlLWJ1aWx0aW5zLXNoYXBlcy5oIgogI2luY2x1ZGUgImFhcmNoNjQtc3ZlLWJ1
aWx0aW5zLWJhc2UuaCIKICNpbmNsdWRlICJhYXJjaDY0LXN2ZS1idWlsdGlucy1mdW5jdGlvbnMu
aCIKKyNpbmNsdWRlICJzc2EuaCIKIAogdXNpbmcgbmFtZXNwYWNlIGFhcmNoNjRfc3ZlOwogCkBA
IC0xMjA3LDYgKzEyMDgsNjQgQEAgcHVibGljOgogICAgIGluc25fY29kZSBpY29kZSA9IGNvZGVf
Zm9yX2FhcmNoNjRfc3ZlX2xkMXJxIChlLnZlY3Rvcl9tb2RlICgwKSk7CiAgICAgcmV0dXJuIGUu
dXNlX2NvbnRpZ3VvdXNfbG9hZF9pbnNuIChpY29kZSk7CiAgIH0KKworICBnaW1wbGUgKgorICBm
b2xkIChnaW1wbGVfZm9sZGVyICZmKSBjb25zdCBvdmVycmlkZQorICB7CisgICAgdHJlZSBhcmcw
ID0gZ2ltcGxlX2NhbGxfYXJnIChmLmNhbGwsIDApOworICAgIHRyZWUgYXJnMSA9IGdpbXBsZV9j
YWxsX2FyZyAoZi5jYWxsLCAxKTsKKworICAgIC8qIFRyYW5zZm9ybToKKyAgICAgICBsaHMgPSBz
dmxkMXJxICh7LTEsIC0xLCAuLi4gfSwgYXJnMSkKKyAgICAgICBpbnRvOgorICAgICAgIHRtcCA9
IG1lbV9yZWY8dmVjdHlwZT4gWyhpbnQgKiB7cmVmLWFsbH0pIGFyZzFdCisgICAgICAgbGhzID0g
dmVjX3Blcm1fZXhwcjx0bXAsIHRtcCwgezAsIDEsIDIsIDMsIC4uLn0+LgorICAgICAgIG9uIGxp
dHRsZSBlbmRpYW4gdGFyZ2V0LgorICAgICAgIHZlY3R5cGUgaXMgdGhlIGNvcnJlc3BvbmRpbmcg
QURWU0lNRCB0eXBlLiAgKi8KKworICAgIGlmICghQllURVNfQklHX0VORElBTgorCSYmIGludGVn
ZXJfYWxsX29uZXNwIChhcmcwKSkKKyAgICAgIHsKKwl0cmVlIGxocyA9IGdpbXBsZV9jYWxsX2xo
cyAoZi5jYWxsKTsKKwl0cmVlIGxoc190eXBlID0gVFJFRV9UWVBFIChsaHMpOworCXBvbHlfdWlu
dDY0IGxoc19sZW4gPSBUWVBFX1ZFQ1RPUl9TVUJQQVJUUyAobGhzX3R5cGUpOworCXRyZWUgZWx0
eXBlID0gVFJFRV9UWVBFIChsaHNfdHlwZSk7CisKKwlzY2FsYXJfbW9kZSBlbG1vZGUgPSBHRVRf
TU9ERV9JTk5FUiAoVFlQRV9NT0RFIChsaHNfdHlwZSkpOworCW1hY2hpbmVfbW9kZSB2cV9tb2Rl
ID0gYWFyY2g2NF92cV9tb2RlIChlbG1vZGUpLnJlcXVpcmUgKCk7CisJdHJlZSB2ZWN0eXBlID0g
YnVpbGRfdmVjdG9yX3R5cGVfZm9yX21vZGUgKGVsdHlwZSwgdnFfbW9kZSk7CisKKwl0cmVlIGVs
dF9wdHJfdHlwZQorCSAgPSBidWlsZF9wb2ludGVyX3R5cGVfZm9yX21vZGUgKGVsdHlwZSwgVk9J
RG1vZGUsIHRydWUpOworCXRyZWUgemVybyA9IGJ1aWxkX3plcm9fY3N0IChlbHRfcHRyX3R5cGUp
OworCisJLyogVXNlIGVsZW1lbnQgdHlwZSBhbGlnbm1lbnQuICAqLworCXRyZWUgYWNjZXNzX3R5
cGUKKwkgID0gYnVpbGRfYWxpZ25lZF90eXBlICh2ZWN0eXBlLCBUWVBFX0FMSUdOIChlbHR5cGUp
KTsKKworCXRyZWUgbWVtX3JlZl9saHMgPSBtYWtlX3NzYV9uYW1lX2ZuIChjZnVuLCBhY2Nlc3Nf
dHlwZSwgMCk7CisJdHJlZSBtZW1fcmVmX29wID0gZm9sZF9idWlsZDIgKE1FTV9SRUYsIGFjY2Vz
c190eXBlLCBhcmcxLCB6ZXJvKTsKKwlnaW1wbGUgKm1lbV9yZWZfc3RtdAorCSAgPSBnaW1wbGVf
YnVpbGRfYXNzaWduIChtZW1fcmVmX2xocywgbWVtX3JlZl9vcCk7CisJZ3NpX2luc2VydF9iZWZv
cmUgKGYuZ3NpLCBtZW1fcmVmX3N0bXQsIEdTSV9TQU1FX1NUTVQpOworCisJaW50IHNvdXJjZV9u
ZWx0cyA9IFRZUEVfVkVDVE9SX1NVQlBBUlRTIChhY2Nlc3NfdHlwZSkudG9fY29uc3RhbnQgKCk7
CisJdmVjX3Blcm1fYnVpbGRlciBzZWwgKGxoc19sZW4sIHNvdXJjZV9uZWx0cywgMSk7CisJZm9y IChpbnQgaSA9IDA7IGkgPCBzb3VyY2VfbmVsdHM7IGkrKykKKwkgIHNlbC5xdWlja19wdXNoIChp KTsKKworCXZlY19wZXJtX2luZGljZXMgaW5kaWNlcyAoc2VsLCAxLCBzb3VyY2VfbmVsdHMpOwor CWdjY19jaGVja2luZ19hc3NlcnQgKGNhbl92ZWNfcGVybV9jb25zdF9wIChUWVBFX01PREUgKGxo c190eXBlKSwKKwkJCQkJCSAgIFRZUEVfTU9ERSAoYWNjZXNzX3R5cGUpLAorCQkJCQkJICAgaW5k aWNlcykpOworCXRyZWUgbWFza190eXBlID0gYnVpbGRfdmVjdG9yX3R5cGUgKHNzaXpldHlwZSwg bGhzX2xlbik7CisJdHJlZSBtYXNrID0gdmVjX3Blcm1faW5kaWNlc190b190cmVlIChtYXNrX3R5 cGUsIGluZGljZXMpOworCXJldHVybiBnaW1wbGVfYnVpbGRfYXNzaWduIChsaHMsIFZFQ19QRVJN X0VYUFIsCisJCQkJICAgIG1lbV9yZWZfbGhzLCBtZW1fcmVmX2xocywgbWFzayk7CisgICAgICB9 CisKKyAgICByZXR1cm4gTlVMTDsKKyAgfQogfTsKIAogY2xhc3Mgc3ZsZDFyb19pbXBsIDogcHVi bGljIGxvYWRfcmVwbGljYXRlCmRpZmYgLS1naXQgYS9nY2MvY29uZmlnL2FhcmNoNjQvYWFyY2g2 NC5jYyBiL2djYy9jb25maWcvYWFyY2g2NC9hYXJjaDY0LmNjCmluZGV4IGQ0YzU3NWNlOTc2Li5i YjI0NzAxYjBkMiAxMDA2NDQKLS0tIGEvZ2NjL2NvbmZpZy9hYXJjaDY0L2FhcmNoNjQuY2MKKysr IGIvZ2NjL2NvbmZpZy9hYXJjaDY0L2FhcmNoNjQuY2MKQEAgLTIzMzk1LDggKzIzMzk1LDEwIEBA IHN0cnVjdCBleHBhbmRfdmVjX3Blcm1fZAogewogICBydHggdGFyZ2V0LCBvcDAsIG9wMTsKICAg dmVjX3Blcm1faW5kaWNlcyBwZXJtOworICBtYWNoaW5lX21vZGUgb3BfbW9kZTsKICAgbWFjaGlu ZV9tb2RlIHZtb2RlOwogICB1bnNpZ25lZCBpbnQgdmVjX2ZsYWdzOworICB1bnNpZ25lZCBpbnQg b3BfdmVjX2ZsYWdzOwogICBib29sIG9uZV92ZWN0b3JfcDsKICAgYm9vbCB0ZXN0aW5nX3A7CiB9 OwpAQCAtMjM5NDUsNiArMjM5NDcsMzIgQEAgYWFyY2g2NF9ldnBjX3N2ZV90YmwgKHN0cnVjdCBl eHBhbmRfdmVjX3Blcm1fZCAqZCkKICAgcmV0dXJuIHRydWU7CiB9CiAKKy8qIFRyeSB0byBpbXBs ZW1lbnQgRCB1c2luZyBTVkUgZHVwIGluc3RydWN0aW9uLiAgKi8KKworc3RhdGljIGJvb2wKK2Fh cmNoNjRfZXZwY19zdmVfZHVwIChzdHJ1Y3QgZXhwYW5kX3ZlY19wZXJtX2QgKmQpCit7CisgIGlm IChCWVRFU19CSUdfRU5ESUFOCisgICAgICB8fCAhZC0+b25lX3ZlY3Rvcl9wCisgICAgICB8fCBk LT52ZWNfZmxhZ3MgIT0gVkVDX1NWRV9EQVRBCisgICAgICB8fCBkLT5vcF92ZWNfZmxhZ3MgIT0g VkVDX0FEVlNJTUQKKyAgICAgIHx8IGQtPnBlcm0uZW5jb2RpbmcgKCkubmVsdHNfcGVyX3BhdHRl cm4gKCkgIT0gMQorICAgICAgfHwgIWtub3duX2VxIChkLT5wZXJtLmVuY29kaW5nICgpLm5wYXR0 ZXJucyAoKSwKKwkJICAgIEdFVF9NT0RFX05VTklUUyAoZC0+b3BfbW9kZSkpKQorICAgIHJldHVy biBmYWxzZTsKKworICBpbnQgbnBhdHRlcm5zID0gZC0+cGVybS5lbmNvZGluZyAoKS5ucGF0dGVy bnMgKCk7CisgIGZvciAoaW50IGkgPSAwOyBpIDwgbnBhdHRlcm5zOyBpKyspCisgICAgaWYgKCFr bm93bl9lcSAoZC0+cGVybVtpXSwgaSkpCisgICAgICByZXR1cm4gZmFsc2U7CisKKyAgaWYgKGQt PnRlc3RpbmdfcCkKKyAgICByZXR1cm4gdHJ1ZTsKKworICBhYXJjaDY0X2V4cGFuZF9zdmVfZHVw cSAoZC0+dGFyZ2V0LCBHRVRfTU9ERSAoZC0+dGFyZ2V0KSwgZC0+b3AwKTsKKyAgcmV0dXJuIHRy dWU7Cit9CisKIC8qIFRyeSB0byBpbXBsZW1lbnQgRCB1c2luZyBTVkUgU0VMIGluc3RydWN0aW9u LiAgKi8KIAogc3RhdGljIGJvb2wKQEAgLTI0MDg0LDMwICsyNDExMiwzOSBAQCBhYXJjaDY0X2V4 cGFuZF92ZWNfcGVybV9jb25zdF8xIChzdHJ1Y3QgZXhwYW5kX3ZlY19wZXJtX2QgKmQpCiAgICAg ICAgfHwgZC0+dmVjX2ZsYWdzID09IFZFQ19TVkVfUFJFRCkKICAgICAgICYmIGtub3duX2d0IChu ZWx0LCAxKSkKICAgICB7Ci0gICAgICBpZiAoYWFyY2g2NF9ldnBjX3Jldl9sb2NhbCAoZCkpCi0J cmV0dXJuIHRydWU7Ci0gICAgICBlbHNlIGlmIChhYXJjaDY0X2V2cGNfcmV2X2dsb2JhbCAoZCkp Ci0JcmV0dXJuIHRydWU7Ci0gICAgICBlbHNlIGlmIChhYXJjaDY0X2V2cGNfZXh0IChkKSkKLQly ZXR1cm4gdHJ1ZTsKLSAgICAgIGVsc2UgaWYgKGFhcmNoNjRfZXZwY19kdXAgKGQpKQotCXJldHVy biB0cnVlOwotICAgICAgZWxzZSBpZiAoYWFyY2g2NF9ldnBjX3ppcCAoZCkpCi0JcmV0dXJuIHRy dWU7Ci0gICAgICBlbHNlIGlmIChhYXJjaDY0X2V2cGNfdXpwIChkKSkKLQlyZXR1cm4gdHJ1ZTsK LSAgICAgIGVsc2UgaWYgKGFhcmNoNjRfZXZwY190cm4gKGQpKQotCXJldHVybiB0cnVlOwotICAg ICAgZWxzZSBpZiAoYWFyY2g2NF9ldnBjX3NlbCAoZCkpCi0JcmV0dXJuIHRydWU7Ci0gICAgICBl bHNlIGlmIChhYXJjaDY0X2V2cGNfaW5zIChkKSkKLQlyZXR1cm4gdHJ1ZTsKLSAgICAgIGVsc2Ug aWYgKGFhcmNoNjRfZXZwY19yZWVuY29kZSAoZCkpCisgICAgICAvKiBJZiBvcGVyYW5kIGFuZCBy 
ZXN1bHQgbW9kZXMgZGlmZmVyLCB0aGVuIG9ubHkgY2hlY2sKKwkgZm9yIGR1cCBjYXNlLiAgKi8K KyAgICAgIGlmIChkLT52bW9kZSA9PSBkLT5vcF9tb2RlKQorCXsKKwkgIGlmIChhYXJjaDY0X2V2 cGNfcmV2X2xvY2FsIChkKSkKKwkgICAgcmV0dXJuIHRydWU7CisJICBlbHNlIGlmIChhYXJjaDY0 X2V2cGNfcmV2X2dsb2JhbCAoZCkpCisJICAgIHJldHVybiB0cnVlOworCSAgZWxzZSBpZiAoYWFy Y2g2NF9ldnBjX2V4dCAoZCkpCisJICAgIHJldHVybiB0cnVlOworCSAgZWxzZSBpZiAoYWFyY2g2 NF9ldnBjX2R1cCAoZCkpCisJICAgIHJldHVybiB0cnVlOworCSAgZWxzZSBpZiAoYWFyY2g2NF9l dnBjX3ppcCAoZCkpCisJICAgIHJldHVybiB0cnVlOworCSAgZWxzZSBpZiAoYWFyY2g2NF9ldnBj X3V6cCAoZCkpCisJICAgIHJldHVybiB0cnVlOworCSAgZWxzZSBpZiAoYWFyY2g2NF9ldnBjX3Ry biAoZCkpCisJICAgIHJldHVybiB0cnVlOworCSAgZWxzZSBpZiAoYWFyY2g2NF9ldnBjX3NlbCAo ZCkpCisJICAgIHJldHVybiB0cnVlOworCSAgZWxzZSBpZiAoYWFyY2g2NF9ldnBjX2lucyAoZCkp CisJICAgIHJldHVybiB0cnVlOworCSAgZWxzZSBpZiAoYWFyY2g2NF9ldnBjX3JlZW5jb2RlIChk KSkKKwkgICAgcmV0dXJuIHRydWU7CisKKwkgIGlmIChkLT52ZWNfZmxhZ3MgPT0gVkVDX1NWRV9E QVRBKQorCSAgICByZXR1cm4gYWFyY2g2NF9ldnBjX3N2ZV90YmwgKGQpOworCSAgZWxzZSBpZiAo ZC0+dmVjX2ZsYWdzID09IFZFQ19BRFZTSU1EKQorCSAgICByZXR1cm4gYWFyY2g2NF9ldnBjX3Ri bCAoZCk7CisJfQorCisgICAgICBpZiAoYWFyY2g2NF9ldnBjX3N2ZV9kdXAgKGQpKQogCXJldHVy biB0cnVlOwotICAgICAgaWYgKGQtPnZlY19mbGFncyA9PSBWRUNfU1ZFX0RBVEEpCi0JcmV0dXJu IGFhcmNoNjRfZXZwY19zdmVfdGJsIChkKTsKLSAgICAgIGVsc2UgaWYgKGQtPnZlY19mbGFncyA9 PSBWRUNfQURWU0lNRCkKLQlyZXR1cm4gYWFyY2g2NF9ldnBjX3RibCAoZCk7CiAgICAgfQogICBy ZXR1cm4gZmFsc2U7CiB9CkBAIC0yNDExOSw5ICsyNDE1Niw2IEBAIGFhcmNoNjRfdmVjdG9yaXpl X3ZlY19wZXJtX2NvbnN0IChtYWNoaW5lX21vZGUgdm1vZGUsIG1hY2hpbmVfbW9kZSBvcF9tb2Rl LAogCQkJCSAgcnR4IHRhcmdldCwgcnR4IG9wMCwgcnR4IG9wMSwKIAkJCQkgIGNvbnN0IHZlY19w ZXJtX2luZGljZXMgJnNlbCkKIHsKLSAgaWYgKHZtb2RlICE9IG9wX21vZGUpCi0gICAgcmV0dXJu IGZhbHNlOwotCiAgIHN0cnVjdCBleHBhbmRfdmVjX3Blcm1fZCBkOwogCiAgIC8qIENoZWNrIHdo ZXRoZXIgdGhlIG1hc2sgY2FuIGJlIGFwcGxpZWQgdG8gYSBzaW5nbGUgdmVjdG9yLiAgKi8KQEAg LTI0MTQ1LDYgKzI0MTc5LDggQEAgYWFyY2g2NF92ZWN0b3JpemVfdmVjX3Blcm1fY29uc3QgKG1h Y2hpbmVfbW9kZSB2bW9kZSwgbWFjaGluZV9tb2RlIG9wX21vZGUsCiAJCSAgICAgc2VsLm5lbHRz X3Blcl9pbnB1dCAoKSk7CiAgIGQudm1vZGUgPSB2bW9kZTsKICAgZC52ZWNfZmxhZ3MgPSBhYXJj aDY0X2NsYXNzaWZ5X3ZlY3Rvcl9tb2RlIChkLnZtb2RlKTsKKyAgZC5vcF9tb2RlID0gb3BfbW9k ZTsKKyAgZC5vcF92ZWNfZmxhZ3MgPSBhYXJjaDY0X2NsYXNzaWZ5X3ZlY3Rvcl9tb2RlIChkLm9w X21vZGUpOwogICBkLnRhcmdldCA9IHRhcmdldDsKICAgZC5vcDAgPSBvcDAgPyBmb3JjZV9yZWcg KHZtb2RlLCBvcDApIDogTlVMTF9SVFg7CiAgIGlmIChvcDAgPT0gb3AxKQo= --000000000000bd8a1b05e0b0a38a--