From: Richard Biener
Date: Wed, 27 Sep 2023 13:29:40 +0200
Subject: Re: [PATCH 09/10] vect: Get rid of vect_model_store_cost
To: Kewen Lin
Cc: gcc-patches@gcc.gnu.org, richard.sandiford@arm.com

On Thu, Sep 14, 2023 at 5:12 AM Kewen Lin wrote:
>
> This patch eventually gets rid of vect_model_store_cost.  It adjusts
> the costing for the remaining memory access types
> VMAT_CONTIGUOUS{, _DOWN, _REVERSE} by moving the costing close to the
> transform code.  Note that vect_model_store_cost has one piece of
> special handling for vectorizing a store into the function result (an
> extra penalty); since the transform part doesn't have it, this patch
> keeps that handling as is.

OK.

> gcc/ChangeLog:
>
>         * tree-vect-stmts.cc (vect_model_store_cost): Remove.
>         (vectorizable_store): Adjust the costing for the remaining memory
>         access types VMAT_CONTIGUOUS{, _DOWN, _REVERSE}.
> ---
>  gcc/tree-vect-stmts.cc | 137 +++++++++++++----------------------------
>  1 file changed, 44 insertions(+), 93 deletions(-)
>
> diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
> index e3ba8077091..3d451c80bca 100644
> --- a/gcc/tree-vect-stmts.cc
> +++ b/gcc/tree-vect-stmts.cc
> @@ -951,81 +951,6 @@ cfun_returns (tree decl)
>    return false;
>  }
>
> -/* Function vect_model_store_cost
> -
> -   Models cost for stores.  In the case of grouped accesses, one access
> -   has the overhead of the grouped access attributed to it.  */
> -
> -static void
> -vect_model_store_cost (vec_info *vinfo, stmt_vec_info stmt_info, int ncopies,
> -                      vect_memory_access_type memory_access_type,
> -                      dr_alignment_support alignment_support_scheme,
> -                      int misalignment,
> -                      vec_load_store_type vls_type, slp_tree slp_node,
> -                      stmt_vector_for_cost *cost_vec)
> -{
> -  gcc_assert (memory_access_type != VMAT_GATHER_SCATTER
> -             && memory_access_type != VMAT_ELEMENTWISE
> -             && memory_access_type != VMAT_STRIDED_SLP
> -             && memory_access_type != VMAT_LOAD_STORE_LANES
> -             && memory_access_type != VMAT_CONTIGUOUS_PERMUTE);
> -
> -  unsigned int inside_cost = 0, prologue_cost = 0;
> -
> -  /* ??? Somehow we need to fix this at the callers.  */
> -  if (slp_node)
> -    ncopies = SLP_TREE_NUMBER_OF_VEC_STMTS (slp_node);
> -
> -  if (vls_type == VLS_STORE_INVARIANT)
> -    {
> -      if (!slp_node)
> -       prologue_cost += record_stmt_cost (cost_vec, 1, scalar_to_vec,
> -                                          stmt_info, 0, vect_prologue);
> -    }
> -
> -
> -  /* Costs of the stores.  */
> -  vect_get_store_cost (vinfo, stmt_info, ncopies, alignment_support_scheme,
> -                      misalignment, &inside_cost, cost_vec);
> -
> -  /* When vectorizing a store into the function result assign
> -     a penalty if the function returns in a multi-register location.
> -     In this case we assume we'll end up with having to spill the
> -     vector result and do piecewise loads as a conservative estimate.  */
> -  tree base = get_base_address (STMT_VINFO_DATA_REF (stmt_info)->ref);
> -  if (base
> -      && (TREE_CODE (base) == RESULT_DECL
> -         || (DECL_P (base) && cfun_returns (base)))
> -      && !aggregate_value_p (base, cfun->decl))
> -    {
> -      rtx reg = hard_function_value (TREE_TYPE (base), cfun->decl, 0, 1);
> -      /* ??? Handle PARALLEL in some way.  */
> -      if (REG_P (reg))
> -       {
> -         int nregs = hard_regno_nregs (REGNO (reg), GET_MODE (reg));
> -         /* Assume that a single reg-reg move is possible and cheap,
> -            do not account for vector to gp register move cost.  */
> -         if (nregs > 1)
> -           {
> -             /* Spill.  */
> -             prologue_cost += record_stmt_cost (cost_vec, ncopies,
> -                                                vector_store,
> -                                                stmt_info, 0, vect_epilogue);
> -             /* Loads.  */
> -             prologue_cost += record_stmt_cost (cost_vec, ncopies * nregs,
> -                                                scalar_load,
> -                                                stmt_info, 0, vect_epilogue);
> -           }
> -       }
> -    }
> -
> -  if (dump_enabled_p ())
> -    dump_printf_loc (MSG_NOTE, vect_location,
> -                    "vect_model_store_cost: inside_cost = %d, "
> -                    "prologue_cost = %d .\n", inside_cost, prologue_cost);
> -}
> -
> -
>  /* Calculate cost of DR's memory access.  */
>  void
>  vect_get_store_cost (vec_info *, stmt_vec_info stmt_info, int ncopies,
> @@ -9223,6 +9148,11 @@ vectorizable_store (vec_info *vinfo,
>        return true;
>      }
>
> +  gcc_assert (memory_access_type == VMAT_CONTIGUOUS
> +             || memory_access_type == VMAT_CONTIGUOUS_DOWN
> +             || memory_access_type == VMAT_CONTIGUOUS_PERMUTE
> +             || memory_access_type == VMAT_CONTIGUOUS_REVERSE);
> +
>    unsigned inside_cost = 0, prologue_cost = 0;
>    auto_vec<tree> result_chain (group_size);
>    auto_vec<tree> vec_oprnds;
> @@ -9257,10 +9187,9 @@ vectorizable_store (vec_info *vinfo,
>                        that there is no interleaving, DR_GROUP_SIZE is 1,
>                        and only one iteration of the loop will be executed.  */
>                     op = vect_get_store_rhs (next_stmt_info);
> -                   if (costing_p
> -                       && memory_access_type == VMAT_CONTIGUOUS_PERMUTE)
> +                   if (costing_p)
>                       update_prologue_cost (&prologue_cost, op);
> -                   else if (!costing_p)
> +                   else
>                       {
>                         vect_get_vec_defs_for_operand (vinfo, next_stmt_info,
>                                                        ncopies, op,
> @@ -9352,10 +9281,9 @@ vectorizable_store (vec_info *vinfo,
>      {
>        if (costing_p)
>         {
> -         if (memory_access_type == VMAT_CONTIGUOUS_PERMUTE)
> -           vect_get_store_cost (vinfo, stmt_info, 1,
> -                                alignment_support_scheme, misalignment,
> -                                &inside_cost, cost_vec);
> +         vect_get_store_cost (vinfo, stmt_info, 1,
> +                              alignment_support_scheme, misalignment,
> +                              &inside_cost, cost_vec);
>
>           if (!slp)
>             {
> @@ -9550,18 +9478,41 @@ vectorizable_store (vec_info *vinfo,
>
>    if (costing_p)
>      {
> -      if (memory_access_type == VMAT_CONTIGUOUS_PERMUTE)
> -       {
> -         if (dump_enabled_p ())
> -           dump_printf_loc (MSG_NOTE, vect_location,
> -                            "vect_model_store_cost: inside_cost = %d, "
> -                            "prologue_cost = %d .\n",
> -                            inside_cost, prologue_cost);
> +      /* When vectorizing a store into the function result assign
> +        a penalty if the function returns in a multi-register location.
> +        In this case we assume we'll end up with having to spill the
> +        vector result and do piecewise loads as a conservative estimate.  */
> +      tree base = get_base_address (STMT_VINFO_DATA_REF (stmt_info)->ref);
> +      if (base
> +         && (TREE_CODE (base) == RESULT_DECL
> +             || (DECL_P (base) && cfun_returns (base)))
> +         && !aggregate_value_p (base, cfun->decl))
> +       {
> +         rtx reg = hard_function_value (TREE_TYPE (base), cfun->decl, 0, 1);
> +         /* ??? Handle PARALLEL in some way.  */
> +         if (REG_P (reg))
> +           {
> +             int nregs = hard_regno_nregs (REGNO (reg), GET_MODE (reg));
> +             /* Assume that a single reg-reg move is possible and cheap,
> +                do not account for vector to gp register move cost.  */
> +             if (nregs > 1)
> +               {
> +                 /* Spill.  */
> +                 prologue_cost
> +                   += record_stmt_cost (cost_vec, ncopies, vector_store,
> +                                        stmt_info, 0, vect_epilogue);
> +                 /* Loads.  */
> +                 prologue_cost
> +                   += record_stmt_cost (cost_vec, ncopies * nregs, scalar_load,
> +                                        stmt_info, 0, vect_epilogue);
> +               }
> +           }
>         }
> -      else
> -       vect_model_store_cost (vinfo, stmt_info, ncopies, memory_access_type,
> -                              alignment_support_scheme, misalignment, vls_type,
> -                              slp_node, cost_vec);
> +      if (dump_enabled_p ())
> +       dump_printf_loc (MSG_NOTE, vect_location,
> +                        "vect_model_store_cost: inside_cost = %d, "
> +                        "prologue_cost = %d .\n",
> +                        inside_cost, prologue_cost);
>      }
>
>    return true;
> --
> 2.31.1
>