From: Hongtao Liu
Date: Thu, 31 Aug 2023 16:06:24 +0800
Subject: Re: [PATCH] Adjust costing of emulated vectorized gather/scatter
To: Richard Biener
Cc: liuhongt, gcc-patches@gcc.gnu.org, rguenther@suse.de, hubicka@ucw.cz

On Wed, Aug 30, 2023 at 8:18 PM Richard Biener via Gcc-patches wrote:
>
> On Wed, Aug 30, 2023 at 12:38 PM liuhongt via Gcc-patches wrote:
> >
> > r14-332-g24905a4bd1375c adjusts costing of emulated vectorized
> > gather/scatter.
> > ----
> > commit 24905a4bd1375ccd99c02510b9f9529015a48315
> > Author: Richard Biener
> > Date:   Wed Jan 18 11:04:49 2023 +0100
> >
> >     Adjust costing of emulated vectorized gather/scatter
> >
> >     Emulated gather/scatter behave similarly to strided elementwise
> >     accesses in that they need to decompose the offset vector
> >     and construct or decompose the data vector, so handle them
> >     the same way, pessimizing the cases with many elements.
> > ----
> >
> > But for emulated gather/scatter, the offset vector load/vec_construct
> > has already been counted, and in practice it is probably eliminated by
> > a later optimizer.
> > Also, after decomposing, element loads from contiguous memory could
> > be less bound compared to normal elementwise loads.
> > The patch decreases the cost a little bit.
> >
> > This will enable gather emulation for the loop below with VF=8 (ymm):
> >
> > double
> > foo (double* a, double* b, unsigned int* c, int n)
> > {
> >   double sum = 0;
> >   for (int i = 0; i != n; i++)
> >     sum += a[i] * b[c[i]];
> >   return sum;
> > }
> >
> > For this loop, a microbenchmark on ICX shows that emulated gather
> > with VF=8 is 30% faster than emulated gather with VF=4 when the trip
> > count is big enough.
> > It brings back ~4% for 510.parest; there is still a ~5% regression
> > compared to the gather instruction due to being throughput-bound.
> >
> > For -march=znver1/2/3/4, the change doesn't enable VF=8 (ymm) for the
> > loop; VF remains 4 (xmm) as before (guessing this is related to their
> > own cost models).
> >
> > Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
> > Ok for trunk?
> >
> > gcc/ChangeLog:
> >
> >         PR target/111064
> >         * config/i386/i386.cc (ix86_vector_costs::add_stmt_cost):
> >         Decrease cost a little bit for vec_to_scalar (offset vector)
> >         in emulated gather.
> >
> > gcc/testsuite/ChangeLog:
> >
> >         * gcc.target/i386/pr111064.c: New test.
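[For readers following along: the decomposition being costed here can be sketched in plain, scalar C. For a vector of W offsets, the vectorizer emits W extracts from the offset vector, W scalar data loads, and one vec_construct rebuilding the data vector. This is an illustrative sketch only, not GCC code; the helper name `emulated_gather` and the width macro `W` are hypothetical.]

```c
#define W 4 /* vector width, i.e. TYPE_VECTOR_SUBPARTS */

/* Scalar sketch of an emulated vectorized gather for b[c[i]]:
   each iteration models one vec_to_scalar (offset extract) plus
   one scalar_load; the stores into data_vec stand in for the
   final vec_construct.  */
static void
emulated_gather (const double *b, const unsigned int *offsets,
                 double *data_vec)
{
  for (int lane = 0; lane < W; lane++)
    {
      unsigned int off = offsets[lane]; /* vec_to_scalar: extract offset */
      data_vec[lane] = b[off];          /* scalar_load: load data element */
    }
}
```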
> > ---
> >  gcc/config/i386/i386.cc                  | 11 ++++++++++-
> >  gcc/testsuite/gcc.target/i386/pr111064.c | 12 ++++++++++++
> >  2 files changed, 22 insertions(+), 1 deletion(-)
> >  create mode 100644 gcc/testsuite/gcc.target/i386/pr111064.c
> >
> > diff --git a/gcc/config/i386/i386.cc b/gcc/config/i386/i386.cc
> > index 1bc3f11ff07..337e0f1bfbb 100644
> > --- a/gcc/config/i386/i386.cc
> > +++ b/gcc/config/i386/i386.cc
> > @@ -24079,7 +24079,16 @@ ix86_vector_costs::add_stmt_cost (int count, vect_cost_for_stmt kind,
> >           || STMT_VINFO_MEMORY_ACCESS_TYPE (stmt_info) == VMAT_GATHER_SCATTER))
> >      {
> >        stmt_cost = ix86_builtin_vectorization_cost (kind, vectype, misalign);
> > -      stmt_cost *= (TYPE_VECTOR_SUBPARTS (vectype) + 1);
> > +      /* For emulated gather/scatter, the offset vector load/vec_construct
> > +        has already been counted and, in practice, is probably eliminated
> > +        by a later optimizer.
> > +        Also, after decomposing, element loads from contiguous memory
> > +        could be less bound compared to normal elementwise loads.  */
> > +      if (kind == vec_to_scalar
> > +         && STMT_VINFO_MEMORY_ACCESS_TYPE (stmt_info) == VMAT_GATHER_SCATTER)
> > +       stmt_cost *= TYPE_VECTOR_SUBPARTS (vectype);
>
> For gather we cost N vector extracts (from the offset vector), N scalar
> loads (the actual data loads) and one vec_construct.
>
> For scatter we cost N vector extracts (from the offset vector),
> N vector extracts (from the data vector) and N scalar stores.
>
> It was intended to penalize the extracts the same way as vector
> construction.
>
> Your change will adjust all three different decomposition kinds "a bit";
> I realize the scaling by (TYPE_VECTOR_SUBPARTS + 1) is kind-of
> arbitrary, but so is your adjustment, and I don't see why
> VMAT_GATHER_SCATTER is special to your adjustment.
>
> So the comment you put before the special-casing doesn't really make
> sense to me.
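[The scatter decomposition enumerated above can be sketched the same way as the gather case: W extracts from the offset vector, W extracts from the data vector, and W scalar stores. An illustrative scalar sketch, with the hypothetical helper name `emulated_scatter` and width macro `W`; not GCC code.]

```c
#define W 4 /* vector width, i.e. TYPE_VECTOR_SUBPARTS */

/* Scalar sketch of an emulated vectorized scatter b[c[i]] = x[i]:
   each iteration models two vec_to_scalar extracts (offset and data)
   plus one scalar_store.  */
static void
emulated_scatter (double *b, const unsigned int *offsets,
                  const double *data_vec)
{
  for (int lane = 0; lane < W; lane++)
    {
      unsigned int off = offsets[lane]; /* vec_to_scalar: extract offset */
      double val = data_vec[lane];      /* vec_to_scalar: extract data */
      b[off] = val;                     /* scalar_store */
    }
}
```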
>
> For zen4 costing we currently have
>
> *_11 8 times vec_to_scalar costs 576 in body
> *_11 8 times scalar_load costs 96 in body
> *_11 1 times vec_construct costs 792 in body
>
> for zmm,
>
> *_11 4 times vec_to_scalar costs 80 in body
> *_11 4 times scalar_load costs 48 in body
> *_11 1 times vec_construct costs 100 in body
>
> for ymm, and
>
> *_11 2 times vec_to_scalar costs 24 in body
> *_11 2 times scalar_load costs 24 in body
> *_11 1 times vec_construct costs 12 in body
>
> for xmm.  Even with your adjustment, if we were to enable cost
> comparison between vector sizes, I bet we'd choose xmm (you can try by
> re-ordering the modes in the ix86_autovectorize_vector_modes hook).  So
> it feels like a hack.  If you think that Icelake should enable 4-element
> vectorized emulated gather, then we should disable this individual
> scaling and possibly instead penalize when the number of (emulated)
> gathers is too high?

I think even for elementwise load/store the penalty is too high.
Looking at the original issue PR84037, the regression comes from many
parts; and for the related issue PR87561, the regression is due to the
outer loop context (similar for PR82862), not a real vectorization
issue (the PR82862 vectorized code standalone is even faster than the
scalar version).

stmt_cost *= (TYPE_VECTOR_SUBPARTS (vectype) + 1) seems to just disable
vectorization as a workaround, but it is not a realistic estimate.

For simplicity, maybe we should reduce the penalty, i.e.
stmt_cost *= TYPE_VECTOR_SUBPARTS (vectype) / 2; at least with this,
the vectorizer will still choose ymm even with cost comparison.
But I'm not sure if this will regress PR87561, PR84037 or PR84016.
(Maybe we should only reduce the penalty when there's no outer loop,
due to PR87561/PR82862 -- does that make some sense?)

> That said, we could count the number of element extracts and inserts
> (and maybe [scalar] loads and stores) and at finish_cost time weigh
> them against the number of "other" operations.
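[The three scalings under discussion are simple arithmetic on the base statement cost. A minimal sketch comparing them side by side, with N standing for TYPE_VECTOR_SUBPARTS; the function names are hypothetical and this is illustrative only, not the GCC implementation.]

```c
/* Current penalty: scale by (N + 1).  */
static int
scale_current (int stmt_cost, int n)
{
  return stmt_cost * (n + 1);
}

/* The patch's gather/scatter special case: scale by N.  */
static int
scale_patch (int stmt_cost, int n)
{
  return stmt_cost * n;
}

/* The softer penalty floated in the reply: scale by N / 2.  */
static int
scale_half (int stmt_cost, int n)
{
  return stmt_cost * (n / 2);
}
```

For example, with a base cost of 4 and N = 8 lanes, the three scalings give 36, 32, and 16 respectively, which shows why the halved penalty is much more likely to let a wider mode win the cost comparison.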
>
> As repeatedly said, the current cost model setup is a bit
> garbage-in-garbage-out since it in no way models latency correctly;
> instead it disregards all dependencies and simply counts ops.

> > +      else
> > +       stmt_cost *= (TYPE_VECTOR_SUBPARTS (vectype) + 1);
> >      }
> >    else if ((kind == vec_construct || kind == scalar_to_vec)
> >            && node
> > diff --git a/gcc/testsuite/gcc.target/i386/pr111064.c b/gcc/testsuite/gcc.target/i386/pr111064.c
> > new file mode 100644
> > index 00000000000..aa2589bd36f
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/i386/pr111064.c
> > @@ -0,0 +1,12 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-Ofast -march=icelake-server -mno-gather" } */
> > +/* { dg-final { scan-assembler-times {(?n)vfmadd[123]*pd.*ymm} 2 { target { ! ia32 } } } } */
> > +
> > +double
> > +foo (double* a, double* b, unsigned int* c, int n)
> > +{
> > +  double sum = 0;
> > +  for (int i = 0; i != n; i++)
> > +    sum += a[i] * b[c[i]];
> > +  return sum;
> > +}
> > --
> > 2.31.1

--
BR,
Hongtao