* [Bug target/114427] New: [x86] vec_pack_truncv8si/v4si can be optimized with pblendw instead of pand for AVX2 target
@ 2024-03-22  5:25 liuhongt at gcc dot gnu.org
From: liuhongt at gcc dot gnu.org @ 2024-03-22  5:25 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114427

            Bug ID: 114427
           Summary: [x86] vec_pack_truncv8si/v4si can be optimized with
                    pblendw instead of pand for AVX2 target
           Product: gcc
           Version: 14.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: target
          Assignee: unassigned at gcc dot gnu.org
          Reporter: liuhongt at gcc dot gnu.org
  Target Milestone: ---

void
foo (int* a, short* __restrict b, int* c)
{
    for (int i = 0; i != 8; i++)
      b[i] = c[i] + a[i];
}

gcc -O2 -march=x86-64-v3 -S

        mov     eax, 65535
        vmovd   xmm0, eax
        vpbroadcastd    xmm0, xmm0
        vpand   xmm2, xmm0, XMMWORD PTR [rdi+16]
        vpand   xmm1, xmm0, XMMWORD PTR [rdi]
        vpackusdw       xmm1, xmm1, xmm2
        vpand   xmm2, xmm0, XMMWORD PTR [rdx]
        vpand   xmm0, xmm0, XMMWORD PTR [rdx+16]
        vpackusdw       xmm0, xmm2, xmm0
        vpaddw  xmm0, xmm1, xmm0
        vmovdqu XMMWORD PTR [rsi], xmm0


It could instead be generated as:

        vpxor   %xmm0, %xmm0, %xmm0
        vpblendw        $85, 16(%rdi), %xmm0, %xmm2
        vpblendw        $85, (%rdi), %xmm0, %xmm1
        vpackusdw       %xmm2, %xmm1, %xmm1
        vpblendw        $85, (%rdx), %xmm0, %xmm2
        vpblendw        $85, 16(%rdx), %xmm0, %xmm0
        vpackusdw       %xmm0, %xmm2, %xmm0
        vpaddw  %xmm0, %xmm1, %xmm0
        vmovdqu %xmm0, (%rsi)

Currently, we're using (const_vector:v4si (const_int 0xffff) x4) as the mask to
clear the upper 16 bits of each element, but pblendw against a zero vector can
be used instead, and a zero vector is much cheaper to materialize than
(const_vector:v4si (const_int 0xffff) x4):

        mov     eax, 65535
        vmovd   xmm0, eax
        vpbroadcastd    xmm0, xmm0

pblendw has the same latency as pand, but can be a little worse from a
throughput point of view (0.33 -> 0.5 on the Alder Lake P-core, the same on Zen 4).
