From: Hongtao Liu
Date: Mon, 14 Mar 2022 20:20:59 +0800
Subject: Re: [PATCH] i386: Fix up _mm_loadu_si{16,32} [PR99754]
To: Jakub Jelinek
Cc: Uros Bizjak, GCC Patches, "H. J. Lu"

On Mon, Mar 14, 2022 at 7:25 PM Jakub Jelinek wrote:
>
> On Sun, Mar 13, 2022 at 09:34:10PM +0800, Hongtao Liu wrote:
> > LGTM, thanks for handling this.
>
> Thanks, committed.
>
> > > Note, while the Intrinsics guide for _mm_loadu_si32 says SSE2,
> > > for _mm_loadu_si16 it strangely says SSE.  But the intrinsic
> > > returns __m128i, which is only defined in emmintrin.h, and
> > > _mm_set_epi16 is also only SSE2 and later in emmintrin.h.
> > > Even clang defines it in emmintrin.h and ends up with an inlining
> > > failure when calling _mm_loadu_si16 from an sse,no-sse2 function.
> > > So, isn't that a bug in the intrinsics guide instead?
> > I think it's a bug: it's supposed to generate movzx + movd, and movd
> > is under SSE2.  I have reported it to the colleague who maintains
> > the Intel intrinsics guide.
> >
> > There are similar bugs for
> > _mm_loadu_si64
> > _mm_storeu_si16
> > _mm_storeu_si64
>
> Currently it emits pxor + pinsrw, but even those are SSE2 instructions,
> unless they use a MMX register (then it is MMX and SSE).
> I agree that movzwl + movd seems better than pxor + pinsrw though.
> So, do we want to help it a little bit then?  Like:
>
> 2022-03-14  Jakub Jelinek
>
> 	* config/i386/emmintrin.h (_mm_loadu_si16): Use _mm_set_epi32 instead
> 	of _mm_set_epi16 and zero extend the memory load.
> 	* gcc.target/i386/pr95483-1.c: Use -msse2 instead of -msse in
> 	dg-options, allow movzwl+movd instead of pxor with pinsrw.
>
> --- gcc/config/i386/emmintrin.h.jj	2022-03-14 10:44:29.402617685 +0100
> +++ gcc/config/i386/emmintrin.h	2022-03-14 11:58:18.062666257 +0100
> @@ -724,7 +724,7 @@ _mm_loadu_si32 (void const *__P)
>  extern __inline __m128i __attribute__((__gnu_inline__, __always_inline__, __artificial__))
>  _mm_loadu_si16 (void const *__P)
>  {
> -  return _mm_set_epi16 (0, 0, 0, 0, 0, 0, 0, (*(__m16_u *)__P)[0]);
> +  return _mm_set_epi32 (0, 0, 0, (unsigned short) ((*(__m16_u *)__P)[0]));
>  }

Under avx512fp16, the former directly generates vmovw, but the latter
still generates movzx + vmovd, so there is still a missed optimization.
I would therefore prefer to optimize this in the backend:
pxor + pinsrw -> movzx + movd -> vmovw (under avx512fp16).
I'll open a PR for that and optimize it in GCC 13.

>
>  extern __inline void __attribute__((__gnu_inline__, __always_inline__, __artificial__))
> --- gcc/testsuite/gcc.target/i386/pr95483-1.c.jj	2020-10-14 22:05:19.380856952 +0200
> +++ gcc/testsuite/gcc.target/i386/pr95483-1.c	2022-03-14 12:11:07.716891710 +0100
> @@ -1,7 +1,7 @@
>  /* { dg-do compile } */
> -/* { dg-options "-O2 -msse" } */
> -/* { dg-final { scan-assembler-times "pxor\[ \\t\]+\[^\n\]*%xmm\[0-9\]+\[^\n\]*%xmm\[0-9\]+(?:\n|\[ \\t\]+#)" 1 } } */
> -/* { dg-final { scan-assembler-times "pinsrw\[ \\t\]+\[^\n\]*%xmm\[0-9\]+(?:\n|\[ \\t\]+#)" 1 } } */
> +/* { dg-options "-O2 -msse2" } */
> +/* { dg-final { scan-assembler-times "(?:movzwl\[ \\t\]+\[^\n\]*|pxor\[ \\t\]+\[^\n\]*%xmm\[0-9\]+\[^\n\]*%xmm\[0-9\]+)(?:\n|\[ \\t\]+#)" 1 } } */
> +/* { dg-final { scan-assembler-times "(?:movd|pinsrw)\[ \\t\]+\[^\n\]*%xmm\[0-9\]+(?:\n|\[ \\t\]+#)" 1 } } */
>  /* { dg-final { scan-assembler-times "pextrw\[ \\t\]+\[^\n\]*%xmm\[0-9\]+\[^\n\]*(?:\n|\[ \\t\]+#)" 1 } } */
>
>
> 	Jakub

--
BR,
Hongtao