From: Richard Biener
Date: Wed, 29 Jun 2022 11:33:25 +0200
Subject: Re: [PATCH 1/2] AArch64 Add fallback case using sdot for usdot
To: Tamar Christina
Cc: Richard Sandiford, Richard Earnshaw, nd, gcc-patches@gcc.gnu.org, Marcus Shawcroft
List-Id: Gcc-patches mailing list

On Tue, Jun 28, 2022 at 5:54 PM Tamar Christina wrote:
>
> > -----Original Message-----
> > From: Richard Biener
> > Sent: Monday, June 27, 2022 7:10 AM
> > To: Tamar Christina
> > Cc: Richard Sandiford; Richard Earnshaw; nd; gcc-patches@gcc.gnu.org;
> > Marcus Shawcroft
> > Subject: Re: [PATCH 1/2] AArch64 Add fallback case using sdot for usdot
> >
> > On Mon, Jun 27, 2022 at 7:25 AM Tamar Christina via Gcc-patches
> > <gcc-patches@gcc.gnu.org> wrote:
> > >
> > > > -----Original Message-----
> > > > From: Richard Sandiford
> > > > Sent: Thursday, June 16, 2022 7:54 PM
> > > > To: Tamar Christina
> > > > Cc: gcc-patches@gcc.gnu.org; nd; Richard Earnshaw; Marcus Shawcroft;
> > > > Kyrylo Tkachov
> > > > Subject: Re: [PATCH 1/2] AArch64 Add fallback case using sdot for usdot
> > > >
> > > > Richard Sandiford via Gcc-patches writes:
> > > > > Tamar Christina writes:
> > > > >> Hi All,
> > > > >>
> > > > >> The usdot operation is common in video encoders and decoders,
> > > > >> including some of the most widely used ones.
> > > > >>
> > > > >> This patch adds a +dotprod version of the optab as a fallback for
> > > > >> when you do have sdot but not usdot available.
> > > > >>
> > > > >> The fallback works by adding a bias to the unsigned argument to
> > > > >> convert it to a signed value and then correcting for the bias
> > > > >> later on.
> > > > >>
> > > > >> Essentially it relies on (x - 128)y + 128y == xy where x is
> > > > >> unsigned and y is signed (assuming both are 8-bit values).
> > > > >> Because the range of a signed byte only goes up to 127, we split
> > > > >> the bias correction into:
> > > > >>
> > > > >> (x - 128)y + 127y + y
> > > > >
> > > > > I bet you knew this question was coming, but: this technique isn't
> > > > > target-specific, so wouldn't it be better to handle it in
> > > > > tree-vect-patterns.cc instead?
> > >
> > > Ok, so after many hours of trying I don't know how to make this work.
> > > DOT_PROD_EXPR is a reduction, but emitting them as additional pattern
> > > statements doesn't work because they'll be marked as internal_def
> > > rather than reduction_def. I tried marking the new vec_stmt_info that
> > > I create explicitly as reduction_def, but this gets overwritten during
> > > analysis.
> > >
> > > I then looked into handling it as a vectorizable_operation, but this
> > > has the obvious problem that it is no longer treated as a reduction,
> > > and so it tries to decompose into hi/lo.
> > >
> > > I then looked into treating additional patterns from a reduction as
> > > reductions themselves, but this is obviously wrong as non-reduction
> > > statements also get marked as reductions.
> > >
> > > The conclusion is that I don't think the vectorizer allows additional
> > > reductions to be emitted from patterns.
> >
> > Indeed. DOT_PROD is a weird beast and it doesn't define which lanes are
> > reduced to which, so it's only usable when the result is reduced to a
> > single lane.
> >
> > An SLP pattern might work if you use reduc-plus for the reduced lanes
> > and keep the multiply separate?
>
> Unfortunately I can't seem to get it to handle the reduction in SLP. It
> seems to always use the non-SLP-aware loop vectorizer here. The suggested
> unroll factor is always 1, and even trying to force it gets it to bail
> out later, presumably because it's reducing into a scalar that's used
> outside the loop?

Yes, it possibly needs 1-lane SLP support.

> Thanks,
> Tamar

> > Richard.
> > > > Also, how about doing (x - 128)y + 64y + 64y instead, to reduce the
> > > > number of hoisted constants?
> > > >
> > > > Thanks,
> > > > Richard
> > > >
> > > > > Thanks,
> > > > > Richard
> > > > >
> > > > >> Concretely for:
> > > > >>
> > > > >> #define N 480
> > > > >> #define SIGNEDNESS_1 unsigned
> > > > >> #define SIGNEDNESS_2 signed
> > > > >> #define SIGNEDNESS_3 signed
> > > > >> #define SIGNEDNESS_4 unsigned
> > > > >>
> > > > >> SIGNEDNESS_1 int __attribute__ ((noipa))
> > > > >> f (SIGNEDNESS_1 int res, SIGNEDNESS_3 char *restrict a,
> > > > >>    SIGNEDNESS_4 char *restrict b)
> > > > >> {
> > > > >>   for (__INTPTR_TYPE__ i = 0; i < N; ++i)
> > > > >>     {
> > > > >>       int av = a[i];
> > > > >>       int bv = b[i];
> > > > >>       SIGNEDNESS_2 short mult = av * bv;
> > > > >>       res += mult;
> > > > >>     }
> > > > >>   return res;
> > > > >> }
> > > > >>
> > > > >> we generate:
> > > > >>
> > > > >>         movi    v5.16b, 0x7f
> > > > >>         mov     x3, 0
> > > > >>         movi    v4.16b, 0x1
> > > > >>         movi    v3.16b, 0xffffffffffffff80
> > > > >>         movi    v0.4s, 0
> > > > >> .L2:
> > > > >>         ldr     q2, [x2, x3]
> > > > >>         ldr     q1, [x1, x3]
> > > > >>         add     x3, x3, 16
> > > > >>         sub     v2.16b, v2.16b, v3.16b
> > > > >>         sdot    v0.4s, v2.16b, v1.16b
> > > > >>         sdot    v0.4s, v5.16b, v1.16b
> > > > >>         sdot    v0.4s, v4.16b, v1.16b
> > > > >>         cmp     x3, 480
> > > > >>         bne     .L2
> > > > >>
> > > > >> instead of:
> > > > >>
> > > > >>         movi    v0.4s, 0
> > > > >>         mov     x3, 0
> > > > >> .L2:
> > > > >>         ldr     q2, [x1, x3]
> > > > >>         ldr     q1, [x2, x3]
> > > > >>         add     x3, x3, 16
> > > > >>         sxtl    v4.8h, v2.8b
> > > > >>         sxtl2   v3.8h, v2.16b
> > > > >>         uxtl    v2.8h, v1.8b
> > > > >>         uxtl2   v1.8h, v1.16b
> > > > >>         mul     v2.8h, v2.8h, v4.8h
> > > > >>         mul     v1.8h, v1.8h, v3.8h
> > > > >>         saddw   v0.4s, v0.4s, v2.4h
> > > > >>         saddw2  v0.4s, v0.4s, v2.8h
> > > > >>         saddw   v0.4s, v0.4s, v1.4h
> > > > >>         saddw2  v0.4s, v0.4s, v1.8h
> > > > >>         cmp     x3, 480
> > > > >>         bne     .L2
> > > > >>
> > > > >> The new sequence is significantly faster as the operations it uses
> > > > >> are well optimized. Note that execution tests are already in the
> > > > >> mid-end testsuite.
> > > > >>
> > > > >> Thanks to James Greenhalgh for the tip-off.
> > > > >>
> > > > >> Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.
> > > > >>
> > > > >> Ok for master?
> > > > >>
> > > > >> Thanks,
> > > > >> Tamar
> > > > >>
> > > > >> gcc/ChangeLog:
> > > > >>
> > > > >>   * config/aarch64/aarch64-simd.md (usdot_prod): Generate fallback
> > > > >>   or call original insn ...
> > > > >>   (usdot_prod_insn): ...here.
> > > > >>
> > > > >> gcc/testsuite/ChangeLog:
> > > > >>
> > > > >>   * gcc.target/aarch64/simd/vusdot-autovec-2.c: New test.
> > > > >>
> > > > >> --- inline copy of patch --
> > > > >> diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> > > > >> index cf2f4badacc594df9ecf06de3f8ea570ef9e0ff2..235a6fa371e471816284e3383e8564e9cf643a74 100644
> > > > >> --- a/gcc/config/aarch64/aarch64-simd.md
> > > > >> +++ b/gcc/config/aarch64/aarch64-simd.md
> > > > >> @@ -623,7 +623,7 @@ (define_insn "dot_prod"
> > > > >>
> > > > >>  ;; These instructions map to the __builtins for the Armv8.6-a I8MM usdot
> > > > >>  ;; (vector) Dot Product operation and the vectorized optab.
> > > > >> -(define_insn "usdot_prod"
> > > > >> +(define_insn "usdot_prod_insn"
> > > > >>    [(set (match_operand:VS 0 "register_operand" "=w")
> > > > >>      (plus:VS
> > > > >>        (unspec:VS [(match_operand: 1 "register_operand" "w")
> > > > >> @@ -635,6 +635,43 @@ (define_insn "usdot_prod"
> > > > >>    [(set_attr "type" "neon_dot")]
> > > > >>  )
> > > > >>
> > > > >> +;; usdot auto-vec fallback code
> > > > >> +(define_expand "usdot_prod"
> > > > >> +  [(set (match_operand:VS 0 "register_operand")
> > > > >> +    (plus:VS
> > > > >> +      (unspec:VS [(match_operand: 1 "register_operand")
> > > > >> +                  (match_operand: 2 "register_operand")]
> > > > >> +      UNSPEC_USDOT)
> > > > >> +      (match_operand:VS 3 "register_operand")))]
> > > > >> +  "TARGET_DOTPROD || TARGET_I8MM"
> > > > >> +{
> > > > >> +  if (TARGET_I8MM)
> > > > >> +    {
> > > > >> +      emit_insn (gen_usdot_prod_insn (operands[0], operands[1],
> > > > >> +                                      operands[2], operands[3]));
> > > > >> +      DONE;
> > > > >> +    }
> > > > >> +
> > > > >> +  machine_mode elemmode = GET_MODE_INNER (mode);
> > > > >> +  HOST_WIDE_INT val = 1 << (GET_MODE_BITSIZE (elemmode).to_constant () - 1);
> > > > >> +  rtx signbit = gen_int_mode (val, elemmode);
> > > > >> +  rtx t1 = gen_reg_rtx (mode);
> > > > >> +  rtx t2 = gen_reg_rtx (mode);
> > > > >> +  rtx tmp = gen_reg_rtx (mode);
> > > > >> +  rtx c1 = gen_const_vec_duplicate (mode,
> > > > >> +                                    gen_int_mode (val - 1, elemmode));
> > > > >> +  rtx c2 = gen_const_vec_duplicate (mode, gen_int_mode (1, elemmode));
> > > > >> +  rtx dup = gen_const_vec_duplicate (mode, signbit);
> > > > >> +  c1 = force_reg (mode, c1);
> > > > >> +  c2 = force_reg (mode, c2);
> > > > >> +  dup = force_reg (mode, dup);
> > > > >> +  emit_insn (gen_sub3 (tmp, operands[1], dup));
> > > > >> +  emit_insn (gen_sdot_prod (t1, tmp, operands[2], operands[3]));
> > > > >> +  emit_insn (gen_sdot_prod (t2, c1, operands[2], t1));
> > > > >> +  emit_insn (gen_sdot_prod (operands[0],
> > > > >> +                            c2, operands[2], t2));
> > > > >> +  DONE;
> > > > >> +})
> > > > >> +
> > > > >>  ;; These instructions map to the __builtins for the Dot Product
> > > > >>  ;; indexed operations.
> > > > >>  (define_insn "aarch64_dot_lane"
> > > > >> diff --git a/gcc/testsuite/gcc.target/aarch64/simd/vusdot-autovec-2.c b/gcc/testsuite/gcc.target/aarch64/simd/vusdot-autovec-2.c
> > > > >> new file mode 100644
> > > > >> index 0000000000000000000000000000000000000000..acd8e36209690386d021df72f1467a696750ac3e
> > > > >> --- /dev/null
> > > > >> +++ b/gcc/testsuite/gcc.target/aarch64/simd/vusdot-autovec-2.c
> > > > >> @@ -0,0 +1,25 @@
> > > > >> +/* { dg-do compile } */
> > > > >> +/* { dg-options "-O3 -march=armv8.2-a+noi8mm+dotprod" } */
> > > > >> +
> > > > >> +#define N 480
> > > > >> +#define SIGNEDNESS_1 unsigned
> > > > >> +#define SIGNEDNESS_2 signed
> > > > >> +#define SIGNEDNESS_3 signed
> > > > >> +#define SIGNEDNESS_4 unsigned
> > > > >> +
> > > > >> +SIGNEDNESS_1 int __attribute__ ((noipa))
> > > > >> +f (SIGNEDNESS_1 int res, SIGNEDNESS_3 char *restrict a,
> > > > >> +   SIGNEDNESS_4 char *restrict b)
> > > > >> +{
> > > > >> +  for (__INTPTR_TYPE__ i = 0; i < N; ++i)
> > > > >> +    {
> > > > >> +      int av = a[i];
> > > > >> +      int bv = b[i];
> > > > >> +      SIGNEDNESS_2 short mult = av * bv;
> > > > >> +      res += mult;
> > > > >> +    }
> > > > >> +  return res;
> > > > >> +}
> > > > >> +
> > > > >> +/* { dg-final { scan-assembler-not {\tusdot\t} } } */
> > > > >> +/* { dg-final { scan-assembler-times {\tsdot\t} 3 } } */