public inbox for gcc-patches@gcc.gnu.org
* Missed lowering to ld1rq from svld1rq for memory operand
@ 2022-08-05 11:32 Prathamesh Kulkarni
  2022-08-05 12:19 ` Richard Sandiford
  0 siblings, 1 reply; 5+ messages in thread
From: Prathamesh Kulkarni @ 2022-08-05 11:32 UTC (permalink / raw)
  To: Richard Sandiford, gcc Patches

[-- Attachment #1: Type: text/plain, Size: 598 bytes --]

Hi Richard,
Following on from our off-list discussion, in the attached patch I wrote a
pattern similar to vec_duplicate<mode>_reg, which seems to work for the
svld1rq tests.
Does it look OK?
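
For reference, this is the kind of testcase the pattern is meant to improve
(a reduced sketch, not taken from the patch; the exact options and codegen
may differ):

  #include <arm_sve.h>

  svint8_t
  dup_mem_s8 (const int8_t *x)
  {
    /* Load a 128-bit quadword from x and replicate it across the whole
       SVE vector.  */
    return svld1rq_s8 (svptrue_b8 (), x);
  }

With a memory operand this currently comes out as a load followed by
dup z0.q, z0.q[0]; with the new pattern it should instead become a single
predicated ld1rqb.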

Sorry, I didn't fully understand your suggestion about integrating this with
the vec_duplicate<mode>_reg pattern. In vec_duplicate<mode>_reg, the operand
of vec_duplicate is expected to have mode <VEL>, while in the pattern in this
patch the operand of vec_duplicate has mode <V128>.
How do we write a pattern so that an operand can accept either of the two modes?
Also, it seems <V128> cannot be used with SVE_ALL?

Thanks,
Prathamesh

[-- Attachment #2: gnu-782-3.txt --]
[-- Type: text/plain, Size: 2149 bytes --]

diff --git a/gcc/config/aarch64/aarch64-sve.md b/gcc/config/aarch64/aarch64-sve.md
index bd60e65b0c3..b0dc33870b8 100644
--- a/gcc/config/aarch64/aarch64-sve.md
+++ b/gcc/config/aarch64/aarch64-sve.md
@@ -2504,6 +2504,27 @@
   }
 )
 
+;; Fold ldr+dup -> ld1rq
+
+(define_insn_and_split "*vec_duplicate<mode>_ld1rq"
+  [(set (match_operand:SVE_FULL 0 "register_operand" "=w")
+	(vec_duplicate:SVE_FULL
+	  (match_operand:<V128> 1 "aarch64_sve_ld1rq_operand" "UtQ")))
+   (clobber (match_scratch:VNx16BI 2 "=Upl"))]
+  "TARGET_SVE"
+  "#"
+  "&& 1"
+  [(const_int 0)]
+  {
+    if (GET_CODE (operands[2]) == SCRATCH)
+      operands[2] = gen_reg_rtx (VNx16BImode);
+    emit_move_insn (operands[2], CONSTM1_RTX (VNx16BImode));
+    rtx gp = gen_lowpart (<VPRED>mode, operands[2]);
+    emit_insn (gen_aarch64_sve_ld1rq<mode> (operands[0], operands[1], gp));
+    DONE;
+  }
+)
+
 ;; Accept memory operands for the benefit of combine, and also in case
 ;; the scalar input gets spilled to memory during RA.  We want to split
 ;; the load at the first opportunity in order to allow the PTRUE to be
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/general/pr96463-2.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/pr96463-2.c
index 196de3f5e0a..0dfe125507f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/general/pr96463-2.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/general/pr96463-2.c
@@ -26,4 +26,8 @@ TEST(svfloat64_t, float64_t, f64)
 
 TEST(svbfloat16_t, bfloat16_t, bf16)
 
-/* { dg-final { scan-assembler-times {\tdup\tz[0-9]+\.q, z[0-9]+\.q\[0\]} 12 { target aarch64_little_endian } } } */
+/* { dg-final { scan-assembler-not "dup" { target aarch64_little_endian } } } */
+/* { dg-final { scan-assembler-times {\tld1rqb\tz0\.b, p0/z, \[x0\]} 2 { target aarch64_little_endian } } } */
+/* { dg-final { scan-assembler-times {\tld1rqh\tz0\.h, p0/z, \[x0\]} 4 { target aarch64_little_endian } } } */
+/* { dg-final { scan-assembler-times {\tld1rqw\tz0\.s, p0/z, \[x0\]} 3 { target aarch64_little_endian } } } */
+/* { dg-final { scan-assembler-times {\tld1rqd\tz0\.d, p0/z, \[x0\]} 3 { target aarch64_little_endian } } } */

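For the record, the codegen difference the updated directives check for on
little-endian is roughly the following (a sketch; register allocation and the
predicate setup may differ):

  before (matching the old scan-assembler pattern, approximately):
        ldr     q0, [x0]
        dup     z0.q, z0.q[0]
  after:
        ptrue   p0.b, all
        ld1rqb  z0.b, p0/z, [x0]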


Thread overview: 5 messages
2022-08-05 11:32 Missed lowering to ld1rq from svld1rq for memory operand Prathamesh Kulkarni
2022-08-05 12:19 ` Richard Sandiford
2023-01-10 17:34   ` Prathamesh Kulkarni
2023-01-12 15:32     ` Richard Sandiford
2023-01-14 17:59       ` Prathamesh Kulkarni
