Hi all,

The vaddlv_u8 and vaddlv_u16 intrinsics produce a widened scalar result (uint16_t and uint32_t respectively). The ADDLV instructions themselves zero the rest of the V register, which gives us a free zero-extension to 32 and 64 bits, similar to how it works on the GP-reg side. Because we don't model that zero-extension in the machine description, GCC can move the results of these instructions to the GP regs just to do a (superfluous) zero-extension. This patch just adds a pattern to catch these cases.

For the testcases we can now generate no zero-extends or GP<->FP reg moves, whereas before we generated stuff like:

foo_8_32:
        uaddlv  h0, v0.8b
        umov    w1, v0.h[0] // FP<->GP move with zero-extension!
        str     w1, [x0]
        ret

Bootstrapped and tested on aarch64-none-linux-gnu.

Pushing to trunk.

Thanks,
Kyrill

gcc/ChangeLog:

	* config/aarch64/aarch64-simd.md (*aarch64_addlv_ze): New pattern.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/simd/addlv_zext.c: New test.