Hi all,

This patch extends the aarch64_get_lane_zero_extendsi instruction definition to also cover DI mode. This prevents a redundant AND instruction from being generated due to the pattern failing to be matched.

Example:

typedef char v16qi __attribute__ ((vector_size (16)));

unsigned long long
foo (v16qi a)
{
  return a[0];
}

Previously generated:

foo:
        umov    w0, v0.b[0]
        and     x0, x0, 255
        ret

And now generates:

foo:
        umov    w0, v0.b[0]
        ret

Bootstrapped on aarch64-none-linux-gnu and tested on aarch64-none-elf with no regressions.

gcc/
2018-07-23  Sam Tebbs

	* config/aarch64/aarch64-simd.md
	(*aarch64_get_lane_zero_extendsi): Rename to...
	(*aarch64_get_lane_zero_extend): ... This.
	Use GPI iterator instead of SI mode.

gcc/testsuite
2018-07-23  Sam Tebbs

	* gcc.target/aarch64/extract_zero_extend.c: New file.