Hi,

Attached is a patch that fixes the sqdmulh_lane_* intrinsics. Previously, they accepted a 128-bit lane index range; this patch fixes them to accept a 64-bit lane index range. The sqdmulh_laneq_* and AdvSIMD scalar intrinsics still accept a 128-bit lane index range, as before.

Regression tested on aarch64-none-elf with no regressions.

OK for trunk and aarch64-4.7-branch?

Thanks,
Tejas Belagod
ARM.

Changelog

2013-01-14  Tejas Belagod

gcc/
	* config/aarch64/aarch64-simd-builtins.def: Separate sqdmulh_lane
	entries into lane and laneq entries.
	* config/aarch64/aarch64-simd.md (aarch64_sqdmulh_lane): Remove
	AdvSIMD scalar modes.
	(aarch64_sqdmulh_laneq): New.
	(aarch64_sqdmulh_lane): New RTL pattern for Scalar AdvSIMD modes.
	* config/aarch64/arm_neon.h: Fix all the vqdmulh_lane* intrinsics'
	builtin implementations to reflect changes in RTL in aarch64-simd.md.
	* config/aarch64/iterators.md (VCOND): New.
	(VCONQ): New.