Hi All,

This patch fixes the code generation in copy_blkmode_to_reg by recalculating
the bitsize on each iteration: each copy is the largest one allowed that does
not read more bits than are left to copy.  This fixes the bad code generation
that was reported and still produces better code in most cases.  For targets
that do not support fast unaligned access it falls back to the old one-byte
copies (MIN alignment).

For the copy of a 3-byte structure this now produces:

fun3:
	adrp	x1, .LANCHOR0
	add	x1, x1, :lo12:.LANCHOR0
	mov	x0, 0
	sub	sp, sp, #16
	ldrh	w2, [x1, 16]
	ldrb	w1, [x1, 18]
	add	sp, sp, 16
	bfi	x0, x2, 0, 16
	bfi	x0, x1, 16, 8
	ret

whereas before it produced:

fun3:
	adrp	x0, .LANCHOR0
	add	x2, x0, :lo12:.LANCHOR0
	sub	sp, sp, #16
	ldrh	w1, [x0, #:lo12:.LANCHOR0]
	ldrb	w0, [x2, 2]
	strh	w1, [sp, 8]
	strb	w0, [sp, 10]
	ldr	w0, [sp, 8]
	add	sp, sp, 16
	ret

Cross-compiled on aarch64-none-elf with no issues.
Bootstrapped powerpc64-unknown-linux-gnu, x86_64-pc-linux-gnu,
arm-none-linux-gnueabihf and aarch64-none-linux-gnu with no issues.
Regtested aarch64-none-elf, x86_64-pc-linux-gnu,
powerpc64-unknown-linux-gnu and arm-none-linux-gnueabihf and found
no issues.  The regression on powerpc (pr63594-2.c) is fixed now.

OK for trunk?

Thanks,
Tamar

gcc/
2018-04-05  Tamar Christina

	PR middle-end/85123
	* expr.c (copy_blkmode_to_reg): Fix wrong code generation.
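
For illustration, below is a minimal standalone sketch of the chunk-size
selection the description above refers to.  The helper name copy_chunk_bits,
the fixed constants and the simplified halving loop are assumptions made for
the example; this is not the actual expr.c code.

/* Sketch only: pick the number of bits to copy in one iteration as the
   largest power-of-two size, capped at the word size, that does not
   read more bits than remain.  */

#include <stdio.h>

#define BITS_PER_WORD 64   /* assumed 64-bit target for the example  */
#define BITS_PER_UNIT 8

/* Hypothetical helper: returns the bitsize for one copy iteration.  */
static unsigned
copy_chunk_bits (unsigned bits_left, int slow_unaligned_access)
{
  /* Targets without fast unaligned access keep the old behaviour and
     copy one byte (MIN alignment) at a time.  */
  if (slow_unaligned_access)
    return BITS_PER_UNIT;

  /* Start from a full word and halve until the read no longer goes
     past the bits that are left to copy.  */
  unsigned bitsize = BITS_PER_WORD;
  while (bitsize > bits_left)
    bitsize /= 2;
  return bitsize;
}

int
main (void)
{
  /* A 3-byte (24-bit) structure, as in the example above.  */
  unsigned bits_left = 24;
  while (bits_left > 0)
    {
      unsigned n = copy_chunk_bits (bits_left, 0);
      printf ("copy %u bits\n", n);
      bits_left -= n;
    }
  return 0;
}

Run on the 24-bit case this prints a 16-bit copy followed by an 8-bit copy,
matching the ldrh/ldrb pair in the new aarch64 output above.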