Hi,

I noticed that the atomic_store pattern is the only one in atomics.md that
uses memory_operand as its predicate.  This looks like a typo to me, and it
also causes problems: the general address expression accepted by
memory_operand is kept until LRA finds out it doesn't match the "Q"
constraint.  As a result, LRA needs to reload the address expression out of
the memory reference.  Since there is no combine pass after LRA, inefficient
code like the following is generated for atomic stores:

	add	x1, x29, 64
	add	x0, x1, x0, sxtw 3
	sub	x0, x0, #16
	stlr	x19, [x0]

Or:

	sxtw	x0, w0
	add	x1, x29, 48
	add	x1, x1, x0, sxtw 3
	stlr	x19, [x1]

With this patch, we force atomic_store to use a direct register addressing
mode at an earlier compilation phase, and better code is generated:

	add	x1, x29, 48
	add	x1, x1, x0, sxtw 3
	stlr	x19, [x1]

Bootstrapped and tested on aarch64.  Is it OK?

Thanks,
bin

2015-12-01  Bin Cheng

	* config/aarch64/atomics.md (atomic_store): Use predicate
	aarch64_sync_memory_operand.
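
For reference, a hypothetical C snippet (not part of the patch; names are
illustrative) of the kind of source that produces an atomic store through an
indexed address.  On aarch64 the release store is emitted as STLR, which only
accepts a base-register addressing mode (the "Q" constraint), so the compiler
must compute the full address into a register first, as in the assembly above:

```c
#include <stdatomic.h>

/* Hypothetical array; the indexed access &buf[i] forces the compiler to
   materialize base + i * sizeof(long) in a register before the STLR,
   since STLR has no register-offset addressing mode.  */
static _Atomic long buf[16];

void store_indexed(int i, long v)
{
    /* Release store: compiled to STLR on aarch64.  */
    atomic_store_explicit(&buf[i], v, memory_order_release);
}

long load_indexed(int i)
{
    /* Acquire load, pairing with the release store above.  */
    return atomic_load_explicit(&buf[i], memory_order_acquire);
}
```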