[Added PR number and updated patches]

On AArch64, the __sync builtins are implemented using the __atomic
operations and barriers. This makes the __sync builtins inconsistent
with their documentation, which requires stronger barriers than those
for the __atomic builtins.

The difference between the __sync and __atomic builtins is that the
restrictions imposed by a __sync operation's barrier apply to all memory
references, while the restrictions of an __atomic operation's barrier
only need to apply to a subset. This affects AArch64 in particular
because, although its implementation of the __atomic builtins is correct,
the barriers generated are too weak for the __sync builtins.

The affected __sync builtins are the __sync_fetch_and_op (and
__sync_op_and_fetch) functions, __sync_compare_and_swap and
__sync_lock_test_and_set. This and a following patch modify the code
generated for these functions to weaken initial load-acquires to a
simple load and to add a final fence to prevent code-hoisting. The last
patch will add tests for the code generated by the AArch64 backend for
the __sync builtins.

- Full barriers: __sync_fetch_and_op, __sync_op_and_fetch,
  __sync_*_compare_and_swap

  [load-acquire; code; store-release] becomes
  [load; code; store-release; fence].

- Acquire barriers: __sync_lock_test_and_set

  [load-acquire; code; store] becomes
  [load; code; store; fence].

The code generated for release barriers and for the __atomic builtins
is unchanged.

This patch changes the code generated for the __sync_fetch_and_op and
__sync_op_and_fetch builtins.

Tested with check-gcc for aarch64-none-linux-gnu.

Ok for trunk?
Matthew

gcc/
2015-05-22  Matthew Wahab

	PR target/65697
	* config/aarch64/aarch64.c (aarch64_emit_post_barrier): New.
	(aarch64_split_atomic_op): Check for __sync memory models, emit
	appropriate initial and final barriers.
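
For reference, a minimal C sketch (not part of the patch) showing the
two builtin families discussed above; the function and variable names
are illustrative only:

/* Illustrative sketch: __sync vs. __atomic builtins.  The __sync form
   is documented as a full barrier that orders all memory references;
   the __atomic form only needs to order according to the memory model
   given as its last argument.  */

static int counter;

int
sync_increment (void)
{
  /* Documented as a full barrier: no memory reference may be moved
     across the operation in either direction.  */
  return __sync_fetch_and_add (&counter, 1);
}

int
atomic_increment (void)
{
  /* Sequentially consistent __atomic builtin; its barriers are weaker
     than what the __sync documentation requires.  */
  return __atomic_fetch_add (&counter, 1, __ATOMIC_SEQ_CST);
}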