Hi,

As mentioned in the PR, for the following test case:

#include <arm_neon.h>

uint32x2_t f1(float32x2_t a, float32x2_t b)
{
  return vabs_f32 (a) >= vabs_f32 (b);
}

uint32x2_t f2(float32x2_t a, float32x2_t b)
{
  return (uint32x2_t) __builtin_neon_vcagev2sf (a, b);
}

we generate vacge for f2, but with -ffast-math we generate the following for f1:

f1:
        vabs.f32  d1, d1
        vabs.f32  d0, d0
        vcge.f32  d0, d0, d1
        bx        lr

This happens because the middle-end inverts the comparison to b <= a,
as seen in the .optimized dump:

  _8 = __builtin_neon_vabsv2sf (a_4(D));
  _7 = __builtin_neon_vabsv2sf (b_5(D));
  _1 = _7 <= _8;
  _2 = VIEW_CONVERT_EXPR(_1);
  _6 = VIEW_CONVERT_EXPR(_2);
  return _6;

and combine fails to match the following pattern:

(set (reg:V2SI 121)
     (neg:V2SI (le:V2SI (abs:V2SF (reg:V2SF 123))
                        (abs:V2SF (reg:V2SF 122)))))

because the neon_vca pattern uses the GTGE code iterator, which does not
cover le. The attached patch adjusts the neon_vca patterns to use GLTE
instead, similar to neon_vca_fp16insn, and removes the NEON_VACMP
iterator.

Code-gen with the patch:

f1:
        vacle.f32 d0, d1, d0
        bx        lr

Bootstrapped and tested on arm-linux-gnueabihf, and cross-tested on arm*-*-*.
OK to commit?

Thanks,
Prathamesh
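
P.S. For reference, a minimal sketch of what the adjusted neon_vca insn could
look like, modeled on the existing fp16 variant. The pattern name, operand
constraints, mode/code attributes and type attribute below are my approximation
of the usual neon.md conventions and need not match the attached patch exactly:

;; Match (neg (GLTE (abs x) (abs y))) so that both the gt/ge and the
;; lt/le forms of the absolute comparison are recognized, even after
;; the middle-end inverts the comparison.
(define_insn "neon_vca<cmp_op><mode>"
  [(set (match_operand:<V_cmp_result> 0 "s_register_operand" "=w")
        (neg:<V_cmp_result>
          (GLTE:<V_cmp_result>
            (abs:VCVTF (match_operand:VCVTF 1 "s_register_operand" "w"))
            (abs:VCVTF (match_operand:VCVTF 2 "s_register_operand" "w")))))]
  "TARGET_NEON && flag_unsafe_math_optimizations"
  "vac<cmp_op>.<V_s_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
  [(set_attr "type" "neon_fp_compare_s<q>")]
)

With GLTE covering le/lt in addition to gt/ge, the le comparison produced by
the middle-end for f1 matches directly and is emitted as vacle.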