I was working on polishing some of Kai's work to eliminate shorten_compare and stumbled on this tiny missed optimization. Basically this change allows us to see something like this:

  n = (short unsigned int) mode_size[(unsigned int) mode] * 8 <= 64
      ? (int) ((short unsigned int) mode_size[(unsigned int) mode] * 8)
      : 64;

Note the (int) cast on the true arm of the COND_EXPR. We factor that out of the arms and adjust the constant (which is obviously free), giving:

  n = (int) ((short unsigned int) mode_size[(unsigned int) mode] * 8 <= 64
             ? (short unsigned int) mode_size[(unsigned int) mode] * 8
             : 64);

Seems subtle, but now the existing optimizers can recognize it as:

  n = (int) MIN_EXPR <(short unsigned int) mode_size[(unsigned int) mode] * 8, 64>;

In another case I saw the same kind of transformation occur, but there was already a type conversion outside the MIN_EXPR. So we ended up with

  (T1) (T2) MIN_EXPR < ... >

and we were able to prove the conversion to T2 was redundant, resulting in just

  (T1) MIN_EXPR < ... >

You could legitimately ask why we do not apply this when both arms are converted. That leads to recursion via fold-const.c, which wants to shove the conversion back into the arms in that case; removing that code results in testsuite regressions. I felt it was time to cut that thread a bit, as the real goal here is to remove the shorten_* bits in c-common, not convert all of fold-const.c to match.pd :-)

Bootstrapped and regression tested on x86_64-linux-gnu. OK for the trunk?