This patch optimizes the immediate addition sequences generated by
aarch64_add_constant.

The addition sequences currently generated are:

  * If the immediate fits into the unsigned 12-bit range, generate a
    single add/sub.

  * Otherwise, if it fits into the unsigned 24-bit range, generate two
    add/sub instructions.

  * Otherwise, invoke the general constant build function.

This hasn't considered the case where the immediate can't fit into the
unsigned 12-bit range but can fit into a single mov instruction, in which
case we can generate one move and one addition.  The move won't touch the
destination register, so this sequence is better than two additions, both
of which touch the destination register.  For example, for an immediate of
0x1234 (too big for 12 bits, but encodable by a single movz), "mov scratch,
0x1234; add dst, dst, scratch" has a shorter dependency chain on dst than
"add dst, dst, 0x1000; add dst, dst, 0x234".

This patch therefore optimizes the addition sequences into:

  * If the immediate fits into the unsigned 12-bit range, generate a
    single add/sub.

  * Otherwise, if it fits into the unsigned 24-bit range, generate two
    add/sub instructions, unless it also fits into a single move
    instruction, in which case move the immediate into the scratch
    register first, then generate one addition to add the scratch
    register to the destination register.

  * Otherwise, invoke the general constant build function.

OK for trunk?

gcc/
2016-07-20  Jiong Wang

        * config/aarch64/aarch64.c (aarch64_add_constant): Optimize
        instruction sequences.
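
For reference, here is a minimal, self-contained sketch of the new
selection logic, not the GCC implementation itself: fits_single_mov is a
hypothetical simplification of what aarch64_move_imm accepts, and the
sketch prints assembly text instead of emitting RTL.

#include <stdio.h>

/* Assumption: treat any 16-bit value at a movz shift position (0, 16,
   32, 48) as single-mov encodable.  The real backend accepts more
   patterns (e.g. movn and bitmask immediates).  */
static int
fits_single_mov (unsigned long long imm)
{
  for (int shift = 0; shift < 64; shift += 16)
    if ((imm & ~(0xffffULL << shift)) == 0)
      return 1;
  return 0;
}

static void
add_constant (const char *dst, const char *scratch, long long delta)
{
  unsigned long long imm
    = delta < 0 ? -(unsigned long long) delta : (unsigned long long) delta;
  const char *op = delta < 0 ? "sub" : "add";

  if (imm < (1ULL << 12))
    {
      /* One add/sub with a 12-bit immediate.  */
      printf ("%s %s, %s, #0x%llx\n", op, dst, dst, imm);
    }
  else if (fits_single_mov (imm))
    {
      /* The mov neither reads nor writes dst, so this pair has a
	 shorter dependency chain on dst than two chained additions.  */
      printf ("mov %s, #0x%llx\n", scratch, imm);
      printf ("%s %s, %s, %s\n", op, dst, dst, scratch);
    }
  else if (imm < (1ULL << 24))
    {
      /* Two add/sub: the high 12 bits (LSL #12), then the low 12.  */
      printf ("%s %s, %s, #0x%llx, lsl #12\n", op, dst, dst, imm >> 12);
      if (imm & 0xfff)
	printf ("%s %s, %s, #0x%llx\n", op, dst, dst, imm & 0xfff);
    }
  else
    {
      /* Fall back to the general constant-building path.  */
      printf ("/* build #0x%llx in %s, then %s */\n", imm, scratch, op);
    }
}

int
main (void)
{
  add_constant ("x0", "x16", 0x123);    /* single add */
  add_constant ("x0", "x16", 0x1234);   /* mov + add: fits a single movz */
  add_constant ("x0", "x16", 0x123456); /* two adds */
  add_constant ("x0", "x16", -0x5);     /* single sub */
  return 0;
}

Note the sketch checks the single-mov case before the 24-bit case, which
is one straightforward way to express "don't emit two additions when a
single move would do"; the actual placement of that test inside
aarch64_add_constant may differ.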