Range-ops uses wi_fold (implemented by each opcode) to individually fold subranges one at a time and then combines them.  This patch first calls wi_fold_in_parts, which checks whether one of the subranges is small, and if so, further splits that subrange into constants.

Currently, if a subrange has 4 or fewer values, we fold each of those values individually.  4 was chosen as a reasonable tradeoff between excess work and benefit.  We could consider increasing that number for -O3 perhaps.

This allows us, under some circumstances, to generate much more precise values.  For instance:

    tmp_5 = x_4(D) != 0;
    _1 = (int) tmp_5;
    _2 = 255 >> _1;
    _3 = (unsigned char) _2;
    _6 = _3 * 2;
    return _6;

_1 : int [0, 1]
_2 : int [127, 255]
_3 : unsigned char [127, +INF]

We currently produce a range of [127, 255] for _2.  The RHS of the shift (_1) only has 2 values, so by further splitting that subrange into [0, 0] and [1, 1], we can produce a more precise range and do better:

    y_5 = x_4(D) != 0;
    _1 = (int) y_5;
    _2 = 255 >> _1;
    _3 = (unsigned char) _2;
    return 254;

_1 : int [0, 1]
_2 : int [127, 127][255, 255]
_3 : unsigned char [127, 127][+INF, +INF]

Now _2 is exactly 127 or 255, and we can fold this function down to just the return value.

This will work for many different opcodes without having to rework range-ops for them.

Bootstraps on x86_64-pc-linux-gnu with no regressions and fixes all 3 testcases in the PR.  Pushed.

Andrew
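
For anyone curious about the mechanics, below is a minimal standalone sketch of the idea (this is not the GCC implementation; the range representation, the helper names, and the shift-only fold are invented for illustration): when one operand's subrange covers at most 4 values, fold each value as a singleton and union the per-value results instead of computing a single hull.

// Standalone sketch (not GCC code) of folding a small subrange "in parts",
// using the 255 >> _1 example from above.  All names here are hypothetical.
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

// A multi-range as an ordered list of disjoint [lo, hi] pairs.
using vrange = std::vector<std::pair<long, long>>;

// Naive whole-subrange fold for x >> y: with a constant LHS of lhs and the
// RHS in [rlo, rhi], a single fold can only return the hull
// [lhs >> rhi, lhs >> rlo].
static vrange
fold_shift_hull (long lhs, long rlo, long rhi)
{
  return { { lhs >> rhi, lhs >> rlo } };
}

// "In parts": if the RHS subrange covers only a handful of values
// (<= 4, mirroring the patch's threshold), fold each constant
// individually and union the results, which can preserve gaps.
static vrange
fold_shift_in_parts (long lhs, long rlo, long rhi)
{
  if (rhi - rlo + 1 > 4)
    return fold_shift_hull (lhs, rlo, rhi);
  vrange r;
  for (long v = rlo; v <= rhi; v++)
    r.push_back ({ lhs >> v, lhs >> v });
  std::sort (r.begin (), r.end ());
  return r;
}

static void
dump (const char *tag, const vrange &r)
{
  printf ("%s :", tag);
  for (auto &p : r)
    printf (" [%ld, %ld]", p.first, p.second);
  printf ("\n");
}

int
main ()
{
  // _1 has range [0, 1]; fold _2 = 255 >> _1 both ways.
  dump ("hull    ", fold_shift_hull (255, 0, 1));     // [127, 255]
  dump ("in parts", fold_shift_in_parts (255, 0, 1)); // [127, 127] [255, 255]
}

Running it prints the same contrast as the dumps above: the single-hull fold gives [127, 255], while folding the 2-value subrange in parts keeps the gap and gives [127, 127][255, 255].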