On 09/04/2018 08:49 AM, Richard Biener wrote:
> On Tue, Sep 4, 2018 at 2:41 PM Aldy Hernandez wrote:
>>
>> On 09/04/2018 07:58 AM, Richard Biener wrote:
>>> On Mon, Sep 3, 2018 at 1:32 PM Aldy Hernandez wrote:
>>>>
>>>> On 08/28/2018 05:27 AM, Richard Biener wrote:
>>>>> On Mon, Aug 27, 2018 at 2:24 PM Aldy Hernandez wrote:
>>>>>>
>>>>>> Howdy!
>>>>>>
>>>>>> Phew, I think this is the last abstraction.  This handles the unary
>>>>>> CONVERT_EXPR_P code.
>>>>>>
>>>>>> It's the usual story -- normalize the symbolics to [-MIN,+MAX] and
>>>>>> handle everything generically.
>>>>>>
>>>>>> Normalizing the symbolics brought about some nice surprises.  We now
>>>>>> handle a few things we were punting on before, which I've documented
>>>>>> in the patch, but can remove if so desired.  I wrote them mainly for
>>>>>> myself:
>>>>>>
>>>>>>   /* NOTES: Previously we were returning VARYING for all symbolics,
>>>>>>             but we can do better by treating them as [-MIN, +MAX].
>>>>>>             For example, converting [SYM, SYM] from INT to LONG
>>>>>>             UNSIGNED, we can return:
>>>>>>             ~[0x80000000, 0xffffffff7fffffff].
>>>>>>
>>>>>>             We were also failing to convert ~[0,0] from char* to
>>>>>>             unsigned, instead choosing to return VR_VARYING.
>>>>>>             Now we return ~[0,0].  */
>>>>>>
>>>>>> Tested on x86-64 by the usual bootstrap and regtest gymnastics,
>>>>>> including --enable-languages=all, because my past sins are still
>>>>>> haunting me.
>>>>>>
>>>>>> OK?
>>>>>
>>>>> The new wide_int_range_convert_tree looks odd given it returns
>>>>> tree's.  I'd have expected an API that does the conversion resulting
>>>>> in a wide_int range and the VRP code adapting to that by converting
>>>>> the result to trees.
>>>>
>>>> Rewritten as suggested.
>>>>
>>>> A few notes.
>>>>
>>>> First, I am not using widest_int as was done previously.  We were
>>>> passing 0/false to the overflow args in force_fit_type, so the call
>>>> was just degrading into wide_int_to_tree() anyhow.  Am I missing some
>>>> subtlety here, or must we use widest_int and then force_fit_type in
>>>> the caller?
>>>
>>> The issue is that the wide-ints get "converted", so it's indeed subtle.
>>
>> This seems like it should be documented -- at the very least in
>> wide_int_range_convert.
>
> I don't think you need it there.
>
>> What do you mean by "converted"?  I'd like to understand this better
>> before I write out the comment :).  Do we lose precision by keeping it
>> in a wide_int as opposed to a widest_int?
>
> As you're using wide_int::from that "changes" precision now.

Ah, so it's wide_int_to_tree that wants to see the widest precision.
I see.
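To make sure I have the "changes precision" part straight, here it is
spelled out with plain C integers rather than the wide_int API (purely
an illustration; int8_t/uint16_t just stand in for an arbitrary
inner/outer type pair):

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* Inner type: 8-bit signed, lower bound -1.  Outer type: 16-bit
       unsigned.  Extending with the *inner* signedness sign-extends,
       which is what I believe wide_int::from (vr0_min, outer_prec,
       inner_sign) is doing, so the converted bound becomes 0xffff.  */
    int8_t inner_min = -1;
    uint16_t outer_min = (uint16_t) inner_min;
    printf ("0x%x\n", (unsigned) outer_min);  /* prints 0xffff */
    return 0;
  }

This is also where the [SYM, SYM] example in the NOTES above comes
from: [INT_MIN, INT_MAX] sign-extended to 64 bits gives a minimum of
0xffffffff80000000 and a maximum of 0x7fffffff, and since the new
minimum is larger than the new maximum, canonicalizing the result
yields ~[0x80000000, 0xffffffff7fffffff].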
>
>> Also, can we have the caller to wide_int_range_convert() use
>> wide_int_to_tree directly instead of passing through force_fit_type?
>
> I think so.
>
>> It seems force_fit_type with overflowable=0 and overflowed=false is
>> just a call to wide_int_to_tree anyhow.
>>
>>>
>>> +      || wi::rshift (wi::sub (vr0_max, vr0_min),
>>> +                     wi::uhwi (outer_prec, inner_prec),
>>> +                     inner_sign) == 0)
>>>
>>> this was previously allowed only for VR_RANGEs - now it's also for
>>> VR_ANTI_RANGE.  That looks incorrect.  Testcase welcome ;)
>>
>> See the change to extract_range_into_wide_ints.  VR_ANTI_RANGEs of
>> constants will remain as is, whereas VR_ANTI_RANGEs of symbols will
>> degrade into [-MIN,+MAX] and be handled generically.
>>
>> Also, see this comment:
>>
>> +  /* We normalize everything to a VR_RANGE, but for constant
>> +     anti-ranges we must handle them by leaving the final result
>> +     as an anti-range.  This allows us to convert things like
>> +     ~[0,5] seamlessly.  */
>
> Yes, but for truncation of ~[0, 5] from say int to short it's not
> correct to make the result ~[0, 5], is it?  At least the original code
> dropped that to VARYING.  For the same reason truncating [3, 765] from
> short to unsigned char isn't [3, 253].  But maybe I'm missing something.

Correct, but in that case we will realize it in wide_int_range_convert
and refuse to do the conversion (see the P.S. below for the arithmetic
spelled out):

  /* If the conversion is not truncating we can convert the min and max
     values and canonicalize the resulting range.  Otherwise we can do
     the conversion if the size of the range is less than what the
     precision of the target type can represent.  */
  if (outer_prec >= inner_prec
      || wi::rshift (wi::sub (vr0_max, vr0_min),
                     wi::uhwi (outer_prec, inner_prec),
                     inner_sign) == 0)

>
>>>
>>> +  return (!wi::eq_p (min, wi::min_value (outer_prec, outer_sign))
>>> +          || !wi::eq_p (max, wi::max_value (outer_prec, outer_sign)));
>>>
>>> so you return false for VARYING?  Why?  I'd simply return true and
>>> return VARYING in the new type in the path you return false.
>>
>> Since symbols and VR_VARYING and other things get turned into
>> [-MIN,+MAX] by extract_range_into_wide_ints, it's a lot easier to
>> handle things generically and return varying when the range spans the
>> entire domain.  It also keeps in line with the rest of the
>> wide_int_range_* functions, which return false when we know the result
>> is going to be VR_VARYING.
>
> Ah, OK, didn't know they did that.  Not sure if that's convenient
> though, given VR_VARYING has no representation in the wide-int-range
> "range", so callers chaining ops would need to do sth like
>
>   if (!wide_int_range_* (...., &out_min, &out_max))
>     {
>       out_min = wide_int::min (prec);
>       out_max = wide_int::max (prec);
>     }
>   if (!wide_int_range_* (out_min, out_max, ....))
>     ...
>
> to not lose cases where VARYING inputs produce non-VARYING results?

Well, in the ssa-range branch we treat [-MIN,+MAX] as varying.  We call
it range_for_type().

Also, the above if (!wide_int_range_*) chaining is not necessary in the
ssa-range work.  See op_wi() in range-op.c.  We just return false and
the caller loop takes care of things.  (Andrew's forte is not readable
function names ;-).  When I have my way with range-op.c, this
op_RANDOM_TWO_LETTERS nonsense will go away.)

OK for trunk?
Aldy
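P.S. The truncation check quoted above, spelled out with plain 64-bit
arithmetic rather than the wide_int API (an untested illustration only;
range_fits_p is a made-up name, not anything in the patch):

  #include <stdint.h>
  #include <stdio.h>

  /* Roughly the wi::rshift (wi::sub (vr0_max, vr0_min), ...) == 0 test
     from wide_int_range_convert: the range survives truncation to
     OUTER_PREC bits only if its width is smaller than 2**OUTER_PREC.
     Assumes outer_prec < 64.  */
  static int
  range_fits_p (uint64_t vr0_min, uint64_t vr0_max, unsigned outer_prec)
  {
    return ((vr0_max - vr0_min) >> outer_prec) == 0;
  }

  int
  main (void)
  {
    /* Truncating [3, 765] from short to unsigned char: 762 >> 8 == 2,
       so the range wraps more than once and the result must be
       VARYING, not [3, 253].  */
    printf ("%d\n", range_fits_p (3, 765, 8));  /* 0 */
    /* Truncating [3, 200]: 197 >> 8 == 0, so the bounds can simply be
       rewrapped into the 8-bit type.  */
    printf ("%d\n", range_fits_p (3, 200, 8));  /* 1 */
    return 0;
  }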