From: Kenneth Zadeck
Date: Sun, 25 Aug 2013 15:14:00 -0000
To: rguenther@suse.de, gcc-patches, Mike Stump, r.sandiford@uk.ibm.com, rdsandiford@googlemail.com
Subject: Re: wide-int branch now up for public comment and review
Message-ID: <521A04C5.5050501@naturalbridge.com>
References: <520A9DCC.6080609@naturalbridge.com> <87ppt4e9hg.fsf@talisman.default> <87ppt2cjt1.fsf@talisman.default>
In-Reply-To: <87ppt2cjt1.fsf@talisman.default>
X-SW-Source: 2013-08/txt/msg01438.txt.bz2

On 08/25/2013 03:26 AM, Richard Sandiford wrote:
> Richard Sandiford writes:
>> The main thing that's changed since the early patches is that we now
>> have a mixture of wide-int types.  This seems to have led to a lot of
>> boiler-plate forwarding functions (or at least it felt like that while
>> moving them all out the class).  And that in turn seems to be because
>> you're trying to keep everything as member functions.  E.g. a lot of the
>> forwarders are from a member function to a static function.
>>
>> Wouldn't it be better to have the actual classes be light-weight,
>> with little more than accessors, and do the actual work with non-member
>> template functions?  There seems to be 3 grades of wide-int:
>>
>> (1) read-only, constant precision (from int, etc.)
>> (2) read-write, constant precision (fixed_wide_int)
>> (3) read-write, variable precision (wide_int proper)
>>
>> but we should be able to hide that behind templates, with compiler errors
>> if you try to write to (1), etc.
>>
>> To take one example, the reason we can't simply use things like
>> std::min on wide ints is because signedness needs to be specified
>> explicitly, but there's a good reason why the standard defined
>> std::min (x, y) rather than x.min (y).  It seems like we ought
>> to have smin and umin functions alongside std::min, rather than
>> make them member functions.  We could put them in a separate namespace
>> if necessary.
> FWIW, here's a patch that shows the beginnings of what I mean.
> The changes are:
>
> (1) Using a new templated class, wide_int_accessors, to access the
> integer object.  For now this just contains a single function,
> to_shwi, but I expect more to follow...
>
> (2) Adding a new namespace, wi, for the operators.  So far this
> just contains the previously-static comparison functions
> and whatever else was needed to avoid cross-dependencies
> between wi and wide_int_ro (except for the debug routines).
>
> (3) Removing the comparison member functions and using the static
> ones everywhere.
>
> The idea behind using a namespace rather than static functions
> is that it makes it easier to separate the core, tree and rtx bits.
> IMO wide-int.h shouldn't know about trees and rtxes, and all routines
> related to them should be in tree.h and rtl.h instead.  But using
> static functions means that you have to declare everything in one place.
> Also, it feels odd for wide_int to be both an object and a home
> of static functions that don't always operate on wide_ints, e.g. when
> comparing a CONST_INT against 16.
>
> The eventual aim is to use wide_int_accessors (via the wi interface
> routines) to abstract away everything about the underlying object.
> Then wide_int_ro should not need to have any fields.  wide_int can
> have the fields that wide_int_ro has now, and fixed_wide_int will
> just have an array and length.  The array can also be the right
> size for the int template parameter, rather than always being
> WIDE_INT_MAX_ELTS.
>
> The aim is also to use wide_int_accessors to handle the flexible
> precision case, so that it only kicks in when primitive types are
> used as operator arguments.
>
> I used a wide_int_accessors class rather than just using templated
> wi functions because I think it's dangerous to have a default
> implementation of things like to_shwi1 and to_shwi2.  The default
> implementation we have now is only suitable for primitive types
> (because of the sizeof), but could successfully match any type
> that provides enough arithmetic to satisfy signedp and top_bit_set.
> I admit that's only a theoretical problem though.
>
> I realise I'm probably not being helpful here.  In fact I'm probably
> being the cook too many and should really just leave this up to you
> two and Richard.  But I realised while reading through wide-int.h
> the other day that I have strong opinions about how this should
> be done. :-(
>
> Tested on x86_64-linux-gnu FWIW.  I expect this to remain local though.
>
> Thanks,
> Richard

This is really mostly style. I think there are a lot of places where the
static functions are better than the OO cases. However, if I have a
wide-int in my hand already, I like just making the OO call. It is a
shame that there is no way to say I want both while only specifying one.
The problem is that you can take this too far: with nothing but static
functions, it just begins to look like badly written C code. However, I
do agree that most of the uses of the comparison functions could be
static - BUT NOT ALL.

I let Mike "design" the interface because I am just learning C++, so he
gets to carry this conversation - at least until Richi returns!
> > Index: gcc/ada/gcc-interface/cuintp.c > =================================================================== > --- gcc/ada/gcc-interface/cuintp.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/ada/gcc-interface/cuintp.c 2013-08-25 07:42:29.133616663 +0100 > @@ -177,7 +177,7 @@ UI_From_gnu (tree Input) > in a signed 64-bit integer. */ > if (tree_fits_shwi_p (Input)) > return UI_From_Int (tree_to_shwi (Input)); > - else if (wide_int::lts_p (Input, 0) && TYPE_UNSIGNED (gnu_type)) > + else if (wi::lts_p (Input, 0) && TYPE_UNSIGNED (gnu_type)) > return No_Uint; > #endif > > Index: gcc/alias.c > =================================================================== > --- gcc/alias.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/alias.c 2013-08-25 07:42:29.134616672 +0100 > @@ -340,8 +340,8 @@ ao_ref_from_mem (ao_ref *ref, const_rtx > || (DECL_P (ref->base) > && (DECL_SIZE (ref->base) == NULL_TREE > || TREE_CODE (DECL_SIZE (ref->base)) != INTEGER_CST > - || wide_int::ltu_p (DECL_SIZE (ref->base), > - ref->offset + ref->size))))) > + || wi::ltu_p (DECL_SIZE (ref->base), > + ref->offset + ref->size))))) > return false; > > return true; > Index: gcc/c-family/c-common.c > =================================================================== > --- gcc/c-family/c-common.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/c-family/c-common.c 2013-08-25 07:42:29.138616711 +0100 > @@ -7925,8 +7925,8 @@ handle_alloc_size_attribute (tree *node, > wide_int p; > > if (TREE_CODE (position) != INTEGER_CST > - || (p = wide_int (position)).ltu_p (1) > - || p.gtu_p (arg_count) ) > + || wi::ltu_p (p = wide_int (position), 1) > + || wi::gtu_p (p, arg_count)) > { > warning (OPT_Wattributes, > "alloc_size parameter outside range"); > Index: gcc/c-family/c-lex.c > =================================================================== > --- gcc/c-family/c-lex.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/c-family/c-lex.c 2013-08-25 07:42:29.139616721 +0100 > @@ -545,7 +545,7 @@ narrowest_unsigned_type 
(const wide_int > continue; > upper = TYPE_MAX_VALUE (integer_types[itk]); > > - if (wide_int::geu_p (upper, val)) > + if (wi::geu_p (upper, val)) > return (enum integer_type_kind) itk; > } > > @@ -573,7 +573,7 @@ narrowest_signed_type (const wide_int &v > continue; > upper = TYPE_MAX_VALUE (integer_types[itk]); > > - if (wide_int::geu_p (upper, val)) > + if (wi::geu_p (upper, val)) > return (enum integer_type_kind) itk; > } > > Index: gcc/c-family/c-pretty-print.c > =================================================================== > --- gcc/c-family/c-pretty-print.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/c-family/c-pretty-print.c 2013-08-25 07:42:29.140616730 +0100 > @@ -919,7 +919,7 @@ pp_c_integer_constant (c_pretty_printer > { > wide_int wi = i; > > - if (wi.lt_p (i, 0, TYPE_SIGN (TREE_TYPE (i)))) > + if (wi::lt_p (i, 0, TYPE_SIGN (TREE_TYPE (i)))) > { > pp_minus (pp); > wi = -wi; > Index: gcc/cgraph.c > =================================================================== > --- gcc/cgraph.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/cgraph.c 2013-08-25 07:42:29.141616740 +0100 > @@ -624,7 +624,7 @@ cgraph_add_thunk (struct cgraph_node *de > > node = cgraph_create_node (alias); > gcc_checking_assert (!virtual_offset > - || wide_int::eq_p (virtual_offset, virtual_value)); > + || wi::eq_p (virtual_offset, virtual_value)); > node->thunk.fixed_offset = fixed_offset; > node->thunk.this_adjusting = this_adjusting; > node->thunk.virtual_value = virtual_value; > Index: gcc/config/bfin/bfin.c > =================================================================== > --- gcc/config/bfin/bfin.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/config/bfin/bfin.c 2013-08-25 07:42:29.168617001 +0100 > @@ -3285,7 +3285,7 @@ bfin_local_alignment (tree type, unsigne > memcpy can use 32 bit loads/stores. 
*/ > if (TYPE_SIZE (type) > && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST > - && (!wide_int::gtu_p (TYPE_SIZE (type), 8)) > + && !wi::gtu_p (TYPE_SIZE (type), 8) > && align < 32) > return 32; > return align; > Index: gcc/config/i386/i386.c > =================================================================== > --- gcc/config/i386/i386.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/config/i386/i386.c 2013-08-25 07:42:29.175617069 +0100 > @@ -25695,7 +25695,7 @@ ix86_data_alignment (tree type, int alig > && AGGREGATE_TYPE_P (type) > && TYPE_SIZE (type) > && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST > - && (wide_int::geu_p (TYPE_SIZE (type), max_align)) > + && wi::geu_p (TYPE_SIZE (type), max_align) > && align < max_align) > align = max_align; > > @@ -25706,7 +25706,7 @@ ix86_data_alignment (tree type, int alig > if ((opt ? AGGREGATE_TYPE_P (type) : TREE_CODE (type) == ARRAY_TYPE) > && TYPE_SIZE (type) > && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST > - && (wide_int::geu_p (TYPE_SIZE (type), 128)) > + && wi::geu_p (TYPE_SIZE (type), 128) > && align < 128) > return 128; > } > @@ -25821,7 +25821,7 @@ ix86_local_alignment (tree exp, enum mac > != TYPE_MAIN_VARIANT (va_list_type_node))) > && TYPE_SIZE (type) > && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST > - && (wide_int::geu_p (TYPE_SIZE (type), 16)) > + && wi::geu_p (TYPE_SIZE (type), 16) > && align < 128) > return 128; > } > Index: gcc/config/rs6000/rs6000-c.c > =================================================================== > --- gcc/config/rs6000/rs6000-c.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/config/rs6000/rs6000-c.c 2013-08-25 07:42:29.188617194 +0100 > @@ -4196,7 +4196,7 @@ altivec_resolve_overloaded_builtin (loca > mode = TYPE_MODE (arg1_type); > if ((mode == V2DFmode || mode == V2DImode) && VECTOR_MEM_VSX_P (mode) > && TREE_CODE (arg2) == INTEGER_CST > - && wide_int::ltu_p (arg2, 2)) > + && wi::ltu_p (arg2, 2)) > { > tree call = NULL_TREE; > > @@ -4281,7 +4281,7 @@ 
altivec_resolve_overloaded_builtin (loca > mode = TYPE_MODE (arg1_type); > if ((mode == V2DFmode || mode == V2DImode) && VECTOR_UNIT_VSX_P (mode) > && tree_fits_uhwi_p (arg2) > - && wide_int::ltu_p (arg2, 2)) > + && wi::ltu_p (arg2, 2)) > { > tree call = NULL_TREE; > > Index: gcc/cp/init.c > =================================================================== > --- gcc/cp/init.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/cp/init.c 2013-08-25 07:42:29.189617204 +0100 > @@ -2381,7 +2381,7 @@ build_new_1 (vec **placemen > gcc_assert (TREE_CODE (size) == INTEGER_CST); > cookie_size = targetm.cxx.get_cookie_size (elt_type); > gcc_assert (TREE_CODE (cookie_size) == INTEGER_CST); > - gcc_checking_assert (addr_wide_int (cookie_size).ltu_p(max_size)); > + gcc_checking_assert (wi::ltu_p (cookie_size, max_size)); > /* Unconditionally subtract the cookie size. This decreases the > maximum object size and is safe even if we choose not to use > a cookie after all. */ > @@ -2389,7 +2389,7 @@ build_new_1 (vec **placemen > bool overflow; > inner_size = addr_wide_int (size) > .mul (inner_nelts_count, SIGNED, &overflow); > - if (overflow || inner_size.gtu_p (max_size)) > + if (overflow || wi::gtu_p (inner_size, max_size)) > { > if (complain & tf_error) > error ("size of array is too large"); > Index: gcc/dwarf2out.c > =================================================================== > --- gcc/dwarf2out.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/dwarf2out.c 2013-08-25 07:42:29.192617233 +0100 > @@ -14783,7 +14783,7 @@ field_byte_offset (const_tree decl) > object_offset_in_bits > = round_up_to_align (object_offset_in_bits, type_align_in_bits); > > - if (object_offset_in_bits.gtu_p (bitpos_int)) > + if (wi::gtu_p (object_offset_in_bits, bitpos_int)) > { > object_offset_in_bits = deepest_bitpos - type_size_in_bits; > > @@ -16218,7 +16218,7 @@ add_bound_info (dw_die_ref subrange_die, > zext_hwi (tree_to_hwi (bound), prec)); > } > else if (prec == HOST_BITS_PER_WIDE_INT > - || 
(cst_fits_uhwi_p (bound) && wide_int (bound).ges_p (0))) > + || (cst_fits_uhwi_p (bound) && wi::ges_p (bound, 0))) > add_AT_unsigned (subrange_die, bound_attr, tree_to_hwi (bound)); > else > add_AT_wide (subrange_die, bound_attr, wide_int (bound)); > Index: gcc/fold-const.c > =================================================================== > --- gcc/fold-const.c 2013-08-25 07:42:28.417609742 +0100 > +++ gcc/fold-const.c 2013-08-25 07:42:29.194617252 +0100 > @@ -510,7 +510,7 @@ negate_expr_p (tree t) > if (TREE_CODE (TREE_OPERAND (t, 1)) == INTEGER_CST) > { > tree op1 = TREE_OPERAND (t, 1); > - if (wide_int::eq_p (op1, TYPE_PRECISION (type) - 1)) > + if (wi::eq_p (op1, TYPE_PRECISION (type) - 1)) > return true; > } > break; > @@ -721,7 +721,7 @@ fold_negate_expr (location_t loc, tree t > if (TREE_CODE (TREE_OPERAND (t, 1)) == INTEGER_CST) > { > tree op1 = TREE_OPERAND (t, 1); > - if (wide_int::eq_p (op1, TYPE_PRECISION (type) - 1)) > + if (wi::eq_p (op1, TYPE_PRECISION (type) - 1)) > { > tree ntype = TYPE_UNSIGNED (type) > ? signed_type_for (type) > @@ -5836,7 +5836,7 @@ extract_muldiv_1 (tree t, tree c, enum t > && (tcode == RSHIFT_EXPR || TYPE_UNSIGNED (TREE_TYPE (op0))) > /* const_binop may not detect overflow correctly, > so check for it explicitly here. */ > - && wide_int::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1) > + && wi::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1) > && 0 != (t1 = fold_convert (ctype, > const_binop (LSHIFT_EXPR, > size_one_node, > @@ -6602,7 +6602,8 @@ fold_single_bit_test (location_t loc, en > not overflow, adjust BITNUM and INNER. 
*/ > if (TREE_CODE (inner) == RSHIFT_EXPR > && TREE_CODE (TREE_OPERAND (inner, 1)) == INTEGER_CST > - && (wide_int (TREE_OPERAND (inner, 1) + bitnum).ltu_p (TYPE_PRECISION (type)))) > + && wi::ltu_p (TREE_OPERAND (inner, 1) + bitnum, > + TYPE_PRECISION (type))) > { > bitnum += tree_to_hwi (TREE_OPERAND (inner, 1)); > inner = TREE_OPERAND (inner, 0); > @@ -12911,7 +12912,7 @@ fold_binary_loc (location_t loc, > prec = TYPE_PRECISION (itype); > > /* Check for a valid shift count. */ > - if (wide_int::ltu_p (arg001, prec)) > + if (wi::ltu_p (arg001, prec)) > { > tree arg01 = TREE_OPERAND (arg0, 1); > tree arg000 = TREE_OPERAND (TREE_OPERAND (arg0, 0), 0); > @@ -13036,7 +13037,7 @@ fold_binary_loc (location_t loc, > tree arg00 = TREE_OPERAND (arg0, 0); > tree arg01 = TREE_OPERAND (arg0, 1); > tree itype = TREE_TYPE (arg00); > - if (wide_int::eq_p (arg01, TYPE_PRECISION (itype) - 1)) > + if (wi::eq_p (arg01, TYPE_PRECISION (itype) - 1)) > { > if (TYPE_UNSIGNED (itype)) > { > @@ -14341,7 +14342,7 @@ fold_ternary_loc (location_t loc, enum t > /* Make sure that the perm value is in an acceptable > range. */ > t = val; > - if (t.gtu_p (nelts_cnt)) > + if (wi::gtu_p (t, nelts_cnt)) > { > need_mask_canon = true; > sel[i] = t.to_uhwi () & (nelts_cnt - 1); > @@ -15163,7 +15164,7 @@ multiple_of_p (tree type, const_tree top > op1 = TREE_OPERAND (top, 1); > /* const_binop may not detect overflow correctly, > so check for it explicitly here. 
*/ > - if (wide_int::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1) > + if (wi::gtu_p (TYPE_PRECISION (TREE_TYPE (size_one_node)), op1) > && 0 != (t1 = fold_convert (type, > const_binop (LSHIFT_EXPR, > size_one_node, > Index: gcc/fortran/trans-intrinsic.c > =================================================================== > --- gcc/fortran/trans-intrinsic.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/fortran/trans-intrinsic.c 2013-08-25 07:42:29.195617262 +0100 > @@ -986,8 +986,9 @@ trans_this_image (gfc_se * se, gfc_expr > { > wide_int wdim_arg = dim_arg; > > - if (wdim_arg.ltu_p (1) > - || wdim_arg.gtu_p (GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc)))) > + if (wi::ltu_p (wdim_arg, 1) > + || wi::gtu_p (wdim_arg, > + GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc)))) > gfc_error ("'dim' argument of %s intrinsic at %L is not a valid " > "dimension index", expr->value.function.isym->name, > &expr->where); > @@ -1346,8 +1347,8 @@ gfc_conv_intrinsic_bound (gfc_se * se, g > { > wide_int wbound = bound; > if (((!as || as->type != AS_ASSUMED_RANK) > - && wbound.geu_p (GFC_TYPE_ARRAY_RANK (TREE_TYPE (desc)))) > - || wbound.gtu_p (GFC_MAX_DIMENSIONS)) > + && wi::geu_p (wbound, GFC_TYPE_ARRAY_RANK (TREE_TYPE (desc)))) > + || wi::gtu_p (wbound, GFC_MAX_DIMENSIONS)) > gfc_error ("'dim' argument of %s intrinsic at %L is not a valid " > "dimension index", upper ? 
"UBOUND" : "LBOUND", > &expr->where); > @@ -1543,7 +1544,8 @@ conv_intrinsic_cobound (gfc_se * se, gfc > if (INTEGER_CST_P (bound)) > { > wide_int wbound = bound; > - if (wbound.ltu_p (1) || wbound.gtu_p (GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc)))) > + if (wi::ltu_p (wbound, 1) > + || wi::gtu_p (wbound, GFC_TYPE_ARRAY_CORANK (TREE_TYPE (desc)))) > gfc_error ("'dim' argument of %s intrinsic at %L is not a valid " > "dimension index", expr->value.function.isym->name, > &expr->where); > Index: gcc/gimple-fold.c > =================================================================== > --- gcc/gimple-fold.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/gimple-fold.c 2013-08-25 07:42:29.195617262 +0100 > @@ -2799,7 +2799,7 @@ fold_array_ctor_reference (tree type, tr > be larger than size of array element. */ > if (!TYPE_SIZE_UNIT (type) > || TREE_CODE (TYPE_SIZE_UNIT (type)) != INTEGER_CST > - || elt_size.lts_p (addr_wide_int (TYPE_SIZE_UNIT (type)))) > + || wi::lts_p (elt_size, TYPE_SIZE_UNIT (type))) > return NULL_TREE; > > /* Compute the array index we look for. */ > @@ -2902,7 +2902,7 @@ fold_nonarray_ctor_reference (tree type, > [BITOFFSET, BITOFFSET_END)? */ > if (access_end.cmps (bitoffset) > 0 > && (field_size == NULL_TREE > - || addr_wide_int (offset).lts_p (bitoffset_end))) > + || wi::lts_p (offset, bitoffset_end))) > { > addr_wide_int inner_offset = addr_wide_int (offset) - bitoffset; > /* We do have overlap. Now see if field is large enough to > @@ -2910,7 +2910,7 @@ fold_nonarray_ctor_reference (tree type, > fields. 
*/ > if (access_end.cmps (bitoffset_end) > 0) > return NULL_TREE; > - if (addr_wide_int (offset).lts_p (bitoffset)) > + if (wi::lts_p (offset, bitoffset)) > return NULL_TREE; > return fold_ctor_reference (type, cval, > inner_offset.to_uhwi (), size, > Index: gcc/gimple-ssa-strength-reduction.c > =================================================================== > --- gcc/gimple-ssa-strength-reduction.c 2013-08-25 07:42:28.418609752 +0100 > +++ gcc/gimple-ssa-strength-reduction.c 2013-08-25 07:42:29.196617272 +0100 > @@ -2355,8 +2355,8 @@ record_increment (slsr_cand_t c, const m > if (c->kind == CAND_ADD > && !is_phi_adjust > && c->index == increment > - && (increment.gts_p (1) > - || increment.lts_p (-1)) > + && (wi::gts_p (increment, 1) > + || wi::lts_p (increment, -1)) > && (gimple_assign_rhs_code (c->cand_stmt) == PLUS_EXPR > || gimple_assign_rhs_code (c->cand_stmt) == POINTER_PLUS_EXPR)) > { > Index: gcc/loop-doloop.c > =================================================================== > --- gcc/loop-doloop.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/loop-doloop.c 2013-08-25 07:42:29.196617272 +0100 > @@ -461,9 +461,10 @@ doloop_modify (struct loop *loop, struct > /* Determine if the iteration counter will be non-negative. > Note that the maximum value loaded is iterations_max - 1. */ > if (max_loop_iterations (loop, &iterations) > - && (iterations.leu_p (wide_int::set_bit_in_zero > - (GET_MODE_PRECISION (mode) - 1, > - GET_MODE_PRECISION (mode))))) > + && wi::leu_p (iterations, > + wide_int::set_bit_in_zero > + (GET_MODE_PRECISION (mode) - 1, > + GET_MODE_PRECISION (mode)))) > nonneg = 1; > break; > > @@ -697,7 +698,7 @@ doloop_optimize (struct loop *loop) > computed, we must be sure that the number of iterations fits into > the new mode. 
*/ > && (word_mode_size >= GET_MODE_PRECISION (mode) > - || iter.leu_p (word_mode_max))) > + || wi::leu_p (iter, word_mode_max))) > { > if (word_mode_size > GET_MODE_PRECISION (mode)) > { > Index: gcc/loop-unroll.c > =================================================================== > --- gcc/loop-unroll.c 2013-08-25 07:42:28.420609771 +0100 > +++ gcc/loop-unroll.c 2013-08-25 07:42:29.196617272 +0100 > @@ -693,7 +693,7 @@ decide_unroll_constant_iterations (struc > if (desc->niter < 2 * nunroll > || ((estimated_loop_iterations (loop, &iterations) > || max_loop_iterations (loop, &iterations)) > - && iterations.ltu_p (2 * nunroll))) > + && wi::ltu_p (iterations, 2 * nunroll))) > { > if (dump_file) > fprintf (dump_file, ";; Not unrolling loop, doesn't roll\n"); > @@ -816,7 +816,7 @@ unroll_loop_constant_iterations (struct > desc->niter -= exit_mod; > loop->nb_iterations_upper_bound -= exit_mod; > if (loop->any_estimate > - && wide_int::leu_p (exit_mod, loop->nb_iterations_estimate)) > + && wi::leu_p (exit_mod, loop->nb_iterations_estimate)) > loop->nb_iterations_estimate -= exit_mod; > else > loop->any_estimate = false; > @@ -859,7 +859,7 @@ unroll_loop_constant_iterations (struct > desc->niter -= exit_mod + 1; > loop->nb_iterations_upper_bound -= exit_mod + 1; > if (loop->any_estimate > - && wide_int::leu_p (exit_mod + 1, loop->nb_iterations_estimate)) > + && wi::leu_p (exit_mod + 1, loop->nb_iterations_estimate)) > loop->nb_iterations_estimate -= exit_mod + 1; > else > loop->any_estimate = false; > @@ -992,7 +992,7 @@ decide_unroll_runtime_iterations (struct > /* Check whether the loop rolls. 
*/ > if ((estimated_loop_iterations (loop, &iterations) > || max_loop_iterations (loop, &iterations)) > - && iterations.ltu_p (2 * nunroll)) > + && wi::ltu_p (iterations, 2 * nunroll)) > { > if (dump_file) > fprintf (dump_file, ";; Not unrolling loop, doesn't roll\n"); > @@ -1379,7 +1379,7 @@ decide_peel_simple (struct loop *loop, i > if (estimated_loop_iterations (loop, &iterations)) > { > /* TODO: unsigned/signed confusion */ > - if (wide_int::leu_p (npeel, iterations)) > + if (wi::leu_p (npeel, iterations)) > { > if (dump_file) > { > @@ -1396,7 +1396,7 @@ decide_peel_simple (struct loop *loop, i > /* If we have small enough bound on iterations, we can still peel (completely > unroll). */ > else if (max_loop_iterations (loop, &iterations) > - && iterations.ltu_p (npeel)) > + && wi::ltu_p (iterations, npeel)) > npeel = iterations.to_shwi () + 1; > else > { > @@ -1547,7 +1547,7 @@ decide_unroll_stupid (struct loop *loop, > /* Check whether the loop rolls. */ > if ((estimated_loop_iterations (loop, &iterations) > || max_loop_iterations (loop, &iterations)) > - && iterations.ltu_p (2 * nunroll)) > + && wi::ltu_p (iterations, 2 * nunroll)) > { > if (dump_file) > fprintf (dump_file, ";; Not unrolling loop, doesn't roll\n"); > Index: gcc/lto/lto.c > =================================================================== > --- gcc/lto/lto.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/lto/lto.c 2013-08-25 07:42:29.206617368 +0100 > @@ -1778,7 +1778,7 @@ #define compare_values(X) \ > > if (CODE_CONTAINS_STRUCT (code, TS_INT_CST)) > { > - if (!wide_int::eq_p (t1, t2)) > + if (!wi::eq_p (t1, t2)) > return false; > } > > Index: gcc/rtl.h > =================================================================== > --- gcc/rtl.h 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/rtl.h 2013-08-25 07:42:29.197617281 +0100 > @@ -1402,10 +1402,10 @@ get_mode (const rtx_mode_t p) > > /* Specialization of to_shwi1 function in wide-int.h for rtl. 
This > cannot be in wide-int.h because of circular includes. */ > -template<> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, unsigned int *p, const rtx_mode_t& rp) > +inline const HOST_WIDE_INT * > +wide_int_accessors ::to_shwi (HOST_WIDE_INT *, unsigned int *l, > + unsigned int *p, > + const rtx_mode_t &rp) > { > const rtx rcst = get_rtx (rp); > enum machine_mode mode = get_mode (rp); > @@ -1414,34 +1414,6 @@ wide_int_ro::to_shwi1 (HOST_WIDE_INT *s > > switch (GET_CODE (rcst)) > { > - case CONST_INT: > - *l = 1; > - return &INTVAL (rcst); > - > - case CONST_WIDE_INT: > - *l = CONST_WIDE_INT_NUNITS (rcst); > - return &CONST_WIDE_INT_ELT (rcst, 0); > - > - case CONST_DOUBLE: > - *l = 2; > - return &CONST_DOUBLE_LOW (rcst); > - > - default: > - gcc_unreachable (); > - } > -} > - > -/* Specialization of to_shwi2 function in wide-int.h for rtl. This > - cannot be in wide-int.h because of circular includes. */ > -template<> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, const rtx_mode_t& rp) > -{ > - const rtx rcst = get_rtx (rp); > - > - switch (GET_CODE (rcst)) > - { > case CONST_INT: > *l = 1; > return &INTVAL (rcst); > Index: gcc/simplify-rtx.c > =================================================================== > --- gcc/simplify-rtx.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/simplify-rtx.c 2013-08-25 07:42:29.198617291 +0100 > @@ -4649,8 +4649,8 @@ simplify_const_relational_operation (enu > return comparison_result (code, CMP_EQ); > else > { > - int cr = wo0.lts_p (ptrueop1) ? CMP_LT : CMP_GT; > - cr |= wo0.ltu_p (ptrueop1) ? CMP_LTU : CMP_GTU; > + int cr = wi::lts_p (wo0, ptrueop1) ? CMP_LT : CMP_GT; > + cr |= wi::ltu_p (wo0, ptrueop1) ? 
CMP_LTU : CMP_GTU; > return comparison_result (code, cr); > } > } > Index: gcc/tree-affine.c > =================================================================== > --- gcc/tree-affine.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/tree-affine.c 2013-08-25 07:42:29.198617291 +0100 > @@ -911,7 +911,7 @@ aff_comb_cannot_overlap_p (aff_tree *dif > else > { > /* We succeed if the second object starts after the first one ends. */ > - return size1.les_p (d); > + return wi::les_p (size1, d); > } > } > > Index: gcc/tree-chrec.c > =================================================================== > --- gcc/tree-chrec.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/tree-chrec.c 2013-08-25 07:42:29.198617291 +0100 > @@ -475,7 +475,7 @@ tree_fold_binomial (tree type, tree n, u > num = n; > > /* Check that k <= n. */ > - if (num.ltu_p (k)) > + if (wi::ltu_p (num, k)) > return NULL_TREE; > > /* Denominator = 2. */ > Index: gcc/tree-predcom.c > =================================================================== > --- gcc/tree-predcom.c 2013-08-25 07:42:28.421609781 +0100 > +++ gcc/tree-predcom.c 2013-08-25 07:42:29.199617301 +0100 > @@ -921,9 +921,9 @@ add_ref_to_chain (chain_p chain, dref re > dref root = get_chain_root (chain); > max_wide_int dist; > > - gcc_assert (root->offset.les_p (ref->offset)); > + gcc_assert (wi::les_p (root->offset, ref->offset)); > dist = ref->offset - root->offset; > - if (wide_int::leu_p (MAX_DISTANCE, dist)) > + if (wi::leu_p (MAX_DISTANCE, dist)) > { > free (ref); > return; > @@ -1194,7 +1194,7 @@ determine_roots_comp (struct loop *loop, > FOR_EACH_VEC_ELT (comp->refs, i, a) > { > if (!chain || DR_IS_WRITE (a->ref) > - || max_wide_int (MAX_DISTANCE).leu_p (a->offset - last_ofs)) > + || wi::leu_p (MAX_DISTANCE, a->offset - last_ofs)) > { > if (nontrivial_chain_p (chain)) > { > Index: gcc/tree-ssa-loop-ivcanon.c > =================================================================== > --- gcc/tree-ssa-loop-ivcanon.c 2013-08-25 07:17:37.505554513 
+0100 > +++ gcc/tree-ssa-loop-ivcanon.c 2013-08-25 07:42:29.199617301 +0100 > @@ -488,7 +488,7 @@ remove_exits_and_undefined_stmts (struct > into unreachable (or trap when debugging experience is supposed > to be good). */ > if (!elt->is_exit > - && elt->bound.ltu_p (max_wide_int (npeeled))) > + && wi::ltu_p (elt->bound, npeeled)) > { > gimple_stmt_iterator gsi = gsi_for_stmt (elt->stmt); > gimple stmt = gimple_build_call > @@ -505,7 +505,7 @@ remove_exits_and_undefined_stmts (struct > } > /* If we know the exit will be taken after peeling, update. */ > else if (elt->is_exit > - && elt->bound.leu_p (max_wide_int (npeeled))) > + && wi::leu_p (elt->bound, npeeled)) > { > basic_block bb = gimple_bb (elt->stmt); > edge exit_edge = EDGE_SUCC (bb, 0); > @@ -545,7 +545,7 @@ remove_redundant_iv_tests (struct loop * > /* Exit is pointless if it won't be taken before loop reaches > upper bound. */ > if (elt->is_exit && loop->any_upper_bound > - && loop->nb_iterations_upper_bound.ltu_p (elt->bound)) > + && wi::ltu_p (loop->nb_iterations_upper_bound, elt->bound)) > { > basic_block bb = gimple_bb (elt->stmt); > edge exit_edge = EDGE_SUCC (bb, 0); > @@ -562,7 +562,7 @@ remove_redundant_iv_tests (struct loop * > || !integer_zerop (niter.may_be_zero) > || !niter.niter > || TREE_CODE (niter.niter) != INTEGER_CST > - || !loop->nb_iterations_upper_bound.ltu_p (niter.niter)) > + || !wi::ltu_p (loop->nb_iterations_upper_bound, niter.niter)) > continue; > > if (dump_file && (dump_flags & TDF_DETAILS)) > Index: gcc/tree-ssa-loop-ivopts.c > =================================================================== > --- gcc/tree-ssa-loop-ivopts.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/tree-ssa-loop-ivopts.c 2013-08-25 07:42:29.200617310 +0100 > @@ -4659,7 +4659,7 @@ may_eliminate_iv (struct ivopts_data *da > if (stmt_after_increment (loop, cand, use->stmt)) > max_niter += 1; > period_value = period; > - if (max_niter.gtu_p (period_value)) > + if (wi::gtu_p (max_niter, period_value)) > { > 
/* See if we can take advantage of inferred loop bound information. */ > if (data->loop_single_exit_p) > @@ -4667,7 +4667,7 @@ may_eliminate_iv (struct ivopts_data *da > if (!max_loop_iterations (loop, &max_niter)) > return false; > /* The loop bound is already adjusted by adding 1. */ > - if (max_niter.gtu_p (period_value)) > + if (wi::gtu_p (max_niter, period_value)) > return false; > } > else > Index: gcc/tree-ssa-loop-niter.c > =================================================================== > --- gcc/tree-ssa-loop-niter.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/tree-ssa-loop-niter.c 2013-08-25 07:42:29.200617310 +0100 > @@ -2410,7 +2410,7 @@ derive_constant_upper_bound_ops (tree ty > > /* If the bound does not fit in TYPE, max. value of TYPE could be > attained. */ > - if (max.ltu_p (bnd)) > + if (wi::ltu_p (max, bnd)) > return max; > > return bnd; > @@ -2443,7 +2443,7 @@ derive_constant_upper_bound_ops (tree ty > BND <= MAX (type) - CST. */ > > mmax -= cst; > - if (bnd.ltu_p (mmax)) > + if (wi::ltu_p (bnd, mmax)) > return max; > > return bnd + cst; > @@ -2463,7 +2463,7 @@ derive_constant_upper_bound_ops (tree ty > /* This should only happen if the type is unsigned; however, for > buggy programs that use overflowing signed arithmetics even with > -fno-wrapv, this condition may also be true for signed values. */ > - if (bnd.ltu_p (cst)) > + if (wi::ltu_p (bnd, cst)) > return max; > > if (TYPE_UNSIGNED (type)) > @@ -2519,14 +2519,14 @@ record_niter_bound (struct loop *loop, c > current estimation is smaller.
*/ > if (upper > && (!loop->any_upper_bound > - || i_bound.ltu_p (loop->nb_iterations_upper_bound))) > + || wi::ltu_p (i_bound, loop->nb_iterations_upper_bound))) > { > loop->any_upper_bound = true; > loop->nb_iterations_upper_bound = i_bound; > } > if (realistic > && (!loop->any_estimate > - || i_bound.ltu_p (loop->nb_iterations_estimate))) > + || wi::ltu_p (i_bound, loop->nb_iterations_estimate))) > { > loop->any_estimate = true; > loop->nb_iterations_estimate = i_bound; > @@ -2536,7 +2536,8 @@ record_niter_bound (struct loop *loop, c > number of iterations, use the upper bound instead. */ > if (loop->any_upper_bound > && loop->any_estimate > - && loop->nb_iterations_upper_bound.ltu_p (loop->nb_iterations_estimate)) > + && wi::ltu_p (loop->nb_iterations_upper_bound, > + loop->nb_iterations_estimate)) > loop->nb_iterations_estimate = loop->nb_iterations_upper_bound; > } > > @@ -2642,7 +2643,7 @@ record_estimate (struct loop *loop, tree > i_bound += delta; > > /* If an overflow occurred, ignore the result. 
*/ > - if (i_bound.ltu_p (delta)) > + if (wi::ltu_p (i_bound, delta)) > return; > > if (upper && !is_exit) > @@ -3051,7 +3052,7 @@ bound_index (vec bounds, c > > if (index == bound) > return middle; > - else if (index.ltu_p (bound)) > + else if (wi::ltu_p (index, bound)) > begin = middle + 1; > else > end = middle; > @@ -3093,7 +3094,7 @@ discover_iteration_bound_by_body_walk (s > } > > if (!loop->any_upper_bound > - || bound.ltu_p (loop->nb_iterations_upper_bound)) > + || wi::ltu_p (bound, loop->nb_iterations_upper_bound)) > bounds.safe_push (bound); > } > > @@ -3124,7 +3125,7 @@ discover_iteration_bound_by_body_walk (s > } > > if (!loop->any_upper_bound > - || bound.ltu_p (loop->nb_iterations_upper_bound)) > + || wi::ltu_p (bound, loop->nb_iterations_upper_bound)) > { > ptrdiff_t index = bound_index (bounds, bound); > void **entry = pointer_map_contains (bb_bounds, > @@ -3259,7 +3260,7 @@ maybe_lower_iteration_bound (struct loop > for (elt = loop->bounds; elt; elt = elt->next) > { > if (!elt->is_exit > - && elt->bound.ltu_p (loop->nb_iterations_upper_bound)) > + && wi::ltu_p (elt->bound, loop->nb_iterations_upper_bound)) > { > if (!not_executed_last_iteration) > not_executed_last_iteration = pointer_set_create (); > @@ -3556,7 +3557,7 @@ max_stmt_executions (struct loop *loop, > > *nit += 1; > > - return (*nit).gtu_p (nit_minus_one); > + return wi::gtu_p (*nit, nit_minus_one); > } > > /* Sets NIT to the estimated number of executions of the latch of the > @@ -3575,7 +3576,7 @@ estimated_stmt_executions (struct loop * > > *nit += 1; > > - return (*nit).gtu_p (nit_minus_one); > + return wi::gtu_p (*nit, nit_minus_one); > } > > /* Records estimates on numbers of iterations of loops. 
*/ > Index: gcc/tree-ssa.c > =================================================================== > --- gcc/tree-ssa.c 2013-08-25 07:17:37.505554513 +0100 > +++ gcc/tree-ssa.c 2013-08-25 07:42:29.201617320 +0100 > @@ -1829,8 +1829,8 @@ non_rewritable_mem_ref_base (tree ref) > && useless_type_conversion_p (TREE_TYPE (base), > TREE_TYPE (TREE_TYPE (decl))) > && mem_ref_offset (base).fits_uhwi_p () > - && addr_wide_int (TYPE_SIZE_UNIT (TREE_TYPE (decl))) > - .gtu_p (mem_ref_offset (base)) > + && wi::gtu_p (TYPE_SIZE_UNIT (TREE_TYPE (decl)), > + mem_ref_offset (base)) > && multiple_of_p (sizetype, TREE_OPERAND (base, 1), > TYPE_SIZE_UNIT (TREE_TYPE (base)))) > return NULL_TREE; > Index: gcc/tree-vrp.c > =================================================================== > --- gcc/tree-vrp.c 2013-08-25 07:42:28.470610254 +0100 > +++ gcc/tree-vrp.c 2013-08-25 07:42:29.202617330 +0100 > @@ -2652,13 +2652,13 @@ extract_range_from_binary_expr_1 (value_ > /* Canonicalize the intervals. */ > if (sign == UNSIGNED) > { > - if (size.ltu_p (min0 + max0)) > + if (wi::ltu_p (size, min0 + max0)) > { > min0 -= size; > max0 -= size; > } > > - if (size.ltu_p (min1 + max1)) > + if (wi::ltu_p (size, min1 + max1)) > { > min1 -= size; > max1 -= size; > @@ -2673,7 +2673,7 @@ extract_range_from_binary_expr_1 (value_ > /* Sort the 4 products so that min is in prod0 and max is in > prod3. 
*/ > /* min0min1 > max0max1 */ > - if (prod0.gts_p (prod3)) > + if (wi::gts_p (prod0, prod3)) > { > wide_int tmp = prod3; > prod3 = prod0; > @@ -2681,21 +2681,21 @@ extract_range_from_binary_expr_1 (value_ > } > > /* min0max1 > max0min1 */ > - if (prod1.gts_p (prod2)) > + if (wi::gts_p (prod1, prod2)) > { > wide_int tmp = prod2; > prod2 = prod1; > prod1 = tmp; > } > > - if (prod0.gts_p (prod1)) > + if (wi::gts_p (prod0, prod1)) > { > wide_int tmp = prod1; > prod1 = prod0; > prod0 = tmp; > } > > - if (prod2.gts_p (prod3)) > + if (wi::gts_p (prod2, prod3)) > { > wide_int tmp = prod3; > prod3 = prod2; > @@ -2704,7 +2704,7 @@ extract_range_from_binary_expr_1 (value_ > > /* diff = max - min. */ > prod2 = prod3 - prod0; > - if (prod2.geu_p (sizem1)) > + if (wi::geu_p (prod2, sizem1)) > { > /* the range covers all values. */ > set_value_range_to_varying (vr); > @@ -2801,14 +2801,14 @@ extract_range_from_binary_expr_1 (value_ > { > low_bound = bound; > high_bound = complement; > - if (wide_int::ltu_p (vr0.max, low_bound)) > + if (wi::ltu_p (vr0.max, low_bound)) > { > /* [5, 6] << [1, 2] == [10, 24]. */ > /* We're shifting out only zeroes, the value increases > monotonically. */ > in_bounds = true; > } > - else if (high_bound.ltu_p (vr0.min)) > + else if (wi::ltu_p (high_bound, vr0.min)) > { > /* [0xffffff00, 0xffffffff] << [1, 2] > == [0xfffffc00, 0xfffffffe]. */ > @@ -2822,8 +2822,8 @@ extract_range_from_binary_expr_1 (value_ > /* [-1, 1] << [1, 2] == [-4, 4]. */ > low_bound = complement; > high_bound = bound; > - if (wide_int::lts_p (vr0.max, high_bound) > - && low_bound.lts_p (wide_int (vr0.min))) > + if (wi::lts_p (vr0.max, high_bound) > + && wi::lts_p (low_bound, vr0.min)) > { > /* For non-negative numbers, we're shifting out only > zeroes, the value increases monotonically. 
> @@ -3844,7 +3844,7 @@ adjust_range_with_scev (value_range_t *v > if (!overflow > && wtmp.fits_to_tree_p (TREE_TYPE (init)) > && (sgn == UNSIGNED > - || (wtmp.gts_p (0) == wide_int::gts_p (step, 0)))) > + || wi::gts_p (wtmp, 0) == wi::gts_p (step, 0))) > { > tem = wide_int_to_tree (TREE_TYPE (init), wtmp); > extract_range_from_binary_expr (&maxvr, PLUS_EXPR, > @@ -4736,7 +4736,7 @@ masked_increment (wide_int val, wide_int > res = bit - 1; > res = (val + bit).and_not (res); > res &= mask; > - if (res.gtu_p (val)) > + if (wi::gtu_p (res, val)) > return res ^ sgnbit; > } > return val ^ sgnbit; > @@ -6235,7 +6235,7 @@ search_for_addr_array (tree t, location_ > > idx = mem_ref_offset (t); > idx = idx.sdiv_trunc (addr_wide_int (el_sz)); > - if (idx.lts_p (0)) > + if (wi::lts_p (idx, 0)) > { > if (dump_file && (dump_flags & TDF_DETAILS)) > { > @@ -6247,9 +6247,7 @@ search_for_addr_array (tree t, location_ > "array subscript is below array bounds"); > TREE_NO_WARNING (t) = 1; > } > - else if (idx.gts_p (addr_wide_int (up_bound) > - - low_bound > - + 1)) > + else if (wi::gts_p (idx, addr_wide_int (up_bound) - low_bound + 1)) > { > if (dump_file && (dump_flags & TDF_DETAILS)) > { > @@ -8681,7 +8679,7 @@ range_fits_type_p (value_range_t *vr, un > a signed wide_int, while a negative value cannot be represented > by an unsigned wide_int. */ > if (src_sgn != dest_sgn > - && (max_wide_int (vr->min).lts_p (0) || max_wide_int (vr->max).lts_p (0))) > + && (wi::lts_p (vr->min, 0) || wi::lts_p (vr->max, 0))) > return false; > > /* Then we can perform the conversion on both ends and compare > @@ -8985,7 +8983,7 @@ simplify_conversion_using_ranges (gimple > > /* If the first conversion is not injective, the second must not > be widening. 
*/ > - if ((innermax - innermin).gtu_p (max_wide_int::mask (middle_prec, false)) > + if (wi::gtu_p (innermax - innermin, max_wide_int::mask (middle_prec, false)) > && middle_prec < final_prec) > return false; > /* We also want a medium value so that we can track the effect that > Index: gcc/tree.c > =================================================================== > --- gcc/tree.c 2013-08-25 07:42:28.423609800 +0100 > +++ gcc/tree.c 2013-08-25 07:42:29.203617339 +0100 > @@ -1228,7 +1228,7 @@ wide_int_to_tree (tree type, const wide_ > case BOOLEAN_TYPE: > /* Cache false or true. */ > limit = 2; > - if (cst.leu_p (1)) > + if (wi::leu_p (cst, 1)) > ix = cst.to_uhwi (); > break; > > @@ -1247,7 +1247,7 @@ wide_int_to_tree (tree type, const wide_ > if (cst.to_uhwi () < (unsigned HOST_WIDE_INT) INTEGER_SHARE_LIMIT) > ix = cst.to_uhwi (); > } > - else if (cst.ltu_p (INTEGER_SHARE_LIMIT)) > + else if (wi::ltu_p (cst, INTEGER_SHARE_LIMIT)) > ix = cst.to_uhwi (); > } > else > @@ -1264,7 +1264,7 @@ wide_int_to_tree (tree type, const wide_ > if (cst.to_shwi () < INTEGER_SHARE_LIMIT) > ix = cst.to_shwi () + 1; > } > - else if (cst.lts_p (INTEGER_SHARE_LIMIT)) > + else if (wi::lts_p (cst, INTEGER_SHARE_LIMIT)) > ix = cst.to_shwi () + 1; > } > } > @@ -1381,7 +1381,7 @@ cache_integer_cst (tree t) > case BOOLEAN_TYPE: > /* Cache false or true. 
*/ > limit = 2; > - if (wide_int::ltu_p (t, 2)) > + if (wi::ltu_p (t, 2)) > ix = TREE_INT_CST_ELT (t, 0); > break; > > @@ -1400,7 +1400,7 @@ cache_integer_cst (tree t) > if (tree_to_uhwi (t) < (unsigned HOST_WIDE_INT) INTEGER_SHARE_LIMIT) > ix = tree_to_uhwi (t); > } > - else if (wide_int::ltu_p (t, INTEGER_SHARE_LIMIT)) > + else if (wi::ltu_p (t, INTEGER_SHARE_LIMIT)) > ix = tree_to_uhwi (t); > } > else > @@ -1417,7 +1417,7 @@ cache_integer_cst (tree t) > if (tree_to_shwi (t) < INTEGER_SHARE_LIMIT) > ix = tree_to_shwi (t) + 1; > } > - else if (wide_int::ltu_p (t, INTEGER_SHARE_LIMIT)) > + else if (wi::ltu_p (t, INTEGER_SHARE_LIMIT)) > ix = tree_to_shwi (t) + 1; > } > } > @@ -1451,7 +1451,7 @@ cache_integer_cst (tree t) > /* If there is already an entry for the number verify it's the > same. */ > if (*slot) > - gcc_assert (wide_int::eq_p (((tree)*slot), t)); > + gcc_assert (wi::eq_p (tree (*slot), t)); > else > /* Otherwise insert this one into the hash table. */ > *slot = t; > @@ -6757,7 +6757,7 @@ tree_int_cst_equal (const_tree t1, const > prec2 = TYPE_PRECISION (TREE_TYPE (t2)); > > if (prec1 == prec2) > - return wide_int::eq_p (t1, t2); > + return wi::eq_p (t1, t2); > else if (prec1 < prec2) > return (wide_int (t1)).force_to_size (prec2, TYPE_SIGN (TREE_TYPE (t1))) == t2; > else > @@ -8562,7 +8562,7 @@ int_fits_type_p (const_tree c, const_tre > > if (c_neg && !t_neg) > return false; > - if ((c_neg || !t_neg) && wc.ltu_p (wd)) > + if ((c_neg || !t_neg) && wi::ltu_p (wc, wd)) > return false; > } > else if (wc.cmp (wd, TYPE_SIGN (TREE_TYPE (type_low_bound))) < 0) > @@ -8583,7 +8583,7 @@ int_fits_type_p (const_tree c, const_tre > > if (t_neg && !c_neg) > return false; > - if ((t_neg || !c_neg) && wc.gtu_p (wd)) > + if ((t_neg || !c_neg) && wi::gtu_p (wc, wd)) > return false; > } > else if (wc.cmp (wd, TYPE_SIGN (TREE_TYPE (type_high_bound))) > 0) > Index: gcc/tree.h > =================================================================== > --- gcc/tree.h 2013-08-25 
07:17:37.505554513 +0100 > +++ gcc/tree.h 2013-08-25 07:42:29.204617349 +0100 > @@ -1411,10 +1411,10 @@ #define TREE_LANG_FLAG_6(NODE) \ > /* Define additional fields and accessors for nodes representing constants. */ > > #define INT_CST_LT(A, B) \ > - (wide_int::lts_p (A, B)) > + (wi::lts_p (A, B)) > > #define INT_CST_LT_UNSIGNED(A, B) \ > - (wide_int::ltu_p (A, B)) > + (wi::ltu_p (A, B)) > > #define TREE_INT_CST_NUNITS(NODE) (INTEGER_CST_CHECK (NODE)->base.u.length) > #define TREE_INT_CST_ELT(NODE, I) TREE_INT_CST_ELT_CHECK (NODE, I) > Index: gcc/wide-int.cc > =================================================================== > --- gcc/wide-int.cc 2013-08-25 07:42:28.471610264 +0100 > +++ gcc/wide-int.cc 2013-08-25 07:42:29.205617359 +0100 > @@ -598,9 +598,9 @@ top_bit_of (const HOST_WIDE_INT *a, unsi > > /* Return true if OP0 == OP1. */ > bool > -wide_int_ro::eq_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, > - unsigned int prec, > - const HOST_WIDE_INT *op1, unsigned int op1len) > +wi::eq_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, > + unsigned int prec, > + const HOST_WIDE_INT *op1, unsigned int op1len) > { > int l0 = op0len - 1; > unsigned int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1); > @@ -628,10 +628,10 @@ wide_int_ro::eq_p_large (const HOST_WIDE > > /* Return true if OP0 < OP1 using signed comparisons. */ > bool > -wide_int_ro::lts_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, > - unsigned int p0, > - const HOST_WIDE_INT *op1, unsigned int op1len, > - unsigned int p1) > +wi::lts_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, > + unsigned int p0, > + const HOST_WIDE_INT *op1, unsigned int op1len, > + unsigned int p1) > { > HOST_WIDE_INT s0, s1; > unsigned HOST_WIDE_INT u0, u1; > @@ -709,8 +709,8 @@ wide_int_ro::cmps_large (const HOST_WIDE > > /* Return true if OP0 < OP1 using unsigned comparisons. 
*/ > bool > -wide_int_ro::ltu_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, unsigned int p0, > - const HOST_WIDE_INT *op1, unsigned int op1len, unsigned int p1) > +wi::ltu_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, unsigned int p0, > + const HOST_WIDE_INT *op1, unsigned int op1len, unsigned int p1) > { > unsigned HOST_WIDE_INT x0; > unsigned HOST_WIDE_INT x1; > Index: gcc/wide-int.h > =================================================================== > --- gcc/wide-int.h 2013-08-25 07:42:28.424609809 +0100 > +++ gcc/wide-int.h 2013-08-25 08:23:14.445592968 +0100 > @@ -304,6 +304,95 @@ signedp (unsigned long) > return false; > } > > +/* This class, which has no default implementation, is expected to > + provide the following routines: > + > + HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p, > + ... x) > + -- Decompose integer X into a length, precision and array of > + HOST_WIDE_INTs. Store the length in *L, the precision in *P > + and return the array. S is available as scratch space if needed. */ > +template <typename T> struct wide_int_accessors; > + > +namespace wi > +{ > + template <typename T1, typename T2> > + bool eq_p (const T1 &, const T2 &); > + > + template <typename T1, typename T2> > + bool lt_p (const T1 &, const T2 &, signop); > + > + template <typename T1, typename T2> > + bool lts_p (const T1 &, const T2 &); > + > + template <typename T1, typename T2> > + bool ltu_p (const T1 &, const T2 &); > + > + template <typename T1, typename T2> > + bool le_p (const T1 &, const T2 &, signop); > + > + template <typename T1, typename T2> > + bool les_p (const T1 &, const T2 &); > + > + template <typename T1, typename T2> > + bool leu_p (const T1 &, const T2 &); > + > + template <typename T1, typename T2> > + bool gt_p (const T1 &, const T2 &, signop); > + > + template <typename T1, typename T2> > + bool gts_p (const T1 &, const T2 &); > + > + template <typename T1, typename T2> > + bool gtu_p (const T1 &, const T2 &); > + > + template <typename T1, typename T2> > + bool ge_p (const T1 &, const T2 &, signop); > + > + template <typename T1, typename T2> > + bool ges_p (const T1 &, const T2 &); > + > + template <typename T1, typename T2> > + bool geu_p (const T1 &, const T2 &); > + > + /* Comparisons.
*/ > + bool eq_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int, > + const HOST_WIDE_INT *, unsigned int); > + bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int, > + const HOST_WIDE_INT *, unsigned int, unsigned int); > + bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int, > + const HOST_WIDE_INT *, unsigned int, unsigned int); > + void check_precision (unsigned int *, unsigned int *, bool, bool); > + > + template <typename T> > + const HOST_WIDE_INT *to_shwi1 (HOST_WIDE_INT *, unsigned int *, > + unsigned int *, const T &); > + > + template <typename T> > + const HOST_WIDE_INT *to_shwi2 (HOST_WIDE_INT *, unsigned int *, const T &); > +} > + > +/* Decompose integer X into a length, precision and array of HOST_WIDE_INTs. > + Store the length in *L, the precision in *P and return the array. > + S is available as a scratch array if needed, and can be used as > + the return value. */ > +template <typename T> > +inline const HOST_WIDE_INT * > +wi::to_shwi1 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p, > + const T &x) > +{ > + return wide_int_accessors <T>::to_shwi (s, l, p, x); > +} > + > +/* Like to_shwi1, but without the precision. */ > +template <typename T> > +inline const HOST_WIDE_INT * > +wi::to_shwi2 (HOST_WIDE_INT *s, unsigned int *l, const T &x) > +{ > + unsigned int p; > + return wide_int_accessors <T>::to_shwi (s, l, &p, x); > +} > + > class wide_int; > > class GTY(()) wide_int_ro > @@ -323,7 +412,6 @@ class GTY(()) wide_int_ro > unsigned short len; > unsigned int precision; > > - const HOST_WIDE_INT *get_val () const; > wide_int_ro &operator = (const wide_int_ro &); > > public: > @@ -374,6 +462,7 @@ class GTY(()) wide_int_ro > /* Public accessors for the interior of a wide int. */ > unsigned short get_len () const; > unsigned int get_precision () const; > + const HOST_WIDE_INT *get_val () const; > HOST_WIDE_INT elt (unsigned int) const; > > /* Comparative functions.
*/ > @@ -389,85 +478,10 @@ class GTY(()) wide_int_ro > template <typename T> > bool operator == (const T &) const; > > - template <typename T1, typename T2> > - static bool eq_p (const T1 &, const T2 &); > - > template <typename T> > bool operator != (const T &) const; > > template <typename T> > - bool lt_p (const T &, signop) const; > - > - template <typename T1, typename T2> > - static bool lt_p (const T1 &, const T2 &, signop); > - > - template <typename T> > - bool lts_p (const T &) const; > - > - template <typename T1, typename T2> > - static bool lts_p (const T1 &, const T2 &); > - > - template <typename T> > - bool ltu_p (const T &) const; > - > - template <typename T1, typename T2> > - static bool ltu_p (const T1 &, const T2 &); > - > - template <typename T> > - bool le_p (const T &, signop) const; > - > - template <typename T1, typename T2> > - static bool le_p (const T1 &, const T2 &, signop); > - > - template <typename T> > - bool les_p (const T &) const; > - > - template <typename T1, typename T2> > - static bool les_p (const T1 &, const T2 &); > - > - template <typename T> > - bool leu_p (const T &) const; > - > - template <typename T1, typename T2> > - static bool leu_p (const T1 &, const T2 &); > - > - template <typename T> > - bool gt_p (const T &, signop) const; > - > - template <typename T1, typename T2> > - static bool gt_p (const T1 &, const T2 &, signop); > - > - template <typename T> > - bool gts_p (const T &) const; > - > - template <typename T1, typename T2> > - static bool gts_p (const T1 &, const T2 &); > - > - template <typename T> > - bool gtu_p (const T &) const; > - > - template <typename T1, typename T2> > - static bool gtu_p (const T1 &, const T2 &); > - > - template <typename T> > - bool ge_p (const T &, signop) const; > - > - template <typename T1, typename T2> > - static bool ge_p (const T1 &, const T2 &, signop); > - > - template <typename T> > - bool ges_p (const T &) const; > - > - template <typename T1, typename T2> > - static bool ges_p (const T1 &, const T2 &); > - > - template <typename T> > - bool geu_p (const T &) const; > - > - template <typename T1, typename T2> > - static bool geu_p (const T1 &, const T2 &); > - > - template <typename T> > int cmp (const T &, signop) const; > > template <typename T> > @@ -705,18 +719,10 @@ class GTY(()) wide_int_ro > /* Internal versions that do the work if the values do not fit in a HWI.
*/ > > /* Comparisons */ > - static bool eq_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int, > - const HOST_WIDE_INT *, unsigned int); > - static bool lts_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int, > - const HOST_WIDE_INT *, unsigned int, unsigned int); > static int cmps_large (const HOST_WIDE_INT *, unsigned int, unsigned int, > const HOST_WIDE_INT *, unsigned int, unsigned int); > - static bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, unsigned int, > - const HOST_WIDE_INT *, unsigned int, unsigned int); > static int cmpu_large (const HOST_WIDE_INT *, unsigned int, unsigned int, > const HOST_WIDE_INT *, unsigned int, unsigned int); > - static void check_precision (unsigned int *, unsigned int *, bool, bool); > - > > /* Logicals. */ > static wide_int_ro and_large (const HOST_WIDE_INT *, unsigned int, > @@ -769,17 +775,6 @@ class GTY(()) wide_int_ro > int trunc_shift (const HOST_WIDE_INT *, unsigned int, unsigned int, > ShiftOp) const; > > - template <typename T> > - static bool top_bit_set (T); > - > - template <typename T> > - static const HOST_WIDE_INT *to_shwi1 (HOST_WIDE_INT *, unsigned int *, > - unsigned int *, const T &); > - > - template <typename T> > - static const HOST_WIDE_INT *to_shwi2 (HOST_WIDE_INT *, unsigned int *, > - const T &); > - > #ifdef DEBUG_WIDE_INT > /* Debugging routines. */ > static void debug_wa (const char *, const wide_int_ro &, > @@ -1163,51 +1158,11 @@ wide_int_ro::neg_p (signop sgn) const > return sign_mask () != 0; > } > > -/* Return true if THIS == C. If both operands have nonzero precisions, > - the precisions must be the same.
*/ > -template <typename T> > -inline bool > -wide_int_ro::operator == (const T &c) const > -{ > - bool result; > - HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS]; > - const HOST_WIDE_INT *s; > - unsigned int cl; > - unsigned int p1, p2; > - > - p1 = precision; > - > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, false); > - > - if (p1 == 0) > - /* There are prec 0 types and we need to do this to check their > - min and max values. */ > - result = (len == cl) && (val[0] == s[0]); > - else if (p1 < HOST_BITS_PER_WIDE_INT) > - { > - unsigned HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << p1) - 1; > - result = (val[0] & mask) == (s[0] & mask); > - } > - else if (p1 == HOST_BITS_PER_WIDE_INT) > - result = val[0] == s[0]; > - else > - result = eq_p_large (val, len, p1, s, cl); > - > - if (result) > - gcc_assert (len == cl); > - > -#ifdef DEBUG_WIDE_INT > - debug_vwa ("wide_int_ro:: %d = (%s == %s)\n", result, *this, s, cl, p2); > -#endif > - return result; > -} > - > /* Return true if C1 == C2. If both parameters have nonzero precisions, > then those precisions must be equal. */ > template <typename T1, typename T2> > inline bool > -wide_int_ro::eq_p (const T1 &c1, const T2 &c2) > +wi::eq_p (const T1 &c1, const T2 &c2) > { > bool result; > HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS]; > @@ -1237,51 +1192,28 @@ wide_int_ro::eq_p (const T1 &c1, const T > return result; > } > > -/* Return true if THIS != C. If both parameters have nonzero precisions, > - then those precisions must be equal. */ > +/* Return true if THIS == C. If both operands have nonzero precisions, > + the precisions must be the same. */ > template <typename T> > inline bool > -wide_int_ro::operator != (const T &c) const > +wide_int_ro::operator == (const T &c) const > { > - return !(*this == c); > + return wi::eq_p (*this, c); > } > > -/* Return true if THIS < C using signed comparisons. */ > +/* Return true if THIS != C. If both parameters have nonzero precisions, > + then those precisions must be equal.
*/ > template <typename T> > inline bool > -wide_int_ro::lts_p (const T &c) const > +wide_int_ro::operator != (const T &c) const > { > - bool result; > - HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS]; > - const HOST_WIDE_INT *s; > - unsigned int cl; > - unsigned int p1, p2; > - > - p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > - > - if (p1 <= HOST_BITS_PER_WIDE_INT > - && p2 <= HOST_BITS_PER_WIDE_INT) > - { > - gcc_assert (cl != 0); > - HOST_WIDE_INT x0 = sext_hwi (val[0], p1); > - HOST_WIDE_INT x1 = sext_hwi (s[0], p2); > - result = x0 < x1; > - } > - else > - result = lts_p_large (val, len, p1, s, cl, p2); > - > -#ifdef DEBUG_WIDE_INT > - debug_vwa ("wide_int_ro:: %d = (%s lts_p %s\n", result, *this, s, cl, p2); > -#endif > - return result; > + return !wi::eq_p (*this, c); > } > > /* Return true if C1 < C2 using signed comparisons. */ > template <typename T1, typename T2> > inline bool > -wide_int_ro::lts_p (const T1 &c1, const T2 &c2) > +wi::lts_p (const T1 &c1, const T2 &c2) > { > bool result; > HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS]; > @@ -1305,38 +1237,8 @@ wide_int_ro::lts_p (const T1 &c1, const > result = lts_p_large (s1, cl1, p1, s2, cl2, p2); > > #ifdef DEBUG_WIDE_INT > - debug_vaa ("wide_int_ro:: %d = (%s lts_p %s\n", result, s1, cl1, p1, s2, cl2, p2); > -#endif > - return result; > -} > - > -/* Return true if THIS < C using unsigned comparisons.
*/ > -template <typename T> > -inline bool > -wide_int_ro::ltu_p (const T &c) const > -{ > - bool result; > - HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS]; > - const HOST_WIDE_INT *s; > - unsigned int cl; > - unsigned int p1, p2; > - > - p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > - > - if (p1 <= HOST_BITS_PER_WIDE_INT > - && p2 <= HOST_BITS_PER_WIDE_INT) > - { > - unsigned HOST_WIDE_INT x0 = zext_hwi (val[0], p1); > - unsigned HOST_WIDE_INT x1 = zext_hwi (s[0], p2); > - result = x0 < x1; > - } > - else > - result = ltu_p_large (val, len, p1, s, cl, p2); > - > -#ifdef DEBUG_WIDE_INT > - debug_vwa ("wide_int_ro:: %d = (%s ltu_p %s)\n", result, *this, s, cl, p2); > + wide_int_ro::debug_vaa ("wide_int_ro:: %d = (%s lts_p %s\n", > + result, s1, cl1, p1, s2, cl2, p2); > #endif > return result; > } > @@ -1344,7 +1246,7 @@ wide_int_ro::ltu_p (const T &c) const > /* Return true if C1 < C2 using unsigned comparisons. */ > template <typename T1, typename T2> > inline bool > -wide_int_ro::ltu_p (const T1 &c1, const T2 &c2) > +wi::ltu_p (const T1 &c1, const T2 &c2) > { > bool result; > HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS]; > @@ -1372,21 +1274,10 @@ wide_int_ro::ltu_p (const T1 &c1, const > return result; > } > > -/* Return true if THIS < C. Signedness is indicated by SGN. */ > -template <typename T> > -inline bool > -wide_int_ro::lt_p (const T &c, signop sgn) const > -{ > - if (sgn == SIGNED) > - return lts_p (c); > - else > - return ltu_p (c); > -} > - > /* Return true if C1 < C2. Signedness is indicated by SGN. */ > template <typename T1, typename T2> > inline bool > -wide_int_ro::lt_p (const T1 &c1, const T2 &c2, signop sgn) > +wi::lt_p (const T1 &c1, const T2 &c2, signop sgn) > { > if (sgn == SIGNED) > return lts_p (c1, c2); > @@ -1394,53 +1285,26 @@ wide_int_ro::lt_p (const T1 &c1, const T > return ltu_p (c1, c2); > } > > -/* Return true if THIS <= C using signed comparisons.
*/ > -template <typename T> > -inline bool > -wide_int_ro::les_p (const T &c) const > -{ > - return !gts_p (c); > -} > - > /* Return true if C1 <= C2 using signed comparisons. */ > template <typename T1, typename T2> > inline bool > -wide_int_ro::les_p (const T1 &c1, const T2 &c2) > +wi::les_p (const T1 &c1, const T2 &c2) > { > return !gts_p (c1, c2); > } > > -/* Return true if THIS <= C using unsigned comparisons. */ > -template <typename T> > -inline bool > -wide_int_ro::leu_p (const T &c) const > -{ > - return !gtu_p (c); > -} > - > /* Return true if C1 <= C2 using unsigned comparisons. */ > template <typename T1, typename T2> > inline bool > -wide_int_ro::leu_p (const T1 &c1, const T2 &c2) > +wi::leu_p (const T1 &c1, const T2 &c2) > { > return !gtu_p (c1, c2); > } > > -/* Return true if THIS <= C. Signedness is indicated by SGN. */ > -template <typename T> > -inline bool > -wide_int_ro::le_p (const T &c, signop sgn) const > -{ > - if (sgn == SIGNED) > - return les_p (c); > - else > - return leu_p (c); > -} > - > /* Return true if C1 <= C2. Signedness is indicated by SGN. */ > template <typename T1, typename T2> > inline bool > -wide_int_ro::le_p (const T1 &c1, const T2 &c2, signop sgn) > +wi::le_p (const T1 &c1, const T2 &c2, signop sgn) > { > if (sgn == SIGNED) > return les_p (c1, c2); > @@ -1448,53 +1312,26 @@ wide_int_ro::le_p (const T1 &c1, const T > return leu_p (c1, c2); > } > > -/* Return true if THIS > C using signed comparisons. */ > -template <typename T> > -inline bool > -wide_int_ro::gts_p (const T &c) const > -{ > - return lts_p (c, *this); > -} > - > /* Return true if C1 > C2 using signed comparisons. */ > template <typename T1, typename T2> > inline bool > -wide_int_ro::gts_p (const T1 &c1, const T2 &c2) > +wi::gts_p (const T1 &c1, const T2 &c2) > { > return lts_p (c2, c1); > } > > -/* Return true if THIS > C using unsigned comparisons. */ > -template <typename T> > -inline bool > -wide_int_ro::gtu_p (const T &c) const > -{ > - return ltu_p (c, *this); > -} > - > /* Return true if C1 > C2 using unsigned comparisons.
*/ > template <typename T1, typename T2> > inline bool > -wide_int_ro::gtu_p (const T1 &c1, const T2 &c2) > +wi::gtu_p (const T1 &c1, const T2 &c2) > { > return ltu_p (c2, c1); > } > > -/* Return true if THIS > C. Signedness is indicated by SGN. */ > -template <typename T> > -inline bool > -wide_int_ro::gt_p (const T &c, signop sgn) const > -{ > - if (sgn == SIGNED) > - return gts_p (c); > - else > - return gtu_p (c); > -} > - > /* Return true if C1 > C2. Signedness is indicated by SGN. */ > template <typename T1, typename T2> > inline bool > -wide_int_ro::gt_p (const T1 &c1, const T2 &c2, signop sgn) > +wi::gt_p (const T1 &c1, const T2 &c2, signop sgn) > { > if (sgn == SIGNED) > return gts_p (c1, c2); > @@ -1502,53 +1339,26 @@ wide_int_ro::gt_p (const T1 &c1, const T > return gtu_p (c1, c2); > } > > -/* Return true if THIS >= C using signed comparisons. */ > -template <typename T> > -inline bool > -wide_int_ro::ges_p (const T &c) const > -{ > - return !lts_p (c); > -} > - > /* Return true if C1 >= C2 using signed comparisons. */ > template <typename T1, typename T2> > inline bool > -wide_int_ro::ges_p (const T1 &c1, const T2 &c2) > +wi::ges_p (const T1 &c1, const T2 &c2) > { > return !lts_p (c1, c2); > } > > -/* Return true if THIS >= C using unsigned comparisons. */ > -template <typename T> > -inline bool > -wide_int_ro::geu_p (const T &c) const > -{ > - return !ltu_p (c); > -} > - > /* Return true if C1 >= C2 using unsigned comparisons. */ > template <typename T1, typename T2> > inline bool > -wide_int_ro::geu_p (const T1 &c1, const T2 &c2) > +wi::geu_p (const T1 &c1, const T2 &c2) > { > return !ltu_p (c1, c2); > } > > -/* Return true if THIS >= C. Signedness is indicated by SGN. */ > -template <typename T> > -inline bool > -wide_int_ro::ge_p (const T &c, signop sgn) const > -{ > - if (sgn == SIGNED) > - return ges_p (c); > - else > - return geu_p (c); > -} > - > /* Return true if C1 >= C2. Signedness is indicated by SGN.
*/ > template <typename T1, typename T2> > inline bool > -wide_int_ro::ge_p (const T1 &c1, const T2 &c2, signop sgn) > +wi::ge_p (const T1 &c1, const T2 &c2, signop sgn) > { > if (sgn == SIGNED) > return ges_p (c1, c2); > @@ -1568,7 +1378,7 @@ wide_int_ro::cmps (const T &c) const > unsigned int cl; > unsigned int prec; > > - s = to_shwi1 (ws, &cl, &prec, c); > + s = wi::to_shwi1 (ws, &cl, &prec, c); > if (prec == 0) > prec = precision; > > @@ -1606,7 +1416,7 @@ wide_int_ro::cmpu (const T &c) const > unsigned int cl; > unsigned int prec; > > - s = to_shwi1 (ws, &cl, &prec, c); > + s = wi::to_shwi1 (ws, &cl, &prec, c); > if (prec == 0) > prec = precision; > > @@ -1681,13 +1491,12 @@ wide_int_ro::min (const T &c, signop sgn > > p1 = precision; > > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > - if (sgn == SIGNED) > - return lts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false); > - else > - return ltu_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false); > + return (wi::lt_p (*this, c, sgn) > + ? *this > + : wide_int_ro::from_array (s, cl, p1, false)); > } > > /* Return the signed or unsigned min of THIS and OP1. */ > @@ -1695,9 +1504,9 @@ wide_int_ro::min (const T &c, signop sgn > wide_int_ro::min (const wide_int_ro &op1, signop sgn) const > { > if (sgn == SIGNED) > - return lts_p (op1) ? (*this) : op1; > + return wi::lts_p (*this, op1) ? *this : op1; > else > - return ltu_p (op1) ? (*this) : op1; > + return wi::ltu_p (*this, op1) ? *this : op1; > } > > /* Return the signed or unsigned max of THIS and C. */ > @@ -1712,22 +1521,18 @@ wide_int_ro::max (const T &c, signop sgn > > p1 = precision; > > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > - if (sgn == SIGNED) > - return gts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false); > - else > - return gtu_p (c) ?
(*this) : wide_int_ro::from_array (s, cl, p1, false); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > + return (wi::gt_p (*this, c, sgn) > + ? *this > + : wide_int_ro::from_array (s, cl, p1, false)); > } > > /* Return the signed or unsigned max of THIS and OP1. */ > inline wide_int_ro > wide_int_ro::max (const wide_int_ro &op1, signop sgn) const > { > - if (sgn == SIGNED) > - return gts_p (op1) ? (*this) : op1; > - else > - return gtu_p (op1) ? (*this) : op1; > + return wi::gt_p (*this, op1, sgn) ? *this : op1; > } > > /* Return the signed min of THIS and C. */ > @@ -1742,17 +1547,19 @@ wide_int_ro::smin (const T &c) const > > p1 = precision; > > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > - return lts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false); > + return (wi::lts_p (*this, c) > + ? *this > + : wide_int_ro::from_array (s, cl, p1, false)); > } > > /* Return the signed min of THIS and OP1. */ > inline wide_int_ro > wide_int_ro::smin (const wide_int_ro &op1) const > { > - return lts_p (op1) ? (*this) : op1; > + return wi::lts_p (*this, op1) ? *this : op1; > } > > /* Return the signed max of THIS and C. */ > @@ -1767,17 +1574,19 @@ wide_int_ro::smax (const T &c) const > > p1 = precision; > > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > - return gts_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false); > + return (wi::gts_p (*this, c) > + ? *this > + : wide_int_ro::from_array (s, cl, p1, false)); > } > > /* Return the signed max of THIS and OP1. */ > inline wide_int_ro > wide_int_ro::smax (const wide_int_ro &op1) const > { > - return gts_p (op1) ? (*this) : op1; > + return wi::gts_p (*this, op1) ? 
*this : op1; > } > > /* Return the unsigned min of THIS and C. */ > @@ -1792,15 +1601,17 @@ wide_int_ro::umin (const T &c) const > > p1 = precision; > > - s = to_shwi1 (ws, &cl, &p2, c); > - return ltu_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + return (wi::ltu_p (*this, c) > + ? *this > + : wide_int_ro::from_array (s, cl, p1, false)); > } > > /* Return the unsigned min of THIS and OP1. */ > inline wide_int_ro > wide_int_ro::umin (const wide_int_ro &op1) const > { > - return ltu_p (op1) ? (*this) : op1; > + return wi::ltu_p (*this, op1) ? *this : op1; > } > > /* Return the unsigned max of THIS and C. */ > @@ -1815,17 +1626,19 @@ wide_int_ro::umax (const T &c) const > > p1 = precision; > > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > - return gtu_p (c) ? (*this) : wide_int_ro::from_array (s, cl, p1, false); > + return (wi::gtu_p (*this, c) > + ? *this > + : wide_int_ro::from_array (s, cl, p1, false)); > } > > /* Return the unsigned max of THIS and OP1. */ > inline wide_int_ro > wide_int_ro::umax (const wide_int_ro &op1) const > { > - return gtu_p (op1) ? (*this) : op1; > + return wi::gtu_p (*this, op1) ? *this : op1; > } > > /* Return THIS extended to PREC. 
The signedness of the extension is > @@ -1891,8 +1704,8 @@ wide_int_ro::operator & (const T &c) con > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > if (p1 <= HOST_BITS_PER_WIDE_INT) > { > @@ -1921,8 +1734,8 @@ wide_int_ro::and_not (const T &c) const > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > if (p1 <= HOST_BITS_PER_WIDE_INT) > { > @@ -1973,8 +1786,8 @@ wide_int_ro::operator | (const T &c) con > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > if (p1 <= HOST_BITS_PER_WIDE_INT) > { > @@ -2003,8 +1816,8 @@ wide_int_ro::or_not (const T &c) const > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > if (p1 <= HOST_BITS_PER_WIDE_INT) > { > @@ -2033,8 +1846,8 @@ wide_int_ro::operator ^ (const T &c) con > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > if (p1 <= HOST_BITS_PER_WIDE_INT) > { > @@ -2063,8 +1876,8 @@ wide_int_ro::operator + (const T &c) con > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > if (p1 <= HOST_BITS_PER_WIDE_INT) > { > @@ -2096,8 +1909,8 @@ wide_int_ro::add (const T &c, 
signop sgn > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > if (p1 <= HOST_BITS_PER_WIDE_INT) > { > @@ -2141,8 +1954,8 @@ wide_int_ro::operator * (const T &c) con > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > if (p1 <= HOST_BITS_PER_WIDE_INT) > { > @@ -2176,8 +1989,8 @@ wide_int_ro::mul (const T &c, signop sgn > if (overflow) > *overflow = false; > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > return mul_internal (false, false, > val, len, p1, > @@ -2217,8 +2030,8 @@ wide_int_ro::mul_full (const T &c, signo > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > return mul_internal (false, true, > val, len, p1, > @@ -2257,8 +2070,8 @@ wide_int_ro::mul_high (const T &c, signo > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > return mul_internal (true, false, > val, len, p1, > @@ -2298,8 +2111,8 @@ wide_int_ro::operator - (const T &c) con > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > if (p1 <= HOST_BITS_PER_WIDE_INT) > { > @@ -2331,8 +2144,8 @@ wide_int_ro::sub (const T &c, signop sgn > unsigned int p1, p2; > > p1 = 
precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, true, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, true, true); > > if (p1 <= HOST_BITS_PER_WIDE_INT) > { > @@ -2379,8 +2192,8 @@ wide_int_ro::div_trunc (const T &c, sign > if (overflow) > *overflow = false; > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, false, true); > > return divmod_internal (true, val, len, p1, s, cl, p2, sgn, > &remainder, false, overflow); > @@ -2420,8 +2233,8 @@ wide_int_ro::div_floor (const T &c, sign > if (overflow) > *overflow = false; > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, false, true); > > return divmod_internal (true, val, len, p1, s, cl, p2, sgn, > &remainder, false, overflow); > @@ -2461,8 +2274,8 @@ wide_int_ro::div_ceil (const T &c, signo > if (overflow) > *overflow = false; > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, false, true); > > quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn, > &remainder, true, overflow); > @@ -2490,8 +2303,8 @@ wide_int_ro::div_round (const T &c, sign > if (overflow) > *overflow = false; > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, false, true); > > quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn, > &remainder, true, overflow); > @@ -2505,7 +2318,7 @@ wide_int_ro::div_round (const T &c, sign > wide_int_ro p_divisor = divisor.neg_p (SIGNED) ? 
-divisor : divisor; > p_divisor = p_divisor.rshiftu_large (1); > > - if (p_divisor.gts_p (p_remainder)) > + if (wi::gts_p (p_divisor, p_remainder)) > { > if (quotient.neg_p (SIGNED)) > return quotient - 1; > @@ -2516,7 +2329,7 @@ wide_int_ro::div_round (const T &c, sign > else > { > wide_int_ro p_divisor = divisor.rshiftu_large (1); > - if (p_divisor.gtu_p (remainder)) > + if (wi::gtu_p (p_divisor, remainder)) > return quotient + 1; > } > } > @@ -2537,8 +2350,8 @@ wide_int_ro::divmod_trunc (const T &c, w > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, false, true); > > return divmod_internal (true, val, len, p1, s, cl, p2, sgn, > remainder, true, 0); > @@ -2575,8 +2388,8 @@ wide_int_ro::divmod_floor (const T &c, w > unsigned int p1, p2; > > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, false, true); > > quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn, > remainder, true, 0); > @@ -2613,8 +2426,8 @@ wide_int_ro::mod_trunc (const T &c, sign > if (overflow) > *overflow = false; > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, false, true); > > divmod_internal (false, val, len, p1, s, cl, p2, sgn, > &remainder, true, overflow); > @@ -2655,8 +2468,8 @@ wide_int_ro::mod_floor (const T &c, sign > if (overflow) > *overflow = false; > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, false, true); > > quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn, > &remainder, true, overflow); > @@ -2692,8 +2505,8 @@ wide_int_ro::mod_ceil (const 
T &c, signo > if (overflow) > *overflow = false; > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, false, true); > > quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn, > &remainder, true, overflow); > @@ -2721,8 +2534,8 @@ wide_int_ro::mod_round (const T &c, sign > if (overflow) > *overflow = false; > p1 = precision; > - s = to_shwi1 (ws, &cl, &p2, c); > - check_precision (&p1, &p2, false, true); > + s = wi::to_shwi1 (ws, &cl, &p2, c); > + wi::check_precision (&p1, &p2, false, true); > > quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn, > &remainder, true, overflow); > @@ -2737,7 +2550,7 @@ wide_int_ro::mod_round (const T &c, sign > wide_int_ro p_divisor = divisor.neg_p (SIGNED) ? -divisor : divisor; > p_divisor = p_divisor.rshiftu_large (1); > > - if (p_divisor.gts_p (p_remainder)) > + if (wi::gts_p (p_divisor, p_remainder)) > { > if (quotient.neg_p (SIGNED)) > return remainder + divisor; > @@ -2748,7 +2561,7 @@ wide_int_ro::mod_round (const T &c, sign > else > { > wide_int_ro p_divisor = divisor.rshiftu_large (1); > - if (p_divisor.gtu_p (remainder)) > + if (wi::gtu_p (p_divisor, remainder)) > return remainder - divisor; > } > } > @@ -2768,7 +2581,7 @@ wide_int_ro::lshift (const T &c, unsigne > unsigned int cl; > HOST_WIDE_INT shift; > > - s = to_shwi2 (ws, &cl, c); > + s = wi::to_shwi2 (ws, &cl, c); > > gcc_checking_assert (precision); > > @@ -2806,7 +2619,7 @@ wide_int_ro::lshift_widen (const T &c, u > unsigned int cl; > HOST_WIDE_INT shift; > > - s = to_shwi2 (ws, &cl, c); > + s = wi::to_shwi2 (ws, &cl, c); > > gcc_checking_assert (precision); > gcc_checking_assert (res_prec); > @@ -2843,7 +2656,7 @@ wide_int_ro::lrotate (const T &c, unsign > const HOST_WIDE_INT *s; > unsigned int cl; > > - s = to_shwi2 (ws, &cl, c); > + s = wi::to_shwi2 (ws, &cl, c); > > return lrotate ((unsigned HOST_WIDE_INT) s[0], prec); > } 
> @@ -2901,7 +2714,7 @@ wide_int_ro::rshiftu (const T &c, unsign > unsigned int cl; > HOST_WIDE_INT shift; > > - s = to_shwi2 (ws, &cl, c); > + s = wi::to_shwi2 (ws, &cl, c); > gcc_checking_assert (precision); > shift = trunc_shift (s, cl, bitsize, trunc_op); > > @@ -2944,7 +2757,7 @@ wide_int_ro::rshifts (const T &c, unsign > unsigned int cl; > HOST_WIDE_INT shift; > > - s = to_shwi2 (ws, &cl, c); > + s = wi::to_shwi2 (ws, &cl, c); > gcc_checking_assert (precision); > shift = trunc_shift (s, cl, bitsize, trunc_op); > > @@ -2989,7 +2802,7 @@ wide_int_ro::rrotate (const T &c, unsign > const HOST_WIDE_INT *s; > unsigned int cl; > > - s = to_shwi2 (ws, &cl, c); > + s = wi::to_shwi2 (ws, &cl, c); > return rrotate ((unsigned HOST_WIDE_INT) s[0], prec); > } > > @@ -3080,25 +2893,26 @@ wide_int_ro::trunc_shift (const HOST_WID > return cnt[0] & (bitsize - 1); > } > > +/* Implementation of wide_int_accessors for primitive integer types > + like "int". */ > +template > +struct primitive_wide_int_accessors > +{ > + static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *, > + unsigned int *, const T &); > +}; > + > template > inline bool > -wide_int_ro::top_bit_set (T x) > +top_bit_set (T x) > { > - return (x >> (sizeof (x)*8 - 1)) != 0; > + return (x >> (sizeof (x) * 8 - 1)) != 0; > } > > -/* The following template and its overrides are used for the first > - and second operand of static binary comparison functions. > - These have been implemented so that pointer copying is done > - from the rep of the operands rather than actual data copying. > - This is safe even for garbage collected objects since the value > - is immediately throw away. > - > - This template matches all integers. 
*/ > template > inline const HOST_WIDE_INT * > -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p, > - const T &x) > +primitive_wide_int_accessors ::to_shwi (HOST_WIDE_INT *s, unsigned int *l, > + unsigned int *p, const T &x) > { > s[0] = x; > if (signedp (x) > @@ -3114,29 +2928,23 @@ wide_int_ro::to_shwi1 (HOST_WIDE_INT *s, > return s; > } > > -/* The following template and its overrides are used for the second > - operand of binary functions. These have been implemented so that > - pointer copying is done from the rep of the second operand rather > - than actual data copying. This is safe even for garbage collected > - objects since the value is immediately throw away. > +template <> > +struct wide_int_accessors > + : public primitive_wide_int_accessors {}; > > - The next template matches all integers. */ > -template > -inline const HOST_WIDE_INT * > -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s, unsigned int *l, const T &x) > -{ > - s[0] = x; > - if (signedp (x) > - || sizeof (T) < sizeof (HOST_WIDE_INT) > - || ! 
top_bit_set (x)) > - *l = 1; > - else > - { > - s[1] = 0; > - *l = 2; > - } > - return s; > -} > +template <> > +struct wide_int_accessors > + : public primitive_wide_int_accessors {}; > + > +#if HOST_BITS_PER_INT != HOST_BITS_PER_WIDE_INT > +template <> > +struct wide_int_accessors > + : public primitive_wide_int_accessors {}; > + > +template <> > +struct wide_int_accessors > + : public primitive_wide_int_accessors {}; > +#endif > > inline wide_int::wide_int () {} > > @@ -3275,7 +3083,6 @@ class GTY(()) fixed_wide_int : public wi > protected: > fixed_wide_int &operator = (const wide_int &); > fixed_wide_int (const wide_int_ro); > - const HOST_WIDE_INT *get_val () const; > > using wide_int_ro::val; > > @@ -3285,16 +3092,8 @@ class GTY(()) fixed_wide_int : public wi > using wide_int_ro::to_short_addr; > using wide_int_ro::fits_uhwi_p; > using wide_int_ro::fits_shwi_p; > - using wide_int_ro::gtu_p; > - using wide_int_ro::gts_p; > - using wide_int_ro::geu_p; > - using wide_int_ro::ges_p; > using wide_int_ro::to_shwi; > using wide_int_ro::operator ==; > - using wide_int_ro::ltu_p; > - using wide_int_ro::lts_p; > - using wide_int_ro::leu_p; > - using wide_int_ro::les_p; > using wide_int_ro::to_uhwi; > using wide_int_ro::cmps; > using wide_int_ro::neg_p; > @@ -3510,13 +3309,6 @@ inline fixed_wide_int ::fixed_w > } > > template > -inline const HOST_WIDE_INT * > -fixed_wide_int ::get_val () const > -{ > - return val; > -} > - > -template > inline fixed_wide_int > fixed_wide_int ::from_wide_int (const wide_int &w) > { > @@ -4165,118 +3957,62 @@ extern void gt_pch_nx(max_wide_int*); > > extern addr_wide_int mem_ref_offset (const_tree); > > -/* The wide-int overload templates. 
*/ > - > template <> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, unsigned int *p, > - const wide_int_ro &y) > -{ > - *p = y.precision; > - *l = y.len; > - return y.val; > -} > - > -template <> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, unsigned int *p, > - const wide_int &y) > -{ > - *p = y.precision; > - *l = y.len; > - return y.val; > -} > - > - > -template <> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, unsigned int *p, > - const fixed_wide_int &y) > +struct wide_int_accessors > { > - *p = y.get_precision (); > - *l = y.get_len (); > - return y.get_val (); > -} > + static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *, > + unsigned int *, const wide_int_ro &); > +}; > > -#if addr_max_precision != MAX_BITSIZE_MODE_ANY_INT > -template <> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, unsigned int *p, > - const fixed_wide_int &y) > +inline const HOST_WIDE_INT * > +wide_int_accessors ::to_shwi (HOST_WIDE_INT *, unsigned int *l, > + unsigned int *p, > + const wide_int_ro &y) > { > *p = y.get_precision (); > *l = y.get_len (); > return y.get_val (); > } > -#endif > > template <> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, const wide_int &y) > -{ > - *l = y.len; > - return y.val; > -} > +struct wide_int_accessors > + : public wide_int_accessors {}; > > +template <> > +template > +struct wide_int_accessors > > + : public wide_int_accessors {}; > > /* The tree and const_tree overload templates. 
*/ > template <> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, unsigned int *p, > - const tree &tcst) > +struct wide_int_accessors > { > - tree type = TREE_TYPE (tcst); > - > - *p = TYPE_PRECISION (type); > - *l = TREE_INT_CST_NUNITS (tcst); > - return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0); > -} > + static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *, > + unsigned int *, const_tree); > +}; > > -template <> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi1 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, unsigned int *p, > - const const_tree &tcst) > +inline const HOST_WIDE_INT * > +wide_int_accessors ::to_shwi (HOST_WIDE_INT *, unsigned int *l, > + unsigned int *p, const_tree tcst) > { > tree type = TREE_TYPE (tcst); > > *p = TYPE_PRECISION (type); > *l = TREE_INT_CST_NUNITS (tcst); > - return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0); > + return (const HOST_WIDE_INT *) &TREE_INT_CST_ELT (tcst, 0); > } > > template <> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, const tree &tcst) > -{ > - *l = TREE_INT_CST_NUNITS (tcst); > - return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0); > -} > - > -template <> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, const const_tree &tcst) > -{ > - *l = TREE_INT_CST_NUNITS (tcst); > - return (const HOST_WIDE_INT*)&TREE_INT_CST_ELT (tcst, 0); > -} > +struct wide_int_accessors : public wide_int_accessors {}; > > /* Checking for the functions that require that at least one of the > operands have a nonzero precision. If both of them have a precision, > then if CHECK_EQUAL is true, require that the precision be the same. 
*/ > > inline void > -wide_int_ro::check_precision (unsigned int *p1, unsigned int *p2, > - bool check_equal ATTRIBUTE_UNUSED, > - bool check_zero ATTRIBUTE_UNUSED) > +wi::check_precision (unsigned int *p1, unsigned int *p2, > + bool check_equal ATTRIBUTE_UNUSED, > + bool check_zero ATTRIBUTE_UNUSED) > { > gcc_checking_assert ((!check_zero) || *p1 != 0 || *p2 != 0); > > @@ -4298,9 +4034,11 @@ typedef std::pair /* There should logically be an overload for rtl here, but it cannot > be here because of circular include issues. It is in rtl.h. */ > template <> > -inline const HOST_WIDE_INT* > -wide_int_ro::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, > - unsigned int *l, const rtx_mode_t &rp); > +struct wide_int_accessors > +{ > + static const HOST_WIDE_INT *to_shwi (HOST_WIDE_INT *, unsigned int *, > + unsigned int *, const rtx_mode_t &); > +}; > > /* tree related routines. */ >