From: Jay Foad
To: Richard Guenther
Cc: gcc-patches@gcc.gnu.org
Date: Thu, 16 Jun 2011 12:06:00 -0000
Subject: Re: [PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR

> BIT_FIELD_EXPR <a, b, C1, C2> is equivalent to computing
> a & ~((1 << C1 - 1) << C2) | ((b << C2) & (1 << C1 = 1)),

a & ~(((1 << C1) - 1) << C2) | ((b & ((1 << C1) - 1)) << C2)

?

Jay.

> thus
> inserting b of width C1 at the bitfield position C2 in a, returning
> the new value.  This allows translating
>   BIT_FIELD_REF <a, C1, C2> = b;
> to
>   a = BIT_FIELD_EXPR <a, b, C1, C2>;
> which avoids partial definitions of a (thus, BIT_FIELD_EXPR is
> similar to COMPLEX_EXPR).  BIT_FIELD_EXPR is supposed to work
> on registers only.
>
> Comments welcome, esp. on how to avoid introducing quaternary
> RHS on gimple stmts (or using a GIMPLE_SINGLE_RHS as the patch does).
>
> Folders/combiners are missing to handle some of the easy
> BIT_FIELD_REF / BIT_FIELD_EXPR cases, as well as eventually
> re-writing shift/mask operations to BIT_FIELD_REF/EXPR.
>
> Richard.
>
> 2011-06-16  Richard Guenther
>
>         * expr.c (expand_expr_real_1): Handle BIT_FIELD_EXPR.
>         * fold-const.c (operand_equal_p): Likewise.
>         (build_bit_mask): New function.
>         (fold_quaternary_loc): Likewise.
>         (fold): Call it.
>         (fold_build4_stat_loc): New function.
>         * gimplify.c (gimplify_expr): Handle BIT_FIELD_EXPR.
>         * tree-inline.c (estimate_operator_cost): Likewise.
>         * tree-pretty-print.c (dump_generic_node): Likewise.
>         * tree-ssa-operands.c (get_expr_operands): Likewise.
>         * tree.def (BIT_FIELD_EXPR): New tree code.
>         * tree.h (build_bit_mask): Declare.
>         (fold_quaternary): Define.
>         (fold_quaternary_loc): Declare.
>         (fold_build4): Define.
>         (fold_build4_loc): Likewise.
>         (fold_build4_stat_loc): Declare.
>         * gimple.c (gimple_rhs_class_table): Handle BIT_FIELD_EXPR.
>
> Index: trunk/gcc/expr.c
> ===================================================================
> *** trunk.orig/gcc/expr.c       2011-06-15 13:27:40.000000000 +0200
> --- trunk/gcc/expr.c    2011-06-15 15:08:41.000000000 +0200
> *************** expand_expr_real_1 (tree exp, rtx target
> *** 8680,8685 ****
> --- 8680,8708 ----
>
>        return expand_constructor (exp, target, modifier, false);
>
> +     case BIT_FIELD_EXPR:
> +       {
> +         unsigned bitpos = (unsigned) TREE_INT_CST_LOW (TREE_OPERAND (exp, 3));
> +         unsigned bitsize = (unsigned) TREE_INT_CST_LOW (TREE_OPERAND (exp, 2));
> +         tree bits, mask;
> +         if (BYTES_BIG_ENDIAN)
> +           bitpos = TYPE_PRECISION (type) - bitsize - bitpos;
> +         /* Build a mask to mask/clear the bits in the word.  */
> +         mask = build_bit_mask (type, bitsize, bitpos);
> +         /* Extend the bits to the word type, shift them to the right
> +            place and mask the bits.  */
> +         bits = fold_convert (type, TREE_OPERAND (exp, 1));
> +         bits = fold_build2 (BIT_AND_EXPR, type,
> +                             fold_build2 (LSHIFT_EXPR, type,
> +                                          bits, size_int (bitpos)), mask);
> +         /* Switch to clear mask and do the composition.  */
> +         mask = fold_build1 (BIT_NOT_EXPR, type, mask);
> +         return expand_normal (fold_build2 (BIT_IOR_EXPR, type,
> +                                            fold_build2 (BIT_AND_EXPR, type,
> +                                                         TREE_OPERAND (exp, 0), mask),
> +                                            bits));
> +       }
> +
>      case TARGET_MEM_REF:
>        {
>        addr_space_t as = TYPE_ADDR_SPACE (TREE_TYPE (exp));
> Index: trunk/gcc/fold-const.c
> ===================================================================
> *** trunk.orig/gcc/fold-const.c 2011-06-15 14:51:31.000000000 +0200
> --- trunk/gcc/fold-const.c      2011-06-15 15:33:04.000000000 +0200
> *************** operand_equal_p (const_tree arg0, const_
> *** 2667,2672 ****
> --- 2667,2675 ----
>        case DOT_PROD_EXPR:
>          return OP_SAME (0) && OP_SAME (1) && OP_SAME (2);
>
> +      case BIT_FIELD_EXPR:
> +        return OP_SAME (0) && OP_SAME (1) && OP_SAME (2) && OP_SAME (3);
> +
>        default:
>          return 0;
>        }
> *************** contains_label_p (tree st)
> *** 13230,13235 ****
> --- 13233,13251 ----
>     (walk_tree_without_duplicates (&st, contains_label_1 , NULL) != NULL_TREE);
>  }
>
> + /* Builds and returns a mask of integral type TYPE for masking out
> +    BITSIZE bits at bit position BITPOS in a word of type TYPE.
> +    The mask has the bits set from bit BITPOS to BITPOS + BITSIZE - 1.  */
> +
> + tree
> + build_bit_mask (tree type, unsigned int bitsize, unsigned int bitpos)
> + {
> +   tree mask = double_int_to_tree (type, double_int_mask (bitsize));
> +   mask = const_binop (LSHIFT_EXPR, mask, size_int (bitpos));
> +
> +   return mask;
> + }
> +
>  /* Fold a ternary expression of code CODE and type TYPE with operands
>     OP0, OP1, and OP2.  Return the folded expression if folding is
>     successful.  Otherwise, return NULL_TREE.  */
> *************** fold_ternary_loc (location_t loc, enum t
> *** 13591,13596 ****
> --- 13607,13685 ----
>      } /* switch (code) */
>  }
>
> + /* Fold a quaternary expression of code CODE and type TYPE with operands
> +    OP0, OP1, OP2, and OP3.  Return the folded expression if folding is
> +    successful.  Otherwise, return NULL_TREE.  */
> +
> + tree
> + fold_quaternary_loc (location_t loc, enum tree_code code, tree type,
> +                      tree op0, tree op1, tree op2, tree op3)
> + {
> +   tree arg0 = NULL_TREE, arg1 = NULL_TREE;
> +   enum tree_code_class kind = TREE_CODE_CLASS (code);
> +
> +   gcc_assert (IS_EXPR_CODE_CLASS (kind)
> +               && TREE_CODE_LENGTH (code) == 4);
> +
> +   /* Strip any conversions that don't change the mode.  This is safe
> +      for every expression, except for a comparison expression because
> +      its signedness is derived from its operands.  So, in the latter
> +      case, only strip conversions that don't change the signedness.
> +
> +      Note that this is done as an internal manipulation within the
> +      constant folder, in order to find the simplest representation of
> +      the arguments so that their form can be studied.  In any case,
> +      the appropriate type conversions should be put back in the tree
> +      that will get out of the constant folder.  */
> +   if (op0)
> +     {
> +       arg0 = op0;
> +       STRIP_NOPS (arg0);
> +     }
> +
> +   if (op1)
> +     {
> +       arg1 = op1;
> +       STRIP_NOPS (arg1);
> +     }
> +
> +   switch (code)
> +     {
> +     case BIT_FIELD_EXPR:
> +       /* Constant fold BIT_FIELD_EXPR.  */
> +       if (TREE_CODE (arg0) == INTEGER_CST
> +           || TREE_CODE (arg1) == INTEGER_CST)
> +         {
> +           unsigned bitpos = (unsigned) TREE_INT_CST_LOW (op3);
> +           unsigned bitsize = (unsigned) TREE_INT_CST_LOW (op2);
> +           tree bits, mask;
> +           if (BYTES_BIG_ENDIAN)
> +             bitpos = TYPE_PRECISION (type) - bitsize - bitpos;
> +           /* Build a mask to mask/clear the bits in the word.  */
> +           mask = build_bit_mask (type, bitsize, bitpos);
> +           /* Extend the bits to the word type, shift them to the right
> +              place and mask the bits.  */
> +           bits = fold_convert_loc (loc, type, arg1);
> +           bits = fold_build2_loc (loc, BIT_AND_EXPR, type,
> +                                   fold_build2_loc (loc, LSHIFT_EXPR, type,
> +                                                    bits, size_int (bitpos)),
> +                                   mask);
> +           /* Switch to clear mask and do the composition.  */
> +           mask = fold_build1_loc (loc, BIT_NOT_EXPR, type, mask);
> +           return fold_build2_loc (loc, BIT_IOR_EXPR, type,
> +                                   fold_build2_loc (loc, BIT_AND_EXPR, type,
> +                                                    fold_convert (type, arg0),
> +                                                    mask),
> +                                   bits);
> +         }
> +
> +       return NULL_TREE;
> +
> +     default:
> +       return NULL_TREE;
> +     }
> + }
> +
>  /* Perform constant folding and related simplification of EXPR.
>     The related simplifications include x*1 => x, x*0 => 0, etc.,
>     and application of the associative law.
> *************** fold (tree expr)
> *** 13632,13638 ****
>    if (IS_EXPR_CODE_CLASS (kind))
>      {
>        tree type = TREE_TYPE (t);
> !      tree op0, op1, op2;
>
>        switch (TREE_CODE_LENGTH (code))
>        {
> --- 13721,13727 ----
>    if (IS_EXPR_CODE_CLASS (kind))
>      {
>        tree type = TREE_TYPE (t);
> !      tree op0, op1, op2, op3;
>
>        switch (TREE_CODE_LENGTH (code))
>        {
> *************** fold (tree expr)
> *** 13651,13656 ****
> --- 13740,13752 ----
>          op2 = TREE_OPERAND (t, 2);
>          tem = fold_ternary_loc (loc, code, type, op0, op1, op2);
>          return tem ? tem : expr;
> +       case 4:
> +         op0 = TREE_OPERAND (t, 0);
> +         op1 = TREE_OPERAND (t, 1);
> +         op2 = TREE_OPERAND (t, 2);
> +         op3 = TREE_OPERAND (t, 3);
> +         tem = fold_quaternary_loc (loc, code, type, op0, op1, op2, op3);
> +         return tem ? tem : expr;
>        default:
>          break;
>        }
> *************** fold_build3_stat_loc (location_t loc, en
> *** 14107,14112 ****
> --- 14203,14294 ----
>    return tem;
>  }
>
> + /* Fold a quaternary tree expression with code CODE of type TYPE with
> +    operands OP0, OP1, OP2, and OP3.  Return a folded expression if
> +    successful.  Otherwise, return a tree expression with code CODE of
> +    type TYPE with operands OP0, OP1, OP2, and OP3.  */
> +
> + tree
> + fold_build4_stat_loc (location_t loc, enum tree_code code, tree type,
> +                       tree op0, tree op1, tree op2, tree op3 MEM_STAT_DECL)
> + {
> +   tree tem;
> + #ifdef ENABLE_FOLD_CHECKING
> +   unsigned char checksum_before_op0[16],
> +                 checksum_before_op1[16],
> +                 checksum_before_op2[16],
> +                 checksum_before_op3[16],
> +                 checksum_after_op0[16],
> +                 checksum_after_op1[16],
> +                 checksum_after_op2[16],
> +                 checksum_after_op3[16];
> +   struct md5_ctx ctx;
> +   htab_t ht;
> +
> +   ht = htab_create (32, htab_hash_pointer, htab_eq_pointer, NULL);
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op0, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_before_op0);
> +   htab_empty (ht);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op1, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_before_op1);
> +   htab_empty (ht);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op2, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_before_op2);
> +   htab_empty (ht);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op3, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_before_op3);
> +   htab_empty (ht);
> + #endif
> +
> +   gcc_assert (TREE_CODE_CLASS (code) != tcc_vl_exp);
> +   tem = fold_quaternary_loc (loc, code, type, op0, op1, op2, op3);
> +   if (!tem)
> +     tem = build4_stat_loc (loc, code, type, op0, op1, op2, op3 PASS_MEM_STAT);
> +
> + #ifdef ENABLE_FOLD_CHECKING
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op0, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_after_op0);
> +   htab_empty (ht);
> +
> +   if (memcmp (checksum_before_op0, checksum_after_op0, 16))
> +     fold_check_failed (op0, tem);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op1, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_after_op1);
> +   htab_empty (ht);
> +
> +   if (memcmp (checksum_before_op1, checksum_after_op1, 16))
> +     fold_check_failed (op1, tem);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op2, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_after_op2);
> +   htab_empty (ht);
> +
> +   if (memcmp (checksum_before_op2, checksum_after_op2, 16))
> +     fold_check_failed (op2, tem);
> +
> +   md5_init_ctx (&ctx);
> +   fold_checksum_tree (op3, &ctx, ht);
> +   md5_finish_ctx (&ctx, checksum_after_op3);
> +   htab_delete (ht);
> +
> +   if (memcmp (checksum_before_op3, checksum_after_op3, 16))
> +     fold_check_failed (op3, tem);
> + #endif
> +   return tem;
> + }
> +
> +
>  /* Fold a CALL_EXPR expression of type TYPE with operands FN and NARGS
>     arguments in ARGARRAY, and a null static chain.
>     Return a folded expression if successful.  Otherwise, return a CALL_EXPR
> Index: trunk/gcc/gimplify.c
> ===================================================================
> *** trunk.orig/gcc/gimplify.c   2011-06-10 13:28:11.000000000 +0200
> --- trunk/gcc/gimplify.c        2011-06-15 15:05:55.000000000 +0200
> *************** gimplify_expr (tree *expr_p, gimple_seq
> *** 7239,7244 ****
> --- 7239,7248 ----
>          /* Classified as tcc_expression.  */
>          goto expr_3;
>
> +       case BIT_FIELD_EXPR:
> +         /* Operands 2 and 3 are constants.  */
> +         goto expr_2;
> +
>        case POINTER_PLUS_EXPR:
>           /* Convert ((type *)A)+offset into &A->field_of_type_and_offset.
>            The second is gimple immediate saving a need for extra statement.
> Index: trunk/gcc/tree-inline.c
> ===================================================================
> *** trunk.orig/gcc/tree-inline.c        2011-06-07 14:41:22.000000000 +0200
> --- trunk/gcc/tree-inline.c     2011-06-15 14:56:09.000000000 +0200
> *************** estimate_operator_cost (enum tree_code c
> *** 3357,3362 ****
> --- 3357,3366 ----
>          return weights->div_mod_cost;
>        return 1;
>
> +     /* Bit-field insertion needs several shift and mask operations.  */
> +     case BIT_FIELD_EXPR:
> +       return 3;
> +
>      default:
>        /* We expect a copy assignment with no operator.  */
>        gcc_assert (get_gimple_rhs_class (code) == GIMPLE_SINGLE_RHS);
> Index: trunk/gcc/tree-pretty-print.c
> ===================================================================
> *** trunk.orig/gcc/tree-pretty-print.c  2011-06-07 14:41:22.000000000 +0200
> --- trunk/gcc/tree-pretty-print.c       2011-06-15 15:07:05.000000000 +0200
> *************** dump_generic_node (pretty_printer *buffe
> *** 1217,1222 ****
> --- 1217,1234 ----
>        pp_string (buffer, ">");
>        break;
>
> +     case BIT_FIELD_EXPR:
> +       pp_string (buffer, "BIT_FIELD_EXPR <");
> +       dump_generic_node (buffer, TREE_OPERAND (node, 0), spc, flags, false);
> +       pp_string (buffer, ", ");
> +       dump_generic_node (buffer, TREE_OPERAND (node, 1), spc, flags, false);
> +       pp_string (buffer, ", ");
> +       dump_generic_node (buffer, TREE_OPERAND (node, 2), spc, flags, false);
> +       pp_string (buffer, ", ");
> +       dump_generic_node (buffer, TREE_OPERAND (node, 3), spc, flags, false);
> +       pp_string (buffer, ">");
> +       break;
> +
>      case ARRAY_REF:
>      case ARRAY_RANGE_REF:
>        op0 = TREE_OPERAND (node, 0);
> Index: trunk/gcc/tree-ssa-operands.c
> ===================================================================
> *** trunk.orig/gcc/tree-ssa-operands.c  2011-04-18 11:06:42.000000000 +0200
> --- trunk/gcc/tree-ssa-operands.c       2011-06-15 14:54:12.000000000 +0200
> *************** get_expr_operands (gimple stmt, tree *ex
> *** 974,979 ****
> --- 974,984 ----
>        get_expr_operands (stmt, &TREE_OPERAND (expr, 0), flags);
>        return;
>
> +     case BIT_FIELD_EXPR:
> +       gcc_assert (TREE_CODE (TREE_OPERAND (expr, 2)) == INTEGER_CST
> +                   && TREE_CODE (TREE_OPERAND (expr, 3)) == INTEGER_CST);
> +       /* Fallthru.  */
> +
>      case TRUTH_AND_EXPR:
>      case TRUTH_OR_EXPR:
>      case TRUTH_XOR_EXPR:
> Index: trunk/gcc/tree.def
> ===================================================================
> *** trunk.orig/gcc/tree.def     2011-05-11 11:09:42.000000000 +0200
> --- trunk/gcc/tree.def  2011-06-15 14:53:17.000000000 +0200
> *************** DEFTREECODE (ADDR_EXPR, "addr_expr", tcc
> *** 784,789 ****
> --- 784,801 ----
>      descriptor of type ptr_mode.  */
>  DEFTREECODE (FDESC_EXPR, "fdesc_expr", tcc_expression, 2)
>
> + /* Given a word, a value and a bitfield position and size within
> +    the word, produce the value that results from replacing the
> +    described part of the word with the value.
> +    Operand 0 is a tree for the word of integral type;
> +    Operand 1 is a tree for the value of integral type;
> +    Operand 2 is a tree giving the constant number of bits being
> +    referenced, which is less than or equal to the precision of the value;
> +    Operand 3 is a tree giving the constant position of the first referenced
> +    bit, such that the sum of operands 2 and 3 is less than or equal to the
> +    precision of the word.  */
> + DEFTREECODE (BIT_FIELD_EXPR, "bitfield_expr", tcc_expression, 4)
> +
>  /* Given two real or integer operands of the same type,
>     returns a complex value of the corresponding complex type.  */
>  DEFTREECODE (COMPLEX_EXPR, "complex_expr", tcc_binary, 2)
> Index: trunk/gcc/tree.h
> ===================================================================
> *** trunk.orig/gcc/tree.h       2011-06-15 13:27:06.000000000 +0200
> --- trunk/gcc/tree.h    2011-06-15 15:19:59.000000000 +0200
> *************** extern bool is_typedef_decl (tree x);
> *** 5091,5096 ****
> --- 5091,5097 ----
>  extern bool typedef_variant_p (tree);
>  extern bool auto_var_in_fn_p (const_tree, const_tree);
>  extern tree build_low_bits_mask (tree, unsigned);
> + extern tree build_bit_mask (tree type, unsigned int, unsigned int);
>  extern tree tree_strip_nop_conversions (tree);
>  extern tree tree_strip_sign_nop_conversions (tree);
>  extern tree lhd_gcc_personality (void);
> *************** extern tree fold_binary_loc (location_t,
> *** 5147,5152 ****
> --- 5148,5156 ----
>  #define fold_ternary(CODE,T1,T2,T3,T4)\
>     fold_ternary_loc (UNKNOWN_LOCATION, CODE, T1, T2, T3, T4)
>  extern tree fold_ternary_loc (location_t, enum tree_code, tree, tree, tree, tree);
> + #define fold_quaternary(CODE,T1,T2,T3,T4,T5)\
> +    fold_quaternary_loc (UNKNOWN_LOCATION, CODE, T1, T2, T3, T4, T5)
> + extern tree fold_quaternary_loc (location_t, enum tree_code, tree, tree, tree, tree, tree);
>  #define fold_build1(c,t1,t2)\
>     fold_build1_stat_loc (UNKNOWN_LOCATION, c, t1, t2 MEM_STAT_INFO)
>  #define fold_build1_loc(l,c,t1,t2)\
> *************** extern tree fold_build2_stat_loc (locati
> *** 5165,5170 ****
> --- 5169,5180 ----
>     fold_build3_stat_loc (l, c, t1, t2, t3, t4 MEM_STAT_INFO)
>  extern tree fold_build3_stat_loc (location_t, enum tree_code, tree, tree, tree,
>                                    tree MEM_STAT_DECL);
> + #define fold_build4(c,t1,t2,t3,t4,t5)\
> +    fold_build4_stat_loc (UNKNOWN_LOCATION, c, t1, t2, t3, t4, t5 MEM_STAT_INFO)
> + #define fold_build4_loc(l,c,t1,t2,t3,t4,t5)\
> +    fold_build4_stat_loc (l, c, t1, t2, t3, t4, t5 MEM_STAT_INFO)
> + extern tree fold_build4_stat_loc (location_t, enum tree_code, tree, tree, tree,
> +                                   tree, tree MEM_STAT_DECL);
>  extern tree fold_build1_initializer_loc (location_t, enum tree_code, tree, tree);
>  extern tree fold_build2_initializer_loc (location_t, enum tree_code, tree, tree, tree);
>  extern tree fold_build3_initializer_loc (location_t, enum tree_code, tree, tree, tree, tree);
> Index: trunk/gcc/gimple.c
> ===================================================================
> *** trunk.orig/gcc/gimple.c     2011-06-15 15:33:04.000000000 +0200
> --- trunk/gcc/gimple.c  2011-06-15 15:33:54.000000000 +0200
> *************** get_gimple_rhs_num_ops (enum tree_code c
> *** 2599,2605 ****
>        || (SYM) == ADDR_EXPR                                             \
>        || (SYM) == WITH_SIZE_EXPR                                        \
>        || (SYM) == SSA_NAME                                              \
> !      || (SYM) == VEC_COND_EXPR) ? GIMPLE_SINGLE_RHS                    \
>     : GIMPLE_INVALID_RHS),
>  #define END_OF_BASE_TREE_CODES (unsigned char) GIMPLE_INVALID_RHS,
>
> --- 2599,2606 ----
>        || (SYM) == ADDR_EXPR                                             \
>        || (SYM) == WITH_SIZE_EXPR                                        \
>        || (SYM) == SSA_NAME                                              \
> !      || (SYM) == VEC_COND_EXPR                                         \
> !      || (SYM) == BIT_FIELD_EXPR) ? GIMPLE_SINGLE_RHS                   \
>     : GIMPLE_INVALID_RHS),
>  #define END_OF_BASE_TREE_CODES (unsigned char) GIMPLE_INVALID_RHS,
>