public inbox for gcc-patches@gcc.gnu.org
From: "Richard Guenther" <richard.guenther@gmail.com>
To: "Jan Hubicka" <hubicka@ucw.cz>
Cc: gcc-patches@gcc.gnu.org
Subject: Re: Patch ping...
Date: Sat, 30 Aug 2008 19:06:00 -0000
Message-ID: <84fc9c000808290552w34f7548ft5b89b659aa5eb6ce@mail.gmail.com>
In-Reply-To: <20080828203249.GH15545@atrey.karlin.mff.cuni.cz>

On Thu, Aug 28, 2008 at 10:32 PM, Jan Hubicka <hubicka@ucw.cz> wrote:
>> On Sat, Apr 5, 2008 at 6:26 PM, Jan Hubicka <hubicka@ucw.cz> wrote:
>> > Hi,
>> >  I would like to ping the BRANCH_COST patch
>> >  http://gcc.gnu.org/ml/gcc/2008-03/msg00137.html
>> >
>> >  I hope to proceed with updating GCC to optimize cold blocks in the same
>> >  way as -Os, and explicitly marked hot functions in -Os code for speed.
>> >  For this I need to populate the RTL cost interfaces with the profile
>> >  info and teach expansion about it.
>> >  This has been taking quite some years now, and I realize it might not be
>> >  clear what I am precisely shooting for, so I will also add a wiki page.
>>
>> I think the patch makes sense (BRANCH_COST is special anyway compared to
>> other insn costs), but I'd like to see the bigger picture as well here.  In
>> particular, for BRANCH_COST (hot, predictable), why isn't that simply
>> BRANCH_COST (optimize_size_p, predictable), matching what I would
>> expect for the other cost interface (insn_cost (optimize_size_p, rtx))?
>
> Hi,
> with the optimize_*_for_speed_p predicates, this patch now becomes
> cleaner.  I would also like to update the other costs in a similar way so
> we can avoid the current practice of switching the optimize_size global
> variable.
>
> Bootstrapped/regtested i686-linux, OK?

It looks ok, but I think that PARAM_PREDICTABLE_BRANCH_OUTCOME
should be a target macro and not a param.

Ok with that change, but please wait 24h to let others comment.

Thanks,
Richard.

>        * optabs.c (expand_abs_nojump): Update BRANCH_COST call.
>        * fold-const.c (LOGICAL_OP_NON_SHORT_CIRCUIT, fold_truthop): Likewise.
>        * dojump.c (do_jump): Likewise.
>        * ifcvt.c (MAX_CONDITIONAL_EXECUTE): Likewise.
>        (struct noce_if_info): Add branch_cost.
>        (noce_try_store_flag_constants, noce_try_addcc, noce_try_store_flag_mask,
>        noce_try_cmove_arith, noce_find_if_block, find_if_case_1,
>        find_if_case_2): Use computed branch cost.
>        * expr.h (BRANCH_COST): Update default.
>        * predict.c (predictable_edge_p): New function.
>        * expmed.c (expand_smod_pow2, expand_sdiv_pow2, emit_store_flag):
>        Update BRANCH_COST call.
>        * basic-block.h (predictable_edge_p): Declare.
>        * config/alpha/alpha.h (BRANCH_COST): Update.
>        * config/frv/frv.h (BRANCH_COST): Update.
>        * config/s390/s390.h (BRANCH_COST): Update.
>        * config/spu/spu.h (BRANCH_COST): Update.
>        * config/sparc/sparc.h (BRANCH_COST): Update.
>        * config/m32r/m32r.h (BRANCH_COST): Update.
>        * config/i386/i386.h (BRANCH_COST): Update.
>        * config/i386/i386.c (ix86_expand_int_movcc): Update use of BRANCH_COST.
>        * config/sh/sh.h (BRANCH_COST): Update.
>        * config/pdp11/pdp11.h (BRANCH_COST): Update.
>        * config/avr/avr.h (BRANCH_COST): Update.
>        * config/crx/crx.h (BRANCH_COST): Update.
>        * config/xtensa/xtensa.h (BRANCH_COST): Update.
>        * config/stormy16/stormy16.h (BRANCH_COST): Update.
>        * config/m68hc11/m68hc11.h (BRANCH_COST): Update.
>        * config/iq2000/iq2000.h (BRANCH_COST): Update.
>        * config/ia64/ia64.h (BRANCH_COST): Update.
>        * config/rs6000/rs6000.h (BRANCH_COST): Update.
>        * config/arc/arc.h (BRANCH_COST): Update.
>        * config/score/score.h (BRANCH_COST): Update.
>        * config/arm/arm.h (BRANCH_COST): Update.
>        * config/pa/pa.h (BRANCH_COST): Update.
>        * config/mips/mips.h (BRANCH_COST): Update.
>        * config/vax/vax.h (BRANCH_COST): Update.
>        * config/h8300/h8300.h (BRANCH_COST): Update.
>        * params.def (PARAM_PREDICTABLE_BRANCH_OUTCOME): New.
>        * doc/invoke.texi (predictable-branch-cost-outcome): Document.
>        * doc/tm.texi (BRANCH_COST): Update.
> Index: doc/tm.texi
> ===================================================================
> *** doc/tm.texi (revision 139737)
> --- doc/tm.texi (working copy)
> *************** value to the result of that function.  T
> *** 5874,5882 ****
>  are the same as to this macro.
>  @end defmac
>
> ! @defmac BRANCH_COST
> ! A C expression for the cost of a branch instruction.  A value of 1 is
> ! the default; other values are interpreted relative to that.
>  @end defmac
>
>  Here are additional macros which do not specify precise relative costs,
> --- 5874,5887 ----
>  are the same as to this macro.
>  @end defmac
>
> ! @defmac BRANCH_COST (@var{speed_p}, @var{predictable_p})
> ! A C expression for the cost of a branch instruction.  A value of 1 is the
> ! default; other values are interpreted relative to that.  The parameter
> ! @var{speed_p} is true when the branch in question should be optimized for
> ! speed.  When it is false, @code{BRANCH_COST} should return a value optimal
> ! for code size rather than for performance.  @var{predictable_p} is true for
> ! well-predictable branches.  On many architectures @code{BRANCH_COST} can be
> ! reduced in that case.
>  @end defmac
>
>  Here are additional macros which do not specify precise relative costs,
> Index: doc/invoke.texi
> ===================================================================
> *** doc/invoke.texi     (revision 139737)
> --- doc/invoke.texi     (working copy)
> *************** to the hottest structure frequency in th
> *** 6905,6910 ****
> --- 6905,6914 ----
>  parameter, then structure reorganization is not applied to this structure.
>  The default is 10.
>
> + @item predictable-branch-cost-outcome
> + When a branch is predicted to be taken with probability lower than this
> + threshold (in percent), it is considered well predictable. The default is 10.
> +
>  @item max-crossjump-edges
>  The maximum number of incoming edges to consider for crossjumping.
>  The algorithm used by @option{-fcrossjumping} is @math{O(N^2)} in
> Index: optabs.c
> ===================================================================
> *** optabs.c    (revision 139737)
> --- optabs.c    (working copy)
> *************** expand_abs_nojump (enum machine_mode mod
> *** 3443,3449 ****
>       value of X as (((signed) x >> (W-1)) ^ x) - ((signed) x >> (W-1)),
>       where W is the width of MODE.  */
>
> !   if (GET_MODE_CLASS (mode) == MODE_INT && BRANCH_COST >= 2)
>      {
>        rtx extended = expand_shift (RSHIFT_EXPR, mode, op0,
>                                   size_int (GET_MODE_BITSIZE (mode) - 1),
> --- 3443,3451 ----
>       value of X as (((signed) x >> (W-1)) ^ x) - ((signed) x >> (W-1)),
>       where W is the width of MODE.  */
>
> !   if (GET_MODE_CLASS (mode) == MODE_INT
> !       && BRANCH_COST (optimize_insn_for_speed_p (),
> !                     false) >= 2)
>      {
>        rtx extended = expand_shift (RSHIFT_EXPR, mode, op0,
>                                   size_int (GET_MODE_BITSIZE (mode) - 1),
> Index: fold-const.c
> ===================================================================
> *** fold-const.c        (revision 139737)
> --- fold-const.c        (working copy)
> *************** fold_cond_expr_with_comparison (tree typ
> *** 5109,5115 ****
>
>
>  #ifndef LOGICAL_OP_NON_SHORT_CIRCUIT
> ! #define LOGICAL_OP_NON_SHORT_CIRCUIT (BRANCH_COST >= 2)
>  #endif
>
>  /* EXP is some logical combination of boolean tests.  See if we can
> --- 5109,5117 ----
>
>
>  #ifndef LOGICAL_OP_NON_SHORT_CIRCUIT
> ! #define LOGICAL_OP_NON_SHORT_CIRCUIT \
> !   (BRANCH_COST (!cfun || optimize_function_for_speed_p (cfun), \
> !               false) >= 2)
>  #endif
>
>  /* EXP is some logical combination of boolean tests.  See if we can
> *************** fold_truthop (enum tree_code code, tree
> *** 5357,5363 ****
>       that can be merged.  Avoid doing this if the RHS is a floating-point
>       comparison since those can trap.  */
>
> !   if (BRANCH_COST >= 2
>        && ! FLOAT_TYPE_P (TREE_TYPE (rl_arg))
>        && simple_operand_p (rl_arg)
>        && simple_operand_p (rr_arg))
> --- 5359,5366 ----
>       that can be merged.  Avoid doing this if the RHS is a floating-point
>       comparison since those can trap.  */
>
> !   if (BRANCH_COST (!cfun || optimize_function_for_speed_p (cfun),
> !                  false) >= 2
>        && ! FLOAT_TYPE_P (TREE_TYPE (rl_arg))
>        && simple_operand_p (rl_arg)
>        && simple_operand_p (rr_arg))
> Index: dojump.c
> ===================================================================
> *** dojump.c    (revision 139737)
> --- dojump.c    (working copy)
> *************** do_jump (tree exp, rtx if_false_label, r
> *** 510,516 ****
>        /* High branch cost, expand as the bitwise AND of the conditions.
>         Do the same if the RHS has side effects, because we're effectively
>         turning a TRUTH_AND_EXPR into a TRUTH_ANDIF_EXPR.  */
> !       if (BRANCH_COST >= 4 || TREE_SIDE_EFFECTS (TREE_OPERAND (exp, 1)))
>        goto normal;
>
>      case TRUTH_ANDIF_EXPR:
> --- 510,518 ----
>        /* High branch cost, expand as the bitwise AND of the conditions.
>         Do the same if the RHS has side effects, because we're effectively
>         turning a TRUTH_AND_EXPR into a TRUTH_ANDIF_EXPR.  */
> !       if (BRANCH_COST (optimize_insn_for_speed_p (),
> !                      false) >= 4
> !         || TREE_SIDE_EFFECTS (TREE_OPERAND (exp, 1)))
>        goto normal;
>
>      case TRUTH_ANDIF_EXPR:
> *************** do_jump (tree exp, rtx if_false_label, r
> *** 531,537 ****
>        /* High branch cost, expand as the bitwise OR of the conditions.
>         Do the same if the RHS has side effects, because we're effectively
>         turning a TRUTH_OR_EXPR into a TRUTH_ORIF_EXPR.  */
> !       if (BRANCH_COST >= 4 || TREE_SIDE_EFFECTS (TREE_OPERAND (exp, 1)))
>        goto normal;
>
>      case TRUTH_ORIF_EXPR:
> --- 533,540 ----
>        /* High branch cost, expand as the bitwise OR of the conditions.
>         Do the same if the RHS has side effects, because we're effectively
>         turning a TRUTH_OR_EXPR into a TRUTH_ORIF_EXPR.  */
> !       if (BRANCH_COST (optimize_insn_for_speed_p (), false) >= 4
> !         || TREE_SIDE_EFFECTS (TREE_OPERAND (exp, 1)))
>        goto normal;
>
>      case TRUTH_ORIF_EXPR:
> Index: ifcvt.c
> ===================================================================
> *** ifcvt.c     (revision 139737)
> --- ifcvt.c     (working copy)
> ***************
> *** 67,73 ****
>  #endif
>
>  #ifndef MAX_CONDITIONAL_EXECUTE
> ! #define MAX_CONDITIONAL_EXECUTE   (BRANCH_COST + 1)
>  #endif
>
>  #define IFCVT_MULTIPLE_DUMPS 1
> --- 67,75 ----
>  #endif
>
>  #ifndef MAX_CONDITIONAL_EXECUTE
> ! #define MAX_CONDITIONAL_EXECUTE \
> !   (BRANCH_COST (optimize_function_for_speed_p (cfun), false) \
> !    + 1)
>  #endif
>
>  #define IFCVT_MULTIPLE_DUMPS 1
> *************** struct noce_if_info
> *** 626,631 ****
> --- 628,636 ----
>       from TEST_BB.  For the noce transformations, we allow the symmetric
>       form as well.  */
>    bool then_else_reversed;
> +
> +   /* Estimated cost of the particular branch instruction.  */
> +   int branch_cost;
>  };
>
>  static rtx noce_emit_store_flag (struct noce_if_info *, rtx, int, int);
> *************** noce_try_store_flag_constants (struct no
> *** 963,982 ****
>        normalize = 0;
>        else if (ifalse == 0 && exact_log2 (itrue) >= 0
>               && (STORE_FLAG_VALUE == 1
> !                  || BRANCH_COST >= 2))
>        normalize = 1;
>        else if (itrue == 0 && exact_log2 (ifalse) >= 0 && can_reverse
> !              && (STORE_FLAG_VALUE == 1 || BRANCH_COST >= 2))
>        normalize = 1, reversep = 1;
>        else if (itrue == -1
>               && (STORE_FLAG_VALUE == -1
> !                  || BRANCH_COST >= 2))
>        normalize = -1;
>        else if (ifalse == -1 && can_reverse
> !              && (STORE_FLAG_VALUE == -1 || BRANCH_COST >= 2))
>        normalize = -1, reversep = 1;
> !       else if ((BRANCH_COST >= 2 && STORE_FLAG_VALUE == -1)
> !              || BRANCH_COST >= 3)
>        normalize = -1;
>        else
>        return FALSE;
> --- 968,987 ----
>        normalize = 0;
>        else if (ifalse == 0 && exact_log2 (itrue) >= 0
>               && (STORE_FLAG_VALUE == 1
> !                  || if_info->branch_cost >= 2))
>        normalize = 1;
>        else if (itrue == 0 && exact_log2 (ifalse) >= 0 && can_reverse
> !              && (STORE_FLAG_VALUE == 1 || if_info->branch_cost >= 2))
>        normalize = 1, reversep = 1;
>        else if (itrue == -1
>               && (STORE_FLAG_VALUE == -1
> !                  || if_info->branch_cost >= 2))
>        normalize = -1;
>        else if (ifalse == -1 && can_reverse
> !              && (STORE_FLAG_VALUE == -1 || if_info->branch_cost >= 2))
>        normalize = -1, reversep = 1;
> !       else if ((if_info->branch_cost >= 2 && STORE_FLAG_VALUE == -1)
> !              || if_info->branch_cost >= 3)
>        normalize = -1;
>        else
>        return FALSE;
> *************** noce_try_addcc (struct noce_if_info *if_
> *** 1107,1113 ****
>
>        /* If that fails, construct conditional increment or decrement using
>         setcc.  */
> !       if (BRANCH_COST >= 2
>          && (XEXP (if_info->a, 1) == const1_rtx
>              || XEXP (if_info->a, 1) == constm1_rtx))
>          {
> --- 1112,1118 ----
>
>        /* If that fails, construct conditional increment or decrement using
>         setcc.  */
> !       if (if_info->branch_cost >= 2
>          && (XEXP (if_info->a, 1) == const1_rtx
>              || XEXP (if_info->a, 1) == constm1_rtx))
>          {
> *************** noce_try_store_flag_mask (struct noce_if
> *** 1158,1164 ****
>    int reversep;
>
>    reversep = 0;
> !   if ((BRANCH_COST >= 2
>         || STORE_FLAG_VALUE == -1)
>        && ((if_info->a == const0_rtx
>           && rtx_equal_p (if_info->b, if_info->x))
> --- 1163,1169 ----
>    int reversep;
>
>    reversep = 0;
> !   if ((if_info->branch_cost >= 2
>         || STORE_FLAG_VALUE == -1)
>        && ((if_info->a == const0_rtx
>           && rtx_equal_p (if_info->b, if_info->x))
> *************** noce_try_cmove_arith (struct noce_if_inf
> *** 1317,1323 ****
>    /* ??? FIXME: Magic number 5.  */
>    if (cse_not_expected
>        && MEM_P (a) && MEM_P (b)
> !       && BRANCH_COST >= 5)
>      {
>        a = XEXP (a, 0);
>        b = XEXP (b, 0);
> --- 1322,1328 ----
>    /* ??? FIXME: Magic number 5.  */
>    if (cse_not_expected
>        && MEM_P (a) && MEM_P (b)
> !       && if_info->branch_cost >= 5)
>      {
>        a = XEXP (a, 0);
>        b = XEXP (b, 0);
> *************** noce_try_cmove_arith (struct noce_if_inf
> *** 1347,1353 ****
>    if (insn_a)
>      {
>        insn_cost = insn_rtx_cost (PATTERN (insn_a));
> !       if (insn_cost == 0 || insn_cost > COSTS_N_INSNS (BRANCH_COST))
>        return FALSE;
>      }
>    else
> --- 1352,1358 ----
>    if (insn_a)
>      {
>        insn_cost = insn_rtx_cost (PATTERN (insn_a));
> !       if (insn_cost == 0 || insn_cost > COSTS_N_INSNS (if_info->branch_cost))
>        return FALSE;
>      }
>    else
> *************** noce_try_cmove_arith (struct noce_if_inf
> *** 1356,1362 ****
>    if (insn_b)
>      {
>        insn_cost += insn_rtx_cost (PATTERN (insn_b));
> !       if (insn_cost == 0 || insn_cost > COSTS_N_INSNS (BRANCH_COST))
>          return FALSE;
>      }
>
> --- 1361,1367 ----
>    if (insn_b)
>      {
>        insn_cost += insn_rtx_cost (PATTERN (insn_b));
> !       if (insn_cost == 0 || insn_cost > COSTS_N_INSNS (if_info->branch_cost))
>          return FALSE;
>      }
>
> *************** noce_find_if_block (basic_block test_bb,
> *** 2831,2836 ****
> --- 2836,2843 ----
>    if_info.cond_earliest = cond_earliest;
>    if_info.jump = jump;
>    if_info.then_else_reversed = then_else_reversed;
> +   if_info.branch_cost = BRANCH_COST (optimize_bb_for_speed_p (test_bb),
> +                                    predictable_edge_p (then_edge));
>
>    /* Do the real work.  */
>
> *************** find_if_case_1 (basic_block test_bb, edg
> *** 3597,3603 ****
>             test_bb->index, then_bb->index);
>
>    /* THEN is small.  */
> !   if (! cheap_bb_rtx_cost_p (then_bb, COSTS_N_INSNS (BRANCH_COST)))
>      return FALSE;
>
>    /* Registers set are dead, or are predicable.  */
> --- 3604,3612 ----
>             test_bb->index, then_bb->index);
>
>    /* THEN is small.  */
> !   if (! cheap_bb_rtx_cost_p (then_bb,
> !       COSTS_N_INSNS (BRANCH_COST (optimize_bb_for_speed_p (then_edge->src),
> !                                   predictable_edge_p (then_edge)))))
>      return FALSE;
>
>    /* Registers set are dead, or are predicable.  */
> *************** find_if_case_2 (basic_block test_bb, edg
> *** 3711,3717 ****
>             test_bb->index, else_bb->index);
>
>    /* ELSE is small.  */
> !   if (! cheap_bb_rtx_cost_p (else_bb, COSTS_N_INSNS (BRANCH_COST)))
>      return FALSE;
>
>    /* Registers set are dead, or are predicable.  */
> --- 3720,3728 ----
>             test_bb->index, else_bb->index);
>
>    /* ELSE is small.  */
> !   if (! cheap_bb_rtx_cost_p (else_bb,
> !       COSTS_N_INSNS (BRANCH_COST (optimize_bb_for_speed_p (else_edge->src),
> !                                   predictable_edge_p (else_edge)))))
>      return FALSE;
>
>    /* Registers set are dead, or are predicable.  */
> Index: expr.h
> ===================================================================
> *** expr.h      (revision 139737)
> --- expr.h      (working copy)
> *************** along with GCC; see the file COPYING3.
> *** 36,42 ****
>
>  /* The default branch cost is 1.  */
>  #ifndef BRANCH_COST
> ! #define BRANCH_COST 1
>  #endif
>
>  /* This is the 4th arg to `expand_expr'.
> --- 36,42 ----
>
>  /* The default branch cost is 1.  */
>  #ifndef BRANCH_COST
> ! #define BRANCH_COST(speed_p, predictable_p) 1
>  #endif
>
>  /* This is the 4th arg to `expand_expr'.
> Index: predict.c
> ===================================================================
> *** predict.c   (revision 139737)
> --- predict.c   (working copy)
> *************** optimize_insn_for_speed_p (void)
> *** 245,250 ****
> --- 245,267 ----
>    return !optimize_insn_for_size_p ();
>  }
>
> + /* Return true when edge E is likely to be well predictable by the
> +    branch predictor.  */
> +
> + bool
> + predictable_edge_p (edge e)
> + {
> +   if (profile_status == PROFILE_ABSENT)
> +     return false;
> +   if ((e->probability
> +        <= PARAM_VALUE (PARAM_PREDICTABLE_BRANCH_OUTCOME) * REG_BR_PROB_BASE / 100)
> +       || (REG_BR_PROB_BASE - e->probability
> +           <= PARAM_VALUE (PARAM_PREDICTABLE_BRANCH_OUTCOME) * REG_BR_PROB_BASE / 100))
> +     return true;
> +   return false;
> + }
> +
> +
>  /* Set RTL expansion for BB profile.  */
>
>  void
> Index: expmed.c
> ===================================================================
> *** expmed.c    (revision 139737)
> --- expmed.c    (working copy)
> *************** expand_smod_pow2 (enum machine_mode mode
> *** 3492,3498 ****
>    result = gen_reg_rtx (mode);
>
>    /* Avoid conditional branches when they're expensive.  */
> !   if (BRANCH_COST >= 2
>        && optimize_insn_for_speed_p ())
>      {
>        rtx signmask = emit_store_flag (result, LT, op0, const0_rtx,
> --- 3492,3498 ----
>    result = gen_reg_rtx (mode);
>
>    /* Avoid conditional branches when they're expensive.  */
> !   if (BRANCH_COST (optimize_insn_for_speed_p (), false) >= 2
>        && optimize_insn_for_speed_p ())
>      {
>        rtx signmask = emit_store_flag (result, LT, op0, const0_rtx,
> *************** expand_sdiv_pow2 (enum machine_mode mode
> *** 3592,3598 ****
>    logd = floor_log2 (d);
>    shift = build_int_cst (NULL_TREE, logd);
>
> !   if (d == 2 && BRANCH_COST >= 1)
>      {
>        temp = gen_reg_rtx (mode);
>        temp = emit_store_flag (temp, LT, op0, const0_rtx, mode, 0, 1);
> --- 3592,3600 ----
>    logd = floor_log2 (d);
>    shift = build_int_cst (NULL_TREE, logd);
>
> !   if (d == 2
> !       && BRANCH_COST (optimize_insn_for_speed_p (),
> !                     false) >= 1)
>      {
>        temp = gen_reg_rtx (mode);
>        temp = emit_store_flag (temp, LT, op0, const0_rtx, mode, 0, 1);
> *************** expand_sdiv_pow2 (enum machine_mode mode
> *** 3602,3608 ****
>      }
>
>  #ifdef HAVE_conditional_move
> !   if (BRANCH_COST >= 2)
>      {
>        rtx temp2;
>
> --- 3604,3611 ----
>      }
>
>  #ifdef HAVE_conditional_move
> !   if (BRANCH_COST (optimize_insn_for_speed_p (), false)
> !       >= 2)
>      {
>        rtx temp2;
>
> *************** expand_sdiv_pow2 (enum machine_mode mode
> *** 3631,3637 ****
>      }
>  #endif
>
> !   if (BRANCH_COST >= 2)
>      {
>        int ushift = GET_MODE_BITSIZE (mode) - logd;
>
> --- 3634,3641 ----
>      }
>  #endif
>
> !   if (BRANCH_COST (optimize_insn_for_speed_p (),
> !                  false) >= 2)
>      {
>        int ushift = GET_MODE_BITSIZE (mode) - logd;
>
> *************** emit_store_flag (rtx target, enum rtx_co
> *** 5345,5351 ****
>       comparison with zero.  Don't do any of these cases if branches are
>       very cheap.  */
>
> !   if (BRANCH_COST > 0
>        && GET_MODE_CLASS (mode) == MODE_INT && (code == EQ || code == NE)
>        && op1 != const0_rtx)
>      {
> --- 5349,5356 ----
>       comparison with zero.  Don't do any of these cases if branches are
>       very cheap.  */
>
> !   if (BRANCH_COST (optimize_insn_for_speed_p (),
> !                  false) > 0
>        && GET_MODE_CLASS (mode) == MODE_INT && (code == EQ || code == NE)
>        && op1 != const0_rtx)
>      {
> *************** emit_store_flag (rtx target, enum rtx_co
> *** 5368,5377 ****
>       do LE and GT if branches are expensive since they are expensive on
>       2-operand machines.  */
>
> !   if (BRANCH_COST == 0
>        || GET_MODE_CLASS (mode) != MODE_INT || op1 != const0_rtx
>        || (code != EQ && code != NE
> !         && (BRANCH_COST <= 1 || (code != LE && code != GT))))
>      return 0;
>
>    /* See what we need to return.  We can only return a 1, -1, or the
> --- 5373,5384 ----
>       do LE and GT if branches are expensive since they are expensive on
>       2-operand machines.  */
>
> !   if (BRANCH_COST (optimize_insn_for_speed_p (),
> !                  false) == 0
>        || GET_MODE_CLASS (mode) != MODE_INT || op1 != const0_rtx
>        || (code != EQ && code != NE
> !         && (BRANCH_COST (optimize_insn_for_speed_p (),
> !                          false) <= 1 || (code != LE && code != GT))))
>      return 0;
>
>    /* See what we need to return.  We can only return a 1, -1, or the
> *************** emit_store_flag (rtx target, enum rtx_co
> *** 5467,5473 ****
>         that "or", which is an extra insn, so we only handle EQ if branches
>         are expensive.  */
>
> !       if (tem == 0 && (code == NE || BRANCH_COST > 1))
>        {
>          if (rtx_equal_p (subtarget, op0))
>            subtarget = 0;
> --- 5474,5483 ----
>         that "or", which is an extra insn, so we only handle EQ if branches
>         are expensive.  */
>
> !       if (tem == 0
> !         && (code == NE
> !             || BRANCH_COST (optimize_insn_for_speed_p (),
> !                             false) > 1))
>        {
>          if (rtx_equal_p (subtarget, op0))
>            subtarget = 0;
> Index: basic-block.h
> ===================================================================
> *** basic-block.h       (revision 139737)
> --- basic-block.h       (working copy)
> *************** extern void guess_outgoing_edge_probabil
> *** 848,853 ****
> --- 848,854 ----
>  extern void remove_predictions_associated_with_edge (edge);
>  extern bool edge_probability_reliable_p (const_edge);
>  extern bool br_prob_note_reliable_p (const_rtx);
> + extern bool predictable_edge_p (edge);
>
>  /* In cfg.c  */
>  extern void dump_regset (regset, FILE *);
> Index: config/alpha/alpha.h
> ===================================================================
> *** config/alpha/alpha.h        (revision 139737)
> --- config/alpha/alpha.h        (working copy)
> *************** extern int alpha_memory_latency;
> *** 640,646 ****
>  #define MEMORY_MOVE_COST(MODE,CLASS,IN)  (2*alpha_memory_latency)
>
>  /* Provide the cost of a branch.  Exact meaning under development.  */
> ! #define BRANCH_COST 5
>
>  /* Stack layout; function entry, exit and calling.  */
>
> --- 640,646 ----
>  #define MEMORY_MOVE_COST(MODE,CLASS,IN)  (2*alpha_memory_latency)
>
>  /* Provide the cost of a branch.  Exact meaning under development.  */
> ! #define BRANCH_COST(speed_p, predictable_p) 5
>
>  /* Stack layout; function entry, exit and calling.  */
>
> Index: config/frv/frv.h
> ===================================================================
> *** config/frv/frv.h    (revision 139737)
> --- config/frv/frv.h    (working copy)
> *************** do {                                                    \
> *** 2193,2199 ****
>
>  /* A C expression for the cost of a branch instruction.  A value of 1 is the
>     default; other values are interpreted relative to that.  */
> ! #define BRANCH_COST frv_branch_cost_int
>
>  /* Define this macro as a C expression which is nonzero if accessing less than
>     a word of memory (i.e. a `char' or a `short') is no faster than accessing a
> --- 2193,2199 ----
>
>  /* A C expression for the cost of a branch instruction.  A value of 1 is the
>     default; other values are interpreted relative to that.  */
> ! #define BRANCH_COST(speed_p, predictable_p) frv_branch_cost_int
>
>  /* Define this macro as a C expression which is nonzero if accessing less than
>     a word of memory (i.e. a `char' or a `short') is no faster than accessing a
> Index: config/s390/s390.h
> ===================================================================
> *** config/s390/s390.h  (revision 139737)
> --- config/s390/s390.h  (working copy)
> *************** extern struct rtx_def *s390_compare_op0,
> *** 828,834 ****
>
>  /* A C expression for the cost of a branch instruction.  A value of 1
>     is the default; other values are interpreted relative to that.  */
> ! #define BRANCH_COST 1
>
>  /* Nonzero if access to memory by bytes is slow and undesirable.  */
>  #define SLOW_BYTE_ACCESS 1
> --- 828,834 ----
>
>  /* A C expression for the cost of a branch instruction.  A value of 1
>     is the default; other values are interpreted relative to that.  */
> ! #define BRANCH_COST(speed_p, predictable_p) 1
>
>  /* Nonzero if access to memory by bytes is slow and undesirable.  */
>  #define SLOW_BYTE_ACCESS 1
> Index: config/spu/spu.h
> ===================================================================
> *** config/spu/spu.h    (revision 139737)
> --- config/spu/spu.h    (working copy)
> *************** targetm.resolve_overloaded_builtin = spu
> *** 434,440 ****
>
>  /* Costs */
>
> ! #define BRANCH_COST spu_branch_cost
>
>  #define SLOW_BYTE_ACCESS 0
>
> --- 434,440 ----
>
>  /* Costs */
>
> ! #define BRANCH_COST(speed_p, predictable_p) spu_branch_cost
>
>  #define SLOW_BYTE_ACCESS 0
>
> Index: config/sparc/sparc.h
> ===================================================================
> *** config/sparc/sparc.h        (revision 139737)
> --- config/sparc/sparc.h        (working copy)
> *************** do {
> *** 2196,2202 ****
>     On Niagara-2, a not-taken branch costs 1 cycle whereas a taken
>     branch costs 6 cycles.  */
>
> ! #define BRANCH_COST \
>        ((sparc_cpu == PROCESSOR_V9 \
>          || sparc_cpu == PROCESSOR_ULTRASPARC) \
>         ? 7 \
> --- 2196,2202 ----
>     On Niagara-2, a not-taken branch costs 1 cycle whereas a taken
>     branch costs 6 cycles.  */
>
> ! #define BRANCH_COST(speed_p, predictable_p) \
>        ((sparc_cpu == PROCESSOR_V9 \
>          || sparc_cpu == PROCESSOR_ULTRASPARC) \
>         ? 7 \
> Index: config/m32r/m32r.h
> ===================================================================
> *** config/m32r/m32r.h  (revision 139737)
> --- config/m32r/m32r.h  (working copy)
> *************** L2:     .word STATIC
> *** 1224,1230 ****
>  /* A value of 2 here causes GCC to avoid using branches in comparisons like
>     while (a < N && a).  Branches aren't that expensive on the M32R so
>     we define this as 1.  Defining it as 2 had a heavy hit in fp-bit.c.  */
> ! #define BRANCH_COST ((TARGET_BRANCH_COST) ? 2 : 1)
>
>  /* Nonzero if access to memory by bytes is slow and undesirable.
>     For RISC chips, it means that access to memory by bytes is no
> --- 1224,1230 ----
>  /* A value of 2 here causes GCC to avoid using branches in comparisons like
>     while (a < N && a).  Branches aren't that expensive on the M32R so
>     we define this as 1.  Defining it as 2 had a heavy hit in fp-bit.c.  */
> ! #define BRANCH_COST(speed_p, predictable_p) ((TARGET_BRANCH_COST) ? 2 : 1)
>
>  /* Nonzero if access to memory by bytes is slow and undesirable.
>     For RISC chips, it means that access to memory by bytes is no
> Index: config/i386/i386.h
> ===================================================================
> *** config/i386/i386.h  (revision 139737)
> --- config/i386/i386.h  (working copy)
> *************** do {                                                    \
> *** 1975,1981 ****
>  /* A C expression for the cost of a branch instruction.  A value of 1
>     is the default; other values are interpreted relative to that.  */
>
> ! #define BRANCH_COST ix86_branch_cost
>
>  /* Define this macro as a C expression which is nonzero if accessing
>     less than a word of memory (i.e. a `char' or a `short') is no
> --- 1975,1982 ----
>  /* A C expression for the cost of a branch instruction.  A value of 1
>     is the default; other values are interpreted relative to that.  */
>
> ! #define BRANCH_COST(speed_p, predictable_p) \
> !   (!(speed_p) ? 2 : (predictable_p) ? 0 : ix86_branch_cost)
>
>  /* Define this macro as a C expression which is nonzero if accessing
>     less than a word of memory (i.e. a `char' or a `short') is no
> Index: config/i386/i386.c
> ===================================================================
> *** config/i386/i386.c  (revision 139737)
> --- config/i386/i386.c  (working copy)
> *************** ix86_expand_int_movcc (rtx operands[])
> *** 14636,14642 ****
>         */
>
>        if ((!TARGET_CMOVE || (mode == QImode && TARGET_PARTIAL_REG_STALL))
> !         && BRANCH_COST >= 2)
>        {
>          if (cf == 0)
>            {
> --- 14636,14643 ----
>         */
>
>        if ((!TARGET_CMOVE || (mode == QImode && TARGET_PARTIAL_REG_STALL))
> !         && BRANCH_COST (optimize_insn_for_speed_p (),
> !                         false) >= 2)
>        {
>          if (cf == 0)
>            {
> *************** ix86_expand_int_movcc (rtx operands[])
> *** 14721,14727 ****
>        optab op;
>        rtx var, orig_out, out, tmp;
>
> !       if (BRANCH_COST <= 2)
>        return 0; /* FAIL */
>
>        /* If one of the two operands is an interesting constant, load a
> --- 14722,14728 ----
>        optab op;
>        rtx var, orig_out, out, tmp;
>
> !       if (BRANCH_COST (optimize_insn_for_speed_p (), false) <= 2)
>        return 0; /* FAIL */
>
>        /* If one of the two operands is an interesting constant, load a
> Index: config/sh/sh.h
> ===================================================================
> *** config/sh/sh.h      (revision 139737)
> --- config/sh/sh.h      (working copy)
> *************** struct sh_args {
> *** 2847,2853 ****
>     The SH1 does not have delay slots, hence we get a pipeline stall
>     at every branch.  The SH4 is superscalar, so the single delay slot
>     is not sufficient to keep both pipelines filled.  */
> ! #define BRANCH_COST (TARGET_SH5 ? 1 : ! TARGET_SH2 || TARGET_HARD_SH4 ? 2 : 1)
>
>  /* Assembler output control.  */
>
> --- 2847,2854 ----
>     The SH1 does not have delay slots, hence we get a pipeline stall
>     at every branch.  The SH4 is superscalar, so the single delay slot
>     is not sufficient to keep both pipelines filled.  */
> ! #define BRANCH_COST(speed_p, predictable_p) \
> !       (TARGET_SH5 ? 1 : ! TARGET_SH2 || TARGET_HARD_SH4 ? 2 : 1)
>
>  /* Assembler output control.  */
>
> Index: config/pdp11/pdp11.h
> ===================================================================
> *** config/pdp11/pdp11.h        (revision 139737)
> --- config/pdp11/pdp11.h        (working copy)
> *************** JMP     FUNCTION        0x0058  0x0000 <- FUNCTION
> *** 1057,1063 ****
>  /* there is no point in avoiding branches on a pdp,
>     since branches are really cheap - I just want to find out
>     how much difference the BRANCH_COST macro makes in code */
> ! #define BRANCH_COST (TARGET_BRANCH_CHEAP ? 0 : 1)
>
>
>  #define COMPARE_FLAG_MODE HImode
> --- 1057,1063 ----
>  /* there is no point in avoiding branches on a pdp,
>     since branches are really cheap - I just want to find out
>     how much difference the BRANCH_COST macro makes in code */
> ! #define BRANCH_COST(speed_p, predictable_p) (TARGET_BRANCH_CHEAP ? 0 : 1)
>
>
>  #define COMPARE_FLAG_MODE HImode
> Index: config/avr/avr.h
> ===================================================================
> *** config/avr/avr.h    (revision 139737)
> --- config/avr/avr.h    (working copy)
> *************** do {                                                                        \
> *** 511,517 ****
>                                         (MODE)==SImode ? 8 :   \
>                                         (MODE)==SFmode ? 8 : 16)
>
> ! #define BRANCH_COST 0
>
>  #define SLOW_BYTE_ACCESS 0
>
> --- 511,517 ----
>                                         (MODE)==SImode ? 8 :   \
>                                         (MODE)==SFmode ? 8 : 16)
>
> ! #define BRANCH_COST(speed_p, predictable_p) 0
>
>  #define SLOW_BYTE_ACCESS 0
>
> Index: config/crx/crx.h
> ===================================================================
> *** config/crx/crx.h    (revision 139737)
> --- config/crx/crx.h    (working copy)
> *************** struct cumulative_args
> *** 420,426 ****
>  /* Moving to processor register flushes pipeline - thus asymmetric */
>  #define REGISTER_MOVE_COST(MODE, FROM, TO) ((TO != GENERAL_REGS) ? 8 : 2)
>  /* Assume best case (branch predicted) */
> ! #define BRANCH_COST 2
>
>  #define SLOW_BYTE_ACCESS  1
>
> --- 420,426 ----
>  /* Moving to processor register flushes pipeline - thus asymmetric */
>  #define REGISTER_MOVE_COST(MODE, FROM, TO) ((TO != GENERAL_REGS) ? 8 : 2)
>  /* Assume best case (branch predicted) */
> ! #define BRANCH_COST(speed_p, predictable_p) 2
>
>  #define SLOW_BYTE_ACCESS  1
>
> Index: config/xtensa/xtensa.h
> ===================================================================
> *** config/xtensa/xtensa.h      (revision 139737)
> --- config/xtensa/xtensa.h      (working copy)
> *************** typedef struct xtensa_args
> *** 882,888 ****
>
>  #define MEMORY_MOVE_COST(MODE, CLASS, IN) 4
>
> ! #define BRANCH_COST 3
>
>  /* How to refer to registers in assembler output.
>     This sequence is indexed by compiler's hard-register-number (see above).  */
> --- 882,888 ----
>
>  #define MEMORY_MOVE_COST(MODE, CLASS, IN) 4
>
> ! #define BRANCH_COST(speed_p, predictable_p) 3
>
>  /* How to refer to registers in assembler output.
>     This sequence is indexed by compiler's hard-register-number (see above).  */
> Index: config/stormy16/stormy16.h
> ===================================================================
> *** config/stormy16/stormy16.h  (revision 139737)
> --- config/stormy16/stormy16.h  (working copy)
> *************** do {                                                    \
> *** 587,593 ****
>
>  #define MEMORY_MOVE_COST(M,C,I) (5 + memory_move_secondary_cost (M, C, I))
>
> ! #define BRANCH_COST 5
>
>  #define SLOW_BYTE_ACCESS 0
>
> --- 587,593 ----
>
>  #define MEMORY_MOVE_COST(M,C,I) (5 + memory_move_secondary_cost (M, C, I))
>
> ! #define BRANCH_COST(speed_p, predictable_p) 5
>
>  #define SLOW_BYTE_ACCESS 0
>
> Index: config/m68hc11/m68hc11.h
> ===================================================================
> *** config/m68hc11/m68hc11.h    (revision 139737)
> --- config/m68hc11/m68hc11.h    (working copy)
> *************** extern unsigned char m68hc11_reg_valid_f
> *** 1266,1272 ****
>
>     Pretend branches are cheap because GCC generates sub-optimal code
>     for the default value.  */
> ! #define BRANCH_COST 0
>
>  /* Nonzero if access to memory by bytes is slow and undesirable.  */
>  #define SLOW_BYTE_ACCESS      0
> --- 1266,1272 ----
>
>     Pretend branches are cheap because GCC generates sub-optimal code
>     for the default value.  */
> ! #define BRANCH_COST(speed_p, predictable_p) 0
>
>  /* Nonzero if access to memory by bytes is slow and undesirable.  */
>  #define SLOW_BYTE_ACCESS      0
> Index: config/iq2000/iq2000.h
> ===================================================================
> *** config/iq2000/iq2000.h      (revision 139737)
> --- config/iq2000/iq2000.h      (working copy)
> *************** typedef struct iq2000_args
> *** 624,630 ****
>  #define MEMORY_MOVE_COST(MODE,CLASS,TO_P)     \
>    (TO_P ? 2 : 16)
>
> ! #define BRANCH_COST 2
>
>  #define SLOW_BYTE_ACCESS 1
>
> --- 624,630 ----
>  #define MEMORY_MOVE_COST(MODE,CLASS,TO_P)     \
>    (TO_P ? 2 : 16)
>
> ! #define BRANCH_COST(speed_p, predictable_p) 2
>
>  #define SLOW_BYTE_ACCESS 1
>
> Index: config/ia64/ia64.h
> ===================================================================
> *** config/ia64/ia64.h  (revision 139737)
> --- config/ia64/ia64.h  (working copy)
> *************** do {                                                                    \
> *** 1384,1390 ****
>     many additional insn groups we run into, vs how good the dynamic
>     branch predictor is.  */
>
> ! #define BRANCH_COST 6
>
>  /* Define this macro as a C expression which is nonzero if accessing less than
>     a word of memory (i.e. a `char' or a `short') is no faster than accessing a
> --- 1384,1390 ----
>     many additional insn groups we run into, vs how good the dynamic
>     branch predictor is.  */
>
> ! #define BRANCH_COST(speed_p, predictable_p) 6
>
>  /* Define this macro as a C expression which is nonzero if accessing less than
>     a word of memory (i.e. a `char' or a `short') is no faster than accessing a
> Index: config/rs6000/rs6000.h
> ===================================================================
> *** config/rs6000/rs6000.h      (revision 139737)
> --- config/rs6000/rs6000.h      (working copy)
> *************** extern enum rs6000_nop_insertion rs6000_
> *** 967,973 ****
>     Set this to 3 on the RS/6000 since that is roughly the average cost of an
>     unscheduled conditional branch.  */
>
> ! #define BRANCH_COST 3
>
>  /* Override BRANCH_COST heuristic which empirically produces worse
>     performance for removing short circuiting from the logical ops.  */
> --- 967,973 ----
>     Set this to 3 on the RS/6000 since that is roughly the average cost of an
>     unscheduled conditional branch.  */
>
> ! #define BRANCH_COST(speed_p, predictable_p) 3
>
>  /* Override BRANCH_COST heuristic which empirically produces worse
>     performance for removing short circuiting from the logical ops.  */
> Index: config/arc/arc.h
> ===================================================================
> *** config/arc/arc.h    (revision 139737)
> --- config/arc/arc.h    (working copy)
> *************** arc_select_cc_mode (OP, X, Y)
> *** 824,830 ****
>  /* The cost of a branch insn.  */
>  /* ??? What's the right value here?  Branches are certainly more
>     expensive than reg->reg moves.  */
> ! #define BRANCH_COST 2
>
>  /* Nonzero if access to memory by bytes is slow and undesirable.
>     For RISC chips, it means that access to memory by bytes is no
> --- 824,830 ----
>  /* The cost of a branch insn.  */
>  /* ??? What's the right value here?  Branches are certainly more
>     expensive than reg->reg moves.  */
> ! #define BRANCH_COST(speed_p, predictable_p) 2
>
>  /* Nonzero if access to memory by bytes is slow and undesirable.
>     For RISC chips, it means that access to memory by bytes is no
> Index: config/score/score.h
> ===================================================================
> *** config/score/score.h        (revision 139737)
> --- config/score/score.h        (working copy)
> *************** typedef struct score_args
> *** 793,799 ****
>    (4 + memory_move_secondary_cost ((MODE), (CLASS), (TO_P)))
>
>  /* Try to generate sequences that don't involve branches.  */
> ! #define BRANCH_COST                     2
>
>  /* Nonzero if access to memory by bytes is slow and undesirable.  */
>  #define SLOW_BYTE_ACCESS                1
> --- 793,799 ----
>    (4 + memory_move_secondary_cost ((MODE), (CLASS), (TO_P)))
>
>  /* Try to generate sequences that don't involve branches.  */
> ! #define BRANCH_COST(speed_p, predictable_p) 2
>
>  /* Nonzero if access to memory by bytes is slow and undesirable.  */
>  #define SLOW_BYTE_ACCESS                1
> Index: config/arm/arm.h
> ===================================================================
> *** config/arm/arm.h    (revision 139737)
> --- config/arm/arm.h    (working copy)
> *************** do {                                                    \
> *** 2297,2303 ****
>
>  /* Try to generate sequences that don't involve branches, we can then use
>     conditional instructions */
> ! #define BRANCH_COST \
>    (TARGET_32BIT ? 4 : (optimize > 0 ? 2 : 0))
>
>  /* Position Independent Code.  */
> --- 2297,2303 ----
>
>  /* Try to generate sequences that don't involve branches, we can then use
>     conditional instructions */
> ! #define BRANCH_COST(speed_p, predictable_p) \
>    (TARGET_32BIT ? 4 : (optimize > 0 ? 2 : 0))
>
>  /* Position Independent Code.  */
> Index: config/pa/pa.h
> ===================================================================
> *** config/pa/pa.h      (revision 139737)
> --- config/pa/pa.h      (working copy)
> *************** do {                                                                    \
> *** 1570,1576 ****
>    : 2)
>
>  /* Adjust the cost of branches.  */
> ! #define BRANCH_COST (pa_cpu == PROCESSOR_8000 ? 2 : 1)
>
>  /* Handling the special cases is going to get too complicated for a macro,
>     just call `pa_adjust_insn_length' to do the real work.  */
> --- 1570,1576 ----
>    : 2)
>
>  /* Adjust the cost of branches.  */
> ! #define BRANCH_COST(speed_p, predictable_p) (pa_cpu == PROCESSOR_8000 ? 2 : 1)
>
>  /* Handling the special cases is going to get too complicated for a macro,
>     just call `pa_adjust_insn_length' to do the real work.  */
> Index: config/mips/mips.h
> ===================================================================
> *** config/mips/mips.h  (revision 139737)
> --- config/mips/mips.h  (working copy)
> *************** typedef struct mips_args {
> *** 2551,2557 ****
>  /* A C expression for the cost of a branch instruction.  A value of
>     1 is the default; other values are interpreted relative to that.  */
>
> ! #define BRANCH_COST mips_branch_cost
>  #define LOGICAL_OP_NON_SHORT_CIRCUIT 0
>
>  /* If defined, modifies the length assigned to instruction INSN as a
> --- 2551,2557 ----
>  /* A C expression for the cost of a branch instruction.  A value of
>     1 is the default; other values are interpreted relative to that.  */
>
> ! #define BRANCH_COST(speed_p, predictable_p) mips_branch_cost
>  #define LOGICAL_OP_NON_SHORT_CIRCUIT 0
>
>  /* If defined, modifies the length assigned to instruction INSN as a
> Index: config/vax/vax.h
> ===================================================================
> *** config/vax/vax.h    (revision 139737)
> --- config/vax/vax.h    (working copy)
> *************** enum reg_class { NO_REGS, ALL_REGS, LIM_
> *** 648,654 ****
>     Branches are extremely cheap on the VAX while the shift insns often
>     used to replace branches can be expensive.  */
>
> ! #define BRANCH_COST 0
>
>  /* Tell final.c how to eliminate redundant test instructions.  */
>
> --- 648,654 ----
>     Branches are extremely cheap on the VAX while the shift insns often
>     used to replace branches can be expensive.  */
>
> ! #define BRANCH_COST(speed_p, predictable_p) 0
>
>  /* Tell final.c how to eliminate redundant test instructions.  */
>
> Index: config/h8300/h8300.h
> ===================================================================
> *** config/h8300/h8300.h        (revision 139737)
> --- config/h8300/h8300.h        (working copy)
> *************** struct cum_arg
> *** 1004,1010 ****
>  #define DELAY_SLOT_LENGTH(JUMP) \
>    (NEXT_INSN (PREV_INSN (JUMP)) == JUMP ? 0 : 2)
>
> ! #define BRANCH_COST 0
>
>  /* Tell final.c how to eliminate redundant test instructions.  */
>
> --- 1004,1010 ----
>  #define DELAY_SLOT_LENGTH(JUMP) \
>    (NEXT_INSN (PREV_INSN (JUMP)) == JUMP ? 0 : 2)
>
> ! #define BRANCH_COST(speed_p, predictable_p) 0
>
>  /* Tell final.c how to eliminate redundant test instructions.  */
>
> Index: params.def
> ===================================================================
> *** params.def  (revision 139737)
> --- params.def  (working copy)
> *************** DEFPARAM (PARAM_STRUCT_REORG_COLD_STRUCT
> *** 78,83 ****
> --- 78,90 ----
>          "The threshold ratio between current and hottest structure counts",
>          10, 0, 100)
>
> + /* When a branch is predicted to be taken with probability lower than this
> +    threshold (in percent), it is considered well predictable.  */
> + DEFPARAM (PARAM_PREDICTABLE_BRANCH_OUTCOME,
> +         "predictable-branch-outcome",
> +         "Maximal estimated outcome of branch considered predictable",
> +         2, 0, 50)
> +
>  /* The single function inlining limit. This is the maximum size
>     of a function counted in internal gcc instructions (not in
>     real machine instructions) that is eligible for inlining
>

Thread overview: 502+ messages
2008-04-05 16:54 Jan Hubicka
2008-04-05 17:36 ` Richard Guenther
2008-04-05 20:39   ` Jan Hubicka
2008-04-08 20:42     ` Mark Mitchell
2008-04-08 22:52       ` Jan Hubicka
2008-04-08 23:06         ` Mark Mitchell
2008-04-09  7:19           ` Andi Kleen
2008-04-10 13:36           ` Jan Hubicka
2008-04-10 18:36       ` Michael Matz
2008-04-11  8:16         ` Mark Mitchell
2008-04-12 19:10           ` Hans-Peter Nilsson
2008-08-29 22:15   ` Jan Hubicka
2008-08-30 19:06     ` Richard Guenther [this message]
2008-09-02 14:38       ` Ian Lance Taylor
  -- strict thread matches above, loose matches on Subject: below --
2024-02-09  9:44 Jakub Jelinek
2024-02-12 16:07 ` Jeff Law
2023-03-01 10:23 Jakub Jelinek
2023-03-04  1:33 ` Joseph Myers
2023-02-13 10:35 Jakub Jelinek
2023-01-30  9:50 Jakub Jelinek
2023-01-30 23:07 ` Richard Sandiford
2023-01-09 16:50 Jakub Jelinek
2022-12-09 15:09 Jakub Jelinek
2022-10-21  7:23 [PATCH] builtins: Add __builtin_nextafterf16b builtin Jakub Jelinek
2022-10-21 15:42 ` [PATCH] builtins: Add various complex builtins for _Float{16,32,64,128,32x,64x,128x} Jakub Jelinek
2022-10-24 16:28   ` Jeff Law
2022-10-25  9:03     ` Patch ping Jakub Jelinek
2022-03-02  9:47 Jakub Jelinek
2022-03-02 18:59 ` Jeff Law
2022-01-04 12:45 Jakub Jelinek
2022-01-03 10:40 Jakub Jelinek
2022-01-03 12:38 ` Richard Biener
2022-01-03 10:25 Jakub Jelinek
2022-01-03 12:39 ` Richard Biener
2022-01-03 13:15 ` Jan Hubicka
2021-12-01 15:15 Jakub Jelinek
2021-03-31  7:07 Jakub Jelinek
2021-03-31  7:10 ` Richard Biener
2021-03-24 11:44 Jakub Jelinek
2021-03-24 15:45 ` Martin Sebor
2021-03-24 16:40   ` Jakub Jelinek
2021-03-24 17:14     ` Martin Sebor
2021-03-25  8:45       ` Richard Biener
2021-03-24 16:12 ` Jeff Law
2021-03-19  9:57 Jakub Jelinek
2021-02-16  8:13 [PATCH] cfgrtl: Fix up fixup_partitions caused ICE [PR99085] Jakub Jelinek
2021-02-23  8:49 ` Patch ping Jakub Jelinek
2021-01-25  9:43 Jakub Jelinek
2021-01-25 22:34 ` Jason Merrill
2020-10-22  9:05 Jakub Jelinek
2020-10-22 20:42 ` Joseph Myers
2020-10-05  9:09 Jakub Jelinek
2020-10-05 12:02 ` Nathan Sidwell
2020-09-25 11:42 Jakub Jelinek
2020-03-10 12:28 Jakub Jelinek
2020-02-10  9:24 Jakub Jelinek
2020-02-12 21:39 ` Jeff Law
2020-02-13  9:54   ` Jakub Jelinek
2020-02-13 17:42     ` Martin Sebor
2020-02-13 19:36       ` Jeff Law
2020-01-07 10:20 Jakub Jelinek
2019-09-14  0:40 [PATCH] Fix up sqrt(x) < c and sqrt(x) >= c match.pd folding (PR tree-optimization/91734) Jakub Jelinek
2019-09-16  6:57 ` Richard Biener
2019-09-21  6:14   ` [PATCH] Fix up sqrt(x) < c and sqrt(x) >= c match.pd folding (PR tree-optimization/91734, take 2) Jakub Jelinek
2019-09-30  7:03     ` Patch ping Jakub Jelinek
2019-04-16 11:54 Jakub Jelinek
2018-04-30  8:43 Jakub Jelinek
2018-04-16 10:35 Jakub Jelinek
2018-04-17  6:14 ` Kirill Yukhin
2018-04-10 13:35 Jakub Jelinek
2018-04-10 12:34 ` Kirill Yukhin
2018-03-12 17:35 Jakub Jelinek
2018-03-12 23:22 ` Jason Merrill
2018-03-05 18:38 Jakub Jelinek
2018-03-05 16:19 ` Jan Hubicka
2018-03-02  8:49 Jakub Jelinek
2018-03-02 17:17 ` Jeff Law
2018-03-05 15:39 ` Kirill Yukhin
2018-02-14 17:49 Jakub Jelinek
2018-02-19 18:15 ` Jeff Law
2018-02-07  9:01 Jakub Jelinek
2017-11-20  8:31 Jakub Jelinek
2017-11-20 18:31 ` Nathan Sidwell
2017-11-20 19:08 ` Nathan Sidwell
2017-11-21  8:53   ` Jakub Jelinek
2017-11-21  0:16 ` Jim Wilson
2017-11-21  3:01   ` Jim Wilson
2017-11-21  8:14   ` Jakub Jelinek
2017-11-06 16:22 Jakub Jelinek
2017-10-24 11:04 Jakub Jelinek
2017-10-24 18:58 ` Kirill Yukhin
2017-10-16 10:16 Jakub Jelinek
2017-10-06 14:12 Jakub Jelinek
2017-10-06 15:25 ` Nathan Sidwell
2017-10-06 15:27 ` Nathan Sidwell
2017-09-29  9:13 Jakub Jelinek
2017-07-28 16:58 Jakub Jelinek
2017-07-25  9:40 Jakub Jelinek
2017-07-26 10:34 ` Richard Biener
2017-07-26 13:47   ` Jakub Jelinek
2017-07-26 14:13     ` Richard Biener
2017-07-26 17:31       ` Jakub Jelinek
2017-07-27  7:19         ` Richard Biener
2017-07-27  8:35           ` Jakub Jelinek
2017-07-28  7:59             ` Richard Biener
2017-04-10 12:18 Jakub Jelinek
2017-04-10 12:41 ` Nathan Sidwell
2017-04-10 13:22   ` Jakub Jelinek
2017-04-10 14:39     ` Nathan Sidwell
2017-04-05 10:45 Jakub Jelinek
2017-03-31  8:34 Jakub Jelinek
2017-03-31 15:14 ` Jeff Law
2017-03-31 18:50   ` Jakub Jelinek
2017-03-31 15:15 ` Jeff Law
2017-02-07 15:11 Jakub Jelinek
2017-02-07 15:22 ` Uros Bizjak
2017-02-02 10:13 Jakub Jelinek
2017-02-02 10:15 ` Richard Biener
2017-01-26 20:42 Jakub Jelinek
2017-01-10  7:27 Jakub Jelinek
2016-11-18 17:08 Jakub Jelinek
2016-10-08  6:15 [C++ PATCH] Fix -Wimplicit-fallthrough in templates (PR c++/77886) Jakub Jelinek
2016-10-17 17:37 ` Patch ping Jakub Jelinek
2016-09-28 21:18 Bernd Edlinger
2016-09-28 19:31 Jakub Jelinek
2016-09-28 19:35 ` Bernd Schmidt
2016-09-28 19:55   ` Jakub Jelinek
2016-09-28 20:19   ` Jakub Jelinek
2016-09-28 21:41     ` Bernd Schmidt
2016-09-28 21:51       ` Jakub Jelinek
2016-09-29  0:32         ` Bernd Schmidt
2016-09-29  0:41           ` Jakub Jelinek
2016-09-14 21:55 Jakub Jelinek
2016-09-15 11:01 ` Bernd Schmidt
2016-09-05 17:14 [C++ PATCH] Fix constexpr switch handling (PR c++/77467) Jakub Jelinek
2016-09-16 20:00 ` Jason Merrill
2016-09-16 20:51   ` Jakub Jelinek
2016-09-19 18:49     ` Jason Merrill
2016-09-20 16:29       ` [C++ PATCH] Fix constexpr switch handling (PR c++/77467, take 2) Jakub Jelinek
2016-09-27 21:33         ` Patch ping Jakub Jelinek
2016-08-15  8:50 Jakub Jelinek
2016-07-22 14:16 Cesar Philippidis
2016-07-18 18:08 Jakub Jelinek
2016-07-11 13:14 Jakub Jelinek
2016-07-12  8:54 ` Richard Biener
2016-06-02  9:47 Jakub Jelinek
2016-03-18  9:23 Jakub Jelinek
2016-03-17 14:24 Jakub Jelinek
2016-03-17 15:48 ` Jason Merrill
2016-03-04  7:30 Jakub Jelinek
2016-03-04  7:38 ` Jeff Law
2016-03-03 14:36 Jakub Jelinek
2016-03-04  7:10 ` Jeff Law
2016-03-04  7:23   ` Jakub Jelinek
2016-02-11 18:14 Jakub Jelinek
2016-02-10 14:12 Jakub Jelinek
2016-02-10 14:21 ` Richard Biener
2015-05-05 18:52 Jakub Jelinek
2015-05-05 19:10 ` Andreas Krebbel
2015-04-17  8:47 Jakub Jelinek
2015-04-17 15:32 ` Jeff Law
2015-04-11 22:27 patch ping Bernhard Reutner-Fischer
2015-04-13 13:12 ` Jeff Law
2015-04-22 19:47   ` Bernhard Reutner-Fischer
2015-03-18 14:01 Patch ping Jakub Jelinek
2015-02-12 15:37 Jakub Jelinek
2015-02-09 23:06 patch ping Trevor Saunders
2015-02-09 23:15 ` Jan Hubicka
2015-02-04 19:30 Patch ping Jakub Jelinek
2015-01-14  6:29 Jan Hubicka
2015-01-14 21:42 ` Jason Merrill
2015-01-05 13:53 Jakub Jelinek
2015-01-05 21:27 ` Jeff Law
2015-01-05 21:39   ` Jakub Jelinek
2015-01-06  8:23     ` Jakub Jelinek
2015-01-09  5:34     ` Jeff Law
2014-12-12  8:23 Jakub Jelinek
2014-11-01 11:58 nvptx offloading patches [3/n], RFD Bernd Schmidt
2015-02-04 11:38 ` Jakub Jelinek
2015-02-09 10:20   ` Richard Biener
2015-02-16 21:08     ` Jakub Jelinek
2015-02-16 21:35       ` Richard Biener
2015-02-16 21:44         ` Jakub Jelinek
2015-02-17 10:00           ` Richard Biener
2015-02-18 10:00             ` Jakub Jelinek
2015-02-25  8:51               ` Patch ping Jakub Jelinek
2015-02-25  9:30                 ` Richard Biener
2015-02-25 16:51                   ` Jakub Jelinek
2014-07-19 10:12 Jakub Jelinek
2014-04-09 13:07 Jakub Jelinek
2014-04-09 22:29 ` DJ Delorie
2014-04-10  5:59   ` Jakub Jelinek
2014-04-10 16:01     ` DJ Delorie
2014-04-10 18:42       ` Tobias Burnus
2014-04-14 11:02       ` Jakub Jelinek
2014-04-16 18:45         ` Toon Moene
2014-04-16 19:13         ` DJ Delorie
2014-04-17 12:21           ` Jakub Jelinek
2014-04-10  4:24 ` Jeff Law
2014-02-06 12:12 Jakub Jelinek
2015-04-17 15:46 ` Richard Earnshaw
2015-04-17 15:47   ` Richard Earnshaw
2014-01-13  8:07 Jakub Jelinek
2014-01-13  8:15 ` Uros Bizjak
2014-01-13  8:35   ` Jakub Jelinek
2014-01-13 10:23     ` Richard Biener
2014-01-13 18:26     ` Kirill Yukhin
2014-01-13 18:33       ` Uros Bizjak
2014-01-13 18:40       ` Uros Bizjak
2014-01-13 18:59         ` Jakub Jelinek
2014-01-13 15:15 ` Jeff Law
2014-01-13 16:26   ` Jakub Jelinek
2014-01-13 15:22     ` Jeff Law
2014-04-14 10:56       ` Jakub Jelinek
2014-04-16 21:35 ` Jeff Law
2014-04-17 21:56   ` Uros Bizjak
2014-01-06  9:52 Jakub Jelinek
2013-05-17  6:49 Jakub Jelinek
2013-05-17 15:44 ` Jeff Law
2013-04-26  7:40 Jakub Jelinek
2013-04-26 11:01 ` Gabriel Dos Reis
2013-03-05 13:12 Jakub Jelinek
2013-03-05 13:26 ` Richard Biener
2013-03-05 13:47   ` Jakub Jelinek
2013-03-05 13:52     ` Richard Biener
2013-03-05 15:44 ` Vladimir Makarov
2013-03-05 15:46 ` Vladimir Makarov
2013-02-07  8:24 Jakub Jelinek
2013-02-07 14:34 ` Jeff Law
2013-01-30 10:18 Jakub Jelinek
2012-12-18 14:12 Jakub Jelinek
2012-12-18 21:36 ` Paul Richard Thomas
2012-11-26 12:30 Jakub Jelinek
2012-12-06  9:28 ` Richard Biener
2012-11-16  9:10 Jakub Jelinek
2012-11-17 19:12 ` Richard Henderson
2012-11-17 19:16 ` Richard Henderson
2012-11-17 20:04 ` Richard Henderson
2012-11-19  7:53   ` Jakub Jelinek
2012-11-19 16:56     ` Richard Henderson
2012-10-22 18:31 Jakub Jelinek
2012-08-27  7:44 Jakub Jelinek
2012-09-03 11:34 ` Richard Guenther
2012-06-11 11:28 Jakub Jelinek
2012-03-05 11:09 Jakub Jelinek
2012-03-05 12:18 ` Richard Guenther
2012-03-05 20:08 ` Richard Henderson
2012-02-14 10:07 Jakub Jelinek
2012-02-17 14:56 ` Jan Hubicka
2012-02-03 10:14 Jakub Jelinek
2012-02-03 10:56 ` Paolo Carlini
2012-01-24 10:29 Jakub Jelinek
2012-01-24 10:53 ` Richard Guenther
2012-01-02 10:38 Jakub Jelinek
2012-01-02 12:20 ` Richard Guenther
2011-11-07 21:54 Jakub Jelinek
2011-11-08 13:45 ` Richard Guenther
2011-11-02 20:19 Jakub Jelinek
2011-11-04 10:11 ` Richard Guenther
2011-11-04 10:39   ` Jakub Jelinek
2011-11-04 11:44     ` Richard Guenther
2011-11-04 14:09       ` Michael Matz
2011-09-26  9:30 Jakub Jelinek
2011-09-26 10:08 ` Richard Sandiford
2011-09-12 15:39 Jakub Jelinek
2011-09-12 16:17 ` Jeff Law
2011-08-29  9:41 Jakub Jelinek
2011-08-29 12:00 ` Joseph S. Myers
2011-08-29 12:49 ` Bernd Schmidt
2011-08-29 21:33 ` Jeff Law
2011-08-18  9:45 Jakub Jelinek
2011-06-20  9:22 Jakub Jelinek
2011-06-21 18:37 ` Richard Henderson
2011-06-25 19:39 ` Eric Botcazou
2011-06-25 23:56   ` Mike Stump
2011-05-23  9:34 Jakub Jelinek
2011-05-23 10:11 ` Richard Guenther
2011-05-23 18:13 ` Jeff Law
2011-05-12 16:12 Jakub Jelinek
2011-04-26 12:55 Jakub Jelinek
2011-03-14 20:20 Jakub Jelinek
2011-03-14 20:27 ` Diego Novillo
2011-02-28 10:38 Jakub Jelinek
2011-02-28 16:07 ` Jeff Law
2011-02-28 16:18 ` Jeff Law
2011-02-28 18:12 ` Jeff Law
2011-02-03 11:59 Jakub Jelinek
2011-02-03 16:14 ` Richard Henderson
2011-02-03 16:20   ` Jakub Jelinek
2011-02-03 16:25     ` IainS
2011-02-03 16:27       ` Richard Henderson
2011-02-03 16:38         ` Jakub Jelinek
2011-02-03 16:49           ` IainS
2011-02-03 16:54             ` Jakub Jelinek
2011-02-03 18:44           ` Mike Stump
2011-02-03 19:04             ` IainS
2010-11-05 20:04 Jakub Jelinek
2010-11-09 15:48 ` Jeff Law
2010-09-08 18:13 Jakub Jelinek
2010-07-20 16:59 Jakub Jelinek
2010-07-27 17:39 ` Jeff Law
2010-06-21 10:12 Jakub Jelinek
2010-06-21 11:19 ` Paolo Bonzini
2010-06-21 12:08   ` Jan Kratochvil
2010-06-21 12:20     ` Jan Kratochvil
2010-05-10 17:00 Jakub Jelinek
2010-05-10 23:43 ` Joseph S. Myers
2010-04-19  9:47 Jakub Jelinek
2010-03-02 19:00 Patch Ping Jeff Law
2010-03-03 10:09 ` Richard Guenther
2010-02-23 15:42 Patch ping Jakub Jelinek
2010-02-23 20:12 ` Uros Bizjak
2010-02-09 22:39 Jakub Jelinek
2010-02-09 22:52 ` Richard Guenther
2010-01-14  9:33 Jakub Jelinek
2010-01-14 19:12 ` Richard Henderson
2010-01-04 10:54 Jakub Jelinek
2010-01-04 14:35 ` Richard Guenther
2009-11-02 13:17 Jakub Jelinek
2009-11-02 13:29 ` Richard Guenther
2009-10-19 19:22 Jakub Jelinek
2009-10-19 19:22 ` Richard Henderson
2009-10-19 21:09 ` Joseph S. Myers
2009-10-19 22:06 ` Jason Merrill
2009-10-20  1:25   ` Paolo Carlini
2009-10-12 12:37 Jakub Jelinek
2009-10-12 19:23 ` Tom Tromey
2009-10-12 20:21   ` Jakub Jelinek
2009-10-12 21:29     ` Tom Tromey
2009-08-06 20:57 Jakub Jelinek
2009-05-20 21:07 Jakub Jelinek
2009-04-08 18:16 Jakub Jelinek
2009-01-09 16:41 Jakub Jelinek
2009-01-10  2:39 ` Ian Lance Taylor
2008-11-10 16:53 Jakub Jelinek
2008-11-12 15:51 ` Nick Clifton
2008-11-22  2:49 ` Ian Lance Taylor
2008-09-26  0:33 Jakub Jelinek
2008-09-26 12:53 ` Diego Novillo
2008-09-26 17:36 ` Richard Henderson
2008-07-28 15:02 Jakub Jelinek
2008-06-27 16:11 Jakub Jelinek
2008-05-07  8:38 Jakub Jelinek
2008-05-07 14:59 ` Jason Merrill
2008-05-21 15:05   ` Jakub Jelinek
2008-05-21 15:51     ` Jason Merrill
2008-05-10 19:23 ` Diego Novillo
2008-02-20 14:35 Jakub Jelinek
2008-02-20 16:26 ` Tom Tromey
2008-02-15 16:47 Jakub Jelinek
2007-09-04 10:02 Jan Hubicka
2007-09-04 10:07 ` Richard Guenther
2007-07-30 18:17 Zdenek Dvorak
2007-07-09  9:03 Zdenek Dvorak
2007-07-09  9:44 ` Richard Guenther
2007-05-24 21:39 Krister Walfridsson
2007-05-23  9:13 Zdenek Dvorak
2007-05-23 20:24 ` Diego Novillo
2007-04-18  1:07 Jan Hubicka
2007-04-17  1:49 Zdenek Dvorak
2006-12-16  0:05 H. J. Lu
2006-12-16  0:35 ` Janis Johnson
2006-12-14 23:53 Zdenek Dvorak
2006-12-15 13:12 ` Richard Guenther
2006-12-16 16:32   ` Zdenek Dvorak
2006-05-02 14:32 Patch Ping Tom Tromey
2006-05-03  2:22 ` Mark Mitchell
2006-03-21 21:26 Patch ping Zdenek Dvorak
2006-03-10 19:33 Uttam Pawar
2006-03-11 20:40 ` Roger Sayle
2006-03-13 19:23   ` Uttam Pawar
2006-03-14  1:02     ` Roger Sayle
2006-03-14 16:49     ` Steve Ellcey
2006-03-14 16:55       ` Andrew Pinski
2006-03-15  4:38       ` Roger Sayle
2006-03-15 19:29         ` Steve Ellcey
2006-03-15 10:23       ` Grigory Zagorodnev
2006-03-15 10:15     ` Andreas Schwab
2006-02-16 23:51 [patch] for PR26327 Uttam Pawar
2006-02-21 19:36 ` patch Ping Uttam Pawar
2006-02-16 15:58 Patch ping Zdenek Dvorak
2006-02-17  2:40 ` Roger Sayle
2006-02-17  9:24   ` Zdenek Dvorak
2006-02-17 10:34     ` Paolo Bonzini
2006-02-17 15:31       ` Roger Sayle
2006-02-21  9:15         ` Zdenek Dvorak
2006-02-21 14:47           ` Roger Sayle
2006-02-21 15:43             ` Zdenek Dvorak
2006-02-21 18:01               ` Richard Henderson
2006-02-21 23:04                 ` Zdenek Dvorak
2006-02-21 23:16                   ` Richard Henderson
2006-02-22  0:20                     ` Zdenek Dvorak
2006-02-14 17:19 Jakub Jelinek
2006-01-28  0:07 Zdenek Dvorak
2006-01-16 21:54 Jakub Jelinek
2006-01-10 21:41 Jan Hubicka
2006-01-10 22:45 ` Ian Lance Taylor
2006-01-10 14:03 Zdenek Dvorak
2006-01-10 14:20 ` Diego Novillo
2006-01-10 16:27   ` Zdenek Dvorak
2005-12-19 19:30 Jan Hubicka
2005-12-19  8:10 patch ping Jan Beulich
2005-12-19  9:26 ` Gerald Pfeifer
2005-11-19 19:14 Rafael Ávila de Espíndola
2005-11-20  9:06 ` Andreas Jaeger
2005-10-30 13:57 Richard Kenner
2005-10-29  1:18 Andrew Pinski
2005-10-29  4:16 ` Ian Lance Taylor
2005-10-29 20:17   ` Andrew Pinski
2005-10-29 20:26     ` Andrew Pinski
2005-10-29 21:08       ` Andrew Pinski
2005-10-30  4:59         ` Ian Lance Taylor
2005-10-04 16:35 Patch ping Ian Lance Taylor
2005-10-04 17:49 ` Richard Henderson
2005-08-29  8:03 Jakub Jelinek
2005-08-29  8:49 ` Ian Lance Taylor
2005-08-01 12:56 Jan Hubicka
2005-06-20 19:37 Jan Hubicka
2005-06-20 22:42 ` Richard Henderson
2005-06-21  8:34   ` Jan Hubicka
2005-06-15 22:34 patch ping Eric Christopher
2005-05-18 11:23 Patch ping Tobias Schlüter
2005-05-12 20:41 Jakub Jelinek
2005-04-04 15:14 Ian Lance Taylor
2005-04-05  2:09 ` Richard Henderson
2005-03-30 19:18 Dale Johannesen
2005-03-30 22:59 ` Tom Tromey
2005-03-30 23:05 ` Geoffrey Keating
2005-03-25 21:26 Zdenek Dvorak
2005-03-09 23:35 Jakub Jelinek
2005-02-27 16:37 Zdenek Dvorak
2004-12-10 17:14 H. J. Lu
2004-12-10 17:02 H. J. Lu
2004-10-11 20:39 Patch Ping Tom Tromey
2004-10-12 23:35 ` Geoffrey Keating
2004-09-03 23:39 Patch ping H. J. Lu
2004-09-03 23:44 ` Richard Henderson
     [not found] <20040731163035.GA7104@troutmask.apl.washington.edu>
2004-08-06 20:45 ` Paul Brook
2004-07-08 14:50 jlquinn
2004-07-08 14:55 ` Roger Sayle
2004-07-08 15:26 ` Paolo Bonzini
2004-06-24  3:10 patch ping Ziemowit Laski
2004-06-23 19:35 Josef Zlomek
2004-06-21 22:57 Pat Haugen
2004-06-21 17:42 Jerry Quinn
2004-06-21 11:44 Patch ping Paolo Bonzini
2004-06-21 15:20 ` Roger Sayle
2004-06-14 13:11 Paul Brook
2004-06-14 17:14 ` Mark Mitchell
2004-06-14 17:36   ` Daniel Jacobowitz
2004-06-14 18:13     ` Paul Brook
2004-06-14 18:22       ` Daniel Jacobowitz
2004-06-15  0:08 ` Richard Henderson
2004-06-15 16:33   ` Paul Brook
2004-06-15 17:46     ` Richard Henderson
2004-06-10 16:48 Tobias Schlüter
2004-06-11  6:49 ` Steve Kargl
2004-05-29 19:51 patch ping jlquinn
2004-05-20 13:25 Patch ping Ben Elliston
2004-05-16 11:59 Richard Guenther
     [not found] <c7dcf6$uiq$1@sea.gmane.org>
     [not found] ` <16538.13954.773875.174452@cuddles.cambridge.redhat.com>
2004-05-06 19:00   ` Patch Ping Ranjit Mathew
2004-05-04 23:01 Patch ping Andrew Pinski
2004-04-28 13:35 Paul Brook
2004-04-28 13:51 ` Richard Earnshaw
2004-04-28 14:02   ` Paul Brook
2004-04-28 15:36     ` Richard Earnshaw
2004-04-28 13:56 ` Roger Sayle
2004-04-23 11:08 Zdenek Dvorak
2004-04-23 13:34 ` Nathan Sidwell
2004-04-23 13:48   ` Zdenek Dvorak
2004-04-23 14:18     ` Roger Sayle
2004-04-24 13:01   ` Aldy Hernandez
2004-04-24 19:40     ` Zdenek Dvorak
2004-04-25  3:39       ` Aldy Hernandez
2004-04-25 16:37     ` Zdenek Dvorak
2004-04-23  9:30 Paolo Bonzini
2004-04-19 19:21 Josef Zlomek
2004-04-20  0:50 ` Roger Sayle
2004-04-16 10:50 Paolo Bonzini
2004-04-16 21:16 ` Geoff Keating
2004-04-19  5:37   ` Andreas Jaeger
2004-04-19  9:45     ` Paolo Bonzini
2004-04-19  9:56       ` Arnaud Charlet
2004-04-19 10:01         ` Paolo Bonzini
2004-04-19 12:10           ` Arnaud Charlet
2004-04-19 10:07       ` Andreas Jaeger
2004-04-19 10:41       ` Laurent GUERBY
2004-04-19 11:13         ` Paolo Bonzini
2004-04-19 11:30       ` Andreas Jaeger
2004-04-19 12:38         ` Paolo Bonzini
2004-04-19 12:57       ` Andreas Schwab
2004-04-19 13:16         ` Paul Brook
2004-04-19 15:08           ` Richard Earnshaw
2004-04-19 13:20         ` Richard Earnshaw
2004-04-19  5:41   ` Andreas Jaeger
2004-03-26 15:38 Ian Lance Taylor
2004-03-24  2:53 patch ping Eric Christopher
2004-03-01 13:03 Patch ping Zdenek Dvorak
2004-03-02  9:33 ` Zack Weinberg
2004-03-19  8:14   ` Zack Weinberg
2004-03-19  8:14 ` Zdenek Dvorak
2004-02-17  0:25 patch ping Alan Modra
2004-02-21 13:45 ` Alan Modra
2004-02-12 21:22 Patch ping Zdenek Dvorak
2004-02-21 13:45 ` Zdenek Dvorak
2004-02-06  1:23 patch ping Alan Modra
2004-02-06  4:23 ` Roger Sayle
2004-02-21 13:45   ` Roger Sayle
2004-02-06 10:40 ` Andreas Schwab
2004-02-06 11:02   ` Alan Modra
2004-02-21 13:45     ` Alan Modra
2004-02-21 13:45   ` Andreas Schwab
2004-02-21 13:45 ` Alan Modra
2004-02-02  9:21 Patch ping Paolo Bonzini
2004-02-21 13:45 ` Paolo Bonzini
2003-10-19 11:11 Zdenek Dvorak
2003-10-19 17:39 ` Zack Weinberg
2003-10-19 18:38   ` Jan Hubicka
2003-10-19 18:41     ` Andreas Jaeger
2003-10-19 19:54       ` Zack Weinberg
2003-09-10  3:17 Jerry Quinn
2003-09-11 14:49 ` Jim Wilson
2002-12-18 11:59 Dale Johannesen
2002-07-12 15:33 patch ping Eric Christopher
