From mboxrd@z Thu Jan  1 00:00:00 1970
Mailing-List: contact gcc-patches-help@gcc.gnu.org; run by ezmlm
Message-ID: <557FE01B.9060206@arm.com>
Date: Tue, 16 Jun 2015 08:41:00 -0000
From: Kyrill Tkachov
To: Andrew Pinski
CC: GCC Patches, Ramana Radhakrishnan, Richard Earnshaw
Subject: Re: [PATCH][ARM] Add debug dumping of cost table fields
References: <55438E3F.8050205@arm.com> <55658268.7020403@arm.com>
X-SW-Source: 2015-06/txt/msg01065.txt.bz2

On 27/05/15 09:39, Andrew Pinski wrote:
> On Wed, May 27, 2015 at 4:38 PM, Kyrill Tkachov wrote:
>> Ping.
>> https://gcc.gnu.org/ml/gcc-patches/2015-05/msg00054.html
> This and the one in AArch64 are too noisy. Can we have an option to
> turn this on, with the default being off?

How about this? The new undocumented option can be used to turn on
verbose cost dumping. It is off by default.

Tested arm-none-eabi.

Ok for trunk?

Thanks,
Kyrill

2015-06-16  Kyrylo Tkachov

    * config/arm/arm.c (DBG_COST): New macro.
    (arm_new_rtx_costs): Use above.
    * config/arm/arm.opt (mdebug-rtx-costs): New option.

> Thanks,
> Andrew
>
>> Thanks,
>> Kyrill
>>
>> On 01/05/15 15:31, Kyrill Tkachov wrote:
>>> Hi all,
>>>
>>> This patch adds a macro to wrap cost field accesses in a helpful debug
>>> dump, saying which field is being accessed, at what line, and with what
>>> value. This helped me track down cases where the costs were doing the
>>> wrong thing by letting me see which path in arm_new_rtx_costs was taken.
>>> For example, the combine log might now contain:
>>>
>>> Trying 2 -> 6:
>>> Successfully matched this instruction:
>>> (set (reg:SI 115 [ D.5348 ])
>>>     (neg:SI (reg:SI 0 r0 [ a ])))
>>> using extra_cost->alu.arith with cost 0 from line 10506
>>>
>>> which can be useful in debugging the rtx costs.
>>>
>>> Bootstrapped and tested on arm.
>>>
>>> Ok for trunk?
>>>
>>> Thanks,
>>> Kyrill
>>>
>>> 2015-05-01  Kyrylo Tkachov
>>>
>>>     * config/arm/arm.c (DBG_COST): New macro.
>>>     (arm_new_rtx_costs): Use above.

[Attachment: arm-dbg-costs.patch]

commit 9db83ab2f7763f84445763150642fe418b06b1fe
Author: Kyrylo Tkachov
Date:   Thu Apr 2 13:37:20 2015 +0100

    [ARM] Add debug dumping of cost table fields.
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index 737d824..cae3c02 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -9322,6 +9322,12 @@ arm_unspec_cost (rtx x, enum rtx_code /* outer_code */, bool speed_p, int *cost)
     }						\
   while (0);
 
+
+#define DBG_COST(F) (((debug_rtx_costs \
+		       && dump_file && (dump_flags & TDF_DETAILS)) \
+		       ? fprintf (dump_file, "using "#F" with cost %d from line %d\n", \
+				  (F), __LINE__) : 0), (F))
+
 /* RTX costs.  Make an estimate of the cost of executing the operation
    X, which is contained with an operation with code OUTER_CODE.
    SPEED_P indicates whether the cost desired is the performance cost,
@@ -9422,7 +9428,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rtx_code outer_code,
 		      + arm_address_cost (XEXP (x, 0), mode,
 					  ADDR_SPACE_GENERIC, speed_p));
 #else
-	*cost += extra_cost->ldst.load;
+	*cost += DBG_COST (extra_cost->ldst.load);
 #endif
       return true;
 
@@ -9450,11 +9456,11 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rtx_code outer_code,
 	{
 	  HOST_WIDE_INT nregs = XVECLEN (x, 0);
 	  HOST_WIDE_INT regs_per_insn_1st = is_ldm
-			? extra_cost->ldst.ldm_regs_per_insn_1st
-			: extra_cost->ldst.stm_regs_per_insn_1st;
+			? DBG_COST (extra_cost->ldst.ldm_regs_per_insn_1st)
+			: DBG_COST (extra_cost->ldst.stm_regs_per_insn_1st);
 	  HOST_WIDE_INT regs_per_insn_sub = is_ldm
-			? extra_cost->ldst.ldm_regs_per_insn_subsequent
-			: extra_cost->ldst.stm_regs_per_insn_subsequent;
+			? DBG_COST (extra_cost->ldst.ldm_regs_per_insn_subsequent)
+			: DBG_COST (extra_cost->ldst.stm_regs_per_insn_subsequent);
 
 	  *cost += regs_per_insn_1st
 		   + COSTS_N_INSNS (((MAX (nregs - regs_per_insn_1st, 0))
@@ -9471,9 +9477,10 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rtx_code outer_code,
       if (TARGET_HARD_FLOAT && GET_MODE_CLASS (mode) == MODE_FLOAT
 	  && (mode == SFmode || !TARGET_VFP_SINGLE))
 	*cost += COSTS_N_INSNS (speed_p
-			       ? extra_cost->fp[mode != SFmode].div : 0);
+			       ? 
DBG_COST (extra_cost->fp[mode !=3D SFmode].div) : 0); else if (mode =3D=3D SImode && TARGET_IDIV) - *cost +=3D COSTS_N_INSNS (speed_p ? extra_cost->mult[0].idiv : 0); + *cost +=3D COSTS_N_INSNS (speed_p ? DBG_COST (extra_cost->mult[0].idiv) + : 0); else *cost =3D LIBCALL_COST (2); return false; /* All arguments must be in registers. */ @@ -9489,7 +9496,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, *cost +=3D (COSTS_N_INSNS (1) + rtx_cost (XEXP (x, 0), code, 0, speed_p)); if (speed_p) - *cost +=3D extra_cost->alu.shift_reg; + *cost +=3D DBG_COST (extra_cost->alu.shift_reg); return true; } /* Fall through */ @@ -9502,7 +9509,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, *cost +=3D (COSTS_N_INSNS (2) + rtx_cost (XEXP (x, 0), code, 0, speed_p)); if (speed_p) - *cost +=3D 2 * extra_cost->alu.shift; + *cost +=3D DBG_COST (2 * extra_cost->alu.shift); return true; } else if (mode =3D=3D SImode) @@ -9510,7 +9517,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, *cost +=3D rtx_cost (XEXP (x, 0), code, 0, speed_p); /* Slightly disparage register shifts at -Os, but not by much. */ if (!CONST_INT_P (XEXP (x, 1))) - *cost +=3D (speed_p ? extra_cost->alu.shift_reg : 1 + *cost +=3D (speed_p ? DBG_COST (extra_cost->alu.shift_reg) : 1 + rtx_cost (XEXP (x, 1), code, 1, speed_p)); return true; } @@ -9523,7 +9530,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, /* Slightly disparage register shifts at -Os, but not by much. */ if (!CONST_INT_P (XEXP (x, 1))) - *cost +=3D (speed_p ? extra_cost->alu.shift_reg : 1 + *cost +=3D (speed_p ? DBG_COST (extra_cost->alu.shift_reg) : 1 + rtx_cost (XEXP (x, 1), code, 1, speed_p)); } else if (code =3D=3D LSHIFTRT || code =3D=3D ASHIFTRT) @@ -9532,7 +9539,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, { /* Can use SBFX/UBFX. 
*/ if (speed_p) - *cost +=3D extra_cost->alu.bfx; + *cost +=3D DBG_COST (extra_cost->alu.bfx); *cost +=3D rtx_cost (XEXP (x, 0), code, 0, speed_p); } else @@ -9542,10 +9549,10 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, if (speed_p) { if (CONST_INT_P (XEXP (x, 1))) - *cost +=3D 2 * extra_cost->alu.shift; + *cost +=3D 2 * DBG_COST (extra_cost->alu.shift); else - *cost +=3D (extra_cost->alu.shift - + extra_cost->alu.shift_reg); + *cost +=3D (DBG_COST (extra_cost->alu.shift) + + DBG_COST (extra_cost->alu.shift_reg)); } else /* Slightly disparage register shifts. */ @@ -9559,12 +9566,12 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, if (speed_p) { if (CONST_INT_P (XEXP (x, 1))) - *cost +=3D (2 * extra_cost->alu.shift - + extra_cost->alu.log_shift); + *cost +=3D (DBG_COST (2 * extra_cost->alu.shift) + + DBG_COST (extra_cost->alu.log_shift)); else - *cost +=3D (extra_cost->alu.shift - + extra_cost->alu.shift_reg - + extra_cost->alu.log_shift_reg); + *cost +=3D (DBG_COST (extra_cost->alu.shift) + + DBG_COST (extra_cost->alu.shift_reg) + + DBG_COST (extra_cost->alu.log_shift_reg)); } } return true; @@ -9579,7 +9586,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, if (mode =3D=3D SImode) { if (speed_p) - *cost +=3D extra_cost->alu.rev; + *cost +=3D DBG_COST (extra_cost->alu.rev); =20 return false; } @@ -9594,8 +9601,8 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, =20 if (speed_p) { - *cost +=3D 6 * extra_cost->alu.shift; - *cost +=3D 3 * extra_cost->alu.logical; + *cost +=3D DBG_COST (6 * extra_cost->alu.shift); + *cost +=3D DBG_COST (3 * extra_cost->alu.logical); } } else @@ -9604,9 +9611,9 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, =20 if (speed_p) { - *cost +=3D 2 * extra_cost->alu.shift; - *cost +=3D extra_cost->alu.arith_shift; - *cost +=3D 2 * extra_cost->alu.logical; + *cost +=3D DBG_COST (2 * 
extra_cost->alu.shift); + *cost +=3D DBG_COST (extra_cost->alu.arith_shift); + *cost +=3D DBG_COST (2 * extra_cost->alu.logical); } } return true; @@ -9623,7 +9630,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, rtx mul_op0, mul_op1, sub_op; =20 if (speed_p) - *cost +=3D extra_cost->fp[mode !=3D SFmode].mult_addsub; + *cost +=3D DBG_COST (extra_cost->fp[mode !=3D SFmode].mult_addsub); =20 if (GET_CODE (XEXP (x, 0)) =3D=3D MULT) { @@ -9651,7 +9658,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, } =20 if (speed_p) - *cost +=3D extra_cost->fp[mode !=3D SFmode].addsub; + *cost +=3D DBG_COST (extra_cost->fp[mode !=3D SFmode].addsub); return false; } =20 @@ -9675,11 +9682,11 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, if (shift_by_reg !=3D NULL) { if (speed_p) - *cost +=3D extra_cost->alu.arith_shift_reg; + *cost +=3D DBG_COST (extra_cost->alu.arith_shift_reg); *cost +=3D rtx_cost (shift_by_reg, code, 0, speed_p); } else if (speed_p) - *cost +=3D extra_cost->alu.arith_shift; + *cost +=3D DBG_COST (extra_cost->alu.arith_shift); =20 *cost +=3D (rtx_cost (shift_op, code, 0, speed_p) + rtx_cost (non_shift_op, code, 0, speed_p)); @@ -9691,7 +9698,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, { /* MLS. 
*/ if (speed_p) - *cost +=3D extra_cost->mult[0].add; + *cost +=3D DBG_COST (extra_cost->mult[0].add); *cost +=3D (rtx_cost (XEXP (x, 0), MINUS, 0, speed_p) + rtx_cost (XEXP (XEXP (x, 1), 0), MULT, 0, speed_p) + rtx_cost (XEXP (XEXP (x, 1), 1), MULT, 1, speed_p)); @@ -9705,12 +9712,12 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, NULL_RTX, 1, 0); *cost =3D COSTS_N_INSNS (insns); if (speed_p) - *cost +=3D insns * extra_cost->alu.arith; + *cost +=3D insns * DBG_COST (extra_cost->alu.arith); *cost +=3D rtx_cost (XEXP (x, 1), code, 1, speed_p); return true; } else if (speed_p) - *cost +=3D extra_cost->alu.arith; + *cost +=3D DBG_COST (extra_cost->alu.arith); =20 return false; } @@ -9730,7 +9737,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, /* Slightly disparage, as we might need to widen the result. */ *cost +=3D 1; if (speed_p) - *cost +=3D extra_cost->alu.arith; + *cost +=3D DBG_COST (extra_cost->alu.arith); =20 if (CONST_INT_P (XEXP (x, 0))) { @@ -9750,7 +9757,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, rtx op1 =3D XEXP (x, 1); =20 if (speed_p) - *cost +=3D 2 * extra_cost->alu.arith; + *cost +=3D DBG_COST (2 * extra_cost->alu.arith); =20 if (GET_CODE (op1) =3D=3D ZERO_EXTEND) *cost +=3D rtx_cost (XEXP (op1, 0), ZERO_EXTEND, 0, speed_p); @@ -9763,7 +9770,8 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, else if (GET_CODE (XEXP (x, 0)) =3D=3D SIGN_EXTEND) { if (speed_p) - *cost +=3D extra_cost->alu.arith + extra_cost->alu.arith_shift; + *cost +=3D DBG_COST (extra_cost->alu.arith + + extra_cost->alu.arith_shift); *cost +=3D (rtx_cost (XEXP (XEXP (x, 0), 0), SIGN_EXTEND, 0, speed_p) + rtx_cost (XEXP (x, 1), MINUS, 1, speed_p)); @@ -9773,10 +9781,10 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, || GET_CODE (XEXP (x, 1)) =3D=3D SIGN_EXTEND) { if (speed_p) - *cost +=3D (extra_cost->alu.arith + *cost +=3D (DBG_COST 
(extra_cost->alu.arith) + (GET_CODE (XEXP (x, 1)) =3D=3D ZERO_EXTEND - ? extra_cost->alu.arith - : extra_cost->alu.arith_shift)); + ? DBG_COST (extra_cost->alu.arith) + : DBG_COST (extra_cost->alu.arith_shift))); *cost +=3D (rtx_cost (XEXP (x, 0), MINUS, 0, speed_p) + rtx_cost (XEXP (XEXP (x, 1), 0), GET_CODE (XEXP (x, 1)), 0, speed_p)); @@ -9784,7 +9792,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, } =20 if (speed_p) - *cost +=3D 2 * extra_cost->alu.arith; + *cost +=3D DBG_COST (2 * extra_cost->alu.arith); return false; } =20 @@ -9802,7 +9810,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, rtx mul_op0, mul_op1, add_op; =20 if (speed_p) - *cost +=3D extra_cost->fp[mode !=3D SFmode].mult_addsub; + *cost +=3D DBG_COST (extra_cost->fp[mode !=3D SFmode].mult_addsub); =20 mul_op0 =3D XEXP (XEXP (x, 0), 0); mul_op1 =3D XEXP (XEXP (x, 0), 1); @@ -9816,7 +9824,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, } =20 if (speed_p) - *cost +=3D extra_cost->fp[mode !=3D SFmode].addsub; + *cost +=3D DBG_COST (extra_cost->fp[mode !=3D SFmode].addsub); return false; } else if (GET_MODE_CLASS (mode) =3D=3D MODE_FLOAT) @@ -9844,7 +9852,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, NULL_RTX, 1, 0); *cost =3D COSTS_N_INSNS (insns); if (speed_p) - *cost +=3D insns * extra_cost->alu.arith; + *cost +=3D insns * DBG_COST (extra_cost->alu.arith); /* Slightly penalize a narrow operation as the result may need widening. */ *cost +=3D 1 + rtx_cost (XEXP (x, 0), PLUS, 0, speed_p); @@ -9855,7 +9863,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, need widening. */ *cost +=3D 1; if (speed_p) - *cost +=3D extra_cost->alu.arith; + *cost +=3D DBG_COST (extra_cost->alu.arith); =20 return false; } @@ -9870,7 +9878,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, { /* UXTA[BH] or SXTA[BH]. 
*/ if (speed_p) - *cost +=3D extra_cost->alu.extend_arith; + *cost +=3D DBG_COST (extra_cost->alu.extend_arith); *cost +=3D (rtx_cost (XEXP (XEXP (x, 0), 0), ZERO_EXTEND, 0, speed_p) + rtx_cost (XEXP (x, 1), PLUS, 0, speed_p)); @@ -9884,11 +9892,11 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, if (shift_reg) { if (speed_p) - *cost +=3D extra_cost->alu.arith_shift_reg; + *cost +=3D DBG_COST (extra_cost->alu.arith_shift_reg); *cost +=3D rtx_cost (shift_reg, ASHIFT, 1, speed_p); } else if (speed_p) - *cost +=3D extra_cost->alu.arith_shift; + *cost +=3D DBG_COST (extra_cost->alu.arith_shift); =20 *cost +=3D (rtx_cost (shift_op, ASHIFT, 0, speed_p) + rtx_cost (XEXP (x, 1), PLUS, 1, speed_p)); @@ -9915,7 +9923,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, { /* SMLA[BT][BT]. */ if (speed_p) - *cost +=3D extra_cost->mult[0].extend_add; + *cost +=3D DBG_COST (extra_cost->mult[0].extend_add); *cost +=3D (rtx_cost (XEXP (XEXP (mul_op, 0), 0), SIGN_EXTEND, 0, speed_p) + rtx_cost (XEXP (XEXP (mul_op, 1), 0), @@ -9938,12 +9946,12 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, NULL_RTX, 1, 0); *cost =3D COSTS_N_INSNS (insns); if (speed_p) - *cost +=3D insns * extra_cost->alu.arith; + *cost +=3D insns * DBG_COST (extra_cost->alu.arith); *cost +=3D rtx_cost (XEXP (x, 0), PLUS, 0, speed_p); return true; } else if (speed_p) - *cost +=3D extra_cost->alu.arith; + *cost +=3D DBG_COST (extra_cost->alu.arith); =20 return false; } @@ -9958,7 +9966,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, && GET_CODE (XEXP (XEXP (x, 0), 1)) =3D=3D SIGN_EXTEND))) { if (speed_p) - *cost +=3D extra_cost->mult[1].extend_add; + *cost +=3D DBG_COST (extra_cost->mult[1].extend_add); *cost +=3D (rtx_cost (XEXP (XEXP (XEXP (x, 0), 0), 0), ZERO_EXTEND, 0, speed_p) + rtx_cost (XEXP (XEXP (XEXP (x, 0), 1), 0), @@ -9973,10 +9981,10 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = 
rtx_code outer_code, || GET_CODE (XEXP (x, 0)) =3D=3D SIGN_EXTEND) { if (speed_p) - *cost +=3D (extra_cost->alu.arith + *cost +=3D (DBG_COST (extra_cost->alu.arith) + (GET_CODE (XEXP (x, 0)) =3D=3D ZERO_EXTEND - ? extra_cost->alu.arith - : extra_cost->alu.arith_shift)); + ? DBG_COST (extra_cost->alu.arith) + : DBG_COST (extra_cost->alu.arith_shift))); =20 *cost +=3D (rtx_cost (XEXP (XEXP (x, 0), 0), ZERO_EXTEND, 0, speed_p) @@ -9985,7 +9993,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum rt= x_code outer_code, } =20 if (speed_p) - *cost +=3D 2 * extra_cost->alu.arith; + *cost +=3D DBG_COST (2 * extra_cost->alu.arith); return false; } =20 @@ -10000,7 +10008,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, *cost +=3D rtx_cost (inner, BSWAP, 0 , speed_p); =20 if (speed_p) - *cost +=3D extra_cost->alu.rev; + *cost +=3D DBG_COST (extra_cost->alu.rev); =20 return true; } @@ -10025,11 +10033,11 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code outer_code, if (shift_reg) { if (speed_p) - *cost +=3D extra_cost->alu.log_shift_reg; + *cost +=3D DBG_COST (extra_cost->alu.log_shift_reg); *cost +=3D rtx_cost (shift_reg, ASHIFT, 1, speed_p); } else if (speed_p) - *cost +=3D extra_cost->alu.log_shift; + *cost +=3D DBG_COST (extra_cost->alu.log_shift); =20 *cost +=3D (rtx_cost (shift_op, ASHIFT, 0, speed_p) + rtx_cost (XEXP (x, 1), code, 1, speed_p)); @@ -10044,13 +10052,13 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code outer_code, =20 *cost =3D COSTS_N_INSNS (insns); if (speed_p) - *cost +=3D insns * extra_cost->alu.logical; + *cost +=3D insns * DBG_COST (extra_cost->alu.logical); *cost +=3D rtx_cost (op0, code, 0, speed_p); return true; } =20 if (speed_p) - *cost +=3D extra_cost->alu.logical; + *cost +=3D DBG_COST (extra_cost->alu.logical); *cost +=3D (rtx_cost (op0, code, 0, speed_p) + rtx_cost (XEXP (x, 1), code, 1, speed_p)); return true; @@ -10071,7 +10079,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, 
enum = rtx_code outer_code, if (GET_CODE (op0) =3D=3D ZERO_EXTEND) { if (speed_p) - *cost +=3D 2 * extra_cost->alu.logical; + *cost +=3D DBG_COST (2 * extra_cost->alu.logical); =20 *cost +=3D (rtx_cost (XEXP (op0, 0), ZERO_EXTEND, 0, speed_p) + rtx_cost (XEXP (x, 1), code, 0, speed_p)); @@ -10080,7 +10088,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, else if (GET_CODE (op0) =3D=3D SIGN_EXTEND) { if (speed_p) - *cost +=3D extra_cost->alu.logical + extra_cost->alu.log_shift; + *cost +=3D DBG_COST (extra_cost->alu.logical + extra_cost->alu.log_shift= ); =20 *cost +=3D (rtx_cost (XEXP (op0, 0), SIGN_EXTEND, 0, speed_p) + rtx_cost (XEXP (x, 1), code, 0, speed_p)); @@ -10088,7 +10096,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, } =20 if (speed_p) - *cost +=3D 2 * extra_cost->alu.logical; + *cost +=3D DBG_COST (2 * extra_cost->alu.logical); =20 return true; } @@ -10107,7 +10115,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, op0 =3D XEXP (op0, 0); =20 if (speed_p) - *cost +=3D extra_cost->fp[mode !=3D SFmode].mult; + *cost +=3D DBG_COST (extra_cost->fp[mode !=3D SFmode].mult); =20 *cost +=3D (rtx_cost (op0, MULT, 0, speed_p) + rtx_cost (XEXP (x, 1), MULT, 1, speed_p)); @@ -10138,13 +10146,13 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code outer_code, { /* SMUL[TB][TB]. 
*/ if (speed_p) - *cost +=3D extra_cost->mult[0].extend; + *cost +=3D DBG_COST (extra_cost->mult[0].extend); *cost +=3D (rtx_cost (XEXP (x, 0), SIGN_EXTEND, 0, speed_p) + rtx_cost (XEXP (x, 1), SIGN_EXTEND, 0, speed_p)); return true; } if (speed_p) - *cost +=3D extra_cost->mult[0].simple; + *cost +=3D DBG_COST (extra_cost->mult[0].simple); return false; } =20 @@ -10157,7 +10165,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, && GET_CODE (XEXP (x, 1)) =3D=3D SIGN_EXTEND))) { if (speed_p) - *cost +=3D extra_cost->mult[1].extend; + *cost +=3D DBG_COST (extra_cost->mult[1].extend); *cost +=3D (rtx_cost (XEXP (XEXP (x, 0), 0), ZERO_EXTEND, 0, speed_p) + rtx_cost (XEXP (XEXP (x, 1), 0), @@ -10178,7 +10186,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, && (mode =3D=3D SFmode || !TARGET_VFP_SINGLE)) { if (speed_p) - *cost +=3D extra_cost->fp[mode !=3D SFmode].neg; + *cost +=3D DBG_COST (extra_cost->fp[mode !=3D SFmode].neg); =20 return false; } @@ -10195,8 +10203,8 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, *cost +=3D COSTS_N_INSNS (1); /* Assume the non-flag-changing variant. */ if (speed_p) - *cost +=3D (extra_cost->alu.log_shift - + extra_cost->alu.arith_shift); + *cost +=3D (DBG_COST (extra_cost->alu.log_shift) + + DBG_COST (extra_cost->alu.arith_shift)); *cost +=3D rtx_cost (XEXP (XEXP (x, 0), 0), ABS, 0, speed_p); return true; } @@ -10218,13 +10226,13 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code outer_code, + rtx_cost (XEXP (XEXP (x, 0), 1), COMPARE, 1, speed_p)); if (speed_p) - *cost +=3D extra_cost->alu.arith; + *cost +=3D DBG_COST (extra_cost->alu.arith); } return true; } =20 if (speed_p) - *cost +=3D extra_cost->alu.arith; + *cost +=3D DBG_COST (extra_cost->alu.arith); return false; } =20 @@ -10234,7 +10242,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, /* Slightly disparage, as we might need an extend operation. 
*/ *cost +=3D 1; if (speed_p) - *cost +=3D extra_cost->alu.arith; + *cost +=3D DBG_COST (extra_cost->alu.arith); return false; } =20 @@ -10242,7 +10250,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, { *cost +=3D COSTS_N_INSNS (1); if (speed_p) - *cost +=3D 2 * extra_cost->alu.arith; + *cost +=3D 2 * DBG_COST (extra_cost->alu.arith); return false; } =20 @@ -10263,17 +10271,17 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code outer_code, if (shift_reg !=3D NULL) { if (speed_p) - *cost +=3D extra_cost->alu.log_shift_reg; + *cost +=3D DBG_COST (extra_cost->alu.log_shift_reg); *cost +=3D rtx_cost (shift_reg, ASHIFT, 1, speed_p); } else if (speed_p) - *cost +=3D extra_cost->alu.log_shift; + *cost +=3D DBG_COST (extra_cost->alu.log_shift); *cost +=3D rtx_cost (shift_op, ASHIFT, 0, speed_p); return true; } =20 if (speed_p) - *cost +=3D extra_cost->alu.logical; + *cost +=3D DBG_COST (extra_cost->alu.logical); return false; } if (mode =3D=3D DImode) @@ -10310,9 +10318,9 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, if (speed_p) { if (extra_cost->alu.non_exec_costs_exec) - *cost +=3D op1cost + op2cost + extra_cost->alu.non_exec; + *cost +=3D op1cost + op2cost + DBG_COST (extra_cost->alu.non_exec); else - *cost +=3D MAX (op1cost, op2cost) + extra_cost->alu.non_exec; + *cost +=3D MAX (op1cost, op2cost) + DBG_COST (extra_cost->alu.non_exec= ); } else *cost +=3D op1cost + op2cost; @@ -10335,7 +10343,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, && (op0mode =3D=3D SFmode || !TARGET_VFP_SINGLE)) { if (speed_p) - *cost +=3D extra_cost->fp[op0mode !=3D SFmode].compare; + *cost +=3D DBG_COST (extra_cost->fp[op0mode !=3D SFmode].compare); =20 if (XEXP (x, 1) =3D=3D CONST0_RTX (op0mode)) { @@ -10356,7 +10364,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, { *cost +=3D COSTS_N_INSNS (1); if (speed_p) - *cost +=3D 2 * extra_cost->alu.arith; + *cost 
+=3D DBG_COST (2 * extra_cost->alu.arith); return false; } =20 @@ -10377,14 +10385,14 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code outer_code, if (speed_p && GET_CODE (XEXP (x, 0)) =3D=3D MULT && !power_of_two_operand (XEXP (XEXP (x, 0), 1), mode)) - *cost +=3D extra_cost->mult[0].flag_setting; + *cost +=3D DBG_COST (extra_cost->mult[0].flag_setting); =20 if (speed_p && GET_CODE (XEXP (x, 0)) =3D=3D PLUS && GET_CODE (XEXP (XEXP (x, 0), 0)) =3D=3D MULT && !power_of_two_operand (XEXP (XEXP (XEXP (x, 0), 0), 1), mode)) - *cost +=3D extra_cost->mult[0].flag_setting; + *cost +=3D DBG_COST (extra_cost->mult[0].flag_setting); return true; } =20 @@ -10396,17 +10404,17 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code outer_code, { *cost +=3D rtx_cost (shift_reg, ASHIFT, 1, speed_p); if (speed_p) - *cost +=3D extra_cost->alu.arith_shift_reg; + *cost +=3D DBG_COST (extra_cost->alu.arith_shift_reg); } else if (speed_p) - *cost +=3D extra_cost->alu.arith_shift; + *cost +=3D DBG_COST (extra_cost->alu.arith_shift); *cost +=3D (rtx_cost (shift_op, ASHIFT, 0, speed_p) + rtx_cost (XEXP (x, 1), COMPARE, 1, speed_p)); return true; } =20 if (speed_p) - *cost +=3D extra_cost->alu.arith; + *cost +=3D DBG_COST (extra_cost->alu.arith); if (CONST_INT_P (XEXP (x, 1)) && const_ok_for_op (INTVAL (XEXP (x, 1)), COMPARE)) { @@ -10458,7 +10466,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, case LT: /* LSR Rd, Rn, #31. */ if (speed_p) - *cost +=3D extra_cost->alu.shift; + *cost +=3D DBG_COST (extra_cost->alu.shift); break; =20 case EQ: @@ -10476,7 +10484,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, ADC Rd, Rn, T1. */ *cost +=3D COSTS_N_INSNS (1); if (speed_p) - *cost +=3D extra_cost->alu.arith_shift; + *cost +=3D DBG_COST (extra_cost->alu.arith_shift); break; =20 case GT: @@ -10484,8 +10492,8 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, LSR Rd, Rd, #31. 
*/ *cost +=3D COSTS_N_INSNS (1); if (speed_p) - *cost +=3D (extra_cost->alu.arith_shift - + extra_cost->alu.shift); + *cost +=3D (DBG_COST (extra_cost->alu.arith_shift) + + DBG_COST (extra_cost->alu.shift)); break; =20 case GE: @@ -10493,7 +10501,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, ADD Rd, Rn, #1. */ *cost +=3D COSTS_N_INSNS (1); if (speed_p) - *cost +=3D extra_cost->alu.shift; + *cost +=3D DBG_COST (extra_cost->alu.shift); break; =20 default: @@ -10536,7 +10544,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, && (mode =3D=3D SFmode || !TARGET_VFP_SINGLE)) { if (speed_p) - *cost +=3D extra_cost->fp[mode !=3D SFmode].neg; + *cost +=3D DBG_COST (extra_cost->fp[mode !=3D SFmode].neg); =20 return false; } @@ -10549,7 +10557,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, if (mode =3D=3D SImode) { if (speed_p) - *cost +=3D extra_cost->alu.log_shift + extra_cost->alu.arith_shift; + *cost +=3D DBG_COST (extra_cost->alu.log_shift + extra_cost->alu.arit= h_shift); return false; } /* Vector mode? */ @@ -10569,12 +10577,12 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code outer_code, return true; =20 if (GET_MODE (XEXP (x, 0)) =3D=3D SImode) - *cost +=3D extra_cost->ldst.load; + *cost +=3D DBG_COST (extra_cost->ldst.load); else - *cost +=3D extra_cost->ldst.load_sign_extend; + *cost +=3D DBG_COST (extra_cost->ldst.load_sign_extend); =20 if (mode =3D=3D DImode) - *cost +=3D extra_cost->alu.shift; + *cost +=3D DBG_COST (extra_cost->alu.shift); =20 return true; } @@ -10585,7 +10593,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, /* We have SXTB/SXTH. 
*/ *cost +=3D rtx_cost (XEXP (x, 0), code, 0, speed_p); if (speed_p) - *cost +=3D extra_cost->alu.extend; + *cost +=3D DBG_COST (extra_cost->alu.extend); } else if (GET_MODE (XEXP (x, 0)) !=3D SImode) { @@ -10593,7 +10601,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, *cost +=3D COSTS_N_INSNS (1); *cost +=3D rtx_cost (XEXP (x, 0), code, 0, speed_p); if (speed_p) - *cost +=3D 2 * extra_cost->alu.shift; + *cost +=3D DBG_COST (2 * extra_cost->alu.shift); } =20 /* Widening beyond 32-bits requires one more insn. */ @@ -10601,7 +10609,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, { *cost +=3D COSTS_N_INSNS (1); if (speed_p) - *cost +=3D extra_cost->alu.shift; + *cost +=3D DBG_COST (extra_cost->alu.shift); } =20 return true; @@ -10629,14 +10637,14 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code outer_code, AND, and we don't really model 16-bit vs 32-bit insns here. */ if (speed_p) - *cost +=3D extra_cost->alu.logical; + *cost +=3D DBG_COST (extra_cost->alu.logical); } else if (GET_MODE (XEXP (x, 0)) !=3D SImode && arm_arch6) { /* We have UXTB/UXTH. */ *cost +=3D rtx_cost (XEXP (x, 0), code, 0, speed_p); if (speed_p) - *cost +=3D extra_cost->alu.extend; + *cost +=3D DBG_COST (extra_cost->alu.extend); } else if (GET_MODE (XEXP (x, 0)) !=3D SImode) { @@ -10647,7 +10655,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, *cost +=3D COSTS_N_INSNS (1); *cost +=3D rtx_cost (XEXP (x, 0), code, 0, speed_p); if (speed_p) - *cost +=3D 2 * extra_cost->alu.shift; + *cost +=3D DBG_COST (2 * extra_cost->alu.shift); } =20 /* Widening beyond 32-bits requires one more insn. 
*/ @@ -10708,7 +10716,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, if (arm_arch_thumb2 && !flag_pic) *cost +=3D COSTS_N_INSNS (1); else - *cost +=3D extra_cost->ldst.load; + *cost +=3D DBG_COST (extra_cost->ldst.load); } else *cost +=3D COSTS_N_INSNS (1); @@ -10717,7 +10725,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, { *cost +=3D COSTS_N_INSNS (1); if (speed_p) - *cost +=3D extra_cost->alu.arith; + *cost +=3D DBG_COST (extra_cost->alu.arith); } =20 return true; @@ -10734,16 +10742,16 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code outer_code, if (vfp3_const_double_rtx (x)) { if (speed_p) - *cost +=3D extra_cost->fp[mode =3D=3D DFmode].fpconst; + *cost +=3D DBG_COST (extra_cost->fp[mode =3D=3D DFmode].fpconst); return true; } =20 if (speed_p) { if (mode =3D=3D DFmode) - *cost +=3D extra_cost->ldst.loadd; + *cost +=3D DBG_COST (extra_cost->ldst.loadd); else - *cost +=3D extra_cost->ldst.loadf; + *cost +=3D DBG_COST (extra_cost->ldst.loadf); } else *cost +=3D COSTS_N_INSNS (1 + (mode =3D=3D DFmode)); @@ -10774,14 +10782,14 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code outer_code, =20 case CLZ: if (speed_p) - *cost +=3D extra_cost->alu.clz; + *cost +=3D DBG_COST (extra_cost->alu.clz); return false; =20 case SMIN: if (XEXP (x, 1) =3D=3D const0_rtx) { if (speed_p) - *cost +=3D extra_cost->alu.log_shift; + *cost +=3D DBG_COST (extra_cost->alu.log_shift); *cost +=3D rtx_cost (XEXP (x, 0), code, 0, speed_p); return true; } @@ -10804,7 +10812,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, =3D=3D ZERO_EXTEND)))) { if (speed_p) - *cost +=3D extra_cost->mult[1].extend; + *cost +=3D DBG_COST (extra_cost->mult[1].extend); *cost +=3D (rtx_cost (XEXP (XEXP (XEXP (x, 0), 0), 0), ZERO_EXTEND, 0, speed_p) + rtx_cost (XEXP (XEXP (XEXP (x, 0), 0), 1), ZERO_EXTEND, @@ -10834,14 +10842,14 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enu= m rtx_code 
outer_code, && CONST_INT_P (XEXP (x, 2))) { if (speed_p) - *cost +=3D extra_cost->alu.bfx; + *cost +=3D DBG_COST (extra_cost->alu.bfx); *cost +=3D rtx_cost (XEXP (x, 0), code, 0, speed_p); return true; } /* Without UBFX/SBFX, need to resort to shift operations. */ *cost +=3D COSTS_N_INSNS (1); if (speed_p) - *cost +=3D 2 * extra_cost->alu.shift; + *cost +=3D DBG_COST (2 * extra_cost->alu.shift); *cost +=3D rtx_cost (XEXP (x, 0), ASHIFT, 0, speed_p); return true; =20 @@ -10849,7 +10857,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, if (TARGET_HARD_FLOAT) { if (speed_p) - *cost +=3D extra_cost->fp[mode =3D=3D DFmode].widen; + *cost +=3D DBG_COST (extra_cost->fp[mode =3D=3D DFmode].widen); if (!TARGET_FPU_ARMV8 && GET_MODE (XEXP (x, 0)) =3D=3D HFmode) { @@ -10857,7 +10865,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, widening to SFmode. */ *cost +=3D COSTS_N_INSNS (1); if (speed_p) - *cost +=3D extra_cost->fp[0].widen; + *cost +=3D DBG_COST (extra_cost->fp[0].widen); } *cost +=3D rtx_cost (XEXP (x, 0), code, 0, speed_p); return true; @@ -10870,7 +10878,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, if (TARGET_HARD_FLOAT) { if (speed_p) - *cost +=3D extra_cost->fp[mode =3D=3D DFmode].narrow; + *cost +=3D DBG_COST (extra_cost->fp[mode =3D=3D DFmode].narrow); *cost +=3D rtx_cost (XEXP (x, 0), code, 0, speed_p); return true; /* Vector modes? 
*/ @@ -10899,7 +10907,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, *cost +=3D rtx_cost (op2, FMA, 2, speed_p); =20 if (speed_p) - *cost +=3D extra_cost->fp[mode =3D=3DDFmode].fma; + *cost +=3D DBG_COST (extra_cost->fp[mode =3D=3DDFmode].fma); =20 return true; } @@ -10914,7 +10922,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, if (GET_MODE_CLASS (mode) =3D=3D MODE_INT) { if (speed_p) - *cost +=3D extra_cost->fp[GET_MODE (XEXP (x, 0)) =3D=3D DFmode].toint; + *cost +=3D DBG_COST (extra_cost->fp[GET_MODE (XEXP (x, 0)) =3D=3D DFmode= ].toint); /* Strip of the 'cost' of rounding towards zero. */ if (GET_CODE (XEXP (x, 0)) =3D=3D FIX) *cost +=3D rtx_cost (XEXP (XEXP (x, 0), 0), code, 0, speed_p); @@ -10928,7 +10936,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, && TARGET_FPU_ARMV8) { if (speed_p) - *cost +=3D extra_cost->fp[mode =3D=3D DFmode].roundint; + *cost +=3D DBG_COST (extra_cost->fp[mode =3D=3D DFmode].roundint); return false; } /* Vector costs? */ @@ -10943,7 +10951,7 @@ arm_new_rtx_costs (rtx x, enum rtx_code code, enum = rtx_code outer_code, /* ??? Increase the cost to deal with transferring from CORE -> FP registers? */ if (speed_p) - *cost +=3D extra_cost->fp[mode =3D=3D DFmode].fromint; + *cost +=3D DBG_COST (extra_cost->fp[mode =3D=3D DFmode].fromint); return false; } *cost =3D LIBCALL_COST (1); @@ -11016,7 +11024,7 @@ arm_rtx_costs (rtx x, int code, int outer_code, int= opno ATTRIBUTE_UNUSED, &generic_extra_costs, total, speed); } =20 - if (dump_file && (dump_flags & TDF_DETAILS)) + if (debug_rtx_costs && dump_file && (dump_flags & TDF_DETAILS)) { print_rtl_single (dump_file, x); fprintf (dump_file, "\n%s cost: %d (%s)\n", speed ? 
"Hot" : "Cold",

diff --git a/gcc/config/arm/arm.opt b/gcc/config/arm/arm.opt
index d4ff164..1f29125 100644
--- a/gcc/config/arm/arm.opt
+++ b/gcc/config/arm/arm.opt
@@ -277,3 +277,7 @@ Assume loading data from flash is slower than fetching instructions.
 masm-syntax-unified
 Target Report Var(inline_asm_unified) Init(0)
 Assume unified syntax for Thumb inline assembly code.
+
+mdebug-rtx-costs
+Target Undocumented Var(debug_rtx_costs) Init(0)
+Dump more detailed rtx costs in debug dumps.