public inbox for gcc-patches@gcc.gnu.org
* [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
@ 2015-03-31  4:38 Mikhail Maltsev
From: Mikhail Maltsev @ 2015-03-31  4:38 UTC (permalink / raw)
  To: Jeff Law, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 2525 bytes --]

Hi!

I'm currently working on the proposed task of replacing rtx objects
(i.e. struct rtx_def) with derived classes. I would like to get some
feedback on this work. It is far from finished, but basically I would
like to know whether my modifications are appropriate; e.g. one might
consider them "too much" for a pure refactoring, because they
sometimes involve small changes of semantics.

The attached patch is not yet well tested: so far I have only
bootstrapped and regtested it on x86_64, but I'll perform more
extensive testing before submitting the next version.

The key points I would like to ask about:

1. The original task was to replace the rtx type with rtx_insn * where
appropriate. But rtx_insn itself has several derived classes, such as
rtx_code_label, so I tried to use the most derived type possible. Is
that OK?
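
To illustrate what I mean by the most derived type (a minimal sketch;
the function itself is invented for illustration, but it uses the real
dyn_cast helper from is-a.h):

/* Sketch only: return rtx_code_label * rather than rtx_insn * when
   the caller is really looking for a label.  */
static rtx_code_label *
first_label_after (rtx_insn *insn)
{
  for (; insn != NULL; insn = NEXT_INSN (insn))
    if (rtx_code_label *label = dyn_cast <rtx_code_label *> (insn))
      return label;
  return NULL;
}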

2. Not all of these "type promotions" can be derived just by looking
at function callers and callees and the checks they already perform
(especially since some functions are only generated when building
certain rare architectures). In a couple of cases I relied on comments
and my general understanding of the code's semantics. In one case this
actually caused a regression (fixed in the patch, of course) because
of a somewhat misleading comment (see the "live_label_rtx" function
added in the patch for details). The question is: are such changes OK
for a refactoring, or should it strictly preserve semantics?
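
Roughly speaking, live_label_rtx is meant to be a checked variant of
label_rtx; the following sketch shows the idea (see the patch for the
actual code):

/* Sketch: like label_rtx, but the caller asserts the label is still
   live, so the checked downcast to rtx_code_label * is safe.  */
rtx_code_label *
live_label_rtx (tree label)
{
  return as_a <rtx_code_label *> (label_rtx (label));
}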

3. In lra-constraints.c I added a new class, rtx_usage_list, which,
IMHO, groups the functions that work with the usage list in a more
explicit manner and makes some conditions more self-explanatory. I
hope that Vladimir Makarov (in this case, because it concerns LRA) and
other authors will not object to such an "intrusion" into their code
(or will at least tell me what should be fixed in my patch(es) rather
than just refusing to apply it/them). In general, are such changes OK,
or should they be avoided?
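
To give an idea of the intent (the shape below is purely illustrative,
with invented member names, not the interface from the patch):

/* Illustrative only: turn the open-coded usage-list navigation in
   lra-constraints.c into members of a dedicated rtx subclass.  */
class rtx_usage_list : public rtx_def
{
public:
  rtx_usage_list *next () const;   /* next element of the usage list  */
  bool debug_p () const;           /* usage comes from a debug insn  */
};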

A couple of questions related to further work:

1. I noticed that the emit_insn function in fact does two kinds of
things: it can either add its argument to the insn chain or, if the
argument is a pattern, create a new instruction based on that pattern.
Shouldn't this logic be separated out into the callers?
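
In other words, something like the following split (hypothetical
names, just to show the two behaviors separately):

/* Hypothetical interface: make emit_insn's two behaviors explicit.  */
extern rtx_insn *emit_insn_chain (rtx_insn *insn);  /* append INSN to the chain  */
extern rtx_insn *emit_insn_pattern (rtx pattern);   /* wrap PATTERN in a new insn  */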

2. Are there any plans to implement a better class hierarchy for ASTs
(the "union tree_node" type)? I see that the C++ FE uses a huge number
of macros (which check TREE_CODE and some boolean flags). Could this
be improved somehow?
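
For example, a typical FE accessor has roughly this shape (the macro
below is invented for illustration, but TREE_CODE and the lang-flag
accessors are real):

/* Illustrative only: a TREE_CODE check combined with a flag test,
   instead of relying on the static type of the node.  */
#define SOME_PRED_P(NODE) \
  (TREE_CODE (NODE) == FUNCTION_DECL && TREE_LANG_FLAG_0 (NODE))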

-- 
Regards,
    Mikhail Maltsev

[-- Attachment #2: as_insn.patch --]
[-- Type: text/plain, Size: 110619 bytes --]

diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
index c2a3be3..7179faa 100644
--- a/gcc/bb-reorder.c
+++ b/gcc/bb-reorder.c
@@ -1745,9 +1745,11 @@ set_edge_can_fallthru_flag (void)
 	continue;
       if (!any_condjump_p (BB_END (bb)))
 	continue;
-      if (!invert_jump (BB_END (bb), JUMP_LABEL (BB_END (bb)), 0))
+
+      rtx_jump_insn *bb_end_jump = as_a <rtx_jump_insn *> (BB_END (bb));
+      if (!invert_jump (bb_end_jump, JUMP_LABEL (bb_end_jump), 0))
 	continue;
-      invert_jump (BB_END (bb), JUMP_LABEL (BB_END (bb)), 0);
+      invert_jump (bb_end_jump, JUMP_LABEL (bb_end_jump), 0);
       EDGE_SUCC (bb, 0)->flags |= EDGE_CAN_FALLTHRU;
       EDGE_SUCC (bb, 1)->flags |= EDGE_CAN_FALLTHRU;
     }
@@ -1902,9 +1904,15 @@ fix_up_fall_thru_edges (void)
 
 		      fall_thru_label = block_label (fall_thru->dest);
 
-		      if (old_jump && JUMP_P (old_jump) && fall_thru_label)
-			invert_worked = invert_jump (old_jump,
-						     fall_thru_label,0);
+		      if (old_jump && fall_thru_label)
+                        {
+                          rtx_jump_insn *old_jump_insn =
+                                  dyn_cast <rtx_jump_insn *> (old_jump);
+                          if (old_jump_insn)
+                            invert_worked = invert_jump (old_jump_insn,
+						     fall_thru_label, 0);
+                        }
+
 		      if (invert_worked)
 			{
 			  fall_thru->flags &= ~EDGE_FALLTHRU;
@@ -2024,7 +2032,7 @@ fix_crossing_conditional_branches (void)
   rtx_insn *old_jump;
   rtx set_src;
   rtx old_label = NULL_RTX;
-  rtx new_label;
+  rtx_code_label *new_label;
 
   FOR_EACH_BB_FN (cur_bb, cfun)
     {
@@ -2088,7 +2096,7 @@ fix_crossing_conditional_branches (void)
 	      else
 		{
 		  basic_block last_bb;
-		  rtx_insn *new_jump;
+		  rtx_insn *new_jump, *old_label_insn;
 
 		  /* Create new basic block to be dest for
 		     conditional jump.  */
@@ -2099,9 +2107,9 @@ fix_crossing_conditional_branches (void)
 		  emit_label (new_label);
 
 		  gcc_assert (GET_CODE (old_label) == LABEL_REF);
-		  old_label = JUMP_LABEL (old_jump);
-		  new_jump = emit_jump_insn (gen_jump (old_label));
-		  JUMP_LABEL (new_jump) = old_label;
+		  old_label_insn = JUMP_LABEL_AS_INSN (old_jump);
+		  new_jump = emit_jump_insn (gen_jump (old_label_insn));
+		  JUMP_LABEL (new_jump) = old_label_insn;
 
 		  last_bb = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
 		  new_bb = create_basic_block (new_label, new_jump, last_bb);
@@ -2117,7 +2125,7 @@ fix_crossing_conditional_branches (void)
 
 	      /* Make old jump branch to new bb.  */
 
-	      redirect_jump (old_jump, new_label, 0);
+	      redirect_jump (as_a <rtx_jump_insn *> (old_jump), new_label, 0);
 
 	      /* Remove crossing_edge as predecessor of 'dest'.  */
 
diff --git a/gcc/bt-load.c b/gcc/bt-load.c
index c028281..2280124 100644
--- a/gcc/bt-load.c
+++ b/gcc/bt-load.c
@@ -1212,7 +1212,7 @@ move_btr_def (basic_block new_def_bb, int btr, btr_def def, bitmap live_range,
   btr_mode = GET_MODE (SET_DEST (set));
   btr_rtx = gen_rtx_REG (btr_mode, btr);
 
-  new_insn = as_a <rtx_insn *> (gen_move_insn (btr_rtx, src));
+  new_insn = gen_move_insn (btr_rtx, src);
 
   /* Insert target register initialization at head of basic block.  */
   def->insn = emit_insn_after (new_insn, insp);
diff --git a/gcc/builtins.c b/gcc/builtins.c
index 9263777..945492e 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -2001,7 +2001,7 @@ expand_errno_check (tree exp, rtx target)
   /* Test the result; if it is NaN, set errno=EDOM because
      the argument was not in the domain.  */
   do_compare_rtx_and_jump (target, target, EQ, 0, GET_MODE (target),
-			   NULL_RTX, NULL_RTX, lab,
+			   NULL_RTX, NULL, lab,
 			   /* The jump is very likely.  */
 			   REG_BR_PROB_BASE - (REG_BR_PROB_BASE / 2000 - 1));
 
@@ -5938,9 +5938,9 @@ expand_builtin_acc_on_device (tree exp, rtx target)
   emit_move_insn (target, const1_rtx);
   rtx_code_label *done_label = gen_label_rtx ();
   do_compare_rtx_and_jump (v, v1, EQ, false, v_mode, NULL_RTX,
-			   NULL_RTX, done_label, PROB_EVEN);
+			   NULL, done_label, PROB_EVEN);
   do_compare_rtx_and_jump (v, v2, EQ, false, v_mode, NULL_RTX,
-			   NULL_RTX, done_label, PROB_EVEN);
+			   NULL, done_label, PROB_EVEN);
   emit_move_insn (target, const0_rtx);
   emit_label (done_label);
 
diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
index cee152e..05146b6 100644
--- a/gcc/cfgcleanup.c
+++ b/gcc/cfgcleanup.c
@@ -190,7 +190,8 @@ try_simplify_condjump (basic_block cbranch_block)
     return false;
 
   /* Invert the conditional branch.  */
-  if (!invert_jump (cbranch_insn, block_label (jump_dest_block), 0))
+  if (!invert_jump (as_a <rtx_jump_insn *> (cbranch_insn),
+                    block_label (jump_dest_block), 0))
     return false;
 
   if (dump_file)
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 97e7a25..aedc4b8 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -2051,7 +2051,7 @@ static hash_map<basic_block, rtx_code_label *> *lab_rtx_for_bb;
 
 /* Returns the label_rtx expression for a label starting basic block BB.  */
 
-static rtx
+static rtx_code_label *
 label_rtx_for_bb (basic_block bb ATTRIBUTE_UNUSED)
 {
   gimple_stmt_iterator gsi;
@@ -2078,7 +2078,7 @@ label_rtx_for_bb (basic_block bb ATTRIBUTE_UNUSED)
       if (DECL_NONLOCAL (lab))
 	break;
 
-      return label_rtx (lab);
+      return live_label_rtx (lab);
     }
 
   rtx_code_label *l = gen_label_rtx ();
@@ -5579,7 +5579,7 @@ construct_init_block (void)
     {
       tree label = gimple_block_label (e->dest);
 
-      emit_jump (label_rtx (label));
+      emit_jump (live_label_rtx (label));
       flags = 0;
     }
   else
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 0e27edd..7da23e7 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -1001,18 +1001,18 @@ rtl_can_merge_blocks (basic_block a, basic_block b)
 /* Return the label in the head of basic block BLOCK.  Create one if it doesn't
    exist.  */
 
-rtx
+rtx_code_label *
 block_label (basic_block block)
 {
   if (block == EXIT_BLOCK_PTR_FOR_FN (cfun))
-    return NULL_RTX;
+    return NULL;
 
   if (!LABEL_P (BB_HEAD (block)))
     {
       BB_HEAD (block) = emit_label_before (gen_label_rtx (), BB_HEAD (block));
     }
 
-  return BB_HEAD (block);
+  return as_a <rtx_code_label *> (BB_HEAD (block));
 }
 
 /* Attempt to perform edge redirection by replacing possibly complex jump
@@ -1114,7 +1114,8 @@ try_redirect_by_replacing_jump (edge e, basic_block target, bool in_cfglayout)
       if (dump_file)
 	fprintf (dump_file, "Redirecting jump %i from %i to %i.\n",
 		 INSN_UID (insn), e->dest->index, target->index);
-      if (!redirect_jump (insn, block_label (target), 0))
+      if (!redirect_jump (as_a <rtx_jump_insn *> (insn),
+                          block_label (target), 0))
 	{
 	  gcc_assert (target == EXIT_BLOCK_PTR_FOR_FN (cfun));
 	  return NULL;
@@ -1298,7 +1299,8 @@ patch_jump_insn (rtx_insn *insn, rtx_insn *old_label, basic_block new_bb)
 	  /* If the substitution doesn't succeed, die.  This can happen
 	     if the back end emitted unrecognizable instructions or if
 	     target is exit block on some arches.  */
-	  if (!redirect_jump (insn, block_label (new_bb), 0))
+	  if (!redirect_jump (as_a <rtx_jump_insn *> (insn),
+                              block_label (new_bb), 0))
 	    {
 	      gcc_assert (new_bb == EXIT_BLOCK_PTR_FOR_FN (cfun));
 	      return false;
@@ -1326,7 +1328,7 @@ redirect_branch_edge (edge e, basic_block target)
 
   if (!currently_expanding_to_rtl)
     {
-      if (!patch_jump_insn (insn, old_label, target))
+      if (!patch_jump_insn (as_a <rtx_jump_insn *> (insn), old_label, target))
 	return NULL;
     }
   else
@@ -1334,7 +1336,8 @@ redirect_branch_edge (edge e, basic_block target)
        jumps (i.e. not yet split by find_many_sub_basic_blocks).
        Redirect all of those that match our label.  */
     FOR_BB_INSNS (src, insn)
-      if (JUMP_P (insn) && !patch_jump_insn (insn, old_label, target))
+      if (JUMP_P (insn) && !patch_jump_insn (as_a <rtx_jump_insn *> (insn),
+                                             old_label, target))
 	return NULL;
 
   if (dump_file)
@@ -1525,7 +1528,8 @@ force_nonfallthru_and_redirect (edge e, basic_block target, rtx jump_label)
       edge b = unchecked_make_edge (e->src, target, 0);
       bool redirected;
 
-      redirected = redirect_jump (BB_END (e->src), block_label (target), 0);
+      redirected = redirect_jump (as_a <rtx_jump_insn *> (BB_END (e->src)),
+                                  block_label (target), 0);
       gcc_assert (redirected);
 
       note = find_reg_note (BB_END (e->src), REG_BR_PROB, NULL_RTX);
@@ -3783,10 +3787,10 @@ fixup_reorder_chain (void)
 	  e_taken = e;
 
       bb_end_insn = BB_END (bb);
-      if (JUMP_P (bb_end_insn))
+      if (rtx_jump_insn *bb_end_jump = dyn_cast <rtx_jump_insn *> (bb_end_insn))
 	{
-	  ret_label = JUMP_LABEL (bb_end_insn);
-	  if (any_condjump_p (bb_end_insn))
+	  ret_label = JUMP_LABEL (bb_end_jump);
+	  if (any_condjump_p (bb_end_jump))
 	    {
 	      /* This might happen if the conditional jump has side
 		 effects and could therefore not be optimized away.
@@ -3794,10 +3798,10 @@ fixup_reorder_chain (void)
 		 to prevent rtl_verify_flow_info from complaining.  */
 	      if (!e_fall)
 		{
-		  gcc_assert (!onlyjump_p (bb_end_insn)
-			      || returnjump_p (bb_end_insn)
+		  gcc_assert (!onlyjump_p (bb_end_jump)
+			      || returnjump_p (bb_end_jump)
                               || (e_taken->flags & EDGE_CROSSING));
-		  emit_barrier_after (bb_end_insn);
+		  emit_barrier_after (bb_end_jump);
 		  continue;
 		}
 
@@ -3819,11 +3823,11 @@ fixup_reorder_chain (void)
 		 edge based on known or assumed probability.  */
 	      else if (bb->aux != e_taken->dest)
 		{
-		  rtx note = find_reg_note (bb_end_insn, REG_BR_PROB, 0);
+		  rtx note = find_reg_note (bb_end_jump, REG_BR_PROB, 0);
 
 		  if (note
 		      && XINT (note, 0) < REG_BR_PROB_BASE / 2
-		      && invert_jump (bb_end_insn,
+		      && invert_jump (bb_end_jump,
 				      (e_fall->dest
 				       == EXIT_BLOCK_PTR_FOR_FN (cfun)
 				       ? NULL_RTX
@@ -3846,7 +3850,7 @@ fixup_reorder_chain (void)
 
 	      /* Otherwise we can try to invert the jump.  This will
 		 basically never fail, however, keep up the pretense.  */
-	      else if (invert_jump (bb_end_insn,
+	      else if (invert_jump (bb_end_jump,
 				    (e_fall->dest
 				     == EXIT_BLOCK_PTR_FOR_FN (cfun)
 				     ? NULL_RTX
@@ -4967,7 +4971,7 @@ rtl_lv_add_condition_to_bb (basic_block first_head ,
 			    basic_block second_head ATTRIBUTE_UNUSED,
 			    basic_block cond_bb, void *comp_rtx)
 {
-  rtx label;
+  rtx_code_label *label;
   rtx_insn *seq, *jump;
   rtx op0 = XEXP ((rtx)comp_rtx, 0);
   rtx op1 = XEXP ((rtx)comp_rtx, 1);
@@ -4983,8 +4987,7 @@ rtl_lv_add_condition_to_bb (basic_block first_head ,
   start_sequence ();
   op0 = force_operand (op0, NULL_RTX);
   op1 = force_operand (op1, NULL_RTX);
-  do_compare_rtx_and_jump (op0, op1, comp, 0,
-			   mode, NULL_RTX, NULL_RTX, label, -1);
+  do_compare_rtx_and_jump (op0, op1, comp, 0, mode, NULL_RTX, NULL, label, -1);
   jump = get_last_insn ();
   JUMP_LABEL (jump) = label;
   LABEL_NUSES (label)++;
diff --git a/gcc/cfgrtl.h b/gcc/cfgrtl.h
index 32c8ff6..cdf1477 100644
--- a/gcc/cfgrtl.h
+++ b/gcc/cfgrtl.h
@@ -33,7 +33,7 @@ extern bool contains_no_active_insn_p (const_basic_block);
 extern bool forwarder_block_p (const_basic_block);
 extern bool can_fallthru (basic_block, basic_block);
 extern rtx_note *bb_note (basic_block);
-extern rtx block_label (basic_block);
+extern rtx_code_label *block_label (basic_block);
 extern edge try_redirect_by_replacing_jump (edge, basic_block, bool);
 extern void emit_barrier_after_bb (basic_block bb);
 extern basic_block force_nonfallthru_and_redirect (edge, basic_block, rtx);
diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index 22bc81f..b6c71b2 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -38448,7 +38448,7 @@ ix86_emit_cmove (rtx dst, rtx src, enum rtx_code code, rtx op1, rtx op2)
     }
   else
     {
-      rtx nomove = gen_label_rtx ();
+      rtx_code_label *nomove = gen_label_rtx ();
       emit_cmp_and_jump_insns (op1, op2, reverse_condition (code),
 			       const0_rtx, GET_MODE (op1), 1, nomove);
       emit_move_insn (dst, src);
diff --git a/gcc/dojump.c b/gcc/dojump.c
index ad356ba..42dc479 100644
--- a/gcc/dojump.c
+++ b/gcc/dojump.c
@@ -61,10 +61,12 @@ along with GCC; see the file COPYING3.  If not see
 #include "tm_p.h"
 
 static bool prefer_and_bit_test (machine_mode, int);
-static void do_jump_by_parts_greater (tree, tree, int, rtx, rtx, int);
-static void do_jump_by_parts_equality (tree, tree, rtx, rtx, int);
-static void do_compare_and_jump	(tree, tree, enum rtx_code, enum rtx_code, rtx,
-				 rtx, int);
+static void do_jump_by_parts_greater (tree, tree, int,
+				      rtx_code_label *, rtx_code_label *, int);
+static void do_jump_by_parts_equality (tree, tree, rtx_code_label *,
+				       rtx_code_label *, int);
+static void do_compare_and_jump	(tree, tree, enum rtx_code, enum rtx_code,
+				 rtx_code_label *, rtx_code_label *, int);
 
 /* Invert probability if there is any.  -1 stands for unknown.  */
 
@@ -146,34 +148,34 @@ restore_pending_stack_adjust (saved_pending_stack_adjust *save)
 \f
 /* Expand conditional expressions.  */
 
-/* Generate code to evaluate EXP and jump to LABEL if the value is zero.
-   LABEL is an rtx of code CODE_LABEL, in this function and all the
-   functions here.  */
+/* Generate code to evaluate EXP and jump to LABEL if the value is zero.  */
 
 void
-jumpifnot (tree exp, rtx label, int prob)
+jumpifnot (tree exp, rtx_code_label *label, int prob)
 {
-  do_jump (exp, label, NULL_RTX, inv (prob));
+  do_jump (exp, label, NULL, inv (prob));
 }
 
 void
-jumpifnot_1 (enum tree_code code, tree op0, tree op1, rtx label, int prob)
+jumpifnot_1 (enum tree_code code, tree op0, tree op1, rtx_code_label *label,
+	     int prob)
 {
-  do_jump_1 (code, op0, op1, label, NULL_RTX, inv (prob));
+  do_jump_1 (code, op0, op1, label, NULL, inv (prob));
 }
 
 /* Generate code to evaluate EXP and jump to LABEL if the value is nonzero.  */
 
 void
-jumpif (tree exp, rtx label, int prob)
+jumpif (tree exp, rtx_code_label *label, int prob)
 {
-  do_jump (exp, NULL_RTX, label, prob);
+  do_jump (exp, NULL, label, prob);
 }
 
 void
-jumpif_1 (enum tree_code code, tree op0, tree op1, rtx label, int prob)
+jumpif_1 (enum tree_code code, tree op0, tree op1,
+	  rtx_code_label *label, int prob)
 {
-  do_jump_1 (code, op0, op1, NULL_RTX, label, prob);
+  do_jump_1 (code, op0, op1, NULL, label, prob);
 }
 
 /* Used internally by prefer_and_bit_test.  */
@@ -225,7 +227,8 @@ prefer_and_bit_test (machine_mode mode, int bitnum)
 
 void
 do_jump_1 (enum tree_code code, tree op0, tree op1,
-	   rtx if_false_label, rtx if_true_label, int prob)
+	   rtx_code_label *if_false_label, rtx_code_label *if_true_label,
+	   int prob)
 {
   machine_mode mode;
   rtx_code_label *drop_through_label = 0;
@@ -378,15 +381,15 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
             op0_prob = inv (op0_false_prob);
             op1_prob = inv (op1_false_prob);
           }
-        if (if_false_label == NULL_RTX)
+        if (if_false_label == NULL)
           {
             drop_through_label = gen_label_rtx ();
-            do_jump (op0, drop_through_label, NULL_RTX, op0_prob);
-            do_jump (op1, NULL_RTX, if_true_label, op1_prob);
+            do_jump (op0, drop_through_label, NULL, op0_prob);
+            do_jump (op1, NULL, if_true_label, op1_prob);
           }
         else
           {
-            do_jump (op0, if_false_label, NULL_RTX, op0_prob);
+            do_jump (op0, if_false_label, NULL, op0_prob);
             do_jump (op1, if_false_label, if_true_label, op1_prob);
           }
         break;
@@ -405,18 +408,18 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
           {
             op0_prob = prob / 2;
             op1_prob = GCOV_COMPUTE_SCALE ((prob / 2), inv (op0_prob));
-          }
-        if (if_true_label == NULL_RTX)
-          {
-            drop_through_label = gen_label_rtx ();
-            do_jump (op0, NULL_RTX, drop_through_label, op0_prob);
-            do_jump (op1, if_false_label, NULL_RTX, op1_prob);
-          }
-        else
-          {
-            do_jump (op0, NULL_RTX, if_true_label, op0_prob);
-            do_jump (op1, if_false_label, if_true_label, op1_prob);
-          }
+	  }
+	if (if_true_label == NULL)
+	  {
+	    drop_through_label = gen_label_rtx ();
+	    do_jump (op0, NULL, drop_through_label, op0_prob);
+	    do_jump (op1, if_false_label, NULL, op1_prob);
+	  }
+	else
+	  {
+	    do_jump (op0, NULL, if_true_label, op0_prob);
+	    do_jump (op1, if_false_label, if_true_label, op1_prob);
+	  }
         break;
       }
 
@@ -443,14 +446,15 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
    PROB is probability of jump to if_true_label, or -1 if unknown.  */
 
 void
-do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
+do_jump (tree exp, rtx_code_label *if_false_label,
+	 rtx_code_label *if_true_label, int prob)
 {
   enum tree_code code = TREE_CODE (exp);
   rtx temp;
   int i;
   tree type;
   machine_mode mode;
-  rtx_code_label *drop_through_label = 0;
+  rtx_code_label *drop_through_label = NULL;
 
   switch (code)
     {
@@ -458,10 +462,13 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
       break;
 
     case INTEGER_CST:
-      temp = integer_zerop (exp) ? if_false_label : if_true_label;
-      if (temp)
-        emit_jump (temp);
-      break;
+      {
+	rtx_code_label *lab = integer_zerop (exp) ? if_false_label
+						  : if_true_label;
+	if (lab)
+	  emit_jump (lab);
+	break;
+      }
 
 #if 0
       /* This is not true with #pragma weak  */
@@ -511,7 +518,7 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
 	  }
 
         do_pending_stack_adjust ();
-	do_jump (TREE_OPERAND (exp, 0), label1, NULL_RTX, -1);
+	do_jump (TREE_OPERAND (exp, 0), label1, NULL, -1);
 	do_jump (TREE_OPERAND (exp, 1), if_false_label, if_true_label, prob);
         emit_label (label1);
 	do_jump (TREE_OPERAND (exp, 2), if_false_label, if_true_label, prob);
@@ -555,7 +562,7 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
       if (integer_onep (TREE_OPERAND (exp, 1)))
 	{
 	  tree exp0 = TREE_OPERAND (exp, 0);
-	  rtx set_label, clr_label;
+	  rtx_code_label *set_label, *clr_label;
 	  int setclr_prob = prob;
 
 	  /* Strip narrowing integral type conversions.  */
@@ -684,11 +691,12 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
 
 static void
 do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
-			      rtx op1, rtx if_false_label, rtx if_true_label,
+			      rtx op1, rtx_code_label *if_false_label,
+			      rtx_code_label *if_true_label,
 			      int prob)
 {
   int nwords = (GET_MODE_SIZE (mode) / UNITS_PER_WORD);
-  rtx drop_through_label = 0;
+  rtx_code_label *drop_through_label = 0;
   bool drop_through_if_true = false, drop_through_if_false = false;
   enum rtx_code code = GT;
   int i;
@@ -735,7 +743,7 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
 
       /* All but high-order word must be compared as unsigned.  */
       do_compare_rtx_and_jump (op0_word, op1_word, code, (unsignedp || i > 0),
-			       word_mode, NULL_RTX, NULL_RTX, if_true_label,
+			       word_mode, NULL_RTX, NULL, if_true_label,
 			       prob);
 
       /* Emit only one comparison for 0.  Do not emit the last cond jump.  */
@@ -744,7 +752,7 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
 
       /* Consider lower words only if these are equal.  */
       do_compare_rtx_and_jump (op0_word, op1_word, NE, unsignedp, word_mode,
-			       NULL_RTX, NULL_RTX, if_false_label, inv (prob));
+			       NULL_RTX, NULL, if_false_label, inv (prob));
     }
 
   if (!drop_through_if_false)
@@ -760,7 +768,8 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
 
 static void
 do_jump_by_parts_greater (tree treeop0, tree treeop1, int swap,
-			  rtx if_false_label, rtx if_true_label, int prob)
+			  rtx_code_label *if_false_label,
+			  rtx_code_label *if_true_label, int prob)
 {
   rtx op0 = expand_normal (swap ? treeop1 : treeop0);
   rtx op1 = expand_normal (swap ? treeop0 : treeop1);
@@ -773,17 +782,18 @@ do_jump_by_parts_greater (tree treeop0, tree treeop1, int swap,
 \f
 /* Jump according to whether OP0 is 0.  We assume that OP0 has an integer
    mode, MODE, that is too wide for the available compare insns.  Either
-   Either (but not both) of IF_TRUE_LABEL and IF_FALSE_LABEL may be NULL_RTX
+   Either (but not both) of IF_TRUE_LABEL and IF_FALSE_LABEL may be NULL
    to indicate drop through.  */
 
 static void
 do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
-			   rtx if_false_label, rtx if_true_label, int prob)
+			   rtx_code_label *if_false_label,
+			   rtx_code_label *if_true_label, int prob)
 {
   int nwords = GET_MODE_SIZE (mode) / UNITS_PER_WORD;
   rtx part;
   int i;
-  rtx drop_through_label = 0;
+  rtx_code_label *drop_through_label = NULL;
 
   /* The fastest way of doing this comparison on almost any machine is to
      "or" all the words and compare the result.  If all have to be loaded
@@ -806,12 +816,12 @@ do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
 
   /* If we couldn't do the "or" simply, do this with a series of compares.  */
   if (! if_false_label)
-    drop_through_label = if_false_label = gen_label_rtx ();
+    if_false_label = drop_through_label = gen_label_rtx ();
 
   for (i = 0; i < nwords; i++)
     do_compare_rtx_and_jump (operand_subword_force (op0, i, mode),
                              const0_rtx, EQ, 1, word_mode, NULL_RTX,
-			     if_false_label, NULL_RTX, prob);
+			     if_false_label, NULL, prob);
 
   if (if_true_label)
     emit_jump (if_true_label);
@@ -827,10 +837,11 @@ do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
 
 static void
 do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
-			       rtx if_false_label, rtx if_true_label, int prob)
+			       rtx_code_label *if_false_label,
+			       rtx_code_label *if_true_label, int prob)
 {
   int nwords = (GET_MODE_SIZE (mode) / UNITS_PER_WORD);
-  rtx drop_through_label = 0;
+  rtx_code_label *drop_through_label = NULL;
   int i;
 
   if (op1 == const0_rtx)
@@ -853,7 +864,7 @@ do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
     do_compare_rtx_and_jump (operand_subword_force (op0, i, mode),
                              operand_subword_force (op1, i, mode),
                              EQ, 0, word_mode, NULL_RTX,
-			     if_false_label, NULL_RTX, prob);
+			     if_false_label, NULL, prob);
 
   if (if_true_label)
     emit_jump (if_true_label);
@@ -865,8 +876,9 @@ do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
    with one insn, test the comparison and jump to the appropriate label.  */
 
 static void
-do_jump_by_parts_equality (tree treeop0, tree treeop1, rtx if_false_label,
-			   rtx if_true_label, int prob)
+do_jump_by_parts_equality (tree treeop0, tree treeop1,
+			   rtx_code_label *if_false_label,
+			   rtx_code_label *if_true_label, int prob)
 {
   rtx op0 = expand_normal (treeop0);
   rtx op1 = expand_normal (treeop1);
@@ -961,11 +973,12 @@ split_comparison (enum rtx_code code, machine_mode mode,
 
 void
 do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
-			 machine_mode mode, rtx size, rtx if_false_label,
-			 rtx if_true_label, int prob)
+			 machine_mode mode, rtx size,
+			 rtx_code_label *if_false_label,
+			 rtx_code_label *if_true_label, int prob)
 {
   rtx tem;
-  rtx dummy_label = NULL;
+  rtx_code_label *dummy_label = NULL;
 
   /* Reverse the comparison if that is safe and we want to jump if it is
      false.  Also convert to the reverse comparison if the target can
@@ -987,9 +1000,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
       if (can_compare_p (rcode, mode, ccp_jump)
 	  || (code == ORDERED && ! can_compare_p (ORDERED, mode, ccp_jump)))
 	{
-          tem = if_true_label;
-          if_true_label = if_false_label;
-          if_false_label = tem;
+	  std::swap (if_true_label, if_false_label);
 	  code = rcode;
 	  prob = inv (prob);
 	}
@@ -1000,9 +1011,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 
   if (swap_commutative_operands_p (op0, op1))
     {
-      tem = op0;
-      op0 = op1;
-      op1 = tem;
+      std::swap (op0, op1);
       code = swap_condition (code);
     }
 
@@ -1014,8 +1023,9 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
     {
       if (CONSTANT_P (tem))
 	{
-	  rtx label = (tem == const0_rtx || tem == CONST0_RTX (mode))
-		      ? if_false_label : if_true_label;
+	  rtx_code_label *label = (tem == const0_rtx
+				   || tem == CONST0_RTX (mode)) ?
+				       if_false_label : if_true_label;
 	  if (label)
 	    emit_jump (label);
 	  return;
@@ -1134,7 +1144,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 		first_prob = REG_BR_PROB_BASE - REG_BR_PROB_BASE / 100;
 	      if (and_them)
 		{
-		  rtx dest_label;
+		  rtx_code_label *dest_label;
 		  /* If we only jump if true, just bypass the second jump.  */
 		  if (! if_false_label)
 		    {
@@ -1145,13 +1155,11 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 		  else
 		    dest_label = if_false_label;
                   do_compare_rtx_and_jump (op0, op1, first_code, unsignedp, mode,
-					   size, dest_label, NULL_RTX,
-					   first_prob);
+					   size, dest_label, NULL, first_prob);
 		}
               else
                 do_compare_rtx_and_jump (op0, op1, first_code, unsignedp, mode,
-					 size, NULL_RTX, if_true_label,
-					 first_prob);
+					 size, NULL, if_true_label, first_prob);
 	    }
 	}
 
@@ -1177,8 +1185,9 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 
 static void
 do_compare_and_jump (tree treeop0, tree treeop1, enum rtx_code signed_code,
-		     enum rtx_code unsigned_code, rtx if_false_label,
-		     rtx if_true_label, int prob)
+		     enum rtx_code unsigned_code,
+		     rtx_code_label *if_false_label,
+		     rtx_code_label *if_true_label, int prob)
 {
   rtx op0, op1;
   tree type;
diff --git a/gcc/dojump.h b/gcc/dojump.h
index 74d3f37..1b64ea7 100644
--- a/gcc/dojump.h
+++ b/gcc/dojump.h
@@ -57,20 +57,23 @@ extern void save_pending_stack_adjust (saved_pending_stack_adjust *);
 extern void restore_pending_stack_adjust (saved_pending_stack_adjust *);
 
 /* Generate code to evaluate EXP and jump to LABEL if the value is zero.  */
-extern void jumpifnot (tree, rtx, int);
-extern void jumpifnot_1 (enum tree_code, tree, tree, rtx, int);
+extern void jumpifnot (tree exp, rtx_code_label *label, int prob);
+extern void jumpifnot_1 (enum tree_code, tree, tree, rtx_code_label *, int);
 
 /* Generate code to evaluate EXP and jump to LABEL if the value is nonzero.  */
-extern void jumpif (tree, rtx, int);
-extern void jumpif_1 (enum tree_code, tree, tree, rtx, int);
+extern void jumpif (tree exp, rtx_code_label *label, int prob);
+extern void jumpif_1 (enum tree_code, tree, tree, rtx_code_label *, int);
 
 /* Generate code to evaluate EXP and jump to IF_FALSE_LABEL if
    the result is zero, or IF_TRUE_LABEL if the result is one.  */
-extern void do_jump (tree, rtx, rtx, int);
-extern void do_jump_1 (enum tree_code, tree, tree, rtx, rtx, int);
+extern void do_jump (tree exp, rtx_code_label *if_false_label,
+		     rtx_code_label *if_true_label, int prob);
+extern void do_jump_1 (enum tree_code, tree, tree, rtx_code_label *,
+		       rtx_code_label *, int);
 
 extern void do_compare_rtx_and_jump (rtx, rtx, enum rtx_code, int,
-				     machine_mode, rtx, rtx, rtx, int);
+				     machine_mode, rtx, rtx_code_label *,
+				     rtx_code_label *, int);
 
 extern bool split_comparison (enum rtx_code, machine_mode,
 			      enum rtx_code *, enum rtx_code *);
diff --git a/gcc/dse.c b/gcc/dse.c
index 2bb20d7..e923ea6 100644
--- a/gcc/dse.c
+++ b/gcc/dse.c
@@ -907,7 +907,7 @@ emit_inc_dec_insn_before (rtx mem ATTRIBUTE_UNUSED,
       end_sequence ();
     }
   else
-    new_insn = as_a <rtx_insn *> (gen_move_insn (dest, src));
+    new_insn = gen_move_insn (dest, src);
   info.first = new_insn;
   info.fixed_regs_live = insn_info->fixed_regs_live;
   info.failure = false;
diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index 483eacb..8b12b10 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -4463,13 +4463,15 @@ emit_barrier_before (rtx before)
 
 /* Emit the label LABEL before the insn BEFORE.  */
 
-rtx_insn *
-emit_label_before (rtx label, rtx_insn *before)
+rtx_code_label *
+emit_label_before (rtx uncast_label, rtx_insn *before)
 {
+  rtx_code_label *label = as_a <rtx_code_label *> (uncast_label);
+
   gcc_checking_assert (INSN_UID (label) == 0);
   INSN_UID (label) = cur_insn_uid++;
   add_insn_before (label, before, NULL);
-  return as_a <rtx_insn *> (label);
+  return label;
 }
 \f
 /* Helper for emit_insn_after, handles lists of instructions
@@ -5090,13 +5092,15 @@ emit_call_insn (rtx x)
 
 /* Add the label LABEL to the end of the doubly-linked list.  */
 
-rtx_insn *
-emit_label (rtx label)
+rtx_code_label *
+emit_label (rtx uncast_label)
 {
+  rtx_code_label *label = as_a <rtx_code_label *> (uncast_label);
+
   gcc_checking_assert (INSN_UID (label) == 0);
   INSN_UID (label) = cur_insn_uid++;
-  add_insn (as_a <rtx_insn *> (label));
-  return as_a <rtx_insn *> (label);
+  add_insn (label);
+  return label;
 }
 
 /* Make an insn of code JUMP_TABLE_DATA
@@ -5357,7 +5361,7 @@ emit (rtx x)
   switch (code)
     {
     case CODE_LABEL:
-      return emit_label (x);
+      return emit_label (as_a <rtx_code_label *> (x));
     case INSN:
       return emit_insn (x);
     case  JUMP_INSN:
diff --git a/gcc/except.c b/gcc/except.c
index 833ec21..90ffbd1 100644
--- a/gcc/except.c
+++ b/gcc/except.c
@@ -1354,7 +1354,7 @@ sjlj_emit_dispatch_table (rtx_code_label *dispatch_label, int num_dispatch)
     if (lp && lp->post_landing_pad)
       {
 	rtx_insn *seq2;
-	rtx label;
+	rtx_code_label *label;
 
 	start_sequence ();
 
@@ -1368,7 +1368,7 @@ sjlj_emit_dispatch_table (rtx_code_label *dispatch_label, int num_dispatch)
 	    t = build_int_cst (integer_type_node, disp_index);
 	    case_elt = build_case_label (t, NULL, t_label);
 	    dispatch_labels.quick_push (case_elt);
-	    label = label_rtx (t_label);
+	    label = live_label_rtx (t_label);
 	  }
 	else
 	  label = gen_label_rtx ();
diff --git a/gcc/explow.c b/gcc/explow.c
index de446a9..57cb767 100644
--- a/gcc/explow.c
+++ b/gcc/explow.c
@@ -984,7 +984,7 @@ emit_stack_save (enum save_level save_level, rtx *psave)
 {
   rtx sa = *psave;
   /* The default is that we use a move insn and save in a Pmode object.  */
-  rtx (*fcn) (rtx, rtx) = gen_move_insn;
+  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
   machine_mode mode = STACK_SAVEAREA_MODE (save_level);
 
   /* See if this machine has anything special to do for this kind of save.  */
@@ -1039,7 +1039,7 @@ void
 emit_stack_restore (enum save_level save_level, rtx sa)
 {
   /* The default is that we use a move insn.  */
-  rtx (*fcn) (rtx, rtx) = gen_move_insn;
+  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
 
   /* If stack_realign_drap, the x86 backend emits a prologue that aligns both
      STACK_POINTER and HARD_FRAME_POINTER.
diff --git a/gcc/expmed.c b/gcc/expmed.c
index e0b2619..ccfb25a 100644
--- a/gcc/expmed.c
+++ b/gcc/expmed.c
@@ -5799,8 +5799,8 @@ emit_store_flag_force (rtx target, enum rtx_code code, rtx op0, rtx op1,
       && op1 == const0_rtx)
     {
       label = gen_label_rtx ();
-      do_compare_rtx_and_jump (target, const0_rtx, EQ, unsignedp,
-			       mode, NULL_RTX, NULL_RTX, label, -1);
+      do_compare_rtx_and_jump (target, const0_rtx, EQ, unsignedp, mode,
+			       NULL_RTX, NULL, label, -1);
       emit_move_insn (target, trueval);
       emit_label (label);
       return target;
@@ -5837,8 +5837,8 @@ emit_store_flag_force (rtx target, enum rtx_code code, rtx op0, rtx op1,
 
   emit_move_insn (target, trueval);
   label = gen_label_rtx ();
-  do_compare_rtx_and_jump (op0, op1, code, unsignedp, mode, NULL_RTX,
-			   NULL_RTX, label, -1);
+  do_compare_rtx_and_jump (op0, op1, code, unsignedp, mode, NULL_RTX, NULL,
+			   label, -1);
 
   emit_move_insn (target, falseval);
   emit_label (label);
@@ -5855,6 +5855,6 @@ do_cmp_and_jump (rtx arg1, rtx arg2, enum rtx_code op, machine_mode mode,
 		 rtx_code_label *label)
 {
   int unsignedp = (op == LTU || op == LEU || op == GTU || op == GEU);
-  do_compare_rtx_and_jump (arg1, arg2, op, unsignedp, mode,
-			   NULL_RTX, NULL_RTX, label, -1);
+  do_compare_rtx_and_jump (arg1, arg2, op, unsignedp, mode, NULL_RTX,
+			   NULL, label, -1);
 }
diff --git a/gcc/expr.c b/gcc/expr.c
index dc13a14..a7066be 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -3652,7 +3652,7 @@ emit_move_insn (rtx x, rtx y)
 /* Generate the body of an instruction to copy Y into X.
    It may be a list of insns, if one insn isn't enough.  */
 
-rtx
+rtx_insn *
 gen_move_insn (rtx x, rtx y)
 {
   rtx_insn *seq;
@@ -8122,6 +8122,7 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
 		    enum expand_modifier modifier)
 {
   rtx op0, op1, op2, temp;
+  rtx_code_label *lab;
   tree type;
   int unsignedp;
   machine_mode mode;
@@ -8864,11 +8865,7 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
 
       /* If op1 was placed in target, swap op0 and op1.  */
       if (target != op0 && target == op1)
-	{
-	  temp = op0;
-	  op0 = op1;
-	  op1 = temp;
-	}
+	std::swap (op0, op1);
 
       /* We generate better code and avoid problems with op1 mentioning
 	 target by forcing op1 into a pseudo if it isn't a constant.  */
@@ -8935,13 +8932,13 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
 	if (target != op0)
 	  emit_move_insn (target, op0);
 
-	temp = gen_label_rtx ();
+	lab = gen_label_rtx ();
 	do_compare_rtx_and_jump (target, cmpop1, comparison_code,
-				 unsignedp, mode, NULL_RTX, NULL_RTX, temp,
+				 unsignedp, mode, NULL_RTX, NULL, lab,
 				 -1);
       }
       emit_move_insn (target, op1);
-      emit_label (temp);
+      emit_label (lab);
       return target;
 
     case BIT_NOT_EXPR:
@@ -9019,38 +9016,39 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
     case UNGE_EXPR:
     case UNEQ_EXPR:
     case LTGT_EXPR:
-      temp = do_store_flag (ops,
-			    modifier != EXPAND_STACK_PARM ? target : NULL_RTX,
-			    tmode != VOIDmode ? tmode : mode);
-      if (temp)
-	return temp;
-
-      /* Use a compare and a jump for BLKmode comparisons, or for function
-	 type comparisons is HAVE_canonicalize_funcptr_for_compare.  */
-
-      if ((target == 0
-	   || modifier == EXPAND_STACK_PARM
-	   || ! safe_from_p (target, treeop0, 1)
-	   || ! safe_from_p (target, treeop1, 1)
-	   /* Make sure we don't have a hard reg (such as function's return
-	      value) live across basic blocks, if not optimizing.  */
-	   || (!optimize && REG_P (target)
-	       && REGNO (target) < FIRST_PSEUDO_REGISTER)))
-	target = gen_reg_rtx (tmode != VOIDmode ? tmode : mode);
+      {
+	temp = do_store_flag (ops,
+			      modifier != EXPAND_STACK_PARM ? target : NULL_RTX,
+			      tmode != VOIDmode ? tmode : mode);
+	if (temp)
+	  return temp;
 
-      emit_move_insn (target, const0_rtx);
+	/* Use a compare and a jump for BLKmode comparisons, or for function
+	   type comparisons is HAVE_canonicalize_funcptr_for_compare.  */
+
+	if ((target == 0
+	     || modifier == EXPAND_STACK_PARM
+	     || ! safe_from_p (target, treeop0, 1)
+	     || ! safe_from_p (target, treeop1, 1)
+	     /* Make sure we don't have a hard reg (such as function's return
+		value) live across basic blocks, if not optimizing.  */
+	     || (!optimize && REG_P (target)
+		 && REGNO (target) < FIRST_PSEUDO_REGISTER)))
+	  target = gen_reg_rtx (tmode != VOIDmode ? tmode : mode);
 
-      op1 = gen_label_rtx ();
-      jumpifnot_1 (code, treeop0, treeop1, op1, -1);
+	emit_move_insn (target, const0_rtx);
 
-      if (TYPE_PRECISION (type) == 1 && !TYPE_UNSIGNED (type))
-	emit_move_insn (target, constm1_rtx);
-      else
-	emit_move_insn (target, const1_rtx);
+	rtx_code_label *lab1 = gen_label_rtx ();
+	jumpifnot_1 (code, treeop0, treeop1, lab1, -1);
 
-      emit_label (op1);
-      return target;
+	if (TYPE_PRECISION (type) == 1 && !TYPE_UNSIGNED (type))
+	  emit_move_insn (target, constm1_rtx);
+	else
+	  emit_move_insn (target, const1_rtx);
 
+	emit_label (lab1);
+	return target;
+      }
     case COMPLEX_EXPR:
       /* Get the rtx code of the operands.  */
       op0 = expand_normal (treeop0);
@@ -9273,58 +9271,60 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
       }
 
     case COND_EXPR:
-      /* A COND_EXPR with its type being VOID_TYPE represents a
-	 conditional jump and is handled in
-	 expand_gimple_cond_expr.  */
-      gcc_assert (!VOID_TYPE_P (type));
-
-      /* Note that COND_EXPRs whose type is a structure or union
-	 are required to be constructed to contain assignments of
-	 a temporary variable, so that we can evaluate them here
-	 for side effect only.  If type is void, we must do likewise.  */
-
-      gcc_assert (!TREE_ADDRESSABLE (type)
-		  && !ignore
-		  && TREE_TYPE (treeop1) != void_type_node
-		  && TREE_TYPE (treeop2) != void_type_node);
-
-      temp = expand_cond_expr_using_cmove (treeop0, treeop1, treeop2);
-      if (temp)
-	return temp;
-
-      /* If we are not to produce a result, we have no target.  Otherwise,
-	 if a target was specified use it; it will not be used as an
-	 intermediate target unless it is safe.  If no target, use a
-	 temporary.  */
-
-      if (modifier != EXPAND_STACK_PARM
-	  && original_target
-	  && safe_from_p (original_target, treeop0, 1)
-	  && GET_MODE (original_target) == mode
-	  && !MEM_P (original_target))
-	temp = original_target;
-      else
-	temp = assign_temp (type, 0, 1);
-
-      do_pending_stack_adjust ();
-      NO_DEFER_POP;
-      op0 = gen_label_rtx ();
-      op1 = gen_label_rtx ();
-      jumpifnot (treeop0, op0, -1);
-      store_expr (treeop1, temp,
-		  modifier == EXPAND_STACK_PARM,
-		  false);
-
-      emit_jump_insn (gen_jump (op1));
-      emit_barrier ();
-      emit_label (op0);
-      store_expr (treeop2, temp,
-		  modifier == EXPAND_STACK_PARM,
-		  false);
+      {
+	/* A COND_EXPR with its type being VOID_TYPE represents a
+	   conditional jump and is handled in
+	   expand_gimple_cond_expr.  */
+	gcc_assert (!VOID_TYPE_P (type));
+
+	/* Note that COND_EXPRs whose type is a structure or union
+	   are required to be constructed to contain assignments of
+	   a temporary variable, so that we can evaluate them here
+	   for side effect only.  If type is void, we must do likewise.  */
+
+	gcc_assert (!TREE_ADDRESSABLE (type)
+		    && !ignore
+		    && TREE_TYPE (treeop1) != void_type_node
+		    && TREE_TYPE (treeop2) != void_type_node);
+
+	temp = expand_cond_expr_using_cmove (treeop0, treeop1, treeop2);
+	if (temp)
+	  return temp;
 
-      emit_label (op1);
-      OK_DEFER_POP;
-      return temp;
+	/* If we are not to produce a result, we have no target.  Otherwise,
+	   if a target was specified use it; it will not be used as an
+	   intermediate target unless it is safe.  If no target, use a
+	   temporary.  */
+
+	if (modifier != EXPAND_STACK_PARM
+	    && original_target
+	    && safe_from_p (original_target, treeop0, 1)
+	    && GET_MODE (original_target) == mode
+	    && !MEM_P (original_target))
+	  temp = original_target;
+	else
+	  temp = assign_temp (type, 0, 1);
+
+	do_pending_stack_adjust ();
+	NO_DEFER_POP;
+	rtx_code_label *lab0 = gen_label_rtx ();
+	rtx_code_label *lab1 = gen_label_rtx ();
+	jumpifnot (treeop0, lab0, -1);
+	store_expr (treeop1, temp,
+		    modifier == EXPAND_STACK_PARM,
+		    false);
+
+	emit_jump_insn (gen_jump (lab1));
+	emit_barrier ();
+	emit_label (lab0);
+	store_expr (treeop2, temp,
+		    modifier == EXPAND_STACK_PARM,
+		    false);
+
+	emit_label (lab1);
+	OK_DEFER_POP;
+	return temp;
+      }
 
     case VEC_COND_EXPR:
       target = expand_vec_cond_expr (type, treeop0, treeop1, treeop2, target);
diff --git a/gcc/expr.h b/gcc/expr.h
index 867852e..6c4afc4 100644
--- a/gcc/expr.h
+++ b/gcc/expr.h
@@ -203,7 +203,7 @@ extern rtx store_by_pieces (rtx, unsigned HOST_WIDE_INT,
 
 /* Emit insns to set X from Y.  */
 extern rtx_insn *emit_move_insn (rtx, rtx);
-extern rtx gen_move_insn (rtx, rtx);
+extern rtx_insn *gen_move_insn (rtx, rtx);
 
 /* Emit insns to set X from Y, with no frills.  */
 extern rtx_insn *emit_move_insn_1 (rtx, rtx);
diff --git a/gcc/function.c b/gcc/function.c
index 2c3d142..97ecf3a 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -5760,7 +5760,7 @@ convert_jumps_to_returns (basic_block last_bb, bool simple_p,
 	    dest = simple_return_rtx;
 	  else
 	    dest = ret_rtx;
-	  if (!redirect_jump (jump, dest, 0))
+	  if (!redirect_jump (as_a <rtx_jump_insn *> (jump), dest, 0))
 	    {
 #ifdef HAVE_simple_return
 	      if (simple_p)
diff --git a/gcc/gcse.c b/gcc/gcse.c
index 37aac6a..20e79e0 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -2227,7 +2227,8 @@ pre_insert_copy_insn (struct gcse_expr *expr, rtx_insn *insn)
   int regno = REGNO (reg);
   int indx = expr->bitmap_index;
   rtx pat = PATTERN (insn);
-  rtx set, first_set, new_insn;
+  rtx set, first_set;
+  rtx_insn *new_insn;
   rtx old_reg;
   int i;
 
diff --git a/gcc/ifcvt.c b/gcc/ifcvt.c
index a3e3e5c..bf79122 100644
--- a/gcc/ifcvt.c
+++ b/gcc/ifcvt.c
@@ -4444,9 +4444,10 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
       else
 	new_dest_label = block_label (new_dest);
 
+      rtx_jump_insn *jump_insn = as_a <rtx_jump_insn *> (jump);
       if (reversep
-	  ? ! invert_jump_1 (jump, new_dest_label)
-	  : ! redirect_jump_1 (jump, new_dest_label))
+	  ? ! invert_jump_1 (jump_insn, new_dest_label)
+	  : ! redirect_jump_1 (jump_insn, new_dest_label))
 	goto cancel;
     }
 
@@ -4457,7 +4458,8 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
 
   if (other_bb != new_dest)
     {
-      redirect_jump_2 (jump, old_dest, new_dest_label, 0, reversep);
+      redirect_jump_2 (as_a <rtx_jump_insn *> (jump), old_dest, new_dest_label,
+                       0, reversep);
 
       redirect_edge_succ (BRANCH_EDGE (test_bb), new_dest);
       if (reversep)
diff --git a/gcc/internal-fn.c b/gcc/internal-fn.c
index e402825..af9baff 100644
--- a/gcc/internal-fn.c
+++ b/gcc/internal-fn.c
@@ -422,7 +422,7 @@ expand_arith_overflow_result_store (tree lhs, rtx target,
       lres = convert_modes (tgtmode, mode, res, uns);
       gcc_assert (GET_MODE_PRECISION (tgtmode) < GET_MODE_PRECISION (mode));
       do_compare_rtx_and_jump (res, convert_modes (mode, tgtmode, lres, uns),
-			       EQ, true, mode, NULL_RTX, NULL_RTX, done_label,
+			       EQ, true, mode, NULL_RTX, NULL, done_label,
 			       PROB_VERY_LIKELY);
       write_complex_part (target, const1_rtx, true);
       emit_label (done_label);
@@ -569,7 +569,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 	      : CONST_SCALAR_INT_P (op1)))
 	tem = op1;
       do_compare_rtx_and_jump (res, tem, code == PLUS_EXPR ? GEU : LEU,
-			       true, mode, NULL_RTX, NULL_RTX, done_label,
+			       true, mode, NULL_RTX, NULL, done_label,
 			       PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -584,7 +584,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       rtx tem = expand_binop (mode, add_optab,
 			      code == PLUS_EXPR ? res : op0, sgn,
 			      NULL_RTX, false, OPTAB_LIB_WIDEN);
-      do_compare_rtx_and_jump (tem, op1, GEU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (tem, op1, GEU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -627,8 +627,8 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       else if (pos_neg == 3)
 	/* If ARG0 is not known to be always positive, check at runtime.  */
 	do_compare_rtx_and_jump (op0, const0_rtx, LT, false, mode, NULL_RTX,
-				 NULL_RTX, do_error, PROB_VERY_UNLIKELY);
-      do_compare_rtx_and_jump (op1, op0, LEU, true, mode, NULL_RTX, NULL_RTX,
+				 NULL, do_error, PROB_VERY_UNLIKELY);
+      do_compare_rtx_and_jump (op1, op0, LEU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -642,7 +642,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 			  OPTAB_LIB_WIDEN);
       rtx tem = expand_binop (mode, add_optab, op1, sgn, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
-      do_compare_rtx_and_jump (op0, tem, LTU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (op0, tem, LTU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -655,7 +655,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       res = expand_binop (mode, add_optab, op0, op1, NULL_RTX, false,
 			  OPTAB_LIB_WIDEN);
       do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode, NULL_RTX,
-			       NULL_RTX, do_error, PROB_VERY_UNLIKELY);
+			       NULL, do_error, PROB_VERY_UNLIKELY);
       rtx tem = op1;
       /* The operation is commutative, so we can pick operand to compare
 	 against.  For prec <= BITS_PER_WORD, I think preferring REG operand
@@ -668,7 +668,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 	  ? (CONST_SCALAR_INT_P (op1) && REG_P (op0))
 	  : CONST_SCALAR_INT_P (op0))
 	tem = op0;
-      do_compare_rtx_and_jump (res, tem, GEU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (res, tem, GEU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -698,26 +698,26 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 	  tem = expand_binop (mode, ((pos_neg == 1) ^ (code == MINUS_EXPR))
 				    ? and_optab : ior_optab,
 			      op0, res, NULL_RTX, false, OPTAB_LIB_WIDEN);
-	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL,
+				   NULL, done_label, PROB_VERY_LIKELY);
 	}
       else
 	{
 	  rtx_code_label *do_ior_label = gen_label_rtx ();
 	  do_compare_rtx_and_jump (op1, const0_rtx,
 				   code == MINUS_EXPR ? GE : LT, false, mode,
-				   NULL_RTX, NULL_RTX, do_ior_label,
+				   NULL_RTX, NULL, do_ior_label,
 				   PROB_EVEN);
 	  tem = expand_binop (mode, and_optab, op0, res, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  emit_jump (do_error);
 	  emit_label (do_ior_label);
 	  tem = expand_binop (mode, ior_optab, op0, res, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	}
       goto do_error_label;
     }
@@ -730,14 +730,14 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       res = expand_binop (mode, sub_optab, op0, op1, NULL_RTX, false,
 			  OPTAB_LIB_WIDEN);
       rtx_code_label *op0_geu_op1 = gen_label_rtx ();
-      do_compare_rtx_and_jump (op0, op1, GEU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (op0, op1, GEU, true, mode, NULL_RTX, NULL,
 			       op0_geu_op1, PROB_EVEN);
       do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode, NULL_RTX,
-			       NULL_RTX, done_label, PROB_VERY_LIKELY);
+			       NULL, done_label, PROB_VERY_LIKELY);
       emit_jump (do_error);
       emit_label (op0_geu_op1);
       do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode, NULL_RTX,
-			       NULL_RTX, done_label, PROB_VERY_LIKELY);
+			       NULL, done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
 
@@ -816,12 +816,12 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       /* If the op1 is negative, we have to use a different check.  */
       if (pos_neg == 3)
 	do_compare_rtx_and_jump (op1, const0_rtx, LT, false, mode, NULL_RTX,
-				 NULL_RTX, sub_check, PROB_EVEN);
+				 NULL, sub_check, PROB_EVEN);
 
       /* Compare the result of the operation with one of the operands.  */
       if (pos_neg & 1)
 	do_compare_rtx_and_jump (res, op0, code == PLUS_EXPR ? GE : LE,
-				 false, mode, NULL_RTX, NULL_RTX, done_label,
+				 false, mode, NULL_RTX, NULL, done_label,
 				 PROB_VERY_LIKELY);
 
       /* If we get here, we have to print the error.  */
@@ -835,7 +835,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       /* We have k = a + b for b < 0 here.  k <= a must hold.  */
       if (pos_neg & 2)
 	do_compare_rtx_and_jump (res, op0, code == PLUS_EXPR ? LE : GE,
-				 false, mode, NULL_RTX, NULL_RTX, done_label,
+				 false, mode, NULL_RTX, NULL, done_label,
 				 PROB_VERY_LIKELY);
     }
 
@@ -931,7 +931,7 @@ expand_neg_overflow (location_t loc, tree lhs, tree arg1, bool is_ubsan)
 
       /* Compare the operand with the most negative value.  */
       rtx minv = expand_normal (TYPE_MIN_VALUE (TREE_TYPE (arg1)));
-      do_compare_rtx_and_jump (op1, minv, NE, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (op1, minv, NE, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
     }
 
@@ -1068,15 +1068,15 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  ops.location = loc;
 	  res = expand_expr_real_2 (&ops, NULL_RTX, mode, EXPAND_NORMAL);
 	  do_compare_rtx_and_jump (op1, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  goto do_error_label;
 	case 3:
 	  rtx_code_label *do_main_label;
 	  do_main_label = gen_label_rtx ();
 	  do_compare_rtx_and_jump (op0, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  do_compare_rtx_and_jump (op1, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  write_complex_part (target, const1_rtx, true);
 	  emit_label (do_main_label);
 	  goto do_main;
@@ -1113,15 +1113,15 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  ops.location = loc;
 	  res = expand_expr_real_2 (&ops, NULL_RTX, mode, EXPAND_NORMAL);
 	  do_compare_rtx_and_jump (op0, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  do_compare_rtx_and_jump (op0, constm1_rtx, NE, true, mode, NULL_RTX,
-				   NULL_RTX, do_error, PROB_VERY_UNLIKELY);
+				   NULL, do_error, PROB_VERY_UNLIKELY);
 	  int prec;
 	  prec = GET_MODE_PRECISION (mode);
 	  rtx sgn;
 	  sgn = immed_wide_int_const (wi::min_value (prec, SIGNED), mode);
 	  do_compare_rtx_and_jump (op1, sgn, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  goto do_error_label;
 	case 3:
 	  /* Rest of handling of this case after res is computed.  */
@@ -1167,7 +1167,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	      tem = expand_binop (mode, and_optab, op0, op1, NULL_RTX, false,
 				  OPTAB_LIB_WIDEN);
 	      do_compare_rtx_and_jump (tem, const0_rtx, EQ, true, mode,
-				       NULL_RTX, NULL_RTX, done_label,
+				       NULL_RTX, NULL, done_label,
 				       PROB_VERY_LIKELY);
 	      goto do_error_label;
 	    }
@@ -1185,8 +1185,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  tem = expand_binop (mode, and_optab, op0, op1, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, after_negate_label,
-				   PROB_VERY_LIKELY);
+				   NULL, after_negate_label, PROB_VERY_LIKELY);
 	  /* Both arguments negative here, negate them and continue with
 	     normal unsigned overflow checking multiplication.  */
 	  emit_move_insn (op0, expand_unop (mode, neg_optab, op0,
@@ -1202,13 +1201,13 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  tem2 = expand_binop (mode, xor_optab, op0, op1, NULL_RTX, false,
 			       OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem2, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  /* One argument is negative here, the other positive.  This
 	     overflows always, unless one of the arguments is 0.  But
 	     if e.g. s2 is 0, (U) s1 * 0 doesn't overflow, whatever s1
 	     is, thus we can keep do_main code oring in overflow as is.  */
 	  do_compare_rtx_and_jump (tem, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  write_complex_part (target, const1_rtx, true);
 	  emit_label (do_main_label);
 	  goto do_main;
@@ -1274,7 +1273,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	    /* For the unsigned multiplication, there was overflow if
 	       HIPART is non-zero.  */
 	    do_compare_rtx_and_jump (hipart, const0_rtx, EQ, true, mode,
-				     NULL_RTX, NULL_RTX, done_label,
+				     NULL_RTX, NULL, done_label,
 				     PROB_VERY_LIKELY);
 	  else
 	    {
@@ -1284,7 +1283,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		 the high half.  There was overflow if
 		 HIPART is different from RES < 0 ? -1 : 0.  */
 	      do_compare_rtx_and_jump (signbit, hipart, EQ, true, mode,
-				       NULL_RTX, NULL_RTX, done_label,
+				       NULL_RTX, NULL, done_label,
 				       PROB_VERY_LIKELY);
 	    }
 	}
@@ -1377,12 +1376,12 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 
 	  if (!op0_small_p)
 	    do_compare_rtx_and_jump (signbit0, hipart0, NE, true, hmode,
-				     NULL_RTX, NULL_RTX, large_op0,
+				     NULL_RTX, NULL, large_op0,
 				     PROB_UNLIKELY);
 
 	  if (!op1_small_p)
 	    do_compare_rtx_and_jump (signbit1, hipart1, NE, true, hmode,
-				     NULL_RTX, NULL_RTX, small_op0_large_op1,
+				     NULL_RTX, NULL, small_op0_large_op1,
 				     PROB_UNLIKELY);
 
 	  /* If both op0 and op1 are sign (!uns) or zero (uns) extended from
@@ -1428,7 +1427,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 
 	  if (!op1_small_p)
 	    do_compare_rtx_and_jump (signbit1, hipart1, NE, true, hmode,
-				     NULL_RTX, NULL_RTX, both_ops_large,
+				     NULL_RTX, NULL, both_ops_large,
 				     PROB_UNLIKELY);
 
 	  /* If op1 is sign (!uns) or zero (uns) extended from hmode to mode,
@@ -1465,7 +1464,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		emit_jump (after_hipart_neg);
 	      else if (larger_sign != -1)
 		do_compare_rtx_and_jump (hipart, const0_rtx, GE, false, hmode,
-					 NULL_RTX, NULL_RTX, after_hipart_neg,
+					 NULL_RTX, NULL, after_hipart_neg,
 					 PROB_EVEN);
 
 	      tem = convert_modes (mode, hmode, lopart, 1);
@@ -1481,7 +1480,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		emit_jump (after_lopart_neg);
 	      else if (smaller_sign != -1)
 		do_compare_rtx_and_jump (lopart, const0_rtx, GE, false, hmode,
-					 NULL_RTX, NULL_RTX, after_lopart_neg,
+					 NULL_RTX, NULL, after_lopart_neg,
 					 PROB_EVEN);
 
 	      tem = expand_simple_binop (mode, MINUS, loxhi, larger, NULL_RTX,
@@ -1510,7 +1509,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 					 hprec - 1, NULL_RTX, 0);
 
 	  do_compare_rtx_and_jump (signbitloxhi, hipartloxhi, NE, true, hmode,
-				   NULL_RTX, NULL_RTX, do_overflow,
+				   NULL_RTX, NULL, do_overflow,
 				   PROB_VERY_UNLIKELY);
 
 	  /* res = (loxhi << (bitsize / 2)) | (hmode) lo0xlo1;  */
@@ -1546,7 +1545,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		  tem = expand_simple_binop (hmode, PLUS, hipart0, const1_rtx,
 					     NULL_RTX, 1, OPTAB_DIRECT);
 		  do_compare_rtx_and_jump (tem, const1_rtx, GTU, true, hmode,
-					   NULL_RTX, NULL_RTX, do_error,
+					   NULL_RTX, NULL, do_error,
 					   PROB_VERY_UNLIKELY);
 		}
 
@@ -1555,7 +1554,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		  tem = expand_simple_binop (hmode, PLUS, hipart1, const1_rtx,
 					     NULL_RTX, 1, OPTAB_DIRECT);
 		  do_compare_rtx_and_jump (tem, const1_rtx, GTU, true, hmode,
-					   NULL_RTX, NULL_RTX, do_error,
+					   NULL_RTX, NULL, do_error,
 					   PROB_VERY_UNLIKELY);
 		}
 
@@ -1566,18 +1565,18 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		emit_jump (hipart_different);
 	      else if (op0_sign == 1 || op1_sign == 1)
 		do_compare_rtx_and_jump (hipart0, hipart1, NE, true, hmode,
-					 NULL_RTX, NULL_RTX, hipart_different,
+					 NULL_RTX, NULL, hipart_different,
 					 PROB_EVEN);
 
 	      do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode,
-				       NULL_RTX, NULL_RTX, do_error,
+				       NULL_RTX, NULL, do_error,
 				       PROB_VERY_UNLIKELY);
 	      emit_jump (done_label);
 
 	      emit_label (hipart_different);
 
 	      do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode,
-				       NULL_RTX, NULL_RTX, do_error,
+				       NULL_RTX, NULL, do_error,
 				       PROB_VERY_UNLIKELY);
 	      emit_jump (done_label);
 	    }
@@ -1623,7 +1622,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
     {
       rtx_code_label *all_done_label = gen_label_rtx ();
       do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode, NULL_RTX,
-			       NULL_RTX, all_done_label, PROB_VERY_LIKELY);
+			       NULL, all_done_label, PROB_VERY_LIKELY);
       write_complex_part (target, const1_rtx, true);
       emit_label (all_done_label);
     }
@@ -1634,13 +1633,13 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
       rtx_code_label *all_done_label = gen_label_rtx ();
       rtx_code_label *set_noovf = gen_label_rtx ();
       do_compare_rtx_and_jump (op1, const0_rtx, GE, false, mode, NULL_RTX,
-			       NULL_RTX, all_done_label, PROB_VERY_LIKELY);
+			       NULL, all_done_label, PROB_VERY_LIKELY);
       write_complex_part (target, const1_rtx, true);
       do_compare_rtx_and_jump (op0, const0_rtx, EQ, true, mode, NULL_RTX,
-			       NULL_RTX, set_noovf, PROB_VERY_LIKELY);
+			       NULL, set_noovf, PROB_VERY_LIKELY);
       do_compare_rtx_and_jump (op0, constm1_rtx, NE, true, mode, NULL_RTX,
-			       NULL_RTX, all_done_label, PROB_VERY_UNLIKELY);
-      do_compare_rtx_and_jump (op1, res, NE, true, mode, NULL_RTX, NULL_RTX,
+			       NULL, all_done_label, PROB_VERY_UNLIKELY);
+      do_compare_rtx_and_jump (op1, res, NE, true, mode, NULL_RTX, NULL,
 			       all_done_label, PROB_VERY_UNLIKELY);
       emit_label (set_noovf);
       write_complex_part (target, const0_rtx, true);
diff --git a/gcc/ira.c b/gcc/ira.c
index ea2b69f..bdf81e6 100644
--- a/gcc/ira.c
+++ b/gcc/ira.c
@@ -4994,7 +4994,7 @@ split_live_ranges_for_shrink_wrap (void)
 
       if (newreg)
 	{
-	  rtx new_move = gen_move_insn (newreg, dest);
+	  rtx_insn *new_move = gen_move_insn (newreg, dest);
 	  emit_insn_after (new_move, bb_note (call_dom));
 	  if (dump_file)
 	    {
diff --git a/gcc/is-a.h b/gcc/is-a.h
index 58917eb..4fb9dde 100644
--- a/gcc/is-a.h
+++ b/gcc/is-a.h
@@ -46,6 +46,15 @@ TYPE as_a <TYPE> (pointer)
 
       do_something_with (as_a <cgraph_node *> *ptr);
 
+TYPE assert_as_a <TYPE> (pointer)
+
+    Like as_a <TYPE> (pointer), but uses an assertion that is enabled even
+    in a non-checking (release) build.
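+
+    For example:
+
+      do_something_with (assert_as_a <cgraph_node *> (ptr));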
+
 TYPE safe_as_a <TYPE> (pointer)
 
     Like as_a <TYPE> (pointer), but where pointer could be NULL.  This
@@ -193,6 +202,17 @@ as_a (U *p)
   return is_a_helper <T>::cast (p);
 }
 
+/* Same as above, but checks the condition even in a release build.  */
+
+template <typename T, typename U>
+inline T
+assert_as_a (U *p)
+{
+  gcc_assert (is_a <T> (p));
+  return is_a_helper <T>::cast (p);
+}
+
+
 /* Similar to as_a<>, but where the pointer can be NULL, even if
    is_a_helper<T> doesn't check for NULL.  */
 
diff --git a/gcc/jump.c b/gcc/jump.c
index 34b3b7b..0cc0be5 100644
--- a/gcc/jump.c
+++ b/gcc/jump.c
@@ -1583,7 +1583,7 @@ redirect_jump_1 (rtx jump, rtx nlabel)
    (this can only occur when trying to produce return insns).  */
 
 int
-redirect_jump (rtx jump, rtx nlabel, int delete_unused)
+redirect_jump (rtx_jump_insn *jump, rtx nlabel, int delete_unused)
 {
   rtx olabel = JUMP_LABEL (jump);
 
@@ -1615,7 +1615,7 @@ redirect_jump (rtx jump, rtx nlabel, int delete_unused)
    If DELETE_UNUSED is positive, delete related insn to OLABEL if its ref
    count has dropped to zero.  */
 void
-redirect_jump_2 (rtx jump, rtx olabel, rtx nlabel, int delete_unused,
+redirect_jump_2 (rtx_jump_insn *jump, rtx olabel, rtx nlabel, int delete_unused,
 		 int invert)
 {
   rtx note;
@@ -1703,7 +1703,7 @@ invert_exp_1 (rtx x, rtx insn)
    inversion and redirection.  */
 
 int
-invert_jump_1 (rtx_insn *jump, rtx nlabel)
+invert_jump_1 (rtx_jump_insn *jump, rtx nlabel)
 {
   rtx x = pc_set (jump);
   int ochanges;
@@ -1727,7 +1727,7 @@ invert_jump_1 (rtx_insn *jump, rtx nlabel)
    NLABEL instead of where it jumps now.  Return true if successful.  */
 
 int
-invert_jump (rtx_insn *jump, rtx nlabel, int delete_unused)
+invert_jump (rtx_jump_insn *jump, rtx nlabel, int delete_unused)
 {
   rtx olabel = JUMP_LABEL (jump);
 
diff --git a/gcc/loop-unroll.c b/gcc/loop-unroll.c
index 2befb61..2f3ff35 100644
--- a/gcc/loop-unroll.c
+++ b/gcc/loop-unroll.c
@@ -794,10 +794,11 @@ split_edge_and_insert (edge e, rtx_insn *insns)
    in order to create a jump.  */
 
 static rtx_insn *
-compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
-		      rtx_insn *cinsn)
+compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp,
+		      rtx_code_label *label, int prob, rtx_insn *cinsn)
 {
-  rtx_insn *seq, *jump;
+  rtx_insn *seq;
+  rtx_jump_insn *jump;
   rtx cond;
   machine_mode mode;
 
@@ -816,8 +817,7 @@ compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
       gcc_assert (rtx_equal_p (op0, XEXP (cond, 0)));
       gcc_assert (rtx_equal_p (op1, XEXP (cond, 1)));
       emit_jump_insn (copy_insn (PATTERN (cinsn)));
-      jump = get_last_insn ();
-      gcc_assert (JUMP_P (jump));
+      jump = assert_as_a <rtx_jump_insn *> (get_last_insn ());
       JUMP_LABEL (jump) = JUMP_LABEL (cinsn);
       LABEL_NUSES (JUMP_LABEL (jump))++;
       redirect_jump (jump, label, 0);
@@ -829,9 +829,8 @@ compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
       op0 = force_operand (op0, NULL_RTX);
       op1 = force_operand (op1, NULL_RTX);
       do_compare_rtx_and_jump (op0, op1, comp, 0,
-			       mode, NULL_RTX, NULL_RTX, label, -1);
-      jump = get_last_insn ();
-      gcc_assert (JUMP_P (jump));
+			       mode, NULL_RTX, NULL, label, -1);
+      jump = assert_as_a <rtx_jump_insn *> (get_last_insn ());
       JUMP_LABEL (jump) = label;
       LABEL_NUSES (label)++;
     }
diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
index 57d731a..db4765f 100644
--- a/gcc/lra-constraints.c
+++ b/gcc/lra-constraints.c
@@ -1060,9 +1060,8 @@ emit_spill_move (bool to_p, rtx mem_pseudo, rtx val)
 	  LRA_SUBREG_P (mem_pseudo) = 1;
 	}
     }
-  return as_a <rtx_insn *> (to_p
-			    ? gen_move_insn (mem_pseudo, val)
-			    : gen_move_insn (val, mem_pseudo));
+  return to_p ? gen_move_insn (mem_pseudo, val)
+	      : gen_move_insn (val, mem_pseudo);
 }
 
 /* Process a special case insn (register move), return true if we
@@ -4501,6 +4500,107 @@ static int calls_num;
    USAGE_INSNS.	 */
 static int curr_usage_insns_check;
 
+namespace
+{
+
+class rtx_usage_list GTY(()) : public rtx_def
+{
+public:
+  /* This class represents an element in a singly-linked list, which:
+     1. Ends with a non-debug INSN
+     2. May contain several INSN_LIST nodes with DEBUG_INSNs attached to them
+
+     I.e.:   INSN_LIST--> INSN_LIST-->..--> INSN
+               |            |
+             DEBUG_INSN   DEBUG_INSN
+
+   See struct usage_insns for a description of how it is used.  */
+
+  /* Get next node of the list.  */
+  rtx_usage_list *next () const;
+
+  /* Get the current instruction.  */
+  rtx_insn *insn ();
+
+  /* Check whether the current node holds a DEBUG_INSN.  */
+  bool debug_p () const;
+
+  /* Add debug information to the chain.  */
+  rtx_usage_list *push_front (rtx_debug_insn *debug_insn);
+};
+
+/* If the current node is an INSN, return it.  Otherwise it is an INSN_LIST
+   node; in that case return the attached INSN.  */
+
+rtx_insn *
+rtx_usage_list::insn ()
+{
+  if (rtx_insn *as_insn = dyn_cast <rtx_insn *> (this))
+    return as_insn;
+  return safe_as_a <rtx_debug_insn *> (XEXP (this, 0));
+}
+
+/* Get next node.  */
+
+rtx_usage_list *
+rtx_usage_list::next () const
+{
+  return reinterpret_cast <rtx_usage_list *> (XEXP (this, 1));
+}
+
+/* Check whether the current node holds a DEBUG_INSN.  */
+
+bool
+rtx_usage_list::debug_p () const
+{
+  return is_a <const rtx_insn_list *> (this);
+}
+
+/* Add debug information to the chain.  */
+
+rtx_usage_list *
+rtx_usage_list::push_front (rtx_debug_insn *debug_insn)
+{
+  /* ??? Maybe it would be better to store DEBUG_INSNs in a separate
+     homogeneous list (or vec) and use another pointer for the actual INSN?
+     Then we would not have to traverse the list, and some checks would
+     also become simpler.  */
+  return reinterpret_cast <rtx_usage_list *>
+                (gen_rtx_INSN_LIST (VOIDmode,
+                                    debug_insn, this));
+}
+
+} // anon namespace
+
+/* Helpers for as-a casts.  */
+
+template <>
+template <>
+inline bool
+is_a_helper <rtx_insn_list *>::test (rtx_usage_list *list)
+{
+  return list->code == INSN_LIST;
+}
+
+template <>
+template <>
+inline bool
+is_a_helper <const rtx_insn_list *>::test (const rtx_usage_list *list)
+{
+  return list->code == INSN_LIST;
+}
+
+/* rtx_usage_list is either an INSN_LIST node or an INSN (no other
+   options).  Therefore, this check is valid.  */
+
+template <>
+template <>
+inline bool
+is_a_helper <rtx_insn *>::test (rtx_usage_list *list)
+{
+  return list->code != INSN_LIST;
+}
+
 /* Info about last usage of registers in EBB to do inheritance/split
    transformation.  Inheritance transformation is done from a spilled
    pseudo and split transformations from a hard register or a pseudo
@@ -4526,17 +4626,17 @@ struct usage_insns
      to use the original reg value again in the next insns we can try
      to use the value in a hard register from a reload insn of the
      current insn.  */
-  rtx insns;
+  rtx_usage_list *insns;
 };
 
 /* Map: regno -> corresponding pseudo usage insns.  */
 static struct usage_insns *usage_insns;
 
 static void
-setup_next_usage_insn (int regno, rtx insn, int reloads_num, bool after_p)
+setup_next_usage_insn (int regno, rtx_insn *insn, int reloads_num, bool after_p)
 {
   usage_insns[regno].check = curr_usage_insns_check;
-  usage_insns[regno].insns = insn;
+  usage_insns[regno].insns = reinterpret_cast <rtx_usage_list *> (insn);
   usage_insns[regno].reloads_num = reloads_num;
   usage_insns[regno].calls_num = calls_num;
   usage_insns[regno].after_p = after_p;
@@ -4546,20 +4646,19 @@ setup_next_usage_insn (int regno, rtx insn, int reloads_num, bool after_p)
    optional debug insns finished by a non-debug insn using REGNO.
    RELOADS_NUM is current number of reload insns processed so far.  */
 static void
-add_next_usage_insn (int regno, rtx insn, int reloads_num)
+add_next_usage_insn (int regno, rtx_insn *insn, int reloads_num)
 {
-  rtx next_usage_insns;
+  rtx_usage_list *next_usage_insns;
+  rtx_debug_insn *debug_insn;
 
   if (usage_insns[regno].check == curr_usage_insns_check
-      && (next_usage_insns = usage_insns[regno].insns) != NULL_RTX
-      && DEBUG_INSN_P (insn))
+      && (next_usage_insns = usage_insns[regno].insns) != NULL
+      && (debug_insn = dyn_cast <rtx_debug_insn *> (insn)) != NULL)
     {
       /* Check that we did not add the debug insn yet.	*/
-      if (next_usage_insns != insn
-	  && (GET_CODE (next_usage_insns) != INSN_LIST
-	      || XEXP (next_usage_insns, 0) != insn))
-	usage_insns[regno].insns = gen_rtx_INSN_LIST (VOIDmode, insn,
-						      next_usage_insns);
+      if (next_usage_insns->insn () != debug_insn)
+	usage_insns[regno].insns =
+                usage_insns[regno].insns->push_front (debug_insn);
     }
   else if (NONDEBUG_INSN_P (insn))
     setup_next_usage_insn (regno, insn, reloads_num, false);
@@ -4569,16 +4668,13 @@ add_next_usage_insn (int regno, rtx insn, int reloads_num)
 
 /* Return first non-debug insn in list USAGE_INSNS.  */
 static rtx_insn *
-skip_usage_debug_insns (rtx usage_insns)
+skip_usage_debug_insns (rtx_usage_list *usage_insns)
 {
-  rtx insn;
-
   /* Skip debug insns.  */
-  for (insn = usage_insns;
-       insn != NULL_RTX && GET_CODE (insn) == INSN_LIST;
-       insn = XEXP (insn, 1))
+  for (; usage_insns != NULL && usage_insns->debug_p ();
+       usage_insns = usage_insns->next ())
     ;
-  return safe_as_a <rtx_insn *> (insn);
+  return safe_as_a <rtx_insn *> (usage_insns);
 }
 
 /* Return true if we need secondary memory moves for insn in
@@ -4586,7 +4682,7 @@ skip_usage_debug_insns (rtx usage_insns)
    into the insn.  */
 static bool
 check_secondary_memory_needed_p (enum reg_class inher_cl ATTRIBUTE_UNUSED,
-				 rtx usage_insns ATTRIBUTE_UNUSED)
+				 rtx_usage_list *usage_insns ATTRIBUTE_UNUSED)
 {
 #ifndef SECONDARY_MEMORY_NEEDED
   return false;
@@ -4639,15 +4735,16 @@ static bitmap_head check_only_regs;
    class of ORIGINAL REGNO.  */
 static bool
 inherit_reload_reg (bool def_p, int original_regno,
-		    enum reg_class cl, rtx_insn *insn, rtx next_usage_insns)
+		    enum reg_class cl, rtx_insn *insn,
+                    rtx_usage_list *next_usage_insns)
 {
   if (optimize_function_for_size_p (cfun))
     return false;
 
   enum reg_class rclass = lra_get_allocno_class (original_regno);
   rtx original_reg = regno_reg_rtx[original_regno];
-  rtx new_reg, usage_insn;
-  rtx_insn *new_insns;
+  rtx new_reg;
+  rtx_insn *usage_insn, *new_insns;
 
   lra_assert (! usage_insns[original_regno].after_p);
   if (lra_dump_file != NULL)
@@ -4746,22 +4843,21 @@ inherit_reload_reg (bool def_p, int original_regno,
   else
     lra_process_new_insns (insn, new_insns, NULL,
 			   "Add inheritance<-original");
-  while (next_usage_insns != NULL_RTX)
+  while (next_usage_insns != NULL)
     {
-      if (GET_CODE (next_usage_insns) != INSN_LIST)
+      if (! next_usage_insns->debug_p ())
 	{
-	  usage_insn = next_usage_insns;
-	  lra_assert (NONDEBUG_INSN_P (usage_insn));
+	  usage_insn = assert_as_a <rtx_insn *> (next_usage_insns);
+	  lra_assert (! is_a <rtx_debug_insn *> (usage_insn));
 	  next_usage_insns = NULL;
 	}
       else
 	{
-	  usage_insn = XEXP (next_usage_insns, 0);
-	  lra_assert (DEBUG_INSN_P (usage_insn));
-	  next_usage_insns = XEXP (next_usage_insns, 1);
+	  usage_insn = next_usage_insns->insn ();
+	  next_usage_insns = next_usage_insns->next ();
 	}
-      lra_substitute_pseudo (&usage_insn, original_regno, new_reg);
-      lra_update_insn_regno_info (as_a <rtx_insn *> (usage_insn));
+      lra_substitute_pseudo_within_insn (usage_insn, original_regno, new_reg);
+      lra_update_insn_regno_info (usage_insn);
       if (lra_dump_file != NULL)
 	{
 	  fprintf (lra_dump_file,
@@ -4913,13 +5009,13 @@ choose_split_class (enum reg_class allocno_class,
    transformation.  */
 static bool
 split_reg (bool before_p, int original_regno, rtx_insn *insn,
-	   rtx next_usage_insns)
+	   rtx_usage_list *next_usage_insns)
 {
   enum reg_class rclass;
   rtx original_reg;
   int hard_regno, nregs;
-  rtx new_reg, usage_insn;
-  rtx_insn *restore, *save;
+  rtx new_reg;
+  rtx_insn *restore, *save, *usage_insn;
   bool after_p;
   bool call_save_p;
 
@@ -5016,14 +5112,13 @@ split_reg (bool before_p, int original_regno, rtx_insn *insn,
     {
       if (GET_CODE (next_usage_insns) != INSN_LIST)
 	{
-	  usage_insn = next_usage_insns;
+	  usage_insn = as_a <rtx_insn *> (next_usage_insns);
 	  break;
 	}
-      usage_insn = XEXP (next_usage_insns, 0);
-      lra_assert (DEBUG_INSN_P (usage_insn));
-      next_usage_insns = XEXP (next_usage_insns, 1);
-      lra_substitute_pseudo (&usage_insn, original_regno, new_reg);
-      lra_update_insn_regno_info (as_a <rtx_insn *> (usage_insn));
+      usage_insn = next_usage_insns->insn ();
+      next_usage_insns = next_usage_insns->next ();
+      lra_substitute_pseudo_within_insn (usage_insn, original_regno, new_reg);
+      lra_update_insn_regno_info (usage_insn);
       if (lra_dump_file != NULL)
 	{
 	  fprintf (lra_dump_file, "    Split reuse change %d->%d:\n",
@@ -5031,9 +5126,9 @@ split_reg (bool before_p, int original_regno, rtx_insn *insn,
 	  dump_insn_slim (lra_dump_file, usage_insn);
 	}
     }
-  lra_assert (NOTE_P (usage_insn) || NONDEBUG_INSN_P (usage_insn));
+  lra_assert (! DEBUG_INSN_P (usage_insn));
   lra_assert (usage_insn != insn || (after_p && before_p));
-  lra_process_new_insns (as_a <rtx_insn *> (usage_insn),
+  lra_process_new_insns (usage_insn,
 			 after_p ? NULL : restore,
 			 after_p ? restore : NULL,
 			 call_save_p
@@ -5069,18 +5164,15 @@ split_if_necessary (int regno, machine_mode mode,
 {
   bool res = false;
   int i, nregs = 1;
-  rtx next_usage_insns;
+  rtx_usage_list *next_usage_insns;
 
   if (regno < FIRST_PSEUDO_REGISTER)
     nregs = hard_regno_nregs[regno][mode];
   for (i = 0; i < nregs; i++)
     if (usage_insns[regno + i].check == curr_usage_insns_check
-	&& (next_usage_insns = usage_insns[regno + i].insns) != NULL_RTX
+	&& (next_usage_insns = usage_insns[regno + i].insns) != NULL
 	/* To avoid processing the register twice or more.  */
-	&& ((GET_CODE (next_usage_insns) != INSN_LIST
-	     && INSN_UID (next_usage_insns) < max_uid)
-	    || (GET_CODE (next_usage_insns) == INSN_LIST
-		&& (INSN_UID (XEXP (next_usage_insns, 0)) < max_uid)))
+	&& (INSN_UID (next_usage_insns->insn ()) < max_uid)
 	&& need_for_split_p (potential_reload_hard_regs, regno + i)
 	&& split_reg (before_p, regno + i, insn, next_usage_insns))
     res = true;
@@ -5209,7 +5301,7 @@ struct to_inherit
   /* Original regno.  */
   int regno;
   /* Subsequent insns which can inherit original reg value.  */
-  rtx insns;
+  rtx_usage_list *insns;
 };
 
 /* Array containing all info for doing inheritance from the current
@@ -5222,7 +5314,7 @@ static int to_inherit_num;
 /* Add inheritance info REGNO and INSNS. Their meaning is described in
    structure to_inherit.  */
 static void
-add_to_inherit (int regno, rtx insns)
+add_to_inherit (int regno, rtx_usage_list *insns)
 {
   int i;
 
@@ -5301,7 +5393,8 @@ inherit_in_ebb (rtx_insn *head, rtx_insn *tail)
   int i, src_regno, dst_regno, nregs;
   bool change_p, succ_p, update_reloads_num_p;
   rtx_insn *prev_insn, *last_insn;
-  rtx next_usage_insns, set;
+  rtx_usage_list *next_usage_insns;
+  rtx set;
   enum reg_class cl;
   struct lra_insn_reg *reg;
   basic_block last_processed_bb, curr_bb = NULL;
@@ -5569,7 +5662,7 @@ inherit_in_ebb (rtx_insn *head, rtx_insn *tail)
 			   || reg_renumber[src_regno] >= 0)
 		    {
 		      bool before_p;
-		      rtx use_insn = curr_insn;
+		      rtx_insn *use_insn = curr_insn;
 
 		      before_p = (JUMP_P (curr_insn)
 				  || (CALL_P (curr_insn) && reg->type == OP_IN));
diff --git a/gcc/lra.c b/gcc/lra.c
index 269a0f1..6d3c73e 100644
--- a/gcc/lra.c
+++ b/gcc/lra.c
@@ -1825,7 +1825,7 @@ lra_substitute_pseudo (rtx *loc, int old_regno, rtx new_reg)
   const char *fmt;
   int i, j;
 
-  if (x == NULL_RTX)
+  if (x == NULL)
     return false;
 
   code = GET_CODE (x);
diff --git a/gcc/modulo-sched.c b/gcc/modulo-sched.c
index 22cd216..4afe43e 100644
--- a/gcc/modulo-sched.c
+++ b/gcc/modulo-sched.c
@@ -790,8 +790,7 @@ schedule_reg_moves (partial_schedule_ptr ps)
 	  move->old_reg = old_reg;
 	  move->new_reg = gen_reg_rtx (GET_MODE (prev_reg));
 	  move->num_consecutive_stages = distances[0] && distances[1] ? 2 : 1;
-	  move->insn = as_a <rtx_insn *> (gen_move_insn (move->new_reg,
-							 copy_rtx (prev_reg)));
+	  move->insn = gen_move_insn (move->new_reg, copy_rtx (prev_reg));
 	  bitmap_clear (move->uses);
 
 	  prev_reg = move->new_reg;
diff --git a/gcc/optabs.c b/gcc/optabs.c
index e9dc798..9a51ba3 100644
--- a/gcc/optabs.c
+++ b/gcc/optabs.c
@@ -1416,7 +1416,7 @@ expand_binop_directly (machine_mode mode, optab binoptab,
   machine_mode mode0, mode1, tmp_mode;
   struct expand_operand ops[3];
   bool commutative_p;
-  rtx pat;
+  rtx_insn *pat;
   rtx xop0 = op0, xop1 = op1;
   rtx swap;
 
@@ -1499,8 +1499,8 @@ expand_binop_directly (machine_mode mode, optab binoptab,
       /* If PAT is composed of more than one insn, try to add an appropriate
 	 REG_EQUAL note to it.  If we can't because TEMP conflicts with an
 	 operand, call expand_binop again, this time without a target.  */
-      if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
-	  && ! add_equal_note (as_a <rtx_insn *> (pat), ops[0].value,
+      if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
+	  && ! add_equal_note (pat, ops[0].value,
 			       optab_to_code (binoptab),
 			       ops[1].value, ops[2].value))
 	{
@@ -3016,15 +3016,15 @@ expand_unop_direct (machine_mode mode, optab unoptab, rtx op0, rtx target,
       struct expand_operand ops[2];
       enum insn_code icode = optab_handler (unoptab, mode);
       rtx_insn *last = get_last_insn ();
-      rtx pat;
+      rtx_insn *pat;
 
       create_output_operand (&ops[0], target, mode);
       create_convert_operand_from (&ops[1], op0, mode, unsignedp);
       pat = maybe_gen_insn (icode, 2, ops);
       if (pat)
 	{
-	  if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
-	      && ! add_equal_note (as_a <rtx_insn *> (pat), ops[0].value,
+	  if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
+	      && ! add_equal_note (pat, ops[0].value,
 				   optab_to_code (unoptab),
 				   ops[1].value, NULL_RTX))
 	    {
@@ -3508,7 +3508,7 @@ expand_abs (machine_mode mode, rtx op0, rtx target,
   NO_DEFER_POP;
 
   do_compare_rtx_and_jump (target, CONST0_RTX (mode), GE, 0, mode,
-			   NULL_RTX, NULL_RTX, op1, -1);
+			   NULL_RTX, NULL, op1, -1);
 
   op0 = expand_unop (mode, result_unsignedp ? neg_optab : negv_optab,
                      target, target, 0);
@@ -3817,7 +3817,7 @@ maybe_emit_unop_insn (enum insn_code icode, rtx target, rtx op0,
 		      enum rtx_code code)
 {
   struct expand_operand ops[2];
-  rtx pat;
+  rtx_insn *pat;
 
   create_output_operand (&ops[0], target, GET_MODE (target));
   create_input_operand (&ops[1], op0, GET_MODE (op0));
@@ -3825,10 +3825,9 @@ maybe_emit_unop_insn (enum insn_code icode, rtx target, rtx op0,
   if (!pat)
     return false;
 
-  if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
+  if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
       && code != UNKNOWN)
-    add_equal_note (as_a <rtx_insn *> (pat), ops[0].value, code, ops[1].value,
-		    NULL_RTX);
+    add_equal_note (pat, ops[0].value, code, ops[1].value, NULL_RTX);
 
   emit_insn (pat);
 
@@ -8370,13 +8369,13 @@ maybe_legitimize_operands (enum insn_code icode, unsigned int opno,
    and emit any necessary set-up code.  Return null and emit no
    code on failure.  */
 
-rtx
+rtx_insn *
 maybe_gen_insn (enum insn_code icode, unsigned int nops,
 		struct expand_operand *ops)
 {
   gcc_assert (nops == (unsigned int) insn_data[(int) icode].n_generator_args);
   if (!maybe_legitimize_operands (icode, 0, nops, ops))
-    return NULL_RTX;
+    return NULL;
 
   switch (nops)
     {
diff --git a/gcc/optabs.h b/gcc/optabs.h
index 152af87..5c30ce5 100644
--- a/gcc/optabs.h
+++ b/gcc/optabs.h
@@ -541,8 +541,8 @@ extern void create_convert_operand_from_type (struct expand_operand *op,
 extern bool maybe_legitimize_operands (enum insn_code icode,
 				       unsigned int opno, unsigned int nops,
 				       struct expand_operand *ops);
-extern rtx maybe_gen_insn (enum insn_code icode, unsigned int nops,
-			   struct expand_operand *ops);
+extern rtx_insn *maybe_gen_insn (enum insn_code icode, unsigned int nops,
+				 struct expand_operand *ops);
 extern bool maybe_expand_insn (enum insn_code icode, unsigned int nops,
 			       struct expand_operand *ops);
 extern bool maybe_expand_jump_insn (enum insn_code icode, unsigned int nops,
diff --git a/gcc/postreload-gcse.c b/gcc/postreload-gcse.c
index 83048bd..21228ac 100644
--- a/gcc/postreload-gcse.c
+++ b/gcc/postreload-gcse.c
@@ -1115,8 +1115,8 @@ eliminate_partially_redundant_load (basic_block bb, rtx_insn *insn,
 
 	  /* Make sure we can generate a move from register avail_reg to
 	     dest.  */
-	  rtx_insn *move = as_a <rtx_insn *>
-	    (gen_move_insn (copy_rtx (dest), copy_rtx (avail_reg)));
+	  rtx_insn *move = gen_move_insn (copy_rtx (dest),
+					  copy_rtx (avail_reg));
 	  extract_insn (move);
 	  if (! constrain_operands (1, get_preferred_alternatives (insn,
 								   pred_bb))
diff --git a/gcc/recog.c b/gcc/recog.c
index a9d3b1f..8fee5a7 100644
--- a/gcc/recog.c
+++ b/gcc/recog.c
@@ -3068,7 +3068,7 @@ split_all_insns_noflow (void)
 #ifdef HAVE_peephole2
 struct peep2_insn_data
 {
-  rtx insn;
+  rtx_insn *insn;
   regset live_before;
 };
 
@@ -3084,7 +3084,7 @@ int peep2_current_count;
 /* A non-insn marker indicating the last insn of the block.
    The live_before regset for this element is correct, indicating
    DF_LIVE_OUT for the block.  */
-#define PEEP2_EOB	pc_rtx
+#define PEEP2_EOB	(static_cast<rtx_insn *> (pc_rtx))
 
 /* Wrap N to fit into the peep2_insn_data buffer.  */
 
@@ -3287,7 +3287,7 @@ peep2_reinit_state (regset live)
 
   /* Indicate that all slots except the last holds invalid data.  */
   for (i = 0; i < MAX_INSNS_PER_PEEP2; ++i)
-    peep2_insn_data[i].insn = NULL_RTX;
+    peep2_insn_data[i].insn = NULL;
   peep2_current_count = 0;
 
   /* Indicate that the last slot contains live_after data.  */
@@ -3315,7 +3315,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
 
   /* If we are splitting an RTX_FRAME_RELATED_P insn, do not allow it to
      match more than one insn, or to be split into more than one insn.  */
-  old_insn = as_a <rtx_insn *> (peep2_insn_data[peep2_current].insn);
+  old_insn = peep2_insn_data[peep2_current].insn;
   if (RTX_FRAME_RELATED_P (old_insn))
     {
       bool any_note = false;
@@ -3403,7 +3403,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
       rtx note;
 
       j = peep2_buf_position (peep2_current + i);
-      old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
+      old_insn = peep2_insn_data[j].insn;
       if (!CALL_P (old_insn))
 	continue;
       was_call = true;
@@ -3442,7 +3442,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
       while (++i <= match_len)
 	{
 	  j = peep2_buf_position (peep2_current + i);
-	  old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
+	  old_insn = peep2_insn_data[j].insn;
 	  gcc_assert (!CALL_P (old_insn));
 	}
       break;
@@ -3454,7 +3454,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
   for (i = match_len; i >= 0; --i)
     {
       int j = peep2_buf_position (peep2_current + i);
-      old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
+      old_insn = peep2_insn_data[j].insn;
 
       as_note = find_reg_note (old_insn, REG_ARGS_SIZE, NULL);
       if (as_note)
@@ -3465,7 +3465,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
   eh_note = find_reg_note (peep2_insn_data[i].insn, REG_EH_REGION, NULL_RTX);
 
   /* Replace the old sequence with the new.  */
-  rtx_insn *peepinsn = as_a <rtx_insn *> (peep2_insn_data[i].insn);
+  rtx_insn *peepinsn = peep2_insn_data[i].insn;
   last = emit_insn_after_setloc (attempt,
 				 peep2_insn_data[i].insn,
 				 INSN_LOCATION (peepinsn));
@@ -3582,7 +3582,7 @@ peep2_update_life (basic_block bb, int match_len, rtx_insn *last,
    add more instructions to the buffer.  */
 
 static bool
-peep2_fill_buffer (basic_block bb, rtx insn, regset live)
+peep2_fill_buffer (basic_block bb, rtx_insn *insn, regset live)
 {
   int pos;
 
@@ -3608,7 +3608,7 @@ peep2_fill_buffer (basic_block bb, rtx insn, regset live)
   COPY_REG_SET (peep2_insn_data[pos].live_before, live);
   peep2_current_count++;
 
-  df_simulate_one_insn_forwards (bb, as_a <rtx_insn *> (insn), live);
+  df_simulate_one_insn_forwards (bb, insn, live);
   return true;
 }
 
diff --git a/gcc/recog.h b/gcc/recog.h
index 45ea671..7c95885 100644
--- a/gcc/recog.h
+++ b/gcc/recog.h
@@ -278,43 +278,43 @@ typedef const char * (*insn_output_fn) (rtx *, rtx_insn *);
 
 struct insn_gen_fn
 {
-  typedef rtx (*f0) (void);
-  typedef rtx (*f1) (rtx);
-  typedef rtx (*f2) (rtx, rtx);
-  typedef rtx (*f3) (rtx, rtx, rtx);
-  typedef rtx (*f4) (rtx, rtx, rtx, rtx);
-  typedef rtx (*f5) (rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f6) (rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f7) (rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f8) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f9) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f10) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f11) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f12) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f13) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f14) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f15) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f16) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f0) (void);
+  typedef rtx_insn * (*f1) (rtx);
+  typedef rtx_insn * (*f2) (rtx, rtx);
+  typedef rtx_insn * (*f3) (rtx, rtx, rtx);
+  typedef rtx_insn * (*f4) (rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f5) (rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f6) (rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f7) (rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f8) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f9) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f10) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f11) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f12) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f13) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f14) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f15) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f16) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
 
   typedef f0 stored_funcptr;
 
-  rtx operator () (void) const { return ((f0)func) (); }
-  rtx operator () (rtx a0) const { return ((f1)func) (a0); }
-  rtx operator () (rtx a0, rtx a1) const { return ((f2)func) (a0, a1); }
-  rtx operator () (rtx a0, rtx a1, rtx a2) const { return ((f3)func) (a0, a1, a2); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3) const { return ((f4)func) (a0, a1, a2, a3); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4) const { return ((f5)func) (a0, a1, a2, a3, a4); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5) const { return ((f6)func) (a0, a1, a2, a3, a4, a5); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6) const { return ((f7)func) (a0, a1, a2, a3, a4, a5, a6); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7) const { return ((f8)func) (a0, a1, a2, a3, a4, a5, a6, a7); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8) const { return ((f9)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9) const { return ((f10)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10) const { return ((f11)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11) const { return ((f12)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12) const { return ((f13)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13) const { return ((f14)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14) const { return ((f15)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14, rtx a15) const { return ((f16)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15); }
+  rtx_insn * operator () (void) const { return ((f0)func) (); }
+  rtx_insn * operator () (rtx a0) const { return ((f1)func) (a0); }
+  rtx_insn * operator () (rtx a0, rtx a1) const { return ((f2)func) (a0, a1); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2) const { return ((f3)func) (a0, a1, a2); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3) const { return ((f4)func) (a0, a1, a2, a3); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4) const { return ((f5)func) (a0, a1, a2, a3, a4); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5) const { return ((f6)func) (a0, a1, a2, a3, a4, a5); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6) const { return ((f7)func) (a0, a1, a2, a3, a4, a5, a6); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7) const { return ((f8)func) (a0, a1, a2, a3, a4, a5, a6, a7); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8) const { return ((f9)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9) const { return ((f10)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10) const { return ((f11)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11) const { return ((f12)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12) const { return ((f13)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13) const { return ((f14)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14) const { return ((f15)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14, rtx a15) const { return ((f16)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15); }
 
   // This is for compatibility of code that invokes functions like
   //   (*funcptr) (arg)
diff --git a/gcc/rtl.h b/gcc/rtl.h
index e5e4560..e88f3c8 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -636,6 +636,8 @@ class GTY(()) rtx_note : public rtx_insn
 
 #define NULL_RTX (rtx) 0
 
+#define NULL_INSN (rtx_insn *) 0
+
 /* The "next" and "previous" RTX, relative to this one.  */
 
 #define RTX_NEXT(X) (rtx_next[GET_CODE (X)] == 0 ? NULL			\
@@ -827,6 +829,14 @@ is_a_helper <rtx_debug_insn *>::test (rtx rt)
 template <>
 template <>
 inline bool
+is_a_helper <rtx_debug_insn *>::test (rtx_insn *insn)
+{
+  return DEBUG_INSN_P (insn);
+}
+
+template <>
+template <>
+inline bool
 is_a_helper <rtx_nonjump_insn *>::test (rtx rt)
 {
   return NONJUMP_INSN_P (rt);
@@ -843,6 +853,14 @@ is_a_helper <rtx_jump_insn *>::test (rtx rt)
 template <>
 template <>
 inline bool
+is_a_helper <rtx_jump_insn *>::test (rtx_insn *insn)
+{
+  return JUMP_P (insn);
+}
+
+template <>
+template <>
+inline bool
 is_a_helper <rtx_call_insn *>::test (rtx rt)
 {
   return CALL_P (rt);
@@ -2662,7 +2680,7 @@ extern rtx_insn *emit_debug_insn_before (rtx, rtx);
 extern rtx_insn *emit_debug_insn_before_noloc (rtx, rtx);
 extern rtx_insn *emit_debug_insn_before_setloc (rtx, rtx, int);
 extern rtx_barrier *emit_barrier_before (rtx);
-extern rtx_insn *emit_label_before (rtx, rtx_insn *);
+extern rtx_code_label *emit_label_before (rtx, rtx_insn *);
 extern rtx_note *emit_note_before (enum insn_note, rtx);
 extern rtx_insn *emit_insn_after (rtx, rtx);
 extern rtx_insn *emit_insn_after_noloc (rtx, rtx, basic_block);
@@ -2683,7 +2701,7 @@ extern rtx_insn *emit_insn (rtx);
 extern rtx_insn *emit_debug_insn (rtx);
 extern rtx_insn *emit_jump_insn (rtx);
 extern rtx_insn *emit_call_insn (rtx);
-extern rtx_insn *emit_label (rtx);
+extern rtx_code_label *emit_label (rtx);
 extern rtx_jump_table_data *emit_jump_table_data (rtx);
 extern rtx_barrier *emit_barrier (void);
 extern rtx_note *emit_note (enum insn_note);
@@ -3336,14 +3354,14 @@ extern int eh_returnjump_p (rtx_insn *);
 extern int onlyjump_p (const rtx_insn *);
 extern int only_sets_cc0_p (const_rtx);
 extern int sets_cc0_p (const_rtx);
-extern int invert_jump_1 (rtx_insn *, rtx);
-extern int invert_jump (rtx_insn *, rtx, int);
+extern int invert_jump_1 (rtx_jump_insn *, rtx);
+extern int invert_jump (rtx_jump_insn *, rtx, int);
 extern int rtx_renumbered_equal_p (const_rtx, const_rtx);
 extern int true_regnum (const_rtx);
 extern unsigned int reg_or_subregno (const_rtx);
 extern int redirect_jump_1 (rtx, rtx);
-extern void redirect_jump_2 (rtx, rtx, rtx, int, int);
-extern int redirect_jump (rtx, rtx, int);
+extern void redirect_jump_2 (rtx_jump_insn *, rtx, rtx, int, int);
+extern int redirect_jump (rtx_jump_insn *, rtx, int);
 extern void rebuild_jump_labels (rtx_insn *);
 extern void rebuild_jump_labels_chain (rtx_insn *);
 extern rtx reversed_comparison (const_rtx, machine_mode);
@@ -3426,7 +3444,7 @@ extern void print_inline_rtx (FILE *, const_rtx, int);
    not be in sched-vis.c but in rtl.c, because they are not only used
    by the scheduler anymore but for all "slim" RTL dumping.  */
 extern void dump_value_slim (FILE *, const_rtx, int);
-extern void dump_insn_slim (FILE *, const_rtx);
+extern void dump_insn_slim (FILE *, const rtx_insn *);
 extern void dump_rtl_slim (FILE *, const rtx_insn *, const rtx_insn *,
 			   int, int);
 extern void print_value (pretty_printer *, const_rtx, int);
@@ -3438,7 +3456,7 @@ extern const char *str_pattern_slim (const_rtx);
 /* In stmt.c */
 extern void expand_null_return (void);
 extern void expand_naked_return (void);
-extern void emit_jump (rtx);
+extern void emit_jump (rtx_code_label *);
 
 /* In expr.c */
 extern rtx move_by_pieces (rtx, rtx, unsigned HOST_WIDE_INT,
diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
index 743aad6..7d10abe 100644
--- a/gcc/rtlanal.c
+++ b/gcc/rtlanal.c
@@ -2914,14 +2914,14 @@ rtx_referenced_p (const_rtx x, const_rtx body)
 bool
 tablejump_p (const rtx_insn *insn, rtx *labelp, rtx_jump_table_data **tablep)
 {
-  rtx label, table;
+  rtx table;
 
   if (!JUMP_P (insn))
     return false;
 
-  label = JUMP_LABEL (insn);
-  if (label != NULL_RTX && !ANY_RETURN_P (label)
-      && (table = NEXT_INSN (as_a <rtx_insn *> (label))) != NULL_RTX
+  rtx_insn *label = JUMP_LABEL_AS_INSN (insn);
+  if (label && !ANY_RETURN_P (label)
+      && (table = NEXT_INSN (label)) != NULL_RTX
       && JUMP_TABLE_DATA_P (table))
     {
       if (labelp)
diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c
index 5434831..e6f1003 100644
--- a/gcc/sched-deps.c
+++ b/gcc/sched-deps.c
@@ -2649,7 +2649,7 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
     case MEM:
       {
 	/* Reading memory.  */
-	rtx u;
+	rtx_insn_list *u;
 	rtx_insn_list *pending;
 	rtx_expr_list *pending_mem;
 	rtx t = x;
@@ -2700,11 +2700,10 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
 		pending_mem = pending_mem->next ();
 	      }
 
-	    for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1))
-	      add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)),
-			      REG_DEP_ANTI);
+	    for (u = deps->last_pending_memory_flush; u; u = u->next ())
+	      add_dependence (insn, u->insn (), REG_DEP_ANTI);
 
-	    for (u = deps->pending_jump_insns; u; u = XEXP (u, 1))
+	    for (u = deps->pending_jump_insns; u; u = u->next ())
 	      if (deps_may_trap_p (x))
 		{
 		  if ((sched_deps_info->generate_spec_deps)
@@ -2713,11 +2712,10 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
 		      ds_t ds = set_dep_weak (DEP_ANTI, BEGIN_CONTROL,
 					      MAX_DEP_WEAK);
 		      
-		      note_dep (as_a <rtx_insn *> (XEXP (u, 0)), ds);
+		      note_dep (u->insn (), ds);
 		    }
 		  else
-		    add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)),
-				    REG_DEP_CONTROL);
+		    add_dependence (insn, u->insn (), REG_DEP_CONTROL);
 		}
 	  }
 
@@ -3088,7 +3086,7 @@ sched_analyze_insn (struct deps_desc *deps, rtx x, rtx_insn *insn)
   if (DEBUG_INSN_P (insn))
     {
       rtx_insn *prev = deps->last_debug_insn;
-      rtx u;
+      rtx_insn_list *u;
 
       if (!deps->readonly)
 	deps->last_debug_insn = insn;
@@ -3100,8 +3098,8 @@ sched_analyze_insn (struct deps_desc *deps, rtx x, rtx_insn *insn)
 			   REG_DEP_ANTI, false);
 
       if (!sel_sched_p ())
-	for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1))
-	  add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)), REG_DEP_ANTI);
+	for (u = deps->last_pending_memory_flush; u; u = u->next ())
+	  add_dependence (insn, u->insn (), REG_DEP_ANTI);
 
       EXECUTE_IF_SET_IN_REG_SET (reg_pending_uses, 0, i, rsi)
 	{
diff --git a/gcc/sched-vis.c b/gcc/sched-vis.c
index 32f7a7c..31794e6 100644
--- a/gcc/sched-vis.c
+++ b/gcc/sched-vis.c
@@ -67,7 +67,7 @@ along with GCC; see the file COPYING3.  If not see
    pointer, via str_pattern_slim, but this usage is discouraged.  */
 
 /* For insns we print patterns, and for some patterns we print insns...  */
-static void print_insn_with_notes (pretty_printer *, const_rtx);
+static void print_insn_with_notes (pretty_printer *, const rtx_insn *);
 
 /* This recognizes rtx'en classified as expressions.  These are always
    represent some action on values or results of other expression, that
@@ -669,7 +669,7 @@ print_pattern (pretty_printer *pp, const_rtx x, int verbose)
    with their INSN_UIDs.  */
 
 void
-print_insn (pretty_printer *pp, const_rtx x, int verbose)
+print_insn (pretty_printer *pp, const rtx_insn *x, int verbose)
 {
   if (verbose)
     {
@@ -787,7 +787,7 @@ print_insn (pretty_printer *pp, const_rtx x, int verbose)
    note attached to the instruction.  */
 
 static void
-print_insn_with_notes (pretty_printer *pp, const_rtx x)
+print_insn_with_notes (pretty_printer *pp, const rtx_insn *x)
 {
   pp_string (pp, print_rtx_head);
   print_insn (pp, x, 1);
@@ -823,7 +823,7 @@ dump_value_slim (FILE *f, const_rtx x, int verbose)
 /* Emit a slim dump of X (an insn) to the file F, including any register
    note attached to the instruction.  */
 void
-dump_insn_slim (FILE *f, const_rtx x)
+dump_insn_slim (FILE *f, const rtx_insn *x)
 {
   pretty_printer rtl_slim_pp;
   rtl_slim_pp.buffer->stream = f;
@@ -893,9 +893,9 @@ str_pattern_slim (const_rtx x)
 }
 
 /* Emit a slim dump of X (an insn) to stderr.  */
-extern void debug_insn_slim (const_rtx);
+extern void debug_insn_slim (const rtx_insn *);
 DEBUG_FUNCTION void
-debug_insn_slim (const_rtx x)
+debug_insn_slim (const rtx_insn *x)
 {
   dump_insn_slim (stderr, x);
 }
diff --git a/gcc/stmt.c b/gcc/stmt.c
index 45dc45f..a6418ff 100644
--- a/gcc/stmt.c
+++ b/gcc/stmt.c
@@ -135,12 +135,13 @@ static void balance_case_nodes (case_node_ptr *, case_node_ptr);
 static int node_has_low_bound (case_node_ptr, tree);
 static int node_has_high_bound (case_node_ptr, tree);
 static int node_is_bounded (case_node_ptr, tree);
-static void emit_case_nodes (rtx, case_node_ptr, rtx, int, tree);
+static void emit_case_nodes (rtx, case_node_ptr, rtx_code_label *, int, tree);
 \f
 /* Return the rtx-label that corresponds to a LABEL_DECL,
-   creating it if necessary.  */
+   creating it if necessary.  If the label was deleted, the corresponding
+   note insn (NOTE_INSN_DELETED{_DEBUG,}_LABEL) is returned.  */
 
-rtx
+rtx_insn *
 label_rtx (tree label)
 {
   gcc_assert (TREE_CODE (label) == LABEL_DECL);
@@ -153,15 +154,15 @@ label_rtx (tree label)
 	LABEL_PRESERVE_P (r) = 1;
     }
 
-  return DECL_RTL (label);
+  return as_a <rtx_insn *> (DECL_RTL (label));
 }
 
 /* As above, but also put it on the forced-reference list of the
    function that contains it.  */
-rtx
+rtx_insn *
 force_label_rtx (tree label)
 {
-  rtx_insn *ref = as_a <rtx_insn *> (label_rtx (label));
+  rtx_insn *ref = label_rtx (label);
   tree function = decl_function_context (label);
 
   gcc_assert (function);
@@ -170,10 +171,18 @@ force_label_rtx (tree label)
   return ref;
 }
 
+/* As label_rtx, but ensures (in a checking build) that the returned value
+   is an existing label (i.e. an rtx with code CODE_LABEL).  */
+rtx_code_label *
+live_label_rtx (tree label)
+{
+  return as_a <rtx_code_label *> (label_rtx (label));
+}
+
 /* Add an unconditional jump to LABEL as the next sequential instruction.  */
 
 void
-emit_jump (rtx label)
+emit_jump (rtx_code_label *label)
 {
   do_pending_stack_adjust ();
   emit_jump_insn (gen_jump (label));
@@ -196,7 +205,7 @@ emit_jump (rtx label)
 void
 expand_label (tree label)
 {
-  rtx_insn *label_r = as_a <rtx_insn *> (label_rtx (label));
+  rtx_code_label *label_r = live_label_rtx (label);
 
   do_pending_stack_adjust ();
   emit_label (label_r);
@@ -717,7 +726,7 @@ resolve_operand_name_1 (char *p, tree outputs, tree inputs, tree labels)
 void
 expand_naked_return (void)
 {
-  rtx end_label;
+  rtx_code_label *end_label;
 
   clear_pending_stack_adjust ();
   do_pending_stack_adjust ();
@@ -732,12 +741,12 @@ expand_naked_return (void)
 /* Generate code to jump to LABEL if OP0 and OP1 are equal in mode MODE. PROB
    is the probability of jumping to LABEL.  */
 static void
-do_jump_if_equal (machine_mode mode, rtx op0, rtx op1, rtx label,
+do_jump_if_equal (machine_mode mode, rtx op0, rtx op1, rtx_code_label *label,
 		  int unsignedp, int prob)
 {
   gcc_assert (prob <= REG_BR_PROB_BASE);
   do_compare_rtx_and_jump (op0, op1, EQ, unsignedp, mode,
-			   NULL_RTX, NULL_RTX, label, prob);
+			   NULL_RTX, NULL, label, prob);
 }
 \f
 /* Do the insertion of a case label into case_list.  The labels are
@@ -894,8 +903,8 @@ expand_switch_as_decision_tree_p (tree range,
 
 static void
 emit_case_decision_tree (tree index_expr, tree index_type,
-			 struct case_node *case_list, rtx default_label,
-                         int default_prob)
+			 case_node_ptr case_list, rtx_code_label *default_label,
+			 int default_prob)
 {
   rtx index = expand_normal (index_expr);
 
@@ -1153,7 +1162,7 @@ void
 expand_case (gswitch *stmt)
 {
   tree minval = NULL_TREE, maxval = NULL_TREE, range = NULL_TREE;
-  rtx default_label = NULL_RTX;
+  rtx_code_label *default_label = NULL;
   unsigned int count, uniq;
   int i;
   int ncases = gimple_switch_num_labels (stmt);
@@ -1185,7 +1194,7 @@ expand_case (gswitch *stmt)
   do_pending_stack_adjust ();
 
   /* Find the default case target label.  */
-  default_label = label_rtx (CASE_LABEL (gimple_switch_default_label (stmt)));
+  default_label = live_label_rtx (CASE_LABEL (gimple_switch_default_label (stmt)));
   edge default_edge = EDGE_SUCC (bb, 0);
   int default_prob = default_edge->probability;
 
@@ -1335,7 +1344,7 @@ expand_sjlj_dispatch_table (rtx dispatch_index,
       for (int i = 0; i < ncases; i++)
         {
 	  tree elt = dispatch_table[i];
-	  rtx lab = label_rtx (CASE_LABEL (elt));
+	  rtx_code_label *lab = live_label_rtx (CASE_LABEL (elt));
 	  do_jump_if_equal (index_mode, index, zero, lab, 0, -1);
 	  force_expand_binop (index_mode, sub_optab,
 			      index, CONST1_RTX (index_mode),
@@ -1604,7 +1613,7 @@ node_is_bounded (case_node_ptr node, tree index_type)
    tests for the value 50, then this node need not test anything.  */
 
 static void
-emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
+emit_case_nodes (rtx index, case_node_ptr node, rtx_code_label *default_label,
 		 int default_prob, tree index_type)
 {
   /* If INDEX has an unsigned type, we must make unsigned branches.  */
@@ -1632,7 +1641,8 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 			convert_modes (mode, imode,
 				       expand_normal (node->low),
 				       unsignedp),
-			label_rtx (node->code_label), unsignedp, probability);
+			live_label_rtx (node->code_label),
+			unsignedp, probability);
       /* Since this case is taken at this point, reduce its weight from
          subtree_weight.  */
       subtree_prob -= prob;
@@ -1699,7 +1709,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 				convert_modes (mode, imode,
 					       expand_normal (node->right->low),
 					       unsignedp),
-				label_rtx (node->right->code_label),
+				live_label_rtx (node->right->code_label),
 				unsignedp, probability);
 
 	      /* See if the value matches what the left hand side
@@ -1711,7 +1721,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 				convert_modes (mode, imode,
 					       expand_normal (node->left->low),
 					       unsignedp),
-				label_rtx (node->left->code_label),
+				live_label_rtx (node->left->code_label),
 				unsignedp, probability);
 	    }
 
@@ -1798,7 +1808,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 			        (mode, imode,
 			         expand_normal (node->right->low),
 			         unsignedp),
-			        label_rtx (node->right->code_label), unsignedp, probability);
+			        live_label_rtx (node->right->code_label), unsignedp, probability);
             }
 	  }
 
@@ -1840,7 +1850,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 			        (mode, imode,
 			         expand_normal (node->left->low),
 			         unsignedp),
-			        label_rtx (node->left->code_label), unsignedp, probability);
+			        live_label_rtx (node->left->code_label), unsignedp, probability);
             }
 	}
     }
@@ -2063,7 +2073,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 				       mode, 1, default_label, probability);
 	    }
 
-	  emit_jump (label_rtx (node->code_label));
+	  emit_jump (live_label_rtx (node->code_label));
 	}
     }
 }
diff --git a/gcc/stmt.h b/gcc/stmt.h
index 620b0f1..7b142ce 100644
--- a/gcc/stmt.h
+++ b/gcc/stmt.h
@@ -31,13 +31,18 @@ extern tree resolve_asm_operand_names (tree, tree, tree, tree);
 extern tree tree_overlaps_hard_reg_set (tree, HARD_REG_SET *);
 #endif
 
-/* Return the CODE_LABEL rtx for a LABEL_DECL, creating it if necessary.  */
-extern rtx label_rtx (tree);
+/* Return the CODE_LABEL rtx for a LABEL_DECL, creating it if necessary.
+   If the label was deleted, the corresponding note insn
+   (NOTE_INSN_DELETED{_DEBUG,}_LABEL) is returned.  */
+extern rtx_insn *label_rtx (tree);
 
 /* As label_rtx, but additionally the label is placed on the forced label
    list of its containing function (i.e. it is treated as reachable even
    if how is not obvious).  */
-extern rtx force_label_rtx (tree);
+extern rtx_insn *force_label_rtx (tree);
+
+/* As label_rtx, but checks that the label was not deleted.  */
+extern rtx_code_label *live_label_rtx (tree);
 
 /* Expand a GIMPLE_SWITCH statement.  */
 extern void expand_case (gswitch *);
diff --git a/gcc/store-motion.c b/gcc/store-motion.c
index 530766f..11e2dec 100644
--- a/gcc/store-motion.c
+++ b/gcc/store-motion.c
@@ -813,7 +813,7 @@ insert_store (struct st_expr * expr, edge e)
     return 0;
 
   reg = expr->reaching_reg;
-  insn = as_a <rtx_insn *> (gen_move_insn (copy_rtx (expr->pattern), reg));
+  insn = gen_move_insn (copy_rtx (expr->pattern), reg);
 
   /* If we are inserting this expression on ALL predecessor edges of a BB,
      insert it at the start of the BB, and reset the insert bits on the other
@@ -954,7 +954,7 @@ replace_store_insn (rtx reg, rtx_insn *del, basic_block bb,
   rtx mem, note, set, ptr;
 
   mem = smexpr->pattern;
-  insn = as_a <rtx_insn *> (gen_move_insn (reg, SET_SRC (single_set (del))));
+  insn = gen_move_insn (reg, SET_SRC (single_set (del)));
 
   for (ptr = smexpr->antic_stores; ptr; ptr = XEXP (ptr, 1))
     if (XEXP (ptr, 0) == del)


* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-03-31  4:38 [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses Mikhail Maltsev
@ 2015-03-31 15:52 ` Trevor Saunders
  2015-04-02 21:13 ` Jeff Law
  2015-04-25 11:49 ` Richard Sandiford
  2 siblings, 0 replies; 21+ messages in thread
From: Trevor Saunders @ 2015-03-31 15:52 UTC (permalink / raw)
  To: Mikhail Maltsev; +Cc: Jeff Law, gcc-patches

On Tue, Mar 31, 2015 at 07:37:40AM +0300, Mikhail Maltsev wrote:
> Hi!
> 
> I'm currently working on the proposed task of replacing rtx objects
> (i.e. struct rtx_def) with derived classes. I would like to get some
> feedback on this work (it's far from being finished, but basically I
> would like to know, whether my modifications are appropriate, e.g. one
> may consider that this is "too much" for just refactoring, because
> sometimes they involve small modification of semantics).

I don't see why "too much" would make sense if the change improves
maintainability.

> The attached patch is not well tested, i.e. I bootstrapped and regtested
> it only on x86_64, but I'll perform more extensive testing before
> submitting the next version.
> 
> The key points I would like to ask about:
> 
> 1. The original task was to replace the rtx type by rtx_insn *, where it
> is appropriate. But rtx_insn has several derived classes, such as
> rtx_code_label, for example. So I tried to use the most derived type
> when possible. Is it OK?

Sure, why not?
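
As a concrete sketch of the pattern in question, borrowing names from
the bb-reorder.c hunk in your patch (an illustration of the idiom, not
a quote of the patch itself):

    /* Check the insn's code once, then propagate the derived pointer
       type, so callees can take rtx_jump_insn * instead of re-checking
       JUMP_P on a plain rtx.  */
    if (rtx_jump_insn *jump = dyn_cast <rtx_jump_insn *> (BB_END (bb)))
      invert_jump (jump, JUMP_LABEL (jump), 0);

with as_a <rtx_jump_insn *> (...) being the checked form for places
that already guarantee the insn is a jump.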

> 2. Not all of these "type promotions" can be done by just looking at
> function callers and callees (and some functions will only be generated
> during the build of some rare architecture) and checks already done in
> them. In a couple of cases I referred to comments and my general
> understanding of code semantics. In one case this actually caused a
> regression (in the patch it is fixed, of course), because of somewhat
> misleading comment (see "live_label_rtx" function added in patch for
> details) The question is - are such changes OK for refactoring (or it
> should strictly preserve semantics)?

I think correct semantic changes are just fine if they make things
easier to use and read.
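
For reference, judging from the new comments in stmt.h, the checked
variant could be as simple as (a sketch of the shape, not the patch's
actual stmt.c definition):

    rtx_code_label *
    live_label_rtx (tree label)
    {
      /* label_rtx () may return a NOTE_INSN_DELETED_LABEL note insn
         when the label has been deleted; as_a <> asserts that this
         did not happen here.  */
      return as_a <rtx_code_label *> (label_rtx (label));
    }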

> 3. In lra-constraints.c I added a new class rtx_usage_list, which, IMHO,
> allows to group the functions which work with usage list in a more
> explicit manner and make some conditions more self-explaining. I hope
> that Vladimir Makarov (in this case, because it concerns LRA) and other
> authors will not object against such "intrusion" into their code (or
> would rather tell me what should be fixed in my patch(es), instead of
> just refusing to apply it/them). In general, are such changes OK or
> should better be avoided?

I wouldn't avoid them, though I would definitely break this patch up into
smaller ones that each make one set of related changes.

> A couple of questions related to further work:
> 
> 1. I noticed that emit_insn function, in fact, does two kinds of things:
> it can either add it's argument to the chain, or, if the argument is a
> pattern, it creates a new instruction based on that pattern. Shouldn't
> this logic be separated in the callers?

That might well make sense.
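
Roughly, the split would give callers two entry points, say
(illustrative names, not from the patch):

    /* Hypothetical split of emit_insn -- both names are made up for
       the sake of the example.  */
    extern rtx_insn *emit_existing_insn (rtx_insn *);  /* add INSN to the chain */
    extern rtx_insn *emit_insn_from_pattern (rtx);     /* wrap PATTERN in a new
                                                          insn, then add it */

so that the function no longer has to dispatch on the rtx code of its
argument.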

> 2. Are there any plans on implementing a better class hierarchy on AST's
> ("union tree_node" type). I see that C++ FE uses a huge number of macros
> (which check TREE_CODE and some boolean flags). Could this be improved
> somehow?

People have talked about doing it, and Andrew MacLeod's work on
separating types out of tree is related, but not too much has happened
yet.
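
Schematically, the macro pattern being asked about, next to a
hypothetical class-based spelling (a function_decl C++ class and
handle_virtual_fn are both made up for the example; only TREE_CODE and
DECL_VIRTUAL_P exist today):

    /* Today: ad-hoc macro checks on union tree_node.  */
    if (TREE_CODE (t) == FUNCTION_DECL && DECL_VIRTUAL_P (t))
      handle_virtual_fn (t);

    /* With a real hierarchy, something like:  */
    if (function_decl *fn = dyn_cast <function_decl *> (t))
      if (fn->virtual_p ())
        handle_virtual_fn (fn);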

Trev

> 
> -- 
> Regards,
>     Mikhail Maltsev

> diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
> index c2a3be3..7179faa 100644
> --- a/gcc/bb-reorder.c
> +++ b/gcc/bb-reorder.c
> @@ -1745,9 +1745,11 @@ set_edge_can_fallthru_flag (void)
>  	continue;
>        if (!any_condjump_p (BB_END (bb)))
>  	continue;
> -      if (!invert_jump (BB_END (bb), JUMP_LABEL (BB_END (bb)), 0))
> +
> +      rtx_jump_insn *bb_end_jump = as_a <rtx_jump_insn *> (BB_END (bb));
> +      if (!invert_jump (bb_end_jump, JUMP_LABEL (bb_end_jump), 0))
>  	continue;
> -      invert_jump (BB_END (bb), JUMP_LABEL (BB_END (bb)), 0);
> +      invert_jump (bb_end_jump, JUMP_LABEL (bb_end_jump), 0);
>        EDGE_SUCC (bb, 0)->flags |= EDGE_CAN_FALLTHRU;
>        EDGE_SUCC (bb, 1)->flags |= EDGE_CAN_FALLTHRU;
>      }
> @@ -1902,9 +1904,15 @@ fix_up_fall_thru_edges (void)
>  
>  		      fall_thru_label = block_label (fall_thru->dest);
>  
> -		      if (old_jump && JUMP_P (old_jump) && fall_thru_label)
> -			invert_worked = invert_jump (old_jump,
> -						     fall_thru_label,0);
> +		      if (old_jump && fall_thru_label)
> +                        {
> +                          rtx_jump_insn *old_jump_insn =
> +                                  dyn_cast <rtx_jump_insn *> (old_jump);
> +                          if (old_jump_insn)
> +                            invert_worked = invert_jump (old_jump_insn,
> +						     fall_thru_label, 0);
> +                        }
> +
>  		      if (invert_worked)
>  			{
>  			  fall_thru->flags &= ~EDGE_FALLTHRU;
> @@ -2024,7 +2032,7 @@ fix_crossing_conditional_branches (void)
>    rtx_insn *old_jump;
>    rtx set_src;
>    rtx old_label = NULL_RTX;
> -  rtx new_label;
> +  rtx_code_label *new_label;
>  
>    FOR_EACH_BB_FN (cur_bb, cfun)
>      {
> @@ -2088,7 +2096,7 @@ fix_crossing_conditional_branches (void)
>  	      else
>  		{
>  		  basic_block last_bb;
> -		  rtx_insn *new_jump;
> +		  rtx_insn *new_jump, *old_label_insn;
>  
>  		  /* Create new basic block to be dest for
>  		     conditional jump.  */
> @@ -2099,9 +2107,9 @@ fix_crossing_conditional_branches (void)
>  		  emit_label (new_label);
>  
>  		  gcc_assert (GET_CODE (old_label) == LABEL_REF);
> -		  old_label = JUMP_LABEL (old_jump);
> -		  new_jump = emit_jump_insn (gen_jump (old_label));
> -		  JUMP_LABEL (new_jump) = old_label;
> +		  old_label_insn = JUMP_LABEL_AS_INSN (old_jump);
> +		  new_jump = emit_jump_insn (gen_jump (old_label_insn));
> +		  JUMP_LABEL (new_jump) = old_label_insn;
>  
>  		  last_bb = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
>  		  new_bb = create_basic_block (new_label, new_jump, last_bb);
> @@ -2117,7 +2125,7 @@ fix_crossing_conditional_branches (void)
>  
>  	      /* Make old jump branch to new bb.  */
>  
> -	      redirect_jump (old_jump, new_label, 0);
> +	      redirect_jump (as_a <rtx_jump_insn *> (old_jump), new_label, 0);
>  
>  	      /* Remove crossing_edge as predecessor of 'dest'.  */
>  
> diff --git a/gcc/bt-load.c b/gcc/bt-load.c
> index c028281..2280124 100644
> --- a/gcc/bt-load.c
> +++ b/gcc/bt-load.c
> @@ -1212,7 +1212,7 @@ move_btr_def (basic_block new_def_bb, int btr, btr_def def, bitmap live_range,
>    btr_mode = GET_MODE (SET_DEST (set));
>    btr_rtx = gen_rtx_REG (btr_mode, btr);
>  
> -  new_insn = as_a <rtx_insn *> (gen_move_insn (btr_rtx, src));
> +  new_insn = gen_move_insn (btr_rtx, src);
>  
>    /* Insert target register initialization at head of basic block.  */
>    def->insn = emit_insn_after (new_insn, insp);
> diff --git a/gcc/builtins.c b/gcc/builtins.c
> index 9263777..945492e 100644
> --- a/gcc/builtins.c
> +++ b/gcc/builtins.c
> @@ -2001,7 +2001,7 @@ expand_errno_check (tree exp, rtx target)
>    /* Test the result; if it is NaN, set errno=EDOM because
>       the argument was not in the domain.  */
>    do_compare_rtx_and_jump (target, target, EQ, 0, GET_MODE (target),
> -			   NULL_RTX, NULL_RTX, lab,
> +			   NULL_RTX, NULL, lab,
>  			   /* The jump is very likely.  */
>  			   REG_BR_PROB_BASE - (REG_BR_PROB_BASE / 2000 - 1));
>  
> @@ -5938,9 +5938,9 @@ expand_builtin_acc_on_device (tree exp, rtx target)
>    emit_move_insn (target, const1_rtx);
>    rtx_code_label *done_label = gen_label_rtx ();
>    do_compare_rtx_and_jump (v, v1, EQ, false, v_mode, NULL_RTX,
> -			   NULL_RTX, done_label, PROB_EVEN);
> +			   NULL, done_label, PROB_EVEN);
>    do_compare_rtx_and_jump (v, v2, EQ, false, v_mode, NULL_RTX,
> -			   NULL_RTX, done_label, PROB_EVEN);
> +			   NULL, done_label, PROB_EVEN);
>    emit_move_insn (target, const0_rtx);
>    emit_label (done_label);
>  
> diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
> index cee152e..05146b6 100644
> --- a/gcc/cfgcleanup.c
> +++ b/gcc/cfgcleanup.c
> @@ -190,7 +190,8 @@ try_simplify_condjump (basic_block cbranch_block)
>      return false;
>  
>    /* Invert the conditional branch.  */
> -  if (!invert_jump (cbranch_insn, block_label (jump_dest_block), 0))
> +  if (!invert_jump (as_a <rtx_jump_insn *> (cbranch_insn),
> +                    block_label (jump_dest_block), 0))
>      return false;
>  
>    if (dump_file)
> diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
> index 97e7a25..aedc4b8 100644
> --- a/gcc/cfgexpand.c
> +++ b/gcc/cfgexpand.c
> @@ -2051,7 +2051,7 @@ static hash_map<basic_block, rtx_code_label *> *lab_rtx_for_bb;
>  
>  /* Returns the label_rtx expression for a label starting basic block BB.  */
>  
> -static rtx
> +static rtx_code_label *
>  label_rtx_for_bb (basic_block bb ATTRIBUTE_UNUSED)
>  {
>    gimple_stmt_iterator gsi;
> @@ -2078,7 +2078,7 @@ label_rtx_for_bb (basic_block bb ATTRIBUTE_UNUSED)
>        if (DECL_NONLOCAL (lab))
>  	break;
>  
> -      return label_rtx (lab);
> +      return live_label_rtx (lab);
>      }
>  
>    rtx_code_label *l = gen_label_rtx ();
> @@ -5579,7 +5579,7 @@ construct_init_block (void)
>      {
>        tree label = gimple_block_label (e->dest);
>  
> -      emit_jump (label_rtx (label));
> +      emit_jump (live_label_rtx (label));
>        flags = 0;
>      }
>    else
> diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
> index 0e27edd..7da23e7 100644
> --- a/gcc/cfgrtl.c
> +++ b/gcc/cfgrtl.c
> @@ -1001,18 +1001,18 @@ rtl_can_merge_blocks (basic_block a, basic_block b)
>  /* Return the label in the head of basic block BLOCK.  Create one if it doesn't
>     exist.  */
>  
> -rtx
> +rtx_code_label *
>  block_label (basic_block block)
>  {
>    if (block == EXIT_BLOCK_PTR_FOR_FN (cfun))
> -    return NULL_RTX;
> +    return NULL;
>  
>    if (!LABEL_P (BB_HEAD (block)))
>      {
>        BB_HEAD (block) = emit_label_before (gen_label_rtx (), BB_HEAD (block));
>      }
>  
> -  return BB_HEAD (block);
> +  return as_a <rtx_code_label *> (BB_HEAD (block));
>  }
>  
>  /* Attempt to perform edge redirection by replacing possibly complex jump
> @@ -1114,7 +1114,8 @@ try_redirect_by_replacing_jump (edge e, basic_block target, bool in_cfglayout)
>        if (dump_file)
>  	fprintf (dump_file, "Redirecting jump %i from %i to %i.\n",
>  		 INSN_UID (insn), e->dest->index, target->index);
> -      if (!redirect_jump (insn, block_label (target), 0))
> +      if (!redirect_jump (as_a <rtx_jump_insn *> (insn),
> +                          block_label (target), 0))
>  	{
>  	  gcc_assert (target == EXIT_BLOCK_PTR_FOR_FN (cfun));
>  	  return NULL;
> @@ -1298,7 +1299,8 @@ patch_jump_insn (rtx_insn *insn, rtx_insn *old_label, basic_block new_bb)
>  	  /* If the substitution doesn't succeed, die.  This can happen
>  	     if the back end emitted unrecognizable instructions or if
>  	     target is exit block on some arches.  */
> -	  if (!redirect_jump (insn, block_label (new_bb), 0))
> +	  if (!redirect_jump (as_a <rtx_jump_insn *> (insn),
> +                              block_label (new_bb), 0))
>  	    {
>  	      gcc_assert (new_bb == EXIT_BLOCK_PTR_FOR_FN (cfun));
>  	      return false;
> @@ -1326,7 +1328,7 @@ redirect_branch_edge (edge e, basic_block target)
>  
>    if (!currently_expanding_to_rtl)
>      {
> -      if (!patch_jump_insn (insn, old_label, target))
> +      if (!patch_jump_insn (as_a <rtx_jump_insn *> (insn), old_label, target))
>  	return NULL;
>      }
>    else
> @@ -1334,7 +1336,8 @@ redirect_branch_edge (edge e, basic_block target)
>         jumps (i.e. not yet split by find_many_sub_basic_blocks).
>         Redirect all of those that match our label.  */
>      FOR_BB_INSNS (src, insn)
> -      if (JUMP_P (insn) && !patch_jump_insn (insn, old_label, target))
> +      if (JUMP_P (insn) && !patch_jump_insn (as_a <rtx_jump_insn *> (insn),
> +                                             old_label, target))
>  	return NULL;
>  
>    if (dump_file)
> @@ -1525,7 +1528,8 @@ force_nonfallthru_and_redirect (edge e, basic_block target, rtx jump_label)
>        edge b = unchecked_make_edge (e->src, target, 0);
>        bool redirected;
>  
> -      redirected = redirect_jump (BB_END (e->src), block_label (target), 0);
> +      redirected = redirect_jump (as_a <rtx_jump_insn *> (BB_END (e->src)),
> +                                  block_label (target), 0);
>        gcc_assert (redirected);
>  
>        note = find_reg_note (BB_END (e->src), REG_BR_PROB, NULL_RTX);
> @@ -3783,10 +3787,10 @@ fixup_reorder_chain (void)
>  	  e_taken = e;
>  
>        bb_end_insn = BB_END (bb);
> -      if (JUMP_P (bb_end_insn))
> +      if (rtx_jump_insn *bb_end_jump = dyn_cast <rtx_jump_insn *> (bb_end_insn))
>  	{
> -	  ret_label = JUMP_LABEL (bb_end_insn);
> -	  if (any_condjump_p (bb_end_insn))
> +	  ret_label = JUMP_LABEL (bb_end_jump);
> +	  if (any_condjump_p (bb_end_jump))
>  	    {
>  	      /* This might happen if the conditional jump has side
>  		 effects and could therefore not be optimized away.
> @@ -3794,10 +3798,10 @@ fixup_reorder_chain (void)
>  		 to prevent rtl_verify_flow_info from complaining.  */
>  	      if (!e_fall)
>  		{
> -		  gcc_assert (!onlyjump_p (bb_end_insn)
> -			      || returnjump_p (bb_end_insn)
> +		  gcc_assert (!onlyjump_p (bb_end_jump)
> +			      || returnjump_p (bb_end_jump)
>                                || (e_taken->flags & EDGE_CROSSING));
> -		  emit_barrier_after (bb_end_insn);
> +		  emit_barrier_after (bb_end_jump);
>  		  continue;
>  		}
>  
> @@ -3819,11 +3823,11 @@ fixup_reorder_chain (void)
>  		 edge based on known or assumed probability.  */
>  	      else if (bb->aux != e_taken->dest)
>  		{
> -		  rtx note = find_reg_note (bb_end_insn, REG_BR_PROB, 0);
> +		  rtx note = find_reg_note (bb_end_jump, REG_BR_PROB, 0);
>  
>  		  if (note
>  		      && XINT (note, 0) < REG_BR_PROB_BASE / 2
> -		      && invert_jump (bb_end_insn,
> +		      && invert_jump (bb_end_jump,
>  				      (e_fall->dest
>  				       == EXIT_BLOCK_PTR_FOR_FN (cfun)
>  				       ? NULL_RTX
> @@ -3846,7 +3850,7 @@ fixup_reorder_chain (void)
>  
>  	      /* Otherwise we can try to invert the jump.  This will
>  		 basically never fail, however, keep up the pretense.  */
> -	      else if (invert_jump (bb_end_insn,
> +	      else if (invert_jump (bb_end_jump,
>  				    (e_fall->dest
>  				     == EXIT_BLOCK_PTR_FOR_FN (cfun)
>  				     ? NULL_RTX
> @@ -4967,7 +4971,7 @@ rtl_lv_add_condition_to_bb (basic_block first_head ,
>  			    basic_block second_head ATTRIBUTE_UNUSED,
>  			    basic_block cond_bb, void *comp_rtx)
>  {
> -  rtx label;
> +  rtx_code_label *label;
>    rtx_insn *seq, *jump;
>    rtx op0 = XEXP ((rtx)comp_rtx, 0);
>    rtx op1 = XEXP ((rtx)comp_rtx, 1);
> @@ -4983,8 +4987,7 @@ rtl_lv_add_condition_to_bb (basic_block first_head ,
>    start_sequence ();
>    op0 = force_operand (op0, NULL_RTX);
>    op1 = force_operand (op1, NULL_RTX);
> -  do_compare_rtx_and_jump (op0, op1, comp, 0,
> -			   mode, NULL_RTX, NULL_RTX, label, -1);
> +  do_compare_rtx_and_jump (op0, op1, comp, 0, mode, NULL_RTX, NULL, label, -1);
>    jump = get_last_insn ();
>    JUMP_LABEL (jump) = label;
>    LABEL_NUSES (label)++;
> diff --git a/gcc/cfgrtl.h b/gcc/cfgrtl.h
> index 32c8ff6..cdf1477 100644
> --- a/gcc/cfgrtl.h
> +++ b/gcc/cfgrtl.h
> @@ -33,7 +33,7 @@ extern bool contains_no_active_insn_p (const_basic_block);
>  extern bool forwarder_block_p (const_basic_block);
>  extern bool can_fallthru (basic_block, basic_block);
>  extern rtx_note *bb_note (basic_block);
> -extern rtx block_label (basic_block);
> +extern rtx_code_label *block_label (basic_block);
>  extern edge try_redirect_by_replacing_jump (edge, basic_block, bool);
>  extern void emit_barrier_after_bb (basic_block bb);
>  extern basic_block force_nonfallthru_and_redirect (edge, basic_block, rtx);
> diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
> index 22bc81f..b6c71b2 100644
> --- a/gcc/config/i386/i386.c
> +++ b/gcc/config/i386/i386.c
> @@ -38448,7 +38448,7 @@ ix86_emit_cmove (rtx dst, rtx src, enum rtx_code code, rtx op1, rtx op2)
>      }
>    else
>      {
> -      rtx nomove = gen_label_rtx ();
> +      rtx_code_label *nomove = gen_label_rtx ();
>        emit_cmp_and_jump_insns (op1, op2, reverse_condition (code),
>  			       const0_rtx, GET_MODE (op1), 1, nomove);
>        emit_move_insn (dst, src);
> diff --git a/gcc/dojump.c b/gcc/dojump.c
> index ad356ba..42dc479 100644
> --- a/gcc/dojump.c
> +++ b/gcc/dojump.c
> @@ -61,10 +61,12 @@ along with GCC; see the file COPYING3.  If not see
>  #include "tm_p.h"
>  
>  static bool prefer_and_bit_test (machine_mode, int);
> -static void do_jump_by_parts_greater (tree, tree, int, rtx, rtx, int);
> -static void do_jump_by_parts_equality (tree, tree, rtx, rtx, int);
> -static void do_compare_and_jump	(tree, tree, enum rtx_code, enum rtx_code, rtx,
> -				 rtx, int);
> +static void do_jump_by_parts_greater (tree, tree, int,
> +				      rtx_code_label *, rtx_code_label *, int);
> +static void do_jump_by_parts_equality (tree, tree, rtx_code_label *,
> +				       rtx_code_label *, int);
> +static void do_compare_and_jump	(tree, tree, enum rtx_code, enum rtx_code,
> +				 rtx_code_label *, rtx_code_label *, int);
>  
>  /* Invert probability if there is any.  -1 stands for unknown.  */
>  
> @@ -146,34 +148,34 @@ restore_pending_stack_adjust (saved_pending_stack_adjust *save)
>  \f
>  /* Expand conditional expressions.  */
>  
> -/* Generate code to evaluate EXP and jump to LABEL if the value is zero.
> -   LABEL is an rtx of code CODE_LABEL, in this function and all the
> -   functions here.  */
> +/* Generate code to evaluate EXP and jump to LABEL if the value is zero.  */
>  
>  void
> -jumpifnot (tree exp, rtx label, int prob)
> +jumpifnot (tree exp, rtx_code_label *label, int prob)
>  {
> -  do_jump (exp, label, NULL_RTX, inv (prob));
> +  do_jump (exp, label, NULL, inv (prob));
>  }
>  
>  void
> -jumpifnot_1 (enum tree_code code, tree op0, tree op1, rtx label, int prob)
> +jumpifnot_1 (enum tree_code code, tree op0, tree op1, rtx_code_label *label,
> +	     int prob)
>  {
> -  do_jump_1 (code, op0, op1, label, NULL_RTX, inv (prob));
> +  do_jump_1 (code, op0, op1, label, NULL, inv (prob));
>  }
>  
>  /* Generate code to evaluate EXP and jump to LABEL if the value is nonzero.  */
>  
>  void
> -jumpif (tree exp, rtx label, int prob)
> +jumpif (tree exp, rtx_code_label *label, int prob)
>  {
> -  do_jump (exp, NULL_RTX, label, prob);
> +  do_jump (exp, NULL, label, prob);
>  }
>  
>  void
> -jumpif_1 (enum tree_code code, tree op0, tree op1, rtx label, int prob)
> +jumpif_1 (enum tree_code code, tree op0, tree op1,
> +	  rtx_code_label *label, int prob)
>  {
> -  do_jump_1 (code, op0, op1, NULL_RTX, label, prob);
> +  do_jump_1 (code, op0, op1, NULL, label, prob);
>  }
>  
>  /* Used internally by prefer_and_bit_test.  */
> @@ -225,7 +227,8 @@ prefer_and_bit_test (machine_mode mode, int bitnum)
>  
>  void
>  do_jump_1 (enum tree_code code, tree op0, tree op1,
> -	   rtx if_false_label, rtx if_true_label, int prob)
> +	   rtx_code_label *if_false_label, rtx_code_label *if_true_label,
> +	   int prob)
>  {
>    machine_mode mode;
>    rtx_code_label *drop_through_label = 0;
> @@ -378,15 +381,15 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
>              op0_prob = inv (op0_false_prob);
>              op1_prob = inv (op1_false_prob);
>            }
> -        if (if_false_label == NULL_RTX)
> +        if (if_false_label == NULL)
>            {
>              drop_through_label = gen_label_rtx ();
> -            do_jump (op0, drop_through_label, NULL_RTX, op0_prob);
> -            do_jump (op1, NULL_RTX, if_true_label, op1_prob);
> +            do_jump (op0, drop_through_label, NULL, op0_prob);
> +            do_jump (op1, NULL, if_true_label, op1_prob);
>            }
>          else
>            {
> -            do_jump (op0, if_false_label, NULL_RTX, op0_prob);
> +            do_jump (op0, if_false_label, NULL, op0_prob);
>              do_jump (op1, if_false_label, if_true_label, op1_prob);
>            }
>          break;
> @@ -405,18 +408,18 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
>            {
>              op0_prob = prob / 2;
>              op1_prob = GCOV_COMPUTE_SCALE ((prob / 2), inv (op0_prob));
> -          }
> -        if (if_true_label == NULL_RTX)
> -          {
> -            drop_through_label = gen_label_rtx ();
> -            do_jump (op0, NULL_RTX, drop_through_label, op0_prob);
> -            do_jump (op1, if_false_label, NULL_RTX, op1_prob);
> -          }
> -        else
> -          {
> -            do_jump (op0, NULL_RTX, if_true_label, op0_prob);
> -            do_jump (op1, if_false_label, if_true_label, op1_prob);
> -          }
> +	  }
> +	if (if_true_label == NULL)
> +	  {
> +	    drop_through_label = gen_label_rtx ();
> +	    do_jump (op0, NULL, drop_through_label, op0_prob);
> +	    do_jump (op1, if_false_label, NULL, op1_prob);
> +	  }
> +	else
> +	  {
> +	    do_jump (op0, NULL, if_true_label, op0_prob);
> +	    do_jump (op1, if_false_label, if_true_label, op1_prob);
> +	  }
>          break;
>        }
>  
> @@ -443,14 +446,15 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
>     PROB is probability of jump to if_true_label, or -1 if unknown.  */
>  
>  void
> -do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
> +do_jump (tree exp, rtx_code_label *if_false_label,
> +	 rtx_code_label *if_true_label, int prob)
>  {
>    enum tree_code code = TREE_CODE (exp);
>    rtx temp;
>    int i;
>    tree type;
>    machine_mode mode;
> -  rtx_code_label *drop_through_label = 0;
> +  rtx_code_label *drop_through_label = NULL;
>  
>    switch (code)
>      {
> @@ -458,10 +462,13 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
>        break;
>  
>      case INTEGER_CST:
> -      temp = integer_zerop (exp) ? if_false_label : if_true_label;
> -      if (temp)
> -        emit_jump (temp);
> -      break;
> +      {
> +	rtx_code_label *lab = integer_zerop (exp) ? if_false_label
> +						  : if_true_label;
> +	if (lab)
> +	  emit_jump (lab);
> +	break;
> +      }
>  
>  #if 0
>        /* This is not true with #pragma weak  */
> @@ -511,7 +518,7 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
>  	  }
>  
>          do_pending_stack_adjust ();
> -	do_jump (TREE_OPERAND (exp, 0), label1, NULL_RTX, -1);
> +	do_jump (TREE_OPERAND (exp, 0), label1, NULL, -1);
>  	do_jump (TREE_OPERAND (exp, 1), if_false_label, if_true_label, prob);
>          emit_label (label1);
>  	do_jump (TREE_OPERAND (exp, 2), if_false_label, if_true_label, prob);
> @@ -555,7 +562,7 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
>        if (integer_onep (TREE_OPERAND (exp, 1)))
>  	{
>  	  tree exp0 = TREE_OPERAND (exp, 0);
> -	  rtx set_label, clr_label;
> +	  rtx_code_label *set_label, *clr_label;
>  	  int setclr_prob = prob;
>  
>  	  /* Strip narrowing integral type conversions.  */
> @@ -684,11 +691,12 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
>  
>  static void
>  do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
> -			      rtx op1, rtx if_false_label, rtx if_true_label,
> +			      rtx op1, rtx_code_label *if_false_label,
> +			      rtx_code_label *if_true_label,
>  			      int prob)
>  {
>    int nwords = (GET_MODE_SIZE (mode) / UNITS_PER_WORD);
> -  rtx drop_through_label = 0;
> +  rtx_code_label *drop_through_label = 0;
>    bool drop_through_if_true = false, drop_through_if_false = false;
>    enum rtx_code code = GT;
>    int i;
> @@ -735,7 +743,7 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
>  
>        /* All but high-order word must be compared as unsigned.  */
>        do_compare_rtx_and_jump (op0_word, op1_word, code, (unsignedp || i > 0),
> -			       word_mode, NULL_RTX, NULL_RTX, if_true_label,
> +			       word_mode, NULL_RTX, NULL, if_true_label,
>  			       prob);
>  
>        /* Emit only one comparison for 0.  Do not emit the last cond jump.  */
> @@ -744,7 +752,7 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
>  
>        /* Consider lower words only if these are equal.  */
>        do_compare_rtx_and_jump (op0_word, op1_word, NE, unsignedp, word_mode,
> -			       NULL_RTX, NULL_RTX, if_false_label, inv (prob));
> +			       NULL_RTX, NULL, if_false_label, inv (prob));
>      }
>  
>    if (!drop_through_if_false)
> @@ -760,7 +768,8 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
>  
>  static void
>  do_jump_by_parts_greater (tree treeop0, tree treeop1, int swap,
> -			  rtx if_false_label, rtx if_true_label, int prob)
> +			  rtx_code_label *if_false_label,
> +			  rtx_code_label *if_true_label, int prob)
>  {
>    rtx op0 = expand_normal (swap ? treeop1 : treeop0);
>    rtx op1 = expand_normal (swap ? treeop0 : treeop1);
> @@ -773,17 +782,18 @@ do_jump_by_parts_greater (tree treeop0, tree treeop1, int swap,
>  \f
>  /* Jump according to whether OP0 is 0.  We assume that OP0 has an integer
>     mode, MODE, that is too wide for the available compare insns.  Either
> -   Either (but not both) of IF_TRUE_LABEL and IF_FALSE_LABEL may be NULL_RTX
> +   Either (but not both) of IF_TRUE_LABEL and IF_FALSE_LABEL may be NULL
>     to indicate drop through.  */
>  
>  static void
>  do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
> -			   rtx if_false_label, rtx if_true_label, int prob)
> +			   rtx_code_label *if_false_label,
> +			   rtx_code_label *if_true_label, int prob)
>  {
>    int nwords = GET_MODE_SIZE (mode) / UNITS_PER_WORD;
>    rtx part;
>    int i;
> -  rtx drop_through_label = 0;
> +  rtx_code_label *drop_through_label = NULL;
>  
>    /* The fastest way of doing this comparison on almost any machine is to
>       "or" all the words and compare the result.  If all have to be loaded
> @@ -806,12 +816,12 @@ do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
>  
>    /* If we couldn't do the "or" simply, do this with a series of compares.  */
>    if (! if_false_label)
> -    drop_through_label = if_false_label = gen_label_rtx ();
> +    if_false_label = drop_through_label = gen_label_rtx ();
>  
>    for (i = 0; i < nwords; i++)
>      do_compare_rtx_and_jump (operand_subword_force (op0, i, mode),
>                               const0_rtx, EQ, 1, word_mode, NULL_RTX,
> -			     if_false_label, NULL_RTX, prob);
> +			     if_false_label, NULL, prob);
>  
>    if (if_true_label)
>      emit_jump (if_true_label);
> @@ -827,10 +837,11 @@ do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
>  
>  static void
>  do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
> -			       rtx if_false_label, rtx if_true_label, int prob)
> +			       rtx_code_label *if_false_label,
> +			       rtx_code_label *if_true_label, int prob)
>  {
>    int nwords = (GET_MODE_SIZE (mode) / UNITS_PER_WORD);
> -  rtx drop_through_label = 0;
> +  rtx_code_label *drop_through_label = NULL;
>    int i;
>  
>    if (op1 == const0_rtx)
> @@ -853,7 +864,7 @@ do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
>      do_compare_rtx_and_jump (operand_subword_force (op0, i, mode),
>                               operand_subword_force (op1, i, mode),
>                               EQ, 0, word_mode, NULL_RTX,
> -			     if_false_label, NULL_RTX, prob);
> +			     if_false_label, NULL, prob);
>  
>    if (if_true_label)
>      emit_jump (if_true_label);
> @@ -865,8 +876,9 @@ do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
>     with one insn, test the comparison and jump to the appropriate label.  */
>  
>  static void
> -do_jump_by_parts_equality (tree treeop0, tree treeop1, rtx if_false_label,
> -			   rtx if_true_label, int prob)
> +do_jump_by_parts_equality (tree treeop0, tree treeop1,
> +			   rtx_code_label *if_false_label,
> +			   rtx_code_label *if_true_label, int prob)
>  {
>    rtx op0 = expand_normal (treeop0);
>    rtx op1 = expand_normal (treeop1);
> @@ -961,11 +973,12 @@ split_comparison (enum rtx_code code, machine_mode mode,
>  
>  void
>  do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
> -			 machine_mode mode, rtx size, rtx if_false_label,
> -			 rtx if_true_label, int prob)
> +			 machine_mode mode, rtx size,
> +			 rtx_code_label *if_false_label,
> +			 rtx_code_label *if_true_label, int prob)
>  {
>    rtx tem;
> -  rtx dummy_label = NULL;
> +  rtx_code_label *dummy_label = NULL;
>  
>    /* Reverse the comparison if that is safe and we want to jump if it is
>       false.  Also convert to the reverse comparison if the target can
> @@ -987,9 +1000,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
>        if (can_compare_p (rcode, mode, ccp_jump)
>  	  || (code == ORDERED && ! can_compare_p (ORDERED, mode, ccp_jump)))
>  	{
> -          tem = if_true_label;
> -          if_true_label = if_false_label;
> -          if_false_label = tem;
> +	  std::swap (if_true_label, if_false_label);
>  	  code = rcode;
>  	  prob = inv (prob);
>  	}
> @@ -1000,9 +1011,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
>  
>    if (swap_commutative_operands_p (op0, op1))
>      {
> -      tem = op0;
> -      op0 = op1;
> -      op1 = tem;
> +      std::swap (op0, op1);
>        code = swap_condition (code);
>      }
>  
> @@ -1014,8 +1023,9 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
>      {
>        if (CONSTANT_P (tem))
>  	{
> -	  rtx label = (tem == const0_rtx || tem == CONST0_RTX (mode))
> -		      ? if_false_label : if_true_label;
> +	  rtx_code_label *label = (tem == const0_rtx
> +				   || tem == CONST0_RTX (mode)) ?
> +				       if_false_label : if_true_label;
>  	  if (label)
>  	    emit_jump (label);
>  	  return;
> @@ -1134,7 +1144,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
>  		first_prob = REG_BR_PROB_BASE - REG_BR_PROB_BASE / 100;
>  	      if (and_them)
>  		{
> -		  rtx dest_label;
> +		  rtx_code_label *dest_label;
>  		  /* If we only jump if true, just bypass the second jump.  */
>  		  if (! if_false_label)
>  		    {
> @@ -1145,13 +1155,11 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
>  		  else
>  		    dest_label = if_false_label;
>                    do_compare_rtx_and_jump (op0, op1, first_code, unsignedp, mode,
> -					   size, dest_label, NULL_RTX,
> -					   first_prob);
> +					   size, dest_label, NULL, first_prob);
>  		}
>                else
>                  do_compare_rtx_and_jump (op0, op1, first_code, unsignedp, mode,
> -					 size, NULL_RTX, if_true_label,
> -					 first_prob);
> +					 size, NULL, if_true_label, first_prob);
>  	    }
>  	}
>  
> @@ -1177,8 +1185,9 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
>  
>  static void
>  do_compare_and_jump (tree treeop0, tree treeop1, enum rtx_code signed_code,
> -		     enum rtx_code unsigned_code, rtx if_false_label,
> -		     rtx if_true_label, int prob)
> +		     enum rtx_code unsigned_code,
> +		     rtx_code_label *if_false_label,
> +		     rtx_code_label *if_true_label, int prob)
>  {
>    rtx op0, op1;
>    tree type;
> diff --git a/gcc/dojump.h b/gcc/dojump.h
> index 74d3f37..1b64ea7 100644
> --- a/gcc/dojump.h
> +++ b/gcc/dojump.h
> @@ -57,20 +57,23 @@ extern void save_pending_stack_adjust (saved_pending_stack_adjust *);
>  extern void restore_pending_stack_adjust (saved_pending_stack_adjust *);
>  
>  /* Generate code to evaluate EXP and jump to LABEL if the value is zero.  */
> -extern void jumpifnot (tree, rtx, int);
> -extern void jumpifnot_1 (enum tree_code, tree, tree, rtx, int);
> +extern void jumpifnot (tree exp, rtx_code_label *label, int prob);
> +extern void jumpifnot_1 (enum tree_code, tree, tree, rtx_code_label *, int);
>  
>  /* Generate code to evaluate EXP and jump to LABEL if the value is nonzero.  */
> -extern void jumpif (tree, rtx, int);
> -extern void jumpif_1 (enum tree_code, tree, tree, rtx, int);
> +extern void jumpif (tree exp, rtx_code_label *label, int prob);
> +extern void jumpif_1 (enum tree_code, tree, tree, rtx_code_label *, int);
>  
>  /* Generate code to evaluate EXP and jump to IF_FALSE_LABEL if
>     the result is zero, or IF_TRUE_LABEL if the result is one.  */
> -extern void do_jump (tree, rtx, rtx, int);
> -extern void do_jump_1 (enum tree_code, tree, tree, rtx, rtx, int);
> +extern void do_jump (tree exp, rtx_code_label *if_false_label,
> +		     rtx_code_label *if_true_label, int prob);
> +extern void do_jump_1 (enum tree_code, tree, tree, rtx_code_label *,
> +		       rtx_code_label *, int);
>  
>  extern void do_compare_rtx_and_jump (rtx, rtx, enum rtx_code, int,
> -				     machine_mode, rtx, rtx, rtx, int);
> +				     machine_mode, rtx, rtx_code_label *,
> +				     rtx_code_label *, int);
>  
>  extern bool split_comparison (enum rtx_code, machine_mode,
>  			      enum rtx_code *, enum rtx_code *);
> diff --git a/gcc/dse.c b/gcc/dse.c
> index 2bb20d7..e923ea6 100644
> --- a/gcc/dse.c
> +++ b/gcc/dse.c
> @@ -907,7 +907,7 @@ emit_inc_dec_insn_before (rtx mem ATTRIBUTE_UNUSED,
>        end_sequence ();
>      }
>    else
> -    new_insn = as_a <rtx_insn *> (gen_move_insn (dest, src));
> +    new_insn = gen_move_insn (dest, src);
>    info.first = new_insn;
>    info.fixed_regs_live = insn_info->fixed_regs_live;
>    info.failure = false;
> diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
> index 483eacb..8b12b10 100644
> --- a/gcc/emit-rtl.c
> +++ b/gcc/emit-rtl.c
> @@ -4463,13 +4463,15 @@ emit_barrier_before (rtx before)
>  
>  /* Emit the label LABEL before the insn BEFORE.  */
>  
> -rtx_insn *
> -emit_label_before (rtx label, rtx_insn *before)
> +rtx_code_label *
> +emit_label_before (rtx uncast_label, rtx_insn *before)
>  {
> +  rtx_code_label *label = as_a <rtx_code_label *> (uncast_label);
> +
>    gcc_checking_assert (INSN_UID (label) == 0);
>    INSN_UID (label) = cur_insn_uid++;
>    add_insn_before (label, before, NULL);
> -  return as_a <rtx_insn *> (label);
> +  return label;
>  }
>  \f
>  /* Helper for emit_insn_after, handles lists of instructions
> @@ -5090,13 +5092,15 @@ emit_call_insn (rtx x)
>  
>  /* Add the label LABEL to the end of the doubly-linked list.  */
>  
> -rtx_insn *
> -emit_label (rtx label)
> +rtx_code_label *
> +emit_label (rtx uncast_label)
>  {
> +  rtx_code_label *label = as_a <rtx_code_label *> (uncast_label);
> +
>    gcc_checking_assert (INSN_UID (label) == 0);
>    INSN_UID (label) = cur_insn_uid++;
> -  add_insn (as_a <rtx_insn *> (label));
> -  return as_a <rtx_insn *> (label);
> +  add_insn (label);
> +  return label;
>  }
>  
>  /* Make an insn of code JUMP_TABLE_DATA
> @@ -5357,7 +5361,7 @@ emit (rtx x)
>    switch (code)
>      {
>      case CODE_LABEL:
> -      return emit_label (x);
> +      return emit_label (as_a <rtx_code_label *> (x));
>      case INSN:
>        return emit_insn (x);
>      case  JUMP_INSN:
> diff --git a/gcc/except.c b/gcc/except.c
> index 833ec21..90ffbd1 100644
> --- a/gcc/except.c
> +++ b/gcc/except.c
> @@ -1354,7 +1354,7 @@ sjlj_emit_dispatch_table (rtx_code_label *dispatch_label, int num_dispatch)
>      if (lp && lp->post_landing_pad)
>        {
>  	rtx_insn *seq2;
> -	rtx label;
> +	rtx_code_label *label;
>  
>  	start_sequence ();
>  
> @@ -1368,7 +1368,7 @@ sjlj_emit_dispatch_table (rtx_code_label *dispatch_label, int num_dispatch)
>  	    t = build_int_cst (integer_type_node, disp_index);
>  	    case_elt = build_case_label (t, NULL, t_label);
>  	    dispatch_labels.quick_push (case_elt);
> -	    label = label_rtx (t_label);
> +	    label = live_label_rtx (t_label);
>  	  }
>  	else
>  	  label = gen_label_rtx ();
> diff --git a/gcc/explow.c b/gcc/explow.c
> index de446a9..57cb767 100644
> --- a/gcc/explow.c
> +++ b/gcc/explow.c
> @@ -984,7 +984,7 @@ emit_stack_save (enum save_level save_level, rtx *psave)
>  {
>    rtx sa = *psave;
>    /* The default is that we use a move insn and save in a Pmode object.  */
> -  rtx (*fcn) (rtx, rtx) = gen_move_insn;
> +  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
>    machine_mode mode = STACK_SAVEAREA_MODE (save_level);
>  
>    /* See if this machine has anything special to do for this kind of save.  */
> @@ -1039,7 +1039,7 @@ void
>  emit_stack_restore (enum save_level save_level, rtx sa)
>  {
>    /* The default is that we use a move insn.  */
> -  rtx (*fcn) (rtx, rtx) = gen_move_insn;
> +  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
>  
>    /* If stack_realign_drap, the x86 backend emits a prologue that aligns both
>       STACK_POINTER and HARD_FRAME_POINTER.
> diff --git a/gcc/expmed.c b/gcc/expmed.c
> index e0b2619..ccfb25a 100644
> --- a/gcc/expmed.c
> +++ b/gcc/expmed.c
> @@ -5799,8 +5799,8 @@ emit_store_flag_force (rtx target, enum rtx_code code, rtx op0, rtx op1,
>        && op1 == const0_rtx)
>      {
>        label = gen_label_rtx ();
> -      do_compare_rtx_and_jump (target, const0_rtx, EQ, unsignedp,
> -			       mode, NULL_RTX, NULL_RTX, label, -1);
> +      do_compare_rtx_and_jump (target, const0_rtx, EQ, unsignedp, mode,
> +			       NULL_RTX, NULL, label, -1);
>        emit_move_insn (target, trueval);
>        emit_label (label);
>        return target;
> @@ -5837,8 +5837,8 @@ emit_store_flag_force (rtx target, enum rtx_code code, rtx op0, rtx op1,
>  
>    emit_move_insn (target, trueval);
>    label = gen_label_rtx ();
> -  do_compare_rtx_and_jump (op0, op1, code, unsignedp, mode, NULL_RTX,
> -			   NULL_RTX, label, -1);
> +  do_compare_rtx_and_jump (op0, op1, code, unsignedp, mode, NULL_RTX, NULL,
> +			   label, -1);
>  
>    emit_move_insn (target, falseval);
>    emit_label (label);
> @@ -5855,6 +5855,6 @@ do_cmp_and_jump (rtx arg1, rtx arg2, enum rtx_code op, machine_mode mode,
>  		 rtx_code_label *label)
>  {
>    int unsignedp = (op == LTU || op == LEU || op == GTU || op == GEU);
> -  do_compare_rtx_and_jump (arg1, arg2, op, unsignedp, mode,
> -			   NULL_RTX, NULL_RTX, label, -1);
> +  do_compare_rtx_and_jump (arg1, arg2, op, unsignedp, mode, NULL_RTX,
> +			   NULL, label, -1);
>  }
> diff --git a/gcc/expr.c b/gcc/expr.c
> index dc13a14..a7066be 100644
> --- a/gcc/expr.c
> +++ b/gcc/expr.c
> @@ -3652,7 +3652,7 @@ emit_move_insn (rtx x, rtx y)
>  /* Generate the body of an instruction to copy Y into X.
>     It may be a list of insns, if one insn isn't enough.  */
>  
> -rtx
> +rtx_insn *
>  gen_move_insn (rtx x, rtx y)
>  {
>    rtx_insn *seq;
> @@ -8122,6 +8122,7 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
>  		    enum expand_modifier modifier)
>  {
>    rtx op0, op1, op2, temp;
> +  rtx_code_label *lab;
>    tree type;
>    int unsignedp;
>    machine_mode mode;
> @@ -8864,11 +8865,7 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
>  
>        /* If op1 was placed in target, swap op0 and op1.  */
>        if (target != op0 && target == op1)
> -	{
> -	  temp = op0;
> -	  op0 = op1;
> -	  op1 = temp;
> -	}
> +	std::swap (op0, op1);
>  
>        /* We generate better code and avoid problems with op1 mentioning
>  	 target by forcing op1 into a pseudo if it isn't a constant.  */
> @@ -8935,13 +8932,13 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
>  	if (target != op0)
>  	  emit_move_insn (target, op0);
>  
> -	temp = gen_label_rtx ();
> +	lab = gen_label_rtx ();
>  	do_compare_rtx_and_jump (target, cmpop1, comparison_code,
> -				 unsignedp, mode, NULL_RTX, NULL_RTX, temp,
> +				 unsignedp, mode, NULL_RTX, NULL, lab,
>  				 -1);
>        }
>        emit_move_insn (target, op1);
> -      emit_label (temp);
> +      emit_label (lab);
>        return target;
>  
>      case BIT_NOT_EXPR:
> @@ -9019,38 +9016,39 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
>      case UNGE_EXPR:
>      case UNEQ_EXPR:
>      case LTGT_EXPR:
> -      temp = do_store_flag (ops,
> -			    modifier != EXPAND_STACK_PARM ? target : NULL_RTX,
> -			    tmode != VOIDmode ? tmode : mode);
> -      if (temp)
> -	return temp;
> -
> -      /* Use a compare and a jump for BLKmode comparisons, or for function
> -	 type comparisons is HAVE_canonicalize_funcptr_for_compare.  */
> -
> -      if ((target == 0
> -	   || modifier == EXPAND_STACK_PARM
> -	   || ! safe_from_p (target, treeop0, 1)
> -	   || ! safe_from_p (target, treeop1, 1)
> -	   /* Make sure we don't have a hard reg (such as function's return
> -	      value) live across basic blocks, if not optimizing.  */
> -	   || (!optimize && REG_P (target)
> -	       && REGNO (target) < FIRST_PSEUDO_REGISTER)))
> -	target = gen_reg_rtx (tmode != VOIDmode ? tmode : mode);
> +      {
> +	temp = do_store_flag (ops,
> +			      modifier != EXPAND_STACK_PARM ? target : NULL_RTX,
> +			      tmode != VOIDmode ? tmode : mode);
> +	if (temp)
> +	  return temp;
>  
> -      emit_move_insn (target, const0_rtx);
> +	/* Use a compare and a jump for BLKmode comparisons, or for function
> +	   type comparisons is HAVE_canonicalize_funcptr_for_compare.  */
> +
> +	if ((target == 0
> +	     || modifier == EXPAND_STACK_PARM
> +	     || ! safe_from_p (target, treeop0, 1)
> +	     || ! safe_from_p (target, treeop1, 1)
> +	     /* Make sure we don't have a hard reg (such as function's return
> +		value) live across basic blocks, if not optimizing.  */
> +	     || (!optimize && REG_P (target)
> +		 && REGNO (target) < FIRST_PSEUDO_REGISTER)))
> +	  target = gen_reg_rtx (tmode != VOIDmode ? tmode : mode);
>  
> -      op1 = gen_label_rtx ();
> -      jumpifnot_1 (code, treeop0, treeop1, op1, -1);
> +	emit_move_insn (target, const0_rtx);
>  
> -      if (TYPE_PRECISION (type) == 1 && !TYPE_UNSIGNED (type))
> -	emit_move_insn (target, constm1_rtx);
> -      else
> -	emit_move_insn (target, const1_rtx);
> +	rtx_code_label *lab1 = gen_label_rtx ();
> +	jumpifnot_1 (code, treeop0, treeop1, lab1, -1);
>  
> -      emit_label (op1);
> -      return target;
> +	if (TYPE_PRECISION (type) == 1 && !TYPE_UNSIGNED (type))
> +	  emit_move_insn (target, constm1_rtx);
> +	else
> +	  emit_move_insn (target, const1_rtx);
>  
> +	emit_label (lab1);
> +	return target;
> +      }
>      case COMPLEX_EXPR:
>        /* Get the rtx code of the operands.  */
>        op0 = expand_normal (treeop0);
> @@ -9273,58 +9271,60 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
>        }
>  
>      case COND_EXPR:
> -      /* A COND_EXPR with its type being VOID_TYPE represents a
> -	 conditional jump and is handled in
> -	 expand_gimple_cond_expr.  */
> -      gcc_assert (!VOID_TYPE_P (type));
> -
> -      /* Note that COND_EXPRs whose type is a structure or union
> -	 are required to be constructed to contain assignments of
> -	 a temporary variable, so that we can evaluate them here
> -	 for side effect only.  If type is void, we must do likewise.  */
> -
> -      gcc_assert (!TREE_ADDRESSABLE (type)
> -		  && !ignore
> -		  && TREE_TYPE (treeop1) != void_type_node
> -		  && TREE_TYPE (treeop2) != void_type_node);
> -
> -      temp = expand_cond_expr_using_cmove (treeop0, treeop1, treeop2);
> -      if (temp)
> -	return temp;
> -
> -      /* If we are not to produce a result, we have no target.  Otherwise,
> -	 if a target was specified use it; it will not be used as an
> -	 intermediate target unless it is safe.  If no target, use a
> -	 temporary.  */
> -
> -      if (modifier != EXPAND_STACK_PARM
> -	  && original_target
> -	  && safe_from_p (original_target, treeop0, 1)
> -	  && GET_MODE (original_target) == mode
> -	  && !MEM_P (original_target))
> -	temp = original_target;
> -      else
> -	temp = assign_temp (type, 0, 1);
> -
> -      do_pending_stack_adjust ();
> -      NO_DEFER_POP;
> -      op0 = gen_label_rtx ();
> -      op1 = gen_label_rtx ();
> -      jumpifnot (treeop0, op0, -1);
> -      store_expr (treeop1, temp,
> -		  modifier == EXPAND_STACK_PARM,
> -		  false);
> -
> -      emit_jump_insn (gen_jump (op1));
> -      emit_barrier ();
> -      emit_label (op0);
> -      store_expr (treeop2, temp,
> -		  modifier == EXPAND_STACK_PARM,
> -		  false);
> +      {
> +	/* A COND_EXPR with its type being VOID_TYPE represents a
> +	   conditional jump and is handled in
> +	   expand_gimple_cond_expr.  */
> +	gcc_assert (!VOID_TYPE_P (type));
> +
> +	/* Note that COND_EXPRs whose type is a structure or union
> +	   are required to be constructed to contain assignments of
> +	   a temporary variable, so that we can evaluate them here
> +	   for side effect only.  If type is void, we must do likewise.  */
> +
> +	gcc_assert (!TREE_ADDRESSABLE (type)
> +		    && !ignore
> +		    && TREE_TYPE (treeop1) != void_type_node
> +		    && TREE_TYPE (treeop2) != void_type_node);
> +
> +	temp = expand_cond_expr_using_cmove (treeop0, treeop1, treeop2);
> +	if (temp)
> +	  return temp;
>  
> -      emit_label (op1);
> -      OK_DEFER_POP;
> -      return temp;
> +	/* If we are not to produce a result, we have no target.  Otherwise,
> +	   if a target was specified use it; it will not be used as an
> +	   intermediate target unless it is safe.  If no target, use a
> +	   temporary.  */
> +
> +	if (modifier != EXPAND_STACK_PARM
> +	    && original_target
> +	    && safe_from_p (original_target, treeop0, 1)
> +	    && GET_MODE (original_target) == mode
> +	    && !MEM_P (original_target))
> +	  temp = original_target;
> +	else
> +	  temp = assign_temp (type, 0, 1);
> +
> +	do_pending_stack_adjust ();
> +	NO_DEFER_POP;
> +	rtx_code_label *lab0 = gen_label_rtx ();
> +	rtx_code_label *lab1 = gen_label_rtx ();
> +	jumpifnot (treeop0, lab0, -1);
> +	store_expr (treeop1, temp,
> +		    modifier == EXPAND_STACK_PARM,
> +		    false);
> +
> +	emit_jump_insn (gen_jump (lab1));
> +	emit_barrier ();
> +	emit_label (lab0);
> +	store_expr (treeop2, temp,
> +		    modifier == EXPAND_STACK_PARM,
> +		    false);
> +
> +	emit_label (lab1);
> +	OK_DEFER_POP;
> +	return temp;
> +      }
>  
>      case VEC_COND_EXPR:
>        target = expand_vec_cond_expr (type, treeop0, treeop1, treeop2, target);
> diff --git a/gcc/expr.h b/gcc/expr.h
> index 867852e..6c4afc4 100644
> --- a/gcc/expr.h
> +++ b/gcc/expr.h
> @@ -203,7 +203,7 @@ extern rtx store_by_pieces (rtx, unsigned HOST_WIDE_INT,
>  
>  /* Emit insns to set X from Y.  */
>  extern rtx_insn *emit_move_insn (rtx, rtx);
> -extern rtx gen_move_insn (rtx, rtx);
> +extern rtx_insn *gen_move_insn (rtx, rtx);
>  
>  /* Emit insns to set X from Y, with no frills.  */
>  extern rtx_insn *emit_move_insn_1 (rtx, rtx);
> diff --git a/gcc/function.c b/gcc/function.c
> index 2c3d142..97ecf3a 100644
> --- a/gcc/function.c
> +++ b/gcc/function.c
> @@ -5760,7 +5760,7 @@ convert_jumps_to_returns (basic_block last_bb, bool simple_p,
>  	    dest = simple_return_rtx;
>  	  else
>  	    dest = ret_rtx;
> -	  if (!redirect_jump (jump, dest, 0))
> +	  if (!redirect_jump (as_a <rtx_jump_insn *> (jump), dest, 0))
>  	    {
>  #ifdef HAVE_simple_return
>  	      if (simple_p)
> diff --git a/gcc/gcse.c b/gcc/gcse.c
> index 37aac6a..20e79e0 100644
> --- a/gcc/gcse.c
> +++ b/gcc/gcse.c
> @@ -2227,7 +2227,8 @@ pre_insert_copy_insn (struct gcse_expr *expr, rtx_insn *insn)
>    int regno = REGNO (reg);
>    int indx = expr->bitmap_index;
>    rtx pat = PATTERN (insn);
> -  rtx set, first_set, new_insn;
> +  rtx set, first_set;
> +  rtx_insn *new_insn;
>    rtx old_reg;
>    int i;
>  
> diff --git a/gcc/ifcvt.c b/gcc/ifcvt.c
> index a3e3e5c..bf79122 100644
> --- a/gcc/ifcvt.c
> +++ b/gcc/ifcvt.c
> @@ -4444,9 +4444,10 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
>        else
>  	new_dest_label = block_label (new_dest);
>  
> +      rtx_jump_insn *jump_insn = as_a <rtx_jump_insn *> (jump);
>        if (reversep
> -	  ? ! invert_jump_1 (jump, new_dest_label)
> -	  : ! redirect_jump_1 (jump, new_dest_label))
> +	  ? ! invert_jump_1 (jump_insn, new_dest_label)
> +	  : ! redirect_jump_1 (jump_insn, new_dest_label))
>  	goto cancel;
>      }
>  
> @@ -4457,7 +4458,8 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
>  
>    if (other_bb != new_dest)
>      {
> -      redirect_jump_2 (jump, old_dest, new_dest_label, 0, reversep);
> +      redirect_jump_2 (as_a <rtx_jump_insn *> (jump), old_dest, new_dest_label,
> +                       0, reversep);
>  
>        redirect_edge_succ (BRANCH_EDGE (test_bb), new_dest);
>        if (reversep)
> diff --git a/gcc/internal-fn.c b/gcc/internal-fn.c
> index e402825..af9baff 100644
> --- a/gcc/internal-fn.c
> +++ b/gcc/internal-fn.c
> @@ -422,7 +422,7 @@ expand_arith_overflow_result_store (tree lhs, rtx target,
>        lres = convert_modes (tgtmode, mode, res, uns);
>        gcc_assert (GET_MODE_PRECISION (tgtmode) < GET_MODE_PRECISION (mode));
>        do_compare_rtx_and_jump (res, convert_modes (mode, tgtmode, lres, uns),
> -			       EQ, true, mode, NULL_RTX, NULL_RTX, done_label,
> +			       EQ, true, mode, NULL_RTX, NULL, done_label,
>  			       PROB_VERY_LIKELY);
>        write_complex_part (target, const1_rtx, true);
>        emit_label (done_label);
> @@ -569,7 +569,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
>  	      : CONST_SCALAR_INT_P (op1)))
>  	tem = op1;
>        do_compare_rtx_and_jump (res, tem, code == PLUS_EXPR ? GEU : LEU,
> -			       true, mode, NULL_RTX, NULL_RTX, done_label,
> +			       true, mode, NULL_RTX, NULL, done_label,
>  			       PROB_VERY_LIKELY);
>        goto do_error_label;
>      }
> @@ -584,7 +584,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
>        rtx tem = expand_binop (mode, add_optab,
>  			      code == PLUS_EXPR ? res : op0, sgn,
>  			      NULL_RTX, false, OPTAB_LIB_WIDEN);
> -      do_compare_rtx_and_jump (tem, op1, GEU, true, mode, NULL_RTX, NULL_RTX,
> +      do_compare_rtx_and_jump (tem, op1, GEU, true, mode, NULL_RTX, NULL,
>  			       done_label, PROB_VERY_LIKELY);
>        goto do_error_label;
>      }
> @@ -627,8 +627,8 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
>        else if (pos_neg == 3)
>  	/* If ARG0 is not known to be always positive, check at runtime.  */
>  	do_compare_rtx_and_jump (op0, const0_rtx, LT, false, mode, NULL_RTX,
> -				 NULL_RTX, do_error, PROB_VERY_UNLIKELY);
> -      do_compare_rtx_and_jump (op1, op0, LEU, true, mode, NULL_RTX, NULL_RTX,
> +				 NULL, do_error, PROB_VERY_UNLIKELY);
> +      do_compare_rtx_and_jump (op1, op0, LEU, true, mode, NULL_RTX, NULL,
>  			       done_label, PROB_VERY_LIKELY);
>        goto do_error_label;
>      }
> @@ -642,7 +642,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
>  			  OPTAB_LIB_WIDEN);
>        rtx tem = expand_binop (mode, add_optab, op1, sgn, NULL_RTX, false,
>  			      OPTAB_LIB_WIDEN);
> -      do_compare_rtx_and_jump (op0, tem, LTU, true, mode, NULL_RTX, NULL_RTX,
> +      do_compare_rtx_and_jump (op0, tem, LTU, true, mode, NULL_RTX, NULL,
>  			       done_label, PROB_VERY_LIKELY);
>        goto do_error_label;
>      }
> @@ -655,7 +655,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
>        res = expand_binop (mode, add_optab, op0, op1, NULL_RTX, false,
>  			  OPTAB_LIB_WIDEN);
>        do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode, NULL_RTX,
> -			       NULL_RTX, do_error, PROB_VERY_UNLIKELY);
> +			       NULL, do_error, PROB_VERY_UNLIKELY);
>        rtx tem = op1;
>        /* The operation is commutative, so we can pick operand to compare
>  	 against.  For prec <= BITS_PER_WORD, I think preferring REG operand
> @@ -668,7 +668,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
>  	  ? (CONST_SCALAR_INT_P (op1) && REG_P (op0))
>  	  : CONST_SCALAR_INT_P (op0))
>  	tem = op0;
> -      do_compare_rtx_and_jump (res, tem, GEU, true, mode, NULL_RTX, NULL_RTX,
> +      do_compare_rtx_and_jump (res, tem, GEU, true, mode, NULL_RTX, NULL,
>  			       done_label, PROB_VERY_LIKELY);
>        goto do_error_label;
>      }
> @@ -698,26 +698,26 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
>  	  tem = expand_binop (mode, ((pos_neg == 1) ^ (code == MINUS_EXPR))
>  				    ? and_optab : ior_optab,
>  			      op0, res, NULL_RTX, false, OPTAB_LIB_WIDEN);
> -	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
> -				   NULL_RTX, done_label, PROB_VERY_LIKELY);
> +	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL,
> +				   NULL, done_label, PROB_VERY_LIKELY);
>  	}
>        else
>  	{
>  	  rtx_code_label *do_ior_label = gen_label_rtx ();
>  	  do_compare_rtx_and_jump (op1, const0_rtx,
>  				   code == MINUS_EXPR ? GE : LT, false, mode,
> -				   NULL_RTX, NULL_RTX, do_ior_label,
> +				   NULL_RTX, NULL, do_ior_label,
>  				   PROB_EVEN);
>  	  tem = expand_binop (mode, and_optab, op0, res, NULL_RTX, false,
>  			      OPTAB_LIB_WIDEN);
>  	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
> -				   NULL_RTX, done_label, PROB_VERY_LIKELY);
> +				   NULL, done_label, PROB_VERY_LIKELY);
>  	  emit_jump (do_error);
>  	  emit_label (do_ior_label);
>  	  tem = expand_binop (mode, ior_optab, op0, res, NULL_RTX, false,
>  			      OPTAB_LIB_WIDEN);
>  	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
> -				   NULL_RTX, done_label, PROB_VERY_LIKELY);
> +				   NULL, done_label, PROB_VERY_LIKELY);
>  	}
>        goto do_error_label;
>      }
> @@ -730,14 +730,14 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
>        res = expand_binop (mode, sub_optab, op0, op1, NULL_RTX, false,
>  			  OPTAB_LIB_WIDEN);
>        rtx_code_label *op0_geu_op1 = gen_label_rtx ();
> -      do_compare_rtx_and_jump (op0, op1, GEU, true, mode, NULL_RTX, NULL_RTX,
> +      do_compare_rtx_and_jump (op0, op1, GEU, true, mode, NULL_RTX, NULL,
>  			       op0_geu_op1, PROB_EVEN);
>        do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode, NULL_RTX,
> -			       NULL_RTX, done_label, PROB_VERY_LIKELY);
> +			       NULL, done_label, PROB_VERY_LIKELY);
>        emit_jump (do_error);
>        emit_label (op0_geu_op1);
>        do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode, NULL_RTX,
> -			       NULL_RTX, done_label, PROB_VERY_LIKELY);
> +			       NULL, done_label, PROB_VERY_LIKELY);
>        goto do_error_label;
>      }
>  
> @@ -816,12 +816,12 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
>        /* If the op1 is negative, we have to use a different check.  */
>        if (pos_neg == 3)
>  	do_compare_rtx_and_jump (op1, const0_rtx, LT, false, mode, NULL_RTX,
> -				 NULL_RTX, sub_check, PROB_EVEN);
> +				 NULL, sub_check, PROB_EVEN);
>  
>        /* Compare the result of the operation with one of the operands.  */
>        if (pos_neg & 1)
>  	do_compare_rtx_and_jump (res, op0, code == PLUS_EXPR ? GE : LE,
> -				 false, mode, NULL_RTX, NULL_RTX, done_label,
> +				 false, mode, NULL_RTX, NULL, done_label,
>  				 PROB_VERY_LIKELY);
>  
>        /* If we get here, we have to print the error.  */
> @@ -835,7 +835,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
>        /* We have k = a + b for b < 0 here.  k <= a must hold.  */
>        if (pos_neg & 2)
>  	do_compare_rtx_and_jump (res, op0, code == PLUS_EXPR ? LE : GE,
> -				 false, mode, NULL_RTX, NULL_RTX, done_label,
> +				 false, mode, NULL_RTX, NULL, done_label,
>  				 PROB_VERY_LIKELY);
>      }
>  
> @@ -931,7 +931,7 @@ expand_neg_overflow (location_t loc, tree lhs, tree arg1, bool is_ubsan)
>  
>        /* Compare the operand with the most negative value.  */
>        rtx minv = expand_normal (TYPE_MIN_VALUE (TREE_TYPE (arg1)));
> -      do_compare_rtx_and_jump (op1, minv, NE, true, mode, NULL_RTX, NULL_RTX,
> +      do_compare_rtx_and_jump (op1, minv, NE, true, mode, NULL_RTX, NULL,
>  			       done_label, PROB_VERY_LIKELY);
>      }
>  
> @@ -1068,15 +1068,15 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  	  ops.location = loc;
>  	  res = expand_expr_real_2 (&ops, NULL_RTX, mode, EXPAND_NORMAL);
>  	  do_compare_rtx_and_jump (op1, const0_rtx, EQ, true, mode, NULL_RTX,
> -				   NULL_RTX, done_label, PROB_VERY_LIKELY);
> +				   NULL, done_label, PROB_VERY_LIKELY);
>  	  goto do_error_label;
>  	case 3:
>  	  rtx_code_label *do_main_label;
>  	  do_main_label = gen_label_rtx ();
>  	  do_compare_rtx_and_jump (op0, const0_rtx, GE, false, mode, NULL_RTX,
> -				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
> +				   NULL, do_main_label, PROB_VERY_LIKELY);
>  	  do_compare_rtx_and_jump (op1, const0_rtx, EQ, true, mode, NULL_RTX,
> -				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
> +				   NULL, do_main_label, PROB_VERY_LIKELY);
>  	  write_complex_part (target, const1_rtx, true);
>  	  emit_label (do_main_label);
>  	  goto do_main;
> @@ -1113,15 +1113,15 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  	  ops.location = loc;
>  	  res = expand_expr_real_2 (&ops, NULL_RTX, mode, EXPAND_NORMAL);
>  	  do_compare_rtx_and_jump (op0, const0_rtx, EQ, true, mode, NULL_RTX,
> -				   NULL_RTX, done_label, PROB_VERY_LIKELY);
> +				   NULL, done_label, PROB_VERY_LIKELY);
>  	  do_compare_rtx_and_jump (op0, constm1_rtx, NE, true, mode, NULL_RTX,
> -				   NULL_RTX, do_error, PROB_VERY_UNLIKELY);
> +				   NULL, do_error, PROB_VERY_UNLIKELY);
>  	  int prec;
>  	  prec = GET_MODE_PRECISION (mode);
>  	  rtx sgn;
>  	  sgn = immed_wide_int_const (wi::min_value (prec, SIGNED), mode);
>  	  do_compare_rtx_and_jump (op1, sgn, EQ, true, mode, NULL_RTX,
> -				   NULL_RTX, done_label, PROB_VERY_LIKELY);
> +				   NULL, done_label, PROB_VERY_LIKELY);
>  	  goto do_error_label;
>  	case 3:
>  	  /* Rest of handling of this case after res is computed.  */
> @@ -1167,7 +1167,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  	      tem = expand_binop (mode, and_optab, op0, op1, NULL_RTX, false,
>  				  OPTAB_LIB_WIDEN);
>  	      do_compare_rtx_and_jump (tem, const0_rtx, EQ, true, mode,
> -				       NULL_RTX, NULL_RTX, done_label,
> +				       NULL_RTX, NULL, done_label,
>  				       PROB_VERY_LIKELY);
>  	      goto do_error_label;
>  	    }
> @@ -1185,8 +1185,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  	  tem = expand_binop (mode, and_optab, op0, op1, NULL_RTX, false,
>  			      OPTAB_LIB_WIDEN);
>  	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
> -				   NULL_RTX, after_negate_label,
> -				   PROB_VERY_LIKELY);
> +				   NULL, after_negate_label, PROB_VERY_LIKELY);
>  	  /* Both arguments negative here, negate them and continue with
>  	     normal unsigned overflow checking multiplication.  */
>  	  emit_move_insn (op0, expand_unop (mode, neg_optab, op0,
> @@ -1202,13 +1201,13 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  	  tem2 = expand_binop (mode, xor_optab, op0, op1, NULL_RTX, false,
>  			       OPTAB_LIB_WIDEN);
>  	  do_compare_rtx_and_jump (tem2, const0_rtx, GE, false, mode, NULL_RTX,
> -				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
> +				   NULL, do_main_label, PROB_VERY_LIKELY);
>  	  /* One argument is negative here, the other positive.  This
>  	     overflows always, unless one of the arguments is 0.  But
>  	     if e.g. s2 is 0, (U) s1 * 0 doesn't overflow, whatever s1
>  	     is, thus we can keep do_main code oring in overflow as is.  */
>  	  do_compare_rtx_and_jump (tem, const0_rtx, EQ, true, mode, NULL_RTX,
> -				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
> +				   NULL, do_main_label, PROB_VERY_LIKELY);
>  	  write_complex_part (target, const1_rtx, true);
>  	  emit_label (do_main_label);
>  	  goto do_main;
> @@ -1274,7 +1273,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  	    /* For the unsigned multiplication, there was overflow if
>  	       HIPART is non-zero.  */
>  	    do_compare_rtx_and_jump (hipart, const0_rtx, EQ, true, mode,
> -				     NULL_RTX, NULL_RTX, done_label,
> +				     NULL_RTX, NULL, done_label,
>  				     PROB_VERY_LIKELY);
>  	  else
>  	    {
> @@ -1284,7 +1283,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  		 the high half.  There was overflow if
>  		 HIPART is different from RES < 0 ? -1 : 0.  */
>  	      do_compare_rtx_and_jump (signbit, hipart, EQ, true, mode,
> -				       NULL_RTX, NULL_RTX, done_label,
> +				       NULL_RTX, NULL, done_label,
>  				       PROB_VERY_LIKELY);
>  	    }
>  	}
> @@ -1377,12 +1376,12 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  
>  	  if (!op0_small_p)
>  	    do_compare_rtx_and_jump (signbit0, hipart0, NE, true, hmode,
> -				     NULL_RTX, NULL_RTX, large_op0,
> +				     NULL_RTX, NULL, large_op0,
>  				     PROB_UNLIKELY);
>  
>  	  if (!op1_small_p)
>  	    do_compare_rtx_and_jump (signbit1, hipart1, NE, true, hmode,
> -				     NULL_RTX, NULL_RTX, small_op0_large_op1,
> +				     NULL_RTX, NULL, small_op0_large_op1,
>  				     PROB_UNLIKELY);
>  
>  	  /* If both op0 and op1 are sign (!uns) or zero (uns) extended from
> @@ -1428,7 +1427,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  
>  	  if (!op1_small_p)
>  	    do_compare_rtx_and_jump (signbit1, hipart1, NE, true, hmode,
> -				     NULL_RTX, NULL_RTX, both_ops_large,
> +				     NULL_RTX, NULL, both_ops_large,
>  				     PROB_UNLIKELY);
>  
>  	  /* If op1 is sign (!uns) or zero (uns) extended from hmode to mode,
> @@ -1465,7 +1464,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  		emit_jump (after_hipart_neg);
>  	      else if (larger_sign != -1)
>  		do_compare_rtx_and_jump (hipart, const0_rtx, GE, false, hmode,
> -					 NULL_RTX, NULL_RTX, after_hipart_neg,
> +					 NULL_RTX, NULL, after_hipart_neg,
>  					 PROB_EVEN);
>  
>  	      tem = convert_modes (mode, hmode, lopart, 1);
> @@ -1481,7 +1480,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  		emit_jump (after_lopart_neg);
>  	      else if (smaller_sign != -1)
>  		do_compare_rtx_and_jump (lopart, const0_rtx, GE, false, hmode,
> -					 NULL_RTX, NULL_RTX, after_lopart_neg,
> +					 NULL_RTX, NULL, after_lopart_neg,
>  					 PROB_EVEN);
>  
>  	      tem = expand_simple_binop (mode, MINUS, loxhi, larger, NULL_RTX,
> @@ -1510,7 +1509,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  					 hprec - 1, NULL_RTX, 0);
>  
>  	  do_compare_rtx_and_jump (signbitloxhi, hipartloxhi, NE, true, hmode,
> -				   NULL_RTX, NULL_RTX, do_overflow,
> +				   NULL_RTX, NULL, do_overflow,
>  				   PROB_VERY_UNLIKELY);
>  
>  	  /* res = (loxhi << (bitsize / 2)) | (hmode) lo0xlo1;  */
> @@ -1546,7 +1545,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  		  tem = expand_simple_binop (hmode, PLUS, hipart0, const1_rtx,
>  					     NULL_RTX, 1, OPTAB_DIRECT);
>  		  do_compare_rtx_and_jump (tem, const1_rtx, GTU, true, hmode,
> -					   NULL_RTX, NULL_RTX, do_error,
> +					   NULL_RTX, NULL, do_error,
>  					   PROB_VERY_UNLIKELY);
>  		}
>  
> @@ -1555,7 +1554,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  		  tem = expand_simple_binop (hmode, PLUS, hipart1, const1_rtx,
>  					     NULL_RTX, 1, OPTAB_DIRECT);
>  		  do_compare_rtx_and_jump (tem, const1_rtx, GTU, true, hmode,
> -					   NULL_RTX, NULL_RTX, do_error,
> +					   NULL_RTX, NULL, do_error,
>  					   PROB_VERY_UNLIKELY);
>  		}
>  
> @@ -1566,18 +1565,18 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>  		emit_jump (hipart_different);
>  	      else if (op0_sign == 1 || op1_sign == 1)
>  		do_compare_rtx_and_jump (hipart0, hipart1, NE, true, hmode,
> -					 NULL_RTX, NULL_RTX, hipart_different,
> +					 NULL_RTX, NULL, hipart_different,
>  					 PROB_EVEN);
>  
>  	      do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode,
> -				       NULL_RTX, NULL_RTX, do_error,
> +				       NULL_RTX, NULL, do_error,
>  				       PROB_VERY_UNLIKELY);
>  	      emit_jump (done_label);
>  
>  	      emit_label (hipart_different);
>  
>  	      do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode,
> -				       NULL_RTX, NULL_RTX, do_error,
> +				       NULL_RTX, NULL, do_error,
>  				       PROB_VERY_UNLIKELY);
>  	      emit_jump (done_label);
>  	    }
> @@ -1623,7 +1622,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>      {
>        rtx_code_label *all_done_label = gen_label_rtx ();
>        do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode, NULL_RTX,
> -			       NULL_RTX, all_done_label, PROB_VERY_LIKELY);
> +			       NULL, all_done_label, PROB_VERY_LIKELY);
>        write_complex_part (target, const1_rtx, true);
>        emit_label (all_done_label);
>      }
> @@ -1634,13 +1633,13 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
>        rtx_code_label *all_done_label = gen_label_rtx ();
>        rtx_code_label *set_noovf = gen_label_rtx ();
>        do_compare_rtx_and_jump (op1, const0_rtx, GE, false, mode, NULL_RTX,
> -			       NULL_RTX, all_done_label, PROB_VERY_LIKELY);
> +			       NULL, all_done_label, PROB_VERY_LIKELY);
>        write_complex_part (target, const1_rtx, true);
>        do_compare_rtx_and_jump (op0, const0_rtx, EQ, true, mode, NULL_RTX,
> -			       NULL_RTX, set_noovf, PROB_VERY_LIKELY);
> +			       NULL, set_noovf, PROB_VERY_LIKELY);
>        do_compare_rtx_and_jump (op0, constm1_rtx, NE, true, mode, NULL_RTX,
> -			       NULL_RTX, all_done_label, PROB_VERY_UNLIKELY);
> -      do_compare_rtx_and_jump (op1, res, NE, true, mode, NULL_RTX, NULL_RTX,
> +			       NULL, all_done_label, PROB_VERY_UNLIKELY);
> +      do_compare_rtx_and_jump (op1, res, NE, true, mode, NULL_RTX, NULL,
>  			       all_done_label, PROB_VERY_UNLIKELY);
>        emit_label (set_noovf);
>        write_complex_part (target, const0_rtx, true);
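
All the internal-fn.c hunks above are mechanical: once the two label
arguments of do_compare_rtx_and_jump become rtx_code_label *, plain
NULL replaces NULL_RTX.  For reference, this is the signature I am
assuming from the call sites (the hunk declaring it is not quoted
here):

  extern void do_compare_rtx_and_jump (rtx, rtx, enum rtx_code, int,
                                       machine_mode, rtx,
                                       rtx_code_label *, rtx_code_label *,
                                       int);

i.e. the two operands, comparison code, unsignedness, mode, size, the
two target labels and the branch probability.
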
> diff --git a/gcc/ira.c b/gcc/ira.c
> index ea2b69f..bdf81e6 100644
> --- a/gcc/ira.c
> +++ b/gcc/ira.c
> @@ -4994,7 +4994,7 @@ split_live_ranges_for_shrink_wrap (void)
>  
>        if (newreg)
>  	{
> -	  rtx new_move = gen_move_insn (newreg, dest);
> +	  rtx_insn *new_move = gen_move_insn (newreg, dest);
>  	  emit_insn_after (new_move, bb_note (call_dom));
>  	  if (dump_file)
>  	    {
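
This hunk, and the matching ones in modulo-sched.c, postreload-gcse.c
and lra-constraints.c below, depend on gen_move_insn now returning
rtx_insn *, so the as_a <rtx_insn *> casts around it can go away.
I am assuming the full patch also updates the declaration, roughly:

  extern rtx_insn *gen_move_insn (rtx, rtx);
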
> diff --git a/gcc/is-a.h b/gcc/is-a.h
> index 58917eb..4fb9dde 100644
> --- a/gcc/is-a.h
> +++ b/gcc/is-a.h
> @@ -46,6 +46,11 @@ TYPE as_a <TYPE> (pointer)
>  
>        do_something_with (as_a <cgraph_node *> *ptr);
>  
> +TYPE assert_as_a <TYPE> (pointer)
> +
> +    Like as_a <TYPE> (pointer), but uses an assertion that remains
> +    enabled even in non-checking (release) builds.
> +
>  TYPE safe_as_a <TYPE> (pointer)
>  
>      Like as_a <TYPE> (pointer), but where pointer could be NULL.  This
> @@ -193,6 +198,17 @@ as_a (U *p)
>    return is_a_helper <T>::cast (p);
>  }
>  
> +/* Same as above, but checks the condition even in release builds.  */
> +
> +template <typename T, typename U>
> +inline T
> +assert_as_a (U *p)
> +{
> +  gcc_assert (is_a <T> (p));
> +  return is_a_helper <T>::cast (p);
> +}
> +
> +
>  /* Similar to as_a<>, but where the pointer can be NULL, even if
>     is_a_helper<T> doesn't check for NULL.  */
>  
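
To make the new helper concrete, a minimal sketch (only as_a and
assert_as_a are from the patch; the cast target is just an example):

  rtx_insn *insn = get_last_insn ();
  /* Verified by gcc_checking_assert, i.e. in checking builds only:  */
  rtx_jump_insn *j1 = as_a <rtx_jump_insn *> (insn);
  /* Verified by gcc_assert, i.e. in release builds as well:  */
  rtx_jump_insn *j2 = assert_as_a <rtx_jump_insn *> (insn);
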
> diff --git a/gcc/jump.c b/gcc/jump.c
> index 34b3b7b..0cc0be5 100644
> --- a/gcc/jump.c
> +++ b/gcc/jump.c
> @@ -1583,7 +1583,7 @@ redirect_jump_1 (rtx jump, rtx nlabel)
>     (this can only occur when trying to produce return insns).  */
>  
>  int
> -redirect_jump (rtx jump, rtx nlabel, int delete_unused)
> +redirect_jump (rtx_jump_insn *jump, rtx nlabel, int delete_unused)
>  {
>    rtx olabel = JUMP_LABEL (jump);
>  
> @@ -1615,7 +1615,7 @@ redirect_jump (rtx jump, rtx nlabel, int delete_unused)
>     If DELETE_UNUSED is positive, delete related insn to OLABEL if its ref
>     count has dropped to zero.  */
>  void
> -redirect_jump_2 (rtx jump, rtx olabel, rtx nlabel, int delete_unused,
> +redirect_jump_2 (rtx_jump_insn *jump, rtx olabel, rtx nlabel, int delete_unused,
>  		 int invert)
>  {
>    rtx note;
> @@ -1703,7 +1703,7 @@ invert_exp_1 (rtx x, rtx insn)
>     inversion and redirection.  */
>  
>  int
> -invert_jump_1 (rtx_insn *jump, rtx nlabel)
> +invert_jump_1 (rtx_jump_insn *jump, rtx nlabel)
>  {
>    rtx x = pc_set (jump);
>    int ochanges;
> @@ -1727,7 +1727,7 @@ invert_jump_1 (rtx_insn *jump, rtx nlabel)
>     NLABEL instead of where it jumps now.  Return true if successful.  */
>  
>  int
> -invert_jump (rtx_insn *jump, rtx nlabel, int delete_unused)
> +invert_jump (rtx_jump_insn *jump, rtx nlabel, int delete_unused)
>  {
>    rtx olabel = JUMP_LABEL (jump);
>  
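
With invert_jump and redirect_jump taking rtx_jump_insn *, a caller
now has to establish jump-ness in the type system first, e.g. (a
sketch; the surrounding names are hypothetical):

  if (rtx_jump_insn *jump = dyn_cast <rtx_jump_insn *> (insn))
    invert_jump (jump, nlabel, 0);
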
> diff --git a/gcc/loop-unroll.c b/gcc/loop-unroll.c
> index 2befb61..2f3ff35 100644
> --- a/gcc/loop-unroll.c
> +++ b/gcc/loop-unroll.c
> @@ -794,10 +794,11 @@ split_edge_and_insert (edge e, rtx_insn *insns)
>     in order to create a jump.  */
>  
>  static rtx_insn *
> -compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
> -		      rtx_insn *cinsn)
> +compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp,
> +		      rtx_code_label *label, int prob, rtx_insn *cinsn)
>  {
> -  rtx_insn *seq, *jump;
> +  rtx_insn *seq;
> +  rtx_jump_insn *jump;
>    rtx cond;
>    machine_mode mode;
>  
> @@ -816,8 +817,7 @@ compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
>        gcc_assert (rtx_equal_p (op0, XEXP (cond, 0)));
>        gcc_assert (rtx_equal_p (op1, XEXP (cond, 1)));
>        emit_jump_insn (copy_insn (PATTERN (cinsn)));
> -      jump = get_last_insn ();
> -      gcc_assert (JUMP_P (jump));
> +      jump = assert_as_a <rtx_jump_insn *> (get_last_insn ());
>        JUMP_LABEL (jump) = JUMP_LABEL (cinsn);
>        LABEL_NUSES (JUMP_LABEL (jump))++;
>        redirect_jump (jump, label, 0);
> @@ -829,9 +829,8 @@ compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
>        op0 = force_operand (op0, NULL_RTX);
>        op1 = force_operand (op1, NULL_RTX);
>        do_compare_rtx_and_jump (op0, op1, comp, 0,
> -			       mode, NULL_RTX, NULL_RTX, label, -1);
> -      jump = get_last_insn ();
> -      gcc_assert (JUMP_P (jump));
> +			       mode, NULL_RTX, NULL, label, -1);
> +      jump = assert_as_a <rtx_jump_insn *> (get_last_insn ());
>        JUMP_LABEL (jump) = label;
>        LABEL_NUSES (label)++;
>      }
> diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
> index 57d731a..db4765f 100644
> --- a/gcc/lra-constraints.c
> +++ b/gcc/lra-constraints.c
> @@ -1060,9 +1060,8 @@ emit_spill_move (bool to_p, rtx mem_pseudo, rtx val)
>  	  LRA_SUBREG_P (mem_pseudo) = 1;
>  	}
>      }
> -  return as_a <rtx_insn *> (to_p
> -			    ? gen_move_insn (mem_pseudo, val)
> -			    : gen_move_insn (val, mem_pseudo));
> +  return to_p ? gen_move_insn (mem_pseudo, val)
> +	      : gen_move_insn (val, mem_pseudo);
>  }
>  
>  /* Process a special case insn (register move), return true if we
> @@ -4501,6 +4500,107 @@ static int calls_num;
>     USAGE_INSNS.	 */
>  static int curr_usage_insns_check;
>  
> +namespace
> +{
> +
> +class rtx_usage_list GTY(()) : public rtx_def
> +{
> +public:
> +  /* This class represents an element in a singly-linked list, which:
> +     1. Ends with a non-debug INSN.
> +     2. May contain several INSN_LIST nodes with DEBUG_INSNs attached to them.
> +
> +     I.e.:   INSN_LIST--> INSN_LIST-->..--> INSN
> +               |            |
> +             DEBUG_INSN   DEBUG_INSN
> +
> +   See struct usage_insns for a description of how it is used.  */
> +
> +  /* Get next node of the list.  */
> +  rtx_usage_list *next () const;
> +
> +  /* Get the current instruction.  */
> +  rtx_insn *insn ();
> +
> +  /* Check whether the current node refers to a DEBUG_INSN.  */
> +  bool debug_p () const;
> +
> +  /* Add a DEBUG_INSN to the front of the chain.  */
> +  rtx_usage_list *push_front (rtx_debug_insn *debug_insn);
> +};
> +
> +/* If the current node is an INSN, return it.  Otherwise it is an
> +   INSN_LIST node; in that case return the DEBUG_INSN attached to it.  */
> +
> +rtx_insn *
> +rtx_usage_list::insn ()
> +{
> +  if (rtx_insn *as_insn = dyn_cast <rtx_insn *> (this))
> +    return as_insn;
> +  return safe_as_a <rtx_debug_insn *> (XEXP (this, 0));
> +}
> +
> +/* Get next node.  */
> +
> +rtx_usage_list *
> +rtx_usage_list::next () const
> +{
> +  return reinterpret_cast <rtx_usage_list *> (XEXP (this, 1));
> +}
> +
> +/* Check whether the current node refers to a DEBUG_INSN.  */
> +
> +bool
> +rtx_usage_list::debug_p () const
> +{
> +  return is_a <const rtx_insn_list *> (this);
> +}
> +
> +/* Add a DEBUG_INSN to the front of the chain.  */
> +
> +rtx_usage_list *
> +rtx_usage_list::push_front (rtx_debug_insn *debug_insn)
> +{
> +  /* ??? Maybe it would be better to store DEBUG_INSNs in a separate
> +     homogeneous list (or vec) and use another pointer for actual INSN?
> +     Then we won't have to traverse the list and some checks will also
> +     become simpler.  */
> +  return reinterpret_cast <rtx_usage_list *>
> +                (gen_rtx_INSN_LIST (VOIDmode,
> +                                    debug_insn, this));
> +}
> +
> +} // anon namespace
> +
> +/* Helpers for as-a casts.  */
> +
> +template <>
> +template <>
> +inline bool
> +is_a_helper <rtx_insn_list *>::test (rtx_usage_list *list)
> +{
> +  return list->code == INSN_LIST;
> +}
> +
> +template <>
> +template <>
> +inline bool
> +is_a_helper <const rtx_insn_list *>::test (const rtx_usage_list *list)
> +{
> +  return list->code == INSN_LIST;
> +}
> +
> +/* rtx_usage_list is either an INSN_LIST node or an INSN (no other
> +   options).  Therefore, this check is valid.  */
> +
> +template <>
> +template <>
> +inline bool
> +is_a_helper <rtx_insn *>::test (rtx_usage_list *list)
> +{
> +  return list->code != INSN_LIST;
> +}
> +
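
If I read the invariant right, walking a non-empty usage list now
looks like this (a sketch only; visit () is a stand-in):

  rtx_usage_list *node = usage_insns[regno].insns;
  while (node->debug_p ())      /* INSN_LIST nodes carrying DEBUG_INSNs */
    {
      visit (node->insn ());
      node = node->next ();
    }
  visit (node->insn ());        /* the terminal non-debug INSN */
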
>  /* Info about last usage of registers in EBB to do inheritance/split
>     transformation.  Inheritance transformation is done from a spilled
>     pseudo and split transformations from a hard register or a pseudo
> @@ -4526,17 +4626,17 @@ struct usage_insns
>       to use the original reg value again in the next insns we can try
>       to use the value in a hard register from a reload insn of the
>       current insn.  */
> -  rtx insns;
> +  rtx_usage_list *insns;
>  };
>  
>  /* Map: regno -> corresponding pseudo usage insns.  */
>  static struct usage_insns *usage_insns;
>  
>  static void
> -setup_next_usage_insn (int regno, rtx insn, int reloads_num, bool after_p)
> +setup_next_usage_insn (int regno, rtx_insn *insn, int reloads_num, bool after_p)
>  {
>    usage_insns[regno].check = curr_usage_insns_check;
> -  usage_insns[regno].insns = insn;
> +  usage_insns[regno].insns = reinterpret_cast <rtx_usage_list *> (insn);
>    usage_insns[regno].reloads_num = reloads_num;
>    usage_insns[regno].calls_num = calls_num;
>    usage_insns[regno].after_p = after_p;
> @@ -4546,20 +4646,19 @@ setup_next_usage_insn (int regno, rtx insn, int reloads_num, bool after_p)
>     optional debug insns finished by a non-debug insn using REGNO.
>     RELOADS_NUM is current number of reload insns processed so far.  */
>  static void
> -add_next_usage_insn (int regno, rtx insn, int reloads_num)
> +add_next_usage_insn (int regno, rtx_insn *insn, int reloads_num)
>  {
> -  rtx next_usage_insns;
> +  rtx_usage_list *next_usage_insns;
> +  rtx_debug_insn *debug_insn;
>  
>    if (usage_insns[regno].check == curr_usage_insns_check
> -      && (next_usage_insns = usage_insns[regno].insns) != NULL_RTX
> -      && DEBUG_INSN_P (insn))
> +      && (next_usage_insns = usage_insns[regno].insns) != NULL
> +      && (debug_insn = dyn_cast <rtx_debug_insn *> (insn)) != NULL)
>      {
>        /* Check that we did not add the debug insn yet.	*/
> -      if (next_usage_insns != insn
> -	  && (GET_CODE (next_usage_insns) != INSN_LIST
> -	      || XEXP (next_usage_insns, 0) != insn))
> -	usage_insns[regno].insns = gen_rtx_INSN_LIST (VOIDmode, insn,
> -						      next_usage_insns);
> +      if (next_usage_insns->insn () != debug_insn)
> +	usage_insns[regno].insns =
> +                usage_insns[regno].insns->push_front (debug_insn);
>      }
>    else if (NONDEBUG_INSN_P (insn))
>      setup_next_usage_insn (regno, insn, reloads_num, false);
> @@ -4569,16 +4668,13 @@ add_next_usage_insn (int regno, rtx insn, int reloads_num)
>  
>  /* Return first non-debug insn in list USAGE_INSNS.  */
>  static rtx_insn *
> -skip_usage_debug_insns (rtx usage_insns)
> +skip_usage_debug_insns (rtx_usage_list *usage_insns)
>  {
> -  rtx insn;
> -
>    /* Skip debug insns.  */
> -  for (insn = usage_insns;
> -       insn != NULL_RTX && GET_CODE (insn) == INSN_LIST;
> -       insn = XEXP (insn, 1))
> +  for (; usage_insns != NULL && usage_insns->debug_p ();
> +       usage_insns = usage_insns->next ())
>      ;
> -  return safe_as_a <rtx_insn *> (insn);
> +  return safe_as_a <rtx_insn *> (usage_insns);
>  }
>  
>  /* Return true if we need secondary memory moves for insn in
> @@ -4586,7 +4682,7 @@ skip_usage_debug_insns (rtx usage_insns)
>     into the insn.  */
>  static bool
>  check_secondary_memory_needed_p (enum reg_class inher_cl ATTRIBUTE_UNUSED,
> -				 rtx usage_insns ATTRIBUTE_UNUSED)
> +				 rtx_usage_list *usage_insns ATTRIBUTE_UNUSED)
>  {
>  #ifndef SECONDARY_MEMORY_NEEDED
>    return false;
> @@ -4639,15 +4735,16 @@ static bitmap_head check_only_regs;
>     class of ORIGINAL REGNO.  */
>  static bool
>  inherit_reload_reg (bool def_p, int original_regno,
> -		    enum reg_class cl, rtx_insn *insn, rtx next_usage_insns)
> +		    enum reg_class cl, rtx_insn *insn,
> +                    rtx_usage_list *next_usage_insns)
>  {
>    if (optimize_function_for_size_p (cfun))
>      return false;
>  
>    enum reg_class rclass = lra_get_allocno_class (original_regno);
>    rtx original_reg = regno_reg_rtx[original_regno];
> -  rtx new_reg, usage_insn;
> -  rtx_insn *new_insns;
> +  rtx new_reg;
> +  rtx_insn *usage_insn, *new_insns;
>  
>    lra_assert (! usage_insns[original_regno].after_p);
>    if (lra_dump_file != NULL)
> @@ -4746,22 +4843,21 @@ inherit_reload_reg (bool def_p, int original_regno,
>    else
>      lra_process_new_insns (insn, new_insns, NULL,
>  			   "Add inheritance<-original");
> -  while (next_usage_insns != NULL_RTX)
> +  while (next_usage_insns != NULL)
>      {
> -      if (GET_CODE (next_usage_insns) != INSN_LIST)
> +      if (! next_usage_insns->debug_p ())
>  	{
> -	  usage_insn = next_usage_insns;
> -	  lra_assert (NONDEBUG_INSN_P (usage_insn));
> +	  usage_insn = assert_as_a <rtx_insn *> (next_usage_insns);
> +	  lra_assert (! is_a <rtx_debug_insn *> (usage_insn));
>  	  next_usage_insns = NULL;
>  	}
>        else
>  	{
> -	  usage_insn = XEXP (next_usage_insns, 0);
> -	  lra_assert (DEBUG_INSN_P (usage_insn));
> -	  next_usage_insns = XEXP (next_usage_insns, 1);
> +	  usage_insn = next_usage_insns->insn ();
> +	  next_usage_insns = next_usage_insns->next ();
>  	}
> -      lra_substitute_pseudo (&usage_insn, original_regno, new_reg);
> -      lra_update_insn_regno_info (as_a <rtx_insn *> (usage_insn));
> +      lra_substitute_pseudo_within_insn (usage_insn, original_regno, new_reg);
> +      lra_update_insn_regno_info (usage_insn);
>        if (lra_dump_file != NULL)
>  	{
>  	  fprintf (lra_dump_file,
> @@ -4913,13 +5009,13 @@ choose_split_class (enum reg_class allocno_class,
>     transformation.  */
>  static bool
>  split_reg (bool before_p, int original_regno, rtx_insn *insn,
> -	   rtx next_usage_insns)
> +	   rtx_usage_list *next_usage_insns)
>  {
>    enum reg_class rclass;
>    rtx original_reg;
>    int hard_regno, nregs;
> -  rtx new_reg, usage_insn;
> -  rtx_insn *restore, *save;
> +  rtx new_reg;
> +  rtx_insn *restore, *save, *usage_insn;
>    bool after_p;
>    bool call_save_p;
>  
> @@ -5016,14 +5112,13 @@ split_reg (bool before_p, int original_regno, rtx_insn *insn,
>      {
>        if (GET_CODE (next_usage_insns) != INSN_LIST)
>  	{
> -	  usage_insn = next_usage_insns;
> +	  usage_insn = as_a <rtx_insn *> (next_usage_insns);
>  	  break;
>  	}
> -      usage_insn = XEXP (next_usage_insns, 0);
> -      lra_assert (DEBUG_INSN_P (usage_insn));
> -      next_usage_insns = XEXP (next_usage_insns, 1);
> -      lra_substitute_pseudo (&usage_insn, original_regno, new_reg);
> -      lra_update_insn_regno_info (as_a <rtx_insn *> (usage_insn));
> +      usage_insn = next_usage_insns->insn ();
> +      next_usage_insns = next_usage_insns->next ();
> +      lra_substitute_pseudo_within_insn (usage_insn, original_regno, new_reg);
> +      lra_update_insn_regno_info (usage_insn);
>        if (lra_dump_file != NULL)
>  	{
>  	  fprintf (lra_dump_file, "    Split reuse change %d->%d:\n",
> @@ -5031,9 +5126,9 @@ split_reg (bool before_p, int original_regno, rtx_insn *insn,
>  	  dump_insn_slim (lra_dump_file, usage_insn);
>  	}
>      }
> -  lra_assert (NOTE_P (usage_insn) || NONDEBUG_INSN_P (usage_insn));
> +  lra_assert (! DEBUG_INSN_P (usage_insn));
>    lra_assert (usage_insn != insn || (after_p && before_p));
> -  lra_process_new_insns (as_a <rtx_insn *> (usage_insn),
> +  lra_process_new_insns (usage_insn,
>  			 after_p ? NULL : restore,
>  			 after_p ? restore : NULL,
>  			 call_save_p
> @@ -5069,18 +5164,15 @@ split_if_necessary (int regno, machine_mode mode,
>  {
>    bool res = false;
>    int i, nregs = 1;
> -  rtx next_usage_insns;
> +  rtx_usage_list *next_usage_insns;
>  
>    if (regno < FIRST_PSEUDO_REGISTER)
>      nregs = hard_regno_nregs[regno][mode];
>    for (i = 0; i < nregs; i++)
>      if (usage_insns[regno + i].check == curr_usage_insns_check
> -	&& (next_usage_insns = usage_insns[regno + i].insns) != NULL_RTX
> +	&& (next_usage_insns = usage_insns[regno + i].insns) != NULL
>  	/* To avoid processing the register twice or more.  */
> -	&& ((GET_CODE (next_usage_insns) != INSN_LIST
> -	     && INSN_UID (next_usage_insns) < max_uid)
> -	    || (GET_CODE (next_usage_insns) == INSN_LIST
> -		&& (INSN_UID (XEXP (next_usage_insns, 0)) < max_uid)))
> +	&& (INSN_UID (next_usage_insns->insn ()) < max_uid)
>  	&& need_for_split_p (potential_reload_hard_regs, regno + i)
>  	&& split_reg (before_p, regno + i, insn, next_usage_insns))
>      res = true;
> @@ -5209,7 +5301,7 @@ struct to_inherit
>    /* Original regno.  */
>    int regno;
>    /* Subsequent insns which can inherit original reg value.  */
> -  rtx insns;
> +  rtx_usage_list *insns;
>  };
>  
>  /* Array containing all info for doing inheritance from the current
> @@ -5222,7 +5314,7 @@ static int to_inherit_num;
>  /* Add inheritance info REGNO and INSNS. Their meaning is described in
>     structure to_inherit.  */
>  static void
> -add_to_inherit (int regno, rtx insns)
> +add_to_inherit (int regno, rtx_usage_list *insns)
>  {
>    int i;
>  
> @@ -5301,7 +5393,8 @@ inherit_in_ebb (rtx_insn *head, rtx_insn *tail)
>    int i, src_regno, dst_regno, nregs;
>    bool change_p, succ_p, update_reloads_num_p;
>    rtx_insn *prev_insn, *last_insn;
> -  rtx next_usage_insns, set;
> +  rtx_usage_list *next_usage_insns;
> +  rtx set;
>    enum reg_class cl;
>    struct lra_insn_reg *reg;
>    basic_block last_processed_bb, curr_bb = NULL;
> @@ -5569,7 +5662,7 @@ inherit_in_ebb (rtx_insn *head, rtx_insn *tail)
>  			   || reg_renumber[src_regno] >= 0)
>  		    {
>  		      bool before_p;
> -		      rtx use_insn = curr_insn;
> +		      rtx_insn *use_insn = curr_insn;
>  
>  		      before_p = (JUMP_P (curr_insn)
>  				  || (CALL_P (curr_insn) && reg->type == OP_IN));
> diff --git a/gcc/lra.c b/gcc/lra.c
> index 269a0f1..6d3c73e 100644
> --- a/gcc/lra.c
> +++ b/gcc/lra.c
> @@ -1825,7 +1825,7 @@ lra_substitute_pseudo (rtx *loc, int old_regno, rtx new_reg)
>    const char *fmt;
>    int i, j;
>  
> -  if (x == NULL_RTX)
> +  if (x == NULL)
>      return false;
>  
>    code = GET_CODE (x);
> diff --git a/gcc/modulo-sched.c b/gcc/modulo-sched.c
> index 22cd216..4afe43e 100644
> --- a/gcc/modulo-sched.c
> +++ b/gcc/modulo-sched.c
> @@ -790,8 +790,7 @@ schedule_reg_moves (partial_schedule_ptr ps)
>  	  move->old_reg = old_reg;
>  	  move->new_reg = gen_reg_rtx (GET_MODE (prev_reg));
>  	  move->num_consecutive_stages = distances[0] && distances[1] ? 2 : 1;
> -	  move->insn = as_a <rtx_insn *> (gen_move_insn (move->new_reg,
> -							 copy_rtx (prev_reg)));
> +	  move->insn = gen_move_insn (move->new_reg, copy_rtx (prev_reg));
>  	  bitmap_clear (move->uses);
>  
>  	  prev_reg = move->new_reg;
> diff --git a/gcc/optabs.c b/gcc/optabs.c
> index e9dc798..9a51ba3 100644
> --- a/gcc/optabs.c
> +++ b/gcc/optabs.c
> @@ -1416,7 +1416,7 @@ expand_binop_directly (machine_mode mode, optab binoptab,
>    machine_mode mode0, mode1, tmp_mode;
>    struct expand_operand ops[3];
>    bool commutative_p;
> -  rtx pat;
> +  rtx_insn *pat;
>    rtx xop0 = op0, xop1 = op1;
>    rtx swap;
>  
> @@ -1499,8 +1499,8 @@ expand_binop_directly (machine_mode mode, optab binoptab,
>        /* If PAT is composed of more than one insn, try to add an appropriate
>  	 REG_EQUAL note to it.  If we can't because TEMP conflicts with an
>  	 operand, call expand_binop again, this time without a target.  */
> -      if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
> -	  && ! add_equal_note (as_a <rtx_insn *> (pat), ops[0].value,
> +      if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
> +	  && ! add_equal_note (pat, ops[0].value,
>  			       optab_to_code (binoptab),
>  			       ops[1].value, ops[2].value))
>  	{
> @@ -3016,15 +3016,15 @@ expand_unop_direct (machine_mode mode, optab unoptab, rtx op0, rtx target,
>        struct expand_operand ops[2];
>        enum insn_code icode = optab_handler (unoptab, mode);
>        rtx_insn *last = get_last_insn ();
> -      rtx pat;
> +      rtx_insn *pat;
>  
>        create_output_operand (&ops[0], target, mode);
>        create_convert_operand_from (&ops[1], op0, mode, unsignedp);
>        pat = maybe_gen_insn (icode, 2, ops);
>        if (pat)
>  	{
> -	  if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
> -	      && ! add_equal_note (as_a <rtx_insn *> (pat), ops[0].value,
> +	  if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
> +	      && ! add_equal_note (pat, ops[0].value,
>  				   optab_to_code (unoptab),
>  				   ops[1].value, NULL_RTX))
>  	    {
> @@ -3508,7 +3508,7 @@ expand_abs (machine_mode mode, rtx op0, rtx target,
>    NO_DEFER_POP;
>  
>    do_compare_rtx_and_jump (target, CONST0_RTX (mode), GE, 0, mode,
> -			   NULL_RTX, NULL_RTX, op1, -1);
> +			   NULL_RTX, NULL, op1, -1);
>  
>    op0 = expand_unop (mode, result_unsignedp ? neg_optab : negv_optab,
>                       target, target, 0);
> @@ -3817,7 +3817,7 @@ maybe_emit_unop_insn (enum insn_code icode, rtx target, rtx op0,
>  		      enum rtx_code code)
>  {
>    struct expand_operand ops[2];
> -  rtx pat;
> +  rtx_insn *pat;
>  
>    create_output_operand (&ops[0], target, GET_MODE (target));
>    create_input_operand (&ops[1], op0, GET_MODE (op0));
> @@ -3825,10 +3825,9 @@ maybe_emit_unop_insn (enum insn_code icode, rtx target, rtx op0,
>    if (!pat)
>      return false;
>  
> -  if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
> +  if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
>        && code != UNKNOWN)
> -    add_equal_note (as_a <rtx_insn *> (pat), ops[0].value, code, ops[1].value,
> -		    NULL_RTX);
> +    add_equal_note (pat, ops[0].value, code, ops[1].value, NULL_RTX);
>  
>    emit_insn (pat);
>  
> @@ -8370,13 +8369,13 @@ maybe_legitimize_operands (enum insn_code icode, unsigned int opno,
>     and emit any necessary set-up code.  Return null and emit no
>     code on failure.  */
>  
> -rtx
> +rtx_insn *
>  maybe_gen_insn (enum insn_code icode, unsigned int nops,
>  		struct expand_operand *ops)
>  {
>    gcc_assert (nops == (unsigned int) insn_data[(int) icode].n_generator_args);
>    if (!maybe_legitimize_operands (icode, 0, nops, ops))
> -    return NULL_RTX;
> +    return NULL;
>  
>    switch (nops)
>      {
> diff --git a/gcc/optabs.h b/gcc/optabs.h
> index 152af87..5c30ce5 100644
> --- a/gcc/optabs.h
> +++ b/gcc/optabs.h
> @@ -541,8 +541,8 @@ extern void create_convert_operand_from_type (struct expand_operand *op,
>  extern bool maybe_legitimize_operands (enum insn_code icode,
>  				       unsigned int opno, unsigned int nops,
>  				       struct expand_operand *ops);
> -extern rtx maybe_gen_insn (enum insn_code icode, unsigned int nops,
> -			   struct expand_operand *ops);
> +extern rtx_insn *maybe_gen_insn (enum insn_code icode, unsigned int nops,
> +				 struct expand_operand *ops);
>  extern bool maybe_expand_insn (enum insn_code icode, unsigned int nops,
>  			       struct expand_operand *ops);
>  extern bool maybe_expand_jump_insn (enum insn_code icode, unsigned int nops,
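
Since maybe_gen_insn now returns rtx_insn *, the call sites collapse
nicely; a sketch mirroring the optabs.c hunks above:

  rtx_insn *pat = maybe_gen_insn (icode, 2, ops);
  if (pat)
    emit_insn (pat);    /* no as_a <rtx_insn *> cast needed any more */
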
> diff --git a/gcc/postreload-gcse.c b/gcc/postreload-gcse.c
> index 83048bd..21228ac 100644
> --- a/gcc/postreload-gcse.c
> +++ b/gcc/postreload-gcse.c
> @@ -1115,8 +1115,8 @@ eliminate_partially_redundant_load (basic_block bb, rtx_insn *insn,
>  
>  	  /* Make sure we can generate a move from register avail_reg to
>  	     dest.  */
> -	  rtx_insn *move = as_a <rtx_insn *>
> -	    (gen_move_insn (copy_rtx (dest), copy_rtx (avail_reg)));
> +	  rtx_insn *move = gen_move_insn (copy_rtx (dest),
> +					  copy_rtx (avail_reg));
>  	  extract_insn (move);
>  	  if (! constrain_operands (1, get_preferred_alternatives (insn,
>  								   pred_bb))
> diff --git a/gcc/recog.c b/gcc/recog.c
> index a9d3b1f..8fee5a7 100644
> --- a/gcc/recog.c
> +++ b/gcc/recog.c
> @@ -3068,7 +3068,7 @@ split_all_insns_noflow (void)
>  #ifdef HAVE_peephole2
>  struct peep2_insn_data
>  {
> -  rtx insn;
> +  rtx_insn *insn;
>    regset live_before;
>  };
>  
> @@ -3084,7 +3084,7 @@ int peep2_current_count;
>  /* A non-insn marker indicating the last insn of the block.
>     The live_before regset for this element is correct, indicating
>     DF_LIVE_OUT for the block.  */
> -#define PEEP2_EOB	pc_rtx
> +#define PEEP2_EOB	(static_cast<rtx_insn *> (pc_rtx))
>  
>  /* Wrap N to fit into the peep2_insn_data buffer.  */
>  
> @@ -3287,7 +3287,7 @@ peep2_reinit_state (regset live)
>  
>    /* Indicate that all slots except the last holds invalid data.  */
>    for (i = 0; i < MAX_INSNS_PER_PEEP2; ++i)
> -    peep2_insn_data[i].insn = NULL_RTX;
> +    peep2_insn_data[i].insn = NULL;
>    peep2_current_count = 0;
>  
>    /* Indicate that the last slot contains live_after data.  */
> @@ -3315,7 +3315,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
>  
>    /* If we are splitting an RTX_FRAME_RELATED_P insn, do not allow it to
>       match more than one insn, or to be split into more than one insn.  */
> -  old_insn = as_a <rtx_insn *> (peep2_insn_data[peep2_current].insn);
> +  old_insn = peep2_insn_data[peep2_current].insn;
>    if (RTX_FRAME_RELATED_P (old_insn))
>      {
>        bool any_note = false;
> @@ -3403,7 +3403,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
>        rtx note;
>  
>        j = peep2_buf_position (peep2_current + i);
> -      old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
> +      old_insn = peep2_insn_data[j].insn;
>        if (!CALL_P (old_insn))
>  	continue;
>        was_call = true;
> @@ -3442,7 +3442,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
>        while (++i <= match_len)
>  	{
>  	  j = peep2_buf_position (peep2_current + i);
> -	  old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
> +	  old_insn = peep2_insn_data[j].insn;
>  	  gcc_assert (!CALL_P (old_insn));
>  	}
>        break;
> @@ -3454,7 +3454,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
>    for (i = match_len; i >= 0; --i)
>      {
>        int j = peep2_buf_position (peep2_current + i);
> -      old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
> +      old_insn = peep2_insn_data[j].insn;
>  
>        as_note = find_reg_note (old_insn, REG_ARGS_SIZE, NULL);
>        if (as_note)
> @@ -3465,7 +3465,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
>    eh_note = find_reg_note (peep2_insn_data[i].insn, REG_EH_REGION, NULL_RTX);
>  
>    /* Replace the old sequence with the new.  */
> -  rtx_insn *peepinsn = as_a <rtx_insn *> (peep2_insn_data[i].insn);
> +  rtx_insn *peepinsn = peep2_insn_data[i].insn;
>    last = emit_insn_after_setloc (attempt,
>  				 peep2_insn_data[i].insn,
>  				 INSN_LOCATION (peepinsn));
> @@ -3582,7 +3582,7 @@ peep2_update_life (basic_block bb, int match_len, rtx_insn *last,
>     add more instructions to the buffer.  */
>  
>  static bool
> -peep2_fill_buffer (basic_block bb, rtx insn, regset live)
> +peep2_fill_buffer (basic_block bb, rtx_insn *insn, regset live)
>  {
>    int pos;
>  
> @@ -3608,7 +3608,7 @@ peep2_fill_buffer (basic_block bb, rtx insn, regset live)
>    COPY_REG_SET (peep2_insn_data[pos].live_before, live);
>    peep2_current_count++;
>  
> -  df_simulate_one_insn_forwards (bb, as_a <rtx_insn *> (insn), live);
> +  df_simulate_one_insn_forwards (bb, insn, live);
>    return true;
>  }
>  
> diff --git a/gcc/recog.h b/gcc/recog.h
> index 45ea671..7c95885 100644
> --- a/gcc/recog.h
> +++ b/gcc/recog.h
> @@ -278,43 +278,43 @@ typedef const char * (*insn_output_fn) (rtx *, rtx_insn *);
>  
>  struct insn_gen_fn
>  {
> -  typedef rtx (*f0) (void);
> -  typedef rtx (*f1) (rtx);
> -  typedef rtx (*f2) (rtx, rtx);
> -  typedef rtx (*f3) (rtx, rtx, rtx);
> -  typedef rtx (*f4) (rtx, rtx, rtx, rtx);
> -  typedef rtx (*f5) (rtx, rtx, rtx, rtx, rtx);
> -  typedef rtx (*f6) (rtx, rtx, rtx, rtx, rtx, rtx);
> -  typedef rtx (*f7) (rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> -  typedef rtx (*f8) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> -  typedef rtx (*f9) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> -  typedef rtx (*f10) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> -  typedef rtx (*f11) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> -  typedef rtx (*f12) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> -  typedef rtx (*f13) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> -  typedef rtx (*f14) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> -  typedef rtx (*f15) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> -  typedef rtx (*f16) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f0) (void);
> +  typedef rtx_insn * (*f1) (rtx);
> +  typedef rtx_insn * (*f2) (rtx, rtx);
> +  typedef rtx_insn * (*f3) (rtx, rtx, rtx);
> +  typedef rtx_insn * (*f4) (rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f5) (rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f6) (rtx, rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f7) (rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f8) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f9) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f10) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f11) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f12) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f13) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f14) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f15) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
> +  typedef rtx_insn * (*f16) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
>  
>    typedef f0 stored_funcptr;
>  
> -  rtx operator () (void) const { return ((f0)func) (); }
> -  rtx operator () (rtx a0) const { return ((f1)func) (a0); }
> -  rtx operator () (rtx a0, rtx a1) const { return ((f2)func) (a0, a1); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2) const { return ((f3)func) (a0, a1, a2); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3) const { return ((f4)func) (a0, a1, a2, a3); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4) const { return ((f5)func) (a0, a1, a2, a3, a4); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5) const { return ((f6)func) (a0, a1, a2, a3, a4, a5); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6) const { return ((f7)func) (a0, a1, a2, a3, a4, a5, a6); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7) const { return ((f8)func) (a0, a1, a2, a3, a4, a5, a6, a7); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8) const { return ((f9)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9) const { return ((f10)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10) const { return ((f11)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11) const { return ((f12)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12) const { return ((f13)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13) const { return ((f14)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14) const { return ((f15)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14); }
> -  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14, rtx a15) const { return ((f16)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15); }
> +  rtx_insn * operator () (void) const { return ((f0)func) (); }
> +  rtx_insn * operator () (rtx a0) const { return ((f1)func) (a0); }
> +  rtx_insn * operator () (rtx a0, rtx a1) const { return ((f2)func) (a0, a1); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2) const { return ((f3)func) (a0, a1, a2); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3) const { return ((f4)func) (a0, a1, a2, a3); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4) const { return ((f5)func) (a0, a1, a2, a3, a4); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5) const { return ((f6)func) (a0, a1, a2, a3, a4, a5); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6) const { return ((f7)func) (a0, a1, a2, a3, a4, a5, a6); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7) const { return ((f8)func) (a0, a1, a2, a3, a4, a5, a6, a7); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8) const { return ((f9)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9) const { return ((f10)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10) const { return ((f11)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11) const { return ((f12)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12) const { return ((f13)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13) const { return ((f14)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14) const { return ((f15)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14); }
> +  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14, rtx a15) const { return ((f16)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15); }
>  
>    // This is for compatibility of code that invokes functions like
>    //   (*funcptr) (arg)
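
The insn_gen_fn change is the big enabler here: every gen_* pattern
function is now statically typed as returning rtx_insn *.  Assuming
GEN_FCN still dispatches through insn_gen_fn::operator (), a call like

  rtx_insn *insn = GEN_FCN (icode) (dest, src);
  emit_insn (insn);

(icode, dest and src being placeholders) no longer needs a cast.
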
> diff --git a/gcc/rtl.h b/gcc/rtl.h
> index e5e4560..e88f3c8 100644
> --- a/gcc/rtl.h
> +++ b/gcc/rtl.h
> @@ -636,6 +636,8 @@ class GTY(()) rtx_note : public rtx_insn
>  
>  #define NULL_RTX (rtx) 0
>  
> +#define NULL_INSN (rtx_insn *) 0
> +
>  /* The "next" and "previous" RTX, relative to this one.  */
>  
>  #define RTX_NEXT(X) (rtx_next[GET_CODE (X)] == 0 ? NULL			\
> @@ -827,6 +829,14 @@ is_a_helper <rtx_debug_insn *>::test (rtx rt)
>  template <>
>  template <>
>  inline bool
> +is_a_helper <rtx_debug_insn *>::test (rtx_insn *insn)
> +{
> +  return DEBUG_INSN_P (insn);
> +}
> +
> +template <>
> +template <>
> +inline bool
>  is_a_helper <rtx_nonjump_insn *>::test (rtx rt)
>  {
>    return NONJUMP_INSN_P (rt);
> @@ -843,6 +853,14 @@ is_a_helper <rtx_jump_insn *>::test (rtx rt)
>  template <>
>  template <>
>  inline bool
> +is_a_helper <rtx_jump_insn *>::test (rtx_insn *insn)
> +{
> +  return JUMP_P (insn);
> +}
> +
> +template <>
> +template <>
> +inline bool
>  is_a_helper <rtx_call_insn *>::test (rtx rt)
>  {
>    return CALL_P (rt);
> @@ -2662,7 +2680,7 @@ extern rtx_insn *emit_debug_insn_before (rtx, rtx);
>  extern rtx_insn *emit_debug_insn_before_noloc (rtx, rtx);
>  extern rtx_insn *emit_debug_insn_before_setloc (rtx, rtx, int);
>  extern rtx_barrier *emit_barrier_before (rtx);
> -extern rtx_insn *emit_label_before (rtx, rtx_insn *);
> +extern rtx_code_label *emit_label_before (rtx , rtx_insn *);
>  extern rtx_note *emit_note_before (enum insn_note, rtx);
>  extern rtx_insn *emit_insn_after (rtx, rtx);
>  extern rtx_insn *emit_insn_after_noloc (rtx, rtx, basic_block);
> @@ -2683,7 +2701,7 @@ extern rtx_insn *emit_insn (rtx);
>  extern rtx_insn *emit_debug_insn (rtx);
>  extern rtx_insn *emit_jump_insn (rtx);
>  extern rtx_insn *emit_call_insn (rtx);
> -extern rtx_insn *emit_label (rtx);
> +extern rtx_code_label *emit_label (rtx);
>  extern rtx_jump_table_data *emit_jump_table_data (rtx);
>  extern rtx_barrier *emit_barrier (void);
>  extern rtx_note *emit_note (enum insn_note);
> @@ -3336,14 +3354,14 @@ extern int eh_returnjump_p (rtx_insn *);
>  extern int onlyjump_p (const rtx_insn *);
>  extern int only_sets_cc0_p (const_rtx);
>  extern int sets_cc0_p (const_rtx);
> -extern int invert_jump_1 (rtx_insn *, rtx);
> -extern int invert_jump (rtx_insn *, rtx, int);
> +extern int invert_jump_1 (rtx_jump_insn *, rtx);
> +extern int invert_jump (rtx_jump_insn *, rtx, int);
>  extern int rtx_renumbered_equal_p (const_rtx, const_rtx);
>  extern int true_regnum (const_rtx);
>  extern unsigned int reg_or_subregno (const_rtx);
>  extern int redirect_jump_1 (rtx, rtx);
> -extern void redirect_jump_2 (rtx, rtx, rtx, int, int);
> -extern int redirect_jump (rtx, rtx, int);
> +extern void redirect_jump_2 (rtx_jump_insn *, rtx, rtx, int, int);
> +extern int redirect_jump (rtx_jump_insn *, rtx, int);
>  extern void rebuild_jump_labels (rtx_insn *);
>  extern void rebuild_jump_labels_chain (rtx_insn *);
>  extern rtx reversed_comparison (const_rtx, machine_mode);
> @@ -3426,7 +3444,7 @@ extern void print_inline_rtx (FILE *, const_rtx, int);
>     not be in sched-vis.c but in rtl.c, because they are not only used
>     by the scheduler anymore but for all "slim" RTL dumping.  */
>  extern void dump_value_slim (FILE *, const_rtx, int);
> -extern void dump_insn_slim (FILE *, const_rtx);
> +extern void dump_insn_slim (FILE *, const rtx_insn *);
>  extern void dump_rtl_slim (FILE *, const rtx_insn *, const rtx_insn *,
>  			   int, int);
>  extern void print_value (pretty_printer *, const_rtx, int);
> @@ -3438,7 +3456,7 @@ extern const char *str_pattern_slim (const_rtx);
>  /* In stmt.c */
>  extern void expand_null_return (void);
>  extern void expand_naked_return (void);
> -extern void emit_jump (rtx);
> +extern void emit_jump (rtx_code_label *);
>  
>  /* In expr.c */
>  extern rtx move_by_pieces (rtx, rtx, unsigned HOST_WIDE_INT,
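
The new is_a_helper overloads taking rtx_insn * are what make dyn_cast
usable on already-narrowed insn pointers, e.g. (a sketch;
handle_debug_insn is hypothetical):

  if (rtx_debug_insn *dbg = dyn_cast <rtx_debug_insn *> (insn))
    handle_debug_insn (dbg);
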
> diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
> index 743aad6..7d10abe 100644
> --- a/gcc/rtlanal.c
> +++ b/gcc/rtlanal.c
> @@ -2914,14 +2914,14 @@ rtx_referenced_p (const_rtx x, const_rtx body)
>  bool
>  tablejump_p (const rtx_insn *insn, rtx *labelp, rtx_jump_table_data **tablep)
>  {
> -  rtx label, table;
> +  rtx table;
>  
>    if (!JUMP_P (insn))
>      return false;
>  
> -  label = JUMP_LABEL (insn);
> -  if (label != NULL_RTX && !ANY_RETURN_P (label)
> -      && (table = NEXT_INSN (as_a <rtx_insn *> (label))) != NULL_RTX
> +  rtx_insn *label = JUMP_LABEL_AS_INSN (insn);
> +  if (label && !ANY_RETURN_P (label)
> +      && (table = NEXT_INSN (label)) != NULL_RTX
>        && JUMP_TABLE_DATA_P (table))
>      {
>        if (labelp)
> diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c
> index 5434831..e6f1003 100644
> --- a/gcc/sched-deps.c
> +++ b/gcc/sched-deps.c
> @@ -2649,7 +2649,7 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
>      case MEM:
>        {
>  	/* Reading memory.  */
> -	rtx u;
> +	rtx_insn_list *u;
>  	rtx_insn_list *pending;
>  	rtx_expr_list *pending_mem;
>  	rtx t = x;
> @@ -2700,11 +2700,10 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
>  		pending_mem = pending_mem->next ();
>  	      }
>  
> -	    for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1))
> -	      add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)),
> -			      REG_DEP_ANTI);
> +	    for (u = deps->last_pending_memory_flush; u; u = u->next ())
> +	      add_dependence (insn, u->insn (), REG_DEP_ANTI);
>  
> -	    for (u = deps->pending_jump_insns; u; u = XEXP (u, 1))
> +	    for (u = deps->pending_jump_insns; u; u = u->next ())
>  	      if (deps_may_trap_p (x))
>  		{
>  		  if ((sched_deps_info->generate_spec_deps)
> @@ -2713,11 +2712,10 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
>  		      ds_t ds = set_dep_weak (DEP_ANTI, BEGIN_CONTROL,
>  					      MAX_DEP_WEAK);
>  		      
> -		      note_dep (as_a <rtx_insn *> (XEXP (u, 0)), ds);
> +		      note_dep (u->insn (), ds);
>  		    }
>  		  else
> -		    add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)),
> -				    REG_DEP_CONTROL);
> +		    add_dependence (insn, u->insn (), REG_DEP_CONTROL);
>  		}
>  	  }
>  
> @@ -3088,7 +3086,7 @@ sched_analyze_insn (struct deps_desc *deps, rtx x, rtx_insn *insn)
>    if (DEBUG_INSN_P (insn))
>      {
>        rtx_insn *prev = deps->last_debug_insn;
> -      rtx u;
> +      rtx_insn_list *u;
>  
>        if (!deps->readonly)
>  	deps->last_debug_insn = insn;
> @@ -3100,8 +3098,8 @@ sched_analyze_insn (struct deps_desc *deps, rtx x, rtx_insn *insn)
>  			   REG_DEP_ANTI, false);
>  
>        if (!sel_sched_p ())
> -	for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1))
> -	  add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)), REG_DEP_ANTI);
> +	for (u = deps->last_pending_memory_flush; u; u = u->next ())
> +	  add_dependence (insn, u->insn (), REG_DEP_ANTI);
>  
>        EXECUTE_IF_SET_IN_REG_SET (reg_pending_uses, 0, i, rsi)
>  	{
> diff --git a/gcc/sched-vis.c b/gcc/sched-vis.c
> index 32f7a7c..31794e6 100644
> --- a/gcc/sched-vis.c
> +++ b/gcc/sched-vis.c
> @@ -67,7 +67,7 @@ along with GCC; see the file COPYING3.  If not see
>     pointer, via str_pattern_slim, but this usage is discouraged.  */
>  
>  /* For insns we print patterns, and for some patterns we print insns...  */
> -static void print_insn_with_notes (pretty_printer *, const_rtx);
> +static void print_insn_with_notes (pretty_printer *, const rtx_insn *);
>  
>  /* This recognizes rtx'en classified as expressions.  These are always
>     represent some action on values or results of other expression, that
> @@ -669,7 +669,7 @@ print_pattern (pretty_printer *pp, const_rtx x, int verbose)
>     with their INSN_UIDs.  */
>  
>  void
> -print_insn (pretty_printer *pp, const_rtx x, int verbose)
> +print_insn (pretty_printer *pp, const rtx_insn *x, int verbose)
>  {
>    if (verbose)
>      {
> @@ -787,7 +787,7 @@ print_insn (pretty_printer *pp, const_rtx x, int verbose)
>     note attached to the instruction.  */
>  
>  static void
> -print_insn_with_notes (pretty_printer *pp, const_rtx x)
> +print_insn_with_notes (pretty_printer *pp, const rtx_insn *x)
>  {
>    pp_string (pp, print_rtx_head);
>    print_insn (pp, x, 1);
> @@ -823,7 +823,7 @@ dump_value_slim (FILE *f, const_rtx x, int verbose)
>  /* Emit a slim dump of X (an insn) to the file F, including any register
>     note attached to the instruction.  */
>  void
> -dump_insn_slim (FILE *f, const_rtx x)
> +dump_insn_slim (FILE *f, const rtx_insn *x)
>  {
>    pretty_printer rtl_slim_pp;
>    rtl_slim_pp.buffer->stream = f;
> @@ -893,9 +893,9 @@ str_pattern_slim (const_rtx x)
>  }
>  
>  /* Emit a slim dump of X (an insn) to stderr.  */
> -extern void debug_insn_slim (const_rtx);
> +extern void debug_insn_slim (const rtx_insn *);
>  DEBUG_FUNCTION void
> -debug_insn_slim (const_rtx x)
> +debug_insn_slim (const rtx_insn *x)
>  {
>    dump_insn_slim (stderr, x);
>  }
> diff --git a/gcc/stmt.c b/gcc/stmt.c
> index 45dc45f..a6418ff 100644
> --- a/gcc/stmt.c
> +++ b/gcc/stmt.c
> @@ -135,12 +135,13 @@ static void balance_case_nodes (case_node_ptr *, case_node_ptr);
>  static int node_has_low_bound (case_node_ptr, tree);
>  static int node_has_high_bound (case_node_ptr, tree);
>  static int node_is_bounded (case_node_ptr, tree);
> -static void emit_case_nodes (rtx, case_node_ptr, rtx, int, tree);
> +static void emit_case_nodes (rtx, case_node_ptr, rtx_code_label *, int, tree);
>  \f
>  /* Return the rtx-label that corresponds to a LABEL_DECL,
> -   creating it if necessary.  */
> +   creating it if necessary.  If the label has been deleted, return the
> +   corresponding NOTE_INSN_DELETED{_DEBUG,}_LABEL note insn instead.  */
>  
> -rtx
> +rtx_insn *
>  label_rtx (tree label)
>  {
>    gcc_assert (TREE_CODE (label) == LABEL_DECL);
> @@ -153,15 +154,15 @@ label_rtx (tree label)
>  	LABEL_PRESERVE_P (r) = 1;
>      }
>  
> -  return DECL_RTL (label);
> +  return as_a <rtx_insn *> (DECL_RTL (label));
>  }
>  
>  /* As above, but also put it on the forced-reference list of the
>     function that contains it.  */
> -rtx
> +rtx_insn *
>  force_label_rtx (tree label)
>  {
> -  rtx_insn *ref = as_a <rtx_insn *> (label_rtx (label));
> +  rtx_insn *ref = label_rtx (label);
>    tree function = decl_function_context (label);
>  
>    gcc_assert (function);
> @@ -170,10 +171,18 @@ force_label_rtx (tree label)
>    return ref;
>  }
>  
> +/* As label_rtx, but ensures (in checking builds) that the returned
> +   value is an existing label (i.e. an rtx with code CODE_LABEL).  */
> +rtx_code_label *
> +live_label_rtx (tree label)
> +{
> +  return as_a <rtx_code_label *> (label_rtx (label));
> +}
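
So the two accessors now divide the work like this (sketch):

  rtx_insn *r = label_rtx (decl);         /* CODE_LABEL, or the
                                             NOTE_INSN_DELETED{_DEBUG,}_LABEL
                                             note if the label was deleted */
  rtx_code_label *l = live_label_rtx (decl);  /* asserts a live CODE_LABEL */
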
> +
>  /* Add an unconditional jump to LABEL as the next sequential instruction.  */
>  
>  void
> -emit_jump (rtx label)
> +emit_jump (rtx_code_label *label)
>  {
>    do_pending_stack_adjust ();
>    emit_jump_insn (gen_jump (label));
> @@ -196,7 +205,7 @@ emit_jump (rtx label)
>  void
>  expand_label (tree label)
>  {
> -  rtx_insn *label_r = as_a <rtx_insn *> (label_rtx (label));
> +  rtx_code_label *label_r = live_label_rtx (label);
>  
>    do_pending_stack_adjust ();
>    emit_label (label_r);
> @@ -717,7 +726,7 @@ resolve_operand_name_1 (char *p, tree outputs, tree inputs, tree labels)
>  void
>  expand_naked_return (void)
>  {
> -  rtx end_label;
> +  rtx_code_label *end_label;
>  
>    clear_pending_stack_adjust ();
>    do_pending_stack_adjust ();
> @@ -732,12 +741,12 @@ expand_naked_return (void)
>  /* Generate code to jump to LABEL if OP0 and OP1 are equal in mode MODE. PROB
>     is the probability of jumping to LABEL.  */
>  static void
> -do_jump_if_equal (machine_mode mode, rtx op0, rtx op1, rtx label,
> +do_jump_if_equal (machine_mode mode, rtx op0, rtx op1, rtx_code_label *label,
>  		  int unsignedp, int prob)
>  {
>    gcc_assert (prob <= REG_BR_PROB_BASE);
>    do_compare_rtx_and_jump (op0, op1, EQ, unsignedp, mode,
> -			   NULL_RTX, NULL_RTX, label, prob);
> +			   NULL_RTX, NULL, label, prob);
>  }
>  \f
>  /* Do the insertion of a case label into case_list.  The labels are
> @@ -894,8 +903,8 @@ expand_switch_as_decision_tree_p (tree range,
>  
>  static void
>  emit_case_decision_tree (tree index_expr, tree index_type,
> -			 struct case_node *case_list, rtx default_label,
> -                         int default_prob)
> +			 case_node_ptr case_list, rtx_code_label *default_label,
> +			 int default_prob)
>  {
>    rtx index = expand_normal (index_expr);
>  
> @@ -1153,7 +1162,7 @@ void
>  expand_case (gswitch *stmt)
>  {
>    tree minval = NULL_TREE, maxval = NULL_TREE, range = NULL_TREE;
> -  rtx default_label = NULL_RTX;
> +  rtx_code_label *default_label = NULL;
>    unsigned int count, uniq;
>    int i;
>    int ncases = gimple_switch_num_labels (stmt);
> @@ -1185,7 +1194,7 @@ expand_case (gswitch *stmt)
>    do_pending_stack_adjust ();
>  
>    /* Find the default case target label.  */
> -  default_label = label_rtx (CASE_LABEL (gimple_switch_default_label (stmt)));
> +  default_label = live_label_rtx (CASE_LABEL (gimple_switch_default_label (stmt)));
>    edge default_edge = EDGE_SUCC (bb, 0);
>    int default_prob = default_edge->probability;
>  
> @@ -1335,7 +1344,7 @@ expand_sjlj_dispatch_table (rtx dispatch_index,
>        for (int i = 0; i < ncases; i++)
>          {
>  	  tree elt = dispatch_table[i];
> -	  rtx lab = label_rtx (CASE_LABEL (elt));
> +	  rtx_code_label *lab = live_label_rtx (CASE_LABEL (elt));
>  	  do_jump_if_equal (index_mode, index, zero, lab, 0, -1);
>  	  force_expand_binop (index_mode, sub_optab,
>  			      index, CONST1_RTX (index_mode),
> @@ -1604,7 +1613,7 @@ node_is_bounded (case_node_ptr node, tree index_type)
>     tests for the value 50, then this node need not test anything.  */
>  
>  static void
> -emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
> +emit_case_nodes (rtx index, case_node_ptr node, rtx_code_label *default_label,
>  		 int default_prob, tree index_type)
>  {
>    /* If INDEX has an unsigned type, we must make unsigned branches.  */
> @@ -1632,7 +1641,8 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
>  			convert_modes (mode, imode,
>  				       expand_normal (node->low),
>  				       unsignedp),
> -			label_rtx (node->code_label), unsignedp, probability);
> +			live_label_rtx (node->code_label),
> +			unsignedp, probability);
>        /* Since this case is taken at this point, reduce its weight from
>           subtree_weight.  */
>        subtree_prob -= prob;
> @@ -1699,7 +1709,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
>  				convert_modes (mode, imode,
>  					       expand_normal (node->right->low),
>  					       unsignedp),
> -				label_rtx (node->right->code_label),
> +				live_label_rtx (node->right->code_label),
>  				unsignedp, probability);
>  
>  	      /* See if the value matches what the left hand side
> @@ -1711,7 +1721,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
>  				convert_modes (mode, imode,
>  					       expand_normal (node->left->low),
>  					       unsignedp),
> -				label_rtx (node->left->code_label),
> +				live_label_rtx (node->left->code_label),
>  				unsignedp, probability);
>  	    }
>  
> @@ -1798,7 +1808,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
>  			        (mode, imode,
>  			         expand_normal (node->right->low),
>  			         unsignedp),
> -			        label_rtx (node->right->code_label), unsignedp, probability);
> +			        live_label_rtx (node->right->code_label), unsignedp, probability);
>              }
>  	  }
>  
> @@ -1840,7 +1850,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
>  			        (mode, imode,
>  			         expand_normal (node->left->low),
>  			         unsignedp),
> -			        label_rtx (node->left->code_label), unsignedp, probability);
> +			        live_label_rtx (node->left->code_label), unsignedp, probability);
>              }
>  	}
>      }
> @@ -2063,7 +2073,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
>  				       mode, 1, default_label, probability);
>  	    }
>  
> -	  emit_jump (label_rtx (node->code_label));
> +	  emit_jump (live_label_rtx (node->code_label));
>  	}
>      }
>  }
> diff --git a/gcc/stmt.h b/gcc/stmt.h
> index 620b0f1..7b142ce 100644
> --- a/gcc/stmt.h
> +++ b/gcc/stmt.h
> @@ -31,13 +31,18 @@ extern tree resolve_asm_operand_names (tree, tree, tree, tree);
>  extern tree tree_overlaps_hard_reg_set (tree, HARD_REG_SET *);
>  #endif
>  
> -/* Return the CODE_LABEL rtx for a LABEL_DECL, creating it if necessary.  */
> -extern rtx label_rtx (tree);
> +/* Return the CODE_LABEL rtx for a LABEL_DECL, creating it if necessary.
> +   If the label was deleted, the corresponding note
> +   (NOTE_INSN_DELETED{_DEBUG,}_LABEL) insn will be returned.  */
> +extern rtx_insn *label_rtx (tree);
>  
>  /* As label_rtx, but additionally the label is placed on the forced label
>     list of its containing function (i.e. it is treated as reachable even
>     if how is not obvious).  */
> -extern rtx force_label_rtx (tree);
> +extern rtx_insn *force_label_rtx (tree);
> +
> +/* As label_rtx, but checks that the label was not deleted.  */
> +extern rtx_code_label *live_label_rtx (tree);
>  
>  /* Expand a GIMPLE_SWITCH statement.  */
>  extern void expand_case (gswitch *);
> diff --git a/gcc/store-motion.c b/gcc/store-motion.c
> index 530766f..11e2dec 100644
> --- a/gcc/store-motion.c
> +++ b/gcc/store-motion.c
> @@ -813,7 +813,7 @@ insert_store (struct st_expr * expr, edge e)
>      return 0;
>  
>    reg = expr->reaching_reg;
> -  insn = as_a <rtx_insn *> (gen_move_insn (copy_rtx (expr->pattern), reg));
> +  insn = gen_move_insn (copy_rtx (expr->pattern), reg);
>  
>    /* If we are inserting this expression on ALL predecessor edges of a BB,
>       insert it at the start of the BB, and reset the insert bits on the other
> @@ -954,7 +954,7 @@ replace_store_insn (rtx reg, rtx_insn *del, basic_block bb,
>    rtx mem, note, set, ptr;
>  
>    mem = smexpr->pattern;
> -  insn = as_a <rtx_insn *> (gen_move_insn (reg, SET_SRC (single_set (del))));
> +  insn = gen_move_insn (reg, SET_SRC (single_set (del)));
>  
>    for (ptr = smexpr->antic_stores; ptr; ptr = XEXP (ptr, 1))
>      if (XEXP (ptr, 0) == del)


* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-03-31  4:38 [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses Mikhail Maltsev
  2015-03-31 15:52 ` Trevor Saunders
@ 2015-04-02 21:13 ` Jeff Law
  2015-04-25 11:49 ` Richard Sandiford
  2 siblings, 0 replies; 21+ messages in thread
From: Jeff Law @ 2015-04-02 21:13 UTC (permalink / raw)
  To: Mikhail Maltsev, gcc-patches

On 03/30/2015 10:37 PM, Mikhail Maltsev wrote:
> Hi!
>
> I'm currently working on the proposed task of replacing rtx objects
> (i.e. struct rtx_def) with derived classes. I would like to get some
> feedback on this work (it's far from being finished, but basically I
> would like to know, whether my modifications are appropriate, e.g. one
> may consider that this is "too much" for just refactoring, because
> sometimes they involve small modification of semantics).
>
> The attached patch is not well tested, i.e. I bootstrapped and regtested
> it only on x86_64, but I'll perform more extensive testing before
> submitting the next version.
>
> The key points I would like to ask about:
>
> 1. The original task was to replace the rtx type by rtx_insn *, where it
> is appropriate. But rtx_insn has several derived classes, such as
> rtx_code_label, for example. So I tried to use the most derived type
> when possible. Is it OK?
Definitely.  In general the idea here is to exploit the static type 
checking done in the compiler to avoid runtime checking and failures.
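
Just to illustrate the payoff (frob () is a made-up example, not real
code):

/* Before: the precondition is only checked at runtime, and only in
   checking-enabled builds.  */
void
frob (rtx x)
{
  gcc_checking_assert (INSN_P (x));
  rtx_insn *insn = as_a <rtx_insn *> (x);
  /* ... use INSN ...  */
}

/* After: passing anything other than an insn is a compile-time
   error, and the runtime check disappears.  */
void
frob (rtx_insn *insn);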


>
> 2. Not all of these "type promotions" can be done by just looking at
> function callers and callees (and some functions will only be generated
> during the build of some rare architecture) and checks already done in
> them. In a couple of cases I referred to comments and my general
> understanding of code semantics. In one case this actually caused a
> regression (in the patch it is fixed, of course), because of somewhat
> misleading comment (see "live_label_rtx" function added in patch for
> details) The question is - are such changes OK for refactoring (or it
> should strictly preserve semantics)?
They're OK, but it may be easier to run things through the review 
process if refactoring is kept separate from strengthening the type 
checking.


>
> 3. In lra-constraints.c I added a new class rtx_usage_list, which, IMHO,
> allows to group the functions which work with usage list in a more
> explicit manner and make some conditions more self-explaining. I hope
> that Vladimir Makarov (in this case, because it concerns LRA) and other
> authors will not object against such "intrusion" into their code (or
> would rather tell me what should be fixed in my patch(es), instead of
> just refusing to apply it/them). In general, are such changes OK or
> should better be avoided?
>
> A couple of questions related to further work:
I don't see anything inherently wrong with this concept.  Though again, 
I'd suggest separating out these changes from type safety work.

>
> 1. I noticed that emit_insn function, in fact, does two kinds of things:
> it can either add it's argument to the chain, or, if the argument is a
> pattern, it creates a new instruction based on that pattern. Shouldn't
> this logic be separated in the callers?
That would be wise.  There are probably several of these kinds of things 
lurking around.
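
Roughly what I'd expect the split to look like (a sketch; names
invented):

/* Wrap PATTERN in a new insn and add it to the chain.  */
extern rtx_insn *emit_pattern (rtx pattern);

/* Add the existing instruction INSN to the chain.  */
extern rtx_insn *emit_insn (rtx_insn *insn);

so that every caller has to say which of the two it means.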

>
> 2. Are there any plans on implementing a better class hierarchy on AST's
> ("union tree_node" type). I see that C++ FE uses a huge number of macros
> (which check TREE_CODE and some boolean flags). Could this be improved
> somehow?
It's in progress and I'm hoping Andrew will be in a position to post 
this work soon.

jeff


* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-03-31  4:38 [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses Mikhail Maltsev
  2015-03-31 15:52 ` Trevor Saunders
  2015-04-02 21:13 ` Jeff Law
@ 2015-04-25 11:49 ` Richard Sandiford
  2015-04-27 16:38   ` Jeff Law
  2015-04-27 20:01   ` Mikhail Maltsev
  2 siblings, 2 replies; 21+ messages in thread
From: Richard Sandiford @ 2015-04-25 11:49 UTC (permalink / raw)
  To: Mikhail Maltsev; +Cc: Jeff Law, gcc-patches

Thanks for looking at this.

Mikhail Maltsev <maltsevm@gmail.com> writes:
> 2. Not all of these "type promotions" can be done by just looking at
> function callers and callees (and some functions will only be generated
> during the build of some rare architecture) and checks already done in
> them. In a couple of cases I referred to comments and my general
> understanding of code semantics. In one case this actually caused a
> regression (in the patch it is fixed, of course), because of somewhat
> misleading comment (see "live_label_rtx" function added in patch for
> details) The question is - are such changes OK for refactoring (or it
> should strictly preserve semantics)?

FWIW I think the split between label_rtx and live_label_rtx is good,
but I think we should give them different names.  The first one is
returning only a position in the instruction stream, the second is
returning a jump target.  I think we should rename both of them to
make that distinction clearer.

> @@ -2099,9 +2107,9 @@ fix_crossing_conditional_branches (void)
>  		  emit_label (new_label);
> 
>  		  gcc_assert (GET_CODE (old_label) == LABEL_REF);
> -		  old_label = JUMP_LABEL (old_jump);
> -		  new_jump = emit_jump_insn (gen_jump (old_label));
> -		  JUMP_LABEL (new_jump) = old_label;
> +		  old_label_insn = JUMP_LABEL_AS_INSN (old_jump);
> +		  new_jump = emit_jump_insn (gen_jump (old_label_insn));
> +		  JUMP_LABEL (new_jump) = old_label_insn;
> 
>  		  last_bb = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
>  		  new_bb = create_basic_block (new_label, new_jump, last_bb);

I think the eventual aim would be to have rtx_jump_insn member functions
that get and set the jump label as an rtx_insn, with JUMP_LABEL_AS_INSN
being a stepping stone towards that.  In cases like this it might make
more sense to ensure old_jump has the right type (rtx_jump_insn) and go
straight to the member functions, rather than switching to JUMP_LABEL_AS_INSN
now and then having to rewrite it later.
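
Something along these lines, say (only a sketch; the exact names and
signatures would need deciding):

class GTY(()) rtx_jump_insn : public rtx_insn
{
public:
  /* Return the target of this jump as an insn, if it has one.  */
  rtx_insn *jump_target () const;

  /* Record LABEL as the target of this jump.  */
  void set_jump_target (rtx_code_label *label);
};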

> @@ -1014,8 +1023,9 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
>      {
>        if (CONSTANT_P (tem))
>  	{
> -	  rtx label = (tem == const0_rtx || tem == CONST0_RTX (mode))
> -		      ? if_false_label : if_true_label;
> +	  rtx_code_label *label = (tem == const0_rtx
> +				   || tem == CONST0_RTX (mode)) ?
> +				       if_false_label : if_true_label;
>  	  if (label)
>  	    emit_jump (label);
>  	  return;

Formatting nit, but the line break should be before "?" rather than after.

> diff --git a/gcc/is-a.h b/gcc/is-a.h
> index 58917eb..4fb9dde 100644
> --- a/gcc/is-a.h
> +++ b/gcc/is-a.h
> @@ -46,6 +46,11 @@ TYPE as_a <TYPE> (pointer)
>  
>        do_something_with (as_a <cgraph_node *> *ptr);
>  
> +TYPE assert_as_a <TYPE> (pointer)
> +
> +    Like as_a <TYPE> (pointer), but uses an assertion that is enabled even
> +    in a non-checking (release) build.
> +
>  TYPE safe_as_a <TYPE> (pointer)
>  
>      Like as_a <TYPE> (pointer), but where pointer could be NULL.  This
> @@ -193,6 +198,17 @@ as_a (U *p)
>    return is_a_helper <T>::cast (p);
>  }
>  
> +/* Same as above, but checks the condition even in a release build.  */
> +
> +template <typename T, typename U>
> +inline T
> +assert_as_a (U *p)
> +{
> +  gcc_assert (is_a <T> (p));
> +  return is_a_helper <T>::cast (p);
> +}
> +
> +
>  /* Similar to as_a<>, but where the pointer can be NULL, even if
>     is_a_helper<T> doesn't check for NULL.  */

This preserves the behaviour of the original code but I'm not sure
it's worth it.  I doubt the distinction between:

  gcc_assert (JUMP_P (x));

and:

  gcc_checking_assert (JUMP_P (x));

was ever very scientific.  Seems like we should take this refactoring as
an opportunity to make the checking more consistent.
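
For reference, the only difference between the two is whether the
check survives into a release build; IIRC gcc_checking_assert is
roughly:

#ifdef ENABLE_CHECKING
#define gcc_checking_assert(EXPR) gcc_assert (EXPR)
#else
#define gcc_checking_assert(EXPR) ((void)(0 && (EXPR)))
#endif

so picking one of them consistently seems better than adding a third
variant.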

> @@ -5069,18 +5164,15 @@ split_if_necessary (int regno, machine_mode mode,
>  {
>    bool res = false;
>    int i, nregs = 1;
> -  rtx next_usage_insns;
> +  rtx_usage_list *next_usage_insns;
>  
>    if (regno < FIRST_PSEUDO_REGISTER)
>      nregs = hard_regno_nregs[regno][mode];
>    for (i = 0; i < nregs; i++)
>      if (usage_insns[regno + i].check == curr_usage_insns_check
> -	&& (next_usage_insns = usage_insns[regno + i].insns) != NULL_RTX
> +	&& (next_usage_insns = usage_insns[regno + i].insns) != NULL
>  	/* To avoid processing the register twice or more.  */
> -	&& ((GET_CODE (next_usage_insns) != INSN_LIST
> -	     && INSN_UID (next_usage_insns) < max_uid)
> -	    || (GET_CODE (next_usage_insns) == INSN_LIST
> -		&& (INSN_UID (XEXP (next_usage_insns, 0)) < max_uid)))
> +	&& (INSN_UID (next_usage_insns->insn ()) < max_uid)
>  	&& need_for_split_p (potential_reload_hard_regs, regno + i)
>  	&& split_reg (before_p, regno + i, insn, next_usage_insns))
>      res = true;

No need for the brackets in the last condition now that it fits on
a single line.

> @@ -4501,6 +4500,107 @@ static int calls_num;
>     USAGE_INSNS.	 */
>  static int curr_usage_insns_check;
> 
> +namespace
> +{
> +
> +class rtx_usage_list GTY(()) : public rtx_def
> +{
> +public:
> +  /* This class represents an element in a singly-linked list, which:
> +     1. Ends with non-debug INSN
> +     2. May contain several INSN_LIST nodes with DEBUG_INSNs attached to them
> +
> +     I.e.:   INSN_LIST--> INSN_LIST-->..--> INSN
> +               |            |
> +             DEBUG_INSN   DEBUG_INSN
> +
> +   See struct usage_insns for description of how it is used.  */
> +
> +  /* Get next node of the list.  */
> +  rtx_usage_list *next () const;
> +
> +  /* Get the current instruction.  */
> +  rtx_insn *insn ();
> +
> +  /* Check whether the current INSN is debug info.  */
> +  bool debug_p () const;
> +
> +  /* Add debug information to the chain.  */
> +  rtx_usage_list *push_front (rtx_debug_insn *debug_insn);
> +};
> +
> +/* If the current node is an INSN, return it.  Otherwise it is an INSN_LIST
> +   node; in this case return the attached INSN.  */
> +
> +rtx_insn *
> +rtx_usage_list::insn ()
> +{
> +  if (rtx_insn *as_insn = dyn_cast <rtx_insn *> (this))
> +    return as_insn;
> +  return safe_as_a <rtx_debug_insn *> (XEXP (this, 0));
> +}
> +
> +/* Get next node.  */
> +
> +rtx_usage_list *
> +rtx_usage_list::next () const
> +{
> +  return reinterpret_cast <rtx_usage_list *> (XEXP (this, 1));
> +}
> +
> +/* Check whether the current INSN is debug info.  */
> +
> +bool
> +rtx_usage_list::debug_p () const
> +{
> +  return is_a <const rtx_insn_list *> (this);
> +}
> +
> +/* Add debug information to the chain.  */
> +
> +rtx_usage_list *
> +rtx_usage_list::push_front (rtx_debug_insn *debug_insn)
> +{
> +  /* ??? Maybe it would be better to store DEBUG_INSNs in a separate
> +     homogeneous list (or vec) and use another pointer for actual INSN?
> +     Then we won't have to traverse the list and some checks will also
> +     become simpler.  */
> +  return reinterpret_cast <rtx_usage_list *>
> +                (gen_rtx_INSN_LIST (VOIDmode,
> +                                    debug_insn, this));
> +}
> +
> +} // anon namespace
> +
> +/* Helpers for as-a casts.  */
> +
> +template <>
> +template <>
> +inline bool
> +is_a_helper <rtx_insn_list *>::test (rtx_usage_list *list)
> +{
> +  return list->code == INSN_LIST;
> +}
> +
> +template <>
> +template <>
> +inline bool
> +is_a_helper <const rtx_insn_list *>::test (const rtx_usage_list *list)
> +{
> +  return list->code == INSN_LIST;
> +}
> +
> +/* rtx_usage_list is either an INSN_LIST node or an INSN (no other
> +   options).  Therefore, this check is valid.  */
> +
> +template <>
> +template <>
> +inline bool
> +is_a_helper <rtx_insn *>::test (rtx_usage_list *list)
> +{
> +  return list->code != INSN_LIST;
> +}
> +
>  /* Info about last usage of registers in EBB to do inheritance/split
>     transformation.  Inheritance transformation is done from a spilled
>     pseudo and split transformations from a hard register or a pseudo

That seems pretty heavy-weight for LRA-local code.  Also, the long-term
plan is for INSN_LIST and rtx_insn to be in separate hierarchies,
at which point we'd have no alias-safe way to distinguish them.

usage_insns isn't a GC structure and isn't live across a GC collection,
so I don't think we need these lists to be rtxes at all.  Also:

/* Return first non-debug insn in list USAGE_INSNS.  */
static rtx_insn *
skip_usage_debug_insns (rtx usage_insns)
{
  rtx insn;

  /* Skip debug insns.  */
  for (insn = usage_insns;
       insn != NULL_RTX && GET_CODE (insn) == INSN_LIST;
       insn = XEXP (insn, 1))
    ;
  return safe_as_a <rtx_insn *> (insn);
}

suggests that having the nondebug insn last is a problem.  Any
correctness decisions should be based on the nondebug insn and
it's inefficient to have to skip all the debug insns before
doing that.

So I think we should change the way this list is represented.
Maybe we could use something like a vec (perhaps too expensive to allocate,
reallocate and deallocate for each register) or a simple obstack-based
linked list.  Either of those would be more space-efficient than
INSN_LIST and would avoid the rtx garbage after the pass has finished.
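
For instance (purely a sketch, all names invented):

/* Obstack-allocated node in the per-register list of usage insns.
   The non-debug insn could then live directly in usage_insns, so
   that checking it no longer means walking the list.  */
struct usage_insn_node
{
  rtx_insn *insn;
  struct usage_insn_node *next;
};

static struct obstack usage_obstack;

/* Prepend INSN to LIST, allocating the new node on the obstack.  */
static usage_insn_node *
push_usage_insn (usage_insn_node *list, rtx_insn *insn)
{
  usage_insn_node *node = XOBNEW (&usage_obstack, usage_insn_node);
  node->insn = insn;
  node->next = list;
  return node;
}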

FWIW the patch looked good to me otherwise.

Thanks,
Richard


* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-04-25 11:49 ` Richard Sandiford
@ 2015-04-27 16:38   ` Jeff Law
  2015-04-27 16:57     ` Richard Sandiford
  2015-04-27 20:01   ` Mikhail Maltsev
  1 sibling, 1 reply; 21+ messages in thread
From: Jeff Law @ 2015-04-27 16:38 UTC (permalink / raw)
  To: Mikhail Maltsev, gcc-patches, rdsandiford

On 04/25/2015 05:49 AM, Richard Sandiford wrote:
>
>> @@ -2099,9 +2107,9 @@ fix_crossing_conditional_branches (void)
>>   		  emit_label (new_label);
>>
>>   		  gcc_assert (GET_CODE (old_label) == LABEL_REF);
>> -		  old_label = JUMP_LABEL (old_jump);
>> -		  new_jump = emit_jump_insn (gen_jump (old_label));
>> -		  JUMP_LABEL (new_jump) = old_label;
>> +		  old_label_insn = JUMP_LABEL_AS_INSN (old_jump);
>> +		  new_jump = emit_jump_insn (gen_jump (old_label_insn));
>> +		  JUMP_LABEL (new_jump) = old_label_insn;
>>
>>   		  last_bb = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
>>   		  new_bb = create_basic_block (new_label, new_jump, last_bb);
>
> I think the eventual aim would be to have rtx_jump_insn member functions
> that get and set the jump label as an rtx_insn, with JUMP_LABEL_AS_INSN
> being a stepping stone towards that.  In cases like this it might make
> more sense to ensure old_jump has the right type (rtx_jump_insn) and go
> straight to the member functions, rather than switching to JUMP_LABEL_AS_INSN
> now and then having to rewrite it later.
I'm comfortable with either way, so long as we get there.  I know that 
David certainly found it easier to introduce "scaffolding" early in this 
patch series, then exploit it, then tear down the scaffolding near the 
end of a patch series.

>> diff --git a/gcc/is-a.h b/gcc/is-a.h
>> index 58917eb..4fb9dde 100644
>> --- a/gcc/is-a.h
>> +++ b/gcc/is-a.h
>> @@ -46,6 +46,11 @@ TYPE as_a <TYPE> (pointer)
>>
>>         do_something_with (as_a <cgraph_node *> *ptr);
>>
>> +TYPE assert_as_a <TYPE> (pointer)
>> +
>> +    Like as_a <TYPE> (pointer), but uses an assertion that is enabled even
>> +    in a non-checking (release) build.
>> +
>>   TYPE safe_as_a <TYPE> (pointer)
>>
>>       Like as_a <TYPE> (pointer), but where pointer could be NULL.  This
>> @@ -193,6 +198,17 @@ as_a (U *p)
>>     return is_a_helper <T>::cast (p);
>>   }
>>
>> +/* Same as above, but checks the condition even in a release build.  */
>> +
>> +template <typename T, typename U>
>> +inline T
>> +assert_as_a (U *p)
>> +{
>> +  gcc_assert (is_a <T> (p));
>> +  return is_a_helper <T>::cast (p);
>> +}
>> +
>> +
>>   /* Similar to as_a<>, but where the pointer can be NULL, even if
>>      is_a_helper<T> doesn't check for NULL.  */
>
> This preserves the behaviour of the original code but I'm not sure
> it's worth it.  I doubt the distinction between:
>
>    gcc_assert (JUMP_P (x));
>
> and:
>
>    gcc_checking_assert (JUMP_P (x));
>
> was ever very scientific.  Seems like we should take this refactoring as
> an opportunity to make the checking more consistent.
Without some guidelines I suspect usage of gcc_checking_assert would be 
highly inconsistent.

And ultimately we want to get away from the helpers as much as possible, 
instead relying on the static type system to detect errors at compile 
time.  So unless there's a compelling reason, I'd prefer not to add more 
"support" for these helpers.

[ snip]

>
> That seems pretty heavy-weight for LRA-local code.  Also, the long-term
> plan is for INSN_LIST and rtx_insn to be in separate hierarchies,
> at which point we'd have no alias-safe way to distinguish them.
That's certainly what I think ought to happen.  INSN_LIST should turn 
into a standard vector or forward list.  For the use cases in GCC, 
either ought to be acceptable.

Jeff


* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-04-27 16:38   ` Jeff Law
@ 2015-04-27 16:57     ` Richard Sandiford
  0 siblings, 0 replies; 21+ messages in thread
From: Richard Sandiford @ 2015-04-27 16:57 UTC (permalink / raw)
  To: Jeff Law; +Cc: Mikhail Maltsev, gcc-patches

Jeff Law <law@redhat.com> writes:
> On 04/25/2015 05:49 AM, Richard Sandiford wrote:
>> That seems pretty heavy-weight for LRA-local code.  Also, the long-term
>> plan is for INSN_LIST and rtx_insn to be in separate hierarchies,
>> at which point we'd have no alias-safe way to distinguish them.
> That's certainly what I think ought to happen.  INSN_LIST should turn 
> into a standard vector or forward list.  For the use cases in GCC, 
> either ought to be acceptable.

OK.  But I think whatever replaces INSN_LIST will still need to be GCed,
for uses such as nonlocal_goto_handler_labels.  My point was that in this
case we don't want a GCed list, so it'd be better to avoid INSN_LIST
altogether.
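
E.g. those GC uses could become something like

  static GTY(()) vec<rtx_insn *, va_gc> *nonlocal_goto_handlers;

(a sketch only; the variable name is made up), while the LRA-local
list stays off the GC heap entirely.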

Thanks,
Richard


* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-04-25 11:49 ` Richard Sandiford
  2015-04-27 16:38   ` Jeff Law
@ 2015-04-27 20:01   ` Mikhail Maltsev
  2015-04-28 13:50     ` Richard Sandiford
  2015-04-28 23:55     ` Jeff Law
  1 sibling, 2 replies; 21+ messages in thread
From: Mikhail Maltsev @ 2015-04-27 20:01 UTC (permalink / raw)
  To: Jeff Law, gcc-patches, rdsandiford

[-- Attachment #1: Type: text/plain, Size: 2167 bytes --]

I'm sending an updated patch (rebased to recent trunk, bootstrapped and
regtested on x86_64-unknown-linux-gnu).

On 04/25/2015 02:49 PM, Richard Sandiford wrote:
> FWIW I think the split between label_rtx and live_label_rtx is good,
> but I think we should give them different names.  The first one is
> returning only a position in the instruction stream, the second is
> returning a jump target.  I think we should rename both of them to
> make that distinction clearer.

I renamed live_label_rtx to jump_target_rtx, but I'm not sure that name
is right either (perhaps you could suggest better names for these
functions?).

> I think the eventual aim would be to have rtx_jump_insn member functions
> that get and set the jump label as an rtx_insn, with JUMP_LABEL_AS_INSN
> being a stepping stone towards that.  In cases like this it might make
> more sense to ensure old_jump has the right type (rtx_jump_insn) and go
> straight to the member functions, rather than switching to JUMP_LABEL_AS_INSN
> now and then having to rewrite it later.

I added the member functions. The problem is that JUMP_LABEL does not
always satisfy the current invariant of rtx_insn: it can also be an RTL
expression of type RETURN or SIMPLE_RETURN.
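
So for now the getter has to allow for that case, roughly like this
(the idea rather than the exact code from the patch):

rtx_insn *
rtx_jump_insn::jump_target () const
{
  rtx label = JUMP_LABEL (this);
  /* JUMP_LABEL can also be a RETURN or SIMPLE_RETURN expression,
     which is not an insn; treat that like a missing label.  */
  if (label == NULL_RTX || ANY_RETURN_P (label))
    return NULL;
  return as_a <rtx_insn *> (label);
}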

> Formatting nit, but the line break should be before "?" rather than after.
Fixed.

> This preserves the behaviour of the original code but I'm not sure
> it's worth it.  I doubt the distinction between:
> 
>   gcc_assert (JUMP_P (x));
> 
> and:
> 
>   gcc_checking_assert (JUMP_P (x));
> 
> was ever very scientific.  Seems like we should take this refactoring as
> an opportunity to make the checking more consistent.
Fixed (removed assert_as_a).

> That seems pretty heavy-weight for LRA-local code.  Also, the long-term
> plan is for INSN_LIST and rtx_insn to be in separate hierarchies,
> at which point we'd have no alias-safe way to distinguish them.
> 
> usage_insns isn't a GC structure and isn't live across a GC collection,
> so I don't think we need these lists to be rtxes at all.
OK, reverted changes in LRA code for now. I think this should be a
separate patch then.

-- 
Regards,
    Mikhail Maltsev

[-- Attachment #2: as_insn2.patch --]
[-- Type: text/x-patch, Size: 101097 bytes --]

diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
index c2a3be3..ae726e5 100644
--- a/gcc/bb-reorder.c
+++ b/gcc/bb-reorder.c
@@ -1745,9 +1745,11 @@ set_edge_can_fallthru_flag (void)
 	continue;
       if (!any_condjump_p (BB_END (bb)))
 	continue;
-      if (!invert_jump (BB_END (bb), JUMP_LABEL (BB_END (bb)), 0))
+
+      rtx_jump_insn *bb_end_jump = as_a <rtx_jump_insn *> (BB_END (bb));
+      if (!invert_jump (bb_end_jump, JUMP_LABEL (bb_end_jump), 0))
 	continue;
-      invert_jump (BB_END (bb), JUMP_LABEL (BB_END (bb)), 0);
+      invert_jump (bb_end_jump, JUMP_LABEL (bb_end_jump), 0);
       EDGE_SUCC (bb, 0)->flags |= EDGE_CAN_FALLTHRU;
       EDGE_SUCC (bb, 1)->flags |= EDGE_CAN_FALLTHRU;
     }
@@ -1902,9 +1904,15 @@ fix_up_fall_thru_edges (void)
 
 		      fall_thru_label = block_label (fall_thru->dest);
 
-		      if (old_jump && JUMP_P (old_jump) && fall_thru_label)
-			invert_worked = invert_jump (old_jump,
-						     fall_thru_label,0);
+		      if (old_jump && fall_thru_label)
+                        {
+                          rtx_jump_insn *old_jump_insn =
+                                  dyn_cast <rtx_jump_insn *> (old_jump);
+                          if (old_jump_insn)
+                            invert_worked = invert_jump (old_jump_insn,
+						     fall_thru_label, 0);
+                        }
+
 		      if (invert_worked)
 			{
 			  fall_thru->flags &= ~EDGE_FALLTHRU;
@@ -2021,10 +2029,9 @@ fix_crossing_conditional_branches (void)
   edge succ2;
   edge crossing_edge;
   edge new_edge;
-  rtx_insn *old_jump;
   rtx set_src;
   rtx old_label = NULL_RTX;
-  rtx new_label;
+  rtx_code_label *new_label;
 
   FOR_EACH_BB_FN (cur_bb, cfun)
     {
@@ -2049,7 +2056,7 @@ fix_crossing_conditional_branches (void)
 
       if (crossing_edge)
 	{
-	  old_jump = BB_END (cur_bb);
+	  rtx_jump_insn *old_jump = as_a <rtx_jump_insn *> (BB_END (cur_bb));
 
 	  /* Check to make sure the jump instruction is a
 	     conditional jump.  */
@@ -2088,7 +2095,8 @@ fix_crossing_conditional_branches (void)
 	      else
 		{
 		  basic_block last_bb;
-		  rtx_insn *new_jump;
+		  rtx_insn *old_label_insn;
+		  rtx_jump_insn *new_jump;
 
 		  /* Create new basic block to be dest for
 		     conditional jump.  */
@@ -2099,9 +2107,10 @@ fix_crossing_conditional_branches (void)
 		  emit_label (new_label);
 
 		  gcc_assert (GET_CODE (old_label) == LABEL_REF);
-		  old_label = JUMP_LABEL (old_jump);
-		  new_jump = emit_jump_insn (gen_jump (old_label));
-		  JUMP_LABEL (new_jump) = old_label;
+		  old_label_insn = old_jump->jump_target ();
+		  new_jump = as_a <rtx_jump_insn *>
+				(emit_jump_insn (gen_jump (old_label_insn)));
+		  new_jump->set_jump_target (old_label_insn);
 
 		  last_bb = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
 		  new_bb = create_basic_block (new_label, new_jump, last_bb);
diff --git a/gcc/bt-load.c b/gcc/bt-load.c
index c028281..2280124 100644
--- a/gcc/bt-load.c
+++ b/gcc/bt-load.c
@@ -1212,7 +1212,7 @@ move_btr_def (basic_block new_def_bb, int btr, btr_def def, bitmap live_range,
   btr_mode = GET_MODE (SET_DEST (set));
   btr_rtx = gen_rtx_REG (btr_mode, btr);
 
-  new_insn = as_a <rtx_insn *> (gen_move_insn (btr_rtx, src));
+  new_insn = gen_move_insn (btr_rtx, src);
 
   /* Insert target register initialization at head of basic block.  */
   def->insn = emit_insn_after (new_insn, insp);
diff --git a/gcc/builtins.c b/gcc/builtins.c
index 028d793..9e06db8 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -2001,7 +2001,7 @@ expand_errno_check (tree exp, rtx target)
   /* Test the result; if it is NaN, set errno=EDOM because
      the argument was not in the domain.  */
   do_compare_rtx_and_jump (target, target, EQ, 0, GET_MODE (target),
-			   NULL_RTX, NULL_RTX, lab,
+			   NULL_RTX, NULL, lab,
 			   /* The jump is very likely.  */
 			   REG_BR_PROB_BASE - (REG_BR_PROB_BASE / 2000 - 1));
 
@@ -5938,9 +5938,9 @@ expand_builtin_acc_on_device (tree exp, rtx target)
   emit_move_insn (target, const1_rtx);
   rtx_code_label *done_label = gen_label_rtx ();
   do_compare_rtx_and_jump (v, v1, EQ, false, v_mode, NULL_RTX,
-			   NULL_RTX, done_label, PROB_EVEN);
+			   NULL, done_label, PROB_EVEN);
   do_compare_rtx_and_jump (v, v2, EQ, false, v_mode, NULL_RTX,
-			   NULL_RTX, done_label, PROB_EVEN);
+			   NULL, done_label, PROB_EVEN);
   emit_move_insn (target, const0_rtx);
   emit_label (done_label);
 
diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
index 477b6da..5358d52 100644
--- a/gcc/cfgcleanup.c
+++ b/gcc/cfgcleanup.c
@@ -190,7 +190,8 @@ try_simplify_condjump (basic_block cbranch_block)
     return false;
 
   /* Invert the conditional branch.  */
-  if (!invert_jump (cbranch_insn, block_label (jump_dest_block), 0))
+  if (!invert_jump (as_a <rtx_jump_insn *> (cbranch_insn),
+                    block_label (jump_dest_block), 0))
     return false;
 
   if (dump_file)
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 5905ddb..049230d 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -2051,7 +2051,7 @@ static hash_map<basic_block, rtx_code_label *> *lab_rtx_for_bb;
 
 /* Returns the label_rtx expression for a label starting basic block BB.  */
 
-static rtx
+static rtx_code_label *
 label_rtx_for_bb (basic_block bb ATTRIBUTE_UNUSED)
 {
   gimple_stmt_iterator gsi;
@@ -2078,7 +2078,7 @@ label_rtx_for_bb (basic_block bb ATTRIBUTE_UNUSED)
       if (DECL_NONLOCAL (lab))
 	break;
 
-      return label_rtx (lab);
+      return jump_target_rtx (lab);
     }
 
   rtx_code_label *l = gen_label_rtx ();
@@ -3120,7 +3120,7 @@ expand_goto (tree label)
   gcc_assert (!context || context == current_function_decl);
 #endif
 
-  emit_jump (label_rtx (label));
+  emit_jump (jump_target_rtx (label));
 }
 
 /* Output a return with no value.  */
@@ -5579,7 +5579,7 @@ construct_init_block (void)
     {
       tree label = gimple_block_label (e->dest);
 
-      emit_jump (label_rtx (label));
+      emit_jump (jump_target_rtx (label));
       flags = 0;
     }
   else
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 322d1a9..043859a 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -999,18 +999,18 @@ rtl_can_merge_blocks (basic_block a, basic_block b)
 /* Return the label in the head of basic block BLOCK.  Create one if it doesn't
    exist.  */
 
-rtx
+rtx_code_label *
 block_label (basic_block block)
 {
   if (block == EXIT_BLOCK_PTR_FOR_FN (cfun))
-    return NULL_RTX;
+    return NULL;
 
   if (!LABEL_P (BB_HEAD (block)))
     {
       BB_HEAD (block) = emit_label_before (gen_label_rtx (), BB_HEAD (block));
     }
 
-  return BB_HEAD (block);
+  return as_a <rtx_code_label *> (BB_HEAD (block));
 }
 
 /* Attempt to perform edge redirection by replacing possibly complex jump
@@ -1110,7 +1110,8 @@ try_redirect_by_replacing_jump (edge e, basic_block target, bool in_cfglayout)
       if (dump_file)
 	fprintf (dump_file, "Redirecting jump %i from %i to %i.\n",
 		 INSN_UID (insn), e->dest->index, target->index);
-      if (!redirect_jump (insn, block_label (target), 0))
+      if (!redirect_jump (as_a <rtx_jump_insn *> (insn),
+                          block_label (target), 0))
 	{
 	  gcc_assert (target == EXIT_BLOCK_PTR_FOR_FN (cfun));
 	  return NULL;
@@ -1294,7 +1295,8 @@ patch_jump_insn (rtx_insn *insn, rtx_insn *old_label, basic_block new_bb)
 	  /* If the substitution doesn't succeed, die.  This can happen
 	     if the back end emitted unrecognizable instructions or if
 	     target is exit block on some arches.  */
-	  if (!redirect_jump (insn, block_label (new_bb), 0))
+	  if (!redirect_jump (as_a <rtx_jump_insn *> (insn),
+                              block_label (new_bb), 0))
 	    {
 	      gcc_assert (new_bb == EXIT_BLOCK_PTR_FOR_FN (cfun));
 	      return false;
@@ -1322,7 +1324,7 @@ redirect_branch_edge (edge e, basic_block target)
 
   if (!currently_expanding_to_rtl)
     {
-      if (!patch_jump_insn (insn, old_label, target))
+      if (!patch_jump_insn (as_a <rtx_jump_insn *> (insn), old_label, target))
 	return NULL;
     }
   else
@@ -1330,7 +1332,8 @@ redirect_branch_edge (edge e, basic_block target)
        jumps (i.e. not yet split by find_many_sub_basic_blocks).
        Redirect all of those that match our label.  */
     FOR_BB_INSNS (src, insn)
-      if (JUMP_P (insn) && !patch_jump_insn (insn, old_label, target))
+      if (JUMP_P (insn) && !patch_jump_insn (as_a <rtx_jump_insn *> (insn),
+                                             old_label, target))
 	return NULL;
 
   if (dump_file)
@@ -1521,7 +1524,8 @@ force_nonfallthru_and_redirect (edge e, basic_block target, rtx jump_label)
       edge b = unchecked_make_edge (e->src, target, 0);
       bool redirected;
 
-      redirected = redirect_jump (BB_END (e->src), block_label (target), 0);
+      redirected = redirect_jump (as_a <rtx_jump_insn *> (BB_END (e->src)),
+                                  block_label (target), 0);
       gcc_assert (redirected);
 
       note = find_reg_note (BB_END (e->src), REG_BR_PROB, NULL_RTX);
@@ -3777,10 +3781,10 @@ fixup_reorder_chain (void)
 	  e_taken = e;
 
       bb_end_insn = BB_END (bb);
-      if (JUMP_P (bb_end_insn))
+      if (rtx_jump_insn *bb_end_jump = dyn_cast <rtx_jump_insn *> (bb_end_insn))
 	{
-	  ret_label = JUMP_LABEL (bb_end_insn);
-	  if (any_condjump_p (bb_end_insn))
+	  ret_label = JUMP_LABEL (bb_end_jump);
+	  if (any_condjump_p (bb_end_jump))
 	    {
 	      /* This might happen if the conditional jump has side
 		 effects and could therefore not be optimized away.
@@ -3788,10 +3792,10 @@ fixup_reorder_chain (void)
 		 to prevent rtl_verify_flow_info from complaining.  */
 	      if (!e_fall)
 		{
-		  gcc_assert (!onlyjump_p (bb_end_insn)
-			      || returnjump_p (bb_end_insn)
+		  gcc_assert (!onlyjump_p (bb_end_jump)
+			      || returnjump_p (bb_end_jump)
                               || (e_taken->flags & EDGE_CROSSING));
-		  emit_barrier_after (bb_end_insn);
+		  emit_barrier_after (bb_end_jump);
 		  continue;
 		}
 
@@ -3813,11 +3817,11 @@ fixup_reorder_chain (void)
 		 edge based on known or assumed probability.  */
 	      else if (bb->aux != e_taken->dest)
 		{
-		  rtx note = find_reg_note (bb_end_insn, REG_BR_PROB, 0);
+		  rtx note = find_reg_note (bb_end_jump, REG_BR_PROB, 0);
 
 		  if (note
 		      && XINT (note, 0) < REG_BR_PROB_BASE / 2
-		      && invert_jump (bb_end_insn,
+		      && invert_jump (bb_end_jump,
 				      (e_fall->dest
 				       == EXIT_BLOCK_PTR_FOR_FN (cfun)
 				       ? NULL_RTX
@@ -3840,7 +3844,7 @@ fixup_reorder_chain (void)
 
 	      /* Otherwise we can try to invert the jump.  This will
 		 basically never fail, however, keep up the pretense.  */
-	      else if (invert_jump (bb_end_insn,
+	      else if (invert_jump (bb_end_jump,
 				    (e_fall->dest
 				     == EXIT_BLOCK_PTR_FOR_FN (cfun)
 				     ? NULL_RTX
@@ -4961,7 +4965,7 @@ rtl_lv_add_condition_to_bb (basic_block first_head ,
 			    basic_block second_head ATTRIBUTE_UNUSED,
 			    basic_block cond_bb, void *comp_rtx)
 {
-  rtx label;
+  rtx_code_label *label;
   rtx_insn *seq, *jump;
   rtx op0 = XEXP ((rtx)comp_rtx, 0);
   rtx op1 = XEXP ((rtx)comp_rtx, 1);
@@ -4977,8 +4981,7 @@ rtl_lv_add_condition_to_bb (basic_block first_head ,
   start_sequence ();
   op0 = force_operand (op0, NULL_RTX);
   op1 = force_operand (op1, NULL_RTX);
-  do_compare_rtx_and_jump (op0, op1, comp, 0,
-			   mode, NULL_RTX, NULL_RTX, label, -1);
+  do_compare_rtx_and_jump (op0, op1, comp, 0, mode, NULL_RTX, NULL, label, -1);
   jump = get_last_insn ();
   JUMP_LABEL (jump) = label;
   LABEL_NUSES (label)++;
diff --git a/gcc/cfgrtl.h b/gcc/cfgrtl.h
index 32c8ff6..cdf1477 100644
--- a/gcc/cfgrtl.h
+++ b/gcc/cfgrtl.h
@@ -33,7 +33,7 @@ extern bool contains_no_active_insn_p (const_basic_block);
 extern bool forwarder_block_p (const_basic_block);
 extern bool can_fallthru (basic_block, basic_block);
 extern rtx_note *bb_note (basic_block);
-extern rtx block_label (basic_block);
+extern rtx_code_label *block_label (basic_block);
 extern edge try_redirect_by_replacing_jump (edge, basic_block, bool);
 extern void emit_barrier_after_bb (basic_block bb);
 extern basic_block force_nonfallthru_and_redirect (edge, basic_block, rtx);
diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index 77a6109..9896f21 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -38390,7 +38390,7 @@ ix86_emit_cmove (rtx dst, rtx src, enum rtx_code code, rtx op1, rtx op2)
     }
   else
     {
-      rtx nomove = gen_label_rtx ();
+      rtx_code_label *nomove = gen_label_rtx ();
       emit_cmp_and_jump_insns (op1, op2, reverse_condition (code),
 			       const0_rtx, GET_MODE (op1), 1, nomove);
       emit_move_insn (dst, src);
diff --git a/gcc/dojump.c b/gcc/dojump.c
index ad356ba..9f1af75 100644
--- a/gcc/dojump.c
+++ b/gcc/dojump.c
@@ -61,10 +61,12 @@ along with GCC; see the file COPYING3.  If not see
 #include "tm_p.h"
 
 static bool prefer_and_bit_test (machine_mode, int);
-static void do_jump_by_parts_greater (tree, tree, int, rtx, rtx, int);
-static void do_jump_by_parts_equality (tree, tree, rtx, rtx, int);
-static void do_compare_and_jump	(tree, tree, enum rtx_code, enum rtx_code, rtx,
-				 rtx, int);
+static void do_jump_by_parts_greater (tree, tree, int,
+				      rtx_code_label *, rtx_code_label *, int);
+static void do_jump_by_parts_equality (tree, tree, rtx_code_label *,
+				       rtx_code_label *, int);
+static void do_compare_and_jump	(tree, tree, enum rtx_code, enum rtx_code,
+				 rtx_code_label *, rtx_code_label *, int);
 
 /* Invert probability if there is any.  -1 stands for unknown.  */
 
@@ -146,34 +148,34 @@ restore_pending_stack_adjust (saved_pending_stack_adjust *save)
 \f
 /* Expand conditional expressions.  */
 
-/* Generate code to evaluate EXP and jump to LABEL if the value is zero.
-   LABEL is an rtx of code CODE_LABEL, in this function and all the
-   functions here.  */
+/* Generate code to evaluate EXP and jump to LABEL if the value is zero.  */
 
 void
-jumpifnot (tree exp, rtx label, int prob)
+jumpifnot (tree exp, rtx_code_label *label, int prob)
 {
-  do_jump (exp, label, NULL_RTX, inv (prob));
+  do_jump (exp, label, NULL, inv (prob));
 }
 
 void
-jumpifnot_1 (enum tree_code code, tree op0, tree op1, rtx label, int prob)
+jumpifnot_1 (enum tree_code code, tree op0, tree op1, rtx_code_label *label,
+	     int prob)
 {
-  do_jump_1 (code, op0, op1, label, NULL_RTX, inv (prob));
+  do_jump_1 (code, op0, op1, label, NULL, inv (prob));
 }
 
 /* Generate code to evaluate EXP and jump to LABEL if the value is nonzero.  */
 
 void
-jumpif (tree exp, rtx label, int prob)
+jumpif (tree exp, rtx_code_label *label, int prob)
 {
-  do_jump (exp, NULL_RTX, label, prob);
+  do_jump (exp, NULL, label, prob);
 }
 
 void
-jumpif_1 (enum tree_code code, tree op0, tree op1, rtx label, int prob)
+jumpif_1 (enum tree_code code, tree op0, tree op1,
+	  rtx_code_label *label, int prob)
 {
-  do_jump_1 (code, op0, op1, NULL_RTX, label, prob);
+  do_jump_1 (code, op0, op1, NULL, label, prob);
 }
 
 /* Used internally by prefer_and_bit_test.  */
@@ -225,7 +227,8 @@ prefer_and_bit_test (machine_mode mode, int bitnum)
 
 void
 do_jump_1 (enum tree_code code, tree op0, tree op1,
-	   rtx if_false_label, rtx if_true_label, int prob)
+	   rtx_code_label *if_false_label, rtx_code_label *if_true_label,
+	   int prob)
 {
   machine_mode mode;
   rtx_code_label *drop_through_label = 0;
@@ -378,15 +381,15 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
             op0_prob = inv (op0_false_prob);
             op1_prob = inv (op1_false_prob);
           }
-        if (if_false_label == NULL_RTX)
+        if (if_false_label == NULL)
           {
             drop_through_label = gen_label_rtx ();
-            do_jump (op0, drop_through_label, NULL_RTX, op0_prob);
-            do_jump (op1, NULL_RTX, if_true_label, op1_prob);
+            do_jump (op0, drop_through_label, NULL, op0_prob);
+            do_jump (op1, NULL, if_true_label, op1_prob);
           }
         else
           {
-            do_jump (op0, if_false_label, NULL_RTX, op0_prob);
+            do_jump (op0, if_false_label, NULL, op0_prob);
             do_jump (op1, if_false_label, if_true_label, op1_prob);
           }
         break;
@@ -405,18 +408,18 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
           {
             op0_prob = prob / 2;
             op1_prob = GCOV_COMPUTE_SCALE ((prob / 2), inv (op0_prob));
-          }
-        if (if_true_label == NULL_RTX)
-          {
-            drop_through_label = gen_label_rtx ();
-            do_jump (op0, NULL_RTX, drop_through_label, op0_prob);
-            do_jump (op1, if_false_label, NULL_RTX, op1_prob);
-          }
-        else
-          {
-            do_jump (op0, NULL_RTX, if_true_label, op0_prob);
-            do_jump (op1, if_false_label, if_true_label, op1_prob);
-          }
+	  }
+	if (if_true_label == NULL)
+	  {
+	    drop_through_label = gen_label_rtx ();
+	    do_jump (op0, NULL, drop_through_label, op0_prob);
+	    do_jump (op1, if_false_label, NULL, op1_prob);
+	  }
+	else
+	  {
+	    do_jump (op0, NULL, if_true_label, op0_prob);
+	    do_jump (op1, if_false_label, if_true_label, op1_prob);
+	  }
         break;
       }
 
@@ -443,14 +446,15 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
    PROB is probability of jump to if_true_label, or -1 if unknown.  */
 
 void
-do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
+do_jump (tree exp, rtx_code_label *if_false_label,
+	 rtx_code_label *if_true_label, int prob)
 {
   enum tree_code code = TREE_CODE (exp);
   rtx temp;
   int i;
   tree type;
   machine_mode mode;
-  rtx_code_label *drop_through_label = 0;
+  rtx_code_label *drop_through_label = NULL;
 
   switch (code)
     {
@@ -458,10 +462,13 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
       break;
 
     case INTEGER_CST:
-      temp = integer_zerop (exp) ? if_false_label : if_true_label;
-      if (temp)
-        emit_jump (temp);
-      break;
+      {
+	rtx_code_label *lab = integer_zerop (exp) ? if_false_label
+						  : if_true_label;
+	if (lab)
+	  emit_jump (lab);
+	break;
+      }
 
 #if 0
       /* This is not true with #pragma weak  */
@@ -511,7 +518,7 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
 	  }
 
         do_pending_stack_adjust ();
-	do_jump (TREE_OPERAND (exp, 0), label1, NULL_RTX, -1);
+	do_jump (TREE_OPERAND (exp, 0), label1, NULL, -1);
 	do_jump (TREE_OPERAND (exp, 1), if_false_label, if_true_label, prob);
         emit_label (label1);
 	do_jump (TREE_OPERAND (exp, 2), if_false_label, if_true_label, prob);
@@ -555,7 +562,7 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
       if (integer_onep (TREE_OPERAND (exp, 1)))
 	{
 	  tree exp0 = TREE_OPERAND (exp, 0);
-	  rtx set_label, clr_label;
+	  rtx_code_label *set_label, *clr_label;
 	  int setclr_prob = prob;
 
 	  /* Strip narrowing integral type conversions.  */
@@ -684,11 +691,12 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
 
 static void
 do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
-			      rtx op1, rtx if_false_label, rtx if_true_label,
+			      rtx op1, rtx_code_label *if_false_label,
+			      rtx_code_label *if_true_label,
 			      int prob)
 {
   int nwords = (GET_MODE_SIZE (mode) / UNITS_PER_WORD);
-  rtx drop_through_label = 0;
+  rtx_code_label *drop_through_label = 0;
   bool drop_through_if_true = false, drop_through_if_false = false;
   enum rtx_code code = GT;
   int i;
@@ -735,7 +743,7 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
 
       /* All but high-order word must be compared as unsigned.  */
       do_compare_rtx_and_jump (op0_word, op1_word, code, (unsignedp || i > 0),
-			       word_mode, NULL_RTX, NULL_RTX, if_true_label,
+			       word_mode, NULL_RTX, NULL, if_true_label,
 			       prob);
 
       /* Emit only one comparison for 0.  Do not emit the last cond jump.  */
@@ -744,7 +752,7 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
 
       /* Consider lower words only if these are equal.  */
       do_compare_rtx_and_jump (op0_word, op1_word, NE, unsignedp, word_mode,
-			       NULL_RTX, NULL_RTX, if_false_label, inv (prob));
+			       NULL_RTX, NULL, if_false_label, inv (prob));
     }
 
   if (!drop_through_if_false)
@@ -760,7 +768,8 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
 
 static void
 do_jump_by_parts_greater (tree treeop0, tree treeop1, int swap,
-			  rtx if_false_label, rtx if_true_label, int prob)
+			  rtx_code_label *if_false_label,
+			  rtx_code_label *if_true_label, int prob)
 {
   rtx op0 = expand_normal (swap ? treeop1 : treeop0);
   rtx op1 = expand_normal (swap ? treeop0 : treeop1);
@@ -773,17 +782,18 @@ do_jump_by_parts_greater (tree treeop0, tree treeop1, int swap,
 \f
 /* Jump according to whether OP0 is 0.  We assume that OP0 has an integer
    mode, MODE, that is too wide for the available compare insns.  Either
-   Either (but not both) of IF_TRUE_LABEL and IF_FALSE_LABEL may be NULL_RTX
+   (but not both) of IF_TRUE_LABEL and IF_FALSE_LABEL may be NULL
    to indicate drop through.  */
 
 static void
 do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
-			   rtx if_false_label, rtx if_true_label, int prob)
+			   rtx_code_label *if_false_label,
+			   rtx_code_label *if_true_label, int prob)
 {
   int nwords = GET_MODE_SIZE (mode) / UNITS_PER_WORD;
   rtx part;
   int i;
-  rtx drop_through_label = 0;
+  rtx_code_label *drop_through_label = NULL;
 
   /* The fastest way of doing this comparison on almost any machine is to
      "or" all the words and compare the result.  If all have to be loaded
@@ -806,12 +816,12 @@ do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
 
   /* If we couldn't do the "or" simply, do this with a series of compares.  */
   if (! if_false_label)
-    drop_through_label = if_false_label = gen_label_rtx ();
+    if_false_label = drop_through_label = gen_label_rtx ();
 
   for (i = 0; i < nwords; i++)
     do_compare_rtx_and_jump (operand_subword_force (op0, i, mode),
                              const0_rtx, EQ, 1, word_mode, NULL_RTX,
-			     if_false_label, NULL_RTX, prob);
+			     if_false_label, NULL, prob);
 
   if (if_true_label)
     emit_jump (if_true_label);
@@ -827,10 +837,11 @@ do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
 
 static void
 do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
-			       rtx if_false_label, rtx if_true_label, int prob)
+			       rtx_code_label *if_false_label,
+			       rtx_code_label *if_true_label, int prob)
 {
   int nwords = (GET_MODE_SIZE (mode) / UNITS_PER_WORD);
-  rtx drop_through_label = 0;
+  rtx_code_label *drop_through_label = NULL;
   int i;
 
   if (op1 == const0_rtx)
@@ -853,7 +864,7 @@ do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
     do_compare_rtx_and_jump (operand_subword_force (op0, i, mode),
                              operand_subword_force (op1, i, mode),
                              EQ, 0, word_mode, NULL_RTX,
-			     if_false_label, NULL_RTX, prob);
+			     if_false_label, NULL, prob);
 
   if (if_true_label)
     emit_jump (if_true_label);
@@ -865,8 +876,9 @@ do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
    with one insn, test the comparison and jump to the appropriate label.  */
 
 static void
-do_jump_by_parts_equality (tree treeop0, tree treeop1, rtx if_false_label,
-			   rtx if_true_label, int prob)
+do_jump_by_parts_equality (tree treeop0, tree treeop1,
+			   rtx_code_label *if_false_label,
+			   rtx_code_label *if_true_label, int prob)
 {
   rtx op0 = expand_normal (treeop0);
   rtx op1 = expand_normal (treeop1);
@@ -961,11 +973,12 @@ split_comparison (enum rtx_code code, machine_mode mode,
 
 void
 do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
-			 machine_mode mode, rtx size, rtx if_false_label,
-			 rtx if_true_label, int prob)
+			 machine_mode mode, rtx size,
+			 rtx_code_label *if_false_label,
+			 rtx_code_label *if_true_label, int prob)
 {
   rtx tem;
-  rtx dummy_label = NULL;
+  rtx_code_label *dummy_label = NULL;
 
   /* Reverse the comparison if that is safe and we want to jump if it is
      false.  Also convert to the reverse comparison if the target can
@@ -987,9 +1000,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
       if (can_compare_p (rcode, mode, ccp_jump)
 	  || (code == ORDERED && ! can_compare_p (ORDERED, mode, ccp_jump)))
 	{
-          tem = if_true_label;
-          if_true_label = if_false_label;
-          if_false_label = tem;
+	  std::swap (if_true_label, if_false_label);
 	  code = rcode;
 	  prob = inv (prob);
 	}
@@ -1000,9 +1011,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 
   if (swap_commutative_operands_p (op0, op1))
     {
-      tem = op0;
-      op0 = op1;
-      op1 = tem;
+      std::swap (op0, op1);
       code = swap_condition (code);
     }
 
@@ -1014,8 +1023,9 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
     {
       if (CONSTANT_P (tem))
 	{
-	  rtx label = (tem == const0_rtx || tem == CONST0_RTX (mode))
-		      ? if_false_label : if_true_label;
+	  rtx_code_label *label = (tem == const0_rtx
+				   || tem == CONST0_RTX (mode))
+					? if_false_label : if_true_label;
 	  if (label)
 	    emit_jump (label);
 	  return;
@@ -1134,7 +1144,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 		first_prob = REG_BR_PROB_BASE - REG_BR_PROB_BASE / 100;
 	      if (and_them)
 		{
-		  rtx dest_label;
+		  rtx_code_label *dest_label;
 		  /* If we only jump if true, just bypass the second jump.  */
 		  if (! if_false_label)
 		    {
@@ -1145,13 +1155,11 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 		  else
 		    dest_label = if_false_label;
                   do_compare_rtx_and_jump (op0, op1, first_code, unsignedp, mode,
-					   size, dest_label, NULL_RTX,
-					   first_prob);
+					   size, dest_label, NULL, first_prob);
 		}
               else
                 do_compare_rtx_and_jump (op0, op1, first_code, unsignedp, mode,
-					 size, NULL_RTX, if_true_label,
-					 first_prob);
+					 size, NULL, if_true_label, first_prob);
 	    }
 	}
 
@@ -1177,8 +1185,9 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 
 static void
 do_compare_and_jump (tree treeop0, tree treeop1, enum rtx_code signed_code,
-		     enum rtx_code unsigned_code, rtx if_false_label,
-		     rtx if_true_label, int prob)
+		     enum rtx_code unsigned_code,
+		     rtx_code_label *if_false_label,
+		     rtx_code_label *if_true_label, int prob)
 {
   rtx op0, op1;
   tree type;
diff --git a/gcc/dojump.h b/gcc/dojump.h
index 74d3f37..1b64ea7 100644
--- a/gcc/dojump.h
+++ b/gcc/dojump.h
@@ -57,20 +57,23 @@ extern void save_pending_stack_adjust (saved_pending_stack_adjust *);
 extern void restore_pending_stack_adjust (saved_pending_stack_adjust *);
 
 /* Generate code to evaluate EXP and jump to LABEL if the value is zero.  */
-extern void jumpifnot (tree, rtx, int);
-extern void jumpifnot_1 (enum tree_code, tree, tree, rtx, int);
+extern void jumpifnot (tree exp, rtx_code_label *label, int prob);
+extern void jumpifnot_1 (enum tree_code, tree, tree, rtx_code_label *, int);
 
 /* Generate code to evaluate EXP and jump to LABEL if the value is nonzero.  */
-extern void jumpif (tree, rtx, int);
-extern void jumpif_1 (enum tree_code, tree, tree, rtx, int);
+extern void jumpif (tree exp, rtx_code_label *label, int prob);
+extern void jumpif_1 (enum tree_code, tree, tree, rtx_code_label *, int);
 
 /* Generate code to evaluate EXP and jump to IF_FALSE_LABEL if
    the result is zero, or IF_TRUE_LABEL if the result is one.  */
-extern void do_jump (tree, rtx, rtx, int);
-extern void do_jump_1 (enum tree_code, tree, tree, rtx, rtx, int);
+extern void do_jump (tree exp, rtx_code_label *if_false_label,
+		     rtx_code_label *if_true_label, int prob);
+extern void do_jump_1 (enum tree_code, tree, tree, rtx_code_label *,
+		       rtx_code_label *, int);
 
 extern void do_compare_rtx_and_jump (rtx, rtx, enum rtx_code, int,
-				     machine_mode, rtx, rtx, rtx, int);
+				     machine_mode, rtx, rtx_code_label *,
+				     rtx_code_label *, int);
 
 extern bool split_comparison (enum rtx_code, machine_mode,
 			      enum rtx_code *, enum rtx_code *);
diff --git a/gcc/dse.c b/gcc/dse.c
index 603cdbd..3b3662b 100644
--- a/gcc/dse.c
+++ b/gcc/dse.c
@@ -907,7 +907,7 @@ emit_inc_dec_insn_before (rtx mem ATTRIBUTE_UNUSED,
       end_sequence ();
     }
   else
-    new_insn = as_a <rtx_insn *> (gen_move_insn (dest, src));
+    new_insn = gen_move_insn (dest, src);
   info.first = new_insn;
   info.fixed_regs_live = insn_info->fixed_regs_live;
   info.failure = false;
diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index b48f88b..79173ba 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -4441,13 +4441,15 @@ emit_barrier_before (rtx before)
 
 /* Emit the label LABEL before the insn BEFORE.  */
 
-rtx_insn *
-emit_label_before (rtx label, rtx_insn *before)
+rtx_code_label *
+emit_label_before (rtx uncast_label, rtx_insn *before)
 {
+  rtx_code_label *label = as_a <rtx_code_label *> (uncast_label);
+
   gcc_checking_assert (INSN_UID (label) == 0);
   INSN_UID (label) = cur_insn_uid++;
   add_insn_before (label, before, NULL);
-  return as_a <rtx_insn *> (label);
+  return label;
 }
 \f
 /* Helper for emit_insn_after, handles lists of instructions
@@ -5068,13 +5070,15 @@ emit_call_insn (rtx x)
 
 /* Add the label LABEL to the end of the doubly-linked list.  */
 
-rtx_insn *
-emit_label (rtx label)
+rtx_code_label *
+emit_label (rtx uncast_label)
 {
+  rtx_code_label *label = as_a <rtx_code_label *> (uncast_label);
+
   gcc_checking_assert (INSN_UID (label) == 0);
   INSN_UID (label) = cur_insn_uid++;
-  add_insn (as_a <rtx_insn *> (label));
-  return as_a <rtx_insn *> (label);
+  add_insn (label);
+  return label;
 }
 
 /* Make an insn of code JUMP_TABLE_DATA
@@ -5335,7 +5339,7 @@ emit (rtx x)
   switch (code)
     {
     case CODE_LABEL:
-      return emit_label (x);
+      return emit_label (as_a <rtx_code_label *> (x));
     case INSN:
       return emit_insn (x);
     case  JUMP_INSN:
diff --git a/gcc/except.c b/gcc/except.c
index d609592..c2b8214 100644
--- a/gcc/except.c
+++ b/gcc/except.c
@@ -1349,7 +1349,7 @@ sjlj_emit_dispatch_table (rtx_code_label *dispatch_label, int num_dispatch)
     if (lp && lp->post_landing_pad)
       {
 	rtx_insn *seq2;
-	rtx label;
+	rtx_code_label *label;
 
 	start_sequence ();
 
@@ -1363,7 +1363,7 @@ sjlj_emit_dispatch_table (rtx_code_label *dispatch_label, int num_dispatch)
 	    t = build_int_cst (integer_type_node, disp_index);
 	    case_elt = build_case_label (t, NULL, t_label);
 	    dispatch_labels.quick_push (case_elt);
-	    label = label_rtx (t_label);
+	    label = jump_target_rtx (t_label);
 	  }
 	else
 	  label = gen_label_rtx ();
diff --git a/gcc/explow.c b/gcc/explow.c
index de446a9..57cb767 100644
--- a/gcc/explow.c
+++ b/gcc/explow.c
@@ -984,7 +984,7 @@ emit_stack_save (enum save_level save_level, rtx *psave)
 {
   rtx sa = *psave;
   /* The default is that we use a move insn and save in a Pmode object.  */
-  rtx (*fcn) (rtx, rtx) = gen_move_insn;
+  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
   machine_mode mode = STACK_SAVEAREA_MODE (save_level);
 
   /* See if this machine has anything special to do for this kind of save.  */
@@ -1039,7 +1039,7 @@ void
 emit_stack_restore (enum save_level save_level, rtx sa)
 {
   /* The default is that we use a move insn.  */
-  rtx (*fcn) (rtx, rtx) = gen_move_insn;
+  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
 
   /* If stack_realign_drap, the x86 backend emits a prologue that aligns both
      STACK_POINTER and HARD_FRAME_POINTER.
diff --git a/gcc/expmed.c b/gcc/expmed.c
index 6679f50..f180688 100644
--- a/gcc/expmed.c
+++ b/gcc/expmed.c
@@ -5807,8 +5807,8 @@ emit_store_flag_force (rtx target, enum rtx_code code, rtx op0, rtx op1,
       && op1 == const0_rtx)
     {
       label = gen_label_rtx ();
-      do_compare_rtx_and_jump (target, const0_rtx, EQ, unsignedp,
-			       mode, NULL_RTX, NULL_RTX, label, -1);
+      do_compare_rtx_and_jump (target, const0_rtx, EQ, unsignedp, mode,
+			       NULL_RTX, NULL, label, -1);
       emit_move_insn (target, trueval);
       emit_label (label);
       return target;
@@ -5845,8 +5845,8 @@ emit_store_flag_force (rtx target, enum rtx_code code, rtx op0, rtx op1,
 
   emit_move_insn (target, trueval);
   label = gen_label_rtx ();
-  do_compare_rtx_and_jump (op0, op1, code, unsignedp, mode, NULL_RTX,
-			   NULL_RTX, label, -1);
+  do_compare_rtx_and_jump (op0, op1, code, unsignedp, mode, NULL_RTX, NULL,
+			   label, -1);
 
   emit_move_insn (target, falseval);
   emit_label (label);
@@ -5863,6 +5863,6 @@ do_cmp_and_jump (rtx arg1, rtx arg2, enum rtx_code op, machine_mode mode,
 		 rtx_code_label *label)
 {
   int unsignedp = (op == LTU || op == LEU || op == GTU || op == GEU);
-  do_compare_rtx_and_jump (arg1, arg2, op, unsignedp, mode,
-			   NULL_RTX, NULL_RTX, label, -1);
+  do_compare_rtx_and_jump (arg1, arg2, op, unsignedp, mode, NULL_RTX,
+			   NULL, label, -1);
 }
diff --git a/gcc/expr.c b/gcc/expr.c
index 530a944..85efaa3 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -3652,7 +3652,7 @@ emit_move_insn (rtx x, rtx y)
 /* Generate the body of an instruction to copy Y into X.
    It may be a list of insns, if one insn isn't enough.  */
 
-rtx
+rtx_insn *
 gen_move_insn (rtx x, rtx y)
 {
   rtx_insn *seq;
@@ -8128,6 +8128,7 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
 		    enum expand_modifier modifier)
 {
   rtx op0, op1, op2, temp;
+  rtx_code_label *lab;
   tree type;
   int unsignedp;
   machine_mode mode;
@@ -8870,11 +8871,7 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
 
       /* If op1 was placed in target, swap op0 and op1.  */
       if (target != op0 && target == op1)
-	{
-	  temp = op0;
-	  op0 = op1;
-	  op1 = temp;
-	}
+	std::swap (op0, op1);
 
       /* We generate better code and avoid problems with op1 mentioning
 	 target by forcing op1 into a pseudo if it isn't a constant.  */
@@ -8941,13 +8938,13 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
 	if (target != op0)
 	  emit_move_insn (target, op0);
 
-	temp = gen_label_rtx ();
+	lab = gen_label_rtx ();
 	do_compare_rtx_and_jump (target, cmpop1, comparison_code,
-				 unsignedp, mode, NULL_RTX, NULL_RTX, temp,
+				 unsignedp, mode, NULL_RTX, NULL, lab,
 				 -1);
       }
       emit_move_insn (target, op1);
-      emit_label (temp);
+      emit_label (lab);
       return target;
 
     case BIT_NOT_EXPR:
@@ -9025,38 +9022,39 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
     case UNGE_EXPR:
     case UNEQ_EXPR:
     case LTGT_EXPR:
-      temp = do_store_flag (ops,
-			    modifier != EXPAND_STACK_PARM ? target : NULL_RTX,
-			    tmode != VOIDmode ? tmode : mode);
-      if (temp)
-	return temp;
-
-      /* Use a compare and a jump for BLKmode comparisons, or for function
-	 type comparisons is HAVE_canonicalize_funcptr_for_compare.  */
-
-      if ((target == 0
-	   || modifier == EXPAND_STACK_PARM
-	   || ! safe_from_p (target, treeop0, 1)
-	   || ! safe_from_p (target, treeop1, 1)
-	   /* Make sure we don't have a hard reg (such as function's return
-	      value) live across basic blocks, if not optimizing.  */
-	   || (!optimize && REG_P (target)
-	       && REGNO (target) < FIRST_PSEUDO_REGISTER)))
-	target = gen_reg_rtx (tmode != VOIDmode ? tmode : mode);
+      {
+	temp = do_store_flag (ops,
+			      modifier != EXPAND_STACK_PARM ? target : NULL_RTX,
+			      tmode != VOIDmode ? tmode : mode);
+	if (temp)
+	  return temp;
 
-      emit_move_insn (target, const0_rtx);
+	/* Use a compare and a jump for BLKmode comparisons, or for function
+	   type comparisons if HAVE_canonicalize_funcptr_for_compare.  */
+
+	if ((target == 0
+	     || modifier == EXPAND_STACK_PARM
+	     || ! safe_from_p (target, treeop0, 1)
+	     || ! safe_from_p (target, treeop1, 1)
+	     /* Make sure we don't have a hard reg (such as function's return
+		value) live across basic blocks, if not optimizing.  */
+	     || (!optimize && REG_P (target)
+		 && REGNO (target) < FIRST_PSEUDO_REGISTER)))
+	  target = gen_reg_rtx (tmode != VOIDmode ? tmode : mode);
 
-      op1 = gen_label_rtx ();
-      jumpifnot_1 (code, treeop0, treeop1, op1, -1);
+	emit_move_insn (target, const0_rtx);
 
-      if (TYPE_PRECISION (type) == 1 && !TYPE_UNSIGNED (type))
-	emit_move_insn (target, constm1_rtx);
-      else
-	emit_move_insn (target, const1_rtx);
+	rtx_code_label *lab1 = gen_label_rtx ();
+	jumpifnot_1 (code, treeop0, treeop1, lab1, -1);
 
-      emit_label (op1);
-      return target;
+	if (TYPE_PRECISION (type) == 1 && !TYPE_UNSIGNED (type))
+	  emit_move_insn (target, constm1_rtx);
+	else
+	  emit_move_insn (target, const1_rtx);
 
+	emit_label (lab1);
+	return target;
+      }
+
     case COMPLEX_EXPR:
       /* Get the rtx code of the operands.  */
       op0 = expand_normal (treeop0);
@@ -9279,58 +9277,60 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
       }
 
     case COND_EXPR:
-      /* A COND_EXPR with its type being VOID_TYPE represents a
-	 conditional jump and is handled in
-	 expand_gimple_cond_expr.  */
-      gcc_assert (!VOID_TYPE_P (type));
-
-      /* Note that COND_EXPRs whose type is a structure or union
-	 are required to be constructed to contain assignments of
-	 a temporary variable, so that we can evaluate them here
-	 for side effect only.  If type is void, we must do likewise.  */
-
-      gcc_assert (!TREE_ADDRESSABLE (type)
-		  && !ignore
-		  && TREE_TYPE (treeop1) != void_type_node
-		  && TREE_TYPE (treeop2) != void_type_node);
-
-      temp = expand_cond_expr_using_cmove (treeop0, treeop1, treeop2);
-      if (temp)
-	return temp;
-
-      /* If we are not to produce a result, we have no target.  Otherwise,
-	 if a target was specified use it; it will not be used as an
-	 intermediate target unless it is safe.  If no target, use a
-	 temporary.  */
-
-      if (modifier != EXPAND_STACK_PARM
-	  && original_target
-	  && safe_from_p (original_target, treeop0, 1)
-	  && GET_MODE (original_target) == mode
-	  && !MEM_P (original_target))
-	temp = original_target;
-      else
-	temp = assign_temp (type, 0, 1);
-
-      do_pending_stack_adjust ();
-      NO_DEFER_POP;
-      op0 = gen_label_rtx ();
-      op1 = gen_label_rtx ();
-      jumpifnot (treeop0, op0, -1);
-      store_expr (treeop1, temp,
-		  modifier == EXPAND_STACK_PARM,
-		  false);
-
-      emit_jump_insn (gen_jump (op1));
-      emit_barrier ();
-      emit_label (op0);
-      store_expr (treeop2, temp,
-		  modifier == EXPAND_STACK_PARM,
-		  false);
+      {
+	/* A COND_EXPR with its type being VOID_TYPE represents a
+	   conditional jump and is handled in
+	   expand_gimple_cond_expr.  */
+	gcc_assert (!VOID_TYPE_P (type));
+
+	/* Note that COND_EXPRs whose type is a structure or union
+	   are required to be constructed to contain assignments of
+	   a temporary variable, so that we can evaluate them here
+	   for side effect only.  If type is void, we must do likewise.  */
+
+	gcc_assert (!TREE_ADDRESSABLE (type)
+		    && !ignore
+		    && TREE_TYPE (treeop1) != void_type_node
+		    && TREE_TYPE (treeop2) != void_type_node);
+
+	temp = expand_cond_expr_using_cmove (treeop0, treeop1, treeop2);
+	if (temp)
+	  return temp;
 
-      emit_label (op1);
-      OK_DEFER_POP;
-      return temp;
+	/* If we are not to produce a result, we have no target.  Otherwise,
+	   if a target was specified use it; it will not be used as an
+	   intermediate target unless it is safe.  If no target, use a
+	   temporary.  */
+
+	if (modifier != EXPAND_STACK_PARM
+	    && original_target
+	    && safe_from_p (original_target, treeop0, 1)
+	    && GET_MODE (original_target) == mode
+	    && !MEM_P (original_target))
+	  temp = original_target;
+	else
+	  temp = assign_temp (type, 0, 1);
+
+	do_pending_stack_adjust ();
+	NO_DEFER_POP;
+	rtx_code_label *lab0 = gen_label_rtx ();
+	rtx_code_label *lab1 = gen_label_rtx ();
+	jumpifnot (treeop0, lab0, -1);
+	store_expr (treeop1, temp,
+		    modifier == EXPAND_STACK_PARM,
+		    false);
+
+	emit_jump_insn (gen_jump (lab1));
+	emit_barrier ();
+	emit_label (lab0);
+	store_expr (treeop2, temp,
+		    modifier == EXPAND_STACK_PARM,
+		    false);
+
+	emit_label (lab1);
+	OK_DEFER_POP;
+	return temp;
+      }
 
     case VEC_COND_EXPR:
       target = expand_vec_cond_expr (type, treeop0, treeop1, treeop2, target);
diff --git a/gcc/expr.h b/gcc/expr.h
index 867852e..6c4afc4 100644
--- a/gcc/expr.h
+++ b/gcc/expr.h
@@ -203,7 +203,7 @@ extern rtx store_by_pieces (rtx, unsigned HOST_WIDE_INT,
 
 /* Emit insns to set X from Y.  */
 extern rtx_insn *emit_move_insn (rtx, rtx);
-extern rtx gen_move_insn (rtx, rtx);
+extern rtx_insn *gen_move_insn (rtx, rtx);
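+/* gen_move_insn generates the move sequence without emitting it; the
+   sharper return type lets callers drop their as_a <rtx_insn *> casts.  */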
 
 /* Emit insns to set X from Y, with no frills.  */
 extern rtx_insn *emit_move_insn_1 (rtx, rtx);
diff --git a/gcc/function.c b/gcc/function.c
index 9077c91..3884170 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -5786,7 +5786,7 @@ convert_jumps_to_returns (basic_block last_bb, bool simple_p,
 	    dest = simple_return_rtx;
 	  else
 	    dest = ret_rtx;
-	  if (!redirect_jump (jump, dest, 0))
+	  if (!redirect_jump (as_a <rtx_jump_insn *> (jump), dest, 0))
 	    {
 #ifdef HAVE_simple_return
 	      if (simple_p)
diff --git a/gcc/gcse.c b/gcc/gcse.c
index e4303fe..5fa7759d 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -2229,7 +2229,8 @@ pre_insert_copy_insn (struct gcse_expr *expr, rtx_insn *insn)
   int regno = REGNO (reg);
   int indx = expr->bitmap_index;
   rtx pat = PATTERN (insn);
-  rtx set, first_set, new_insn;
+  rtx set, first_set;
+  rtx_insn *new_insn;
   rtx old_reg;
   int i;
 
diff --git a/gcc/ifcvt.c b/gcc/ifcvt.c
index a3e3e5c..bf79122 100644
--- a/gcc/ifcvt.c
+++ b/gcc/ifcvt.c
@@ -4444,9 +4444,10 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
       else
 	new_dest_label = block_label (new_dest);
 
+      rtx_jump_insn *jump_insn = as_a <rtx_jump_insn *> (jump);
       if (reversep
-	  ? ! invert_jump_1 (jump, new_dest_label)
-	  : ! redirect_jump_1 (jump, new_dest_label))
+	  ? ! invert_jump_1 (jump_insn, new_dest_label)
+	  : ! redirect_jump_1 (jump_insn, new_dest_label))
 	goto cancel;
     }
 
@@ -4457,7 +4458,8 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
 
   if (other_bb != new_dest)
     {
-      redirect_jump_2 (jump, old_dest, new_dest_label, 0, reversep);
+      redirect_jump_2 (as_a <rtx_jump_insn *> (jump), old_dest, new_dest_label,
+		       0, reversep);
 
       redirect_edge_succ (BRANCH_EDGE (test_bb), new_dest);
       if (reversep)
diff --git a/gcc/internal-fn.c b/gcc/internal-fn.c
index 0053ed9..46ee812 100644
--- a/gcc/internal-fn.c
+++ b/gcc/internal-fn.c
@@ -422,7 +422,7 @@ expand_arith_overflow_result_store (tree lhs, rtx target,
       lres = convert_modes (tgtmode, mode, res, uns);
       gcc_assert (GET_MODE_PRECISION (tgtmode) < GET_MODE_PRECISION (mode));
       do_compare_rtx_and_jump (res, convert_modes (mode, tgtmode, lres, uns),
-			       EQ, true, mode, NULL_RTX, NULL_RTX, done_label,
+			       EQ, true, mode, NULL_RTX, NULL, done_label,
 			       PROB_VERY_LIKELY);
       write_complex_part (target, const1_rtx, true);
       emit_label (done_label);
@@ -569,7 +569,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 	      : CONST_SCALAR_INT_P (op1)))
 	tem = op1;
       do_compare_rtx_and_jump (res, tem, code == PLUS_EXPR ? GEU : LEU,
-			       true, mode, NULL_RTX, NULL_RTX, done_label,
+			       true, mode, NULL_RTX, NULL, done_label,
 			       PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -584,7 +584,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       rtx tem = expand_binop (mode, add_optab,
 			      code == PLUS_EXPR ? res : op0, sgn,
 			      NULL_RTX, false, OPTAB_LIB_WIDEN);
-      do_compare_rtx_and_jump (tem, op1, GEU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (tem, op1, GEU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -627,8 +627,8 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       else if (pos_neg == 3)
 	/* If ARG0 is not known to be always positive, check at runtime.  */
 	do_compare_rtx_and_jump (op0, const0_rtx, LT, false, mode, NULL_RTX,
-				 NULL_RTX, do_error, PROB_VERY_UNLIKELY);
-      do_compare_rtx_and_jump (op1, op0, LEU, true, mode, NULL_RTX, NULL_RTX,
+				 NULL, do_error, PROB_VERY_UNLIKELY);
+      do_compare_rtx_and_jump (op1, op0, LEU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -642,7 +642,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 			  OPTAB_LIB_WIDEN);
       rtx tem = expand_binop (mode, add_optab, op1, sgn, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
-      do_compare_rtx_and_jump (op0, tem, LTU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (op0, tem, LTU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -655,7 +655,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       res = expand_binop (mode, add_optab, op0, op1, NULL_RTX, false,
 			  OPTAB_LIB_WIDEN);
       do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode, NULL_RTX,
-			       NULL_RTX, do_error, PROB_VERY_UNLIKELY);
+			       NULL, do_error, PROB_VERY_UNLIKELY);
       rtx tem = op1;
       /* The operation is commutative, so we can pick operand to compare
 	 against.  For prec <= BITS_PER_WORD, I think preferring REG operand
@@ -668,7 +668,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 	  ? (CONST_SCALAR_INT_P (op1) && REG_P (op0))
 	  : CONST_SCALAR_INT_P (op0))
 	tem = op0;
-      do_compare_rtx_and_jump (res, tem, GEU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (res, tem, GEU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -698,26 +698,26 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 	  tem = expand_binop (mode, ((pos_neg == 1) ^ (code == MINUS_EXPR))
 				    ? and_optab : ior_optab,
 			      op0, res, NULL_RTX, false, OPTAB_LIB_WIDEN);
-	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
+				   NULL, done_label, PROB_VERY_LIKELY);
 	}
       else
 	{
 	  rtx_code_label *do_ior_label = gen_label_rtx ();
 	  do_compare_rtx_and_jump (op1, const0_rtx,
 				   code == MINUS_EXPR ? GE : LT, false, mode,
-				   NULL_RTX, NULL_RTX, do_ior_label,
+				   NULL_RTX, NULL, do_ior_label,
 				   PROB_EVEN);
 	  tem = expand_binop (mode, and_optab, op0, res, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  emit_jump (do_error);
 	  emit_label (do_ior_label);
 	  tem = expand_binop (mode, ior_optab, op0, res, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	}
       goto do_error_label;
     }
@@ -730,14 +730,14 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       res = expand_binop (mode, sub_optab, op0, op1, NULL_RTX, false,
 			  OPTAB_LIB_WIDEN);
       rtx_code_label *op0_geu_op1 = gen_label_rtx ();
-      do_compare_rtx_and_jump (op0, op1, GEU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (op0, op1, GEU, true, mode, NULL_RTX, NULL,
 			       op0_geu_op1, PROB_EVEN);
       do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode, NULL_RTX,
-			       NULL_RTX, done_label, PROB_VERY_LIKELY);
+			       NULL, done_label, PROB_VERY_LIKELY);
       emit_jump (do_error);
       emit_label (op0_geu_op1);
       do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode, NULL_RTX,
-			       NULL_RTX, done_label, PROB_VERY_LIKELY);
+			       NULL, done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
 
@@ -816,12 +816,12 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       /* If the op1 is negative, we have to use a different check.  */
       if (pos_neg == 3)
 	do_compare_rtx_and_jump (op1, const0_rtx, LT, false, mode, NULL_RTX,
-				 NULL_RTX, sub_check, PROB_EVEN);
+				 NULL, sub_check, PROB_EVEN);
 
       /* Compare the result of the operation with one of the operands.  */
       if (pos_neg & 1)
 	do_compare_rtx_and_jump (res, op0, code == PLUS_EXPR ? GE : LE,
-				 false, mode, NULL_RTX, NULL_RTX, done_label,
+				 false, mode, NULL_RTX, NULL, done_label,
 				 PROB_VERY_LIKELY);
 
       /* If we get here, we have to print the error.  */
@@ -835,7 +835,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       /* We have k = a + b for b < 0 here.  k <= a must hold.  */
       if (pos_neg & 2)
 	do_compare_rtx_and_jump (res, op0, code == PLUS_EXPR ? LE : GE,
-				 false, mode, NULL_RTX, NULL_RTX, done_label,
+				 false, mode, NULL_RTX, NULL, done_label,
 				 PROB_VERY_LIKELY);
     }
 
@@ -931,7 +931,7 @@ expand_neg_overflow (location_t loc, tree lhs, tree arg1, bool is_ubsan)
 
       /* Compare the operand with the most negative value.  */
       rtx minv = expand_normal (TYPE_MIN_VALUE (TREE_TYPE (arg1)));
-      do_compare_rtx_and_jump (op1, minv, NE, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (op1, minv, NE, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
     }
 
@@ -1068,15 +1068,15 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  ops.location = loc;
 	  res = expand_expr_real_2 (&ops, NULL_RTX, mode, EXPAND_NORMAL);
 	  do_compare_rtx_and_jump (op1, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  goto do_error_label;
 	case 3:
 	  rtx_code_label *do_main_label;
 	  do_main_label = gen_label_rtx ();
 	  do_compare_rtx_and_jump (op0, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  do_compare_rtx_and_jump (op1, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  write_complex_part (target, const1_rtx, true);
 	  emit_label (do_main_label);
 	  goto do_main;
@@ -1113,15 +1113,15 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  ops.location = loc;
 	  res = expand_expr_real_2 (&ops, NULL_RTX, mode, EXPAND_NORMAL);
 	  do_compare_rtx_and_jump (op0, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  do_compare_rtx_and_jump (op0, constm1_rtx, NE, true, mode, NULL_RTX,
-				   NULL_RTX, do_error, PROB_VERY_UNLIKELY);
+				   NULL, do_error, PROB_VERY_UNLIKELY);
 	  int prec;
 	  prec = GET_MODE_PRECISION (mode);
 	  rtx sgn;
 	  sgn = immed_wide_int_const (wi::min_value (prec, SIGNED), mode);
 	  do_compare_rtx_and_jump (op1, sgn, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  goto do_error_label;
 	case 3:
 	  /* Rest of handling of this case after res is computed.  */
@@ -1167,7 +1167,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	      tem = expand_binop (mode, and_optab, op0, op1, NULL_RTX, false,
 				  OPTAB_LIB_WIDEN);
 	      do_compare_rtx_and_jump (tem, const0_rtx, EQ, true, mode,
-				       NULL_RTX, NULL_RTX, done_label,
+				       NULL_RTX, NULL, done_label,
 				       PROB_VERY_LIKELY);
 	      goto do_error_label;
 	    }
@@ -1185,8 +1185,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  tem = expand_binop (mode, and_optab, op0, op1, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, after_negate_label,
-				   PROB_VERY_LIKELY);
+				   NULL, after_negate_label, PROB_VERY_LIKELY);
 	  /* Both arguments negative here, negate them and continue with
 	     normal unsigned overflow checking multiplication.  */
 	  emit_move_insn (op0, expand_unop (mode, neg_optab, op0,
@@ -1202,13 +1201,13 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  tem2 = expand_binop (mode, xor_optab, op0, op1, NULL_RTX, false,
 			       OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem2, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  /* One argument is negative here, the other positive.  This
 	     overflows always, unless one of the arguments is 0.  But
 	     if e.g. s2 is 0, (U) s1 * 0 doesn't overflow, whatever s1
 	     is, thus we can keep do_main code oring in overflow as is.  */
 	  do_compare_rtx_and_jump (tem, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  write_complex_part (target, const1_rtx, true);
 	  emit_label (do_main_label);
 	  goto do_main;
@@ -1274,7 +1273,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	    /* For the unsigned multiplication, there was overflow if
 	       HIPART is non-zero.  */
 	    do_compare_rtx_and_jump (hipart, const0_rtx, EQ, true, mode,
-				     NULL_RTX, NULL_RTX, done_label,
+				     NULL_RTX, NULL, done_label,
 				     PROB_VERY_LIKELY);
 	  else
 	    {
@@ -1284,7 +1283,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		 the high half.  There was overflow if
 		 HIPART is different from RES < 0 ? -1 : 0.  */
 	      do_compare_rtx_and_jump (signbit, hipart, EQ, true, mode,
-				       NULL_RTX, NULL_RTX, done_label,
+				       NULL_RTX, NULL, done_label,
 				       PROB_VERY_LIKELY);
 	    }
 	}
@@ -1377,12 +1376,12 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 
 	  if (!op0_small_p)
 	    do_compare_rtx_and_jump (signbit0, hipart0, NE, true, hmode,
-				     NULL_RTX, NULL_RTX, large_op0,
+				     NULL_RTX, NULL, large_op0,
 				     PROB_UNLIKELY);
 
 	  if (!op1_small_p)
 	    do_compare_rtx_and_jump (signbit1, hipart1, NE, true, hmode,
-				     NULL_RTX, NULL_RTX, small_op0_large_op1,
+				     NULL_RTX, NULL, small_op0_large_op1,
 				     PROB_UNLIKELY);
 
 	  /* If both op0 and op1 are sign (!uns) or zero (uns) extended from
@@ -1428,7 +1427,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 
 	  if (!op1_small_p)
 	    do_compare_rtx_and_jump (signbit1, hipart1, NE, true, hmode,
-				     NULL_RTX, NULL_RTX, both_ops_large,
+				     NULL_RTX, NULL, both_ops_large,
 				     PROB_UNLIKELY);
 
 	  /* If op1 is sign (!uns) or zero (uns) extended from hmode to mode,
@@ -1465,7 +1464,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		emit_jump (after_hipart_neg);
 	      else if (larger_sign != -1)
 		do_compare_rtx_and_jump (hipart, const0_rtx, GE, false, hmode,
-					 NULL_RTX, NULL_RTX, after_hipart_neg,
+					 NULL_RTX, NULL, after_hipart_neg,
 					 PROB_EVEN);
 
 	      tem = convert_modes (mode, hmode, lopart, 1);
@@ -1481,7 +1480,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		emit_jump (after_lopart_neg);
 	      else if (smaller_sign != -1)
 		do_compare_rtx_and_jump (lopart, const0_rtx, GE, false, hmode,
-					 NULL_RTX, NULL_RTX, after_lopart_neg,
+					 NULL_RTX, NULL, after_lopart_neg,
 					 PROB_EVEN);
 
 	      tem = expand_simple_binop (mode, MINUS, loxhi, larger, NULL_RTX,
@@ -1510,7 +1509,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 					 hprec - 1, NULL_RTX, 0);
 
 	  do_compare_rtx_and_jump (signbitloxhi, hipartloxhi, NE, true, hmode,
-				   NULL_RTX, NULL_RTX, do_overflow,
+				   NULL_RTX, NULL, do_overflow,
 				   PROB_VERY_UNLIKELY);
 
 	  /* res = (loxhi << (bitsize / 2)) | (hmode) lo0xlo1;  */
@@ -1546,7 +1545,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		  tem = expand_simple_binop (hmode, PLUS, hipart0, const1_rtx,
 					     NULL_RTX, 1, OPTAB_DIRECT);
 		  do_compare_rtx_and_jump (tem, const1_rtx, GTU, true, hmode,
-					   NULL_RTX, NULL_RTX, do_error,
+					   NULL_RTX, NULL, do_error,
 					   PROB_VERY_UNLIKELY);
 		}
 
@@ -1555,7 +1554,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		  tem = expand_simple_binop (hmode, PLUS, hipart1, const1_rtx,
 					     NULL_RTX, 1, OPTAB_DIRECT);
 		  do_compare_rtx_and_jump (tem, const1_rtx, GTU, true, hmode,
-					   NULL_RTX, NULL_RTX, do_error,
+					   NULL_RTX, NULL, do_error,
 					   PROB_VERY_UNLIKELY);
 		}
 
@@ -1566,18 +1565,18 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		emit_jump (hipart_different);
 	      else if (op0_sign == 1 || op1_sign == 1)
 		do_compare_rtx_and_jump (hipart0, hipart1, NE, true, hmode,
-					 NULL_RTX, NULL_RTX, hipart_different,
+					 NULL_RTX, NULL, hipart_different,
 					 PROB_EVEN);
 
 	      do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode,
-				       NULL_RTX, NULL_RTX, do_error,
+				       NULL_RTX, NULL, do_error,
 				       PROB_VERY_UNLIKELY);
 	      emit_jump (done_label);
 
 	      emit_label (hipart_different);
 
 	      do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode,
-				       NULL_RTX, NULL_RTX, do_error,
+				       NULL_RTX, NULL, do_error,
 				       PROB_VERY_UNLIKELY);
 	      emit_jump (done_label);
 	    }
@@ -1623,7 +1622,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
     {
       rtx_code_label *all_done_label = gen_label_rtx ();
       do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode, NULL_RTX,
-			       NULL_RTX, all_done_label, PROB_VERY_LIKELY);
+			       NULL, all_done_label, PROB_VERY_LIKELY);
       write_complex_part (target, const1_rtx, true);
       emit_label (all_done_label);
     }
@@ -1634,13 +1633,13 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
       rtx_code_label *all_done_label = gen_label_rtx ();
       rtx_code_label *set_noovf = gen_label_rtx ();
       do_compare_rtx_and_jump (op1, const0_rtx, GE, false, mode, NULL_RTX,
-			       NULL_RTX, all_done_label, PROB_VERY_LIKELY);
+			       NULL, all_done_label, PROB_VERY_LIKELY);
       write_complex_part (target, const1_rtx, true);
       do_compare_rtx_and_jump (op0, const0_rtx, EQ, true, mode, NULL_RTX,
-			       NULL_RTX, set_noovf, PROB_VERY_LIKELY);
+			       NULL, set_noovf, PROB_VERY_LIKELY);
       do_compare_rtx_and_jump (op0, constm1_rtx, NE, true, mode, NULL_RTX,
-			       NULL_RTX, all_done_label, PROB_VERY_UNLIKELY);
-      do_compare_rtx_and_jump (op1, res, NE, true, mode, NULL_RTX, NULL_RTX,
+			       NULL, all_done_label, PROB_VERY_UNLIKELY);
+      do_compare_rtx_and_jump (op1, res, NE, true, mode, NULL_RTX, NULL,
 			       all_done_label, PROB_VERY_UNLIKELY);
       emit_label (set_noovf);
       write_complex_part (target, const0_rtx, true);
diff --git a/gcc/ira.c b/gcc/ira.c
index 25baa90..cd5ccb7 100644
--- a/gcc/ira.c
+++ b/gcc/ira.c
@@ -4991,7 +4991,7 @@ split_live_ranges_for_shrink_wrap (void)
 
       if (newreg)
 	{
-	  rtx new_move = gen_move_insn (newreg, dest);
+	  rtx_insn *new_move = gen_move_insn (newreg, dest);
 	  emit_insn_after (new_move, bb_note (call_dom));
 	  if (dump_file)
 	    {
diff --git a/gcc/jump.c b/gcc/jump.c
index bc91550..b10512c 100644
--- a/gcc/jump.c
+++ b/gcc/jump.c
@@ -1580,9 +1580,9 @@ redirect_jump_1 (rtx jump, rtx nlabel)
    (this can only occur when trying to produce return insns).  */
 
 int
-redirect_jump (rtx jump, rtx nlabel, int delete_unused)
+redirect_jump (rtx_jump_insn *jump, rtx nlabel, int delete_unused)
 {
-  rtx olabel = JUMP_LABEL (jump);
+  rtx olabel = jump->jump_label ();
 
   if (!nlabel)
     {
@@ -1612,7 +1612,7 @@ redirect_jump (rtx jump, rtx nlabel, int delete_unused)
    If DELETE_UNUSED is positive, delete related insn to OLABEL if its ref
    count has dropped to zero.  */
 void
-redirect_jump_2 (rtx jump, rtx olabel, rtx nlabel, int delete_unused,
+redirect_jump_2 (rtx_jump_insn *jump, rtx olabel, rtx nlabel, int delete_unused,
 		 int invert)
 {
   rtx note;
@@ -1700,7 +1700,7 @@ invert_exp_1 (rtx x, rtx insn)
    inversion and redirection.  */
 
 int
-invert_jump_1 (rtx_insn *jump, rtx nlabel)
+invert_jump_1 (rtx_jump_insn *jump, rtx nlabel)
 {
   rtx x = pc_set (jump);
   int ochanges;
@@ -1724,7 +1724,7 @@ invert_jump_1 (rtx_insn *jump, rtx nlabel)
    NLABEL instead of where it jumps now.  Return true if successful.  */
 
 int
-invert_jump (rtx_insn *jump, rtx nlabel, int delete_unused)
+invert_jump (rtx_jump_insn *jump, rtx nlabel, int delete_unused)
 {
   rtx olabel = JUMP_LABEL (jump);
 
diff --git a/gcc/loop-unroll.c b/gcc/loop-unroll.c
index ccf473d..f1d2ea5 100644
--- a/gcc/loop-unroll.c
+++ b/gcc/loop-unroll.c
@@ -794,10 +794,11 @@ split_edge_and_insert (edge e, rtx_insn *insns)
    in order to create a jump.  */
 
 static rtx_insn *
-compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
-		      rtx_insn *cinsn)
+compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp,
+		      rtx_code_label *label, int prob, rtx_insn *cinsn)
 {
-  rtx_insn *seq, *jump;
+  rtx_insn *seq;
+  rtx_jump_insn *jump;
   rtx cond;
   machine_mode mode;
 
@@ -816,8 +817,7 @@ compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
       gcc_assert (rtx_equal_p (op0, XEXP (cond, 0)));
       gcc_assert (rtx_equal_p (op1, XEXP (cond, 1)));
       emit_jump_insn (copy_insn (PATTERN (cinsn)));
-      jump = get_last_insn ();
-      gcc_assert (JUMP_P (jump));
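+      /* The checked cast asserts JUMP_P in checking builds, replacing the
+	 explicit assert.  */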
+      jump = as_a <rtx_jump_insn *> (get_last_insn ());
       JUMP_LABEL (jump) = JUMP_LABEL (cinsn);
       LABEL_NUSES (JUMP_LABEL (jump))++;
       redirect_jump (jump, label, 0);
@@ -829,10 +829,9 @@ compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
       op0 = force_operand (op0, NULL_RTX);
       op1 = force_operand (op1, NULL_RTX);
       do_compare_rtx_and_jump (op0, op1, comp, 0,
-			       mode, NULL_RTX, NULL_RTX, label, -1);
-      jump = get_last_insn ();
-      gcc_assert (JUMP_P (jump));
-      JUMP_LABEL (jump) = label;
+			       mode, NULL_RTX, NULL, label, -1);
+      jump = as_a <rtx_jump_insn *> (get_last_insn ());
+      jump->set_jump_target (label);
       LABEL_NUSES (label)++;
     }
   add_int_reg_note (jump, REG_BR_PROB, prob);
diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
index a65a12f..a151081 100644
--- a/gcc/lra-constraints.c
+++ b/gcc/lra-constraints.c
@@ -1060,9 +1060,8 @@ emit_spill_move (bool to_p, rtx mem_pseudo, rtx val)
 	  LRA_SUBREG_P (mem_pseudo) = 1;
 	}
     }
-  return as_a <rtx_insn *> (to_p
-			    ? gen_move_insn (mem_pseudo, val)
-			    : gen_move_insn (val, mem_pseudo));
+  return to_p ? gen_move_insn (mem_pseudo, val)
+	      : gen_move_insn (val, mem_pseudo);
 }
 
 /* Process a special case insn (register move), return true if we
@@ -4766,7 +4765,7 @@ inherit_reload_reg (bool def_p, int original_regno,
 		   "    Inheritance reuse change %d->%d (bb%d):\n",
 		   original_regno, REGNO (new_reg),
 		   BLOCK_FOR_INSN (usage_insn)->index);
-	  dump_insn_slim (lra_dump_file, usage_insn);
+	  dump_insn_slim (lra_dump_file, as_a <rtx_insn *> (usage_insn));
 	}
     }
   if (lra_dump_file != NULL)
@@ -5026,7 +5025,7 @@ split_reg (bool before_p, int original_regno, rtx_insn *insn,
 	{
 	  fprintf (lra_dump_file, "    Split reuse change %d->%d:\n",
 		   original_regno, REGNO (new_reg));
-	  dump_insn_slim (lra_dump_file, usage_insn);
+	  dump_insn_slim (lra_dump_file, as_a <rtx_insn *> (usage_insn));
 	}
     }
   lra_assert (NOTE_P (usage_insn) || NONDEBUG_INSN_P (usage_insn));
diff --git a/gcc/modulo-sched.c b/gcc/modulo-sched.c
index 22cd216..4afe43e 100644
--- a/gcc/modulo-sched.c
+++ b/gcc/modulo-sched.c
@@ -790,8 +790,7 @@ schedule_reg_moves (partial_schedule_ptr ps)
 	  move->old_reg = old_reg;
 	  move->new_reg = gen_reg_rtx (GET_MODE (prev_reg));
 	  move->num_consecutive_stages = distances[0] && distances[1] ? 2 : 1;
-	  move->insn = as_a <rtx_insn *> (gen_move_insn (move->new_reg,
-							 copy_rtx (prev_reg)));
+	  move->insn = gen_move_insn (move->new_reg, copy_rtx (prev_reg));
 	  bitmap_clear (move->uses);
 
 	  prev_reg = move->new_reg;
diff --git a/gcc/optabs.c b/gcc/optabs.c
index 983c8d9..df5c81c 100644
--- a/gcc/optabs.c
+++ b/gcc/optabs.c
@@ -1416,7 +1416,7 @@ expand_binop_directly (machine_mode mode, optab binoptab,
   machine_mode mode0, mode1, tmp_mode;
   struct expand_operand ops[3];
   bool commutative_p;
-  rtx pat;
+  rtx_insn *pat;
   rtx xop0 = op0, xop1 = op1;
   rtx swap;
 
@@ -1499,8 +1499,8 @@ expand_binop_directly (machine_mode mode, optab binoptab,
       /* If PAT is composed of more than one insn, try to add an appropriate
 	 REG_EQUAL note to it.  If we can't because TEMP conflicts with an
 	 operand, call expand_binop again, this time without a target.  */
-      if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
-	  && ! add_equal_note (as_a <rtx_insn *> (pat), ops[0].value,
+      if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
+	  && ! add_equal_note (pat, ops[0].value,
 			       optab_to_code (binoptab),
 			       ops[1].value, ops[2].value))
 	{
@@ -3016,15 +3016,15 @@ expand_unop_direct (machine_mode mode, optab unoptab, rtx op0, rtx target,
       struct expand_operand ops[2];
       enum insn_code icode = optab_handler (unoptab, mode);
       rtx_insn *last = get_last_insn ();
-      rtx pat;
+      rtx_insn *pat;
 
       create_output_operand (&ops[0], target, mode);
       create_convert_operand_from (&ops[1], op0, mode, unsignedp);
       pat = maybe_gen_insn (icode, 2, ops);
       if (pat)
 	{
-	  if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
-	      && ! add_equal_note (as_a <rtx_insn *> (pat), ops[0].value,
+	  if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
+	      && ! add_equal_note (pat, ops[0].value,
 				   optab_to_code (unoptab),
 				   ops[1].value, NULL_RTX))
 	    {
@@ -3508,7 +3508,7 @@ expand_abs (machine_mode mode, rtx op0, rtx target,
   NO_DEFER_POP;
 
   do_compare_rtx_and_jump (target, CONST0_RTX (mode), GE, 0, mode,
-			   NULL_RTX, NULL_RTX, op1, -1);
+			   NULL_RTX, NULL, op1, -1);
 
   op0 = expand_unop (mode, result_unsignedp ? neg_optab : negv_optab,
                      target, target, 0);
@@ -3817,7 +3817,7 @@ maybe_emit_unop_insn (enum insn_code icode, rtx target, rtx op0,
 		      enum rtx_code code)
 {
   struct expand_operand ops[2];
-  rtx pat;
+  rtx_insn *pat;
 
   create_output_operand (&ops[0], target, GET_MODE (target));
   create_input_operand (&ops[1], op0, GET_MODE (op0));
@@ -3825,10 +3825,9 @@ maybe_emit_unop_insn (enum insn_code icode, rtx target, rtx op0,
   if (!pat)
     return false;
 
-  if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
+  if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
       && code != UNKNOWN)
-    add_equal_note (as_a <rtx_insn *> (pat), ops[0].value, code, ops[1].value,
-		    NULL_RTX);
+    add_equal_note (pat, ops[0].value, code, ops[1].value, NULL_RTX);
 
   emit_insn (pat);
 
@@ -8370,13 +8369,13 @@ maybe_legitimize_operands (enum insn_code icode, unsigned int opno,
    and emit any necessary set-up code.  Return null and emit no
    code on failure.  */
 
-rtx
+rtx_insn *
 maybe_gen_insn (enum insn_code icode, unsigned int nops,
 		struct expand_operand *ops)
 {
   gcc_assert (nops == (unsigned int) insn_data[(int) icode].n_generator_args);
   if (!maybe_legitimize_operands (icode, 0, nops, ops))
-    return NULL_RTX;
+    return NULL;
 
   switch (nops)
     {
diff --git a/gcc/optabs.h b/gcc/optabs.h
index 152af87..5c30ce5 100644
--- a/gcc/optabs.h
+++ b/gcc/optabs.h
@@ -541,8 +541,8 @@ extern void create_convert_operand_from_type (struct expand_operand *op,
 extern bool maybe_legitimize_operands (enum insn_code icode,
 				       unsigned int opno, unsigned int nops,
 				       struct expand_operand *ops);
-extern rtx maybe_gen_insn (enum insn_code icode, unsigned int nops,
-			   struct expand_operand *ops);
+extern rtx_insn *maybe_gen_insn (enum insn_code icode, unsigned int nops,
+				 struct expand_operand *ops);
 extern bool maybe_expand_insn (enum insn_code icode, unsigned int nops,
 			       struct expand_operand *ops);
 extern bool maybe_expand_jump_insn (enum insn_code icode, unsigned int nops,
diff --git a/gcc/postreload-gcse.c b/gcc/postreload-gcse.c
index 9014d69..2194557 100644
--- a/gcc/postreload-gcse.c
+++ b/gcc/postreload-gcse.c
@@ -1115,8 +1115,8 @@ eliminate_partially_redundant_load (basic_block bb, rtx_insn *insn,
 
 	  /* Make sure we can generate a move from register avail_reg to
 	     dest.  */
-	  rtx_insn *move = as_a <rtx_insn *>
-	    (gen_move_insn (copy_rtx (dest), copy_rtx (avail_reg)));
+	  rtx_insn *move = gen_move_insn (copy_rtx (dest),
+					  copy_rtx (avail_reg));
 	  extract_insn (move);
 	  if (! constrain_operands (1, get_preferred_alternatives (insn,
 								   pred_bb))
diff --git a/gcc/recog.c b/gcc/recog.c
index c3ad86f..cba26de 100644
--- a/gcc/recog.c
+++ b/gcc/recog.c
@@ -3066,7 +3066,7 @@ split_all_insns_noflow (void)
 #ifdef HAVE_peephole2
 struct peep2_insn_data
 {
-  rtx insn;
+  rtx_insn *insn;
   regset live_before;
 };
 
@@ -3082,7 +3082,7 @@ int peep2_current_count;
 /* A non-insn marker indicating the last insn of the block.
    The live_before regset for this element is correct, indicating
    DF_LIVE_OUT for the block.  */
-#define PEEP2_EOB	pc_rtx
+#define PEEP2_EOB	(static_cast<rtx_insn *> (pc_rtx))
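+/* Note that PC is not an insn: the cast above only lets this sentinel be
+   stored in peep2_insn_data[].insn, on the assumption that it is compared
+   against but never used as a real instruction.  */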
 
 /* Wrap N to fit into the peep2_insn_data buffer.  */
 
@@ -3285,7 +3285,7 @@ peep2_reinit_state (regset live)
 
   /* Indicate that all slots except the last holds invalid data.  */
   for (i = 0; i < MAX_INSNS_PER_PEEP2; ++i)
-    peep2_insn_data[i].insn = NULL_RTX;
+    peep2_insn_data[i].insn = NULL;
   peep2_current_count = 0;
 
   /* Indicate that the last slot contains live_after data.  */
@@ -3313,7 +3313,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
 
   /* If we are splitting an RTX_FRAME_RELATED_P insn, do not allow it to
      match more than one insn, or to be split into more than one insn.  */
-  old_insn = as_a <rtx_insn *> (peep2_insn_data[peep2_current].insn);
+  old_insn = peep2_insn_data[peep2_current].insn;
   if (RTX_FRAME_RELATED_P (old_insn))
     {
       bool any_note = false;
@@ -3401,7 +3401,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
       rtx note;
 
       j = peep2_buf_position (peep2_current + i);
-      old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
+      old_insn = peep2_insn_data[j].insn;
       if (!CALL_P (old_insn))
 	continue;
       was_call = true;
@@ -3440,7 +3440,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
       while (++i <= match_len)
 	{
 	  j = peep2_buf_position (peep2_current + i);
-	  old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
+	  old_insn = peep2_insn_data[j].insn;
 	  gcc_assert (!CALL_P (old_insn));
 	}
       break;
@@ -3452,7 +3452,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
   for (i = match_len; i >= 0; --i)
     {
       int j = peep2_buf_position (peep2_current + i);
-      old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
+      old_insn = peep2_insn_data[j].insn;
 
       as_note = find_reg_note (old_insn, REG_ARGS_SIZE, NULL);
       if (as_note)
@@ -3463,7 +3463,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
   eh_note = find_reg_note (peep2_insn_data[i].insn, REG_EH_REGION, NULL_RTX);
 
   /* Replace the old sequence with the new.  */
-  rtx_insn *peepinsn = as_a <rtx_insn *> (peep2_insn_data[i].insn);
+  rtx_insn *peepinsn = peep2_insn_data[i].insn;
   last = emit_insn_after_setloc (attempt,
 				 peep2_insn_data[i].insn,
 				 INSN_LOCATION (peepinsn));
@@ -3580,7 +3580,7 @@ peep2_update_life (basic_block bb, int match_len, rtx_insn *last,
    add more instructions to the buffer.  */
 
 static bool
-peep2_fill_buffer (basic_block bb, rtx insn, regset live)
+peep2_fill_buffer (basic_block bb, rtx_insn *insn, regset live)
 {
   int pos;
 
@@ -3606,7 +3606,7 @@ peep2_fill_buffer (basic_block bb, rtx insn, regset live)
   COPY_REG_SET (peep2_insn_data[pos].live_before, live);
   peep2_current_count++;
 
-  df_simulate_one_insn_forwards (bb, as_a <rtx_insn *> (insn), live);
+  df_simulate_one_insn_forwards (bb, insn, live);
   return true;
 }
 
diff --git a/gcc/recog.h b/gcc/recog.h
index 8a38b26..6b5d9e4 100644
--- a/gcc/recog.h
+++ b/gcc/recog.h
@@ -276,43 +276,43 @@ typedef const char * (*insn_output_fn) (rtx *, rtx_insn *);
 
 struct insn_gen_fn
 {
-  typedef rtx (*f0) (void);
-  typedef rtx (*f1) (rtx);
-  typedef rtx (*f2) (rtx, rtx);
-  typedef rtx (*f3) (rtx, rtx, rtx);
-  typedef rtx (*f4) (rtx, rtx, rtx, rtx);
-  typedef rtx (*f5) (rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f6) (rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f7) (rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f8) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f9) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f10) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f11) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f12) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f13) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f14) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f15) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f16) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f0) (void);
+  typedef rtx_insn * (*f1) (rtx);
+  typedef rtx_insn * (*f2) (rtx, rtx);
+  typedef rtx_insn * (*f3) (rtx, rtx, rtx);
+  typedef rtx_insn * (*f4) (rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f5) (rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f6) (rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f7) (rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f8) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f9) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f10) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f11) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f12) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f13) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f14) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f15) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f16) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
 
   typedef f0 stored_funcptr;
 
-  rtx operator () (void) const { return ((f0)func) (); }
-  rtx operator () (rtx a0) const { return ((f1)func) (a0); }
-  rtx operator () (rtx a0, rtx a1) const { return ((f2)func) (a0, a1); }
-  rtx operator () (rtx a0, rtx a1, rtx a2) const { return ((f3)func) (a0, a1, a2); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3) const { return ((f4)func) (a0, a1, a2, a3); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4) const { return ((f5)func) (a0, a1, a2, a3, a4); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5) const { return ((f6)func) (a0, a1, a2, a3, a4, a5); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6) const { return ((f7)func) (a0, a1, a2, a3, a4, a5, a6); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7) const { return ((f8)func) (a0, a1, a2, a3, a4, a5, a6, a7); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8) const { return ((f9)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9) const { return ((f10)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10) const { return ((f11)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11) const { return ((f12)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12) const { return ((f13)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13) const { return ((f14)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14) const { return ((f15)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14, rtx a15) const { return ((f16)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15); }
+  rtx_insn * operator () (void) const { return ((f0)func) (); }
+  rtx_insn * operator () (rtx a0) const { return ((f1)func) (a0); }
+  rtx_insn * operator () (rtx a0, rtx a1) const { return ((f2)func) (a0, a1); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2) const { return ((f3)func) (a0, a1, a2); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3) const { return ((f4)func) (a0, a1, a2, a3); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4) const { return ((f5)func) (a0, a1, a2, a3, a4); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5) const { return ((f6)func) (a0, a1, a2, a3, a4, a5); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6) const { return ((f7)func) (a0, a1, a2, a3, a4, a5, a6); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7) const { return ((f8)func) (a0, a1, a2, a3, a4, a5, a6, a7); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8) const { return ((f9)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9) const { return ((f10)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10) const { return ((f11)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11) const { return ((f12)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12) const { return ((f13)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13) const { return ((f14)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14) const { return ((f15)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14, rtx a15) const { return ((f16)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15); }
 
   // This is for compatibility of code that invokes functions like
   //   (*funcptr) (arg)
diff --git a/gcc/resource.c b/gcc/resource.c
index 26d9fca..d110953 100644
--- a/gcc/resource.c
+++ b/gcc/resource.c
@@ -439,7 +439,7 @@ find_dead_or_set_registers (rtx_insn *target, struct resources *res,
 
   for (insn = target; insn; insn = next_insn)
     {
-      rtx_insn *this_jump_insn = insn;
+      rtx_insn *this_insn = insn;
 
       next_insn = NEXT_INSN (insn);
 
@@ -487,8 +487,8 @@ find_dead_or_set_registers (rtx_insn *target, struct resources *res,
 		 of a call, so search for a JUMP_INSN in any position.  */
 	      for (i = 0; i < seq->len (); i++)
 		{
-		  this_jump_insn = seq->insn (i);
-		  if (JUMP_P (this_jump_insn))
+		  this_insn = seq->insn (i);
+		  if (JUMP_P (this_insn))
 		    break;
 		}
 	    }
@@ -497,14 +497,14 @@ find_dead_or_set_registers (rtx_insn *target, struct resources *res,
 	  break;
 	}
 
-      if (JUMP_P (this_jump_insn))
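+      /* Narrow to rtx_jump_insn * so the jump-specific code below can use
+	 the typed accessors; non-jumps skip the whole block.  */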
+      if (rtx_jump_insn *this_jump_insn
+	  = dyn_cast <rtx_jump_insn *> (this_insn))
 	{
 	  if (jump_count++ < 10)
 	    {
 	      if (any_uncondjump_p (this_jump_insn)
 		  || ANY_RETURN_P (PATTERN (this_jump_insn)))
 		{
-		  rtx lab_or_return = JUMP_LABEL (this_jump_insn);
+		  rtx lab_or_return = this_jump_insn->jump_label ();
 		  if (ANY_RETURN_P (lab_or_return))
 		    next_insn = NULL;
 		  else
@@ -577,10 +577,10 @@ find_dead_or_set_registers (rtx_insn *target, struct resources *res,
 		  AND_COMPL_HARD_REG_SET (scratch, needed.regs);
 		  AND_COMPL_HARD_REG_SET (fallthrough_res.regs, scratch);
 
-		  if (!ANY_RETURN_P (JUMP_LABEL (this_jump_insn)))
-		    find_dead_or_set_registers (JUMP_LABEL_AS_INSN (this_jump_insn),
-						&target_res, 0, jump_count,
-						target_set, needed);
+		  if (!ANY_RETURN_P (this_jump_insn->jump_label ()))
+		    find_dead_or_set_registers
+			  (this_jump_insn->jump_target (),
+			   &target_res, 0, jump_count, target_set, needed);
 		  find_dead_or_set_registers (next_insn,
 					      &fallthrough_res, 0, jump_count,
 					      set, needed);
diff --git a/gcc/rtl.h b/gcc/rtl.h
index e5e4560..486c988 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -546,6 +546,7 @@ class GTY(()) rtx_nonjump_insn : public rtx_insn
 
 class GTY(()) rtx_jump_insn : public rtx_insn
 {
+public:
   /* No extra fields, but adds the invariant:
        JUMP_P (X) aka (GET_CODE (X) == JUMP_INSN)
      i.e. an instruction that can possibly jump.
@@ -553,6 +554,18 @@ class GTY(()) rtx_jump_insn : public rtx_insn
      This is an instance of:
        DEF_RTL_EXPR(JUMP_INSN, "jump_insn", "uuBeiie0", RTX_INSN)
      from rtl.def.  */
+
+  /* Returns jump target of this instruction.  */
+
+  inline rtx jump_label () const;
+
+  /* Returns jump target cast to rtx_insn *.  */
+
+  inline rtx_insn *jump_target () const;
+
+  /* Set jump target.  */
+
+  inline void set_jump_target (rtx_insn *);
 };
 
 class GTY(()) rtx_call_insn : public rtx_insn
@@ -827,6 +840,14 @@ is_a_helper <rtx_debug_insn *>::test (rtx rt)
 template <>
 template <>
 inline bool
+is_a_helper <rtx_debug_insn *>::test (rtx_insn *insn)
+{
+  return DEBUG_INSN_P (insn);
+}
+
+template <>
+template <>
+inline bool
 is_a_helper <rtx_nonjump_insn *>::test (rtx rt)
 {
   return NONJUMP_INSN_P (rt);
@@ -843,6 +864,14 @@ is_a_helper <rtx_jump_insn *>::test (rtx rt)
 template <>
 template <>
 inline bool
+is_a_helper <rtx_jump_insn *>::test (rtx_insn *insn)
+{
+  return JUMP_P (insn);
+}
+
+template <>
+template <>
+inline bool
 is_a_helper <rtx_call_insn *>::test (rtx rt)
 {
   return CALL_P (rt);
@@ -1681,6 +1710,23 @@ inline rtx_insn *JUMP_LABEL_AS_INSN (const rtx_insn *insn)
   return safe_as_a <rtx_insn *> (JUMP_LABEL (insn));
 }
 
+/* Methods of rtx_jump_insn.  */
+
+inline rtx rtx_jump_insn::jump_label () const
+{
+  return JUMP_LABEL (this);
+}
+
+inline rtx_insn *rtx_jump_insn::jump_target () const
+{
+  return safe_as_a <rtx_insn *> (JUMP_LABEL (this));
+}
+
+inline void rtx_jump_insn::set_jump_target (rtx_insn *target)
+{
+  JUMP_LABEL (this) = target;
+}
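+
+/* A minimal usage sketch (INSN and NEW_LABEL are hypothetical):
+
+     rtx_jump_insn *jump = as_a <rtx_jump_insn *> (insn);
+     if (!ANY_RETURN_P (jump->jump_label ()))
+       jump->set_jump_target (new_label);
+
+   The checked cast asserts JUMP_P, and set_jump_target updates
+   JUMP_LABEL through the typed pointer.  */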
+
 /* Once basic blocks are found, each CODE_LABEL starts a chain that
    goes through all the LABEL_REFs that jump to that label.  The chain
    eventually winds up at the CODE_LABEL: it is circular.  */
@@ -2662,7 +2708,7 @@ extern rtx_insn *emit_debug_insn_before (rtx, rtx);
 extern rtx_insn *emit_debug_insn_before_noloc (rtx, rtx);
 extern rtx_insn *emit_debug_insn_before_setloc (rtx, rtx, int);
 extern rtx_barrier *emit_barrier_before (rtx);
-extern rtx_insn *emit_label_before (rtx, rtx_insn *);
+extern rtx_code_label *emit_label_before (rtx, rtx_insn *);
 extern rtx_note *emit_note_before (enum insn_note, rtx);
 extern rtx_insn *emit_insn_after (rtx, rtx);
 extern rtx_insn *emit_insn_after_noloc (rtx, rtx, basic_block);
@@ -2683,7 +2729,7 @@ extern rtx_insn *emit_insn (rtx);
 extern rtx_insn *emit_debug_insn (rtx);
 extern rtx_insn *emit_jump_insn (rtx);
 extern rtx_insn *emit_call_insn (rtx);
-extern rtx_insn *emit_label (rtx);
+extern rtx_code_label *emit_label (rtx);
 extern rtx_jump_table_data *emit_jump_table_data (rtx);
 extern rtx_barrier *emit_barrier (void);
 extern rtx_note *emit_note (enum insn_note);
@@ -3336,14 +3382,14 @@ extern int eh_returnjump_p (rtx_insn *);
 extern int onlyjump_p (const rtx_insn *);
 extern int only_sets_cc0_p (const_rtx);
 extern int sets_cc0_p (const_rtx);
-extern int invert_jump_1 (rtx_insn *, rtx);
-extern int invert_jump (rtx_insn *, rtx, int);
+extern int invert_jump_1 (rtx_jump_insn *, rtx);
+extern int invert_jump (rtx_jump_insn *, rtx, int);
 extern int rtx_renumbered_equal_p (const_rtx, const_rtx);
 extern int true_regnum (const_rtx);
 extern unsigned int reg_or_subregno (const_rtx);
 extern int redirect_jump_1 (rtx, rtx);
-extern void redirect_jump_2 (rtx, rtx, rtx, int, int);
-extern int redirect_jump (rtx, rtx, int);
+extern void redirect_jump_2 (rtx_jump_insn *, rtx, rtx, int, int);
+extern int redirect_jump (rtx_jump_insn *, rtx, int);
 extern void rebuild_jump_labels (rtx_insn *);
 extern void rebuild_jump_labels_chain (rtx_insn *);
 extern rtx reversed_comparison (const_rtx, machine_mode);
@@ -3426,7 +3472,7 @@ extern void print_inline_rtx (FILE *, const_rtx, int);
    not be in sched-vis.c but in rtl.c, because they are not only used
    by the scheduler anymore but for all "slim" RTL dumping.  */
 extern void dump_value_slim (FILE *, const_rtx, int);
-extern void dump_insn_slim (FILE *, const_rtx);
+extern void dump_insn_slim (FILE *, const rtx_insn *);
 extern void dump_rtl_slim (FILE *, const rtx_insn *, const rtx_insn *,
 			   int, int);
 extern void print_value (pretty_printer *, const_rtx, int);
diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
index 2377f25a..3a6d9ce 100644
--- a/gcc/rtlanal.c
+++ b/gcc/rtlanal.c
@@ -2914,7 +2914,8 @@ rtx_referenced_p (const_rtx x, const_rtx body)
 bool
 tablejump_p (const rtx_insn *insn, rtx *labelp, rtx_jump_table_data **tablep)
 {
-  rtx label, table;
+  rtx label;
+  rtx_insn *table;
 
   if (!JUMP_P (insn))
     return false;
diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c
index e624563..ca1a64b 100644
--- a/gcc/sched-deps.c
+++ b/gcc/sched-deps.c
@@ -2650,7 +2650,7 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
     case MEM:
       {
 	/* Reading memory.  */
-	rtx u;
+	rtx_insn_list *u;
 	rtx_insn_list *pending;
 	rtx_expr_list *pending_mem;
 	rtx t = x;
@@ -2701,11 +2701,10 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
 		pending_mem = pending_mem->next ();
 	      }
 
-	    for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1))
-	      add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)),
-			      REG_DEP_ANTI);
+	    for (u = deps->last_pending_memory_flush; u; u = u->next ())
+	      add_dependence (insn, u->insn (), REG_DEP_ANTI);
 
-	    for (u = deps->pending_jump_insns; u; u = XEXP (u, 1))
+	    for (u = deps->pending_jump_insns; u; u = u->next ())
 	      if (deps_may_trap_p (x))
 		{
 		  if ((sched_deps_info->generate_spec_deps)
@@ -2714,11 +2713,10 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
 		      ds_t ds = set_dep_weak (DEP_ANTI, BEGIN_CONTROL,
 					      MAX_DEP_WEAK);
 		      
-		      note_dep (as_a <rtx_insn *> (XEXP (u, 0)), ds);
+		      note_dep (u->insn (), ds);
 		    }
 		  else
-		    add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)),
-				    REG_DEP_CONTROL);
+		    add_dependence (insn, u->insn (), REG_DEP_CONTROL);
 		}
 	  }
 
@@ -3089,7 +3087,7 @@ sched_analyze_insn (struct deps_desc *deps, rtx x, rtx_insn *insn)
   if (DEBUG_INSN_P (insn))
     {
       rtx_insn *prev = deps->last_debug_insn;
-      rtx u;
+      rtx_insn_list *u;
 
       if (!deps->readonly)
 	deps->last_debug_insn = insn;
@@ -3101,8 +3099,8 @@ sched_analyze_insn (struct deps_desc *deps, rtx x, rtx_insn *insn)
 			   REG_DEP_ANTI, false);
 
       if (!sel_sched_p ())
-	for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1))
-	  add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)), REG_DEP_ANTI);
+	for (u = deps->last_pending_memory_flush; u; u = u->next ())
+	  add_dependence (insn, u->insn (), REG_DEP_ANTI);
 
       EXECUTE_IF_SET_IN_REG_SET (reg_pending_uses, 0, i, rsi)
 	{
diff --git a/gcc/sched-vis.c b/gcc/sched-vis.c
index 32f7a7c..31794e6 100644
--- a/gcc/sched-vis.c
+++ b/gcc/sched-vis.c
@@ -67,7 +67,7 @@ along with GCC; see the file COPYING3.  If not see
    pointer, via str_pattern_slim, but this usage is discouraged.  */
 
 /* For insns we print patterns, and for some patterns we print insns...  */
-static void print_insn_with_notes (pretty_printer *, const_rtx);
+static void print_insn_with_notes (pretty_printer *, const rtx_insn *);
 
 /* This recognizes rtx'en classified as expressions.  These are always
    represent some action on values or results of other expression, that
@@ -669,7 +669,7 @@ print_pattern (pretty_printer *pp, const_rtx x, int verbose)
    with their INSN_UIDs.  */
 
 void
-print_insn (pretty_printer *pp, const_rtx x, int verbose)
+print_insn (pretty_printer *pp, const rtx_insn *x, int verbose)
 {
   if (verbose)
     {
@@ -787,7 +787,7 @@ print_insn (pretty_printer *pp, const_rtx x, int verbose)
    note attached to the instruction.  */
 
 static void
-print_insn_with_notes (pretty_printer *pp, const_rtx x)
+print_insn_with_notes (pretty_printer *pp, const rtx_insn *x)
 {
   pp_string (pp, print_rtx_head);
   print_insn (pp, x, 1);
@@ -823,7 +823,7 @@ dump_value_slim (FILE *f, const_rtx x, int verbose)
 /* Emit a slim dump of X (an insn) to the file F, including any register
    note attached to the instruction.  */
 void
-dump_insn_slim (FILE *f, const_rtx x)
+dump_insn_slim (FILE *f, const rtx_insn *x)
 {
   pretty_printer rtl_slim_pp;
   rtl_slim_pp.buffer->stream = f;
@@ -893,9 +893,9 @@ str_pattern_slim (const_rtx x)
 }
 
 /* Emit a slim dump of X (an insn) to stderr.  */
-extern void debug_insn_slim (const_rtx);
+extern void debug_insn_slim (const rtx_insn *);
 DEBUG_FUNCTION void
-debug_insn_slim (const_rtx x)
+debug_insn_slim (const rtx_insn *x)
 {
   dump_insn_slim (stderr, x);
 }
diff --git a/gcc/stmt.c b/gcc/stmt.c
index 6c62a12..8d8529a 100644
--- a/gcc/stmt.c
+++ b/gcc/stmt.c
@@ -135,12 +135,12 @@ static void balance_case_nodes (case_node_ptr *, case_node_ptr);
 static int node_has_low_bound (case_node_ptr, tree);
 static int node_has_high_bound (case_node_ptr, tree);
 static int node_is_bounded (case_node_ptr, tree);
-static void emit_case_nodes (rtx, case_node_ptr, rtx, int, tree);
+static void emit_case_nodes (rtx, case_node_ptr, rtx_code_label *, int, tree);
 \f
 /* Return the rtx-label that corresponds to a LABEL_DECL,
    creating it if necessary.  */
 
-rtx
+rtx_insn *
 label_rtx (tree label)
 {
   gcc_assert (TREE_CODE (label) == LABEL_DECL);
@@ -153,15 +153,15 @@ label_rtx (tree label)
 	LABEL_PRESERVE_P (r) = 1;
     }
 
-  return DECL_RTL (label);
+  return as_a <rtx_insn *> (DECL_RTL (label));
 }
 
 /* As above, but also put it on the forced-reference list of the
    function that contains it.  */
-rtx
+rtx_insn *
 force_label_rtx (tree label)
 {
-  rtx_insn *ref = as_a <rtx_insn *> (label_rtx (label));
+  rtx_insn *ref = label_rtx (label);
   tree function = decl_function_context (label);
 
   gcc_assert (function);
@@ -170,6 +170,14 @@ force_label_rtx (tree label)
   return ref;
 }
 
+/* As label_rtx, but ensures (in a checked build) that the returned value
+   is an existing label (i.e. an rtx with code CODE_LABEL).  */
+rtx_code_label *
+jump_target_rtx (tree label)
+{
+  return as_a <rtx_code_label *> (label_rtx (label));
+}
+
 /* Add an unconditional jump to LABEL as the next sequential instruction.  */
 
 void
@@ -196,7 +204,7 @@ emit_jump (rtx label)
 void
 expand_label (tree label)
 {
-  rtx_insn *label_r = as_a <rtx_insn *> (label_rtx (label));
+  rtx_code_label *label_r = jump_target_rtx (label);
 
   do_pending_stack_adjust ();
   emit_label (label_r);
@@ -705,7 +713,7 @@ resolve_operand_name_1 (char *p, tree outputs, tree inputs, tree labels)
 void
 expand_naked_return (void)
 {
-  rtx end_label;
+  rtx_code_label *end_label;
 
   clear_pending_stack_adjust ();
   do_pending_stack_adjust ();
@@ -720,12 +728,12 @@ expand_naked_return (void)
 /* Generate code to jump to LABEL if OP0 and OP1 are equal in mode MODE. PROB
    is the probability of jumping to LABEL.  */
 static void
-do_jump_if_equal (machine_mode mode, rtx op0, rtx op1, rtx label,
+do_jump_if_equal (machine_mode mode, rtx op0, rtx op1, rtx_code_label *label,
 		  int unsignedp, int prob)
 {
   gcc_assert (prob <= REG_BR_PROB_BASE);
   do_compare_rtx_and_jump (op0, op1, EQ, unsignedp, mode,
-			   NULL_RTX, NULL_RTX, label, prob);
+			   NULL_RTX, NULL, label, prob);
 }
 \f
 /* Do the insertion of a case label into case_list.  The labels are
@@ -882,8 +890,8 @@ expand_switch_as_decision_tree_p (tree range,
 
 static void
 emit_case_decision_tree (tree index_expr, tree index_type,
-			 struct case_node *case_list, rtx default_label,
-                         int default_prob)
+			 case_node_ptr case_list, rtx_code_label *default_label,
+			 int default_prob)
 {
   rtx index = expand_normal (index_expr);
 
@@ -1141,7 +1149,7 @@ void
 expand_case (gswitch *stmt)
 {
   tree minval = NULL_TREE, maxval = NULL_TREE, range = NULL_TREE;
-  rtx default_label = NULL_RTX;
+  rtx_code_label *default_label = NULL;
   unsigned int count, uniq;
   int i;
   int ncases = gimple_switch_num_labels (stmt);
@@ -1173,7 +1181,7 @@ expand_case (gswitch *stmt)
   do_pending_stack_adjust ();
 
   /* Find the default case target label.  */
-  default_label = label_rtx (CASE_LABEL (gimple_switch_default_label (stmt)));
+  default_label = jump_target_rtx (CASE_LABEL (gimple_switch_default_label (stmt)));
   edge default_edge = EDGE_SUCC (bb, 0);
   int default_prob = default_edge->probability;
 
@@ -1323,7 +1331,7 @@ expand_sjlj_dispatch_table (rtx dispatch_index,
       for (int i = 0; i < ncases; i++)
         {
 	  tree elt = dispatch_table[i];
-	  rtx lab = label_rtx (CASE_LABEL (elt));
+	  rtx_code_label *lab = jump_target_rtx (CASE_LABEL (elt));
 	  do_jump_if_equal (index_mode, index, zero, lab, 0, -1);
 	  force_expand_binop (index_mode, sub_optab,
 			      index, CONST1_RTX (index_mode),
@@ -1592,7 +1600,7 @@ node_is_bounded (case_node_ptr node, tree index_type)
    tests for the value 50, then this node need not test anything.  */
 
 static void
-emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
+emit_case_nodes (rtx index, case_node_ptr node, rtx_code_label *default_label,
 		 int default_prob, tree index_type)
 {
   /* If INDEX has an unsigned type, we must make unsigned branches.  */
@@ -1620,7 +1628,8 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 			convert_modes (mode, imode,
 				       expand_normal (node->low),
 				       unsignedp),
-			label_rtx (node->code_label), unsignedp, probability);
+			jump_target_rtx (node->code_label),
+			unsignedp, probability);
       /* Since this case is taken at this point, reduce its weight from
          subtree_weight.  */
       subtree_prob -= prob;
@@ -1687,7 +1696,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 				convert_modes (mode, imode,
 					       expand_normal (node->right->low),
 					       unsignedp),
-				label_rtx (node->right->code_label),
+				jump_target_rtx (node->right->code_label),
 				unsignedp, probability);
 
 	      /* See if the value matches what the left hand side
@@ -1699,7 +1708,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 				convert_modes (mode, imode,
 					       expand_normal (node->left->low),
 					       unsignedp),
-				label_rtx (node->left->code_label),
+				jump_target_rtx (node->left->code_label),
 				unsignedp, probability);
 	    }
 
@@ -1786,7 +1795,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 			        (mode, imode,
 			         expand_normal (node->right->low),
 			         unsignedp),
-			        label_rtx (node->right->code_label), unsignedp, probability);
+			        jump_target_rtx (node->right->code_label), unsignedp, probability);
             }
 	  }
 
@@ -1828,7 +1837,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 			        (mode, imode,
 			         expand_normal (node->left->low),
 			         unsignedp),
-			        label_rtx (node->left->code_label), unsignedp, probability);
+			        jump_target_rtx (node->left->code_label), unsignedp, probability);
             }
 	}
     }
@@ -2051,7 +2060,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 				       mode, 1, default_label, probability);
 	    }
 
-	  emit_jump (label_rtx (node->code_label));
+	  emit_jump (jump_target_rtx (node->code_label));
 	}
     }
 }
diff --git a/gcc/stmt.h b/gcc/stmt.h
index 620b0f1..721c7ea 100644
--- a/gcc/stmt.h
+++ b/gcc/stmt.h
@@ -31,13 +31,18 @@ extern tree resolve_asm_operand_names (tree, tree, tree, tree);
 extern tree tree_overlaps_hard_reg_set (tree, HARD_REG_SET *);
 #endif
 
-/* Return the CODE_LABEL rtx for a LABEL_DECL, creating it if necessary.  */
-extern rtx label_rtx (tree);
+/* Return the CODE_LABEL rtx for a LABEL_DECL, creating it if necessary.
+   If the label was deleted, the corresponding note
+   (NOTE_INSN_DELETED{_DEBUG,}_LABEL) insn is returned.  */
+extern rtx_insn *label_rtx (tree);
 
 /* As label_rtx, but additionally the label is placed on the forced label
    list of its containing function (i.e. it is treated as reachable even
    if how is not obvious).  */
-extern rtx force_label_rtx (tree);
+extern rtx_insn *force_label_rtx (tree);
+
+/* As label_rtx, but checks that label was not deleted.  */
+extern rtx_code_label *jump_target_rtx (tree);
 
 /* Expand a GIMPLE_SWITCH statement.  */
 extern void expand_case (gswitch *);
diff --git a/gcc/store-motion.c b/gcc/store-motion.c
index d621ec1..fdd2f47 100644
--- a/gcc/store-motion.c
+++ b/gcc/store-motion.c
@@ -813,7 +813,7 @@ insert_store (struct st_expr * expr, edge e)
     return 0;
 
   reg = expr->reaching_reg;
-  insn = as_a <rtx_insn *> (gen_move_insn (copy_rtx (expr->pattern), reg));
+  insn = gen_move_insn (copy_rtx (expr->pattern), reg);
 
   /* If we are inserting this expression on ALL predecessor edges of a BB,
      insert it at the start of the BB, and reset the insert bits on the other
@@ -954,7 +954,7 @@ replace_store_insn (rtx reg, rtx_insn *del, basic_block bb,
   rtx mem, note, set, ptr;
 
   mem = smexpr->pattern;
-  insn = as_a <rtx_insn *> (gen_move_insn (reg, SET_SRC (single_set (del))));
+  insn = gen_move_insn (reg, SET_SRC (single_set (del)));
 
   for (ptr = smexpr->antic_stores; ptr; ptr = XEXP (ptr, 1))
     if (XEXP (ptr, 0) == del)

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-04-27 20:01   ` Mikhail Maltsev
@ 2015-04-28 13:50     ` Richard Sandiford
  2015-04-28 17:12       ` Jeff Law
  2015-04-29  8:02       ` Mikhail Maltsev
  2015-04-28 23:55     ` Jeff Law
  1 sibling, 2 replies; 21+ messages in thread
From: Richard Sandiford @ 2015-04-28 13:50 UTC (permalink / raw)
  To: Mikhail Maltsev; +Cc: Jeff Law, gcc-patches

Mikhail Maltsev <maltsevm@gmail.com> writes:
> I'm sending an updated patch (rebased to recent trunk, bootstrapped and
> regtested on x86_64-unknown-linux-gnu).
>
> On 04/25/2015 02:49 PM, Richard Sandiford wrote:
>> FWIW I think the split between label_rtx and live_label_rtx is good,
>> but I think we should give them different names.  The first one is
>> returning only a position in the instruction stream, the second is
>> returning a jump target.  I think we should rename both of them to
>> make that distinction clearer.
>
> I renamed live_label_rtx to jump_target_rtx. But I'm not sure if it is
> appropriate (so, perhaps, you could give some advice about the right
> names for these functions?)

Shied away from that because I'm hopeless with names. :-)  jump_target_rtx
sounds good to me.

I still think we should rename label_rtx too, because I think it's
confusing for label_rtx to return something other than a label.
That's probably a separate, follow-up patch though.
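
To make the distinction concrete (a rough sketch -- "decl" here is just
some LABEL_DECL; only the caller-side types matter):

  /* A position in the instruction stream: a live CODE_LABEL or, if the
     label was deleted, a NOTE_INSN_DELETED_LABEL note.  */
  rtx_insn *pos = label_rtx (decl);

  /* A jump target: checked builds assert this really is a CODE_LABEL.  */
  rtx_code_label *target = jump_target_rtx (decl);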

>> I think the eventual aim would be to have rtx_jump_insn member functions
>> that get and set the jump label as an rtx_insn, with JUMP_LABEL_AS_INSN
>> being a stepping stone towards that.  In cases like this it might make
>> more sense to ensure old_jump has the right type (rtx_jump_insn) and go
>> straight to the member functions, rather than switching to JUMP_LABEL_AS_INSN
>> now and then having to rewrite it later.
>
> I added the member functions. The problem is that JUMP_LABEL does not
> always satisfy the current invariant of rtx_insn: it can also be an RTL
> expression of type RETURN or SIMPLE_RETURN.

Yeah, that's probably something that needs to change at some point.
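
For now callers have to rule out the RETURN case themselves before asking
for the target as an insn, as the resource.c hunk in the patch does:

  if (!ANY_RETURN_P (this_jump_insn->jump_label ()))
    find_dead_or_set_registers (this_jump_insn->jump_target (), ...);

i.e. jump_target () is only safe once ANY_RETURN_P has been excluded.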

>> This preserves the behaviour of the original code but I'm not sure
>> it's worth it.  I doubt the distinction between:
>> 
>>   gcc_assert (JUMP_P (x));
>> 
>> and:
>> 
>>   gcc_checking_assert (JUMP_P (x));
>> 
>> was ever very scientific.  Seems like we should take this refactoring as
>> an opportunity to make the checking more consistent.
> Fixed (removed assert_as_a).
>
>> That seems pretty heavy-weight for LRA-local code.  Also, the long-term
>> plan is for INSN_LIST and rtx_insn to be in separate hierarchies,
>> at which point we'd have no alias-safe way to distinguish them.
>> 
>> usage_insns isn't a GC structure and isn't live across a GC collection,
>> so I don't think we need these lists to be rtxes at all.
> OK, reverted changes in LRA code for now. I think this should be a
> separate patch then.

Agreed.

> +inline rtx_insn *rtx_jump_insn::jump_target () const
> +{
> +  return safe_as_a <rtx_insn *> (JUMP_LABEL (this));
> +}
> +
> +inline void rtx_jump_insn::set_jump_target (rtx_insn *target)
> +{
> +  JUMP_LABEL(this) = target;

Space before "(this)".

Could these two operate on rtx_code_labels rather than rtx_insns?
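
I.e. something like this (just a sketch, assuming the RETURN cases are
handled before the accessors are used):

  inline rtx_code_label *rtx_jump_insn::jump_target () const
  {
    return safe_as_a <rtx_code_label *> (JUMP_LABEL (this));
  }

  inline void rtx_jump_insn::set_jump_target (rtx_code_label *target)
  {
    JUMP_LABEL (this) = target;
  }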

> @@ -1173,7 +1181,7 @@ expand_case (gswitch *stmt)
>    do_pending_stack_adjust ();
>  
>    /* Find the default case target label.  */
> -  default_label = label_rtx (CASE_LABEL (gimple_switch_default_label (stmt)));
> +  default_label = jump_target_rtx (CASE_LABEL (gimple_switch_default_label (stmt)));
>    edge default_edge = EDGE_SUCC (bb, 0);
>    int default_prob = default_edge->probability;
>  

Long line -- can break before "(CASE_LABEL"

> @@ -1786,7 +1795,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
>  			        (mode, imode,
>  			         expand_normal (node->right->low),
>  			         unsignedp),
> -			        label_rtx (node->right->code_label), unsignedp, probability);
> +			        jump_target_rtx (node->right->code_label), unsignedp, probability);
>              }
>  	  }
>  
> @@ -1828,7 +1837,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
>  			        (mode, imode,
>  			         expand_normal (node->left->low),
>  			         unsignedp),
> -			        label_rtx (node->left->code_label), unsignedp, probability);
> +			        jump_target_rtx (node->left->code_label), unsignedp, probability);
>              }
>  	}
>      }

Long lines here too.

Looks good to me with those changes FWIW.

Thanks,
Richard

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-04-28 13:50     ` Richard Sandiford
@ 2015-04-28 17:12       ` Jeff Law
  2015-04-29  8:02       ` Mikhail Maltsev
  1 sibling, 0 replies; 21+ messages in thread
From: Jeff Law @ 2015-04-28 17:12 UTC (permalink / raw)
  To: Mikhail Maltsev, gcc-patches, richard.sandiford

On 04/28/2015 07:46 AM, Richard Sandiford wrote:
>
> I still think we should rename label_rtx too, because I think it's
> confusing for label_rtx to return something other than a label.
> That's probably a separate, follow-up patch though.
Seems fine as a follow-up.

>> I added the member functions. The problem is that JUMP_LABEL does not
>> always satisfy the current invariant of rtx_insn: it can also be an RTL
>> expression of type RETURN or SIMPLE_RETURN.
>
> Yeah, that's probably something that needs to change at some point.
Agreed.  These corner cases are precisely the kinds of things we want to 
be identifying and cleaning up as we go.   I haven't looked at the 
updated patch yet, but if there isn't a comment about this case, then 
there should be.

Jeff

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-04-27 20:01   ` Mikhail Maltsev
  2015-04-28 13:50     ` Richard Sandiford
@ 2015-04-28 23:55     ` Jeff Law
  1 sibling, 0 replies; 21+ messages in thread
From: Jeff Law @ 2015-04-28 23:55 UTC (permalink / raw)
  To: Mikhail Maltsev, gcc-patches, rdsandiford

On 04/27/2015 02:09 PM, Mikhail Maltsev wrote:
> I'm sending an updated patch (rebased to recent trunk, bootstrapped and
> regtested on x86_64-unknown-linux-gnu).
>
[ ... ]

>
> -- Regards, Mikhail Maltsev
>
>
> as_insn2.patch
Needs a ChangeLog.  I know it's a bit tedious...  But please include it. 
  It makes patch review easier, and we need one for the ChangeLog file 
anyway.

In general, probably more as_a conversions than I'd like.  But that may 
be unavoidable at this point.  On a positive note, I do see some going 
away as you strengthen various return and parameter types.
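
The gen_move_insn change is a good example: once its return type is
rtx_insn *, call sites such as the one in store-motion.c simply drop
the wrapper:

  -  insn = as_a <rtx_insn *> (gen_move_insn (copy_rtx (expr->pattern), reg));
  +  insn = gen_move_insn (copy_rtx (expr->pattern), reg);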

I probably would have done separate patches for the std::swap changes. 
They're not really related to the rtx subclasses work.

I'm not sure why you added indention in expand_expr_real_2's switch 
statement.  It certainly makes the patch harder to review.  I'm going to 
assume there was a good reason for the new {} pair and added indention.

So I think with a ChangeLog this is ready to go.

jeff

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-04-28 13:50     ` Richard Sandiford
  2015-04-28 17:12       ` Jeff Law
@ 2015-04-29  8:02       ` Mikhail Maltsev
  2015-04-30  3:54         ` Jeff Law
  2015-04-30  5:46         ` Jeff Law
  1 sibling, 2 replies; 21+ messages in thread
From: Mikhail Maltsev @ 2015-04-29  8:02 UTC (permalink / raw)
  To: Jeff Law, gcc-patches, richard.sandiford

[-- Attachment #1: Type: text/plain, Size: 2178 bytes --]

On 28.04.2015 16:46, Richard Sandiford wrote:
>> +inline void rtx_jump_insn::set_jump_target (rtx_insn *target) +{
>> + JUMP_LABEL(this) = target;
Fixed.

> Could these two operate on rtx_code_labels rather than rtx_insns?
Indeed, right now [set_]jump_target are not used with NOTEs, so I could
promote the type to rtx_code_label with no regressions.

> Long line -- can break before "(CASE_LABEL"
Fixed. Others too. Rechecked with contrib/check_GNU_style.sh (no
warnings, except one special case in gcc/recog.h).

On 28.04.2015 20:08, Jeff Law wrote:
> Agreed.  These corner cases are precisely the kinds of things we
> want to be identifying and cleaning up as we go.   I haven't looked
> at the updated patch yet, but if there isn't a comment about this
> case, then there should be.
Done.

> Needs a ChangeLog.  I know it's a bit tedious...  But please include
> it.  It makes patch review easier, and we need one for the ChangeLog
> file anyway.
Attached.

> In general, probably more as_a conversions than I'd like.  But that
> may be unavoidable at this point.  On a positive note, I do see some
> going away as you strengthen various return and parameter types.

Also some conversions are, perhaps, unavoidable when they come right
after a check, so the type is indeed narrowed.

> I probably would have done separate patches for the std::swap
> changes. They're not really related to the rtx subclasses work.
OK, sending 2 separate patches. Note that they are not "commutative":
std::swap should be applied before the main one, because one of the
swaps in do_compare_rtx_and_jump uses a single temporary variable of
type rtx for swapping labels and for storing generic rtl expressions
(this could be worked around, of course, but I think that would be just
a waste of time).
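
Roughly, the conflicting code in do_compare_rtx_and_jump is:

  rtx tem;
  ...
  /* The same temporary swaps the labels...  */
  tem = if_true_label;
  if_true_label = if_false_label;
  if_false_label = tem;
  ...
  /* ...and elsewhere holds an ordinary folded expression.  */
  if (CONSTANT_P (tem))
    ...

so the two labels cannot be narrowed to rtx_code_label * while they share
that rtx temporary.  With the std::swap patch applied first, each pair
keeps its own static type:

  std::swap (if_true_label, if_false_label);  /* both rtx_code_label *  */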

> I'm not sure why you added indention in expand_expr_real_2's switch
> statement.  It certainly makes the patch harder to review.  I'm going
> to assume there was a good reason for the new {} pair and added
> indention.
Sorry for that. I had to introduce a couple of variables, and I decided
to limit their scope in order to make their use less error-prone.
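
The pattern is the same as in the do_jump hunk, e.g.:

      case INTEGER_CST:
	{
	  rtx_code_label *lab = integer_zerop (exp) ? if_false_label
						    : if_true_label;
	  if (lab)
	    emit_jump (lab);
	  break;
	}

where the braces keep the new rtx_code_label local confined to that case.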


-- 
Regards,
    Mikhail Maltsev

[-- Attachment #2: as_insn3.cl --]
[-- Type: text/plain, Size: 5362 bytes --]

gcc/ChangeLog:

2015-04-29  Mikhail Maltsev  <maltsevm@gmail.com>

	Promote types of RTL expressions to more derived ones.
	* bb-reorder.c (set_edge_can_fallthru_flag): Use rtx_jump_insn where
	feasible.
	(fix_up_fall_thru_edges): Likewise.
	(fix_crossing_conditional_branches): Likewise. Promote jump targets
	from rtx_insn to rtx_code_label where feasible.
	* bt-load.c (move_btr_def): Remove as-a cast of the value returned by
	gen_move_insn (returned type changed to rtx_insn).
	* builtins.c (expand_errno_check): Fix arguments of
	do_compare_rtx_and_jump (now expects rtx_code_label).
	(expand_builtin_acc_on_device): Likewise.
	* cfgcleanup.c (try_simplify_condjump): Add cast when calling
	invert_jump (now expects rtx_jump_insn).
	* cfgexpand.c (label_rtx_for_bb): Promote return type to
	rtx_code_label.
	(construct_init_block): Use rtx_code_label.
	* cfgrtl.c (block_label): Promote return type to rtx_code_label.
	(try_redirect_by_replacing_jump): Use cast to rtx_jump_insn when
	calling redirect_jump.
	(patch_jump_insn): Likewise.
	(redirect_branch_edge): Likewise.
	(force_nonfallthru_and_redirect): Likewise.
	(fixup_reorder_chain): Explicitly use rtx_jump_insn instead of rtx_insn
	when suitable.
	(rtl_lv_add_condition_to_bb): Update call of do_compare_rtx_and_jump.
	* cfgrtl.h: Promote return type of block_label to rtx_code_label.
	* config/i386/i386.c (ix86_emit_cmove): Explicitly use rtx_code_label
	to store the value returned by gen_label_rtx.
	* dojump.c (jumpifnot): Promote argument type from rtx to
	rtx_code_label.
	(jumpifnot_1): Likewise.
	(jumpif): Likewise.
	(jumpif_1): Likewise.
	(do_jump_1): Likewise.
	(do_jump): Likewise. Use rtx_code_label when feasible.
	(do_jump_by_parts_greater_rtx): Likewise.
	(do_jump_by_parts_zero_rtx): Likewise.
	(do_jump_by_parts_equality_rtx): Likewise.
	(do_compare_rtx_and_jump): Likewise.
	* dojump.h: Update function prototypes.
	* dse.c (emit_inc_dec_insn_before): Remove cast (gen_move_insn now
	returns rtx_insn).
	* emit-rtl.c (emit_label_before): Promote return type to
	rtx_code_label.
	(emit_label): Likewise.
	* except.c (sjlj_emit_dispatch_table): Use jump_target_rtx.
	* explow.c (emit_stack_save): Update for new return type of
	gen_move_insn.
	(emit_stack_restore): Likewise.
	* expmed.c (emit_store_flag_force): Fix calls of
	do_compare_rtx_and_jump.
	(do_cmp_and_jump): Likewise.
	* expr.c (expand_expr_real_2): Likewise. Promote some local variables
	from rtx to rtx_code_label.
	* expr.h: Update return type of gen_move_insn (promote to rtx_insn).
	* function.c (convert_jumps_to_returns): Fix call of redirect_jump.
	* gcse.c (pre_insert_copy_insn): Use rtx_insn instead of rtx.
	* ifcvt.c (dead_or_predicable): Use rtx_jump_insn when calling
	invert_jump_1 and redirect_jump_1.
	* internal-fn.c (expand_arith_overflow_result_store): Fix call of
	do_compare_rtx_and_jump.
	(expand_addsub_overflow): Likewise.
	(expand_neg_overflow): Likewise.
	(expand_mul_overflow): Likewise.
	* ira.c (split_live_ranges_for_shrink_wrap): Use rtx_insn for
	return value of gen_move_insn.
	* jump.c (redirect_jump): Promote argument from rtx to rtx_jump_insn.
	* loop-unroll.c (compare_and_jump_seq): Promote rtx to rtx_code_label.
	* lra-constraints.c (emit_spill_move): Remove cast of value returned
	by gen_move_insn.
	(inherit_reload_reg): Add cast when calling dump_insn_slim.
	(split_reg): Likewise.
	* modulo-sched.c (schedule_reg_moves): Remove cast of value returned by
	gen_move_insn.
	* optabs.c (expand_binop_directly): Remove casts of values returned by
	maybe_gen_insn.
	(expand_unop_direct): Likewise.
	(expand_abs): Likewise.
	(maybe_emit_unop_insn): Likewise.
	(maybe_gen_insn): Promote return type to rtx_insn.
	* optabs.h: Update prototype of maybe_gen_insn.
	* postreload-gcse.c (eliminate_partially_redundant_load): Remove
	redundant cast.
	* recog.c (struct peep2_insn_data): Promote type of insn field to
	rtx_insn.
	(peep2_reinit_state): Use NULL instead of NULL_RTX.
	(peep2_attempt): Remove casts of insn in peep2_insn_data.
	(peep2_fill_buffer): Promote argument from rtx to rtx_insn.
	* recog.h (struct insn_gen_fn): Promote return types of function
	pointers and operator () from rtx to rtx_insn.
	* resource.c (find_dead_or_set_registers): Use dyn_cast to
	rtx_jump_insn instead of a check.  Use its jump_target method.
	* rtl.h (rtx_jump_insn::jump_label): Define new method.
	(rtx_jump_insn::jump_target): Define new method.
	(rtx_jump_insn::set_jump_target): Define new method.
	* rtlanal.c (tablejump_p): Promote type of one local variable.
	* sched-deps.c (sched_analyze_2): Promote rtx to rtx_insn_list.
	(sched_analyze_insn): Likewise.
	* sched-vis.c (print_insn_with_notes): Promote rtx to rtx_insn.
	(print_insn): Likewise.
	* stmt.c (label_rtx): Promote return type to rtx_insn.
	(force_label_rtx): Likewise.
	(jump_target_rtx): Define new function.
	(expand_label): Use it, get rid of one cast.
	(expand_naked_return): Promote rtx to rtx_code_label.
	(do_jump_if_equal): Fix do_compare_rtx_and_jump call.
	(expand_case): Use rtx_code_label instead of rtx where feasible.
	(expand_sjlj_dispatch_table): Likewise.
	(emit_case_nodes): Likewise.
	* stmt.h: Declare jump_target_rtx.  Update prototypes.  Fix comments.
	* store-motion.c (insert_store): Make use of new return type of
	gen_move_insn and remove a cast.
	(replace_store_insn): Likewise.



[-- Attachment #3: as_insn3.patch --]
[-- Type: text/plain, Size: 100219 bytes --]

diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
index c134712..7f96a3e 100644
--- a/gcc/bb-reorder.c
+++ b/gcc/bb-reorder.c
@@ -1736,9 +1736,11 @@ set_edge_can_fallthru_flag (void)
 	continue;
       if (!any_condjump_p (BB_END (bb)))
 	continue;
-      if (!invert_jump (BB_END (bb), JUMP_LABEL (BB_END (bb)), 0))
+
+      rtx_jump_insn *bb_end_jump = as_a <rtx_jump_insn *> (BB_END (bb));
+      if (!invert_jump (bb_end_jump, JUMP_LABEL (bb_end_jump), 0))
 	continue;
-      invert_jump (BB_END (bb), JUMP_LABEL (BB_END (bb)), 0);
+      invert_jump (bb_end_jump, JUMP_LABEL (bb_end_jump), 0);
       EDGE_SUCC (bb, 0)->flags |= EDGE_CAN_FALLTHRU;
       EDGE_SUCC (bb, 1)->flags |= EDGE_CAN_FALLTHRU;
     }
@@ -1893,9 +1895,15 @@ fix_up_fall_thru_edges (void)
 
 		      fall_thru_label = block_label (fall_thru->dest);
 
-		      if (old_jump && JUMP_P (old_jump) && fall_thru_label)
-			invert_worked = invert_jump (old_jump,
-						     fall_thru_label,0);
+		      if (old_jump && fall_thru_label)
+			{
+			  rtx_jump_insn *old_jump_insn =
+				dyn_cast <rtx_jump_insn *> (old_jump);
+			  if (old_jump_insn)
+			    invert_worked = invert_jump (old_jump_insn,
+							 fall_thru_label, 0);
+			}
+
 		      if (invert_worked)
 			{
 			  fall_thru->flags &= ~EDGE_FALLTHRU;
@@ -2012,10 +2020,9 @@ fix_crossing_conditional_branches (void)
   edge succ2;
   edge crossing_edge;
   edge new_edge;
-  rtx_insn *old_jump;
   rtx set_src;
   rtx old_label = NULL_RTX;
-  rtx new_label;
+  rtx_code_label *new_label;
 
   FOR_EACH_BB_FN (cur_bb, cfun)
     {
@@ -2040,7 +2047,7 @@ fix_crossing_conditional_branches (void)
 
       if (crossing_edge)
 	{
-	  old_jump = BB_END (cur_bb);
+	  rtx_jump_insn *old_jump = as_a <rtx_jump_insn *> (BB_END (cur_bb));
 
 	  /* Check to make sure the jump instruction is a
 	     conditional jump.  */
@@ -2079,7 +2086,8 @@ fix_crossing_conditional_branches (void)
 	      else
 		{
 		  basic_block last_bb;
-		  rtx_insn *new_jump;
+		  rtx_code_label *old_jump_target;
+		  rtx_jump_insn *new_jump;
 
 		  /* Create new basic block to be dest for
 		     conditional jump.  */
@@ -2090,9 +2098,10 @@ fix_crossing_conditional_branches (void)
 		  emit_label (new_label);
 
 		  gcc_assert (GET_CODE (old_label) == LABEL_REF);
-		  old_label = JUMP_LABEL (old_jump);
-		  new_jump = emit_jump_insn (gen_jump (old_label));
-		  JUMP_LABEL (new_jump) = old_label;
+		  old_jump_target = old_jump->jump_target ();
+		  new_jump = as_a <rtx_jump_insn *>
+				(emit_jump_insn (gen_jump (old_jump_target)));
+		  new_jump->set_jump_target (old_jump_target);
 
 		  last_bb = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
 		  new_bb = create_basic_block (new_label, new_jump, last_bb);
diff --git a/gcc/bt-load.c b/gcc/bt-load.c
index c028281..2280124 100644
--- a/gcc/bt-load.c
+++ b/gcc/bt-load.c
@@ -1212,7 +1212,7 @@ move_btr_def (basic_block new_def_bb, int btr, btr_def def, bitmap live_range,
   btr_mode = GET_MODE (SET_DEST (set));
   btr_rtx = gen_rtx_REG (btr_mode, btr);
 
-  new_insn = as_a <rtx_insn *> (gen_move_insn (btr_rtx, src));
+  new_insn = gen_move_insn (btr_rtx, src);
 
   /* Insert target register initialization at head of basic block.  */
   def->insn = emit_insn_after (new_insn, insp);
diff --git a/gcc/builtins.c b/gcc/builtins.c
index 028d793..9e06db8 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -2001,7 +2001,7 @@ expand_errno_check (tree exp, rtx target)
   /* Test the result; if it is NaN, set errno=EDOM because
      the argument was not in the domain.  */
   do_compare_rtx_and_jump (target, target, EQ, 0, GET_MODE (target),
-			   NULL_RTX, NULL_RTX, lab,
+			   NULL_RTX, NULL, lab,
 			   /* The jump is very likely.  */
 			   REG_BR_PROB_BASE - (REG_BR_PROB_BASE / 2000 - 1));
 
@@ -5938,9 +5938,9 @@ expand_builtin_acc_on_device (tree exp, rtx target)
   emit_move_insn (target, const1_rtx);
   rtx_code_label *done_label = gen_label_rtx ();
   do_compare_rtx_and_jump (v, v1, EQ, false, v_mode, NULL_RTX,
-			   NULL_RTX, done_label, PROB_EVEN);
+			   NULL, done_label, PROB_EVEN);
   do_compare_rtx_and_jump (v, v2, EQ, false, v_mode, NULL_RTX,
-			   NULL_RTX, done_label, PROB_EVEN);
+			   NULL, done_label, PROB_EVEN);
   emit_move_insn (target, const0_rtx);
   emit_label (done_label);
 
diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
index 477b6da..1dccb2f 100644
--- a/gcc/cfgcleanup.c
+++ b/gcc/cfgcleanup.c
@@ -190,7 +190,8 @@ try_simplify_condjump (basic_block cbranch_block)
     return false;
 
   /* Invert the conditional branch.  */
-  if (!invert_jump (cbranch_insn, block_label (jump_dest_block), 0))
+  if (!invert_jump (as_a <rtx_jump_insn *> (cbranch_insn),
+		    block_label (jump_dest_block), 0))
     return false;
 
   if (dump_file)
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 5905ddb..049230d 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -2051,7 +2051,7 @@ static hash_map<basic_block, rtx_code_label *> *lab_rtx_for_bb;
 
 /* Returns the label_rtx expression for a label starting basic block BB.  */
 
-static rtx
+static rtx_code_label *
 label_rtx_for_bb (basic_block bb ATTRIBUTE_UNUSED)
 {
   gimple_stmt_iterator gsi;
@@ -2078,7 +2078,7 @@ label_rtx_for_bb (basic_block bb ATTRIBUTE_UNUSED)
       if (DECL_NONLOCAL (lab))
 	break;
 
-      return label_rtx (lab);
+      return jump_target_rtx (lab);
     }
 
   rtx_code_label *l = gen_label_rtx ();
@@ -3120,7 +3120,7 @@ expand_goto (tree label)
   gcc_assert (!context || context == current_function_decl);
 #endif
 
-  emit_jump (label_rtx (label));
+  emit_jump (jump_target_rtx (label));
 }
 
 /* Output a return with no value.  */
@@ -5579,7 +5579,7 @@ construct_init_block (void)
     {
       tree label = gimple_block_label (e->dest);
 
-      emit_jump (label_rtx (label));
+      emit_jump (jump_target_rtx (label));
       flags = 0;
     }
   else
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 8a75044..f00b4f3 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -999,18 +999,18 @@ rtl_can_merge_blocks (basic_block a, basic_block b)
 /* Return the label in the head of basic block BLOCK.  Create one if it doesn't
    exist.  */
 
-rtx
+rtx_code_label *
 block_label (basic_block block)
 {
   if (block == EXIT_BLOCK_PTR_FOR_FN (cfun))
-    return NULL_RTX;
+    return NULL;
 
   if (!LABEL_P (BB_HEAD (block)))
     {
       BB_HEAD (block) = emit_label_before (gen_label_rtx (), BB_HEAD (block));
     }
 
-  return BB_HEAD (block);
+  return as_a <rtx_code_label *> (BB_HEAD (block));
 }
 
 /* Attempt to perform edge redirection by replacing possibly complex jump
@@ -1110,7 +1110,8 @@ try_redirect_by_replacing_jump (edge e, basic_block target, bool in_cfglayout)
       if (dump_file)
 	fprintf (dump_file, "Redirecting jump %i from %i to %i.\n",
 		 INSN_UID (insn), e->dest->index, target->index);
-      if (!redirect_jump (insn, block_label (target), 0))
+      if (!redirect_jump (as_a <rtx_jump_insn *> (insn),
+			  block_label (target), 0))
 	{
 	  gcc_assert (target == EXIT_BLOCK_PTR_FOR_FN (cfun));
 	  return NULL;
@@ -1294,7 +1295,8 @@ patch_jump_insn (rtx_insn *insn, rtx_insn *old_label, basic_block new_bb)
 	  /* If the substitution doesn't succeed, die.  This can happen
 	     if the back end emitted unrecognizable instructions or if
 	     target is exit block on some arches.  */
-	  if (!redirect_jump (insn, block_label (new_bb), 0))
+	  if (!redirect_jump (as_a <rtx_jump_insn *> (insn),
+			      block_label (new_bb), 0))
 	    {
 	      gcc_assert (new_bb == EXIT_BLOCK_PTR_FOR_FN (cfun));
 	      return false;
@@ -1322,7 +1324,7 @@ redirect_branch_edge (edge e, basic_block target)
 
   if (!currently_expanding_to_rtl)
     {
-      if (!patch_jump_insn (insn, old_label, target))
+      if (!patch_jump_insn (as_a <rtx_jump_insn *> (insn), old_label, target))
 	return NULL;
     }
   else
@@ -1330,7 +1332,8 @@ redirect_branch_edge (edge e, basic_block target)
        jumps (i.e. not yet split by find_many_sub_basic_blocks).
        Redirect all of those that match our label.  */
     FOR_BB_INSNS (src, insn)
-      if (JUMP_P (insn) && !patch_jump_insn (insn, old_label, target))
+      if (JUMP_P (insn) && !patch_jump_insn (as_a <rtx_jump_insn *> (insn),
+					     old_label, target))
 	return NULL;
 
   if (dump_file)
@@ -1521,7 +1524,8 @@ force_nonfallthru_and_redirect (edge e, basic_block target, rtx jump_label)
       edge b = unchecked_make_edge (e->src, target, 0);
       bool redirected;
 
-      redirected = redirect_jump (BB_END (e->src), block_label (target), 0);
+      redirected = redirect_jump (as_a <rtx_jump_insn *> (BB_END (e->src)),
+				  block_label (target), 0);
       gcc_assert (redirected);
 
       note = find_reg_note (BB_END (e->src), REG_BR_PROB, NULL_RTX);
@@ -3775,10 +3779,10 @@ fixup_reorder_chain (void)
 	  e_taken = e;
 
       bb_end_insn = BB_END (bb);
-      if (JUMP_P (bb_end_insn))
+      if (rtx_jump_insn *bb_end_jump = dyn_cast <rtx_jump_insn *> (bb_end_insn))
 	{
-	  ret_label = JUMP_LABEL (bb_end_insn);
-	  if (any_condjump_p (bb_end_insn))
+	  ret_label = JUMP_LABEL (bb_end_jump);
+	  if (any_condjump_p (bb_end_jump))
 	    {
 	      /* This might happen if the conditional jump has side
 		 effects and could therefore not be optimized away.
@@ -3786,10 +3790,10 @@ fixup_reorder_chain (void)
 		 to prevent rtl_verify_flow_info from complaining.  */
 	      if (!e_fall)
 		{
-		  gcc_assert (!onlyjump_p (bb_end_insn)
-			      || returnjump_p (bb_end_insn)
+		  gcc_assert (!onlyjump_p (bb_end_jump)
+			      || returnjump_p (bb_end_jump)
                               || (e_taken->flags & EDGE_CROSSING));
-		  emit_barrier_after (bb_end_insn);
+		  emit_barrier_after (bb_end_jump);
 		  continue;
 		}
 
@@ -3811,11 +3815,11 @@ fixup_reorder_chain (void)
 		 edge based on known or assumed probability.  */
 	      else if (bb->aux != e_taken->dest)
 		{
-		  rtx note = find_reg_note (bb_end_insn, REG_BR_PROB, 0);
+		  rtx note = find_reg_note (bb_end_jump, REG_BR_PROB, 0);
 
 		  if (note
 		      && XINT (note, 0) < REG_BR_PROB_BASE / 2
-		      && invert_jump (bb_end_insn,
+		      && invert_jump (bb_end_jump,
 				      (e_fall->dest
 				       == EXIT_BLOCK_PTR_FOR_FN (cfun)
 				       ? NULL_RTX
@@ -3838,7 +3842,7 @@ fixup_reorder_chain (void)
 
 	      /* Otherwise we can try to invert the jump.  This will
 		 basically never fail, however, keep up the pretense.  */
-	      else if (invert_jump (bb_end_insn,
+	      else if (invert_jump (bb_end_jump,
 				    (e_fall->dest
 				     == EXIT_BLOCK_PTR_FOR_FN (cfun)
 				     ? NULL_RTX
@@ -4955,7 +4959,7 @@ rtl_lv_add_condition_to_bb (basic_block first_head ,
 			    basic_block second_head ATTRIBUTE_UNUSED,
 			    basic_block cond_bb, void *comp_rtx)
 {
-  rtx label;
+  rtx_code_label *label;
   rtx_insn *seq, *jump;
   rtx op0 = XEXP ((rtx)comp_rtx, 0);
   rtx op1 = XEXP ((rtx)comp_rtx, 1);
@@ -4971,8 +4975,7 @@ rtl_lv_add_condition_to_bb (basic_block first_head ,
   start_sequence ();
   op0 = force_operand (op0, NULL_RTX);
   op1 = force_operand (op1, NULL_RTX);
-  do_compare_rtx_and_jump (op0, op1, comp, 0,
-			   mode, NULL_RTX, NULL_RTX, label, -1);
+  do_compare_rtx_and_jump (op0, op1, comp, 0, mode, NULL_RTX, NULL, label, -1);
   jump = get_last_insn ();
   JUMP_LABEL (jump) = label;
   LABEL_NUSES (label)++;
diff --git a/gcc/cfgrtl.h b/gcc/cfgrtl.h
index 32c8ff6..cdf1477 100644
--- a/gcc/cfgrtl.h
+++ b/gcc/cfgrtl.h
@@ -33,7 +33,7 @@ extern bool contains_no_active_insn_p (const_basic_block);
 extern bool forwarder_block_p (const_basic_block);
 extern bool can_fallthru (basic_block, basic_block);
 extern rtx_note *bb_note (basic_block);
-extern rtx block_label (basic_block);
+extern rtx_code_label *block_label (basic_block);
 extern edge try_redirect_by_replacing_jump (edge, basic_block, bool);
 extern void emit_barrier_after_bb (basic_block bb);
 extern basic_block force_nonfallthru_and_redirect (edge, basic_block, rtx);
diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index 77a6109..9896f21 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -38390,7 +38390,7 @@ ix86_emit_cmove (rtx dst, rtx src, enum rtx_code code, rtx op1, rtx op2)
     }
   else
     {
-      rtx nomove = gen_label_rtx ();
+      rtx_code_label *nomove = gen_label_rtx ();
       emit_cmp_and_jump_insns (op1, op2, reverse_condition (code),
 			       const0_rtx, GET_MODE (op1), 1, nomove);
       emit_move_insn (dst, src);
diff --git a/gcc/dojump.c b/gcc/dojump.c
index 0790c77..456ddea 100644
--- a/gcc/dojump.c
+++ b/gcc/dojump.c
@@ -61,10 +61,12 @@ along with GCC; see the file COPYING3.  If not see
 #include "tm_p.h"
 
 static bool prefer_and_bit_test (machine_mode, int);
-static void do_jump_by_parts_greater (tree, tree, int, rtx, rtx, int);
-static void do_jump_by_parts_equality (tree, tree, rtx, rtx, int);
-static void do_compare_and_jump	(tree, tree, enum rtx_code, enum rtx_code, rtx,
-				 rtx, int);
+static void do_jump_by_parts_greater (tree, tree, int,
+				      rtx_code_label *, rtx_code_label *, int);
+static void do_jump_by_parts_equality (tree, tree, rtx_code_label *,
+				       rtx_code_label *, int);
+static void do_compare_and_jump	(tree, tree, enum rtx_code, enum rtx_code,
+				 rtx_code_label *, rtx_code_label *, int);
 
 /* Invert probability if there is any.  -1 stands for unknown.  */
 
@@ -146,34 +148,34 @@ restore_pending_stack_adjust (saved_pending_stack_adjust *save)
 \f
 /* Expand conditional expressions.  */
 
-/* Generate code to evaluate EXP and jump to LABEL if the value is zero.
-   LABEL is an rtx of code CODE_LABEL, in this function and all the
-   functions here.  */
+/* Generate code to evaluate EXP and jump to LABEL if the value is zero.  */
 
 void
-jumpifnot (tree exp, rtx label, int prob)
+jumpifnot (tree exp, rtx_code_label *label, int prob)
 {
-  do_jump (exp, label, NULL_RTX, inv (prob));
+  do_jump (exp, label, NULL, inv (prob));
 }
 
 void
-jumpifnot_1 (enum tree_code code, tree op0, tree op1, rtx label, int prob)
+jumpifnot_1 (enum tree_code code, tree op0, tree op1, rtx_code_label *label,
+	     int prob)
 {
-  do_jump_1 (code, op0, op1, label, NULL_RTX, inv (prob));
+  do_jump_1 (code, op0, op1, label, NULL, inv (prob));
 }
 
 /* Generate code to evaluate EXP and jump to LABEL if the value is nonzero.  */
 
 void
-jumpif (tree exp, rtx label, int prob)
+jumpif (tree exp, rtx_code_label *label, int prob)
 {
-  do_jump (exp, NULL_RTX, label, prob);
+  do_jump (exp, NULL, label, prob);
 }
 
 void
-jumpif_1 (enum tree_code code, tree op0, tree op1, rtx label, int prob)
+jumpif_1 (enum tree_code code, tree op0, tree op1,
+	  rtx_code_label *label, int prob)
 {
-  do_jump_1 (code, op0, op1, NULL_RTX, label, prob);
+  do_jump_1 (code, op0, op1, NULL, label, prob);
 }
 
 /* Used internally by prefer_and_bit_test.  */
@@ -225,7 +227,8 @@ prefer_and_bit_test (machine_mode mode, int bitnum)
 
 void
 do_jump_1 (enum tree_code code, tree op0, tree op1,
-	   rtx if_false_label, rtx if_true_label, int prob)
+	   rtx_code_label *if_false_label, rtx_code_label *if_true_label,
+	   int prob)
 {
   machine_mode mode;
   rtx_code_label *drop_through_label = 0;
@@ -378,15 +381,15 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
             op0_prob = inv (op0_false_prob);
             op1_prob = inv (op1_false_prob);
           }
-        if (if_false_label == NULL_RTX)
+	if (if_false_label == NULL)
           {
             drop_through_label = gen_label_rtx ();
-            do_jump (op0, drop_through_label, NULL_RTX, op0_prob);
-            do_jump (op1, NULL_RTX, if_true_label, op1_prob);
+	    do_jump (op0, drop_through_label, NULL, op0_prob);
+	    do_jump (op1, NULL, if_true_label, op1_prob);
           }
         else
           {
-            do_jump (op0, if_false_label, NULL_RTX, op0_prob);
+	    do_jump (op0, if_false_label, NULL, op0_prob);
             do_jump (op1, if_false_label, if_true_label, op1_prob);
           }
         break;
@@ -405,18 +408,18 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
           {
             op0_prob = prob / 2;
             op1_prob = GCOV_COMPUTE_SCALE ((prob / 2), inv (op0_prob));
-          }
-        if (if_true_label == NULL_RTX)
-          {
-            drop_through_label = gen_label_rtx ();
-            do_jump (op0, NULL_RTX, drop_through_label, op0_prob);
-            do_jump (op1, if_false_label, NULL_RTX, op1_prob);
-          }
-        else
-          {
-            do_jump (op0, NULL_RTX, if_true_label, op0_prob);
-            do_jump (op1, if_false_label, if_true_label, op1_prob);
-          }
+	  }
+	if (if_true_label == NULL)
+	  {
+	    drop_through_label = gen_label_rtx ();
+	    do_jump (op0, NULL, drop_through_label, op0_prob);
+	    do_jump (op1, if_false_label, NULL, op1_prob);
+	  }
+	else
+	  {
+	    do_jump (op0, NULL, if_true_label, op0_prob);
+	    do_jump (op1, if_false_label, if_true_label, op1_prob);
+	  }
         break;
       }
 
@@ -443,14 +446,15 @@ do_jump_1 (enum tree_code code, tree op0, tree op1,
    PROB is probability of jump to if_true_label, or -1 if unknown.  */
 
 void
-do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
+do_jump (tree exp, rtx_code_label *if_false_label,
+	 rtx_code_label *if_true_label, int prob)
 {
   enum tree_code code = TREE_CODE (exp);
   rtx temp;
   int i;
   tree type;
   machine_mode mode;
-  rtx_code_label *drop_through_label = 0;
+  rtx_code_label *drop_through_label = NULL;
 
   switch (code)
     {
@@ -458,10 +462,13 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
       break;
 
     case INTEGER_CST:
-      temp = integer_zerop (exp) ? if_false_label : if_true_label;
-      if (temp)
-        emit_jump (temp);
-      break;
+      {
+	rtx_code_label *lab = integer_zerop (exp) ? if_false_label
+						  : if_true_label;
+	if (lab)
+	  emit_jump (lab);
+	break;
+      }
 
 #if 0
       /* This is not true with #pragma weak  */
@@ -511,7 +518,7 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
 	  }
 
         do_pending_stack_adjust ();
-	do_jump (TREE_OPERAND (exp, 0), label1, NULL_RTX, -1);
+	do_jump (TREE_OPERAND (exp, 0), label1, NULL, -1);
 	do_jump (TREE_OPERAND (exp, 1), if_false_label, if_true_label, prob);
         emit_label (label1);
 	do_jump (TREE_OPERAND (exp, 2), if_false_label, if_true_label, prob);
@@ -555,7 +562,7 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
       if (integer_onep (TREE_OPERAND (exp, 1)))
 	{
 	  tree exp0 = TREE_OPERAND (exp, 0);
-	  rtx set_label, clr_label;
+	  rtx_code_label *set_label, *clr_label;
 	  int setclr_prob = prob;
 
 	  /* Strip narrowing integral type conversions.  */
@@ -684,11 +691,12 @@ do_jump (tree exp, rtx if_false_label, rtx if_true_label, int prob)
 
 static void
 do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
-			      rtx op1, rtx if_false_label, rtx if_true_label,
+			      rtx op1, rtx_code_label *if_false_label,
+			      rtx_code_label *if_true_label,
 			      int prob)
 {
   int nwords = (GET_MODE_SIZE (mode) / UNITS_PER_WORD);
-  rtx drop_through_label = 0;
+  rtx_code_label *drop_through_label = 0;
   bool drop_through_if_true = false, drop_through_if_false = false;
   enum rtx_code code = GT;
   int i;
@@ -735,7 +743,7 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
 
       /* All but high-order word must be compared as unsigned.  */
       do_compare_rtx_and_jump (op0_word, op1_word, code, (unsignedp || i > 0),
-			       word_mode, NULL_RTX, NULL_RTX, if_true_label,
+			       word_mode, NULL_RTX, NULL, if_true_label,
 			       prob);
 
       /* Emit only one comparison for 0.  Do not emit the last cond jump.  */
@@ -744,7 +752,7 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
 
       /* Consider lower words only if these are equal.  */
       do_compare_rtx_and_jump (op0_word, op1_word, NE, unsignedp, word_mode,
-			       NULL_RTX, NULL_RTX, if_false_label, inv (prob));
+			       NULL_RTX, NULL, if_false_label, inv (prob));
     }
 
   if (!drop_through_if_false)
@@ -760,7 +768,8 @@ do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
 
 static void
 do_jump_by_parts_greater (tree treeop0, tree treeop1, int swap,
-			  rtx if_false_label, rtx if_true_label, int prob)
+			  rtx_code_label *if_false_label,
+			  rtx_code_label *if_true_label, int prob)
 {
   rtx op0 = expand_normal (swap ? treeop1 : treeop0);
   rtx op1 = expand_normal (swap ? treeop0 : treeop1);
@@ -773,17 +782,18 @@ do_jump_by_parts_greater (tree treeop0, tree treeop1, int swap,
 \f
 /* Jump according to whether OP0 is 0.  We assume that OP0 has an integer
    mode, MODE, that is too wide for the available compare insns.  Either
-   Either (but not both) of IF_TRUE_LABEL and IF_FALSE_LABEL may be NULL_RTX
+   Either (but not both) of IF_TRUE_LABEL and IF_FALSE_LABEL may be NULL
    to indicate drop through.  */
 
 static void
 do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
-			   rtx if_false_label, rtx if_true_label, int prob)
+			   rtx_code_label *if_false_label,
+			   rtx_code_label *if_true_label, int prob)
 {
   int nwords = GET_MODE_SIZE (mode) / UNITS_PER_WORD;
   rtx part;
   int i;
-  rtx drop_through_label = 0;
+  rtx_code_label *drop_through_label = NULL;
 
   /* The fastest way of doing this comparison on almost any machine is to
      "or" all the words and compare the result.  If all have to be loaded
@@ -806,12 +816,12 @@ do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
 
   /* If we couldn't do the "or" simply, do this with a series of compares.  */
   if (! if_false_label)
-    drop_through_label = if_false_label = gen_label_rtx ();
+    if_false_label = drop_through_label = gen_label_rtx ();
 
   for (i = 0; i < nwords; i++)
     do_compare_rtx_and_jump (operand_subword_force (op0, i, mode),
                              const0_rtx, EQ, 1, word_mode, NULL_RTX,
-			     if_false_label, NULL_RTX, prob);
+			     if_false_label, NULL, prob);
 
   if (if_true_label)
     emit_jump (if_true_label);
@@ -827,10 +837,11 @@ do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
 
 static void
 do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
-			       rtx if_false_label, rtx if_true_label, int prob)
+			       rtx_code_label *if_false_label,
+			       rtx_code_label *if_true_label, int prob)
 {
   int nwords = (GET_MODE_SIZE (mode) / UNITS_PER_WORD);
-  rtx drop_through_label = 0;
+  rtx_code_label *drop_through_label = NULL;
   int i;
 
   if (op1 == const0_rtx)
@@ -853,7 +864,7 @@ do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
     do_compare_rtx_and_jump (operand_subword_force (op0, i, mode),
                              operand_subword_force (op1, i, mode),
                              EQ, 0, word_mode, NULL_RTX,
-			     if_false_label, NULL_RTX, prob);
+			     if_false_label, NULL, prob);
 
   if (if_true_label)
     emit_jump (if_true_label);
@@ -865,8 +876,9 @@ do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
    with one insn, test the comparison and jump to the appropriate label.  */
 
 static void
-do_jump_by_parts_equality (tree treeop0, tree treeop1, rtx if_false_label,
-			   rtx if_true_label, int prob)
+do_jump_by_parts_equality (tree treeop0, tree treeop1,
+			   rtx_code_label *if_false_label,
+			   rtx_code_label *if_true_label, int prob)
 {
   rtx op0 = expand_normal (treeop0);
   rtx op1 = expand_normal (treeop1);
@@ -961,11 +973,12 @@ split_comparison (enum rtx_code code, machine_mode mode,
 
 void
 do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
-			 machine_mode mode, rtx size, rtx if_false_label,
-			 rtx if_true_label, int prob)
+			 machine_mode mode, rtx size,
+			 rtx_code_label *if_false_label,
+			 rtx_code_label *if_true_label, int prob)
 {
   rtx tem;
-  rtx dummy_label = NULL;
+  rtx_code_label *dummy_label = NULL;
 
   /* Reverse the comparison if that is safe and we want to jump if it is
      false.  Also convert to the reverse comparison if the target can
@@ -1010,8 +1023,9 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
     {
       if (CONSTANT_P (tem))
 	{
-	  rtx label = (tem == const0_rtx || tem == CONST0_RTX (mode))
-		      ? if_false_label : if_true_label;
+	  rtx_code_label *label = (tem == const0_rtx
+				   || tem == CONST0_RTX (mode))
+					? if_false_label : if_true_label;
 	  if (label)
 	    emit_jump (label);
 	  return;
@@ -1130,7 +1144,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 		first_prob = REG_BR_PROB_BASE - REG_BR_PROB_BASE / 100;
 	      if (and_them)
 		{
-		  rtx dest_label;
+		  rtx_code_label *dest_label;
 		  /* If we only jump if true, just bypass the second jump.  */
 		  if (! if_false_label)
 		    {
@@ -1141,13 +1155,11 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 		  else
 		    dest_label = if_false_label;
                   do_compare_rtx_and_jump (op0, op1, first_code, unsignedp, mode,
-					   size, dest_label, NULL_RTX,
-					   first_prob);
+					   size, dest_label, NULL, first_prob);
 		}
               else
                 do_compare_rtx_and_jump (op0, op1, first_code, unsignedp, mode,
-					 size, NULL_RTX, if_true_label,
-					 first_prob);
+					 size, NULL, if_true_label, first_prob);
 	    }
 	}
 
@@ -1173,8 +1185,9 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 
 static void
 do_compare_and_jump (tree treeop0, tree treeop1, enum rtx_code signed_code,
-		     enum rtx_code unsigned_code, rtx if_false_label,
-		     rtx if_true_label, int prob)
+		     enum rtx_code unsigned_code,
+		     rtx_code_label *if_false_label,
+		     rtx_code_label *if_true_label, int prob)
 {
   rtx op0, op1;
   tree type;
diff --git a/gcc/dojump.h b/gcc/dojump.h
index 74d3f37..1b64ea7 100644
--- a/gcc/dojump.h
+++ b/gcc/dojump.h
@@ -57,20 +57,23 @@ extern void save_pending_stack_adjust (saved_pending_stack_adjust *);
 extern void restore_pending_stack_adjust (saved_pending_stack_adjust *);
 
 /* Generate code to evaluate EXP and jump to LABEL if the value is zero.  */
-extern void jumpifnot (tree, rtx, int);
-extern void jumpifnot_1 (enum tree_code, tree, tree, rtx, int);
+extern void jumpifnot (tree exp, rtx_code_label *label, int prob);
+extern void jumpifnot_1 (enum tree_code, tree, tree, rtx_code_label *, int);
 
 /* Generate code to evaluate EXP and jump to LABEL if the value is nonzero.  */
-extern void jumpif (tree, rtx, int);
-extern void jumpif_1 (enum tree_code, tree, tree, rtx, int);
+extern void jumpif (tree exp, rtx_code_label *label, int prob);
+extern void jumpif_1 (enum tree_code, tree, tree, rtx_code_label *, int);
 
 /* Generate code to evaluate EXP and jump to IF_FALSE_LABEL if
    the result is zero, or IF_TRUE_LABEL if the result is one.  */
-extern void do_jump (tree, rtx, rtx, int);
-extern void do_jump_1 (enum tree_code, tree, tree, rtx, rtx, int);
+extern void do_jump (tree exp, rtx_code_label *if_false_label,
+		     rtx_code_label *if_true_label, int prob);
+extern void do_jump_1 (enum tree_code, tree, tree, rtx_code_label *,
+		       rtx_code_label *, int);
 
 extern void do_compare_rtx_and_jump (rtx, rtx, enum rtx_code, int,
-				     machine_mode, rtx, rtx, rtx, int);
+				     machine_mode, rtx, rtx_code_label *,
+				     rtx_code_label *, int);
 
 extern bool split_comparison (enum rtx_code, machine_mode,
 			      enum rtx_code *, enum rtx_code *);
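
(Aside for reviewers, not part of the patch: with the dojump.h
prototypes above, a caller that only needs a "true" target now passes
a plain NULL for the unused label and a real rtx_code_label * for the
branch target.  A minimal sketch, with made-up operands op0/op1/target
and mode:

  rtx_code_label *done = gen_label_rtx ();
  /* Branch to DONE when OP0 == OP1; the false label is unused, so it
     is now a plain NULL rather than NULL_RTX.  */
  do_compare_rtx_and_jump (op0, op1, EQ, /*unsignedp=*/1, mode,
                           NULL_RTX /* size */, NULL, done, -1);
  emit_move_insn (target, const0_rtx);
  emit_label (done);

This is the pattern the expmed.c and internal-fn.c hunks below follow.)
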
diff --git a/gcc/dse.c b/gcc/dse.c
index 603cdbd..3b3662b 100644
--- a/gcc/dse.c
+++ b/gcc/dse.c
@@ -907,7 +907,7 @@ emit_inc_dec_insn_before (rtx mem ATTRIBUTE_UNUSED,
       end_sequence ();
     }
   else
-    new_insn = as_a <rtx_insn *> (gen_move_insn (dest, src));
+    new_insn = gen_move_insn (dest, src);
   info.first = new_insn;
   info.fixed_regs_live = insn_info->fixed_regs_live;
   info.failure = false;
diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index b48f88b..10a8cc9 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -4441,13 +4441,15 @@ emit_barrier_before (rtx before)
 
 /* Emit the label LABEL before the insn BEFORE.  */
 
-rtx_insn *
-emit_label_before (rtx label, rtx_insn *before)
+rtx_code_label *
+emit_label_before (rtx uncast_label, rtx_insn *before)
 {
+  rtx_code_label *label = as_a <rtx_code_label *> (uncast_label);
+
   gcc_checking_assert (INSN_UID (label) == 0);
   INSN_UID (label) = cur_insn_uid++;
   add_insn_before (label, before, NULL);
-  return as_a <rtx_insn *> (label);
+  return label;
 }
 \f
 /* Helper for emit_insn_after, handles lists of instructions
@@ -5068,13 +5070,15 @@ emit_call_insn (rtx x)
 
 /* Add the label LABEL to the end of the doubly-linked list.  */
 
-rtx_insn *
-emit_label (rtx label)
+rtx_code_label *
+emit_label (rtx uncast_label)
 {
+  rtx_code_label *label = as_a <rtx_code_label *> (uncast_label);
+
   gcc_checking_assert (INSN_UID (label) == 0);
   INSN_UID (label) = cur_insn_uid++;
-  add_insn (as_a <rtx_insn *> (label));
-  return as_a <rtx_insn *> (label);
+  add_insn (label);
+  return label;
 }
 
 /* Make an insn of code JUMP_TABLE_DATA
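
(Aside, illustration only: since emit_label and emit_label_before now
return rtx_code_label *, callers can keep the derived type without a
cast.  Sketch:

  rtx_code_label *lab = emit_label (gen_label_rtx ());
  /* LAB keeps its static type across emission; no as_a <rtx_insn *>
     wrapper around the call is needed any more.  */

)
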
diff --git a/gcc/except.c b/gcc/except.c
index d609592..c2b8214 100644
--- a/gcc/except.c
+++ b/gcc/except.c
@@ -1349,7 +1349,7 @@ sjlj_emit_dispatch_table (rtx_code_label *dispatch_label, int num_dispatch)
     if (lp && lp->post_landing_pad)
       {
 	rtx_insn *seq2;
-	rtx label;
+	rtx_code_label *label;
 
 	start_sequence ();
 
@@ -1363,7 +1363,7 @@ sjlj_emit_dispatch_table (rtx_code_label *dispatch_label, int num_dispatch)
 	    t = build_int_cst (integer_type_node, disp_index);
 	    case_elt = build_case_label (t, NULL, t_label);
 	    dispatch_labels.quick_push (case_elt);
-	    label = label_rtx (t_label);
+	    label = jump_target_rtx (t_label);
 	  }
 	else
 	  label = gen_label_rtx ();
diff --git a/gcc/explow.c b/gcc/explow.c
index de446a9..57cb767 100644
--- a/gcc/explow.c
+++ b/gcc/explow.c
@@ -984,7 +984,7 @@ emit_stack_save (enum save_level save_level, rtx *psave)
 {
   rtx sa = *psave;
   /* The default is that we use a move insn and save in a Pmode object.  */
-  rtx (*fcn) (rtx, rtx) = gen_move_insn;
+  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
   machine_mode mode = STACK_SAVEAREA_MODE (save_level);
 
   /* See if this machine has anything special to do for this kind of save.  */
@@ -1039,7 +1039,7 @@ void
 emit_stack_restore (enum save_level save_level, rtx sa)
 {
   /* The default is that we use a move insn.  */
-  rtx (*fcn) (rtx, rtx) = gen_move_insn;
+  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
 
   /* If stack_realign_drap, the x86 backend emits a prologue that aligns both
      STACK_POINTER and HARD_FRAME_POINTER.
diff --git a/gcc/expmed.c b/gcc/expmed.c
index 6679f50..f180688 100644
--- a/gcc/expmed.c
+++ b/gcc/expmed.c
@@ -5807,8 +5807,8 @@ emit_store_flag_force (rtx target, enum rtx_code code, rtx op0, rtx op1,
       && op1 == const0_rtx)
     {
       label = gen_label_rtx ();
-      do_compare_rtx_and_jump (target, const0_rtx, EQ, unsignedp,
-			       mode, NULL_RTX, NULL_RTX, label, -1);
+      do_compare_rtx_and_jump (target, const0_rtx, EQ, unsignedp, mode,
+			       NULL_RTX, NULL, label, -1);
       emit_move_insn (target, trueval);
       emit_label (label);
       return target;
@@ -5845,8 +5845,8 @@ emit_store_flag_force (rtx target, enum rtx_code code, rtx op0, rtx op1,
 
   emit_move_insn (target, trueval);
   label = gen_label_rtx ();
-  do_compare_rtx_and_jump (op0, op1, code, unsignedp, mode, NULL_RTX,
-			   NULL_RTX, label, -1);
+  do_compare_rtx_and_jump (op0, op1, code, unsignedp, mode, NULL_RTX, NULL,
+			   label, -1);
 
   emit_move_insn (target, falseval);
   emit_label (label);
@@ -5863,6 +5863,6 @@ do_cmp_and_jump (rtx arg1, rtx arg2, enum rtx_code op, machine_mode mode,
 		 rtx_code_label *label)
 {
   int unsignedp = (op == LTU || op == LEU || op == GTU || op == GEU);
-  do_compare_rtx_and_jump (arg1, arg2, op, unsignedp, mode,
-			   NULL_RTX, NULL_RTX, label, -1);
+  do_compare_rtx_and_jump (arg1, arg2, op, unsignedp, mode, NULL_RTX,
+			   NULL, label, -1);
 }
diff --git a/gcc/expr.c b/gcc/expr.c
index 25aa11f..85efaa3 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -3652,7 +3652,7 @@ emit_move_insn (rtx x, rtx y)
 /* Generate the body of an instruction to copy Y into X.
    It may be a list of insns, if one insn isn't enough.  */
 
-rtx
+rtx_insn *
 gen_move_insn (rtx x, rtx y)
 {
   rtx_insn *seq;
@@ -8128,6 +8128,7 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
 		    enum expand_modifier modifier)
 {
   rtx op0, op1, op2, temp;
+  rtx_code_label *lab;
   tree type;
   int unsignedp;
   machine_mode mode;
@@ -8937,13 +8938,13 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
 	if (target != op0)
 	  emit_move_insn (target, op0);
 
-	temp = gen_label_rtx ();
+	lab = gen_label_rtx ();
 	do_compare_rtx_and_jump (target, cmpop1, comparison_code,
-				 unsignedp, mode, NULL_RTX, NULL_RTX, temp,
+				 unsignedp, mode, NULL_RTX, NULL, lab,
 				 -1);
       }
       emit_move_insn (target, op1);
-      emit_label (temp);
+      emit_label (lab);
       return target;
 
     case BIT_NOT_EXPR:
@@ -9021,38 +9022,39 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
     case UNGE_EXPR:
     case UNEQ_EXPR:
     case LTGT_EXPR:
-      temp = do_store_flag (ops,
-			    modifier != EXPAND_STACK_PARM ? target : NULL_RTX,
-			    tmode != VOIDmode ? tmode : mode);
-      if (temp)
-	return temp;
-
-      /* Use a compare and a jump for BLKmode comparisons, or for function
-	 type comparisons is HAVE_canonicalize_funcptr_for_compare.  */
-
-      if ((target == 0
-	   || modifier == EXPAND_STACK_PARM
-	   || ! safe_from_p (target, treeop0, 1)
-	   || ! safe_from_p (target, treeop1, 1)
-	   /* Make sure we don't have a hard reg (such as function's return
-	      value) live across basic blocks, if not optimizing.  */
-	   || (!optimize && REG_P (target)
-	       && REGNO (target) < FIRST_PSEUDO_REGISTER)))
-	target = gen_reg_rtx (tmode != VOIDmode ? tmode : mode);
+      {
+	temp = do_store_flag (ops,
+			      modifier != EXPAND_STACK_PARM ? target : NULL_RTX,
+			      tmode != VOIDmode ? tmode : mode);
+	if (temp)
+	  return temp;
 
-      emit_move_insn (target, const0_rtx);
+	/* Use a compare and a jump for BLKmode comparisons, or for function
+	   type comparisons if HAVE_canonicalize_funcptr_for_compare.  */
+
+	if ((target == 0
+	     || modifier == EXPAND_STACK_PARM
+	     || ! safe_from_p (target, treeop0, 1)
+	     || ! safe_from_p (target, treeop1, 1)
+	     /* Make sure we don't have a hard reg (such as function's return
+		value) live across basic blocks, if not optimizing.  */
+	     || (!optimize && REG_P (target)
+		 && REGNO (target) < FIRST_PSEUDO_REGISTER)))
+	  target = gen_reg_rtx (tmode != VOIDmode ? tmode : mode);
 
-      op1 = gen_label_rtx ();
-      jumpifnot_1 (code, treeop0, treeop1, op1, -1);
+	emit_move_insn (target, const0_rtx);
 
-      if (TYPE_PRECISION (type) == 1 && !TYPE_UNSIGNED (type))
-	emit_move_insn (target, constm1_rtx);
-      else
-	emit_move_insn (target, const1_rtx);
+	rtx_code_label *lab1 = gen_label_rtx ();
+	jumpifnot_1 (code, treeop0, treeop1, lab1, -1);
 
-      emit_label (op1);
-      return target;
+	if (TYPE_PRECISION (type) == 1 && !TYPE_UNSIGNED (type))
+	  emit_move_insn (target, constm1_rtx);
+	else
+	  emit_move_insn (target, const1_rtx);
 
+	emit_label (lab1);
+	return target;
+      }
     case COMPLEX_EXPR:
       /* Get the rtx code of the operands.  */
       op0 = expand_normal (treeop0);
@@ -9275,58 +9277,60 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
       }
 
     case COND_EXPR:
-      /* A COND_EXPR with its type being VOID_TYPE represents a
-	 conditional jump and is handled in
-	 expand_gimple_cond_expr.  */
-      gcc_assert (!VOID_TYPE_P (type));
-
-      /* Note that COND_EXPRs whose type is a structure or union
-	 are required to be constructed to contain assignments of
-	 a temporary variable, so that we can evaluate them here
-	 for side effect only.  If type is void, we must do likewise.  */
-
-      gcc_assert (!TREE_ADDRESSABLE (type)
-		  && !ignore
-		  && TREE_TYPE (treeop1) != void_type_node
-		  && TREE_TYPE (treeop2) != void_type_node);
-
-      temp = expand_cond_expr_using_cmove (treeop0, treeop1, treeop2);
-      if (temp)
-	return temp;
-
-      /* If we are not to produce a result, we have no target.  Otherwise,
-	 if a target was specified use it; it will not be used as an
-	 intermediate target unless it is safe.  If no target, use a
-	 temporary.  */
-
-      if (modifier != EXPAND_STACK_PARM
-	  && original_target
-	  && safe_from_p (original_target, treeop0, 1)
-	  && GET_MODE (original_target) == mode
-	  && !MEM_P (original_target))
-	temp = original_target;
-      else
-	temp = assign_temp (type, 0, 1);
-
-      do_pending_stack_adjust ();
-      NO_DEFER_POP;
-      op0 = gen_label_rtx ();
-      op1 = gen_label_rtx ();
-      jumpifnot (treeop0, op0, -1);
-      store_expr (treeop1, temp,
-		  modifier == EXPAND_STACK_PARM,
-		  false);
-
-      emit_jump_insn (gen_jump (op1));
-      emit_barrier ();
-      emit_label (op0);
-      store_expr (treeop2, temp,
-		  modifier == EXPAND_STACK_PARM,
-		  false);
+      {
+	/* A COND_EXPR with its type being VOID_TYPE represents a
+	   conditional jump and is handled in
+	   expand_gimple_cond_expr.  */
+	gcc_assert (!VOID_TYPE_P (type));
+
+	/* Note that COND_EXPRs whose type is a structure or union
+	   are required to be constructed to contain assignments of
+	   a temporary variable, so that we can evaluate them here
+	   for side effect only.  If type is void, we must do likewise.  */
+
+	gcc_assert (!TREE_ADDRESSABLE (type)
+		    && !ignore
+		    && TREE_TYPE (treeop1) != void_type_node
+		    && TREE_TYPE (treeop2) != void_type_node);
+
+	temp = expand_cond_expr_using_cmove (treeop0, treeop1, treeop2);
+	if (temp)
+	  return temp;
 
-      emit_label (op1);
-      OK_DEFER_POP;
-      return temp;
+	/* If we are not to produce a result, we have no target.  Otherwise,
+	   if a target was specified use it; it will not be used as an
+	   intermediate target unless it is safe.  If no target, use a
+	   temporary.  */
+
+	if (modifier != EXPAND_STACK_PARM
+	    && original_target
+	    && safe_from_p (original_target, treeop0, 1)
+	    && GET_MODE (original_target) == mode
+	    && !MEM_P (original_target))
+	  temp = original_target;
+	else
+	  temp = assign_temp (type, 0, 1);
+
+	do_pending_stack_adjust ();
+	NO_DEFER_POP;
+	rtx_code_label *lab0 = gen_label_rtx ();
+	rtx_code_label *lab1 = gen_label_rtx ();
+	jumpifnot (treeop0, lab0, -1);
+	store_expr (treeop1, temp,
+		    modifier == EXPAND_STACK_PARM,
+		    false);
+
+	emit_jump_insn (gen_jump (lab1));
+	emit_barrier ();
+	emit_label (lab0);
+	store_expr (treeop2, temp,
+		    modifier == EXPAND_STACK_PARM,
+		    false);
+
+	emit_label (lab1);
+	OK_DEFER_POP;
+	return temp;
+      }
 
     case VEC_COND_EXPR:
       target = expand_vec_cond_expr (type, treeop0, treeop1, treeop2, target);
diff --git a/gcc/expr.h b/gcc/expr.h
index 867852e..6c4afc4 100644
--- a/gcc/expr.h
+++ b/gcc/expr.h
@@ -203,7 +203,7 @@ extern rtx store_by_pieces (rtx, unsigned HOST_WIDE_INT,
 
 /* Emit insns to set X from Y.  */
 extern rtx_insn *emit_move_insn (rtx, rtx);
-extern rtx gen_move_insn (rtx, rtx);
+extern rtx_insn *gen_move_insn (rtx, rtx);
 
 /* Emit insns to set X from Y, with no frills.  */
 extern rtx_insn *emit_move_insn_1 (rtx, rtx);
diff --git a/gcc/function.c b/gcc/function.c
index af4c087..7961d07 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -5784,7 +5784,7 @@ convert_jumps_to_returns (basic_block last_bb, bool simple_p,
 	    dest = simple_return_rtx;
 	  else
 	    dest = ret_rtx;
-	  if (!redirect_jump (jump, dest, 0))
+	  if (!redirect_jump (as_a <rtx_jump_insn *> (jump), dest, 0))
 	    {
 	      if (HAVE_simple_return && simple_p)
 		{
diff --git a/gcc/gcse.c b/gcc/gcse.c
index e4303fe..5fa7759d 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -2229,7 +2229,8 @@ pre_insert_copy_insn (struct gcse_expr *expr, rtx_insn *insn)
   int regno = REGNO (reg);
   int indx = expr->bitmap_index;
   rtx pat = PATTERN (insn);
-  rtx set, first_set, new_insn;
+  rtx set, first_set;
+  rtx_insn *new_insn;
   rtx old_reg;
   int i;
 
diff --git a/gcc/ifcvt.c b/gcc/ifcvt.c
index a3e3e5c..7be6a09 100644
--- a/gcc/ifcvt.c
+++ b/gcc/ifcvt.c
@@ -4444,9 +4444,10 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
       else
 	new_dest_label = block_label (new_dest);
 
+      rtx_jump_insn *jump_insn = as_a <rtx_jump_insn *> (jump);
       if (reversep
-	  ? ! invert_jump_1 (jump, new_dest_label)
-	  : ! redirect_jump_1 (jump, new_dest_label))
+	  ? ! invert_jump_1 (jump_insn, new_dest_label)
+	  : ! redirect_jump_1 (jump_insn, new_dest_label))
 	goto cancel;
     }
 
@@ -4457,7 +4458,8 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
 
   if (other_bb != new_dest)
     {
-      redirect_jump_2 (jump, old_dest, new_dest_label, 0, reversep);
+      redirect_jump_2 (as_a <rtx_jump_insn *> (jump), old_dest, new_dest_label,
+		       0, reversep);
 
       redirect_edge_succ (BRANCH_EDGE (test_bb), new_dest);
       if (reversep)
diff --git a/gcc/internal-fn.c b/gcc/internal-fn.c
index 0053ed9..46ee812 100644
--- a/gcc/internal-fn.c
+++ b/gcc/internal-fn.c
@@ -422,7 +422,7 @@ expand_arith_overflow_result_store (tree lhs, rtx target,
       lres = convert_modes (tgtmode, mode, res, uns);
       gcc_assert (GET_MODE_PRECISION (tgtmode) < GET_MODE_PRECISION (mode));
       do_compare_rtx_and_jump (res, convert_modes (mode, tgtmode, lres, uns),
-			       EQ, true, mode, NULL_RTX, NULL_RTX, done_label,
+			       EQ, true, mode, NULL_RTX, NULL, done_label,
 			       PROB_VERY_LIKELY);
       write_complex_part (target, const1_rtx, true);
       emit_label (done_label);
@@ -569,7 +569,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 	      : CONST_SCALAR_INT_P (op1)))
 	tem = op1;
       do_compare_rtx_and_jump (res, tem, code == PLUS_EXPR ? GEU : LEU,
-			       true, mode, NULL_RTX, NULL_RTX, done_label,
+			       true, mode, NULL_RTX, NULL, done_label,
 			       PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -584,7 +584,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       rtx tem = expand_binop (mode, add_optab,
 			      code == PLUS_EXPR ? res : op0, sgn,
 			      NULL_RTX, false, OPTAB_LIB_WIDEN);
-      do_compare_rtx_and_jump (tem, op1, GEU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (tem, op1, GEU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -627,8 +627,8 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       else if (pos_neg == 3)
 	/* If ARG0 is not known to be always positive, check at runtime.  */
 	do_compare_rtx_and_jump (op0, const0_rtx, LT, false, mode, NULL_RTX,
-				 NULL_RTX, do_error, PROB_VERY_UNLIKELY);
-      do_compare_rtx_and_jump (op1, op0, LEU, true, mode, NULL_RTX, NULL_RTX,
+				 NULL, do_error, PROB_VERY_UNLIKELY);
+      do_compare_rtx_and_jump (op1, op0, LEU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -642,7 +642,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 			  OPTAB_LIB_WIDEN);
       rtx tem = expand_binop (mode, add_optab, op1, sgn, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
-      do_compare_rtx_and_jump (op0, tem, LTU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (op0, tem, LTU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -655,7 +655,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       res = expand_binop (mode, add_optab, op0, op1, NULL_RTX, false,
 			  OPTAB_LIB_WIDEN);
       do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode, NULL_RTX,
-			       NULL_RTX, do_error, PROB_VERY_UNLIKELY);
+			       NULL, do_error, PROB_VERY_UNLIKELY);
       rtx tem = op1;
       /* The operation is commutative, so we can pick operand to compare
 	 against.  For prec <= BITS_PER_WORD, I think preferring REG operand
@@ -668,7 +668,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 	  ? (CONST_SCALAR_INT_P (op1) && REG_P (op0))
 	  : CONST_SCALAR_INT_P (op0))
 	tem = op0;
-      do_compare_rtx_and_jump (res, tem, GEU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (res, tem, GEU, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
@@ -698,26 +698,26 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
 	  tem = expand_binop (mode, ((pos_neg == 1) ^ (code == MINUS_EXPR))
 				    ? and_optab : ior_optab,
 			      op0, res, NULL_RTX, false, OPTAB_LIB_WIDEN);
-	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL,
+				   NULL, done_label, PROB_VERY_LIKELY);
 	}
       else
 	{
 	  rtx_code_label *do_ior_label = gen_label_rtx ();
 	  do_compare_rtx_and_jump (op1, const0_rtx,
 				   code == MINUS_EXPR ? GE : LT, false, mode,
-				   NULL_RTX, NULL_RTX, do_ior_label,
+				   NULL_RTX, NULL, do_ior_label,
 				   PROB_EVEN);
 	  tem = expand_binop (mode, and_optab, op0, res, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  emit_jump (do_error);
 	  emit_label (do_ior_label);
 	  tem = expand_binop (mode, ior_optab, op0, res, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	}
       goto do_error_label;
     }
@@ -730,14 +730,14 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       res = expand_binop (mode, sub_optab, op0, op1, NULL_RTX, false,
 			  OPTAB_LIB_WIDEN);
       rtx_code_label *op0_geu_op1 = gen_label_rtx ();
-      do_compare_rtx_and_jump (op0, op1, GEU, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (op0, op1, GEU, true, mode, NULL_RTX, NULL,
 			       op0_geu_op1, PROB_EVEN);
       do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode, NULL_RTX,
-			       NULL_RTX, done_label, PROB_VERY_LIKELY);
+			       NULL, done_label, PROB_VERY_LIKELY);
       emit_jump (do_error);
       emit_label (op0_geu_op1);
       do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode, NULL_RTX,
-			       NULL_RTX, done_label, PROB_VERY_LIKELY);
+			       NULL, done_label, PROB_VERY_LIKELY);
       goto do_error_label;
     }
 
@@ -816,12 +816,12 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       /* If the op1 is negative, we have to use a different check.  */
       if (pos_neg == 3)
 	do_compare_rtx_and_jump (op1, const0_rtx, LT, false, mode, NULL_RTX,
-				 NULL_RTX, sub_check, PROB_EVEN);
+				 NULL, sub_check, PROB_EVEN);
 
       /* Compare the result of the operation with one of the operands.  */
       if (pos_neg & 1)
 	do_compare_rtx_and_jump (res, op0, code == PLUS_EXPR ? GE : LE,
-				 false, mode, NULL_RTX, NULL_RTX, done_label,
+				 false, mode, NULL_RTX, NULL, done_label,
 				 PROB_VERY_LIKELY);
 
       /* If we get here, we have to print the error.  */
@@ -835,7 +835,7 @@ expand_addsub_overflow (location_t loc, tree_code code, tree lhs,
       /* We have k = a + b for b < 0 here.  k <= a must hold.  */
       if (pos_neg & 2)
 	do_compare_rtx_and_jump (res, op0, code == PLUS_EXPR ? LE : GE,
-				 false, mode, NULL_RTX, NULL_RTX, done_label,
+				 false, mode, NULL_RTX, NULL, done_label,
 				 PROB_VERY_LIKELY);
     }
 
@@ -931,7 +931,7 @@ expand_neg_overflow (location_t loc, tree lhs, tree arg1, bool is_ubsan)
 
       /* Compare the operand with the most negative value.  */
       rtx minv = expand_normal (TYPE_MIN_VALUE (TREE_TYPE (arg1)));
-      do_compare_rtx_and_jump (op1, minv, NE, true, mode, NULL_RTX, NULL_RTX,
+      do_compare_rtx_and_jump (op1, minv, NE, true, mode, NULL_RTX, NULL,
 			       done_label, PROB_VERY_LIKELY);
     }
 
@@ -1068,15 +1068,15 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  ops.location = loc;
 	  res = expand_expr_real_2 (&ops, NULL_RTX, mode, EXPAND_NORMAL);
 	  do_compare_rtx_and_jump (op1, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  goto do_error_label;
 	case 3:
 	  rtx_code_label *do_main_label;
 	  do_main_label = gen_label_rtx ();
 	  do_compare_rtx_and_jump (op0, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  do_compare_rtx_and_jump (op1, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  write_complex_part (target, const1_rtx, true);
 	  emit_label (do_main_label);
 	  goto do_main;
@@ -1113,15 +1113,15 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  ops.location = loc;
 	  res = expand_expr_real_2 (&ops, NULL_RTX, mode, EXPAND_NORMAL);
 	  do_compare_rtx_and_jump (op0, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  do_compare_rtx_and_jump (op0, constm1_rtx, NE, true, mode, NULL_RTX,
-				   NULL_RTX, do_error, PROB_VERY_UNLIKELY);
+				   NULL, do_error, PROB_VERY_UNLIKELY);
 	  int prec;
 	  prec = GET_MODE_PRECISION (mode);
 	  rtx sgn;
 	  sgn = immed_wide_int_const (wi::min_value (prec, SIGNED), mode);
 	  do_compare_rtx_and_jump (op1, sgn, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, done_label, PROB_VERY_LIKELY);
+				   NULL, done_label, PROB_VERY_LIKELY);
 	  goto do_error_label;
 	case 3:
 	  /* Rest of handling of this case after res is computed.  */
@@ -1167,7 +1167,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	      tem = expand_binop (mode, and_optab, op0, op1, NULL_RTX, false,
 				  OPTAB_LIB_WIDEN);
 	      do_compare_rtx_and_jump (tem, const0_rtx, EQ, true, mode,
-				       NULL_RTX, NULL_RTX, done_label,
+				       NULL_RTX, NULL, done_label,
 				       PROB_VERY_LIKELY);
 	      goto do_error_label;
 	    }
@@ -1185,8 +1185,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  tem = expand_binop (mode, and_optab, op0, op1, NULL_RTX, false,
 			      OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, after_negate_label,
-				   PROB_VERY_LIKELY);
+				   NULL, after_negate_label, PROB_VERY_LIKELY);
 	  /* Both arguments negative here, negate them and continue with
 	     normal unsigned overflow checking multiplication.  */
 	  emit_move_insn (op0, expand_unop (mode, neg_optab, op0,
@@ -1202,13 +1201,13 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	  tem2 = expand_binop (mode, xor_optab, op0, op1, NULL_RTX, false,
 			       OPTAB_LIB_WIDEN);
 	  do_compare_rtx_and_jump (tem2, const0_rtx, GE, false, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  /* One argument is negative here, the other positive.  This
 	     overflows always, unless one of the arguments is 0.  But
 	     if e.g. s2 is 0, (U) s1 * 0 doesn't overflow, whatever s1
 	     is, thus we can keep do_main code oring in overflow as is.  */
 	  do_compare_rtx_and_jump (tem, const0_rtx, EQ, true, mode, NULL_RTX,
-				   NULL_RTX, do_main_label, PROB_VERY_LIKELY);
+				   NULL, do_main_label, PROB_VERY_LIKELY);
 	  write_complex_part (target, const1_rtx, true);
 	  emit_label (do_main_label);
 	  goto do_main;
@@ -1274,7 +1273,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 	    /* For the unsigned multiplication, there was overflow if
 	       HIPART is non-zero.  */
 	    do_compare_rtx_and_jump (hipart, const0_rtx, EQ, true, mode,
-				     NULL_RTX, NULL_RTX, done_label,
+				     NULL_RTX, NULL, done_label,
 				     PROB_VERY_LIKELY);
 	  else
 	    {
@@ -1284,7 +1283,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		 the high half.  There was overflow if
 		 HIPART is different from RES < 0 ? -1 : 0.  */
 	      do_compare_rtx_and_jump (signbit, hipart, EQ, true, mode,
-				       NULL_RTX, NULL_RTX, done_label,
+				       NULL_RTX, NULL, done_label,
 				       PROB_VERY_LIKELY);
 	    }
 	}
@@ -1377,12 +1376,12 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 
 	  if (!op0_small_p)
 	    do_compare_rtx_and_jump (signbit0, hipart0, NE, true, hmode,
-				     NULL_RTX, NULL_RTX, large_op0,
+				     NULL_RTX, NULL, large_op0,
 				     PROB_UNLIKELY);
 
 	  if (!op1_small_p)
 	    do_compare_rtx_and_jump (signbit1, hipart1, NE, true, hmode,
-				     NULL_RTX, NULL_RTX, small_op0_large_op1,
+				     NULL_RTX, NULL, small_op0_large_op1,
 				     PROB_UNLIKELY);
 
 	  /* If both op0 and op1 are sign (!uns) or zero (uns) extended from
@@ -1428,7 +1427,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 
 	  if (!op1_small_p)
 	    do_compare_rtx_and_jump (signbit1, hipart1, NE, true, hmode,
-				     NULL_RTX, NULL_RTX, both_ops_large,
+				     NULL_RTX, NULL, both_ops_large,
 				     PROB_UNLIKELY);
 
 	  /* If op1 is sign (!uns) or zero (uns) extended from hmode to mode,
@@ -1465,7 +1464,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		emit_jump (after_hipart_neg);
 	      else if (larger_sign != -1)
 		do_compare_rtx_and_jump (hipart, const0_rtx, GE, false, hmode,
-					 NULL_RTX, NULL_RTX, after_hipart_neg,
+					 NULL_RTX, NULL, after_hipart_neg,
 					 PROB_EVEN);
 
 	      tem = convert_modes (mode, hmode, lopart, 1);
@@ -1481,7 +1480,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		emit_jump (after_lopart_neg);
 	      else if (smaller_sign != -1)
 		do_compare_rtx_and_jump (lopart, const0_rtx, GE, false, hmode,
-					 NULL_RTX, NULL_RTX, after_lopart_neg,
+					 NULL_RTX, NULL, after_lopart_neg,
 					 PROB_EVEN);
 
 	      tem = expand_simple_binop (mode, MINUS, loxhi, larger, NULL_RTX,
@@ -1510,7 +1509,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 					 hprec - 1, NULL_RTX, 0);
 
 	  do_compare_rtx_and_jump (signbitloxhi, hipartloxhi, NE, true, hmode,
-				   NULL_RTX, NULL_RTX, do_overflow,
+				   NULL_RTX, NULL, do_overflow,
 				   PROB_VERY_UNLIKELY);
 
 	  /* res = (loxhi << (bitsize / 2)) | (hmode) lo0xlo1;  */
@@ -1546,7 +1545,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		  tem = expand_simple_binop (hmode, PLUS, hipart0, const1_rtx,
 					     NULL_RTX, 1, OPTAB_DIRECT);
 		  do_compare_rtx_and_jump (tem, const1_rtx, GTU, true, hmode,
-					   NULL_RTX, NULL_RTX, do_error,
+					   NULL_RTX, NULL, do_error,
 					   PROB_VERY_UNLIKELY);
 		}
 
@@ -1555,7 +1554,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		  tem = expand_simple_binop (hmode, PLUS, hipart1, const1_rtx,
 					     NULL_RTX, 1, OPTAB_DIRECT);
 		  do_compare_rtx_and_jump (tem, const1_rtx, GTU, true, hmode,
-					   NULL_RTX, NULL_RTX, do_error,
+					   NULL_RTX, NULL, do_error,
 					   PROB_VERY_UNLIKELY);
 		}
 
@@ -1566,18 +1565,18 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
 		emit_jump (hipart_different);
 	      else if (op0_sign == 1 || op1_sign == 1)
 		do_compare_rtx_and_jump (hipart0, hipart1, NE, true, hmode,
-					 NULL_RTX, NULL_RTX, hipart_different,
+					 NULL_RTX, NULL, hipart_different,
 					 PROB_EVEN);
 
 	      do_compare_rtx_and_jump (res, const0_rtx, LT, false, mode,
-				       NULL_RTX, NULL_RTX, do_error,
+				       NULL_RTX, NULL, do_error,
 				       PROB_VERY_UNLIKELY);
 	      emit_jump (done_label);
 
 	      emit_label (hipart_different);
 
 	      do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode,
-				       NULL_RTX, NULL_RTX, do_error,
+				       NULL_RTX, NULL, do_error,
 				       PROB_VERY_UNLIKELY);
 	      emit_jump (done_label);
 	    }
@@ -1623,7 +1622,7 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
     {
       rtx_code_label *all_done_label = gen_label_rtx ();
       do_compare_rtx_and_jump (res, const0_rtx, GE, false, mode, NULL_RTX,
-			       NULL_RTX, all_done_label, PROB_VERY_LIKELY);
+			       NULL, all_done_label, PROB_VERY_LIKELY);
       write_complex_part (target, const1_rtx, true);
       emit_label (all_done_label);
     }
@@ -1634,13 +1633,13 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
       rtx_code_label *all_done_label = gen_label_rtx ();
       rtx_code_label *set_noovf = gen_label_rtx ();
       do_compare_rtx_and_jump (op1, const0_rtx, GE, false, mode, NULL_RTX,
-			       NULL_RTX, all_done_label, PROB_VERY_LIKELY);
+			       NULL, all_done_label, PROB_VERY_LIKELY);
       write_complex_part (target, const1_rtx, true);
       do_compare_rtx_and_jump (op0, const0_rtx, EQ, true, mode, NULL_RTX,
-			       NULL_RTX, set_noovf, PROB_VERY_LIKELY);
+			       NULL, set_noovf, PROB_VERY_LIKELY);
       do_compare_rtx_and_jump (op0, constm1_rtx, NE, true, mode, NULL_RTX,
-			       NULL_RTX, all_done_label, PROB_VERY_UNLIKELY);
-      do_compare_rtx_and_jump (op1, res, NE, true, mode, NULL_RTX, NULL_RTX,
+			       NULL, all_done_label, PROB_VERY_UNLIKELY);
+      do_compare_rtx_and_jump (op1, res, NE, true, mode, NULL_RTX, NULL,
 			       all_done_label, PROB_VERY_UNLIKELY);
       emit_label (set_noovf);
       write_complex_part (target, const0_rtx, true);
diff --git a/gcc/ira.c b/gcc/ira.c
index 25baa90..cd5ccb7 100644
--- a/gcc/ira.c
+++ b/gcc/ira.c
@@ -4991,7 +4991,7 @@ split_live_ranges_for_shrink_wrap (void)
 
       if (newreg)
 	{
-	  rtx new_move = gen_move_insn (newreg, dest);
+	  rtx_insn *new_move = gen_move_insn (newreg, dest);
 	  emit_insn_after (new_move, bb_note (call_dom));
 	  if (dump_file)
 	    {
diff --git a/gcc/jump.c b/gcc/jump.c
index bc91550..b10512c 100644
--- a/gcc/jump.c
+++ b/gcc/jump.c
@@ -1580,9 +1580,9 @@ redirect_jump_1 (rtx jump, rtx nlabel)
    (this can only occur when trying to produce return insns).  */
 
 int
-redirect_jump (rtx jump, rtx nlabel, int delete_unused)
+redirect_jump (rtx_jump_insn *jump, rtx nlabel, int delete_unused)
 {
-  rtx olabel = JUMP_LABEL (jump);
+  rtx olabel = jump->jump_label ();
 
   if (!nlabel)
     {
@@ -1612,7 +1612,7 @@ redirect_jump (rtx jump, rtx nlabel, int delete_unused)
    If DELETE_UNUSED is positive, delete related insn to OLABEL if its ref
    count has dropped to zero.  */
 void
-redirect_jump_2 (rtx jump, rtx olabel, rtx nlabel, int delete_unused,
+redirect_jump_2 (rtx_jump_insn *jump, rtx olabel, rtx nlabel, int delete_unused,
 		 int invert)
 {
   rtx note;
@@ -1700,7 +1700,7 @@ invert_exp_1 (rtx x, rtx insn)
    inversion and redirection.  */
 
 int
-invert_jump_1 (rtx_insn *jump, rtx nlabel)
+invert_jump_1 (rtx_jump_insn *jump, rtx nlabel)
 {
   rtx x = pc_set (jump);
   int ochanges;
@@ -1724,7 +1724,7 @@ invert_jump_1 (rtx_insn *jump, rtx nlabel)
    NLABEL instead of where it jumps now.  Return true if successful.  */
 
 int
-invert_jump (rtx_insn *jump, rtx nlabel, int delete_unused)
+invert_jump (rtx_jump_insn *jump, rtx nlabel, int delete_unused)
 {
   rtx olabel = JUMP_LABEL (jump);
 
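
(Aside: with redirect_jump and friends now taking rtx_jump_insn *, a
call site that starts from a plain rtx_insn * can recover the stronger
type with a checked cast; the is_a_helper specialization added in the
rtl.h hunk below is what makes this dyn_cast from rtx_insn * work.
Sketch, with a hypothetical insn/nlabel:

  if (rtx_jump_insn *jump = dyn_cast <rtx_jump_insn *> (insn))
    {
      /* INSN is known to be a JUMP_INSN here, so the new signature
         accepts it directly.  */
      redirect_jump (jump, nlabel, 0);
    }

)
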
diff --git a/gcc/loop-unroll.c b/gcc/loop-unroll.c
index ccf473d..f1d2ea5 100644
--- a/gcc/loop-unroll.c
+++ b/gcc/loop-unroll.c
@@ -794,10 +794,11 @@ split_edge_and_insert (edge e, rtx_insn *insns)
    in order to create a jump.  */
 
 static rtx_insn *
-compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
-		      rtx_insn *cinsn)
+compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp,
+		      rtx_code_label *label, int prob, rtx_insn *cinsn)
 {
-  rtx_insn *seq, *jump;
+  rtx_insn *seq;
+  rtx_jump_insn *jump;
   rtx cond;
   machine_mode mode;
 
@@ -816,8 +817,7 @@ compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
       gcc_assert (rtx_equal_p (op0, XEXP (cond, 0)));
       gcc_assert (rtx_equal_p (op1, XEXP (cond, 1)));
       emit_jump_insn (copy_insn (PATTERN (cinsn)));
-      jump = get_last_insn ();
-      gcc_assert (JUMP_P (jump));
+      jump = as_a <rtx_jump_insn *> (get_last_insn ());
       JUMP_LABEL (jump) = JUMP_LABEL (cinsn);
       LABEL_NUSES (JUMP_LABEL (jump))++;
       redirect_jump (jump, label, 0);
@@ -829,10 +829,9 @@ compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int prob,
       op0 = force_operand (op0, NULL_RTX);
       op1 = force_operand (op1, NULL_RTX);
       do_compare_rtx_and_jump (op0, op1, comp, 0,
-			       mode, NULL_RTX, NULL_RTX, label, -1);
-      jump = get_last_insn ();
-      gcc_assert (JUMP_P (jump));
-      JUMP_LABEL (jump) = label;
+			       mode, NULL_RTX, NULL, label, -1);
+      jump = as_a <rtx_jump_insn *> (get_last_insn ());
+      jump->set_jump_target (label);
       LABEL_NUSES (label)++;
     }
   add_int_reg_note (jump, REG_BR_PROB, prob);
diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
index a65a12f..a151081 100644
--- a/gcc/lra-constraints.c
+++ b/gcc/lra-constraints.c
@@ -1060,9 +1060,8 @@ emit_spill_move (bool to_p, rtx mem_pseudo, rtx val)
 	  LRA_SUBREG_P (mem_pseudo) = 1;
 	}
     }
-  return as_a <rtx_insn *> (to_p
-			    ? gen_move_insn (mem_pseudo, val)
-			    : gen_move_insn (val, mem_pseudo));
+  return to_p ? gen_move_insn (mem_pseudo, val)
+	      : gen_move_insn (val, mem_pseudo);
 }
 
 /* Process a special case insn (register move), return true if we
@@ -4766,7 +4765,7 @@ inherit_reload_reg (bool def_p, int original_regno,
 		   "    Inheritance reuse change %d->%d (bb%d):\n",
 		   original_regno, REGNO (new_reg),
 		   BLOCK_FOR_INSN (usage_insn)->index);
-	  dump_insn_slim (lra_dump_file, usage_insn);
+	  dump_insn_slim (lra_dump_file, as_a <rtx_insn *> (usage_insn));
 	}
     }
   if (lra_dump_file != NULL)
@@ -5026,7 +5025,7 @@ split_reg (bool before_p, int original_regno, rtx_insn *insn,
 	{
 	  fprintf (lra_dump_file, "    Split reuse change %d->%d:\n",
 		   original_regno, REGNO (new_reg));
-	  dump_insn_slim (lra_dump_file, usage_insn);
+	  dump_insn_slim (lra_dump_file, as_a <rtx_insn *> (usage_insn));
 	}
     }
   lra_assert (NOTE_P (usage_insn) || NONDEBUG_INSN_P (usage_insn));
diff --git a/gcc/modulo-sched.c b/gcc/modulo-sched.c
index 22cd216..4afe43e 100644
--- a/gcc/modulo-sched.c
+++ b/gcc/modulo-sched.c
@@ -790,8 +790,7 @@ schedule_reg_moves (partial_schedule_ptr ps)
 	  move->old_reg = old_reg;
 	  move->new_reg = gen_reg_rtx (GET_MODE (prev_reg));
 	  move->num_consecutive_stages = distances[0] && distances[1] ? 2 : 1;
-	  move->insn = as_a <rtx_insn *> (gen_move_insn (move->new_reg,
-							 copy_rtx (prev_reg)));
+	  move->insn = gen_move_insn (move->new_reg, copy_rtx (prev_reg));
 	  bitmap_clear (move->uses);
 
 	  prev_reg = move->new_reg;
diff --git a/gcc/optabs.c b/gcc/optabs.c
index 983c8d9..df5c81c 100644
--- a/gcc/optabs.c
+++ b/gcc/optabs.c
@@ -1416,7 +1416,7 @@ expand_binop_directly (machine_mode mode, optab binoptab,
   machine_mode mode0, mode1, tmp_mode;
   struct expand_operand ops[3];
   bool commutative_p;
-  rtx pat;
+  rtx_insn *pat;
   rtx xop0 = op0, xop1 = op1;
   rtx swap;
 
@@ -1499,8 +1499,8 @@ expand_binop_directly (machine_mode mode, optab binoptab,
       /* If PAT is composed of more than one insn, try to add an appropriate
 	 REG_EQUAL note to it.  If we can't because TEMP conflicts with an
 	 operand, call expand_binop again, this time without a target.  */
-      if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
-	  && ! add_equal_note (as_a <rtx_insn *> (pat), ops[0].value,
+      if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
+	  && ! add_equal_note (pat, ops[0].value,
 			       optab_to_code (binoptab),
 			       ops[1].value, ops[2].value))
 	{
@@ -3016,15 +3016,15 @@ expand_unop_direct (machine_mode mode, optab unoptab, rtx op0, rtx target,
       struct expand_operand ops[2];
       enum insn_code icode = optab_handler (unoptab, mode);
       rtx_insn *last = get_last_insn ();
-      rtx pat;
+      rtx_insn *pat;
 
       create_output_operand (&ops[0], target, mode);
       create_convert_operand_from (&ops[1], op0, mode, unsignedp);
       pat = maybe_gen_insn (icode, 2, ops);
       if (pat)
 	{
-	  if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
-	      && ! add_equal_note (as_a <rtx_insn *> (pat), ops[0].value,
+	  if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
+	      && ! add_equal_note (pat, ops[0].value,
 				   optab_to_code (unoptab),
 				   ops[1].value, NULL_RTX))
 	    {
@@ -3508,7 +3508,7 @@ expand_abs (machine_mode mode, rtx op0, rtx target,
   NO_DEFER_POP;
 
   do_compare_rtx_and_jump (target, CONST0_RTX (mode), GE, 0, mode,
-			   NULL_RTX, NULL_RTX, op1, -1);
+			   NULL_RTX, NULL, op1, -1);
 
   op0 = expand_unop (mode, result_unsignedp ? neg_optab : negv_optab,
                      target, target, 0);
@@ -3817,7 +3817,7 @@ maybe_emit_unop_insn (enum insn_code icode, rtx target, rtx op0,
 		      enum rtx_code code)
 {
   struct expand_operand ops[2];
-  rtx pat;
+  rtx_insn *pat;
 
   create_output_operand (&ops[0], target, GET_MODE (target));
   create_input_operand (&ops[1], op0, GET_MODE (op0));
@@ -3825,10 +3825,9 @@ maybe_emit_unop_insn (enum insn_code icode, rtx target, rtx op0,
   if (!pat)
     return false;
 
-  if (INSN_P (pat) && NEXT_INSN (as_a <rtx_insn *> (pat)) != NULL_RTX
+  if (INSN_P (pat) && NEXT_INSN (pat) != NULL_RTX
       && code != UNKNOWN)
-    add_equal_note (as_a <rtx_insn *> (pat), ops[0].value, code, ops[1].value,
-		    NULL_RTX);
+    add_equal_note (pat, ops[0].value, code, ops[1].value, NULL_RTX);
 
   emit_insn (pat);
 
@@ -8370,13 +8369,13 @@ maybe_legitimize_operands (enum insn_code icode, unsigned int opno,
    and emit any necessary set-up code.  Return null and emit no
    code on failure.  */
 
-rtx
+rtx_insn *
 maybe_gen_insn (enum insn_code icode, unsigned int nops,
 		struct expand_operand *ops)
 {
   gcc_assert (nops == (unsigned int) insn_data[(int) icode].n_generator_args);
   if (!maybe_legitimize_operands (icode, 0, nops, ops))
-    return NULL_RTX;
+    return NULL;
 
   switch (nops)
     {
diff --git a/gcc/optabs.h b/gcc/optabs.h
index 152af87..5c30ce5 100644
--- a/gcc/optabs.h
+++ b/gcc/optabs.h
@@ -541,8 +541,8 @@ extern void create_convert_operand_from_type (struct expand_operand *op,
 extern bool maybe_legitimize_operands (enum insn_code icode,
 				       unsigned int opno, unsigned int nops,
 				       struct expand_operand *ops);
-extern rtx maybe_gen_insn (enum insn_code icode, unsigned int nops,
-			   struct expand_operand *ops);
+extern rtx_insn *maybe_gen_insn (enum insn_code icode, unsigned int nops,
+				 struct expand_operand *ops);
 extern bool maybe_expand_insn (enum insn_code icode, unsigned int nops,
 			       struct expand_operand *ops);
 extern bool maybe_expand_jump_insn (enum insn_code icode, unsigned int nops,
diff --git a/gcc/postreload-gcse.c b/gcc/postreload-gcse.c
index 9014d69..2194557 100644
--- a/gcc/postreload-gcse.c
+++ b/gcc/postreload-gcse.c
@@ -1115,8 +1115,8 @@ eliminate_partially_redundant_load (basic_block bb, rtx_insn *insn,
 
 	  /* Make sure we can generate a move from register avail_reg to
 	     dest.  */
-	  rtx_insn *move = as_a <rtx_insn *>
-	    (gen_move_insn (copy_rtx (dest), copy_rtx (avail_reg)));
+	  rtx_insn *move = gen_move_insn (copy_rtx (dest),
+					  copy_rtx (avail_reg));
 	  extract_insn (move);
 	  if (! constrain_operands (1, get_preferred_alternatives (insn,
 								   pred_bb))
diff --git a/gcc/recog.c b/gcc/recog.c
index c3ad86f..cba26de 100644
--- a/gcc/recog.c
+++ b/gcc/recog.c
@@ -3066,7 +3066,7 @@ split_all_insns_noflow (void)
 #ifdef HAVE_peephole2
 struct peep2_insn_data
 {
-  rtx insn;
+  rtx_insn *insn;
   regset live_before;
 };
 
@@ -3082,7 +3082,7 @@ int peep2_current_count;
 /* A non-insn marker indicating the last insn of the block.
    The live_before regset for this element is correct, indicating
    DF_LIVE_OUT for the block.  */
-#define PEEP2_EOB	pc_rtx
+#define PEEP2_EOB	(static_cast<rtx_insn *> (pc_rtx))
 
 /* Wrap N to fit into the peep2_insn_data buffer.  */
 
@@ -3285,7 +3285,7 @@ peep2_reinit_state (regset live)
 
   /* Indicate that all slots except the last holds invalid data.  */
   for (i = 0; i < MAX_INSNS_PER_PEEP2; ++i)
-    peep2_insn_data[i].insn = NULL_RTX;
+    peep2_insn_data[i].insn = NULL;
   peep2_current_count = 0;
 
   /* Indicate that the last slot contains live_after data.  */
@@ -3313,7 +3313,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
 
   /* If we are splitting an RTX_FRAME_RELATED_P insn, do not allow it to
      match more than one insn, or to be split into more than one insn.  */
-  old_insn = as_a <rtx_insn *> (peep2_insn_data[peep2_current].insn);
+  old_insn = peep2_insn_data[peep2_current].insn;
   if (RTX_FRAME_RELATED_P (old_insn))
     {
       bool any_note = false;
@@ -3401,7 +3401,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
       rtx note;
 
       j = peep2_buf_position (peep2_current + i);
-      old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
+      old_insn = peep2_insn_data[j].insn;
       if (!CALL_P (old_insn))
 	continue;
       was_call = true;
@@ -3440,7 +3440,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
       while (++i <= match_len)
 	{
 	  j = peep2_buf_position (peep2_current + i);
-	  old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
+	  old_insn = peep2_insn_data[j].insn;
 	  gcc_assert (!CALL_P (old_insn));
 	}
       break;
@@ -3452,7 +3452,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
   for (i = match_len; i >= 0; --i)
     {
       int j = peep2_buf_position (peep2_current + i);
-      old_insn = as_a <rtx_insn *> (peep2_insn_data[j].insn);
+      old_insn = peep2_insn_data[j].insn;
 
       as_note = find_reg_note (old_insn, REG_ARGS_SIZE, NULL);
       if (as_note)
@@ -3463,7 +3463,7 @@ peep2_attempt (basic_block bb, rtx uncast_insn, int match_len, rtx_insn *attempt
   eh_note = find_reg_note (peep2_insn_data[i].insn, REG_EH_REGION, NULL_RTX);
 
   /* Replace the old sequence with the new.  */
-  rtx_insn *peepinsn = as_a <rtx_insn *> (peep2_insn_data[i].insn);
+  rtx_insn *peepinsn = peep2_insn_data[i].insn;
   last = emit_insn_after_setloc (attempt,
 				 peep2_insn_data[i].insn,
 				 INSN_LOCATION (peepinsn));
@@ -3580,7 +3580,7 @@ peep2_update_life (basic_block bb, int match_len, rtx_insn *last,
    add more instructions to the buffer.  */
 
 static bool
-peep2_fill_buffer (basic_block bb, rtx insn, regset live)
+peep2_fill_buffer (basic_block bb, rtx_insn *insn, regset live)
 {
   int pos;
 
@@ -3606,7 +3606,7 @@ peep2_fill_buffer (basic_block bb, rtx insn, regset live)
   COPY_REG_SET (peep2_insn_data[pos].live_before, live);
   peep2_current_count++;
 
-  df_simulate_one_insn_forwards (bb, as_a <rtx_insn *> (insn), live);
+  df_simulate_one_insn_forwards (bb, insn, live);
   return true;
 }
 
diff --git a/gcc/recog.h b/gcc/recog.h
index 8a38b26..6b5d9e4 100644
--- a/gcc/recog.h
+++ b/gcc/recog.h
@@ -276,43 +276,43 @@ typedef const char * (*insn_output_fn) (rtx *, rtx_insn *);
 
 struct insn_gen_fn
 {
-  typedef rtx (*f0) (void);
-  typedef rtx (*f1) (rtx);
-  typedef rtx (*f2) (rtx, rtx);
-  typedef rtx (*f3) (rtx, rtx, rtx);
-  typedef rtx (*f4) (rtx, rtx, rtx, rtx);
-  typedef rtx (*f5) (rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f6) (rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f7) (rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f8) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f9) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f10) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f11) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f12) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f13) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f14) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f15) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
-  typedef rtx (*f16) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f0) (void);
+  typedef rtx_insn * (*f1) (rtx);
+  typedef rtx_insn * (*f2) (rtx, rtx);
+  typedef rtx_insn * (*f3) (rtx, rtx, rtx);
+  typedef rtx_insn * (*f4) (rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f5) (rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f6) (rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f7) (rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f8) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f9) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f10) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f11) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f12) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f13) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f14) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f15) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+  typedef rtx_insn * (*f16) (rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx, rtx);
 
   typedef f0 stored_funcptr;
 
-  rtx operator () (void) const { return ((f0)func) (); }
-  rtx operator () (rtx a0) const { return ((f1)func) (a0); }
-  rtx operator () (rtx a0, rtx a1) const { return ((f2)func) (a0, a1); }
-  rtx operator () (rtx a0, rtx a1, rtx a2) const { return ((f3)func) (a0, a1, a2); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3) const { return ((f4)func) (a0, a1, a2, a3); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4) const { return ((f5)func) (a0, a1, a2, a3, a4); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5) const { return ((f6)func) (a0, a1, a2, a3, a4, a5); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6) const { return ((f7)func) (a0, a1, a2, a3, a4, a5, a6); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7) const { return ((f8)func) (a0, a1, a2, a3, a4, a5, a6, a7); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8) const { return ((f9)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9) const { return ((f10)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10) const { return ((f11)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11) const { return ((f12)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12) const { return ((f13)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13) const { return ((f14)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14) const { return ((f15)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14); }
-  rtx operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14, rtx a15) const { return ((f16)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15); }
+  rtx_insn * operator () (void) const { return ((f0)func) (); }
+  rtx_insn * operator () (rtx a0) const { return ((f1)func) (a0); }
+  rtx_insn * operator () (rtx a0, rtx a1) const { return ((f2)func) (a0, a1); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2) const { return ((f3)func) (a0, a1, a2); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3) const { return ((f4)func) (a0, a1, a2, a3); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4) const { return ((f5)func) (a0, a1, a2, a3, a4); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5) const { return ((f6)func) (a0, a1, a2, a3, a4, a5); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6) const { return ((f7)func) (a0, a1, a2, a3, a4, a5, a6); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7) const { return ((f8)func) (a0, a1, a2, a3, a4, a5, a6, a7); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8) const { return ((f9)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9) const { return ((f10)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10) const { return ((f11)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11) const { return ((f12)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12) const { return ((f13)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13) const { return ((f14)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14) const { return ((f15)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14); }
+  rtx_insn * operator () (rtx a0, rtx a1, rtx a2, rtx a3, rtx a4, rtx a5, rtx a6, rtx a7, rtx a8, rtx a9, rtx a10, rtx a11, rtx a12, rtx a13, rtx a14, rtx a15) const { return ((f16)func) (a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15); }
 
   // This is for compatibility of code that invokes functions like
   //   (*funcptr) (arg)
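
(Aside: because insn_gen_fn's call operators now return rtx_insn *,
everything reached through GEN_FCN yields the insn type directly.  A
sketch, with a hypothetical ICODE and operands, of what call sites can
look like after this change:

  rtx_insn *pat = GEN_FCN (icode) (target, op0);
  if (pat)
    emit_insn (pat);

)
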
diff --git a/gcc/resource.c b/gcc/resource.c
index ba9de12..14f358d 100644
--- a/gcc/resource.c
+++ b/gcc/resource.c
@@ -439,7 +439,7 @@ find_dead_or_set_registers (rtx_insn *target, struct resources *res,
 
   for (insn = target; insn; insn = next_insn)
     {
-      rtx_insn *this_jump_insn = insn;
+      rtx_insn *this_insn = insn;
 
       next_insn = NEXT_INSN (insn);
 
@@ -487,8 +487,8 @@ find_dead_or_set_registers (rtx_insn *target, struct resources *res,
 		 of a call, so search for a JUMP_INSN in any position.  */
 	      for (i = 0; i < seq->len (); i++)
 		{
-		  this_jump_insn = seq->insn (i);
-		  if (JUMP_P (this_jump_insn))
+		  this_insn = seq->insn (i);
+		  if (JUMP_P (this_insn))
 		    break;
 		}
 	    }
@@ -497,14 +497,15 @@ find_dead_or_set_registers (rtx_insn *target, struct resources *res,
 	  break;
 	}
 
-      if (JUMP_P (this_jump_insn))
+      if (rtx_jump_insn *this_jump_insn =
+	    dyn_cast <rtx_jump_insn *> (this_insn))
 	{
 	  if (jump_count++ < 10)
 	    {
 	      if (any_uncondjump_p (this_jump_insn)
 		  || ANY_RETURN_P (PATTERN (this_jump_insn)))
 		{
-		  rtx lab_or_return = JUMP_LABEL (this_jump_insn);
+		  rtx lab_or_return = this_jump_insn->jump_label ();
 		  if (ANY_RETURN_P (lab_or_return))
 		    next_insn = NULL;
 		  else
@@ -577,10 +578,10 @@ find_dead_or_set_registers (rtx_insn *target, struct resources *res,
 		  AND_COMPL_HARD_REG_SET (scratch, needed.regs);
 		  AND_COMPL_HARD_REG_SET (fallthrough_res.regs, scratch);
 
-		  if (!ANY_RETURN_P (JUMP_LABEL (this_jump_insn)))
-		    find_dead_or_set_registers (JUMP_LABEL_AS_INSN (this_jump_insn),
-						&target_res, 0, jump_count,
-						target_set, needed);
+		  if (!ANY_RETURN_P (this_jump_insn->jump_label ()))
+		    find_dead_or_set_registers
+			  (this_jump_insn->jump_target (),
+			   &target_res, 0, jump_count, target_set, needed);
 		  find_dead_or_set_registers (next_insn,
 					      &fallthrough_res, 0, jump_count,
 					      set, needed);
diff --git a/gcc/rtl.h b/gcc/rtl.h
index e5e4560..7820f8a 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -546,6 +546,7 @@ class GTY(()) rtx_nonjump_insn : public rtx_insn
 
 class GTY(()) rtx_jump_insn : public rtx_insn
 {
+public:
   /* No extra fields, but adds the invariant:
        JUMP_P (X) aka (GET_CODE (X) == JUMP_INSN)
      i.e. an instruction that can possibly jump.
@@ -553,6 +554,21 @@ class GTY(()) rtx_jump_insn : public rtx_insn
      This is an instance of:
        DEF_RTL_EXPR(JUMP_INSN, "jump_insn", "uuBeiie0", RTX_INSN)
      from rtl.def.  */
+
+  /* Returns the jump target of this instruction.  The returned value is not
+     necessarily a code label: it may also be a RETURN or SIMPLE_RETURN
+     expression.  Also, when the code label is marked "deleted", it is
+     replaced by a NOTE.  In some cases the value is NULL_RTX.  */
+
+  inline rtx jump_label () const;
+
+  /* Returns the jump target cast to rtx_code_label *.  */
+
+  inline rtx_code_label *jump_target () const;
+
+  /* Set the jump target.  */
+
+  inline void set_jump_target (rtx_code_label *);
 };
 
 class GTY(()) rtx_call_insn : public rtx_insn
@@ -827,6 +843,14 @@ is_a_helper <rtx_debug_insn *>::test (rtx rt)
 template <>
 template <>
 inline bool
+is_a_helper <rtx_debug_insn *>::test (rtx_insn *insn)
+{
+  return DEBUG_INSN_P (insn);
+}
+
+template <>
+template <>
+inline bool
 is_a_helper <rtx_nonjump_insn *>::test (rtx rt)
 {
   return NONJUMP_INSN_P (rt);
@@ -843,6 +867,14 @@ is_a_helper <rtx_jump_insn *>::test (rtx rt)
 template <>
 template <>
 inline bool
+is_a_helper <rtx_jump_insn *>::test (rtx_insn *insn)
+{
+  return JUMP_P (insn);
+}
+
+template <>
+template <>
+inline bool
 is_a_helper <rtx_call_insn *>::test (rtx rt)
 {
   return CALL_P (rt);
@@ -1681,6 +1713,23 @@ inline rtx_insn *JUMP_LABEL_AS_INSN (const rtx_insn *insn)
   return safe_as_a <rtx_insn *> (JUMP_LABEL (insn));
 }
 
+/* Methods of rtx_jump_insn.  */
+
+inline rtx rtx_jump_insn::jump_label () const
+{
+  return JUMP_LABEL (this);
+}
+
+inline rtx_code_label *rtx_jump_insn::jump_target () const
+{
+  return safe_as_a <rtx_code_label *> (JUMP_LABEL (this));
+}
+
+inline void rtx_jump_insn::set_jump_target (rtx_code_label *target)
+{
+  JUMP_LABEL (this) = target;
+}
+
 /* Once basic blocks are found, each CODE_LABEL starts a chain that
    goes through all the LABEL_REFs that jump to that label.  The chain
    eventually winds up at the CODE_LABEL: it is circular.  */
@@ -2662,7 +2711,7 @@ extern rtx_insn *emit_debug_insn_before (rtx, rtx);
 extern rtx_insn *emit_debug_insn_before_noloc (rtx, rtx);
 extern rtx_insn *emit_debug_insn_before_setloc (rtx, rtx, int);
 extern rtx_barrier *emit_barrier_before (rtx);
-extern rtx_insn *emit_label_before (rtx, rtx_insn *);
+extern rtx_code_label *emit_label_before (rtx, rtx_insn *);
 extern rtx_note *emit_note_before (enum insn_note, rtx);
 extern rtx_insn *emit_insn_after (rtx, rtx);
 extern rtx_insn *emit_insn_after_noloc (rtx, rtx, basic_block);
@@ -2683,7 +2732,7 @@ extern rtx_insn *emit_insn (rtx);
 extern rtx_insn *emit_debug_insn (rtx);
 extern rtx_insn *emit_jump_insn (rtx);
 extern rtx_insn *emit_call_insn (rtx);
-extern rtx_insn *emit_label (rtx);
+extern rtx_code_label *emit_label (rtx);
 extern rtx_jump_table_data *emit_jump_table_data (rtx);
 extern rtx_barrier *emit_barrier (void);
 extern rtx_note *emit_note (enum insn_note);
@@ -3336,14 +3385,14 @@ extern int eh_returnjump_p (rtx_insn *);
 extern int onlyjump_p (const rtx_insn *);
 extern int only_sets_cc0_p (const_rtx);
 extern int sets_cc0_p (const_rtx);
-extern int invert_jump_1 (rtx_insn *, rtx);
-extern int invert_jump (rtx_insn *, rtx, int);
+extern int invert_jump_1 (rtx_jump_insn *, rtx);
+extern int invert_jump (rtx_jump_insn *, rtx, int);
 extern int rtx_renumbered_equal_p (const_rtx, const_rtx);
 extern int true_regnum (const_rtx);
 extern unsigned int reg_or_subregno (const_rtx);
 extern int redirect_jump_1 (rtx, rtx);
-extern void redirect_jump_2 (rtx, rtx, rtx, int, int);
-extern int redirect_jump (rtx, rtx, int);
+extern void redirect_jump_2 (rtx_jump_insn *, rtx, rtx, int, int);
+extern int redirect_jump (rtx_jump_insn *, rtx, int);
 extern void rebuild_jump_labels (rtx_insn *);
 extern void rebuild_jump_labels_chain (rtx_insn *);
 extern rtx reversed_comparison (const_rtx, machine_mode);
@@ -3426,7 +3475,7 @@ extern void print_inline_rtx (FILE *, const_rtx, int);
    not be in sched-vis.c but in rtl.c, because they are not only used
    by the scheduler anymore but for all "slim" RTL dumping.  */
 extern void dump_value_slim (FILE *, const_rtx, int);
-extern void dump_insn_slim (FILE *, const_rtx);
+extern void dump_insn_slim (FILE *, const rtx_insn *);
 extern void dump_rtl_slim (FILE *, const rtx_insn *, const rtx_insn *,
 			   int, int);
 extern void print_value (pretty_printer *, const_rtx, int);
diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
index 2377f25a..3a6d9ce 100644
--- a/gcc/rtlanal.c
+++ b/gcc/rtlanal.c
@@ -2914,7 +2914,8 @@ rtx_referenced_p (const_rtx x, const_rtx body)
 bool
 tablejump_p (const rtx_insn *insn, rtx *labelp, rtx_jump_table_data **tablep)
 {
-  rtx label, table;
+  rtx label;
+  rtx_insn *table;
 
   if (!JUMP_P (insn))
     return false;
diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c
index e624563..ca1a64b 100644
--- a/gcc/sched-deps.c
+++ b/gcc/sched-deps.c
@@ -2650,7 +2650,7 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
     case MEM:
       {
 	/* Reading memory.  */
-	rtx u;
+	rtx_insn_list *u;
 	rtx_insn_list *pending;
 	rtx_expr_list *pending_mem;
 	rtx t = x;
@@ -2701,11 +2701,10 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
 		pending_mem = pending_mem->next ();
 	      }
 
-	    for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1))
-	      add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)),
-			      REG_DEP_ANTI);
+	    for (u = deps->last_pending_memory_flush; u; u = u->next ())
+	      add_dependence (insn, u->insn (), REG_DEP_ANTI);
 
-	    for (u = deps->pending_jump_insns; u; u = XEXP (u, 1))
+	    for (u = deps->pending_jump_insns; u; u = u->next ())
 	      if (deps_may_trap_p (x))
 		{
 		  if ((sched_deps_info->generate_spec_deps)
@@ -2714,11 +2713,10 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rtx_insn *insn)
 		      ds_t ds = set_dep_weak (DEP_ANTI, BEGIN_CONTROL,
 					      MAX_DEP_WEAK);
 		      
-		      note_dep (as_a <rtx_insn *> (XEXP (u, 0)), ds);
+		      note_dep (u->insn (), ds);
 		    }
 		  else
-		    add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)),
-				    REG_DEP_CONTROL);
+		    add_dependence (insn, u->insn (), REG_DEP_CONTROL);
 		}
 	  }
 
@@ -3089,7 +3087,7 @@ sched_analyze_insn (struct deps_desc *deps, rtx x, rtx_insn *insn)
   if (DEBUG_INSN_P (insn))
     {
       rtx_insn *prev = deps->last_debug_insn;
-      rtx u;
+      rtx_insn_list *u;
 
       if (!deps->readonly)
 	deps->last_debug_insn = insn;
@@ -3101,8 +3099,8 @@ sched_analyze_insn (struct deps_desc *deps, rtx x, rtx_insn *insn)
 			   REG_DEP_ANTI, false);
 
       if (!sel_sched_p ())
-	for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1))
-	  add_dependence (insn, as_a <rtx_insn *> (XEXP (u, 0)), REG_DEP_ANTI);
+	for (u = deps->last_pending_memory_flush; u; u = u->next ())
+	  add_dependence (insn, u->insn (), REG_DEP_ANTI);
 
       EXECUTE_IF_SET_IN_REG_SET (reg_pending_uses, 0, i, rsi)
 	{
diff --git a/gcc/sched-vis.c b/gcc/sched-vis.c
index 32f7a7c..31794e6 100644
--- a/gcc/sched-vis.c
+++ b/gcc/sched-vis.c
@@ -67,7 +67,7 @@ along with GCC; see the file COPYING3.  If not see
    pointer, via str_pattern_slim, but this usage is discouraged.  */
 
 /* For insns we print patterns, and for some patterns we print insns...  */
-static void print_insn_with_notes (pretty_printer *, const_rtx);
+static void print_insn_with_notes (pretty_printer *, const rtx_insn *);
 
 /* This recognizes rtx'en classified as expressions.  These are always
    represent some action on values or results of other expression, that
@@ -669,7 +669,7 @@ print_pattern (pretty_printer *pp, const_rtx x, int verbose)
    with their INSN_UIDs.  */
 
 void
-print_insn (pretty_printer *pp, const_rtx x, int verbose)
+print_insn (pretty_printer *pp, const rtx_insn *x, int verbose)
 {
   if (verbose)
     {
@@ -787,7 +787,7 @@ print_insn (pretty_printer *pp, const_rtx x, int verbose)
    note attached to the instruction.  */
 
 static void
-print_insn_with_notes (pretty_printer *pp, const_rtx x)
+print_insn_with_notes (pretty_printer *pp, const rtx_insn *x)
 {
   pp_string (pp, print_rtx_head);
   print_insn (pp, x, 1);
@@ -823,7 +823,7 @@ dump_value_slim (FILE *f, const_rtx x, int verbose)
 /* Emit a slim dump of X (an insn) to the file F, including any register
    note attached to the instruction.  */
 void
-dump_insn_slim (FILE *f, const_rtx x)
+dump_insn_slim (FILE *f, const rtx_insn *x)
 {
   pretty_printer rtl_slim_pp;
   rtl_slim_pp.buffer->stream = f;
@@ -893,9 +893,9 @@ str_pattern_slim (const_rtx x)
 }
 
 /* Emit a slim dump of X (an insn) to stderr.  */
-extern void debug_insn_slim (const_rtx);
+extern void debug_insn_slim (const rtx_insn *);
 DEBUG_FUNCTION void
-debug_insn_slim (const_rtx x)
+debug_insn_slim (const rtx_insn *x)
 {
   dump_insn_slim (stderr, x);
 }
diff --git a/gcc/stmt.c b/gcc/stmt.c
index 6c62a12..b3eefe4 100644
--- a/gcc/stmt.c
+++ b/gcc/stmt.c
@@ -135,12 +135,12 @@ static void balance_case_nodes (case_node_ptr *, case_node_ptr);
 static int node_has_low_bound (case_node_ptr, tree);
 static int node_has_high_bound (case_node_ptr, tree);
 static int node_is_bounded (case_node_ptr, tree);
-static void emit_case_nodes (rtx, case_node_ptr, rtx, int, tree);
+static void emit_case_nodes (rtx, case_node_ptr, rtx_code_label *, int, tree);
 \f
 /* Return the rtx-label that corresponds to a LABEL_DECL,
    creating it if necessary.  */
 
-rtx
+rtx_insn *
 label_rtx (tree label)
 {
   gcc_assert (TREE_CODE (label) == LABEL_DECL);
@@ -153,15 +153,15 @@ label_rtx (tree label)
 	LABEL_PRESERVE_P (r) = 1;
     }
 
-  return DECL_RTL (label);
+  return as_a <rtx_insn *> (DECL_RTL (label));
 }
 
 /* As above, but also put it on the forced-reference list of the
    function that contains it.  */
-rtx
+rtx_insn *
 force_label_rtx (tree label)
 {
-  rtx_insn *ref = as_a <rtx_insn *> (label_rtx (label));
+  rtx_insn *ref = label_rtx (label);
   tree function = decl_function_context (label);
 
   gcc_assert (function);
@@ -170,6 +170,14 @@ force_label_rtx (tree label)
   return ref;
 }
 
+/* As label_rtx, but ensures (in checked builds) that the returned value
+   is an existing label (i.e. an rtx with code CODE_LABEL).  */
+rtx_code_label *
+jump_target_rtx (tree label)
+{
+  return as_a <rtx_code_label *> (label_rtx (label));
+}
+
 /* Add an unconditional jump to LABEL as the next sequential instruction.  */
 
 void
@@ -196,7 +204,7 @@ emit_jump (rtx label)
 void
 expand_label (tree label)
 {
-  rtx_insn *label_r = as_a <rtx_insn *> (label_rtx (label));
+  rtx_code_label *label_r = jump_target_rtx (label);
 
   do_pending_stack_adjust ();
   emit_label (label_r);
@@ -705,7 +713,7 @@ resolve_operand_name_1 (char *p, tree outputs, tree inputs, tree labels)
 void
 expand_naked_return (void)
 {
-  rtx end_label;
+  rtx_code_label *end_label;
 
   clear_pending_stack_adjust ();
   do_pending_stack_adjust ();
@@ -720,12 +728,12 @@ expand_naked_return (void)
 /* Generate code to jump to LABEL if OP0 and OP1 are equal in mode MODE. PROB
    is the probability of jumping to LABEL.  */
 static void
-do_jump_if_equal (machine_mode mode, rtx op0, rtx op1, rtx label,
+do_jump_if_equal (machine_mode mode, rtx op0, rtx op1, rtx_code_label *label,
 		  int unsignedp, int prob)
 {
   gcc_assert (prob <= REG_BR_PROB_BASE);
   do_compare_rtx_and_jump (op0, op1, EQ, unsignedp, mode,
-			   NULL_RTX, NULL_RTX, label, prob);
+			   NULL_RTX, NULL, label, prob);
 }
 \f
 /* Do the insertion of a case label into case_list.  The labels are
@@ -882,8 +890,8 @@ expand_switch_as_decision_tree_p (tree range,
 
 static void
 emit_case_decision_tree (tree index_expr, tree index_type,
-			 struct case_node *case_list, rtx default_label,
-                         int default_prob)
+			 case_node_ptr case_list, rtx_code_label *default_label,
+			 int default_prob)
 {
   rtx index = expand_normal (index_expr);
 
@@ -1141,7 +1149,7 @@ void
 expand_case (gswitch *stmt)
 {
   tree minval = NULL_TREE, maxval = NULL_TREE, range = NULL_TREE;
-  rtx default_label = NULL_RTX;
+  rtx_code_label *default_label = NULL;
   unsigned int count, uniq;
   int i;
   int ncases = gimple_switch_num_labels (stmt);
@@ -1173,7 +1181,8 @@ expand_case (gswitch *stmt)
   do_pending_stack_adjust ();
 
   /* Find the default case target label.  */
-  default_label = label_rtx (CASE_LABEL (gimple_switch_default_label (stmt)));
+  default_label = jump_target_rtx
+      (CASE_LABEL (gimple_switch_default_label (stmt)));
   edge default_edge = EDGE_SUCC (bb, 0);
   int default_prob = default_edge->probability;
 
@@ -1323,7 +1332,7 @@ expand_sjlj_dispatch_table (rtx dispatch_index,
       for (int i = 0; i < ncases; i++)
         {
 	  tree elt = dispatch_table[i];
-	  rtx lab = label_rtx (CASE_LABEL (elt));
+	  rtx_code_label *lab = jump_target_rtx (CASE_LABEL (elt));
 	  do_jump_if_equal (index_mode, index, zero, lab, 0, -1);
 	  force_expand_binop (index_mode, sub_optab,
 			      index, CONST1_RTX (index_mode),
@@ -1592,7 +1601,7 @@ node_is_bounded (case_node_ptr node, tree index_type)
    tests for the value 50, then this node need not test anything.  */
 
 static void
-emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
+emit_case_nodes (rtx index, case_node_ptr node, rtx_code_label *default_label,
 		 int default_prob, tree index_type)
 {
   /* If INDEX has an unsigned type, we must make unsigned branches.  */
@@ -1620,7 +1629,8 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 			convert_modes (mode, imode,
 				       expand_normal (node->low),
 				       unsignedp),
-			label_rtx (node->code_label), unsignedp, probability);
+			jump_target_rtx (node->code_label),
+			unsignedp, probability);
       /* Since this case is taken at this point, reduce its weight from
          subtree_weight.  */
       subtree_prob -= prob;
@@ -1662,7 +1672,8 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 				       LT, NULL_RTX, mode, unsignedp,
 				       label_rtx (node->left->code_label),
                                        probability);
-	      emit_case_nodes (index, node->right, default_label, default_prob, index_type);
+	      emit_case_nodes (index, node->right, default_label, default_prob,
+			       index_type);
 	    }
 
 	  /* If both children are single-valued cases with no
@@ -1687,7 +1698,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 				convert_modes (mode, imode,
 					       expand_normal (node->right->low),
 					       unsignedp),
-				label_rtx (node->right->code_label),
+				jump_target_rtx (node->right->code_label),
 				unsignedp, probability);
 
 	      /* See if the value matches what the left hand side
@@ -1699,7 +1710,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 				convert_modes (mode, imode,
 					       expand_normal (node->left->low),
 					       unsignedp),
-				label_rtx (node->left->code_label),
+				jump_target_rtx (node->left->code_label),
 				unsignedp, probability);
 	    }
 
@@ -1786,7 +1797,8 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 			        (mode, imode,
 			         expand_normal (node->right->low),
 			         unsignedp),
-			        label_rtx (node->right->code_label), unsignedp, probability);
+				jump_target_rtx (node->right->code_label),
+				unsignedp, probability);
             }
 	  }
 
@@ -1828,7 +1840,8 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 			        (mode, imode,
 			         expand_normal (node->left->low),
 			         unsignedp),
-			        label_rtx (node->left->code_label), unsignedp, probability);
+				jump_target_rtx (node->left->code_label),
+				unsignedp, probability);
             }
 	}
     }
@@ -2051,7 +2064,7 @@ emit_case_nodes (rtx index, case_node_ptr node, rtx default_label,
 				       mode, 1, default_label, probability);
 	    }
 
-	  emit_jump (label_rtx (node->code_label));
+	  emit_jump (jump_target_rtx (node->code_label));
 	}
     }
 }
diff --git a/gcc/stmt.h b/gcc/stmt.h
index 620b0f1..721c7ea 100644
--- a/gcc/stmt.h
+++ b/gcc/stmt.h
@@ -31,13 +31,18 @@ extern tree resolve_asm_operand_names (tree, tree, tree, tree);
 extern tree tree_overlaps_hard_reg_set (tree, HARD_REG_SET *);
 #endif
 
-/* Return the CODE_LABEL rtx for a LABEL_DECL, creating it if necessary.  */
-extern rtx label_rtx (tree);
+/* Return the CODE_LABEL rtx for a LABEL_DECL, creating it if necessary.
+   If the label was deleted, the corresponding note
+   (NOTE_INSN_DELETED{_DEBUG,}_LABEL) insn will be returned.  */
+extern rtx_insn *label_rtx (tree);
 
 /* As label_rtx, but additionally the label is placed on the forced label
    list of its containing function (i.e. it is treated as reachable even
    if how is not obvious).  */
-extern rtx force_label_rtx (tree);
+extern rtx_insn *force_label_rtx (tree);
+
+/* As label_rtx, but checks that the label was not deleted.  */
+extern rtx_code_label *jump_target_rtx (tree);
 
 /* Expand a GIMPLE_SWITCH statement.  */
 extern void expand_case (gswitch *);
diff --git a/gcc/store-motion.c b/gcc/store-motion.c
index d621ec1..fdd2f47 100644
--- a/gcc/store-motion.c
+++ b/gcc/store-motion.c
@@ -813,7 +813,7 @@ insert_store (struct st_expr * expr, edge e)
     return 0;
 
   reg = expr->reaching_reg;
-  insn = as_a <rtx_insn *> (gen_move_insn (copy_rtx (expr->pattern), reg));
+  insn = gen_move_insn (copy_rtx (expr->pattern), reg);
 
   /* If we are inserting this expression on ALL predecessor edges of a BB,
      insert it at the start of the BB, and reset the insert bits on the other
@@ -954,7 +954,7 @@ replace_store_insn (rtx reg, rtx_insn *del, basic_block bb,
   rtx mem, note, set, ptr;
 
   mem = smexpr->pattern;
-  insn = as_a <rtx_insn *> (gen_move_insn (reg, SET_SRC (single_set (del))));
+  insn = gen_move_insn (reg, SET_SRC (single_set (del)));
 
   for (ptr = smexpr->antic_stores; ptr; ptr = XEXP (ptr, 1))
     if (XEXP (ptr, 0) == del)

[-- Attachment #4: swaps.patch --]
[-- Type: text/plain, Size: 1330 bytes --]

diff --git a/gcc/dojump.c b/gcc/dojump.c
index ad356ba..0790c77 100644
--- a/gcc/dojump.c
+++ b/gcc/dojump.c
@@ -987,9 +987,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
       if (can_compare_p (rcode, mode, ccp_jump)
 	  || (code == ORDERED && ! can_compare_p (ORDERED, mode, ccp_jump)))
 	{
-          tem = if_true_label;
-          if_true_label = if_false_label;
-          if_false_label = tem;
+	  std::swap (if_true_label, if_false_label);
 	  code = rcode;
 	  prob = inv (prob);
 	}
@@ -1000,9 +998,7 @@ do_compare_rtx_and_jump (rtx op0, rtx op1, enum rtx_code code, int unsignedp,
 
   if (swap_commutative_operands_p (op0, op1))
     {
-      tem = op0;
-      op0 = op1;
-      op1 = tem;
+      std::swap (op0, op1);
       code = swap_condition (code);
     }
 
diff --git a/gcc/expr.c b/gcc/expr.c
index 530a944..25aa11f 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -8870,11 +8870,7 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
 
       /* If op1 was placed in target, swap op0 and op1.  */
       if (target != op0 && target == op1)
-	{
-	  temp = op0;
-	  op0 = op1;
-	  op1 = temp;
-	}
+	std::swap (op0, op1);
 
       /* We generate better code and avoid problems with op1 mentioning
 	 target by forcing op1 into a pseudo if it isn't a constant.  */

[-- Attachment #5: swaps.cl --]
[-- Type: text/plain, Size: 196 bytes --]

gcc/ChangeLog:

2015-04-29  Mikhail Maltsev  <maltsevm@gmail.com>

	* dojump.c (do_compare_rtx_and_jump): Use std::swap instead of
	manual swaps.
	* expr.c (expand_expr_real_2): Likewise.


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-04-29  8:02       ` Mikhail Maltsev
@ 2015-04-30  3:54         ` Jeff Law
  2015-04-30  5:46         ` Jeff Law
  1 sibling, 0 replies; 21+ messages in thread
From: Jeff Law @ 2015-04-30  3:54 UTC (permalink / raw)
  To: Mikhail Maltsev, gcc-patches, richard.sandiford

On 04/29/2015 01:55 AM, Mikhail Maltsev wrote:
>
>> I probably would have done separate patches for the std::swap
>> changes. They're not really related to the rtx subclasses work.
> OK, sending 2 separate patches. Note that they are not "commutative":
> std::swap should be applied before the main one, because one of the
> swaps in do_compare_rtx_and_jump uses a single temporary variable of
> type rtx for swapping labels and for storing generic rtl expressions
> (this could be worked around, of course, but I think that would be just
> a waste of time).
I've applied the swap patch.  I want to look over the subclass patch a 
final time before applying it.

If you're going to continue this work, you should probably get 
write-after-approval access so that you can commit your own approved 
changes.


jeff

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-04-29  8:02       ` Mikhail Maltsev
  2015-04-30  3:54         ` Jeff Law
@ 2015-04-30  5:46         ` Jeff Law
  2015-05-04 20:32           ` Mikhail Maltsev
       [not found]           ` <5547D40F.6010802@gmail.com>
  1 sibling, 2 replies; 21+ messages in thread
From: Jeff Law @ 2015-04-30  5:46 UTC (permalink / raw)
  To: Mikhail Maltsev, gcc-patches, richard.sandiford

[-- Attachment #1: Type: text/plain, Size: 914 bytes --]

On 04/29/2015 01:55 AM, Mikhail Maltsev wrote:
[ Big Snip ]

Couple minor issues.

Can you please check the changes to do_jump_1; the indentation looked
weird in the patch.  If it's correct, just say so.

The ChangeLog needed some work.  I'm attaching the one I'd use for the
patch as it stands today.  There were some functions that had changed
but weren't referenced, and other minor oversights, which I've fixed.
I suspect you'll need to adjust it slightly as you fix PEEP2_EOB (see
below).

One significant question/issue.

The definition of PEEP2_EOB looks wrong.  I don't see how you can safely
cast pc_rtx to an rtx_insn * since it's an RTX rather than an rtx chain
object.  Maybe you're getting away with it because it's only used as a
marker, but it still feels wrong.  You'd probably be better off creating
a unique rtx_insn * object and using that as the marker.
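
Something along these lines is what I have in mind.  This is just an
untested sketch with made-up names, not code from the tree, but it
shows the idea of a dedicated marker object:

  /* An insn object that is never emitted into any chain; it exists
     only so that its address can serve as the end-of-buffer marker,
     distinct from both NULL and any real insn.  */
  static GTY(()) rtx_insn *peep2_eob_marker;

  /* ... during peephole2 initialization ... */
  if (peep2_eob_marker == NULL)
    peep2_eob_marker = as_a <rtx_insn *> (rtx_alloc (INSN));

  #define PEEP2_EOB peep2_eob_marker

That keeps the existing NULL-based assertions meaningful without lying
about the type of pc_rtx.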

Otherwise it's ready to go.

jeff

[-- Attachment #2: as_insn3.cl --]
[-- Type: application/simple-filter+xml, Size: 6335 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-04-30  5:46         ` Jeff Law
@ 2015-05-04 20:32           ` Mikhail Maltsev
  2015-05-04 21:22             ` Trevor Saunders
  2015-05-09  5:49             ` Trevor Saunders
       [not found]           ` <5547D40F.6010802@gmail.com>
  1 sibling, 2 replies; 21+ messages in thread
From: Mikhail Maltsev @ 2015-05-04 20:32 UTC (permalink / raw)
  To: gcc-patches, Jeff Law

[-- Attachment #1: Type: text/plain, Size: 1629 bytes --]

(the original message was bounced by the mailing list, resending with
compressed attachment)

On 30.04.2015 8:00, Jeff Law wrote:
> 
> Can you please check the changes to do_jump_1; the indentation looked
> weird in the patch.  If it's correct, just say so.
It is ok. Probably that's because the surrounding code is indented with
spaces.

> The definition of PEEP2_EOB looks wrong.  I don't see how you can
> safely cast pc_rtx to an rtx_insn * since it's an RTX rather than an
> rtx chain object.  Maybe you're getting away with it because it's only
> used as a marker, but it still feels wrong.
Yes, FWIW, it is only needed for the assertions in peep2_regno_dead_p
and peep2_reg_dead_p which check it against NULL (they are intended to
verify that the live_before field in the peep2_insn_data struct is
valid).  At least, when I removed the assertions and changed PEEP2_EOB
to NULL (as an experiment), the testsuite passed without regressions.
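
For reference, the checks in question are roughly of this shape (I'm
quoting from memory, so treat this as approximate rather than the exact
source):

  int
  peep2_regno_dead_p (int ofs, int regno)
  {
    /* ... */
    gcc_assert (peep2_insn_data[ofs].insn != NULL_RTX);
    return ! REGNO_REG_SET_P (peep2_insn_data[ofs].live_before, regno);
  }

With PEEP2_EOB changed to NULL, the gcc_assert can no longer
distinguish "slot filled up to the end of the buffer" from "slot never
initialized".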

> You'd probably be better off creating a unique rtx_insn * object and
> using that as the marker.
OK. Fixed the patch. Rebased and tested on x86_64-linux (fortunately, it
did not conflict with Trevor's series of rtx_insn-related patches).

I'm trying to continue and the next patch (peep_split.patch,
peep_split.cl) is addressing the same task in some of the generated code
(namely, gen_peephole2_* and gen_split_* series of functions).

> If you're going to continue this work, you should probably get
> write-after-approval access so that you can commit your own approved
> changes.
Is it OK to mention you as a maintainer who can approve my request for
write access?

-- 
Regards,
    Mikhail Maltsev


[-- Attachment #2: as_insn.tar.gz --]
[-- Type: application/x-gzip, Size: 28642 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-05-04 20:32           ` Mikhail Maltsev
@ 2015-05-04 21:22             ` Trevor Saunders
  2015-05-09  5:49             ` Trevor Saunders
  1 sibling, 0 replies; 21+ messages in thread
From: Trevor Saunders @ 2015-05-04 21:22 UTC (permalink / raw)
  To: Mikhail Maltsev; +Cc: gcc-patches, Jeff Law

> OK. Fixed the patch. Rebased and tested on x86_64-linux (fortunately, it
> did not conflict with Trevor's series of rtx_insn-related patches).

good :) fwiw I have another series that'll probably be ready about the
end of the week (the punishment for writing small patches is making the
testing box spin for days ;-)

> I'm trying to continue and the next patch (peep_split.patch,
> peep_split.cl) is addressing the same task in some of the generated code
> (namely, gen_peephole2_* and gen_split_* series of functions).

ok, I've stayed away from the generators and just done more "trivial"
changes of rtx -> rtx_insn * in arguments.
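
By "trivial" I mean changes of this shape (illustrative only, not a
quote from any particular patch):

  -static int get_jump_flags (const_rtx, rtx);
  +static int get_jump_flags (const rtx_insn *, rtx);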

Trev

> > If you're going to continue this work, you should probably get
> > write-after-approval access so that you can commit your own approved
> > changes.
> Is it OK to mention you as a maintainer who can approve my request for
> write access?
> 
> -- 
> Regards,
>     Mikhail Maltsev
> 


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
       [not found]           ` <5547D40F.6010802@gmail.com>
@ 2015-05-08 21:54             ` Jeff Law
  2015-05-11 20:41               ` Mikhail Maltsev
  2015-06-06  5:51               ` Mikhail Maltsev
  0 siblings, 2 replies; 21+ messages in thread
From: Jeff Law @ 2015-05-08 21:54 UTC (permalink / raw)
  To: Mikhail Maltsev, gcc-patches, richard.sandiford, Trevor Saunders

On 05/04/2015 02:18 PM, Mikhail Maltsev wrote:
> Yes, FWIW, it is only needed for the assertions in peep2_regno_dead_p
> and peep2_reg_dead_p which check it against NULL (they are intended to
> verify that the live_before field in the peep2_insn_data struct is
> valid).  At least, when I removed the assertions and changed PEEP2_EOB
> to NULL (as an experiment), the testsuite passed without regressions.
>
>> You'd probably be better off creating a unique rtx_insn * object and
>> using that as the marker.
> OK. Fixed the patch. Rebased and tested on x86_64-linux (fortunately, it
> did not conflict with Trevor's series of rtx_insn-related patches).
Thanks for taking care of that.

>
> I'm trying to continue and the next patch (peep_split.patch,
> peep_split.cl) is addressing the same task in some of the generated code
> (namely, gen_peephole2_* and gen_split_* series of functions).
And that looks good.  If it bootstraps and passes regression testing,
then it's good to go too.

>
>> If you're going to continue this work, you should probably get
>> write-after-approval access so that you can commit your own approved
>> changes.
> Is it OK to mention you as a maintainer who can approve my request for
> write access?
Yes, absolutely.  If you haven't already done so, go ahead and get this 
going because...

Both patches are approved.  Please install onto the trunk.

jeff

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-05-04 20:32           ` Mikhail Maltsev
  2015-05-04 21:22             ` Trevor Saunders
@ 2015-05-09  5:49             ` Trevor Saunders
  1 sibling, 0 replies; 21+ messages in thread
From: Trevor Saunders @ 2015-05-09  5:49 UTC (permalink / raw)
  To: Mikhail Maltsev; +Cc: gcc-patches, Jeff Law

On Mon, May 04, 2015 at 11:32:38PM +0300, Mikhail Maltsev wrote:
> > You'd probably be better off creating a unique rtx_insn * object and
> > using that as the marker.
> OK. Fixed the patch. Rebased and tested on x86_64-linux (fortunately, it
> did not conflict with Trevor's series of rtx_insn-related patches).

ok, that second series is now in.  I think you might conflict with the
last patch, but I think your patch is a superset of what I did, so the
rebase should still be simple.

Trev

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-05-08 21:54             ` Jeff Law
@ 2015-05-11 20:41               ` Mikhail Maltsev
  2015-05-11 21:21                 ` Joseph Myers
  2015-05-12 20:26                 ` Jeff Law
  2015-06-06  5:51               ` Mikhail Maltsev
  1 sibling, 2 replies; 21+ messages in thread
From: Mikhail Maltsev @ 2015-05-11 20:41 UTC (permalink / raw)
  To: Jeff Law, richard.sandiford, Trevor Saunders; +Cc: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 1637 bytes --]

On 09.05.2015 0:54, Jeff Law wrote:
> 
> Both patches are approved.  Please install onto the trunk.
> 
> jeff
> 

Sorry for the delay. When I started to work on this task, I wrote that
I would test the patches on a couple of other platforms (not just x86).
Probably I should have done that earlier, because I missed a couple of
important details which could break the build. Fortunately, I did
several tests before merging into trunk, and I think I need some advice
on testing (or maybe some reworking).

First, I didn't realize that lots of code in GCC (besides the targets in
gcc/config and the generated code) is compiled conditionally (I mean,
guarded by #ifdef's). I also missed a couple of places in target code
which use rtx_jump_insn-related functions (whose prototypes were
changed). I attached the changes that should be added to the patch. I
don't think they can be considered "obvious", so I'm sending them for
review. I have not included the ChangeLog yet, but the patch will need
some fixing and rebasing anyway, so I'll update it later.

In general, is there a recommended set of targets that covers most
conditionally compiled code? Also, the GCC Wiki mentions some automated
test services and the compile farm. Is it possible to use them to test
a patch on many targets?

Finally, I could try to break the patch into smaller pieces, though I
don't know if it's worth the effort.

P.S. Bootstrapped/regtested on x86_64-unknown-linux-gnu {,-m32}
(C,C++,lto,objc,fortran,go), cross-compiled and regtested (C and C++
testsuites) on sh-elf, mips-elf, powerpc-eabisim and arm-eabi simulators.

-- 
Regards,
    Mikhail Maltsev

[-- Attachment #2: as_insn_amend.patch --]
[-- Type: text/plain, Size: 28396 bytes --]

diff --git a/gcc/config/bfin/bfin.c b/gcc/config/bfin/bfin.c
index 2768266..37f4ded 100644
--- a/gcc/config/bfin/bfin.c
+++ b/gcc/config/bfin/bfin.c
@@ -3844,7 +3844,8 @@ hwloop_optimize (hwloop_info loop)
 
   delete_insn (loop->loop_end);
   /* Insert the loop end label before the last instruction of the loop.  */
-  emit_label_before (loop->end_label, loop->last_insn);
+  emit_label_before (as_a <rtx_code_label *> (loop->end_label),
+		     loop->last_insn);
 
   return true;
 }
diff --git a/gcc/config/mips/mips.c b/gcc/config/mips/mips.c
index 16ed5f0..280738c 100644
--- a/gcc/config/mips/mips.c
+++ b/gcc/config/mips/mips.c
@@ -16799,13 +16799,14 @@ mips16_split_long_branches (void)
   do
     {
       rtx_insn *insn;
+      rtx_jump_insn *jump_insn;
 
       shorten_branches (get_insns ());
       something_changed = false;
       for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
-	if (JUMP_P (insn)
-	    && get_attr_length (insn) > 4
-	    && (any_condjump_p (insn) || any_uncondjump_p (insn)))
+	if ((jump_insn = dyn_cast <rtx_jump_insn *> (insn))
+	    && get_attr_length (jump_insn) > 4
+	    && (any_condjump_p (jump_insn) || any_uncondjump_p (jump_insn)))
 	  {
 	    rtx old_label, temp, saved_temp;
 	    rtx_code_label *new_label;
@@ -16820,7 +16821,7 @@ mips16_split_long_branches (void)
 	    emit_move_insn (saved_temp, temp);
 
 	    /* Load the branch target into TEMP.  */
-	    old_label = JUMP_LABEL (insn);
+	    old_label = JUMP_LABEL (jump_insn);
 	    target = gen_rtx_LABEL_REF (Pmode, old_label);
 	    mips16_load_branch_target (temp, target);
 
@@ -16835,7 +16836,7 @@ mips16_split_long_branches (void)
 	       a PC-relative constant pool.  */
 	    mips16_lay_out_constants (false);
 
-	    if (simplejump_p (insn))
+	    if (simplejump_p (jump_insn))
 	      /* We're going to replace INSN with a longer form.  */
 	      new_label = NULL;
 	    else
@@ -16849,11 +16850,11 @@ mips16_split_long_branches (void)
 	    jump_sequence = get_insns ();
 	    end_sequence ();
 
-	    emit_insn_after (jump_sequence, insn);
+	    emit_insn_after (jump_sequence, jump_insn);
 	    if (new_label)
-	      invert_jump (insn, new_label, false);
+	      invert_jump (jump_insn, new_label, false);
 	    else
-	      delete_insn (insn);
+	      delete_insn (jump_insn);
 	    something_changed = true;
 	  }
     }
diff --git a/gcc/config/sh/sh.c b/gcc/config/sh/sh.c
index 9bcb423..bc1ce24 100644
--- a/gcc/config/sh/sh.c
+++ b/gcc/config/sh/sh.c
@@ -5876,7 +5876,7 @@ static void
 gen_far_branch (struct far_branch *bp)
 {
   rtx_insn *insn = bp->insert_place;
-  rtx_insn *jump;
+  rtx_jump_insn *jump;
   rtx_code_label *label = gen_label_rtx ();
   int ok;
 
@@ -5907,7 +5907,7 @@ gen_far_branch (struct far_branch *bp)
       JUMP_LABEL (jump) = pat;
     }
 
-  ok = invert_jump (insn, label, 1);
+  ok = invert_jump (as_a <rtx_jump_insn *> (insn), label, 1);
   gcc_assert (ok);
 
   /* If we are branching around a jump (rather than a return), prevent
@@ -6700,7 +6700,7 @@ split_branches (rtx_insn *first)
 		    bp->insert_place = insn;
 		    bp->address = addr;
 		  }
-		ok = redirect_jump (insn, label, 0);
+		ok = redirect_jump (as_a <rtx_jump_insn *> (insn), label, 0);
 		gcc_assert (ok);
 	      }
 	    else
@@ -6775,7 +6775,7 @@ split_branches (rtx_insn *first)
 			JUMP_LABEL (insn) = far_label;
 			LABEL_NUSES (far_label)++;
 		      }
-		    redirect_jump (insn, ret_rtx, 1);
+		    redirect_jump (as_a <rtx_jump_insn *> (insn), ret_rtx, 1);
 		    far_label = 0;
 		  }
 	      }
diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index d297380..a7338ce 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -4401,11 +4401,12 @@ emit_insn_before_noloc (rtx x, rtx_insn *before, basic_block bb)
 /* Make an instruction with body X and code JUMP_INSN
    and output it before the instruction BEFORE.  */
 
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_before_noloc (rtx x, rtx_insn *before)
 {
-  return emit_pattern_before_noloc (x, before, NULL_RTX, NULL,
-				    make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+		emit_pattern_before_noloc (x, before, NULL_RTX, NULL,
+					   make_jump_insn_raw));
 }
 
 /* Make an instruction with body X and code CALL_INSN
@@ -4445,12 +4446,12 @@ emit_barrier_before (rtx before)
 /* Emit the label LABEL before the insn BEFORE.  */
 
 rtx_code_label *
-emit_label_before (rtx_code_label *label, rtx_insn *before)
+emit_label_before (rtx label, rtx_insn *before)
 {
   gcc_checking_assert (INSN_UID (label) == 0);
   INSN_UID (label) = cur_insn_uid++;
   add_insn_before (label, before, NULL);
-  return label;
+  return as_a <rtx_code_label *> (label);
 }
 \f
 /* Helper for emit_insn_after, handles lists of instructions
@@ -4552,10 +4553,11 @@ emit_insn_after_noloc (rtx x, rtx after, basic_block bb)
 /* Make an insn of code JUMP_INSN with body X
    and output it after the insn AFTER.  */
 
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_after_noloc (rtx x, rtx after)
 {
-  return emit_pattern_after_noloc (x, after, NULL, make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+		emit_pattern_after_noloc (x, after, NULL, make_jump_insn_raw));
 }
 
 /* Make an instruction with body X and code CALL_INSN
@@ -4727,17 +4729,19 @@ emit_insn_after (rtx pattern, rtx after)
 }
 
 /* Like emit_jump_insn_after_noloc, but set INSN_LOCATION according to LOC.  */
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_after_setloc (rtx pattern, rtx after, int loc)
 {
-  return emit_pattern_after_setloc (pattern, after, loc, make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+	emit_pattern_after_setloc (pattern, after, loc, make_jump_insn_raw));
 }
 
 /* Like emit_jump_insn_after_noloc, but set INSN_LOCATION according to AFTER.  */
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_after (rtx pattern, rtx after)
 {
-  return emit_pattern_after (pattern, after, true, make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+	emit_pattern_after (pattern, after, true, make_jump_insn_raw));
 }
 
 /* Like emit_call_insn_after_noloc, but set INSN_LOCATION according to LOC.  */
@@ -4842,19 +4846,21 @@ emit_insn_before (rtx pattern, rtx before)
 }
 
 /* like emit_insn_before_noloc, but set INSN_LOCATION according to LOC.  */
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_before_setloc (rtx pattern, rtx_insn *before, int loc)
 {
-  return emit_pattern_before_setloc (pattern, before, loc, false,
-				     make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+	emit_pattern_before_setloc (pattern, before, loc, false,
+				    make_jump_insn_raw));
 }
 
 /* Like emit_jump_insn_before_noloc, but set INSN_LOCATION according to BEFORE.  */
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_before (rtx pattern, rtx before)
 {
-  return emit_pattern_before (pattern, before, true, false,
-			      make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+	emit_pattern_before (pattern, before, true, false,
+			     make_jump_insn_raw));
 }
 
 /* Like emit_insn_before_noloc, but set INSN_LOCATION according to LOC.  */
diff --git a/gcc/explow.c b/gcc/explow.c
index 57cb767..c4427a8 100644
--- a/gcc/explow.c
+++ b/gcc/explow.c
@@ -984,7 +984,7 @@ emit_stack_save (enum save_level save_level, rtx *psave)
 {
   rtx sa = *psave;
   /* The default is that we use a move insn and save in a Pmode object.  */
-  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
+  rtx (*fcn) (rtx, rtx) = gen_move_insn_uncast;
   machine_mode mode = STACK_SAVEAREA_MODE (save_level);
 
   /* See if this machine has anything special to do for this kind of save.  */
@@ -1039,7 +1039,7 @@ void
 emit_stack_restore (enum save_level save_level, rtx sa)
 {
   /* The default is that we use a move insn.  */
-  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
+  rtx (*fcn) (rtx, rtx) = gen_move_insn_uncast;
 
   /* If stack_realign_drap, the x86 backend emits a prologue that aligns both
      STACK_POINTER and HARD_FRAME_POINTER.
diff --git a/gcc/expr.c b/gcc/expr.c
index a789024..395cafb 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -3664,6 +3664,15 @@ gen_move_insn (rtx x, rtx y)
   return seq;
 }
 
+/* Same as above, but return rtx (used as a callback, which must have
+   a prototype compatible with other functions returning rtx).  */
+
+rtx
+gen_move_insn_uncast (rtx x, rtx y)
+{
+  return gen_move_insn (x, y);
+}
+
 /* If Y is representable exactly in a narrower mode, and the target can
    perform the extension directly from constant or memory, then emit the
    move as an extension.  */
diff --git a/gcc/expr.h b/gcc/expr.h
index 6c4afc4..e3afa8d 100644
--- a/gcc/expr.h
+++ b/gcc/expr.h
@@ -204,6 +204,7 @@ extern rtx store_by_pieces (rtx, unsigned HOST_WIDE_INT,
 /* Emit insns to set X from Y.  */
 extern rtx_insn *emit_move_insn (rtx, rtx);
 extern rtx_insn *gen_move_insn (rtx, rtx);
+extern rtx gen_move_insn_uncast (rtx, rtx);
 
 /* Emit insns to set X from Y, with no frills.  */
 extern rtx_insn *emit_move_insn_1 (rtx, rtx);
diff --git a/gcc/loop-doloop.c b/gcc/loop-doloop.c
index b5adbac..afd1da0 100644
--- a/gcc/loop-doloop.c
+++ b/gcc/loop-doloop.c
@@ -365,7 +365,7 @@ static bool
 add_test (rtx cond, edge *e, basic_block dest)
 {
   rtx_insn *seq, *jump;
-  rtx label;
+  rtx_code_label *label;
   machine_mode mode;
   rtx op0 = XEXP (cond, 0), op1 = XEXP (cond, 1);
   enum rtx_code code = GET_CODE (cond);
@@ -379,8 +379,7 @@ add_test (rtx cond, edge *e, basic_block dest)
   op0 = force_operand (op0, NULL_RTX);
   op1 = force_operand (op1, NULL_RTX);
   label = block_label (dest);
-  do_compare_rtx_and_jump (op0, op1, code, 0, mode, NULL_RTX,
-			   NULL_RTX, label, -1);
+  do_compare_rtx_and_jump (op0, op1, code, 0, mode, NULL_RTX, NULL, label, -1);
 
   jump = get_last_insn ();
   if (!jump || !JUMP_P (jump))
@@ -432,7 +431,7 @@ doloop_modify (struct loop *loop, struct niter_desc *desc,
   rtx tmp, noloop = NULL_RTX;
   rtx_insn *sequence;
   rtx_insn *jump_insn;
-  rtx jump_label;
+  rtx_code_label *jump_label;
   int nonneg = 0;
   bool increment_count;
   basic_block loop_end = desc->out_edge->src;
@@ -627,7 +626,7 @@ doloop_optimize (struct loop *loop)
   rtx doloop_seq, doloop_pat, doloop_reg;
   rtx count;
   widest_int iterations, iterations_max;
-  rtx start_label;
+  rtx_code_label *start_label;
   rtx condition;
   unsigned level, est_niter;
   int max_cost;
diff --git a/gcc/reorg.c b/gcc/reorg.c
index 4b41f7e..e085290 100644
--- a/gcc/reorg.c
+++ b/gcc/reorg.c
@@ -236,7 +236,7 @@ static rtx_insn *delete_from_delay_slot (rtx_insn *);
 static void delete_scheduled_jump (rtx_insn *);
 static void note_delay_statistics (int, int);
 #if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS)
-static rtx_insn_list *optimize_skip (rtx_insn *);
+static rtx_insn_list *optimize_skip (rtx_jump_insn *);
 #endif
 static int get_jump_flags (const rtx_insn *, rtx);
 static int mostly_true_jump (rtx);
@@ -264,12 +264,12 @@ static void try_merge_delay_insns (rtx_insn *, rtx_insn *);
 static rtx redundant_insn (rtx, rtx_insn *, rtx);
 static int own_thread_p (rtx, rtx, int);
 static void update_block (rtx_insn *, rtx);
-static int reorg_redirect_jump (rtx_insn *, rtx);
+static int reorg_redirect_jump (rtx_jump_insn *, rtx);
 static void update_reg_dead_notes (rtx_insn *, rtx_insn *);
 static void fix_reg_dead_note (rtx, rtx);
 static void update_reg_unused_notes (rtx, rtx);
 static void fill_simple_delay_slots (int);
-static rtx_insn_list *fill_slots_from_thread (rtx_insn *, rtx, rtx, rtx,
+static rtx_insn_list *fill_slots_from_thread (rtx_jump_insn *, rtx, rtx, rtx,
 					      int, int, int, int,
 					      int *, rtx_insn_list *);
 static void fill_eager_delay_slots (void);
@@ -779,7 +779,7 @@ note_delay_statistics (int slots_filled, int index)
    of delay slots required.  */
 
 static rtx_insn_list *
-optimize_skip (rtx_insn *insn)
+optimize_skip (rtx_jump_insn *insn)
 {
   rtx_insn *trial = next_nonnote_insn (insn);
   rtx_insn *next_trial = next_active_insn (trial);
@@ -1789,7 +1789,7 @@ update_block (rtx_insn *insn, rtx where)
    the basic block containing the jump.  */
 
 static int
-reorg_redirect_jump (rtx_insn *jump, rtx nlabel)
+reorg_redirect_jump (rtx_jump_insn *jump, rtx nlabel)
 {
   incr_ticks_for_insn (jump);
   return redirect_jump (jump, nlabel, 1);
@@ -2147,7 +2147,7 @@ fill_simple_delay_slots (int non_jumps_p)
 	  && (condjump_p (insn) || condjump_in_parallel_p (insn))
 	  && !ANY_RETURN_P (JUMP_LABEL (insn)))
 	{
-	  delay_list = optimize_skip (insn);
+	  delay_list = optimize_skip (as_a <rtx_jump_insn *> (insn));
 	  if (delay_list)
 	    slots_filled += 1;
 	}
@@ -2296,18 +2296,20 @@ fill_simple_delay_slots (int non_jumps_p)
 		    = add_to_delay_list (copy_delay_slot_insn (next_trial),
 					 delay_list);
 		  slots_filled++;
-		  reorg_redirect_jump (trial, new_label);
+		  reorg_redirect_jump (as_a <rtx_jump_insn *> (trial),
+				       new_label);
 		}
 	    }
 	}
 
       /* If this is an unconditional jump, then try to get insns from the
 	 target of the jump.  */
-      if (JUMP_P (insn)
-	  && simplejump_p (insn)
+      rtx_jump_insn *jump_insn;
+      if ((jump_insn = dyn_cast <rtx_jump_insn *> (insn))
+	  && simplejump_p (jump_insn)
 	  && slots_filled != slots_to_fill)
 	delay_list
-	  = fill_slots_from_thread (insn, const_true_rtx,
+	  = fill_slots_from_thread (jump_insn, const_true_rtx,
 				    next_active_insn (JUMP_LABEL (insn)),
 				    NULL, 1, 1,
 				    own_thread_p (JUMP_LABEL (insn),
@@ -2411,10 +2413,9 @@ follow_jumps (rtx label, rtx_insn *jump, bool *crossing)
    slot.  We then adjust the jump to point after the insns we have taken.  */
 
 static rtx_insn_list *
-fill_slots_from_thread (rtx_insn *insn, rtx condition, rtx thread_or_return,
-			rtx opposite_thread, int likely,
-			int thread_if_true,
-			int own_thread, int slots_to_fill,
+fill_slots_from_thread (rtx_jump_insn *insn, rtx condition,
+			rtx thread_or_return, rtx opposite_thread, int likely,
+			int thread_if_true, int own_thread, int slots_to_fill,
 			int *pslots_filled, rtx_insn_list *delay_list)
 {
   rtx new_thread;
@@ -2883,6 +2884,7 @@ fill_eager_delay_slots (void)
       rtx target_label, insn_at_target;
       rtx_insn *fallthrough_insn;
       rtx_insn_list *delay_list = 0;
+      rtx_jump_insn *jump_insn;
       int own_target;
       int own_fallthrough;
       int prediction, slots_to_fill, slots_filled;
@@ -2890,11 +2892,11 @@ fill_eager_delay_slots (void)
       insn = unfilled_slots_base[i];
       if (insn == 0
 	  || insn->deleted ()
-	  || !JUMP_P (insn)
-	  || ! (condjump_p (insn) || condjump_in_parallel_p (insn)))
+	  || ! (jump_insn = dyn_cast <rtx_jump_insn *> (insn))
+	  || ! (condjump_p (jump_insn) || condjump_in_parallel_p (jump_insn)))
 	continue;
 
-      slots_to_fill = num_delay_slots (insn);
+      slots_to_fill = num_delay_slots (jump_insn);
       /* Some machine description have defined instructions to have
 	 delay slots only in certain circumstances which may depend on
 	 nearby insns (which change due to reorg's actions).
@@ -2910,8 +2912,8 @@ fill_eager_delay_slots (void)
 	continue;
 
       slots_filled = 0;
-      target_label = JUMP_LABEL (insn);
-      condition = get_branch_condition (insn, target_label);
+      target_label = JUMP_LABEL (jump_insn);
+      condition = get_branch_condition (jump_insn, target_label);
 
       if (condition == 0)
 	continue;
@@ -2931,9 +2933,9 @@ fill_eager_delay_slots (void)
 	}
       else
 	{
-	  fallthrough_insn = next_active_insn (insn);
-	  own_fallthrough = own_thread_p (NEXT_INSN (insn), NULL_RTX, 1);
-	  prediction = mostly_true_jump (insn);
+	  fallthrough_insn = next_active_insn (jump_insn);
+	  own_fallthrough = own_thread_p (NEXT_INSN (jump_insn), NULL_RTX, 1);
+	  prediction = mostly_true_jump (jump_insn);
 	}
 
       /* If this insn is expected to branch, first try to get insns from our
@@ -2943,7 +2945,7 @@ fill_eager_delay_slots (void)
       if (prediction > 0)
 	{
 	  delay_list
-	    = fill_slots_from_thread (insn, condition, insn_at_target,
+	    = fill_slots_from_thread (jump_insn, condition, insn_at_target,
 				      fallthrough_insn, prediction == 2, 1,
 				      own_target,
 				      slots_to_fill, &slots_filled, delay_list);
@@ -2954,11 +2956,12 @@ fill_eager_delay_slots (void)
 		 we might have found a redundant insn which we deleted
 		 from the thread that was filled.  So we have to recompute
 		 the next insn at the target.  */
-	      target_label = JUMP_LABEL (insn);
+	      target_label = JUMP_LABEL (jump_insn);
 	      insn_at_target = first_active_target_insn (target_label);
 
 	      delay_list
-		= fill_slots_from_thread (insn, condition, fallthrough_insn,
+		= fill_slots_from_thread (jump_insn, condition,
+					  fallthrough_insn,
 					  insn_at_target, 0, 0,
 					  own_fallthrough,
 					  slots_to_fill, &slots_filled,
@@ -2969,7 +2972,7 @@ fill_eager_delay_slots (void)
 	{
 	  if (own_fallthrough)
 	    delay_list
-	      = fill_slots_from_thread (insn, condition, fallthrough_insn,
+	      = fill_slots_from_thread (jump_insn, condition, fallthrough_insn,
 					insn_at_target, 0, 0,
 					own_fallthrough,
 					slots_to_fill, &slots_filled,
@@ -2977,7 +2980,7 @@ fill_eager_delay_slots (void)
 
 	  if (delay_list == 0)
 	    delay_list
-	      = fill_slots_from_thread (insn, condition, insn_at_target,
+	      = fill_slots_from_thread (jump_insn, condition, insn_at_target,
 					next_active_insn (insn), 0, 1,
 					own_target,
 					slots_to_fill, &slots_filled,
@@ -2986,7 +2989,7 @@ fill_eager_delay_slots (void)
 
       if (delay_list)
 	unfilled_slots_base[i]
-	  = emit_delay_sequence (insn, delay_list, slots_filled);
+	  = emit_delay_sequence (jump_insn, delay_list, slots_filled);
 
       if (slots_to_fill == slots_filled)
 	unfilled_slots_base[i] = 0;
@@ -3222,40 +3225,41 @@ relax_delay_slots (rtx_insn *first)
       /* If this is a jump insn, see if it now jumps to a jump, jumps to
 	 the next insn, or jumps to a label that is not the last of a
 	 group of consecutive labels.  */
-      if (JUMP_P (insn)
+      if (is_a <rtx_jump_insn *> (insn)
 	  && (condjump_p (insn) || condjump_in_parallel_p (insn))
 	  && !ANY_RETURN_P (target_label = JUMP_LABEL (insn)))
 	{
+	  rtx_jump_insn *jump_insn = as_a <rtx_jump_insn *> (insn);
 	  target_label
-	    = skip_consecutive_labels (follow_jumps (target_label, insn,
+	    = skip_consecutive_labels (follow_jumps (target_label, jump_insn,
 						     &crossing));
 	  if (ANY_RETURN_P (target_label))
 	    target_label = find_end_label (target_label);
 
 	  if (target_label && next_active_insn (target_label) == next
-	      && ! condjump_in_parallel_p (insn)
-	      && ! (next && switch_text_sections_between_p (insn, next)))
+	      && ! condjump_in_parallel_p (jump_insn)
+	      && ! (next && switch_text_sections_between_p (jump_insn, next)))
 	    {
-	      delete_jump (insn);
+	      delete_jump (jump_insn);
 	      continue;
 	    }
 
-	  if (target_label && target_label != JUMP_LABEL (insn))
+	  if (target_label && target_label != JUMP_LABEL (jump_insn))
 	    {
-	      reorg_redirect_jump (insn, target_label);
+	      reorg_redirect_jump (jump_insn, target_label);
 	      if (crossing)
-		CROSSING_JUMP_P (insn) = 1;
+		CROSSING_JUMP_P (jump_insn) = 1;
 	    }
 
 	  /* See if this jump conditionally branches around an unconditional
 	     jump.  If so, invert this jump and point it to the target of the
 	     second jump.  Check if it's possible on the target.  */
 	  if (next && simplejump_or_return_p (next)
-	      && any_condjump_p (insn)
+	      && any_condjump_p (jump_insn)
 	      && target_label
 	      && next_active_insn (target_label) == next_active_insn (next)
-	      && no_labels_between_p (insn, next)
-	      && targetm.can_follow_jump (insn, next))
+	      && no_labels_between_p (jump_insn, next)
+	      && targetm.can_follow_jump (jump_insn, next))
 	    {
 	      rtx label = JUMP_LABEL (next);
 
@@ -3270,10 +3274,10 @@ relax_delay_slots (rtx_insn *first)
 	      if (!ANY_RETURN_P (label))
 		++LABEL_NUSES (label);
 
-	      if (invert_jump (insn, label, 1))
+	      if (invert_jump (jump_insn, label, 1))
 		{
 		  delete_related_insns (next);
-		  next = insn;
+		  next = jump_insn;
 		}
 
 	      if (!ANY_RETURN_P (label))
@@ -3303,8 +3307,8 @@ relax_delay_slots (rtx_insn *first)
 	  rtx other_target = JUMP_LABEL (other);
 	  target_label = JUMP_LABEL (insn);
 
-	  if (invert_jump (other, target_label, 0))
-	    reorg_redirect_jump (insn, other_target);
+	  if (invert_jump (as_a <rtx_jump_insn *> (other), target_label, 0))
+	    reorg_redirect_jump (as_a <rtx_jump_insn *> (insn), other_target);
 	}
 
       /* Now look only at cases where we have a filled delay slot.  */
@@ -3369,25 +3373,28 @@ relax_delay_slots (rtx_insn *first)
 	}
 
       /* Now look only at the cases where we have a filled JUMP_INSN.  */
-      if (!JUMP_P (delay_insn)
-	  || !(condjump_p (delay_insn) || condjump_in_parallel_p (delay_insn)))
+      rtx_jump_insn *delay_jump_insn =
+		dyn_cast <rtx_jump_insn *> (delay_insn);
+      if (! delay_jump_insn || !(condjump_p (delay_jump_insn)
+	  || condjump_in_parallel_p (delay_jump_insn)))
 	continue;
 
-      target_label = JUMP_LABEL (delay_insn);
+      target_label = JUMP_LABEL (delay_jump_insn);
       if (target_label && ANY_RETURN_P (target_label))
 	continue;
 
       /* If this jump goes to another unconditional jump, thread it, but
 	 don't convert a jump into a RETURN here.  */
-      trial = skip_consecutive_labels (follow_jumps (target_label, delay_insn,
+      trial = skip_consecutive_labels (follow_jumps (target_label,
+						     delay_jump_insn,
 						     &crossing));
       if (ANY_RETURN_P (trial))
 	trial = find_end_label (trial);
 
       if (trial && trial != target_label
-	  && redirect_with_delay_slots_safe_p (delay_insn, trial, insn))
+	  && redirect_with_delay_slots_safe_p (delay_jump_insn, trial, insn))
 	{
-	  reorg_redirect_jump (delay_insn, trial);
+	  reorg_redirect_jump (delay_jump_insn, trial);
 	  target_label = trial;
 	  if (crossing)
 	    CROSSING_JUMP_P (insn) = 1;
@@ -3419,7 +3426,7 @@ relax_delay_slots (rtx_insn *first)
 	      /* Now emit a label before the special USE insn, and
 		 redirect our jump to the new label.  */
 	      target_label = get_label_before (PREV_INSN (tmp), target_label);
-	      reorg_redirect_jump (delay_insn, target_label);
+	      reorg_redirect_jump (delay_jump_insn, target_label);
 	      next = insn;
 	      continue;
 	    }
@@ -3440,19 +3447,19 @@ relax_delay_slots (rtx_insn *first)
 	    target_label = find_end_label (target_label);
 	  
 	  if (target_label
-	      && redirect_with_delay_slots_safe_p (delay_insn, target_label,
-						   insn))
+	      && redirect_with_delay_slots_safe_p (delay_jump_insn,
+						   target_label, insn))
 	    {
 	      update_block (trial_seq->insn (1), insn);
-	      reorg_redirect_jump (delay_insn, target_label);
+	      reorg_redirect_jump (delay_jump_insn, target_label);
 	      next = insn;
 	      continue;
 	    }
 	}
 
       /* See if we have a simple (conditional) jump that is useless.  */
-      if (! INSN_ANNULLED_BRANCH_P (delay_insn)
-	  && ! condjump_in_parallel_p (delay_insn)
+      if (! INSN_ANNULLED_BRANCH_P (delay_jump_insn)
+	  && ! condjump_in_parallel_p (delay_jump_insn)
 	  && prev_active_insn (target_label) == insn
 	  && ! BARRIER_P (prev_nonnote_insn (target_label))
 #if HAVE_cc0
@@ -3489,11 +3496,11 @@ relax_delay_slots (rtx_insn *first)
 	  trial = PREV_INSN (insn);
 	  delete_related_insns (insn);
 	  gcc_assert (GET_CODE (pat) == SEQUENCE);
-	  add_insn_after (delay_insn, trial, NULL);
-	  after = delay_insn;
+	  add_insn_after (delay_jump_insn, trial, NULL);
+	  after = delay_jump_insn;
 	  for (i = 1; i < pat->len (); i++)
 	    after = emit_copy_of_insn_after (pat->insn (i), after);
-	  delete_scheduled_jump (delay_insn);
+	  delete_scheduled_jump (delay_jump_insn);
 	  continue;
 	}
 
@@ -3515,14 +3522,14 @@ relax_delay_slots (rtx_insn *first)
 	 this jump and point it to the target of the second jump.  We cannot
 	 do this for annulled jumps, though.  Again, don't convert a jump to
 	 a RETURN here.  */
-      if (! INSN_ANNULLED_BRANCH_P (delay_insn)
-	  && any_condjump_p (delay_insn)
+      if (! INSN_ANNULLED_BRANCH_P (delay_jump_insn)
+	  && any_condjump_p (delay_jump_insn)
 	  && next && simplejump_or_return_p (next)
 	  && next_active_insn (target_label) == next_active_insn (next)
 	  && no_labels_between_p (insn, next))
 	{
 	  rtx label = JUMP_LABEL (next);
-	  rtx old_label = JUMP_LABEL (delay_insn);
+	  rtx old_label = JUMP_LABEL (delay_jump_insn);
 
 	  if (ANY_RETURN_P (label))
 	    label = find_end_label (label);
@@ -3530,7 +3537,8 @@ relax_delay_slots (rtx_insn *first)
 	  /* find_end_label can generate a new label. Check this first.  */
 	  if (label
 	      && no_labels_between_p (insn, next)
-	      && redirect_with_delay_slots_safe_p (delay_insn, label, insn))
+	      && redirect_with_delay_slots_safe_p (delay_jump_insn,
+						   label, insn))
 	    {
 	      /* Be careful how we do this to avoid deleting code or labels
 		 that are momentarily dead.  See similar optimization in
@@ -3538,7 +3546,7 @@ relax_delay_slots (rtx_insn *first)
 	      if (old_label)
 		++LABEL_NUSES (old_label);
 
-	      if (invert_jump (delay_insn, label, 1))
+	      if (invert_jump (delay_jump_insn, label, 1))
 		{
 		  int i;
 
@@ -3585,7 +3593,7 @@ static void
 make_return_insns (rtx_insn *first)
 {
   rtx_insn *insn;
-  rtx_insn *jump_insn;
+  rtx_jump_insn *jump_insn;
   rtx real_return_label = function_return_label;
   rtx real_simple_return_label = function_simple_return_label;
   int slots, i;
@@ -3645,7 +3653,7 @@ make_return_insns (rtx_insn *first)
       else
 	continue;
 
-      jump_insn = pat->insn (0);
+      jump_insn = as_a <rtx_jump_insn *> (pat->insn (0));
 
       /* If we can't make the jump into a RETURN, try to redirect it to the best
 	 RETURN and go on to the next insn.  */
@@ -3783,7 +3791,7 @@ dbr_schedule (rtx_insn *first)
 	  && !ANY_RETURN_P (JUMP_LABEL (insn))
 	  && ((target = skip_consecutive_labels (JUMP_LABEL (insn)))
 	      != JUMP_LABEL (insn)))
-	redirect_jump (insn, target, 1);
+	redirect_jump (as_a <rtx_jump_insn *> (insn), target, 1);
     }
 
   init_resource_info (epilogue_insn);
diff --git a/gcc/rtl.h b/gcc/rtl.h
index 12052b8..f236fa0 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -2701,9 +2701,9 @@ extern void decide_function_section (tree);
 extern rtx_insn *emit_insn_before (rtx, rtx);
 extern rtx_insn *emit_insn_before_noloc (rtx, rtx_insn *, basic_block);
 extern rtx_insn *emit_insn_before_setloc (rtx, rtx_insn *, int);
-extern rtx_insn *emit_jump_insn_before (rtx, rtx);
-extern rtx_insn *emit_jump_insn_before_noloc (rtx, rtx_insn *);
-extern rtx_insn *emit_jump_insn_before_setloc (rtx, rtx_insn *, int);
+extern rtx_jump_insn *emit_jump_insn_before (rtx, rtx);
+extern rtx_jump_insn *emit_jump_insn_before_noloc (rtx, rtx_insn *);
+extern rtx_jump_insn *emit_jump_insn_before_setloc (rtx, rtx_insn *, int);
 extern rtx_insn *emit_call_insn_before (rtx, rtx_insn *);
 extern rtx_insn *emit_call_insn_before_noloc (rtx, rtx_insn *);
 extern rtx_insn *emit_call_insn_before_setloc (rtx, rtx_insn *, int);
@@ -2711,14 +2711,14 @@ extern rtx_insn *emit_debug_insn_before (rtx, rtx_insn *);
 extern rtx_insn *emit_debug_insn_before_noloc (rtx, rtx);
 extern rtx_insn *emit_debug_insn_before_setloc (rtx, rtx, int);
 extern rtx_barrier *emit_barrier_before (rtx);
-extern rtx_code_label *emit_label_before (rtx_code_label *, rtx_insn *);
+extern rtx_code_label *emit_label_before (rtx, rtx_insn *);
 extern rtx_note *emit_note_before (enum insn_note, rtx_insn *);
 extern rtx_insn *emit_insn_after (rtx, rtx);
 extern rtx_insn *emit_insn_after_noloc (rtx, rtx, basic_block);
 extern rtx_insn *emit_insn_after_setloc (rtx, rtx, int);
-extern rtx_insn *emit_jump_insn_after (rtx, rtx);
-extern rtx_insn *emit_jump_insn_after_noloc (rtx, rtx);
-extern rtx_insn *emit_jump_insn_after_setloc (rtx, rtx, int);
+extern rtx_jump_insn *emit_jump_insn_after (rtx, rtx);
+extern rtx_jump_insn *emit_jump_insn_after_noloc (rtx, rtx);
+extern rtx_jump_insn *emit_jump_insn_after_setloc (rtx, rtx, int);
 extern rtx_insn *emit_call_insn_after (rtx, rtx);
 extern rtx_insn *emit_call_insn_after_noloc (rtx, rtx);
 extern rtx_insn *emit_call_insn_after_setloc (rtx, rtx, int);
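
For readers following the patch, the as_a <rtx_jump_insn *> casts above
use GCC's checked-cast idiom from gcc/is-a.h.  A simplified,
self-contained sketch of that behavior (the mini hierarchy and the
predicate below are invented for illustration, not GCC's actual
classes):

  #include <cassert>
  #include <cstddef>

  /* Stand-ins for rtx_def, rtx_insn and rtx_jump_insn.  */
  struct insn { int code; };
  struct jump_insn : insn {};

  /* Stand-in for a JUMP_P-style predicate.  */
  static bool
  is_jump (const insn *x)
  {
    return x->code == 1;
  }

  /* Like as_a <jump_insn *>: assert the dynamic type, then return the
     pointer with the stronger static type.  */
  static jump_insn *
  as_jump (insn *x)
  {
    assert (is_jump (x));
    return static_cast <jump_insn *> (x);
  }

  /* Like safe_as_a <jump_insn *>: the same, but NULL passes through.  */
  static jump_insn *
  safe_as_jump (insn *x)
  {
    return x ? as_jump (x) : NULL;
  }

The check runs once, at the cast; from then on the stronger type travels
with the pointer, which is what lets prototypes such as
emit_jump_insn_before above return rtx_jump_insn * rather than plain
rtx_insn *.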

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-05-11 20:41               ` Mikhail Maltsev
@ 2015-05-11 21:21                 ` Joseph Myers
  2015-05-12 20:26                 ` Jeff Law
  1 sibling, 0 replies; 21+ messages in thread
From: Joseph Myers @ 2015-05-11 21:21 UTC (permalink / raw)
  To: Mikhail Maltsev; +Cc: Jeff Law, richard.sandiford, gcc-patches

On Mon, 11 May 2015, Mikhail Maltsev wrote:

> In general, is there a recommended set of targets that covers most
> conditionally compiled code? Also, the GCC Wiki mentions some automated

See contrib/config-list.mk (note that some of those targets may have 
pre-existing build failures, and note that you need to start with a 
current trunk native compiler so that --enable-werror-always works; don't 
try to build all those cross compilers using an older GCC).

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-05-11 20:41               ` Mikhail Maltsev
  2015-05-11 21:21                 ` Joseph Myers
@ 2015-05-12 20:26                 ` Jeff Law
  1 sibling, 0 replies; 21+ messages in thread
From: Jeff Law @ 2015-05-12 20:26 UTC (permalink / raw)
  To: Mikhail Maltsev, richard.sandiford, Trevor Saunders; +Cc: gcc-patches

On 05/11/2015 02:41 PM, Mikhail Maltsev wrote:
> On 09.05.2015 0:54, Jeff Law wrote:
>>
>> Both patches are approved.  Please install onto the trunk.
>>
>> jeff
>>
>
> Sorry for the delay. When I started to work on this task, I wrote that I
> would test the patches on a couple of other platforms (not just x86).
> Probably I should have done that earlier, because I missed a couple of
> important details that could break the build. Fortunately, I did several
> tests before merging into trunk, and I think I need some advice on
> testing (or maybe some reworking).
It happens.  This kind of problem is part of what Trevor's patches are 
improving for us.

For many years, the preferred style of coding in GCC was to create 
target macros, then conditionalize code based on those macros.  That 
results in a lot of code in GCC that is rarely actually compiled.
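
As a sketch of that pattern (the macro and function here are invented
for illustration, not a real GCC target macro):

  /* Only targets whose headers define HAVE_FANCY_FEATURE ever compile
     this function; every other target's build skips it entirely.  */
  #ifdef HAVE_FANCY_FEATURE
  static int
  fancy_feature_cost (int insn_count)
  {
    /* A refactoring bug in here stays invisible until someone builds
       a target that defines HAVE_FANCY_FEATURE.  */
    return insn_count * 2;
  }
  #endif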

>
> In general, is there a recommended set of targets that covers most
> conditionally compiled code? Also, the GCC Wiki mentions some automated
> test services and a compile farm. Is it possible to use them to test a
> patch on many targets?
There's a makefile fragment in contrib that builds a large number of 
targets; you might find it helpful.  Of course, without some baseline 
to compare against, it's of less value.


>
> Finally, I could try to break the patch into smaller pieces, though I
> don't know if it's worth the effort.
I doubt it's worth the effort at this point.


>
> P.S. Bootstrapped/regtested on x86_64-unknown-linux-gnu {,-m32}
> (C,C++,lto,objc,fortran,go), cross-compiled and regtested (C and C++
> testsuites) on sh-elf, mips-elf, powerpc-eabisim and arm-eabi simulators.
This seems like a reasonable set of targets, especially if you could 
add one cc0 target (h8/300, v850, m68k come to mind as candidates).  I 
also doubt you need to do the full testing with simulators for this 
work.  I'd think that bootstrapping one target, then just building the 
cross tools for the others, would be fine.

jeff

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses
  2015-05-08 21:54             ` Jeff Law
  2015-05-11 20:41               ` Mikhail Maltsev
@ 2015-06-06  5:51               ` Mikhail Maltsev
  1 sibling, 0 replies; 21+ messages in thread
From: Mikhail Maltsev @ 2015-06-06  5:51 UTC (permalink / raw)
  To: Jeff Law, gcc-patches, richard.sandiford

[-- Attachment #1: Type: text/plain, Size: 1360 bytes --]

09.05.2015 1:54, Jeff Law wrote:
> On 05/04/2015 02:18 PM, Mikhail Maltsev wrote:
[snip]
>> I'm trying to continue, and the next patch (peep_split.patch,
>> peep_split.cl) addresses the same task in some of the generated code
>> (namely, the gen_peephole2_* and gen_split_* series of functions).
> And that looks good.  If it passes bootstrap and regression testing,
> then it's good to go too.
> 
>>
>>> If you're going to continue this work, you should probably get
>>> write-after-approval access so that you can commit your own approved
>>> changes.
>> Is it OK to mention you as a maintainer who can approve my request for
>> write access?
> Yes, absolutely.  If you haven't already done so, go ahead and get this
> going because...
> 
> Both patches are approved.  Please install onto the trunk.
> 
> jeff
> 

Though this patch was approved about a month ago, and I have since spent
some time fixing the first patch related to the rtx class hierarchy, I
suppose that it is still OK to apply it without additional approval.

I rebased the patch, and it required a ~1-line change (which is rather
obvious). I also performed complete testing (bootstrapped and regtested
on x86_64-linux multilib, checked the build of the targets in
contrib/config-list.mk, and ran regtests on several simulators: sh, mips
and arm).

Committed to trunk as r224183.

-- 
Regards,
    Mikhail Maltsev

[-- Attachment #2: peep_split2.patch --]
[-- Type: text/plain, Size: 10571 bytes --]

diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index c388eb5..5c8d6c4 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,21 @@
+2015-06-06  Mikhail Maltsev  <maltsevm@gmail.com>
+
+	* combine.c (combine_split_insns): Remove cast.
+	* config/bfin/bfin.c (hwloop_fail): Add cast in try_split call.
+	* config/sh/sh.c (sh_try_split_insn_simple): Remove cast.
+	* config/sh/sh_treg_combine.cc (sh_treg_combine::execute): Add cast.
+	* emit-rtl.c (try_split): Promote type of trial argument to rtx_insn.
+	* genemit.c (gen_split): Change return type of generated functions to
+	rtx_insn.
+	* genrecog.c (get_failure_return): Use NULL instead of NULL_RTX.
+	(print_subroutine_start): Promote rtx to rtx_insn in gen_split_* and
+	gen_peephole2_* functions.
+	(print_subroutine, main): Likewise.
+	* recog.c (peephole2_optimize): Remove cast.
+	(peep2_next_insn): Promote return type to rtx_insn.
+	* recog.h (peep2_next_insn): Fix prototype.
+	* rtl.h (try_split, split_insns): Likewise.
+
 2015-06-05  Kaz Kojima  <kkojima@gcc.gnu.org>
 
 	PR target/66410
diff --git a/gcc/combine.c b/gcc/combine.c
index 01f43b1..8a9ab7a 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -554,7 +554,7 @@ combine_split_insns (rtx pattern, rtx_insn *insn)
   rtx_insn *ret;
   unsigned int nregs;
 
-  ret = safe_as_a <rtx_insn *> (split_insns (pattern, insn));
+  ret = split_insns (pattern, insn);
   nregs = max_reg_num ();
   if (nregs > reg_stat.length ())
     reg_stat.safe_grow_cleared (nregs);
diff --git a/gcc/config/bfin/bfin.c b/gcc/config/bfin/bfin.c
index 914a024..7b570cd 100644
--- a/gcc/config/bfin/bfin.c
+++ b/gcc/config/bfin/bfin.c
@@ -3877,7 +3877,7 @@ hwloop_fail (hwloop_info loop)
   else
     {
       splitting_loops = 1;  
-      try_split (PATTERN (insn), insn, 1);
+      try_split (PATTERN (insn), safe_as_a <rtx_insn *> (insn), 1);
       splitting_loops = 0;
     }
 }
diff --git a/gcc/config/sh/sh.c b/gcc/config/sh/sh.c
index d77154c..3b63014 100644
--- a/gcc/config/sh/sh.c
+++ b/gcc/config/sh/sh.c
@@ -14236,7 +14236,7 @@ sh_try_split_insn_simple (rtx_insn* i, rtx_insn* curr_insn, int n = 0)
       fprintf (dump_file, "\n");
     }
 
-  rtx_insn* seq = safe_as_a<rtx_insn*> (split_insns (PATTERN (i), curr_insn));
+  rtx_insn* seq = split_insns (PATTERN (i), curr_insn);
 
   if (seq == NULL)
     return std::make_pair (i, i);
diff --git a/gcc/config/sh/sh_treg_combine.cc b/gcc/config/sh/sh_treg_combine.cc
index 02e13e8..c09a4c3 100644
--- a/gcc/config/sh/sh_treg_combine.cc
+++ b/gcc/config/sh/sh_treg_combine.cc
@@ -1612,7 +1612,7 @@ sh_treg_combine::execute (function *fun)
 	log_msg ("trying to split insn:\n");
 	log_insn (*i);
 	log_msg ("\n");
-	try_split (PATTERN (*i), *i, 0);
+	try_split (PATTERN (*i), safe_as_a <rtx_insn *> (*i), 0);
       }
 
   m_touched_insns.clear ();
diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index e632710..7bb2c77 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -3653,9 +3653,8 @@ mark_label_nuses (rtx x)
    returns TRIAL.  If the insn to be returned can be split, it will be.  */
 
 rtx_insn *
-try_split (rtx pat, rtx uncast_trial, int last)
+try_split (rtx pat, rtx_insn *trial, int last)
 {
-  rtx_insn *trial = as_a <rtx_insn *> (uncast_trial);
   rtx_insn *before = PREV_INSN (trial);
   rtx_insn *after = NEXT_INSN (trial);
   rtx note;
@@ -3674,7 +3673,7 @@ try_split (rtx pat, rtx uncast_trial, int last)
     split_branch_probability = XINT (note, 0);
   probability = split_branch_probability;
 
-  seq = safe_as_a <rtx_insn *> (split_insns (pat, trial));
+  seq = split_insns (pat, trial);
 
   split_branch_probability = -1;
 
diff --git a/gcc/genemit.c b/gcc/genemit.c
index 3f5dd82..e5b39fd 100644
--- a/gcc/genemit.c
+++ b/gcc/genemit.c
@@ -568,15 +568,17 @@ gen_split (rtx split)
   /* Output the prototype, function name and argument declarations.  */
   if (GET_CODE (split) == DEFINE_PEEPHOLE2)
     {
-      printf ("extern rtx gen_%s_%d (rtx_insn *, rtx *);\n",
+      printf ("extern rtx_insn *gen_%s_%d (rtx_insn *, rtx *);\n",
 	      name, insn_code_number);
-      printf ("rtx\ngen_%s_%d (rtx_insn *curr_insn ATTRIBUTE_UNUSED, rtx *operands%s)\n",
+      printf ("rtx_insn *\ngen_%s_%d (rtx_insn *curr_insn ATTRIBUTE_UNUSED, rtx *operands%s)\n",
 	      name, insn_code_number, unused);
     }
   else
     {
-      printf ("extern rtx gen_split_%d (rtx_insn *, rtx *);\n", insn_code_number);
-      printf ("rtx\ngen_split_%d (rtx_insn *curr_insn ATTRIBUTE_UNUSED, rtx *operands%s)\n",
+      printf ("extern rtx_insn *gen_split_%d (rtx_insn *, rtx *);\n",
+	      insn_code_number);
+      printf ("rtx_insn *\ngen_split_%d "
+	      "(rtx_insn *curr_insn ATTRIBUTE_UNUSED, rtx *operands%s)\n",
 	      insn_code_number, unused);
     }
   printf ("{\n");
@@ -584,7 +586,7 @@ gen_split (rtx split)
   /* Declare all local variables.  */
   for (i = 0; i < stats.num_operand_vars; i++)
     printf ("  rtx operand%d;\n", i);
-  printf ("  rtx _val = 0;\n");
+  printf ("  rtx_insn *_val = NULL;\n");
 
   if (GET_CODE (split) == DEFINE_PEEPHOLE2)
     output_peephole2_scratches (split);
diff --git a/gcc/genrecog.c b/gcc/genrecog.c
index 4b6dee6..217eb50 100644
--- a/gcc/genrecog.c
+++ b/gcc/genrecog.c
@@ -4307,7 +4307,7 @@ get_failure_return (routine_type type)
 
     case SPLIT:
     case PEEPHOLE2:
-      return "NULL_RTX";
+      return "NULL";
     }
   gcc_unreachable ();
 }
@@ -5061,7 +5061,7 @@ print_subroutine_start (output_state *os, state *s, position *root)
   if (os->type == SUBPATTERN || os->type == RECOG)
     printf ("  int res ATTRIBUTE_UNUSED;\n");
   else
-    printf ("  rtx res ATTRIBUTE_UNUSED;\n");
+    printf ("  rtx_insn *res ATTRIBUTE_UNUSED;\n");
 }
 
 /* Output the definition of pattern routine ROUTINE.  */
@@ -5111,7 +5111,7 @@ print_pattern (output_state *os, pattern_routine *routine)
 static void
 print_subroutine (output_state *os, state *s, int proc_id)
 {
-  /* For now, the top-level functions take a plain "rtx", and perform a
+  /* For now, the top-level "recog" takes a plain "rtx", and performs a
      checked cast to "rtx_insn *" for use throughout the rest of the
      function and the code it calls.  */
   const char *insn_param
@@ -5134,29 +5134,31 @@ print_subroutine (output_state *os, state *s, int proc_id)
 
     case SPLIT:
       if (proc_id)
-	printf ("static rtx\nsplit_%d", proc_id);
+	printf ("static rtx_insn *\nsplit_%d", proc_id);
       else
-	printf ("rtx\nsplit_insns");
-      printf (" (rtx x1 ATTRIBUTE_UNUSED, %s ATTRIBUTE_UNUSED)\n",
-	      insn_param);
+	printf ("rtx_insn *\nsplit_insns");
+      printf (" (rtx x1 ATTRIBUTE_UNUSED, rtx_insn *insn ATTRIBUTE_UNUSED)\n");
       break;
 
     case PEEPHOLE2:
       if (proc_id)
-	printf ("static rtx\npeephole2_%d", proc_id);
+	printf ("static rtx_insn *\npeephole2_%d", proc_id);
       else
-	printf ("rtx\npeephole2_insns");
+	printf ("rtx_insn *\npeephole2_insns");
       printf (" (rtx x1 ATTRIBUTE_UNUSED,\n"
-	      "\t%s ATTRIBUTE_UNUSED,\n"
-	      "\tint *pmatch_len_ ATTRIBUTE_UNUSED)\n", insn_param);
+	      "\trtx_insn *insn ATTRIBUTE_UNUSED,\n"
+	      "\tint *pmatch_len_ ATTRIBUTE_UNUSED)\n");
       break;
     }
   print_subroutine_start (os, s, &root_pos);
   if (proc_id == 0)
     {
       printf ("  recog_data.insn = NULL;\n");
-      printf ("  rtx_insn *insn ATTRIBUTE_UNUSED;\n");
-      printf ("  insn = safe_as_a <rtx_insn *> (uncast_insn);\n");
+      if (os->type == RECOG)
+	{
+	  printf ("  rtx_insn *insn ATTRIBUTE_UNUSED;\n");
+	  printf ("  insn = safe_as_a <rtx_insn *> (uncast_insn);\n");
+	}
     }
   print_state (os, s, 2, true);
   printf ("}\n");
@@ -5323,7 +5325,7 @@ main (int argc, char **argv)
 
 	  /* Declare the gen_split routine that we'll call if the
 	     pattern matches.  The definition comes from insn-emit.c.  */
-	  printf ("extern rtx gen_split_%d (rtx_insn *, rtx *);\n",
+	  printf ("extern rtx_insn *gen_split_%d (rtx_insn *, rtx *);\n",
 		  next_insn_code);
 	  break;
 
@@ -5335,7 +5337,7 @@ main (int argc, char **argv)
 
 	  /* Declare the gen_peephole2 routine that we'll call if the
 	     pattern matches.  The definition comes from insn-emit.c.  */
-	  printf ("extern rtx gen_peephole2_%d (rtx_insn *, rtx *);\n",
+	  printf ("extern rtx_insn *gen_peephole2_%d (rtx_insn *, rtx *);\n",
 		  next_insn_code);
 	  break;
 
diff --git a/gcc/recog.c b/gcc/recog.c
index ace0e9b..b1b9c22 100644
--- a/gcc/recog.c
+++ b/gcc/recog.c
@@ -3080,7 +3080,7 @@ peep2_buf_position (int n)
    does not exist.  Used by the recognizer to find the next insn to match
    in a multi-insn pattern.  */
 
-rtx
+rtx_insn *
 peep2_next_insn (int n)
 {
   gcc_assert (n <= peep2_current_count);
@@ -3653,8 +3653,7 @@ peephole2_optimize (void)
 
 	  /* Match the peephole.  */
 	  head = peep2_insn_data[peep2_current].insn;
-	  attempt = safe_as_a <rtx_insn *> (
-		      peephole2_insns (PATTERN (head), head, &match_len));
+	  attempt = peephole2_insns (PATTERN (head), head, &match_len);
 	  if (attempt != NULL)
 	    {
 	      rtx_insn *last = peep2_attempt (bb, head, match_len, attempt);
diff --git a/gcc/recog.h b/gcc/recog.h
index d97f4df..ce931eb 100644
--- a/gcc/recog.h
+++ b/gcc/recog.h
@@ -139,14 +139,14 @@ extern void preprocess_constraints (int, int, const char **,
 				    operand_alternative *);
 extern const operand_alternative *preprocess_insn_constraints (int);
 extern void preprocess_constraints (rtx_insn *);
-extern rtx peep2_next_insn (int);
+extern rtx_insn *peep2_next_insn (int);
 extern int peep2_regno_dead_p (int, int);
 extern int peep2_reg_dead_p (int, rtx);
 #ifdef CLEAR_HARD_REG_SET
 extern rtx peep2_find_free_register (int, int, const char *,
 				     machine_mode, HARD_REG_SET *);
 #endif
-extern rtx peephole2_insns (rtx, rtx, int *);
+extern rtx_insn *peephole2_insns (rtx, rtx_insn *, int *);
 
 extern int store_data_bypass_p (rtx_insn *, rtx_insn *);
 extern int if_test_bypass_p (rtx_insn *, rtx_insn *);
diff --git a/gcc/rtl.h b/gcc/rtl.h
index 863bfd4..2c190ec 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -2831,11 +2831,11 @@ extern rtx_insn *delete_related_insns (rtx);
 extern rtx *find_constant_term_loc (rtx *);
 
 /* In emit-rtl.c  */
-extern rtx_insn *try_split (rtx, rtx, int);
+extern rtx_insn *try_split (rtx, rtx_insn *, int);
 extern int split_branch_probability;
 
-/* In unknown file  */
-extern rtx split_insns (rtx, rtx);
+/* In insn-recog.c (generated by genrecog).  */
+extern rtx_insn *split_insns (rtx, rtx_insn *);
 
 /* In simplify-rtx.c  */
 extern rtx simplify_const_unary_operation (enum rtx_code, machine_mode,

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2015-06-06  5:49 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-03-31  4:38 [PATCH, RFC]: Next stage1, refactoring: propagating rtx subclasses Mikhail Maltsev
2015-03-31 15:52 ` Trevor Saunders
2015-04-02 21:13 ` Jeff Law
2015-04-25 11:49 ` Richard Sandiford
2015-04-27 16:38   ` Jeff Law
2015-04-27 16:57     ` Richard Sandiford
2015-04-27 20:01   ` Mikhail Maltsev
2015-04-28 13:50     ` Richard Sandiford
2015-04-28 17:12       ` Jeff Law
2015-04-29  8:02       ` Mikhail Maltsev
2015-04-30  3:54         ` Jeff Law
2015-04-30  5:46         ` Jeff Law
2015-05-04 20:32           ` Mikhail Maltsev
2015-05-04 21:22             ` Trevor Saunders
2015-05-09  5:49             ` Trevor Saunders
     [not found]           ` <5547D40F.6010802@gmail.com>
2015-05-08 21:54             ` Jeff Law
2015-05-11 20:41               ` Mikhail Maltsev
2015-05-11 21:21                 ` Joseph Myers
2015-05-12 20:26                 ` Jeff Law
2015-06-06  5:51               ` Mikhail Maltsev
2015-04-28 23:55     ` Jeff Law

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).