* [PATCH] Remove the postincrement queue
@ 2004-07-08 14:55 Paolo Bonzini
  2004-07-08 21:15 ` Richard Henderson
  0 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2004-07-08 14:55 UTC (permalink / raw)
  To: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 1050 bytes --]

As mentioned a few days ago, a very nice thing that we can do with 
tree-ssa is to eliminate all the messy handling of postincrements and 
the pending postincrement chain, together with an RTL code that I had 
only just learnt about, QUEUED. :-)

~700 lines go away, along with many comments like "the caller should 
have done so" and "the callee will do so", which is quite nice.

Unfortunately (but quite understandably), va_arg expanders are heavy 
users of {pre,post}{in,de}crements, which means that this patch also 
has ramifications in backend code.  The obvious workaround is to expand 
preincrements and predecrements manually, and for postincrements to use 
(x + N) - N; of course the real solution would be to convert these 
backends to gimplifying va_arg, since the gimplifiers may still use the 
now-banned tree codes, but that is well above my abilities.
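
Just to make that trick concrete, here is a rough, hypothetical sketch 
(not taken from the patch) of what a backend va_arg expander can do 
instead of handing a POSTINCREMENT_EXPR to the expander; the function 
name and locals are made up, while build, build_int_2, 
int_size_in_bytes and expand_expr are the usual tree/expander entry 
points:

  /* Sketch only: step VALIST past an argument of TYPE and return the
     address of the argument itself, computed as (x + N) - N.  */
  static rtx
  example_va_arg (tree valist, tree type)
  {
    HOST_WIDE_INT rsize = int_size_in_bytes (type);
    tree incr = build_int_2 (rsize, 0);
    tree t, addr;

    /* Preincrement: valist = valist + rsize, expanded immediately
       instead of being queued.  */
    t = build (MODIFY_EXPR, TREE_TYPE (valist), valist,
               build (PLUS_EXPR, TREE_TYPE (valist), valist, incr));
    TREE_SIDE_EFFECTS (t) = 1;
    expand_expr (t, const0_rtx, VOIDmode, 0);

    /* The argument itself lives at the old value of valist, which we
       recover by subtracting the increment again.  */
    addr = build (MINUS_EXPR, TREE_TYPE (valist), valist, incr);
    return expand_expr (addr, NULL_RTX, Pmode, 0);
  }

(The preincrement half can equally be done with the two-argument 
expand_increment (var, amount) that this patch exports from expr.c.)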

This patch was bootstrapped and regtested with all languages on i686-pc-linux-gnu.  I 
also built cc1 for c4x-elf, fr30-elf, iq2000-elf, mn10300-elf, m32r-elf, 
mips-elf, hppa-linux, i860-sysv4, s390-linux.

Ok for mainline?

Paolo


[-- Attachment #2: gcc-eliminate-queue.patch --]
[-- Type: text/plain, Size: 81621 bytes --]

2004-07-08  Paolo Bonzini  <bonzini@gnu.org>

	* expr.c (enqueue_insn, finish_expr_for_function,
	protect_from_queue, queued_subexp_p, mark_queue,
	emit_insns_enqueued_after_mark, emit_queue): Remove.
	(expand_increment): Accept source and destination,
	only expand preincrement.
	(store_constructor): Adjust parameters to expand_increment.
	(expand_expr_real_1 <case PREINCREMENT_EXPR,
	case PREDECREMENT_EXPR, case POSTINCREMENT_EXPR,
	case POSTDECREMENT_EXPR>): Abort.
	* expr.h (QUEUED_VAR, QUEUED_INSN, QUEUED_COPY,
	QUEUED_BODY, QUEUED_NEXT, finish_expr_for_function,
	protect_from_queue, emit_queue, queued_subexp_p): Remove.
	* function.h (pending_chain, x_pending_chain): Remove.
	* rtl.def (QUEUED): Remove.

	* emit-rtl.c (copy_insn_1, copy_most_rtx,
	set_used_flags, verify_rtx_sharing): Remove references to QUEUED.
	* genattrtab.c (attr_copy_rtx, clear_struct_flag,
	encode_units_mask): Likewise.
	* local-alloc.c (equiv_init_varies_p): Likewise.
	* rtl.c (copy_rtx): Likewise.
	* rtlanal.c (rtx_unstable_p, rtx_varies_p): Likewise.
	* simplify-rtx.c (simplify_gen_subreg): Likewise.
	* config/mn10300/mn10300.c (legitimate_pic_operand_p): Likewise.

	* builtins.c (expand_builtin, expand_builtin_apply,
	expand_builtin_mathfn, expand_builtin_mathfn_2,
	expand_builtin_mathfn_3, expand_builtin_setjmp_setup):
	Remove calls to emit_queue and protect_from_queue.
	* c-typeck.c (c_expand_asm_operands): Likewise.
	* calls.c (expand_call, precompute_arguments,
	precompute_register_parameters, rtx_for_function_call,
	store_one_arg): Likewise.
	* dojump.c (do_compare_and_jump, do_jump): Likewise.
	* explow.c (memory_address): Likewise.
	* expmed.c (clear_by_pieces_1, clear_storage,
	clear_storage_via_libcall, emit_group_load,
	emit_group_store, emit_store_flag,
	expand_expr_real_1, store_by_pieces,
	store_constructor, store_expr, try_casesi,
	try_tablejump): Likewise.
	* function.c (expand_pending_sizes): Likewise.
	* optabs.c (emit_cmp_and_jump_insns,
	emit_conditional_add, emit_conditional_move,
	expand_fix, expand_float, prepare_cmp_insn): Likewise.
	* stmt.c (emit_case_bit_tests,
	expand_asm_expr, expand_computed_goto,
	expand_decl_init, expand_end_case_type,
	expand_end_stmt_expr, expand_expr_stmt_value,
	expand_return, expand_start_case,
	optimize_tail_recursion): Likewise.
	* config/c4x/c4x.c (c4x_expand_builtin): Likewise.
	* config/s390/s390.c (s390_expand_cmpmem): Likewise.

	* config/c4x/c4x.c (c4x_va_arg): Do not pass
	increments/decrements to the expander.
	* config/fr30/fr30.c (fr30_pass_by_reference,
	fr30_pass_by_value): Likewise.
	* config/i860/i860.c (i860_va_arg): Likewise.
	* config/iq2000/iq2000.c (iq2000_va_arg): Likewise.
	* config/m32r/m32r.c (m32r_va_arg): Likewise.
	* config/mips/mips.c (mips_va_arg): Likewise.
	* config/mn10300/mn10300.c (mn10300_va_arg): Likewise.
	* config/pa/pa.c (hppa_va_arg): Likewise.

cp/ChangeLog:
2004-07-03  Paolo Bonzini  <bonzini@gnu.org>

	* typeck.c (c_expand_asm_operands): Do not call emit_queue.

Index: builtins.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/builtins.c,v
retrieving revision 1.348
diff -u -r1.348 builtins.c
--- builtins.c	8 Jul 2004 03:40:29 -0000	1.348
+++ builtins.c	8 Jul 2004 14:27:39 -0000
@@ -525,8 +525,6 @@
 
   buf_addr = force_reg (Pmode, force_operand (buf_addr, NULL_RTX));
 
-  emit_queue ();
-
   /* We store the frame pointer and the address of receiver_label in
      the buffer and use the rest of it for the stack save area, which
      is machine-dependent.  */
@@ -960,9 +958,8 @@
 	}
       emit_insn (gen_prefetch (op0, op1, op2));
     }
-  else
 #endif
-    op0 = protect_from_queue (op0, 0);
+
   /* Don't do anything with direct references to volatile memory, but
      generate code to handle other side effects.  */
   if (!MEM_P (op0) && side_effects_p (op0))
@@ -1271,9 +1268,6 @@
 				       incoming_args, 0, OPTAB_LIB_WIDEN);
 #endif
 
-  /* Perform postincrements before actually calling the function.  */
-  emit_queue ();
-
   /* Push a new argument block and copy the arguments.  Do not allow
      the (potential) memcpy call below to interfere with our stack
      manipulations.  */
@@ -1777,7 +1771,6 @@
 
       op0 = expand_expr (arg, subtarget, VOIDmode, 0);
 
-      emit_queue ();
       start_sequence ();
 
       /* Compute into TARGET.
@@ -1932,7 +1925,6 @@
   op0 = expand_expr (arg0, subtarget, VOIDmode, 0);
   op1 = expand_expr (arg1, 0, VOIDmode, 0);
 
-  emit_queue ();
   start_sequence ();
 
   /* Compute into TARGET.
@@ -2037,7 +2029,6 @@
 
       op0 = expand_expr (arg, subtarget, VOIDmode, 0);
 
-      emit_queue ();
       start_sequence ();
 
       /* Compute into TARGET.
@@ -5694,9 +5685,6 @@
   enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl);
   enum machine_mode target_mode = TYPE_MODE (TREE_TYPE (exp));
 
-  /* Perform postincrements before expanding builtin functions.  */
-  emit_queue ();
-
   if (DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_MD)
     return targetm.expand_builtin (exp, target, subtarget, mode, ignore);
 
Index: c-typeck.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/c-typeck.c,v
retrieving revision 1.336
diff -u -r1.336 c-typeck.c
--- c-typeck.c	4 Jul 2004 17:28:55 -0000	1.336
+++ c-typeck.c	8 Jul 2004 14:27:40 -0000
@@ -6242,9 +6242,6 @@
 	    readonly_error (o[i], "modification by `asm'");
 	}
     }
-
-  /* Those MODIFY_EXPRs could do autoincrements.  */
-  emit_queue ();
 }
 \f
 /* Generate a goto statement to LABEL.  */
Index: calls.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/calls.c,v
retrieving revision 1.345
diff -u -r1.345 calls.c
--- calls.c	8 Jul 2004 07:45:32 -0000	1.345
+++ calls.c	8 Jul 2004 14:27:40 -0000
@@ -163,8 +163,6 @@
 prepare_call_address (rtx funexp, rtx static_chain_value,
 		      rtx *call_fusage, int reg_parm_seen, int sibcallp)
 {
-  funexp = protect_from_queue (funexp, 0);
-
   /* Make a valid memory address and copy constants through pseudo-regs,
      but not for a constant address if -fno-function-cse.  */
   if (GET_CODE (funexp) != SYMBOL_REF)
@@ -662,10 +660,6 @@
 					 VOIDmode, 0);
 	    preserve_temp_slots (args[i].value);
 	    pop_temp_slots ();
-
-	    /* ANSI doesn't require a sequence point here,
-	       but PCC has one, so this will avoid some problems.  */
-	    emit_queue ();
 	  }
 
 	/* If the value is a non-legitimate constant, force it into a
@@ -1264,15 +1258,8 @@
       if (TREE_ADDRESSABLE (TREE_TYPE (args[i].tree_value)))
 	abort ();
 
-      args[i].value
-	= expand_expr (args[i].tree_value, NULL_RTX, VOIDmode, 0);
-
-      /* ANSI doesn't require a sequence point here,
-	 but PCC has one, so this will avoid some problems.  */
-      emit_queue ();
-
       args[i].initial_value = args[i].value
-	= protect_from_queue (args[i].value, 0);
+	= expand_expr (args[i].tree_value, NULL_RTX, VOIDmode, 0);
 
       mode = TYPE_MODE (TREE_TYPE (args[i].tree_value));
       if (mode != args[i].mode)
@@ -1447,7 +1434,6 @@
       push_temp_slots ();
       funexp = expand_expr (addr, NULL_RTX, VOIDmode, 0);
       pop_temp_slots ();	/* FUNEXP can't be BLKmode.  */
-      emit_queue ();
     }
   return funexp;
 }
@@ -2373,10 +2359,6 @@
 
       if (pass == 0)
 	{
-	  /* Emit any queued insns now; otherwise they would end up in
-             only one of the alternates.  */
-	  emit_queue ();
-
 	  /* State variables we need to save and restore between
 	     iterations.  */
 	  save_pending_stack_adjust = pending_stack_adjust;
@@ -2798,9 +2780,6 @@
       load_register_parameters (args, num_actuals, &call_fusage, flags,
 				pass == 0, &sibcall_failure);
 
-      /* Perform postincrements before actually calling the function.  */
-      emit_queue ();
-
       /* Save a pointer to the last insn before the call, so that we can
 	 later safely search backwards to find the CALL_INSN.  */
       before_call = get_last_insn ();
@@ -3558,9 +3537,6 @@
 	  || (GET_MODE (val) != mode && GET_MODE (val) != VOIDmode))
 	abort ();
 
-      /* There's no need to call protect_from_queue, because
-	 either emit_move_insn or emit_push_insn will do that.  */
-
       /* Make sure it is a reasonable operand for a move or push insn.  */
       if (!REG_P (val) && !MEM_P (val)
 	  && ! (CONSTANT_P (val) && LEGITIMATE_CONSTANT_P (val)))
@@ -4076,7 +4052,6 @@
    for a value of mode OUTMODE,
    with NARGS different arguments, passed as alternating rtx values
    and machine_modes to convert them to.
-   The rtx values should have been passed through protect_from_queue already.
 
    FN_TYPE should be LCT_NORMAL for `normal' calls, LCT_CONST for `const'
    calls, LCT_PURE for `pure' calls, LCT_CONST_MAKE_BLOCK for `const' calls
@@ -4448,10 +4423,6 @@
      be deferred during the rest of the arguments.  */
   NO_DEFER_POP;
 
-  /* ANSI doesn't require a sequence point here,
-     but PCC has one, so this will avoid some problems.  */
-  emit_queue ();
-
   /* Free any temporary slots made in processing this argument.  Show
      that we might have taken the address of something and pushed that
      as an operand.  */
Index: dojump.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/dojump.c,v
retrieving revision 1.23
diff -u -r1.23 dojump.c
--- dojump.c	8 Jul 2004 07:45:33 -0000	1.23
+++ dojump.c	8 Jul 2004 14:27:42 -0000
@@ -167,8 +167,6 @@
   tree type;
   enum machine_mode mode;
 
-  emit_queue ();
-
   switch (code)
     {
     case ERROR_MARK:
@@ -306,7 +304,6 @@
       preserve_temp_slots (NULL_RTX);
       free_temp_slots ();
       pop_temp_slots ();
-      emit_queue ();
       do_pending_stack_adjust ();
       do_jump (TREE_OPERAND (exp, 1), if_false_label, if_true_label);
       break;
@@ -619,8 +616,6 @@
         temp = copy_to_reg (temp);
 #endif
       do_pending_stack_adjust ();
-      /* Do any postincrements in the expression that was tested.  */
-      emit_queue ();
 
       if (GET_CODE (temp) == CONST_INT
           || (GET_CODE (temp) == CONST_DOUBLE && GET_MODE (temp) == VOIDmode)
@@ -1018,9 +1013,6 @@
     }
 #endif
 
-  /* Do any postincrements in the expression that was tested.  */
-  emit_queue ();
-
   do_compare_rtx_and_jump (op0, op1, code, unsignedp, mode,
                            ((mode == BLKmode)
                             ? expr_size (TREE_OPERAND (exp, 0)) : NULL_RTX),
Index: emit-rtl.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/emit-rtl.c,v
retrieving revision 1.401
diff -u -r1.401 emit-rtl.c
--- emit-rtl.c	4 Jul 2004 08:06:54 -0000	1.401
+++ emit-rtl.c	8 Jul 2004 14:27:42 -0000
@@ -2227,7 +2227,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -2408,7 +2407,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -2525,7 +2523,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -2651,7 +2648,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -2721,7 +2717,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -5005,7 +5000,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
Index: explow.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/explow.c,v
retrieving revision 1.131
diff -u -r1.131 explow.c
--- explow.c	8 Jul 2004 07:41:58 -0000	1.131
+++ explow.c	8 Jul 2004 14:27:42 -0000
@@ -443,14 +443,6 @@
   if (! cse_not_expected && CONSTANT_P (x) && CONSTANT_ADDRESS_P (x))
     x = force_reg (Pmode, x);
 
-  /* Accept a QUEUED that refers to a REG
-     even though that isn't a valid address.
-     On attempting to put this in an insn we will call protect_from_queue
-     which will turn it into a REG, which is valid.  */
-  else if (GET_CODE (x) == QUEUED
-      && REG_P (QUEUED_VAR (x)))
-    ;
-
   /* We get better cse by rejecting indirect addressing at this stage.
      Let the combiner create indirect addresses where appropriate.
      For now, generate the code so that the subexpressions useful to share
@@ -855,7 +847,6 @@
 adjust_stack (rtx adjust)
 {
   rtx temp;
-  adjust = protect_from_queue (adjust, 0);
 
   if (adjust == const0_rtx)
     return;
@@ -885,7 +876,6 @@
 anti_adjust_stack (rtx adjust)
 {
   rtx temp;
-  adjust = protect_from_queue (adjust, 0);
 
   if (adjust == const0_rtx)
     return;
Index: expmed.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/expmed.c,v
retrieving revision 1.178
diff -u -r1.178 expmed.c
--- expmed.c	8 Jul 2004 06:08:52 -0000	1.178
+++ expmed.c	8 Jul 2004 14:27:42 -0000
@@ -327,8 +327,6 @@
       op0 = SUBREG_REG (op0);
     }
 
-  value = protect_from_queue (value, 0);
-
   /* Use vec_extract patterns for extracting parts of vectors whenever
      available.  */
   if (VECTOR_MODE_P (GET_MODE (op0))
@@ -577,8 +575,6 @@
 	}
       offset = 0;
     }
-  else
-    op0 = protect_from_queue (op0, 1);
 
   /* If VALUE is a floating-point mode, access it as an integer of the
      corresponding size.  This can occur on a machine with 64 bit registers
@@ -747,9 +743,7 @@
    The field starts at position BITPOS within the byte.
     (If OP0 is a register, it may be a full word or a narrower mode,
      but BITPOS still counts within a full word,
-     which is significant on bigendian machines.)
-
-   Note that protect_from_queue has already been done on OP0 and VALUE.  */
+     which is significant on bigendian machines.)  */
 
 static void
 store_fixed_bit_field (rtx op0, unsigned HOST_WIDE_INT offset,
@@ -1352,8 +1346,6 @@
 	}
       offset = 0;
     }
-  else
-    op0 = protect_from_queue (str_rtx, 1);
 
   /* Now OFFSET is nonzero only for memory operands.  */
 
@@ -1470,8 +1462,7 @@
 	  bitsize_rtx = GEN_INT (bitsize);
 	  bitpos_rtx = GEN_INT (xbitpos);
 
-	  pat = gen_extzv (protect_from_queue (xtarget, 1),
-			   xop0, bitsize_rtx, bitpos_rtx);
+	  pat = gen_extzv (xtarget, xop0, bitsize_rtx, bitpos_rtx);
 	  if (pat)
 	    {
 	      emit_insn (pat);
@@ -1599,8 +1590,7 @@
 	  bitsize_rtx = GEN_INT (bitsize);
 	  bitpos_rtx = GEN_INT (xbitpos);
 
-	  pat = gen_extv (protect_from_queue (xtarget, 1),
-			  xop0, bitsize_rtx, bitpos_rtx);
+	  pat = gen_extv (xtarget, xop0, bitsize_rtx, bitpos_rtx);
 	  if (pat)
 	    {
 	      emit_insn (pat);
@@ -2506,10 +2496,6 @@
   int opno;
   enum machine_mode nmode;
 
-  /* op0 must be register to make mult_cost match the precomputed
-     shiftadd_cost array.  */
-  op0 = protect_from_queue (op0, 0);
-
   /* Avoid referencing memory over and over.
      For speed, but also for correctness when mem is volatile.  */
   if (MEM_P (op0))
@@ -4547,10 +4533,6 @@
   rtx last = get_last_insn ();
   rtx pattern, comparison;
 
-  /* ??? Ok to do this and then fail? */
-  op0 = protect_from_queue (op0, 0);
-  op1 = protect_from_queue (op1, 0);
-
   if (unsignedp)
     code = unsigned_condition (code);
 
@@ -4655,7 +4637,6 @@
 	 first.  */
       if (GET_MODE_SIZE (target_mode) > GET_MODE_SIZE (mode))
 	{
-	  op0 = protect_from_queue (op0, 0);
 	  op0 = convert_modes (target_mode, mode, op0, 0);
 	  mode = target_mode;
 	}
@@ -4687,13 +4668,8 @@
       insn_operand_predicate_fn pred;
 
       /* We think we may be able to do this with a scc insn.  Emit the
-	 comparison and then the scc insn.
-
-	 compare_from_rtx may call emit_queue, which would be deleted below
-	 if the scc insn fails.  So call it ourselves before setting LAST.
-	 Likewise for do_pending_stack_adjust.  */
+	 comparison and then the scc insn.  */
 
-      emit_queue ();
       do_pending_stack_adjust ();
       last = get_last_insn ();
 
@@ -4930,7 +4906,6 @@
 	tem = expand_unop (mode, ffs_optab, op0, subtarget, 1);
       else if (GET_MODE_SIZE (mode) < UNITS_PER_WORD)
 	{
-	  op0 = protect_from_queue (op0, 0);
 	  tem = convert_modes (word_mode, mode, op0, 1);
 	  mode = word_mode;
 	}
Index: expr.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/expr.c,v
retrieving revision 1.673
diff -u -r1.673 expr.c
--- expr.c	8 Jul 2004 07:45:35 -0000	1.673
+++ expr.c	8 Jul 2004 14:27:43 -0000
@@ -119,7 +119,6 @@
   int reverse;
 };
 
-static rtx enqueue_insn (rtx, rtx);
 static unsigned HOST_WIDE_INT move_by_pieces_ninsns (unsigned HOST_WIDE_INT,
 						     unsigned int);
 static void move_by_pieces_1 (rtx (*) (rtx, ...), enum machine_mode,
@@ -151,7 +150,6 @@
 static unsigned HOST_WIDE_INT highest_pow2_factor_for_target (tree, tree);
 
 static int is_aligning_offset (tree, tree);
-static rtx expand_increment (tree, int, int);
 static void expand_operands (tree, tree, rtx, rtx*, rtx*,
 			     enum expand_modifier);
 static rtx do_store_flag (tree, rtx, enum machine_mode, int);
@@ -311,215 +309,6 @@
 {
   cfun->expr = ggc_alloc_cleared (sizeof (struct expr_status));
 }
-
-/* Small sanity check that the queue is empty at the end of a function.  */
-
-void
-finish_expr_for_function (void)
-{
-  if (pending_chain)
-    abort ();
-}
-\f
-/* Manage the queue of increment instructions to be output
-   for POSTINCREMENT_EXPR expressions, etc.  */
-
-/* Queue up to increment (or change) VAR later.  BODY says how:
-   BODY should be the same thing you would pass to emit_insn
-   to increment right away.  It will go to emit_insn later on.
-
-   The value is a QUEUED expression to be used in place of VAR
-   where you want to guarantee the pre-incrementation value of VAR.  */
-
-static rtx
-enqueue_insn (rtx var, rtx body)
-{
-  pending_chain = gen_rtx_QUEUED (GET_MODE (var), var, NULL_RTX, NULL_RTX,
-				  body, pending_chain);
-  return pending_chain;
-}
-
-/* Use protect_from_queue to convert a QUEUED expression
-   into something that you can put immediately into an instruction.
-   If the queued incrementation has not happened yet,
-   protect_from_queue returns the variable itself.
-   If the incrementation has happened, protect_from_queue returns a temp
-   that contains a copy of the old value of the variable.
-
-   Any time an rtx which might possibly be a QUEUED is to be put
-   into an instruction, it must be passed through protect_from_queue first.
-   QUEUED expressions are not meaningful in instructions.
-
-   Do not pass a value through protect_from_queue and then hold
-   on to it for a while before putting it in an instruction!
-   If the queue is flushed in between, incorrect code will result.  */
-
-rtx
-protect_from_queue (rtx x, int modify)
-{
-  RTX_CODE code = GET_CODE (x);
-
-#if 0  /* A QUEUED can hang around after the queue is forced out.  */
-  /* Shortcut for most common case.  */
-  if (pending_chain == 0)
-    return x;
-#endif
-
-  if (code != QUEUED)
-    {
-      /* A special hack for read access to (MEM (QUEUED ...)) to facilitate
-	 use of autoincrement.  Make a copy of the contents of the memory
-	 location rather than a copy of the address, but not if the value is
-	 of mode BLKmode.  Don't modify X in place since it might be
-	 shared.  */
-      if (code == MEM && GET_MODE (x) != BLKmode
-	  && GET_CODE (XEXP (x, 0)) == QUEUED && !modify)
-	{
-	  rtx y = XEXP (x, 0);
-	  rtx new = replace_equiv_address_nv (x, QUEUED_VAR (y));
-
-	  if (QUEUED_INSN (y))
-	    {
-	      rtx temp = gen_reg_rtx (GET_MODE (x));
-
-	      emit_insn_before (gen_move_insn (temp, new),
-				QUEUED_INSN (y));
-	      return temp;
-	    }
-
-	  /* Copy the address into a pseudo, so that the returned value
-	     remains correct across calls to emit_queue.  */
-	  return replace_equiv_address (new, copy_to_reg (XEXP (new, 0)));
-	}
-
-      /* Otherwise, recursively protect the subexpressions of all
-	 the kinds of rtx's that can contain a QUEUED.  */
-      if (code == MEM)
-	{
-	  rtx tem = protect_from_queue (XEXP (x, 0), 0);
-	  if (tem != XEXP (x, 0))
-	    {
-	      x = copy_rtx (x);
-	      XEXP (x, 0) = tem;
-	    }
-	}
-      else if (code == PLUS || code == MULT)
-	{
-	  rtx new0 = protect_from_queue (XEXP (x, 0), 0);
-	  rtx new1 = protect_from_queue (XEXP (x, 1), 0);
-	  if (new0 != XEXP (x, 0) || new1 != XEXP (x, 1))
-	    {
-	      x = copy_rtx (x);
-	      XEXP (x, 0) = new0;
-	      XEXP (x, 1) = new1;
-	    }
-	}
-      return x;
-    }
-  /* If the increment has not happened, use the variable itself.  Copy it
-     into a new pseudo so that the value remains correct across calls to
-     emit_queue.  */
-  if (QUEUED_INSN (x) == 0)
-    return copy_to_reg (QUEUED_VAR (x));
-  /* If the increment has happened and a pre-increment copy exists,
-     use that copy.  */
-  if (QUEUED_COPY (x) != 0)
-    return QUEUED_COPY (x);
-  /* The increment has happened but we haven't set up a pre-increment copy.
-     Set one up now, and use it.  */
-  QUEUED_COPY (x) = gen_reg_rtx (GET_MODE (QUEUED_VAR (x)));
-  emit_insn_before (gen_move_insn (QUEUED_COPY (x), QUEUED_VAR (x)),
-		    QUEUED_INSN (x));
-  return QUEUED_COPY (x);
-}
-
-/* Return nonzero if X contains a QUEUED expression:
-   if it contains anything that will be altered by a queued increment.
-   We handle only combinations of MEM, PLUS, MINUS and MULT operators
-   since memory addresses generally contain only those.  */
-
-int
-queued_subexp_p (rtx x)
-{
-  enum rtx_code code = GET_CODE (x);
-  switch (code)
-    {
-    case QUEUED:
-      return 1;
-    case MEM:
-      return queued_subexp_p (XEXP (x, 0));
-    case MULT:
-    case PLUS:
-    case MINUS:
-      return (queued_subexp_p (XEXP (x, 0))
-	      || queued_subexp_p (XEXP (x, 1)));
-    default:
-      return 0;
-    }
-}
-
-/* Retrieve a mark on the queue.  */
-  
-static rtx
-mark_queue (void)
-{
-  return pending_chain;
-}
-
-/* Perform all the pending incrementations that have been enqueued
-   after MARK was retrieved.  If MARK is null, perform all the
-   pending incrementations.  */
-
-static void
-emit_insns_enqueued_after_mark (rtx mark)
-{
-  rtx p;
-
-  /* The marked incrementation may have been emitted in the meantime
-     through a call to emit_queue.  In this case, the mark is not valid
-     anymore so do nothing.  */
-  if (mark && ! QUEUED_BODY (mark))
-    return;
-
-  while ((p = pending_chain) != mark)
-    {
-      rtx body = QUEUED_BODY (p);
-
-      switch (GET_CODE (body))
-	{
-	case INSN:
-	case JUMP_INSN:
-	case CALL_INSN:
-	case CODE_LABEL:
-	case BARRIER:
-	case NOTE:
-	  QUEUED_INSN (p) = body;
-	  emit_insn (body);
-	  break;
-
-#ifdef ENABLE_CHECKING
-	case SEQUENCE:
-	  abort ();
-	  break;
-#endif
-
-	default:
-	  QUEUED_INSN (p) = emit_insn (body);
-	  break;
-	}
-
-      QUEUED_BODY (p) = 0;
-      pending_chain = QUEUED_NEXT (p);
-    }
-}
-
-/* Perform all the pending incrementations.  */
-
-void
-emit_queue (void)
-{
-  emit_insns_enqueued_after_mark (NULL_RTX);
-}
 \f
 /* Copy data from FROM to TO, where the machine modes are not the same.
    Both modes may be integer, or both may be floating.
@@ -540,8 +329,6 @@
   enum rtx_code equiv_code = (unsignedp < 0 ? UNKNOWN
 			      : (unsignedp ? ZERO_EXTEND : SIGN_EXTEND));
 
-  to = protect_from_queue (to, 1);
-  from = protect_from_queue (from, 0);
 
   if (to_real != from_real)
     abort ();
@@ -898,10 +685,7 @@
    Both X and MODE may be floating, or both integer.
    UNSIGNEDP is nonzero if X is an unsigned value.
    This can be done by referring to a part of X in place
-   or by copying to a new temporary with conversion.
-
-   This function *must not* call protect_from_queue
-   except when putting X into an insn (in which case convert_move does it).  */
+   or by copying to a new temporary with conversion.  */
 
 rtx
 convert_to_mode (enum machine_mode mode, rtx x, int unsignedp)
@@ -917,10 +701,7 @@
    This can be done by referring to a part of X in place
    or by copying to a new temporary with conversion.
 
-   You can give VOIDmode for OLDMODE, if you are sure X has a nonvoid mode.
-
-   This function *must not* call protect_from_queue
-   except when putting X into an insn (in which case convert_move does it).  */
+   You can give VOIDmode for OLDMODE, if you are sure X has a nonvoid mode.  */
 
 rtx
 convert_modes (enum machine_mode mode, enum machine_mode oldmode, rtx x, int unsignedp)
@@ -1039,8 +820,7 @@
 }
 
 /* Generate several move instructions to copy LEN bytes from block FROM to
-   block TO.  (These are MEM rtx's with BLKmode).  The caller must pass FROM
-   and TO through protect_from_queue before calling.
+   block TO.  (These are MEM rtx's with BLKmode).
 
    If PUSH_ROUNDING is defined and TO is NULL, emit_single_push_insn is
    used to push FROM to the stack.
@@ -1341,10 +1121,6 @@
 
   align = MIN (MEM_ALIGN (x), MEM_ALIGN (y));
 
-  x = protect_from_queue (x, 1);
-  y = protect_from_queue (y, 0);
-  size = protect_from_queue (size, 0);
-
   if (!MEM_P (x))
     abort ();
   if (!MEM_P (y))
@@ -1514,24 +1290,9 @@
   enum machine_mode size_mode;
   rtx retval;
 
-  /* DST, SRC, or SIZE may have been passed through protect_from_queue.
-
-     It is unsafe to save the value generated by protect_from_queue and reuse
-     it later.  Consider what happens if emit_queue is called before the
-     return value from protect_from_queue is used.
-
-     Expansion of the CALL_EXPR below will call emit_queue before we are
-     finished emitting RTL for argument setup.  So if we are not careful we
-     could get the wrong value for an argument.
-
-     To avoid this problem we go ahead and emit code to copy the addresses of
-     DST and SRC and SIZE into new pseudos.
-
-     Note this is not strictly needed for library calls since they do not call
-     emit_queue before loading their arguments.  However, we may need to have
-     library calls call emit_queue in the future since failing to do so could
-     cause problems for targets which define SMALL_REGISTER_CLASSES and pass
-     arguments in registers.  */
+  /* Emit code to copy the addresses of DST and SRC and SIZE into new
+     pseudos.  We can then place those new pseudos into a VAR_DECL and
+     use them later.  */
 
   dst_addr = copy_to_mode_reg (Pmode, XEXP (dst, 0));
   src_addr = copy_to_mode_reg (Pmode, XEXP (src, 0));
@@ -1927,8 +1688,6 @@
 				build_int_2 (shift, 0), tmps[i], 0);
     }
 
-  emit_queue ();
-
   /* Copy the extracted pieces into the proper (probable) hard regs.  */
   for (i = start; i < XVECLEN (dst, 0); i++)
     emit_move_insn (XEXP (XVECEXP (dst, 0, i), 0), tmps[i]);
@@ -1983,7 +1742,6 @@
       tmps[i] = gen_reg_rtx (GET_MODE (reg));
       emit_move_insn (tmps[i], reg);
     }
-  emit_queue ();
 
   /* If we won't be storing directly into memory, protect the real destination
      from strange tricks we might play.  */
@@ -2077,8 +1835,6 @@
 			 mode, tmps[i], ssize);
     }
 
-  emit_queue ();
-
   /* Copy from the pseudo into the (probable) hard reg.  */
   if (orig_dst != dst)
     emit_move_insn (orig_dst, dst);
@@ -2325,7 +2081,6 @@
 
   if (! STORE_BY_PIECES_P (len, align))
     abort ();
-  to = protect_from_queue (to, 1);
   data.constfun = constfun;
   data.constfundata = constfundata;
   data.len = len;
@@ -2363,8 +2118,7 @@
 }
 
 /* Generate several move instructions to clear LEN bytes of block TO.  (A MEM
-   rtx with BLKmode).  The caller must pass TO through protect_from_queue
-   before calling. ALIGN is maximum alignment we can assume.  */
+   rtx with BLKmode).  ALIGN is maximum alignment we can assume.  */
 
 static void
 clear_by_pieces (rtx to, unsigned HOST_WIDE_INT len, unsigned int align)
@@ -2394,8 +2148,7 @@
 
 /* Subroutine of clear_by_pieces and store_by_pieces.
    Generate several move instructions to store LEN bytes of block TO.  (A MEM
-   rtx with BLKmode).  The caller must pass TO through protect_from_queue
-   before calling.  ALIGN is maximum alignment we can assume.  */
+   rtx with BLKmode).  ALIGN is maximum alignment we can assume.  */
 
 static void
 store_by_pieces_1 (struct store_by_pieces *data ATTRIBUTE_UNUSED,
@@ -2535,9 +2288,6 @@
     emit_move_insn (object, CONST0_RTX (GET_MODE (object)));
   else
     {
-      object = protect_from_queue (object, 1);
-      size = protect_from_queue (size, 0);
-
       if (size == const0_rtx)
 	;
       else if (GET_CODE (size) == CONST_INT
@@ -2618,24 +2368,8 @@
   enum machine_mode size_mode;
   rtx retval;
 
-  /* OBJECT or SIZE may have been passed through protect_from_queue.
-
-     It is unsafe to save the value generated by protect_from_queue
-     and reuse it later.  Consider what happens if emit_queue is
-     called before the return value from protect_from_queue is used.
-
-     Expansion of the CALL_EXPR below will call emit_queue before
-     we are finished emitting RTL for argument setup.  So if we are
-     not careful we could get the wrong value for an argument.
-
-     To avoid this problem we go ahead and emit code to copy OBJECT
-     and SIZE into new pseudos.
-
-     Note this is not strictly needed for library calls since they
-     do not call emit_queue before loading their arguments.  However,
-     we may need to have library calls call emit_queue in the future
-     since failing to do so could cause problems for targets which
-     define SMALL_REGISTER_CLASSES and pass arguments in registers.  */
+  /* Emit code to copy OBJECT and SIZE into new pseudos.  We can then
+     place those new pseudos into a VAR_DECL and use them later.  */
 
   object = copy_to_mode_reg (Pmode, XEXP (object, 0));
 
@@ -2739,9 +2473,6 @@
   rtx y_cst = NULL_RTX;
   rtx last_insn, set;
 
-  x = protect_from_queue (x, 1);
-  y = protect_from_queue (y, 0);
-
   if (mode == BLKmode || (GET_MODE (y) != mode && GET_MODE (y) != VOIDmode))
     abort ();
 
@@ -3406,7 +3137,7 @@
     if (where_pad != none)
       where_pad = (where_pad == downward ? upward : downward);
 
-  xinner = x = protect_from_queue (x, 0);
+  xinner = x;
 
   if (mode == BLKmode)
     {
@@ -3850,8 +3581,6 @@
 		  && (bitsize != 1 || TREE_CODE (op1) != INTEGER_CST))
 		break;
 	      value = expand_expr (op1, NULL_RTX, VOIDmode, 0);
-	      value = protect_from_queue (value, 0);
-	      to_rtx = protect_from_queue (to_rtx, 1);
 	      binop = TREE_CODE (src) == PLUS_EXPR ? add_optab : sub_optab;
 	      if (bitsize == 1
 		  && count + bitsize != GET_MODE_BITSIZE (GET_MODE (to_rtx)))
@@ -4033,7 +3762,6 @@
 {
   rtx temp;
   rtx alt_rtl = NULL_RTX;
-  rtx mark = mark_queue ();
   int dont_return_target = 0;
   int dont_store_target = 0;
 
@@ -4053,7 +3781,6 @@
 	 part.  */
       expand_expr (TREE_OPERAND (exp, 0), const0_rtx, VOIDmode,
 		   want_value & 2 ? EXPAND_STACK_PARM : EXPAND_NORMAL);
-      emit_queue ();
       return store_expr (TREE_OPERAND (exp, 1), target, want_value);
     }
   else if (TREE_CODE (exp) == COND_EXPR && GET_MODE (target) == BLKmode)
@@ -4065,47 +3792,19 @@
 
       rtx lab1 = gen_label_rtx (), lab2 = gen_label_rtx ();
 
-      emit_queue ();
-      target = protect_from_queue (target, 1);
-
       do_pending_stack_adjust ();
       NO_DEFER_POP;
       jumpifnot (TREE_OPERAND (exp, 0), lab1);
       store_expr (TREE_OPERAND (exp, 1), target, want_value & 2);
-      emit_queue ();
       emit_jump_insn (gen_jump (lab2));
       emit_barrier ();
       emit_label (lab1);
       store_expr (TREE_OPERAND (exp, 2), target, want_value & 2);
-      emit_queue ();
       emit_label (lab2);
       OK_DEFER_POP;
 
       return want_value & 1 ? target : NULL_RTX;
     }
-  else if (queued_subexp_p (target))
-    /* If target contains a postincrement, let's not risk
-       using it as the place to generate the rhs.  */
-    {
-      if (GET_MODE (target) != BLKmode && GET_MODE (target) != VOIDmode)
-	{
-	  /* Expand EXP into a new pseudo.  */
-	  temp = gen_reg_rtx (GET_MODE (target));
-	  temp = expand_expr (exp, temp, GET_MODE (target),
-			      (want_value & 2
-			       ? EXPAND_STACK_PARM : EXPAND_NORMAL));
-	}
-      else
-	temp = expand_expr (exp, NULL_RTX, GET_MODE (target),
-			    (want_value & 2
-			     ? EXPAND_STACK_PARM : EXPAND_NORMAL));
-
-      /* If target is volatile, ANSI requires accessing the value
-	 *from* the target, if it is accessed.  So make that happen.
-	 In no case return the target itself.  */
-      if (! MEM_VOLATILE_P (target) && (want_value & 1) != 0)
-	dont_return_target = 1;
-    }
   else if ((want_value & 1) != 0
 	   && MEM_P (target)
 	   && ! MEM_VOLATILE_P (target)
@@ -4273,9 +3972,6 @@
 	 bit-initialized.  */
       && expr_size (exp) != const0_rtx)
     {
-      emit_insns_enqueued_after_mark (mark);
-      target = protect_from_queue (target, 1);
-      temp = protect_from_queue (temp, 0);
       if (GET_MODE (temp) != GET_MODE (target)
 	  && GET_MODE (temp) != VOIDmode)
 	{
@@ -5031,7 +4727,6 @@
 
 		  /* Build the head of the loop.  */
 		  do_pending_stack_adjust ();
-		  emit_queue ();
 		  emit_label (loop_start);
 
 		  /* Assign value to element index.  */
@@ -5060,9 +4755,7 @@
 
 		  /* Update the loop counter, and jump to the head of
 		     the loop.  */
-		  expand_increment (build (PREINCREMENT_EXPR,
-					   TREE_TYPE (index),
-					   index, integer_one_node), 0, 0);
+		  expand_increment (index, integer_one_node);
 		  emit_jump (loop_start);
 
 		  /* Build the end of the loop.  */
@@ -8121,7 +7814,6 @@
 
     case COMPOUND_EXPR:
       expand_expr (TREE_OPERAND (exp, 0), const0_rtx, VOIDmode, 0);
-      emit_queue ();
       return expand_expr_real (TREE_OPERAND (exp, 1),
 			       (ignore ? const0_rtx : target),
 			       VOIDmode, modifier, alt_rtl);
@@ -8460,7 +8152,6 @@
 	    else
 	      expand_expr (TREE_OPERAND (exp, 1),
 			   ignore ? const0_rtx : NULL_RTX, VOIDmode, 0);
-	    emit_queue ();
 	    emit_jump_insn (gen_jump (op1));
 	    emit_barrier ();
 	    emit_label (op0);
@@ -8473,7 +8164,6 @@
 			   ignore ? const0_rtx : NULL_RTX, VOIDmode, 0);
 	  }
 
-	emit_queue ();
 	emit_label (op1);
 	OK_DEFER_POP;
 
@@ -8548,15 +8238,6 @@
 	expand_return (TREE_OPERAND (exp, 0));
       return const0_rtx;
 
-    case PREINCREMENT_EXPR:
-    case PREDECREMENT_EXPR:
-      return expand_increment (exp, 0, ignore);
-
-    case POSTINCREMENT_EXPR:
-    case POSTDECREMENT_EXPR:
-      /* Faster to treat as pre-increment if result is not used.  */
-      return expand_increment (exp, ! ignore, ignore);
-
     case ADDR_EXPR:
       if (modifier == EXPAND_STACK_PARM)
 	target = 0;
@@ -8587,10 +8268,6 @@
 	  if (ignore)
 	    return op0;
 
-	  /* Pass 1 for MODIFY, so that protect_from_queue doesn't get
-	     clever and returns a REG when given a MEM.  */
-	  op0 = protect_from_queue (op0, 1);
-
 	  /* We would like the object in memory.  If it is a constant, we can
 	     have it be statically allocated into memory.  For a non-constant,
 	     we need to allocate some memory and store the value into it.  */
@@ -8804,6 +8481,10 @@
     case FILTER_EXPR:
       return get_exception_filter (cfun);
 
+    case PREINCREMENT_EXPR:
+    case PREDECREMENT_EXPR:
+    case POSTINCREMENT_EXPR:
+    case POSTDECREMENT_EXPR:
     case FDESC_EXPR:
       /* Function descriptors are not valid except for as
 	 initialization constants, and should not be expanded.  */
@@ -9013,15 +8694,14 @@
    and return the RTX for the result.
    POST is 1 for postinc/decrements and 0 for preinc/decrements.  */
 
-static rtx
-expand_increment (tree exp, int post, int ignore)
+rtx
+expand_increment (tree incremented, tree amount)
 {
   rtx op0, op1;
   rtx temp, value;
-  tree incremented = TREE_OPERAND (exp, 0);
-  optab this_optab = add_optab;
+  optab this_optab;
   int icode;
-  enum machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum machine_mode mode = TYPE_MODE (TREE_TYPE (incremented));
   int op0_is_copy = 0;
   int single_insn = 0;
   /* 1 means we can't store into OP0 directly,
@@ -9031,18 +8711,7 @@
 
   /* Stabilize any component ref that might need to be
      evaluated more than once below.  */
-  if (!post
-      || TREE_CODE (incremented) == BIT_FIELD_REF
-      || (TREE_CODE (incremented) == COMPONENT_REF
-	  && (TREE_CODE (TREE_OPERAND (incremented, 0)) != INDIRECT_REF
-	      || DECL_BIT_FIELD (TREE_OPERAND (incremented, 1)))))
-    incremented = stabilize_reference (incremented);
-  /* Nested *INCREMENT_EXPRs can happen in C++.  We must force innermost
-     ones into save exprs so that they don't accidentally get evaluated
-     more than once by the code below.  */
-  if (TREE_CODE (incremented) == PREINCREMENT_EXPR
-      || TREE_CODE (incremented) == PREDECREMENT_EXPR)
-    incremented = save_expr (incremented);
+  incremented = stabilize_reference (incremented);
 
   /* Compute the operands as RTX.
      Note whether OP0 is the actual lvalue or a copy of it:
@@ -9060,156 +8729,66 @@
      Note that we can safely modify this SUBREG since it is know not to be
      shared (it was made by the expand_expr call above).  */
 
-  if (GET_CODE (op0) == SUBREG && SUBREG_PROMOTED_VAR_P (op0))
-    {
-      if (post)
-	SUBREG_REG (op0) = copy_to_reg (SUBREG_REG (op0));
-      else
-	bad_subreg = 1;
-    }
-  else if (GET_CODE (op0) == SUBREG
-	   && GET_MODE_BITSIZE (GET_MODE (op0)) < BITS_PER_WORD)
-    {
-      /* We cannot increment this SUBREG in place.  If we are
-	 post-incrementing, get a copy of the old value.  Otherwise,
-	 just mark that we cannot increment in place.  */
-      if (post)
-	op0 = copy_to_reg (op0);
-      else
-	bad_subreg = 1;
-    }
+  if (GET_CODE (op0) == SUBREG
+      && (SUBREG_PROMOTED_VAR_P (op0)
+	  || GET_MODE_BITSIZE (GET_MODE (op0)) < BITS_PER_WORD))
+    bad_subreg = 1;
 
   op0_is_copy = ((GET_CODE (op0) == SUBREG || REG_P (op0))
 		 && temp != get_last_insn ());
-  op1 = expand_expr (TREE_OPERAND (exp, 1), NULL_RTX, VOIDmode, 0);
-
-  /* Decide whether incrementing or decrementing.  */
-  if (TREE_CODE (exp) == POSTDECREMENT_EXPR
-      || TREE_CODE (exp) == PREDECREMENT_EXPR)
-    this_optab = sub_optab;
-
-  /* Convert decrement by a constant into a negative increment.  */
-  if (this_optab == sub_optab
-      && GET_CODE (op1) == CONST_INT)
-    {
-      op1 = GEN_INT (-INTVAL (op1));
-      this_optab = add_optab;
-    }
+  op1 = expand_expr (amount, NULL_RTX, VOIDmode, 0);
 
-  if (TYPE_TRAP_SIGNED (TREE_TYPE (exp)))
-    this_optab = this_optab == add_optab ? addv_optab : subv_optab;
+  this_optab = TYPE_TRAP_SIGNED (TREE_TYPE (incremented))
+               ? addv_optab : add_optab;
 
   /* For a preincrement, see if we can do this with a single instruction.  */
-  if (!post)
-    {
-      icode = (int) this_optab->handlers[(int) mode].insn_code;
-      if (icode != (int) CODE_FOR_nothing
-	  /* Make sure that OP0 is valid for operands 0 and 1
-	     of the insn we want to queue.  */
-	  && (*insn_data[icode].operand[0].predicate) (op0, mode)
-	  && (*insn_data[icode].operand[1].predicate) (op0, mode)
-	  && (*insn_data[icode].operand[2].predicate) (op1, mode))
-	single_insn = 1;
-    }
+  icode = (int) this_optab->handlers[(int) mode].insn_code;
+  if (icode != (int) CODE_FOR_nothing
+      /* Make sure that OP0 is valid for operands 0 and 1
+         of the insn we want to queue.  */
+      && (*insn_data[icode].operand[0].predicate) (op0, mode)
+      && (*insn_data[icode].operand[1].predicate) (op0, mode)
+      && (*insn_data[icode].operand[2].predicate) (op1, mode))
+    single_insn = 1;
 
   /* If OP0 is not the actual lvalue, but rather a copy in a register,
      then we cannot just increment OP0.  We must therefore contrive to
-     increment the original value.  Then, for postincrement, we can return
-     OP0 since it is a copy of the old value.  For preincrement, expand here
+     increment the original value.  For preincrement, expand here
      unless we can do it with a single insn.
 
      Likewise if storing directly into OP0 would clobber high bits
      we need to preserve (bad_subreg).  */
-  if (op0_is_copy || (!post && !single_insn) || bad_subreg)
+  if (op0_is_copy || !single_insn || bad_subreg)
     {
       /* This is the easiest way to increment the value wherever it is.
 	 Problems with multiple evaluation of INCREMENTED are prevented
-	 because either (1) it is a component_ref or preincrement,
-	 in which case it was stabilized above, or (2) it is an array_ref
-	 with constant index in an array in a register, which is
-	 safe to reevaluate.  */
-      tree newexp = build (((TREE_CODE (exp) == POSTDECREMENT_EXPR
-			     || TREE_CODE (exp) == PREDECREMENT_EXPR)
-			    ? MINUS_EXPR : PLUS_EXPR),
-			   TREE_TYPE (exp),
-			   incremented,
-			   TREE_OPERAND (exp, 1));
-
+	 because either (1) it is a component_ref, in which case it was
+         stabilized above, or (2) it is an array_ref with constant index
+         in an array in a register, which is safe to reevaluate.  */
+      tree newexp = build2 (PLUS_EXPR, TREE_TYPE (incremented),
+			    incremented, amount);
       while (TREE_CODE (incremented) == NOP_EXPR
 	     || TREE_CODE (incremented) == CONVERT_EXPR)
 	{
-	  newexp = convert (TREE_TYPE (incremented), newexp);
+	  newexp = fold_convert (TREE_TYPE (incremented), newexp);
 	  incremented = TREE_OPERAND (incremented, 0);
 	}
 
-      temp = expand_assignment (incremented, newexp, ! post && ! ignore);
-      return post ? op0 : temp;
+      return expand_assignment (incremented, newexp, 1);
     }
-
-  if (post)
+  else
     {
-      /* We have a true reference to the value in OP0.
-	 If there is an insn to add or subtract in this mode, queue it.
-	 Queuing the increment insn avoids the register shuffling
-	 that often results if we must increment now and first save
-	 the old value for subsequent use.  */
-
-#if 0  /* Turned off to avoid making extra insn for indexed memref.  */
-      op0 = stabilize (op0);
-#endif
-
-      icode = (int) this_optab->handlers[(int) mode].insn_code;
-      if (icode != (int) CODE_FOR_nothing
-	  /* Make sure that OP0 is valid for operands 0 and 1
-	     of the insn we want to queue.  */
-	  && (*insn_data[icode].operand[0].predicate) (op0, mode)
-	  && (*insn_data[icode].operand[1].predicate) (op0, mode))
-	{
-	  if (! (*insn_data[icode].operand[2].predicate) (op1, mode))
-	    op1 = force_reg (mode, op1);
-
-	  return enqueue_insn (op0, GEN_FCN (icode) (op0, op0, op1));
-	}
-      if (icode != (int) CODE_FOR_nothing && MEM_P (op0))
-	{
-	  rtx addr = (general_operand (XEXP (op0, 0), mode)
-		      ? force_reg (Pmode, XEXP (op0, 0))
-		      : copy_to_reg (XEXP (op0, 0)));
-	  rtx temp, result;
+      /* Increment however we can.  */
+      value = expand_binop (mode, this_optab, op0, op1, op0,
+		          TYPE_UNSIGNED (TREE_TYPE (incremented)),
+		          OPTAB_LIB_WIDEN);
+
+      /* Make sure the value is stored into OP0.  */
+      if (value != op0)
+        emit_move_insn (op0, value);
 
-	  op0 = replace_equiv_address (op0, addr);
-	  temp = force_reg (GET_MODE (op0), op0);
-	  if (! (*insn_data[icode].operand[2].predicate) (op1, mode))
-	    op1 = force_reg (mode, op1);
-
-	  /* The increment queue is LIFO, thus we have to `queue'
-	     the instructions in reverse order.  */
-	  enqueue_insn (op0, gen_move_insn (op0, temp));
-	  result = enqueue_insn (temp, GEN_FCN (icode) (temp, temp, op1));
-	  return result;
-	}
+      return op0;
     }
-
-  /* Preincrement, or we can't increment with one simple insn.  */
-  if (post)
-    /* Save a copy of the value before inc or dec, to return it later.  */
-    temp = value = copy_to_reg (op0);
-  else
-    /* Arrange to return the incremented value.  */
-    /* Copy the rtx because expand_binop will protect from the queue,
-       and the results of that would be invalid for us to return
-       if our caller does emit_queue before using our result.  */
-    temp = copy_rtx (value = op0);
-
-  /* Increment however we can.  */
-  op1 = expand_binop (mode, this_optab, value, op1, op0,
-		      TYPE_UNSIGNED (TREE_TYPE (exp)), OPTAB_LIB_WIDEN);
-
-  /* Make sure the value is stored into OP0.  */
-  if (op1 != op0)
-    emit_move_insn (op0, op1);
-
-  return temp;
 }
 \f
 /* Generate code to calculate EXP using a store-flag instruction
@@ -9419,9 +8998,7 @@
      because, if the emit_store_flag does anything it will succeed and
      OP0 and OP1 will not be used subsequently.  */
 
-  result = emit_store_flag (target, code,
-			    queued_subexp_p (op0) ? copy_rtx (op0) : op0,
-			    queued_subexp_p (op1) ? copy_rtx (op1) : op1,
+  result = emit_store_flag (target, code, op0, op1,
 			    operand_mode, unsignedp, 1);
 
   if (result)
@@ -9526,8 +9103,7 @@
 
       index = expand_expr (index_expr, NULL_RTX, VOIDmode, 0);
     }
-  emit_queue ();
-  index = protect_from_queue (index, 0);
+
   do_pending_stack_adjust ();
 
   op_mode = insn_data[(int) CODE_FOR_casesi].operand[0].mode;
@@ -9653,8 +9229,6 @@
 			    convert (index_type, index_expr),
 			    convert (index_type, minval)));
   index = expand_expr (index_expr, NULL_RTX, VOIDmode, 0);
-  emit_queue ();
-  index = protect_from_queue (index, 0);
   do_pending_stack_adjust ();
 
   do_tablejump (index, TYPE_MODE (index_type),
Index: expr.h
===================================================================
RCS file: /cvs/gcc/gcc/gcc/expr.h,v
retrieving revision 1.160
diff -u -r1.160 expr.h
--- expr.h	7 Jul 2004 19:23:58 -0000	1.160
+++ expr.h	8 Jul 2004 14:27:43 -0000
@@ -39,25 +39,6 @@
 #define BRANCH_COST 1
 #endif
 
-/* Macros to access the slots of a QUEUED rtx.
-   Here rather than in rtl.h because only the expansion pass
-   should ever encounter a QUEUED.  */
-
-/* The variable for which an increment is queued.  */
-#define QUEUED_VAR(P) XEXP (P, 0)
-/* If the increment has been emitted, this is the insn
-   that does the increment.  It is zero before the increment is emitted.
-   If more than one insn is emitted, this is the first insn.  */
-#define QUEUED_INSN(P) XEXP (P, 1)
-/* If a pre-increment copy has been generated, this is the copy
-   (it is a temporary reg).  Zero if no copy made yet.  */
-#define QUEUED_COPY(P) XEXP (P, 2)
-/* This is the body to use for the insn to do the increment.
-   It is used to emit the increment.  */
-#define QUEUED_BODY(P) XEXP (P, 3)
-/* Next QUEUED in the queue.  */
-#define QUEUED_NEXT(P) XEXP (P, 4)
-
 /* This is the 4th arg to `expand_expr'.
    EXPAND_STACK_PARM means we are possibly expanding a call param onto
    the stack.  Choosing a value of 2 isn't special;  It just allows
@@ -302,8 +283,7 @@
 
 /* Create but don't emit one rtl instruction to perform certain operations.
    Modes must match; operands must meet the operation's predicates.
-   Likewise for subtraction and for just copying.
-   These do not call protect_from_queue; caller must do so.  */
+   Likewise for subtraction and for just copying.  */
 extern rtx gen_add2_insn (rtx, rtx);
 extern rtx gen_add3_insn (rtx, rtx, rtx);
 extern rtx gen_sub2_insn (rtx, rtx);
@@ -320,6 +300,10 @@
 /* Generate code to indirectly jump to a location given in the rtx LOC.  */
 extern void emit_indirect_jump (rtx);
 
+/* Arguments VAR, AMOUNT: generate code to increment VAR by AMOUNT.
+   Return the place in which the incremented value is stored.  */
+extern rtx expand_increment (tree, tree);
+
 #include "insn-config.h"
 
 #ifdef HAVE_conditional_move
@@ -390,19 +374,6 @@
 /* This is run at the start of compiling a function.  */
 extern void init_expr (void);
 
-/* This is run at the end of compiling a function.  */
-extern void finish_expr_for_function (void);
-
-/* Use protect_from_queue to convert a QUEUED expression
-   into something that you can put immediately into an instruction.  */
-extern rtx protect_from_queue (rtx, int);
-
-/* Perform all the pending incrementations.  */
-extern void emit_queue (void);
-
-/* Tell if something has a queued subexpression.  */
-extern int queued_subexp_p (rtx);
-
 /* Emit some rtl insns to move data between rtx's, converting machine modes.
    Both modes must be floating or both fixed.  */
 extern void convert_move (rtx, rtx, int);
Index: function.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/function.c,v
retrieving revision 1.550
diff -u -r1.550 function.c
--- function.c	8 Jul 2004 07:45:39 -0000	1.550
+++ function.c	8 Jul 2004 14:27:43 -0000
@@ -4066,12 +4066,7 @@
 
   /* Evaluate now the sizes of any types declared among the arguments.  */
   for (tem = pending_sizes; tem; tem = TREE_CHAIN (tem))
-    {
-      expand_expr (TREE_VALUE (tem), const0_rtx, VOIDmode, 0);
-      /* Flush the queue in case this parameter declaration has
-	 side-effects.  */
-      emit_queue ();
-    }
+    expand_expr (TREE_VALUE (tem), const0_rtx, VOIDmode, 0);
 }
 
 /* Start the RTL for a new function, and set variables used for
@@ -4330,8 +4325,6 @@
 {
   rtx clobber_after;
 
-  finish_expr_for_function ();
-
   /* If arg_pointer_save_area was referenced only from a nested
      function, we will not have initialized it yet.  Do that now.  */
   if (arg_pointer_save_area && ! cfun->arg_pointer_save_area_init)
Index: function.h
===================================================================
RCS file: /cvs/gcc/gcc/gcc/function.h,v
retrieving revision 1.127
diff -u -r1.127 function.h
--- function.h	4 Jul 2004 08:06:54 -0000	1.127
+++ function.h	8 Jul 2004 14:27:43 -0000
@@ -147,9 +147,6 @@
 
   /* List of labels that must never be deleted.  */
   rtx x_forced_labels;
-
-  /* Postincrements that still need to be expanded.  */
-  rtx x_pending_chain;
 };
 
 #define pending_stack_adjust (cfun->expr->x_pending_stack_adjust)
@@ -157,7 +154,6 @@
 #define saveregs_value (cfun->expr->x_saveregs_value)
 #define apply_args_value (cfun->expr->x_apply_args_value)
 #define forced_labels (cfun->expr->x_forced_labels)
-#define pending_chain (cfun->expr->x_pending_chain)
 #define stack_pointer_delta (cfun->expr->x_stack_pointer_delta)
 
 /* This structure can save all the important global and static variables
Index: genattrtab.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/genattrtab.c,v
retrieving revision 1.147
diff -u -r1.147 genattrtab.c
--- genattrtab.c	8 Jul 2004 03:40:29 -0000	1.147
+++ genattrtab.c	8 Jul 2004 14:27:44 -0000
@@ -838,7 +838,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -2218,7 +2217,6 @@
       return attr_rtx (CONST_STRING, attr_printf (MAX_DIGITS, "%d", j));
 
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -4174,7 +4172,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
Index: local-alloc.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/local-alloc.c,v
retrieving revision 1.133
diff -u -r1.133 local-alloc.c
--- local-alloc.c	7 Jul 2004 19:23:59 -0000	1.133
+++ local-alloc.c	8 Jul 2004 14:27:44 -0000
@@ -520,9 +520,6 @@
     case MEM:
       return ! RTX_UNCHANGING_P (x) || equiv_init_varies_p (XEXP (x, 0));
 
-    case QUEUED:
-      return 1;
-
     case CONST:
     case CONST_INT:
     case CONST_DOUBLE:
Index: optabs.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/optabs.c,v
retrieving revision 1.227
diff -u -r1.227 optabs.c
--- optabs.c	7 Jul 2004 19:24:00 -0000	1.227
+++ optabs.c	8 Jul 2004 14:27:44 -0000
@@ -675,11 +675,6 @@
 
   class = GET_MODE_CLASS (mode);
 
-  op0 = protect_from_queue (op0, 0);
-  op1 = protect_from_queue (op1, 0);
-  if (target)
-    target = protect_from_queue (target, 1);
-
   if (flag_force_mem)
     {
       /* Load duplicate non-volatile operands once.  */
@@ -2170,20 +2165,12 @@
 
   class = GET_MODE_CLASS (mode);
 
-  op0 = protect_from_queue (op0, 0);
-
   if (flag_force_mem)
-    {
-      op0 = force_not_mem (op0);
-    }
+    op0 = force_not_mem (op0);
 
-  if (targ0)
-    targ0 = protect_from_queue (targ0, 1);
-  else
+  if (!targ0)
     targ0 = gen_reg_rtx (mode);
-  if (targ1)
-    targ1 = protect_from_queue (targ1, 1);
-  else
+  if (!targ1)
     targ1 = gen_reg_rtx (mode);
 
   /* Record where to go back to if we fail.  */
@@ -2274,9 +2261,6 @@
 
   class = GET_MODE_CLASS (mode);
 
-  op0 = protect_from_queue (op0, 0);
-  op1 = protect_from_queue (op1, 0);
-
   if (flag_force_mem)
     {
       op0 = force_not_mem (op0);
@@ -2293,13 +2277,9 @@
       && rtx_cost (op1, binoptab->code) > COSTS_N_INSNS (1))
     op1 = force_reg (mode, op1);
 
-  if (targ0)
-    targ0 = protect_from_queue (targ0, 1);
-  else
+  if (!targ0)
     targ0 = gen_reg_rtx (mode);
-  if (targ1)
-    targ1 = protect_from_queue (targ1, 1);
-  else
+  if (!targ1)
     targ1 = gen_reg_rtx (mode);
 
   /* Record where to go back to if we fail.  */
@@ -2502,15 +2482,8 @@
 
   class = GET_MODE_CLASS (mode);
 
-  op0 = protect_from_queue (op0, 0);
-
   if (flag_force_mem)
-    {
-      op0 = force_not_mem (op0);
-    }
-
-  if (target)
-    target = protect_from_queue (target, 1);
+    op0 = force_not_mem (op0);
 
   if (unoptab->handlers[(int) mode].insn_code != CODE_FOR_nothing)
     {
@@ -3039,18 +3012,11 @@
   if (submode == BLKmode)
     abort ();
 
-  op0 = protect_from_queue (op0, 0);
-
   if (flag_force_mem)
-    {
-      op0 = force_not_mem (op0);
-    }
+    op0 = force_not_mem (op0);
 
   last = get_last_insn ();
 
-  if (target)
-    target = protect_from_queue (target, 1);
-
   this_abs_optab = ! unsignedp && flag_trapv
                    && (GET_MODE_CLASS(mode) == MODE_INT)
                    ? absv_optab : abs_optab;
@@ -3225,9 +3191,7 @@
   enum machine_mode mode0 = insn_data[icode].operand[1].mode;
   rtx pat;
 
-  temp = target = protect_from_queue (target, 1);
-
-  op0 = protect_from_queue (op0, 0);
+  temp = target;
 
   /* Sign and zero extension from memory is often done specially on
      RISC machines, so forcing into a register here can pessimize
@@ -3709,11 +3673,6 @@
       if (size == 0)
 	abort ();
 
-      emit_queue ();
-      x = protect_from_queue (x, 0);
-      y = protect_from_queue (y, 0);
-      size = protect_from_queue (size, 0);
-
       /* Try to use a memory block compare insn - either cmpstr
 	 or cmpmem will do.  */
       for (cmp_mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
@@ -3818,8 +3777,6 @@
 prepare_operand (int icode, rtx x, int opnum, enum machine_mode mode,
 		 enum machine_mode wider_mode, int unsignedp)
 {
-  x = protect_from_queue (x, 0);
-
   if (mode != wider_mode)
     x = convert_modes (wider_mode, mode, x, unsignedp);
 
@@ -3945,7 +3902,6 @@
     op0 = force_reg (mode, op0);
 #endif
 
-  emit_queue ();
   if (unsignedp)
     comparison = unsigned_condition (comparison);
 
@@ -3972,8 +3928,8 @@
 {
   enum rtx_code comparison = *pcomparison;
   enum rtx_code swapped = swap_condition (comparison);
-  rtx x = protect_from_queue (*px, 0);
-  rtx y = protect_from_queue (*py, 0);
+  rtx x = *px;
+  rtx y = *py;
   enum machine_mode orig_mode = GET_MODE (x);
   enum machine_mode mode;
   rtx value, target, insns, equiv;
@@ -4164,18 +4120,11 @@
       op3 = force_not_mem (op3);
     }
 
-  if (target)
-    target = protect_from_queue (target, 1);
-  else
+  if (!target)
     target = gen_reg_rtx (mode);
 
   subtarget = target;
 
-  emit_queue ();
-
-  op2 = protect_from_queue (op2, 0);
-  op3 = protect_from_queue (op3, 0);
-
   /* If the insn doesn't accept these operands, put them in pseudos.  */
 
   if (! (*insn_data[icode].operand[0].predicate)
@@ -4305,23 +4254,16 @@
       op3 = force_not_mem (op3);
     }
 
-  if (target)
-    target = protect_from_queue (target, 1);
-  else
+  if (!target)
     target = gen_reg_rtx (mode);
 
-  subtarget = target;
-
-  emit_queue ();
-
-  op2 = protect_from_queue (op2, 0);
-  op3 = protect_from_queue (op3, 0);
-
   /* If the insn doesn't accept these operands, put them in pseudos.  */
 
   if (! (*insn_data[icode].operand[0].predicate)
-      (subtarget, insn_data[icode].operand[0].mode))
+      (target, insn_data[icode].operand[0].mode))
     subtarget = gen_reg_rtx (insn_data[icode].operand[0].mode);
+  else
+    subtarget = target;
 
   if (! (*insn_data[icode].operand[2].predicate)
       (op2, insn_data[icode].operand[2].mode))
@@ -4360,11 +4302,7 @@
 \f
 /* These functions attempt to generate an insn body, rather than
    emitting the insn, but if the gen function already emits them, we
-   make no attempt to turn them back into naked patterns.
-
-   They do not protect from queued increments,
-   because they may be used 1) in protect_from_queue itself
-   and 2) in other passes where there is no queue.  */
+   make no attempt to turn them back into naked patterns.  */
 
 /* Generate and return an insn body to add Y to X.  */
 
@@ -4621,9 +4559,6 @@
 
 	if (icode != CODE_FOR_nothing)
 	  {
-	    to = protect_from_queue (to, 1);
-	    from = protect_from_queue (from, 0);
-
 	    if (imode != GET_MODE (from))
 	      from = convert_to_mode (imode, from, unsignedp);
 
@@ -4647,11 +4582,6 @@
       rtx temp;
       REAL_VALUE_TYPE offset;
 
-      emit_queue ();
-
-      to = protect_from_queue (to, 1);
-      from = protect_from_queue (from, 0);
-
       if (flag_force_mem)
 	from = force_not_mem (from);
 
@@ -4759,9 +4689,6 @@
       rtx value;
       convert_optab tab = unsignedp ? ufloat_optab : sfloat_optab;
 
-      to = protect_from_queue (to, 1);
-      from = protect_from_queue (from, 0);
-
       if (GET_MODE_SIZE (GET_MODE (from)) < GET_MODE_SIZE (SImode))
 	from = convert_to_mode (SImode, from, unsignedp);
 
@@ -4827,9 +4754,6 @@
 
 	if (icode != CODE_FOR_nothing)
 	  {
-	    to = protect_from_queue (to, 1);
-	    from = protect_from_queue (from, 0);
-
 	    if (fmode != GET_MODE (from))
 	      from = convert_to_mode (fmode, from, 0);
 
@@ -4889,10 +4813,6 @@
 	  lab1 = gen_label_rtx ();
 	  lab2 = gen_label_rtx ();
 
-	  emit_queue ();
-	  to = protect_from_queue (to, 1);
-	  from = protect_from_queue (from, 0);
-
 	  if (flag_force_mem)
 	    from = force_not_mem (from);
 
@@ -4963,9 +4883,6 @@
       if (!libfunc)
 	abort ();
 
-      to = protect_from_queue (to, 1);
-      from = protect_from_queue (from, 0);
-
       if (flag_force_mem)
 	from = force_not_mem (from);
 
Index: rtl.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/rtl.c,v
retrieving revision 1.138
diff -u -r1.138 rtl.c
--- rtl.c	4 Jul 2004 08:06:54 -0000	1.138
+++ rtl.c	8 Jul 2004 14:27:44 -0000
@@ -214,7 +214,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
Index: rtl.def
===================================================================
RCS file: /cvs/gcc/gcc/gcc/rtl.def,v
retrieving revision 1.87
diff -u -r1.87 rtl.def
--- rtl.def	4 Jul 2004 08:06:54 -0000	1.87
+++ rtl.def	8 Jul 2004 14:27:44 -0000
@@ -906,24 +906,6 @@
    pretend to be looking at the entire value and comparing it.  */
 DEF_RTL_EXPR(CC0, "cc0", "", RTX_OBJ)
 
-/* =====================================================================
-   A QUEUED expression really points to a member of the queue of instructions
-   to be output later for postincrement/postdecrement.
-   QUEUED expressions never become part of instructions.
-   When a QUEUED expression would be put into an instruction,
-   instead either the incremented variable or a copy of its previous
-   value is used.
-   
-   Operands are:
-   0. the variable to be incremented (a REG rtx).
-   1. the incrementing instruction, or 0 if it hasn't been output yet.
-   2. A REG rtx for a copy of the old value of the variable, or 0 if none yet.
-   3. the body to use for the incrementing instruction
-   4. the next QUEUED expression in the queue.
-   ====================================================================== */
-
-DEF_RTL_EXPR(QUEUED, "queued", "eeeee", RTX_EXTRA)
-
 /* ----------------------------------------------------------------------
    Expressions for operators in an rtl pattern
    ---------------------------------------------------------------------- */
Index: rtlanal.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/rtlanal.c,v
retrieving revision 1.193
diff -u -r1.193 rtlanal.c
--- rtlanal.c	4 Jul 2004 08:06:54 -0000	1.193
+++ rtlanal.c	8 Jul 2004 14:27:45 -0000
@@ -83,9 +83,6 @@
     case MEM:
       return ! RTX_UNCHANGING_P (x) || rtx_unstable_p (XEXP (x, 0));
 
-    case QUEUED:
-      return 1;
-
     case CONST:
     case CONST_INT:
     case CONST_DOUBLE:
@@ -161,9 +158,6 @@
     case MEM:
       return ! RTX_UNCHANGING_P (x) || rtx_varies_p (XEXP (x, 0), for_alias);
 
-    case QUEUED:
-      return 1;
-
     case CONST:
     case CONST_INT:
     case CONST_DOUBLE:
Index: simplify-rtx.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/simplify-rtx.c,v
retrieving revision 1.197
diff -u -r1.197 simplify-rtx.c
--- simplify-rtx.c	1 Jul 2004 13:09:39 -0000	1.197
+++ simplify-rtx.c	8 Jul 2004 14:27:45 -0000
@@ -3802,9 +3802,6 @@
       || byte >= GET_MODE_SIZE (innermode))
     abort ();
 
-  if (GET_CODE (op) == QUEUED)
-    return NULL_RTX;
-
   new = simplify_subreg (outermode, op, innermode, byte);
   if (new)
     return new;
Index: stmt.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/stmt.c,v
retrieving revision 1.372
diff -u -r1.372 stmt.c
--- stmt.c	8 Jul 2004 07:45:45 -0000	1.372
+++ stmt.c	8 Jul 2004 14:27:45 -0000
@@ -430,7 +430,6 @@
 
   x = convert_memory_address (Pmode, x);
 
-  emit_queue ();
   do_pending_stack_adjust ();
   emit_indirect_jump (x);
 }
@@ -1060,7 +1059,7 @@
 	  if ((! allows_mem && MEM_P (op))
 	      || GET_CODE (op) == CONCAT)
 	    {
-	      real_output_rtx[i] = protect_from_queue (op, 1);
+	      real_output_rtx[i] = op;
 	      op = gen_reg_rtx (GET_MODE (op));
 	      if (is_inout)
 		emit_move_insn (op, real_output_rtx[i]);
@@ -1185,13 +1184,6 @@
 
   generating_concat_p = 0;
 
-  for (i = 0; i < ninputs - ninout; i++)
-    ASM_OPERANDS_INPUT (body, i)
-      = protect_from_queue (ASM_OPERANDS_INPUT (body, i), 0);
-
-  for (i = 0; i < noutputs; i++)
-    output_rtx[i] = protect_from_queue (output_rtx[i], 1);
-
   /* For in-out operands, copy output rtx to input rtx.  */
   for (i = 0; i < ninout; i++)
     {
@@ -1362,9 +1354,6 @@
 	  TREE_VALUE (tail) = o[i];
 	}
     }
-
-  /* Those MODIFY_EXPRs could do autoincrements.  */
-  emit_queue ();
 }
 
 /* A subroutine of expand_asm_operands.  Check that all operands have
@@ -1626,8 +1615,6 @@
 
   /* Free any temporaries used to evaluate this expression.  */
   free_temp_slots ();
-
-  emit_queue ();
 }
 
 /* Warn if EXP contains any computations whose results are not used.
@@ -2014,7 +2001,6 @@
   if (TREE_CODE (TREE_TYPE (TREE_TYPE (current_function_decl))) == VOID_TYPE)
     {
       expand_expr (retval, NULL_RTX, VOIDmode, 0);
-      emit_queue ();
       expand_null_return ();
       return;
     }
@@ -2146,7 +2132,6 @@
 	result_reg_mode = tmpmode;
       result_reg = gen_reg_rtx (result_reg_mode);
 
-      emit_queue ();
       for (i = 0; i < n_regs; i++)
 	emit_move_insn (operand_subword (result_reg, i, 0, result_reg_mode),
 			result_pseudos[i]);
@@ -2169,7 +2154,6 @@
       val = assign_temp (nt, 0, 0, 1);
       val = expand_expr (retval_rhs, val, GET_MODE (val), 0);
       val = force_not_mem (val);
-      emit_queue ();
       /* Return the calculated value.  */
       expand_value_return (shift_return_value (val));
     }
@@ -2177,7 +2161,6 @@
     {
       /* No hard reg used; calculate value into hard return reg.  */
       expand_expr (retval, const0_rtx, VOIDmode, 0);
-      emit_queue ();
       expand_value_return (result_rtl);
     }
 }
@@ -2694,13 +2677,11 @@
 	  || code == POINTER_TYPE || code == REFERENCE_TYPE)
 	expand_assignment (decl, convert (TREE_TYPE (decl), integer_zero_node),
 			   0);
-      emit_queue ();
     }
   else if (DECL_INITIAL (decl) && TREE_CODE (DECL_INITIAL (decl)) != TREE_LIST)
     {
       emit_line_note (DECL_SOURCE_LOCATION (decl));
       expand_assignment (decl, DECL_INITIAL (decl), 0);
-      emit_queue ();
     }
 
   /* Don't let the initialization count as "using" the variable.  */
@@ -2813,7 +2794,6 @@
   nesting_stack = thiscase;
 
   do_pending_stack_adjust ();
-  emit_queue ();
 
   /* Make sure case_stmt.start points to something that won't
      need any transformation before expand_end_case.  */
@@ -3293,8 +3273,6 @@
 			    convert (index_type, index_expr),
 			    convert (index_type, minval)));
   index = expand_expr (index_expr, NULL_RTX, VOIDmode, 0);
-  emit_queue ();
-  index = protect_from_queue (index, 0);
   do_pending_stack_adjust ();
 
   mode = TYPE_MODE (index_type);
@@ -3446,7 +3424,6 @@
       if (count == 0)
 	{
 	  expand_expr (index_expr, const0_rtx, VOIDmode, 0);
-	  emit_queue ();
 	  emit_jump (default_label);
 	}
 
@@ -3515,10 +3492,8 @@
 		  }
 	    }
 
-	  emit_queue ();
 	  do_pending_stack_adjust ();
 
-	  index = protect_from_queue (index, 0);
 	  if (MEM_P (index))
 	    index = copy_to_reg (index);
 	  if (GET_CODE (index) == CONST_INT
Index: config/c4x/c4x.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/c4x/c4x.c,v
retrieving revision 1.151
diff -u -r1.151 c4x.c
--- config/c4x/c4x.c	7 Jul 2004 19:24:11 -0000	1.151
+++ config/c4x/c4x.c	8 Jul 2004 14:27:46 -0000
@@ -725,13 +725,8 @@
 rtx
 c4x_va_arg (tree valist, tree type)
 {
-  tree t;
-
-  t = build (PREDECREMENT_EXPR, TREE_TYPE (valist), valist,
-	     build_int_2 (int_size_in_bytes (type), 0));
-  TREE_SIDE_EFFECTS (t) = 1;
-
-  return expand_expr (t, NULL_RTX, Pmode, EXPAND_NORMAL);
+  return expand_increment (valist,
+			   build_int_2 (-int_size_in_bytes (type), 0));
 }
 
 
@@ -4807,7 +4802,6 @@
     case C4X_BUILTIN_FIX:
       arg0 = TREE_VALUE (arglist);
       r0 = expand_expr (arg0, NULL_RTX, QFmode, 0);
-      r0 = protect_from_queue (r0, 0);
       if (! target || ! register_operand (target, QImode))
 	target = gen_reg_rtx (QImode);
       emit_insn (gen_fixqfqi_clobber (target, r0));
@@ -4816,7 +4810,6 @@
     case C4X_BUILTIN_FIX_ANSI:
       arg0 = TREE_VALUE (arglist);
       r0 = expand_expr (arg0, NULL_RTX, QFmode, 0);
-      r0 = protect_from_queue (r0, 0);
       if (! target || ! register_operand (target, QImode))
 	target = gen_reg_rtx (QImode);
       emit_insn (gen_fix_truncqfqi2 (target, r0));
@@ -4829,8 +4822,6 @@
       arg1 = TREE_VALUE (TREE_CHAIN (arglist));
       r0 = expand_expr (arg0, NULL_RTX, QImode, 0);
       r1 = expand_expr (arg1, NULL_RTX, QImode, 0);
-      r0 = protect_from_queue (r0, 0);
-      r1 = protect_from_queue (r1, 0);
       if (! target || ! register_operand (target, QImode))
 	target = gen_reg_rtx (QImode);
       emit_insn (gen_mulqi3_24_clobber (target, r0, r1));
@@ -4841,7 +4832,6 @@
 	break;
       arg0 = TREE_VALUE (arglist);
       r0 = expand_expr (arg0, NULL_RTX, QFmode, 0);
-      r0 = protect_from_queue (r0, 0);
       if (! target || ! register_operand (target, QFmode))
 	target = gen_reg_rtx (QFmode);
       emit_insn (gen_toieee (target, r0));
@@ -4852,7 +4842,6 @@
 	break;
       arg0 = TREE_VALUE (arglist);
       r0 = expand_expr (arg0, NULL_RTX, QFmode, 0);
-      r0 = protect_from_queue (r0, 0);
       if (register_operand (r0, QFmode))
 	{
 	  r1 = assign_stack_local (QFmode, GET_MODE_SIZE (QFmode), 0);
@@ -4869,7 +4858,6 @@
 	break;
       arg0 = TREE_VALUE (arglist);
       r0 = expand_expr (arg0, NULL_RTX, QFmode, 0);
-      r0 = protect_from_queue (r0, 0);
       if (! target || ! register_operand (target, QFmode))
 	target = gen_reg_rtx (QFmode);
       emit_insn (gen_rcpfqf_clobber (target, r0));
Index: config/fr30/fr30.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/fr30/fr30.c,v
retrieving revision 1.41
diff -u -r1.41 fr30.c
--- config/fr30/fr30.c	7 Jul 2004 09:27:55 -0000	1.41
+++ config/fr30/fr30.c	8 Jul 2004 14:27:46 -0000
@@ -722,12 +722,11 @@
   type_ptr     = build_pointer_type (type);
   type_ptr_ptr = build_pointer_type (type_ptr);
   
-  t = build (POSTINCREMENT_EXPR, va_list_type_node, valist, build_int_2 (UNITS_PER_WORD, 0));
-  TREE_SIDE_EFFECTS (t) = 1;
+  expand_increment (valist, build_int_2 (UNITS_PER_WORD, 0));
+
+  t = build2 (MINUS_EXPR, va_list_type_node, valist, build_int_2 (UNITS_PER_WORD, 0));
   t = build1 (NOP_EXPR, type_ptr_ptr, t);
-  TREE_SIDE_EFFECTS (t) = 1;
   t = build1 (INDIRECT_REF, type_ptr, t);
-  
   return expand_expr (t, NULL_RTX, Pmode, EXPAND_NORMAL);
 }
 
@@ -741,9 +740,8 @@
 
   if ((size % UNITS_PER_WORD) == 0)
     {
-      t = build (POSTINCREMENT_EXPR, va_list_type_node, valist, build_int_2 (size, 0));
-      TREE_SIDE_EFFECTS (t) = 1;
-      
+      expand_increment (valist, build_int_2 (size, 0));
+      t = build2 (MINUS_EXPR, va_list_type_node, valist, build_int_2 (size, 0));
       return expand_expr (t, NULL_RTX, Pmode, EXPAND_NORMAL);
     }
 
Index: config/i860/i860.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/i860/i860.c,v
retrieving revision 1.45
diff -u -r1.45 i860.c
--- config/i860/i860.c	7 Jul 2004 19:24:24 -0000	1.45
+++ config/i860/i860.c	8 Jul 2004 14:27:46 -0000
@@ -2043,10 +2043,7 @@
   emit_label (lab_over);
 
   /* Increment either the ireg_used or freg_used field.  */
-
-  u = build (PREINCREMENT_EXPR, TREE_TYPE (reg), reg, build_int_2 (n_reg, 0));
-  TREE_SIDE_EFFECTS (u) = 1;
-  expand_expr (u, const0_rtx, VOIDmode, EXPAND_NORMAL);
+  expand_increment (reg, build_int_2 (n_reg, 0));
 
   return addr_rtx;
 }
Index: config/iq2000/iq2000.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/iq2000/iq2000.c,v
retrieving revision 1.19
diff -u -r1.19 iq2000.c
--- config/iq2000/iq2000.c	4 Jul 2004 20:58:46 -0000	1.19
+++ config/iq2000/iq2000.c	8 Jul 2004 14:27:46 -0000
@@ -1629,15 +1629,12 @@
 	expand_expr (t, const0_rtx, VOIDmode, EXPAND_NORMAL);
       }
 
-    t = build (POSTINCREMENT_EXPR, TREE_TYPE (gpr), gpr,
-	       size_int (rsize));
-    r = expand_expr (t, addr_rtx, Pmode, EXPAND_NORMAL);
+    r = expand_increment (gpr, size_int (rsize));
+    t = build (MINUS_EXPR, TREE_TYPE (gpr), gpr, size_int (rsize));
+    r = expand_expr (t, r, Pmode, EXPAND_NORMAL);
     if (r != addr_rtx)
       emit_move_insn (addr_rtx, r);
 
-    /* Flush the POSTINCREMENT.  */
-    emit_queue();
-
     if (indirect)
       {
 	r = gen_rtx_MEM (Pmode, addr_rtx);
@@ -1688,7 +1685,6 @@
       t = build (MODIFY_EXPR, TREE_TYPE (foff), foff, t);
       expand_expr (t, const0_rtx, VOIDmode, EXPAND_NORMAL);
 
-      emit_queue ();
       emit_jump (lab_over);
       emit_barrier ();
       emit_label (lab_false);
@@ -1704,13 +1700,12 @@
 
       /* Emit code for addr_rtx = the ovfl pointer into overflow area.
 	 Postincrement the ovfl pointer by 8.  */
-      t = build (POSTINCREMENT_EXPR, TREE_TYPE(ovfl), ovfl,
-		 size_int (8));
-      r = expand_expr (t, addr_rtx, Pmode, EXPAND_NORMAL);
+      r = expand_increment (ovfl, size_int (8));
+      t = build (MINUS_EXPR, TREE_TYPE (ovfl), ovfl, size_int (8));
+      r = expand_expr (t, r, Pmode, EXPAND_NORMAL);
       if (r != addr_rtx)
-	emit_move_insn (addr_rtx, r);
+        emit_move_insn (addr_rtx, r);
 
-      emit_queue();
       emit_label (lab_over);
       return addr_rtx;
     }
@@ -1756,19 +1751,17 @@
       t = build (MODIFY_EXPR, TREE_TYPE (goff), goff, t);
       expand_expr (t, const0_rtx, VOIDmode, EXPAND_NORMAL);
       
-      emit_queue();
       emit_jump (lab_over);
       emit_barrier ();
       emit_label (lab_false);
 
       /* Emit code for addr_rtx -> overflow area, postinc by step_size.  */
-      t = build (POSTINCREMENT_EXPR, TREE_TYPE(ovfl), ovfl,
-		 size_int (step_size));
-      r = expand_expr (t, addr_rtx, Pmode, EXPAND_NORMAL);
+      r = expand_increment (ovfl, size_int (step_size));
+      t = build (MINUS_EXPR, TREE_TYPE (ovfl), ovfl, size_int (step_size));
+      r = expand_expr (t, r, Pmode, EXPAND_NORMAL);
       if (r != addr_rtx)
-	emit_move_insn (addr_rtx, r);
+        emit_move_insn (addr_rtx, r);
 
-      emit_queue();
       emit_label (lab_over);
 
       if (indirect)
Index: config/m32r/m32r.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/m32r/m32r.c,v
retrieving revision 1.97
diff -u -r1.97 m32r.c
--- config/m32r/m32r.c	7 Jul 2004 19:24:26 -0000	1.97
+++ config/m32r/m32r.c	8 Jul 2004 14:27:46 -0000
@@ -1412,11 +1412,10 @@
       type_ptr = build_pointer_type (type);
       type_ptr_ptr = build_pointer_type (type_ptr);
 
-      t = build (POSTINCREMENT_EXPR, va_list_type_node, valist, 
-		 build_int_2 (UNITS_PER_WORD, 0));
-      TREE_SIDE_EFFECTS (t) = 1;
+      expand_increment (valist, build_int_2 (UNITS_PER_WORD, 0));
+
+      t = build2 (MINUS_EXPR, va_list_type_node, valist, build_int_2 (UNITS_PER_WORD, 0));
       t = build1 (NOP_EXPR, type_ptr_ptr, t);
-      TREE_SIDE_EFFECTS (t) = 1;
       t = build1 (INDIRECT_REF, type_ptr, t);
 
       addr_rtx = expand_expr (t, NULL_RTX, Pmode, EXPAND_NORMAL);
@@ -1441,9 +1440,9 @@
 	}
       else
 	{
-	  t = build (POSTINCREMENT_EXPR, va_list_type_node, valist, 
-		     build_int_2 (rsize, 0));
-	  TREE_SIDE_EFFECTS (t) = 1;
+          expand_increment (valist, build_int_2 (rsize, 0));
+
+          t = build2 (MINUS_EXPR, va_list_type_node, valist, build_int_2 (rsize, 0));
 	  addr_rtx = expand_expr (t, NULL_RTX, Pmode, EXPAND_NORMAL);
 	}
     }
Index: config/mips/mips.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/mips/mips.c,v
retrieving revision 1.426
diff -u -r1.426 mips.c
--- config/mips/mips.c	7 Jul 2004 19:24:28 -0000	1.426
+++ config/mips/mips.c	8 Jul 2004 14:27:47 -0000
@@ -4145,14 +4145,11 @@
 	  /* Emit code to set addr_rtx to the valist, and postincrement
 	     the valist by the size of the argument, rounded up to the
 	     next word.	 Account for padding on big-endian targets.  */
-	  t = build (POSTINCREMENT_EXPR, TREE_TYPE (gpr), gpr,
-		     size_int (rsize));
+          expand_increment (gpr, size_int (rsize));
+	  t = build (MINUS_EXPR, TREE_TYPE (gpr), gpr, size_int (rsize));
 	  addr_rtx = expand_expr (t, 0, Pmode, EXPAND_NORMAL);
 	  if (BYTES_BIG_ENDIAN)
 	    addr_rtx = plus_constant (addr_rtx, rsize - size);
-
-	  /* Flush the POSTINCREMENT.  */
-	  emit_queue();
 	}
       else
 	{
@@ -4278,7 +4275,6 @@
 
 	  /* [7] Emit code to jump over the else clause, then the label
 	     that starts it.  */
-	  emit_queue();
 	  emit_jump (lab_over);
 	  emit_barrier ();
 	  emit_label (lab_false);
@@ -4296,17 +4292,19 @@
 
 	  /* [10, 11].	Emit code to store ovfl in addr_rtx, then
 	     post-increment ovfl by osize.  On big-endian machines,
-	     the argument has OSIZE - SIZE bytes of leading padding.  */
-	  t = build (POSTINCREMENT_EXPR, TREE_TYPE (ovfl), ovfl,
-		     size_int (osize));
+	     the argument has OSIZE - SIZE bytes of leading padding,
+	     so we compute (x += osize) - size.  Otherwise, compute
+	     (x += osize) - osize.  */
+          expand_increment (ovfl, size_int (osize));
 	  if (BYTES_BIG_ENDIAN && osize > size)
-	    t = build (PLUS_EXPR, TREE_TYPE (t), t,
-		       build_int_2 (osize - size, 0));
+	    t = build (MINUS_EXPR, TREE_TYPE (t), t, build_int_2 (size, 0));
+          else
+	    t = build (MINUS_EXPR, TREE_TYPE (t), t, build_int_2 (osize, 0));
+
 	  r = expand_expr (t, addr_rtx, Pmode, EXPAND_NORMAL);
 	  if (r != addr_rtx)
 	    emit_move_insn (addr_rtx, r);
 
-	  emit_queue();
 	  emit_label (lab_over);
 	}
       if (indirect)
Index: config/mn10300/mn10300.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/mn10300/mn10300.c,v
retrieving revision 1.69
diff -u -r1.69 mn10300.c
--- config/mn10300/mn10300.c	4 Jul 2004 08:07:09 -0000	1.69
+++ config/mn10300/mn10300.c	8 Jul 2004 14:27:47 -0000
@@ -1460,17 +1460,16 @@
 rtx
 mn10300_va_arg (tree valist, tree type)
 {
-  HOST_WIDE_INT align, rsize;
+  HOST_WIDE_INT align, rsize, adjust;
   tree t, ptr, pptr;
 
   /* Compute the rounded size of the type.  */
   align = PARM_BOUNDARY / BITS_PER_UNIT;
   rsize = (((int_size_in_bytes (type) + align - 1) / align) * align);
+  adjust = rsize > 8 ? 4 : rsize;
 
-  t = build (POSTINCREMENT_EXPR, TREE_TYPE (valist), valist, 
-	     build_int_2 ((rsize > 8 ? 4 : rsize), 0));
-  TREE_SIDE_EFFECTS (t) = 1;
-
+  expand_increment (valist, build_int_2 (adjust, 0));
+  t = build (MINUS_EXPR, TREE_TYPE (valist), valist, build_int_2 (adjust, 0));
   ptr = build_pointer_type (type);
 
   /* "Large" types are passed by reference.  */
@@ -1478,16 +1477,10 @@
     {
       pptr = build_pointer_type (ptr);
       t = build1 (NOP_EXPR, pptr, t);
-      TREE_SIDE_EFFECTS (t) = 1;
-
       t = build1 (INDIRECT_REF, ptr, t);
-      TREE_SIDE_EFFECTS (t) = 1;
     }
   else
-    {
-      t = build1 (NOP_EXPR, ptr, t);
-      TREE_SIDE_EFFECTS (t) = 1;
-    }
+    t = build1 (NOP_EXPR, ptr, t);
 
   /* Calculate!  */
   return expand_expr (t, NULL_RTX, Pmode, EXPAND_NORMAL);
@@ -1843,9 +1836,6 @@
 	  || XINT (x, 1) == UNSPEC_PLT))
       return 1;
 
-  if (GET_CODE (x) == QUEUED)
-    return legitimate_pic_operand_p (QUEUED_VAR (x));
-
   fmt = GET_RTX_FORMAT (GET_CODE (x));
   for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)
     {
Index: config/pa/pa.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/pa/pa.c,v
retrieving revision 1.255
diff -u -r1.255 pa.c
--- config/pa/pa.c	7 Jul 2004 19:24:34 -0000	1.255
+++ config/pa/pa.c	8 Jul 2004 14:27:48 -0000
@@ -5984,18 +5984,17 @@
       else
 	{
 	  ptr = build_pointer_type (type);
+	  pptr = build_pointer_type (ptr);
 
 	  /* Args grow upward.  */
-	  t = build (POSTINCREMENT_EXPR, TREE_TYPE (valist), valist,
+	  expand_increment (valist,
+			    build_int_2 (POINTER_SIZE / BITS_PER_UNIT, 0));
+
+	  t = build (MINUS_EXPR, TREE_TYPE (valist), valist,
 		     build_int_2 (POINTER_SIZE / BITS_PER_UNIT, 0));
-	  TREE_SIDE_EFFECTS (t) = 1;
 
-	  pptr = build_pointer_type (ptr);
 	  t = build1 (NOP_EXPR, pptr, t);
-	  TREE_SIDE_EFFECTS (t) = 1;
-
 	  t = build1 (INDIRECT_REF, ptr, t);
-	  TREE_SIDE_EFFECTS (t) = 1;
 	}
     }
   else /* !TARGET_64BIT */
@@ -6006,16 +6005,16 @@
       if (size > 8 || size <= 0)
 	{
 	  /* Args grow downward.  */
-	  t = build (PREDECREMENT_EXPR, TREE_TYPE (valist), valist,
+	  pptr = build_pointer_type (ptr);
+
+	  expand_increment (valist,
+			    build_int_2 (-POINTER_SIZE / BITS_PER_UNIT, 0));
+
+	  t = build (PLUS_EXPR, TREE_TYPE (valist), valist,
 		     build_int_2 (POINTER_SIZE / BITS_PER_UNIT, 0));
-	  TREE_SIDE_EFFECTS (t) = 1;
 
-	  pptr = build_pointer_type (ptr);
 	  t = build1 (NOP_EXPR, pptr, t);
-	  TREE_SIDE_EFFECTS (t) = 1;
-
 	  t = build1 (INDIRECT_REF, ptr, t);
-	  TREE_SIDE_EFFECTS (t) = 1;
 	}
       else
 	{
Index: config/s390/s390.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/s390/s390.c,v
retrieving revision 1.156
diff -u -r1.156 s390.c
--- config/s390/s390.c	7 Jul 2004 19:24:45 -0000	1.156
+++ config/s390/s390.c	8 Jul 2004 14:27:49 -0000
@@ -3184,10 +3184,6 @@
   rtx (*gen_result) (rtx) =
     GET_MODE (target) == DImode ? gen_cmpint_di : gen_cmpint_si;
 
-  op0 = protect_from_queue (op0, 0);
-  op1 = protect_from_queue (op1, 0);
-  len = protect_from_queue (len, 0);
-
   if (GET_CODE (len) == CONST_INT && INTVAL (len) >= 0 && INTVAL (len) <= 256)
     {
       if (INTVAL (len) > 0)
Index: cp/typeck.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/cp/typeck.c,v
retrieving revision 1.557
diff -u -r1.557 typeck.c
--- cp/typeck.c	7 Jul 2004 10:20:44 -0000	1.557
+++ cp/typeck.c	8 Jul 2004 14:27:51 -0000
@@ -5825,9 +5825,6 @@
 	    readonly_error (o[i], "modification by `asm'", 1);
 	}
     }
-
-  /* Those MODIFY_EXPRs could do autoincrements.  */
-  emit_queue ();
 }
 \f
 /* If RETVAL is the address of, or a reference to, a local variable or

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH] Remove the postincrement queue
  2004-07-08 14:55 [PATCH] Remove the postincrement queue Paolo Bonzini
@ 2004-07-08 21:15 ` Richard Henderson
  2004-07-09  8:34   ` Paolo Bonzini
  0 siblings, 1 reply; 15+ messages in thread
From: Richard Henderson @ 2004-07-08 21:15 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: gcc-patches

On Thu, Jul 08, 2004 at 04:35:24PM +0200, Paolo Bonzini wrote:
> Unfortunately (but quite understandably), va_arg expanders are heavy 
> users of {pre,post}{in,de}crements, which means that this patch has 
> ramifications in backend code.  The obvious solution is to expand 
> preincrements and predecrements manually, and for postincrements use 
> (x+N)-N; of course the real solution would be to transition these 
> backends to using gimplifiers for va_arg: these could well use the 
> now-banned tree codes, but this is well above my abilities.

Hum.  From a brief scan, I'm not really fond of the changes you
made here in the valist code.  Most of them would need reverting
to almost exactly what's there now.

Why don't we take care of this first?  I'm sure the conversion
is not above your abilities.


r~

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH] Remove the postincrement queue
  2004-07-08 21:15 ` Richard Henderson
@ 2004-07-09  8:34   ` Paolo Bonzini
  2004-07-09  9:38     ` gimplify mn10300 va_arg Paolo Bonzini
                       ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Paolo Bonzini @ 2004-07-09  8:34 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Paolo Bonzini, gcc-patches

> Why don't we take care of this first?  I'm sure the conversion
> is not above your abilities.

Huh, looks like *you* took care of this.  Thank you very much, really.

Would you preapprove the other bits for when the gimple_va_arg 
conversion is done?

Paolo

^ permalink raw reply	[flat|nested] 15+ messages in thread

* gimplify mn10300 va_arg
  2004-07-09  8:34   ` Paolo Bonzini
@ 2004-07-09  9:38     ` Paolo Bonzini
  2004-07-09 11:10       ` Richard Henderson
  2004-07-09  9:43     ` [RFT/RFA] gimplify pa va_arg Paolo Bonzini
  2004-07-09 10:55     ` [PATCH] Remove the postincrement queue Richard Henderson
  2 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2004-07-09  9:38 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Richard Henderson, Paolo Bonzini, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 194 bytes --]

Here is the first bit.  Tested on an mn10300-elf cross, the same way rth 
did; the code looks better except that it uses memmove instead of memcpy 
to move the big struct.

Ok for mainline?

Paolo

[-- Attachment #2: gimplify-va-arg-mn10300.patch --]
[-- Type: text/plain, Size: 2881 bytes --]

Index: mn10300.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/mn10300/mn10300.c,v
retrieving revision 1.69
diff -u -r1.69 mn10300.c
--- mn10300.c	4 Jul 2004 08:07:09 -0000	1.69
+++ mn10300.c	9 Jul 2004 08:42:29 -0000
@@ -43,6 +43,7 @@
 #include "tm_p.h"
 #include "target.h"
 #include "target-def.h"
+#include "tree-gimple.h"
 
 /* This is used by GOTaddr2picreg to uniquely identify
    UNSPEC_INT_LABELs.  */
@@ -71,6 +72,7 @@
 static void mn10300_file_start (void);
 static bool mn10300_return_in_memory (tree, tree);
 static rtx mn10300_builtin_saveregs (void);
+static tree mn10300_gimplify_va_arg_expr (tree, tree, tree *, tree *);
 
 \f
 /* Initialize the GCC target structure.  */
@@ -99,6 +101,9 @@
 #undef TARGET_EXPAND_BUILTIN_SAVEREGS
 #define TARGET_EXPAND_BUILTIN_SAVEREGS mn10300_builtin_saveregs
 
+#undef TARGET_GIMPLIFY_VA_ARG_EXPR
+#define TARGET_GIMPLIFY_VA_ARG_EXPR mn10300_gimplify_va_arg_expr
+
 static void mn10300_encode_section_info (tree, rtx, int);
 struct gcc_target targetm = TARGET_INITIALIZER;
 \f
@@ -1457,40 +1462,34 @@
   std_expand_builtin_va_start (valist, nextarg);
 }
 
-rtx
-mn10300_va_arg (tree valist, tree type)
+static tree
+mn10300_gimplify_va_arg_expr (tree valist, tree type,
+			      tree *pre_p ATTRIBUTE_UNUSED,
+			      tree *post_p ATTRIBUTE_UNUSED)
 {
-  HOST_WIDE_INT align, rsize;
+  HOST_WIDE_INT align, rsize, adjust;
   tree t, ptr, pptr;
 
   /* Compute the rounded size of the type.  */
   align = PARM_BOUNDARY / BITS_PER_UNIT;
   rsize = (((int_size_in_bytes (type) + align - 1) / align) * align);
+  adjust = rsize > 8 ? 4 : rsize;
 
-  t = build (POSTINCREMENT_EXPR, TREE_TYPE (valist), valist, 
-	     build_int_2 ((rsize > 8 ? 4 : rsize), 0));
-  TREE_SIDE_EFFECTS (t) = 1;
-
+  t = build2 (POSTINCREMENT_EXPR, TREE_TYPE (valist),
+	      valist, build_int_2 (adjust, 0));
   ptr = build_pointer_type (type);
 
   /* "Large" types are passed by reference.  */
   if (rsize > 8)
     {
       pptr = build_pointer_type (ptr);
-      t = build1 (NOP_EXPR, pptr, t);
-      TREE_SIDE_EFFECTS (t) = 1;
-
-      t = build1 (INDIRECT_REF, ptr, t);
-      TREE_SIDE_EFFECTS (t) = 1;
+      t = fold_convert (pptr, t);
+      t = build_fold_indirect_ref (t);
     }
   else
-    {
-      t = build1 (NOP_EXPR, ptr, t);
-      TREE_SIDE_EFFECTS (t) = 1;
-    }
+    t = fold_convert (ptr, t);
 
-  /* Calculate!  */
-  return expand_expr (t, NULL_RTX, Pmode, EXPAND_NORMAL);
+  return build_fold_indirect_ref (t);
 }
 
 /* Return an RTX to represent where a value with mode MODE will be returned
@@ -1843,9 +1842,6 @@
 	  || XINT (x, 1) == UNSPEC_PLT))
       return 1;
 
-  if (GET_CODE (x) == QUEUED)
-    return legitimate_pic_operand_p (QUEUED_VAR (x));
-
   fmt = GET_RTX_FORMAT (GET_CODE (x));
   for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)
     {

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [RFT/RFA] gimplify pa va_arg
  2004-07-09  8:34   ` Paolo Bonzini
  2004-07-09  9:38     ` gimplify mn10300 va_arg Paolo Bonzini
@ 2004-07-09  9:43     ` Paolo Bonzini
  2004-07-09  9:44       ` Paolo Bonzini
  2004-07-09 10:55     ` [PATCH] Remove the postincrement queue Richard Henderson
  2 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2004-07-09  9:43 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Richard Henderson, Paolo Bonzini, gcc-patches

And here is the second.  This gives several differences for both PA and 
PA64 in the struct case.  For PA64, it looks like awful optimization on 
the part of the current mainline GCC (but HPPA assembly is not my forte). 
For PA, the only important change is that the frame grows from 64 to 128 
bytes, but again I cannot figure out why.

I attach two sdiffs to help review.

Ok for mainline?

Paolo

2004-07-09  Paolo Bonzini  <bonzini@gnu.org>

	* config/pa/pa.h (EXPAND_BUILTIN_VA_ARG): Do not define.
	* config/pa/pa.c (hppa_gimplify_va_arg_expr): New, based on
	hppa_va_arg.  Produce GIMPLE in the pre-queue instead of
	expanding trees to RTL.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFT/RFA] gimplify pa va_arg
  2004-07-09  9:43     ` [RFT/RFA] gimplify pa va_arg Paolo Bonzini
@ 2004-07-09  9:44       ` Paolo Bonzini
  2004-07-09 10:08         ` Paolo Bonzini
  0 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2004-07-09  9:44 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Richard Henderson, Paolo Bonzini, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 731 bytes --]

Paolo Bonzini wrote:

> And here is the second.  This gives several differences for both PA and 
> PA64 in the struct case.  For PA64, it looks like awful optimization on 
> part of the current mainline GCC (but HPPA assembly is not my forte). 
> For PA, the only important change is that the frame grows from 64 to 128 
> bytes, but again I cannot figure it out.
> 
> I attach two sdiffs to help review.
> 
> Ok for mainline?
> 
> Paolo
> 
> 2004-07-09  Paolo Bonzini  <bonzini@gnu.org>
> 
>     * config/pa/pa.h (EXPAND_BUILTIN_VA_ARG): Do not define.
>     * config/pa/pa.c (hppa_gimplify_va_arg_expr): New, based on
>     hppa_va_arg.  Produce GIMPLE in the pre-queue instead of
>     expanding trees to RTL.

ENOPATCH

Paolo


[-- Attachment #2: gimplify-va-arg-pa.patch --]
[-- Type: text/plain, Size: 4263 bytes --]

Index: pa.h
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/pa/pa.h,v
retrieving revision 1.221
diff -u -r1.221 pa.h
--- pa.h	5 Jul 2004 19:49:16 -0000	1.221
+++ pa.h	9 Jul 2004 09:22:56 -0000
@@ -1141,10 +1141,6 @@
 #define EXPAND_BUILTIN_VA_START(valist, nextarg) \
   hppa_va_start (valist, nextarg)
 
-/* Implement `va_arg'.  */
-
-#define EXPAND_BUILTIN_VA_ARG(valist, type) \
-  hppa_va_arg (valist, type)
 \f
 /* Addressing modes, and classification of registers for them. 
 
Index: pa.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/pa/pa.c,v
retrieving revision 1.255
diff -u -r1.255 pa.c
--- pa.c	7 Jul 2004 19:24:34 -0000	1.255
+++ pa.c	9 Jul 2004 09:22:57 -0000
@@ -47,6 +47,7 @@
 #include "tm_p.h"
 #include "target.h"
 #include "target-def.h"
+#include "tree-gimple.h"
 
 #undef TARGET_SCHED_USE_DFA_PIPELINE_INTERFACE 
 #define TARGET_SCHED_USE_DFA_PIPELINE_INTERFACE hook_int_void_1
@@ -142,6 +143,7 @@
 static void pa_hpux_init_libfuncs (void);
 #endif
 static rtx pa_struct_value_rtx (tree, int);
+static tree hppa_gimplify_va_arg_expr (tree, tree, tree *, tree *);
 
 /* Save the operands last given to a compare for use when we
    generate a scc or bcc insn.  */
@@ -268,6 +270,9 @@
 #undef TARGET_EXPAND_BUILTIN_SAVEREGS
 #define TARGET_EXPAND_BUILTIN_SAVEREGS hppa_builtin_saveregs
 
+#undef TARGET_GIMPLIFY_VA_ARG_EXPR
+#define TARGET_GIMPLIFY_VA_ARG_EXPR hppa_gimplify_va_arg_expr
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 \f
 void
@@ -5948,8 +5953,8 @@
   std_expand_builtin_va_start (valist, nextarg);
 }
 
-rtx
-hppa_va_arg (tree valist, tree type)
+static tree
+hppa_gimplify_va_arg_expr (tree valist, tree type, tree *pre_p, tree *post_p)
 {
   HOST_WIDE_INT size = int_size_in_bytes (type);
   HOST_WIDE_INT ofs;
@@ -5975,27 +5980,21 @@
           t = build (BIT_AND_EXPR, TREE_TYPE (t), t,
                      build_int_2 (-2 * UNITS_PER_WORD, -1));
           t = build (MODIFY_EXPR, TREE_TYPE (valist), valist, t);
-          TREE_SIDE_EFFECTS (t) = 1;
-	  expand_expr (t, const0_rtx, VOIDmode, EXPAND_NORMAL);
+	  gimplify_and_add (t, pre_p);
         }
 
       if (size > 0)
-	return std_expand_builtin_va_arg (valist, type);
+	return std_gimplify_va_arg_expr (valist, type, pre_p, post_p);
       else
 	{
 	  ptr = build_pointer_type (type);
+	  pptr = build_pointer_type (ptr);
 
 	  /* Args grow upward.  */
 	  t = build (POSTINCREMENT_EXPR, TREE_TYPE (valist), valist,
 		     build_int_2 (POINTER_SIZE / BITS_PER_UNIT, 0));
-	  TREE_SIDE_EFFECTS (t) = 1;
-
-	  pptr = build_pointer_type (ptr);
-	  t = build1 (NOP_EXPR, pptr, t);
-	  TREE_SIDE_EFFECTS (t) = 1;
-
-	  t = build1 (INDIRECT_REF, ptr, t);
-	  TREE_SIDE_EFFECTS (t) = 1;
+	  t = fold_convert (pptr, t);
+	  t = build_fold_indirect_ref (t);
 	}
     }
   else /* !TARGET_64BIT */
@@ -6006,16 +6005,12 @@
       if (size > 8 || size <= 0)
 	{
 	  /* Args grow downward.  */
-	  t = build (PREDECREMENT_EXPR, TREE_TYPE (valist), valist,
-		     build_int_2 (POINTER_SIZE / BITS_PER_UNIT, 0));
-	  TREE_SIDE_EFFECTS (t) = 1;
-
 	  pptr = build_pointer_type (ptr);
-	  t = build1 (NOP_EXPR, pptr, t);
-	  TREE_SIDE_EFFECTS (t) = 1;
 
-	  t = build1 (INDIRECT_REF, ptr, t);
-	  TREE_SIDE_EFFECTS (t) = 1;
+	  t = build2 (PREDECREMENT_EXPR, TREE_TYPE (valist), valist,
+		      build_int_2 (POINTER_SIZE / BITS_PER_UNIT, 0));
+	  t = fold_convert (pptr, t);
+	  t = build_fold_indirect_ref (t);
 	}
       else
 	{
@@ -6028,23 +6023,21 @@
 		     build_int_2 ((size > 4 ? -8 : -4), -1));
 
 	  t = build (MODIFY_EXPR, TREE_TYPE (valist), valist, t);
-	  TREE_SIDE_EFFECTS (t) = 1;
+	  gimplify_and_add (t, pre_p);
 
 	  ofs = (8 - size) % 4;
 	  if (ofs)
-	    {
-	      t = build (PLUS_EXPR, TREE_TYPE (valist), t,
-			 build_int_2 (ofs, 0));
-	      TREE_SIDE_EFFECTS (t) = 1;
-	    }
+	    t = build (PLUS_EXPR, TREE_TYPE (valist), valist,
+		       build_int_2 (ofs, 0));
+	  else
+	    t = valist;
 
-	  t = build1 (NOP_EXPR, ptr, t);
-	  TREE_SIDE_EFFECTS (t) = 1;
+	  t = fold_convert (ptr, t);
 	}
     }
 
   /* Calculate!  */
-  return expand_expr (t, NULL_RTX, VOIDmode, EXPAND_NORMAL);
+  return build_fold_indirect_ref (t);
 }
 
 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFT/RFA] gimplify pa va_arg
  2004-07-09  9:44       ` Paolo Bonzini
@ 2004-07-09 10:08         ` Paolo Bonzini
  2004-07-09 10:13           ` Paolo Bonzini
  0 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2004-07-09 10:08 UTC (permalink / raw)
  To: gcc-patches; +Cc: Richard Henderson, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 88 bytes --]

>> I attach two sdiffs to help review.

And I forgot the sdiffs as well.  Sorry.

Paolo

[-- Attachment #2: pa64-diff --]
[-- Type: text/plain, Size: 3341 bytes --]

	.LEVEL 2.0w				.LEVEL 2.0w
	.text					.text
	.align 4				.align 4
	.align 8				.align 8
.globl f1				.globl f1
	.type	f1, @function			.type	f1, @function
f1:					f1:
	.PROC					.PROC
	.CALLINFO FRAME=0,NO_CALLS		.CALLINFO FRAME=0,NO_CALLS
	.ENTRY					.ENTRY
	addil LT'list,%r27			addil LT'list,%r27
	ldd RT'list(%r1),%r20			ldd RT'list(%r1),%r20
	ldd 0(%r20),%r21			ldd 0(%r20),%r21
	ldd 0(%r20),%r19	      |		ldo 8(%r21),%r19
	ldo 8(%r19),%r19	      <
	std %r19,0(%r20)	      <
	ldw 4(%r21),%r28			ldw 4(%r21),%r28
				      >		std %r19,0(%r20)
	bve (%r2)				bve (%r2)
	extrd,s %r28,63,32,%r28			extrd,s %r28,63,32,%r28
	.EXIT					.EXIT
	.PROCEND				.PROCEND
	.size	f1, .-f1			.size	f1, .-f1
	.align 8				.align 8
.globl f2				.globl f2
	.type	f2, @function			.type	f2, @function
f2:					f2:
	.PROC					.PROC
	.CALLINFO FRAME=0,NO_CALLS		.CALLINFO FRAME=0,NO_CALLS
	.ENTRY					.ENTRY
	addil LT'list,%r27			addil LT'list,%r27
	ldd RT'list(%r1),%r20			ldd RT'list(%r1),%r20
	ldd 0(%r20),%r21			ldd 0(%r20),%r21
	ldd 0(%r20),%r19	      |		ldo 8(%r21),%r19
	ldo 8(%r19),%r19	      <
	std %r19,0(%r20)	      <
	bve (%r2)		      <
	ldd 0(%r21),%r28			ldd 0(%r21),%r28
				      >		bve (%r2)
				      >		std %r19,0(%r20)
	.EXIT					.EXIT
	.PROCEND				.PROCEND
	.size	f2, .-f2			.size	f2, .-f2
	.align 8				.align 8
.globl f3				.globl f3
	.type	f3, @function			.type	f3, @function
f3:					f3:
	.PROC					.PROC
	.CALLINFO FRAME=0,NO_CALLS		.CALLINFO FRAME=0,NO_CALLS
	.ENTRY					.ENTRY
	addil LT'list,%r27			addil LT'list,%r27
	ldd RT'list(%r1),%r20			ldd RT'list(%r1),%r20
	ldd 0(%r20),%r21			ldd 0(%r20),%r21
	ldd 0(%r20),%r19	      |		ldo 8(%r21),%r19
	ldo 8(%r19),%r19	      <
	std %r19,0(%r20)	      <
	bve (%r2)		      <
	fldd 0(%r21),%fr4			fldd 0(%r21),%fr4
				      >		bve (%r2)
				      >		std %r19,0(%r20)
	.EXIT					.EXIT
	.PROCEND				.PROCEND
	.size	f3, .-f3			.size	f3, .-f3
				      >	.globl memmove
	.align 8				.align 8
.globl f4				.globl f4
	.type	f4, @function			.type	f4, @function
f4:					f4:
	.PROC					.PROC
	.CALLINFO FRAME=128,CALLS,SAV		.CALLINFO FRAME=128,CALLS,SAV
	.ENTRY					.ENTRY
	std %r2,-16(%r30)			std %r2,-16(%r30)
	addil LT'list,%r27			addil LT'list,%r27
	ldd RT'list(%r1),%r21	      |		ldd RT'list(%r1),%r20
	copy %r28,%r26				copy %r28,%r26
	std,ma %r5,128(%r30)			std,ma %r5,128(%r30)
	ldi 400,%r24				ldi 400,%r24
	copy %r28,%r5				copy %r28,%r5
	std %r4,-120(%r30)			std %r4,-120(%r30)
	ldd 0(%r21),%r19	      |		ldd 0(%r20),%r19
	ldo 15(%r19),%r19			ldo 15(%r19),%r19
	depdi 0,63,4,%r19			depdi 0,63,4,%r19
	std %r19,0(%r21)	      |		copy %r19,%r25
	ldd 0(%r21),%r20	      <
	ldo 15(%r20),%r20	      <
	ldo -48(%r30),%r29	      <
	std %r20,0(%r21)	      <
	ldd 0(%r21),%r19	      <
	depdi 0,63,4,%r19	      <
	std %r19,0(%r21)	      <
	ldd 0(%r21),%r25	      <
	ldd 0(%r21),%r19	      <
	ldo 400(%r19),%r19			ldo 400(%r19),%r19
	std %r19,0(%r21)	      |		ldo -48(%r30),%r29
	b,l memcpy,%r2		      |		b,l memmove,%r2
	nop			      |		std %r19,0(%r20)
	ldd -144(%r30),%r2			ldd -144(%r30),%r2
	ldd -120(%r30),%r4			ldd -120(%r30),%r4
	copy %r5,%r28				copy %r5,%r28
	bve (%r2)				bve (%r2)
	ldd,mb -128(%r30),%r5			ldd,mb -128(%r30),%r5
	.EXIT					.EXIT
	.PROCEND				.PROCEND
	.size	f4, .-f4			.size	f4, .-f4
	.comm	list,8,8			.comm	list,8,8
	.ident	"GCC: (GNU) 3.5.0 200		.ident	"GCC: (GNU) 3.5.0 200

[-- Attachment #3: pa-diff --]
[-- Type: text/plain, Size: 3406 bytes --]

	.LEVEL 1.1							.LEVEL 1.1
	.text								.text
	.align 4							.align 4
	.align 4							.align 4
.globl f1							.globl f1
	.type	f1, @function						.type	f1, @function
f1:								f1:
	.PROC								.PROC
	.CALLINFO FRAME=0,NO_CALLS					.CALLINFO FRAME=0,NO_CALLS
	.ENTRY								.ENTRY
	addil LR'list-$global$,%r27					addil LR'list-$global$,%r27
	ldw RR'list-$global$(%r1),%r19					ldw RR'list-$global$(%r1),%r19
	ldo -4(%r19),%r19						ldo -4(%r19),%r19
	depi 0,31,2,%r19						depi 0,31,2,%r19
	stw %r19,RR'list-$global$(%r1)					stw %r19,RR'list-$global$(%r1)
	ldw RR'list-$global$(%r1),%r20			   <
	bv %r0(%r2)							bv %r0(%r2)
	ldw 0(%r20),%r28				   |		ldw 0(%r19),%r28
	.EXIT								.EXIT
	.PROCEND							.PROCEND
	.size	f1, .-f1						.size	f1, .-f1
	.align 4							.align 4
.globl f2							.globl f2
	.type	f2, @function						.type	f2, @function
f2:								f2:
	.PROC								.PROC
	.CALLINFO FRAME=0,NO_CALLS					.CALLINFO FRAME=0,NO_CALLS
	.ENTRY								.ENTRY
	addil LR'list-$global$,%r27					addil LR'list-$global$,%r27
	ldw RR'list-$global$(%r1),%r19					ldw RR'list-$global$(%r1),%r19
	ldo -8(%r19),%r19						ldo -8(%r19),%r19
	depi 0,31,3,%r19						depi 0,31,3,%r19
	stw %r19,RR'list-$global$(%r1)					stw %r19,RR'list-$global$(%r1)
	ldw RR'list-$global$(%r1),%r20			   |		ldw 0(%r19),%r28
	ldw 0(%r20),%r28				   |		ldw 4(%r19),%r29
	ldw 4(%r20),%r29				   <
	bv,n %r0(%r2)							bv,n %r0(%r2)
	.EXIT								.EXIT
	.PROCEND							.PROCEND
	.size	f2, .-f2						.size	f2, .-f2
	.align 4							.align 4
.globl f3							.globl f3
	.type	f3, @function						.type	f3, @function
f3:								f3:
	.PROC								.PROC
	.CALLINFO FRAME=0,NO_CALLS					.CALLINFO FRAME=0,NO_CALLS
	.ENTRY								.ENTRY
	addil LR'list-$global$,%r27					addil LR'list-$global$,%r27
	ldw RR'list-$global$(%r1),%r19					ldw RR'list-$global$(%r1),%r19
	ldo -8(%r19),%r19						ldo -8(%r19),%r19
	depi 0,31,3,%r19						depi 0,31,3,%r19
	stw %r19,RR'list-$global$(%r1)					stw %r19,RR'list-$global$(%r1)
	ldw RR'list-$global$(%r1),%r20			   <
	bv %r0(%r2)							bv %r0(%r2)
	fldds 0(%r20),%fr4				   |		fldds 0(%r19),%fr4
	.EXIT								.EXIT
	.PROCEND							.PROCEND
	.size	f3, .-f3						.size	f3, .-f3
							   >	.globl memmove
	.align 4							.align 4
.globl f4							.globl f4
	.type	f4, @function						.type	f4, @function
f4:								f4:
	.PROC								.PROC
	.CALLINFO FRAME=64,CALLS,SAVE_RP,ENTRY_GR=3	   |		.CALLINFO FRAME=128,CALLS,SAVE_RP,ENTRY_GR=3
	.ENTRY								.ENTRY
	stw %r2,-20(%r30)						stw %r2,-20(%r30)
	addil LR'list-$global$,%r27					addil LR'list-$global$,%r27
	copy %r28,%r26							copy %r28,%r26
	stwm %r4,64(%r30)				   |		stwm %r4,128(%r30)
	ldi 400,%r24							ldi 400,%r24
	ldw RR'list-$global$(%r1),%r19					ldw RR'list-$global$(%r1),%r19
							   >		ldw -4(%r19),%r25
	ldo -4(%r19),%r19						ldo -4(%r19),%r19
	copy %r28,%r4					   <
	stw %r19,RR'list-$global$(%r1)					stw %r19,RR'list-$global$(%r1)
	ldw RR'list-$global$(%r1),%r20			   |		bl memmove,%r2
	bl memcpy,%r2					   |		copy %r28,%r4
	ldw 0(%r20),%r25				   |		ldw -148(%r30),%r2
	ldw -84(%r30),%r2				   <
	copy %r4,%r28							copy %r4,%r28
	bv %r0(%r2)							bv %r0(%r2)
	ldwm -64(%r30),%r4				   |		ldwm -128(%r30),%r4
	.EXIT								.EXIT
	.PROCEND							.PROCEND
	.size	f4, .-f4						.size	f4, .-f4
	.comm	list,4,4						.comm	list,4,4
	.ident	"GCC: (GNU) 3.5.0 20040709 (experimental		.ident	"GCC: (GNU) 3.5.0 20040709 (experimental

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH] Remove the postincrement queue
  2004-07-09  8:34   ` Paolo Bonzini
  2004-07-09  9:38     ` gimplify mn10300 va_arg Paolo Bonzini
  2004-07-09  9:43     ` [RFT/RFA] gimplify pa va_arg Paolo Bonzini
@ 2004-07-09 10:55     ` Richard Henderson
  2004-07-09 11:15       ` Paolo Bonzini
  2 siblings, 1 reply; 15+ messages in thread
From: Richard Henderson @ 2004-07-09 10:55 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Paolo Bonzini, gcc-patches

On Fri, Jul 09, 2004 at 09:13:26AM +0200, Paolo Bonzini wrote:
> Would you preapprove the other bits for when the gimple_va_arg 
> conversion is done?

I'd like to see if we can get rid of the rest of the 
expand_increment calls.


r~

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: gimplify mn10300 va_arg
  2004-07-09  9:38     ` gimplify mn10300 va_arg Paolo Bonzini
@ 2004-07-09 11:10       ` Richard Henderson
  0 siblings, 0 replies; 15+ messages in thread
From: Richard Henderson @ 2004-07-09 11:10 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Paolo Bonzini, gcc-patches

On Fri, Jul 09, 2004 at 10:47:34AM +0200, Paolo Bonzini wrote:
> Here is the first bit.  Tested on an mn10300-elf cross, the same way rth 
> did; the code looks better except that it uses memmove instead of memcpy 
> to move the big struct.

Oh, well, as you can see, I've taken care of the rest of them too.

For the record, you should have noticed that this va_arg function
was doing exactly the same thing as the generic one, modulo the
one bit that handles indirect arguments.


r~

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH] Remove the postincrement queue
  2004-07-09 10:55     ` [PATCH] Remove the postincrement queue Richard Henderson
@ 2004-07-09 11:15       ` Paolo Bonzini
  2004-07-09 15:49         ` [PATCH] Remove the postincrement queue, take 2 Paolo Bonzini
  0 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2004-07-09 11:15 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Paolo Bonzini, gcc-patches

> I'd like to see if we can get rid of the rest of the 
> expand_increment calls.

There is just one, in store_constructor.  So at least I can make 
expand_increment static, but I could probably just go with expand_binop 
there; I'll try a variation of my patch.
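
Something along these lines, I think (a rough sketch; `index' and `mode'
are illustrative names for the counter pseudo and its mode, not the
actual variables in store_constructor):

  /* Compute index + 1, preferably in place, and copy back if the
     expander picked a different register.  */
  rtx tmp = expand_binop (mode, add_optab, index, const1_rtx,
                          index, 0, OPTAB_LIB_WIDEN);
  if (tmp != index)
    emit_move_insn (index, tmp);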

Paolo

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH] Remove the postincrement queue, take 2
  2004-07-09 11:15       ` Paolo Bonzini
@ 2004-07-09 15:49         ` Paolo Bonzini
  2004-07-09 22:05           ` Richard Henderson
  0 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2004-07-09 15:49 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Richard Henderson, Steven Bosscher, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 366 bytes --]

Here is the updated patch; I just used expand_assignment of a PLUS_EXPR 
instead of expand_increment.  The other change is that I was updating 
c_expand_asm_operands, but Steven pointed out it is unused, so this take 
removes it.
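
In other words, the store_constructor bit now does roughly the following
(a sketch; the variable name and the increment amount are illustrative,
not the exact hunk):

  /* Bump the loop counter by expanding a plain assignment of a
     PLUS_EXPR instead of a PREINCREMENT_EXPR.  */
  expand_assignment (index,
                     build2 (PLUS_EXPR, TREE_TYPE (index),
                             index, integer_one_node),
                     0);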

Thanks very much to rth for taking care of gimplifiers for va_arg.

Bootstrapped/regtested i686-pc-linux-gnu, ok for mainline?

Paolo


[-- Attachment #2: gcc-eliminate-queue.patch --]
[-- Type: text/plain, Size: 72388 bytes --]

2004-07-08  Paolo Bonzini  <bonzini@gnu.org>
	    Steven Bosscher  <stevenb@suse.de>

	* expr.c (enqueue_insn, finish_expr_for_function,
	protect_from_queue, queued_subexp_p, mark_queue,
	emit_insns_enqueued_after_mark, emit_queue,
	expand_increment): Remove.
	(store_constructor): Expand increment as an assignment.
	(expand_expr_real_1 <case PREINCREMENT_EXPR,
	case PREDECREMENT_EXPR, case POSTINCREMENT_EXPR,
	case POSTDECREMENT_EXPR>): Abort.
	* c-common.h (c_expand_asm_operands): Remove.
	* c-typeck.c (c_expand_asm_operands): Remove.
	* expr.h (QUEUED_VAR, QUEUED_INSN, QUEUED_COPY,
	QUEUED_BODY, QUEUED_NEXT, finish_expr_for_function,
	protect_from_queue, emit_queue, queued_subexp_p): Remove.
	* function.h (pending_chain, x_pending_chain): Remove.
	* rtl.def (QUEUED): Remove.

	* emit-rtl.c (copy_insn_1, copy_most_rtx,
	set_used_flags, verify_rtx_sharing): Remove references to QUEUED.
	* genattrtab.c (attr_copy_rtx, clear_struct_flag,
	encode_units_mask): Likewise.
	* local-alloc.c (equiv_init_varies_p): Likewise.
	* rtl.c (copy_rtx): Likewise.
	* rtlanal.c (rtx_unstable_p, rtx_varies_p): Likewise.
	* simplify-rtx.c (simplify_gen_subreg): Likewise.
	* config/mn10300/mn10300.c (legitimate_pic_operand_p): Likewise.

	* builtins.c (expand_builtin, expand_builtin_apply,
	expand_builtin_mathfn, expand_builtin_mathfn_2,
	expand_builtin_mathfn_3, expand_builtin_setjmp_setup):
	Remove calls to emit_queue and protect_from_queue.
	* calls.c (expand_call, precompute_arguments,
	precompute_register_parameters, rtx_for_function_call,
	store_one_arg): Likewise.
	* dojump.c (do_compare_and_jump, do_jump): Likewise.
	* explow.c (memory_address): Likewise.
	* expmed.c (clear_by_pieces_1, clear_storage,
	clear_storage_via_libcall, emit_group_load,
	emit_group_store, emit_store_flag,
	expand_expr_real_1, store_by_pieces,
	store_constructor, store_expr, try_casesi,
	try_tablejump): Likewise.
	* function.c (expand_pending_sizes): Likewise.
	* optabs.c (emit_cmp_and_jump_insns,
	emit_conditional_add, emit_conditional_move,
	expand_fix, expand_float, prepare_cmp_insn): Likewise.
	* stmt.c (emit_case_bit_tests,
	expand_asm_expr, expand_computed_goto,
	expand_decl_init, expand_end_case_type,
	expand_end_stmt_expr, expand_expr_stmt_value,
	expand_return, expand_start_case,
	optimize_tail_recursion): Likewise.
	* config/c4x/c4x.c (c4x_expand_builtin): Likewise.
	* config/s390/s390.c (s390_expand_cmpmem): Likewise.

cp/ChangeLog:
2004-07-08  Paolo Bonzini  <bonzini@gnu.org>
	    Steven Bosscher  <stevenb@suse.de>

	* c-typeck.c (c_expand_asm_operands): Remove.

Index: gcc/builtins.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/builtins.c,v
retrieving revision 1.354
diff -u -r1.354 builtins.c
--- gcc/builtins.c	9 Jul 2004 03:37:13 -0000	1.354
+++ gcc/builtins.c	9 Jul 2004 14:35:29 -0000
@@ -524,8 +524,6 @@
 
   buf_addr = force_reg (Pmode, force_operand (buf_addr, NULL_RTX));
 
-  emit_queue ();
-
   /* We store the frame pointer and the address of receiver_label in
      the buffer and use the rest of it for the stack save area, which
      is machine-dependent.  */
@@ -959,9 +957,8 @@
 	}
       emit_insn (gen_prefetch (op0, op1, op2));
     }
-  else
 #endif
-    op0 = protect_from_queue (op0, 0);
+
   /* Don't do anything with direct references to volatile memory, but
      generate code to handle other side effects.  */
   if (!MEM_P (op0) && side_effects_p (op0))
@@ -1270,9 +1267,6 @@
 				       incoming_args, 0, OPTAB_LIB_WIDEN);
 #endif
 
-  /* Perform postincrements before actually calling the function.  */
-  emit_queue ();
-
   /* Push a new argument block and copy the arguments.  Do not allow
      the (potential) memcpy call below to interfere with our stack
      manipulations.  */
@@ -1776,7 +1770,6 @@
 
       op0 = expand_expr (arg, subtarget, VOIDmode, 0);
 
-      emit_queue ();
       start_sequence ();
 
       /* Compute into TARGET.
@@ -1931,7 +1924,6 @@
   op0 = expand_expr (arg0, subtarget, VOIDmode, 0);
   op1 = expand_expr (arg1, 0, VOIDmode, 0);
 
-  emit_queue ();
   start_sequence ();
 
   /* Compute into TARGET.
@@ -2036,7 +2028,6 @@
 
       op0 = expand_expr (arg, subtarget, VOIDmode, 0);
 
-      emit_queue ();
       start_sequence ();
 
       /* Compute into TARGET.
@@ -5707,9 +5698,6 @@
   enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl);
   enum machine_mode target_mode = TYPE_MODE (TREE_TYPE (exp));
 
-  /* Perform postincrements before expanding builtin functions.  */
-  emit_queue ();
-
   if (DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_MD)
     return targetm.expand_builtin (exp, target, subtarget, mode, ignore);
 
Index: gcc/c-common.h
===================================================================
RCS file: /cvs/gcc/gcc/gcc/c-common.h,v
retrieving revision 1.248
diff -u -r1.248 c-common.h
--- gcc/c-common.h	30 Jun 2004 22:27:59 -0000	1.248
+++ gcc/c-common.h	9 Jul 2004 14:35:30 -0000
@@ -798,8 +798,6 @@
 extern tree build_continue_stmt (void);
 extern tree build_break_stmt (void);
 
-extern void c_expand_asm_operands (tree, tree, tree, tree, int, location_t);
-
 /* These functions must be defined by each front-end which implements
    a variant of the C language.  They are used in c-common.c.  */
 
Index: gcc/c-typeck.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/c-typeck.c,v
retrieving revision 1.336
diff -u -r1.336 c-typeck.c
--- gcc/c-typeck.c	4 Jul 2004 17:28:55 -0000	1.336
+++ gcc/c-typeck.c	9 Jul 2004 14:35:31 -0000
@@ -6188,64 +6188,6 @@
   return args;
 }
 
-/* Expand an ASM statement with operands, handling output operands
-   that are not variables or INDIRECT_REFS by transforming such
-   cases into cases that expand_asm_operands can handle.
-
-   Arguments are same as for expand_asm_operands.  */
-
-void
-c_expand_asm_operands (tree string, tree outputs, tree inputs,
-		       tree clobbers, int vol, location_t locus)
-{
-  int noutputs = list_length (outputs);
-  int i;
-  /* o[I] is the place that output number I should be written.  */
-  tree *o = alloca (noutputs * sizeof (tree));
-  tree tail;
-
-  /* Record the contents of OUTPUTS before it is modified.  */
-  for (i = 0, tail = outputs; tail; tail = TREE_CHAIN (tail), i++)
-    {
-      o[i] = TREE_VALUE (tail);
-      if (o[i] == error_mark_node)
-	return;
-    }
-
-  /* Generate the ASM_OPERANDS insn; store into the TREE_VALUEs of
-     OUTPUTS some trees for where the values were actually stored.  */
-  expand_asm_operands (string, outputs, inputs, clobbers, vol, locus);
-
-  /* Copy all the intermediate outputs into the specified outputs.  */
-  for (i = 0, tail = outputs; tail; tail = TREE_CHAIN (tail), i++)
-    {
-      if (o[i] != TREE_VALUE (tail))
-	{
-	  expand_expr (build_modify_expr (o[i], NOP_EXPR, TREE_VALUE (tail)),
-		       NULL_RTX, VOIDmode, EXPAND_NORMAL);
-	  free_temp_slots ();
-
-	  /* Restore the original value so that it's correct the next
-	     time we expand this function.  */
-	  TREE_VALUE (tail) = o[i];
-	}
-      /* Detect modification of read-only values.
-	 (Otherwise done by build_modify_expr.)  */
-      else
-	{
-	  tree type = TREE_TYPE (o[i]);
-	  if (TREE_READONLY (o[i])
-	      || TYPE_READONLY (type)
-	      || ((TREE_CODE (type) == RECORD_TYPE
-		   || TREE_CODE (type) == UNION_TYPE)
-		  && C_TYPE_FIELDS_READONLY (type)))
-	    readonly_error (o[i], "modification by `asm'");
-	}
-    }
-
-  /* Those MODIFY_EXPRs could do autoincrements.  */
-  emit_queue ();
-}
 \f
 /* Generate a goto statement to LABEL.  */
 
Index: gcc/calls.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/calls.c,v
retrieving revision 1.347
diff -u -r1.347 calls.c
--- gcc/calls.c	9 Jul 2004 03:29:26 -0000	1.347
+++ gcc/calls.c	9 Jul 2004 14:35:31 -0000
@@ -163,8 +163,6 @@
 prepare_call_address (rtx funexp, rtx static_chain_value,
 		      rtx *call_fusage, int reg_parm_seen, int sibcallp)
 {
-  funexp = protect_from_queue (funexp, 0);
-
   /* Make a valid memory address and copy constants through pseudo-regs,
      but not for a constant address if -fno-function-cse.  */
   if (GET_CODE (funexp) != SYMBOL_REF)
@@ -662,10 +660,6 @@
 					 VOIDmode, 0);
 	    preserve_temp_slots (args[i].value);
 	    pop_temp_slots ();
-
-	    /* ANSI doesn't require a sequence point here,
-	       but PCC has one, so this will avoid some problems.  */
-	    emit_queue ();
 	  }
 
 	/* If the value is a non-legitimate constant, force it into a
@@ -1258,15 +1252,8 @@
       if (TREE_ADDRESSABLE (TREE_TYPE (args[i].tree_value)))
 	abort ();
 
-      args[i].value
-	= expand_expr (args[i].tree_value, NULL_RTX, VOIDmode, 0);
-
-      /* ANSI doesn't require a sequence point here,
-	 but PCC has one, so this will avoid some problems.  */
-      emit_queue ();
-
       args[i].initial_value = args[i].value
-	= protect_from_queue (args[i].value, 0);
+	= expand_expr (args[i].tree_value, NULL_RTX, VOIDmode, 0);
 
       mode = TYPE_MODE (TREE_TYPE (args[i].tree_value));
       if (mode != args[i].mode)
@@ -1441,7 +1428,6 @@
       push_temp_slots ();
       funexp = expand_expr (addr, NULL_RTX, VOIDmode, 0);
       pop_temp_slots ();	/* FUNEXP can't be BLKmode.  */
-      emit_queue ();
     }
   return funexp;
 }
@@ -2367,10 +2353,6 @@
 
       if (pass == 0)
 	{
-	  /* Emit any queued insns now; otherwise they would end up in
-             only one of the alternates.  */
-	  emit_queue ();
-
 	  /* State variables we need to save and restore between
 	     iterations.  */
 	  save_pending_stack_adjust = pending_stack_adjust;
@@ -2792,9 +2774,6 @@
       load_register_parameters (args, num_actuals, &call_fusage, flags,
 				pass == 0, &sibcall_failure);
 
-      /* Perform postincrements before actually calling the function.  */
-      emit_queue ();
-
       /* Save a pointer to the last insn before the call, so that we can
 	 later safely search backwards to find the CALL_INSN.  */
       before_call = get_last_insn ();
@@ -3550,9 +3529,6 @@
 	  || (GET_MODE (val) != mode && GET_MODE (val) != VOIDmode))
 	abort ();
 
-      /* There's no need to call protect_from_queue, because
-	 either emit_move_insn or emit_push_insn will do that.  */
-
       /* Make sure it is a reasonable operand for a move or push insn.  */
       if (!REG_P (val) && !MEM_P (val)
 	  && ! (CONSTANT_P (val) && LEGITIMATE_CONSTANT_P (val)))
@@ -4058,7 +4034,6 @@
    for a value of mode OUTMODE,
    with NARGS different arguments, passed as alternating rtx values
    and machine_modes to convert them to.
-   The rtx values should have been passed through protect_from_queue already.
 
    FN_TYPE should be LCT_NORMAL for `normal' calls, LCT_CONST for `const'
    calls, LCT_PURE for `pure' calls, LCT_CONST_MAKE_BLOCK for `const' calls
@@ -4430,10 +4405,6 @@
      be deferred during the rest of the arguments.  */
   NO_DEFER_POP;
 
-  /* ANSI doesn't require a sequence point here,
-     but PCC has one, so this will avoid some problems.  */
-  emit_queue ();
-
   /* Free any temporary slots made in processing this argument.  Show
      that we might have taken the address of something and pushed that
      as an operand.  */
Index: gcc/dojump.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/dojump.c,v
retrieving revision 1.23
diff -u -r1.23 dojump.c
--- gcc/dojump.c	8 Jul 2004 07:45:33 -0000	1.23
+++ gcc/dojump.c	9 Jul 2004 14:35:35 -0000
@@ -167,8 +167,6 @@
   tree type;
   enum machine_mode mode;
 
-  emit_queue ();
-
   switch (code)
     {
     case ERROR_MARK:
@@ -306,7 +304,6 @@
       preserve_temp_slots (NULL_RTX);
       free_temp_slots ();
       pop_temp_slots ();
-      emit_queue ();
       do_pending_stack_adjust ();
       do_jump (TREE_OPERAND (exp, 1), if_false_label, if_true_label);
       break;
@@ -619,8 +616,6 @@
         temp = copy_to_reg (temp);
 #endif
       do_pending_stack_adjust ();
-      /* Do any postincrements in the expression that was tested.  */
-      emit_queue ();
 
       if (GET_CODE (temp) == CONST_INT
           || (GET_CODE (temp) == CONST_DOUBLE && GET_MODE (temp) == VOIDmode)
@@ -1018,9 +1013,6 @@
     }
 #endif
 
-  /* Do any postincrements in the expression that was tested.  */
-  emit_queue ();
-
   do_compare_rtx_and_jump (op0, op1, code, unsignedp, mode,
                            ((mode == BLKmode)
                             ? expr_size (TREE_OPERAND (exp, 0)) : NULL_RTX),
Index: gcc/emit-rtl.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/emit-rtl.c,v
retrieving revision 1.402
diff -u -r1.402 emit-rtl.c
--- gcc/emit-rtl.c	9 Jul 2004 03:29:27 -0000	1.402
+++ gcc/emit-rtl.c	9 Jul 2004 14:35:35 -0000
@@ -2227,7 +2227,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -2408,7 +2407,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -2525,7 +2523,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -2651,7 +2648,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -2721,7 +2717,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -5005,7 +5000,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
Index: gcc/explow.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/explow.c,v
retrieving revision 1.132
diff -u -r1.132 explow.c
--- gcc/explow.c	9 Jul 2004 03:29:31 -0000	1.132
+++ gcc/explow.c	9 Jul 2004 14:35:36 -0000
@@ -443,14 +443,6 @@
   if (! cse_not_expected && CONSTANT_P (x) && CONSTANT_ADDRESS_P (x))
     x = force_reg (Pmode, x);
 
-  /* Accept a QUEUED that refers to a REG
-     even though that isn't a valid address.
-     On attempting to put this in an insn we will call protect_from_queue
-     which will turn it into a REG, which is valid.  */
-  else if (GET_CODE (x) == QUEUED
-      && REG_P (QUEUED_VAR (x)))
-    ;
-
   /* We get better cse by rejecting indirect addressing at this stage.
      Let the combiner create indirect addresses where appropriate.
      For now, generate the code so that the subexpressions useful to share
@@ -855,7 +847,6 @@
 adjust_stack (rtx adjust)
 {
   rtx temp;
-  adjust = protect_from_queue (adjust, 0);
 
   if (adjust == const0_rtx)
     return;
@@ -885,7 +876,6 @@
 anti_adjust_stack (rtx adjust)
 {
   rtx temp;
-  adjust = protect_from_queue (adjust, 0);
 
   if (adjust == const0_rtx)
     return;
Index: gcc/expmed.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/expmed.c,v
retrieving revision 1.179
diff -u -r1.179 expmed.c
--- gcc/expmed.c	8 Jul 2004 21:52:36 -0000	1.179
+++ gcc/expmed.c	9 Jul 2004 14:35:36 -0000
@@ -327,8 +327,6 @@
       op0 = SUBREG_REG (op0);
     }
 
-  value = protect_from_queue (value, 0);
-
   /* Use vec_set patterns for inserting parts of vectors whenever
      available.  */
   if (VECTOR_MODE_P (GET_MODE (op0))
@@ -577,8 +575,6 @@
 	}
       offset = 0;
     }
-  else
-    op0 = protect_from_queue (op0, 1);
 
   /* If VALUE is a floating-point mode, access it as an integer of the
      corresponding size.  This can occur on a machine with 64 bit registers
@@ -747,9 +743,7 @@
    The field starts at position BITPOS within the byte.
     (If OP0 is a register, it may be a full word or a narrower mode,
      but BITPOS still counts within a full word,
-     which is significant on bigendian machines.)
-
-   Note that protect_from_queue has already been done on OP0 and VALUE.  */
+     which is significant on bigendian machines.)  */
 
 static void
 store_fixed_bit_field (rtx op0, unsigned HOST_WIDE_INT offset,
@@ -1352,8 +1346,6 @@
 	}
       offset = 0;
     }
-  else
-    op0 = protect_from_queue (str_rtx, 1);
 
   /* Now OFFSET is nonzero only for memory operands.  */
 
@@ -1470,8 +1462,7 @@
 	  bitsize_rtx = GEN_INT (bitsize);
 	  bitpos_rtx = GEN_INT (xbitpos);
 
-	  pat = gen_extzv (protect_from_queue (xtarget, 1),
-			   xop0, bitsize_rtx, bitpos_rtx);
+	  pat = gen_extzv (xtarget, xop0, bitsize_rtx, bitpos_rtx);
 	  if (pat)
 	    {
 	      emit_insn (pat);
@@ -1599,8 +1590,7 @@
 	  bitsize_rtx = GEN_INT (bitsize);
 	  bitpos_rtx = GEN_INT (xbitpos);
 
-	  pat = gen_extv (protect_from_queue (xtarget, 1),
-			  xop0, bitsize_rtx, bitpos_rtx);
+	  pat = gen_extv (xtarget, xop0, bitsize_rtx, bitpos_rtx);
 	  if (pat)
 	    {
 	      emit_insn (pat);
@@ -2506,10 +2496,6 @@
   int opno;
   enum machine_mode nmode;
 
-  /* op0 must be register to make mult_cost match the precomputed
-     shiftadd_cost array.  */
-  op0 = protect_from_queue (op0, 0);
-
   /* Avoid referencing memory over and over.
      For speed, but also for correctness when mem is volatile.  */
   if (MEM_P (op0))
@@ -4547,10 +4533,6 @@
   rtx last = get_last_insn ();
   rtx pattern, comparison;
 
-  /* ??? Ok to do this and then fail? */
-  op0 = protect_from_queue (op0, 0);
-  op1 = protect_from_queue (op1, 0);
-
   if (unsignedp)
     code = unsigned_condition (code);
 
@@ -4655,7 +4637,6 @@
 	 first.  */
       if (GET_MODE_SIZE (target_mode) > GET_MODE_SIZE (mode))
 	{
-	  op0 = protect_from_queue (op0, 0);
 	  op0 = convert_modes (target_mode, mode, op0, 0);
 	  mode = target_mode;
 	}
@@ -4687,13 +4668,8 @@
       insn_operand_predicate_fn pred;
 
       /* We think we may be able to do this with a scc insn.  Emit the
-	 comparison and then the scc insn.
-
-	 compare_from_rtx may call emit_queue, which would be deleted below
-	 if the scc insn fails.  So call it ourselves before setting LAST.
-	 Likewise for do_pending_stack_adjust.  */
+	 comparison and then the scc insn.  */
 
-      emit_queue ();
       do_pending_stack_adjust ();
       last = get_last_insn ();
 
@@ -4930,7 +4906,6 @@
 	tem = expand_unop (mode, ffs_optab, op0, subtarget, 1);
       else if (GET_MODE_SIZE (mode) < UNITS_PER_WORD)
 	{
-	  op0 = protect_from_queue (op0, 0);
 	  tem = convert_modes (word_mode, mode, op0, 1);
 	  mode = word_mode;
 	}
Index: gcc/expr.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/expr.c,v
retrieving revision 1.676
diff -u -r1.676 expr.c
--- gcc/expr.c	9 Jul 2004 03:29:31 -0000	1.676
+++ gcc/expr.c	9 Jul 2004 14:35:37 -0000
@@ -119,7 +119,6 @@
   int reverse;
 };
 
-static rtx enqueue_insn (rtx, rtx);
 static unsigned HOST_WIDE_INT move_by_pieces_ninsns (unsigned HOST_WIDE_INT,
 						     unsigned int);
 static void move_by_pieces_1 (rtx (*) (rtx, ...), enum machine_mode,
@@ -151,7 +150,6 @@
 static unsigned HOST_WIDE_INT highest_pow2_factor_for_target (tree, tree);
 
 static int is_aligning_offset (tree, tree);
-static rtx expand_increment (tree, int, int);
 static void expand_operands (tree, tree, rtx, rtx*, rtx*,
 			     enum expand_modifier);
 static rtx reduce_to_bit_field_precision (rtx, rtx, tree);
@@ -312,215 +310,6 @@
 {
   cfun->expr = ggc_alloc_cleared (sizeof (struct expr_status));
 }
-
-/* Small sanity check that the queue is empty at the end of a function.  */
-
-void
-finish_expr_for_function (void)
-{
-  if (pending_chain)
-    abort ();
-}
-\f
-/* Manage the queue of increment instructions to be output
-   for POSTINCREMENT_EXPR expressions, etc.  */
-
-/* Queue up to increment (or change) VAR later.  BODY says how:
-   BODY should be the same thing you would pass to emit_insn
-   to increment right away.  It will go to emit_insn later on.
-
-   The value is a QUEUED expression to be used in place of VAR
-   where you want to guarantee the pre-incrementation value of VAR.  */
-
-static rtx
-enqueue_insn (rtx var, rtx body)
-{
-  pending_chain = gen_rtx_QUEUED (GET_MODE (var), var, NULL_RTX, NULL_RTX,
-				  body, pending_chain);
-  return pending_chain;
-}
-
-/* Use protect_from_queue to convert a QUEUED expression
-   into something that you can put immediately into an instruction.
-   If the queued incrementation has not happened yet,
-   protect_from_queue returns the variable itself.
-   If the incrementation has happened, protect_from_queue returns a temp
-   that contains a copy of the old value of the variable.
-
-   Any time an rtx which might possibly be a QUEUED is to be put
-   into an instruction, it must be passed through protect_from_queue first.
-   QUEUED expressions are not meaningful in instructions.
-
-   Do not pass a value through protect_from_queue and then hold
-   on to it for a while before putting it in an instruction!
-   If the queue is flushed in between, incorrect code will result.  */
-
-rtx
-protect_from_queue (rtx x, int modify)
-{
-  RTX_CODE code = GET_CODE (x);
-
-#if 0  /* A QUEUED can hang around after the queue is forced out.  */
-  /* Shortcut for most common case.  */
-  if (pending_chain == 0)
-    return x;
-#endif
-
-  if (code != QUEUED)
-    {
-      /* A special hack for read access to (MEM (QUEUED ...)) to facilitate
-	 use of autoincrement.  Make a copy of the contents of the memory
-	 location rather than a copy of the address, but not if the value is
-	 of mode BLKmode.  Don't modify X in place since it might be
-	 shared.  */
-      if (code == MEM && GET_MODE (x) != BLKmode
-	  && GET_CODE (XEXP (x, 0)) == QUEUED && !modify)
-	{
-	  rtx y = XEXP (x, 0);
-	  rtx new = replace_equiv_address_nv (x, QUEUED_VAR (y));
-
-	  if (QUEUED_INSN (y))
-	    {
-	      rtx temp = gen_reg_rtx (GET_MODE (x));
-
-	      emit_insn_before (gen_move_insn (temp, new),
-				QUEUED_INSN (y));
-	      return temp;
-	    }
-
-	  /* Copy the address into a pseudo, so that the returned value
-	     remains correct across calls to emit_queue.  */
-	  return replace_equiv_address (new, copy_to_reg (XEXP (new, 0)));
-	}
-
-      /* Otherwise, recursively protect the subexpressions of all
-	 the kinds of rtx's that can contain a QUEUED.  */
-      if (code == MEM)
-	{
-	  rtx tem = protect_from_queue (XEXP (x, 0), 0);
-	  if (tem != XEXP (x, 0))
-	    {
-	      x = copy_rtx (x);
-	      XEXP (x, 0) = tem;
-	    }
-	}
-      else if (code == PLUS || code == MULT)
-	{
-	  rtx new0 = protect_from_queue (XEXP (x, 0), 0);
-	  rtx new1 = protect_from_queue (XEXP (x, 1), 0);
-	  if (new0 != XEXP (x, 0) || new1 != XEXP (x, 1))
-	    {
-	      x = copy_rtx (x);
-	      XEXP (x, 0) = new0;
-	      XEXP (x, 1) = new1;
-	    }
-	}
-      return x;
-    }
-  /* If the increment has not happened, use the variable itself.  Copy it
-     into a new pseudo so that the value remains correct across calls to
-     emit_queue.  */
-  if (QUEUED_INSN (x) == 0)
-    return copy_to_reg (QUEUED_VAR (x));
-  /* If the increment has happened and a pre-increment copy exists,
-     use that copy.  */
-  if (QUEUED_COPY (x) != 0)
-    return QUEUED_COPY (x);
-  /* The increment has happened but we haven't set up a pre-increment copy.
-     Set one up now, and use it.  */
-  QUEUED_COPY (x) = gen_reg_rtx (GET_MODE (QUEUED_VAR (x)));
-  emit_insn_before (gen_move_insn (QUEUED_COPY (x), QUEUED_VAR (x)),
-		    QUEUED_INSN (x));
-  return QUEUED_COPY (x);
-}
-
-/* Return nonzero if X contains a QUEUED expression:
-   if it contains anything that will be altered by a queued increment.
-   We handle only combinations of MEM, PLUS, MINUS and MULT operators
-   since memory addresses generally contain only those.  */
-
-int
-queued_subexp_p (rtx x)
-{
-  enum rtx_code code = GET_CODE (x);
-  switch (code)
-    {
-    case QUEUED:
-      return 1;
-    case MEM:
-      return queued_subexp_p (XEXP (x, 0));
-    case MULT:
-    case PLUS:
-    case MINUS:
-      return (queued_subexp_p (XEXP (x, 0))
-	      || queued_subexp_p (XEXP (x, 1)));
-    default:
-      return 0;
-    }
-}
-
-/* Retrieve a mark on the queue.  */
-  
-static rtx
-mark_queue (void)
-{
-  return pending_chain;
-}
-
-/* Perform all the pending incrementations that have been enqueued
-   after MARK was retrieved.  If MARK is null, perform all the
-   pending incrementations.  */
-
-static void
-emit_insns_enqueued_after_mark (rtx mark)
-{
-  rtx p;
-
-  /* The marked incrementation may have been emitted in the meantime
-     through a call to emit_queue.  In this case, the mark is not valid
-     anymore so do nothing.  */
-  if (mark && ! QUEUED_BODY (mark))
-    return;
-
-  while ((p = pending_chain) != mark)
-    {
-      rtx body = QUEUED_BODY (p);
-
-      switch (GET_CODE (body))
-	{
-	case INSN:
-	case JUMP_INSN:
-	case CALL_INSN:
-	case CODE_LABEL:
-	case BARRIER:
-	case NOTE:
-	  QUEUED_INSN (p) = body;
-	  emit_insn (body);
-	  break;
-
-#ifdef ENABLE_CHECKING
-	case SEQUENCE:
-	  abort ();
-	  break;
-#endif
-
-	default:
-	  QUEUED_INSN (p) = emit_insn (body);
-	  break;
-	}
-
-      QUEUED_BODY (p) = 0;
-      pending_chain = QUEUED_NEXT (p);
-    }
-}
-
-/* Perform all the pending incrementations.  */
-
-void
-emit_queue (void)
-{
-  emit_insns_enqueued_after_mark (NULL_RTX);
-}
 \f
 /* Copy data from FROM to TO, where the machine modes are not the same.
    Both modes may be integer, or both may be floating.
@@ -541,8 +330,6 @@
   enum rtx_code equiv_code = (unsignedp < 0 ? UNKNOWN
 			      : (unsignedp ? ZERO_EXTEND : SIGN_EXTEND));
 
-  to = protect_from_queue (to, 1);
-  from = protect_from_queue (from, 0);
 
   if (to_real != from_real)
     abort ();
@@ -899,10 +686,7 @@
    Both X and MODE may be floating, or both integer.
    UNSIGNEDP is nonzero if X is an unsigned value.
    This can be done by referring to a part of X in place
-   or by copying to a new temporary with conversion.
-
-   This function *must not* call protect_from_queue
-   except when putting X into an insn (in which case convert_move does it).  */
+   or by copying to a new temporary with conversion.  */
 
 rtx
 convert_to_mode (enum machine_mode mode, rtx x, int unsignedp)
@@ -918,10 +702,7 @@
    This can be done by referring to a part of X in place
    or by copying to a new temporary with conversion.
 
-   You can give VOIDmode for OLDMODE, if you are sure X has a nonvoid mode.
-
-   This function *must not* call protect_from_queue
-   except when putting X into an insn (in which case convert_move does it).  */
+   You can give VOIDmode for OLDMODE, if you are sure X has a nonvoid mode.  */
 
 rtx
 convert_modes (enum machine_mode mode, enum machine_mode oldmode, rtx x, int unsignedp)
@@ -1040,8 +821,7 @@
 }
 
 /* Generate several move instructions to copy LEN bytes from block FROM to
-   block TO.  (These are MEM rtx's with BLKmode).  The caller must pass FROM
-   and TO through protect_from_queue before calling.
+   block TO.  (These are MEM rtx's with BLKmode).
 
    If PUSH_ROUNDING is defined and TO is NULL, emit_single_push_insn is
    used to push FROM to the stack.
@@ -1342,10 +1122,6 @@
 
   align = MIN (MEM_ALIGN (x), MEM_ALIGN (y));
 
-  x = protect_from_queue (x, 1);
-  y = protect_from_queue (y, 0);
-  size = protect_from_queue (size, 0);
-
   if (!MEM_P (x))
     abort ();
   if (!MEM_P (y))
@@ -1513,24 +1289,9 @@
   enum machine_mode size_mode;
   rtx retval;
 
-  /* DST, SRC, or SIZE may have been passed through protect_from_queue.
-
-     It is unsafe to save the value generated by protect_from_queue and reuse
-     it later.  Consider what happens if emit_queue is called before the
-     return value from protect_from_queue is used.
-
-     Expansion of the CALL_EXPR below will call emit_queue before we are
-     finished emitting RTL for argument setup.  So if we are not careful we
-     could get the wrong value for an argument.
-
-     To avoid this problem we go ahead and emit code to copy the addresses of
-     DST and SRC and SIZE into new pseudos.
-
-     Note this is not strictly needed for library calls since they do not call
-     emit_queue before loading their arguments.  However, we may need to have
-     library calls call emit_queue in the future since failing to do so could
-     cause problems for targets which define SMALL_REGISTER_CLASSES and pass
-     arguments in registers.  */
+  /* Emit code to copy the addresses of DST and SRC and SIZE into new
+     pseudos.  We can then place those new pseudos into a VAR_DECL and
+     use them later.  */
 
   dst_addr = copy_to_mode_reg (Pmode, XEXP (dst, 0));
   src_addr = copy_to_mode_reg (Pmode, XEXP (src, 0));
@@ -1926,8 +1687,6 @@
 				build_int_2 (shift, 0), tmps[i], 0);
     }
 
-  emit_queue ();
-
   /* Copy the extracted pieces into the proper (probable) hard regs.  */
   for (i = start; i < XVECLEN (dst, 0); i++)
     emit_move_insn (XEXP (XVECEXP (dst, 0, i), 0), tmps[i]);
@@ -1982,7 +1741,6 @@
       tmps[i] = gen_reg_rtx (GET_MODE (reg));
       emit_move_insn (tmps[i], reg);
     }
-  emit_queue ();
 
   /* If we won't be storing directly into memory, protect the real destination
      from strange tricks we might play.  */
@@ -2076,8 +1834,6 @@
 			 mode, tmps[i], ssize);
     }
 
-  emit_queue ();
-
   /* Copy from the pseudo into the (probable) hard reg.  */
   if (orig_dst != dst)
     emit_move_insn (orig_dst, dst);
@@ -2324,7 +2080,6 @@
 
   if (! STORE_BY_PIECES_P (len, align))
     abort ();
-  to = protect_from_queue (to, 1);
   data.constfun = constfun;
   data.constfundata = constfundata;
   data.len = len;
@@ -2362,8 +2117,7 @@
 }
 
 /* Generate several move instructions to clear LEN bytes of block TO.  (A MEM
-   rtx with BLKmode).  The caller must pass TO through protect_from_queue
-   before calling. ALIGN is maximum alignment we can assume.  */
+   rtx with BLKmode).  ALIGN is maximum alignment we can assume.  */
 
 static void
 clear_by_pieces (rtx to, unsigned HOST_WIDE_INT len, unsigned int align)
@@ -2393,8 +2147,7 @@
 
 /* Subroutine of clear_by_pieces and store_by_pieces.
    Generate several move instructions to store LEN bytes of block TO.  (A MEM
-   rtx with BLKmode).  The caller must pass TO through protect_from_queue
-   before calling.  ALIGN is maximum alignment we can assume.  */
+   rtx with BLKmode).  ALIGN is maximum alignment we can assume.  */
 
 static void
 store_by_pieces_1 (struct store_by_pieces *data ATTRIBUTE_UNUSED,
@@ -2534,9 +2287,6 @@
     emit_move_insn (object, CONST0_RTX (GET_MODE (object)));
   else
     {
-      object = protect_from_queue (object, 1);
-      size = protect_from_queue (size, 0);
-
       if (size == const0_rtx)
 	;
       else if (GET_CODE (size) == CONST_INT
@@ -2617,24 +2367,8 @@
   enum machine_mode size_mode;
   rtx retval;
 
-  /* OBJECT or SIZE may have been passed through protect_from_queue.
-
-     It is unsafe to save the value generated by protect_from_queue
-     and reuse it later.  Consider what happens if emit_queue is
-     called before the return value from protect_from_queue is used.
-
-     Expansion of the CALL_EXPR below will call emit_queue before
-     we are finished emitting RTL for argument setup.  So if we are
-     not careful we could get the wrong value for an argument.
-
-     To avoid this problem we go ahead and emit code to copy OBJECT
-     and SIZE into new pseudos.
-
-     Note this is not strictly needed for library calls since they
-     do not call emit_queue before loading their arguments.  However,
-     we may need to have library calls call emit_queue in the future
-     since failing to do so could cause problems for targets which
-     define SMALL_REGISTER_CLASSES and pass arguments in registers.  */
+  /* Emit code to copy OBJECT and SIZE into new pseudos.  We can then
+     place those new pseudos into a VAR_DECL and use them later.  */
 
   object = copy_to_mode_reg (Pmode, XEXP (object, 0));
 
@@ -2738,9 +2472,6 @@
   rtx y_cst = NULL_RTX;
   rtx last_insn, set;
 
-  x = protect_from_queue (x, 1);
-  y = protect_from_queue (y, 0);
-
   if (mode == BLKmode || (GET_MODE (y) != mode && GET_MODE (y) != VOIDmode))
     abort ();
 
@@ -3405,7 +3136,7 @@
     if (where_pad != none)
       where_pad = (where_pad == downward ? upward : downward);
 
-  xinner = x = protect_from_queue (x, 0);
+  xinner = x;
 
   if (mode == BLKmode)
     {
@@ -3849,8 +3580,6 @@
 		  && (bitsize != 1 || TREE_CODE (op1) != INTEGER_CST))
 		break;
 	      value = expand_expr (op1, NULL_RTX, VOIDmode, 0);
-	      value = protect_from_queue (value, 0);
-	      to_rtx = protect_from_queue (to_rtx, 1);
 	      binop = TREE_CODE (src) == PLUS_EXPR ? add_optab : sub_optab;
 	      if (bitsize == 1
 		  && count + bitsize != GET_MODE_BITSIZE (GET_MODE (to_rtx)))
@@ -4032,7 +3761,6 @@
 {
   rtx temp;
   rtx alt_rtl = NULL_RTX;
-  rtx mark = mark_queue ();
   int dont_return_target = 0;
   int dont_store_target = 0;
 
@@ -4052,7 +3780,6 @@
 	 part.  */
       expand_expr (TREE_OPERAND (exp, 0), const0_rtx, VOIDmode,
 		   want_value & 2 ? EXPAND_STACK_PARM : EXPAND_NORMAL);
-      emit_queue ();
       return store_expr (TREE_OPERAND (exp, 1), target, want_value);
     }
   else if (TREE_CODE (exp) == COND_EXPR && GET_MODE (target) == BLKmode)
@@ -4064,47 +3791,19 @@
 
       rtx lab1 = gen_label_rtx (), lab2 = gen_label_rtx ();
 
-      emit_queue ();
-      target = protect_from_queue (target, 1);
-
       do_pending_stack_adjust ();
       NO_DEFER_POP;
       jumpifnot (TREE_OPERAND (exp, 0), lab1);
       store_expr (TREE_OPERAND (exp, 1), target, want_value & 2);
-      emit_queue ();
       emit_jump_insn (gen_jump (lab2));
       emit_barrier ();
       emit_label (lab1);
       store_expr (TREE_OPERAND (exp, 2), target, want_value & 2);
-      emit_queue ();
       emit_label (lab2);
       OK_DEFER_POP;
 
       return want_value & 1 ? target : NULL_RTX;
     }
-  else if (queued_subexp_p (target))
-    /* If target contains a postincrement, let's not risk
-       using it as the place to generate the rhs.  */
-    {
-      if (GET_MODE (target) != BLKmode && GET_MODE (target) != VOIDmode)
-	{
-	  /* Expand EXP into a new pseudo.  */
-	  temp = gen_reg_rtx (GET_MODE (target));
-	  temp = expand_expr (exp, temp, GET_MODE (target),
-			      (want_value & 2
-			       ? EXPAND_STACK_PARM : EXPAND_NORMAL));
-	}
-      else
-	temp = expand_expr (exp, NULL_RTX, GET_MODE (target),
-			    (want_value & 2
-			     ? EXPAND_STACK_PARM : EXPAND_NORMAL));
-
-      /* If target is volatile, ANSI requires accessing the value
-	 *from* the target, if it is accessed.  So make that happen.
-	 In no case return the target itself.  */
-      if (! MEM_VOLATILE_P (target) && (want_value & 1) != 0)
-	dont_return_target = 1;
-    }
   else if ((want_value & 1) != 0
 	   && MEM_P (target)
 	   && ! MEM_VOLATILE_P (target)
@@ -4272,9 +3971,6 @@
 	 bit-initialized.  */
       && expr_size (exp) != const0_rtx)
     {
-      emit_insns_enqueued_after_mark (mark);
-      target = protect_from_queue (target, 1);
-      temp = protect_from_queue (temp, 0);
       if (GET_MODE (temp) != GET_MODE (target)
 	  && GET_MODE (temp) != VOIDmode)
 	{
@@ -5030,7 +4726,6 @@
 
 		  /* Build the head of the loop.  */
 		  do_pending_stack_adjust ();
-		  emit_queue ();
 		  emit_label (loop_start);
 
 		  /* Assign value to element index.  */
@@ -5059,9 +4754,10 @@
 
 		  /* Update the loop counter, and jump to the head of
 		     the loop.  */
-		  expand_increment (build (PREINCREMENT_EXPR,
-					   TREE_TYPE (index),
-					   index, integer_one_node), 0, 0);
+		  expand_assignment (index,
+			             build2 (PLUS_EXPR, TREE_TYPE (index),
+					     index, integer_one_node), 0);
+
 		  emit_jump (loop_start);
 
 		  /* Build the end of the loop.  */
@@ -8138,7 +7834,6 @@
 
     case COMPOUND_EXPR:
       expand_expr (TREE_OPERAND (exp, 0), const0_rtx, VOIDmode, 0);
-      emit_queue ();
       return expand_expr_real (TREE_OPERAND (exp, 1),
 			       (ignore ? const0_rtx : target),
 			       VOIDmode, modifier, alt_rtl);
@@ -8477,7 +8172,6 @@
 	    else
 	      expand_expr (TREE_OPERAND (exp, 1),
 			   ignore ? const0_rtx : NULL_RTX, VOIDmode, 0);
-	    emit_queue ();
 	    emit_jump_insn (gen_jump (op1));
 	    emit_barrier ();
 	    emit_label (op0);
@@ -8490,7 +8184,6 @@
 			   ignore ? const0_rtx : NULL_RTX, VOIDmode, 0);
 	  }
 
-	emit_queue ();
 	emit_label (op1);
 	OK_DEFER_POP;
 
@@ -8565,15 +8258,6 @@
 	expand_return (TREE_OPERAND (exp, 0));
       return const0_rtx;
 
-    case PREINCREMENT_EXPR:
-    case PREDECREMENT_EXPR:
-      return REDUCE_BIT_FIELD (expand_increment (exp, 0, ignore));
-
-    case POSTINCREMENT_EXPR:
-    case POSTDECREMENT_EXPR:
-      /* Faster to treat as pre-increment if result is not used.  */
-      return REDUCE_BIT_FIELD (expand_increment (exp, ! ignore, ignore));
-
     case ADDR_EXPR:
       if (modifier == EXPAND_STACK_PARM)
 	target = 0;
@@ -8604,10 +8288,6 @@
 	  if (ignore)
 	    return op0;
 
-	  /* Pass 1 for MODIFY, so that protect_from_queue doesn't get
-	     clever and returns a REG when given a MEM.  */
-	  op0 = protect_from_queue (op0, 1);
-
 	  /* We would like the object in memory.  If it is a constant, we can
 	     have it be statically allocated into memory.  For a non-constant,
 	     we need to allocate some memory and store the value into it.  */
@@ -8821,6 +8501,10 @@
     case FILTER_EXPR:
       return get_exception_filter (cfun);
 
+    case PREINCREMENT_EXPR:
+    case PREDECREMENT_EXPR:
+    case POSTINCREMENT_EXPR:
+    case POSTDECREMENT_EXPR:
     case FDESC_EXPR:
       /* Function descriptors are not valid except for as
 	 initialization constants, and should not be expanded.  */
@@ -9056,209 +8740,6 @@
   return 0;
 }
 \f
-/* Expand code for a post- or pre- increment or decrement
-   and return the RTX for the result.
-   POST is 1 for postinc/decrements and 0 for preinc/decrements.  */
-
-static rtx
-expand_increment (tree exp, int post, int ignore)
-{
-  rtx op0, op1;
-  rtx temp, value;
-  tree incremented = TREE_OPERAND (exp, 0);
-  optab this_optab = add_optab;
-  int icode;
-  enum machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
-  int op0_is_copy = 0;
-  int single_insn = 0;
-  /* 1 means we can't store into OP0 directly,
-     because it is a subreg narrower than a word,
-     and we don't dare clobber the rest of the word.  */
-  int bad_subreg = 0;
-
-  /* Stabilize any component ref that might need to be
-     evaluated more than once below.  */
-  if (!post
-      || TREE_CODE (incremented) == BIT_FIELD_REF
-      || (TREE_CODE (incremented) == COMPONENT_REF
-	  && (TREE_CODE (TREE_OPERAND (incremented, 0)) != INDIRECT_REF
-	      || DECL_BIT_FIELD (TREE_OPERAND (incremented, 1)))))
-    incremented = stabilize_reference (incremented);
-  /* Nested *INCREMENT_EXPRs can happen in C++.  We must force innermost
-     ones into save exprs so that they don't accidentally get evaluated
-     more than once by the code below.  */
-  if (TREE_CODE (incremented) == PREINCREMENT_EXPR
-      || TREE_CODE (incremented) == PREDECREMENT_EXPR)
-    incremented = save_expr (incremented);
-
-  /* Compute the operands as RTX.
-     Note whether OP0 is the actual lvalue or a copy of it:
-     I believe it is a copy iff it is a register or subreg
-     and insns were generated in computing it.  */
-
-  temp = get_last_insn ();
-  op0 = expand_expr (incremented, NULL_RTX, VOIDmode, 0);
-
-  /* If OP0 is a SUBREG made for a promoted variable, we cannot increment
-     in place but instead must do sign- or zero-extension during assignment,
-     so we copy it into a new register and let the code below use it as
-     a copy.
-
-     Note that we can safely modify this SUBREG since it is know not to be
-     shared (it was made by the expand_expr call above).  */
-
-  if (GET_CODE (op0) == SUBREG && SUBREG_PROMOTED_VAR_P (op0))
-    {
-      if (post)
-	SUBREG_REG (op0) = copy_to_reg (SUBREG_REG (op0));
-      else
-	bad_subreg = 1;
-    }
-  else if (GET_CODE (op0) == SUBREG
-	   && GET_MODE_BITSIZE (GET_MODE (op0)) < BITS_PER_WORD)
-    {
-      /* We cannot increment this SUBREG in place.  If we are
-	 post-incrementing, get a copy of the old value.  Otherwise,
-	 just mark that we cannot increment in place.  */
-      if (post)
-	op0 = copy_to_reg (op0);
-      else
-	bad_subreg = 1;
-    }
-
-  op0_is_copy = ((GET_CODE (op0) == SUBREG || REG_P (op0))
-		 && temp != get_last_insn ());
-  op1 = expand_expr (TREE_OPERAND (exp, 1), NULL_RTX, VOIDmode, 0);
-
-  /* Decide whether incrementing or decrementing.  */
-  if (TREE_CODE (exp) == POSTDECREMENT_EXPR
-      || TREE_CODE (exp) == PREDECREMENT_EXPR)
-    this_optab = sub_optab;
-
-  /* Convert decrement by a constant into a negative increment.  */
-  if (this_optab == sub_optab
-      && GET_CODE (op1) == CONST_INT)
-    {
-      op1 = GEN_INT (-INTVAL (op1));
-      this_optab = add_optab;
-    }
-
-  if (TYPE_TRAP_SIGNED (TREE_TYPE (exp)))
-    this_optab = this_optab == add_optab ? addv_optab : subv_optab;
-
-  /* For a preincrement, see if we can do this with a single instruction.  */
-  if (!post)
-    {
-      icode = (int) this_optab->handlers[(int) mode].insn_code;
-      if (icode != (int) CODE_FOR_nothing
-	  /* Make sure that OP0 is valid for operands 0 and 1
-	     of the insn we want to queue.  */
-	  && (*insn_data[icode].operand[0].predicate) (op0, mode)
-	  && (*insn_data[icode].operand[1].predicate) (op0, mode)
-	  && (*insn_data[icode].operand[2].predicate) (op1, mode))
-	single_insn = 1;
-    }
-
-  /* If OP0 is not the actual lvalue, but rather a copy in a register,
-     then we cannot just increment OP0.  We must therefore contrive to
-     increment the original value.  Then, for postincrement, we can return
-     OP0 since it is a copy of the old value.  For preincrement, expand here
-     unless we can do it with a single insn.
-
-     Likewise if storing directly into OP0 would clobber high bits
-     we need to preserve (bad_subreg).  */
-  if (op0_is_copy || (!post && !single_insn) || bad_subreg)
-    {
-      /* This is the easiest way to increment the value wherever it is.
-	 Problems with multiple evaluation of INCREMENTED are prevented
-	 because either (1) it is a component_ref or preincrement,
-	 in which case it was stabilized above, or (2) it is an array_ref
-	 with constant index in an array in a register, which is
-	 safe to reevaluate.  */
-      tree newexp = build (((TREE_CODE (exp) == POSTDECREMENT_EXPR
-			     || TREE_CODE (exp) == PREDECREMENT_EXPR)
-			    ? MINUS_EXPR : PLUS_EXPR),
-			   TREE_TYPE (exp),
-			   incremented,
-			   TREE_OPERAND (exp, 1));
-
-      while (TREE_CODE (incremented) == NOP_EXPR
-	     || TREE_CODE (incremented) == CONVERT_EXPR)
-	{
-	  newexp = convert (TREE_TYPE (incremented), newexp);
-	  incremented = TREE_OPERAND (incremented, 0);
-	}
-
-      temp = expand_assignment (incremented, newexp, ! post && ! ignore);
-      return post ? op0 : temp;
-    }
-
-  if (post)
-    {
-      /* We have a true reference to the value in OP0.
-	 If there is an insn to add or subtract in this mode, queue it.
-	 Queuing the increment insn avoids the register shuffling
-	 that often results if we must increment now and first save
-	 the old value for subsequent use.  */
-
-#if 0  /* Turned off to avoid making extra insn for indexed memref.  */
-      op0 = stabilize (op0);
-#endif
-
-      icode = (int) this_optab->handlers[(int) mode].insn_code;
-      if (icode != (int) CODE_FOR_nothing
-	  /* Make sure that OP0 is valid for operands 0 and 1
-	     of the insn we want to queue.  */
-	  && (*insn_data[icode].operand[0].predicate) (op0, mode)
-	  && (*insn_data[icode].operand[1].predicate) (op0, mode))
-	{
-	  if (! (*insn_data[icode].operand[2].predicate) (op1, mode))
-	    op1 = force_reg (mode, op1);
-
-	  return enqueue_insn (op0, GEN_FCN (icode) (op0, op0, op1));
-	}
-      if (icode != (int) CODE_FOR_nothing && MEM_P (op0))
-	{
-	  rtx addr = (general_operand (XEXP (op0, 0), mode)
-		      ? force_reg (Pmode, XEXP (op0, 0))
-		      : copy_to_reg (XEXP (op0, 0)));
-	  rtx temp, result;
-
-	  op0 = replace_equiv_address (op0, addr);
-	  temp = force_reg (GET_MODE (op0), op0);
-	  if (! (*insn_data[icode].operand[2].predicate) (op1, mode))
-	    op1 = force_reg (mode, op1);
-
-	  /* The increment queue is LIFO, thus we have to `queue'
-	     the instructions in reverse order.  */
-	  enqueue_insn (op0, gen_move_insn (op0, temp));
-	  result = enqueue_insn (temp, GEN_FCN (icode) (temp, temp, op1));
-	  return result;
-	}
-    }
-
-  /* Preincrement, or we can't increment with one simple insn.  */
-  if (post)
-    /* Save a copy of the value before inc or dec, to return it later.  */
-    temp = value = copy_to_reg (op0);
-  else
-    /* Arrange to return the incremented value.  */
-    /* Copy the rtx because expand_binop will protect from the queue,
-       and the results of that would be invalid for us to return
-       if our caller does emit_queue before using our result.  */
-    temp = copy_rtx (value = op0);
-
-  /* Increment however we can.  */
-  op1 = expand_binop (mode, this_optab, value, op1, op0,
-		      TYPE_UNSIGNED (TREE_TYPE (exp)), OPTAB_LIB_WIDEN);
-
-  /* Make sure the value is stored into OP0.  */
-  if (op1 != op0)
-    emit_move_insn (op0, op1);
-
-  return temp;
-}
-\f
 /* Generate code to calculate EXP using a store-flag instruction
    and return an rtx for the result.  EXP is either a comparison
    or a TRUTH_NOT_EXPR whose operand is a comparison.
@@ -9466,9 +8947,7 @@
      because, if the emit_store_flag does anything it will succeed and
      OP0 and OP1 will not be used subsequently.  */
 
-  result = emit_store_flag (target, code,
-			    queued_subexp_p (op0) ? copy_rtx (op0) : op0,
-			    queued_subexp_p (op1) ? copy_rtx (op1) : op1,
+  result = emit_store_flag (target, code, op0, op1,
 			    operand_mode, unsignedp, 1);
 
   if (result)
@@ -9573,8 +9052,7 @@
 
       index = expand_expr (index_expr, NULL_RTX, VOIDmode, 0);
     }
-  emit_queue ();
-  index = protect_from_queue (index, 0);
+
   do_pending_stack_adjust ();
 
   op_mode = insn_data[(int) CODE_FOR_casesi].operand[0].mode;
@@ -9700,8 +9178,6 @@
 			    convert (index_type, index_expr),
 			    convert (index_type, minval)));
   index = expand_expr (index_expr, NULL_RTX, VOIDmode, 0);
-  emit_queue ();
-  index = protect_from_queue (index, 0);
   do_pending_stack_adjust ();
 
   do_tablejump (index, TYPE_MODE (index_type),
Index: gcc/expr.h
===================================================================
RCS file: /cvs/gcc/gcc/gcc/expr.h,v
retrieving revision 1.162
diff -u -r1.162 expr.h
--- gcc/expr.h	8 Jul 2004 17:40:15 -0000	1.162
+++ gcc/expr.h	9 Jul 2004 14:35:37 -0000
@@ -39,25 +39,6 @@
 #define BRANCH_COST 1
 #endif
 
-/* Macros to access the slots of a QUEUED rtx.
-   Here rather than in rtl.h because only the expansion pass
-   should ever encounter a QUEUED.  */
-
-/* The variable for which an increment is queued.  */
-#define QUEUED_VAR(P) XEXP (P, 0)
-/* If the increment has been emitted, this is the insn
-   that does the increment.  It is zero before the increment is emitted.
-   If more than one insn is emitted, this is the first insn.  */
-#define QUEUED_INSN(P) XEXP (P, 1)
-/* If a pre-increment copy has been generated, this is the copy
-   (it is a temporary reg).  Zero if no copy made yet.  */
-#define QUEUED_COPY(P) XEXP (P, 2)
-/* This is the body to use for the insn to do the increment.
-   It is used to emit the increment.  */
-#define QUEUED_BODY(P) XEXP (P, 3)
-/* Next QUEUED in the queue.  */
-#define QUEUED_NEXT(P) XEXP (P, 4)
-
 /* This is the 4th arg to `expand_expr'.
    EXPAND_STACK_PARM means we are possibly expanding a call param onto
    the stack.  Choosing a value of 2 isn't special;  It just allows
@@ -314,8 +295,7 @@
 
 /* Create but don't emit one rtl instruction to perform certain operations.
    Modes must match; operands must meet the operation's predicates.
-   Likewise for subtraction and for just copying.
-   These do not call protect_from_queue; caller must do so.  */
+   Likewise for subtraction and for just copying.  */
 extern rtx gen_add2_insn (rtx, rtx);
 extern rtx gen_add3_insn (rtx, rtx, rtx);
 extern rtx gen_sub2_insn (rtx, rtx);
@@ -399,19 +379,6 @@
 /* This is run at the start of compiling a function.  */
 extern void init_expr (void);
 
-/* This is run at the end of compiling a function.  */
-extern void finish_expr_for_function (void);
-
-/* Use protect_from_queue to convert a QUEUED expression
-   into something that you can put immediately into an instruction.  */
-extern rtx protect_from_queue (rtx, int);
-
-/* Perform all the pending incrementations.  */
-extern void emit_queue (void);
-
-/* Tell if something has a queued subexpression.  */
-extern int queued_subexp_p (rtx);
-
 /* Emit some rtl insns to move data between rtx's, converting machine modes.
    Both modes must be floating or both fixed.  */
 extern void convert_move (rtx, rtx, int);
Index: gcc/function.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/function.c,v
retrieving revision 1.552
diff -u -r1.552 function.c
--- gcc/function.c	9 Jul 2004 03:29:32 -0000	1.552
+++ gcc/function.c	9 Jul 2004 14:35:38 -0000
@@ -4060,12 +4060,7 @@
 
   /* Evaluate now the sizes of any types declared among the arguments.  */
   for (tem = pending_sizes; tem; tem = TREE_CHAIN (tem))
-    {
-      expand_expr (TREE_VALUE (tem), const0_rtx, VOIDmode, 0);
-      /* Flush the queue in case this parameter declaration has
-	 side-effects.  */
-      emit_queue ();
-    }
+    expand_expr (TREE_VALUE (tem), const0_rtx, VOIDmode, 0);
 }
 
 /* Start the RTL for a new function, and set variables used for
@@ -4324,8 +4319,6 @@
 {
   rtx clobber_after;
 
-  finish_expr_for_function ();
-
   /* If arg_pointer_save_area was referenced only from a nested
      function, we will not have initialized it yet.  Do that now.  */
   if (arg_pointer_save_area && ! cfun->arg_pointer_save_area_init)
Index: gcc/function.h
===================================================================
RCS file: /cvs/gcc/gcc/gcc/function.h,v
retrieving revision 1.127
diff -u -r1.127 function.h
--- gcc/function.h	4 Jul 2004 08:06:54 -0000	1.127
+++ gcc/function.h	9 Jul 2004 14:35:38 -0000
@@ -147,9 +147,6 @@
 
   /* List of labels that must never be deleted.  */
   rtx x_forced_labels;
-
-  /* Postincrements that still need to be expanded.  */
-  rtx x_pending_chain;
 };
 
 #define pending_stack_adjust (cfun->expr->x_pending_stack_adjust)
@@ -157,7 +154,6 @@
 #define saveregs_value (cfun->expr->x_saveregs_value)
 #define apply_args_value (cfun->expr->x_apply_args_value)
 #define forced_labels (cfun->expr->x_forced_labels)
-#define pending_chain (cfun->expr->x_pending_chain)
 #define stack_pointer_delta (cfun->expr->x_stack_pointer_delta)
 
 /* This structure can save all the important global and static variables
Index: gcc/genattrtab.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/genattrtab.c,v
retrieving revision 1.147
diff -u -r1.147 genattrtab.c
--- gcc/genattrtab.c	8 Jul 2004 03:40:29 -0000	1.147
+++ gcc/genattrtab.c	9 Jul 2004 14:35:38 -0000
@@ -838,7 +838,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -2218,7 +2217,6 @@
       return attr_rtx (CONST_STRING, attr_printf (MAX_DIGITS, "%d", j));
 
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
@@ -4174,7 +4172,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
Index: gcc/local-alloc.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/local-alloc.c,v
retrieving revision 1.134
diff -u -r1.134 local-alloc.c
--- gcc/local-alloc.c	9 Jul 2004 03:29:33 -0000	1.134
+++ gcc/local-alloc.c	9 Jul 2004 14:35:39 -0000
@@ -520,9 +520,6 @@
     case MEM:
       return ! RTX_UNCHANGING_P (x) || equiv_init_varies_p (XEXP (x, 0));
 
-    case QUEUED:
-      return 1;
-
     case CONST:
     case CONST_INT:
     case CONST_DOUBLE:
Index: gcc/optabs.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/optabs.c,v
retrieving revision 1.228
diff -u -r1.228 optabs.c
--- gcc/optabs.c	9 Jul 2004 03:29:34 -0000	1.228
+++ gcc/optabs.c	9 Jul 2004 14:35:39 -0000
@@ -675,11 +675,6 @@
 
   class = GET_MODE_CLASS (mode);
 
-  op0 = protect_from_queue (op0, 0);
-  op1 = protect_from_queue (op1, 0);
-  if (target)
-    target = protect_from_queue (target, 1);
-
   if (flag_force_mem)
     {
       /* Load duplicate non-volatile operands once.  */
@@ -2170,20 +2165,12 @@
 
   class = GET_MODE_CLASS (mode);
 
-  op0 = protect_from_queue (op0, 0);
-
   if (flag_force_mem)
-    {
-      op0 = force_not_mem (op0);
-    }
+    op0 = force_not_mem (op0);
 
-  if (targ0)
-    targ0 = protect_from_queue (targ0, 1);
-  else
+  if (!targ0)
     targ0 = gen_reg_rtx (mode);
-  if (targ1)
-    targ1 = protect_from_queue (targ1, 1);
-  else
+  if (!targ1)
     targ1 = gen_reg_rtx (mode);
 
   /* Record where to go back to if we fail.  */
@@ -2274,9 +2261,6 @@
 
   class = GET_MODE_CLASS (mode);
 
-  op0 = protect_from_queue (op0, 0);
-  op1 = protect_from_queue (op1, 0);
-
   if (flag_force_mem)
     {
       op0 = force_not_mem (op0);
@@ -2293,13 +2277,9 @@
       && rtx_cost (op1, binoptab->code) > COSTS_N_INSNS (1))
     op1 = force_reg (mode, op1);
 
-  if (targ0)
-    targ0 = protect_from_queue (targ0, 1);
-  else
+  if (!targ0)
     targ0 = gen_reg_rtx (mode);
-  if (targ1)
-    targ1 = protect_from_queue (targ1, 1);
-  else
+  if (!targ1)
     targ1 = gen_reg_rtx (mode);
 
   /* Record where to go back to if we fail.  */
@@ -2502,15 +2482,8 @@
 
   class = GET_MODE_CLASS (mode);
 
-  op0 = protect_from_queue (op0, 0);
-
   if (flag_force_mem)
-    {
-      op0 = force_not_mem (op0);
-    }
-
-  if (target)
-    target = protect_from_queue (target, 1);
+    op0 = force_not_mem (op0);
 
   if (unoptab->handlers[(int) mode].insn_code != CODE_FOR_nothing)
     {
@@ -3039,18 +3012,11 @@
   if (submode == BLKmode)
     abort ();
 
-  op0 = protect_from_queue (op0, 0);
-
   if (flag_force_mem)
-    {
-      op0 = force_not_mem (op0);
-    }
+    op0 = force_not_mem (op0);
 
   last = get_last_insn ();
 
-  if (target)
-    target = protect_from_queue (target, 1);
-
   this_abs_optab = ! unsignedp && flag_trapv
                    && (GET_MODE_CLASS(mode) == MODE_INT)
                    ? absv_optab : abs_optab;
@@ -3225,9 +3191,7 @@
   enum machine_mode mode0 = insn_data[icode].operand[1].mode;
   rtx pat;
 
-  temp = target = protect_from_queue (target, 1);
-
-  op0 = protect_from_queue (op0, 0);
+  temp = target;
 
   /* Sign and zero extension from memory is often done specially on
      RISC machines, so forcing into a register here can pessimize
@@ -3709,11 +3673,6 @@
       if (size == 0)
 	abort ();
 
-      emit_queue ();
-      x = protect_from_queue (x, 0);
-      y = protect_from_queue (y, 0);
-      size = protect_from_queue (size, 0);
-
       /* Try to use a memory block compare insn - either cmpstr
 	 or cmpmem will do.  */
       for (cmp_mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
@@ -3818,8 +3777,6 @@
 prepare_operand (int icode, rtx x, int opnum, enum machine_mode mode,
 		 enum machine_mode wider_mode, int unsignedp)
 {
-  x = protect_from_queue (x, 0);
-
   if (mode != wider_mode)
     x = convert_modes (wider_mode, mode, x, unsignedp);
 
@@ -3945,7 +3902,6 @@
     op0 = force_reg (mode, op0);
 #endif
 
-  emit_queue ();
   if (unsignedp)
     comparison = unsigned_condition (comparison);
 
@@ -3972,8 +3928,8 @@
 {
   enum rtx_code comparison = *pcomparison;
   enum rtx_code swapped = swap_condition (comparison);
-  rtx x = protect_from_queue (*px, 0);
-  rtx y = protect_from_queue (*py, 0);
+  rtx x = *px;
+  rtx y = *py;
   enum machine_mode orig_mode = GET_MODE (x);
   enum machine_mode mode;
   rtx value, target, insns, equiv;
@@ -4164,18 +4120,11 @@
       op3 = force_not_mem (op3);
     }
 
-  if (target)
-    target = protect_from_queue (target, 1);
-  else
+  if (!target)
     target = gen_reg_rtx (mode);
 
   subtarget = target;
 
-  emit_queue ();
-
-  op2 = protect_from_queue (op2, 0);
-  op3 = protect_from_queue (op3, 0);
-
   /* If the insn doesn't accept these operands, put them in pseudos.  */
 
   if (! (*insn_data[icode].operand[0].predicate)
@@ -4305,23 +4254,16 @@
       op3 = force_not_mem (op3);
     }
 
-  if (target)
-    target = protect_from_queue (target, 1);
-  else
+  if (!target)
     target = gen_reg_rtx (mode);
 
-  subtarget = target;
-
-  emit_queue ();
-
-  op2 = protect_from_queue (op2, 0);
-  op3 = protect_from_queue (op3, 0);
-
   /* If the insn doesn't accept these operands, put them in pseudos.  */
 
   if (! (*insn_data[icode].operand[0].predicate)
-      (subtarget, insn_data[icode].operand[0].mode))
+      (target, insn_data[icode].operand[0].mode))
     subtarget = gen_reg_rtx (insn_data[icode].operand[0].mode);
+  else
+    subtarget = target;
 
   if (! (*insn_data[icode].operand[2].predicate)
       (op2, insn_data[icode].operand[2].mode))
@@ -4360,11 +4302,7 @@
 \f
 /* These functions attempt to generate an insn body, rather than
    emitting the insn, but if the gen function already emits them, we
-   make no attempt to turn them back into naked patterns.
-
-   They do not protect from queued increments,
-   because they may be used 1) in protect_from_queue itself
-   and 2) in other passes where there is no queue.  */
+   make no attempt to turn them back into naked patterns.  */
 
 /* Generate and return an insn body to add Y to X.  */
 
@@ -4621,9 +4559,6 @@
 
 	if (icode != CODE_FOR_nothing)
 	  {
-	    to = protect_from_queue (to, 1);
-	    from = protect_from_queue (from, 0);
-
 	    if (imode != GET_MODE (from))
 	      from = convert_to_mode (imode, from, unsignedp);
 
@@ -4647,11 +4582,6 @@
       rtx temp;
       REAL_VALUE_TYPE offset;
 
-      emit_queue ();
-
-      to = protect_from_queue (to, 1);
-      from = protect_from_queue (from, 0);
-
       if (flag_force_mem)
 	from = force_not_mem (from);
 
@@ -4759,9 +4689,6 @@
       rtx value;
       convert_optab tab = unsignedp ? ufloat_optab : sfloat_optab;
 
-      to = protect_from_queue (to, 1);
-      from = protect_from_queue (from, 0);
-
       if (GET_MODE_SIZE (GET_MODE (from)) < GET_MODE_SIZE (SImode))
 	from = convert_to_mode (SImode, from, unsignedp);
 
@@ -4827,9 +4754,6 @@
 
 	if (icode != CODE_FOR_nothing)
 	  {
-	    to = protect_from_queue (to, 1);
-	    from = protect_from_queue (from, 0);
-
 	    if (fmode != GET_MODE (from))
 	      from = convert_to_mode (fmode, from, 0);
 
@@ -4889,10 +4813,6 @@
 	  lab1 = gen_label_rtx ();
 	  lab2 = gen_label_rtx ();
 
-	  emit_queue ();
-	  to = protect_from_queue (to, 1);
-	  from = protect_from_queue (from, 0);
-
 	  if (flag_force_mem)
 	    from = force_not_mem (from);
 
@@ -4963,9 +4883,6 @@
       if (!libfunc)
 	abort ();
 
-      to = protect_from_queue (to, 1);
-      from = protect_from_queue (from, 0);
-
       if (flag_force_mem)
 	from = force_not_mem (from);
 
Index: gcc/rtl.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/rtl.c,v
retrieving revision 1.138
diff -u -r1.138 rtl.c
--- gcc/rtl.c	4 Jul 2004 08:06:54 -0000	1.138
+++ gcc/rtl.c	9 Jul 2004 14:35:39 -0000
@@ -214,7 +214,6 @@
   switch (code)
     {
     case REG:
-    case QUEUED:
     case CONST_INT:
     case CONST_DOUBLE:
     case CONST_VECTOR:
Index: gcc/rtl.def
===================================================================
RCS file: /cvs/gcc/gcc/gcc/rtl.def,v
retrieving revision 1.87
diff -u -r1.87 rtl.def
--- gcc/rtl.def	4 Jul 2004 08:06:54 -0000	1.87
+++ gcc/rtl.def	9 Jul 2004 14:35:39 -0000
@@ -906,24 +906,6 @@
    pretend to be looking at the entire value and comparing it.  */
 DEF_RTL_EXPR(CC0, "cc0", "", RTX_OBJ)
 
-/* =====================================================================
-   A QUEUED expression really points to a member of the queue of instructions
-   to be output later for postincrement/postdecrement.
-   QUEUED expressions never become part of instructions.
-   When a QUEUED expression would be put into an instruction,
-   instead either the incremented variable or a copy of its previous
-   value is used.
-   
-   Operands are:
-   0. the variable to be incremented (a REG rtx).
-   1. the incrementing instruction, or 0 if it hasn't been output yet.
-   2. A REG rtx for a copy of the old value of the variable, or 0 if none yet.
-   3. the body to use for the incrementing instruction
-   4. the next QUEUED expression in the queue.
-   ====================================================================== */
-
-DEF_RTL_EXPR(QUEUED, "queued", "eeeee", RTX_EXTRA)
-
 /* ----------------------------------------------------------------------
    Expressions for operators in an rtl pattern
    ---------------------------------------------------------------------- */
Index: gcc/rtlanal.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/rtlanal.c,v
retrieving revision 1.194
diff -u -r1.194 rtlanal.c
--- gcc/rtlanal.c	9 Jul 2004 03:29:35 -0000	1.194
+++ gcc/rtlanal.c	9 Jul 2004 14:35:40 -0000
@@ -83,9 +83,6 @@
     case MEM:
       return ! RTX_UNCHANGING_P (x) || rtx_unstable_p (XEXP (x, 0));
 
-    case QUEUED:
-      return 1;
-
     case CONST:
     case CONST_INT:
     case CONST_DOUBLE:
@@ -161,9 +158,6 @@
     case MEM:
       return ! RTX_UNCHANGING_P (x) || rtx_varies_p (XEXP (x, 0), for_alias);
 
-    case QUEUED:
-      return 1;
-
     case CONST:
     case CONST_INT:
     case CONST_DOUBLE:
Index: gcc/simplify-rtx.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/simplify-rtx.c,v
retrieving revision 1.197
diff -u -r1.197 simplify-rtx.c
--- gcc/simplify-rtx.c	1 Jul 2004 13:09:39 -0000	1.197
+++ gcc/simplify-rtx.c	9 Jul 2004 14:35:40 -0000
@@ -3802,9 +3802,6 @@
       || byte >= GET_MODE_SIZE (innermode))
     abort ();
 
-  if (GET_CODE (op) == QUEUED)
-    return NULL_RTX;
-
   new = simplify_subreg (outermode, op, innermode, byte);
   if (new)
     return new;
Index: gcc/stmt.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/stmt.c,v
retrieving revision 1.373
diff -u -r1.373 stmt.c
--- gcc/stmt.c	9 Jul 2004 03:29:35 -0000	1.373
+++ gcc/stmt.c	9 Jul 2004 14:35:40 -0000
@@ -430,7 +430,6 @@
 
   x = convert_memory_address (Pmode, x);
 
-  emit_queue ();
   do_pending_stack_adjust ();
   emit_indirect_jump (x);
 }
@@ -1060,7 +1059,7 @@
 	  if ((! allows_mem && MEM_P (op))
 	      || GET_CODE (op) == CONCAT)
 	    {
-	      real_output_rtx[i] = protect_from_queue (op, 1);
+	      real_output_rtx[i] = op;
 	      op = gen_reg_rtx (GET_MODE (op));
 	      if (is_inout)
 		emit_move_insn (op, real_output_rtx[i]);
@@ -1185,13 +1184,6 @@
 
   generating_concat_p = 0;
 
-  for (i = 0; i < ninputs - ninout; i++)
-    ASM_OPERANDS_INPUT (body, i)
-      = protect_from_queue (ASM_OPERANDS_INPUT (body, i), 0);
-
-  for (i = 0; i < noutputs; i++)
-    output_rtx[i] = protect_from_queue (output_rtx[i], 1);
-
   /* For in-out operands, copy output rtx to input rtx.  */
   for (i = 0; i < ninout; i++)
     {
@@ -1362,9 +1354,6 @@
 	  TREE_VALUE (tail) = o[i];
 	}
     }
-
-  /* Those MODIFY_EXPRs could do autoincrements.  */
-  emit_queue ();
 }
 
 /* A subroutine of expand_asm_operands.  Check that all operands have
@@ -1626,8 +1615,6 @@
 
   /* Free any temporaries used to evaluate this expression.  */
   free_temp_slots ();
-
-  emit_queue ();
 }
 
 /* Warn if EXP contains any computations whose results are not used.
@@ -2014,7 +2001,6 @@
   if (TREE_CODE (TREE_TYPE (TREE_TYPE (current_function_decl))) == VOID_TYPE)
     {
       expand_expr (retval, NULL_RTX, VOIDmode, 0);
-      emit_queue ();
       expand_null_return ();
       return;
     }
@@ -2146,7 +2132,6 @@
 	result_reg_mode = tmpmode;
       result_reg = gen_reg_rtx (result_reg_mode);
 
-      emit_queue ();
       for (i = 0; i < n_regs; i++)
 	emit_move_insn (operand_subword (result_reg, i, 0, result_reg_mode),
 			result_pseudos[i]);
@@ -2169,7 +2154,6 @@
       val = assign_temp (nt, 0, 0, 1);
       val = expand_expr (retval_rhs, val, GET_MODE (val), 0);
       val = force_not_mem (val);
-      emit_queue ();
       /* Return the calculated value.  */
       expand_value_return (shift_return_value (val));
     }
@@ -2177,7 +2161,6 @@
     {
       /* No hard reg used; calculate value into hard return reg.  */
       expand_expr (retval, const0_rtx, VOIDmode, 0);
-      emit_queue ();
       expand_value_return (result_rtl);
     }
 }
@@ -2694,13 +2677,11 @@
 	  || code == POINTER_TYPE || code == REFERENCE_TYPE)
 	expand_assignment (decl, convert (TREE_TYPE (decl), integer_zero_node),
 			   0);
-      emit_queue ();
     }
   else if (DECL_INITIAL (decl) && TREE_CODE (DECL_INITIAL (decl)) != TREE_LIST)
     {
       emit_line_note (DECL_SOURCE_LOCATION (decl));
       expand_assignment (decl, DECL_INITIAL (decl), 0);
-      emit_queue ();
     }
 
   /* Don't let the initialization count as "using" the variable.  */
@@ -2813,7 +2794,6 @@
   nesting_stack = thiscase;
 
   do_pending_stack_adjust ();
-  emit_queue ();
 
   /* Make sure case_stmt.start points to something that won't
      need any transformation before expand_end_case.  */
@@ -3293,8 +3273,6 @@
 			    convert (index_type, index_expr),
 			    convert (index_type, minval)));
   index = expand_expr (index_expr, NULL_RTX, VOIDmode, 0);
-  emit_queue ();
-  index = protect_from_queue (index, 0);
   do_pending_stack_adjust ();
 
   mode = TYPE_MODE (index_type);
@@ -3446,7 +3424,6 @@
       if (count == 0)
 	{
 	  expand_expr (index_expr, const0_rtx, VOIDmode, 0);
-	  emit_queue ();
 	  emit_jump (default_label);
 	}
 
@@ -3515,10 +3492,8 @@
 		  }
 	    }
 
-	  emit_queue ();
 	  do_pending_stack_adjust ();
 
-	  index = protect_from_queue (index, 0);
 	  if (MEM_P (index))
 	    index = copy_to_reg (index);
 	  if (GET_CODE (index) == CONST_INT
Index: gcc/config/c4x/c4x.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/c4x/c4x.c,v
retrieving revision 1.152
diff -u -r1.152 c4x.c
--- gcc/config/c4x/c4x.c	8 Jul 2004 20:34:25 -0000	1.152
+++ gcc/config/c4x/c4x.c	9 Jul 2004 14:35:45 -0000
@@ -4815,7 +4815,6 @@
     case C4X_BUILTIN_FIX:
       arg0 = TREE_VALUE (arglist);
       r0 = expand_expr (arg0, NULL_RTX, QFmode, 0);
-      r0 = protect_from_queue (r0, 0);
       if (! target || ! register_operand (target, QImode))
 	target = gen_reg_rtx (QImode);
       emit_insn (gen_fixqfqi_clobber (target, r0));
@@ -4824,7 +4823,6 @@
     case C4X_BUILTIN_FIX_ANSI:
       arg0 = TREE_VALUE (arglist);
       r0 = expand_expr (arg0, NULL_RTX, QFmode, 0);
-      r0 = protect_from_queue (r0, 0);
       if (! target || ! register_operand (target, QImode))
 	target = gen_reg_rtx (QImode);
       emit_insn (gen_fix_truncqfqi2 (target, r0));
@@ -4837,8 +4835,6 @@
       arg1 = TREE_VALUE (TREE_CHAIN (arglist));
       r0 = expand_expr (arg0, NULL_RTX, QImode, 0);
       r1 = expand_expr (arg1, NULL_RTX, QImode, 0);
-      r0 = protect_from_queue (r0, 0);
-      r1 = protect_from_queue (r1, 0);
       if (! target || ! register_operand (target, QImode))
 	target = gen_reg_rtx (QImode);
       emit_insn (gen_mulqi3_24_clobber (target, r0, r1));
@@ -4849,7 +4845,6 @@
 	break;
       arg0 = TREE_VALUE (arglist);
       r0 = expand_expr (arg0, NULL_RTX, QFmode, 0);
-      r0 = protect_from_queue (r0, 0);
       if (! target || ! register_operand (target, QFmode))
 	target = gen_reg_rtx (QFmode);
       emit_insn (gen_toieee (target, r0));
@@ -4860,7 +4855,6 @@
 	break;
       arg0 = TREE_VALUE (arglist);
       r0 = expand_expr (arg0, NULL_RTX, QFmode, 0);
-      r0 = protect_from_queue (r0, 0);
       if (register_operand (r0, QFmode))
 	{
 	  r1 = assign_stack_local (QFmode, GET_MODE_SIZE (QFmode), 0);
@@ -4877,7 +4871,6 @@
 	break;
       arg0 = TREE_VALUE (arglist);
       r0 = expand_expr (arg0, NULL_RTX, QFmode, 0);
-      r0 = protect_from_queue (r0, 0);
       if (! target || ! register_operand (target, QFmode))
 	target = gen_reg_rtx (QFmode);
       emit_insn (gen_rcpfqf_clobber (target, r0));
Index: gcc/config/mn10300/mn10300.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/mn10300/mn10300.c,v
retrieving revision 1.70
diff -u -r1.70 mn10300.c
--- gcc/config/mn10300/mn10300.c	9 Jul 2004 09:50:10 -0000	1.70
+++ gcc/config/mn10300/mn10300.c	9 Jul 2004 14:35:46 -0000
@@ -1818,9 +1818,6 @@
 	  || XINT (x, 1) == UNSPEC_PLT))
       return 1;
 
-  if (GET_CODE (x) == QUEUED)
-    return legitimate_pic_operand_p (QUEUED_VAR (x));
-
   fmt = GET_RTX_FORMAT (GET_CODE (x));
   for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)
     {
Index: gcc/config/s390/s390.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/s390/s390.c,v
retrieving revision 1.156
diff -u -r1.156 s390.c
--- gcc/config/s390/s390.c	7 Jul 2004 19:24:45 -0000	1.156
+++ gcc/config/s390/s390.c	9 Jul 2004 14:35:47 -0000
@@ -3184,10 +3184,6 @@
   rtx (*gen_result) (rtx) =
     GET_MODE (target) == DImode ? gen_cmpint_di : gen_cmpint_si;
 
-  op0 = protect_from_queue (op0, 0);
-  op1 = protect_from_queue (op1, 0);
-  len = protect_from_queue (len, 0);
-
   if (GET_CODE (len) == CONST_INT && INTVAL (len) >= 0 && INTVAL (len) <= 256)
     {
       if (INTVAL (len) > 0)
Index: gcc/cp/typeck.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/cp/typeck.c,v
retrieving revision 1.558
diff -u -r1.558 typeck.c
--- gcc/cp/typeck.c	8 Jul 2004 10:03:54 -0000	1.558
+++ gcc/cp/typeck.c	9 Jul 2004 14:35:52 -0000
@@ -5773,63 +5773,6 @@
   return convert_for_assignment (type, rhs, errtype, fndecl, parmnum);
 }
 \f
-/* Expand an ASM statement with operands, handling output operands
-   that are not variables or INDIRECT_REFS by transforming such
-   cases into cases that expand_asm_operands can handle.
-
-   Arguments are same as for expand_asm_operands.
-
-   We don't do default conversions on all inputs, because it can screw
-   up operands that are expected to be in memory.  */
-
-void
-c_expand_asm_operands (tree string, tree outputs, tree inputs, tree clobbers,
-		       int vol, location_t locus)
-{
-  int noutputs = list_length (outputs);
-  int i;
-  /* o[I] is the place that output number I should be written.  */
-  tree *o = alloca (noutputs * sizeof (tree));
-  tree tail;
-
-  /* Record the contents of OUTPUTS before it is modified.  */
-  for (i = 0, tail = outputs; tail; tail = TREE_CHAIN (tail), i++)
-    o[i] = TREE_VALUE (tail);
-
-  /* Generate the ASM_OPERANDS insn;
-     store into the TREE_VALUEs of OUTPUTS some trees for
-     where the values were actually stored.  */
-  expand_asm_operands (string, outputs, inputs, clobbers, vol, locus);
-
-  /* Copy all the intermediate outputs into the specified outputs.  */
-  for (i = 0, tail = outputs; tail; tail = TREE_CHAIN (tail), i++)
-    {
-      if (o[i] != TREE_VALUE (tail))
-	{
-	  expand_expr (build_modify_expr (o[i], NOP_EXPR, TREE_VALUE (tail)),
-		       const0_rtx, VOIDmode, EXPAND_NORMAL);
-	  free_temp_slots ();
-
-	  /* Restore the original value so that it's correct the next
-	     time we expand this function.  */
-	  TREE_VALUE (tail) = o[i];
-	}
-      /* Detect modification of read-only values.
-	 (Otherwise done by build_modify_expr.)  */
-      else
-	{
-	  tree type = TREE_TYPE (o[i]);
-	  if (type != error_mark_node
-	      && (CP_TYPE_CONST_P (type)
-		  || (CLASS_TYPE_P (type) && C_TYPE_FIELDS_READONLY (type))))
-	    readonly_error (o[i], "modification by `asm'", 1);
-	}
-    }
-
-  /* Those MODIFY_EXPRs could do autoincrements.  */
-  emit_queue ();
-}
-\f
 /* If RETVAL is the address of, or a reference to, a local variable or
    temporary give an appropriate warning.  */
 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH] Remove the postincrement queue, take 2
  2004-07-09 15:49         ` [PATCH] Remove the postincrement queue, take 2 Paolo Bonzini
@ 2004-07-09 22:05           ` Richard Henderson
  0 siblings, 0 replies; 15+ messages in thread
From: Richard Henderson @ 2004-07-09 22:05 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Steven Bosscher, gcc-patches

On Fri, Jul 09, 2004 at 04:50:02PM +0200, Paolo Bonzini wrote:
> Here is the updated patch; I just used expand_assignment of a PLUS_EXPR
> instead of expand_increment.  The other change is that I was updating 
> c_expand_asm_operands, but Steven pointed out it is unused, so this take 
> removes it.
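
Just to spell out what that means in plain C (this is only an illustration
with made-up names, not the expander code itself): the pointer bump that
used to be queued as a postincrement is now emitted as an ordinary
assignment of a sum.

/* Purely illustrative sketch -- the names "ap", "pool" and "fetch_*"
   are invented for the example and are not GCC internals.  */

#include <stdio.h>

static int pool[] = { 10, 20, 30 };
static char *ap = (char *) pool;

/* Old shape: read through the argument pointer, then bump it; at the
   tree level the bump used to be expressed as a postincrement, which
   needed the queue.  */
static int fetch_old_shape (void)
{
  int value = *(int *) ap;
  ap += sizeof (int);
  return value;
}

/* New shape: save the old pointer explicitly and bump it with a plain
   assignment of a sum, i.e. an assignment of a PLUS_EXPR.  */
static int fetch_new_shape (void)
{
  char *old_ap = ap;
  ap = ap + sizeof (int);
  return *(int *) old_ap;
}

int main (void)
{
  int a = fetch_old_shape ();   /* 10 */
  int b = fetch_new_shape ();   /* 20 */
  printf ("%d %d\n", a, b);
  return 0;
}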

This patch is ok, BUT it must wait until the mips va_arg patch is
committed.


r~

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFT/RFA] gimplify pa va_arg
       [not found] <no.id>
@ 2004-07-09 21:01 ` John David Anglin
  0 siblings, 0 replies; 15+ messages in thread
From: John David Anglin @ 2004-07-09 21:01 UTC (permalink / raw)
  To: John David Anglin; +Cc: gcc-patches, paolo.bonzini, rth

> The code generated for f4 is wrong.  Garbage is used for the first

Doh.  Just forget the last message.

Dave
-- 
J. David Anglin                                  dave.anglin@nrc-cnrc.gc.ca
National Research Council of Canada              (613) 990-0752 (FAX: 952-6602)

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFT/RFA] gimplify pa va_arg
@ 2004-07-09 20:48 John David Anglin
  0 siblings, 0 replies; 15+ messages in thread
From: John David Anglin @ 2004-07-09 20:48 UTC (permalink / raw)
  To: gcc-patches; +Cc: paolo.bonzini, rth

> Paolo Bonzini wrote:
> 
>       And here is the second. This gives several differences for both PA and
>       PA64 in the struct case. For PA64, it looks like awful optimization on the
>       part of the current mainline GCC (but HPPA assembly is not my forte).
>       For PA, the only important change is that the frame grows from 64 to
>       128 bytes, but again I cannot figure it out.

---------------------
__builtin_va_list list;

int f1(void) { return __builtin_va_arg(list, int); }
long long f2(void) { return __builtin_va_arg(list, long long); }
double f3(void) { return __builtin_va_arg(list, double); }

struct S { int x[100]; };
struct S f4(void) { return __builtin_va_arg(list, struct S); }
---------------------

The code generated for f4 is wrong.  Garbage is used for the first
argument in the call to memcpy/memmove (%r26).  The same garbage is
used for the return in %r28.

This doesn't seem to be a result of gimplification.  There seems
to be some kind of assumption that there is an incoming value in
%r28:

;; Function f4

(note 2 0 3 NOTE_INSN_DELETED)

(insn 3 2 4 (nil) (set (reg/f:SI 94)
        (reg:SI 28 %r28)) -1 (nil)
    (nil))

Dave
-- 
J. David Anglin                                  dave.anglin@nrc-cnrc.gc.ca
National Research Council of Canada              (613) 990-0752 (FAX: 952-6602)

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2004-07-09 21:07 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-07-08 14:55 [PATCH] Remove the postincrement queue Paolo Bonzini
2004-07-08 21:15 ` Richard Henderson
2004-07-09  8:34   ` Paolo Bonzini
2004-07-09  9:38     ` gimplify mn10300 va_arg Paolo Bonzini
2004-07-09 11:10       ` Richard Henderson
2004-07-09  9:43     ` [RFT/RFA] gimplify pa va_arg Paolo Bonzini
2004-07-09  9:44       ` Paolo Bonzini
2004-07-09 10:08         ` Paolo Bonzini
2004-07-09 10:13           ` Paolo Bonzini
2004-07-09 10:55     ` [PATCH] Remove the postincrement queue Richard Henderson
2004-07-09 11:15       ` Paolo Bonzini
2004-07-09 15:49         ` [PATCH] Remove the postincrement queue, take 2 Paolo Bonzini
2004-07-09 22:05           ` Richard Henderson
2004-07-09 20:48 [RFT/RFA] gimplify pa va_arg John David Anglin
     [not found] <no.id>
2004-07-09 21:01 ` John David Anglin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).