public inbox for gcc-patches@gcc.gnu.org
* [patch,avr] Fix PR109650 wrong code
@ 2023-05-15 18:05 Georg-Johann Lay
  2023-05-19  8:17 ` Georg-Johann Lay
  2023-05-19  8:49 ` Georg-Johann Lay
  0 siblings, 2 replies; 6+ messages in thread
From: Georg-Johann Lay @ 2023-05-15 18:05 UTC (permalink / raw)
  To: gcc-patches; +Cc: Jeff Law, Denis Chertykov

[-- Attachment #1: Type: text/plain, Size: 3009 bytes --]

This patch fixes a wrong-code bug in the wake of PR92729, the transition
that turned the AVR backend from cc0 to CCmode.  Under cc0, an insn that
uses cc0, such as a conditional branch, always immediately follows the
cc0 setter.  This is no longer the case with CCmode, where the set and
use of REG_CC might be in different basic blocks.

This patch removes the machine-dependent reorg pass in avr_reorg entirely.

It is replaced by a new, AVR-specific mini-pass that runs prior to
split2.  Canonicalization of comparisons away from the "difficult"
codes GT[U] and LE[U] is now mostly performed by implementing
TARGET_CANONICALIZE_COMPARISON.
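
To illustrate, here is a minimal C sketch of the mapping (not taken from
the patch or its testsuite; the function names are made up, and the
rewrite of GT[U]/LE[U] assumes the constant is not already the maximal
representable value, as checked in avr_canonicalize_comparison):

    int f_gtu (unsigned x) { return x > 41;  }  /* GTU 41 -> GEU 42 (x >= 42) */
    int f_le  (int x)      { return x <= 41; }  /* LE  41 -> LT  42 (x <  42) */
    int f_ltu (unsigned x) { return x < 1;   }  /* LTU 1  -> EQ  0  (x == 0)  */
    int f_geu (unsigned x) { return x >= 1;  }  /* GEU 1  -> NE  0  (x != 0)  */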

Moreover:

* Text peephole conditions get "dead_or_set_regno_p (*, REG_CC)" as
needed.

* RTL peephole conditions get "peep2_regno_dead_p (*, REG_CC)" as
needed.

* Conditional branches no longer clobber REG_CC.

* Insn output for compares looks ahead to determine the branch mode in
use.  This also needs "dead_or_set_regno_p (*, REG_CC)".

* Add RTL peepholes for decrement-and-branch detection (see the sketch
below).
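
As a rough illustration (not a test case from this patch; the function is
hypothetical), the kind of loop these peepholes target is one where the
decrement itself already sets SREG.N/Z, so the separate compare against 0
in front of the branch can be dropped:

    void clear_buf (unsigned char *p, unsigned char n)
    {
      do
        *p++ = 0;
      while (--n);   /* decrement-and-branch candidate */
    }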

Finally, it fixes some of the many indentation glitches left over from
PR92729.

Ok?

I'd also backport this one because all of v12+ is affected by this
wrong-code bug.

Johann

--

gcc/
	PR target/109650
	PR target/97279

	* config/avr/avr-passes.def (avr_pass_ifelse): Insert new pass.
	* config/avr/avr.cc (avr_pass_ifelse): New RTL pass.
	(avr_pass_data_ifelse): New pass_data for it.
	(make_avr_pass_ifelse, avr_redundant_compare, avr_cbranch_cost)
	(avr_canonicalize_comparison, avr_out_plus_set_ZN): New functions.
	(compare_condition): Make sure REG_CC dies in the branch insn.
	(avr_rtx_costs_1): Add computation of cbranch costs.
	(avr_adjust_insn_length) [ADJUST_LEN_ADD_SET_ZN]: Handle case.
	(TARGET_CANONICALIZE_COMPARISON): New define.
	(avr_simplify_comparison_p, compare_diff_p, avr_compare_pattern)
	(avr_reorg_remove_redundant_compare, avr_reorg): Remove functions.
	(TARGET_MACHINE_DEPENDENT_REORG): Remove define.

	* config/avr/avr-protos.h (avr_simplify_comparison_p): Remove proto.
	(make_avr_pass_ifelse, avr_out_plus_set_ZN, cc_reg_rtx): New protos.

	* config/avr/avr.md (branch, difficult_branch): Don't split insns.
	(*swapped_tst<mode>, *add.for.eqne.<mode>): New insns.
	(*cbranch<mode>4): Rename to cbranch<mode>4_insn.
	(cbranch<mode>4): Try to canonicalize comparisons at expand.
	(define_peephole): Add dead_or_set_regno_p (insn, REG_CC) as needed.
	(define_peephole2): Add peep2_regno_dead_p (*, REG_CC) as needed.
	Add new RTL peepholes for decrement-and-branch and *swapped_tst<mode>.
	(adjust_len) [add_set_ZN]: New.
	(rvbranch, *rvbranch, difficult_rvbranch, *difficult_rvbranch)
	(branch_unspec, *negated_tst<mode>, *reversed_tst<mode>): Remove insns.
	(define_c_enum "unspec") [UNSPEC_IDENTITY]: Remove.

	* config/avr/avr-dimode.md (cbranch<mode>4): Canonicalize comparisons.
	* config/avr/predicates.md (scratch_or_d_register_operand): New.
	* config/avr/constraints.md (Yxx): New constraint.

gcc/testsuite/
	PR target/109650
	* config/avr/torture/pr109650-1.c: New test.

[-- Attachment #2: pr109650.diff --]
[-- Type: text/x-patch, Size: 87093 bytes --]

diff --git a/gcc/config/avr/avr-dimode.md b/gcc/config/avr/avr-dimode.md
index c0bb04ff9e0..91f0d395761 100644
--- a/gcc/config/avr/avr-dimode.md
+++ b/gcc/config/avr/avr-dimode.md
@@ -455,12 +455,18 @@ (define_expand "conditional_jump"
 (define_expand "cbranch<mode>4"
   [(set (pc)
         (if_then_else (match_operator 0 "ordered_comparison_operator"
-                        [(match_operand:ALL8 1 "register_operand"  "")
-                         (match_operand:ALL8 2 "nonmemory_operand" "")])
-         (label_ref (match_operand 3 "" ""))
-         (pc)))]
+                        [(match_operand:ALL8 1 "register_operand")
+                         (match_operand:ALL8 2 "nonmemory_operand")])
+                      (label_ref (match_operand 3))
+                      (pc)))]
   "avr_have_dimode"
    {
+    int icode = (int) GET_CODE (operands[0]);
+
+    targetm.canonicalize_comparison (&icode, &operands[1], &operands[2], false);
+    operands[0] = gen_rtx_fmt_ee ((enum rtx_code) icode,
+                                  VOIDmode, operands[1], operands[2]);
+
     rtx acc_a = gen_rtx_REG (<MODE>mode, ACC_A);
 
     avr_fix_inputs (operands, 1 << 2, regmask (<MODE>mode, ACC_A));
@@ -490,8 +496,8 @@ (define_insn_and_split "cbranch_<mode>2_split"
         (if_then_else (match_operator 0 "ordered_comparison_operator"
                         [(reg:ALL8 ACC_A)
                          (reg:ALL8 ACC_B)])
-         (label_ref (match_operand 1 "" ""))
-         (pc)))]
+                      (label_ref (match_operand 1))
+                      (pc)))]
   "avr_have_dimode"
   "#"
   "&& reload_completed"
@@ -544,8 +550,8 @@ (define_insn_and_split "cbranch_const_<mode>2_split"
         (if_then_else (match_operator 0 "ordered_comparison_operator"
                         [(reg:ALL8 ACC_A)
                          (match_operand:ALL8 1 "const_operand" "n Ynn")])
-         (label_ref (match_operand 2 "" ""))
-         (pc)))
+                      (label_ref (match_operand 2 "" ""))
+                      (pc)))
    (clobber (match_scratch:QI 3 "=&d"))]
   "avr_have_dimode
    && !s8_operand (operands[1], VOIDmode)"
diff --git a/gcc/config/avr/avr-passes.def b/gcc/config/avr/avr-passes.def
index 02ca737deee..9ff9b23f62c 100644
--- a/gcc/config/avr/avr-passes.def
+++ b/gcc/config/avr/avr-passes.def
@@ -43,3 +43,23 @@ INSERT_PASS_BEFORE (pass_free_cfg, 1, avr_pass_recompute_notes);
    insns withaout any insns in between.  */
 
 INSERT_PASS_AFTER (pass_expand, 1, avr_pass_casesi);
+
+/* If-else decision trees generated for switch / case may produce sequences
+   like
+
+      SREG = compare (reg, val);
+	  if (SREG == 0)  goto label1;
+      SREG = compare (reg, 1 + val);
+	  if (SREG >= 0)  goto label2;
+
+   which can be optimized to
+
+      SREG = compare (reg, val);
+	  if (SREG == 0)  goto label1;
+	  if (SREG >= 0)  goto label2;
+
+   The optimal place for such a pass would be directly after expand, but
+   it's not possible for a jump insn to target more than one code label.
+   Hence, run a mini pass right before split2 which introduces REG_CC.  */
+
+INSERT_PASS_BEFORE (pass_split_after_reload, 1, avr_pass_ifelse);
diff --git a/gcc/config/avr/avr-protos.h b/gcc/config/avr/avr-protos.h
index ec96fd45865..e3aee54c083 100644
--- a/gcc/config/avr/avr-protos.h
+++ b/gcc/config/avr/avr-protos.h
@@ -58,6 +58,7 @@ extern const char *ret_cond_branch (rtx x, int len, int reverse);
 extern const char *avr_out_movpsi (rtx_insn *, rtx*, int*);
 extern const char *avr_out_sign_extend (rtx_insn *, rtx*, int*);
 extern const char *avr_out_insert_notbit (rtx_insn *, rtx*, rtx, int*);
+extern const char *avr_out_plus_set_ZN (rtx*, int*);
 
 extern const char *ashlqi3_out (rtx_insn *insn, rtx operands[], int *len);
 extern const char *ashlhi3_out (rtx_insn *insn, rtx operands[], int *len);
@@ -112,8 +113,6 @@ extern int jump_over_one_insn_p (rtx_insn *insn, rtx dest);
 
 extern void avr_final_prescan_insn (rtx_insn *insn, rtx *operand,
 				    int num_operands);
-extern int avr_simplify_comparison_p (machine_mode mode,
-				      RTX_CODE op, rtx x);
 extern RTX_CODE avr_normalize_condition (RTX_CODE condition);
 extern void out_shift_with_cnt (const char *templ, rtx_insn *insn,
 				rtx operands[], int *len, int t_len);
@@ -145,6 +144,7 @@ extern rtx tmp_reg_rtx;
 extern rtx zero_reg_rtx;
 extern rtx all_regs_rtx[32];
 extern rtx rampz_rtx;
+extern rtx cc_reg_rtx;
 
 #endif /* RTX_CODE */
 
@@ -160,6 +160,7 @@ class rtl_opt_pass;
 extern rtl_opt_pass *make_avr_pass_pre_proep (gcc::context *);
 extern rtl_opt_pass *make_avr_pass_recompute_notes (gcc::context *);
 extern rtl_opt_pass *make_avr_pass_casesi (gcc::context *);
+extern rtl_opt_pass *make_avr_pass_ifelse (gcc::context *);
 
 /* From avr-log.cc */
 
diff --git a/gcc/config/avr/avr.cc b/gcc/config/avr/avr.cc
index c193430cf07..022083ab209 100644
--- a/gcc/config/avr/avr.cc
+++ b/gcc/config/avr/avr.cc
@@ -359,6 +359,41 @@ public:
   }
 }; // avr_pass_casesi
 
+
+static const pass_data avr_pass_data_ifelse =
+{
+  RTL_PASS,      // type
+  "",            // name (will be patched)
+  OPTGROUP_NONE, // optinfo_flags
+  TV_DF_SCAN,    // tv_id
+  0,             // properties_required
+  0,             // properties_provided
+  0,             // properties_destroyed
+  0,             // todo_flags_start
+  TODO_df_finish | TODO_df_verify // todo_flags_finish
+};
+
+class avr_pass_ifelse : public rtl_opt_pass
+{
+public:
+  avr_pass_ifelse (gcc::context *ctxt, const char *name)
+    : rtl_opt_pass (avr_pass_data_ifelse, ctxt)
+  {
+    this->name = name;
+  }
+
+  void avr_rest_of_handle_ifelse (function*);
+
+  virtual bool gate (function*) { return optimize > 0; }
+
+  virtual unsigned int execute (function *func)
+  {
+    avr_rest_of_handle_ifelse (func);
+
+    return 0;
+  }
+}; // avr_pass_ifelse
+
 } // anon namespace
 
 rtl_opt_pass*
@@ -373,6 +408,12 @@ make_avr_pass_casesi (gcc::context *ctxt)
   return new avr_pass_casesi (ctxt, "avr-casesi");
 }
 
+rtl_opt_pass*
+make_avr_pass_ifelse (gcc::context *ctxt)
+{
+  return new avr_pass_ifelse (ctxt, "avr-ifelse");
+}
+
 
 /* Make one parallel insn with all the patterns from insns i[0]..i[5].  */
 
@@ -686,6 +727,304 @@ avr_pass_casesi::avr_rest_of_handle_casesi (function *func)
 }
 
 
+/* A helper for the next method.  Suppose we have two conditional branches
+
+      if (reg <cond1> xval1) goto label1;
+      if (reg <cond2> xval2) goto label2;
+
+   If the second comparison is redundant and there is a code <cond> such
+   that the sequence can be performed as
+
+      REG_CC = compare (reg, xval1);
+      if (REG_CC <cond1> 0)  goto label1;
+      if (REG_CC <cond> 0)   goto label2;
+
+   then return <cond>.  Otherwise, return UNKNOWN.
+   xval1 and xval2 are CONST_INT, and mode is the scalar int mode in which
+   the comparison will be carried out.  reverse_cond1 can be set to reverse
+   condition cond1.  This is useful if the second comparison does not follow
+   the first one, but is located after label1 like in:
+
+      if (reg <cond1> xval1) goto label1;
+      ...
+      label1:
+      if (reg <cond2> xval2) goto label2;  */
+
+static enum rtx_code
+avr_redundant_compare (enum rtx_code cond1, rtx xval1,
+		       enum rtx_code cond2, rtx xval2,
+		       machine_mode mode, bool reverse_cond1)
+{
+  HOST_WIDE_INT ival1 = INTVAL (xval1);
+  HOST_WIDE_INT ival2 = INTVAL (xval2);
+
+  unsigned HOST_WIDE_INT mask = GET_MODE_MASK (mode);
+  unsigned HOST_WIDE_INT uval1 = mask & UINTVAL (xval1);
+  unsigned HOST_WIDE_INT uval2 = mask & UINTVAL (xval2);
+
+  if (reverse_cond1)
+    cond1 = reverse_condition (cond1);
+
+  if (cond1 == EQ)
+    {
+      ////////////////////////////////////////////////
+      // A sequence like
+      //    if (reg == val)  goto label1;
+      //    if (reg > val)   goto label2;
+      // can be re-written using the same, simple comparison like in:
+      //    REG_CC = compare (reg, val)
+      //    if (REG_CC == 0)  goto label1;
+      //    if (REG_CC >= 0)  goto label2;
+      if (ival1 == ival2
+	  && (cond2 == GT || cond2 == GTU))
+	return avr_normalize_condition (cond2);
+
+      // Similar, but the input sequence is like
+      //    if (reg == val)  goto label1;
+      //    if (reg >= val)  goto label2;
+      if (ival1 == ival2
+	  && (cond2 == GE || cond2 == GEU))
+	return cond2;
+
+      // Similar, but the input sequence is like
+      //    if (reg == val)      goto label1;
+      //    if (reg >= val + 1)  goto label2;
+      if ((cond2 == GE && ival2 == 1 + ival1)
+	  || (cond2 == GEU && uval2 == 1 + uval1))
+	return cond2;
+
+      // Similar, but the input sequence is like
+      //    if (reg == val)      goto label1;
+      //    if (reg >  val - 1)  goto label2;
+      if ((cond2 == GT && ival2 == ival1 - 1)
+	  || (cond2 == GTU && uval2 == uval1 - 1))
+	return avr_normalize_condition (cond2);
+
+      /////////////////////////////////////////////////////////
+      // A sequence like
+      //    if (reg == val)      goto label1;
+      //    if (reg < 1 + val)   goto label2;
+      // can be re-written as
+      //    REG_CC = compare (reg, val)
+      //    if (REG_CC == 0)  goto label1;
+      //    if (REG_CC < 0)   goto label2;
+      if ((cond2 == LT && ival2 == 1 + ival1)
+	  || (cond2 == LTU && uval2 == 1 + uval1))
+	return cond2;
+
+      // Similar, but with an input sequence like
+      //    if (reg == val)   goto label1;
+      //    if (reg <= val)   goto label2;
+      if (ival1 == ival2
+	  && (cond2 == LE || cond2 == LEU))
+	return avr_normalize_condition (cond2);
+
+      // Similar, but with an input sequence like
+      //    if (reg == val)  goto label1;
+      //    if (reg < val)   goto label2;
+      if (ival1 == ival2
+	  && (cond2 == LT || cond2 == LTU))
+	return cond2;
+
+      // Similar, but with an input sequence like
+      //    if (reg == val)      goto label1;
+      //    if (reg <= val - 1)  goto label2;
+      if ((cond2 == LE && ival2 == ival1 - 1)
+	  || (cond2 == LEU && uval2 == uval1 - 1))
+	return avr_normalize_condition (cond2);
+
+    } // cond1 == EQ
+
+  return UNKNOWN;
+}
+
+
+/* If-else decision trees generated for switch / case may produce sequences
+   like
+
+      SREG = compare (reg, val);
+	  if (SREG == 0)  goto label1;
+      SREG = compare (reg, 1 + val);
+	  if (SREG >= 0)  goto label2;
+
+   which can be optimized to
+
+      SREG = compare (reg, val);
+	  if (SREG == 0)  goto label1;
+	  if (SREG >= 0)  goto label2;
+
+   The optimal place for such a pass would be directly after expand, but
+   it's not possible for a jump insn to target more than one code label.
+   Hence, run a mini pass right before split2 which introduces REG_CC.  */
+
+void
+avr_pass_ifelse::avr_rest_of_handle_ifelse (function*)
+{
+  rtx_insn *next_insn;
+
+  for (rtx_insn *insn = get_insns(); insn; insn = next_insn)
+    {
+      next_insn = next_nonnote_nondebug_insn (insn);
+
+      if (! next_insn)
+	break;
+
+      // Search for two cbranch insns.  The first one is a cbranch.
+      // Filter for "cbranch<mode>4_insn" with mode in QI, HI, PSI, SI.
+
+      if (! JUMP_P (insn))
+	continue;
+
+      int icode1 = recog_memoized (insn);
+
+      if (icode1 != CODE_FOR_cbranchqi4_insn
+	  && icode1 != CODE_FOR_cbranchhi4_insn
+	  && icode1 != CODE_FOR_cbranchpsi4_insn
+	  && icode1 != CODE_FOR_cbranchsi4_insn)
+	continue;
+
+      rtx_jump_insn *insn1 = as_a<rtx_jump_insn *> (insn);
+      rtx_jump_insn *insn2 = nullptr;
+      bool follow_label1 = false;
+
+      // Extract the operands of the first insn:
+      // $0 = comparison operator ($1, $2)
+      // $1 = reg
+      // $2 = reg or const_int
+      // $3 = code_label
+      // $4 = optional SCRATCH for HI, PSI, SI cases.
+
+      const auto &op = recog_data.operand;
+
+      extract_insn (insn1);
+      rtx xop1[5] = { op[0], op[1], op[2], op[3], op[4] };
+      int n_operands = recog_data.n_operands;
+
+      // For now, we can optimize cbranches that follow an EQ cbranch,
+      // and cbranches that follow the label of a NE cbranch.
+
+      if (GET_CODE (xop1[0]) == EQ
+	  && JUMP_P (next_insn)
+	  && recog_memoized (next_insn) == icode1)
+	{
+	  // The 2nd cbranch insn follows insn1, i.e. is located in the
+	  // fallthrough path of insn1.
+
+	  insn2 = as_a<rtx_jump_insn *> (next_insn);
+	}
+      else if (GET_CODE (xop1[0]) == NE)
+	{
+	  // insn1 might branch to a label followed by a cbranch.
+
+	  rtx target1 = JUMP_LABEL (insn1);
+	  rtx_insn *code_label1 = JUMP_LABEL_AS_INSN (insn1);
+	  rtx_insn *next = next_nonnote_nondebug_insn (code_label1);
+	  rtx_insn *barrier = prev_nonnote_nondebug_insn (code_label1);
+
+	  if (// Target label of insn1 is used exactly once and
+	      // is not a fallthru, i.e. is preceded by a barrier.
+	      LABEL_NUSES (target1) == 1
+	      && barrier
+	      && BARRIER_P (barrier)
+	      // Following the target label is a cbranch of the same kind.
+	      && next
+	      && JUMP_P (next)
+	      && recog_memoized (next) == icode1)
+	    {
+	      follow_label1 = true;
+	      insn2 = as_a<rtx_jump_insn *> (next);
+	    }
+	}
+
+      if (! insn2)
+	continue;
+
+      // Also extract operands of insn2, and filter for REG + CONST_INT
+      // comparisons against the same register.
+
+      extract_insn (insn2);
+      rtx xop2[5] = { op[0], op[1], op[2], op[3], op[4] };
+
+      if (! rtx_equal_p (xop1[1], xop2[1])
+	  || ! CONST_INT_P (xop1[2])
+	  || ! CONST_INT_P (xop2[2]))
+	continue;
+
+      machine_mode mode = GET_MODE (xop1[1]);
+      enum rtx_code code1 = GET_CODE (xop1[0]);
+      enum rtx_code code2 = GET_CODE (xop2[0]);
+
+      code2 = avr_redundant_compare (code1, xop1[2], code2, xop2[2],
+				     mode, follow_label1);
+      if (code2 == UNKNOWN)
+	continue;
+
+      //////////////////////////////////////////////////////
+      // Found a replacement.
+
+      if (dump_file)
+	{
+	  fprintf (dump_file, "\n;; Found chain of jump_insn %d and"
+		   " jump_insn %d, follow_label1=%d:\n",
+		   INSN_UID (insn1), INSN_UID (insn2), follow_label1);
+	  print_rtl_single (dump_file, PATTERN (insn1));
+	  print_rtl_single (dump_file, PATTERN (insn2));
+	}
+
+      if (! follow_label1)
+	next_insn = next_nonnote_nondebug_insn (insn2);
+
+      // Pop the new branch conditions and the new comparison.
+      // Prematurely split into compare + branch so that we can drop
+      // the 2nd comparison.  The following pass, split2, splits all
+      // insns for REG_CC, and it should still work as usual even when
+      // there are already some REG_CC insns around.
+
+      rtx xcond1 = gen_rtx_fmt_ee (code1, VOIDmode, cc_reg_rtx, const0_rtx);
+      rtx xcond2 = gen_rtx_fmt_ee (code2, VOIDmode, cc_reg_rtx, const0_rtx);
+      rtx xpat1 = gen_branch (xop1[3], xcond1);
+      rtx xpat2 = gen_branch (xop2[3], xcond2);
+      rtx xcompare = NULL_RTX;
+
+      if (mode == QImode)
+	{
+	  gcc_assert (n_operands == 4);
+	  xcompare = gen_cmpqi3 (xop1[1], xop1[2]);
+	}
+      else
+	{
+	  gcc_assert (n_operands == 5);
+	  rtx (*gen_cmp)(rtx,rtx,rtx)
+	    = mode == HImode  ? gen_gen_comparehi
+	    : mode == PSImode ? gen_gen_comparepsi
+	    : gen_gen_comparesi; // SImode
+	  xcompare = gen_cmp (xop1[1], xop1[2], xop1[4]);
+	}
+
+      // Emit that stuff.
+
+      rtx_insn *cmp = emit_insn_before (xcompare, insn1);
+      rtx_jump_insn *branch1 = emit_jump_insn_before (xpat1, insn1);
+      rtx_jump_insn *branch2 = emit_jump_insn_before (xpat2, insn2);
+
+      JUMP_LABEL (branch1) = xop1[3];
+      JUMP_LABEL (branch2) = xop2[3];
+      // delete_insn() decrements LABEL_NUSES when deleting a JUMP_INSN, but
+      // when we pop a new JUMP_INSN, do it by hand.
+      ++LABEL_NUSES (xop1[3]);
+      ++LABEL_NUSES (xop2[3]);
+
+      delete_insn (insn1);
+      delete_insn (insn2);
+
+      // As a side effect, also recog the new insns.
+      gcc_assert (valid_insn_p (cmp));
+      gcc_assert (valid_insn_p (branch1));
+      gcc_assert (valid_insn_p (branch2));
+    } // loop insns
+}
+
+
 /* Set `avr_arch' as specified by `-mmcu='.
    Return true on success.  */
 
@@ -3173,28 +3512,6 @@ avr_asm_final_postscan_insn (FILE *stream, rtx_insn *insn, rtx*, int)
 }
 
 
-/* Return 0 if undefined, 1 if always true or always false.  */
-
-int
-avr_simplify_comparison_p (machine_mode mode, RTX_CODE op, rtx x)
-{
-  unsigned int max = (mode == QImode ? 0xff :
-                      mode == HImode ? 0xffff :
-                      mode == PSImode ? 0xffffff :
-                      mode == SImode ? 0xffffffff : 0);
-  if (max && op && CONST_INT_P (x))
-    {
-      if (unsigned_condition (op) != op)
-        max >>= 1;
-
-      if (max != (INTVAL (x) & max)
-          && INTVAL (x) != 0xff)
-        return 1;
-    }
-  return 0;
-}
-
-
 /* Worker function for `FUNCTION_ARG_REGNO_P'.  */
 /* Returns nonzero if REGNO is the number of a hard
    register in which function arguments are sometimes passed.  */
@@ -5677,29 +5994,36 @@ avr_frame_pointer_required_p (void)
           || get_frame_size () > 0);
 }
 
-/* Returns the condition of compare insn INSN, or UNKNOWN.  */
+
+/* Returns the condition of the branch following INSN, where INSN is some
+   comparison.  If the next insn is not a branch or the condition code set
+   by INSN might be used by more insns than the next one, return UNKNOWN.
+   For now, just look at the next insn, which misses some opportunities like
+   following jumps.  */
 
 static RTX_CODE
 compare_condition (rtx_insn *insn)
 {
-  rtx_insn *next = next_real_insn (insn);
+  rtx set;
+  rtx_insn *next = next_real_nondebug_insn (insn);
 
-  if (next && JUMP_P (next))
+  if (next
+      && JUMP_P (next)
+      // If SREG does not die in the next insn, it is used in more than one
+      // branch.  This can happen due to pass .avr-ifelse optimizations.
+      && dead_or_set_regno_p (next, REG_CC)
+      // Branches are (set (pc) (if_then_else (COND (...)))).
+      && (set = single_set (next))
+      && GET_CODE (SET_SRC (set)) == IF_THEN_ELSE)
     {
-      rtx pat = PATTERN (next);
-      if (GET_CODE (pat) == PARALLEL)
-        pat = XVECEXP (pat, 0, 0);
-      rtx src = SET_SRC (pat);
-
-      if (IF_THEN_ELSE == GET_CODE (src))
-        return GET_CODE (XEXP (src, 0));
+      return GET_CODE (XEXP (SET_SRC (set), 0));
     }
 
   return UNKNOWN;
 }
 
 
-/* Returns true iff INSN is a tst insn that only tests the sign.  */
+/* Returns true if INSN is a tst insn that only tests the sign.  */
 
 static bool
 compare_sign_p (rtx_insn *insn)
@@ -5709,23 +6033,95 @@ compare_sign_p (rtx_insn *insn)
 }
 
 
-/* Returns true iff the next insn is a JUMP_INSN with a condition
-   that needs to be swapped (GT, GTU, LE, LEU).  */
+/* Returns true if INSN is a compare insn with the EQ or NE condition.  */
 
 static bool
-compare_diff_p (rtx_insn *insn)
+compare_eq_p (rtx_insn *insn)
 {
   RTX_CODE cond = compare_condition (insn);
-  return (cond == GT || cond == GTU || cond == LE || cond == LEU) ? cond : 0;
+  return (cond == EQ || cond == NE);
 }
 
-/* Returns true iff INSN is a compare insn with the EQ or NE condition.  */
 
-static bool
-compare_eq_p (rtx_insn *insn)
+/* Implement `TARGET_CANONICALIZE_COMPARISON'.  */
+/* Basically tries to convert "difficult" comparisons like GT[U]
+   and LE[U] to simple ones.  Some asymmetric comparisons can be
+   transformed to EQ or NE against zero.  */
+
+static void
+avr_canonicalize_comparison (int *icode, rtx *op0, rtx *op1, bool op0_fixed)
 {
-  RTX_CODE cond = compare_condition (insn);
-  return (cond == EQ || cond == NE);
+  enum rtx_code code = (enum rtx_code) *icode;
+  machine_mode mode = GET_MODE (*op0);
+
+  bool signed_p = code == GT || code == LE;
+  bool unsigned_p = code == GTU || code == LEU;
+  bool difficult_p = signed_p || unsigned_p;
+
+  if (// Only do integers and fixed-points.
+      (! SCALAR_INT_MODE_P (mode)
+       && ! ALL_SCALAR_FIXED_POINT_MODE_P (mode))
+      // Only do comparisons against a register.
+      || ! register_operand (*op0, mode))
+    return;
+
+  // Canonicalize "difficult" reg-reg comparisons.
+
+  if (! op0_fixed
+      && difficult_p
+      && register_operand (*op1, mode))
+    {
+      std::swap (*op0, *op1);
+      *icode = (int) swap_condition (code);
+      return;
+    }
+
+  // Canonicalize comparisons against compile-time constants.
+
+  if (CONST_INT_P (*op1)
+      || CONST_FIXED_P (*op1))
+    {
+      // INT_MODE of the same size.
+      scalar_int_mode imode = int_mode_for_mode (mode).require ();
+
+      unsigned HOST_WIDE_INT mask = GET_MODE_MASK (imode);
+      unsigned HOST_WIDE_INT maxval = signed_p ? mask >> 1 : mask;
+
+      // Convert value *op1 to imode.
+      rtx xval = simplify_gen_subreg (imode, *op1, mode, 0);
+
+      // Canonicalize difficult comparisons against const.
+      if (difficult_p
+	  && (UINTVAL (xval) & mask) != maxval)
+	{
+	  // Convert *op0 > *op1  to *op0 >= 1 + *op1.
+	  // Convert *op0 <= *op1 to *op0 <  1 + *op1.
+	  xval = simplify_binary_operation (PLUS, imode, xval, const1_rtx);
+
+	  // Convert value back to its original mode.
+	  *op1 = simplify_gen_subreg (mode, xval, imode, 0);
+
+	  // Map  >  to  >=  and  <=  to  <.
+	  *icode = (int) avr_normalize_condition (code);
+
+	  return;
+	}
+
+      // Some asymmetric comparisons can be turned into EQ or NE.
+      if (code == LTU && xval == const1_rtx)
+	{
+	  *icode = (int) EQ;
+	  *op1 = CONST0_RTX (mode);
+	  return;
+	}
+
+      if (code == GEU && xval == const1_rtx)
+	{
+	  *icode = (int) NE;
+	  *op1 = CONST0_RTX (mode);
+	  return;
+	}
+    }
 }
 
 
@@ -8160,6 +8556,122 @@ avr_out_plus (rtx insn, rtx *xop, int *plen, int *pcc, bool out_label)
 }
 
 
+/* Output an instruction sequence for addition of REG in XOP[0] and CONST_INT
+   in XOP[1] in such a way that SREG.Z and SREG.N are set according to the
+   result.  XOP[2] might be a d-regs clobber register.  If XOP[2] is SCRATCH,
+   then the addition can be performed without a clobber reg.  Return "".
+
+   If PLEN == NULL, then output the instructions.
+   If PLEN != NULL, then set *PLEN to the length of the sequence in words. */
+
+const char*
+avr_out_plus_set_ZN (rtx *xop, int *plen)
+{
+  if (plen)
+    *plen = 0;
+
+  // Register to compare and value to compare against.
+  rtx xreg = xop[0];
+  rtx xval = xop[1];
+
+  machine_mode mode = GET_MODE (xreg);
+
+  // Number of bytes to operate on.
+  int n_bytes = GET_MODE_SIZE (mode);
+
+  if (n_bytes == 1)
+    {
+      if (INTVAL (xval) == 1)
+	return avr_asm_len ("inc %0", xop, plen, 1);
+
+      if (INTVAL (xval) == -1)
+	return avr_asm_len ("dec %0", xop, plen, 1);
+    }
+
+  if (n_bytes == 2
+      && test_hard_reg_class (ADDW_REGS, xreg)
+      && IN_RANGE (INTVAL (xval), 1, 63))
+    {
+      // Add 16-bit value in [1..63] to a w register.
+      return avr_asm_len ("adiw %0, %1", xop, plen, 1);
+    }
+
+  // Addition won't work; subtract the negative of XVAL instead.
+  xval = simplify_unary_operation (NEG, mode, xval, mode);
+
+  // Value (0..0xff) held in clobber register xop[2] or -1 if unknown.
+  int clobber_val = -1;
+
+  // [0] = Current sub-register.
+  // [1] = Current partial xval.
+  // [2] = 8-bit clobber d-register or SCRATCH.
+  rtx op[3];
+  op[2] = xop[2];
+
+  // Work byte-wise from LSB to MSB.  The lower two bytes might be
+  // SBIW'ed in one go.
+  for (int i = 0; i < n_bytes; ++i)
+    {
+      op[0] = simplify_gen_subreg (QImode, xreg, mode, i);
+
+      if (i == 0
+	  && n_bytes >= 2
+	  && test_hard_reg_class (ADDW_REGS, op[0]))
+	{
+	  op[1] = simplify_gen_subreg (HImode, xval, mode, 0);
+	  if (IN_RANGE (INTVAL (op[1]), 0, 63))
+	    {
+	      // SBIW can handle the lower 16 bits.
+	      avr_asm_len ("sbiw %0, %1", op, plen, 1);
+
+	      // Next byte has already been handled: Skip it.
+	      ++i;
+	      continue;
+	    }
+	}
+
+      op[1] = simplify_gen_subreg (QImode, xval, mode, i);
+
+      if (test_hard_reg_class (LD_REGS, op[0]))
+	{
+	  // d-regs can subtract immediates.
+	  avr_asm_len (i == 0
+		       ? "subi %0, %1"
+		       : "sbci %0, %1", op, plen, 1);
+	}
+      else
+	{
+	  int val8 = 0xff & INTVAL (op[1]);
+	  if (val8 == 0)
+	    {
+	      // Any register can subtract 0.
+	      avr_asm_len (i == 0
+			   ? "sub %0, __zero_reg__"
+			   : "sbc %0, __zero_reg__", op, plen, 1);
+	    }
+	  else
+	    {
+	      // Use d-register to hold partial xval.
+
+	      if (val8 != clobber_val)
+		{
+		  // Load partial xval to QI clobber reg and memoize for later.
+		  gcc_assert (REG_P (op[2]));
+		  avr_asm_len ("ldi %2, %1", op, plen, 1);
+		  clobber_val = val8;
+		}
+
+	      avr_asm_len (i == 0
+			   ? "sub %0, %2"
+			   : "sbc %0, %2", op, plen, 1);
+	    }
+	}
+    } // Loop bytes.
+
+  return "";
+}
+
+
 /* Output bit operation (IOR, AND, XOR) with register XOP[0] and compile
    time constant XOP[2]:
 
@@ -9311,6 +9823,7 @@ avr_adjust_insn_length (rtx_insn *insn, int len)
     case ADJUST_LEN_CALL: len = AVR_HAVE_JMP_CALL ? 2 : 1; break;
 
     case ADJUST_LEN_INSERT_BITS: avr_out_insert_bits (op, &len); break;
+    case ADJUST_LEN_ADD_SET_ZN: avr_out_plus_set_ZN (op, &len); break;
 
     case ADJUST_LEN_INSV_NOTBIT:
       avr_out_insert_notbit (insn, op, NULL_RTX, &len);
@@ -10607,6 +11120,42 @@ avr_mul_highpart_cost (rtx x, int)
 }
 
 
+/* Return the expected cost of a conditional branch like
+   (set (pc)
+	(if_then_else (X)
+		      (label_ref *)
+		      (pc)))
+   where X is some comparison operator.  */
+
+static int
+avr_cbranch_cost (rtx x)
+{
+  bool difficult_p = difficult_comparison_operator (x, VOIDmode);
+
+  if (reload_completed)
+    {
+      // After reload, we basically just have plain branches.
+      return COSTS_N_INSNS (1 + difficult_p);
+    }
+
+  rtx xreg = XEXP (x, 0);
+  rtx xval = XEXP (x, 1);
+  machine_mode mode = GET_MODE (xreg);
+  if (mode == VOIDmode)
+    mode = GET_MODE (xval);
+  int size = GET_MODE_SIZE (mode);
+  bool reg_p = register_operand (xreg, mode);
+  bool reg_or_0_p = reg_or_0_operand (xval, mode);
+
+  return COSTS_N_INSNS (size
+			// For the branch
+			+ 1 + difficult_p
+			// Combine might propagate constants other than zero
+			// into the 2nd operand.  Make that more expensive.
+			+ 1 * (!reg_p || !reg_or_0_p));
+}
+
+
 /* Mutually recursive subroutine of avr_rtx_cost for calculating the
    cost of an RTX operand given its context.  X is the rtx of the
    operand, MODE is its mode, and OUTER is the rtx_code of this
@@ -11490,6 +12039,15 @@ avr_rtx_costs_1 (rtx x, machine_mode mode, int outer_code,
         }
       break;
 
+    case IF_THEN_ELSE:
+      if (outer_code == SET
+	  && XEXP (x, 2) == pc_rtx
+	  && ordered_comparison_operator (XEXP (x, 0), VOIDmode))
+	{
+	  *total = avr_cbranch_cost (XEXP (x, 0));
+	  return true;
+	}
+
     default:
       break;
     }
@@ -11602,281 +12160,6 @@ avr_normalize_condition (RTX_CODE condition)
     }
 }
 
-/* Helper function for `avr_reorg'.  */
-
-static rtx
-avr_compare_pattern (rtx_insn *insn)
-{
-  rtx pattern = single_set (insn);
-
-  if (pattern
-      && NONJUMP_INSN_P (insn)
-      && REG_P (SET_DEST (pattern))
-      && REGNO (SET_DEST (pattern)) == REG_CC
-      && GET_CODE (SET_SRC (pattern)) == COMPARE)
-    {
-      machine_mode mode0 = GET_MODE (XEXP (SET_SRC (pattern), 0));
-      machine_mode mode1 = GET_MODE (XEXP (SET_SRC (pattern), 1));
-
-      /* The 64-bit comparisons have fixed operands ACC_A and ACC_B.
-         They must not be swapped, thus skip them.  */
-
-      if ((mode0 == VOIDmode || GET_MODE_SIZE (mode0) <= 4)
-          && (mode1 == VOIDmode || GET_MODE_SIZE (mode1) <= 4))
-        return pattern;
-    }
-
-  return NULL_RTX;
-}
-
-/* Helper function for `avr_reorg'.  */
-
-/* Expansion of switch/case decision trees leads to code like
-
-       REG_CC = compare (Reg, Num)
-       if (REG_CC == 0)
-         goto L1
-
-       REG_CC = compare (Reg, Num)
-       if (REG_CC > 0)
-         goto L2
-
-   The second comparison is superfluous and can be deleted.
-   The second jump condition can be transformed from a
-   "difficult" one to a "simple" one because "REG_CC > 0" and
-   "REG_CC >= 0" will have the same effect here.
-
-   This function relies on the way switch/case is being expaned
-   as binary decision tree.  For example code see PR 49903.
-
-   Return TRUE if optimization performed.
-   Return FALSE if nothing changed.
-
-   INSN1 is a comparison, i.e. avr_compare_pattern != 0.
-
-   We don't want to do this in text peephole because it is
-   tedious to work out jump offsets there and the second comparison
-   might have been transormed by `avr_reorg'.
-
-   RTL peephole won't do because peephole2 does not scan across
-   basic blocks.  */
-
-static bool
-avr_reorg_remove_redundant_compare (rtx_insn *insn1)
-{
-  rtx comp1, ifelse1, xcond1;
-  rtx_insn *branch1;
-  rtx comp2, ifelse2, xcond2;
-  rtx_insn *branch2, *insn2;
-  enum rtx_code code;
-  rtx_insn *jump;
-  rtx target, cond;
-
-  /* Look out for:  compare1 - branch1 - compare2 - branch2  */
-
-  branch1 = next_nonnote_nondebug_insn (insn1);
-  if (!branch1 || !JUMP_P (branch1))
-    return false;
-
-  insn2 = next_nonnote_nondebug_insn (branch1);
-  if (!insn2 || !avr_compare_pattern (insn2))
-    return false;
-
-  branch2 = next_nonnote_nondebug_insn (insn2);
-  if (!branch2 || !JUMP_P (branch2))
-    return false;
-
-  comp1 = avr_compare_pattern (insn1);
-  comp2 = avr_compare_pattern (insn2);
-  xcond1 = single_set (branch1);
-  xcond2 = single_set (branch2);
-
-  if (!comp1 || !comp2
-      || !rtx_equal_p (comp1, comp2)
-      || !xcond1 || SET_DEST (xcond1) != pc_rtx
-      || !xcond2 || SET_DEST (xcond2) != pc_rtx
-      || IF_THEN_ELSE != GET_CODE (SET_SRC (xcond1))
-      || IF_THEN_ELSE != GET_CODE (SET_SRC (xcond2)))
-    {
-      return false;
-    }
-
-  comp1 = SET_SRC (comp1);
-  ifelse1 = SET_SRC (xcond1);
-  ifelse2 = SET_SRC (xcond2);
-
-  /* comp<n> is COMPARE now and ifelse<n> is IF_THEN_ELSE.  */
-
-  if (EQ != GET_CODE (XEXP (ifelse1, 0))
-      || !REG_P (XEXP (comp1, 0))
-      || !CONST_INT_P (XEXP (comp1, 1))
-      || XEXP (ifelse1, 2) != pc_rtx
-      || XEXP (ifelse2, 2) != pc_rtx
-      || LABEL_REF != GET_CODE (XEXP (ifelse1, 1))
-      || LABEL_REF != GET_CODE (XEXP (ifelse2, 1))
-      || !COMPARISON_P (XEXP (ifelse2, 0))
-      || REG_CC != REGNO (XEXP (XEXP (ifelse1, 0), 0))
-      || REG_CC != REGNO (XEXP (XEXP (ifelse2, 0), 0))
-      || const0_rtx != XEXP (XEXP (ifelse1, 0), 1)
-      || const0_rtx != XEXP (XEXP (ifelse2, 0), 1))
-    {
-      return false;
-    }
-
-  /* We filtered the insn sequence to look like
-
-        (set (reg:CC cc)
-             (compare (reg:M N)
-                      (const_int VAL)))
-        (set (pc)
-             (if_then_else (eq (reg:CC cc)
-                               (const_int 0))
-                           (label_ref L1)
-                           (pc)))
-
-        (set (reg:CC cc)
-             (compare (reg:M N)
-                      (const_int VAL)))
-        (set (pc)
-             (if_then_else (CODE (reg:CC cc)
-                                 (const_int 0))
-                           (label_ref L2)
-                           (pc)))
-  */
-
-  code = GET_CODE (XEXP (ifelse2, 0));
-
-  /* Map GT/GTU to GE/GEU which is easier for AVR.
-     The first two instructions compare/branch on EQ
-     so we may replace the difficult
-
-        if (x == VAL)   goto L1;
-        if (x > VAL)    goto L2;
-
-     with easy
-
-         if (x == VAL)   goto L1;
-         if (x >= VAL)   goto L2;
-
-     Similarly, replace LE/LEU by LT/LTU.  */
-
-  switch (code)
-    {
-    case EQ:
-    case LT:  case LTU:
-    case GE:  case GEU:
-      break;
-
-    case LE:  case LEU:
-    case GT:  case GTU:
-      code = avr_normalize_condition (code);
-      break;
-
-    default:
-      return false;
-    }
-
-  /* Wrap the branches into UNSPECs so they won't be changed or
-     optimized in the remainder.  */
-
-  target = XEXP (XEXP (ifelse1, 1), 0);
-  cond = XEXP (ifelse1, 0);
-  jump = emit_jump_insn_after (gen_branch_unspec (target, cond), insn1);
-
-  JUMP_LABEL (jump) = JUMP_LABEL (branch1);
-
-  target = XEXP (XEXP (ifelse2, 1), 0);
-  cond = gen_rtx_fmt_ee (code, VOIDmode, cc_reg_rtx, const0_rtx);
-  jump = emit_jump_insn_after (gen_branch_unspec (target, cond), insn2);
-
-  JUMP_LABEL (jump) = JUMP_LABEL (branch2);
-
-  /* The comparisons in insn1 and insn2 are exactly the same;
-     insn2 is superfluous so delete it.  */
-
-  delete_insn (insn2);
-  delete_insn (branch1);
-  delete_insn (branch2);
-
-  return true;
-}
-
-
-/* Implement `TARGET_MACHINE_DEPENDENT_REORG'.  */
-/* Optimize conditional jumps.  */
-
-static void
-avr_reorg (void)
-{
-  rtx_insn *insn = get_insns();
-
-  for (insn = next_real_insn (insn); insn; insn = next_real_insn (insn))
-    {
-      rtx pattern = avr_compare_pattern (insn);
-
-      if (!pattern)
-        continue;
-
-      if (optimize
-          && avr_reorg_remove_redundant_compare (insn))
-        {
-          continue;
-        }
-
-      if (compare_diff_p (insn))
-	{
-          /* Now we work under compare insn with difficult branch.  */
-
-	  rtx_insn *next = next_real_insn (insn);
-          rtx pat = PATTERN (next);
-          if (GET_CODE (pat) == PARALLEL)
-            pat = XVECEXP (pat, 0, 0);
-
-          pattern = SET_SRC (pattern);
-
-          if (true_regnum (XEXP (pattern, 0)) >= 0
-              && true_regnum (XEXP (pattern, 1)) >= 0)
-            {
-              rtx x = XEXP (pattern, 0);
-              rtx src = SET_SRC (pat);
-              rtx t = XEXP (src, 0);
-              PUT_CODE (t, swap_condition (GET_CODE (t)));
-              XEXP (pattern, 0) = XEXP (pattern, 1);
-              XEXP (pattern, 1) = x;
-              INSN_CODE (next) = -1;
-            }
-          else if (true_regnum (XEXP (pattern, 0)) >= 0
-                   && XEXP (pattern, 1) == const0_rtx)
-            {
-              /* This is a tst insn, we can reverse it.  */
-              rtx src = SET_SRC (pat);
-              rtx t = XEXP (src, 0);
-
-              PUT_CODE (t, swap_condition (GET_CODE (t)));
-              XEXP (pattern, 1) = XEXP (pattern, 0);
-              XEXP (pattern, 0) = const0_rtx;
-              INSN_CODE (next) = -1;
-              INSN_CODE (insn) = -1;
-            }
-          else if (true_regnum (XEXP (pattern, 0)) >= 0
-                   && CONST_INT_P (XEXP (pattern, 1)))
-            {
-              rtx x = XEXP (pattern, 1);
-              rtx src = SET_SRC (pat);
-              rtx t = XEXP (src, 0);
-              machine_mode mode = GET_MODE (XEXP (pattern, 0));
-
-              if (avr_simplify_comparison_p (mode, GET_CODE (t), x))
-                {
-                  XEXP (pattern, 1) = gen_int_mode (INTVAL (x) + 1, mode);
-                  PUT_CODE (t, avr_normalize_condition (GET_CODE (t)));
-                  INSN_CODE (next) = -1;
-                  INSN_CODE (insn) = -1;
-                }
-            }
-        }
-    }
-}
 
 /* Returns register number for function return value.*/
 
@@ -14580,8 +14863,6 @@ avr_float_lib_compare_returns_bool (machine_mode mode, enum rtx_code)
 #define TARGET_RTX_COSTS avr_rtx_costs
 #undef  TARGET_ADDRESS_COST
 #define TARGET_ADDRESS_COST avr_address_cost
-#undef  TARGET_MACHINE_DEPENDENT_REORG
-#define TARGET_MACHINE_DEPENDENT_REORG avr_reorg
 #undef  TARGET_FUNCTION_ARG
 #define TARGET_FUNCTION_ARG avr_function_arg
 #undef  TARGET_FUNCTION_ARG_ADVANCE
@@ -14711,6 +14992,9 @@ avr_float_lib_compare_returns_bool (machine_mode mode, enum rtx_code)
 #undef  TARGET_MD_ASM_ADJUST
 #define TARGET_MD_ASM_ADJUST avr_md_asm_adjust
 
+#undef  TARGET_CANONICALIZE_COMPARISON
+#define TARGET_CANONICALIZE_COMPARISON avr_canonicalize_comparison
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 \f
diff --git a/gcc/config/avr/avr.md b/gcc/config/avr/avr.md
index 43b75046384..5c8e376f7da 100644
--- a/gcc/config/avr/avr.md
+++ b/gcc/config/avr/avr.md
@@ -77,7 +77,6 @@ (define_c_enum "unspec"
    UNSPEC_FMULS
    UNSPEC_FMULSU
    UNSPEC_COPYSIGN
-   UNSPEC_IDENTITY
    UNSPEC_INSERT_BITS
    UNSPEC_ROUND
    ])
@@ -165,6 +164,7 @@ (define_attr "adjust_len"
    ashlsi, ashrsi, lshrsi,
    ashlpsi, ashrpsi, lshrpsi,
    insert_bits, insv_notbit, insv_notbit_0, insv_notbit_7,
+   add_set_ZN,
    no"
   (const_string "no"))
 
@@ -6448,80 +6448,46 @@ (define_insn_and_split "zero_extendsidi2"
 ;;<=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=>
 ;; compare
 
-; Optimize negated tests into reverse compare if overflow is undefined.
-(define_insn "*negated_tstqi"
+;; "*swapped_tstqi"
+;; "*swapped_tstqq" "*swapped_tstuqq"
+(define_insn "*swapped_tst<mode>"
   [(set (reg:CC REG_CC)
-        (compare:CC (neg:QI (match_operand:QI 0 "register_operand" "r"))
-                 (const_int 0)))]
-  "reload_completed && !flag_wrapv && !flag_trapv"
-  "cp __zero_reg__,%0"
-  [(set_attr "length" "1")])
-
-(define_insn "*reversed_tstqi"
-  [(set (reg:CC REG_CC)
-        (compare:CC (const_int 0)
-                 (match_operand:QI 0 "register_operand" "r")))]
+        (compare:CC (match_operand:ALL1 0 "const0_operand"   "Y00")
+                    (match_operand:ALL1 1 "register_operand" "r")))]
   "reload_completed"
-  "cp __zero_reg__,%0"
-[(set_attr "length" "2")])
+  "cp __zero_reg__,%1"
+[(set_attr "length" "1")])
 
-(define_insn "*negated_tsthi"
-  [(set (reg:CC REG_CC)
-        (compare:CC (neg:HI (match_operand:HI 0 "register_operand" "r"))
-                 (const_int 0)))]
-  "reload_completed && !flag_wrapv && !flag_trapv"
-  "cp __zero_reg__,%A0
-	cpc __zero_reg__,%B0"
-[(set_attr "length" "2")])
-
-;; Leave here the clobber used by the cmphi pattern for simplicity, even
-;; though it is unused, because this pattern is synthesized by avr_reorg.
-(define_insn "*reversed_tsthi"
+
+;; "*swapped_tsthi"
+;; "*swapped_tsthq" "*swapped_tstuhq"
+;; "*swapped_tstha" "*swapped_tstuha"
+(define_insn "*swapped_tst<mode>"
   [(set (reg:CC REG_CC)
-        (compare:CC (const_int 0)
-                 (match_operand:HI 0 "register_operand" "r")))
-   (clobber (match_scratch:QI 1 "=X"))]
+        (compare:CC (match_operand:ALL2 0 "const0_operand"   "Y00")
+                    (match_operand:ALL2 1 "register_operand" "r")))]
   "reload_completed"
-  "cp __zero_reg__,%A0
-	cpc __zero_reg__,%B0"
-[(set_attr "length" "2")])
+  "cp __zero_reg__,%A1
+	cpc __zero_reg__,%B1"
+  [(set_attr "length" "2")])
 
-(define_insn "*negated_tstpsi"
-  [(set (reg:CC REG_CC)
-        (compare:CC (neg:PSI (match_operand:PSI 0 "register_operand" "r"))
-                 (const_int 0)))]
-  "reload_completed && !flag_wrapv && !flag_trapv"
-  "cp __zero_reg__,%A0\;cpc __zero_reg__,%B0\;cpc __zero_reg__,%C0"
-  [(set_attr "length" "3")])
 
-(define_insn "*reversed_tstpsi"
+(define_insn "*swapped_tstpsi"
   [(set (reg:CC REG_CC)
         (compare:CC (const_int 0)
-                 (match_operand:PSI 0 "register_operand" "r")))
-   (clobber (match_scratch:QI 1 "=X"))]
+                    (match_operand:PSI 0 "register_operand" "r")))]
   "reload_completed"
   "cp __zero_reg__,%A0\;cpc __zero_reg__,%B0\;cpc __zero_reg__,%C0"
   [(set_attr "length" "3")])
 
-(define_insn "*negated_tstsi"
-  [(set (reg:CC REG_CC)
-        (compare:CC (neg:SI (match_operand:SI 0 "register_operand" "r"))
-                 (const_int 0)))]
-  "reload_completed && !flag_wrapv && !flag_trapv"
-  "cp __zero_reg__,%A0
-	cpc __zero_reg__,%B0
-	cpc __zero_reg__,%C0
-	cpc __zero_reg__,%D0"
-  [(set_attr "length" "4")])
 
-;; "*reversed_tstsi"
-;; "*reversed_tstsq" "*reversed_tstusq"
-;; "*reversed_tstsa" "*reversed_tstusa"
-(define_insn "*reversed_tst<mode>"
+;; "*swapped_tstsi"
+;; "*swapped_tstsq" "*swapped_tstusq"
+;; "*swapped_tstsa" "*swapped_tstusa"
+(define_insn "*swapped_tst<mode>"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:ALL4 0 "const0_operand"   "Y00")
-                 (match_operand:ALL4 1 "register_operand" "r")))
-   (clobber (match_scratch:QI 2 "=X"))]
+                    (match_operand:ALL4 1 "register_operand" "r")))]
   "reload_completed"
   "cp __zero_reg__,%A1
 	cpc __zero_reg__,%B1
@@ -6535,7 +6501,7 @@ (define_insn "*reversed_tst<mode>"
 (define_insn "cmp<mode>3"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:ALL1 0 "register_operand"  "r  ,r,d")
-                 (match_operand:ALL1 1 "nonmemory_operand" "Y00,r,i")))]
+                    (match_operand:ALL1 1 "nonmemory_operand" "Y00,r,i")))]
   "reload_completed"
   "@
 	tst %0
@@ -6546,7 +6512,7 @@ (define_insn "cmp<mode>3"
 (define_insn "*cmpqi_sign_extend"
   [(set (reg:CC REG_CC)
         (compare:CC (sign_extend:HI (match_operand:QI 0 "register_operand" "d"))
-                 (match_operand:HI 1 "s8_operand"                       "n")))]
+                    (match_operand:HI 1 "s8_operand"                       "n")))]
   "reload_completed"
   "cpi %0,lo8(%1)"
   [(set_attr "length" "1")])
@@ -6555,7 +6521,7 @@ (define_insn "*cmpqi_sign_extend"
 (define_insn "*cmphi.zero-extend.0"
   [(set (reg:CC REG_CC)
         (compare:CC (zero_extend:HI (match_operand:QI 0 "register_operand" "r"))
-                 (match_operand:HI 1 "register_operand" "r")))]
+                    (match_operand:HI 1 "register_operand" "r")))]
   "reload_completed"
   "cp %0,%A1\;cpc __zero_reg__,%B1"
   [(set_attr "length" "2")])
@@ -6563,7 +6529,7 @@ (define_insn "*cmphi.zero-extend.0"
 (define_insn "*cmphi.zero-extend.1"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:HI 0 "register_operand" "r")
-                 (zero_extend:HI (match_operand:QI 1 "register_operand" "r"))))]
+                    (zero_extend:HI (match_operand:QI 1 "register_operand" "r"))))]
   "reload_completed"
   "cp %A0,%1\;cpc %B0,__zero_reg__"
   [(set_attr "length" "2")])
@@ -6574,8 +6540,8 @@ (define_insn "*cmphi.zero-extend.1"
 (define_insn "cmp<mode>3"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:ALL2 0 "register_operand"  "!w  ,r  ,r,d ,r  ,d,r")
-                 (match_operand:ALL2 1 "nonmemory_operand"  "Y00,Y00,r,s ,s  ,M,n Ynn")))
-   (clobber (match_scratch:QI 2                            "=X  ,X  ,X,&d,&d ,X,&d"))]
+                    (match_operand:ALL2 1 "nonmemory_operand"  "Y00,Y00,r,s ,s  ,M,n Ynn")))
+   (clobber (match_scratch:QI 2                               "=X  ,X  ,X,&d,&d ,X,&d"))]
   "reload_completed"
   {
     switch (which_alternative)
@@ -6608,8 +6574,8 @@ (define_insn "cmp<mode>3"
 (define_insn "*cmppsi"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:PSI 0 "register_operand"  "r,r,d ,r  ,d,r")
-                 (match_operand:PSI 1 "nonmemory_operand" "L,r,s ,s  ,M,n")))
-   (clobber (match_scratch:QI 2                          "=X,X,&d,&d ,X,&d"))]
+                    (match_operand:PSI 1 "nonmemory_operand" "L,r,s ,s  ,M,n")))
+   (clobber (match_scratch:QI 2                             "=X,X,&d,&d ,X,&d"))]
   "reload_completed"
   {
     switch (which_alternative)
@@ -6640,8 +6606,8 @@ (define_insn "*cmppsi"
 (define_insn "*cmp<mode>"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:ALL4 0 "register_operand"  "r  ,r ,d,r ,r")
-                 (match_operand:ALL4 1 "nonmemory_operand" "Y00,r ,M,M ,n Ynn")))
-   (clobber (match_scratch:QI 2                           "=X  ,X ,X,&d,&d"))]
+                    (match_operand:ALL4 1 "nonmemory_operand" "Y00,r ,M,M ,n Ynn")))
+   (clobber (match_scratch:QI 2                              "=X  ,X ,X,&d,&d"))]
   "reload_completed"
   {
     if (0 == which_alternative)
@@ -6655,6 +6621,13 @@ (define_insn "*cmp<mode>"
    (set_attr "adjust_len" "tstsi,*,compare,compare,compare")])
 
 
+;; A helper for avr_pass_ifelse::avr_rest_of_handle_ifelse().
+(define_expand "gen_compare<mode>"
+  [(parallel [(set (reg:CC REG_CC)
+                   (compare:CC (match_operand:HISI 0 "register_operand")
+                               (match_operand:HISI 1 "const_int_operand")))
+              (clobber (match_operand:QI 2 "scratch_operand"))])])
+
 ;; ----------------------------------------------------------------------
 ;; JUMP INSTRUCTIONS
 ;; ----------------------------------------------------------------------
@@ -6663,53 +6636,69 @@ (define_insn "*cmp<mode>"
 (define_expand "cbranch<mode>4"
   [(set (pc)
         (if_then_else (match_operator 0 "ordered_comparison_operator"
-                        [(match_operand:ALL1 1 "register_operand" "")
-                         (match_operand:ALL1 2 "nonmemory_operand" "")])
-         (label_ref (match_operand 3 "" ""))
-         (pc)))])
+                        [(match_operand:ALL1 1 "register_operand")
+                         (match_operand:ALL1 2 "nonmemory_operand")])
+                      (label_ref (match_operand 3))
+                      (pc)))]
+  ""
+  {
+    int icode = (int) GET_CODE (operands[0]);
+
+    targetm.canonicalize_comparison (&icode, &operands[1], &operands[2], false);
+    operands[0] = gen_rtx_fmt_ee ((enum rtx_code) icode,
+                                  VOIDmode, operands[1], operands[2]);
+  })
 
 (define_expand "cbranch<mode>4"
   [(parallel
      [(set (pc)
-           (if_then_else
-             (match_operator 0 "ordered_comparison_operator"
-               [(match_operand:ORDERED234 1 "register_operand" "")
-                (match_operand:ORDERED234 2 "nonmemory_operand" "")])
-             (label_ref (match_operand 3 "" ""))
-             (pc)))
-      (clobber (match_scratch:QI 4 ""))])])
-
-;; "*cbranchqi4"
-;; "*cbranchqq4"  "*cbranchuqq4"
-(define_insn_and_split "*cbranch<mode>4"
+           (if_then_else (match_operator 0 "ordered_comparison_operator"
+                           [(match_operand:ORDERED234 1 "register_operand")
+                            (match_operand:ORDERED234 2 "nonmemory_operand")])
+                         (label_ref (match_operand 3))
+                         (pc)))
+      (clobber (match_scratch:QI 4))])]
+  ""
+  {
+    int icode = (int) GET_CODE (operands[0]);
+
+    targetm.canonicalize_comparison (&icode, &operands[1], &operands[2], false);
+    operands[0] = gen_rtx_fmt_ee ((enum rtx_code) icode,
+                                  VOIDmode, operands[1], operands[2]);
+  })
+
+
+;; "cbranchqi4_insn"
+;; "cbranchqq4_insn"  "cbranchuqq4_insn"
+(define_insn_and_split "cbranch<mode>4_insn"
   [(set (pc)
         (if_then_else (match_operator 0 "ordered_comparison_operator"
-                        [(match_operand:ALL1 1 "register_operand" "r  ,r,d")
+                        [(match_operand:ALL1 1 "register_operand"  "r  ,r,d")
                          (match_operand:ALL1 2 "nonmemory_operand" "Y00,r,i")])
-         (label_ref (match_operand 3 "" ""))
-         (pc)))]
+                      (label_ref (match_operand 3))
+                      (pc)))]
    ""
    "#"
    "reload_completed"
    [(set (reg:CC REG_CC)
-                    (compare:CC (match_dup 1) (match_dup 2)))
+         (compare:CC (match_dup 1) (match_dup 2)))
     (set (pc)
          (if_then_else (match_op_dup 0
                          [(reg:CC REG_CC) (const_int 0)])
                        (label_ref (match_dup 3))
-                       (pc)))]
-   "")
+                       (pc)))])
 
-;; "*cbranchsi4"  "*cbranchsq4"  "*cbranchusq4"  "*cbranchsa4"  "*cbranchusa4"
-(define_insn_and_split "*cbranch<mode>4"
+;; "cbranchsi4_insn"
+;; "cbranchsq4_insn"  "cbranchusq4_insn"  "cbranchsa4_insn"  "cbranchusa4_insn"
+(define_insn_and_split "cbranch<mode>4_insn"
   [(set (pc)
-           (if_then_else
-             (match_operator 0 "ordered_comparison_operator"
-               [(match_operand:ALL4 1 "register_operand" "r  ,r ,d,r ,r")
-                (match_operand:ALL4 2 "nonmemory_operand" "Y00,r ,M,M ,n Ynn")])
-             (label_ref (match_operand 3 "" ""))
-             (pc)))
-   (clobber (match_scratch:QI 4 "=X  ,X ,X,&d,&d"))]
+        (if_then_else
+         (match_operator 0 "ordered_comparison_operator"
+           [(match_operand:ALL4 1 "register_operand"  "r  ,r,d,r ,r")
+            (match_operand:ALL4 2 "nonmemory_operand" "Y00,r,M,M ,n Ynn")])
+         (label_ref (match_operand 3))
+         (pc)))
+   (clobber (match_scratch:QI 4                      "=X  ,X,X,&d,&d"))]
    ""
    "#"
    "reload_completed"
@@ -6720,19 +6709,18 @@ (define_insn_and_split "*cbranch<mode>4"
          (if_then_else (match_op_dup 0
                          [(reg:CC REG_CC) (const_int 0)])
                        (label_ref (match_dup 3))
-                       (pc)))]
-   "")
+                       (pc)))])
 
-;; "*cbranchpsi4"
-(define_insn_and_split "*cbranchpsi4"
+;; "cbranchpsi4_insn"
+(define_insn_and_split "cbranchpsi4_insn"
   [(set (pc)
-           (if_then_else
-             (match_operator 0 "ordered_comparison_operator"
-               [(match_operand:PSI 1 "register_operand" "r,r,d ,r  ,d,r")
-                (match_operand:PSI 2 "nonmemory_operand" "L,r,s ,s  ,M,n")])
-             (label_ref (match_operand 3 "" ""))
-             (pc)))
-   (clobber (match_scratch:QI 4 "=X,X,&d,&d ,X,&d"))]
+        (if_then_else
+         (match_operator 0 "ordered_comparison_operator"
+           [(match_operand:PSI 1 "register_operand"  "r,r,d ,r ,d,r")
+            (match_operand:PSI 2 "nonmemory_operand" "L,r,s ,s ,M,n")])
+         (label_ref (match_operand 3))
+         (pc)))
+   (clobber (match_scratch:QI 4                     "=X,X,&d,&d,X,&d"))]
    ""
    "#"
    "reload_completed"
@@ -6743,19 +6731,19 @@ (define_insn_and_split "*cbranchpsi4"
          (if_then_else (match_op_dup 0
                          [(reg:CC REG_CC) (const_int 0)])
                        (label_ref (match_dup 3))
-                       (pc)))]
-   "")
+                       (pc)))])
 
-;; "*cbranchhi4"  "*cbranchhq4"  "*cbranchuhq4"  "*cbranchha4"  "*cbranchuha4"
-(define_insn_and_split "*cbranch<mode>4"
+;; "cbranchhi4_insn"
+;; "cbranchhq4_insn"  "cbranchuhq4_insn"  "cbranchha4_insn"  "cbranchuha4_insn"
+(define_insn_and_split "cbranch<mode>4_insn"
   [(set (pc)
-           (if_then_else
-             (match_operator 0 "ordered_comparison_operator"
-               [(match_operand:ALL2 1 "register_operand" "!w  ,r  ,r,d ,r  ,d,r")
-                (match_operand:ALL2 2 "nonmemory_operand" "Y00,Y00,r,s ,s  ,M,n Ynn")])
-             (label_ref (match_operand 3 "" ""))
-             (pc)))
-   (clobber (match_scratch:QI 4 "=X  ,X  ,X,&d,&d ,X,&d"))]
+        (if_then_else
+         (match_operator 0 "ordered_comparison_operator"
+           [(match_operand:ALL2 1 "register_operand" "!w  ,r  ,r,d ,r ,d,r")
+            (match_operand:ALL2 2 "nonmemory_operand" "Y00,Y00,r,s ,s ,M,n Ynn")])
+         (label_ref (match_operand 3))
+         (pc)))
+   (clobber (match_scratch:QI 4                      "=X  ,X  ,X,&d,&d,X,&d"))]
    ""
    "#"
    "reload_completed"
@@ -6766,8 +6754,7 @@ (define_insn_and_split "*cbranch<mode>4"
          (if_then_else (match_op_dup 0
                          [(reg:CC REG_CC) (const_int 0)])
                        (label_ref (match_dup 3))
-                       (pc)))]
-   "")
+                       (pc)))])
 
 ;; Test a single bit in a QI/HI/SImode register.
 ;; Combine will create zero extract patterns for single bit tests.
@@ -6879,161 +6866,145 @@ (define_insn "*sbrx_and_branch<mode>"
 
 ;; Convert sign tests to bit 7/15/31 tests that match the above insns.
 (define_peephole2
-  [(set (reg:CC REG_CC) (compare:CC (match_operand:QI 0 "register_operand" "")
-                       (const_int 0)))
-   (parallel [(set (pc) (if_then_else (ge (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (eq (zero_extract:HI (match_dup 0)
-                                                           (const_int 1)
-                                                           (const_int 7))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
+  [(set (reg:CC REG_CC)
+        (compare:CC (match_operand:QI 0 "register_operand")
+                    (const_int 0)))
+   (set (pc)
+        (if_then_else (ge (reg:CC REG_CC)
+                          (const_int 0))
+                      (label_ref (match_operand 1))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(parallel [(set (pc)
+                   (if_then_else (eq (zero_extract:HI (match_dup 0)
+                                                      (const_int 1)
+                                                      (const_int 7))
+                                     (const_int 0))
+                                 (label_ref (match_dup 1))
+                                 (pc)))
               (clobber (reg:CC REG_CC))])])
 
 (define_peephole2
-  [(set (reg:CC REG_CC) (compare:CC (match_operand:QI 0 "register_operand" "")
-                       (const_int 0)))
-   (parallel [(set (pc) (if_then_else (lt (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (ne (zero_extract:HI (match_dup 0)
-                                                           (const_int 1)
-                                                           (const_int 7))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
+  [(set (reg:CC REG_CC)
+        (compare:CC (match_operand:QI 0 "register_operand")
+                    (const_int 0)))
+   (set (pc)
+        (if_then_else (lt (reg:CC REG_CC)
+                          (const_int 0))
+                      (label_ref (match_operand 1))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(parallel [(set (pc)
+                   (if_then_else (ne (zero_extract:HI (match_dup 0)
+                                                      (const_int 1)
+                                                      (const_int 7))
+                                     (const_int 0))
+                                 (label_ref (match_dup 1))
+                                 (pc)))
               (clobber (reg:CC REG_CC))])])
 
 (define_peephole2
-  [(parallel [(set (reg:CC REG_CC) (compare:CC (match_operand:HI 0 "register_operand" "")
-                                  (const_int 0)))
-              (clobber (match_operand:HI 2 ""))])
-   (parallel [(set (pc) (if_then_else (ge (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (eq (and:HI (match_dup 0) (const_int -32768))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
+  [(parallel [(set (reg:CC REG_CC)
+                   (compare:CC (match_operand:HI 0 "register_operand")
+                               (const_int 0)))
+              (clobber (match_operand:HI 2))])
+   (set (pc)
+        (if_then_else (ge (reg:CC REG_CC)
+                          (const_int 0))
+                      (label_ref (match_operand 1))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(parallel [(set (pc)
+                   (if_then_else (eq (and:HI (match_dup 0)
+                                             (const_int -32768))
+                                     (const_int 0))
+                                 (label_ref (match_dup 1))
+                                 (pc)))
               (clobber (reg:CC REG_CC))])])
 
 (define_peephole2
-  [(parallel [(set (reg:CC REG_CC) (compare:CC (match_operand:HI 0 "register_operand" "")
-                                  (const_int 0)))
-              (clobber (match_operand:HI 2 ""))])
-   (parallel [(set (pc) (if_then_else (lt (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (ne (and:HI (match_dup 0) (const_int -32768))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
+  [(parallel [(set (reg:CC REG_CC)
+                   (compare:CC (match_operand:HI 0 "register_operand")
+                               (const_int 0)))
+              (clobber (match_operand:HI 2))])
+   (set (pc)
+        (if_then_else (lt (reg:CC REG_CC)
+                          (const_int 0))
+                      (label_ref (match_operand 1))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(parallel [(set (pc)
+                   (if_then_else (ne (and:HI (match_dup 0)
+                                             (const_int -32768))
+                                     (const_int 0))
+                                 (label_ref (match_dup 1))
+                                 (pc)))
               (clobber (reg:CC REG_CC))])])
 
 (define_peephole2
-  [(parallel [(set (reg:CC REG_CC) (compare:CC (match_operand:SI 0 "register_operand" "")
-                                  (const_int 0)))
-              (clobber (match_operand:SI 2 ""))])
-   (parallel [(set (pc) (if_then_else (ge (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (eq (and:SI (match_dup 0) (match_dup 2))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
+  [(parallel [(set (reg:CC REG_CC)
+                   (compare:CC (match_operand:SI 0 "register_operand")
+                               (const_int 0)))
+              (clobber (match_operand:SI 2))])
+   (set (pc)
+        (if_then_else (ge (reg:CC REG_CC)
+                          (const_int 0))
+                      (label_ref (match_operand 1))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(parallel [(set (pc)
+                   (if_then_else (eq (and:SI (match_dup 0)
+                                             (match_dup 2))
+                                     (const_int 0))
+                                 (label_ref (match_dup 1))
+                                 (pc)))
               (clobber (reg:CC REG_CC))])]
-  "operands[2] = gen_int_mode (-2147483647 - 1, SImode);")
+  {
+    operands[2] = gen_int_mode (-2147483647 - 1, SImode);
+  })
 
 (define_peephole2
-  [(parallel [(set (reg:CC REG_CC) (compare:CC (match_operand:SI 0 "register_operand" "")
-                                  (const_int 0)))
-              (clobber (match_operand:SI 2 ""))])
-   (parallel [(set (pc) (if_then_else (lt (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (ne (and:SI (match_dup 0) (match_dup 2))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
+  [(parallel [(set (reg:CC REG_CC)
+                   (compare:CC (match_operand:SI 0 "register_operand")
+                               (const_int 0)))
+              (clobber (match_operand:SI 2))])
+   (set (pc)
+        (if_then_else (lt (reg:CC REG_CC)
+                          (const_int 0))
+                      (label_ref (match_operand 1))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(parallel [(set (pc)
+                   (if_then_else (ne (and:SI (match_dup 0)
+                                             (match_dup 2))
+                                     (const_int 0))
+                                 (label_ref (match_dup 1))
+                                 (pc)))
               (clobber (reg:CC REG_CC))])]
-  "operands[2] = gen_int_mode (-2147483647 - 1, SImode);")
+  {
+    operands[2] = gen_int_mode (-2147483647 - 1, SImode);
+  })
 
 ;; ************************************************************************
 ;; Implementation of conditional jumps here.
 ;;  Compare with 0 (test) jumps
 ;; ************************************************************************
 
-(define_insn_and_split "branch"
+(define_insn "branch"
   [(set (pc)
         (if_then_else (match_operator 1 "simple_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (label_ref (match_operand 0 "" ""))
+                        [(reg:CC REG_CC)
+                         (const_int 0)])
+                      (label_ref (match_operand 0))
                       (pc)))]
   "reload_completed"
-  "#"
-  "&& reload_completed"
-  [(parallel [(set (pc)
-                   (if_then_else (match_op_dup 1
-                                               [(reg:CC REG_CC)
-                                                (const_int 0)])
-                                 (label_ref (match_dup 0))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*branch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "simple_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (label_ref (match_operand 0 "" ""))
-                      (pc)))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
   {
     return ret_cond_branch (operands[1], avr_jump_mode (operands[0], insn), 0);
   }
   [(set_attr "type" "branch")])
 
 
-;; Same as above but wrap SET_SRC so that this branch won't be transformed
-;; or optimized in the remainder.
-
-(define_insn "branch_unspec"
-  [(set (pc)
-        (unspec [(if_then_else (match_operator 1 "simple_comparison_operator"
-                                               [(reg:CC REG_CC)
-                                                (const_int 0)])
-                               (label_ref (match_operand 0 "" ""))
-                               (pc))
-                 ] UNSPEC_IDENTITY))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
-  {
-    return ret_cond_branch (operands[1], avr_jump_mode (operands[0], insn), 0);
-  }
-  [(set_attr "type" "branch")])
-
-;; ****************************************************************
-;; AVR does not have following conditional jumps: LE,LEU,GT,GTU.
-;; Convert them all to proper jumps.
-;; ****************************************************************/
-
-(define_insn_and_split "difficult_branch"
+(define_insn "difficult_branch"
   [(set (pc)
         (if_then_else (match_operator 1 "difficult_comparison_operator"
                         [(reg:CC REG_CC)
@@ -7041,95 +7012,11 @@ (define_insn_and_split "difficult_branch"
                       (label_ref (match_operand 0 "" ""))
                       (pc)))]
   "reload_completed"
-  "#"
-  "&& reload_completed"
-  [(parallel [(set (pc)
-                   (if_then_else (match_op_dup 1
-                                   [(reg:CC REG_CC)
-                                    (const_int 0)])
-                                 (label_ref (match_dup 0))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*difficult_branch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "difficult_comparison_operator"
-                        [(reg:CC REG_CC)
-                         (const_int 0)])
-                      (label_ref (match_operand 0 "" ""))
-                      (pc)))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
   {
     return ret_cond_branch (operands[1], avr_jump_mode (operands[0], insn), 0);
   }
   [(set_attr "type" "branch1")])
 
-;; revers branch
-
-(define_insn_and_split "rvbranch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "simple_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (pc)
-                      (label_ref (match_operand 0 "" ""))))]
-  "reload_completed"
-  "#"
-  "&& reload_completed"
-  [(parallel [(set (pc)
-                   (if_then_else (match_op_dup 1
-                                               [(reg:CC REG_CC)
-                                                (const_int 0)])
-                                 (pc)
-                                 (label_ref (match_dup 0))))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*rvbranch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "simple_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (pc)
-                      (label_ref (match_operand 0 "" ""))))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
-  {
-    return ret_cond_branch (operands[1], avr_jump_mode (operands[0], insn), 1);
-  }
-  [(set_attr "type" "branch1")])
-
-(define_insn_and_split "difficult_rvbranch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "difficult_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (pc)
-                      (label_ref (match_operand 0 "" ""))))]
-  "reload_completed"
-  "#"
-  "&& reload_completed"
-  [(parallel [(set (pc)
-                   (if_then_else (match_op_dup 1
-                                               [(reg:CC REG_CC)
-                                                (const_int 0)])
-                                 (pc)
-                                 (label_ref (match_dup 0))))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*difficult_rvbranch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "difficult_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (pc)
-                      (label_ref (match_operand 0 "" ""))))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
-  {
-    return ret_cond_branch (operands[1], avr_jump_mode (operands[0], insn), 1);
-  }
-  [(set_attr "type" "branch")])
 
 ;; **************************************************************************
 ;; Unconditional and other jump instructions.
@@ -7655,15 +7542,14 @@ (define_peephole ; "*dec-and-branchsi!=-1.d.clobber"
               (clobber (reg:CC REG_CC))])
    (parallel [(set (reg:CC REG_CC)
                    (compare:CC (match_dup 0)
-                            (const_int -1)))
-              (clobber (match_operand:QI 1 "d_register_operand" ""))])
-   (parallel [(set (pc)
-                   (if_then_else (eqne (reg:CC REG_CC)
-                                       (const_int 0))
-                                 (label_ref (match_operand 2 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
+                               (const_int -1)))
+              (clobber (match_operand:QI 1 "scratch_or_d_register_operand"))])
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "dead_or_set_regno_p (insn, REG_CC)"
   {
     const char *op;
     int jump_mode;
@@ -7699,15 +7585,14 @@ (define_peephole ; "*dec-and-branchhi!=-1"
               (clobber (reg:CC REG_CC))])
    (parallel [(set (reg:CC REG_CC)
                    (compare:CC (match_dup 0)
-                            (const_int -1)))
+                               (const_int -1)))
               (clobber (match_operand:QI 1 "d_register_operand" ""))])
-   (parallel [(set (pc)
-                   (if_then_else (eqne (reg:CC REG_CC)
-                                       (const_int 0))
-                                 (label_ref (match_operand 2 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "dead_or_set_regno_p (insn, REG_CC)"
   {
     const char *op;
     int jump_mode;
@@ -7741,15 +7626,14 @@ (define_peephole ; "*dec-and-branchhi!=-1.d.clobber"
               (clobber (reg:CC REG_CC))])
    (parallel [(set (reg:CC REG_CC)
                    (compare:CC (match_dup 0)
-                            (const_int -1)))
-              (clobber (match_operand:QI 1 "d_register_operand" ""))])
-   (parallel [(set (pc)
-                   (if_then_else (eqne (reg:CC REG_CC)
-                                       (const_int 0))
-                                 (label_ref (match_operand 2 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
+                               (const_int -1)))
+              (clobber (match_operand:QI 1 "scratch_or_d_register_operand"))])
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "dead_or_set_regno_p (insn, REG_CC)"
   {
     const char *op;
     int jump_mode;
@@ -7783,15 +7667,14 @@ (define_peephole ; "*dec-and-branchhi!=-1.l.clobber"
               (clobber (reg:CC REG_CC))])
    (parallel [(set (reg:CC REG_CC)
                    (compare:CC (match_dup 0)
-                            (const_int -1)))
+                               (const_int -1)))
               (clobber (match_operand:QI 1 "d_register_operand" ""))])
-   (parallel [(set (pc)
-                   (if_then_else (eqne (reg:CC REG_CC)
-                                       (const_int 0))
-                                 (label_ref (match_operand 2 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "dead_or_set_regno_p (insn, REG_CC)"
   {
     const char *op;
     int jump_mode;
@@ -7821,14 +7704,13 @@ (define_peephole ; "*dec-and-branchqi!=-1"
               (clobber (reg:CC REG_CC))])
    (set (reg:CC REG_CC)
         (compare:CC (match_dup 0)
-                 (const_int -1)))
-   (parallel [(set (pc)
-                   (if_then_else (eqne (reg:CC REG_CC)
-                                       (const_int 0))
-                                 (label_ref (match_operand 1 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
+                    (const_int -1)))
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 1))
+                      (pc)))]
+  "dead_or_set_regno_p (insn, REG_CC)"
   {
     const char *op;
     int jump_mode;
@@ -7854,14 +7736,14 @@ (define_peephole ; "*dec-and-branchqi!=-1"
 (define_peephole ; "*cpse.eq"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:ALL1 1 "register_operand" "r,r")
-                 (match_operand:ALL1 2 "reg_or_0_operand" "r,Y00")))
-   (parallel [(set (pc)
-                   (if_then_else (eq (reg:CC REG_CC)
-                                     (const_int 0))
-                                 (label_ref (match_operand 0 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  "jump_over_one_insn_p (insn, operands[0])"
+                    (match_operand:ALL1 2 "reg_or_0_operand" "r,Y00")))
+   (set (pc)
+        (if_then_else (eq (reg:CC REG_CC)
+                          (const_int 0))
+                      (label_ref (match_operand 0))
+                      (pc)))]
+  "jump_over_one_insn_p (insn, operands[0])
+   && dead_or_set_regno_p (insn, REG_CC)"
   "@
 	cpse %1,%2
 	cpse %1,__zero_reg__")
@@ -7889,16 +7771,16 @@ (define_peephole ; "*cpse.eq"
 
 (define_peephole ; "*cpse.ne"
   [(set (reg:CC REG_CC)
-        (compare:CC (match_operand:ALL1 1 "register_operand" "")
-                 (match_operand:ALL1 2 "reg_or_0_operand" "")))
-   (parallel [(set (pc)
-                   (if_then_else (ne (reg:CC REG_CC)
-                                     (const_int 0))
-                                 (label_ref (match_operand 0 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  "!AVR_HAVE_JMP_CALL
-   || !TARGET_SKIP_BUG"
+        (compare:CC (match_operand:ALL1 1 "register_operand")
+                    (match_operand:ALL1 2 "reg_or_0_operand")))
+   (set (pc)
+        (if_then_else (ne (reg:CC REG_CC)
+                          (const_int 0))
+                      (label_ref (match_operand 0))
+                      (pc)))]
+  "(!AVR_HAVE_JMP_CALL
+    || !TARGET_SKIP_BUG)
+   && dead_or_set_regno_p (insn, REG_CC)"
   {
     if (operands[2] == CONST0_RTX (<MODE>mode))
       operands[2] = zero_reg_rtx;
@@ -8093,7 +7975,7 @@ (define_insn_and_split "delay_cycles_1"
                                 (const_int 1)]
                                UNSPECV_DELAY_CYCLES)
               (set (match_dup 1)
-               (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
+                   (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
               (clobber (match_dup 2))
               (clobber (reg:CC REG_CC))])])
 
@@ -8125,7 +8007,7 @@ (define_insn_and_split "delay_cycles_2"
                                 (const_int 2)]
                                UNSPECV_DELAY_CYCLES)
               (set (match_dup 1)
-               (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
+                   (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
               (clobber (match_dup 2))
               (clobber (reg:CC REG_CC))])]
   ""
@@ -8162,7 +8044,7 @@ (define_insn_and_split "delay_cycles_3"
                                 (const_int 3)]
                                UNSPECV_DELAY_CYCLES)
               (set (match_dup 1)
-               (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
+                   (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
               (clobber (match_dup 2))
               (clobber (match_dup 3))
               (clobber (match_dup 4))
@@ -8205,7 +8087,7 @@ (define_insn_and_split "delay_cycles_4"
                                 (const_int 4)]
                                UNSPECV_DELAY_CYCLES)
               (set (match_dup 1)
-               (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
+                   (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
               (clobber (match_dup 2))
               (clobber (match_dup 3))
               (clobber (match_dup 4))
@@ -9488,6 +9370,258 @@ (define_peephole2
               (clobber (reg:CC REG_CC))])])
 
 
+;; Try to optimize decrement-and-branch.  When we have an addition followed
+;; by a comparison of the result against zero, we can output the addition
+;; in such a way that SREG.N and SREG.Z are set according to the result.
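+;;
+;; For instance (a hand-written sketch, not actual compiler output), a
+;; counting loop can then decrement and branch as
+;;
+;;     subi r24,1    ; SREG.Z and SREG.N reflect the result
+;;     brne .Loop
+;;
+;; without a separate compare of r24 against zero in between.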
+
+;; { -1, +1 } for QImode, otherwise the empty set.
+(define_mode_attr p1m1 [(QI "N P")
+                        (HI "Yxx") (PSI "Yxx") (SI "Yxx")])
+
+;; FIXME: reload1.cc::do_output_reload() does not support output reloads
+;; for JUMP_INSNs, hence letting combine do decrement-and-branch like
+;; the following might run into an ICE.  Doing reloads by hand is too painful...
+;
+; (define_insn_and_split "*add.for.eqne.<mode>.cbranch"
+;   [(set (pc)
+;         (if_then_else (eqne (match_operand:QISI 1 "register_operand"  "0")
+;                             (match_operand:QISI 2 "const_int_operand" "n"))
+;                       (label_ref (match_operand 4))
+;                       (pc)))
+;    (set (match_operand:QISI 0 "register_operand" "=r")
+;         (plus:QISI (match_dup 1)
+;                    (match_operand:QISI 3 "const_int_operand" "n")))]
+;   ;; No clobber for now as combine might not have one handy.
+;   ;; We pop a scratch in split1.
+;   "!reload_completed
+;    && const0_rtx == simplify_binary_operation (PLUS, <MODE>mode,
+;                                                operands[2], operands[3])"
+;   { gcc_unreachable(); }
+;   "&& 1"
+;   [(parallel [(set (pc)
+;                    (if_then_else (eqne (match_dup 1)
+;                                        (match_dup 2))
+;                                  (label_ref (match_dup 4))
+;                                  (pc)))
+;               (set (match_dup 0)
+;                    (plus:QISI (match_dup 1)
+;                               (match_dup 3)))
+;               (clobber (scratch:QI))])])
+;
+;; ...Hence, stick with RTL peepholes for now.  Unfortunately, there is no
+;; canonical form, and if reload shuffles registers around, we might miss
+;; opportunities to match a decrement-and-branch.
+;; doloop_end doesn't reload either, so doloop_end also won't work.
+
+(define_expand "gen_add_for_<code>_<mode>"
+  ; "*add.for.eqne.<mode>"
+  [(parallel [(set (reg:CC REG_CC)
+                   (compare:CC (plus:QISI (match_operand:QISI 0 "register_operand")
+                                          (match_operand:QISI 1 "const_int_operand"))
+                               (const_int 0)))
+              (set (match_dup 0)
+                   (plus:QISI (match_dup 0)
+                              (match_dup 1)))
+              (clobber (match_operand:QI 3))])
+   ; "branch"
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_dup 2))
+                      (pc)))])
+
+
+;; 1/3: A version without clobber: d-reg or 8-bit adds +/-1.
+(define_peephole2
+  [(parallel [(set (match_operand:QISI 0 "register_operand")
+                   (plus:QISI (match_dup 0)
+                              (match_operand:QISI 1 "const_int_operand")))
+              (clobber (reg:CC REG_CC))])
+   (set (reg:CC REG_CC)
+        (compare:CC (match_dup 0)
+                    (const_int 0)))
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "peep2_regno_dead_p (3, REG_CC)
+   && (d_register_operand (operands[0], <MODE>mode)
+       || (<MODE>mode == QImode
+           && (INTVAL (operands[1]) == 1
+               || INTVAL (operands[1]) == -1)))"
+  [(scratch)]
+  {
+    emit (gen_gen_add_for_<code>_<mode> (operands[0], operands[1], operands[2],
+          gen_rtx_SCRATCH (QImode)));
+    DONE;
+  })
+
+;; 2/3: A version with clobber from the insn.
+(define_peephole2
+  [(parallel [(set (match_operand:QISI 0 "register_operand")
+                   (plus:QISI (match_dup 0)
+                              (match_operand:QISI 1 "const_int_operand")))
+              (clobber (match_operand:QI 3 "scratch_or_d_register_operand"))
+              (clobber (reg:CC REG_CC))])
+   (parallel [(set (reg:CC REG_CC)
+                   (compare:CC (match_dup 0)
+                               (const_int 0)))
+              (clobber (match_operand:QI 4 "scratch_or_d_register_operand"))])
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "peep2_regno_dead_p (3, REG_CC)"
+  [(scratch)]
+  {
+    rtx scratch = REG_P (operands[3]) ? operands[3] : operands[4];
+
+    // We need either a d-register or a scratch register to clobber.
+    if (! REG_P (scratch)
+        && ! d_register_operand (operands[0], <MODE>mode)
+        && ! (QImode == <MODE>mode
+              && (INTVAL (operands[1]) == 1
+                  || INTVAL (operands[1]) == -1)))
+      {
+        FAIL;
+      }
+    emit (gen_gen_add_for_<code>_<mode> (operands[0], operands[1], operands[2],
+          scratch));
+    DONE;
+  })
+
+;; 3/3 A version with a clobber from peephole2.
+(define_peephole2
+  [(match_scratch:QI 3 "d")
+   (parallel [(set (match_operand:QISI 0 "register_operand")
+                   (plus:QISI (match_dup 0)
+                              (match_operand:QISI 1 "const_int_operand")))
+              (clobber (reg:CC REG_CC))])
+   (set (reg:CC REG_CC)
+        (compare:CC (match_dup 0)
+                    (const_int 0)))
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "peep2_regno_dead_p (3, REG_CC)"
+  [(scratch)]
+  {
+    emit (gen_gen_add_for_<code>_<mode> (operands[0], operands[1], operands[2],
+          operands[3]));
+    DONE;
+  })
+
+;; Result of the above three peepholes is an addition that also
+;; performs an EQ or NE comparison (of the result) against zero.
+;; FIXME: Using (match_dup 0) instead of operands[3/4] makes rnregs
+;; barf in regrename.cc::merge_overlapping_regs().  For now, use the
+;; fix from PR50788: Constrain as "0".
+(define_insn "*add.for.eqne.<mode>"
+  [(set (reg:CC REG_CC)
+        (compare:CC
+         (plus:QISI (match_operand:QISI 3 "register_operand"  "0,0     ,0")
+                    (match_operand:QISI 1 "const_int_operand" "n,<p1m1>,n"))
+         (const_int 0)))
+   (set (match_operand:QISI 0 "register_operand"             "=d,*r    ,r")
+        (plus:QISI (match_operand:QISI 4 "register_operand"   "0,0     ,0")
+                   (match_dup 1)))
+   (clobber (match_scratch:QI 2                              "=X,X     ,&d"))]
+  "reload_completed"
+  {
+    return avr_out_plus_set_ZN (operands, nullptr);
+  }
+  [(set_attr "adjust_len" "add_set_ZN")])
+
+
+;; Swapping both comparison and branch condition.  This can turn difficult
+;; branches into easy ones.  And in some cases, a comparison against one can
+;; be turned into a comparison against zero.
+
+(define_peephole2 ; "*swapped_tst<mode>"
+  [(parallel [(set (reg:CC REG_CC)
+                   (compare:CC (match_operand:ORDERED234 1 "register_operand")
+                               (match_operand:ORDERED234 2 "const_operand")))
+              (clobber (match_operand:QI 3 "scratch_operand"))])
+   (set (pc)
+        (if_then_else (match_operator 0 "ordered_comparison_operator"
+                        [(reg:CC REG_CC)
+                         (const_int 0)])
+                      (label_ref (match_operand 4))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(set (reg:CC REG_CC)
+        (compare:CC (match_dup 2)
+                    (match_dup 1)))
+   ; "branch"
+   (set (pc)
+        (if_then_else (match_op_dup 0 [(reg:CC REG_CC)
+                                       (const_int 0)])
+                      (label_ref (match_dup 4))
+                      (pc)))]
+  {
+    rtx xval = avr_to_int_mode (operands[2]);
+    enum rtx_code code = GET_CODE (operands[0]);
+
+    if (code == GT && xval == const0_rtx)
+      code = LT;
+    else if (code == GE && xval == const1_rtx)
+      code = LT;
+    else if (code == LE && xval == const0_rtx)
+      code = GE;
+    else if (code == LT && xval == const1_rtx)
+      code = GE;
+    else
+      FAIL;
+
+    operands[2] = CONST0_RTX (<MODE>mode);
+    PUT_CODE (operands[0], code);
+  })
+
+;; Same, but for 8-bit modes which have no scratch reg.
+(define_peephole2 ; "*swapped_tst<mode>"
+  [(set (reg:CC REG_CC)
+        (compare:CC (match_operand:ALL1 1 "register_operand")
+                    (match_operand:ALL1 2 "const_operand")))
+   (set (pc)
+        (if_then_else (match_operator 0 "ordered_comparison_operator"
+                        [(reg:CC REG_CC)
+                         (const_int 0)])
+                      (label_ref (match_operand 4))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(set (reg:CC REG_CC)
+        (compare:CC (match_dup 2)
+                    (match_dup 1)))
+   ; "branch"
+   (set (pc)
+        (if_then_else (match_op_dup 0 [(reg:CC REG_CC)
+                                       (const_int 0)])
+                      (label_ref (match_dup 4))
+                      (pc)))]
+  {
+    rtx xval = avr_to_int_mode (operands[2]);
+    enum rtx_code code = GET_CODE (operands[0]);
+
+    if (code == GT && xval == const0_rtx)
+      code = LT;
+    else if (code == GE && xval == const1_rtx)
+      code = LT;
+    else if (code == LE && xval == const0_rtx)
+      code = GE;
+    else if (code == LT && xval == const1_rtx)
+      code = GE;
+    else
+      FAIL;
+
+    operands[2] = CONST0_RTX (<MODE>mode);
+    PUT_CODE (operands[0], code);
+  })
+
+
 (define_expand "extzv"
   [(set (match_operand:QI 0 "register_operand" "")
         (zero_extract:QI (match_operand:QI 1 "register_operand"  "")
diff --git a/gcc/config/avr/constraints.md b/gcc/config/avr/constraints.md
index ac43678872d..f840c0114a0 100644
--- a/gcc/config/avr/constraints.md
+++ b/gcc/config/avr/constraints.md
@@ -245,6 +245,11 @@ (define_constraint "Ym2"
 	    (match_test "INTVAL (avr_to_int_mode (op)) == -2"))
        (match_test "satisfies_constraint_Cm2 (op)")))
 
+;; Constraint that's the empty set.  Useful with mode and code iterators.
+(define_constraint "Yxx"
+  "A constraints that is always false"
+  (match_test "false"))
+
 (define_constraint "Yx2"
   "Fixed-point or integer constant not in the range @minus{}2 @dots{} 2"
   (and (ior (match_code "const_int")
diff --git a/gcc/config/avr/predicates.md b/gcc/config/avr/predicates.md
index e352326466f..f4ee42af274 100644
--- a/gcc/config/avr/predicates.md
+++ b/gcc/config/avr/predicates.md
@@ -27,6 +27,11 @@ (define_predicate "d_register_operand"
   (and (match_code "reg")
        (match_test "REGNO (op) >= 16 && REGNO (op) <= 31")))
 
+(define_predicate "scratch_or_d_register_operand"
+  (ior (match_operand 0 "d_register_operand")
+       (and (match_code ("scratch"))
+            (match_operand 0 "scratch_operand"))))
+
 (define_predicate "even_register_operand"
   (and (match_code "reg")
        (and (match_test "REGNO (op) <= 31")

[-- Attachment #3: pr109650-test.diff --]
[-- Type: text/x-patch, Size: 1712 bytes --]

diff --git a/gcc/testsuite/gcc.target/avr/torture/pr109650-1.c b/gcc/testsuite/gcc.target/avr/torture/pr109650-1.c
new file mode 100644
index 00000000000..9030db00fde
--- /dev/null
+++ b/gcc/testsuite/gcc.target/avr/torture/pr109650-1.c
@@ -0,0 +1,63 @@
+/* { dg-do run } */
+/* { dg-options { -std=c99 } } */
+
+typedef _Bool bool;
+typedef __UINT8_TYPE__ uint8_t;
+
+static inline __attribute__((__always_inline__))
+bool func1a (bool p1, uint8_t p2)
+{
+  if (p1)
+    return p2 <= 8;
+  return p2 <= 2;
+}
+
+__attribute__((__noinline__, __noclone__))
+bool func1b (bool p1, uint8_t p2)
+{
+  return func1a (p1, p2);
+}
+
+static inline __attribute__((__always_inline__))
+bool func2a (bool p1, unsigned p2)
+{
+  if (p1)
+    return p2 <= 8;
+  return p2 <= 2;
+}
+
+__attribute__((__noinline__, __noclone__))
+bool func2b (bool p1, unsigned p2)
+{
+  return func2a (p1, p2);
+}
+
+void test1 (void)
+{
+  if (func1a (0, 1) != func1b (0, 1)) __builtin_abort();
+  if (func1a (0, 2) != func1b (0, 2)) __builtin_abort();
+  if (func1a (0, 3) != func1b (0, 3)) __builtin_abort();
+
+  if (func1a (1, 7) != func1b (1, 7)) __builtin_abort();
+  if (func1a (1, 8) != func1b (1, 8)) __builtin_abort();
+  if (func1a (1, 9) != func1b (1, 9)) __builtin_abort();
+}
+
+void test2 (void)
+{
+  if (func2a (0, 1) != func2b (0, 1)) __builtin_abort();
+  if (func2a (0, 2) != func2b (0, 2)) __builtin_abort();
+  if (func2a (0, 3) != func2b (0, 3)) __builtin_abort();
+
+  if (func2a (1, 7) != func2b (1, 7)) __builtin_abort();
+  if (func2a (1, 8) != func2b (1, 8)) __builtin_abort();
+  if (func2a (1, 9) != func2b (1, 9)) __builtin_abort();
+}
+
+int main (void)
+{
+  test1();
+  test2();
+
+  __builtin_exit (0);
+}

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [patch,avr] Fix PR109650 wrong code
  2023-05-15 18:05 [patch,avr] Fix PR109650 wrong code Georg-Johann Lay
@ 2023-05-19  8:17 ` Georg-Johann Lay
  2023-05-19  8:49 ` Georg-Johann Lay
  1 sibling, 0 replies; 6+ messages in thread
From: Georg-Johann Lay @ 2023-05-19  8:17 UTC (permalink / raw)
  To: gcc-patches; +Cc: Jeff Law, Denis Chertykov, Senthil Kumar Selvaraj

Here is a revised version of the patch.  The difference from the
previous one is that it adds some combine patterns for *cbranch
insns that were lost in the PR92729 transition.  The post-reload
parts of the patterns were still there.  The new patterns are
slightly more general in that they also handle fixed-point modes.
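
As an illustration (hand-written, not part of the patch or of the
testsuite), the shape these combine patterns are after is a wide
comparison where one operand is a zero-extended QImode register, as can
arise from code like:

extern void hit (void);

void foo (unsigned int w, unsigned char b)
{
  /* On avr, int is 16 bits.  The promotion of b can end up as
     (zero_extend:HI (reg:QI)) inside the HImode comparison, which is
     the kind of shape the *cbranchhi.zero-extend.* patterns are meant
     to catch.  */
  if (w == b)
    hit ();
}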

Apart from that, the patch behaves the same:

Am 15.05.23 um 20:05 schrieb Georg-Johann Lay:
> This patch fixes a wrong-code bug in the wake of PR92729, the transition
> that turned the AVR backend from cc0 to CCmode.  In cc0, the insn that
> uses cc0 like a conditional branch always follows the cc0 setter, which
> is no more the case with CCmode where set and use of REG_CC might be in
> different basic blocks.
> 
> This patch removes the machine-dependent reorg pass in avr_reorg entirely.
> 
> It is replaced by a new, AVR specific mini-pass that runs prior to
> split2. Canonicalization of comparisons away from the "difficult"
> codes GT[U] and LE[U] is now mostly performed by implementing
> TARGET_CANONICALIZE_COMPARISON.
> 
> Moreover:
> 
> * Text peephole conditions get "dead_or_set_regno_p (*, REG_CC)" as
> needed.
> 
> * RTL peephole conditions get "peep2_regno_dead_p (*, REG_CC)" as
> needed.
> 
> * Conditional branches no more clobber REG_CC.
> 
> * insn output for compares looks ahead to determine the branch mode in
> use. This needs also "dead_or_set_regno_p (*, REG_CC)".
> 
> * Add RTL peepholes for decrement-and-branch detection.
> 
> Finally, it fixes some of the many indentation glitches left over from
> PR92729.
> 
> Ok?
> 
> I'd also backport this one because all of v12+ is affected by the wrong 
> code.

Johann

--

gcc/
	PR target/109650
	PR target/97279

	* config/avr/avr-passes.def (avr_pass_ifelse): Insert new pass.
	* config/avr/avr.cc (avr_pass_ifelse): New RTL pass.
	(avr_pass_data_ifelse): New pass_data for it.
	(make_avr_pass_ifelse, avr_redundant_compare, avr_cbranch_cost)
	(avr_canonicalize_comparison, avr_out_plus_set_ZN)
	(avr_out_cmp_ext): New functions.
	(compare_condition): Make sure REG_CC dies in the branch insn.
	(avr_rtx_costs_1): Add computation of cbranch costs.
	(avr_adjust_insn_length) [ADJUST_LEN_ADD_SET_ZN, ADJUST_LEN_CMP_ZEXT]
	[ADJUST_LEN_CMP_SEXT]: Handle them.
	(TARGET_CANONICALIZE_COMPARISON): New define.
	(avr_simplify_comparison_p, compare_diff_p, avr_compare_pattern)
	(avr_reorg_remove_redundant_compare, avr_reorg): Remove functions.
	(TARGET_MACHINE_DEPENDENT_REORG): Remove define.

	* config/avr/avr-protos.h (avr_simplify_comparison_p): Remove proto.
	(make_avr_pass_ifelse, avr_out_plus_set_ZN, cc_reg_rtx)
	(avr_out_cmp_ext): New protos.

	* config/avr/avr.md (branch, difficult_branch): Don't split insns.
	(*cbranchhi.zero-extend.0, *cbranchhi.zero-extend.1)
	(*swapped_tst<mode>, *add.for.eqne.<mode>): New insns.
	(*cbranch<mode>4): Rename to cbranch<mode>4_insn.
	(define_peephole): Add dead_or_set_regno_p(insn,REG_CC) as needed.
	(define_peephole2): Add peep2_regno_dead_p(*,REG_CC) as needed.
	Add new RTL peepholes for decrement-and-branch and *swapped_tst<mode>.
	Rework signtest-and-branch peepholes for *sbrx_branch<mode>.
	(adjust_len) [add_set_ZN, cmp_zext]: New.
	(QIPSI): New mode iterator.
	(ALLs1, ALLs2, ALLs4, ALLs234): New mode iterators.
	(gelt): New code iterator.
	(gelt_eqne): New code attribute.
	(rvbranch, *rvbranch, difficult_rvbranch, *difficult_rvbranch)
	(branch_unspec, *negated_tst<mode>, *reversed_tst<mode>)
	(*cmpqi_sign_extend): Remove insns.
	(define_c_enum "unspec") [UNSPEC_IDENTITY]: Remove.

	* config/avr/avr-dimode.md (cbranch<mode>4): Canonicalize comparisons.
	* config/avr/predicates.md (scratch_or_d_register_operand): New.
	* config/avr/constraints.md (Yxx): New constraint.

gcc/testsuite/
	PR target/109650
	* gcc.target/avr/torture/pr109650-1.c: New test.
	* gcc.target/avr/torture/pr109650-2.c: New test.

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [patch,avr] Fix PR109650 wrong code
  2023-05-15 18:05 [patch,avr] Fix PR109650 wrong code Georg-Johann Lay
  2023-05-19  8:17 ` Georg-Johann Lay
@ 2023-05-19  8:49 ` Georg-Johann Lay
  2023-05-30 13:15   ` Ping #1: " Georg-Johann Lay
                     ` (2 more replies)
  1 sibling, 3 replies; 6+ messages in thread
From: Georg-Johann Lay @ 2023-05-19  8:49 UTC (permalink / raw)
  To: gcc-patches; +Cc: Jeff Law, Denis Chertykov, Senthil Kumar Selvaraj

[-- Attachment #1: Type: text/plain, Size: 3896 bytes --]

...Ok, and now with the patch attached...

Here is a revised version of the patch.  The difference from the
previous one is that it adds some combine patterns for *cbranch
insns that were lost in the PR92729 transition.  The post-reload
parts of the patterns were still there.  The new patterns are
slightly more general in that they also handle fixed-point modes.

Apart from that, the patch behaves the same:

Am 15.05.23 um 20:05 schrieb Georg-Johann Lay:
> This patch fixes a wrong-code bug in the wake of PR92729, the transition
> that turned the AVR backend from cc0 to CCmode.  In cc0, the insn that
> uses cc0 like a conditional branch always follows the cc0 setter, which
> is no more the case with CCmode where set and use of REG_CC might be in
> different basic blocks.
> 
> This patch removes the machine-dependent reorg pass in avr_reorg entirely.
> 
> It is replaced by a new, AVR specific mini-pass that runs prior to
> split2. Canonicalization of comparisons away from the "difficult"
> codes GT[U] and LE[U] is now mostly performed by implementing
> TARGET_CANONICALIZE_COMPARISON.
> 
> Moreover:
> 
> * Text peephole conditions get "dead_or_set_regno_p (*, REG_CC)" as
> needed.
> 
> * RTL peephole conditions get "peep2_regno_dead_p (*, REG_CC)" as
> needed.
> 
> * Conditional branches no more clobber REG_CC.
> 
> * insn output for compares looks ahead to determine the branch mode in
> use. This needs also "dead_or_set_regno_p (*, REG_CC)".
> 
> * Add RTL peepholes for decrement-and-branch detection.
> 
> Finally, it fixes some of the many indentation glitches left over from
> PR92729.
> 
> Ok?
> 
> I'd also backport this one because all of v12+ is affected by the wrong 
> code.
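
To make the switch / case situation concrete, here is a hand-written
sketch of a decision tree the new avr-ifelse mini-pass is after (the
names are made up; this is not part of the patch or of the testsuite):

extern void f_eq (void);
extern void f_gt (void);
extern void f_lt (void);

void dispatch (unsigned char x)
{
  if (x == 4)
    f_eq ();
  else if (x > 4)   /* unsigned, i.e. x >= 5 */
    f_gt ();
  else
    f_lt ();
}

Roughly speaking, without the pass the second branch needs its own
compare against 5; with it, both conditional branches can test the SREG
state left by the single compare against 4.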

Johann

--

gcc/
	PR target/109650
	PR target/92729

	* config/avr/avr-passes.def (avr_pass_ifelse): Insert new pass.
	* config/avr/avr.cc (avr_pass_ifelse): New RTL pass.
	(avr_pass_data_ifelse): New pass_data for it.
	(make_avr_pass_ifelse, avr_redundant_compare, avr_cbranch_cost)
	(avr_canonicalize_comparison, avr_out_plus_set_ZN)
	(avr_out_cmp_ext): New functions.
	(compare_condition): Make sure REG_CC dies in the branch insn.
	(avr_rtx_costs_1): Add computation of cbranch costs.
	(avr_adjust_insn_length) [ADJUST_LEN_ADD_SET_ZN, ADJUST_LEN_CMP_ZEXT]
	[ADJUST_LEN_CMP_SEXT]: Handle them.
	(TARGET_CANONICALIZE_COMPARISON): New define.
	(avr_simplify_comparison_p, compare_diff_p, avr_compare_pattern)
	(avr_reorg_remove_redundant_compare, avr_reorg): Remove functions.
	(TARGET_MACHINE_DEPENDENT_REORG): Remove define.

	* config/avr/avr-protos.h (avr_simplify_comparison_p): Remove proto.
	(make_avr_pass_ifelse, avr_out_plus_set_ZN, cc_reg_rtx)
	(avr_out_cmp_ext): New protos.

	* config/avr/avr.md (branch, difficult_branch): Don't split insns.
	(*cbranchhi.zero-extend.0, *cbranchhi.zero-extend.1)
	(*swapped_tst<mode>, *add.for.eqne.<mode>): New insns.
	(*cbranch<mode>4): Rename to cbranch<mode>4_insn.
	(define_peephole): Add dead_or_set_regno_p(insn,REG_CC) as needed.
	(define_peephole2): Add peep2_regno_dead_p(*,REG_CC) as needed.
	Add new RTL peepholes for decrement-and-branch and *swapped_tst<mode>.
	Rework signtest-and-branch peepholes for *sbrx_branch<mode>.
	(adjust_len) [add_set_ZN, cmp_zext]: New.
	(QIPSI): New mode iterator.
	(ALLs1, ALLs2, ALLs4, ALLs234): New mode iterators.
	(gelt): New code iterator.
	(gelt_eqne): New code attribute.
	(rvbranch, *rvbranch, difficult_rvbranch, *difficult_rvbranch)
	(branch_unspec, *negated_tst<mode>, *reversed_tst<mode>)
	(*cmpqi_sign_extend): Remove insns.
	(define_c_enum "unspec") [UNSPEC_IDENTITY]: Remove.

	* config/avr/avr-dimode.md (cbranch<mode>4): Canonicalize comparisons.
	* config/avr/predicates.md (scratch_or_d_register_operand): New.
	* config/avr/constraints.md (Yxx): New constraint.

gcc/testsuite/
	PR target/109650
	* gcc.target/avr/torture/pr109650-1.c: New test.
	* gcc.target/avr/torture/pr109650-2.c: New test.

[-- Attachment #2: pr109650-v4.diff --]
[-- Type: text/x-patch, Size: 98020 bytes --]

diff --git a/gcc/config/avr/avr-dimode.md b/gcc/config/avr/avr-dimode.md
index c0bb04ff9e0..91f0d395761 100644
--- a/gcc/config/avr/avr-dimode.md
+++ b/gcc/config/avr/avr-dimode.md
@@ -455,12 +455,18 @@ (define_expand "conditional_jump"
 (define_expand "cbranch<mode>4"
   [(set (pc)
         (if_then_else (match_operator 0 "ordered_comparison_operator"
-                        [(match_operand:ALL8 1 "register_operand"  "")
-                         (match_operand:ALL8 2 "nonmemory_operand" "")])
-         (label_ref (match_operand 3 "" ""))
-         (pc)))]
+                        [(match_operand:ALL8 1 "register_operand")
+                         (match_operand:ALL8 2 "nonmemory_operand")])
+                      (label_ref (match_operand 3))
+                      (pc)))]
   "avr_have_dimode"
    {
+    int icode = (int) GET_CODE (operands[0]);
+
+    targetm.canonicalize_comparison (&icode, &operands[1], &operands[2], false);
+    operands[0] = gen_rtx_fmt_ee ((enum rtx_code) icode,
+                                  VOIDmode, operands[1], operands[2]);
+
     rtx acc_a = gen_rtx_REG (<MODE>mode, ACC_A);
 
     avr_fix_inputs (operands, 1 << 2, regmask (<MODE>mode, ACC_A));
@@ -490,8 +496,8 @@ (define_insn_and_split "cbranch_<mode>2_split"
         (if_then_else (match_operator 0 "ordered_comparison_operator"
                         [(reg:ALL8 ACC_A)
                          (reg:ALL8 ACC_B)])
-         (label_ref (match_operand 1 "" ""))
-         (pc)))]
+                      (label_ref (match_operand 1))
+                      (pc)))]
   "avr_have_dimode"
   "#"
   "&& reload_completed"
@@ -544,8 +550,8 @@ (define_insn_and_split "cbranch_const_<mode>2_split"
         (if_then_else (match_operator 0 "ordered_comparison_operator"
                         [(reg:ALL8 ACC_A)
                          (match_operand:ALL8 1 "const_operand" "n Ynn")])
-         (label_ref (match_operand 2 "" ""))
-         (pc)))
+                      (label_ref (match_operand 2 "" ""))
+                      (pc)))
    (clobber (match_scratch:QI 3 "=&d"))]
   "avr_have_dimode
    && !s8_operand (operands[1], VOIDmode)"
diff --git a/gcc/config/avr/avr-passes.def b/gcc/config/avr/avr-passes.def
index 02ca737deee..9ff9b23f62c 100644
--- a/gcc/config/avr/avr-passes.def
+++ b/gcc/config/avr/avr-passes.def
@@ -43,3 +43,23 @@ INSERT_PASS_BEFORE (pass_free_cfg, 1, avr_pass_recompute_notes);
    insns withaout any insns in between.  */
 
 INSERT_PASS_AFTER (pass_expand, 1, avr_pass_casesi);
+
+/* If-else decision trees generated for switch / case may produce sequences
+   like
+
+      SREG = compare (reg, val);
+	  if (SREG == 0)  goto label1;
+      SREG = compare (reg, 1 + val);
+	  if (SREG >= 0)  goto label2;
+
+   which can be optimized to
+
+      SREG = compare (reg, val);
+	  if (SREG == 0)  goto label1;
+	  if (SREG >= 0)  goto label2;
+
+   The optimal place for such a pass would be directly after expand, but
+   it's not possible for a jump insn to target more than one code label.
+   Hence, run a mini pass right before split2 which introduces REG_CC.  */
+
+INSERT_PASS_BEFORE (pass_split_after_reload, 1, avr_pass_ifelse);
diff --git a/gcc/config/avr/avr-protos.h b/gcc/config/avr/avr-protos.h
index ec96fd45865..eba6f9fef46 100644
--- a/gcc/config/avr/avr-protos.h
+++ b/gcc/config/avr/avr-protos.h
@@ -58,6 +58,8 @@ extern const char *ret_cond_branch (rtx x, int len, int reverse);
 extern const char *avr_out_movpsi (rtx_insn *, rtx*, int*);
 extern const char *avr_out_sign_extend (rtx_insn *, rtx*, int*);
 extern const char *avr_out_insert_notbit (rtx_insn *, rtx*, rtx, int*);
+extern const char *avr_out_plus_set_ZN (rtx*, int*);
+extern const char *avr_out_cmp_ext (rtx*, enum rtx_code, int*);
 
 extern const char *ashlqi3_out (rtx_insn *insn, rtx operands[], int *len);
 extern const char *ashlhi3_out (rtx_insn *insn, rtx operands[], int *len);
@@ -112,8 +114,6 @@ extern int jump_over_one_insn_p (rtx_insn *insn, rtx dest);
 
 extern void avr_final_prescan_insn (rtx_insn *insn, rtx *operand,
 				    int num_operands);
-extern int avr_simplify_comparison_p (machine_mode mode,
-				      RTX_CODE op, rtx x);
 extern RTX_CODE avr_normalize_condition (RTX_CODE condition);
 extern void out_shift_with_cnt (const char *templ, rtx_insn *insn,
 				rtx operands[], int *len, int t_len);
@@ -145,6 +145,7 @@ extern rtx tmp_reg_rtx;
 extern rtx zero_reg_rtx;
 extern rtx all_regs_rtx[32];
 extern rtx rampz_rtx;
+extern rtx cc_reg_rtx;
 
 #endif /* RTX_CODE */
 
@@ -160,6 +161,7 @@ class rtl_opt_pass;
 extern rtl_opt_pass *make_avr_pass_pre_proep (gcc::context *);
 extern rtl_opt_pass *make_avr_pass_recompute_notes (gcc::context *);
 extern rtl_opt_pass *make_avr_pass_casesi (gcc::context *);
+extern rtl_opt_pass *make_avr_pass_ifelse (gcc::context *);
 
 /* From avr-log.cc */
 
diff --git a/gcc/config/avr/avr.cc b/gcc/config/avr/avr.cc
index c193430cf07..a8b63c2a909 100644
--- a/gcc/config/avr/avr.cc
+++ b/gcc/config/avr/avr.cc
@@ -359,6 +359,41 @@ public:
   }
 }; // avr_pass_casesi
 
+
+static const pass_data avr_pass_data_ifelse =
+{
+  RTL_PASS,      // type
+  "",            // name (will be patched)
+  OPTGROUP_NONE, // optinfo_flags
+  TV_DF_SCAN,    // tv_id
+  0,             // properties_required
+  0,             // properties_provided
+  0,             // properties_destroyed
+  0,             // todo_flags_start
+  TODO_df_finish | TODO_df_verify // todo_flags_finish
+};
+
+class avr_pass_ifelse : public rtl_opt_pass
+{
+public:
+  avr_pass_ifelse (gcc::context *ctxt, const char *name)
+    : rtl_opt_pass (avr_pass_data_ifelse, ctxt)
+  {
+    this->name = name;
+  }
+
+  void avr_rest_of_handle_ifelse (function*);
+
+  virtual bool gate (function*) { return optimize > 0; }
+
+  virtual unsigned int execute (function *func)
+  {
+    avr_rest_of_handle_ifelse (func);
+
+    return 0;
+  }
+}; // avr_pass_ifelse
+
 } // anon namespace
 
 rtl_opt_pass*
@@ -373,6 +408,12 @@ make_avr_pass_casesi (gcc::context *ctxt)
   return new avr_pass_casesi (ctxt, "avr-casesi");
 }
 
+rtl_opt_pass*
+make_avr_pass_ifelse (gcc::context *ctxt)
+{
+  return new avr_pass_ifelse (ctxt, "avr-ifelse");
+}
+
 
 /* Make one parallel insn with all the patterns from insns i[0]..i[5].  */
 
@@ -686,6 +727,304 @@ avr_pass_casesi::avr_rest_of_handle_casesi (function *func)
 }
 
 
+/* A helper for the next method.  Suppose we have two conditional branches
+
+      if (reg <cond1> xval1) goto label1;
+      if (reg <cond2> xval2) goto label2;
+
+   If the second comparison is redundant and there is a code <cond> such
+   that the sequence can be performed as
+
+      REG_CC = compare (reg, xval1);
+      if (REG_CC <cond1> 0)  goto label1;
+      if (REG_CC <cond> 0)   goto label2;
+
+   then return <cond>.  Otherwise, return UNKNOWN.
+   xval1 and xval2 are CONST_INT, and mode is the scalar int mode in which
+   the comparison will be carried out.  reverse_cond1 can be set to reverse
+   condition cond1.  This is useful if the second comparison does not follow
+   the first one, but is located after label1 like in:
+
+      if (reg <cond1> xval1) goto label1;
+      ...
+      label1:
+      if (reg <cond2> xval2) goto label2;  */
+
+static enum rtx_code
+avr_redundant_compare (enum rtx_code cond1, rtx xval1,
+		       enum rtx_code cond2, rtx xval2,
+		       machine_mode mode, bool reverse_cond1)
+{
+  HOST_WIDE_INT ival1 = INTVAL (xval1);
+  HOST_WIDE_INT ival2 = INTVAL (xval2);
+
+  unsigned HOST_WIDE_INT mask = GET_MODE_MASK (mode);
+  unsigned HOST_WIDE_INT uval1 = mask & UINTVAL (xval1);
+  unsigned HOST_WIDE_INT uval2 = mask & UINTVAL (xval2);
+
+  if (reverse_cond1)
+    cond1 = reverse_condition (cond1);
+
+  if (cond1 == EQ)
+    {
+      ////////////////////////////////////////////////
+      // A sequence like
+      //    if (reg == val)  goto label1;
+      //    if (reg > val)   goto label2;
+      // can be re-written using the same, simple comparison like in:
+      //    REG_CC = compare (reg, val)
+      //    if (REG_CC == 0)  goto label1;
+      //    if (REG_CC >= 0)  goto label2;
+      if (ival1 == ival2
+	  && (cond2 == GT || cond2 == GTU))
+	return avr_normalize_condition (cond2);
+
+      // Similar, but the input sequence is like
+      //    if (reg == val)  goto label1;
+      //    if (reg >= val)  goto label2;
+      if (ival1 == ival2
+	  && (cond2 == GE || cond2 == GEU))
+	return cond2;
+
+      // Similar, but the input sequence is like
+      //    if (reg == val)      goto label1;
+      //    if (reg >= val + 1)  goto label2;
+      if ((cond2 == GE && ival2 == 1 + ival1)
+	  || (cond2 == GEU && uval2 == 1 + uval1))
+	return cond2;
+
+      // Similar, but the input sequence is like
+      //    if (reg == val)      goto label1;
+      //    if (reg >  val - 1)  goto label2;
+      if ((cond2 == GT && ival2 == ival1 - 1)
+	  || (cond2 == GTU && uval2 == uval1 - 1))
+	return avr_normalize_condition (cond2);
+
+      /////////////////////////////////////////////////////////
+      // A sequence like
+      //    if (reg == val)      goto label1;
+      //    if (reg < 1 + val)   goto label2;
+      // can be re-written as
+      //    REG_CC = compare (reg, val)
+      //    if (REG_CC == 0)  goto label1;
+      //    if (REG_CC < 0)   goto label2;
+      if ((cond2 == LT && ival2 == 1 + ival1)
+	  || (cond2 == LTU && uval2 == 1 + uval1))
+	return cond2;
+
+      // Similar, but with an input sequence like
+      //    if (reg == val)   goto label1;
+      //    if (reg <= val)   goto label2;
+      if (ival1 == ival2
+	  && (cond2 == LE || cond2 == LEU))
+	return avr_normalize_condition (cond2);
+
+      // Similar, but with an input sequence like
+      //    if (reg == val)  goto label1;
+      //    if (reg < val)   goto label2;
+      if (ival1 == ival2
+	  && (cond2 == LT || cond2 == LTU))
+	return cond2;
+
+      // Similar, but with an input sequence like
+      //    if (reg == val)      goto label1;
+      //    if (reg <= val - 1)  goto label2;
+      if ((cond2 == LE && ival2 == ival1 - 1)
+	  || (cond2 == LEU && uval2 == uval1 - 1))
+	return avr_normalize_condition (cond2);
+
+    } // cond1 == EQ
+
+  return UNKNOWN;
+}
+
+
+/* If-else decision trees generated for switch / case may produce sequences
+   like
+
+      SREG = compare (reg, val);
+	  if (SREG == 0)  goto label1;
+      SREG = compare (reg, 1 + val);
+	  if (SREG >= 0)  goto label2;
+
+   which can be optimized to
+
+      SREG = compare (reg, val);
+	  if (SREG == 0)  goto label1;
+	  if (SREG >= 0)  goto label2;
+
+   The optimal place for such a pass would be directly after expand, but
+   it's not possible for a jump insn to target more than one code label.
+   Hence, run a mini pass right before split2 which introduces REG_CC.  */
+
+void
+avr_pass_ifelse::avr_rest_of_handle_ifelse (function*)
+{
+  rtx_insn *next_insn;
+
+  for (rtx_insn *insn = get_insns(); insn; insn = next_insn)
+    {
+      next_insn = next_nonnote_nondebug_insn (insn);
+
+      if (! next_insn)
+	break;
+
+      // Search for two cbranch insns.  The first one is a cbranch.
+      // Filter for "cbranch<mode>4_insn" with mode in QI, HI, PSI, SI.
+
+      if (! JUMP_P (insn))
+	continue;
+
+      int icode1 = recog_memoized (insn);
+
+      if (icode1 != CODE_FOR_cbranchqi4_insn
+	  && icode1 != CODE_FOR_cbranchhi4_insn
+	  && icode1 != CODE_FOR_cbranchpsi4_insn
+	  && icode1 != CODE_FOR_cbranchsi4_insn)
+	continue;
+
+      rtx_jump_insn *insn1 = as_a<rtx_jump_insn *> (insn);
+      rtx_jump_insn *insn2 = nullptr;
+      bool follow_label1 = false;
+
+      // Extract the operands of the first insn:
+      // $0 = comparison operator ($1, $2)
+      // $1 = reg
+      // $2 = reg or const_int
+      // $3 = code_label
+      // $4 = optional SCRATCH for HI, PSI, SI cases.
+
+      const auto &op = recog_data.operand;
+
+      extract_insn (insn1);
+      rtx xop1[5] = { op[0], op[1], op[2], op[3], op[4] };
+      int n_operands = recog_data.n_operands;
+
+      // For now, we can optimize cbranches that follow an EQ cbranch,
+      // and cbranches that follow the label of a NE cbranch.
+
+      if (GET_CODE (xop1[0]) == EQ
+	  && JUMP_P (next_insn)
+	  && recog_memoized (next_insn) == icode1)
+	{
+	  // The 2nd cbranch insn follows insn1, i.e. is located in the
+	  // fallthrough path of insn1.
+
+	  insn2 = as_a<rtx_jump_insn *> (next_insn);
+	}
+      else if (GET_CODE (xop1[0]) == NE)
+	{
+	  // insn1 might branch to a label followed by a cbranch.
+
+	  rtx target1 = JUMP_LABEL (insn1);
+	  rtx_insn *code_label1 = JUMP_LABEL_AS_INSN (insn1);
+	  rtx_insn *next = next_nonnote_nondebug_insn (code_label1);
+	  rtx_insn *barrier = prev_nonnote_nondebug_insn (code_label1);
+
+	  if (// Target label of insn1 is used exactly once and
+	      // is not a fallthru, i.e. is preceded by a barrier.
+	      LABEL_NUSES (target1) == 1
+	      && barrier
+	      && BARRIER_P (barrier)
+	      // Following the target label is a cbranch of the same kind.
+	      && next
+	      && JUMP_P (next)
+	      && recog_memoized (next) == icode1)
+	    {
+	      follow_label1 = true;
+	      insn2 = as_a<rtx_jump_insn *> (next);
+	    }
+	}
+
+      if (! insn2)
+	continue;
+
+      // Also extract operands of insn2, and filter for REG + CONST_INT
+      // comparisons against the same register.
+
+      extract_insn (insn2);
+      rtx xop2[5] = { op[0], op[1], op[2], op[3], op[4] };
+
+      if (! rtx_equal_p (xop1[1], xop2[1])
+	  || ! CONST_INT_P (xop1[2])
+	  || ! CONST_INT_P (xop2[2]))
+	continue;
+
+      machine_mode mode = GET_MODE (xop1[1]);
+      enum rtx_code code1 = GET_CODE (xop1[0]);
+      enum rtx_code code2 = GET_CODE (xop2[0]);
+
+      code2 = avr_redundant_compare (code1, xop1[2], code2, xop2[2],
+				     mode, follow_label1);
+      if (code2 == UNKNOWN)
+	continue;
+
+      //////////////////////////////////////////////////////
+      // Found a replacement.
+
+      if (dump_file)
+	{
+	  fprintf (dump_file, "\n;; Found chain of jump_insn %d and"
+		   " jump_insn %d, follow_label1=%d:\n",
+		   INSN_UID (insn1), INSN_UID (insn2), follow_label1);
+	  print_rtl_single (dump_file, PATTERN (insn1));
+	  print_rtl_single (dump_file, PATTERN (insn2));
+	}
+
+      if (! follow_label1)
+	next_insn = next_nonnote_nondebug_insn (insn2);
+
+      // Build the new branch conditions and the new comparison.
+      // Prematurely split into compare + branch so that we can drop
+      // the 2nd comparison.  The following pass, split2, splits all
+      // insns for REG_CC, and it should still work as usual even when
+      // there are already some REG_CC insns around.
+
+      rtx xcond1 = gen_rtx_fmt_ee (code1, VOIDmode, cc_reg_rtx, const0_rtx);
+      rtx xcond2 = gen_rtx_fmt_ee (code2, VOIDmode, cc_reg_rtx, const0_rtx);
+      rtx xpat1 = gen_branch (xop1[3], xcond1);
+      rtx xpat2 = gen_branch (xop2[3], xcond2);
+      rtx xcompare = NULL_RTX;
+
+      if (mode == QImode)
+	{
+	  gcc_assert (n_operands == 4);
+	  xcompare = gen_cmpqi3 (xop1[1], xop1[2]);
+	}
+      else
+	{
+	  gcc_assert (n_operands == 5);
+	  rtx (*gen_cmp)(rtx,rtx,rtx)
+	    = mode == HImode  ? gen_gen_comparehi
+	    : mode == PSImode ? gen_gen_comparepsi
+	    : gen_gen_comparesi; // SImode
+	  xcompare = gen_cmp (xop1[1], xop1[2], xop1[4]);
+	}
+
+      // Emit that stuff.
+
+      rtx_insn *cmp = emit_insn_before (xcompare, insn1);
+      rtx_jump_insn *branch1 = emit_jump_insn_before (xpat1, insn1);
+      rtx_jump_insn *branch2 = emit_jump_insn_before (xpat2, insn2);
+
+      JUMP_LABEL (branch1) = xop1[3];
+      JUMP_LABEL (branch2) = xop2[3];
+      // delete_insn() decrements LABEL_NUSES when deleting a JUMP_INSN, but
+      // emitting a new JUMP_INSN does not increment it, so do it by hand.
+      ++LABEL_NUSES (xop1[3]);
+      ++LABEL_NUSES (xop2[3]);
+
+      delete_insn (insn1);
+      delete_insn (insn2);
+
+      // As a side effect, also recog the new insns.
+      gcc_assert (valid_insn_p (cmp));
+      gcc_assert (valid_insn_p (branch1));
+      gcc_assert (valid_insn_p (branch2));
+    } // loop insns
+}
+
+
 /* Set `avr_arch' as specified by `-mmcu='.
    Return true on success.  */
 
@@ -3173,28 +3512,6 @@ avr_asm_final_postscan_insn (FILE *stream, rtx_insn *insn, rtx*, int)
 }
 
 
-/* Return 0 if undefined, 1 if always true or always false.  */
-
-int
-avr_simplify_comparison_p (machine_mode mode, RTX_CODE op, rtx x)
-{
-  unsigned int max = (mode == QImode ? 0xff :
-                      mode == HImode ? 0xffff :
-                      mode == PSImode ? 0xffffff :
-                      mode == SImode ? 0xffffffff : 0);
-  if (max && op && CONST_INT_P (x))
-    {
-      if (unsigned_condition (op) != op)
-        max >>= 1;
-
-      if (max != (INTVAL (x) & max)
-          && INTVAL (x) != 0xff)
-        return 1;
-    }
-  return 0;
-}
-
-
 /* Worker function for `FUNCTION_ARG_REGNO_P'.  */
 /* Returns nonzero if REGNO is the number of a hard
    register in which function arguments are sometimes passed.  */
@@ -5677,29 +5994,36 @@ avr_frame_pointer_required_p (void)
           || get_frame_size () > 0);
 }
 
-/* Returns the condition of compare insn INSN, or UNKNOWN.  */
+
+/* Returns the condition of the branch following INSN, where INSN is some
+   comparison.  If the next insn is not a branch or the condition code set
+   by INSN might be used by more insns than the next one, return UNKNOWN.
+   For now, just look at the next insn, which misses some opportunities like
+   following jumps.  */
 
 static RTX_CODE
 compare_condition (rtx_insn *insn)
 {
-  rtx_insn *next = next_real_insn (insn);
+  rtx set;
+  rtx_insn *next = next_real_nondebug_insn (insn);
 
-  if (next && JUMP_P (next))
+  if (next
+      && JUMP_P (next)
+      // If SREG does not die in the next insn, it is used in more than one
+      // branch.  This can happen due to pass .avr-ifelse optimizations.
+      && dead_or_set_regno_p (next, REG_CC)
+      // Branches are (set (pc) (if_then_else (COND (...)))).
+      && (set = single_set (next))
+      && GET_CODE (SET_SRC (set)) == IF_THEN_ELSE)
     {
-      rtx pat = PATTERN (next);
-      if (GET_CODE (pat) == PARALLEL)
-        pat = XVECEXP (pat, 0, 0);
-      rtx src = SET_SRC (pat);
-
-      if (IF_THEN_ELSE == GET_CODE (src))
-        return GET_CODE (XEXP (src, 0));
+      return GET_CODE (XEXP (SET_SRC (set), 0));
     }
 
   return UNKNOWN;
 }
 
 
-/* Returns true iff INSN is a tst insn that only tests the sign.  */
+/* Returns true if INSN is a tst insn that only tests the sign.  */
 
 static bool
 compare_sign_p (rtx_insn *insn)
@@ -5709,23 +6033,95 @@ compare_sign_p (rtx_insn *insn)
 }
 
 
-/* Returns true iff the next insn is a JUMP_INSN with a condition
-   that needs to be swapped (GT, GTU, LE, LEU).  */
+/* Returns true if INSN is a compare insn with the EQ or NE condition.  */
 
 static bool
-compare_diff_p (rtx_insn *insn)
+compare_eq_p (rtx_insn *insn)
 {
   RTX_CODE cond = compare_condition (insn);
-  return (cond == GT || cond == GTU || cond == LE || cond == LEU) ? cond : 0;
+  return (cond == EQ || cond == NE);
 }
 
-/* Returns true iff INSN is a compare insn with the EQ or NE condition.  */
 
-static bool
-compare_eq_p (rtx_insn *insn)
+/* Implement `TARGET_CANONICALIZE_COMPARISON'.  */
+/* Basically tries to convert "difficult" comparisons like GT[U]
+   and LE[U] to simple ones.  Some asymmetric comparisons can be
+   transformed to EQ or NE against zero.  */
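+/* Illustrative cases (assumptions: integral mode, constant not the maximal
+   value): "x > 17" is rewritten as "x >= 18", and an unsigned "x < 1"
+   becomes "x == 0".  */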
+
+static void
+avr_canonicalize_comparison (int *icode, rtx *op0, rtx *op1, bool op0_fixed)
 {
-  RTX_CODE cond = compare_condition (insn);
-  return (cond == EQ || cond == NE);
+  enum rtx_code code = (enum rtx_code) *icode;
+  machine_mode mode = GET_MODE (*op0);
+
+  bool signed_p = code == GT || code == LE;
+  bool unsigned_p = code == GTU || code == LEU;
+  bool difficult_p = signed_p || unsigned_p;
+
+  if (// Only do integers and fixed-points.
+      (! SCALAR_INT_MODE_P (mode)
+       && ! ALL_SCALAR_FIXED_POINT_MODE_P (mode))
+      // Only do comparisons against a register.
+      || ! register_operand (*op0, mode))
+    return;
+
+  // Canonicalize "difficult" reg-reg comparisons.
+
+  if (! op0_fixed
+      && difficult_p
+      && register_operand (*op1, mode))
+    {
+      std::swap (*op0, *op1);
+      *icode = (int) swap_condition (code);
+      return;
+    }
+
+  // Canonicalize comparisons against compile-time constants.
+
+  if (CONST_INT_P (*op1)
+      || CONST_FIXED_P (*op1))
+    {
+      // INT_MODE of the same size.
+      scalar_int_mode imode = int_mode_for_mode (mode).require ();
+
+      unsigned HOST_WIDE_INT mask = GET_MODE_MASK (imode);
+      unsigned HOST_WIDE_INT maxval = signed_p ? mask >> 1 : mask;
+
+      // Convert value *op1 to imode.
+      rtx xval = simplify_gen_subreg (imode, *op1, mode, 0);
+
+      // Canonicalize difficult comparisons against const.
+      if (difficult_p
+	  && (UINTVAL (xval) & mask) != maxval)
+	{
+	  // Convert *op0 > *op1  to *op0 >= 1 + *op1.
+	  // Convert *op0 <= *op1 to *op0 <  1 + *op1.
+	  xval = simplify_binary_operation (PLUS, imode, xval, const1_rtx);
+
+	  // Convert value back to its original mode.
+	  *op1 = simplify_gen_subreg (mode, xval, imode, 0);
+
+	  // Map  >  to  >=  and  <=  to  <.
+	  *icode = (int) avr_normalize_condition (code);
+
+	  return;
+	}
+
+      // Some asymmetric comparisons can be turned into EQ or NE.
+      if (code == LTU && xval == const1_rtx)
+	{
+	  *icode = (int) EQ;
+	  *op1 = CONST0_RTX (mode);
+	  return;
+	}
+
+      if (code == GEU && xval == const1_rtx)
+	{
+	  *icode = (int) NE;
+	  *op1 = CONST0_RTX (mode);
+	  return;
+	}
+    }
 }
 
 
@@ -6018,6 +6414,68 @@ avr_out_tstsi (rtx_insn *insn, rtx *op, int *plen)
 }
 
 
+/* Output a comparison of a zero- or sign-extended register against a
+   plain register.  CODE is SIGN_EXTEND or ZERO_EXTEND.  Return "".
+
+   PLEN != 0: Set *PLEN to the code length in words. Don't output anything.
+   PLEN == 0: Print instructions.  */
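+/* For example, the HImode vs. zero-extended QImode case boils down to the
+   same two instructions as the former "*cmphi.zero-extend.1" insn,
+       cp  %A0,%1
+       cpc %B0,__zero_reg__
+   whereas SIGN_EXTEND first computes the sign extension in __tmp_reg__.  */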
+
+const char*
+avr_out_cmp_ext (rtx xop[], enum rtx_code code, int *plen)
+{
+  // The smaller reg is the one that's to be extended.  Get its index as z.
+  int z = GET_MODE_SIZE (GET_MODE (xop[1])) < GET_MODE_SIZE (GET_MODE (xop[0]));
+  rtx zreg = xop[z];
+  rtx reg = xop[1 - z];
+  machine_mode mode = GET_MODE (reg);
+  machine_mode zmode = GET_MODE (zreg);
+  rtx zex;
+
+  if (plen)
+    *plen = 0;
+
+  // zex holds the extended bytes above zreg.  This is 0 for ZERO_EXTEND,
+  // and 0 or -1 for SIGN_EXTEND.
+
+  if (code == SIGN_EXTEND)
+    {
+      // Sign-extend the high-byte of zreg to tmp_reg.
+      int zmsb = GET_MODE_SIZE (zmode) - 1;
+      rtx xzmsb = simplify_gen_subreg (QImode, zreg, zmode, zmsb);
+
+      avr_asm_len ("mov __tmp_reg__,%0" CR_TAB
+		   "rol __tmp_reg__"    CR_TAB
+		   "sbc __tmp_reg__,__tmp_reg__", &xzmsb, plen, 3);
+      zex = tmp_reg_rtx;
+    }
+  else if (code == ZERO_EXTEND)
+    {
+      zex = zero_reg_rtx;
+    }
+  else
+    gcc_unreachable();
+
+  // Now output the n_bytes bytes of the comparison itself.
+
+  int n_bytes = GET_MODE_SIZE (mode);
+
+  avr_asm_len ("cp %0,%1", xop, plen, 1);
+
+  for (int b = 1; b < n_bytes; ++b)
+    {
+      rtx regs[2];
+      regs[1 - z] = simplify_gen_subreg (QImode, reg, mode, b);
+      regs[z] = (b < GET_MODE_SIZE (zmode)
+		 ? simplify_gen_subreg (QImode, zreg, zmode, b)
+		 : zex);
+
+      avr_asm_len ("cpc %0,%1", regs, plen, 1);
+    }
+
+  return "";
+}
+
+
 /* Generate asm equivalent for various shifts.  This only handles cases
    that are not already carefully hand-optimized in ?sh??i3_out.
 
@@ -8160,6 +8618,122 @@ avr_out_plus (rtx insn, rtx *xop, int *plen, int *pcc, bool out_label)
 }
 
 
+/* Output an instruction sequence for addition of REG in XOP[0] and CONST_INT
+   in XOP[1] in such a way that SREG.Z and SREG.N are set according to the
+   result.  XOP[2] might be a d-register serving as clobber.  If XOP[2] is SCRATCH,
+   then the addition can be performed without a clobber reg.  Return "".
+
+   If PLEN == NULL, then output the instructions.
+   If PLEN != NULL, then set *PLEN to the length of the sequence in words. */
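+/* A sketch of typical output (illustration only): adding 1 to a QImode
+   register emits "inc %0"; adding -2 to an HImode value in d-regs emits a
+   "subi"/"sbci" pair with the negated constant, i.e. subtract 2 from the
+   low byte and propagate the borrow into the high byte.  */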
+
+const char*
+avr_out_plus_set_ZN (rtx *xop, int *plen)
+{
+  if (plen)
+    *plen = 0;
+
+  // Register to compare and value to compare against.
+  rtx xreg = xop[0];
+  rtx xval = xop[1];
+
+  machine_mode mode = GET_MODE (xreg);
+
+  // Number of bytes to operate on.
+  int n_bytes = GET_MODE_SIZE (mode);
+
+  if (n_bytes == 1)
+    {
+      if (INTVAL (xval) == 1)
+	return avr_asm_len ("inc %0", xop, plen, 1);
+
+      if (INTVAL (xval) == -1)
+	return avr_asm_len ("dec %0", xop, plen, 1);
+    }
+
+  if (n_bytes == 2
+      && test_hard_reg_class (ADDW_REGS, xreg)
+      && IN_RANGE (INTVAL (xval), 1, 63))
+    {
+      // Add 16-bit value in [1..63] to a w register.
+      return avr_asm_len ("adiw %0, %1", xop, plen, 1);
+    }
+
+  // Addition won't work; subtract the negative of XVAL instead.
+  xval = simplify_unary_operation (NEG, mode, xval, mode);
+
+  // Value (0..0xff) held in clobber register xop[2] or -1 if unknown.
+  int clobber_val = -1;
+
+  // [0] = Current sub-register.
+  // [1] = Current partial xval.
+  // [2] = 8-bit clobber d-register or SCRATCH.
+  rtx op[3];
+  op[2] = xop[2];
+
+  // Work byte-wise from LSB to MSB.  The lower two bytes might be
+  // SBIW'ed in one go.
+  for (int i = 0; i < n_bytes; ++i)
+    {
+      op[0] = simplify_gen_subreg (QImode, xreg, mode, i);
+
+      if (i == 0
+	  && n_bytes >= 2
+	  && test_hard_reg_class (ADDW_REGS, op[0]))
+	{
+	  op[1] = simplify_gen_subreg (HImode, xval, mode, 0);
+	  if (IN_RANGE (INTVAL (op[1]), 0, 63))
+	    {
+	      // SBIW can handle the lower 16 bits.
+	      avr_asm_len ("sbiw %0, %1", op, plen, 1);
+
+	      // Next byte has already been handled: Skip it.
+	      ++i;
+	      continue;
+	    }
+	}
+
+      op[1] = simplify_gen_subreg (QImode, xval, mode, i);
+
+      if (test_hard_reg_class (LD_REGS, op[0]))
+	{
+	  // d-regs can subtract immediates.
+	  avr_asm_len (i == 0
+		       ? "subi %0, %1"
+		       : "sbci %0, %1", op, plen, 1);
+	}
+      else
+	{
+	  int val8 = 0xff & INTVAL (op[1]);
+	  if (val8 == 0)
+	    {
+	      // Any register can subtract 0.
+	      avr_asm_len (i == 0
+			   ? "sub %0, __zero_reg__"
+			   : "sbc %0, __zero_reg__", op, plen, 1);
+	    }
+	  else
+	    {
+	      // Use d-register to hold partial xval.
+
+	      if (val8 != clobber_val)
+		{
+		  // Load partial xval to QI clobber reg and memoize for later.
+		  gcc_assert (REG_P (op[2]));
+		  avr_asm_len ("ldi %2, %1", op, plen, 1);
+		  clobber_val = val8;
+		}
+
+	      avr_asm_len (i == 0
+			   ? "sub %0, %2"
+			   : "sbc %0, %2", op, plen, 1);
+	    }
+	}
+    } // Loop bytes.
+
+  return "";
+}
+
+
 /* Output bit operation (IOR, AND, XOR) with register XOP[0] and compile
    time constant XOP[2]:
 
@@ -9291,6 +9865,8 @@ avr_adjust_insn_length (rtx_insn *insn, int len)
     case ADJUST_LEN_TSTSI: avr_out_tstsi (insn, op, &len); break;
     case ADJUST_LEN_COMPARE: avr_out_compare (insn, op, &len); break;
     case ADJUST_LEN_COMPARE64: avr_out_compare64 (insn, op, &len); break;
+    case ADJUST_LEN_CMP_UEXT: avr_out_cmp_ext (op, ZERO_EXTEND, &len); break;
+    case ADJUST_LEN_CMP_SEXT: avr_out_cmp_ext (op, SIGN_EXTEND, &len); break;
 
     case ADJUST_LEN_LSHRQI: lshrqi3_out (insn, op, &len); break;
     case ADJUST_LEN_LSHRHI: lshrhi3_out (insn, op, &len); break;
@@ -9311,6 +9887,7 @@ avr_adjust_insn_length (rtx_insn *insn, int len)
     case ADJUST_LEN_CALL: len = AVR_HAVE_JMP_CALL ? 2 : 1; break;
 
     case ADJUST_LEN_INSERT_BITS: avr_out_insert_bits (op, &len); break;
+    case ADJUST_LEN_ADD_SET_ZN: avr_out_plus_set_ZN (op, &len); break;
 
     case ADJUST_LEN_INSV_NOTBIT:
       avr_out_insert_notbit (insn, op, NULL_RTX, &len);
@@ -10607,6 +11184,58 @@ avr_mul_highpart_cost (rtx x, int)
 }
 
 
+/* Return the expected cost of a conditional branch like
+   (set (pc)
+	(if_then_else (X)
+		      (label_ref *)
+		      (pc)))
+   where X is some comparison operator.  */
+
+static int
+avr_cbranch_cost (rtx x)
+{
+  bool difficult_p = difficult_comparison_operator (x, VOIDmode);
+
+  if (reload_completed)
+    {
+      // After reload, we basically just have plain branches.
+      return COSTS_N_INSNS (1 + difficult_p);
+    }
+
+  rtx xreg = XEXP (x, 0);
+  rtx xval = XEXP (x, 1);
+  machine_mode mode = GET_MODE (xreg);
+  if (mode == VOIDmode)
+    mode = GET_MODE (xval);
+  int size = GET_MODE_SIZE (mode);
+
+  if (GET_CODE (xreg) == ZERO_EXTEND
+      || GET_CODE (xval) == ZERO_EXTEND)
+    {
+      // *cbranch<HISI:mode>.<code><QIPSI:mode>.0/1, code = zero_extend.
+      return COSTS_N_INSNS (size + 1);
+    }
+
+  if (GET_CODE (xreg) == SIGN_EXTEND
+      || GET_CODE (xval) == SIGN_EXTEND)
+    {
+      // *cbranch<HISI:mode>.<code><QIPSI:mode>.0/1, code = sign_extend.
+      // Make it a bit cheaper than it actually is (less reg pressure).
+      return COSTS_N_INSNS (size + 1 + 1);
+    }
+
+  bool reg_p = register_operand (xreg, mode);
+  bool reg_or_0_p = reg_or_0_operand (xval, mode);
+
+  return COSTS_N_INSNS (size
+			// For the branch
+			+ 1 + difficult_p
+			// Combine might propagate constants other than zero
+			// into the 2nd operand.  Make that more expensive.
+			+ 1 * (!reg_p || !reg_or_0_p));
+}
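+
+// For illustration: before reload, an HImode equality test against a plain
+// register costs COSTS_N_INSNS (2 + 1), whereas a signed ">" against a
+// non-zero constant pays one extra unit for the difficult branch and one
+// more for the non-zero constant operand.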
+
+
 /* Mutually recursive subroutine of avr_rtx_cost for calculating the
    cost of an RTX operand given its context.  X is the rtx of the
    operand, MODE is its mode, and OUTER is the rtx_code of this
@@ -11490,6 +12119,15 @@ avr_rtx_costs_1 (rtx x, machine_mode mode, int outer_code,
         }
       break;
 
+    case IF_THEN_ELSE:
+      if (outer_code == SET
+	  && XEXP (x, 2) == pc_rtx
+	  && ordered_comparison_operator (XEXP (x, 0), VOIDmode))
+	{
+	  *total = avr_cbranch_cost (XEXP (x, 0));
+	  return true;
+	}
+
     default:
       break;
     }
@@ -11602,281 +12240,6 @@ avr_normalize_condition (RTX_CODE condition)
     }
 }
 
-/* Helper function for `avr_reorg'.  */
-
-static rtx
-avr_compare_pattern (rtx_insn *insn)
-{
-  rtx pattern = single_set (insn);
-
-  if (pattern
-      && NONJUMP_INSN_P (insn)
-      && REG_P (SET_DEST (pattern))
-      && REGNO (SET_DEST (pattern)) == REG_CC
-      && GET_CODE (SET_SRC (pattern)) == COMPARE)
-    {
-      machine_mode mode0 = GET_MODE (XEXP (SET_SRC (pattern), 0));
-      machine_mode mode1 = GET_MODE (XEXP (SET_SRC (pattern), 1));
-
-      /* The 64-bit comparisons have fixed operands ACC_A and ACC_B.
-         They must not be swapped, thus skip them.  */
-
-      if ((mode0 == VOIDmode || GET_MODE_SIZE (mode0) <= 4)
-          && (mode1 == VOIDmode || GET_MODE_SIZE (mode1) <= 4))
-        return pattern;
-    }
-
-  return NULL_RTX;
-}
-
-/* Helper function for `avr_reorg'.  */
-
-/* Expansion of switch/case decision trees leads to code like
-
-       REG_CC = compare (Reg, Num)
-       if (REG_CC == 0)
-         goto L1
-
-       REG_CC = compare (Reg, Num)
-       if (REG_CC > 0)
-         goto L2
-
-   The second comparison is superfluous and can be deleted.
-   The second jump condition can be transformed from a
-   "difficult" one to a "simple" one because "REG_CC > 0" and
-   "REG_CC >= 0" will have the same effect here.
-
-   This function relies on the way switch/case is being expaned
-   as binary decision tree.  For example code see PR 49903.
-
-   Return TRUE if optimization performed.
-   Return FALSE if nothing changed.
-
-   INSN1 is a comparison, i.e. avr_compare_pattern != 0.
-
-   We don't want to do this in text peephole because it is
-   tedious to work out jump offsets there and the second comparison
-   might have been transormed by `avr_reorg'.
-
-   RTL peephole won't do because peephole2 does not scan across
-   basic blocks.  */
-
-static bool
-avr_reorg_remove_redundant_compare (rtx_insn *insn1)
-{
-  rtx comp1, ifelse1, xcond1;
-  rtx_insn *branch1;
-  rtx comp2, ifelse2, xcond2;
-  rtx_insn *branch2, *insn2;
-  enum rtx_code code;
-  rtx_insn *jump;
-  rtx target, cond;
-
-  /* Look out for:  compare1 - branch1 - compare2 - branch2  */
-
-  branch1 = next_nonnote_nondebug_insn (insn1);
-  if (!branch1 || !JUMP_P (branch1))
-    return false;
-
-  insn2 = next_nonnote_nondebug_insn (branch1);
-  if (!insn2 || !avr_compare_pattern (insn2))
-    return false;
-
-  branch2 = next_nonnote_nondebug_insn (insn2);
-  if (!branch2 || !JUMP_P (branch2))
-    return false;
-
-  comp1 = avr_compare_pattern (insn1);
-  comp2 = avr_compare_pattern (insn2);
-  xcond1 = single_set (branch1);
-  xcond2 = single_set (branch2);
-
-  if (!comp1 || !comp2
-      || !rtx_equal_p (comp1, comp2)
-      || !xcond1 || SET_DEST (xcond1) != pc_rtx
-      || !xcond2 || SET_DEST (xcond2) != pc_rtx
-      || IF_THEN_ELSE != GET_CODE (SET_SRC (xcond1))
-      || IF_THEN_ELSE != GET_CODE (SET_SRC (xcond2)))
-    {
-      return false;
-    }
-
-  comp1 = SET_SRC (comp1);
-  ifelse1 = SET_SRC (xcond1);
-  ifelse2 = SET_SRC (xcond2);
-
-  /* comp<n> is COMPARE now and ifelse<n> is IF_THEN_ELSE.  */
-
-  if (EQ != GET_CODE (XEXP (ifelse1, 0))
-      || !REG_P (XEXP (comp1, 0))
-      || !CONST_INT_P (XEXP (comp1, 1))
-      || XEXP (ifelse1, 2) != pc_rtx
-      || XEXP (ifelse2, 2) != pc_rtx
-      || LABEL_REF != GET_CODE (XEXP (ifelse1, 1))
-      || LABEL_REF != GET_CODE (XEXP (ifelse2, 1))
-      || !COMPARISON_P (XEXP (ifelse2, 0))
-      || REG_CC != REGNO (XEXP (XEXP (ifelse1, 0), 0))
-      || REG_CC != REGNO (XEXP (XEXP (ifelse2, 0), 0))
-      || const0_rtx != XEXP (XEXP (ifelse1, 0), 1)
-      || const0_rtx != XEXP (XEXP (ifelse2, 0), 1))
-    {
-      return false;
-    }
-
-  /* We filtered the insn sequence to look like
-
-        (set (reg:CC cc)
-             (compare (reg:M N)
-                      (const_int VAL)))
-        (set (pc)
-             (if_then_else (eq (reg:CC cc)
-                               (const_int 0))
-                           (label_ref L1)
-                           (pc)))
-
-        (set (reg:CC cc)
-             (compare (reg:M N)
-                      (const_int VAL)))
-        (set (pc)
-             (if_then_else (CODE (reg:CC cc)
-                                 (const_int 0))
-                           (label_ref L2)
-                           (pc)))
-  */
-
-  code = GET_CODE (XEXP (ifelse2, 0));
-
-  /* Map GT/GTU to GE/GEU which is easier for AVR.
-     The first two instructions compare/branch on EQ
-     so we may replace the difficult
-
-        if (x == VAL)   goto L1;
-        if (x > VAL)    goto L2;
-
-     with easy
-
-         if (x == VAL)   goto L1;
-         if (x >= VAL)   goto L2;
-
-     Similarly, replace LE/LEU by LT/LTU.  */
-
-  switch (code)
-    {
-    case EQ:
-    case LT:  case LTU:
-    case GE:  case GEU:
-      break;
-
-    case LE:  case LEU:
-    case GT:  case GTU:
-      code = avr_normalize_condition (code);
-      break;
-
-    default:
-      return false;
-    }
-
-  /* Wrap the branches into UNSPECs so they won't be changed or
-     optimized in the remainder.  */
-
-  target = XEXP (XEXP (ifelse1, 1), 0);
-  cond = XEXP (ifelse1, 0);
-  jump = emit_jump_insn_after (gen_branch_unspec (target, cond), insn1);
-
-  JUMP_LABEL (jump) = JUMP_LABEL (branch1);
-
-  target = XEXP (XEXP (ifelse2, 1), 0);
-  cond = gen_rtx_fmt_ee (code, VOIDmode, cc_reg_rtx, const0_rtx);
-  jump = emit_jump_insn_after (gen_branch_unspec (target, cond), insn2);
-
-  JUMP_LABEL (jump) = JUMP_LABEL (branch2);
-
-  /* The comparisons in insn1 and insn2 are exactly the same;
-     insn2 is superfluous so delete it.  */
-
-  delete_insn (insn2);
-  delete_insn (branch1);
-  delete_insn (branch2);
-
-  return true;
-}
-
-
-/* Implement `TARGET_MACHINE_DEPENDENT_REORG'.  */
-/* Optimize conditional jumps.  */
-
-static void
-avr_reorg (void)
-{
-  rtx_insn *insn = get_insns();
-
-  for (insn = next_real_insn (insn); insn; insn = next_real_insn (insn))
-    {
-      rtx pattern = avr_compare_pattern (insn);
-
-      if (!pattern)
-        continue;
-
-      if (optimize
-          && avr_reorg_remove_redundant_compare (insn))
-        {
-          continue;
-        }
-
-      if (compare_diff_p (insn))
-	{
-          /* Now we work under compare insn with difficult branch.  */
-
-	  rtx_insn *next = next_real_insn (insn);
-          rtx pat = PATTERN (next);
-          if (GET_CODE (pat) == PARALLEL)
-            pat = XVECEXP (pat, 0, 0);
-
-          pattern = SET_SRC (pattern);
-
-          if (true_regnum (XEXP (pattern, 0)) >= 0
-              && true_regnum (XEXP (pattern, 1)) >= 0)
-            {
-              rtx x = XEXP (pattern, 0);
-              rtx src = SET_SRC (pat);
-              rtx t = XEXP (src, 0);
-              PUT_CODE (t, swap_condition (GET_CODE (t)));
-              XEXP (pattern, 0) = XEXP (pattern, 1);
-              XEXP (pattern, 1) = x;
-              INSN_CODE (next) = -1;
-            }
-          else if (true_regnum (XEXP (pattern, 0)) >= 0
-                   && XEXP (pattern, 1) == const0_rtx)
-            {
-              /* This is a tst insn, we can reverse it.  */
-              rtx src = SET_SRC (pat);
-              rtx t = XEXP (src, 0);
-
-              PUT_CODE (t, swap_condition (GET_CODE (t)));
-              XEXP (pattern, 1) = XEXP (pattern, 0);
-              XEXP (pattern, 0) = const0_rtx;
-              INSN_CODE (next) = -1;
-              INSN_CODE (insn) = -1;
-            }
-          else if (true_regnum (XEXP (pattern, 0)) >= 0
-                   && CONST_INT_P (XEXP (pattern, 1)))
-            {
-              rtx x = XEXP (pattern, 1);
-              rtx src = SET_SRC (pat);
-              rtx t = XEXP (src, 0);
-              machine_mode mode = GET_MODE (XEXP (pattern, 0));
-
-              if (avr_simplify_comparison_p (mode, GET_CODE (t), x))
-                {
-                  XEXP (pattern, 1) = gen_int_mode (INTVAL (x) + 1, mode);
-                  PUT_CODE (t, avr_normalize_condition (GET_CODE (t)));
-                  INSN_CODE (next) = -1;
-                  INSN_CODE (insn) = -1;
-                }
-            }
-        }
-    }
-}
 
 /* Returns register number for function return value.*/
 
@@ -14580,8 +14943,6 @@ avr_float_lib_compare_returns_bool (machine_mode mode, enum rtx_code)
 #define TARGET_RTX_COSTS avr_rtx_costs
 #undef  TARGET_ADDRESS_COST
 #define TARGET_ADDRESS_COST avr_address_cost
-#undef  TARGET_MACHINE_DEPENDENT_REORG
-#define TARGET_MACHINE_DEPENDENT_REORG avr_reorg
 #undef  TARGET_FUNCTION_ARG
 #define TARGET_FUNCTION_ARG avr_function_arg
 #undef  TARGET_FUNCTION_ARG_ADVANCE
@@ -14711,6 +15072,9 @@ avr_float_lib_compare_returns_bool (machine_mode mode, enum rtx_code)
 #undef  TARGET_MD_ASM_ADJUST
 #define TARGET_MD_ASM_ADJUST avr_md_asm_adjust
 
+#undef  TARGET_CANONICALIZE_COMPARISON
+#define TARGET_CANONICALIZE_COMPARISON avr_canonicalize_comparison
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 \f
diff --git a/gcc/config/avr/avr.md b/gcc/config/avr/avr.md
index 43b75046384..69f8a4513b7 100644
--- a/gcc/config/avr/avr.md
+++ b/gcc/config/avr/avr.md
@@ -77,7 +77,6 @@ (define_c_enum "unspec"
    UNSPEC_FMULS
    UNSPEC_FMULSU
    UNSPEC_COPYSIGN
-   UNSPEC_IDENTITY
    UNSPEC_INSERT_BITS
    UNSPEC_ROUND
    ])
@@ -165,6 +164,7 @@ (define_attr "adjust_len"
    ashlsi, ashrsi, lshrsi,
    ashlpsi, ashrpsi, lshrpsi,
    insert_bits, insv_notbit, insv_notbit_0, insv_notbit_7,
+   add_set_ZN, cmp_uext, cmp_sext,
    no"
   (const_string "no"))
 
@@ -251,11 +251,23 @@ (define_mode_iterator QIHI  [QI HI])
 (define_mode_iterator QIHI2 [QI HI])
 (define_mode_iterator QISI  [QI HI PSI SI])
 (define_mode_iterator QIDI  [QI HI PSI SI DI])
+(define_mode_iterator QIPSI [QI HI PSI])
 (define_mode_iterator HISI  [HI PSI SI])
 
+;; Ordered integral and fixed-point modes of specific sizes.
 (define_mode_iterator ALL1 [QI QQ UQQ])
 (define_mode_iterator ALL2 [HI HQ UHQ HA UHA])
 (define_mode_iterator ALL4 [SI SQ USQ SA USA])
+(define_mode_iterator ALL234 [HI SI PSI
+                              HQ UHQ HA UHA
+                              SQ USQ SA USA])
+
+;; Ordered signed integral and signed fixed-point modes of specific sizes.
+(define_mode_iterator ALLs1 [QI QQ])
+(define_mode_iterator ALLs2 [HI HQ HA])
+(define_mode_iterator ALLs4 [SI SQ SA])
+(define_mode_iterator ALLs234 [HI SI PSI
+                               HQ HA SQ SA])
 
 ;; All supported move-modes
 (define_mode_iterator MOVMODE [QI QQ UQQ
@@ -263,11 +275,6 @@ (define_mode_iterator MOVMODE [QI QQ UQQ
                                SI SQ USQ SA USA
                                SF PSI])
 
-;; Supported ordered modes that are 2, 3, 4 bytes wide
-(define_mode_iterator ORDERED234 [HI SI PSI
-                                  HQ UHQ HA UHA
-                                  SQ USQ SA USA])
-
 ;; Post-reload split of 3, 4 bytes wide moves.
 (define_mode_iterator SPLIT34 [SI SF PSI
                                SQ USQ SA USA])
@@ -282,6 +289,7 @@ (define_code_iterator any_shiftrt [lshiftrt ashiftrt])
 (define_code_iterator bitop [xor ior and])
 (define_code_iterator xior [xor ior])
 (define_code_iterator eqne [eq ne])
+(define_code_iterator gelt [ge lt])
 
 (define_code_iterator ss_addsub [ss_plus ss_minus])
 (define_code_iterator us_addsub [us_plus us_minus])
@@ -309,6 +317,10 @@ (define_code_attr abelian
   [(ss_minus "") (us_minus "")
    (ss_plus "%") (us_plus "%")])
 
+(define_code_attr gelt_eqne
+  [(ge "eq")
+   (lt "ne")])
+
 ;; Map RTX code to its standard insn name
 (define_code_attr code_stdname
   [(ashift   "ashl")
@@ -6448,80 +6460,41 @@ (define_insn_and_split "zero_extendsidi2"
 ;;<=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=><=>
 ;; compare
 
-; Optimize negated tests into reverse compare if overflow is undefined.
-(define_insn "*negated_tstqi"
-  [(set (reg:CC REG_CC)
-        (compare:CC (neg:QI (match_operand:QI 0 "register_operand" "r"))
-                 (const_int 0)))]
-  "reload_completed && !flag_wrapv && !flag_trapv"
-  "cp __zero_reg__,%0"
-  [(set_attr "length" "1")])
-
-(define_insn "*reversed_tstqi"
+;; "*swapped_tstqi"  "*swapped_tstqq"
+(define_insn "*swapped_tst<mode>"
   [(set (reg:CC REG_CC)
-        (compare:CC (const_int 0)
-                 (match_operand:QI 0 "register_operand" "r")))]
+        (compare:CC (match_operand:ALLs1 0 "const0_operand"   "Y00")
+                    (match_operand:ALLs1 1 "register_operand" "r")))]
   "reload_completed"
-  "cp __zero_reg__,%0"
-[(set_attr "length" "2")])
+  "cp __zero_reg__,%1"
+[(set_attr "length" "1")])
 
-(define_insn "*negated_tsthi"
-  [(set (reg:CC REG_CC)
-        (compare:CC (neg:HI (match_operand:HI 0 "register_operand" "r"))
-                 (const_int 0)))]
-  "reload_completed && !flag_wrapv && !flag_trapv"
-  "cp __zero_reg__,%A0
-	cpc __zero_reg__,%B0"
-[(set_attr "length" "2")])
-
-;; Leave here the clobber used by the cmphi pattern for simplicity, even
-;; though it is unused, because this pattern is synthesized by avr_reorg.
-(define_insn "*reversed_tsthi"
+
+;; "*swapped_tsthi"  "*swapped_tsthq"  "*swapped_tstha"
+(define_insn "*swapped_tst<mode>"
   [(set (reg:CC REG_CC)
-        (compare:CC (const_int 0)
-                 (match_operand:HI 0 "register_operand" "r")))
-   (clobber (match_scratch:QI 1 "=X"))]
+        (compare:CC (match_operand:ALLs2 0 "const0_operand"   "Y00")
+                    (match_operand:ALLs2 1 "register_operand" "r")))]
   "reload_completed"
-  "cp __zero_reg__,%A0
-	cpc __zero_reg__,%B0"
-[(set_attr "length" "2")])
+  "cp __zero_reg__,%A1
+	cpc __zero_reg__,%B1"
+  [(set_attr "length" "2")])
 
-(define_insn "*negated_tstpsi"
-  [(set (reg:CC REG_CC)
-        (compare:CC (neg:PSI (match_operand:PSI 0 "register_operand" "r"))
-                 (const_int 0)))]
-  "reload_completed && !flag_wrapv && !flag_trapv"
-  "cp __zero_reg__,%A0\;cpc __zero_reg__,%B0\;cpc __zero_reg__,%C0"
-  [(set_attr "length" "3")])
 
-(define_insn "*reversed_tstpsi"
+(define_insn "*swapped_tstpsi"
   [(set (reg:CC REG_CC)
         (compare:CC (const_int 0)
-                 (match_operand:PSI 0 "register_operand" "r")))
-   (clobber (match_scratch:QI 1 "=X"))]
+                    (match_operand:PSI 0 "register_operand" "r")))]
   "reload_completed"
   "cp __zero_reg__,%A0\;cpc __zero_reg__,%B0\;cpc __zero_reg__,%C0"
   [(set_attr "length" "3")])
 
-(define_insn "*negated_tstsi"
-  [(set (reg:CC REG_CC)
-        (compare:CC (neg:SI (match_operand:SI 0 "register_operand" "r"))
-                 (const_int 0)))]
-  "reload_completed && !flag_wrapv && !flag_trapv"
-  "cp __zero_reg__,%A0
-	cpc __zero_reg__,%B0
-	cpc __zero_reg__,%C0
-	cpc __zero_reg__,%D0"
-  [(set_attr "length" "4")])
 
-;; "*reversed_tstsi"
-;; "*reversed_tstsq" "*reversed_tstusq"
-;; "*reversed_tstsa" "*reversed_tstusa"
-(define_insn "*reversed_tst<mode>"
+;; "*swapped_tstsi"  "*swapped_tstsq"  "*swapped_tstsa"
+(define_insn "*swapped_tst<mode>"
   [(set (reg:CC REG_CC)
-        (compare:CC (match_operand:ALL4 0 "const0_operand"   "Y00")
-                 (match_operand:ALL4 1 "register_operand" "r")))
-   (clobber (match_scratch:QI 2 "=X"))]
+        (compare:CC (match_operand:ALLs4 0 "const0_operand"   "Y00")
+                    (match_operand:ALLs4 1 "register_operand" "r")))]
   "reload_completed"
   "cp __zero_reg__,%A1
 	cpc __zero_reg__,%B1
@@ -6535,38 +6508,40 @@ (define_insn "*reversed_tst<mode>"
 (define_insn "cmp<mode>3"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:ALL1 0 "register_operand"  "r  ,r,d")
-                 (match_operand:ALL1 1 "nonmemory_operand" "Y00,r,i")))]
+                    (match_operand:ALL1 1 "nonmemory_operand" "Y00,r,i")))]
   "reload_completed"
   "@
-	tst %0
+	cp %0, __zero_reg__
 	cp %0,%1
 	cpi %0,lo8(%1)"
   [(set_attr "length" "1,1,1")])
 
-(define_insn "*cmpqi_sign_extend"
-  [(set (reg:CC REG_CC)
-        (compare:CC (sign_extend:HI (match_operand:QI 0 "register_operand" "d"))
-                 (match_operand:HI 1 "s8_operand"                       "n")))]
-  "reload_completed"
-  "cpi %0,lo8(%1)"
-  [(set_attr "length" "1")])
-
 
-(define_insn "*cmphi.zero-extend.0"
+;; May be generated by "*cbranch<HISI:mode>.<code><QIPSI:mode>.0/1".
+(define_insn "*cmp<HISI:mode>.<code><QIPSI:mode>.0"
   [(set (reg:CC REG_CC)
-        (compare:CC (zero_extend:HI (match_operand:QI 0 "register_operand" "r"))
-                 (match_operand:HI 1 "register_operand" "r")))]
-  "reload_completed"
-  "cp %0,%A1\;cpc __zero_reg__,%B1"
-  [(set_attr "length" "2")])
+        (compare:CC (any_extend:HISI (match_operand:QIPSI 0 "register_operand" "r"))
+                    (match_operand:HISI 1 "register_operand" "r")))]
+  "reload_completed
+   && GET_MODE_SIZE (<HISI:MODE>mode) > GET_MODE_SIZE (<QIPSI:MODE>mode)"
+  {
+    return avr_out_cmp_ext (operands, <CODE>, nullptr);
+  }
+  [(set_attr "adjust_len" "cmp_<extend_su>ext")])
 
-(define_insn "*cmphi.zero-extend.1"
+;; Swapped version of the above.
+;; May be generated by "*cbranch<HISI:mode>.<code><QIPSI:mode>.0/1".
+(define_insn "*cmp<HISI:mode>.<code><QIPSI:mode>.1"
   [(set (reg:CC REG_CC)
-        (compare:CC (match_operand:HI 0 "register_operand" "r")
-                 (zero_extend:HI (match_operand:QI 1 "register_operand" "r"))))]
-  "reload_completed"
-  "cp %A0,%1\;cpc %B0,__zero_reg__"
-  [(set_attr "length" "2")])
+        (compare:CC (match_operand:HISI 0 "register_operand" "r")
+                    (any_extend:HISI (match_operand:QIPSI 1 "register_operand" "r"))))]
+  "reload_completed
+   && GET_MODE_SIZE (<HISI:MODE>mode) > GET_MODE_SIZE (<QIPSI:MODE>mode)"
+  {
+    return avr_out_cmp_ext (operands, <CODE>, nullptr);
+  }
+  [(set_attr "adjust_len" "cmp_<extend_su>ext")])
+
 
 ;; "cmphi3"
 ;; "cmphq3" "cmpuhq3"
@@ -6574,8 +6549,8 @@ (define_insn "*cmphi.zero-extend.1"
 (define_insn "cmp<mode>3"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:ALL2 0 "register_operand"  "!w  ,r  ,r,d ,r  ,d,r")
-                 (match_operand:ALL2 1 "nonmemory_operand"  "Y00,Y00,r,s ,s  ,M,n Ynn")))
-   (clobber (match_scratch:QI 2                            "=X  ,X  ,X,&d,&d ,X,&d"))]
+                    (match_operand:ALL2 1 "nonmemory_operand"  "Y00,Y00,r,s ,s  ,M,n Ynn")))
+   (clobber (match_scratch:QI 2                               "=X  ,X  ,X,&d,&d ,X,&d"))]
   "reload_completed"
   {
     switch (which_alternative)
@@ -6602,14 +6577,14 @@ (define_insn "cmp<mode>3"
 
     return avr_out_compare (insn, operands, NULL);
   }
-  [(set_attr "length" "1,2,2,3,4,2,4")
+  [(set_attr "length" "2,2,2,3,4,2,4")
    (set_attr "adjust_len" "tsthi,tsthi,*,*,*,compare,compare")])
 
 (define_insn "*cmppsi"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:PSI 0 "register_operand"  "r,r,d ,r  ,d,r")
-                 (match_operand:PSI 1 "nonmemory_operand" "L,r,s ,s  ,M,n")))
-   (clobber (match_scratch:QI 2                          "=X,X,&d,&d ,X,&d"))]
+                    (match_operand:PSI 1 "nonmemory_operand" "L,r,s ,s  ,M,n")))
+   (clobber (match_scratch:QI 2                             "=X,X,&d,&d ,X,&d"))]
   "reload_completed"
   {
     switch (which_alternative)
@@ -6640,8 +6615,8 @@ (define_insn "*cmppsi"
 (define_insn "*cmp<mode>"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:ALL4 0 "register_operand"  "r  ,r ,d,r ,r")
-                 (match_operand:ALL4 1 "nonmemory_operand" "Y00,r ,M,M ,n Ynn")))
-   (clobber (match_scratch:QI 2                           "=X  ,X ,X,&d,&d"))]
+                    (match_operand:ALL4 1 "nonmemory_operand" "Y00,r ,M,M ,n Ynn")))
+   (clobber (match_scratch:QI 2                              "=X  ,X ,X,&d,&d"))]
   "reload_completed"
   {
     if (0 == which_alternative)
@@ -6655,6 +6630,13 @@ (define_insn "*cmp<mode>"
    (set_attr "adjust_len" "tstsi,*,compare,compare,compare")])
 
 
+;; A helper for avr_pass_ifelse::avr_rest_of_handle_ifelse().
+(define_expand "gen_compare<mode>"
+  [(parallel [(set (reg:CC REG_CC)
+                   (compare:CC (match_operand:HISI 0 "register_operand")
+                               (match_operand:HISI 1 "const_int_operand")))
+              (clobber (match_operand:QI 2 "scratch_operand"))])])
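+;; For instance, the ifelse pass calls gen_gen_comparehi (reg, val, scratch)
+;; to materialize the single remaining comparison before split2 runs.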
+
 ;; ----------------------------------------------------------------------
 ;; JUMP INSTRUCTIONS
 ;; ----------------------------------------------------------------------
@@ -6663,53 +6645,67 @@ (define_insn "*cmp<mode>"
 (define_expand "cbranch<mode>4"
   [(set (pc)
         (if_then_else (match_operator 0 "ordered_comparison_operator"
-                        [(match_operand:ALL1 1 "register_operand" "")
-                         (match_operand:ALL1 2 "nonmemory_operand" "")])
-         (label_ref (match_operand 3 "" ""))
-         (pc)))])
+                        [(match_operand:ALL1 1 "register_operand")
+                         (match_operand:ALL1 2 "nonmemory_operand")])
+                      (label_ref (match_operand 3))
+                      (pc)))]
+  ""
+  {
+    int icode = (int) GET_CODE (operands[0]);
+
+    targetm.canonicalize_comparison (&icode, &operands[1], &operands[2], false);
+    PUT_CODE (operands[0], (enum rtx_code) icode);
+  })
 
 (define_expand "cbranch<mode>4"
   [(parallel
      [(set (pc)
-           (if_then_else
-             (match_operator 0 "ordered_comparison_operator"
-               [(match_operand:ORDERED234 1 "register_operand" "")
-                (match_operand:ORDERED234 2 "nonmemory_operand" "")])
-             (label_ref (match_operand 3 "" ""))
-             (pc)))
-      (clobber (match_scratch:QI 4 ""))])])
-
-;; "*cbranchqi4"
-;; "*cbranchqq4"  "*cbranchuqq4"
-(define_insn_and_split "*cbranch<mode>4"
+           (if_then_else (match_operator 0 "ordered_comparison_operator"
+                           [(match_operand:ALL234 1 "register_operand")
+                            (match_operand:ALL234 2 "nonmemory_operand")])
+                         (label_ref (match_operand 3))
+                         (pc)))
+      (clobber (match_scratch:QI 4))])]
+  ""
+  {
+    int icode = (int) GET_CODE (operands[0]);
+
+    targetm.canonicalize_comparison (&icode, &operands[1], &operands[2], false);
+    PUT_CODE (operands[0], (enum rtx_code) icode);
+  })
+
+
+;; "cbranchqi4_insn"
+;; "cbranchqq4_insn"  "cbranchuqq4_insn"
+(define_insn_and_split "cbranch<mode>4_insn"
   [(set (pc)
         (if_then_else (match_operator 0 "ordered_comparison_operator"
-                        [(match_operand:ALL1 1 "register_operand" "r  ,r,d")
+                        [(match_operand:ALL1 1 "register_operand"  "r  ,r,d")
                          (match_operand:ALL1 2 "nonmemory_operand" "Y00,r,i")])
-         (label_ref (match_operand 3 "" ""))
-         (pc)))]
+                      (label_ref (match_operand 3))
+                      (pc)))]
    ""
    "#"
    "reload_completed"
    [(set (reg:CC REG_CC)
-                    (compare:CC (match_dup 1) (match_dup 2)))
+         (compare:CC (match_dup 1) (match_dup 2)))
     (set (pc)
          (if_then_else (match_op_dup 0
                          [(reg:CC REG_CC) (const_int 0)])
                        (label_ref (match_dup 3))
-                       (pc)))]
-   "")
+                       (pc)))])
 
-;; "*cbranchsi4"  "*cbranchsq4"  "*cbranchusq4"  "*cbranchsa4"  "*cbranchusa4"
-(define_insn_and_split "*cbranch<mode>4"
+;; "cbranchsi4_insn"
+;; "cbranchsq4_insn"  "cbranchusq4_insn"  "cbranchsa4_insn"  "cbranchusa4_insn"
+(define_insn_and_split "cbranch<mode>4_insn"
   [(set (pc)
-           (if_then_else
-             (match_operator 0 "ordered_comparison_operator"
-               [(match_operand:ALL4 1 "register_operand" "r  ,r ,d,r ,r")
-                (match_operand:ALL4 2 "nonmemory_operand" "Y00,r ,M,M ,n Ynn")])
-             (label_ref (match_operand 3 "" ""))
-             (pc)))
-   (clobber (match_scratch:QI 4 "=X  ,X ,X,&d,&d"))]
+        (if_then_else
+         (match_operator 0 "ordered_comparison_operator"
+           [(match_operand:ALL4 1 "register_operand"  "r  ,r,d,r ,r")
+            (match_operand:ALL4 2 "nonmemory_operand" "Y00,r,M,M ,n Ynn")])
+         (label_ref (match_operand 3))
+         (pc)))
+   (clobber (match_scratch:QI 4                      "=X  ,X,X,&d,&d"))]
    ""
    "#"
    "reload_completed"
@@ -6720,19 +6716,18 @@ (define_insn_and_split "*cbranch<mode>4"
          (if_then_else (match_op_dup 0
                          [(reg:CC REG_CC) (const_int 0)])
                        (label_ref (match_dup 3))
-                       (pc)))]
-   "")
+                       (pc)))])
 
-;; "*cbranchpsi4"
-(define_insn_and_split "*cbranchpsi4"
+;; "cbranchpsi4_insn"
+(define_insn_and_split "cbranchpsi4_insn"
   [(set (pc)
-           (if_then_else
-             (match_operator 0 "ordered_comparison_operator"
-               [(match_operand:PSI 1 "register_operand" "r,r,d ,r  ,d,r")
-                (match_operand:PSI 2 "nonmemory_operand" "L,r,s ,s  ,M,n")])
-             (label_ref (match_operand 3 "" ""))
-             (pc)))
-   (clobber (match_scratch:QI 4 "=X,X,&d,&d ,X,&d"))]
+        (if_then_else
+         (match_operator 0 "ordered_comparison_operator"
+           [(match_operand:PSI 1 "register_operand"  "r,r,d ,r ,d,r")
+            (match_operand:PSI 2 "nonmemory_operand" "L,r,s ,s ,M,n")])
+         (label_ref (match_operand 3))
+         (pc)))
+   (clobber (match_scratch:QI 4                     "=X,X,&d,&d,X,&d"))]
    ""
    "#"
    "reload_completed"
@@ -6743,19 +6738,19 @@ (define_insn_and_split "*cbranchpsi4"
          (if_then_else (match_op_dup 0
                          [(reg:CC REG_CC) (const_int 0)])
                        (label_ref (match_dup 3))
-                       (pc)))]
-   "")
+                       (pc)))])
 
-;; "*cbranchhi4"  "*cbranchhq4"  "*cbranchuhq4"  "*cbranchha4"  "*cbranchuha4"
-(define_insn_and_split "*cbranch<mode>4"
+;; "cbranchhi4_insn"
+;; "cbranchhq4_insn"  "cbranchuhq4_insn"  "cbranchha4_insn"  "cbranchuha4_insn"
+(define_insn_and_split "cbranch<mode>4_insn"
   [(set (pc)
-           (if_then_else
-             (match_operator 0 "ordered_comparison_operator"
-               [(match_operand:ALL2 1 "register_operand" "!w  ,r  ,r,d ,r  ,d,r")
-                (match_operand:ALL2 2 "nonmemory_operand" "Y00,Y00,r,s ,s  ,M,n Ynn")])
-             (label_ref (match_operand 3 "" ""))
-             (pc)))
-   (clobber (match_scratch:QI 4 "=X  ,X  ,X,&d,&d ,X,&d"))]
+        (if_then_else
+         (match_operator 0 "ordered_comparison_operator"
+           [(match_operand:ALL2 1 "register_operand" "!w  ,r  ,r,d ,r ,d,r")
+            (match_operand:ALL2 2 "nonmemory_operand" "Y00,Y00,r,s ,s ,M,n Ynn")])
+         (label_ref (match_operand 3))
+         (pc)))
+   (clobber (match_scratch:QI 4                      "=X  ,X  ,X,&d,&d,X,&d"))]
    ""
    "#"
    "reload_completed"
@@ -6766,8 +6761,71 @@ (define_insn_and_split "*cbranch<mode>4"
          (if_then_else (match_op_dup 0
                          [(reg:CC REG_CC) (const_int 0)])
                        (label_ref (match_dup 3))
-                       (pc)))]
-   "")
+                       (pc)))])
+
+;; Combiner pattern to compare sign- or zero-extended register against
+;; a wider register, like comparing uint8_t against uint16_t.
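+;; For example (an assumed C-level illustration):
+;;     uint8_t a;  uint16_t b;  ...  if (a < b)  bar ();
+;; lets combine feed the zero_extend of a straight into the comparison
+;; instead of first widening a into a 16-bit register pair.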
+(define_insn_and_split "*cbranch<HISI:mode>.<code><QIPSI:mode>.0"
+  [(set (pc)
+        (if_then_else (match_operator 0 "ordered_comparison_operator"
+                        [(any_extend:HISI (match_operand:QIPSI 1 "register_operand" "r"))
+                         (match_operand:HISI 2 "register_operand" "r")])
+                      (label_ref (match_operand 3))
+                      (pc)))]
+  "optimize
+   && GET_MODE_SIZE (<HISI:MODE>mode) > GET_MODE_SIZE (<QIPSI:MODE>mode)"
+  "#"
+  "&& reload_completed"
+  [; "*cmp<HISI:mode>.<code><QIPSI:mode>.0"
+   (set (reg:CC REG_CC)
+        (compare:CC (match_dup 1)
+                    (match_dup 2)))
+   ; "branch"
+   (set (pc)
+        (if_then_else (match_op_dup 0 [(reg:CC REG_CC)
+                                       (const_int 0)])
+                      (label_ref (match_dup 3))
+                      (pc)))]
+  {
+    operands[1] = gen_rtx_<CODE> (<HISI:MODE>mode, operands[1]);
+    if (difficult_comparison_operator (operands[0], VOIDmode))
+      {
+        PUT_CODE (operands[0], swap_condition (GET_CODE (operands[0])));
+        std::swap (operands[1], operands[2]);
+      }
+  })
+
+;; Same combiner pattern, but with swapped operands.
+(define_insn_and_split "*cbranch<HISI:mode>.<code><QIPSI:mode>.0"
+  [(set (pc)
+        (if_then_else (match_operator 0 "ordered_comparison_operator"
+                        [(match_operand:HISI 1 "register_operand" "r")
+                         (any_extend:HISI (match_operand:QIPSI 2 "register_operand" "r"))])
+                      (label_ref (match_operand 3))
+                      (pc)))]
+  "optimize
+   && GET_MODE_SIZE (<HISI:MODE>mode) > GET_MODE_SIZE (<QIPSI:MODE>mode)"
+  "#"
+  "&& reload_completed"
+  [; "*cmp<HISI:mode>.<code><QIPSI:mode>.0"
+   (set (reg:CC REG_CC)
+        (compare:CC (match_dup 1)
+                    (match_dup 2)))
+   ; "branch"
+   (set (pc)
+        (if_then_else (match_op_dup 0 [(reg:CC REG_CC)
+                                       (const_int 0)])
+                      (label_ref (match_dup 3))
+                      (pc)))]
+  {
+    operands[2] = gen_rtx_<CODE> (<HISI:MODE>mode, operands[2]);
+    if (difficult_comparison_operator (operands[0], VOIDmode))
+      {
+        PUT_CODE (operands[0], swap_condition (GET_CODE (operands[0])));
+        std::swap (operands[1], operands[2]);
+      }
+  })
+
 
 ;; Test a single bit in a QI/HI/SImode register.
 ;; Combine will create zero extract patterns for single bit tests.
@@ -6841,14 +6899,11 @@ (define_insn_and_split "*sbrx_and_branch<mode>_split"
   "#"
   "&& reload_completed"
   [(parallel [(set (pc)
-                   (if_then_else
-                   (match_op_dup 0
-                                 [(and:QISI
-                                   (match_dup 1)
-                                   (match_dup 2))
-                                  (const_int 0)])
-                   (label_ref (match_dup 3))
-                   (pc)))
+                   (if_then_else (match_op_dup 0 [(and:QISI (match_dup 1)
+                                                            (match_dup 2))
+                                                  (const_int 0)])
+                                 (label_ref (match_dup 3))
+                                 (pc)))
               (clobber (reg:CC REG_CC))])])
 
 (define_insn "*sbrx_and_branch<mode>"
@@ -6877,163 +6932,77 @@ (define_insn "*sbrx_and_branch<mode>"
                                     (const_int 2)
                                     (const_int 4))))])
 
-;; Convert sign tests to bit 7/15/31 tests that match the above insns.
-(define_peephole2
-  [(set (reg:CC REG_CC) (compare:CC (match_operand:QI 0 "register_operand" "")
-                       (const_int 0)))
-   (parallel [(set (pc) (if_then_else (ge (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (eq (zero_extract:HI (match_dup 0)
-                                                           (const_int 1)
-                                                           (const_int 7))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])])
-
-(define_peephole2
-  [(set (reg:CC REG_CC) (compare:CC (match_operand:QI 0 "register_operand" "")
-                       (const_int 0)))
-   (parallel [(set (pc) (if_then_else (lt (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (ne (zero_extract:HI (match_dup 0)
-                                                           (const_int 1)
-                                                           (const_int 7))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])])
-
-(define_peephole2
-  [(parallel [(set (reg:CC REG_CC) (compare:CC (match_operand:HI 0 "register_operand" "")
-                                  (const_int 0)))
-              (clobber (match_operand:HI 2 ""))])
-   (parallel [(set (pc) (if_then_else (ge (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (eq (and:HI (match_dup 0) (const_int -32768))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])])
 
-(define_peephole2
-  [(parallel [(set (reg:CC REG_CC) (compare:CC (match_operand:HI 0 "register_operand" "")
-                                  (const_int 0)))
-              (clobber (match_operand:HI 2 ""))])
-   (parallel [(set (pc) (if_then_else (lt (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
+;; Convert sign tests to bit 7 tests that match the above insns.
+(define_peephole2 ; "*sbrx_branch<mode>"
+  [(set (reg:CC REG_CC)
+        (compare:CC (match_operand:ALLs1 0 "register_operand")
+                    (match_operand:ALLs1 1 "const0_operand")))
+   (set (pc)
+        (if_then_else (gelt (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(parallel [(set (pc)
+                   (if_then_else (<gelt_eqne> (zero_extract:HI (match_dup 0)
+                                                               (const_int 1)
+                                                               (match_dup 1))
+                                              (const_int 0))
+                                 (label_ref (match_dup 2))
+                                 (pc)))
               (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (ne (and:HI (match_dup 0) (const_int -32768))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])])
+  {
+    operands[0] = avr_to_int_mode (operands[0]);
+    operands[1] = GEN_INT (GET_MODE_BITSIZE (<MODE>mode) - 1);
+  })
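+
+;; For example (illustration only): a signed "x < 0" test on a QImode
+;; operand becomes a single-bit test of bit 7 with NE, which then matches
+;; the sbrx branch insns above instead of a full compare and branch.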
 
-(define_peephole2
-  [(parallel [(set (reg:CC REG_CC) (compare:CC (match_operand:SI 0 "register_operand" "")
-                                  (const_int 0)))
-              (clobber (match_operand:SI 2 ""))])
-   (parallel [(set (pc) (if_then_else (ge (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (eq (and:SI (match_dup 0) (match_dup 2))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
+;; Convert sign tests to bit 15/23/31 tests that match the above insns.
+(define_peephole2 ; "*sbrx_branch<mode>"
+  [(parallel [(set (reg:CC REG_CC)
+                   (compare:CC (match_operand:ALLs234 0 "register_operand")
+                               (match_operand:ALLs234 1 "const0_operand")))
+              (clobber (match_operand:QI 3 "scratch_operand"))])
+   (set (pc)
+        (if_then_else (gelt (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(parallel [(set (pc)
+                   (if_then_else (<gelt_eqne> (zero_extract:HI (match_dup 0)
+                                                               (const_int 1)
+                                                               (match_dup 1))
+                                              (const_int 0))
+                                 (label_ref (match_dup 2))
+                                 (pc)))
               (clobber (reg:CC REG_CC))])]
-  "operands[2] = gen_int_mode (-2147483647 - 1, SImode);")
+  {
+    operands[0] = avr_to_int_mode (operands[0]);
+    operands[1] = GEN_INT (GET_MODE_BITSIZE (<MODE>mode) - 1);
+  })
 
-(define_peephole2
-  [(parallel [(set (reg:CC REG_CC) (compare:CC (match_operand:SI 0 "register_operand" "")
-                                  (const_int 0)))
-              (clobber (match_operand:SI 2 ""))])
-   (parallel [(set (pc) (if_then_else (lt (reg:CC REG_CC) (const_int 0))
-                                      (label_ref (match_operand 1 "" ""))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
-  [(parallel [(set (pc) (if_then_else (ne (and:SI (match_dup 0) (match_dup 2))
-                                          (const_int 0))
-                                      (label_ref (match_dup 1))
-                                      (pc)))
-              (clobber (reg:CC REG_CC))])]
-  "operands[2] = gen_int_mode (-2147483647 - 1, SImode);")
 
 ;; ************************************************************************
 ;; Implementation of conditional jumps here.
 ;;  Compare with 0 (test) jumps
 ;; ************************************************************************
 
-(define_insn_and_split "branch"
+(define_insn "branch"
   [(set (pc)
         (if_then_else (match_operator 1 "simple_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (label_ref (match_operand 0 "" ""))
+                        [(reg:CC REG_CC)
+                         (const_int 0)])
+                      (label_ref (match_operand 0))
                       (pc)))]
   "reload_completed"
-  "#"
-  "&& reload_completed"
-  [(parallel [(set (pc)
-                   (if_then_else (match_op_dup 1
-                                               [(reg:CC REG_CC)
-                                                (const_int 0)])
-                                 (label_ref (match_dup 0))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*branch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "simple_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (label_ref (match_operand 0 "" ""))
-                      (pc)))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
   {
     return ret_cond_branch (operands[1], avr_jump_mode (operands[0], insn), 0);
   }
   [(set_attr "type" "branch")])
 
 
-;; Same as above but wrap SET_SRC so that this branch won't be transformed
-;; or optimized in the remainder.
-
-(define_insn "branch_unspec"
-  [(set (pc)
-        (unspec [(if_then_else (match_operator 1 "simple_comparison_operator"
-                                               [(reg:CC REG_CC)
-                                                (const_int 0)])
-                               (label_ref (match_operand 0 "" ""))
-                               (pc))
-                 ] UNSPEC_IDENTITY))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
-  {
-    return ret_cond_branch (operands[1], avr_jump_mode (operands[0], insn), 0);
-  }
-  [(set_attr "type" "branch")])
-
-;; ****************************************************************
-;; AVR does not have following conditional jumps: LE,LEU,GT,GTU.
-;; Convert them all to proper jumps.
-;; ****************************************************************/
-
-(define_insn_and_split "difficult_branch"
+(define_insn "difficult_branch"
   [(set (pc)
         (if_then_else (match_operator 1 "difficult_comparison_operator"
                         [(reg:CC REG_CC)
@@ -7041,95 +7010,11 @@ (define_insn_and_split "difficult_branch"
                       (label_ref (match_operand 0 "" ""))
                       (pc)))]
   "reload_completed"
-  "#"
-  "&& reload_completed"
-  [(parallel [(set (pc)
-                   (if_then_else (match_op_dup 1
-                                   [(reg:CC REG_CC)
-                                    (const_int 0)])
-                                 (label_ref (match_dup 0))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*difficult_branch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "difficult_comparison_operator"
-                        [(reg:CC REG_CC)
-                         (const_int 0)])
-                      (label_ref (match_operand 0 "" ""))
-                      (pc)))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
   {
     return ret_cond_branch (operands[1], avr_jump_mode (operands[0], insn), 0);
   }
   [(set_attr "type" "branch1")])
 
-;; revers branch
-
-(define_insn_and_split "rvbranch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "simple_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (pc)
-                      (label_ref (match_operand 0 "" ""))))]
-  "reload_completed"
-  "#"
-  "&& reload_completed"
-  [(parallel [(set (pc)
-                   (if_then_else (match_op_dup 1
-                                               [(reg:CC REG_CC)
-                                                (const_int 0)])
-                                 (pc)
-                                 (label_ref (match_dup 0))))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*rvbranch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "simple_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (pc)
-                      (label_ref (match_operand 0 "" ""))))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
-  {
-    return ret_cond_branch (operands[1], avr_jump_mode (operands[0], insn), 1);
-  }
-  [(set_attr "type" "branch1")])
-
-(define_insn_and_split "difficult_rvbranch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "difficult_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (pc)
-                      (label_ref (match_operand 0 "" ""))))]
-  "reload_completed"
-  "#"
-  "&& reload_completed"
-  [(parallel [(set (pc)
-                   (if_then_else (match_op_dup 1
-                                               [(reg:CC REG_CC)
-                                                (const_int 0)])
-                                 (pc)
-                                 (label_ref (match_dup 0))))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*difficult_rvbranch"
-  [(set (pc)
-        (if_then_else (match_operator 1 "difficult_comparison_operator"
-                                      [(reg:CC REG_CC)
-                                       (const_int 0)])
-                      (pc)
-                      (label_ref (match_operand 0 "" ""))))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
-  {
-    return ret_cond_branch (operands[1], avr_jump_mode (operands[0], insn), 1);
-  }
-  [(set_attr "type" "branch")])
 
 ;; **************************************************************************
 ;; Unconditional and other jump instructions.
@@ -7655,15 +7540,14 @@ (define_peephole ; "*dec-and-branchsi!=-1.d.clobber"
               (clobber (reg:CC REG_CC))])
    (parallel [(set (reg:CC REG_CC)
                    (compare:CC (match_dup 0)
-                            (const_int -1)))
-              (clobber (match_operand:QI 1 "d_register_operand" ""))])
-   (parallel [(set (pc)
-                   (if_then_else (eqne (reg:CC REG_CC)
-                                       (const_int 0))
-                                 (label_ref (match_operand 2 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
+                               (const_int -1)))
+              (clobber (match_operand:QI 1 "scratch_or_d_register_operand"))])
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "dead_or_set_regno_p (insn, REG_CC)"
   {
     const char *op;
     int jump_mode;
@@ -7699,15 +7583,14 @@ (define_peephole ; "*dec-and-branchhi!=-1"
               (clobber (reg:CC REG_CC))])
    (parallel [(set (reg:CC REG_CC)
                    (compare:CC (match_dup 0)
-                            (const_int -1)))
+                               (const_int -1)))
               (clobber (match_operand:QI 1 "d_register_operand" ""))])
-   (parallel [(set (pc)
-                   (if_then_else (eqne (reg:CC REG_CC)
-                                       (const_int 0))
-                                 (label_ref (match_operand 2 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "dead_or_set_regno_p (insn, REG_CC)"
   {
     const char *op;
     int jump_mode;
@@ -7741,15 +7624,14 @@ (define_peephole ; "*dec-and-branchhi!=-1.d.clobber"
               (clobber (reg:CC REG_CC))])
    (parallel [(set (reg:CC REG_CC)
                    (compare:CC (match_dup 0)
-                            (const_int -1)))
-              (clobber (match_operand:QI 1 "d_register_operand" ""))])
-   (parallel [(set (pc)
-                   (if_then_else (eqne (reg:CC REG_CC)
-                                       (const_int 0))
-                                 (label_ref (match_operand 2 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
+                               (const_int -1)))
+              (clobber (match_operand:QI 1 "scratch_or_d_register_operand"))])
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "dead_or_set_regno_p (insn, REG_CC)"
   {
     const char *op;
     int jump_mode;
@@ -7783,15 +7665,14 @@ (define_peephole ; "*dec-and-branchhi!=-1.l.clobber"
               (clobber (reg:CC REG_CC))])
    (parallel [(set (reg:CC REG_CC)
                    (compare:CC (match_dup 0)
-                            (const_int -1)))
+                               (const_int -1)))
               (clobber (match_operand:QI 1 "d_register_operand" ""))])
-   (parallel [(set (pc)
-                   (if_then_else (eqne (reg:CC REG_CC)
-                                       (const_int 0))
-                                 (label_ref (match_operand 2 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "dead_or_set_regno_p (insn, REG_CC)"
   {
     const char *op;
     int jump_mode;
@@ -7821,14 +7702,13 @@ (define_peephole ; "*dec-and-branchqi!=-1"
               (clobber (reg:CC REG_CC))])
    (set (reg:CC REG_CC)
         (compare:CC (match_dup 0)
-                 (const_int -1)))
-   (parallel [(set (pc)
-                   (if_then_else (eqne (reg:CC REG_CC)
-                                       (const_int 0))
-                                 (label_ref (match_operand 1 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  ""
+                    (const_int -1)))
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 1))
+                      (pc)))]
+  "dead_or_set_regno_p (insn, REG_CC)"
   {
     const char *op;
     int jump_mode;
@@ -7854,14 +7734,14 @@ (define_peephole ; "*dec-and-branchqi!=-1"
 (define_peephole ; "*cpse.eq"
   [(set (reg:CC REG_CC)
         (compare:CC (match_operand:ALL1 1 "register_operand" "r,r")
-                 (match_operand:ALL1 2 "reg_or_0_operand" "r,Y00")))
-   (parallel [(set (pc)
-                   (if_then_else (eq (reg:CC REG_CC)
-                                     (const_int 0))
-                                 (label_ref (match_operand 0 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  "jump_over_one_insn_p (insn, operands[0])"
+                    (match_operand:ALL1 2 "reg_or_0_operand" "r,Y00")))
+   (set (pc)
+        (if_then_else (eq (reg:CC REG_CC)
+                          (const_int 0))
+                      (label_ref (match_operand 0))
+                      (pc)))]
+  "jump_over_one_insn_p (insn, operands[0])
+   && dead_or_set_regno_p (insn, REG_CC)"
   "@
 	cpse %1,%2
 	cpse %1,__zero_reg__")
@@ -7889,16 +7769,16 @@ (define_peephole ; "*cpse.eq"
 
 (define_peephole ; "*cpse.ne"
   [(set (reg:CC REG_CC)
-        (compare:CC (match_operand:ALL1 1 "register_operand" "")
-                 (match_operand:ALL1 2 "reg_or_0_operand" "")))
-   (parallel [(set (pc)
-                   (if_then_else (ne (reg:CC REG_CC)
-                                     (const_int 0))
-                                 (label_ref (match_operand 0 "" ""))
-                                 (pc)))
-              (clobber (reg:CC REG_CC))])]
-  "!AVR_HAVE_JMP_CALL
-   || !TARGET_SKIP_BUG"
+        (compare:CC (match_operand:ALL1 1 "register_operand")
+                    (match_operand:ALL1 2 "reg_or_0_operand")))
+   (set (pc)
+        (if_then_else (ne (reg:CC REG_CC)
+                          (const_int 0))
+                      (label_ref (match_operand 0))
+                      (pc)))]
+  "(!AVR_HAVE_JMP_CALL
+    || !TARGET_SKIP_BUG)
+   && dead_or_set_regno_p (insn, REG_CC)"
   {
     if (operands[2] == CONST0_RTX (<MODE>mode))
       operands[2] = zero_reg_rtx;
@@ -8093,7 +7973,7 @@ (define_insn_and_split "delay_cycles_1"
                                 (const_int 1)]
                                UNSPECV_DELAY_CYCLES)
               (set (match_dup 1)
-               (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
+                   (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
               (clobber (match_dup 2))
               (clobber (reg:CC REG_CC))])])
 
@@ -8125,7 +8005,7 @@ (define_insn_and_split "delay_cycles_2"
                                 (const_int 2)]
                                UNSPECV_DELAY_CYCLES)
               (set (match_dup 1)
-               (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
+                   (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
               (clobber (match_dup 2))
               (clobber (reg:CC REG_CC))])]
   ""
@@ -8162,7 +8042,7 @@ (define_insn_and_split "delay_cycles_3"
                                 (const_int 3)]
                                UNSPECV_DELAY_CYCLES)
               (set (match_dup 1)
-               (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
+                   (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
               (clobber (match_dup 2))
               (clobber (match_dup 3))
               (clobber (match_dup 4))
@@ -8205,7 +8085,7 @@ (define_insn_and_split "delay_cycles_4"
                                 (const_int 4)]
                                UNSPECV_DELAY_CYCLES)
               (set (match_dup 1)
-               (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
+                   (unspec_volatile:BLK [(match_dup 1)] UNSPECV_MEMORY_BARRIER))
               (clobber (match_dup 2))
               (clobber (match_dup 3))
               (clobber (match_dup 4))
@@ -9488,6 +9368,258 @@ (define_peephole2
               (clobber (reg:CC REG_CC))])])
 
 
+;; Try to optimize decrement-and-branch.  When we have an addition followed
+;; by a comparison of the result against zero, we can output the addition
+;; in such a way that SREG.N and SREG.Z are set according to the result.
+
+;; { -1, +1 } for QImode, otherwise the empty set.
+(define_mode_attr p1m1 [(QI "N P")
+                        (HI "Yxx") (PSI "Yxx") (SI "Yxx")])
+
+;; FIXME: reload1.cc::do_output_reload() does not support output reloads
+;; for JUMP_INSNs, hence letting combine do decrement-and-branch like
+;; the following might run into ICE.  Doing reloads by hand is too painful...
+;
+; (define_insn_and_split "*add.for.eqne.<mode>.cbranch"
+;   [(set (pc)
+;         (if_then_else (eqne (match_operand:QISI 1 "register_operand"  "0")
+;                             (match_operand:QISI 2 "const_int_operand" "n"))
+;                       (label_ref (match_operand 4))
+;                       (pc)))
+;    (set (match_operand:QISI 0 "register_operand" "=r")
+;         (plus:QISI (match_dup 1)
+;                    (match_operand:QISI 3 "const_int_operand" "n")))]
+;   ;; No clobber for now as combine might not have one handy.
+;   ;; We pop a scratch in split1.
+;   "!reload_completed
+;    && const0_rtx == simplify_binary_operation (PLUS, <MODE>mode,
+;                                                operands[2], operands[3])"
+;   { gcc_unreachable(); }
+;   "&& 1"
+;   [(parallel [(set (pc)
+;                    (if_then_else (eqne (match_dup 1)
+;                                        (match_dup 2))
+;                                  (label_ref (match_dup 4))
+;                                  (pc)))
+;               (set (match_dup 0)
+;                    (plus:QISI (match_dup 1)
+;                               (match_dup 3)))
+;               (clobber (scratch:QI))])])
+;
+;; ...Hence, stick with RTL peepholes for now.  Unfortunately, there is no
+;; canonical form, and if reload shuffles registers around, we might miss
+;; opportunities to match a decrement-and-branch.
+;; doloop_end doesn't reload either, so doloop_end also won't work.
+
+(define_expand "gen_add_for_<code>_<mode>"
+  ; "*add.for.eqne.<mode>"
+  [(parallel [(set (reg:CC REG_CC)
+                   (compare:CC (plus:QISI (match_operand:QISI 0 "register_operand")
+                                          (match_operand:QISI 1 "const_int_operand"))
+                               (const_int 0)))
+              (set (match_dup 0)
+                   (plus:QISI (match_dup 0)
+                              (match_dup 1)))
+              (clobber (match_operand:QI 3))])
+   ; "branch"
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_dup 2))
+                      (pc)))])
+
+
+;; 1/3: A version without clobber: d-reg or 8-bit adds +/-1.
+(define_peephole2
+  [(parallel [(set (match_operand:QISI 0 "register_operand")
+                   (plus:QISI (match_dup 0)
+                              (match_operand:QISI 1 "const_int_operand")))
+              (clobber (reg:CC REG_CC))])
+   (set (reg:CC REG_CC)
+        (compare:CC (match_dup 0)
+                    (const_int 0)))
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "peep2_regno_dead_p (3, REG_CC)
+   && (d_register_operand (operands[0], <MODE>mode)
+       || (<MODE>mode == QImode
+           && (INTVAL (operands[1]) == 1
+               || INTVAL (operands[1]) == -1)))"
+  [(scratch)]
+  {
+    emit (gen_gen_add_for_<code>_<mode> (operands[0], operands[1], operands[2],
+          gen_rtx_SCRATCH (QImode)));
+    DONE;
+  })
+
+;; 2/3: A version with clobber from the insn.
+(define_peephole2
+  [(parallel [(set (match_operand:QISI 0 "register_operand")
+                   (plus:QISI (match_dup 0)
+                              (match_operand:QISI 1 "const_int_operand")))
+              (clobber (match_operand:QI 3 "scratch_or_d_register_operand"))
+              (clobber (reg:CC REG_CC))])
+   (parallel [(set (reg:CC REG_CC)
+                   (compare:CC (match_dup 0)
+                               (const_int 0)))
+              (clobber (match_operand:QI 4 "scratch_or_d_register_operand"))])
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "peep2_regno_dead_p (3, REG_CC)"
+  [(scratch)]
+  {
+    rtx scratch = REG_P (operands[3]) ? operands[3] : operands[4];
+
+    // We need either a d-register or a scratch register to clobber.
+    if (! REG_P (scratch)
+        && ! d_register_operand (operands[0], <MODE>mode)
+        && ! (QImode == <MODE>mode
+              && (INTVAL (operands[1]) == 1
+                  || INTVAL (operands[1]) == -1)))
+      {
+        FAIL;
+      }
+    emit (gen_gen_add_for_<code>_<mode> (operands[0], operands[1], operands[2],
+          scratch));
+    DONE;
+  })
+
+;; 3/3: A version with a clobber from peephole2.
+(define_peephole2
+  [(match_scratch:QI 3 "d")
+   (parallel [(set (match_operand:QISI 0 "register_operand")
+                   (plus:QISI (match_dup 0)
+                              (match_operand:QISI 1 "const_int_operand")))
+              (clobber (reg:CC REG_CC))])
+   (set (reg:CC REG_CC)
+        (compare:CC (match_dup 0)
+                    (const_int 0)))
+   (set (pc)
+        (if_then_else (eqne (reg:CC REG_CC)
+                            (const_int 0))
+                      (label_ref (match_operand 2))
+                      (pc)))]
+  "peep2_regno_dead_p (3, REG_CC)"
+  [(scratch)]
+  {
+    emit (gen_gen_add_for_<code>_<mode> (operands[0], operands[1], operands[2],
+          operands[3]));
+    DONE;
+  })
+
+;; Result of the above three peepholes is an addition that also
+;; performs an EQ or NE comparison (of the result) against zero.
+;; FIXME: Using (match_dup 0) instead of operands[3/4] makes rnregs
+;; barf in regrename.cc::merge_overlapping_regs().  For now, use the
+;; fix from PR50788: Constrain as "0".
+(define_insn "*add.for.eqne.<mode>"
+  [(set (reg:CC REG_CC)
+        (compare:CC
+         (plus:QISI (match_operand:QISI 3 "register_operand"  "0,0     ,0")
+                    (match_operand:QISI 1 "const_int_operand" "n,<p1m1>,n"))
+         (const_int 0)))
+   (set (match_operand:QISI 0 "register_operand"             "=d,*r    ,r")
+        (plus:QISI (match_operand:QISI 4 "register_operand"   "0,0     ,0")
+                   (match_dup 1)))
+   (clobber (match_scratch:QI 2                              "=X,X     ,&d"))]
+  "reload_completed"
+  {
+    return avr_out_plus_set_ZN (operands, nullptr);
+  }
+  [(set_attr "adjust_len" "add_set_ZN")])
+
+
+;; Swapping both comparison and branch condition.  This can turn difficult
+;; branches to easy ones.  And in some cases, a comparison against one can
+;; be turned into a comparison against zero.
+
+(define_peephole2 ; "*swapped_tst<mode>"
+  [(parallel [(set (reg:CC REG_CC)
+                   (compare:CC (match_operand:ALLs234 1 "register_operand")
+                               (match_operand:ALLs234 2 "const_operand")))
+              (clobber (match_operand:QI 3 "scratch_operand"))])
+   (set (pc)
+        (if_then_else (match_operator 0 "ordered_comparison_operator"
+                        [(reg:CC REG_CC)
+                         (const_int 0)])
+                      (label_ref (match_operand 4))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(set (reg:CC REG_CC)
+        (compare:CC (match_dup 2)
+                    (match_dup 1)))
+   ; "branch"
+   (set (pc)
+        (if_then_else (match_op_dup 0 [(reg:CC REG_CC)
+                                       (const_int 0)])
+                      (label_ref (match_dup 4))
+                      (pc)))]
+  {
+    rtx xval = avr_to_int_mode (operands[2]);
+    enum rtx_code code = GET_CODE (operands[0]);
+
+    if (code == GT && xval == const0_rtx)
+      code = LT;
+    else if (code == GE && xval == const1_rtx)
+      code = LT;
+    else if (code == LE && xval == const0_rtx)
+      code = GE;
+    else if (code == LT && xval == const1_rtx)
+      code = GE;
+    else
+      FAIL;
+
+    operands[2] = CONST0_RTX (<MODE>mode);
+    PUT_CODE (operands[0], code);
+  })
+
+;; Same, but for 8-bit modes which have no scratch reg.
+(define_peephole2 ; "*swapped_tst<mode>"
+  [(set (reg:CC REG_CC)
+        (compare:CC (match_operand:ALLs1 1 "register_operand")
+                    (match_operand:ALLs1 2 "const_operand")))
+   (set (pc)
+        (if_then_else (match_operator 0 "ordered_comparison_operator"
+                        [(reg:CC REG_CC)
+                         (const_int 0)])
+                      (label_ref (match_operand 4))
+                      (pc)))]
+  "peep2_regno_dead_p (2, REG_CC)"
+  [(set (reg:CC REG_CC)
+        (compare:CC (match_dup 2)
+                    (match_dup 1)))
+   ; "branch"
+   (set (pc)
+        (if_then_else (match_op_dup 0 [(reg:CC REG_CC)
+                                       (const_int 0)])
+                      (label_ref (match_dup 4))
+                      (pc)))]
+  {
+    rtx xval = avr_to_int_mode (operands[2]);
+    enum rtx_code code = GET_CODE (operands[0]);
+
+    if (code == GT && xval == const0_rtx)
+      code = LT;
+    else if (code == GE && xval == const1_rtx)
+      code = LT;
+    else if (code == LE && xval == const0_rtx)
+      code = GE;
+    else if (code == LT && xval == const1_rtx)
+      code = GE;
+    else
+      FAIL;
+
+    operands[2] = CONST0_RTX (<MODE>mode);
+    PUT_CODE (operands[0], code);
+  })
+
+
 (define_expand "extzv"
   [(set (match_operand:QI 0 "register_operand" "")
         (zero_extract:QI (match_operand:QI 1 "register_operand"  "")
diff --git a/gcc/config/avr/constraints.md b/gcc/config/avr/constraints.md
index ac43678872d..f840c0114a0 100644
--- a/gcc/config/avr/constraints.md
+++ b/gcc/config/avr/constraints.md
@@ -245,6 +245,11 @@ (define_constraint "Ym2"
 	    (match_test "INTVAL (avr_to_int_mode (op)) == -2"))
        (match_test "satisfies_constraint_Cm2 (op)")))
 
+;; Constraint that's the empty set.  Useful with mode and code iterators.
+(define_constraint "Yxx"
+  "A constraints that is always false"
+  (match_test "false"))
+
 (define_constraint "Yx2"
   "Fixed-point or integer constant not in the range @minus{}2 @dots{} 2"
   (and (ior (match_code "const_int")
diff --git a/gcc/config/avr/predicates.md b/gcc/config/avr/predicates.md
index e352326466f..f4ee42af274 100644
--- a/gcc/config/avr/predicates.md
+++ b/gcc/config/avr/predicates.md
@@ -27,6 +27,11 @@ (define_predicate "d_register_operand"
   (and (match_code "reg")
        (match_test "REGNO (op) >= 16 && REGNO (op) <= 31")))
 
+(define_predicate "scratch_or_d_register_operand"
+  (ior (match_operand 0 "d_register_operand")
+       (and (match_code ("scratch"))
+            (match_operand 0 "scratch_operand"))))
+
 (define_predicate "even_register_operand"
   (and (match_code "reg")
        (and (match_test "REGNO (op) <= 31")
diff --git a/gcc/testsuite/gcc.target/avr/torture/pr109650-1.c b/gcc/testsuite/gcc.target/avr/torture/pr109650-1.c
new file mode 100644
index 00000000000..9030db00fde
--- /dev/null
+++ b/gcc/testsuite/gcc.target/avr/torture/pr109650-1.c
@@ -0,0 +1,63 @@
+/* { dg-do run } */
+/* { dg-options { -std=c99 } } */
+
+typedef _Bool bool;
+typedef __UINT8_TYPE__ uint8_t;
+
+static inline __attribute__((__always_inline__))
+bool func1a (bool p1, uint8_t p2)
+{
+  if (p1)
+    return p2 <= 8;
+  return p2 <= 2;
+}
+
+__attribute__((__noinline__, __noclone__))
+bool func1b (bool p1, uint8_t p2)
+{
+  return func1a (p1, p2);
+}
+
+static inline __attribute__((__always_inline__))
+bool func2a (bool p1, unsigned p2)
+{
+  if (p1)
+    return p2 <= 8;
+  return p2 <= 2;
+}
+
+__attribute__((__noinline__, __noclone__))
+bool func2b (bool p1, unsigned p2)
+{
+  return func2a (p1, p2);
+}
+
+void test1 (void)
+{
+  if (func1a (0, 1) != func1b (0, 1)) __builtin_abort();
+  if (func1a (0, 2) != func1b (0, 2)) __builtin_abort();
+  if (func1a (0, 3) != func1b (0, 3)) __builtin_abort();
+
+  if (func1a (1, 7) != func1b (1, 7)) __builtin_abort();
+  if (func1a (1, 8) != func1b (1, 8)) __builtin_abort();
+  if (func1a (1, 9) != func1b (1, 9)) __builtin_abort();
+}
+
+void test2 (void)
+{
+  if (func2a (0, 1) != func2b (0, 1)) __builtin_abort();
+  if (func2a (0, 2) != func2b (0, 2)) __builtin_abort();
+  if (func2a (0, 3) != func2b (0, 3)) __builtin_abort();
+
+  if (func2a (1, 7) != func2b (1, 7)) __builtin_abort();
+  if (func2a (1, 8) != func2b (1, 8)) __builtin_abort();
+  if (func2a (1, 9) != func2b (1, 9)) __builtin_abort();
+}
+
+int main (void)
+{
+  test1();
+  test2();
+
+  __builtin_exit (0);
+}
diff --git a/gcc/testsuite/gcc.target/avr/torture/pr109650-2.c b/gcc/testsuite/gcc.target/avr/torture/pr109650-2.c
new file mode 100644
index 00000000000..386dc322afa
--- /dev/null
+++ b/gcc/testsuite/gcc.target/avr/torture/pr109650-2.c
@@ -0,0 +1,79 @@
+/* { dg-do run } */
+
+typedef __UINT8_TYPE__ uint8_t;
+
+#define AI static __inline__ __attribute__((__always_inline__))
+#define NI __attribute__((__noinline__,__noclone__))
+
+AI uint8_t func1_eq (uint8_t c, unsigned x)
+{
+  if (x == c)
+    return 1;
+  return 0;
+}
+
+AI uint8_t func1_ne (uint8_t c, unsigned x)
+{
+  if (x != c)
+    return 1;
+  return 0;
+}
+
+AI uint8_t func1_ltu (uint8_t c, unsigned x)
+{
+  if (x < c)
+    return 1;
+  return 0;
+}
+
+AI uint8_t func1_leu (uint8_t c, unsigned x)
+{
+  if (x <= c)
+    return 1;
+  return 0;
+}
+
+AI uint8_t func1_gtu (uint8_t c, unsigned x)
+{
+  if (x > c)
+    return 1;
+  return 0;
+}
+
+AI uint8_t func1_geu (uint8_t c, unsigned x)
+{
+  if (x >= c)
+    return 1;
+  return 0;
+}
+
+NI uint8_t func2_eq (uint8_t c, unsigned x) { return func1_eq (c, x); }
+NI uint8_t func2_ne (uint8_t c, unsigned x) { return func1_ne (c, x); }
+NI uint8_t func2_ltu (uint8_t c, unsigned x) { return func1_ltu (c, x); }
+NI uint8_t func2_leu (uint8_t c, unsigned x) { return func1_leu (c, x); }
+NI uint8_t func2_gtu (uint8_t c, unsigned x) { return func1_gtu (c, x); }
+NI uint8_t func2_geu (uint8_t c, unsigned x) { return func1_geu (c, x); }
+
+AI void test4 (uint8_t c, unsigned x)
+{
+  if (func2_eq (c, x) != func1_eq (c, x)) __builtin_abort();
+  if (func2_ne (c, x) != func1_ne (c, x)) __builtin_abort();
+  if (func2_ltu (c, x) != func1_ltu (c, x)) __builtin_abort();
+  if (func2_leu (c, x) != func1_leu (c, x)) __builtin_abort();
+  if (func2_gtu (c, x) != func1_gtu (c, x)) __builtin_abort();
+  if (func2_geu (c, x) != func1_geu (c, x)) __builtin_abort();
+}
+
+int main (void)
+{
+  test4 (127, 127);
+  test4 (127, 128);
+  test4 (128, 127);
+
+  test4 (0x42, 0x142);
+  test4 (0x0, 0x100);
+  test4 (0x0, 0x0);
+  test4 (0x0, 0x1);
+
+  __builtin_exit (0);
+}

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Ping #1: [patch,avr] Fix PR109650 wrong code
  2023-05-19  8:49 ` Georg-Johann Lay
@ 2023-05-30 13:15   ` Georg-Johann Lay
  2023-06-07  8:39   ` Ping #2: " Georg-Johann Lay
  2023-06-10 16:58   ` Jeff Law
  2 siblings, 0 replies; 6+ messages in thread
From: Georg-Johann Lay @ 2023-05-30 13:15 UTC (permalink / raw)
  To: gcc-patches; +Cc: Jeff Law, Denis Chertykov

Ping #1 for:

https://gcc.gnu.org/pipermail/gcc-patches/2023-May/618976.html

https://gcc.gnu.org/pipermail/gcc-patches/attachments/20230519/9536bf8c/attachment-0001.bin

Johann

Am 19.05.23 um 10:49 schrieb Georg-Johann Lay:
> 
> Here is a revised version of the patch.  The difference to the
> previous one is that it adds some combine patterns for *cbranch
> insns that were lost in the PR92729 transition.  The post-reload
> part of the patterns were still there.  The new patterns are
> slightly more general in that they also handle fixed-point modes.
> 
> Apart from that, the patch behaves the same:
> 
> Am 15.05.23 um 20:05 schrieb Georg-Johann Lay:
>> This patch fixes a wrong-code bug in the wake of PR92729, the transition
>> that turned the AVR backend from cc0 to CCmode.  In cc0, the insn that
>> uses cc0 like a conditional branch always follows the cc0 setter, which
>> is no more the case with CCmode where set and use of REG_CC might be in
>> different basic blocks.
>>
>> This patch removes the machine-dependent reorg pass in avr_reorg 
>> entirely.
>>
>> It is replaced by a new, AVR specific mini-pass that runs prior to
>> split2. Canonicalization of comparisons away from the "difficult"
>> codes GT[U] and LE[U] is now mostly performed by implementing
>> TARGET_CANONICALIZE_COMPARISON.
>>
>> Moreover:
>>
>> * Text peephole conditions get "dead_or_set_regno_p (*, REG_CC)" as
>> needed.
>>
>> * RTL peephole conditions get "peep2_regno_dead_p (*, REG_CC)" as
>> needed.
>>
>> * Conditional branches no more clobber REG_CC.
>>
>> * insn output for compares looks ahead to determine the branch mode in
>> use. This needs also "dead_or_set_regno_p (*, REG_CC)".
>>
>> * Add RTL peepholes for decrement-and-branch detection.
>>
>> Finally, it fixes some of the many indentation glitches left over from
>> PR92729.
>>
>> Ok?
>>
>> I'd also backport this one because all of v12+ is affected by the 
>> wrong code.
> 
> Johann
> 
> -- 
> 
> gcc/
>      PR target/109650
>      PR target/92729
> 
>      * config/avr/avr-passes.def (avr_pass_ifelse): Insert new pass.
>      * config/avr/avr.cc (avr_pass_ifelse): New RTL pass.
>      (avr_pass_data_ifelse): New pass_data for it.
>      (make_avr_pass_ifelse, avr_redundant_compare, avr_cbranch_cost)
>      (avr_canonicalize_comparison, avr_out_plus_set_ZN)
>      (avr_out_cmp_ext): New functions.
>      (compare_condtition): Make sure REG_CC dies in the branch insn.
>      (avr_rtx_costs_1): Add computation of cbranch costs.
>      (avr_adjust_insn_length) [ADJUST_LEN_ADD_SET_ZN, ADJUST_LEN_CMP_ZEXT]:
>      [ADJUST_LEN_CMP_SEXT]Handle them.
>      (TARGET_CANONICALIZE_COMPARISON): New define.
>      (avr_simplify_comparison_p, compare_diff_p, avr_compare_pattern)
>      (avr_reorg_remove_redundant_compare, avr_reorg): Remove functions.
>      (TARGET_MACHINE_DEPENDENT_REORG): Remove define.
> 
>      * avr-protos.h (avr_simplify_comparison_p): Remove proto.
>      (make_avr_pass_ifelse, avr_out_plus_set_ZN, cc_reg_rtx)
>      (avr_out_cmp_zext): New Protos
> 
>      * config/avr/avr.md (branch, difficult_branch): Don't split insns.
>      (*cbranchhi.zero-extend.0", *cbranchhi.zero-extend.1")
>      (*swapped_tst<mode>, *add.for.eqne.<mode>): New insns.
>      (*cbranch<mode>4): Rename to cbranch<mode>4_insn.
>      (define_peephole): Add dead_or_set_regno_p(insn,REG_CC) as needed.
>      (define_deephole2): Add peep2_regno_dead_p(*,REG_CC) as needed.
>      Add new RTL peepholes for decrement-and-branch and *swapped_tst<mode>.
>      Rework signtest-and-branch peepholes for *sbrx_branch<mode>.
>      (adjust_len) [add_set_ZN, cmp_zext]: New.
>      (QIPSI): New mode iterator.
>      (ALLs1, ALLs2, ALLs4, ALLs234): New mode iterators.
>      (gelt): New code iterator.
>      (gelt_eqne): New code attribute.
>      (rvbranch, *rvbranch, difficult_rvbranch, *difficult_rvbranch)
>      (branch_unspec, *negated_tst<mode>, *reversed_tst<mode>)
>      (*cmpqi_sign_extend): Remove insns.
>      (define_c_enum "unspec") [UNSPEC_IDENTITY]: Remove.
> 
>      * config/avr/avr-dimode.md (cbranch<mode>4): Canonicalize comparisons.
>      * config/avr/predicates.md (scratch_or_d_register_operand): New.
>      * config/avr/contraints.md (Yxx): New constraint.
> 
> gcc/testsuite/
>      PR target/109650
>      * config/avr/torture/pr109650-1.c: New test.
>      * config/avr/torture/pr109650-2.c: New test.

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Ping #2: [patch,avr] Fix PR109650 wrong code
  2023-05-19  8:49 ` Georg-Johann Lay
  2023-05-30 13:15   ` Ping #1: " Georg-Johann Lay
@ 2023-06-07  8:39   ` Georg-Johann Lay
  2023-06-10 16:58   ` Jeff Law
  2 siblings, 0 replies; 6+ messages in thread
From: Georg-Johann Lay @ 2023-06-07  8:39 UTC (permalink / raw)
  To: gcc-patches; +Cc: Jeff Law, Denis Chertykov

Ping #2 for:

https://gcc.gnu.org/pipermail/gcc-patches/2023-May/618976.html

https://gcc.gnu.org/pipermail/gcc-patches/attachments/20230519/9536bf8c/attachment-0001.bin

Ping #1:
https://gcc.gnu.org/pipermail/gcc-patches/2023-May/620098.html

Johann

Am 19.05.23 um 10:49 schrieb Georg-Johann Lay:
> 
> Here is a revised version of the patch.  The difference to the
> previous one is that it adds some combine patterns for *cbranch
> insns that were lost in the PR92729 transition.  The post-reload
> part of the patterns were still there.  The new patterns are
> slightly more general in that they also handle fixed-point modes.
> 
> Apart from that, the patch behaves the same:
> 
> Am 15.05.23 um 20:05 schrieb Georg-Johann Lay:
>> This patch fixes a wrong-code bug in the wake of PR92729, the transition
>> that turned the AVR backend from cc0 to CCmode.  In cc0, the insn that
>> uses cc0 like a conditional branch always follows the cc0 setter, which
>> is no more the case with CCmode where set and use of REG_CC might be in
>> different basic blocks.
>>
>> This patch removes the machine-dependent reorg pass in avr_reorg 
>> entirely.
>>
>> It is replaced by a new, AVR specific mini-pass that runs prior to
>> split2. Canonicalization of comparisons away from the "difficult"
>> codes GT[U] and LE[U] is now mostly performed by implementing
>> TARGET_CANONICALIZE_COMPARISON.
>>
>> Moreover:
>>
>> * Text peephole conditions get "dead_or_set_regno_p (*, REG_CC)" as
>> needed.
>>
>> * RTL peephole conditions get "peep2_regno_dead_p (*, REG_CC)" as
>> needed.
>>
>> * Conditional branches no more clobber REG_CC.
>>
>> * insn output for compares looks ahead to determine the branch mode in
>> use. This needs also "dead_or_set_regno_p (*, REG_CC)".
>>
>> * Add RTL peepholes for decrement-and-branch detection.
>>
>> Finally, it fixes some of the many indentation glitches left over from
>> PR92729.
>>
>> Ok?
>>
>> I'd also backport this one because all of v12+ is affected by the 
>> wrong code.
> 
> Johann
> 
> -- 
> 
> gcc/
>      PR target/109650
>      PR target/92729
> 
>      * config/avr/avr-passes.def (avr_pass_ifelse): Insert new pass.
>      * config/avr/avr.cc (avr_pass_ifelse): New RTL pass.
>      (avr_pass_data_ifelse): New pass_data for it.
>      (make_avr_pass_ifelse, avr_redundant_compare, avr_cbranch_cost)
>      (avr_canonicalize_comparison, avr_out_plus_set_ZN)
>      (avr_out_cmp_ext): New functions.
>      (compare_condtition): Make sure REG_CC dies in the branch insn.
>      (avr_rtx_costs_1): Add computation of cbranch costs.
>      (avr_adjust_insn_length) [ADJUST_LEN_ADD_SET_ZN, ADJUST_LEN_CMP_ZEXT]:
>      [ADJUST_LEN_CMP_SEXT]Handle them.
>      (TARGET_CANONICALIZE_COMPARISON): New define.
>      (avr_simplify_comparison_p, compare_diff_p, avr_compare_pattern)
>      (avr_reorg_remove_redundant_compare, avr_reorg): Remove functions.
>      (TARGET_MACHINE_DEPENDENT_REORG): Remove define.
> 
>      * avr-protos.h (avr_simplify_comparison_p): Remove proto.
>      (make_avr_pass_ifelse, avr_out_plus_set_ZN, cc_reg_rtx)
>      (avr_out_cmp_zext): New Protos
> 
>      * config/avr/avr.md (branch, difficult_branch): Don't split insns.
>      (*cbranchhi.zero-extend.0", *cbranchhi.zero-extend.1")
>      (*swapped_tst<mode>, *add.for.eqne.<mode>): New insns.
>      (*cbranch<mode>4): Rename to cbranch<mode>4_insn.
>      (define_peephole): Add dead_or_set_regno_p(insn,REG_CC) as needed.
>      (define_deephole2): Add peep2_regno_dead_p(*,REG_CC) as needed.
>      Add new RTL peepholes for decrement-and-branch and *swapped_tst<mode>.
>      Rework signtest-and-branch peepholes for *sbrx_branch<mode>.
>      (adjust_len) [add_set_ZN, cmp_zext]: New.
>      (QIPSI): New mode iterator.
>      (ALLs1, ALLs2, ALLs4, ALLs234): New mode iterators.
>      (gelt): New code iterator.
>      (gelt_eqne): New code attribute.
>      (rvbranch, *rvbranch, difficult_rvbranch, *difficult_rvbranch)
>      (branch_unspec, *negated_tst<mode>, *reversed_tst<mode>)
>      (*cmpqi_sign_extend): Remove insns.
>      (define_c_enum "unspec") [UNSPEC_IDENTITY]: Remove.
> 
>      * config/avr/avr-dimode.md (cbranch<mode>4): Canonicalize comparisons.
>      * config/avr/predicates.md (scratch_or_d_register_operand): New.
>      * config/avr/contraints.md (Yxx): New constraint.
> 
> gcc/testsuite/
>      PR target/109650
>      * config/avr/torture/pr109650-1.c: New test.
>      * config/avr/torture/pr109650-2.c: New test.

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [patch,avr] Fix PR109650 wrong code
  2023-05-19  8:49 ` Georg-Johann Lay
  2023-05-30 13:15   ` Ping #1: " Georg-Johann Lay
  2023-06-07  8:39   ` Ping #2: " Georg-Johann Lay
@ 2023-06-10 16:58   ` Jeff Law
  2 siblings, 0 replies; 6+ messages in thread
From: Jeff Law @ 2023-06-10 16:58 UTC (permalink / raw)
  To: Georg-Johann Lay, gcc-patches; +Cc: Denis Chertykov, Senthil Kumar Selvaraj



On 5/19/23 02:49, Georg-Johann Lay wrote:
> Subject:
> Re: [patch,avr] Fix PR109650 wrong code
> From:
> Georg-Johann Lay <avr@gjlay.de>
> Date:
> 5/19/23, 02:49
> 
> To:
> gcc-patches@gcc.gnu.org
> CC:
> Jeff Law <jeffreyalaw@gmail.com>, Denis Chertykov <chertykov@gmail.com>, 
> Senthil Kumar Selvaraj <SenthilKumar.Selvaraj@microchip.com>
> 
> 
> ...Ok, and now with the patch attached...
> 
> Here is a revised version of the patch.  The difference to the
> previous one is that it adds some combine patterns for *cbranch
> insns that were lost in the PR92729 transition.  The post-reload
> part of the patterns were still there.  The new patterns are
> slightly more general in that they also handle fixed-point modes.
> 
> Apart from that, the patch behaves the same:
> 
> Am 15.05.23 um 20:05 schrieb Georg-Johann Lay:
>> This patch fixes a wrong-code bug in the wake of PR92729, the transition
>> that turned the AVR backend from cc0 to CCmode.  In cc0, the insn that
>> uses cc0 like a conditional branch always follows the cc0 setter, which
>> is no more the case with CCmode where set and use of REG_CC might be in
>> different basic blocks.
>>
>> This patch removes the machine-dependent reorg pass in avr_reorg 
>> entirely.
>>
>> It is replaced by a new, AVR specific mini-pass that runs prior to
>> split2. Canonicalization of comparisons away from the "difficult"
>> codes GT[U] and LE[U] is now mostly performed by implementing
>> TARGET_CANONICALIZE_COMPARISON.
>>
>> Moreover:
>>
>> * Text peephole conditions get "dead_or_set_regno_p (*, REG_CC)" as
>> needed.
>>
>> * RTL peephole conditions get "peep2_regno_dead_p (*, REG_CC)" as
>> needed.
>>
>> * Conditional branches no more clobber REG_CC.
>>
>> * insn output for compares looks ahead to determine the branch mode in
>> use. This needs also "dead_or_set_regno_p (*, REG_CC)".
>>
>> * Add RTL peepholes for decrement-and-branch detection.
>>
>> Finally, it fixes some of the many indentation glitches left over from
>> PR92729.
>>
>> Ok?
>>
>> I'd also backport this one because all of v12+ is affected by the 
>> wrong code.
> 
> Johann
> 
> -- 
> 
> gcc/
>      PR target/109650
>      PR target/92729
> 
>      * config/avr/avr-passes.def (avr_pass_ifelse): Insert new pass.
>      * config/avr/avr.cc (avr_pass_ifelse): New RTL pass.
>      (avr_pass_data_ifelse): New pass_data for it.
>      (make_avr_pass_ifelse, avr_redundant_compare, avr_cbranch_cost)
>      (avr_canonicalize_comparison, avr_out_plus_set_ZN)
>      (avr_out_cmp_ext): New functions.
>      (compare_condtition): Make sure REG_CC dies in the branch insn.
>      (avr_rtx_costs_1): Add computation of cbranch costs.
>      (avr_adjust_insn_length) [ADJUST_LEN_ADD_SET_ZN, ADJUST_LEN_CMP_ZEXT]:
>      [ADJUST_LEN_CMP_SEXT]Handle them.
>      (TARGET_CANONICALIZE_COMPARISON): New define.
>      (avr_simplify_comparison_p, compare_diff_p, avr_compare_pattern)
>      (avr_reorg_remove_redundant_compare, avr_reorg): Remove functions.
>      (TARGET_MACHINE_DEPENDENT_REORG): Remove define.
> 
>      * avr-protos.h (avr_simplify_comparison_p): Remove proto.
>      (make_avr_pass_ifelse, avr_out_plus_set_ZN, cc_reg_rtx)
>      (avr_out_cmp_zext): New Protos
> 
>      * config/avr/avr.md (branch, difficult_branch): Don't split insns.
>      (*cbranchhi.zero-extend.0", *cbranchhi.zero-extend.1")
>      (*swapped_tst<mode>, *add.for.eqne.<mode>): New insns.
>      (*cbranch<mode>4): Rename to cbranch<mode>4_insn.
>      (define_peephole): Add dead_or_set_regno_p(insn,REG_CC) as needed.
>      (define_deephole2): Add peep2_regno_dead_p(*,REG_CC) as needed.
>      Add new RTL peepholes for decrement-and-branch and *swapped_tst<mode>.
>      Rework signtest-and-branch peepholes for *sbrx_branch<mode>.
>      (adjust_len) [add_set_ZN, cmp_zext]: New.
>      (QIPSI): New mode iterator.
>      (ALLs1, ALLs2, ALLs4, ALLs234): New mode iterators.
>      (gelt): New code iterator.
>      (gelt_eqne): New code attribute.
>      (rvbranch, *rvbranch, difficult_rvbranch, *difficult_rvbranch)
>      (branch_unspec, *negated_tst<mode>, *reversed_tst<mode>)
>      (*cmpqi_sign_extend): Remove insns.
>      (define_c_enum "unspec") [UNSPEC_IDENTITY]: Remove.
> 
>      * config/avr/avr-dimode.md (cbranch<mode>4): Canonicalize comparisons.
>      * config/avr/predicates.md (scratch_or_d_register_operand): New.
>      * config/avr/contraints.md (Yxx): New constraint.
> 
> gcc/testsuite/
>      PR target/109650
>      * config/avr/torture/pr109650-1.c: New test.
>      * config/avr/torture/pr109650-2.c: New test.
So I did a cursory review and didn't see anything obviously wrong. 
Given we haven't heard from Denis in a while, I'll go ahead and ACK.

More importantly, how do we want to handle things going forward?  I don't 
think my involvement in avr-specific changes would bring any significant 
value, and I don't expect to hear from Denis.

I could propose you as the maintainer of the avr port to the steering 
committee, but I wouldn't do that without knowing it's a task you want to 
take on.


Jeff


^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2023-06-10 16:58 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-05-15 18:05 [patch,avr] Fix PR109650 wrong code Georg-Johann Lay
2023-05-19  8:17 ` Georg-Johann Lay
2023-05-19  8:49 ` Georg-Johann Lay
2023-05-30 13:15   ` Ping #1: " Georg-Johann Lay
2023-06-07  8:39   ` Ping #2: " Georg-Johann Lay
2023-06-10 16:58   ` Jeff Law
