public inbox for gcc-patches@gcc.gnu.org
 help / color / mirror / Atom feed
* [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
@ 2016-11-11 17:26 Tamar Christina
  2016-11-11 22:06 ` Joseph Myers
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2016-11-11 17:26 UTC (permalink / raw)
  To: GCC Patches, Wilco Dijkstra, rguenther, law, Michael Meissner; +Cc: nd

[-- Attachment #1: Type: text/plain, Size: 5715 bytes --]

Hi All,

This is v3 of the patch which adds an optimized route to the fpclassify builtin
for floating point numbers which are similar to IEEE-754 in format.

The patch has been rewritten to do it in GIMPLE instead of a fold. As part of
the implementation optimized versions of is_normal, is_subnormal, is_nan,
is_infinite and is_zero have been created. This patch also introduces two new
intrinsics __builtin_iszero and __builtin_issubnormal.

NOTE: the old code for ISNORMAL, ISSUBNORMAL, ISNAN and ISINFINITE had a
      special case for ibm_extended_format which dropped the second part
      of the number (which was being represented as two numbers internally).
      fpclassify did not have such a case. As such I have dropped it as I am
      under the impression the format is deprecated? So the optimization isn't
      as important? If this is wrong it would be easy to add that back in.

Should ISFINITE be change as well? Also should it be SUBNORMAL or DENORMAL?
And what should I do about Documentation? I'm not sure how to document a new
BUILTIN.

The goal is to make it faster by:
1. Trying to determine the most common case first
   (e.g. the float is a Normal number) and then the
   rest. The amount of code generated at -O2 are
   about the same +/- 1 instruction, but the code
   is much better.
2. Using integer operation in the optimized path.

At a high level, the optimized path uses integer operations
to perform the following checks in the given order:

  - normal
  - zero
  - nan
  - infinite
  - subnormal

The operations are ordered in the order of most occurrence of the values.

In case the optimization can't be applied a fall-back method is used which
is similar to the existing implementation using FP instructions. However
the operations now also follow the same order as described above. Which means
there should be some slight benefits there as well.

A limitation with this new approach is that the exponent
of the floating point has to fit in 32 bits and the floating
point has to have an IEEE like format and values for NaN and INF
(e.g. for NaN and INF all bits of the exp must be set).

To determine this IEEE likeness a new boolean was added to real_format.

As an example, AArch64 now generates for classification of doubles:

f:
	fmov	x1, d0
	mov	w0, 7
	ubfx	x2, x1, 52, 11
	add	w2, w2, 1
	tst	w2, 2046
	bne	.L1
	lsl	x1, x1, 1
	mov	w0, 13
	cbz	x1, .L1
	mov	x2, -9007199254740992
	cmp	x1, x2
	mov	w0, 5
	mov	w3, 11
	csel	w0, w0, w3, eq
	mov	w1, 3
	csel	w0, w0, w1, ls
.L1:
	ret

and for the floating point version:

f:
	adrp	x2, .LC0
	fabs	d1, d0
	adrp	x1, .LC1
	mov	w0, 7
	ldr	d3, [x2, #:lo12:.LC0]
	ldr	d2, [x1, #:lo12:.LC1]
	fcmpe	d1, d3
	fccmpe	d1, d2, 2, ge
	bls	.L1
	fcmp	d0, #0.0
	mov	w0, 13
	beq	.L1
	fcmp	d1, d1
	bvs	.L5
	fcmpe	d1, d2
	mov	w0, 5
	mov	w1, 11
	csel	w0, w0, w1, gt
.L1:
	ret
.L5:
	mov	w0, 3
	ret

One new test to test  that the integer version does not generate FP code,
correctness is tested using the existing test code for FP classifiy.

Glibc benchmarks ran against the built-in and this shows the following
performance gain on Aarch64 using the integer code:

* zero: 0%
* inf/nan: 29%
* normal: 69.1%

On x86_64:

* zero: 0%
* inf/nan: 89.9%
* normal: 4.7%

Regression tests ran on aarch64-none-linux and arm-none-linux-gnueabi
and no regression. x86_64 bootstrapped successfully as well.

Ok for trunk?

Thanks,
Tamar

gcc/
2016-11-11  Tamar Christina  <tamar.christina@arm.com>

	* gcc/builtins.c (fold_builtin_fpclassify): Removed.
	(expand_builtin): Added builtins to lowering list.
	(fold_builtin_n): Removed fold_builtin_varargs.
	(fold_builtin_varargs): Removed.
	* gcc/builtins.def (BUILT_IN_ISZERO, BUILT_IN_ISSUBNORMAL): Added.
	(fold_builtin_interclass_mathfn): Use get_min_float instead.
	* gcc/real.h (get_min_float): Added.
	* gcc/real.c (get_min_float): Added.
	* gcc/gimple-low.c (lower_stm): Define BUILT_IN_FPCLASSIFY,
	CASE_FLT_FN (BUILT_IN_ISINF), BUILT_IN_ISINFD32, BUILT_IN_ISINFD64,
	BUILT_IN_ISINFD128, BUILT_IN_ISNAND32, BUILT_IN_ISNAND64,
	BUILT_IN_ISNAND128, BUILT_IN_ISNAN, BUILT_IN_ISNORMAL, BUILT_IN_ISZERO,
	BUILT_IN_ISSUBNORMAL.
	(lower_builtin_fpclassify, is_nan, is_normal, is_infinity): Added.
	(is_zero, is_subnormal, use_ieee_int_mode): Likewise.
	(lower_builtin_isnan, lower_builtin_isinfinite): Likewise.
	(lower_builtin_isnormal, lower_builtin_iszero): Likewise.
	(lower_builtin_issubnormal): Likewise.
	(emit_tree_cond, get_num_as_int, emit_tree_and_return_var): Added.
	* gcc/real.h (real_format): Added is_ieee_compatible field.
	* gcc/real.c (ieee_single_format): Set is_ieee_compatible flag.
	(mips_single_format): Likewise.
	(motorola_single_format): Likewise.
	(spu_single_format): Likewise.
	(ieee_double_format): Likewise.
	(mips_double_format): Likewise.
	(motorola_double_format): Likewise.
	(ieee_extended_motorola_format): Likewise.
	(ieee_extended_intel_128_format): Likewise.
	(ieee_extended_intel_96_round_53_format): Likewise.
	(ibm_extended_format): Likewise.
	(mips_extended_format): Likewise.
	(ieee_quad_format): Likewise.
	(mips_quad_format): Likewise.
	(vax_f_format): Likewise.
	(vax_d_format): Likewise.
	(vax_g_format): Likewise.
	(decimal_single_format): Likewise.
	(decimal_quad_format): Likewise.
	(iee_half_format): Likewise.
	(mips_single_format): Likewise.
	(arm_half_format): Likewise.
	(real_internal_format): Likewise.

gcc/testsuite/
2016-11-11  Tamar Christina  <tamar.christina@arm.com>

	* gcc.target/aarch64/builtin-fpclassify.c: New codegen test.
	* gcc.dg/fold-notunord.c: Removed.

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: fp-classify-new.patch --]
[-- Type: text/x-patch; name="fp-classify-new.patch", Size: 48170 bytes --]

diff --git a/gcc/builtins.c b/gcc/builtins.c
index 3ac2d44148440b124559ba7cd3de483b7a74b72d..fb09d342c836d68ef40a90fca803dbd496407ecb 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -160,7 +160,6 @@ static tree fold_builtin_0 (location_t, tree);
 static tree fold_builtin_1 (location_t, tree, tree);
 static tree fold_builtin_2 (location_t, tree, tree, tree);
 static tree fold_builtin_3 (location_t, tree, tree, tree, tree);
-static tree fold_builtin_varargs (location_t, tree, tree*, int);
 
 static tree fold_builtin_strpbrk (location_t, tree, tree, tree);
 static tree fold_builtin_strstr (location_t, tree, tree, tree);
@@ -5998,10 +5997,8 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
       if (! flag_unsafe_math_optimizations)
 	break;
       gcc_fallthrough ();
-    CASE_FLT_FN (BUILT_IN_ISINF):
     CASE_FLT_FN (BUILT_IN_FINITE):
     case BUILT_IN_ISFINITE:
-    case BUILT_IN_ISNORMAL:
       target = expand_builtin_interclass_mathfn (exp, target);
       if (target)
 	return target;
@@ -6281,8 +6278,20 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
 	}
       break;
 
+    CASE_FLT_FN (BUILT_IN_ISINF):
+    case BUILT_IN_ISNAND32:
+    case BUILT_IN_ISNAND64:
+    case BUILT_IN_ISNAND128:
+    case BUILT_IN_ISNAN:
+    case BUILT_IN_ISINFD32:
+    case BUILT_IN_ISINFD64:
+    case BUILT_IN_ISINFD128:
+    case BUILT_IN_ISNORMAL:
+    case BUILT_IN_ISZERO:
+    case BUILT_IN_ISSUBNORMAL:
+    case BUILT_IN_FPCLASSIFY:
     case BUILT_IN_SETJMP:
-      /* This should have been lowered to the builtins below.  */
+      /* These should have been lowered to the builtins below.  */
       gcc_unreachable ();
 
     case BUILT_IN_SETJMP_SETUP:
@@ -7646,30 +7655,6 @@ fold_builtin_interclass_mathfn (location_t loc, tree fndecl, tree arg)
   switch (DECL_FUNCTION_CODE (fndecl))
     {
       tree result;
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      {
-	/* isinf(x) -> isgreater(fabs(x),DBL_MAX).  */
-	tree const isgr_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isgr_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	return result;
-      }
     CASE_FLT_FN (BUILT_IN_FINITE):
     case BUILT_IN_ISFINITE:
       {
@@ -7701,79 +7686,6 @@ fold_builtin_interclass_mathfn (location_t loc, tree fndecl, tree arg)
 				  result);*/
 	return result;
       }
-    case BUILT_IN_ISNORMAL:
-      {
-	/* isnormal(x) -> isgreaterequal(fabs(x),DBL_MIN) &
-	   islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	tree orig_arg, max_exp, min_exp;
-	machine_mode orig_mode = mode;
-	REAL_VALUE_TYPE rmax, rmin;
-	char buf[128];
-
-	orig_arg = arg = builtin_save_expr (arg);
-	if (is_ibm_extended)
-	  {
-	    /* Use double to test the normal range of IBM extended
-	       precision.  Emin for IBM extended precision is
-	       different to emin for IEEE double, being 53 higher
-	       since the low double exponent is at least 53 lower
-	       than the high double exponent.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	arg = fold_build1_loc (loc, ABS_EXPR, type, arg);
-
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&rmax, buf);
-	sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (orig_mode)->emin - 1);
-	real_from_string (&rmin, buf);
-	max_exp = build_real (type, rmax);
-	min_exp = build_real (type, rmin);
-
-	max_exp = build_call_expr (isle_fn, 2, arg, max_exp);
-	if (is_ibm_extended)
-	  {
-	    /* Testing the high end of the range is done just using
-	       the high double, using the same test as isfinite().
-	       For the subnormal end of the range we first test the
-	       high double, then if its magnitude is equal to the
-	       limit of 0x1p-969, we test whether the low double is
-	       non-zero and opposite sign to the high double.  */
-	    tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
-	    tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	    tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
-	    tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
-				       arg, min_exp);
-	    tree as_complex = build1 (VIEW_CONVERT_EXPR,
-				      complex_double_type_node, orig_arg);
-	    tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
-	    tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
-	    tree zero = build_real (type, dconst0);
-	    tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
-	    tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
-	    tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
-	    tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
-				      fold_build3 (COND_EXPR,
-						   integer_type_node,
-						   hilt, logt, lolt));
-	    eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
-				  eq_min, ok_lo);
-	    min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
-				   gt_min, eq_min);
-	  }
-	else
-	  {
-	    tree const isge_fn
-	      = builtin_decl_explicit (BUILT_IN_ISGREATEREQUAL);
-	    min_exp = build_call_expr (isge_fn, 2, arg, min_exp);
-	  }
-	result = fold_build2 (BIT_AND_EXPR, integer_type_node,
-			      max_exp, min_exp);
-	return result;
-      }
     default:
       break;
     }
@@ -7794,12 +7706,6 @@ fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
 
   switch (builtin_index)
     {
-    case BUILT_IN_ISINF:
-      if (!HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      return NULL_TREE;
-
     case BUILT_IN_ISINF_SIGN:
       {
 	/* isinf_sign(x) -> isinf(x) ? (signbit(x) ? -1 : 1) : 0 */
@@ -7838,100 +7744,11 @@ fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
 	return omit_one_operand_loc (loc, type, integer_one_node, arg);
 
       return NULL_TREE;
-
-    case BUILT_IN_ISNAN:
-      if (!HONOR_NANS (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      {
-	bool is_ibm_extended = MODE_COMPOSITE_P (TYPE_MODE (TREE_TYPE (arg)));
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    arg = fold_build1_loc (loc, NOP_EXPR, double_type_node, arg);
-	  }
-      }
-      arg = builtin_save_expr (arg);
-      return fold_build2_loc (loc, UNORDERED_EXPR, type, arg, arg);
-
     default:
       gcc_unreachable ();
     }
 }
 
-/* Fold a call to __builtin_fpclassify(int, int, int, int, int, ...).
-   This builtin will generate code to return the appropriate floating
-   point classification depending on the value of the floating point
-   number passed in.  The possible return values must be supplied as
-   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
-   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
-   one floating point argument which is "type generic".  */
-
-static tree
-fold_builtin_fpclassify (location_t loc, tree *args, int nargs)
-{
-  tree fp_nan, fp_infinite, fp_normal, fp_subnormal, fp_zero,
-    arg, type, res, tmp;
-  machine_mode mode;
-  REAL_VALUE_TYPE r;
-  char buf[128];
-
-  /* Verify the required arguments in the original call.  */
-  if (nargs != 6
-      || !validate_arg (args[0], INTEGER_TYPE)
-      || !validate_arg (args[1], INTEGER_TYPE)
-      || !validate_arg (args[2], INTEGER_TYPE)
-      || !validate_arg (args[3], INTEGER_TYPE)
-      || !validate_arg (args[4], INTEGER_TYPE)
-      || !validate_arg (args[5], REAL_TYPE))
-    return NULL_TREE;
-
-  fp_nan = args[0];
-  fp_infinite = args[1];
-  fp_normal = args[2];
-  fp_subnormal = args[3];
-  fp_zero = args[4];
-  arg = args[5];
-  type = TREE_TYPE (arg);
-  mode = TYPE_MODE (type);
-  arg = builtin_save_expr (fold_build1_loc (loc, ABS_EXPR, type, arg));
-
-  /* fpclassify(x) ->
-       isnan(x) ? FP_NAN :
-         (fabs(x) == Inf ? FP_INFINITE :
-	   (fabs(x) >= DBL_MIN ? FP_NORMAL :
-	     (x == 0 ? FP_ZERO : FP_SUBNORMAL))).  */
-
-  tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-		     build_real (type, dconst0));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node,
-		     tmp, fp_zero, fp_subnormal);
-
-  sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (mode)->emin - 1);
-  real_from_string (&r, buf);
-  tmp = fold_build2_loc (loc, GE_EXPR, integer_type_node,
-		     arg, build_real (type, r));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, fp_normal, res);
-
-  if (HONOR_INFINITIES (mode))
-    {
-      real_inf (&r);
-      tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-			 build_real (type, r));
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp,
-			 fp_infinite, res);
-    }
-
-  if (HONOR_NANS (mode))
-    {
-      tmp = fold_build2_loc (loc, ORDERED_EXPR, integer_type_node, arg, arg);
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, res, fp_nan);
-    }
-
-  return res;
-}
-
 /* Fold a call to an unordered comparison function such as
    __builtin_isgreater().  FNDECL is the FUNCTION_DECL for the function
    being called and ARG0 and ARG1 are the arguments for the call.
@@ -8243,30 +8060,9 @@ fold_builtin_1 (location_t loc, tree fndecl, tree arg0)
 	  return ret;
 	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
       }
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    case BUILT_IN_ISNORMAL:
-      return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-
     case BUILT_IN_ISINF_SIGN:
       return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF_SIGN);
 
-    CASE_FLT_FN (BUILT_IN_ISNAN):
-    case BUILT_IN_ISNAND32:
-    case BUILT_IN_ISNAND64:
-    case BUILT_IN_ISNAND128:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISNAN);
-
     case BUILT_IN_FREE:
       if (integer_zerop (arg0))
 	return build_empty_stmt (loc);
@@ -8465,7 +8261,11 @@ fold_builtin_n (location_t loc, tree fndecl, tree *args, int nargs, bool)
       ret = fold_builtin_3 (loc, fndecl, args[0], args[1], args[2]);
       break;
     default:
-      ret = fold_builtin_varargs (loc, fndecl, args, nargs);
+      /* There used to be a call to fold_builtin_varargs here, but with the
+         lowering of fpclassify which was it's only member the function became
+	 redundant. As such it has been removed. The function's default case
+	 was the same as what is below the switch here, so the function can
+	 safely be removed.  */
       break;
     }
   if (ret)
@@ -9422,37 +9222,6 @@ fold_builtin_object_size (tree ptr, tree ost)
   return NULL_TREE;
 }
 
-/* Builtins with folding operations that operate on "..." arguments
-   need special handling; we need to store the arguments in a convenient
-   data structure before attempting any folding.  Fortunately there are
-   only a few builtins that fall into this category.  FNDECL is the
-   function, EXP is the CALL_EXPR for the call.  */
-
-static tree
-fold_builtin_varargs (location_t loc, tree fndecl, tree *args, int nargs)
-{
-  enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl);
-  tree ret = NULL_TREE;
-
-  switch (fcode)
-    {
-    case BUILT_IN_FPCLASSIFY:
-      ret = fold_builtin_fpclassify (loc, args, nargs);
-      break;
-
-    default:
-      break;
-    }
-  if (ret)
-    {
-      ret = build1 (NOP_EXPR, TREE_TYPE (ret), ret);
-      SET_EXPR_LOCATION (ret, loc);
-      TREE_NO_WARNING (ret) = 1;
-      return ret;
-    }
-  return NULL_TREE;
-}
-
 /* Initialize format string characters in the target charset.  */
 
 bool
diff --git a/gcc/builtins.def b/gcc/builtins.def
index 219feebd3aebefbd079bf37cc801453cd1965e00..e3d12eccfed528fd6df0570b65f8aef42494d675 100644
--- a/gcc/builtins.def
+++ b/gcc/builtins.def
@@ -831,6 +831,8 @@ DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFL, "isinfl", BT_FN_INT_LONGDOUBLE, ATTR_CO
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD32, "isinfd32", BT_FN_INT_DFLOAT32, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD64, "isinfd64", BT_FN_INT_DFLOAT64, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD128, "isinfd128", BT_FN_INT_DFLOAT128, ATTR_CONST_NOTHROW_LEAF_LIST)
+DEF_C99_C90RES_BUILTIN (BUILT_IN_ISZERO, "iszero", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
+DEF_C99_C90RES_BUILTIN (BUILT_IN_ISSUBNORMAL, "issubnormal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_C99_C90RES_BUILTIN (BUILT_IN_ISNAN, "isnan", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANF, "isnanf", BT_FN_INT_FLOAT, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANL, "isnanl", BT_FN_INT_LONGDOUBLE, ATTR_CONST_NOTHROW_LEAF_LIST)
diff --git a/gcc/gimple-low.c b/gcc/gimple-low.c
index 64752b67b86b3d01df5f5661e4666df98b7b91d1..ac9dd6cb319eba25402a68fd4ffecfdc8f0d2118 100644
--- a/gcc/gimple-low.c
+++ b/gcc/gimple-low.c
@@ -30,6 +30,8 @@ along with GCC; see the file COPYING3.  If not see
 #include "calls.h"
 #include "gimple-iterator.h"
 #include "gimple-low.h"
+#include "stor-layout.h"
+#include "target.h"
 
 /* The differences between High GIMPLE and Low GIMPLE are the
    following:
@@ -72,6 +74,12 @@ static void lower_gimple_bind (gimple_stmt_iterator *, struct lower_data *);
 static void lower_try_catch (gimple_stmt_iterator *, struct lower_data *);
 static void lower_gimple_return (gimple_stmt_iterator *, struct lower_data *);
 static void lower_builtin_setjmp (gimple_stmt_iterator *);
+static void lower_builtin_fpclassify (gimple_stmt_iterator *);
+static void lower_builtin_isnan (gimple_stmt_iterator *);
+static void lower_builtin_isinfinite (gimple_stmt_iterator *);
+static void lower_builtin_isnormal (gimple_stmt_iterator *);
+static void lower_builtin_iszero (gimple_stmt_iterator *);
+static void lower_builtin_issubnormal (gimple_stmt_iterator *);
 static void lower_builtin_posix_memalign (gimple_stmt_iterator *);
 
 
@@ -330,19 +338,61 @@ lower_stmt (gimple_stmt_iterator *gsi, struct lower_data *data)
 	if (decl
 	    && DECL_BUILT_IN_CLASS (decl) == BUILT_IN_NORMAL)
 	  {
-	    if (DECL_FUNCTION_CODE (decl) == BUILT_IN_SETJMP)
-	      {
-		lower_builtin_setjmp (gsi);
-		data->cannot_fallthru = false;
-		return;
-	      }
-	    else if (DECL_FUNCTION_CODE (decl) == BUILT_IN_POSIX_MEMALIGN
-		     && flag_tree_bit_ccp
-		     && gimple_builtin_call_types_compatible_p (stmt, decl))
-	      {
-		lower_builtin_posix_memalign (gsi);
-		return;
-	      }
+	    switch (DECL_FUNCTION_CODE (decl))
+	    {
+		    case BUILT_IN_SETJMP:
+			lower_builtin_setjmp (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    case BUILT_IN_POSIX_MEMALIGN:
+			if (flag_tree_bit_ccp
+			    && gimple_builtin_call_types_compatible_p (stmt, decl))
+			{
+				lower_builtin_posix_memalign (gsi);
+				return;
+			}
+			break;
+
+		    case BUILT_IN_FPCLASSIFY:
+			lower_builtin_fpclassify (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    CASE_FLT_FN (BUILT_IN_ISINF):
+		    case BUILT_IN_ISINFD32:
+		    case BUILT_IN_ISINFD64:
+		    case BUILT_IN_ISINFD128:
+			lower_builtin_isinfinite (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    case BUILT_IN_ISNAND32:
+		    case BUILT_IN_ISNAND64:
+		    case BUILT_IN_ISNAND128:
+		    CASE_FLT_FN (BUILT_IN_ISNAN):
+			lower_builtin_isnan (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    case BUILT_IN_ISNORMAL:
+			lower_builtin_isnormal (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    case BUILT_IN_ISZERO:
+			lower_builtin_iszero (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    case BUILT_IN_ISSUBNORMAL:
+			lower_builtin_issubnormal (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    default:
+			break;
+	    }
 	  }
 
 	if (decl && (flags_from_decl_or_type (decl) & ECF_NORETURN))
@@ -822,6 +872,580 @@ lower_builtin_setjmp (gimple_stmt_iterator *gsi)
   gsi_remove (gsi, false);
 }
 
+static tree
+emit_tree_and_return_var (gimple_seq *seq, tree arg)
+{
+  tree tmp = create_tmp_reg (TREE_TYPE(arg));
+  gassign *stm = gimple_build_assign(tmp, arg);
+  gimple_seq_add_stmt (seq, stm);
+  return tmp;
+}
+
+/* This function builds an if statement that ends up using explicit branches
+   instead of becoming a csel. This function assumes you will fall through to
+   the next statements after this condition for the false branch. */
+static void
+emit_tree_cond (gimple_seq *seq, tree result_variable, tree exit_label,
+		tree cond, tree true_branch)
+{
+    /* Create labels for fall through  */
+  tree true_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree false_label = create_artificial_label (UNKNOWN_LOCATION);
+  gcond *stmt = gimple_build_cond_from_tree (cond, true_label, false_label);
+  gimple_seq_add_stmt (seq, stmt);
+
+  /* Build the true case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (true_label));
+  tree value = TREE_CONSTANT (true_branch) 
+	     ? true_branch
+	     : emit_tree_and_return_var (seq, true_branch);
+  gimple_seq_add_stmt (seq, gimple_build_assign (result_variable, value));
+  gimple_seq_add_stmt (seq, gimple_build_goto (exit_label));
+
+  /* Build the false case. */
+  gimple_seq_add_stmt (seq, gimple_build_label (false_label));
+}
+
+static tree
+get_num_as_int (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  gcc_assert (format->b == 2);
+
+  /* Re-interpret the float as an unsigned integer type
+     with equal precision.  */
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+  tree conv_arg = fold_build1_loc (loc, VIEW_CONVERT_EXPR, int_arg_type, arg);
+  return emit_tree_and_return_var(seq, conv_arg);
+}
+
+  /* Check if the number that is being classified is close enough to IEEE 754
+     format to be able to go in the early exit code.  */
+static bool
+use_ieee_int_mode (tree arg)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  return (format->is_binary_ieee_compatible
+	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
+	  /* We explicitly disable quad float support on 32 bit systems.  */
+	  && !(UNITS_PER_WORD == 4 && type_width == 128)
+	  && targetm.scalar_mode_supported_p (mode));
+}
+
+static tree
+is_normal (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const tree bool_type = boolean_type_node;
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    REAL_VALUE_TYPE rinf, rmin;
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    char buf[128];
+    real_inf(&rinf);
+    get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+    real_from_string (&rmin, buf);
+
+    tree inf_exp = fold_build2_loc (loc, LT_EXPR, bool_type, arg_p,
+				    build_real (type, rinf));
+
+    tree min_exp = fold_build2_loc (loc, GE_EXPR, bool_type, arg_p,
+				    build_real (type, rmin));
+
+    tree res
+      = fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, min_exp),
+			 emit_tree_and_return_var (seq, inf_exp));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  gcc_assert (format->b == 2);
+
+  const tree int_type = unsigned_type_node;
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  const int exp_mask  = (1 << exp_bits) - 1;
+
+  /* Get the number reinterpreted as an integer.  */
+  tree int_arg = get_num_as_int (seq, arg, loc);
+
+  /* Extract exp bits from the float, where we expect the exponent to be.
+     We create a new type because BIT_FIELD_REF does not allow you to
+     extract less bits than the precision of the storage variable.  */
+  tree exp_tmp
+    = fold_build3_loc (loc, BIT_FIELD_REF,
+		       build_nonstandard_integer_type (exp_bits, true),
+		       int_arg,
+		       build_int_cstu (int_type, exp_bits),
+		       build_int_cstu (int_type, format->p - 1));
+  tree exp_bitfield = emit_tree_and_return_var (seq, exp_tmp);
+
+  /* Re-interpret the extracted exponent bits as a 32 bit int.
+     This allows us to continue doing operations as int_type.  */
+  tree exp
+    = emit_tree_and_return_var(seq,fold_build1_loc (loc, NOP_EXPR, int_type,
+						    exp_bitfield));
+
+  /* exp_mask & ~1.  */
+  tree mask_check
+     = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+			build_int_cstu (int_type, exp_mask),
+			fold_build1_loc (loc, BIT_NOT_EXPR, int_type,
+					 build_int_cstu (int_type, 1)));
+
+  /* (exp + 1) & mask_check.
+     Check to see if exp is not all 0 or all 1.  */
+  tree exp_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+		       emit_tree_and_return_var (seq,
+				fold_build2_loc (loc, PLUS_EXPR, int_type, exp,
+						 build_int_cstu (int_type, 1))),
+		       mask_check);
+
+  tree res = fold_build2_loc (loc, NE_EXPR, boolean_type_node,
+			      build_int_cstu (int_type, 0),
+			      emit_tree_and_return_var(seq, exp_check));
+
+  return emit_tree_and_return_var (seq, res);
+}
+
+static tree
+is_zero (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    tree res = fold_build2_loc (loc, EQ_EXPR, boolean_type_node, arg_p,
+				build_real (type, dconst0));
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  gcc_assert (format->b == 2);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* num << 1 == 0.
+     This checks to see if the number is zero.  */
+  tree zero_check
+    = fold_build2_loc (loc, EQ_EXPR, boolean_type_node,
+		       build_int_cstu (int_arg_type, 0),
+		       emit_tree_and_return_var (seq, int_arg));
+
+  return emit_tree_and_return_var (seq, zero_check);
+}
+
+static tree
+is_subnormal (gimple_seq *seq, tree arg, location_t loc)
+{
+  const tree bool_type = boolean_type_node;
+
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  gcc_assert (format->b == 2);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    REAL_VALUE_TYPE r;
+    char buf[128];
+    sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (mode)->emin - 1);
+    real_from_string (&r, buf);
+    tree subnorm = fold_build2_loc (loc, LT_EXPR, bool_type,
+				    arg_p, build_real (type, r));
+
+    tree zero = fold_build2_loc (loc, GT_EXPR, bool_type, arg_p,
+				 build_real (type, dconst0));
+
+    tree res
+      = fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, subnorm),
+			 emit_tree_and_return_var (seq, zero));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check for a zero exponent and non-zero mantissa.
+     This can be done with two comparisons by first apply a
+     removing the sign bit and checking if the value is larger
+     than the mantissa mask.  */
+
+  /* This creates a mask to be used to check the mantissa value in the shifted
+     integer representation of the fpnum.  */
+  tree significant_bit = build_int_cstu (int_arg_type, format->p - 1);
+  tree mantissa_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					significant_bit),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check if exponent is zero and mantissa is not. */
+  tree subnorm_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, LE_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 mantissa_mask));
+
+  return emit_tree_and_return_var (seq, subnorm_check);
+}
+
+static tree
+is_infinity (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_INFINITIES (mode))
+  {
+    return build_int_cst (bool_type, 0);
+  }
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    REAL_VALUE_TYPE r;
+    real_inf (&r);
+    tree res = fold_build2_loc (loc, EQ_EXPR, bool_type, arg_p,
+				build_real (type, r));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  gcc_assert (format->b == 2);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type, exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type, exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0. */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, EQ_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Determines if the given number is a NaN value.
+   This function is the last in the chain and only has to
+   check if it's preconditions are true.  */
+static tree
+is_nan (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (mode))
+  {
+    return build_int_cst (bool_type, 0);
+  }
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    tree eq_check
+      = fold_build2_loc (loc, ORDERED_EXPR, bool_type,arg_p, arg_p);
+
+    tree res
+      = fold_build1_loc (loc, BIT_NOT_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, eq_check));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type, exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type, exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is not 0. */
+  tree nan_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, GT_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, nan_check);
+}
+
+/* Validate a single argument ARG against a tree code CODE representing
+   a type.  */
+static bool
+gimple_validate_arg (gimple* call, int index, enum tree_code code)
+{
+  const tree arg = gimple_call_arg(call, index);
+  if (!arg)
+    return false;
+  else if (code == POINTER_TYPE)
+    return POINTER_TYPE_P (TREE_TYPE (arg));
+  else if (code == INTEGER_TYPE)
+    return INTEGRAL_TYPE_P (TREE_TYPE (arg));
+  return code == TREE_CODE (TREE_TYPE (arg));
+}
+
+/* Lowers calls to __builtin_fpclassify to
+   fpclassify (x) ->
+     isnormal(x) ? FP_NORMAL :
+       iszero (x) ? FP_ZERO :
+	 isnan (x) ? FP_NAN :
+	   isinfinite (x) ? FP_INFINITE :
+	     FP_SUBNORMAL.
+
+   The code may use integer arithmentic if it decides
+   that the produced assembly would be faster. This can only be done
+   for numbers that are similar to IEEE-754 in format.
+
+   This builtin will generate code to return the appropriate floating
+   point classification depending on the value of the floating point
+   number passed in.  The possible return values must be supplied as
+   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
+   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
+   one floating point argument which is "type generic".
+*/
+static void
+lower_builtin_fpclassify (gimple_stmt_iterator *gsi)
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 6
+      || !gimple_validate_arg (call, 0, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 1, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 2, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 3, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 4, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 5, REAL_TYPE))
+    return;
+
+  /* Collect the arguments from the call.  */
+  tree fp_nan = gimple_call_arg (call, 0);
+  tree fp_infinite = gimple_call_arg (call, 1);
+  tree fp_normal = gimple_call_arg (call, 2);
+  tree fp_subnormal = gimple_call_arg (call, 3);
+  tree fp_zero = gimple_call_arg (call, 4);
+  tree arg = gimple_call_arg (call, 5);
+
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (TREE_TYPE (orig_dest));
+
+  emit_tree_cond (&body, dest, done_label,
+		  is_normal(&body, arg, loc), fp_normal);
+  emit_tree_cond (&body, dest, done_label,
+		  is_zero(&body, arg, loc), fp_zero);
+  emit_tree_cond (&body, dest, done_label,
+		  is_nan(&body, arg, loc), fp_nan);
+  emit_tree_cond (&body, dest, done_label,
+		  is_infinity(&body, arg, loc), fp_infinite);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to is_subnormal.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, fp_subnormal));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+  {
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+  }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+
+  /* Remove the call to __builtin_fpclassify.  */
+  gsi_remove (gsi, false);
+}
+
+static void
+gen_call_fp_builtin (gimple_stmt_iterator *gsi,
+		     tree (*fndecl)(gimple_seq *, tree, location_t))
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 1
+      || !gimple_validate_arg (call, 0, REAL_TYPE))
+    return;
+
+  tree arg = gimple_call_arg (call, 0);
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  tree type = TREE_TYPE (orig_dest);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (type);
+
+  tree t_true = build_int_cst (type, true);
+  tree t_false = build_int_cst (type, false);
+
+  emit_tree_cond (&body, dest, done_label,
+		  fndecl(&body, arg, loc), t_true);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to false.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, t_false));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+  {
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+  }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+  /* Remove the call to the builtin.  */
+  gsi_remove (gsi, false);
+}
+
+static void
+lower_builtin_isnan (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_nan);
+}
+
+static void
+lower_builtin_isinfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_infinity);
+}
+
+static void
+lower_builtin_isnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_normal);
+}
+
+static void
+lower_builtin_iszero (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_zero);
+}
+
+static void
+lower_builtin_issubnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_subnormal);
+}
+
 /* Lower calls to posix_memalign to
      res = posix_memalign (ptr, align, size);
      if (res == 0)
diff --git a/gcc/real.h b/gcc/real.h
index 59af580e78f2637be84f71b98b45ec6611053222..30604adf0f7d4ca4257ed92f6d019b52a52db6c5 100644
--- a/gcc/real.h
+++ b/gcc/real.h
@@ -161,6 +161,19 @@ struct real_format
   bool has_signed_zero;
   bool qnan_msb_set;
   bool canonical_nan_lsbs_set;
+
+  /* This flag indicates whether the format is suitable for the optimized
+     code paths for the __builtin_fpclassify function and friends.  For
+     this, the format must be a base 2 representation with the sign bit as
+     the most-significant bit followed by (exp <= 32) exponent bits
+     followed by the mantissa bits.  It must be possible to interpret the
+     bits of the floating-point representation as an integer.  NaNs and
+     INFs (if available) must be represented by the same schema used by
+     IEEE 754.  (NaNs must be represented by an exponent with all bits 1,
+     any mantissa except all bits 0 and any sign bit.  +INF and -INF must be
+     represented by an exponent with all bits 1, a mantissa with all bits 0 and
+     a sign bit of 0 and 1 respectively.)  */
+  bool is_binary_ieee_compatible;
   const char *name;
 };
 
@@ -511,6 +524,11 @@ extern bool real_isinteger (const REAL_VALUE_TYPE *, HOST_WIDE_INT *);
    float string.  BUF must be large enough to contain the result.  */
 extern void get_max_float (const struct real_format *, char *, size_t);
 
+/* Write into BUF the minimum negative representable finite floating-point
+   number, x, such that b**(x-1) is normalized.
+   BUF must be large enough to contain the result.  */
+extern void get_min_float (const struct real_format *, char *, size_t);
+
 #ifndef GENERATOR_FILE
 /* real related routines.  */
 extern wide_int real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
diff --git a/gcc/real.c b/gcc/real.c
index 66e88e2ad366f7848609d157074c80420d778bcf..20c907a6d543c73ba62aa9a8ddf6973d82de7832 100644
--- a/gcc/real.c
+++ b/gcc/real.c
@@ -3052,6 +3052,7 @@ const struct real_format ieee_single_format =
     true,
     true,
     false,
+    true,
     "ieee_single"
   };
 
@@ -3075,6 +3076,7 @@ const struct real_format mips_single_format =
     true,
     false,
     true,
+    true,
     "mips_single"
   };
 
@@ -3098,6 +3100,7 @@ const struct real_format motorola_single_format =
     true,
     true,
     true,
+    true,
     "motorola_single"
   };
 
@@ -3132,6 +3135,7 @@ const struct real_format spu_single_format =
     true,
     false,
     false,
+    false,
     "spu_single"
   };
 \f
@@ -3343,6 +3347,7 @@ const struct real_format ieee_double_format =
     true,
     true,
     false,
+    true,
     "ieee_double"
   };
 
@@ -3366,6 +3371,7 @@ const struct real_format mips_double_format =
     true,
     false,
     true,
+    true,
     "mips_double"
   };
 
@@ -3389,6 +3395,7 @@ const struct real_format motorola_double_format =
     true,
     true,
     true,
+    true,
     "motorola_double"
   };
 \f
@@ -3735,6 +3742,7 @@ const struct real_format ieee_extended_motorola_format =
     true,
     true,
     true,
+    false,
     "ieee_extended_motorola"
   };
 
@@ -3758,6 +3766,7 @@ const struct real_format ieee_extended_intel_96_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96"
   };
 
@@ -3781,6 +3790,7 @@ const struct real_format ieee_extended_intel_128_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_128"
   };
 
@@ -3806,6 +3816,7 @@ const struct real_format ieee_extended_intel_96_round_53_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96_round_53"
   };
 \f
@@ -3896,6 +3907,7 @@ const struct real_format ibm_extended_format =
     true,
     true,
     false,
+    false,
     "ibm_extended"
   };
 
@@ -3919,6 +3931,7 @@ const struct real_format mips_extended_format =
     true,
     false,
     true,
+    false,
     "mips_extended"
   };
 
@@ -4184,6 +4197,7 @@ const struct real_format ieee_quad_format =
     true,
     true,
     false,
+    true,
     "ieee_quad"
   };
 
@@ -4207,6 +4221,7 @@ const struct real_format mips_quad_format =
     true,
     false,
     true,
+    true,
     "mips_quad"
   };
 \f
@@ -4509,6 +4524,7 @@ const struct real_format vax_f_format =
     false,
     false,
     false,
+    false,
     "vax_f"
   };
 
@@ -4532,6 +4548,7 @@ const struct real_format vax_d_format =
     false,
     false,
     false,
+    false,
     "vax_d"
   };
 
@@ -4555,6 +4572,7 @@ const struct real_format vax_g_format =
     false,
     false,
     false,
+    false,
     "vax_g"
   };
 \f
@@ -4633,6 +4651,7 @@ const struct real_format decimal_single_format =
     true,
     true,
     false,
+    false,
     "decimal_single"
   };
 
@@ -4657,6 +4676,7 @@ const struct real_format decimal_double_format =
     true,
     true,
     false,
+    false,
     "decimal_double"
   };
 
@@ -4681,6 +4701,7 @@ const struct real_format decimal_quad_format =
     true,
     true,
     false,
+    false,
     "decimal_quad"
   };
 \f
@@ -4820,6 +4841,7 @@ const struct real_format ieee_half_format =
     true,
     true,
     false,
+    true,
     "ieee_half"
   };
 
@@ -4846,6 +4868,7 @@ const struct real_format arm_half_format =
     true,
     false,
     false,
+    false,
     "arm_half"
   };
 \f
@@ -4893,6 +4916,7 @@ const struct real_format real_internal_format =
     true,
     true,
     false,
+    false,
     "real_internal"
   };
 \f
@@ -5080,6 +5104,16 @@ get_max_float (const struct real_format *fmt, char *buf, size_t len)
   gcc_assert (strlen (buf) < len);
 }
 
+/* Write into BUF the minimum negative representable finite floating-point
+   number, x, such that b**(x-1) is normalized.
+   BUF must be large enough to contain the result.  */
+void
+get_min_float (const struct real_format *fmt, char *buf, size_t len)
+{
+  sprintf (buf, "0x1p%d", fmt->emin - 1);
+  gcc_assert (strlen (buf) < len);
+}
+
 /* True if mode M has a NaN representation and
    the treatment of NaN operands is important.  */
 
diff --git a/gcc/testsuite/gcc.dg/c99-builtins.c b/gcc/testsuite/gcc.dg/c99-builtins.c
new file mode 100644
index 0000000000000000000000000000000000000000..3ca3ed43e7a69a266467ae2a9aa738ce2d15afb9
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/c99-builtins.c
@@ -0,0 +1,131 @@
+/* { dg-options "-O2" } */
+/* { dg-do run } */
+
+#include <assert.h>
+#include <math.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+int
+main(void)
+{
+
+	/* Test FP Classify as a whole.  */
+
+	assert(fpclassify((float)0) == FP_ZERO);
+	assert(fpclassify((float)-0.0) == FP_ZERO);
+	printf("PASS fpclassify<FP_ZERO>(float)\n");
+
+	assert(fpclassify((double)0) == FP_ZERO);
+	assert(fpclassify((double)-0) == FP_ZERO);
+	printf("PASS fpclassify<FP_ZERO>(double)\n");
+
+	assert(fpclassify((long double)0) == FP_ZERO);
+	assert(fpclassify((long double)-0.0) == FP_ZERO);
+
+	printf("PASS fpclassify<FP_ZERO>(long double)\n");
+
+	assert(fpclassify((float)1) == FP_NORMAL);
+	assert(fpclassify((float)1000) == FP_NORMAL);
+	printf("PASS fpclassify<FP_NORMAL>(float)\n");
+
+	assert(fpclassify((double)1) == FP_NORMAL);
+	assert(fpclassify((double)1000) == FP_NORMAL);
+	printf("PASS fpclassify<FP_NORMAL>(double)\n");
+
+	assert(fpclassify((long double)1) == FP_NORMAL);
+	assert(fpclassify((long double)1000) == FP_NORMAL);
+	printf("PASS fpclassify<FP_NORMAL>(long double)\n");
+
+	assert(fpclassify(0x1.2p-150f) == FP_SUBNORMAL);
+	printf("PASS fpclassify<FP_NORMAL>(float)\n");
+
+	assert(fpclassify(0x1.2p-1075) == FP_SUBNORMAL);
+	printf("PASS fpclassify<FP_NORMAL>(double)\n");
+
+	assert(fpclassify(0x1.2p-16383L) == FP_SUBNORMAL);
+	printf("PASS fpclassify<FP_NORMAL>(long double)\n");
+
+	assert(fpclassify(HUGE_VALF) == FP_INFINITE);
+	assert(fpclassify((float)HUGE_VAL) == FP_INFINITE);
+	assert(fpclassify((float)HUGE_VALL) == FP_INFINITE);
+	printf("PASS fpclassify<FP_INFINITE>(float)\n");
+
+	assert(fpclassify(HUGE_VAL) == FP_INFINITE);
+	assert(fpclassify((double)HUGE_VALF) == FP_INFINITE);
+	assert(fpclassify((double)HUGE_VALL) == FP_INFINITE);
+	printf("PASS fpclassify<FP_INFINITE>(double)\n");
+
+	assert(fpclassify(HUGE_VALL) == FP_INFINITE);
+	assert(fpclassify((long double)HUGE_VALF) == FP_INFINITE);
+	assert(fpclassify((long double)HUGE_VAL) == FP_INFINITE);
+	printf("PASS fpclassify<FP_INFINITE>(long double)\n");
+
+	assert(fpclassify(NAN) == FP_NAN);
+	printf("PASS fpclassify<FP_NAN>(float)\n");
+
+	assert(fpclassify((double)NAN) == FP_NAN);
+	printf("PASS fpclassify<FP_NAN>(double)\n");
+
+	assert(fpclassify((long double)NAN) == FP_NAN);
+	printf("PASS fpclassify<FP_NAN>(long double)\n");
+
+	/* Test if individual builtins work.  */
+
+	assert(__builtin_iszero((float)0));
+	assert(__builtin_iszero((float)-0.0));
+	printf("PASS __builtin_iszero(float)\n");
+
+	assert(__builtin_iszero((double)0));
+	assert(__builtin_iszero((double)-0));
+	printf("PASS __builtin_iszero(double)\n");
+
+	assert(__builtin_iszero((long double)0));
+	assert(__builtin_iszero((long double)-0.0));
+	printf("PASS __builtin_iszero(long double)\n");
+
+	assert(__builtin_isnormal((float)1));
+	assert(__builtin_isnormal((float)1000));
+	printf("PASS __builtin_isnormal(float)\n");
+
+	assert(__builtin_isnormal((double)1));
+	assert(__builtin_isnormal((double)1000));
+	printf("PASS __builtin_isnormal(double)\n");
+
+	assert(__builtin_isnormal((long double)1));
+	assert(__builtin_isnormal((long double)1000));
+	printf("PASS __builtin_isnormal(long double)\n");
+
+	assert(__builtin_issubnormal(0x1.2p-150f));
+	printf("PASS __builtin_issubnormal(float)\n");
+
+	assert(__builtin_issubnormal(0x1.2p-1075));
+	printf("PASS __builtin_issubnormal(double)\n");
+
+	assert(__builtin_issubnormal(0x1.2p-16383L));
+	printf("PASS __builtin_issubnormal(long double)\n");
+
+	assert(__builtin_isinf(HUGE_VALF));
+	assert(__builtin_isinf((float)HUGE_VAL));
+	assert(__builtin_isinf((float)HUGE_VALL));
+	printf("PASS __builtin_isinf(float)\n");
+
+	assert(__builtin_isinf(HUGE_VAL));
+	assert(__builtin_isinf((double)HUGE_VALF));
+	assert(__builtin_isinf((double)HUGE_VALL));
+	printf("PASS __builtin_isinf(double)\n");
+	
+	assert(__builtin_isinf(HUGE_VALL));
+	assert(__builtin_isinf((long double)HUGE_VALF));
+	assert(__builtin_isinf((long double)HUGE_VAL));
+	printf("PASS __builtin_isinf(double)\n");
+
+	assert(__builtin_isnan(NAN));
+	printf("PASS __builtin_isnan(float)\n");
+	assert(__builtin_isnan((double)NAN));
+	printf("PASS __builtin_isnan(double)\n");
+	assert(__builtin_isnan((long double)NAN));
+	printf("PASS __builtin_isnan(double)\n");
+
+	exit(0);
+}
diff --git a/gcc/testsuite/gcc.dg/fold-notunord.c b/gcc/testsuite/gcc.dg/fold-notunord.c
deleted file mode 100644
index ca345154ac204cb5f380855828421b7f88d49052..0000000000000000000000000000000000000000
--- a/gcc/testsuite/gcc.dg/fold-notunord.c
+++ /dev/null
@@ -1,9 +0,0 @@
-/* { dg-do compile } */
-/* { dg-options "-O -ftrapping-math -fdump-tree-optimized" } */
-
-int f (double d)
-{
-  return !__builtin_isnan (d);
-}
-
-/* { dg-final { scan-tree-dump " ord " "optimized" } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
new file mode 100644
index 0000000000000000000000000000000000000000..84a73a6483780dac2347e72fa7d139545d2087eb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
@@ -0,0 +1,22 @@
+/* This file checks the code generation for the new __builtin_fpclassify.
+   because checking the exact assembly isn't very useful, we'll just be checking
+   for the presence of certain instructions and the omition of others. */
+/* { dg-options "-O2" } */
+/* { dg-do compile } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fabs\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmp\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmpe\[ \t\]?" } } */
+/* { dg-final { scan-assembler "\[ \t\]?sbfx\[ \t\]?" } } */
+
+#include <stdio.h>
+#include <math.h>
+
+/*
+ fp_nan = args[0];
+ fp_infinite = args[1];
+ fp_normal = args[2];
+ fp_subnormal = args[3];
+ fp_zero = args[4];
+*/
+
+int f(double x) { return __builtin_fpclassify(0, 1, 4, 3, 2, x); }

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-11-11 17:26 [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE Tamar Christina
@ 2016-11-11 22:06 ` Joseph Myers
  2016-11-24 15:52   ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Joseph Myers @ 2016-11-11 22:06 UTC (permalink / raw)
  To: Tamar Christina
  Cc: GCC Patches, Wilco Dijkstra, rguenther, law, Michael Meissner, nd

On Fri, 11 Nov 2016, Tamar Christina wrote:

> is_infinite and is_zero have been created. This patch also introduces two new
> intrinsics __builtin_iszero and __builtin_issubnormal.

And so the ChangeLog entry needs to include:

	PR middle-end/77925
	PR middle-end/77926

(in addition to PR references for whatever PRs for signaling NaN issues 
are partly addressed by the patch - those can't be closed however until 
fully fixed for all formats with signaling NaNs).

> NOTE: the old code for ISNORMAL, ISSUBNORMAL, ISNAN and ISINFINITE had a
>       special case for ibm_extended_format which dropped the second part
>       of the number (which was being represented as two numbers internally).
>       fpclassify did not have such a case. As such I have dropped it as I am
>       under the impression the format is deprecated? So the optimization isn't
>       as important? If this is wrong it would be easy to add that back in.

It's not simply an optimization when it involves comparison against the 
largest normal value (to determine whether something is finite / infinite) 
- in that case it's needed to avoid bad results on the true maximum value 
(mantissa 53 1s, 0, 53 1s) which can occur at runtime but GCC can't 
represent internally because it internally treats this format as having a 
fixed width of 106 bits.

> Should ISFINITE be change as well? Also should it be SUBNORMAL or DENORMAL?

It's subnormal.  (Denormal was the old name in IEEE 754-1985.)

> And what should I do about Documentation? I'm not sure how to document a new
> BUILTIN.

Document them in the "Other Builtins" node in extend.texi, like the 
existing classification built-in functions.

> diff --git a/gcc/builtins.def b/gcc/builtins.def
> index 219feebd3aebefbd079bf37cc801453cd1965e00..e3d12eccfed528fd6df0570b65f8aef42494d675 100644
> --- a/gcc/builtins.def
> +++ b/gcc/builtins.def
> @@ -831,6 +831,8 @@ DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFL, "isinfl", BT_FN_INT_LONGDOUBLE, ATTR_CO
>  DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD32, "isinfd32", BT_FN_INT_DFLOAT32, ATTR_CONST_NOTHROW_LEAF_LIST)
>  DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD64, "isinfd64", BT_FN_INT_DFLOAT64, ATTR_CONST_NOTHROW_LEAF_LIST)
>  DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD128, "isinfd128", BT_FN_INT_DFLOAT128, ATTR_CONST_NOTHROW_LEAF_LIST)
> +DEF_C99_C90RES_BUILTIN (BUILT_IN_ISZERO, "iszero", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
> +DEF_C99_C90RES_BUILTIN (BUILT_IN_ISSUBNORMAL, "issubnormal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)

No, these should be DEF_GCC_BUILTIN, so that only the __builtin_* names 
exist.

> +static tree
> +is_zero (gimple_seq *seq, tree arg, location_t loc)
> +{
> +  tree type = TREE_TYPE (arg);
> +
> +  /* If not using optimized route then exit early.  */
> +  if (!use_ieee_int_mode (arg))
> +  {
> +    tree arg_p
> +      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
> +							arg));
> +    tree res = fold_build2_loc (loc, EQ_EXPR, boolean_type_node, arg_p,
> +				build_real (type, dconst0));

There is no need to take the absolute value before comparing with constant 
0.

I think tests for the new built-in functions should be added for the 
_Float* types (so add gcc.dg/torture/float-tg-4.h to test them and 
associated float*-tg-4.c files using that header to instantiate the tests 
for each type).

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-11-11 22:06 ` Joseph Myers
@ 2016-11-24 15:52   ` Tamar Christina
  2016-11-24 18:28     ` Joseph Myers
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2016-11-24 15:52 UTC (permalink / raw)
  To: Joseph Myers
  Cc: GCC Patches, Wilco Dijkstra, rguenther, law, Michael Meissner, nd

[-- Attachment #1: Type: text/plain, Size: 7823 bytes --]

Hi All,

This is a respin of the v3 of the patch which adds an optimized route to the fpclassify builtin
for floating point numbers which are similar to IEEE-754 in format.

Compared to the last one this adds the tests requested for the FloatN types,
adds documentation, ISFINITE and IBM extended support back in.

Regression tests ran on aarch64-none-linux and arm-none-linux-gnueabi
and no regression. x86_64 bootstrapped successfully as well.

I would appreciate if someone with access to a machine with the IBM extended
types can test this as well.

Ok for trunk?

Thanks,
Tamar

gcc/
2016-11-17  Tamar Christina  <tamar.christina@arm.com>

        PR middle-end/77925
        PR middle-end/77926
        PR middle-end/66462

        * gcc/builtins.c (fold_builtin_fpclassify): Removed.
        (fold_builtin_interclass_mathfn): Removed.
        (expand_builtin): Added builtins to lowering list.
        (fold_builtin_n): Removed fold_builtin_varargs.
        (fold_builtin_varargs): Removed.
        * gcc/builtins.def (BUILT_IN_ISZERO, BUILT_IN_ISSUBNORMAL): Added.
        * gcc/real.h (get_min_float): Added.
        (real_format): Added is_ieee_compatible field.
        * gcc/real.c (get_min_float): Added.
        (ieee_single_format): Set is_ieee_compatible flag.
        * gcc/gimple-low.c (lower_stm): Define BUILT_IN_FPCLASSIFY,
        CASE_FLT_FN (BUILT_IN_ISINF), BUILT_IN_ISINFD32, BUILT_IN_ISINFD64,
        BUILT_IN_ISINFD128, BUILT_IN_ISNAND32, BUILT_IN_ISNAND64,
        BUILT_IN_ISNAND128, BUILT_IN_ISNAN, BUILT_IN_ISNORMAL, BUILT_IN_ISZERO,
        BUILT_IN_ISSUBNORMAL, CASE_FLT_FN (BUILT_IN_FINITE), BUILT_IN_FINITED32
        BUILT_IN_FINITED64, BUILT_IN_FINITED128, BUILT_IN_ISFINITE.
        (lower_builtin_fpclassify, is_nan, is_normal, is_infinity): Added.
        (is_zero, is_subnormal, is_finite, use_ieee_int_mode): Likewise.
        (lower_builtin_isnan, lower_builtin_isinfinite): Likewise.
        (lower_builtin_isnormal, lower_builtin_iszero): Likewise.
        (lower_builtin_issubnormal, lower_builtin_isfinite): Likewise.
        (emit_tree_cond, get_num_as_int, emit_tree_and_return_var): Added.
        (mips_single_format): Likewise.
        (motorola_single_format): Likewise.
        (spu_single_format): Likewise.
        (ieee_double_format): Likewise.
        (mips_double_format): Likewise.
        (motorola_double_format): Likewise.
        (ieee_extended_motorola_format): Likewise.
        (ieee_extended_intel_128_format): Likewise.
        (ieee_extended_intel_96_round_53_format): Likewise.
        (ibm_extended_format): Likewise.
        (mips_extended_format): Likewise.
        (ieee_quad_format): Likewise.
        (mips_quad_format): Likewise.
        (vax_f_format): Likewise.
        (vax_d_format): Likewise.
        (vax_g_format): Likewise.
        (decimal_single_format): Likewise.
        (decimal_quad_format): Likewise.
        (iee_half_format): Likewise.
        (mips_single_format): Likewise.
        (arm_half_format): Likewise.
        (real_internal_format): Likewise.
        * gcc/doc/extend.texi: Added documentation for built-ins.

gcc/testsuite/
2016-11-17  Tamar Christina  <tamar.christina@arm.com>

        * gcc.target/aarch64/builtin-fpclassify.c: New codegen test.
        * gcc.dg/fold-notunord.c: Removed.
        * gcc.dg/torture/floatn-tg-4.h: Added tests for iszero and issubnormal.
        * gcc.dg/torture/float128-tg-4.c: Likewise.
        * gcc.dg/torture/float128x-tg-4: Likewise.
        * gcc.dg/torture/float16-tg-4.c: Likewise.
        * gcc.dg/torture/float32-tg-4.c: Likewise.
        * gcc.dg/torture/float32x-tg-4.c: Likewise.
        * gcc.dg/torture/float64-tg-4.c: Likewise.
        * gcc.dg/torture/float64x-tg-4.c: Likewise.
        * gcc.dg/pr28796-1.c: Added -O2.
        * gcc.dg/builtins-43.c: Check lower instead of gimple.
________________________________________
From: Joseph Myers <joseph@codesourcery.com>
Sent: Friday, November 11, 2016 10:05:43 PM
To: Tamar Christina
Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; law@redhat.com; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On Fri, 11 Nov 2016, Tamar Christina wrote:

> is_infinite and is_zero have been created. This patch also introduces two new
> intrinsics __builtin_iszero and __builtin_issubnormal.

And so the ChangeLog entry needs to include:

        PR middle-end/77925
        PR middle-end/77926

(in addition to PR references for whatever PRs for signaling NaN issues
are partly addressed by the patch - those can't be closed however until
fully fixed for all formats with signaling NaNs).

> NOTE: the old code for ISNORMAL, ISSUBNORMAL, ISNAN and ISINFINITE had a
>       special case for ibm_extended_format which dropped the second part
>       of the number (which was being represented as two numbers internally).
>       fpclassify did not have such a case. As such I have dropped it as I am
>       under the impression the format is deprecated? So the optimization isn't
>       as important? If this is wrong it would be easy to add that back in.

It's not simply an optimization when it involves comparison against the
largest normal value (to determine whether something is finite / infinite)
- in that case it's needed to avoid bad results on the true maximum value
(mantissa 53 1s, 0, 53 1s) which can occur at runtime but GCC can't
represent internally because it internally treats this format as having a
fixed width of 106 bits.

> Should ISFINITE be change as well? Also should it be SUBNORMAL or DENORMAL?

It's subnormal.  (Denormal was the old name in IEEE 754-1985.)

> And what should I do about Documentation? I'm not sure how to document a new
> BUILTIN.

Document them in the "Other Builtins" node in extend.texi, like the
existing classification built-in functions.

> diff --git a/gcc/builtins.def b/gcc/builtins.def
> index 219feebd3aebefbd079bf37cc801453cd1965e00..e3d12eccfed528fd6df0570b65f8aef42494d675 100644
> --- a/gcc/builtins.def
> +++ b/gcc/builtins.def
> @@ -831,6 +831,8 @@ DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFL, "isinfl", BT_FN_INT_LONGDOUBLE, ATTR_CO
>  DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD32, "isinfd32", BT_FN_INT_DFLOAT32, ATTR_CONST_NOTHROW_LEAF_LIST)
>  DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD64, "isinfd64", BT_FN_INT_DFLOAT64, ATTR_CONST_NOTHROW_LEAF_LIST)
>  DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD128, "isinfd128", BT_FN_INT_DFLOAT128, ATTR_CONST_NOTHROW_LEAF_LIST)
> +DEF_C99_C90RES_BUILTIN (BUILT_IN_ISZERO, "iszero", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
> +DEF_C99_C90RES_BUILTIN (BUILT_IN_ISSUBNORMAL, "issubnormal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)

No, these should be DEF_GCC_BUILTIN, so that only the __builtin_* names
exist.

> +static tree
> +is_zero (gimple_seq *seq, tree arg, location_t loc)
> +{
> +  tree type = TREE_TYPE (arg);
> +
> +  /* If not using optimized route then exit early.  */
> +  if (!use_ieee_int_mode (arg))
> +  {
> +    tree arg_p
> +      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
> +                                                     arg));
> +    tree res = fold_build2_loc (loc, EQ_EXPR, boolean_type_node, arg_p,
> +                             build_real (type, dconst0));

There is no need to take the absolute value before comparing with constant
0.

I think tests for the new built-in functions should be added for the
_Float* types (so add gcc.dg/torture/float-tg-4.h to test them and
associated float*-tg-4.c files using that header to instantiate the tests
for each type).

--
Joseph S. Myers
joseph@codesourcery.com

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: fpclassify_update.patch --]
[-- Type: text/x-patch; name="fpclassify_update.patch", Size: 64976 bytes --]

diff --git a/gcc/builtins.c b/gcc/builtins.c
index 3ac2d44148440b124559ba7cd3de483b7a74b72d..2340f60edb31ebf964367699aaf33ac8401dff41 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -160,7 +160,6 @@ static tree fold_builtin_0 (location_t, tree);
 static tree fold_builtin_1 (location_t, tree, tree);
 static tree fold_builtin_2 (location_t, tree, tree, tree);
 static tree fold_builtin_3 (location_t, tree, tree, tree, tree);
-static tree fold_builtin_varargs (location_t, tree, tree*, int);
 
 static tree fold_builtin_strpbrk (location_t, tree, tree, tree);
 static tree fold_builtin_strstr (location_t, tree, tree, tree);
@@ -2202,19 +2201,8 @@ interclass_mathfn_icode (tree arg, tree fndecl)
   switch (DECL_FUNCTION_CODE (fndecl))
     {
     CASE_FLT_FN (BUILT_IN_ILOGB):
-      errno_set = true; builtin_optab = ilogb_optab; break;
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      builtin_optab = isinf_optab; break;
-    case BUILT_IN_ISNORMAL:
-    case BUILT_IN_ISFINITE:
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      /* These builtins have no optabs (yet).  */
+      errno_set = true;
+      builtin_optab = ilogb_optab;
       break;
     default:
       gcc_unreachable ();
@@ -2233,8 +2221,7 @@ interclass_mathfn_icode (tree arg, tree fndecl)
 }
 
 /* Expand a call to one of the builtin math functions that operate on
-   floating point argument and output an integer result (ilogb, isinf,
-   isnan, etc).
+   floating point argument and output an integer result (ilogb, etc).
    Return 0 if a normal call should be emitted rather than expanding the
    function in-line.  EXP is the expression that is a call to the builtin
    function; if convenient, the result should be placed in TARGET.  */
@@ -5997,11 +5984,7 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
     CASE_FLT_FN (BUILT_IN_ILOGB):
       if (! flag_unsafe_math_optimizations)
 	break;
-      gcc_fallthrough ();
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-    case BUILT_IN_ISNORMAL:
+	
       target = expand_builtin_interclass_mathfn (exp, target);
       if (target)
 	return target;
@@ -6281,8 +6264,25 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
 	}
       break;
 
+    CASE_FLT_FN (BUILT_IN_ISINF):
+    case BUILT_IN_ISNAND32:
+    case BUILT_IN_ISNAND64:
+    case BUILT_IN_ISNAND128:
+    case BUILT_IN_ISNAN:
+    case BUILT_IN_ISINFD32:
+    case BUILT_IN_ISINFD64:
+    case BUILT_IN_ISINFD128:
+    case BUILT_IN_ISNORMAL:
+    case BUILT_IN_ISZERO:
+    case BUILT_IN_ISSUBNORMAL:
+    case BUILT_IN_FPCLASSIFY:
     case BUILT_IN_SETJMP:
-      /* This should have been lowered to the builtins below.  */
+    CASE_FLT_FN (BUILT_IN_FINITE):
+    case BUILT_IN_FINITED32:
+    case BUILT_IN_FINITED64:
+    case BUILT_IN_FINITED128:
+    case BUILT_IN_ISFINITE:
+      /* These should have been lowered to the builtins in gimple-low.c.  */
       gcc_unreachable ();
 
     case BUILT_IN_SETJMP_SETUP:
@@ -7622,184 +7622,19 @@ fold_builtin_modf (location_t loc, tree arg0, tree arg1, tree rettype)
   return NULL_TREE;
 }
 
-/* Given a location LOC, an interclass builtin function decl FNDECL
-   and its single argument ARG, return an folded expression computing
-   the same, or NULL_TREE if we either couldn't or didn't want to fold
-   (the latter happen if there's an RTL instruction available).  */
-
-static tree
-fold_builtin_interclass_mathfn (location_t loc, tree fndecl, tree arg)
-{
-  machine_mode mode;
-
-  if (!validate_arg (arg, REAL_TYPE))
-    return NULL_TREE;
-
-  if (interclass_mathfn_icode (arg, fndecl) != CODE_FOR_nothing)
-    return NULL_TREE;
-
-  mode = TYPE_MODE (TREE_TYPE (arg));
 
-  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
 
-  /* If there is no optab, try generic code.  */
-  switch (DECL_FUNCTION_CODE (fndecl))
-    {
-      tree result;
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      {
-	/* isinf(x) -> isgreater(fabs(x),DBL_MAX).  */
-	tree const isgr_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isgr_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	return result;
-      }
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-      {
-	/* isfinite(x) -> islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isle_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	/*result = fold_build2_loc (loc, UNGT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	result = fold_build1_loc (loc, TRUTH_NOT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  result);*/
-	return result;
-      }
-    case BUILT_IN_ISNORMAL:
-      {
-	/* isnormal(x) -> isgreaterequal(fabs(x),DBL_MIN) &
-	   islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	tree orig_arg, max_exp, min_exp;
-	machine_mode orig_mode = mode;
-	REAL_VALUE_TYPE rmax, rmin;
-	char buf[128];
-
-	orig_arg = arg = builtin_save_expr (arg);
-	if (is_ibm_extended)
-	  {
-	    /* Use double to test the normal range of IBM extended
-	       precision.  Emin for IBM extended precision is
-	       different to emin for IEEE double, being 53 higher
-	       since the low double exponent is at least 53 lower
-	       than the high double exponent.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	arg = fold_build1_loc (loc, ABS_EXPR, type, arg);
-
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&rmax, buf);
-	sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (orig_mode)->emin - 1);
-	real_from_string (&rmin, buf);
-	max_exp = build_real (type, rmax);
-	min_exp = build_real (type, rmin);
-
-	max_exp = build_call_expr (isle_fn, 2, arg, max_exp);
-	if (is_ibm_extended)
-	  {
-	    /* Testing the high end of the range is done just using
-	       the high double, using the same test as isfinite().
-	       For the subnormal end of the range we first test the
-	       high double, then if its magnitude is equal to the
-	       limit of 0x1p-969, we test whether the low double is
-	       non-zero and opposite sign to the high double.  */
-	    tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
-	    tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	    tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
-	    tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
-				       arg, min_exp);
-	    tree as_complex = build1 (VIEW_CONVERT_EXPR,
-				      complex_double_type_node, orig_arg);
-	    tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
-	    tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
-	    tree zero = build_real (type, dconst0);
-	    tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
-	    tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
-	    tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
-	    tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
-				      fold_build3 (COND_EXPR,
-						   integer_type_node,
-						   hilt, logt, lolt));
-	    eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
-				  eq_min, ok_lo);
-	    min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
-				   gt_min, eq_min);
-	  }
-	else
-	  {
-	    tree const isge_fn
-	      = builtin_decl_explicit (BUILT_IN_ISGREATEREQUAL);
-	    min_exp = build_call_expr (isge_fn, 2, arg, min_exp);
-	  }
-	result = fold_build2 (BIT_AND_EXPR, integer_type_node,
-			      max_exp, min_exp);
-	return result;
-      }
-    default:
-      break;
-    }
-
-  return NULL_TREE;
-}
-
-/* Fold a call to __builtin_isnan(), __builtin_isinf, __builtin_finite.
+/* Fold a call to __builtin_isinf_sign.
    ARG is the argument for the call.  */
 
 static tree
-fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
+fold_builtin_classify (location_t loc, tree arg, int builtin_index)
 {
-  tree type = TREE_TYPE (TREE_TYPE (fndecl));
-
   if (!validate_arg (arg, REAL_TYPE))
     return NULL_TREE;
 
   switch (builtin_index)
     {
-    case BUILT_IN_ISINF:
-      if (!HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      return NULL_TREE;
-
     case BUILT_IN_ISINF_SIGN:
       {
 	/* isinf_sign(x) -> isinf(x) ? (signbit(x) ? -1 : 1) : 0 */
@@ -7832,106 +7667,11 @@ fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
 	return tmp;
       }
 
-    case BUILT_IN_ISFINITE:
-      if (!HONOR_NANS (arg)
-	  && !HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_one_node, arg);
-
-      return NULL_TREE;
-
-    case BUILT_IN_ISNAN:
-      if (!HONOR_NANS (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      {
-	bool is_ibm_extended = MODE_COMPOSITE_P (TYPE_MODE (TREE_TYPE (arg)));
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    arg = fold_build1_loc (loc, NOP_EXPR, double_type_node, arg);
-	  }
-      }
-      arg = builtin_save_expr (arg);
-      return fold_build2_loc (loc, UNORDERED_EXPR, type, arg, arg);
-
     default:
       gcc_unreachable ();
     }
 }
 
-/* Fold a call to __builtin_fpclassify(int, int, int, int, int, ...).
-   This builtin will generate code to return the appropriate floating
-   point classification depending on the value of the floating point
-   number passed in.  The possible return values must be supplied as
-   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
-   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
-   one floating point argument which is "type generic".  */
-
-static tree
-fold_builtin_fpclassify (location_t loc, tree *args, int nargs)
-{
-  tree fp_nan, fp_infinite, fp_normal, fp_subnormal, fp_zero,
-    arg, type, res, tmp;
-  machine_mode mode;
-  REAL_VALUE_TYPE r;
-  char buf[128];
-
-  /* Verify the required arguments in the original call.  */
-  if (nargs != 6
-      || !validate_arg (args[0], INTEGER_TYPE)
-      || !validate_arg (args[1], INTEGER_TYPE)
-      || !validate_arg (args[2], INTEGER_TYPE)
-      || !validate_arg (args[3], INTEGER_TYPE)
-      || !validate_arg (args[4], INTEGER_TYPE)
-      || !validate_arg (args[5], REAL_TYPE))
-    return NULL_TREE;
-
-  fp_nan = args[0];
-  fp_infinite = args[1];
-  fp_normal = args[2];
-  fp_subnormal = args[3];
-  fp_zero = args[4];
-  arg = args[5];
-  type = TREE_TYPE (arg);
-  mode = TYPE_MODE (type);
-  arg = builtin_save_expr (fold_build1_loc (loc, ABS_EXPR, type, arg));
-
-  /* fpclassify(x) ->
-       isnan(x) ? FP_NAN :
-         (fabs(x) == Inf ? FP_INFINITE :
-	   (fabs(x) >= DBL_MIN ? FP_NORMAL :
-	     (x == 0 ? FP_ZERO : FP_SUBNORMAL))).  */
-
-  tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-		     build_real (type, dconst0));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node,
-		     tmp, fp_zero, fp_subnormal);
-
-  sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (mode)->emin - 1);
-  real_from_string (&r, buf);
-  tmp = fold_build2_loc (loc, GE_EXPR, integer_type_node,
-		     arg, build_real (type, r));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, fp_normal, res);
-
-  if (HONOR_INFINITIES (mode))
-    {
-      real_inf (&r);
-      tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-			 build_real (type, r));
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp,
-			 fp_infinite, res);
-    }
-
-  if (HONOR_NANS (mode))
-    {
-      tmp = fold_build2_loc (loc, ORDERED_EXPR, integer_type_node, arg, arg);
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, res, fp_nan);
-    }
-
-  return res;
-}
-
 /* Fold a call to an unordered comparison function such as
    __builtin_isgreater().  FNDECL is the FUNCTION_DECL for the function
    being called and ARG0 and ARG1 are the arguments for the call.
@@ -8232,40 +7972,8 @@ fold_builtin_1 (location_t loc, tree fndecl, tree arg0)
     case BUILT_IN_ISDIGIT:
       return fold_builtin_isdigit (loc, arg0);
 
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISFINITE:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISFINITE);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    case BUILT_IN_ISNORMAL:
-      return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-
     case BUILT_IN_ISINF_SIGN:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF_SIGN);
-
-    CASE_FLT_FN (BUILT_IN_ISNAN):
-    case BUILT_IN_ISNAND32:
-    case BUILT_IN_ISNAND64:
-    case BUILT_IN_ISNAND128:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISNAN);
+      return fold_builtin_classify (loc, arg0, BUILT_IN_ISINF_SIGN);
 
     case BUILT_IN_FREE:
       if (integer_zerop (arg0))
@@ -8465,7 +8173,6 @@ fold_builtin_n (location_t loc, tree fndecl, tree *args, int nargs, bool)
       ret = fold_builtin_3 (loc, fndecl, args[0], args[1], args[2]);
       break;
     default:
-      ret = fold_builtin_varargs (loc, fndecl, args, nargs);
       break;
     }
   if (ret)
@@ -9422,37 +9129,6 @@ fold_builtin_object_size (tree ptr, tree ost)
   return NULL_TREE;
 }
 
-/* Builtins with folding operations that operate on "..." arguments
-   need special handling; we need to store the arguments in a convenient
-   data structure before attempting any folding.  Fortunately there are
-   only a few builtins that fall into this category.  FNDECL is the
-   function, EXP is the CALL_EXPR for the call.  */
-
-static tree
-fold_builtin_varargs (location_t loc, tree fndecl, tree *args, int nargs)
-{
-  enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl);
-  tree ret = NULL_TREE;
-
-  switch (fcode)
-    {
-    case BUILT_IN_FPCLASSIFY:
-      ret = fold_builtin_fpclassify (loc, args, nargs);
-      break;
-
-    default:
-      break;
-    }
-  if (ret)
-    {
-      ret = build1 (NOP_EXPR, TREE_TYPE (ret), ret);
-      SET_EXPR_LOCATION (ret, loc);
-      TREE_NO_WARNING (ret) = 1;
-      return ret;
-    }
-  return NULL_TREE;
-}
-
 /* Initialize format string characters in the target charset.  */
 
 bool
diff --git a/gcc/builtins.def b/gcc/builtins.def
index 219feebd3aebefbd079bf37cc801453cd1965e00..91aa6f37fa098777bc794bad56d8c561ab9fdc44 100644
--- a/gcc/builtins.def
+++ b/gcc/builtins.def
@@ -831,6 +831,8 @@ DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFL, "isinfl", BT_FN_INT_LONGDOUBLE, ATTR_CO
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD32, "isinfd32", BT_FN_INT_DFLOAT32, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD64, "isinfd64", BT_FN_INT_DFLOAT64, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD128, "isinfd128", BT_FN_INT_DFLOAT128, ATTR_CONST_NOTHROW_LEAF_LIST)
+DEF_GCC_BUILTIN        (BUILT_IN_ISZERO, "iszero", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
+DEF_GCC_BUILTIN        (BUILT_IN_ISSUBNORMAL, "issubnormal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_C99_C90RES_BUILTIN (BUILT_IN_ISNAN, "isnan", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANF, "isnanf", BT_FN_INT_FLOAT, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANL, "isnanl", BT_FN_INT_LONGDOUBLE, ATTR_CONST_NOTHROW_LEAF_LIST)
diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi
index 0669f7999beb078822e471352036d8f13517812d..0b4a6fd645dfd829510188d8b39b75b90b5edc75 100644
--- a/gcc/doc/extend.texi
+++ b/gcc/doc/extend.texi
@@ -10433,6 +10433,10 @@ in the Cilk Plus language manual which can be found at
 @findex __builtin_isgreater
 @findex __builtin_isgreaterequal
 @findex __builtin_isinf_sign
+@findex __builtin_isinf
+@findex __builtin_isnan
+@findex __builtin_iszero
+@findex __builtin_issubnormal
 @findex __builtin_isless
 @findex __builtin_islessequal
 @findex __builtin_islessgreater
@@ -11499,6 +11503,53 @@ to classify.  GCC treats the last argument as type-generic, which
 means it does not do default promotion from float to double.
 @end deftypefn
 
+@deftypefn {Built-in Function} int __builtin_isnan (...)
+This built-in implements the C99 isnan functionality which checks if
+the given argument represents a NaN.  The return value of the
+function will either be a 0 (false) or a 1 (true).
+On most systems, when an IEEE 754 floating point is used this
+built-in does not produce a signal when a signaling NaN is used.
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from float to double.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isinf (...)
+This built-in implements the C99 isinf functionality which checks if
+the given argument represents an Infinite number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from float to double.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnormal (...)
+This built-in implements the C99 isnormal functionality which checks if
+the given argument represents a normal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from float to double.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_iszero (...)
+This built-in implements the C99 iszero functionality which checks if
+the given argument represents the number 0.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from float to double.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_issubnormal (...)
+This built-in implements the C99 issubnormal functionality which checks if
+the given argument represents a sub-normal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from float to double.
+@end deftypefn
+
 @deftypefn {Built-in Function} double __builtin_inf (void)
 Similar to @code{__builtin_huge_val}, except a warning is generated
 if the target floating-point format does not support infinities.
diff --git a/gcc/gimple-low.c b/gcc/gimple-low.c
index 64752b67b86b3d01df5f5661e4666df98b7b91d1..d40e6a6ecfdcb9f9f0deda85fc5729bb3e9dc900 100644
--- a/gcc/gimple-low.c
+++ b/gcc/gimple-low.c
@@ -30,6 +30,8 @@ along with GCC; see the file COPYING3.  If not see
 #include "calls.h"
 #include "gimple-iterator.h"
 #include "gimple-low.h"
+#include "stor-layout.h"
+#include "target.h"
 
 /* The differences between High GIMPLE and Low GIMPLE are the
    following:
@@ -72,6 +74,13 @@ static void lower_gimple_bind (gimple_stmt_iterator *, struct lower_data *);
 static void lower_try_catch (gimple_stmt_iterator *, struct lower_data *);
 static void lower_gimple_return (gimple_stmt_iterator *, struct lower_data *);
 static void lower_builtin_setjmp (gimple_stmt_iterator *);
+static void lower_builtin_fpclassify (gimple_stmt_iterator *);
+static void lower_builtin_isnan (gimple_stmt_iterator *);
+static void lower_builtin_isinfinite (gimple_stmt_iterator *);
+static void lower_builtin_isnormal (gimple_stmt_iterator *);
+static void lower_builtin_iszero (gimple_stmt_iterator *);
+static void lower_builtin_issubnormal (gimple_stmt_iterator *);
+static void lower_builtin_isfinite (gimple_stmt_iterator *);
 static void lower_builtin_posix_memalign (gimple_stmt_iterator *);
 
 
@@ -330,19 +339,71 @@ lower_stmt (gimple_stmt_iterator *gsi, struct lower_data *data)
 	if (decl
 	    && DECL_BUILT_IN_CLASS (decl) == BUILT_IN_NORMAL)
 	  {
-	    if (DECL_FUNCTION_CODE (decl) == BUILT_IN_SETJMP)
-	      {
-		lower_builtin_setjmp (gsi);
-		data->cannot_fallthru = false;
-		return;
-	      }
-	    else if (DECL_FUNCTION_CODE (decl) == BUILT_IN_POSIX_MEMALIGN
-		     && flag_tree_bit_ccp
-		     && gimple_builtin_call_types_compatible_p (stmt, decl))
-	      {
-		lower_builtin_posix_memalign (gsi);
-		return;
-	      }
+	    switch (DECL_FUNCTION_CODE (decl))
+	    {
+		    case BUILT_IN_SETJMP:
+			lower_builtin_setjmp (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    case BUILT_IN_POSIX_MEMALIGN:
+			if (flag_tree_bit_ccp
+			    && gimple_builtin_call_types_compatible_p (stmt,
+								       decl))
+			{
+				lower_builtin_posix_memalign (gsi);
+				return;
+			}
+			break;
+
+		    case BUILT_IN_FPCLASSIFY:
+			lower_builtin_fpclassify (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    CASE_FLT_FN (BUILT_IN_ISINF):
+		    case BUILT_IN_ISINFD32:
+		    case BUILT_IN_ISINFD64:
+		    case BUILT_IN_ISINFD128:
+			lower_builtin_isinfinite (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    case BUILT_IN_ISNAND32:
+		    case BUILT_IN_ISNAND64:
+		    case BUILT_IN_ISNAND128:
+		    CASE_FLT_FN (BUILT_IN_ISNAN):
+			lower_builtin_isnan (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    case BUILT_IN_ISNORMAL:
+			lower_builtin_isnormal (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    case BUILT_IN_ISZERO:
+			lower_builtin_iszero (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    case BUILT_IN_ISSUBNORMAL:
+			lower_builtin_issubnormal (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    CASE_FLT_FN (BUILT_IN_FINITE):
+		    case BUILT_IN_FINITED32:
+		    case BUILT_IN_FINITED64:
+		    case BUILT_IN_FINITED128:
+		    case BUILT_IN_ISFINITE:
+			lower_builtin_isfinite (gsi);
+			data->cannot_fallthru = false;
+			return;
+
+		    default:
+			break;
+	    }
 	  }
 
 	if (decl && (flags_from_decl_or_type (decl) & ECF_NORETURN))
@@ -822,6 +883,736 @@ lower_builtin_setjmp (gimple_stmt_iterator *gsi)
   gsi_remove (gsi, false);
 }
 
+static tree
+emit_tree_and_return_var (gimple_seq *seq, tree arg)
+{
+  if (TREE_CODE (arg) == SSA_NAME || VAR_P (arg))
+    return arg;
+
+  tree tmp = create_tmp_reg (TREE_TYPE(arg));
+  gassign *stm = gimple_build_assign(tmp, arg);
+  gimple_seq_add_stmt (seq, stm);
+  return tmp;
+}
+
+/* This function builds an if statement that ends up using explicit branches
+   instead of becoming a csel.  This function assumes you will fall through to
+   the next statements after this condition for the false branch.  */
+static void
+emit_tree_cond (gimple_seq *seq, tree result_variable, tree exit_label,
+		tree cond, tree true_branch)
+{
+    /* Create labels for fall through.  */
+  tree true_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree false_label = create_artificial_label (UNKNOWN_LOCATION);
+  gcond *stmt = gimple_build_cond_from_tree (cond, true_label, false_label);
+  gimple_seq_add_stmt (seq, stmt);
+
+  /* Build the true case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (true_label));
+  tree value = TREE_CONSTANT (true_branch)
+	     ? true_branch
+	     : emit_tree_and_return_var (seq, true_branch);
+  gimple_seq_add_stmt (seq, gimple_build_assign (result_variable, value));
+  gimple_seq_add_stmt (seq, gimple_build_goto (exit_label));
+
+  /* Build the false case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (false_label));
+}
+
+static tree
+get_num_as_int (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  /* Re-interpret the float as an unsigned integer type
+     with equal precision.  */
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+  tree conv_arg = fold_build1_loc (loc, VIEW_CONVERT_EXPR, int_arg_type, arg);
+  return emit_tree_and_return_var(seq, conv_arg);
+}
+
+  /* Check if the number that is being classified is close enough to IEEE 754
+     format to be able to go in the early exit code.  */
+static bool
+use_ieee_int_mode (tree arg)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  return (format->is_binary_ieee_compatible
+	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
+	  /* We explicitly disable quad float support on 32 bit systems.  */
+	  && !(UNITS_PER_WORD == 4 && type_width == 128)
+	  && targetm.scalar_mode_supported_p (mode));
+}
+
+  /* perform standard IBM extended format fixups for FP functions.  */
+static bool
+perform_ibm_extended_fixups (tree *arg, machine_mode *mode,
+			     tree *type, location_t loc)
+{
+  bool is_ibm_extended = MODE_COMPOSITE_P (*mode);
+  if (is_ibm_extended)
+    {
+      /* NaN and Inf are encoded in the high-order double value
+	 only.  The low-order value is not significant.  */
+      *type = double_type_node;
+      *mode = DFmode;
+      *arg = fold_build1_loc (loc, NOP_EXPR, *type, *arg);
+    }
+
+  return is_ibm_extended;
+}
+
+static tree
+is_normal (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const tree bool_type = boolean_type_node;
+
+  /* Perform IBM extended format fixups if required.  */
+  bool is_ibm_extended = perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree orig_arg = arg;
+    if (TREE_CODE (arg) != SSA_NAME
+	&& (TREE_ADDRESSABLE (arg) != 0
+	  || (TREE_CODE (arg) != PARM_DECL
+	      && (!VAR_P (arg) || TREE_STATIC (arg)))))
+      orig_arg = save_expr (arg);
+
+    REAL_VALUE_TYPE rinf, rmin;
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    char buf[128];
+    real_inf(&rinf);
+    get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+    real_from_string (&rmin, buf);
+
+    tree inf_exp = fold_build2_loc (loc, LT_EXPR, bool_type, arg_p,
+				    build_real (type, rinf));
+    tree min_exp = build_real (type, rmin);
+    if (is_ibm_extended)
+      {
+	/* Testing the high end of the range is done just using
+	   the high double, using the same test as isfinite().
+	   For the subnormal end of the range we first test the
+	   high double, then if its magnitude is equal to the
+	   limit of 0x1p-969, we test whether the low double is
+	   non-zero and opposite sign to the high double.  */
+	tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
+	tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
+	tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
+	tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
+			       arg, min_exp);
+	tree as_complex = build1 (VIEW_CONVERT_EXPR,
+			      complex_double_type_node, orig_arg);
+	tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
+	tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
+	tree zero = build_real (type, dconst0);
+	tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
+	tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
+	tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
+	tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
+			      fold_build3 (COND_EXPR,
+					   integer_type_node,
+					   hilt, logt, lolt));
+	eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
+			  eq_min, ok_lo);
+	min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
+			   gt_min, eq_min);
+      }
+      else
+      {
+	min_exp = fold_build2_loc (loc, GE_EXPR, bool_type, arg_p,
+				   min_exp);
+      }
+
+    tree res
+      = fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, min_exp),
+			 emit_tree_and_return_var (seq, inf_exp));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const tree int_type = unsigned_type_node;
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  const int exp_mask  = (1 << exp_bits) - 1;
+
+  /* Get the number reinterpreted as an integer.  */
+  tree int_arg = get_num_as_int (seq, arg, loc);
+
+  /* Extract exp bits from the float, where we expect the exponent to be.
+     We create a new type because BIT_FIELD_REF does not allow you to
+     extract less bits than the precision of the storage variable.  */
+  tree exp_tmp
+    = fold_build3_loc (loc, BIT_FIELD_REF,
+		       build_nonstandard_integer_type (exp_bits, true),
+		       int_arg,
+		       build_int_cstu (int_type, exp_bits),
+		       build_int_cstu (int_type, format->p - 1));
+  tree exp_bitfield = emit_tree_and_return_var (seq, exp_tmp);
+
+  /* Re-interpret the extracted exponent bits as a 32 bit int.
+     This allows us to continue doing operations as int_type.  */
+  tree exp
+    = emit_tree_and_return_var(seq,fold_build1_loc (loc, NOP_EXPR, int_type,
+						    exp_bitfield));
+
+  /* exp_mask & ~1.  */
+  tree mask_check
+     = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+			build_int_cstu (int_type, exp_mask),
+			fold_build1_loc (loc, BIT_NOT_EXPR, int_type,
+					 build_int_cstu (int_type, 1)));
+
+  /* (exp + 1) & mask_check.
+     Check to see if exp is not all 0 or all 1.  */
+  tree exp_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+		       emit_tree_and_return_var (seq,
+				fold_build2_loc (loc, PLUS_EXPR, int_type, exp,
+						 build_int_cstu (int_type, 1))),
+		       mask_check);
+
+  tree res = fold_build2_loc (loc, NE_EXPR, boolean_type_node,
+			      build_int_cstu (int_type, 0),
+			      emit_tree_and_return_var(seq, exp_check));
+
+  return emit_tree_and_return_var (seq, res);
+}
+
+static tree
+is_zero (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+  machine_mode mode = TYPE_MODE (type);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree res = fold_build2_loc (loc, EQ_EXPR, boolean_type_node, arg,
+				build_real (type, dconst0));
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* num << 1 == 0.
+     This checks to see if the number is zero.  */
+  tree zero_check
+    = fold_build2_loc (loc, EQ_EXPR, boolean_type_node,
+		       build_int_cstu (int_arg_type, 0),
+		       emit_tree_and_return_var (seq, int_arg));
+
+  return emit_tree_and_return_var (seq, zero_check);
+}
+
+static tree
+is_subnormal (gimple_seq *seq, tree arg, location_t loc)
+{
+  const tree bool_type = boolean_type_node;
+
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    REAL_VALUE_TYPE r;
+    char buf[128];
+    get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+    real_from_string (&r, buf);
+    tree subnorm = fold_build2_loc (loc, LT_EXPR, bool_type,
+				    arg_p, build_real (type, r));
+
+    tree zero = fold_build2_loc (loc, GT_EXPR, bool_type, arg_p,
+				 build_real (type, dconst0));
+
+    tree res
+      = fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, subnorm),
+			 emit_tree_and_return_var (seq, zero));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check for a zero exponent and non-zero mantissa.
+     This can be done with two comparisons by first apply a
+     removing the sign bit and checking if the value is larger
+     than the mantissa mask.  */
+
+  /* This creates a mask to be used to check the mantissa value in the shifted
+     integer representation of the fpnum.  */
+  tree significant_bit = build_int_cstu (int_arg_type, format->p - 1);
+  tree mantissa_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					significant_bit),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check if exponent is zero and mantissa is not.  */
+  tree subnorm_cond
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, LE_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 mantissa_mask));
+
+  tree zero_cond
+    = fold_build2_loc (loc, GT_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, int_arg),
+		       build_int_cstu (int_arg_type, 0));
+
+  tree subnorm_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, subnorm_cond),
+		       emit_tree_and_return_var (seq, zero_cond));
+
+  return emit_tree_and_return_var (seq, subnorm_check);
+}
+
+static tree
+is_infinity (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_INFINITIES (mode))
+  {
+    return build_int_cst (bool_type, false);
+  }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    REAL_VALUE_TYPE r;
+    real_inf (&r);
+    tree res = fold_build2_loc (loc, EQ_EXPR, bool_type, arg_p,
+				build_real (type, r));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0.  */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, EQ_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+static tree
+is_finite (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (arg) && !HONOR_INFINITIES (arg))
+  {
+    return build_int_cst (bool_type, true);
+  }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    REAL_VALUE_TYPE rmax;
+    char buf[128];
+    get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+    real_from_string (&rmax, buf);
+
+    tree res = fold_build2_loc (loc, LE_EXPR, bool_type, arg_p,
+				build_real (type, rmax));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0. */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, LT_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Determines if the given number is a NaN value.
+   This function is the last in the chain and only has to
+   check if it's preconditions are true.  */
+static tree
+is_nan (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (mode))
+  {
+    return build_int_cst (bool_type, false);
+  }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    tree eq_check
+      = fold_build2_loc (loc, ORDERED_EXPR, bool_type,arg_p, arg_p);
+
+    tree res
+      = fold_build1_loc (loc, BIT_NOT_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, eq_check));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is not 0.  */
+  tree nan_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, GT_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, nan_check);
+}
+
+/* Validate a single argument ARG against a tree code CODE representing
+   a type.  */
+static bool
+gimple_validate_arg (gimple* call, int index, enum tree_code code)
+{
+  const tree arg = gimple_call_arg(call, index);
+  if (!arg)
+    return false;
+  else if (code == POINTER_TYPE)
+    return POINTER_TYPE_P (TREE_TYPE (arg));
+  else if (code == INTEGER_TYPE)
+    return INTEGRAL_TYPE_P (TREE_TYPE (arg));
+  return code == TREE_CODE (TREE_TYPE (arg));
+}
+
+/* Lowers calls to __builtin_fpclassify to
+   fpclassify (x) ->
+     isnormal(x) ? FP_NORMAL :
+       iszero (x) ? FP_ZERO :
+	 isnan (x) ? FP_NAN :
+	   isinfinite (x) ? FP_INFINITE :
+	     FP_SUBNORMAL.
+
+   The code may use integer arithmentic if it decides
+   that the produced assembly would be faster. This can only be done
+   for numbers that are similar to IEEE-754 in format.
+
+   This builtin will generate code to return the appropriate floating
+   point classification depending on the value of the floating point
+   number passed in.  The possible return values must be supplied as
+   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
+   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
+   one floating point argument which is "type generic".
+*/
+static void
+lower_builtin_fpclassify (gimple_stmt_iterator *gsi)
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 6
+      || !gimple_validate_arg (call, 0, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 1, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 2, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 3, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 4, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 5, REAL_TYPE))
+    return;
+
+  /* Collect the arguments from the call.  */
+  tree fp_nan = gimple_call_arg (call, 0);
+  tree fp_infinite = gimple_call_arg (call, 1);
+  tree fp_normal = gimple_call_arg (call, 2);
+  tree fp_subnormal = gimple_call_arg (call, 3);
+  tree fp_zero = gimple_call_arg (call, 4);
+  tree arg = gimple_call_arg (call, 5);
+
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (TREE_TYPE (orig_dest));
+
+  emit_tree_cond (&body, dest, done_label,
+		  is_normal(&body, arg, loc), fp_normal);
+  emit_tree_cond (&body, dest, done_label,
+		  is_zero(&body, arg, loc), fp_zero);
+  emit_tree_cond (&body, dest, done_label,
+		  is_nan(&body, arg, loc), fp_nan);
+  emit_tree_cond (&body, dest, done_label,
+		  is_infinity(&body, arg, loc), fp_infinite);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to is_subnormal.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, fp_subnormal));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+  {
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+  }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+
+  /* Remove the call to __builtin_fpclassify.  */
+  gsi_remove (gsi, false);
+}
+
+static void
+gen_call_fp_builtin (gimple_stmt_iterator *gsi,
+		     tree (*fndecl)(gimple_seq *, tree, location_t))
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 1
+      || !gimple_validate_arg (call, 0, REAL_TYPE))
+    return;
+
+  tree arg = gimple_call_arg (call, 0);
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  tree type = TREE_TYPE (orig_dest);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (type);
+
+  tree t_true = build_int_cst (type, true);
+  tree t_false = build_int_cst (type, false);
+
+  emit_tree_cond (&body, dest, done_label,
+		  fndecl(&body, arg, loc), t_true);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to false.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, t_false));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+  {
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+  }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+  /* Remove the call to the builtin.  */
+  gsi_remove (gsi, false);
+}
+
+static void
+lower_builtin_isnan (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_nan);
+}
+
+static void
+lower_builtin_isinfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_infinity);
+}
+
+static void
+lower_builtin_isnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_normal);
+}
+
+static void
+lower_builtin_iszero (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_zero);
+}
+
+static void
+lower_builtin_issubnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_subnormal);
+}
+
+static void
+lower_builtin_isfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_finite);
+}
+
 /* Lower calls to posix_memalign to
      res = posix_memalign (ptr, align, size);
      if (res == 0)
diff --git a/gcc/real.h b/gcc/real.h
index 59af580e78f2637be84f71b98b45ec6611053222..4b1b92138e07f43a175a2cbee4d952afad5898f7 100644
--- a/gcc/real.h
+++ b/gcc/real.h
@@ -161,6 +161,19 @@ struct real_format
   bool has_signed_zero;
   bool qnan_msb_set;
   bool canonical_nan_lsbs_set;
+
+  /* This flag indicates whether the format is suitable for the optimized
+     code paths for the __builtin_fpclassify function and friends.  For
+     this, the format must be a base 2 representation with the sign bit as
+     the most-significant bit followed by (exp <= 32) exponent bits
+     followed by the mantissa bits.  It must be possible to interpret the
+     bits of the floating-point representation as an integer.  NaNs and
+     INFs (if available) must be represented by the same schema used by
+     IEEE 754.  (NaNs must be represented by an exponent with all bits 1,
+     any mantissa except all bits 0 and any sign bit.  +INF and -INF must be
+     represented by an exponent with all bits 1, a mantissa with all bits 0 and
+     a sign bit of 0 and 1 respectively.)  */
+  bool is_binary_ieee_compatible;
   const char *name;
 };
 
@@ -511,6 +524,11 @@ extern bool real_isinteger (const REAL_VALUE_TYPE *, HOST_WIDE_INT *);
    float string.  BUF must be large enough to contain the result.  */
 extern void get_max_float (const struct real_format *, char *, size_t);
 
+/* Write into BUF the smallest positive normalized number x,
+   such that b**(x-1) is normalized.  BUF must be large enough
+   to contain the result.  */
+extern void get_min_float (const struct real_format *, char *, size_t);
+
 #ifndef GENERATOR_FILE
 /* real related routines.  */
 extern wide_int real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
diff --git a/gcc/real.c b/gcc/real.c
index 66e88e2ad366f7848609d157074c80420d778bcf..20c907a6d543c73ba62aa9a8ddf6973d82de7832 100644
--- a/gcc/real.c
+++ b/gcc/real.c
@@ -3052,6 +3052,7 @@ const struct real_format ieee_single_format =
     true,
     true,
     false,
+    true,
     "ieee_single"
   };
 
@@ -3075,6 +3076,7 @@ const struct real_format mips_single_format =
     true,
     false,
     true,
+    true,
     "mips_single"
   };
 
@@ -3098,6 +3100,7 @@ const struct real_format motorola_single_format =
     true,
     true,
     true,
+    true,
     "motorola_single"
   };
 
@@ -3132,6 +3135,7 @@ const struct real_format spu_single_format =
     true,
     false,
     false,
+    false,
     "spu_single"
   };
 \f
@@ -3343,6 +3347,7 @@ const struct real_format ieee_double_format =
     true,
     true,
     false,
+    true,
     "ieee_double"
   };
 
@@ -3366,6 +3371,7 @@ const struct real_format mips_double_format =
     true,
     false,
     true,
+    true,
     "mips_double"
   };
 
@@ -3389,6 +3395,7 @@ const struct real_format motorola_double_format =
     true,
     true,
     true,
+    true,
     "motorola_double"
   };
 \f
@@ -3735,6 +3742,7 @@ const struct real_format ieee_extended_motorola_format =
     true,
     true,
     true,
+    false,
     "ieee_extended_motorola"
   };
 
@@ -3758,6 +3766,7 @@ const struct real_format ieee_extended_intel_96_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96"
   };
 
@@ -3781,6 +3790,7 @@ const struct real_format ieee_extended_intel_128_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_128"
   };
 
@@ -3806,6 +3816,7 @@ const struct real_format ieee_extended_intel_96_round_53_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96_round_53"
   };
 \f
@@ -3896,6 +3907,7 @@ const struct real_format ibm_extended_format =
     true,
     true,
     false,
+    false,
     "ibm_extended"
   };
 
@@ -3919,6 +3931,7 @@ const struct real_format mips_extended_format =
     true,
     false,
     true,
+    false,
     "mips_extended"
   };
 
@@ -4184,6 +4197,7 @@ const struct real_format ieee_quad_format =
     true,
     true,
     false,
+    true,
     "ieee_quad"
   };
 
@@ -4207,6 +4221,7 @@ const struct real_format mips_quad_format =
     true,
     false,
     true,
+    true,
     "mips_quad"
   };
 \f
@@ -4509,6 +4524,7 @@ const struct real_format vax_f_format =
     false,
     false,
     false,
+    false,
     "vax_f"
   };
 
@@ -4532,6 +4548,7 @@ const struct real_format vax_d_format =
     false,
     false,
     false,
+    false,
     "vax_d"
   };
 
@@ -4555,6 +4572,7 @@ const struct real_format vax_g_format =
     false,
     false,
     false,
+    false,
     "vax_g"
   };
 \f
@@ -4633,6 +4651,7 @@ const struct real_format decimal_single_format =
     true,
     true,
     false,
+    false,
     "decimal_single"
   };
 
@@ -4657,6 +4676,7 @@ const struct real_format decimal_double_format =
     true,
     true,
     false,
+    false,
     "decimal_double"
   };
 
@@ -4681,6 +4701,7 @@ const struct real_format decimal_quad_format =
     true,
     true,
     false,
+    false,
     "decimal_quad"
   };
 \f
@@ -4820,6 +4841,7 @@ const struct real_format ieee_half_format =
     true,
     true,
     false,
+    true,
     "ieee_half"
   };
 
@@ -4846,6 +4868,7 @@ const struct real_format arm_half_format =
     true,
     false,
     false,
+    false,
     "arm_half"
   };
 \f
@@ -4893,6 +4916,7 @@ const struct real_format real_internal_format =
     true,
     true,
     false,
+    false,
     "real_internal"
   };
 \f
@@ -5080,6 +5104,16 @@ get_max_float (const struct real_format *fmt, char *buf, size_t len)
   gcc_assert (strlen (buf) < len);
 }
 
+/* Write into BUF the minimum negative representable finite floating-point
+   number, x, such that b**(x-1) is normalized.
+   BUF must be large enough to contain the result.  */
+void
+get_min_float (const struct real_format *fmt, char *buf, size_t len)
+{
+  sprintf (buf, "0x1p%d", fmt->emin - 1);
+  gcc_assert (strlen (buf) < len);
+}
+
 /* True if mode M has a NaN representation and
    the treatment of NaN operands is important.  */
 
diff --git a/gcc/testsuite/gcc.dg/builtins-43.c b/gcc/testsuite/gcc.dg/builtins-43.c
index f7c318edf084104b9b820e18e631ed61e760569e..5d41c28aef8619f06658f45846ae15dd8b4987ed 100644
--- a/gcc/testsuite/gcc.dg/builtins-43.c
+++ b/gcc/testsuite/gcc.dg/builtins-43.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-gimple -fdump-tree-optimized" } */
+/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-lower -fdump-tree-optimized" } */
   
 extern void f(int);
 extern void link_error ();
@@ -51,7 +51,7 @@ main ()
 
 
 /* Check that all instances of __builtin_isnan were folded.  */
-/* { dg-final { scan-tree-dump-times "isnan" 0 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "isnan" 0 "lower" } } */
 
 /* Check that all instances of link_error were subject to DCE.  */
 /* { dg-final { scan-tree-dump-times "link_error" 0 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/fold-notunord.c b/gcc/testsuite/gcc.dg/fold-notunord.c
deleted file mode 100644
index ca345154ac204cb5f380855828421b7f88d49052..0000000000000000000000000000000000000000
--- a/gcc/testsuite/gcc.dg/fold-notunord.c
+++ /dev/null
@@ -1,9 +0,0 @@
-/* { dg-do compile } */
-/* { dg-options "-O -ftrapping-math -fdump-tree-optimized" } */
-
-int f (double d)
-{
-  return !__builtin_isnan (d);
-}
-
-/* { dg-final { scan-tree-dump " ord " "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/pr28796-1.c b/gcc/testsuite/gcc.dg/pr28796-1.c
index 077118a298878441e812410f3a6bf3707fb1d839..a57b4e350af1bc45344106fdeab4b32ef87f233f 100644
--- a/gcc/testsuite/gcc.dg/pr28796-1.c
+++ b/gcc/testsuite/gcc.dg/pr28796-1.c
@@ -1,5 +1,5 @@
 /* { dg-do link } */
-/* { dg-options "-ffinite-math-only" } */
+/* { dg-options "-ffinite-math-only -O2" } */
 
 extern void link_error(void);
 
diff --git a/gcc/testsuite/gcc.dg/torture/float128-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..ec9d3ad41e24280978707888590eec1b562207f0
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128_runtime } */
+
+#define WIDTH 128
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0ede861716750453a86c9abc703ad0b2826674c6
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128x_runtime } */
+
+#define WIDTH 128
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float16-tg-4.c b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..007c4c224ea95537c31185d0aff964d1975f2190
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float16 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float16 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float16_runtime } */
+
+#define WIDTH 16
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..c7f8353da2cffdfc2c2f58f5da3d5363b95e6f91
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32 type-generic built-in functions: __builtin_f__builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32_runtime } */
+
+#define WIDTH 32
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0d7a592920aca112d5f6409e565d4582c253c977
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32x_runtime } */
+
+#define WIDTH 32
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..bb25a22a68e60ce2717ab3583bbec595dd563c35
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64_runtime } */
+
+#define WIDTH 64
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..82305d916b8bd75131e2c647fd37f74cadbc8f1d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64x_runtime } */
+
+#define WIDTH 64
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
new file mode 100644
index 0000000000000000000000000000000000000000..aa3448c090cf797a1525b1045ffebeed79cace40
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
@@ -0,0 +1,99 @@
+/* Tests for _FloatN / _FloatNx types: compile and execution tests for
+   type-generic built-in functions: __builtin_iszero, __builtin_issubnormal.
+   Before including this file, define WIDTH as the value N; define EXT to 1
+   for _FloatNx and 0 for _FloatN.  */
+
+#define __STDC_WANT_IEC_60559_TYPES_EXT__
+#include <float.h>
+
+#define CONCATX(X, Y) X ## Y
+#define CONCAT(X, Y) CONCATX (X, Y)
+#define CONCAT3(X, Y, Z) CONCAT (CONCAT (X, Y), Z)
+#define CONCAT4(W, X, Y, Z) CONCAT (CONCAT (CONCAT (W, X), Y), Z)
+
+#if EXT
+# define TYPE CONCAT3 (_Float, WIDTH, x)
+# define CST(C) CONCAT4 (C, f, WIDTH, x)
+# define MAX CONCAT3 (FLT, WIDTH, X_MAX)
+# define MIN CONCAT3 (FLT, WIDTH, X_MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, X_TRUE_MIN)
+#else
+# define TYPE CONCAT (_Float, WIDTH)
+# define CST(C) CONCAT3 (C, f, WIDTH)
+# define MAX CONCAT3 (FLT, WIDTH, _MAX)
+# define MIN CONCAT3 (FLT, WIDTH, _MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, _TRUE_MIN)
+#endif
+
+extern void exit (int);
+extern void abort (void);
+
+volatile TYPE inf = __builtin_inf (), nanval = __builtin_nan ("");
+volatile TYPE neginf = -__builtin_inf (), negnanval = -__builtin_nan ("");
+volatile TYPE zero = CST (0.0), negzero = -CST (0.0), one = CST (1.0);
+volatile TYPE max = MAX, negmax = -MAX, min = MIN, negmin = -MIN;
+volatile TYPE true_min = TRUE_MIN, negtrue_min = -TRUE_MIN;
+volatile TYPE sub_norm = MIN / 2.0;
+
+int
+main (void)
+{
+  if (__builtin_iszero (inf) == 1)
+    abort ();
+  if (__builtin_iszero (nanval) == 1)
+    abort ();
+  if (__builtin_iszero (neginf) == 1)
+    abort ();
+  if (__builtin_iszero (negnanval) == 1)
+    abort ();
+  if (__builtin_iszero (zero) != 1)
+    abort ();
+  if (__builtin_iszero (negzero) != 1)
+    abort ();
+  if (__builtin_iszero (one) == 1)
+    abort ();
+  if (__builtin_iszero (max) == 1)
+    abort ();
+  if (__builtin_iszero (negmax) == 1)
+    abort ();
+  if (__builtin_iszero (min) == 1)
+    abort ();
+  if (__builtin_iszero (negmin) == 1)
+    abort ();
+  if (__builtin_iszero (true_min) == 1)
+    abort ();
+  if (__builtin_iszero (negtrue_min) == 1)
+    abort ();
+  if (__builtin_iszero (sub_norm) == 1)
+    abort ();
+
+  if (__builtin_issubnormal (inf) == 1)
+    abort ();
+  if (__builtin_issubnormal (nanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (neginf) == 1)
+    abort ();
+  if (__builtin_issubnormal (negnanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (zero) == 1)
+    abort ();
+  if (__builtin_issubnormal (negzero) == 1)
+    abort ();
+  if (__builtin_issubnormal (one) == 1)
+    abort ();
+  if (__builtin_issubnormal (max) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmax) == 1)
+    abort ();
+  if (__builtin_issubnormal (min) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmin) == 1)
+    abort ();
+  if (__builtin_issubnormal (true_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (negtrue_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (sub_norm) != 1)
+    abort ();
+  exit (0);
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
new file mode 100644
index 0000000000000000000000000000000000000000..84a73a6483780dac2347e72fa7d139545d2087eb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
@@ -0,0 +1,22 @@
+/* This file checks the code generation for the new __builtin_fpclassify.
+   because checking the exact assembly isn't very useful, we'll just be checking
+   for the presence of certain instructions and the omition of others. */
+/* { dg-options "-O2" } */
+/* { dg-do compile } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fabs\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmp\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmpe\[ \t\]?" } } */
+/* { dg-final { scan-assembler "\[ \t\]?sbfx\[ \t\]?" } } */
+
+#include <stdio.h>
+#include <math.h>
+
+/*
+ fp_nan = args[0];
+ fp_infinite = args[1];
+ fp_normal = args[2];
+ fp_subnormal = args[3];
+ fp_zero = args[4];
+*/
+
+int f(double x) { return __builtin_fpclassify(0, 1, 4, 3, 2, x); }


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-11-24 15:52   ` Tamar Christina
@ 2016-11-24 18:28     ` Joseph Myers
  2016-11-25 12:19       ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Joseph Myers @ 2016-11-24 18:28 UTC (permalink / raw)
  To: Tamar Christina
  Cc: GCC Patches, Wilco Dijkstra, rguenther, law, Michael Meissner, nd

On Thu, 24 Nov 2016, Tamar Christina wrote:

> @@ -11499,6 +11503,53 @@ to classify.  GCC treats the last argument as type-generic, which
>  means it does not do default promotion from float to double.
>  @end deftypefn
>  
> +@deftypefn {Built-in Function} int __builtin_isnan (...)
> +This built-in implements the C99 isnan functionality which checks if
> +the given argument represents a NaN.  The return value of the
> +function will either be a 0 (false) or a 1 (true).
> +On most systems, when an IEEE 754 floating point is used this
> +built-in does not produce a signal when a signaling NaN is used.

"an IEEE 754 floating point" should probably be "an IEEE 754 
floating-point type" or similar.

> +GCC treats the argument as type-generic, which means it does
> +not do default promotion from float to double.

I think type names such as float and double should use @code{} in the 
manual.

> +the given argument represents an Infinite number.  The return

Infinite should not have a capital letter there.

> +@deftypefn {Built-in Function} int __builtin_iszero (...)
> +This built-in implements the C99 iszero functionality which checks if

This isn't C99, it's TS 18661-1:2014.

> +the given argument represents the number 0.  The return

0 or -0.

> +@deftypefn {Built-in Function} int __builtin_issubnormal (...)
> +This built-in implements the C99 issubnormal functionality which checks if

Again, TS 18661-1.

> +the given argument represents a sub-normal number.  The return

Do not hyphenate subnormal.

> +	    switch (DECL_FUNCTION_CODE (decl))
> +	    {
> +		    case BUILT_IN_SETJMP:
> +			lower_builtin_setjmp (gsi);
> +			data->cannot_fallthru = false;
> +			return;

The indentation in this whole block of code (not all quoted) is wrong.

> +    real_inf(&rinf);

Missing space before '('.

> +  emit_tree_cond (&body, dest, done_label,
> +		  is_normal(&body, arg, loc), fp_normal);
> +  emit_tree_cond (&body, dest, done_label,
> +		  is_zero(&body, arg, loc), fp_zero);
> +  emit_tree_cond (&body, dest, done_label,
> +		  is_nan(&body, arg, loc), fp_nan);
> +  emit_tree_cond (&body, dest, done_label,
> +		  is_infinity(&body, arg, loc), fp_infinite);

Likewise.

> +		  fndecl(&body, arg, loc), t_true);

Likewise.

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-11-24 18:28     ` Joseph Myers
@ 2016-11-25 12:19       ` Tamar Christina
  2016-12-02 16:20         ` Tamar Christina
  2016-12-14  8:07         ` Jeff Law
  0 siblings, 2 replies; 47+ messages in thread
From: Tamar Christina @ 2016-11-25 12:19 UTC (permalink / raw)
  To: Joseph Myers
  Cc: GCC Patches, Wilco Dijkstra, rguenther, law, Michael Meissner, nd

[-- Attachment #1: Type: text/plain, Size: 3047 bytes --]

Hi Joseph,

I have updated the patch with the changes,
w.r.t to the formatting, there are tabs there that seem to be rendered
at 4 spaces wide. In my editor setup at 8 spaces they are correct.

Kind Regards,
Tamar
________________________________________
From: Joseph Myers <joseph@codesourcery.com>
Sent: Thursday, November 24, 2016 6:28:18 PM
To: Tamar Christina
Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; law@redhat.com; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On Thu, 24 Nov 2016, Tamar Christina wrote:

> @@ -11499,6 +11503,53 @@ to classify.  GCC treats the last argument as type-generic, which
>  means it does not do default promotion from float to double.
>  @end deftypefn
>
> +@deftypefn {Built-in Function} int __builtin_isnan (...)
> +This built-in implements the C99 isnan functionality which checks if
> +the given argument represents a NaN.  The return value of the
> +function will either be a 0 (false) or a 1 (true).
> +On most systems, when an IEEE 754 floating point is used this
> +built-in does not produce a signal when a signaling NaN is used.

"an IEEE 754 floating point" should probably be "an IEEE 754
floating-point type" or similar.

> +GCC treats the argument as type-generic, which means it does
> +not do default promotion from float to double.

I think type names such as float and double should use @code{} in the
manual.

> +the given argument represents an Infinite number.  The return

Infinite should not have a capital letter there.

> +@deftypefn {Built-in Function} int __builtin_iszero (...)
> +This built-in implements the C99 iszero functionality which checks if

This isn't C99, it's TS 18661-1:2014.

> +the given argument represents the number 0.  The return

0 or -0.

> +@deftypefn {Built-in Function} int __builtin_issubnormal (...)
> +This built-in implements the C99 issubnormal functionality which checks if

Again, TS 18661-1.

> +the given argument represents a sub-normal number.  The return

Do not hyphenate subnormal.

> +         switch (DECL_FUNCTION_CODE (decl))
> +         {
> +                 case BUILT_IN_SETJMP:
> +                     lower_builtin_setjmp (gsi);
> +                     data->cannot_fallthru = false;
> +                     return;

The indentation in this whole block of code (not all quoted) is wrong.

> +    real_inf(&rinf);

Missing space before '('.

> +  emit_tree_cond (&body, dest, done_label,
> +               is_normal(&body, arg, loc), fp_normal);
> +  emit_tree_cond (&body, dest, done_label,
> +               is_zero(&body, arg, loc), fp_zero);
> +  emit_tree_cond (&body, dest, done_label,
> +               is_nan(&body, arg, loc), fp_nan);
> +  emit_tree_cond (&body, dest, done_label,
> +               is_infinity(&body, arg, loc), fp_infinite);

Likewise.

> +               fndecl(&body, arg, loc), t_true);

Likewise.

--
Joseph S. Myers
joseph@codesourcery.com

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: fp-classify-new2.patch --]
[-- Type: text/x-patch; name="fp-classify-new2.patch", Size: 65047 bytes --]

diff --git a/gcc/builtins.c b/gcc/builtins.c
index 3ac2d44148440b124559ba7cd3de483b7a74b72d..2340f60edb31ebf964367699aaf33ac8401dff41 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -160,7 +160,6 @@ static tree fold_builtin_0 (location_t, tree);
 static tree fold_builtin_1 (location_t, tree, tree);
 static tree fold_builtin_2 (location_t, tree, tree, tree);
 static tree fold_builtin_3 (location_t, tree, tree, tree, tree);
-static tree fold_builtin_varargs (location_t, tree, tree*, int);
 
 static tree fold_builtin_strpbrk (location_t, tree, tree, tree);
 static tree fold_builtin_strstr (location_t, tree, tree, tree);
@@ -2202,19 +2201,8 @@ interclass_mathfn_icode (tree arg, tree fndecl)
   switch (DECL_FUNCTION_CODE (fndecl))
     {
     CASE_FLT_FN (BUILT_IN_ILOGB):
-      errno_set = true; builtin_optab = ilogb_optab; break;
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      builtin_optab = isinf_optab; break;
-    case BUILT_IN_ISNORMAL:
-    case BUILT_IN_ISFINITE:
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      /* These builtins have no optabs (yet).  */
+      errno_set = true;
+      builtin_optab = ilogb_optab;
       break;
     default:
       gcc_unreachable ();
@@ -2233,8 +2221,7 @@ interclass_mathfn_icode (tree arg, tree fndecl)
 }
 
 /* Expand a call to one of the builtin math functions that operate on
-   floating point argument and output an integer result (ilogb, isinf,
-   isnan, etc).
+   floating point argument and output an integer result (ilogb, etc).
    Return 0 if a normal call should be emitted rather than expanding the
    function in-line.  EXP is the expression that is a call to the builtin
    function; if convenient, the result should be placed in TARGET.  */
@@ -5997,11 +5984,7 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
     CASE_FLT_FN (BUILT_IN_ILOGB):
       if (! flag_unsafe_math_optimizations)
 	break;
-      gcc_fallthrough ();
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-    case BUILT_IN_ISNORMAL:
+	
       target = expand_builtin_interclass_mathfn (exp, target);
       if (target)
 	return target;
@@ -6281,8 +6264,25 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
 	}
       break;
 
+    CASE_FLT_FN (BUILT_IN_ISINF):
+    case BUILT_IN_ISNAND32:
+    case BUILT_IN_ISNAND64:
+    case BUILT_IN_ISNAND128:
+    case BUILT_IN_ISNAN:
+    case BUILT_IN_ISINFD32:
+    case BUILT_IN_ISINFD64:
+    case BUILT_IN_ISINFD128:
+    case BUILT_IN_ISNORMAL:
+    case BUILT_IN_ISZERO:
+    case BUILT_IN_ISSUBNORMAL:
+    case BUILT_IN_FPCLASSIFY:
     case BUILT_IN_SETJMP:
-      /* This should have been lowered to the builtins below.  */
+    CASE_FLT_FN (BUILT_IN_FINITE):
+    case BUILT_IN_FINITED32:
+    case BUILT_IN_FINITED64:
+    case BUILT_IN_FINITED128:
+    case BUILT_IN_ISFINITE:
+      /* These should have been lowered to the builtins in gimple-low.c.  */
       gcc_unreachable ();
 
     case BUILT_IN_SETJMP_SETUP:
@@ -7622,184 +7622,19 @@ fold_builtin_modf (location_t loc, tree arg0, tree arg1, tree rettype)
   return NULL_TREE;
 }
 
-/* Given a location LOC, an interclass builtin function decl FNDECL
-   and its single argument ARG, return an folded expression computing
-   the same, or NULL_TREE if we either couldn't or didn't want to fold
-   (the latter happen if there's an RTL instruction available).  */
-
-static tree
-fold_builtin_interclass_mathfn (location_t loc, tree fndecl, tree arg)
-{
-  machine_mode mode;
-
-  if (!validate_arg (arg, REAL_TYPE))
-    return NULL_TREE;
-
-  if (interclass_mathfn_icode (arg, fndecl) != CODE_FOR_nothing)
-    return NULL_TREE;
-
-  mode = TYPE_MODE (TREE_TYPE (arg));
 
-  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
 
-  /* If there is no optab, try generic code.  */
-  switch (DECL_FUNCTION_CODE (fndecl))
-    {
-      tree result;
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      {
-	/* isinf(x) -> isgreater(fabs(x),DBL_MAX).  */
-	tree const isgr_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isgr_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	return result;
-      }
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-      {
-	/* isfinite(x) -> islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isle_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	/*result = fold_build2_loc (loc, UNGT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	result = fold_build1_loc (loc, TRUTH_NOT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  result);*/
-	return result;
-      }
-    case BUILT_IN_ISNORMAL:
-      {
-	/* isnormal(x) -> isgreaterequal(fabs(x),DBL_MIN) &
-	   islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	tree orig_arg, max_exp, min_exp;
-	machine_mode orig_mode = mode;
-	REAL_VALUE_TYPE rmax, rmin;
-	char buf[128];
-
-	orig_arg = arg = builtin_save_expr (arg);
-	if (is_ibm_extended)
-	  {
-	    /* Use double to test the normal range of IBM extended
-	       precision.  Emin for IBM extended precision is
-	       different to emin for IEEE double, being 53 higher
-	       since the low double exponent is at least 53 lower
-	       than the high double exponent.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	arg = fold_build1_loc (loc, ABS_EXPR, type, arg);
-
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&rmax, buf);
-	sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (orig_mode)->emin - 1);
-	real_from_string (&rmin, buf);
-	max_exp = build_real (type, rmax);
-	min_exp = build_real (type, rmin);
-
-	max_exp = build_call_expr (isle_fn, 2, arg, max_exp);
-	if (is_ibm_extended)
-	  {
-	    /* Testing the high end of the range is done just using
-	       the high double, using the same test as isfinite().
-	       For the subnormal end of the range we first test the
-	       high double, then if its magnitude is equal to the
-	       limit of 0x1p-969, we test whether the low double is
-	       non-zero and opposite sign to the high double.  */
-	    tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
-	    tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	    tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
-	    tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
-				       arg, min_exp);
-	    tree as_complex = build1 (VIEW_CONVERT_EXPR,
-				      complex_double_type_node, orig_arg);
-	    tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
-	    tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
-	    tree zero = build_real (type, dconst0);
-	    tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
-	    tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
-	    tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
-	    tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
-				      fold_build3 (COND_EXPR,
-						   integer_type_node,
-						   hilt, logt, lolt));
-	    eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
-				  eq_min, ok_lo);
-	    min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
-				   gt_min, eq_min);
-	  }
-	else
-	  {
-	    tree const isge_fn
-	      = builtin_decl_explicit (BUILT_IN_ISGREATEREQUAL);
-	    min_exp = build_call_expr (isge_fn, 2, arg, min_exp);
-	  }
-	result = fold_build2 (BIT_AND_EXPR, integer_type_node,
-			      max_exp, min_exp);
-	return result;
-      }
-    default:
-      break;
-    }
-
-  return NULL_TREE;
-}
-
-/* Fold a call to __builtin_isnan(), __builtin_isinf, __builtin_finite.
+/* Fold a call to __builtin_isinf_sign.
    ARG is the argument for the call.  */
 
 static tree
-fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
+fold_builtin_classify (location_t loc, tree arg, int builtin_index)
 {
-  tree type = TREE_TYPE (TREE_TYPE (fndecl));
-
   if (!validate_arg (arg, REAL_TYPE))
     return NULL_TREE;
 
   switch (builtin_index)
     {
-    case BUILT_IN_ISINF:
-      if (!HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      return NULL_TREE;
-
     case BUILT_IN_ISINF_SIGN:
       {
 	/* isinf_sign(x) -> isinf(x) ? (signbit(x) ? -1 : 1) : 0 */
@@ -7832,106 +7667,11 @@ fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
 	return tmp;
       }
 
-    case BUILT_IN_ISFINITE:
-      if (!HONOR_NANS (arg)
-	  && !HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_one_node, arg);
-
-      return NULL_TREE;
-
-    case BUILT_IN_ISNAN:
-      if (!HONOR_NANS (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      {
-	bool is_ibm_extended = MODE_COMPOSITE_P (TYPE_MODE (TREE_TYPE (arg)));
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    arg = fold_build1_loc (loc, NOP_EXPR, double_type_node, arg);
-	  }
-      }
-      arg = builtin_save_expr (arg);
-      return fold_build2_loc (loc, UNORDERED_EXPR, type, arg, arg);
-
     default:
       gcc_unreachable ();
     }
 }
 
-/* Fold a call to __builtin_fpclassify(int, int, int, int, int, ...).
-   This builtin will generate code to return the appropriate floating
-   point classification depending on the value of the floating point
-   number passed in.  The possible return values must be supplied as
-   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
-   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
-   one floating point argument which is "type generic".  */
-
-static tree
-fold_builtin_fpclassify (location_t loc, tree *args, int nargs)
-{
-  tree fp_nan, fp_infinite, fp_normal, fp_subnormal, fp_zero,
-    arg, type, res, tmp;
-  machine_mode mode;
-  REAL_VALUE_TYPE r;
-  char buf[128];
-
-  /* Verify the required arguments in the original call.  */
-  if (nargs != 6
-      || !validate_arg (args[0], INTEGER_TYPE)
-      || !validate_arg (args[1], INTEGER_TYPE)
-      || !validate_arg (args[2], INTEGER_TYPE)
-      || !validate_arg (args[3], INTEGER_TYPE)
-      || !validate_arg (args[4], INTEGER_TYPE)
-      || !validate_arg (args[5], REAL_TYPE))
-    return NULL_TREE;
-
-  fp_nan = args[0];
-  fp_infinite = args[1];
-  fp_normal = args[2];
-  fp_subnormal = args[3];
-  fp_zero = args[4];
-  arg = args[5];
-  type = TREE_TYPE (arg);
-  mode = TYPE_MODE (type);
-  arg = builtin_save_expr (fold_build1_loc (loc, ABS_EXPR, type, arg));
-
-  /* fpclassify(x) ->
-       isnan(x) ? FP_NAN :
-         (fabs(x) == Inf ? FP_INFINITE :
-	   (fabs(x) >= DBL_MIN ? FP_NORMAL :
-	     (x == 0 ? FP_ZERO : FP_SUBNORMAL))).  */
-
-  tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-		     build_real (type, dconst0));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node,
-		     tmp, fp_zero, fp_subnormal);
-
-  sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (mode)->emin - 1);
-  real_from_string (&r, buf);
-  tmp = fold_build2_loc (loc, GE_EXPR, integer_type_node,
-		     arg, build_real (type, r));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, fp_normal, res);
-
-  if (HONOR_INFINITIES (mode))
-    {
-      real_inf (&r);
-      tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-			 build_real (type, r));
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp,
-			 fp_infinite, res);
-    }
-
-  if (HONOR_NANS (mode))
-    {
-      tmp = fold_build2_loc (loc, ORDERED_EXPR, integer_type_node, arg, arg);
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, res, fp_nan);
-    }
-
-  return res;
-}
-
 /* Fold a call to an unordered comparison function such as
    __builtin_isgreater().  FNDECL is the FUNCTION_DECL for the function
    being called and ARG0 and ARG1 are the arguments for the call.
@@ -8232,40 +7972,8 @@ fold_builtin_1 (location_t loc, tree fndecl, tree arg0)
     case BUILT_IN_ISDIGIT:
       return fold_builtin_isdigit (loc, arg0);
 
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISFINITE:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISFINITE);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    case BUILT_IN_ISNORMAL:
-      return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-
     case BUILT_IN_ISINF_SIGN:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF_SIGN);
-
-    CASE_FLT_FN (BUILT_IN_ISNAN):
-    case BUILT_IN_ISNAND32:
-    case BUILT_IN_ISNAND64:
-    case BUILT_IN_ISNAND128:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISNAN);
+      return fold_builtin_classify (loc, arg0, BUILT_IN_ISINF_SIGN);
 
     case BUILT_IN_FREE:
       if (integer_zerop (arg0))
@@ -8465,7 +8173,6 @@ fold_builtin_n (location_t loc, tree fndecl, tree *args, int nargs, bool)
       ret = fold_builtin_3 (loc, fndecl, args[0], args[1], args[2]);
       break;
     default:
-      ret = fold_builtin_varargs (loc, fndecl, args, nargs);
       break;
     }
   if (ret)
@@ -9422,37 +9129,6 @@ fold_builtin_object_size (tree ptr, tree ost)
   return NULL_TREE;
 }
 
-/* Builtins with folding operations that operate on "..." arguments
-   need special handling; we need to store the arguments in a convenient
-   data structure before attempting any folding.  Fortunately there are
-   only a few builtins that fall into this category.  FNDECL is the
-   function, EXP is the CALL_EXPR for the call.  */
-
-static tree
-fold_builtin_varargs (location_t loc, tree fndecl, tree *args, int nargs)
-{
-  enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl);
-  tree ret = NULL_TREE;
-
-  switch (fcode)
-    {
-    case BUILT_IN_FPCLASSIFY:
-      ret = fold_builtin_fpclassify (loc, args, nargs);
-      break;
-
-    default:
-      break;
-    }
-  if (ret)
-    {
-      ret = build1 (NOP_EXPR, TREE_TYPE (ret), ret);
-      SET_EXPR_LOCATION (ret, loc);
-      TREE_NO_WARNING (ret) = 1;
-      return ret;
-    }
-  return NULL_TREE;
-}
-
 /* Initialize format string characters in the target charset.  */
 
 bool
diff --git a/gcc/builtins.def b/gcc/builtins.def
index 219feebd3aebefbd079bf37cc801453cd1965e00..91aa6f37fa098777bc794bad56d8c561ab9fdc44 100644
--- a/gcc/builtins.def
+++ b/gcc/builtins.def
@@ -831,6 +831,8 @@ DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFL, "isinfl", BT_FN_INT_LONGDOUBLE, ATTR_CO
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD32, "isinfd32", BT_FN_INT_DFLOAT32, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD64, "isinfd64", BT_FN_INT_DFLOAT64, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD128, "isinfd128", BT_FN_INT_DFLOAT128, ATTR_CONST_NOTHROW_LEAF_LIST)
+DEF_GCC_BUILTIN        (BUILT_IN_ISZERO, "iszero", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
+DEF_GCC_BUILTIN        (BUILT_IN_ISSUBNORMAL, "issubnormal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_C99_C90RES_BUILTIN (BUILT_IN_ISNAN, "isnan", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANF, "isnanf", BT_FN_INT_FLOAT, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANL, "isnanl", BT_FN_INT_LONGDOUBLE, ATTR_CONST_NOTHROW_LEAF_LIST)
diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi
index 0669f7999beb078822e471352036d8f13517812d..c240bbe9a8fd595a0e7e2b41fb708efae1e5279a 100644
--- a/gcc/doc/extend.texi
+++ b/gcc/doc/extend.texi
@@ -10433,6 +10433,10 @@ in the Cilk Plus language manual which can be found at
 @findex __builtin_isgreater
 @findex __builtin_isgreaterequal
 @findex __builtin_isinf_sign
+@findex __builtin_isinf
+@findex __builtin_isnan
+@findex __builtin_iszero
+@findex __builtin_issubnormal
 @findex __builtin_isless
 @findex __builtin_islessequal
 @findex __builtin_islessgreater
@@ -11496,7 +11500,54 @@ constant values and they must appear in this order: @code{FP_NAN},
 @code{FP_INFINITE}, @code{FP_NORMAL}, @code{FP_SUBNORMAL} and
 @code{FP_ZERO}.  The ellipsis is for exactly one floating-point value
 to classify.  GCC treats the last argument as type-generic, which
-means it does not do default promotion from float to double.
+means it does not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnan (...)
+This built-in implements the C99 isnan functionality which checks if
+the given argument represents a NaN.  The return value of the
+function will either be a 0 (false) or a 1 (true).
+On most systems, when an IEEE 754 floating-point type is used this
+built-in does not produce a signal when a signaling NaN is used.
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isinf (...)
+This built-in implements the C99 isinf functionality which checks if
+the given argument represents an infinite number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnormal (...)
+This built-in implements the C99 isnormal functionality which checks if
+the given argument represents a normal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_iszero (...)
+This built-in implements the TS 18661-1:2014 iszero functionality which checks if
+the given argument represents the number 0 or -0.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_issubnormal (...)
+This built-in implements the TS 18661-1:2014 issubnormal functionality which checks if
+the given argument represents a subnormal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
 @end deftypefn
 
 @deftypefn {Built-in Function} double __builtin_inf (void)
diff --git a/gcc/gimple-low.c b/gcc/gimple-low.c
index 64752b67b86b3d01df5f5661e4666df98b7b91d1..6ec9179193b07ac5426afa1fea5308b0bab1c069 100644
--- a/gcc/gimple-low.c
+++ b/gcc/gimple-low.c
@@ -30,6 +30,8 @@ along with GCC; see the file COPYING3.  If not see
 #include "calls.h"
 #include "gimple-iterator.h"
 #include "gimple-low.h"
+#include "stor-layout.h"
+#include "target.h"
 
 /* The differences between High GIMPLE and Low GIMPLE are the
    following:
@@ -72,6 +74,13 @@ static void lower_gimple_bind (gimple_stmt_iterator *, struct lower_data *);
 static void lower_try_catch (gimple_stmt_iterator *, struct lower_data *);
 static void lower_gimple_return (gimple_stmt_iterator *, struct lower_data *);
 static void lower_builtin_setjmp (gimple_stmt_iterator *);
+static void lower_builtin_fpclassify (gimple_stmt_iterator *);
+static void lower_builtin_isnan (gimple_stmt_iterator *);
+static void lower_builtin_isinfinite (gimple_stmt_iterator *);
+static void lower_builtin_isnormal (gimple_stmt_iterator *);
+static void lower_builtin_iszero (gimple_stmt_iterator *);
+static void lower_builtin_issubnormal (gimple_stmt_iterator *);
+static void lower_builtin_isfinite (gimple_stmt_iterator *);
 static void lower_builtin_posix_memalign (gimple_stmt_iterator *);
 
 
@@ -330,18 +339,69 @@ lower_stmt (gimple_stmt_iterator *gsi, struct lower_data *data)
 	if (decl
 	    && DECL_BUILT_IN_CLASS (decl) == BUILT_IN_NORMAL)
 	  {
-	    if (DECL_FUNCTION_CODE (decl) == BUILT_IN_SETJMP)
+	    switch (DECL_FUNCTION_CODE (decl))
 	      {
+	      case BUILT_IN_SETJMP:
 		lower_builtin_setjmp (gsi);
 		data->cannot_fallthru = false;
 		return;
-	      }
-	    else if (DECL_FUNCTION_CODE (decl) == BUILT_IN_POSIX_MEMALIGN
-		     && flag_tree_bit_ccp
-		     && gimple_builtin_call_types_compatible_p (stmt, decl))
-	      {
-		lower_builtin_posix_memalign (gsi);
+
+	      case BUILT_IN_POSIX_MEMALIGN:
+		if (flag_tree_bit_ccp
+		    && gimple_builtin_call_types_compatible_p (stmt, decl))
+		  {
+			lower_builtin_posix_memalign (gsi);
+			return;
+		  }
+		break;
+
+	      case BUILT_IN_FPCLASSIFY:
+		lower_builtin_fpclassify (gsi);
+		data->cannot_fallthru = false;
 		return;
+
+	      CASE_FLT_FN (BUILT_IN_ISINF):
+	      case BUILT_IN_ISINFD32:
+	      case BUILT_IN_ISINFD64:
+	      case BUILT_IN_ISINFD128:
+		lower_builtin_isinfinite (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNAND32:
+	      case BUILT_IN_ISNAND64:
+	      case BUILT_IN_ISNAND128:
+	      CASE_FLT_FN (BUILT_IN_ISNAN):
+		lower_builtin_isnan (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNORMAL:
+		lower_builtin_isnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISZERO:
+		lower_builtin_iszero (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISSUBNORMAL:
+		lower_builtin_issubnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_FINITE):
+	      case BUILT_IN_FINITED32:
+	      case BUILT_IN_FINITED64:
+	      case BUILT_IN_FINITED128:
+	      case BUILT_IN_ISFINITE:
+		lower_builtin_isfinite (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      default:
+		break;
 	      }
 	  }
 
@@ -822,6 +882,736 @@ lower_builtin_setjmp (gimple_stmt_iterator *gsi)
   gsi_remove (gsi, false);
 }
 
+static tree
+emit_tree_and_return_var (gimple_seq *seq, tree arg)
+{
+  if (TREE_CODE (arg) == SSA_NAME || VAR_P (arg))
+    return arg;
+
+  tree tmp = create_tmp_reg (TREE_TYPE(arg));
+  gassign *stm = gimple_build_assign(tmp, arg);
+  gimple_seq_add_stmt (seq, stm);
+  return tmp;
+}
+
+/* This function builds an if statement that ends up using explicit branches
+   instead of becoming a csel.  This function assumes you will fall through to
+   the next statements after this condition for the false branch.  */
+static void
+emit_tree_cond (gimple_seq *seq, tree result_variable, tree exit_label,
+		tree cond, tree true_branch)
+{
+    /* Create labels for fall through.  */
+  tree true_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree false_label = create_artificial_label (UNKNOWN_LOCATION);
+  gcond *stmt = gimple_build_cond_from_tree (cond, true_label, false_label);
+  gimple_seq_add_stmt (seq, stmt);
+
+  /* Build the true case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (true_label));
+  tree value = TREE_CONSTANT (true_branch)
+	     ? true_branch
+	     : emit_tree_and_return_var (seq, true_branch);
+  gimple_seq_add_stmt (seq, gimple_build_assign (result_variable, value));
+  gimple_seq_add_stmt (seq, gimple_build_goto (exit_label));
+
+  /* Build the false case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (false_label));
+}
+
+static tree
+get_num_as_int (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  /* Re-interpret the float as an unsigned integer type
+     with equal precision.  */
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+  tree conv_arg = fold_build1_loc (loc, VIEW_CONVERT_EXPR, int_arg_type, arg);
+  return emit_tree_and_return_var(seq, conv_arg);
+}
+
+  /* Check if the number that is being classified is close enough to IEEE 754
+     format to be able to go in the early exit code.  */
+static bool
+use_ieee_int_mode (tree arg)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  return (format->is_binary_ieee_compatible
+	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
+	  /* We explicitly disable quad float support on 32 bit systems.  */
+	  && !(UNITS_PER_WORD == 4 && type_width == 128)
+	  && targetm.scalar_mode_supported_p (mode));
+}
+
+  /* perform standard IBM extended format fixups for FP functions.  */
+static bool
+perform_ibm_extended_fixups (tree *arg, machine_mode *mode,
+			     tree *type, location_t loc)
+{
+  bool is_ibm_extended = MODE_COMPOSITE_P (*mode);
+  if (is_ibm_extended)
+    {
+      /* NaN and Inf are encoded in the high-order double value
+	 only.  The low-order value is not significant.  */
+      *type = double_type_node;
+      *mode = DFmode;
+      *arg = fold_build1_loc (loc, NOP_EXPR, *type, *arg);
+    }
+
+  return is_ibm_extended;
+}
+
+static tree
+is_normal (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const tree bool_type = boolean_type_node;
+
+  /* Perform IBM extended format fixups if required.  */
+  bool is_ibm_extended = perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree orig_arg = arg;
+    if (TREE_CODE (arg) != SSA_NAME
+	&& (TREE_ADDRESSABLE (arg) != 0
+	  || (TREE_CODE (arg) != PARM_DECL
+	      && (!VAR_P (arg) || TREE_STATIC (arg)))))
+      orig_arg = save_expr (arg);
+
+    REAL_VALUE_TYPE rinf, rmin;
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    char buf[128];
+    real_inf (&rinf);
+    get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+    real_from_string (&rmin, buf);
+
+    tree inf_exp = fold_build2_loc (loc, LT_EXPR, bool_type, arg_p,
+				    build_real (type, rinf));
+    tree min_exp = build_real (type, rmin);
+    if (is_ibm_extended)
+      {
+	/* Testing the high end of the range is done just using
+	   the high double, using the same test as isfinite().
+	   For the subnormal end of the range we first test the
+	   high double, then if its magnitude is equal to the
+	   limit of 0x1p-969, we test whether the low double is
+	   non-zero and opposite sign to the high double.  */
+	tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
+	tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
+	tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
+	tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
+				   arg, min_exp);
+	tree as_complex = build1 (VIEW_CONVERT_EXPR,
+				  complex_double_type_node, orig_arg);
+	tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
+	tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
+	tree zero = build_real (type, dconst0);
+	tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
+	tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
+	tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
+	tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
+				  fold_build3 (COND_EXPR,
+					       integer_type_node,
+					       hilt, logt, lolt));
+	eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
+			      eq_min, ok_lo);
+	min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
+			       gt_min, eq_min);
+      }
+      else
+      {
+	min_exp = fold_build2_loc (loc, GE_EXPR, bool_type, arg_p,
+				   min_exp);
+      }
+
+    tree res
+      = fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, min_exp),
+			 emit_tree_and_return_var (seq, inf_exp));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const tree int_type = unsigned_type_node;
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  const int exp_mask  = (1 << exp_bits) - 1;
+
+  /* Get the number reinterpreted as an integer.  */
+  tree int_arg = get_num_as_int (seq, arg, loc);
+
+  /* Extract exp bits from the float, where we expect the exponent to be.
+     We create a new type because BIT_FIELD_REF does not allow you to
+     extract less bits than the precision of the storage variable.  */
+  tree exp_tmp
+    = fold_build3_loc (loc, BIT_FIELD_REF,
+		       build_nonstandard_integer_type (exp_bits, true),
+		       int_arg,
+		       build_int_cstu (int_type, exp_bits),
+		       build_int_cstu (int_type, format->p - 1));
+  tree exp_bitfield = emit_tree_and_return_var (seq, exp_tmp);
+
+  /* Re-interpret the extracted exponent bits as a 32 bit int.
+     This allows us to continue doing operations as int_type.  */
+  tree exp
+    = emit_tree_and_return_var(seq,fold_build1_loc (loc, NOP_EXPR, int_type,
+						    exp_bitfield));
+
+  /* exp_mask & ~1.  */
+  tree mask_check
+     = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+			build_int_cstu (int_type, exp_mask),
+			fold_build1_loc (loc, BIT_NOT_EXPR, int_type,
+					 build_int_cstu (int_type, 1)));
+
+  /* (exp + 1) & mask_check.
+     Check to see if exp is not all 0 or all 1.  */
+  tree exp_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+		       emit_tree_and_return_var (seq,
+				fold_build2_loc (loc, PLUS_EXPR, int_type, exp,
+						 build_int_cstu (int_type, 1))),
+		       mask_check);
+
+  tree res = fold_build2_loc (loc, NE_EXPR, boolean_type_node,
+			      build_int_cstu (int_type, 0),
+			      emit_tree_and_return_var(seq, exp_check));
+
+  return emit_tree_and_return_var (seq, res);
+}
+
+static tree
+is_zero (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+  machine_mode mode = TYPE_MODE (type);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree res = fold_build2_loc (loc, EQ_EXPR, boolean_type_node, arg,
+				build_real (type, dconst0));
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* num << 1 == 0.
+     This checks to see if the number is zero.  */
+  tree zero_check
+    = fold_build2_loc (loc, EQ_EXPR, boolean_type_node,
+		       build_int_cstu (int_arg_type, 0),
+		       emit_tree_and_return_var (seq, int_arg));
+
+  return emit_tree_and_return_var (seq, zero_check);
+}
+
+static tree
+is_subnormal (gimple_seq *seq, tree arg, location_t loc)
+{
+  const tree bool_type = boolean_type_node;
+
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    REAL_VALUE_TYPE r;
+    char buf[128];
+    get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+    real_from_string (&r, buf);
+    tree subnorm = fold_build2_loc (loc, LT_EXPR, bool_type,
+				    arg_p, build_real (type, r));
+
+    tree zero = fold_build2_loc (loc, GT_EXPR, bool_type, arg_p,
+				 build_real (type, dconst0));
+
+    tree res
+      = fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, subnorm),
+			 emit_tree_and_return_var (seq, zero));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check for a zero exponent and non-zero mantissa.
+     This can be done with two comparisons by first apply a
+     removing the sign bit and checking if the value is larger
+     than the mantissa mask.  */
+
+  /* This creates a mask to be used to check the mantissa value in the shifted
+     integer representation of the fpnum.  */
+  tree significant_bit = build_int_cstu (int_arg_type, format->p - 1);
+  tree mantissa_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					significant_bit),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check if exponent is zero and mantissa is not.  */
+  tree subnorm_cond
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, LE_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 mantissa_mask));
+
+  tree zero_cond
+    = fold_build2_loc (loc, GT_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, int_arg),
+		       build_int_cstu (int_arg_type, 0));
+
+  tree subnorm_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, subnorm_cond),
+		       emit_tree_and_return_var (seq, zero_cond));
+
+  return emit_tree_and_return_var (seq, subnorm_check);
+}
+
+static tree
+is_infinity (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_INFINITIES (mode))
+  {
+    return build_int_cst (bool_type, false);
+  }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    REAL_VALUE_TYPE r;
+    real_inf (&r);
+    tree res = fold_build2_loc (loc, EQ_EXPR, bool_type, arg_p,
+				build_real (type, r));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0.  */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, EQ_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+static tree
+is_finite (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (arg) && !HONOR_INFINITIES (arg))
+  {
+    return build_int_cst (bool_type, true);
+  }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    REAL_VALUE_TYPE rmax;
+    char buf[128];
+    get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+    real_from_string (&rmax, buf);
+
+    tree res = fold_build2_loc (loc, LE_EXPR, bool_type, arg_p,
+				build_real (type, rmax));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0. */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, LT_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Determines if the given number is a NaN value.
+   This function is the last in the chain and only has to
+   check if it's preconditions are true.  */
+static tree
+is_nan (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (mode))
+  {
+    return build_int_cst (bool_type, false);
+  }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    tree eq_check
+      = fold_build2_loc (loc, ORDERED_EXPR, bool_type,arg_p, arg_p);
+
+    tree res
+      = fold_build1_loc (loc, BIT_NOT_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, eq_check));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is not 0.  */
+  tree nan_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, GT_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, nan_check);
+}
+
+/* Validate a single argument ARG against a tree code CODE representing
+   a type.  */
+static bool
+gimple_validate_arg (gimple* call, int index, enum tree_code code)
+{
+  const tree arg = gimple_call_arg(call, index);
+  if (!arg)
+    return false;
+  else if (code == POINTER_TYPE)
+    return POINTER_TYPE_P (TREE_TYPE (arg));
+  else if (code == INTEGER_TYPE)
+    return INTEGRAL_TYPE_P (TREE_TYPE (arg));
+  return code == TREE_CODE (TREE_TYPE (arg));
+}
+
+/* Lowers calls to __builtin_fpclassify to
+   fpclassify (x) ->
+     isnormal(x) ? FP_NORMAL :
+       iszero (x) ? FP_ZERO :
+	 isnan (x) ? FP_NAN :
+	   isinfinite (x) ? FP_INFINITE :
+	     FP_SUBNORMAL.
+
+   The code may use integer arithmentic if it decides
+   that the produced assembly would be faster. This can only be done
+   for numbers that are similar to IEEE-754 in format.
+
+   This builtin will generate code to return the appropriate floating
+   point classification depending on the value of the floating point
+   number passed in.  The possible return values must be supplied as
+   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
+   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
+   one floating point argument which is "type generic".
+*/
+static void
+lower_builtin_fpclassify (gimple_stmt_iterator *gsi)
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 6
+      || !gimple_validate_arg (call, 0, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 1, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 2, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 3, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 4, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 5, REAL_TYPE))
+    return;
+
+  /* Collect the arguments from the call.  */
+  tree fp_nan = gimple_call_arg (call, 0);
+  tree fp_infinite = gimple_call_arg (call, 1);
+  tree fp_normal = gimple_call_arg (call, 2);
+  tree fp_subnormal = gimple_call_arg (call, 3);
+  tree fp_zero = gimple_call_arg (call, 4);
+  tree arg = gimple_call_arg (call, 5);
+
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (TREE_TYPE (orig_dest));
+
+  emit_tree_cond (&body, dest, done_label,
+		  is_normal (&body, arg, loc), fp_normal);
+  emit_tree_cond (&body, dest, done_label,
+		  is_zero (&body, arg, loc), fp_zero);
+  emit_tree_cond (&body, dest, done_label,
+		  is_nan (&body, arg, loc), fp_nan);
+  emit_tree_cond (&body, dest, done_label,
+		  is_infinity (&body, arg, loc), fp_infinite);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to is_subnormal.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, fp_subnormal));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+  {
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+  }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+
+  /* Remove the call to __builtin_fpclassify.  */
+  gsi_remove (gsi, false);
+}
+
+static void
+gen_call_fp_builtin (gimple_stmt_iterator *gsi,
+		     tree (*fndecl)(gimple_seq *, tree, location_t))
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 1
+      || !gimple_validate_arg (call, 0, REAL_TYPE))
+    return;
+
+  tree arg = gimple_call_arg (call, 0);
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  tree type = TREE_TYPE (orig_dest);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+      dest = create_tmp_reg (type);
+
+  tree t_true = build_int_cst (type, true);
+  tree t_false = build_int_cst (type, false);
+
+  emit_tree_cond (&body, dest, done_label,
+		  fndecl (&body, arg, loc), t_true);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to false.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, t_false));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+  {
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+  }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+  /* Remove the call to the builtin.  */
+  gsi_remove (gsi, false);
+}
+
+static void
+lower_builtin_isnan (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_nan);
+}
+
+static void
+lower_builtin_isinfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_infinity);
+}
+
+static void
+lower_builtin_isnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_normal);
+}
+
+static void
+lower_builtin_iszero (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_zero);
+}
+
+static void
+lower_builtin_issubnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_subnormal);
+}
+
+static void
+lower_builtin_isfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_finite);
+}
+
 /* Lower calls to posix_memalign to
      res = posix_memalign (ptr, align, size);
      if (res == 0)
diff --git a/gcc/real.h b/gcc/real.h
index 59af580e78f2637be84f71b98b45ec6611053222..4b1b92138e07f43a175a2cbee4d952afad5898f7 100644
--- a/gcc/real.h
+++ b/gcc/real.h
@@ -161,6 +161,19 @@ struct real_format
   bool has_signed_zero;
   bool qnan_msb_set;
   bool canonical_nan_lsbs_set;
+
+  /* This flag indicates whether the format is suitable for the optimized
+     code paths for the __builtin_fpclassify function and friends.  For
+     this, the format must be a base 2 representation with the sign bit as
+     the most-significant bit followed by (exp <= 32) exponent bits
+     followed by the mantissa bits.  It must be possible to interpret the
+     bits of the floating-point representation as an integer.  NaNs and
+     INFs (if available) must be represented by the same schema used by
+     IEEE 754.  (NaNs must be represented by an exponent with all bits 1,
+     any mantissa except all bits 0 and any sign bit.  +INF and -INF must be
+     represented by an exponent with all bits 1, a mantissa with all bits 0 and
+     a sign bit of 0 and 1 respectively.)  */
+  bool is_binary_ieee_compatible;
   const char *name;
 };
 
@@ -511,6 +524,11 @@ extern bool real_isinteger (const REAL_VALUE_TYPE *, HOST_WIDE_INT *);
    float string.  BUF must be large enough to contain the result.  */
 extern void get_max_float (const struct real_format *, char *, size_t);
 
+/* Write into BUF the smallest positive normalized number x,
+   such that b**(x-1) is normalized.  BUF must be large enough
+   to contain the result.  */
+extern void get_min_float (const struct real_format *, char *, size_t);
+
 #ifndef GENERATOR_FILE
 /* real related routines.  */
 extern wide_int real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
diff --git a/gcc/real.c b/gcc/real.c
index 66e88e2ad366f7848609d157074c80420d778bcf..20c907a6d543c73ba62aa9a8ddf6973d82de7832 100644
--- a/gcc/real.c
+++ b/gcc/real.c
@@ -3052,6 +3052,7 @@ const struct real_format ieee_single_format =
     true,
     true,
     false,
+    true,
     "ieee_single"
   };
 
@@ -3075,6 +3076,7 @@ const struct real_format mips_single_format =
     true,
     false,
     true,
+    true,
     "mips_single"
   };
 
@@ -3098,6 +3100,7 @@ const struct real_format motorola_single_format =
     true,
     true,
     true,
+    true,
     "motorola_single"
   };
 
@@ -3132,6 +3135,7 @@ const struct real_format spu_single_format =
     true,
     false,
     false,
+    false,
     "spu_single"
   };
 \f
@@ -3343,6 +3347,7 @@ const struct real_format ieee_double_format =
     true,
     true,
     false,
+    true,
     "ieee_double"
   };
 
@@ -3366,6 +3371,7 @@ const struct real_format mips_double_format =
     true,
     false,
     true,
+    true,
     "mips_double"
   };
 
@@ -3389,6 +3395,7 @@ const struct real_format motorola_double_format =
     true,
     true,
     true,
+    true,
     "motorola_double"
   };
 \f
@@ -3735,6 +3742,7 @@ const struct real_format ieee_extended_motorola_format =
     true,
     true,
     true,
+    false,
     "ieee_extended_motorola"
   };
 
@@ -3758,6 +3766,7 @@ const struct real_format ieee_extended_intel_96_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96"
   };
 
@@ -3781,6 +3790,7 @@ const struct real_format ieee_extended_intel_128_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_128"
   };
 
@@ -3806,6 +3816,7 @@ const struct real_format ieee_extended_intel_96_round_53_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96_round_53"
   };
 \f
@@ -3896,6 +3907,7 @@ const struct real_format ibm_extended_format =
     true,
     true,
     false,
+    false,
     "ibm_extended"
   };
 
@@ -3919,6 +3931,7 @@ const struct real_format mips_extended_format =
     true,
     false,
     true,
+    false,
     "mips_extended"
   };
 
@@ -4184,6 +4197,7 @@ const struct real_format ieee_quad_format =
     true,
     true,
     false,
+    true,
     "ieee_quad"
   };
 
@@ -4207,6 +4221,7 @@ const struct real_format mips_quad_format =
     true,
     false,
     true,
+    true,
     "mips_quad"
   };
 \f
@@ -4509,6 +4524,7 @@ const struct real_format vax_f_format =
     false,
     false,
     false,
+    false,
     "vax_f"
   };
 
@@ -4532,6 +4548,7 @@ const struct real_format vax_d_format =
     false,
     false,
     false,
+    false,
     "vax_d"
   };
 
@@ -4555,6 +4572,7 @@ const struct real_format vax_g_format =
     false,
     false,
     false,
+    false,
     "vax_g"
   };
 \f
@@ -4633,6 +4651,7 @@ const struct real_format decimal_single_format =
     true,
     true,
     false,
+    false,
     "decimal_single"
   };
 
@@ -4657,6 +4676,7 @@ const struct real_format decimal_double_format =
     true,
     true,
     false,
+    false,
     "decimal_double"
   };
 
@@ -4681,6 +4701,7 @@ const struct real_format decimal_quad_format =
     true,
     true,
     false,
+    false,
     "decimal_quad"
   };
 \f
@@ -4820,6 +4841,7 @@ const struct real_format ieee_half_format =
     true,
     true,
     false,
+    true,
     "ieee_half"
   };
 
@@ -4846,6 +4868,7 @@ const struct real_format arm_half_format =
     true,
     false,
     false,
+    false,
     "arm_half"
   };
 \f
@@ -4893,6 +4916,7 @@ const struct real_format real_internal_format =
     true,
     true,
     false,
+    false,
     "real_internal"
   };
 \f
@@ -5080,6 +5104,16 @@ get_max_float (const struct real_format *fmt, char *buf, size_t len)
   gcc_assert (strlen (buf) < len);
 }
 
+/* Write into BUF the minimum negative representable finite floating-point
+   number, x, such that b**(x-1) is normalized.
+   BUF must be large enough to contain the result.  */
+void
+get_min_float (const struct real_format *fmt, char *buf, size_t len)
+{
+  sprintf (buf, "0x1p%d", fmt->emin - 1);
+  gcc_assert (strlen (buf) < len);
+}
+
 /* True if mode M has a NaN representation and
    the treatment of NaN operands is important.  */
 
diff --git a/gcc/testsuite/gcc.dg/builtins-43.c b/gcc/testsuite/gcc.dg/builtins-43.c
index f7c318edf084104b9b820e18e631ed61e760569e..5d41c28aef8619f06658f45846ae15dd8b4987ed 100644
--- a/gcc/testsuite/gcc.dg/builtins-43.c
+++ b/gcc/testsuite/gcc.dg/builtins-43.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-gimple -fdump-tree-optimized" } */
+/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-lower -fdump-tree-optimized" } */
   
 extern void f(int);
 extern void link_error ();
@@ -51,7 +51,7 @@ main ()
 
 
 /* Check that all instances of __builtin_isnan were folded.  */
-/* { dg-final { scan-tree-dump-times "isnan" 0 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "isnan" 0 "lower" } } */
 
 /* Check that all instances of link_error were subject to DCE.  */
 /* { dg-final { scan-tree-dump-times "link_error" 0 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/fold-notunord.c b/gcc/testsuite/gcc.dg/fold-notunord.c
deleted file mode 100644
index ca345154ac204cb5f380855828421b7f88d49052..0000000000000000000000000000000000000000
--- a/gcc/testsuite/gcc.dg/fold-notunord.c
+++ /dev/null
@@ -1,9 +0,0 @@
-/* { dg-do compile } */
-/* { dg-options "-O -ftrapping-math -fdump-tree-optimized" } */
-
-int f (double d)
-{
-  return !__builtin_isnan (d);
-}
-
-/* { dg-final { scan-tree-dump " ord " "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/pr28796-1.c b/gcc/testsuite/gcc.dg/pr28796-1.c
index 077118a298878441e812410f3a6bf3707fb1d839..a57b4e350af1bc45344106fdeab4b32ef87f233f 100644
--- a/gcc/testsuite/gcc.dg/pr28796-1.c
+++ b/gcc/testsuite/gcc.dg/pr28796-1.c
@@ -1,5 +1,5 @@
 /* { dg-do link } */
-/* { dg-options "-ffinite-math-only" } */
+/* { dg-options "-ffinite-math-only -O2" } */
 
 extern void link_error(void);
 
diff --git a/gcc/testsuite/gcc.dg/torture/float128-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..ec9d3ad41e24280978707888590eec1b562207f0
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128_runtime } */
+
+#define WIDTH 128
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0ede861716750453a86c9abc703ad0b2826674c6
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128x_runtime } */
+
+#define WIDTH 128
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float16-tg-4.c b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..007c4c224ea95537c31185d0aff964d1975f2190
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float16 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float16 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float16_runtime } */
+
+#define WIDTH 16
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..c7f8353da2cffdfc2c2f58f5da3d5363b95e6f91
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32 type-generic built-in functions: __builtin_f__builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32_runtime } */
+
+#define WIDTH 32
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0d7a592920aca112d5f6409e565d4582c253c977
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32x_runtime } */
+
+#define WIDTH 32
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..bb25a22a68e60ce2717ab3583bbec595dd563c35
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64_runtime } */
+
+#define WIDTH 64
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..82305d916b8bd75131e2c647fd37f74cadbc8f1d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64x_runtime } */
+
+#define WIDTH 64
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
new file mode 100644
index 0000000000000000000000000000000000000000..aa3448c090cf797a1525b1045ffebeed79cace40
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
@@ -0,0 +1,99 @@
+/* Tests for _FloatN / _FloatNx types: compile and execution tests for
+   type-generic built-in functions: __builtin_iszero, __builtin_issubnormal.
+   Before including this file, define WIDTH as the value N; define EXT to 1
+   for _FloatNx and 0 for _FloatN.  */
+
+#define __STDC_WANT_IEC_60559_TYPES_EXT__
+#include <float.h>
+
+#define CONCATX(X, Y) X ## Y
+#define CONCAT(X, Y) CONCATX (X, Y)
+#define CONCAT3(X, Y, Z) CONCAT (CONCAT (X, Y), Z)
+#define CONCAT4(W, X, Y, Z) CONCAT (CONCAT (CONCAT (W, X), Y), Z)
+
+#if EXT
+# define TYPE CONCAT3 (_Float, WIDTH, x)
+# define CST(C) CONCAT4 (C, f, WIDTH, x)
+# define MAX CONCAT3 (FLT, WIDTH, X_MAX)
+# define MIN CONCAT3 (FLT, WIDTH, X_MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, X_TRUE_MIN)
+#else
+# define TYPE CONCAT (_Float, WIDTH)
+# define CST(C) CONCAT3 (C, f, WIDTH)
+# define MAX CONCAT3 (FLT, WIDTH, _MAX)
+# define MIN CONCAT3 (FLT, WIDTH, _MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, _TRUE_MIN)
+#endif
+
+extern void exit (int);
+extern void abort (void);
+
+volatile TYPE inf = __builtin_inf (), nanval = __builtin_nan ("");
+volatile TYPE neginf = -__builtin_inf (), negnanval = -__builtin_nan ("");
+volatile TYPE zero = CST (0.0), negzero = -CST (0.0), one = CST (1.0);
+volatile TYPE max = MAX, negmax = -MAX, min = MIN, negmin = -MIN;
+volatile TYPE true_min = TRUE_MIN, negtrue_min = -TRUE_MIN;
+volatile TYPE sub_norm = MIN / 2.0;
+
+int
+main (void)
+{
+  if (__builtin_iszero (inf) == 1)
+    abort ();
+  if (__builtin_iszero (nanval) == 1)
+    abort ();
+  if (__builtin_iszero (neginf) == 1)
+    abort ();
+  if (__builtin_iszero (negnanval) == 1)
+    abort ();
+  if (__builtin_iszero (zero) != 1)
+    abort ();
+  if (__builtin_iszero (negzero) != 1)
+    abort ();
+  if (__builtin_iszero (one) == 1)
+    abort ();
+  if (__builtin_iszero (max) == 1)
+    abort ();
+  if (__builtin_iszero (negmax) == 1)
+    abort ();
+  if (__builtin_iszero (min) == 1)
+    abort ();
+  if (__builtin_iszero (negmin) == 1)
+    abort ();
+  if (__builtin_iszero (true_min) == 1)
+    abort ();
+  if (__builtin_iszero (negtrue_min) == 1)
+    abort ();
+  if (__builtin_iszero (sub_norm) == 1)
+    abort ();
+
+  if (__builtin_issubnormal (inf) == 1)
+    abort ();
+  if (__builtin_issubnormal (nanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (neginf) == 1)
+    abort ();
+  if (__builtin_issubnormal (negnanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (zero) == 1)
+    abort ();
+  if (__builtin_issubnormal (negzero) == 1)
+    abort ();
+  if (__builtin_issubnormal (one) == 1)
+    abort ();
+  if (__builtin_issubnormal (max) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmax) == 1)
+    abort ();
+  if (__builtin_issubnormal (min) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmin) == 1)
+    abort ();
+  if (__builtin_issubnormal (true_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (negtrue_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (sub_norm) != 1)
+    abort ();
+  exit (0);
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
new file mode 100644
index 0000000000000000000000000000000000000000..84a73a6483780dac2347e72fa7d139545d2087eb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
@@ -0,0 +1,22 @@
+/* This file checks the code generation for the new __builtin_fpclassify.
+   because checking the exact assembly isn't very useful, we'll just be checking
+   for the presence of certain instructions and the omition of others. */
+/* { dg-options "-O2" } */
+/* { dg-do compile } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fabs\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmp\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmpe\[ \t\]?" } } */
+/* { dg-final { scan-assembler "\[ \t\]?sbfx\[ \t\]?" } } */
+
+#include <stdio.h>
+#include <math.h>
+
+/*
+ fp_nan = args[0];
+ fp_infinite = args[1];
+ fp_normal = args[2];
+ fp_subnormal = args[3];
+ fp_zero = args[4];
+*/
+
+int f(double x) { return __builtin_fpclassify(0, 1, 4, 3, 2, x); }

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-11-25 12:19       ` Tamar Christina
@ 2016-12-02 16:20         ` Tamar Christina
  2016-12-12  9:20           ` Tamar Christina
  2016-12-14  8:07         ` Jeff Law
  1 sibling, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2016-12-02 16:20 UTC (permalink / raw)
  To: Joseph Myers
  Cc: GCC Patches, Wilco Dijkstra, rguenther, law, Michael Meissner, nd

Ping?

Is there anything else I need to do for the patch or is it OK for trunk?

Thanks,
Tamar

________________________________________
From: Tamar Christina
Sent: Friday, November 25, 2016 12:18:52 PM
To: Joseph Myers
Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; law@redhat.com; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

Hi Joseph,

I have updated the patch with the changes,
w.r.t to the formatting, there are tabs there that seem to be rendered
at 4 spaces wide. In my editor setup at 8 spaces they are correct.

Kind Regards,
Tamar
________________________________________
From: Joseph Myers <joseph@codesourcery.com>
Sent: Thursday, November 24, 2016 6:28:18 PM
To: Tamar Christina
Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; law@redhat.com; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On Thu, 24 Nov 2016, Tamar Christina wrote:

> @@ -11499,6 +11503,53 @@ to classify.  GCC treats the last argument as type-generic, which
>  means it does not do default promotion from float to double.
>  @end deftypefn
>
> +@deftypefn {Built-in Function} int __builtin_isnan (...)
> +This built-in implements the C99 isnan functionality which checks if
> +the given argument represents a NaN.  The return value of the
> +function will either be a 0 (false) or a 1 (true).
> +On most systems, when an IEEE 754 floating point is used this
> +built-in does not produce a signal when a signaling NaN is used.

"an IEEE 754 floating point" should probably be "an IEEE 754
floating-point type" or similar.

> +GCC treats the argument as type-generic, which means it does
> +not do default promotion from float to double.

I think type names such as float and double should use @code{} in the
manual.

> +the given argument represents an Infinite number.  The return

Infinite should not have a capital letter there.

> +@deftypefn {Built-in Function} int __builtin_iszero (...)
> +This built-in implements the C99 iszero functionality which checks if

This isn't C99, it's TS 18661-1:2014.

> +the given argument represents the number 0.  The return

0 or -0.

> +@deftypefn {Built-in Function} int __builtin_issubnormal (...)
> +This built-in implements the C99 issubnormal functionality which checks if

Again, TS 18661-1.

> +the given argument represents a sub-normal number.  The return

Do not hyphenate subnormal.

> +         switch (DECL_FUNCTION_CODE (decl))
> +         {
> +                 case BUILT_IN_SETJMP:
> +                     lower_builtin_setjmp (gsi);
> +                     data->cannot_fallthru = false;
> +                     return;

The indentation in this whole block of code (not all quoted) is wrong.

> +    real_inf(&rinf);

Missing space before '('.

> +  emit_tree_cond (&body, dest, done_label,
> +               is_normal(&body, arg, loc), fp_normal);
> +  emit_tree_cond (&body, dest, done_label,
> +               is_zero(&body, arg, loc), fp_zero);
> +  emit_tree_cond (&body, dest, done_label,
> +               is_nan(&body, arg, loc), fp_nan);
> +  emit_tree_cond (&body, dest, done_label,
> +               is_infinity(&body, arg, loc), fp_infinite);

Likewise.

> +               fndecl(&body, arg, loc), t_true);

Likewise.

--
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-12-02 16:20         ` Tamar Christina
@ 2016-12-12  9:20           ` Tamar Christina
  2016-12-12 15:53             ` Jeff Law
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2016-12-12  9:20 UTC (permalink / raw)
  To: Joseph Myers
  Cc: GCC Patches, Wilco Dijkstra, rguenther, law, Michael Meissner, nd

Ping

________________________________________
From: Tamar Christina
Sent: Friday, December 2, 2016 4:20:42 PM
To: Joseph Myers
Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; law@redhat.com; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

Ping?

Is there anything else I need to do for the patch or is it OK for trunk?

Thanks,
Tamar

________________________________________
From: Tamar Christina
Sent: Friday, November 25, 2016 12:18:52 PM
To: Joseph Myers
Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; law@redhat.com; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

Hi Joseph,

I have updated the patch with the changes,
w.r.t to the formatting, there are tabs there that seem to be rendered
at 4 spaces wide. In my editor setup at 8 spaces they are correct.

Kind Regards,
Tamar
________________________________________
From: Joseph Myers <joseph@codesourcery.com>
Sent: Thursday, November 24, 2016 6:28:18 PM
To: Tamar Christina
Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; law@redhat.com; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On Thu, 24 Nov 2016, Tamar Christina wrote:

> @@ -11499,6 +11503,53 @@ to classify.  GCC treats the last argument as type-generic, which
>  means it does not do default promotion from float to double.
>  @end deftypefn
>
> +@deftypefn {Built-in Function} int __builtin_isnan (...)
> +This built-in implements the C99 isnan functionality which checks if
> +the given argument represents a NaN.  The return value of the
> +function will either be a 0 (false) or a 1 (true).
> +On most systems, when an IEEE 754 floating point is used this
> +built-in does not produce a signal when a signaling NaN is used.

"an IEEE 754 floating point" should probably be "an IEEE 754
floating-point type" or similar.

> +GCC treats the argument as type-generic, which means it does
> +not do default promotion from float to double.

I think type names such as float and double should use @code{} in the
manual.

> +the given argument represents an Infinite number.  The return

Infinite should not have a capital letter there.

> +@deftypefn {Built-in Function} int __builtin_iszero (...)
> +This built-in implements the C99 iszero functionality which checks if

This isn't C99, it's TS 18661-1:2014.

> +the given argument represents the number 0.  The return

0 or -0.

> +@deftypefn {Built-in Function} int __builtin_issubnormal (...)
> +This built-in implements the C99 issubnormal functionality which checks if

Again, TS 18661-1.

> +the given argument represents a sub-normal number.  The return

Do not hyphenate subnormal.

> +         switch (DECL_FUNCTION_CODE (decl))
> +         {
> +                 case BUILT_IN_SETJMP:
> +                     lower_builtin_setjmp (gsi);
> +                     data->cannot_fallthru = false;
> +                     return;

The indentation in this whole block of code (not all quoted) is wrong.

> +    real_inf(&rinf);

Missing space before '('.

> +  emit_tree_cond (&body, dest, done_label,
> +               is_normal(&body, arg, loc), fp_normal);
> +  emit_tree_cond (&body, dest, done_label,
> +               is_zero(&body, arg, loc), fp_zero);
> +  emit_tree_cond (&body, dest, done_label,
> +               is_nan(&body, arg, loc), fp_nan);
> +  emit_tree_cond (&body, dest, done_label,
> +               is_infinity(&body, arg, loc), fp_infinite);

Likewise.

> +               fndecl(&body, arg, loc), t_true);

Likewise.

--
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-12-12  9:20           ` Tamar Christina
@ 2016-12-12 15:53             ` Jeff Law
  0 siblings, 0 replies; 47+ messages in thread
From: Jeff Law @ 2016-12-12 15:53 UTC (permalink / raw)
  To: Tamar Christina, Joseph Myers
  Cc: GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

On 12/12/2016 02:19 AM, Tamar Christina wrote:
> Ping
>
> ________________________________________
> From: Tamar Christina
> Sent: Friday, December 2, 2016 4:20:42 PM
> To: Joseph Myers
> Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; law@redhat.com; Michael Meissner; nd
> Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
>
> Ping?
>
> Is there anything else I need to do for the patch or is it OK for trunk?
Just a note, it is in my queue of things to look at.   GIven it was 
posted well before stage1 close I think it deserves the opportunity to 
move forward (even if we need another iteration) even though we're well 
into stage3.

Jeff

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-11-25 12:19       ` Tamar Christina
  2016-12-02 16:20         ` Tamar Christina
@ 2016-12-14  8:07         ` Jeff Law
  2016-12-15 10:16           ` Tamar Christina
  1 sibling, 1 reply; 47+ messages in thread
From: Jeff Law @ 2016-12-14  8:07 UTC (permalink / raw)
  To: Tamar Christina, Joseph Myers
  Cc: GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

On 11/25/2016 05:18 AM, Tamar Christina wrote:
> Hi Joseph,
>
> I have updated the patch with the changes,
> w.r.t to the formatting, there are tabs there that seem to be rendered
> at 4 spaces wide. In my editor setup at 8 spaces they are correct.
>
Various random comments, mostly formatting.  I do think we're going to 
need another iteration.  The good news is I don't expect we'll be asking 
for major changes in direction, just trying to tie up various loose 
ends, so it shouldn't take nearly as long.

On a high level, presumably there's no real value in keeping the old 
code to "fold" fpclassify.  By exposing those operations as integer 
logicals for the fast path, if the FP value becomes a constant during 
the optimization pipeline we'll see the reinterpreted values flowing 
into the new integer logical tests and they'll simplify just like 
anything else.  Right?

The old IBM format is still supported, though they are expected to be 
moveing towards a standard ieee 128 bit format.  So my only concern is 
that we preserve correct behavior for those cases -- I don't really care 
about optimizing them.  So I think you need to keep them.

For documenting builtins, using existing builtins as a template.



> @@ -822,6 +882,736 @@ lower_builtin_setjmp (gimple_stmt_iterator *gsi)
>    gsi_remove (gsi, false);
>  }
>
> +static tree
> +emit_tree_and_return_var (gimple_seq *seq, tree arg)
Needs a function comment.

> +{
> +  if (TREE_CODE (arg) == SSA_NAME || VAR_P (arg))
> +    return arg;
> +
> +  tree tmp = create_tmp_reg (TREE_TYPE(arg));
Nit.  Need space between TREE_TYPE and the open-parn for its arglist

> +  gassign *stm = gimple_build_assign(tmp, arg);
Similarly between gimple_build_assign and its arglist.  This formatting 
nit is repeated often.  Please check the patch as a whole and correct. 
Probably the best way to go is a search for [a-z] immediately preceding 
an open paren.


> +/* This function builds an if statement that ends up using explicit branches
> +   instead of becoming a csel.  This function assumes you will fall through to
> +   the next statements after this condition for the false branch.  */
A function comment is supposed to document each argument (and the return 
value if any).  It's probably better to avoid referring to an ARM 
instruction (csel) and instead describe the intent more genericly.


> +static void
> +emit_tree_cond (gimple_seq *seq, tree result_variable, tree exit_label,
> +		tree cond, tree true_branch)
> +{
> +    /* Create labels for fall through.  */
> +  tree true_label = create_artificial_label (UNKNOWN_LOCATION);
Comment is indented too deep.  It should line up with the code.


> +
> +static tree
> +get_num_as_int (gimple_seq *seq, tree arg, location_t loc)
Needs function comment.  I'm not going to call out each one.  Please 
verify that every new function has a comment and that they document 
their arguments and return values.

Here's an example from builtins.c you might want to refer back to:

> /* For a memory reference expression EXP compute values M and N such that M
>    divides (&EXP - N) and such that N < M.  If these numbers can be determined,
>    store M in alignp and N in *BITPOSP and return true.  Otherwise return false
>    and store BITS_PER_UNIT to *alignp and any bit-offset to *bitposp.  */
>
> bool
> get_object_alignment_1 (tree exp, unsigned int *alignp,
>                         unsigned HOST_WIDE_INT *bitposp)
> {
>   return get_object_alignment_2 (exp, alignp, bitposp, false);
> }



> +  /* Check if the number that is being classified is close enough to IEEE 754
> +     format to be able to go in the early exit code.  */
> +static bool
> +use_ieee_int_mode (tree arg)
Comment should describe the argument.  Comment is indented 2 spaces too 
deep on both lines.  This occurs in several please.  I won't call each 
one out, but please walk through the patch as a whole and fix them.


> +{
> +  tree type = TREE_TYPE (arg);
> +
> +  machine_mode mode = TYPE_MODE (type);
> +
> +  const real_format *format = REAL_MODE_FORMAT (mode);
> +  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
> +  return (format->is_binary_ieee_compatible
> +	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
> +	  /* We explicitly disable quad float support on 32 bit systems.  */
> +	  && !(UNITS_PER_WORD == 4 && type_width == 128)
> +	  && targetm.scalar_mode_supported_p (mode));
> +}
Presumably this is why you needed the target.h inclusion.

Note that on some systems we even disable 64bit floating point support. 
I suspect this check needs a little re-thinking as I don't think that 
checking for a specific UNITS_PER_WORD is correct, nor is checking the 
width of the type.  I'm not offhand sure what the test should be, just 
that I think we need something better here.



> +
> +static tree
> +is_normal (gimple_seq *seq, tree arg, location_t loc)
> +{
> +  tree type = TREE_TYPE (arg);
> +
> +  machine_mode mode = TYPE_MODE (type);
> +  const real_format *format = REAL_MODE_FORMAT (mode);
> +  const tree bool_type = boolean_type_node;
> +
> +  /* Perform IBM extended format fixups if required.  */
> +  bool is_ibm_extended = perform_ibm_extended_fixups (&arg, &mode, &type, loc);
> +
> +  /* If not using optimized route then exit early.  */
> +  if (!use_ieee_int_mode (arg))
> +  {
Please check on your formatting.  The open-curly is indented two spaces 
relative to the IF statement.   I see multiple occurrences, so please go 
through the patch and check for them.  I'm not going to try and call out 
each one.






> +
> +static tree
> +is_infinity (gimple_seq *seq, tree arg, location_t loc)
There's a huge amount of code duplication between is_infinity and 
is_finite.  Would it make sense to try and refactor a bit to avoid the 
duplication?  Some of the early setup is even common among most (all?) 
if the is_* functions, consider factoring the common bits into routines 
you can reuse.


> +
> +/* Determines if the given number is a NaN value.
> +   This function is the last in the chain and only has to
> +   check if it's preconditions are true.  */
> +static tree
> +is_nan (gimple_seq *seq, tree arg, location_t loc)
So in the old code we checked UNGT_EXPR, in the new code's slow path you 
check UNORDERED.  Was that change intentional?

In general, I'm going to assume you've got the right tests.   I've done 
light spot checking and will probably do even more on the next iteration.

I see a lot of formatting and lack-of-function comment issues.  I'm 
comfortable with the overall direction though.  I'll want to look more 
closely at the helpers you're using to build up gimple code and how 
they're used.  I get the sense that we ought to already have something 
to do those things, and if not we may want to move yours into a more 
generic location.  I'll probably focus on that and trying to pick up any 
missed nits on the next iteration.  Again, the good news is that I don't 
expect the next iteration to take as long to get to.

Thanks for your patience.

jeff

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-12-14  8:07         ` Jeff Law
@ 2016-12-15 10:16           ` Tamar Christina
  2016-12-15 19:37             ` Joseph Myers
  2016-12-19 20:34             ` Jeff Law
  0 siblings, 2 replies; 47+ messages in thread
From: Tamar Christina @ 2016-12-15 10:16 UTC (permalink / raw)
  To: Jeff Law, Joseph Myers
  Cc: GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd


> On a high level, presumably there's no real value in keeping the old
> code to "fold" fpclassify.  By exposing those operations as integer
> logicals for the fast path, if the FP value becomes a constant during
> the optimization pipeline we'll see the reinterpreted values flowing
> into the new integer logical tests and they'll simplify just like
> anything else.  Right?

Yes, if it becomes a constant it will be folded away, both in the integer and the fp case.

> The old IBM format is still supported, though they are expected to be
> moveing towards a standard ieee 128 bit format.  So my only concern is
> that we preserve correct behavior for those cases -- I don't really care
> about optimizing them.  So I think you need to keep them.

Yes, I re-added them. It's mostly a copy paste from what they were in the
other functions. But I have no way of testing it.

> For documenting builtins, using existing builtins as a template.

Yeah, I based them off the fpclassify documentation.

> > +{
> > +  tree type = TREE_TYPE (arg);
> > +
> > +  machine_mode mode = TYPE_MODE (type);
> > +
> > +  const real_format *format = REAL_MODE_FORMAT (mode);
>  > +  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
> > +  return (format->is_binary_ieee_compatible
> > +       && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
> > +       /* We explicitly disable quad float support on 32 bit systems.  */
> > +       && !(UNITS_PER_WORD == 4 && type_width == 128)
> > +       && targetm.scalar_mode_supported_p (mode));
> > +}
> Presumably this is why you needed the target.h inclusion.
>
> Note that on some systems we even disable 64bit floating point support.
> I suspect this check needs a little re-thinking as I don't think that
> checking for a specific UNITS_PER_WORD is correct, nor is checking the
> width of the type.  I'm not offhand sure what the test should be, just
> that I think we need something better here.

I think what I really wanted to test here is if there was an integer mode available
which has the exact width as the floating point one. So I have replaced this with
just a call to int_mode_for_mode. Which is probably more correct.

> > +
> > +/* Determines if the given number is a NaN value.
> > +   This function is the last in the chain and only has to
> > +   check if it's preconditions are true.  */
> > +static tree
> > +is_nan (gimple_seq *seq, tree arg, location_t loc)
> So in the old code we checked UNGT_EXPR, in the new code's slow path you
> check UNORDERED.  Was that change intentional?

The old FP code used UNORDERED and the new one was using ORDERED and negating the result.
I've replaced it with UNORDERED, but both are correct.

Thanks for the review,
I'll get the new patch out ASAP.

Tamar

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-12-15 10:16           ` Tamar Christina
@ 2016-12-15 19:37             ` Joseph Myers
  2016-12-19 17:24               ` Tamar Christina
  2016-12-19 20:34             ` Jeff Law
  1 sibling, 1 reply; 47+ messages in thread
From: Joseph Myers @ 2016-12-15 19:37 UTC (permalink / raw)
  To: Tamar Christina
  Cc: Jeff Law, GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

On Thu, 15 Dec 2016, Tamar Christina wrote:

> > Note that on some systems we even disable 64bit floating point support.
> > I suspect this check needs a little re-thinking as I don't think that
> > checking for a specific UNITS_PER_WORD is correct, nor is checking the
> > width of the type.  I'm not offhand sure what the test should be, just
> > that I think we need something better here.
> 
> I think what I really wanted to test here is if there was an integer 
> mode available which has the exact width as the floating point one. So I 
> have replaced this with just a call to int_mode_for_mode. Which is 
> probably more correct.

I think an integer mode should always exist - even in the case of TFmode 
on 32-bit systems (32-bit sparc / s390, for example, use TFmode long 
double for GNU/Linux, and it's supported as _Float128 and __float128 on 
32-bit x86).  It just be not be usable for arithmetic or declaring 
variables of that type.

I don't know whether TImode bitwise operations, such as generated by this 
fpclassify work, will get properly lowered to operations on supported 
narrower modes, but I hope so (clearly it's simpler if you can write 
things straightforwardly and have them cover this case of TFmode on 32-bit 
systems automatically through lowering elsewhere in the compiler, than if 
covering that case would require additional code - the more cases you 
cover, the more opportunity there is for glibc to use the built-in 
functions even with -fsignaling-nans).

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-12-15 19:37             ` Joseph Myers
@ 2016-12-19 17:24               ` Tamar Christina
  2017-01-18 16:15                 ` Jeff Law
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2016-12-19 17:24 UTC (permalink / raw)
  To: Joseph Myers
  Cc: Jeff Law, GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

[-- Attachment #1: Type: text/plain, Size: 2468 bytes --]

Hi All,

I've respun the patch with the feedback from Jeff and Joseph.

> I think an integer mode should always exist - even in the case of TFmode
> on 32-bit systems (32-bit sparc / s390, for example, use TFmode long
> double for GNU/Linux, and it's supported as _Float128 and __float128 on
> 32-bit x86).  It just be not be usable for arithmetic or declaring
> variables of that type.

You're right, so I test the integer mode I receive with scalar_mode_supported_p.
And this seems to do the right thing.

Thanks for all the comments so far!

Kind Regards,
Tamar
________________________________________
From: Joseph Myers <joseph@codesourcery.com>
Sent: Thursday, December 15, 2016 7:03:27 PM
To: Tamar Christina
Cc: Jeff Law; GCC Patches; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On Thu, 15 Dec 2016, Tamar Christina wrote:

> > Note that on some systems we even disable 64bit floating point support.
> > I suspect this check needs a little re-thinking as I don't think that
> > checking for a specific UNITS_PER_WORD is correct, nor is checking the
> > width of the type.  I'm not offhand sure what the test should be, just
> > that I think we need something better here.
>
> I think what I really wanted to test here is if there was an integer
> mode available which has the exact width as the floating point one. So I
> have replaced this with just a call to int_mode_for_mode. Which is
> probably more correct.

I think an integer mode should always exist - even in the case of TFmode
on 32-bit systems (32-bit sparc / s390, for example, use TFmode long
double for GNU/Linux, and it's supported as _Float128 and __float128 on
32-bit x86).  It just be not be usable for arithmetic or declaring
variables of that type.

I don't know whether TImode bitwise operations, such as generated by this
fpclassify work, will get properly lowered to operations on supported
narrower modes, but I hope so (clearly it's simpler if you can write
things straightforwardly and have them cover this case of TFmode on 32-bit
systems automatically through lowering elsewhere in the compiler, than if
covering that case would require additional code - the more cases you
cover, the more opportunity there is for glibc to use the built-in
functions even with -fsignaling-nans).

--
Joseph S. Myers
joseph@codesourcery.com

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: fpclassify-patch-gimple-3.patch --]
[-- Type: text/x-patch; name="fpclassify-patch-gimple-3.patch", Size: 69579 bytes --]

diff --git a/gcc/builtins.c b/gcc/builtins.c
index 3ac2d44148440b124559ba7cd3de483b7a74b72d..d8ff9c70ae6b9e72e09b8cbd9a0bd41b6830b83e 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -160,7 +160,6 @@ static tree fold_builtin_0 (location_t, tree);
 static tree fold_builtin_1 (location_t, tree, tree);
 static tree fold_builtin_2 (location_t, tree, tree, tree);
 static tree fold_builtin_3 (location_t, tree, tree, tree, tree);
-static tree fold_builtin_varargs (location_t, tree, tree*, int);
 
 static tree fold_builtin_strpbrk (location_t, tree, tree, tree);
 static tree fold_builtin_strstr (location_t, tree, tree, tree);
@@ -2202,19 +2201,8 @@ interclass_mathfn_icode (tree arg, tree fndecl)
   switch (DECL_FUNCTION_CODE (fndecl))
     {
     CASE_FLT_FN (BUILT_IN_ILOGB):
-      errno_set = true; builtin_optab = ilogb_optab; break;
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      builtin_optab = isinf_optab; break;
-    case BUILT_IN_ISNORMAL:
-    case BUILT_IN_ISFINITE:
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      /* These builtins have no optabs (yet).  */
+      errno_set = true;
+      builtin_optab = ilogb_optab;
       break;
     default:
       gcc_unreachable ();
@@ -2233,8 +2221,7 @@ interclass_mathfn_icode (tree arg, tree fndecl)
 }
 
 /* Expand a call to one of the builtin math functions that operate on
-   floating point argument and output an integer result (ilogb, isinf,
-   isnan, etc).
+   floating point argument and output an integer result (ilogb, etc).
    Return 0 if a normal call should be emitted rather than expanding the
    function in-line.  EXP is the expression that is a call to the builtin
    function; if convenient, the result should be placed in TARGET.  */
@@ -5997,11 +5984,7 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
     CASE_FLT_FN (BUILT_IN_ILOGB):
       if (! flag_unsafe_math_optimizations)
 	break;
-      gcc_fallthrough ();
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-    case BUILT_IN_ISNORMAL:
+
       target = expand_builtin_interclass_mathfn (exp, target);
       if (target)
 	return target;
@@ -6281,8 +6264,25 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
 	}
       break;
 
+    CASE_FLT_FN (BUILT_IN_ISINF):
+    case BUILT_IN_ISNAND32:
+    case BUILT_IN_ISNAND64:
+    case BUILT_IN_ISNAND128:
+    case BUILT_IN_ISNAN:
+    case BUILT_IN_ISINFD32:
+    case BUILT_IN_ISINFD64:
+    case BUILT_IN_ISINFD128:
+    case BUILT_IN_ISNORMAL:
+    case BUILT_IN_ISZERO:
+    case BUILT_IN_ISSUBNORMAL:
+    case BUILT_IN_FPCLASSIFY:
     case BUILT_IN_SETJMP:
-      /* This should have been lowered to the builtins below.  */
+    CASE_FLT_FN (BUILT_IN_FINITE):
+    case BUILT_IN_FINITED32:
+    case BUILT_IN_FINITED64:
+    case BUILT_IN_FINITED128:
+    case BUILT_IN_ISFINITE:
+      /* These should have been lowered to the builtins in gimple-low.c.  */
       gcc_unreachable ();
 
     case BUILT_IN_SETJMP_SETUP:
@@ -7622,184 +7622,19 @@ fold_builtin_modf (location_t loc, tree arg0, tree arg1, tree rettype)
   return NULL_TREE;
 }
 
-/* Given a location LOC, an interclass builtin function decl FNDECL
-   and its single argument ARG, return an folded expression computing
-   the same, or NULL_TREE if we either couldn't or didn't want to fold
-   (the latter happen if there's an RTL instruction available).  */
-
-static tree
-fold_builtin_interclass_mathfn (location_t loc, tree fndecl, tree arg)
-{
-  machine_mode mode;
-
-  if (!validate_arg (arg, REAL_TYPE))
-    return NULL_TREE;
-
-  if (interclass_mathfn_icode (arg, fndecl) != CODE_FOR_nothing)
-    return NULL_TREE;
-
-  mode = TYPE_MODE (TREE_TYPE (arg));
-
-  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
 
-  /* If there is no optab, try generic code.  */
-  switch (DECL_FUNCTION_CODE (fndecl))
-    {
-      tree result;
 
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      {
-	/* isinf(x) -> isgreater(fabs(x),DBL_MAX).  */
-	tree const isgr_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isgr_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	return result;
-      }
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-      {
-	/* isfinite(x) -> islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isle_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	/*result = fold_build2_loc (loc, UNGT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	result = fold_build1_loc (loc, TRUTH_NOT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  result);*/
-	return result;
-      }
-    case BUILT_IN_ISNORMAL:
-      {
-	/* isnormal(x) -> isgreaterequal(fabs(x),DBL_MIN) &
-	   islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	tree orig_arg, max_exp, min_exp;
-	machine_mode orig_mode = mode;
-	REAL_VALUE_TYPE rmax, rmin;
-	char buf[128];
-
-	orig_arg = arg = builtin_save_expr (arg);
-	if (is_ibm_extended)
-	  {
-	    /* Use double to test the normal range of IBM extended
-	       precision.  Emin for IBM extended precision is
-	       different to emin for IEEE double, being 53 higher
-	       since the low double exponent is at least 53 lower
-	       than the high double exponent.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	arg = fold_build1_loc (loc, ABS_EXPR, type, arg);
-
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&rmax, buf);
-	sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (orig_mode)->emin - 1);
-	real_from_string (&rmin, buf);
-	max_exp = build_real (type, rmax);
-	min_exp = build_real (type, rmin);
-
-	max_exp = build_call_expr (isle_fn, 2, arg, max_exp);
-	if (is_ibm_extended)
-	  {
-	    /* Testing the high end of the range is done just using
-	       the high double, using the same test as isfinite().
-	       For the subnormal end of the range we first test the
-	       high double, then if its magnitude is equal to the
-	       limit of 0x1p-969, we test whether the low double is
-	       non-zero and opposite sign to the high double.  */
-	    tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
-	    tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	    tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
-	    tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
-				       arg, min_exp);
-	    tree as_complex = build1 (VIEW_CONVERT_EXPR,
-				      complex_double_type_node, orig_arg);
-	    tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
-	    tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
-	    tree zero = build_real (type, dconst0);
-	    tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
-	    tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
-	    tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
-	    tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
-				      fold_build3 (COND_EXPR,
-						   integer_type_node,
-						   hilt, logt, lolt));
-	    eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
-				  eq_min, ok_lo);
-	    min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
-				   gt_min, eq_min);
-	  }
-	else
-	  {
-	    tree const isge_fn
-	      = builtin_decl_explicit (BUILT_IN_ISGREATEREQUAL);
-	    min_exp = build_call_expr (isge_fn, 2, arg, min_exp);
-	  }
-	result = fold_build2 (BIT_AND_EXPR, integer_type_node,
-			      max_exp, min_exp);
-	return result;
-      }
-    default:
-      break;
-    }
-
-  return NULL_TREE;
-}
-
-/* Fold a call to __builtin_isnan(), __builtin_isinf, __builtin_finite.
+/* Fold a call to __builtin_isinf_sign.
    ARG is the argument for the call.  */
 
 static tree
-fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
+fold_builtin_classify (location_t loc, tree arg, int builtin_index)
 {
-  tree type = TREE_TYPE (TREE_TYPE (fndecl));
-
   if (!validate_arg (arg, REAL_TYPE))
     return NULL_TREE;
 
   switch (builtin_index)
     {
-    case BUILT_IN_ISINF:
-      if (!HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      return NULL_TREE;
-
     case BUILT_IN_ISINF_SIGN:
       {
 	/* isinf_sign(x) -> isinf(x) ? (signbit(x) ? -1 : 1) : 0 */
@@ -7832,106 +7667,11 @@ fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
 	return tmp;
       }
 
-    case BUILT_IN_ISFINITE:
-      if (!HONOR_NANS (arg)
-	  && !HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_one_node, arg);
-
-      return NULL_TREE;
-
-    case BUILT_IN_ISNAN:
-      if (!HONOR_NANS (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      {
-	bool is_ibm_extended = MODE_COMPOSITE_P (TYPE_MODE (TREE_TYPE (arg)));
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    arg = fold_build1_loc (loc, NOP_EXPR, double_type_node, arg);
-	  }
-      }
-      arg = builtin_save_expr (arg);
-      return fold_build2_loc (loc, UNORDERED_EXPR, type, arg, arg);
-
     default:
       gcc_unreachable ();
     }
 }
 
-/* Fold a call to __builtin_fpclassify(int, int, int, int, int, ...).
-   This builtin will generate code to return the appropriate floating
-   point classification depending on the value of the floating point
-   number passed in.  The possible return values must be supplied as
-   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
-   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
-   one floating point argument which is "type generic".  */
-
-static tree
-fold_builtin_fpclassify (location_t loc, tree *args, int nargs)
-{
-  tree fp_nan, fp_infinite, fp_normal, fp_subnormal, fp_zero,
-    arg, type, res, tmp;
-  machine_mode mode;
-  REAL_VALUE_TYPE r;
-  char buf[128];
-
-  /* Verify the required arguments in the original call.  */
-  if (nargs != 6
-      || !validate_arg (args[0], INTEGER_TYPE)
-      || !validate_arg (args[1], INTEGER_TYPE)
-      || !validate_arg (args[2], INTEGER_TYPE)
-      || !validate_arg (args[3], INTEGER_TYPE)
-      || !validate_arg (args[4], INTEGER_TYPE)
-      || !validate_arg (args[5], REAL_TYPE))
-    return NULL_TREE;
-
-  fp_nan = args[0];
-  fp_infinite = args[1];
-  fp_normal = args[2];
-  fp_subnormal = args[3];
-  fp_zero = args[4];
-  arg = args[5];
-  type = TREE_TYPE (arg);
-  mode = TYPE_MODE (type);
-  arg = builtin_save_expr (fold_build1_loc (loc, ABS_EXPR, type, arg));
-
-  /* fpclassify(x) ->
-       isnan(x) ? FP_NAN :
-         (fabs(x) == Inf ? FP_INFINITE :
-	   (fabs(x) >= DBL_MIN ? FP_NORMAL :
-	     (x == 0 ? FP_ZERO : FP_SUBNORMAL))).  */
-
-  tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-		     build_real (type, dconst0));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node,
-		     tmp, fp_zero, fp_subnormal);
-
-  sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (mode)->emin - 1);
-  real_from_string (&r, buf);
-  tmp = fold_build2_loc (loc, GE_EXPR, integer_type_node,
-		     arg, build_real (type, r));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, fp_normal, res);
-
-  if (HONOR_INFINITIES (mode))
-    {
-      real_inf (&r);
-      tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-			 build_real (type, r));
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp,
-			 fp_infinite, res);
-    }
-
-  if (HONOR_NANS (mode))
-    {
-      tmp = fold_build2_loc (loc, ORDERED_EXPR, integer_type_node, arg, arg);
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, res, fp_nan);
-    }
-
-  return res;
-}
-
 /* Fold a call to an unordered comparison function such as
    __builtin_isgreater().  FNDECL is the FUNCTION_DECL for the function
    being called and ARG0 and ARG1 are the arguments for the call.
@@ -8232,40 +7972,8 @@ fold_builtin_1 (location_t loc, tree fndecl, tree arg0)
     case BUILT_IN_ISDIGIT:
       return fold_builtin_isdigit (loc, arg0);
 
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISFINITE:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISFINITE);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    case BUILT_IN_ISNORMAL:
-      return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-
     case BUILT_IN_ISINF_SIGN:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF_SIGN);
-
-    CASE_FLT_FN (BUILT_IN_ISNAN):
-    case BUILT_IN_ISNAND32:
-    case BUILT_IN_ISNAND64:
-    case BUILT_IN_ISNAND128:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISNAN);
+      return fold_builtin_classify (loc, arg0, BUILT_IN_ISINF_SIGN);
 
     case BUILT_IN_FREE:
       if (integer_zerop (arg0))
@@ -8465,7 +8173,6 @@ fold_builtin_n (location_t loc, tree fndecl, tree *args, int nargs, bool)
       ret = fold_builtin_3 (loc, fndecl, args[0], args[1], args[2]);
       break;
     default:
-      ret = fold_builtin_varargs (loc, fndecl, args, nargs);
       break;
     }
   if (ret)
@@ -9422,37 +9129,6 @@ fold_builtin_object_size (tree ptr, tree ost)
   return NULL_TREE;
 }
 
-/* Builtins with folding operations that operate on "..." arguments
-   need special handling; we need to store the arguments in a convenient
-   data structure before attempting any folding.  Fortunately there are
-   only a few builtins that fall into this category.  FNDECL is the
-   function, EXP is the CALL_EXPR for the call.  */
-
-static tree
-fold_builtin_varargs (location_t loc, tree fndecl, tree *args, int nargs)
-{
-  enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl);
-  tree ret = NULL_TREE;
-
-  switch (fcode)
-    {
-    case BUILT_IN_FPCLASSIFY:
-      ret = fold_builtin_fpclassify (loc, args, nargs);
-      break;
-
-    default:
-      break;
-    }
-  if (ret)
-    {
-      ret = build1 (NOP_EXPR, TREE_TYPE (ret), ret);
-      SET_EXPR_LOCATION (ret, loc);
-      TREE_NO_WARNING (ret) = 1;
-      return ret;
-    }
-  return NULL_TREE;
-}
-
 /* Initialize format string characters in the target charset.  */
 
 bool
diff --git a/gcc/builtins.def b/gcc/builtins.def
index 219feebd3aebefbd079bf37cc801453cd1965e00..91aa6f37fa098777bc794bad56d8c561ab9fdc44 100644
--- a/gcc/builtins.def
+++ b/gcc/builtins.def
@@ -831,6 +831,8 @@ DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFL, "isinfl", BT_FN_INT_LONGDOUBLE, ATTR_CO
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD32, "isinfd32", BT_FN_INT_DFLOAT32, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD64, "isinfd64", BT_FN_INT_DFLOAT64, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD128, "isinfd128", BT_FN_INT_DFLOAT128, ATTR_CONST_NOTHROW_LEAF_LIST)
+DEF_GCC_BUILTIN        (BUILT_IN_ISZERO, "iszero", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
+DEF_GCC_BUILTIN        (BUILT_IN_ISSUBNORMAL, "issubnormal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_C99_C90RES_BUILTIN (BUILT_IN_ISNAN, "isnan", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANF, "isnanf", BT_FN_INT_FLOAT, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANL, "isnanl", BT_FN_INT_LONGDOUBLE, ATTR_CONST_NOTHROW_LEAF_LIST)
diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi
index 0669f7999beb078822e471352036d8f13517812d..c240bbe9a8fd595a0e7e2b41fb708efae1e5279a 100644
--- a/gcc/doc/extend.texi
+++ b/gcc/doc/extend.texi
@@ -10433,6 +10433,10 @@ in the Cilk Plus language manual which can be found at
 @findex __builtin_isgreater
 @findex __builtin_isgreaterequal
 @findex __builtin_isinf_sign
+@findex __builtin_isinf
+@findex __builtin_isnan
+@findex __builtin_iszero
+@findex __builtin_issubnormal
 @findex __builtin_isless
 @findex __builtin_islessequal
 @findex __builtin_islessgreater
@@ -11496,7 +11500,54 @@ constant values and they must appear in this order: @code{FP_NAN},
 @code{FP_INFINITE}, @code{FP_NORMAL}, @code{FP_SUBNORMAL} and
 @code{FP_ZERO}.  The ellipsis is for exactly one floating-point value
 to classify.  GCC treats the last argument as type-generic, which
-means it does not do default promotion from float to double.
+means it does not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnan (...)
+This built-in implements the C99 isnan functionality which checks if
+the given argument represents a NaN.  The return value of the
+function will either be a 0 (false) or a 1 (true).
+On most systems, when an IEEE 754 floating-point type is used this
+built-in does not produce a signal when a signaling NaN is used.
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isinf (...)
+This built-in implements the C99 isinf functionality which checks if
+the given argument represents an infinite number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnormal (...)
+This built-in implements the C99 isnormal functionality which checks if
+the given argument represents a normal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_iszero (...)
+This built-in implements the TS 18661-1:2014 iszero functionality which checks if
+the given argument represents the number 0 or -0.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_issubnormal (...)
+This built-in implements the TS 18661-1:2014 issubnormal functionality which checks if
+the given argument represents a subnormal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
 @end deftypefn
 
 @deftypefn {Built-in Function} double __builtin_inf (void)
diff --git a/gcc/gimple-low.c b/gcc/gimple-low.c
index 64752b67b86b3d01df5f5661e4666df98b7b91d1..ceaee295c6e81531dbe047c569dd18332179ccbf 100644
--- a/gcc/gimple-low.c
+++ b/gcc/gimple-low.c
@@ -30,6 +30,8 @@ along with GCC; see the file COPYING3.  If not see
 #include "calls.h"
 #include "gimple-iterator.h"
 #include "gimple-low.h"
+#include "stor-layout.h"
+#include "target.h"
 
 /* The differences between High GIMPLE and Low GIMPLE are the
    following:
@@ -72,6 +74,13 @@ static void lower_gimple_bind (gimple_stmt_iterator *, struct lower_data *);
 static void lower_try_catch (gimple_stmt_iterator *, struct lower_data *);
 static void lower_gimple_return (gimple_stmt_iterator *, struct lower_data *);
 static void lower_builtin_setjmp (gimple_stmt_iterator *);
+static void lower_builtin_fpclassify (gimple_stmt_iterator *);
+static void lower_builtin_isnan (gimple_stmt_iterator *);
+static void lower_builtin_isinfinite (gimple_stmt_iterator *);
+static void lower_builtin_isnormal (gimple_stmt_iterator *);
+static void lower_builtin_iszero (gimple_stmt_iterator *);
+static void lower_builtin_issubnormal (gimple_stmt_iterator *);
+static void lower_builtin_isfinite (gimple_stmt_iterator *);
 static void lower_builtin_posix_memalign (gimple_stmt_iterator *);
 
 
@@ -330,18 +339,69 @@ lower_stmt (gimple_stmt_iterator *gsi, struct lower_data *data)
 	if (decl
 	    && DECL_BUILT_IN_CLASS (decl) == BUILT_IN_NORMAL)
 	  {
-	    if (DECL_FUNCTION_CODE (decl) == BUILT_IN_SETJMP)
+	    switch (DECL_FUNCTION_CODE (decl))
 	      {
+	      case BUILT_IN_SETJMP:
 		lower_builtin_setjmp (gsi);
 		data->cannot_fallthru = false;
 		return;
-	      }
-	    else if (DECL_FUNCTION_CODE (decl) == BUILT_IN_POSIX_MEMALIGN
-		     && flag_tree_bit_ccp
-		     && gimple_builtin_call_types_compatible_p (stmt, decl))
-	      {
-		lower_builtin_posix_memalign (gsi);
+
+	      case BUILT_IN_POSIX_MEMALIGN:
+		if (flag_tree_bit_ccp
+		    && gimple_builtin_call_types_compatible_p (stmt, decl))
+		  {
+			lower_builtin_posix_memalign (gsi);
+			return;
+		  }
+		break;
+
+	      case BUILT_IN_FPCLASSIFY:
+		lower_builtin_fpclassify (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_ISINF):
+	      case BUILT_IN_ISINFD32:
+	      case BUILT_IN_ISINFD64:
+	      case BUILT_IN_ISINFD128:
+		lower_builtin_isinfinite (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNAND32:
+	      case BUILT_IN_ISNAND64:
+	      case BUILT_IN_ISNAND128:
+	      CASE_FLT_FN (BUILT_IN_ISNAN):
+		lower_builtin_isnan (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNORMAL:
+		lower_builtin_isnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISZERO:
+		lower_builtin_iszero (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISSUBNORMAL:
+		lower_builtin_issubnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_FINITE):
+	      case BUILT_IN_FINITED32:
+	      case BUILT_IN_FINITED64:
+	      case BUILT_IN_FINITED128:
+	      case BUILT_IN_ISFINITE:
+		lower_builtin_isfinite (gsi);
+		data->cannot_fallthru = false;
 		return;
+
+	      default:
+		break;
 	      }
 	  }
 
@@ -822,6 +882,819 @@ lower_builtin_setjmp (gimple_stmt_iterator *gsi)
   gsi_remove (gsi, false);
 }
 
+/* This function will if ARG is not already a variable or SSA_NAME,
+   create a new temporary TMP and bind ARG to TMP.  This new binding is then
+   emitted into SEQ and TMP is returned.  */
+static tree
+emit_tree_and_return_var (gimple_seq *seq, tree arg)
+{
+  if (TREE_CODE (arg) == SSA_NAME || VAR_P (arg))
+    return arg;
+
+  tree tmp = create_tmp_reg (TREE_TYPE (arg));
+  gassign *stm = gimple_build_assign (tmp, arg);
+  gimple_seq_add_stmt (seq, stm);
+  return tmp;
+}
+
+/* This function builds an if statement that ends up using explicit branches
+   instead of becoming a ternary conditional select.  This function assumes you
+   will fall through to the next statements after the condition for the false
+   branch.  The code emitted looks like:
+
+   if (COND)
+     RESULT_VARIABLE = TRUE_BRANCH
+     GOTO EXIT_LABEL
+   else
+     ...
+
+   SEQ is the gimple sequence/buffer to emit any new bindings to.
+   RESULT_VARIABLE is the value to set if COND.
+   EXIT_LABEL is the label to jump to in case COND.
+   COND is condition to use in the conditional statement of the if.
+   TRUE_BRANCH is the value to set RESULT_VARIABLE to if COND.  */
+static void
+emit_tree_cond (gimple_seq *seq, tree result_variable, tree exit_label,
+		tree cond, tree true_branch)
+{
+  /* Create labels for fall through.  */
+  tree true_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree false_label = create_artificial_label (UNKNOWN_LOCATION);
+  gcond *stmt = gimple_build_cond_from_tree (cond, true_label, false_label);
+  gimple_seq_add_stmt (seq, stmt);
+
+  /* Build the true case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (true_label));
+  tree value = TREE_CONSTANT (true_branch)
+	     ? true_branch
+	     : emit_tree_and_return_var (seq, true_branch);
+  gimple_seq_add_stmt (seq, gimple_build_assign (result_variable, value));
+  gimple_seq_add_stmt (seq, gimple_build_goto (exit_label));
+
+  /* Build the false case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (false_label));
+}
+
+/* This function returns a variable containing an reinterpreted ARG as an
+   integer.
+
+   SEQ is the gimple sequence/buffer to write any new bindings to.
+   ARG is the floating point number to reinterpret as an integer.
+   LOC is the location to use when doing folding operations.  */
+static tree
+get_num_as_int (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  /* Re-interpret the float as an unsigned integer type
+     with equal precision.  */
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+  tree conv_arg = fold_build1_loc (loc, VIEW_CONVERT_EXPR, int_arg_type, arg);
+  return emit_tree_and_return_var (seq, conv_arg);
+}
+
+/* Check if ARG which is the floating point number being classified is close
+   enough to IEEE 754 format to be able to go in the early exit code.  */
+static bool
+use_ieee_int_mode (tree arg)
+{
+  tree type = TREE_TYPE (arg);
+  machine_mode mode = TYPE_MODE (type);
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  machine_mode imode = int_mode_for_mode (mode);
+
+  return (format->is_binary_ieee_compatible
+	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
+	  /* Check if there's a usable integer mode.  */
+	  && imode != BLKmode
+	  && targetm.scalar_mode_supported_p (imode));
+}
+
+/* Perform some IBM extended format fixups on ARG for use by FP functions.
+   This is done by ignoring the lower 64 bits of the number.
+
+   MODE is the machine mode of ARG.
+   TYPE is the type of ARG.
+   LOC is the location to be used in fold functions.  Usually is the location
+   of the definition of ARG.  */
+static bool
+perform_ibm_extended_fixups (tree *arg, machine_mode *mode,
+			     tree *type, location_t loc)
+{
+  bool is_ibm_extended = MODE_COMPOSITE_P (*mode);
+  if (is_ibm_extended)
+    {
+      /* NaN and Inf are encoded in the high-order double value
+	 only.  The low-order value is not significant.  */
+      *type = double_type_node;
+      *mode = DFmode;
+      *arg = fold_build1_loc (loc, NOP_EXPR, *type, *arg);
+    }
+
+  return is_ibm_extended;
+}
+
+/* Generates code to check if ARG is a normal number.  For the FP case we check
+   MIN_VALUE(ARG) <= ABS(ARG) > INF and for the INT value we check the exp and
+   mantissa bits.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_normal (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const tree bool_type = boolean_type_node;
+
+  /* Perform IBM extended format fixups if required.  */
+  bool is_ibm_extended = perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree orig_arg = arg;
+      if (TREE_CODE (arg) != SSA_NAME
+	  && (TREE_ADDRESSABLE (arg) != 0
+	    || (TREE_CODE (arg) != PARM_DECL
+	        && (!VAR_P (arg) || TREE_STATIC (arg)))))
+	orig_arg = save_expr (arg);
+
+      REAL_VALUE_TYPE rinf, rmin;
+      tree arg_p
+        = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      char buf[128];
+      real_inf (&rinf);
+      get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&rmin, buf);
+
+      tree inf_exp = fold_build2_loc (loc, LT_EXPR, bool_type, arg_p,
+				      build_real (type, rinf));
+      tree min_exp = build_real (type, rmin);
+      if (is_ibm_extended)
+	{
+	  /* Testing the high end of the range is done just using
+	     the high double, using the same test as isfinite().
+	     For the subnormal end of the range we first test the
+	     high double, then if its magnitude is equal to the
+	     limit of 0x1p-969, we test whether the low double is
+	     non-zero and opposite sign to the high double.  */
+	  tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
+	  tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
+	  tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
+	  tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
+				     arg, min_exp);
+	  tree as_complex = build1 (VIEW_CONVERT_EXPR,
+				    complex_double_type_node, orig_arg);
+	  tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
+	  tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
+	  tree zero = build_real (type, dconst0);
+	  tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
+	  tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
+	  tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
+	  tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
+				    fold_build3 (COND_EXPR,
+						 integer_type_node,
+						 hilt, logt, lolt));
+	  eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
+				eq_min, ok_lo);
+	  min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
+				 gt_min, eq_min);
+	}
+	else
+	{
+	  min_exp = fold_build2_loc (loc, GE_EXPR, bool_type, arg_p,
+				     min_exp);
+	}
+
+      tree res
+	= fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			   emit_tree_and_return_var (seq, min_exp),
+			   emit_tree_and_return_var (seq, inf_exp));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const tree int_type = unsigned_type_node;
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  const int exp_mask  = (1 << exp_bits) - 1;
+
+  /* Get the number reinterpreted as an integer.  */
+  tree int_arg = get_num_as_int (seq, arg, loc);
+
+  /* Extract exp bits from the float, where we expect the exponent to be.
+     We create a new type because BIT_FIELD_REF does not allow you to
+     extract less bits than the precision of the storage variable.  */
+  tree exp_tmp
+    = fold_build3_loc (loc, BIT_FIELD_REF,
+		       build_nonstandard_integer_type (exp_bits, true),
+		       int_arg,
+		       build_int_cstu (int_type, exp_bits),
+		       build_int_cstu (int_type, format->p - 1));
+  tree exp_bitfield = emit_tree_and_return_var (seq, exp_tmp);
+
+  /* Re-interpret the extracted exponent bits as a 32 bit int.
+     This allows us to continue doing operations as int_type.  */
+  tree exp
+    = emit_tree_and_return_var (seq, fold_build1_loc (loc, NOP_EXPR, int_type,
+						      exp_bitfield));
+
+  /* exp_mask & ~1.  */
+  tree mask_check
+     = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+			build_int_cstu (int_type, exp_mask),
+			fold_build1_loc (loc, BIT_NOT_EXPR, int_type,
+					 build_int_cstu (int_type, 1)));
+
+  /* (exp + 1) & mask_check.
+     Check to see if exp is not all 0 or all 1.  */
+  tree exp_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+		       emit_tree_and_return_var (seq,
+				fold_build2_loc (loc, PLUS_EXPR, int_type, exp,
+						 build_int_cstu (int_type, 1))),
+		       mask_check);
+
+  tree res = fold_build2_loc (loc, NE_EXPR, boolean_type_node,
+			      build_int_cstu (int_type, 0),
+			      emit_tree_and_return_var (seq, exp_check));
+
+  return emit_tree_and_return_var (seq, res);
+}
+
+/* Generates code to check if ARG is a zero. For both the FP and INT case we
+   check if ARG == 0 (modulo sign bit).  Returns a variable containing a boolean
+   which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_zero (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+  machine_mode mode = TYPE_MODE (type);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree res = fold_build2_loc (loc, EQ_EXPR, boolean_type_node, arg,
+				  build_real (type, dconst0));
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* num << 1 == 0.
+     This checks to see if the number is zero.  */
+  tree zero_check
+    = fold_build2_loc (loc, EQ_EXPR, boolean_type_node,
+		       build_int_cstu (int_arg_type, 0),
+		       emit_tree_and_return_var (seq, int_arg));
+
+  return emit_tree_and_return_var (seq, zero_check);
+}
+
+/* Generates code to check if ARG is a subnormal number.  In the FP case we test
+   fabs (ARG) != 0 && fabs (ARG) < MIN_VALUE (ARG) and in the INT case we check
+   the exp and mantissa bits on ARG. Returns a variable containing a boolean
+   which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_subnormal (gimple_seq *seq, tree arg, location_t loc)
+{
+  const tree bool_type = boolean_type_node;
+
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      REAL_VALUE_TYPE r;
+      char buf[128];
+      get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&r, buf);
+      tree subnorm = fold_build2_loc (loc, LT_EXPR, bool_type,
+				      arg_p, build_real (type, r));
+
+      tree zero = fold_build2_loc (loc, GT_EXPR, bool_type, arg_p,
+				   build_real (type, dconst0));
+
+      tree res
+	= fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			   emit_tree_and_return_var (seq, subnorm),
+			   emit_tree_and_return_var (seq, zero));
+
+      return emit_tree_and_return_var (seq, res);
+  }
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check for a zero exponent and non-zero mantissa.
+     This can be done with two comparisons by first apply a
+     removing the sign bit and checking if the value is larger
+     than the mantissa mask.  */
+
+  /* This creates a mask to be used to check the mantissa value in the shifted
+     integer representation of the fpnum.  */
+  tree significant_bit = build_int_cstu (int_arg_type, format->p - 1);
+  tree mantissa_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					significant_bit),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check if exponent is zero and mantissa is not.  */
+  tree subnorm_cond_tmp
+    = fold_build2_loc (loc, LE_EXPR, bool_type,
+		       emit_tree_and_return_var (seq, int_arg),
+		       mantissa_mask);
+
+  tree subnorm_cond = emit_tree_and_return_var (seq, subnorm_cond_tmp);
+
+  tree zero_cond
+    = fold_build2_loc (loc, GT_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, int_arg),
+		       build_int_cstu (int_arg_type, 0));
+
+  tree subnorm_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, subnorm_cond),
+		       emit_tree_and_return_var (seq, zero_cond));
+
+  return emit_tree_and_return_var (seq, subnorm_check);
+}
+
+/* Generates code to check if ARG is an infinity.  In the FP case we test
+   FABS(ARG) == INF and in the INT case we check the bits on the exp and
+   mantissa.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_infinity (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_INFINITIES (mode))
+    {
+      return build_int_cst (bool_type, false);
+    }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+      REAL_VALUE_TYPE r;
+      real_inf (&r);
+      tree res = fold_build2_loc (loc, EQ_EXPR, bool_type, arg_p,
+				  build_real (type, r));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0.  */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, EQ_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Generates code to check if ARG is a finite number.  In the FP case we check
+   if FABS(ARG) <= MAX_VALUE(ARG) and in the INT case we check the exp and
+   mantissa bits.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_finite (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (arg) && !HONOR_INFINITIES (arg))
+    {
+      return build_int_cst (bool_type, true);
+    }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      REAL_VALUE_TYPE rmax;
+      char buf[128];
+      get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&rmax, buf);
+
+      tree res = fold_build2_loc (loc, LE_EXPR, bool_type, arg_p,
+				  build_real (type, rmax));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0. */
+  tree inf_check_tmp
+    = fold_build2_loc (loc, LT_EXPR, bool_type,
+		       emit_tree_and_return_var (seq, int_arg),
+		       inf_mask);
+
+  tree inf_check = emit_tree_and_return_var (seq, inf_check_tmp);
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Generates code to check if ARG is a NaN. In the FP case we simply check if
+   ARG != ARG and in the INT case we check the bits in the exp and mantissa.
+   Returns a variable containing a boolean which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_nan (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (mode))
+    {
+      return build_int_cst (bool_type, false);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      tree res
+	= fold_build2_loc (loc, UNORDERED_EXPR, bool_type,arg_p, arg_p);
+
+      return emit_tree_and_return_var (seq, res);
+  }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is not 0.  */
+  tree nan_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, GT_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, nan_check);
+}
+
+/* Validate a single argument ARG against a tree code CODE representing
+   a type.  */
+/* Validates a single argument from the arguments list CALL at position INDEX.
+   The extracted parameter is compared against the expected type CODE.
+
+   A boolean is returned indicating if the parameter exist and if of the
+   expected type.  */
+static bool
+gimple_validate_arg (gimple* call, int index, enum tree_code code)
+{
+  const tree arg = gimple_call_arg (call, index);
+  if (!arg)
+    return false;
+  else if (code == POINTER_TYPE)
+    return POINTER_TYPE_P (TREE_TYPE (arg));
+  else if (code == INTEGER_TYPE)
+    return INTEGRAL_TYPE_P (TREE_TYPE (arg));
+  return code == TREE_CODE (TREE_TYPE (arg));
+}
+
+/* Lowers calls to __builtin_fpclassify to
+   fpclassify (x) ->
+     isnormal(x) ? FP_NORMAL :
+       iszero (x) ? FP_ZERO :
+	 isnan (x) ? FP_NAN :
+	   isinfinite (x) ? FP_INFINITE :
+	     FP_SUBNORMAL.
+
+   The code may use integer arithmentic if it decides
+   that the produced assembly would be faster. This can only be done
+   for numbers that are similar to IEEE-754 in format.
+
+   This builtin will generate code to return the appropriate floating
+   point classification depending on the value of the floating point
+   number passed in.  The possible return values must be supplied as
+   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
+   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
+   one floating point argument which is "type generic".
+
+   GSI is the gimple iterator containing the fpclassify call to lower.
+   The call will be expanded and replaced inline in the given GSI.  */
+static void
+lower_builtin_fpclassify (gimple_stmt_iterator *gsi)
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 6
+      || !gimple_validate_arg (call, 0, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 1, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 2, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 3, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 4, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 5, REAL_TYPE))
+    return;
+
+  /* Collect the arguments from the call.  */
+  tree fp_nan = gimple_call_arg (call, 0);
+  tree fp_infinite = gimple_call_arg (call, 1);
+  tree fp_normal = gimple_call_arg (call, 2);
+  tree fp_subnormal = gimple_call_arg (call, 3);
+  tree fp_zero = gimple_call_arg (call, 4);
+  tree arg = gimple_call_arg (call, 5);
+
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (TREE_TYPE (orig_dest));
+
+  emit_tree_cond (&body, dest, done_label,
+		  is_normal (&body, arg, loc), fp_normal);
+  emit_tree_cond (&body, dest, done_label,
+		  is_zero (&body, arg, loc), fp_zero);
+  emit_tree_cond (&body, dest, done_label,
+		  is_nan (&body, arg, loc), fp_nan);
+  emit_tree_cond (&body, dest, done_label,
+		  is_infinity (&body, arg, loc), fp_infinite);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to is_subnormal.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, fp_subnormal));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+    {
+      gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+    }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+
+  /* Remove the call to __builtin_fpclassify.  */
+  gsi_remove (gsi, false);
+}
+
+/* Generic wrapper for the is_nan, is_normal, is_subnormal, is_zero, etc.
+   All these functions have the same setup. The wrapper validates the parameter
+   and also creates the branches and labels required to properly invoke.
+   This has been generalize and the function to call is passed as argument FNDECL.
+
+   GSI is the gimple iterator containing the fpclassify call to lower.
+   The call will be expanded and replaced inline in the given GSI.  */
+static void
+gen_call_fp_builtin (gimple_stmt_iterator *gsi,
+		     tree (*fndecl)(gimple_seq *, tree, location_t))
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 1
+      || !gimple_validate_arg (call, 0, REAL_TYPE))
+    return;
+
+  tree arg = gimple_call_arg (call, 0);
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  tree type = TREE_TYPE (orig_dest);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+      dest = create_tmp_reg (type);
+
+  tree t_true = build_int_cst (type, true);
+  tree t_false = build_int_cst (type, false);
+
+  emit_tree_cond (&body, dest, done_label,
+		  fndecl (&body, arg, loc), t_true);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to false.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, t_false));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+  {
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+  }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+  /* Remove the call to the builtin.  */
+  gsi_remove (gsi, false);
+}
+
+/* Lower and expand calls to __builtin_isnan in GSI.  */
+static void
+lower_builtin_isnan (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_nan);
+}
+
+/* Lower and expand calls to __builtin_isinfinite in GSI.  */
+static void
+lower_builtin_isinfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_infinity);
+}
+
+/* Lower and expand calls to __builtin_isnormal in GSI.  */
+static void
+lower_builtin_isnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_normal);
+}
+
+/* Lower and expand calls to __builtin_iszero in GSI.  */
+static void
+lower_builtin_iszero (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_zero);
+}
+
+/* Lower and expand calls to __builtin_issubnormal in GSI.  */
+static void
+lower_builtin_issubnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_subnormal);
+}
+
+/* Lower and expand calls to __builtin_isfinite in GSI.  */
+static void
+lower_builtin_isfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_finite);
+}
+
 /* Lower calls to posix_memalign to
      res = posix_memalign (ptr, align, size);
      if (res == 0)
diff --git a/gcc/real.h b/gcc/real.h
index 59af580e78f2637be84f71b98b45ec6611053222..4b1b92138e07f43a175a2cbee4d952afad5898f7 100644
--- a/gcc/real.h
+++ b/gcc/real.h
@@ -161,6 +161,19 @@ struct real_format
   bool has_signed_zero;
   bool qnan_msb_set;
   bool canonical_nan_lsbs_set;
+
+  /* This flag indicates whether the format is suitable for the optimized
+     code paths for the __builtin_fpclassify function and friends.  For
+     this, the format must be a base 2 representation with the sign bit as
+     the most-significant bit followed by (exp <= 32) exponent bits
+     followed by the mantissa bits.  It must be possible to interpret the
+     bits of the floating-point representation as an integer.  NaNs and
+     INFs (if available) must be represented by the same schema used by
+     IEEE 754.  (NaNs must be represented by an exponent with all bits 1,
+     any mantissa except all bits 0 and any sign bit.  +INF and -INF must be
+     represented by an exponent with all bits 1, a mantissa with all bits 0 and
+     a sign bit of 0 and 1 respectively.)  */
+  bool is_binary_ieee_compatible;
   const char *name;
 };
 
@@ -511,6 +524,11 @@ extern bool real_isinteger (const REAL_VALUE_TYPE *, HOST_WIDE_INT *);
    float string.  BUF must be large enough to contain the result.  */
 extern void get_max_float (const struct real_format *, char *, size_t);
 
+/* Write into BUF the smallest positive normalized number x,
+   such that b**(x-1) is normalized.  BUF must be large enough
+   to contain the result.  */
+extern void get_min_float (const struct real_format *, char *, size_t);
+
 #ifndef GENERATOR_FILE
 /* real related routines.  */
 extern wide_int real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
diff --git a/gcc/real.c b/gcc/real.c
index 66e88e2ad366f7848609d157074c80420d778bcf..20c907a6d543c73ba62aa9a8ddf6973d82de7832 100644
--- a/gcc/real.c
+++ b/gcc/real.c
@@ -3052,6 +3052,7 @@ const struct real_format ieee_single_format =
     true,
     true,
     false,
+    true,
     "ieee_single"
   };
 
@@ -3075,6 +3076,7 @@ const struct real_format mips_single_format =
     true,
     false,
     true,
+    true,
     "mips_single"
   };
 
@@ -3098,6 +3100,7 @@ const struct real_format motorola_single_format =
     true,
     true,
     true,
+    true,
     "motorola_single"
   };
 
@@ -3132,6 +3135,7 @@ const struct real_format spu_single_format =
     true,
     false,
     false,
+    false,
     "spu_single"
   };
 \f
@@ -3343,6 +3347,7 @@ const struct real_format ieee_double_format =
     true,
     true,
     false,
+    true,
     "ieee_double"
   };
 
@@ -3366,6 +3371,7 @@ const struct real_format mips_double_format =
     true,
     false,
     true,
+    true,
     "mips_double"
   };
 
@@ -3389,6 +3395,7 @@ const struct real_format motorola_double_format =
     true,
     true,
     true,
+    true,
     "motorola_double"
   };
 \f
@@ -3735,6 +3742,7 @@ const struct real_format ieee_extended_motorola_format =
     true,
     true,
     true,
+    false,
     "ieee_extended_motorola"
   };
 
@@ -3758,6 +3766,7 @@ const struct real_format ieee_extended_intel_96_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96"
   };
 
@@ -3781,6 +3790,7 @@ const struct real_format ieee_extended_intel_128_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_128"
   };
 
@@ -3806,6 +3816,7 @@ const struct real_format ieee_extended_intel_96_round_53_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96_round_53"
   };
 \f
@@ -3896,6 +3907,7 @@ const struct real_format ibm_extended_format =
     true,
     true,
     false,
+    false,
     "ibm_extended"
   };
 
@@ -3919,6 +3931,7 @@ const struct real_format mips_extended_format =
     true,
     false,
     true,
+    false,
     "mips_extended"
   };
 
@@ -4184,6 +4197,7 @@ const struct real_format ieee_quad_format =
     true,
     true,
     false,
+    true,
     "ieee_quad"
   };
 
@@ -4207,6 +4221,7 @@ const struct real_format mips_quad_format =
     true,
     false,
     true,
+    true,
     "mips_quad"
   };
 \f
@@ -4509,6 +4524,7 @@ const struct real_format vax_f_format =
     false,
     false,
     false,
+    false,
     "vax_f"
   };
 
@@ -4532,6 +4548,7 @@ const struct real_format vax_d_format =
     false,
     false,
     false,
+    false,
     "vax_d"
   };
 
@@ -4555,6 +4572,7 @@ const struct real_format vax_g_format =
     false,
     false,
     false,
+    false,
     "vax_g"
   };
 \f
@@ -4633,6 +4651,7 @@ const struct real_format decimal_single_format =
     true,
     true,
     false,
+    false,
     "decimal_single"
   };
 
@@ -4657,6 +4676,7 @@ const struct real_format decimal_double_format =
     true,
     true,
     false,
+    false,
     "decimal_double"
   };
 
@@ -4681,6 +4701,7 @@ const struct real_format decimal_quad_format =
     true,
     true,
     false,
+    false,
     "decimal_quad"
   };
 \f
@@ -4820,6 +4841,7 @@ const struct real_format ieee_half_format =
     true,
     true,
     false,
+    true,
     "ieee_half"
   };
 
@@ -4846,6 +4868,7 @@ const struct real_format arm_half_format =
     true,
     false,
     false,
+    false,
     "arm_half"
   };
 \f
@@ -4893,6 +4916,7 @@ const struct real_format real_internal_format =
     true,
     true,
     false,
+    false,
     "real_internal"
   };
 \f
@@ -5080,6 +5104,16 @@ get_max_float (const struct real_format *fmt, char *buf, size_t len)
   gcc_assert (strlen (buf) < len);
 }
 
+/* Write into BUF the minimum negative representable finite floating-point
+   number, x, such that b**(x-1) is normalized.
+   BUF must be large enough to contain the result.  */
+void
+get_min_float (const struct real_format *fmt, char *buf, size_t len)
+{
+  sprintf (buf, "0x1p%d", fmt->emin - 1);
+  gcc_assert (strlen (buf) < len);
+}
+
 /* True if mode M has a NaN representation and
    the treatment of NaN operands is important.  */
 
diff --git a/gcc/testsuite/gcc.dg/builtins-43.c b/gcc/testsuite/gcc.dg/builtins-43.c
index f7c318edf084104b9b820e18e631ed61e760569e..5d41c28aef8619f06658f45846ae15dd8b4987ed 100644
--- a/gcc/testsuite/gcc.dg/builtins-43.c
+++ b/gcc/testsuite/gcc.dg/builtins-43.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-gimple -fdump-tree-optimized" } */
+/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-lower -fdump-tree-optimized" } */
   
 extern void f(int);
 extern void link_error ();
@@ -51,7 +51,7 @@ main ()
 
 
 /* Check that all instances of __builtin_isnan were folded.  */
-/* { dg-final { scan-tree-dump-times "isnan" 0 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "isnan" 0 "lower" } } */
 
 /* Check that all instances of link_error were subject to DCE.  */
 /* { dg-final { scan-tree-dump-times "link_error" 0 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/fold-notunord.c b/gcc/testsuite/gcc.dg/fold-notunord.c
deleted file mode 100644
index ca345154ac204cb5f380855828421b7f88d49052..0000000000000000000000000000000000000000
--- a/gcc/testsuite/gcc.dg/fold-notunord.c
+++ /dev/null
@@ -1,9 +0,0 @@
-/* { dg-do compile } */
-/* { dg-options "-O -ftrapping-math -fdump-tree-optimized" } */
-
-int f (double d)
-{
-  return !__builtin_isnan (d);
-}
-
-/* { dg-final { scan-tree-dump " ord " "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/pr28796-1.c b/gcc/testsuite/gcc.dg/pr28796-1.c
index 077118a298878441e812410f3a6bf3707fb1d839..a57b4e350af1bc45344106fdeab4b32ef87f233f 100644
--- a/gcc/testsuite/gcc.dg/pr28796-1.c
+++ b/gcc/testsuite/gcc.dg/pr28796-1.c
@@ -1,5 +1,5 @@
 /* { dg-do link } */
-/* { dg-options "-ffinite-math-only" } */
+/* { dg-options "-ffinite-math-only -O2" } */
 
 extern void link_error(void);
 
diff --git a/gcc/testsuite/gcc.dg/torture/float128-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..ec9d3ad41e24280978707888590eec1b562207f0
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128_runtime } */
+
+#define WIDTH 128
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0ede861716750453a86c9abc703ad0b2826674c6
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128x_runtime } */
+
+#define WIDTH 128
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float16-tg-4.c b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..007c4c224ea95537c31185d0aff964d1975f2190
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float16 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float16 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float16_runtime } */
+
+#define WIDTH 16
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..c7f8353da2cffdfc2c2f58f5da3d5363b95e6f91
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32 type-generic built-in functions: __builtin_f__builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32_runtime } */
+
+#define WIDTH 32
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0d7a592920aca112d5f6409e565d4582c253c977
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32x_runtime } */
+
+#define WIDTH 32
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..bb25a22a68e60ce2717ab3583bbec595dd563c35
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64_runtime } */
+
+#define WIDTH 64
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..82305d916b8bd75131e2c647fd37f74cadbc8f1d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64x_runtime } */
+
+#define WIDTH 64
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
new file mode 100644
index 0000000000000000000000000000000000000000..aa3448c090cf797a1525b1045ffebeed79cace40
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
@@ -0,0 +1,99 @@
+/* Tests for _FloatN / _FloatNx types: compile and execution tests for
+   type-generic built-in functions: __builtin_iszero, __builtin_issubnormal.
+   Before including this file, define WIDTH as the value N; define EXT to 1
+   for _FloatNx and 0 for _FloatN.  */
+
+#define __STDC_WANT_IEC_60559_TYPES_EXT__
+#include <float.h>
+
+#define CONCATX(X, Y) X ## Y
+#define CONCAT(X, Y) CONCATX (X, Y)
+#define CONCAT3(X, Y, Z) CONCAT (CONCAT (X, Y), Z)
+#define CONCAT4(W, X, Y, Z) CONCAT (CONCAT (CONCAT (W, X), Y), Z)
+
+#if EXT
+# define TYPE CONCAT3 (_Float, WIDTH, x)
+# define CST(C) CONCAT4 (C, f, WIDTH, x)
+# define MAX CONCAT3 (FLT, WIDTH, X_MAX)
+# define MIN CONCAT3 (FLT, WIDTH, X_MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, X_TRUE_MIN)
+#else
+# define TYPE CONCAT (_Float, WIDTH)
+# define CST(C) CONCAT3 (C, f, WIDTH)
+# define MAX CONCAT3 (FLT, WIDTH, _MAX)
+# define MIN CONCAT3 (FLT, WIDTH, _MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, _TRUE_MIN)
+#endif
+
+extern void exit (int);
+extern void abort (void);
+
+volatile TYPE inf = __builtin_inf (), nanval = __builtin_nan ("");
+volatile TYPE neginf = -__builtin_inf (), negnanval = -__builtin_nan ("");
+volatile TYPE zero = CST (0.0), negzero = -CST (0.0), one = CST (1.0);
+volatile TYPE max = MAX, negmax = -MAX, min = MIN, negmin = -MIN;
+volatile TYPE true_min = TRUE_MIN, negtrue_min = -TRUE_MIN;
+volatile TYPE sub_norm = MIN / 2.0;
+
+int
+main (void)
+{
+  if (__builtin_iszero (inf) == 1)
+    abort ();
+  if (__builtin_iszero (nanval) == 1)
+    abort ();
+  if (__builtin_iszero (neginf) == 1)
+    abort ();
+  if (__builtin_iszero (negnanval) == 1)
+    abort ();
+  if (__builtin_iszero (zero) != 1)
+    abort ();
+  if (__builtin_iszero (negzero) != 1)
+    abort ();
+  if (__builtin_iszero (one) == 1)
+    abort ();
+  if (__builtin_iszero (max) == 1)
+    abort ();
+  if (__builtin_iszero (negmax) == 1)
+    abort ();
+  if (__builtin_iszero (min) == 1)
+    abort ();
+  if (__builtin_iszero (negmin) == 1)
+    abort ();
+  if (__builtin_iszero (true_min) == 1)
+    abort ();
+  if (__builtin_iszero (negtrue_min) == 1)
+    abort ();
+  if (__builtin_iszero (sub_norm) == 1)
+    abort ();
+
+  if (__builtin_issubnormal (inf) == 1)
+    abort ();
+  if (__builtin_issubnormal (nanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (neginf) == 1)
+    abort ();
+  if (__builtin_issubnormal (negnanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (zero) == 1)
+    abort ();
+  if (__builtin_issubnormal (negzero) == 1)
+    abort ();
+  if (__builtin_issubnormal (one) == 1)
+    abort ();
+  if (__builtin_issubnormal (max) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmax) == 1)
+    abort ();
+  if (__builtin_issubnormal (min) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmin) == 1)
+    abort ();
+  if (__builtin_issubnormal (true_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (negtrue_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (sub_norm) != 1)
+    abort ();
+  exit (0);
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
new file mode 100644
index 0000000000000000000000000000000000000000..3a1bf956bfcedcc15dc1cbacfe2f0b663b31c3cc
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
@@ -0,0 +1,22 @@
+/* This file checks the code generation for the new __builtin_fpclassify.
+   because checking the exact assembly isn't very useful, we'll just be checking
+   for the presence of certain instructions and the omition of others. */
+/* { dg-options "-O2" } */
+/* { dg-do compile } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fabs\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmp\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmpe\[ \t\]?" } } */
+/* { dg-final { scan-assembler "\[ \t\]?ubfx\[ \t\]?" } } */
+
+#include <stdio.h>
+#include <math.h>
+
+/*
+ fp_nan = args[0];
+ fp_infinite = args[1];
+ fp_normal = args[2];
+ fp_subnormal = args[3];
+ fp_zero = args[4];
+*/
+
+int f(double x) { return __builtin_fpclassify(0, 1, 4, 3, 2, x); }

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-12-15 10:16           ` Tamar Christina
  2016-12-15 19:37             ` Joseph Myers
@ 2016-12-19 20:34             ` Jeff Law
  2017-01-03  9:56               ` Tamar Christina
  1 sibling, 1 reply; 47+ messages in thread
From: Jeff Law @ 2016-12-19 20:34 UTC (permalink / raw)
  To: Tamar Christina, Joseph Myers
  Cc: GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

On 12/15/2016 03:14 AM, Tamar Christina wrote:
>
>> On a high level, presumably there's no real value in keeping the old
>> code to "fold" fpclassify.  By exposing those operations as integer
>> logicals for the fast path, if the FP value becomes a constant during
>> the optimization pipeline we'll see the reinterpreted values flowing
>> into the new integer logical tests and they'll simplify just like
>> anything else.  Right?
>
> Yes, if it becomes a constant it will be folded away, both in the integer and the fp case.
Thanks for clarifying.


>
>> The old IBM format is still supported, though they are expected to be
>> moveing towards a standard ieee 128 bit format.  So my only concern is
>> that we preserve correct behavior for those cases -- I don't really care
>> about optimizing them.  So I think you need to keep them.
>
> Yes, I re-added them. It's mostly a copy paste from what they were in the
> other functions. But I have no way of testing it.
Understood.

>>  > +  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
>>> +  return (format->is_binary_ieee_compatible
>>> +       && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
>>> +       /* We explicitly disable quad float support on 32 bit systems.  */
>>> +       && !(UNITS_PER_WORD == 4 && type_width == 128)
>>> +       && targetm.scalar_mode_supported_p (mode));
>>> +}
>> Presumably this is why you needed the target.h inclusion.
>>
>> Note that on some systems we even disable 64bit floating point support.
>> I suspect this check needs a little re-thinking as I don't think that
>> checking for a specific UNITS_PER_WORD is correct, nor is checking the
>> width of the type.  I'm not offhand sure what the test should be, just
>> that I think we need something better here.
>
> I think what I really wanted to test here is if there was an integer mode available
> which has the exact width as the floating point one. So I have replaced this with
> just a call to int_mode_for_mode. Which is probably more correct.
I'll need to think about it, but would inherently think that 
int_mode_for_mode is better than an explicit check of UNITS_PER_WORD and 
typewidth.


>
>>> +
>>> +/* Determines if the given number is a NaN value.
>>> +   This function is the last in the chain and only has to
>>> +   check if it's preconditions are true.  */
>>> +static tree
>>> +is_nan (gimple_seq *seq, tree arg, location_t loc)
>> So in the old code we checked UNGT_EXPR, in the new code's slow path you
>> check UNORDERED.  Was that change intentional?
>
> The old FP code used UNORDERED and the new one was using ORDERED and negating the result.
> I've replaced it with UNORDERED, but both are correct.
OK.  Just wanted to make sure.

jeff

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-12-19 20:34             ` Jeff Law
@ 2017-01-03  9:56               ` Tamar Christina
  2017-01-16 17:06                 ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2017-01-03  9:56 UTC (permalink / raw)
  To: Jeff Law, Joseph Myers
  Cc: GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

Hi Jeff,

I wasn't sure if you saw the updated patch attached to the previous email or if you just hadn't had the time to look at it yet.

Cheers,
Tamar

________________________________________
From: Jeff Law <law@redhat.com>
Sent: Monday, December 19, 2016 8:27:33 PM
To: Tamar Christina; Joseph Myers
Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On 12/15/2016 03:14 AM, Tamar Christina wrote:
>
>> On a high level, presumably there's no real value in keeping the old
>> code to "fold" fpclassify.  By exposing those operations as integer
>> logicals for the fast path, if the FP value becomes a constant during
>> the optimization pipeline we'll see the reinterpreted values flowing
>> into the new integer logical tests and they'll simplify just like
>> anything else.  Right?
>
> Yes, if it becomes a constant it will be folded away, both in the integer and the fp case.
Thanks for clarifying.


>
>> The old IBM format is still supported, though they are expected to be
>> moveing towards a standard ieee 128 bit format.  So my only concern is
>> that we preserve correct behavior for those cases -- I don't really care
>> about optimizing them.  So I think you need to keep them.
>
> Yes, I re-added them. It's mostly a copy paste from what they were in the
> other functions. But I have no way of testing it.
Understood.

>>  > +  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
>>> +  return (format->is_binary_ieee_compatible
>>> +       && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
>>> +       /* We explicitly disable quad float support on 32 bit systems.  */
>>> +       && !(UNITS_PER_WORD == 4 && type_width == 128)
>>> +       && targetm.scalar_mode_supported_p (mode));
>>> +}
>> Presumably this is why you needed the target.h inclusion.
>>
>> Note that on some systems we even disable 64bit floating point support.
>> I suspect this check needs a little re-thinking as I don't think that
>> checking for a specific UNITS_PER_WORD is correct, nor is checking the
>> width of the type.  I'm not offhand sure what the test should be, just
>> that I think we need something better here.
>
> I think what I really wanted to test here is if there was an integer mode available
> which has the exact width as the floating point one. So I have replaced this with
> just a call to int_mode_for_mode. Which is probably more correct.
I'll need to think about it, but would inherently think that
int_mode_for_mode is better than an explicit check of UNITS_PER_WORD and
typewidth.


>
>>> +
>>> +/* Determines if the given number is a NaN value.
>>> +   This function is the last in the chain and only has to
>>> +   check if it's preconditions are true.  */
>>> +static tree
>>> +is_nan (gimple_seq *seq, tree arg, location_t loc)
>> So in the old code we checked UNGT_EXPR, in the new code's slow path you
>> check UNORDERED.  Was that change intentional?
>
> The old FP code used UNORDERED and the new one was using ORDERED and negating the result.
> I've replaced it with UNORDERED, but both are correct.
OK.  Just wanted to make sure.

jeff

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-03  9:56               ` Tamar Christina
@ 2017-01-16 17:06                 ` Tamar Christina
  0 siblings, 0 replies; 47+ messages in thread
From: Tamar Christina @ 2017-01-16 17:06 UTC (permalink / raw)
  To: Jeff Law, Joseph Myers
  Cc: GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

Ping. Does this still have a chance or should I table it till Stage 1 opens again?

Tamar.

________________________________________
From: Tamar Christina
Sent: Tuesday, January 3, 2017 9:55:51 AM
To: Jeff Law; Joseph Myers
Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

Hi Jeff,

I wasn't sure if you saw the updated patch attached to the previous email or if you just hadn't had the time to look at it yet.

Cheers,
Tamar

________________________________________
From: Jeff Law <law@redhat.com>
Sent: Monday, December 19, 2016 8:27:33 PM
To: Tamar Christina; Joseph Myers
Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On 12/15/2016 03:14 AM, Tamar Christina wrote:
>
>> On a high level, presumably there's no real value in keeping the old
>> code to "fold" fpclassify.  By exposing those operations as integer
>> logicals for the fast path, if the FP value becomes a constant during
>> the optimization pipeline we'll see the reinterpreted values flowing
>> into the new integer logical tests and they'll simplify just like
>> anything else.  Right?
>
> Yes, if it becomes a constant it will be folded away, both in the integer and the fp case.
Thanks for clarifying.


>
>> The old IBM format is still supported, though they are expected to be
>> moveing towards a standard ieee 128 bit format.  So my only concern is
>> that we preserve correct behavior for those cases -- I don't really care
>> about optimizing them.  So I think you need to keep them.
>
> Yes, I re-added them. It's mostly a copy paste from what they were in the
> other functions. But I have no way of testing it.
Understood.

>>  > +  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
>>> +  return (format->is_binary_ieee_compatible
>>> +       && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
>>> +       /* We explicitly disable quad float support on 32 bit systems.  */
>>> +       && !(UNITS_PER_WORD == 4 && type_width == 128)
>>> +       && targetm.scalar_mode_supported_p (mode));
>>> +}
>> Presumably this is why you needed the target.h inclusion.
>>
>> Note that on some systems we even disable 64bit floating point support.
>> I suspect this check needs a little re-thinking as I don't think that
>> checking for a specific UNITS_PER_WORD is correct, nor is checking the
>> width of the type.  I'm not offhand sure what the test should be, just
>> that I think we need something better here.
>
> I think what I really wanted to test here is if there was an integer mode available
> which has the exact width as the floating point one. So I have replaced this with
> just a call to int_mode_for_mode. Which is probably more correct.
I'll need to think about it, but would inherently think that
int_mode_for_mode is better than an explicit check of UNITS_PER_WORD and
typewidth.


>
>>> +
>>> +/* Determines if the given number is a NaN value.
>>> +   This function is the last in the chain and only has to
>>> +   check if it's preconditions are true.  */
>>> +static tree
>>> +is_nan (gimple_seq *seq, tree arg, location_t loc)
>> So in the old code we checked UNGT_EXPR, in the new code's slow path you
>> check UNORDERED.  Was that change intentional?
>
> The old FP code used UNORDERED and the new one was using ORDERED and negating the result.
> I've replaced it with UNORDERED, but both are correct.
OK.  Just wanted to make sure.

jeff

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2016-12-19 17:24               ` Tamar Christina
@ 2017-01-18 16:15                 ` Jeff Law
  2017-01-18 16:43                   ` Joseph Myers
  2017-01-18 16:55                   ` Joseph Myers
  0 siblings, 2 replies; 47+ messages in thread
From: Jeff Law @ 2017-01-18 16:15 UTC (permalink / raw)
  To: Tamar Christina, Joseph Myers
  Cc: GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

On 12/19/2016 10:18 AM, Tamar Christina wrote:
> Hi All,
>
> I've respun the patch with the feedback from Jeff and Joseph.
>
>> I think an integer mode should always exist - even in the case of TFmode
>> on 32-bit systems (32-bit sparc / s390, for example, use TFmode long
>> double for GNU/Linux, and it's supported as _Float128 and __float128 on
>> 32-bit x86).  It just be not be usable for arithmetic or declaring
>> variables of that type.
> You're right, so I test the integer mode I receive with scalar_mode_supported_p.
> And this seems to do the right thing.
>
> Thanks for all the comments so far!
>
> Kind Regards,
> Tamar
> ________________________________________
> From: Joseph Myers <joseph@codesourcery.com>
> Sent: Thursday, December 15, 2016 7:03:27 PM
> To: Tamar Christina
> Cc: Jeff Law; GCC Patches; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
> Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
>
> On Thu, 15 Dec 2016, Tamar Christina wrote:
>
>>> Note that on some systems we even disable 64bit floating point support.
>>> I suspect this check needs a little re-thinking as I don't think that
>>> checking for a specific UNITS_PER_WORD is correct, nor is checking the
>>> width of the type.  I'm not offhand sure what the test should be, just
>>> that I think we need something better here.
>> I think what I really wanted to test here is if there was an integer
>> mode available which has the exact width as the floating point one. So I
>> have replaced this with just a call to int_mode_for_mode. Which is
>> probably more correct.
> I think an integer mode should always exist - even in the case of TFmode
> on 32-bit systems (32-bit sparc / s390, for example, use TFmode long
> double for GNU/Linux, and it's supported as _Float128 and __float128 on
> 32-bit x86).  It just be not be usable for arithmetic or declaring
> variables of that type.
>
> I don't know whether TImode bitwise operations, such as generated by this
> fpclassify work, will get properly lowered to operations on supported
> narrower modes, but I hope so (clearly it's simpler if you can write
> things straightforwardly and have them cover this case of TFmode on 32-bit
> systems automatically through lowering elsewhere in the compiler, than if
> covering that case would require additional code - the more cases you
> cover, the more opportunity there is for glibc to use the built-in
> functions even with -fsignaling-nans).
>
> --
> Joseph S. Myers
> joseph@codesourcery.com
>
>
> fpclassify-patch-gimple-3.patch
>
>


> diff --git a/gcc/gimple-low.c b/gcc/gimple-low.c
> index 64752b67b86b3d01df5f5661e4666df98b7b91d1..ceaee295c6e81531dbe047c569dd18332179ccbf 100644
> --- a/gcc/gimple-low.c
> +++ b/gcc/gimple-low.c
> @@ -30,6 +30,8 @@ along with GCC; see the file COPYING3.  If not see
>  #include "calls.h"
>  #include "gimple-iterator.h"
>  #include "gimple-low.h"
> +#include "stor-layout.h"
> +#include "target.h"
Presumably these are for the endianness & mode checking?  It's a bit of 
a wart checking those in gimple-low, but I can live with it.  We might 
consider ways to avoid this layering violation in the future.





> +
> +/* Validate a single argument ARG against a tree code CODE representing
> +   a type.  */
I think this comment for gimple_validate_arg is outdated and superseded 
by the one immediately following.  So I think this comment can simply be 
removed.

With that nit fixed, this is OK for the trunk.  Thanks for your 
patience.  I'm really happy with the improvements that were made since 
the initial submission of this change.

jeff


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-18 16:15                 ` Jeff Law
@ 2017-01-18 16:43                   ` Joseph Myers
  2017-01-18 16:55                   ` Joseph Myers
  1 sibling, 0 replies; 47+ messages in thread
From: Joseph Myers @ 2017-01-18 16:43 UTC (permalink / raw)
  To: Jeff Law
  Cc: Tamar Christina, GCC Patches, Wilco Dijkstra, rguenther,
	Michael Meissner, nd

Since the patch adds new built-in functions __builtin_issubnormal and 
__builtin_iszero, it also needs to update c-typeck.c:convert_arguments to 
make those functions remove excess precision.  This is mentioned in the 
PRs 77925 and 77926 for addition of those functions (which as I noted in 
<https://gcc.gnu.org/ml/gcc-patches/2016-11/msg01190.html> should be 
included in the ChangeLog entry for the patch).

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-18 16:15                 ` Jeff Law
  2017-01-18 16:43                   ` Joseph Myers
@ 2017-01-18 16:55                   ` Joseph Myers
  2017-01-18 17:07                     ` Joseph Myers
  1 sibling, 1 reply; 47+ messages in thread
From: Joseph Myers @ 2017-01-18 16:55 UTC (permalink / raw)
  To: Jeff Law
  Cc: Tamar Christina, GCC Patches, Wilco Dijkstra, rguenther,
	Michael Meissner, nd

Also, I don't think the call to perform_ibm_extended_fixups in 
is_subnormal is correct.  Subnormal for IBM long double is *not* the same 
as subnormal double high part.  Likewise it's incorrect in is_normal as 
well.

Generally, I don't see tests added that these new functions are correct 
for float, double and long double, which would detect such issues if run 
for a target with IBM long double.

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-18 16:55                   ` Joseph Myers
@ 2017-01-18 17:07                     ` Joseph Myers
  2017-01-19 14:44                       ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Joseph Myers @ 2017-01-18 17:07 UTC (permalink / raw)
  To: Jeff Law
  Cc: Tamar Christina, GCC Patches, Wilco Dijkstra, rguenther,
	Michael Meissner, nd

On Wed, 18 Jan 2017, Joseph Myers wrote:

> Generally, I don't see tests added that these new functions are correct 
> for float, double and long double, which would detect such issues if run 
> for a target with IBM long double.

Specifically, I think gcc.dg/tg-tests.h should have tests added for 
__builtin_issubnormal and __builtin_iszero (i.e. have the existing test 
inputs also tested with the new functions).

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-18 17:07                     ` Joseph Myers
@ 2017-01-19 14:44                       ` Tamar Christina
  2017-01-19 14:47                         ` Joseph Myers
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2017-01-19 14:44 UTC (permalink / raw)
  To: Joseph Myers, Jeff Law
  Cc: GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

[-- Attachment #1: Type: text/plain, Size: 6846 bytes --]

Hi Joseph & Jeff,

Thanks for the feedback!

> > Generally, I don't see tests added that these new functions are correct
> > for float, double and long double, which would detect such issues if run
> > for a target with IBM long double.
>
> Specifically, I think gcc.dg/tg-tests.h should have tests added for
> __builtin_issubnormal and __builtin_iszero (i.e. have the existing test
> inputs also tested with the new functions).

Right, sorry I missed these, I had tests for them in the aarch64 specific backend
but not in the general area.

> Also, I don't think the call to perform_ibm_extended_fixups in
> is_subnormal is correct.  Subnormal for IBM long double is *not* the same
> as subnormal double high part.  Likewise it's incorrect in is_normal as
> well.

The calls to is_zero and is_subnormal were incorrect indeed. I've corrected them
by not calling the fixup code and to instead make sure it falls through into the
old fp based code which did normal floating point operations on the number. This
is the same code as was before in fpclassify so it should work.

As for is_normal, the code is almost identical as the code that used to be in
fold_builtin_interclass_mathfn in BUILT_IN_ISNORMAL, with the exception that I
don't check <= max_value but instead < inifity, so I can reuse the same constant.

The code was mostly just copy pasted from that procedure (with now a bug fix in
that it uses the abs value and not the original one).

> Since the patch adds new built-in functions __builtin_issubnormal and
> __builtin_iszero, it also needs to update c-typeck.c:convert_arguments to
> make those functions remove excess precision.  This is mentioned in the
> PRs 77925 and 77926 for addition of those functions (which as I noted in
> <https://gcc.gnu.org/ml/gcc-patches/2016-11/msg01190.html> should be
> included in the ChangeLog entry for the patch).

Hmm, I had missed this detail when I looked at the tickets. I've added it now.

> > +#include "stor-layout.h"
> > +#include "target.h"
> Presumably these are for the endianness & mode checking?  It's a bit of
> a wart checking those in gimple-low, but I can live with it.  We might
> consider ways to avoid this layering violation in the future.

Yes, one way to possibly avoid it is to just pass them along as arguments to the
top level function. I can make a ticket to resolve this when stage1 opens again
if you'd like.

Are you ok with the changes Joseph?

The changelog has also been updated:

gcc/
2017-01-19  Tamar Christina  <tamar.christina@arm.com>

        PR middle-end/77925
        PR middle-end/77926
        PR middle-end/66462

        * gcc/builtins.c (fold_builtin_fpclassify): Removed.
        (fold_builtin_interclass_mathfn): Removed.
        (expand_builtin): Added builtins to lowering list.
        (fold_builtin_n): Removed fold_builtin_varargs.
        (fold_builtin_varargs): Removed.
        * gcc/builtins.def (BUILT_IN_ISZERO, BUILT_IN_ISSUBNORMAL): Added.
        * gcc/real.h (get_min_float): Added.
        (real_format): Added is_ieee_compatible field.
        * gcc/real.c (get_min_float): Added.
        (ieee_single_format): Set is_ieee_compatible flag.
        * gcc/gimple-low.c (lower_stm): Define BUILT_IN_FPCLASSIFY,
        CASE_FLT_FN (BUILT_IN_ISINF), BUILT_IN_ISINFD32, BUILT_IN_ISINFD64,
        BUILT_IN_ISINFD128, BUILT_IN_ISNAND32, BUILT_IN_ISNAND64,
        BUILT_IN_ISNAND128, BUILT_IN_ISNAN, BUILT_IN_ISNORMAL, BUILT_IN_ISZERO,
        BUILT_IN_ISSUBNORMAL, CASE_FLT_FN (BUILT_IN_FINITE), BUILT_IN_FINITED32
        BUILT_IN_FINITED64, BUILT_IN_FINITED128, BUILT_IN_ISFINITE.
        (lower_builtin_fpclassify, is_nan, is_normal, is_infinity): Added.
        (is_zero, is_subnormal, is_finite, use_ieee_int_mode): Likewise.
        (lower_builtin_isnan, lower_builtin_isinfinite): Likewise.
        (lower_builtin_isnormal, lower_builtin_iszero): Likewise.
        (lower_builtin_issubnormal, lower_builtin_isfinite): Likewise.
        (emit_tree_cond, get_num_as_int, emit_tree_and_return_var): Added.
        (mips_single_format): Likewise.
        (motorola_single_format): Likewise.
        (spu_single_format): Likewise.
        (ieee_double_format): Likewise.
        (mips_double_format): Likewise.
        (motorola_double_format): Likewise.
        (ieee_extended_motorola_format): Likewise.
        (ieee_extended_intel_128_format): Likewise.
        (ieee_extended_intel_96_round_53_format): Likewise.
        (ibm_extended_format): Likewise.
        (mips_extended_format): Likewise.
        (ieee_quad_format): Likewise.
        (mips_quad_format): Likewise.
        (vax_f_format): Likewise.
        (vax_d_format): Likewise.
        (vax_g_format): Likewise.
        (decimal_single_format): Likewise.
        (decimal_quad_format): Likewise.
        (iee_half_format): Likewise.
        (mips_single_format): Likewise.
        (arm_half_format): Likewise.
        (real_internal_format): Likewise.
        * gcc/doc/extend.texi: Added documentation for built-ins.
        * gcc/c/c-typeck.c (convert_arguments): Added BUILT_IN_ISZERO
        and BUILT_IN_ISSUBNORMAL.

gcc/testsuite/
2017-01-19  Tamar Christina  <tamar.christina@arm.com>

        * gcc.target/aarch64/builtin-fpclassify.c: New codegen test.
        * gcc.dg/fold-notunord.c: Removed.
        * gcc.dg/torture/floatn-tg-4.h: Added tests for iszero and issubnormal.
        * gcc.dg/torture/float128-tg-4.c: Likewise.
        * gcc.dg/torture/float128x-tg-4: Likewise.
        * gcc.dg/torture/float16-tg-4.c: Likewise.
        * gcc.dg/torture/float32-tg-4.c: Likewise.
        * gcc.dg/torture/float32x-tg-4.c: Likewise.
        * gcc.dg/torture/float64-tg-4.c: Likewise.
        * gcc.dg/torture/float64x-tg-4.c: Likewise.
        * gcc.dg/pr28796-1.c: Added -O2.
        * gcc.dg/builtins-43.c: Check lower instead of gimple.
        * gcc/testsuite/gcc.dg/tg-tests.h: Added iszero and issubnormal.

Tamar.

________________________________________
From: Joseph Myers <joseph@codesourcery.com>
Sent: Wednesday, January 18, 2017 5:05:07 PM
To: Jeff Law
Cc: Tamar Christina; GCC Patches; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On Wed, 18 Jan 2017, Joseph Myers wrote:

> Generally, I don't see tests added that these new functions are correct
> for float, double and long double, which would detect such issues if run
> for a target with IBM long double.

Specifically, I think gcc.dg/tg-tests.h should have tests added for
__builtin_issubnormal and __builtin_iszero (i.e. have the existing test
inputs also tested with the new functions).

--
Joseph S. Myers
joseph@codesourcery.com

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: fp-classify-jan-19-spin.patch --]
[-- Type: text/x-patch; name="fp-classify-jan-19-spin.patch", Size: 74667 bytes --]

diff --git a/gcc/builtins.c b/gcc/builtins.c
index 3ac2d44148440b124559ba7cd3de483b7a74b72d..d8ff9c70ae6b9e72e09b8cbd9a0bd41b6830b83e 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -160,7 +160,6 @@ static tree fold_builtin_0 (location_t, tree);
 static tree fold_builtin_1 (location_t, tree, tree);
 static tree fold_builtin_2 (location_t, tree, tree, tree);
 static tree fold_builtin_3 (location_t, tree, tree, tree, tree);
-static tree fold_builtin_varargs (location_t, tree, tree*, int);
 
 static tree fold_builtin_strpbrk (location_t, tree, tree, tree);
 static tree fold_builtin_strstr (location_t, tree, tree, tree);
@@ -2202,19 +2201,8 @@ interclass_mathfn_icode (tree arg, tree fndecl)
   switch (DECL_FUNCTION_CODE (fndecl))
     {
     CASE_FLT_FN (BUILT_IN_ILOGB):
-      errno_set = true; builtin_optab = ilogb_optab; break;
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      builtin_optab = isinf_optab; break;
-    case BUILT_IN_ISNORMAL:
-    case BUILT_IN_ISFINITE:
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      /* These builtins have no optabs (yet).  */
+      errno_set = true;
+      builtin_optab = ilogb_optab;
       break;
     default:
       gcc_unreachable ();
@@ -2233,8 +2221,7 @@ interclass_mathfn_icode (tree arg, tree fndecl)
 }
 
 /* Expand a call to one of the builtin math functions that operate on
-   floating point argument and output an integer result (ilogb, isinf,
-   isnan, etc).
+   floating point argument and output an integer result (ilogb, etc).
    Return 0 if a normal call should be emitted rather than expanding the
    function in-line.  EXP is the expression that is a call to the builtin
    function; if convenient, the result should be placed in TARGET.  */
@@ -5997,11 +5984,7 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
     CASE_FLT_FN (BUILT_IN_ILOGB):
       if (! flag_unsafe_math_optimizations)
 	break;
-      gcc_fallthrough ();
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-    case BUILT_IN_ISNORMAL:
+
       target = expand_builtin_interclass_mathfn (exp, target);
       if (target)
 	return target;
@@ -6281,8 +6264,25 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
 	}
       break;
 
+    CASE_FLT_FN (BUILT_IN_ISINF):
+    case BUILT_IN_ISNAND32:
+    case BUILT_IN_ISNAND64:
+    case BUILT_IN_ISNAND128:
+    case BUILT_IN_ISNAN:
+    case BUILT_IN_ISINFD32:
+    case BUILT_IN_ISINFD64:
+    case BUILT_IN_ISINFD128:
+    case BUILT_IN_ISNORMAL:
+    case BUILT_IN_ISZERO:
+    case BUILT_IN_ISSUBNORMAL:
+    case BUILT_IN_FPCLASSIFY:
     case BUILT_IN_SETJMP:
-      /* This should have been lowered to the builtins below.  */
+    CASE_FLT_FN (BUILT_IN_FINITE):
+    case BUILT_IN_FINITED32:
+    case BUILT_IN_FINITED64:
+    case BUILT_IN_FINITED128:
+    case BUILT_IN_ISFINITE:
+      /* These should have been lowered to the builtins in gimple-low.c.  */
       gcc_unreachable ();
 
     case BUILT_IN_SETJMP_SETUP:
@@ -7622,184 +7622,19 @@ fold_builtin_modf (location_t loc, tree arg0, tree arg1, tree rettype)
   return NULL_TREE;
 }
 
-/* Given a location LOC, an interclass builtin function decl FNDECL
-   and its single argument ARG, return an folded expression computing
-   the same, or NULL_TREE if we either couldn't or didn't want to fold
-   (the latter happen if there's an RTL instruction available).  */
-
-static tree
-fold_builtin_interclass_mathfn (location_t loc, tree fndecl, tree arg)
-{
-  machine_mode mode;
-
-  if (!validate_arg (arg, REAL_TYPE))
-    return NULL_TREE;
-
-  if (interclass_mathfn_icode (arg, fndecl) != CODE_FOR_nothing)
-    return NULL_TREE;
-
-  mode = TYPE_MODE (TREE_TYPE (arg));
-
-  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
 
-  /* If there is no optab, try generic code.  */
-  switch (DECL_FUNCTION_CODE (fndecl))
-    {
-      tree result;
 
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      {
-	/* isinf(x) -> isgreater(fabs(x),DBL_MAX).  */
-	tree const isgr_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isgr_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	return result;
-      }
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-      {
-	/* isfinite(x) -> islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isle_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	/*result = fold_build2_loc (loc, UNGT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	result = fold_build1_loc (loc, TRUTH_NOT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  result);*/
-	return result;
-      }
-    case BUILT_IN_ISNORMAL:
-      {
-	/* isnormal(x) -> isgreaterequal(fabs(x),DBL_MIN) &
-	   islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	tree orig_arg, max_exp, min_exp;
-	machine_mode orig_mode = mode;
-	REAL_VALUE_TYPE rmax, rmin;
-	char buf[128];
-
-	orig_arg = arg = builtin_save_expr (arg);
-	if (is_ibm_extended)
-	  {
-	    /* Use double to test the normal range of IBM extended
-	       precision.  Emin for IBM extended precision is
-	       different to emin for IEEE double, being 53 higher
-	       since the low double exponent is at least 53 lower
-	       than the high double exponent.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	arg = fold_build1_loc (loc, ABS_EXPR, type, arg);
-
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&rmax, buf);
-	sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (orig_mode)->emin - 1);
-	real_from_string (&rmin, buf);
-	max_exp = build_real (type, rmax);
-	min_exp = build_real (type, rmin);
-
-	max_exp = build_call_expr (isle_fn, 2, arg, max_exp);
-	if (is_ibm_extended)
-	  {
-	    /* Testing the high end of the range is done just using
-	       the high double, using the same test as isfinite().
-	       For the subnormal end of the range we first test the
-	       high double, then if its magnitude is equal to the
-	       limit of 0x1p-969, we test whether the low double is
-	       non-zero and opposite sign to the high double.  */
-	    tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
-	    tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	    tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
-	    tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
-				       arg, min_exp);
-	    tree as_complex = build1 (VIEW_CONVERT_EXPR,
-				      complex_double_type_node, orig_arg);
-	    tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
-	    tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
-	    tree zero = build_real (type, dconst0);
-	    tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
-	    tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
-	    tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
-	    tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
-				      fold_build3 (COND_EXPR,
-						   integer_type_node,
-						   hilt, logt, lolt));
-	    eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
-				  eq_min, ok_lo);
-	    min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
-				   gt_min, eq_min);
-	  }
-	else
-	  {
-	    tree const isge_fn
-	      = builtin_decl_explicit (BUILT_IN_ISGREATEREQUAL);
-	    min_exp = build_call_expr (isge_fn, 2, arg, min_exp);
-	  }
-	result = fold_build2 (BIT_AND_EXPR, integer_type_node,
-			      max_exp, min_exp);
-	return result;
-      }
-    default:
-      break;
-    }
-
-  return NULL_TREE;
-}
-
-/* Fold a call to __builtin_isnan(), __builtin_isinf, __builtin_finite.
+/* Fold a call to __builtin_isinf_sign.
    ARG is the argument for the call.  */
 
 static tree
-fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
+fold_builtin_classify (location_t loc, tree arg, int builtin_index)
 {
-  tree type = TREE_TYPE (TREE_TYPE (fndecl));
-
   if (!validate_arg (arg, REAL_TYPE))
     return NULL_TREE;
 
   switch (builtin_index)
     {
-    case BUILT_IN_ISINF:
-      if (!HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      return NULL_TREE;
-
     case BUILT_IN_ISINF_SIGN:
       {
 	/* isinf_sign(x) -> isinf(x) ? (signbit(x) ? -1 : 1) : 0 */
@@ -7832,106 +7667,11 @@ fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
 	return tmp;
       }
 
-    case BUILT_IN_ISFINITE:
-      if (!HONOR_NANS (arg)
-	  && !HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_one_node, arg);
-
-      return NULL_TREE;
-
-    case BUILT_IN_ISNAN:
-      if (!HONOR_NANS (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      {
-	bool is_ibm_extended = MODE_COMPOSITE_P (TYPE_MODE (TREE_TYPE (arg)));
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    arg = fold_build1_loc (loc, NOP_EXPR, double_type_node, arg);
-	  }
-      }
-      arg = builtin_save_expr (arg);
-      return fold_build2_loc (loc, UNORDERED_EXPR, type, arg, arg);
-
     default:
       gcc_unreachable ();
     }
 }
 
-/* Fold a call to __builtin_fpclassify(int, int, int, int, int, ...).
-   This builtin will generate code to return the appropriate floating
-   point classification depending on the value of the floating point
-   number passed in.  The possible return values must be supplied as
-   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
-   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
-   one floating point argument which is "type generic".  */
-
-static tree
-fold_builtin_fpclassify (location_t loc, tree *args, int nargs)
-{
-  tree fp_nan, fp_infinite, fp_normal, fp_subnormal, fp_zero,
-    arg, type, res, tmp;
-  machine_mode mode;
-  REAL_VALUE_TYPE r;
-  char buf[128];
-
-  /* Verify the required arguments in the original call.  */
-  if (nargs != 6
-      || !validate_arg (args[0], INTEGER_TYPE)
-      || !validate_arg (args[1], INTEGER_TYPE)
-      || !validate_arg (args[2], INTEGER_TYPE)
-      || !validate_arg (args[3], INTEGER_TYPE)
-      || !validate_arg (args[4], INTEGER_TYPE)
-      || !validate_arg (args[5], REAL_TYPE))
-    return NULL_TREE;
-
-  fp_nan = args[0];
-  fp_infinite = args[1];
-  fp_normal = args[2];
-  fp_subnormal = args[3];
-  fp_zero = args[4];
-  arg = args[5];
-  type = TREE_TYPE (arg);
-  mode = TYPE_MODE (type);
-  arg = builtin_save_expr (fold_build1_loc (loc, ABS_EXPR, type, arg));
-
-  /* fpclassify(x) ->
-       isnan(x) ? FP_NAN :
-         (fabs(x) == Inf ? FP_INFINITE :
-	   (fabs(x) >= DBL_MIN ? FP_NORMAL :
-	     (x == 0 ? FP_ZERO : FP_SUBNORMAL))).  */
-
-  tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-		     build_real (type, dconst0));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node,
-		     tmp, fp_zero, fp_subnormal);
-
-  sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (mode)->emin - 1);
-  real_from_string (&r, buf);
-  tmp = fold_build2_loc (loc, GE_EXPR, integer_type_node,
-		     arg, build_real (type, r));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, fp_normal, res);
-
-  if (HONOR_INFINITIES (mode))
-    {
-      real_inf (&r);
-      tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-			 build_real (type, r));
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp,
-			 fp_infinite, res);
-    }
-
-  if (HONOR_NANS (mode))
-    {
-      tmp = fold_build2_loc (loc, ORDERED_EXPR, integer_type_node, arg, arg);
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, res, fp_nan);
-    }
-
-  return res;
-}
-
 /* Fold a call to an unordered comparison function such as
    __builtin_isgreater().  FNDECL is the FUNCTION_DECL for the function
    being called and ARG0 and ARG1 are the arguments for the call.
@@ -8232,40 +7972,8 @@ fold_builtin_1 (location_t loc, tree fndecl, tree arg0)
     case BUILT_IN_ISDIGIT:
       return fold_builtin_isdigit (loc, arg0);
 
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISFINITE:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISFINITE);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    case BUILT_IN_ISNORMAL:
-      return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-
     case BUILT_IN_ISINF_SIGN:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF_SIGN);
-
-    CASE_FLT_FN (BUILT_IN_ISNAN):
-    case BUILT_IN_ISNAND32:
-    case BUILT_IN_ISNAND64:
-    case BUILT_IN_ISNAND128:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISNAN);
+      return fold_builtin_classify (loc, arg0, BUILT_IN_ISINF_SIGN);
 
     case BUILT_IN_FREE:
       if (integer_zerop (arg0))
@@ -8465,7 +8173,6 @@ fold_builtin_n (location_t loc, tree fndecl, tree *args, int nargs, bool)
       ret = fold_builtin_3 (loc, fndecl, args[0], args[1], args[2]);
       break;
     default:
-      ret = fold_builtin_varargs (loc, fndecl, args, nargs);
       break;
     }
   if (ret)
@@ -9422,37 +9129,6 @@ fold_builtin_object_size (tree ptr, tree ost)
   return NULL_TREE;
 }
 
-/* Builtins with folding operations that operate on "..." arguments
-   need special handling; we need to store the arguments in a convenient
-   data structure before attempting any folding.  Fortunately there are
-   only a few builtins that fall into this category.  FNDECL is the
-   function, EXP is the CALL_EXPR for the call.  */
-
-static tree
-fold_builtin_varargs (location_t loc, tree fndecl, tree *args, int nargs)
-{
-  enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl);
-  tree ret = NULL_TREE;
-
-  switch (fcode)
-    {
-    case BUILT_IN_FPCLASSIFY:
-      ret = fold_builtin_fpclassify (loc, args, nargs);
-      break;
-
-    default:
-      break;
-    }
-  if (ret)
-    {
-      ret = build1 (NOP_EXPR, TREE_TYPE (ret), ret);
-      SET_EXPR_LOCATION (ret, loc);
-      TREE_NO_WARNING (ret) = 1;
-      return ret;
-    }
-  return NULL_TREE;
-}
-
 /* Initialize format string characters in the target charset.  */
 
 bool
diff --git a/gcc/builtins.def b/gcc/builtins.def
index 219feebd3aebefbd079bf37cc801453cd1965e00..91aa6f37fa098777bc794bad56d8c561ab9fdc44 100644
--- a/gcc/builtins.def
+++ b/gcc/builtins.def
@@ -831,6 +831,8 @@ DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFL, "isinfl", BT_FN_INT_LONGDOUBLE, ATTR_CO
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD32, "isinfd32", BT_FN_INT_DFLOAT32, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD64, "isinfd64", BT_FN_INT_DFLOAT64, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD128, "isinfd128", BT_FN_INT_DFLOAT128, ATTR_CONST_NOTHROW_LEAF_LIST)
+DEF_GCC_BUILTIN        (BUILT_IN_ISZERO, "iszero", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
+DEF_GCC_BUILTIN        (BUILT_IN_ISSUBNORMAL, "issubnormal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_C99_C90RES_BUILTIN (BUILT_IN_ISNAN, "isnan", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANF, "isnanf", BT_FN_INT_FLOAT, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANL, "isnanl", BT_FN_INT_LONGDOUBLE, ATTR_CONST_NOTHROW_LEAF_LIST)
diff --git a/gcc/c/c-typeck.c b/gcc/c/c-typeck.c
index f0917ed788c91b2a2c8701438150f0e634c9402b..e2b4acd21b7d70e6500e42064db49e0f4a84aae9 100644
--- a/gcc/c/c-typeck.c
+++ b/gcc/c/c-typeck.c
@@ -3232,6 +3232,8 @@ convert_arguments (location_t loc, vec<location_t> arg_loc, tree typelist,
 	case BUILT_IN_ISINF_SIGN:
 	case BUILT_IN_ISNAN:
 	case BUILT_IN_ISNORMAL:
+	case BUILT_IN_ISZERO:
+	case BUILT_IN_ISSUBNORMAL:
 	case BUILT_IN_FPCLASSIFY:
 	  type_generic_remove_excess_precision = true;
 	  break;
diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi
index 0669f7999beb078822e471352036d8f13517812d..c240bbe9a8fd595a0e7e2b41fb708efae1e5279a 100644
--- a/gcc/doc/extend.texi
+++ b/gcc/doc/extend.texi
@@ -10433,6 +10433,10 @@ in the Cilk Plus language manual which can be found at
 @findex __builtin_isgreater
 @findex __builtin_isgreaterequal
 @findex __builtin_isinf_sign
+@findex __builtin_isinf
+@findex __builtin_isnan
+@findex __builtin_iszero
+@findex __builtin_issubnormal
 @findex __builtin_isless
 @findex __builtin_islessequal
 @findex __builtin_islessgreater
@@ -11496,7 +11500,54 @@ constant values and they must appear in this order: @code{FP_NAN},
 @code{FP_INFINITE}, @code{FP_NORMAL}, @code{FP_SUBNORMAL} and
 @code{FP_ZERO}.  The ellipsis is for exactly one floating-point value
 to classify.  GCC treats the last argument as type-generic, which
-means it does not do default promotion from float to double.
+means it does not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnan (...)
+This built-in implements the C99 isnan functionality which checks if
+the given argument represents a NaN.  The return value of the
+function will either be a 0 (false) or a 1 (true).
+On most systems, when an IEEE 754 floating-point type is used this
+built-in does not produce a signal when a signaling NaN is used.
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isinf (...)
+This built-in implements the C99 isinf functionality which checks if
+the given argument represents an infinite number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnormal (...)
+This built-in implements the C99 isnormal functionality which checks if
+the given argument represents a normal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_iszero (...)
+This built-in implements the TS 18661-1:2014 iszero functionality which checks if
+the given argument represents the number 0 or -0.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_issubnormal (...)
+This built-in implements the TS 18661-1:2014 issubnormal functionality which checks if
+the given argument represents a subnormal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
 @end deftypefn
 
 @deftypefn {Built-in Function} double __builtin_inf (void)
diff --git a/gcc/gimple-low.c b/gcc/gimple-low.c
index 64752b67b86b3d01df5f5661e4666df98b7b91d1..739b1dba367749432165129be4115b0f3e72a599 100644
--- a/gcc/gimple-low.c
+++ b/gcc/gimple-low.c
@@ -30,6 +30,8 @@ along with GCC; see the file COPYING3.  If not see
 #include "calls.h"
 #include "gimple-iterator.h"
 #include "gimple-low.h"
+#include "stor-layout.h"
+#include "target.h"
 
 /* The differences between High GIMPLE and Low GIMPLE are the
    following:
@@ -72,6 +74,13 @@ static void lower_gimple_bind (gimple_stmt_iterator *, struct lower_data *);
 static void lower_try_catch (gimple_stmt_iterator *, struct lower_data *);
 static void lower_gimple_return (gimple_stmt_iterator *, struct lower_data *);
 static void lower_builtin_setjmp (gimple_stmt_iterator *);
+static void lower_builtin_fpclassify (gimple_stmt_iterator *);
+static void lower_builtin_isnan (gimple_stmt_iterator *);
+static void lower_builtin_isinfinite (gimple_stmt_iterator *);
+static void lower_builtin_isnormal (gimple_stmt_iterator *);
+static void lower_builtin_iszero (gimple_stmt_iterator *);
+static void lower_builtin_issubnormal (gimple_stmt_iterator *);
+static void lower_builtin_isfinite (gimple_stmt_iterator *);
 static void lower_builtin_posix_memalign (gimple_stmt_iterator *);
 
 
@@ -330,18 +339,69 @@ lower_stmt (gimple_stmt_iterator *gsi, struct lower_data *data)
 	if (decl
 	    && DECL_BUILT_IN_CLASS (decl) == BUILT_IN_NORMAL)
 	  {
-	    if (DECL_FUNCTION_CODE (decl) == BUILT_IN_SETJMP)
+	    switch (DECL_FUNCTION_CODE (decl))
 	      {
+	      case BUILT_IN_SETJMP:
 		lower_builtin_setjmp (gsi);
 		data->cannot_fallthru = false;
 		return;
-	      }
-	    else if (DECL_FUNCTION_CODE (decl) == BUILT_IN_POSIX_MEMALIGN
-		     && flag_tree_bit_ccp
-		     && gimple_builtin_call_types_compatible_p (stmt, decl))
-	      {
-		lower_builtin_posix_memalign (gsi);
+
+	      case BUILT_IN_POSIX_MEMALIGN:
+		if (flag_tree_bit_ccp
+		    && gimple_builtin_call_types_compatible_p (stmt, decl))
+		  {
+			lower_builtin_posix_memalign (gsi);
+			return;
+		  }
+		break;
+
+	      case BUILT_IN_FPCLASSIFY:
+		lower_builtin_fpclassify (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_ISINF):
+	      case BUILT_IN_ISINFD32:
+	      case BUILT_IN_ISINFD64:
+	      case BUILT_IN_ISINFD128:
+		lower_builtin_isinfinite (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNAND32:
+	      case BUILT_IN_ISNAND64:
+	      case BUILT_IN_ISNAND128:
+	      CASE_FLT_FN (BUILT_IN_ISNAN):
+		lower_builtin_isnan (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNORMAL:
+		lower_builtin_isnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISZERO:
+		lower_builtin_iszero (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISSUBNORMAL:
+		lower_builtin_issubnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_FINITE):
+	      case BUILT_IN_FINITED32:
+	      case BUILT_IN_FINITED64:
+	      case BUILT_IN_FINITED128:
+	      case BUILT_IN_ISFINITE:
+		lower_builtin_isfinite (gsi);
+		data->cannot_fallthru = false;
 		return;
+
+	      default:
+		break;
 	      }
 	  }
 
@@ -822,6 +882,812 @@ lower_builtin_setjmp (gimple_stmt_iterator *gsi)
   gsi_remove (gsi, false);
 }
 
+/* This function will if ARG is not already a variable or SSA_NAME,
+   create a new temporary TMP and bind ARG to TMP.  This new binding is then
+   emitted into SEQ and TMP is returned.  */
+static tree
+emit_tree_and_return_var (gimple_seq *seq, tree arg)
+{
+  if (TREE_CODE (arg) == SSA_NAME || VAR_P (arg))
+    return arg;
+
+  tree tmp = create_tmp_reg (TREE_TYPE (arg));
+  gassign *stm = gimple_build_assign (tmp, arg);
+  gimple_seq_add_stmt (seq, stm);
+  return tmp;
+}
+
+/* This function builds an if statement that ends up using explicit branches
+   instead of becoming a ternary conditional select.  This function assumes you
+   will fall through to the next statements after the condition for the false
+   branch.  The code emitted looks like:
+
+   if (COND)
+     RESULT_VARIABLE = TRUE_BRANCH
+     GOTO EXIT_LABEL
+   else
+     ...
+
+   SEQ is the gimple sequence/buffer to emit any new bindings to.
+   RESULT_VARIABLE is the value to set if COND.
+   EXIT_LABEL is the label to jump to in case COND.
+   COND is condition to use in the conditional statement of the if.
+   TRUE_BRANCH is the value to set RESULT_VARIABLE to if COND.  */
+static void
+emit_tree_cond (gimple_seq *seq, tree result_variable, tree exit_label,
+		tree cond, tree true_branch)
+{
+  /* Create labels for fall through.  */
+  tree true_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree false_label = create_artificial_label (UNKNOWN_LOCATION);
+  gcond *stmt = gimple_build_cond_from_tree (cond, true_label, false_label);
+  gimple_seq_add_stmt (seq, stmt);
+
+  /* Build the true case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (true_label));
+  tree value = TREE_CONSTANT (true_branch)
+	     ? true_branch
+	     : emit_tree_and_return_var (seq, true_branch);
+  gimple_seq_add_stmt (seq, gimple_build_assign (result_variable, value));
+  gimple_seq_add_stmt (seq, gimple_build_goto (exit_label));
+
+  /* Build the false case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (false_label));
+}
+
+/* This function returns a variable containing an reinterpreted ARG as an
+   integer.
+
+   SEQ is the gimple sequence/buffer to write any new bindings to.
+   ARG is the floating point number to reinterpret as an integer.
+   LOC is the location to use when doing folding operations.  */
+static tree
+get_num_as_int (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  /* Re-interpret the float as an unsigned integer type
+     with equal precision.  */
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+  tree conv_arg = fold_build1_loc (loc, VIEW_CONVERT_EXPR, int_arg_type, arg);
+  return emit_tree_and_return_var (seq, conv_arg);
+}
+
+/* Check if ARG which is the floating point number being classified is close
+   enough to IEEE 754 format to be able to go in the early exit code.  */
+static bool
+use_ieee_int_mode (tree arg)
+{
+  tree type = TREE_TYPE (arg);
+  machine_mode mode = TYPE_MODE (type);
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  machine_mode imode = int_mode_for_mode (mode);
+  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
+
+  return (format->is_binary_ieee_compatible
+	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
+	  /* Check if there's a usable integer mode.  */
+	  && imode != BLKmode
+	  && targetm.scalar_mode_supported_p (imode)
+	  && !is_ibm_extended);
+}
+
+/* Perform some IBM extended format fixups on ARG for use by FP functions.
+   This is done by ignoring the lower 64 bits of the number.
+
+   MODE is the machine mode of ARG.
+   TYPE is the type of ARG.
+   LOC is the location to be used in fold functions.  Usually is the location
+   of the definition of ARG.  */
+static bool
+perform_ibm_extended_fixups (tree *arg, machine_mode *mode,
+			     tree *type, location_t loc)
+{
+  bool is_ibm_extended = MODE_COMPOSITE_P (*mode);
+  if (is_ibm_extended)
+    {
+      /* NaN and Inf are encoded in the high-order double value
+	 only.  The low-order value is not significant.  */
+      *type = double_type_node;
+      *mode = DFmode;
+      *arg = fold_build1_loc (loc, NOP_EXPR, *type, *arg);
+    }
+
+  return is_ibm_extended;
+}
+
+/* Generates code to check if ARG is a normal number.  For the FP case we check
+   MIN_VALUE(ARG) <= ABS(ARG) > INF and for the INT value we check the exp and
+   mantissa bits.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_normal (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const tree bool_type = boolean_type_node;
+
+  /* Perform IBM extended format fixups if required.  */
+  bool is_ibm_extended = perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree orig_arg = arg;
+      if (TREE_CODE (arg) != SSA_NAME
+	  && (TREE_ADDRESSABLE (arg) != 0
+	    || (TREE_CODE (arg) != PARM_DECL
+	        && (!VAR_P (arg) || TREE_STATIC (arg)))))
+	orig_arg = save_expr (arg);
+
+      REAL_VALUE_TYPE rinf, rmin;
+      tree arg_p
+        = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      char buf[128];
+      real_inf (&rinf);
+      get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&rmin, buf);
+
+      tree inf_exp = fold_build2_loc (loc, LT_EXPR, bool_type, arg_p,
+				      build_real (type, rinf));
+      tree min_exp = build_real (type, rmin);
+      if (is_ibm_extended)
+	{
+	  /* Testing the high end of the range is done just using
+	     the high double, using the same test as isfinite().
+	     For the subnormal end of the range we first test the
+	     high double, then if its magnitude is equal to the
+	     limit of 0x1p-969, we test whether the low double is
+	     non-zero and opposite sign to the high double.  */
+	  tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
+	  tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
+	  tree gt_min = build_call_expr (isgt_fn, 2, arg_p, min_exp);
+	  tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
+				     arg_p, min_exp);
+	  tree as_complex = build1 (VIEW_CONVERT_EXPR,
+				    complex_double_type_node, orig_arg);
+	  tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
+	  tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
+	  tree zero = build_real (type, dconst0);
+	  tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
+	  tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
+	  tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
+	  tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
+				    fold_build3 (COND_EXPR,
+						 integer_type_node,
+						 hilt, logt, lolt));
+	  eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
+				eq_min, ok_lo);
+	  min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
+				 gt_min, eq_min);
+	}
+	else
+	{
+	  min_exp = fold_build2_loc (loc, GE_EXPR, bool_type, arg_p,
+				     min_exp);
+	}
+
+      tree res
+	= fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			   emit_tree_and_return_var (seq, min_exp),
+			   emit_tree_and_return_var (seq, inf_exp));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const tree int_type = unsigned_type_node;
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  const int exp_mask  = (1 << exp_bits) - 1;
+
+  /* Get the number reinterpreted as an integer.  */
+  tree int_arg = get_num_as_int (seq, arg, loc);
+
+  /* Extract exp bits from the float, where we expect the exponent to be.
+     We create a new type because BIT_FIELD_REF does not allow you to
+     extract less bits than the precision of the storage variable.  */
+  tree exp_tmp
+    = fold_build3_loc (loc, BIT_FIELD_REF,
+		       build_nonstandard_integer_type (exp_bits, true),
+		       int_arg,
+		       build_int_cstu (int_type, exp_bits),
+		       build_int_cstu (int_type, format->p - 1));
+  tree exp_bitfield = emit_tree_and_return_var (seq, exp_tmp);
+
+  /* Re-interpret the extracted exponent bits as a 32 bit int.
+     This allows us to continue doing operations as int_type.  */
+  tree exp
+    = emit_tree_and_return_var (seq, fold_build1_loc (loc, NOP_EXPR, int_type,
+						      exp_bitfield));
+
+  /* exp_mask & ~1.  */
+  tree mask_check
+     = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+			build_int_cstu (int_type, exp_mask),
+			fold_build1_loc (loc, BIT_NOT_EXPR, int_type,
+					 build_int_cstu (int_type, 1)));
+
+  /* (exp + 1) & mask_check.
+     Check to see if exp is not all 0 or all 1.  */
+  tree exp_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+		       emit_tree_and_return_var (seq,
+				fold_build2_loc (loc, PLUS_EXPR, int_type, exp,
+						 build_int_cstu (int_type, 1))),
+		       mask_check);
+
+  tree res = fold_build2_loc (loc, NE_EXPR, boolean_type_node,
+			      build_int_cstu (int_type, 0),
+			      emit_tree_and_return_var (seq, exp_check));
+
+  return emit_tree_and_return_var (seq, res);
+}
+
+/* Generates code to check if ARG is a zero. For both the FP and INT case we
+   check if ARG == 0 (modulo sign bit).  Returns a variable containing a boolean
+   which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_zero (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree res = fold_build2_loc (loc, EQ_EXPR, boolean_type_node, arg,
+				  build_real (type, dconst0));
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* num << 1 == 0.
+     This checks to see if the number is zero.  */
+  tree zero_check
+    = fold_build2_loc (loc, EQ_EXPR, boolean_type_node,
+		       build_int_cstu (int_arg_type, 0),
+		       emit_tree_and_return_var (seq, int_arg));
+
+  return emit_tree_and_return_var (seq, zero_check);
+}
+
+/* Generates code to check if ARG is a subnormal number.  In the FP case we test
+   fabs (ARG) != 0 && fabs (ARG) < MIN_VALUE (ARG) and in the INT case we check
+   the exp and mantissa bits on ARG. Returns a variable containing a boolean
+   which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_subnormal (gimple_seq *seq, tree arg, location_t loc)
+{
+  const tree bool_type = boolean_type_node;
+
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      REAL_VALUE_TYPE r;
+      char buf[128];
+      get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&r, buf);
+      tree subnorm = fold_build2_loc (loc, LT_EXPR, bool_type,
+				      arg_p, build_real (type, r));
+
+      tree zero = fold_build2_loc (loc, GT_EXPR, bool_type, arg_p,
+				   build_real (type, dconst0));
+
+      tree res
+	= fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			   emit_tree_and_return_var (seq, subnorm),
+			   emit_tree_and_return_var (seq, zero));
+
+      return emit_tree_and_return_var (seq, res);
+  }
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check for a zero exponent and non-zero mantissa.
+     This can be done with two comparisons by first apply a
+     removing the sign bit and checking if the value is larger
+     than the mantissa mask.  */
+
+  /* This creates a mask to be used to check the mantissa value in the shifted
+     integer representation of the fpnum.  */
+  tree significant_bit = build_int_cstu (int_arg_type, format->p - 1);
+  tree mantissa_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					significant_bit),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check if exponent is zero and mantissa is not.  */
+  tree subnorm_cond_tmp
+    = fold_build2_loc (loc, LE_EXPR, bool_type,
+		       emit_tree_and_return_var (seq, int_arg),
+		       mantissa_mask);
+
+  tree subnorm_cond = emit_tree_and_return_var (seq, subnorm_cond_tmp);
+
+  tree zero_cond
+    = fold_build2_loc (loc, GT_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, int_arg),
+		       build_int_cstu (int_arg_type, 0));
+
+  tree subnorm_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, subnorm_cond),
+		       emit_tree_and_return_var (seq, zero_cond));
+
+  return emit_tree_and_return_var (seq, subnorm_check);
+}
+
+/* Generates code to check if ARG is an infinity.  In the FP case we test
+   FABS(ARG) == INF and in the INT case we check the bits on the exp and
+   mantissa.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_infinity (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_INFINITIES (mode))
+    {
+      return build_int_cst (bool_type, false);
+    }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+      REAL_VALUE_TYPE r;
+      real_inf (&r);
+      tree res = fold_build2_loc (loc, EQ_EXPR, bool_type, arg_p,
+				  build_real (type, r));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0.  */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, EQ_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Generates code to check if ARG is a finite number.  In the FP case we check
+   if FABS(ARG) <= MAX_VALUE(ARG) and in the INT case we check the exp and
+   mantissa bits.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_finite (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (arg) && !HONOR_INFINITIES (arg))
+    {
+      return build_int_cst (bool_type, true);
+    }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      REAL_VALUE_TYPE rmax;
+      char buf[128];
+      get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&rmax, buf);
+
+      tree res = fold_build2_loc (loc, LE_EXPR, bool_type, arg_p,
+				  build_real (type, rmax));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0. */
+  tree inf_check_tmp
+    = fold_build2_loc (loc, LT_EXPR, bool_type,
+		       emit_tree_and_return_var (seq, int_arg),
+		       inf_mask);
+
+  tree inf_check = emit_tree_and_return_var (seq, inf_check_tmp);
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Generates code to check if ARG is a NaN. In the FP case we simply check if
+   ARG != ARG and in the INT case we check the bits in the exp and mantissa.
+   Returns a variable containing a boolean which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_nan (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (mode))
+    {
+      return build_int_cst (bool_type, false);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      tree res
+	= fold_build2_loc (loc, UNORDERED_EXPR, bool_type,arg_p, arg_p);
+
+      return emit_tree_and_return_var (seq, res);
+  }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is not 0.  */
+  tree nan_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, GT_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, nan_check);
+}
+
+/* Validates a single argument from the arguments list CALL at position INDEX.
+   The extracted parameter is compared against the expected type CODE.
+
+   A boolean is returned indicating if the parameter exist and if of the
+   expected type.  */
+static bool
+gimple_validate_arg (gimple* call, int index, enum tree_code code)
+{
+  const tree arg = gimple_call_arg (call, index);
+  if (!arg)
+    return false;
+  else if (code == POINTER_TYPE)
+    return POINTER_TYPE_P (TREE_TYPE (arg));
+  else if (code == INTEGER_TYPE)
+    return INTEGRAL_TYPE_P (TREE_TYPE (arg));
+  return code == TREE_CODE (TREE_TYPE (arg));
+}
+
+/* Lowers calls to __builtin_fpclassify to
+   fpclassify (x) ->
+     isnormal(x) ? FP_NORMAL :
+       iszero (x) ? FP_ZERO :
+	 isnan (x) ? FP_NAN :
+	   isinfinite (x) ? FP_INFINITE :
+	     FP_SUBNORMAL.
+
+   The code may use integer arithmentic if it decides
+   that the produced assembly would be faster. This can only be done
+   for numbers that are similar to IEEE-754 in format.
+
+   This builtin will generate code to return the appropriate floating
+   point classification depending on the value of the floating point
+   number passed in.  The possible return values must be supplied as
+   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
+   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
+   one floating point argument which is "type generic".
+
+   GSI is the gimple iterator containing the fpclassify call to lower.
+   The call will be expanded and replaced inline in the given GSI.  */
+static void
+lower_builtin_fpclassify (gimple_stmt_iterator *gsi)
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 6
+      || !gimple_validate_arg (call, 0, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 1, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 2, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 3, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 4, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 5, REAL_TYPE))
+    return;
+
+  /* Collect the arguments from the call.  */
+  tree fp_nan = gimple_call_arg (call, 0);
+  tree fp_infinite = gimple_call_arg (call, 1);
+  tree fp_normal = gimple_call_arg (call, 2);
+  tree fp_subnormal = gimple_call_arg (call, 3);
+  tree fp_zero = gimple_call_arg (call, 4);
+  tree arg = gimple_call_arg (call, 5);
+
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (TREE_TYPE (orig_dest));
+
+  emit_tree_cond (&body, dest, done_label,
+		  is_normal (&body, arg, loc), fp_normal);
+  emit_tree_cond (&body, dest, done_label,
+		  is_zero (&body, arg, loc), fp_zero);
+  emit_tree_cond (&body, dest, done_label,
+		  is_nan (&body, arg, loc), fp_nan);
+  emit_tree_cond (&body, dest, done_label,
+		  is_infinity (&body, arg, loc), fp_infinite);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to is_subnormal.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, fp_subnormal));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+    {
+      gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+    }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+
+  /* Remove the call to __builtin_fpclassify.  */
+  gsi_remove (gsi, false);
+}
+
+/* Generic wrapper for the is_nan, is_normal, is_subnormal, is_zero, etc.
+   All these functions have the same setup. The wrapper validates the parameter
+   and also creates the branches and labels required to properly invoke.
+   This has been generalize and the function to call is passed as argument FNDECL.
+
+   GSI is the gimple iterator containing the fpclassify call to lower.
+   The call will be expanded and replaced inline in the given GSI.  */
+static void
+gen_call_fp_builtin (gimple_stmt_iterator *gsi,
+		     tree (*fndecl)(gimple_seq *, tree, location_t))
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 1
+      || !gimple_validate_arg (call, 0, REAL_TYPE))
+    return;
+
+  tree arg = gimple_call_arg (call, 0);
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  tree type = TREE_TYPE (orig_dest);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+      dest = create_tmp_reg (type);
+
+  tree t_true = build_int_cst (type, true);
+  tree t_false = build_int_cst (type, false);
+
+  emit_tree_cond (&body, dest, done_label,
+		  fndecl (&body, arg, loc), t_true);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to false.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, t_false));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+  {
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+  }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+  /* Remove the call to the builtin.  */
+  gsi_remove (gsi, false);
+}
+
+/* Lower and expand calls to __builtin_isnan in GSI.  */
+static void
+lower_builtin_isnan (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_nan);
+}
+
+/* Lower and expand calls to __builtin_isinfinite in GSI.  */
+static void
+lower_builtin_isinfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_infinity);
+}
+
+/* Lower and expand calls to __builtin_isnormal in GSI.  */
+static void
+lower_builtin_isnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_normal);
+}
+
+/* Lower and expand calls to __builtin_iszero in GSI.  */
+static void
+lower_builtin_iszero (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_zero);
+}
+
+/* Lower and expand calls to __builtin_issubnormal in GSI.  */
+static void
+lower_builtin_issubnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_subnormal);
+}
+
+/* Lower and expand calls to __builtin_isfinite in GSI.  */
+static void
+lower_builtin_isfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_finite);
+}
+
 /* Lower calls to posix_memalign to
      res = posix_memalign (ptr, align, size);
      if (res == 0)
diff --git a/gcc/real.h b/gcc/real.h
index 59af580e78f2637be84f71b98b45ec6611053222..4b1b92138e07f43a175a2cbee4d952afad5898f7 100644
--- a/gcc/real.h
+++ b/gcc/real.h
@@ -161,6 +161,19 @@ struct real_format
   bool has_signed_zero;
   bool qnan_msb_set;
   bool canonical_nan_lsbs_set;
+
+  /* This flag indicates whether the format is suitable for the optimized
+     code paths for the __builtin_fpclassify function and friends.  For
+     this, the format must be a base 2 representation with the sign bit as
+     the most-significant bit followed by (exp <= 32) exponent bits
+     followed by the mantissa bits.  It must be possible to interpret the
+     bits of the floating-point representation as an integer.  NaNs and
+     INFs (if available) must be represented by the same schema used by
+     IEEE 754.  (NaNs must be represented by an exponent with all bits 1,
+     any mantissa except all bits 0 and any sign bit.  +INF and -INF must be
+     represented by an exponent with all bits 1, a mantissa with all bits 0 and
+     a sign bit of 0 and 1 respectively.)  */
+  bool is_binary_ieee_compatible;
   const char *name;
 };
 
@@ -511,6 +524,11 @@ extern bool real_isinteger (const REAL_VALUE_TYPE *, HOST_WIDE_INT *);
    float string.  BUF must be large enough to contain the result.  */
 extern void get_max_float (const struct real_format *, char *, size_t);
 
+/* Write into BUF the smallest positive normalized number x,
+   such that b**(x-1) is normalized.  BUF must be large enough
+   to contain the result.  */
+extern void get_min_float (const struct real_format *, char *, size_t);
+
 #ifndef GENERATOR_FILE
 /* real related routines.  */
 extern wide_int real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
diff --git a/gcc/real.c b/gcc/real.c
index 66e88e2ad366f7848609d157074c80420d778bcf..20c907a6d543c73ba62aa9a8ddf6973d82de7832 100644
--- a/gcc/real.c
+++ b/gcc/real.c
@@ -3052,6 +3052,7 @@ const struct real_format ieee_single_format =
     true,
     true,
     false,
+    true,
     "ieee_single"
   };
 
@@ -3075,6 +3076,7 @@ const struct real_format mips_single_format =
     true,
     false,
     true,
+    true,
     "mips_single"
   };
 
@@ -3098,6 +3100,7 @@ const struct real_format motorola_single_format =
     true,
     true,
     true,
+    true,
     "motorola_single"
   };
 
@@ -3132,6 +3135,7 @@ const struct real_format spu_single_format =
     true,
     false,
     false,
+    false,
     "spu_single"
   };
 \f
@@ -3343,6 +3347,7 @@ const struct real_format ieee_double_format =
     true,
     true,
     false,
+    true,
     "ieee_double"
   };
 
@@ -3366,6 +3371,7 @@ const struct real_format mips_double_format =
     true,
     false,
     true,
+    true,
     "mips_double"
   };
 
@@ -3389,6 +3395,7 @@ const struct real_format motorola_double_format =
     true,
     true,
     true,
+    true,
     "motorola_double"
   };
 \f
@@ -3735,6 +3742,7 @@ const struct real_format ieee_extended_motorola_format =
     true,
     true,
     true,
+    false,
     "ieee_extended_motorola"
   };
 
@@ -3758,6 +3766,7 @@ const struct real_format ieee_extended_intel_96_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96"
   };
 
@@ -3781,6 +3790,7 @@ const struct real_format ieee_extended_intel_128_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_128"
   };
 
@@ -3806,6 +3816,7 @@ const struct real_format ieee_extended_intel_96_round_53_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96_round_53"
   };
 \f
@@ -3896,6 +3907,7 @@ const struct real_format ibm_extended_format =
     true,
     true,
     false,
+    false,
     "ibm_extended"
   };
 
@@ -3919,6 +3931,7 @@ const struct real_format mips_extended_format =
     true,
     false,
     true,
+    false,
     "mips_extended"
   };
 
@@ -4184,6 +4197,7 @@ const struct real_format ieee_quad_format =
     true,
     true,
     false,
+    true,
     "ieee_quad"
   };
 
@@ -4207,6 +4221,7 @@ const struct real_format mips_quad_format =
     true,
     false,
     true,
+    true,
     "mips_quad"
   };
 \f
@@ -4509,6 +4524,7 @@ const struct real_format vax_f_format =
     false,
     false,
     false,
+    false,
     "vax_f"
   };
 
@@ -4532,6 +4548,7 @@ const struct real_format vax_d_format =
     false,
     false,
     false,
+    false,
     "vax_d"
   };
 
@@ -4555,6 +4572,7 @@ const struct real_format vax_g_format =
     false,
     false,
     false,
+    false,
     "vax_g"
   };
 \f
@@ -4633,6 +4651,7 @@ const struct real_format decimal_single_format =
     true,
     true,
     false,
+    false,
     "decimal_single"
   };
 
@@ -4657,6 +4676,7 @@ const struct real_format decimal_double_format =
     true,
     true,
     false,
+    false,
     "decimal_double"
   };
 
@@ -4681,6 +4701,7 @@ const struct real_format decimal_quad_format =
     true,
     true,
     false,
+    false,
     "decimal_quad"
   };
 \f
@@ -4820,6 +4841,7 @@ const struct real_format ieee_half_format =
     true,
     true,
     false,
+    true,
     "ieee_half"
   };
 
@@ -4846,6 +4868,7 @@ const struct real_format arm_half_format =
     true,
     false,
     false,
+    false,
     "arm_half"
   };
 \f
@@ -4893,6 +4916,7 @@ const struct real_format real_internal_format =
     true,
     true,
     false,
+    false,
     "real_internal"
   };
 \f
@@ -5080,6 +5104,16 @@ get_max_float (const struct real_format *fmt, char *buf, size_t len)
   gcc_assert (strlen (buf) < len);
 }
 
+/* Write into BUF the minimum negative representable finite floating-point
+   number, x, such that b**(x-1) is normalized.
+   BUF must be large enough to contain the result.  */
+void
+get_min_float (const struct real_format *fmt, char *buf, size_t len)
+{
+  sprintf (buf, "0x1p%d", fmt->emin - 1);
+  gcc_assert (strlen (buf) < len);
+}
+
 /* True if mode M has a NaN representation and
    the treatment of NaN operands is important.  */
 
diff --git a/gcc/testsuite/gcc.dg/builtins-43.c b/gcc/testsuite/gcc.dg/builtins-43.c
index f7c318edf084104b9b820e18e631ed61e760569e..5d41c28aef8619f06658f45846ae15dd8b4987ed 100644
--- a/gcc/testsuite/gcc.dg/builtins-43.c
+++ b/gcc/testsuite/gcc.dg/builtins-43.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-gimple -fdump-tree-optimized" } */
+/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-lower -fdump-tree-optimized" } */
   
 extern void f(int);
 extern void link_error ();
@@ -51,7 +51,7 @@ main ()
 
 
 /* Check that all instances of __builtin_isnan were folded.  */
-/* { dg-final { scan-tree-dump-times "isnan" 0 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "isnan" 0 "lower" } } */
 
 /* Check that all instances of link_error were subject to DCE.  */
 /* { dg-final { scan-tree-dump-times "link_error" 0 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/fold-notunord.c b/gcc/testsuite/gcc.dg/fold-notunord.c
deleted file mode 100644
index ca345154ac204cb5f380855828421b7f88d49052..0000000000000000000000000000000000000000
--- a/gcc/testsuite/gcc.dg/fold-notunord.c
+++ /dev/null
@@ -1,9 +0,0 @@
-/* { dg-do compile } */
-/* { dg-options "-O -ftrapping-math -fdump-tree-optimized" } */
-
-int f (double d)
-{
-  return !__builtin_isnan (d);
-}
-
-/* { dg-final { scan-tree-dump " ord " "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/pr28796-1.c b/gcc/testsuite/gcc.dg/pr28796-1.c
index 077118a298878441e812410f3a6bf3707fb1d839..a57b4e350af1bc45344106fdeab4b32ef87f233f 100644
--- a/gcc/testsuite/gcc.dg/pr28796-1.c
+++ b/gcc/testsuite/gcc.dg/pr28796-1.c
@@ -1,5 +1,5 @@
 /* { dg-do link } */
-/* { dg-options "-ffinite-math-only" } */
+/* { dg-options "-ffinite-math-only -O2" } */
 
 extern void link_error(void);
 
diff --git a/gcc/testsuite/gcc.dg/tg-tests.h b/gcc/testsuite/gcc.dg/tg-tests.h
index 0cf1f6452584962cebda999b5e8c7a61180d7830..cbb707c54b8437aada0cc205b1819369ada95665 100644
--- a/gcc/testsuite/gcc.dg/tg-tests.h
+++ b/gcc/testsuite/gcc.dg/tg-tests.h
@@ -11,6 +11,7 @@ void __attribute__ ((__noinline__))
 foo_1 (float f, double d, long double ld,
        int res_unord, int res_isnan, int res_isinf,
        int res_isinf_sign, int res_isfin, int res_isnorm,
+       int res_iszero, int res_issubnorm,
        int res_signbit, int classification)
 {
   if (__builtin_isunordered (f, 0) != res_unord)
@@ -80,6 +81,20 @@ foo_1 (float f, double d, long double ld,
   if (__builtin_finitel (ld) != res_isfin)
     __builtin_abort ();
 
+  if (__builtin_iszero (f) != res_iszero)
+    __builtin_abort ();
+  if (__builtin_iszero (d) != res_iszero)
+    __builtin_abort ();
+  if (__builtin_iszero (ld) != res_iszero)
+    __builtin_abort ();
+
+  if (__builtin_issubnormal (f) != res_issubnorm)
+    __builtin_abort ();
+  if (__builtin_issubnormal (d) != res_issubnorm)
+    __builtin_abort ();
+  if (__builtin_issubnormal (ld) != res_issubnorm)
+    __builtin_abort ();
+
   /* Sign bit of zeros and nans is not preserved in unsafe math mode.  */
 #ifdef UNSAFE
   if (!res_isnan && f != 0 && d != 0 && ld != 0)
@@ -115,12 +130,13 @@ foo_1 (float f, double d, long double ld,
 void __attribute__ ((__noinline__))
 foo (float f, double d, long double ld,
      int res_unord, int res_isnan, int res_isinf,
-     int res_isfin, int res_isnorm, int classification)
+     int res_isfin, int res_isnorm, int res_iszero,
+     int res_issubnorm, int classification)
 {
-  foo_1 (f, d, ld, res_unord, res_isnan, res_isinf, res_isinf, res_isfin, res_isnorm, 0, classification);
+  foo_1 (f, d, ld, res_unord, res_isnan, res_isinf, res_isinf, res_isfin, res_isnorm, res_iszero, res_issubnorm, 0, classification);
   /* Try all the values negated as well.  All will have the sign bit set,
      except for the nan.  */
-  foo_1 (-f, -d, -ld, res_unord, res_isnan, res_isinf, -res_isinf, res_isfin, res_isnorm, 1, classification);
+  foo_1 (-f, -d, -ld, res_unord, res_isnan, res_isinf, -res_isinf, res_isfin, res_isnorm, res_iszero, res_issubnorm, 1, classification);
 }
 
 int __attribute__ ((__noinline__))
@@ -132,35 +148,35 @@ main_tests (void)
   
   /* Test NaN.  */
   f = __builtin_nanf(""); d = __builtin_nan(""); ld = __builtin_nanl("");
-  foo(f, d, ld, /*unord=*/ 1, /*isnan=*/ 1, /*isinf=*/ 0, /*isfin=*/ 0, /*isnorm=*/ 0, FP_NAN);
+  foo(f, d, ld, /*unord=*/ 1, /*isnan=*/ 1, /*isinf=*/ 0, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_NAN);
 
   /* Test infinity.  */
   f = __builtin_inff(); d = __builtin_inf(); ld = __builtin_infl();
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, FP_INFINITE);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_INFINITE);
 
   /* Test zero.  */
   f = 0; d = 0; ld = 0;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, FP_ZERO);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, /*iszero=*/1, /*issubnorm=*/0, FP_ZERO);
 
   /* Test one.  */
   f = 1; d = 1; ld = 1;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test minimum values.  */
   f = __FLT_MIN__; d = __DBL_MIN__; ld = __LDBL_MIN__;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test subnormal values.  */
   f = __FLT_MIN__/2; d = __DBL_MIN__/2; ld = __LDBL_MIN__/2;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, FP_SUBNORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/1, FP_SUBNORMAL);
 
   /* Test maximum values.  */
   f = __FLT_MAX__; d = __DBL_MAX__; ld = __LDBL_MAX__;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test overflow values.  */
   f = __FLT_MAX__*2; d = __DBL_MAX__*2; ld = __LDBL_MAX__*2;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, FP_INFINITE);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_INFINITE);
 
   return 0;
 }
diff --git a/gcc/testsuite/gcc.dg/torture/float128-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..ec9d3ad41e24280978707888590eec1b562207f0
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128_runtime } */
+
+#define WIDTH 128
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0ede861716750453a86c9abc703ad0b2826674c6
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128x_runtime } */
+
+#define WIDTH 128
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float16-tg-4.c b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..007c4c224ea95537c31185d0aff964d1975f2190
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float16 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float16 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float16_runtime } */
+
+#define WIDTH 16
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..c7f8353da2cffdfc2c2f58f5da3d5363b95e6f91
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32 type-generic built-in functions: __builtin_f__builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32_runtime } */
+
+#define WIDTH 32
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0d7a592920aca112d5f6409e565d4582c253c977
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32x_runtime } */
+
+#define WIDTH 32
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..bb25a22a68e60ce2717ab3583bbec595dd563c35
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64_runtime } */
+
+#define WIDTH 64
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..82305d916b8bd75131e2c647fd37f74cadbc8f1d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64x_runtime } */
+
+#define WIDTH 64
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
new file mode 100644
index 0000000000000000000000000000000000000000..aa3448c090cf797a1525b1045ffebeed79cace40
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
@@ -0,0 +1,99 @@
+/* Tests for _FloatN / _FloatNx types: compile and execution tests for
+   type-generic built-in functions: __builtin_iszero, __builtin_issubnormal.
+   Before including this file, define WIDTH as the value N; define EXT to 1
+   for _FloatNx and 0 for _FloatN.  */
+
+#define __STDC_WANT_IEC_60559_TYPES_EXT__
+#include <float.h>
+
+#define CONCATX(X, Y) X ## Y
+#define CONCAT(X, Y) CONCATX (X, Y)
+#define CONCAT3(X, Y, Z) CONCAT (CONCAT (X, Y), Z)
+#define CONCAT4(W, X, Y, Z) CONCAT (CONCAT (CONCAT (W, X), Y), Z)
+
+#if EXT
+# define TYPE CONCAT3 (_Float, WIDTH, x)
+# define CST(C) CONCAT4 (C, f, WIDTH, x)
+# define MAX CONCAT3 (FLT, WIDTH, X_MAX)
+# define MIN CONCAT3 (FLT, WIDTH, X_MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, X_TRUE_MIN)
+#else
+# define TYPE CONCAT (_Float, WIDTH)
+# define CST(C) CONCAT3 (C, f, WIDTH)
+# define MAX CONCAT3 (FLT, WIDTH, _MAX)
+# define MIN CONCAT3 (FLT, WIDTH, _MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, _TRUE_MIN)
+#endif
+
+extern void exit (int);
+extern void abort (void);
+
+volatile TYPE inf = __builtin_inf (), nanval = __builtin_nan ("");
+volatile TYPE neginf = -__builtin_inf (), negnanval = -__builtin_nan ("");
+volatile TYPE zero = CST (0.0), negzero = -CST (0.0), one = CST (1.0);
+volatile TYPE max = MAX, negmax = -MAX, min = MIN, negmin = -MIN;
+volatile TYPE true_min = TRUE_MIN, negtrue_min = -TRUE_MIN;
+volatile TYPE sub_norm = MIN / 2.0;
+
+int
+main (void)
+{
+  if (__builtin_iszero (inf) == 1)
+    abort ();
+  if (__builtin_iszero (nanval) == 1)
+    abort ();
+  if (__builtin_iszero (neginf) == 1)
+    abort ();
+  if (__builtin_iszero (negnanval) == 1)
+    abort ();
+  if (__builtin_iszero (zero) != 1)
+    abort ();
+  if (__builtin_iszero (negzero) != 1)
+    abort ();
+  if (__builtin_iszero (one) == 1)
+    abort ();
+  if (__builtin_iszero (max) == 1)
+    abort ();
+  if (__builtin_iszero (negmax) == 1)
+    abort ();
+  if (__builtin_iszero (min) == 1)
+    abort ();
+  if (__builtin_iszero (negmin) == 1)
+    abort ();
+  if (__builtin_iszero (true_min) == 1)
+    abort ();
+  if (__builtin_iszero (negtrue_min) == 1)
+    abort ();
+  if (__builtin_iszero (sub_norm) == 1)
+    abort ();
+
+  if (__builtin_issubnormal (inf) == 1)
+    abort ();
+  if (__builtin_issubnormal (nanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (neginf) == 1)
+    abort ();
+  if (__builtin_issubnormal (negnanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (zero) == 1)
+    abort ();
+  if (__builtin_issubnormal (negzero) == 1)
+    abort ();
+  if (__builtin_issubnormal (one) == 1)
+    abort ();
+  if (__builtin_issubnormal (max) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmax) == 1)
+    abort ();
+  if (__builtin_issubnormal (min) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmin) == 1)
+    abort ();
+  if (__builtin_issubnormal (true_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (negtrue_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (sub_norm) != 1)
+    abort ();
+  exit (0);
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
new file mode 100644
index 0000000000000000000000000000000000000000..3a1bf956bfcedcc15dc1cbacfe2f0b663b31c3cc
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
@@ -0,0 +1,22 @@
+/* This file checks the code generation for the new __builtin_fpclassify.
+   because checking the exact assembly isn't very useful, we'll just be checking
+   for the presence of certain instructions and the omition of others. */
+/* { dg-options "-O2" } */
+/* { dg-do compile } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fabs\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmp\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmpe\[ \t\]?" } } */
+/* { dg-final { scan-assembler "\[ \t\]?ubfx\[ \t\]?" } } */
+
+#include <stdio.h>
+#include <math.h>
+
+/*
+ fp_nan = args[0];
+ fp_infinite = args[1];
+ fp_normal = args[2];
+ fp_subnormal = args[3];
+ fp_zero = args[4];
+*/
+
+int f(double x) { return __builtin_fpclassify(0, 1, 4, 3, 2, x); }

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-19 14:44                       ` Tamar Christina
@ 2017-01-19 14:47                         ` Joseph Myers
  2017-01-19 15:26                           ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Joseph Myers @ 2017-01-19 14:47 UTC (permalink / raw)
  To: Tamar Christina
  Cc: Jeff Law, GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

On Thu, 19 Jan 2017, Tamar Christina wrote:

> > Also, I don't think the call to perform_ibm_extended_fixups in
> > is_subnormal is correct.  Subnormal for IBM long double is *not* the same
> > as subnormal double high part.  Likewise it's incorrect in is_normal as
> > well.
> 
> The calls to is_zero and is_subnormal were incorrect indeed. I've 
> corrected them by not calling the fixup code and to instead make sure it 
> falls through into the old fp based code which did normal floating point 
> operations on the number. This is the same code as was before in 
> fpclassify so it should work.

For is_zero it's fine to test based on the high part for IBM long double; 
an IBM long double is (zero, infinite, NaN, finite) if and only if the 
high part is.  The problem is the different threshold between normal and 
subnormal.

> As for is_normal, the code is almost identical as the code that used to 
> be in fold_builtin_interclass_mathfn in BUILT_IN_ISNORMAL, with the 
> exception that I don't check <= max_value but instead < inifity, so I 
> can reuse the same constant.

The old code set orig_arg before converting IBM long double to double.  
Your code sets it after the conversion.  The old code set min_exp based on 
a string set from REAL_MODE_FORMAT (orig_mode)->emin - 1; your code uses 
the adjusted mode.  Both of those are incorrect for IBM long double.

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-19 14:47                         ` Joseph Myers
@ 2017-01-19 15:26                           ` Tamar Christina
  2017-01-19 16:34                             ` Joseph Myers
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2017-01-19 15:26 UTC (permalink / raw)
  To: Joseph Myers
  Cc: Jeff Law, GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

[-- Attachment #1: Type: text/plain, Size: 3330 bytes --]


> > The calls to is_zero and is_subnormal were incorrect indeed. I've
> > corrected them by not calling the fixup code and to instead make sure it
> > falls through into the old fp based code which did normal floating point
> > operations on the number. This is the same code as was before in
> > fpclassify so it should work.
>
> For is_zero it's fine to test based on the high part for IBM long double;
> an IBM long double is (zero, infinite, NaN, finite) if and only if the
> high part is.  The problem is the different threshold between normal and
> subnormal.

Ah ok, I've added it back in for is_zero then.

> > As for is_normal, the code is almost identical as the code that used to
> > be in fold_builtin_interclass_mathfn in BUILT_IN_ISNORMAL, with the
> > exception that I don't check <= max_value but instead < inifity, so I
> > can reuse the same constant.
>
> The old code set orig_arg before converting IBM long double to double.
> Your code sets it after the conversion.  The old code set min_exp based on
> a string set from REAL_MODE_FORMAT (orig_mode)->emin - 1; your code uses
> the adjusted mode.  Both of those are incorrect for IBM long double.

Hmm, this is correct, I've made the change to be like it was before, but
there's something I don't quite get about the old code, if it's building rmin
the orig_mode which is larger than mode, but then creating the real using
build_real (type, rmin) using the adjusted type, shouldn't it not be getting
truncated?

I've updated and attached the patch.

Tamar

________________________________________
From: Joseph Myers <joseph@codesourcery.com>
Sent: Thursday, January 19, 2017 2:46:06 PM
To: Tamar Christina
Cc: Jeff Law; GCC Patches; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On Thu, 19 Jan 2017, Tamar Christina wrote:

> > Also, I don't think the call to perform_ibm_extended_fixups in
> > is_subnormal is correct.  Subnormal for IBM long double is *not* the same
> > as subnormal double high part.  Likewise it's incorrect in is_normal as
> > well.
>
> The calls to is_zero and is_subnormal were incorrect indeed. I've
> corrected them by not calling the fixup code and to instead make sure it
> falls through into the old fp based code which did normal floating point
> operations on the number. This is the same code as was before in
> fpclassify so it should work.

For is_zero it's fine to test based on the high part for IBM long double;
an IBM long double is (zero, infinite, NaN, finite) if and only if the
high part is.  The problem is the different threshold between normal and
subnormal.

> As for is_normal, the code is almost identical as the code that used to
> be in fold_builtin_interclass_mathfn in BUILT_IN_ISNORMAL, with the
> exception that I don't check <= max_value but instead < inifity, so I
> can reuse the same constant.

The old code set orig_arg before converting IBM long double to double.
Your code sets it after the conversion.  The old code set min_exp based on
a string set from REAL_MODE_FORMAT (orig_mode)->emin - 1; your code uses
the adjusted mode.  Both of those are incorrect for IBM long double.

--
Joseph S. Myers
joseph@codesourcery.com

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: fp-classify-jan-19-spin2.patch --]
[-- Type: text/x-patch; name="fp-classify-jan-19-spin2.patch", Size: 74879 bytes --]

diff --git a/gcc/builtins.c b/gcc/builtins.c
index 3ac2d44148440b124559ba7cd3de483b7a74b72d..d8ff9c70ae6b9e72e09b8cbd9a0bd41b6830b83e 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -160,7 +160,6 @@ static tree fold_builtin_0 (location_t, tree);
 static tree fold_builtin_1 (location_t, tree, tree);
 static tree fold_builtin_2 (location_t, tree, tree, tree);
 static tree fold_builtin_3 (location_t, tree, tree, tree, tree);
-static tree fold_builtin_varargs (location_t, tree, tree*, int);
 
 static tree fold_builtin_strpbrk (location_t, tree, tree, tree);
 static tree fold_builtin_strstr (location_t, tree, tree, tree);
@@ -2202,19 +2201,8 @@ interclass_mathfn_icode (tree arg, tree fndecl)
   switch (DECL_FUNCTION_CODE (fndecl))
     {
     CASE_FLT_FN (BUILT_IN_ILOGB):
-      errno_set = true; builtin_optab = ilogb_optab; break;
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      builtin_optab = isinf_optab; break;
-    case BUILT_IN_ISNORMAL:
-    case BUILT_IN_ISFINITE:
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      /* These builtins have no optabs (yet).  */
+      errno_set = true;
+      builtin_optab = ilogb_optab;
       break;
     default:
       gcc_unreachable ();
@@ -2233,8 +2221,7 @@ interclass_mathfn_icode (tree arg, tree fndecl)
 }
 
 /* Expand a call to one of the builtin math functions that operate on
-   floating point argument and output an integer result (ilogb, isinf,
-   isnan, etc).
+   floating point argument and output an integer result (ilogb, etc).
    Return 0 if a normal call should be emitted rather than expanding the
    function in-line.  EXP is the expression that is a call to the builtin
    function; if convenient, the result should be placed in TARGET.  */
@@ -5997,11 +5984,7 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
     CASE_FLT_FN (BUILT_IN_ILOGB):
       if (! flag_unsafe_math_optimizations)
 	break;
-      gcc_fallthrough ();
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-    case BUILT_IN_ISNORMAL:
+
       target = expand_builtin_interclass_mathfn (exp, target);
       if (target)
 	return target;
@@ -6281,8 +6264,25 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
 	}
       break;
 
+    CASE_FLT_FN (BUILT_IN_ISINF):
+    case BUILT_IN_ISNAND32:
+    case BUILT_IN_ISNAND64:
+    case BUILT_IN_ISNAND128:
+    case BUILT_IN_ISNAN:
+    case BUILT_IN_ISINFD32:
+    case BUILT_IN_ISINFD64:
+    case BUILT_IN_ISINFD128:
+    case BUILT_IN_ISNORMAL:
+    case BUILT_IN_ISZERO:
+    case BUILT_IN_ISSUBNORMAL:
+    case BUILT_IN_FPCLASSIFY:
     case BUILT_IN_SETJMP:
-      /* This should have been lowered to the builtins below.  */
+    CASE_FLT_FN (BUILT_IN_FINITE):
+    case BUILT_IN_FINITED32:
+    case BUILT_IN_FINITED64:
+    case BUILT_IN_FINITED128:
+    case BUILT_IN_ISFINITE:
+      /* These should have been lowered to the builtins in gimple-low.c.  */
       gcc_unreachable ();
 
     case BUILT_IN_SETJMP_SETUP:
@@ -7622,184 +7622,19 @@ fold_builtin_modf (location_t loc, tree arg0, tree arg1, tree rettype)
   return NULL_TREE;
 }
 
-/* Given a location LOC, an interclass builtin function decl FNDECL
-   and its single argument ARG, return an folded expression computing
-   the same, or NULL_TREE if we either couldn't or didn't want to fold
-   (the latter happen if there's an RTL instruction available).  */
-
-static tree
-fold_builtin_interclass_mathfn (location_t loc, tree fndecl, tree arg)
-{
-  machine_mode mode;
-
-  if (!validate_arg (arg, REAL_TYPE))
-    return NULL_TREE;
-
-  if (interclass_mathfn_icode (arg, fndecl) != CODE_FOR_nothing)
-    return NULL_TREE;
-
-  mode = TYPE_MODE (TREE_TYPE (arg));
-
-  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
 
-  /* If there is no optab, try generic code.  */
-  switch (DECL_FUNCTION_CODE (fndecl))
-    {
-      tree result;
 
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      {
-	/* isinf(x) -> isgreater(fabs(x),DBL_MAX).  */
-	tree const isgr_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isgr_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	return result;
-      }
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-      {
-	/* isfinite(x) -> islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isle_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	/*result = fold_build2_loc (loc, UNGT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	result = fold_build1_loc (loc, TRUTH_NOT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  result);*/
-	return result;
-      }
-    case BUILT_IN_ISNORMAL:
-      {
-	/* isnormal(x) -> isgreaterequal(fabs(x),DBL_MIN) &
-	   islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	tree orig_arg, max_exp, min_exp;
-	machine_mode orig_mode = mode;
-	REAL_VALUE_TYPE rmax, rmin;
-	char buf[128];
-
-	orig_arg = arg = builtin_save_expr (arg);
-	if (is_ibm_extended)
-	  {
-	    /* Use double to test the normal range of IBM extended
-	       precision.  Emin for IBM extended precision is
-	       different to emin for IEEE double, being 53 higher
-	       since the low double exponent is at least 53 lower
-	       than the high double exponent.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	arg = fold_build1_loc (loc, ABS_EXPR, type, arg);
-
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&rmax, buf);
-	sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (orig_mode)->emin - 1);
-	real_from_string (&rmin, buf);
-	max_exp = build_real (type, rmax);
-	min_exp = build_real (type, rmin);
-
-	max_exp = build_call_expr (isle_fn, 2, arg, max_exp);
-	if (is_ibm_extended)
-	  {
-	    /* Testing the high end of the range is done just using
-	       the high double, using the same test as isfinite().
-	       For the subnormal end of the range we first test the
-	       high double, then if its magnitude is equal to the
-	       limit of 0x1p-969, we test whether the low double is
-	       non-zero and opposite sign to the high double.  */
-	    tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
-	    tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	    tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
-	    tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
-				       arg, min_exp);
-	    tree as_complex = build1 (VIEW_CONVERT_EXPR,
-				      complex_double_type_node, orig_arg);
-	    tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
-	    tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
-	    tree zero = build_real (type, dconst0);
-	    tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
-	    tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
-	    tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
-	    tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
-				      fold_build3 (COND_EXPR,
-						   integer_type_node,
-						   hilt, logt, lolt));
-	    eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
-				  eq_min, ok_lo);
-	    min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
-				   gt_min, eq_min);
-	  }
-	else
-	  {
-	    tree const isge_fn
-	      = builtin_decl_explicit (BUILT_IN_ISGREATEREQUAL);
-	    min_exp = build_call_expr (isge_fn, 2, arg, min_exp);
-	  }
-	result = fold_build2 (BIT_AND_EXPR, integer_type_node,
-			      max_exp, min_exp);
-	return result;
-      }
-    default:
-      break;
-    }
-
-  return NULL_TREE;
-}
-
-/* Fold a call to __builtin_isnan(), __builtin_isinf, __builtin_finite.
+/* Fold a call to __builtin_isinf_sign.
    ARG is the argument for the call.  */
 
 static tree
-fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
+fold_builtin_classify (location_t loc, tree arg, int builtin_index)
 {
-  tree type = TREE_TYPE (TREE_TYPE (fndecl));
-
   if (!validate_arg (arg, REAL_TYPE))
     return NULL_TREE;
 
   switch (builtin_index)
     {
-    case BUILT_IN_ISINF:
-      if (!HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      return NULL_TREE;
-
     case BUILT_IN_ISINF_SIGN:
       {
 	/* isinf_sign(x) -> isinf(x) ? (signbit(x) ? -1 : 1) : 0 */
@@ -7832,106 +7667,11 @@ fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
 	return tmp;
       }
 
-    case BUILT_IN_ISFINITE:
-      if (!HONOR_NANS (arg)
-	  && !HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_one_node, arg);
-
-      return NULL_TREE;
-
-    case BUILT_IN_ISNAN:
-      if (!HONOR_NANS (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      {
-	bool is_ibm_extended = MODE_COMPOSITE_P (TYPE_MODE (TREE_TYPE (arg)));
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    arg = fold_build1_loc (loc, NOP_EXPR, double_type_node, arg);
-	  }
-      }
-      arg = builtin_save_expr (arg);
-      return fold_build2_loc (loc, UNORDERED_EXPR, type, arg, arg);
-
     default:
       gcc_unreachable ();
     }
 }
 
-/* Fold a call to __builtin_fpclassify(int, int, int, int, int, ...).
-   This builtin will generate code to return the appropriate floating
-   point classification depending on the value of the floating point
-   number passed in.  The possible return values must be supplied as
-   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
-   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
-   one floating point argument which is "type generic".  */
-
-static tree
-fold_builtin_fpclassify (location_t loc, tree *args, int nargs)
-{
-  tree fp_nan, fp_infinite, fp_normal, fp_subnormal, fp_zero,
-    arg, type, res, tmp;
-  machine_mode mode;
-  REAL_VALUE_TYPE r;
-  char buf[128];
-
-  /* Verify the required arguments in the original call.  */
-  if (nargs != 6
-      || !validate_arg (args[0], INTEGER_TYPE)
-      || !validate_arg (args[1], INTEGER_TYPE)
-      || !validate_arg (args[2], INTEGER_TYPE)
-      || !validate_arg (args[3], INTEGER_TYPE)
-      || !validate_arg (args[4], INTEGER_TYPE)
-      || !validate_arg (args[5], REAL_TYPE))
-    return NULL_TREE;
-
-  fp_nan = args[0];
-  fp_infinite = args[1];
-  fp_normal = args[2];
-  fp_subnormal = args[3];
-  fp_zero = args[4];
-  arg = args[5];
-  type = TREE_TYPE (arg);
-  mode = TYPE_MODE (type);
-  arg = builtin_save_expr (fold_build1_loc (loc, ABS_EXPR, type, arg));
-
-  /* fpclassify(x) ->
-       isnan(x) ? FP_NAN :
-         (fabs(x) == Inf ? FP_INFINITE :
-	   (fabs(x) >= DBL_MIN ? FP_NORMAL :
-	     (x == 0 ? FP_ZERO : FP_SUBNORMAL))).  */
-
-  tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-		     build_real (type, dconst0));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node,
-		     tmp, fp_zero, fp_subnormal);
-
-  sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (mode)->emin - 1);
-  real_from_string (&r, buf);
-  tmp = fold_build2_loc (loc, GE_EXPR, integer_type_node,
-		     arg, build_real (type, r));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, fp_normal, res);
-
-  if (HONOR_INFINITIES (mode))
-    {
-      real_inf (&r);
-      tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-			 build_real (type, r));
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp,
-			 fp_infinite, res);
-    }
-
-  if (HONOR_NANS (mode))
-    {
-      tmp = fold_build2_loc (loc, ORDERED_EXPR, integer_type_node, arg, arg);
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, res, fp_nan);
-    }
-
-  return res;
-}
-
 /* Fold a call to an unordered comparison function such as
    __builtin_isgreater().  FNDECL is the FUNCTION_DECL for the function
    being called and ARG0 and ARG1 are the arguments for the call.
@@ -8232,40 +7972,8 @@ fold_builtin_1 (location_t loc, tree fndecl, tree arg0)
     case BUILT_IN_ISDIGIT:
       return fold_builtin_isdigit (loc, arg0);
 
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISFINITE:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISFINITE);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    case BUILT_IN_ISNORMAL:
-      return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-
     case BUILT_IN_ISINF_SIGN:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF_SIGN);
-
-    CASE_FLT_FN (BUILT_IN_ISNAN):
-    case BUILT_IN_ISNAND32:
-    case BUILT_IN_ISNAND64:
-    case BUILT_IN_ISNAND128:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISNAN);
+      return fold_builtin_classify (loc, arg0, BUILT_IN_ISINF_SIGN);
 
     case BUILT_IN_FREE:
       if (integer_zerop (arg0))
@@ -8465,7 +8173,6 @@ fold_builtin_n (location_t loc, tree fndecl, tree *args, int nargs, bool)
       ret = fold_builtin_3 (loc, fndecl, args[0], args[1], args[2]);
       break;
     default:
-      ret = fold_builtin_varargs (loc, fndecl, args, nargs);
       break;
     }
   if (ret)
@@ -9422,37 +9129,6 @@ fold_builtin_object_size (tree ptr, tree ost)
   return NULL_TREE;
 }
 
-/* Builtins with folding operations that operate on "..." arguments
-   need special handling; we need to store the arguments in a convenient
-   data structure before attempting any folding.  Fortunately there are
-   only a few builtins that fall into this category.  FNDECL is the
-   function, EXP is the CALL_EXPR for the call.  */
-
-static tree
-fold_builtin_varargs (location_t loc, tree fndecl, tree *args, int nargs)
-{
-  enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl);
-  tree ret = NULL_TREE;
-
-  switch (fcode)
-    {
-    case BUILT_IN_FPCLASSIFY:
-      ret = fold_builtin_fpclassify (loc, args, nargs);
-      break;
-
-    default:
-      break;
-    }
-  if (ret)
-    {
-      ret = build1 (NOP_EXPR, TREE_TYPE (ret), ret);
-      SET_EXPR_LOCATION (ret, loc);
-      TREE_NO_WARNING (ret) = 1;
-      return ret;
-    }
-  return NULL_TREE;
-}
-
 /* Initialize format string characters in the target charset.  */
 
 bool
diff --git a/gcc/builtins.def b/gcc/builtins.def
index 219feebd3aebefbd079bf37cc801453cd1965e00..91aa6f37fa098777bc794bad56d8c561ab9fdc44 100644
--- a/gcc/builtins.def
+++ b/gcc/builtins.def
@@ -831,6 +831,8 @@ DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFL, "isinfl", BT_FN_INT_LONGDOUBLE, ATTR_CO
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD32, "isinfd32", BT_FN_INT_DFLOAT32, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD64, "isinfd64", BT_FN_INT_DFLOAT64, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD128, "isinfd128", BT_FN_INT_DFLOAT128, ATTR_CONST_NOTHROW_LEAF_LIST)
+DEF_GCC_BUILTIN        (BUILT_IN_ISZERO, "iszero", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
+DEF_GCC_BUILTIN        (BUILT_IN_ISSUBNORMAL, "issubnormal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_C99_C90RES_BUILTIN (BUILT_IN_ISNAN, "isnan", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANF, "isnanf", BT_FN_INT_FLOAT, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANL, "isnanl", BT_FN_INT_LONGDOUBLE, ATTR_CONST_NOTHROW_LEAF_LIST)
diff --git a/gcc/c/c-typeck.c b/gcc/c/c-typeck.c
index f0917ed788c91b2a2c8701438150f0e634c9402b..e2b4acd21b7d70e6500e42064db49e0f4a84aae9 100644
--- a/gcc/c/c-typeck.c
+++ b/gcc/c/c-typeck.c
@@ -3232,6 +3232,8 @@ convert_arguments (location_t loc, vec<location_t> arg_loc, tree typelist,
 	case BUILT_IN_ISINF_SIGN:
 	case BUILT_IN_ISNAN:
 	case BUILT_IN_ISNORMAL:
+	case BUILT_IN_ISZERO:
+	case BUILT_IN_ISSUBNORMAL:
 	case BUILT_IN_FPCLASSIFY:
 	  type_generic_remove_excess_precision = true;
 	  break;
diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi
index 0669f7999beb078822e471352036d8f13517812d..c240bbe9a8fd595a0e7e2b41fb708efae1e5279a 100644
--- a/gcc/doc/extend.texi
+++ b/gcc/doc/extend.texi
@@ -10433,6 +10433,10 @@ in the Cilk Plus language manual which can be found at
 @findex __builtin_isgreater
 @findex __builtin_isgreaterequal
 @findex __builtin_isinf_sign
+@findex __builtin_isinf
+@findex __builtin_isnan
+@findex __builtin_iszero
+@findex __builtin_issubnormal
 @findex __builtin_isless
 @findex __builtin_islessequal
 @findex __builtin_islessgreater
@@ -11496,7 +11500,54 @@ constant values and they must appear in this order: @code{FP_NAN},
 @code{FP_INFINITE}, @code{FP_NORMAL}, @code{FP_SUBNORMAL} and
 @code{FP_ZERO}.  The ellipsis is for exactly one floating-point value
 to classify.  GCC treats the last argument as type-generic, which
-means it does not do default promotion from float to double.
+means it does not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnan (...)
+This built-in implements the C99 isnan functionality which checks if
+the given argument represents a NaN.  The return value of the
+function will either be a 0 (false) or a 1 (true).
+On most systems, when an IEEE 754 floating-point type is used this
+built-in does not produce a signal when a signaling NaN is used.
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isinf (...)
+This built-in implements the C99 isinf functionality which checks if
+the given argument represents an infinite number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnormal (...)
+This built-in implements the C99 isnormal functionality which checks if
+the given argument represents a normal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_iszero (...)
+This built-in implements the TS 18661-1:2014 iszero functionality which checks if
+the given argument represents the number 0 or -0.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_issubnormal (...)
+This built-in implements the TS 18661-1:2014 issubnormal functionality which checks if
+the given argument represents a subnormal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
 @end deftypefn
 
 @deftypefn {Built-in Function} double __builtin_inf (void)
diff --git a/gcc/gimple-low.c b/gcc/gimple-low.c
index 64752b67b86b3d01df5f5661e4666df98b7b91d1..d336ae512bbabcbb9cd558a53ee1224cd1178d6a 100644
--- a/gcc/gimple-low.c
+++ b/gcc/gimple-low.c
@@ -30,6 +30,8 @@ along with GCC; see the file COPYING3.  If not see
 #include "calls.h"
 #include "gimple-iterator.h"
 #include "gimple-low.h"
+#include "stor-layout.h"
+#include "target.h"
 
 /* The differences between High GIMPLE and Low GIMPLE are the
    following:
@@ -72,6 +74,13 @@ static void lower_gimple_bind (gimple_stmt_iterator *, struct lower_data *);
 static void lower_try_catch (gimple_stmt_iterator *, struct lower_data *);
 static void lower_gimple_return (gimple_stmt_iterator *, struct lower_data *);
 static void lower_builtin_setjmp (gimple_stmt_iterator *);
+static void lower_builtin_fpclassify (gimple_stmt_iterator *);
+static void lower_builtin_isnan (gimple_stmt_iterator *);
+static void lower_builtin_isinfinite (gimple_stmt_iterator *);
+static void lower_builtin_isnormal (gimple_stmt_iterator *);
+static void lower_builtin_iszero (gimple_stmt_iterator *);
+static void lower_builtin_issubnormal (gimple_stmt_iterator *);
+static void lower_builtin_isfinite (gimple_stmt_iterator *);
 static void lower_builtin_posix_memalign (gimple_stmt_iterator *);
 
 
@@ -330,18 +339,69 @@ lower_stmt (gimple_stmt_iterator *gsi, struct lower_data *data)
 	if (decl
 	    && DECL_BUILT_IN_CLASS (decl) == BUILT_IN_NORMAL)
 	  {
-	    if (DECL_FUNCTION_CODE (decl) == BUILT_IN_SETJMP)
+	    switch (DECL_FUNCTION_CODE (decl))
 	      {
+	      case BUILT_IN_SETJMP:
 		lower_builtin_setjmp (gsi);
 		data->cannot_fallthru = false;
 		return;
-	      }
-	    else if (DECL_FUNCTION_CODE (decl) == BUILT_IN_POSIX_MEMALIGN
-		     && flag_tree_bit_ccp
-		     && gimple_builtin_call_types_compatible_p (stmt, decl))
-	      {
-		lower_builtin_posix_memalign (gsi);
+
+	      case BUILT_IN_POSIX_MEMALIGN:
+		if (flag_tree_bit_ccp
+		    && gimple_builtin_call_types_compatible_p (stmt, decl))
+		  {
+			lower_builtin_posix_memalign (gsi);
+			return;
+		  }
+		break;
+
+	      case BUILT_IN_FPCLASSIFY:
+		lower_builtin_fpclassify (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_ISINF):
+	      case BUILT_IN_ISINFD32:
+	      case BUILT_IN_ISINFD64:
+	      case BUILT_IN_ISINFD128:
+		lower_builtin_isinfinite (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNAND32:
+	      case BUILT_IN_ISNAND64:
+	      case BUILT_IN_ISNAND128:
+	      CASE_FLT_FN (BUILT_IN_ISNAN):
+		lower_builtin_isnan (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNORMAL:
+		lower_builtin_isnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISZERO:
+		lower_builtin_iszero (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISSUBNORMAL:
+		lower_builtin_issubnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_FINITE):
+	      case BUILT_IN_FINITED32:
+	      case BUILT_IN_FINITED64:
+	      case BUILT_IN_FINITED128:
+	      case BUILT_IN_ISFINITE:
+		lower_builtin_isfinite (gsi);
+		data->cannot_fallthru = false;
 		return;
+
+	      default:
+		break;
 	      }
 	  }
 
@@ -822,6 +882,818 @@ lower_builtin_setjmp (gimple_stmt_iterator *gsi)
   gsi_remove (gsi, false);
 }
 
+/* This function will if ARG is not already a variable or SSA_NAME,
+   create a new temporary TMP and bind ARG to TMP.  This new binding is then
+   emitted into SEQ and TMP is returned.  */
+static tree
+emit_tree_and_return_var (gimple_seq *seq, tree arg)
+{
+  if (TREE_CODE (arg) == SSA_NAME || VAR_P (arg))
+    return arg;
+
+  tree tmp = create_tmp_reg (TREE_TYPE (arg));
+  gassign *stm = gimple_build_assign (tmp, arg);
+  gimple_seq_add_stmt (seq, stm);
+  return tmp;
+}
+
+/* This function builds an if statement that ends up using explicit branches
+   instead of becoming a ternary conditional select.  This function assumes you
+   will fall through to the next statements after the condition for the false
+   branch.  The code emitted looks like:
+
+   if (COND)
+     RESULT_VARIABLE = TRUE_BRANCH
+     GOTO EXIT_LABEL
+   else
+     ...
+
+   SEQ is the gimple sequence/buffer to emit any new bindings to.
+   RESULT_VARIABLE is the value to set if COND.
+   EXIT_LABEL is the label to jump to in case COND.
+   COND is condition to use in the conditional statement of the if.
+   TRUE_BRANCH is the value to set RESULT_VARIABLE to if COND.  */
+static void
+emit_tree_cond (gimple_seq *seq, tree result_variable, tree exit_label,
+		tree cond, tree true_branch)
+{
+  /* Create labels for fall through.  */
+  tree true_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree false_label = create_artificial_label (UNKNOWN_LOCATION);
+  gcond *stmt = gimple_build_cond_from_tree (cond, true_label, false_label);
+  gimple_seq_add_stmt (seq, stmt);
+
+  /* Build the true case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (true_label));
+  tree value = TREE_CONSTANT (true_branch)
+	     ? true_branch
+	     : emit_tree_and_return_var (seq, true_branch);
+  gimple_seq_add_stmt (seq, gimple_build_assign (result_variable, value));
+  gimple_seq_add_stmt (seq, gimple_build_goto (exit_label));
+
+  /* Build the false case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (false_label));
+}
+
+/* This function returns a variable containing an reinterpreted ARG as an
+   integer.
+
+   SEQ is the gimple sequence/buffer to write any new bindings to.
+   ARG is the floating point number to reinterpret as an integer.
+   LOC is the location to use when doing folding operations.  */
+static tree
+get_num_as_int (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  /* Re-interpret the float as an unsigned integer type
+     with equal precision.  */
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+  tree conv_arg = fold_build1_loc (loc, VIEW_CONVERT_EXPR, int_arg_type, arg);
+  return emit_tree_and_return_var (seq, conv_arg);
+}
+
+/* Check if ARG which is the floating point number being classified is close
+   enough to IEEE 754 format to be able to go in the early exit code.  */
+static bool
+use_ieee_int_mode (tree arg)
+{
+  tree type = TREE_TYPE (arg);
+  machine_mode mode = TYPE_MODE (type);
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  machine_mode imode = int_mode_for_mode (mode);
+  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
+
+  return (format->is_binary_ieee_compatible
+	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
+	  /* Check if there's a usable integer mode.  */
+	  && imode != BLKmode
+	  && targetm.scalar_mode_supported_p (imode)
+	  && !is_ibm_extended);
+}
+
+/* Perform some IBM extended format fixups on ARG for use by FP functions.
+   This is done by ignoring the lower 64 bits of the number.
+
+   MODE is the machine mode of ARG.
+   TYPE is the type of ARG.
+   LOC is the location to be used in fold functions.  Usually is the location
+   of the definition of ARG.  */
+static bool
+perform_ibm_extended_fixups (tree *arg, machine_mode *mode,
+			     tree *type, location_t loc)
+{
+  bool is_ibm_extended = MODE_COMPOSITE_P (*mode);
+  if (is_ibm_extended)
+    {
+      /* NaN and Inf are encoded in the high-order double value
+	 only.  The low-order value is not significant.  */
+      *type = double_type_node;
+      *mode = DFmode;
+      *arg = fold_build1_loc (loc, NOP_EXPR, *type, *arg);
+    }
+
+  return is_ibm_extended;
+}
+
+/* Generates code to check if ARG is a normal number.  For the FP case we check
+   MIN_VALUE(ARG) <= ABS(ARG) > INF and for the INT value we check the exp and
+   mantissa bits.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_normal (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const tree bool_type = boolean_type_node;
+
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree orig_arg = arg;
+      machine_mode orig_mode = mode;
+      if (TREE_CODE (arg) != SSA_NAME
+	  && (TREE_ADDRESSABLE (arg) != 0
+	    || (TREE_CODE (arg) != PARM_DECL
+	        && (!VAR_P (arg) || TREE_STATIC (arg)))))
+	orig_arg = save_expr (arg);
+
+      /* Perform IBM extended format fixups if required.  */
+      bool is_ibm_extended = perform_ibm_extended_fixups (&arg, &mode,
+							  &type, loc);
+
+      REAL_VALUE_TYPE rinf, rmin;
+      tree arg_p
+        = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      char buf[128];
+      real_inf (&rinf);
+      get_min_float (REAL_MODE_FORMAT (orig_mode), buf, sizeof (buf));
+      real_from_string (&rmin, buf);
+
+      tree inf_exp = fold_build2_loc (loc, LT_EXPR, bool_type, arg_p,
+				      build_real (type, rinf));
+      tree min_exp = build_real (type, rmin);
+      if (is_ibm_extended)
+	{
+	  /* Testing the high end of the range is done just using
+	     the high double, using the same test as isfinite().
+	     For the subnormal end of the range we first test the
+	     high double, then if its magnitude is equal to the
+	     limit of 0x1p-969, we test whether the low double is
+	     non-zero and opposite sign to the high double.  */
+	  tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
+	  tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
+	  tree gt_min = build_call_expr (isgt_fn, 2, arg_p, min_exp);
+	  tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
+				     arg_p, min_exp);
+	  tree as_complex = build1 (VIEW_CONVERT_EXPR,
+				    complex_double_type_node, orig_arg);
+	  tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
+	  tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
+	  tree zero = build_real (type, dconst0);
+	  tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
+	  tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
+	  tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
+	  tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
+				    fold_build3 (COND_EXPR,
+						 integer_type_node,
+						 hilt, logt, lolt));
+	  eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
+				eq_min, ok_lo);
+	  min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
+				 gt_min, eq_min);
+	}
+	else
+	{
+	  min_exp = fold_build2_loc (loc, GE_EXPR, bool_type, arg_p,
+				     min_exp);
+	}
+
+      tree res
+	= fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			   emit_tree_and_return_var (seq, min_exp),
+			   emit_tree_and_return_var (seq, inf_exp));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const tree int_type = unsigned_type_node;
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  const int exp_mask  = (1 << exp_bits) - 1;
+
+  /* Get the number reinterpreted as an integer.  */
+  tree int_arg = get_num_as_int (seq, arg, loc);
+
+  /* Extract exp bits from the float, where we expect the exponent to be.
+     We create a new type because BIT_FIELD_REF does not allow you to
+     extract less bits than the precision of the storage variable.  */
+  tree exp_tmp
+    = fold_build3_loc (loc, BIT_FIELD_REF,
+		       build_nonstandard_integer_type (exp_bits, true),
+		       int_arg,
+		       build_int_cstu (int_type, exp_bits),
+		       build_int_cstu (int_type, format->p - 1));
+  tree exp_bitfield = emit_tree_and_return_var (seq, exp_tmp);
+
+  /* Re-interpret the extracted exponent bits as a 32 bit int.
+     This allows us to continue doing operations as int_type.  */
+  tree exp
+    = emit_tree_and_return_var (seq, fold_build1_loc (loc, NOP_EXPR, int_type,
+						      exp_bitfield));
+
+  /* exp_mask & ~1.  */
+  tree mask_check
+     = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+			build_int_cstu (int_type, exp_mask),
+			fold_build1_loc (loc, BIT_NOT_EXPR, int_type,
+					 build_int_cstu (int_type, 1)));
+
+  /* (exp + 1) & mask_check.
+     Check to see if exp is not all 0 or all 1.  */
+  tree exp_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+		       emit_tree_and_return_var (seq,
+				fold_build2_loc (loc, PLUS_EXPR, int_type, exp,
+						 build_int_cstu (int_type, 1))),
+		       mask_check);
+
+  tree res = fold_build2_loc (loc, NE_EXPR, boolean_type_node,
+			      build_int_cstu (int_type, 0),
+			      emit_tree_and_return_var (seq, exp_check));
+
+  return emit_tree_and_return_var (seq, res);
+}
+
+/* Generates code to check if ARG is a zero. For both the FP and INT case we
+   check if ARG == 0 (modulo sign bit).  Returns a variable containing a boolean
+   which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_zero (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      /* Perform IBM extended format fixups if required.  */
+      bool is_ibm_extended = perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+      tree res = fold_build2_loc (loc, EQ_EXPR, boolean_type_node, arg,
+				  build_real (type, dconst0));
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* num << 1 == 0.
+     This checks to see if the number is zero.  */
+  tree zero_check
+    = fold_build2_loc (loc, EQ_EXPR, boolean_type_node,
+		       build_int_cstu (int_arg_type, 0),
+		       emit_tree_and_return_var (seq, int_arg));
+
+  return emit_tree_and_return_var (seq, zero_check);
+}
+
+/* Generates code to check if ARG is a subnormal number.  In the FP case we test
+   fabs (ARG) != 0 && fabs (ARG) < MIN_VALUE (ARG) and in the INT case we check
+   the exp and mantissa bits on ARG. Returns a variable containing a boolean
+   which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_subnormal (gimple_seq *seq, tree arg, location_t loc)
+{
+  const tree bool_type = boolean_type_node;
+
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      REAL_VALUE_TYPE r;
+      char buf[128];
+      get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&r, buf);
+      tree subnorm = fold_build2_loc (loc, LT_EXPR, bool_type,
+				      arg_p, build_real (type, r));
+
+      tree zero = fold_build2_loc (loc, GT_EXPR, bool_type, arg_p,
+				   build_real (type, dconst0));
+
+      tree res
+	= fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			   emit_tree_and_return_var (seq, subnorm),
+			   emit_tree_and_return_var (seq, zero));
+
+      return emit_tree_and_return_var (seq, res);
+  }
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check for a zero exponent and non-zero mantissa.
+     This can be done with two comparisons by first apply a
+     removing the sign bit and checking if the value is larger
+     than the mantissa mask.  */
+
+  /* This creates a mask to be used to check the mantissa value in the shifted
+     integer representation of the fpnum.  */
+  tree significant_bit = build_int_cstu (int_arg_type, format->p - 1);
+  tree mantissa_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					significant_bit),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check if exponent is zero and mantissa is not.  */
+  tree subnorm_cond_tmp
+    = fold_build2_loc (loc, LE_EXPR, bool_type,
+		       emit_tree_and_return_var (seq, int_arg),
+		       mantissa_mask);
+
+  tree subnorm_cond = emit_tree_and_return_var (seq, subnorm_cond_tmp);
+
+  tree zero_cond
+    = fold_build2_loc (loc, GT_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, int_arg),
+		       build_int_cstu (int_arg_type, 0));
+
+  tree subnorm_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, subnorm_cond),
+		       emit_tree_and_return_var (seq, zero_cond));
+
+  return emit_tree_and_return_var (seq, subnorm_check);
+}
+
+/* Generates code to check if ARG is an infinity.  In the FP case we test
+   FABS(ARG) == INF and in the INT case we check the bits on the exp and
+   mantissa.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_infinity (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_INFINITIES (mode))
+    {
+      return build_int_cst (bool_type, false);
+    }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+      REAL_VALUE_TYPE r;
+      real_inf (&r);
+      tree res = fold_build2_loc (loc, EQ_EXPR, bool_type, arg_p,
+				  build_real (type, r));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0.  */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, EQ_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Generates code to check if ARG is a finite number.  In the FP case we check
+   if FABS(ARG) <= MAX_VALUE(ARG) and in the INT case we check the exp and
+   mantissa bits.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_finite (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (arg) && !HONOR_INFINITIES (arg))
+    {
+      return build_int_cst (bool_type, true);
+    }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      REAL_VALUE_TYPE rmax;
+      char buf[128];
+      get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&rmax, buf);
+
+      tree res = fold_build2_loc (loc, LE_EXPR, bool_type, arg_p,
+				  build_real (type, rmax));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0. */
+  tree inf_check_tmp
+    = fold_build2_loc (loc, LT_EXPR, bool_type,
+		       emit_tree_and_return_var (seq, int_arg),
+		       inf_mask);
+
+  tree inf_check = emit_tree_and_return_var (seq, inf_check_tmp);
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Generates code to check if ARG is a NaN. In the FP case we simply check if
+   ARG != ARG and in the INT case we check the bits in the exp and mantissa.
+   Returns a variable containing a boolean which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_nan (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (mode))
+    {
+      return build_int_cst (bool_type, false);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      tree res
+	= fold_build2_loc (loc, UNORDERED_EXPR, bool_type,arg_p, arg_p);
+
+      return emit_tree_and_return_var (seq, res);
+  }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is not 0.  */
+  tree nan_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, GT_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, nan_check);
+}
+
+/* Validates a single argument from the arguments list CALL at position INDEX.
+   The extracted parameter is compared against the expected type CODE.
+
+   A boolean is returned indicating if the parameter exist and if of the
+   expected type.  */
+static bool
+gimple_validate_arg (gimple* call, int index, enum tree_code code)
+{
+  const tree arg = gimple_call_arg (call, index);
+  if (!arg)
+    return false;
+  else if (code == POINTER_TYPE)
+    return POINTER_TYPE_P (TREE_TYPE (arg));
+  else if (code == INTEGER_TYPE)
+    return INTEGRAL_TYPE_P (TREE_TYPE (arg));
+  return code == TREE_CODE (TREE_TYPE (arg));
+}
+
+/* Lowers calls to __builtin_fpclassify to
+   fpclassify (x) ->
+     isnormal(x) ? FP_NORMAL :
+       iszero (x) ? FP_ZERO :
+	 isnan (x) ? FP_NAN :
+	   isinfinite (x) ? FP_INFINITE :
+	     FP_SUBNORMAL.
+
+   The code may use integer arithmentic if it decides
+   that the produced assembly would be faster. This can only be done
+   for numbers that are similar to IEEE-754 in format.
+
+   This builtin will generate code to return the appropriate floating
+   point classification depending on the value of the floating point
+   number passed in.  The possible return values must be supplied as
+   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
+   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
+   one floating point argument which is "type generic".
+
+   GSI is the gimple iterator containing the fpclassify call to lower.
+   The call will be expanded and replaced inline in the given GSI.  */
+static void
+lower_builtin_fpclassify (gimple_stmt_iterator *gsi)
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 6
+      || !gimple_validate_arg (call, 0, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 1, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 2, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 3, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 4, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 5, REAL_TYPE))
+    return;
+
+  /* Collect the arguments from the call.  */
+  tree fp_nan = gimple_call_arg (call, 0);
+  tree fp_infinite = gimple_call_arg (call, 1);
+  tree fp_normal = gimple_call_arg (call, 2);
+  tree fp_subnormal = gimple_call_arg (call, 3);
+  tree fp_zero = gimple_call_arg (call, 4);
+  tree arg = gimple_call_arg (call, 5);
+
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (TREE_TYPE (orig_dest));
+
+  emit_tree_cond (&body, dest, done_label,
+		  is_normal (&body, arg, loc), fp_normal);
+  emit_tree_cond (&body, dest, done_label,
+		  is_zero (&body, arg, loc), fp_zero);
+  emit_tree_cond (&body, dest, done_label,
+		  is_nan (&body, arg, loc), fp_nan);
+  emit_tree_cond (&body, dest, done_label,
+		  is_infinity (&body, arg, loc), fp_infinite);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to is_subnormal.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, fp_subnormal));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+    {
+      gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+    }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+
+  /* Remove the call to __builtin_fpclassify.  */
+  gsi_remove (gsi, false);
+}
+
+/* Generic wrapper for the is_nan, is_normal, is_subnormal, is_zero, etc.
+   All these functions have the same setup. The wrapper validates the parameter
+   and also creates the branches and labels required to properly invoke.
+   This has been generalize and the function to call is passed as argument FNDECL.
+
+   GSI is the gimple iterator containing the fpclassify call to lower.
+   The call will be expanded and replaced inline in the given GSI.  */
+static void
+gen_call_fp_builtin (gimple_stmt_iterator *gsi,
+		     tree (*fndecl)(gimple_seq *, tree, location_t))
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 1
+      || !gimple_validate_arg (call, 0, REAL_TYPE))
+    return;
+
+  tree arg = gimple_call_arg (call, 0);
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  tree type = TREE_TYPE (orig_dest);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+      dest = create_tmp_reg (type);
+
+  tree t_true = build_int_cst (type, true);
+  tree t_false = build_int_cst (type, false);
+
+  emit_tree_cond (&body, dest, done_label,
+		  fndecl (&body, arg, loc), t_true);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to false.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, t_false));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+  {
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+  }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+  /* Remove the call to the builtin.  */
+  gsi_remove (gsi, false);
+}
+
+/* Lower and expand calls to __builtin_isnan in GSI.  */
+static void
+lower_builtin_isnan (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_nan);
+}
+
+/* Lower and expand calls to __builtin_isinfinite in GSI.  */
+static void
+lower_builtin_isinfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_infinity);
+}
+
+/* Lower and expand calls to __builtin_isnormal in GSI.  */
+static void
+lower_builtin_isnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_normal);
+}
+
+/* Lower and expand calls to __builtin_iszero in GSI.  */
+static void
+lower_builtin_iszero (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_zero);
+}
+
+/* Lower and expand calls to __builtin_issubnormal in GSI.  */
+static void
+lower_builtin_issubnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_subnormal);
+}
+
+/* Lower and expand calls to __builtin_isfinite in GSI.  */
+static void
+lower_builtin_isfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_finite);
+}
+
 /* Lower calls to posix_memalign to
      res = posix_memalign (ptr, align, size);
      if (res == 0)
diff --git a/gcc/real.h b/gcc/real.h
index 59af580e78f2637be84f71b98b45ec6611053222..4b1b92138e07f43a175a2cbee4d952afad5898f7 100644
--- a/gcc/real.h
+++ b/gcc/real.h
@@ -161,6 +161,19 @@ struct real_format
   bool has_signed_zero;
   bool qnan_msb_set;
   bool canonical_nan_lsbs_set;
+
+  /* This flag indicates whether the format is suitable for the optimized
+     code paths for the __builtin_fpclassify function and friends.  For
+     this, the format must be a base 2 representation with the sign bit as
+     the most-significant bit followed by (exp <= 32) exponent bits
+     followed by the mantissa bits.  It must be possible to interpret the
+     bits of the floating-point representation as an integer.  NaNs and
+     INFs (if available) must be represented by the same schema used by
+     IEEE 754.  (NaNs must be represented by an exponent with all bits 1,
+     any mantissa except all bits 0 and any sign bit.  +INF and -INF must be
+     represented by an exponent with all bits 1, a mantissa with all bits 0 and
+     a sign bit of 0 and 1 respectively.)  */
+  bool is_binary_ieee_compatible;
   const char *name;
 };
 
@@ -511,6 +524,11 @@ extern bool real_isinteger (const REAL_VALUE_TYPE *, HOST_WIDE_INT *);
    float string.  BUF must be large enough to contain the result.  */
 extern void get_max_float (const struct real_format *, char *, size_t);
 
+/* Write into BUF the smallest positive normalized number x,
+   such that b**(x-1) is normalized.  BUF must be large enough
+   to contain the result.  */
+extern void get_min_float (const struct real_format *, char *, size_t);
+
 #ifndef GENERATOR_FILE
 /* real related routines.  */
 extern wide_int real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
diff --git a/gcc/real.c b/gcc/real.c
index 66e88e2ad366f7848609d157074c80420d778bcf..20c907a6d543c73ba62aa9a8ddf6973d82de7832 100644
--- a/gcc/real.c
+++ b/gcc/real.c
@@ -3052,6 +3052,7 @@ const struct real_format ieee_single_format =
     true,
     true,
     false,
+    true,
     "ieee_single"
   };
 
@@ -3075,6 +3076,7 @@ const struct real_format mips_single_format =
     true,
     false,
     true,
+    true,
     "mips_single"
   };
 
@@ -3098,6 +3100,7 @@ const struct real_format motorola_single_format =
     true,
     true,
     true,
+    true,
     "motorola_single"
   };
 
@@ -3132,6 +3135,7 @@ const struct real_format spu_single_format =
     true,
     false,
     false,
+    false,
     "spu_single"
   };
 \f
@@ -3343,6 +3347,7 @@ const struct real_format ieee_double_format =
     true,
     true,
     false,
+    true,
     "ieee_double"
   };
 
@@ -3366,6 +3371,7 @@ const struct real_format mips_double_format =
     true,
     false,
     true,
+    true,
     "mips_double"
   };
 
@@ -3389,6 +3395,7 @@ const struct real_format motorola_double_format =
     true,
     true,
     true,
+    true,
     "motorola_double"
   };
 \f
@@ -3735,6 +3742,7 @@ const struct real_format ieee_extended_motorola_format =
     true,
     true,
     true,
+    false,
     "ieee_extended_motorola"
   };
 
@@ -3758,6 +3766,7 @@ const struct real_format ieee_extended_intel_96_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96"
   };
 
@@ -3781,6 +3790,7 @@ const struct real_format ieee_extended_intel_128_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_128"
   };
 
@@ -3806,6 +3816,7 @@ const struct real_format ieee_extended_intel_96_round_53_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96_round_53"
   };
 \f
@@ -3896,6 +3907,7 @@ const struct real_format ibm_extended_format =
     true,
     true,
     false,
+    false,
     "ibm_extended"
   };
 
@@ -3919,6 +3931,7 @@ const struct real_format mips_extended_format =
     true,
     false,
     true,
+    false,
     "mips_extended"
   };
 
@@ -4184,6 +4197,7 @@ const struct real_format ieee_quad_format =
     true,
     true,
     false,
+    true,
     "ieee_quad"
   };
 
@@ -4207,6 +4221,7 @@ const struct real_format mips_quad_format =
     true,
     false,
     true,
+    true,
     "mips_quad"
   };
 \f
@@ -4509,6 +4524,7 @@ const struct real_format vax_f_format =
     false,
     false,
     false,
+    false,
     "vax_f"
   };
 
@@ -4532,6 +4548,7 @@ const struct real_format vax_d_format =
     false,
     false,
     false,
+    false,
     "vax_d"
   };
 
@@ -4555,6 +4572,7 @@ const struct real_format vax_g_format =
     false,
     false,
     false,
+    false,
     "vax_g"
   };
 \f
@@ -4633,6 +4651,7 @@ const struct real_format decimal_single_format =
     true,
     true,
     false,
+    false,
     "decimal_single"
   };
 
@@ -4657,6 +4676,7 @@ const struct real_format decimal_double_format =
     true,
     true,
     false,
+    false,
     "decimal_double"
   };
 
@@ -4681,6 +4701,7 @@ const struct real_format decimal_quad_format =
     true,
     true,
     false,
+    false,
     "decimal_quad"
   };
 \f
@@ -4820,6 +4841,7 @@ const struct real_format ieee_half_format =
     true,
     true,
     false,
+    true,
     "ieee_half"
   };
 
@@ -4846,6 +4868,7 @@ const struct real_format arm_half_format =
     true,
     false,
     false,
+    false,
     "arm_half"
   };
 \f
@@ -4893,6 +4916,7 @@ const struct real_format real_internal_format =
     true,
     true,
     false,
+    false,
     "real_internal"
   };
 \f
@@ -5080,6 +5104,16 @@ get_max_float (const struct real_format *fmt, char *buf, size_t len)
   gcc_assert (strlen (buf) < len);
 }
 
+/* Write into BUF the minimum negative representable finite floating-point
+   number, x, such that b**(x-1) is normalized.
+   BUF must be large enough to contain the result.  */
+void
+get_min_float (const struct real_format *fmt, char *buf, size_t len)
+{
+  sprintf (buf, "0x1p%d", fmt->emin - 1);
+  gcc_assert (strlen (buf) < len);
+}
+
 /* True if mode M has a NaN representation and
    the treatment of NaN operands is important.  */
 
diff --git a/gcc/testsuite/gcc.dg/builtins-43.c b/gcc/testsuite/gcc.dg/builtins-43.c
index f7c318edf084104b9b820e18e631ed61e760569e..5d41c28aef8619f06658f45846ae15dd8b4987ed 100644
--- a/gcc/testsuite/gcc.dg/builtins-43.c
+++ b/gcc/testsuite/gcc.dg/builtins-43.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-gimple -fdump-tree-optimized" } */
+/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-lower -fdump-tree-optimized" } */
   
 extern void f(int);
 extern void link_error ();
@@ -51,7 +51,7 @@ main ()
 
 
 /* Check that all instances of __builtin_isnan were folded.  */
-/* { dg-final { scan-tree-dump-times "isnan" 0 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "isnan" 0 "lower" } } */
 
 /* Check that all instances of link_error were subject to DCE.  */
 /* { dg-final { scan-tree-dump-times "link_error" 0 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/fold-notunord.c b/gcc/testsuite/gcc.dg/fold-notunord.c
deleted file mode 100644
index ca345154ac204cb5f380855828421b7f88d49052..0000000000000000000000000000000000000000
--- a/gcc/testsuite/gcc.dg/fold-notunord.c
+++ /dev/null
@@ -1,9 +0,0 @@
-/* { dg-do compile } */
-/* { dg-options "-O -ftrapping-math -fdump-tree-optimized" } */
-
-int f (double d)
-{
-  return !__builtin_isnan (d);
-}
-
-/* { dg-final { scan-tree-dump " ord " "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/pr28796-1.c b/gcc/testsuite/gcc.dg/pr28796-1.c
index 077118a298878441e812410f3a6bf3707fb1d839..a57b4e350af1bc45344106fdeab4b32ef87f233f 100644
--- a/gcc/testsuite/gcc.dg/pr28796-1.c
+++ b/gcc/testsuite/gcc.dg/pr28796-1.c
@@ -1,5 +1,5 @@
 /* { dg-do link } */
-/* { dg-options "-ffinite-math-only" } */
+/* { dg-options "-ffinite-math-only -O2" } */
 
 extern void link_error(void);
 
diff --git a/gcc/testsuite/gcc.dg/tg-tests.h b/gcc/testsuite/gcc.dg/tg-tests.h
index 0cf1f6452584962cebda999b5e8c7a61180d7830..cbb707c54b8437aada0cc205b1819369ada95665 100644
--- a/gcc/testsuite/gcc.dg/tg-tests.h
+++ b/gcc/testsuite/gcc.dg/tg-tests.h
@@ -11,6 +11,7 @@ void __attribute__ ((__noinline__))
 foo_1 (float f, double d, long double ld,
        int res_unord, int res_isnan, int res_isinf,
        int res_isinf_sign, int res_isfin, int res_isnorm,
+       int res_iszero, int res_issubnorm,
        int res_signbit, int classification)
 {
   if (__builtin_isunordered (f, 0) != res_unord)
@@ -80,6 +81,20 @@ foo_1 (float f, double d, long double ld,
   if (__builtin_finitel (ld) != res_isfin)
     __builtin_abort ();
 
+  if (__builtin_iszero (f) != res_iszero)
+    __builtin_abort ();
+  if (__builtin_iszero (d) != res_iszero)
+    __builtin_abort ();
+  if (__builtin_iszero (ld) != res_iszero)
+    __builtin_abort ();
+
+  if (__builtin_issubnormal (f) != res_issubnorm)
+    __builtin_abort ();
+  if (__builtin_issubnormal (d) != res_issubnorm)
+    __builtin_abort ();
+  if (__builtin_issubnormal (ld) != res_issubnorm)
+    __builtin_abort ();
+
   /* Sign bit of zeros and nans is not preserved in unsafe math mode.  */
 #ifdef UNSAFE
   if (!res_isnan && f != 0 && d != 0 && ld != 0)
@@ -115,12 +130,13 @@ foo_1 (float f, double d, long double ld,
 void __attribute__ ((__noinline__))
 foo (float f, double d, long double ld,
      int res_unord, int res_isnan, int res_isinf,
-     int res_isfin, int res_isnorm, int classification)
+     int res_isfin, int res_isnorm, int res_iszero,
+     int res_issubnorm, int classification)
 {
-  foo_1 (f, d, ld, res_unord, res_isnan, res_isinf, res_isinf, res_isfin, res_isnorm, 0, classification);
+  foo_1 (f, d, ld, res_unord, res_isnan, res_isinf, res_isinf, res_isfin, res_isnorm, res_iszero, res_issubnorm, 0, classification);
   /* Try all the values negated as well.  All will have the sign bit set,
      except for the nan.  */
-  foo_1 (-f, -d, -ld, res_unord, res_isnan, res_isinf, -res_isinf, res_isfin, res_isnorm, 1, classification);
+  foo_1 (-f, -d, -ld, res_unord, res_isnan, res_isinf, -res_isinf, res_isfin, res_isnorm, res_iszero, res_issubnorm, 1, classification);
 }
 
 int __attribute__ ((__noinline__))
@@ -132,35 +148,35 @@ main_tests (void)
   
   /* Test NaN.  */
   f = __builtin_nanf(""); d = __builtin_nan(""); ld = __builtin_nanl("");
-  foo(f, d, ld, /*unord=*/ 1, /*isnan=*/ 1, /*isinf=*/ 0, /*isfin=*/ 0, /*isnorm=*/ 0, FP_NAN);
+  foo(f, d, ld, /*unord=*/ 1, /*isnan=*/ 1, /*isinf=*/ 0, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_NAN);
 
   /* Test infinity.  */
   f = __builtin_inff(); d = __builtin_inf(); ld = __builtin_infl();
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, FP_INFINITE);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_INFINITE);
 
   /* Test zero.  */
   f = 0; d = 0; ld = 0;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, FP_ZERO);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, /*iszero=*/1, /*issubnorm=*/0, FP_ZERO);
 
   /* Test one.  */
   f = 1; d = 1; ld = 1;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test minimum values.  */
   f = __FLT_MIN__; d = __DBL_MIN__; ld = __LDBL_MIN__;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test subnormal values.  */
   f = __FLT_MIN__/2; d = __DBL_MIN__/2; ld = __LDBL_MIN__/2;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, FP_SUBNORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/1, FP_SUBNORMAL);
 
   /* Test maximum values.  */
   f = __FLT_MAX__; d = __DBL_MAX__; ld = __LDBL_MAX__;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test overflow values.  */
   f = __FLT_MAX__*2; d = __DBL_MAX__*2; ld = __LDBL_MAX__*2;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, FP_INFINITE);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_INFINITE);
 
   return 0;
 }
diff --git a/gcc/testsuite/gcc.dg/torture/float128-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..ec9d3ad41e24280978707888590eec1b562207f0
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128_runtime } */
+
+#define WIDTH 128
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0ede861716750453a86c9abc703ad0b2826674c6
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128x_runtime } */
+
+#define WIDTH 128
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float16-tg-4.c b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..007c4c224ea95537c31185d0aff964d1975f2190
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float16 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float16 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float16_runtime } */
+
+#define WIDTH 16
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..c7f8353da2cffdfc2c2f58f5da3d5363b95e6f91
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32 type-generic built-in functions: __builtin_f__builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32_runtime } */
+
+#define WIDTH 32
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0d7a592920aca112d5f6409e565d4582c253c977
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32x_runtime } */
+
+#define WIDTH 32
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..bb25a22a68e60ce2717ab3583bbec595dd563c35
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64_runtime } */
+
+#define WIDTH 64
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..82305d916b8bd75131e2c647fd37f74cadbc8f1d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64x_runtime } */
+
+#define WIDTH 64
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
new file mode 100644
index 0000000000000000000000000000000000000000..aa3448c090cf797a1525b1045ffebeed79cace40
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
@@ -0,0 +1,99 @@
+/* Tests for _FloatN / _FloatNx types: compile and execution tests for
+   type-generic built-in functions: __builtin_iszero, __builtin_issubnormal.
+   Before including this file, define WIDTH as the value N; define EXT to 1
+   for _FloatNx and 0 for _FloatN.  */
+
+#define __STDC_WANT_IEC_60559_TYPES_EXT__
+#include <float.h>
+
+#define CONCATX(X, Y) X ## Y
+#define CONCAT(X, Y) CONCATX (X, Y)
+#define CONCAT3(X, Y, Z) CONCAT (CONCAT (X, Y), Z)
+#define CONCAT4(W, X, Y, Z) CONCAT (CONCAT (CONCAT (W, X), Y), Z)
+
+#if EXT
+# define TYPE CONCAT3 (_Float, WIDTH, x)
+# define CST(C) CONCAT4 (C, f, WIDTH, x)
+# define MAX CONCAT3 (FLT, WIDTH, X_MAX)
+# define MIN CONCAT3 (FLT, WIDTH, X_MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, X_TRUE_MIN)
+#else
+# define TYPE CONCAT (_Float, WIDTH)
+# define CST(C) CONCAT3 (C, f, WIDTH)
+# define MAX CONCAT3 (FLT, WIDTH, _MAX)
+# define MIN CONCAT3 (FLT, WIDTH, _MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, _TRUE_MIN)
+#endif
+
+extern void exit (int);
+extern void abort (void);
+
+volatile TYPE inf = __builtin_inf (), nanval = __builtin_nan ("");
+volatile TYPE neginf = -__builtin_inf (), negnanval = -__builtin_nan ("");
+volatile TYPE zero = CST (0.0), negzero = -CST (0.0), one = CST (1.0);
+volatile TYPE max = MAX, negmax = -MAX, min = MIN, negmin = -MIN;
+volatile TYPE true_min = TRUE_MIN, negtrue_min = -TRUE_MIN;
+volatile TYPE sub_norm = MIN / 2.0;
+
+int
+main (void)
+{
+  if (__builtin_iszero (inf) == 1)
+    abort ();
+  if (__builtin_iszero (nanval) == 1)
+    abort ();
+  if (__builtin_iszero (neginf) == 1)
+    abort ();
+  if (__builtin_iszero (negnanval) == 1)
+    abort ();
+  if (__builtin_iszero (zero) != 1)
+    abort ();
+  if (__builtin_iszero (negzero) != 1)
+    abort ();
+  if (__builtin_iszero (one) == 1)
+    abort ();
+  if (__builtin_iszero (max) == 1)
+    abort ();
+  if (__builtin_iszero (negmax) == 1)
+    abort ();
+  if (__builtin_iszero (min) == 1)
+    abort ();
+  if (__builtin_iszero (negmin) == 1)
+    abort ();
+  if (__builtin_iszero (true_min) == 1)
+    abort ();
+  if (__builtin_iszero (negtrue_min) == 1)
+    abort ();
+  if (__builtin_iszero (sub_norm) == 1)
+    abort ();
+
+  if (__builtin_issubnormal (inf) == 1)
+    abort ();
+  if (__builtin_issubnormal (nanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (neginf) == 1)
+    abort ();
+  if (__builtin_issubnormal (negnanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (zero) == 1)
+    abort ();
+  if (__builtin_issubnormal (negzero) == 1)
+    abort ();
+  if (__builtin_issubnormal (one) == 1)
+    abort ();
+  if (__builtin_issubnormal (max) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmax) == 1)
+    abort ();
+  if (__builtin_issubnormal (min) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmin) == 1)
+    abort ();
+  if (__builtin_issubnormal (true_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (negtrue_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (sub_norm) != 1)
+    abort ();
+  exit (0);
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
new file mode 100644
index 0000000000000000000000000000000000000000..3a1bf956bfcedcc15dc1cbacfe2f0b663b31c3cc
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
@@ -0,0 +1,22 @@
+/* This file checks the code generation for the new __builtin_fpclassify.
+   because checking the exact assembly isn't very useful, we'll just be checking
+   for the presence of certain instructions and the omition of others. */
+/* { dg-options "-O2" } */
+/* { dg-do compile } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fabs\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmp\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmpe\[ \t\]?" } } */
+/* { dg-final { scan-assembler "\[ \t\]?ubfx\[ \t\]?" } } */
+
+#include <stdio.h>
+#include <math.h>
+
+/*
+ fp_nan = args[0];
+ fp_infinite = args[1];
+ fp_normal = args[2];
+ fp_subnormal = args[3];
+ fp_zero = args[4];
+*/
+
+int f(double x) { return __builtin_fpclassify(0, 1, 4, 3, 2, x); }

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-19 15:26                           ` Tamar Christina
@ 2017-01-19 16:34                             ` Joseph Myers
  2017-01-19 18:06                               ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Joseph Myers @ 2017-01-19 16:34 UTC (permalink / raw)
  To: Tamar Christina
  Cc: Jeff Law, GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

On Thu, 19 Jan 2017, Tamar Christina wrote:

> > The old code set orig_arg before converting IBM long double to double.
> > Your code sets it after the conversion.  The old code set min_exp based on
> > a string set from REAL_MODE_FORMAT (orig_mode)->emin - 1; your code uses
> > the adjusted mode.  Both of those are incorrect for IBM long double.
> 
> Hmm, this is correct, I've made the change to be like it was before, but
> there's something I don't quite get about the old code, if it's building rmin
> the orig_mode which is larger than mode, but then creating the real using
> build_real (type, rmin) using the adjusted type, shouldn't it not be getting
> truncated?

The value is a power of 2, which is *larger* than the minimum normal value 
for double (as they have the same least subnormal value).

Looking further at the code, my only remaining comments are for the cases 
where you aren't using the integer path: in is_normal you use LT_EXPR to 
compare with +Inf, but need to use __builtin_isless, likewise in 
is_subnormal again you should be using __builtin_isless and 
__builtin_isgreater, in is_finite you should be using 
__builtin_islessequal.  All the existing expressions will raise spurious 
"invalid" exceptions for quiet NaN arguments.  (I'm presuming that the 
output of these functions goes through folding that converts 
__builtin_isless to !UNGE_EXPR, etc.)

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-19 16:34                             ` Joseph Myers
@ 2017-01-19 18:06                               ` Tamar Christina
  2017-01-19 18:28                                 ` Joseph Myers
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2017-01-19 18:06 UTC (permalink / raw)
  To: Joseph Myers
  Cc: Jeff Law, GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

[-- Attachment #1: Type: text/plain, Size: 1899 bytes --]

Hi Joseph,

I made the requested changes and did a quick pass over the rest
of the fp cases.

Regards,
Tamar

________________________________________
From: Joseph Myers <joseph@codesourcery.com>
Sent: Thursday, January 19, 2017 4:33:13 PM
To: Tamar Christina
Cc: Jeff Law; GCC Patches; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On Thu, 19 Jan 2017, Tamar Christina wrote:

> > The old code set orig_arg before converting IBM long double to double.
> > Your code sets it after the conversion.  The old code set min_exp based on
> > a string set from REAL_MODE_FORMAT (orig_mode)->emin - 1; your code uses
> > the adjusted mode.  Both of those are incorrect for IBM long double.
>
> Hmm, this is correct, I've made the change to be like it was before, but
> there's something I don't quite get about the old code, if it's building rmin
> the orig_mode which is larger than mode, but then creating the real using
> build_real (type, rmin) using the adjusted type, shouldn't it not be getting
> truncated?

The value is a power of 2, which is *larger* than the minimum normal value
for double (as they have the same least subnormal value).

Looking further at the code, my only remaining comments are for the cases
where you aren't using the integer path: in is_normal you use LT_EXPR to
compare with +Inf, but need to use __builtin_isless, likewise in
is_subnormal again you should be using __builtin_isless and
__builtin_isgreater, in is_finite you should be using
__builtin_islessequal.  All the existing expressions will raise spurious
"invalid" exceptions for quiet NaN arguments.  (I'm presuming that the
output of these functions goes through folding that converts
__builtin_isless to !UNGE_EXPR, etc.)

--
Joseph S. Myers
joseph@codesourcery.com

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: fp-classify-jan-19-spin3.patch --]
[-- Type: text/x-patch; name="fp-classify-jan-19-spin3.patch", Size: 75141 bytes --]

diff --git a/gcc/builtins.c b/gcc/builtins.c
index 3ac2d44148440b124559ba7cd3de483b7a74b72d..d8ff9c70ae6b9e72e09b8cbd9a0bd41b6830b83e 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -160,7 +160,6 @@ static tree fold_builtin_0 (location_t, tree);
 static tree fold_builtin_1 (location_t, tree, tree);
 static tree fold_builtin_2 (location_t, tree, tree, tree);
 static tree fold_builtin_3 (location_t, tree, tree, tree, tree);
-static tree fold_builtin_varargs (location_t, tree, tree*, int);
 
 static tree fold_builtin_strpbrk (location_t, tree, tree, tree);
 static tree fold_builtin_strstr (location_t, tree, tree, tree);
@@ -2202,19 +2201,8 @@ interclass_mathfn_icode (tree arg, tree fndecl)
   switch (DECL_FUNCTION_CODE (fndecl))
     {
     CASE_FLT_FN (BUILT_IN_ILOGB):
-      errno_set = true; builtin_optab = ilogb_optab; break;
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      builtin_optab = isinf_optab; break;
-    case BUILT_IN_ISNORMAL:
-    case BUILT_IN_ISFINITE:
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      /* These builtins have no optabs (yet).  */
+      errno_set = true;
+      builtin_optab = ilogb_optab;
       break;
     default:
       gcc_unreachable ();
@@ -2233,8 +2221,7 @@ interclass_mathfn_icode (tree arg, tree fndecl)
 }
 
 /* Expand a call to one of the builtin math functions that operate on
-   floating point argument and output an integer result (ilogb, isinf,
-   isnan, etc).
+   floating point argument and output an integer result (ilogb, etc).
    Return 0 if a normal call should be emitted rather than expanding the
    function in-line.  EXP is the expression that is a call to the builtin
    function; if convenient, the result should be placed in TARGET.  */
@@ -5997,11 +5984,7 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
     CASE_FLT_FN (BUILT_IN_ILOGB):
       if (! flag_unsafe_math_optimizations)
 	break;
-      gcc_fallthrough ();
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-    case BUILT_IN_ISNORMAL:
+
       target = expand_builtin_interclass_mathfn (exp, target);
       if (target)
 	return target;
@@ -6281,8 +6264,25 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
 	}
       break;
 
+    CASE_FLT_FN (BUILT_IN_ISINF):
+    case BUILT_IN_ISNAND32:
+    case BUILT_IN_ISNAND64:
+    case BUILT_IN_ISNAND128:
+    case BUILT_IN_ISNAN:
+    case BUILT_IN_ISINFD32:
+    case BUILT_IN_ISINFD64:
+    case BUILT_IN_ISINFD128:
+    case BUILT_IN_ISNORMAL:
+    case BUILT_IN_ISZERO:
+    case BUILT_IN_ISSUBNORMAL:
+    case BUILT_IN_FPCLASSIFY:
     case BUILT_IN_SETJMP:
-      /* This should have been lowered to the builtins below.  */
+    CASE_FLT_FN (BUILT_IN_FINITE):
+    case BUILT_IN_FINITED32:
+    case BUILT_IN_FINITED64:
+    case BUILT_IN_FINITED128:
+    case BUILT_IN_ISFINITE:
+      /* These should have been lowered to the builtins in gimple-low.c.  */
       gcc_unreachable ();
 
     case BUILT_IN_SETJMP_SETUP:
@@ -7622,184 +7622,19 @@ fold_builtin_modf (location_t loc, tree arg0, tree arg1, tree rettype)
   return NULL_TREE;
 }
 
-/* Given a location LOC, an interclass builtin function decl FNDECL
-   and its single argument ARG, return an folded expression computing
-   the same, or NULL_TREE if we either couldn't or didn't want to fold
-   (the latter happen if there's an RTL instruction available).  */
-
-static tree
-fold_builtin_interclass_mathfn (location_t loc, tree fndecl, tree arg)
-{
-  machine_mode mode;
-
-  if (!validate_arg (arg, REAL_TYPE))
-    return NULL_TREE;
-
-  if (interclass_mathfn_icode (arg, fndecl) != CODE_FOR_nothing)
-    return NULL_TREE;
-
-  mode = TYPE_MODE (TREE_TYPE (arg));
-
-  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
 
-  /* If there is no optab, try generic code.  */
-  switch (DECL_FUNCTION_CODE (fndecl))
-    {
-      tree result;
 
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      {
-	/* isinf(x) -> isgreater(fabs(x),DBL_MAX).  */
-	tree const isgr_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isgr_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	return result;
-      }
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-      {
-	/* isfinite(x) -> islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isle_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	/*result = fold_build2_loc (loc, UNGT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	result = fold_build1_loc (loc, TRUTH_NOT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  result);*/
-	return result;
-      }
-    case BUILT_IN_ISNORMAL:
-      {
-	/* isnormal(x) -> isgreaterequal(fabs(x),DBL_MIN) &
-	   islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	tree orig_arg, max_exp, min_exp;
-	machine_mode orig_mode = mode;
-	REAL_VALUE_TYPE rmax, rmin;
-	char buf[128];
-
-	orig_arg = arg = builtin_save_expr (arg);
-	if (is_ibm_extended)
-	  {
-	    /* Use double to test the normal range of IBM extended
-	       precision.  Emin for IBM extended precision is
-	       different to emin for IEEE double, being 53 higher
-	       since the low double exponent is at least 53 lower
-	       than the high double exponent.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	arg = fold_build1_loc (loc, ABS_EXPR, type, arg);
-
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&rmax, buf);
-	sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (orig_mode)->emin - 1);
-	real_from_string (&rmin, buf);
-	max_exp = build_real (type, rmax);
-	min_exp = build_real (type, rmin);
-
-	max_exp = build_call_expr (isle_fn, 2, arg, max_exp);
-	if (is_ibm_extended)
-	  {
-	    /* Testing the high end of the range is done just using
-	       the high double, using the same test as isfinite().
-	       For the subnormal end of the range we first test the
-	       high double, then if its magnitude is equal to the
-	       limit of 0x1p-969, we test whether the low double is
-	       non-zero and opposite sign to the high double.  */
-	    tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
-	    tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	    tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
-	    tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
-				       arg, min_exp);
-	    tree as_complex = build1 (VIEW_CONVERT_EXPR,
-				      complex_double_type_node, orig_arg);
-	    tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
-	    tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
-	    tree zero = build_real (type, dconst0);
-	    tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
-	    tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
-	    tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
-	    tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
-				      fold_build3 (COND_EXPR,
-						   integer_type_node,
-						   hilt, logt, lolt));
-	    eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
-				  eq_min, ok_lo);
-	    min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
-				   gt_min, eq_min);
-	  }
-	else
-	  {
-	    tree const isge_fn
-	      = builtin_decl_explicit (BUILT_IN_ISGREATEREQUAL);
-	    min_exp = build_call_expr (isge_fn, 2, arg, min_exp);
-	  }
-	result = fold_build2 (BIT_AND_EXPR, integer_type_node,
-			      max_exp, min_exp);
-	return result;
-      }
-    default:
-      break;
-    }
-
-  return NULL_TREE;
-}
-
-/* Fold a call to __builtin_isnan(), __builtin_isinf, __builtin_finite.
+/* Fold a call to __builtin_isinf_sign.
    ARG is the argument for the call.  */
 
 static tree
-fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
+fold_builtin_classify (location_t loc, tree arg, int builtin_index)
 {
-  tree type = TREE_TYPE (TREE_TYPE (fndecl));
-
   if (!validate_arg (arg, REAL_TYPE))
     return NULL_TREE;
 
   switch (builtin_index)
     {
-    case BUILT_IN_ISINF:
-      if (!HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      return NULL_TREE;
-
     case BUILT_IN_ISINF_SIGN:
       {
 	/* isinf_sign(x) -> isinf(x) ? (signbit(x) ? -1 : 1) : 0 */
@@ -7832,106 +7667,11 @@ fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
 	return tmp;
       }
 
-    case BUILT_IN_ISFINITE:
-      if (!HONOR_NANS (arg)
-	  && !HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_one_node, arg);
-
-      return NULL_TREE;
-
-    case BUILT_IN_ISNAN:
-      if (!HONOR_NANS (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      {
-	bool is_ibm_extended = MODE_COMPOSITE_P (TYPE_MODE (TREE_TYPE (arg)));
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    arg = fold_build1_loc (loc, NOP_EXPR, double_type_node, arg);
-	  }
-      }
-      arg = builtin_save_expr (arg);
-      return fold_build2_loc (loc, UNORDERED_EXPR, type, arg, arg);
-
     default:
       gcc_unreachable ();
     }
 }
 
-/* Fold a call to __builtin_fpclassify(int, int, int, int, int, ...).
-   This builtin will generate code to return the appropriate floating
-   point classification depending on the value of the floating point
-   number passed in.  The possible return values must be supplied as
-   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
-   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
-   one floating point argument which is "type generic".  */
-
-static tree
-fold_builtin_fpclassify (location_t loc, tree *args, int nargs)
-{
-  tree fp_nan, fp_infinite, fp_normal, fp_subnormal, fp_zero,
-    arg, type, res, tmp;
-  machine_mode mode;
-  REAL_VALUE_TYPE r;
-  char buf[128];
-
-  /* Verify the required arguments in the original call.  */
-  if (nargs != 6
-      || !validate_arg (args[0], INTEGER_TYPE)
-      || !validate_arg (args[1], INTEGER_TYPE)
-      || !validate_arg (args[2], INTEGER_TYPE)
-      || !validate_arg (args[3], INTEGER_TYPE)
-      || !validate_arg (args[4], INTEGER_TYPE)
-      || !validate_arg (args[5], REAL_TYPE))
-    return NULL_TREE;
-
-  fp_nan = args[0];
-  fp_infinite = args[1];
-  fp_normal = args[2];
-  fp_subnormal = args[3];
-  fp_zero = args[4];
-  arg = args[5];
-  type = TREE_TYPE (arg);
-  mode = TYPE_MODE (type);
-  arg = builtin_save_expr (fold_build1_loc (loc, ABS_EXPR, type, arg));
-
-  /* fpclassify(x) ->
-       isnan(x) ? FP_NAN :
-         (fabs(x) == Inf ? FP_INFINITE :
-	   (fabs(x) >= DBL_MIN ? FP_NORMAL :
-	     (x == 0 ? FP_ZERO : FP_SUBNORMAL))).  */
-
-  tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-		     build_real (type, dconst0));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node,
-		     tmp, fp_zero, fp_subnormal);
-
-  sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (mode)->emin - 1);
-  real_from_string (&r, buf);
-  tmp = fold_build2_loc (loc, GE_EXPR, integer_type_node,
-		     arg, build_real (type, r));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, fp_normal, res);
-
-  if (HONOR_INFINITIES (mode))
-    {
-      real_inf (&r);
-      tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-			 build_real (type, r));
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp,
-			 fp_infinite, res);
-    }
-
-  if (HONOR_NANS (mode))
-    {
-      tmp = fold_build2_loc (loc, ORDERED_EXPR, integer_type_node, arg, arg);
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, res, fp_nan);
-    }
-
-  return res;
-}
-
 /* Fold a call to an unordered comparison function such as
    __builtin_isgreater().  FNDECL is the FUNCTION_DECL for the function
    being called and ARG0 and ARG1 are the arguments for the call.
@@ -8232,40 +7972,8 @@ fold_builtin_1 (location_t loc, tree fndecl, tree arg0)
     case BUILT_IN_ISDIGIT:
       return fold_builtin_isdigit (loc, arg0);
 
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISFINITE:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISFINITE);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    case BUILT_IN_ISNORMAL:
-      return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-
     case BUILT_IN_ISINF_SIGN:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF_SIGN);
-
-    CASE_FLT_FN (BUILT_IN_ISNAN):
-    case BUILT_IN_ISNAND32:
-    case BUILT_IN_ISNAND64:
-    case BUILT_IN_ISNAND128:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISNAN);
+      return fold_builtin_classify (loc, arg0, BUILT_IN_ISINF_SIGN);
 
     case BUILT_IN_FREE:
       if (integer_zerop (arg0))
@@ -8465,7 +8173,6 @@ fold_builtin_n (location_t loc, tree fndecl, tree *args, int nargs, bool)
       ret = fold_builtin_3 (loc, fndecl, args[0], args[1], args[2]);
       break;
     default:
-      ret = fold_builtin_varargs (loc, fndecl, args, nargs);
       break;
     }
   if (ret)
@@ -9422,37 +9129,6 @@ fold_builtin_object_size (tree ptr, tree ost)
   return NULL_TREE;
 }
 
-/* Builtins with folding operations that operate on "..." arguments
-   need special handling; we need to store the arguments in a convenient
-   data structure before attempting any folding.  Fortunately there are
-   only a few builtins that fall into this category.  FNDECL is the
-   function, EXP is the CALL_EXPR for the call.  */
-
-static tree
-fold_builtin_varargs (location_t loc, tree fndecl, tree *args, int nargs)
-{
-  enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl);
-  tree ret = NULL_TREE;
-
-  switch (fcode)
-    {
-    case BUILT_IN_FPCLASSIFY:
-      ret = fold_builtin_fpclassify (loc, args, nargs);
-      break;
-
-    default:
-      break;
-    }
-  if (ret)
-    {
-      ret = build1 (NOP_EXPR, TREE_TYPE (ret), ret);
-      SET_EXPR_LOCATION (ret, loc);
-      TREE_NO_WARNING (ret) = 1;
-      return ret;
-    }
-  return NULL_TREE;
-}
-
 /* Initialize format string characters in the target charset.  */
 
 bool
diff --git a/gcc/builtins.def b/gcc/builtins.def
index 219feebd3aebefbd079bf37cc801453cd1965e00..91aa6f37fa098777bc794bad56d8c561ab9fdc44 100644
--- a/gcc/builtins.def
+++ b/gcc/builtins.def
@@ -831,6 +831,8 @@ DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFL, "isinfl", BT_FN_INT_LONGDOUBLE, ATTR_CO
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD32, "isinfd32", BT_FN_INT_DFLOAT32, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD64, "isinfd64", BT_FN_INT_DFLOAT64, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD128, "isinfd128", BT_FN_INT_DFLOAT128, ATTR_CONST_NOTHROW_LEAF_LIST)
+DEF_GCC_BUILTIN        (BUILT_IN_ISZERO, "iszero", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
+DEF_GCC_BUILTIN        (BUILT_IN_ISSUBNORMAL, "issubnormal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_C99_C90RES_BUILTIN (BUILT_IN_ISNAN, "isnan", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANF, "isnanf", BT_FN_INT_FLOAT, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANL, "isnanl", BT_FN_INT_LONGDOUBLE, ATTR_CONST_NOTHROW_LEAF_LIST)
diff --git a/gcc/c/c-typeck.c b/gcc/c/c-typeck.c
index f0917ed788c91b2a2c8701438150f0e634c9402b..e2b4acd21b7d70e6500e42064db49e0f4a84aae9 100644
--- a/gcc/c/c-typeck.c
+++ b/gcc/c/c-typeck.c
@@ -3232,6 +3232,8 @@ convert_arguments (location_t loc, vec<location_t> arg_loc, tree typelist,
 	case BUILT_IN_ISINF_SIGN:
 	case BUILT_IN_ISNAN:
 	case BUILT_IN_ISNORMAL:
+	case BUILT_IN_ISZERO:
+	case BUILT_IN_ISSUBNORMAL:
 	case BUILT_IN_FPCLASSIFY:
 	  type_generic_remove_excess_precision = true;
 	  break;
diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi
index 0669f7999beb078822e471352036d8f13517812d..c240bbe9a8fd595a0e7e2b41fb708efae1e5279a 100644
--- a/gcc/doc/extend.texi
+++ b/gcc/doc/extend.texi
@@ -10433,6 +10433,10 @@ in the Cilk Plus language manual which can be found at
 @findex __builtin_isgreater
 @findex __builtin_isgreaterequal
 @findex __builtin_isinf_sign
+@findex __builtin_isinf
+@findex __builtin_isnan
+@findex __builtin_iszero
+@findex __builtin_issubnormal
 @findex __builtin_isless
 @findex __builtin_islessequal
 @findex __builtin_islessgreater
@@ -11496,7 +11500,54 @@ constant values and they must appear in this order: @code{FP_NAN},
 @code{FP_INFINITE}, @code{FP_NORMAL}, @code{FP_SUBNORMAL} and
 @code{FP_ZERO}.  The ellipsis is for exactly one floating-point value
 to classify.  GCC treats the last argument as type-generic, which
-means it does not do default promotion from float to double.
+means it does not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnan (...)
+This built-in implements the C99 isnan functionality which checks if
+the given argument represents a NaN.  The return value of the
+function will either be a 0 (false) or a 1 (true).
+On most systems, when an IEEE 754 floating-point type is used this
+built-in does not produce a signal when a signaling NaN is used.
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isinf (...)
+This built-in implements the C99 isinf functionality which checks if
+the given argument represents an infinite number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnormal (...)
+This built-in implements the C99 isnormal functionality which checks if
+the given argument represents a normal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_iszero (...)
+This built-in implements the TS 18661-1:2014 iszero functionality which checks if
+the given argument represents the number 0 or -0.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_issubnormal (...)
+This built-in implements the TS 18661-1:2014 issubnormal functionality which checks if
+the given argument represents a subnormal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
 @end deftypefn
 
 @deftypefn {Built-in Function} double __builtin_inf (void)
diff --git a/gcc/gimple-low.c b/gcc/gimple-low.c
index 64752b67b86b3d01df5f5661e4666df98b7b91d1..7624dbb5e0ffed36ad286b89646d0da6d2ff3807 100644
--- a/gcc/gimple-low.c
+++ b/gcc/gimple-low.c
@@ -30,6 +30,8 @@ along with GCC; see the file COPYING3.  If not see
 #include "calls.h"
 #include "gimple-iterator.h"
 #include "gimple-low.h"
+#include "stor-layout.h"
+#include "target.h"
 
 /* The differences between High GIMPLE and Low GIMPLE are the
    following:
@@ -72,6 +74,13 @@ static void lower_gimple_bind (gimple_stmt_iterator *, struct lower_data *);
 static void lower_try_catch (gimple_stmt_iterator *, struct lower_data *);
 static void lower_gimple_return (gimple_stmt_iterator *, struct lower_data *);
 static void lower_builtin_setjmp (gimple_stmt_iterator *);
+static void lower_builtin_fpclassify (gimple_stmt_iterator *);
+static void lower_builtin_isnan (gimple_stmt_iterator *);
+static void lower_builtin_isinfinite (gimple_stmt_iterator *);
+static void lower_builtin_isnormal (gimple_stmt_iterator *);
+static void lower_builtin_iszero (gimple_stmt_iterator *);
+static void lower_builtin_issubnormal (gimple_stmt_iterator *);
+static void lower_builtin_isfinite (gimple_stmt_iterator *);
 static void lower_builtin_posix_memalign (gimple_stmt_iterator *);
 
 
@@ -330,18 +339,69 @@ lower_stmt (gimple_stmt_iterator *gsi, struct lower_data *data)
 	if (decl
 	    && DECL_BUILT_IN_CLASS (decl) == BUILT_IN_NORMAL)
 	  {
-	    if (DECL_FUNCTION_CODE (decl) == BUILT_IN_SETJMP)
+	    switch (DECL_FUNCTION_CODE (decl))
 	      {
+	      case BUILT_IN_SETJMP:
 		lower_builtin_setjmp (gsi);
 		data->cannot_fallthru = false;
 		return;
-	      }
-	    else if (DECL_FUNCTION_CODE (decl) == BUILT_IN_POSIX_MEMALIGN
-		     && flag_tree_bit_ccp
-		     && gimple_builtin_call_types_compatible_p (stmt, decl))
-	      {
-		lower_builtin_posix_memalign (gsi);
+
+	      case BUILT_IN_POSIX_MEMALIGN:
+		if (flag_tree_bit_ccp
+		    && gimple_builtin_call_types_compatible_p (stmt, decl))
+		  {
+			lower_builtin_posix_memalign (gsi);
+			return;
+		  }
+		break;
+
+	      case BUILT_IN_FPCLASSIFY:
+		lower_builtin_fpclassify (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_ISINF):
+	      case BUILT_IN_ISINFD32:
+	      case BUILT_IN_ISINFD64:
+	      case BUILT_IN_ISINFD128:
+		lower_builtin_isinfinite (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNAND32:
+	      case BUILT_IN_ISNAND64:
+	      case BUILT_IN_ISNAND128:
+	      CASE_FLT_FN (BUILT_IN_ISNAN):
+		lower_builtin_isnan (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNORMAL:
+		lower_builtin_isnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISZERO:
+		lower_builtin_iszero (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISSUBNORMAL:
+		lower_builtin_issubnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_FINITE):
+	      case BUILT_IN_FINITED32:
+	      case BUILT_IN_FINITED64:
+	      case BUILT_IN_FINITED128:
+	      case BUILT_IN_ISFINITE:
+		lower_builtin_isfinite (gsi);
+		data->cannot_fallthru = false;
 		return;
+
+	      default:
+		break;
 	      }
 	  }
 
@@ -822,6 +882,825 @@ lower_builtin_setjmp (gimple_stmt_iterator *gsi)
   gsi_remove (gsi, false);
 }
 
+/* This function will if ARG is not already a variable or SSA_NAME,
+   create a new temporary TMP and bind ARG to TMP.  This new binding is then
+   emitted into SEQ and TMP is returned.  */
+static tree
+emit_tree_and_return_var (gimple_seq *seq, tree arg)
+{
+  if (TREE_CODE (arg) == SSA_NAME || VAR_P (arg))
+    return arg;
+
+  tree tmp = create_tmp_reg (TREE_TYPE (arg));
+  gassign *stm = gimple_build_assign (tmp, arg);
+  gimple_seq_add_stmt (seq, stm);
+  return tmp;
+}
+
+/* This function builds an if statement that ends up using explicit branches
+   instead of becoming a ternary conditional select.  This function assumes you
+   will fall through to the next statements after the condition for the false
+   branch.  The code emitted looks like:
+
+   if (COND)
+     RESULT_VARIABLE = TRUE_BRANCH
+     GOTO EXIT_LABEL
+   else
+     ...
+
+   SEQ is the gimple sequence/buffer to emit any new bindings to.
+   RESULT_VARIABLE is the value to set if COND.
+   EXIT_LABEL is the label to jump to in case COND.
+   COND is condition to use in the conditional statement of the if.
+   TRUE_BRANCH is the value to set RESULT_VARIABLE to if COND.  */
+static void
+emit_tree_cond (gimple_seq *seq, tree result_variable, tree exit_label,
+		tree cond, tree true_branch)
+{
+  /* Create labels for fall through.  */
+  tree true_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree false_label = create_artificial_label (UNKNOWN_LOCATION);
+  gcond *stmt = gimple_build_cond_from_tree (cond, true_label, false_label);
+  gimple_seq_add_stmt (seq, stmt);
+
+  /* Build the true case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (true_label));
+  tree value = TREE_CONSTANT (true_branch)
+	     ? true_branch
+	     : emit_tree_and_return_var (seq, true_branch);
+  gimple_seq_add_stmt (seq, gimple_build_assign (result_variable, value));
+  gimple_seq_add_stmt (seq, gimple_build_goto (exit_label));
+
+  /* Build the false case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (false_label));
+}
+
+/* This function returns a variable containing an reinterpreted ARG as an
+   integer.
+
+   SEQ is the gimple sequence/buffer to write any new bindings to.
+   ARG is the floating point number to reinterpret as an integer.
+   LOC is the location to use when doing folding operations.  */
+static tree
+get_num_as_int (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  /* Re-interpret the float as an unsigned integer type
+     with equal precision.  */
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+  tree conv_arg = fold_build1_loc (loc, VIEW_CONVERT_EXPR, int_arg_type, arg);
+  return emit_tree_and_return_var (seq, conv_arg);
+}
+
+/* Check if ARG which is the floating point number being classified is close
+   enough to IEEE 754 format to be able to go in the early exit code.  */
+static bool
+use_ieee_int_mode (tree arg)
+{
+  tree type = TREE_TYPE (arg);
+  machine_mode mode = TYPE_MODE (type);
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  machine_mode imode = int_mode_for_mode (mode);
+  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
+
+  return (format->is_binary_ieee_compatible
+	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
+	  /* Check if there's a usable integer mode.  */
+	  && imode != BLKmode
+	  && targetm.scalar_mode_supported_p (imode)
+	  && !is_ibm_extended);
+}
+
+/* Perform some IBM extended format fixups on ARG for use by FP functions.
+   This is done by ignoring the lower 64 bits of the number.
+
+   MODE is the machine mode of ARG.
+   TYPE is the type of ARG.
+   LOC is the location to be used in fold functions.  Usually is the location
+   of the definition of ARG.  */
+static bool
+perform_ibm_extended_fixups (tree *arg, machine_mode *mode,
+			     tree *type, location_t loc)
+{
+  bool is_ibm_extended = MODE_COMPOSITE_P (*mode);
+  if (is_ibm_extended)
+    {
+      /* NaN and Inf are encoded in the high-order double value
+	 only.  The low-order value is not significant.  */
+      *type = double_type_node;
+      *mode = DFmode;
+      *arg = fold_build1_loc (loc, NOP_EXPR, *type, *arg);
+    }
+
+  return is_ibm_extended;
+}
+
+/* Generates code to check if ARG is a normal number.  For the FP case we check
+   MIN_VALUE(ARG) <= ABS(ARG) > INF and for the INT value we check the exp and
+   mantissa bits.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_normal (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const tree bool_type = boolean_type_node;
+
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree orig_arg = arg;
+      machine_mode orig_mode = mode;
+      if (TREE_CODE (arg) != SSA_NAME
+	  && (TREE_ADDRESSABLE (arg) != 0
+	    || (TREE_CODE (arg) != PARM_DECL
+	        && (!VAR_P (arg) || TREE_STATIC (arg)))))
+	orig_arg = save_expr (arg);
+
+      /* Perform IBM extended format fixups if required.  */
+      bool is_ibm_extended = perform_ibm_extended_fixups (&arg, &mode,
+							  &type, loc);
+
+      REAL_VALUE_TYPE rinf, rmin;
+      tree arg_p
+        = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+
+      tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
+      tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
+      tree const isge_fn = builtin_decl_explicit (BUILT_IN_ISGREATEREQUAL);
+
+      char buf[128];
+      real_inf (&rinf);
+      get_min_float (REAL_MODE_FORMAT (orig_mode), buf, sizeof (buf));
+      real_from_string (&rmin, buf);
+
+      tree inf_exp = build_call_expr (islt_fn, 2, arg_p,
+				      build_real (type, rinf));
+      tree min_exp = build_real (type, rmin);
+      if (is_ibm_extended)
+	{
+	  /* Testing the high end of the range is done just using
+	     the high double, using the same test as isfinite().
+	     For the subnormal end of the range we first test the
+	     high double, then if its magnitude is equal to the
+	     limit of 0x1p-969, we test whether the low double is
+	     non-zero and opposite sign to the high double.  */
+	  tree gt_min = build_call_expr (isgt_fn, 2, arg_p, min_exp);
+	  tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
+				     arg_p, min_exp);
+	  tree as_complex = build1 (VIEW_CONVERT_EXPR,
+				    complex_double_type_node, orig_arg);
+	  tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
+	  tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
+	  tree zero = build_real (type, dconst0);
+	  tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
+	  tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
+	  tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
+	  tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
+				    fold_build3 (COND_EXPR,
+						 integer_type_node,
+						 hilt, logt, lolt));
+	  eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
+				eq_min, ok_lo);
+	  min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
+				 gt_min, eq_min);
+	}
+	else
+	{
+	  min_exp = build_call_expr (isge_fn, 2, arg_p, min_exp);
+	}
+
+      tree res
+	= fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			   emit_tree_and_return_var (seq, min_exp),
+			   emit_tree_and_return_var (seq, inf_exp));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const tree int_type = unsigned_type_node;
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  const int exp_mask  = (1 << exp_bits) - 1;
+
+  /* Get the number reinterpreted as an integer.  */
+  tree int_arg = get_num_as_int (seq, arg, loc);
+
+  /* Extract exp bits from the float, where we expect the exponent to be.
+     We create a new type because BIT_FIELD_REF does not allow you to
+     extract less bits than the precision of the storage variable.  */
+  tree exp_tmp
+    = fold_build3_loc (loc, BIT_FIELD_REF,
+		       build_nonstandard_integer_type (exp_bits, true),
+		       int_arg,
+		       build_int_cstu (int_type, exp_bits),
+		       build_int_cstu (int_type, format->p - 1));
+  tree exp_bitfield = emit_tree_and_return_var (seq, exp_tmp);
+
+  /* Re-interpret the extracted exponent bits as a 32 bit int.
+     This allows us to continue doing operations as int_type.  */
+  tree exp
+    = emit_tree_and_return_var (seq, fold_build1_loc (loc, NOP_EXPR, int_type,
+						      exp_bitfield));
+
+  /* exp_mask & ~1.  */
+  tree mask_check
+     = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+			build_int_cstu (int_type, exp_mask),
+			fold_build1_loc (loc, BIT_NOT_EXPR, int_type,
+					 build_int_cstu (int_type, 1)));
+
+  /* (exp + 1) & mask_check.
+     Check to see if exp is not all 0 or all 1.  */
+  tree exp_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+		       emit_tree_and_return_var (seq,
+				fold_build2_loc (loc, PLUS_EXPR, int_type, exp,
+						 build_int_cstu (int_type, 1))),
+		       mask_check);
+
+  tree res = fold_build2_loc (loc, NE_EXPR, boolean_type_node,
+			      build_int_cstu (int_type, 0),
+			      emit_tree_and_return_var (seq, exp_check));
+
+  return emit_tree_and_return_var (seq, res);
+}
+
+/* Generates code to check if ARG is a zero. For both the FP and INT case we
+   check if ARG == 0 (modulo sign bit).  Returns a variable containing a boolean
+   which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_zero (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      machine_mode mode = TYPE_MODE (type);
+      /* Perform IBM extended format fixups if required.  */
+      perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+      tree res = fold_build2_loc (loc, EQ_EXPR, boolean_type_node, arg,
+				  build_real (type, dconst0));
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* num << 1 == 0.
+     This checks to see if the number is zero.  */
+  tree zero_check
+    = fold_build2_loc (loc, EQ_EXPR, boolean_type_node,
+		       build_int_cstu (int_arg_type, 0),
+		       emit_tree_and_return_var (seq, int_arg));
+
+  return emit_tree_and_return_var (seq, zero_check);
+}
+
+/* Generates code to check if ARG is a subnormal number.  In the FP case we test
+   fabs (ARG) != 0 && fabs (ARG) < MIN_VALUE (ARG) and in the INT case we check
+   the exp and mantissa bits on ARG. Returns a variable containing a boolean
+   which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_subnormal (gimple_seq *seq, tree arg, location_t loc)
+{
+  const tree bool_type = boolean_type_node;
+
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
+      tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
+
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      REAL_VALUE_TYPE r;
+      char buf[128];
+      get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&r, buf);
+      tree subnorm = build_call_expr (islt_fn, 2, arg_p, build_real (type, r));
+
+      tree zero = build_call_expr (isgt_fn, 2, arg_p,
+				   build_real (type, dconst0));
+
+      tree res
+	= fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			   emit_tree_and_return_var (seq, subnorm),
+			   emit_tree_and_return_var (seq, zero));
+
+      return emit_tree_and_return_var (seq, res);
+  }
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check for a zero exponent and non-zero mantissa.
+     This can be done with two comparisons by first apply a
+     removing the sign bit and checking if the value is larger
+     than the mantissa mask.  */
+
+  /* This creates a mask to be used to check the mantissa value in the shifted
+     integer representation of the fpnum.  */
+  tree significant_bit = build_int_cstu (int_arg_type, format->p - 1);
+  tree mantissa_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					significant_bit),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check if exponent is zero and mantissa is not.  */
+  tree subnorm_cond_tmp
+    = fold_build2_loc (loc, LE_EXPR, bool_type,
+		       emit_tree_and_return_var (seq, int_arg),
+		       mantissa_mask);
+
+  tree subnorm_cond = emit_tree_and_return_var (seq, subnorm_cond_tmp);
+
+  tree zero_cond
+    = fold_build2_loc (loc, GT_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, int_arg),
+		       build_int_cstu (int_arg_type, 0));
+
+  tree subnorm_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, subnorm_cond),
+		       emit_tree_and_return_var (seq, zero_cond));
+
+  return emit_tree_and_return_var (seq, subnorm_check);
+}
+
+/* Generates code to check if ARG is an infinity.  In the FP case we test
+   FABS(ARG) == INF and in the INT case we check the bits on the exp and
+   mantissa.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_infinity (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_INFINITIES (mode))
+    {
+      return build_int_cst (bool_type, false);
+    }
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      /* Perform IBM extended format fixups if required.  */
+      perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+      REAL_VALUE_TYPE r;
+      real_inf (&r);
+      tree res = fold_build2_loc (loc, EQ_EXPR, bool_type, arg_p,
+				  build_real (type, r));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0.  */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, EQ_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Generates code to check if ARG is a finite number.  In the FP case we check
+   if FABS(ARG) <= MAX_VALUE(ARG) and in the INT case we check the exp and
+   mantissa bits.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_finite (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (arg) && !HONOR_INFINITIES (arg))
+    {
+      return build_int_cst (bool_type, true);
+    }
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+
+      /* Perform IBM extended format fixups if required.  */
+      perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+      tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
+
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      REAL_VALUE_TYPE rmax;
+      char buf[128];
+      get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&rmax, buf);
+
+      tree res = build_call_expr (isle_fn, 2,  arg_p, build_real (type, rmax));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0. */
+  tree inf_check_tmp
+    = fold_build2_loc (loc, LT_EXPR, bool_type,
+		       emit_tree_and_return_var (seq, int_arg),
+		       inf_mask);
+
+  tree inf_check = emit_tree_and_return_var (seq, inf_check_tmp);
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Generates code to check if ARG is a NaN. In the FP case we simply check if
+   ARG != ARG and in the INT case we check the bits in the exp and mantissa.
+   Returns a variable containing a boolean which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_nan (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (mode))
+    {
+      return build_int_cst (bool_type, false);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      /* Perform IBM extended format fixups if required.  */
+      perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      tree res
+	= fold_build2_loc (loc, UNORDERED_EXPR, bool_type,arg_p, arg_p);
+
+      return emit_tree_and_return_var (seq, res);
+  }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is not 0.  */
+  tree nan_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, GT_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, nan_check);
+}
+
+/* Validates a single argument from the arguments list CALL at position INDEX.
+   The extracted parameter is compared against the expected type CODE.
+
+   A boolean is returned indicating if the parameter exist and if of the
+   expected type.  */
+static bool
+gimple_validate_arg (gimple* call, int index, enum tree_code code)
+{
+  const tree arg = gimple_call_arg (call, index);
+  if (!arg)
+    return false;
+  else if (code == POINTER_TYPE)
+    return POINTER_TYPE_P (TREE_TYPE (arg));
+  else if (code == INTEGER_TYPE)
+    return INTEGRAL_TYPE_P (TREE_TYPE (arg));
+  return code == TREE_CODE (TREE_TYPE (arg));
+}
+
+/* Lowers calls to __builtin_fpclassify to
+   fpclassify (x) ->
+     isnormal(x) ? FP_NORMAL :
+       iszero (x) ? FP_ZERO :
+	 isnan (x) ? FP_NAN :
+	   isinfinite (x) ? FP_INFINITE :
+	     FP_SUBNORMAL.
+
+   The code may use integer arithmentic if it decides
+   that the produced assembly would be faster. This can only be done
+   for numbers that are similar to IEEE-754 in format.
+
+   This builtin will generate code to return the appropriate floating
+   point classification depending on the value of the floating point
+   number passed in.  The possible return values must be supplied as
+   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
+   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
+   one floating point argument which is "type generic".
+
+   GSI is the gimple iterator containing the fpclassify call to lower.
+   The call will be expanded and replaced inline in the given GSI.  */
+static void
+lower_builtin_fpclassify (gimple_stmt_iterator *gsi)
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 6
+      || !gimple_validate_arg (call, 0, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 1, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 2, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 3, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 4, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 5, REAL_TYPE))
+    return;
+
+  /* Collect the arguments from the call.  */
+  tree fp_nan = gimple_call_arg (call, 0);
+  tree fp_infinite = gimple_call_arg (call, 1);
+  tree fp_normal = gimple_call_arg (call, 2);
+  tree fp_subnormal = gimple_call_arg (call, 3);
+  tree fp_zero = gimple_call_arg (call, 4);
+  tree arg = gimple_call_arg (call, 5);
+
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (TREE_TYPE (orig_dest));
+
+  emit_tree_cond (&body, dest, done_label,
+		  is_normal (&body, arg, loc), fp_normal);
+  emit_tree_cond (&body, dest, done_label,
+		  is_zero (&body, arg, loc), fp_zero);
+  emit_tree_cond (&body, dest, done_label,
+		  is_nan (&body, arg, loc), fp_nan);
+  emit_tree_cond (&body, dest, done_label,
+		  is_infinity (&body, arg, loc), fp_infinite);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to is_subnormal.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, fp_subnormal));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+    {
+      gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+    }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+
+  /* Remove the call to __builtin_fpclassify.  */
+  gsi_remove (gsi, false);
+}
+
+/* Generic wrapper for the is_nan, is_normal, is_subnormal, is_zero, etc.
+   All these functions have the same setup. The wrapper validates the parameter
+   and also creates the branches and labels required to properly invoke.
+   This has been generalize and the function to call is passed as argument FNDECL.
+
+   GSI is the gimple iterator containing the fpclassify call to lower.
+   The call will be expanded and replaced inline in the given GSI.  */
+static void
+gen_call_fp_builtin (gimple_stmt_iterator *gsi,
+		     tree (*fndecl)(gimple_seq *, tree, location_t))
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 1
+      || !gimple_validate_arg (call, 0, REAL_TYPE))
+    return;
+
+  tree arg = gimple_call_arg (call, 0);
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  tree type = TREE_TYPE (orig_dest);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+      dest = create_tmp_reg (type);
+
+  tree t_true = build_int_cst (type, true);
+  tree t_false = build_int_cst (type, false);
+
+  emit_tree_cond (&body, dest, done_label,
+		  fndecl (&body, arg, loc), t_true);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to false.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, t_false));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+  {
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+  }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+  /* Remove the call to the builtin.  */
+  gsi_remove (gsi, false);
+}
+
+/* Lower and expand calls to __builtin_isnan in GSI.  */
+static void
+lower_builtin_isnan (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_nan);
+}
+
+/* Lower and expand calls to __builtin_isinfinite in GSI.  */
+static void
+lower_builtin_isinfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_infinity);
+}
+
+/* Lower and expand calls to __builtin_isnormal in GSI.  */
+static void
+lower_builtin_isnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_normal);
+}
+
+/* Lower and expand calls to __builtin_iszero in GSI.  */
+static void
+lower_builtin_iszero (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_zero);
+}
+
+/* Lower and expand calls to __builtin_issubnormal in GSI.  */
+static void
+lower_builtin_issubnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_subnormal);
+}
+
+/* Lower and expand calls to __builtin_isfinite in GSI.  */
+static void
+lower_builtin_isfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_finite);
+}
+
 /* Lower calls to posix_memalign to
      res = posix_memalign (ptr, align, size);
      if (res == 0)
diff --git a/gcc/real.h b/gcc/real.h
index 59af580e78f2637be84f71b98b45ec6611053222..4b1b92138e07f43a175a2cbee4d952afad5898f7 100644
--- a/gcc/real.h
+++ b/gcc/real.h
@@ -161,6 +161,19 @@ struct real_format
   bool has_signed_zero;
   bool qnan_msb_set;
   bool canonical_nan_lsbs_set;
+
+  /* This flag indicates whether the format is suitable for the optimized
+     code paths for the __builtin_fpclassify function and friends.  For
+     this, the format must be a base 2 representation with the sign bit as
+     the most-significant bit followed by (exp <= 32) exponent bits
+     followed by the mantissa bits.  It must be possible to interpret the
+     bits of the floating-point representation as an integer.  NaNs and
+     INFs (if available) must be represented by the same schema used by
+     IEEE 754.  (NaNs must be represented by an exponent with all bits 1,
+     any mantissa except all bits 0 and any sign bit.  +INF and -INF must be
+     represented by an exponent with all bits 1, a mantissa with all bits 0 and
+     a sign bit of 0 and 1 respectively.)  */
+  bool is_binary_ieee_compatible;
   const char *name;
 };
 
@@ -511,6 +524,11 @@ extern bool real_isinteger (const REAL_VALUE_TYPE *, HOST_WIDE_INT *);
    float string.  BUF must be large enough to contain the result.  */
 extern void get_max_float (const struct real_format *, char *, size_t);
 
+/* Write into BUF the smallest positive normalized number x,
+   such that b**(x-1) is normalized.  BUF must be large enough
+   to contain the result.  */
+extern void get_min_float (const struct real_format *, char *, size_t);
+
 #ifndef GENERATOR_FILE
 /* real related routines.  */
 extern wide_int real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
diff --git a/gcc/real.c b/gcc/real.c
index 66e88e2ad366f7848609d157074c80420d778bcf..20c907a6d543c73ba62aa9a8ddf6973d82de7832 100644
--- a/gcc/real.c
+++ b/gcc/real.c
@@ -3052,6 +3052,7 @@ const struct real_format ieee_single_format =
     true,
     true,
     false,
+    true,
     "ieee_single"
   };
 
@@ -3075,6 +3076,7 @@ const struct real_format mips_single_format =
     true,
     false,
     true,
+    true,
     "mips_single"
   };
 
@@ -3098,6 +3100,7 @@ const struct real_format motorola_single_format =
     true,
     true,
     true,
+    true,
     "motorola_single"
   };
 
@@ -3132,6 +3135,7 @@ const struct real_format spu_single_format =
     true,
     false,
     false,
+    false,
     "spu_single"
   };
 \f
@@ -3343,6 +3347,7 @@ const struct real_format ieee_double_format =
     true,
     true,
     false,
+    true,
     "ieee_double"
   };
 
@@ -3366,6 +3371,7 @@ const struct real_format mips_double_format =
     true,
     false,
     true,
+    true,
     "mips_double"
   };
 
@@ -3389,6 +3395,7 @@ const struct real_format motorola_double_format =
     true,
     true,
     true,
+    true,
     "motorola_double"
   };
 \f
@@ -3735,6 +3742,7 @@ const struct real_format ieee_extended_motorola_format =
     true,
     true,
     true,
+    false,
     "ieee_extended_motorola"
   };
 
@@ -3758,6 +3766,7 @@ const struct real_format ieee_extended_intel_96_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96"
   };
 
@@ -3781,6 +3790,7 @@ const struct real_format ieee_extended_intel_128_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_128"
   };
 
@@ -3806,6 +3816,7 @@ const struct real_format ieee_extended_intel_96_round_53_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96_round_53"
   };
 \f
@@ -3896,6 +3907,7 @@ const struct real_format ibm_extended_format =
     true,
     true,
     false,
+    false,
     "ibm_extended"
   };
 
@@ -3919,6 +3931,7 @@ const struct real_format mips_extended_format =
     true,
     false,
     true,
+    false,
     "mips_extended"
   };
 
@@ -4184,6 +4197,7 @@ const struct real_format ieee_quad_format =
     true,
     true,
     false,
+    true,
     "ieee_quad"
   };
 
@@ -4207,6 +4221,7 @@ const struct real_format mips_quad_format =
     true,
     false,
     true,
+    true,
     "mips_quad"
   };
 \f
@@ -4509,6 +4524,7 @@ const struct real_format vax_f_format =
     false,
     false,
     false,
+    false,
     "vax_f"
   };
 
@@ -4532,6 +4548,7 @@ const struct real_format vax_d_format =
     false,
     false,
     false,
+    false,
     "vax_d"
   };
 
@@ -4555,6 +4572,7 @@ const struct real_format vax_g_format =
     false,
     false,
     false,
+    false,
     "vax_g"
   };
 \f
@@ -4633,6 +4651,7 @@ const struct real_format decimal_single_format =
     true,
     true,
     false,
+    false,
     "decimal_single"
   };
 
@@ -4657,6 +4676,7 @@ const struct real_format decimal_double_format =
     true,
     true,
     false,
+    false,
     "decimal_double"
   };
 
@@ -4681,6 +4701,7 @@ const struct real_format decimal_quad_format =
     true,
     true,
     false,
+    false,
     "decimal_quad"
   };
 \f
@@ -4820,6 +4841,7 @@ const struct real_format ieee_half_format =
     true,
     true,
     false,
+    true,
     "ieee_half"
   };
 
@@ -4846,6 +4868,7 @@ const struct real_format arm_half_format =
     true,
     false,
     false,
+    false,
     "arm_half"
   };
 \f
@@ -4893,6 +4916,7 @@ const struct real_format real_internal_format =
     true,
     true,
     false,
+    false,
     "real_internal"
   };
 \f
@@ -5080,6 +5104,16 @@ get_max_float (const struct real_format *fmt, char *buf, size_t len)
   gcc_assert (strlen (buf) < len);
 }
 
+/* Write into BUF the minimum negative representable finite floating-point
+   number, x, such that b**(x-1) is normalized.
+   BUF must be large enough to contain the result.  */
+void
+get_min_float (const struct real_format *fmt, char *buf, size_t len)
+{
+  sprintf (buf, "0x1p%d", fmt->emin - 1);
+  gcc_assert (strlen (buf) < len);
+}
+
 /* True if mode M has a NaN representation and
    the treatment of NaN operands is important.  */
 
diff --git a/gcc/testsuite/gcc.dg/builtins-43.c b/gcc/testsuite/gcc.dg/builtins-43.c
index f7c318edf084104b9b820e18e631ed61e760569e..5d41c28aef8619f06658f45846ae15dd8b4987ed 100644
--- a/gcc/testsuite/gcc.dg/builtins-43.c
+++ b/gcc/testsuite/gcc.dg/builtins-43.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-gimple -fdump-tree-optimized" } */
+/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-lower -fdump-tree-optimized" } */
   
 extern void f(int);
 extern void link_error ();
@@ -51,7 +51,7 @@ main ()
 
 
 /* Check that all instances of __builtin_isnan were folded.  */
-/* { dg-final { scan-tree-dump-times "isnan" 0 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "isnan" 0 "lower" } } */
 
 /* Check that all instances of link_error were subject to DCE.  */
 /* { dg-final { scan-tree-dump-times "link_error" 0 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/fold-notunord.c b/gcc/testsuite/gcc.dg/fold-notunord.c
deleted file mode 100644
index ca345154ac204cb5f380855828421b7f88d49052..0000000000000000000000000000000000000000
--- a/gcc/testsuite/gcc.dg/fold-notunord.c
+++ /dev/null
@@ -1,9 +0,0 @@
-/* { dg-do compile } */
-/* { dg-options "-O -ftrapping-math -fdump-tree-optimized" } */
-
-int f (double d)
-{
-  return !__builtin_isnan (d);
-}
-
-/* { dg-final { scan-tree-dump " ord " "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/pr28796-1.c b/gcc/testsuite/gcc.dg/pr28796-1.c
index 077118a298878441e812410f3a6bf3707fb1d839..a57b4e350af1bc45344106fdeab4b32ef87f233f 100644
--- a/gcc/testsuite/gcc.dg/pr28796-1.c
+++ b/gcc/testsuite/gcc.dg/pr28796-1.c
@@ -1,5 +1,5 @@
 /* { dg-do link } */
-/* { dg-options "-ffinite-math-only" } */
+/* { dg-options "-ffinite-math-only -O2" } */
 
 extern void link_error(void);
 
diff --git a/gcc/testsuite/gcc.dg/tg-tests.h b/gcc/testsuite/gcc.dg/tg-tests.h
index 0cf1f6452584962cebda999b5e8c7a61180d7830..cbb707c54b8437aada0cc205b1819369ada95665 100644
--- a/gcc/testsuite/gcc.dg/tg-tests.h
+++ b/gcc/testsuite/gcc.dg/tg-tests.h
@@ -11,6 +11,7 @@ void __attribute__ ((__noinline__))
 foo_1 (float f, double d, long double ld,
        int res_unord, int res_isnan, int res_isinf,
        int res_isinf_sign, int res_isfin, int res_isnorm,
+       int res_iszero, int res_issubnorm,
        int res_signbit, int classification)
 {
   if (__builtin_isunordered (f, 0) != res_unord)
@@ -80,6 +81,20 @@ foo_1 (float f, double d, long double ld,
   if (__builtin_finitel (ld) != res_isfin)
     __builtin_abort ();
 
+  if (__builtin_iszero (f) != res_iszero)
+    __builtin_abort ();
+  if (__builtin_iszero (d) != res_iszero)
+    __builtin_abort ();
+  if (__builtin_iszero (ld) != res_iszero)
+    __builtin_abort ();
+
+  if (__builtin_issubnormal (f) != res_issubnorm)
+    __builtin_abort ();
+  if (__builtin_issubnormal (d) != res_issubnorm)
+    __builtin_abort ();
+  if (__builtin_issubnormal (ld) != res_issubnorm)
+    __builtin_abort ();
+
   /* Sign bit of zeros and nans is not preserved in unsafe math mode.  */
 #ifdef UNSAFE
   if (!res_isnan && f != 0 && d != 0 && ld != 0)
@@ -115,12 +130,13 @@ foo_1 (float f, double d, long double ld,
 void __attribute__ ((__noinline__))
 foo (float f, double d, long double ld,
      int res_unord, int res_isnan, int res_isinf,
-     int res_isfin, int res_isnorm, int classification)
+     int res_isfin, int res_isnorm, int res_iszero,
+     int res_issubnorm, int classification)
 {
-  foo_1 (f, d, ld, res_unord, res_isnan, res_isinf, res_isinf, res_isfin, res_isnorm, 0, classification);
+  foo_1 (f, d, ld, res_unord, res_isnan, res_isinf, res_isinf, res_isfin, res_isnorm, res_iszero, res_issubnorm, 0, classification);
   /* Try all the values negated as well.  All will have the sign bit set,
      except for the nan.  */
-  foo_1 (-f, -d, -ld, res_unord, res_isnan, res_isinf, -res_isinf, res_isfin, res_isnorm, 1, classification);
+  foo_1 (-f, -d, -ld, res_unord, res_isnan, res_isinf, -res_isinf, res_isfin, res_isnorm, res_iszero, res_issubnorm, 1, classification);
 }
 
 int __attribute__ ((__noinline__))
@@ -132,35 +148,35 @@ main_tests (void)
   
   /* Test NaN.  */
   f = __builtin_nanf(""); d = __builtin_nan(""); ld = __builtin_nanl("");
-  foo(f, d, ld, /*unord=*/ 1, /*isnan=*/ 1, /*isinf=*/ 0, /*isfin=*/ 0, /*isnorm=*/ 0, FP_NAN);
+  foo(f, d, ld, /*unord=*/ 1, /*isnan=*/ 1, /*isinf=*/ 0, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_NAN);
 
   /* Test infinity.  */
   f = __builtin_inff(); d = __builtin_inf(); ld = __builtin_infl();
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, FP_INFINITE);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_INFINITE);
 
   /* Test zero.  */
   f = 0; d = 0; ld = 0;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, FP_ZERO);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, /*iszero=*/1, /*issubnorm=*/0, FP_ZERO);
 
   /* Test one.  */
   f = 1; d = 1; ld = 1;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test minimum values.  */
   f = __FLT_MIN__; d = __DBL_MIN__; ld = __LDBL_MIN__;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test subnormal values.  */
   f = __FLT_MIN__/2; d = __DBL_MIN__/2; ld = __LDBL_MIN__/2;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, FP_SUBNORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/1, FP_SUBNORMAL);
 
   /* Test maximum values.  */
   f = __FLT_MAX__; d = __DBL_MAX__; ld = __LDBL_MAX__;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test overflow values.  */
   f = __FLT_MAX__*2; d = __DBL_MAX__*2; ld = __LDBL_MAX__*2;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, FP_INFINITE);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_INFINITE);
 
   return 0;
 }
diff --git a/gcc/testsuite/gcc.dg/torture/float128-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..ec9d3ad41e24280978707888590eec1b562207f0
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128_runtime } */
+
+#define WIDTH 128
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0ede861716750453a86c9abc703ad0b2826674c6
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128x_runtime } */
+
+#define WIDTH 128
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float16-tg-4.c b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..007c4c224ea95537c31185d0aff964d1975f2190
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float16 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float16 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float16_runtime } */
+
+#define WIDTH 16
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..c7f8353da2cffdfc2c2f58f5da3d5363b95e6f91
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32 type-generic built-in functions: __builtin_f__builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32_runtime } */
+
+#define WIDTH 32
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0d7a592920aca112d5f6409e565d4582c253c977
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32x_runtime } */
+
+#define WIDTH 32
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..bb25a22a68e60ce2717ab3583bbec595dd563c35
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64_runtime } */
+
+#define WIDTH 64
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..82305d916b8bd75131e2c647fd37f74cadbc8f1d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64x_runtime } */
+
+#define WIDTH 64
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
new file mode 100644
index 0000000000000000000000000000000000000000..aa3448c090cf797a1525b1045ffebeed79cace40
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
@@ -0,0 +1,99 @@
+/* Tests for _FloatN / _FloatNx types: compile and execution tests for
+   type-generic built-in functions: __builtin_iszero, __builtin_issubnormal.
+   Before including this file, define WIDTH as the value N; define EXT to 1
+   for _FloatNx and 0 for _FloatN.  */
+
+#define __STDC_WANT_IEC_60559_TYPES_EXT__
+#include <float.h>
+
+#define CONCATX(X, Y) X ## Y
+#define CONCAT(X, Y) CONCATX (X, Y)
+#define CONCAT3(X, Y, Z) CONCAT (CONCAT (X, Y), Z)
+#define CONCAT4(W, X, Y, Z) CONCAT (CONCAT (CONCAT (W, X), Y), Z)
+
+#if EXT
+# define TYPE CONCAT3 (_Float, WIDTH, x)
+# define CST(C) CONCAT4 (C, f, WIDTH, x)
+# define MAX CONCAT3 (FLT, WIDTH, X_MAX)
+# define MIN CONCAT3 (FLT, WIDTH, X_MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, X_TRUE_MIN)
+#else
+# define TYPE CONCAT (_Float, WIDTH)
+# define CST(C) CONCAT3 (C, f, WIDTH)
+# define MAX CONCAT3 (FLT, WIDTH, _MAX)
+# define MIN CONCAT3 (FLT, WIDTH, _MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, _TRUE_MIN)
+#endif
+
+extern void exit (int);
+extern void abort (void);
+
+volatile TYPE inf = __builtin_inf (), nanval = __builtin_nan ("");
+volatile TYPE neginf = -__builtin_inf (), negnanval = -__builtin_nan ("");
+volatile TYPE zero = CST (0.0), negzero = -CST (0.0), one = CST (1.0);
+volatile TYPE max = MAX, negmax = -MAX, min = MIN, negmin = -MIN;
+volatile TYPE true_min = TRUE_MIN, negtrue_min = -TRUE_MIN;
+volatile TYPE sub_norm = MIN / 2.0;
+
+int
+main (void)
+{
+  if (__builtin_iszero (inf) == 1)
+    abort ();
+  if (__builtin_iszero (nanval) == 1)
+    abort ();
+  if (__builtin_iszero (neginf) == 1)
+    abort ();
+  if (__builtin_iszero (negnanval) == 1)
+    abort ();
+  if (__builtin_iszero (zero) != 1)
+    abort ();
+  if (__builtin_iszero (negzero) != 1)
+    abort ();
+  if (__builtin_iszero (one) == 1)
+    abort ();
+  if (__builtin_iszero (max) == 1)
+    abort ();
+  if (__builtin_iszero (negmax) == 1)
+    abort ();
+  if (__builtin_iszero (min) == 1)
+    abort ();
+  if (__builtin_iszero (negmin) == 1)
+    abort ();
+  if (__builtin_iszero (true_min) == 1)
+    abort ();
+  if (__builtin_iszero (negtrue_min) == 1)
+    abort ();
+  if (__builtin_iszero (sub_norm) == 1)
+    abort ();
+
+  if (__builtin_issubnormal (inf) == 1)
+    abort ();
+  if (__builtin_issubnormal (nanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (neginf) == 1)
+    abort ();
+  if (__builtin_issubnormal (negnanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (zero) == 1)
+    abort ();
+  if (__builtin_issubnormal (negzero) == 1)
+    abort ();
+  if (__builtin_issubnormal (one) == 1)
+    abort ();
+  if (__builtin_issubnormal (max) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmax) == 1)
+    abort ();
+  if (__builtin_issubnormal (min) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmin) == 1)
+    abort ();
+  if (__builtin_issubnormal (true_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (negtrue_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (sub_norm) != 1)
+    abort ();
+  exit (0);
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
new file mode 100644
index 0000000000000000000000000000000000000000..3a1bf956bfcedcc15dc1cbacfe2f0b663b31c3cc
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
@@ -0,0 +1,22 @@
+/* This file checks the code generation for the new __builtin_fpclassify.
+   because checking the exact assembly isn't very useful, we'll just be checking
+   for the presence of certain instructions and the omition of others. */
+/* { dg-options "-O2" } */
+/* { dg-do compile } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fabs\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmp\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmpe\[ \t\]?" } } */
+/* { dg-final { scan-assembler "\[ \t\]?ubfx\[ \t\]?" } } */
+
+#include <stdio.h>
+#include <math.h>
+
+/*
+ fp_nan = args[0];
+ fp_infinite = args[1];
+ fp_normal = args[2];
+ fp_subnormal = args[3];
+ fp_zero = args[4];
+*/
+
+int f(double x) { return __builtin_fpclassify(0, 1, 4, 3, 2, x); }

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-19 18:06                               ` Tamar Christina
@ 2017-01-19 18:28                                 ` Joseph Myers
  2017-01-23 15:55                                   ` Tamar Christina
  2017-06-08 10:31                                   ` Markus Trippelsdorf
  0 siblings, 2 replies; 47+ messages in thread
From: Joseph Myers @ 2017-01-19 18:28 UTC (permalink / raw)
  To: Tamar Christina
  Cc: Jeff Law, GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

On Thu, 19 Jan 2017, Tamar Christina wrote:

> Hi Joseph,
> 
> I made the requested changes and did a quick pass over the rest
> of the fp cases.

I've no further comments, but watch out for any related test failures 
being reported.

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-19 18:28                                 ` Joseph Myers
@ 2017-01-23 15:55                                   ` Tamar Christina
  2017-01-23 16:26                                     ` Joseph Myers
       [not found]                                     ` <5C66F854-5903-4F34-8DE6-5EE2E1FC5C77@gmail.com>
  2017-06-08 10:31                                   ` Markus Trippelsdorf
  1 sibling, 2 replies; 47+ messages in thread
From: Tamar Christina @ 2017-01-23 15:55 UTC (permalink / raw)
  To: Joseph Myers
  Cc: Jeff Law, GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

[-- Attachment #1: Type: text/plain, Size: 4485 bytes --]

Hi Jeff, Joseph,

Thank you for the reviews. This is hopefully the final version of the patch.

I was doing some extra testing on the FP fallback case
and noticed that the builtins Joseph told me about before were emitting GENERIC
and not GIMPLE code, particularly the TRUTH_NOT_EXPR expression was problematic.
So I've added calls to gimplify_expr and gimple_boolify where appropriate to do
the conversion.

In the testcase I've also had to remove `-funsafe-math-optimizations` because at
least on AArch64 this sets the LZ bit in the FPCR which causes the fcmp to
collapse small subnormal numbers to 0 and so iszero would fail.

With this both the FP and Integer versions have no regressions.

Ok for trunk?

Thanks,

Tamar.

gcc/
2017-01-23  Tamar Christina  <tamar.christina@arm.com>

	PR middle-end/77925
	PR middle-end/77926
	PR middle-end/66462

	* gcc/builtins.c (fold_builtin_fpclassify): Removed.
	(fold_builtin_interclass_mathfn): Removed.
	(expand_builtin): Added builtins to lowering list.
	(fold_builtin_n): Removed fold_builtin_varargs.
	(fold_builtin_varargs): Removed.
	* gcc/builtins.def (BUILT_IN_ISZERO, BUILT_IN_ISSUBNORMAL): Added.
	* gcc/real.h (get_min_float): Added.
	(real_format): Added is_ieee_compatible field.
	* gcc/real.c (get_min_float): Added.
	(ieee_single_format): Set is_ieee_compatible flag.
	* gcc/gimple-low.c (lower_stm): Define BUILT_IN_FPCLASSIFY,
	CASE_FLT_FN (BUILT_IN_ISINF), BUILT_IN_ISINFD32, BUILT_IN_ISINFD64,
	BUILT_IN_ISINFD128, BUILT_IN_ISNAND32, BUILT_IN_ISNAND64,
	BUILT_IN_ISNAND128, BUILT_IN_ISNAN, BUILT_IN_ISNORMAL, BUILT_IN_ISZERO,
	BUILT_IN_ISSUBNORMAL, CASE_FLT_FN (BUILT_IN_FINITE), BUILT_IN_FINITED32
	BUILT_IN_FINITED64, BUILT_IN_FINITED128, BUILT_IN_ISFINITE.
	(lower_builtin_fpclassify, is_nan, is_normal, is_infinity): Added.
	(is_zero, is_subnormal, is_finite, use_ieee_int_mode): Likewise.
	(lower_builtin_isnan, lower_builtin_isinfinite): Likewise.
	(lower_builtin_isnormal, lower_builtin_iszero): Likewise.
	(lower_builtin_issubnormal, lower_builtin_isfinite): Likewise.
	(emit_tree_cond, get_num_as_int, emit_tree_and_return_var): Added.
	(mips_single_format): Likewise.
	(motorola_single_format): Likewise.
	(spu_single_format): Likewise.
	(ieee_double_format): Likewise.
	(mips_double_format): Likewise.
	(motorola_double_format): Likewise.
	(ieee_extended_motorola_format): Likewise.
	(ieee_extended_intel_128_format): Likewise.
	(ieee_extended_intel_96_round_53_format): Likewise.
	(ibm_extended_format): Likewise.
	(mips_extended_format): Likewise.
	(ieee_quad_format): Likewise.
	(mips_quad_format): Likewise.
	(vax_f_format): Likewise.
	(vax_d_format): Likewise.
	(vax_g_format): Likewise.
	(decimal_single_format): Likewise.
	(decimal_quad_format): Likewise.
	(iee_half_format): Likewise.
	(mips_single_format): Likewise.
	(arm_half_format): Likewise.
	(real_internal_format): Likewise.
	* gcc/doc/extend.texi: Added documentation for built-ins.
	* gcc/c/c-typeck.c (convert_arguments): Added BUILT_IN_ISZERO
	and BUILT_IN_ISSUBNORMAL.

gcc/testsuite/
2017-01-23  Tamar Christina  <tamar.christina@arm.com>

	* gcc.target/aarch64/builtin-fpclassify.c: New codegen test.
	* gcc.dg/fold-notunord.c: Removed.
	* gcc.dg/torture/floatn-tg-4.h: Added tests for iszero and issubnormal.
	* gcc.dg/torture/float128-tg-4.c: Likewise.
	* gcc.dg/torture/float128x-tg-4: Likewise.
	* gcc.dg/torture/float16-tg-4.c: Likewise.
	* gcc.dg/torture/float32-tg-4.c: Likewise.
	* gcc.dg/torture/float32x-tg-4.c: Likewise.
	* gcc.dg/torture/float64-tg-4.c: Likewise.
	* gcc.dg/torture/float64x-tg-4.c: Likewise.
	* gcc.dg/pr28796-1.c: Added -O2.
	* gcc.dg/builtins-43.c: Check lower instead of gimple.
	* gcc.dg/tg-tests.h: Added iszero and issubnormal.
	* gcc.dg/pr28796-2.c: Dropped -funsafe-math-optimizations.
________________________________________
From: Joseph Myers <joseph@codesourcery.com>
Sent: Thursday, January 19, 2017 6:20:35 PM
To: Tamar Christina
Cc: Jeff Law; GCC Patches; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On Thu, 19 Jan 2017, Tamar Christina wrote:

> Hi Joseph,
>
> I made the requested changes and did a quick pass over the rest
> of the fp cases.

I've no further comments, but watch out for any related test failures
being reported.

--
Joseph S. Myers
joseph@codesourcery.com

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: fp-classify-jan-23-spin.patch --]
[-- Type: text/x-patch; name="fp-classify-jan-23-spin.patch", Size: 76327 bytes --]

diff --git a/gcc/builtins.c b/gcc/builtins.c
index 3ac2d44148440b124559ba7cd3de483b7a74b72d..d8ff9c70ae6b9e72e09b8cbd9a0bd41b6830b83e 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -160,7 +160,6 @@ static tree fold_builtin_0 (location_t, tree);
 static tree fold_builtin_1 (location_t, tree, tree);
 static tree fold_builtin_2 (location_t, tree, tree, tree);
 static tree fold_builtin_3 (location_t, tree, tree, tree, tree);
-static tree fold_builtin_varargs (location_t, tree, tree*, int);
 
 static tree fold_builtin_strpbrk (location_t, tree, tree, tree);
 static tree fold_builtin_strstr (location_t, tree, tree, tree);
@@ -2202,19 +2201,8 @@ interclass_mathfn_icode (tree arg, tree fndecl)
   switch (DECL_FUNCTION_CODE (fndecl))
     {
     CASE_FLT_FN (BUILT_IN_ILOGB):
-      errno_set = true; builtin_optab = ilogb_optab; break;
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      builtin_optab = isinf_optab; break;
-    case BUILT_IN_ISNORMAL:
-    case BUILT_IN_ISFINITE:
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      /* These builtins have no optabs (yet).  */
+      errno_set = true;
+      builtin_optab = ilogb_optab;
       break;
     default:
       gcc_unreachable ();
@@ -2233,8 +2221,7 @@ interclass_mathfn_icode (tree arg, tree fndecl)
 }
 
 /* Expand a call to one of the builtin math functions that operate on
-   floating point argument and output an integer result (ilogb, isinf,
-   isnan, etc).
+   floating point argument and output an integer result (ilogb, etc).
    Return 0 if a normal call should be emitted rather than expanding the
    function in-line.  EXP is the expression that is a call to the builtin
    function; if convenient, the result should be placed in TARGET.  */
@@ -5997,11 +5984,7 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
     CASE_FLT_FN (BUILT_IN_ILOGB):
       if (! flag_unsafe_math_optimizations)
 	break;
-      gcc_fallthrough ();
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-    case BUILT_IN_ISNORMAL:
+
       target = expand_builtin_interclass_mathfn (exp, target);
       if (target)
 	return target;
@@ -6281,8 +6264,25 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
 	}
       break;
 
+    CASE_FLT_FN (BUILT_IN_ISINF):
+    case BUILT_IN_ISNAND32:
+    case BUILT_IN_ISNAND64:
+    case BUILT_IN_ISNAND128:
+    case BUILT_IN_ISNAN:
+    case BUILT_IN_ISINFD32:
+    case BUILT_IN_ISINFD64:
+    case BUILT_IN_ISINFD128:
+    case BUILT_IN_ISNORMAL:
+    case BUILT_IN_ISZERO:
+    case BUILT_IN_ISSUBNORMAL:
+    case BUILT_IN_FPCLASSIFY:
     case BUILT_IN_SETJMP:
-      /* This should have been lowered to the builtins below.  */
+    CASE_FLT_FN (BUILT_IN_FINITE):
+    case BUILT_IN_FINITED32:
+    case BUILT_IN_FINITED64:
+    case BUILT_IN_FINITED128:
+    case BUILT_IN_ISFINITE:
+      /* These should have been lowered to the builtins in gimple-low.c.  */
       gcc_unreachable ();
 
     case BUILT_IN_SETJMP_SETUP:
@@ -7622,184 +7622,19 @@ fold_builtin_modf (location_t loc, tree arg0, tree arg1, tree rettype)
   return NULL_TREE;
 }
 
-/* Given a location LOC, an interclass builtin function decl FNDECL
-   and its single argument ARG, return an folded expression computing
-   the same, or NULL_TREE if we either couldn't or didn't want to fold
-   (the latter happen if there's an RTL instruction available).  */
-
-static tree
-fold_builtin_interclass_mathfn (location_t loc, tree fndecl, tree arg)
-{
-  machine_mode mode;
-
-  if (!validate_arg (arg, REAL_TYPE))
-    return NULL_TREE;
-
-  if (interclass_mathfn_icode (arg, fndecl) != CODE_FOR_nothing)
-    return NULL_TREE;
-
-  mode = TYPE_MODE (TREE_TYPE (arg));
-
-  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
 
-  /* If there is no optab, try generic code.  */
-  switch (DECL_FUNCTION_CODE (fndecl))
-    {
-      tree result;
 
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      {
-	/* isinf(x) -> isgreater(fabs(x),DBL_MAX).  */
-	tree const isgr_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isgr_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	return result;
-      }
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-      {
-	/* isfinite(x) -> islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isle_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	/*result = fold_build2_loc (loc, UNGT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	result = fold_build1_loc (loc, TRUTH_NOT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  result);*/
-	return result;
-      }
-    case BUILT_IN_ISNORMAL:
-      {
-	/* isnormal(x) -> isgreaterequal(fabs(x),DBL_MIN) &
-	   islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	tree orig_arg, max_exp, min_exp;
-	machine_mode orig_mode = mode;
-	REAL_VALUE_TYPE rmax, rmin;
-	char buf[128];
-
-	orig_arg = arg = builtin_save_expr (arg);
-	if (is_ibm_extended)
-	  {
-	    /* Use double to test the normal range of IBM extended
-	       precision.  Emin for IBM extended precision is
-	       different to emin for IEEE double, being 53 higher
-	       since the low double exponent is at least 53 lower
-	       than the high double exponent.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	arg = fold_build1_loc (loc, ABS_EXPR, type, arg);
-
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&rmax, buf);
-	sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (orig_mode)->emin - 1);
-	real_from_string (&rmin, buf);
-	max_exp = build_real (type, rmax);
-	min_exp = build_real (type, rmin);
-
-	max_exp = build_call_expr (isle_fn, 2, arg, max_exp);
-	if (is_ibm_extended)
-	  {
-	    /* Testing the high end of the range is done just using
-	       the high double, using the same test as isfinite().
-	       For the subnormal end of the range we first test the
-	       high double, then if its magnitude is equal to the
-	       limit of 0x1p-969, we test whether the low double is
-	       non-zero and opposite sign to the high double.  */
-	    tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
-	    tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	    tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
-	    tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
-				       arg, min_exp);
-	    tree as_complex = build1 (VIEW_CONVERT_EXPR,
-				      complex_double_type_node, orig_arg);
-	    tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
-	    tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
-	    tree zero = build_real (type, dconst0);
-	    tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
-	    tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
-	    tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
-	    tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
-				      fold_build3 (COND_EXPR,
-						   integer_type_node,
-						   hilt, logt, lolt));
-	    eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
-				  eq_min, ok_lo);
-	    min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
-				   gt_min, eq_min);
-	  }
-	else
-	  {
-	    tree const isge_fn
-	      = builtin_decl_explicit (BUILT_IN_ISGREATEREQUAL);
-	    min_exp = build_call_expr (isge_fn, 2, arg, min_exp);
-	  }
-	result = fold_build2 (BIT_AND_EXPR, integer_type_node,
-			      max_exp, min_exp);
-	return result;
-      }
-    default:
-      break;
-    }
-
-  return NULL_TREE;
-}
-
-/* Fold a call to __builtin_isnan(), __builtin_isinf, __builtin_finite.
+/* Fold a call to __builtin_isinf_sign.
    ARG is the argument for the call.  */
 
 static tree
-fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
+fold_builtin_classify (location_t loc, tree arg, int builtin_index)
 {
-  tree type = TREE_TYPE (TREE_TYPE (fndecl));
-
   if (!validate_arg (arg, REAL_TYPE))
     return NULL_TREE;
 
   switch (builtin_index)
     {
-    case BUILT_IN_ISINF:
-      if (!HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      return NULL_TREE;
-
     case BUILT_IN_ISINF_SIGN:
       {
 	/* isinf_sign(x) -> isinf(x) ? (signbit(x) ? -1 : 1) : 0 */
@@ -7832,106 +7667,11 @@ fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
 	return tmp;
       }
 
-    case BUILT_IN_ISFINITE:
-      if (!HONOR_NANS (arg)
-	  && !HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_one_node, arg);
-
-      return NULL_TREE;
-
-    case BUILT_IN_ISNAN:
-      if (!HONOR_NANS (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      {
-	bool is_ibm_extended = MODE_COMPOSITE_P (TYPE_MODE (TREE_TYPE (arg)));
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    arg = fold_build1_loc (loc, NOP_EXPR, double_type_node, arg);
-	  }
-      }
-      arg = builtin_save_expr (arg);
-      return fold_build2_loc (loc, UNORDERED_EXPR, type, arg, arg);
-
     default:
       gcc_unreachable ();
     }
 }
 
-/* Fold a call to __builtin_fpclassify(int, int, int, int, int, ...).
-   This builtin will generate code to return the appropriate floating
-   point classification depending on the value of the floating point
-   number passed in.  The possible return values must be supplied as
-   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
-   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
-   one floating point argument which is "type generic".  */
-
-static tree
-fold_builtin_fpclassify (location_t loc, tree *args, int nargs)
-{
-  tree fp_nan, fp_infinite, fp_normal, fp_subnormal, fp_zero,
-    arg, type, res, tmp;
-  machine_mode mode;
-  REAL_VALUE_TYPE r;
-  char buf[128];
-
-  /* Verify the required arguments in the original call.  */
-  if (nargs != 6
-      || !validate_arg (args[0], INTEGER_TYPE)
-      || !validate_arg (args[1], INTEGER_TYPE)
-      || !validate_arg (args[2], INTEGER_TYPE)
-      || !validate_arg (args[3], INTEGER_TYPE)
-      || !validate_arg (args[4], INTEGER_TYPE)
-      || !validate_arg (args[5], REAL_TYPE))
-    return NULL_TREE;
-
-  fp_nan = args[0];
-  fp_infinite = args[1];
-  fp_normal = args[2];
-  fp_subnormal = args[3];
-  fp_zero = args[4];
-  arg = args[5];
-  type = TREE_TYPE (arg);
-  mode = TYPE_MODE (type);
-  arg = builtin_save_expr (fold_build1_loc (loc, ABS_EXPR, type, arg));
-
-  /* fpclassify(x) ->
-       isnan(x) ? FP_NAN :
-         (fabs(x) == Inf ? FP_INFINITE :
-	   (fabs(x) >= DBL_MIN ? FP_NORMAL :
-	     (x == 0 ? FP_ZERO : FP_SUBNORMAL))).  */
-
-  tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-		     build_real (type, dconst0));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node,
-		     tmp, fp_zero, fp_subnormal);
-
-  sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (mode)->emin - 1);
-  real_from_string (&r, buf);
-  tmp = fold_build2_loc (loc, GE_EXPR, integer_type_node,
-		     arg, build_real (type, r));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, fp_normal, res);
-
-  if (HONOR_INFINITIES (mode))
-    {
-      real_inf (&r);
-      tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-			 build_real (type, r));
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp,
-			 fp_infinite, res);
-    }
-
-  if (HONOR_NANS (mode))
-    {
-      tmp = fold_build2_loc (loc, ORDERED_EXPR, integer_type_node, arg, arg);
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, res, fp_nan);
-    }
-
-  return res;
-}
-
 /* Fold a call to an unordered comparison function such as
    __builtin_isgreater().  FNDECL is the FUNCTION_DECL for the function
    being called and ARG0 and ARG1 are the arguments for the call.
@@ -8232,40 +7972,8 @@ fold_builtin_1 (location_t loc, tree fndecl, tree arg0)
     case BUILT_IN_ISDIGIT:
       return fold_builtin_isdigit (loc, arg0);
 
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISFINITE:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISFINITE);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    case BUILT_IN_ISNORMAL:
-      return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-
     case BUILT_IN_ISINF_SIGN:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF_SIGN);
-
-    CASE_FLT_FN (BUILT_IN_ISNAN):
-    case BUILT_IN_ISNAND32:
-    case BUILT_IN_ISNAND64:
-    case BUILT_IN_ISNAND128:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISNAN);
+      return fold_builtin_classify (loc, arg0, BUILT_IN_ISINF_SIGN);
 
     case BUILT_IN_FREE:
       if (integer_zerop (arg0))
@@ -8465,7 +8173,6 @@ fold_builtin_n (location_t loc, tree fndecl, tree *args, int nargs, bool)
       ret = fold_builtin_3 (loc, fndecl, args[0], args[1], args[2]);
       break;
     default:
-      ret = fold_builtin_varargs (loc, fndecl, args, nargs);
       break;
     }
   if (ret)
@@ -9422,37 +9129,6 @@ fold_builtin_object_size (tree ptr, tree ost)
   return NULL_TREE;
 }
 
-/* Builtins with folding operations that operate on "..." arguments
-   need special handling; we need to store the arguments in a convenient
-   data structure before attempting any folding.  Fortunately there are
-   only a few builtins that fall into this category.  FNDECL is the
-   function, EXP is the CALL_EXPR for the call.  */
-
-static tree
-fold_builtin_varargs (location_t loc, tree fndecl, tree *args, int nargs)
-{
-  enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl);
-  tree ret = NULL_TREE;
-
-  switch (fcode)
-    {
-    case BUILT_IN_FPCLASSIFY:
-      ret = fold_builtin_fpclassify (loc, args, nargs);
-      break;
-
-    default:
-      break;
-    }
-  if (ret)
-    {
-      ret = build1 (NOP_EXPR, TREE_TYPE (ret), ret);
-      SET_EXPR_LOCATION (ret, loc);
-      TREE_NO_WARNING (ret) = 1;
-      return ret;
-    }
-  return NULL_TREE;
-}
-
 /* Initialize format string characters in the target charset.  */
 
 bool
diff --git a/gcc/builtins.def b/gcc/builtins.def
index 219feebd3aebefbd079bf37cc801453cd1965e00..91aa6f37fa098777bc794bad56d8c561ab9fdc44 100644
--- a/gcc/builtins.def
+++ b/gcc/builtins.def
@@ -831,6 +831,8 @@ DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFL, "isinfl", BT_FN_INT_LONGDOUBLE, ATTR_CO
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD32, "isinfd32", BT_FN_INT_DFLOAT32, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD64, "isinfd64", BT_FN_INT_DFLOAT64, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD128, "isinfd128", BT_FN_INT_DFLOAT128, ATTR_CONST_NOTHROW_LEAF_LIST)
+DEF_GCC_BUILTIN        (BUILT_IN_ISZERO, "iszero", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
+DEF_GCC_BUILTIN        (BUILT_IN_ISSUBNORMAL, "issubnormal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_C99_C90RES_BUILTIN (BUILT_IN_ISNAN, "isnan", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANF, "isnanf", BT_FN_INT_FLOAT, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANL, "isnanl", BT_FN_INT_LONGDOUBLE, ATTR_CONST_NOTHROW_LEAF_LIST)
diff --git a/gcc/c/c-typeck.c b/gcc/c/c-typeck.c
index f0917ed788c91b2a2c8701438150f0e634c9402b..e2b4acd21b7d70e6500e42064db49e0f4a84aae9 100644
--- a/gcc/c/c-typeck.c
+++ b/gcc/c/c-typeck.c
@@ -3232,6 +3232,8 @@ convert_arguments (location_t loc, vec<location_t> arg_loc, tree typelist,
 	case BUILT_IN_ISINF_SIGN:
 	case BUILT_IN_ISNAN:
 	case BUILT_IN_ISNORMAL:
+	case BUILT_IN_ISZERO:
+	case BUILT_IN_ISSUBNORMAL:
 	case BUILT_IN_FPCLASSIFY:
 	  type_generic_remove_excess_precision = true;
 	  break;
diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi
index 0669f7999beb078822e471352036d8f13517812d..c240bbe9a8fd595a0e7e2b41fb708efae1e5279a 100644
--- a/gcc/doc/extend.texi
+++ b/gcc/doc/extend.texi
@@ -10433,6 +10433,10 @@ in the Cilk Plus language manual which can be found at
 @findex __builtin_isgreater
 @findex __builtin_isgreaterequal
 @findex __builtin_isinf_sign
+@findex __builtin_isinf
+@findex __builtin_isnan
+@findex __builtin_iszero
+@findex __builtin_issubnormal
 @findex __builtin_isless
 @findex __builtin_islessequal
 @findex __builtin_islessgreater
@@ -11496,7 +11500,54 @@ constant values and they must appear in this order: @code{FP_NAN},
 @code{FP_INFINITE}, @code{FP_NORMAL}, @code{FP_SUBNORMAL} and
 @code{FP_ZERO}.  The ellipsis is for exactly one floating-point value
 to classify.  GCC treats the last argument as type-generic, which
-means it does not do default promotion from float to double.
+means it does not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnan (...)
+This built-in implements the C99 isnan functionality which checks if
+the given argument represents a NaN.  The return value of the
+function will either be a 0 (false) or a 1 (true).
+On most systems, when an IEEE 754 floating-point type is used this
+built-in does not produce a signal when a signaling NaN is used.
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isinf (...)
+This built-in implements the C99 isinf functionality which checks if
+the given argument represents an infinite number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnormal (...)
+This built-in implements the C99 isnormal functionality which checks if
+the given argument represents a normal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_iszero (...)
+This built-in implements the TS 18661-1:2014 iszero functionality which checks if
+the given argument represents the number 0 or -0.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_issubnormal (...)
+This built-in implements the TS 18661-1:2014 issubnormal functionality which checks if
+the given argument represents a subnormal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
 @end deftypefn
 
 @deftypefn {Built-in Function} double __builtin_inf (void)
diff --git a/gcc/gimple-low.c b/gcc/gimple-low.c
index 64752b67b86b3d01df5f5661e4666df98b7b91d1..f0208e4c2e7b4da23d3780fd6bf645b97729f792 100644
--- a/gcc/gimple-low.c
+++ b/gcc/gimple-low.c
@@ -30,6 +30,9 @@ along with GCC; see the file COPYING3.  If not see
 #include "calls.h"
 #include "gimple-iterator.h"
 #include "gimple-low.h"
+#include "stor-layout.h"
+#include "target.h"
+#include "gimplify.h"
 
 /* The differences between High GIMPLE and Low GIMPLE are the
    following:
@@ -72,6 +75,13 @@ static void lower_gimple_bind (gimple_stmt_iterator *, struct lower_data *);
 static void lower_try_catch (gimple_stmt_iterator *, struct lower_data *);
 static void lower_gimple_return (gimple_stmt_iterator *, struct lower_data *);
 static void lower_builtin_setjmp (gimple_stmt_iterator *);
+static void lower_builtin_fpclassify (gimple_stmt_iterator *);
+static void lower_builtin_isnan (gimple_stmt_iterator *);
+static void lower_builtin_isinfinite (gimple_stmt_iterator *);
+static void lower_builtin_isnormal (gimple_stmt_iterator *);
+static void lower_builtin_iszero (gimple_stmt_iterator *);
+static void lower_builtin_issubnormal (gimple_stmt_iterator *);
+static void lower_builtin_isfinite (gimple_stmt_iterator *);
 static void lower_builtin_posix_memalign (gimple_stmt_iterator *);
 
 
@@ -330,18 +340,69 @@ lower_stmt (gimple_stmt_iterator *gsi, struct lower_data *data)
 	if (decl
 	    && DECL_BUILT_IN_CLASS (decl) == BUILT_IN_NORMAL)
 	  {
-	    if (DECL_FUNCTION_CODE (decl) == BUILT_IN_SETJMP)
+	    switch (DECL_FUNCTION_CODE (decl))
 	      {
+	      case BUILT_IN_SETJMP:
 		lower_builtin_setjmp (gsi);
 		data->cannot_fallthru = false;
 		return;
-	      }
-	    else if (DECL_FUNCTION_CODE (decl) == BUILT_IN_POSIX_MEMALIGN
-		     && flag_tree_bit_ccp
-		     && gimple_builtin_call_types_compatible_p (stmt, decl))
-	      {
-		lower_builtin_posix_memalign (gsi);
+
+	      case BUILT_IN_POSIX_MEMALIGN:
+		if (flag_tree_bit_ccp
+		    && gimple_builtin_call_types_compatible_p (stmt, decl))
+		  {
+			lower_builtin_posix_memalign (gsi);
+			return;
+		  }
+		break;
+
+	      case BUILT_IN_FPCLASSIFY:
+		lower_builtin_fpclassify (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_ISINF):
+	      case BUILT_IN_ISINFD32:
+	      case BUILT_IN_ISINFD64:
+	      case BUILT_IN_ISINFD128:
+		lower_builtin_isinfinite (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNAND32:
+	      case BUILT_IN_ISNAND64:
+	      case BUILT_IN_ISNAND128:
+	      CASE_FLT_FN (BUILT_IN_ISNAN):
+		lower_builtin_isnan (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNORMAL:
+		lower_builtin_isnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISZERO:
+		lower_builtin_iszero (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISSUBNORMAL:
+		lower_builtin_issubnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_FINITE):
+	      case BUILT_IN_FINITED32:
+	      case BUILT_IN_FINITED64:
+	      case BUILT_IN_FINITED128:
+	      case BUILT_IN_ISFINITE:
+		lower_builtin_isfinite (gsi);
+		data->cannot_fallthru = false;
 		return;
+
+	      default:
+		break;
 	      }
 	  }
 
@@ -822,6 +883,841 @@ lower_builtin_setjmp (gimple_stmt_iterator *gsi)
   gsi_remove (gsi, false);
 }
 
+/* This function will if ARG is not already a variable or SSA_NAME,
+   create a new temporary TMP and bind ARG to TMP.  This new binding is then
+   emitted into SEQ and TMP is returned.  */
+static tree
+emit_tree_and_return_var (gimple_seq *seq, tree arg)
+{
+  if (TREE_CODE (arg) == SSA_NAME || VAR_P (arg))
+    return arg;
+
+  tree tmp = create_tmp_reg (TREE_TYPE (arg));
+  gassign *stm = gimple_build_assign (tmp, arg);
+  gimple_seq_add_stmt (seq, stm);
+  return tmp;
+}
+
+/* This function builds an if statement that ends up using explicit branches
+   instead of becoming a ternary conditional select.  This function assumes you
+   will fall through to the next statements after the condition for the false
+   branch.  The code emitted looks like:
+
+   if (COND)
+     RESULT_VARIABLE = TRUE_BRANCH
+     GOTO EXIT_LABEL
+   else
+     ...
+
+   SEQ is the gimple sequence/buffer to emit any new bindings to.
+   RESULT_VARIABLE is the value to set if COND.
+   EXIT_LABEL is the label to jump to in case COND.
+   COND is condition to use in the conditional statement of the if.
+   TRUE_BRANCH is the value to set RESULT_VARIABLE to if COND.  */
+static void
+emit_tree_cond (gimple_seq *seq, tree result_variable, tree exit_label,
+		tree cond, tree true_branch)
+{
+  /* Create labels for fall through.  */
+  tree true_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree false_label = create_artificial_label (UNKNOWN_LOCATION);
+  gcond *stmt = gimple_build_cond_from_tree (cond, true_label, false_label);
+  gimple_seq_add_stmt (seq, stmt);
+
+  /* Build the true case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (true_label));
+  tree value = TREE_CONSTANT (true_branch)
+	     ? true_branch
+	     : emit_tree_and_return_var (seq, true_branch);
+  gimple_seq_add_stmt (seq, gimple_build_assign (result_variable, value));
+  gimple_seq_add_stmt (seq, gimple_build_goto (exit_label));
+
+  /* Build the false case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (false_label));
+}
+
+/* This function returns a variable containing an reinterpreted ARG as an
+   integer.
+
+   SEQ is the gimple sequence/buffer to write any new bindings to.
+   ARG is the floating point number to reinterpret as an integer.
+   LOC is the location to use when doing folding operations.  */
+static tree
+get_num_as_int (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  /* Re-interpret the float as an unsigned integer type
+     with equal precision.  */
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+  tree conv_arg = fold_build1_loc (loc, VIEW_CONVERT_EXPR, int_arg_type, arg);
+  return emit_tree_and_return_var (seq, conv_arg);
+}
+
+/* Check if ARG which is the floating point number being classified is close
+   enough to IEEE 754 format to be able to go in the early exit code.  */
+static bool
+use_ieee_int_mode (tree arg)
+{
+  tree type = TREE_TYPE (arg);
+  machine_mode mode = TYPE_MODE (type);
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  machine_mode imode = int_mode_for_mode (mode);
+  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
+
+  return (format->is_binary_ieee_compatible
+	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
+	  /* Check if there's a usable integer mode.  */
+	  && imode != BLKmode
+	  && targetm.scalar_mode_supported_p (imode)
+	  && !is_ibm_extended);
+}
+
+/* Perform some IBM extended format fixups on ARG for use by FP functions.
+   This is done by ignoring the lower 64 bits of the number.
+
+   MODE is the machine mode of ARG.
+   TYPE is the type of ARG.
+   LOC is the location to be used in fold functions.  Usually is the location
+   of the definition of ARG.  */
+static bool
+perform_ibm_extended_fixups (tree *arg, machine_mode *mode,
+			     tree *type, location_t loc)
+{
+  bool is_ibm_extended = MODE_COMPOSITE_P (*mode);
+  if (is_ibm_extended)
+    {
+      /* NaN and Inf are encoded in the high-order double value
+	 only.  The low-order value is not significant.  */
+      *type = double_type_node;
+      *mode = DFmode;
+      *arg = fold_build1_loc (loc, NOP_EXPR, *type, *arg);
+    }
+
+  return is_ibm_extended;
+}
+
+/* Generates code to check if ARG is a normal number.  For the FP case we check
+   MIN_VALUE(ARG) <= ABS(ARG) > INF and for the INT value we check the exp and
+   mantissa bits.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_normal (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const tree bool_type = boolean_type_node;
+
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree orig_arg = arg;
+      machine_mode orig_mode = mode;
+      if (TREE_CODE (arg) != SSA_NAME
+	  && (TREE_ADDRESSABLE (arg) != 0
+	    || (TREE_CODE (arg) != PARM_DECL
+	        && (!VAR_P (arg) || TREE_STATIC (arg)))))
+	orig_arg = save_expr (arg);
+
+      /* Perform IBM extended format fixups if required.  */
+      bool is_ibm_extended = perform_ibm_extended_fixups (&arg, &mode,
+							  &type, loc);
+
+      REAL_VALUE_TYPE rinf, rmin;
+      tree arg_p = fold_build1_loc (loc, ABS_EXPR, type, arg);
+
+      tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
+      tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
+      tree const isge_fn = builtin_decl_explicit (BUILT_IN_ISGREATEREQUAL);
+
+      char buf[128];
+      real_inf (&rinf);
+      get_min_float (REAL_MODE_FORMAT (orig_mode), buf, sizeof (buf));
+      real_from_string (&rmin, buf);
+
+      tree inf_exp = build_call_expr (islt_fn, 2, arg_p,
+				      build_real (type, rinf));
+      tree min_exp = build_real (type, rmin);
+      if (is_ibm_extended)
+	{
+	  /* Testing the high end of the range is done just using
+	     the high double, using the same test as isfinite().
+	     For the subnormal end of the range we first test the
+	     high double, then if its magnitude is equal to the
+	     limit of 0x1p-969, we test whether the low double is
+	     non-zero and opposite sign to the high double.  */
+	  tree gt_min = build_call_expr (isgt_fn, 2, arg_p, min_exp);
+	  tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
+				     arg_p, min_exp);
+	  tree as_complex = build1 (VIEW_CONVERT_EXPR,
+				    complex_double_type_node, orig_arg);
+	  tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
+	  tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
+	  tree zero = build_real (type, dconst0);
+	  tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
+	  tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
+	  tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
+	  tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
+				    fold_build3 (COND_EXPR,
+						 integer_type_node,
+						 hilt, logt, lolt));
+	  eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
+				eq_min, ok_lo);
+	  min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
+				 gt_min, eq_min);
+	}
+	else
+	{
+	  min_exp = build_call_expr (isge_fn, 2, arg_p, min_exp);
+	}
+
+      push_gimplify_context ();
+      gimplify_expr (&min_exp, seq, NULL, is_gimple_val, fb_either);
+      gimplify_expr (&inf_exp, seq, NULL, is_gimple_val, fb_either);
+
+      tree res
+	= fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			   emit_tree_and_return_var (seq,
+						     gimple_boolify (min_exp)),
+			   emit_tree_and_return_var (seq,
+						     gimple_boolify (inf_exp)));
+      pop_gimplify_context (NULL);
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const tree int_type = unsigned_type_node;
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  const int exp_mask  = (1 << exp_bits) - 1;
+
+  /* Get the number reinterpreted as an integer.  */
+  tree int_arg = get_num_as_int (seq, arg, loc);
+
+  /* Extract exp bits from the float, where we expect the exponent to be.
+     We create a new type because BIT_FIELD_REF does not allow you to
+     extract less bits than the precision of the storage variable.  */
+  tree exp_tmp
+    = fold_build3_loc (loc, BIT_FIELD_REF,
+		       build_nonstandard_integer_type (exp_bits, true),
+		       int_arg,
+		       build_int_cstu (int_type, exp_bits),
+		       build_int_cstu (int_type, format->p - 1));
+  tree exp_bitfield = emit_tree_and_return_var (seq, exp_tmp);
+
+  /* Re-interpret the extracted exponent bits as a 32 bit int.
+     This allows us to continue doing operations as int_type.  */
+  tree exp
+    = emit_tree_and_return_var (seq, fold_build1_loc (loc, NOP_EXPR, int_type,
+						      exp_bitfield));
+
+  /* exp_mask & ~1.  */
+  tree mask_check
+     = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+			build_int_cstu (int_type, exp_mask),
+			fold_build1_loc (loc, BIT_NOT_EXPR, int_type,
+					 build_int_cstu (int_type, 1)));
+
+  /* (exp + 1) & mask_check.
+     Check to see if exp is not all 0 or all 1.  */
+  tree exp_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+		       emit_tree_and_return_var (seq,
+				fold_build2_loc (loc, PLUS_EXPR, int_type, exp,
+						 build_int_cstu (int_type, 1))),
+		       mask_check);
+
+  tree res = fold_build2_loc (loc, NE_EXPR, boolean_type_node,
+			      build_int_cstu (int_type, 0),
+			      emit_tree_and_return_var (seq, exp_check));
+
+  return emit_tree_and_return_var (seq, res);
+}
+
+/* Generates code to check if ARG is a zero. For both the FP and INT case we
+   check if ARG == 0 (modulo sign bit).  Returns a variable containing a boolean
+   which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_zero (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      machine_mode mode = TYPE_MODE (type);
+      /* Perform IBM extended format fixups if required.  */
+      perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+      tree res = fold_build2_loc (loc, EQ_EXPR, boolean_type_node, arg,
+				  build_real (type, dconst0));
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* num << 1 == 0.
+     This checks to see if the number is zero.  */
+  tree zero_check
+    = fold_build2_loc (loc, EQ_EXPR, boolean_type_node,
+		       build_int_cstu (int_arg_type, 0),
+		       emit_tree_and_return_var (seq, int_arg));
+
+  return emit_tree_and_return_var (seq, zero_check);
+}
+
+/* Generates code to check if ARG is a subnormal number.  In the FP case we test
+   fabs (ARG) != 0 && fabs (ARG) < MIN_VALUE (ARG) and in the INT case we check
+   the exp and mantissa bits on ARG. Returns a variable containing a boolean
+   which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_subnormal (gimple_seq *seq, tree arg, location_t loc)
+{
+  const tree bool_type = boolean_type_node;
+
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
+      tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
+
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      REAL_VALUE_TYPE r;
+      char buf[128];
+      get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&r, buf);
+      tree subnorm = build_call_expr (islt_fn, 2, arg_p, build_real (type, r));
+
+      tree zero = build_call_expr (isgt_fn, 2, arg_p,
+				   build_real (type, dconst0));
+
+      push_gimplify_context ();
+      gimplify_expr (&subnorm, seq, NULL, is_gimple_val, fb_either);
+      gimplify_expr (&zero, seq, NULL, is_gimple_val, fb_either);
+
+      tree res
+	= fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			   emit_tree_and_return_var (seq,
+						     gimple_boolify (subnorm)),
+			   emit_tree_and_return_var (seq,
+						     gimple_boolify (zero)));
+      pop_gimplify_context (NULL);
+
+      return emit_tree_and_return_var (seq, res);
+  }
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check for a zero exponent and non-zero mantissa.
+     This can be done with two comparisons by first apply a
+     removing the sign bit and checking if the value is larger
+     than the mantissa mask.  */
+
+  /* This creates a mask to be used to check the mantissa value in the shifted
+     integer representation of the fpnum.  */
+  tree significant_bit = build_int_cstu (int_arg_type, format->p - 1);
+  tree mantissa_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					significant_bit),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check if exponent is zero and mantissa is not.  */
+  tree subnorm_cond_tmp
+    = fold_build2_loc (loc, LE_EXPR, bool_type,
+		       emit_tree_and_return_var (seq, int_arg),
+		       mantissa_mask);
+
+  tree subnorm_cond = emit_tree_and_return_var (seq, subnorm_cond_tmp);
+
+  tree zero_cond
+    = fold_build2_loc (loc, GT_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, int_arg),
+		       build_int_cstu (int_arg_type, 0));
+
+  tree subnorm_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, subnorm_cond),
+		       emit_tree_and_return_var (seq, zero_cond));
+
+  return emit_tree_and_return_var (seq, subnorm_check);
+}
+
+/* Generates code to check if ARG is an infinity.  In the FP case we test
+   FABS(ARG) == INF and in the INT case we check the bits on the exp and
+   mantissa.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_infinity (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_INFINITIES (mode))
+    {
+      return build_int_cst (bool_type, false);
+    }
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      /* Perform IBM extended format fixups if required.  */
+      perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+      REAL_VALUE_TYPE r;
+      real_inf (&r);
+      tree res = fold_build2_loc (loc, EQ_EXPR, bool_type, arg_p,
+				  build_real (type, r));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0.  */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, EQ_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Generates code to check if ARG is a finite number.  In the FP case we check
+   if FABS(ARG) <= MAX_VALUE(ARG) and in the INT case we check the exp and
+   mantissa bits.  Returns a variable containing a boolean which has the result
+   of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_finite (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (arg) && !HONOR_INFINITIES (arg))
+    {
+      return build_int_cst (bool_type, true);
+    }
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+
+      /* Perform IBM extended format fixups if required.  */
+      perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+      tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
+
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      REAL_VALUE_TYPE rmax;
+      char buf[128];
+      get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+      real_from_string (&rmax, buf);
+
+      tree res = build_call_expr (isle_fn, 2,  arg_p, build_real (type, rmax));
+
+      push_gimplify_context ();
+      gimplify_expr (&res, seq, NULL, is_gimple_val, fb_either);
+      pop_gimplify_context (NULL);
+
+      return emit_tree_and_return_var (seq, gimple_boolify(res));
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0. */
+  tree inf_check_tmp
+    = fold_build2_loc (loc, LT_EXPR, bool_type,
+		       emit_tree_and_return_var (seq, int_arg),
+		       inf_mask);
+
+  tree inf_check = emit_tree_and_return_var (seq, inf_check_tmp);
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Generates code to check if ARG is a NaN. In the FP case we simply check if
+   ARG != ARG and in the INT case we check the bits in the exp and mantissa.
+   Returns a variable containing a boolean which has the result of the check.
+
+   SEQ is the buffer to use to emit the gimple instructions into.
+   LOC is the location to use during fold calls.  */
+static tree
+is_nan (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (mode))
+    {
+      return build_int_cst (bool_type, false);
+    }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      /* Perform IBM extended format fixups if required.  */
+      perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      tree res
+	= fold_build2_loc (loc, UNORDERED_EXPR, bool_type,arg_p, arg_p);
+
+      return emit_tree_and_return_var (seq, res);
+  }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is not 0.  */
+  tree nan_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, GT_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, nan_check);
+}
+
+/* Validates a single argument from the arguments list CALL at position INDEX.
+   The extracted parameter is compared against the expected type CODE.
+
+   A boolean is returned indicating if the parameter exist and if of the
+   expected type.  */
+static bool
+gimple_validate_arg (gimple* call, int index, enum tree_code code)
+{
+  const tree arg = gimple_call_arg (call, index);
+  if (!arg)
+    return false;
+  else if (code == POINTER_TYPE)
+    return POINTER_TYPE_P (TREE_TYPE (arg));
+  else if (code == INTEGER_TYPE)
+    return INTEGRAL_TYPE_P (TREE_TYPE (arg));
+  return code == TREE_CODE (TREE_TYPE (arg));
+}
+
+/* Lowers calls to __builtin_fpclassify to
+   fpclassify (x) ->
+     isnormal(x) ? FP_NORMAL :
+       iszero (x) ? FP_ZERO :
+	 isnan (x) ? FP_NAN :
+	   isinfinite (x) ? FP_INFINITE :
+	     FP_SUBNORMAL.
+
+   The code may use integer arithmentic if it decides
+   that the produced assembly would be faster. This can only be done
+   for numbers that are similar to IEEE-754 in format.
+
+   This builtin will generate code to return the appropriate floating
+   point classification depending on the value of the floating point
+   number passed in.  The possible return values must be supplied as
+   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
+   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
+   one floating point argument which is "type generic".
+
+   GSI is the gimple iterator containing the fpclassify call to lower.
+   The call will be expanded and replaced inline in the given GSI.  */
+static void
+lower_builtin_fpclassify (gimple_stmt_iterator *gsi)
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 6
+      || !gimple_validate_arg (call, 0, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 1, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 2, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 3, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 4, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 5, REAL_TYPE))
+    return;
+
+  /* Collect the arguments from the call.  */
+  tree fp_nan = gimple_call_arg (call, 0);
+  tree fp_infinite = gimple_call_arg (call, 1);
+  tree fp_normal = gimple_call_arg (call, 2);
+  tree fp_subnormal = gimple_call_arg (call, 3);
+  tree fp_zero = gimple_call_arg (call, 4);
+  tree arg = gimple_call_arg (call, 5);
+
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (TREE_TYPE (orig_dest));
+
+  emit_tree_cond (&body, dest, done_label,
+		  is_normal (&body, arg, loc), fp_normal);
+  emit_tree_cond (&body, dest, done_label,
+		  is_zero (&body, arg, loc), fp_zero);
+  emit_tree_cond (&body, dest, done_label,
+		  is_nan (&body, arg, loc), fp_nan);
+  emit_tree_cond (&body, dest, done_label,
+		  is_infinity (&body, arg, loc), fp_infinite);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to is_subnormal.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, fp_subnormal));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+    {
+      gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+    }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+
+  /* Remove the call to __builtin_fpclassify.  */
+  gsi_remove (gsi, false);
+}
+
+/* Generic wrapper for the is_nan, is_normal, is_subnormal, is_zero, etc.
+   All these functions have the same setup. The wrapper validates the parameter
+   and also creates the branches and labels required to properly invoke.
+   This has been generalize and the function to call is passed as argument FNDECL.
+
+   GSI is the gimple iterator containing the fpclassify call to lower.
+   The call will be expanded and replaced inline in the given GSI.  */
+static void
+gen_call_fp_builtin (gimple_stmt_iterator *gsi,
+		     tree (*fndecl)(gimple_seq *, tree, location_t))
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 1
+      || !gimple_validate_arg (call, 0, REAL_TYPE))
+    return;
+
+  tree arg = gimple_call_arg (call, 0);
+  gimple_seq body = NULL;
+
+  /* Create label to jump to to exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  tree type = TREE_TYPE (orig_dest);
+  if (orig_dest && TREE_CODE (orig_dest) == SSA_NAME)
+      dest = create_tmp_reg (type);
+
+  tree t_true = build_int_cst (type, true);
+  tree t_false = build_int_cst (type, false);
+
+  emit_tree_cond (&body, dest, done_label,
+		  fndecl (&body, arg, loc), t_true);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to false.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, t_false));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+  {
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+  }
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+  /* Remove the call to the builtin.  */
+  gsi_remove (gsi, false);
+}
+
+/* Lower and expand calls to __builtin_isnan in GSI.  */
+static void
+lower_builtin_isnan (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_nan);
+}
+
+/* Lower and expand calls to __builtin_isinfinite in GSI.  */
+static void
+lower_builtin_isinfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_infinity);
+}
+
+/* Lower and expand calls to __builtin_isnormal in GSI.  */
+static void
+lower_builtin_isnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_normal);
+}
+
+/* Lower and expand calls to __builtin_iszero in GSI.  */
+static void
+lower_builtin_iszero (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_zero);
+}
+
+/* Lower and expand calls to __builtin_issubnormal in GSI.  */
+static void
+lower_builtin_issubnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_subnormal);
+}
+
+/* Lower and expand calls to __builtin_isfinite in GSI.  */
+static void
+lower_builtin_isfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_finite);
+}
+
 /* Lower calls to posix_memalign to
      res = posix_memalign (ptr, align, size);
      if (res == 0)
diff --git a/gcc/real.h b/gcc/real.h
index 59af580e78f2637be84f71b98b45ec6611053222..4b1b92138e07f43a175a2cbee4d952afad5898f7 100644
--- a/gcc/real.h
+++ b/gcc/real.h
@@ -161,6 +161,19 @@ struct real_format
   bool has_signed_zero;
   bool qnan_msb_set;
   bool canonical_nan_lsbs_set;
+
+  /* This flag indicates whether the format is suitable for the optimized
+     code paths for the __builtin_fpclassify function and friends.  For
+     this, the format must be a base 2 representation with the sign bit as
+     the most-significant bit followed by (exp <= 32) exponent bits
+     followed by the mantissa bits.  It must be possible to interpret the
+     bits of the floating-point representation as an integer.  NaNs and
+     INFs (if available) must be represented by the same schema used by
+     IEEE 754.  (NaNs must be represented by an exponent with all bits 1,
+     any mantissa except all bits 0 and any sign bit.  +INF and -INF must be
+     represented by an exponent with all bits 1, a mantissa with all bits 0 and
+     a sign bit of 0 and 1 respectively.)  */
+  bool is_binary_ieee_compatible;
   const char *name;
 };
 
@@ -511,6 +524,11 @@ extern bool real_isinteger (const REAL_VALUE_TYPE *, HOST_WIDE_INT *);
    float string.  BUF must be large enough to contain the result.  */
 extern void get_max_float (const struct real_format *, char *, size_t);
 
+/* Write into BUF the smallest positive normalized number x,
+   such that b**(x-1) is normalized.  BUF must be large enough
+   to contain the result.  */
+extern void get_min_float (const struct real_format *, char *, size_t);
+
 #ifndef GENERATOR_FILE
 /* real related routines.  */
 extern wide_int real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
diff --git a/gcc/real.c b/gcc/real.c
index 66e88e2ad366f7848609d157074c80420d778bcf..20c907a6d543c73ba62aa9a8ddf6973d82de7832 100644
--- a/gcc/real.c
+++ b/gcc/real.c
@@ -3052,6 +3052,7 @@ const struct real_format ieee_single_format =
     true,
     true,
     false,
+    true,
     "ieee_single"
   };
 
@@ -3075,6 +3076,7 @@ const struct real_format mips_single_format =
     true,
     false,
     true,
+    true,
     "mips_single"
   };
 
@@ -3098,6 +3100,7 @@ const struct real_format motorola_single_format =
     true,
     true,
     true,
+    true,
     "motorola_single"
   };
 
@@ -3132,6 +3135,7 @@ const struct real_format spu_single_format =
     true,
     false,
     false,
+    false,
     "spu_single"
   };
 \f
@@ -3343,6 +3347,7 @@ const struct real_format ieee_double_format =
     true,
     true,
     false,
+    true,
     "ieee_double"
   };
 
@@ -3366,6 +3371,7 @@ const struct real_format mips_double_format =
     true,
     false,
     true,
+    true,
     "mips_double"
   };
 
@@ -3389,6 +3395,7 @@ const struct real_format motorola_double_format =
     true,
     true,
     true,
+    true,
     "motorola_double"
   };
 \f
@@ -3735,6 +3742,7 @@ const struct real_format ieee_extended_motorola_format =
     true,
     true,
     true,
+    false,
     "ieee_extended_motorola"
   };
 
@@ -3758,6 +3766,7 @@ const struct real_format ieee_extended_intel_96_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96"
   };
 
@@ -3781,6 +3790,7 @@ const struct real_format ieee_extended_intel_128_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_128"
   };
 
@@ -3806,6 +3816,7 @@ const struct real_format ieee_extended_intel_96_round_53_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96_round_53"
   };
 \f
@@ -3896,6 +3907,7 @@ const struct real_format ibm_extended_format =
     true,
     true,
     false,
+    false,
     "ibm_extended"
   };
 
@@ -3919,6 +3931,7 @@ const struct real_format mips_extended_format =
     true,
     false,
     true,
+    false,
     "mips_extended"
   };
 
@@ -4184,6 +4197,7 @@ const struct real_format ieee_quad_format =
     true,
     true,
     false,
+    true,
     "ieee_quad"
   };
 
@@ -4207,6 +4221,7 @@ const struct real_format mips_quad_format =
     true,
     false,
     true,
+    true,
     "mips_quad"
   };
 \f
@@ -4509,6 +4524,7 @@ const struct real_format vax_f_format =
     false,
     false,
     false,
+    false,
     "vax_f"
   };
 
@@ -4532,6 +4548,7 @@ const struct real_format vax_d_format =
     false,
     false,
     false,
+    false,
     "vax_d"
   };
 
@@ -4555,6 +4572,7 @@ const struct real_format vax_g_format =
     false,
     false,
     false,
+    false,
     "vax_g"
   };
 \f
@@ -4633,6 +4651,7 @@ const struct real_format decimal_single_format =
     true,
     true,
     false,
+    false,
     "decimal_single"
   };
 
@@ -4657,6 +4676,7 @@ const struct real_format decimal_double_format =
     true,
     true,
     false,
+    false,
     "decimal_double"
   };
 
@@ -4681,6 +4701,7 @@ const struct real_format decimal_quad_format =
     true,
     true,
     false,
+    false,
     "decimal_quad"
   };
 \f
@@ -4820,6 +4841,7 @@ const struct real_format ieee_half_format =
     true,
     true,
     false,
+    true,
     "ieee_half"
   };
 
@@ -4846,6 +4868,7 @@ const struct real_format arm_half_format =
     true,
     false,
     false,
+    false,
     "arm_half"
   };
 \f
@@ -4893,6 +4916,7 @@ const struct real_format real_internal_format =
     true,
     true,
     false,
+    false,
     "real_internal"
   };
 \f
@@ -5080,6 +5104,16 @@ get_max_float (const struct real_format *fmt, char *buf, size_t len)
   gcc_assert (strlen (buf) < len);
 }
 
+/* Write into BUF the minimum negative representable finite floating-point
+   number, x, such that b**(x-1) is normalized.
+   BUF must be large enough to contain the result.  */
+void
+get_min_float (const struct real_format *fmt, char *buf, size_t len)
+{
+  sprintf (buf, "0x1p%d", fmt->emin - 1);
+  gcc_assert (strlen (buf) < len);
+}
+
 /* True if mode M has a NaN representation and
    the treatment of NaN operands is important.  */
 
diff --git a/gcc/testsuite/gcc.dg/builtins-43.c b/gcc/testsuite/gcc.dg/builtins-43.c
index f7c318edf084104b9b820e18e631ed61e760569e..5d41c28aef8619f06658f45846ae15dd8b4987ed 100644
--- a/gcc/testsuite/gcc.dg/builtins-43.c
+++ b/gcc/testsuite/gcc.dg/builtins-43.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-gimple -fdump-tree-optimized" } */
+/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-lower -fdump-tree-optimized" } */
   
 extern void f(int);
 extern void link_error ();
@@ -51,7 +51,7 @@ main ()
 
 
 /* Check that all instances of __builtin_isnan were folded.  */
-/* { dg-final { scan-tree-dump-times "isnan" 0 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "isnan" 0 "lower" } } */
 
 /* Check that all instances of link_error were subject to DCE.  */
 /* { dg-final { scan-tree-dump-times "link_error" 0 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/fold-notunord.c b/gcc/testsuite/gcc.dg/fold-notunord.c
deleted file mode 100644
index ca345154ac204cb5f380855828421b7f88d49052..0000000000000000000000000000000000000000
--- a/gcc/testsuite/gcc.dg/fold-notunord.c
+++ /dev/null
@@ -1,9 +0,0 @@
-/* { dg-do compile } */
-/* { dg-options "-O -ftrapping-math -fdump-tree-optimized" } */
-
-int f (double d)
-{
-  return !__builtin_isnan (d);
-}
-
-/* { dg-final { scan-tree-dump " ord " "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/pr28796-1.c b/gcc/testsuite/gcc.dg/pr28796-1.c
index 077118a298878441e812410f3a6bf3707fb1d839..a57b4e350af1bc45344106fdeab4b32ef87f233f 100644
--- a/gcc/testsuite/gcc.dg/pr28796-1.c
+++ b/gcc/testsuite/gcc.dg/pr28796-1.c
@@ -1,5 +1,5 @@
 /* { dg-do link } */
-/* { dg-options "-ffinite-math-only" } */
+/* { dg-options "-ffinite-math-only -O2" } */
 
 extern void link_error(void);
 
diff --git a/gcc/testsuite/gcc.dg/pr28796-2.c b/gcc/testsuite/gcc.dg/pr28796-2.c
index f56a5d4a4449fbdb2df5eac7f8ec8075c908502f..797b2c88a1f6448cf2c2907f351351f568ec39f5 100644
--- a/gcc/testsuite/gcc.dg/pr28796-2.c
+++ b/gcc/testsuite/gcc.dg/pr28796-2.c
@@ -1,5 +1,5 @@
 /* { dg-do run } */
-/* { dg-options "-O2 -funsafe-math-optimizations -fno-finite-math-only -DUNSAFE" } */
+/* { dg-options "-O2 -fno-finite-math-only -DUNSAFE" } */
 /* { dg-add-options ieee } */
 /* { dg-skip-if "No Inf/NaN support" { spu-*-* } } */
 
diff --git a/gcc/testsuite/gcc.dg/tg-tests.h b/gcc/testsuite/gcc.dg/tg-tests.h
index 0cf1f6452584962cebda999b5e8c7a61180d7830..cbb707c54b8437aada0cc205b1819369ada95665 100644
--- a/gcc/testsuite/gcc.dg/tg-tests.h
+++ b/gcc/testsuite/gcc.dg/tg-tests.h
@@ -11,6 +11,7 @@ void __attribute__ ((__noinline__))
 foo_1 (float f, double d, long double ld,
        int res_unord, int res_isnan, int res_isinf,
        int res_isinf_sign, int res_isfin, int res_isnorm,
+       int res_iszero, int res_issubnorm,
        int res_signbit, int classification)
 {
   if (__builtin_isunordered (f, 0) != res_unord)
@@ -80,6 +81,20 @@ foo_1 (float f, double d, long double ld,
   if (__builtin_finitel (ld) != res_isfin)
     __builtin_abort ();
 
+  if (__builtin_iszero (f) != res_iszero)
+    __builtin_abort ();
+  if (__builtin_iszero (d) != res_iszero)
+    __builtin_abort ();
+  if (__builtin_iszero (ld) != res_iszero)
+    __builtin_abort ();
+
+  if (__builtin_issubnormal (f) != res_issubnorm)
+    __builtin_abort ();
+  if (__builtin_issubnormal (d) != res_issubnorm)
+    __builtin_abort ();
+  if (__builtin_issubnormal (ld) != res_issubnorm)
+    __builtin_abort ();
+
   /* Sign bit of zeros and nans is not preserved in unsafe math mode.  */
 #ifdef UNSAFE
   if (!res_isnan && f != 0 && d != 0 && ld != 0)
@@ -115,12 +130,13 @@ foo_1 (float f, double d, long double ld,
 void __attribute__ ((__noinline__))
 foo (float f, double d, long double ld,
      int res_unord, int res_isnan, int res_isinf,
-     int res_isfin, int res_isnorm, int classification)
+     int res_isfin, int res_isnorm, int res_iszero,
+     int res_issubnorm, int classification)
 {
-  foo_1 (f, d, ld, res_unord, res_isnan, res_isinf, res_isinf, res_isfin, res_isnorm, 0, classification);
+  foo_1 (f, d, ld, res_unord, res_isnan, res_isinf, res_isinf, res_isfin, res_isnorm, res_iszero, res_issubnorm, 0, classification);
   /* Try all the values negated as well.  All will have the sign bit set,
      except for the nan.  */
-  foo_1 (-f, -d, -ld, res_unord, res_isnan, res_isinf, -res_isinf, res_isfin, res_isnorm, 1, classification);
+  foo_1 (-f, -d, -ld, res_unord, res_isnan, res_isinf, -res_isinf, res_isfin, res_isnorm, res_iszero, res_issubnorm, 1, classification);
 }
 
 int __attribute__ ((__noinline__))
@@ -132,35 +148,35 @@ main_tests (void)
   
   /* Test NaN.  */
   f = __builtin_nanf(""); d = __builtin_nan(""); ld = __builtin_nanl("");
-  foo(f, d, ld, /*unord=*/ 1, /*isnan=*/ 1, /*isinf=*/ 0, /*isfin=*/ 0, /*isnorm=*/ 0, FP_NAN);
+  foo(f, d, ld, /*unord=*/ 1, /*isnan=*/ 1, /*isinf=*/ 0, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_NAN);
 
   /* Test infinity.  */
   f = __builtin_inff(); d = __builtin_inf(); ld = __builtin_infl();
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, FP_INFINITE);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_INFINITE);
 
   /* Test zero.  */
   f = 0; d = 0; ld = 0;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, FP_ZERO);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, /*iszero=*/1, /*issubnorm=*/0, FP_ZERO);
 
   /* Test one.  */
   f = 1; d = 1; ld = 1;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test minimum values.  */
   f = __FLT_MIN__; d = __DBL_MIN__; ld = __LDBL_MIN__;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test subnormal values.  */
   f = __FLT_MIN__/2; d = __DBL_MIN__/2; ld = __LDBL_MIN__/2;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, FP_SUBNORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/1, FP_SUBNORMAL);
 
   /* Test maximum values.  */
   f = __FLT_MAX__; d = __DBL_MAX__; ld = __LDBL_MAX__;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, FP_NORMAL);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 0, /*isfin=*/ 1, /*isnorm=*/ 1, /*iszero=*/0, /*issubnorm=*/0, FP_NORMAL);
 
   /* Test overflow values.  */
   f = __FLT_MAX__*2; d = __DBL_MAX__*2; ld = __LDBL_MAX__*2;
-  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, FP_INFINITE);
+  foo(f, d, ld, /*unord=*/ 0, /*isnan=*/ 0, /*isinf=*/ 1, /*isfin=*/ 0, /*isnorm=*/ 0, /*iszero=*/0, /*issubnorm=*/0, FP_INFINITE);
 
   return 0;
 }
diff --git a/gcc/testsuite/gcc.dg/torture/float128-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..ec9d3ad41e24280978707888590eec1b562207f0
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128_runtime } */
+
+#define WIDTH 128
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0ede861716750453a86c9abc703ad0b2826674c6
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float128x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128x_runtime } */
+
+#define WIDTH 128
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float16-tg-4.c b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..007c4c224ea95537c31185d0aff964d1975f2190
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float16 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float16 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float16_runtime } */
+
+#define WIDTH 16
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..c7f8353da2cffdfc2c2f58f5da3d5363b95e6f91
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32 type-generic built-in functions: __builtin_f__builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32_runtime } */
+
+#define WIDTH 32
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0d7a592920aca112d5f6409e565d4582c253c977
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float32x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32x_runtime } */
+
+#define WIDTH 32
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..bb25a22a68e60ce2717ab3583bbec595dd563c35
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64_runtime } */
+
+#define WIDTH 64
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..82305d916b8bd75131e2c647fd37f74cadbc8f1d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
@@ -0,0 +1,11 @@
+/* Test _Float64x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64x_runtime } */
+
+#define WIDTH 64
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
new file mode 100644
index 0000000000000000000000000000000000000000..aa3448c090cf797a1525b1045ffebeed79cace40
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
@@ -0,0 +1,99 @@
+/* Tests for _FloatN / _FloatNx types: compile and execution tests for
+   type-generic built-in functions: __builtin_iszero, __builtin_issubnormal.
+   Before including this file, define WIDTH as the value N; define EXT to 1
+   for _FloatNx and 0 for _FloatN.  */
+
+#define __STDC_WANT_IEC_60559_TYPES_EXT__
+#include <float.h>
+
+#define CONCATX(X, Y) X ## Y
+#define CONCAT(X, Y) CONCATX (X, Y)
+#define CONCAT3(X, Y, Z) CONCAT (CONCAT (X, Y), Z)
+#define CONCAT4(W, X, Y, Z) CONCAT (CONCAT (CONCAT (W, X), Y), Z)
+
+#if EXT
+# define TYPE CONCAT3 (_Float, WIDTH, x)
+# define CST(C) CONCAT4 (C, f, WIDTH, x)
+# define MAX CONCAT3 (FLT, WIDTH, X_MAX)
+# define MIN CONCAT3 (FLT, WIDTH, X_MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, X_TRUE_MIN)
+#else
+# define TYPE CONCAT (_Float, WIDTH)
+# define CST(C) CONCAT3 (C, f, WIDTH)
+# define MAX CONCAT3 (FLT, WIDTH, _MAX)
+# define MIN CONCAT3 (FLT, WIDTH, _MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, _TRUE_MIN)
+#endif
+
+extern void exit (int);
+extern void abort (void);
+
+volatile TYPE inf = __builtin_inf (), nanval = __builtin_nan ("");
+volatile TYPE neginf = -__builtin_inf (), negnanval = -__builtin_nan ("");
+volatile TYPE zero = CST (0.0), negzero = -CST (0.0), one = CST (1.0);
+volatile TYPE max = MAX, negmax = -MAX, min = MIN, negmin = -MIN;
+volatile TYPE true_min = TRUE_MIN, negtrue_min = -TRUE_MIN;
+volatile TYPE sub_norm = MIN / 2.0;
+
+int
+main (void)
+{
+  if (__builtin_iszero (inf) == 1)
+    abort ();
+  if (__builtin_iszero (nanval) == 1)
+    abort ();
+  if (__builtin_iszero (neginf) == 1)
+    abort ();
+  if (__builtin_iszero (negnanval) == 1)
+    abort ();
+  if (__builtin_iszero (zero) != 1)
+    abort ();
+  if (__builtin_iszero (negzero) != 1)
+    abort ();
+  if (__builtin_iszero (one) == 1)
+    abort ();
+  if (__builtin_iszero (max) == 1)
+    abort ();
+  if (__builtin_iszero (negmax) == 1)
+    abort ();
+  if (__builtin_iszero (min) == 1)
+    abort ();
+  if (__builtin_iszero (negmin) == 1)
+    abort ();
+  if (__builtin_iszero (true_min) == 1)
+    abort ();
+  if (__builtin_iszero (negtrue_min) == 1)
+    abort ();
+  if (__builtin_iszero (sub_norm) == 1)
+    abort ();
+
+  if (__builtin_issubnormal (inf) == 1)
+    abort ();
+  if (__builtin_issubnormal (nanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (neginf) == 1)
+    abort ();
+  if (__builtin_issubnormal (negnanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (zero) == 1)
+    abort ();
+  if (__builtin_issubnormal (negzero) == 1)
+    abort ();
+  if (__builtin_issubnormal (one) == 1)
+    abort ();
+  if (__builtin_issubnormal (max) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmax) == 1)
+    abort ();
+  if (__builtin_issubnormal (min) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmin) == 1)
+    abort ();
+  if (__builtin_issubnormal (true_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (negtrue_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (sub_norm) != 1)
+    abort ();
+  exit (0);
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
new file mode 100644
index 0000000000000000000000000000000000000000..3a1bf956bfcedcc15dc1cbacfe2f0b663b31c3cc
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
@@ -0,0 +1,22 @@
+/* This file checks the code generation for the new __builtin_fpclassify.
+   because checking the exact assembly isn't very useful, we'll just be checking
+   for the presence of certain instructions and the omition of others. */
+/* { dg-options "-O2" } */
+/* { dg-do compile } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fabs\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmp\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmpe\[ \t\]?" } } */
+/* { dg-final { scan-assembler "\[ \t\]?ubfx\[ \t\]?" } } */
+
+#include <stdio.h>
+#include <math.h>
+
+/*
+ fp_nan = args[0];
+ fp_infinite = args[1];
+ fp_normal = args[2];
+ fp_subnormal = args[3];
+ fp_zero = args[4];
+*/
+
+int f(double x) { return __builtin_fpclassify(0, 1, 4, 3, 2, x); }

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-23 15:55                                   ` Tamar Christina
@ 2017-01-23 16:26                                     ` Joseph Myers
       [not found]                                     ` <5C66F854-5903-4F34-8DE6-5EE2E1FC5C77@gmail.com>
  1 sibling, 0 replies; 47+ messages in thread
From: Joseph Myers @ 2017-01-23 16:26 UTC (permalink / raw)
  To: Tamar Christina
  Cc: Jeff Law, GCC Patches, Wilco Dijkstra, rguenther, Michael Meissner, nd

On Mon, 23 Jan 2017, Tamar Christina wrote:

> In the testcase I've also had to remove `-funsafe-math-optimizations` 
> because at least on AArch64 this sets the LZ bit in the FPCR which 
> causes the fcmp to collapse small subnormal numbers to 0 and so iszero 
> would fail.

That's not an appropriate change to the test; it's clear that test is 
deliberately about testing what happens with -funsafe-math-optimizations.  
Rather, any particular new pieces that won't work with 
-funsafe-math-optimizations should have #ifdef UNSAFE conditionals in 
tg-tests.h, like the existing conditionals there.

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
       [not found]                                             ` <VI1PR0801MB20311246A73FD82A08541F26FF170@VI1PR0801MB2031.eurprd08.prod.outlook.com>
@ 2017-05-15 10:39                                               ` Tamar Christina
  0 siblings, 0 replies; 47+ messages in thread
From: Tamar Christina @ 2017-05-15 10:39 UTC (permalink / raw)
  To: Bernhard Reutner-Fischer, Joseph Myers, GCC Patches
  Cc: Jeff Law, Wilco Dijkstra, rguenther, Michael Meissner, nd

Resending the message to list, Sorry I hadn't noticed somewhere along the line it was dropped..

Hi All,

I believe Joseph had no more comments for the patch.

Any other comments or OK for trunk?

Regards,
Tamar
________________________________________
From: Tamar Christina
Sent: Tuesday, May 2, 2017 10:09 AM
To: Bernhard Reutner-Fischer; Joseph Myers
Cc: Jeff Law; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

Ping.

Hi All,

Since this didn't manage to get in for GCC 7, I'm looking for approval to commit to trunk.

Cheers,
Tamar
________________________________________
From: Tamar Christina
Sent: Thursday, February 9, 2017 1:56:30 PM
To: Bernhard Reutner-Fischer; Joseph Myers
Cc: Jeff Law; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

Ping. If not for now can I get approval for when stage1 opens again?

Cheers,
Tamar
________________________________________
From: Tamar Christina
Sent: Monday, January 30, 2017 10:01:33 AM
To: Bernhard Reutner-Fischer; Joseph Myers
Cc: Jeff Law; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

Ping?

You initially approved the patch Jeff but there has been some minor changes since then.

Regards,
Tamar

________________________________________
From: Tamar Christina
Sent: Wednesday, January 25, 2017 9:38:57 AM
To: Bernhard Reutner-Fischer; Joseph Myers
Cc: Jeff Law; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

Thanks, updated.

Aside from the changelog, Ok for trunk?

Thanks,
Tamar

________________________________________
From: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
Sent: Tuesday, January 24, 2017 3:43:54 PM
To: Tamar Christina; Joseph Myers
Cc: Jeff Law; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

On 23 January 2017 16:32:20 CET, Tamar Christina <Tamar.Christina@arm.com> wrote:

>gcc/
>2017-01-23  Tamar Christina  <tamar.christina@arm.com>
>
>       PR middle-end/77925
>       PR middle-end/77926
>       PR middle-end/66462
>
>       * gcc/builtins.c (fold_builtin_fpclassify): Removed.
>       (fold_builtin_interclass_mathfn): Removed.
>       (expand_builtin): Added builtins to lowering list.
>       (fold_builtin_n): Removed fold_builtin_varargs.

Present tense in ChangeLog please.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-01-19 18:28                                 ` Joseph Myers
  2017-01-23 15:55                                   ` Tamar Christina
@ 2017-06-08 10:31                                   ` Markus Trippelsdorf
  2017-06-08 12:00                                     ` Christophe Lyon
  1 sibling, 1 reply; 47+ messages in thread
From: Markus Trippelsdorf @ 2017-06-08 10:31 UTC (permalink / raw)
  To: Joseph Myers
  Cc: Tamar Christina, Jeff Law, GCC Patches, Wilco Dijkstra,
	rguenther, Michael Meissner, nd

On 2017.01.19 at 18:20 +0000, Joseph Myers wrote:
> On Thu, 19 Jan 2017, Tamar Christina wrote:
> 
> > Hi Joseph,
> > 
> > I made the requested changes and did a quick pass over the rest
> > of the fp cases.
> 
> I've no further comments, but watch out for any related test failures 
> being reported.

g++.dg/opt/pr60849.C started ICEing on both X86_64 and ppc64le.

-- 
Markus

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-06-08 10:31                                   ` Markus Trippelsdorf
@ 2017-06-08 12:00                                     ` Christophe Lyon
  2017-06-08 12:21                                       ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Christophe Lyon @ 2017-06-08 12:00 UTC (permalink / raw)
  To: Markus Trippelsdorf
  Cc: Joseph Myers, Tamar Christina, Jeff Law, GCC Patches,
	Wilco Dijkstra, rguenther, Michael Meissner, nd

On 8 June 2017 at 12:30, Markus Trippelsdorf <markus@trippelsdorf.de> wrote:
> On 2017.01.19 at 18:20 +0000, Joseph Myers wrote:
>> On Thu, 19 Jan 2017, Tamar Christina wrote:
>>
>> > Hi Joseph,
>> >
>> > I made the requested changes and did a quick pass over the rest
>> > of the fp cases.
>>
>> I've no further comments, but watch out for any related test failures
>> being reported.
>
> g++.dg/opt/pr60849.C started ICEing on both X86_64 and ppc64le.
>

Same on arm/aarch64, but there are also other regressions on big-endian configs:
See http://people.linaro.org/~christophe.lyon/cross-validation/gcc/trunk/249005/report-build-info.html


> --
> Markus

^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-06-08 12:00                                     ` Christophe Lyon
@ 2017-06-08 12:21                                       ` Tamar Christina
  2017-06-08 15:40                                         ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2017-06-08 12:21 UTC (permalink / raw)
  To: Christophe Lyon, Markus Trippelsdorf
  Cc: Joseph Myers, Jeff Law, GCC Patches, Wilco Dijkstra, rguenther,
	Michael Meissner, nd

Thanks, I'm looking at the failure.
My final validate seems to have only run the GCC tests.

> -----Original Message-----
> From: Christophe Lyon [mailto:christophe.lyon@linaro.org]
> Sent: 08 June 2017 13:00
> To: Markus Trippelsdorf
> Cc: Joseph Myers; Tamar Christina; Jeff Law; GCC Patches; Wilco Dijkstra;
> rguenther@suse.de; Michael Meissner; nd
> Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
> numbers in GIMPLE.
> 
> On 8 June 2017 at 12:30, Markus Trippelsdorf <markus@trippelsdorf.de>
> wrote:
> > On 2017.01.19 at 18:20 +0000, Joseph Myers wrote:
> >> On Thu, 19 Jan 2017, Tamar Christina wrote:
> >>
> >> > Hi Joseph,
> >> >
> >> > I made the requested changes and did a quick pass over the rest of
> >> > the fp cases.
> >>
> >> I've no further comments, but watch out for any related test failures
> >> being reported.
> >
> > g++.dg/opt/pr60849.C started ICEing on both X86_64 and ppc64le.
> >
> 
> Same on arm/aarch64, but there are also other regressions on big-endian
> configs:
> See http://people.linaro.org/~christophe.lyon/cross-
> validation/gcc/trunk/249005/report-build-info.html
> 
> 
> > --
> > Markus

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-06-08 12:21                                       ` Tamar Christina
@ 2017-06-08 15:40                                         ` Tamar Christina
  2017-06-08 15:43                                           ` Richard Biener
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2017-06-08 15:40 UTC (permalink / raw)
  To: Christophe Lyon, Markus Trippelsdorf
  Cc: Joseph Myers, Jeff Law, GCC Patches, Wilco Dijkstra, rguenther,
	Michael Meissner, nd

The testcase does something unexpected

extern "C" int isnan ();

void foo(float a) {
  int (*xx)(...);
  xx = isnan;
  if (xx(a))
    g++;
}

and I'm wondering if this is a valid thing to do with a builtin. The issue is that at the point where gimple lowering is done xx hasn't been resolved to isnan yet.
So it never recognizes the alias. Previously these builtins were being resolved in expand, which happens late enough that it has replaced xx with isnan.

I can obviously fix the ICE by having the expand code leave the call as a call instead of a builtin. But if this is a valid thing for a builtin i'm not sure how to
best resolve this case.
________________________________________
From: Tamar Christina
Sent: Thursday, June 8, 2017 1:21:44 PM
To: Christophe Lyon; Markus Trippelsdorf
Cc: Joseph Myers; Jeff Law; GCC Patches; Wilco Dijkstra; rguenther@suse.de; Michael Meissner; nd
Subject: RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

Thanks, I'm looking at the failure.
My final validate seems to have only run the GCC tests.

> -----Original Message-----
> From: Christophe Lyon [mailto:christophe.lyon@linaro.org]
> Sent: 08 June 2017 13:00
> To: Markus Trippelsdorf
> Cc: Joseph Myers; Tamar Christina; Jeff Law; GCC Patches; Wilco Dijkstra;
> rguenther@suse.de; Michael Meissner; nd
> Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
> numbers in GIMPLE.
>
> On 8 June 2017 at 12:30, Markus Trippelsdorf <markus@trippelsdorf.de>
> wrote:
> > On 2017.01.19 at 18:20 +0000, Joseph Myers wrote:
> >> On Thu, 19 Jan 2017, Tamar Christina wrote:
> >>
> >> > Hi Joseph,
> >> >
> >> > I made the requested changes and did a quick pass over the rest of
> >> > the fp cases.
> >>
> >> I've no further comments, but watch out for any related test failures
> >> being reported.
> >
> > g++.dg/opt/pr60849.C started ICEing on both X86_64 and ppc64le.
> >
>
> Same on arm/aarch64, but there are also other regressions on big-endian
> configs:
> See http://people.linaro.org/~christophe.lyon/cross-
> validation/gcc/trunk/249005/report-build-info.html
>
>
> > --
> > Markus

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-06-08 15:40                                         ` Tamar Christina
@ 2017-06-08 15:43                                           ` Richard Biener
  2017-06-08 16:44                                             ` Joseph Myers
  2017-06-09  8:14                                             ` Tamar Christina
  0 siblings, 2 replies; 47+ messages in thread
From: Richard Biener @ 2017-06-08 15:43 UTC (permalink / raw)
  To: Tamar Christina, Christophe Lyon, Markus Trippelsdorf
  Cc: Joseph Myers, Jeff Law, GCC Patches, Wilco Dijkstra,
	Michael Meissner, nd

On June 8, 2017 5:40:06 PM GMT+02:00, Tamar Christina <Tamar.Christina@arm.com> wrote:
>The testcase does something unexpected
>
>extern "C" int isnan ();
>
>void foo(float a) {
>  int (*xx)(...);
>  xx = isnan;
>  if (xx(a))
>    g++;
>}
>
>and I'm wondering if this is a valid thing to do with a builtin. The
>issue is that at the point where gimple lowering is done xx hasn't been
>resolved to isnan yet.
>So it never recognizes the alias. Previously these builtins were being
>resolved in expand, which happens late enough that it has replaced xx
>with isnan.
>
>I can obviously fix the ICE by having the expand code leave the call as
>a call instead of a builtin. But if this is a valid thing for a builtin
>i'm not sure how to
>best resolve this case.

For a built-in this is generally valid.  For plain isnan it depends on what the standard says.

You have to support taking the address of isnan anyway and thus expanding to a library call in that case.  Why doesn't that not work?

Richard.


>________________________________________
>From: Tamar Christina
>Sent: Thursday, June 8, 2017 1:21:44 PM
>To: Christophe Lyon; Markus Trippelsdorf
>Cc: Joseph Myers; Jeff Law; GCC Patches; Wilco Dijkstra;
>rguenther@suse.de; Michael Meissner; nd
>Subject: RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
>numbers in GIMPLE.
>
>Thanks, I'm looking at the failure.
>My final validate seems to have only run the GCC tests.
>
>> -----Original Message-----
>> From: Christophe Lyon [mailto:christophe.lyon@linaro.org]
>> Sent: 08 June 2017 13:00
>> To: Markus Trippelsdorf
>> Cc: Joseph Myers; Tamar Christina; Jeff Law; GCC Patches; Wilco
>Dijkstra;
>> rguenther@suse.de; Michael Meissner; nd
>> Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
>> numbers in GIMPLE.
>>
>> On 8 June 2017 at 12:30, Markus Trippelsdorf <markus@trippelsdorf.de>
>> wrote:
>> > On 2017.01.19 at 18:20 +0000, Joseph Myers wrote:
>> >> On Thu, 19 Jan 2017, Tamar Christina wrote:
>> >>
>> >> > Hi Joseph,
>> >> >
>> >> > I made the requested changes and did a quick pass over the rest
>of
>> >> > the fp cases.
>> >>
>> >> I've no further comments, but watch out for any related test
>failures
>> >> being reported.
>> >
>> > g++.dg/opt/pr60849.C started ICEing on both X86_64 and ppc64le.
>> >
>>
>> Same on arm/aarch64, but there are also other regressions on
>big-endian
>> configs:
>> See http://people.linaro.org/~christophe.lyon/cross-
>> validation/gcc/trunk/249005/report-build-info.html
>>
>>
>> > --
>> > Markus

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-06-08 15:43                                           ` Richard Biener
@ 2017-06-08 16:44                                             ` Joseph Myers
  2017-06-08 18:23                                               ` Richard Biener
  2017-06-09  8:14                                             ` Tamar Christina
  1 sibling, 1 reply; 47+ messages in thread
From: Joseph Myers @ 2017-06-08 16:44 UTC (permalink / raw)
  To: Richard Biener
  Cc: Tamar Christina, Christophe Lyon, Markus Trippelsdorf, Jeff Law,
	GCC Patches, Wilco Dijkstra, Michael Meissner, nd

On Thu, 8 Jun 2017, Richard Biener wrote:

> For a built-in this is generally valid.  For plain isnan it depends on 
> what the standard says.
> 
> You have to support taking the address of isnan anyway and thus 
> expanding to a library call in that case.  Why doesn't that not work?

In the case of isnan there is the Unix98 non-type-generic function, so it 
should definitely work to take the address of that function as declared in 
the system headers.

For the DEF_GCC_BUILTIN type-generic functions there may not be any 
corresponding library function at all (as well as only being callable with 
the __builtin_* name).

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-06-08 16:44                                             ` Joseph Myers
@ 2017-06-08 18:23                                               ` Richard Biener
  2017-08-17 11:46                                                 ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Richard Biener @ 2017-06-08 18:23 UTC (permalink / raw)
  To: Joseph Myers
  Cc: Tamar Christina, Christophe Lyon, Markus Trippelsdorf, Jeff Law,
	GCC Patches, Wilco Dijkstra, Michael Meissner, nd

On June 8, 2017 6:44:01 PM GMT+02:00, Joseph Myers <joseph@codesourcery.com> wrote:
>On Thu, 8 Jun 2017, Richard Biener wrote:
>
>> For a built-in this is generally valid.  For plain isnan it depends
>on 
>> what the standard says.
>> 
>> You have to support taking the address of isnan anyway and thus 
>> expanding to a library call in that case.  Why doesn't that not work?
>
>In the case of isnan there is the Unix98 non-type-generic function, so
>it 
>should definitely work to take the address of that function as declared
>in 
>the system headers.
>
>For the DEF_GCC_BUILTIN type-generic functions there may not be any 
>corresponding library function at all (as well as only being callable
>with 
>the __builtin_* name).

I haven't followed the patches in detail but I would suggest to move the lowering somewhere to gimple-fold.c so that late discovered direct calls are also lowered.

Richard.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-06-08 15:43                                           ` Richard Biener
  2017-06-08 16:44                                             ` Joseph Myers
@ 2017-06-09  8:14                                             ` Tamar Christina
  1 sibling, 0 replies; 47+ messages in thread
From: Tamar Christina @ 2017-06-09  8:14 UTC (permalink / raw)
  To: Richard Biener, Christophe Lyon, Markus Trippelsdorf
  Cc: Joseph Myers, Jeff Law, GCC Patches, Wilco Dijkstra,
	Michael Meissner, nd

> For a built-in this is generally valid.  For plain isnan it depends on what the
> standard says.
> 
> You have to support taking the address of isnan anyway and thus expanding
> to a library call in that case.  Why doesn't that not work?

Only because I had put a failsafe in builtins.c with a gcc_unreachable () as I never expect
The built-ins not to be expanded. The ICEs are coming from this call.

> 
> Richard.
> 
> 
> >________________________________________
> >From: Tamar Christina
> >Sent: Thursday, June 8, 2017 1:21:44 PM
> >To: Christophe Lyon; Markus Trippelsdorf
> >Cc: Joseph Myers; Jeff Law; GCC Patches; Wilco Dijkstra;
> >rguenther@suse.de; Michael Meissner; nd
> >Subject: RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
> >numbers in GIMPLE.
> >
> >Thanks, I'm looking at the failure.
> >My final validate seems to have only run the GCC tests.
> >
> >> -----Original Message-----
> >> From: Christophe Lyon [mailto:christophe.lyon@linaro.org]
> >> Sent: 08 June 2017 13:00
> >> To: Markus Trippelsdorf
> >> Cc: Joseph Myers; Tamar Christina; Jeff Law; GCC Patches; Wilco
> >Dijkstra;
> >> rguenther@suse.de; Michael Meissner; nd
> >> Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
> >> numbers in GIMPLE.
> >>
> >> On 8 June 2017 at 12:30, Markus Trippelsdorf <markus@trippelsdorf.de>
> >> wrote:
> >> > On 2017.01.19 at 18:20 +0000, Joseph Myers wrote:
> >> >> On Thu, 19 Jan 2017, Tamar Christina wrote:
> >> >>
> >> >> > Hi Joseph,
> >> >> >
> >> >> > I made the requested changes and did a quick pass over the rest
> >of
> >> >> > the fp cases.
> >> >>
> >> >> I've no further comments, but watch out for any related test
> >failures
> >> >> being reported.
> >> >
> >> > g++.dg/opt/pr60849.C started ICEing on both X86_64 and ppc64le.
> >> >
> >>
> >> Same on arm/aarch64, but there are also other regressions on
> >big-endian
> >> configs:
> >> See http://people.linaro.org/~christophe.lyon/cross-
> >> validation/gcc/trunk/249005/report-build-info.html
> >>
> >>
> >> > --
> >> > Markus


^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-06-08 18:23                                               ` Richard Biener
@ 2017-08-17 11:46                                                 ` Tamar Christina
  2017-08-23 13:19                                                   ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2017-08-17 11:46 UTC (permalink / raw)
  To: Richard Biener, Joseph Myers
  Cc: Christophe Lyon, Markus Trippelsdorf, Jeff Law, GCC Patches,
	Wilco Dijkstra, Michael Meissner, nd



> -----Original Message-----
> From: Richard Biener [mailto:rguenther@suse.de]
> Sent: 08 June 2017 19:23
> To: Joseph Myers
> Cc: Tamar Christina; Christophe Lyon; Markus Trippelsdorf; Jeff Law; GCC
> Patches; Wilco Dijkstra; Michael Meissner; nd
> Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
> numbers in GIMPLE.
> 
> On June 8, 2017 6:44:01 PM GMT+02:00, Joseph Myers
> <joseph@codesourcery.com> wrote:
> >On Thu, 8 Jun 2017, Richard Biener wrote:
> >
> >> For a built-in this is generally valid.  For plain isnan it depends
> >on
> >> what the standard says.
> >>
> >> You have to support taking the address of isnan anyway and thus
> >> expanding to a library call in that case.  Why doesn't that not work?
> >
> >In the case of isnan there is the Unix98 non-type-generic function, so
> >it should definitely work to take the address of that function as
> >declared in the system headers.
> >
> >For the DEF_GCC_BUILTIN type-generic functions there may not be any
> >corresponding library function at all (as well as only being callable
> >with the __builtin_* name).
> 
> I haven't followed the patches in detail but I would suggest to move the
> lowering somewhere to gimple-fold.c so that late discovered direct calls are
> also lowered.

I can't do it in fold as in the case where the input isn't a constant it can't be folder away
and I would have to introduce new code. Because of how often folds are called it seems to be invalid
at certain points to introduce new control flow.

So while I have fixed the ICEs for PowerPC, AIX and the rest I'm still not sure what to do about the
late expansion behaviour change.

Originally the patch was in expand, which would have covered the late expansion case similar to what it's
doing now in trunk. I was however asked to move this to a GIMPLE lowering pass as to give other optimizations
a chance to do further optimizations on the generated code.

This of course works fine for C since these math functions are a Macro in C but are functions in C++.

C++ would then allow you to do stuff like take the address of the function so

void foo(float a) {
  int (*xx)(...);
  xx = isnan;
  if (xx(a))
    g++;
}

is perfectly legal. My current patch leaves "isnan" in as a call as by the time we're doing GIMPLE lowering
the alias has not been resolved yet, whereas the version currently committed is able to expand it as it's late
in expand.

Curiously the behaviour is somewhat confusing.
if xx is called with a non-constant it is expanded as you would expect

.LFB0:
	.cfi_startproc
	fcvt	d0, s0
	fcmp	d0, d0
	bvs	.L4
	
but when xx is called with a constant, say 0 it's not

.LFB0:
	.cfi_startproc
	mov	w0, 0
	bl	isnan
	cbz	w0, .L1

because it infers the type of the 0 to be an integer and then doesn't recognize the call.
using 0.0 works, but the behaviour is really counter intuitive.

The question is what should I do now, clearly it would be useful to handle the late expansion as well,
however I don't know what the best approach would be.

I can either:

1) Add two new implementations, one for constant folding and one for expansion, but then one in expand would
   be quite similar to the one in the early lowering phase. The constant folding one could be very simple since
   it's a constant I can just call the buildins and evaluate the value completely.
   
2) Use the patch as is but make another one to allow the renaming to be applied quite early on. e.g while still in Tree or GIMPLE resolve

	int (*xx)(...);
	xx = isnan;
	if (xx(a))

	to

	int (*xx)(...);
	xx = isnan;
	if (isnan(a))
  
   This seems like it would be the best approach and the more useful one in general.
   
Any thoughts?

Thanks,
Tamar

> 
> Richard.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-08-17 11:46                                                 ` Tamar Christina
@ 2017-08-23 13:19                                                   ` Tamar Christina
  2017-08-23 13:26                                                     ` Richard Biener
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2017-08-23 13:19 UTC (permalink / raw)
  To: Richard Biener, Joseph Myers
  Cc: Christophe Lyon, Markus Trippelsdorf, Jeff Law, GCC Patches,
	Wilco Dijkstra, Michael Meissner, nd

Ping
________________________________________
From: Tamar Christina
Sent: Thursday, August 17, 2017 11:49:51 AM
To: Richard Biener; Joseph Myers
Cc: Christophe Lyon; Markus Trippelsdorf; Jeff Law; GCC Patches; Wilco Dijkstra; Michael Meissner; nd
Subject: RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

> -----Original Message-----
> From: Richard Biener [mailto:rguenther@suse.de]
> Sent: 08 June 2017 19:23
> To: Joseph Myers
> Cc: Tamar Christina; Christophe Lyon; Markus Trippelsdorf; Jeff Law; GCC
> Patches; Wilco Dijkstra; Michael Meissner; nd
> Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
> numbers in GIMPLE.
>
> On June 8, 2017 6:44:01 PM GMT+02:00, Joseph Myers
> <joseph@codesourcery.com> wrote:
> >On Thu, 8 Jun 2017, Richard Biener wrote:
> >
> >> For a built-in this is generally valid.  For plain isnan it depends
> >on
> >> what the standard says.
> >>
> >> You have to support taking the address of isnan anyway and thus
> >> expanding to a library call in that case.  Why doesn't that not work?
> >
> >In the case of isnan there is the Unix98 non-type-generic function, so
> >it should definitely work to take the address of that function as
> >declared in the system headers.
> >
> >For the DEF_GCC_BUILTIN type-generic functions there may not be any
> >corresponding library function at all (as well as only being callable
> >with the __builtin_* name).
>
> I haven't followed the patches in detail but I would suggest to move the
> lowering somewhere to gimple-fold.c so that late discovered direct calls are
> also lowered.

I can't do it in fold as in the case where the input isn't a constant it can't be folder away
and I would have to introduce new code. Because of how often folds are called it seems to be invalid
at certain points to introduce new control flow.

So while I have fixed the ICEs for PowerPC, AIX and the rest I'm still not sure what to do about the
late expansion behaviour change.

Originally the patch was in expand, which would have covered the late expansion case similar to what it's
doing now in trunk. I was however asked to move this to a GIMPLE lowering pass as to give other optimizations
a chance to do further optimizations on the generated code.

This of course works fine for C since these math functions are a Macro in C but are functions in C++.

C++ would then allow you to do stuff like take the address of the function so

void foo(float a) {
  int (*xx)(...);
  xx = isnan;
  if (xx(a))
    g++;
}

is perfectly legal. My current patch leaves "isnan" in as a call as by the time we're doing GIMPLE lowering
the alias has not been resolved yet, whereas the version currently committed is able to expand it as it's late
in expand.

Curiously the behaviour is somewhat confusing.
if xx is called with a non-constant it is expanded as you would expect

.LFB0:
        .cfi_startproc
        fcvt    d0, s0
        fcmp    d0, d0
        bvs     .L4

but when xx is called with a constant, say 0 it's not

.LFB0:
        .cfi_startproc
        mov     w0, 0
        bl      isnan
        cbz     w0, .L1

because it infers the type of the 0 to be an integer and then doesn't recognize the call.
using 0.0 works, but the behaviour is really counter intuitive.

The question is what should I do now, clearly it would be useful to handle the late expansion as well,
however I don't know what the best approach would be.

I can either:

1) Add two new implementations, one for constant folding and one for expansion, but then one in expand would
   be quite similar to the one in the early lowering phase. The constant folding one could be very simple since
   it's a constant I can just call the buildins and evaluate the value completely.

2) Use the patch as is but make another one to allow the renaming to be applied quite early on. e.g while still in Tree or GIMPLE resolve

        int (*xx)(...);
        xx = isnan;
        if (xx(a))

        to

        int (*xx)(...);
        xx = isnan;
        if (isnan(a))

   This seems like it would be the best approach and the more useful one in general.

Any thoughts?

Thanks,
Tamar

>
> Richard.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-08-23 13:19                                                   ` Tamar Christina
@ 2017-08-23 13:26                                                     ` Richard Biener
  2017-08-23 15:51                                                       ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Richard Biener @ 2017-08-23 13:26 UTC (permalink / raw)
  To: Tamar Christina
  Cc: Joseph Myers, Christophe Lyon, Markus Trippelsdorf, Jeff Law,
	GCC Patches, Wilco Dijkstra, Michael Meissner, nd

On Wed, 23 Aug 2017, Tamar Christina wrote:

> Ping
> ________________________________________
> From: Tamar Christina
> Sent: Thursday, August 17, 2017 11:49:51 AM
> To: Richard Biener; Joseph Myers
> Cc: Christophe Lyon; Markus Trippelsdorf; Jeff Law; GCC Patches; Wilco Dijkstra; Michael Meissner; nd
> Subject: RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
> 
> > -----Original Message-----
> > From: Richard Biener [mailto:rguenther@suse.de]
> > Sent: 08 June 2017 19:23
> > To: Joseph Myers
> > Cc: Tamar Christina; Christophe Lyon; Markus Trippelsdorf; Jeff Law; GCC
> > Patches; Wilco Dijkstra; Michael Meissner; nd
> > Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
> > numbers in GIMPLE.
> >
> > On June 8, 2017 6:44:01 PM GMT+02:00, Joseph Myers
> > <joseph@codesourcery.com> wrote:
> > >On Thu, 8 Jun 2017, Richard Biener wrote:
> > >
> > >> For a built-in this is generally valid.  For plain isnan it depends
> > >on
> > >> what the standard says.
> > >>
> > >> You have to support taking the address of isnan anyway and thus
> > >> expanding to a library call in that case.  Why doesn't that not work?
> > >
> > >In the case of isnan there is the Unix98 non-type-generic function, so
> > >it should definitely work to take the address of that function as
> > >declared in the system headers.
> > >
> > >For the DEF_GCC_BUILTIN type-generic functions there may not be any
> > >corresponding library function at all (as well as only being callable
> > >with the __builtin_* name).
> >
> > I haven't followed the patches in detail but I would suggest to move the
> > lowering somewhere to gimple-fold.c so that late discovered direct calls are
> > also lowered.
> 
> I can't do it in fold as in the case where the input isn't a constant it 
> can't be folder away and I would have to introduce new code. Because of 
> how often folds are called it seems to be invalid at certain points to 
> introduce new control flow.

Yes, new control flow shouldn't be emitted from foldings.

> So while I have fixed the ICEs for PowerPC, AIX and the rest I'm still 
> not sure what to do about the late expansion behaviour change.

I think you should split up your work a bit.  The pieces from
fold_builtin_* you remove and move to lowering that require no
control flow like __builtin_isnan (x) -> x UNORD x should move
to match.pd patterns (aka foldings that will then be applied
both on GENERIC and GIMPLE).

As fallback the expanders should simply emit a call (as done
for isinf when not folded for example).

I think fpclassify is special (double-check), you can't have an indirect 
call to the
classification family as they are macros (builtins where they
exist could be called indirectly though I guess we should simply
disallow taking their address).  These are appropriate for
early lowering like you do.  You can leave high-level ops
from fpclassify lowering and rely on folding to turn them
into sth more optimal.

I still don't fully believe in lowering to integer ops
like the isnan lowering you are doing.  That hides what you
are doing from (not yet existing) propagation passes of
FP classification.  I'd do those transforms only late.
You are for example missing to open-code isunordered with
integer ops, no?  I'd have isnan become UNORDERED_EXPR and
eventually expand that with bitops.

That said, split out the uncontroversical part of moving existing
foldings from builtins.c to match.pd where they generate no
control-flow.

Thanks,
Richard.

> Originally the patch was in expand, which would have covered the late expansion case similar to what it's
> doing now in trunk. I was however asked to move this to a GIMPLE lowering pass as to give other optimizations
> a chance to do further optimizations on the generated code.
> 
> This of course works fine for C since these math functions are a Macro in C but are functions in C++.
> 
> C++ would then allow you to do stuff like take the address of the function so
> 
> void foo(float a) {
>   int (*xx)(...);
>   xx = isnan;
>   if (xx(a))
>     g++;
> }
> 
> is perfectly legal. My current patch leaves "isnan" in as a call as by the time we're doing GIMPLE lowering
> the alias has not been resolved yet, whereas the version currently committed is able to expand it as it's late
> in expand.
> 
> Curiously the behaviour is somewhat confusing.
> if xx is called with a non-constant it is expanded as you would expect
> 
> .LFB0:
>         .cfi_startproc
>         fcvt    d0, s0
>         fcmp    d0, d0
>         bvs     .L4
> 
> but when xx is called with a constant, say 0 it's not
> 
> .LFB0:
>         .cfi_startproc
>         mov     w0, 0
>         bl      isnan
>         cbz     w0, .L1
> 
> because it infers the type of the 0 to be an integer and then doesn't recognize the call.
> using 0.0 works, but the behaviour is really counter intuitive.
> 
> The question is what should I do now, clearly it would be useful to handle the late expansion as well,
> however I don't know what the best approach would be.
> 
> I can either:
> 
> 1) Add two new implementations, one for constant folding and one for expansion, but then one in expand would
>    be quite similar to the one in the early lowering phase. The constant folding one could be very simple since
>    it's a constant I can just call the buildins and evaluate the value completely.
> 
> 2) Use the patch as is but make another one to allow the renaming to be applied quite early on. e.g while still in Tree or GIMPLE resolve
> 
>         int (*xx)(...);
>         xx = isnan;
>         if (xx(a))
> 
>         to
> 
>         int (*xx)(...);
>         xx = isnan;
>         if (isnan(a))
> 
>    This seems like it would be the best approach and the more useful one in general.
> 
> Any thoughts?
> 
> Thanks,
> Tamar
> 
> >
> > Richard.
> 
> 

-- 
Richard Biener <rguenther@suse.de>
SUSE LINUX GmbH, GF: Felix Imendoerffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nuernberg)

^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-08-23 13:26                                                     ` Richard Biener
@ 2017-08-23 15:51                                                       ` Tamar Christina
  2017-08-24 11:01                                                         ` Richard Biener
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2017-08-23 15:51 UTC (permalink / raw)
  To: Richard Biener
  Cc: Joseph Myers, Christophe Lyon, Markus Trippelsdorf, Jeff Law,
	GCC Patches, Wilco Dijkstra, Michael Meissner, nd

Hi Richard,

Thanks for the feedback,

> 
> I think you should split up your work a bit.  The pieces from
> fold_builtin_* you remove and move to lowering that require no control flow
> like __builtin_isnan (x) -> x UNORD x should move to match.pd patterns (aka
> foldings that will then be applied both on GENERIC and GIMPLE).
> 

But emitting them as an UNORD wouldn't solve the performance or correctness issues
The patch is trying to address. For correctness, for instance an UNORD would still signal
When -fsignaling-nans is used (PR/66462).

> As fallback the expanders should simply emit a call (as done for isinf when
> not folded for example).

Yes the patch currently already does this. My question had more to do if the late expansion
when you alias one of these functions in C++ was really an issue, since It used to (sometimes,
only when you got the type of the argument correct) expand before whereas now it'll always
just emit a function call in this edge case.

> I think fpclassify is special (double-check), you can't have an indirect call to
> the classification family as they are macros (builtins where they exist could be
> called indirectly though I guess we should simply disallow taking their
> address).  These are appropriate for early lowering like you do.  You can leave
> high-level ops from fpclassify lowering and rely on folding to turn them into
> sth more optimal.

Fpclassify is also a function in C++, so in that regard it's not different from the rest.
For C code my patch will always do the right thing as like you said they're macros so
I would always be able to lower them early.

> 
> I still don't fully believe in lowering to integer ops like the isnan lowering you
> are doing.  That hides what you are doing from (not yet existing) propagation
> passes of FP classification.  I'd do those transforms only late.
> You are for example missing to open-code isunordered with integer ops, no?
> I'd have isnan become UNORDERED_EXPR and eventually expand that with
> bitops.

I'm not sure how I would be able to distinguish between the UNORDERED_EXPR of
an isnan call and that or a general X UNORDERED_EXPR X expression. The new isnan code
that replaces the UNORD isn't a general comparison, it can only really check for nan.
In general don't how if I rewrite these to TREE expressions early in match.pd how I would
ever be able to recognize the very specific cases they try to handle in order to do bitops only
when it makes sense.

I think C++ cases will still be problematic, since match.pd will match on

int (*xx)(...);
xx = isnan;
if (xx(a))

quite late, and only when xx is eventually replaced by isnan, so a lot of optimizations would have
already ignored the UNORD it would have generated. Which means the bitops have to be quite late
and would miss a lot of other optimization phases.

It's these cases where you do stuff with a function that you can't with a macro that's a problem for my
Current patch. It'll just always leave it in as a function call (the current expand code will expand to a cmp if
The types end up matching, but only then), the question is really if it's a big issue or not.

And if it is, maybe we should do that rewriting early on. It seems like replacing xx with isnan early on
Would have lots of benefits like getting type errors. Since for instance this is wrong, but gcc still produces code for it:

int g;

extern "C" int isnan ();
struct bar { int b; };

void foo(struct bar a) {
  int (*xx)(...);
  xx = isnan;
  if (xx(a))
    g++;
}

And makes a call to isnan with a struct as an argument.
For the record, with my current patch, testsuite/g++.dg/opt/pr60849.C passes since it only checks for compile,
But with a call instead of a compare in the generated code.

Thanks,
Tamar

> 
> That said, split out the uncontroversical part of moving existing foldings from
> builtins.c to match.pd where they generate no control-flow.
> 
> Thanks,
> Richard.
> 
> > Originally the patch was in expand, which would have covered the late
> > expansion case similar to what it's doing now in trunk. I was however
> > asked to move this to a GIMPLE lowering pass as to give other
> optimizations a chance to do further optimizations on the generated code.
> >
> > This of course works fine for C since these math functions are a Macro in C
> but are functions in C++.
> >
> > C++ would then allow you to do stuff like take the address of the
> > C++ function so
> >
> > void foo(float a) {
> >   int (*xx)(...);
> >   xx = isnan;
> >   if (xx(a))
> >     g++;
> > }
> >
> > is perfectly legal. My current patch leaves "isnan" in as a call as by
> > the time we're doing GIMPLE lowering the alias has not been resolved
> > yet, whereas the version currently committed is able to expand it as it's late
> in expand.
> >
> > Curiously the behaviour is somewhat confusing.
> > if xx is called with a non-constant it is expanded as you would expect
> >
> > .LFB0:
> >         .cfi_startproc
> >         fcvt    d0, s0
> >         fcmp    d0, d0
> >         bvs     .L4
> >
> > but when xx is called with a constant, say 0 it's not
> >
> > .LFB0:
> >         .cfi_startproc
> >         mov     w0, 0
> >         bl      isnan
> >         cbz     w0, .L1
> >
> > because it infers the type of the 0 to be an integer and then doesn't
> recognize the call.
> > using 0.0 works, but the behaviour is really counter intuitive.
> >
> > The question is what should I do now, clearly it would be useful to
> > handle the late expansion as well, however I don't know what the best
> approach would be.
> >
> > I can either:
> >
> > 1) Add two new implementations, one for constant folding and one for
> expansion, but then one in expand would
> >    be quite similar to the one in the early lowering phase. The constant
> folding one could be very simple since
> >    it's a constant I can just call the buildins and evaluate the value
> completely.
> >
> > 2) Use the patch as is but make another one to allow the renaming to
> > be applied quite early on. e.g while still in Tree or GIMPLE resolve
> >
> >         int (*xx)(...);
> >         xx = isnan;
> >         if (xx(a))
> >
> >         to
> >
> >         int (*xx)(...);
> >         xx = isnan;
> >         if (isnan(a))
> >
> >    This seems like it would be the best approach and the more useful one in
> general.
> >
> > Any thoughts?
> >
> > Thanks,
> > Tamar
> >
> > >
> > > Richard.
> >
> >
> 
> --
> Richard Biener <rguenther@suse.de>
> SUSE LINUX GmbH, GF: Felix Imendoerffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nuernberg)

^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-08-23 15:51                                                       ` Tamar Christina
@ 2017-08-24 11:01                                                         ` Richard Biener
  2017-08-29 11:25                                                           ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Richard Biener @ 2017-08-24 11:01 UTC (permalink / raw)
  To: Tamar Christina
  Cc: Joseph Myers, Christophe Lyon, Markus Trippelsdorf, Jeff Law,
	GCC Patches, Wilco Dijkstra, Michael Meissner, nd

On Wed, 23 Aug 2017, Tamar Christina wrote:

> Hi Richard,
> 
> Thanks for the feedback,
> 
> > 
> > I think you should split up your work a bit.  The pieces from
> > fold_builtin_* you remove and move to lowering that require no control flow
> > like __builtin_isnan (x) -> x UNORD x should move to match.pd patterns (aka
> > foldings that will then be applied both on GENERIC and GIMPLE).
> > 
> 
> But emitting them as an UNORD wouldn't solve the performance or correctness issues
> The patch is trying to address. For correctness, for instance an UNORD would still signal
> When -fsignaling-nans is used (PR/66462).

Well, then address the correctness issues separately.  I think lumping
everything into a single patch just makes your live harder ;)

I was saying that if I write isunordered (x, y) and use 
-fno-signaling-nans
(the default) then I would expect the same code to be generated as if
I say isnan (x) || isnan (y).  And I'm not sure if isnan (x) || isnan (y)
is more efficient if both are implemented with integer ops compared
to using an unordered FP compare.

A canonical high-level representation on GIMPLE is equally important
as maybe getting some optimization on a lowered "optimized" form.

And a good canonical representation is short (x UNORD x or x UNORD y
counts as short here).

> > As fallback the expanders should simply emit a call (as done for isinf when
> > not folded for example).
> 
> Yes the patch currently already does this.

Ah, I probably looked at an old version then.

> My question had more to do if 
> the late expansion when you alias one of these functions in C++ was 
> really an issue, since It used to (sometimes, only when you got the type 
> of the argument correct) expand before whereas now it'll always just 
> emit a function call in this edge case.

The answer here is really that we should perform lowering at a point
where we do not omit possible optimizations.  For the particular case
this would mean lowering after IPA optimizations (also think of
accelerator offloading which uses the early optimization phase of
the host).  This also allows early optimization to more accurately
estimate code size (when optimizing for size).  And I still think
whether to expose fp classification as FP or integer ops should be
a target decision and done late.

This still means canonicalizations like isnan -> UNORD should happen
early and during folding.

Note __builtin_fpclassify is really a special case and lowering that
early is a good thing -- I'd simply keep the current code here at
the start.

> > I think fpclassify is special (double-check), you can't have an indirect call to
> > the classification family as they are macros (builtins where they exist could be
> > called indirectly though I guess we should simply disallow taking their
> > address).  These are appropriate for early lowering like you do.  You can leave
> > high-level ops from fpclassify lowering and rely on folding to turn them into
> > sth more optimal.
> 
> Fpclassify is also a function in C++, so in that regard it's not 
> different from the rest. For C code my patch will always do the right 
> thing as like you said they're macros so I would always be able to lower 
> them early.

Hum, but fpclassify is then not a builtin in C++ given it needs to know
the actual values of FP_NAN and friends.  That is, we are talking about
__builtin_fpclassify which does not match the standards fpclassify API.
The builtin is always a function and never a macro.

> > 
> > I still don't fully believe in lowering to integer ops like the isnan lowering you
> > are doing.  That hides what you are doing from (not yet existing) propagation
> > passes of FP classification.  I'd do those transforms only late.
> > You are for example missing to open-code isunordered with integer ops, no?
> > I'd have isnan become UNORDERED_EXPR and eventually expand that with
> > bitops.
> 
> I'm not sure how I would be able to distinguish between the 
> UNORDERED_EXPR of an isnan call and that or a general X UNORDERED_EXPR X 
> expression. The new isnan code that replaces the UNORD isn't a general 
> comparison, it can only really check for nan. In general don't how if I 
> rewrite these to TREE expressions early in match.pd how I would ever be 
> able to recognize the very specific cases they try to handle in order to 
> do bitops only when it makes sense.

I believe the reverse transform x UNORD x to isnan (x) is also valid
(modulo -fsignalling-nans).

> 
> I think C++ cases will still be problematic, since match.pd will match on
> 
> int (*xx)(...);
> xx = isnan;
> if (xx(a))
> 
> quite late, and only when xx is eventually replaced by isnan, so a lot 
> of optimizations would have already ignored the UNORD it would have 
> generated. Which means the bitops have to be quite late and would miss a 
> lot of other optimization phases.

Is that so?  Did you try whether vectorization likes x UNORD x or
the bit operations variant of isnan (x) more?  Try

int foo (double *x, int n)
{
  unsigned nan_cnt = 0;
  for (int i = 0; i < n; ++i)
    if (__builtin_isnan (x[i]))
      nan_cnt++;
  return nan_cnt;
}

for a conditional reduction.  On x86_64 I get an inner vectorized loop

.L5:
        movapd  (%rax), %xmm0
        addq    $32, %rax
        movapd  -16(%rax), %xmm1
        cmpq    %rdx, %rax
        cmpunordpd      %xmm0, %xmm0
        cmpunordpd      %xmm1, %xmm1
        pand    %xmm3, %xmm0
        pand    %xmm3, %xmm1
        shufps  $136, %xmm1, %xmm0
        paddd   %xmm0, %xmm2
        jne     .L5


> It's these cases where you do stuff with a function that you can't with 
> a macro that's a problem for my Current patch. It'll just always leave 
> it in as a function call (the current expand code will expand to a cmp 
> if The types end up matching, but only then), the question is really if 
> it's a big issue or not.
> 
> And if it is, maybe we should do that rewriting early on. It seems like 
> replacing xx with isnan early on Would have lots of benefits like 
> getting type errors. Since for instance this is wrong, but gcc still 
> produces code for it:
> 
> int g;
> 
> extern "C" int isnan ();
> struct bar { int b; };
> 
> void foo(struct bar a) {
>   int (*xx)(...);
>   xx = isnan;
>   if (xx(a))
>     g++;
> }
> 
> And makes a call to isnan with a struct as an argument.

Well, this is what you wrote ...

> For the record, with my current patch, testsuite/g++.dg/opt/pr60849.C passes since it only checks for compile,
> But with a call instead of a compare in the generated code.

For the particular testcase that's not a problem I guess.  With
correctly prototyped isnan and a correct type for the function
pointer it would be unfortunate.

Thanks,
Richard.

> Thanks,
> Tamar
> 
> > 
> > That said, split out the uncontroversical part of moving existing foldings from
> > builtins.c to match.pd where they generate no control-flow.
> > 
> > Thanks,
> > Richard.
> > 
> > > Originally the patch was in expand, which would have covered the late
> > > expansion case similar to what it's doing now in trunk. I was however
> > > asked to move this to a GIMPLE lowering pass as to give other
> > optimizations a chance to do further optimizations on the generated code.
> > >
> > > This of course works fine for C since these math functions are a Macro in C
> > but are functions in C++.
> > >
> > > C++ would then allow you to do stuff like take the address of the
> > > C++ function so
> > >
> > > void foo(float a) {
> > >   int (*xx)(...);
> > >   xx = isnan;
> > >   if (xx(a))
> > >     g++;
> > > }
> > >
> > > is perfectly legal. My current patch leaves "isnan" in as a call as by
> > > the time we're doing GIMPLE lowering the alias has not been resolved
> > > yet, whereas the version currently committed is able to expand it as it's late
> > in expand.
> > >
> > > Curiously the behaviour is somewhat confusing.
> > > if xx is called with a non-constant it is expanded as you would expect
> > >
> > > .LFB0:
> > >         .cfi_startproc
> > >         fcvt    d0, s0
> > >         fcmp    d0, d0
> > >         bvs     .L4
> > >
> > > but when xx is called with a constant, say 0 it's not
> > >
> > > .LFB0:
> > >         .cfi_startproc
> > >         mov     w0, 0
> > >         bl      isnan
> > >         cbz     w0, .L1
> > >
> > > because it infers the type of the 0 to be an integer and then doesn't
> > recognize the call.
> > > using 0.0 works, but the behaviour is really counter intuitive.
> > >
> > > The question is what should I do now, clearly it would be useful to
> > > handle the late expansion as well, however I don't know what the best
> > approach would be.
> > >
> > > I can either:
> > >
> > > 1) Add two new implementations, one for constant folding and one for
> > expansion, but then one in expand would
> > >    be quite similar to the one in the early lowering phase. The constant
> > folding one could be very simple since
> > >    it's a constant I can just call the buildins and evaluate the value
> > completely.
> > >
> > > 2) Use the patch as is but make another one to allow the renaming to
> > > be applied quite early on. e.g while still in Tree or GIMPLE resolve
> > >
> > >         int (*xx)(...);
> > >         xx = isnan;
> > >         if (xx(a))
> > >
> > >         to
> > >
> > >         int (*xx)(...);
> > >         xx = isnan;
> > >         if (isnan(a))
> > >
> > >    This seems like it would be the best approach and the more useful one in
> > general.
> > >
> > > Any thoughts?
> > >
> > > Thanks,
> > > Tamar
> > >
> > > >
> > > > Richard.
> > >
> > >
> > 
> > --
> > Richard Biener <rguenther@suse.de>
> > SUSE LINUX GmbH, GF: Felix Imendoerffer, Jane Smithard, Graham Norton,
> > HRB 21284 (AG Nuernberg)
> 
> 

-- 
Richard Biener <rguenther@suse.de>
SUSE LINUX GmbH, GF: Felix Imendoerffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nuernberg)

^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-08-24 11:01                                                         ` Richard Biener
@ 2017-08-29 11:25                                                           ` Tamar Christina
  2017-08-30 14:37                                                             ` Richard Biener
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2017-08-29 11:25 UTC (permalink / raw)
  To: Richard Biener
  Cc: Joseph Myers, Christophe Lyon, Markus Trippelsdorf, Jeff Law,
	GCC Patches, Wilco Dijkstra, Michael Meissner, nd



> -----Original Message-----
> From: Richard Biener [mailto:rguenther@suse.de]
> Sent: 24 August 2017 10:16
> To: Tamar Christina
> Cc: Joseph Myers; Christophe Lyon; Markus Trippelsdorf; Jeff Law; GCC
> Patches; Wilco Dijkstra; Michael Meissner; nd
> Subject: RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
> numbers in GIMPLE.
> 
> On Wed, 23 Aug 2017, Tamar Christina wrote:
> 
> > Hi Richard,
> >
> > Thanks for the feedback,
> >
> > >
> > > I think you should split up your work a bit.  The pieces from
> > > fold_builtin_* you remove and move to lowering that require no
> > > control flow like __builtin_isnan (x) -> x UNORD x should move to
> > > match.pd patterns (aka foldings that will then be applied both on
> GENERIC and GIMPLE).
> > >
> >
> > But emitting them as an UNORD wouldn't solve the performance or
> > correctness issues The patch is trying to address. For correctness,
> > for instance an UNORD would still signal When -fsignaling-nans is used
> (PR/66462).
> 
> Well, then address the correctness issues separately.  I think lumping
> everything into a single patch just makes your live harder ;)

But the patch was made to fix the correctness issues. Which unless I'm mistaken
can only be solved using integer operations. To give a short history on the patch:

My original patch had this in builtins.c where the current code is and had
none of this problem. The intention of this patch was just address the correctness
issues with isnan etc.

Since we had to do this using integer operations we figured we'd also replace the
operations with faster ones so the code was tweaked in order to e.g. make the common
paths of fpclassify exit earlier. (I had provided benchmark results to show that
the integer operations are faster than the floating point ones, also on x86).

It was then suggested by upstream review that I do this as a gimple lowering
phase. I pushed back against this but ultimately lost and submitted a new version
that moved from builtins.c to gimple-low.c, which was why late expansion for C++
broke, because the expansion is now done very early.

The rational for this move was that it would give the rest of the pipeline a chance
to optimize the code further.

After a few revisions this was ultimately accepted (the v3 version), but was reverted
because it broke IBM's long double format due to the special case code for it violating SSA.

That was trivially fixed (once I finally had access to a PowerPC hardware months later) and I changed the
patch to leave the builtin as a function call if at lowering time it wasn't known to be a builtin.
This then "fixes" the C++ testcase, in that the test doesn't fail anymore, but it doesn't generate
the same code as before.
Though the test is quite contrived and I don't know if actual user code often has this.

While checking to see if this behavior is OK, I was suggested to do it as a gimple-fold instead.
I did, but then turns out it won't work for non-constants as I can't generate new control flow from
a fold (which makes sense in retrospect but I didn't know this before).

This is why the patch is more complex than what it was before. The original version only emitted simple tree.

> 
> I was saying that if I write isunordered (x, y) and use -fno-signaling-nans (the
> default) then I would expect the same code to be generated as if I say isnan
> (x) || isnan (y).  And I'm not sure if isnan (x) || isnan (y) is more efficient if
> both are implemented with integer ops compared to using an unordered FP
> compare.
> 

But isnan (x) || isnan (y) can be rewritten using a match.pd rule to isunordered (x, y)
If that should really generate the same code. So I don’t think it's an issue for this patch.

> A canonical high-level representation on GIMPLE is equally important as
> maybe getting some optimization on a lowered "optimized" form.
> 
> And a good canonical representation is short (x UNORD x or x UNORD y
> counts as short here).
> 
> > > As fallback the expanders should simply emit a call (as done for
> > > isinf when not folded for example).
> >
> > Yes the patch currently already does this.
> 
> Ah, I probably looked at an old version then.
> 
> > My question had more to do if
> > the late expansion when you alias one of these functions in C++ was
> > really an issue, since It used to (sometimes, only when you got the
> > type of the argument correct) expand before whereas now it'll always
> > just emit a function call in this edge case.
> 
> The answer here is really that we should perform lowering at a point where
> we do not omit possible optimizations.  For the particular case this would
> mean lowering after IPA optimizations (also think of accelerator offloading
> which uses the early optimization phase of the host).  This also allows early
> optimization to more accurately estimate code size (when optimizing for
> size).  And I still think whether to expose fp classification as FP or integer ops
> should be a target decision and done late.
> 
> This still means canonicalizations like isnan -> UNORD should happen early
> and during folding.
> 

This feels to me like it's a different patch, whatever transformations you do
to get isnan or any of the other builtins don't seem relevant. Only that when you
do generate isnan, you get better and more correct code than before.

If the integer operations are of a concern though, I could add a target hook
to make this opt-in. The backup code is still there as it gets used when the floating point
type is not IEEE like.

> >
> > >
> > > That said, split out the uncontroversical part of moving existing
> > > foldings from builtins.c to match.pd where they generate no control-
> flow.
> > >

If that's the direction I have take I will, I am just not convinced the patch will be
simpler nor smaller...

Thanks,
Tamar

> > > Thanks,
> > > Richard.
> > >
> > > > Originally the patch was in expand, which would have covered the
> > > > late expansion case similar to what it's doing now in trunk. I was
> > > > however asked to move this to a GIMPLE lowering pass as to give
> > > > other
> > > optimizations a chance to do further optimizations on the generated
> code.
> > > >
> > > > This of course works fine for C since these math functions are a
> > > > Macro in C
> > > but are functions in C++.
> > > >
> > > > C++ would then allow you to do stuff like take the address of the
> > > > C++ function so
> > > >
> > > > void foo(float a) {
> > > >   int (*xx)(...);
> > > >   xx = isnan;
> > > >   if (xx(a))
> > > >     g++;
> > > > }
> > > >
> > > > is perfectly legal. My current patch leaves "isnan" in as a call
> > > > as by the time we're doing GIMPLE lowering the alias has not been
> > > > resolved yet, whereas the version currently committed is able to
> > > > expand it as it's late
> > > in expand.
> > > >
> > > > Curiously the behaviour is somewhat confusing.
> > > > if xx is called with a non-constant it is expanded as you would
> > > > expect
> > > >
> > > > .LFB0:
> > > >         .cfi_startproc
> > > >         fcvt    d0, s0
> > > >         fcmp    d0, d0
> > > >         bvs     .L4
> > > >
> > > > but when xx is called with a constant, say 0 it's not
> > > >
> > > > .LFB0:
> > > >         .cfi_startproc
> > > >         mov     w0, 0
> > > >         bl      isnan
> > > >         cbz     w0, .L1
> > > >
> > > > because it infers the type of the 0 to be an integer and then
> > > > doesn't
> > > recognize the call.
> > > > using 0.0 works, but the behaviour is really counter intuitive.
> > > >
> > > > The question is what should I do now, clearly it would be useful
> > > > to handle the late expansion as well, however I don't know what
> > > > the best
> > > approach would be.
> > > >
> > > > I can either:
> > > >
> > > > 1) Add two new implementations, one for constant folding and one
> > > > for
> > > expansion, but then one in expand would
> > > >    be quite similar to the one in the early lowering phase. The
> > > > constant
> > > folding one could be very simple since
> > > >    it's a constant I can just call the buildins and evaluate the
> > > > value
> > > completely.
> > > >
> > > > 2) Use the patch as is but make another one to allow the renaming
> > > > to be applied quite early on. e.g while still in Tree or GIMPLE
> > > > resolve
> > > >
> > > >         int (*xx)(...);
> > > >         xx = isnan;
> > > >         if (xx(a))
> > > >
> > > >         to
> > > >
> > > >         int (*xx)(...);
> > > >         xx = isnan;
> > > >         if (isnan(a))
> > > >
> > > >    This seems like it would be the best approach and the more
> > > > useful one in
> > > general.
> > > >
> > > > Any thoughts?
> > > >
> > > > Thanks,
> > > > Tamar
> > > >
> > > > >
> > > > > Richard.
> > > >
> > > >
> > >
> > > --
> > > Richard Biener <rguenther@suse.de>
> > > SUSE LINUX GmbH, GF: Felix Imendoerffer, Jane Smithard, Graham
> > > Norton, HRB 21284 (AG Nuernberg)
> >
> >
> 
> --
> Richard Biener <rguenther@suse.de>
> SUSE LINUX GmbH, GF: Felix Imendoerffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nuernberg)

^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-08-29 11:25                                                           ` Tamar Christina
@ 2017-08-30 14:37                                                             ` Richard Biener
  0 siblings, 0 replies; 47+ messages in thread
From: Richard Biener @ 2017-08-30 14:37 UTC (permalink / raw)
  To: Tamar Christina
  Cc: Joseph Myers, Christophe Lyon, Markus Trippelsdorf, Jeff Law,
	GCC Patches, Wilco Dijkstra, Michael Meissner, nd

[-- Attachment #1: Type: text/plain, Size: 10538 bytes --]

On Tue, 29 Aug 2017, Tamar Christina wrote:

> 
> 
> > -----Original Message-----
> > From: Richard Biener [mailto:rguenther@suse.de]
> > Sent: 24 August 2017 10:16
> > To: Tamar Christina
> > Cc: Joseph Myers; Christophe Lyon; Markus Trippelsdorf; Jeff Law; GCC
> > Patches; Wilco Dijkstra; Michael Meissner; nd
> > Subject: RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
> > numbers in GIMPLE.
> > 
> > On Wed, 23 Aug 2017, Tamar Christina wrote:
> > 
> > > Hi Richard,
> > >
> > > Thanks for the feedback,
> > >
> > > >
> > > > I think you should split up your work a bit.  The pieces from
> > > > fold_builtin_* you remove and move to lowering that require no
> > > > control flow like __builtin_isnan (x) -> x UNORD x should move to
> > > > match.pd patterns (aka foldings that will then be applied both on
> > GENERIC and GIMPLE).
> > > >
> > >
> > > But emitting them as an UNORD wouldn't solve the performance or
> > > correctness issues The patch is trying to address. For correctness,
> > > for instance an UNORD would still signal When -fsignaling-nans is used
> > (PR/66462).
> > 
> > Well, then address the correctness issues separately.  I think lumping
> > everything into a single patch just makes your live harder ;)
> 
> But the patch was made to fix the correctness issues. Which unless I'm 
> mistaken can only be solved using integer operations. To give a short 
> history on the patch:
> 
> My original patch had this in builtins.c where the current code is and 
> had none of this problem. The intention of this patch was just address 
> the correctness issues with isnan etc.
> 
> Since we had to do this using integer operations we figured we'd also 
> replace the operations with faster ones so the code was tweaked in order 
> to e.g. make the common paths of fpclassify exit earlier. (I had 
> provided benchmark results to show that the integer operations are 
> faster than the floating point ones, also on x86).
> 
> It was then suggested by upstream review that I do this as a gimple lowering
> phase. I pushed back against this but ultimately lost and submitted a new version
> that moved from builtins.c to gimple-low.c, which was why late expansion for C++
> broke, because the expansion is now done very early.
> 
> The rational for this move was that it would give the rest of the pipeline a chance
> to optimize the code further.
> 
> After a few revisions this was ultimately accepted (the v3 version), but was reverted
> because it broke IBM's long double format due to the special case code for it violating SSA.
> 
> That was trivially fixed (once I finally had access to a PowerPC hardware months later) and I changed the
> patch to leave the builtin as a function call if at lowering time it wasn't known to be a builtin.
> This then "fixes" the C++ testcase, in that the test doesn't fail anymore, but it doesn't generate
> the same code as before.
> Though the test is quite contrived and I don't know if actual user code often has this.
> 
> While checking to see if this behavior is OK, I was suggested to do it 
> as a gimple-fold instead. I did, but then turns out it won't work for 
> non-constants as I can't generate new control flow from a fold (which 
> makes sense in retrospect but I didn't know this before).
> 
> This is why the patch is more complex than what it was before. The 
> original version only emitted simple tree.

The correctness issue is only about -fsignaling-nans, correct?  So a
simple fix is to guard the existing builtins.c code with
!flag_singalling_nans (and emit a library call for -fsignalling-nans).

> > 
> > I was saying that if I write isunordered (x, y) and use -fno-signaling-nans (the
> > default) then I would expect the same code to be generated as if I say isnan
> > (x) || isnan (y).  And I'm not sure if isnan (x) || isnan (y) is more efficient if
> > both are implemented with integer ops compared to using an unordered FP
> > compare.
> > 
> 
> But isnan (x) || isnan (y) can be rewritten using a match.pd rule to isunordered (x, y)
> If that should really generate the same code. So I donÂ’t think it's an issue for this patch.

If isnan is lowered to integer ops than we'll have a hard time recognizing
it.

> > A canonical high-level representation on GIMPLE is equally important as
> > maybe getting some optimization on a lowered "optimized" form.
> > 
> > And a good canonical representation is short (x UNORD x or x UNORD y
> > counts as short here).
> > 
> > > > As fallback the expanders should simply emit a call (as done for
> > > > isinf when not folded for example).
> > >
> > > Yes the patch currently already does this.
> > 
> > Ah, I probably looked at an old version then.
> > 
> > > My question had more to do if
> > > the late expansion when you alias one of these functions in C++ was
> > > really an issue, since It used to (sometimes, only when you got the
> > > type of the argument correct) expand before whereas now it'll always
> > > just emit a function call in this edge case.
> > 
> > The answer here is really that we should perform lowering at a point where
> > we do not omit possible optimizations.  For the particular case this would
> > mean lowering after IPA optimizations (also think of accelerator offloading
> > which uses the early optimization phase of the host).  This also allows early
> > optimization to more accurately estimate code size (when optimizing for
> > size).  And I still think whether to expose fp classification as FP or integer ops
> > should be a target decision and done late.
> > 
> > This still means canonicalizations like isnan -> UNORD should happen early
> > and during folding.
> > 
> 
> This feels to me like it's a different patch, whatever transformations you do
> to get isnan or any of the other builtins don't seem relevant. Only that when you
> do generate isnan, you get better and more correct code than before.
> 
> If the integer operations are of a concern though, I could add a target hook
> to make this opt-in. The backup code is still there as it gets used when the floating point
> type is not IEEE like.

I am concerned about using integer ops without having a way for the
targets to intervene.  This choice would be naturally made at
RTL expansion time (like in your original patch I guess).  There
are currently no optabs for isnan() and friends given we do lower
some of them early (to FP ops, and with bogus sNaN behavior).

> > >
> > > >
> > > > That said, split out the uncontroversical part of moving existing
> > > > foldings from builtins.c to match.pd where they generate no control-
> > flow.
> > > >
> 
> If that's the direction I have take I will, I am just not convinced the patch will be
> simpler nor smaller...

It makes the intent more visible, a wrong-code fix (adding flag_* checks)
compared to a added optimization.  Adding the flags checks is sth
we'd want to backport for example.

Richard.

> Thanks,
> Tamar
> 
> > > > Thanks,
> > > > Richard.
> > > >
> > > > > Originally the patch was in expand, which would have covered the
> > > > > late expansion case similar to what it's doing now in trunk. I was
> > > > > however asked to move this to a GIMPLE lowering pass as to give
> > > > > other
> > > > optimizations a chance to do further optimizations on the generated
> > code.
> > > > >
> > > > > This of course works fine for C since these math functions are a
> > > > > Macro in C
> > > > but are functions in C++.
> > > > >
> > > > > C++ would then allow you to do stuff like take the address of the
> > > > > C++ function so
> > > > >
> > > > > void foo(float a) {
> > > > >   int (*xx)(...);
> > > > >   xx = isnan;
> > > > >   if (xx(a))
> > > > >     g++;
> > > > > }
> > > > >
> > > > > is perfectly legal. My current patch leaves "isnan" in as a call
> > > > > as by the time we're doing GIMPLE lowering the alias has not been
> > > > > resolved yet, whereas the version currently committed is able to
> > > > > expand it as it's late
> > > > in expand.
> > > > >
> > > > > Curiously the behaviour is somewhat confusing.
> > > > > if xx is called with a non-constant it is expanded as you would
> > > > > expect
> > > > >
> > > > > .LFB0:
> > > > >         .cfi_startproc
> > > > >         fcvt    d0, s0
> > > > >         fcmp    d0, d0
> > > > >         bvs     .L4
> > > > >
> > > > > but when xx is called with a constant, say 0 it's not
> > > > >
> > > > > .LFB0:
> > > > >         .cfi_startproc
> > > > >         mov     w0, 0
> > > > >         bl      isnan
> > > > >         cbz     w0, .L1
> > > > >
> > > > > because it infers the type of the 0 to be an integer and then
> > > > > doesn't
> > > > recognize the call.
> > > > > using 0.0 works, but the behaviour is really counter intuitive.
> > > > >
> > > > > The question is what should I do now, clearly it would be useful
> > > > > to handle the late expansion as well, however I don't know what
> > > > > the best
> > > > approach would be.
> > > > >
> > > > > I can either:
> > > > >
> > > > > 1) Add two new implementations, one for constant folding and one
> > > > > for
> > > > expansion, but then one in expand would
> > > > >    be quite similar to the one in the early lowering phase. The
> > > > > constant
> > > > folding one could be very simple since
> > > > >    it's a constant I can just call the buildins and evaluate the
> > > > > value
> > > > completely.
> > > > >
> > > > > 2) Use the patch as is but make another one to allow the renaming
> > > > > to be applied quite early on. e.g while still in Tree or GIMPLE
> > > > > resolve
> > > > >
> > > > >         int (*xx)(...);
> > > > >         xx = isnan;
> > > > >         if (xx(a))
> > > > >
> > > > >         to
> > > > >
> > > > >         int (*xx)(...);
> > > > >         xx = isnan;
> > > > >         if (isnan(a))
> > > > >
> > > > >    This seems like it would be the best approach and the more
> > > > > useful one in
> > > > general.
> > > > >
> > > > > Any thoughts?
> > > > >
> > > > > Thanks,
> > > > > Tamar
> > > > >
> > > > > >
> > > > > > Richard.
> > > > >
> > > > >
> > > >
> > > > --
> > > > Richard Biener <rguenther@suse.de>
> > > > SUSE LINUX GmbH, GF: Felix Imendoerffer, Jane Smithard, Graham
> > > > Norton, HRB 21284 (AG Nuernberg)
> > >
> > >
> > 
> > --
> > Richard Biener <rguenther@suse.de>
> > SUSE LINUX GmbH, GF: Felix Imendoerffer, Jane Smithard, Graham Norton,
> > HRB 21284 (AG Nuernberg)
> 
> 

-- 
Richard Biener <rguenther@suse.de>
SUSE LINUX GmbH, GF: Felix Imendoerffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nuernberg)

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-06-09  8:12   ` Tamar Christina
@ 2017-06-09 11:40     ` Rainer Orth
  0 siblings, 0 replies; 47+ messages in thread
From: Rainer Orth @ 2017-06-09 11:40 UTC (permalink / raw)
  To: Tamar Christina
  Cc: Joseph Myers, David Edelsohn, Christophe Lyon,
	Markus Trippelsdorf, Jeffrey Law, Wilco Dijkstra,
	Michael Meissner, nd, GCC Patches

Hi Tamar,

> I have reverted the patch in commit r 249050 until I can fix the late expansion
> and IBM long double issues.

thanks.  Just for the record, Solaris/SPARC (also big-endian) was
affected as well:

	https://gcc.gnu.org/ml/gcc-testresults/2017-06/msg00908.html

	Rainer

-- 
-----------------------------------------------------------------------------
Rainer Orth, Center for Biotechnology, Bielefeld University

^ permalink raw reply	[flat|nested] 47+ messages in thread

* RE: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-06-09  5:19 ` Joseph Myers
@ 2017-06-09  8:12   ` Tamar Christina
  2017-06-09 11:40     ` Rainer Orth
  0 siblings, 1 reply; 47+ messages in thread
From: Tamar Christina @ 2017-06-09  8:12 UTC (permalink / raw)
  To: Joseph Myers, David Edelsohn
  Cc: Christophe Lyon, Markus Trippelsdorf, Jeffrey Law,
	Wilco Dijkstra, Michael Meissner, nd, GCC Patches

Hi All,

I have reverted the patch in commit r 249050 until I can fix the late expansion
and IBM long double issues.

Thanks for the feedback,
Tamar

> -----Original Message-----
> From: Joseph Myers [mailto:joseph@codesourcery.com]
> Sent: 09 June 2017 06:19
> To: David Edelsohn
> Cc: Tamar Christina; Christophe Lyon; Markus Trippelsdorf; Jeffrey Law; Wilco
> Dijkstra; Michael Meissner; nd; GCC Patches
> Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like
> numbers in GIMPLE.
> 
> On Thu, 8 Jun 2017, David Edelsohn wrote:
> 
> > Tamar,
> >
> > This patch has caused a large number of new failures on AIX, including
> > compiler ICEs.
> 
> It also caused ICEs building glibc for all powerpc configurations (i.e.,
> configurations using IBM long double).
> 
> https://sourceware.org/ml/libc-testresults/2017-q2/msg00318.html
> 
> --
> Joseph S. Myers
> joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
  2017-06-09  0:27 David Edelsohn
@ 2017-06-09  5:19 ` Joseph Myers
  2017-06-09  8:12   ` Tamar Christina
  0 siblings, 1 reply; 47+ messages in thread
From: Joseph Myers @ 2017-06-09  5:19 UTC (permalink / raw)
  To: David Edelsohn
  Cc: Tamar Christina, Christophe Lyon, Markus Trippelsdorf,
	Jeffrey Law, Wilco Dijkstra, Michael Meissner, nd, GCC Patches

On Thu, 8 Jun 2017, David Edelsohn wrote:

> Tamar,
> 
> This patch has caused a large number of new failures on AIX, including
> compiler ICEs.

It also caused ICEs building glibc for all powerpc configurations (i.e., 
configurations using IBM long double).

https://sourceware.org/ml/libc-testresults/2017-q2/msg00318.html

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
@ 2017-06-09  0:27 David Edelsohn
  2017-06-09  5:19 ` Joseph Myers
  0 siblings, 1 reply; 47+ messages in thread
From: David Edelsohn @ 2017-06-09  0:27 UTC (permalink / raw)
  To: Tamar Christina, Christophe Lyon, Markus Trippelsdorf
  Cc: Joseph S. Myers, Jeffrey Law, Wilco Dijkstra, Michael Meissner,
	nd, GCC Patches

Tamar,

This patch has caused a large number of new failures on AIX, including
compiler ICEs.

Thanks, David

^ permalink raw reply	[flat|nested] 47+ messages in thread

end of thread, other threads:[~2017-08-30 13:26 UTC | newest]

Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-11 17:26 [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE Tamar Christina
2016-11-11 22:06 ` Joseph Myers
2016-11-24 15:52   ` Tamar Christina
2016-11-24 18:28     ` Joseph Myers
2016-11-25 12:19       ` Tamar Christina
2016-12-02 16:20         ` Tamar Christina
2016-12-12  9:20           ` Tamar Christina
2016-12-12 15:53             ` Jeff Law
2016-12-14  8:07         ` Jeff Law
2016-12-15 10:16           ` Tamar Christina
2016-12-15 19:37             ` Joseph Myers
2016-12-19 17:24               ` Tamar Christina
2017-01-18 16:15                 ` Jeff Law
2017-01-18 16:43                   ` Joseph Myers
2017-01-18 16:55                   ` Joseph Myers
2017-01-18 17:07                     ` Joseph Myers
2017-01-19 14:44                       ` Tamar Christina
2017-01-19 14:47                         ` Joseph Myers
2017-01-19 15:26                           ` Tamar Christina
2017-01-19 16:34                             ` Joseph Myers
2017-01-19 18:06                               ` Tamar Christina
2017-01-19 18:28                                 ` Joseph Myers
2017-01-23 15:55                                   ` Tamar Christina
2017-01-23 16:26                                     ` Joseph Myers
     [not found]                                     ` <5C66F854-5903-4F34-8DE6-5EE2E1FC5C77@gmail.com>
     [not found]                                       ` <VI1PR0801MB20316861AF6C0D8582A10069FF740@VI1PR0801MB2031.eurprd08.prod.outlook.com>
     [not found]                                         ` <VI1PR0801MB2031A5E562143B32E35ECFFEFF4B0@VI1PR0801MB2031.eurprd08.prod.outlook.com>
     [not found]                                           ` <VI1PR0801MB20317F959D3EEA166DC641E0FF450@VI1PR0801MB2031.eurprd08.prod.outlook.com>
     [not found]                                             ` <VI1PR0801MB20311246A73FD82A08541F26FF170@VI1PR0801MB2031.eurprd08.prod.outlook.com>
2017-05-15 10:39                                               ` Tamar Christina
2017-06-08 10:31                                   ` Markus Trippelsdorf
2017-06-08 12:00                                     ` Christophe Lyon
2017-06-08 12:21                                       ` Tamar Christina
2017-06-08 15:40                                         ` Tamar Christina
2017-06-08 15:43                                           ` Richard Biener
2017-06-08 16:44                                             ` Joseph Myers
2017-06-08 18:23                                               ` Richard Biener
2017-08-17 11:46                                                 ` Tamar Christina
2017-08-23 13:19                                                   ` Tamar Christina
2017-08-23 13:26                                                     ` Richard Biener
2017-08-23 15:51                                                       ` Tamar Christina
2017-08-24 11:01                                                         ` Richard Biener
2017-08-29 11:25                                                           ` Tamar Christina
2017-08-30 14:37                                                             ` Richard Biener
2017-06-09  8:14                                             ` Tamar Christina
2016-12-19 20:34             ` Jeff Law
2017-01-03  9:56               ` Tamar Christina
2017-01-16 17:06                 ` Tamar Christina
2017-06-09  0:27 David Edelsohn
2017-06-09  5:19 ` Joseph Myers
2017-06-09  8:12   ` Tamar Christina
2017-06-09 11:40     ` Rainer Orth

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).