public inbox for gcc-patches@gcc.gnu.org
* [Patch, AArch64] Restructure arm_neon.h vector types' implementation.
@ 2014-06-23 15:48 Tejas Belagod
  2014-06-25  8:31 ` Yufeng Zhang
  2014-06-28 10:25 ` Marc Glisse
  0 siblings, 2 replies; 11+ messages in thread
From: Tejas Belagod @ 2014-06-23 15:48 UTC (permalink / raw)
  To: gcc-patches; +Cc: Marc Glisse, Marcus Shawcroft

[-- Attachment #1: Type: text/plain, Size: 3051 bytes --]


Hi,

Here is a patch that restructures the Neon builtins to use vector types based on
standard base types. We previously defined arm_neon.h's Neon vector
types (e.g. int8x8_t) using GCC's front-end vector extensions. We now move away
from that and use types built internally (e.g. __Int8x8_t). These internal type
names are defined by the AAPCS64, and we build arm_neon.h's public vector types
on top of these internal types, e.g.

   typedef __Int8x8_t int8x8_t;

as opposed to

   typedef __builtin_aarch64_simd_qi int8x8_t
     __attribute__ ((__vector_size__ (8)));

Impact on mangling:

This patch does away with the builtin scalar types that the vector types were
previously based on. Those types were used to look up mangling names. We now use
the internal vector type names (e.g. __Int8x8_t) to look up mangling for the
arm_neon.h-exported vector types. There are a few internal scalar
types (__builtin_aarch64_simd_oi etc.) that are needed to efficiently implement
some Neon intrinsics. These will be declared in the back-end and registered with
the front-end as AArch64-specific builtin types, but they are not user-visible.
These, along with a few scalar __builtin types that aren't user-visible, will
have implementation-defined mangling. Because we don't have strong typing across
all builtins yet, we still have to maintain the old builtin scalar types; they
will be removed once we move over to a strongly-typed builtin system implemented
by the qualifier infrastructure.

Marc Glisse's patch in this thread
(https://gcc.gnu.org/ml/gcc-patches/2014-04/msg00618.html) exposed this issue.
I've tested my patch with the change that his patch introduced, and it seems to
work fine, specifically with these two lines:

+  for (tree t = registered_builtin_types; t; t = TREE_CHAIN (t))
+    emit_support_tinfo_1 (TREE_VALUE (t));

Regression-tested on aarch64-none-elf. OK for trunk?

Thanks,
Tejas.

gcc/ChangeLog

2014-06-23  Tejas Belagod  <tejas.belagod@arm.com>

	* config/aarch64/aarch64-builtins.c (aarch64_build_scalar_type): Remove.
	(aarch64_scalar_builtin_types, aarch64_simd_type, aarch64_simd_types,
	 aarch64_mangle_builtin_scalar_type, aarch64_mangle_builtin_vector_type,
	 aarch64_mangle_builtin_type, aarch64_simd_builtin_std_type,
	 aarch64_lookup_simd_builtin_type, aarch64_simd_builtin_type,
	 aarch64_init_simd_builtin_types,
	 aarch64_init_simd_builtin_scalar_types): New.
	(aarch64_init_simd_builtins): Refactor.
	(aarch64_fold_builtin): Remove redundant defn.
	(aarch64_init_crc32_builtins): Use aarch64_simd_builtin_std_type.
	* config/aarch64/aarch64-simd-builtin-types.def: New.
	* config/aarch64/t-aarch64: Add aarch64-simd-builtin-types.def
	dependency.
	* config/aarch64/aarch64-protos.h (aarch64_mangle_builtin_type): Export.
	* config/aarch64/aarch64-simd-builtins.def: Remove duplicates.
	* config/aarch64/aarch64.c (aarch64_simd_mangle_map): Remove.
	(aarch64_mangle_type): Refactor.
	* config/aarch64/arm_neon.h: Declare vector types based on internal
	type-names.

[-- Attachment #2: simd-types-refactor-18.txt --]
[-- Type: text/plain, Size: 33964 bytes --]

diff --git a/gcc/config/aarch64/aarch64-builtins.c b/gcc/config/aarch64/aarch64-builtins.c
index a94ef52..1119f33 100644
--- a/gcc/config/aarch64/aarch64-builtins.c
+++ b/gcc/config/aarch64/aarch64-builtins.c
@@ -471,256 +471,331 @@ static GTY(()) tree aarch64_builtin_decls[AARCH64_BUILTIN_MAX];
 #define NUM_DREG_TYPES 6
 #define NUM_QREG_TYPES 6
 
-/* Return a tree for a signed or unsigned argument of either
-   the mode specified by MODE, or the inner mode of MODE.  */
-tree
-aarch64_build_scalar_type (enum machine_mode mode,
-			   bool unsigned_p,
-			   bool poly_p)
-{
-#undef INT_TYPES
-#define INT_TYPES \
-  AARCH64_TYPE_BUILDER (QI) \
-  AARCH64_TYPE_BUILDER (HI) \
-  AARCH64_TYPE_BUILDER (SI) \
-  AARCH64_TYPE_BUILDER (DI) \
-  AARCH64_TYPE_BUILDER (EI) \
-  AARCH64_TYPE_BUILDER (OI) \
-  AARCH64_TYPE_BUILDER (CI) \
-  AARCH64_TYPE_BUILDER (XI) \
-  AARCH64_TYPE_BUILDER (TI) \
-
-/* Statically declare all the possible types we might need.  */
-#undef AARCH64_TYPE_BUILDER
-#define AARCH64_TYPE_BUILDER(X) \
-  static tree X##_aarch64_type_node_p = NULL; \
-  static tree X##_aarch64_type_node_s = NULL; \
-  static tree X##_aarch64_type_node_u = NULL;
-
-  INT_TYPES
-
-  static tree float_aarch64_type_node = NULL;
-  static tree double_aarch64_type_node = NULL;
-
-  gcc_assert (!VECTOR_MODE_P (mode));
-
-/* If we've already initialised this type, don't initialise it again,
-   otherwise ask for a new type of the correct size.  */
-#undef AARCH64_TYPE_BUILDER
-#define AARCH64_TYPE_BUILDER(X) \
-  case X##mode: \
-    if (unsigned_p) \
-      return (X##_aarch64_type_node_u \
-	      ? X##_aarch64_type_node_u \
-	      : X##_aarch64_type_node_u \
-		  = make_unsigned_type (GET_MODE_PRECISION (mode))); \
-    else if (poly_p) \
-       return (X##_aarch64_type_node_p \
-	      ? X##_aarch64_type_node_p \
-	      : X##_aarch64_type_node_p \
-		  = make_unsigned_type (GET_MODE_PRECISION (mode))); \
-    else \
-       return (X##_aarch64_type_node_s \
-	      ? X##_aarch64_type_node_s \
-	      : X##_aarch64_type_node_s \
-		  = make_signed_type (GET_MODE_PRECISION (mode))); \
-    break;
+/* Internal scalar builtin types.  These types are used to support
+   neon intrinsic builtins.  They are _not_ user-visible types.  Therefore
+   the mangling for these types is implementation-defined.  */
+const char *aarch64_scalar_builtin_types[] = {
+  "__builtin_aarch64_simd_qi",
+  "__builtin_aarch64_simd_hi",
+  "__builtin_aarch64_simd_si",
+  "__builtin_aarch64_simd_sf",
+  "__builtin_aarch64_simd_di",
+  "__builtin_aarch64_simd_df",
+  "__builtin_aarch64_simd_poly8",
+  "__builtin_aarch64_simd_poly16",
+  "__builtin_aarch64_simd_poly64",
+  "__builtin_aarch64_simd_poly128",
+  "__builtin_aarch64_simd_ti",
+  "__builtin_aarch64_simd_uqi",
+  "__builtin_aarch64_simd_uhi",
+  "__builtin_aarch64_simd_usi",
+  "__builtin_aarch64_simd_udi",
+  "__builtin_aarch64_simd_ei",
+  "__builtin_aarch64_simd_oi",
+  "__builtin_aarch64_simd_ci",
+  "__builtin_aarch64_simd_xi",
+  NULL
+};
 
-  switch (mode)
-    {
-      INT_TYPES
-      case SFmode:
-	if (!float_aarch64_type_node)
-	  {
-	    float_aarch64_type_node = make_node (REAL_TYPE);
-	    TYPE_PRECISION (float_aarch64_type_node) = FLOAT_TYPE_SIZE;
-	    layout_type (float_aarch64_type_node);
-	  }
-	return float_aarch64_type_node;
-	break;
-      case DFmode:
-	if (!double_aarch64_type_node)
-	  {
-	    double_aarch64_type_node = make_node (REAL_TYPE);
-	    TYPE_PRECISION (double_aarch64_type_node) = DOUBLE_TYPE_SIZE;
-	    layout_type (double_aarch64_type_node);
-	  }
-	return double_aarch64_type_node;
-	break;
-      default:
-	gcc_unreachable ();
-    }
-}
+#define ENTRY(E, M, Q, G) E,
+enum aarch64_simd_type
+{
+#include "aarch64-simd-builtin-types.def"
+};
+#undef ENTRY
 
-tree
-aarch64_build_vector_type (enum machine_mode mode,
-			   bool unsigned_p,
-			   bool poly_p)
+struct aarch64_simd_type_info
 {
+  enum aarch64_simd_type type;
+
+  /* Internal type name.  */
+  const char *name;
+
+  /* Internal type name (mangled).  The mangled names conform to the
+     AAPCS64 (see "Procedure Call Standard for the ARM 64-bit Architecture",
+     Appendix A).  To qualify for emission with the mangled names defined in
+     that document, a vector type must not only be of the correct mode but also
+     be of the correct internal AdvSIMD vector type (e.g. __Int8x8_t); these
+     types are registered by aarch64_init_simd_builtin_types ().  In other
+     words, vector types defined in other ways e.g. via vector_size attribute
+     will get default mangled names.  */
+  const char *mangle;
+
+  /* Internal type.  */
+  tree itype;
+
+  /* Element type.  */
   tree eltype;
 
-#define VECTOR_TYPES \
-  AARCH64_TYPE_BUILDER (V16QI) \
-  AARCH64_TYPE_BUILDER (V8HI) \
-  AARCH64_TYPE_BUILDER (V4SI) \
-  AARCH64_TYPE_BUILDER (V2DI) \
-  AARCH64_TYPE_BUILDER (V8QI) \
-  AARCH64_TYPE_BUILDER (V4HI) \
-  AARCH64_TYPE_BUILDER (V2SI) \
-  \
-  AARCH64_TYPE_BUILDER (V4SF) \
-  AARCH64_TYPE_BUILDER (V2DF) \
-  AARCH64_TYPE_BUILDER (V2SF) \
-/* Declare our "cache" of values.  */
-#undef AARCH64_TYPE_BUILDER
-#define AARCH64_TYPE_BUILDER(X) \
-  static tree X##_aarch64_type_node_s = NULL; \
-  static tree X##_aarch64_type_node_u = NULL; \
-  static tree X##_aarch64_type_node_p = NULL;
-
-  VECTOR_TYPES
-
-  gcc_assert (VECTOR_MODE_P (mode));
-
-#undef AARCH64_TYPE_BUILDER
-#define AARCH64_TYPE_BUILDER(X) \
-  case X##mode: \
-    if (unsigned_p) \
-      return X##_aarch64_type_node_u \
-	     ? X##_aarch64_type_node_u \
-	     : X##_aarch64_type_node_u \
-		= build_vector_type_for_mode (aarch64_build_scalar_type \
-						(GET_MODE_INNER (mode), \
-						 unsigned_p, poly_p), mode); \
-    else if (poly_p) \
-       return X##_aarch64_type_node_p \
-	      ? X##_aarch64_type_node_p \
-	      : X##_aarch64_type_node_p \
-		= build_vector_type_for_mode (aarch64_build_scalar_type \
-						(GET_MODE_INNER (mode), \
-						 unsigned_p, poly_p), mode); \
-    else \
-       return X##_aarch64_type_node_s \
-	      ? X##_aarch64_type_node_s \
-	      : X##_aarch64_type_node_s \
-		= build_vector_type_for_mode (aarch64_build_scalar_type \
-						(GET_MODE_INNER (mode), \
-						 unsigned_p, poly_p), mode); \
-    break;
+  /* Machine mode the internal type maps to.  */
+  enum machine_mode mode;
 
-  switch (mode)
+  /* Qualifiers.  */
+  enum aarch64_type_qualifiers q;
+};
+
+#define ENTRY(E, M, Q, G)  \
+  {E, "__" #E, #G "__" #E, NULL_TREE, NULL_TREE, M##mode, qualifier_##Q},
+static struct aarch64_simd_type_info aarch64_simd_types [] = {
+#include "aarch64-simd-builtin-types.def"
+};
+#undef ENTRY
+
+static tree aarch64_simd_intOI_type_node = NULL_TREE;
+static tree aarch64_simd_intEI_type_node = NULL_TREE;
+static tree aarch64_simd_intCI_type_node = NULL_TREE;
+static tree aarch64_simd_intXI_type_node = NULL_TREE;
+
+static const char *
+aarch64_mangle_builtin_scalar_type (const_tree type)
+{
+  int i = 0;
+
+  while (aarch64_scalar_builtin_types[i] != NULL)
     {
-      default:
-	eltype = aarch64_build_scalar_type (GET_MODE_INNER (mode),
-					    unsigned_p, poly_p);
-	return build_vector_type_for_mode (eltype, mode);
-	break;
-      VECTOR_TYPES
-   }
+      const char *name = aarch64_scalar_builtin_types[i];
+
+      if (TREE_CODE (TYPE_NAME (type)) == TYPE_DECL
+	  && DECL_NAME (TYPE_NAME (type))
+	  && !strcmp (IDENTIFIER_POINTER (DECL_NAME (TYPE_NAME (type))), name))
+	return aarch64_scalar_builtin_types[i];
+      i++;
+    }
+  return NULL;
 }
 
-tree
-aarch64_build_type (enum machine_mode mode, bool unsigned_p, bool poly_p)
+static const char *
+aarch64_mangle_builtin_vector_type (const_tree type)
 {
-  if (VECTOR_MODE_P (mode))
-    return aarch64_build_vector_type (mode, unsigned_p, poly_p);
+  int i;
+  int nelts = sizeof (aarch64_simd_types) / sizeof (aarch64_simd_types[0]);
+
+  for (i = 0; i < nelts; i++)
+    if (aarch64_simd_types[i].mode ==  TYPE_MODE (type)
+	&& TYPE_NAME (type)
+	&& TREE_CODE (TYPE_NAME (type)) == TYPE_DECL
+	&& DECL_NAME (TYPE_NAME (type))
+	&& !strcmp
+	     (IDENTIFIER_POINTER (DECL_NAME (TYPE_NAME (type))),
+	      aarch64_simd_types[i].name))
+      return aarch64_simd_types[i].mangle;
+
+  return NULL;
+}
+
+const char *
+aarch64_mangle_builtin_type (const_tree type)
+{
+  if (TREE_CODE (type) == VECTOR_TYPE)
+    return aarch64_mangle_builtin_vector_type (type);
   else
-    return aarch64_build_scalar_type (mode, unsigned_p, poly_p);
+    return aarch64_mangle_builtin_scalar_type (type);
 }
 
-tree
-aarch64_build_signed_type (enum machine_mode mode)
+static tree
+aarch64_simd_builtin_std_type (enum machine_mode mode,
+			       enum aarch64_type_qualifiers q)
 {
-  return aarch64_build_type (mode, false, false);
+#define QUAL_TYPE(M)  \
+  ((q == qualifier_none) ? int##M##_type_node : unsigned_int##M##_type_node);
+  switch (mode)
+    {
+    case QImode:
+      return QUAL_TYPE (QI);
+    case HImode:
+      return QUAL_TYPE (HI);
+    case SImode:
+      return QUAL_TYPE (SI);
+    case DImode:
+      return QUAL_TYPE (DI);
+    case TImode:
+      return QUAL_TYPE (TI);
+    case OImode:
+      return aarch64_simd_intOI_type_node;
+    case EImode:
+      return aarch64_simd_intEI_type_node;
+    case CImode:
+      return aarch64_simd_intCI_type_node;
+    case XImode:
+      return aarch64_simd_intXI_type_node;
+    case SFmode:
+      return float_type_node;
+    case DFmode:
+      return double_type_node;
+    default:
+      gcc_unreachable ();
+    }
+#undef QUAL_TYPE
 }
 
-tree
-aarch64_build_unsigned_type (enum machine_mode mode)
+static tree
+aarch64_lookup_simd_builtin_type (enum machine_mode mode,
+				  enum aarch64_type_qualifiers q)
 {
-  return aarch64_build_type (mode, true, false);
+  int i;
+  int nelts = sizeof (aarch64_simd_types) / sizeof (aarch64_simd_types[0]);
+
+  /* Non-poly scalar modes map to standard types not in the table.  */
+  if (q != qualifier_poly && !VECTOR_MODE_P (mode))
+    return aarch64_simd_builtin_std_type (mode, q);
+
+  for (i = 0; i < nelts; i++)
+    if (aarch64_simd_types[i].mode == mode
+	&& aarch64_simd_types[i].q == q)
+      return aarch64_simd_types[i].itype;
+
+  return NULL_TREE;
 }
 
-tree
-aarch64_build_poly_type (enum machine_mode mode)
+static tree
+aarch64_simd_builtin_type (enum machine_mode mode,
+			   bool unsigned_p, bool poly_p)
 {
-  return aarch64_build_type (mode, false, true);
+  if (poly_p)
+    return aarch64_lookup_simd_builtin_type (mode, qualifier_poly);
+  else if (unsigned_p)
+    return aarch64_lookup_simd_builtin_type (mode, qualifier_unsigned);
+  else
+    return aarch64_lookup_simd_builtin_type (mode, qualifier_none);
 }
 
 static void
-aarch64_init_simd_builtins (void)
+aarch64_init_simd_builtin_types (void)
 {
-  unsigned int i, fcode = AARCH64_SIMD_BUILTIN_BASE + 1;
+  int i;
+  int nelts = sizeof (aarch64_simd_types) / sizeof (aarch64_simd_types[0]);
+  tree tdecl;
+
+  /* Init all the element types built by the front-end.  */
+  aarch64_simd_types[Int8x8_t].eltype = intQI_type_node;
+  aarch64_simd_types[Int8x16_t].eltype = intQI_type_node;
+  aarch64_simd_types[Int16x4_t].eltype = intHI_type_node;
+  aarch64_simd_types[Int16x8_t].eltype = intHI_type_node;
+  aarch64_simd_types[Int32x2_t].eltype = intSI_type_node;
+  aarch64_simd_types[Int32x4_t].eltype = intSI_type_node;
+  aarch64_simd_types[Int64x1_t].eltype = intDI_type_node;
+  aarch64_simd_types[Int64x2_t].eltype = intDI_type_node;
+  aarch64_simd_types[Uint8x8_t].eltype = unsigned_intQI_type_node;
+  aarch64_simd_types[Uint8x16_t].eltype = unsigned_intQI_type_node;
+  aarch64_simd_types[Uint16x4_t].eltype = unsigned_intHI_type_node;
+  aarch64_simd_types[Uint16x8_t].eltype = unsigned_intHI_type_node;
+  aarch64_simd_types[Uint32x2_t].eltype = unsigned_intSI_type_node;
+  aarch64_simd_types[Uint32x4_t].eltype = unsigned_intSI_type_node;
+  aarch64_simd_types[Uint64x1_t].eltype = unsigned_intDI_type_node;
+  aarch64_simd_types[Uint64x2_t].eltype = unsigned_intDI_type_node;
+
+  /* Poly types are a world of their own.  */
+  aarch64_simd_types[Poly8_t].eltype = aarch64_simd_types[Poly8_t].itype =
+    build_distinct_type_copy (unsigned_intQI_type_node);
+  aarch64_simd_types[Poly16_t].eltype = aarch64_simd_types[Poly16_t].itype =
+    build_distinct_type_copy (unsigned_intHI_type_node);
+  aarch64_simd_types[Poly64_t].eltype = aarch64_simd_types[Poly64_t].itype =
+    build_distinct_type_copy (unsigned_intDI_type_node);
+  aarch64_simd_types[Poly128_t].eltype = aarch64_simd_types[Poly128_t].itype =
+    build_distinct_type_copy (unsigned_intTI_type_node);
+  /* Init poly vector element types with scalar poly types.  */
+  aarch64_simd_types[Poly8x8_t].eltype = aarch64_simd_types[Poly8_t].itype;
+  aarch64_simd_types[Poly8x16_t].eltype = aarch64_simd_types[Poly8_t].itype;
+  aarch64_simd_types[Poly16x4_t].eltype = aarch64_simd_types[Poly16_t].itype;
+  aarch64_simd_types[Poly16x8_t].eltype = aarch64_simd_types[Poly16_t].itype;
+  aarch64_simd_types[Poly64x1_t].eltype = aarch64_simd_types[Poly64_t].itype;
+  aarch64_simd_types[Poly64x2_t].eltype = aarch64_simd_types[Poly64_t].itype;
+
+  /* Continue with standard types.  */
+  aarch64_simd_types[Float32x2_t].eltype = float_type_node;
+  aarch64_simd_types[Float32x4_t].eltype = float_type_node;
+  aarch64_simd_types[Float64x1_t].eltype = double_type_node;
+  aarch64_simd_types[Float64x2_t].eltype = double_type_node;
+
+  for (i = 0; i < nelts; i++)
+    {
+      tree eltype = aarch64_simd_types[i].eltype;
+      enum machine_mode mode = aarch64_simd_types[i].mode;
+      enum aarch64_simd_type type = aarch64_simd_types[i].type;
+
+      if (aarch64_simd_types[i].itype == NULL)
+	aarch64_simd_types[i].itype =
+	  build_distinct_type_copy
+	    (build_vector_type (eltype, GET_MODE_NUNITS (mode)));
+
+      tdecl = add_builtin_type (aarch64_simd_types[i].name,
+				aarch64_simd_types[i].itype);
+      TYPE_NAME (aarch64_simd_types[i].itype) = tdecl;
+      SET_TYPE_STRUCTURAL_EQUALITY (aarch64_simd_types[i].itype);
+    }
 
-  /* Signed scalar type nodes.  */
-  tree aarch64_simd_intQI_type_node = aarch64_build_signed_type (QImode);
-  tree aarch64_simd_intHI_type_node = aarch64_build_signed_type (HImode);
-  tree aarch64_simd_intSI_type_node = aarch64_build_signed_type (SImode);
-  tree aarch64_simd_intDI_type_node = aarch64_build_signed_type (DImode);
-  tree aarch64_simd_intTI_type_node = aarch64_build_signed_type (TImode);
-  tree aarch64_simd_intEI_type_node = aarch64_build_signed_type (EImode);
-  tree aarch64_simd_intOI_type_node = aarch64_build_signed_type (OImode);
-  tree aarch64_simd_intCI_type_node = aarch64_build_signed_type (CImode);
-  tree aarch64_simd_intXI_type_node = aarch64_build_signed_type (XImode);
-
-  /* Unsigned scalar type nodes.  */
-  tree aarch64_simd_intUQI_type_node = aarch64_build_unsigned_type (QImode);
-  tree aarch64_simd_intUHI_type_node = aarch64_build_unsigned_type (HImode);
-  tree aarch64_simd_intUSI_type_node = aarch64_build_unsigned_type (SImode);
-  tree aarch64_simd_intUDI_type_node = aarch64_build_unsigned_type (DImode);
-
-  /* Poly scalar type nodes.  */
-  tree aarch64_simd_polyQI_type_node = aarch64_build_poly_type (QImode);
-  tree aarch64_simd_polyHI_type_node = aarch64_build_poly_type (HImode);
-  tree aarch64_simd_polyDI_type_node = aarch64_build_poly_type (DImode);
-  tree aarch64_simd_polyTI_type_node = aarch64_build_poly_type (TImode);
-
-  /* Float type nodes.  */
-  tree aarch64_simd_float_type_node = aarch64_build_signed_type (SFmode);
-  tree aarch64_simd_double_type_node = aarch64_build_signed_type (DFmode);
-
-  /* Define typedefs which exactly correspond to the modes we are basing vector
-     types on.  If you change these names you'll need to change
-     the table used by aarch64_mangle_type too.  */
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intQI_type_node,
+#define AARCH64_BUILD_SIGNED_TYPE(mode)  \
+  make_signed_type (GET_MODE_PRECISION (mode));
+  aarch64_simd_intOI_type_node = AARCH64_BUILD_SIGNED_TYPE (OImode);
+  aarch64_simd_intEI_type_node = AARCH64_BUILD_SIGNED_TYPE (EImode);
+  aarch64_simd_intCI_type_node = AARCH64_BUILD_SIGNED_TYPE (CImode);
+  aarch64_simd_intXI_type_node = AARCH64_BUILD_SIGNED_TYPE (XImode);
+#undef AARCH64_BUILD_SIGNED_TYPE
+
+  tdecl = add_builtin_type
+	    ("__builtin_aarch64_simd_ei" , aarch64_simd_intEI_type_node);
+  TYPE_NAME (aarch64_simd_intEI_type_node) = tdecl;
+  tdecl = add_builtin_type
+	    ("__builtin_aarch64_simd_oi" , aarch64_simd_intOI_type_node);
+  TYPE_NAME (aarch64_simd_intOI_type_node) = tdecl;
+  tdecl = add_builtin_type
+	    ("__builtin_aarch64_simd_ci" , aarch64_simd_intCI_type_node);
+  TYPE_NAME (aarch64_simd_intCI_type_node) = tdecl;
+  tdecl = add_builtin_type
+	    ("__builtin_aarch64_simd_xi" , aarch64_simd_intXI_type_node);
+  TYPE_NAME (aarch64_simd_intXI_type_node) = tdecl;
+}
+
+static void
+aarch64_init_simd_builtin_scalar_types (void)
+{
+  /* Define typedefs for all the standard scalar types.  */
+  (*lang_hooks.types.register_builtin_type) (intQI_type_node,
 					     "__builtin_aarch64_simd_qi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intHI_type_node,
+  (*lang_hooks.types.register_builtin_type) (intHI_type_node,
 					     "__builtin_aarch64_simd_hi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intSI_type_node,
+  (*lang_hooks.types.register_builtin_type) (intSI_type_node,
 					     "__builtin_aarch64_simd_si");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_float_type_node,
+  (*lang_hooks.types.register_builtin_type) (float_type_node,
 					     "__builtin_aarch64_simd_sf");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intDI_type_node,
+  (*lang_hooks.types.register_builtin_type) (intDI_type_node,
 					     "__builtin_aarch64_simd_di");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_double_type_node,
+  (*lang_hooks.types.register_builtin_type) (double_type_node,
 					     "__builtin_aarch64_simd_df");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_polyQI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intQI_type_node,
 					     "__builtin_aarch64_simd_poly8");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_polyHI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intHI_type_node,
 					     "__builtin_aarch64_simd_poly16");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_polyDI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intDI_type_node,
 					     "__builtin_aarch64_simd_poly64");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_polyTI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intTI_type_node,
 					     "__builtin_aarch64_simd_poly128");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intTI_type_node,
+  (*lang_hooks.types.register_builtin_type) (intTI_type_node,
 					     "__builtin_aarch64_simd_ti");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intEI_type_node,
-					     "__builtin_aarch64_simd_ei");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intOI_type_node,
-					     "__builtin_aarch64_simd_oi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intCI_type_node,
-					     "__builtin_aarch64_simd_ci");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intXI_type_node,
-					     "__builtin_aarch64_simd_xi");
 
   /* Unsigned integer types for various mode sizes.  */
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intUQI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intQI_type_node,
 					     "__builtin_aarch64_simd_uqi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intUHI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intHI_type_node,
 					     "__builtin_aarch64_simd_uhi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intUSI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intSI_type_node,
 					     "__builtin_aarch64_simd_usi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intUDI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intDI_type_node,
 					     "__builtin_aarch64_simd_udi");
+}
+
+static void
+aarch64_init_simd_builtins (void)
+{
+  unsigned int i, fcode = AARCH64_SIMD_BUILTIN_BASE + 1;
+
+  aarch64_init_simd_builtin_types ();
+
+  /* Strong-typing hasn't been implemented for all AdvSIMD builtin intrinsics.
+     Therefore we need to preserve the old __builtin scalar types.  It can be
+     removed once all the intrinsics become strongly typed using the qualifier
+     system.  */
+  aarch64_init_simd_builtin_scalar_types ();
 
   for (i = 0; i < ARRAY_SIZE (aarch64_simd_builtin_data); i++, fcode++)
     {
@@ -800,9 +875,11 @@ aarch64_init_simd_builtins (void)
 	  if (qualifiers & qualifier_pointer && VECTOR_MODE_P (op_mode))
 	    op_mode = GET_MODE_INNER (op_mode);
 
-	  eltype = aarch64_build_type (op_mode,
-				       qualifiers & qualifier_unsigned,
-				       qualifiers & qualifier_poly);
+	  eltype = aarch64_simd_builtin_type
+		     (op_mode,
+		      (qualifiers & qualifier_unsigned) != 0,
+		      (qualifiers & qualifier_poly) != 0);
+	  gcc_assert (eltype != NULL);
 
 	  /* Add qualifiers.  */
 	  if (qualifiers & qualifier_const)
@@ -840,13 +917,14 @@ aarch64_init_simd_builtins (void)
 static void
 aarch64_init_crc32_builtins ()
 {
-  tree usi_type = aarch64_build_unsigned_type (SImode);
+  tree usi_type = aarch64_simd_builtin_std_type (SImode, qualifier_unsigned);
   unsigned int i = 0;
 
   for (i = 0; i < ARRAY_SIZE (aarch64_crc_builtin_data); ++i)
     {
       aarch64_crc_builtin_datum* d = &aarch64_crc_builtin_data[i];
-      tree argtype = aarch64_build_unsigned_type (d->mode);
+      tree argtype = aarch64_simd_builtin_std_type (d->mode,
+						    qualifier_unsigned);
       tree ftype = build_function_type_list (usi_type, usi_type, argtype, NULL_TREE);
       tree fndecl = add_builtin_function (d->name, ftype, d->fcode,
                                           BUILT_IN_MD, NULL, NULL_TREE);
@@ -1348,18 +1426,16 @@ aarch64_fold_builtin (tree fndecl, int n_args ATTRIBUTE_UNUSED, tree *args,
       VAR1 (REINTERP_SS, reinterpretv2si, 0, df)
       VAR1 (REINTERP_SS, reinterpretv2sf, 0, df)
       BUILTIN_VD (REINTERP_SS, reinterpretdf, 0)
-      BUILTIN_VD (REINTERP_SU, reinterpretdf, 0)
+      BUILTIN_VD_BHSI (REINTERP_SU, reinterpretdf, 0)
       VAR1 (REINTERP_US, reinterpretdi, 0, df)
       VAR1 (REINTERP_US, reinterpretv8qi, 0, df)
       VAR1 (REINTERP_US, reinterpretv4hi, 0, df)
       VAR1 (REINTERP_US, reinterpretv2si, 0, df)
-      VAR1 (REINTERP_US, reinterpretv2sf, 0, df)
-      BUILTIN_VD (REINTERP_SP, reinterpretdf, 0)
+      VAR1 (REINTERP_SP, reinterpretdf, 0, v8qi)
+      VAR1 (REINTERP_SP, reinterpretdf, 0, v4hi)
       VAR1 (REINTERP_PS, reinterpretdi, 0, df)
       VAR1 (REINTERP_PS, reinterpretv8qi, 0, df)
       VAR1 (REINTERP_PS, reinterpretv4hi, 0, df)
-      VAR1 (REINTERP_PS, reinterpretv2si, 0, df)
-      VAR1 (REINTERP_PS, reinterpretv2sf, 0, df)
 	return fold_build1 (VIEW_CONVERT_EXPR, type, args[0]);
       VAR1 (UNOP, floatv2si, 2, v2sf)
       VAR1 (UNOP, floatv4si, 2, v4sf)
diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
index 53023ba..a1c7708 100644
--- a/gcc/config/aarch64/aarch64-protos.h
+++ b/gcc/config/aarch64/aarch64-protos.h
@@ -208,6 +208,7 @@ bool aarch64_simd_valid_immediate (rtx, enum machine_mode, bool,
 				   struct simd_immediate_info *);
 bool aarch64_symbolic_address_p (rtx);
 bool aarch64_uimm12_shift (HOST_WIDE_INT);
+const char *aarch64_mangle_builtin_type (const_tree);
 const char *aarch64_output_casesi (rtx *);
 const char *aarch64_rewrite_selected_cpu (const char *name);
 
diff --git a/gcc/config/aarch64/aarch64-simd-builtins.def b/gcc/config/aarch64/aarch64-simd-builtins.def
index faa0858..f20f414 100644
--- a/gcc/config/aarch64/aarch64-simd-builtins.def
+++ b/gcc/config/aarch64/aarch64-simd-builtins.def
@@ -58,21 +58,19 @@
   VAR1 (REINTERP_SS, reinterpretv2sf, 0, df)
   BUILTIN_VD (REINTERP_SS, reinterpretdf, 0)
 
-  BUILTIN_VD (REINTERP_SU, reinterpretdf, 0)
+  BUILTIN_VD_BHSI (REINTERP_SU, reinterpretdf, 0)
+
+  VAR1 (REINTERP_SP, reinterpretdf, 0, v8qi)
+  VAR1 (REINTERP_SP, reinterpretdf, 0, v4hi)
 
   VAR1 (REINTERP_US, reinterpretdi, 0, df)
   VAR1 (REINTERP_US, reinterpretv8qi, 0, df)
   VAR1 (REINTERP_US, reinterpretv4hi, 0, df)
   VAR1 (REINTERP_US, reinterpretv2si, 0, df)
-  VAR1 (REINTERP_US, reinterpretv2sf, 0, df)
-
-  BUILTIN_VD (REINTERP_SP, reinterpretdf, 0)
 
   VAR1 (REINTERP_PS, reinterpretdi, 0, df)
   VAR1 (REINTERP_PS, reinterpretv8qi, 0, df)
   VAR1 (REINTERP_PS, reinterpretv4hi, 0, df)
-  VAR1 (REINTERP_PS, reinterpretv2si, 0, df)
-  VAR1 (REINTERP_PS, reinterpretv2sf, 0, df)
 
   BUILTIN_VDQ_I (BINOP, dup_lane, 0)
   /* Implemented by aarch64_<sur>q<r>shl<mode>.  */
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index f0aafbd..e3d8c69 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -7332,51 +7332,6 @@ aarch64_autovectorize_vector_sizes (void)
   return (16 | 8);
 }
 
-/* A table to help perform AArch64-specific name mangling for AdvSIMD
-   vector types in order to conform to the AAPCS64 (see "Procedure
-   Call Standard for the ARM 64-bit Architecture", Appendix A).  To
-   qualify for emission with the mangled names defined in that document,
-   a vector type must not only be of the correct mode but also be
-   composed of AdvSIMD vector element types (e.g.
-   _builtin_aarch64_simd_qi); these types are registered by
-   aarch64_init_simd_builtins ().  In other words, vector types defined
-   in other ways e.g. via vector_size attribute will get default
-   mangled names.  */
-typedef struct
-{
-  enum machine_mode mode;
-  const char *element_type_name;
-  const char *mangled_name;
-} aarch64_simd_mangle_map_entry;
-
-static aarch64_simd_mangle_map_entry aarch64_simd_mangle_map[] = {
-  /* 64-bit containerized types.  */
-  { V8QImode,  "__builtin_aarch64_simd_qi",     "10__Int8x8_t" },
-  { V8QImode,  "__builtin_aarch64_simd_uqi",    "11__Uint8x8_t" },
-  { V4HImode,  "__builtin_aarch64_simd_hi",     "11__Int16x4_t" },
-  { V4HImode,  "__builtin_aarch64_simd_uhi",    "12__Uint16x4_t" },
-  { V2SImode,  "__builtin_aarch64_simd_si",     "11__Int32x2_t" },
-  { V2SImode,  "__builtin_aarch64_simd_usi",    "12__Uint32x2_t" },
-  { V2SFmode,  "__builtin_aarch64_simd_sf",     "13__Float32x2_t" },
-  { V8QImode,  "__builtin_aarch64_simd_poly8",  "11__Poly8x8_t" },
-  { V4HImode,  "__builtin_aarch64_simd_poly16", "12__Poly16x4_t" },
-  /* 128-bit containerized types.  */
-  { V16QImode, "__builtin_aarch64_simd_qi",     "11__Int8x16_t" },
-  { V16QImode, "__builtin_aarch64_simd_uqi",    "12__Uint8x16_t" },
-  { V8HImode,  "__builtin_aarch64_simd_hi",     "11__Int16x8_t" },
-  { V8HImode,  "__builtin_aarch64_simd_uhi",    "12__Uint16x8_t" },
-  { V4SImode,  "__builtin_aarch64_simd_si",     "11__Int32x4_t" },
-  { V4SImode,  "__builtin_aarch64_simd_usi",    "12__Uint32x4_t" },
-  { V2DImode,  "__builtin_aarch64_simd_di",     "11__Int64x2_t" },
-  { V2DImode,  "__builtin_aarch64_simd_udi",    "12__Uint64x2_t" },
-  { V4SFmode,  "__builtin_aarch64_simd_sf",     "13__Float32x4_t" },
-  { V2DFmode,  "__builtin_aarch64_simd_df",     "13__Float64x2_t" },
-  { V16QImode, "__builtin_aarch64_simd_poly8",  "12__Poly8x16_t" },
-  { V8HImode,  "__builtin_aarch64_simd_poly16", "12__Poly16x8_t" },
-  { V2DImode,  "__builtin_aarch64_simd_poly64", "12__Poly64x2_t" },
-  { VOIDmode, NULL, NULL }
-};
-
 /* Implement TARGET_MANGLE_TYPE.  */
 
 static const char *
@@ -7387,25 +7342,10 @@ aarch64_mangle_type (const_tree type)
   if (lang_hooks.types_compatible_p (CONST_CAST_TREE (type), va_list_type))
     return "St9__va_list";
 
-  /* Check the mode of the vector type, and the name of the vector
-     element type, against the table.  */
-  if (TREE_CODE (type) == VECTOR_TYPE)
-    {
-      aarch64_simd_mangle_map_entry *pos = aarch64_simd_mangle_map;
-
-      while (pos->mode != VOIDmode)
-	{
-	  tree elt_type = TREE_TYPE (type);
-
-	  if (pos->mode == TYPE_MODE (type)
-	      && TREE_CODE (TYPE_NAME (elt_type)) == TYPE_DECL
-	      && !strcmp (IDENTIFIER_POINTER (DECL_NAME (TYPE_NAME (elt_type))),
-			  pos->element_type_name))
-	    return pos->mangled_name;
-
-	  pos++;
-	}
-    }
+  /* Mangle AArch64-specific internal types.  TYPE_NAME is non-NULL_TREE for
+     builtin types.  */
+  if (TYPE_NAME (type) != NULL)
+    return aarch64_mangle_builtin_type (type);
 
   /* Use the default mangling.  */
   return NULL;
diff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h
index 3ed8a98..50d294e 100644
--- a/gcc/config/aarch64/arm_neon.h
+++ b/gcc/config/aarch64/arm_neon.h
@@ -32,66 +32,45 @@
 #define __AARCH64_UINT64_C(__C) ((uint64_t) __C)
 #define __AARCH64_INT64_C(__C) ((int64_t) __C)
 
-typedef __builtin_aarch64_simd_qi int8x8_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_hi int16x4_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_si int32x2_t
-  __attribute__ ((__vector_size__ (8)));
+typedef __Int8x8_t int8x8_t;
+typedef __Int16x4_t int16x4_t;
+typedef __Int32x2_t int32x2_t;
 typedef int64_t int64x1_t;
 typedef int32_t int32x1_t;
 typedef int16_t int16x1_t;
 typedef int8_t int8x1_t;
 typedef double float64x1_t;
-typedef __builtin_aarch64_simd_sf float32x2_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_poly8 poly8x8_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_poly16 poly16x4_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_uqi uint8x8_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_uhi uint16x4_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_usi uint32x2_t
-  __attribute__ ((__vector_size__ (8)));
+typedef __Float32x2_t float32x2_t;
+typedef __Poly8x8_t poly8x8_t;
+typedef __Poly16x4_t poly16x4_t;
+typedef __Uint8x8_t uint8x8_t;
+typedef __Uint16x4_t uint16x4_t;
+typedef __Uint32x2_t uint32x2_t;
 typedef uint64_t uint64x1_t;
 typedef uint32_t uint32x1_t;
 typedef uint16_t uint16x1_t;
 typedef uint8_t uint8x1_t;
-typedef __builtin_aarch64_simd_qi int8x16_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_hi int16x8_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_si int32x4_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_di int64x2_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_sf float32x4_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_df float64x2_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_poly8 poly8x16_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_poly16 poly16x8_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_poly64 poly64x2_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_uqi uint8x16_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_uhi uint16x8_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_usi uint32x4_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_udi uint64x2_t
-  __attribute__ ((__vector_size__ (16)));
+typedef __Int8x16_t int8x16_t;
+typedef __Int16x8_t int16x8_t;
+typedef __Int32x4_t int32x4_t;
+typedef __Int64x2_t int64x2_t;
+typedef __Float32x4_t float32x4_t;
+typedef __Float64x2_t float64x2_t;
+typedef __Poly8x16_t poly8x16_t;
+typedef __Poly16x8_t poly16x8_t;
+typedef __Poly64x2_t poly64x2_t;
+typedef __Uint8x16_t uint8x16_t;
+typedef __Uint16x8_t uint16x8_t;
+typedef __Uint32x4_t uint32x4_t;
+typedef __Uint64x2_t uint64x2_t;
+
+typedef __Poly8_t poly8_t;
+typedef __Poly16_t poly16_t;
+typedef __Poly64_t poly64_t;
+typedef __Poly128_t poly128_t;
 
 typedef float float32_t;
 typedef double float64_t;
-typedef __builtin_aarch64_simd_poly8 poly8_t;
-typedef __builtin_aarch64_simd_poly16 poly16_t;
-typedef __builtin_aarch64_simd_poly64 poly64_t;
-typedef __builtin_aarch64_simd_poly128 poly128_t;
 
 typedef struct int8x8x2_t
 {
diff --git a/gcc/config/aarch64/t-aarch64 b/gcc/config/aarch64/t-aarch64
index 158fbb5..d331e36 100644
--- a/gcc/config/aarch64/t-aarch64
+++ b/gcc/config/aarch64/t-aarch64
@@ -31,7 +31,8 @@ aarch64-builtins.o: $(srcdir)/config/aarch64/aarch64-builtins.c $(CONFIG_H) \
   $(SYSTEM_H) coretypes.h $(TM_H) \
   $(RTL_H) $(TREE_H) expr.h $(TM_P_H) $(RECOG_H) langhooks.h \
   $(DIAGNOSTIC_CORE_H) $(OPTABS_H) \
-  $(srcdir)/config/aarch64/aarch64-simd-builtins.def
+  $(srcdir)/config/aarch64/aarch64-simd-builtins.def \
+  $(srcdir)/config/aarch64/aarch64-simd-builtin-types.def
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/aarch64/aarch64-builtins.c
 


* Re: [Patch, AArch64] Restructure arm_neon.h vector types' implementation.
  2014-06-23 15:48 [Patch, AArch64] Restructure arm_neon.h vector types' implementation Tejas Belagod
@ 2014-06-25  8:31 ` Yufeng Zhang
  2014-06-27 15:32   ` Tejas Belagod
  2014-06-28 10:25 ` Marc Glisse
  1 sibling, 1 reply; 11+ messages in thread
From: Yufeng Zhang @ 2014-06-25  8:31 UTC (permalink / raw)
  To: Tejas Belagod; +Cc: gcc-patches, Marc Glisse, Marcus Shawcroft

On 23 June 2014 16:47, Tejas Belagod <tbelagod@arm.com> wrote:
>
> Hi,
>
> Here is a patch that restructures neon builtins to use vector types
> based on standard base types. We previously defined arm_neon.h's neon
> vector types (e.g. int8x8_t) using gcc's front-end vector extensions.
> We now move away from that and use types built internally
> (e.g. __Int8x8_t). These internal type names are defined by the
> AAPCS64, and we build arm_neon.h's public vector types on top of these
> internal types, e.g.
>
>   typedef __Int8x8_t int8x8_t;
>
> as opposed to
>
>   typedef __builtin_aarch64_simd_qi int8x8_t
>     __attribute__ ((__vector_size__ (8)));
>
> Impact on mangling:
>
> This patch does away with the builtin scalar types that the vector
> types were based on. These were previously used to look up mangled
> names. We now use the internal vector type names (e.g. __Int8x8_t) to
> look up the mangling for the arm_neon.h-exported vector types. There
> are a few internal scalar types (__builtin_aarch64_simd_oi etc.) that
> are needed to efficiently implement some NEON intrinsics. These will
> be declared in the back-end and registered in the front-end as
> aarch64-specific builtin types, but they are not user-visible. These,
> along with a few scalar __builtin types that aren't user-visible, will
> have implementation-defined mangling. Because we don't have strong
> typing across all builtins yet, we still have to maintain the old
> builtin scalar types - they will be removed once we move over to a
> strongly-typed builtin system implemented by the qualifier
> infrastructure.
>
> Marc Glisse's patch in this thread exposed this issue
> https://gcc.gnu.org/ml/gcc-patches/2014-04/msg00618.html. I've tested my
> patch with the change that his patch introduced, and it seems to work fine -
> specifically these two lines:
>
> +  for (tree t = registered_builtin_types; t; t = TREE_CHAIN (t))
> +    emit_support_tinfo_1 (TREE_VALUE (t));
>
> Regression-tested on aarch64-none-elf. OK for trunk?
>
> Thanks,
> Tejas.
>
> gcc/Changelog
>
> 2014-06-23  Tejas Belagod  <tejas.belagod@arm.com>
>
>         * config/aarch64/aarch64-builtins.c (aarch64_build_scalar_type):
>         Remove.
>         (aarch64_scalar_builtin_types, aarch64_simd_type, aarch64_simd_types,
>         aarch64_mangle_builtin_scalar_type,
>         aarch64_mangle_builtin_vector_type, aarch64_mangle_builtin_type,
>         aarch64_simd_builtin_std_type, aarch64_lookup_simd_builtin_type,
>         aarch64_simd_builtin_type, aarch64_init_simd_builtin_types,
>         aarch64_init_simd_builtin_scalar_types): New.
>         (aarch64_init_simd_builtins): Refactor.
>         (aarch64_fold_builtin): Remove redundant defn.
>         (aarch64_init_crc32_builtins): Use aarch64_simd_builtin_std_type.
>         * config/aarch64/aarch64-simd-builtin-types.def: New.

Has the content of this new file been included in the patch?

Yufeng


* Re: [Patch, AArch64] Restructure arm_neon.h vector types' implementation.
  2014-06-25  8:31 ` Yufeng Zhang
@ 2014-06-27 15:32   ` Tejas Belagod
  2014-06-27 16:01     ` Yufeng Zhang
                       ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Tejas Belagod @ 2014-06-27 15:32 UTC (permalink / raw)
  To: Yufeng Zhang; +Cc: gcc-patches, Marc Glisse, Marcus Shawcroft

[-- Attachment #1: Type: text/plain, Size: 1007 bytes --]

>>
>> 2014-06-23  Tejas Belagod  <tejas.belagod@arm.com>
>>
>>         * config/aarch64/aarch64-builtins.c (aarch64_build_scalar_type):
>>         Remove.
>>         (aarch64_scalar_builtin_types, aarch64_simd_type, aarch64_simd_types,
>>         aarch64_mangle_builtin_scalar_type,
>>         aarch64_mangle_builtin_vector_type, aarch64_mangle_builtin_type,
>>         aarch64_simd_builtin_std_type, aarch64_lookup_simd_builtin_type,
>>         aarch64_simd_builtin_type, aarch64_init_simd_builtin_types,
>>         aarch64_init_simd_builtin_scalar_types): New.
>>         (aarch64_init_simd_builtins): Refactor.
>>         (aarch64_fold_builtin): Remove redundant defn.
>>         (aarch64_init_crc32_builtins): Use aarch64_simd_builtin_std_type.
>>         * config/aarch64/aarch64-simd-builtin-types.def: New.
> 
> Has the content of this new file been included in the patch?
> 

Oops! Thanks for spotting that. Here is a new patch with the missing bit.

OK?

Thanks,
Tejas.
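
PS: for anyone reading along, the mangled names in the aarch64_simd_types
table (e.g. "11__Int32x2_t" for __Int32x2_t) are just Itanium C++ ABI
<source-name> encodings of the internal type names, i.e. the identifier
prefixed with its decimal character count; the G field of each ENTRY in
aarch64-simd-builtin-types.def supplies that length digit string. A quick
illustrative sketch of the scheme (not part of the patch; the function
name here is made up):

```python
def itanium_source_name(name: str) -> str:
    """Encode an identifier as an Itanium C++ ABI <source-name>:
    the identifier's decimal length followed by the identifier itself,
    e.g. "__Int32x2_t" (11 chars) -> "11__Int32x2_t"."""
    return f"{len(name)}{name}"

# These match the mangle strings in the aarch64_simd_types table.
for ty in ["__Int32x2_t", "__Uint32x2_t", "__Float32x2_t", "__Poly8x8_t"]:
    print(itanium_source_name(ty))
```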

[-- Attachment #2: simd-types-refactor-19.txt --]
[-- Type: text/plain, Size: 35304 bytes --]

diff --git a/gcc/config/aarch64/aarch64-builtins.c b/gcc/config/aarch64/aarch64-builtins.c
index a94ef52..1119f33 100644
--- a/gcc/config/aarch64/aarch64-builtins.c
+++ b/gcc/config/aarch64/aarch64-builtins.c
@@ -471,256 +471,331 @@ static GTY(()) tree aarch64_builtin_decls[AARCH64_BUILTIN_MAX];
 #define NUM_DREG_TYPES 6
 #define NUM_QREG_TYPES 6
 
-/* Return a tree for a signed or unsigned argument of either
-   the mode specified by MODE, or the inner mode of MODE.  */
-tree
-aarch64_build_scalar_type (enum machine_mode mode,
-			   bool unsigned_p,
-			   bool poly_p)
-{
-#undef INT_TYPES
-#define INT_TYPES \
-  AARCH64_TYPE_BUILDER (QI) \
-  AARCH64_TYPE_BUILDER (HI) \
-  AARCH64_TYPE_BUILDER (SI) \
-  AARCH64_TYPE_BUILDER (DI) \
-  AARCH64_TYPE_BUILDER (EI) \
-  AARCH64_TYPE_BUILDER (OI) \
-  AARCH64_TYPE_BUILDER (CI) \
-  AARCH64_TYPE_BUILDER (XI) \
-  AARCH64_TYPE_BUILDER (TI) \
-
-/* Statically declare all the possible types we might need.  */
-#undef AARCH64_TYPE_BUILDER
-#define AARCH64_TYPE_BUILDER(X) \
-  static tree X##_aarch64_type_node_p = NULL; \
-  static tree X##_aarch64_type_node_s = NULL; \
-  static tree X##_aarch64_type_node_u = NULL;
-
-  INT_TYPES
-
-  static tree float_aarch64_type_node = NULL;
-  static tree double_aarch64_type_node = NULL;
-
-  gcc_assert (!VECTOR_MODE_P (mode));
-
-/* If we've already initialised this type, don't initialise it again,
-   otherwise ask for a new type of the correct size.  */
-#undef AARCH64_TYPE_BUILDER
-#define AARCH64_TYPE_BUILDER(X) \
-  case X##mode: \
-    if (unsigned_p) \
-      return (X##_aarch64_type_node_u \
-	      ? X##_aarch64_type_node_u \
-	      : X##_aarch64_type_node_u \
-		  = make_unsigned_type (GET_MODE_PRECISION (mode))); \
-    else if (poly_p) \
-       return (X##_aarch64_type_node_p \
-	      ? X##_aarch64_type_node_p \
-	      : X##_aarch64_type_node_p \
-		  = make_unsigned_type (GET_MODE_PRECISION (mode))); \
-    else \
-       return (X##_aarch64_type_node_s \
-	      ? X##_aarch64_type_node_s \
-	      : X##_aarch64_type_node_s \
-		  = make_signed_type (GET_MODE_PRECISION (mode))); \
-    break;
+/* Internal scalar builtin types.  These types are used to support
+   neon intrinsic builtins.  They are _not_ user-visible types.  Therefore
+   the mangling for these types is implementation defined.  */
+const char *aarch64_scalar_builtin_types[] = {
+  "__builtin_aarch64_simd_qi",
+  "__builtin_aarch64_simd_hi",
+  "__builtin_aarch64_simd_si",
+  "__builtin_aarch64_simd_sf",
+  "__builtin_aarch64_simd_di",
+  "__builtin_aarch64_simd_df",
+  "__builtin_aarch64_simd_poly8",
+  "__builtin_aarch64_simd_poly16",
+  "__builtin_aarch64_simd_poly64",
+  "__builtin_aarch64_simd_poly128",
+  "__builtin_aarch64_simd_ti",
+  "__builtin_aarch64_simd_uqi",
+  "__builtin_aarch64_simd_uhi",
+  "__builtin_aarch64_simd_usi",
+  "__builtin_aarch64_simd_udi",
+  "__builtin_aarch64_simd_ei",
+  "__builtin_aarch64_simd_oi",
+  "__builtin_aarch64_simd_ci",
+  "__builtin_aarch64_simd_xi",
+  NULL
+};
 
-  switch (mode)
-    {
-      INT_TYPES
-      case SFmode:
-	if (!float_aarch64_type_node)
-	  {
-	    float_aarch64_type_node = make_node (REAL_TYPE);
-	    TYPE_PRECISION (float_aarch64_type_node) = FLOAT_TYPE_SIZE;
-	    layout_type (float_aarch64_type_node);
-	  }
-	return float_aarch64_type_node;
-	break;
-      case DFmode:
-	if (!double_aarch64_type_node)
-	  {
-	    double_aarch64_type_node = make_node (REAL_TYPE);
-	    TYPE_PRECISION (double_aarch64_type_node) = DOUBLE_TYPE_SIZE;
-	    layout_type (double_aarch64_type_node);
-	  }
-	return double_aarch64_type_node;
-	break;
-      default:
-	gcc_unreachable ();
-    }
-}
+#define ENTRY(E, M, Q, G) E,
+enum aarch64_simd_type
+{
+#include "aarch64-simd-builtin-types.def"
+};
+#undef ENTRY
 
-tree
-aarch64_build_vector_type (enum machine_mode mode,
-			   bool unsigned_p,
-			   bool poly_p)
+struct aarch64_simd_type_info
 {
+  enum aarch64_simd_type type;
+
+  /* Internal type name.  */
+  const char *name;
+
+  /* Internal type name(mangled).  The mangled names conform to the
+     AAPCS64 (see "Procedure Call Standard for the ARM 64-bit Architecture",
+     Appendix A).  To qualify for emission with the mangled names defined in
+     that document, a vector type must not only be of the correct mode but also
+     be of the correct internal AdvSIMD vector type (e.g. __Int8x8_t); these
+     types are registered by aarch64_init_simd_builtin_types ().  In other
+     words, vector types defined in other ways e.g. via vector_size attribute
+     will get default mangled names.  */
+  const char *mangle;
+
+  /* Internal type.  */
+  tree itype;
+
+  /* Element type.  */
   tree eltype;
 
-#define VECTOR_TYPES \
-  AARCH64_TYPE_BUILDER (V16QI) \
-  AARCH64_TYPE_BUILDER (V8HI) \
-  AARCH64_TYPE_BUILDER (V4SI) \
-  AARCH64_TYPE_BUILDER (V2DI) \
-  AARCH64_TYPE_BUILDER (V8QI) \
-  AARCH64_TYPE_BUILDER (V4HI) \
-  AARCH64_TYPE_BUILDER (V2SI) \
-  \
-  AARCH64_TYPE_BUILDER (V4SF) \
-  AARCH64_TYPE_BUILDER (V2DF) \
-  AARCH64_TYPE_BUILDER (V2SF) \
-/* Declare our "cache" of values.  */
-#undef AARCH64_TYPE_BUILDER
-#define AARCH64_TYPE_BUILDER(X) \
-  static tree X##_aarch64_type_node_s = NULL; \
-  static tree X##_aarch64_type_node_u = NULL; \
-  static tree X##_aarch64_type_node_p = NULL;
-
-  VECTOR_TYPES
-
-  gcc_assert (VECTOR_MODE_P (mode));
-
-#undef AARCH64_TYPE_BUILDER
-#define AARCH64_TYPE_BUILDER(X) \
-  case X##mode: \
-    if (unsigned_p) \
-      return X##_aarch64_type_node_u \
-	     ? X##_aarch64_type_node_u \
-	     : X##_aarch64_type_node_u \
-		= build_vector_type_for_mode (aarch64_build_scalar_type \
-						(GET_MODE_INNER (mode), \
-						 unsigned_p, poly_p), mode); \
-    else if (poly_p) \
-       return X##_aarch64_type_node_p \
-	      ? X##_aarch64_type_node_p \
-	      : X##_aarch64_type_node_p \
-		= build_vector_type_for_mode (aarch64_build_scalar_type \
-						(GET_MODE_INNER (mode), \
-						 unsigned_p, poly_p), mode); \
-    else \
-       return X##_aarch64_type_node_s \
-	      ? X##_aarch64_type_node_s \
-	      : X##_aarch64_type_node_s \
-		= build_vector_type_for_mode (aarch64_build_scalar_type \
-						(GET_MODE_INNER (mode), \
-						 unsigned_p, poly_p), mode); \
-    break;
+  /* Machine mode the internal type maps to.  */
+  enum machine_mode mode;
 
-  switch (mode)
+  /* Qualifiers.  */
+  enum aarch64_type_qualifiers q;
+};
+
+#define ENTRY(E, M, Q, G)  \
+  {E, "__" #E, #G "__" #E, NULL_TREE, NULL_TREE, M##mode, qualifier_##Q},
+static struct aarch64_simd_type_info aarch64_simd_types [] = {
+#include "aarch64-simd-builtin-types.def"
+};
+#undef ENTRY
+
+static tree aarch64_simd_intOI_type_node = NULL_TREE;
+static tree aarch64_simd_intEI_type_node = NULL_TREE;
+static tree aarch64_simd_intCI_type_node = NULL_TREE;
+static tree aarch64_simd_intXI_type_node = NULL_TREE;
+
+static const char *
+aarch64_mangle_builtin_scalar_type (const_tree type)
+{
+  int i = 0;
+
+  while (aarch64_scalar_builtin_types[i] != NULL)
     {
-      default:
-	eltype = aarch64_build_scalar_type (GET_MODE_INNER (mode),
-					    unsigned_p, poly_p);
-	return build_vector_type_for_mode (eltype, mode);
-	break;
-      VECTOR_TYPES
-   }
+      const char *name = aarch64_scalar_builtin_types[i];
+
+      if (TREE_CODE (TYPE_NAME (type)) == TYPE_DECL
+	  && DECL_NAME (TYPE_NAME (type))
+	  && !strcmp (IDENTIFIER_POINTER (DECL_NAME (TYPE_NAME (type))), name))
+	return aarch64_scalar_builtin_types[i];
+      i++;
+    }
+  return NULL;
 }
 
-tree
-aarch64_build_type (enum machine_mode mode, bool unsigned_p, bool poly_p)
+static const char *
+aarch64_mangle_builtin_vector_type (const_tree type)
 {
-  if (VECTOR_MODE_P (mode))
-    return aarch64_build_vector_type (mode, unsigned_p, poly_p);
+  int i;
+  int nelts = sizeof (aarch64_simd_types) / sizeof (aarch64_simd_types[0]);
+
+  for (i = 0; i < nelts; i++)
+    if (aarch64_simd_types[i].mode ==  TYPE_MODE (type)
+	&& TYPE_NAME (type)
+	&& TREE_CODE (TYPE_NAME (type)) == TYPE_DECL
+	&& DECL_NAME (TYPE_NAME (type))
+	&& !strcmp
+	     (IDENTIFIER_POINTER (DECL_NAME (TYPE_NAME (type))),
+	      aarch64_simd_types[i].name))
+      return aarch64_simd_types[i].mangle;
+
+  return NULL;
+}
+
+const char *
+aarch64_mangle_builtin_type (const_tree type)
+{
+  if (TREE_CODE (type) == VECTOR_TYPE)
+    return aarch64_mangle_builtin_vector_type (type);
   else
-    return aarch64_build_scalar_type (mode, unsigned_p, poly_p);
+    return aarch64_mangle_builtin_scalar_type (type);
 }
 
-tree
-aarch64_build_signed_type (enum machine_mode mode)
+static tree
+aarch64_simd_builtin_std_type (enum machine_mode mode,
+			       enum aarch64_type_qualifiers q)
 {
-  return aarch64_build_type (mode, false, false);
+#define QUAL_TYPE(M)  \
+  ((q == qualifier_none) ? int##M##_type_node : unsigned_int##M##_type_node);
+  switch (mode)
+    {
+    case QImode:
+      return QUAL_TYPE (QI);
+    case HImode:
+      return QUAL_TYPE (HI);
+    case SImode:
+      return QUAL_TYPE (SI);
+    case DImode:
+      return QUAL_TYPE (DI);
+    case TImode:
+      return QUAL_TYPE (TI);
+    case OImode:
+      return aarch64_simd_intOI_type_node;
+    case EImode:
+      return aarch64_simd_intEI_type_node;
+    case CImode:
+      return aarch64_simd_intCI_type_node;
+    case XImode:
+      return aarch64_simd_intXI_type_node;
+    case SFmode:
+      return float_type_node;
+    case DFmode:
+      return double_type_node;
+    default:
+      gcc_unreachable ();
+    }
+#undef QUAL_TYPE
 }
 
-tree
-aarch64_build_unsigned_type (enum machine_mode mode)
+static tree
+aarch64_lookup_simd_builtin_type (enum machine_mode mode,
+				  enum aarch64_type_qualifiers q)
 {
-  return aarch64_build_type (mode, true, false);
+  int i;
+  int nelts = sizeof (aarch64_simd_types) / sizeof (aarch64_simd_types[0]);
+
+  /* Non-poly scalar modes map to standard types not in the table.  */
+  if (q != qualifier_poly && !VECTOR_MODE_P (mode))
+    return aarch64_simd_builtin_std_type (mode, q);
+
+  for (i = 0; i < nelts; i++)
+    if (aarch64_simd_types[i].mode == mode
+	&& aarch64_simd_types[i].q == q)
+      return aarch64_simd_types[i].itype;
+
+  return NULL_TREE;
 }
 
-tree
-aarch64_build_poly_type (enum machine_mode mode)
+static tree
+aarch64_simd_builtin_type (enum machine_mode mode,
+			   bool unsigned_p, bool poly_p)
 {
-  return aarch64_build_type (mode, false, true);
+  if (poly_p)
+    return aarch64_lookup_simd_builtin_type (mode, qualifier_poly);
+  else if (unsigned_p)
+    return aarch64_lookup_simd_builtin_type (mode, qualifier_unsigned);
+  else
+    return aarch64_lookup_simd_builtin_type (mode, qualifier_none);
 }
 
 static void
-aarch64_init_simd_builtins (void)
+aarch64_init_simd_builtin_types (void)
 {
-  unsigned int i, fcode = AARCH64_SIMD_BUILTIN_BASE + 1;
+  int i;
+  int nelts = sizeof (aarch64_simd_types) / sizeof (aarch64_simd_types[0]);
+  tree tdecl;
+
+  /* Init all the element types built by the front-end.  */
+  aarch64_simd_types[Int8x8_t].eltype = intQI_type_node;
+  aarch64_simd_types[Int8x16_t].eltype = intQI_type_node;
+  aarch64_simd_types[Int16x4_t].eltype = intHI_type_node;
+  aarch64_simd_types[Int16x8_t].eltype = intHI_type_node;
+  aarch64_simd_types[Int32x2_t].eltype = intSI_type_node;
+  aarch64_simd_types[Int32x4_t].eltype = intSI_type_node;
+  aarch64_simd_types[Int64x1_t].eltype = intDI_type_node;
+  aarch64_simd_types[Int64x2_t].eltype = intDI_type_node;
+  aarch64_simd_types[Uint8x8_t].eltype = unsigned_intQI_type_node;
+  aarch64_simd_types[Uint8x16_t].eltype = unsigned_intQI_type_node;
+  aarch64_simd_types[Uint16x4_t].eltype = unsigned_intHI_type_node;
+  aarch64_simd_types[Uint16x8_t].eltype = unsigned_intHI_type_node;
+  aarch64_simd_types[Uint32x2_t].eltype = unsigned_intSI_type_node;
+  aarch64_simd_types[Uint32x4_t].eltype = unsigned_intSI_type_node;
+  aarch64_simd_types[Uint64x1_t].eltype = unsigned_intDI_type_node;
+  aarch64_simd_types[Uint64x2_t].eltype = unsigned_intDI_type_node;
+
+  /* Poly types are a world of their own.  */
+  aarch64_simd_types[Poly8_t].eltype = aarch64_simd_types[Poly8_t].itype =
+    build_distinct_type_copy (unsigned_intQI_type_node);
+  aarch64_simd_types[Poly16_t].eltype = aarch64_simd_types[Poly16_t].itype =
+    build_distinct_type_copy (unsigned_intHI_type_node);
+  aarch64_simd_types[Poly64_t].eltype = aarch64_simd_types[Poly64_t].itype =
+    build_distinct_type_copy (unsigned_intDI_type_node);
+  aarch64_simd_types[Poly128_t].eltype = aarch64_simd_types[Poly128_t].itype =
+    build_distinct_type_copy (unsigned_intTI_type_node);
+  /* Init poly vector element types with scalar poly types.  */
+  aarch64_simd_types[Poly8x8_t].eltype = aarch64_simd_types[Poly8_t].itype;
+  aarch64_simd_types[Poly8x16_t].eltype = aarch64_simd_types[Poly8_t].itype;
+  aarch64_simd_types[Poly16x4_t].eltype = aarch64_simd_types[Poly16_t].itype;
+  aarch64_simd_types[Poly16x8_t].eltype = aarch64_simd_types[Poly16_t].itype;
+  aarch64_simd_types[Poly64x1_t].eltype = aarch64_simd_types[Poly64_t].itype;
+  aarch64_simd_types[Poly64x2_t].eltype = aarch64_simd_types[Poly64_t].itype;
+
+  /* Continue with standard types.  */
+  aarch64_simd_types[Float32x2_t].eltype = float_type_node;
+  aarch64_simd_types[Float32x4_t].eltype = float_type_node;
+  aarch64_simd_types[Float64x1_t].eltype = double_type_node;
+  aarch64_simd_types[Float64x2_t].eltype = double_type_node;
+
+  for (i = 0; i < nelts; i++)
+    {
+      tree eltype = aarch64_simd_types[i].eltype;
+      enum machine_mode mode = aarch64_simd_types[i].mode;
+      enum aarch64_simd_type type = aarch64_simd_types[i].type;
+
+      if (aarch64_simd_types[i].itype == NULL)
+	aarch64_simd_types[i].itype =
+	  build_distinct_type_copy
+	    (build_vector_type (eltype, GET_MODE_NUNITS (mode)));
+
+      tdecl = add_builtin_type (aarch64_simd_types[i].name,
+				aarch64_simd_types[i].itype);
+      TYPE_NAME (aarch64_simd_types[i].itype) = tdecl;
+      SET_TYPE_STRUCTURAL_EQUALITY (aarch64_simd_types[i].itype);
+    }
 
-  /* Signed scalar type nodes.  */
-  tree aarch64_simd_intQI_type_node = aarch64_build_signed_type (QImode);
-  tree aarch64_simd_intHI_type_node = aarch64_build_signed_type (HImode);
-  tree aarch64_simd_intSI_type_node = aarch64_build_signed_type (SImode);
-  tree aarch64_simd_intDI_type_node = aarch64_build_signed_type (DImode);
-  tree aarch64_simd_intTI_type_node = aarch64_build_signed_type (TImode);
-  tree aarch64_simd_intEI_type_node = aarch64_build_signed_type (EImode);
-  tree aarch64_simd_intOI_type_node = aarch64_build_signed_type (OImode);
-  tree aarch64_simd_intCI_type_node = aarch64_build_signed_type (CImode);
-  tree aarch64_simd_intXI_type_node = aarch64_build_signed_type (XImode);
-
-  /* Unsigned scalar type nodes.  */
-  tree aarch64_simd_intUQI_type_node = aarch64_build_unsigned_type (QImode);
-  tree aarch64_simd_intUHI_type_node = aarch64_build_unsigned_type (HImode);
-  tree aarch64_simd_intUSI_type_node = aarch64_build_unsigned_type (SImode);
-  tree aarch64_simd_intUDI_type_node = aarch64_build_unsigned_type (DImode);
-
-  /* Poly scalar type nodes.  */
-  tree aarch64_simd_polyQI_type_node = aarch64_build_poly_type (QImode);
-  tree aarch64_simd_polyHI_type_node = aarch64_build_poly_type (HImode);
-  tree aarch64_simd_polyDI_type_node = aarch64_build_poly_type (DImode);
-  tree aarch64_simd_polyTI_type_node = aarch64_build_poly_type (TImode);
-
-  /* Float type nodes.  */
-  tree aarch64_simd_float_type_node = aarch64_build_signed_type (SFmode);
-  tree aarch64_simd_double_type_node = aarch64_build_signed_type (DFmode);
-
-  /* Define typedefs which exactly correspond to the modes we are basing vector
-     types on.  If you change these names you'll need to change
-     the table used by aarch64_mangle_type too.  */
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intQI_type_node,
+#define AARCH64_BUILD_SIGNED_TYPE(mode)  \
+  make_signed_type (GET_MODE_PRECISION (mode));
+  aarch64_simd_intOI_type_node = AARCH64_BUILD_SIGNED_TYPE (OImode);
+  aarch64_simd_intEI_type_node = AARCH64_BUILD_SIGNED_TYPE (EImode);
+  aarch64_simd_intCI_type_node = AARCH64_BUILD_SIGNED_TYPE (CImode);
+  aarch64_simd_intXI_type_node = AARCH64_BUILD_SIGNED_TYPE (XImode);
+#undef AARCH64_BUILD_SIGNED_TYPE
+
+  tdecl = add_builtin_type
+	    ("__builtin_aarch64_simd_ei" , aarch64_simd_intEI_type_node);
+  TYPE_NAME (aarch64_simd_intEI_type_node) = tdecl;
+  tdecl = add_builtin_type
+	    ("__builtin_aarch64_simd_oi" , aarch64_simd_intOI_type_node);
+  TYPE_NAME (aarch64_simd_intOI_type_node) = tdecl;
+  tdecl = add_builtin_type
+	    ("__builtin_aarch64_simd_ci" , aarch64_simd_intCI_type_node);
+  TYPE_NAME (aarch64_simd_intCI_type_node) = tdecl;
+  tdecl = add_builtin_type
+	    ("__builtin_aarch64_simd_xi" , aarch64_simd_intXI_type_node);
+  TYPE_NAME (aarch64_simd_intXI_type_node) = tdecl;
+}
+
+static void
+aarch64_init_simd_builtin_scalar_types (void)
+{
+  /* Define typedefs for all the standard scalar types.  */
+  (*lang_hooks.types.register_builtin_type) (intQI_type_node,
 					     "__builtin_aarch64_simd_qi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intHI_type_node,
+  (*lang_hooks.types.register_builtin_type) (intHI_type_node,
 					     "__builtin_aarch64_simd_hi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intSI_type_node,
+  (*lang_hooks.types.register_builtin_type) (intSI_type_node,
 					     "__builtin_aarch64_simd_si");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_float_type_node,
+  (*lang_hooks.types.register_builtin_type) (float_type_node,
 					     "__builtin_aarch64_simd_sf");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intDI_type_node,
+  (*lang_hooks.types.register_builtin_type) (intDI_type_node,
 					     "__builtin_aarch64_simd_di");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_double_type_node,
+  (*lang_hooks.types.register_builtin_type) (double_type_node,
 					     "__builtin_aarch64_simd_df");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_polyQI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intQI_type_node,
 					     "__builtin_aarch64_simd_poly8");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_polyHI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intHI_type_node,
 					     "__builtin_aarch64_simd_poly16");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_polyDI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intDI_type_node,
 					     "__builtin_aarch64_simd_poly64");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_polyTI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intTI_type_node,
 					     "__builtin_aarch64_simd_poly128");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intTI_type_node,
+  (*lang_hooks.types.register_builtin_type) (intTI_type_node,
 					     "__builtin_aarch64_simd_ti");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intEI_type_node,
-					     "__builtin_aarch64_simd_ei");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intOI_type_node,
-					     "__builtin_aarch64_simd_oi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intCI_type_node,
-					     "__builtin_aarch64_simd_ci");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intXI_type_node,
-					     "__builtin_aarch64_simd_xi");
 
   /* Unsigned integer types for various mode sizes.  */
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intUQI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intQI_type_node,
 					     "__builtin_aarch64_simd_uqi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intUHI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intHI_type_node,
 					     "__builtin_aarch64_simd_uhi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intUSI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intSI_type_node,
 					     "__builtin_aarch64_simd_usi");
-  (*lang_hooks.types.register_builtin_type) (aarch64_simd_intUDI_type_node,
+  (*lang_hooks.types.register_builtin_type) (unsigned_intDI_type_node,
 					     "__builtin_aarch64_simd_udi");
+}
+
+static void
+aarch64_init_simd_builtins (void)
+{
+  unsigned int i, fcode = AARCH64_SIMD_BUILTIN_BASE + 1;
+
+  aarch64_init_simd_builtin_types ();
+
+  /* Strong-typing hasn't been implemented for all AdvSIMD builtin intrinsics.
+     Therefore we need to preserve the old __builtin scalar types.  It can be
+     removed once all the intrinsics become strongly typed using the qualifier
+     system.  */
+  aarch64_init_simd_builtin_scalar_types ();
 
   for (i = 0; i < ARRAY_SIZE (aarch64_simd_builtin_data); i++, fcode++)
     {
@@ -800,9 +875,11 @@ aarch64_init_simd_builtins (void)
 	  if (qualifiers & qualifier_pointer && VECTOR_MODE_P (op_mode))
 	    op_mode = GET_MODE_INNER (op_mode);
 
-	  eltype = aarch64_build_type (op_mode,
-				       qualifiers & qualifier_unsigned,
-				       qualifiers & qualifier_poly);
+	  eltype = aarch64_simd_builtin_type
+		     (op_mode,
+		      (qualifiers & qualifier_unsigned) != 0,
+		      (qualifiers & qualifier_poly) != 0);
+	  gcc_assert (eltype != NULL);
 
 	  /* Add qualifiers.  */
 	  if (qualifiers & qualifier_const)
@@ -840,13 +917,14 @@ aarch64_init_simd_builtins (void)
 static void
 aarch64_init_crc32_builtins ()
 {
-  tree usi_type = aarch64_build_unsigned_type (SImode);
+  tree usi_type = aarch64_simd_builtin_std_type (SImode, qualifier_unsigned);
   unsigned int i = 0;
 
   for (i = 0; i < ARRAY_SIZE (aarch64_crc_builtin_data); ++i)
     {
       aarch64_crc_builtin_datum* d = &aarch64_crc_builtin_data[i];
-      tree argtype = aarch64_build_unsigned_type (d->mode);
+      tree argtype = aarch64_simd_builtin_std_type (d->mode,
+						    qualifier_unsigned);
       tree ftype = build_function_type_list (usi_type, usi_type, argtype, NULL_TREE);
       tree fndecl = add_builtin_function (d->name, ftype, d->fcode,
                                           BUILT_IN_MD, NULL, NULL_TREE);
@@ -1348,18 +1426,16 @@ aarch64_fold_builtin (tree fndecl, int n_args ATTRIBUTE_UNUSED, tree *args,
       VAR1 (REINTERP_SS, reinterpretv2si, 0, df)
       VAR1 (REINTERP_SS, reinterpretv2sf, 0, df)
       BUILTIN_VD (REINTERP_SS, reinterpretdf, 0)
-      BUILTIN_VD (REINTERP_SU, reinterpretdf, 0)
+      BUILTIN_VD_BHSI (REINTERP_SU, reinterpretdf, 0)
       VAR1 (REINTERP_US, reinterpretdi, 0, df)
       VAR1 (REINTERP_US, reinterpretv8qi, 0, df)
       VAR1 (REINTERP_US, reinterpretv4hi, 0, df)
       VAR1 (REINTERP_US, reinterpretv2si, 0, df)
-      VAR1 (REINTERP_US, reinterpretv2sf, 0, df)
-      BUILTIN_VD (REINTERP_SP, reinterpretdf, 0)
+      VAR1 (REINTERP_SP, reinterpretdf, 0, v8qi)
+      VAR1 (REINTERP_SP, reinterpretdf, 0, v4hi)
       VAR1 (REINTERP_PS, reinterpretdi, 0, df)
       VAR1 (REINTERP_PS, reinterpretv8qi, 0, df)
       VAR1 (REINTERP_PS, reinterpretv4hi, 0, df)
-      VAR1 (REINTERP_PS, reinterpretv2si, 0, df)
-      VAR1 (REINTERP_PS, reinterpretv2sf, 0, df)
 	return fold_build1 (VIEW_CONVERT_EXPR, type, args[0]);
       VAR1 (UNOP, floatv2si, 2, v2sf)
       VAR1 (UNOP, floatv4si, 2, v4sf)
diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
index 53023ba..a1c7708 100644
--- a/gcc/config/aarch64/aarch64-protos.h
+++ b/gcc/config/aarch64/aarch64-protos.h
@@ -208,6 +208,7 @@ bool aarch64_simd_valid_immediate (rtx, enum machine_mode, bool,
 				   struct simd_immediate_info *);
 bool aarch64_symbolic_address_p (rtx);
 bool aarch64_uimm12_shift (HOST_WIDE_INT);
+const char *aarch64_mangle_builtin_type (const_tree);
 const char *aarch64_output_casesi (rtx *);
 const char *aarch64_rewrite_selected_cpu (const char *name);
 
diff --git a/gcc/config/aarch64/aarch64-simd-builtin-types.def b/gcc/config/aarch64/aarch64-simd-builtin-types.def
new file mode 100644
index 0000000..aa6a84e
--- /dev/null
+++ b/gcc/config/aarch64/aarch64-simd-builtin-types.def
@@ -0,0 +1,50 @@
+/* Builtin AdvSIMD types.
+   Copyright (C) 2014 Free Software Foundation, Inc.
+   Contributed by ARM Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+  ENTRY (Int8x8_t, V8QI, none, 10)
+  ENTRY (Int8x16_t, V16QI, none, 11)
+  ENTRY (Int16x4_t, V4HI, none, 11)
+  ENTRY (Int16x8_t, V8HI, none, 11)
+  ENTRY (Int32x2_t, V2SI, none, 11)
+  ENTRY (Int32x4_t, V4SI, none, 11)
+  ENTRY (Int64x1_t, DI, none, 11)
+  ENTRY (Int64x2_t, V2DI, none, 11)
+  ENTRY (Uint8x8_t, V8QI, unsigned, 11)
+  ENTRY (Uint8x16_t, V16QI, unsigned, 12)
+  ENTRY (Uint16x4_t, V4HI, unsigned, 12)
+  ENTRY (Uint16x8_t, V8HI, unsigned, 12)
+  ENTRY (Uint32x2_t, V2SI, unsigned, 12)
+  ENTRY (Uint32x4_t, V4SI, unsigned, 12)
+  ENTRY (Uint64x1_t, DI, unsigned, 12)
+  ENTRY (Uint64x2_t, V2DI, unsigned, 12)
+  ENTRY (Poly8_t, QI, poly, 9)
+  ENTRY (Poly16_t, HI, poly, 10)
+  ENTRY (Poly64_t, DI, poly, 10)
+  ENTRY (Poly128_t, TI, poly, 11)
+  ENTRY (Poly8x8_t, V8QI, poly, 11)
+  ENTRY (Poly8x16_t, V16QI, poly, 12)
+  ENTRY (Poly16x4_t, V4HI, poly, 12)
+  ENTRY (Poly16x8_t, V8HI, poly, 12)
+  ENTRY (Poly64x1_t, DI, poly, 12)
+  ENTRY (Poly64x2_t, V2DI, poly, 12)
+  ENTRY (Float32x2_t, V2SF, none, 13)
+  ENTRY (Float32x4_t, V4SF, none, 13)
+  ENTRY (Float64x1_t, DF, none, 13)
+  ENTRY (Float64x2_t, V2DF, none, 13)
diff --git a/gcc/config/aarch64/aarch64-simd-builtins.def b/gcc/config/aarch64/aarch64-simd-builtins.def
index faa0858..f20f414 100644
--- a/gcc/config/aarch64/aarch64-simd-builtins.def
+++ b/gcc/config/aarch64/aarch64-simd-builtins.def
@@ -58,21 +58,19 @@
   VAR1 (REINTERP_SS, reinterpretv2sf, 0, df)
   BUILTIN_VD (REINTERP_SS, reinterpretdf, 0)
 
-  BUILTIN_VD (REINTERP_SU, reinterpretdf, 0)
+  BUILTIN_VD_BHSI (REINTERP_SU, reinterpretdf, 0)
+
+  VAR1 (REINTERP_SP, reinterpretdf, 0, v8qi)
+  VAR1 (REINTERP_SP, reinterpretdf, 0, v4hi)
 
   VAR1 (REINTERP_US, reinterpretdi, 0, df)
   VAR1 (REINTERP_US, reinterpretv8qi, 0, df)
   VAR1 (REINTERP_US, reinterpretv4hi, 0, df)
   VAR1 (REINTERP_US, reinterpretv2si, 0, df)
-  VAR1 (REINTERP_US, reinterpretv2sf, 0, df)
-
-  BUILTIN_VD (REINTERP_SP, reinterpretdf, 0)
 
   VAR1 (REINTERP_PS, reinterpretdi, 0, df)
   VAR1 (REINTERP_PS, reinterpretv8qi, 0, df)
   VAR1 (REINTERP_PS, reinterpretv4hi, 0, df)
-  VAR1 (REINTERP_PS, reinterpretv2si, 0, df)
-  VAR1 (REINTERP_PS, reinterpretv2sf, 0, df)
 
   BUILTIN_VDQ_I (BINOP, dup_lane, 0)
   /* Implemented by aarch64_<sur>q<r>shl<mode>.  */
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index f0aafbd..e3d8c69 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -7332,51 +7332,6 @@ aarch64_autovectorize_vector_sizes (void)
   return (16 | 8);
 }
 
-/* A table to help perform AArch64-specific name mangling for AdvSIMD
-   vector types in order to conform to the AAPCS64 (see "Procedure
-   Call Standard for the ARM 64-bit Architecture", Appendix A).  To
-   qualify for emission with the mangled names defined in that document,
-   a vector type must not only be of the correct mode but also be
-   composed of AdvSIMD vector element types (e.g.
-   _builtin_aarch64_simd_qi); these types are registered by
-   aarch64_init_simd_builtins ().  In other words, vector types defined
-   in other ways e.g. via vector_size attribute will get default
-   mangled names.  */
-typedef struct
-{
-  enum machine_mode mode;
-  const char *element_type_name;
-  const char *mangled_name;
-} aarch64_simd_mangle_map_entry;
-
-static aarch64_simd_mangle_map_entry aarch64_simd_mangle_map[] = {
-  /* 64-bit containerized types.  */
-  { V8QImode,  "__builtin_aarch64_simd_qi",     "10__Int8x8_t" },
-  { V8QImode,  "__builtin_aarch64_simd_uqi",    "11__Uint8x8_t" },
-  { V4HImode,  "__builtin_aarch64_simd_hi",     "11__Int16x4_t" },
-  { V4HImode,  "__builtin_aarch64_simd_uhi",    "12__Uint16x4_t" },
-  { V2SImode,  "__builtin_aarch64_simd_si",     "11__Int32x2_t" },
-  { V2SImode,  "__builtin_aarch64_simd_usi",    "12__Uint32x2_t" },
-  { V2SFmode,  "__builtin_aarch64_simd_sf",     "13__Float32x2_t" },
-  { V8QImode,  "__builtin_aarch64_simd_poly8",  "11__Poly8x8_t" },
-  { V4HImode,  "__builtin_aarch64_simd_poly16", "12__Poly16x4_t" },
-  /* 128-bit containerized types.  */
-  { V16QImode, "__builtin_aarch64_simd_qi",     "11__Int8x16_t" },
-  { V16QImode, "__builtin_aarch64_simd_uqi",    "12__Uint8x16_t" },
-  { V8HImode,  "__builtin_aarch64_simd_hi",     "11__Int16x8_t" },
-  { V8HImode,  "__builtin_aarch64_simd_uhi",    "12__Uint16x8_t" },
-  { V4SImode,  "__builtin_aarch64_simd_si",     "11__Int32x4_t" },
-  { V4SImode,  "__builtin_aarch64_simd_usi",    "12__Uint32x4_t" },
-  { V2DImode,  "__builtin_aarch64_simd_di",     "11__Int64x2_t" },
-  { V2DImode,  "__builtin_aarch64_simd_udi",    "12__Uint64x2_t" },
-  { V4SFmode,  "__builtin_aarch64_simd_sf",     "13__Float32x4_t" },
-  { V2DFmode,  "__builtin_aarch64_simd_df",     "13__Float64x2_t" },
-  { V16QImode, "__builtin_aarch64_simd_poly8",  "12__Poly8x16_t" },
-  { V8HImode,  "__builtin_aarch64_simd_poly16", "12__Poly16x8_t" },
-  { V2DImode,  "__builtin_aarch64_simd_poly64", "12__Poly64x2_t" },
-  { VOIDmode, NULL, NULL }
-};
-
 /* Implement TARGET_MANGLE_TYPE.  */
 
 static const char *
@@ -7387,25 +7342,10 @@ aarch64_mangle_type (const_tree type)
   if (lang_hooks.types_compatible_p (CONST_CAST_TREE (type), va_list_type))
     return "St9__va_list";
 
-  /* Check the mode of the vector type, and the name of the vector
-     element type, against the table.  */
-  if (TREE_CODE (type) == VECTOR_TYPE)
-    {
-      aarch64_simd_mangle_map_entry *pos = aarch64_simd_mangle_map;
-
-      while (pos->mode != VOIDmode)
-	{
-	  tree elt_type = TREE_TYPE (type);
-
-	  if (pos->mode == TYPE_MODE (type)
-	      && TREE_CODE (TYPE_NAME (elt_type)) == TYPE_DECL
-	      && !strcmp (IDENTIFIER_POINTER (DECL_NAME (TYPE_NAME (elt_type))),
-			  pos->element_type_name))
-	    return pos->mangled_name;
-
-	  pos++;
-	}
-    }
+  /* Mangle AArch64-specific internal types.  TYPE_NAME is non-NULL_TREE for
+     builtin types.  */
+  if (TYPE_NAME (type) != NULL)
+    return aarch64_mangle_builtin_type (type);
 
   /* Use the default mangling.  */
   return NULL;
diff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h
index 3ed8a98..50d294e 100644
--- a/gcc/config/aarch64/arm_neon.h
+++ b/gcc/config/aarch64/arm_neon.h
@@ -32,66 +32,45 @@
 #define __AARCH64_UINT64_C(__C) ((uint64_t) __C)
 #define __AARCH64_INT64_C(__C) ((int64_t) __C)
 
-typedef __builtin_aarch64_simd_qi int8x8_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_hi int16x4_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_si int32x2_t
-  __attribute__ ((__vector_size__ (8)));
+typedef __Int8x8_t int8x8_t;
+typedef __Int16x4_t int16x4_t;
+typedef __Int32x2_t int32x2_t;
 typedef int64_t int64x1_t;
 typedef int32_t int32x1_t;
 typedef int16_t int16x1_t;
 typedef int8_t int8x1_t;
 typedef double float64x1_t;
-typedef __builtin_aarch64_simd_sf float32x2_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_poly8 poly8x8_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_poly16 poly16x4_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_uqi uint8x8_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_uhi uint16x4_t
-  __attribute__ ((__vector_size__ (8)));
-typedef __builtin_aarch64_simd_usi uint32x2_t
-  __attribute__ ((__vector_size__ (8)));
+typedef __Float32x2_t float32x2_t;
+typedef __Poly8x8_t poly8x8_t;
+typedef __Poly16x4_t poly16x4_t;
+typedef __Uint8x8_t uint8x8_t;
+typedef __Uint16x4_t uint16x4_t;
+typedef __Uint32x2_t uint32x2_t;
 typedef uint64_t uint64x1_t;
 typedef uint32_t uint32x1_t;
 typedef uint16_t uint16x1_t;
 typedef uint8_t uint8x1_t;
-typedef __builtin_aarch64_simd_qi int8x16_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_hi int16x8_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_si int32x4_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_di int64x2_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_sf float32x4_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_df float64x2_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_poly8 poly8x16_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_poly16 poly16x8_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_poly64 poly64x2_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_uqi uint8x16_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_uhi uint16x8_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_usi uint32x4_t
-  __attribute__ ((__vector_size__ (16)));
-typedef __builtin_aarch64_simd_udi uint64x2_t
-  __attribute__ ((__vector_size__ (16)));
+typedef __Int8x16_t int8x16_t;
+typedef __Int16x8_t int16x8_t;
+typedef __Int32x4_t int32x4_t;
+typedef __Int64x2_t int64x2_t;
+typedef __Float32x4_t float32x4_t;
+typedef __Float64x2_t float64x2_t;
+typedef __Poly8x16_t poly8x16_t;
+typedef __Poly16x8_t poly16x8_t;
+typedef __Poly64x2_t poly64x2_t;
+typedef __Uint8x16_t uint8x16_t;
+typedef __Uint16x8_t uint16x8_t;
+typedef __Uint32x4_t uint32x4_t;
+typedef __Uint64x2_t uint64x2_t;
+
+typedef __Poly8_t poly8_t;
+typedef __Poly16_t poly16_t;
+typedef __Poly64_t poly64_t;
+typedef __Poly128_t poly128_t;
 
 typedef float float32_t;
 typedef double float64_t;
-typedef __builtin_aarch64_simd_poly8 poly8_t;
-typedef __builtin_aarch64_simd_poly16 poly16_t;
-typedef __builtin_aarch64_simd_poly64 poly64_t;
-typedef __builtin_aarch64_simd_poly128 poly128_t;
 
 typedef struct int8x8x2_t
 {
diff --git a/gcc/config/aarch64/t-aarch64 b/gcc/config/aarch64/t-aarch64
index 158fbb5..d331e36 100644
--- a/gcc/config/aarch64/t-aarch64
+++ b/gcc/config/aarch64/t-aarch64
@@ -31,7 +31,8 @@ aarch64-builtins.o: $(srcdir)/config/aarch64/aarch64-builtins.c $(CONFIG_H) \
   $(SYSTEM_H) coretypes.h $(TM_H) \
   $(RTL_H) $(TREE_H) expr.h $(TM_P_H) $(RECOG_H) langhooks.h \
   $(DIAGNOSTIC_CORE_H) $(OPTABS_H) \
-  $(srcdir)/config/aarch64/aarch64-simd-builtins.def
+  $(srcdir)/config/aarch64/aarch64-simd-builtins.def \
+  $(srcdir)/config/aarch64/aarch64-simd-builtin-types.def
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/aarch64/aarch64-builtins.c
 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [Patch, AArch64] Restructure arm_neon.h vector types' implementation.
  2014-06-27 15:32   ` Tejas Belagod
@ 2014-06-27 16:01     ` Yufeng Zhang
  2014-08-22 12:02       ` Tejas Belagod
  2014-07-04 14:27     ` James Greenhalgh
  2014-07-08 16:00     ` James Greenhalgh
  2 siblings, 1 reply; 11+ messages in thread
From: Yufeng Zhang @ 2014-06-27 16:01 UTC (permalink / raw)
  To: Tejas Belagod; +Cc: gcc-patches, Marc Glisse, Marcus Shawcroft

On 27 June 2014 16:32, Tejas Belagod <tbelagod@arm.com> wrote:
>>>
>>> 2014-06-23  Tejas Belagod  <tejas.belagod@arm.com>
> diff --git a/gcc/config/aarch64/aarch64-simd-builtin-types.def
> b/gcc/config/aarch64/aarch64-simd-builtin-types.def
> new file mode 100644
> index 0000000..aa6a84e
> --- /dev/null
> +++ b/gcc/config/aarch64/aarch64-simd-builtin-types.def
> @@ -0,0 +1,50 @@
> +/* Builtin AdvSIMD types.
> +   Copyright (C) 2014 Free Software Foundation, Inc.
> +   Contributed by ARM Ltd.
> +
> +   This file is part of GCC.
> +
> +   GCC is free software; you can redistribute it and/or modify it
> +   under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3, or (at your option)
> +   any later version.
> +
> +   GCC is distributed in the hope that it will be useful, but
> +   WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with GCC; see the file COPYING3.  If not see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +  ENTRY (Int8x8_t, V8QI, none, 10)
> +  ENTRY (Int8x16_t, V16QI, none, 11)
> +  ENTRY (Int16x4_t, V4HI, none, 11)
> +  ENTRY (Int16x8_t, V8HI, none, 11)
> +  ENTRY (Int32x2_t, V2SI, none, 11)
> +  ENTRY (Int32x4_t, V4SI, none, 11)
> +  ENTRY (Int64x1_t, DI, none, 11)
> +  ENTRY (Int64x2_t, V2DI, none, 11)
> +  ENTRY (Uint8x8_t, V8QI, unsigned, 11)
> +  ENTRY (Uint8x16_t, V16QI, unsigned, 12)
> +  ENTRY (Uint16x4_t, V4HI, unsigned, 12)
> +  ENTRY (Uint16x8_t, V8HI, unsigned, 12)
> +  ENTRY (Uint32x2_t, V2SI, unsigned, 12)
> +  ENTRY (Uint32x4_t, V4SI, unsigned, 12)
> +  ENTRY (Uint64x1_t, DI, unsigned, 12)
> +  ENTRY (Uint64x2_t, V2DI, unsigned, 12)
> +  ENTRY (Poly8_t, QI, poly, 9)
> +  ENTRY (Poly16_t, HI, poly, 10)
> +  ENTRY (Poly64_t, DI, poly, 10)
> +  ENTRY (Poly128_t, TI, poly, 11)
> +  ENTRY (Poly8x8_t, V8QI, poly, 11)
> +  ENTRY (Poly8x16_t, V16QI, poly, 12)
> +  ENTRY (Poly16x4_t, V4HI, poly, 12)
> +  ENTRY (Poly16x8_t, V8HI, poly, 12)
> +  ENTRY (Poly64x1_t, DI, poly, 12)
> +  ENTRY (Poly64x2_t, V2DI, poly, 12)
> +  ENTRY (Float32x2_t, V2SF, none, 13)
> +  ENTRY (Float32x4_t, V4SF, none, 13)
> +  ENTRY (Float64x1_t, DF, none, 13)

Will this revert Alan Lawrence's commit in 211892, which defines
Float64x1_t to have V1DF mode?

Thanks,
Yufeng

commit cffa849a621eb949bbdc4ce8468c932889450e6d
Author: alalaw01 <alalaw01@138bc75d-0d04-0410-961f-82ee72b054a4>
Date:   Mon Jun 23 12:46:52 2014 +0000

    PR/60825 Make float64x1_t in arm_neon.h a proper vector type


* Re: [Patch, AArch64] Restructure arm_neon.h vector types' implementation.
  2014-06-23 15:48 [Patch, AArch64] Restructure arm_neon.h vector types' implementation Tejas Belagod
  2014-06-25  8:31 ` Yufeng Zhang
@ 2014-06-28 10:25 ` Marc Glisse
  2014-08-22 11:59   ` Tejas Belagod
  1 sibling, 1 reply; 11+ messages in thread
From: Marc Glisse @ 2014-06-28 10:25 UTC (permalink / raw)
  To: Tejas Belagod; +Cc: gcc-patches, Marcus Shawcroft

On Mon, 23 Jun 2014, Tejas Belagod wrote:

> Here is a patch that restructures neon builtins to use vector types based on 
> standard base types. We previously defined arm_neon.h's neon vector 
> types(int8x8_t) using gcc's front-end vector extensions. We now move away 
> from that and use types built internally(e.g. __Int8x8_t). These internal 
> types names are defined by the AAPCS64 and we build arm_neon.h's public 
> vector types over these internal types. e.g.
>
>  typedef __Int8x8_t int8x8_t;
>
> as opposed to
>
>  typedef __builtin_aarch64_simd_qi int8x8_t
>    __attribute__ ((__vector_size__ (8)));
>
> Impact on mangling:
>
> This patch does away with these builtin scalar types that the vector types 
> were based on. These were previously used to look up mangling names. We now 
> use the internal vector type names(e.g. __Int8x8_t) to lookup mangling for 
> the arm_neon.h-exported vector types. There are a few internal scalar 
> types(__builtin_aarch64_simd_oi etc.) that is needed to efficiently implement 
> some NEON Intrinsics. These will be declared in the back-end and registered 
> in the front-end and aarch64-specific builtin types, but are not 
> user-visible. These, along with a few scalar __builtin types that aren't 
> user-visible will have implementation-defined mangling. Because we don't have 
> strong-typing across all builtins yet, we still have to maintain the old 
> builtin scalar types - they will be removed once we move over to a 
> strongly-typed builtin system implemented by the qualifier infrastructure.
>
> Marc Glisse's patch in this thread exposed this issue 
> https://gcc.gnu.org/ml/gcc-patches/2014-04/msg00618.html. I've tested my 
> patch with the change that his patch introduced, and it seems to work fine - 
> specifically these two lines:
>
> +  for (tree t = registered_builtin_types; t; t = TREE_CHAIN (t))
> +    emit_support_tinfo_1 (TREE_VALUE (t));

If you still have that build somewhere, could you try:
nm -C libsupc++.a | grep typeinfo
and check how many of your builtins appear there? With my patch you may have 
half-floats in addition to what you get without the patch (I think that's 
a good thing), but I hope not too much more...

Thanks for working on this,

-- 
Marc Glisse


* Re: [Patch, AArch64] Restructure arm_neon.h vector types' implementation.
  2014-06-27 15:32   ` Tejas Belagod
  2014-06-27 16:01     ` Yufeng Zhang
@ 2014-07-04 14:27     ` James Greenhalgh
  2014-08-22 12:07       ` Tejas Belagod
  2014-07-08 16:00     ` James Greenhalgh
  2 siblings, 1 reply; 11+ messages in thread
From: James Greenhalgh @ 2014-07-04 14:27 UTC (permalink / raw)
  To: Tejas Belagod; +Cc: Yufeng Zhang, gcc-patches, Marc Glisse, Marcus Shawcroft

On Fri, Jun 27, 2014 at 04:32:19PM +0100, Tejas Belagod wrote:
> +/* Internal scalar builtin types.  These types are used to support
> +   neon intrinsic builtins.  They are _not_ user-visible types.  Therefore
> +   the mangling for these types is implementation-defined.  */
> +const char *aarch64_scalar_builtin_types[] = {
> +  "__builtin_aarch64_simd_qi",
> +  "__builtin_aarch64_simd_hi",
> +  "__builtin_aarch64_simd_si",
> +  "__builtin_aarch64_simd_sf",
> +  "__builtin_aarch64_simd_di",
> +  "__builtin_aarch64_simd_df",
> +  "__builtin_aarch64_simd_poly8",
> +  "__builtin_aarch64_simd_poly16",
> +  "__builtin_aarch64_simd_poly64",
> +  "__builtin_aarch64_simd_poly128",
> +  "__builtin_aarch64_simd_ti",
> +  "__builtin_aarch64_simd_uqi",
> +  "__builtin_aarch64_simd_uhi",
> +  "__builtin_aarch64_simd_usi",
> +  "__builtin_aarch64_simd_udi",
> +  "__builtin_aarch64_simd_ei",
> +  "__builtin_aarch64_simd_oi",
> +  "__builtin_aarch64_simd_ci",
> +  "__builtin_aarch64_simd_xi",
> +  NULL
> +};
<snip>
> +static const char *
> +aarch64_mangle_builtin_scalar_type (const_tree type)
> +{
> +  int i = 0;
> +
> +  while (aarch64_scalar_builtin_types[i] != NULL)
>      {
> -      default:
> -	eltype = aarch64_build_scalar_type (GET_MODE_INNER (mode),
> -					    unsigned_p, poly_p);
> -	return build_vector_type_for_mode (eltype, mode);
> -	break;
> -      VECTOR_TYPES
> -   }
> +      const char *name = aarch64_scalar_builtin_types[i];
> +
> +      if (TREE_CODE (TYPE_NAME (type)) == TYPE_DECL
> +	  && DECL_NAME (TYPE_NAME (type))
> +	  && !strcmp (IDENTIFIER_POINTER (DECL_NAME (TYPE_NAME (type))), name))
> +	return aarch64_scalar_builtin_types[i];
> +      i++;
> +    }
> +  return NULL;
>  }
<snip>
> diff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h
> index 3ed8a98..50d294e 100644
> --- a/gcc/config/aarch64/arm_neon.h
> +++ b/gcc/config/aarch64/arm_neon.h
> @@ -32,66 +32,45 @@
> +typedef __Poly8_t poly8_t;
> +typedef __Poly16_t poly16_t;
> +typedef __Poly64_t poly64_t;
> +typedef __Poly128_t poly128_t;

This looks wrong to me. The type which eventually becomes poly8_t in
arm_neon.h has "__Poly8_t" as its internal type name.

When you go through the loop in aarch64_mangle_builtin_scalar_type you'll
be checking in aarch64_scalar_builtin_types for a string matching
"__Poly8_t" and won't find it, so we'll end up with default mangling for this
type.
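
The failure mode can be sketched minimally. This is a simplified stand-in for the quoted lookup loop, with a shortened copy of the table (names taken from the quoted patch, helper name illustrative): a type whose internal name is not in the NULL-terminated scalar table misses and falls back to default mangling.

```c
#include <stddef.h>
#include <string.h>

/* Shortened copy of the quoted table of internal scalar type names.  */
static const char *aarch64_scalar_builtin_types[] = {
  "__builtin_aarch64_simd_poly8",
  "__builtin_aarch64_simd_poly16",
  "__builtin_aarch64_simd_poly64",
  NULL
};

/* Walk the NULL-terminated table; return the matching entry, or NULL
   when the name is absent (the caller then uses default mangling).  */
static const char *
mangle_scalar_lookup (const char *type_name)
{
  for (int i = 0; aarch64_scalar_builtin_types[i] != NULL; i++)
    if (!strcmp (aarch64_scalar_builtin_types[i], type_name))
      return aarch64_scalar_builtin_types[i];
  return NULL;
}
```

With the new typedefs, poly8_t's internal name is "__Poly8_t", so this lookup returns NULL and the type is mangled by the default rules rather than the table entry.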

One question I have is, if for all the backend types we define we want the
mangled name to be:

  <strlen (type)><type>

then why do we not just return that and save the string comparisons?

I can see some argument for future flexibility, but in that case we will need
to rewrite this code anyway. Is there some other hole in my reasoning?
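
As a sketch of that alternative (the helper name is illustrative, not GCC code), the length-prefixed Itanium source name can be computed directly from the type name instead of being looked up:

```c
#include <stdio.h>
#include <string.h>

/* Build an Itanium-ABI source name, <strlen (type)><type>,
   e.g. "__Int8x8_t" -> "10__Int8x8_t".  */
static void
mangle_source_name (const char *name, char *out, size_t outsz)
{
  snprintf (out, outsz, "%zu%s", strlen (name), name);
}
```

Computed this way, the results match the length-prefixed G column in the quoted .def file, which is what motivates the question above.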

Thanks,
James



* Re: [Patch, AArch64] Restructure arm_neon.h vector types' implementation.
  2014-06-27 15:32   ` Tejas Belagod
  2014-06-27 16:01     ` Yufeng Zhang
  2014-07-04 14:27     ` James Greenhalgh
@ 2014-07-08 16:00     ` James Greenhalgh
  2 siblings, 0 replies; 11+ messages in thread
From: James Greenhalgh @ 2014-07-08 16:00 UTC (permalink / raw)
  To: Tejas Belagod; +Cc: Yufeng Zhang, gcc-patches, Marc Glisse, Marcus Shawcroft

I've spotted another couple of issues (inline below) that I see when
trying to bootstrap an AArch64 compiler with this patch applied.

(sorry for any duplicate mails sent and received, my mailer was upset by
some of the characters in the error messages).

Thanks,
James

On Fri, Jun 27, 2014 at 04:32:19PM +0100, Tejas Belagod wrote:
> +#define ENTRY(E, M, Q, G) E,
> +enum aarch64_simd_type
> +{
> +#include "aarch64-simd-builtin-types.def"
> +};
> +#undef ENTRY
>  

The final entry in this enum will have a trailing comma, resulting in:

[...]/aarch64-builtins.c:333:28
error: comma at end of enumerator list
  #define ENTRY(E, M, Q, G) E,

[...]/aarch64-simd-builtin-types.def:50:3: note: in expansion of macro ENTRY
  ENTRY (Float64x2_t, V2DF, none, 13)
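
A minimal standalone reproduction of the pitfall, with one common fix — a sentinel enumerator after the expansion (the names and the shortened type list here are illustrative, not the actual GCC change):

```c
/* X-macro pattern as in aarch64-simd-builtin-types.def: each ENTRY
   expands to an enumerator followed by a comma, so the last expansion
   leaves a trailing comma that -pedantic rejects.  */
#define AARCH64_SIMD_TYPES \
  ENTRY (Int8x8_t, V8QI, none, 10) \
  ENTRY (Int8x16_t, V16QI, none, 11) \
  ENTRY (Float64x2_t, V2DF, none, 13)

#define ENTRY(E, M, Q, G) T_##E,
enum aarch64_simd_type
{
  AARCH64_SIMD_TYPES
  ARM_SIMD_TYPE_MAX  /* sentinel: absorbs the trailing comma and
                        doubles as a count of the entries above */
};
#undef ENTRY
```

The sentinel keeps the enum well-formed under -pedantic and gives a natural bound for loops over the type table.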

<snip>
>  static void
> -aarch64_init_simd_builtins (void)
> +aarch64_init_simd_builtin_types (void)
>  {
> -  unsigned int i, fcode = AARCH64_SIMD_BUILTIN_BASE + 1;
> +  int i;
> +  int nelts = sizeof (aarch64_simd_types) / sizeof (aarch64_simd_types[0]);
> +  tree tdecl;
<snip>
> +
> +  for (i = 0; i < nelts; i++)
> +    {
> +      tree eltype = aarch64_simd_types[i].eltype;
> +      enum machine_mode mode = aarch64_simd_types[i].mode;
> +      enum aarch64_simd_type type = aarch64_simd_types[i].type;

Type is unused here, resulting in:

[...]/aarch64-builtins.c: In function void aarch64_init_simd_builtin_types():
[...]/aarch64-builtins.c:547:30: error: unused variable
  enum aarch64_simd_type type = aarch64_simd_types[i].type;
> +
> +      if (aarch64_simd_types[i].itype == NULL)
> +	aarch64_simd_types[i].itype =
> +	  build_distinct_type_copy
> +	    (build_vector_type (eltype, GET_MODE_NUNITS (mode)));
> +
> +      tdecl = add_builtin_type (aarch64_simd_types[i].name,
> +				aarch64_simd_types[i].itype);
> +      TYPE_NAME (aarch64_simd_types[i].itype) = tdecl;
> +      SET_TYPE_STRUCTURAL_EQUALITY (aarch64_simd_types[i].itype);
> +    }
>  
<snip>




* Re: [Patch, AArch64] Restructure arm_neon.h vector types' implementation.
  2014-06-28 10:25 ` Marc Glisse
@ 2014-08-22 11:59   ` Tejas Belagod
  2014-08-22 12:54     ` Marc Glisse
  0 siblings, 1 reply; 11+ messages in thread
From: Tejas Belagod @ 2014-08-22 11:59 UTC (permalink / raw)
  To: Marc Glisse; +Cc: gcc-patches, Marcus Shawcroft

On 28/06/14 11:25, Marc Glisse wrote:
> On Mon, 23 Jun 2014, Tejas Belagod wrote:
>
>> Here is a patch that restructures neon builtins to use vector types based on
>> standard base types. We previously defined arm_neon.h's neon vector
>> types(int8x8_t) using gcc's front-end vector extensions. We now move away
>> from that and use types built internally(e.g. __Int8x8_t). These internal
>> types names are defined by the AAPCS64 and we build arm_neon.h's public
>> vector types over these internal types. e.g.
>>
>>   typedef __Int8x8_t int8x8_t;
>>
>> as opposed to
>>
>>   typedef __builtin_aarch64_simd_qi int8x8_t
>>     __attribute__ ((__vector_size__ (8)));
>>
>> Impact on mangling:
>>
>> This patch does away with these builtin scalar types that the vector types
>> were based on. These were previously used to look up mangling names. We now
>> use the internal vector type names(e.g. __Int8x8_t) to lookup mangling for
>> the arm_neon.h-exported vector types. There are a few internal scalar
>> types(__builtin_aarch64_simd_oi etc.) that is needed to efficiently implement
>> some NEON Intrinsics. These will be declared in the back-end and registered
>> in the front-end and aarch64-specific builtin types, but are not
>> user-visible. These, along with a few scalar __builtin types that aren't
>> user-visible will have implementation-defined mangling. Because we don't have
>> strong-typing across all builtins yet, we still have to maintain the old
>> builtin scalar types - they will be removed once we move over to a
>> strongly-typed builtin system implemented by the qualifier infrastructure.
>>
>> Marc Glisse's patch in this thread exposed this issue
>> https://gcc.gnu.org/ml/gcc-patches/2014-04/msg00618.html. I've tested my
>> patch with the change that his patch introduced, and it seems to work fine -
>> specifically these two lines:
>>
>> +  for (tree t = registered_builtin_types; t; t = TREE_CHAIN (t))
>> +    emit_support_tinfo_1 (TREE_VALUE (t));
>
> If you still have that build somewhere, could you try:
> nm -C libsupc++.a | grep typeinfo
> and check how many of your builtins appear there? With my patch you may have
> half-floats in addition to what you get without the patch (I think that's
> a good thing), but I hope not too much more...
>
> Thanks for working on this,
>

Marc,

Revisiting this old thread - sorry for the delay in getting back.

When I do this with my patch and the above two lines from your patch 
applied, I see no aarch64 builtin types listed. Is this expected?

Thanks,
Tejas.


* Re: [Patch, AArch64] Restructure arm_neon.h vector types' implementation.
  2014-06-27 16:01     ` Yufeng Zhang
@ 2014-08-22 12:02       ` Tejas Belagod
  0 siblings, 0 replies; 11+ messages in thread
From: Tejas Belagod @ 2014-08-22 12:02 UTC (permalink / raw)
  To: Yufeng Zhang; +Cc: gcc-patches, Marc Glisse, Marcus Shawcroft

On 27/06/14 17:01, Yufeng Zhang wrote:
> On 27 June 2014 16:32, Tejas Belagod <tbelagod@arm.com> wrote:
>>>>
>>>> 2014-06-23  Tejas Belagod  <tejas.belagod@arm.com>
>> diff --git a/gcc/config/aarch64/aarch64-simd-builtin-types.def
>> b/gcc/config/aarch64/aarch64-simd-builtin-types.def
>> new file mode 100644
>> index 0000000..aa6a84e
>> --- /dev/null
>> +++ b/gcc/config/aarch64/aarch64-simd-builtin-types.def
>> @@ -0,0 +1,50 @@
>> +/* Builtin AdvSIMD types.
>> +   Copyright (C) 2014 Free Software Foundation, Inc.
>> +   Contributed by ARM Ltd.
>> +
>> +   This file is part of GCC.
>> +
>> +   GCC is free software; you can redistribute it and/or modify it
>> +   under the terms of the GNU General Public License as published by
>> +   the Free Software Foundation; either version 3, or (at your option)
>> +   any later version.
>> +
>> +   GCC is distributed in the hope that it will be useful, but
>> +   WITHOUT ANY WARRANTY; without even the implied warranty of
>> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> +   General Public License for more details.
>> +
>> +   You should have received a copy of the GNU General Public License
>> +   along with GCC; see the file COPYING3.  If not see
>> +   <http://www.gnu.org/licenses/>.  */
>> +
>> +  ENTRY (Int8x8_t, V8QI, none, 10)
>> +  ENTRY (Int8x16_t, V16QI, none, 11)
>> +  ENTRY (Int16x4_t, V4HI, none, 11)
>> +  ENTRY (Int16x8_t, V8HI, none, 11)
>> +  ENTRY (Int32x2_t, V2SI, none, 11)
>> +  ENTRY (Int32x4_t, V4SI, none, 11)
>> +  ENTRY (Int64x1_t, DI, none, 11)
>> +  ENTRY (Int64x2_t, V2DI, none, 11)
>> +  ENTRY (Uint8x8_t, V8QI, unsigned, 11)
>> +  ENTRY (Uint8x16_t, V16QI, unsigned, 12)
>> +  ENTRY (Uint16x4_t, V4HI, unsigned, 12)
>> +  ENTRY (Uint16x8_t, V8HI, unsigned, 12)
>> +  ENTRY (Uint32x2_t, V2SI, unsigned, 12)
>> +  ENTRY (Uint32x4_t, V4SI, unsigned, 12)
>> +  ENTRY (Uint64x1_t, DI, unsigned, 12)
>> +  ENTRY (Uint64x2_t, V2DI, unsigned, 12)
>> +  ENTRY (Poly8_t, QI, poly, 9)
>> +  ENTRY (Poly16_t, HI, poly, 10)
>> +  ENTRY (Poly64_t, DI, poly, 10)
>> +  ENTRY (Poly128_t, TI, poly, 11)
>> +  ENTRY (Poly8x8_t, V8QI, poly, 11)
>> +  ENTRY (Poly8x16_t, V16QI, poly, 12)
>> +  ENTRY (Poly16x4_t, V4HI, poly, 12)
>> +  ENTRY (Poly16x8_t, V8HI, poly, 12)
>> +  ENTRY (Poly64x1_t, DI, poly, 12)
>> +  ENTRY (Poly64x2_t, V2DI, poly, 12)
>> +  ENTRY (Float32x2_t, V2SF, none, 13)
>> +  ENTRY (Float32x4_t, V4SF, none, 13)
>> +  ENTRY (Float64x1_t, DF, none, 13)
>
> Will this revert Alan Lawrence's commit in 211892, which defines
> Float64x1_t to have V1DF mode?
>

I've rebased on top of Alan's changes and am currently testing a revised 
patch. Will post it once testing is complete.

Thanks,
Tejas.



* Re: [Patch, AArch64] Restructure arm_neon.h vector types' implementation.
  2014-07-04 14:27     ` James Greenhalgh
@ 2014-08-22 12:07       ` Tejas Belagod
  0 siblings, 0 replies; 11+ messages in thread
From: Tejas Belagod @ 2014-08-22 12:07 UTC (permalink / raw)
  To: James Greenhalgh; +Cc: Yufeng Zhang, gcc-patches, Marc Glisse, Marcus Shawcroft

On 04/07/14 15:27, James Greenhalgh wrote:
> On Fri, Jun 27, 2014 at 04:32:19PM +0100, Tejas Belagod wrote:
>> +/* Internal scalar builtin types.  These types are used to support
>> +   neon intrinsic builtins.  They are _not_ user-visible types.  Therefore
>> +   the mangling for these types is implementation-defined.  */
>> +const char *aarch64_scalar_builtin_types[] = {
>> +  "__builtin_aarch64_simd_qi",
>> +  "__builtin_aarch64_simd_hi",
>> +  "__builtin_aarch64_simd_si",
>> +  "__builtin_aarch64_simd_sf",
>> +  "__builtin_aarch64_simd_di",
>> +  "__builtin_aarch64_simd_df",
>> +  "__builtin_aarch64_simd_poly8",
>> +  "__builtin_aarch64_simd_poly16",
>> +  "__builtin_aarch64_simd_poly64",
>> +  "__builtin_aarch64_simd_poly128",
>> +  "__builtin_aarch64_simd_ti",
>> +  "__builtin_aarch64_simd_uqi",
>> +  "__builtin_aarch64_simd_uhi",
>> +  "__builtin_aarch64_simd_usi",
>> +  "__builtin_aarch64_simd_udi",
>> +  "__builtin_aarch64_simd_ei",
>> +  "__builtin_aarch64_simd_oi",
>> +  "__builtin_aarch64_simd_ci",
>> +  "__builtin_aarch64_simd_xi",
>> +  NULL
>> +};
> <snip>
>> +static const char *
>> +aarch64_mangle_builtin_scalar_type (const_tree type)
>> +{
>> +  int i = 0;
>> +
>> +  while (aarch64_scalar_builtin_types[i] != NULL)
>>       {
>> -      default:
>> -	eltype = aarch64_build_scalar_type (GET_MODE_INNER (mode),
>> -					    unsigned_p, poly_p);
>> -	return build_vector_type_for_mode (eltype, mode);
>> -	break;
>> -      VECTOR_TYPES
>> -   }
>> +      const char *name = aarch64_scalar_builtin_types[i];
>> +
>> +      if (TREE_CODE (TYPE_NAME (type)) == TYPE_DECL
>> +	  && DECL_NAME (TYPE_NAME (type))
>> +	  && !strcmp (IDENTIFIER_POINTER (DECL_NAME (TYPE_NAME (type))), name))
>> +	return aarch64_scalar_builtin_types[i];
>> +      i++;
>> +    }
>> +  return NULL;
>>   }
> <snip>
>> diff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h
>> index 3ed8a98..50d294e 100644
>> --- a/gcc/config/aarch64/arm_neon.h
>> +++ b/gcc/config/aarch64/arm_neon.h
>> @@ -32,66 +32,45 @@
>> +typedef __Poly8_t poly8_t;
>> +typedef __Poly16_t poly16_t;
>> +typedef __Poly64_t poly64_t;
>> +typedef __Poly128_t poly128_t;
>
> This looks wrong to me. The type which eventually becomes poly8_t in
> arm_neon.h has "__Poly8_t" as its internal type name.
>
> When you go through the loop in aarch64_mangle_builtin_scalar_type you'll
> be checking in aarch64_scalar_builtin_types for a string matching
> "__Poly8_t" and won't find it, so we'll end up with default mangling for this
> type.
>

Sorry for the delay in getting back.

You're right. Testing a fixed patch.

> One question I have is, if for all the backend types we define we want the
> mangled name to be:
>
>    <strlen (type)><type>
>
> then why do we not just return that and save the string comparisons?
>
> I can see some argument for future flexibility, but in that case we will need
> to rewrite this code anyway. Is there some other hole in my reasoning?
>

There seems to be no robust TYPE_NAME-based check that would let the 
backend hook filter out only backend builtin types. A non-NULL_TREE 
TYPE_NAME can represent more than just backend builtin types, I believe, 
so there is a risk of letting through types that we would need to filter 
out anyway; hence the static table of type names. And if we have to use 
the table, we might as well keep the strlen prepended to the mangled 
names there. Hope this reasoning holds water.

Thanks,
Tejas.



* Re: [Patch, AArch64] Restructure arm_neon.h vector types' implementation.
  2014-08-22 11:59   ` Tejas Belagod
@ 2014-08-22 12:54     ` Marc Glisse
  0 siblings, 0 replies; 11+ messages in thread
From: Marc Glisse @ 2014-08-22 12:54 UTC (permalink / raw)
  To: Tejas Belagod; +Cc: gcc-patches, Marcus Shawcroft

On Fri, 22 Aug 2014, Tejas Belagod wrote:

> On 28/06/14 11:25, Marc Glisse wrote:
>> On Mon, 23 Jun 2014, Tejas Belagod wrote:
>> 
>>> Here is a patch that restructures neon builtins to use vector types based 
>>> on
>>> standard base types. We previously defined arm_neon.h's neon vector
>>> types(int8x8_t) using gcc's front-end vector extensions. We now move away
>>> from that and use types built internally(e.g. __Int8x8_t). These internal
>>> types names are defined by the AAPCS64 and we build arm_neon.h's public
>>> vector types over these internal types. e.g.
>>>
>>>   typedef __Int8x8_t int8x8_t;
>>> 
>>> as opposed to
>>>
>>>   typedef __builtin_aarch64_simd_qi int8x8_t
>>>     __attribute__ ((__vector_size__ (8)));
>>> 
>>> Impact on mangling:
>>> 
>>> This patch does away with these builtin scalar types that the vector types
>>> were based on. These were previously used to look up mangling names. We 
>>> now
>>> use the internal vector type names(e.g. __Int8x8_t) to lookup mangling for
>>> the arm_neon.h-exported vector types. There are a few internal scalar
>>> types (__builtin_aarch64_simd_oi etc.) that are needed to efficiently
>>> implement some NEON intrinsics. These will be declared in the back-end
>>> and registered in the front-end as aarch64-specific builtin types, but
>>> are not
>>> user-visible. These, along with a few scalar __builtin types that aren't
>>> user-visible will have implementation-defined mangling. Because we don't 
>>> have
>>> strong-typing across all builtins yet, we still have to maintain the old
>>> builtin scalar types - they will be removed once we move over to a
>>> strongly-typed builtin system implemented by the qualifier infrastructure.
>>> 
>>> Marc Glisse's patch in this thread exposed this issue
>>> https://gcc.gnu.org/ml/gcc-patches/2014-04/msg00618.html. I've tested my
>>> patch with the change that his patch introduced, and it seems to work fine 
>>> -
>>> specifically these two lines:
>>> 
>>> +  for (tree t = registered_builtin_types; t; t = TREE_CHAIN (t))
>>> +    emit_support_tinfo_1 (TREE_VALUE (t));
>> 
>> If you still have that build somewhere, could you try:
>> nm -C libsupc++.a | grep typeinfo
>> and check how much your builtins appear there? With my patch you may have
>> half-floats in addition to what you get without the patch (I think that's
>> a good thing), but I hope not too much more...
>> 
>> Thanks for working on this,
>> 
>
> Marc,
>
> Revisiting this old thread - sorry for the delay in getting back.
>
> When I do this, I see no aarch64 builtin types listed with my patch and the 
> above two lines from your patch. Is this expected?

I think it is good that they don't appear :-)

Thanks,

-- 
Marc Glisse



Thread overview: 11+ messages
-- links below jump to the message on this page --
2014-06-23 15:48 [Patch, AArch64] Restructure arm_neon.h vector types' implementation Tejas Belagod
2014-06-25  8:31 ` Yufeng Zhang
2014-06-27 15:32   ` Tejas Belagod
2014-06-27 16:01     ` Yufeng Zhang
2014-08-22 12:02       ` Tejas Belagod
2014-07-04 14:27     ` James Greenhalgh
2014-08-22 12:07       ` Tejas Belagod
2014-07-08 16:00     ` James Greenhalgh
2014-06-28 10:25 ` Marc Glisse
2014-08-22 11:59   ` Tejas Belagod
2014-08-22 12:54     ` Marc Glisse
