public inbox for gcc-patches@gcc.gnu.org
From: Cupertino Miranda <cupertino.miranda@oracle.com>
To: gcc-patches@gcc.gnu.org
Cc: elena.zannoni@oracle.com, jose.marchesi@oracle.com,
	david.faust@oracle.com,
	Cupertino Miranda <cupertino.miranda@oracle.com>
Subject: [PATCH 1/2] bpf: Implementation of BPF CO-RE builtins
Date: Tue,  1 Aug 2023 19:43:06 +0100	[thread overview]
Message-ID: <20230801184307.179692-2-cupertino.miranda@oracle.com> (raw)
In-Reply-To: <20230801184307.179692-1-cupertino.miranda@oracle.com>

This patch updates the support for the BPF CO-RE builtins
__builtin_preserve_access_index and __builtin_preserve_field_info,
and adds support for the CO-RE builtins __builtin_btf_type_id,
__builtin_preserve_type_info and __builtin_preserve_enum_value.
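
For reference, a minimal usage sketch of the two pre-existing builtins (the
structure layout and the kind value 0, i.e. byte offset in the libbpf
convention, are illustrative only; the new builtins follow the same pattern
of taking a <kind> selector and are normally reached through the bpf_core_*
macros in libbpf's bpf_core_read.h):

  struct S { int a; char b; };

  int
  example (struct S *s)
  {
    /* Access recorded for a CO-RE relocation; patched at load time.  */
    int x = __builtin_preserve_access_index (s->a);

    /* Compile-time field information; 0 selects the byte offset kind.  */
    unsigned int off = __builtin_preserve_field_info (s->b, 0);

    return x + off;
  }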

These CO-RE builtins are now converted to __builtin_core_reloc, which
abstracts all of the original builtins behind a single polymorphic,
relocation-specific builtin.

The builtin processing is now split into two stages: the first (pack) is
executed right after the front-end and the second (process) right before
the asm output.

In the expand pass, __builtin_core_reloc is converted to an
unspec:UNSPEC_CORE_RELOC rtx entry.

The data required to process the builtin is now collected in the packing
stage (right after the front-end), preventing the compiler from optimizing
away any of the information required to compose the relocation.
At expansion, that information is recovered and CTF/BTF is queried to
construct the data that will be used in the relocation.
At this point the relocation is added to the proper section and the
builtin is expanded to its expected default value.
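
As a rough illustration of that flow (the layout, the default value and the
accessor string below are illustrative, not literal output of this patch):

  struct S { int a; int b; };

  unsigned int
  off_b (struct S *s)
  {
    /* The call is packed right after the front-end; at expansion it becomes
       an UNSPEC_CORE_RELOC move whose immediate is the compile-time default
       (here 4, the byte offset of 'b').  A label marks that move, and a
       CO-RE record carrying the BTF type id of 'struct S', the accessor
       string "0:1" and the relocation kind is emitted in .BTF.ext, so the
       BPF loader can patch the value for the target kernel.  */
    return __builtin_preserve_field_info (s->b, 0);
  }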

In order to process __builtin_preserve_enum_value, it was necessary to
hook the front-end to collect the original enum value reference.
This is needed since the parser folds all enum values to their
INTEGER_CST representation.
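
A rough sketch of the enum case (names are illustrative; the
*(typeof (enum E) *) cast form matches the argument shape that libbpf's
bpf_core_enum_value_exists macro feeds to this builtin, with kind 0
selecting the "enumerator exists" relocation):

  enum E { E_A = 1, E_B = 2 };

  int
  have_e_b (void)
  {
    /* Without the front-end hook, E_B would reach the builtin only as the
       folded INTEGER_CST 2, losing the CONST_DECL needed to name the
       enumerator in the relocation; the hook records that mapping at
       parse time.  */
    return __builtin_preserve_enum_value (*(typeof (enum E) *) E_B, 0);
  }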

More details can be found in core-builtins.cc.

Regtested on x86_64-linux-gnu host and bpf-unknown-none target.
---
 gcc/config.gcc                                |    4 +-
 gcc/config/bpf/bpf-passes.def                 |   20 -
 gcc/config/bpf/bpf-protos.h                   |    4 +-
 gcc/config/bpf/bpf.cc                         |  817 +---------
 gcc/config/bpf/bpf.md                         |   17 +
 gcc/config/bpf/core-builtins.cc               | 1397 +++++++++++++++++
 gcc/config/bpf/core-builtins.h                |   36 +
 gcc/config/bpf/coreout.cc                     |   50 +-
 gcc/config/bpf/coreout.h                      |   13 +-
 gcc/config/bpf/t-bpf                          |    6 +-
 gcc/doc/extend.texi                           |   51 +
 ...core-builtin-fieldinfo-const-elimination.c |   29 +
 12 files changed, 1639 insertions(+), 805 deletions(-)
 delete mode 100644 gcc/config/bpf/bpf-passes.def
 create mode 100644 gcc/config/bpf/core-builtins.cc
 create mode 100644 gcc/config/bpf/core-builtins.h
 create mode 100644 gcc/testsuite/gcc.target/bpf/core-builtin-fieldinfo-const-elimination.c

diff --git a/gcc/config.gcc b/gcc/config.gcc
index eba69a463be0..c521669e78b1 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -1597,8 +1597,8 @@ bpf-*-*)
         use_collect2=no
         extra_headers="bpf-helpers.h"
         use_gcc_stdint=provide
-        extra_objs="coreout.o"
-        target_gtfiles="$target_gtfiles \$(srcdir)/config/bpf/coreout.cc"
+        extra_objs="coreout.o core-builtins.o"
+        target_gtfiles="$target_gtfiles \$(srcdir)/config/bpf/coreout.cc \$(srcdir)/config/bpf/core-builtins.cc"
         ;;
 cris-*-elf | cris-*-none)
 	tm_file="elfos.h newlib-stdint.h ${tm_file}"
diff --git a/gcc/config/bpf/bpf-passes.def b/gcc/config/bpf/bpf-passes.def
deleted file mode 100644
index deeaee988a01..000000000000
--- a/gcc/config/bpf/bpf-passes.def
+++ /dev/null
@@ -1,20 +0,0 @@
-/* Declaration of target-specific passes for eBPF.
-   Copyright (C) 2021-2023 Free Software Foundation, Inc.
-
-   This file is part of GCC.
-
-   GCC is free software; you can redistribute it and/or modify it
-   under the terms of the GNU General Public License as published by
-   the Free Software Foundation; either version 3, or (at your option)
-   any later version.
-
-   GCC is distributed in the hope that it will be useful, but
-   WITHOUT ANY WARRANTY; without even the implied warranty of
-   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-   General Public License for more details.
-
-   You should have received a copy of the GNU General Public License
-   along with GCC; see the file COPYING3.  If not see
-   <http://www.gnu.org/licenses/>.  */
-
-INSERT_PASS_AFTER (pass_df_initialize_opt, 1, pass_bpf_core_attr);
diff --git a/gcc/config/bpf/bpf-protos.h b/gcc/config/bpf/bpf-protos.h
index b484310e8cbf..fbcf5111eb21 100644
--- a/gcc/config/bpf/bpf-protos.h
+++ b/gcc/config/bpf/bpf-protos.h
@@ -30,7 +30,7 @@ extern void bpf_print_operand_address (FILE *, rtx);
 extern void bpf_expand_prologue (void);
 extern void bpf_expand_epilogue (void);
 extern void bpf_expand_cbranch (machine_mode, rtx *);
-
-rtl_opt_pass * make_pass_bpf_core_attr (gcc::context *);
+const char *bpf_add_core_reloc (rtx *operands, const char *templ);
+void bpf_process_move_operands (rtx *operands);
 
 #endif /* ! GCC_BPF_PROTOS_H */
diff --git a/gcc/config/bpf/bpf.cc b/gcc/config/bpf/bpf.cc
index b5b5674edbb5..101e994905d2 100644
--- a/gcc/config/bpf/bpf.cc
+++ b/gcc/config/bpf/bpf.cc
@@ -69,10 +69,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "gimplify.h"
 #include "gimplify-me.h"
 
-#include "ctfc.h"
-#include "btf.h"
-
-#include "coreout.h"
+#include "core-builtins.h"
 
 /* Per-function machine data.  */
 struct GTY(()) machine_function
@@ -174,22 +171,7 @@ static const struct attribute_spec bpf_attribute_table[] =
    one.  */
 #define BPF_BUILTIN_MAX_ARGS 5
 
-enum bpf_builtins
-{
-  BPF_BUILTIN_UNUSED = 0,
-  /* Built-ins for non-generic loads and stores.  */
-  BPF_BUILTIN_LOAD_BYTE,
-  BPF_BUILTIN_LOAD_HALF,
-  BPF_BUILTIN_LOAD_WORD,
-
-  /* Compile Once - Run Everywhere (CO-RE) support.  */
-  BPF_BUILTIN_PRESERVE_ACCESS_INDEX,
-  BPF_BUILTIN_PRESERVE_FIELD_INFO,
-
-  BPF_BUILTIN_MAX,
-};
-
-static GTY (()) tree bpf_builtins[(int) BPF_BUILTIN_MAX];
+GTY (()) tree bpf_builtins[(int) BPF_BUILTIN_MAX];
 
 void bpf_register_coreattr_pass (void);
 
@@ -546,6 +528,17 @@ bpf_expand_cbranch (machine_mode mode, rtx *operands)
     }
 }
 
+/* This is used by define_expand "mov<MM:mode>" to verify whether we need to
+   replace any of the operands.  Currently this is used to replace
+   __attribute__((preserve_access_index)) accesses by the respective
+   __builtin_core_reloc, which at final output creates the relocation and label.  */
+
+void
+bpf_process_move_operands (rtx *operands)
+{
+  bpf_replace_core_move_operands (operands);
+}
+
 /* Return the initial difference between the specified pair of
    registers.  The registers that can figure in FROM, and TO, are
    specified by ELIMINABLE_REGS in bpf.h.
@@ -852,6 +845,7 @@ bpf_output_call (rtx target)
    Additionally, the code 'w' denotes that the register should be printed
    as wN instead of rN, where N is the register number, but only when the
    value stored in the operand OP is 32-bit wide.  */
+
 static void
 bpf_print_register (FILE *file, rtx op, int code)
 {
@@ -978,13 +972,14 @@ bpf_print_operand_address (FILE *file, rtx addr)
 /* Add a BPF builtin function with NAME, CODE and TYPE.  Return
    the function decl or NULL_TREE if the builtin was not added.  */
 
-static tree
+static inline tree
 def_builtin (const char *name, enum bpf_builtins code, tree type)
 {
   tree t
-    = add_builtin_function (name, type, code, BUILT_IN_MD, NULL, NULL_TREE);
+    = add_builtin_function (name, type, code, BUILT_IN_MD, NULL, NULL);
 
   bpf_builtins[code] = t;
+
   return t;
 }
 
@@ -1003,214 +998,40 @@ bpf_init_builtins (void)
 	       build_function_type_list (ullt, ullt, 0));
   def_builtin ("__builtin_bpf_load_word", BPF_BUILTIN_LOAD_WORD,
 	       build_function_type_list (ullt, ullt, 0));
+
   def_builtin ("__builtin_preserve_access_index",
 	       BPF_BUILTIN_PRESERVE_ACCESS_INDEX,
 	       build_function_type_list (ptr_type_node, ptr_type_node, 0));
   def_builtin ("__builtin_preserve_field_info",
 	       BPF_BUILTIN_PRESERVE_FIELD_INFO,
-	       build_function_type_list (unsigned_type_node, ptr_type_node, unsigned_type_node, 0));
+	       build_function_type_list (unsigned_type_node, ptr_type_node,
+					 unsigned_type_node, 0));
+  def_builtin ("__builtin_btf_type_id",
+	       BPF_BUILTIN_BTF_TYPE_ID,
+	       build_function_type_list (integer_type_node, ptr_type_node,
+					 integer_type_node, 0));
+  def_builtin ("__builtin_preserve_type_info",
+	       BPF_BUILTIN_PRESERVE_TYPE_INFO,
+	       build_function_type_list (integer_type_node, ptr_type_node,
+					 integer_type_node, 0));
+  def_builtin ("__builtin_preserve_enum_value",
+	       BPF_BUILTIN_PRESERVE_ENUM_VALUE,
+	       build_function_type_list (integer_type_node, ptr_type_node,
+					 integer_type_node, integer_type_node,
+					 0));
+
+  def_builtin ("__builtin_core_reloc",
+	       BPF_BUILTIN_CORE_RELOC,
+	       build_function_type_list (integer_type_node, integer_type_node,
+					 0));
+  DECL_PURE_P (bpf_builtins[BPF_BUILTIN_CORE_RELOC]) = 1;
+
+  bpf_init_core_builtins ();
 }
 
 #undef TARGET_INIT_BUILTINS
 #define TARGET_INIT_BUILTINS bpf_init_builtins
 
-static tree bpf_core_compute (tree, vec<unsigned int> *);
-static int bpf_core_get_index (const tree);
-static bool is_attr_preserve_access (tree);
-
-/* BPF Compile Once - Run Everywhere (CO-RE) support. Construct a CO-RE
-   relocation record for EXPR of kind KIND to be emitted in the .BTF.ext
-   section. Does nothing if we are not targetting BPF CO-RE, or if the
-   constructed relocation would be a no-op.  */
-
-static void
-maybe_make_core_relo (tree expr, enum btf_core_reloc_kind kind)
-{
-  /* If we are not targetting BPF CO-RE, do not make a relocation. We
-     might not be generating any debug info at all.  */
-  if (!TARGET_BPF_CORE)
-    return;
-
-  auto_vec<unsigned int, 16> accessors;
-  tree container = bpf_core_compute (expr, &accessors);
-
-  /* Any valid use of the builtin must have at least one access. Otherwise,
-     there is nothing to record and nothing to do. This is primarily a
-     guard against optimizations leading to unexpected expressions in the
-     argument of the builtin. For example, if the builtin is used to read
-     a field of a structure which can be statically determined to hold a
-     constant value, the argument to the builtin will be optimized to that
-     constant. This is OK, and means the builtin call is superfluous.
-     e.g.
-     struct S foo;
-     foo.a = 5;
-     int x = __preserve_access_index (foo.a);
-     ... do stuff with x
-     'foo.a' in the builtin argument will be optimized to '5' with -01+.
-     This sequence does not warrant recording a CO-RE relocation.  */
-
-  if (accessors.length () < 1)
-    return;
-  accessors.reverse ();
-
-  rtx_code_label *label = gen_label_rtx ();
-  LABEL_PRESERVE_P (label) = 1;
-  emit_label (label);
-
-  /* Determine what output section this relocation will apply to.
-     If this function is associated with a section, use that. Otherwise,
-     fall back on '.text'.  */
-  const char * section_name;
-  if (current_function_decl && DECL_SECTION_NAME (current_function_decl))
-    section_name = DECL_SECTION_NAME (current_function_decl);
-  else
-    section_name = ".text";
-
-  /* Add the CO-RE relocation information to the BTF container.  */
-  bpf_core_reloc_add (TREE_TYPE (container), section_name, &accessors, label,
-		      kind);
-}
-
-/* Expand a call to __builtin_preserve_field_info by evaluating the requested
-   information about SRC according to KIND, and return a tree holding
-   the result.  */
-
-static tree
-bpf_core_field_info (tree src, enum btf_core_reloc_kind kind)
-{
-  unsigned int result;
-  poly_int64 bitsize, bitpos;
-  tree var_off = NULL_TREE;
-  machine_mode mode;
-  int unsignedp, reversep, volatilep;
-  location_t loc = EXPR_LOCATION (src);
-
-  get_inner_reference (src, &bitsize, &bitpos, &var_off, &mode, &unsignedp,
-		       &reversep, &volatilep);
-
-  /* Note: Use DECL_BIT_FIELD_TYPE rather than DECL_BIT_FIELD here, because it
-     remembers whether the field in question was originally declared as a
-     bitfield, regardless of how it has been optimized.  */
-  bool bitfieldp = (TREE_CODE (src) == COMPONENT_REF
-		    && DECL_BIT_FIELD_TYPE (TREE_OPERAND (src, 1)));
-
-  unsigned int align = TYPE_ALIGN (TREE_TYPE (src));
-  if (TREE_CODE (src) == COMPONENT_REF)
-    {
-      tree field = TREE_OPERAND (src, 1);
-      if (DECL_BIT_FIELD_TYPE (field))
-	align = TYPE_ALIGN (DECL_BIT_FIELD_TYPE (field));
-      else
-	align = TYPE_ALIGN (TREE_TYPE (field));
-    }
-
-  unsigned int start_bitpos = bitpos & ~(align - 1);
-  unsigned int end_bitpos = start_bitpos + align;
-
-  switch (kind)
-    {
-    case BPF_RELO_FIELD_BYTE_OFFSET:
-      {
-	if (var_off != NULL_TREE)
-	  {
-	    error_at (loc, "unsupported variable field offset");
-	    return error_mark_node;
-	  }
-
-	if (bitfieldp)
-	  result = start_bitpos / 8;
-	else
-	  result = bitpos / 8;
-      }
-      break;
-
-    case BPF_RELO_FIELD_BYTE_SIZE:
-      {
-	if (mode == BLKmode && bitsize == -1)
-	  {
-	    error_at (loc, "unsupported variable size field access");
-	    return error_mark_node;
-	  }
-
-	if (bitfieldp)
-	  {
-	    /* To match LLVM behavior, byte size of bitfields is recorded as
-	       the full size of the base type. A 3-bit bitfield of type int is
-	       therefore recorded as having a byte size of 4 bytes. */
-	    result = end_bitpos - start_bitpos;
-	    if (result & (result - 1))
-	      {
-		error_at (loc, "unsupported field expression");
-		return error_mark_node;
-	      }
-	    result = result / 8;
-	  }
-	else
-	  result = bitsize / 8;
-      }
-      break;
-
-    case BPF_RELO_FIELD_EXISTS:
-      /* The field always exists at compile time.  */
-      result = 1;
-      break;
-
-    case BPF_RELO_FIELD_SIGNED:
-      result = !unsignedp;
-      break;
-
-    case BPF_RELO_FIELD_LSHIFT_U64:
-    case BPF_RELO_FIELD_RSHIFT_U64:
-      {
-	if (mode == BLKmode && bitsize == -1)
-	  {
-	    error_at (loc, "unsupported variable size field access");
-	    return error_mark_node;
-	  }
-	if (var_off != NULL_TREE)
-	  {
-	    error_at (loc, "unsupported variable field offset");
-	    return error_mark_node;
-	  }
-
-	if (!bitfieldp)
-	  {
-	    if (bitsize > 64)
-	      {
-		error_at (loc, "field size too large");
-		return error_mark_node;
-	      }
-	    result = 64 - bitsize;
-	    break;
-	  }
-
-	if (end_bitpos - start_bitpos > 64)
-	  {
-	    error_at (loc, "field size too large");
-	    return error_mark_node;
-	  }
-
-	if (kind == BPF_RELO_FIELD_LSHIFT_U64)
-	  {
-	    if (TARGET_BIG_ENDIAN)
-	      result = bitpos + 64 - start_bitpos - align;
-	    else
-	      result = start_bitpos + 64 - bitpos - bitsize;
-	  }
-	else /* RSHIFT_U64 */
-	  result = 64 - bitsize;
-      }
-      break;
-
-    default:
-      error ("invalid second argument to built-in function");
-      return error_mark_node;
-      break;
-    }
-
-  return build_int_cst (unsigned_type_node, result);
-}
-
 /* Expand a call to a BPF-specific built-in function that was set up
    with bpf_init_builtins.  */
 
@@ -1261,73 +1082,34 @@ bpf_expand_builtin (tree exp, rtx target ATTRIBUTE_UNUSED,
       /* The result of the load is in R0.  */
       return gen_rtx_REG (ops[0].mode, BPF_R0);
     }
-
-  else if (code == -BPF_BUILTIN_PRESERVE_ACCESS_INDEX)
+  else
     {
-      /* A resolved overloaded __builtin_preserve_access_index.  */
-      tree arg = CALL_EXPR_ARG (exp, 0);
-
-      if (arg == NULL_TREE)
-	return NULL_RTX;
-
-      if (TREE_CODE (arg) == SSA_NAME)
-	{
-	  gimple *def_stmt = SSA_NAME_DEF_STMT (arg);
-
-	  if (is_gimple_assign (def_stmt))
-	    arg = gimple_assign_rhs1 (def_stmt);
-	  else
-	    return expand_normal (arg);
-	}
-
-      /* Avoid double-recording information if the argument is an access to
-	 a struct/union marked __attribute__((preserve_access_index)). This
-	 Will be handled by the attribute handling pass.  */
-      if (!is_attr_preserve_access (arg))
-	maybe_make_core_relo (arg, BPF_RELO_FIELD_BYTE_OFFSET);
-
-      return expand_normal (arg);
-    }
-
-  else if (code == -BPF_BUILTIN_PRESERVE_FIELD_INFO)
-    {
-      /* A resolved overloaded __builtin_preserve_field_info.  */
-      tree src = CALL_EXPR_ARG (exp, 0);
-      tree kind_tree = CALL_EXPR_ARG (exp, 1);
-      unsigned HOST_WIDE_INT kind_val = 0;
-      if (tree_fits_uhwi_p (kind_tree))
-	kind_val = tree_to_uhwi (kind_tree);
-      else
-	{
-	  error ("invalid argument to built-in function");
-	  return expand_normal (error_mark_node);
-	}
-
-      enum btf_core_reloc_kind kind = (enum btf_core_reloc_kind) kind_val;
-
-      if (TREE_CODE (src) == SSA_NAME)
-	{
-	  gimple *def_stmt = SSA_NAME_DEF_STMT (src);
-	  if (is_gimple_assign (def_stmt))
-	    src = gimple_assign_rhs1 (def_stmt);
-	}
-      if (TREE_CODE (src) == ADDR_EXPR)
-	src = TREE_OPERAND (src, 0);
-
-      tree result = bpf_core_field_info (src, kind);
-
-      if (result != error_mark_node)
-	maybe_make_core_relo (src, kind);
-
-      return expand_normal (result);
+      rtx ret = bpf_expand_core_builtin (exp, (enum bpf_builtins) code);
+      if (ret != NULL_RTX)
+	return ret;
     }
 
+  error ("invalid built-in function at expansion");
   gcc_unreachable ();
 }
 
 #undef TARGET_EXPAND_BUILTIN
 #define TARGET_EXPAND_BUILTIN bpf_expand_builtin
 
+static tree
+bpf_resolve_overloaded_builtin (location_t loc, tree fndecl, void *arglist)
+{
+  int code = DECL_MD_FUNCTION_CODE (fndecl);
+  if (code > BPF_CORE_BUILTINS_MARKER)
+    return bpf_resolve_overloaded_core_builtin (loc, fndecl, arglist);
+  else
+    return NULL_TREE;
+}
+
+#undef TARGET_RESOLVE_OVERLOADED_BUILTIN
+#define TARGET_RESOLVE_OVERLOADED_BUILTIN bpf_resolve_overloaded_builtin
+
+
 /* Initialize target-specific function library calls.  This is mainly
    used to call library-provided soft-fp operations, since eBPF
    doesn't support floating-point in "hardware".  */
@@ -1375,214 +1157,6 @@ bpf_debug_unwind_info ()
 #undef TARGET_ASM_ALIGNED_DI_OP
 #define TARGET_ASM_ALIGNED_DI_OP "\t.dword\t"
 
-
-/* BPF Compile Once - Run Everywhere (CO-RE) support routines.
-
-   BPF CO-RE is supported in two forms:
-   - A target builtin, __builtin_preserve_access_index
-
-     This builtin accepts a single argument. Any access to an aggregate data
-     structure (struct, union or array) within the argument will be recorded by
-     the CO-RE machinery, resulting in a relocation record being placed in the
-     .BTF.ext section of the output.
-
-     It is implemented in bpf_resolve_overloaded_builtin () and
-     bpf_expand_builtin (), using the supporting routines below.
-
-   - An attribute, __attribute__((preserve_access_index))
-
-     This attribute can be applied to struct and union types. Any access to a
-     type with this attribute will be recorded by the CO-RE machinery.
-
-     The pass pass_bpf_core_attr, below, implements support for
-     this attribute.  */
-
-/* Traverse the subtree under NODE, which is expected to be some form of
-   aggregate access the CO-RE machinery cares about (like a read of a member of
-   a struct or union), collecting access indices for the components and storing
-   them in the vector referenced by ACCESSORS.
-
-   Return the ultimate (top-level) container of the aggregate access. In general,
-   this will be a VAR_DECL or some kind of REF.
-
-   Note that the accessors are computed *in reverse order* of how the BPF
-   CO-RE machinery defines them. The vector needs to be reversed (or simply
-   output in reverse order) for the .BTF.ext relocation information.  */
-
-static tree
-bpf_core_compute (tree node, vec<unsigned int> *accessors)
-{
-
-  if (TREE_CODE (node) == ADDR_EXPR)
-    node = TREE_OPERAND (node, 0);
-
-  else if (INDIRECT_REF_P (node)
-	   || TREE_CODE (node) == POINTER_PLUS_EXPR)
-    {
-      accessors->safe_push (0);
-      return TREE_OPERAND (node, 0);
-    }
-
-  while (1)
-    {
-      switch (TREE_CODE (node))
-	{
-	case COMPONENT_REF:
-	  accessors->safe_push (bpf_core_get_index (TREE_OPERAND (node, 1)));
-	  break;
-
-	case ARRAY_REF:
-	case ARRAY_RANGE_REF:
-	  accessors->safe_push (bpf_core_get_index (node));
-	  break;
-
-	case MEM_REF:
-	  accessors->safe_push (bpf_core_get_index (node));
-	  if (TREE_CODE (TREE_OPERAND (node, 0)) == ADDR_EXPR)
-	    node = TREE_OPERAND (TREE_OPERAND (node, 0), 0);
-	  goto done;
-
-	default:
-	  goto done;
-	}
-      node = TREE_OPERAND (node, 0);
-    }
- done:
-  return node;
-
-}
-
-/* Compute the index of the NODE in its immediate container.
-   NODE should be a FIELD_DECL (i.e. of struct or union), or an ARRAY_REF. */
-static int
-bpf_core_get_index (const tree node)
-{
-  enum tree_code code = TREE_CODE (node);
-
-  if (code == FIELD_DECL)
-    {
-      /* Lookup the index from the BTF information.  Some struct/union members
-	 may not be emitted in BTF; only the BTF container has enough
-	 information to compute the correct index.  */
-      int idx = bpf_core_get_sou_member_index (ctf_get_tu_ctfc (), node);
-      if (idx >= 0)
-	return idx;
-    }
-
-  else if (code == ARRAY_REF || code == ARRAY_RANGE_REF || code == MEM_REF)
-    {
-      /* For array accesses, the index is operand 1.  */
-      tree index = TREE_OPERAND (node, 1);
-
-      /* If the indexing operand is a constant, extracting is trivial.  */
-      if (TREE_CODE (index) == INTEGER_CST && tree_fits_shwi_p (index))
-	return tree_to_shwi (index);
-    }
-
-  return -1;
-}
-
-/* Synthesize a new builtin function declaration with signature TYPE.
-   Used by bpf_resolve_overloaded_builtin to resolve calls to
-   __builtin_preserve_access_index.  */
-
-static tree
-bpf_core_newdecl (tree type, enum bpf_builtins which)
-{
-  tree rettype;
-  char name[80];
-  static unsigned long pai_count = 0;
-  static unsigned long pfi_count = 0;
-
-  switch (which)
-    {
-    case BPF_BUILTIN_PRESERVE_ACCESS_INDEX:
-      {
-	rettype = build_function_type_list (type, type, NULL);
-	int len = snprintf (name, sizeof (name), "%s", "__builtin_pai_");
-	len = snprintf (name + len, sizeof (name) - len, "%lu", pai_count++);
-      }
-      break;
-
-    case BPF_BUILTIN_PRESERVE_FIELD_INFO:
-      {
-	rettype = build_function_type_list (unsigned_type_node, type,
-					    unsigned_type_node, NULL);
-	int len = snprintf (name, sizeof (name), "%s", "__builtin_pfi_");
-	len = snprintf (name + len, sizeof (name) - len, "%lu", pfi_count++);
-      }
-      break;
-
-    default:
-      gcc_unreachable ();
-    }
-
-  return add_builtin_function_ext_scope (name, rettype, -which,
-					 BUILT_IN_MD, NULL, NULL_TREE);
-}
-
-/* Return whether EXPR could access some aggregate data structure that
-   BPF CO-RE support needs to know about.  */
-
-static bool
-bpf_core_is_maybe_aggregate_access (tree expr)
-{
-  switch (TREE_CODE (expr))
-    {
-    case COMPONENT_REF:
-    case BIT_FIELD_REF:
-    case ARRAY_REF:
-    case ARRAY_RANGE_REF:
-      return true;
-    case ADDR_EXPR:
-    case NOP_EXPR:
-      return bpf_core_is_maybe_aggregate_access (TREE_OPERAND (expr, 0));
-    default:
-      return false;
-    }
-}
-
-struct core_walk_data {
-  location_t loc;
-  enum bpf_builtins which;
-  tree arg;
-};
-
-/* Callback function used with walk_tree from bpf_resolve_overloaded_builtin.  */
-
-static tree
-bpf_core_walk (tree *tp, int *walk_subtrees, void *data)
-{
-  struct core_walk_data *dat = (struct core_walk_data *) data;
-
-  /* If this is a type, don't do anything. */
-  if (TYPE_P (*tp))
-    {
-      *walk_subtrees = 0;
-      return NULL_TREE;
-    }
-
-  /* Build a new function call to a type-resolved temporary builtin for the
-     desired operation, and pass along args as necessary.  */
-  tree newdecl = bpf_core_newdecl (TREE_TYPE (*tp), dat->which);
-
-  if (dat->which == BPF_BUILTIN_PRESERVE_ACCESS_INDEX)
-    {
-      if (bpf_core_is_maybe_aggregate_access (*tp))
-	{
-	  *tp = build_call_expr_loc (dat->loc, newdecl, 1, *tp);
-	  *walk_subtrees = 0;
-	}
-    }
-  else
-    {
-      *tp = build_call_expr_loc (dat->loc, newdecl, 2, *tp, dat->arg);
-      *walk_subtrees = 0;
-    }
-
-  return NULL_TREE;
-}
-
 /* Implement target hook small_register_classes_for_mode_p.  */
 
 static bool
@@ -1600,277 +1174,6 @@ bpf_small_register_classes_for_mode_p (machine_mode mode)
 #define TARGET_SMALL_REGISTER_CLASSES_FOR_MODE_P \
   bpf_small_register_classes_for_mode_p
 
-/* Return whether EXPR is a valid first argument for a call to
-   __builtin_preserve_field_info.  */
-
-static bool
-bpf_is_valid_preserve_field_info_arg (tree expr)
-{
-  switch (TREE_CODE (expr))
-    {
-    case COMPONENT_REF:
-    case BIT_FIELD_REF:
-    case ARRAY_REF:
-    case ARRAY_RANGE_REF:
-      return true;
-    case NOP_EXPR:
-      return bpf_is_valid_preserve_field_info_arg (TREE_OPERAND (expr, 0));
-    case ADDR_EXPR:
-      /* Do not accept ADDR_EXPRs like &foo.bar, but do accept accesses like
-	 foo.baz where baz is an array.  */
-      return (TREE_CODE (TREE_TYPE (TREE_OPERAND (expr, 0))) == ARRAY_TYPE);
-    default:
-      return false;
-    }
-}
-
-/* Implement TARGET_RESOLVE_OVERLOADED_BUILTIN (see gccint manual section
-   Target Macros::Misc.).
-   Used for CO-RE support builtins such as __builtin_preserve_access_index
-   and __builtin_preserve_field_info.
-
-   FNDECL is the declaration of the builtin, and ARGLIST is the list of
-   arguments passed to it, and is really a vec<tree,_> *.  */
-
-static tree
-bpf_resolve_overloaded_builtin (location_t loc, tree fndecl, void *arglist)
-{
-  enum bpf_builtins which = (enum bpf_builtins) DECL_MD_FUNCTION_CODE (fndecl);
-
-  if (which < BPF_BUILTIN_PRESERVE_ACCESS_INDEX
-      || which >= BPF_BUILTIN_MAX)
-    return NULL_TREE;
-
-  vec<tree, va_gc> *params = static_cast<vec<tree, va_gc> *> (arglist);
-  unsigned n_params = params ? params->length() : 0;
-
-  if (!(which == BPF_BUILTIN_PRESERVE_ACCESS_INDEX && n_params == 1)
-      && n_params != 2)
-    {
-      error_at (loc, "wrong number of arguments");
-      return error_mark_node;
-    }
-
-  tree param = (*params)[0];
-
-  /* If not generating BPF_CORE information, preserve_access_index does
-     nothing, and simply "resolves to" the argument.  */
-  if (which == BPF_BUILTIN_PRESERVE_ACCESS_INDEX && !TARGET_BPF_CORE)
-    return param;
-
-  /* For __builtin_preserve_field_info, enforce that the parameter is exactly a
-     field access and not a more complex expression.  */
-  else if (which == BPF_BUILTIN_PRESERVE_FIELD_INFO
-	   && !bpf_is_valid_preserve_field_info_arg (param))
-    {
-      error_at (EXPR_LOC_OR_LOC (param, loc),
-		"argument is not a field access");
-      return error_mark_node;
-    }
-
-  /* Do remove_c_maybe_const_expr for the arg.
-     TODO: WHY do we have to do this here? Why doesn't c-typeck take care
-     of it before or after this hook? */
-  if (TREE_CODE (param) == C_MAYBE_CONST_EXPR)
-    param = C_MAYBE_CONST_EXPR_EXPR (param);
-
-  /* Construct a new function declaration with the correct type, and return
-     a call to it.
-
-     Calls with statement-expressions, for example:
-     _(({ foo->a = 1; foo->u[2].b = 2; }))
-     require special handling.
-
-     We rearrange this into a new block scope in which each statement
-     becomes a unique builtin call:
-     {
-       _ ({ foo->a = 1;});
-       _ ({ foo->u[2].b = 2;});
-     }
-
-     This ensures that all the relevant information remains within the
-     expression trees the builtin finally gets.  */
-
-  struct core_walk_data data;
-  data.loc = loc;
-  data.which = which;
-  if (which == BPF_BUILTIN_PRESERVE_ACCESS_INDEX)
-    data.arg = NULL_TREE;
-  else
-    data.arg = (*params)[1];
-
-  walk_tree (&param, bpf_core_walk, (void *) &data, NULL);
-
-  return param;
-}
-
-#undef TARGET_RESOLVE_OVERLOADED_BUILTIN
-#define TARGET_RESOLVE_OVERLOADED_BUILTIN bpf_resolve_overloaded_builtin
-
-
-/* Handling for __attribute__((preserve_access_index)) for BPF CO-RE support.
-
-   This attribute marks a structure/union/array type as "preseve", so that
-   every access to that type should be recorded and replayed by the BPF loader;
-   this is just the same functionality as __builtin_preserve_access_index,
-   but in the form of an attribute for an entire aggregate type.
-
-   Note also that nested structs behave as though they all have the attribute.
-   For example:
-     struct X { int a; };
-     struct Y { struct X bar} __attribute__((preserve_access_index));
-     struct Y foo;
-     foo.bar.a;
-   will record access all the way to 'a', even though struct X does not have
-   the preserve_access_index attribute.
-
-   This is to follow LLVM behavior.
-
-   This pass finds all accesses to objects of types marked with the attribute,
-   and wraps them in the same "low-level" builtins used by the builtin version.
-   All logic afterwards is therefore identical to the builtin version of
-   preserve_access_index.  */
-
-/* True iff tree T accesses any member of a struct/union/class which is marked
-   with the PRESERVE_ACCESS_INDEX attribute.  */
-
-static bool
-is_attr_preserve_access (tree t)
-{
-  if (t == NULL_TREE)
-    return false;
-
-  poly_int64 bitsize, bitpos;
-  tree var_off;
-  machine_mode mode;
-  int sign, reverse, vol;
-
-  tree base = get_inner_reference (t, &bitsize, &bitpos, &var_off, &mode,
-				   &sign, &reverse, &vol);
-
-  if (TREE_CODE (base) == MEM_REF)
-    {
-      return lookup_attribute ("preserve_access_index",
-			       TYPE_ATTRIBUTES (TREE_TYPE (base)));
-    }
-
-  if (TREE_CODE (t) == COMPONENT_REF)
-    {
-      /* preserve_access_index propegates into nested structures,
-	 so check whether this is a component of another component
-	 which in turn is part of such a struct.  */
-
-      const tree op = TREE_OPERAND (t, 0);
-
-      if (TREE_CODE (op) == COMPONENT_REF)
-	return is_attr_preserve_access (op);
-
-      const tree container = DECL_CONTEXT (TREE_OPERAND (t, 1));
-
-      return lookup_attribute ("preserve_access_index",
-			       TYPE_ATTRIBUTES (container));
-    }
-
-  else if (TREE_CODE (t) == ADDR_EXPR)
-    return is_attr_preserve_access (TREE_OPERAND (t, 0));
-
-  return false;
-}
-
-/* The body of pass_bpf_core_attr. Scan RTL for accesses to structs/unions
-   marked with __attribute__((preserve_access_index)) and generate a CO-RE
-   relocation for any such access.  */
-
-static void
-handle_attr_preserve (function *fn)
-{
-  basic_block bb;
-  rtx_insn *insn;
-  FOR_EACH_BB_FN (bb, fn)
-    {
-      FOR_BB_INSNS (bb, insn)
-	{
-	  if (!NONJUMP_INSN_P (insn))
-	    continue;
-	  rtx pat = PATTERN (insn);
-	  if (GET_CODE (pat) != SET)
-	    continue;
-
-	  start_sequence();
-
-	  for (int i = 0; i < 2; i++)
-	    {
-	      rtx mem = XEXP (pat, i);
-	      if (MEM_P (mem))
-		{
-		  tree expr = MEM_EXPR (mem);
-		  if (!expr)
-		    continue;
-
-		  if (TREE_CODE (expr) == MEM_REF
-		      && TREE_CODE (TREE_OPERAND (expr, 0)) == SSA_NAME)
-		    {
-		      gimple *def_stmt = SSA_NAME_DEF_STMT (TREE_OPERAND (expr, 0));
-		      if (def_stmt && is_gimple_assign (def_stmt))
-			expr = gimple_assign_rhs1 (def_stmt);
-		    }
-
-		  if (is_attr_preserve_access (expr))
-		    maybe_make_core_relo (expr, BPF_RELO_FIELD_BYTE_OFFSET);
-		}
-	    }
-	  rtx_insn *seq = get_insns ();
-	  end_sequence ();
-	  emit_insn_before (seq, insn);
-	}
-    }
-}
-
-/* This pass finds accesses to structures marked with the BPF target attribute
-   __attribute__((preserve_access_index)). For every such access, a CO-RE
-   relocation record is generated, to be output in the .BTF.ext section.  */
-
-namespace {
-
-const pass_data pass_data_bpf_core_attr =
-{
-  RTL_PASS, /* type */
-  "bpf_core_attr", /* name */
-  OPTGROUP_NONE, /* optinfo_flags */
-  TV_NONE, /* tv_id */
-  0, /* properties_required */
-  0, /* properties_provided */
-  0, /* properties_destroyed */
-  0, /* todo_flags_start */
-  0, /* todo_flags_finish */
-};
-
-class pass_bpf_core_attr : public rtl_opt_pass
-{
-public:
-  pass_bpf_core_attr (gcc::context *ctxt)
-    : rtl_opt_pass (pass_data_bpf_core_attr, ctxt)
-  {}
-
-  virtual bool gate (function *) { return TARGET_BPF_CORE; }
-  virtual unsigned int execute (function *);
-};
-
-unsigned int
-pass_bpf_core_attr::execute (function *fn)
-{
-  handle_attr_preserve (fn);
-  return 0;
-}
-
-} /* Anonymous namespace.  */
-
-rtl_opt_pass *
-make_pass_bpf_core_attr (gcc::context *ctxt)
-{
-  return new pass_bpf_core_attr (ctxt);
-}
-
 /* Finally, build the GCC target.  */
 
 struct gcc_target targetm = TARGET_INITIALIZER;
diff --git a/gcc/config/bpf/bpf.md b/gcc/config/bpf/bpf.md
index a69a239b9d6a..e562d119f3c3 100644
--- a/gcc/config/bpf/bpf.md
+++ b/gcc/config/bpf/bpf.md
@@ -45,6 +45,7 @@
   UNSPEC_AFXOR
   UNSPEC_AXCHG
   UNSPEC_ACMP
+  UNSPEC_CORE_RELOC
 ])
 
 ;;;; Constants
@@ -367,6 +368,8 @@
         ""
         "
 {
+  bpf_process_move_operands (operands);
+
   if (!register_operand(operands[0], <MM:MODE>mode)
       && !register_operand(operands[1], <MM:MODE>mode))
     operands[1] = force_reg (<MM:MODE>mode, operands[1]);
@@ -384,6 +387,20 @@
    {st<mop>\t%0,%1|*(<smop> *) (%0) = %1}"
 [(set_attr "type" "ldx,alu,alu,stx,st")])
 
+(define_insn "mov_reloc_core<MM:mode>"
+  [(set (match_operand:MM 0 "nonimmediate_operand" "=r,q,r")
+	(unspec:MM [
+	  (match_operand:MM 1 "immediate_operand"  " I,I,B")
+	  (match_operand:SI 2 "immediate_operand"  " I,I,I")
+	 ] UNSPEC_CORE_RELOC)
+   )]
+  ""
+  "@
+   *return bpf_add_core_reloc (operands, \"{mov\t%0,%1|%0 = %1}\");
+   *return bpf_add_core_reloc (operands, \"{st<mop>\t%0,%1|*(<smop> *) (%0) = %1}\");
+   *return bpf_add_core_reloc (operands, \"{lddw\t%0,%1|%0 = %1 ll}\");"
+  [(set_attr "type" "alu,st,alu")])
+
 ;;;; Shifts
 
 (define_mode_iterator SIM [(SI "bpf_has_alu32") DI])
diff --git a/gcc/config/bpf/core-builtins.cc b/gcc/config/bpf/core-builtins.cc
new file mode 100644
index 000000000000..4514437045be
--- /dev/null
+++ b/gcc/config/bpf/core-builtins.cc
@@ -0,0 +1,1397 @@
+/* Subroutines used for code generation for eBPF.
+   Copyright (C) 2019-2023 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#define IN_TARGET_CODE 1
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "rtl.h"
+#include "regs.h"
+#include "insn-config.h"
+#include "insn-attr.h"
+#include "recog.h"
+#include "output.h"
+#include "alias.h"
+#include "tree.h"
+#include "stringpool.h"
+#include "attribs.h"
+#include "varasm.h"
+#include "stor-layout.h"
+#include "calls.h"
+#include "function.h"
+#include "explow.h"
+#include "memmodel.h"
+#include "emit-rtl.h"
+#include "reload.h"
+#include "tm_p.h"
+#include "target.h"
+#include "basic-block.h"
+#include "expr.h"
+#include "optabs.h"
+#include "bitmap.h"
+#include "df.h"
+#include "c-family/c-common.h"
+#include "diagnostic.h"
+#include "builtins.h"
+#include "predict.h"
+#include "langhooks.h"
+#include "flags.h"
+
+#include "cfg.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimple-walk.h"
+#include "tree-pass.h"
+#include "tree-iterator.h"
+
+#include "context.h"
+#include "pass_manager.h"
+
+#include "gimplify.h"
+#include "gimplify-me.h"
+
+#include "plugin.h"
+
+#include "ctfc.h"
+#include "btf.h"
+#include "coreout.h"
+#include "core-builtins.h"
+
+/* BPF CO-RE builtins definition.
+
+   The expansion of the CO-RE builtins occurs in three steps:
+
+   1. - bpf_resolve_overloaded_core_builtin (pack step)
+     Right after the front-end, all of the CO-RE builtins are converted to an
+     internal builtin __builtin_core_reloc, which takes a single argument and
+     has a polymorphic return value to fit the particular return type expected
+     from the original builtin.  The argument is an index that points to the
+     information stored in a vec<struct cr_builtins>, which collects the
+     required information from the original CO-RE builtin call so that it can
+     be used later on in the __builtin_core_reloc expansion (the next
+     step).
+
+   2. - bpf_expand_core_builtin
+     In this step, __builtin_core_reloc is expanded to an unspec:UNSPEC_CORE_RELOC
+     with three operands: destination, source and index.  The index operand
+     is the index in the vec constructed in the previous step.
+
+   3. - final asm output (process step)
+     This is the output of the unspec:UNSPEC_CORE_RELOC.  The index passed in
+     the third operand is read and extracted as an integer from the rtx node.
+     The data is collected from the vec and is used to create the proper
+     CO-RE relocation as well as to do the final assembly output.  A label is
+     also emitted to mark the location of the move instruction that is
+     referred to by the CO-RE relocation.
+
+  The initialization of the CO-RE builtins infrastructure occurs in the
+  bpf_init_core_builtins function.  It creates an array of
+  struct builtin_helpers which defines the kind argument position and
+  the callback helpers (kind, compare, pack and process) for each individual
+  type of builtin argument possible in the original CO-RE builtins.
+
+  More precisely, the possible argument types are field expression, type and
+  enum value, used in the following builtins:
+    - __builtin_preserve_access_index (<field_expr>)
+    - __builtin_preserve_field_info (<field_expr>, <kind>)
+    - __builtin_btf_type_id (<type>, <kind>)
+    - __builtin_preserve_type_info (<type>, <kind>)
+    - __builtin_preserve_enum_value (<enum_value>, <kind>)
+
+  The kind helper identifies the proper relocation for the builtin
+  call based on the value within the kind argument.
+
+  The compare helper is used to identify whether a new builtin call has
+  similar arguments to any other builtin call within the compilation unit.
+  This makes it possible to optimize consecutive similar calls of the
+  builtins.
+
+  The pack helper callbacks are supposed to decode the original CO-RE builtin
+  call arguments, verify that they are valid tree nodes for the particular
+  builtin, allocate a struct cr_local in the vector and fill it with the
+  relevant data for the particular builtin type.
+
+  The process helper should take the data constructed in the pack helper and
+  create a struct cr_final element which contains the essential
+  information to create a CO-RE relocation.
+  This information is further used by the final assembly output step to define
+  the CO-RE relocation and pass through the default value for the original
+  CO-RE builtin.
+
+
+  BPF CO-RE preserve access is supported in two forms:
+  - A target builtin, __builtin_preserve_access_index
+
+    This builtin accepts a single argument.  Any access to an aggregate data
+    structure (struct, union or array) within the argument will be recorded by
+    the CO-RE machinery, resulting in a relocation record being placed in the
+    .BTF.ext section of the output.
+
+    It is implemented in bpf_resolve_overloaded_builtin () and
+    bpf_expand_builtin (), using the supporting routines below.
+
+  - An attribute, __attribute__((preserve_access_index))
+
+    This attribute can be applied to struct and union types.  Any access to a
+    type with this attribute will be recorded by the CO-RE machinery.
+    In expand, any matched move is checked to see whether any of its operands
+    is an expression of an attributed type; if so, expand emits an
+    unspec:UNSPEC_CORE_RELOC that later on, in final assembly output, will
+    create the CO-RE relocation, just as would happen if the access had been
+    wrapped in the builtin.  */
+
+
+struct cr_builtins {
+  tree type;
+  tree expr;
+  tree default_value;
+  rtx rtx_default_value;
+  enum btf_core_reloc_kind kind; /* Recovered from proper argument.  */
+  enum bpf_builtins orig_builtin_code;
+  tree orig_arg_expr;
+};
+
+#define CORE_BUILTINS_DATA_EMPTY \
+  { NULL_TREE, NULL_TREE, NULL_TREE, NULL_RTX, BPF_RELO_INVALID, \
+    BPF_BUILTIN_UNUSED, NULL }
+
+/* Vector definition and its access function.  */
+vec<struct cr_builtins> builtins_data;
+
+static inline int
+allocate_builtin_data ()
+{
+  struct cr_builtins data = CORE_BUILTINS_DATA_EMPTY;
+  int ret = builtins_data.length ();
+  builtins_data.safe_push (data);
+  return ret;
+}
+
+static inline struct cr_builtins *
+get_builtin_data (int index)
+{
+  return &builtins_data[index];
+}
+
+typedef bool
+(*builtin_local_data_compare_fn) (struct cr_builtins *a,
+				  struct cr_builtins *b);
+static inline int
+search_builtin_data (builtin_local_data_compare_fn callback,
+		     struct cr_builtins *elem)
+{
+  unsigned int i;
+  for (i = 0; i < builtins_data.length (); i++)
+    if ((callback != NULL && (callback) (elem, &builtins_data[i]))
+       || (callback == NULL
+	   && (builtins_data[i].orig_arg_expr == elem->orig_arg_expr)))
+      return (int) i;
+
+  return -1;
+}
+
+/* Possible relocation decisions.  */
+enum cr_decision {
+  FAILED_VALIDATION = 0,
+  KEEP_ORIGINAL_NO_RELOCATION,
+  REPLACE_CREATE_RELOCATION,
+  REPLACE_NO_RELOCATION
+};
+
+/* Core Relocation Pack local structure.  */
+struct cr_local
+{
+  struct cr_builtins reloc_data;
+  enum cr_decision reloc_decision;
+  bool fail;
+};
+#define CR_LOCAL_EMPTY { CORE_BUILTINS_DATA_EMPTY, FAILED_VALIDATION, false }
+
+/* Core Relocation Final data */
+struct cr_final
+{
+  char *str;
+  tree type;
+  enum btf_core_reloc_kind kind;
+};
+
+/* CO-RE builtin helpers struct.  Used and initialized in
+   bpf_init_core_builtins.  */
+struct builtin_helpers {
+  enum btf_core_reloc_kind (*kind) (tree *args, int nargs);
+  bool (*compare) (struct cr_builtins *a, struct cr_builtins *b);
+  struct cr_local (*pack) (tree *args,
+			   enum btf_core_reloc_kind kind,
+			   enum bpf_builtins code);
+  struct cr_final (*process) (struct cr_builtins *data);
+  bool is_pure;
+  bool is_valid;
+};
+
+struct builtin_helpers
+  core_builtin_helpers[(int) BPF_BUILTIN_MAX];
+
+#define BPF_CORE_HELPER_NOTSET { NULL, NULL, NULL, NULL, false, false }
+#define BPF_CORE_HELPER_SET(KIND, COMPARE, PACK, PROCESS, IS_PURE) \
+	{ KIND, COMPARE, PACK, PROCESS, IS_PURE, true }
+
+enum bpf_plugin_states {
+  BPF_PLUGIN_DISABLED = 0,
+  BPF_PLUGIN_ENABLED,
+  BPF_PLUGIN_REMOVED
+};
+enum bpf_plugin_states plugin_state = BPF_PLUGIN_DISABLED;
+
+static void
+remove_parser_plugin ()
+{
+  /* Restore state of the plugin system.  */
+  if (flag_plugin_added == true && plugin_state != BPF_PLUGIN_REMOVED)
+    {
+      unregister_callback ("bpf_collect_enum_info", PLUGIN_FINISH_TYPE);
+      flag_plugin_added = (bool) plugin_state == BPF_PLUGIN_ENABLED;
+      plugin_state = BPF_PLUGIN_REMOVED;
+    }
+}
+
+#define bpf_error(MSG) { \
+  remove_parser_plugin (); \
+  error (MSG); \
+}
+
+#define bpf_error_at(LOC, MSG) { \
+  remove_parser_plugin (); \
+  error_at (LOC, MSG); \
+}
+
+
+/* Helper compare functions used to verify if multiple builtin calls contain
+   the same argument as input.  In that case the builtin calls can be optimized
+   out by identifying redundant calls.  This happens since the internal
+   __builtin_core_reloc builtin is marked as PURE.  */
+
+static inline bool
+compare_same_kind (struct cr_builtins *a, struct cr_builtins *b)
+{
+  return a->kind == b->kind;
+}
+static inline bool
+compare_same_ptr_expr (struct cr_builtins *a, struct cr_builtins *b)
+{
+  return compare_same_kind (a, b) && a->expr == b->expr;
+}
+static inline bool
+compare_same_ptr_type (struct cr_builtins *a, struct cr_builtins *b)
+{
+  return compare_same_kind (a, b) && a->type == b->type;
+}
+
+/* Handling for __attribute__((preserve_access_index)) for BPF CO-RE support.
+
+   This attribute marks a structure/union/array type as "preserve", so that
+   every access to that type should be recorded and replayed by the BPF loader;
+   this is just the same functionality as __builtin_preserve_access_index,
+   but in the form of an attribute for an entire aggregate type.
+
+   Note also that nested structs behave as though they all have the attribute.
+   For example:
+     struct X { int a; };
+     struct Y { struct X bar; } __attribute__((preserve_access_index));
+     struct Y foo;
+     foo.bar.a;
+   will record access all the way to 'a', even though struct X does not have
+   the preserve_access_index attribute.
+
+   This is to follow LLVM behavior. */
+
+/* True if tree T accesses any member of a struct/union/class which is marked
+   with the PRESERVE_ACCESS_INDEX attribute.  */
+
+static bool
+is_attr_preserve_access (tree t)
+{
+  if (t == NULL_TREE)
+    return false;
+
+  poly_int64 bitsize, bitpos;
+  tree var_off;
+  machine_mode mode;
+  int sign, reverse, vol;
+
+  tree base = get_inner_reference (t, &bitsize, &bitpos, &var_off, &mode,
+				   &sign, &reverse, &vol);
+
+  if (TREE_CODE (base) == MEM_REF)
+    {
+      return lookup_attribute ("preserve_access_index",
+			       TYPE_ATTRIBUTES (TREE_TYPE (base)));
+    }
+
+  if (TREE_CODE (t) == COMPONENT_REF)
+    {
+      /* preserve_access_index propagates into nested structures,
+	 so check whether this is a component of another component
+	 which in turn is part of such a struct.  */
+
+      const tree op = TREE_OPERAND (t, 0);
+
+      if (TREE_CODE (op) == COMPONENT_REF)
+	return is_attr_preserve_access (op);
+
+      const tree container = DECL_CONTEXT (TREE_OPERAND (t, 1));
+
+      return lookup_attribute ("preserve_access_index",
+			       TYPE_ATTRIBUTES (container));
+    }
+
+  else if (TREE_CODE (t) == ADDR_EXPR)
+    return is_attr_preserve_access (TREE_OPERAND (t, 0));
+
+  return false;
+}
+
+
+/* Expand a call to __builtin_preserve_field_info by evaluating the requested
+   information about SRC according to KIND, and return a tree holding
+   the result.  */
+
+static tree
+core_field_info (tree src, enum btf_core_reloc_kind kind)
+{
+  unsigned int result;
+  poly_int64 bitsize, bitpos;
+  tree var_off = NULL_TREE;
+  machine_mode mode;
+  int unsignedp, reversep, volatilep;
+  location_t loc = EXPR_LOCATION (src);
+  tree type = TREE_TYPE (src);
+
+  get_inner_reference (src, &bitsize, &bitpos, &var_off, &mode, &unsignedp,
+		       &reversep, &volatilep);
+
+  /* Note: Use DECL_BIT_FIELD_TYPE rather than DECL_BIT_FIELD here, because it
+     remembers whether the field in question was originally declared as a
+     bitfield, regardless of how it has been optimized.  */
+  bool bitfieldp = (TREE_CODE (src) == COMPONENT_REF
+		    && DECL_BIT_FIELD_TYPE (TREE_OPERAND (src, 1)));
+
+  unsigned int align = TYPE_ALIGN (TREE_TYPE (src));
+  if (TREE_CODE (src) == COMPONENT_REF)
+    {
+      tree field = TREE_OPERAND (src, 1);
+      if (DECL_BIT_FIELD_TYPE (field))
+	align = TYPE_ALIGN (DECL_BIT_FIELD_TYPE (field));
+      else
+	align = TYPE_ALIGN (TREE_TYPE (field));
+    }
+
+  unsigned int start_bitpos = bitpos & ~(align - 1);
+  unsigned int end_bitpos = start_bitpos + align;
+
+  switch (kind)
+    {
+    case BPF_RELO_FIELD_BYTE_OFFSET:
+      {
+	type = unsigned_type_node;
+	if (var_off != NULL_TREE)
+	  {
+	    bpf_error_at (loc, "unsupported variable field offset");
+	    return error_mark_node;
+	  }
+
+	if (bitfieldp)
+	  result = start_bitpos / 8;
+	else
+	  result = bitpos / 8;
+      }
+      break;
+
+    case BPF_RELO_FIELD_BYTE_SIZE:
+      {
+	type = unsigned_type_node;
+	if (mode == BLKmode && bitsize == -1)
+	  {
+	    bpf_error_at (loc, "unsupported variable size field access");
+	    return error_mark_node;
+	  }
+
+	if (bitfieldp)
+	  {
+	    /* To match LLVM behavior, byte size of bitfields is recorded as
+	       the full size of the base type.  A 3-bit bitfield of type int is
+	       therefore recorded as having a byte size of 4 bytes.  */
+	    result = end_bitpos - start_bitpos;
+	    if (result & (result - 1))
+	      {
+		bpf_error_at (loc, "unsupported field expression");
+		return error_mark_node;
+	      }
+	    result = result / 8;
+	  }
+	else
+	  result = bitsize / 8;
+      }
+      break;
+
+    case BPF_RELO_FIELD_EXISTS:
+      type = unsigned_type_node;
+      /* The field always exists at compile time.  */
+      result = 1;
+      break;
+
+    case BPF_RELO_FIELD_SIGNED:
+      type = unsigned_type_node;
+      result = !unsignedp;
+      break;
+
+    case BPF_RELO_FIELD_LSHIFT_U64:
+    case BPF_RELO_FIELD_RSHIFT_U64:
+      {
+	type = unsigned_type_node;
+	if (mode == BLKmode && bitsize == -1)
+	  {
+	    bpf_error_at (loc, "unsupported variable size field access");
+	    return error_mark_node;
+	  }
+	if (var_off != NULL_TREE)
+	  {
+	    bpf_error_at (loc, "unsupported variable field offset");
+	    return error_mark_node;
+	  }
+
+	if (!bitfieldp)
+	  {
+	    if (bitsize > 64)
+	      {
+		bpf_error_at (loc, "field size too large");
+		return error_mark_node;
+	      }
+	    result = 64 - bitsize;
+	    break;
+	  }
+
+	if (end_bitpos - start_bitpos > 64)
+	  {
+	    bpf_error_at (loc, "field size too large");
+	    return error_mark_node;
+	  }
+
+	if (kind == BPF_RELO_FIELD_LSHIFT_U64)
+	  {
+	    if (TARGET_BIG_ENDIAN)
+	      result = bitpos + 64 - start_bitpos - align;
+	    else
+	      result = start_bitpos + 64 - bitpos - bitsize;
+	  }
+	else /* RSHIFT_U64 */
+	  result = 64 - bitsize;
+      }
+      break;
+
+    default:
+      bpf_error ("invalid second argument to built-in function");
+      return error_mark_node;
+      break;
+    }
+
+  return build_int_cst (type, result);
+}
+
+/* Compute the index of the NODE in its immediate container.
+   NODE should be a FIELD_DECL (i.e. of struct or union), or an ARRAY_REF.  */
+
+static int
+bpf_core_get_index (const tree node)
+{
+  enum tree_code code = TREE_CODE (node);
+
+  if (code == FIELD_DECL)
+    {
+      /* Compute the index by walking the fields of the containing type.
+	 The index is deliberately not looked up in the BTF information, since
+	 members of anonymous structures would not be found there.  */
+      const tree container = DECL_CONTEXT (node);
+      int i = 0;
+      for (tree l = TYPE_FIELDS (container); l; l = DECL_CHAIN (l))
+	{
+	  if (l == node)
+	    return i;
+	  i++;
+	}
+    }
+  else if (code == ARRAY_REF || code == ARRAY_RANGE_REF || code == MEM_REF)
+    {
+      /* For array accesses, the index is operand 1.  */
+      tree index = TREE_OPERAND (node, 1);
+
+      /* If the indexing operand is a constant, extracting is trivial.  */
+      if (TREE_CODE (index) == INTEGER_CST && tree_fits_shwi_p (index))
+	return tree_to_shwi (index);
+    }
+  else if (code == POINTER_PLUS_EXPR)
+    {
+      tree offset = TREE_OPERAND (node, 1);
+      tree type = TREE_TYPE (TREE_OPERAND (node, 0));
+
+      if (TREE_CODE (offset) == INTEGER_CST && tree_fits_shwi_p (offset)
+	  && COMPLETE_TYPE_P (type) && tree_fits_shwi_p (TYPE_SIZE (type)))
+	{
+	  HOST_WIDE_INT offset_i = tree_to_shwi (offset);
+	  HOST_WIDE_INT type_size_i = tree_to_shwi (TYPE_SIZE_UNIT (type));
+	  if ((offset_i % type_size_i) == 0)
+	    return offset_i / type_size_i;
+	}
+    }
+
+  gcc_unreachable ();
+  return -1;
+}
+
+/* This function takes a possible field expression (NODE) and verifies it is
+   valid, extracts what should be the root of the valid field expression and
+   composes the array of accessor indices.  The accessors are later used in
+   the string field of the CO-RE relocation.  */
+
+static unsigned char
+compute_field_expr (tree node, unsigned int *accessors, bool *valid,
+		    tree *root)
+{
+  unsigned char n = 0;
+  if (node == NULL_TREE)
+    {
+      *valid = false;
+      return 0;
+    }
+
+  switch (TREE_CODE (node))
+    {
+    case INDIRECT_REF:
+    case ADDR_EXPR:
+      accessors[0] = 0;
+      n = compute_field_expr (TREE_OPERAND (node, 0), &accessors[0], valid,
+			      root);
+      *root = node;
+      return n + 1;
+    case POINTER_PLUS_EXPR:
+      accessors[0] = bpf_core_get_index (node);
+      *root = node;
+      return 1;
+    case COMPONENT_REF:
+      n = compute_field_expr (TREE_OPERAND (node, 0), accessors, valid,
+			      root);
+      accessors[n] = bpf_core_get_index (TREE_OPERAND (node, 1));
+      *root = node;
+      return n + 1;
+    case ARRAY_REF:
+    case ARRAY_RANGE_REF:
+    case MEM_REF:
+      n = compute_field_expr (TREE_OPERAND (node, 0), accessors, valid, root);
+      accessors[n] = bpf_core_get_index (node);
+      *root = node;
+      return n + 1;
+    case NOP_EXPR:
+      n = compute_field_expr (TREE_OPERAND (node, 0), accessors, valid, root);
+      *root = node;
+      return n;
+    case TARGET_EXPR:
+      {
+	tree value = TREE_OPERAND (node, 1);
+	if (TREE_CODE (value) == BIND_EXPR
+	    && TREE_CODE (value = BIND_EXPR_BODY (value)) == MODIFY_EXPR)
+	  return compute_field_expr (TREE_OPERAND (value, 1), accessors, valid,
+				     root);
+      }
+      *root = node;
+      return 0;
+    case SSA_NAME:
+    case VAR_DECL:
+    case PARM_DECL:
+      return 0;
+    default:
+      *valid = false;
+      return 0;
+    }
+}
+
+static struct cr_local
+pack_field_expr_for_access_index (tree *args,
+				  enum btf_core_reloc_kind kind,
+				  enum bpf_builtins code ATTRIBUTE_UNUSED)
+{
+  struct cr_local ret = CR_LOCAL_EMPTY;
+  ret.fail = false;
+
+  tree arg = args[0];
+  tree root = arg;
+
+  /* Avoid double-recording information if the argument is an access to
+     a struct/union marked __attribute__((preserve_access_index)).  This
+     will be handled by the attribute handling code.  */
+  if (is_attr_preserve_access (arg))
+    {
+      ret.reloc_decision = REPLACE_NO_RELOCATION;
+      ret.reloc_data.expr = arg;
+    }
+  else
+    {
+      ret.reloc_decision = REPLACE_CREATE_RELOCATION;
+
+      unsigned int accessors[100];
+      bool valid = true;
+      compute_field_expr (arg, accessors, &valid, &root);
+
+      if (valid == true)
+	ret.reloc_data.expr = root;
+      else
+	{
+	  bpf_error_at (EXPR_LOC_OR_LOC (arg, UNKNOWN_LOCATION),
+			"Cannot compute index for field argument");
+	  ret.fail = true;
+	}
+    }
+
+  /* Note: the type of default_value is used to define the return type of
+   __builtin_core_reloc in bpf_resolve_overloaded_core_builtin.  */
+  ret.reloc_data.type = TREE_TYPE (root);
+  ret.reloc_data.default_value = build_int_cst (ret.reloc_data.type, 0);
+  ret.reloc_data.kind = kind;
+
+  if (TREE_CODE (ret.reloc_data.default_value) == ERROR_MARK)
+    ret.fail = true;
+
+  return ret;
+}
+
+static struct cr_local
+pack_field_expr_for_preserve_field (tree *args,
+				    enum btf_core_reloc_kind kind,
+				    enum bpf_builtins code ATTRIBUTE_UNUSED)
+{
+  struct cr_local ret = CR_LOCAL_EMPTY;
+  ret.fail = false;
+
+  tree arg = args[0];
+  tree tmp;
+  tree root = arg;
+
+  /* Remove the cast to void * created by the front-end to fit the builtin
+     type, when passed a simple expression like f->u.  */
+  if (TREE_CODE (arg) == NOP_EXPR && (tmp = TREE_OPERAND (arg, 0))
+      && TREE_CODE (tmp) == ADDR_EXPR && (tmp = TREE_OPERAND (tmp, 0))
+      && arg != NULL_TREE)
+    arg = tmp;
+
+  unsigned int accessors[100];
+  bool valid = true;
+  compute_field_expr (arg, accessors, &valid, &root);
+
+  if (valid == true)
+    ret.reloc_data.expr = root;
+  else
+    {
+      bpf_error_at (EXPR_LOC_OR_LOC (arg, UNKNOWN_LOCATION),
+		    "argument is not a field access");
+      ret.fail = true;
+    }
+
+  ret.reloc_decision = REPLACE_CREATE_RELOCATION;
+  ret.reloc_data.type = TREE_TYPE (root);
+  ret.reloc_data.default_value = core_field_info (root, kind);
+  ret.reloc_data.kind = kind;
+
+  if (TREE_CODE (ret.reloc_data.default_value) == ERROR_MARK)
+    ret.fail = true;
+
+  return ret;
+}
+
+static struct cr_final
+process_field_expr (struct cr_builtins *data)
+{
+  gcc_assert (data->kind == BPF_RELO_FIELD_BYTE_OFFSET
+	      || data->kind == BPF_RELO_FIELD_BYTE_SIZE
+	      || data->kind == BPF_RELO_FIELD_LSHIFT_U64
+	      || data->kind == BPF_RELO_FIELD_RSHIFT_U64
+	      || data->kind == BPF_RELO_FIELD_SIGNED
+	      || data->kind == BPF_RELO_FIELD_EXISTS);
+
+  unsigned int accessors[100];
+  unsigned char nr_accessors = 0;
+  bool valid = true;
+  tree root = NULL_TREE;
+  tree expr = data->expr;
+  tree type = TREE_TYPE (data->expr);
+
+  if (TREE_CODE (expr) == ADDR_EXPR)
+    expr = TREE_OPERAND (expr, 0);
+
+  nr_accessors = compute_field_expr (expr, accessors, &valid, &root);
+
+  struct cr_final ret = { NULL, type, data->kind};
+
+  char str[100];
+  if (nr_accessors > 0)
+    {
+      int n = 0;
+      for (int i = 0; i < nr_accessors; i++)
+	n += snprintf (str + n, sizeof (str) - n,
+		       i == 0 ? "%u" : ":%u", accessors[i]);
+      ret.str = CONST_CAST (char *, ggc_strdup (str));
+    }
+  else
+    gcc_unreachable ();
+
+  return ret;
+}
+
+hash_map <tree, tree> bpf_enum_mappings;
+
+tree enum_value_type = NULL_TREE;
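+
+/* Pack helper for the __builtin_preserve_enum_value builtin.  Extracts the
+   enum type and the original CONST_DECL of the enumerator, recorded by the
+   parser plugin callback, from the builtin arguments.  */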
+static struct cr_local
+pack_enum_value (tree *args, enum btf_core_reloc_kind kind,
+		 enum bpf_builtins code ATTRIBUTE_UNUSED)
+{
+  struct cr_local ret = CR_LOCAL_EMPTY;
+  ret.reloc_decision = REPLACE_CREATE_RELOCATION;
+  ret.fail = false;
+
+  tree *result = NULL;
+  tree tmp = args[0];
+  tree enum_value = args[1];
+  tree type = NULL_TREE;
+
+  /* Deconstructing "*(typeof (enum_type) *) enum_value" to collect both the
+     enum_type and enum_value.  */
+  if (TREE_CODE (tmp) != TARGET_EXPR
+      || (type = TREE_TYPE (tmp)) == NULL_TREE
+      || (TREE_CODE (type) != POINTER_TYPE)
+      || (type = TREE_TYPE (type)) == NULL_TREE
+      || (TREE_CODE (type) != ENUMERAL_TYPE))
+    {
+      bpf_error ("invalid type argument format for enum value builtin");
+      ret.fail = true;
+    }
+
+  if (TREE_CODE (enum_value) != INTEGER_CST)
+    goto pack_enum_value_fail;
+
+  result = bpf_enum_mappings.get (enum_value);
+  if (result == NULL)
+    goto pack_enum_value_fail;
+
+  tmp = *result;
+
+  if (TREE_CODE (tmp) != CONST_DECL)
+    {
+pack_enum_value_fail:
+      bpf_error ("invalid enum value argument for enum value builtin");
+      ret.fail = true;
+    }
+  else
+    {
+      ret.reloc_data.expr = tmp;
+      if (kind == BPF_RELO_ENUMVAL_VALUE)
+	ret.reloc_data.default_value = enum_value;
+      else
+	ret.reloc_data.default_value = integer_one_node;
+    }
+
+  ret.reloc_data.type = type;
+  ret.reloc_data.kind = kind;
+  return ret;
+}
+
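+/* Process helper for the __builtin_preserve_enum_value builtin.  The accessor
+   string is the index of the enumerator within its enum type.  */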
+static struct cr_final
+process_enum_value (struct cr_builtins *data)
+{
+  gcc_assert (data->kind == BPF_RELO_ENUMVAL_EXISTS
+	      || data->kind == BPF_RELO_ENUMVAL_VALUE);
+
+  tree expr = data->expr;
+  tree type = data->type;
+
+  struct cr_final ret = { NULL, type, data->kind };
+
+  if (TREE_CODE (expr) == CONST_DECL
+     && TREE_CODE (type) == ENUMERAL_TYPE)
+    {
+      unsigned int index = 0;
+      for (tree l = TYPE_VALUES (type); l; l = TREE_CHAIN (l))
+	{
+	  if (TREE_VALUE (l) == expr)
+	    {
+	      char buf[12];	/* Enough for any unsigned int index.  */
+	      snprintf (buf, sizeof (buf), "%u", index);
+	      ret.str = CONST_CAST (char *, ggc_strdup (buf));
+	      break;
+	    }
+	  index++;
+	}
+    }
+  else
+    gcc_unreachable ();
+
+  return ret;
+}
+
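+/* Pack helper for the type based CO-RE builtins (__builtin_btf_type_id and
+   __builtin_preserve_type_info).  Extracts the type from the argument
+   construct and computes the default value of the builtin based on KIND.  */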
+static struct cr_local
+pack_type (tree *args, enum btf_core_reloc_kind kind,
+	   enum bpf_builtins code ATTRIBUTE_UNUSED)
+{
+  struct cr_local ret = CR_LOCAL_EMPTY;
+  ret.reloc_decision = FAILED_VALIDATION;
+  ret.reloc_data.default_value = integer_zero_node;
+  ret.fail = false;
+
+  tree root_type = NULL_TREE;
+  tree tmp = args[0];
+  HOST_WIDE_INT type_size_i;
+
+  /* Typical structure to match:
+     *({ extern typeof (TYPE) *<tmp_name>; <tmp_name>; })  */
+
+  /* Extract the pointer dereference from the construct.  */
+
+  while (tmp != NULL_TREE
+	&& (TREE_CODE (tmp) == INDIRECT_REF
+	    || TREE_CODE (tmp) == NOP_EXPR))
+    tmp = TREE_OPERAND (tmp, 0);
+
+  if (TREE_CODE (tmp) != TARGET_EXPR
+      || TREE_CODE (tmp = TREE_OPERAND (tmp, 1)) != BIND_EXPR)
+    goto pack_type_fail;
+
+  tmp = BIND_EXPR_VARS (tmp);
+
+  if (TREE_CODE (tmp) != TYPE_DECL
+      && TREE_CODE (tmp) != VAR_DECL)
+    goto pack_type_fail;
+
+  tmp = TREE_TYPE (tmp);
+
+  if (TREE_CODE (tmp) == POINTER_TYPE)
+    tmp = TREE_TYPE (tmp);
+
+  root_type = tmp;
+
+  if (TREE_CODE (tmp) != RECORD_TYPE
+      && TREE_CODE (tmp) != UNION_TYPE
+      && TREE_CODE (tmp) != ENUMERAL_TYPE
+      && (TREE_CODE (tmp) != POINTER_TYPE
+	  || TREE_CODE (TREE_TYPE (tmp)) == FUNCTION_TYPE)
+      && (TREE_CODE (tmp) != POINTER_TYPE
+	  || TREE_CODE (TREE_TYPE (tmp)) == VOID_TYPE)
+      && TREE_CODE (tmp) != ARRAY_TYPE
+      && TREE_CODE (tmp) != INTEGER_TYPE)
+    goto pack_type_fail;
+
+  ret.reloc_data.type = root_type;
+  ret.reloc_decision = REPLACE_CREATE_RELOCATION;
+
+  /* Force this type to be marked as used in dwarf2out.  */
+  gcc_assert (cfun);
+  if (cfun->used_types_hash == NULL)
+    cfun->used_types_hash = hash_set<tree>::create_ggc (37);
+  cfun->used_types_hash->add (root_type);
+
+  type_size_i = tree_to_shwi (TYPE_SIZE_UNIT (ret.reloc_data.type));
+
+  switch (kind)
+    {
+      case BPF_RELO_TYPE_SIZE:
+	ret.reloc_data.default_value = build_int_cst (integer_type_node,
+						      type_size_i);
+	break;
+      case BPF_RELO_TYPE_EXISTS:
+      case BPF_RELO_TYPE_MATCHES:
+	ret.reloc_data.default_value = integer_one_node;
+	break;
+      case BPF_RELO_TYPE_ID_LOCAL:
+      case BPF_RELO_TYPE_ID_TARGET:
+	ret.reloc_data.default_value = integer_zero_node;
+	break;
+      default:
+	break;
+    }
+
+  ret.reloc_data.kind = kind;
+  return ret;
+
+pack_type_fail:
+  bpf_error_at (EXPR_LOC_OR_LOC (args[0], UNKNOWN_LOCATION),
+		"invalid first argument format for type builtin");
+  ret.fail = true;
+  return ret;
+}
+
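+/* Process helper for the type based CO-RE builtins.  For the type id kinds,
+   the BTF type id of the type is looked up and used to replace the default
+   value of the builtin.  */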
+static struct cr_final
+process_type (struct cr_builtins *data)
+{
+  gcc_assert (data->kind == BPF_RELO_TYPE_ID_LOCAL
+	      || data->kind == BPF_RELO_TYPE_ID_TARGET
+	      || data->kind == BPF_RELO_TYPE_EXISTS
+	      || data->kind == BPF_RELO_TYPE_SIZE
+	      || data->kind == BPF_RELO_TYPE_MATCHES);
+
+  struct cr_final ret;
+  ret.str = NULL;
+  ret.type = data->type;
+  ret.kind = data->kind;
+
+  if ((data->kind == BPF_RELO_TYPE_ID_LOCAL
+       || data->kind == BPF_RELO_TYPE_ID_TARGET)
+      && data->default_value != NULL)
+    {
+      ctf_container_ref ctfc = ctf_get_tu_ctfc ();
+      unsigned int btf_id
+	= get_btf_id (ctf_lookup_tree_type (ctfc, ret.type));
+      data->rtx_default_value
+	= expand_normal (build_int_cst (integer_type_node, btf_id));
+    }
+
+  return ret;
+}
+
+static bool
+bpf_require_core_support ()
+{
+  if (!TARGET_BPF_CORE)
+    {
+      bpf_error ("BPF CO-RE is required but not enabled");
+      return false;
+    }
+  return true;
+}
+
+/* BPF Compile Once - Run Everywhere (CO-RE) support.  Construct a CO-RE
+   relocation record in DATA to be emitted in the .BTF.ext
+   section.  Does nothing if we are not targeting BPF CO-RE, or if the
+   constructed relocation would be a no-op.  */
+
+static void
+make_core_relo (struct cr_final *data, rtx_code_label *label)
+{
+  /* If we are not targeting BPF CO-RE, do not make a relocation.  We
+     might not be generating any debug info at all.  */
+  if (!bpf_require_core_support ())
+    return;
+
+  gcc_assert (data->type);
+
+  /* Determine what output section this relocation will apply to.
+     If this function is associated with a section, use that.  Otherwise,
+     fall back on '.text'.  */
+  const char * section_name;
+  if (current_function_decl && DECL_SECTION_NAME (current_function_decl))
+    section_name = DECL_SECTION_NAME (current_function_decl);
+  else
+    section_name = ".text";
+
+  /* Add the CO-RE relocation information to the BTF container.  */
+  bpf_core_reloc_add (data->type, section_name, data->str, label,
+		      data->kind);
+}
+
+/* Support function to extract kind information for CO-RE builtin
+   calls.  */
+
+static inline char
+read_kind (tree kind, char max_value, char enum_offset)
+{
+  char kind_val = 0;
+
+  if (kind == NULL_TREE)
+    goto invalid_kind_arg_error;
+
+  if (TREE_CODE (kind) != CONST_DECL
+      && TREE_CODE (kind) == NOP_EXPR)
+    kind = TREE_OPERAND (kind, 0);
+
+  if (TREE_CODE (kind) == CONST_DECL)
+    kind = DECL_INITIAL (kind);
+
+  if (TREE_CODE (kind) == INTEGER_CST
+      && tree_fits_uhwi_p (kind))
+    kind_val = tree_to_uhwi (kind);
+  else
+    goto invalid_kind_arg_error;
+
+  if (kind_val > max_value)
+    {
+invalid_kind_arg_error:
+      bpf_error ("invalid kind argument to core builtin");
+      return -1;
+    }
+  return kind_val + enum_offset;
+}
+
+#define KIND_EXPECT_NARGS(N, MSG) \
+  { if (nargs != N) { bpf_error (MSG); return BPF_RELO_INVALID; } }
+
+/* Helper functions to extract kind information.  */
+static inline enum btf_core_reloc_kind
+kind_access_index (tree *args ATTRIBUTE_UNUSED, int nargs)
+{
+  KIND_EXPECT_NARGS (1,
+	"wrong number of arguments for access index core builtin");
+  return BPF_RELO_FIELD_BYTE_OFFSET;
+}
+static inline enum btf_core_reloc_kind
+kind_preserve_field_info (tree *args, int nargs)
+{
+  KIND_EXPECT_NARGS (2,
+	"wrong number of arguments for field info core builtin");
+  return (enum btf_core_reloc_kind) read_kind (args[1], 5,
+					       BPF_RELO_FIELD_BYTE_OFFSET);
+}
+static inline enum btf_core_reloc_kind
+kind_enum_value (tree *args, int nargs)
+{
+  KIND_EXPECT_NARGS (3,
+	"wrong number of arguments for enum value core builtin");
+  return (enum btf_core_reloc_kind) read_kind (args[2], 1,
+					       BPF_RELO_ENUMVAL_EXISTS);
+}
+static inline enum btf_core_reloc_kind
+kind_type_id (tree *args, int nargs)
+{
+  KIND_EXPECT_NARGS (2,
+	"wrong number of arguments for type id core builtin");
+  return (enum btf_core_reloc_kind) read_kind (args[1], 1,
+					       BPF_RELO_TYPE_ID_LOCAL);
+}
+static inline enum btf_core_reloc_kind
+kind_preserve_type_info (tree *args, int nargs)
+{
+  KIND_EXPECT_NARGS (2,
+	"wrong number of arguments for type info core builtin");
+  char val = read_kind (args[1], 2, 0);
+  switch (val)
+    {
+    case 0:
+      return BPF_RELO_TYPE_EXISTS;
+    case 1:
+      return BPF_RELO_TYPE_SIZE;
+    case 2:
+      return BPF_RELO_TYPE_MATCHES;
+    default:
+      break;
+    }
+  return BPF_RELO_INVALID;
+}
+
+
+/* The CO-RE builtins have different return types, but they all share the
+   internal __builtin_core_reloc.  To avoid front-end warnings and keep the
+   PURE attribute working, a distinct builtin declaration is created for each
+   required return type.  */
+hash_map<tree, tree> core_builtin_type_defs;
+
+static tree
+get_core_builtin_fndecl_for_type (tree ret_type)
+{
+  tree *def = core_builtin_type_defs.get (ret_type);
+  if (def)
+    return *def;
+
+  tree rettype = build_function_type_list (ret_type, integer_type_node, NULL);
+  tree new_fndecl = add_builtin_function_ext_scope ("__builtin_core_reloc",
+						    rettype,
+						    BPF_BUILTIN_CORE_RELOC,
+						    BUILT_IN_MD, NULL, NULL);
+  DECL_PURE_P (new_fndecl) = 1;
+
+  core_builtin_type_defs.put (ret_type, new_fndecl);
+
+  return new_fndecl;
+}
+
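+/* Plugin callback executed when the front-end finishes parsing a type.  For
+   enumeral types, record a copy of each enumerator value so that
+   __builtin_preserve_enum_value can later map the folded INTEGER_CST back
+   to its original CONST_DECL.  */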
+void
+bpf_handle_plugin_finish_type (void *event_data,
+			       void *data ATTRIBUTE_UNUSED)
+{
+  tree type = (tree) event_data;
+
+  if (TREE_CODE (type) == ENUMERAL_TYPE)
+    for (tree l = TYPE_VALUES (type); l; l = TREE_CHAIN (l))
+      {
+	tree value = TREE_VALUE (l);
+
+	tree initial = DECL_INITIAL (value);
+	initial = copy_node (initial);
+	DECL_INITIAL (value) = initial;
+
+	bpf_enum_mappings.put (initial, value);
+      }
+}
+
+/* -- Header file exposed functions -- */
+
+/* Initializes support information to process CO-RE builtins.
+   Defines information for the builtin processing, such as the helper
+   functions that support the builtin conversion.  */
+
+void
+bpf_init_core_builtins (void)
+{
+  memset (core_builtin_helpers, 0, sizeof (core_builtin_helpers));
+
+  core_builtin_helpers[BPF_BUILTIN_PRESERVE_ACCESS_INDEX] =
+    BPF_CORE_HELPER_SET (kind_access_index,
+			 NULL,
+			 pack_field_expr_for_access_index,
+			 process_field_expr,
+			 true);
+  core_builtin_helpers[BPF_BUILTIN_PRESERVE_FIELD_INFO] =
+    BPF_CORE_HELPER_SET (kind_preserve_field_info,
+			 NULL,
+			 pack_field_expr_for_preserve_field,
+			 process_field_expr,
+			 true);
+  core_builtin_helpers[BPF_BUILTIN_BTF_TYPE_ID] =
+    BPF_CORE_HELPER_SET (kind_type_id,
+			 compare_same_ptr_type,
+			 pack_type,
+			 process_type,
+			 true);
+
+  core_builtin_helpers[BPF_BUILTIN_PRESERVE_TYPE_INFO] =
+    BPF_CORE_HELPER_SET (kind_preserve_type_info,
+			 compare_same_ptr_type,
+			 pack_type,
+			 process_type,
+			 true);
+
+  core_builtin_helpers[BPF_BUILTIN_PRESERVE_ENUM_VALUE] =
+    BPF_CORE_HELPER_SET (kind_enum_value,
+			 compare_same_ptr_expr,
+			 pack_enum_value,
+			 process_enum_value,
+			 true);
+
+  core_builtin_helpers[BPF_BUILTIN_CORE_RELOC] =
+    BPF_CORE_HELPER_SET (NULL, NULL, NULL, NULL, true);
+
+  /* Initialize the plugin handler to record enum values for later use in
+     __builtin_preserve_enum_value.  */
+  plugin_state = (enum bpf_plugin_states) flag_plugin_added;
+  flag_plugin_added = true;
+  register_callback ("bpf_collect_enum_info", PLUGIN_FINISH_TYPE,
+		     bpf_handle_plugin_finish_type, NULL);
+}
+
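+/* Convert a call to a BPF CO-RE builtin into a call to the internal
+   __builtin_core_reloc, recording in the process the information required
+   to later emit the relocation.  Returns the replacement expression,
+   error_mark_node on failure, or NULL_TREE when the original call should be
+   kept.  */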
+static tree
+construct_builtin_core_reloc (location_t loc, tree fndecl, tree *args,
+			      int nargs)
+{
+  int code = DECL_MD_FUNCTION_CODE (fndecl);
+  builtin_helpers helper = core_builtin_helpers[code];
+
+  if (helper.is_valid)
+    {
+      gcc_assert (helper.kind);
+      gcc_assert (helper.pack);
+      gcc_assert (helper.process);
+
+      struct cr_local local_data = CR_LOCAL_EMPTY;
+      local_data.fail = false;
+
+      enum btf_core_reloc_kind kind = helper.kind (args, nargs);
+      if (kind == BPF_RELO_INVALID)
+	local_data.fail = true;
+      else if (helper.pack != NULL)
+	{
+	  local_data = helper.pack (args, kind, (enum bpf_builtins) code);
+	  local_data.reloc_data.orig_builtin_code = (enum bpf_builtins) code;
+	  local_data.reloc_data.orig_arg_expr = args[0];
+	}
+      else
+	local_data.reloc_decision = KEEP_ORIGINAL_NO_RELOCATION;
+
+      if (local_data.fail == true)
+	return error_mark_node;
+
+      if (local_data.reloc_decision == REPLACE_NO_RELOCATION)
+	return local_data.reloc_data.expr;
+      else if (local_data.reloc_decision == REPLACE_CREATE_RELOCATION)
+	{
+	  int index = search_builtin_data (helper.compare,
+					   &local_data.reloc_data);
+	  if (index == -1)
+	    index = allocate_builtin_data ();
+	  struct cr_builtins *data = get_builtin_data (index);
+	  memcpy (data, &local_data.reloc_data, sizeof (struct cr_builtins));
+
+	  tree new_fndecl = bpf_builtins[BPF_BUILTIN_CORE_RELOC];
+
+	  tree ret_type = TREE_TYPE (local_data.reloc_data.default_value);
+	  if (ret_type != ptr_type_node)
+	    new_fndecl = get_core_builtin_fndecl_for_type (ret_type);
+	  return build_call_expr_loc (loc,
+				      new_fndecl, 1,
+				      build_int_cst (integer_type_node,
+						     index));
+	}
+    }
+  return NULL_TREE;
+}
+
+/* This function is used by bpf_resolve_overloaded_builtin, which is the
+   implementation of TARGET_RESOLVE_OVERLOADED_BUILTIN.  It is executed at a
+   very early stage and allows the builtin to be adapted to different
+   arguments, making the builtins polymorphic.  In this particular
+   implementation, it collects information about the specific builtin call,
+   converts it to the internal __builtin_core_reloc, stores any required
+   information from the original builtin call in a vec<cr_builtins>, and
+   replaces the call by a __builtin_core_reloc call whose argument is the
+   index within that vec.  In the process we also adjust the return type of
+   __builtin_core_reloc to permit a polymorphic return type, as expected by
+   some of the BPF CO-RE builtins.  */
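+
+/* For illustration only (the actual index is assigned when the call is
+   resolved), a call such as
+
+     __builtin_preserve_field_info (t->s[0].a1, FIELD_BYTE_OFFSET)
+
+   is replaced at this point by something equivalent to
+
+     __builtin_core_reloc (N)
+
+   where N is the index of the recorded data within the vec<cr_builtins>.  */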
+
+#define MAX_CORE_BUILTIN_ARGS 3
+tree
+bpf_resolve_overloaded_core_builtin (location_t loc, tree fndecl,
+				     void *arglist)
+{
+  if (!bpf_require_core_support ())
+    return error_mark_node;
+
+  vec<tree, va_gc> *argsvec = static_cast<vec<tree, va_gc> *> (arglist);
+  tree args[MAX_CORE_BUILTIN_ARGS];
+  for (unsigned int i = 0; i < argsvec->length (); i++)
+    args[i] = (*argsvec)[i];
+
+  remove_parser_plugin ();
+
+  return construct_builtin_core_reloc (loc, fndecl, args, argsvec->length ());
+}
+
+/* Used in bpf_expand_builtin.  This function is called at RTL expand stage
+   to convert the internal __builtin_core_reloc into an
+   unspec:UNSPEC_CORE_RELOC RTX, which carries as an argument the index into
+   the vec collected in bpf_resolve_overloaded_core_builtin.  */
+
+rtx
+bpf_expand_core_builtin (tree exp, enum bpf_builtins code)
+{
+  if (code == BPF_BUILTIN_CORE_RELOC)
+    {
+      tree index = CALL_EXPR_ARG (exp, 0);
+      struct cr_builtins *data = get_builtin_data (TREE_INT_CST_LOW (index));
+
+      rtx v = expand_normal (data->default_value);
+      rtx i = expand_normal (index);
+      return gen_rtx_UNSPEC (DImode,
+			     gen_rtvec (2, v, i),
+			     UNSPEC_CORE_RELOC);
+    }
+
+  return NULL_RTX;
+}
+
+/* This function is called in the final assembly output for the
+   unspec:UNSPEC_CORE_RELOC.  It recovers the vec index kept as the third
+   operand and collects the data from the vec.  With that it calls the process
+   helper in order to construct the data required for the CO-RE relocation.
+   It also creates a label pointing to the unspec instruction and uses it
+   when creating the CO-RE relocation.  */
+
+const char *
+bpf_add_core_reloc (rtx *operands, const char *templ)
+{
+  struct cr_builtins *data = get_builtin_data (INTVAL (operands[2]));
+  builtin_helpers helper;
+  helper = core_builtin_helpers[data->orig_builtin_code];
+
+  rtx_code_label * tmp_label = gen_label_rtx ();
+  output_asm_label (tmp_label);
+  assemble_name (asm_out_file, ":\n");
+
+  gcc_assert (helper.process != NULL);
+  struct cr_final reloc_data = helper.process (data);
+  make_core_relo (&reloc_data, tmp_label);
+
+  /* Replace the default value for builtins whose value is only computed
+     during processing, for example the type id builtins.  */
+  if (data->rtx_default_value != NULL_RTX)
+    operands[1] = data->rtx_default_value;
+
+  return templ;
+}
+
+/* This function is used within the define_expand for mov in the bpf.md file.
+   It identifies whether any of the operands in a move is an expression with
+   a type marked __attribute__((preserve_access_index)), in which case it
+   emits an unspec:UNSPEC_CORE_RELOC so that a CO-RE relocation is later
+   created for this expression access.  */
+
+bool
+bpf_replace_core_move_operands (rtx *operands)
+{
+  for (int i = 0; i < 2; i++)
+    if (MEM_P (operands[i]))
+      {
+	tree expr = MEM_EXPR (operands[i]);
+
+	if (expr == NULL_TREE)
+	  continue;
+
+	if (TREE_CODE (expr) == MEM_REF
+	    && TREE_CODE (TREE_OPERAND (expr, 0)) == SSA_NAME)
+	  {
+	    gimple *def_stmt = SSA_NAME_DEF_STMT (TREE_OPERAND (expr, 0));
+	    if (def_stmt && is_gimple_assign (def_stmt))
+		expr = gimple_assign_rhs1 (def_stmt);
+	  }
+	if (is_attr_preserve_access (expr)
+	    && bpf_require_core_support ())
+	  {
+	    struct cr_local local_data = pack_field_expr_for_access_index (
+					   &expr,
+					   BPF_RELO_FIELD_BYTE_OFFSET,
+					   BPF_BUILTIN_PRESERVE_ACCESS_INDEX);
+
+	    local_data.reloc_decision = REPLACE_CREATE_RELOCATION;
+	    local_data.reloc_data.orig_arg_expr = expr;
+	    local_data.reloc_data.orig_builtin_code = BPF_BUILTIN_PRESERVE_ACCESS_INDEX;
+
+	    int index = allocate_builtin_data ();
+	    struct cr_builtins *data = get_builtin_data (index);
+	    memcpy (data, &local_data.reloc_data, sizeof (struct cr_builtins));
+
+	    rtx reg = XEXP (operands[i], 0);
+	    if (!REG_P (reg))
+	      {
+		reg = gen_reg_rtx (Pmode);
+		operands[i] = gen_rtx_MEM (GET_MODE (operands[i]), reg);
+	      }
+
+	    emit_insn (
+	      gen_mov_reloc_coredi (reg,
+				    gen_rtx_CONST_INT (Pmode, 0),
+				    gen_rtx_CONST_INT (Pmode, index)));
+	    return true;
+	  }
+      }
+  return false;
+}
diff --git a/gcc/config/bpf/core-builtins.h b/gcc/config/bpf/core-builtins.h
new file mode 100644
index 000000000000..95b5a216d6e5
--- /dev/null
+++ b/gcc/config/bpf/core-builtins.h
@@ -0,0 +1,36 @@
+#ifndef BPF_CORE_BUILTINS_H
+#define BPF_CORE_BUILTINS_H
+
+#include "coreout.h"
+
+enum bpf_builtins
+{
+  BPF_BUILTIN_UNUSED = 0,
+  /* Built-ins for non-generic loads and stores.  */
+  BPF_BUILTIN_LOAD_BYTE,
+  BPF_BUILTIN_LOAD_HALF,
+  BPF_BUILTIN_LOAD_WORD,
+
+  /* Compile Once - Run Everywhere (CO-RE) support.  */
+  BPF_CORE_BUILTINS_MARKER = 10,
+  BPF_BUILTIN_PRESERVE_ACCESS_INDEX,
+  BPF_BUILTIN_PRESERVE_FIELD_INFO,
+  BPF_BUILTIN_BTF_TYPE_ID,
+  BPF_BUILTIN_PRESERVE_TYPE_INFO,
+  BPF_BUILTIN_PRESERVE_ENUM_VALUE,
+
+  /* CO-RE INTERNAL reloc.  */
+  BPF_BUILTIN_CORE_RELOC,
+
+  BPF_BUILTIN_MAX,
+};
+
+extern GTY (()) tree bpf_builtins[(int) BPF_BUILTIN_MAX];
+
+void bpf_init_core_builtins (void);
+rtx bpf_expand_core_builtin (tree exp, enum bpf_builtins code);
+tree bpf_resolve_overloaded_core_builtin (location_t loc, tree fndecl,
+					  void *arglist);
+bool bpf_replace_core_move_operands (rtx *operands);
+
+#endif
diff --git a/gcc/config/bpf/coreout.cc b/gcc/config/bpf/coreout.cc
index bd609ad6278f..b84585fb104e 100644
--- a/gcc/config/bpf/coreout.cc
+++ b/gcc/config/bpf/coreout.cc
@@ -30,6 +30,7 @@
 #include "ctfc.h"
 #include "btf.h"
 #include "rtl.h"
+#include "tree-pretty-print.h"
 
 #include "coreout.h"
 
@@ -146,38 +147,37 @@ static char btf_ext_info_section_label[MAX_BTF_EXT_LABEL_BYTES];
 
 static GTY (()) vec<bpf_core_section_ref, va_gc> *bpf_core_sections;
 
+struct bpf_core_extra
+{
+  const char *accessor_str;
+  tree type;
+};
+static hash_map<bpf_core_reloc_ref, struct bpf_core_extra *> bpf_comment_info;
 
 /* Create a new BPF CO-RE relocation record, and add it to the appropriate
    CO-RE section.  */
-
 void
 bpf_core_reloc_add (const tree type, const char * section_name,
-		    vec<unsigned int> *accessors, rtx_code_label *label,
+		    const char *accessor,
+		    rtx_code_label *label,
 		    enum btf_core_reloc_kind kind)
 {
-  char buf[40];
-  unsigned int i, n = 0;
-
-  /* A valid CO-RE access must have at least one accessor.  */
-  if (accessors->length () < 1)
-    return;
-
-  for (i = 0; i < accessors->length () - 1; i++)
-    n += snprintf (buf + n, sizeof (buf) - n, "%u:", (*accessors)[i]);
-  snprintf (buf + n, sizeof (buf) - n, "%u", (*accessors)[i]);
-
   bpf_core_reloc_ref bpfcr = ggc_cleared_alloc<bpf_core_reloc_t> ();
+  struct bpf_core_extra *info = ggc_cleared_alloc<struct bpf_core_extra> ();
   ctf_container_ref ctfc = ctf_get_tu_ctfc ();
 
   /* Buffer the access string in the auxiliary strtab.  */
-  ctf_add_string (ctfc, buf, &(bpfcr->bpfcr_astr_off), CTF_AUX_STRTAB);
-
+  ctf_add_string (ctfc, accessor, &(bpfcr->bpfcr_astr_off), CTF_AUX_STRTAB);
   bpfcr->bpfcr_type = get_btf_id (ctf_lookup_tree_type (ctfc, type));
   bpfcr->bpfcr_insn_label = label;
   bpfcr->bpfcr_kind = kind;
 
+  info->accessor_str = accessor;
+  info->type = type;
+  bpf_comment_info.put (bpfcr, info);
+
   /* Add the CO-RE reloc to the appropriate section.  */
   bpf_core_section_ref sec;
+  unsigned int i;
   FOR_EACH_VEC_ELT (*bpf_core_sections, i, sec)
     if (strcmp (sec->name, section_name) == 0)
       {
@@ -288,14 +288,26 @@ output_btfext_header (void)
 static void
 output_asm_btfext_core_reloc (bpf_core_reloc_ref bpfcr)
 {
+  struct bpf_core_extra **info = bpf_comment_info.get (bpfcr);
+  gcc_assert (info != NULL);
+
   bpfcr->bpfcr_astr_off += ctfc_get_strtab_len (ctf_get_tu_ctfc (),
 						CTF_STRTAB);
 
   dw2_assemble_integer (4, gen_rtx_LABEL_REF (Pmode, bpfcr->bpfcr_insn_label));
-  fprintf (asm_out_file, "\t%s bpfcr_insn\n", ASM_COMMENT_START);
-
-  dw2_asm_output_data (4, bpfcr->bpfcr_type, "bpfcr_type");
-  dw2_asm_output_data (4, bpfcr->bpfcr_astr_off, "bpfcr_astr_off");
+  if (flag_debug_asm)
+    fprintf (asm_out_file, "\t%s bpfcr_insn", ASM_COMMENT_START);
+  fputc ('\n', asm_out_file);
+
+  /* Extract the pretty print for the type expression.  */
+  pretty_printer pp;
+  dump_generic_node (&pp, (*info)->type, 0, TDF_VOPS|TDF_MEMSYMS|TDF_SLIM,
+		     false);
+  char *str = xstrdup (pp_formatted_text (&pp));
+
+  dw2_asm_output_data (4, bpfcr->bpfcr_type, "bpfcr_type (%s)", str);
+  dw2_asm_output_data (4, bpfcr->bpfcr_astr_off, "bpfcr_astr_off (\"%s\")",
+			  (*info)->accessor_str);
   dw2_asm_output_data (4, bpfcr->bpfcr_kind, "bpfcr_kind");
 }
 
diff --git a/gcc/config/bpf/coreout.h b/gcc/config/bpf/coreout.h
index 8bdb364b7228..c99b1ca885b2 100644
--- a/gcc/config/bpf/coreout.h
+++ b/gcc/config/bpf/coreout.h
@@ -23,6 +23,7 @@
 #define __COREOUT_H
 
 #include <stdint.h>
+#include "ctfc.h"
 
 #ifdef	__cplusplus
 extern "C"
@@ -55,6 +56,7 @@ struct btf_ext_lineinfo
 
 enum btf_core_reloc_kind
 {
+  BPF_RELO_INVALID = -1,
   BPF_RELO_FIELD_BYTE_OFFSET = 0,
   BPF_RELO_FIELD_BYTE_SIZE = 1,
   BPF_RELO_FIELD_EXISTS = 2,
@@ -66,7 +68,8 @@ enum btf_core_reloc_kind
   BPF_RELO_TYPE_EXISTS = 8,
   BPF_RELO_TYPE_SIZE = 9,
   BPF_RELO_ENUMVAL_EXISTS = 10,
-  BPF_RELO_ENUMVAL_VALUE = 11
+  BPF_RELO_ENUMVAL_VALUE = 11,
+  BPF_RELO_TYPE_MATCHES = 12
 };
 
 struct btf_ext_reloc
@@ -102,8 +105,12 @@ struct btf_ext_header
 extern void btf_ext_init (void);
 extern void btf_ext_output (void);
 
-extern void bpf_core_reloc_add (const tree, const char *, vec<unsigned int> *,
-				rtx_code_label *, enum btf_core_reloc_kind);
+extern void bpf_core_reloc_add (const tree type, const char *section_name,
+				const char *accessor, rtx_code_label *label,
+				enum btf_core_reloc_kind kind);
+
 extern int bpf_core_get_sou_member_index (ctf_container_ref, const tree);
 
 #ifdef	__cplusplus
diff --git a/gcc/config/bpf/t-bpf b/gcc/config/bpf/t-bpf
index 3f3cf8daf8fc..c289dde8b173 100644
--- a/gcc/config/bpf/t-bpf
+++ b/gcc/config/bpf/t-bpf
@@ -1,8 +1,10 @@
 
-TM_H += $(srcdir)/config/bpf/coreout.h
+TM_H += $(srcdir)/config/bpf/coreout.h $(srcdir)/config/bpf/core-builtins.h
 
 coreout.o: $(srcdir)/config/bpf/coreout.cc
 	$(COMPILE) $<
 	$(POSTCOMPILE)
 
-PASSES_EXTRA += $(srcdir)/config/bpf/bpf-passes.def
+core-builtins.o: $(srcdir)/config/bpf/core-builtins.cc
+	$(COMPILE) $<
+	$(POSTCOMPILE)
diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi
index 97eaacf8a7ec..e06caf38e467 100644
--- a/gcc/doc/extend.texi
+++ b/gcc/doc/extend.texi
@@ -16015,6 +16015,57 @@ read_y (struct S *arg)
 @end smallexample
 @enddefbuiltin
 
+@defbuiltin{{unsigned int} __builtin_preserve_enum_value (@var{type}, @var{enum}, unsigned int @var{kind})}
+BPF Compile Once-Run Everywhere (CO-RE) support.  This builtin collects enum
+information and creates a CO-RE relocation relative to @var{enum}, which must
+be an enumerator of @var{type}.  The @var{kind} argument specifies the action
+performed.
+
+The following values are supported for @var{kind}:
+@table @code
+@item ENUM_VALUE_EXISTS = 0
+The return value is either 0 or 1 depending on whether the enum value exists
+in the target.
+
+@item ENUM_VALUE = 1
+The return value is the enum value in the target kernel.
+@end table
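+
+A possible use, with the enumerator passed through the
+@code{*(typeof (@var{type}) *)} cast idiom so that both the type and the
+value reach the builtin (the @code{enum color} type below is purely
+illustrative), could look like this:
+
+@smallexample
+enum color
+@{
+  RED = 1,
+  GREEN = 2,
+  BLUE = 3
+@};
+
+unsigned int
+blue_exists_in_target (void)
+@{
+  /* 1 if BLUE exists in the target kernel, 0 otherwise.  */
+  return __builtin_preserve_enum_value (*(typeof (enum color) *) BLUE,
+                                        BLUE, 0 /* ENUM_VALUE_EXISTS */);
+@}
+@end smallexample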
+@enddefbuiltin
+
+@defbuiltin{{unsigned int} __builtin_btf_type_id (@var{type}, unsigned int @var{kind})}
+BPF Compile Once-Run Everywhere (CO-RE) support.  This builtin is used to get
+the BTF type ID of a specified type.  Depending on the @var{kind} argument, it
+returns either the type ID in the local BTF information or the type ID in the
+target kernel BTF.
+
+The following values are supported for @var{kind}:
+@table @code
+@item BTF_TYPE_ID_LOCAL = 0
+Return the local BTF type ID. Always succeeds.
+
+@item BTF_TYPE_ID_TARGET = 1
+Return the target BTF type ID.  If the type does not exist in the target,
+returns 0.
+@end table
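+
+For instance, assuming a helper macro that builds the @var{type} argument
+from a type name (the macro and @code{struct S} below are only illustrative,
+not part of any provided API):
+
+@smallexample
+#define BPF_TYPE(type) (*(@{ extern typeof (type) *___t; ___t; @}))
+
+struct S @{ int a; @};
+
+unsigned int
+local_id_of_S (void)
+@{
+  /* BTF type ID of struct S in the local BTF information.  */
+  return __builtin_btf_type_id (BPF_TYPE (struct S),
+                                0 /* BTF_TYPE_ID_LOCAL */);
+@}
+@end smallexample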
+@enddefbuiltin
+
+@defbuiltin{{unsigned int} __builtin_preserve_type_info (@var{type}, unsigned int @var{kind})}
+BPF Compile Once-Run Everywhere (CO-RE) support.  This builtin performs named
+type (struct/union/enum/typedef) verifications.  The type of verification
+depends on the @var{kind} argument provided.  This builtin always returns 0
+if the type does not exist in the target kernel.
+
+The following values are supported for @var{kind}:
+@table @code
+@item BTF_TYPE_EXISTS = 0
+Checks whether the type exists in the target.
+
+@item BTF_TYPE_MATCHES = 1
+Checks whether the type in the target kernel matches the local definition.
+
+@item BTF_TYPE_SIZE = 2
+Returns the size of the type within the target.
+@end table
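+
+A minimal sketch, again assuming an illustrative helper macro to build the
+@var{type} argument:
+
+@smallexample
+#define BPF_TYPE(type) (*(@{ extern typeof (type) *___t; ___t; @}))
+
+struct S @{ int a; @};
+
+unsigned int
+S_exists_in_target (void)
+@{
+  /* 0 if struct S does not exist in the target kernel.  */
+  return __builtin_preserve_type_info (BPF_TYPE (struct S),
+                                       0 /* BTF_TYPE_EXISTS */);
+@}
+@end smallexample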
+@enddefbuiltin
+
 @node FR-V Built-in Functions
 @subsection FR-V Built-in Functions
 
diff --git a/gcc/testsuite/gcc.target/bpf/core-builtin-fieldinfo-const-elimination.c b/gcc/testsuite/gcc.target/bpf/core-builtin-fieldinfo-const-elimination.c
new file mode 100644
index 000000000000..5f8354874830
--- /dev/null
+++ b/gcc/testsuite/gcc.target/bpf/core-builtin-fieldinfo-const-elimination.c
@@ -0,0 +1,29 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -dA -gbtf -mco-re" } */
+
+struct S {
+  unsigned int a1: 7;
+  unsigned int a2: 4;
+  unsigned int a3: 13;
+  unsigned int a4: 5;
+  int x;
+};
+
+struct T {
+  unsigned int y;
+  struct S s[2];
+  char c;
+  char d;
+};
+
+enum {
+  FIELD_BYTE_OFFSET = 0,
+};
+
+unsigned int foo (struct T *t)
+{
+  return __builtin_preserve_field_info (t->s[0].a1, FIELD_BYTE_OFFSET) + 1;
+}
+
+/* { dg-final { scan-assembler-times "\[\t \]mov\[\t \]%r\[0-9\],4" 1 } } */
+/* { dg-final { scan-assembler-times "\[\t \]add32\[\t \]%r\[0-9\],1" 1 } } */
-- 
2.38.1



Thread overview: 15+ messages
2023-08-01 18:43 [PATCH] CO-RE BPF builtins support Cupertino Miranda
2023-08-01 18:43 ` Cupertino Miranda [this message]
2023-08-03  7:01   ` [PATCH 1/2] bpf: Implementation of BPF CO-RE builtins Jose E. Marchesi
2023-08-03  8:52     ` Cupertino Miranda
2023-08-03 14:36       ` Jose E. Marchesi
2023-08-03  9:52     ` Cupertino Miranda
2023-08-03  9:54   ` [v2 PATCH " Cupertino Miranda
2023-08-03 14:39     ` Jose E. Marchesi
2023-08-03 18:52       ` Cupertino Miranda
2023-08-11 14:17     ` Shung-Hsi Yu
2023-08-11 17:12       ` Cupertino Miranda
2023-08-01 18:43 ` [PATCH 2/2] bpf: CO-RE builtins support tests Cupertino Miranda
2023-08-03  9:57   ` [v2 PATCH " Cupertino Miranda
2023-08-03 14:41     ` Jose E. Marchesi
2023-08-03 18:53       ` Cupertino Miranda
