public inbox for gcc-cvs@sourceware.org
* [gcc(refs/vendors/ARM/heads/morello)] Change symbol loading to account for PureCap
@ 2021-09-21  9:14 Matthew Malcomson
  0 siblings, 0 replies; only message in thread
From: Matthew Malcomson @ 2021-09-21  9:14 UTC (permalink / raw)
  To: gcc-cvs

https://gcc.gnu.org/g:97e6602b07e6c6e4349652d138837edd724f0b4a

commit 97e6602b07e6c6e4349652d138837edd724f0b4a
Author: Matthew Malcomson <matthew.malcomson@arm.com>
Date:   Thu Sep 16 11:33:48 2021 +0100

    Change symbol loading to account for PureCap
    
    When loading capability symbols we now need to worry about the
    permissions on the capability we end up with.  A simple `adrp` and
    `add`, which would give us the correct address value, may not give us a
    capability with the permissions that we want.
    
    For function symbols and code labels this is not much of a worry, since
    the sequence above will give us permissions based on the PCC.  This is
    just what we want for capabilities pointing into code.
    
    For writable data symbols we need write permissions on the capability
    (which are not on the PCC).  For security reasons we want read-only
    data to have permissions and bounds different from the PCC (i.e.
    restricted to what the data object requires).  While we do not need to
    restrict the permissions when we are simply loading data from a
    known-safe position, we would still need some way to change the
    permissions and bounds for user-visible data that has its address
    taken, or that is offset by a compile-time-unknown value.
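    As a hypothetical C++ sketch (the variable and function names are
    illustrative only, not taken from this patch), these are the three
    kinds of symbol in question:

```cpp
#include <cassert>

// Hypothetical illustration of the symbol kinds discussed above.  Under
// PureCap each needs a capability with different permissions: PCC-derived
// permissions for code, narrowed read-only bounds for `msg`, and write
// permission (not present on the PCC) for `counter`.
static int counter;                   // writable data symbol
static const char msg[] = "ro-data";  // read-only data symbol

int bump () { return ++counter; }             // code: PCC permissions suffice
const char *message () { return msg; }
```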
    
    Reducing permissions (e.g. going from the PCC to the permissions for
    read-only data) could be done by emitting a sequence of instructions to
    drop permissions and narrow bounds.  However, gaining permissions
    requires the runtime to provide us with capabilities more permissive
    than the PCC.  We choose to use that same runtime mechanism (which must
    already exist to provide write-permissioned capabilities) to obtain the
    less permissive capabilities pointing to read-only data as well.
    
    This follows the way that LLVM behaves.
    
    The mechanism used is the following: relocations put the relevant
    capabilities either into the GOT or into a translation-unit-specific
    indirection table that we store in the constant pool of our binary.
    The code then loads the required capability from where the linker
    placed it.
    
    An optimisation would be to go through this indirection table only when
    we must: i.e. when the address is going to be visible to user code,
    when the access is not known to be in bounds, or when we need to store
    data.  This patch implements a much more conservative approach where we
    go through indirection for every access to any object (except for
    accesses to symbols inside the constant pool indirection table).
    
    The choice of whether to put something in the GOT or in the constant
    pool is made based on whether the symbol can be shared between
    translation units.  If it may be shared the symbol goes into the GOT;
    otherwise it is put in this translation unit's constant pool.
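    A minimal sketch of that placement rule (the enum and function names
    here are illustrative, not the backend's real SYMBOL_* values):

```cpp
#include <cassert>

// Sketch of the decision described above: a symbol that may be shared
// between translation units has its capability placed in the GOT;
// otherwise it goes in this translation unit's private constant pool.
enum indirection_source { VIA_GOT, VIA_CONSTANT_POOL };

static indirection_source
classify_indirection (bool is_public)
{
  // A public (TREE_PUBLIC) decl may be referenced from other translation
  // units, so its capability must be visible in the GOT.
  return is_public ? VIA_GOT : VIA_CONSTANT_POOL;
}
```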
    
    N.b. even though all global symbols are placed in the GOT, the
    relocation of that GOT entry can be turned into a plain RELATIVE
    relocation by the static linker if the symbol cannot be interposed.
    Hence this does not change the behaviour of global symbols defined in
    an executable.
    
    The set of symbols stored in the constant pool for purposes of
    indirection is related to the idea of .L__cap_merged_table in LLVM.  We
    index into this set using a section anchor and offset when viable,
    which matches how LLVM tries to index into that table.  However, it is
    useful to remember that the .L__cap_merged_table array ends up being a
    sized and typed object symbol containing all indirected symbols, while
    the section anchors we use are neither sized nor typed, and do not in
    general span all indirected symbols.
    
    Some notes on the implementation:
    - The decision of whether to indirect or not is made based on the
      symbol type in `aarch64_classify_symbol`.  This also decides whether
      to load from the constant pool or the GOT, depending on whether the
      symbol is global or not.
      This function returns one of the existing enum values describing how
      the symbol must be loaded; the only change is how that decision is
      made.
    - Since all symbols are now loaded indirectly, the section anchor
      optimisation is no longer helpful except on the local indirection
      table.  Hence we disallow accesses to non-indirection symbols using
      section anchors.
    - In order to distinguish symbols which address into the local
      indirection table from those which need to be loaded indirectly we use
      the SYMBOL_FLAG_MACH_DEP flag on SYMBOL_REF_FLAGS.  In the AArch64
      backend we give this the synonym SYMBOL_FLAG_INDIRECTION.
    - We disable the large code model for purecap in order to reduce the
      combinatorial number of cases to reason about.
    - Since GCC identifies whether constant pool values must be emitted by
      checking if they are referenced in RTL, the new indirection to these
      constant pool values must not be accessed with a section anchor;
      otherwise GCC could not tell that a given constant pool object is
      used.
    
    Note on future approaches for optimisation:
    We have two possible approaches to optimise this.  One is to avoid
    indirection for symbols which we know will only ever be accessed in a
    safe way; the other is to avoid indirection for accesses that are known
    safe (even if the object could be accessed in an improper manner
    elsewhere).
    
    Our testing is relatively simple in that we check that the assembly we
    emit has patterns indicating that indirection happens.  The change is
    very widespread and quite fundamental, so ensuring that software
    compiles and runs will be a very good test of this behaviour (once we
    can do that).
    
    --- Extra notes on indirectly accessing constant pool objects using
        section anchors to load their address.
    
    When outputting constants into the constant pool, GCC ensures that we
    only emit constants that are actually used.  For CONSTANT_POOL_ADDRESS_P
    and TREE_CONSTANT_POOL_ADDRESS_P constants this is done by marking all
    constants referenced in the RTL insn chain.  This marking boils down to
    using `mark_constants_in_pattern` on all insns in the chain.  That
    function looks in a pattern and can follow a reference from one symbol
    to another.  However, it cannot determine which object in an object
    block is used when that object is accessed via a section anchor and
    offset.
    
    This means that accessing such constants through a section anchor and
    offset into an indirection table which references them can lead to the
    constants not getting emitted, even though they are used.
    
    Constants that can be named by the user (i.e. with an associated DECL
    that is not DECL_ARTIFICIAL) are output based on whether the DECL_RTL is
    set on them and hence do not have this problem.
    
    To counter this we disallow using section anchors to reference constant
    pool addresses in the indirection table.  We do this using the
    `use_anchors_for_symbol_p` hook.  This means the section anchor
    optimisation is lost for such symbols, but we still keep them in their
    object blocks (which avoids changing the layout of data more
    dramatically than necessary).  This also applies to constants containing
    symbols -- since if an RTX representing the sum of a constant pool
    address and an offset is the only use of that constant pool address we
    still must be able to tell that the address has been used by following
    the constant pool references.
    
    In addition to this change we also add a few more assertions throughout
    the code to ensure it is only asked to handle cases that we expect.
    These assertions, and why they are required, are listed below:
    
    - We never attempt to put ANCHOR+offset as a constant into memory (for
      the PureCap ABI).
      If we did, we would have no information on the eventual symbol from
      which the runtime could determine the required bounds and permissions.
    - We never end up using anchor symbols for the indirection of constant
      pool or tree constant pool addresses.
      This requirement is described above.
    - We are never asked to load an indirection-table symbol with an
      offset.
      If this were happening then our assumption that indirection-table
      symbols are always safely loaded is likely broken.
    
    We also add a `const_rtx` overload of `strip_offset`.  This function
    does not modify its argument, and an overload with `const_rtx` argument
    and return value lets us use it on constant values.  This makes
    implementing some helper functions easier.
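    As a hedged illustration of that overload pattern (using a toy `expr`
    type standing in for rtx, not GCC's real data structures):

```cpp
#include <cassert>

// Toy model of the strip_offset overload pair: given a node representing
// base+offset, return the base and write the offset out; otherwise return
// the node unchanged with a zero offset.  The const overload lets callers
// holding only a const pointer reuse the helper without casting away
// constness.
struct expr { expr *base; long offset; };  // base == nullptr => not a sum

expr *
toy_strip_offset (expr *x, long *offset_out)
{
  if (x->base)
    {
      *offset_out = x->offset;
      return x->base;
    }
  *offset_out = 0;
  return x;
}

const expr *
toy_strip_offset (const expr *x, long *offset_out)
{
  if (x->base)
    {
      *offset_out = x->offset;
      return x->base;
    }
  *offset_out = 0;
  return x;
}
```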
    
    The testcase added uses an __int128 constant that cannot be loaded
    directly with AArch64 instructions.  This means it gets forced into the
    constant pool and loaded from there.  The testcase ensures that this
    does not cause linker errors, which ensures the constant is still
    emitted.  It also happens to test that the constant is emitted with an
    associated size.
    
    N.B.  The testsuite doesn't pass since we have problems linking with the
    current libgcc and newlib.  However it *should* pass, and will test what
    we want.

Diff:
---
 gcc/config/aarch64/aarch64.c                       | 213 ++++++++++++++++++++-
 gcc/config/aarch64/aarch64.h                       |   7 +
 gcc/rtl.h                                          |   1 +
 gcc/rtlanal.c                                      |  18 ++
 .../morello/indirected-constant-gets-emitted.c     |  18 ++
 .../morello/indirection-of-symbol-plus-offset.c    |  34 ++++
 .../gcc.target/aarch64/morello/symbol-loading.c    |  26 +++
 7 files changed, 311 insertions(+), 6 deletions(-)

diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 126782246bf..6f34ee8c672 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -3066,6 +3066,20 @@ static rtx_insn *check_emit_insn (rtx x)
   return emit_insn (x);
 }
 
+static bool
+aarch64_indirection_sym_with_offset_p (const_rtx imm)
+{
+  poly_int64 offset;
+  const_rtx base = strip_offset (imm, &offset);
+  /* Not indirection symbol => not indirection sym plus offset.  */
+  if (!SYMBOL_REF_P (base) || !SYMBOL_REF_INDIRECTION_P (base))
+    return false;
+  /* Offset is zero => not indirection sym plus offset.  */
+  if (known_eq (offset, 0))
+    return false;
+  return true;
+}
+
 /* We'll allow lo_sum's in addresses in our legitimate addresses
    so that combine would take care of combining addresses where
    necessary, but for generation purposes, we'll generate the address
@@ -3112,6 +3126,15 @@ static void
 aarch64_load_symref_appropriately (rtx dest, rtx imm,
 				   enum aarch64_symbol_type type)
 {
+  /* Assert that we are never asked to load an indirection symbol with an
+     offset.  Such an offset would break one of the assumptions we rely on
+     to believe that loading symbols is safe.  (I.e. if we are ever asked
+     to load an indirection symbol plus offset, something is indexing into
+     a symbol in the constant pool -- because all indirection symbols just
+     point at single capabilities -- and in that case we will have to look
+     into where the index is coming from.)  */
+  gcc_assert (!aarch64_indirection_sym_with_offset_p (imm));
+
   switch (type)
     {
     case SYMBOL_SMALL_ABSOLUTE:
@@ -5239,6 +5262,30 @@ aarch64_expand_sve_const_pred (rtx target, rtx_vector_builder &builder)
 					   int_builder.build ());
 }
 
+/* Helper function to identify immediates that should be accessed via the
+   Morello indirection table.  Any basic constant is not directly accessed via
+   this table, but any label or symbol is.  */
+static bool
+aarch64_sym_indirectly_accessed_p (const_rtx sym)
+{
+  return TARGET_CAPABILITY_PURE && SYMBOL_REF_P (sym)
+    && !SYMBOL_REF_INDIRECTION_P (sym) && !SYMBOL_REF_ANCHOR_P (sym);
+}
+
+/* Return true if the provided symbol_ref will not get emitted as a variable
+   if it is not found in the RTL by `mark_constants_in_pattern`.  If this is
+   the case then we can not access it through an indirection table where we
+   access the indirection table using section anchors.  That would mean there
+   is no direct reference in the RTL and that the constant would not get
+   emitted.  */
+static bool
+aarch64_sym_requires_reference_in_rtl_p (const_rtx x)
+{
+  return SYMBOL_REF_P (x) && (CONSTANT_POOL_ADDRESS_P (x)
+			      || TREE_CONSTANT_POOL_ADDRESS_P (x));
+}
+
+
 /* Set DEST to immediate IMM.  */
 
 void
@@ -5254,7 +5301,7 @@ aarch64_expand_mov_immediate (rtx dest, rtx imm)
        || GET_CODE (imm) == CONST_POLY_INT)
       && is_a <scalar_addr_mode> (mode, &addr_mode))
     {
-      rtx mem;
+      rtx mem, sym;
       poly_int64 offset;
       HOST_WIDE_INT const_offset;
       enum aarch64_symbol_type sty;
@@ -5314,6 +5361,19 @@ aarch64_expand_mov_immediate (rtx dest, rtx imm)
 	    }
 
 	  mem = force_const_mem (ptr_mode, imm);
+	  if (TARGET_CAPABILITY_PURE && SYMBOL_REF_P (base))
+	    {
+	      /* Mark the symbol created by `force_const_mem` as one into the
+		 indirection table.  */
+	      sym = XEXP (mem, 0);
+	      /* The symbol getting forced into memory should never be an
+		 indirection symbol, we should always be given a constant pool
+		 symbol pointing to the relevant position.  */
+	      gcc_assert (SYMBOL_REF_P (sym)
+			  && CONSTANT_POOL_ADDRESS_P (sym)
+			  && !SYMBOL_REF_INDIRECTION_P (base));
+	      SYMBOL_REF_FLAGS (sym) |= SYMBOL_FLAG_INDIRECTION;
+	    }
 	  gcc_assert (mem);
 
 	  /* If we aren't generating PC relative literals, then
@@ -5323,10 +5383,38 @@ aarch64_expand_mov_immediate (rtx dest, rtx imm)
 	  if (!aarch64_pcrelative_literal_loads)
 	    {
 	      gcc_assert (can_create_pseudo_p ());
-	      base = gen_reg_rtx (ptr_mode);
-	      aarch64_expand_mov_immediate (base, XEXP (mem, 0));
-	      if (ptr_mode != Pmode)
-		base = convert_memory_address (Pmode, base);
+	      if (TARGET_CAPABILITY_PURE && SYMBOL_REF_P (base))
+		{
+		  /* Accesses into the table of symbols pointing around the
+		     current file may be made using a section anchor.  These
+		     accesses cannot themselves be indirected, and accesses
+		     via a section anchor have the same problems as accesses
+		     using the PCC with `adrp` if offset by a
+		     runtime-specified index or exposed to user code.  (While
+		     we believe that the compiler-generated accesses into
+		     this table will never be exposed in such a manner, it is
+		     still worth noting that using section anchors cannot
+		     reduce the security properties.)
+
+		     Finally, we believe there may well be optimisation
+		     opportunities for section anchors into this indirection
+		     table, since they should apply any time a function uses
+		     more than one locally bound global data object.  */
+		  mem = use_anchored_address (mem);
+		  /* Just another assertion to check we are not using anchor
+		     symbols for the indirection of addresses that need to be
+		     referenced in RTL for their constant to be emitted.  */
+		  if (aarch64_sym_requires_reference_in_rtl_p (base))
+		    gcc_assert (XEXP (mem, 0) == sym);
+		  base = force_reg (Pmode, XEXP (mem, 0));
+		}
+	      else
+		{
+		  base = gen_reg_rtx (ptr_mode);
+		  aarch64_expand_mov_immediate (base, XEXP (mem, 0));
+		  if (ptr_mode != Pmode)
+		    base = convert_memory_address (Pmode, base);
+		}
 	      mem = gen_rtx_MEM (ptr_mode, base);
 	    }
 
@@ -11504,13 +11592,50 @@ aarch64_can_use_per_function_literal_pools_p (void)
 }
 
 static bool
-aarch64_use_blocks_for_constant_p (machine_mode, const_rtx)
+aarch64_use_blocks_for_constant_p (machine_mode, const_rtx x)
 {
+  /* Ensure that we never try to put ANCHOR+offset as a constant into memory on
+     PureCap ABI.  If we did, then we would have no way to specify the bounds
+     and permissions needed for the associated `capinit` relocation.  */
+  gcc_assert (!(TARGET_CAPABILITY_PURE && SYMBOL_REF_P (x)
+		&& SYMBOL_REF_ANCHOR_P (x)));
   /* We can't use blocks for constants when we're using a per-function
      constant pool.  */
   return !aarch64_can_use_per_function_literal_pools_p ();
 }
 
+static bool
+aarch64_use_anchors_for_symbol_p (const_rtx x)
+{
+  /* For morello we never want to access symbols using an anchor symbol unless
+     they are special indirection symbols.  We believe the indirection symbols
+     are always accessed in bounds.
+
+     On top of that, some indirection symbols must not be accessed using
+     section anchors because the constant to which they point must be
+     referenced in RTL for it to be output.  */
+
+   /* MORELLO TODO
+     This is a conservative approach.  We believe we could allow read-only
+     artificial decls to be accessed in this way if there is no way for user
+     code to index them or find their address.  Similarly, it would be possible
+     to do the same with read-write objects and allow indexing using a section
+     anchor when we know we want an in-bounds load.  We leave such possible
+     optimisations for future work.  */
+
+  if (TARGET_CAPABILITY_PURE)
+    {
+      if (!SYMBOL_REF_INDIRECTION_P (x))
+	return false;
+      rtx constant = get_pool_constant (x);
+      poly_int64 offset;
+      rtx base = strip_offset (constant, &offset);
+      if (aarch64_sym_requires_reference_in_rtl_p (base))
+	return false;
+    }
+  return default_use_anchors_for_symbol_p (x);
+}
+
 /* Select appropriate section for constants depending
    on where we place literal pools.  */
 
@@ -15483,6 +15608,8 @@ initialize_aarch64_code_model (struct gcc_options *opts)
 	}
       break;
     case AARCH64_CMODEL_LARGE:
+      if (TARGET_CAPABILITY_PURE)
+	sorry ("code model large with %<-mabi=purecap%>");
       if (opts->x_flag_pic)
 	sorry ("code model %qs with %<-f%s%>", "large",
 	       opts->x_flag_pic > 1 ? "PIC" : "pic");
@@ -16369,6 +16496,72 @@ aarch64_classify_tls_symbol (rtx x)
     }
 }
 
+static bool
+aarch64_symbol_public_p (const_rtx x)
+{
+  return (SYMBOL_REF_DECL (x)
+	  ? TREE_PUBLIC (SYMBOL_REF_DECL (x))
+	  : false);
+}
+
+enum aarch64_symbol_type
+aarch64_classify_capability_symbol (rtx x, HOST_WIDE_INT)
+{
+  gcc_assert (GET_CODE (x) == SYMBOL_REF);
+  gcc_assert (GET_MODE (x) == CADImode);
+  gcc_assert (TARGET_CAPABILITY_PURE);
+  gcc_assert (!SYMBOL_REF_INDIRECTION_P (x));
+
+  /* Capabilities must take their permissions from somewhere.  If loading a
+     function address then the permissions in PCC will be fine.  If fetching
+     data that the C source code can use then we will want to have narrower
+     bounds (that only span the object) and will sometimes want write
+     permissions.
+
+     In order to handle those different permissions we require the runtime to
+     give us the relevant capabilities we want.  We then have to fetch those
+     capabilities in order to use them.
+
+     There are two places that the linker can store these new intermediate
+     capabilities: the GOT, or in a constant pool that we create.  In order to
+     request a constant pool we return SYMBOL_FORCE_TO_MEM and in order to
+     request fetching them from the GOT we return SYMBOL_*_GOT*.
+
+     If a symbol can be shared between multiple translation units we want its
+     address stored in the GOT (in order to allow other translation units to
+     read it).  Otherwise we store it in a constant pool with no external
+     visibility.
+
+     We conservatively treat any SYMBOL_REF that was not created to index into
+     an indirection table as data that the C source code can use.  This is
+     likely to be overly conservative (see TODO in
+     `aarch64_use_anchors_for_symbol_p`).  */
+  switch (aarch64_cmodel)
+    {
+    case AARCH64_CMODEL_TINY_PIC:
+    case AARCH64_CMODEL_TINY:
+      /* While forcing to memory will work for any binds_local_p symbols, it
+	 would mean that the symbol can not be shared between different
+	 translation units.  LLVM tries to store things in the GOT if
+	 they can be shared across different translation units.  Hence we
+	 determine what to do based on whether the symbol is public or not.  */
+      return aarch64_symbol_public_p (x)
+	? SYMBOL_TINY_GOT
+	: SYMBOL_FORCE_TO_MEM;
+
+    case AARCH64_CMODEL_SMALL_SPIC:
+    case AARCH64_CMODEL_SMALL_PIC:
+    case AARCH64_CMODEL_SMALL:
+      return aarch64_symbol_public_p (x)
+	? SYMBOL_SMALL_GOT_4G
+	: SYMBOL_FORCE_TO_MEM;
+
+    case AARCH64_CMODEL_LARGE:
+    default:
+      gcc_unreachable ();
+    }
+}
+
 /* Return the correct method for accessing X + OFFSET, where X is either
    a SYMBOL_REF or LABEL_REF.  */
 
@@ -16401,6 +16594,11 @@ aarch64_classify_symbol (rtx x, HOST_WIDE_INT offset)
       if (aarch64_tls_symbol_p (x))
 	return aarch64_classify_tls_symbol (x);
 
+      /* Load any non-indirected symbol either through a local indirection
+	 table or through the GOT.  */
+      if (aarch64_sym_indirectly_accessed_p (x))
+	return aarch64_classify_capability_symbol (x, offset);
+
       switch (aarch64_cmodel)
 	{
 	case AARCH64_CMODEL_TINY:
@@ -24390,6 +24588,9 @@ aarch64_libgcc_floating_mode_supported_p
 #undef TARGET_USE_BLOCKS_FOR_CONSTANT_P
 #define TARGET_USE_BLOCKS_FOR_CONSTANT_P aarch64_use_blocks_for_constant_p
 
+#undef TARGET_USE_ANCHORS_FOR_SYMBOL_P
+#define TARGET_USE_ANCHORS_FOR_SYMBOL_P aarch64_use_anchors_for_symbol_p
+
 #undef TARGET_VECTOR_MODE_SUPPORTED_P
 #define TARGET_VECTOR_MODE_SUPPORTED_P aarch64_vector_mode_supported_p
 
diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
index 8a87719f52f..fcc3b1055c8 100644
--- a/gcc/config/aarch64/aarch64.h
+++ b/gcc/config/aarch64/aarch64.h
@@ -1256,6 +1256,13 @@ extern enum aarch64_code_model aarch64_cmodel;
 #define ENDIAN_LANE_N(NUNITS, N) \
   (BYTES_BIG_ENDIAN ? NUNITS - 1 - N : N)
 
+/* We use the first machine dependent symbol ref flag to record whether this
+   symbol is a symbol indexing into the local indirection table required for
+   Morello data pointers.  */
+#define SYMBOL_FLAG_INDIRECTION  SYMBOL_FLAG_MACH_DEP
+#define SYMBOL_REF_INDIRECTION_P(RTX) \
+  ((SYMBOL_REF_FLAGS (RTX) & SYMBOL_FLAG_INDIRECTION) != 0)
+
 /* Support for a configure-time default CPU, etc.  We currently support
    --with-arch and --with-cpu.  Both are ignored if either is specified
    explicitly on the command line at run time.  */
diff --git a/gcc/rtl.h b/gcc/rtl.h
index 8f56907ae26..2626635881b 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -3572,6 +3572,7 @@ extern rtx get_related_value (const_rtx);
 extern bool offset_within_block_p (const_rtx, HOST_WIDE_INT);
 extern void split_const (rtx, rtx *, rtx *);
 extern rtx strip_offset (rtx, poly_int64_pod *);
+extern const_rtx strip_offset (const_rtx, poly_int64_pod *);
 extern poly_int64 get_args_size (const_rtx);
 extern bool unsigned_reg_p (rtx);
 extern int reg_mentioned_p (const_rtx, const_rtx);
diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
index 8f407bc9b2b..61248d925a7 100644
--- a/gcc/rtlanal.c
+++ b/gcc/rtlanal.c
@@ -958,6 +958,24 @@ strip_offset (rtx x, poly_int64_pod *offset_out)
   return x;
 }
 
+const_rtx
+strip_offset (const_rtx x, poly_int64_pod *offset_out)
+{
+  const_rtx base = const0_rtx;
+  const_rtx test = x;
+  if (GET_CODE (test) == CONST)
+    test = XEXP (test, 0);
+  if (any_plus_p (test))
+    {
+      base = XEXP (test, 0);
+      test = XEXP (test, 1);
+    }
+  if (poly_int_rtx_p (test, offset_out))
+    return base;
+  *offset_out = 0;
+  return x;
+}
+
 /* Return the argument size in REG_ARGS_SIZE note X.  */
 
 poly_int64
diff --git a/gcc/testsuite/gcc.target/aarch64/morello/indirected-constant-gets-emitted.c b/gcc/testsuite/gcc.target/aarch64/morello/indirected-constant-gets-emitted.c
new file mode 100644
index 00000000000..fd0a3b7030a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/morello/indirected-constant-gets-emitted.c
@@ -0,0 +1,18 @@
+/* Ensure that putting a constant pool address into our indirection table
+   works.  Two problems that we hit during development were:
+   a) The original constant not getting emitted (due to not being mentioned in
+      the RTL stream).
+   b) No .size (or .type) directive associated with constant pool entries.
+
+   Both of these problems can be checked by ensuring that the binary links to
+   an executable.  */
+/* { dg-do link } */
+__int128 val()
+{
+  __int128 ret = 0xffaffafafafaffafULL;
+  return ret;
+}
+int main()
+{
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/morello/indirection-of-symbol-plus-offset.c b/gcc/testsuite/gcc.target/aarch64/morello/indirection-of-symbol-plus-offset.c
new file mode 100644
index 00000000000..b6444ba4a84
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/morello/indirection-of-symbol-plus-offset.c
@@ -0,0 +1,34 @@
+/* Modified from gcc.c-torture/compile/pr38343-2.c  */
+/* { dg-do run } */
+
+/* N.b. This test not only checks whether we manage indirection of a symbol
+   plus an offset correctly.  It also checks that we ensure the indirection of
+   a TREE_CONSTANT_POOL_ADDRESS_P is not referenced using an anchor symbol.
+
+   In this testcase the tree constant pool object is the string "S0022".
+   If the indirection was done using an anchor symbol, then the address would
+   not be directly referenced in RTL anywhere and we would end up with a linker
+   error.  */
+
+#define assert(X) do { if (!(X)) __builtin_abort (); } while (0)
+static struct S
+{
+  char f[6];
+} s[] = { {"01000"} };
+
+char *
+foo (void)
+{
+  return __builtin_stpcpy (s[0].f, "S0022");
+}
+
+/* MORELLO TODO
+   When we get things linking and running we want to double check that this
+   test does what it should.  */
+int main()
+{
+  void * sptr = &s[0];
+  char * foo_ret = foo();
+  assert (foo_ret == sptr + 5);
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/morello/symbol-loading.c b/gcc/testsuite/gcc.target/aarch64/morello/symbol-loading.c
new file mode 100644
index 00000000000..ce4e1473b57
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/morello/symbol-loading.c
@@ -0,0 +1,26 @@
+/* { dg-do compile } */
+/* Checks that this test performs:
+   1) Ensure that a global defined in this TU is accessed through the GOT.
+   2) Ensure a local variable is accessed indirectly through the constant pool.
+   3) Ensure a global declared here and defined elsewhere is accessed through
+      the GOT (as for non-PureCap).  */
+extern int a;
+static int b=2;
+int c=3;
+
+int ret_a() { return a; }
+int ret_b() { return b; }
+int ret_c() { return c; }
+/* Add a function modifying the static variable to ensure we don't optimise
+   away the access to returning a simple constant.  */
+void modify_just_for_optimisation () { b += 1; }
+
+/* Ensure that we load `b` *indirectly*.  Do this by checking that we do not
+   emit an `adrp` for that symbol and by ensuring that there is a `capinit b`
+   in the output.  */
+/* { dg-final { scan-assembler-not {adrp[^\n]*b} { target cheri_capability_pure } } } */
+/* { dg-final { scan-assembler {capinit\tb} { target cheri_capability_pure } } } */
+
+/* Ensure that `a` and `c` are accessed through the GOT.  */
+/* { dg-final { scan-assembler {adrp[^\n]*:got:a} { target cheri_capability_pure } } } */
+/* { dg-final { scan-assembler {adrp[^\n]*:got:c} { target cheri_capability_pure } } } */

