* [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand
@ 2014-06-02 15:03 Ilya Enkovich
2014-06-02 15:28 ` Michael Matz
0 siblings, 1 reply; 11+ messages in thread
From: Ilya Enkovich @ 2014-06-02 15:03 UTC (permalink / raw)
To: gcc-patches
Hi,
This patch adds support for input bounds, call bounds args and returned bounds in the expand pass.
Bootstrapped and tested on linux-x86_64.
Thanks,
Ilya
--
gcc/
2014-06-02 Ilya Enkovich <ilya.enkovich@intel.com>
* calls.c: Include tree-chkp.h, rtl-chkp.h, bitmap.h.
(arg_data): Add fields special_slot, pointer_arg and
pointer_offset.
(store_bounds): New.
(emit_call_1): Propagate instrumentation flag for CALL.
(initialize_argument_information): Compute pointer_arg,
pointer_offset and special_slot for pointer bounds arguments.
(finalize_must_preallocate): Preallocate when storing bounds
in bounds table.
(compute_argument_addresses): Skip pointer bounds.
(expand_call): Store bounds into tables separately. Return
result joined with resulting bounds.
* cfgexpand.c: Include tree-chkp.h, rtl-chkp.h.
(expand_call_stmt): Propagate bounds flag for CALL_EXPR.
(expand_return): Add returned bounds arg. Handle returned bounds.
(expand_gimple_stmt_1): Adjust to new expand_return signature.
(gimple_expand_cfg): Reset rtx bounds map.
* expr.h (store_expr): Add param for bounds target.
* expr.c: Include tree-chkp.h, rtl-chkp.h.
(expand_assignment): Handle returned bounds.
(store_expr): Add bounds target argument. Handle
bounds returned by calls.
(store_constructor): Adjust to new store_expr signature.
(store_field): Likewise.
(expand_expr_real_2): Likewise.
(expand_expr_real_1): Likewise.
* function.c: Include tree-chkp.h, rtl-chkp.h.
(bounds_parm_data): New.
(use_register_for_decl): Do not registerize decls used for bounds
stores and loads.
(assign_parms_augmented_arg_list): Add bounds of the result
structure pointer as the second argument.
(assign_parm_find_entry_rtl): Mark bounds are never passed on
the stack.
(assign_parm_is_stack_parm): Likewise.
(assign_parm_load_bounds): New.
(assign_bounds): New.
(assign_parms): Load bounds and determine a location for
returned bounds.
(diddle_return_value_1): New.
(diddle_return_value): Handle returned bounds.
* function.h (rtl_data): Add field for returned bounds.
* tree-outof-ssa.c (insert_value_copy_on_edge): Adjust to new
store_expr signature.
diff --git a/gcc/calls.c b/gcc/calls.c
index e1dc8eb..140ceb4 100644
--- a/gcc/calls.c
+++ b/gcc/calls.c
@@ -44,11 +44,14 @@ along with GCC; see the file COPYING3. If not see
#include "tm_p.h"
#include "timevar.h"
#include "sbitmap.h"
+#include "bitmap.h"
#include "langhooks.h"
#include "target.h"
#include "cgraph.h"
#include "except.h"
#include "dbgcnt.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
/* Like PREFERRED_STACK_BOUNDARY but in units of bytes, not bits. */
#define STACK_BYTES (PREFERRED_STACK_BOUNDARY / BITS_PER_UNIT)
@@ -76,6 +79,15 @@ struct arg_data
/* If REG is a PARALLEL, this is a copy of VALUE pulled into the correct
form for emit_group_move. */
rtx parallel_value;
+ /* If value is passed in neither reg nor stack, this field holds a number
+ of a special slot to be used. */
+ rtx special_slot;
+ /* For pointer bounds, this holds the index of the parm these bounds
+ are bound to. -1 if there is no such pointer. */
+ int pointer_arg;
+ /* If pointer_arg refers to a structure, then pointer_offset holds the
+ offset of the pointer in that structure. */
+ int pointer_offset;
/* If REG was promoted from the actual mode of the argument expression,
indicates whether the promotion is sign- or zero-extended. */
int unsignedp;
@@ -133,6 +145,7 @@ static void emit_call_1 (rtx, tree, tree, tree, HOST_WIDE_INT, HOST_WIDE_INT,
HOST_WIDE_INT, rtx, rtx, int, rtx, int,
cumulative_args_t);
static void precompute_register_parameters (int, struct arg_data *, int *);
+static void store_bounds (struct arg_data *, struct arg_data *);
static int store_one_arg (struct arg_data *, rtx, int, int, int);
static void store_unaligned_arguments_into_pseudos (struct arg_data *, int);
static int finalize_must_preallocate (int, int, struct arg_data *,
@@ -396,6 +409,10 @@ emit_call_1 (rtx funexp, tree fntree ATTRIBUTE_UNUSED, tree fndecl ATTRIBUTE_UNU
&& MEM_EXPR (funmem) != NULL_TREE)
set_mem_expr (XEXP (call, 0), MEM_EXPR (funmem));
+ /* Mark instrumented calls. */
+ if (call && fntree)
+ CALL_EXPR_WITH_BOUNDS_P (call) = CALL_WITH_BOUNDS_P (fntree);
+
/* Put the register usage information there. */
add_function_usage_to (call_insn, call_fusage);
@@ -1141,18 +1158,84 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
/* First fill in the actual arguments in the ARGS array, splitting
complex arguments if necessary. */
{
- int j = i;
+ int j = i, ptr_arg = -1;
call_expr_arg_iterator iter;
tree arg;
+ bitmap slots = NULL;
if (struct_value_addr_value)
{
args[j].tree_value = struct_value_addr_value;
+
j += inc;
+
+ /* If we pass the structure address then we need to
+ create bounds for it. Since the bounds creation is
+ a call statement, we expand it right here to avoid
+ fixing all other places where it may be expanded. */
+ if (CALL_WITH_BOUNDS_P (exp))
+ {
+ args[j].value = gen_reg_rtx (targetm.chkp_bound_mode ());
+ args[j].tree_value
+ = chkp_make_bounds_for_struct_addr (struct_value_addr_value);
+ expand_expr_real (args[j].tree_value, args[j].value, VOIDmode,
+ EXPAND_NORMAL, 0, false);
+ args[j].pointer_arg = j - inc;
+
+ j += inc;
+ }
}
FOR_EACH_CALL_EXPR_ARG (arg, iter, exp)
{
tree argtype = TREE_TYPE (arg);
+
+ /* Remember last param with pointer and associate it
+ with following pointer bounds. */
+ if (CALL_WITH_BOUNDS_P (exp)
+ && chkp_type_has_pointer (argtype))
+ {
+ if (slots)
+ {
+ BITMAP_FREE (slots);
+ slots = NULL;
+ }
+ ptr_arg = j;
+ if (!BOUNDED_TYPE_P (argtype))
+ slots = chkp_find_bound_slots (argtype);
+ }
+ else if (POINTER_BOUNDS_TYPE_P (argtype))
+ {
+ /* We expect bounds in instrumented calls only.
+ Otherwise it is a sign we lost the flag due to some optimization
+ and may emit call args incorrectly. */
+ gcc_assert (CALL_WITH_BOUNDS_P (exp));
+
+ /* For structures look for the next available pointer. */
+ if (ptr_arg != -1 && slots)
+ {
+ unsigned bnd_no = bitmap_first_set_bit (slots);
+ args[j].pointer_offset =
+ bnd_no * POINTER_SIZE / BITS_PER_UNIT;
+
+ bitmap_clear_bit (slots, bnd_no);
+
+ /* Check we have no more pointers in the structure. */
+ if (bitmap_empty_p (slots))
+ {
+ BITMAP_FREE (slots);
+ slots = NULL;
+ }
+ }
+ args[j].pointer_arg = ptr_arg;
+
+ /* Check we covered all pointers in the previous
+ non-bounds arg. */
+ if (!slots)
+ ptr_arg = -1;
+ }
+ else
+ ptr_arg = -1;
+
if (targetm.calls.split_complex_arg
&& argtype
&& TREE_CODE (argtype) == COMPLEX_TYPE
@@ -1167,6 +1250,9 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
args[j].tree_value = arg;
j += inc;
}
+
+ if (slots)
+ BITMAP_FREE (slots);
}
/* I counts args in order (to be) pushed; ARGPOS counts in order written. */
@@ -1271,7 +1357,7 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
else
copy = assign_temp (type, 1, 0);
- store_expr (args[i].tree_value, copy, 0, false);
+ store_expr (args[i].tree_value, copy, 0, false, NULL);
/* Just change the const function to pure and then let
the next test clear the pure based on
@@ -1302,6 +1388,12 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
args[i].reg = targetm.calls.function_arg (args_so_far, mode, type,
argpos < n_named_args);
+ if (args[i].reg && CONST_INT_P (args[i].reg))
+ {
+ args[i].special_slot = args[i].reg;
+ args[i].reg = NULL;
+ }
+
/* If this is a sibling call and the machine has register windows, the
register window has to be unwinded before calling the routine, so
arguments have to go into the incoming registers. */
@@ -1335,10 +1427,13 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
|| (args[i].pass_on_stack && args[i].reg != 0))
*must_preallocate = 1;
+ /* No stack allocation and padding for bounds. */
+ if (POINTER_BOUNDS_P (args[i].tree_value))
+ ;
/* Compute the stack-size of this argument. */
- if (args[i].reg == 0 || args[i].partial != 0
- || reg_parm_stack_space > 0
- || args[i].pass_on_stack)
+ else if (args[i].reg == 0 || args[i].partial != 0
+ || reg_parm_stack_space > 0
+ || args[i].pass_on_stack)
locate_and_pad_parm (mode, type,
#ifdef STACK_PARMS_IN_REG_PARM_AREA
1,
@@ -1553,6 +1648,12 @@ finalize_must_preallocate (int must_preallocate, int num_actuals,
partial_seen = 1;
else if (partial_seen && args[i].reg == 0)
must_preallocate = 1;
+ /* We preallocate in case there are bounds passed
+ in the bounds table to have precomputed address
+ for bounds association. */
+ else if (POINTER_BOUNDS_P (args[i].tree_value)
+ && !args[i].reg)
+ must_preallocate = 1;
if (TYPE_MODE (TREE_TYPE (args[i].tree_value)) == BLKmode
&& (TREE_CODE (args[i].tree_value) == CALL_EXPR
@@ -1604,6 +1705,10 @@ compute_argument_addresses (struct arg_data *args, rtx argblock, int num_actuals
&& args[i].partial == 0)
continue;
+ /* Pointer Bounds are never passed on the stack. */
+ if (POINTER_BOUNDS_P (args[i].tree_value))
+ continue;
+
if (CONST_INT_P (offset))
addr = plus_constant (Pmode, arg_reg, INTVAL (offset));
else
@@ -2233,6 +2338,8 @@ expand_call (tree exp, rtx target, int ignore)
/* Register in which non-BLKmode value will be returned,
or 0 if no value or if value is BLKmode. */
rtx valreg;
+ /* Register(s) in which bounds are returned. */
+ rtx valbnd = NULL;
/* Address where we should return a BLKmode value;
0 if value not BLKmode. */
rtx structure_value_addr = 0;
@@ -2484,7 +2591,7 @@ expand_call (tree exp, rtx target, int ignore)
structure_value_addr_value =
make_tree (build_pointer_type (TREE_TYPE (funtype)), temp);
- structure_value_addr_parm = 1;
+ structure_value_addr_parm = CALL_WITH_BOUNDS_P (exp) ? 2 : 1;
}
/* Count the arguments and set NUM_ACTUALS. */
@@ -3003,15 +3110,28 @@ expand_call (tree exp, rtx target, int ignore)
/* Figure out the register where the value, if any, will come back. */
valreg = 0;
+ valbnd = 0;
if (TYPE_MODE (rettype) != VOIDmode
&& ! structure_value_addr)
{
if (pcc_struct_value)
- valreg = hard_function_value (build_pointer_type (rettype),
- fndecl, NULL, (pass == 0));
+ {
+ valreg = hard_function_value (build_pointer_type (rettype),
+ fndecl, NULL, (pass == 0));
+ if (CALL_WITH_BOUNDS_P (exp))
+ valbnd = targetm.calls.
+ chkp_function_value_bounds (build_pointer_type (rettype),
+ fndecl, (pass == 0));
+ }
else
- valreg = hard_function_value (rettype, fndecl, fntype,
- (pass == 0));
+ {
+ valreg = hard_function_value (rettype, fndecl, fntype,
+ (pass == 0));
+ if (CALL_WITH_BOUNDS_P (exp))
+ valbnd = targetm.calls.chkp_function_value_bounds (rettype,
+ fndecl,
+ (pass == 0));
+ }
/* If VALREG is a PARALLEL whose first member has a zero
offset, use that. This is for targets such as m68k that
@@ -3052,7 +3172,10 @@ expand_call (tree exp, rtx target, int ignore)
for (i = 0; i < num_actuals; i++)
{
- if (args[i].reg == 0 || args[i].pass_on_stack)
+ /* Delay bounds until all other args are stored. */
+ if (POINTER_BOUNDS_P (args[i].tree_value))
+ continue;
+ else if (args[i].reg == 0 || args[i].pass_on_stack)
{
rtx before_arg = get_last_insn ();
@@ -3105,6 +3228,17 @@ expand_call (tree exp, rtx target, int ignore)
sibcall_failure = 1;
}
+ /* Store all bounds not passed in registers. */
+ for (i = 0; i < num_actuals; i++)
+ {
+ if (POINTER_BOUNDS_P (args[i].tree_value)
+ && !args[i].reg)
+ store_bounds (&args[i],
+ args[i].pointer_arg == -1
+ ? NULL
+ : &args[args[i].pointer_arg]);
+ }
+
/* If we pushed args in forward order, perform stack alignment
after pushing the last arg. */
if (!PUSH_ARGS_REVERSED && argblock == 0)
@@ -3502,6 +3636,9 @@ expand_call (tree exp, rtx target, int ignore)
free (stack_usage_map_buf);
+ /* Join result with returned bounds so caller may use them if needed. */
+ target = chkp_join_splitted_slot (target, valbnd);
+
return target;
}
@@ -4380,6 +4517,68 @@ emit_library_call_value (rtx orgfun, rtx value,
return result;
}
\f
+
+/* Store pointer bounds argument ARG into Bounds Table entry
+ associated with PARM. */
+static void
+store_bounds (struct arg_data *arg, struct arg_data *parm)
+{
+ rtx slot = NULL, ptr = NULL, addr = NULL;
+
+ /* We may pass bounds not associated with any pointer. */
+ if (!parm)
+ {
+ gcc_assert (arg->special_slot);
+ slot = arg->special_slot;
+ ptr = const0_rtx;
+ }
+ /* Find pointer associated with bounds and where it is
+ passed. */
+ else
+ {
+ if (!parm->reg)
+ {
+ gcc_assert (!arg->special_slot);
+
+ addr = adjust_address (parm->stack, Pmode, arg->pointer_offset);
+ }
+ else if (REG_P (parm->reg))
+ {
+ gcc_assert (arg->special_slot);
+ slot = arg->special_slot;
+
+ if (MEM_P (parm->value))
+ addr = adjust_address (parm->value, Pmode, arg->pointer_offset);
+ else if (REG_P (parm->value))
+ ptr = gen_rtx_SUBREG (Pmode, parm->value, arg->pointer_offset);
+ else
+ {
+ gcc_assert (!arg->pointer_offset);
+ ptr = parm->value;
+ }
+ }
+ else
+ {
+ gcc_assert (GET_CODE (parm->reg) == PARALLEL);
+
+ gcc_assert (arg->special_slot);
+ slot = arg->special_slot;
+
+ if (parm->parallel_value)
+ ptr = chkp_get_value_with_offs (parm->parallel_value,
+ GEN_INT (arg->pointer_offset));
+ else
+ gcc_unreachable ();
+ }
+ }
+
+ /* Expand bounds. */
+ if (!arg->value)
+ arg->value = expand_normal (arg->tree_value);
+
+ targetm.calls.store_bounds_for_arg (ptr, addr, arg->value, slot);
+}
+
/* Store a single argument for a function call
into the register or memory area where it must be passed.
*ARG describes the argument value and where to pass it.
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index b7f6360..1c75586 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -73,6 +73,8 @@ along with GCC; see the file COPYING3. If not see
#include "tree-ssa-address.h"
#include "recog.h"
#include "output.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
/* Some systems use __main in a way incompatible with its use in gcc, in these
cases use the macros NAME__MAIN to give a quoted symbol and SYMBOL__MAIN to
@@ -2238,6 +2240,7 @@ expand_call_stmt (gimple stmt)
CALL_FROM_THUNK_P (exp) = gimple_call_from_thunk_p (stmt);
CALL_EXPR_VA_ARG_PACK (exp) = gimple_call_va_arg_pack_p (stmt);
SET_EXPR_LOCATION (exp, gimple_location (stmt));
+ CALL_WITH_BOUNDS_P (exp) = gimple_call_with_bounds_p (stmt);
/* Ensure RTL is created for debug args. */
if (decl && DECL_HAS_DEBUG_ARGS_P (decl))
@@ -3048,11 +3051,12 @@ expand_value_return (rtx val)
from the current function. */
static void
-expand_return (tree retval)
+expand_return (tree retval, tree bounds)
{
rtx result_rtl;
rtx val = 0;
tree retval_rhs;
+ rtx bounds_rtl;
/* If function wants no value, give it none. */
if (TREE_CODE (TREE_TYPE (TREE_TYPE (current_function_decl))) == VOID_TYPE)
@@ -3078,6 +3082,56 @@ expand_return (tree retval)
result_rtl = DECL_RTL (DECL_RESULT (current_function_decl));
+ /* Put returned bounds to the right place. */
+ bounds_rtl = DECL_BOUNDS_RTL (DECL_RESULT (current_function_decl));
+ if (bounds_rtl)
+ {
+ rtx addr, bnd;
+
+ if (bounds)
+ {
+ bnd = expand_normal (bounds);
+ targetm.calls.store_returned_bounds (bounds_rtl, bnd);
+ }
+ else if (REG_P (bounds_rtl))
+ {
+ addr = expand_normal (build_fold_addr_expr (retval_rhs));
+ addr = gen_rtx_MEM (Pmode, addr);
+ bnd = targetm.calls.load_bounds_for_arg (addr, NULL, NULL);
+ targetm.calls.store_returned_bounds (bounds_rtl, bnd);
+ }
+ else
+ {
+ int n;
+
+ gcc_assert (GET_CODE (bounds_rtl) == PARALLEL);
+
+ addr = expand_normal (build_fold_addr_expr (retval_rhs));
+ addr = gen_rtx_MEM (Pmode, addr);
+
+ for (n = 0; n < XVECLEN (bounds_rtl, 0); n++)
+ {
+ rtx offs = XEXP (XVECEXP (bounds_rtl, 0, n), 1);
+ rtx slot = XEXP (XVECEXP (bounds_rtl, 0, n), 0);
+ rtx from = adjust_address (addr, Pmode, INTVAL (offs));
+ rtx bnd = targetm.calls.load_bounds_for_arg (from, NULL, NULL);
+ targetm.calls.store_returned_bounds (slot, bnd);
+ }
+ }
+ }
+ else if (chkp_function_instrumented_p (current_function_decl)
+ && !BOUNDED_P (retval_rhs)
+ && chkp_type_has_pointer (TREE_TYPE (retval_rhs))
+ && TREE_CODE (retval_rhs) != RESULT_DECL)
+ {
+ rtx addr = expand_normal (build_fold_addr_expr (retval_rhs));
+ addr = gen_rtx_MEM (Pmode, addr);
+
+ gcc_assert (MEM_P (result_rtl));
+
+ chkp_copy_bounds_for_stack_parm (result_rtl, addr, TREE_TYPE (retval_rhs));
+ }
+
/* If we are returning the RESULT_DECL, then the value has already
been stored into it, so we don't have to do anything special. */
if (TREE_CODE (retval_rhs) == RESULT_DECL)
@@ -3183,7 +3237,7 @@ expand_gimple_stmt_1 (gimple stmt)
if (!op0)
expand_null_return ();
else
- expand_return (op0);
+ expand_return (op0, gimple_return_retbnd (stmt));
break;
case GIMPLE_ASSIGN:
@@ -5556,6 +5610,9 @@ gimple_expand_cfg (void)
rtl_profile_for_bb (ENTRY_BLOCK_PTR_FOR_FN (cfun));
+ if (chkp_function_instrumented_p (current_function_decl))
+ chkp_reset_rtl_bounds ();
+
insn_locations_init ();
if (!DECL_IS_BUILTIN (current_function_decl))
{
diff --git a/gcc/expr.c b/gcc/expr.c
index 72e4401..fe76553 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -67,6 +67,8 @@ along with GCC; see the file COPYING3. If not see
#include "params.h"
#include "tree-ssa-address.h"
#include "cfgexpand.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
/* Decide whether a function's arguments should be processed
from first to last or from last to first.
@@ -4917,11 +4919,11 @@ expand_assignment (tree to, tree from, bool nontemporal)
if (COMPLEX_MODE_P (TYPE_MODE (TREE_TYPE (from)))
&& bitpos == 0
&& bitsize == mode_bitsize)
- result = store_expr (from, to_rtx, false, nontemporal);
+ result = store_expr (from, to_rtx, false, nontemporal, NULL);
else if (bitsize == mode_bitsize / 2
&& (bitpos == 0 || bitpos == mode_bitsize / 2))
result = store_expr (from, XEXP (to_rtx, bitpos != 0), false,
- nontemporal);
+ nontemporal, NULL);
else if (bitpos + bitsize <= mode_bitsize / 2)
result = store_field (XEXP (to_rtx, 0), bitsize, bitpos,
bitregion_start, bitregion_end,
@@ -5008,9 +5010,14 @@ expand_assignment (tree to, tree from, bool nontemporal)
|| TREE_CODE (to) == SSA_NAME))
{
rtx value;
+ rtx bounds;
push_temp_slots ();
value = expand_normal (from);
+
+ /* Split value and bounds to store them separately. */
+ chkp_split_slot (value, &value, &bounds);
+
if (to_rtx == 0)
to_rtx = expand_expr (to, NULL_RTX, VOIDmode, EXPAND_WRITE);
@@ -5044,6 +5051,15 @@ expand_assignment (tree to, tree from, bool nontemporal)
emit_move_insn (to_rtx, value);
}
+
+ /* Store bounds if required. */
+ if (bounds
+ && (BOUNDED_P (to) || chkp_type_has_pointer (TREE_TYPE (to))))
+ {
+ gcc_assert (MEM_P (to_rtx));
+ chkp_emit_bounds_store (bounds, value, to_rtx);
+ }
+
preserve_temp_slots (to_rtx);
pop_temp_slots ();
return;
@@ -5119,7 +5135,7 @@ expand_assignment (tree to, tree from, bool nontemporal)
/* Compute FROM and store the value in the rtx we got. */
push_temp_slots ();
- result = store_expr (from, to_rtx, 0, nontemporal);
+ result = store_expr (from, to_rtx, 0, nontemporal, to);
preserve_temp_slots (result);
pop_temp_slots ();
return;
@@ -5156,10 +5172,14 @@ emit_storent_insn (rtx to, rtx from)
If CALL_PARAM_P is nonzero, this is a store into a call param on the
stack, and block moves may need to be treated specially.
- If NONTEMPORAL is true, try using a nontemporal store instruction. */
+ If NONTEMPORAL is true, try using a nontemporal store instruction.
+
+ If BTARGET is not NULL then computed bounds of EXP are
+ associated with BTARGET. */
rtx
-store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
+store_expr (tree exp, rtx target, int call_param_p, bool nontemporal,
+ tree btarget)
{
rtx temp;
rtx alt_rtl = NULL_RTX;
@@ -5181,7 +5201,7 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
expand_expr (TREE_OPERAND (exp, 0), const0_rtx, VOIDmode,
call_param_p ? EXPAND_STACK_PARM : EXPAND_NORMAL);
return store_expr (TREE_OPERAND (exp, 1), target, call_param_p,
- nontemporal);
+ nontemporal, btarget);
}
else if (TREE_CODE (exp) == COND_EXPR && GET_MODE (target) == BLKmode)
{
@@ -5196,12 +5216,12 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
NO_DEFER_POP;
jumpifnot (TREE_OPERAND (exp, 0), lab1, -1);
store_expr (TREE_OPERAND (exp, 1), target, call_param_p,
- nontemporal);
+ nontemporal, btarget);
emit_jump_insn (gen_jump (lab2));
emit_barrier ();
emit_label (lab1);
store_expr (TREE_OPERAND (exp, 2), target, call_param_p,
- nontemporal);
+ nontemporal, btarget);
emit_label (lab2);
OK_DEFER_POP;
@@ -5253,6 +5273,19 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
temp = expand_expr (exp, inner_target, VOIDmode,
call_param_p ? EXPAND_STACK_PARM : EXPAND_NORMAL);
+ /* Handle bounds returned by call. */
+ if (TREE_CODE (exp) == CALL_EXPR)
+ {
+ rtx bounds;
+ chkp_split_slot (temp, &temp, &bounds);
+ if (bounds && btarget)
+ {
+ gcc_assert (TREE_CODE (btarget) == SSA_NAME);
+ rtx tmp = targetm.calls.load_returned_bounds (bounds);
+ chkp_set_rtl_bounds (btarget, tmp);
+ }
+ }
+
/* If TEMP is a VOIDmode constant, use convert_modes to make
sure that we properly convert it. */
if (CONSTANT_P (temp) && GET_MODE (temp) == VOIDmode)
@@ -5334,6 +5367,19 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
(call_param_p
? EXPAND_STACK_PARM : EXPAND_NORMAL),
&alt_rtl, false);
+
+ /* Handle bounds returned by call. */
+ if (TREE_CODE (exp) == CALL_EXPR)
+ {
+ rtx bounds;
+ chkp_split_slot (temp, &temp, &bounds);
+ if (bounds && btarget)
+ {
+ gcc_assert (TREE_CODE (btarget) == SSA_NAME);
+ rtx tmp = targetm.calls.load_returned_bounds (bounds);
+ chkp_set_rtl_bounds (btarget, tmp);
+ }
+ }
}
/* If TEMP is a VOIDmode constant and the mode of the type of EXP is not
@@ -6222,7 +6268,7 @@ store_constructor (tree exp, rtx target, int cleared, HOST_WIDE_INT size)
VAR_DECL, NULL_TREE, domain);
index_r = gen_reg_rtx (promote_decl_mode (index, NULL));
SET_DECL_RTL (index, index_r);
- store_expr (lo_index, index_r, 0, false);
+ store_expr (lo_index, index_r, 0, false, NULL);
/* Build the head of the loop. */
do_pending_stack_adjust ();
@@ -6249,7 +6295,7 @@ store_constructor (tree exp, rtx target, int cleared, HOST_WIDE_INT size)
store_constructor (value, xtarget, cleared,
bitsize / BITS_PER_UNIT);
else
- store_expr (value, xtarget, 0, false);
+ store_expr (value, xtarget, 0, false, NULL);
/* Generate a conditional jump to exit the loop. */
exit_cond = build2 (LT_EXPR, integer_type_node,
@@ -6292,7 +6338,7 @@ store_constructor (tree exp, rtx target, int cleared, HOST_WIDE_INT size)
expand_normal (position),
highest_pow2_factor (position));
xtarget = adjust_address (xtarget, mode, 0);
- store_expr (value, xtarget, 0, false);
+ store_expr (value, xtarget, 0, false, NULL);
}
else
{
@@ -6498,7 +6544,7 @@ store_field (rtx target, HOST_WIDE_INT bitsize, HOST_WIDE_INT bitpos,
/* We're storing into a struct containing a single __complex. */
gcc_assert (!bitpos);
- return store_expr (exp, target, 0, nontemporal);
+ return store_expr (exp, target, 0, nontemporal, NULL);
}
/* If the structure is in a register or if the component
@@ -6651,7 +6697,7 @@ store_field (rtx target, HOST_WIDE_INT bitsize, HOST_WIDE_INT bitpos,
if (!MEM_KEEP_ALIAS_SET_P (to_rtx) && MEM_ALIAS_SET (to_rtx) != 0)
set_mem_alias_set (to_rtx, alias_set);
- return store_expr (exp, to_rtx, 0, nontemporal);
+ return store_expr (exp, to_rtx, 0, nontemporal, NULL);
}
}
\f
@@ -8141,7 +8187,7 @@ expand_expr_real_2 (sepops ops, rtx target, enum machine_mode tmode,
store_expr (treeop0,
adjust_address (target, TYPE_MODE (valtype), 0),
modifier == EXPAND_STACK_PARM,
- false);
+ false, NULL);
else
{
@@ -9189,14 +9235,14 @@ expand_expr_real_2 (sepops ops, rtx target, enum machine_mode tmode,
jumpifnot (treeop0, op0, -1);
store_expr (treeop1, temp,
modifier == EXPAND_STACK_PARM,
- false);
+ false, NULL);
emit_jump_insn (gen_jump (op1));
emit_barrier ();
emit_label (op0);
store_expr (treeop2, temp,
modifier == EXPAND_STACK_PARM,
- false);
+ false, NULL);
emit_label (op1);
OK_DEFER_POP;
@@ -9722,7 +9768,7 @@ expand_expr_real_1 (tree exp, rtx target, enum machine_mode tmode,
{
temp = assign_stack_temp (DECL_MODE (base),
GET_MODE_SIZE (DECL_MODE (base)));
- store_expr (base, temp, 0, false);
+ store_expr (base, temp, 0, false, NULL);
temp = adjust_address (temp, BLKmode, offset);
set_mem_size (temp, int_size_in_bytes (type));
return temp;
diff --git a/gcc/expr.h b/gcc/expr.h
index 524da67..bbb29bc 100644
--- a/gcc/expr.h
+++ b/gcc/expr.h
@@ -432,7 +432,7 @@ extern void expand_assignment (tree, tree, bool);
and storing the value into TARGET.
If SUGGEST_REG is nonzero, copy the value through a register
and return that register, if that is possible. */
-extern rtx store_expr (tree, rtx, int, bool);
+extern rtx store_expr (tree, rtx, int, bool, tree);
/* Given an rtx that may include add and multiply operations,
generate them as insns and return a pseudo-reg containing the value.
diff --git a/gcc/function.c b/gcc/function.c
index a61e475..a08d4ad 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -63,6 +63,8 @@ along with GCC; see the file COPYING3. If not see
#include "df.h"
#include "params.h"
#include "bb-reorder.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
/* So we can assign to cfun in this file. */
#undef cfun
@@ -2082,6 +2084,14 @@ use_register_for_decl (const_tree decl)
if (TREE_ADDRESSABLE (decl))
return false;
+ /* Decl is implicitly addressable by bounds stores and loads
+ if it is an aggregate holding bounds. */
+ if (chkp_function_instrumented_p (current_function_decl)
+ && TREE_TYPE (decl)
+ && !BOUNDED_P (decl)
+ && chkp_type_has_pointer (TREE_TYPE (decl)))
+ return false;
+
/* Only register-like things go in registers. */
if (DECL_MODE (decl) == BLKmode)
return false;
@@ -2202,6 +2212,15 @@ struct assign_parm_data_one
BOOL_BITFIELD loaded_in_reg : 1;
};
+struct bounds_parm_data
+{
+ assign_parm_data_one parm_data;
+ tree bounds_parm;
+ tree ptr_parm;
+ rtx ptr_entry;
+ int bound_no;
+};
+
/* A subroutine of assign_parms. Initialize ALL. */
static void
@@ -2312,6 +2331,23 @@ assign_parms_augmented_arg_list (struct assign_parm_data_all *all)
fnargs.safe_insert (0, decl);
all->function_result_decl = decl;
+
+ /* If the function is instrumented then the bounds of the
+ passed structure address are the second argument. */
+ if (chkp_function_instrumented_p (fndecl))
+ {
+ decl = build_decl (DECL_SOURCE_LOCATION (fndecl),
+ PARM_DECL, get_identifier (".result_bnd"),
+ pointer_bounds_type_node);
+ DECL_ARG_TYPE (decl) = pointer_bounds_type_node;
+ DECL_ARTIFICIAL (decl) = 1;
+ DECL_NAMELESS (decl) = 1;
+ TREE_CONSTANT (decl) = 1;
+
+ DECL_CHAIN (decl) = DECL_CHAIN (all->orig_fnargs);
+ DECL_CHAIN (all->orig_fnargs) = decl;
+ fnargs.safe_insert (1, decl);
+ }
}
/* If the target wants to split complex arguments into scalars, do so. */
@@ -2452,7 +2488,7 @@ assign_parm_find_entry_rtl (struct assign_parm_data_all *all,
it came in a register so that REG_PARM_STACK_SPACE isn't skipped.
In this case, we call FUNCTION_ARG with NAMED set to 1 instead of 0
as it was the previous time. */
- in_regs = entry_parm != 0;
+ in_regs = (entry_parm != 0) || POINTER_BOUNDS_TYPE_P (data->passed_type);
#ifdef STACK_PARMS_IN_REG_PARM_AREA
in_regs = true;
#endif
@@ -2541,8 +2577,12 @@ static bool
assign_parm_is_stack_parm (struct assign_parm_data_all *all,
struct assign_parm_data_one *data)
{
+ /* Bounds are never passed on the stack to keep compatibility
+ with non-instrumented code. */
+ if (POINTER_BOUNDS_TYPE_P (data->passed_type))
+ return false;
/* Trivially true if we've no incoming register. */
- if (data->entry_parm == NULL)
+ else if (data->entry_parm == NULL)
;
/* Also true if we're partially in registers and partially not,
since we've arranged to drop the entire argument on the stack. */
@@ -3348,6 +3388,119 @@ assign_parms_unsplit_complex (struct assign_parm_data_all *all,
}
}
+/* Load bounds PARM from bounds table. */
+static void
+assign_parm_load_bounds (struct assign_parm_data_one *data,
+ tree parm,
+ rtx entry,
+ unsigned bound_no)
+{
+ bitmap_iterator bi;
+ unsigned i, offs = 0;
+ int bnd_no = -1;
+ rtx slot = NULL, ptr = NULL;
+
+ if (parm)
+ {
+ bitmap slots = chkp_find_bound_slots (TREE_TYPE (parm));
+ EXECUTE_IF_SET_IN_BITMAP (slots, 0, i, bi)
+ {
+ if (bound_no)
+ bound_no--;
+ else
+ {
+ bnd_no = i;
+ break;
+ }
+ }
+ BITMAP_FREE (slots);
+ }
+
+ /* We may have bounds not associated with any pointer. */
+ if (bnd_no != -1)
+ offs = bnd_no * POINTER_SIZE / BITS_PER_UNIT;
+
+ /* Find associated pointer. */
+ if (bnd_no == -1)
+ {
+ /* If the bounds are not associated with any pointer,
+ then they are passed in a register or special slot. */
+ gcc_assert (data->entry_parm);
+ ptr = const0_rtx;
+ }
+ else if (MEM_P (entry))
+ slot = adjust_address (entry, Pmode, offs);
+ else if (REG_P (entry))
+ ptr = gen_rtx_REG (Pmode, REGNO (entry) + bnd_no);
+ else if (GET_CODE (entry) == PARALLEL)
+ ptr = chkp_get_value_with_offs (entry, GEN_INT (offs));
+ else
+ gcc_unreachable ();
+ data->entry_parm = targetm.calls.load_bounds_for_arg (slot, ptr,
+ data->entry_parm);
+}
+
+/* Assign RTL expressions to the function's bounds parameters BNDARGS. */
+
+static void
+assign_bounds (vec<bounds_parm_data> &bndargs,
+ struct assign_parm_data_all &all)
+{
+ unsigned i, pass, handled = 0;
+ bounds_parm_data *pbdata;
+
+ if (!bndargs.exists ())
+ return;
+
+ /* We make several passes to store input bounds. First handle bounds
+ passed in registers. After that we load bounds passed in special
+ slots. Finally we load bounds from the Bounds Table. */
+ for (pass = 0; pass < 3; pass++)
+ FOR_EACH_VEC_ELT (bndargs, i, pbdata)
+ {
+ /* Pass 0 => regs only. */
+ if (pass == 0
+ && (!pbdata->parm_data.entry_parm
+ || GET_CODE (pbdata->parm_data.entry_parm) != REG))
+ continue;
+ /* Pass 1 => slots only. */
+ else if (pass == 1
+ && (!pbdata->parm_data.entry_parm
+ || GET_CODE (pbdata->parm_data.entry_parm) == REG))
+ continue;
+ /* Pass 2 => BT only. */
+ else if (pass == 2
+ && pbdata->parm_data.entry_parm)
+ continue;
+
+ if (!pbdata->parm_data.entry_parm
+ || GET_CODE (pbdata->parm_data.entry_parm) != REG)
+ assign_parm_load_bounds (&pbdata->parm_data, pbdata->ptr_parm,
+ pbdata->ptr_entry, pbdata->bound_no);
+
+ set_decl_incoming_rtl (pbdata->bounds_parm,
+ pbdata->parm_data.entry_parm, false);
+
+ if (assign_parm_setup_block_p (&pbdata->parm_data))
+ assign_parm_setup_block (&all, pbdata->bounds_parm,
+ &pbdata->parm_data);
+ else if (pbdata->parm_data.passed_pointer
+ || use_register_for_decl (pbdata->bounds_parm))
+ assign_parm_setup_reg (&all, pbdata->bounds_parm,
+ &pbdata->parm_data);
+ else
+ assign_parm_setup_stack (&all, pbdata->bounds_parm,
+ &pbdata->parm_data);
+
+ /* Count handled bounds to make sure we miss nothing. */
+ handled++;
+ }
+
+ gcc_assert (handled == bndargs.length ());
+
+ bndargs.release ();
+}
+
/* Assign RTL expressions to the function's parameters. This may involve
copying them into registers and using those registers as the DECL_RTL. */
@@ -3357,7 +3510,11 @@ assign_parms (tree fndecl)
struct assign_parm_data_all all;
tree parm;
vec<tree> fnargs;
- unsigned i;
+ unsigned i, bound_no = 0;
+ tree last_arg = NULL;
+ rtx last_arg_entry = NULL;
+ vec<bounds_parm_data> bndargs = vNULL;
+ bounds_parm_data bdata;
crtl->args.internal_arg_pointer
= targetm.calls.internal_arg_pointer ();
@@ -3399,9 +3556,6 @@ assign_parms (tree fndecl)
}
}
- if (cfun->stdarg && !DECL_CHAIN (parm))
- assign_parms_setup_varargs (&all, &data, false);
-
/* Find out where the parameter arrives in this function. */
assign_parm_find_entry_rtl (&all, &data);
@@ -3411,7 +3565,15 @@ assign_parms (tree fndecl)
assign_parm_find_stack_rtl (parm, &data);
assign_parm_adjust_entry_rtl (&data);
}
-
+ if (!POINTER_BOUNDS_TYPE_P (data.passed_type))
+ {
+ /* Remember where the last non-bounds arg was passed in case
+ we have to load associated bounds for it from the Bounds
+ Table. */
+ last_arg = parm;
+ last_arg_entry = data.entry_parm;
+ bound_no = 0;
+ }
/* Record permanently how this parm was passed. */
if (data.passed_pointer)
{
@@ -3423,20 +3585,63 @@ assign_parms (tree fndecl)
else
set_decl_incoming_rtl (parm, data.entry_parm, false);
+ /* Bounds should be loaded in a particular order to
+ have registers allocated correctly. Collect info about
+ input bounds and load them later. */
+ if (POINTER_BOUNDS_TYPE_P (data.passed_type))
+ {
+ /* Expect bounds in instrumented functions only. */
+ gcc_assert (chkp_function_instrumented_p (fndecl));
+
+ bdata.parm_data = data;
+ bdata.bounds_parm = parm;
+ bdata.ptr_parm = last_arg;
+ bdata.ptr_entry = last_arg_entry;
+ bdata.bound_no = bound_no;
+ bndargs.safe_push (bdata);
+ }
+ else
+ {
+ assign_parm_adjust_stack_rtl (&data);
+
+ if (assign_parm_setup_block_p (&data))
+ assign_parm_setup_block (&all, parm, &data);
+ else if (data.passed_pointer || use_register_for_decl (parm))
+ assign_parm_setup_reg (&all, parm, &data);
+ else
+ assign_parm_setup_stack (&all, parm, &data);
+ }
+
+ if (cfun->stdarg && !DECL_CHAIN (parm))
+ {
+ int pretend_bytes = 0;
+
+ assign_parms_setup_varargs (&all, &data, false);
+
+ if (chkp_function_instrumented_p (fndecl))
+ {
+ /* We expect this is the last parm. Otherwise it is wrong
+ to assign bounds right now. */
+ gcc_assert (i == (fnargs.length () - 1));
+ assign_bounds (bndargs, all);
+ targetm.calls.setup_incoming_vararg_bounds (all.args_so_far,
+ data.promoted_mode,
+ data.passed_type,
+ &pretend_bytes,
+ false);
+ }
+ }
+
/* Update info on where next arg arrives in registers. */
targetm.calls.function_arg_advance (all.args_so_far, data.promoted_mode,
data.passed_type, data.named_arg);
- assign_parm_adjust_stack_rtl (&data);
-
- if (assign_parm_setup_block_p (&data))
- assign_parm_setup_block (&all, parm, &data);
- else if (data.passed_pointer || use_register_for_decl (parm))
- assign_parm_setup_reg (&all, parm, &data);
- else
- assign_parm_setup_stack (&all, parm, &data);
+ if (POINTER_BOUNDS_TYPE_P (data.passed_type))
+ bound_no++;
}
+ assign_bounds (bndargs, all);
+
if (targetm.calls.split_complex_arg)
assign_parms_unsplit_complex (&all, fnargs);
@@ -3557,6 +3762,10 @@ assign_parms (tree fndecl)
real_decl_rtl = targetm.calls.function_value (TREE_TYPE (decl_result),
fndecl, true);
+ if (chkp_function_instrumented_p (fndecl))
+ crtl->return_bnd
+ = targetm.calls.chkp_function_value_bounds (TREE_TYPE (decl_result),
+ fndecl, true);
REG_FUNCTION_VALUE_P (real_decl_rtl) = 1;
/* The delay slot scheduler assumes that crtl->return_rtx
holds the hard register containing the return value, not a
@@ -4778,6 +4987,14 @@ expand_function_start (tree subr)
/* Set DECL_REGISTER flag so that expand_function_end will copy the
result to the real return register(s). */
DECL_REGISTER (DECL_RESULT (subr)) = 1;
+
+ if (chkp_function_instrumented_p (current_function_decl))
+ {
+ tree return_type = TREE_TYPE (DECL_RESULT (subr));
+ rtx bounds = targetm.calls.chkp_function_value_bounds (return_type,
+ subr, 1);
+ SET_DECL_BOUNDS_RTL (DECL_RESULT (subr), bounds);
+ }
}
/* Initialize rtx for parameters and local variables.
@@ -4867,14 +5084,11 @@ expand_dummy_function_end (void)
in_dummy_function = false;
}
-/* Call DOIT for each hard register used as a return value from
- the current function. */
+/* Helper for diddle_return_value. */
void
-diddle_return_value (void (*doit) (rtx, void *), void *arg)
+diddle_return_value_1 (void (*doit) (rtx, void *), void *arg, rtx outgoing)
{
- rtx outgoing = crtl->return_rtx;
-
if (! outgoing)
return;
@@ -4894,6 +5108,16 @@ diddle_return_value (void (*doit) (rtx, void *), void *arg)
}
}
+/* Call DOIT for each hard register used as a return value from
+ the current function. */
+
+void
+diddle_return_value (void (*doit) (rtx, void *), void *arg)
+{
+ diddle_return_value_1 (doit, arg, crtl->return_rtx);
+ diddle_return_value_1 (doit, arg, crtl->return_bnd);
+}
+
static void
do_clobber_return_reg (rtx reg, void *arg ATTRIBUTE_UNUSED)
{
diff --git a/gcc/function.h b/gcc/function.h
index 38a0fc4..736bb02 100644
--- a/gcc/function.h
+++ b/gcc/function.h
@@ -252,6 +252,9 @@ struct GTY(()) rtl_data {
result in a register, current_function_return_rtx will always be
the hard register containing the result. */
rtx return_rtx;
+ /* If nonzero, an RTL expression for the location at which the current
+ function returns bounds for its result. */
+ rtx return_bnd;
/* Vector of initial-value pairs. Each pair consists of a pseudo
register of approprite mode that stores the initial value a hard
diff --git a/gcc/tree-outof-ssa.c b/gcc/tree-outof-ssa.c
index d5a635b..07f336b 100644
--- a/gcc/tree-outof-ssa.c
+++ b/gcc/tree-outof-ssa.c
@@ -313,7 +313,7 @@ insert_value_copy_on_edge (edge e, int dest, tree src, source_location locus)
else if (src_mode == BLKmode)
{
x = SA.partition_to_pseudo[dest];
- store_expr (src, x, 0, false);
+ store_expr (src, x, 0, false, NULL);
}
else
x = expand_expr (src, SA.partition_to_pseudo[dest],
* Re: [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand
2014-06-02 15:03 [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand Ilya Enkovich
@ 2014-06-02 15:28 ` Michael Matz
2014-06-02 15:55 ` Ilya Enkovich
0 siblings, 1 reply; 11+ messages in thread
From: Michael Matz @ 2014-06-02 15:28 UTC (permalink / raw)
To: Ilya Enkovich; +Cc: gcc-patches
Hi,
On Mon, 2 Jun 2014, Ilya Enkovich wrote:
> This patch adds support for input bounds, call bounds args and returned
> bounds in expand pass.
>
> * expr.h (store_expr): Add param for bounds target.
There is exactly one place (except for the self-recursive ones) where you
call the new store_expr with a non-null argument for bounds target, and it
seems to be only necessary for when some sub-expression of the RHS is a
call. Can you somehow arrange to move that handling to the single place
in expand_assignment() so that you don't need to change the signature of
store_expr?
Ciao,
Michael.
* Re: [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand
2014-06-02 15:28 ` Michael Matz
@ 2014-06-02 15:55 ` Ilya Enkovich
2014-06-04 14:36 ` Michael Matz
0 siblings, 1 reply; 11+ messages in thread
From: Ilya Enkovich @ 2014-06-02 15:55 UTC (permalink / raw)
To: Michael Matz; +Cc: gcc-patches
2014-06-02 19:28 GMT+04:00 Michael Matz <matz@suse.de>:
> Hi,
>
> On Mon, 2 Jun 2014, Ilya Enkovich wrote:
>
>> This patch adds support for input bounds, call bounds args and returned
>> bounds in expand pass.
>>
>> * expr.h (store_expr): Add param for bounds target.
>
> There is exactly one place (except for the self-recursive ones) where you
> call the new store_expr with a non-null argument for bounds target, and it
> seems to be only necessary for when some sub-expression of the RHS is a
> call. Can you somehow arrange to move that handling to the single place
> in expand_assignment() so that you don't need to change the signature of
> store_expr?
I see only one nice way to do it - store_expr should return the bounds
of the expanded exp. Currently it always returns NULL_RTX. Does that look
better than a new argument?
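For clarity, the two alternatives being weighed, as prototypes (a sketch only, not code from the patch):
/* This patch: pass an explicit bounds target.  */
rtx store_expr (tree exp, rtx target, int call_param_p, bool nontemporal,
                tree btarget);
/* Alternative: keep the old argument list and let store_expr return the
   bounds of the expanded EXP (NULL_RTX when there are none) instead of
   the current always-NULL_RTX return value.  */
rtx store_expr (tree exp, rtx target, int call_param_p, bool nontemporal);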
Ilya
>
>
> Ciao,
> Michael.
* Re: [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand
2014-06-02 15:55 ` Ilya Enkovich
@ 2014-06-04 14:36 ` Michael Matz
2014-06-05 14:46 ` Ilya Enkovich
2014-11-05 23:05 ` Eric Botcazou
0 siblings, 2 replies; 11+ messages in thread
From: Michael Matz @ 2014-06-04 14:36 UTC (permalink / raw)
To: Ilya Enkovich; +Cc: gcc-patches
Hi,
On Mon, 2 Jun 2014, Ilya Enkovich wrote:
> > There is exactly one place (except for the self-recursive ones) where
> > you call the new store_expr with a non-null argument for bounds
> > target, and it seems to be only necessary for when some sub-expression
> > of the RHS is a call. Can you somehow arrange to move that handling
> > to the single place in expand_assignment() so that you don't need to
> > change the signature of store_expr?
>
> I see only one nice way to do it - store_expr should return the bounds
> of the expanded exp. Currently it always returns NULL_RTX. Does that look
> better than a new argument?
IMHO it does. That or introducing a new store_expr_with_bounds (with the
new argument) and letting store_expr be a wrapper for that, passing the
NULL. Basically anything that avoids adding a new parameter for most of
the existing calls to store_expr.
Ciao,
Michael.
* Re: [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand
2014-06-04 14:36 ` Michael Matz
@ 2014-06-05 14:46 ` Ilya Enkovich
2014-09-15 7:20 ` Ilya Enkovich
2014-09-23 20:58 ` Jeff Law
2014-11-05 23:05 ` Eric Botcazou
1 sibling, 2 replies; 11+ messages in thread
From: Ilya Enkovich @ 2014-06-05 14:46 UTC (permalink / raw)
To: Michael Matz; +Cc: gcc-patches
On 04 Jun 16:36, Michael Matz wrote:
> Hi,
>
> On Mon, 2 Jun 2014, Ilya Enkovich wrote:
>
> > > There is exactly one place (except for the self-recursive ones) where
> > > you call the new store_expr with a non-null argument for bounds
> > > target, and it seems to be only necessary for when some sub-expression
> > > of the RHS is a call. Can you somehow arrange to move that handling
> > > to the single place in expand_assignment() so that you don't need to
> > > change the signature of store_expr?
> >
> > I see only one nice way to do it - store_expr should return the bounds
> > of the expanded exp. Currently it always returns NULL_RTX. Does that look
> > better than a new argument?
>
> IMHO it does. That or introducing a new store_expr_with_bounds (with the
> new argument) and letting store_expr be a wrapper for that, passing the
> NULL. Basically anything that avoids adding a new parameter for most of
> the existing calls to store_expr.
>
>
> Ciao,
> Michael.
Here is an updated version using store_expr_with_bounds and store_expr as a wrapper for it.
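The wrapper itself reduces to a simple forwarding call, roughly (a sketch of the approach; the actual definition is in the expr.c part of the patch):
rtx
store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
{
  return store_expr_with_bounds (exp, target, call_param_p, nontemporal,
                                 NULL);
}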
Bootstrapped and tested on linux-x86_64.
Thanks,
Ilya
--
gcc/
2014-06-05 Ilya Enkovich <ilya.enkovich@intel.com>
* calls.c: Include tree-chkp.h, rtl-chkp.h, bitmap.h.
(arg_data): Add fields special_slot, pointer_arg and
pointer_offset.
(store_bounds): New.
(emit_call_1): Propagate instrumentation flag for CALL.
(initialize_argument_information): Compute pointer_arg,
pointer_offset and special_slot for pointer bounds arguments.
(finalize_must_preallocate): Preallocate when storing bounds
in bounds table.
(compute_argument_addresses): Skip pointer bounds.
(expand_call): Store bounds into tables separately. Return
result joined with resulting bounds.
* cfgexpand.c: Include tree-chkp.h, rtl-chkp.h.
(expand_call_stmt): Propagate bounds flag for CALL_EXPR.
(expand_return): Add returned bounds arg. Handle returned bounds.
(expand_gimple_stmt_1): Adjust to new expand_return signature.
(gimple_expand_cfg): Reset rtx bounds map.
* expr.c: Include tree-chkp.h, rtl-chkp.h.
(expand_assignment): Handle returned bounds.
(store_expr_with_bounds): New. Replaces store_expr with new bounds
target argument. Handle bounds returned by calls.
(store_expr): Now wraps store_expr_with_bounds.
* expr.h (store_expr_with_bounds): New.
* function.c: Include tree-chkp.h, rtl-chkp.h.
(bounds_parm_data): New.
(use_register_for_decl): Do not registerize decls used for bounds
stores and loads.
(assign_parms_augmented_arg_list): Add bounds of the result
structure pointer as the second argument.
(assign_parm_find_entry_rtl): Mark bounds are never passed on
the stack.
(assign_parm_is_stack_parm): Likewise.
(assign_parm_load_bounds): New.
(assign_bounds): New.
(assign_parms): Load bounds and determine a location for
returned bounds.
(diddle_return_value_1): New.
(diddle_return_value): Handle returned bounds.
* function.h (rtl_data): Add field for returned bounds.
diff --git a/gcc/calls.c b/gcc/calls.c
index e1dc8eb..5fbbe9f 100644
--- a/gcc/calls.c
+++ b/gcc/calls.c
@@ -44,11 +44,14 @@ along with GCC; see the file COPYING3. If not see
#include "tm_p.h"
#include "timevar.h"
#include "sbitmap.h"
+#include "bitmap.h"
#include "langhooks.h"
#include "target.h"
#include "cgraph.h"
#include "except.h"
#include "dbgcnt.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
/* Like PREFERRED_STACK_BOUNDARY but in units of bytes, not bits. */
#define STACK_BYTES (PREFERRED_STACK_BOUNDARY / BITS_PER_UNIT)
@@ -76,6 +79,15 @@ struct arg_data
/* If REG is a PARALLEL, this is a copy of VALUE pulled into the correct
form for emit_group_move. */
rtx parallel_value;
+ /* If value is passed in neither reg nor stack, this field holds a number
+ of a special slot to be used. */
+ rtx special_slot;
+ /* For pointer bounds, this holds the index of the parm these bounds
+ are bound to. -1 if there is no such pointer. */
+ int pointer_arg;
+ /* If pointer_arg refers to a structure, then pointer_offset holds the
+ offset of the pointer in that structure. */
+ int pointer_offset;
/* If REG was promoted from the actual mode of the argument expression,
indicates whether the promotion is sign- or zero-extended. */
int unsignedp;
@@ -133,6 +145,7 @@ static void emit_call_1 (rtx, tree, tree, tree, HOST_WIDE_INT, HOST_WIDE_INT,
HOST_WIDE_INT, rtx, rtx, int, rtx, int,
cumulative_args_t);
static void precompute_register_parameters (int, struct arg_data *, int *);
+static void store_bounds (struct arg_data *, struct arg_data *);
static int store_one_arg (struct arg_data *, rtx, int, int, int);
static void store_unaligned_arguments_into_pseudos (struct arg_data *, int);
static int finalize_must_preallocate (int, int, struct arg_data *,
@@ -396,6 +409,10 @@ emit_call_1 (rtx funexp, tree fntree ATTRIBUTE_UNUSED, tree fndecl ATTRIBUTE_UNU
&& MEM_EXPR (funmem) != NULL_TREE)
set_mem_expr (XEXP (call, 0), MEM_EXPR (funmem));
+ /* Mark instrumented calls. */
+ if (call && fntree)
+ CALL_EXPR_WITH_BOUNDS_P (call) = CALL_WITH_BOUNDS_P (fntree);
+
/* Put the register usage information there. */
add_function_usage_to (call_insn, call_fusage);
@@ -1141,18 +1158,84 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
/* First fill in the actual arguments in the ARGS array, splitting
complex arguments if necessary. */
{
- int j = i;
+ int j = i, ptr_arg = -1;
call_expr_arg_iterator iter;
tree arg;
+ bitmap slots = NULL;
if (struct_value_addr_value)
{
args[j].tree_value = struct_value_addr_value;
+
j += inc;
+
+ /* If we pass the structure address then we need to
+ create bounds for it. Since the bounds creation is
+ a call statement, we expand it right here to avoid
+ fixing all other places where it may be expanded. */
+ if (CALL_WITH_BOUNDS_P (exp))
+ {
+ args[j].value = gen_reg_rtx (targetm.chkp_bound_mode ());
+ args[j].tree_value
+ = chkp_make_bounds_for_struct_addr (struct_value_addr_value);
+ expand_expr_real (args[j].tree_value, args[j].value, VOIDmode,
+ EXPAND_NORMAL, 0, false);
+ args[j].pointer_arg = j - inc;
+
+ j += inc;
+ }
}
FOR_EACH_CALL_EXPR_ARG (arg, iter, exp)
{
tree argtype = TREE_TYPE (arg);
+
+ /* Remember last param with pointer and associate it
+ with following pointer bounds. */
+ if (CALL_WITH_BOUNDS_P (exp)
+ && chkp_type_has_pointer (argtype))
+ {
+ if (slots)
+ {
+ BITMAP_FREE (slots);
+ slots = NULL;
+ }
+ ptr_arg = j;
+ if (!BOUNDED_TYPE_P (argtype))
+ slots = chkp_find_bound_slots (argtype);
+ }
+ else if (POINTER_BOUNDS_TYPE_P (argtype))
+ {
+ /* We expect bounds in instrumented calls only.
+ Otherwise it is a sign we lost the flag due to some optimization
+ and may emit call args incorrectly. */
+ gcc_assert (CALL_WITH_BOUNDS_P (exp));
+
+ /* For structures look for the next available pointer. */
+ if (ptr_arg != -1 && slots)
+ {
+ unsigned bnd_no = bitmap_first_set_bit (slots);
+ args[j].pointer_offset =
+ bnd_no * POINTER_SIZE / BITS_PER_UNIT;
+
+ bitmap_clear_bit (slots, bnd_no);
+
+ /* Check we have no more pointers in the structure. */
+ if (bitmap_empty_p (slots))
+ {
+ BITMAP_FREE (slots);
+ slots = NULL;
+ }
+ }
+ args[j].pointer_arg = ptr_arg;
+
+ /* Check we covered all pointers in the previous
+ non-bounds arg. */
+ if (!slots)
+ ptr_arg = -1;
+ }
+ else
+ ptr_arg = -1;
+
if (targetm.calls.split_complex_arg
&& argtype
&& TREE_CODE (argtype) == COMPLEX_TYPE
@@ -1167,6 +1250,9 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
args[j].tree_value = arg;
j += inc;
}
+
+ if (slots)
+ BITMAP_FREE (slots);
}
/* I counts args in order (to be) pushed; ARGPOS counts in order written. */
@@ -1302,6 +1388,12 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
args[i].reg = targetm.calls.function_arg (args_so_far, mode, type,
argpos < n_named_args);
+ if (args[i].reg && CONST_INT_P (args[i].reg))
+ {
+ args[i].special_slot = args[i].reg;
+ args[i].reg = NULL;
+ }
+
/* If this is a sibling call and the machine has register windows, the
register window has to be unwinded before calling the routine, so
arguments have to go into the incoming registers. */
@@ -1335,10 +1427,13 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
|| (args[i].pass_on_stack && args[i].reg != 0))
*must_preallocate = 1;
+ /* No stack allocation and padding for bounds. */
+ if (POINTER_BOUNDS_P (args[i].tree_value))
+ ;
/* Compute the stack-size of this argument. */
- if (args[i].reg == 0 || args[i].partial != 0
- || reg_parm_stack_space > 0
- || args[i].pass_on_stack)
+ else if (args[i].reg == 0 || args[i].partial != 0
+ || reg_parm_stack_space > 0
+ || args[i].pass_on_stack)
locate_and_pad_parm (mode, type,
#ifdef STACK_PARMS_IN_REG_PARM_AREA
1,
@@ -1553,6 +1648,12 @@ finalize_must_preallocate (int must_preallocate, int num_actuals,
partial_seen = 1;
else if (partial_seen && args[i].reg == 0)
must_preallocate = 1;
+ /* We preallocate in case there are bounds passed
+ in the bounds table to have precomputed address
+ for bounds association. */
+ else if (POINTER_BOUNDS_P (args[i].tree_value)
+ && !args[i].reg)
+ must_preallocate = 1;
if (TYPE_MODE (TREE_TYPE (args[i].tree_value)) == BLKmode
&& (TREE_CODE (args[i].tree_value) == CALL_EXPR
@@ -1604,6 +1705,10 @@ compute_argument_addresses (struct arg_data *args, rtx argblock, int num_actuals
&& args[i].partial == 0)
continue;
+ /* Pointer Bounds are never passed on the stack. */
+ if (POINTER_BOUNDS_P (args[i].tree_value))
+ continue;
+
if (CONST_INT_P (offset))
addr = plus_constant (Pmode, arg_reg, INTVAL (offset));
else
@@ -2233,6 +2338,8 @@ expand_call (tree exp, rtx target, int ignore)
/* Register in which non-BLKmode value will be returned,
or 0 if no value or if value is BLKmode. */
rtx valreg;
+ /* Register(s) in which bounds are returned. */
+ rtx valbnd = NULL;
/* Address where we should return a BLKmode value;
0 if value not BLKmode. */
rtx structure_value_addr = 0;
@@ -2484,7 +2591,7 @@ expand_call (tree exp, rtx target, int ignore)
structure_value_addr_value =
make_tree (build_pointer_type (TREE_TYPE (funtype)), temp);
- structure_value_addr_parm = 1;
+ structure_value_addr_parm = CALL_WITH_BOUNDS_P (exp) ? 2 : 1;
}
/* Count the arguments and set NUM_ACTUALS. */
@@ -3003,15 +3110,28 @@ expand_call (tree exp, rtx target, int ignore)
/* Figure out the register where the value, if any, will come back. */
valreg = 0;
+ valbnd = 0;
if (TYPE_MODE (rettype) != VOIDmode
&& ! structure_value_addr)
{
if (pcc_struct_value)
- valreg = hard_function_value (build_pointer_type (rettype),
- fndecl, NULL, (pass == 0));
+ {
+ valreg = hard_function_value (build_pointer_type (rettype),
+ fndecl, NULL, (pass == 0));
+ if (CALL_WITH_BOUNDS_P (exp))
+ valbnd = targetm.calls.
+ chkp_function_value_bounds (build_pointer_type (rettype),
+ fndecl, (pass == 0));
+ }
else
- valreg = hard_function_value (rettype, fndecl, fntype,
- (pass == 0));
+ {
+ valreg = hard_function_value (rettype, fndecl, fntype,
+ (pass == 0));
+ if (CALL_WITH_BOUNDS_P (exp))
+ valbnd = targetm.calls.chkp_function_value_bounds (rettype,
+ fndecl,
+ (pass == 0));
+ }
/* If VALREG is a PARALLEL whose first member has a zero
offset, use that. This is for targets such as m68k that
@@ -3052,7 +3172,10 @@ expand_call (tree exp, rtx target, int ignore)
for (i = 0; i < num_actuals; i++)
{
- if (args[i].reg == 0 || args[i].pass_on_stack)
+ /* Delay bounds until all other args are stored. */
+ if (POINTER_BOUNDS_P (args[i].tree_value))
+ continue;
+ else if (args[i].reg == 0 || args[i].pass_on_stack)
{
rtx before_arg = get_last_insn ();
@@ -3105,6 +3228,17 @@ expand_call (tree exp, rtx target, int ignore)
sibcall_failure = 1;
}
+ /* Store all bounds not passed in registers. */
+ for (i = 0; i < num_actuals; i++)
+ {
+ if (POINTER_BOUNDS_P (args[i].tree_value)
+ && !args[i].reg)
+ store_bounds (&args[i],
+ args[i].pointer_arg == -1
+ ? NULL
+ : &args[args[i].pointer_arg]);
+ }
+
/* If we pushed args in forward order, perform stack alignment
after pushing the last arg. */
if (!PUSH_ARGS_REVERSED && argblock == 0)
@@ -3502,6 +3636,9 @@ expand_call (tree exp, rtx target, int ignore)
free (stack_usage_map_buf);
+ /* Join result with returned bounds so caller may use them if needed. */
+ target = chkp_join_splitted_slot (target, valbnd);
+
return target;
}
@@ -4380,6 +4517,68 @@ emit_library_call_value (rtx orgfun, rtx value,
return result;
}
\f
+
+/* Store pointer bounds argument ARG into Bounds Table entry
+ associated with PARM. */
+static void
+store_bounds (struct arg_data *arg, struct arg_data *parm)
+{
+ rtx slot = NULL, ptr = NULL, addr = NULL;
+
+ /* We may pass bounds not associated with any pointer. */
+ if (!parm)
+ {
+ gcc_assert (arg->special_slot);
+ slot = arg->special_slot;
+ ptr = const0_rtx;
+ }
+ /* Find pointer associated with bounds and where it is
+ passed. */
+ else
+ {
+ if (!parm->reg)
+ {
+ gcc_assert (!arg->special_slot);
+
+ addr = adjust_address (parm->stack, Pmode, arg->pointer_offset);
+ }
+ else if (REG_P (parm->reg))
+ {
+ gcc_assert (arg->special_slot);
+ slot = arg->special_slot;
+
+ if (MEM_P (parm->value))
+ addr = adjust_address (parm->value, Pmode, arg->pointer_offset);
+ else if (REG_P (parm->value))
+ ptr = gen_rtx_SUBREG (Pmode, parm->value, arg->pointer_offset);
+ else
+ {
+ gcc_assert (!arg->pointer_offset);
+ ptr = parm->value;
+ }
+ }
+ else
+ {
+ gcc_assert (GET_CODE (parm->reg) == PARALLEL);
+
+ gcc_assert (arg->special_slot);
+ slot = arg->special_slot;
+
+ if (parm->parallel_value)
+ ptr = chkp_get_value_with_offs (parm->parallel_value,
+ GEN_INT (arg->pointer_offset));
+ else
+ gcc_unreachable ();
+ }
+ }
+
+ /* Expand bounds. */
+ if (!arg->value)
+ arg->value = expand_normal (arg->tree_value);
+
+ targetm.calls.store_bounds_for_arg (ptr, addr, arg->value, slot);
+}
+
/* Store a single argument for a function call
into the register or memory area where it must be passed.
*ARG describes the argument value and where to pass it.
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index b7f6360..1c75586 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -73,6 +73,8 @@ along with GCC; see the file COPYING3. If not see
#include "tree-ssa-address.h"
#include "recog.h"
#include "output.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
/* Some systems use __main in a way incompatible with its use in gcc, in these
cases use the macros NAME__MAIN to give a quoted symbol and SYMBOL__MAIN to
@@ -2238,6 +2240,7 @@ expand_call_stmt (gimple stmt)
CALL_FROM_THUNK_P (exp) = gimple_call_from_thunk_p (stmt);
CALL_EXPR_VA_ARG_PACK (exp) = gimple_call_va_arg_pack_p (stmt);
SET_EXPR_LOCATION (exp, gimple_location (stmt));
+ CALL_WITH_BOUNDS_P (exp) = gimple_call_with_bounds_p (stmt);
/* Ensure RTL is created for debug args. */
if (decl && DECL_HAS_DEBUG_ARGS_P (decl))
@@ -3048,11 +3051,12 @@ expand_value_return (rtx val)
from the current function. */
static void
-expand_return (tree retval)
+expand_return (tree retval, tree bounds)
{
rtx result_rtl;
rtx val = 0;
tree retval_rhs;
+ rtx bounds_rtl;
/* If function wants no value, give it none. */
if (TREE_CODE (TREE_TYPE (TREE_TYPE (current_function_decl))) == VOID_TYPE)
@@ -3078,6 +3082,56 @@ expand_return (tree retval)
result_rtl = DECL_RTL (DECL_RESULT (current_function_decl));
+ /* Put returned bounds to the right place. */
+ bounds_rtl = DECL_BOUNDS_RTL (DECL_RESULT (current_function_decl));
+ if (bounds_rtl)
+ {
+ rtx addr, bnd;
+
+ if (bounds)
+ {
+ bnd = expand_normal (bounds);
+ targetm.calls.store_returned_bounds (bounds_rtl, bnd);
+ }
+ else if (REG_P (bounds_rtl))
+ {
+ addr = expand_normal (build_fold_addr_expr (retval_rhs));
+ addr = gen_rtx_MEM (Pmode, addr);
+ bnd = targetm.calls.load_bounds_for_arg (addr, NULL, NULL);
+ targetm.calls.store_returned_bounds (bounds_rtl, bnd);
+ }
+ else
+ {
+ int n;
+
+ gcc_assert (GET_CODE (bounds_rtl) == PARALLEL);
+
+ addr = expand_normal (build_fold_addr_expr (retval_rhs));
+ addr = gen_rtx_MEM (Pmode, addr);
+
+ for (n = 0; n < XVECLEN (bounds_rtl, 0); n++)
+ {
+ rtx offs = XEXP (XVECEXP (bounds_rtl, 0, n), 1);
+ rtx slot = XEXP (XVECEXP (bounds_rtl, 0, n), 0);
+ rtx from = adjust_address (addr, Pmode, INTVAL (offs));
+ rtx bnd = targetm.calls.load_bounds_for_arg (from, NULL, NULL);
+ targetm.calls.store_returned_bounds (slot, bnd);
+ }
+ }
+ }
+ else if (chkp_function_instrumented_p (current_function_decl)
+ && !BOUNDED_P (retval_rhs)
+ && chkp_type_has_pointer (TREE_TYPE (retval_rhs))
+ && TREE_CODE (retval_rhs) != RESULT_DECL)
+ {
+ rtx addr = expand_normal (build_fold_addr_expr (retval_rhs));
+ addr = gen_rtx_MEM (Pmode, addr);
+
+ gcc_assert (MEM_P (result_rtl));
+
+ chkp_copy_bounds_for_stack_parm (result_rtl, addr, TREE_TYPE (retval_rhs));
+ }
+
/* If we are returning the RESULT_DECL, then the value has already
been stored into it, so we don't have to do anything special. */
if (TREE_CODE (retval_rhs) == RESULT_DECL)
@@ -3183,7 +3237,7 @@ expand_gimple_stmt_1 (gimple stmt)
if (!op0)
expand_null_return ();
else
- expand_return (op0);
+ expand_return (op0, gimple_return_retbnd (stmt));
break;
case GIMPLE_ASSIGN:
@@ -5556,6 +5610,9 @@ gimple_expand_cfg (void)
rtl_profile_for_bb (ENTRY_BLOCK_PTR_FOR_FN (cfun));
+ if (chkp_function_instrumented_p (current_function_decl))
+ chkp_reset_rtl_bounds ();
+
insn_locations_init ();
if (!DECL_IS_BUILTIN (current_function_decl))
{
diff --git a/gcc/expr.c b/gcc/expr.c
index 72e4401..40cf67e 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -67,6 +67,8 @@ along with GCC; see the file COPYING3. If not see
#include "params.h"
#include "tree-ssa-address.h"
#include "cfgexpand.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
/* Decide whether a function's arguments should be processed
from first to last or from last to first.
@@ -5008,9 +5010,14 @@ expand_assignment (tree to, tree from, bool nontemporal)
|| TREE_CODE (to) == SSA_NAME))
{
rtx value;
+ rtx bounds;
push_temp_slots ();
value = expand_normal (from);
+
+ /* Split value and bounds to store them separately. */
+ chkp_split_slot (value, &value, &bounds);
+
if (to_rtx == 0)
to_rtx = expand_expr (to, NULL_RTX, VOIDmode, EXPAND_WRITE);
@@ -5044,6 +5051,15 @@ expand_assignment (tree to, tree from, bool nontemporal)
emit_move_insn (to_rtx, value);
}
+
+ /* Store bounds if required. */
+ if (bounds
+ && (BOUNDED_P (to) || chkp_type_has_pointer (TREE_TYPE (to))))
+ {
+ gcc_assert (MEM_P (to_rtx));
+ chkp_emit_bounds_store (bounds, value, to_rtx);
+ }
+
preserve_temp_slots (to_rtx);
pop_temp_slots ();
return;
@@ -5119,7 +5135,7 @@ expand_assignment (tree to, tree from, bool nontemporal)
/* Compute FROM and store the value in the rtx we got. */
push_temp_slots ();
- result = store_expr (from, to_rtx, 0, nontemporal);
+ result = store_expr_with_bounds (from, to_rtx, 0, nontemporal, to);
preserve_temp_slots (result);
pop_temp_slots ();
return;
@@ -5156,10 +5172,14 @@ emit_storent_insn (rtx to, rtx from)
If CALL_PARAM_P is nonzero, this is a store into a call param on the
stack, and block moves may need to be treated specially.
- If NONTEMPORAL is true, try using a nontemporal store instruction. */
+ If NONTEMPORAL is true, try using a nontemporal store instruction.
+
+ If BTARGET is not NULL then computed bounds of EXP are
+ associated with BTARGET. */
rtx
-store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
+store_expr_with_bounds (tree exp, rtx target, int call_param_p,
+ bool nontemporal, tree btarget)
{
rtx temp;
rtx alt_rtl = NULL_RTX;
@@ -5180,8 +5200,8 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
part. */
expand_expr (TREE_OPERAND (exp, 0), const0_rtx, VOIDmode,
call_param_p ? EXPAND_STACK_PARM : EXPAND_NORMAL);
- return store_expr (TREE_OPERAND (exp, 1), target, call_param_p,
- nontemporal);
+ return store_expr_with_bounds (TREE_OPERAND (exp, 1), target,
+ call_param_p, nontemporal, btarget);
}
else if (TREE_CODE (exp) == COND_EXPR && GET_MODE (target) == BLKmode)
{
@@ -5195,13 +5215,13 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
do_pending_stack_adjust ();
NO_DEFER_POP;
jumpifnot (TREE_OPERAND (exp, 0), lab1, -1);
- store_expr (TREE_OPERAND (exp, 1), target, call_param_p,
- nontemporal);
+ store_expr_with_bounds (TREE_OPERAND (exp, 1), target, call_param_p,
+ nontemporal, btarget);
emit_jump_insn (gen_jump (lab2));
emit_barrier ();
emit_label (lab1);
- store_expr (TREE_OPERAND (exp, 2), target, call_param_p,
- nontemporal);
+ store_expr_with_bounds (TREE_OPERAND (exp, 2), target, call_param_p,
+ nontemporal, btarget);
emit_label (lab2);
OK_DEFER_POP;
@@ -5253,6 +5273,19 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
temp = expand_expr (exp, inner_target, VOIDmode,
call_param_p ? EXPAND_STACK_PARM : EXPAND_NORMAL);
+ /* Handle bounds returned by call. */
+ if (TREE_CODE (exp) == CALL_EXPR)
+ {
+ rtx bounds;
+ chkp_split_slot (temp, &temp, &bounds);
+ if (bounds && btarget)
+ {
+ gcc_assert (TREE_CODE (btarget) == SSA_NAME);
+ rtx tmp = targetm.calls.load_returned_bounds (bounds);
+ chkp_set_rtl_bounds (btarget, tmp);
+ }
+ }
+
/* If TEMP is a VOIDmode constant, use convert_modes to make
sure that we properly convert it. */
if (CONSTANT_P (temp) && GET_MODE (temp) == VOIDmode)
@@ -5334,6 +5367,19 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
(call_param_p
? EXPAND_STACK_PARM : EXPAND_NORMAL),
&alt_rtl, false);
+
+ /* Handle bounds returned by call. */
+ if (TREE_CODE (exp) == CALL_EXPR)
+ {
+ rtx bounds;
+ chkp_split_slot (temp, &temp, &bounds);
+ if (bounds && btarget)
+ {
+ gcc_assert (TREE_CODE (btarget) == SSA_NAME);
+ rtx tmp = targetm.calls.load_returned_bounds (bounds);
+ chkp_set_rtl_bounds (btarget, tmp);
+ }
+ }
}
/* If TEMP is a VOIDmode constant and the mode of the type of EXP is not
@@ -5498,6 +5544,13 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
return NULL_RTX;
}
+
+/* Same as store_expr_with_bounds but ignoring bounds of EXP. */
+rtx
+store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
+{
+ return store_expr_with_bounds (exp, target, call_param_p, nontemporal, NULL);
+}
\f
/* Return true if field F of structure TYPE is a flexible array. */
diff --git a/gcc/expr.h b/gcc/expr.h
index 524da67..d06468d 100644
--- a/gcc/expr.h
+++ b/gcc/expr.h
@@ -432,6 +432,7 @@ extern void expand_assignment (tree, tree, bool);
and storing the value into TARGET.
If SUGGEST_REG is nonzero, copy the value through a register
and return that register, if that is possible. */
+extern rtx store_expr_with_bounds (tree, rtx, int, bool, tree);
extern rtx store_expr (tree, rtx, int, bool);
/* Given an rtx that may include add and multiply operations,
diff --git a/gcc/function.c b/gcc/function.c
index a61e475..a08d4ad 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -63,6 +63,8 @@ along with GCC; see the file COPYING3. If not see
#include "df.h"
#include "params.h"
#include "bb-reorder.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
/* So we can assign to cfun in this file. */
#undef cfun
@@ -2082,6 +2084,14 @@ use_register_for_decl (const_tree decl)
if (TREE_ADDRESSABLE (decl))
return false;
+ /* Decl is implicitly addressable by bound stores and loads
+ if it is an aggregate holding bounds. */
+ if (chkp_function_instrumented_p (current_function_decl)
+ && TREE_TYPE (decl)
+ && !BOUNDED_P (decl)
+ && chkp_type_has_pointer (TREE_TYPE (decl)))
+ return false;
+
/* Only register-like things go in registers. */
if (DECL_MODE (decl) == BLKmode)
return false;
@@ -2202,6 +2212,15 @@ struct assign_parm_data_one
BOOL_BITFIELD loaded_in_reg : 1;
};
+struct bounds_parm_data
+{
+ assign_parm_data_one parm_data;
+ tree bounds_parm;
+ tree ptr_parm;
+ rtx ptr_entry;
+ int bound_no;
+};
+
/* A subroutine of assign_parms. Initialize ALL. */
static void
@@ -2312,6 +2331,23 @@ assign_parms_augmented_arg_list (struct assign_parm_data_all *all)
fnargs.safe_insert (0, decl);
all->function_result_decl = decl;
+
+ /* If the function is instrumented then the bounds of the
+ passed structure address are the second argument. */
+ if (chkp_function_instrumented_p (fndecl))
+ {
+ decl = build_decl (DECL_SOURCE_LOCATION (fndecl),
+ PARM_DECL, get_identifier (".result_bnd"),
+ pointer_bounds_type_node);
+ DECL_ARG_TYPE (decl) = pointer_bounds_type_node;
+ DECL_ARTIFICIAL (decl) = 1;
+ DECL_NAMELESS (decl) = 1;
+ TREE_CONSTANT (decl) = 1;
+
+ DECL_CHAIN (decl) = DECL_CHAIN (all->orig_fnargs);
+ DECL_CHAIN (all->orig_fnargs) = decl;
+ fnargs.safe_insert (1, decl);
+ }
}
/* If the target wants to split complex arguments into scalars, do so. */
@@ -2452,7 +2488,7 @@ assign_parm_find_entry_rtl (struct assign_parm_data_all *all,
it came in a register so that REG_PARM_STACK_SPACE isn't skipped.
In this case, we call FUNCTION_ARG with NAMED set to 1 instead of 0
as it was the previous time. */
- in_regs = entry_parm != 0;
+ in_regs = (entry_parm != 0) || POINTER_BOUNDS_TYPE_P (data->passed_type);
#ifdef STACK_PARMS_IN_REG_PARM_AREA
in_regs = true;
#endif
@@ -2541,8 +2577,12 @@ static bool
assign_parm_is_stack_parm (struct assign_parm_data_all *all,
struct assign_parm_data_one *data)
{
+ /* Bounds are never passed on the stack to keep compatibility
+ with non-instrumented code. */
+ if (POINTER_BOUNDS_TYPE_P (data->passed_type))
+ return false;
/* Trivially true if we've no incoming register. */
- if (data->entry_parm == NULL)
+ else if (data->entry_parm == NULL)
;
/* Also true if we're partially in registers and partially not,
since we've arranged to drop the entire argument on the stack. */
@@ -3348,6 +3388,119 @@ assign_parms_unsplit_complex (struct assign_parm_data_all *all,
}
}
+/* Load bounds PARM from bounds table. */
+static void
+assign_parm_load_bounds (struct assign_parm_data_one *data,
+ tree parm,
+ rtx entry,
+ unsigned bound_no)
+{
+ bitmap_iterator bi;
+ unsigned i, offs = 0;
+ int bnd_no = -1;
+ rtx slot = NULL, ptr = NULL;
+
+ if (parm)
+ {
+ bitmap slots = chkp_find_bound_slots (TREE_TYPE (parm));
+ EXECUTE_IF_SET_IN_BITMAP (slots, 0, i, bi)
+ {
+ if (bound_no)
+ bound_no--;
+ else
+ {
+ bnd_no = i;
+ break;
+ }
+ }
+ BITMAP_FREE (slots);
+ }
+
+ /* We may have bounds not associated with any pointer. */
+ if (bnd_no != -1)
+ offs = bnd_no * POINTER_SIZE / BITS_PER_UNIT;
+
+ /* Find associated pointer. */
+ if (bnd_no == -1)
+ {
+ /* If bounds are not associated with any pointer,
+ then they are passed in a register or a special slot. */
+ gcc_assert (data->entry_parm);
+ ptr = const0_rtx;
+ }
+ else if (MEM_P (entry))
+ slot = adjust_address (entry, Pmode, offs);
+ else if (REG_P (entry))
+ ptr = gen_rtx_REG (Pmode, REGNO (entry) + bnd_no);
+ else if (GET_CODE (entry) == PARALLEL)
+ ptr = chkp_get_value_with_offs (entry, GEN_INT (offs));
+ else
+ gcc_unreachable ();
+ data->entry_parm = targetm.calls.load_bounds_for_arg (slot, ptr,
+ data->entry_parm);
+}
+
+/* Assign RTL expressions to the function's bounds parameters BNDARGS. */
+
+static void
+assign_bounds (vec<bounds_parm_data> &bndargs,
+ struct assign_parm_data_all &all)
+{
+ unsigned i, pass, handled = 0;
+ bounds_parm_data *pbdata;
+
+ if (!bndargs.exists ())
+ return;
+
+ /* We make several passes to store input bounds.  First we handle bounds
+ passed in registers.  After that we load bounds passed in special
+ slots.  Finally we load bounds from the Bounds Table. */
+ for (pass = 0; pass < 3; pass++)
+ FOR_EACH_VEC_ELT (bndargs, i, pbdata)
+ {
+ /* Pass 0 => regs only. */
+ if (pass == 0
+ && (!pbdata->parm_data.entry_parm
+ || GET_CODE (pbdata->parm_data.entry_parm) != REG))
+ continue;
+ /* Pass 1 => slots only. */
+ else if (pass == 1
+ && (!pbdata->parm_data.entry_parm
+ || GET_CODE (pbdata->parm_data.entry_parm) == REG))
+ continue;
+ /* Pass 2 => BT only. */
+ else if (pass == 2
+ && pbdata->parm_data.entry_parm)
+ continue;
+
+ if (!pbdata->parm_data.entry_parm
+ || GET_CODE (pbdata->parm_data.entry_parm) != REG)
+ assign_parm_load_bounds (&pbdata->parm_data, pbdata->ptr_parm,
+ pbdata->ptr_entry, pbdata->bound_no);
+
+ set_decl_incoming_rtl (pbdata->bounds_parm,
+ pbdata->parm_data.entry_parm, false);
+
+ if (assign_parm_setup_block_p (&pbdata->parm_data))
+ assign_parm_setup_block (&all, pbdata->bounds_parm,
+ &pbdata->parm_data);
+ else if (pbdata->parm_data.passed_pointer
+ || use_register_for_decl (pbdata->bounds_parm))
+ assign_parm_setup_reg (&all, pbdata->bounds_parm,
+ &pbdata->parm_data);
+ else
+ assign_parm_setup_stack (&all, pbdata->bounds_parm,
+ &pbdata->parm_data);
+
+ /* Count handled bounds to make sure we miss nothing. */
+ handled++;
+ }
+
+ gcc_assert (handled == bndargs.length ());
+
+ bndargs.release ();
+}
+
/* Assign RTL expressions to the function's parameters. This may involve
copying them into registers and using those registers as the DECL_RTL. */
@@ -3357,7 +3510,11 @@ assign_parms (tree fndecl)
struct assign_parm_data_all all;
tree parm;
vec<tree> fnargs;
- unsigned i;
+ unsigned i, bound_no = 0;
+ tree last_arg = NULL;
+ rtx last_arg_entry = NULL;
+ vec<bounds_parm_data> bndargs = vNULL;
+ bounds_parm_data bdata;
crtl->args.internal_arg_pointer
= targetm.calls.internal_arg_pointer ();
@@ -3399,9 +3556,6 @@ assign_parms (tree fndecl)
}
}
- if (cfun->stdarg && !DECL_CHAIN (parm))
- assign_parms_setup_varargs (&all, &data, false);
-
/* Find out where the parameter arrives in this function. */
assign_parm_find_entry_rtl (&all, &data);
@@ -3411,7 +3565,15 @@ assign_parms (tree fndecl)
assign_parm_find_stack_rtl (parm, &data);
assign_parm_adjust_entry_rtl (&data);
}
-
+ if (!POINTER_BOUNDS_TYPE_P (data.passed_type))
+ {
+ /* Remember where the last non-bounds arg was passed in case
+ we have to load associated bounds for it from the Bounds
+ Table. */
+ last_arg = parm;
+ last_arg_entry = data.entry_parm;
+ bound_no = 0;
+ }
/* Record permanently how this parm was passed. */
if (data.passed_pointer)
{
@@ -3423,20 +3585,63 @@ assign_parms (tree fndecl)
else
set_decl_incoming_rtl (parm, data.entry_parm, false);
+ /* Bounds should be loaded in a particular order to
+ have registers allocated correctly.  Collect info about
+ input bounds and load them later. */
+ if (POINTER_BOUNDS_TYPE_P (data.passed_type))
+ {
+ /* Expect bounds in instrumented functions only. */
+ gcc_assert (chkp_function_instrumented_p (fndecl));
+
+ bdata.parm_data = data;
+ bdata.bounds_parm = parm;
+ bdata.ptr_parm = last_arg;
+ bdata.ptr_entry = last_arg_entry;
+ bdata.bound_no = bound_no;
+ bndargs.safe_push (bdata);
+ }
+ else
+ {
+ assign_parm_adjust_stack_rtl (&data);
+
+ if (assign_parm_setup_block_p (&data))
+ assign_parm_setup_block (&all, parm, &data);
+ else if (data.passed_pointer || use_register_for_decl (parm))
+ assign_parm_setup_reg (&all, parm, &data);
+ else
+ assign_parm_setup_stack (&all, parm, &data);
+ }
+
+ if (cfun->stdarg && !DECL_CHAIN (parm))
+ {
+ int pretend_bytes = 0;
+
+ assign_parms_setup_varargs (&all, &data, false);
+
+ if (chkp_function_instrumented_p (fndecl))
+ {
+ /* We expect this is the last parm. Otherwise it is wrong
+ to assign bounds right now. */
+ gcc_assert (i == (fnargs.length () - 1));
+ assign_bounds (bndargs, all);
+ targetm.calls.setup_incoming_vararg_bounds (all.args_so_far,
+ data.promoted_mode,
+ data.passed_type,
+ &pretend_bytes,
+ false);
+ }
+ }
+
/* Update info on where next arg arrives in registers. */
targetm.calls.function_arg_advance (all.args_so_far, data.promoted_mode,
data.passed_type, data.named_arg);
- assign_parm_adjust_stack_rtl (&data);
-
- if (assign_parm_setup_block_p (&data))
- assign_parm_setup_block (&all, parm, &data);
- else if (data.passed_pointer || use_register_for_decl (parm))
- assign_parm_setup_reg (&all, parm, &data);
- else
- assign_parm_setup_stack (&all, parm, &data);
+ if (POINTER_BOUNDS_TYPE_P (data.passed_type))
+ bound_no++;
}
+ assign_bounds (bndargs, all);
+
if (targetm.calls.split_complex_arg)
assign_parms_unsplit_complex (&all, fnargs);
@@ -3557,6 +3762,10 @@ assign_parms (tree fndecl)
real_decl_rtl = targetm.calls.function_value (TREE_TYPE (decl_result),
fndecl, true);
+ if (chkp_function_instrumented_p (fndecl))
+ crtl->return_bnd
+ = targetm.calls.chkp_function_value_bounds (TREE_TYPE (decl_result),
+ fndecl, true);
REG_FUNCTION_VALUE_P (real_decl_rtl) = 1;
/* The delay slot scheduler assumes that crtl->return_rtx
holds the hard register containing the return value, not a
@@ -4778,6 +4987,14 @@ expand_function_start (tree subr)
/* Set DECL_REGISTER flag so that expand_function_end will copy the
result to the real return register(s). */
DECL_REGISTER (DECL_RESULT (subr)) = 1;
+
+ if (chkp_function_instrumented_p (current_function_decl))
+ {
+ tree return_type = TREE_TYPE (DECL_RESULT (subr));
+ rtx bounds = targetm.calls.chkp_function_value_bounds (return_type,
+ subr, 1);
+ SET_DECL_BOUNDS_RTL (DECL_RESULT (subr), bounds);
+ }
}
/* Initialize rtx for parameters and local variables.
@@ -4867,14 +5084,11 @@ expand_dummy_function_end (void)
in_dummy_function = false;
}
-/* Call DOIT for each hard register used as a return value from
- the current function. */
+/* Helper for diddle_return_value. */
void
-diddle_return_value (void (*doit) (rtx, void *), void *arg)
+diddle_return_value_1 (void (*doit) (rtx, void *), void *arg, rtx outgoing)
{
- rtx outgoing = crtl->return_rtx;
-
if (! outgoing)
return;
@@ -4894,6 +5108,16 @@ diddle_return_value (void (*doit) (rtx, void *), void *arg)
}
}
+/* Call DOIT for each hard register used as a return value from
+ the current function. */
+
+void
+diddle_return_value (void (*doit) (rtx, void *), void *arg)
+{
+ diddle_return_value_1 (doit, arg, crtl->return_rtx);
+ diddle_return_value_1 (doit, arg, crtl->return_bnd);
+}
+
static void
do_clobber_return_reg (rtx reg, void *arg ATTRIBUTE_UNUSED)
{
diff --git a/gcc/function.h b/gcc/function.h
index 38a0fc4..736bb02 100644
--- a/gcc/function.h
+++ b/gcc/function.h
@@ -252,6 +252,9 @@ struct GTY(()) rtl_data {
result in a register, current_function_return_rtx will always be
the hard register containing the result. */
rtx return_rtx;
+ /* If nonzero, an RTL expression for the location at which the current
+ function returns bounds for its result. */
+ rtx return_bnd;
/* Vector of initial-value pairs. Each pair consists of a pseudo
register of approprite mode that stores the initial value a hard
* Re: [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand
2014-06-05 14:46 ` Ilya Enkovich
@ 2014-09-15 7:20 ` Ilya Enkovich
2014-09-23 20:58 ` Jeff Law
1 sibling, 0 replies; 11+ messages in thread
From: Ilya Enkovich @ 2014-09-15 7:20 UTC (permalink / raw)
To: Michael Matz; +Cc: gcc-patches
Ping
2014-06-05 18:46 GMT+04:00 Ilya Enkovich <enkovich.gnu@gmail.com>:
> On 04 Jun 16:36, Michael Matz wrote:
>> Hi,
>>
>> On Mon, 2 Jun 2014, Ilya Enkovich wrote:
>>
>> > > There is exactly one place (except for the self-recursive ones) where
>> > > you call the new store_expr with a non-null argument for bounds
>> > > target, and it seems to be only necessary for when some sub-expression
>> > > of the RHS is a call. Can you somehow arrange to move that handling
>> > > to the single place in expand_assignment() so that you don't need to
>> > > change the signature of store_expr?
>> >
>> > I see the only nice way to do it - store_expr should return bounds of
>> > expanded exp. Currently it always return NULL_RTX. Does it look better
>> > than a new argument?
>>
>> IMHO it does. That or introducing a new store_expr_with_bounds (with the
>> new argument) and letting store_expr be a wrapper for that, passing the
>> NULL. Basically anything that avoids adding a new parameter for most of
>> the existing calls to store_expr.
>>
>>
>> Ciao,
>> Michael.
>
> Here is an updated version using store_expr_with_bounds and store_expr as a wrapper for it.
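> In outline (the full hunk is in the expr.c part below), the old entry
> point keeps its signature and simply forwards with a NULL bounds target,
> so no other store_expr caller has to change:
>
>   rtx
>   store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
>   {
>     /* No bounds target: behaves exactly as before.  */
>     return store_expr_with_bounds (exp, target, call_param_p,
>                                    nontemporal, NULL);
>   }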
>
> Bootstrapped and tested on linux-x86_64.
>
> Thanks,
> Ilya
> --
> gcc/
>
> 2014-06-05 Ilya Enkovich <ilya.enkovich@intel.com>
>
> * calls.c: Include tree-chkp.h, rtl-chkp.h, bitmap.h.
> (arg_data): Add fields special_slot, pointer_arg and
> pointer_offset.
> (store_bounds): New.
> (emit_call_1): Propagate instrumentation flag for CALL.
> (initialize_argument_information): Compute pointer_arg,
> pointer_offset and special_slot for pointer bounds arguments.
> (finalize_must_preallocate): Preallocate when storing bounds
> in bounds table.
> (compute_argument_addresses): Skip pointer bounds.
> (expand_call): Store bounds into tables separately. Return
> result joined with resulting bounds.
> * cfgexpand.c: Include tree-chkp.h, rtl-chkp.h.
> (expand_call_stmt): Propagate bounds flag for CALL_EXPR.
> (expand_return): Add returned bounds arg. Handle returned bounds.
> (expand_gimple_stmt_1): Adjust to new expand_return signature.
> (gimple_expand_cfg): Reset rtx bounds map.
> * expr.c: Include tree-chkp.h, rtl-chkp.h.
> (expand_assignment): Handle returned bounds.
> (store_expr_with_bounds): New. Replaces store_expr with new bounds
> target argument. Handle bounds returned by calls.
> (store_expr): Now wraps store_expr_with_bounds.
> * expr.h (store_expr_with_bounds): New.
> * function.c: Include tree-chkp.h, rtl-chkp.h.
> (bounds_parm_data): New.
> (use_register_for_decl): Do not registerize decls used for bounds
> stores and loads.
> (assign_parms_augmented_arg_list): Add bounds of the result
> structure pointer as the second argument.
> (assign_parm_find_entry_rtl): Mark bounds are never passed on
> the stack.
> (assign_parm_is_stack_parm): Likewise.
> (assign_parm_load_bounds): New.
> (assign_bounds): New.
> (assign_parms): Load bounds and determine a location for
> returned bounds.
> (diddle_return_value_1): New.
> (diddle_return_value): Handle returned bounds.
> * function.h (rtl_data): Add field for returned bounds.
>
>
> diff --git a/gcc/calls.c b/gcc/calls.c
> index e1dc8eb..5fbbe9f 100644
> --- a/gcc/calls.c
> +++ b/gcc/calls.c
> @@ -44,11 +44,14 @@ along with GCC; see the file COPYING3. If not see
> #include "tm_p.h"
> #include "timevar.h"
> #include "sbitmap.h"
> +#include "bitmap.h"
> #include "langhooks.h"
> #include "target.h"
> #include "cgraph.h"
> #include "except.h"
> #include "dbgcnt.h"
> +#include "tree-chkp.h"
> +#include "rtl-chkp.h"
>
> /* Like PREFERRED_STACK_BOUNDARY but in units of bytes, not bits. */
> #define STACK_BYTES (PREFERRED_STACK_BOUNDARY / BITS_PER_UNIT)
> @@ -76,6 +79,15 @@ struct arg_data
> /* If REG is a PARALLEL, this is a copy of VALUE pulled into the correct
> form for emit_group_move. */
> rtx parallel_value;
> + /* If value is passed in neither reg nor stack, this field holds a number
> + of a special slot to be used. */
> + rtx special_slot;
> + /* For pointer bounds hold an index of parm bounds are bound to. -1 if
> + there is no such pointer. */
> + int pointer_arg;
> + /* If pointer_arg refers a structure, then pointer_offset holds an offset
> + of a pointer in this structure. */
> + int pointer_offset;
> /* If REG was promoted from the actual mode of the argument expression,
> indicates whether the promotion is sign- or zero-extended. */
> int unsignedp;
> @@ -133,6 +145,7 @@ static void emit_call_1 (rtx, tree, tree, tree, HOST_WIDE_INT, HOST_WIDE_INT,
> HOST_WIDE_INT, rtx, rtx, int, rtx, int,
> cumulative_args_t);
> static void precompute_register_parameters (int, struct arg_data *, int *);
> +static void store_bounds (struct arg_data *, struct arg_data *);
> static int store_one_arg (struct arg_data *, rtx, int, int, int);
> static void store_unaligned_arguments_into_pseudos (struct arg_data *, int);
> static int finalize_must_preallocate (int, int, struct arg_data *,
> @@ -396,6 +409,10 @@ emit_call_1 (rtx funexp, tree fntree ATTRIBUTE_UNUSED, tree fndecl ATTRIBUTE_UNU
> && MEM_EXPR (funmem) != NULL_TREE)
> set_mem_expr (XEXP (call, 0), MEM_EXPR (funmem));
>
> + /* Mark instrumented calls. */
> + if (call && fntree)
> + CALL_EXPR_WITH_BOUNDS_P (call) = CALL_WITH_BOUNDS_P (fntree);
> +
> /* Put the register usage information there. */
> add_function_usage_to (call_insn, call_fusage);
>
> @@ -1141,18 +1158,84 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
> /* First fill in the actual arguments in the ARGS array, splitting
> complex arguments if necessary. */
> {
> - int j = i;
> + int j = i, ptr_arg = -1;
> call_expr_arg_iterator iter;
> tree arg;
> + bitmap slots = NULL;
>
> if (struct_value_addr_value)
> {
> args[j].tree_value = struct_value_addr_value;
> +
> j += inc;
> +
> + /* If we pass a structure address then we need to
> + create bounds for it. Since bounds creation is
> + a call statement, we expand it right here to avoid
> + fixing all other places where it might be expanded. */
> + if (CALL_WITH_BOUNDS_P (exp))
> + {
> + args[j].value = gen_reg_rtx (targetm.chkp_bound_mode ());
> + args[j].tree_value
> + = chkp_make_bounds_for_struct_addr (struct_value_addr_value);
> + expand_expr_real (args[j].tree_value, args[j].value, VOIDmode,
> + EXPAND_NORMAL, 0, false);
> + args[j].pointer_arg = j - inc;
> +
> + j += inc;
> + }
> }
> FOR_EACH_CALL_EXPR_ARG (arg, iter, exp)
> {
> tree argtype = TREE_TYPE (arg);
> +
> + /* Remember last param with pointer and associate it
> + with following pointer bounds. */
> + if (CALL_WITH_BOUNDS_P (exp)
> + && chkp_type_has_pointer (argtype))
> + {
> + if (slots)
> + {
> + BITMAP_FREE (slots);
> + slots = NULL;
> + }
> + ptr_arg = j;
> + if (!BOUNDED_TYPE_P (argtype))
> + slots = chkp_find_bound_slots (argtype);
> + }
> + else if (POINTER_BOUNDS_TYPE_P (argtype))
> + {
> + /* We expect bounds in instrumented calls only.
> + Otherwise it is a sign we lost flag due to some optimization
> + and may emit call args incorrectly. */
> + gcc_assert (CALL_WITH_BOUNDS_P (exp));
> +
> + /* For structures look for the next available pointer. */
> + if (ptr_arg != -1 && slots)
> + {
> + unsigned bnd_no = bitmap_first_set_bit (slots);
> + args[j].pointer_offset =
> + bnd_no * POINTER_SIZE / BITS_PER_UNIT;
> +
> + bitmap_clear_bit (slots, bnd_no);
> +
> + /* Check we have no more pointers in the structure. */
> + if (bitmap_empty_p (slots))
> + {
> + BITMAP_FREE (slots);
> + slots = NULL;
> + }
> + }
> + args[j].pointer_arg = ptr_arg;
> +
> + /* Check we covered all pointers in the previous
> + non bounds arg. */
> + if (!slots)
> + ptr_arg = -1;
> + }
> + else
> + ptr_arg = -1;
> +
> if (targetm.calls.split_complex_arg
> && argtype
> && TREE_CODE (argtype) == COMPLEX_TYPE
> @@ -1167,6 +1250,9 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
> args[j].tree_value = arg;
> j += inc;
> }
> +
> + if (slots)
> + BITMAP_FREE (slots);
> }
>
> /* I counts args in order (to be) pushed; ARGPOS counts in order written. */
> @@ -1302,6 +1388,12 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
> args[i].reg = targetm.calls.function_arg (args_so_far, mode, type,
> argpos < n_named_args);
>
> + if (args[i].reg && CONST_INT_P (args[i].reg))
> + {
> + args[i].special_slot = args[i].reg;
> + args[i].reg = NULL;
> + }
> +
> /* If this is a sibling call and the machine has register windows, the
> register window has to be unwinded before calling the routine, so
> arguments have to go into the incoming registers. */
> @@ -1335,10 +1427,13 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
> || (args[i].pass_on_stack && args[i].reg != 0))
> *must_preallocate = 1;
>
> + /* No stack allocation and padding for bounds. */
> + if (POINTER_BOUNDS_P (args[i].tree_value))
> + ;
> /* Compute the stack-size of this argument. */
> - if (args[i].reg == 0 || args[i].partial != 0
> - || reg_parm_stack_space > 0
> - || args[i].pass_on_stack)
> + else if (args[i].reg == 0 || args[i].partial != 0
> + || reg_parm_stack_space > 0
> + || args[i].pass_on_stack)
> locate_and_pad_parm (mode, type,
> #ifdef STACK_PARMS_IN_REG_PARM_AREA
> 1,
> @@ -1553,6 +1648,12 @@ finalize_must_preallocate (int must_preallocate, int num_actuals,
> partial_seen = 1;
> else if (partial_seen && args[i].reg == 0)
> must_preallocate = 1;
> + /* We preallocate in case there are bounds passed
> + in the bounds table to have precomputed address
> + for bounds association. */
> + else if (POINTER_BOUNDS_P (args[i].tree_value)
> + && !args[i].reg)
> + must_preallocate = 1;
>
> if (TYPE_MODE (TREE_TYPE (args[i].tree_value)) == BLKmode
> && (TREE_CODE (args[i].tree_value) == CALL_EXPR
> @@ -1604,6 +1705,10 @@ compute_argument_addresses (struct arg_data *args, rtx argblock, int num_actuals
> && args[i].partial == 0)
> continue;
>
> + /* Pointer Bounds are never passed on the stack. */
> + if (POINTER_BOUNDS_P (args[i].tree_value))
> + continue;
> +
> if (CONST_INT_P (offset))
> addr = plus_constant (Pmode, arg_reg, INTVAL (offset));
> else
> @@ -2233,6 +2338,8 @@ expand_call (tree exp, rtx target, int ignore)
> /* Register in which non-BLKmode value will be returned,
> or 0 if no value or if value is BLKmode. */
> rtx valreg;
> + /* Register(s) in which bounds are returned. */
> + rtx valbnd = NULL;
> /* Address where we should return a BLKmode value;
> 0 if value not BLKmode. */
> rtx structure_value_addr = 0;
> @@ -2484,7 +2591,7 @@ expand_call (tree exp, rtx target, int ignore)
>
> structure_value_addr_value =
> make_tree (build_pointer_type (TREE_TYPE (funtype)), temp);
> - structure_value_addr_parm = 1;
> + structure_value_addr_parm = CALL_WITH_BOUNDS_P (exp) ? 2 : 1;
> }
>
> /* Count the arguments and set NUM_ACTUALS. */
> @@ -3003,15 +3110,28 @@ expand_call (tree exp, rtx target, int ignore)
>
> /* Figure out the register where the value, if any, will come back. */
> valreg = 0;
> + valbnd = 0;
> if (TYPE_MODE (rettype) != VOIDmode
> && ! structure_value_addr)
> {
> if (pcc_struct_value)
> - valreg = hard_function_value (build_pointer_type (rettype),
> - fndecl, NULL, (pass == 0));
> + {
> + valreg = hard_function_value (build_pointer_type (rettype),
> + fndecl, NULL, (pass == 0));
> + if (CALL_WITH_BOUNDS_P (exp))
> + valbnd = targetm.calls.
> + chkp_function_value_bounds (build_pointer_type (rettype),
> + fndecl, (pass == 0));
> + }
> else
> - valreg = hard_function_value (rettype, fndecl, fntype,
> - (pass == 0));
> + {
> + valreg = hard_function_value (rettype, fndecl, fntype,
> + (pass == 0));
> + if (CALL_WITH_BOUNDS_P (exp))
> + valbnd = targetm.calls.chkp_function_value_bounds (rettype,
> + fndecl,
> + (pass == 0));
> + }
>
> /* If VALREG is a PARALLEL whose first member has a zero
> offset, use that. This is for targets such as m68k that
> @@ -3052,7 +3172,10 @@ expand_call (tree exp, rtx target, int ignore)
>
> for (i = 0; i < num_actuals; i++)
> {
> - if (args[i].reg == 0 || args[i].pass_on_stack)
> + /* Delay bounds until all other args are stored. */
> + if (POINTER_BOUNDS_P (args[i].tree_value))
> + continue;
> + else if (args[i].reg == 0 || args[i].pass_on_stack)
> {
> rtx before_arg = get_last_insn ();
>
> @@ -3105,6 +3228,17 @@ expand_call (tree exp, rtx target, int ignore)
> sibcall_failure = 1;
> }
>
> + /* Store all bounds not passed in registers. */
> + for (i = 0; i < num_actuals; i++)
> + {
> + if (POINTER_BOUNDS_P (args[i].tree_value)
> + && !args[i].reg)
> + store_bounds (&args[i],
> + args[i].pointer_arg == -1
> + ? NULL
> + : &args[args[i].pointer_arg]);
> + }
> +
> /* If we pushed args in forward order, perform stack alignment
> after pushing the last arg. */
> if (!PUSH_ARGS_REVERSED && argblock == 0)
> @@ -3502,6 +3636,9 @@ expand_call (tree exp, rtx target, int ignore)
>
> free (stack_usage_map_buf);
>
> + /* Join result with returned bounds so caller may use them if needed. */
> + target = chkp_join_splitted_slot (target, valbnd);
> +
> return target;
> }
>
> @@ -4380,6 +4517,68 @@ emit_library_call_value (rtx orgfun, rtx value,
> return result;
> }
>
> +
> +/* Store pointer bounds argument ARG into Bounds Table entry
> + associated with PARM. */
> +static void
> +store_bounds (struct arg_data *arg, struct arg_data *parm)
> +{
> + rtx slot = NULL, ptr = NULL, addr = NULL;
> +
> + /* We may pass bounds not associated with any pointer. */
> + if (!parm)
> + {
> + gcc_assert (arg->special_slot);
> + slot = arg->special_slot;
> + ptr = const0_rtx;
> + }
> + /* Find pointer associated with bounds and where it is
> + passed. */
> + else
> + {
> + if (!parm->reg)
> + {
> + gcc_assert (!arg->special_slot);
> +
> + addr = adjust_address (parm->stack, Pmode, arg->pointer_offset);
> + }
> + else if (REG_P (parm->reg))
> + {
> + gcc_assert (arg->special_slot);
> + slot = arg->special_slot;
> +
> + if (MEM_P (parm->value))
> + addr = adjust_address (parm->value, Pmode, arg->pointer_offset);
> + else if (REG_P (parm->value))
> + ptr = gen_rtx_SUBREG (Pmode, parm->value, arg->pointer_offset);
> + else
> + {
> + gcc_assert (!arg->pointer_offset);
> + ptr = parm->value;
> + }
> + }
> + else
> + {
> + gcc_assert (GET_CODE (parm->reg) == PARALLEL);
> +
> + gcc_assert (arg->special_slot);
> + slot = arg->special_slot;
> +
> + if (parm->parallel_value)
> + ptr = chkp_get_value_with_offs (parm->parallel_value,
> + GEN_INT (arg->pointer_offset));
> + else
> + gcc_unreachable ();
> + }
> + }
> +
> + /* Expand bounds. */
> + if (!arg->value)
> + arg->value = expand_normal (arg->tree_value);
> +
> + targetm.calls.store_bounds_for_arg (ptr, addr, arg->value, slot);
> +}
> +
> /* Store a single argument for a function call
> into the register or memory area where it must be passed.
> *ARG describes the argument value and where to pass it.
> diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
> index b7f6360..1c75586 100644
> --- a/gcc/cfgexpand.c
> +++ b/gcc/cfgexpand.c
> @@ -73,6 +73,8 @@ along with GCC; see the file COPYING3. If not see
> #include "tree-ssa-address.h"
> #include "recog.h"
> #include "output.h"
> +#include "tree-chkp.h"
> +#include "rtl-chkp.h"
>
> /* Some systems use __main in a way incompatible with its use in gcc, in these
> cases use the macros NAME__MAIN to give a quoted symbol and SYMBOL__MAIN to
> @@ -2238,6 +2240,7 @@ expand_call_stmt (gimple stmt)
> CALL_FROM_THUNK_P (exp) = gimple_call_from_thunk_p (stmt);
> CALL_EXPR_VA_ARG_PACK (exp) = gimple_call_va_arg_pack_p (stmt);
> SET_EXPR_LOCATION (exp, gimple_location (stmt));
> + CALL_WITH_BOUNDS_P (exp) = gimple_call_with_bounds_p (stmt);
>
> /* Ensure RTL is created for debug args. */
> if (decl && DECL_HAS_DEBUG_ARGS_P (decl))
> @@ -3048,11 +3051,12 @@ expand_value_return (rtx val)
> from the current function. */
>
> static void
> -expand_return (tree retval)
> +expand_return (tree retval, tree bounds)
> {
> rtx result_rtl;
> rtx val = 0;
> tree retval_rhs;
> + rtx bounds_rtl;
>
> /* If function wants no value, give it none. */
> if (TREE_CODE (TREE_TYPE (TREE_TYPE (current_function_decl))) == VOID_TYPE)
> @@ -3078,6 +3082,56 @@ expand_return (tree retval)
>
> result_rtl = DECL_RTL (DECL_RESULT (current_function_decl));
>
> + /* Put returned bounds in the right place. */
> + bounds_rtl = DECL_BOUNDS_RTL (DECL_RESULT (current_function_decl));
> + if (bounds_rtl)
> + {
> + rtx addr, bnd;
> +
> + if (bounds)
> + {
> + bnd = expand_normal (bounds);
> + targetm.calls.store_returned_bounds (bounds_rtl, bnd);
> + }
> + else if (REG_P (bounds_rtl))
> + {
> + addr = expand_normal (build_fold_addr_expr (retval_rhs));
> + addr = gen_rtx_MEM (Pmode, addr);
> + bnd = targetm.calls.load_bounds_for_arg (addr, NULL, NULL);
> + targetm.calls.store_returned_bounds (bounds_rtl, bnd);
> + }
> + else
> + {
> + int n;
> +
> + gcc_assert (GET_CODE (bounds_rtl) == PARALLEL);
> +
> + addr = expand_normal (build_fold_addr_expr (retval_rhs));
> + addr = gen_rtx_MEM (Pmode, addr);
> +
> + for (n = 0; n < XVECLEN (bounds_rtl, 0); n++)
> + {
> + rtx offs = XEXP (XVECEXP (bounds_rtl, 0, n), 1);
> + rtx slot = XEXP (XVECEXP (bounds_rtl, 0, n), 0);
> + rtx from = adjust_address (addr, Pmode, INTVAL (offs));
> + rtx bnd = targetm.calls.load_bounds_for_arg (from, NULL, NULL);
> + targetm.calls.store_returned_bounds (slot, bnd);
> + }
> + }
> + }
> + else if (chkp_function_instrumented_p (current_function_decl)
> + && !BOUNDED_P (retval_rhs)
> + && chkp_type_has_pointer (TREE_TYPE (retval_rhs))
> + && TREE_CODE (retval_rhs) != RESULT_DECL)
> + {
> + rtx addr = expand_normal (build_fold_addr_expr (retval_rhs));
> + addr = gen_rtx_MEM (Pmode, addr);
> +
> + gcc_assert (MEM_P (result_rtl));
> +
> + chkp_copy_bounds_for_stack_parm (result_rtl, addr, TREE_TYPE (retval_rhs));
> + }
> +
> /* If we are returning the RESULT_DECL, then the value has already
> been stored into it, so we don't have to do anything special. */
> if (TREE_CODE (retval_rhs) == RESULT_DECL)
> @@ -3183,7 +3237,7 @@ expand_gimple_stmt_1 (gimple stmt)
> if (!op0)
> expand_null_return ();
> else
> - expand_return (op0);
> + expand_return (op0, gimple_return_retbnd (stmt));
> break;
>
> case GIMPLE_ASSIGN:
> @@ -5556,6 +5610,9 @@ gimple_expand_cfg (void)
>
> rtl_profile_for_bb (ENTRY_BLOCK_PTR_FOR_FN (cfun));
>
> + if (chkp_function_instrumented_p (current_function_decl))
> + chkp_reset_rtl_bounds ();
> +
> insn_locations_init ();
> if (!DECL_IS_BUILTIN (current_function_decl))
> {
> diff --git a/gcc/expr.c b/gcc/expr.c
> index 72e4401..40cf67e 100644
> --- a/gcc/expr.c
> +++ b/gcc/expr.c
> @@ -67,6 +67,8 @@ along with GCC; see the file COPYING3. If not see
> #include "params.h"
> #include "tree-ssa-address.h"
> #include "cfgexpand.h"
> +#include "tree-chkp.h"
> +#include "rtl-chkp.h"
>
> /* Decide whether a function's arguments should be processed
> from first to last or from last to first.
> @@ -5008,9 +5010,14 @@ expand_assignment (tree to, tree from, bool nontemporal)
> || TREE_CODE (to) == SSA_NAME))
> {
> rtx value;
> + rtx bounds;
>
> push_temp_slots ();
> value = expand_normal (from);
> +
> + /* Split value and bounds to store them separately. */
> + chkp_split_slot (value, &value, &bounds);
> +
> if (to_rtx == 0)
> to_rtx = expand_expr (to, NULL_RTX, VOIDmode, EXPAND_WRITE);
>
> @@ -5044,6 +5051,15 @@ expand_assignment (tree to, tree from, bool nontemporal)
>
> emit_move_insn (to_rtx, value);
> }
> +
> + /* Store bounds if required. */
> + if (bounds
> + && (BOUNDED_P (to) || chkp_type_has_pointer (TREE_TYPE (to))))
> + {
> + gcc_assert (MEM_P (to_rtx));
> + chkp_emit_bounds_store (bounds, value, to_rtx);
> + }
> +
> preserve_temp_slots (to_rtx);
> pop_temp_slots ();
> return;
> @@ -5119,7 +5135,7 @@ expand_assignment (tree to, tree from, bool nontemporal)
> /* Compute FROM and store the value in the rtx we got. */
>
> push_temp_slots ();
> - result = store_expr (from, to_rtx, 0, nontemporal);
> + result = store_expr_with_bounds (from, to_rtx, 0, nontemporal, to);
> preserve_temp_slots (result);
> pop_temp_slots ();
> return;
> @@ -5156,10 +5172,14 @@ emit_storent_insn (rtx to, rtx from)
> If CALL_PARAM_P is nonzero, this is a store into a call param on the
> stack, and block moves may need to be treated specially.
>
> - If NONTEMPORAL is true, try using a nontemporal store instruction. */
> + If NONTEMPORAL is true, try using a nontemporal store instruction.
> +
> + If BTARGET is not NULL then computed bounds of EXP are
> + associated with BTARGET. */
>
> rtx
> -store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
> +store_expr_with_bounds (tree exp, rtx target, int call_param_p,
> + bool nontemporal, tree btarget)
> {
> rtx temp;
> rtx alt_rtl = NULL_RTX;
> @@ -5180,8 +5200,8 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
> part. */
> expand_expr (TREE_OPERAND (exp, 0), const0_rtx, VOIDmode,
> call_param_p ? EXPAND_STACK_PARM : EXPAND_NORMAL);
> - return store_expr (TREE_OPERAND (exp, 1), target, call_param_p,
> - nontemporal);
> + return store_expr_with_bounds (TREE_OPERAND (exp, 1), target,
> + call_param_p, nontemporal, btarget);
> }
> else if (TREE_CODE (exp) == COND_EXPR && GET_MODE (target) == BLKmode)
> {
> @@ -5195,13 +5215,13 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
> do_pending_stack_adjust ();
> NO_DEFER_POP;
> jumpifnot (TREE_OPERAND (exp, 0), lab1, -1);
> - store_expr (TREE_OPERAND (exp, 1), target, call_param_p,
> - nontemporal);
> + store_expr_with_bounds (TREE_OPERAND (exp, 1), target, call_param_p,
> + nontemporal, btarget);
> emit_jump_insn (gen_jump (lab2));
> emit_barrier ();
> emit_label (lab1);
> - store_expr (TREE_OPERAND (exp, 2), target, call_param_p,
> - nontemporal);
> + store_expr_with_bounds (TREE_OPERAND (exp, 2), target, call_param_p,
> + nontemporal, btarget);
> emit_label (lab2);
> OK_DEFER_POP;
>
> @@ -5253,6 +5273,19 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
> temp = expand_expr (exp, inner_target, VOIDmode,
> call_param_p ? EXPAND_STACK_PARM : EXPAND_NORMAL);
>
> + /* Handle bounds returned by call. */
> + if (TREE_CODE (exp) == CALL_EXPR)
> + {
> + rtx bounds;
> + chkp_split_slot (temp, &temp, &bounds);
> + if (bounds && btarget)
> + {
> + gcc_assert (TREE_CODE (btarget) == SSA_NAME);
> + rtx tmp = targetm.calls.load_returned_bounds (bounds);
> + chkp_set_rtl_bounds (btarget, tmp);
> + }
> + }
> +
> /* If TEMP is a VOIDmode constant, use convert_modes to make
> sure that we properly convert it. */
> if (CONSTANT_P (temp) && GET_MODE (temp) == VOIDmode)
> @@ -5334,6 +5367,19 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
> (call_param_p
> ? EXPAND_STACK_PARM : EXPAND_NORMAL),
> &alt_rtl, false);
> +
> + /* Handle bounds returned by call. */
> + if (TREE_CODE (exp) == CALL_EXPR)
> + {
> + rtx bounds;
> + chkp_split_slot (temp, &temp, &bounds);
> + if (bounds && btarget)
> + {
> + gcc_assert (TREE_CODE (btarget) == SSA_NAME);
> + rtx tmp = targetm.calls.load_returned_bounds (bounds);
> + chkp_set_rtl_bounds (btarget, tmp);
> + }
> + }
> }
>
> /* If TEMP is a VOIDmode constant and the mode of the type of EXP is not
> @@ -5498,6 +5544,13 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
>
> return NULL_RTX;
> }
> +
> +/* Same as store_expr_with_bounds but ignoring bounds of EXP. */
> +rtx
> +store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
> +{
> + return store_expr_with_bounds (exp, target, call_param_p, nontemporal, NULL);
> +}
>
> /* Return true if field F of structure TYPE is a flexible array. */
>
> diff --git a/gcc/expr.h b/gcc/expr.h
> index 524da67..d06468d 100644
> --- a/gcc/expr.h
> +++ b/gcc/expr.h
> @@ -432,6 +432,7 @@ extern void expand_assignment (tree, tree, bool);
> and storing the value into TARGET.
> If SUGGEST_REG is nonzero, copy the value through a register
> and return that register, if that is possible. */
> +extern rtx store_expr_with_bounds (tree, rtx, int, bool, tree);
> extern rtx store_expr (tree, rtx, int, bool);
>
> /* Given an rtx that may include add and multiply operations,
> diff --git a/gcc/function.c b/gcc/function.c
> index a61e475..a08d4ad 100644
> --- a/gcc/function.c
> +++ b/gcc/function.c
> @@ -63,6 +63,8 @@ along with GCC; see the file COPYING3. If not see
> #include "df.h"
> #include "params.h"
> #include "bb-reorder.h"
> +#include "tree-chkp.h"
> +#include "rtl-chkp.h"
>
> /* So we can assign to cfun in this file. */
> #undef cfun
> @@ -2082,6 +2084,14 @@ use_register_for_decl (const_tree decl)
> if (TREE_ADDRESSABLE (decl))
> return false;
>
> + /* Decl is implicitly addressable by bound stores and loads
> + if it is an aggregate holding bounds. */
> + if (chkp_function_instrumented_p (current_function_decl)
> + && TREE_TYPE (decl)
> + && !BOUNDED_P (decl)
> + && chkp_type_has_pointer (TREE_TYPE (decl)))
> + return false;
> +
> /* Only register-like things go in registers. */
> if (DECL_MODE (decl) == BLKmode)
> return false;
> @@ -2202,6 +2212,15 @@ struct assign_parm_data_one
> BOOL_BITFIELD loaded_in_reg : 1;
> };
>
> +struct bounds_parm_data
> +{
> + assign_parm_data_one parm_data;
> + tree bounds_parm;
> + tree ptr_parm;
> + rtx ptr_entry;
> + int bound_no;
> +};
> +
> /* A subroutine of assign_parms. Initialize ALL. */
>
> static void
> @@ -2312,6 +2331,23 @@ assign_parms_augmented_arg_list (struct assign_parm_data_all *all)
> fnargs.safe_insert (0, decl);
>
> all->function_result_decl = decl;
> +
> + /* If the function is instrumented then the bounds of the
> + passed structure address are the second argument. */
> + if (chkp_function_instrumented_p (fndecl))
> + {
> + decl = build_decl (DECL_SOURCE_LOCATION (fndecl),
> + PARM_DECL, get_identifier (".result_bnd"),
> + pointer_bounds_type_node);
> + DECL_ARG_TYPE (decl) = pointer_bounds_type_node;
> + DECL_ARTIFICIAL (decl) = 1;
> + DECL_NAMELESS (decl) = 1;
> + TREE_CONSTANT (decl) = 1;
> +
> + DECL_CHAIN (decl) = DECL_CHAIN (all->orig_fnargs);
> + DECL_CHAIN (all->orig_fnargs) = decl;
> + fnargs.safe_insert (1, decl);
> + }
> }
>
> /* If the target wants to split complex arguments into scalars, do so. */
> @@ -2452,7 +2488,7 @@ assign_parm_find_entry_rtl (struct assign_parm_data_all *all,
> it came in a register so that REG_PARM_STACK_SPACE isn't skipped.
> In this case, we call FUNCTION_ARG with NAMED set to 1 instead of 0
> as it was the previous time. */
> - in_regs = entry_parm != 0;
> + in_regs = (entry_parm != 0) || POINTER_BOUNDS_TYPE_P (data->passed_type);
> #ifdef STACK_PARMS_IN_REG_PARM_AREA
> in_regs = true;
> #endif
> @@ -2541,8 +2577,12 @@ static bool
> assign_parm_is_stack_parm (struct assign_parm_data_all *all,
> struct assign_parm_data_one *data)
> {
> + /* Bounds are never passed on the stack to keep compatibility
> + with non-instrumented code. */
> + if (POINTER_BOUNDS_TYPE_P (data->passed_type))
> + return false;
> /* Trivially true if we've no incoming register. */
> - if (data->entry_parm == NULL)
> + else if (data->entry_parm == NULL)
> ;
> /* Also true if we're partially in registers and partially not,
> since we've arranged to drop the entire argument on the stack. */
> @@ -3348,6 +3388,119 @@ assign_parms_unsplit_complex (struct assign_parm_data_all *all,
> }
> }
>
> +/* Load bounds PARM from bounds table. */
> +static void
> +assign_parm_load_bounds (struct assign_parm_data_one *data,
> + tree parm,
> + rtx entry,
> + unsigned bound_no)
> +{
> + bitmap_iterator bi;
> + unsigned i, offs = 0;
> + int bnd_no = -1;
> + rtx slot = NULL, ptr = NULL;
> +
> + if (parm)
> + {
> + bitmap slots = chkp_find_bound_slots (TREE_TYPE (parm));
> + EXECUTE_IF_SET_IN_BITMAP (slots, 0, i, bi)
> + {
> + if (bound_no)
> + bound_no--;
> + else
> + {
> + bnd_no = i;
> + break;
> + }
> + }
> + BITMAP_FREE (slots);
> + }
> +
> + /* We may have bounds not associated with any pointer. */
> + if (bnd_no != -1)
> + offs = bnd_no * POINTER_SIZE / BITS_PER_UNIT;
> +
> + /* Find associated pointer. */
> + if (bnd_no == -1)
> + {
> + /* If bounds are not associated with any pointer,
> + then they are passed in a register or a special slot. */
> + gcc_assert (data->entry_parm);
> + ptr = const0_rtx;
> + }
> + else if (MEM_P (entry))
> + slot = adjust_address (entry, Pmode, offs);
> + else if (REG_P (entry))
> + ptr = gen_rtx_REG (Pmode, REGNO (entry) + bnd_no);
> + else if (GET_CODE (entry) == PARALLEL)
> + ptr = chkp_get_value_with_offs (entry, GEN_INT (offs));
> + else
> + gcc_unreachable ();
> + data->entry_parm = targetm.calls.load_bounds_for_arg (slot, ptr,
> + data->entry_parm);
> +}
> +
> +/* Assign RTL expressions to the function's bounds parameters BNDARGS. */
> +
> +static void
> +assign_bounds (vec<bounds_parm_data> &bndargs,
> + struct assign_parm_data_all &all)
> +{
> + unsigned i, pass, handled = 0;
> + bounds_parm_data *pbdata;
> +
> + if (!bndargs.exists ())
> + return;
> +
> + /* We make several passes to store input bounds.  First we handle bounds
> + passed in registers.  After that we load bounds passed in special
> + slots.  Finally we load bounds from the Bounds Table. */
> + for (pass = 0; pass < 3; pass++)
> + FOR_EACH_VEC_ELT (bndargs, i, pbdata)
> + {
> + /* Pass 0 => regs only. */
> + if (pass == 0
> + && (!pbdata->parm_data.entry_parm
> + || GET_CODE (pbdata->parm_data.entry_parm) != REG))
> + continue;
> + /* Pass 1 => slots only. */
> + else if (pass == 1
> + && (!pbdata->parm_data.entry_parm
> + || GET_CODE (pbdata->parm_data.entry_parm) == REG))
> + continue;
> + /* Pass 2 => BT only. */
> + else if (pass == 2
> + && pbdata->parm_data.entry_parm)
> + continue;
> +
> + if (!pbdata->parm_data.entry_parm
> + || GET_CODE (pbdata->parm_data.entry_parm) != REG)
> + assign_parm_load_bounds (&pbdata->parm_data, pbdata->ptr_parm,
> + pbdata->ptr_entry, pbdata->bound_no);
> +
> + set_decl_incoming_rtl (pbdata->bounds_parm,
> + pbdata->parm_data.entry_parm, false);
> +
> + if (assign_parm_setup_block_p (&pbdata->parm_data))
> + assign_parm_setup_block (&all, pbdata->bounds_parm,
> + &pbdata->parm_data);
> + else if (pbdata->parm_data.passed_pointer
> + || use_register_for_decl (pbdata->bounds_parm))
> + assign_parm_setup_reg (&all, pbdata->bounds_parm,
> + &pbdata->parm_data);
> + else
> + assign_parm_setup_stack (&all, pbdata->bounds_parm,
> + &pbdata->parm_data);
> +
> + /* Count handled bounds to make sure we miss nothing. */
> + handled++;
> + }
> +
> + gcc_assert (handled == bndargs.length ());
> +
> + bndargs.release ();
> +}
> +
> /* Assign RTL expressions to the function's parameters. This may involve
> copying them into registers and using those registers as the DECL_RTL. */
>
> @@ -3357,7 +3510,11 @@ assign_parms (tree fndecl)
> struct assign_parm_data_all all;
> tree parm;
> vec<tree> fnargs;
> - unsigned i;
> + unsigned i, bound_no = 0;
> + tree last_arg = NULL;
> + rtx last_arg_entry = NULL;
> + vec<bounds_parm_data> bndargs = vNULL;
> + bounds_parm_data bdata;
>
> crtl->args.internal_arg_pointer
> = targetm.calls.internal_arg_pointer ();
> @@ -3399,9 +3556,6 @@ assign_parms (tree fndecl)
> }
> }
>
> - if (cfun->stdarg && !DECL_CHAIN (parm))
> - assign_parms_setup_varargs (&all, &data, false);
> -
> /* Find out where the parameter arrives in this function. */
> assign_parm_find_entry_rtl (&all, &data);
>
> @@ -3411,7 +3565,15 @@ assign_parms (tree fndecl)
> assign_parm_find_stack_rtl (parm, &data);
> assign_parm_adjust_entry_rtl (&data);
> }
> -
> + if (!POINTER_BOUNDS_TYPE_P (data.passed_type))
> + {
> + /* Remember where the last non-bounds arg was passed in case
> + we have to load associated bounds for it from the Bounds
> + Table. */
> + last_arg = parm;
> + last_arg_entry = data.entry_parm;
> + bound_no = 0;
> + }
> /* Record permanently how this parm was passed. */
> if (data.passed_pointer)
> {
> @@ -3423,20 +3585,63 @@ assign_parms (tree fndecl)
> else
> set_decl_incoming_rtl (parm, data.entry_parm, false);
>
> + /* Bounds should be loaded in a particular order to
> + have registers allocated correctly.  Collect info about
> + input bounds and load them later. */
> + if (POINTER_BOUNDS_TYPE_P (data.passed_type))
> + {
> + /* Expect bounds in instrumented functions only. */
> + gcc_assert (chkp_function_instrumented_p (fndecl));
> +
> + bdata.parm_data = data;
> + bdata.bounds_parm = parm;
> + bdata.ptr_parm = last_arg;
> + bdata.ptr_entry = last_arg_entry;
> + bdata.bound_no = bound_no;
> + bndargs.safe_push (bdata);
> + }
> + else
> + {
> + assign_parm_adjust_stack_rtl (&data);
> +
> + if (assign_parm_setup_block_p (&data))
> + assign_parm_setup_block (&all, parm, &data);
> + else if (data.passed_pointer || use_register_for_decl (parm))
> + assign_parm_setup_reg (&all, parm, &data);
> + else
> + assign_parm_setup_stack (&all, parm, &data);
> + }
> +
> + if (cfun->stdarg && !DECL_CHAIN (parm))
> + {
> + int pretend_bytes = 0;
> +
> + assign_parms_setup_varargs (&all, &data, false);
> +
> + if (chkp_function_instrumented_p (fndecl))
> + {
> + /* We expect this is the last parm. Otherwise it is wrong
> + to assign bounds right now. */
> + gcc_assert (i == (fnargs.length () - 1));
> + assign_bounds (bndargs, all);
> + targetm.calls.setup_incoming_vararg_bounds (all.args_so_far,
> + data.promoted_mode,
> + data.passed_type,
> + &pretend_bytes,
> + false);
> + }
> + }
> +
> /* Update info on where next arg arrives in registers. */
> targetm.calls.function_arg_advance (all.args_so_far, data.promoted_mode,
> data.passed_type, data.named_arg);
>
> - assign_parm_adjust_stack_rtl (&data);
> -
> - if (assign_parm_setup_block_p (&data))
> - assign_parm_setup_block (&all, parm, &data);
> - else if (data.passed_pointer || use_register_for_decl (parm))
> - assign_parm_setup_reg (&all, parm, &data);
> - else
> - assign_parm_setup_stack (&all, parm, &data);
> + if (POINTER_BOUNDS_TYPE_P (data.passed_type))
> + bound_no++;
> }
>
> + assign_bounds (bndargs, all);
> +
> if (targetm.calls.split_complex_arg)
> assign_parms_unsplit_complex (&all, fnargs);
>
> @@ -3557,6 +3762,10 @@ assign_parms (tree fndecl)
>
> real_decl_rtl = targetm.calls.function_value (TREE_TYPE (decl_result),
> fndecl, true);
> + if (chkp_function_instrumented_p (fndecl))
> + crtl->return_bnd
> + = targetm.calls.chkp_function_value_bounds (TREE_TYPE (decl_result),
> + fndecl, true);
> REG_FUNCTION_VALUE_P (real_decl_rtl) = 1;
> /* The delay slot scheduler assumes that crtl->return_rtx
> holds the hard register containing the return value, not a
> @@ -4778,6 +4987,14 @@ expand_function_start (tree subr)
> /* Set DECL_REGISTER flag so that expand_function_end will copy the
> result to the real return register(s). */
> DECL_REGISTER (DECL_RESULT (subr)) = 1;
> +
> + if (chkp_function_instrumented_p (current_function_decl))
> + {
> + tree return_type = TREE_TYPE (DECL_RESULT (subr));
> + rtx bounds = targetm.calls.chkp_function_value_bounds (return_type,
> + subr, 1);
> + SET_DECL_BOUNDS_RTL (DECL_RESULT (subr), bounds);
> + }
> }
>
> /* Initialize rtx for parameters and local variables.
> @@ -4867,14 +5084,11 @@ expand_dummy_function_end (void)
> in_dummy_function = false;
> }
>
> -/* Call DOIT for each hard register used as a return value from
> - the current function. */
> +/* Helper for diddle_return_value. */
>
> void
> -diddle_return_value (void (*doit) (rtx, void *), void *arg)
> +diddle_return_value_1 (void (*doit) (rtx, void *), void *arg, rtx outgoing)
> {
> - rtx outgoing = crtl->return_rtx;
> -
> if (! outgoing)
> return;
>
> @@ -4894,6 +5108,16 @@ diddle_return_value (void (*doit) (rtx, void *), void *arg)
> }
> }
>
> +/* Call DOIT for each hard register used as a return value from
> + the current function. */
> +
> +void
> +diddle_return_value (void (*doit) (rtx, void *), void *arg)
> +{
> + diddle_return_value_1 (doit, arg, crtl->return_rtx);
> + diddle_return_value_1 (doit, arg, crtl->return_bnd);
> +}
> +
> static void
> do_clobber_return_reg (rtx reg, void *arg ATTRIBUTE_UNUSED)
> {
> diff --git a/gcc/function.h b/gcc/function.h
> index 38a0fc4..736bb02 100644
> --- a/gcc/function.h
> +++ b/gcc/function.h
> @@ -252,6 +252,9 @@ struct GTY(()) rtl_data {
> result in a register, current_function_return_rtx will always be
> the hard register containing the result. */
> rtx return_rtx;
> + /* If nonzero, an RTL expression for the location at which the current
> + function returns bounds for its result. */
> + rtx return_bnd;
>
> /* Vector of initial-value pairs. Each pair consists of a pseudo
> register of approprite mode that stores the initial value a hard
* Re: [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand
2014-06-05 14:46 ` Ilya Enkovich
2014-09-15 7:20 ` Ilya Enkovich
@ 2014-09-23 20:58 ` Jeff Law
2014-09-24 8:29 ` Ilya Enkovich
1 sibling, 1 reply; 11+ messages in thread
From: Jeff Law @ 2014-09-23 20:58 UTC (permalink / raw)
To: Ilya Enkovich, Michael Matz; +Cc: gcc-patches
On 06/05/14 08:46, Ilya Enkovich wrote:
> 2014-06-05 Ilya Enkovich <ilya.enkovich@intel.com>
>
> * calls.c: Include tree-chkp.h, rtl-chkp.h, bitmap.h.
> (arg_data): Add fields special_slot, pointer_arg and
> pointer_offset.
> (store_bounds): New.
> (emit_call_1): Propagate instrumentation flag for CALL.
> (initialize_argument_information): Compute pointer_arg,
> pointer_offset and special_slot for pointer bounds arguments.
> (finalize_must_preallocate): Preallocate when storing bounds
> in bounds table.
> (compute_argument_addresses): Skip pointer bounds.
> (expand_call): Store bounds into tables separately. Return
> result joined with resulting bounds.
> * cfgexpand.c: Include tree-chkp.h, rtl-chkp.h.
> (expand_call_stmt): Propagate bounds flag for CALL_EXPR.
> (expand_return): Add returned bounds arg. Handle returned bounds.
> (expand_gimple_stmt_1): Adjust to new expand_return signature.
> (gimple_expand_cfg): Reset rtx bounds map.
> * expr.c: Include tree-chkp.h, rtl-chkp.h.
> (expand_assignment): Handle returned bounds.
> (store_expr_with_bounds): New. Replaces store_expr with new bounds
> target argument. Handle bounds returned by calls.
> (store_expr): Now wraps store_expr_with_bounds.
> * expr.h (store_expr_with_bounds): New.
> * function.c: Include tree-chkp.h, rtl-chkp.h.
> (bounds_parm_data): New.
> (use_register_for_decl): Do not registerize decls used for bounds
> stores and loads.
> (assign_parms_augmented_arg_list): Add bounds of the result
> structure pointer as the second argument.
> (assign_parm_find_entry_rtl): Mark bounds are never passed on
> the stack.
> (assign_parm_is_stack_parm): Likewise.
> (assign_parm_load_bounds): New.
> (assign_bounds): New.
> (assign_parms): Load bounds and determine a location for
> returned bounds.
> (diddle_return_value_1): New.
> (diddle_return_value): Handle returned bounds.
> * function.h (rtl_data): Add field for returned bounds.
>
>
> diff --git a/gcc/calls.c b/gcc/calls.c
> index e1dc8eb..5fbbe9f 100644
> --- a/gcc/calls.c
> +++ b/gcc/calls.c
> @@ -44,11 +44,14 @@ along with GCC; see the file COPYING3. If not see
> #include "tm_p.h"
> #include "timevar.h"
> #include "sbitmap.h"
> +#include "bitmap.h"
> #include "langhooks.h"
> #include "target.h"
> #include "cgraph.h"
> #include "except.h"
> #include "dbgcnt.h"
> +#include "tree-chkp.h"
> +#include "rtl-chkp.h"
>
> /* Like PREFERRED_STACK_BOUNDARY but in units of bytes, not bits. */
> #define STACK_BYTES (PREFERRED_STACK_BOUNDARY / BITS_PER_UNIT)
> @@ -76,6 +79,15 @@ struct arg_data
> /* If REG is a PARALLEL, this is a copy of VALUE pulled into the correct
> form for emit_group_move. */
> rtx parallel_value;
> + /* If value is passed in neither reg nor stack, this field holds a number
> + of a special slot to be used. */
> + rtx special_slot;
I really dislike "special_slot" and the comment here. The comment that
it's neither a reg nor stack is just bogus. What hardware resource does
"special_slot" refer to? It's a register, but one that we do not
typically expose. Let's at least clarify the comment and then we'll see
if something other than "special_slot" as a name makes sense. Yes, I
realize this is a bit of bikeshedding, but when the comments/terminology
is confusing, the code becomes even harder to understand.
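Something along these lines would already help (the wording below is
only a suggestion, and the MPX Bounds Table reference is just an example
of what such a slot can be):

  /* If the value is passed in neither a register nor on the stack
     (e.g. pointer bounds once the bound registers are exhausted),
     this is a CONST_INT holding the number of the target-managed slot
     used to pass it -- for MPX, a special Bounds Table entry.  */
  rtx special_slot;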
I'm a bit concerned that this is exposing more details of the MPX
implementation than is advisable to the front/middle end. On the other
hand, I'd expect any other implementation that seeks to work in a
transparent manner is going to have many of the same implementation
properties as we see with MPX, so perhaps it's not a major problem.
> @@ -1141,18 +1158,84 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
> /* First fill in the actual arguments in the ARGS array, splitting
> complex arguments if necessary. */
> {
> - int j = i;
> + int j = i, ptr_arg = -1;
> call_expr_arg_iterator iter;
> tree arg;
> + bitmap slots = NULL;
>
> if (struct_value_addr_value)
> {
> args[j].tree_value = struct_value_addr_value;
> +
> j += inc;
> +
> + /* If we pass structure address then we need to
> + create bounds for it. Since created bounds is
> + a call statement, we expand it right here to avoid
> + fixing all other places where it may be expanded. */
> + if (CALL_WITH_BOUNDS_P (exp))
> + {
> + args[j].value = gen_reg_rtx (targetm.chkp_bound_mode ());
> + args[j].tree_value
> + = chkp_make_bounds_for_struct_addr (struct_value_addr_value);
> + expand_expr_real (args[j].tree_value, args[j].value, VOIDmode,
> + EXPAND_NORMAL, 0, false);
> + args[j].pointer_arg = j - inc;
> +
> + j += inc;
> + }
Just an FYI, I'm pretty sure this hunk isn't going to apply cleanly as
the context has changed on the trunk. I'd recommend getting this code
updated for the trunk. I suspect you're getting close to having all the
basic functionality bits in, you're obviously going to need to do a new
bootstrap & regression test prior to checkin. I think git squashing the
series and testing/committing them as an atomic unit is probably wise.
It's been a while since I looked at this code, but is it safe to create
a new call tree at this point? I recall some major complications if you
try to insert a call once you've started filling in arguments. Hmm,
given you're at the start of initialize_argument_information, you're
probably OK since we haven't stored any arguments into their arg
regs/memory yet.
> @@ -1302,6 +1388,12 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
> args[i].reg = targetm.calls.function_arg (args_so_far, mode, type,
> argpos < n_named_args);
>
> + if (args[i].reg && CONST_INT_P (args[i].reg))
> + {
> + args[i].special_slot = args[i].reg;
> + args[i].reg = NULL;
> + }
I can't recall from the earlier patches, but have you updated the
documentation to indicate that function_arg can return a CONST_INT?
I think this is mostly OK. If you could update and resend for another
once-over, it'd be appreciated.
Jeff
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand
2014-09-23 20:58 ` Jeff Law
@ 2014-09-24 8:29 ` Ilya Enkovich
2014-10-02 14:03 ` Ilya Enkovich
0 siblings, 1 reply; 11+ messages in thread
From: Ilya Enkovich @ 2014-09-24 8:29 UTC (permalink / raw)
To: Jeff Law; +Cc: Michael Matz, gcc-patches
2014-09-24 0:58 GMT+04:00 Jeff Law <law@redhat.com>:
> On 06/05/14 08:46, Ilya Enkovich wrote:
>>
>> 2014-06-05 Ilya Enkovich <ilya.enkovich@intel.com>
>>
>> * calls.c: Include tree-chkp.h, rtl-chkp.h, bitmap.h.
>> (arg_data): Add fields special_slot, pointer_arg and
>> pointer_offset.
>> (store_bounds): New.
>> (emit_call_1): Propagate instrumentation flag for CALL.
>> (initialize_argument_information): Compute pointer_arg,
>> pointer_offset and special_slot for pointer bounds arguments.
>> (finalize_must_preallocate): Preallocate when storing bounds
>> in bounds table.
>> (compute_argument_addresses): Skip pointer bounds.
>> (expand_call): Store bounds into tables separately. Return
>> result joined with resulting bounds.
>> * cfgexpand.c: Include tree-chkp.h, rtl-chkp.h.
>> (expand_call_stmt): Propagate bounds flag for CALL_EXPR.
>> (expand_return): Add returned bounds arg. Handle returned bounds.
>> (expand_gimple_stmt_1): Adjust to new expand_return signature.
>> (gimple_expand_cfg): Reset rtx bounds map.
>> * expr.c: Include tree-chkp.h, rtl-chkp.h.
>> (expand_assignment): Handle returned bounds.
>> (store_expr_with_bounds): New. Replaces store_expr with new
>> bounds
>> target argument. Handle bounds returned by calls.
>> (store_expr): Now wraps store_expr_with_bounds.
>> * expr.h (store_expr_with_bounds): New.
>> * function.c: Include tree-chkp.h, rtl-chkp.h.
>> (bounds_parm_data): New.
>> (use_register_for_decl): Do not registerize decls used for bounds
>> stores and loads.
>> (assign_parms_augmented_arg_list): Add bounds of the result
>> structure pointer as the second argument.
>> (assign_parm_find_entry_rtl): Mark bounds are never passed on
>> the stack.
>> (assign_parm_is_stack_parm): Likewise.
>> (assign_parm_load_bounds): New.
>> (assign_bounds): New.
>> (assign_parms): Load bounds and determine a location for
>> returned bounds.
>> (diddle_return_value_1): New.
>> (diddle_return_value): Handle returned bounds.
>> * function.h (rtl_data): Add field for returned bounds.
>>
>>
>> diff --git a/gcc/calls.c b/gcc/calls.c
>> index e1dc8eb..5fbbe9f 100644
>> --- a/gcc/calls.c
>> +++ b/gcc/calls.c
>> @@ -44,11 +44,14 @@ along with GCC; see the file COPYING3. If not see
>> #include "tm_p.h"
>> #include "timevar.h"
>> #include "sbitmap.h"
>> +#include "bitmap.h"
>> #include "langhooks.h"
>> #include "target.h"
>> #include "cgraph.h"
>> #include "except.h"
>> #include "dbgcnt.h"
>> +#include "tree-chkp.h"
>> +#include "rtl-chkp.h"
>>
>> /* Like PREFERRED_STACK_BOUNDARY but in units of bytes, not bits. */
>> #define STACK_BYTES (PREFERRED_STACK_BOUNDARY / BITS_PER_UNIT)
>> @@ -76,6 +79,15 @@ struct arg_data
>> /* If REG is a PARALLEL, this is a copy of VALUE pulled into the
>> correct
>> form for emit_group_move. */
>> rtx parallel_value;
>> + /* If value is passed in neither reg nor stack, this field holds a
>> number
>> + of a special slot to be used. */
>> + rtx special_slot;
>
> I really dislike "special_slot" and the comment here. The comment that it's
> neither a reg nor stack is just bogus. What hardware resource does
> "special_slot" refer to? It's a register, but one that we do not typically
> expose. Let's at least clarify the comment and then we'll see if something
> other than "special_slot" as a name makes sense. Yes, I realize this is a
> bit of bikeshedding, but when the comments/terminology is confusing, the
> code becomes even harder to understand.
A special slot is not a register. When bounds are passed in a register
then everything works as if we pass any other argument in a register.
A special slot is used when we run out of bounds registers and have to
pass bounds for a pointer that is itself passed in a register. It
doesn't refer to any hardware resource. In the MPX ABI we state that
special Bounds Table entries (related to the stack pointer value (and
lower) right before a call) are used. In a software implementation it
may also be some other place, such as variables in TLS.
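
To make this concrete, here is a small illustrative example (it assumes
the MPX convention of four bound registers; the slot numbering for the
overflow bounds is hypothetical):

  /* Instrumented call with more pointer arguments than there are
     bound registers.  */
  extern void use (char *a, char *b, char *c, char *d, char *e);

  void
  caller (char *p)
  {
    /* Bounds for the first four pointer arguments travel in bound
       registers.  Bounds for the fifth pointer no longer fit, so
       function_arg returns a CONST_INT slot number instead of a reg;
       the caller stores those bounds into a Bounds Table entry
       addressed relative to the outgoing stack pointer, and the
       callee loads them back through the corresponding load hook.  */
    use (p, p + 1, p + 2, p + 3, p + 4);
  }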
>
> I'm a bit concerned that this is exposing more details of the MPX
> implementation than is advisable to the front/middle end. On the other
> hand, I'd expect any other implementation that seeks to work in a
> transparent manner is going to have many of the same implementation
> properties as we see with MPX, so perhaps it's not a major problem.
I'm trying not to introduce any hardware dependencies into the middle
end. Several months ago I created a simple prototype of generic target
support in the Pointer Bounds Checker which used library calls instead
of MPX instructions, TLS for bounds passing, etc. I did it to check
that our design is not bound to MPX and allows such an implementation.
It was very useful and showed some MPX details that had soaked into the
GIMPLE part. E.g. the chkp_initialize_bounds and
chkp_make_bounds_constant hooks appeared during that work. The special
slots mechanism worked well for it, though.
>
>
>
>
>> @@ -1141,18 +1158,84 @@ initialize_argument_information (int num_actuals
>> ATTRIBUTE_UNUSED,
>> /* First fill in the actual arguments in the ARGS array, splitting
>> complex arguments if necessary. */
>> {
>> - int j = i;
>> + int j = i, ptr_arg = -1;
>> call_expr_arg_iterator iter;
>> tree arg;
>> + bitmap slots = NULL;
>>
>> if (struct_value_addr_value)
>> {
>> args[j].tree_value = struct_value_addr_value;
>> +
>> j += inc;
>> +
>> + /* If we pass structure address then we need to
>> + create bounds for it. Since created bounds is
>> + a call statement, we expand it right here to avoid
>> + fixing all other places where it may be expanded. */
>> + if (CALL_WITH_BOUNDS_P (exp))
>> + {
>> + args[j].value = gen_reg_rtx (targetm.chkp_bound_mode ());
>> + args[j].tree_value
>> + = chkp_make_bounds_for_struct_addr
>> (struct_value_addr_value);
>> + expand_expr_real (args[j].tree_value, args[j].value, VOIDmode,
>> + EXPAND_NORMAL, 0, false);
>> + args[j].pointer_arg = j - inc;
>> +
>> + j += inc;
>> + }
>
> Just an FYI, I'm pretty sure this hunk isn't going to apply cleanly as the
> context has changed on the trunk. I'd recommend getting this code updated
> for the trunk. I suspect you're getting close to having all the basic
> functionality bits in, you're obviously going to need to do a new bootstrap
> & regression test prior to checkin. I think git squashing the series and
> testing/committing them as an atomic unit is probably wise.
There are also some other patches which require update. I'll do it
and repost modified patches.
>
> It's been a while since I looked at this code, but is it safe to create a
> new call tree at this point? I recall some major complications if you try
> to insert a call once you've started filling in arguments. Hmm, given
> you're at the start of initialize_argument_information, you're probably OK
> since we haven't stored any arguments into their arg regs/memory yet.
It should be quite safe to expand another call until we start moving
arguments to their actual places.
>
>
>
>> @@ -1302,6 +1388,12 @@ initialize_argument_information (int num_actuals
>> ATTRIBUTE_UNUSED,
>> args[i].reg = targetm.calls.function_arg (args_so_far, mode, type,
>> argpos < n_named_args);
>>
>> + if (args[i].reg && CONST_INT_P (args[i].reg))
>> + {
>> + args[i].special_slot = args[i].reg;
>> + args[i].reg = NULL;
>> + }
>
> I can't recall from the earlier patches, but have you updated the
> documentation to indicate that function_arg can return a CONST_INT?
I didn't update it. Will do.
Thanks,
Ilya
>
>
> I think this is mostly OK. If you could update and resend for another
> once-over, it'd be appreciated.
>
> Jeff
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand
2014-09-24 8:29 ` Ilya Enkovich
@ 2014-10-02 14:03 ` Ilya Enkovich
2014-10-03 20:07 ` Jeff Law
0 siblings, 1 reply; 11+ messages in thread
From: Ilya Enkovich @ 2014-10-02 14:03 UTC (permalink / raw)
To: Jeff Law; +Cc: Michael Matz, gcc-patches
On 24 Sep 12:29, Ilya Enkovich wrote:
> 2014-09-24 0:58 GMT+04:00 Jeff Law <law@redhat.com>:
> > On 06/05/14 08:46, Ilya Enkovich wrote:
> >>
> >> 2014-06-05 Ilya Enkovich <ilya.enkovich@intel.com>
> >>
> >> * calls.c: Include tree-chkp.h, rtl-chkp.h, bitmap.h.
> >> (arg_data): Add fields special_slot, pointer_arg and
> >> pointer_offset.
> >> (store_bounds): New.
> >> (emit_call_1): Propagate instrumentation flag for CALL.
> >> (initialize_argument_information): Compute pointer_arg,
> >> pointer_offset and special_slot for pointer bounds arguments.
> >> (finalize_must_preallocate): Preallocate when storing bounds
> >> in bounds table.
> >> (compute_argument_addresses): Skip pointer bounds.
> >> (expand_call): Store bounds into tables separately. Return
> >> result joined with resulting bounds.
> >> * cfgexpand.c: Include tree-chkp.h, rtl-chkp.h.
> >> (expand_call_stmt): Propagate bounds flag for CALL_EXPR.
> >> (expand_return): Add returned bounds arg. Handle returned bounds.
> >> (expand_gimple_stmt_1): Adjust to new expand_return signature.
> >> (gimple_expand_cfg): Reset rtx bounds map.
> >> * expr.c: Include tree-chkp.h, rtl-chkp.h.
> >> (expand_assignment): Handle returned bounds.
> >> (store_expr_with_bounds): New. Replaces store_expr with new
> >> bounds
> >> target argument. Handle bounds returned by calls.
> >> (store_expr): Now wraps store_expr_with_bounds.
> >> * expr.h (store_expr_with_bounds): New.
> >> * function.c: Include tree-chkp.h, rtl-chkp.h.
> >> (bounds_parm_data): New.
> >> (use_register_for_decl): Do not registerize decls used for bounds
> >> stores and loads.
> >> (assign_parms_augmented_arg_list): Add bounds of the result
> >> structure pointer as the second argument.
> >> (assign_parm_find_entry_rtl): Mark bounds are never passed on
> >> the stack.
> >> (assign_parm_is_stack_parm): Likewise.
> >> (assign_parm_load_bounds): New.
> >> (assign_bounds): New.
> >> (assign_parms): Load bounds and determine a location for
> >> returned bounds.
> >> (diddle_return_value_1): New.
> >> (diddle_return_value): Handle returned bounds.
> >> * function.h (rtl_data): Add field for returned bounds.
> >>
> >>
> >> diff --git a/gcc/calls.c b/gcc/calls.c
> >> index e1dc8eb..5fbbe9f 100644
> >> --- a/gcc/calls.c
> >> +++ b/gcc/calls.c
> >> @@ -44,11 +44,14 @@ along with GCC; see the file COPYING3. If not see
> >> #include "tm_p.h"
> >> #include "timevar.h"
> >> #include "sbitmap.h"
> >> +#include "bitmap.h"
> >> #include "langhooks.h"
> >> #include "target.h"
> >> #include "cgraph.h"
> >> #include "except.h"
> >> #include "dbgcnt.h"
> >> +#include "tree-chkp.h"
> >> +#include "rtl-chkp.h"
> >>
> >> /* Like PREFERRED_STACK_BOUNDARY but in units of bytes, not bits. */
> >> #define STACK_BYTES (PREFERRED_STACK_BOUNDARY / BITS_PER_UNIT)
> >> @@ -76,6 +79,15 @@ struct arg_data
> >> /* If REG is a PARALLEL, this is a copy of VALUE pulled into the
> >> correct
> >> form for emit_group_move. */
> >> rtx parallel_value;
> >> + /* If value is passed in neither reg nor stack, this field holds a
> >> number
> >> + of a special slot to be used. */
> >> + rtx special_slot;
> >
> > I really dislike "special_slot" and the comment here. The comment that it's
> > neither a reg nor stack is just bogus. What hardware resource does
> > "special_slot" refer to? It's a register, but one that we do not typically
> > expose. Let's at least clarify the comment and then we'll see if something
> > other than "special_slot" as a name makes sense. Yes, I realize this is a
> > bit of bikeshedding, but when the comments/terminology is confusing, the
> > code becomes even harder to understand.
>
> Special slot is not a register. When bounds are passed in a register
> then everything works as if we pass any other argument in a register.
> Special slot is used when we are out of bounds registers and pass
> bounds for pointer passed in a register. It doesn't refer to any
> hardware resource. In MPX ABI we state that special Bounds Table
> entries (related to stack pointer value (and lower) right before a
> call) are used. In software implementation it also may be some other
> places like vars in TLS.
>
> >
> > I'm a bit concerned that this is exposing more details of the MPX
> > implementation than is advisable to the front/middle end. On the other
> > hand, I'd expect any other implementation that seeks to work in a
> > transparent manner is going to have many of the same implementation
> > properties as we see with MPX, so perhaps it's not a major problem.
>
> I'm trying to not introduce any hardware dependencies into middle end.
> Several months ago I created a simple prototype of generic target
> support in Pointer Bounds Checker which used library calls instead of
> MPX instructions, TLS for bounds passing etc. I did it to check our
> design is not bound to MPX and allows such implementation. It was
> very useful and showed some MPX details soaked into GIMPLE part. E.g.
> chkp_initialize_bounds and chkp_make_bounds_constant hooks appeared
> during that work. Special slots mechanism worked well for it though.
>
> >
> >
> >
> >
> >> @@ -1141,18 +1158,84 @@ initialize_argument_information (int num_actuals
> >> ATTRIBUTE_UNUSED,
> >> /* First fill in the actual arguments in the ARGS array, splitting
> >> complex arguments if necessary. */
> >> {
> >> - int j = i;
> >> + int j = i, ptr_arg = -1;
> >> call_expr_arg_iterator iter;
> >> tree arg;
> >> + bitmap slots = NULL;
> >>
> >> if (struct_value_addr_value)
> >> {
> >> args[j].tree_value = struct_value_addr_value;
> >> +
> >> j += inc;
> >> +
> >> + /* If we pass structure address then we need to
> >> + create bounds for it. Since created bounds is
> >> + a call statement, we expand it right here to avoid
> >> + fixing all other places where it may be expanded. */
> >> + if (CALL_WITH_BOUNDS_P (exp))
> >> + {
> >> + args[j].value = gen_reg_rtx (targetm.chkp_bound_mode ());
> >> + args[j].tree_value
> >> + = chkp_make_bounds_for_struct_addr
> >> (struct_value_addr_value);
> >> + expand_expr_real (args[j].tree_value, args[j].value, VOIDmode,
> >> + EXPAND_NORMAL, 0, false);
> >> + args[j].pointer_arg = j - inc;
> >> +
> >> + j += inc;
> >> + }
> >
> > Just an FYI, I'm pretty sure this hunk isn't going to apply cleanly as the
> > context has changed on the trunk. I'd recommend getting this code updated
> > for the trunk. I suspect you're getting close to having all the basic
> > functionality bits in, you're obviously going to need to do a new bootstrap
> > & regression test prior to checkin. I think git squashing the series and
> > testing/committing them as an atomic unit is probably wise.
>
> There are also some other patches which require update. I'll do it
> and repost modified patches.
>
> >
> > It's been a while since I looked at this code, but is it safe to create a
> > new call tree at this point? I recall some major complications if you try
> > to insert a call once you've started filling in arguments. Hmm, given
> > you're at the start of initialize_argument_information, you're probably OK
> > since we haven't stored any arguments into their arg regs/memory yet.
>
> It should be quite safe to expand another call until we start moving
> arguments to their actual places.
>
> >
> >
> >
> >> @@ -1302,6 +1388,12 @@ initialize_argument_information (int num_actuals
> >> ATTRIBUTE_UNUSED,
> >> args[i].reg = targetm.calls.function_arg (args_so_far, mode, type,
> >> argpos < n_named_args);
> >>
> >> + if (args[i].reg && CONST_INT_P (args[i].reg))
> >> + {
> >> + args[i].special_slot = args[i].reg;
> >> + args[i].reg = NULL;
> >> + }
> >
> > I can't recall from the earlier patches, but have you updated the
> > documentation to indicate that function_arg can return a CONST_INT?
>
> I didn't update it. Will do.
>
> Thanks,
> Ilya
>
> >
> >
> > I think this is mostly OK. If you could update and resend for another
> > once-over, it'd be appreciated.
> >
> > Jeff
Here is an updated version.
Thanks,
Ilya
--
2014-10-02 Ilya Enkovich <ilya.enkovich@intel.com>
* calls.c: Include tree-chkp.h, rtl-chkp.h, bitmap.h.
(arg_data): Add fields special_slot, pointer_arg and
pointer_offset.
(store_bounds): New.
(emit_call_1): Propagate instrumentation flag for CALL.
(initialize_argument_information): Compute pointer_arg,
pointer_offset and special_slot for pointer bounds arguments.
(finalize_must_preallocate): Preallocate when storing bounds
in bounds table.
(compute_argument_addresses): Skip pointer bounds.
(expand_call): Store bounds into tables separately. Return
result joined with resulting bounds.
* cfgexpand.c: Include tree-chkp.h, rtl-chkp.h.
(expand_call_stmt): Propagate bounds flag for CALL_EXPR.
(expand_return): Add returned bounds arg. Handle returned bounds.
(expand_gimple_stmt_1): Adjust to new expand_return signature.
(gimple_expand_cfg): Reset rtx bounds map.
* expr.c: Include tree-chkp.h, rtl-chkp.h.
(expand_assignment): Handle returned bounds.
(store_expr_with_bounds): New. Replaces store_expr with new bounds
target argument. Handle bounds returned by calls.
(store_expr): Now wraps store_expr_with_bounds.
* expr.h (store_expr_with_bounds): New.
* function.c: Include tree-chkp.h, rtl-chkp.h.
(bounds_parm_data): New.
(use_register_for_decl): Do not registerize decls used for bounds
stores and loads.
(assign_parms_augmented_arg_list): Add bounds of the result
structure pointer as the second argument.
(assign_parm_find_entry_rtl): Mark bounds are never passed on
the stack.
(assign_parm_is_stack_parm): Likewise.
(assign_parm_load_bounds): New.
(assign_bounds): New.
(assign_parms): Load bounds and determine a location for
returned bounds.
(diddle_return_value_1): New.
(diddle_return_value): Handle returned bounds.
* function.h (rtl_data): Add field for returned bounds.
* target.def (TARGET_FUNCTION_ARG): Update hook description with new
possible return value CONST_INT.
* doc/tm.texi: Regenerate.
diff --git a/gcc/calls.c b/gcc/calls.c
index df6d268..a9c22ac 100644
--- a/gcc/calls.c
+++ b/gcc/calls.c
@@ -44,12 +44,15 @@ along with GCC; see the file COPYING3. If not see
#include "tm_p.h"
#include "timevar.h"
#include "sbitmap.h"
+#include "bitmap.h"
#include "langhooks.h"
#include "target.h"
#include "cgraph.h"
#include "except.h"
#include "dbgcnt.h"
#include "rtl-iter.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
/* Like PREFERRED_STACK_BOUNDARY but in units of bytes, not bits. */
#define STACK_BYTES (PREFERRED_STACK_BOUNDARY / BITS_PER_UNIT)
@@ -77,6 +80,15 @@ struct arg_data
/* If REG is a PARALLEL, this is a copy of VALUE pulled into the correct
form for emit_group_move. */
rtx parallel_value;
+ /* If value is passed in neither reg nor stack, this field holds a number
+ of a special slot to be used. */
+ rtx special_slot;
+ /* For pointer bounds hold an index of parm bounds are bound to. -1 if
+ there is no such pointer. */
+ int pointer_arg;
+ /* If pointer_arg refers a structure, then pointer_offset holds an offset
+ of a pointer in this structure. */
+ int pointer_offset;
/* If REG was promoted from the actual mode of the argument expression,
indicates whether the promotion is sign- or zero-extended. */
int unsignedp;
@@ -134,6 +146,7 @@ static void emit_call_1 (rtx, tree, tree, tree, HOST_WIDE_INT, HOST_WIDE_INT,
HOST_WIDE_INT, rtx, rtx, int, rtx, int,
cumulative_args_t);
static void precompute_register_parameters (int, struct arg_data *, int *);
+static void store_bounds (struct arg_data *, struct arg_data *);
static int store_one_arg (struct arg_data *, rtx, int, int, int);
static void store_unaligned_arguments_into_pseudos (struct arg_data *, int);
static int finalize_must_preallocate (int, int, struct arg_data *,
@@ -398,6 +411,10 @@ emit_call_1 (rtx funexp, tree fntree ATTRIBUTE_UNUSED, tree fndecl ATTRIBUTE_UNU
&& MEM_EXPR (funmem) != NULL_TREE)
set_mem_expr (XEXP (call, 0), MEM_EXPR (funmem));
+ /* Mark instrumented calls. */
+ if (call && fntree)
+ CALL_EXPR_WITH_BOUNDS_P (call) = CALL_WITH_BOUNDS_P (fntree);
+
/* Put the register usage information there. */
add_function_usage_to (call_insn, call_fusage);
@@ -1128,18 +1145,82 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
i = num_actuals - 1;
{
- int j = i;
+ int j = i, ptr_arg = -1;
call_expr_arg_iterator iter;
tree arg;
+ bitmap slots = NULL;
if (struct_value_addr_value)
{
args[j].tree_value = struct_value_addr_value;
j--;
+
+ /* If we pass structure address then we need to
+ create bounds for it. Since created bounds is
+ a call statement, we expand it right here to avoid
+ fixing all other places where it may be expanded. */
+ if (CALL_WITH_BOUNDS_P (exp))
+ {
+ args[j].value = gen_reg_rtx (targetm.chkp_bound_mode ());
+ args[j].tree_value
+ = chkp_make_bounds_for_struct_addr (struct_value_addr_value);
+ expand_expr_real (args[j].tree_value, args[j].value, VOIDmode,
+ EXPAND_NORMAL, 0, false);
+ args[j].pointer_arg = j + 1;
+ j--;
+ }
}
FOR_EACH_CALL_EXPR_ARG (arg, iter, exp)
{
tree argtype = TREE_TYPE (arg);
+
+ /* Remember last param with pointer and associate it
+ with following pointer bounds. */
+ if (CALL_WITH_BOUNDS_P (exp)
+ && chkp_type_has_pointer (argtype))
+ {
+ if (slots)
+ {
+ BITMAP_FREE (slots);
+ slots = NULL;
+ }
+ ptr_arg = j;
+ if (!BOUNDED_TYPE_P (argtype))
+ slots = chkp_find_bound_slots (argtype);
+ }
+ else if (POINTER_BOUNDS_TYPE_P (argtype))
+ {
+ /* We expect bounds in instrumented calls only.
+ Otherwise it is a sign we lost the flag due to some optimization
+ and may emit call args incorrectly. */
+ gcc_assert (CALL_WITH_BOUNDS_P (exp));
+
+ /* For structures look for the next available pointer. */
+ if (ptr_arg != -1 && slots)
+ {
+ unsigned bnd_no = bitmap_first_set_bit (slots);
+ args[j].pointer_offset =
+ bnd_no * POINTER_SIZE / BITS_PER_UNIT;
+
+ bitmap_clear_bit (slots, bnd_no);
+
+ /* Check we have no more pointers in the structure. */
+ if (bitmap_empty_p (slots))
+ {
+ BITMAP_FREE (slots);
+ slots = NULL;
+ }
+ }
+ args[j].pointer_arg = ptr_arg;
+
+ /* Check we covered all pointers in the previous
+ non bounds arg. */
+ if (!slots)
+ ptr_arg = -1;
+ }
+ else
+ ptr_arg = -1;
+
if (targetm.calls.split_complex_arg
&& argtype
&& TREE_CODE (argtype) == COMPLEX_TYPE
@@ -1154,6 +1235,9 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
args[j].tree_value = arg;
j--;
}
+
+ if (slots)
+ BITMAP_FREE (slots);
}
/* I counts args in order (to be) pushed; ARGPOS counts in order written. */
@@ -1289,6 +1373,12 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
args[i].reg = targetm.calls.function_arg (args_so_far, mode, type,
argpos < n_named_args);
+ if (args[i].reg && CONST_INT_P (args[i].reg))
+ {
+ args[i].special_slot = args[i].reg;
+ args[i].reg = NULL;
+ }
+
/* If this is a sibling call and the machine has register windows, the
register window has to be unwinded before calling the routine, so
arguments have to go into the incoming registers. */
@@ -1322,10 +1412,13 @@ initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
|| (args[i].pass_on_stack && args[i].reg != 0))
*must_preallocate = 1;
+ /* No stack allocation and padding for bounds. */
+ if (POINTER_BOUNDS_P (args[i].tree_value))
+ ;
/* Compute the stack-size of this argument. */
- if (args[i].reg == 0 || args[i].partial != 0
- || reg_parm_stack_space > 0
- || args[i].pass_on_stack)
+ else if (args[i].reg == 0 || args[i].partial != 0
+ || reg_parm_stack_space > 0
+ || args[i].pass_on_stack)
locate_and_pad_parm (mode, type,
#ifdef STACK_PARMS_IN_REG_PARM_AREA
1,
@@ -1539,6 +1632,12 @@ finalize_must_preallocate (int must_preallocate, int num_actuals,
partial_seen = 1;
else if (partial_seen && args[i].reg == 0)
must_preallocate = 1;
+ /* We preallocate in case there are bounds passed
+ in the bounds table to have precomputed address
+ for bounds association. */
+ else if (POINTER_BOUNDS_P (args[i].tree_value)
+ && !args[i].reg)
+ must_preallocate = 1;
if (TYPE_MODE (TREE_TYPE (args[i].tree_value)) == BLKmode
&& (TREE_CODE (args[i].tree_value) == CALL_EXPR
@@ -1590,6 +1689,10 @@ compute_argument_addresses (struct arg_data *args, rtx argblock, int num_actuals
&& args[i].partial == 0)
continue;
+ /* Pointer Bounds are never passed on the stack. */
+ if (POINTER_BOUNDS_P (args[i].tree_value))
+ continue;
+
if (CONST_INT_P (offset))
addr = plus_constant (Pmode, arg_reg, INTVAL (offset));
else
@@ -2212,6 +2315,8 @@ expand_call (tree exp, rtx target, int ignore)
/* Register in which non-BLKmode value will be returned,
or 0 if no value or if value is BLKmode. */
rtx valreg;
+ /* Register(s) in which bounds are returned. */
+ rtx valbnd = NULL;
/* Address where we should return a BLKmode value;
0 if value not BLKmode. */
rtx structure_value_addr = 0;
@@ -2463,7 +2568,7 @@ expand_call (tree exp, rtx target, int ignore)
structure_value_addr_value =
make_tree (build_pointer_type (TREE_TYPE (funtype)), temp);
- structure_value_addr_parm = 1;
+ structure_value_addr_parm = CALL_WITH_BOUNDS_P (exp) ? 2 : 1;
}
/* Count the arguments and set NUM_ACTUALS. */
@@ -2981,15 +3086,28 @@ expand_call (tree exp, rtx target, int ignore)
/* Figure out the register where the value, if any, will come back. */
valreg = 0;
+ valbnd = 0;
if (TYPE_MODE (rettype) != VOIDmode
&& ! structure_value_addr)
{
if (pcc_struct_value)
- valreg = hard_function_value (build_pointer_type (rettype),
- fndecl, NULL, (pass == 0));
+ {
+ valreg = hard_function_value (build_pointer_type (rettype),
+ fndecl, NULL, (pass == 0));
+ if (CALL_WITH_BOUNDS_P (exp))
+ valbnd = targetm.calls.
+ chkp_function_value_bounds (build_pointer_type (rettype),
+ fndecl, (pass == 0));
+ }
else
- valreg = hard_function_value (rettype, fndecl, fntype,
- (pass == 0));
+ {
+ valreg = hard_function_value (rettype, fndecl, fntype,
+ (pass == 0));
+ if (CALL_WITH_BOUNDS_P (exp))
+ valbnd = targetm.calls.chkp_function_value_bounds (rettype,
+ fndecl,
+ (pass == 0));
+ }
/* If VALREG is a PARALLEL whose first member has a zero
offset, use that. This is for targets such as m68k that
@@ -3030,7 +3148,10 @@ expand_call (tree exp, rtx target, int ignore)
for (i = 0; i < num_actuals; i++)
{
- if (args[i].reg == 0 || args[i].pass_on_stack)
+ /* Delay bounds until all other args are stored. */
+ if (POINTER_BOUNDS_P (args[i].tree_value))
+ continue;
+ else if (args[i].reg == 0 || args[i].pass_on_stack)
{
rtx_insn *before_arg = get_last_insn ();
@@ -3083,6 +3204,17 @@ expand_call (tree exp, rtx target, int ignore)
sibcall_failure = 1;
}
+ /* Store all bounds not passed in registers. */
+ for (i = 0; i < num_actuals; i++)
+ {
+ if (POINTER_BOUNDS_P (args[i].tree_value)
+ && !args[i].reg)
+ store_bounds (&args[i],
+ args[i].pointer_arg == -1
+ ? NULL
+ : &args[args[i].pointer_arg]);
+ }
+
/* If register arguments require space on the stack and stack space
was not preallocated, allocate stack space here for arguments
passed in registers. */
@@ -3487,6 +3619,9 @@ expand_call (tree exp, rtx target, int ignore)
free (stack_usage_map_buf);
+ /* Join result with returned bounds so caller may use them if needed. */
+ target = chkp_join_splitted_slot (target, valbnd);
+
return target;
}
@@ -4356,6 +4491,68 @@ emit_library_call_value (rtx orgfun, rtx value,
return result;
}
\f
+
+/* Store pointer bounds argument ARG into Bounds Table entry
+ associated with PARM. */
+static void
+store_bounds (struct arg_data *arg, struct arg_data *parm)
+{
+ rtx slot = NULL, ptr = NULL, addr = NULL;
+
+ /* We may pass bounds not associated with any pointer. */
+ if (!parm)
+ {
+ gcc_assert (arg->special_slot);
+ slot = arg->special_slot;
+ ptr = const0_rtx;
+ }
+ /* Find pointer associated with bounds and where it is
+ passed. */
+ else
+ {
+ if (!parm->reg)
+ {
+ gcc_assert (!arg->special_slot);
+
+ addr = adjust_address (parm->stack, Pmode, arg->pointer_offset);
+ }
+ else if (REG_P (parm->reg))
+ {
+ gcc_assert (arg->special_slot);
+ slot = arg->special_slot;
+
+ if (MEM_P (parm->value))
+ addr = adjust_address (parm->value, Pmode, arg->pointer_offset);
+ else if (REG_P (parm->value))
+ ptr = gen_rtx_SUBREG (Pmode, parm->value, arg->pointer_offset);
+ else
+ {
+ gcc_assert (!arg->pointer_offset);
+ ptr = parm->value;
+ }
+ }
+ else
+ {
+ gcc_assert (GET_CODE (parm->reg) == PARALLEL);
+
+ gcc_assert (arg->special_slot);
+ slot = arg->special_slot;
+
+ if (parm->parallel_value)
+ ptr = chkp_get_value_with_offs (parm->parallel_value,
+ GEN_INT (arg->pointer_offset));
+ else
+ gcc_unreachable ();
+ }
+ }
+
+ /* Expand bounds. */
+ if (!arg->value)
+ arg->value = expand_normal (arg->tree_value);
+
+ targetm.calls.store_bounds_for_arg (ptr, addr, arg->value, slot);
+}
+
/* Store a single argument for a function call
into the register or memory area where it must be passed.
*ARG describes the argument value and where to pass it.
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index f95981b..d54a18c 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -74,6 +74,8 @@ along with GCC; see the file COPYING3. If not see
#include "recog.h"
#include "output.h"
#include "builtins.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
/* Some systems use __main in a way incompatible with its use in gcc, in these
cases use the macros NAME__MAIN to give a quoted symbol and SYMBOL__MAIN to
@@ -2300,6 +2302,7 @@ expand_call_stmt (gimple stmt)
CALL_FROM_THUNK_P (exp) = gimple_call_from_thunk_p (stmt);
CALL_EXPR_VA_ARG_PACK (exp) = gimple_call_va_arg_pack_p (stmt);
SET_EXPR_LOCATION (exp, gimple_location (stmt));
+ CALL_WITH_BOUNDS_P (exp) = gimple_call_with_bounds_p (stmt);
/* Ensure RTL is created for debug args. */
if (decl && DECL_HAS_DEBUG_ARGS_P (decl))
@@ -3110,11 +3113,12 @@ expand_value_return (rtx val)
from the current function. */
static void
-expand_return (tree retval)
+expand_return (tree retval, tree bounds)
{
rtx result_rtl;
rtx val = 0;
tree retval_rhs;
+ rtx bounds_rtl;
/* If function wants no value, give it none. */
if (TREE_CODE (TREE_TYPE (TREE_TYPE (current_function_decl))) == VOID_TYPE)
@@ -3140,6 +3144,56 @@ expand_return (tree retval)
result_rtl = DECL_RTL (DECL_RESULT (current_function_decl));
+ /* Put returned bounds to the right place. */
+ bounds_rtl = DECL_BOUNDS_RTL (DECL_RESULT (current_function_decl));
+ if (bounds_rtl)
+ {
+ rtx addr, bnd;
+
+ if (bounds)
+ {
+ bnd = expand_normal (bounds);
+ targetm.calls.store_returned_bounds (bounds_rtl, bnd);
+ }
+ else if (REG_P (bounds_rtl))
+ {
+ addr = expand_normal (build_fold_addr_expr (retval_rhs));
+ addr = gen_rtx_MEM (Pmode, addr);
+ bnd = targetm.calls.load_bounds_for_arg (addr, NULL, NULL);
+ targetm.calls.store_returned_bounds (bounds_rtl, bnd);
+ }
+ else
+ {
+ int n;
+
+ gcc_assert (GET_CODE (bounds_rtl) == PARALLEL);
+
+ addr = expand_normal (build_fold_addr_expr (retval_rhs));
+ addr = gen_rtx_MEM (Pmode, addr);
+
+ for (n = 0; n < XVECLEN (bounds_rtl, 0); n++)
+ {
+ rtx offs = XEXP (XVECEXP (bounds_rtl, 0, n), 1);
+ rtx slot = XEXP (XVECEXP (bounds_rtl, 0, n), 0);
+ rtx from = adjust_address (addr, Pmode, INTVAL (offs));
+ rtx bnd = targetm.calls.load_bounds_for_arg (from, NULL, NULL);
+ targetm.calls.store_returned_bounds (slot, bnd);
+ }
+ }
+ }
+ else if (chkp_function_instrumented_p (current_function_decl)
+ && !BOUNDED_P (retval_rhs)
+ && chkp_type_has_pointer (TREE_TYPE (retval_rhs))
+ && TREE_CODE (retval_rhs) != RESULT_DECL)
+ {
+ rtx addr = expand_normal (build_fold_addr_expr (retval_rhs));
+ addr = gen_rtx_MEM (Pmode, addr);
+
+ gcc_assert (MEM_P (result_rtl));
+
+ chkp_copy_bounds_for_stack_parm (result_rtl, addr, TREE_TYPE (retval_rhs));
+ }
+
/* If we are returning the RESULT_DECL, then the value has already
been stored into it, so we don't have to do anything special. */
if (TREE_CODE (retval_rhs) == RESULT_DECL)
@@ -3245,7 +3299,7 @@ expand_gimple_stmt_1 (gimple stmt)
if (!op0)
expand_null_return ();
else
- expand_return (op0);
+ expand_return (op0, gimple_return_retbnd (stmt));
break;
case GIMPLE_ASSIGN:
@@ -5644,6 +5698,9 @@ pass_expand::execute (function *fun)
rtl_profile_for_bb (ENTRY_BLOCK_PTR_FOR_FN (fun));
+ if (chkp_function_instrumented_p (current_function_decl))
+ chkp_reset_rtl_bounds ();
+
insn_locations_init ();
if (!DECL_IS_BUILTIN (current_function_decl))
{
diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi
index 3e8a41c..1a86dc3 100644
--- a/gcc/doc/tm.texi
+++ b/gcc/doc/tm.texi
@@ -3843,6 +3843,12 @@ The return value is usually either a @code{reg} RTX for the hard
register in which to pass the argument, or zero to pass the argument
on the stack.
+The return value can be a @code{const_int} which means the argument is
+passed in a target-specific slot with the specified number. Target hooks
+should be used to store or load the argument in such a case. See
+@code{TARGET_STORE_BOUNDS_FOR_ARG} and @code{TARGET_LOAD_BOUNDS_FOR_ARG}
+for more information.
+
The value of the expression can also be a @code{parallel} RTX@. This is
used when an argument is passed in multiple locations. The mode of the
@code{parallel} should be the mode of the entire argument. The
diff --git a/gcc/expr.c b/gcc/expr.c
index a6233f3..1aaa361 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -68,6 +68,8 @@ along with GCC; see the file COPYING3. If not see
#include "tree-ssa-address.h"
#include "cfgexpand.h"
#include "builtins.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
#ifndef STACK_PUSH_CODE
#ifdef STACK_GROWS_DOWNWARD
@@ -5009,9 +5011,14 @@ expand_assignment (tree to, tree from, bool nontemporal)
|| TREE_CODE (to) == SSA_NAME))
{
rtx value;
+ rtx bounds;
push_temp_slots ();
value = expand_normal (from);
+
+ /* Split value and bounds to store them separately. */
+ chkp_split_slot (value, &value, &bounds);
+
if (to_rtx == 0)
to_rtx = expand_expr (to, NULL_RTX, VOIDmode, EXPAND_WRITE);
@@ -5045,6 +5052,15 @@ expand_assignment (tree to, tree from, bool nontemporal)
emit_move_insn (to_rtx, value);
}
+
+ /* Store bounds if required. */
+ if (bounds
+ && (BOUNDED_P (to) || chkp_type_has_pointer (TREE_TYPE (to))))
+ {
+ gcc_assert (MEM_P (to_rtx));
+ chkp_emit_bounds_store (bounds, value, to_rtx);
+ }
+
preserve_temp_slots (to_rtx);
pop_temp_slots ();
return;
@@ -5120,7 +5136,7 @@ expand_assignment (tree to, tree from, bool nontemporal)
/* Compute FROM and store the value in the rtx we got. */
push_temp_slots ();
- result = store_expr (from, to_rtx, 0, nontemporal);
+ result = store_expr_with_bounds (from, to_rtx, 0, nontemporal, to);
preserve_temp_slots (result);
pop_temp_slots ();
return;
@@ -5157,10 +5173,14 @@ emit_storent_insn (rtx to, rtx from)
If CALL_PARAM_P is nonzero, this is a store into a call param on the
stack, and block moves may need to be treated specially.
- If NONTEMPORAL is true, try using a nontemporal store instruction. */
+ If NONTEMPORAL is true, try using a nontemporal store instruction.
+
+ If BTARGET is not NULL then computed bounds of EXP are
+ associated with BTARGET. */
rtx
-store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
+store_expr_with_bounds (tree exp, rtx target, int call_param_p,
+ bool nontemporal, tree btarget)
{
rtx temp;
rtx alt_rtl = NULL_RTX;
@@ -5181,8 +5201,8 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
part. */
expand_expr (TREE_OPERAND (exp, 0), const0_rtx, VOIDmode,
call_param_p ? EXPAND_STACK_PARM : EXPAND_NORMAL);
- return store_expr (TREE_OPERAND (exp, 1), target, call_param_p,
- nontemporal);
+ return store_expr_with_bounds (TREE_OPERAND (exp, 1), target,
+ call_param_p, nontemporal, btarget);
}
else if (TREE_CODE (exp) == COND_EXPR && GET_MODE (target) == BLKmode)
{
@@ -5196,13 +5216,13 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
do_pending_stack_adjust ();
NO_DEFER_POP;
jumpifnot (TREE_OPERAND (exp, 0), lab1, -1);
- store_expr (TREE_OPERAND (exp, 1), target, call_param_p,
- nontemporal);
+ store_expr_with_bounds (TREE_OPERAND (exp, 1), target, call_param_p,
+ nontemporal, btarget);
emit_jump_insn (gen_jump (lab2));
emit_barrier ();
emit_label (lab1);
- store_expr (TREE_OPERAND (exp, 2), target, call_param_p,
- nontemporal);
+ store_expr_with_bounds (TREE_OPERAND (exp, 2), target, call_param_p,
+ nontemporal, btarget);
emit_label (lab2);
OK_DEFER_POP;
@@ -5254,6 +5274,19 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
temp = expand_expr (exp, inner_target, VOIDmode,
call_param_p ? EXPAND_STACK_PARM : EXPAND_NORMAL);
+ /* Handle bounds returned by call. */
+ if (TREE_CODE (exp) == CALL_EXPR)
+ {
+ rtx bounds;
+ chkp_split_slot (temp, &temp, &bounds);
+ if (bounds && btarget)
+ {
+ gcc_assert (TREE_CODE (btarget) == SSA_NAME);
+ rtx tmp = targetm.calls.load_returned_bounds (bounds);
+ chkp_set_rtl_bounds (btarget, tmp);
+ }
+ }
+
/* If TEMP is a VOIDmode constant, use convert_modes to make
sure that we properly convert it. */
if (CONSTANT_P (temp) && GET_MODE (temp) == VOIDmode)
@@ -5335,6 +5368,19 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
(call_param_p
? EXPAND_STACK_PARM : EXPAND_NORMAL),
&alt_rtl, false);
+
+ /* Handle bounds returned by call. */
+ if (TREE_CODE (exp) == CALL_EXPR)
+ {
+ rtx bounds;
+ chkp_split_slot (temp, &temp, &bounds);
+ if (bounds && btarget)
+ {
+ gcc_assert (TREE_CODE (btarget) == SSA_NAME);
+ rtx tmp = targetm.calls.load_returned_bounds (bounds);
+ chkp_set_rtl_bounds (btarget, tmp);
+ }
+ }
}
/* If TEMP is a VOIDmode constant and the mode of the type of EXP is not
@@ -5499,6 +5545,13 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
return NULL_RTX;
}
+
+/* Same as store_expr_with_bounds but ignoring bounds of EXP. */
+rtx
+store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
+{
+ return store_expr_with_bounds (exp, target, call_param_p, nontemporal, NULL);
+}
\f
/* Return true if field F of structure TYPE is a flexible array. */
diff --git a/gcc/expr.h b/gcc/expr.h
index 2e66329..de212d9 100644
--- a/gcc/expr.h
+++ b/gcc/expr.h
@@ -426,6 +426,7 @@ extern void expand_assignment (tree, tree, bool);
and storing the value into TARGET.
If SUGGEST_REG is nonzero, copy the value through a register
and return that register, if that is possible. */
+extern rtx store_expr_with_bounds (tree, rtx, int, bool, tree);
extern rtx store_expr (tree, rtx, int, bool);
/* Given an rtx that may include add and multiply operations,
diff --git a/gcc/function.c b/gcc/function.c
index ac50f4a..99dccdb 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -66,6 +66,8 @@ along with GCC; see the file COPYING3. If not see
#include "shrink-wrap.h"
#include "toplev.h"
#include "rtl-iter.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
/* So we can assign to cfun in this file. */
#undef cfun
@@ -2095,6 +2097,14 @@ use_register_for_decl (const_tree decl)
if (TREE_ADDRESSABLE (decl))
return false;
+ /* Decl is implicitly addressable by bound stores and loads
+ if it is an aggregate holding bounds. */
+ if (chkp_function_instrumented_p (current_function_decl)
+ && TREE_TYPE (decl)
+ && !BOUNDED_P (decl)
+ && chkp_type_has_pointer (TREE_TYPE (decl)))
+ return false;
+
/* Only register-like things go in registers. */
if (DECL_MODE (decl) == BLKmode)
return false;
@@ -2215,6 +2225,15 @@ struct assign_parm_data_one
BOOL_BITFIELD loaded_in_reg : 1;
};
+struct bounds_parm_data
+{
+ assign_parm_data_one parm_data;
+ tree bounds_parm;
+ tree ptr_parm;
+ rtx ptr_entry;
+ int bound_no;
+};
+
/* A subroutine of assign_parms. Initialize ALL. */
static void
@@ -2326,6 +2345,23 @@ assign_parms_augmented_arg_list (struct assign_parm_data_all *all)
fnargs.safe_insert (0, decl);
all->function_result_decl = decl;
+
+ /* If the function is instrumented then the bounds of the
+ passed structure address are the second argument. */
+ if (chkp_function_instrumented_p (fndecl))
+ {
+ decl = build_decl (DECL_SOURCE_LOCATION (fndecl),
+ PARM_DECL, get_identifier (".result_bnd"),
+ pointer_bounds_type_node);
+ DECL_ARG_TYPE (decl) = pointer_bounds_type_node;
+ DECL_ARTIFICIAL (decl) = 1;
+ DECL_NAMELESS (decl) = 1;
+ TREE_CONSTANT (decl) = 1;
+
+ DECL_CHAIN (decl) = DECL_CHAIN (all->orig_fnargs);
+ DECL_CHAIN (all->orig_fnargs) = decl;
+ fnargs.safe_insert (1, decl);
+ }
}
/* If the target wants to split complex arguments into scalars, do so. */
@@ -2466,7 +2502,7 @@ assign_parm_find_entry_rtl (struct assign_parm_data_all *all,
it came in a register so that REG_PARM_STACK_SPACE isn't skipped.
In this case, we call FUNCTION_ARG with NAMED set to 1 instead of 0
as it was the previous time. */
- in_regs = entry_parm != 0;
+ in_regs = (entry_parm != 0) || POINTER_BOUNDS_TYPE_P (data->passed_type);
#ifdef STACK_PARMS_IN_REG_PARM_AREA
in_regs = true;
#endif
@@ -2555,8 +2591,12 @@ static bool
assign_parm_is_stack_parm (struct assign_parm_data_all *all,
struct assign_parm_data_one *data)
{
+ /* Bounds are never passed on the stack to keep compatibility
+ with non-instrumented code. */
+ if (POINTER_BOUNDS_TYPE_P (data->passed_type))
+ return false;
/* Trivially true if we've no incoming register. */
- if (data->entry_parm == NULL)
+ else if (data->entry_parm == NULL)
;
/* Also true if we're partially in registers and partially not,
since we've arranged to drop the entire argument on the stack. */
@@ -3365,6 +3405,119 @@ assign_parms_unsplit_complex (struct assign_parm_data_all *all,
}
}
+/* Load bounds PARM from bounds table. */
+static void
+assign_parm_load_bounds (struct assign_parm_data_one *data,
+ tree parm,
+ rtx entry,
+ unsigned bound_no)
+{
+ bitmap_iterator bi;
+ unsigned i, offs = 0;
+ int bnd_no = -1;
+ rtx slot = NULL, ptr = NULL;
+
+ if (parm)
+ {
+ bitmap slots = chkp_find_bound_slots (TREE_TYPE (parm));
+ EXECUTE_IF_SET_IN_BITMAP (slots, 0, i, bi)
+ {
+ if (bound_no)
+ bound_no--;
+ else
+ {
+ bnd_no = i;
+ break;
+ }
+ }
+ BITMAP_FREE (slots);
+ }
+
+ /* We may have bounds not associated with any pointer. */
+ if (bnd_no != -1)
+ offs = bnd_no * POINTER_SIZE / BITS_PER_UNIT;
+
+ /* Find associated pointer. */
+ if (bnd_no == -1)
+ {
+ /* If bounds are not associated with any pointer,
+ then they are passed in a register or a special slot. */
+ gcc_assert (data->entry_parm);
+ ptr = const0_rtx;
+ }
+ else if (MEM_P (entry))
+ slot = adjust_address (entry, Pmode, offs);
+ else if (REG_P (entry))
+ ptr = gen_rtx_REG (Pmode, REGNO (entry) + bnd_no);
+ else if (GET_CODE (entry) == PARALLEL)
+ ptr = chkp_get_value_with_offs (entry, GEN_INT (offs));
+ else
+ gcc_unreachable ();
+ data->entry_parm = targetm.calls.load_bounds_for_arg (slot, ptr,
+ data->entry_parm);
+}
+
+/* Assign RTL expressions to the function's bounds parameters BNDARGS. */
+
+static void
+assign_bounds (vec<bounds_parm_data> &bndargs,
+ struct assign_parm_data_all &all)
+{
+ unsigned i, pass, handled = 0;
+ bounds_parm_data *pbdata;
+
+ if (!bndargs.exists ())
+ return;
+
+ /* We make a few passes to store input bounds. Firstly handle bounds
+ passed in registers. After that we load bounds passed in special
+ slots. Finally we load bounds from the Bounds Table. */
+ for (pass = 0; pass < 3; pass++)
+ FOR_EACH_VEC_ELT (bndargs, i, pbdata)
+ {
+ /* Pass 0 => regs only. */
+ if (pass == 0
+ && (!pbdata->parm_data.entry_parm
+ || GET_CODE (pbdata->parm_data.entry_parm) != REG))
+ continue;
+ /* Pass 1 => slots only. */
+ else if (pass == 1
+ && (!pbdata->parm_data.entry_parm
+ || GET_CODE (pbdata->parm_data.entry_parm) == REG))
+ continue;
+ /* Pass 2 => BT only. */
+ else if (pass == 2
+ && pbdata->parm_data.entry_parm)
+ continue;
+
+ if (!pbdata->parm_data.entry_parm
+ || GET_CODE (pbdata->parm_data.entry_parm) != REG)
+ assign_parm_load_bounds (&pbdata->parm_data, pbdata->ptr_parm,
+ pbdata->ptr_entry, pbdata->bound_no);
+
+ set_decl_incoming_rtl (pbdata->bounds_parm,
+ pbdata->parm_data.entry_parm, false);
+
+ if (assign_parm_setup_block_p (&pbdata->parm_data))
+ assign_parm_setup_block (&all, pbdata->bounds_parm,
+ &pbdata->parm_data);
+ else if (pbdata->parm_data.passed_pointer
+ || use_register_for_decl (pbdata->bounds_parm))
+ assign_parm_setup_reg (&all, pbdata->bounds_parm,
+ &pbdata->parm_data);
+ else
+ assign_parm_setup_stack (&all, pbdata->bounds_parm,
+ &pbdata->parm_data);
+
+ /* Count handled bounds to make sure we miss nothing. */
+ handled++;
+ }
+
+ gcc_assert (handled == bndargs.length ());
+
+ bndargs.release ();
+}
+
/* Assign RTL expressions to the function's parameters. This may involve
copying them into registers and using those registers as the DECL_RTL. */
@@ -3374,7 +3527,11 @@ assign_parms (tree fndecl)
struct assign_parm_data_all all;
tree parm;
vec<tree> fnargs;
- unsigned i;
+ unsigned i, bound_no = 0;
+ tree last_arg = NULL;
+ rtx last_arg_entry = NULL;
+ vec<bounds_parm_data> bndargs = vNULL;
+ bounds_parm_data bdata;
crtl->args.internal_arg_pointer
= targetm.calls.internal_arg_pointer ();
@@ -3416,9 +3573,6 @@ assign_parms (tree fndecl)
}
}
- if (cfun->stdarg && !DECL_CHAIN (parm))
- assign_parms_setup_varargs (&all, &data, false);
-
/* Find out where the parameter arrives in this function. */
assign_parm_find_entry_rtl (&all, &data);
@@ -3428,7 +3582,15 @@ assign_parms (tree fndecl)
assign_parm_find_stack_rtl (parm, &data);
assign_parm_adjust_entry_rtl (&data);
}
-
+ if (!POINTER_BOUNDS_TYPE_P (data.passed_type))
+ {
+ /* Remember where the last non-bounds arg was passed in case
+ we have to load associated bounds for it from the Bounds
+ Table. */
+ last_arg = parm;
+ last_arg_entry = data.entry_parm;
+ bound_no = 0;
+ }
/* Record permanently how this parm was passed. */
if (data.passed_pointer)
{
@@ -3440,20 +3602,63 @@ assign_parms (tree fndecl)
else
set_decl_incoming_rtl (parm, data.entry_parm, false);
+ /* Bounds should be loaded in a particular order to
+ have registers allocated correctly. Collect info about
+ input bounds and load them later. */
+ if (POINTER_BOUNDS_TYPE_P (data.passed_type))
+ {
+ /* Expect bounds in instrumented functions only. */
+ gcc_assert (chkp_function_instrumented_p (fndecl));
+
+ bdata.parm_data = data;
+ bdata.bounds_parm = parm;
+ bdata.ptr_parm = last_arg;
+ bdata.ptr_entry = last_arg_entry;
+ bdata.bound_no = bound_no;
+ bndargs.safe_push (bdata);
+ }
+ else
+ {
+ assign_parm_adjust_stack_rtl (&data);
+
+ if (assign_parm_setup_block_p (&data))
+ assign_parm_setup_block (&all, parm, &data);
+ else if (data.passed_pointer || use_register_for_decl (parm))
+ assign_parm_setup_reg (&all, parm, &data);
+ else
+ assign_parm_setup_stack (&all, parm, &data);
+ }
+
+ if (cfun->stdarg && !DECL_CHAIN (parm))
+ {
+ int pretend_bytes = 0;
+
+ assign_parms_setup_varargs (&all, &data, false);
+
+ if (chkp_function_instrumented_p (fndecl))
+ {
+ /* We expect this is the last parm. Otherwise it is wrong
+ to assign bounds right now. */
+ gcc_assert (i == (fnargs.length () - 1));
+ assign_bounds (bndargs, all);
+ targetm.calls.setup_incoming_vararg_bounds (all.args_so_far,
+ data.promoted_mode,
+ data.passed_type,
+ &pretend_bytes,
+ false);
+ }
+ }
+
/* Update info on where next arg arrives in registers. */
targetm.calls.function_arg_advance (all.args_so_far, data.promoted_mode,
data.passed_type, data.named_arg);
- assign_parm_adjust_stack_rtl (&data);
-
- if (assign_parm_setup_block_p (&data))
- assign_parm_setup_block (&all, parm, &data);
- else if (data.passed_pointer || use_register_for_decl (parm))
- assign_parm_setup_reg (&all, parm, &data);
- else
- assign_parm_setup_stack (&all, parm, &data);
+ if (POINTER_BOUNDS_TYPE_P (data.passed_type))
+ bound_no++;
}
+ assign_bounds (bndargs, all);
+
if (targetm.calls.split_complex_arg)
assign_parms_unsplit_complex (&all, fnargs);
@@ -3574,6 +3779,10 @@ assign_parms (tree fndecl)
real_decl_rtl = targetm.calls.function_value (TREE_TYPE (decl_result),
fndecl, true);
+ if (chkp_function_instrumented_p (fndecl))
+ crtl->return_bnd
+ = targetm.calls.chkp_function_value_bounds (TREE_TYPE (decl_result),
+ fndecl, true);
REG_FUNCTION_VALUE_P (real_decl_rtl) = 1;
/* The delay slot scheduler assumes that crtl->return_rtx
holds the hard register containing the return value, not a
@@ -4804,6 +5013,14 @@ expand_function_start (tree subr)
/* Set DECL_REGISTER flag so that expand_function_end will copy the
result to the real return register(s). */
DECL_REGISTER (DECL_RESULT (subr)) = 1;
+
+ if (chkp_function_instrumented_p (current_function_decl))
+ {
+ tree return_type = TREE_TYPE (DECL_RESULT (subr));
+ rtx bounds = targetm.calls.chkp_function_value_bounds (return_type,
+ subr, 1);
+ SET_DECL_BOUNDS_RTL (DECL_RESULT (subr), bounds);
+ }
}
/* Initialize rtx for parameters and local variables.
@@ -4907,14 +5124,11 @@ expand_dummy_function_end (void)
in_dummy_function = false;
}
-/* Call DOIT for each hard register used as a return value from
- the current function. */
+/* Helper for diddle_return_value. */
void
-diddle_return_value (void (*doit) (rtx, void *), void *arg)
+diddle_return_value_1 (void (*doit) (rtx, void *), void *arg, rtx outgoing)
{
- rtx outgoing = crtl->return_rtx;
-
if (! outgoing)
return;
@@ -4934,6 +5148,16 @@ diddle_return_value (void (*doit) (rtx, void *), void *arg)
}
}
+/* Call DOIT for each hard register used as a return value from
+ the current function. */
+
+void
+diddle_return_value (void (*doit) (rtx, void *), void *arg)
+{
+ diddle_return_value_1 (doit, arg, crtl->return_rtx);
+ diddle_return_value_1 (doit, arg, crtl->return_bnd);
+}
+
static void
do_clobber_return_reg (rtx reg, void *arg ATTRIBUTE_UNUSED)
{
diff --git a/gcc/function.h b/gcc/function.h
index e71210d..5cd0149 100644
--- a/gcc/function.h
+++ b/gcc/function.h
@@ -253,6 +253,9 @@ struct GTY(()) rtl_data {
result in a register, current_function_return_rtx will always be
the hard register containing the result. */
rtx return_rtx;
+ /* If nonzero, an RTL expression for the location at which the current
+ function returns bounds for its result. */
+ rtx return_bnd;
/* Vector of initial-value pairs. Each pair consists of a pseudo
register of approprite mode that stores the initial value a hard
diff --git a/gcc/target.def b/gcc/target.def
index 7952f02..ae01d82 100644
--- a/gcc/target.def
+++ b/gcc/target.def
@@ -4088,6 +4088,12 @@ The return value is usually either a @code{reg} RTX for the hard\n\
register in which to pass the argument, or zero to pass the argument\n\
on the stack.\n\
\n\
+The return value can be a @code{const_int} which means the argument is\n\
+passed in a target-specific slot with the specified number. Target hooks\n\
+should be used to store or load the argument in such a case. See\n\
+@code{TARGET_STORE_BOUNDS_FOR_ARG} and @code{TARGET_LOAD_BOUNDS_FOR_ARG}\n\
+for more information.\n\
+\n\
The value of the expression can also be a @code{parallel} RTX@. This is\n\
used when an argument is passed in multiple locations. The mode of the\n\
@code{parallel} should be the mode of the entire argument. The\n\
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand
2014-10-02 14:03 ` Ilya Enkovich
@ 2014-10-03 20:07 ` Jeff Law
0 siblings, 0 replies; 11+ messages in thread
From: Jeff Law @ 2014-10-03 20:07 UTC (permalink / raw)
To: Ilya Enkovich; +Cc: Michael Matz, gcc-patches
On 10/02/14 08:03, Ilya Enkovich wrote:
> Here is an updated version.
>
> Thanks,
> Ilya
> --
> 2014-10-02 Ilya Enkovich <ilya.enkovich@intel.com>
>
> * calls.c: Include tree-chkp.h, rtl-chkp.h, bitmap.h.
> (arg_data): Add fields special_slot, pointer_arg and
> pointer_offset.
> (store_bounds): New.
> (emit_call_1): Propagate instrumentation flag for CALL.
> (initialize_argument_information): Compute pointer_arg,
> pointer_offset and special_slot for pointer bounds arguments.
> (finalize_must_preallocate): Preallocate when storing bounds
> in bounds table.
> (compute_argument_addresses): Skip pointer bounds.
> (expand_call): Store bounds into tables separately. Return
> result joined with resulting bounds.
> * cfgexpand.c: Include tree-chkp.h, rtl-chkp.h.
> (expand_call_stmt): Propagate bounds flag for CALL_EXPR.
> (expand_return): Add returned bounds arg. Handle returned bounds.
> (expand_gimple_stmt_1): Adjust to new expand_return signature.
> (gimple_expand_cfg): Reset rtx bounds map.
> * expr.c: Include tree-chkp.h, rtl-chkp.h.
> (expand_assignment): Handle returned bounds.
> (store_expr_with_bounds): New. Replaces store_expr with new bounds
> target argument. Handle bounds returned by calls.
> (store_expr): Now wraps store_expr_with_bounds.
> * expr.h (store_expr_with_bounds): New.
> * function.c: Include tree-chkp.h, rtl-chkp.h.
> (bounds_parm_data): New.
> (use_register_for_decl): Do not registerize decls used for bounds
> stores and loads.
> (assign_parms_augmented_arg_list): Add bounds of the result
> structure pointer as the second argument.
> (assign_parm_find_entry_rtl): Mark bounds are never passed on
> the stack.
> (assign_parm_is_stack_parm): Likewise.
> (assign_parm_load_bounds): New.
> (assign_bounds): New.
> (assign_parms): Load bounds and determine a location for
> returned bounds.
> (diddle_return_value_1): New.
> (diddle_return_value): Handle returned bounds.
> * function.h (rtl_data): Add field for returned bounds.
> * target.def (TARGET_FUNCTION_ARG): Update hook description with new
> possible return value CONST_INT.
> * doc/tm.texi: Regenerate.
>
>
OK. Thanks for your patience with my questions/concerns.
jeff
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand
2014-06-04 14:36 ` Michael Matz
2014-06-05 14:46 ` Ilya Enkovich
@ 2014-11-05 23:05 ` Eric Botcazou
1 sibling, 0 replies; 11+ messages in thread
From: Eric Botcazou @ 2014-11-05 23:05 UTC (permalink / raw)
To: Michael Matz; +Cc: gcc-patches, Ilya Enkovich
> IMHO it does. That or introducing a new store_expr_with_bounds (with the
> new argument) and letting store_expr be a wrapper for that, passing the
> NULL. Basically anything that avoids adding a new parameter for most of
> the existing calls to store_expr.
That looks so C-ish though... Can't we use a parameter with a default value?
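
For illustration, the two shapes being discussed would look roughly
like this (signatures simplified from expr.h; the default-argument
variant is only a sketch of the suggestion, not what the patch does):

  /* Wrapper style, as in the posted patch.  */
  extern rtx store_expr_with_bounds (tree, rtx, int, bool, tree);
  extern rtx store_expr (tree, rtx, int, bool);

  /* Default-argument style: the extra parameter defaults to NULL_TREE,
     so none of the existing calls to store_expr need to change.  */
  extern rtx store_expr (tree exp, rtx target, int call_param_p,
                         bool nontemporal, tree btarget = NULL_TREE);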
--
Eric Botcazou
^ permalink raw reply [flat|nested] 11+ messages in thread
end of thread, other threads:[~2014-11-05 23:05 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-02 15:03 [PATCH, Pointer Bounds Checker 19/x] Support bounds in expand Ilya Enkovich
2014-06-02 15:28 ` Michael Matz
2014-06-02 15:55 ` Ilya Enkovich
2014-06-04 14:36 ` Michael Matz
2014-06-05 14:46 ` Ilya Enkovich
2014-09-15 7:20 ` Ilya Enkovich
2014-09-23 20:58 ` Jeff Law
2014-09-24 8:29 ` Ilya Enkovich
2014-10-02 14:03 ` Ilya Enkovich
2014-10-03 20:07 ` Jeff Law
2014-11-05 23:05 ` Eric Botcazou