From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, richard.sandiford@linaro.org
Subject: [051/nnn] poly_int: emit_group_load/store
References: <871sltvm7r.fsf@linaro.org>
Date: Mon, 23 Oct 2017 17:22:00 -0000
In-Reply-To: <871sltvm7r.fsf@linaro.org> (Richard Sandiford's message of "Mon, 23 Oct 2017 17:54:32 +0100")
Message-ID: <87r2ttlqze.fsf@linaro.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain

This patch changes the sizes passed to emit_group_load and
emit_group_store from int to poly_int64.


2017-10-23  Richard Sandiford
	    Alan Hayward
	    David Sherwood

gcc/
	* expr.h (emit_group_load, emit_group_load_into_temps)
	(emit_group_store): Take the size as a poly_int64 rather than
	an int.
	* expr.c (emit_group_load_1, emit_group_load): Likewise.
	(emit_group_load_into_temps, emit_group_store): Likewise.
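[Editorial aside, not part of the patch: the hunks below replace plain
integer comparisons on the size with the poly_int predicates used
elsewhere in this series (may_gt, must_eq, known_size_p, and friends).
The self-contained sketch below models a size as A + B * N for an
unknown non-negative runtime factor N, to illustrate those "may"/"must"
semantics; the helpers here are deliberately simplified stand-ins, not
the real poly-int.h implementations.]

/* Standalone sketch: a size of the form A + B * N, with N >= 0.  */
#include <cstdint>
#include <cstdio>

struct poly_size
{
  int64_t a, b;   /* value = a + b * N, with N >= 0 */
};

/* True if X > Y for at least one choice of N ("may be greater").  */
static bool
may_gt (poly_size x, poly_size y)
{
  return x.a > y.a || x.b > y.b;
}

/* True if X == Y for every choice of N ("must be equal").  */
static bool
must_eq (poly_size x, poly_size y)
{
  return x.a == y.a && x.b == y.b;
}

/* In the interfaces below, an ssize of -1 means "size not known";
   treat anything that is not the constant -1 as a known size.  */
static bool
known_size_p (poly_size x)
{
  return x.a != -1 || x.b != 0;
}

int
main ()
{
  poly_size ssize = { 16, 16 };     /* whole struct: 16 + 16N bytes */
  poly_size frag_end = { 8, 16 };   /* bytepos + bytelen of one fragment */

  /* Mirrors the shape of the new guard in emit_group_load_1 and
     emit_group_store: the trailing-fragment path only runs when the
     fragment may extend past the end of the struct for some runtime
     size.  */
  if (known_size_p (ssize) && may_gt (frag_end, ssize))
    printf ("fragment may overrun the struct\n");
  else
    printf ("fragment always fits\n");   /* taken for this example */

  printf ("fragment end %s the struct size for every N\n",
          must_eq (frag_end, ssize) ? "equals" : "does not equal");
  return 0;
}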
Index: gcc/expr.h
===================================================================
--- gcc/expr.h	2017-10-23 17:18:56.434286222 +0100
+++ gcc/expr.h	2017-10-23 17:20:49.571719793 +0100
@@ -128,10 +128,10 @@ extern rtx gen_group_rtx (rtx);
 
 /* Load a BLKmode value into non-consecutive registers represented
    by a PARALLEL.  */
-extern void emit_group_load (rtx, rtx, tree, int);
+extern void emit_group_load (rtx, rtx, tree, poly_int64);
 
 /* Similarly, but load into new temporaries.  */
-extern rtx emit_group_load_into_temps (rtx, rtx, tree, int);
+extern rtx emit_group_load_into_temps (rtx, rtx, tree, poly_int64);
 
 /* Move a non-consecutive group of registers represented by a PARALLEL into
    a non-consecutive group of registers represented by a PARALLEL.  */
@@ -142,7 +142,7 @@ extern rtx emit_group_move_into_temps (r
 
 /* Store a BLKmode value from non-consecutive registers represented
    by a PARALLEL.  */
-extern void emit_group_store (rtx, rtx, tree, int);
+extern void emit_group_store (rtx, rtx, tree, poly_int64);
 
 extern rtx maybe_emit_group_store (rtx, tree);
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2017-10-23 17:18:57.860160878 +0100
+++ gcc/expr.c	2017-10-23 17:20:49.571719793 +0100
@@ -2095,7 +2095,8 @@ gen_group_rtx (rtx orig)
    into corresponding XEXP (XVECEXP (DST, 0, i), 0) element.  */
 
 static void
-emit_group_load_1 (rtx *tmps, rtx dst, rtx orig_src, tree type, int ssize)
+emit_group_load_1 (rtx *tmps, rtx dst, rtx orig_src, tree type,
+                   poly_int64 ssize)
 {
   rtx src;
   int start, i;
@@ -2134,12 +2135,16 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
   for (i = start; i < XVECLEN (dst, 0); i++)
     {
       machine_mode mode = GET_MODE (XEXP (XVECEXP (dst, 0, i), 0));
-      HOST_WIDE_INT bytepos = INTVAL (XEXP (XVECEXP (dst, 0, i), 1));
-      unsigned int bytelen = GET_MODE_SIZE (mode);
-      int shift = 0;
-
-      /* Handle trailing fragments that run over the size of the struct.  */
-      if (ssize >= 0 && bytepos + (HOST_WIDE_INT) bytelen > ssize)
+      poly_int64 bytepos = INTVAL (XEXP (XVECEXP (dst, 0, i), 1));
+      poly_int64 bytelen = GET_MODE_SIZE (mode);
+      poly_int64 shift = 0;
+
+      /* Handle trailing fragments that run over the size of the struct.
+         It's the target's responsibility to make sure that the fragment
+         cannot be strictly smaller in some cases and strictly larger
+         in others.  */
+      gcc_checking_assert (ordered_p (bytepos + bytelen, ssize));
+      if (known_size_p (ssize) && may_gt (bytepos + bytelen, ssize))
         {
           /* Arrange to shift the fragment to where it belongs.
              extract_bit_field loads to the lsb of the reg.  */
@@ -2153,7 +2158,7 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
               )
             shift = (bytelen - (ssize - bytepos)) * BITS_PER_UNIT;
           bytelen = ssize - bytepos;
-          gcc_assert (bytelen > 0);
+          gcc_assert (may_gt (bytelen, 0));
         }
 
       /* If we won't be loading directly from memory, protect the real source
@@ -2177,33 +2182,34 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
       if (MEM_P (src)
          && (! targetm.slow_unaligned_access (mode, MEM_ALIGN (src))
              || MEM_ALIGN (src) >= GET_MODE_ALIGNMENT (mode))
-         && bytepos * BITS_PER_UNIT % GET_MODE_ALIGNMENT (mode) == 0
-         && bytelen == GET_MODE_SIZE (mode))
+         && multiple_p (bytepos * BITS_PER_UNIT, GET_MODE_ALIGNMENT (mode))
+         && must_eq (bytelen, GET_MODE_SIZE (mode)))
         {
          tmps[i] = gen_reg_rtx (mode);
          emit_move_insn (tmps[i], adjust_address (src, mode, bytepos));
        }
       else if (COMPLEX_MODE_P (mode)
               && GET_MODE (src) == mode
-              && bytelen == GET_MODE_SIZE (mode))
+              && must_eq (bytelen, GET_MODE_SIZE (mode)))
        /* Let emit_move_complex do the bulk of the work.  */
        tmps[i] = src;
       else if (GET_CODE (src) == CONCAT)
        {
-         unsigned int slen = GET_MODE_SIZE (GET_MODE (src));
-         unsigned int slen0 = GET_MODE_SIZE (GET_MODE (XEXP (src, 0)));
-         unsigned int elt = bytepos / slen0;
-         unsigned int subpos = bytepos % slen0;
+         poly_int64 slen = GET_MODE_SIZE (GET_MODE (src));
+         poly_int64 slen0 = GET_MODE_SIZE (GET_MODE (XEXP (src, 0)));
+         unsigned int elt;
+         poly_int64 subpos;
 
-         if (subpos + bytelen <= slen0)
+         if (can_div_trunc_p (bytepos, slen0, &elt, &subpos)
+             && must_le (subpos + bytelen, slen0))
            {
              /* The following assumes that the concatenated objects all
                 have the same size.  In this case, a simple calculation
                 can be used to determine the object and the bit field
                 to be extracted.  */
              tmps[i] = XEXP (src, elt);
-             if (subpos != 0
-                 || subpos + bytelen != slen0
+             if (maybe_nonzero (subpos)
+                 || may_ne (subpos + bytelen, slen0)
                 || (!CONSTANT_P (tmps[i])
                     && (!REG_P (tmps[i]) || GET_MODE (tmps[i]) != mode)))
                tmps[i] = extract_bit_field (tmps[i], bytelen * BITS_PER_UNIT,
@@ -2215,7 +2221,7 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
            {
              rtx mem;
 
-             gcc_assert (!bytepos);
+             gcc_assert (known_zero (bytepos));
              mem = assign_stack_temp (GET_MODE (src), slen);
              emit_move_insn (mem, src);
              tmps[i] = extract_bit_field (mem, bytelen * BITS_PER_UNIT,
@@ -2234,23 +2240,21 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
 
          mem = assign_stack_temp (GET_MODE (src), slen);
          emit_move_insn (mem, src);
-         tmps[i] = adjust_address (mem, mode, (int) bytepos);
+         tmps[i] = adjust_address (mem, mode, bytepos);
        }
       else if (CONSTANT_P (src) && GET_MODE (dst) != BLKmode
               && XVECLEN (dst, 0) > 1)
        tmps[i] = simplify_gen_subreg (mode, src, GET_MODE (dst), bytepos);
       else if (CONSTANT_P (src))
        {
-         HOST_WIDE_INT len = (HOST_WIDE_INT) bytelen;
-
-         if (len == ssize)
+         if (must_eq (bytelen, ssize))
            tmps[i] = src;
          else
            {
              rtx first, second;
 
              /* TODO: const_wide_int can have sizes other than this...  */
-             gcc_assert (2 * len == ssize);
+             gcc_assert (must_eq (2 * bytelen, ssize));
              split_double (src, &first, &second);
              if (i)
                tmps[i] = second;
@@ -2265,7 +2269,7 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
                                     bytepos * BITS_PER_UNIT, 1, NULL_RTX,
                                     mode, mode, false, NULL);
 
-      if (shift)
+      if (maybe_nonzero (shift))
        tmps[i] = expand_shift (LSHIFT_EXPR, mode, tmps[i], shift,
                                tmps[i], 0);
     }
@@ -2277,7 +2281,7 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
    if not known.  */
 
 void
-emit_group_load (rtx dst, rtx src, tree type, int ssize)
+emit_group_load (rtx dst, rtx src, tree type, poly_int64 ssize)
 {
   rtx *tmps;
   int i;
@@ -2300,7 +2304,7 @@ emit_group_load (rtx dst, rtx src, tree
    in the right place.  */
 
 rtx
-emit_group_load_into_temps (rtx parallel, rtx src, tree type, int ssize)
+emit_group_load_into_temps (rtx parallel, rtx src, tree type, poly_int64 ssize)
 {
   rtvec vec;
   int i;
@@ -2371,7 +2375,8 @@ emit_group_move_into_temps (rtx src)
    known.  */
 
 void
-emit_group_store (rtx orig_dst, rtx src, tree type ATTRIBUTE_UNUSED, int ssize)
+emit_group_store (rtx orig_dst, rtx src, tree type ATTRIBUTE_UNUSED,
+                  poly_int64 ssize)
 {
   rtx *tmps, dst;
   int start, finish, i;
@@ -2502,24 +2507,28 @@ emit_group_store (rtx orig_dst, rtx src,
   /* Process the pieces.  */
   for (i = start; i < finish; i++)
     {
-      HOST_WIDE_INT bytepos = INTVAL (XEXP (XVECEXP (src, 0, i), 1));
+      poly_int64 bytepos = INTVAL (XEXP (XVECEXP (src, 0, i), 1));
       machine_mode mode = GET_MODE (tmps[i]);
-      unsigned int bytelen = GET_MODE_SIZE (mode);
-      unsigned int adj_bytelen;
+      poly_int64 bytelen = GET_MODE_SIZE (mode);
+      poly_uint64 adj_bytelen;
       rtx dest = dst;
 
-      /* Handle trailing fragments that run over the size of the struct.  */
-      if (ssize >= 0 && bytepos + (HOST_WIDE_INT) bytelen > ssize)
+      /* Handle trailing fragments that run over the size of the struct.
+         It's the target's responsibility to make sure that the fragment
+         cannot be strictly smaller in some cases and strictly larger
+         in others.  */
+      gcc_checking_assert (ordered_p (bytepos + bytelen, ssize));
+      if (known_size_p (ssize) && may_gt (bytepos + bytelen, ssize))
        adj_bytelen = ssize - bytepos;
       else
        adj_bytelen = bytelen;
 
       if (GET_CODE (dst) == CONCAT)
        {
-         if (bytepos + adj_bytelen
-             <= GET_MODE_SIZE (GET_MODE (XEXP (dst, 0))))
+         if (must_le (bytepos + adj_bytelen,
+                      GET_MODE_SIZE (GET_MODE (XEXP (dst, 0)))))
            dest = XEXP (dst, 0);
-         else if (bytepos >= GET_MODE_SIZE (GET_MODE (XEXP (dst, 0))))
+         else if (must_ge (bytepos, GET_MODE_SIZE (GET_MODE (XEXP (dst, 0)))))
            {
              bytepos -= GET_MODE_SIZE (GET_MODE (XEXP (dst, 0)));
              dest = XEXP (dst, 1);
@@ -2529,7 +2538,7 @@ emit_group_store (rtx orig_dst, rtx src,
              machine_mode dest_mode = GET_MODE (dest);
              machine_mode tmp_mode = GET_MODE (tmps[i]);
 
-             gcc_assert (bytepos == 0 && XVECLEN (src, 0));
+             gcc_assert (known_zero (bytepos) && XVECLEN (src, 0));
 
              if (GET_MODE_ALIGNMENT (dest_mode)
                  >= GET_MODE_ALIGNMENT (tmp_mode))
@@ -2554,7 +2563,7 @@ emit_group_store (rtx orig_dst, rtx src,
        }
 
       /* Handle trailing fragments that run over the size of the struct.  */
-      if (ssize >= 0 && bytepos + (HOST_WIDE_INT) bytelen > ssize)
+      if (known_size_p (ssize) && may_gt (bytepos + bytelen, ssize))
        {
          /* store_bit_field always takes its value from the lsb.
             Move the fragment to the lsb if it's not already there.  */
@@ -2567,7 +2576,7 @@ emit_group_store (rtx orig_dst, rtx src,
 #endif
              )
            {
-             int shift = (bytelen - (ssize - bytepos)) * BITS_PER_UNIT;
+             poly_int64 shift = (bytelen - (ssize - bytepos)) * BITS_PER_UNIT;
              tmps[i] = expand_shift (RSHIFT_EXPR, mode, tmps[i], shift,
                                      tmps[i], 0);
            }
@@ -2583,8 +2592,9 @@ emit_group_store (rtx orig_dst, rtx src,
       else if (MEM_P (dest)
               && (!targetm.slow_unaligned_access (mode, MEM_ALIGN (dest))
                   || MEM_ALIGN (dest) >= GET_MODE_ALIGNMENT (mode))
-              && bytepos * BITS_PER_UNIT % GET_MODE_ALIGNMENT (mode) == 0
-              && bytelen == GET_MODE_SIZE (mode))
+              && multiple_p (bytepos * BITS_PER_UNIT,
+                             GET_MODE_ALIGNMENT (mode))
+              && must_eq (bytelen, GET_MODE_SIZE (mode)))
        emit_move_insn (adjust_address (dest, mode, bytepos), tmps[i]);
 
       else