Subject: Re: [PATCH, ARM] Unaligned accesses for packed structures [1/2]
To: julian@codesourcery.com (Julian Brown)
Date: Mon, 04 Jul 2011 11:44:00 -0000
From: "Ulrich Weigand"
Cc: gcc-patches@gcc.gnu.org, paul@codesourcery.com, rearnsha@arm.com, ramana.radhakrishnan@linaro.org (Ramana Radhakrishnan)
In-Reply-To: <20110506133414.14dc027c@rex.config> from "Julian Brown" at May 06, 2011 01:34:14 PM
Julian Brown wrote:

> The most awkward change in the patch is to generic code (expmed.c,
> {store,extract}_bit_field_1): in big-endian mode, the existing behaviour
> (when inserting/extracting a bitfield to a memory location) is
> definitely bogus: "unit" is set to BITS_PER_UNIT for memory locations,
> and if bitsize (the size of the field to insert/extract) is greater than
> BITS_PER_UNIT (which isn't unusual at all), xbitpos becomes negative.
> That can't possibly be intentional; I can only assume that this code
> path is not exercised for machines which have memory alternatives for
> bitfield insert/extract, and BITS_BIG_ENDIAN of 0 in BYTES_BIG_ENDIAN
> mode.

[snip]

> @@ -648,7 +648,7 @@ store_bit_field_1 (rtx str_rtx, unsigned HOST_WIDE_INT bitsize,
>    /* On big-endian machines, we count bits from the most significant.
>       If the bit field insn does not, we must invert.  */
>
> -  if (BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN)
> +  if (BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN && !MEM_P (xop0))
>      xbitpos = unit - bitsize - xbitpos;

I agree that the current code cannot possibly be correct.  However, just
disabling the BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN renumbering *completely*
seems wrong to me as well.

According to the docs, the meaning of the bit position passed to the
extv/insv expanders is determined by BITS_BIG_ENDIAN, both in the case of
register operands and in the case of memory operands.

Therefore, if BITS_BIG_ENDIAN differs from BYTES_BIG_ENDIAN, we should
need a correction for memory operands as well.  However, this correction
needs to be relative to the size of the access (i.e. the operand to the
extv/insv), not just BITS_PER_UNIT.
From looking at the sources, the simplest way to implement that might be
to swap the order of the two corrections, that is, change this:

  /* On big-endian machines, we count bits from the most significant.
     If the bit field insn does not, we must invert.  */

  if (BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN)
    xbitpos = unit - bitsize - xbitpos;

  /* We have been counting XBITPOS within UNIT.
     Count instead within the size of the register.  */
  if (BITS_BIG_ENDIAN && !MEM_P (xop0))
    xbitpos += GET_MODE_BITSIZE (op_mode) - unit;

  unit = GET_MODE_BITSIZE (op_mode);

to look instead like:

  /* We have been counting XBITPOS within UNIT.
     Count instead within the size of the register.  */
  if (BYTES_BIG_ENDIAN && !MEM_P (xop0))
    xbitpos += GET_MODE_BITSIZE (op_mode) - unit;

  unit = GET_MODE_BITSIZE (op_mode);

  /* On big-endian machines, we count bits from the most significant.
     If the bit field insn does not, we must invert.  */

  if (BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN)
    xbitpos = unit - bitsize - xbitpos;

(Note that the condition in the first if must then check BYTES_BIG_ENDIAN
instead of BITS_BIG_ENDIAN.)

This change results in unchanged behaviour for register operands in all
cases, and for memory operands if BITS_BIG_ENDIAN == BYTES_BIG_ENDIAN.
For the problematic case of memory operands with
BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN, it should result in the appropriate
correction.

Note that with that change, the new code your patch introduces to the
ARM back-end will also need to change.  You currently handle bitpos
like this:

  base_addr = adjust_address (operands[1], HImode,
                              bitpos / BITS_PER_UNIT);

This implicitly assumes that bitpos counts according to BYTES_BIG_ENDIAN,
not BITS_BIG_ENDIAN -- which exactly cancels out the common code
behaviour introduced by your patch ...

Thoughts?  Am I overlooking something here?

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com