From: Chung-Lin Tang <cltang@codesourcery.com>
To: Richard Earnshaw <rearnsha@arm.com>
Cc: gcc-patches <gcc-patches@gcc.gnu.org>,
	 Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
Subject: Re: [patch, ARM] Fix PR48250, adjust DImode reload address legitimizing
Date: Thu, 31 Mar 2011 14:36:00 -0000
Message-ID: <4D948D44.2040207@codesourcery.com>
In-Reply-To: <1301566482.7986.23.camel@e102346-lin.cambridge.arm.com>

[-- Attachment #1: Type: text/plain, Size: 10536 bytes --]

On 2011/3/31 06:14 PM, Richard Earnshaw wrote:
> 
> On Thu, 2011-03-31 at 11:33 +0800, Chung-Lin Tang wrote:
>> On 2011/3/30 05:28 PM, Richard Earnshaw wrote:
>>>
>>> On Wed, 2011-03-30 at 15:35 +0800, Chung-Lin Tang wrote:
>>>> On 2011/3/30 12:23 AM, Richard Earnshaw wrote:
>>>>>
>>>>> On Tue, 2011-03-29 at 22:53 +0800, Chung-Lin Tang wrote:
>>>>>> On 2011/3/29 10:26 PM, Richard Earnshaw wrote:
>>>>>>> On Tue, 2011-03-29 at 18:25 +0800, Chung-Lin Tang wrote:
>>>>>>>> On 2011/3/24 06:51 PM, Richard Earnshaw wrote:
>>>>>>>>>
>>>>>>>>> On Thu, 2011-03-24 at 12:56 +0900, Chung-Lin Tang wrote:
>>>>>>>>>> Hi,
>>>>>>>>>> PR48250 happens under TARGET_NEON, where DImode is included within the
>>>>>>>>>> valid NEON modes. This restricts legitimate constant indexes to
>>>>>>>>>> multiples of 4 (coprocessor load/store), so when
>>>>>>>>>> arm_legitimize_reload_address() tries to decompose the [reg+index]
>>>>>>>>>> reload address into [(reg+index_high)+index_low], it can cause an ICE
>>>>>>>>>> later when the 'index_low' part is not aligned to 4.
>>>>>>>>>>
>>>>>>>>>> I'm not sure why the current DImode index is computed as
>>>>>>>>>> low = ((val & 0xf) ^ 0x8) - 0x8;  sign-extending into negative
>>>>>>>>>> values and then subtracting back actually creates indexes that are
>>>>>>>>>> further off; e.g. in the supplied testcase, [sp+13] was turned into
>>>>>>>>>> [(sp+16)-3].
>>>>>>>>>>
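
For illustration only, a rough sketch of the arithmetic behind that old
formula on the testcase offset (not taken from any patch):

	low  = ((13 & 0xf) ^ 0x8) - 0x8;  /* = (0xd ^ 0x8) - 8 = 5 - 8 = -3 */
	high = 13 - low;                  /* = 16                           */
	/* [sp+13] thus becomes [(sp+16)-3]; -3 is not a multiple of 4, which
	   the NEON DImode load/store patterns reject, hence the ICE.  */
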
>>>>>>>>>
>>>>>>>>> Hysterical Raisins... the code there was clearly written for the days
>>>>>>>>> before we had LDRD in the architecture.  At that time the most efficient
>>>>>>>>> way to load a 64-bit object was to use the LDM{ia,ib,da,db}
>>>>>>>>> instructions.  The computation here was (I think) intended to try and
>>>>>>>>> make the most efficient use of an add/sub instruction followed by
>>>>>>>>> LDM/STM offsetting.  At that time the architecture had no unaligned
>>>>>>>>> access either, so dealing with 64-bit objects that were less than
>>>>>>>>> 32-bit aligned (in those days 32-bit was the maximum alignment)
>>>>>>>>> probably wasn't considered, or couldn't even get through to reload.
>>>>>>>>>
>>>>>>>>
>>>>>>>> I see it now. The code in output_move_double() returning assembly for
>>>>>>>> ldm/stm(db/da/ib) for offsets -8/-4/+4 seems to confirm this.
>>>>>>>>
>>>>>>>> I have changed the patch to let the new code handle the TARGET_LDRD case
>>>>>>>> only.  The pre-LDRD case is still handled by the original code, with an
>>>>>>>> additional & ~0x3 for aligning the offset to 4.
>>>>>>>>
>>>>>>>> I've also added a comment for the pre-TARGET_LDRD case. Please see if
>>>>>>>> the description is accurate enough.
>>>>>>>>
>>>>>>>>>> My patch changes the index decomposition to a more straightforward
>>>>>>>>>> way; it also sort of highlights that the way the other reload address
>>>>>>>>>> indexes are broken up using AND-masks is not the most effective.  The
>>>>>>>>>> address is computed by addition, so subtracting away the parts to
>>>>>>>>>> obtain low+high should be the optimal way of giving the largest
>>>>>>>>>> computable index range.
>>>>>>>>>>
>>>>>>>>>> I have included a few Thumb-2 bits in the patch; I know currently
>>>>>>>>>> arm_legitimize_reload_address() is only used under TARGET_ARM, but I
>>>>>>>>>> guess it might eventually be turned into TARGET_32BIT.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I think this needs to be looked at carefully on ARMv4/ARMv4T to check
>>>>>>>>> that it doesn't cause regressions there when we don't have LDRD in the
>>>>>>>>> instruction set.
>>>>>>>>
>>>>>>>> I'll be testing the modified patch under an ARMv4/ARMv4T configuration.
>>>>>>>> Okay for trunk if no regressions?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Chung-Lin
>>>>>>>>
>>>>>>>> 	PR target/48250
>>>>>>>> 	* config/arm/arm.c (arm_legitimize_reload_address): Adjust
>>>>>>>> 	DImode constant index decomposing under TARGET_LDRD. Clear
>>>>>>>> 	lower two bits for NEON, Thumb-2, and !TARGET_LDRD. Add
>>>>>>>> 	comment for !TARGET_LDRD case.
>>>>>>>
>>>>>>> This looks technically correct, but I can't help feeling that trying to
>>>>>>> deal with the bottom two bits being non-zero is not really worthwhile.
>>>>>>> This hook is an optimization that's intended to generate better code
>>>>>>> than the default mechanisms that reload provides.  It is allowed to
>>>>>>> fail, but it must say so if it does (by returning false).
>>>>>>>
>>>>>>> What this hook is trying to do for DImode is to take an address of the
>>>>>>> form (reg + TOO_BIG_CONST) and turn it into two instructions that are
>>>>>>> legitimate:
>>>>>>>
>>>>>>> 	tmp = (REG + LEGAL_BIG_CONST)
>>>>>>> 	some_use_of (mem (tmp + SMALL_LEGAL_CONST))
>>>>>>>
>>>>>>> The idea behind the optimization is that LEGAL_BIG_CONST will need fewer
>>>>>>> instructions to generate than TOO_BIG_CONST.  It's unlikely (probably
>>>>>>> impossible in ARM state) that pushing the bottom two bits of the address
>>>>>>> into the LEGAL_BIG_CONST part of the offset is going to lead to a better
>>>>>>> code sequence.
>>>>>>>
>>>>>>> So I think it would be more sensible to just return false if we have a
>>>>>>> DImode address with the bottom 2 bits non-zero and then let the normal
>>>>>>> reload recovery mechanism take over.
>>>>>>
>>>>>> It is supposed to provide better utilization of the full range of
>>>>>> LEGAL_BIG_CONST+SMALL_LEGAL_CONST.  I am not sure, but I guess reload
>>>>>> will resolve it with the reg+LEGAL_BIG_CONST part only, using only (mem
>>>>>> (reg)) for the load/store (correct me if I'm wrong).
>>>>>>
>>>>>> Also, the new code slightly improves the reloading; for example,
>>>>>> currently [reg+64] is broken into [(reg+72)-8], creating an additional
>>>>>> unneeded reload, which is certainly not good when we have ldrd/strd
>>>>>> available.
>>>>>>
>>>>>
>>>>> Sorry, didn't mean to suggest that we don't want to do something better
>>>>> for addresses that are a multiple of 4, just that for addresses that
>>>>> aren't (at least) word-aligned we should just bail, as the code in that
>>>>> case won't benefit from the optimization.  So something like
>>>>>
>>>>>        if (mode == DImode || (mode == DFmode && TARGET_SOFT_FLOAT))
>>>>> 	 {
>>>>> 	    if (val & 3)
>>>>> 		return false;  /* No point in trying to handle this.  */
>>>>> 	    ... /* Cases that are useful to handle */
>>>>
>>>> I've looked at the reload code surrounding the call to
>>>> LEGITIMIZE_RELOAD_ADDRESS. It looks like for ARM, reload transforms the
>>>> address from [reg+#const] to newreg=#const; [reg+newreg]. ARM/Thumb-2
>>>> has 16 bits to move that constant, which is much wider in range than a
>>>> 12-bit constant operand plus an 8-bit index. So I agree that simply
>>>> bailing out should be okay.
>>>>
>>>> OTOH, I'll still add that, for some micro-architectures, register read
>>>> ports may be a critical resource; for those cores, handling as many
>>>> reloads here as possible by breaking them into an address add is still
>>>> slightly better than a 'move + [reg+reg]', since the latter load/store
>>>> uses one more register read.  So maybe the best approach is to handle
>>>> the case where the 'high' part is a valid add-immediate operand, and
>>>> bail out if not...
>>>>
>>>> C.L.
>>>
>>> If the address is unaligned, then the access is going to be slow anyway;
>>> but this is all corner case stuff - the vast majority of accesses will
>>> be at natural alignment.  I think it's better to seek clarity in these
>>> cases than outright performance in theoretical micro-architectural
>>> corner cases.
>>>
>>> The largest number of read ports would be needed by a store [reg+reg].
>>> That's only 3 ports for a normal store (four for a pair of registers),
>>> but cores can normally handle this without penalty by reading the
>>> address registers in one cycle and the data to be stored in a later
>>> cycle -- critical paths tend to be on address generation, not data to be
>>> stored.
>>
>> Actually, I was thinking of cores with dual-issue, where an additional
>> port read may prevent it from happening...
>>
>> Anyways, here's a new patch. I've removed the unaligned handling bits as
>> you suggested, simply returning false for those cases.
>>
>> The points above did inspire another improvement, I think. I have added
>> a test to also return false when the high part is not a valid immediate
>> operand.  The rationale is that after such a reg=reg+high address
>> computation is created, it will still have to be split into multiple adds
>> later, so it may be better to let reload turn it into the [reg+reg] form.
>>
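
To sketch that rationale with a made-up offset (hypothetical numbers, not
from the PR testcase):

	/* Say val = 0x11104 and the LDRD immediate range is 8 bits:
	     low  = val & 0xff;    ->  0x04
	     high = val - low;     ->  0x11100, not a single ARM immediate,
	   so the address add alone already needs two instructions:
	       add   t, rN, #0x11000
	       add   t, t,  #0x100
	       ldrd  r0, r1, [t, #4]
	   while bailing out lets reload emit one constant move plus a
	   [reg+reg] access instead.  */
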
> 
> Hmm, I think you've missed the point with some of this, which is that
> not only is it generally more efficient to try and use offset addressing,
> but careful selection of the immediate values used in the load and the
> ADD insns can often also lead to better reload CSE.  For example:
> 
> 	ldr	r0, [r2, #4100]  // Offset too large
> 	ldr	r1, [r2, #4104]  // Offset too large
> 
> is best reloaded as
> 	add	t1, r2, #4096
> 	ldr	r0, [t1, #4]
> 	add	t2, r2, #4096
> 	ldr	r1, [t2, #8]
> 
> which of course post-reload CSE can simplify in most cases to eliminate
> the second add instruction:
> 
> 	add	t1, r2, #4096
> 	ldr	r0, [t1, #4]
> 	ldr	r1, [t1, #8]
> 
> This is true even if the amount of the offset being split out is larger
> than a simple legitimate_constant.
> 
> The idea here is that we want to split out the bits of the constant as a
> mask rather than by subtracting the maximum offset that ldr can handle
> (the same principle applies to LDRD too).
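
Roughly, for the plain ldr case above (12-bit offset field; a sketch only,
not the exact arm.c code):

	/* Mask-style split for the offsets 4100 (0x1004) and 4104 (0x1008):
	     low  = val & 0xfff;   ->  4 and 8
	     high = val & ~0xfff;  ->  4096 for both accesses
	   Both reloads share the same high part, so post-reload CSE can drop
	   the second add; splitting off the maximum ldr offset (4095) instead
	   would give two different bases and nothing to CSE.  */
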
> 
> A further trick is that we can make use of negative offsets even if the
> overall offset is positive, and sometimes this may lead to an immediate
> that can be constructed with one fewer add instruction.  For example,
> 
> 	ldr	r0, [r2, #0x3FFFFC]
> 
> This is best reloaded as:
> 
> 	add	t1, r2, #0x400000
> 	ldr	r0, [t1, #-4]
> 
> The trick for spotting this for a load instruction with N bits of offset
> (i.e. bits N-1:0) is to look at bit N: if it is set, then choose a
> negative offset that is going to make bit N and all the bits below it
> come to zero in the remainder part.
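
In code, that trick looks roughly like this (a sketch only, with N = 12 as
for an ARM-mode ldr):

	/* val = 0x3FFFFC: bit 12 is set, so pick a negative low part that
	   clears bit 12 and everything below it in the remainder:
	     low  = (val & 0xfff) - 0x1000;  ->  0xffc - 0x1000 = -4
	     high = val - low;               ->  0x400000, a single ARM immediate
	   giving:
	       add  t1, r2, #0x400000
	       ldr  r0, [t1, #-4]
	   whereas the purely positive split (high = 0x3FF000) needs two adds.  */
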
> 
> The final thing to note is that the negative offsets available in Thumb-2
> are asymmetrical with respect to the positive ones.  That makes selecting
> the best offset more complicated, and thus using negative values is less
> likely to be worthwhile.

Richard, thanks for the detailed explanation; you should really turn
this into a comment in arm.c some time.

I have revised the patch again, this time decomposing the offset mostly
in a form analogous to the other cases, as you explained above.
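
For reference, roughly how the new TARGET_LDRD path in the attached patch
decomposes an offset (a sketch only; the high part is formed by the
existing code after this hunk):

	/* val = 4100 (0x1004), aligned to 4, so we do not bail out:
	     low  = val & 0xff;   ->  4
	     high = val - low;    ->  4096
	   reloaded as:
	       add   tN, rB, #4096
	       ldrd  r0, r1, [tN, #4]
	   so neighbouring doubleword accesses off the same base can share the
	   add after post-reload CSE.  */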

Thanks,
Chung-Lin

[-- Attachment #2: pr48250-4.diff --]
[-- Type: text/plain, Size: 1035 bytes --]

Index: config/arm/arm.c
===================================================================
--- config/arm/arm.c	(revision 171716)
+++ config/arm/arm.c	(working copy)
@@ -6420,7 +6420,21 @@
       HOST_WIDE_INT low, high;
 
       if (mode == DImode || (mode == DFmode && TARGET_SOFT_FLOAT))
-	low = ((val & 0xf) ^ 0x8) - 0x8;
+	{
+	  /* We handle the aligned to 4 case only.  */
+	  if ((val & 0x3) != 0)
+	    return false;
+
+	  if (TARGET_LDRD)
+	    /* ??? There may be more adjustments later for Thumb-2,
+	       which has a ldrd insn with +-1020 index range.  */
+	    low = val >= 0 ? (val & 0xff) : -((-val) & 0xff);
+	  else
+	    /* For pre-ARMv5TE (without ldrd), we use ldm/stm(db/da/ib)
+	       to access doublewords. The supported load/store offsets are
+	       -8, -4, and 4, which we try to produce here.  */
+	    low = ((val & 0xf) ^ 0x8) - 0x8;
+	}
       else if (TARGET_MAVERICK && TARGET_HARD_FLOAT)
 	/* Need to be careful, -256 is not a valid offset.  */
 	low = val >= 0 ? (val & 0xff) : -((-val) & 0xff);

Thread overview: 13+ messages
2011-03-24  3:57 Chung-Lin Tang
2011-03-24 10:52 ` Richard Earnshaw
2011-03-29 10:27   ` Chung-Lin Tang
2011-03-29 14:35     ` Richard Earnshaw
2011-03-29 15:46       ` Chung-Lin Tang
2011-03-29 16:42         ` Richard Earnshaw
2011-03-30  9:00           ` Chung-Lin Tang
2011-03-30  9:46             ` Richard Earnshaw
2011-03-31  6:31               ` Chung-Lin Tang
2011-03-31 10:16                 ` Richard Earnshaw
2011-03-31 14:36                   ` Chung-Lin Tang [this message]
2011-03-31 18:25                     ` Richard Earnshaw
2011-03-31 19:47                       ` Chung-Lin Tang
