Subject: Re: [patch, ARM] Fix PR48250, adjust DImode reload address legitimizing
From: Richard Earnshaw
To: Chung-Lin Tang
Cc: gcc-patches, Ramana Radhakrishnan
Date: Thu, 31 Mar 2011 10:16:00 -0000

On Thu, 2011-03-31 at 11:33 +0800, Chung-Lin Tang wrote:
> On 2011/3/30 05:28 PM, Richard Earnshaw wrote:
> >
> > On Wed, 2011-03-30 at 15:35 +0800, Chung-Lin Tang wrote:
> >> On 2011/3/30 12:23 AM, Richard Earnshaw wrote:
> >>>
> >>> On Tue, 2011-03-29 at 22:53 +0800, Chung-Lin Tang wrote:
> >>>> On 2011/3/29 10:26 PM, Richard Earnshaw wrote:
> >>>>> On Tue, 2011-03-29 at 18:25 +0800, Chung-Lin Tang wrote:
> >>>>>> On 2011/3/24 06:51 PM, Richard Earnshaw wrote:
> >>>>>>>
> >>>>>>> On Thu, 2011-03-24 at 12:56 +0900, Chung-Lin Tang wrote:
> >>>>>>>> Hi,
> >>>>>>>> PR48250 happens under TARGET_NEON, where DImode is included within
> >>>>>>>> the valid NEON modes. This restricts the range of legitimate
> >>>>>>>> constant indexes to step-4 (coproc load/store), so
> >>>>>>>> arm_legitimize_reload_address(), when trying to decompose the
> >>>>>>>> [reg+index] reload address into [(reg+index_high)+index_low], can
> >>>>>>>> cause an ICE later when the 'index_low' part is not aligned to 4.
> >>>>>>>>
> >>>>>>>> I'm not sure why the current DImode index is computed as:
> >>>>>>>> low = ((val & 0xf) ^ 0x8) - 0x8; the sign-extending into negative
> >>>>>>>> values, then subtracting back, actually creates further-off
> >>>>>>>> indexes. e.g. in the supplied testcase, [sp+13] was turned into
> >>>>>>>> [(sp+16)-3].
> >>>>>>>>
> >>>>>>>
> >>>>>>> Hysterical Raisins... the code there was clearly written for the
> >>>>>>> days before we had LDRD in the architecture. At that time the most
> >>>>>>> efficient way to load a 64-bit object was to use the
> >>>>>>> LDM{ia,ib,da,db} instructions.
> >>>>>>> The computation here was (I think) intended to try and make the
> >>>>>>> most efficient use of an add/sub instruction followed by LDM/STM
> >>>>>>> offsetting. At that time the architecture had no unaligned access
> >>>>>>> either, so dealing with 64-bit objects that were less than 32-bit
> >>>>>>> aligned (in those days 32-bit was the maximum alignment) probably
> >>>>>>> wasn't considered, or couldn't even get through to reload.
> >>>>>>>
> >>>>>>
> >>>>>> I see it now. The code in output_move_double() returning assembly
> >>>>>> for ldm/stm(db/da/ib) for offsets -8/-4/+4 seems to confirm this.
> >>>>>>
> >>>>>> I have changed the patch to let the new code handle the TARGET_LDRD
> >>>>>> case only. The pre-LDRD case is still handled by the original code,
> >>>>>> with an additional & ~0x3 for aligning the offset to 4.
> >>>>>>
> >>>>>> I've also added a comment for the pre-TARGET_LDRD case. Please see
> >>>>>> if the description is accurate enough.
> >>>>>>
> >>>>>>>> My patch changes the index decomposing to a more straightforward
> >>>>>>>> way; it also sort of shows that the way the other reload address
> >>>>>>>> indexes are broken up, using and-masks, is not the most effective.
> >>>>>>>> The address is computed by addition, so subtracting away the parts
> >>>>>>>> to obtain low+high should be the optimal way of giving the largest
> >>>>>>>> computable index range.
> >>>>>>>>
> >>>>>>>> I have included a few Thumb-2 bits in the patch; I know currently
> >>>>>>>> arm_legitimize_reload_address() is only used under TARGET_ARM, but
> >>>>>>>> I guess it might eventually be turned into TARGET_32BIT.
> >>>>>>>>
> >>>>>>>
> >>>>>>> I think this needs to be looked at carefully on ARMv4/ARMv4T to
> >>>>>>> check that it doesn't cause regressions there when we don't have
> >>>>>>> LDRD in the instruction set.
> >>>>>>
> >>>>>> I'll be testing the modified patch under an ARMv4/ARMv4T
> >>>>>> configuration. Okay for trunk if no regressions?
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Chung-Lin
> >>>>>>
> >>>>>>	PR target/48250
> >>>>>>	* config/arm/arm.c (arm_legitimize_reload_address): Adjust
> >>>>>>	DImode constant index decomposing under TARGET_LDRD. Clear
> >>>>>>	lower two bits for NEON, Thumb-2, and !TARGET_LDRD. Add
> >>>>>>	comment for !TARGET_LDRD case.
> >>>>>
> >>>>> This looks technically correct, but I can't help feeling that trying
> >>>>> to deal with the bottom two bits being non-zero is not really
> >>>>> worthwhile. This hook is an optimization that's intended to generate
> >>>>> better code than the default mechanisms that reload provides. It is
> >>>>> allowed to fail, but it must say so if it does (by returning false).
> >>>>>
> >>>>> What this hook is trying to do for DImode is to take an address of
> >>>>> the form (reg + TOO_BIG_CONST) and turn it into two instructions
> >>>>> that are legitimate:
> >>>>>
> >>>>>	tmp = (REG + LEGAL_BIG_CONST)
> >>>>>	some_use_of (mem (tmp + SMALL_LEGAL_CONST))
> >>>>>
> >>>>> The idea behind the optimization is that LEGAL_BIG_CONST will need
> >>>>> fewer instructions to generate than TOO_BIG_CONST. It's unlikely
> >>>>> (probably impossible in ARM state) that pushing the bottom two bits
> >>>>> of the address into the LEGAL_BIG_CONST part of the offset is going
> >>>>> to lead to a better code sequence.
> >>>>>
> >>>>> So I think it would be more sensible to just return false if we have
> >>>>> a DImode address with the bottom 2 bits non-zero and then let the
> >>>>> normal reload recovery mechanism take over.
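(For concreteness, a minimal standalone sketch of the decomposition being
debated -- plain C, not GCC code; the printf harness is invented for
illustration, though 'val', 'low' and 'high' mirror the arm.c discussion:)

	/* Sketch: models the DImode index splitting outside of GCC.
	   'val' is the constant offset of a [reg + val] address that
	   reload must legitimize into (reg + high) + low.  */
	#include <stdio.h>

	int main (void)
	{
	  long val = 13;	/* the [sp + 13] case from the testcase */

	  /* Old computation: sign-extend the low 4 bits of the offset.
	     For val = 13 this gives low = -3 and high = 16, i.e.
	     [(sp + 16) - 3] -- a low part not aligned to 4, which is
	     what trips the ICE described in PR48250.  */
	  long low = ((val & 0xf) ^ 0x8) - 0x8;
	  long high = val - low;

	  printf ("high = %ld, low = %ld\n", high, low);	/* 16, -3 */
	  return 0;
	}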
> >>>>
> >>>> It is supposed to provide better utilization of the full range of
> >>>> LEGAL_BIG_CONST+SMALL_LEGAL_CONST. I am not sure, but I guess reload
> >>>> will resolve it with the reg+LEGAL_BIG_CONST part only, using only
> >>>> (mem (reg)) for the load/store (correct me if I'm wrong).
> >>>>
> >>>> Also, the new code slightly improves the reloading; for example,
> >>>> currently [reg+64] is broken into [(reg+72)-8], creating an
> >>>> additional unneeded reload, which is certainly not good when we have
> >>>> ldrd/strd available.
> >>>>
> >>>
> >>> Sorry, I didn't mean to suggest that we don't want to do something
> >>> better for addresses that are a multiple of 4, just that for addresses
> >>> that aren't (at least) word-aligned we should just bail, as the code
> >>> in that case won't benefit from the optimization. So something like:
> >>>
> >>>	if (mode == DImode || (mode == DFmode && TARGET_SOFT_FLOAT))
> >>>	  {
> >>>	    if (val & 3)
> >>>	      return false;	/* No point in trying to handle this.  */
> >>>	    ...			/* Cases that are useful to handle */
> >>
> >> I've looked at the reload code surrounding the call to
> >> LEGITIMIZE_RELOAD_ADDRESS. It looks like for ARM, reload transforms the
> >> address from [reg+#const] to newreg=#const; [reg+newreg]. ARM/Thumb-2
> >> has 16 bits to move that constant, which is much wider in range than a
> >> 12-bit constant operand + 8-bit index. So I agree that simply bailing
> >> out should be okay.
> >>
> >> OTOH, I'll still add that, for some micro-architectures, register read
> >> ports may be a critical resource; for those cores, handling as many
> >> reloads here as possible by breaking them into an address add is still
> >> slightly better than a 'move + [reg+reg]', for the latter load/store
> >> uses one more register read. So maybe the best approach would be to
> >> handle the case where the 'high' part is a valid add-immediate operand,
> >> and bail out if not...
> >>
> >> C.L.
> >
> > If the address is unaligned, then the access is going to be slow anyway;
> > but this is all corner-case stuff - the vast majority of accesses will
> > be at natural alignment. I think it's better to seek clarity in these
> > cases than outright performance in theoretical micro-architectural
> > corner cases.
> >
> > The largest number of read ports would be needed by a store [reg+reg].
> > That's only 3 ports for a normal store (four for a pair of registers),
> > but cores can normally handle this without penalty by reading the
> > address registers in one cycle and the data to be stored in a later
> > cycle -- critical paths tend to be on address generation, not on the
> > data to be stored.
>
> Actually, I was thinking of cores with dual issue, where an additional
> port read may prevent it from happening...
>
> Anyway, here's a new patch. I've removed the unaligned-handling bits as
> you suggested, simply returning false for those cases.
>
> The points above did inspire another improvement, I think. I have added
> a test to also return false when the high part is not a valid immediate
> operand. The rationale is that after such a reg=reg+high address
> computation is created, it will still have to be split into multiple
> adds later, so it may be better to let reload turn it into the [reg+reg]
> form.
>
Hmm, I think you've missed the point with some of this, which is that not
only is it generally more efficient to try and use offset addressing, but
careful selection of the immediate values used in the load and the ADD
insns can often also lead to better reload CSE.
For example:

	ldr	r0, [r2, #4100]		// Offset too large
	ldr	r1, [r2, #4104]		// Offset too large

is best reloaded as

	add	t1, r2, #4096
	ldr	r0, [t1, #4]
	add	t2, r2, #4096
	ldr	r1, [t2, #8]

which of course post-reload CSE can simplify in most cases to eliminate
the second add instruction:

	add	t1, r2, #4096
	ldr	r0, [t1, #4]
	ldr	r1, [t1, #8]

This is true even if the amount of the offset being split out is larger
than a simple legitimate_constant. The idea here is that we want to split
out the bits of the constant as a mask, rather than by subtracting the
maximum offset that ldr can handle (the same principle applies to LDRD
too).

A further trick is that we can make use of negative offsets even when the
overall offset is positive, and sometimes this may lead to an immediate
that can be constructed with one fewer add instruction. For example:

	ldr	r0, [r2, #0x3FFFFC]

This is best reloaded as:

	add	t1, r2, #0x400000
	ldr	r0, [t1, #-4]

The trick for spotting this for a load instruction with N bits of offset
(i.e. bits N-1:0) is to look at bit N: if it is set, then choose a
negative offset that is going to make bit N and all the bits below it
come to zero in the remainder part.

The final thing to note is that the offsets available for negative values
in Thumb-2 are asymmetrical with the positive values that are available.
That makes selecting the best offset more complicated, and thus using
negative values is less likely to be worthwhile.

> I'll be testing this patch later; here it is for review.
>
> Thanks,
> Chung-Lin
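(As a footnote to the bit-N trick above, a hedged sketch in plain C -- not
GCC's implementation; split_offset and its test values are invented for
illustration:)

	/* Sketch: choose (high, low) for a load with n offset bits, so
	   that 'high' is added to the base register and 'low' becomes
	   the load's own offset.  When bit n of the wanted offset is
	   set, round the add up so that bit n and everything below it
	   vanish from the remainder; 'low' then comes out negative.  */
	#include <stdio.h>

	static void split_offset (long val, int n)
	{
	  long mask = (1L << n) - 1;	/* offsets the load can encode */
	  long high, low;

	  if (val & (1L << n))
	    {
	      high = (val + (1L << n)) & ~mask;
	      low = val - high;		/* small negative remainder */
	    }
	  else
	    {
	      high = val & ~mask;
	      low = val & mask;
	    }
	  printf ("%#lx -> add #%#lx, ldr offset %ld\n",
		  (unsigned long) val, (unsigned long) high, low);
	}

	int main (void)
	{
	  split_offset (0x3FFFFC, 12);	/* add #0x400000, ldr #-4 */
	  split_offset (0x2004, 12);	/* add #0x2000,   ldr #4  */
	  return 0;
	}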