From: Jiufu Guo <>
To: Jiufu Guo via Gcc-patches <>
Cc: Richard Biener <>,
	Segher Boessenkool <>,
	Jeff Law <>
Subject: Re: [RFC] propagation leap over memory copy for struct
Date: Wed, 09 Nov 2022 15:51:52 +0800	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <> (Jiufu Guo via Gcc-patches's message of "Tue, 08 Nov 2022 12:05:48 +0800")

Jiufu Guo via Gcc-patches <> writes:

> Richard Biener <> writes:
>> On Tue, 1 Nov 2022, Jiufu Guo wrote:
>>> Segher Boessenkool <> writes:
>>> > On Mon, Oct 31, 2022 at 04:13:38PM -0600, Jeff Law wrote:
>>> >> On 10/30/22 20:42, Jiufu Guo via Gcc-patches wrote:
>>> >> >We know that for struct variable assignment, a memory copy may be used.
>>> >> >And for memcpy, we may load and store as many bytes as possible at one
>>> >> >time.  While that may not be best here:
>>> >
>>> >> So the first question in my mind is can we do better at the gimple 
>>> >> phase?  For the second case in particular can't we just "return a" 
>>> >> rather than copying a into <retval> then returning <retval>?  This feels 
>>> >> a lot like the return value optimization from C++.  I'm not sure if it 
>>> >> applies to the first case or not, it's been a long time since I looked 
>>> >> at NRV optimizations, but it might be worth poking around in there a bit 
>>> >> (
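The NRV case described above can be sketched in plain C. The struct and function below are hypothetical, chosen only to show which aggregate copy the optimization elides:

```c
/* Hypothetical 32-byte aggregate, used only to illustrate the NRV
   discussion above; it is not taken from the patch. */
struct A { double a[4]; };

/* Without the named-return-value optimization, 'ret' is built in a
   local temporary and then copied into the hidden <retval> slot on
   "return".  NRV instead constructs 'ret' directly in the slot the
   caller provided, eliding that aggregate copy. */
struct A make_a (double v)
{
  struct A ret;
  for (int i = 0; i < 4; i++)
    ret.a[i] = v + i;
  return ret;  /* candidate for "return ret" rather than
                  "<retval> = ret; return <retval>" */
}
```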
>>> >
>>> > If it is a bigger struct you end up with quite a lot of stuff in
>>> > registers.  GCC will eventually put that all in memory so it will work
>>> > out fine in the end, but you are likely to get inefficient code.
>>> Yes.  We may need to use memory to save registers for a big struct,
>>> while for a small struct it may be practical to use registers.  We may
>>> leverage the idea that some types of small structs are passed to
>>> functions through registers.
>>> >
>>> > OTOH, 8 bytes isn't as big as we would want these days, is it?  So it
>>> > would be useful to put smaller temporaries, say 32 bytes and smaller,
>>> > in registers instead of in memory.
>>> I think you mean: we should try to use registers to avoid memory
>>> accesses, and using registers would be OK for a larger memcpy
>>> (e.g. 32 bytes).  Great suggestion, thanks a lot!
>>> Like below idea:
>>> [r100:TI, r101:TI] = src;  //Or r100:OI/OO = src;
>>> dest = [r100:TI, r101:TI];
>>> Currently, for an 8-byte structure, we are using TImode for it.
>>> And subreg/fwprop/cse passes are able to optimize it as expected.
>>> Two concerns here: larger integer modes (OI/OO/..) may not be
>>> introduced yet, and I'm not sure whether the current infrastructure
>>> supports using two or more registers for one structure.
>>> >
>>> >> But even so, these kinds of things are still bound to happen, so it's 
>>> >> probably worth thinking about if we can do better in RTL as well.
>>> >
>>> > Always.  It is a mistake to think that having better high-level
>>> > optimisations means that you don't need good low-level optimisations
>>> > anymore: in fact deficiencies there become more glaringly apparent if
>>> > the early pipeline opts become better :-)
>>> Understood, thanks :)
>>> >
>>> >> The first thing that comes to my mind is to annotate memcpy calls that 
>>> >> are structure assignments.  The idea here is that we may want to expand 
>>> >> a memcpy differently in those cases.   Changing how we expand an opaque 
>>> >> memcpy call is unlikely to be beneficial in most cases.  But changing 
>>> >> how we expand a structure copy may be beneficial by exposing the 
>>> >> underlying field values.   This would roughly correspond to your method 
>>> >> #1.
>>> >> 
>>> >> Or instead of changing how we expand, teach the optimizers about these 
>>> >> annotated memcpy calls -- they're just a copy of each field.   That's 
>>> >> how CSE and the propagators could treat them. After some point we'd 
>>> >> lower them in the usual ways, but at least early in the RTL pipeline we 
>>> >> could keep them as annotated memcpy calls.  This roughly corresponds to 
>>> >> your second suggestion.
>>> >
>>> > Ideally this won't ever make it as far as RTL, if the structures do not
>>> > need to go via memory.  All high-level optimisations should have been
>>> > done earlier, and hopefully it was not expand itself that forced stuff
>>> > into memory!  :-/
>>> Currently, after early gimple optimization, struct member accesses
>>> may still need to go through memory (if the mode of the struct is BLK).
>>> For example:
>>> _Bool foo (const A a) { return a.a[0] > 1.0; }
>>> The optimized gimple would be:
>>>   _1 = a.a[0];
>>>   _3 = _1 > 1.0e+0;
>>>   return _3;
>>> During expand to RTL, parm 'a' is first stored to memory from the arg
>>> registers, and "a.a[0]" is then read back from memory.  It may be
>>> better to use "f1" for "a.a[0]" here.
>>> Maybe method 3 is similar to your idea: using "parallel:BLK {DF;DF;DF;DF}"
>>> for the struct (BLK may be changed), and using 4 DF registers to access
>>> the structure in the expand pass.
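For reference, a compilable form of the example above. The layout of 'A' is an assumption inferred from the "{DF;DF;DF;DF}" description (four doubles), not taken from the patch:

```c
/* The definition of 'A' is assumed from context: four doubles,
   matching the "{DF;DF;DF;DF}" description above. */
typedef struct { double a[4]; } A;

_Bool foo (const A a)
{
  /* Ideally a.a[0] is read straight from the incoming FP argument
     register rather than from a stack slot. */
  return a.a[0] > 1.0;
}
```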
>> I think for cases like this it might be a good idea to perform
>> SRA-like analysis at RTL expansion time when we know how parameters
>> arrive (in pieces) and take that knowledge into account when
>> assigning the RTL to a decl.  The same applies for the return ABI.
>> Since we rely on RTL to elide copies to/from return/argument
>> registers/slots we have to assign "layout compatible" registers
>> to the corresponding auto vars.
In other words, for this kind of parameter, create a scalar for each
piece, and expand 'access to the parameter' into 'access to the
scalars' accordingly.
This would also avoid memory accesses for the parameter.
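A hand-written sketch of what that scalarization would produce at the source level. All names are illustrative; the real transform would happen during RTL expansion, not in C:

```c
typedef struct { double a[4]; } A;   /* illustrative 4-piece aggregate */

/* Original form: the aggregate parameter may be spilled to memory
   before its member is read. */
_Bool foo_aggregate (const A a)
{
  return a.a[0] > 1.0;
}

/* After the SRA-like expansion: one scalar per piece, and the member
   access becomes a plain scalar read -- no memory access needed. */
_Bool foo_scalarized (double a0, double a1, double a2, double a3)
{
  (void) a1; (void) a2; (void) a3;   /* unused pieces */
  return a0 > 1.0;
}
```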

Maybe we could use something like "parallel:M? {DF;DF;DF;DF}" or
"parallel:M? {DI;DI;DI;DI}" to group the scalars in DECL_RTL.

For this, we would need to support 'move'/'access' operations on these sub-RTLs.

Any more suggestions? Thanks.


> Thanks for pointing this out!
> This looks like a kind of SRA, especially for parms and return values.
> As you pointed out, there are some things we may need to take care of:
> 1. We would use the "layout compatible" mode regs for the scalars, e.g.
> DF for "{double arr[4];}", but DI for "{double arr[3]; long l;}".
> 2. For an aggregate that will be assigned to the return value, before
> expanding the 'return stmt', we may not be sure whether we need to
> assign 'scalar rtl(s)' to the decl.
> To handle this issue, we may use 'scalar rtl(s)' for every struct decl,
> as if it were a parm or return result.
> Then method 3 may be similar to this idea: using "parallel RTL" for
> the decl (maybe using DECL_RTL directly).
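The two layouts from point 1 above can be written out explicitly. Both struct types are illustrative, not from the patch:

```c
/* Point 1, spelled out.  For s1 every 8-byte piece is a double, so
   DFmode registers are layout compatible; s2 mixes double and long
   pieces, so integer (DImode) registers are the safe common choice. */
struct s1 { double arr[4]; };          /* pieces: DF, DF, DF, DF */
struct s2 { double arr[3]; long l; };  /* pieces: DF, DF, DF, DI  */
```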
> Please point out any misunderstandings or suggestions.
> Thanks again!
> BR,
> Jeff(Jiufu)
>>> Thanks again for your kind and helpful comments!
>>> BR,
>>> Jeff(Jiufu)
>>> >
>>> >
>>> > Segher

Thread overview: 14+ messages
2022-10-31  2:42 Jiufu Guo
2022-10-31 22:13 ` Jeff Law
2022-11-01  0:49   ` Segher Boessenkool
2022-11-01  4:30     ` Jiufu Guo
2022-11-05 14:13       ` Richard Biener
2022-11-08  4:05         ` Jiufu Guo
2022-11-09  7:51           ` Jiufu Guo [this message]
2022-11-09  8:50             ` Richard Biener
2022-11-01  3:30   ` Jiufu Guo
2022-11-05 11:38   ` Richard Biener
2022-11-09  9:21     ` Jiufu Guo
2022-11-09 12:56       ` Richard Biener
2022-11-01  0:37 ` Segher Boessenkool
2022-11-01  3:01   ` Jiufu Guo
