From: Bernd Edlinger <bernd.edlinger@hotmail.de>
To: "H.J. Lu" <hjl.tools@gmail.com>,
	Richard Biener <richard.guenther@gmail.com>
Cc: GCC Patches <gcc-patches@gcc.gnu.org>,
	Richard Sandiford <richard.sandiford@arm.com>,
	Uros Bizjak <ubizjak@gmail.com>
Subject: Re: [PATCH] constructor: Elide expand_constructor when can move by pieces is true
Date: Fri, 21 May 2021 07:35:42 +0200
Message-ID: <AM8PR10MB47085C3C75150670BF60D89DE4299@AM8PR10MB4708.EURPRD10.PROD.OUTLOOK.COM>
In-Reply-To: <CAMe9rOpEeTHmJwzdsVGPEm1XVzd=Ejw81_7yv0+yzDCXTrWyDw@mail.gmail.com>

On 5/20/21 4:03 PM, H.J. Lu wrote:
> On Thu, May 20, 2021 at 12:51 AM Richard Biener
> <richard.guenther@gmail.com> wrote:
>>
>> On Wed, May 19, 2021 at 3:22 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>>>
>>> On Wed, May 19, 2021 at 2:33 AM Richard Biener
>>> <richard.guenther@gmail.com> wrote:
>>>>
>>>> On Tue, May 18, 2021 at 9:16 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>>>>>
>>>>> When expanding a constant constructor, don't call expand_constructor if
>>>>> it is more efficient to load the data from memory via move by pieces.
>>>>>
>>>>> gcc/
>>>>>
>>>>>         PR middle-end/90773
>>>>>         * expr.c (expand_expr_real_1): Don't call expand_constructor if
>>>>>         it is more efficient to load the data from the memory.
>>>>>
>>>>> gcc/testsuite/
>>>>>
>>>>>         PR middle-end/90773
>>>>>         * gcc.target/i386/pr90773-24.c: New test.
>>>>>         * gcc.target/i386/pr90773-25.c: Likewise.
>>>>> ---
>>>>>  gcc/expr.c                                 | 10 ++++++++++
>>>>>  gcc/testsuite/gcc.target/i386/pr90773-24.c | 22 ++++++++++++++++++++++
>>>>>  gcc/testsuite/gcc.target/i386/pr90773-25.c | 20 ++++++++++++++++++++
>>>>>  3 files changed, 52 insertions(+)
>>>>>  create mode 100644 gcc/testsuite/gcc.target/i386/pr90773-24.c
>>>>>  create mode 100644 gcc/testsuite/gcc.target/i386/pr90773-25.c
>>>>>
>>>>> diff --git a/gcc/expr.c b/gcc/expr.c
>>>>> index d09ee42e262..80e01ea1cbe 100644
>>>>> --- a/gcc/expr.c
>>>>> +++ b/gcc/expr.c
>>>>> @@ -10886,6 +10886,16 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
>>>>>                 unsigned HOST_WIDE_INT ix;
>>>>>                 tree field, value;
>>>>>
>>>>> +               /* Check if it is more efficient to load the data from
>>>>> +                  the memory directly.  FIXME: How many stores do we
>>>>> +                  need here if not moved by pieces?  */
>>>>> +               unsigned HOST_WIDE_INT bytes
>>>>> +                 = tree_to_uhwi (TYPE_SIZE_UNIT (type));
>>>>
>>>> that's prone to fail - it could be a VLA.
>>>
>>> What do you mean by fail?  Is it an ICE or a missed optimization?
>>> Do you have a testcase?
>>>
>>>>
>>>>> +               if ((bytes / UNITS_PER_WORD) > 2
>>>>> +                   && MOVE_MAX_PIECES > UNITS_PER_WORD
>>>>> +                   && can_move_by_pieces (bytes, TYPE_ALIGN (type)))
>>>>> +                 goto normal_inner_ref;
>>>>> +
>>>>
>>>> It looks like you're concerned about aggregate copies but this also handles
>>>> non-aggregates (which on GIMPLE might already be optimized of course).
>>>
>>> Here I check whether we copy more than 2 words and whether we can move more
>>> than a word in a single instruction.
>>>
>>>> Also you say "if it's cheaper" but I see no cost considerations.  How do
>>>> we generally handle immed const vs. load from constant pool costs?
>>>
>>> This trades 2 (up to 8) stores for one load plus one store.  Is there
>>> a way to check which one is faster?
>>
>> I'm not sure - it depends on whether the target can do stores from immediates
>> at all, what restrictions apply, what the immediate value actually is
>> (zero or all-ones should be way cheaper than something arbitrary) and how much
>> pressure there is on the load unit.  can_move_by_pieces (bytes, TYPE_ALIGN (type))
>> also does not guarantee it will actually move pieces larger than UNITS_PER_WORD;
>> that might depend on alignment.  There's by_pieces_ninsns that might provide
>> some hint here.
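>>
>> (A rough, untested sketch of using by_pieces_ninsns as that hint;
>> ctor_store_count is a hypothetical count of the stores the CTOR
>> expansion would otherwise emit:)
>>
>>   unsigned HOST_WIDE_INT bytes = tree_to_uhwi (TYPE_SIZE_UNIT (type));
>>   unsigned HOST_WIDE_INT copy_insns
>>     = by_pieces_ninsns (bytes, TYPE_ALIGN (type),
>>                         MOVE_MAX_PIECES + 1, MOVE_BY_PIECES);
>>   /* Prefer the load+store copy only if it needs fewer insns.  */
>>   if (copy_insns < ctor_store_count)
>>     goto normal_inner_ref;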
>>
>> I'm sure it works well for x86.
>>
>> I wonder if the existing code is in the appropriate place and we
>> shouldn't instead
>> handle this somewhere upthread where we ask to copy 'exp' into some other
>> memory location.  For your testcase that's expand_assignment but I can
>> imagine passing array[0] by value to a function resulting in similar copying.
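>>
>> (A minimal sketch of such a by-value call; the field types of struct S
>> and the prototype of bar are assumptions:)
>>
>>   struct S { long a; int b, c, d, e, f, g, h, i, j, k, l, m, n; };
>>   extern const struct S array[];
>>   extern void bar (struct S);
>>
>>   void
>>   foo (void)
>>   {
>>     bar (array[0]);  /* 64-byte aggregate passed by value on the stack */
>>   }
>>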
>> Testing that shows we get
>>
>>         pushq   array+56(%rip)
>>         .cfi_def_cfa_offset 24
>>         pushq   array+48(%rip)
>>         .cfi_def_cfa_offset 32
>>         pushq   array+40(%rip)
>>         .cfi_def_cfa_offset 40
>>         pushq   array+32(%rip)
>>         .cfi_def_cfa_offset 48
>>         pushq   array+24(%rip)
>>         .cfi_def_cfa_offset 56
>>         pushq   array+16(%rip)
>>         .cfi_def_cfa_offset 64
>>         pushq   array+8(%rip)
>>         .cfi_def_cfa_offset 72
>>         pushq   array(%rip)
>>         .cfi_def_cfa_offset 80
>>         call    bar
>>
>> for that.  We do have the by-pieces infrastructure to generally do this kind of
>> copying, but in both of these cases we do not seem to use it.  I also wonder
>> if the by-pieces infrastructure can pick up constant initializers automagically
>> (we could native_encode the initializer part and feed the by-pieces
>> infrastructure with an array of bytes).  There might, for example, be byte parts
>> that are easy to immediate-store and difficult ones, and we could decide on a
>> case-by-case basis whether to load+store or immediate-store them.
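>>
>> (A loose, untested sketch of that idea; native_encode_expr itself exists
>> in fold-const.h, but the surrounding glue, and the names decl and offset,
>> are purely hypothetical:)
>>
>>   /* Encode part of the constant initializer, starting at byte OFFSET,
>>      into a raw byte buffer for the by-pieces machinery.  */
>>   unsigned char buf[64];
>>   int n = native_encode_expr (DECL_INITIAL (decl), buf, sizeof buf, offset);
>>   if (n > 0)
>>     {
>>       /* ... feed BUF to the by-pieces code as constant source bytes,
>>          using immediate stores for easy values (e.g. zero) and
>>          load+store for the rest ...  */
>>     }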
> 
> I opened:
> 
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100704
> 
>> For example if I change your testcase to have the array[] initializer
>> all-zero we currently emit
>>
>>         pxor    %xmm0, %xmm0
>>         movups  %xmm0, (%rdi)
>>         movups  %xmm0, 16(%rdi)
>>         movups  %xmm0, 32(%rdi)
>>         movups  %xmm0, 48(%rdi)
>>         ret
>>
>> will your patch cause us to emit 4 loads?  OTOH, if I do
>>
>> const struct S array[] = {
>>   { 0, 0, 0, 7241, 124764, 48, 16, 33, 10, 96, 2, 0, 0, 4 }
>> };
>>
>> we get
>>
>>         movq    $0, (%rdi)
>>         movl    $0, 8(%rdi)
>>         movl    $0, 12(%rdi)
>>         movl    $7241, 16(%rdi)
>> ...
>>
>> ideally we'd have sth like
>>
>>     pxor %xmm0, %xmm0
>>     movups  %xmm0, (%rdi)
>>     movaps array+16(%rip), %xmm0
>>     movups %xmm0, 16(%rdi)
>> ...
>>
>> thus have the zeros written as immediates and the remaining pieces
>> with load+stores.
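>>
>> (For reference, a hypothetical reconstruction of the copy being discussed,
>> reusing the struct S / array sketch from further up:)
>>
>>   void
>>   foo (struct S *p)
>>   {
>>     *p = array[0];   /* expands to the store sequences shown above */
>>   }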
>>
>> The by-pieces infrastructure eventually gets to see
>>
>> (mem/u/c:BLK (symbol_ref:DI ("array") [flags 0x2] <var_decl
>> 0x7ffff7ff5b40 array>) [1 array+0 S64 A256])
>>
>> where the MEM_EXPR should provide a way to access the constant initializer.
>>
>> That said, I do agree the current code is a bit of a premature optimization
>> - but maybe
>> it should be fended off in expand_constructor, which has the cheap clear_storage
>> first and which already checks can_move_by_pieces with some heuristics,
>> but that seems to be guarded by
>>
>>            || (tree_fits_uhwi_p (TYPE_SIZE_UNIT (type))
>>                && (! can_move_by_pieces
>>                    (tree_to_uhwi (TYPE_SIZE_UNIT (type)),
>>                     TYPE_ALIGN (type)))
>>                && ! mostly_zeros_p (exp))))
>>
>> which is odd (we _can_ move by pieces, but how does this apply to
>> TREE_CONSTANT CTORs and avoid_temp_mem?).
>>
>> That said, I wonder if we want to elide expand_constructor when the
>> CTOR is TREE_STATIC && TREE_CONSTANT and !mostly_zeros_p
>> and we can_move_by_pieces.
>>
>> So sth like
>>
>> diff --git a/gcc/expr.c b/gcc/expr.c
>> index 7139545d543..76b3bdf0c01 100644
>> --- a/gcc/expr.c
>> +++ b/gcc/expr.c
>> @@ -8504,6 +8504,12 @@ expand_constructor (tree exp, rtx target, enum expand_modifier modifier,
>>                && (! can_move_by_pieces
>>                    (tree_to_uhwi (TYPE_SIZE_UNIT (type)),
>>                     TYPE_ALIGN (type)))
>> +              && ! mostly_zeros_p (exp))
>> +          || (TREE_CONSTANT (exp)
>> +              && tree_fits_uhwi_p (TYPE_SIZE_UNIT (type))
>> +              && (can_move_by_pieces
>> +                  (tree_to_uhwi (TYPE_SIZE_UNIT (type)),
>> +                   TYPE_ALIGN (type)))

Just a minor nit: superfluous parentheses around can_move_by_pieces here.
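I.e. (untested, just with the redundant parentheses dropped):

+              && can_move_by_pieces (tree_to_uhwi (TYPE_SIZE_UNIT (type)),
+                                     TYPE_ALIGN (type))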


Bernd.

>>                && ! mostly_zeros_p (exp))))
>>        || ((modifier == EXPAND_INITIALIZER || modifier == EXPAND_CONST_ADDRESS)
>>           && TREE_CONSTANT (exp)))
>>
>> which handles your initializer and the all-zero one optimally?
>>
> 
> It works.  Here is the updated patch.
> 
> Thanks.
> 
