public inbox for gcc-patches@gcc.gnu.org
From: Richard Biener <rguenther@suse.de>
To: Jeff Law <law@redhat.com>
Cc: Tamar Christina <tamar.christina@arm.com>,
	    GCC Patches <gcc-patches@gcc.gnu.org>,
	    "jakub@redhat.com" <jakub@redhat.com>, nd <nd@arm.com>
Subject: Re: [PATCH] Optimise the fpclassify builtin to perform integer operations when possible
Date: Wed, 21 Sep 2016 07:13:00 -0000
Message-ID: <alpine.LSU.2.11.1609210906270.26629@t29.fhfr.qr>
In-Reply-To: <c092ca3e-fef4-0d89-f527-17ce09e41de3@redhat.com>

On Tue, 20 Sep 2016, Jeff Law wrote:

> On 09/20/2016 06:00 AM, Tamar Christina wrote:
> > 
> > 
> > On 16/09/16 20:49, Jeff Law wrote:
> > > On 09/12/2016 10:19 AM, Tamar Christina wrote:
> > > > Hi All,
> > > > +
> > > > +      /* Re-interpret the float as an unsigned integer type
> > > > +         with equal precision.  */
> > > > +      int_arg_type = build_nonstandard_integer_type (TYPE_PRECISION (type), 0);
> > > > +      int_arg = fold_build1_loc (loc, INDIRECT_REF, int_arg_type,
> > > > +                 fold_build1_loc (loc, NOP_EXPR,
> > > > +                                  build_pointer_type (int_arg_type),
> > > > +                                  fold_build1_loc (loc, ADDR_EXPR,
> > > > +                                                   build_pointer_type (type), arg)));
> > > Doesn't this make ARG addressable?  Which in turn means ARG won't be
> > > exposed to the gimple/ssa optimizers.    Or is it the case that when
> > > fpclassify is used its argument is already in memory (and thus
> > > addressable?)
> > > 
> > I believe that it is the case that when fpclassify is used the argument
> > is already addressable, but I am not 100% certain.  I may be able to do
> > this differently, so I'll come back to you on this one.
> The more I think about it, the more I suspect ARG is only going to already be
> marked as addressable if it has already had its address taken.

Sure, if it has its address taken ... but I don't see how
fpclassify requires the arg to have its address taken.
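
For illustration only, the quoted folding is the tree form of the classic
pointer pun, sketched here at the source level assuming a 32-bit float:

  #include <stdint.h>

  /* Source-level analogue of INDIRECT_REF (NOP_EXPR (ADDR_EXPR (arg))):
     the float is punned through its address, so it has to live in memory.
     (Strict-aliasing caveats aside; only the shape of the access matters.)  */
  static uint32_t
  float_to_bits (float arg)
  {
    return *(uint32_t *) &arg;
  }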

> But I think we can look at this as an opportunity.  If ARG is already
> addressable, then it's most likely going to be living in memory (there are
> exceptions).  If ARG is most likely going to be living in memory, then we
> clearly want to use your fast integer path, regardless of the target.
> 
> If ARG is not addressable, then it's not as clear, as the object is likely
> going to be assigned into an FP register.  Integer operations on an FP
> register will likely force a sequence where we dump the register into memory,
> load from memory into a GPR, then bit test on the GPR.  That gets very
> expensive on some architectures.
> 
> Could we defer lowering in the case where the object is not addressable until
> gimple->rtl expansion time?  That's the best time to introduce target
> dependencies into the code we generate.

Note that GIMPLE doesn't require something to be addressable just because
you access random pieces of it.  The IL has tricks like allowing
MEM[&decl + CST] without actually marking decl TREE_ADDRESSABLE (and the
expanders trying to cope with that), and there is of course
BIT_FIELD_REF, which you can use to extract arbitrary bits off any
entity without it living in memory (again with the expanders trying to
cope with that).
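
For instance, a minimal sketch (not the actual patch) of pulling the bits
out of ARG with BIT_FIELD_REF instead of going through its address, taking
TYPE, ARG and LOC to be as in the quoted hunk:

  /* Extract all TYPE_PRECISION (type) bits of ARG, starting at bit 0,
     as an unsigned integer of the same width; ARG stays a register.  */
  tree int_arg_type = build_nonstandard_integer_type (TYPE_PRECISION (type), 1);
  tree int_arg = fold_build3_loc (loc, BIT_FIELD_REF, int_arg_type, arg,
                                  bitsize_int (TYPE_PRECISION (type)),
                                  bitsize_int (0));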

So may I suggest moving the "folding" from builtins.c to gimplify.c
and simply emitting GIMPLE directly there?  That would also make it clearer
that we are dealing with a lowering process rather than a "folding".

Doing it in GIMPLE lowering is another possibility - we lower things
like posix_memalign and setjmp there as well.
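
As a rough sketch only (assuming the usual gimplification context, i.e. a
pre_p sequence to append to, and using VIEW_CONVERT_EXPR rather than the
pointer pun), emitting the reinterpretation directly could look like:

  /* Reinterpret ARG's bits into a fresh temporary without taking its
     address, appending the assignment to *PRE_P.  */
  tree int_type = build_nonstandard_integer_type (TYPE_PRECISION (type), 1);
  tree bits = create_tmp_var (int_type, "fpbits");
  gimplify_assign (bits,
                   fold_build1_loc (loc, VIEW_CONVERT_EXPR, int_type, arg),
                   pre_p);
  /* The integer comparisons for each class would then be built on BITS
     and gimplified the same way.  */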

Thanks,
Richard.


Thread overview: 32+ messages
2016-09-12 16:21 Tamar Christina
2016-09-12 22:33 ` Joseph Myers
2016-09-13 12:25   ` Tamar Christina
2016-09-12 22:41 ` Joseph Myers
2016-09-13 12:30   ` Tamar Christina
2016-09-13 12:44     ` Joseph Myers
2016-09-15  9:08       ` Tamar Christina
2016-09-15 11:21         ` Wilco Dijkstra
2016-09-15 12:56           ` Joseph Myers
2016-09-15 13:05         ` Joseph Myers
2016-09-12 22:49 ` Joseph Myers
2016-09-13 12:33   ` Tamar Christina
2016-09-13 12:48     ` Joseph Myers
2016-09-13  8:58 ` Jakub Jelinek
2016-09-13 16:16   ` Jeff Law
2016-09-14  8:31     ` Richard Biener
2016-09-15 16:02       ` Jeff Law
2016-09-15 16:28         ` Richard Biener
2016-09-16 19:53 ` Jeff Law
2016-09-20 12:14   ` Tamar Christina
2016-09-20 14:52     ` Jeff Law
2016-09-20 17:52       ` Joseph Myers
2016-09-21  7:13       ` Richard Biener [this message]
2016-09-19 22:43 ` Michael Meissner
     [not found]   ` <41217f33-3861-dbb8-2f11-950ab30a7021@arm.com>
2016-09-20 21:27     ` Michael Meissner
2016-09-21  2:05       ` Joseph Myers
2016-09-21  8:32         ` Richard Biener
2016-09-12 17:24 Moritz Klammler
2016-09-12 20:08 ` Andrew Pinski
2016-09-13 12:16 Wilco Dijkstra
2016-09-13 16:10 ` Joseph Myers
2016-09-21 14:51 ` Richard Earnshaw (lists)
