From: Maxim Kuvyrkov <maxim@codesourcery.com>
To: Paolo Bonzini <bonzini@gnu.org>
Cc: Vladimir Makarov <vmakarov@redhat.com>,
Andrey Belevantsev <abel@ispras.ru>,
gcc-patches <gcc-patches@gcc.gnu.org>
Subject: Re: [PATCH] Fix ICE in ia64 speculation support
Date: Fri, 21 Sep 2007 09:53:00 -0000
Message-ID: <46F38A7A.3030008@codesourcery.com>
In-Reply-To: <46F3829A.2090305@gnu.org>
Paolo Bonzini wrote:
>
>> OK, back to the immediate problem: speculative load appears to be trap
>> risky to the rtl analyzer due to use of (unspec) in its pattern.
>> UNSPEC is placed into speculative loads to distinguish them from
>> regular ones. What else, besides UNSPEC, can we use to make insn emit
>> different asm, but still have the same RTL meaning? I can think of
>> only one alternative: use a parallel with a nop, e.g., (parallel [(set
>> (reg) (mem)) (const_int 0)]). I think this is uglier than unspec,
>> plus rtl analyzers favor parallel less than unspec. I appreciate any
>> advice here.
>
> It seems to me that only UNSPEC_VOLATILE is counted as possibly
> trapping.
The case is (set (reg) (UNSPEC:<fp_mode> (mem))), which is considered
trapping because it is a floating-point operation (the analysis doesn't
even get to the mem).
> The problem is that the MEM occurs inside the UNSPEC, and that
> one is marked as trapping.  You could add a target hook like
> unspec_may_trap_p, with a patch like this:
This is how the same problem is solved on the sel-sched-branch.  I agree
that it won't hurt to teach may_trap_p to ask the target about UNSPECs.
>
> Index: rtlanal.c
> ===================================================================
> --- rtlanal.c (revision 126191)
> +++ rtlanal.c (working copy)
> @@ -2206,8 +2206,11 @@ may_trap_p_1 (rtx x, unsigned flags)
>      case SCRATCH:
>        return 0;
>
> -    case ASM_INPUT:
> +    case UNSPEC:
>      case UNSPEC_VOLATILE:
> +      return targetm.unspec_may_trap_p (x, flags);
> +
> +    case ASM_INPUT:
>      case TRAP_IF:
>        return 1;
>
>
> and a default implementation of
>
> int
> default_unspec_may_trap_p (rtx x, unsigned flags)
> {
>   int j;
>
>   if (GET_CODE (x) == UNSPEC_VOLATILE)
>     return 1;
>
>   for (j = 0; j < XVECLEN (x, 0); j++)
>     if (may_trap_p_1 (XVECEXP (x, 0, j), flags))
>       return 1;
>
>   return 0;
> }
>
> The ia64 back-end could special case the unspec like this:
>
>   if (XINT (x, 1) == ...)
>     return 0;
>
>   return default_unspec_may_trap_p (x);
>
> (Hmm, this requires making may_trap_p_1 public; it's static now.)
Not if default_unspec_may_trap_p () is defined in rtlanal.c.
Thanks,
Maxim