From: Alexander Monakov <amonakov@ispras.ru>
To: Richard Biener <rguenther@suse.de>
Cc: Peter Bergner <bergner@vnet.ibm.com>,
	    GCC Patches <gcc-patches@gcc.gnu.org>,
	Jakub Jelinek <jakub@redhat.com>,     Jeff Law <law@redhat.com>,
	Zhuykov Roman <zhroma@ispras.ru>
Subject: Re: [PATCH, rtl] Fix PR84878: Segmentation fault in add_cross_iteration_register_deps
Date: Mon, 02 Apr 2018 14:22:00 -0000
Message-ID: <alpine.LNX.2.20.13.1804021654590.23034@monopod.intra.ispras.ru>
In-Reply-To: <alpine.LSU.2.20.1803271519240.18265@zhemvz.fhfr.qr>

On Tue, 27 Mar 2018, Richard Biener wrote:
> > > so this is kind-of global regs being live across all BBs?  This sounds
> > > a bit stupid to me, but well ... IMHO those refs should be at
> > > specific insns like calls.
> > > 
> > > So maybe, with a big fat comment, it is OK to ignore artificial
> > > refs in this loop...
> > 
> > Yeah, I'd like someone else's opinion too, as I know even less about
> > real artificial uses (as opposed to my incorrect mention in my first
> > post). :-)
> 
> If they only appear in the exit/entry block ignoring them should be safe.
> 
> But who knows...

Roman and I discussed a related problem a few weeks ago, so here's my 2c.
As I don't have any special DF knowledge, this is merely my understanding.

(apropos i: SMS uses sched-deps for intra-loop deps and then separately uses
DF for cross-iteration deps, which means it should be prepared for surprises
whenever the two scanners are not 100% in sync)

(apropos ii: given the flexibility of RTL, it would have been really nice
if there were no implicit cc0-like uses that need to be special-cased in DF,
sched-deps and other scanners)

In this case I believe it's fine to skip processing of r_use when its associated
BB is not the loop BB (i.e. 'if (DF_REF_BB (r_use->ref) != g->bb)' as Richard
suggested), but I'm concerned that skipping it when the artificial use's BB
is the loop BB would turn the ICE into wrong code. Such cases should be
detected earlier, in sms_schedule (see the comment starting with "Don't handle
BBs with calls or barriers").

Alexander


Thread overview: 13+ messages
2018-03-27  7:08 Peter Bergner
2018-03-27  8:33 ` Richard Biener
2018-03-27 13:19   ` Peter Bergner
2018-03-27 13:32     ` Richard Biener
2018-04-02 14:22       ` Alexander Monakov [this message]
2018-04-03 18:36         ` Peter Bergner
2018-04-03 18:40           ` H.J. Lu
2018-04-03 19:05             ` Peter Bergner
2018-04-04  7:16               ` Richard Biener
2018-04-04 15:43                 ` Peter Bergner
2018-04-04 18:25                   ` Peter Bergner
2018-04-04 19:23                     ` Richard Biener
2018-04-04 21:07                       ` Peter Bergner
