public inbox for gcc@gcc.gnu.org
* Recent libstdc++-v3 regressions (PCHs related?!?)
From: Paolo Carlini @ 2013-08-20 13:35 UTC
  To: 'gcc@gcc.gnu.org'; +Cc: dmalcolm, Jan Hubicka

Hi,

sorry if the issue is by now well known but... I see many libstdc++-v3 
regressions on at least x86_64-linux. When running the libstdc++-v3 
testsuite (which uses PCHs) one gets tons of new fails like the below. 
That's annoying - a lot of confusing noise.

Thanks!
Paolo.

PS: CC-ing two "random" ;) people who have been very active lately.

///////////////////////////

FAIL: 17_intro/headers/c++200x/stdc++.cc (test for excess errors)
Excess errors:
/home/paolo/Gcc/svn-dirs/trunk-build/x86_64-unknown-linux-gnu/libstdc++-v3/include/bits/shared_ptr_base.h:567:2: 
internal compiler error: Segmentation fault
0xb2521f crash_signal
     /scratch/Gcc/svn-dirs/trunk/gcc/toplev.c:335
0xa747a7 gcc::pass_manager::gt_ggc_mx()
     /scratch/Gcc/svn-dirs/trunk/gcc/passes.c:201
0x9652b5 ggc_mark_root_tab
     /scratch/Gcc/svn-dirs/trunk/gcc/ggc-common.c:133
0x965600 ggc_mark_roots()
     /scratch/Gcc/svn-dirs/trunk/gcc/ggc-common.c:152
0x7c1f44 ggc_collect()
     /scratch/Gcc/svn-dirs/trunk/gcc/ggc-page.c:2077
0x836995 cgraph_finalize_function(tree_node*, bool)
     /scratch/Gcc/svn-dirs/trunk/gcc/cgraphunit.c:456
0x6e882f expand_or_defer_fn(tree_node*)
     /scratch/Gcc/svn-dirs/trunk/gcc/cp/semantics.c:3941
0x719104 maybe_clone_body(tree_node*)
     /scratch/Gcc/svn-dirs/trunk/gcc/cp/optimize.c:428
0x6e848f expand_or_defer_fn_1(tree_node*)
     /scratch/Gcc/svn-dirs/trunk/gcc/cp/semantics.c:3866
0x6e8808 expand_or_defer_fn(tree_node*)
     /scratch/Gcc/svn-dirs/trunk/gcc/cp/semantics.c:3936
0x5ce0ed instantiate_decl(tree_node*, int, bool)
     /scratch/Gcc/svn-dirs/trunk/gcc/cp/pt.c:19269
0x6096df instantiate_pending_templates(int)
     /scratch/Gcc/svn-dirs/trunk/gcc/cp/pt.c:19356
0x646b2a cp_write_global_declarations()
     /scratch/Gcc/svn-dirs/trunk/gcc/cp/decl2.c:4064



* Re: Recent libstdc++-v3 regressions (PCHs related?!?)
From: Rainer Orth @ 2013-08-20 13:36 UTC
  To: Paolo Carlini; +Cc: 'gcc@gcc.gnu.org', dmalcolm, Jan Hubicka

Hi Paolo,

> sorry if the issue is by now well known but... I see many libstdc++-v3
> regressions on at least x86_64-linux. When running the libstdc++-v3
> testsuite (which uses PCHs) one gets tons of new fails like the
> below. That's annoying - a lot of confusing noise.

same on i386-pc-solaris2.10 and sparc-sun-solaris2.11 as of r201870.

	Rainer

-- 
-----------------------------------------------------------------------------
Rainer Orth, Center for Biotechnology, Bielefeld University


* Re: Recent libstdc++-v3 regressions (PCHs related?!?)
From: David Malcolm @ 2013-08-20 15:49 UTC
  To: Paolo Carlini; +Cc: 'gcc@gcc.gnu.org', Jan Hubicka

On Tue, 2013-08-20 at 14:03 +0200, Paolo Carlini wrote:
> Hi,
> 
> sorry if the issue is by now well known but... I see many libstdc++-v3 
> regressions on at least x86_64-linux. When running the libstdc++-v3 
> testsuite (which uses PCHs) one gets tons of new fails like the below. 
> That's annoying - a lot of confusing noise.
> 
> Thanks!
> Paolo.
> 
> PS: CC-ing two "random" ;) people who have been very active lately.

Sorry about this - looking at the backtrace this could well be due to
me, specifically r201865, which moved the gcc::pass_manager and all the
passes into the GC heap (so that we can then move GC-owned per-pass
state into the pass instances).  This would require pch files to be
regenerated, but presumably the test suite does this, right?
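
As a rough sketch of what "into the GC heap" means here (illustrative
shapes and names, not the actual r201865 diff): an object participates
in GC - and therefore in PCH writing - once its type carries GTY
markers and the object is reachable from a GTY root, e.g.:

  /* A GTY((user)) class supplies its own walkers, which is why
     gcc::pass_manager::gt_ggc_mx appears in the backtrace above.  */
  class GTY((user)) pass_manager
  {
  public:
    void gt_ggc_mx ();   /* invoked during the GC mark phase      */
    void gt_pch_nx ();   /* invoked when writing out a PCH image  */
    /* ... pass instances, themselves GC-allocated ...  */
  };

  /* A file-scope GTY variable becomes a GC root: everything
     reachable from it must be GC-managed, and all of it gets
     written into (and read back from) any PCH.  */
  static GTY(()) pass_manager *the_pass_manager;

With the passes reachable from a root like this, any non-GC pointer
in that graph becomes a liability during the mark phase - which is
what the segfault above suggests.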

I did rerun the bootstrap and regression tests before committing (based
on clean builds with and without the patches), but presumably something
unexpected is happening.

I'm investigating now.

(FWIW, if we have to back out r201865, I believe we also have to back
out r201864).


> [quoted backtrace snipped]



* Re: Recent libstdc++-v3 regressions (PCHs related?!?)
From: Paolo Carlini @ 2013-08-20 15:53 UTC
  To: David Malcolm; +Cc: 'gcc@gcc.gnu.org', Jan Hubicka

Hi,

On 08/20/2013 04:41 PM, David Malcolm wrote:
> On Tue, 2013-08-20 at 14:03 +0200, Paolo Carlini wrote:
>> Hi,
>>
>> sorry if the issue is by now well known but... I see many libstdc++-v3
>> regressions on at least x86_64-linux. When running the libstdc++-v3
>> testsuite (which uses PCHs) one gets tons of new fails like the below.
>> That's annoying - a lot of confusing noise.
>>
>> Thanks!
>> Paolo.
>>
>> PS: CC-ing two "random" ;) people who have been very active lately.
> Sorry about this - looking at the backtrace this could well be due to
> me, specifically r201865, which moved the gcc::pass_manager and all the
> passes into the GC heap (so that we can then move GC-owned per-pass
> state into the pass instances).
I can confirm that r201864 isn't affected. Thanks for looking into this!

Paolo.


* Re: Recent libstdc++-v3 regressions (PCHs related?!?)
From: David Malcolm @ 2013-08-20 19:38 UTC
  To: Paolo Carlini; +Cc: 'gcc@gcc.gnu.org', Jan Hubicka

On Tue, 2013-08-20 at 10:41 -0400, David Malcolm wrote:
> [quoted text snipped]
> 
> Sorry about this - looking at the backtrace this could well be due to
> me, specifically r201865, which moved the gcc::pass_manager and all the
> passes into the GC heap (so that we can then move GC-owned per-pass
> state into the pass instances).  This would require pch files to be
> regenerated, but presumably the test suite does this, right?
> 
> I did rerun the bootstrap and regression tests before committing (based
> on clean builds with and without the patches), but presumably something
> unexpected is happening.
> 
> I'm investigating now.

I've been erroneously running the test suite as:

  make check

but looking at http://gcc.gnu.org/contribute.html#testing
I now realise that I should be running:

  make -k check

and that without "-k" my runs have stopped when the gcc tests don't
all pass, so only the gcc testsuites have actually been run:

$ find test/*/build -name "*.sum"
test/control/build/gcc/testsuite/gcc/gcc.sum
test/control/build/gcc/testsuite/g++/g++.sum
test/control/build/gcc/testsuite/gfortran/gfortran.sum
test/control/build/gcc/testsuite/objc/objc.sum
test/experiment/build/gcc/testsuite/gcc/gcc.sum
test/experiment/build/gcc/testsuite/g++/g++.sum
test/experiment/build/gcc/testsuite/gfortran/gfortran.sum
test/experiment/build/gcc/testsuite/objc/objc.sum

Sorry about this.  I've updated my scripts to fix this, and am rerunning
(I hope) the full set of test suites now.

It's probably worth updating that page to spell out that -k is
necessary.

> (FWIW, if we have to back out r201865, I believe we also have to back
> out r201864).

Given that I don't yet have a fix and that my testing has been
incomplete, I've gone ahead and backed out both changes as r201887.

Sorry again.
Dave


* Re: Recent libstdc++-v3 regressions (PCHs related?!?)
From: David Malcolm @ 2013-08-21  9:58 UTC
  To: Paolo Carlini; +Cc: 'gcc@gcc.gnu.org', Jan Hubicka

On Tue, 2013-08-20 at 17:02 +0200, Paolo Carlini wrote:
> Hi,
> 
> On 08/20/2013 04:41 PM, David Malcolm wrote:
> > [quoted text snipped]
> > Sorry about this - looking at the backtrace this could well be due to
> > me, specifically r201865, which moved the gcc::pass_manager and all the
> > passes into the GC heap (so that we can then move GC-owned per-pass
> > state into the pass instances).
> I can confirm that r201864 isn't affected. Thanks for looking into this!

Thanks.

It took me a while to reproduce the failure - perhaps on my test box GC
is happening much less often than on yours?  On a test run I only saw 6
failures due to this (my test machine appears to have 3.6 GB of RAM).

Is there a good way to encourage the testsuite to GC more? (I know about
setting --param ggc-min-expand=0 --param ggc-min-heapsize=0 on an
individual invocation, but is there a standard way of doing this?  Or do
I need to run the whole thing in a cgroup or somesuch, and constrain the
available RAM?)

The bug (or, at least the first one I see) is that the pass_manager's
"passes_by_id" array is being allocated using XRESIZEVEC (xrealloc,
hence malloc/realloc under the covers), but given that all this is meant
to interact with GC, it needs to be persistable to PCH [1].  Now that I
see it, I'm wondering how it managed to work in my prior testing.
Presumably the array always happened to be allocated at the same
location between the process that created the pch file and the processes
that read it.
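
To make the mismatch concrete, a hedged sketch ("passes_by_id" is as
above; "passes_by_id_size" and the other shapes are assumptions, not
the real passes.c code):

  /* Buggy pattern: the array lives in plain malloc'd storage.  */
  opt_pass **passes_by_id;
  passes_by_id = XRESIZEVEC (opt_pass *, passes_by_id, id + 1);

  /* A PCH is written by dumping the GC heap and mapping it back in
     a later process; a malloc'd buffer is neither dumped nor
     relocated, so the reading process is left with a pointer that
     is only valid if malloc happens to return the same address
     again.  One fix direction is GC-managed storage whose length
     the GTY machinery can see:  */
  opt_pass ** GTY((length ("passes_by_id_size"))) passes_by_id;
  passes_by_id = (opt_pass **) ggc_realloc (passes_by_id,
                                            (id + 1)
                                            * sizeof (opt_pass *));

which would also explain why the failures are so layout-dependent.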

Is there a good way to perturb memory so that accidentally mixing a
malloc allocation with a GC allocation is more reliably fatal, to shake
out this kind of bug?  i.e. to make malloc place its results at
locations that vary from process to process?  (perhaps in libiberty?)
Perhaps also valgrind can catch this kind of thing?

Thanks
Dave

[1] I wish I could just have GC and not have to deal with PCH; oh well.


* Re: Recent libstdc++-v3 regressions (PCHs related?!?)
From: David Malcolm @ 2013-08-21 13:16 UTC
  To: Paolo Carlini; +Cc: 'gcc@gcc.gnu.org', Jan Hubicka

On Tue, 2013-08-20 at 21:26 -0400, David Malcolm wrote:
> [quoted text snipped]
> 
> Thanks.
> 
> It took me a while to reproduce the failure - perhaps on my test box GC
> is happening much less often than on yours?  On a test run I only saw 6
> failures due to this (my test machine appears to have 3.6 GB of RAM).
> 
> Is there a good way to encourage the testsuite to GC more? (I know about
> setting --param ggc-min-expand=0 --param ggc-min-heapsize=0 on an
> individual invocation, but is there a standard way of doing this?  Or do
> I need to run the whole thing in a cgroup or somesuch, and constrain the
> available RAM?)
> 
> The bug (or, at least the first one I see) is that the pass_manager's
> "passes_by_id" array is being allocated using XRESIZEVEC (xrealloc,
> hence malloc/realloc under the covers), but given that all this is meant
> to interact with GC, it needs to be persistable to PCH [1].  Now that I
> see it, I'm wondering how it managed to work in my prior testing.
> Presumably the array always happened to be allocated at the same
> location between the process that created the pch file and the processes
> that read it.

...and also the code wasn't visiting the array itself during pch
traversal, just the elements within it.  Fixing this will likely make
the other problem show up immediately.
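
Sketched with assumed shapes (the real walkers live in or around
passes.c), the difference is between visiting the elements and
registering the array's own storage:

  /* What the PCH walk was effectively doing:  */
  for (int i = 0; i < passes_by_id_size; i++)
    gt_pch_nx (passes_by_id[i]);  /* notes each pass object...  */

  /* ...but nothing notes the passes_by_id buffer itself, so its
     storage never enters the PCH address map, and the pointer read
     back from the PCH dangles.  A corrected walk must also register
     the array - via gt_pch_note_object or an equivalent hook, exact
     call shape elided here - before visiting the elements.  */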


> [remaining quoted text snipped]


