* GCC 3.2.1 -> GCC 3.3 compile speed regression
@ 2003-01-31 7:49 Ziemowit Laski
2003-01-31 7:49 ` Zack Weinberg
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Ziemowit Laski @ 2003-01-31 7:49 UTC (permalink / raw)
To: gcc; +Cc: Geoffrey Keating
I will try to make this as succinct as possible. For testing
I used the gcc_3_2_1_release and the gcc-3_3-branch; for the
test case, a preprocessed version of a single source file
found in the open source version of the Qt toolkit. You will
find the two test cases (created under RedHat Linux 8.0) here:
http://homepage.mac.com/zlaski/FileSharing1.html
I've instrumented my local copy of 3.3 to produce -fmem-report
stats comparable to 3.2.1 (productizing this for FSF use is another
matter that I will need to coordinate with Geoff). Also, I've augmented
-fmem-report in both compilers to also tell us how many times
ggc_collect() is called, and how many times it actually decides to go
through with its mark-and-sweep pass.
Both compilers are using the default 4MB heap size. I re-ran both
command lines several times to warm up the various caches, and then
reported the last 3 timings for each. The -fmem-report and -Q output
has been trimmed to only show the juicy cuts:
/home/zlaski/fsf/obj/gcc/gcc_3_2_1_release/gcc/g++ -B
/home/zlaski/fsf/obj/gcc/gcc_3_2_1_release/gcc/ -c
structureparser.gcc_3_2_1_release.ii -fmem-report -Q
{GC 5327k -> 3237k}
{GC 5328k -> 4109k}
{GC 5343k -> 4661k}
{GC 6065k -> 5406k}
{GC 7029k -> 6218k}
{GC 8120k -> 7203k}
{GC 9386k -> 8260k}
{GC 10741k -> 9470k}
{GC 12395k -> 10397k}
{GC 13584k -> 10727k}
{GC 14039k -> 11681k}
{GC 13922k -> 12090k}
Tree Number Bytes % Total
Total 216513 10M
RTX Number Bytes % Total
Total 1165 12k
Collector calls: 7644
Collections performed: 12
Size Allocated Used Overhead
Total 13M 11M 130k
real 0m2.692s
real 0m2.682s
real 0m2.686s
/home/zlaski/fsf/obj/gcc/gcc-3_3-branch/gcc/g++ -B
/home/zlaski/fsf/obj/gcc/gcc-3_3-branch/gcc/ -c
structureparser.gcc-3_3-branch.ii -fmem-report -Q
{GC 5326k -> 2058k}
{GC 5325k -> 3023k}
{GC 5411k -> 3871k}
{GC 5420k -> 4278k}
{GC 5564k -> 4414k}
{GC 5740k -> 4772k}
{GC 6209k -> 5170k}
{GC 6745k -> 5728k}
{GC 7447k -> 6243k}
{GC 8119k -> 6722k}
{GC 8740k -> 7364k}
{GC 9584k -> 8121k}
{GC 10596k -> 8662k}
{GC 11279k -> 9516k}
{GC 12400k -> 10220k}
{GC 13352k -> 10597k}
{GC 13844k -> 10762k}
{GC 14069k -> 10903k}
{GC 14183k -> 11670k}
{GC 15455k -> 12306k}
{GC 4194303k -> 12411k}  /* ignore the 4GB - needed to trigger the last collection :-) */
Tree Number Bytes % Total
Total 227136 11M
RTX Number Bytes % Total
Total 1322 12k
Collector calls: 7805
Collections performed: 21
Size Allocated Used Overhead
Total 14M 12M 133k
real 0m3.649s
real 0m3.648s
real 0m3.641s
Immediately, a culprit comes to mind: the 3.3 compiler performs almost
TWICE AS MANY COLLECTIONS as the 3.2.1 compiler, with a roughly
comparable number of ggc_collect() calls. This can't be cheap.
What we see from the GC numbers is that the 3.3 collections are able to
reclaim a greater percentage of allocated memory at each turn. This
could be a combination of two factors:
- Data in the 3.3 compiler have better temporal locality (!!)
- The 3.2.1 collector was buggy/incomplete and could not reliably
reclaim some stuff.
Anyway, this better reclamation done by the 3.3 ggc also explains why
the collections occur more frequently there.
So, I cranked the heap size all the way up to 128 MB. For the 3.2.1
compiler, this involved rebuilding it after changing ggc-page.c; the
3.3 compiler is more civilized and accepts a --param:
/home/zlaski/fsf/obj/gcc/gcc_3_2_1_release/gcc/g++ -B
/home/zlaski/fsf/obj/gcc/gcc_3_2_1_release/gcc/ -c
structureparser.gcc_3_2_1_release.ii -fmem-report
Execution times (seconds)
garbage collection : 0.11 ( 4%) usr 0.00 ( 0%) sys 0.11 ( 4%)
wall
cfg construction : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.00 ( 0%)
wall
life analysis : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.00 ( 0%)
wall
preprocessing : 0.19 ( 8%) usr 0.13 (27%) sys 0.28 ( 9%)
wall
lexical analysis : 0.28 (11%) usr 0.21 (43%) sys 0.44 (14%)
wall
parser : 1.73 (71%) usr 0.15 (31%) sys 1.98 (65%)
wall
expand : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.05 ( 2%)
wall
varconst : 0.03 ( 1%) usr 0.00 ( 0%) sys 0.05 ( 2%)
wall
integration : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.00 ( 0%)
wall
flow analysis : 0.00 ( 0%) usr 0.00 ( 0%) sys 0.02 ( 1%)
wall
mode switching : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.02 ( 1%)
wall
local alloc : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.02 ( 1%)
wall
global alloc : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.02 ( 1%)
wall
shorten branches : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.00 ( 0%)
wall
rest of compilation : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.02 ( 1%)
wall
TOTAL : 2.45 0.49 3.05
Tree Number Bytes % Total
Total 216390 10M
RTX Number Bytes % Total
Total 1165 12k
Collector calls: 7644
Collections performed: 1
Size Allocated Used Overhead
Total 23M 11M 242k
real 0m1.997s
real 0m1.999s
real 0m1.994s
/home/zlaski/fsf/obj/gcc/gcc-3_3-branch/gcc/g++ -B
/home/zlaski/fsf/obj/gcc/gcc-3_3-branch/gcc/ -c
structureparser.gcc-3_3-branch.ii -fmem-report --param
ggc-min-heapsize=131072
Execution times (seconds)
garbage collection : 0.10 ( 3%) usr 0.00 ( 0%) sys 0.09 ( 3%)
wall
life info update : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.00 ( 0%)
wall
preprocessing : 0.27 ( 9%) usr 0.13 (22%) sys 0.27 ( 7%)
wall
lexical analysis : 0.29 (10%) usr 0.15 (26%) sys 0.72 (20%)
wall
parser : 2.13 (72%) usr 0.28 (48%) sys 2.41 (66%)
wall
expand : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.00 ( 0%)
wall
varconst : 0.06 ( 2%) usr 0.00 ( 0%) sys 0.06 ( 2%)
wall
jump : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.00 ( 0%)
wall
global alloc : 0.02 ( 1%) usr 0.00 ( 0%) sys 0.00 ( 0%)
wall
reg stack : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.00 ( 0%)
wall
final : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.05 ( 1%)
wall
symout : 0.00 ( 0%) usr 0.01 ( 2%) sys 0.02 ( 0%)
wall
rest of compilation : 0.01 ( 0%) usr 0.00 ( 0%) sys 0.02 ( 0%)
wall
TOTAL : 2.94 0.58 3.67
Tree Number Bytes % Total
Total 227014 11M
RTX Number Bytes % Total
Total 1322 12k
Collector calls: 7805
Collections performed: 1
Size Allocated Used Overhead
Total 27M 12M 274k
real 0m2.580s
real 0m2.585s
real 0m2.580s
The overall _amount_ of extra memory being allocated clearly does not
have an adverse effect on the time it takes the garbage collector to
traverse it, since the 3.3 collector runs at least as fast as the 3.2.1
collector in spite of about 10% more data under its wing.
However, the access _patterns_ for the allocated memory clearly do have
an effect on the _number_ of traversals the garbage collector will
perform on it. What you see from the numbers is that more heap data
becomes orphaned sooner in 3.3 than in 3.2.1, and is therefore
reclaimed. Less remaining (i.e., marked) data means a lower
threshold for firing off the next collection, and ergo we see more of
them.
But, this is not all; even though a lot more stuff is successfully
reclaimed, the live memory size grows back quickly, only to be
reclaimed with equal vigor. What this suggests (correct me if I'm
going astray) is a "sawtooth" time series of memory allocations. Each
tooth tends to group together data with a high degree of temporal
locality, most of it destroyed at the end of some activity (my guess:
compiling a body of a function). Then, as the compilation of the next
function begins, the compiler again allocates a bunch of data that it
will use for that function and that function only.
This leads me to the following half-baked hypothesis: Some kinds of
data/information that used to persist across the entire compilation are
now created and destroyed repeatedly in some local temporal scope, like
the compilation of a single function. This hypothesis would explain
not only the increased number of gc passes, but also the overall
performance degradation we see (even with gc turned off).
So, the question is, has anyone touched code in the C++ front-end that
deals with compilation of functions? Classes? Perhaps someone has
tried to reduce the memory footprint of the running compiler by
destroying all data unless it is absolutely needed? The law of
unintended consequences applies in engineering as well as in economic
policy. :-)
--Zem
--------------------------------------------------------------
Ziemowit Laski 1 Infinite Loop, MS 301-2K
Mac OS X Compiler Group Cupertino, CA USA 95014-2083
Apple Computer, Inc. +1.408.974.6229 Fax .5477
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: GCC 3.2.1 -> GCC 3.3 compile speed regression
2003-01-31 7:49 GCC 3.2.1 -> GCC 3.3 compile speed regression Ziemowit Laski
@ 2003-01-31 7:49 ` Zack Weinberg
2003-02-01 19:03 ` Devang Patel
2003-01-31 10:45 ` Fergus Henderson
2003-02-01 8:09 ` Ephemeral garbage [Was: GCC 3.2.1 -> GCC 3.3 compile speed regression] Timothy J. Wood
2 siblings, 1 reply; 8+ messages in thread
From: Zack Weinberg @ 2003-01-31 7:49 UTC (permalink / raw)
To: Ziemowit Laski; +Cc: gcc, Geoffrey Keating
Ziemowit Laski <zlaski@apple.com> writes:
[lots of good juicy data]
> Immediately, a culprit comes to mind: The 3.3 compiler performs
> almost TWICE AS MANY COLLECTIONS as the 3.2.1 compiler, with a roughly
> comparable # of ggc_collect() calls. This can't be cheap.
>
> What we see from the GC numbers is that the 3.3 collections are able
> to reclaim a greater percentage of allocated memory at each turn.
> This could be a combination of two factors:
> - Data in the 3.3 compiler have better temporal locality (!!)
> - The 3.2.1 collector was buggy/incomplete and could not reliably
> reclaim some stuff.
>
> Anyway, this better reclamation done by the 3.3 ggc also explains why
> the collections occur more frequently there.
[...]
> This leads me to the following half-baked hypothesis: Some kinds of
> data/information that used to persist across the entire compilation
> are now created and destroyed repeatedly in some local temporal scope,
> like the compilation of a single function. This hypothesis would
> explain not only the increased number of gc passes, but also the
> overall performance degradation we see (even with gc turned off).
A suggestion: Make ggc_collect a macro like this:
#define ggc_collect() ggc_collect_real(__FUNCTION__)
and then have ggc_collect_real print out the caller but only when it
decides to do a collection. That'll tell us if, say, all the
additional collections in 3.3 are happening in the same place.
zw
* Re: GCC 3.2.1 -> GCC 3.3 compile speed regression
2003-01-31 7:49 GCC 3.2.1 -> GCC 3.3 compile speed regression Ziemowit Laski
2003-01-31 7:49 ` Zack Weinberg
@ 2003-01-31 10:45 ` Fergus Henderson
2003-02-01 8:09 ` Ephemeral garbage [Was: GCC 3.2.1 -> GCC 3.3 compile speed regression] Timothy J. Wood
2 siblings, 0 replies; 8+ messages in thread
From: Fergus Henderson @ 2003-01-31 10:45 UTC (permalink / raw)
To: Ziemowit Laski; +Cc: gcc, Geoffrey Keating
On 30-Jan-2003, Ziemowit Laski <zlaski@apple.com> wrote:
> Immediately, a culprit comes to mind: The 3.3 compiler performs almost
> TWICE AS MANY COLLECTIONS as the 3.2.1 compiler, with a roughly
> comparable # of ggc_collect() calls. This can't be cheap.
>
> What we see from the GC numbers is that the 3.3 collections are able to
> reclaim a greater percentage of allocated memory at each turn.
How does the number of allocations compare?
It sounds like 3.3 is allocating a lot more soon-to-be-garbage data than 3.2.1.
It could be useful to produce an allocation profile, showing which
functions were responsible for increased numbers of allocations.
That is, instrument ggc_alloc() so that it records each allocation
in a hash table mapping from allocation site to allocation count.
For identifying allocation sites, you can use __FUNCTION__ (you'd need
to rename the definition of ggc_alloc() and instead define ggc_alloc()
as a macro that passes __FUNCTION__ to the renamed version).
Do this for both 3.2.1 and 3.3, and it should be easy to see which
function(s) are doing a lot more allocations.
--
Fergus Henderson <fjh@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
* Ephemeral garbage [Was: GCC 3.2.1 -> GCC 3.3 compile speed regression]
2003-01-31 7:49 GCC 3.2.1 -> GCC 3.3 compile speed regression Ziemowit Laski
2003-01-31 7:49 ` Zack Weinberg
2003-01-31 10:45 ` Fergus Henderson
@ 2003-02-01 8:09 ` Timothy J. Wood
2003-02-02 5:20 ` Segher Boessenkool
2003-02-03 20:42 ` Mike Stump
2 siblings, 2 replies; 8+ messages in thread
From: Timothy J. Wood @ 2003-02-01 8:09 UTC (permalink / raw)
To: Ziemowit Laski; +Cc: gcc, Geoffrey Keating
On Thursday, January 30, 2003, at 06:45 PM, Ziemowit Laski wrote:
[...]
> What we see from the GC numbers is that the 3.3 collections are able
> to reclaim a greater percentage of allocated memory at each turn.
> This could be a combination of two factors:
> - Data in the 3.3 compiler have better temporal locality (!!)
> - The 3.2.1 collector was buggy/incomplete and could not reliably
> reclaim some stuff.
Another possibility that occurs to me is that 3.3 might be allocating
more ephemeral blocks. I guess you could say that this is better
temporal locality, but maybe it's just a few sloppy temporary object
allocations.
If you sum the memory reclaimed during collection (i.e., the total
size of temporary objects that didn't actually end up being needed for
the whole life of the compiler), you get 17820k for your 3.2.1 run and
38408k for the 3.3 run.
(BTW, I was using: pbpaste | sed 's/[^-0-9]//g' | bc | awk
'{sum=sum+$1} END{print sum}' to get the numbers from your data :)
So, to me it looks like 3.3 creates 2.15x as much garbage that needs
collecting.
IMHO, the garbage collector should be used for objects with lifetimes
that are difficult to determine. Local temporary stuff with easily
computed lifetimes should be on an obstack or something similar and not
get allocated from the GC. I imagine this is harder than just saying
that, but it may help to reduce the load on the garbage collector by
creating less garbage.
Just to add some new data to the discussion, I took Zem's test file
and ran it through the head of the 3.3 branch after making a change to
ggc-page.c to collect after every ggc_pop_context call. As you might
expect, this makes the compiler rather slow :) But, it lets you see
something about how much trash is generated for various bits of work
(with -Q on, say).
This file contains an example of the output when run with the
structureparser.gcc-3_3-branch.ii file:
http://www.omnigroup.com/~bungi/gc.txt.gz
As one example, right at the top of Zem's file there is:
inline int qRound( double d )
{
return d >= 0.0 ? int(d + 0.5) : int( d - ((int)d-1) + 0.5 ) +
((int)d-1);
}
I made several renamed duplicates of this right after the original
(just to help stabilize the numbers) and got:
int qRound(double) {GC 187k -> 182k}
int xRound(double) {GC 188k -> 184k}
int x1Round(double) {GC 190k -> 185k}
int x2Round(double) {GC 191k -> 186k}
int x3Round(double) {GC 192k -> 188k}
int x4Round(double) {GC 194k -> 189k}
int x5Round(double) {GC 195k -> 190k}
int x6Round(double) {GC 196k -> 191k}
int x7Round(double) {GC 197k -> 193k}
Each one of these inlines ended up creating about 5k of garbage --
not actually useful data -- but 5k (more than a full page on Mac OS X!)
of crud that will never be used again. Multiply this by several
thousand after including the STL headers and you have an ugly
picture :)
It looks like the actual *saved* data for each of these inlines was
between 1k and 2k (closer to 1k). Not a good ratio, I think.
I need to do some work on ggc-page.c before I can run the real test I
wanted to run (basically I want to detect exactly which blocks became
free during a collection so that I can hook this into OmniObjectMeter
and try to see which allocation sites have the most short lived blocks).
-tim
* Re: GCC 3.2.1 -> GCC 3.3 compile speed regression
2003-01-31 7:49 ` Zack Weinberg
@ 2003-02-01 19:03 ` Devang Patel
0 siblings, 0 replies; 8+ messages in thread
From: Devang Patel @ 2003-02-01 19:03 UTC (permalink / raw)
To: gcc; +Cc: Devang Patel, Zem Laski
> This leads me to the following half-baked hypothesis: Some kinds of
> data/information that used to persist across the entire compilation
> are now created and destroyed repeatedly in some local temporal scope,
> like the compilation of a single function. This hypothesis would
> explain not only the increased number of gc passes, but also the
> overall performance degradation we see (even with gc turned off).
I found the following differences in the source:
1) In alias.c
'reg_base_value' is allocated using ggc_alloc_cleared() instead of
xcalloc(). This is done in init_alias_analysis(), which is called from
many places.
2) In except.c
'entry' is allocated now using ggc_alloc() inside add_ehl_entry(). It
used xmalloc earlier. eh_region is now allocated using
ggc_alloc_cleared() instead of xcalloc() inside duplicate_eh_region_1().
3) In cselib.c
'elt_list's are now allocated using ggc_alloc() instead of
obstack_alloc(). Same for 'cselib_val'.
4) spew.c now uses ggc_alloc instead of an obstack.
-Devang
* Re: Ephemeral garbage [Was: GCC 3.2.1 -> GCC 3.3 compile speed regression]
2003-02-01 8:09 ` Ephemeral garbage [Was: GCC 3.2.1 -> GCC 3.3 compile speed regression] Timothy J. Wood
@ 2003-02-02 5:20 ` Segher Boessenkool
2003-02-03 20:42 ` Mike Stump
1 sibling, 0 replies; 8+ messages in thread
From: Segher Boessenkool @ 2003-02-02 5:20 UTC (permalink / raw)
To: Timothy J. Wood; +Cc: Ziemowit Laski, gcc, Geoffrey Keating
Timothy J. Wood wrote:
>
> IMHO, the garbage collector should be used for objects with lifetimes
> that are difficult to determine. Local temporary stuff with easily
> computed lifetimes should be on an obstack or something similar and
> not get allocated from the GC. I imagine this is harder than just
> saying that, but it may help to reduce the load on the garbage
> collector by creating less garbage.
The garbage collector could be expanded to make it possible
to *explicitly* deallocate a piece of GCable memory, and then
callers can use it to get rid of stuff they are sure they don't
need any more. This has the advantage that you don't need to
free stuff on unexpected/error paths, as the GC will take care
of it. ggc_free() or something like that would seem a good name.
Segher
* Re: Ephemeral garbage [Was: GCC 3.2.1 -> GCC 3.3 compile speed regression]
2003-02-01 8:09 ` Ephemeral garbage [Was: GCC 3.2.1 -> GCC 3.3 compile speed regression] Timothy J. Wood
2003-02-02 5:20 ` Segher Boessenkool
@ 2003-02-03 20:42 ` Mike Stump
2003-02-03 21:15 ` Mark Mitchell
1 sibling, 1 reply; 8+ messages in thread
From: Mike Stump @ 2003-02-03 20:42 UTC (permalink / raw)
To: Timothy J. Wood; +Cc: Ziemowit Laski, gcc, Geoffrey Keating
On Saturday, February 1, 2003, at 12:09 AM, Timothy J. Wood wrote:
> Another possibility that occurs to me is that 3.3 might be
> allocating more ephemeral blocks. I guess you could say that this is
> better temporal locality, but maybe its just a few sloppy temporary
> object allocations.
I have a suggestion for this case. ggc_free. ggc_free would be
documented as:
ggc_free
Routine to be called on an object allocated by ggc_alloc when the
object is known to have no live references left.
Semantics: when ENABLE_CHECKING is off, the object is immediately made
available for reallocation via ggc_alloc, without any further
checking. This routine can be used to speed GC in common cases like:
  while (++try < MAX_TRIES)
    {
      o = ggc_alloc ();
      if (property (o) < min)
        {
          min = property (o);
          r = o;
        }
    }
  return r;
and after:
  while (++try < MAX_TRIES)
    {
      o = ggc_alloc ();
      if (property (o) < min)
        {
          min = property (o);
          r = o;
        }
      else
        ggc_free (o);
    }
  return r;
With ENABLE_CHECKING, we could mark such objects as should-be-free
until the next collection; at collection time, we double-check that
each one is indeed unreferenced and then truly free it, otherwise
abort. Thus, we get tight alloc/dealloc code for releases, with the
error checking of full GC during development for objects that we
actually know something about. This should give us the advantages of
both worlds, with the flexibility of either, depending upon need.
* Re: Ephemeral garbage [Was: GCC 3.2.1 -> GCC 3.3 compile speed regression]
2003-02-03 20:42 ` Mike Stump
@ 2003-02-03 21:15 ` Mark Mitchell
0 siblings, 0 replies; 8+ messages in thread
From: Mark Mitchell @ 2003-02-03 21:15 UTC (permalink / raw)
To: Mike Stump, Timothy J. Wood; +Cc: Ziemowit Laski, gcc, Geoffrey Keating
--On Monday, February 03, 2003 12:37:59 PM -0800 Mike Stump
<mstump@apple.com> wrote:
> I have a suggestion for this case. ggc_free. ggc_free would be
> documented as:
>
> ggc_free
Yeah, I've meant to implement this since forever.
(I've avoided doing it partly because I think a lot of the problem is
that we allocate too much memory. If we didn't wastefully allocate
so much junk, we wouldn't have to worry so much about how quickly we
clean it up.)
--
Mark Mitchell mark@codesourcery.com
CodeSourcery, LLC http://www.codesourcery.com