public inbox for gcc@gcc.gnu.org
* Re: Faster compilation speed
@ 2002-08-13 12:49 Robert Dewar
  2002-08-14 10:17 ` Dale Johannesen
  0 siblings, 1 reply; 215+ messages in thread
From: Robert Dewar @ 2002-08-13 12:49 UTC (permalink / raw)
  To: gcc, robertlipe

<<Now will you please quit arguing with Apple that GCC is not really too
slow for them today based solely on counterarguments that either some
other compiler for some other language was fast or that processors will
be faster sooner than the compiler can be made faster?
>>

You miss my point, which is that it is only worth doing things that have a
really substantial impact and can be done on a reasonably short time scale.
You are simply not going to get anywhere by, for example, worrying about
avoiding refolding expressions.

I would guess the two big opportunities are the persistent front end and PCH,
but from what I understand Apple has already done these two steps, so the 
question is where to go from there, and that is far from clear.

I have always found GCC awfully slow myself. Remember that I am used to
using compilers that are far, far faster than the CodeWarrior compilers :-)

The thing to avoid is putting in a large amount of work that results in little
real speed up, at the expense of reliability and other improvements.

One thing that would be interesting is to know, for one of these giant OS
projects (which I assume are in the million line but not dozens of million
line range) what the division between front end time and back end time is.

In the case of Ada most of the time is spent in the back end for large programs
so there is not much we can do in the front end if optimization is turned on.

* Re: Faster compilation speed
@ 2002-08-21 15:35 Tim Josling
  0 siblings, 0 replies; 215+ messages in thread
From: Tim Josling @ 2002-08-21 15:35 UTC (permalink / raw)
  To: gcc

"Tim Josling wrote:

>This is consistent with my tests; I found that a simplistic allocation which
>put everything on the same page, but which never freed anything, actually
>bootstrapped GCC faster than the standard GC.
>
Not too surprising actually; GCC's own sources aren't the hard cases for GC.

>
>The GC was never supposed to make GCC faster, it was supposed to reduce
>workload by getting rid of memory problems. But I doubt it achieves that
>objective. Certainly, keeping track of all the attempts to 'fix' GC has burned
>a lot of my time.
>
The original rationale that I remember was to deal with hairy C++ code
where the compiler would literally exhaust available VM when doing
function-at-a-time compilation.  If that's still the case, then memory
reclamation is a correctness issue.  But it's worth tinkering with the
heuristics; we got a little improvement on Darwin by bumping
GGC_MIN_EXPAND_FOR_GC from 1.3 to 2.0 (it was a while back, don't
have the comparative numbers).

Stan"

Much of the overhead of GC is not the collection as such, but the allocation
process and its side-effects. In fact, if you allocate using the GC code, the
build runs faster if you do the GC, though tweaking the threshold can help.
However, for many programs you are better off allocating very simply and not
doing GC at all.
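
To be concrete about what "allocate very simply" means, a bump allocator is
only a few lines of C. This is an illustrative sketch (not the allocator from
my test), but it is the same idea: carve objects out of big blocks, keep them
together, and never free anything:

#include <stdlib.h>

#define BLOCK_SIZE (1024 * 1024)   /* arbitrary block size for the sketch */

static char *block;
static size_t used = BLOCK_SIZE;   /* forces a fresh block on first use */

void *
simple_alloc (size_t size)
{
  void *p;

  size = (size + 7) & ~(size_t) 7;      /* preserve 8-byte alignment */
  if (used + size > BLOCK_SIZE)
    {
      /* malloc failure checking omitted in this sketch.  */
      block = malloc (size > BLOCK_SIZE ? size : BLOCK_SIZE);
      used = 0;
    }
  p = block + used;
  used += size;
  return p;
}

Nothing is ever returned to the system, so there is no sweep, no mark bits
and no page bitmaps to maintain; the price is that dead objects stay resident.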

The GC changes have, in my opinion, made a small number of programs better at
the expense of making most compiles slower. We should not be using GC for most
compiles at all.

This - an optimisation that actually makes things worse overall - is
unfortunately a common situation with 'improvements' to GCC.

Tim Josling

* Re: Faster compilation speed
@ 2002-08-21  6:59 Richard Kenner
  2002-08-21 15:04 ` David S. Miller
  0 siblings, 1 reply; 215+ messages in thread
From: Richard Kenner @ 2002-08-21  6:59 UTC (permalink / raw)
  To: davem; +Cc: gcc

    This is one of the huge (of many) problems with GC as it currently
    is implemented.  Different tree and RTL types land on different pages
    so when you walk a "SET" for example, the MEM and REG objects
    contained within will be on different pages and this costs a lot
    especially on modern processors.  Our page working set is huge as a
    result of this.

True if you only walk *one* SET, but normally you walk a whole bunch,
each of which has MEM and REG objects.  So I disagree that this adds to
the working set size.

* Re: Faster compilation speed
@ 2002-08-20 14:11 Tim Josling
  2002-08-20 14:13 ` David S. Miller
  2002-08-20 14:43 ` Stan Shebs
  0 siblings, 2 replies; 215+ messages in thread
From: Tim Josling @ 2002-08-20 14:11 UTC (permalink / raw)
  To: gcc

>   From: "David S. Miller" <davem@redhat.com>
> 
>    From: Richard Henderson <rth@redhat.com>
>    Date: Mon, 19 Aug 2002 10:29:09 -0700
> 
>    Well, no, since SET, MEM, REG, PLUS all have two arguments.
>    And thus are all allocated from the same page.
> 
> Ok, how about walking from INSN down to the SET?  The problem
> does indeed exist there.
> 
> Next, we have the fragmentation issue.  Look at the RTL you
> have right before reload runs on any non-trivial compilation,
> and see where the pointers are.
> 
> So the problem is there.

This is consistent with my tests; I found that a simplistic allocation which
put everything on the same page, but which never freed anything, actually
bootstrapped GCC faster than the standard GC.

The GC was never supposed to make GCC faster, it was supposed to reduce
workload by getting rid of memory problems. But I doubt it achieves that
objective. Certainly, keeping track of all the attempts to 'fix' GC has burned
a lot of my time.

Tim Josling

[parent not found: <1029519609.8400.ezmlm@gcc.gnu.org>]
* Re: Faster compilation speed
@ 2002-08-16  5:08 Joe Wilson
  2002-08-16  5:51 ` Noel Yap
  2002-08-16 11:04 ` Mike Stump
  0 siblings, 2 replies; 215+ messages in thread
From: Joe Wilson @ 2002-08-16  5:08 UTC (permalink / raw)
  To: gcc

Mat Hounsell wrote:
>But why load and unload the compiler and the headers for every file in a
>module. It would be far more efficient to adapt the build process and start gcc
>for the module and then to tell it to compile each file that needs to be
>re-compiled. Add pre-compiled header support and it wouldn't even need to
>compile the headers once.

I was thinking the same thing, except without introducing new pragmas.
You could do the common (header) code precompiling only for modules listed 
on the command line without having to save state to a file-based code
repository, i.e.:

 g++ -c [flags] module1.cpp module2.cpp module3.cpp

But compiling groups of modules at one time is contrary to the way most 
makefiles work, so it might not be practical.

Perhaps GCC already economizes the evaluation of common code in such 
"group" builds.  Can anyone comment on whether it does or not?



[parent not found: <1029475232.9572.ezmlm@gcc.gnu.org>]
* Re: Faster compilation speed
@ 2002-08-14 19:11 Tim Josling
  0 siblings, 0 replies; 215+ messages in thread
From: Tim Josling @ 2002-08-14 19:11 UTC (permalink / raw)
  To: gcc

> It could've been interesting to try incremental/generational collection.
> I didn't do that.

There may be quite a few ways to improve locality if that is the problem
(maybe the problem could just be that GC causes a bigger footprint and thereby
affects the hit rates).

Examples: 

Subpools (give allocations a name and put like named allocations together),
for example "Front End" and "Back End".

Hints about allocations that are likely to be long and short lived. Put them
in different places.

"Allocate Near" models where you give a pointer that gives a hint where you
want the next thing allocated.

Some big functions should only be optimised in chunks perhaps. This could
avoid walking long lists that are bigger than cache, and reduce the damage of
various non-linear algorithms:

big.c:999999: Warning: "This function is too big to optimise in a reasonable
time"

Compaction of allocations after freeing memory, perhaps combined with other
options. This requires knowing about all the users of that memory so pointers
can be updated. Indirect pointers perhaps?

Allocate all sizes together. This would make reuse of storage harder of course
but could improve locality.

Explicitly freeing stuff when you know you are the only user.

Allocating certain things in 'never to be freed' mode, thus avoiding having to
GC them all the time. These could all be put together with no need for bitmaps,
holes in allocated memory, etc.

Maybe some things should be allowed to migrate out of cache and never return.
Maybe freeing them is worse than leaving them alone.
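
To make the subpool and 'never to be freed' ideas a bit more concrete, here is
a rough stand-alone sketch (the names and block size are invented for
illustration, and oversize requests and allocation failures are not handled).
Each named pool is bump-allocated from its own chain of blocks, so like-named
allocations sit together, and a whole pool can be dropped in one operation:

#include <stdlib.h>
#include <string.h>

struct pool_block
{
  struct pool_block *next;
  size_t used;
  char data[64 * 1024];
};

struct pool
{
  const char *name;
  struct pool_block *blocks;
  struct pool *next;
};

static struct pool *pools;

/* Allocate SIZE bytes from the pool called NAME, e.g. "front end".
   Like-named allocations end up next to each other.  */
void *
pool_alloc (const char *name, size_t size)
{
  struct pool *p;
  struct pool_block *b;

  size = (size + 7) & ~(size_t) 7;
  for (p = pools; p; p = p->next)
    if (strcmp (p->name, name) == 0)
      break;
  if (p == NULL)
    {
      p = calloc (1, sizeof *p);
      p->name = name;
      p->next = pools;
      pools = p;
    }
  b = p->blocks;
  if (b == NULL || b->used + size > sizeof b->data)
    {
      b = calloc (1, sizeof *b);
      b->next = p->blocks;
      p->blocks = b;
    }
  b->used += size;
  return b->data + (b->used - size);
}

/* Throw away a whole pool at once, e.g. everything tagged "front end"
   once only the back end still needs data.  */
void
pool_free_all (const char *name)
{
  struct pool **pp, *p;

  for (pp = &pools; (p = *pp) != NULL; pp = &p->next)
    if (strcmp (p->name, name) == 0)
      {
        while (p->blocks)
          {
            struct pool_block *b = p->blocks;
            p->blocks = b->next;
            free (b);
          }
        *pp = p->next;
        free (p);
        return;
      }
}

Lifetime hints and "allocate near" could be grafted onto the same structure by
choosing the pool from something other than a literal name.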

----

One problem is that GCC is so complex and large that it is difficult to try
out theories.

It is pretty easy to effectively turn off GC: just increase the size below
which GC does nothing, hardcoded in ggc-page.c (GGC_MIN_LAST_ALLOCATED);
the default is 4MB.
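
From memory, the check at the top of ggc_collect () in ggc-page.c has roughly
this shape (a paraphrase, not a quote of the actual source -- check the tree
before relying on it), which is why raising GGC_MIN_LAST_ALLOCATED, or the
companion expansion factor GGC_MIN_EXPAND_FOR_GC, makes most collections
never happen:

  /* Paraphrased from memory: skip the collection entirely unless the
     heap has grown enough since the previous collection.  */
  float allocated_last_gc = MAX (G.allocated_last_gc, GGC_MIN_LAST_ALLOCATED);
  float min_expand = allocated_last_gc * GGC_MIN_EXPAND_FOR_GC;

  if (G.allocated < allocated_last_gc + min_expand)
    return;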

Tim Josling

* Re: Faster compilation speed
@ 2002-08-13 15:00 Tim Josling
  2002-08-13 15:48 ` Russ Allbery
  0 siblings, 1 reply; 215+ messages in thread
From: Tim Josling @ 2002-08-13 15:00 UTC (permalink / raw)
  To: gcc

>>File size is not the only parameter. Modern languages do more
>> complicated thing than the average Cobol compiler I suppose....
>>

> You suppose dramatically wrong (it is amazing how little people know about
> COBOL and how much they are willing to guess). Modern COBOL is an extremely
> complex language, certainly more complex than Ada, and probably more complex
> than C++.

The COBOL spec is about 1500 pages in a smallish font (including addenda and
the "intrinsic functions"). My copy of the C standard, for example, runs to
about 200 pages(1). 'Modern' languages are a lot more regular and were
designed with the compiler writer in mind. The concerns of the compiler writer
were definitely not at the forefront of the COBOL language designers' minds.

> The point is that GCC has a really terrible time if you throw a single
> procedure with tens of thousands of lines of code in it at the compiler.

Correct. The largest single function written in COBOL, that I have been able
to find, is several *hundred thousand* lines long. Even the slightest
non-linearity is a major problem.

Tim Josling

(1) Excluding the library. You could argue that some COBOL verbs are
similar to the library, which is true, but the C library hardly affects the
compiler itself. In GNU the C library is even a separate project. In COBOL the
verbs are part of the language syntax and require their own parse trees and so
forth so it would be very difficult to have a separate project. Even the
intrinsic functions, though they look like functions, are just more syntax in a
slightly more regular form.

Some of the C library functions are tightly coupled to the compiler e.g.
setjmp, va_*, memset (if inlined), printf (for parameter checking). But by and
large the library is independent.

* Re: Faster compilation speed
@ 2002-08-13 12:02 Robert Dewar
  2002-08-13 12:32 ` Robert Lipe
  2002-08-14  2:55 ` Daniel Egger
  0 siblings, 2 replies; 215+ messages in thread
From: Robert Dewar @ 2002-08-13 12:02 UTC (permalink / raw)
  To: austern, dewar; +Cc: Theodore.Papadopoulo, drow, gcc, mrs, phil, shebs

<<For some of the things Apple does, a 10-hour build time would be
a major improvement.  I don't think we're alone in that.  Machines
are faster than they once were, but projects are now much larger.
>>

and compilers are indeed slower. Ten hours should certainly be enough to build
any project at this stage. 

* Re: Faster compilation speed
@ 2002-08-13 10:36 Robert Dewar
  2002-08-13 13:46 ` Kai Henningsen
  2002-08-13 16:53 ` Joe Buck
  0 siblings, 2 replies; 215+ messages in thread
From: Robert Dewar @ 2002-08-13 10:36 UTC (permalink / raw)
  To: Theodore.Papadopoulo, dewar; +Cc: gcc, mrs, shebs

<<We should see any speed improvement as a possibility to add
more functionality into the compiler without changing much the
increase of speed the user expects to see. Even though for the time
being (and given the current state of gcc compared to the competition),
it looks like a lot of people just want to see the compiler go faster...
>>

But remember that work you put in on speeding up the compiler is work
that you do not put in on improving the compiler. As time goes on, quality
of generated code continues to be critical, while compiler speed is less critical.

<<Now, you may probably be right in this case, you certainly know more
than I do. Are you sure though that the quality of the codes generated by
these compilers were equal ?!? I suppose so, but just asking a
confirmation.
>>

Well, Philippe Kahn, in the keynote address at one big PC meeting, asked
the audience if they knew which compiler for any language on the PC
generated the best code for the popular sieve benchmark. He surprised
the audience by telling them it was Realia COBOL. Now I don't know if
the guys at Computer Associates have kept up, but certainly that data
point shows that fast compilers can generate efficient code.

<<File size is not the only parameter. Modern languages do more
complicated thing than the average Cobol compiler I suppose....
>>

You suppose dramatically wrong (it is amazing how little people know about
COBOL and how much they are willing to guess). Modern COBOL is an extremely
complex language, certainly more complex than Ada, and probably more complex
than C++.

The point is that GCC has a really terrible time if you throw a single
procedure with tens of thousands of lines of code in it at the compiler.

<<At the same time, people are getting new machines and expect their
programs to compile faster... and not to mention that the "average
source code" (to be defined by someone ;-) ) is also probably growing
in size and complexity...
>>

Actually compilers have in general got slower with time (see my SIGPLAN
compiler tutorial of many years ago, where I talked about the spectacular
advances in technology of slow compilers :-) Few modern compilers can
match Fastran on the IBM 7094.

<<And, it also depends on what the nine minutes you gained allow you
to do on your computer.... If the nine minutes can be used to do what
the average user considers to be a very important task, then nine
minutes is a lot !!!
>>

Very little in practice. You do not rebuild a million line system every
two minutes after all, and in practice once the build time for a large
system is down in the ten minute range, the gains in making it faster
diminish rapidly. This is not a guess; as I say, it is an observation
of market forces over a period of years in the competition between
Realia COBOL and Microfocus COBOL, where Realia always had a factor of
ten or more in compile speed to compete with.

* Re: Faster compilation speed
@ 2002-08-13 10:08 Robert Dewar
  0 siblings, 0 replies; 215+ messages in thread
From: Robert Dewar @ 2002-08-13 10:08 UTC (permalink / raw)
  To: Theodore.Papadopoulo, mrs; +Cc: gcc, phil, shebs

incidentally, I find the idea of a persistent front end for the compiler that
keeps compiled stuff around a very good one. This is something we have 
considered for GNAT for years :-)

* Re: Faster compilation speed
@ 2002-08-13  9:10 Robert Dewar
  2002-08-13 10:20 ` Theodore Papadopoulo
                   ` (2 more replies)
  0 siblings, 3 replies; 215+ messages in thread
From: Robert Dewar @ 2002-08-13  9:10 UTC (permalink / raw)
  To: dewar, drow; +Cc: Theodore.Papadopoulo, gcc, mrs, phil, shebs

<<Yes it is - projects have grown correspondingly.  Maybe not for COBOL,
but for the sorts of things GCC is used for.  A factor of ten is
still very significant, which is the whole point of Apple's efforts!
>>

Actually COBOL programs are FAR FAR larger than C or C++ programs in practice.
In particular, single files of hundreds of thousands of lines are common, and
complete systems of millions of lines are common. That's why there is so much
legacy COBOL around :-)

My point is that a factor of ten is relative.

If you have a million lines COBOL program and it takes 10 hours to compile,
then cutting it down to 1 hour is a real win. If it takes 10 minutes to
compile, then cutting it down to 1 minute is a much smaller win in practice.

Remember, I am a great fan of fast compilers. Realia COBOL is certainly the
fastest compiler for arbitrarily large programs ever written for the PC, and
when I used to bootstrap the compiler (it was about 100,000 lines of COBOL)
on a 386 in a couple of minutes, that was definitely pleasant. I certainly
agree that GCC is slow :-)

My point is that if you embark on a big project that will take you two
years to complete successfully, and that speeds up the compiler by a factor
of two, then it probably will not seem that worthwhile when it is finished.

You have to look for easy opportunities for big gains. Nothing else is
worthwhile. In general you cannot design a slow compiler and then molest it into
being a fast compiler, you have to design in speed as a major criterion from
the start. Small incremental changes just don't get you where you want to be.

Obviously in our situation PCH are a good target of opportunity (though I
will say again, that if you designed a really fast C++ compiler, that
compiled code at millions of lines per minute, then PCH would not be such an
obvious win, but that's not what we are dealing with here).

* Re: Faster compilation speed
@ 2002-08-13  8:07 Robert Dewar
  2002-08-13  8:40 ` Daniel Jacobowitz
  0 siblings, 1 reply; 215+ messages in thread
From: Robert Dewar @ 2002-08-13  8:07 UTC (permalink / raw)
  To: Theodore.Papadopoulo, mrs; +Cc: gcc, phil, shebs

>>Why not make incremental compilation a standard for gcc...

I seriously doubt that incremental compilation can help. Usually it is far
better to aim at the simplest fastest possible compilation path without
bothering with the extra bookkeeping needed for IC.

Historically the fastest compilers have not been incremental, and IC has
only been used to make painfully slow compilers a little less painful

(I realize that some would put GCC into the second category here, but I 
would prefer that we keep efforts focussed on moving it into the first
category).

That being said, I still wonder over time whether the effort to speed up
gcc is effort well spent. Or rather, put that another way, let's try to make
sure that it is effort well spent. If there are obvious opportunities, then
certainly it makes sense to take advantage of them.

But there are definite effort tradeoffs, and continued increase in speed of
machines does tend to mute the requirements for faster compilation.

When Realia COBOL ran 10,000 lpm on a PC-1, with the major competitor running
at 1,000 lpm, then the speed difference was a major marketing advantage, but
nowadays, with essentially the same compiler running over a million lines a
minute and essentially the same competitive compiler running at 100,000 lpm,
the difference is no longer nearly so significant :-)

* Re: Faster compilation speed
@ 2002-08-12 23:39 Tim Josling
  0 siblings, 0 replies; 215+ messages in thread
From: Tim Josling @ 2002-08-12 23:39 UTC (permalink / raw)
  To: gcc

> On Sat, 10 Aug 2002, Noel Yap spake:
>>  parser                :   6.12 (65%) usr   0.75
>> (53%) sys  10.85 (63%) wall
>> ...
>>  parser                :   6.46 (65%) usr   0.63
>> (53%) sys   9.98 (62%) wall
>> ...
> Thanks,
> Noel

I have trouble believing that bison is taking that amount of time. There are a
lot of calls from the parser that are counted as PARSE. And flag_syntax_only
doesn't turn off as much as you might think. 

In my COBOL front end, all I do in the parse file is build a 'tree'. Although
many people told me bison would be too slow, let alone using flex, profiling
shows them to be a non-issue. The problem is the code generation.

According to a gprof on the largest gcc module (insn-recog.c) the parser is
only 0.43% of the total run time. On the other hand the GC figures very
prominently in the top 100 functions. This is of course without taking into
account the additional effect on cache hit rates of the larger working set
that results from using GC. On my system, this program takes about 90 seconds
to compile, but preprocessing takes less than one second. The RTL time is very
large.

The largest hand coded code gcc module (combine.c) shows broadly similar
results. The parser remains negligible. The GC is somewhat lower presumably
due to the smaller size of the program. GC remains significant, even apart
from working set/cache effects.

Compiling combine.c takes 7 seconds with -O0, 15 seconds with -O1 and 25
seconds with -O2. Nearly everyone uses -O2 so it is clear where the time is
being spent in most cases - doing optimisation. Even in -O0 a fair bit of
time, maybe 2-3 seconds, is spent optimising. 

Conclusion:

1. The fault, dear Bison, is in ourselves, not in you.

2. Same for the preprocessor, except maybe for C++ where many headers are
included. This is one of many design problems with the C++ language IMHO but
maybe something can be done to help.

3. GC chews up a substantial amount of time, especially in non-optimised
compiles. GC needs to be improved, but any further changes to GC should be
evidence based and subject to peer review. This would have two beneficial
effects: firstly reduced thrashing of front end developers keeping up with
significant changes of unknown benefit; and secondly we could be confident
that changes represent significant progress.

4. We do need some good numbers on how much GCC is affected by cache misses.
This would give us an idea how much effort should be devoted to improving
working set size and locality. There are lots of ways to improve locality and
reduce working sets. But let's find out if it is needed before we start
coding.

5. Most of the time in GCC compiles is spent in optimisation. So, the focus
should be there. The RTL phase of GCC is poorly understood, by anyone. Code
that is not well understood and that people are afraid to touch is invariably
inefficient. 

Two gprof outputs follow.

Tim Josling

insn-recog.c:
 %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
  4.84      2.14     2.14 23088715     0.00     0.00  ggc_set_mark
  3.87      3.85     1.71  2153853     0.00     0.00  ggc_mark_rtx_children_1
  3.40      5.35     1.50  1070849     0.00     0.01  cse_insn
  3.28      6.80     1.45     1169     1.24     1.47  verify_flow_info
  2.83      8.05     1.25  5252491     0.00     0.00  for_each_rtx
  2.67      9.23     1.18                             htab_traverse
  1.90     10.07     0.84      456     1.84     2.99  init_alias_analysis
  1.81     10.87     0.80 10643243     0.00     0.00  find_reg_note
  1.72     11.63     0.76  6645463     0.00     0.00  side_effects_p
  1.68     12.37     0.74  2561968     0.00     0.00  fold_rtx
  1.52     13.04     0.67  6523785     0.00     0.00  ggc_alloc
  1.47     13.69     0.65   799692     0.00     0.00  gt_ggc_mx_lang_tree_node
  1.45     14.33     0.64  2176585     0.00     0.00  canon_reg
  1.38     14.94     0.61  2676602     0.00     0.00  rtx_cost
  1.15     15.45     0.51  1967977     0.00     0.00  ggc_mark_rtx_children
  1.06     15.92     0.47  1747317     0.00     0.00  insert
  1.00     16.36     0.44  4171744     0.00     0.00  canon_hash
  1.00     16.80     0.44  1526861     0.00     0.00  exp_equiv_p
  0.95     17.22     0.42  1356669     0.00     0.00  propagate_one_insn
  0.91     17.62     0.40  7510741     0.00     0.00  canon_rtx
  0.88     18.01     0.39   554639     0.00     0.00  count_reg_usage
  0.86     18.39     0.38  3718565     0.00     0.00  note_stores
  0.82     18.75     0.36    88538     0.00     0.01  find_reloads
  0.77     19.09     0.34       43     7.91     7.91  poison_pages
  0.72     19.41     0.32    49907     0.01     0.01  preprocess_constraints
  0.70     19.72     0.31  1011445     0.00     0.00  invalidate
  0.70     20.03     0.31   558511     0.00     0.00  reg_scan_mark_refs
  0.66     20.32     0.29   650208     0.00     0.00  constrain_operands
  0.63     20.60     0.28   774372     0.00     0.00  mark_used_regs
  0.63     20.88     0.28    24927     0.01     0.01  count_or_remove_death_notes
  0.61     21.15     0.27  2880996     0.00     0.00  mark_set_1
  0.59     21.41     0.26  7613625     0.00     0.00  approx_reg_cost_1
  0.57     21.66     0.25  1516590     0.00     0.00  simplify_binary_operation
  0.57     21.91     0.25  1014709     0.00     0.00  mention_regs
  0.57     22.16     0.25   177560     0.00     0.00  validate_value_data
  0.57     22.41     0.25     7063     0.04     0.10  compute_transp
  0.54     22.65     0.24   539000     0.00     0.00  copy_rtx
  0.52     22.88     0.23  1172954     0.00     0.00  insn_extract
  0.52     23.11     0.23   109544     0.00     0.00  record_reg_classes
  0.50     23.33     0.22  1495174     0.00     0.00  reg_mentioned_p
  0.48     23.54     0.21    51125     0.00     0.23  cse_basic_block
  0.48     23.75     0.21    51125     0.00     0.00  cse_end_of_basic_block
  0.45     23.95     0.20  1886796     0.00     0.00  legitimate_address_p
  0.45     24.15     0.20   501459     0.00     0.00  find_best_addr
  0.45     24.35     0.20   354991     0.00     0.00  mark_jump_label
  0.43     24.54     0.19  6597365     0.00     0.00  get_cse_reg_info
  0.43     24.73     0.19   598028     0.00     0.00  copy_rtx_if_shared
  0.43     24.92     0.19        1   190.00 40549.99  yyparse
  0.41     25.10     0.18   279766     0.00     0.00  simplify_plus_minus
...

combine.c:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total           
 time   seconds   seconds    calls  ms/call  ms/call  name    
  2.63      0.29     0.29   146391     0.00     0.01  cse_insn
  2.45      0.56     0.27  2791299     0.00     0.00  find_reg_note
  2.45      0.83     0.27   872878     0.00     0.00  for_each_rtx
  2.18      1.07     0.24  1844867     0.00     0.00  side_effects_p
  2.00      1.29     0.22  2403378     0.00     0.00  ggc_set_mark
  2.00      1.51     0.22     1779     0.12     0.18  verify_flow_info
  1.72      1.70     0.19  2090080     0.00     0.00  ggc_alloc
  1.63      1.88     0.18                             htab_traverse
  1.45      2.04     0.16   196228     0.00     0.00  gt_ggc_mx_lang_tree_node
  1.36      2.19     0.15  2787969     0.00     0.00  bitmap_bit_p
  1.36      2.34     0.15    21830     0.01     0.01  preprocess_constraints
  1.27      2.48     0.14  2235546     0.00     0.00  canon_rtx
  1.27      2.62     0.14    42175     0.00     0.01  find_reloads
  1.18      2.75     0.13  1707121     0.00     0.00  mark_set_1
  1.18      2.88     0.13   328751     0.00     0.00  fold_rtx
  1.09      3.00     0.12   624046     0.00     0.00  propagate_one_insn
  1.09      3.12     0.12   288895     0.00     0.00  count_reg_usage
  1.00      3.23     0.11   276995     0.00     0.00  constrain_operands
  1.00      3.34     0.11   128667     0.00     0.00  ggc_mark_rtx_children_1
  1.00      3.45     0.11    77278     0.00     0.00  validate_value_data
  1.00      3.56     0.11      786     0.14     0.43  init_alias_analysis
  0.82      3.65     0.09  1502223     0.00     0.00  note_stores
  0.82      3.74     0.09  1093219     0.00     0.00  get_cse_reg_info
  0.82      3.83     0.09   291031     0.00     0.00  m16m
  0.82      3.92     0.09   157999     0.00     0.00  mark_jump_label
  0.82      4.01     0.09    43257     0.00     0.00  reload_cse_simplify_operands
  0.82      4.10     0.09    42404     0.00     0.00  record_reg_classes
  0.73      4.18     0.08   513121     0.00     0.00  find_base_term
  0.73      4.26     0.08   497589     0.00     0.00  insn_extract
  0.73      4.34     0.08   256522     0.00     0.00  reg_scan_mark_refs
  0.64      4.41     0.07  1126581     0.00     0.00  returnjump_p_1
  0.64      4.48     0.07   450028     0.00     0.00  mark_used_reg
  0.64      4.55     0.07   417871     0.00     0.00  mark_used_regs
  0.64      4.62     0.07   386598     0.00     0.00  loc_mentioned_in_p
  0.64      4.69     0.07   299594     0.00     0.00  bitmap_operation
  0.64      4.76     0.07   298619     0.00     0.00  canon_reg
  0.64      4.83     0.07   227797     0.00     0.00  copy_rtx_if_shared
  0.64      4.90     0.07   150028     0.00     0.00  ggc_mark_rtx_children
  0.64      4.97     0.07                             htab_find_slot_with_hash
  0.54      5.03     0.06  1747388     0.00     0.00  bitmap_set_bit
  0.54      5.09     0.06   794615     0.00     0.00  record_set
  0.54      5.15     0.06   538361     0.00     0.00  canon_hash
  0.54      5.21     0.06   497904     0.00     0.00  extract_insn
  0.54      5.27     0.06   322376     0.00     0.00  rtx_cost
  0.54      5.33     0.06   139171     0.00     0.00  try_forward_edges
  0.54      5.39     0.06   111797     0.00     0.00  cselib_subst_to_values
  0.54      5.45     0.06    61617     0.00     0.00  fold
  0.54      5.51     0.06    12204     0.00     0.01  cse_end_of_basic_block
  0.54      5.57     0.06     9853     0.01     0.01  count_or_remove_death_notes
  0.54      5.63     0.06        1    60.00 10319.57  yyparse
  0.45      5.68     0.05  1149752     0.00     0.00  rtx_equal_p
  0.45      5.73     0.05   544172     0.00     0.00  ix86_decompose_address
  0.45      5.78     0.05   405992     0.00     0.00  insns_for_mem_walk
  0.45      5.83     0.05   308718     0.00     0.00  volatile_refs_p
...

* Re: Faster compilation speed
@ 2002-08-12 15:21 Robert Dewar
  2002-08-12 15:25 ` David S. Miller
  0 siblings, 1 reply; 215+ messages in thread
From: Robert Dewar @ 2002-08-12 15:21 UTC (permalink / raw)
  To: davem, terra; +Cc: gcc

> Frankly, nobody who wants to improve GCC's runtime performance can
> reasonably complain about this "discipline" in the same breath :-)
> Others can feel free to disagree.


Of course the issue is what happens if there is a lapse in discipline. If it
is only a matter of efficiency, that's one thing; if it becomes a focus of bugs,
then that's another. For me, reliability of the code generator is far, far more
important than speed.

* Re: Faster compilation speed
@ 2002-08-12 14:10 Morten Welinder
  2002-08-12 15:01 ` David S. Miller
  0 siblings, 1 reply; 215+ messages in thread
From: Morten Welinder @ 2002-08-12 14:10 UTC (permalink / raw)
  To: davem; +Cc: gcc

Hi there,

> 5) If you are still bored at this point, add the machinery to use the
>    RTX walking of the current garbage collector to verify the
>    reference counts.  This will basically be required in order to
>    make and sufficiently correctness check a final implementation.

There are other ways.

* Excess unrefs and missing refs will show wherever the ref count goes
  below zero.
* Excess refs and missing unrefs will show as leaks.
* An evil combination might not show.  (Tough.)

Take a look at Gnumeric's chunk allocator (in src/gutils.c) which
has an almost-for-free leak walker, see gnm_mem_chunk_foreach_leak,
which is always turned on for gnumeric.  (It's linear-time in the
number of leaks.)  If we leak an expression tree, we will be told.
And we will be told what that expression was.  Same thing for all
the other structured objects we have in Gnumeric.
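
Stripped to its bones, the idea is just this (an illustrative sketch, not the
Gnumeric code): every live allocation is kept on a list, so whatever is still
on the list at shutdown is by definition a leak, and walking it is linear in
the number of leaks.

#include <stdio.h>
#include <stdlib.h>

struct chunk
{
  struct chunk *prev, *next;
  /* user data follows */
};

static struct chunk *live;

void *
chunk_alloc (size_t size)
{
  struct chunk *c = malloc (sizeof *c + size);
  c->prev = NULL;
  c->next = live;
  if (live)
    live->prev = c;
  live = c;
  return c + 1;
}

void
chunk_free (void *p)
{
  struct chunk *c = (struct chunk *) p - 1;
  if (c->prev)
    c->prev->next = c->next;
  else
    live = c->next;
  if (c->next)
    c->next->prev = c->prev;
  free (c);
}

void
chunk_foreach_leak (void (*cb) (void *))
{
  struct chunk *c;
  for (c = live; c; c = c->next)
    cb (c + 1);
}

/* At exit, for example:
     chunk_foreach_leak (report_leak);
   where report_leak () is whatever you like -- print the pointer, or,
   with a little extra bookkeeping, print what kind of object it was.  */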

ftp://ftp.gnome.org/pub/GNOME/pre-gnome2/sources/gnumeric/gnumeric-1.1.6.tar.gz
(or 1.1.7 if you wait half an hour)

The hardest part probably is that ref-counting requires more discipline
than a lot of people can muster.

Morten

* Re: Faster compilation speed
@ 2002-08-10 13:47 Robert Dewar
  0 siblings, 0 replies; 215+ messages in thread
From: Robert Dewar @ 2002-08-10 13:47 UTC (permalink / raw)
  To: dewar, gmariani; +Cc: dje, gcc

>>People might be anti c++, but this is where I think it shines.

Or any other language with a smidgeon of abstraction :-)

* Re: Faster compilation speed
@ 2002-08-10 11:51 Robert Dewar
  0 siblings, 0 replies; 215+ messages in thread
From: Robert Dewar @ 2002-08-10 11:51 UTC (permalink / raw)
  To: kenner, torvalds; +Cc: gcc

<<Now, I'm probably very biased, because in a kernel you really have to be
very very careful indeed about never leaking memory, and about being able
to reclaim stuff when new situations arise. So to me, memory management is
the basis of anything working _at_all_.
>>

Many compilers don't bother with memory management, they simply don't use
that much memory and there is nothing worth reclaiming (the front end
of GNAT is certainly in this category for instance).

* Re: Faster compilation speed
@ 2002-08-10 10:56 Robert Dewar
  0 siblings, 0 replies; 215+ messages in thread
From: Robert Dewar @ 2002-08-10 10:56 UTC (permalink / raw)
  To: dewar, kenner; +Cc: gcc

Also, we have had obstack problems which were NOT front end problems; if you
look through the fixed bugs for obstack, you will find quite a few of these.

* Re: Faster compilation speed
@ 2002-08-10 10:55 Robert Dewar
  0 siblings, 0 replies; 215+ messages in thread
From: Robert Dewar @ 2002-08-10 10:55 UTC (permalink / raw)
  To: dewar, kenner; +Cc: gcc

<<Yes, but whenever that happened, it represented a scoping problem in the
front end.  If the entities in question only involved constants, switching to
GC indeed "removed" the bug.  But in most of these cases, the problem could
also occur where non-constants are involved.  In that case, what we've done
is to replace a memory corruption problem in the compiler which causes a
crash with a bug that generates subtly wrong code.  Not a good trade, in my
opinion.  In most cases, though, what this does is that it makes the scoping
bug become latent.
>>

Of course in retrospect the fierce rules on scoping were a HUGE mistake, and
it is too bad that they cannot be fixed in gigi. Almost all of the time, the
requirement for "correct" scoping is entirely artificial, since there is
no code for elaboration of the declaration (this is true for instance of
almost all itypes).

* Re: Faster compilation speed
@ 2002-08-10 10:52 Richard Kenner
  0 siblings, 0 replies; 215+ messages in thread
From: Richard Kenner @ 2002-08-10 10:52 UTC (permalink / raw)
  To: dewar; +Cc: gcc

    It also removes a pernicious variety of bug that often caused nasty memory
    corruption in earlier versions of GCC. 

Yes, but whenever that happened, it represented a scoping problem in the
front end.  If the entities in question only involved constants, switching to
GC indeed "removed" the bug.  But in most of these cases, the problem could
also occur where non-constants are involved.  In that case, what we've done
is to replace a memory corruption problem in the compiler which causes a
crash with a bug that generates subtly wrong code.  Not a good trade, in my
opinion.  In most cases, though, what this does is that it makes the scoping
bug become latent.

* Re: Faster compilation speed
@ 2002-08-10 10:47 Richard Kenner
  2002-08-10 11:17 ` Linus Torvalds
  0 siblings, 1 reply; 215+ messages in thread
From: Richard Kenner @ 2002-08-10 10:47 UTC (permalink / raw)
  To: torvalds; +Cc: gcc

    Or am I wrong?

Yes.

    Basically, what I'm saying is that it _does_ have everything to do with
    allocation efficiency. The gcc allocators have just always been bad.

No, it doesn't.  As I said, it has to do with *correctness* issues.  For
example, GCC assumes that there is exactly one copy of the RTL for each
pseudo-register so that when the pseudo is forced to memory, only that
RTL needs to be changed.

It also assumes that certain other RTL is *not* shared, so that it can
be changed without affecting any others insns.

Nothing whatsoever to do with memory management.

* Re: Faster compilation speed
@ 2002-08-10 10:45 Robert Dewar
  2002-08-10 13:26 ` Gianni Mariani
  0 siblings, 1 reply; 215+ messages in thread
From: Robert Dewar @ 2002-08-10 10:45 UTC (permalink / raw)
  To: dje, torvalds; +Cc: gcc

<        GCC did not switch from obstacks to garbage collection because of
any inherent love for garbage collection.  Using garbage collection
instead of obstacks was the most efficient way to support other features
which were added to GCC 3.0.
>

It also removes a pernicious variety of bug that often caused nasty memory
corruption in earlier versions of GCC. Our experience with the back end of
GCC (from the point of view of GNAT) is that code generation errors have
been a much more serious problem than time and space requirements.

* Re: Faster compilation speed
@ 2002-08-10 10:43 Robert Dewar
  2002-08-10 11:02 ` Linus Torvalds
  0 siblings, 1 reply; 215+ messages in thread
From: Robert Dewar @ 2002-08-10 10:43 UTC (permalink / raw)
  To: dewar, torvalds; +Cc: dberlin, gcc, kevin

<<Well, at some point space "optimizations" do actually become functional
requirements. When you need to have a gigabyte of real memory in order to
compile some things in a reasonable timeframe, it has definitely become
functional ;)
>>

Interesting example, because this is just on the edge. We are just on the point
where cheap machines have less than a gigabyte, but not by much (my notebook
has a gigabyte of real memory). In two years time, a gigabyte of real memory
will sound small.

It is always hard to know how to target main memory requirements (Realia
COBOL, one of the fastest compilers ever written for the PC -- it compiled
100,000 lines/minute on a 386 -- was targeted to work in 64K bytes; we did
not make that, it required 130K bytes :-)

But of course it is not clear that caches get larger that quickly, so the
point Linus is making about cache usage is certainly valid, though it would
be nice to have measurements rather than just rhetoric [on both sides of
the issue].

* Re: Faster compilation speed
@ 2002-08-10  9:52 Richard Kenner
  2002-08-10 10:41 ` Linus Torvalds
  0 siblings, 1 reply; 215+ messages in thread
From: Richard Kenner @ 2002-08-10  9:52 UTC (permalink / raw)
  To: torvalds; +Cc: gcc

    Just to make a point: look at copy_rtx_if_shared(), which tries to do 
    this (yeah, I have an older tree, maybe this is fixed these days. I 
    seriously doubt it).

    The code is CRAP. Total and utter sh*t. The damn thing should just
    test a reference count and be done with it. Instead, it has this
    heuristic that knows about some rtx's that might be shared, and knows
    which never can be.  And that _crap_ comes directly from the fact that
    the code uses a lazy GC scheme instead of a more intelligent memory
    manager.

Bad example. That code predates any sort of GC and has to do with 
*correctness* issues involving the semantics of RTL, not anything having
to do with allocation efficiency.

* Re: Faster compilation speed
@ 2002-08-10  4:38 Robert Dewar
  2002-08-10  9:47 ` Linus Torvalds
  0 siblings, 1 reply; 215+ messages in thread
From: Robert Dewar @ 2002-08-10  4:38 UTC (permalink / raw)
  To: dberlin, torvalds; +Cc: gcc, kevin

<<Hmm. I can't imagine what is there that is inherently cyclic, but breaking
the cycles might be more painful than it's worth, so I'll take your word
for it.
>>

Indeed it may be perfectly acceptable to simply ignore the cycles, garbage
collection is not a functional requirement here, just a space optimization.

* Re: Faster compilation speed
@ 2002-08-10  4:35 Robert Dewar
  2002-08-10  9:45 ` Linus Torvalds
  0 siblings, 1 reply; 215+ messages in thread
From: Robert Dewar @ 2002-08-10  4:35 UTC (permalink / raw)
  To: dberlin, torvalds; +Cc: gcc, kevin

If garbage collection is taking a significant amount of time (is this really
the case?), then concentrating on speeding it up may make sense, but I am
quite dubious that reference counting would speed things up (it very rarely
does, speaking from long experience in the implementation of garbage collected
languages, because the distributed overhead is high -- one of the interesting
things about reference counting is that, since it distributes the overhead,
it then becomes very difficult to accurately measure the overhead).
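
To see where the distributed overhead hides, consider what every pointer store
turns into under reference counting; a generic sketch (nothing GCC-specific,
just the usual pattern):

#include <stdlib.h>

struct node
{
  int refcount;
  struct node *child;
  /* ... payload ... */
};

static void
node_unref (struct node *n)
{
  if (n && --n->refcount == 0)
    {
      node_unref (n->child);
      free (n);
    }
}

/* Every pointer store becomes this: two count updates plus a possible
   cascade of frees.  */
static void
node_assign (struct node **slot, struct node *value)
{
  if (value)
    value->refcount++;
  node_unref (*slot);
  *slot = value;
}

Because the cost is smeared over every assignment in the program, no single
profile bucket ever says "garbage collection", even though the time is real.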

* Re: Faster compilation speed
@ 2002-08-09 19:45 Robert Dewar
  2002-08-09 20:24 ` Daniel Berlin
  0 siblings, 1 reply; 215+ messages in thread
From: Robert Dewar @ 2002-08-09 19:45 UTC (permalink / raw)
  To: dje, shebs; +Cc: gcc, mrs

<<        Saying "do not run any optimization at -O0" shows a tremendous
lack of understanding or investigation.  One wants minimal optimization
even at -O0 to decrease the size of the IL representation of the function
being compiled.  The little bit of computation to perform trivial
optimization more than makes up for itself with the decreased size of the
IL that needs to be processed to generate the output.
>>

There are two reasons to run at -O0

a) make the code as easy to debug as possible
b) speedy compilation

There is also a third reason that is relevant to safety critical code

c) avoid optimization, on the grounds that it interferes with verification

Now with respect to a), the trouble with GCC is that the code generated
with no optimization is really horrible. Much worse than typical competing
compilers operating in no optimization mode. Now of course we can say
"yes, but gcc is really doing what you want, the other compiler is not"
but the fact remains that you are stuck between two unpleasant choices

  -O0 generates far too much code and giant executables
  -O1 already loses debugging information

I think there is a real need for a mode which would do all possible
optimizations that do NOT interfere with debugging. I would probably
use this as my default development mode all the time.

With respect to b) one has to be careful that sometimes some limited
amount of optimization (e.g. simple register tracking, and slightly
reasonable register allocation) can cut down the size of the code
enough that compilation time suffers very little, or even is improved.

With respect to c), we find in practice that -O1 mode is manageable for
a lot of certification needs, but probably it is a good idea to retain
the absolutely-no-optimization mode.

* Faster compilation speed
@ 2002-08-09 12:17 Mike Stump
  2002-08-09 13:04 ` Noel Yap
                   ` (6 more replies)
  0 siblings, 7 replies; 215+ messages in thread
From: Mike Stump @ 2002-08-09 12:17 UTC (permalink / raw)
  To: gcc

I'd like to introduce lots of various changes to improve compiler 
speed.  I thought I should send out an email and see if others think 
this would be good to have in the tree.  Also, if it is, I'd like to 
solicit any ideas others have for me to pursue.  I'd be happy to do all 
the hard work, if you come up with the ideas!  The target is to be 6x 
faster.

The first realization I came to is that the only existing control for 
such things is -O[123], and having thought about it, I think it would 
be best to retain and use those flags.  For minimal user impact, I 
think it would be good to not perturb existing users of -O[0123] too 
much, or at leaast, not at first.  If we wanted to change them, I think 
-O0 should be the `fast' version, -O1 should be what -O0 does now with 
some additions around the edges, and -O2 and -O3 also slide over (at 
least one).  What do you think, slide them all over one or more, or 
just make -O0 do less, or...?  Maybe we have a -O0.0 to mean compile 
very quickly?

Another question would be how many knobs should we have?  At first, I 
am inclined to say just one.  If we want, we can later break them out 
into more choices.  I am mainly interested in a single knob at this 
point.

Another question is, what should the lower limit be on uglifying code 
for the sake of compilation speed?

Below are some concrete ideas so others can get a feel for the types of 
changes, and to comment on the flag and how it is used.
While I give a specific example, I'm more interested in the upper-level
comments than in discussion of not combining temp slots.

The use of a macro preprocessor symbol allows us to replace it with 0 
or 1, should we want to obtain a compiler that is unconditionally 
faster, or one that doesn't have any extra code in it.
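
For instance (a hypothetical variant, not part of the patch below), one could
pin the macro in flags.h instead of tying it to the run-time flag:

/* Hypothetical: make the fast paths unconditional ...  */
#define SPEEDCOMPILE 1
/* ... or define it to 0 instead, and the extra tests compile away
   entirely.  */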

This change yields a 0.9% speed improvement when compiling expr.c.  Not
much, but if the compiler were 6x faster, this fixed saving would be about a
5.5% change in compilation speed (0.9% of today's run time is roughly
0.9 x 6 = 5.4% of a run that is six times shorter).  The resulting code is
worse, but not by much.

So, let the discussion begin...


Doing diffs in flags.h.~1~:
*** flags.h.~1~ Fri Aug  9 10:17:36 2002
--- flags.h     Fri Aug  9 10:37:58 2002
*************** extern int flag_signaling_nans;
*** 696,699 ****
--- 696,705 ----
  #define HONOR_SIGN_DEPENDENT_ROUNDING(MODE) \
    (MODE_HAS_SIGN_DEPENDENT_ROUNDING (MODE) && !flag_unsafe_math_optimizations)

+ /* Nonzero for compiling as fast as we can.  */
+
+ extern int flag_speed_compile;
+
+ #define SPEEDCOMPILE flag_speed_compile
+
  #endif /* ! GCC_FLAGS_H */
--------------
Doing diffs in function.c.~1~:
*** function.c.~1~      Fri Aug  9 10:17:36 2002
--- function.c  Fri Aug  9 10:37:58 2002
*************** free_temp_slots ()
*** 1198,1203 ****
--- 1198,1206 ----
  {
    struct temp_slot *p;

+   if (SPEEDCOMPILE)
+     return;
+
    for (p = temp_slots; p; p = p->next)
      if (p->in_use && p->level == temp_slot_level && ! p->keep
        && p->rtl_expr == 0)
*************** free_temps_for_rtl_expr (t)
*** 1214,1219 ****
--- 1217,1225 ----
  {
    struct temp_slot *p;

+   if (SPEEDCOMPILE)
+     return;
+
    for (p = temp_slots; p; p = p->next)
      if (p->rtl_expr == t)
        {
*************** pop_temp_slots ()
*** 1301,1311 ****
  {
    struct temp_slot *p;

!   for (p = temp_slots; p; p = p->next)
!     if (p->in_use && p->level == temp_slot_level && p->rtl_expr == 0)
!       p->in_use = 0;

!   combine_temp_slots ();

    temp_slot_level--;
  }
--- 1307,1320 ----
  {
    struct temp_slot *p;

!   if (! SPEEDCOMPILE)
!     {
!       for (p = temp_slots; p; p = p->next)
!       if (p->in_use && p->level == temp_slot_level && p->rtl_expr == 
0)
!         p->in_use = 0;

!       combine_temp_slots ();
!     }

    temp_slot_level--;
  }
--------------
Doing diffs in toplev.c.~1~:
*** toplev.c.~1~        Fri Aug  9 10:17:40 2002
--- toplev.c    Fri Aug  9 11:31:50 2002
*************** int flag_new_regalloc = 0;
*** 894,899 ****
--- 894,903 ----

  int flag_tracer = 0;

+ /* If nonzero, speed-up the compile as fast as we can.  */
+
+ int flag_speed_compile = 0;
+
  /* Values of the -falign-* flags: how much to align labels in code.
     0 means `use default', 1 means `don't align'.
     For each variable, there is an _log variant which is the power
*************** display_help ()
*** 3679,3684 ****
--- 3683,3689 ----

    printf (_("  -O[number]              Set optimization level to 
[number]\n"));
    printf (_("  -Os                     Optimize for space rather than 
speed\n"));
+   printf (_("  -Of                     Compile as fast as 
possible\n"));
    for (i = LAST_PARAM; i--;)
      {
        const char *description = compiler_params[i].help;
*************** parse_options_and_default_flags (argc, a
*** 4772,4777 ****
--- 4777,4786 ----
              /* Optimizing for size forces optimize to be 2.  */
              optimize = 2;
            }
+         else if ((p[0] == 'f') && (p[1] == 0))
+           {
+             flag_speed_compile = 1;
+           }
          else
            {
              const int optimize_val = read_integral_parameter (p, p - 2, -1);
--------------


