public inbox for gcc@gcc.gnu.org
* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-07 15:17 Robert Dewar
  2003-01-07 17:02 ` Michael S. Zick
  0 siblings, 1 reply; 85+ messages in thread
From: Robert Dewar @ 2003-01-07 15:17 UTC (permalink / raw)
  To: dewar, velco; +Cc: gcc, ja_walker, lord, mszick

I had thought that these instructions were only available on recent chips,
so if they are used you have to be careful about backward compatibility.

^ permalink raw reply	[flat|nested] 85+ messages in thread
* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-08 12:27 Robert Dewar
  0 siblings, 0 replies; 85+ messages in thread
From: Robert Dewar @ 2003-01-08 12:27 UTC (permalink / raw)
  To: gcc, marcel_cox

> What you're describing is actually bad on the Pentium, and probably
> subsequent implementations as well.
> The Pentium can dual-issue loads as long as they reference separate cache
> ways. So, manually sorting the stack so contiguous accesses are localized
> increases the probability of the loads accessing the same cache way, thus
> decreasing the probability of single-issuing.

I would guess this would be dominated by the improvement in icache behavior
from the use of shorter offsets.

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-08 12:13 Robert Dewar
  2003-01-08 12:21 ` Lars Segerlund
  0 siblings, 1 reply; 85+ messages in thread
From: Robert Dewar @ 2003-01-08 12:13 UTC (permalink / raw)
  To: dewar, ja_walker, mszick, velco; +Cc: gcc, lord

> dumb statistic, fwiw: dual processor 500Mhz Celerons.  
> My Mandrake 8.2 distro calls it a pentiumpro.  

I am surprised anyone would ever have built a dual-processor machine with
such a strange processor choice. By the way, the reason 8.2 calls it a
pentiumpro is probably just that it is so slow that it looks like a
pentiumpro :)

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-08  5:36 Robert Dewar
  0 siblings, 0 replies; 85+ messages in thread
From: Robert Dewar @ 2003-01-08  5:36 UTC (permalink / raw)
  To: brane, marcel_cox; +Cc: gcc

I find the misplacement of local stack variables really amazing here. This
sounds like an easy place to make a significant win.

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-07 21:01 Marcel Cox
  2003-01-07 22:53 ` tm_gccmail
  0 siblings, 1 reply; 85+ messages in thread
From: Marcel Cox @ 2003-01-07 21:01 UTC (permalink / raw)
  To: gcc

I have been following the discussion on synthetic registers since the
beginning, and so far I have kept myself out of it because I'm not really
an expert on the inner workings of GCC, nor do I know all the optimisation
tricks for Intel processors. However, I think that over time I have gained
a bit of an understanding of how GCC works, and I consider that I also know
a little about how the Intel processors work. So please don't consider what
I say an expert opinion, but just the opinion of an observer who knows a
bit about these things. Maybe I'm talking complete nonsense, but maybe some
of my ideas have some truth anyway.

This message is a reply to various things said in this discussion. It is not
directed at any single person, but comments on ideas put forward by various
people. The reason I make these comments is that I think there are two
groups of people in this discussion: one group claims that synthetic
registers are a big win, and the other more or less claims it's just
nonsense. I think what is lacking in the whole discussion is an attempt to
see what the experience at Bell Labs with the ML compiler actually was and
why it gave some improvement to their compiler.

I think the expression "synthetic registers" suggests a wrong idea. It kind
of suggests that with too few registers the compiler is not able to
optimise well enough, but that if you fake the existence of more registers,
it does a better job. I think what this is really about is optimising the
usage of the stack slots and getting a speed gain from the various positive
effects of optimising those stack slots. With the modification of the ML
compiler, the approach was not to create a new register class to simulate
additional registers, but rather to run the register allocator twice. The
first run is done with 32 artificial registers which are simply stack
slots. This first run in essence has the effect of optimising the stack by
creating a set of privileged stack slots which are accessed more often than
others because they are considered registers. Then there is a second run
which does the real register allocation.

I think the speed gain is achieved for the following reasons:

1) The most active variables are kept close together in memory. They only
occupy one or a few cache lines. Normally, one would expect the stack frame
to always be in L1 cache. However, especially when traversing data
structures that are bigger than the L1 cache, you can expect the cache
lines holding the stack frame to be regularly replaced by data, and having
fewer cache lines to reload for local variables will certainly give a speed
advantage.

2) Running the RA over the stack slots will cause the slots to be reused
when the live ranges of variables do not overlap. This further increases
the compactness that already gives the benefit of point 1. Also, reducing
overall stack usage will always be a small gain.

3) The "compact" memory access pattern and the reuse of stack slots might
increase the opportunity for the processor to use "shortcut" features in
memory access. For example successive writes to the same memory location
might be optimised to a single write, or read access to a memory location
may be fast if there is still a pending write on the same location

4) Finally, the frequently used stack slots are probably placed next to the
frame pointer, so that 1-byte offsets can always be used for the most
frequent stack accesses, thus reducing code size and reducing pressure on
the instruction cache.
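The code-size effect in point 4 comes from the x86 addressing-mode
encoding: a displacement that fits in a signed byte is encoded as disp8
(one byte), anything larger needs a full disp32 (four bytes). A tiny
sketch of just that encoding rule (not GCC code, purely illustrative):

```python
def disp_bytes(offset):
    """Size in bytes of the displacement field for an x86 base+disp
    addressing mode: disp8 for offsets in [-128, 127], disp32 otherwise.
    (An EBP-relative access always carries at least a disp8, so offset 0
    is treated like any other small offset here.)"""
    return 1 if -128 <= offset <= 127 else 4

# A slot next to the frame pointer vs. one 292 bytes below it (the
# offsets seen in the testinit example later in this message):
print(disp_bytes(-4))    # → 1 (short encoding)
print(disp_bytes(-292))  # → 4 (long encoding, 3 extra bytes per access)
```

So every frequently accessed local that is pushed beyond 128 bytes from
the frame pointer costs three extra bytes per reference, which is where
the icache pressure mentioned above comes from.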

However, I think that "synthetic registers" are not needed to get this
gain. They are just a "trick" used to make the RA optimise the stack. It is
probably possible to have a separate stack optimisation pass do the same
thing, and in a more flexible way, since the number of important variables
can adjust dynamically and does not have to be a fixed value like 32. Such
a stack optimisation pass should do the following:
- analyse the live ranges of variables and temporaries and use this
information to reuse stack slots, rather than always assigning an
individual stack slot to each individual variable or temporary
- analyse the usage of local variables and use this information to sort the
local variables such that the most frequently used variables are nearest to
the frame pointer (or to the stack pointer if you work without a frame
pointer). This will guarantee that 1-byte addressing can be used for the
most frequent variables, and it will also put the most frequently used
variables together, thus optimising the cache access pattern.
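A minimal sketch of such a pass, under stated assumptions: the function
name `layout_stack`, the input tuples, and the frequency numbers are all
invented for illustration, all slots are word-sized, and slot reuse is
done greedily. It is not GCC code, just the two bullets above in runnable
form:

```python
def layout_stack(locs):
    """locs: list of (name, (start, end), freq) tuples giving each
    local's live range and access count.  Returns {name: offset}.

    Two variables share a slot when their live ranges do not overlap
    (first bullet); slots are then sorted by total access frequency so
    the hottest slot gets the smallest frame-pointer offset (second
    bullet)."""
    slots = []
    for name, (start, end), freq in sorted(locs, key=lambda v: v[1][0]):
        for slot in slots:
            # Reuse this slot if the new range overlaps none of its ranges.
            if all(end <= s or start >= e for s, e in slot['ranges']):
                slot['ranges'].append((start, end))
                slot['names'].append(name)
                slot['freq'] += freq
                break
        else:
            slots.append({'ranges': [(start, end)],
                          'names': [name], 'freq': freq})
    slots.sort(key=lambda s: -s['freq'])   # hottest slot nearest the FP
    return {name: 4 * (i + 1)              # i.e. -4(%ebp), -8(%ebp), ...
            for i, slot in enumerate(slots) for name in slot['names']}

# 'i' is hot; 'tmp' is live-disjoint from 'i', so they share a slot:
print(layout_stack([('i', (0, 10), 9),
                    ('tmp', (11, 12), 1),
                    ('j', (0, 12), 2)]))
# → {'i': 4, 'tmp': 4, 'j': 8}
```

The point of the sketch is only that nothing here requires pretending the
slots are registers: plain live-range and usage information is enough.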

Finally, here is a short sample program which shows that GCC (3.2.1 tested
here) is both better and worse than some people think or have claimed in
this thread:

void touch(void *, void *, void *);

void testinit(void)
{
  int i=0,j=0,k=0;
  char buffer[256];

  touch(&i,&j,&k);
  i+=k;
  touch(&i,&j,&k);
}

Compiling for Pentium4 with -O3, this gives:

_testinit:
 pushl %ebp
 movl %esp, %ebp
 subl $312, %esp
 movl %ebx, -12(%ebp)
 movl %esi, -8(%ebp)
 movl %edi, -4(%ebp)
 leal -288(%ebp), %esi
 leal -292(%ebp), %ebx
 leal -284(%ebp), %edi
 movl %edi, 8(%esp)
 movl %esi, 4(%esp)
 movl %ebx, (%esp)
 movl $0, -292(%ebp)
 movl $0, -288(%ebp)
 movl $0, -284(%ebp)
 call _touch
 movl %ebx, (%esp)
 movl %esi, 4(%esp)
 movl -284(%ebp), %eax
 movl %edi, 8(%esp)
 addl %eax, -292(%ebp)
 call _touch
 movl -4(%ebp), %edi
 movl -12(%ebp), %ebx
 movl -8(%ebp), %esi
 movl %ebp, %esp
 popl %ebp
 ret

What's bad about GCC in this example:
The variables are allocated in the worst possible way. The dummy array is
allocated near the frame pointer, while the simple variables are far away.
Because of this, a long offset has to be used to access each of the scalar
variables, resulting in unnecessary code bloat. The worst thing about this
example is that GCC insists on this bad stack usage. Even if you declare
the char array before the integers, the array is still allocated near the
frame pointer and the simple variables far from it. So GCC deliberately
sorts the variables in a bad order here. I think even a very simplistic
rule like "sort all variables by size and put the smallest variables
nearest to the frame pointer" would give an improvement. Note that if you
compile with -fomit-frame-pointer, the result is much better.

What's good about GCC in this example:
Someone claimed that GCC would always use a load/modify/store approach when
dealing with variables not in registers. This example shows that this is
not the case. GCC can generate instructions that operate directly on memory
when this is advantageous. In this example, it is the
'addl %eax, -292(%ebp)' instruction. The variable i is never loaded into a
register; the addition to i occurs directly in memory.

Some general comments I have on ideas brought forward:
- I don't think it is a good idea to specially align the stack for best
possible L1 cache alignment. I think the stack bloat would negate the
benefit, especially as, without L1 alignment and with a frame pointer, the
arguments of a function might share a cache line with the first frequently
used variables.
- Locating synthetic registers outside the stack would be complete suicide.
You would need expensive memory-to-memory operations to save and restore
synthetic registers on function calls, and it would be impossible to create
multithreaded applications (besides possibly many other API-related
problems). Also, access to those memory locations would require long
addresses, leading to severe code bloat.
- Any change to the stack layout that would lead to ABI incompatibilities
would certainly not have a chance of ever being accepted.

Marcel



* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-07 17:19 Robert Dewar
  0 siblings, 0 replies; 85+ messages in thread
From: Robert Dewar @ 2003-01-07 17:19 UTC (permalink / raw)
  To: dewar, mszick, velco; +Cc: gcc, ja_walker, lord

> The exchange (xchg) instruction was included in the original 80386.

Actually this was on the 8086; it is one of the original instructions.
Of course, that was not the IA-32 architecture :-)

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-07 12:32 Robert Dewar
  2003-01-07 19:03 ` tm_gccmail
  2003-01-08  6:08 ` Andy Walker
  0 siblings, 2 replies; 85+ messages in thread
From: Robert Dewar @ 2003-01-07 12:32 UTC (permalink / raw)
  To: dewar, ja_walker, lord, mszick; +Cc: gcc

> First, XCHG is what I think of as an Operating System instruction.  It is 
> quite valuable because the exchange can be limited to a single process on a 
> single processor in a multiprocessor system, in conjunction with the locking 
> process.  It is one of the very reliable ways to implement semaphores.  

Please look through the instruction set more carefully; this is NOT the way
you would implement any synchronization primitives on the x86.

Also, be very careful about timing of instructions when you start to look
at the complex instructions of the x86. No one should even think of generating
code for the x86 without reading the Intel guide for compiler writers. 
Basically the rule on most variants of the x86 is that you should treat
it as a conventional load/store RISC machine when it comes to generating
code.

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-07 12:08 Robert Dewar
  2003-01-07 12:10 ` Momchil Velikov
  0 siblings, 1 reply; 85+ messages in thread
From: Robert Dewar @ 2003-01-07 12:08 UTC (permalink / raw)
  To: dewar, ja_walker, lord, mszick; +Cc: gcc

> I am pretty familiar with the x86 instruction set, but I clearly recall that 
> I have never seen anything like this.  Is there such a thing in the x86 
> instruction set, and if so, what is it called?  Is it perhaps one of the 
> testing instructions?  

There is no prefetch instruction as such on the x86, but of course any
access acts as a prefetch in practice.

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-06 20:59 Robert Dewar
  2003-01-07  5:29 ` Andy Walker
  2003-01-08 17:32 ` Tom Lord
  0 siblings, 2 replies; 85+ messages in thread
From: Robert Dewar @ 2003-01-06 20:59 UTC (permalink / raw)
  To: dewar, lord; +Cc: denisc, gcc, ja_walker

> In other words, with synthregs, the CPU can ship some value off to
> memory and not care how long it takes to get there or to get back from
> there -- because it also ships it off to the synthreg, which it
> hypothetically has faster access to.

But this "hypothesis" is wrong. memory used for spills or locals is exactly
the same as memory used for "synthetic registers" [this fancy term is nothing
more than a fancy name for a local temporary]. So there is no issue of having
faster access to one or the other. It may of course be the case that in one
case you get more competent code than the other, but if so, the fix is to
fix incompetent code :-)

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-05 15:47 Robert Dewar
  2003-01-05 22:14 ` Tom Lord
  0 siblings, 1 reply; 85+ messages in thread
From: Robert Dewar @ 2003-01-05 15:47 UTC (permalink / raw)
  To: dewar, lord; +Cc: gcc, ja_walker

> Otherwise, what you lose for locals/args (the dominant case) will
> probably exceed what you gain for other values.

Actually I think you will break even most of the time and generate
essentially identical code.

You will end up saying, "Great, I can keep this local variable Q in a
register; I don't need to store it in the stack frame," but then the
register turns out to be an SR, and in fact it is right back there in
the stack frame, with identical instructions used to access it.

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-05 14:08 Robert Dewar
  2003-01-05 16:50 ` Michael S. Zick
  2003-01-06 19:42 ` Tom Lord
  0 siblings, 2 replies; 85+ messages in thread
From: Robert Dewar @ 2003-01-05 14:08 UTC (permalink / raw)
  To: dewar, lord; +Cc: denisc, gcc, ja_walker

> Bah.  I missed that (misread while skimming, actually).  Maybe his
> implementation approach is bogus after all.  Why not use the FP, if
> it's there, or SP when FP is omitted.
> 

Well indeed, FP makes more sense, and that's why the discussion led there.

> But it can also improve both locality and the temporally proximate
> re-use of memory locations.

That's really not an issue for scalars. L1 caches are small, but not that
small. Once again, empirically most references to local stack frames are
in cache anyway, so there's really not much to improve here.

> ja_walker is right: it's a worthwhile empirical question

It's reasonable to ask the question, but the way to explore the answer is
to study some examples in detail. I think you will find it is very
difficult to provide even one semi-convincing example if you look at it
in detail.

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-05 14:05 Robert Dewar
  2003-01-06 19:42 ` Tom Lord
  0 siblings, 1 reply; 85+ messages in thread
From: Robert Dewar @ 2003-01-05 14:05 UTC (permalink / raw)
  To: dewar, lord; +Cc: denisc, gcc, ja_walker

> 1) I don't fully understand why synthregs aren't a common area rather
>    than part of stack frames.  A common area _adds_ code to
>    save/restore synthregs -- but it also increases the number and
>    frequency of references to synthregs.  I don't think L1 is the only
>    cache that can be used better by synthregs.

But if you put the SR's in a global area, then indeed they WILL require
6 byte instructions for their access, and that can greatly increase
pressure on the instruction cache, and will likely slow things down
greatly. 

Once again, you really can not discuss this in the abstract without looking
at the actual code sequences.

If you decide to allocate a base register for the synth registers, as was
proposed at one point, then you are losing one of your real registers, and
that is a huge hit; you won't begin to buy that back.

With a change in the ABI and OS, you could use FS or GS as the base
register, but that still does not help much, since the code would
still be worse than normal access to local variables.

Remember again that the code to access SR's can be no better than the
code we generate right now for all accesses to local variables, function
arguments in memory, and spilled registers.

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-05 13:13 Robert Dewar
  2003-01-06  4:40 ` Andy Walker
  2003-01-06 19:42 ` Tom Lord
  0 siblings, 2 replies; 85+ messages in thread
From: Robert Dewar @ 2003-01-05 13:13 UTC (permalink / raw)
  To: dewar, lord; +Cc: denisc, gcc, ja_walker

> So, with synthetic registers, some values that are not intermediates 
> can be retained (in synthetic registers).  Without synthetic
> registers, the next time those values are used, they have to be 
> fetched from (non-special) memory.

Well most certainly you should not get trapped into a situation where CSE
values *must* live in registers, but that's not a problem. Remember that
"retrieving from memory" is *EXACTLY* the same code sequence as reading
a synthetic register, assuming both are on the current stack frame. 

> It might eventually lead to some hw advances: give synthregs with
> absolute locations cache preference.  Or, if synthregs are on the
> stack, give locations near the frame pointer cache preference (or is
> that done already?).

I don't see that as a good idea at all. The stack frame will indeed almost
always be in cache with current designs, and locking the cache seems a bad
idea.

Once again, I would just love to see one (1) example of what is being
talked about here. Let's see a small kernel in source, the current GCC code
being generated, and the amazing improved code that can be generated with
synthetic registers (which are nothing more than local memory locations).
At this stage I really can't imagine such an example, so, assuming this is
a failure of my imagination (I am not the only one with this handicap),
please enlighten us with one convincing example :-)

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-05 11:41 Robert Dewar
  2003-01-05 16:30 ` Michael S. Zick
                   ` (2 more replies)
  0 siblings, 3 replies; 85+ messages in thread
From: Robert Dewar @ 2003-01-05 11:41 UTC (permalink / raw)
  To: denisc, dewar, ja_walker; +Cc: gcc

> Before I started this, I had never heard of an optimization technique that 
> tries to take advantage of L1 cache.  That may very well indicate that the 
> register allocator really is "just dumb".  (No flame wars, please.  
> Outstanding and brilliant developers did the best they could with the 
> algorithms they had.  I sincerely doubt that I could have done as well).  

This is a bit of an odd statement. In practice on a machine like the x86, 
the current stack frame will typically be resident in L1 cache, and that's
where the register allocator spills to. What some of us still don't see
is the difference in final resulting code between your "synthetic registers"
and normal spill locations from the register allocator. 

Perhaps you could give at least a small example of actual code. We all know
that (even on the 486) register-register moves take the same time as
register-to-stack-frame moves when the local stack frame is in cache, and
the code that GCC generates now depends heavily on this.

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-04 18:12 Robert Dewar
  0 siblings, 0 replies; 85+ messages in thread
From: Robert Dewar @ 2003-01-04 18:12 UTC (permalink / raw)
  To: denisc, dewar; +Cc: dnovillo, gcc, ja_walker, sabre, zack

> It's indicate that register allocator is operating very poorly or just
> dumb.

Exactly; since the implementation of "synthetic registers" is quite naive,
the register allocator should be able to do at least this well.

* Re: An unusual Performance approach using Synthetic registers
@ 2003-01-04 14:50 Robert Dewar
  2003-01-04 18:00 ` Denis Chertykov
  2003-01-05  5:43 ` Andy Walker
  0 siblings, 2 replies; 85+ messages in thread
From: Robert Dewar @ 2003-01-04 14:50 UTC (permalink / raw)
  To: ja_walker, zack; +Cc: dnovillo, gcc, sabre

> What I think Diego is trying to say is, creating synthetic registers
> for the x86 isn't going to help much, possibly not at all, because the
> optimizer passes that could benefit already have unlimited registers
> to work with.

I would put it a different way. If "synthetic registers" help, that would
just indicate that the optimizer and code generator are operating very
poorly. I certainly don't have the impression that this is the case,
at least not at the level where this naive synthetic-register approach
would help.

Wouldn't it be best to take some typical kernels, look at the code
generated by GCC, and then try by hand to see how much help SR's would be?
I am pretty sure this would quickly discourage the approach and save a lot
of wasted effort in modifying gcc.

An approach that might really be helpful is to have the register allocator
and scheduler understand the existence and behavior of renamed registers.
Quite often you see gcc-generated code use two registers when it could
use one, under the illusion that this helps, when in fact it does not,
since the hardware would in any case use two registers via register
renaming.

* RE: An unusual Performance approach using Synthetic registers
@ 2002-12-27  5:47 Chris Lattner
  2002-12-29  0:35 ` Andy Walker
  0 siblings, 1 reply; 85+ messages in thread
From: Chris Lattner @ 2002-12-27  5:47 UTC (permalink / raw)
  To: Andy Walker; +Cc: gcc


> I am modifying gcc to add Synthetic registers.  If I am mistaken, this
> is a big waste of time.  If I am correct, (meaning "really really
> Lucky") then this may provide a significant increase in speed for
> executables.

IMHO, this could be a much bigger win on modern processors than you might
think. You may win a surprising amount strictly because of the register
renaming hardware found on modern x86 processors, which would eliminate
the L1 reference.  Take a look at this paper, for example, which describes
a very similar approach:

http://cm.bell-labs.com/cm/cs/what/smlnj/compiler-notes/k32.ps

On the other hand, I think this approach is not the right one to take if
improving optimizer effectiveness is the goal.  IMHO, more stuff should
be done on a mid-level representation, such as tree-ssa, rather than the
low-level RTL... but the infrastructure appears to not be ready yet.

You will probably find that the number of "virtual" registers varies
widely across different incarnations of the architecture, implying that
the -march options should change the number of virtual registers.  Maybe
keeping this in mind when you design the code will help down the line.  :)

This will certainly be an interesting project, please keep the list
informed with what you find.  :)

-Chris

-- 
http://llvm.cs.uiuc.edu/
http://www.nondot.org/~sabre/Projects/


end of thread, other threads:[~2003-01-08 20:26 UTC | newest]

Thread overview: 85+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2003-01-07 15:17 An unusual Performance approach using Synthetic registers Robert Dewar
2003-01-07 17:02 ` Michael S. Zick
2003-01-08  6:56   ` Andy Walker
2003-01-08 12:14     ` Michael S. Zick
  -- strict thread matches above, loose matches on Subject: below --
2003-01-08 12:27 Robert Dewar
2003-01-08 12:13 Robert Dewar
2003-01-08 12:21 ` Lars Segerlund
2003-01-08  5:36 Robert Dewar
2003-01-07 21:01 Marcel Cox
2003-01-07 22:53 ` tm_gccmail
2003-01-08  1:05   ` tm_gccmail
2003-01-08  1:22   ` tm_gccmail
2003-01-08 11:45   ` Marcel Cox
2003-01-08 17:29   ` Marcel Cox
2003-01-07 17:19 Robert Dewar
2003-01-07 12:32 Robert Dewar
2003-01-07 19:03 ` tm_gccmail
2003-01-07 19:20   ` tm_gccmail
2003-01-08  7:52     ` Andy Walker
2003-01-08 19:29       ` Michael S. Zick
2003-01-08 20:10         ` Michael S. Zick
2003-01-08 20:44         ` tm_gccmail
2003-01-08 21:34           ` Michael S. Zick
2003-01-08 22:05             ` tm_gccmail
2003-01-08  6:08 ` Andy Walker
2003-01-07 12:08 Robert Dewar
2003-01-07 12:10 ` Momchil Velikov
2003-01-06 20:59 Robert Dewar
2003-01-07  5:29 ` Andy Walker
2003-01-07 21:49   ` Marcel Cox
2003-01-07 21:55     ` Branko Čibej
2003-01-07 21:55       ` Marcel Cox
2003-01-08 17:32 ` Tom Lord
2003-01-05 15:47 Robert Dewar
2003-01-05 22:14 ` Tom Lord
2003-01-05 14:08 Robert Dewar
2003-01-05 16:50 ` Michael S. Zick
2003-01-06 19:42 ` Tom Lord
2003-01-06  8:06   ` Andy Walker
2003-01-06 22:45     ` Michael S. Zick
2003-01-07  6:04       ` Andy Walker
2003-01-05 14:05 Robert Dewar
2003-01-06 19:42 ` Tom Lord
2003-01-06  6:49   ` Andy Walker
2003-01-05 13:13 Robert Dewar
2003-01-06  4:40 ` Andy Walker
2003-01-06 16:46   ` Michael S. Zick
2003-01-07  5:20     ` Andy Walker
2003-01-06 19:42 ` Tom Lord
2003-01-06  6:39   ` Andy Walker
2003-01-06  6:50     ` Daniel Berlin
2003-01-06  9:00       ` Andy Walker
2003-01-05 11:41 Robert Dewar
2003-01-05 16:30 ` Michael S. Zick
2003-01-06  4:53 ` Andy Walker
2003-01-06 19:50 ` Tom Lord
2003-01-06  6:29   ` Andy Walker
2003-01-06 21:53   ` Michael S. Zick
2003-01-07  6:02     ` Andy Walker
2003-01-07 17:41       ` Janis Johnson
2003-01-04 18:12 Robert Dewar
2003-01-04 14:50 Robert Dewar
2003-01-04 18:00 ` Denis Chertykov
2003-01-05  5:53   ` Andy Walker
2003-01-05  5:43 ` Andy Walker
2002-12-27  5:47 Chris Lattner
2002-12-29  0:35 ` Andy Walker
2002-12-29  5:58   ` Chris Lattner
2002-12-29  6:26     ` Alexandre Oliva
2002-12-29 12:04     ` Andy Walker
2002-12-29 13:58       ` Daniel Berlin
2002-12-29 22:41         ` Andy Walker
2002-12-29 15:50       ` Diego Novillo
2002-12-29 22:44         ` Andy Walker
2002-12-30  1:30           ` Zack Weinberg
2002-12-30  2:57             ` Andy Walker
2002-12-30  7:52             ` Michael S. Zick
2002-12-29  7:44   ` Daniel Egger
2002-12-29 12:10     ` Andy Walker
2002-12-30 20:58       ` James Mansion
2002-12-31  3:56         ` Michael S. Zick
2002-12-30  1:09     ` Michael S. Zick
2002-12-30  7:27       ` Daniel Egger
2002-12-30 10:25         ` Michael S. Zick
2002-12-30 20:50         ` Daniel Berlin
