public inbox for gcc-help@gcc.gnu.org
* GCC's -ffast-math behavior
@ 2012-02-09  8:33 xunxun
  2012-02-09  9:21 ` Andrew Haley
  0 siblings, 1 reply; 17+ messages in thread
From: xunxun @ 2012-02-09  8:33 UTC (permalink / raw)
  To: gcc-help

[-- Attachment #1: Type: text/plain, Size: 1358 bytes --]

Hi, all

There is something about -ffast-math that I don't understand.

     I want to know why gcc doesn't use another math library when 
-ffast-math is used.

     Take the attached test.c for example:

        gcc -O3 -c test.c -o test1.o
        nm test1.o  shows:
------------------------
00000000 b .bss
00000000 d .data
00000000 i .drectve
00000000 r .eh_frame
00000000 r .rdata
00000000 t .text
00000000 t .text.startup
          U ___main
          U _exp
00000000 D _in
00000000 T _main
00000080 C _out
          U _printf
------------------------

        gcc -O3 -ffast-math -c test.c -o test2.o
        nm test2.o shows:
------------------------
00000000 b .bss
00000000 d .data
00000000 i .drectve
00000000 r .eh_frame
00000000 r .rdata
00000000 t .text
00000000 t .text.startup
          U ___main
00000000 D _in
00000000 T _main
00000080 C _out
          U _printf
------------------------

        When using -ffast-math, gcc doesn't generate the math function 
symbol: U _exp

              That causes a problem: if I have another optimized math 
library (such as the Intel Math Library), the object built with 
-ffast-math no longer references the library's exp, whereas without 
-ffast-math the call is resolved by that library.

              So I want to know: is there a way to use -ffast-math and 
still have the math calls resolved by the optimized math library?

              Many thanks.

-- 
Best Regards,
xunxun


[-- Attachment #2: test.c --]
[-- Type: text/plain, Size: 322 bytes --]

#include <math.h>
#include <stdio.h>

#define N 16

double in[N] = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0,
11.0, 12.0, 13.0, 14.0, 15.0, 16.0 };
double out[N];

int main ()
{
  int i;

  for (i = 0; i < N; i++)
    out[i] = exp (in[i]);

  for (i = 0; i < N; i++)
    printf ("%f\n", out[i]);

  return 0;
} 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-09  8:33 GCC's -ffast-math behavior xunxun
@ 2012-02-09  9:21 ` Andrew Haley
  2012-02-09  9:35   ` xunxun
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Haley @ 2012-02-09  9:21 UTC (permalink / raw)
  To: gcc-help

On 02/09/2012 08:33 AM, xunxun wrote:
>         When using -ffast-math, gcc doesn't generate the math function 
> symbol: U _exp

No, it doesn't.  Instead gcc uses the F2XM1 instruction.  Why would
you want to call a library when gcc has an instruction to do the
job?
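
For illustration only, a minimal C sketch (not GCC's actual expansion) of
the identity the F2XM1-based inline code relies on: exp(x) = 2^(x * log2(e)).
F2XM1 supplies 2^f - 1 for the fractional part f of the exponent, and
FSCALE applies the remaining integer power of two.

    #include <math.h>

    static double exp_via_exp2(double x)
    {
        const double log2e = 1.4426950408889634;  /* log2(e) */
        double y = x * log2e;            /* exp(x) == 2^y            */
        double n = floor(y);             /* integer part  -> FSCALE  */
        double f = y - n;                /* fractional part -> F2XM1 */
        return ldexp(exp2(f), (int)n);   /* 2^f * 2^n                */
    }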

Andrew.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-09  9:21 ` Andrew Haley
@ 2012-02-09  9:35   ` xunxun
  2012-02-09  9:47     ` Andrew Haley
  0 siblings, 1 reply; 17+ messages in thread
From: xunxun @ 2012-02-09  9:35 UTC (permalink / raw)
  To: Andrew Haley; +Cc: gcc-help

On 2012/2/9 17:21, Andrew Haley wrote:
> On 02/09/2012 08:33 AM, xunxun wrote:
>>          When using -ffast-math, gcc doesn't generate the math function
>> symbol: U _exp
> No, it doesn't.  Instead gcc uses the F2XM1 instruction.  Why would
> you want to call a library when gcc has an instruction to do the
> job?
>
> Andrew.
>
>
Because the other math library is faster than gcc's own code (even with 
fast-math), and I want to use fast-math to make the rest of the calculation faster, too.

-- 
Best Regards,
xunxun

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-09  9:35   ` xunxun
@ 2012-02-09  9:47     ` Andrew Haley
  2012-02-09 10:08       ` xunxun
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Haley @ 2012-02-09  9:47 UTC (permalink / raw)
  To: gcc-help

On 02/09/2012 09:35 AM, xunxun wrote:
> On 2012/2/9 17:21, Andrew Haley wrote:
>> On 02/09/2012 08:33 AM, xunxun wrote:
>>>          When using -ffast-math, gcc doesn't generate the math function
>>> symbol: U _exp
>> No, it doesn't.  Instead gcc uses the F2XM1 instruction.  Why would
>> you want to call a library when gcc has an instruction to do the
>> job?
>>
> Because the other math library is faster than gcc's own code (even with 
> fast-math), and I want to use fast-math to make the rest of the calculation faster, too.

Hmm, I think that'll be difficult.  We tend to assume that when a
processor has built-in instructions to do something, that's the
fastest way to do it.  It's usually true, and I am wondering what
tricks Intel uses.  Granted, the floating-point transcendental
instructions aren't super-fast, and perhaps Intel doesn't optimize
them any more.

Andrew.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-09  9:47     ` Andrew Haley
@ 2012-02-09 10:08       ` xunxun
  2012-02-09 10:11         ` Andrew Haley
  0 siblings, 1 reply; 17+ messages in thread
From: xunxun @ 2012-02-09 10:08 UTC (permalink / raw)
  To: Andrew Haley; +Cc: gcc-help

On 2012/2/9 17:47, Andrew Haley wrote:
> On 02/09/2012 09:35 AM, xunxun wrote:
>> On 2012/2/9 17:21, Andrew Haley wrote:
>>> On 02/09/2012 08:33 AM, xunxun wrote:
>>>>           When using -ffast-math, gcc doesn't generate the math function
>>>> symbol: U _exp
>>> No, it doesn't.  Instead gcc uses the F2XM1 instruction.  Why would
>>> you want to call a library when gcc has an instruction to do the
>>> job?
>>>
>> Because the other math library is faster than gcc's own code (even with
>> fast-math), and I want to use fast-math to make the rest of the calculation faster, too.
> Hmm, I think that'll be difficult.  We tend to assume that when a
> processor has built-in instructions to do something, that's the
> fastest way to do it.  It's usually true, and I am wondering what
> tricks Intel uses.  Granted, the floating-point transcendental
> instructions aren't super-fast, and perhaps Intel doesn't optimize
> them any more.
>
> Andrew.
>
Thank you for the explanation.

I think I can separate all the math function calls from the rest of the 
code, put them in one library, and build that library without fast-math. :)
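
A minimal sketch of what that could look like (hypothetical file and
function names): a separate translation unit built without -ffast-math,
so its calls stay as ordinary math-library symbol references that the
optimized library can satisfy at link time.

    /* mathwrap.c -- build this file WITHOUT -ffast-math:
     *     gcc -O3 -c mathwrap.c
     * The rest of the program can then be built with -ffast-math and
     * linked against the optimized math library as before. */
    #include <math.h>

    double my_exp(double x) { return exp(x); }
    double my_sin(double x) { return sin(x); }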

-- 
Best Regards,
xunxun

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-09 10:08       ` xunxun
@ 2012-02-09 10:11         ` Andrew Haley
  2012-02-09 10:21           ` xunxun
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Haley @ 2012-02-09 10:11 UTC (permalink / raw)
  To: gcc-help

On 02/09/2012 10:07 AM, xunxun wrote:
> On 2012/2/9 17:47, Andrew Haley wrote:
>> On 02/09/2012 09:35 AM, xunxun wrote:
>>> On 2012/2/9 17:21, Andrew Haley wrote:
>>>> On 02/09/2012 08:33 AM, xunxun wrote:
>>>>>           When using -ffast-math, gcc doesn't generate the math function
>>>>> symbol: U _exp
>>>> No, it doesn't.  Instead gcc uses the F2XM1 instruction.  Why would
>>>> you want to call a library when gcc has an instruction to do the
>>>> job?
>>>>
>>> Because the other math library is faster than gcc's own code (even with
>>> fast-math), and I want to use fast-math to make the rest of the calculation faster, too.
>> Hmm, I think that'll be difficult.  We tend to assume that when a
>> processor has built-in instructions to do something, that's the
>> fastest way to do it.  It's usually true, and I am wondering what
>> tricks Intel uses.  Granted, the floating-point transcendental
>> instructions aren't super-fast, and perhaps Intel doesn't optimize
>> them any more.
>>
> Thank you for the explanation.
> 
> I think I can separate all the math function calls from the rest of the 
> code, put them in one library, and build that library without fast-math. :)

Okay.  Can you tell us how much faster than the builtins the Intel lib
actually is, and how you measured that?

Andrew.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-09 10:11         ` Andrew Haley
@ 2012-02-09 10:21           ` xunxun
  2012-02-09 10:29             ` Andrew Haley
  0 siblings, 1 reply; 17+ messages in thread
From: xunxun @ 2012-02-09 10:21 UTC (permalink / raw)
  To: Andrew Haley; +Cc: gcc-help

[-- Attachment #1: Type: text/plain, Size: 545 bytes --]

On 2012/2/9 18:10, Andrew Haley wrote:
>
> Okay.  Can you tell us how much faster than the builtins the Intel lib
> actually is, and how you measured that?
>
> Andrew.
>
I used the attached main.c (it tests sin speed).

On Win7 64-bit, with gcc 4.6.2 32-bit:

gcc -O3 -ffast-math main.c -o main.exe

Running main.exe takes 6.853 s.

When linking with Intel libM, without fast-math:

gcc -O3 main.c -o main.exe libmmt.lib libircmt.lib

Running main.exe takes 4.367 s.

P.S.: libmmt.lib and libircmt.lib come with the Intel C/C++ Compiler.

-- 
Best Regards,
xunxun


[-- Attachment #2: main.c --]
[-- Type: text/plain, Size: 1029 bytes --]

#include <stdio.h>
#include <stdlib.h> 
#include <time.h> 
#include <math.h>


#define INTEG_FUNC(x)  fabs(sin(x))   /* fabs, not abs: the integrand is a double */

int main(void)
{
   unsigned int i, j, N;
   double step, x_i, sum;
   double start, finish, duration;
   double interval_begin = 0.0;
   double interval_end = 2.0 * 3.141592653589793238;

   start = clock();

   printf("     \n");
   printf("    Number of    | Computed Integral | \n");
   printf(" Interior Points |                   | \n");
   for (j=2;j<27;j++)
   {
    printf("------------------------------------- \n");

     N =  1 << j;
     step = (interval_end - interval_begin) / N;
     sum = INTEG_FUNC(interval_begin) * step / 2.0;

     for (i=1;i<N;i++)
     {
        x_i = i * step;
        sum += INTEG_FUNC(x_i) * step;
     }

     sum += INTEG_FUNC(interval_end) * step / 2.0;

     printf(" %10d      |  %14e   | \n", N, sum);
   }
   finish = clock();
   duration = (finish - start);
   printf("     \n");
   printf("   Application Clocks   = %10e  \n", duration);
   printf("     \n");

   return 0;
}

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-09 10:21           ` xunxun
@ 2012-02-09 10:29             ` Andrew Haley
  2012-02-09 10:35               ` xunxun
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Haley @ 2012-02-09 10:29 UTC (permalink / raw)
  To: xunxun; +Cc: gcc-help

On 02/09/2012 10:20 AM, xunxun wrote:
> I used the attached main.c (it tests sin speed).
> 
> On Win7 64-bit, with gcc 4.6.2 32-bit:
> 
> gcc -O3 -ffast-math main.c -o main.exe
> 
> Running main.exe takes 6.853 s.
> 
> When linking with Intel libM, without fast-math:
> 
> gcc -O3 main.c -o main.exe libmmt.lib libircmt.lib
> 
> Running main.exe takes 4.367 s.

Ah okay, I get it now, it's sin, not exp.  I suspect that it's the
argument reduction step that's slowing you down.
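
A minimal sketch (for illustration only, not any particular libm's code)
of the kind of argument reduction meant here: fold x into a small range
around zero before evaluating sin.  For large arguments this reduction is
where much of the time goes; real libraries do it far more carefully
(Payne-Hanek style) than this.

    #include <math.h>

    static double sin_reduced(double x)
    {
        const double two_pi = 6.283185307179586;
        double k = nearbyint(x / two_pi);  /* nearest multiple of 2*pi */
        double r = x - k * two_pi;         /* reduced argument, roughly in [-pi, pi] */
        return sin(r);                     /* sin is 2*pi periodic */
    }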

Andrew.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-09 10:29             ` Andrew Haley
@ 2012-02-09 10:35               ` xunxun
  2012-02-09 10:45                 ` xunxun
  0 siblings, 1 reply; 17+ messages in thread
From: xunxun @ 2012-02-09 10:35 UTC (permalink / raw)
  To: Andrew Haley, gcc-help

On 2012/2/9 18:29, Andrew Haley wrote:
> On 02/09/2012 10:20 AM, xunxun wrote:
>> I used the attached main.c (it tests sin speed).
>>
>> On Win7 64-bit, with gcc 4.6.2 32-bit:
>>
>> gcc -O3 -ffast-math main.c -o main.exe
>>
>> Running main.exe takes 6.853 s.
>>
>> When linking with Intel libM, without fast-math:
>>
>> gcc -O3 main.c -o main.exe libmmt.lib libircmt.lib
>>
>> Running main.exe takes 4.367 s.
> Ah okay, I get it now, it's sin, not exp.  I suspect that it's the
> argument reduction step that's slowing you down.
>
> Andrew.
>
I think so.  I haven't tried changing main.c's sin to exp, but I expect
it would also be slow.

-- 
Best Regards,
xunxun

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-09 10:35               ` xunxun
@ 2012-02-09 10:45                 ` xunxun
  2012-02-10  5:52                   ` Miles Bader
  0 siblings, 1 reply; 17+ messages in thread
From: xunxun @ 2012-02-09 10:45 UTC (permalink / raw)
  To: Andrew Haley, gcc-help

On 2012/2/9 18:35, xunxun wrote:
> On 2012/2/9 18:29, Andrew Haley wrote:
>> On 02/09/2012 10:20 AM, xunxun wrote:
>>> I used the attached main.c (it tests sin speed).
>>>
>>> On Win7 64-bit, with gcc 4.6.2 32-bit:
>>>
>>> gcc -O3 -ffast-math main.c -o main.exe
>>>
>>> Running main.exe takes 6.853 s.
>>>
>>> When linking with Intel libM, without fast-math:
>>>
>>> gcc -O3 main.c -o main.exe libmmt.lib libircmt.lib
>>>
>>> Running main.exe takes 4.367 s.
>> Ah okay, I get it now, it's sin, not exp.  I suspect that it's the
>> argument reduction step that's slowing you down.
>>
>> Andrew.
>>
> I think so.  I haven't tried changing main.c's sin to exp, but I expect
> it would also be slow.
>
And I think it's related to -funsafe-math-optimizations.

If I use only that option, gcc also doesn't generate the symbol.

The other fast-math options,

  -fno-math-errno, -ffinite-math-only, -fno-rounding-math, -fno-signaling-nans, and -fcx-limited-range,

don't have that effect.
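
One way to check this (a sketch with hypothetical object-file names,
using the same test.c and nm commands as earlier in the thread):

    gcc -O3 -funsafe-math-optimizations -c test.c -o test3.o
    nm test3.o        (no "U _exp" line in the output)

    gcc -O3 -fno-math-errno -ffinite-math-only -c test.c -o test4.o
    nm test4.o        ("U _exp" is still listed)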

-- 
Best Regards,
xunxun

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-09 10:45                 ` xunxun
@ 2012-02-10  5:52                   ` Miles Bader
  2012-02-10  6:18                     ` xunxun
  0 siblings, 1 reply; 17+ messages in thread
From: Miles Bader @ 2012-02-10  5:52 UTC (permalink / raw)
  To: xunxun; +Cc: Andrew Haley, gcc-help

xunxun <xunxun1982@gmail.com> writes:
> And I think it's related to -funsafe-math-optimizations.
>
> If I use only that option, gcc also doesn't generate the symbol.
>
> The other fast-math options,
>
>  -fno-math-errno, -ffinite-math-only, -fno-rounding-math,
> -fno-signaling-nans, and -fcx-limited-range,
>
> don't have that effect.

I presume you're compiling for i386 32-bit, which defaults to using
the legacy 387 FP unit?

If you switch to using SSE floating-point, e.g. using "-mfpmath=sse",
it will still call the library functions even when using -ffast-math
(because the SSE unit doesn't have special instructions like "fsin" or
"f2xm1").  I think SSE FP is typically faster than the 387 for many
CPUs anyway.

[If you compile for the x86_64 arch, it will default to using SSE FP.]
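
If that is right, a quick way to see it with the test.c and nm check from
earlier in the thread would be (hypothetical object-file name; -msse2 is
needed on 32-bit so that SSE is available for doubles):

    gcc -O3 -ffast-math -mfpmath=sse -msse2 -c test.c -o test_sse.o
    nm test_sse.o     (the "U _exp" reference should reappear, so an
                       external math library can resolve it at link time)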

-miles

-- 
Guilt, n. The condition of one who is known to have committed an indiscretion,
as distinguished from the state of him who has covered his tracks.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-10  5:52                   ` Miles Bader
@ 2012-02-10  6:18                     ` xunxun
  2012-02-10  6:41                       ` Miles Bader
  0 siblings, 1 reply; 17+ messages in thread
From: xunxun @ 2012-02-10  6:18 UTC (permalink / raw)
  To: Miles Bader, gcc-help

On 2012/2/10 13:52, Miles Bader wrote:
>
>
> If you switch to using SSE floating-point, e.g. using "-mfpmath=sse",
> it will still call the library functions even when using -ffast-math
> (because the SSE unit doesn't have special instructions like "fsin" or
> "f2xm1").  I think SSE FP is typically faster than the 387 for many
> CPUs anyway.
>
Well, that's right.
But in my experience, -mfpmath=sse slows my code down a lot.
I think the -mfpmath=sse option has some bugs on x86.


-- 
Best Regards,
xunxun

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-10  6:18                     ` xunxun
@ 2012-02-10  6:41                       ` Miles Bader
  2012-02-10 13:00                         ` Tim Prince
  2012-02-17  0:30                         ` James Cloos
  0 siblings, 2 replies; 17+ messages in thread
From: Miles Bader @ 2012-02-10  6:41 UTC (permalink / raw)
  To: xunxun; +Cc: gcc-help

xunxun <xunxun1982@gmail.com> writes:
>> If you switch to using SSE floating-point, e.g. using "-mfpmath=sse",
>> it will still call the library functions even when using -ffast-math
>> (because the SSE unit doesn't have special instructions like "fsin" or
>> "f2xm1").  I think SSE FP is typically faster than the 387 for many
>> CPUs anyway.
>>
> Well, that's right.
> But in my experience, -mfpmath=sse slows my code down a lot.

Hmm, I've always found SSE FP to be a speedup -- sometimes a _big_
speedup -- over 387 FP, at least when one is using mostly primitive FP
operations (mul, divide, sqrt, etc) ... I think it's worth testing, at
least.

Complicated FP functions like sin, exp, etc, seem to be a little
faster using 387 FP than using SSE FP -- but that's presumably
because when using SSE, those functions are implemented in -lm instead
of using special instructions.  Since you want to switch to a faster
FP library _anyway_, the quality of the standard FP library presumably
isn't much of a limitation for you... :]

> I think the -mfpmath=sse option has some bugs on x86.

The x86 is really the only place that option is useful, so I hope it's
mostly OK there... :]

-Miles

-- 
Circus, n. A place where horses, ponies and elephants are permitted to see
men, women and children acting the fool.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-10  6:41                       ` Miles Bader
@ 2012-02-10 13:00                         ` Tim Prince
  2012-02-17  0:30                         ` James Cloos
  1 sibling, 0 replies; 17+ messages in thread
From: Tim Prince @ 2012-02-10 13:00 UTC (permalink / raw)
  To: gcc-help

On 2/10/2012 1:39 AM, Miles Bader wrote:
> xunxun<xunxun1982@gmail.com>  writes:
>>> If you switch to using SSE floating-point, e.g. using "-mfpmath=sse",
>>> it will still call the library functions even when using -ffast-math
>>> (because the SSE unit doesn't have special instructions like "fsin" or
>>> "f2xm1").  I think SSE FP is typically faster than the 387 for many
>>> CPUs anyway.
>>>
>> Well, that's right.
>> But in my experience, -mfpmath=sse slows my code down a lot.
>
> Hmm, I've always found SSE FP to be a speedup -- sometimes a _big_
> speedup -- over 387 FP, at least when one is using mostly primitive FP
> operations (mul, divide, sqrt, etc) ... I think it's worth testing, at
> least.
>

387 code may be faster when you mix float and double, e.g. by forcing 
double evaluation of float expressions by removing the f suffix from 
constants.
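
A tiny illustration of that kind of mixing (a hypothetical snippet, not
from any particular program): without the f suffix the constant is a
double, so the float operand is promoted and the expression is evaluated
in double.

    float scale_mixed(float x) { return x * 2.5;  }  /* double constant: evaluated in double */
    float scale_float(float x) { return x * 2.5f; }  /* float constant: stays in float       */
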
It has been a long time since I had a 32-bit OS installed and could 
examine the broken mathinline.h headers that typically came with them. 
You would need to fix those before trying -ffast-math.


-- 
Tim Prince

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-10  6:41                       ` Miles Bader
  2012-02-10 13:00                         ` Tim Prince
@ 2012-02-17  0:30                         ` James Cloos
  2012-02-17  1:21                           ` Miles Bader
  1 sibling, 1 reply; 17+ messages in thread
From: James Cloos @ 2012-02-17  0:30 UTC (permalink / raw)
  To: gcc-help; +Cc: Miles Bader, xunxun

>>>>> "MB" == Miles Bader <miles@gnu.org> writes:

>> But in my experience, -mfpmath=sse slows my code down a lot.

MB> Hmm, I've always found SSE FP to be a speedup -- sometimes a _big_
MB> speedup -- over 387 FP, at least when one is using mostly primitive
MB> FP operations (mul, divide, sqrt, etc) ... I think it's worth
MB> testing, at least.

Many years ago, when I asked about using -mfpmath=sse on an ia32 box, the
advice was that, because the function args and return values had to be
passed on the 387 stack, most code would be much slower.

Some of the new chips seem to have specific optimizations to deal with
code which constantly moves values between registers and the stack, so
it is probably less of an issue on newer chips than it used to be.

But if one is using a newer chip, why not upgrade to -m64, too?

-JimC
-- 
James Cloos <cloos@jhcloos.com>         OpenPGP: 1024D/ED7DAEA6

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-17  0:30                         ` James Cloos
@ 2012-02-17  1:21                           ` Miles Bader
  2012-03-26  4:05                             ` xunxun
  0 siblings, 1 reply; 17+ messages in thread
From: Miles Bader @ 2012-02-17  1:21 UTC (permalink / raw)
  To: James Cloos; +Cc: gcc-help, xunxun

2012/2/17 James Cloos <cloos@jhcloos.com>:
>>>>>> "MB" == Miles Bader <miles@gnu.org> writes:
>
>>> But in my experience, -mfpmath=sse slows my code down a lot.
>
> MB> Hmm, I've always found SSE FP to be a speedup -- sometimes a _big_
> MB> speedup -- over 387 FP, at least when one is using mostly primitive
> MB> FP operations (mul, divide, sqrt, etc) ... I think it's worth
> MB> testing, at least.
>
> Many years ago, when I asked about using -mfpmath=sse on an ia32 box, the
> advice was that, because the function args and return values had to be
> passed on the 387 stack, most code would be much slower.

I suppose it depends on the actual content of the functions whether
that would be a significant factor.

In general, I'd think there shouldn't be a whole lot of
function-calling going on in the inner loop unless the functions in
question actually do something non-trivial (I think this is especially
true for a lot of FP-intensive coding styles, where somewhat more
attention is paid to throughput, and a bit less to things like
abstraction), and the more a function does, the less impact the
function call itself has.  So a speed increase in primitive operations
should make up for some extra per-call overhead.

> Some of the new chips seem to have specific optimizations to deal with
> code which constantly moves values between registers and the stack, so
> it is probably less of an issue on newer chips than it used to be.

My earlier observation is based on benchmarks mostly on P3-era CPUs
(the last time I used the traditional x86 abi much).  I dunno how
representative that is...

> But if one is using a newer chip, why not upgrade to -m64, too?

Totally :]

-miles

-- 
Cat is power.  Cat is peace.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: GCC's -ffast-math behavior
  2012-02-17  1:21                           ` Miles Bader
@ 2012-03-26  4:05                             ` xunxun
  0 siblings, 0 replies; 17+ messages in thread
From: xunxun @ 2012-03-26  4:05 UTC (permalink / raw)
  To: Miles Bader; +Cc: James Cloos, gcc-help

I think I found a good method: use -fno-builtin.
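
A minimal sketch of how that could be applied (command lines modeled on
the earlier ones; GCC also has a per-function form, -fno-builtin-FUNCTION,
so only the math builtins need to be disabled):

    gcc -O3 -ffast-math -fno-builtin main.c -o main.exe libmmt.lib libircmt.lib

or, more selectively:

    gcc -O3 -ffast-math -fno-builtin-sin -fno-builtin-exp main.c -o main.exe libmmt.lib libircmt.lib

Either way the sin/exp calls remain library references, so the optimized
math library can resolve them while the rest of the code is still compiled
with -ffast-math.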

-- 
Best Regards,
xunxun

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2012-03-26  4:05 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-02-09  8:33 GCC's -ffast-math behavior xunxun
2012-02-09  9:21 ` Andrew Haley
2012-02-09  9:35   ` xunxun
2012-02-09  9:47     ` Andrew Haley
2012-02-09 10:08       ` xunxun
2012-02-09 10:11         ` Andrew Haley
2012-02-09 10:21           ` xunxun
2012-02-09 10:29             ` Andrew Haley
2012-02-09 10:35               ` xunxun
2012-02-09 10:45                 ` xunxun
2012-02-10  5:52                   ` Miles Bader
2012-02-10  6:18                     ` xunxun
2012-02-10  6:41                       ` Miles Bader
2012-02-10 13:00                         ` Tim Prince
2012-02-17  0:30                         ` James Cloos
2012-02-17  1:21                           ` Miles Bader
2012-03-26  4:05                             ` xunxun

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).