public inbox for gcc@gcc.gnu.org
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-15  0:05 N8TM
  1998-12-15 10:01 ` Joe Buck
  0 siblings, 1 reply; 38+ messages in thread
From: N8TM @ 1998-12-15  0:05 UTC (permalink / raw)
  To: burley, jbuck; +Cc: ejr, hjstein, egcs

In a message dated 12/14/98 11:01:15 PM Pacific Standard Time, burley@gnu.org
writes:

<< avoids transformations that can change numerical results,
 such as pre-evaluating expressions with 64-bit precision that would
 otherwise be evaluated using 80-bit precision at runtime >>

A good point.  Among other things, this will require a good strtold (?)
conversion from decimal to binary, which I don't think is available.  At least
this needs some planning.
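
For illustration, the kind of conversion being asked for would look roughly
like the following (a sketch only, not from the original mail; strtold only
became standard later, with C99, and the literal shown is just an example):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Convert a decimal literal to full 80-bit extended precision, so a
           compile-time pre-evaluation could match what the FPU would compute
           at run time from the same literal. */
        const char *literal = "0.1";
        long double x = strtold(literal, NULL);

        printf("%.21Lg\n", x);   /* enough digits to expose the 64-bit mantissa */
        return 0;
    }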

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-22 13:30 Toon Moene
  0 siblings, 0 replies; 38+ messages in thread
From: Toon Moene @ 1998-12-22 13:30 UTC (permalink / raw)
  To: d.love, egcs

> FWIW, the g77 manual has a reference (in 
> `Floating-point Errors') to a
> supplemented version but I don't remember details.  
> (Expert comments on the collection of references there 
> would be welcome.)

[ Well, I'm certainly not an expert on floating point
  arithmetic, but I have been working with it for 20 years now
  and learned the hard way to be careful ]

The `Supplement' mentioned in the docs more than covers everything I
wanted to write on this subject.

Sigh - I probably slept while you added this to the documentation ...

For those not having the g77 info stuff handy: See
http://www.validgh.com/

The reason people do not jump to this information right away might be
that it is titled "Floating point errors".  Those who fall into the
various traps floating point arithmetic lays out for them invariably
think of it as a "compiler error", not a "floating point error" [not
least because the example I showed will simply "hang"] ;-)

Cheers,

-- 
Toon Moene (toon@moene.indiv.nluug.nl)
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
Phone: +31 346 214290; Fax: +31 346 214286
g77 Support: fortran@gnu.org; egcs: egcs-bugs@cygnus.com

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-22 11:07 John Wehle
  0 siblings, 0 replies; 38+ messages in thread
From: John Wehle @ 1998-12-22 11:07 UTC (permalink / raw)
  To: N8TM; +Cc: egcs, pcg, rth, hjstein, toon, burley

> Contrary to an opinion put forth in this exchange, I see that alignment of
> 64-bit spills makes a measurable difference in performance on my Pentium 2.
> 
> I noticed, somewhat accidentally, that Livermore Fortran Kernel 8 runs 10%
> faster when linked with cygwin-b20.1 than with cygwin-b19/coolview.  I built
> the compiler today under cygwin-b19, and the performance of all the other
> kernels was unchanged from the previous version of egcs/g77.  Relinking the
> same .o with the different .dll made the difference, and it made no difference
> whether I ran under bash linked with one .dll or the other.

Just as another data point, the BRL-CAD raytracing benchmarks run about 5%
faster when the compiler properly aligns doubles.  The current state of the
patch for this is:

  1) It only affects leaf functions.

  2) It aligns all register spills as necessary, and all simple uses
     of double / long double variables.

Open issues:

  1) The patch requires a frame pointer for those functions whose stack
     needs alignment.  I haven't run the BRL-CAD raytracing benchmarks with
     -fomit-frame-pointer to see if the proper alignment is worth requiring
     a frame.

  2) The patch currently doesn't provide alignment for variables such as:

     double a[10];

  3) The patch currently doesn't provide alignment in non-leaf functions.

  4) GDB will probably need updating due to the i386 prologue changes.

If I recall correctly, the main Pentium Pro / Pentium II performance hit
is when a double or long double crosses a cache line boundary (which can
happen if they're not aligned correctly).
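
A quick way to see whether a given build actually aligns stack doubles is
something along these lines (an illustrative sketch, not part of the patch;
the cast to unsigned long assumes an i386-style target where pointers fit):

    #include <stdio.h>

    int main(void)
    {
        double d = 0.0;                  /* a stack double: the traditional i386
                                            ABI only guarantees 4-byte alignment */
        unsigned long a = (unsigned long) &d;

        printf("&d = %p, 8-byte aligned: %s\n",
               (void *) &d, (a % 8 == 0) ? "yes" : "no");

        /* On the Pentium Pro / Pentium II the L1 line is 32 bytes, so a
           misaligned 8-byte access can straddle two lines and pay twice. */
        printf("straddles a 32-byte line: %s\n",
               (a % 32 > 24) ? "yes" : "no");
        return 0;
    }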

-- John
-------------------------------------------------------------------------
|   Feith Systems  |   Voice: 1-215-646-8000  |  Email: john@feith.com  |
|    John Wehle    |     Fax: 1-215-540-5495  |                         |
-------------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-21 23:30 N8TM
  0 siblings, 0 replies; 38+ messages in thread
From: N8TM @ 1998-12-21 23:30 UTC (permalink / raw)
  To: pcg, rth, hjstein, toon, burley; +Cc: egcs

Contrary to an opinion put forth in this exchange, I see that alignment of
64-bit spills makes a measurable difference in performance on my Pentium 2.

I noticed, somewhat accidentally, that Livermore Fortran Kernel 8 runs 10%
faster when linked with cygwin-b20.1 than with cygwin-b19/coolview.  I built
the compiler today under cygwin-b19, and the performance of all the other
kernels was unchanged from the previous version of egcs/g77.  Relinking the
same .o with the different .dll made the difference, and it made no difference
whether I ran under bash linked with one .dll or the other.

Examining the code, 9 loop invariant REAL*8 scalars are spilled outside the 2
innermost loops.  Each is restored once inside the inner loop.  There are 15
REAL*8 memory accesses directly to COMMON in the inner loop, and I believe 33
floating point operations.  In addition, 5 pointers are spilled and restored
in the inner loop.  The 10% increase in execution time for a mis-aligned stack
would indicate that the penalty for restoring a spilled REAL*8 is twice as
great when it is mis-aligned, even though it surely would stay in level 1
cache in the absence of cache mapping conflicts.

As I had mentioned several times earlier, I had noticed that the -O2 code was
running slower on W95 than -Os code, while this effect was not repeated on
linux-gnulibc1.  Today's finding confirms that effects like this stemmed from
mis-alignment of the stack, together with the smaller number of spills
generated with -Os.  With the up-to-date versions of both g77 and cygwin-b20,
there are no longer any Livermore Kernels that run slower with -O2 than -Os.

Not to say there are no challenges left!  I still find a few cases where the
commercial compiler lf90 4.50g runs 40% faster than g77 (as well as a smaller
number where g77 excels).  Apparently, there are no 80-bit spills or
misaligned COMMONs in that Lahey version, unlike the current lf95.

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-19 15:17 Geert Bosch
  1998-12-20  8:09 ` Toon Moene
  1998-12-22  4:17 ` Dave Love
  0 siblings, 2 replies; 38+ messages in thread
From: Geert Bosch @ 1998-12-19 15:17 UTC (permalink / raw)
  To: egcs, N8TM, Toon Moene

On Sat, 19 Dec 1998 21:37:49 +0100, Toon Moene wrote:

  In the mean time, it would be useful for the compiler to warn about
  testing floating point variables for (in)equality.

Testing for equality is perfectly fine on systems with IEEE arithmetic, 
and many algorithms would be impossible to write efficiently if one 
regarded floating point as a fuzzy kind of real value. Your statement would 
have been true in the pre-IEEE era, but fortunately floating-point arithmetic 
is well-defined on the large majority of current systems.
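
One example of a legitimate, terminating use of an exact comparison under
IEEE semantics (an illustrative sketch, not from the original mail):

    #include <stdio.h>

    int main(void)
    {
        /* A classic, terminating use of an exact == test: find the point at
           which 1.0 + eps rounds back to 1.0.  Under IEEE arithmetic the
           comparison is completely well-defined.  (On x86 the answer depends
           on whether the compare happens in an 80-bit register or after a
           64-bit store -- the very consistency issue of this thread.) */
        double eps = 1.0;

        while (1.0 + eps / 2.0 != 1.0)
            eps /= 2.0;

        printf("unit roundoff is about %g\n", eps);
        return 0;
    }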

If you want to know why, I advise you to read "What Every Computer Scientist 
Should Know About Floating-Point Arithmetic", by David Goldberg, in ACM 
Computing Surveys, vol. 23, no. 1, March 1991, available in PostScript at:
http://swift.lanl.gov/Internal/Computing/SunOS_Compilers/common-tools/numerical_comp_guide/goldberg1.ps

Regards,
   Geert



^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-19 14:26 N8TM
  0 siblings, 0 replies; 38+ messages in thread
From: N8TM @ 1998-12-19 14:26 UTC (permalink / raw)
  To: pcg, rth, hjstein, toon; +Cc: egcs

In a message dated 12/19/98 1:41:05 PM Pacific Standard Time, pcg@goof.com
writes:

<< Maybe, but compared to what? Nobody so far has brought some
 alternative with the same (good) semantics. People wanting speed
 do not need to use xfmode spilling. >>
If the xfmode spilling is an option, and aligned storage is available, I would
see no objection.

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-19 14:23 N8TM
  1998-12-20 13:51 ` Marc Lehmann
  0 siblings, 1 reply; 38+ messages in thread
From: N8TM @ 1998-12-19 14:23 UTC (permalink / raw)
  To: pcg; +Cc: egcs

In a message dated 12/19/98 1:39:03 PM Pacific Standard Time, pcg@goof.com
writes:

<< Ok, _some_ data: if everything is in the cache, on my p-ii,
 
         fldl %0
         fxam
         fstpl %0
         fwait
 
 takes 3 cycles regardless of how the memory is aligned.
 
 The code sequence:
 
         fldt %0
         fxam
         fstpt %0
         fwait
 
 takes 6 cycles.
 
 I have no idea how valid these results are (I'm probably not measuring the
 fst), but xfmode spills seem to be expensive.
  >>
Thanks for this indication.  That would reinforce my opinion that double
spills might be preferred where the syntax indicates single (float) precision,
with xfmode reserved for those cases where the syntax indicates double.  I
would still want to be assured of a mechanism to align the spills, unless
tests could show that is unnecessary.  I have to be skeptical, given that the
compiler (lf95) which does use xfmode spills suffers so much from misalignment
of most declared double arrays.
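
For anyone wanting to reproduce that kind of measurement, a rough harness
along the following lines could be used (a GCC-specific sketch only; names
and the iteration count are illustrative, and rdtsc noise plus the loop
overhead will muddy the cycle counts, as noted in the quoted mail):

    #include <stdio.h>

    /* Read the time-stamp counter (GCC extended asm, Pentium-class x86 only). */
    static unsigned long long rdtsc(void)
    {
        unsigned int lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long) hi << 32) | lo;
    }

    /* Time repeated 64-bit load/store round trips through the double at p. */
    static unsigned long long time_spill(volatile double *p, int iters)
    {
        unsigned long long t0 = rdtsc();
        int i;

        for (i = 0; i < iters; i++)
            __asm__ __volatile__ ("fldl %0\n\tfstpl %0" : "+m" (*p));
        return rdtsc() - t0;
    }

    int main(void)
    {
        /* 16-byte aligned buffer; buf plus 4 bytes is a deliberately
           misaligned double (x86 allows the access, it is just slower). */
        static double buf[4] __attribute__ ((aligned (16)));
        volatile double *aligned = &buf[0];
        volatile double *misaligned = (volatile double *) ((char *) buf + 4);
        int n = 1000000;

        printf("aligned:    %llu cycles\n", time_spill(aligned, n));
        printf("misaligned: %llu cycles\n", time_spill(misaligned, n));
        return 0;
    }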

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-19 13:00 N8TM
  0 siblings, 0 replies; 38+ messages in thread
From: N8TM @ 1998-12-19 13:00 UTC (permalink / raw)
  To: toon, egcs

In a message dated 12/19/98 12:39:34 PM Pacific Standard Time,
toon@moene.indiv.nluug.nl writes:

<< I tend to turn this remark around:  What we need in the g77 manual
 (despite the fact that it is not exclusively relevant to FORTRAN) is a
 section on the uses and pitfalls of floating point arithmetic.
 
 I'll set out to write this (this won't be easy, as I have to evade the
 obvious references for copyright reasons).>>

Excellent; if you are prepared for pre-publication suggestions or criticism,
let me know.
 
<< In the mean time, it would be useful for the compiler to warn about
 testing floating point variables for (in)equality.>>

I have used too many compilers which included such warnings, and find
them a hindrance.
 
<< HTH,
 
 -- 
 Toon Moene (toon@moene.indiv.nluug.nl)
  >>

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-19  9:05 N8TM
  1998-12-19 12:39 ` Toon Moene
  0 siblings, 1 reply; 38+ messages in thread
From: N8TM @ 1998-12-19  9:05 UTC (permalink / raw)
  To: emil, burley, egcs

In a message dated 12/19/98 6:46:15 AM Pacific Standard Time,
emil@skatter.usask.ca writes:

<<  I very much
 appreciate your proposal AND I endorse it completely. I am more than
 willing to pay a performance penalty in order to get numerically
 accurate results with less programming on my part.  >>
I would like to join in thanking Craig for raising this issue and offering to
work on it.    My primary objection to it was that the performance penalty
would be too large if the problem of mis-aligned spills were not solved.  With
that qualification, I endorse it also.

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-18 23:07 N8TM
  1998-12-19 13:39 ` Marc Lehmann
  0 siblings, 1 reply; 38+ messages in thread
From: N8TM @ 1998-12-18 23:07 UTC (permalink / raw)
  To: rth, hjstein, toon; +Cc: egcs

In a message dated 12/18/98 10:36:15 PM Pacific Standard Time, rth@cygnus.com
writes:

<< I have not tried quantifying the change.  I would want to examine
 things more closely, however, because 25% seems low to me. >>

Some proponents of the idea felt that spills were so rare that no difference
would be seen regardless of the efficiency of an 80-bit spilling
implementation. My 25% figure is for a complete execution of the application;
certainly there must be sections of this application where the 80-bit spills
are doubling the time spent.  That means the 80-bit spills, a majority of them
mis-aligned, are taking several times as long as the 32-bit spills.

 As I'm seeing so many implementations where 64-bit spills are mis-aligned,
I'd like to see the alignment problem solved before I'm stuck with wider
spills.

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-18 21:58 N8TM
  1998-12-18 22:36 ` Richard Henderson
  0 siblings, 1 reply; 38+ messages in thread
From: N8TM @ 1998-12-18 21:58 UTC (permalink / raw)
  To: rth, hjstein, toon; +Cc: egcs

In a message dated 12/18/98 4:03:09 PM Pacific Standard Time, rth@cygnus.com
writes:

<< On the contrary.  If you work with SFmode values, they'll be spilled
 in SFmode.  And XFmode reads/writes to unaligned (mod 16) addresses
 take extra time.
  >>
How much extra time?  Is it feasible to make the XFmode spills use aligned
addresses, and would alignment be as much of an improvement as in DFmode?  The
only quantification I've seen is my test of one application indicating that
changing spills from SFmode to XFmode appears to make that application run 25%
longer on a PPro.

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-17  1:43 N8TM
  1998-12-17 12:35 ` Marc Lehmann
  0 siblings, 1 reply; 38+ messages in thread
From: N8TM @ 1998-12-17  1:43 UTC (permalink / raw)
  To: pcg, egcs

In a message dated 12/16/98 12:36:17 PM Pacific Standard Time, pcg@goof.com
writes:

<< I still don't see what the 64 bit precision idea gives us, in terms of
 performance. First, it doesn't give us full IEEE; second, it kills
 performance, depending on where the rounding mode is set (before each
 assignment? resetting it to normal before each long double assignment?)
 
 IAW, how is 64 bit rounding mode going to be faster? For me, it seems this
 creates a similar situation to the float->integer conversion, i.e. saving and
 restoring the control word with each assignment. >>
Although I haven't seen anyone specify this, I assume they mean to leave
64-bit mode set throughout the program, or at least for the duration of any
intensive computing.  I've tried running Livermore Fortran Kernels this way,
and it does speed up division and sqrt(), as it should.  It works reasonably
well as long as all arithmetic is intended to be ordinary single or double
precision.  
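
Setting the control word once at program start is how this is usually done;
a minimal glibc-specific sketch (the header and macro names are the glibc
ones, and other environments spell this differently):

    #include <fpu_control.h>   /* glibc-specific */

    /* Switch the x87 to 53-bit (double) significand precision for the whole
       run; this is what makes fdiv and fsqrt faster.  The exponent range
       stays at the extended 15 bits, so results are still not exactly IEEE
       double in all corner cases. */
    static void use_double_precision(void)
    {
        fpu_control_t cw;

        _FPU_GETCW(cw);
        cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
        _FPU_SETCW(cw);
    }

    int main(void)
    {
        use_double_precision();
        /* ... intensive single- or double-precision computation here ... */
        return 0;
    }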

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-16  6:10 N8TM
  0 siblings, 0 replies; 38+ messages in thread
From: N8TM @ 1998-12-16  6:10 UTC (permalink / raw)
  To: hjstein, egcs; +Cc: ejr, jbuck, egcs

In a message dated 12/16/98 12:36:10 AM Pacific Standard Time,
hjstein@bfr.co.il writes:

<<  > Maybe that option could be implied by -ffast-math.
 
 I'd much rather have more precise control over it.  Doesn't
 -ffast-math imply various sorts of liberties to be taken? >>

Yes, there are too many unrelated liberties collected under -ffast-math
already.  I've never found a situation where changing the treatment of
comparisons gave any benefit, and I leave it off for that reason.

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-13  6:19 Stephen L Moshier
  1998-12-13 10:49 ` Craig Burley
  0 siblings, 1 reply; 38+ messages in thread
From: Stephen L Moshier @ 1998-12-13  6:19 UTC (permalink / raw)
  To: burley, egcs

Spilling of fp registers was very rare before the -fforce-mem flag was
turned on by default.  In fact, there was a compiler bug that would
overflow the x87 register stack before any fp register actually got spilled.
Running with -fno-force-mem will tend to relieve any actual pressure on fp
register allocation.

Compiler-generated temporaries are not the same thing as spilling.
The -ffloat-store switch usually will not work on them, as you can
see by stepping through some compilations.
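
A small illustration of that distinction (a sketch only; whether the
comparison actually comes out false depends on what the optimizer does with
the temporary, and the function names are made up):

    #include <stdio.h>

    /* -ffloat-store forces a store (and hence a re-round to 64 bits) only
       when assigning to a declared variable such as d below; the anonymous
       temporary a * b in the comparison can still be kept, or spilled, at
       whatever width the back end chooses. */
    static int same_product(double a, double b)
    {
        double d = a * b;       /* re-rounded to 64 bits under -ffloat-store */
        return a * b == d;      /* the temporary may still carry 80 bits, so
                                   this can be 0 even with -ffloat-store     */
    }

    int main(void)
    {
        printf("%d\n", same_product(1.0 / 3.0, 3.0));
        return 0;
    }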




^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86
@ 1998-12-03  6:34 N8TM
  1998-12-04 15:23 ` Craig Burley
  0 siblings, 1 reply; 38+ messages in thread
From: N8TM @ 1998-12-03  6:34 UTC (permalink / raw)
  To: tprince, burley, egcs

In a message dated 12/2/98 burley@gnu.org writes:
C: Craig Burley
T: Tim
<<C: the loads/stores involving the variables themselves would be
 single-precision, but the operations are done in, or produce results
 in, extended (80-bit) precision.  These should, according to my
 proposal, be *spilled* as 80-bit, not 64-bit or 32-bit, values,
 though when written to destinations (user-named variables), they'd
 then (normally) be chopped down to size, per -ffloat-store and
 what-not.
 
T: For a single-precision calculation, performing the register spills in
double would provide enough extra precision, without significant impact on
performance, if aligned storage can be used. Certainly, 80-bit spills would be
fine if they didn't impact performance.  This is like going back to the old
days of the GE600/Honeywell6000 architecture, where the floating point
register was 80 bits wide (only 8 bits for the exponent!) but there was no
efficient way to spill the full register width, nor would there have been much
use for it, considering how much of the extra precision was lost due to
underflows.

 >>C:In other words, the default for x86 code generation should
 >>apparently be that, when the compiler generates an intermediate result,
 >>it *always* uses maximum available precision for that result, even
 >>if it has to spill the result to memory.  (I *think* it can do this while
 >>obeying the current FP mode, but don't have time to check right
 >>now.)
 >>[...]
 >
 >T: In the case where e is used in a subsequent calculation, we
 >don't want to force a store and reload unless -ffloat-store is
 >invoked.
 
 >C: Correct, AFAIK.

T: There's some uncertainty here, where the desire to maintain performance
causes us to keep the extra precision, although the programmer might
conceivably not want it.  To turn it off in a "fine-grained" manner, the
programmer must code a "float-store" explicitly, which I do by invoking an
external function that returns the rounded-off value (so it can't be inlined).
 
 >T: But I'm not sure you can always apply the same rules to
 >storage to a named variable (it might be stored in a structure or
 >COMMON block) as to register spills, which aren't visible in the
 >source code.
 
>C:  No, I don't think you can, and that's what my proposal and email
 were trying to clarify (less than successfully, I gather!).
 
>C: That is, I was trying to focus my proposal on only the compiler-
 generated temporaries that get spilled and chopped down to "size"
 at the same time.
 
 >T: This is a more
 >difficult question to solve and I'm confused about what
 >connection you are making between that and the spilled
 >temporaries.
 
>C:  In my proposal, essentially none, except that it used to confuse me,
 and I believe it still confuses others, that there are pretty bright-
 line distinctions between compiler-generated temporaries and user-named
 variables, in terms of precisions the compiler is, or should be,
 permitted to employ for each class.  (But not all the distinctions
 are so clear, it seems.)
 
 
>C:  With compiler-generated temporaries, it is, again, helpful or hurtful,
 and normally permitted, for the compiler to employ *more* than the
 implicit precision of the operation, but the problem with the gcc
 back end, on the x86 at least, is that it (apparently) sometimes
 employs *less*, specifically, when spilling those temporaries.  (That
 is, when the temporary needs to be copied from the register in which
 it "lives" to a memory location, the gcc back end apparently is
 happy to chop the temporary down to fit into a smaller memory location.)
 
 >C: My proposal deals only with this latter deficiency (as I now think it
 is), that is, it recommends that precision *reduction* of compiler-
 generated temporaries no longer happen (at least not by default).
 
  
>C:  -  The compiler provides no way to "force" available excess precision
      to be reliably used for programmer-named variables anyplace that
      is possible (say, within a module).  Some compilers offer explicit
      extended type declarations (REAL*10 in Fortran; `long double' in C?),
      but g77 doesn't yet.  So, whether a named variable carries the
      (possible) excess precision of its computed value into subsequent
      calculations is at the whim of the compiler's optimization phases.
 
T: I think what you are getting at is that it's usually acceptable for the
results to be calculated in the declared precision; extra precision is usually
desirable, but unpredictable combinations of extra precision and no extra
precision may be disastrous.  See Kahan's writings about the quadratic
formula.  Your proposal would make an improvement here.
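
[ For readers without Kahan's notes handy, the cancellation he discusses is
  the one in the naive quadratic formula; a hedged sketch of the usual
  remedy, assuming real roots and nonzero a and q (extra precision in the
  b*b - 4ac term is what helps in the remaining hard cases):

    #include <math.h>
    #include <stdio.h>

    /* The textbook formula (-b +/- sqrt(b*b - 4ac)) / (2a) cancels badly in
       one of the two roots when b*b >> 4*a*c.  The variant below computes
       the larger-magnitude root first and derives the other from the product
       of the roots, c/a. */
    static void quadratic_roots(double a, double b, double c,
                                double *x1, double *x2)
    {
        double s = sqrt(b * b - 4.0 * a * c);
        double q = (b >= 0.0) ? -0.5 * (b + s) : -0.5 * (b - s);

        *x1 = q / a;
        *x2 = c / q;
    }

    int main(void)
    {
        double x1, x2;

        quadratic_roots(1.0, 1.0e8, 1.0, &x1, &x2);
        printf("%.17g %.17g\n", x1, x2);   /* the naive formula loses most
                                              digits of the small root */
        return 0;
    }
]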
  
>C: REAL*16 seems to be asked for fairly often.)

T:  Probably by people who don't recognize how much of a performance hit the
Intel processors will take going from REAL*10 to REAL*16.  If the Lahey/Fuji
f95 compiler gets the alignment problems fixed so that REAL(kind=8) returns to
good performance, I think this will become more evident.
 

 >T: I suspect the 96 bits must be written to a 128-bit aligned storage
 >location to minimize the performance hit.
 
>C:  Probably.  But we're not even at 64-bit aligned storage for stack
 variables (which is where spills must happen, for the most part) yet,
 and IMO code that requires FP spills, on the x86 anyway, is probably
 not going to notice the lack of alignment due to its complexity.

T:  I believe that i686-pc-linux-gnulibc1 is trying, with some success, to do
aligned spills, and that this is the reason why -O2 code often runs faster
than -Os code on that target, while -O2 is slower than -Os on the same code
on the targets which don't align doubles on the stack.
 
 
 >T: If someone does manage to implement this, I would like to study
 >the effect on the complex math functions of libF77, using Cody's
 >CELEFUNT test suite.  I have demonstrated already that the
 >extended double facility shows to good advantage in the double
 >complex functions.  The single complex functions already
 >accomplish what we are talking about by using double
 >declarations for all locals, and that gives them a big advantage
 >over certain vendors' libraries.
 
>C:  Right now, my impression is that the effect would be nil *unless*
 these codes are complicated enough to cause spills of temporaries
 in the first place.

T: The improvement in accuracy depends on getting extended precision results
from built-in math functions, so it would require a math-inline option as well
as the 80-bit register spills.  I don't know whether it can be done
effectively, say, by taking care to make the math-inline headers of libc6 more
reliable.
 
 
>C:  First, the main goal of my proposal is to reduce unpredictable loss
 of precision on machines like x86, where programmers should be
 aware their code will often employ extended precision (and thus might
 depend on it).
 
>C:  However, if -ffloat-store is not used, then perhaps this reduction
 would not be complete, and lead to rarer, yet even more obscure and
 hard-to-find, bugs, unless we indeed make sure that even spills of
 named variables never chop the values of those variables (which
 might be in extended precision).

T:  That might be too much to expect.  It's true that there could be
situations where adding code might cause a named variable to be spilled to its
declared precision where a simpler version used extended precision, but I
doubt it's feasible to prevent that.  I'll suggest a less ambitious goal:
that the recognition of common sub-expressions should not lead to reduced
precision:

	a = b*c + d*e
	f = d*e*g + h

If the compiler decides to treat d*e as a common sub-expression, in order to
save an operation, but then finds that this expression needs to spill, that
spill and restore should be full precision.  Otherwise, we get back to the
unpredictable situations.
 
 
         tq vm, (burley)
 
 >C: P.S. Most, if not all of this, is the result of widespread disagreement
 over what a simple type declaration like `REAL*8 A' or `double a;' really
 means.  The simple view is "it means that the variable must be capable
 of holding the specified precision", but so many people really expect
 it to mean so much more, in terms of whether operations on the variable
 may, might, or must involve more precision, etc.  And, since the
 predominant languages give those people no straightforward way to express
 what they *do* really want, how surprising is it that they "overload" the
 "simple" view of what a type definition really means?
  >>

T: This is getting off-topic.  I might think that f90 declarations like

	REAL(selected_real_kind(15)) :: a
	REAL(selected_real_kind(18)) :: b

could allow the programmer to express intent in more detail while retaining
portability, but I don't think any existing compilers implement this in a
useful way.

^ permalink raw reply	[flat|nested] 38+ messages in thread

end of thread, other threads:[~1998-12-22 13:30 UTC | newest]

Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1998-12-15  0:05 FLOATING-POINT CONSISTENCY, -FFLOAT-STORE, AND X86 N8TM
1998-12-15 10:01 ` Joe Buck
  -- strict thread matches above, loose matches on Subject: below --
1998-12-22 13:30 Toon Moene
1998-12-22 11:07 John Wehle
1998-12-21 23:30 N8TM
1998-12-19 15:17 Geert Bosch
1998-12-20  8:09 ` Toon Moene
1998-12-22  4:17 ` Dave Love
1998-12-19 14:26 N8TM
1998-12-19 14:23 N8TM
1998-12-20 13:51 ` Marc Lehmann
1998-12-20 13:52   ` Marc Lehmann
1998-12-19 13:00 N8TM
1998-12-19  9:05 N8TM
1998-12-19 12:39 ` Toon Moene
1998-12-19 14:42   ` Dave Love
1998-12-18 23:07 N8TM
1998-12-19 13:39 ` Marc Lehmann
1998-12-18 21:58 N8TM
1998-12-18 22:36 ` Richard Henderson
1998-12-19 13:41   ` Marc Lehmann
1998-12-17  1:43 N8TM
1998-12-17 12:35 ` Marc Lehmann
1998-12-18 12:14   ` Dave Love
1998-12-18 14:25     ` Gerald Pfeifer
1998-12-19 13:50       ` Dave Love
1998-12-18 18:37     ` Marc Lehmann
1998-12-19 14:03       ` Dave Love
1998-12-16  6:10 N8TM
1998-12-13  6:19 Stephen L Moshier
1998-12-13 10:49 ` Craig Burley
1998-12-13 15:18   ` Stephen L Moshier
1998-12-14  8:49     ` Craig Burley
1998-12-14  9:25       ` Joe Buck
1998-12-14 14:30         ` Edward Jason Riedy
1998-12-15  0:04         ` Craig Burley
1998-12-03  6:34 N8TM
1998-12-04 15:23 ` Craig Burley
