From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jeffrey A Law
To: jh@oobleck.tn.cornell.edu
Cc: egcs@cygnus.com
Subject: Re: build report
Date: Sun, 08 Feb 1998 13:45:00 -0000
Message-id: <12198.886971718@hurl.cygnus.com>
References: <199802061743.MAA28101@oobleck.tn.cornell.edu>
X-SW-Source: 1998-02/msg00320.html

In message <199802061743.MAA28101@oobleck.tn.cornell.edu> you write:

> True.  But I missed it the first time around, and so will others.  It
> would be good to put a link to this before "configure" in index.html
> (where you point people from the main README).  I'd also note the need
> for a new dejagnu and possibly a new texinfo.  In general, make
> dependencies that differ from released GNU systems *prominent* in any
> instructions.

No matter where we put it, folks are going to miss it.  IMHO the
target-specific stuff belongs *after* the other stuff, since it's not
likely to make much sense if you don't have some clue about the overall
configure and build procedure.

> Two docs bugs: First, texinfo.texi gives 16 errors on each of its 3
> runs in "make dvi".  Second, the command line to format g77.texi
> includes a -I, which isn't understood by the texinfo-3.9 installed on
> this Red Hat 5.0 box.  A pointer to a later version of TeXinfo, or use
> of the version included in the egcs release, might fix these if they're
> not actual bugs.

Hmmm, we haven't run "make dvi"...  Sigh.  Can you find out why it
didn't use the texinfo in the distribution?  The whole point of
including it is to prevent this kind of problem.

> Thanks.  Perhaps put in a link from the included docs?  I see that
> Pentium and PPro stuff is mentioned there, but see nothing in the man
> page about turning them on.  AH!  It's in the info but not in the man
> page.  Updating the man page would be good.

Sigh.  Yes, this is an age-old problem with GNU tools -- nobody updates
the man pages because the "info" documentation is the official stuff.

> Gcc's default target architecture for the x86 series seems broken.
> Why is the default to build for the *oldest* machine, rather than the
> *user's* machine?

It's been our experience that generating lowest-common-denominator (LCD)
code tends to be a better default.  Most folks who care about
performance will take the time to find the additional options that tune
optimization for their particular chip.  Either way, there's going to be
a group of folks who think the default is wrong :-)

> Finally, I tested the different -m options for the x86 family on a
> numerical model.  Results are not what I'd expect:
>
>                   with Haifa             no Haifa
> -m              t1     t2    t3       t1     t2    t3
> 386          11.70  18.19  8.49    11.68  18.21  8.55
> 486          11.65  18.18  8.47    11.74  18.17  8.50
> pentium      11.69  18.12  8.48    11.75  18.22  8.50
> pentiumpro   11.72  18.20  8.52    11.69  18.16  8.56
> [ ... ]
> I'm on a PII, so I'd expect the last times to be the shortest by a
> substantial margin.  Instead, the differences are significant only in
> a statistical sense, there's no consistent pattern to which runs
> fastest, and often the code supposedly generated for this CPU runs the
> slowest!  Wouldn't you expect to see a substantial improvement between
> code generated for a 386 and code generated for a 686, when you run on
> a 686?

In theory, yes, but I don't think the code generator has been well tuned
for those machines yet.  In particular, the scheduler hasn't been tuned,
so we probably aren't getting any of the benefits of their exposed
pipeline architecture.

jeff
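
For concreteness, here is a minimal sketch of the kind of command lines
the per-CPU options above imply.  It assumes an egcs-era gcc on x86 and a
hypothetical source file model.c; the -m386/-m486/-mpentium/-mpentiumpro
flags are the ones benchmarked in the quoted message, while the -mcpu=
spelling is a later form that older compilers may not accept:

    # Default: lowest-common-denominator i386 code, runs on any x86.
    gcc -O2 -o model model.c

    # The -m options from the table above; they change scheduling/tuning
    # for the named chip.
    gcc -O2 -m486       -o model model.c
    gcc -O2 -mpentiumpro -o model model.c

    # If the installed compiler accepts it, -mcpu= tunes for the named
    # chip while the generated code still runs on older CPUs.
    gcc -O2 -mcpu=pentiumpro -o model model.c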