public inbox for gcc-help@gcc.gnu.org
* Efficient linking
@ 2003-09-11 19:59 Starling
  2003-09-11 20:03 ` Ian Lance Taylor
  0 siblings, 1 reply; 4+ messages in thread
From: Starling @ 2003-09-11 19:59 UTC (permalink / raw)
  To: gcc-help

On the subject of incremental linking I did find something saying you
can go
ld -r -o piece1.o A.o B.o C.o ...
ld -r -o piece2.o D.o E.o F.o ...
ld -o all main.o piece1.o piece2.o
And that way you don't have to relink D, E, and F when A, B or C
changes.  However, wouldn't that generate a large object file
(pieceN.o)?  How is linking piece1.o and piece2.o more efficient than
linking all of the lesser object files at once?  Is it more efficient,
or does the linker have to piece together the files just as if they
were separate?


Starling

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: Efficient linking
  2003-09-11 19:59 Efficient linking Starling
@ 2003-09-11 20:03 ` Ian Lance Taylor
  2003-09-11 22:06   ` Starling
  0 siblings, 1 reply; 4+ messages in thread
From: Ian Lance Taylor @ 2003-09-11 20:03 UTC (permalink / raw)
  To: Starling; +Cc: gcc-help

Starling <wassdamo@pacbell.net> writes:

> On the subject of incremental linking I did find something saying you
> can go
> ld -r -o piece1.o A.o B.o C.o ...
> ld -r -o piece2.o D.o E.o F.o ...
> ld -o all main.o piece1.o piece2.o
> And that way you don't have to relink D, E, and F when A, B or C
> changes.  However, wouldn't that generate a large object file
> (pieceN.o)?  How is linking piece1.o and piece2.o more efficient than
> linking all of the lesser object files at once?  Is it more efficient,
> or does the linker have to piece together the files just as if they
> were separate?

It's a little bit more efficient, because the linker doesn't have to
open as many files, and because it can read the information in bigger
chunks--that is, all the relocation information will be gathered
together in pieceN.o and can be processed at once rather than being
processed in several different parts.  That may sound completely
trivial, but that kind of thing is what the linker spends a lot of
time doing.

But it's not a lot more efficient.

Ian


* Re: Efficient linking
  2003-09-11 20:03 ` Ian Lance Taylor
@ 2003-09-11 22:06   ` Starling
  2003-09-12 12:42     ` Eljay Love-Jensen
  0 siblings, 1 reply; 4+ messages in thread
From: Starling @ 2003-09-11 22:06 UTC (permalink / raw)
  To: gcc-help

Ian Lance Taylor <ian@wasabisystems.com> writes:

> But it's not a lot more efficient.

What strategy would you use for keeping a large amount of code up to
date with maximum efficiency?  There's that library shortcut I
mentioned, and of course writing separate executables for separate
tasks instead of one huge monolithic blob.  Should I even be worried
about link-level efficiency?  I notice there are a lot of ways to
maximize the efficiency of compiling by only recompiling what is
necessary.
Perhaps if the linker can't deal with it the program is just too big
to be feasible?


Starling


* Re: Efficient linking
  2003-09-11 22:06   ` Starling
@ 2003-09-12 12:42     ` Eljay Love-Jensen
  0 siblings, 0 replies; 4+ messages in thread
From: Eljay Love-Jensen @ 2003-09-12 12:42 UTC (permalink / raw)
  To: Starling, gcc-help

Hi Starling,

I think you should read this book, "Large-Scale C++ Software Design" by Lakos.

And you should also be concerned about having good makefiles, as per the essay "Recursive Make Considered Harmful" by Peter Miller.  URL: <http://www.tip.net.au/~millerp/rmch/recu-make-cons-harm.html>.

My own rules of thumb:
+ Every header file should include ONLY the header files that it needs.
+ Every source file should include ONLY the header files that it needs.
+ No header/source file should depend upon a needed header file being included by another header file.  After all, someone may change a header's included header to a forward declaration (which I also shy away from).
+ Every header file should be "complete", such that it is compilable stand-alone.
+ No (or very few) "uber header files" that include all other header files.
+ No reliance upon PCH to include everything and the kitchen sink.  Very bad practice, in my opinion.
+ Use generated dependencies in your makefiles.
+ Use make fragments, which are all make-included into a whole.  (Assuming you use GNU make.)
+ No recursive builds.  (Presuming the individual builds otherwise have interdependencies.)
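
A minimal sketch of the last three rules, assuming GNU make and GCC-style dependency generation (the module names lib and app are made up for illustration):

```make
# Top-level Makefile: make-include per-directory fragments, no recursion.
# Each module.mk appends its objects to OBJS and defines its own targets.
MODULES := lib app
include $(patsubst %,%/module.mk,$(MODULES))

# Generated dependencies: -MMD writes a .d file alongside each object,
# -MP adds phony targets so a deleted header doesn't break the build.
%.o: %.c
	$(CC) -MMD -MP -c $< -o $@

# Pull in whatever dependency files exist; silently skip on a clean tree.
-include $(OBJS:.o=.d)
```

Because the whole dependency graph is visible to a single make invocation, "make -j" can safely schedule independent compiles in parallel.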

These rules mean that you won't "build too much" (a waste of time), nor "build too little" (untrustworthy builds, unless done from a clean slate).  And you should be able to take advantage of multiple CPUs for concurrent builds.

--Eljay


