* Re: Slow compilation with many files
From: J.C. Pizarro @ 2008-04-20 2:53 UTC (permalink / raw)
To: gcc-help, Zed, Brian Dessent
Zed <perrose@kth.se> wrote:
> I've tried to keep my C++ source files small in my current project. Compiling
> my approx. 20 files takes a very long time. I created a new source file that
> just includes all my original ones with #include statements, and compiled
> it. This method was much quicker: compiling all the files included in one
> takes approx. double the time of compiling just one of the small source
> files. So if I make a change to some header that is included by many of
> the source files, the method that includes all the files in one is much
> faster.
>
> It seems there is a lot of compilation-time overhead if the code is compiled
> in many small parts - this is without taking linking into account. Are there
> any settings or some trick that can reduce this overhead, other than my ugly
> inclusion method?
>
> This is the command executed per source file, created by autoconf:
>
> g++ -DHAVE_CONFIG_H -I. -I.. -DPACKAGE_LOCALE_DIR=\""/usr/local//locale"\"
> -DPACKAGE_SRC_DIR=\""."\" -DPACKAGE_DATA_DIR=\""/usr/local/share"\"
> -I/usr/include/opencv -pthread -I/usr/include/glib-2.0
> -I/usr/lib/glib-2.0/include -I/usr/include/gtk-2.0
> -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo
> -I/usr/include/pango-1.0 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include
> -I/usr/include/freetype2 -I/usr/include/libpng12 -I/usr/include/pixman-1
> -I/usr/local/include/ltilib -D_GNU_SOURCE -fpic -D_DEBUG -g -O2 -MT
> pick-vertex-manipulator.o -MD -MP -MF .deps/pick-vertex-manipulator.Tpo -c
> -o pick-vertex-manipulator.o pick-vertex-manipulator.cpp
>
> Which, after removing the unimportant stuff, is essentially:
>
> g++ -DHAVE_CONFIG_H -D_GNU_SOURCE -fpic -D_DEBUG -g -O2 -MT
> pick-vertex-manipulator.o -MD -MP -MF .deps/pick-vertex-manipulator.Tpo -c
> -o pick-vertex-manipulator.o pick-vertex-manipulator.cpp
>
> Thank you!
I think that GCC lacks memoization
(http://en.wikipedia.org/wiki/Memoization).
With memoization and a gcc-kernel daemon, it could compile bigger
applications faster.
* Slow compilation with many files
From: Zed @ 2008-04-15 4:17 UTC (permalink / raw)
To: gcc-help
I've tried to keep my C++ source files small in my current project. Compiling
my approx. 20 files takes a very long time. I created a new source file that
just includes all my original ones with #include statements, and compiled it.
This method was much quicker: compiling all the files included in one takes
approx. double the time of compiling just one of the small source files. So
if I make a change to some header that is included by many of the source
files, the method that includes all the files in one is much faster.
It seems there is a lot of compilation-time overhead if the code is compiled
in many small parts - this is without taking linking into account. Are there
any settings or some trick that can reduce this overhead, other than my ugly
inclusion method?
This is the command executed per source file, created by autoconf:
g++ -DHAVE_CONFIG_H -I. -I.. -DPACKAGE_LOCALE_DIR=\""/usr/local//locale"\"
-DPACKAGE_SRC_DIR=\""."\" -DPACKAGE_DATA_DIR=\""/usr/local/share"\"
-I/usr/include/opencv -pthread -I/usr/include/glib-2.0
-I/usr/lib/glib-2.0/include -I/usr/include/gtk-2.0
-I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo
-I/usr/include/pango-1.0 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include
-I/usr/include/freetype2 -I/usr/include/libpng12 -I/usr/include/pixman-1
-I/usr/local/include/ltilib -D_GNU_SOURCE -fpic -D_DEBUG -g -O2 -MT
pick-vertex-manipulator.o -MD -MP -MF .deps/pick-vertex-manipulator.Tpo -c
-o pick-vertex-manipulator.o pick-vertex-manipulator.cpp
Which, after removing the unimportant stuff, is essentially:
g++ -DHAVE_CONFIG_H -D_GNU_SOURCE -fpic -D_DEBUG -g -O2 -MT
pick-vertex-manipulator.o -MD -MP -MF .deps/pick-vertex-manipulator.Tpo -c
-o pick-vertex-manipulator.o pick-vertex-manipulator.cpp
Thank you!
--
View this message in context: http://www.nabble.com/Slow-compilation-with-many-files-tp16691342p16691342.html
Sent from the gcc - Help mailing list archive at Nabble.com.
* Re: Slow compilation with many files
From: Brian Dessent @ 2008-04-15 10:41 UTC (permalink / raw)
To: Zed; +Cc: gcc-help
Zed wrote:
> I've tried to keep my C++ source files small in my current project. Compiling
> my approx. 20 files takes a very long time. I created a new source file that
> just includes all my original ones with #include statements, and compiled
> it. This method was much quicker: compiling all the files included in one
> takes approx. double the time of compiling just one of the small source
> files. So if I make a change to some header that is included by many of
> the source files, the method that includes all the files in one is much
> faster.
This is essentially what the -combine switch does, except that switch
only supports C.
> It seems there is a lot of compilation-time overhead if the code is compiled
> in many small parts - this is without taking linking into account. Are there
> any settings or some trick that can reduce this overhead, other than my ugly
> inclusion method?
Yes, there's the startup/cleanup overhead of the compiler itself, plus
the overhead of parsing all the various headers 20 times instead of
once. You can try using a precompiled header to reduce the cost of the
latter, see
<http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html>. Note that
this isn't something you can just switch on; you have to think a little
about how to implement it, but the manual gives some good suggestions
on how to do that.
Brian
* Re: Slow compilation with many files
From: Per Rosengren @ 2008-04-18 10:46 UTC (permalink / raw)
To: gcc-help
Thank you! Precompiled headers are the solution.
My only problem now is how to get my build system to use them. I am
currently using autoconf and automake. If I run "gcc -c myheader.hh", I
get a myheader.hh.gch file. I have tried adding the headers to
<target>_SOURCES in Makefile.am, but the resulting Makefile doesn't
compile them.
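One workaround people use (a sketch only - automake has no native PCH
support; "myheader.hh" is from this thread, while the exact rule and flag
variables are my assumption) is a hand-written rule in Makefile.am:

```makefile
# Makefile.am fragment (sketch): build the .gch before the objects,
# and remove it on "make clean".
BUILT_SOURCES = myheader.hh.gch
CLEANFILES = myheader.hh.gch
myheader.hh.gch: myheader.hh
	$(CXX) $(DEFAULT_INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \
	    $(AM_CXXFLAGS) $(CXXFLAGS) -x c++-header $< -o $@
```

The flag variables mirror what automake itself passes when compiling .cc
files, so the PCH is built with the same options as the objects that use it.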
I have also looked into switching to CMake, but it doesn't seem to
support precompiled headers without adding some hack script for an
ADD_PRECOMPILED_HEADERS macro.
I use KDevelop as my programming environment, so if there is anything
there that enables the use of precompiled headers, it would be very
convenient.
This is probably the wrong forum for discussing build systems, but maybe
you know whether and how they support this gcc feature anyway.
Per
Brian Dessent wrote:
> Zed wrote:
>
>> I've tried to keep my C++ source files small in my current project. Compiling
>> my approx. 20 files takes a very long time. I created a new source file that
>> just includes all my original ones with #include statements, and compiled
>> it. This method was much quicker: compiling all the files included in one
>> takes approx. double the time of compiling just one of the small source
>> files. So if I make a change to some header that is included by many of
>> the source files, the method that includes all the files in one is much
>> faster.
>
> This is essentially what the -combine switch does, except that switch
> only supports C.
>
>> It seems there is a lot of compilation-time overhead if the code is compiled
>> in many small parts - this is without taking linking into account. Are there
>> any settings or some trick that can reduce this overhead, other than my ugly
>> inclusion method?
>
> Yes, there's the startup/cleanup overhead of the compiler itself, plus
> the overhead of parsing all the various headers 20 times instead of
> once. You can try using a precompiled header to reduce the cost of the
> latter, see
> <http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html>. Note that
> this isn't something you can just switch on; you have to think a little
> about how to implement it, but the manual gives some good suggestions
> on how to do that.
>
> Brian