public inbox for gcc-help@gcc.gnu.org
 help / color / mirror / Atom feed
* reduce compilation times?
@ 2007-11-27 10:04 mahmoodn
  2007-11-27 11:11 ` Andrew Haley
  2007-11-27 13:48 ` John Love-Jensen
  0 siblings, 2 replies; 69+ messages in thread
From: mahmoodn @ 2007-11-27 10:04 UTC (permalink / raw)
  To: gcc-help


Is it possible to reduce compilation time with GCC?
-- 
View this message in context: http://www.nabble.com/reduce-compilation-times--tf4880765.html#a13967871
Sent from the gcc - Help mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 10:04 reduce compilation times? mahmoodn
@ 2007-11-27 11:11 ` Andrew Haley
  2007-11-27 11:15   ` mahmoodn
                     ` (2 more replies)
  2007-11-27 13:48 ` John Love-Jensen
  1 sibling, 3 replies; 69+ messages in thread
From: Andrew Haley @ 2007-11-27 11:11 UTC (permalink / raw)
  To: mahmoodn; +Cc: gcc-help

mahmoodn writes:
 > 
 > Is it possible to reduce compilation time with GCC?

Yes.  distcc will help you, as will "make -j".  ccache is also useful.
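
[Archive note: a runnable sketch of these suggestions. The toy Makefile under
/tmp and the job count are illustrative; the commented-out lines assume ccache
and distcc are installed.]

```shell
# Toy Makefile with two independent targets, so -j has something to overlap.
mkdir -p /tmp/jdemo && cd /tmp/jdemo
printf 'all: a b\na:\n\ttouch a\nb:\n\ttouch b\n' > Makefile

JOBS=$(nproc 2>/dev/null || echo 2)   # one job per CPU core is a common default
make -j"$JOBS"                        # run independent build rules in parallel

# make CC="ccache gcc"                # cache object files for fast rebuilds
# make -j"$JOBS" CC="distcc gcc"      # farm compile jobs out over the network
```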

Andrew.

-- 
Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, UK
Registered in England and Wales No. 3798903

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 11:11 ` Andrew Haley
@ 2007-11-27 11:15   ` mahmoodn
  2007-11-27 11:30     ` Andrew Haley
  2007-11-27 15:48   ` Sven Eschenberg
  2007-12-01 12:20   ` mahmoodn
  2 siblings, 1 reply; 69+ messages in thread
From: mahmoodn @ 2007-11-27 11:15 UTC (permalink / raw)
  To: gcc-help


"make -j" does not work. According to "make --help":

-j [N], --jobs[=N]          Allow N jobs at once; infinite jobs with no arg.

I don't think it is suitable for my work. I will move on to ccache, but I
have not yet found a tutorial for it.




Andrew Haley wrote:
> 
> mahmoodn writes:
>  > 
>  > Is it possible to reduce compilation time with GCC?
> 
> Yes.  distcc will help you, as will "make -j".  ccache is also useful.
> 
> Andrew.
> 
> -- 
> Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor,
> Berkshire, SL4 1TE, UK
> Registered in England and Wales No. 3798903
> 
> 

-- 
View this message in context: http://www.nabble.com/reduce-compilation-times--tf4880765.html#a13968885
Sent from the gcc - Help mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 11:15   ` mahmoodn
@ 2007-11-27 11:30     ` Andrew Haley
  2007-11-27 12:20       ` mahmoodn
  0 siblings, 1 reply; 69+ messages in thread
From: Andrew Haley @ 2007-11-27 11:30 UTC (permalink / raw)
  To: mahmoodn; +Cc: gcc-help

mahmoodn writes:
 > 
 > "make -j" does not work. according to "make --help":
 > 
 > -j [N], --jobs[=N]          Allow N jobs at once; infinite jobs with no arg.
 > 
 > I don't think it is suitable for my work. I will move on to ccache. 

It works for me.  What's wrong with it?

 > But yet I did not find any tutorial.

http://ccache.samba.org/
 
Andrew.


 > Andrew Haley wrote:
 > > 
 > > mahmoodn writes:
 > >  > 
 > >  > Is it possible to reduce compilation time with GCC?
 > > 
 > > Yes.  distcc will help you, as will "make -j".  ccache is also useful.

-- 
Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, UK
Registered in England and Wales No. 3798903

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 11:30     ` Andrew Haley
@ 2007-11-27 12:20       ` mahmoodn
  2007-11-27 12:25         ` John Love-Jensen
  2007-11-27 14:07         ` Andrew Haley
  0 siblings, 2 replies; 69+ messages in thread
From: mahmoodn @ 2007-11-27 12:20 UTC (permalink / raw)
  To: gcc-help


I mean (I think):
"Allow N jobs at once"   !=    "reduce compile time"     

I did this to see the effect of "make -j":

]# rm *.o
]# make
....(10 minutes)

Then I edited one of my files (only one statement), and then:
]# make -j
...(still 10 minutes)
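
[Archive note: a toy Makefile (targets are hypothetical) shows what -j
actually buys. It overlaps independent rules; after editing a single file
there is only one compile left to run, so there is nothing to overlap.]

```shell
# Two independent one-second jobs: a serial make takes about 2s,
# while "make -j2" takes about 1s, because the two rules run concurrently.
printf 'all: x y\nx:\n\tsleep 1; echo built x\ny:\n\tsleep 1; echo built y\n' > /tmp/pdemo.mk
time make -f /tmp/pdemo.mk -j2
```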




Andrew Haley wrote:
> 
> mahmoodn writes:
>  > 
>  > "make -j" does not work. according to "make --help":
>  > 
>  > -j [N], --jobs[=N]          Allow N jobs at once; infinite jobs with no
> arg.
>  > 
>  > I don't think it is suitable for my work. I will move on to ccache. 
> 
> It works for me.  What's wrong with it?
> 
>  > But yet I did not find any tutorial.
> 
> http://ccache.samba.org/
>  
> Andrew.
> 
> 
>  > Andrew Haley wrote:
>  > > 
>  > > mahmoodn writes:
>  > >  > 
>  > >  > Is it possible to reduce compilation time with GCC?
>  > > 
>  > > Yes.  distcc will help you, as will "make -j".  ccache is also
> useful.
> 
> -- 
> Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor,
> Berkshire, SL4 1TE, UK
> Registered in England and Wales No. 3798903
> 
> 

-- 
View this message in context: http://www.nabble.com/reduce-compilation-times--tf4880765.html#a13969133
Sent from the gcc - Help mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 12:20       ` mahmoodn
@ 2007-11-27 12:25         ` John Love-Jensen
  2007-11-27 15:27           ` Tim Prince
  2007-11-27 14:07         ` Andrew Haley
  1 sibling, 1 reply; 69+ messages in thread
From: John Love-Jensen @ 2007-11-27 12:25 UTC (permalink / raw)
  To: mahmoodn, MSX to GCC

Hi mahmoodn,

> I mean (I think):
> "Allow N jobs at once"   !=    "reduce compile time"

Reduces my project's overall compile time on my machine, by a factor of 4.

--Eljay

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 10:04 reduce compilation times? mahmoodn
  2007-11-27 11:11 ` Andrew Haley
@ 2007-11-27 13:48 ` John Love-Jensen
  1 sibling, 0 replies; 69+ messages in thread
From: John Love-Jensen @ 2007-11-27 13:48 UTC (permalink / raw)
  To: mahmoodn, MSX to GCC

Hi mahmoodn,

> Is it possible to reduce compilation time with GCC?

Yes.

Besides the approaches that Andrew pointed out, it also helps to have good
source code hygiene: low coupling and high cohesion, #include statements
that pull in only what the translation unit needs, and reliance on
header-header files.

Details of this -- and a lot more good stuff -- are in this book:

Large-Scale C++ Software Design
by John Lakos
http://www.amazon.com/dp/0201633620/

HTH,
--Eljay

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 12:20       ` mahmoodn
  2007-11-27 12:25         ` John Love-Jensen
@ 2007-11-27 14:07         ` Andrew Haley
  2007-11-28  9:01           ` mahmoodn
  1 sibling, 1 reply; 69+ messages in thread
From: Andrew Haley @ 2007-11-27 14:07 UTC (permalink / raw)
  To: mahmoodn; +Cc: gcc-help

mahmoodn writes:
 > 
 > I mean (I think):
 > "Allow N jobs at once"   !=    "reduce compile time"     

Like I said, this works for me, and for many others too.

 > I did this to see the effect of "make -j":
 > 
 > ]# rm *.o
 > ]# make
 > ....( 10 minute )
 > 
 > then I edit one of my files (only one statement), and then:
 > ]# make -j
 > ...( still 10 minute )

Perhaps you should have explained your problem better.  There's
nothing we can do to make a single compilation of a single file go
faster.  However, that's not the usual problem.

I wonder why your compilation is taking so long.  It might be a bug in
gcc.  Perhaps we could look at a test case.

Andrew.

-- 
Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, UK
Registered in England and Wales No. 3798903

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 12:25         ` John Love-Jensen
@ 2007-11-27 15:27           ` Tim Prince
  0 siblings, 0 replies; 69+ messages in thread
From: Tim Prince @ 2007-11-27 15:27 UTC (permalink / raw)
  To: John Love-Jensen; +Cc: mahmoodn, MSX to GCC

John Love-Jensen wrote:
> Hi mahmoodn,
> 
>> I mean (I think):
>> "Allow N jobs at once"   !=    "reduce compile time"
> 
> Reduces my project's overall compile time on my machine, by a factor of 4.
> 
Might do that on a quad core.  A dual core with HyperThreading enabled
approaches a factor of 3.  The OP wants magic without any qualification as
to platform.
Selective optimization also matters: use -O3 only where it is useful.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 11:11 ` Andrew Haley
  2007-11-27 11:15   ` mahmoodn
@ 2007-11-27 15:48   ` Sven Eschenberg
  2007-11-27 16:27     ` Andrew Haley
  2007-12-01 12:20   ` mahmoodn
  2 siblings, 1 reply; 69+ messages in thread
From: Sven Eschenberg @ 2007-11-27 15:48 UTC (permalink / raw)
  To: gcc-help

Aside from using -j on HT/multicore/multi-CPU systems and ccache, it
might help to put the source code into a ramdisk for compilation (no
ccache needed then), or at least the build directory, for all the
temporary stuff.
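
[Archive note: a configuration sketch of that setup. The mount point, size,
and project path are illustrative assumptions, and mounting a tmpfs needs
root.]

```shell
# Build inside a tmpfs so sources, objects, and temporaries stay in RAM.
sudo mkdir -p /mnt/rambuild
sudo mount -t tmpfs -o size=1g tmpfs /mnt/rambuild
cp -a ~/project /mnt/rambuild/
cd /mnt/rambuild/project && make
```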

-Sven


Andrew Haley schrieb:
> mahmoodn writes:
>  > 
>  > Is it possible to reduce compilation time with GCC?
>
> Yes.  distcc will help you, as will "make -j".  ccache is also useful.
>
> Andrew.
>
>   

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 15:48   ` Sven Eschenberg
@ 2007-11-27 16:27     ` Andrew Haley
  2007-11-27 18:51       ` Sven Eschenberg
  0 siblings, 1 reply; 69+ messages in thread
From: Andrew Haley @ 2007-11-27 16:27 UTC (permalink / raw)
  To: Sven Eschenberg; +Cc: gcc-help

Sven Eschenberg writes:

 > Aside from using -j on HT/Mulitcore/Multi-CPU Systems and ccache it
 > might help to put the sourcecode into a ramdisk for compilation (no
 > ccache needd then), or at least the build directory, for all the
 > temporary stuff.

I don't think that ccache does what you think it does.  As long as you
have plenty of RAM "make -j2" tends to speed things up even on a
uniprocessor, but not by a huge amount.

Andrew.

-- 
Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, UK
Registered in England and Wales No. 3798903

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 16:27     ` Andrew Haley
@ 2007-11-27 18:51       ` Sven Eschenberg
  2007-11-27 19:21         ` Andrew Haley
  0 siblings, 1 reply; 69+ messages in thread
From: Sven Eschenberg @ 2007-11-27 18:51 UTC (permalink / raw)
  To: Andrew Haley; +Cc: gcc-help

I am not sure about ccache, but I thought it does some file and
preprocessing caching (I am not exactly sure how it works; I thought it
gets called instead of the preprocessor, or at least before the PP).

Anyway, what I meant: compiling a package like Firefox, glibc, etc. with
ccache gives you some speed increase, but it is small compared to
uncompressing the source directly into a RAM disk and building everything
in there.

Combining both didn't seem to give any additional reproducible benefit, but
I must admit I never tried to put ccache's data into a ramdisk too, since I
don't have enough RAM for that on sufficiently big packages.
If -j2 speeds things up, it's mostly because of the kernel's scheduling, I
assume.

The only box I have left that is a uniprocessor without HT or multiple
cores didn't really compile faster with -j2.  Then again, it is a server
that carries a certain minor load all the time anyway; that's why I assume
-j2 on a uniprocessor only benefits from scheduling strategies.

Regards

-Sven

P.S.: Of course, having properly factored code with reasonable file sizes
is the first step; it makes the whole project more structured and
manageable (IMHO).



Andrew Haley schrieb:
> Sven Eschenberg writes:
>
>  > Aside from using -j on HT/Mulitcore/Multi-CPU Systems and ccache it
>  > might help to put the sourcecode into a ramdisk for compilation (no
>  > ccache needd then), or at least the build directory, for all the
>  > temporary stuff.
>
> I don't think that ccache does what you think it does.  As long as you
> have plenty of RAM "make -j2" tends to speed things up even on a
> uniprocessor, but not by a huge amount.
>
> Andrew.
>
>   

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 18:51       ` Sven Eschenberg
@ 2007-11-27 19:21         ` Andrew Haley
  2007-11-27 20:43           ` Sven Eschenberg
  0 siblings, 1 reply; 69+ messages in thread
From: Andrew Haley @ 2007-11-27 19:21 UTC (permalink / raw)
  To: Sven Eschenberg; +Cc: gcc-help

Sven Eschenberg writes:

 > I am not sure about ccache, but I thought it does some file and
 > preprocessing caching (not exactly sure, how it works, I thought,
 > it kinda gets called instead of the preprocessor or at least before
 > the PP).

That's right: if the file has been compiled before, ccache bypasses
compilation entirely.
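
[Archive note: the two usual ways to hook ccache in, as a configuration
sketch. The masquerade directory is distro-dependent, and both lines are
assumptions, not part of the original mail.]

```shell
# Option 1: per-build -- route make's compiler variables through ccache.
make CC="ccache gcc" CXX="ccache g++"

# Option 2: masquerade -- put ccache's symlink directory first in PATH so
# plain "gcc" calls are cached transparently (path varies by distribution).
export PATH="/usr/lib/ccache:$PATH"
ccache -s   # show cache hit/miss statistics
```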

 > Anyway, what I meant: Compiling a package like firefox, glibc
 > etc. with ccache gives you some speed increase, but it is small
 > compared to uncompressing the source directly into a ram disk and
 > build everything in there.

That sounds pretty surprising to me.  How is a RAM disk going to be so
much faster than, say, /tmp?  I suppose there's no overhead of writing
the files back to disk after updating them, but that's usually done in
the background anyway.  "make -jN" is usually enough to swallow up any
rotational latency.  But when I'm compiling, all CPU cores are usually
at 90% plus; the compiler certainly isn't waiting for disk.  That RAM
disk is going to get me 10% more at best.

 > Combining both didn't seem to give additional reproduceable
 > benefit, but I gotta admit, never tried to put ccache's data into a
 > ramdisk too, since I don't have enough ram for that on sufficently
 > big enough packages.  If -j2 speeds things, it's mostly because of
 > the kernel's scheduling, I assume.
 > 
 > The only box I got left, which is Uniprocessore and doesn't have
 > HT/Multiple cores didn't really compile faster with -j2 - Then
 > again it is a server, which has a certain minor load anyway all the
 > time, that's why I assume -j2 on Uniprocessor only benefits from
 > scheduling strategies.

The main purpose of -j2 on a uniprocessor is to absorb any disk
latency: when one process blocks because a file is not ready, another
process has something useful to do.  It's not a huge win when building
gcc, but it is significant.  It is very useful when building on an
NFS-mounted drive.

Andrew.

-- 
Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, UK
Registered in England and Wales No. 3798903

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 19:21         ` Andrew Haley
@ 2007-11-27 20:43           ` Sven Eschenberg
  0 siblings, 0 replies; 69+ messages in thread
From: Sven Eschenberg @ 2007-11-27 20:43 UTC (permalink / raw)
  To: Andrew Haley; +Cc: gcc-help

Andrew Haley schrieb:
> Sven Eschenberg writes:
>
>  > Anyway, what I meant: Compiling a package like firefox, glibc
>  > etc. with ccache gives you some speed increase, but it is small
>  > compared to uncompressing the source directly into a ram disk and
>  > build everything in there.
>
> That sounds pretty surprising to me.  How is a RAM disk going to be so
> much faster than. say, /tmp?  I suppose there's no overhead of writing
> the files back to disk after updating them, but thet's usually done in
> the background anyway.  "make -jN" is usually enough to swallow up any
> rotational latency.  But when I'm compling, all CPU cores are usually
> at 90% plus; the compiler certainly isn't waiting for disk.  That RAM
> disk is going to get me 10% more at best.
>   
I assume this all depends on the usage scenario.  If /tmp is on disk
(which it often is, because it can grow pretty big), you save quite some
I/O; ccache needs to do disk I/O too, to access its cache data.  I guess
the major effect is bypassing the filesystem's caching strategies: read
the source package (i.e. 50 MB = 1-2 sec), and after that all I/O is in
RAM; every source file is read from RAM, objects are put into RAM and
reread from there, etc.  Though disk I/O can use DMA, it still needs to
wait during reading if the data is not yet there (which could be avoided
with some read-ahead strategies).

As I said, the combination of ccache and keeping both the ccache data and
the build in RAM might certainly be the fastest way.

Of course, if /tmp and your ccache data are on a RAID 5 with a controller
that has its own 1-2 GB of RAM, things look different from a notebook that
only carries a 5400 RPM drive, I assume.
The question is whether ccache can read the cached preprocessed source
from disk faster than gcc (resp. cpp) can read the source from RAM and
preprocess it, which certainly depends on what the sources look like
(factoring), disk I/O speed, processing speed, etc.
>  > Combining both didn't seem to give additional reproduceable
>  > benefit, but I gotta admit, never tried to put ccache's data into a
>  > ramdisk too, since I don't have enough ram for that on sufficently
>  > big enough packages.  If -j2 speeds things, it's mostly because of
>  > the kernel's scheduling, I assume.
>  > 
>  > The only box I got left, which is Uniprocessore and doesn't have
>  > HT/Multiple cores didn't really compile faster with -j2 - Then
>  > again it is a server, which has a certain minor load anyway all the
>  > time, that's why I assume -j2 on Uniprocessor only benefits from
>  > scheduling strategies.
>
> The main purpose of -j2 on a uniprocessor is to absorb any disk
> latency: when one process blocks because a file is not ready, another
> process has something useful to do.  It's not a huge win when building
> gcc, but it is significant.  It is very usefule when building on an
> NFS-mounted drive.
>
> Andrew.
>   
Ah okay, I forgot the disk IO, but this makes perfect sense ...

Regards

-Sven

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 14:07         ` Andrew Haley
@ 2007-11-28  9:01           ` mahmoodn
  2007-11-28 12:11             ` John (Eljay) Love-Jensen
  0 siblings, 1 reply; 69+ messages in thread
From: mahmoodn @ 2007-11-28  9:01 UTC (permalink / raw)
  To: gcc-help


>Like I said, this works for me, and for many others too.

I have a single-core P4, so I think -j does not make any sense. Is that right?

I use a library for my code which has lots of templates and header files.
That's why I need to reduce compile time.

I do not know what information you need, but I am ready to provide whatever
you want. I use gcc 3.3.

Thanks,



Andrew Haley wrote:
> 
> mahmoodn writes:
>  > 
>  > I mean (I think):
>  > "Allow N jobs at once"   !=    "reduce compile time"     
> 
> Like I said, this works for me, and for many others too.
> 
>  > I did this to see the effect of "make -j":
>  > 
>  > ]# rm *.o
>  > ]# make
>  > ....( 10 minute )
>  > 
>  > then I edit one of my files (only one statement), and then:
>  > ]# make -j
>  > ...( still 10 minute )
> 
> Perhaps you should have explained your problem better.  There's
> nothing we can do to make a single compilation of a single file go
> faster.  However, that's not the usual problem.
> 
> I wonder why your compilation is taking so long.  It might be a bug in
> gcc.  Perhaps we could look at a test case.
> 
> Andrew.
> 
> -- 
> Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor,
> Berkshire, SL4 1TE, UK
> Registered in England and Wales No. 3798903
> 
> 

-- 
View this message in context: http://www.nabble.com/reduce-compilation-times--tf4880765.html#a13987559
Sent from the gcc - Help mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: reduce compilation times?
  2007-11-28  9:01           ` mahmoodn
@ 2007-11-28 12:11             ` John (Eljay) Love-Jensen
  2007-11-30  9:15               ` mahmoodn
  0 siblings, 1 reply; 69+ messages in thread
From: John (Eljay) Love-Jensen @ 2007-11-28 12:11 UTC (permalink / raw)
  To: mahmoodn, gcc-help

Hi mahmoodn,

> I have single core, P4. So I think -j does not make any sense. Is it right?

If your hard drive throughput is faster than your CPU, then you are correct and it does not make any sense.

For example, if you are using a 25 MHz 68030 and a 15,000 rpm 8 GB cache Seagate drive connected through SCSI-3, the drive is probably able to completely feed the CPU.

However, if your hard drive throughput is slower than your CPU, then -j makes sense.

For example, if your CPU is a single core Pentium 4 at 3.6 GHz, and your hard drive is any ATA-connected IDE drive, then -j would help, since the CPU would have many spare cycles to burn while waiting for the hard drive to feed it, so it could be busy working on another compilation concurrently.

HTH,
--Eljay

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: reduce compilation times?
  2007-11-28 12:11             ` John (Eljay) Love-Jensen
@ 2007-11-30  9:15               ` mahmoodn
  2007-11-30 13:33                 ` mahmoodn
  0 siblings, 1 reply; 69+ messages in thread
From: mahmoodn @ 2007-11-30  9:15 UTC (permalink / raw)
  To: gcc-help


>If your hard drive throughput is faster than your CPU, then you are
>correct and it does not make any sense.

>For example, if you are using a 25 MHz 68030 and a 15,000 rpm 8 GB cache
>Seagate drive connected through SCSI-3, the drive is probably able to
>completely feed the CPU.

>However, if your hard drive throughput is slower than your CPU, then -j
>makes sense.

>For example, if your CPU is a single core Pentium 4 at 3.6 GHz, and your
>hard drive is any ATA connected IDE drive, then -j would help, since the
>CPU would have many spare cycles to burn while waiting for the hard drive
>to feed it, so it could be busy working on another compiler concurrently.

I tested with a Core 2 and -j2 worked fine... but I still do not know why
it does not work on the P4.
Anyway... thanks





John (Eljay) Love-Jensen wrote:
> 
> Hi mahmoodn,
> 
>> I have single core, P4. So I think -j does not make any sense. Is it
>> right?
> 
> If your hard drive throughput is faster than your CPU, then you are
> correct and it does not make any sense.
> 
> For example, if you are using a 25 MHz 68030 and a 15,000 rpm 8 GB cache
> Seagate drive connected through SCSI-3, the drive is probably able to
> completely feed the CPU.
> 
> However, if your hard drive throughput is slower than your CPU, then -j
> makes sense.
> 
> For example, if your CPU is a single core Pentium 4 at 3.6 GHz, and your
> hard drive is any ATA connected IDE drive, then -j would help, since the
> CPU would have many spare cycles to burn while waiting for the hard drive
> to feed it, so it could be busy working on another compiler concurrently.
> 
> HTH,
> --Eljay
> 
> 

-- 
View this message in context: http://www.nabble.com/reduce-compilation-times--tf4880765.html#a14042402
Sent from the gcc - Help mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: reduce compilation times?
  2007-11-30  9:15               ` mahmoodn
@ 2007-11-30 13:33                 ` mahmoodn
  0 siblings, 0 replies; 69+ messages in thread
From: mahmoodn @ 2007-11-30 13:33 UTC (permalink / raw)
  To: gcc-help


>If your hard drive throughput is faster than your CPU, then you are
>correct and it does not make any sense.

>For example, if you are using a 25 MHz 68030 and a 15,000 rpm 8 GB cache
>Seagate drive connected through SCSI-3, the drive is probably able to
>completely feed the CPU.

>However, if your hard drive throughput is slower than your CPU, then -j
>makes sense.

>For example, if your CPU is a single core Pentium 4 at 3.6 GHz, and your
>hard drive is any ATA connected IDE drive, then -j would help, since the
>CPU would have many spare cycles to burn while waiting for the hard drive
>to feed it, so it could be busy working on another compiler concurrently.

I tested with a Core 2 and -j2 worked fine... but I still do not know why
it does not work on the P4.
Anyway... thank you





John (Eljay) Love-Jensen wrote:
> 
> Hi mahmoodn,
> 
>> I have single core, P4. So I think -j does not make any sense. Is it
>> right?
> 
> If your hard drive throughput is faster than your CPU, then you are
> correct and it does not make any sense.
> 
> For example, if you are using a 25 MHz 68030 and a 15,000 rpm 8 GB cache
> Seagate drive connected through SCSI-3, the drive is probably able to
> completely feed the CPU.
> 
> However, if your hard drive throughput is slower than your CPU, then -j
> makes sense.
> 
> For example, if your CPU is a single core Pentium 4 at 3.6 GHz, and your
> hard drive is any ATA connected IDE drive, then -j would help, since the
> CPU would have many spare cycles to burn while waiting for the hard drive
> to feed it, so it could be busy working on another compiler concurrently.
> 
> HTH,
> --Eljay
> 
> 

-- 
View this message in context: http://www.nabble.com/reduce-compilation-times--tf4880765.html#a14042402
Sent from the gcc - Help mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 11:11 ` Andrew Haley
  2007-11-27 11:15   ` mahmoodn
  2007-11-27 15:48   ` Sven Eschenberg
@ 2007-12-01 12:20   ` mahmoodn
  2007-12-03 16:14     ` Andrew Haley
  2 siblings, 1 reply; 69+ messages in thread
From: mahmoodn @ 2007-12-01 12:20 UTC (permalink / raw)
  To: gcc-help


I also saw that it is somehow possible to exclude unnecessary header files
from compilation in the makefile, but I do not know how.

Can anyone help me?
Thanks,



Andrew Haley wrote:
> 
> mahmoodn writes:
>  > 
>  > Is it possible to reduce compilation time with GCC?
> 
> Yes.  distcc will help you, as will "make -j".  ccache is also useful.
> 
> Andrew.
> 
> -- 
> Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor,
> Berkshire, SL4 1TE, UK
> Registered in England and Wales No. 3798903
> 
> 

-- 
View this message in context: http://www.nabble.com/reduce-compilation-times--tf4880765.html#a14104659
Sent from the gcc - Help mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-12-01 12:20   ` mahmoodn
@ 2007-12-03 16:14     ` Andrew Haley
  2007-12-04 11:23       ` mahmoodn
  0 siblings, 1 reply; 69+ messages in thread
From: Andrew Haley @ 2007-12-03 16:14 UTC (permalink / raw)
  To: mahmoodn; +Cc: gcc-help

mahmoodn writes:
 > 
 > I also saw that it is some how possible to exclude unnecessary header files
 > from compiling in the make file. But I do not know how??
 > 
 > can anyone help me...
 > thanks,

Did you read the manual Section "Using Precompiled Headers" ?

 > 
 > 
 > 
 > Andrew Haley wrote:
 > > 
 > > mahmoodn writes:
 > >  > 
 > >  > Is it possible to reduce compilation time with GCC?
 > > 
 > > Yes.  distcc will help you, as will "make -j".  ccache is also useful.
 > > 
 > > Andrew.
 > > 
 > > -- 
 > > Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor,
 > > Berkshire, SL4 1TE, UK
 > > Registered in England and Wales No. 3798903
 > > 
 > > 
 > 
 > -- 
 > View this message in context: http://www.nabble.com/reduce-compilation-times--tf4880765.html#a14104659
 > Sent from the gcc - Help mailing list archive at Nabble.com.

-- 
Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, UK
Registered in England and Wales No. 3798903

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-12-03 16:14     ` Andrew Haley
@ 2007-12-04 11:23       ` mahmoodn
  2007-12-04 12:19         ` Tom Browder
  0 siblings, 1 reply; 69+ messages in thread
From: mahmoodn @ 2007-12-04 11:23 UTC (permalink / raw)
  To: gcc-help


> Did you read the manual Section "Using Precompiled Headers" ?
If you mean http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html, I
have not yet found anything related to what I asked.

I am not an expert in the compiler and its options, so maybe I did not
understand exactly what it says.



Andrew Haley wrote:
> 
> mahmoodn writes:
>  > 
>  > I also saw that it is some how possible to exclude unnecessary header
> files
>  > from compiling in the make file. But I do not know how??
>  > 
>  > can anyone help me...
>  > thanks,
> 
> Did you read the manual Section "Using Precompiled Headers" ?
> 
>  > 
>  > 
>  > 
>  > Andrew Haley wrote:
>  > > 
>  > > mahmoodn writes:
>  > >  > 
>  > >  > Is it possible to reduce compilation time with GCC?
>  > > 
>  > > Yes.  distcc will help you, as will "make -j".  ccache is also
> useful.
>  > > 
>  > > Andrew.
>  > > 
>  > > -- 
>  > > Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor,
>  > > Berkshire, SL4 1TE, UK
>  > > Registered in England and Wales No. 3798903
>  > > 
>  > > 
>  > 
>  > -- 
>  > View this message in context:
> http://www.nabble.com/reduce-compilation-times--tf4880765.html#a14104659
>  > Sent from the gcc - Help mailing list archive at Nabble.com.
> 
> -- 
> Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor,
> Berkshire, SL4 1TE, UK
> Registered in England and Wales No. 3798903
> 
> 

-- 
View this message in context: http://www.nabble.com/reduce-compilation-times--tf4880765.html#a14148752
Sent from the gcc - Help mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-12-04 11:23       ` mahmoodn
@ 2007-12-04 12:19         ` Tom Browder
  2007-12-05  7:44           ` mahmoodn
  0 siblings, 1 reply; 69+ messages in thread
From: Tom Browder @ 2007-12-04 12:19 UTC (permalink / raw)
  To: mahmoodn; +Cc: gcc-help

On Dec 4, 2007 5:23 AM, mahmoodn <nt_mahmood@yahoo.com> wrote:
>
> > Did you read the manual Section "Using Precompiled Headers" ?
> if you mean http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html, I
> have not yet found anything related to that I said.
>
> I am not expert in compiler and its options so, maybe I did not understand
> exactly what it said.

I think Andrew was referring to reducing compilation times by using
precompiled headers.  Since headers tend to be more stable than
implementation code, such use will reduce recompilation time.

As far as removing unneeded headers goes, as far as I know that is a
manual, trial-and-error job that no one has yet automated (but it should
be fairly easy to write a housekeeping script for it).  Try commenting
out header include lines one at a time and test for successful
compilation.
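
[Archive note: a rough sketch of such a housekeeping script. The helper is
hypothetical and crude -- it only checks that the file still compiles on its
own, so the output is a list of candidates to review, not certainties.]

```shell
# For each #include line in a C file, comment it out and retry a
# syntax-only compile; includes that survive removal may be unneeded.
check_includes() {
    src="$1"
    grep -n '^#include' "$src" | while IFS=: read -r num line; do
        sed "${num}s|^|// |" "$src" > /tmp/trial.c
        if gcc -Wall -Werror -fsyntax-only /tmp/trial.c 2>/dev/null; then
            echo "possibly unneeded: $line"
        fi
    done
}
# Hypothetical usage:
#   check_includes src/main.c
```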

-Tom

Tom Browder
Niceville, Florida
USA

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-12-04 12:19         ` Tom Browder
@ 2007-12-05  7:44           ` mahmoodn
  2007-12-05 10:24             ` Tom Browder
  0 siblings, 1 reply; 69+ messages in thread
From: mahmoodn @ 2007-12-05  7:44 UTC (permalink / raw)
  To: gcc-help


For example, look at this paragraph in that link:

>To create a precompiled header file, simply compile it as you would any
>other file, if necessary using the -x option to make the driver treat it
>as a C or C++ header file. You will probably want to use a tool like make
>to keep the precompiled header up-to-date when the headers it contains
>change.

I cannot understand it.  My limited understanding is that I just use a
normal "make" for the first time, and then the compiler would create some
precompiled headers.  After doing that, I could not find any .gch file for
a precompiled header.





Tom Browder wrote:
> 
> On Dec 4, 2007 5:23 AM, mahmoodn <nt_mahmood@yahoo.com> wrote:
>>
>> > Did you read the manual Section "Using Precompiled Headers" ?
>> if you mean http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html, I
>> have not yet found anything related to what I said.
>>
>> I am not an expert in the compiler and its options, so maybe I did not
>> understand exactly what it said.
> 
> I think Andrew was referring to reducing compilation times by using
> precompiled headers.  Since headers tend to be more stable than
> implementation code, such use will reduce recompilation time.
> 
> As for removing unneeded headers: as far as I know that is a manual,
> trial-and-error job that no one has yet automated (though it should be
> fairly easy to write a housekeeping script for it).  Try commenting out
> header #include lines one at a time and test for successful compilation.
> 
> -Tom
> 
> Tom Browder
> Niceville, Florida
> USA
> 
> 

-- 
View this message in context: http://www.nabble.com/reduce-compilation-times--tf4880765.html#a14166678
Sent from the gcc - Help mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-12-05  7:44           ` mahmoodn
@ 2007-12-05 10:24             ` Tom Browder
  2007-12-05 10:29               ` mahmoodn
  0 siblings, 1 reply; 69+ messages in thread
From: Tom Browder @ 2007-12-05 10:24 UTC (permalink / raw)
  To: mahmoodn; +Cc: gcc-help

On Dec 5, 2007 1:44 AM, mahmoodn <nt_mahmood@yahoo.com> wrote:
>
> For example look at this paragraph in that link:
>
> > To create a precompiled header file, simply compile it as you would
> > any other file, if necessary using the -x option to make the driver
> > treat it as a C or C++ header file. You will probably want to use a
> > tool like make to keep the precompiled header up-to-date when the
> > headers it contains change.
>
> I cannot understand it. My understanding is that I just run a normal
> "make" the first time, and then the compiler would create some
> precompiled headers. But after doing this, I could not find any .gch
> file for a precompiled header.

No, it won't do it automatically.  You have to modify your Makefile a
little.  Say we have foo.cc and its header, foo.h, then something like
this in the Makefile should work:

CXX = g++

foo.o : foo.cc foo.h.gch
<tab>$(CXX) -c $<

foo.h.gch: foo.h
<tab>$(CXX) -c $<

-Tom

Tom Browder
Niceville, Florida
USA

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-12-05 10:24             ` Tom Browder
@ 2007-12-05 10:29               ` mahmoodn
  0 siblings, 0 replies; 69+ messages in thread
From: mahmoodn @ 2007-12-05 10:29 UTC (permalink / raw)
  To: gcc-help


>CXX = g++

>foo.o : foo.cc foo.h.gch
><tab>$(CXX) -c $<

>foo.h.gch: foo.h
><tab>$(CXX) -c $<

Interesting... So these lines should create foo.h.gch? I will work on it.




Tom Browder wrote:
> 
> On Dec 5, 2007 1:44 AM, mahmoodn <nt_mahmood@yahoo.com> wrote:
>>
>> For example look at this paragraph in that link:
>>
>> > To create a precompiled header file, simply compile it as you would
>> > any other file, if necessary using the -x option to make the driver
>> > treat it as a C or C++ header file. You will probably want to use a
>> > tool like make to keep the precompiled header up-to-date when the
>> > headers it contains change.
>>
>> I cannot understand it. My understanding is that I just run a normal
>> "make" the first time, and then the compiler would create some
>> precompiled headers. But after doing this, I could not find any .gch
>> file for a precompiled header.
> 
> No, it won't do it automatically.  You have to modify your Makefile a
> little.  Say we have foo.cc and its header, foo.h, then something like
> this in the Makefile should work:
> 
> CXX = g++
> 
> foo.o : foo.cc foo.h.gch
> <tab>$(CXX) -c $<
> 
> foo.h.gch: foo.h
> <tab>$(CXX) -c $<
> 
> -Tom
> 
> Tom Browder
> Niceville, Florida
> USA
> 
> 

-- 
View this message in context: http://www.nabble.com/reduce-compilation-times--tf4880765.html#a14168961
Sent from the gcc - Help mailing list archive at Nabble.com.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 13:56 Duft Markus
  2007-11-28 14:35 ` Tom St Denis
@ 2007-11-29  0:23 ` Tim Prince
  1 sibling, 0 replies; 69+ messages in thread
From: Tim Prince @ 2007-11-29  0:23 UTC (permalink / raw)
  To: Duft Markus; +Cc: Tom St Denis, Fabian Cenedese, gcc-help

Duft Markus wrote:
> Hi!
>
>   
>> This is where automated tools come in handy.  In my projects, I have
>> scripts that pick up source files and insert them into the
>> makefile(s). So with very little fuss I can add new files (either new
>> functionality or new split up code).
>>
>> It really depends on the size of the class whether factoring makes
>> sense or not.  but if you have heaps of 3000+ line long functions, I
>> suspect you spend enough time searching through them as is.
>>
>> When I was working on DB2 and scouring their often 6000+ line files
>> looking for why GCC 4.whatever.beta wasn't working as hot as it could,
>> it wasn't exactly a lot of fun.
>>     
>
> I agree that such big files are no fun at all. I managed to keep a
> structure where files don't get longer than, say, 500 lines.
>
>   
Without even venturing into particularly good practice, we build with 
scripts which automatically split large source files containing 
sometimes 300 functions down to 1 function per file, compile them 
individually, and use a relocatable link to put together an object file 
corresponding to the large source file.  Yes, this is definitely a way 
to reduce compilation time, when used together with a Makefile which 
applies minimum code size generation options for appropriate functions 
with maximum run time performance where needed.

---AV & Spam Filtering by M+Guardian - Risk Free Email (TM)---

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 16:34   ` J.C. Pizarro
@ 2007-11-28 18:18     ` Tom St Denis
  0 siblings, 0 replies; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 18:18 UTC (permalink / raw)
  To: J.C. Pizarro; +Cc: gcc-help

J.C. Pizarro wrote:
>> Anyways, most OSS projects routinely violate most basic rules of proper
>> software development.  About the only thing they get right is they at
>> least use some form of revision and bug control.  Firefox is another
>> beast.  OpenOffice is a much more annoying offender.
>>     
>
> Repairing the development's violations is not an offense.
> It's a good solution to try to repair the violated rules of software development.
>   
I'd like to think, at least in the GCC case, that there are plenty of 
good folk to steer things in the right direction.  Could be wrong, but 
so far GCC has been a fairly reliable toolsuite.

Contrary to the anthem of the bazaar, not all projects are helped by
having 1000s of unqualified hands in the pot.  I'm not a compiler
designer.  Just because I can design and write software doesn't mean I
should be engineering a compiler project.  So we have to trust that the
people who own/maintain the tree are actually going to make things better.

And in the end, it's not perfect, but honestly, what is?  All I'd say is:
if people can easily help avoid bad development practices, why not?

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 16:16 ` Tom St Denis
@ 2007-11-28 16:34   ` J.C. Pizarro
  2007-11-28 18:18     ` Tom St Denis
  0 siblings, 1 reply; 69+ messages in thread
From: J.C. Pizarro @ 2007-11-28 16:34 UTC (permalink / raw)
  To: Tom St Denis, gcc-help

On 2007/11/28, Tom St Denis <tstdenis@ellipticsemi.com> wrote:
> J.C. Pizarro wrote:
> > On 2007/11/28, Tom St Denis <tstdenis@ellipticsemi.com> wrote:
> >
> >> As I said in my first post on the subject, there is no "hard set"
> >> rule about when to refactor. If your class has 3 methods and
> >> is 75 lines of code, it's probably better to have it all organized
> >> in one unit/file. But if your class has 15 methods, and requires
> >> 1500 lines of code, you're probably better off refactoring it.
> >>
> >
> > Well, and how is this GCC in reality?
> >
> > svn://gcc.gnu.org/svn/gcc/trunk
> > $ svn info
> > ...
> > Revision: 130486
> >
> While I won't defend the GCC process (mostly because I'm not part of it)
> I will say that quite  a few files are machine generated.  i386.c for
> instance is generated from i386.md isn't it?

I have no idea whether i386.c is generated from i386.md.
i386.c says nothing about being generated from i386.md or by a generator.
I don't believe that i386.c is generated from i386.md, because
i386.c has large comments that i386.md lacks.

>
> Anyways, most OSS projects routinely violate most basic rules of proper
> software development.  About the only thing they get right is they at
> least use some form of revision and bug control.  Firefox is another
> beast.  OpenOffice is a much more annoying offender.

Repairing the development's violations is not an offense.
It's a good solution to try to repair the violated rules of software development.

> Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 16:06 J.C. Pizarro
@ 2007-11-28 16:16 ` Tom St Denis
  2007-11-28 16:34   ` J.C. Pizarro
  0 siblings, 1 reply; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 16:16 UTC (permalink / raw)
  To: J.C. Pizarro; +Cc: gcc-help

J.C. Pizarro wrote:
> On 2007/11/28, Tom St Denis <tstdenis@ellipticsemi.com> wrote:
>   
>> As I said in my first post on the subject, there is no "hard set"
>> rule about when to refactor. If your class has 3 methods and
>> is 75 lines of code, it's probably better to have it all organized
>> in one unit/file. But if your class has 15 methods, and requires
>> 1500 lines of code, you're probably better off refactoring it.
>>     
>
> Well, and how is this GCC in reality?
>
> svn://gcc.gnu.org/svn/gcc/trunk
> $ svn info
> ...
> Revision: 130486
>   
While I won't defend the GCC process (mostly because I'm not part of it) 
I will say that quite a few files are machine generated.  i386.c for 
instance is generated from i386.md, isn't it?

Anyways, most OSS projects routinely violate most basic rules of proper 
software development.  About the only thing they get right is they at 
least use some form of revision and bug control.  Firefox is another 
beast.  OpenOffice is a much more annoying offender.

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
@ 2007-11-28 16:06 J.C. Pizarro
  2007-11-28 16:16 ` Tom St Denis
  0 siblings, 1 reply; 69+ messages in thread
From: J.C. Pizarro @ 2007-11-28 16:06 UTC (permalink / raw)
  To: gcc-help, Tom St Denis

On 2007/11/28, Tom St Denis <tstdenis@ellipticsemi.com> wrote:
> As I said in my first post on the subject, there is no "hard set"
> rule about when to refactor. If your class has 3 methods and
> is 75 lines of code, it's probably better to have it all organized
> in one unit/file. But if your class has 15 methods, and requires
> 1500 lines of code, you're probably better off refactoring it.

Well, and how is this GCC in reality?

svn://gcc.gnu.org/svn/gcc/trunk
$ svn info
...
Revision: 130486
...
Last Changed Date: 2007-11-28 02:09:35 +0100 (Wed, 28 Nov 2007)

find . -type f -iregex '.*\.c.*\|.*\.h.*' | grep -v '\.svn' | xargs ls -l | \
   tr -s ' ' | cut -d' ' -f5,8 | sort -nr | head -200 | cut -d' ' -f2 | \
   while read F ; do wc -l "$F" ; done | sort -nr | head -20 | awk \
   '{ printf substr($2,3) ":\t\t" $1 " lines, " ; \
      system("echo -n $(ls -l " $2 " | tr -s \\\\040 \\\\t | cut -f5)") ; \
      print " bytes." }'

Here is the list of the 20 first big files (sorted by KLOCs):

libgcc/config/libbid/bid_binarydecimal.c:    147484 lines, 6403812 bytes.
gcc/config/i386/i386.c:                      25308 lines, 815664 bytes.
libjava/gnu/gcj/convert/Unicode_to_JIS.cc:   23139 lines, 625205 bytes.
gcc/config/rs6000/rs6000.c:                  21799 lines, 689671 bytes.
gcc/cp/parser.c:                             20557 lines, 626246 bytes.
gcc/config/arm/arm.c:                        18711 lines, 549174 bytes.
libstdc++-v3/testsuite/tr1/5_numerical_facilities/special_functions/17_hyperg/check_value.cc: 17196 lines, 823435 bytes.
gcc/cp/pt.c:                                 16087 lines, 501737 bytes.
gcc/fold-const.c:                            15206 lines, 482273 bytes.
gcc/dwarf2out.c:                             14990 lines, 453585 bytes.
gcc/combine.c:                               13041 lines, 429100 bytes.
gcc/builtins.c:                              13026 lines, 397350 bytes.
gcc/config/mips/mips.c:                      12531 lines, 393171 bytes.
gcc/cp/decl.c:                               12341 lines, 386701 bytes.
gcc/config/sh/sh.c:                          10932 lines, 329618 bytes.
gcc/config/alpha/alpha.c:                    10727 lines, 294299 bytes.
libstdc++-v3/testsuite/tr1/5_numerical_facilities/special_functions/14_ellint_3/check_value.cc: 10116 lines, 391467 bytes.
gcc/expr.c:                                  10102 lines, 317028 bytes.
gcc/config/ia64/ia64.c:                      9970 lines, 294743 bytes.
gcc/config/frv/frv.c:                        9594 lines, 285369 bytes.

They range from 147.4 down to 9.5 Klines, in comparison
to the 1.5 Klines of code that you wish.

There is much work to do to refactor GCC's .c/.h sources.

   J.C.Pizarro

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: reduce compilation times?
  2007-11-28 13:51           ` Tom St Denis
  2007-11-28 13:59             ` Tom St Denis
@ 2007-11-28 15:51             ` John (Eljay) Love-Jensen
  1 sibling, 0 replies; 69+ messages in thread
From: John (Eljay) Love-Jensen @ 2007-11-28 15:51 UTC (permalink / raw)
  To: gcc-help

Correction:  Squeak is a Smalltalk derivative, not a Lisp derivative.  I blame insufficient caffeine, and poor memory.

Everyone:  this is a lively discussion!  A reflection of the passion and concern to which this topic is regarded.  Probably a bit off-topic from GCC in particular, but still very interesting discussion of personal experience with this problem domain.

Sincerely,
--Eljay

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 13:56 Duft Markus
@ 2007-11-28 14:35 ` Tom St Denis
  2007-11-29  0:23 ` Tim Prince
  1 sibling, 0 replies; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 14:35 UTC (permalink / raw)
  To: Duft Markus; +Cc: Fabian Cenedese, gcc-help

Duft Markus wrote:
> You didn't get the question: I was asking whether a *change* in a header
> is recognized to affect a *certain* .cpp file, not how make handles
> dependencies and updates targets. Microsoft recognizes that a change to
> a certain struct inside a header file may not require a recompilation of
> a certain .cpp file if it doesn't use that struct. That's what I asked
> for.
>   
Sounds like a nice feature, but honestly, I'm a bit paranoid: most of 
the time when I edit header files I just do a "make clean" first 
anyway.  In a lot of our smaller SDKs, we often do a clean as part of 
the "code/test/verify" cycle.  It's just simpler to make sure everything 
is fresh than to have some lingering files that weren't picked up properly 
in a makefile.  On the larger projects we only clean if the headers 
change for obvious reasons.

But I can't say honestly that I spend more time editing header files 
than source files. 

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 13:51           ` Tom St Denis
@ 2007-11-28 13:59             ` Tom St Denis
  2007-11-28 15:51             ` John (Eljay) Love-Jensen
  1 sibling, 0 replies; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 13:59 UTC (permalink / raw)
  To: J.C. Pizarro; +Cc: gcc-help

Tom St Denis wrote:
> J.C. Pizarro wrote:
>> A) When they are using "nested" templates.
>> B) When there are "cyclic" dependences of compilation between 2 or
>> more files.
>> C) When there is "overloading" of methods and functions, virtual and
>> non-virtual.
>> D) When there are macros in C++.
>> E) When there are __attributes__ in C++.
>>   
> Without turning this into (too much of) a language war.  No, because I 
> write maintainable easy to read C programs (which through structs get 
> the benefits of anonymous implementations of interfaces).
I want to supplement this by saying that just because a language 
supports something doesn't mean you should use it.  Templates, from what 
I see, are often wholesale abused.  I can't imagine a case where a 
template makes sense.  If I'm writing a math library for instance, to do 
FFTs, I'd use a typedef to allow switching from float/double.  Yes, I 
can see how a template would allow at compile time to instantiate a 
different flavour of the code, but honestly, I'd just build two copies 
if I needed that flexibility in my system (since really that's what you 
end up with anyways).

Just like C's bitfields.  I have never, in my 14 years as a 
student/hobbyist/professional, ever used a bitfield.  And I've done quite 
a bit [hahahaha punny] of MCU programming.

Sometimes the "hard way" isn't so hard and just as simple.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: reduce compilation times?
@ 2007-11-28 13:56 Duft Markus
  2007-11-28 14:35 ` Tom St Denis
  2007-11-29  0:23 ` Tim Prince
  0 siblings, 2 replies; 69+ messages in thread
From: Duft Markus @ 2007-11-28 13:56 UTC (permalink / raw)
  To: Tom St Denis; +Cc: Fabian Cenedese, gcc-help

Hi!

> This is where automated tools come in handy.  In my projects, I have
> scripts that pick up source files and insert them into the
> makefile(s). So with very little fuss I can add new files (either new
> functionality or new split up code).
> 
> It really depends on the size of the class whether factoring makes
> sense or not.  but if you have heaps of 3000+ line long functions, I
> suspect you spend enough time searching through them as is.
> 
> When I was working on DB2 and scouring their often 6000+ line files
> looking for why GCC 4.whatever.beta wasn't working as hot as it could,
> it wasn't exactly a lot of fun.

I agree that such big files are no fun at all. I managed to keep a
structure where files don't get longer than, say, 500 lines.

> 
> 
>> BTW: does gcc have a mechanism to determine whether a change in a
>> header file affects a particular .cpp file? Microsoft has one... They
>> skip every file where no affecting changes are detected...
>> 
> Can't a makefile do that?
> 
> file.o: file.H file.C
> 
> Yes that works fine with GNU Make
> 
> -bash-3.1$ make
> make: `ctmp1.o' is up to date.
> -bash-3.1$ touch ctmp.H
> -bash-3.1$ make
> g++    -c -o ctmp1.o ctmp1.C
> -bash-3.1$ cat Makefile
> ctmp1.o: ctmp.H ctmp1.C
> 
> It's almost like, if you engineer your builds correctly, these are
> "non-issues."
> 
> I don't want to make this personal, but honestly, if you're going to
> try and give out advice, at least know what you are talking about. 
> There are actual proper ways to do things that reduce errors, increase
> productivity, etc, etc.

You didn't get the question: I was asking whether a *change* in a header
is recognized to affect a *certain* .cpp file, not how make handles
dependencies and updates targets. Microsoft recognizes that a change to
a certain struct inside a header file may not require a recompilation of
a certain .cpp file if it doesn't use that struct. That's what I asked
for.

Also, I have absolutely no problem with managing my builds correctly. One
part of my job is organizing, managing and building about 80 software
packages...

As far as getting personal is concerned: you are free to get personal if
you like, but I really do know very well what I'm talking about, since I
make my money with this. You can easily recognize the things that I don't
know by the question marks behind my sentences.

Cheers, Markus

> 
> Tom


-- 
5. Dezember 2007
Salomon Automation am  Schweizer Forum fur Logistik, Lausanne, CH




Salomon Automation GmbH - Friesachstrasse 15 - A-8114 Friesach bei Graz
Sitz der Gesellschaft: Friesach bei Graz
UID-NR:ATU28654300 - Firmenbuchnummer: 49324 K
Firmenbuchgericht: Landesgericht fur Zivilrechtssachen Graz

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 13:40         ` J.C. Pizarro
@ 2007-11-28 13:51           ` Tom St Denis
  2007-11-28 13:59             ` Tom St Denis
  2007-11-28 15:51             ` John (Eljay) Love-Jensen
  0 siblings, 2 replies; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 13:51 UTC (permalink / raw)
  To: J.C. Pizarro; +Cc: gcc-help

J.C. Pizarro wrote:
> A) When they are using "nested" templates.
> B) When there are "cyclic" dependences of compilation between 2 or more files.
> C) When there is "overloading" of methods and functions, virtual and
> non-virtual.
> D) When there are macros in C++.
> E) When there are __attributes__ in C++.
>   
Without turning this into (too much of) a language war.  No, because I 
write maintainable easy to read C programs (which through structs get 
the benefits of anonymous implementations of interfaces).

> I understand it, but the GCC compiler doesn't understand the "refactoring" meaning.
> GCC only understands the difference between split and non-split files,
> and how that affects it in terms of compile time and optimization gains.
>   
And I'm saying that profiling and careful limited use of macros and 
static inlined functions will get you the same performance without all 
the mess.

I say this as the author of competitive cryptographic and mathematical 
libraries (the latter of which holds the rank as the world's fastest 
public-domain crypto math library). 

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 13:17       ` Tom St Denis
@ 2007-11-28 13:40         ` J.C. Pizarro
  2007-11-28 13:51           ` Tom St Denis
  0 siblings, 1 reply; 69+ messages in thread
From: J.C. Pizarro @ 2007-11-28 13:40 UTC (permalink / raw)
  To: Tom St Denis, gcc-help

On 2007/11/28, Tom St Denis <tstdenis@ellipticsemi.com> wrote:
> >> In the case of C++, you can just put each method of a class in a
> >> separate .C file.  Provided they all include a .H file which defines the
> >> class prototype it's ok.
> >>
> >
> > I'm not sure if GCC C++ does it.
> >
> Yes, it does.  Consider:
>
> ctmp1.C
> #include "ctmp.H"
>
> ctmp::ctmp(void)
> {
>    cout << "hello world" << endl;
> }
>
> ctmp2.C
> #include "ctmp.H"
>
> ctmp::~ctmp(void)
> {
>    cout << "goodbye world" << endl;
> }
>
> ctmp.H
> #include <iostream>
> using namespace std;
>
> class ctmp {
>    public:
>       ctmp(void);
>       ~ctmp(void);
> };
>
> Both .C files compile just fine.

Do you have problems with separate files in the cases below?
A) When they are using "nested" templates.
B) When there are "cyclic" dependences of compilation between 2 or more files.
C) When there is "overloading" of methods and functions, virtual and
non-virtual.
D) When there are macros in C++.
E) When there are __attributes__ in C++.

>
> >> In the case of Java, you can break up a large task into classes which
> >> handle separate functions of the program.  For example, a compiler may
> >> have an I/O class, a lexer class, a parser class, an interface for
> >> optimizations, and various implementations of the interface, etc, etc.
> >> Hell, most colleges teach things like the MVC model when doing GUI Java
> >> apps which, last I checked, is a way to refactor one large program into
> >> separate tasks.
> >>
> >
> > We're talking about splitting files (e.g. 1 file per function or per
> > method), not about separating or factoring tasks.
> >
> > Java can't split the many methods of a class across many files:
> > only 1 file per class, not 1 file per method.
> >
>
> Refactoring doesn't strictly mean one function per file.  That's the
> ideal [in most cases].  Refactoring simply means breaking a large task
> into smaller re-useable components.  For example, suppose you had a Java
> program made up of 10 classes, and each class you manually coded up a
> [say] GZIP decompressor, wouldn't it make more sense to split that off
> into its own class?  That's re-factoring.

I understand it, but the GCC compiler doesn't understand the "refactoring" meaning.
GCC only understands the difference between split and non-split files,
and how that affects it in terms of compile time and optimization gains.

   J.C.Pizarro

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 12:52     ` J.C. Pizarro
  2007-11-28 13:17       ` Tom St Denis
@ 2007-11-28 13:30       ` Ted Byers
  1 sibling, 0 replies; 69+ messages in thread
From: Ted Byers @ 2007-11-28 13:30 UTC (permalink / raw)
  To: J.C. Pizarro, Tom St Denis, gcc-help


--- "J.C. Pizarro" <jcpiza@gmail.com> wrote:

> On 2007/11/28, Tom St Denis <tstdenis@ellipticsemi.com> wrote:
> > J.C. Pizarro wrote:
> > > On 2007/11/28, Duft Markus <Markus.Duft@salomon.at> wrote:
> > >
> > >> Hi!
> > >>
> > >> I assume that all strategies discussed here are targeted at C. Now
> > >> what about C++, how do things behave there? As far as I know C++ is
> > >> much different, and requires completely different thinking with
> > >> regards to splitting source into more files, etc.
> > >>
> > >> Cheers, Markus
> > >>
> > >
> > > Your comment is good.
> > >
> > > Splitting C files is different to splitting C++ files or splitting
> > > Java files, Fortran, Ada, ObjC, ....
> > >
> > > As GCC is made in C only, then we only need to split C files to
> > > reduce the recompilation time if we want.
> > >
> > > For other projects made in C++, Java, Fortran, Ada, ObjC, ...., they
> > > are hard to split their files.
> > >
> > This is so blatantly false ... I don't know about fortran/ada/obj, but
> > for C++ and Java you can trivially factor your code.
> 
> It's not false, you get wrong.
> 
No!  Tom got it right!  Had I responded when I first
saw that nonsense, I would have used harsher language
than he did!

I don't use ObjC or Ada, but a fortran project can be
rationally divided among several compilation units, as
can C++.  I have seen fortran library projects in
which there is only one function in most of the
compilation units in the project.  Java requires on
file per class, but you can rationally design your
classes so that each is relatively small.  This allows
a single complex class to make use of a suite of small
classes, so its member functions can often be one or
two lines of code, simply referencing instances and
functions of other classes.  This is easy to do, at
least if you have sufficient experience coding in a
variety of languages!  It comes down to thinking about
your code and how best to implement what you need.

Cheers

Ted

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 13:25 Duft Markus
@ 2007-11-28 13:26 ` Tom St Denis
  0 siblings, 0 replies; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 13:26 UTC (permalink / raw)
  To: Duft Markus; +Cc: Fabian Cenedese, gcc-help

Duft Markus wrote:
> Hi!
>
> Fabian Cenedese <> wrote:
>   
>>>> Splitting C files is different to splitting C++ files or splitting
>>>> Java files, Fortran, Ada, ObjC, ....
>>>>         
>>> In the case of C++, you can just put each method of a class in a
>>> separate .C file.  Provided they all include a .H file which defines
>>> the class prototype it's ok.  
>>>       
>> The problem may not be the .cpp but the .h files. If I add a new
>> member 
>> or method all files of this class need to be rebuilt. With the
>> independent functions in C this may be easier to do. But still, if
>> everything is rebuilt then it doesn't matter how many files you
>> spread your code over. 
>>
>> Of course from maintenance point of view splitting files is good
>> though 
>> I maybe wouldn't go down to function level, more like class level.
>> Otherwise the bad overview in the file is just transferred to the
>> project level.
>>     
>
> +1 ;o) Exactly how I think about it. I personally have a project with
> approximately 30K lines, where each .cpp file contains exactly one class
> (and its internal-only stuff...). The project is perfectly
> maintainable; further splitting into more files would only make it more
> *unmaintainable*, since I would spend my time searching for things... ;o)
>   
This is where automated tools come in handy.  In my projects, I have 
scripts that pick up source files and insert them into the makefile(s).  
So with very little fuss I can add new files (either new functionality 
or new split up code). 

It really depends on the size of the class whether factoring makes sense 
or not.  But if you have heaps of 3000+ line long functions, I suspect 
you spend enough time searching through them as is. 

When I was working on DB2 and scouring their often 6000+ line files 
looking for why GCC 4.whatever.beta wasn't working as hot as it could, 
it wasn't exactly a lot of fun.


> BTW: does gcc have a mechanism to determine whether a change in a header
> file affects a particular .cpp file? Microsoft has one... They skip every
> file where no affecting changes are detected...
>   
Can't a makefile do that?

file.o: file.H file.C

Yes that works fine with GNU Make

-bash-3.1$ make
make: `ctmp1.o' is up to date.
-bash-3.1$ touch ctmp.H
-bash-3.1$ make
g++    -c -o ctmp1.o ctmp1.C
-bash-3.1$ cat Makefile
ctmp1.o: ctmp.H ctmp1.C

It's almost like, if you engineer your builds correctly, these are 
"non-issues." 

I don't want to make this personal, but honestly, if you're going to try 
and give out advice, at least know what you are talking about.  There 
are actual proper ways to do things that reduce errors, increase 
productivity, etc, etc. 

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: reduce compilation times?
@ 2007-11-28 13:25 Duft Markus
  2007-11-28 13:26 ` Tom St Denis
  0 siblings, 1 reply; 69+ messages in thread
From: Duft Markus @ 2007-11-28 13:25 UTC (permalink / raw)
  To: Fabian Cenedese, gcc-help

Hi!

Fabian Cenedese <> wrote:
>>> Splitting C files is different to splitting C++ files or splitting
>>> Java files, Fortran, Ada, ObjC, ....
>> 
>> In the case of C++, you can just put each method of a class in a
>> separate .C file.  Provided they all include a .H file which defines
>> the class prototype it's ok.  
> 
> The problem may not be the .cpp but the .h files. If I add a new
> member 
> or method all files of this class need to be rebuilt. With the
> independent functions in C this may be easier to do. But still, if
> everything is rebuilt then it doesn't matter how many files you
> spread your code over. 
> 
> Of course from maintenance point of view splitting files is good
> though 
> I maybe wouldn't go down to function level, more like class level.
> Otherwise the bad overview in the file is just transferred to the
> project level.

+1 ;o) Exactly how I think about it. I personally have a project with
approximately 30K lines, where each .cpp file contains exactly one class
(and its internal-only stuff...). The project is perfectly
maintainable; further splitting into more files would only make it more
*unmaintainable*, since I would spend my time searching for things... ;o)

And as Fabian said, in C++ the header files are the problem. It's even
faster to have a class in one C++ file as opposed to having multiple
files, because (with dependency tracking enabled) each of the files
including the header (which would be all of those files, right?) will be
rebuilt anyway.

BTW: does gcc have a mechanism to determine whether a change in a header
file affects a particular .cpp file? Microsoft has one... They skip every
file where no affecting changes are detected...

Cheers, Markus

> 
> bye  Fabi



^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 12:52     ` J.C. Pizarro
@ 2007-11-28 13:17       ` Tom St Denis
  2007-11-28 13:40         ` J.C. Pizarro
  2007-11-28 13:30       ` Ted Byers
  1 sibling, 1 reply; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 13:17 UTC (permalink / raw)
  To: J.C. Pizarro; +Cc: gcc-help

J.C. Pizarro wrote:
>> This is so blatantly false ... I don't know about fortran/ada/obj, but
>> for C++ and Java you can trivially factor your code.
>>     
>
> It's not false; you've got it wrong.
>   

I don't want to make this personal, so I apologize for the tone in my 
other reply.

>> In the case of C++, you can just put each method of a class in a
>> separate .C file.  Provided they all include a .H file which defines the
>> class prototype it's ok.
>>     
>
> I'm not sure if GCC C++ does it.
>   
Yes, it does.  Consider:

ctmp1.C
#include "ctmp.H"

ctmp::ctmp(void)
{
   cout << "hello world" << endl;
}

ctmp2.C
#include "ctmp.H"

ctmp::~ctmp(void)
{
   cout << "goodbye world" << endl;
}

ctmp.H
#include <iostream>
using namespace std;

class ctmp {
   public: 
      ctmp(void);
      ~ctmp(void);
};

Both .C files compile just fine.

>> In the case of Java, you can break up a large task into classes which
>> handle separate functions of the program.  For example, a compiler may
>> have an I/O class, a lexer class, a parser class, an interface for
>> optimizations, and various implementations of the interface, etc, etc.
>> Hell, most colleges teach things like the MVC model when doing GUI Java
>> apps which, last I checked, is a way to refactor one large program into
>> separate tasks.
>>     
>
> We're talking about splitting files (e.g. 1 file per function or per method),
> not about separating or factoring tasks.
>
> Java can't split the methods of one class across many files:
> it's 1 file per class, not 1 file per method.
>   

Refactoring doesn't strictly mean one function per file.  That's the 
ideal [in most cases].  Refactoring simply means breaking a large task 
into smaller re-usable components.  For example, suppose you had a Java 
program made up of 10 classes, and in each class you manually coded up a 
[say] GZIP decompressor; wouldn't it make more sense to split that off 
into its own class?  That's refactoring.

In an ideal application (as opposed to a library), your "main" class 
should be fairly trivial, handing off all the hard work to a set of 
classes which handle all the individual aspects of the program.

This is reflected well in how glade designs GTK+ applications in C.  
Basically you end up with a C file which constructs and maintains the 
GUI; the code there should just be calling library functions to do the 
heavy work.

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 12:18 ` Tom St Denis
@ 2007-11-28 13:09   ` Ted Byers
  0 siblings, 0 replies; 69+ messages in thread
From: Ted Byers @ 2007-11-28 13:09 UTC (permalink / raw)
  To: Tom St Denis, Duft Markus; +Cc: NightStrike, J.C. Pizarro, Galloth, gcc-help


--- Tom St Denis <tstdenis@ellipticsemi.com> wrote:

> Duft Markus wrote:
> > Hi!
> >
> > I assume that all strategies discussed here are targeted at C. Now what
> > about C++: how do things behave there? As far as I know C++ is much
> > different, and requires completely different thinking with regard to
> > splitting source into more files, etc.
> >   
> 
> I don't know enough about C++ linking but there is
> no reason you can't 
> put methods in separate .C files.  The problem is
> most C++ developers 
> want to inline all of their methods and put quite a
> bit of actual code 
> in their .H files instead, which is just a
> maintenance nightmare.
> 
Well, that MAY be true of kids still wet behind the
ears, but it isn't true of the experienced C++
developers I know.  I prefer C++ for high performance
code: in fact my best C++ code is faster than my best
Fortran code, but that is another story.

I routinely split my C++ classes across multiple
compilation units, sometimes to the point of one
function per compilation unit.  But sometimes there
are a handful of member functions that are
sufficiently closely related to warrant placing them
in the same compilation unit.  I put only that code in
a header file that could rationally be inlined.  There
are, in fact, a handful of best practices that have
developed over the years regarding helping the
compiler determine which functions to consider
inlining.

The only exception to this is how we need to handle
template classes.  I deal with this, though, by
keeping my template classes as small as practicable
(and perhaps using a small family of related template
classes rather than a single catchall).  My
experience, though, is that once you start working
with template classes, especially with state of the
art template metaprogramming, compilation times
increase dramatically anyway, so you just bite the
bullet and deal with it.  But this gets into an area
where I wouldn't hand the task to a junior developer
anyway.  They just don't have the experience needed to
do it well.  I'd instead want to spend some time
giving them on-the-job training for a few years,
before letting them run with a template
metaprogramming task (I know of a few college programs
where C++ programmers aren't given so much as 5
minutes on generic programming, so the inability of
most junior C++ programmers to understand it isn't a
surprise).

> The benefits of code factoring are hardly limited to
> C or C++.  They 
> equally apply to Java applications (with the sad
> exception, hehehe, that 
> your class has to be in one file, but you can
> refactor into smaller 
> classes, etc), pascal, assembler, etc.
> 
Yeah, but it is a mistake to implement high performance
code in Java anyway, so neither compilation speed nor
runtime performance is a major factor in choosing to
use it (rather, in my experience, it is the ease of
use and time to complete the coding for certain types
of application).  While it has improved over the
years, it still doesn't come close to C++ or Fortran.
Where Java has the edge is the ease with which
distributed programming can be done, and this is
because of the wealth of libraries supplied in the
J2SE and J2EE SDKs.  

I generally agree with the rest of what you said. 
Anyone who has been doing this a while, and learned
from that experience, will be using version control,
well factored compilation units (thought about rather
than just mindlessly munging things together or
dividing them), &c.  Dividing a class into multiple
compilation units gets you only so far.  If one of
your edits requires a change to the header file
containing the class declaration, then all the files
depending on it must be recompiled also, even though
you didn't touch them.  You have to think about how to
distribute your code, not just put it all into a
single file or distribute among countless files.  Both
ideas, mindlessly applied, are bound to get you into
trouble with a hard to maintain project.

Cheers,

Ted

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 12:49     ` Fabian Cenedese
@ 2007-11-28 13:03       ` Tom St Denis
  0 siblings, 0 replies; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 13:03 UTC (permalink / raw)
  To: Fabian Cenedese; +Cc: gcc-help

Fabian Cenedese wrote:
>>> Splitting C files is different to splitting C++ files or splitting Java files,
>>> Fortran, Ada, ObjC, ....
>>>       
>> In the case of C++, you can just put each method of a class in a separate .C file.  Provided they all include a .H file which defines the class prototype it's ok.
>>     
>
> The problem may not be the .cpp but the .h files. If I add a new member
> or method, all files of this class need to be rebuilt. With the independent
> functions in C this may be easier to do. But still, if everything is rebuilt
> then it doesn't matter how many files you spread your code over.
>
> Of course from a maintenance point of view splitting files is good, though
> I probably wouldn't go down to function level, more like class level.
> Otherwise the bad overview in the file is just transferred to the project
> level.
>   

That's no different from C, where you change a struct, union, or enum 
(or a macro).

But most of your re-compiles will be after changing code, not definitions 
or prototypes.  And even if it didn't save compile time [which it will], 
it still makes code more maintainable.

As I said in my first post on the subject, there is no "hard set" rule 
about when to refactor.  If your class has 3 methods and is 75 lines of 
code, it's probably better to have it all organized in one unit/file.  
But if your class has 15 methods, and requires 1500 lines of code, 
you're probably better off refactoring it.

Libraries are different from applications in this sense.  In a library, 
it usually makes sense to factor at the function level as you get a 
better chance to smart link (as well as the other development 
benefits).  This doesn't strictly apply to C++ I suppose (well it may if 
nothing calls a method), but it definitely does to C.

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: reduce compilation times?
  2007-11-28 12:31   ` J.C. Pizarro
  2007-11-28 12:39     ` Tom St Denis
@ 2007-11-28 12:54     ` John (Eljay) Love-Jensen
  1 sibling, 0 replies; 69+ messages in thread
From: John (Eljay) Love-Jensen @ 2007-11-28 12:54 UTC (permalink / raw)
  To: J.C. Pizarro, gcc-help

Hi J.C.,

> How do you take care of "dangling pointers" and "memory leaks" in C++ sources?

Don't dangle pointers.

Don't leak memory.

Both of which are weaknesses in C++, since it is very easy to dangle pointers and leak memory in C++.

The solution to both those problems is to be meticulous about pointers and memory management.  Boost has some nice facilities to aid in both of those, by wrapping memory management in smart pointers.

And be willing to use tools (valgrind, gdb, whatever) to help track down those problems when they (non-trivial C/C++: inevitably?) occur.

> For large-scale projects, besides C++, there are other high-level languages such as Java (hated by some people because of Sun), Eiffel, Erlang, Mercury, Oz, Common Lisp, Ruby, Python, etc.

For the record, I don't "hate Java" because of Sun.  Quite the opposite, I have a lot of respect for Sun, and I actually "love Java" because of IBM's Eclipse.  (Without Eclipse, I'd be lukewarm on Java.)  And for enterprise applications, J2EE is the cat's meow!

Eiffel - have not programmed in it, but I have adopted some of the program-by-contract paradigms into my C++ programs.

Erlang, Mercury, Oz - have not used, and unfamiliar with.

Common Lisp - I've programmed some small applications in Scheme and in Squeak, but not Common Lisp or CLOS.  Is it suitable for large scale?

Ruby - is Ruby suitable for large scale?

Python - is Python suitable for large scale?

Of course, when talking about large scale applications, often more than one language is involved in the mix.  My project has C, C++, and JavaScript (both as a user scripting language, and used internally for parts).  The scripting language could just as well have been Ruby or Python, or Lua or Perl.

> Splitting files isn't a good idea for large-scale projects in these languages.

Ummm... okay.  But that's not the context of the discussion - reduction of compilation times.

And the tangent of breaking up C/C++ code into finer granularity source files to aid in maintenance and management, with the concern over possible runtime impact (the refrain being:  profile).  There are probably concerns on both the "premature optimization" side (i.e., you cannot optimize without profiling) and the "premature pessimization" side (such as discussed by Sutter in the Exceptional C++ series).

By the way:  I highly recommend Herb Sutter's books on C++.

> "Maintenance" is an important issue in large-scale projects.

You, Tom and I are in 100% agreement on that!

Sincerely,
--Eljay

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 12:28   ` Tom St Denis
  2007-11-28 12:49     ` Fabian Cenedese
@ 2007-11-28 12:52     ` J.C. Pizarro
  2007-11-28 13:17       ` Tom St Denis
  2007-11-28 13:30       ` Ted Byers
  1 sibling, 2 replies; 69+ messages in thread
From: J.C. Pizarro @ 2007-11-28 12:52 UTC (permalink / raw)
  To: Tom St Denis, gcc-help

On 2007/11/28, Tom St Denis <tstdenis@ellipticsemi.com> wrote:
> J.C. Pizarro wrote:
> > On 2007/11/28, Duft Markus <Markus.Duft@salomon.at> wrote:
> >
> >> Hi!
> >>
> >> I assume that all strategies discussed here are targeted at C. Now what
> >> about C++: how do things behave there? As far as I know C++ is much
> >> different, and requires completely different thinking with regard to
> >> splitting source into more files, etc.
> >>
> >> Cheers, Markus
> >>
> >
> > Your comment is good.
> >
> > Splitting C files is different from splitting C++ files, Java files,
> > Fortran, Ada, ObjC, ....
> >
> > Since GCC is written in C only, we only need to split C files to reduce
> > recompilation time if we want.
> >
> > For other projects, written in C++, Java, Fortran, Ada, ObjC, ...., it is
> > hard to split their files.
> >
> This is so blatantly false ... I don't know about fortran/ada/obj, but
> for C++ and Java you can trivially factor your code.

It's not false; you've got it wrong.

> In the case of C++, you can just put each method of a class in a
> separate .C file.  Provided they all include a .H file which defines the
> class prototype it's ok.

I'm not sure if GCC C++ does it.

> In the case of Java, you can break up a large task into classes which
> handle separate functions of the program.  For example, a compiler may
> have an I/O class, a lexer class, a parser class, an interface for
> optimizations, and various implementations of the interface, etc, etc.
> Hell, most colleges teach things like the MVC model when doing GUI Java
> apps which, last I checked, is a way to refactor one large program into
> separate tasks.

We're talking about splitting files (e.g. 1 file per function or per method),
not about separating or factoring tasks.

Java can't split the methods of one class across many files:
it's 1 file per class, not 1 file per method.

> And again you're missing the point (with your comment about future GCCs
> that I accidentally snipped).  You want to refactor your code so you can
> ***MAINTAIN*** it.

"To maintain" it, and also "to optimize" (reducing compile time).

>
> Tom
>

   J.C.Pizarro

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 12:28   ` Tom St Denis
@ 2007-11-28 12:49     ` Fabian Cenedese
  2007-11-28 13:03       ` Tom St Denis
  2007-11-28 12:52     ` J.C. Pizarro
  1 sibling, 1 reply; 69+ messages in thread
From: Fabian Cenedese @ 2007-11-28 12:49 UTC (permalink / raw)
  To: gcc-help


>>Splitting C files is different to splitting C++ files or splitting Java files,
>>Fortran, Ada, ObjC, ....
>
>In the case of C++, you can just put each method of a class in a separate .C file.  Provided they all include a .H file which defines the class prototype it's ok.

The problem may not be the .cpp but the .h files. If I add a new member
or method, all files of this class need to be rebuilt. With the independent
functions in C this may be easier to do. But still, if everything is rebuilt
then it doesn't matter how many files you spread your code over.

Of course from a maintenance point of view splitting files is good, though
I probably wouldn't go down to function level, more like class level.
Otherwise the bad overview in the file is just transferred to the project
level.

bye  Fabi


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 12:31   ` J.C. Pizarro
@ 2007-11-28 12:39     ` Tom St Denis
  2007-11-28 12:54     ` John (Eljay) Love-Jensen
  1 sibling, 0 replies; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 12:39 UTC (permalink / raw)
  To: J.C. Pizarro; +Cc: John (Eljay) Love-Jensen, gcc-help

J.C. Pizarro wrote:
> On 2007/11/28, John (Eljay) Love-Jensen <eljay@adobe.com> wrote:
>   
>> Hi Duft,
>>
>>     
>>> I assume that all strategies discussed here are targeted at C. Now what about C++: how do things behave there? As far as I know C++ is much different, and requires completely different thinking with regard to splitting source into more files, etc.
>>>       
>> The Large-Scale C++ Software Design by Lakos which I've recommended targets C++.
>>
>> http://www.amazon.com/dp/0201633620
>>     
>
> How do you take care of "dangling pointers" and "memory leaks"
> in C++ sources?
>   
Debuggers and profilers.  (Hint:  learn to use gdb and valgrind).

> For large-scale projects, besides C++, there are other high-level
> languages such as Java (hated by some people because of Sun), Eiffel,
> Erlang, Mercury, Oz, Common Lisp, Ruby, Python, etc.
>
> 1. http://en.wikipedia.org/wiki/Programming_language
> 2. http://www.dmoz.org/Computers/Programming/Languages/
> 3. http://directory.google.com/Top/Computers/Programming/Languages/
> 4. http://en.wikipedia.org/wiki/List_of_programming_languages
> 4.1 http://en.wikipedia.org/wiki/Timeline_of_programming_languages
> 4.2 http://en.wikipedia.org/wiki/Alphabetical_list_of_programming_languages
> 4.3 http://en.wikipedia.org/wiki/Generational_list_of_programming_languages
> 4.4 http://en.wikipedia.org/wiki/Categorical_list_of_programming_languages
>   

This is asinine and off-topic for this list.  If you want to talk about 
other languages that are not used in, or provided by, the GCC toolset, 
you should probably move the conversation off-list.

That said, who in their right mind develops a large modern-day project 
in Eiffel, Erlang, or the like?

I'd rather use Perl or C (or I suppose C++) as they're languages that 
are likely to be well supported in, oh say, 10-20 years.  Seen many 
COBOL compilers recently?

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: reduce compilation times?
@ 2007-11-28 12:36 Duft Markus
  0 siblings, 0 replies; 69+ messages in thread
From: Duft Markus @ 2007-11-28 12:36 UTC (permalink / raw)
  To: John (Eljay) Love-Jensen, Tom St Denis, NightStrike
  Cc: J.C. Pizarro, Galloth, gcc-help

John (Eljay) Love-Jensen <mailto:eljay@adobe.com> wrote:
> Hi Duft,
> 
>> I assume that all strategies discussed here are targeted at C. Now
>> what about C++: how do things behave there? As far as I know C++ is
>> much different, and requires completely different thinking with
>> regard to splitting source into more files, etc.
> 
> The Large-Scale C++ Software Design by Lakos which I've recommended
> targets C++. 

Hi!

Oh, i must have missed that post ;o) Thanks anyway...

Cheers, Markus

> 
> http://www.amazon.com/dp/0201633620
> 
> HTH,
> --Eljay



^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 12:07             ` Tom St Denis
@ 2007-11-28 12:35               ` Brian Dessent
  0 siblings, 0 replies; 69+ messages in thread
From: Brian Dessent @ 2007-11-28 12:35 UTC (permalink / raw)
  To: Tom St Denis; +Cc: gcc-help, NightStrike, J.C. Pizarro, Galloth

Tom St Denis wrote:

> Yeah, except putting all your functions in one file goes against the
> very nature of proper software development strategies.

That's not what I was advocating at all, just that marking
non-externally visible functions as static is better than manually
inlining.

> Often the savings, especially on desktop/server class processors, from
> the minutiae of optimizations possible at that level do not outweigh the
> cost to the development process.

But with LTO the development process doesn't change at all.  The
structure of the source into separate compilation units remains, and
each unit is still individually compiled.  It's just that some
optimization and processing are delayed until the point which would
traditionally be called linking.

> For example, in my math library the modexp function calls external
> mul, sqr, and mod functions (well, Montgomery reduction, but you get the
> point).  So even though they're not inlined (well they're big so they
> wouldn't anyways) and you have the overhead of a call, the performance
> is still 99% dominated by what happens inside the calls, not by the call
> itself.  In my case, my multipliers are fully unrolled/inlined since
> that's where the performance is to be had.  So it was worth the
> readability cost (well they're machine generated anyways) for it.

I see no reason why any of the above would change under LTO.

> I question the sanity of a LTO step (if indeed that means it
> re-organizes the object code at link time).  It'll make debugging harder
> when supposedly non-inlined code gets inlined, or other nasties (e.g.
> picking up a constant from another module then removing dead code, etc).

Debugging when a function is inlined is hard, yes, but that happens
today already for a number of reasons.  When it happens as a result of
LTO it is no different than when it happens now.  Ideally the debug
information should be expressive enough that debugging an inlined
function should be just as straightforward as the out-of-line version,
and there are several competing approaches underway to move more towards
that ideal.

> I think most people would prefer their object files to be representative
> of the compiler input.

I don't see how that follows, as each LTO object file is still directly
representative of the contents of its corresponding translation unit,
they are just in a more raw form (IL) rather than being assembled
machine code.

If you don't want LTO nobody is going to ram it down your throat, just
don't use -flto.

Brian

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 12:12 ` John (Eljay) Love-Jensen
@ 2007-11-28 12:31   ` J.C. Pizarro
  2007-11-28 12:39     ` Tom St Denis
  2007-11-28 12:54     ` John (Eljay) Love-Jensen
  0 siblings, 2 replies; 69+ messages in thread
From: J.C. Pizarro @ 2007-11-28 12:31 UTC (permalink / raw)
  To: John (Eljay) Love-Jensen, gcc-help

On 2007/11/28, John (Eljay) Love-Jensen <eljay@adobe.com> wrote:
> Hi Duft,
>
> > I assume that all strategies discussed here are targeted at C. Now what about C++: how do things behave there? As far as I know C++ is much different, and requires completely different thinking with regard to splitting source into more files, etc.
>
> The Large-Scale C++ Software Design by Lakos which I've recommended targets C++.
>
> http://www.amazon.com/dp/0201633620

How do you take care of "dangling pointers" and "memory leaks"
in C++ sources?

For large-scale projects, besides C++, there are other high-level
languages such as Java (hated by some people because of Sun), Eiffel,
Erlang, Mercury, Oz, Common Lisp, Ruby, Python, etc.

1. http://en.wikipedia.org/wiki/Programming_language
2. http://www.dmoz.org/Computers/Programming/Languages/
3. http://directory.google.com/Top/Computers/Programming/Languages/
4. http://en.wikipedia.org/wiki/List_of_programming_languages
4.1 http://en.wikipedia.org/wiki/Timeline_of_programming_languages
4.2 http://en.wikipedia.org/wiki/Alphabetical_list_of_programming_languages
4.3 http://en.wikipedia.org/wiki/Generational_list_of_programming_languages
4.4 http://en.wikipedia.org/wiki/Categorical_list_of_programming_languages

Splitting files isn't a good idea for large-scale projects in
these languages.

"Maintenance" is an important issue in large-scale projects.

   J.C.Pizarro

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28 12:01 ` J.C. Pizarro
@ 2007-11-28 12:28   ` Tom St Denis
  2007-11-28 12:49     ` Fabian Cenedese
  2007-11-28 12:52     ` J.C. Pizarro
  0 siblings, 2 replies; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 12:28 UTC (permalink / raw)
  To: J.C. Pizarro; +Cc: Duft Markus, gcc-help

J.C. Pizarro wrote:
> On 2007/11/28, Duft Markus <Markus.Duft@salomon.at> wrote:
>   
>> Hi!
>>
>> I assume that all strategies discussed here are targeted at C. Now what
>> about C++: how do things behave there? As far as I know C++ is much
>> different, and requires completely different thinking with regard to
>> splitting source into more files, etc.
>>
>> Cheers, Markus
>>     
>
> Your comment is good.
>
> Splitting C files is different from splitting C++ files, Java files,
> Fortran, Ada, ObjC, ....
>
> Since GCC is written in C only, we only need to split C files to reduce
> recompilation time if we want.
>
> For other projects, written in C++, Java, Fortran, Ada, ObjC, ...., it is
> hard to split their files.
>   
This is so blatantly false ... I don't know about fortran/ada/obj, but 
for C++ and Java you can trivially factor your code.

In the case of C++, you can just put each method of a class in a 
separate .C file.  Provided they all include a .H file which defines the 
class prototype, it's ok.

In the case of Java, you can break up a large task into classes which 
handle separate functions of the program.  For example, a compiler may 
have an I/O class, a lexer class, a parser class, an interface for 
optimizations, and various implementations of the interface, etc, etc.  
Hell, most colleges teach things like the MVC model when doing GUI Java 
apps which, last I checked, is a way to refactor one large program into 
separate tasks.

And again you're missing the point (with your comment about future GCCs 
that I accidentally snipped).  You want to refactor your code so you can 
***MAINTAIN*** it.

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28  7:57 Duft Markus
  2007-11-28 12:01 ` J.C. Pizarro
  2007-11-28 12:12 ` John (Eljay) Love-Jensen
@ 2007-11-28 12:18 ` Tom St Denis
  2007-11-28 13:09   ` Ted Byers
  2 siblings, 1 reply; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 12:18 UTC (permalink / raw)
  To: Duft Markus; +Cc: NightStrike, J.C. Pizarro, Galloth, gcc-help

Duft Markus wrote:
> Hi!
>
> I assume that all strategies discussed here are targeted at C. Now what
> about C++: how do things behave there? As far as I know C++ is much
> different, and requires completely different thinking with regard to
> splitting source into more files, etc.
>   

I don't know enough about C++ linking, but there is no reason you can't 
put methods in separate .C files.  The problem is most C++ developers 
want to inline all of their methods and put quite a bit of actual code 
in their .H files instead, which is just a maintenance nightmare.

The benefits of code factoring are hardly limited to C or C++.  They 
equally apply to Java applications (with the sad exception, hehehe, that 
your class has to be in one file, but you can refactor into smaller 
classes, etc), Pascal, assembler, etc.

Even if you're a one-person shop, it helps, especially if you use some 
form of revision control.  For example, if you've messed up one function 
but changed other functions (and want to keep those changes), it's easier 
to restore one file from the last sane revision than to patch one huge 
file with a mix of current, unstable, and old code.  I've done that myself 
a few times.  I would be working on one of my libraries, and amongst say 
5-6 changes I'm making, one of them doesn't pan out.  So I just nuke the 
file and cvs update it.  Boom, the last stable copy is back. 

Anyway... despite what others are saying, putting all of your eggs in 
one basket won't magically make the compiler able to optimize the 
code significantly differently than what you could have with well 
factored code and carefully created static inlines/macros.  All that 
doing what they're saying gets you is a hard-to-maintain project.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: reduce compilation times?
  2007-11-28  7:57 Duft Markus
  2007-11-28 12:01 ` J.C. Pizarro
@ 2007-11-28 12:12 ` John (Eljay) Love-Jensen
  2007-11-28 12:31   ` J.C. Pizarro
  2007-11-28 12:18 ` Tom St Denis
  2 siblings, 1 reply; 69+ messages in thread
From: John (Eljay) Love-Jensen @ 2007-11-28 12:12 UTC (permalink / raw)
  To: Duft Markus, Tom St Denis, NightStrike; +Cc: J.C. Pizarro, Galloth, gcc-help

Hi Duft,

> I assume that all strategies discussed here are targeted at C. Now what about C++: how do things behave there? As far as I know C++ is much different, and requires completely different thinking with regard to splitting source into more files, etc.

The Large-Scale C++ Software Design by Lakos which I've recommended targets C++.

http://www.amazon.com/dp/0201633620

HTH,
--Eljay

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28  9:19           ` Brian Dessent
@ 2007-11-28 12:07             ` Tom St Denis
  2007-11-28 12:35               ` Brian Dessent
  0 siblings, 1 reply; 69+ messages in thread
From: Tom St Denis @ 2007-11-28 12:07 UTC (permalink / raw)
  To: gcc-help; +Cc: NightStrike, J.C. Pizarro, Galloth

Brian Dessent wrote:
> Tom St Denis wrote:
>
>   
>> What you really should do, is profile your code, then create "static
>> inline" or macro copies of heavily used (and not overly large) pieces of
>> code.  And even then, inlining code doesn't always help.
>>     
>
> You don't have to go to the trouble of inlining things manually, the
> compiler can do a much better job of estimating whether that's
> advantageous or not.  Just mark functions that are not for export as
> static and the compiler will now have a large range of optimizations
> that it can automatically perform, including but not limited to inlining
> them.  This is a case where having support/helper functions in the same
> .c file as the exportable functions that use them makes a great deal of
> sense.  The key word in the original statement was exportable:
>   
Yeah, except putting all your functions in one file goes against the 
very nature of proper software development strategies.  First off, you 
should be running a profiler anyway if performance is important.  If 
you're not, then you're not very well educated in this field of work. 

That aside, the profiler will tell you where time is spent.  Yes, giving 
the compiler the option to inline or not is "ideal", but putting 100K 
lines in a single file is not.

>> This is why you should re-factor your code as to contain only one [or as
>> few as possible] exportable functions per unit.
>>     
>
> In general the compiler can do the best job when it can see everything
> at once, which is why currently so much work is being poured into
> developing the LTO branch, which will allow the compiler do certain
> optimizations as if the entire program was a single compilation unit
> even though it was compiled separately.
>   
Often the savings, especially on desktop/server class processors, from 
the minutiae of optimizations possible at that level do not outweigh the 
cost to the development process. 

For example, in my math library the modexp function calls external 
mul, sqr, and mod functions (well, Montgomery reduction, but you get the 
point).  So even though they're not inlined (well, they're big so they 
wouldn't be anyway) and you have the overhead of a call, the performance 
is still 99% dominated by what happens inside the calls, not by the call 
itself.  In my case, my multipliers are fully unrolled/inlined since 
that's where the performance is to be had.  So it was worth the 
readability cost (well, they're machine generated anyway) for it.

I question the sanity of an LTO step (if indeed that means it 
re-organizes the object code at link time).  It'll make debugging harder 
when supposedly non-inlined code gets inlined, or causes other nasties 
(e.g. picking up a constant from another module and then removing dead 
code, etc).

I think most people would prefer their object files to be representative 
of the compiler input.

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-28  7:57 Duft Markus
@ 2007-11-28 12:01 ` J.C. Pizarro
  2007-11-28 12:28   ` Tom St Denis
  2007-11-28 12:12 ` John (Eljay) Love-Jensen
  2007-11-28 12:18 ` Tom St Denis
  2 siblings, 1 reply; 69+ messages in thread
From: J.C. Pizarro @ 2007-11-28 12:01 UTC (permalink / raw)
  To: Duft Markus, gcc-help

On 2007/11/28, Duft Markus <Markus.Duft@salomon.at> wrote:
> Hi!
>
> I assume that all strategies discussed here are targeted at C.  Now
> what about C++; how do things behave there?  As far as I know, C++ is
> much different and requires completely different thinking with regard
> to splitting source into more files, etc.
>
> Cheers, Markus

Your comment is good.

Splitting C files is different from splitting C++ files, Java files,
Fortran, Ada, ObjC, ....

As GCC is written in C only, we only need to split C files to reduce the
recompilation time if we want.

For other projects written in C++, Java, Fortran, Ada, ObjC, ....,
their files are harder to split.

It would be a good idea to have a future implementation of GCC that says:
"you don't need to split your source files, because
 GCC automatically recognizes the modified and unmodified parts of each file".

But such a future implementation, which would spare the programmer from
splitting source files, is hard too!

   J.C.Pizarro

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 19:49         ` Tom St Denis
@ 2007-11-28  9:19           ` Brian Dessent
  2007-11-28 12:07             ` Tom St Denis
  0 siblings, 1 reply; 69+ messages in thread
From: Brian Dessent @ 2007-11-28  9:19 UTC (permalink / raw)
  To: Tom St Denis; +Cc: NightStrike, J.C. Pizarro, Galloth, gcc-help

Tom St Denis wrote:

> What you really should do, is profile your code, then create "static
> inline" or macro copies of heavily used (and not overly large) pieces of
> code.  And even then, inlining code doesn't always help.

You don't have to go to the trouble of inlining things manually, the
compiler can do a much better job of estimating whether that's
advantageous or not.  Just mark functions that are not for export as
static and the compiler will now have a large range of optimizations
that it can automatically perform, including but not limited to inlining
them.  This is a case where having support/helper functions in the same
.c file as the exportable functions that use them makes a great deal of
sense.  The key word in the original statement was exportable:

> This is why you should re-factor your code as to contain only one [or as
> few as possible] exportable functions per unit.

In general the compiler can do the best job when it can see everything
at once, which is why currently so much work is being poured into
developing the LTO branch, which will allow the compiler to do certain
optimizations as if the entire program was a single compilation unit
even though it was compiled separately.
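A minimal sketch of the static-helper pattern (the function names are made up for illustration):

```c
#include <assert.h>

/* The helper is static, so the compiler knows every call site lives in
   this translation unit and is free to inline or specialize it. */
static int clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Only this function is exported from the unit. */
int normalize_percent(int v)
{
    return clamp(v, 0, 100);
}
```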

Brian

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: reduce compilation times?
@ 2007-11-28  7:57 Duft Markus
  2007-11-28 12:01 ` J.C. Pizarro
                   ` (2 more replies)
  0 siblings, 3 replies; 69+ messages in thread
From: Duft Markus @ 2007-11-28  7:57 UTC (permalink / raw)
  To: Tom St Denis, NightStrike; +Cc: J.C. Pizarro, Galloth, gcc-help

Hi!

I assume that all strategies discussed here are targeted at C.  Now what
about C++; how do things behave there?  As far as I know, C++ is much
different and requires completely different thinking with regard to
splitting source into more files, etc.

Cheers, Markus

Tom St Denis <> wrote:
> NightStrike wrote:
>> On Nov 27, 2007 11:43 AM, Tom St Denis <tstdenis@ellipticsemi.com>
>> wrote: 
>> 
>>> This is why you should re-factor your code as to contain only one
>>> [or as few as possible] exportable functions per unit.
>>> 
>> 
>> Just so I understand (and I realize that this would not be done), but
>> let's say that I have a machine that can compile extraordinarily
>> quickly, and compile time was not a factor.  Is there a difference in
>> the speed of the resulting program when everything is split into many
>> object files instead of being combined into a single main.c, or is
>> the resulting binary identical bit for bit?
>> 
> 
> The only time it would matter (legally) is if there was inline'ing. 
> And really, you should be setting that up yourself with the "inline"
> tag (or macros).
> 
> suppose you had something like
> 
> int myfunc(int x)
> {
>    return x * x + x * x;
> }
> 
> and you only called it from main like
> 
> int main(void)
> {
>    int res;
>    res = myfunc(0);
> }
> 
> Can the compiler special case optimize it?  Well, strictly yes, the
> compiler could inline "myfunc" then reduce it.  Suppose "myfunc" is
> more complicated or larger and it couldn't be inlined.  If the
> compiler could determine the result at buildtime it would be legal to
> optimize it out, but if it can't it won't and it will call the
> function.  So really, in all but the trivial cases [dead code, etc]
> having everything in one unit, especially when your functions aren't
> static, won't help reduce code size or speed.
> 
> What you really should do, is profile your code, then create "static
> inline" or macro copies of heavily used (and not overly large) pieces
> of code.  And even then, inlining code doesn't always help.
> 
> Putting everything in one big file has several disadvantages though:
> 
> -  It increases build time, every time you build it [which could be
> 1000s of times]
> -  It makes content control harder since you have to lock larger
> portions of the project to work on it
> -  It makes editing harder as you have more to scroll/look through
> -  It decreases [not always though] the ability to use smart linking,
> which can increase image size
> -  It makes building on smaller machines [with less ram, slower
> processors, etc] harder
> 
> Ideally, but this isn't a hard set rule, you want to keep each source
> file under 200-300 lines (excluding tables).  It's not a sin to
> violate it here or there where it makes sense.  Most of the time
> though, it's a good idea to try for it.
> 
> In both of my OSS projects, the average file has 1 function in it, and
> is ~150-200 lines per file.  The exceptions being machine generated
> code (e.g. unrolled multipliers), and lookup tables for
> hashes/ciphers. 
> 
> Tom


-- 
5. Dezember 2007
Salomon Automation am  Schweizer Forum fur Logistik, Lausanne, CH




Salomon Automation GmbH - Friesachstrasse 15 - A-8114 Friesach bei Graz
Sitz der Gesellschaft: Friesach bei Graz
UID-NR:ATU28654300 - Firmenbuchnummer: 49324 K
Firmenbuchgericht: Landesgericht fur Zivilrechtssachen Graz

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
       [not found]       ` <998d0e4a0711271310k657b791cy6ad5cc5721105f4c@mail.gmail.com>
@ 2007-11-27 22:30         ` J.C. Pizarro
  0 siblings, 0 replies; 69+ messages in thread
From: J.C. Pizarro @ 2007-11-27 22:30 UTC (permalink / raw)
  To: gcc-help

On 2007/11/27, Vladimir Vassilev <vladimir@logicom-bg.com> wrote:
> J.C. Pizarro wrote:
> > It's the same problem as "Nine Women Can't Have a Baby in One Month"
> >
> > "Nine Cores Can't Reduce a Compilation of 9 Seconds to One Second"
> >
> If the project is composed of reasonably sized files and there are
> enough of them, a parallel build scales well.  The "make -j" option is
> a very powerful tool, and not only for compilation: we have been using
> it for rendering and other time-consuming jobs as well.  It basically
> allows you to build any target described in a Makefile in parallel.
> Implementing an MPI-based plugin module for GNU Make (the -j option
> comes with a well-defined interface for spawning the "threads", and it
> is easy to implement that for any MPI- or even socket-interconnected
> set of hosts), in combination with a shared file system, has allowed us
> to build various time-consuming projects on computer clusters with
> almost linear scaling.
>
> Vladimir

About scaling: we can use LiveCDs to measure compile times on clusters
or virtual clusters of machines without much effort.

These LiveCDs can be ParallelKnoppix, Knoppix OpenSSI, Clusterix,
ClusterKnoppix, Scientific, Quantian, Rocks Cluster, VMKnoppix, Xenoppix, ..

On 2007/11/27, Sven Eschenberg <eschenb@cs.uni-frankfurt.de> wrote:
> I am not sure about ccache, but I thought it does some file and
> preprocessing caching (not exactly sure how it works; I thought it
> kinda gets called instead of the preprocessor, or at least before the PP).

Combining distcc & ccache is a little complicated, because there are
four configurations to consider:

    1. no distcc, no ccache
    2. ccache only
    3. distcc only
    4. both, in either order:
       - 1st ccache on the local machine, then 2nd distcc, or
       - 1st distcc, then 2nd ccache on the remote machine

How do we redistribute the ccache data across the machines?

The quick and dirty strategy is to use "rsync" between the
/ramdisk/tmps/ccache_data/ dirs of the participating remote machines.

That replicates and merges the local ccache data of each machine to
every machine in the cluster.

I suppose a "distributed distccache" for GCC doesn't exist yet.
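A rough sketch of that quick-and-dirty rsync strategy (the host names are invented, and passwordless SSH between the nodes is assumed):

```shell
# Merge this machine's ccache with each remote machine's cache.
sync_ccache() {
    cache_dir="$1"; shift
    for h in "$@"; do
        # push our entries to the remote cache ...
        rsync -a "$cache_dir/" "$h:$cache_dir/"
        # ... then pull theirs back, so both sides hold the union
        rsync -a "$h:$cache_dir/" "$cache_dir/"
    done
}

# e.g.: sync_ccache /ramdisk/tmps/ccache_data node1 node2 node3
```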

   J.C.Pizarro

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 19:35       ` NightStrike
  2007-11-27 19:41         ` John (Eljay) Love-Jensen
@ 2007-11-27 19:49         ` Tom St Denis
  2007-11-28  9:19           ` Brian Dessent
  1 sibling, 1 reply; 69+ messages in thread
From: Tom St Denis @ 2007-11-27 19:49 UTC (permalink / raw)
  To: NightStrike; +Cc: J.C. Pizarro, Galloth, gcc-help

NightStrike wrote:
> On Nov 27, 2007 11:43 AM, Tom St Denis <tstdenis@ellipticsemi.com> wrote:
>   
>> This is why you should re-factor your code as to contain only one [or as
>> few as possible] exportable functions per unit.
>>     
>
> Just so I understand (and I realize that this would not be done), but
> let's say that I have a machine that can compile extraordinarily
> quickly, and compile time was not a factor.  Is there a difference in
> the speed of the resulting program when everything is split into many
> object files instead of being combined into a single main.c, or is the
> resulting binary identical bit for bit?
>   

The only time it would matter (legally) is if there was inline'ing.  And 
really, you should be setting that up yourself with the "inline" tag (or 
macros).

suppose you had something like

int myfunc(int x)
{
   return x * x + x * x;
}

and you only called it from main like

int main(void)
{
   int res;
   res = myfunc(0);
}

Can the compiler special case optimize it?  Well, strictly yes, the 
compiler could inline "myfunc" then reduce it.  Suppose "myfunc" is more 
complicated or larger and it couldn't be inlined.  If the compiler could 
determine the result at buildtime it would be legal to optimize it out, 
but if it can't it won't and it will call the function.  So really, in 
all but the trivial cases [dead code, etc] having everything in one 
unit, especially when your functions aren't static, won't help reduce 
code size or speed.

What you really should do, is profile your code, then create "static 
inline" or macro copies of heavily used (and not overly large) pieces of 
code.  And even then, inlining code doesn't always help. 

Putting everything in one big file has several disadvantages though:

-  It increases build time, every time you build it [which could be 
1000s of times]
-  It makes content control harder since you have to lock larger 
portions of the project to work on it
-  It makes editing harder as you have more to scroll/look through
-  It decreases [not always though] the ability to use smart linking, 
which can increase image size
-  It makes building on smaller machines [with less ram, slower 
processors, etc] harder

Ideally, but this isn't a hard set rule, you want to keep each source 
file under 200-300 lines (excluding tables).  It's not a sin to violate 
it here or there where it makes sense.  Most of the time though, it's a 
good idea to try for it.

In both of my OSS projects, the average file has 1 function in it, and 
is ~150-200 lines per file.  The exceptions being machine generated code 
(e.g. unrolled multipliers), and lookup tables for hashes/ciphers. 

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* RE: reduce compilation times?
  2007-11-27 19:35       ` NightStrike
@ 2007-11-27 19:41         ` John (Eljay) Love-Jensen
  2007-11-27 19:49         ` Tom St Denis
  1 sibling, 0 replies; 69+ messages in thread
From: John (Eljay) Love-Jensen @ 2007-11-27 19:41 UTC (permalink / raw)
  To: NightStrike, Tom St Denis; +Cc: J.C. Pizarro, Galloth, gcc-help

Hi NightStrike,

> Is there a difference in the speed of the resulting program when everything is split into many object files instead of being combined into a single main.c ...?

There may be a small negative performance impact in a resulting program that is split into many object files instead of being combined into a single main.c.  (My expectation is that, overall, the performance impact will be negligible if it is even measurable.)

There may be a few interoperating routines that are strongly negatively impacted by being split into many object files instead of being combined into a single object file.  (My expectation for these particular routines is that they should be heavily optimized, perhaps even being rewritten in lovingly handcrafted assembly -- assuming your assembly chops are superior to the optimizing compiler's amazing optimizations.)  If not rewritten in hand-coded assembly, at least having the performance-critical routines' code hand-tweaked to allow the GCC optimizer to do its best to optimize it would be prudent (including using inline functions, and avoiding the anti-patterns that cripple optimization).

The way to assess those routines is through profiling.

GCC does not do "holistic" optimizations (yet).  In contrast, LLVM does "holistic" optimizations.

>... or is the resulting binary identical bit for bit?

No, the resulting binary is not identical bit-for-bit.

It should be identical output for identical input.  (Barring non-compliant or undefined behavior code, of course.)

HTH,
--Eljay

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 16:46     ` Tom St Denis
  2007-11-27 17:16       ` J.C. Pizarro
@ 2007-11-27 19:35       ` NightStrike
  2007-11-27 19:41         ` John (Eljay) Love-Jensen
  2007-11-27 19:49         ` Tom St Denis
  1 sibling, 2 replies; 69+ messages in thread
From: NightStrike @ 2007-11-27 19:35 UTC (permalink / raw)
  To: Tom St Denis; +Cc: J.C. Pizarro, Galloth, gcc-help

On Nov 27, 2007 11:43 AM, Tom St Denis <tstdenis@ellipticsemi.com> wrote:
> This is why you should re-factor your code as to contain only one [or as
> few as possible] exportable functions per unit.

Just so I understand (and I realize that this would not be done), but
let's say that I have a machine that can compile extraordinarily
quickly, and compile time was not a factor.  Is there a difference in
the speed of the resulting program when everything is split into many
object files instead of being combined into a single main.c, or is the
resulting binary identical bit for bit?

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 17:46         ` Tom St Denis
@ 2007-11-27 18:26           ` Wesley Smith
  0 siblings, 0 replies; 69+ messages in thread
From: Wesley Smith @ 2007-11-27 18:26 UTC (permalink / raw)
  To: Tom St Denis; +Cc: J.C. Pizarro, gcc-help

You can also use precompiled headers which can be especially useful
for large amounts on template code:

http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html

wes

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 17:16       ` J.C. Pizarro
@ 2007-11-27 17:46         ` Tom St Denis
  2007-11-27 18:26           ` Wesley Smith
  0 siblings, 1 reply; 69+ messages in thread
From: Tom St Denis @ 2007-11-27 17:46 UTC (permalink / raw)
  To: J.C. Pizarro; +Cc: gcc-help

J.C. Pizarro wrote:
> On 2007/11/27, Tom St Denis <tstdenis@ellipticsemi.com> wrote:
>   
>> This is why you should re-factor your code as to contain only one [or as
>> few as possible] exportable functions per unit.
>>
>> If you write an entire 100K line program as "main.c" of course you'll be
>> hit by slow compiles.
>>
>> But if you factor the code you can get good savings.  For instance, one
>> of my OSS projects (if you know who I am you know what I'm talking
>> about) is ~50K lines and compiles in ~29 seconds on a pentium 4.  It
>> builds in 8 seconds a quad-core Intel Core2.  For most files [units] I
>> only have one function, so the line count per file is on average ~200 or so.
>>     
>
> It's a good idea to refactor the code and to split many functions
> across many files (e.g. one file per function), with the objective of
> reducing recompile time (many compiled objects then don't need to be
> recompiled).
>
> GCC needs LTO (Link Time Optimization), too.
>   
It also carries the benefit of making it easier to work with others, as 
you're putting locks on smaller portions of the overall project.

As for link time optimizations, the only downside really is inlining, 
and you can always use #define macros for that.
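For instance, as a trivial illustration (not from any particular project):

```c
#include <assert.h>

/* The macro is textually expanded at every use site, even across
   translation units, with no help needed from the linker. */
#define SQUARE(x) ((x) * (x))

/* The function form may or may not be inlined across units without
   link-time optimization. */
static inline int square(int x)
{
    return x * x;
}
```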

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 16:41   ` J.C. Pizarro
  2007-11-27 16:46     ` Tom St Denis
@ 2007-11-27 17:44     ` Vladimir Vassilev
       [not found]       ` <998d0e4a0711271310k657b791cy6ad5cc5721105f4c@mail.gmail.com>
  1 sibling, 1 reply; 69+ messages in thread
From: Vladimir Vassilev @ 2007-11-27 17:44 UTC (permalink / raw)
  To: J.C. Pizarro, gcc-help

J.C. Pizarro wrote:
> It's the same problem as "Nine Women Can't Have a Baby in One Month"
>
> "Nine Cores Can't Reduce a Compilation of 9 Seconds to One Second"
>   
If the project is composed of reasonably sized files and there are
enough of them, a parallel build scales well.  The "make -j" option is
a very powerful tool, and not only for compilation: we have been using
it for rendering and other time-consuming jobs as well.  It basically
allows you to build any target described in a Makefile in parallel.
Implementing an MPI-based plugin module for GNU Make (the -j option
comes with a well-defined interface for spawning the "threads", and it
is easy to implement that for any MPI- or even socket-interconnected
set of hosts), in combination with a shared file system, has allowed us
to build various time-consuming projects on computer clusters with
almost linear scaling.

Vladimir


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 16:46     ` Tom St Denis
@ 2007-11-27 17:16       ` J.C. Pizarro
  2007-11-27 17:46         ` Tom St Denis
  2007-11-27 19:35       ` NightStrike
  1 sibling, 1 reply; 69+ messages in thread
From: J.C. Pizarro @ 2007-11-27 17:16 UTC (permalink / raw)
  To: Tom St Denis, gcc-help

On 2007/11/27, Tom St Denis <tstdenis@ellipticsemi.com> wrote:
> This is why you should re-factor your code as to contain only one [or as
> few as possible] exportable functions per unit.
>
> If you write an entire 100K line program as "main.c" of course you'll be
> hit by slow compiles.
>
> But if you factor the code you can get good savings.  For instance, one
> of my OSS projects (if you know who I am you know what I'm talking
> about) is ~50K lines and compiles in ~29 seconds on a pentium 4.  It
> builds in 8 seconds a quad-core Intel Core2.  For most files [units] I
> only have one function, so the line count per file is on average ~200 or so.

It's a good idea to refactor the code and to split many functions
across many files (e.g. one file per function), with the objective of
reducing recompile time (many compiled objects then don't need to be
recompiled).

GCC needs LTO (Link Time Optimization), too.

   J.C.Pizarro

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 16:41   ` J.C. Pizarro
@ 2007-11-27 16:46     ` Tom St Denis
  2007-11-27 17:16       ` J.C. Pizarro
  2007-11-27 19:35       ` NightStrike
  2007-11-27 17:44     ` Vladimir Vassilev
  1 sibling, 2 replies; 69+ messages in thread
From: Tom St Denis @ 2007-11-27 16:46 UTC (permalink / raw)
  To: J.C. Pizarro; +Cc: Galloth, gcc-help

J.C. Pizarro wrote:
> 2007/11/27, Galloth <lordgalloth@gmail.com> wrote:
>   
>>> * to put more machines with more cores per chip (quadcore?),
>>>    bigger caches (8 MiB L2?) and higher frequencies
>>>       
>> Does it means that gcc can use several cores for one compilation (If
>> yes, how to activate this, please) or this is the same idea as using
>> make -j (several compilations at once)
>>     
>
> It's the same problem as "Nine Women Can't Have a Baby in One Month"
>
> "Nine Cores Can't Reduce a Compilation of 9 Seconds to One Second"
>
> I believe that it's possible to reduce it, but it's very hard.
>   

This is why you should re-factor your code as to contain only one [or as 
few as possible] exportable functions per unit. 

If you write an entire 100K line program as "main.c" of course you'll be 
hit by slow compiles.

But if you factor the code you can get good savings.  For instance, one 
of my OSS projects (if you know who I am you know what I'm talking 
about) is ~50K lines and compiles in ~29 seconds on a pentium 4.  It 
builds in 8 seconds a quad-core Intel Core2.  For most files [units] I 
only have one function, so the line count per file is on average ~200 or so.

Tom

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
       [not found] ` <5abcb5650711270804o171e1facr565beec70314af75@mail.gmail.com>
@ 2007-11-27 16:41   ` J.C. Pizarro
  2007-11-27 16:46     ` Tom St Denis
  2007-11-27 17:44     ` Vladimir Vassilev
  0 siblings, 2 replies; 69+ messages in thread
From: J.C. Pizarro @ 2007-11-27 16:41 UTC (permalink / raw)
  To: Galloth, gcc-help

2007/11/27, Galloth <lordgalloth@gmail.com> wrote:
> > * to put more machines with more cores per chip (quadcore?),
> >    bigger caches (8 MiB L2?) and higher frequencies
> Does it means that gcc can use several cores for one compilation (If
> yes, how to activate this, please) or this is the same idea as using
> make -j (several compilations at once)

It's the same problem as "Nine Women Can't Have a Baby in One Month"

"Nine Cores Can't Reduce a Compilation of 9 Seconds to One Second"

I believe that it's possible to reduce it, but it's very hard.

     [ using 9 hardware threads
      (in hw, with a complex combination of traps, paging and cores)
       instead of 9 software threads in one process, or 9 processes ]

    J.C.Pizarro

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 16:19 ` Brian Dessent
@ 2007-11-27 16:26   ` J.C. Pizarro
  0 siblings, 0 replies; 69+ messages in thread
From: J.C. Pizarro @ 2007-11-27 16:26 UTC (permalink / raw)
  To: gcc-help, Brian Dessent

2007/11/27, Brian Dessent <brian@dessent.net> wrote:
> "J.C. Pizarro" wrote:
>
> > * to use -O3 -fomit-frame-pointer -funroll-loops -finline-functions -fpeel-loops
>
> This is exactly the opposite of what you should do if you're trying to
> reduce compile time.  -O3 includes marginal optimizations that increase
> compile time and usually do not have substantial benefit.  -O2 is meant
> as a balance between decent optimization and compile speed.  If you want
> faster compile speed then you use -O1 or -O0, or (sadly) use an older
> version of gcc.  You certainly don't tell the compiler to try wild and
> crazy stuff, just like you don't use gzip -9 if compression speed
> matters.

1) For a decent released GCC: -O3 ... etc.
2) For recompiles of the GCC snapshots, built using the GCC from 1):
         ../configure --disable-bootstrap ... and -O0

Is that a good idea?

Another thing: if there is no time to convert shared libs to static
libs, then use the prelinking howto from Mandriva (the old Mandrake).

   J.C.Pizarro

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
  2007-11-27 16:07 J.C. Pizarro
@ 2007-11-27 16:19 ` Brian Dessent
  2007-11-27 16:26   ` J.C. Pizarro
       [not found] ` <5abcb5650711270804o171e1facr565beec70314af75@mail.gmail.com>
  1 sibling, 1 reply; 69+ messages in thread
From: Brian Dessent @ 2007-11-27 16:19 UTC (permalink / raw)
  To: J.C. Pizarro; +Cc: Sven Eschenberg, gcc-help

"J.C. Pizarro" wrote:

> * to use -O3 -fomit-frame-pointer -funroll-loops -finline-functions -fpeel-loops

This is exactly the opposite of what you should do if you're trying to
reduce compile time.  -O3 includes marginal optimizations that increase
compile time and usually do not have substantial benefit.  -O2 is meant
as a balance between decent optimization and compile speed.  If you want
faster compile speed then you use -O1 or -O0, or (sadly) use an older
version of gcc.  You certainly don't tell the compiler to try wild and
crazy stuff, just like you don't use gzip -9 if compression speed
matters.

> * to use SSE2/SSE/AltiVec (SSE3 is a little bit slower)
> * to disable shared libraries (it reduces linking time and paging
> time), to enable -static
> * to recompile the shared libraries (that it depends on) to static libraries
> * to disable threads (they reduce I/O bandwidth) if the program
>   doesn't require threading
> * to disable checking (-DNDEBUG)
> * to optimize their sources after profiling the run test suite
> (-pg, gprof)

These all have to do with runtime performance, not compile time. 
Please, read the question.

Brian

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: reduce compilation times?
@ 2007-11-27 16:07 J.C. Pizarro
  2007-11-27 16:19 ` Brian Dessent
       [not found] ` <5abcb5650711270804o171e1facr565beec70314af75@mail.gmail.com>
  0 siblings, 2 replies; 69+ messages in thread
From: J.C. Pizarro @ 2007-11-27 16:07 UTC (permalink / raw)
  To: Sven Eschenberg, gcc-help

On 2007/11/27, Sven Eschenberg <eschenb@cs.uni-frankfurt.de> wrote:
> Aside from using -j on HT/multicore/multi-CPU systems and ccache, it might help to put
> the source code into a ramdisk for compilation (no ccache needed then), or at least
> the build directory, for all the temporary stuff.
>
> -Sven

Here is the list of ways to try to reduce the compilation times:
* to put in more RAM with higher frequencies and lower latencies
* to put more machines with more cores per chip (quadcore?),
   bigger caches (8 MiB L2?) and higher frequencies
* to link /tmp to /ramdisk/tmp (mount -t tmpfs)
* to configure the kernel for SMP workloads
* distcc
* make -j N
* ccache
* to recompile an optimized gcc, binutils, ELF loader ld, ...
* to use strip --strip-all
* to use -O3 -fomit-frame-pointer -funroll-loops -finline-functions -fpeel-loops
* to use SSE2/SSE/AltiVec (SSE3 is a little bit slower)
* to disable shared libraries (it reduces linking time and paging
time), to enable -static
* to recompile the shared libraries (that it depends on) to static libraries
* to disable threads (they reduce I/O bandwidth) if the program
  doesn't require threading
* to disable checking (-DNDEBUG)
* to optimize their sources after profiling the run test suite
(-pg, gprof)
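For the distcc / ccache / make -j items, the tools are commonly chained like this (the host names are invented, and the make line is shown as a comment since it needs a real source tree):

```shell
# ccache checks its local cache first; on a miss it hands the real
# compile to distcc via CCACHE_PREFIX, which farms it out to the hosts.
export CCACHE_PREFIX=distcc
export DISTCC_HOSTS="localhost node1/4 node2/4"   # /4 = max jobs per host

# then drive the build through ccache with parallel jobs, e.g.:
# make -j12 CC="ccache gcc"
```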

Sincerely, J.C.Pizarro

^ permalink raw reply	[flat|nested] 69+ messages in thread

end of thread, other threads:[~2007-12-05 10:29 UTC | newest]

Thread overview: 69+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-11-27 10:04 reduce compilation times? mahmoodn
2007-11-27 11:11 ` Andrew Haley
2007-11-27 11:15   ` mahmoodn
2007-11-27 11:30     ` Andrew Haley
2007-11-27 12:20       ` mahmoodn
2007-11-27 12:25         ` John Love-Jensen
2007-11-27 15:27           ` Tim Prince
2007-11-27 14:07         ` Andrew Haley
2007-11-28  9:01           ` mahmoodn
2007-11-28 12:11             ` John (Eljay) Love-Jensen
2007-11-30  9:15               ` mahmoodn
2007-11-30 13:33                 ` mahmoodn
2007-11-27 15:48   ` Sven Eschenberg
2007-11-27 16:27     ` Andrew Haley
2007-11-27 18:51       ` Sven Eschenberg
2007-11-27 19:21         ` Andrew Haley
2007-11-27 20:43           ` Sven Eschenberg
2007-12-01 12:20   ` mahmoodn
2007-12-03 16:14     ` Andrew Haley
2007-12-04 11:23       ` mahmoodn
2007-12-04 12:19         ` Tom Browder
2007-12-05  7:44           ` mahmoodn
2007-12-05 10:24             ` Tom Browder
2007-12-05 10:29               ` mahmoodn
2007-11-27 13:48 ` John Love-Jensen
2007-11-27 16:07 J.C. Pizarro
2007-11-27 16:19 ` Brian Dessent
2007-11-27 16:26   ` J.C. Pizarro
     [not found] ` <5abcb5650711270804o171e1facr565beec70314af75@mail.gmail.com>
2007-11-27 16:41   ` J.C. Pizarro
2007-11-27 16:46     ` Tom St Denis
2007-11-27 17:16       ` J.C. Pizarro
2007-11-27 17:46         ` Tom St Denis
2007-11-27 18:26           ` Wesley Smith
2007-11-27 19:35       ` NightStrike
2007-11-27 19:41         ` John (Eljay) Love-Jensen
2007-11-27 19:49         ` Tom St Denis
2007-11-28  9:19           ` Brian Dessent
2007-11-28 12:07             ` Tom St Denis
2007-11-28 12:35               ` Brian Dessent
2007-11-27 17:44     ` Vladimir Vassilev
     [not found]       ` <998d0e4a0711271310k657b791cy6ad5cc5721105f4c@mail.gmail.com>
2007-11-27 22:30         ` J.C. Pizarro
2007-11-28  7:57 Duft Markus
2007-11-28 12:01 ` J.C. Pizarro
2007-11-28 12:28   ` Tom St Denis
2007-11-28 12:49     ` Fabian Cenedese
2007-11-28 13:03       ` Tom St Denis
2007-11-28 12:52     ` J.C. Pizarro
2007-11-28 13:17       ` Tom St Denis
2007-11-28 13:40         ` J.C. Pizarro
2007-11-28 13:51           ` Tom St Denis
2007-11-28 13:59             ` Tom St Denis
2007-11-28 15:51             ` John (Eljay) Love-Jensen
2007-11-28 13:30       ` Ted Byers
2007-11-28 12:12 ` John (Eljay) Love-Jensen
2007-11-28 12:31   ` J.C. Pizarro
2007-11-28 12:39     ` Tom St Denis
2007-11-28 12:54     ` John (Eljay) Love-Jensen
2007-11-28 12:18 ` Tom St Denis
2007-11-28 13:09   ` Ted Byers
2007-11-28 12:36 Duft Markus
2007-11-28 13:25 Duft Markus
2007-11-28 13:26 ` Tom St Denis
2007-11-28 13:56 Duft Markus
2007-11-28 14:35 ` Tom St Denis
2007-11-29  0:23 ` Tim Prince
2007-11-28 16:06 J.C. Pizarro
2007-11-28 16:16 ` Tom St Denis
2007-11-28 16:34   ` J.C. Pizarro
2007-11-28 18:18     ` Tom St Denis

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).