public inbox for gcc-help@gcc.gnu.org
* statically linked gcc executables
@ 2008-01-24 18:53 Angelo leto
  2008-01-25  3:51 ` Andrew Haley
  0 siblings, 1 reply; 19+ messages in thread
From: Angelo leto @ 2008-01-24 18:53 UTC (permalink / raw)
  To: gcc-help

Hi, I'm trying to build all the gcc executables statically in order to
generate a portable compiler package; in particular I need a package
which does not depend on a specific dynamic loader version
(ld-linux.so.2). Could you please help me find a way to do this?
For instance, I can run gcc using the command "ld-linux.so.2
~/mygcc/usr/bin/c++", but c++ then calls cc1plus, which also needs
ld-linux.so.2 ....
Many thanks for any help.
bye


* Re: statically linked gcc executables
  2008-01-24 18:53 statically linked gcc executables Angelo leto
@ 2008-01-25  3:51 ` Andrew Haley
  2008-01-25  6:30   ` Angelo Leto
  0 siblings, 1 reply; 19+ messages in thread
From: Andrew Haley @ 2008-01-25  3:51 UTC (permalink / raw)
  To: Angelo leto; +Cc: gcc-help

Angelo leto wrote:
> Hi, I'm trying to build all the gcc executables statically in order to
> generate a portable compiler package; in particular I need a package
> which does not depend on a specific dynamic loader version
> (ld-linux.so.2). Could you please help me find a way to do this?
> For instance, I can run gcc using the command "ld-linux.so.2
> ~/mygcc/usr/bin/c++", but c++ then calls cc1plus, which also needs
> ld-linux.so.2 ....

The short answer is to set the makefile args so that gcc links with
-static.  Simply "make LDFLAGS=-static" might work for you.
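
For example, a minimal sketch (untested; the prefix, languages, and
build directory are only illustrations, adjust them to your setup):

  $ tar xjf gcc-X.Y.Z.tar.bz2
  $ mkdir obj && cd obj
  $ ../gcc-X.Y.Z/configure --prefix=$HOME/mygcc --enable-languages=c,c++
  $ make LDFLAGS=-static
  $ make install

Check the result with ldd afterwards; if a binary is still dynamic,
the flag didn't reach its link step.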

The long answer:

Usually, people who want to do this don't know what they're doing, and
people who do know how to do it wouldn't consider doing it because
they know all the problems it will cause.

When you build gcc you're building it for a specific host/target
combination, and configure autodetects properties of both.  It doesn't
usually make much sense to use a gcc that's been built for one host on
a different host.

Sometimes, however, people build gcc on an old operating system
version and it will run on a newer version.  That makes sense for
cross-compilers, in particular.

So, can I ask you what you are really trying to do?  Is it that you
really need to run on some ancient Linux that really doesn't have
ld-linux.so.2?

Andrew.


* Re: statically linked gcc executables
  2008-01-25  3:51 ` Andrew Haley
@ 2008-01-25  6:30   ` Angelo Leto
  2008-01-25  9:20     ` Ted Byers
                       ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Angelo Leto @ 2008-01-25  6:30 UTC (permalink / raw)
  To: Andrew Haley; +Cc: gcc-help

On Jan 24, 2008 12:18 PM, Andrew Haley <aph@redhat.com> wrote:
>
> Angelo leto wrote:
> > Hi, I'm trying to build all the gcc executables statically in order to
> > generate a portable compiler package; in particular I need a package
> > which does not depend on a specific dynamic loader version
> > (ld-linux.so.2). Could you please help me find a way to do this?
> > For instance, I can run gcc using the command "ld-linux.so.2
> > ~/mygcc/usr/bin/c++", but c++ then calls cc1plus, which also needs
> > ld-linux.so.2 ....
>
> The short answer is to set the makefile args so that gcc links with
> -static.  Simply "make LDFLAGS=-static" might work for you.

I already tried this, but it seems not to work.

>
> The long answer:
>
> Usually, people who want to do this don't know what they're doing, and
> people who do know how to do it wouldn't consider doing it because
> they know all the problems it will cause.

Question: what kind of problems could arise if I build gcc without
architecture-specific optimizations?

>
> When you build gcc you're building it for a specific host/target
> combination, and configure autodetects properties of both.  It doesn't
> usually make much sense to use a gcc that's been built for one host on
> a different host.

I'm working on applications which are data critical, so when I change
a library on the system there is a risk that results may be
different. So I create a repository with the critical libraries, and I
upgrade the libraries in the repository only when it is needed,
independently from the system libraries (I do this in order to upgrade
the productivity tools and their related libraries without touching
the libraries linked by my application). Obviously when I change
the compiler I obtain different results from my applications, so my idea
is to create a "development package" which includes my critical
libraries and also the compiler, in order to obtain the same results
(always using the same optimization flags) from my application even
when I'm compiling on different Linux installations.
I guess that the same gcc static binary (e.g. compiled for the generic
i386 architecture) should give me the same output in different Linux
environments running on i386 machines. Is there any reason why
this might not be true?

>
> Sometimes, however, people build gcc on an old operating system
> version and it will run on a newer version.  That makes sense for
> cross-compilers, in particular.
>
> So, can I ask you what you are really trying to do?  Is it that you really need to run on some ancient Linux that really doesn't have
> ld-linux.so.2?

All the Linux systems on which I run the applications do have
ld-linux.so.2, but it differs from one glibc version to another. For
example, when I use a different version of ld-linux.so.2 I obtain the
following:

....
/home/test/svn/external-pkg/gcc_i386_march_pentium4/usr/libexec/gcc/i686-pc-linux-gnu/4.2.2/cc1plus:
relocation error:/home/test/svn/external-pkg/gcc_i386_march_pentium4/lib/libc.so.6:
symbol _dl_tls_get_addr_soft, version GLIBC_PRIVATE not defined in
file ld-linux.so.2 with link time reference

so I cannot use the same ld-linux.so.2 everywhere, because they are different.
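
(For reference, you can see which loader a binary requests with
readelf; the path below is just an example:

  $ readelf -l ~/mygcc/usr/bin/c++ | grep interpreter
      [Requesting program interpreter: /lib/ld-linux.so.2]

The interpreter path is hardcoded into the ELF header at link time,
which is why cc1plus asks for it again even when the driver is started
through a different loader.)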

Thanks a lot for your help.
Angelo

>
> Andrew.
>
>


* Re: statically linked gcc executables
  2008-01-25  6:30   ` Angelo Leto
@ 2008-01-25  9:20     ` Ted Byers
  2008-01-25 12:26       ` Angelo Leto
  2008-01-29 13:14       ` John Carter
  2008-01-25 10:40     ` Andrew Haley
  2008-01-25 22:10     ` Ian Lance Taylor
  2 siblings, 2 replies; 19+ messages in thread
From: Ted Byers @ 2008-01-25  9:20 UTC (permalink / raw)
  To: Angelo Leto, Andrew Haley; +Cc: gcc-help

--- Angelo Leto <angleto@gmail.com> wrote:
> I'm working on applications which are data critical, so when I change
> a library on the system there is a risk that results may be
> different. So I create a repository with the critical libraries, and I
> upgrade the libraries in the repository only when it is needed,
> independently from the system libraries (I do this in order to upgrade
> the productivity tools and their related libraries without touching
> the libraries linked by my application). Obviously when I change
> the compiler I obtain different results from my applications, so my idea
> is to create a "development package" which includes my critical
> libraries and also the compiler, in order to obtain the same results
> (always using the same optimization flags) from my application even
> when I'm compiling on different Linux installations.

This would make me nervous.  If your program gives different results
when you use different tool chains, that suggests to me that either
your program is broken or the results you're obtaining are affected by
bugs in the libraries you're using.

You're half right.  If your program uses library X,
and  that library has a subtle bug in the function
you're using, then the result you get using a
different library will be different.  The fix is not
to ensure that you use the same library all the time,
but to ensure your test suite is sufficiently well
developed that you can detect such a bug, and use a
different function (even if you have to write it
yourself) that routinely gives you provably correct
answers.

To illustrate, I generally work with number crunching
related to risk assessment.  My programs had better
give me identical results regardless of whether I use
gcc or MS Visual C++ or Intel's compiler, or whatever
other tool might be tried, and on whatever platform. 
I have written code to do numeric integration, compute
the eigenstructure of general matrices, &c.  In each
case, there are well defined mathematical properties
that must be true of the result, and I construct a
test suite that, for example, will apply my
eigensystem calculation code to tens of millions of
random general square matrices (random values and
random size of matrix), and test the result.  My code,
then, is provably correct if it consistently provides
mathematically correct results, and these results will
be the same regardless of the platform and tool chain
used because the mathematics of the problem do not
depend on these things.  Even if you're dealing with
numerically unstable systems (such as a dynamic system
that produces chaos), it ought to give identical
results for identical input.  Something is wrong if it
doesn't, and the fix isn't to ensure the program is
executed always with binaries created from the same
toolchain.  It is to figure out precisely why so you
can fix the program.  Whether the bug is in my program
or in a library I am using, if I do not take
corrective action, my program remains buggy, and I
have yet to see a situation where a program that is
correct gives different results when compiled using
different tools.

I am sorry to say that if one has to resort to the
practices you describe to ensure the same results by
ensuring the same libraries are used, then I would not
consider trusting the program at all.  Rather, use of
such practices suggests QA code for the program is
inadequate to ensure correct results.  I certainly
would not tolerate a situation where I get different
trajectories from a numeric integration, or a
different eigensystem from a given matrix, simply
because I used a different library to compile the
program.  If such a situation arose, then one of the
versions, if not both, is giving mathematically
incorrect results!

HTH

Ted


* Re: statically linked gcc executables
  2008-01-25  6:30   ` Angelo Leto
  2008-01-25  9:20     ` Ted Byers
@ 2008-01-25 10:40     ` Andrew Haley
  2008-01-25 12:38       ` Angelo Leto
  2008-01-25 22:10     ` Ian Lance Taylor
  2 siblings, 1 reply; 19+ messages in thread
From: Andrew Haley @ 2008-01-25 10:40 UTC (permalink / raw)
  To: Angelo Leto; +Cc: gcc-help

Angelo Leto wrote:
> On Jan 24, 2008 12:18 PM, Andrew Haley <aph@redhat.com> wrote:
>> Angelo leto wrote:
>>> Hi, I'm trying to build all the gcc executables statically in order to
>>> generate a portable compiler package; in particular I need a package
>>> which does not depend on a specific dynamic loader version
>>> (ld-linux.so.2). Could you please help me find a way to do this?
>>> For instance, I can run gcc using the command "ld-linux.so.2
>>> ~/mygcc/usr/bin/c++", but c++ then calls cc1plus, which also needs
>>> ld-linux.so.2 ....
>> The short answer is to set the makefile args so that gcc links with
>> -static.  Simply "make LDFLAGS=-static" might work for you.
> 
> I already tried this, but it seems not to work.

It works for me.  You need to tell us in what way it seems not to
work for you.  We can't get far by guessing.

>> The long answer:
>>
>> Usually, people who want to do this don't know what they're doing, and
>> people who do know how to do it wouldn't consider doing it because
>> they know all the problems it will cause.
> 
> Question: what kind of problems could arise if I build gcc without
> architecture-specific optimizations?

None that I'm aware of, but I'm not sure of the relevance of that question.

>> When you build gcc you're building it for a specific host/target
>> combination, and configure autodetects properties of both.  It doesn't
>> usually make much sense to use a gcc that's been built for one host on
>> a different host.
> 
> I'm working on applications which are data critical, so when I change
> a library on the system there is a risk that results may be
> different. So I create a repository with the critical libraries, and I
> upgrade the libraries in the repository only when it is needed,
> independently from the system libraries (I do this in order to upgrade
> the productivity tools and their related libraries without touching
> the libraries linked by my application). Obviously when I change
> the compiler I obtain different results from my applications, so my idea
> is to create a "development package" which includes my critical
> libraries and also the compiler, in order to obtain the same results
> (always using the same optimization flags) from my application even
> when I'm compiling on different Linux installations.

All fine and good, but I don't understand why that requires you to link
gcc itself statically.  gcc doesn't need very many libraries and you
should be able to include the ones you need.

Even better, build gcc dynamically on the older box.

> I guess that the same gcc static binary (e.g. compiled for the generic
> i386 architecture) should give me the same output in different Linux
> environments running on i386 machines. Is there any reason why
> this might not be true?

You have to be very careful when linking _anything_ statically against
libc, in particular.  See http://people.redhat.com/drepper/no_static_linking.html

>> Sometimes, however, people build gcc on an old operating system
>> version and it will run on a newer version.  That makes sense for
>> cross-compilers, in particular.
>>
>> So, can I ask you what you are really trying to do?  Is it that you really need to run on some ancient Linux that really doesn't have
>> ld-linux.so.2?
> 
> All the Linux systems on which I run the applications do have
> ld-linux.so.2, but it differs from one glibc version to another. For
> example, when I use a different version of ld-linux.so.2 I obtain the
> following:
> 
> ....
> /home/test/svn/external-pkg/gcc_i386_march_pentium4/usr/libexec/gcc/i686-pc-linux-gnu/4.2.2/cc1plus:
> relocation error:/home/test/svn/external-pkg/gcc_i386_march_pentium4/lib/libc.so.6:
> symbol _dl_tls_get_addr_soft, version GLIBC_PRIVATE not defined in
> file ld-linux.so.2 with link time reference
> 
> so I cannot use the same ld-linux.so.2 everywhere, because they are different.

The problem here looks like you have mismatched versions of libc and
ld-linux.so.2.  They're a pair: you can't take them from different
releases.

Andrew.


* Re: statically linked gcc executables
  2008-01-25  9:20     ` Ted Byers
@ 2008-01-25 12:26       ` Angelo Leto
  2008-01-29 13:14       ` John Carter
  1 sibling, 0 replies; 19+ messages in thread
From: Angelo Leto @ 2008-01-25 12:26 UTC (permalink / raw)
  To: Ted Byers; +Cc: Andrew Haley, gcc-help

On Jan 24, 2008 3:28 PM, Ted Byers <r.ted.byers@rogers.com> wrote:
> --- Angelo Leto <angleto@gmail.com> wrote:
> > I'm working on applications which are data critical, so when I change
> > a library on the system there is a risk that results may be
> > different. So I create a repository with the critical libraries, and I
> > upgrade the libraries in the repository only when it is needed,
> > independently from the system libraries (I do this in order to upgrade
> > the productivity tools and their related libraries without touching
> > the libraries linked by my application). Obviously when I change
> > the compiler I obtain different results from my applications, so my idea
> > is to create a "development package" which includes my critical
> > libraries and also the compiler, in order to obtain the same results
> > (always using the same optimization flags) from my application even
> > when I'm compiling on different Linux installations.
>
> This would make me nervous.  If your program gives different results
> when you use different tool chains, that suggests to me that either
> your program is broken or the results you're obtaining are affected by
> bugs in the libraries you're using.

maybe the problem is in my application (and/or libraries) and not in
the toolchains, but the situation is the following:
we obtain different results with non-linear algorithms (in the higher
significant bits!!) between gcc 4.1 and gcc 4.2 when using
aggressive optimization flags (e.g. -march=nocona), and I think that is
quite normal if the optimization algorithms change
between the two versions. So if my results are validated for gcc 4.1
with optimization flags, I cannot be sure that with the new
version of gcc 4.2 the results would be the same if the optimization
routines change. I will therefore use the new compiler only when
I'm sure about the results produced by the application compiled with it.
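
(A toy illustration of the kind of effect I mean; this is made-up
example code, not our real application. On i386 the choice between x87
and SSE math, which -march and optimization flags can influence,
changes the result, because x87 keeps intermediates in 80-bit
registers:

  $ cat fp.c
  #include <stdio.h>
  /* volatile stops the compiler from folding the expression away */
  volatile double x = 1e308;
  int main(void) {
      /* x87: x * 10.0 is held in an 80-bit register, so it does not
         overflow before the divide; with SSE every step is rounded
         to a 64-bit double, so x * 10.0 overflows to infinity. */
      printf("%g\n", x * 10.0 / 10.0);
      return 0;
  }
  $ gcc -O2 -mfpmath=387 fp.c -o fp387 && ./fp387     # prints 1e+308
  $ gcc -O2 -msse2 -mfpmath=sse fp.c -o fpsse && ./fpsse   # prints inf

Our real differences come from more complex code, but the mechanism is
similar.)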

>
> You're half right.  If your program uses library X,
> and  that library has a subtle bug in the function
> you're using, then the result you get using a
> different library will be different.  The fix is not
> to ensure that you use the same library all the time,

why not? I would not use the same library "all the time", but only
until my new results are validated.
I mean, that's not the preferred way, but it could be the only way (if
you have time constraints) to guarantee the results already
validated, while you investigate the problem.
Take the case where you have an application that gives you the same
results everywhere and in all environments; then you
upgrade a set of tools which requires some new libraries used by your
application, and your regression testing procedures
say that there is a difference in the results. You need the new
upgraded tools, but you cannot stop compiling your application
until you have solved the problem and validated the results again.
The only effective solution to this problem I found was to
keep the system libraries separated from the development libraries and
upgrade them at different moments and with different
versions when needed. I think the same thing is valid also for toolchains.

> but to ensure your test suite is sufficiently well
> developed that you can detect such a bug, and use a
> different function (even if you have to write it
> yourself) that routinely gives you provably correct
> answers.

True, but meanwhile you cannot stop the whole development process.
The goal is to make the other tools "safely" upgradable without the
risk of introducing unexpected differences in your application, or if
you prefer, to switch to the new library only when you trust the new
output data.
Moreover, writing a very complex algorithm yourself may not be a
feasible step in terms of time; once you have discovered the different
results, you can still use the old library while you write
the new one yourself.

>
> To illustrate, I generally work with number crunching
> related to risk assessment.  My programs had better
> give me identical results regardless of whether I use
> gcc or MS Visual C++ or Intel's compiler, or whatever
> other tool might be tried, and on whatever platform.
> I have written code to do numeric integration, compute
> the eigenstructure of general matrices, &c.  In each
> case, there are well defined mathematical properties
> that must be true of the result, and I construct a
> test suite that, for example, will apply my
> eigensystem calculation code to tens of millions of
> random general square matrices (random values and
> random size of matrix), and test the result.  My code,
> then, is provably correct if it consistently provides
> mathematically correct results, and these results will
> be the same regardless of the platform and tool chain
> used because the mathematics of the problem do not
> depend on these things.  Even if you're dealing with
> numerically unstable systems (such as a dynamic system
> that produces chaos), it ought to give identical
> results for identical input.  Something is wrong if it

in my experience this is true only if you don't use strong optimization flags.

> doesn't, and the fix isn't to ensure the program is
> executed always with binaries created from the same
> toolchain.  It is to figure out precisely why so you
> can fix the program.  Whether the bug is in my program
> or in a library I am using, if I do not take
> corrective action, my program remains buggy, and I
> have yet to see a situation where a program that is
> correct gives different results when compiled using
> different tools.
>
> I am sorry to say that if one has to resort to the
> practices you describe to ensure the same results by
> ensuring the same libraries are used, then I would not
> consider trusting the program at all.  Rather, use of

I partially agree with you: if you upgrade a library and the results
change, this may not be due to your code;
you have already validated your results, they are accurate enough and
fit your model. The point is that with the new
libraries you have introduced a factor of variation. Until you
demonstrate that the
new results are valid, the good results are the previous ones.

> such practices suggests QA code for the program is
> inadequate to ensure correct results.  I certainly
> would not tolerate a situation where I get different
> trajectories from a numeric integration, or a
> different eigensystem from a given matrix, simply
> because I used a different library to compile the
> program.  If such a situation arose, then one of the
> versions, if not both, is giving mathematically
> incorrect results!

thanks for your opinion.
bye
Angelo

>
> HTH
>
> Ted
>


* Re: statically linked gcc executables
  2008-01-25 10:40     ` Andrew Haley
@ 2008-01-25 12:38       ` Angelo Leto
  2008-01-25 13:17         ` Andrew Haley
  0 siblings, 1 reply; 19+ messages in thread
From: Angelo Leto @ 2008-01-25 12:38 UTC (permalink / raw)
  To: Andrew Haley; +Cc: gcc-help

On Jan 24, 2008 5:12 PM, Andrew Haley <aph@redhat.com> wrote:
> Angelo Leto wrote:
> > On Jan 24, 2008 12:18 PM, Andrew Haley <aph@redhat.com> wrote:
> >> Angelo leto wrote:
> >>> Hi, I'm trying to build all the gcc executables statically in order to
> >>> generate a portable compiler package; in particular I need a package
> >>> which does not depend on a specific dynamic loader version
> >>> (ld-linux.so.2). Could you please help me find a way to do this?
> >>> For instance, I can run gcc using the command "ld-linux.so.2
> >>> ~/mygcc/usr/bin/c++", but c++ then calls cc1plus, which also needs
> >>> ld-linux.so.2 ....
> >> The short answer is to set the makefile args so that gcc links with
> >> -static.  Simply "make LDFLAGS=-static" might work for you.
> >
> > I already tried this, but it seems not to work.
>
> It works for me.  You need to tell us in what way it seems not to
> work for you.  We can't get far by guessing.

The steps I executed are:
1) downloaded gcc-4.2.2 from
ftp://ftp.fu-berlin.de/unix/languages/gcc/releases/gcc-4.2.2/gcc-4.2.2.tar.bz2
2) entered gcc-4.2.2
3) make LDFLAGS=-static
4)  /usr/local/src/gcc-4.2.2 # ldd host-i686-pc-linux-gnu/gcc/cc1plus
        linux-gate.so.1 =>  (0xffffe000)
        libc.so.6 => /lib/libc.so.6 (0xb7e94000)
        /lib/ld-linux.so.2 (0xb7fdd000)

>
> >> The long answer:
> >>
> >> Usually, people who want to do this don't know what they're doing, and
> >> people who do know how to do it wouldn't consider doing it because
> >> they know all the problems it will cause.
> >
> > Question: what kind of problems could arise if I build gcc without
> > architecture-specific optimizations?
>
> None that I'm aware of, but I'm not sure of the relevance of that question.
>
> >> When you build gcc you're building it for a specific host/target
> >> combination, and configure autodetects properties of both.  It doesn't
> >> usually make much sense to use a gcc that's been built for one host on
> >> a different host.
> >
> > I'm working on applications which are data critical, so when I change
> > a library on the system there is a risk that results may be
> > different. So I create a repository with the critical libraries, and I
> > upgrade the libraries in the repository only when it is needed,
> > independently from the system libraries (I do this in order to upgrade
> > the productivity tools and their related libraries without touching
> > the libraries linked by my application). Obviously when I change
> > the compiler I obtain different results from my applications, so my idea
> > is to create a "development package" which includes my critical
> > libraries and also the compiler, in order to obtain the same results
> > (always using the same optimization flags) from my application even
> > when I'm compiling on different Linux installations.
>
> All fine and good, but I don't understand why that requires you to link
> gcc itself statically.  gcc doesn't need very many libraries and you
> should be able to include the ones you need.

because I want to run gcc also on Linux installations with different
versions of glibc.

>
> Even better, build gcc dynamically on the older box.

I tried, but it does not work on Linux systems with a newer libc ...

>
> > I guess that the same gcc static binary (e.g. compiled for the generic
> > i386 architecture) should give me the same output in different Linux
> > environments running on i386 machines. Is there any reason why
> > this might not be true?
>
> You have to be very careful when linking _anything_ statically against
> libc, in particular.  See http://people.redhat.com/drepper/no_static_linking.html

I will read it carefully.

>
> >> Sometimes, however, people build gcc on an old operating system
> >> version and it will run on a newer version.  That makes sense for
> >> cross-compilers, in particular.
> >>
> >> So, can I ask you what you are really trying to do?  Is it that you really need to run on some ancient Linux that really doesn't have
> >> ld-linux.so.2?
> >
> > All the Linux systems on which I run the applications do have
> > ld-linux.so.2, but it differs from one glibc version to another. For
> > example, when I use a different version of ld-linux.so.2 I obtain the
> > following:
> >
> > ....
> > /home/test/svn/external-pkg/gcc_i386_march_pentium4/usr/libexec/gcc/i686-pc-linux-gnu/4.2.2/cc1plus:
> > relocation error:/home/test/svn/external-pkg/gcc_i386_march_pentium4/lib/libc.so.6:
> > symbol _dl_tls_get_addr_soft, version GLIBC_PRIVATE not defined in
> > file ld-linux.so.2 with link time reference
> >
> > so I cannot use the same ld-linux.so.2 everywhere, because they are different.
>
> The problem here looks like you have mismatched versions of libc and
> ld-linux.so.2.  They're a pair: you can't take them from different
> releases.

I know; in fact I want to use the gcc compiled against libc 2.6 on a
system with libc 2.3, and to do this you must run c++ using the
command:
ld-linux.so.2 c++
where the loader is the one coming with libc 2.6.
And this works, but gcc calls cc1plus, which also needs to be executed
using the ld-linux.so.2 from libc 2.6.
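
(An alternative I'm evaluating, instead of full static linking:
relink the gcc binaries so that they request the bundled loader
through an absolute, fixed install path. Untested sketch; /opt/mygcc
is just an example location:

  $ make LDFLAGS="-Wl,--dynamic-linker=/opt/mygcc/lib/ld-linux.so.2 -Wl,-rpath,/opt/mygcc/lib"

Then cc1plus would also request the bundled ld-linux.so.2, but the
package only works if it is always installed at /opt/mygcc, since the
interpreter path is hardcoded into each ELF executable.)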

bye.
Angelo
>
> Andrew.
>


* Re: statically linked gcc executables
  2008-01-25 12:38       ` Angelo Leto
@ 2008-01-25 13:17         ` Andrew Haley
  2008-01-25 23:12           ` Angelo Leto
  0 siblings, 1 reply; 19+ messages in thread
From: Andrew Haley @ 2008-01-25 13:17 UTC (permalink / raw)
  To: Angelo Leto; +Cc: gcc-help

Angelo Leto wrote:
> On Jan 24, 2008 5:12 PM, Andrew Haley <aph@redhat.com> wrote:
>> Angelo Leto wrote:
>>> On Jan 24, 2008 12:18 PM, Andrew Haley <aph@redhat.com> wrote:
>>>> Angelo leto wrote:
>>>>> Hi, I'm trying to build all the gcc executables statically in order to
>>>>> generate a portable compiler package; in particular I need a package
>>>>> which does not depend on a specific dynamic loader version
>>>>> (ld-linux.so.2). Could you please help me find a way to do this?
>>>>> For instance, I can run gcc using the command "ld-linux.so.2
>>>>> ~/mygcc/usr/bin/c++", but c++ then calls cc1plus, which also needs
>>>>> ld-linux.so.2 ....
>>>> The short answer is to set the makefile args so that gcc links with
>>>> -static.  Simply "make LDFLAGS=-static" might work for you.
>>> I already tried this, but it seems not to work.
>> It works for me.  You need to tell us in what way it seems not to
>> work for you.  We can't get far by guessing.
> 
> The steps I executed are:
> 1) downloaded gcc-4.2.2 from
> ftp://ftp.fu-berlin.de/unix/languages/gcc/releases/gcc-4.2.2/gcc-4.2.2.tar.bz2
> 2) entered gcc-4.2.2
> 3) make LDFLAGS=-static
> 4)  /usr/local/src/gcc-4.2.2 # ldd host-i686-pc-linux-gnu/gcc/cc1plus
>         linux-gate.so.1 =>  (0xffffe000)
>         libc.so.6 => /lib/libc.so.6 (0xb7e94000)
>         /lib/ld-linux.so.2 (0xb7fdd000)

That's odd, because when I tried it, it worked.  Perhaps that's because
I built without bootstrapping, or maybe because you're building a
different version of gcc.

If you go into the gcc dir,

  # rm cc1plus
  # make cc1plus LDFLAGS=-static

what happens?

I do sympathize, but I think you're doing the wrong thing.  Yes, you are
going to have to have two versions of your gcc binaries, one with
ld-linux.so.2 and one with ld-linux.so.1, but that should be all.

Andrew.


* Re: statically linked gcc executables
  2008-01-25  6:30   ` Angelo Leto
  2008-01-25  9:20     ` Ted Byers
  2008-01-25 10:40     ` Andrew Haley
@ 2008-01-25 22:10     ` Ian Lance Taylor
  2 siblings, 0 replies; 19+ messages in thread
From: Ian Lance Taylor @ 2008-01-25 22:10 UTC (permalink / raw)
  To: Angelo Leto; +Cc: gcc-help

"Angelo Leto" <angleto@gmail.com> writes:

> I'm working on applications which are data critical, so when I change
> a library on the system there is a risk that results may be
> different. So I create a repository with the critical libraries, and I
> upgrade the libraries in the repository only when it is needed,
> independently from the system libraries (I do this in order to upgrade
> the productivity tools and their related libraries without touching
> the libraries linked by my application). Obviously when I change
> the compiler I obtain different results from my applications, so my idea
> is to create a "development package" which includes my critical
> libraries and also the compiler, in order to obtain the same results
> (always using the same optimization flags) from my application even
> when I'm compiling on different Linux installations.
> I guess that the same gcc static binary (e.g. compiled for the generic
> i386 architecture) should give me the same output in different Linux
> environments running on i386 machines. Is there any reason why
> this might not be true?

I recommend building a chroot environment.
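
For example, on a Debian-style host (suite name, mirror, and path are
only illustrations):

  # debootstrap etch /srv/buildroot http://ftp.debian.org/debian
  # mount -t proc proc /srv/buildroot/proc
  # chroot /srv/buildroot /bin/bash
  # apt-get install build-essential

Everything the build sees (compiler, libc, loader, headers) then stays
fixed regardless of the host, and the whole tree can be tarred up and
moved between machines.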

Ian


* Re: statically linked gcc executables
  2008-01-25 13:17         ` Andrew Haley
@ 2008-01-25 23:12           ` Angelo Leto
  2008-01-26  0:56             ` Andrew Haley
  0 siblings, 1 reply; 19+ messages in thread
From: Angelo Leto @ 2008-01-25 23:12 UTC (permalink / raw)
  To: Andrew Haley; +Cc: gcc-help

On Jan 24, 2008 7:53 PM, Andrew Haley <aph@redhat.com> wrote:
>
> Angelo Leto wrote:
> > On Jan 24, 2008 5:12 PM, Andrew Haley <aph@redhat.com> wrote:
> >> Angelo Leto wrote:
> >>> On Jan 24, 2008 12:18 PM, Andrew Haley <aph@redhat.com> wrote:
> >>>> Angelo leto wrote:
> >>>>> Hi, I'm trying to build all the gcc executables statically in order to
> >>>>> generate a portable compiler package; in particular I need a package
> >>>>> which does not depend on a specific dynamic loader version
> >>>>> (ld-linux.so.2). Could you please help me find a way to do this?
> >>>>> For instance, I can run gcc using the command "ld-linux.so.2
> >>>>> ~/mygcc/usr/bin/c++", but c++ then calls cc1plus, which also needs
> >>>>> ld-linux.so.2 ....
> >>>> The short answer is to set the makefile args so that gcc links with
> >>>> -static.  Simply "make LDFLAGS=-static" might work for you.
> >>> I already tried this, but it seems not to work.
> >> It works for me.  You need to tell us in what way it seems not to
> >> work for you.  We can't get far by guessing.
> >
> > The steps I executed are:
> > 1) downloaded gcc-4.2.2 from
> > ftp://ftp.fu-berlin.de/unix/languages/gcc/releases/gcc-4.2.2/gcc-4.2.2.tar.bz2
> > 2) entered gcc-4.2.2
> > 3) make LDFLAGS=-static
> > 4)  /usr/local/src/gcc-4.2.2 # ldd host-i686-pc-linux-gnu/gcc/cc1plus
> >         linux-gate.so.1 =>  (0xffffe000)
> >         libc.so.6 => /lib/libc.so.6 (0xb7e94000)
> >         /lib/ld-linux.so.2 (0xb7fdd000)
>
> That's odd, because when I tried it, it worked.  Perhaps that's because
> I built without bootstrapping, or maybe because you're building a
> different version of gcc.
>
> If you go into the gcc dir,
>
>   # rm cc1plus
>   # make cc1plus LDFLAGS=-static
>
> what happens?

it works if I enter the directory host-i686-pc-linux-gnu/gcc
and then do `make LDFLAGS=-static`,
but if I run the command from the gcc-4.2.2 directory it builds almost
everything dynamically.
Anyway, it's ok now.

>
> I do sympathize, but I think you're doing the wrong thing.  Yes, you are
> going to have to have two versions of your gcc binaries, one with
> ld-linux.so.2 and one with ld-linux.so.1, but that should be all.

no, there are differences in symbols between the ld-linux.so.2 coming
from libc6 2.3.6 (debian 4.0) and the
ld-linux.so.2 coming e.g. from gentoo
(http://distfiles.gentoo.org/distfiles/glibc-2.6.1.tar.bz2), but the
same thing happens also with a newer debian version.
This difference may be due to different building flags, patches ....

########## LINUX BOX 1 debian 4.0 libc 2.3.6.ds1-8 ##############
root@nowhere:/lib# dpkg -s libc6
Package: libc6
Status: install ok installed
....
Maintainer: GNU Libc Maintainers <debian-glibc@lists.debian.org>
Architecture: i386
Source: glibc
Version: 2.3.6.ds1-8

root@nowhere:/lib# dpkg -L libc6
....
/lib/libm.so.6
/lib/libcidn.so.1
/lib/libc.so.6
/lib/libanl.so.1
/lib/ld-linux.so.2            <<< =============
/lib/tls/libutil.so.1
/lib/tls/libresolv.so.2
.....

root@nowhere:/lib# readelf -s /lib/ld-linux.so.2
Symbol table '.dynsym' contains 35 entries:
   Num:    Value  Size Type    Bind   Vis      Ndx Name
     0: 00000000     0 NOTYPE  LOCAL  DEFAULT  UND
     1: 00000790     0 SECTION LOCAL  DEFAULT    9
     2: 000112e0     0 SECTION LOCAL  DEFAULT   10
     3: 0001464c     0 SECTION LOCAL  DEFAULT   11
     4: 000146b0     0 SECTION LOCAL  DEFAULT   12
     5: 00015ca0     0 SECTION LOCAL  DEFAULT   13
     6: 00016020     0 SECTION LOCAL  DEFAULT   17
     7: 00016458     0 SECTION LOCAL  DEFAULT   18
     8: 00016020  1080 OBJECT  GLOBAL DEFAULT   17 _rtld_global@@GLIBC_PRIVATE
     9: 0000e6c0   279 FUNC    GLOBAL DEFAULT    9
_dl_make_stack_executable@@GLIBC_PRIVATE
    10: 00015f1c     4 OBJECT  GLOBAL DEFAULT   13 __libc_stack_end@@GLIBC_2.1
    11: 0000fc90   288 FUNC    WEAK   DEFAULT    9 __libc_memalign@@GLIBC_2.0
    12: 0000fdb0    43 FUNC    WEAK   DEFAULT    9 malloc@@GLIBC_2.0
    13: 00000000     0 OBJECT  GLOBAL DEFAULT  ABS GLIBC_2.1
    14: 0000d9c0    88 FUNC    GLOBAL DEFAULT    9
_dl_deallocate_tls@@GLIBC_PRIVATE
    15: 00015f18     4 OBJECT  GLOBAL DEFAULT   13
__libc_enable_secure@@GLIBC_PRIVATE
    16: 0000d980    13 FUNC    GLOBAL DEFAULT    9 __tls_get_addr@@GLIBC_2.3
    17: 0000d990    34 FUNC    GLOBAL DEFAULT    9
_dl_get_tls_static_info@@GLIBC_PRIVATE
    18: 0000fe90    39 FUNC    WEAK   DEFAULT    9 calloc@@GLIBC_2.0
    19: 0000c010     5 FUNC    GLOBAL DEFAULT    9
_dl_debug_state@@GLIBC_PRIVATE
    20: 00015ca0     4 OBJECT  GLOBAL DEFAULT   13 _dl_argv@@GLIBC_PRIVATE
    21: 0000df40   522 FUNC    GLOBAL DEFAULT    9
_dl_allocate_tls_init@@GLIBC_PRIVATE
    22: 00000000     0 OBJECT  GLOBAL DEFAULT  ABS GLIBC_2.0
    23: 00000000     0 OBJECT  GLOBAL DEFAULT  ABS GLIBC_PRIVATE
    24: 00015cc0   460 OBJECT  GLOBAL DEFAULT   13
_rtld_global_ro@@GLIBC_PRIVATE
    25: 0000fde0   171 FUNC    WEAK   DEFAULT    9 realloc@@GLIBC_2.0
    26: 0000e480   203 FUNC    GLOBAL DEFAULT    9 _dl_tls_setup@@GLIBC_PRIVATE
    27: 00006460   405 FUNC    GLOBAL DEFAULT    9
_dl_rtld_di_serinfo@@GLIBC_PRIVATE
    28: 00011b99    14 OBJECT  GLOBAL DEFAULT   10
_dl_out_of_memory@@GLIBC_PRIVATE
    29: 0000cec0   557 FUNC    GLOBAL DEFAULT    9 _dl_mcount@@GLIBC_2.1
    30: 0000e240    39 FUNC    GLOBAL DEFAULT    9
_dl_allocate_tls@@GLIBC_PRIVATE
    31: 0000db60   990 FUNC    GLOBAL DEFAULT    9 ___tls_get_addr@@GLIBC_2.3
    32: 000164e4    20 OBJECT  GLOBAL DEFAULT   18 _r_debug@@GLIBC_2.0
    33: 00000000     0 OBJECT  GLOBAL DEFAULT  ABS GLIBC_2.3
    34: 0000fc40    79 FUNC    WEAK   DEFAULT    9 free@@GLIBC_2.0

########## END LINUX BOX 1 debian 4.0 libc 2.3.6.ds1-8 ##############


########## LINUX BOX 2 gentoo glibc-2.6.1 ##############

root@localhost:/lib# readelf -s /lib/ld-linux.so.2
Symbol table '.dynsym' contains 31 entries:
   Num:    Value  Size Type    Bind   Vis      Ndx Name
     0: 00000000     0 NOTYPE  LOCAL  DEFAULT  UND
     1: 00000000     0 OBJECT  GLOBAL DEFAULT  ABS GLIBC_2.1
     2: 0000fd60    34 FUNC    GLOBAL DEFAULT   10
_dl_get_tls_static_info@@GLIBC_PRIVATE
     3: 00000000     0 OBJECT  GLOBAL DEFAULT  ABS GLIBC_PRIVATE
     4: 00000000     0 OBJECT  GLOBAL DEFAULT  ABS GLIBC_2.3
     5: 00000000     0 OBJECT  GLOBAL DEFAULT  ABS GLIBC_2.4
     6: 000140b0    81 FUNC    WEAK   DEFAULT   10 free@@GLIBC_2.0
     7: 00014260   154 FUNC    WEAK   DEFAULT   10 realloc@@GLIBC_2.0
     8: 000107f0    29 FUNC    GLOBAL DEFAULT   10
_dl_allocate_tls@@GLIBC_PRIVATE
     9: 0001b63c    20 OBJECT  GLOBAL DEFAULT   20 _r_debug@@GLIBC_2.0
    10: 0001aef4     4 OBJECT  GLOBAL DEFAULT   15 __libc_stack_end@@GLIBC_2.1
    11: 0000fe80   133 FUNC    GLOBAL DEFAULT   10
_dl_tls_get_addr_soft@@GLIBC_PRIVATE
    12: 00014110   278 FUNC    WEAK   DEFAULT   10 __libc_memalign@@GLIBC_2.0
    13: 000102a0   137 FUNC    GLOBAL DEFAULT   10
_dl_deallocate_tls@@GLIBC_PRIVATE
    14: 00014300    96 FUNC    WEAK   DEFAULT   10 calloc@@GLIBC_2.0
    15: 0001ac80     4 OBJECT  GLOBAL DEFAULT   15 _dl_argv@@GLIBC_PRIVATE
    16: 0000f240   582 FUNC    GLOBAL DEFAULT   10 _dl_mcount@@GLIBC_2.1
    17: 00010a30   198 FUNC    GLOBAL DEFAULT   10 _dl_tls_setup@@GLIBC_PRIVATE
    18: 0000e2d0     5 FUNC    GLOBAL DEFAULT   10
_dl_debug_state@@GLIBC_PRIVATE
    19: 00000000     0 OBJECT  GLOBAL DEFAULT  ABS GLIBC_2.3.2
    20: 000105e0   280 FUNC    GLOBAL DEFAULT   10 ___tls_get_addr@@GLIBC_2.3
    21: 0001b000  1420 OBJECT  GLOBAL DEFAULT   18 _rtld_global@@GLIBC_PRIVATE
    22: 0000fd50    12 FUNC    GLOBAL DEFAULT   10 __tls_get_addr@@GLIBC_2.3
    23: 00010c90   182 FUNC    GLOBAL DEFAULT   10
_dl_make_stack_executable@@GLIBC_PRIVATE
    24: 00014230    43 FUNC    WEAK   DEFAULT   10 malloc@@GLIBC_2.0
    25: 00010020   567 FUNC    GLOBAL DEFAULT   10
_dl_allocate_tls_init@@GLIBC_PRIVATE
    26: 0001aca0   448 OBJECT  GLOBAL DEFAULT   15
_rtld_global_ro@@GLIBC_PRIVATE
    27: 0001aec8     4 OBJECT  WEAK   DEFAULT   15 __guard@@GLIBC_2.3.2
    28: 0001aef0     4 OBJECT  GLOBAL DEFAULT   15
__libc_enable_secure@@GLIBC_PRIVATE
    29: 00000000     0 OBJECT  GLOBAL DEFAULT  ABS GLIBC_2.0
    30: 00007e20   394 FUNC    GLOBAL DEFAULT   10
_dl_rtld_di_serinfo@@GLIBC_PRIVATE

########## END LINUX BOX 2 gentoo glibc-2.6.1 ##############


The output of the two readelf commands differs, and the loaders are
both named ld-linux.so.2, not ld-linux.so.1.
In particular, the symbol #11 _dl_tls_get_addr_soft exists in the
ld-linux.so.2 from Linux box 2 but not in the ld-linux.so.2 from Linux
box 1.
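
(A quick check, instead of reading the whole symbol table:

  $ readelf -s /lib/ld-linux.so.2 | grep _dl_tls_get_addr_soft

prints a matching line on box 2 and nothing on box 1.)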

Angelo

>
> Andrew.
>


* Re: statically linked gcc executables
  2008-01-25 23:12           ` Angelo Leto
@ 2008-01-26  0:56             ` Andrew Haley
  2008-01-26  2:11               ` Angelo Leto
  0 siblings, 1 reply; 19+ messages in thread
From: Andrew Haley @ 2008-01-26  0:56 UTC (permalink / raw)
  To: Angelo Leto; +Cc: gcc-help

Angelo Leto wrote:
> On Jan 24, 2008 7:53 PM, Andrew Haley <aph@redhat.com> wrote:
> 
>> I do sympathize, but I think you're doing the wrong thing.  Yes, you are
>> going to have to have two versions of your gcc binaries, one with
>> ld-linux.so.2 and one with ld-linux.so.1, but that should be all.
> 
> no, there are differences in symbols between the ld-linux.so.2 coming
> from libc6 2.3.6 (debian 4.0) and the
> ld-linux.so.2 coming e.g. from gentoo
> (http://distfiles.gentoo.org/distfiles/glibc-2.6.1.tar.bz2), but the
> same thing happens also with a newer debian version.
> This difference may be due to different building flags, patches ....

Indeed.

Ian Taylor's suggestion of a chroot is sound, because it solves all of the
library and include file problems too.  You would have a complete environment
that you could move around.

Anyway, it now sounds like you have something that works for you.

Andrew.


* Re: statically linked gcc executables
  2008-01-26  0:56             ` Andrew Haley
@ 2008-01-26  2:11               ` Angelo Leto
  2008-01-30  5:44                 ` Angelo Leto
  0 siblings, 1 reply; 19+ messages in thread
From: Angelo Leto @ 2008-01-26  2:11 UTC (permalink / raw)
  To: Andrew Haley; +Cc: gcc-help

On Jan 25, 2008 11:39 AM, Andrew Haley <aph@redhat.com> wrote:
> Angelo Leto wrote:
> > On Jan 24, 2008 7:53 PM, Andrew Haley <aph@redhat.com> wrote:
> >
> >> I do sympathize, but I think you're doing the wrong thing.  Yes, you are
> >> going to have to have two versions of your gcc binaries, one with
> >> ld-linux.so.2 and one with ld-linux.so.1, but that should be all.
> >
> > no, there are differences in symbols between the ld-linux.so.2 coming
> > from libc6 2.3.6 (debian 4.0) and the
> > ld-linux.so.2 coming e.g. from gentoo
> > (http://distfiles.gentoo.org/distfiles/glibc-2.6.1.tar.bz2), but the
> > same thing happens also with a newer debian version.
> > This difference may be due to different building flags, patches ....
>
> Indeed.
>
> Ian Taylor's suggestion of a chroot is sound, because it solves all of the
> library and include file problems too.  You would have a complete environment
> that you could move around.
>
> Anyway, it now sounds like you have something that works for you.

yes, thanks.
Angelo

>
> Andrew.
>


* Re: statically linked gcc executables
  2008-01-25  9:20     ` Ted Byers
  2008-01-25 12:26       ` Angelo Leto
@ 2008-01-29 13:14       ` John Carter
  2008-01-29 16:30         ` Ted Byers
  1 sibling, 1 reply; 19+ messages in thread
From: John Carter @ 2008-01-29 13:14 UTC (permalink / raw)
  To: gcc-help

On Thu, 24 Jan 2008, Ted Byers wrote:

> You're half right.  If your program uses library X,
> and  that library has a subtle bug in the function
> you're using, then the result you get using a
> different library will be different.  The fix is not
> to ensure that you use the same library all the time,
> but to ensure your test suite is sufficiently well
> developed that you can detect such a bug, and use a
> different function (even if you have to write it
> yourself) that routinely gives you provably correct
> answers.

Alas, reality bites: we all suck, nobody on the planet with a
non-trivial product has perfect test coverage of code and state, and
we all have bugs.

And even if you have really really good coverage, you seldom have the
time to rerun _every_ test after every change.

So given how much reality sucks, one of the eminently practical things
you can do is reduce the variance between what you have tested and
what you ship.

Test what you fly, fly what you test.

And that applies to shipping products to customers, and it applies to
internal products, like shipping cross compilers to colleagues.

As I said, Reality truly sucks.

Hint: C/C++ based reality sucks even more since, unless you test
heavily under Valgrind, most code has subtle uninitialized data bugs
that often don't fire under even the heaviest testing. One of the
reasons I like dynamic languages like Ruby.
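
(A toy example of the kind of bug I mean, made-up code:

  $ cat uninit.c
  #include <stdio.h>
  int main(void) {
      int flags[4], all;
      flags[0] = flags[1] = flags[2] = 1;   /* flags[3] never set */
      all = flags[0] && flags[1] && flags[2] && flags[3];
      printf("%d\n", all);   /* depends on stack garbage */
      return 0;
  }
  $ gcc -g uninit.c -o uninit
  $ valgrind ./uninit

Ordinary testing can pass forever if the garbage happens to be
non-zero; Valgrind reports "Conditional jump or move depends on
uninitialised value(s)" on the first run.)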


John Carter                             Phone : (64)(3) 358 6639
Tait Electronics                        Fax   : (64)(3) 359 4632
PO Box 1645 Christchurch                Email : john.carter@tait.co.nz
New Zealand


* Re: statically linked gcc executables
  2008-01-29 13:14       ` John Carter
@ 2008-01-29 16:30         ` Ted Byers
  0 siblings, 0 replies; 19+ messages in thread
From: Ted Byers @ 2008-01-29 16:30 UTC (permalink / raw)
  To: John Carter, gcc-help

--- John Carter <john.carter@tait.co.nz> wrote:
> On Thu, 24 Jan 2008, Ted Byers wrote:
> 
> > You're half right.  If your program uses library X, and that
> > library has a subtle bug in the function you're using, then the
> > result you get using a different library will be different.  The
> > fix is not to ensure that you use the same library all the time,
> > but to ensure your test suite is sufficiently well developed that
> > you can detect such a bug, and use a different function (even if
> > you have to write it yourself) that routinely gives you provably
> > correct answers.
> 
> Alas, reality bites: we all suck, nobody on the planet with a
> non-trivial product has perfect test coverage of code and state, and
> we all have bugs.
> 
True.  No-one is perfect.  I never said that anyone
had achieved perfection.  At best, perfection is a
state one must strive for, but which can never be
achieved.  But that doesn't stop us from beginning
with unit tests, and proceeding to integration tests
and usability tests, &c., and adopting a protocol that
requires the test suite to be expanded every time a
new bug is found, and prohibiting new code from being
added to an application's code base unless all
existing tests pass.  Such a practice generally
results in the number of bugs per line of code
diminishing through time, although the total number of
bugs may not.  You never stop trying when the kind of
application you're helping develop could have
catastrophic consequences, for the company for which
you're developing it, or for people using it, or
affected by facilities where it is used, should your
application fail in a bad way.

> And even if you have really really good coverage, you seldom have the
> time to rerun _every_ test after every change.
>
True.  But standard practice here is to run the full
test suite, with no failures, before code is committed
to the code-base.  That may be overkill for an
application supporting only drawing cartoons, but in
other industries, where real and significant harm can
be done if an application is wrong, it is a price no
one questions.

> So given how much reality sucks, one of the eminently practical
> things you can do is reduce the variance between what you have tested
> and what you ship.
>
Right.  So what is the problem with not upgrading all
your development machines to a new release of the tool
chain you're using until you have proven the new
version of the toolchain won't break your code?  Or
that the new version has found a bug in your code the
previous version didn't (when it produces results
inconsistent with a previous version of your
application), and that you have fixed the bug and
extended your testsuite accordingly?

> Test what you fly, fly what you test.
>
Right.  All tests, for the kinds of applications I
develop, in the test suite must pass before the
application can be released for general use (generally
by consultants with doctorates in some aspect of
environmental science).

> And that applies to shipping products to customers, and it applies to
> internal products, like shipping cross compilers to colleagues.
>
Right, we upgrade ASAP when a new release is available
for our development tools, but this process includes
stress testing them, especially to prove that they
don't break existing code.  If a test in our existing
suite fails upon using a new tool, we have no option
but to investigate to see if the problem lies with
something that was missed in our previous testing (in
which case, the bug revealed is fixed and additional
tests developed to improve our QA), or with something
in the new tool (for which we must find a solution). 
All this must be done before a project can be migrated
to the new tool.  But we do it in anticipation of
relatively continual improvement in our tools as new
releases become available.

> As I said, Reality truly sucks.
> 
Yup.  There is a reason it is more expensive to
develop applications in some disciplines than it is in
others.

> Hint: C/C++ based reality sucks even more since,
> unless you test
> heavily under Valgrind, most code has subtle
> uninitialized data bugs
> that often don't fire under even the heaviest
> testing. One of the
> reasons I like dynamic languages like Ruby.
> 
This is debatable, and this probably isn't the forum
to debate it.  Each programming language has its own
problems, and some problems transcend the language
used.  What really matters is the experience and
discipline of the team doing the work, including
especially the senior programmers, architects, team
leads, &c.: people who know well the potential worst
case consequences of a bug in the application they're
developing, and design and implement accordingly, with
due attention paid to QA and testability.

No one will be too upset if a tool used for animation
in the entertainment industry occasionally fails
(apart, perhaps, from the people who paid for it, or
for a good animation), but if an application could
result in the loss of life or an adverse effect on
someone's health, should it fail (e.g. an application
used in aircraft, such as the autopilot or the
navigation software, or in medicine, or in risk
assessment in environmental protection), one goes the
extra mile to try to ensure such failures don't
happen.

Good QA is more about the people doing the work, and
the protocols they use, than it is about the tools
they have at their disposal.  This is part of why I
tried to explain to the OP that instead of going
through major hoops on your developer's machines, you
have a smaller team working to assess the new tool, or
suite of tools, and deploy it to the core developers
only after you have proven that the new tools produce
correct code when used on your existing application
and test suite.   Once you have THAT proof, you can
proceed confidently with a routine deployment of the
new tool on all the developer's machines.  If there is
insufficient manpower or time to do it right, then
don't upgrade until you do; and recognize that if
there is always insufficient manpower or time to do it
right, then those paying to have it done can't really
afford to pay to get it done right (which is a scary
notion to me, with the kinds of applications I
routinely develop).  This is a protocol that will
likely be more efficient than one in which a major
effort is put into altering all your developer's
machines to use the same versions of the same tool
chain. 

I try to keep my tools current, expecting continual
improvement in their quality, but I would never go
through the kinds of hoops the OP described as that
struck me as counterproductive: time not spent either
developing new code for the application or
trouble-shooting the combination of the new tool chain
with the existing codebase and testsuite.  I can work
around deficiencies in the tools I use, since I know
my tools, and while I upgrade my tools as soon as
practicable, I don't do it until I know the new tool
won't break my existing code, and if it does, I
investigate to see where the problem really lies: and
if it is in my code, I fix it and if it is in the new
version of the tool, I develop a solution for the
problem.  Only once I know the code I get from the new
tools is correct do I proceed with an upgrade.

Cheers,

Ted


* Re: statically linked gcc executables
  2008-01-26  2:11               ` Angelo Leto
@ 2008-01-30  5:44                 ` Angelo Leto
  2008-01-30 11:32                   ` Ted Byers
  0 siblings, 1 reply; 19+ messages in thread
From: Angelo Leto @ 2008-01-30  5:44 UTC (permalink / raw)
  To: r.ted.byers; +Cc: john.carter, gcc-help

> --- John Carter <john.carter@tait.co.nz> wrote:
> > On Thu, 24 Jan 2008, Ted Byers wrote:
> >
> > > You're half right.  If your program uses library
> > X,
> > > and  that library has a subtle bug in the function
> > > you're using, then the result you get using a
> > > different library will be different.  The fix is
> > not
> > > to ensure that you use the same library all the
> > time,
> > > but to ensure your test suite is sufficiently well
> > > developed that you can detect such a bug, and use
> > a
> > > different function (even if you have to write it
> > > yourself) that routinely gives you provably
> > correct
> > > answers.
> >
> > Alas, reality bites: we all suck, nobody on the planet with a
> > non-trivial product has perfect test coverage of code and state,
> > and we all have bugs.
> >
> True.  No-one is perfect.  I never said that anyone
> had achieved perfection.  At best, perfection is a
> state one must strive for, but which can never be
> achieved.  But that doesn't stop us from beginning
> with unit tests, and proceeding to integration tests
> and usability tests, &c., and adopting a protocol that
> requires the test suite to be expanded every time a
> new bug is found, and prohibiting new code from being
> added to an application's code base unless all
> existing tests pass.  Such a practice generally
> results in the number of bugs per line of code
> diminishing through time, although the total number of
> bugs may not.  You never stop trying when the kind of
> application you're helping develop could have
> catastrophic consequences, for the company for which
> you're developing it, or for people using it, or
> affected by facilities where it is used, should your
> application fail in a bad way.
>
> > And even if you have really really good coverage, you seldom have
> > the time to rerun _every_ test after every change.
> >
> True.  But standard practice here is to run the full
> test suite, with no failures, before code is committed
> to the code-base.  That may be overkill for an
> application supporting only drawing cartoons, but in
> other industries, where real and significant harm can
> be done if an application is wrong, it is a price no
> one questions.
>
> > So given how much reality sucks, one of the eminently practical
> > things you can do is reduce the variance between what you have
> > tested and what you ship.
> >
> Right.  So what is the problem with not upgrading all
> your development machines to a new release of the tool
> chain you're using until you have proven the new
> version of the toolchain won't break your code?  Or

the reason may be the following:
you may want the old branches in your repository to work with the old
libraries (note that changing a library could also mean changing a
function's interface);
the versions of your code which have already been released should not
be modified further, and even when you need to introduce small changes
(e.g. change an error message) it is not always a good idea to use the
new library, so you may need
(at least for a given interval of time) to be able to use the old
library on some branch. Furthermore, the new library may produce
different results not only because of some unexpected error, but
simply because the library changed; sometimes it is useful to keep both
libraries in order to prepare the transition.

> that the new version has found a bug in your code the
> previous version didn't (when it produces results
> inconsistent with a previous version of your
> application), and that you have fixed the bug and
> extended your testsuite accordingly?
>
> > Test what you fly, fly what you test.
> >
> Right.  All tests, for the kinds of applications I
> develop, in the test suite must pass before the
> application can be released for general use (generally
> by consultants with doctorates in some aspect of
> environmental science).
>
> > And that applies to shipping products to customers, and it applies
> > to internal products, like shipping cross compilers to colleagues.
> >
> Right, we upgrade ASAP when a new release is available
> for our development tools, but this process includes
> stress testing them, especially to prove that they
> don't break existing code.  If a test in our existing
> suite fails upon using a new tool, we have no option
> but to investigate to see if the problem lies with
> something that was missed in our previous testing (in
> which case, the bug revealed is fixed and additional
> tests developed to improve our QA), or with something
> in the new tool (for which we must find a solution).
> All this must be done before a project can be migrated
> to the new tool.  But we do it in anticipation of
> relatively continual improvement in our tools as new
> releases become available.
>
> > As I said, Reality truly sucks.
> >
> Yup.  There is a reason it is more expensive to
> develop applications in some disciplines than it is in
> others.
>
> > Hint: C/C++ based reality sucks even more since,
> > unless you test
> > heavily under Valgrind, most code has subtle
> > uninitialized data bugs
> > that often don't fire under even the heaviest
> > testing. One of the
> > reasons I like dynamic languages like Ruby.
> >
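
(a minimal illustration of the kind of bug meant here, with invented
names; ordinary tests will often pass it by sheer luck, while running
the binary under valgrind reports a use of uninitialised memory:)

#include <cstdio>

int risky(bool flag) {
    int x;          // never initialised on the flag == false path
    if (flag)
        x = 42;
    return x;       // reading x here is undefined when flag is false
}

int main() {
    // This often "passes" by accident because x happens to hold a
    // harmless value, so ordinary testing sees nothing wrong;
    // valgrind's memcheck flags the uninitialised read on this call.
    std::printf("%d\n", risky(false));
    return 0;
}
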
> This is debatable, and this probably isn't the forum
> to debate it.  Each programming language has its own
> problems, and some problems transcend the language
> used.  What really matters is the experience and
> discipline of the team doing the work, including
> especially the senior programmers, architects, team
> leads, &c.: people who know well the potential worst
> case consequences of a bug in the application they're
> developing, and design and implement accordingly, with
> due attention paid to QA and testability.
>
> No one will be too upset if a tool used for animation
> in the entertainment industry occasionally fails
> (apart, perhaps, from the people who paid for it, or
> for a good animation), but if an application could
> result in the loss of life or an adverse effect on
> someone's health, should it fail (e.g. an application
> used in aircraft, such as the autopilot or the
> navigation software, or in medicine, or in risk
> assessment in environmental protection), one goes the
> extra mile to try to ensure such failures don't
> happen.
>
> Good QA is more about the people doing the work, and
> the protocols they use, than it is about the tools
> they have at their disposal.  This is part of why I
> tried to explain to the OP that instead of going
> through major hoops on your developer's machines, you
> have a smaller team working to assess the new tool, or
> suite of tools, and deploy it to the core developers
> only after you have proven that the new tools produce
> correct code when used on your existing application
> and test suite.   Once you have THAT proof, you can
> proceed confidently with a routine deployment of the
> new tool on all the developer's machines.  If there is
> insufficient manpower or time to do it right, then
> don't upgrade until you do; and recognize that if
> there is always insufficient manpower or time to do it
> right, then those paying to have it done can't really
> afford to pay to get it done right (which is a scary
> notion to me, with the kinds of applications I
> routinely develop).  This is a protocol that will
> likely be more efficient than one in which a major
> effort is put into altering all your developer's
> machines to use the same versions of the same tool
> chain.
>
> I try to keep my tools current, expecting continual
> improvement in their quality, but I would never go
> through the kinds of hoops the OP described as that
> struck me as counterproductive: time not spent either
> developing new code for the application or
> trouble-shooting the combination of the new tool chain
> with the existing codebase and testsuite.  I can work
> around deficiencies in the tools I use, since I know
> my tools, and while I upgrade my tools as soon as
> practicable, I don't do it until I know the new tool
> won't break my existing code, and if it does, I
> investigate to see where the problem really lies: and
> if it is in my code, I fix it and if it is in the new
> version of the tool, I develop a solution for the
> problem.  Only once I know the code I get from the new
> tools is correct do I proceed with an upgrade.

bye
Angelo

>
> Cheers,
>
> Ted

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: statically linked gcc executables
  2008-01-30  5:44                 ` Angelo Leto
@ 2008-01-30 11:32                   ` Ted Byers
  2008-01-31  1:20                     ` Angelo Leto
  2008-01-31 15:45                     ` John Carter
  0 siblings, 2 replies; 19+ messages in thread
From: Ted Byers @ 2008-01-30 11:32 UTC (permalink / raw)
  To: Angelo Leto; +Cc: gcc-help

--- Angelo Leto <angleto@gmail.com> wrote:
> > --- John Carter <john.carter@tait.co.nz> wrote:
> > > On Thu, 24 Jan 2008, Ted Byers wrote:
> > >
> > > > You're half right.  If your program uses library X,
> > > > and that library has a subtle bug in the function
> > > > you're using, then the result you get using a
> > > > different library will be different.  The fix is not
> > > > to ensure that you use the same library all the time,
> > > > but to ensure your test suite is sufficiently well
> > > > developed that you can detect such a bug, and use a
> > > > different function (even if you have to write it
> > > > yourself) that routinely gives you provably correct
> > > > answers.
> > >
> >
> > > So given how much reality sucks, one of the eminently
> > > practical things you can do is reduce the variance
> > > between what you have tested and what you ship.
> > >
> > Right.  So what is the problem with not upgrading all
> > your development machines to a new release of the tool
> > chain you're using until you have proven the new
> > version of the toolchain won't break your code?  Or
> 
> the reason may be the following:
> you may want the old branches in your repository to keep working
> with the old libraries (note that changing a library could also
> mean changing a function's interfaces).
> The versions of your code which have already been released should
> not be modified further, and even when you need to introduce small
> changes (e.g. change an error message) it is not always a good idea
> to use the new library, so you may need (at least for a given
> interval of time) to be able to use the old library on some branch.
> Furthermore the new library may produce different results not only
> because of some unexpected error, but simply because the library
> changed; sometimes it is useful to keep both libraries in order to
> prepare the transition.
> 
I find this line of reasoning wholly inadequate.

It is one thing to maintain old branches of your code
base.  It is quite another to insist they continue to
work with tools rendered obsolete.  Yes, I know that
changing a library can involve changes to functions'
interfaces, but that is just another part of the cost
of maintaining your tools.

The bottom line is that if two versions of the same
program produce different results, one of them is
wrong (or in the case of tools based on environmental
models, one is more wrong than the other, since there
is no such thing as a "model" that is correct, only
models that are adequate and reliable).  For many
calculations, there is only one correct answer.  If
one version produces the correct answer and the other
produces something different, then the other is wrong
and needs to be fixed.  In other kinds of calculations,
it is already known that the result produced can only
be an estimate of the correct answer, and in most
cases (such as numeric integration) there are ways to
estimate the amount of error in the result.  In such a
case, when a genius in numeric methods produces a
better algorithm for doing, say, numeric integration,
then the new library may well produce a more accurate
result than the old, but also good, code (you have to
love people who can improve already decent code, rare
as they are: I would not hesitate to pay a premium for
their work).  In such a case, one still has evidence
that allows one to deduce the reason for any
difference, and once that determination has been made,
in my view professional ethics (focussed on how one
treats clients) requires that the improved code be
used in any and all variants of my own product (so
even old branches that I may be maintaining for
whatever reason will be improved with the use of the
new code).
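
To make the numeric integration point concrete, here is a minimal
sketch (illustrative only, not anyone's production code) of the
classical trick: compare one Simpson step against two half-width
steps, and their difference, scaled by 1/15, estimates the error.

#include <cmath>
#include <cstdio>

static double f(double x) { return std::sin(x); }

// One Simpson step on [a, b].
static double simpson(double a, double b) {
    double m = 0.5 * (a + b);
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b));
}

int main() {
    double a = 0.0, b = std::acos(-1.0);          // integrate sin over [0, pi]
    double m = 0.5 * (a + b);
    double coarse = simpson(a, b);                // one wide step
    double fine = simpson(a, m) + simpson(m, b);  // two half-width steps
    double err = (fine - coarse) / 15.0;          // classical error estimate
    std::printf("integral ~ %.8f, estimated error ~ %.1e (exact: 2)\n",
                fine + err, err);
    return 0;
}

The same idea drives adaptive quadrature: subdivide wherever the
estimated error is still too large.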

If it is the case that the new version of the library
is buggy, then don't use it until it has been fixed. 
If, instead, the new version of the library brings a
bug in your existing codebase to light, then the old
baseline code is wrong and needs to be fixed.  If you
have a user who is stuck using the old branch, for
whatever reason, it is not a service to him to allow
the bug to remain unfixed.  To my mind, that means
that all branches we choose to maintain must build
correctly with whatever tools we are using in
production at the time.  It is, to me, a waste of
resources to attempt to maintain a suite of versions
of my development tools along with the suite of
branches of my own code base.  I will maintain as many
branches of my own codebase as needed (and that number
is typically very small, since most of these are only
for development purposes and ultimately end up being
folded back into the trunk), but I will not maintain
countless variants of the tool chain I use (unless, of
course, a client is willing to pay a very high premium
to do so, contrary to any advice I may give him).  At
any one time, then, I have only one version of a
toolchain in use, and at most one more in an
assessment phase before being deployed (and this only
at the most opportune time based on detailed
information about what changes are needed in the suite
of branches that are being actively maintained).  Even
if I started a project using gcc 3.4.4, having
upgraded to gcc 4.2.1, I am not going to maintain all
versions of all branches of gcc since gcc 3.4.4, or
even most of them.  I am not even going to maintain
4.2.0, and any new release, from any branches I choose
to maintain, will be guaranteed to build properly with
4.2.1, but the user is on his own if for whatever
reason he wants to stick with gcc 4.2.0 or earlier. 
Similarly, when I decide to upgrade the version of gcc
I am using, I won't be supporting earlier versions of
it.  If I am releasing source code, I will state in
the release notes what toolchain was used for it. 
There will be nothing, though, that compels my users
to upgrade either their tools or their copies of my
code.  It is up to them to make the same kinds of
assessments I have made.  If they come to a different
conclusion, so be it.  If one of them wants to
maintain an old branch of code I have released, using
older tools, they are welcome to do so, but I will not
waste time on a toolchain I have set aside as obsolete
in favour of a new version of those tools.  I have,
for example, both MS Visual Studio V6 and MS Visual
Studio 2005  (commercial reasons require use of such
tools in some circumstances).  I am not going to waste
time making my code build using MS VS v6 when I have
MS VS 2005.  Doing so would certainly result in wasted
time and inferior code.  Once I make the decision to
upgrade my tools, I don't waste further time on the
old ones.

Cheers

Ted

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: statically linked gcc executables
  2008-01-30 11:32                   ` Ted Byers
@ 2008-01-31  1:20                     ` Angelo Leto
  2008-01-31 15:32                       ` Ted Byers
  2008-01-31 15:45                     ` John Carter
  1 sibling, 1 reply; 19+ messages in thread
From: Angelo Leto @ 2008-01-31  1:20 UTC (permalink / raw)
  To: Ted Byers; +Cc: gcc-help

On Jan 29, 2008 5:56 PM, Ted Byers <r.ted.byers@rogers.com> wrote:
>
> --- Angelo Leto <angleto@gmail.com> wrote:
> > > --- John Carter <john.carter@tait.co.nz> wrote:
> > > > On Thu, 24 Jan 2008, Ted Byers wrote:
> > > >
> > > > > You're half right.  If your program uses library X,
> > > > > and that library has a subtle bug in the function
> > > > > you're using, then the result you get using a
> > > > > different library will be different.  The fix is not
> > > > > to ensure that you use the same library all the time,
> > > > > but to ensure your test suite is sufficiently well
> > > > > developed that you can detect such a bug, and use a
> > > > > different function (even if you have to write it
> > > > > yourself) that routinely gives you provably correct
> > > > > answers.
> > > >
> > >
>
> > > > So given how much reality sucks, one of the eminently
> > > > practical things you can do is reduce the variance
> > > > between what you have tested and what you ship.
> > > >
> > > Right.  So what is the problem with not upgrading all
> > > your development machines to a new release of the tool
> > > chain you're using until you have proven the new
> > > version of the toolchain won't break your code?  Or
> >
> > the reason may be the following:
> > you may want the old branches in your repository to keep working
> > with the old libraries (note that changing a library could also
> > mean changing a function's interfaces).
> > The versions of your code which have already been released should
> > not be modified further, and even when you need to introduce
> > small changes (e.g. change an error message) it is not always a
> > good idea to use the new library, so you may need (at least for a
> > given interval of time) to be able to use the old library on some
> > branch.  Furthermore the new library may produce different
> > results not only because of some unexpected error, but simply
> > because the library changed; sometimes it is useful to keep both
> > libraries in order to prepare the transition.
> >
> I find this line of reasoning wholly inadequate.
>
> It is one thing to maintain old branches of your code
> base.  It is quite another to insist they continue to
> work with tools rendered obsolete.

I don't want to spend a lot of time keeping the old (unmaintained)
branches updated, but I will keep them working, for historical
reasons and because in the future I may need to compare some output
data.  So it is not a problem if they use obsolete tools.

> Yes, I know that
> changing a library can involve changes to functions'
> interfaces, but that is just another part of the cost
> of maintaining your tools.

Sure.

>
> The bottom line is that if two versions of the same
> program produce different results, one of them is
> wrong (or in the case of tools based on environmental
> models, one is more wrong than the other, since there
> is no such thing as a "model" that is correct, only
> models that are adequate and reliable).

Indeed, maybe one of them is less accurate, but it is not wrong; this
depends on your requirements, which may change.  If a new library (or
a new algorithm) promises to produce more accurate results, that alone
is not a sufficient reason to use it: the accuracy must go together
with the "reliability" of the code (correct results for the whole
representative set of input data), and sometimes (though not always)
the reliability of code is not easy to prove.  In this case I would
keep the old library until the test procedures say that the code using
the new library is reliable.
In a critical environment the testing procedures are quite expensive,
and you may need to keep different versions of the same library (or
tool), at least for the transition period.
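
As a minimal sketch of what I mean by testing against a
representative set of input data before the switch (the two functions
below only stand in for the old and the new library version; all the
names are invented):

#include <cmath>
#include <cstdio>

// Stand-ins for the old and the new library version (hypothetical).
double compute_v1(double x) { return std::exp(x) - 1.0; }
double compute_v2(double x) { return std::expm1(x); }   // more accurate

int main() {
    // The representative set of inputs for this function.
    const double inputs[] = { -2.0, -1e-9, 0.0, 1e-9, 0.5, 3.0 };
    const double tol = 1e-6;    // acceptance threshold for this code
    int failures = 0;
    for (double x : inputs) {
        double d = std::fabs(compute_v1(x) - compute_v2(x));
        if (d > tol) {
            std::printf("x = %g differs by %g\n", x, d);
            ++failures;
        }
    }
    std::printf(failures ? "keep the old library for now\n"
                         : "new library acceptable on this set\n");
    return failures;
}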

> For many
> calculations, there is only one correct answer.

For graphical interfaces, for example, a change in the library may
not produce a wrong or correct answer, but only a different result.
In this case I will keep the library which fits my needs better, and
I can decide to switch to the new library only on specific branches
at first.  When some functional changes are made to the library, the
problem is not whether the result is wrong or correct; it is just
different.  In this case I would keep different versions of libraries
for different branches, because different behaviours may be desired
by different users.

> If one version produces the correct answer and the other
> produces something different, then the other is wrong
> and needs to be fixed.  In other kinds of calculations,
> it is already known that the result produced can only
> be an estimate of the correct answer, and in most
> cases (such as numeric integration) there are ways to
> estimate the amount of error in the result.  In such a
> case, when a genius in numeric methods produces a
> better algorithm for doing, say, numeric integration,
> then the new library may well produce a more accurate
> result than the old, but also good, code (you have to
> love people who can improve already decent code, rare
> as they are: I would not hesitate to pay a premium for
> their work).
>  In such a case, one still has evidence
> that allows one to deduce the reason for any
> difference, and once that determination has been made,
> in my view professional ethics (focussed on how one
> treats clients) requires that the improved code be
> used in any and all variants of my own product (so
> even old branches that I may be maintaining for
> whatever reason will be improved with the use of the
> new code).

OK for the improved code, but as I said, the differences can be in
functional behaviour: in order to keep specific functionality
unchanged, it may be necessary to use an older version of a library
on one branch, and a newer version where the functionality is meant
to be different.  This is a reason which could make the installation
of libraries directly on the system troublesome.

>
> If it is the case that the new version of the library
> is buggy, then don't use it until it has been fixed.
> If, instead, the new version of the library brings a
> bug in your existing codebase to light, then the old
> baseline code is wrong and needs to be fixed.  If you
> have a user who is stuck using the old branch, for
> whatever reason, it is not a service to him to allow
> the bug to remain unfixed.  To my mind, that means
> that all branches we choose to maintain must build
> correctly with whatever tools we are using in
> production at the time.  It is, to me, a waste of
> resources to attempt to maintain a suite of versions
> of my development tools along with the suite of
> branches of my own code base.  I will maintain as many
> branches of my own codebase as needed (and that number
> is typically very small, since most of these are only
> for development purposes and end ultimately being
> folding back into the trunk), but I will not maintain
> countless variants of the tool chain I use (unless, of
> course, a client is willing to pay a very high premium
> to do so, contrary to any advice I may give him).  At
> any one time, then, I have only one version of a
> toolchain in use, and at most one more in an
> assessment phase before being deployed (and this only
> at the most opportune time based on detailed
> information about what changes are needed in the suite
> of branches that are being actively maintained).  Even
> if I started a project using gcc 3.4.4, having
> upgraded to gcc 4.2.1, I am not going to maintain all
> versions of all branches of gcc since gcc 3.4.4, or

I will not maintain all the versions of gcc used since the first
branch.  I can keep the different versions of the toolchains in a
sandbox; this way the system tools can be upgraded independently,
without worrying about potential incoming problems.
I will use the new toolchains on all the maintained versions, but
only after in-depth testing.  Meanwhile the validated versions of
the toolchains (or whatever) will be used.

> even most of them.  I am not even going to maintain
> 4.2.0, and any new release, from any branches I choose
> to maintain, will be guaranteed to build properly with
> 4.2.1, but the user is on his own if for whatever
> reason he wants to stick with gcc 4.2.0 or earlier.
> Similarly, when I decide to upgrade the version of gcc
> I am using, I won't be supporting earlier versions of
> it.  If I am releasing source code, I will state in
> the release notes what toolchain was used for it.
> There will be nothing, though, that compels my users
> to upgrade either their tools or their copies of my
> code.  It is up to them to make the same kinds of
> assessments I have made.  If they come to a different
> conclusion, so be it.  If one of them wants to
> maintain an old branch of code I have released, using
> older tools, they are welcome to do so, but I will not
> waste time on a toolchain I have set aside as obsolete
> in favour of a new version of those tools.  I have,
> for example, both MS Visual Studio V6 and MS Visual
> Studio 2005  (commercial reasons require use of such
> tools in some circumstances).  I am not going to waste
> time making my code build using MS VS v6 when I have
> MS VS 2005.  Doing so would certainly result in wasted
> time and inferior code.  Once I make the decision to
> upgrade my tools, I don't waste further time on the
> old ones.
>

bye
Angelo

> Cheers
>
> Ted
>
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: statically linked gcc executables
  2008-01-31  1:20                     ` Angelo Leto
@ 2008-01-31 15:32                       ` Ted Byers
  0 siblings, 0 replies; 19+ messages in thread
From: Ted Byers @ 2008-01-31 15:32 UTC (permalink / raw)
  To: Angelo Leto; +Cc: gcc-help

--- Angelo Leto <angleto@gmail.com> wrote:
> On Jan 29, 2008 5:56 PM, Ted Byers
> >
> > It is one thing to maintain old branches of your
> code
> > base.  It is quite another to insist they continue
> to
> > work with tools rendered obsolete.
> 
> I don't want to spend a lot of time keeping the old (unmaintained)
> branches updated, but I will keep them working, for historical
> reasons and because in the future I may need to compare some output
> data.  So it is not a problem if they use obsolete tools.
>
So we agree to disagree.  You have resources to use on
old branches, I don't.  Rather, I regard that as a
waste of resources better spent on QA.  Once a branch
is abandoned, I forget it.  I won't waste time either
maintaining it (i.e. keeping it working) or upgrading
it.

But note, the important thing in older code is what it
does with the data, and that can be handled in a well
designed back end fully distinct from the user
interface.  If you're using fortran or C++, and you
ensure this code is as compliant with the extant
standard of the day, it won't stop compiling on
compliant compilers any time soon (apart from having
to deal with bugs due to unnoticed reliance on
undefined behaviour or on compiler's extensions, and
fixes to address deprecated features).  So if your
code for doing nonlinear systems theory related math,
which you mentioned a while ago, is written in C++
that is compliant with the standard, you ought to
still be able to compile it with a compliant compiler
50 years from now, with only a little fiddling with a
small selection of the sorts of bugs I mention above. 
This I know from occasionally having resorted to using
very old fortran code (because the algorithm used has
seen little significant improvement over the decades
and the code in question has become a standard in its
own right, having been published and the method seen
as the standard default method to use in a given
context).
 
> >
> > The bottom line is that if two versions of the same
> > program produce different results, one of them is
> > wrong (or in the case of tools based on environmental
> > models, one is more wrong than the other, since there
> > is no such thing as a "model" that is correct, only
> > models that are adequate and reliable).
> 
> Indeed, maybe one of them is less accurate, but it is not wrong;
> this depends on your requirements, which may change.  If a new
> library (or a new algorithm) promises to produce more accurate
> results, that alone is not a sufficient reason to use it: the
> accuracy must go together with the "reliability" of the code
> (correct results for the whole representative set of input data),
> and sometimes (though not always) the reliability of code is not
> easy to prove.  In this case I would keep the old library until
> the test procedures say that the code using the new library is
> reliable.
> In a critical environment the testing procedures are quite
> expensive, and you may need to keep different versions of the same
> library (or tool), at least for the transition period.
> 
Yes, I know, from experience, testing is expensive. 
What you say here, though, is not all that different
from what I said about not deploying new tools until
they have been thoroughly tested.  The developers
continue using the tried and tested tools already
deployed, but they don't worry about the new tools
until senior staff have finished their evaluation.

> > For many
> > calculations, there is only one correct answer.
>
> For graphical interfaces, for example, a change in the library may
> not produce a wrong or correct answer, but only a different result.
> In this case I will keep the library which fits my needs better,
> and I can decide to switch to the new library only on specific
> branches at first.  When some functional changes are made to the
> library, the problem is not whether the result is wrong or correct;
> it is just different.  In this case I would keep different versions
> of libraries for different branches, because different behaviours
> may be desired by different users.
> 
So here, you have changed your concerns from the
quality of the output results to a question of taste. 
These things really don't matter.  I do not care if
the windows I create look like the Windows that
existed on MS Windows v 3.1 or those on Windows XP. 
That just doesn't matter.  If there are clients
willing to pay a significant premium, I may well
provide support for customizing the GUI to suit the
tastes of the user, but I won't put that there by
default.

First, with a GUI, the conceptual model is trivially
simple, and it isn't all that hard to do with one GUI
library what you can do with another.  In the case of
MS Windows, we have an extreme example where at least
the early versions of a new GUI library are written
using the previous standard library.  But look at
wxWindows and its descendants.  That is an impressive
example of how you can do with any GUI library what
you can do with any other.  Sometimes a given task is
easier, and at other times harder, but it is always doable.

If you have clients willing to pay you to maintain
different GUI libraries, great.  But I would not waste
my time on it without good reason.

> OK for the improved code, but as I said, the differences can be
> in functional behaviour: in order to keep specific functionality
> unchanged, it may be necessary to use an older version of a
> library on one branch, and a newer version where the functionality
> is meant to be different.  This is a reason which could make the
> installation of libraries directly on the system troublesome.
> 
Perhaps, but it seems to me you're making too much
work for yourself.  It seems to me to be generally
easier to adapt the old code to make use of the new
library, even if that means adding a thunk layer to
map old calls to a new interface in the library or to
address perceived deficiencies in the new library.
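
A minimal sketch of such a thunk layer (every name below is
hypothetical): suppose the new library replaced a two-argument call
with one taking an options struct; the thunk preserves the old
signature so the old call sites compile unchanged.

#include <cmath>
#include <cstdio>

// New interface, as the upgraded (hypothetical) library exports it.
struct solve_options { double tolerance; int max_iterations; };
double solve_v2(double x, const solve_options &opt) {
    (void)opt;                 // trivial stand-in body for the sketch
    return std::sqrt(x);
}

// Thunk: keeps the old two-argument signature alive by forwarding
// to the new interface.
inline double solve(double x, double tolerance) {
    solve_options opt;
    opt.tolerance = tolerance;
    opt.max_iterations = 100;  // the old behaviour's implicit default
    return solve_v2(x, opt);
}

int main() {
    std::printf("%f\n", solve(2.0, 1e-9));  // old call site, unchanged
    return 0;
}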


> I will not maintain all the versions of gcc used since the first
> branch.  I can keep the different versions of the toolchains in a
> sandbox; this way the system tools can be upgraded independently,
> without worrying about potential incoming problems.
> I will use the new toolchains on all the maintained versions, but
> only after in-depth testing.  Meanwhile the validated versions of
> the toolchains (or whatever) will be used.
> 
So the principal difference between this and what
I've been arguing is the number of versions of the
tool chain to maintain.  You would opt to use several
in production, plus one or more in evaluation, while I
would insist on only one in production and no more
than one in evaluation.

I can see problems if your machine is shared and you
don't have control over upgrade cycles, but that is a
different problem (and one I'd find intolerable).  If
I am working within an organization that has to
provide me with a shared machine, and my development
tools, I'd insist that they ensure that whatever else
they do with the system, they don't mess with my
development tools.  The functions of the system
administrator who is responsible for administering the
machine I use include ensuring continual availability
of a development environment conducive to permitting
me to be as productive as possible.  System upgrades
can not be done just whenever the sysop gets a whim to
do so.

Cheers,

Ted

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: statically linked gcc executables
  2008-01-30 11:32                   ` Ted Byers
  2008-01-31  1:20                     ` Angelo Leto
@ 2008-01-31 15:45                     ` John Carter
  1 sibling, 0 replies; 19+ messages in thread
From: John Carter @ 2008-01-31 15:45 UTC (permalink / raw)
  To: Ted Byers; +Cc: gcc-help

On Tue, 29 Jan 2008, Ted Byers wrote:

>  Once I make the decision to
> upgrade my tools, I don't waste further time on the
> old ones.

Yup. In all the words, these are the ones that best describe the
unavoidable difference.

You can decide what tools you will use, and you can afford to abandon
old branches; we can't.

That choice of tools for us is largely determined by what phase of
the various product lifecycles they are on.

Typically some of the team will be on the final stages of a release
cycle, I'm _never_ permitted to even touch their toolset.

Some of the team will be cutting new code for a future release. I will
be feeding them the latest distro and toolchain.

Some of those guys will be pulled back to fix bugs and do patch
releases on older releases of products. They must have the old toolset
that can live on the latest distro, but can still build the old code.

Sometimes up to two to three versions back! As I've said elsewhen in
this forum... reality sucks.

The guy in the latest Dr. Dobbs who is raving about the concept of
virtualization understands exactly where I'm at!

http://drdobbs.com/development-tools/205917147




John Carter                             Phone : (64)(3) 358 6639
Tait Electronics                        Fax   : (64)(3) 359 4632
PO Box 1645 Christchurch                Email : john.carter@tait.co.nz
New Zealand

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2008-01-30  1:14 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-01-24 18:53 statically linked gcc executables Angelo leto
2008-01-25  3:51 ` Andrew Haley
2008-01-25  6:30   ` Angelo Leto
2008-01-25  9:20     ` Ted Byers
2008-01-25 12:26       ` Angelo Leto
2008-01-29 13:14       ` John Carter
2008-01-29 16:30         ` Ted Byers
2008-01-25 10:40     ` Andrew Haley
2008-01-25 12:38       ` Angelo Leto
2008-01-25 13:17         ` Andrew Haley
2008-01-25 23:12           ` Angelo Leto
2008-01-26  0:56             ` Andrew Haley
2008-01-26  2:11               ` Angelo Leto
2008-01-30  5:44                 ` Angelo Leto
2008-01-30 11:32                   ` Ted Byers
2008-01-31  1:20                     ` Angelo Leto
2008-01-31 15:32                       ` Ted Byers
2008-01-31 15:45                     ` John Carter
2008-01-25 22:10     ` Ian Lance Taylor

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).