public inbox for gcc-help@gcc.gnu.org
* Unclear documentation on building GCC together with binutils
@ 2006-11-29  6:10 Ulf Magnusson
  2006-11-29  7:52 ` Tim Prince
  2006-11-29  8:34 ` Brian Dessent
  0 siblings, 2 replies; 9+ messages in thread
From: Ulf Magnusson @ 2006-11-29  6:10 UTC (permalink / raw)
  To: gcc-help

The following paragraph in the GCC installation guide
(http://gcc.gnu.org/install/) seems a bit unclear to me:

"If you also intend to build binutils (either to upgrade an existing
installation or for use in place of the corresponding tools of your
OS), unpack the binutils distribution either in the same directory or
a separate one. In the latter case, add symbolic links to any
components of the binutils you intend to build alongside the compiler
(bfd, binutils, gas, gprof, ld, opcodes, ...) to the directory
containing the GCC sources."

What exactly is meant by "the same directory"? Say the GCC tarball is
unpacked in the directory foo, yielding foo/gcc-3.x.x. Should binutils
be unpacked in the same directory, so that you get

foo/gcc-3.x.x
foo/binutils-x.x

, or in the gcc-3.x.x directory, so that you get

foo/gcc-3.x.x/binutils-x.x

, or into the gcc-3.x.x directory with one directory level stripped
(e.g. with --strip-components 1 passed to tar), essentially "merging"
the two packages in foo/gcc-3.x.x? Whatever turns out to be The Right
Way, the doc really needs to be updated for clarity.

Why would you want to build gcc and binutils together in this way by
the way? Isn't it possible to install them separately?

/Ulf Magnusson

* Re: Unclear documentation on building GCC together with binutils
  2006-11-29  6:10 Unclear documentation on building GCC together with binutils Ulf Magnusson
@ 2006-11-29  7:52 ` Tim Prince
  2006-11-29  8:34 ` Brian Dessent
  1 sibling, 0 replies; 9+ messages in thread
From: Tim Prince @ 2006-11-29  7:52 UTC (permalink / raw)
  To: Ulf Magnusson; +Cc: gcc-help

Ulf Magnusson wrote:
> 
> "If you also intend to build binutils (either to upgrade an existing
> installation or for use in place of the corresponding tools of your
> OS), unpack the binutils distribution either in the same directory or
> a separate one. In the latter case, add symbolic links to any
> components of the binutils you intend to build alongside the compiler
> (bfd, binutils, gas, gprof, ld, opcodes, ...) to the directory
> containing the GCC sources."
> 
> What exactly is meant by "the same directory"? Say the GCC tarball is
> unpacked in the directory foo, yielding foo/gcc-3.x.x. Should binutils
> be unpacked in the same directory, so that you get
> 
> foo/gcc-3.x.x
> foo/binutils-x.x
> 

It has been a long time since I've done this. I believe this was how it 
was done.

> Why would you want to build gcc and binutils together in this way by
> the way? Isn't it possible to install them separately?
> 

The most evident reason would be on a system where neither gcc nor 
binutils is available pre-built, but each depends on the other.  That 
used to be a common case, e.g. on Sun or HP systems.  All systems I run 
on nowadays come with adequate versions of each, or pre-built versions 
are available for internet download, so it is possible to upgrade one at 
a time.

* Re: Unclear documentation on building GCC together with binutils
  2006-11-29  6:10 Unclear documentation on building GCC together with binutils Ulf Magnusson
  2006-11-29  7:52 ` Tim Prince
@ 2006-11-29  8:34 ` Brian Dessent
  2006-11-29 22:23   ` Ulf Magnusson
  2006-11-29 23:08   ` Unclear documentation on building GCC together with binutils Ulf Magnusson
  1 sibling, 2 replies; 9+ messages in thread
From: Brian Dessent @ 2006-11-29  8:34 UTC (permalink / raw)
  To: Ulf Magnusson; +Cc: gcc-help

Ulf Magnusson wrote:

> , or into the gcc-3.x.x directory with one directory level stripped
> (e.g. with --strip-components 1 passed to tar), essentially "merging"
> the two packages in foo/gcc-3.x.x? Whatever turns out to be The Right
> Way, the doc really needs to be updated for clarity.

Yes.  You want to merge the contents of both gcc and binutils into one
directory.  If you examine the structure of the source code, you will
find that they share the same "toplevel" infrastructure, and they are
designed to live in the same tree.  In other words, both the binutils
and gcc tarballs are subsets of one larger directory tree of code.  And
in fact it's not just gcc and binutils, it's all of sourceware: gcc,
newlib, binutils, gdb/insight, sim, cygwin, etc. are all really one big
tree that shares a common toplevel, and can be built that way if you are
adventurous.  The build machinery at the toplevel is supposed to be able
to build any one or all of these things at once, depending on what's
present.

When gcc still lived in CVS this was much easier, as there was the
"uberbaum" which was a single CVS tree that actually contained
everything that you could check out as one huge tree.  Now that gcc is
in SVN it's a little more separated, but the common toplevel files are
maintained in sync in both repositories.

When doing this from release tarballs, though, you will encounter some
problems in that each package contains its own copy of some common
infrastructure, like libiberty/ and config/.  Because of this it can be
a little difficult to actually combine the trees: what you have is two
different vintages of some common things, depending on when each was
released.  You want to use the newer copy wherever a conflict exists,
but in some cases files that were deleted in the newer package are
still expected by the older one, so you really must combine the two
trees, not just select the newer of the two.  The script "symlink-tree"
in the toplevel is meant to help with this.  Google for "combined tree
build" for more information.

AFAIK this tradition originated at Cygnus Solutions, so it is not
surprising that it's still used a lot in some circles, given that a
great number of gcc/binutils developers used to work at Cygnus or
continue to work for Red Hat after the merger.

> Why would you want to build gcc and binutils together in this way by
> the way? Isn't it possible to install them separately?

It can be a lot more convenient.  For example, for a cross-toolchain,
the normal procedure would be:

configure cross-binutils
build cross-binutils
install cross-binutils
adjust PATH to make just-installed files available
move to another build dir
configure cross-gcc
build cross-gcc
install cross-gcc
etc...
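
For concreteness, that might look something like the following sketch
(the target triplet, version numbers and prefix are just example
values):

# sketch only; arm-elf and /opt/cross are placeholders
mkdir build-binutils && cd build-binutils
../binutils-x.y.z/configure --target=arm-elf --prefix=/opt/cross
make && make install
export PATH=/opt/cross/bin:$PATH
cd .. && mkdir build-gcc && cd build-gcc
../gcc-x.y.z/configure --target=arm-elf --prefix=/opt/cross \
    --with-newlib --enable-languages=c
make && make install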

However, the build infrastructure knows that, when you are building in
a combined tree, it should use the just-built in-tree tools where
necessary, so that nothing has to be installed in between.  The
procedure then becomes just:

configure combined tree
build combined tree
install combined tree
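
Concretely, something like (again, the target and flags are only an
example):

# combined/ is the merged source tree described above
mkdir build && cd build
../combined/configure --target=arm-elf --prefix=/opt/cross \
    --enable-languages=c
make
make install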

This builds everything at once, in one place, instead of as a chain of
stages.  And likewise, you can add newlib/gdb/insight into the mix, so
this becomes very convenient for people who work with cross-toolchains.

Its benefit may not be very obvious to you if all you care about is
native tools that use the existing host tools or host libc, but for a
cross situation where those tools don't exist, it can make things a lot
easier.

This is especially true for targets where you don't have easy access to
existing libc headers, as there is a chicken-and-egg problem of "can't
build a fully functional gcc without libc headers" and "can't build
libc without a functional gcc."  The common way to solve this is to
drop newlib into the tree and do a combined build of both at once,
which breaks the circular dependency.  (Another way to solve it, for
cases where you aren't using newlib and can't do a combined tree, is
the way crosstool does it: first build a stripped-down C-only gcc, use
that to configure the libc enough to get its headers, then build a full
gcc using those headers, and finally build the full libc using that
gcc.)
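
Very roughly, and omitting all the details that crosstool handles for
you, that staged dance looks like (flags are illustrative):

# stage 1: C-only gcc with no libc yet
../gcc-x.y.z/configure --target=arm-linux --prefix=/opt/cross \
    --enable-languages=c --without-headers --with-newlib
make all-gcc && make install-gcc
# stage 2: use that gcc to configure the libc and install its headers
# stage 3: rebuild a full gcc against those headers
# stage 4: build the full libc with the full gcc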

And keep in mind that these subdirs are always modular: even if you are
working in a combined tree, if you just want to remake one component
(or just install one component), you can cd into that dir and work from
there.  The combined tree simply gives you the flexibility to either
build/install everything at once or work in specific areas.
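
For instance, to remake and reinstall just the assembler out of an
already-configured combined build tree (the path is illustrative):

cd build/gas && make && make install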

Brian

* Re: Unclear documentation on building GCC together with binutils
  2006-11-29  8:34 ` Brian Dessent
@ 2006-11-29 22:23   ` Ulf Magnusson
  2006-11-29 23:14     ` Ian Lance Taylor
  2006-11-30 10:37     ` Kai Ruottu
  2006-11-29 23:08   ` Unclear documentation on building GCC together with binutils Ulf Magnusson
  1 sibling, 2 replies; 9+ messages in thread
From: Ulf Magnusson @ 2006-11-29 22:23 UTC (permalink / raw)
  To: gcc-help

On 11/29/06, Brian Dessent <brian@dessent.net> wrote:
> Ulf Magnusson wrote:
>
> [...]
>
> The script "symlink-tree" in the toplevel is meant to help with this.
> Google for "combined tree build" for more information.
>
> [...]

Thanks for that very clarifying post!
Could I submit a documentation patch that expands the install guide to
include some of this information, and to mention tools like
symlink-tree? It's a bit on the short side right now.
By the way, what's "RDA" that I see mentioned in a lot of posts having
to do with cross-compilation?

/Ulf Magnusson

* Re: Unclear documentation on building GCC together with binutils
  2006-11-29  8:34 ` Brian Dessent
  2006-11-29 22:23   ` Ulf Magnusson
@ 2006-11-29 23:08   ` Ulf Magnusson
  2006-11-30  1:25     ` Brian Dessent
  1 sibling, 1 reply; 9+ messages in thread
From: Ulf Magnusson @ 2006-11-29 23:08 UTC (permalink / raw)
  To: gcc-help

On 11/29/06, Brian Dessent <brian@dessent.net> wrote:
> Ulf Magnusson wrote:
>
> [...]
>
> You want to use the newer copy wherever a conflict exists, but in some
> cases files that were deleted in the newer package are still expected
> by the older one, so you really must combine the two trees, not just
> select the newer of the two.  The script "symlink-tree" in the
> toplevel is meant to help with this.  Google for "combined tree build"
> for more information.
>
> [...]

Just to check that I have understood the procedure and read the
symlink-tree script properly, would the following be the right way to
combine gcc and binutils using the symlink-tree script?

tar -xvjf gcc-recent.release.tar.bz2
tar -xvjf binutils-not.so.recent.release.tar.bz2
cd binutils-not.so.recent.release
./symlink-tree ../gcc-recent.release
configure ...
make ...

Oh, and shouldn't srcdir be quoted when it's assigned and used in the
script, to correctly handle paths with whitespace in them?

/Ulf Magnusson

* Re: Unclear documentation on building GCC together with binutils
  2006-11-29 22:23   ` Ulf Magnusson
@ 2006-11-29 23:14     ` Ian Lance Taylor
  2006-11-30 10:37     ` Kai Ruottu
  1 sibling, 0 replies; 9+ messages in thread
From: Ian Lance Taylor @ 2006-11-29 23:14 UTC (permalink / raw)
  To: Ulf Magnusson; +Cc: gcc-help

"Ulf Magnusson" <ulfalizer@gmail.com> writes:

> By the way, what's "RDA" that I see mentioned in a lot of posts having
> to do with cross-compilation?

It's a library for the gdb remote debug protocol, which can be
included in a program to permit it to be debugged remotely.  RDA
stands for "Red Hat Debug Agent".

Ian

* Re: Unclear documentation on building GCC together with binutils
  2006-11-29 23:08   ` Unclear documentation on building GCC together with binutils Ulf Magnusson
@ 2006-11-30  1:25     ` Brian Dessent
  0 siblings, 0 replies; 9+ messages in thread
From: Brian Dessent @ 2006-11-30  1:25 UTC (permalink / raw)
  To: Ulf Magnusson; +Cc: gcc-help

Ulf Magnusson wrote:

> Just to check that I have understood the procedure and read the
> symlink-tree script properly, would the following be the right way to
> combine gcc and binutils using the symlink-tree script?
> 
> tar -xvjf gcc-recent.release.tar.bz2
> tar -xvjf binutils-not.so.recent.release.tar.bz2
> cd binutils-not.so.recent.release
> ./symlink-tree ../gcc-recent.release
> configure ...
> make ...

I guess that would work, but you might want to try a build to verify.
I have not personally used symlink-tree.  I think a more common
approach, which you'll find in tutorials, is to create a new dir that
consists entirely of links:

tar jxf gcc-?.?.?.tar.bz2
tar jxf binutils-?.?.?.tar.bz2
mkdir combined && cd combined
../gcc-?.?.?/symlink-tree ../binutils-?.?.?
../gcc-?.?.?/symlink-tree ../gcc-?.?.?

You could also do this with "cp -puR" or tar to merge the trees (if you
don't care about disk space) or with "cpio -l" and hard links, as
suggested in <http://gcc.gnu.org/simtest-howto.html>.
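
The hard-link variant would be along these lines (an untested sketch;
version numbers are placeholders):

mkdir combined
(cd binutils-x.y.z && find . -print | cpio -pdlm ../combined)
(cd gcc-x.y.z && find . -print | cpio -pdlm ../combined)

Since cpio in pass-through mode doesn't normally replace a file with an
older one, copying in this order should roughly give you the "use the
newer copy" behaviour described earlier.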

> Oh, and shouldn't srcdir be quoted when it's assigned and used in the
> script, to correctly handle paths with whitespace in them?

Probably.  Although having pathnames with spaces in them will probably
throw a wrench into the build in other places as well, so I'd suggest
avoiding it if at all possible.  I would say file a bug report for any
place where an unquoted argument causes a failure due to spaces in
filenames, but I don't know whether the official stance is "that isn't
supported" or not.
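
The fix itself would be mechanical; a hypothetical before/after, not
the actual script text:

# before: breaks on paths containing whitespace
srcdir=$1
ln -s $srcdir/$f .
# after: quoted
srcdir="$1"
ln -s "$srcdir/$f" .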

Brian

* Re: Unclear documentation on building GCC together with binutils
  2006-11-29 22:23   ` Ulf Magnusson
  2006-11-29 23:14     ` Ian Lance Taylor
@ 2006-11-30 10:37     ` Kai Ruottu
  2006-11-30 16:20       ` Available Linux C libraries for scepticals Kai Ruottu
  1 sibling, 1 reply; 9+ messages in thread
From: Kai Ruottu @ 2006-11-30 10:37 UTC (permalink / raw)
  To: gcc-help

On 11/29/06, Brian Dessent <brian@dessent.net> wrote:
>> It can be a lot more convenient.  For example, for a cross-toolchain,
>> the normal procedure would be:
>>
>> configure cross-binutils
>> build cross-binutils
>> install cross-binutils
>> adjust PATH to make just-installed files available
>> move to another build dir
>> configure cross-gcc
>> build cross-gcc
>> install cross-gcc
 Normal?  After producing well over 1000 cross-toolchains, this doesn't
look in any way "normal".  It looks like the situation where a newbie
makes their first cross-toolchain, not the situation where the builder
updates their cross-toolchain by rebuilding one of its components from
newer, bug-fixed sources....  I would see the latter case as much more
common among cross-toolchain builders.

>> This is especially true for targets where you don't have easy access
>> to existing libc headers, as there is a chicken-and-egg problem of
>> "can't build a fully functional gcc without libc headers" and "can't
>> build libc without a functional gcc."
 I would call these ideas "bolshevism" or "bullshitism", because there
is no truth in them, only blind belief in some weird ideas...  For
instance, if someone claims that there are no prebuilt C libraries for
Linux/PPC, Linux/ARM, Linux/MIPS, Linux/SH, Linux/Sparc, Linux/m32r,
Linux/am33 or Linux/m68k, quite many really believe it, even though
some 'dissident' could give the URLs from which to download them...
The prebuilt Linux libs are like UFOs: people may see them, but they
don't believe their eyes because so many say they really don't
exist....

 I myself have never had any trouble finding these libs...  Except, of
course, with AIX, HP-UX, Apple's Mac OS X and other "closed"
proprietary systems.  And with something like RHEL it is quite hard to
find its original prebuilt C libraries, much harder than in the SCO
UnixWare 7.1.4 case I tried recently...  Why is Red Hat Linux more
"closed" than SCO Unix?

 It could be interesting to do research on the misunderstandings people
have about the target libraries...  For all sanity one could assume
people to understand that these are like PDF files in documents: one
just copies them from one host to another and doesn't need to
"customize" them for the new host...  But some people really do rebuild
them for each new host, and if the result on the new host differs from
the existing one, no bells ring in the builder's head...  Of course
using a different GCC to compile produces a different result, but when
using "identical" GCCs (made from the same sources) on different hosts,
what they produce should generally be identical....

 Or, when needing tyres for a car, one can choose among Michelins,
GoodYears, Continentals, Nokians (yes, Nokia was once famous for its
very good rubber boots and car tyres made for arctic environments!),
Firestones etc.  All of these are "suitable", if not "right", for some
demanding user...  But even this user would prefer to drive on some
"suitable" tyres to the shop where the "right" tyres can be found,
instead of driving there in a "stripped car" on bare wheels, because
nothing but the right tyres would be accepted...  With Linux C
libraries the situation is much the same: one either accepts
bootstrapping with a "suitable" Linux C library or one doesn't.

 In 1994, after buying a box with Linux install media and trying to
install it onto an empty PC which had no opsys on it, it was really
frustrating to find out that installing Linux required one to first
purchase MS-DOS!  Or to ask some friend to make the boot floppies while
laughing at that stupid Linux which doesn't even come with boot
floppies...  Nowadays PCs can boot from CDs and don't require one to
purchase an MS opsys first, but not then...  But old PCs still cannot
boot from the new Linux CDs; fortunately there are things like "Smart
Boot Manager", and if one has that on a floppy, installing Linux
succeeds...  OK, my point was that one should be prepared for some
disappointments and to use some "bootstrap" stuff sometimes; life will
be much easier if one accepts this fact...

* Available Linux C libraries for scepticals
  2006-11-30 10:37     ` Kai Ruottu
@ 2006-11-30 16:20       ` Kai Ruottu
  0 siblings, 0 replies; 9+ messages in thread
From: Kai Ruottu @ 2006-11-30 16:20 UTC (permalink / raw)
  To: gcc-help

Kai Ruottu wrote:
> On 11/29/06, Brian Dessent <brian@dessent.net> wrote:
>>> This is especially true for targets where you don't have easy access
>>> to existing libc headers, as there is a chicken-and-egg problem of
>>> "can't build a fully functional gcc without libc headers" and "can't
>>> build libc without a functional gcc."
> I would call these ideas "bolshevism" or "bullshitism", because there
> is no truth in them, only blind belief in some weird ideas...  For
> instance, if someone claims that there are no prebuilt C libraries for
> Linux/PPC, Linux/ARM, Linux/MIPS, Linux/SH, Linux/Sparc, Linux/m32r,
> Linux/am33 or Linux/m68k, quite many really believe it, even though
> some 'dissident' could give the URLs from which to download them...
> The prebuilt Linux libs are like UFOs: people may see them, but they
> don't believe their eyes because so many say they really don't
> exist....
 For Brian and others who don't believe that prebuilt glibcs for all
kinds of CPU architectures really do exist, some URLs follow, first
from a single archive alone:

  ftp://ftp.sunet.se/pub/Linux/distributions/debian/pool/main/g/glibc
  ftp://ftp.sunet.se/pub/Linux/distributions/ubuntu/ubuntu/pool/main/g/glibc/
  ftp://ftp.sunet.se/pub/Linux/distributions/eldk/4.0/
  ftp://ftp.sunet.se/pub/Linux/distributions/fedora/6

Debian has Linux ports for quite many CPU architectures, including ARM,
MIPS, m68k, hppa, x86, ia64, x86_64, sparc, s390 and powerpc.  Ubuntu
does not have as many; ELDK has ARM, MIPS and PPC, and Fedora x86,
x86_64 and PPC.  The OpenSUSE 10.1 i586, x86_64 and PPC glibcs can be
found, for instance, via:

  http://suse.inode.at/opensuse/distribution/SL-10.1/inst-source/suse/

The Linux/FRV and Linux/AM33 toolchain distros can be found at least via:

  ftp://ftp.funet.fi/pub/linux/mirrors/redhat/redhat/gnupro

The Linux/m32r and Linux/SH stuff can be found via:

  http://www.linux-m32r.org/
  http://www.sh-linux.org/index.html

For which CPU variation of Linux can one still not find a prebuilt
glibc to use during bootstrapping?

end of thread

Thread overview: 9+ messages
2006-11-29  6:10 Unclear documentation on building GCC together with binutils Ulf Magnusson
2006-11-29  7:52 ` Tim Prince
2006-11-29  8:34 ` Brian Dessent
2006-11-29 22:23   ` Ulf Magnusson
2006-11-29 23:14     ` Ian Lance Taylor
2006-11-30 10:37     ` Kai Ruottu
2006-11-30 16:20       ` Available Linux C libraries for scepticals Kai Ruottu
2006-11-29 23:08   ` Unclear documentation on building GCC together with binutils Ulf Magnusson
2006-11-30  1:25     ` Brian Dessent
