public inbox for gsl-discuss@sourceware.org
From: "Robert G. Brown" <rgb@phy.duke.edu>
To: Przemyslaw Sliwa <sliwa@euv-frankfurt-o.de>
Cc: gsl-discuss@sources.redhat.com
Subject: Re: Question about gsl
Date: Tue, 02 Dec 2003 11:42:00 -0000
Message-ID: <Pine.LNX.4.44.0312020622480.5204-100000@lucifer.rgb.private.net>
In-Reply-To: <37206.160.83.32.14.1070359054.squirrel@webmail.euv-frankfurt-o.de>

On Tue, 2 Dec 2003, Przemyslaw Sliwa wrote:

> Dear All,
> 
> Last week our chair got a new cluster consisting of 32 dual-processor
> computers. I am not an expert in parallel computing and would like to know
> whether it is possible to use the GSL library on a cluster. I suppose there
> must be a special version of the library, like NAG for instance.

Of course it is possible to use the GSL library on a cluster (I do it
all the time:-), but as far as I know the library itself is not
parallelized in the sense that you can (for example) invoke the ODE
solver and have it automagically parallelize on a cluster if one happens
to be available.
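
To make that concrete, here is a minimal sketch of the usual pattern,
assuming GSL and an MPI implementation are installed on every node (the
Bessel evaluation is just a stand-in for whatever serial work each node
really does).  The parallelism lives entirely in the driver; GSL itself
runs as ordinary serial code on each node:

/* Each MPI rank does independent work with plain serial GSL calls.
 * Illustrative sketch only: the "parameter sweep" is arbitrary. */
#include <stdio.h>
#include <mpi.h>
#include <gsl/gsl_sf_bessel.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* No inter-node communication: each rank evaluates GSL on its
       own parameter and reports the result. */
    double x = 1.0 + rank;
    double y = gsl_sf_bessel_J0(x);
    printf("rank %d of %d: J0(%g) = %.10g\n", rank, size, x, y);

    MPI_Finalize();
    return 0;
}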

This is for fairly sound reasons.  It is absurdly difficult to write
algorithms for even a fairly simple set of numerical operations (say,
elements of linear algebra such as those in BLAS) and have them run
optimally efficiently in SERIAL code on a CPU with various
speeds/layers/sizes of memory.  This has led to the development of
ATLAS, a brilliant piece of work that automatically tunes BLAS to switch
algorithms and adjust block sizes and stride to perform optimally on a
given specific architecture (generally resulting in a 150-300% speedup).
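
To give a sense of where ATLAS plugs in: GSL does its linear algebra
through the standard CBLAS interface, so the same unmodified C code can
be linked against either the reference gslcblas or an ATLAS-tuned
CBLAS.  A minimal sketch follows; the link lines in the comment are
typical, not universal (library names and paths vary by distribution):

/* dgemm through GSL's CBLAS interface.  Link against the reference
 * CBLAS with roughly
 *     gcc demo.c -lgsl -lgslcblas -lm
 * or against an ATLAS-tuned CBLAS with something like
 *     gcc demo.c -lgsl -lcblas -latlas -lm
 * The C code itself does not change; only the library you link to does. */
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>

int main(void)
{
    const size_t n = 512;
    gsl_matrix *A = gsl_matrix_alloc(n, n);
    gsl_matrix *B = gsl_matrix_alloc(n, n);
    gsl_matrix *C = gsl_matrix_alloc(n, n);
    gsl_matrix_set_all(A, 1.0);
    gsl_matrix_set_all(B, 2.0);
    gsl_matrix_set_zero(C);

    /* C = 1.0 * A * B + 0.0 * C; how fast this runs is entirely up to
       whichever CBLAS was linked in. */
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, A, B, 0.0, C);

    gsl_matrix_free(A);
    gsl_matrix_free(B);
    gsl_matrix_free(C);
    return 0;
}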

Parallel code is the same only more so -- for some algorithms or
operations, the extra nodes can be viewed as extensions of memory, with
a significant latency and bandwidth hit.  For others, the code is
actively parallelized, with the task partitioned onto the nodes into
more or less independent chunks and with interprocessor communications
between the nodes to handle the more or less part, again with a variety
of latency and bandwidth bottlenecks depending on the PRECISE
architecture of node CPU, cache, memory, and the network.
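
Here is a minimal sketch of that second style, with the caveat that the
simple domain decomposition and single MPI_Reduce below stand in for
communication patterns that are usually engineered far more carefully
(same assumptions as before: GSL plus an MPI implementation on the
nodes, and a purely illustrative integrand):

/* Partition an integral over [0,1] across the ranks: each node runs
 * GSL's serial QAGS integrator on its own sub-interval, then one
 * MPI_Reduce combines the pieces. */
#include <stdio.h>
#include <mpi.h>
#include <gsl/gsl_integration.h>

static double f(double x, void *params)
{
    (void) params;
    return 4.0 / (1.0 + x * x);   /* integrates to pi on [0,1] */
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* This node's more-or-less independent chunk of the domain. */
    double a = (double) rank / size;
    double b = (double) (rank + 1) / size;

    gsl_integration_workspace *w = gsl_integration_workspace_alloc(1000);
    gsl_function F = { &f, NULL };
    double part, err;
    gsl_integration_qags(&F, a, b, 0.0, 1e-10, 1000, w, &part, &err);
    gsl_integration_workspace_free(w);

    /* The interprocessor communication: sum the partial results. */
    double total = 0.0;
    MPI_Reduce(&part, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("integral ~ %.12f\n", total);

    MPI_Finalize();
    return 0;
}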

Even libraries that advertise "parallel versions" should therefore be
viewed with a modest degree of suspicion.  In most cases, achieving
anything near optimal parallel speedup requires very specific
engineering of the parallel task itself, using either a message passing
library such as PVM or MPI or (for the Real Programmers amongst us:-)
raw sockets or the hardware communications stack provided by a high
performance NIC vendor.

As you sound like you're getting started, I'd recommend minimally Ian
Foster's book "Designing and Building Parallel Programs" (available
online for free as well as from Amazon in hardcover).  I'd also
recommend a visit to http://www.phy.duke.edu/brahma/index.php to browse
the many cluster resources (including Ian's book) linked to the site.
There is an online, not quite complete book on engineering clusters,
collections of links to other useful cluster resources and sites, and
some programs you might find useful.  Finally, consider subscribing to the
new Cluster World magazine -- it has both columns and themed articles
designed to be useful to clustervolken from the tyro to the expert.

  HTH,

    rgb

> 
> Am I right?
> 
> Thank you  for help,
> 
> Kind regards,
> 
> Przemyslaw
> 
> 

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@phy.duke.edu


