public inbox for gsl-discuss@sourceware.org
* Question about gsl
@ 2003-12-02  9:57 Przemyslaw Sliwa
  2003-12-02 11:42 ` Robert G. Brown
  0 siblings, 1 reply; 2+ messages in thread
From: Przemyslaw Sliwa @ 2003-12-02  9:57 UTC (permalink / raw)
  To: gsl-discuss

Dear All,

Last week our chair got a new cluster consisting of 32 dual-processor
computers. I am not an expert in parallel computing and would like to know
whether it is possible to use the GSL library on a cluster. I suppose there
must be a special version of the library, as there is for NAG, for instance.

Am I right?

Thank you for your help,

Kind regards,

Przemyslaw



* Re: Question about gsl
  2003-12-02  9:57 Question about gsl Przemyslaw Sliwa
@ 2003-12-02 11:42 ` Robert G. Brown
  0 siblings, 0 replies; 2+ messages in thread
From: Robert G. Brown @ 2003-12-02 11:42 UTC (permalink / raw)
  To: Przemyslaw Sliwa; +Cc: gsl-discuss

On Tue, 2 Dec 2003, Przemyslaw Sliwa wrote:

> Dear All,
> 
> Last week our chair got a new cluster consisting of 32 dual-processor
> computers. I am not an expert in parallel computing and would like to know
> whether it is possible to use the GSL library on a cluster. I suppose there
> must be a special version of the library, as there is for NAG, for instance.

Of course it is possible to use the GSL library on a cluster (I do it
all the time:-), but as far as I know the library itself is not
parallelized in the sense that you can (for example) invoke the ODE
solver and have it automagically parallelize on a cluster if one happens
to be available.

This is for fairly sound reasons.  It is absurdly difficult to write
algorithms for even a fairly simple set of numerical operations (say,
elements of linear algebra such as those in BLAS) and have them run with
optimal efficiency in SERIAL code on a single CPU with its various
speeds/layers/sizes of memory.  This has led to the development of
ATLAS, a brilliant piece of work that automatically tunes BLAS to switch
algorithms and adjust block sizes and stride to perform optimally on a
given architecture (generally resulting in a 150-300% speedup).

Parallel code is the same only more so -- for some algorithms or
operations, the extra nodes can be viewed as extensions of memory, with
a significant latency and bandwidth hit.  For others, the code is
actively parallelized, with the task partitioned onto the nodes into
more or less independent chunks and with interprocessor communications
between the nodes to handle the more or less part, again with a variety
of latency and bandwidth bottlenecks depending on the PRECISE
architecture of node CPU, cache, memory, and the network.

Even libraries that advertise "parallel versions" should therefore be
viewed with a modest degree of suspicion.  In most cases, achieving
anything near optimal parallel speedup requires nontrivial parallel task
design: very specific engineering of the task using a message-passing
library such as PVM or MPI, or (for the Real Programmers amongst us:-)
raw sockets or the hardware communications stack provided by a
high-performance NIC vendor.

As you sound like you're getting started, I'd recommend minimally Ian
Foster's book "Designing and Building Parallel Programs" (available
online for free as well as from Amazon in hardcover).  I'd also
recommend a visit to http://www.phy.duke.edu/brahma/index.php to browse
the many cluster resources (including Ian's book) linked to the site.
There is an online, not quite complete book on engineering clusters,
collections of links to other useful cluster resources and sites, and
some programs you might find useful.  Finally, consider subscribing to the
new Cluster World magazine -- it has both columns and themed articles
designed to be useful to clustervolken from the tyro to the expert.

  HTH,

    rgb

> 
> Am I right?
> 
> Thank you for your help,
> 
> Kind regards,
> 
> Przemyslaw
> 
> 

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@phy.duke.edu



