* simplicity vs. efficiency of algorithms in GSL
From: Tuomo Keskitalo @ 2008-09-22 18:32 UTC
To: Patrick Alken; +Cc: gsl-discuss

On 09/22/2008 07:25 PM, Patrick Alken wrote:

> There are much faster (and far more sophisticated) eigenvalue
> algorithms that are not really suited for GSL due to their
> complexity to code. LAPACK contains these types of algorithms
> and it is encouraged to use that library if one needs the
> fastest available algorithms.

About this point: I have been wondering where the balance of GSL should
be on simplicity vs. efficiency of the code. Of course, even a simple
algorithm is better than none, but I think it would be good to offer
efficient routines in GSL. If somebody publishes a well written,
efficient eigenvalue algorithm, would it get included in GSL? Or would
it be better off as an extension library?

--
Tuomo.Keskitalo@iki.fi
http://iki.fi/tuomo.keskitalo
* Re: simplicity vs. efficiency of algorithms in GSL
From: Patrick Alken @ 2008-09-22 19:48 UTC
To: Tuomo Keskitalo; +Cc: Patrick Alken, gsl-discuss

> About this point: I have been wondering where the balance of GSL should
> be on simplicity vs. efficiency of the code. Of course, even a simple
> algorithm is better than none, but I think it would be good to offer
> efficient routines in GSL. If somebody publishes a well written,
> efficient eigenvalue algorithm, would it get included in GSL? Or would
> it be better off as an extension library?

I think it depends on how simple and easy to understand it is. The
algorithm in Golub and Van Loan is relatively simple, and someone who
is interested could read that book and then look at the GSL code and
understand what is going on fairly easily.

A year or so ago I tried to implement the current LAPACK algorithm for
nonsymmetric eigenvalues, the "small-bulge aggressive early deflation"
algorithm. That algorithm is easily 50-70% faster than the current GSL
algorithm; however, it is extremely complex, and it would take a
non-expert several weeks to understand what the code is doing.

Since that algorithm is available in LAPACK, a free library, I really
don't see the need to put faster, more complicated algorithms in GSL,
since LAPACK already has the fastest algorithms currently available.
However, if you want to implement these as extensions, some would
certainly find that useful.

Patrick Alken
* Re: simplicity vs. efficiency of algorithms in GSL
From: Robert G. Brown @ 2008-09-22 21:01 UTC
To: Patrick Alken; +Cc: Tuomo Keskitalo, gsl-discuss

On Mon, 22 Sep 2008, Patrick Alken wrote:

> Since that algorithm is available in LAPACK, a free library, I
> really don't see the need to put faster more complicated algorithms
> in GSL since LAPACK already has the fastest algorithms currently
> available.

My only comment on this is that, by and large, from a user point of
view I don't really care what the code is doing as long as it is
written by experts and tested and delivered to me ready to run (this
is an exaggeration, sure, but still, one point of a library is to not
HAVE to understand the algorithm to use the -- reliable -- tool).

It isn't, therefore, the algorithm that matters so much as the front
end and in particular its documentation. I recently had call to do
some linear algebra in code (something I fortunately don't have to do
too often) and discovered that LAPACK is not, actually, terribly well
documented. It took me many hours and a lot of googling to find useful
and relevant examples I could adapt to my own needs.

OTOH, the GSL is superbly documented. From the beginning I've been
able to go into the online manual and get a very accurate idea of what
the API looks like for all its many functions, with at least adequate
examples ranging from code fragments to working programs to help
smooth the way. I use it so much I went ahead and bought the paper
copy of the manual, not so much because I need it (paper, alas,
becomes obsolete) but to help support the project.

For that reason (and for completeness) I'd be happy to see LAPACK
"swallowed" by the GSL -- either imported and made its own, or fronted
by thinly disguised macro-level GSL calls (that can always be replaced
by actual local library calls if/when appropriate) that are DOCUMENTED
in the GSL online manual, with examples and so on. To me the hardest
thing about library linear algebra calls is not the algorithm at all --
it is figuring out just where one is supposed to put the matrix being
worked on, the vector(s) being worked on, and whether or not one has
to call this routine before (to, say, put the matrix in tridiagonal
form) or that routine after in order to take a half-cooked answer and
turn it into what you are looking for.

Another good reason for doing this is that linear algebra is important
in a lot of other scientific numerical work and could easily be useful
during the development process of new GSL routines or functions.
Having internal, efficient linear algebra routines would permit the
GSL to use those routines in other functions -- perhaps in curve
fitting, conjugate gradients, etc. -- in a self-contained way with no
external dependencies.

The same thing is true in other areas. There exists at least one
fairly advanced C++ library that does multidimensional
(high-dimensional) quadrature, but the GSL does not have it. I think
it should (especially since C and C++ don't exactly have the same
program interface), and I offered to port at least a few routines from
Hint, with no takers. The GSL doesn't manage high-dimensional tensors
-- it pretty much stops at rank 2 -- but a lot of science is currently
done in N>2 dimensional spaces where having e.g. 4th-rank tensors
would be very useful.

I >>personally<< would like the GSL to become the one-stop shop for
scientific and mathematical subroutines, well documented (with plenty
of examples) and exhaustive. With a GPL code base, there is no reason
for it not to eventually get there. I'd like to see it become a true
swiss-army-knife universal toolbox -- the only library one ever needs
to link to accomplish heavy numerical lifting. But at the moment, it
seems to avoid adding things like tensors, multidimensional
quadrature, and full-service linear algebra that are in principle (but
not always in practice, at least "easy" practice) available in other
packages.

rgb

> However if you want to implement these as extensions some would
> certainly find that useful.
>
> Patrick Alken

--
Robert G. Brown                     Phone(cell): 1-919-280-8443
Duke University Physics Dept, Box 90305
Durham, N.C. 27708-0305
Web: http://www.phy.duke.edu/~rgb
Book of Lilith Website: http://www.phy.duke.edu/~rgb/Lilith/Lilith.php
Lulu Bookstore: http://stores.lulu.com/store.php?fAcctID=877977
* Re: simplicity vs. efficiency of algorithms in GSL
From: Gerard Jungman @ 2008-09-23 21:52 UTC
To: Robert G. Brown; +Cc: Patrick Alken, Tuomo Keskitalo, gsl-discuss

On Mon, 2008-09-22 at 16:59 -0400, Robert G. Brown wrote:

> For that reason (and for completeness) I'd be happy to see LAPACK
> "swallowed" by the GSL -- either imported and made its own or fronted
> by thinly disguised macro-level GSL calls (that can always be replaced
> by actual local library calls if/when appropriate) that are DOCUMENTED
> in the GSL online manual, with examples and so on.

I wanted to do this ten years ago. It's kind of a messed-up situation.
Here's the way I see it.

1) GSL was a militant single-language design.

2) Ten years ago (and still today, I think) there was no reasonable C
   interface to LAPACK. By reasonable I mean something that deals
   rationally with the row-major vs. column-major nonsense, as well as
   all the specialized packing schemes. Also, f2c probably does not
   count as reasonable (i.e. CLAPACK).

3) License issues must be considered.

4) We would all like to see ATLAS become a complete solution. But
   currently ATLAS is only complete for BLAS; it has only a small
   portion of LAPACK functionality.

5) The C++ world has gone hog-wild with half-finished linear algebra
   projects. Normally such things would be irrelevant (the world is
   full of living-dead software), but some of these projects are
   really promising, and I worry/wonder how they deal with interface
   issues.

CBLAS was introduced around that time (~1998), as an actual documented
interface standard (in draft form). That's why it is possible to link
any CBLAS implementation with GSL, and why the native gslcblas
implementation is a separate library. I insisted on doing it that way
so you could drop in any conforming implementation at link time.
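[Editorial sketch: the link-time drop-in Gerard describes is also how the GSL manual documents BLAS selection. Schematically -- the program name is a placeholder, and library names other than `-lgsl`/`-lgslcblas` depend on what is installed locally:]

```shell
# Default: GSL's own reference CBLAS implementation.
gcc -O2 myprog.c -lgsl -lgslcblas -lm

# Drop-in replacement: link a tuned conforming CBLAS (here ATLAS)
# instead.  No source changes are needed, because the CBLAS
# interface itself is the fixed standard.
gcc -O2 myprog.c -lgsl -lcblas -latlas -lm
```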
Doing the same with LAPACK was not possible, due to the lack of any
rationally designed and standardized C interface. Dongarra et al.
claim to be working on the newest LAPACK-thing, whatever it is, which
is supposed to have a rationally designed multi-language interface. I
have no idea what state that project is in, but I haven't seen
anything yet, not even a draft interface standard.

Ten years ago, we kind of gave up, and the "native GSL" faction won
the argument. Personally, I just wanted to take the fortran LAPACK
code, stick it in a subdirectory, design a rational C interface, and
call it done. But maybe that was a bad idea after all...

At least we have standards for BLAS, and ATLAS is BLAS-complete. To
the extent that LAPACK functionality benefits from BLAS tuning, it's
not a total disaster. The native GSL linear algebra crap benefits too,
since it's supposed to be using BLAS whenever possible. But somebody
still needs to sit down and think through all the usage cases: which
BLAS should you use, which LAPACK, or native GSL, under what
conditions, etc. In my view, there will be no gsl-2.0 without cleaning
up this mess.

As an aside, a similar problem exists with FFTs. It's kinda pathetic
when the GSL designers urge people to "just use FFTW". It's true, you
should just use FFTW. But exactly how should you use it, how do you
make GSL play nice with it, and why then does GSL have FFTs at all?
Clearly GSL should have interfaces to FFTs, but why does it have a
native implementation?

In fairness to the developers (including myself, I share the guilt),
many things were different in 1998. And there was a general fear of
building in any dependencies on anybody else; that can be a risky way
to live. But I still remember when we implemented GSL native CBLAS. Oh
my god. I never want to do anything like that again. Not ever.
> Another good reason for doing this is that linear algebra is important
> in a lot of other scientific numerical work and could easily be useful
> during the development process of new GSL routines or functions.
> Having internal, efficient, linear algebra routines would permit the
> GSL to use those routines in other functions -- perhaps in curve
> fitting, conjugate gradient etc -- in a self-contained way with no
> external dependencies.

Absolutely. This was my number one argument for having a best-of-class
native implementation (by dropping fortran-LAPACK into a
subdirectory!).

One of the ingredients in the original GSL kool-aid was the idea that
the sub-libraries should be independent of each other, each devoted to
some task. I pointed out very early on that this was naive, since you
would need to use some functionality to build up other stuff. I
proposed an explicitly designed "level" scheme, making clear which
sub-libraries were foundational and which had dependencies. People
didn't get it. In the end, we adopted a build procedure where all the
libraries got mashed together into one giant libgsl.a anyway; so much
for independence. [Except for libgslcblas.a, of course...]

I do still think that it is fundamentally wrong to have a native GSL
implementation for linear algebra, even given the desire for the
library to be self-contained. What we need is a good standard
interface, which requires people outside GSL to agree on something,
and at least one good implementation, which requires experts to
implement (and none of the GSL developers are linear algebra experts).
We currently have these things for BLAS functions (ATLAS with CBLAS
interfaces). We have LAPACK, in stable fortran implementations, etc.;
now we just need a good C interface. I hope Dongarra et al. do
something. Somebody please do something...
> I'd like to see it become a true swiss-army-knife universal toolbox --
> the only library one ever needs to link to accomplish heavy numerical
> lifting. But at the moment, it seems to avoid adding things like
> tensors, multidimensional quadrature, full-service linear algebra
> that are in principle (but not always in practice, at least "easy"
> practice) available in other packages.

To achieve this, we have to think bigger than GSL itself. We must open
the door to other implementations, and we must conform to established
interfaces when they exist. Saves us a lot of work too! What is needed
is an integration effort: getting everything we need under one roof,
and usable through one well-designed consistent interface. As a
by-product, I think we will get multi-language interfaces for free,
and the whole world of PyGSL and the rest will benefit. Hi-ho.

--
G. Jungman
* Re: simplicity vs. efficiency of algorithms in GSL
From: Brian Gough @ 2008-09-24 21:19 UTC
To: gsl-discuss

I mentally gave up on LAPACK as an option a long time ago. The FLAME
library has more potential: it is LGPL'ed and faster than LAPACK, but
it does not have all the functionality [1].

Realistically I see the role of GSL as an alternative to Numerical
Recipes and other similar non-free libraries. None of these have any
super features, but they are still widely used. As such, I would see
simplicity, ease of use and good documentation as priorities.

[1] http://www.cs.utexas.edu/users/flame/
* Re: simplicity vs. efficiency of algorithms in GSL
From: Gerard Jungman @ 2008-09-25 0:07 UTC
To: gsl-discuss

On Wed, 2008-09-24 at 22:15 +0100, Brian Gough wrote:

> I mentally gave up on LAPACK as an option a long time ago.

Sounds reasonable. I'm tired of waiting for them to produce something
attractive. But what do we do?

> The FLAME library has more potential, it is LGPL'ed and faster than
> LAPACK, but it does not have all the functionality [1].

Interesting. Are we waiting for them to do something practical, like
generate a feature-complete LAPACK replacement? I hope they do...

More than that, I hope they produce an interface that makes sense.
Enough sense that people are motivated to code to it.

> Realistically I see the role of GSL as an alternative to Numerical
> Recipes and other similar non-free libraries. None of these have any
> super features but they are still widely used. As such, I would see
> simplicity, ease of use and good documentation as priorities.

I don't know about this "alternative to NR" philosophy. Those words
have been used before; they make some sense, as far as they go. But
that's really aiming pretty low. GSL is far from perfect, but, in my
opinion, it is clearly better than NR in every way.

As far as the range of functionality to encompass, I agree with Robert
Brown; we should have everything. That doesn't mean we have to
implement everything, we just have to know how it would fit in. I
think figuring out how components fit together is most of the battle.

For the same reason, simplicity vs. efficiency is not the right
argument. Experts should produce the most efficient code, in some
rational and usable form, and we should use it. The only thing that
prevents us from doing this tomorrow is that, as far as I can tell, no
expert has managed to produce what we need in a rational and usable
form. For me, rational and usable means that it solves all those
tedious problems that plague the fortran-to-anything-else interface.
At least that would be a start.

Of course, I like things to be neat and tidy; that's just me. Maybe
other people don't mind having to cobble things together, but I have a
very low threshold for that. There's no work I do that is so
compelling that I don't care how painful it is to get the answer. And
I always want better tools.

I think we surpassed the "alternative to NR" goal some years ago. Now
we should try to make the best possible thing. Period. And my reasons
are very mercenary; if there were such a thing, then I would be able
to use it.

--
G. Jungman
* Re: simplicity vs. efficiency of algorithms in GSL
From: Andrew W. Steiner @ 2008-09-25 3:07 UTC
To: gsl-discuss

My 2 cents: I thought Brian's emphasis on ease of use and good
documentation was particularly important. IMHO, more than anything
else that is what makes GSL such a superb piece of work. It installs
everywhere, it's easy to understand, and it simply works.

I'm a bit uncomfortable with the phrase "just having everything",
because I wouldn't want to see something like root (root.cern.ch),
which does indeed have everything... and it's always sort of half
complete. That being said, including multidimensional quadrature is a
good idea, and there are definitely other things to be included. And
yes, a good-quality C interface to LAPACK sounds fabulous.

There's no need to even mention NR, as I can't believe anyone really
takes that seriously anymore.

Take care,
Andrew

On Wed, Sep 24, 2008 at 8:07 PM, Gerard Jungman <jungman@lanl.gov> wrote:
> I think we surpassed the "alternative to NR" goal some years ago.
> Now we should try to make the best possible thing. Period. And my
> reasons are very mercenary; if there were such a thing, then I would
> be able to use it.
* Re: simplicity vs. efficiency of algorithms in GSL
From: Z F @ 2008-09-25 3:50 UTC
To: gsl-discuss, Gerard Jungman

Hello Jerry,

> For the same reason, simplicity vs efficiency is not the right
> argument. Experts should produce the most efficient code, in some
> rational and usable form, and we should use it. The only thing that
> prevents us from doing this tomorrow is that, as far as I can tell,
> no expert has managed to produce what we need in a rational and
> usable form. For me, rational and usable means that it solves all
> those tedious problems that plague the fortran-to-anything-else
> interface. At least that would be a start.
>
> Of course, I like things to be neat and tidy; that's just me. Maybe
> other people don't mind having to cobble things together, but I have
> a very low threshold for that. There's no work I do that is so
> compelling that I don't care how painful it is to get the answer.
> And I always want better tools.

BRAVO!!

ZF
* Re: simplicity vs. efficiency of algorithms in GSL
From: Brian Gough @ 2008-09-26 21:48 UTC
To: gsl-discuss

At Wed, 24 Sep 2008 18:07:20 -0600, Gerard Jungman wrote:

> Sounds reasonable. I'm tired of waiting for them to produce
> something attractive. But what do we do?

I think we should continue with simple linear algebra implementations
until there is a major change in the state of the art, like being able
to generate all LAPACK routines automatically with the FLAME tools or
similar.

> Of course, I like things to be neat and tidy; that's just me. Maybe
> other people don't mind having to cobble things together, but I have
> a very low threshold for that. There's no work I do that is so
> compelling that I don't care how painful it is to get the answer.
> And I always want better tools.

The history of the field, and of computing in general, doesn't give
much cause for optimism here, unfortunately.

--
Brian
* Re: simplicity vs. efficiency of algorithms in GSL
From: Jochen Küpper @ 2008-09-27 6:12 UTC
To: GSL Discuss Mailing List

Hi Everybody,

On 26.09.2008, at 23:45, Brian Gough wrote:

> I think we should continue with simple linear algebra
> implementations until there is a major change in the state of the
> art, like being able to generate all LAPACK routines automatically
> with the FLAME tools or similar.
>
> The history of the field, and computing in general, doesn't give
> much cause for optimism here unfortunately.

I really like GSL and use it a lot -- from C++ and from Python. I also
was involved in early efforts to create a Python layer (PyGSL) and
still use that a lot, e.g. for special functions and to get the
constants.

In C++ I am using mostly ODE solvers, linear algebra, and local
optimizers (plus sf and const again, of course). It does bother me
(and colleagues) a lot that neither the ODE solvers nor the LA
routines are really up to speed. My colleagues and I are seriously
considering abandoning GSL because we want better ODE solvers and
better eigenvalue/vector routines! And if the crucial parts are not
from GSL, why use it at all?

The name GNU Scientific Library implies it is meant for scientific
computing. Well, computational science (turned words around on
purpose), to a large extent, is solving large-to-big problems. Really
big problems probably have the resources to put together the best
algorithms for all their needs. For large problems that is often not
the case: we often implement the programs as a single person. There,
we need the best libraries around. We like GSL because it is nicely
documented and delivers routines for many (most) problems. However,
we are very much in need of getting improved algorithms!

For LA I would be very much in favor of having people (re)implement or
wrap fractions of LAPACK with a GSL-style interface. If there were a
single nicely implemented routine, others would be easy to add.

For ODE, for example, I am considering testing, and eventually
switching to, cvode (but I have not used it at all so far). I would be
very happy to not need to do this, but instead get improved ODE
routines in GSL and use my time for doing science!

Please take this as a call that scientists need improved routines in a
GSL-quality and -breadth library. I understand that this is a
difficult problem, but that is what GSL should aim at and what the FSF
should support and push if they want a scientific library!

Regards,
Jochen
--
Fritz-Haber-Institut der MPG
Department of Molecular Physics -- Manipulation of Large Molecules
Faradayweg 4-6 (C1.03), D-14195 Berlin, Germany
phone: +49-30-84135686  fax: +49-30-84135892
http://www.fhi-berlin.mpg.de/mp/jochen
* Re: simplicity vs. efficiency of algorithms in GSL
From: Brian Gough @ 2008-09-29 13:35 UTC
To: GSL Discuss Mailing List

At Sat, 27 Sep 2008 08:10:55 +0200, Jochen Küpper wrote:

> It does bother me (and colleagues) a lot that neither the ODE solvers
> nor the LA routines are really up to speed. My colleagues and myself
> are really considering to abandon the GSL because we want better ODE
> solvers and better eigenvalue/vector routines!

Thanks for your comments. I agree that the ODE solvers need
improvement; I think some of the algorithms used in CVODE are being
implemented by Tuomo Keskitalo.

--
Brian Gough
* Re: simplicity vs. efficiency of algorithms in GSL
From: Gerard Jungman @ 2008-10-07 21:14 UTC
To: GSL Discuss Mailing List

On Sat, 2008-09-27 at 08:10 +0200, Jochen Küpper wrote:

> In C++ I am using mostly ODE solvers, linear algebra, local optimizers
> (plus sf and const again, of course). It does bother me (and
> colleagues) a lot that neither the ODE solvers nor the LA routines are
> really up to speed. My colleagues and myself are really considering to
> abandon the GSL because we want better ODE solvers and better
> eigenvalue/vector routines! And if the crucial parts are not from GSL,
> why use it at all?

I agree. GSL needs to go to the next level, otherwise we're back to
cobbling parts together, and GSL becomes a frozen page in history.

> The name GNU Scientific Library implies it is meant for scientific
> computing. Well, computational science (turned words around on
> purpose), to a large extent, is solving large-to-big problems. Really
> big problems probably have the resources to put together the best
> algorithms for all their needs. For large problems that is often not
> the case: We often implement the programs as a single person.

Exactly. A library should exist in order to make it possible for one
person to do the work needed to solve a large problem. That is the
target audience. Thank you for making this point very clearly.

> There, we need the best libraries around. We like GSL, because it is
> nicely documented and delivers routines for many (most) problems.
> However, we are very much in need of getting improved algorithms!
>
> For LA I would be very much in favor to have people (re)implement or
> wrap fractions of LAPACK with a GSL-style interface. If there would be
> a single nicely implemented routine, others would be easy to add.

We need an interface design. Maybe it's difficult, maybe not. I would
very much prefer that an expert created one for us. Somebody who has
used, or understands, all the parts of lapack.

Also, there is a danger that the thing we call lapack will shift
underneath us. Of course, the glob of code (on netlib for example)
will not change. But the underlying thing that you want to use may
change, as the "lapack" world moves forward. This means that there has
to be some rational approach to extending/updating the interface
design. Who will maintain this?

I was hoping that Dongarra et al. would create and maintain such an
interface. They have been promising this for some time. Does anybody
know what is happening with that?

> For ODE, for example, I am considering to test, and eventually switch
> to, cvode (but I have not used it at all so far). I would be very
> happy to not need to do this, but instead get improved ODE routines in
> GSL and use my time for doing science!

I looked at CVODE many years ago. So I thought it was time to look
again, and to have a quick look at the whole Sundials package, which
is new to me. Years ago, I thought the technical aspects were good,
but the design and packaging (and the code itself) were not
attractive. Also, it seemed to exist in one of these
no-obvious-license limbos. But it looks like they have made a lot of
progress. And it's an explicit BSD license now.

The main objection I had, those years ago, was the way CVODE came with
its own little linear algebra world. That is definitely a design
mistake, though you can see why they did it. That still seems to be
the case. I'm not sure how the end user can use it as a research tool
if the linear algebra implementation is essentially closed. (Of course
you can always modify the code, but that's not the point. The ability
to modify should be designed into the system, so you can
replace/modify parts of the machinery in a rational way, maybe through
some kind of framework design.) As a canned tool, it makes sense. But
it should be more than that.

However, it is possible that I do not understand the CVODE design. For
example, they claim that one of their goals was to allow users to
supply their own data structures. I'm not sure how that can work.

I would enjoy hearing what anybody has to say about CVODE/Sundials.
Tell me why the design makes sense (or not). Also, what is the right
way to handle this dependency issue, which occurs in GSL as well. ODE
methods clearly rely on linear algebra, so logically they should build
on a foundational linear algebra package. Having them carry their own
implementations makes me very queasy. Yet we have no standard linear
algebra interfaces to program to.

> Please take this as a call that scientists need improved routines in a
> GSL-quality and -breadth library.
>
> I understand that this is a difficult problem, but that is what GSL
> should aim at and what the FSF should support and push if they want a
> scientific library!

Again, I agree. Unfortunately, I don't think there is any such thing
as "FSF support" or "push". As always, we're on our own. Just a
handful of people trying to make something better than what we had
before.

GSL should definitely aim higher. But GSL is us... so what are we
going to do? I have no personal love of computational linear algebra.
But organizing some linear algebra interfaces seems to be the number
one problem holding up our progress.

--
G. Jungman
* Re: simplicity vs. efficiency of algorithms in GSL 2008-10-07 21:14 ` Gerard Jungman @ 2008-10-09 18:38 ` Tuomo Keskitalo 2008-10-09 18:57 ` Frank Reininghaus 2008-10-10 0:05 ` Gerard Jungman 0 siblings, 2 replies; 23+ messages in thread From: Tuomo Keskitalo @ 2008-10-09 18:38 UTC (permalink / raw) To: Gerard Jungman; +Cc: GSL Discuss Mailing List Hello, On 10/08/2008 12:13 AM, Gerard Jungman wrote: [about cvode/Sundials] > But it looks like they have made a lot of progress. And it's > an explicit BSD license now. I am no expert on licenses, can somebody please tell briefly what their license allows us to do with the source code? > I would enjoy hearing what anybody has to say about CVODE/Sundials. Well, Alan Hindmarsh is a definitive author on multistep methods and the package looks quite promising. I will definitely look up those numerical methods. > I have no personal love of computational linear algebra. But > organizing some linear algebra interfaces seems to be the number one > problem holding up our progress. I, too, would very much like to see a good interface for linear algebra! I will gladly make the necessary changes to ode-initval if such an interface will become available and if I can spare the time for it. Until then, I think I will continue the ODE-solver work with what is currently available in GSL. Hopefully linear algebra will get the much-needed interfaces! -- Tuomo.Keskitalo@iki.fi http://iki.fi/tuomo.keskitalo ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: simplicity vs. efficiency of algorithms in GSL 2008-10-09 18:38 ` Tuomo Keskitalo @ 2008-10-09 18:57 ` Frank Reininghaus 2008-10-09 21:17 ` Gerard Jungman 2008-10-10 0:05 ` Gerard Jungman 1 sibling, 1 reply; 23+ messages in thread From: Frank Reininghaus @ 2008-10-09 18:57 UTC (permalink / raw) To: gsl-discuss Hi, On Thursday 09 October 2008 20:37:19 Tuomo Keskitalo wrote: > > But it looks like they have made a lot of progress. And it's > > an explicit BSD license now. > > I am no expert on licenses, can somebody please tell briefly what their > license allows us to do with the source code? I think that you can do basically anything you want with BSD-licensed code (at least if it's the license version without advertising clause) as long as you don't remove the original copyright notice, see http://en.wikipedia.org/wiki/BSD_license In particular, you can use it in GPL'ed or even proprietary code. Regards Frank ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: simplicity vs. efficiency of algorithms in GSL 2008-10-09 18:57 ` Frank Reininghaus @ 2008-10-09 21:17 ` Gerard Jungman 0 siblings, 0 replies; 23+ messages in thread From: Gerard Jungman @ 2008-10-09 21:17 UTC (permalink / raw) To: gsl-discuss On Thu, 2008-10-09 at 20:56 +0200, Frank Reininghaus wrote: > > I think that you can do basically anything you want with BSD-licensed code (at > least if it's the license version without advertising clause) as long as you > don't remove the original copyright notice, see > > http://en.wikipedia.org/wiki/BSD_license Yes, it appears to be the usual (modern) BSD license, without the advertising clause. https://computation.llnl.gov/casc/sundials/download/license.html -- G. Jungman ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: simplicity vs. efficiency of algorithms in GSL 2008-10-09 18:38 ` Tuomo Keskitalo 2008-10-09 18:57 ` Frank Reininghaus @ 2008-10-10 0:05 ` Gerard Jungman 2008-10-10 15:50 ` Robert G. Brown 2008-10-11 2:48 ` Brian Gough 1 sibling, 2 replies; 23+ messages in thread From: Gerard Jungman @ 2008-10-10 0:05 UTC (permalink / raw) To: GSL Discuss Mailing List On Thu, 2008-10-09 at 21:37 +0300, Tuomo Keskitalo wrote: > I, too, would very much like to see a good interface for linear algebra! It may be too soon to have a party, but I'm excited about the following, which was just brought to my attention. The ten-year wait may soon be over. http://icl.cs.utk.edu/~delmas/lapwrapc.html It's a draft standard for a C LAPACK interface. Apparently the LAPACK group is working with Intel to create a final version of the standard. It may be done early next year. Perhaps it's time to start thinking seriously about this. -- G. Jungman ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: simplicity vs. efficiency of algorithms in GSL 2008-10-10 0:05 ` Gerard Jungman @ 2008-10-10 15:50 ` Robert G. Brown 2008-10-10 18:47 ` James Amundson 2008-10-11 2:48 ` Brian Gough 1 sibling, 1 reply; 23+ messages in thread From: Robert G. Brown @ 2008-10-10 15:50 UTC (permalink / raw) To: Gerard Jungman; +Cc: GSL Discuss Mailing List On Thu, 9 Oct 2008, Gerard Jungman wrote: > > On Thu, 2008-10-09 at 21:37 +0300, Tuomo Keskitalo wrote: > >> I, too, would very much like to see a good interface for linear algebra! > > It may be too soon to have a party, but I'm excited about the > following, which was just brought to my attention. The ten-year > wait may soon be over. > > http://icl.cs.utk.edu/~delmas/lapwrapc.html > > It's a draft standard for a C LAPACK interface. > > Apparently the LAPACK group is working with Intel to create a final > version of the standard. It may be done early next year. Perhaps it's > time to start thinking seriously about this. If there is a draft version of the standard, it would probably be GOOD to implement it in something like GSL, as IMO "problems" with the proposed standard are likely to surface only in widespread use anyway. If it were actually implemented (even as a sort of "beta" form, clearly announced as such) before they finish the final version, we might be able to comment on or resolve any emergent problems in time to get solutions into the standard. Just a thought. rgb > > -- > G. Jungman > > -- Robert G. Brown Phone(cell): 1-919-280-8443 Duke University Physics Dept, Box 90305 Durham, N.C. 27708-0305 Web: http://www.phy.duke.edu/~rgb Book of Lilith Website: http://www.phy.duke.edu/~rgb/Lilith/Lilith.php Lulu Bookstore: http://stores.lulu.com/store.php?fAcctID=877977 ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: simplicity vs. efficiency of algorithms in GSL 2008-10-10 15:50 ` Robert G. Brown @ 2008-10-10 18:47 ` James Amundson 0 siblings, 0 replies; 23+ messages in thread From: James Amundson @ 2008-10-10 18:47 UTC (permalink / raw) To: Robert G. Brown; +Cc: Gerard Jungman, GSL Discuss Mailing List Robert G. Brown wrote: > > If there is a draft version of the standard, it would probably be GOOD > to implement it in something like GSL, as IMO "problems" with the > proposed standard are likely to surface only in widespread use anyway. > It it actually were implemented (even as a sort of "beta" form, clearly > announced as such) before they do finish the final version, we might be > able to comment on or resolve any emergent problems in time to get > solutions into the standard. > I am in complete agreement. I would not only like to see GSL get it right, I would like to see LAPACK get it right. --Jim Amundson ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: simplicity vs. efficiency of algorithms in GSL 2008-10-10 0:05 ` Gerard Jungman 2008-10-10 15:50 ` Robert G. Brown @ 2008-10-11 2:48 ` Brian Gough 2008-10-14 21:33 ` Gerard Jungman 1 sibling, 1 reply; 23+ messages in thread From: Brian Gough @ 2008-10-11 2:48 UTC (permalink / raw) To: GSL Discuss Mailing List At Thu, 09 Oct 2008 18:04:09 -0600, Gerard Jungman wrote: > It may be too soon to have a party, but I'm excited about the > following, which was just brought to my attention. The ten-year > wait may soon be over. > > http://icl.cs.utk.edu/~delmas/lapwrapc.html > > It's a draft standard for a C LAPACK interface. Interesting, but it's column major only. ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: simplicity vs. efficiency of algorithms in GSL 2008-10-11 2:48 ` Brian Gough @ 2008-10-14 21:33 ` Gerard Jungman 2008-10-14 22:41 ` Daniel Franke 2008-10-17 13:27 ` Brian Gough 0 siblings, 2 replies; 23+ messages in thread From: Gerard Jungman @ 2008-10-14 21:33 UTC (permalink / raw) To: GSL Discuss Mailing List On Fri, 2008-10-10 at 22:13 +0100, Brian Gough wrote: > > > > It's a draft standard for a C LAPACK interface. > > Interesting, but it's column major only. Yup. Here's the way I see it. Anybody who uses lapack already faces this problem; if you want to use lapack, you have to get your ducks lined up in columns at the start. People must already be doing this. So it makes sense to me that a library (GSL) should encapsulate whatever they are doing, so they don't have to do it over and over again. This is the problem that we face. The C/C++ community must decide for themselves what it means to "use lapack". Does it mean that column-major is required? If so, how does one manage both the row-major and column-major data types? A library should presumably provide both. What are the relative costs of runtime conversion? For many uses that could be a viable option, if people are forced into using both types in code. People who use only lapack and blas functions probably don't care and don't need to know how their data is laid out, and could be given the column-major type by default. How should a library support both types? Clearly, the data layer need not change, only the access patterns need to be abstracted. Those patterns can be parametrized in such a way as to make code generation easy. Personally, I would prefer C++ templates for this, but we can use whatever C code-generation kludge people happen to fancy. Are these problems best solved as part of a general "interface to fortran", or are they specific to matrix operations? 
Is any of this addressed in the new fortran standard (fortran-2003 apparently has specific standardization for inter-language support, including stuff like passing structs back and forth, but I don't know the details). Obviously they can't tell you how to lay out your data, but any guidance in this area could be helpful. Finally, we should contact the people working on this interface standard. It may be just that one guy. He might appreciate some discussion. As people are probably aware, the cblas interface is data-order agnostic; you can treat any matrix as row-major or column-major, using a flag. It is unfortunate that the proposed lapack standard does not adopt this idea. But we may have to accept the fact that lapack is essentially legacy code, and that's just the way it is. Let's talk to them and find out. -- G. Jungman ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: simplicity vs. efficiency of algorithms in GSL 2008-10-14 21:33 ` Gerard Jungman @ 2008-10-14 22:41 ` Daniel Franke 2008-10-16 22:20 ` Gerard Jungman 2008-10-17 13:27 ` Brian Gough 1 sibling, 1 reply; 23+ messages in thread From: Daniel Franke @ 2008-10-14 22:41 UTC (permalink / raw) To: gsl-discuss; +Cc: Gerard Jungman On Tuesday 14 October 2008 23:28:58 Gerard Jungman wrote: > Here's the way I see it. Anybody who uses lapack already > faces this problem; if you want to use lapack, you have > to get your ducks lined up in columns at the start. [snip] > Are these problems best solved as part of a general > "interface to fortran", or are they specific to > matrix operations? Is any of this addressed in > the new fortran standard (fortran-2003 apparently > has specific standardization for inter-language > support, including stuff like passing structs > back and forth, but I don't know the details). F2003 Standard (final draft): http://www.j3-fortran.org/doc/year/04/04-007.pdf -- 8< -- 15.2.5 Interoperability of array variables" NOTE 15.18 For example, a Fortran array declared as INTEGER :: A(18, 3:7, *) is interoperable with a C array declared as int b[][5][18] -- 8< -- Does this help? Daniel ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: simplicity vs. efficiency of algorithms in GSL 2008-10-14 22:41 ` Daniel Franke @ 2008-10-16 22:20 ` Gerard Jungman 0 siblings, 0 replies; 23+ messages in thread From: Gerard Jungman @ 2008-10-16 22:20 UTC (permalink / raw) To: gsl-discuss On Wed, 2008-10-15 at 00:45 +0200, Daniel Franke wrote: > F2003 Standard (final draft): > http://www.j3-fortran.org/doc/year/04/04-007.pdf > > Does this help? Yes. Thanks for the link. Note 15.18 makes sense, there's really nothing else that would work. I'm more concerned with Note 15.16, which reads "An element of a multi-dimensional C array is an array type, so a Fortran array of rank one is not interoperable with a multidimensional C array." I'm not sure what this means. My understanding of the C standard for data layout is that a multi-dimensional C array is stored contiguously in the obvious fashion. In other words, a multi-dimensional C array is completely interoperable with a rank-one C array. So why wouldn't it be interoperable with a rank-one fortran array? They're both just the obvious contiguous list of data. Anyway, this is not a central issue; I'm just wondering what I am missing with that statement. -- G. Jungman ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: simplicity vs. efficiency of algorithms in GSL 2008-10-14 21:33 ` Gerard Jungman 2008-10-14 22:41 ` Daniel Franke @ 2008-10-17 13:27 ` Brian Gough 1 sibling, 0 replies; 23+ messages in thread From: Brian Gough @ 2008-10-17 13:27 UTC (permalink / raw) To: GSL Discuss Mailing List At Tue, 14 Oct 2008 15:28:58 -0600, Gerard Jungman wrote: > > Let's talk to them and find out. > Ok, let us know what they say. ^ permalink raw reply [flat|nested] 23+ messages in thread
end of thread, other threads:[~2008-10-17 13:27 UTC | newest] Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- [not found] <200809221621.54890.emanuele.passera@treuropa.com> [not found] ` <20080922162507.GA29877@hippogriff.homeunix.org> 2008-09-22 18:32 ` simplicity vs. efficiency of algorithms in GSL Tuomo Keskitalo 2008-09-22 19:48 ` Patrick Alken 2008-09-22 21:01 ` Robert G. Brown 2008-09-23 21:52 ` Gerard Jungman 2008-09-24 21:19 ` Brian Gough 2008-09-25 0:07 ` Gerard Jungman 2008-09-25 3:07 ` Andrew W. Steiner 2008-09-25 3:50 ` Z F 2008-09-26 21:48 ` Brian Gough 2008-09-27 6:12 ` Jochen Küpper 2008-09-29 13:35 ` Brian Gough 2008-10-07 21:14 ` Gerard Jungman 2008-10-09 18:38 ` Tuomo Keskitalo 2008-10-09 18:57 ` Frank Reininghaus 2008-10-09 21:17 ` Gerard Jungman 2008-10-10 0:05 ` Gerard Jungman 2008-10-10 15:50 ` Robert G. Brown 2008-10-10 18:47 ` James Amundson 2008-10-11 2:48 ` Brian Gough 2008-10-14 21:33 ` Gerard Jungman 2008-10-14 22:41 ` Daniel Franke 2008-10-16 22:20 ` Gerard Jungman 2008-10-17 13:27 ` Brian Gough
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).