Date: Wed, 15 Dec 2004 12:11:00 -0000
From: "Robert G. Brown"
To: Fabrice Rossi
Cc: gsl-discuss@sources.redhat.com
Subject: Re: Help: working with large gsl_vector

On Wed, 15 Dec 2004, Fabrice Rossi wrote:

> Toan T Nguyen wrote:
> > Hi, thank you for the analysis. Apparently, my analytical calculation
> > is able to guess the minimum reasonably well, so I need only 20000
> > iterations to find the minimum.
>
> Nice!
>
> > However, the strange thing is that if I take the final minimized
> > vector and run it through the conjugate gradient routine again, the
> > energy (my function to minimize) keeps going down for a few thousand
> > more iterations (although it changes only in the 4th significant
> > digit). Do you know what's wrong?
>
> It might be related to the restarting of the algorithm. A conjugate
> gradient method calculates its descent direction from the gradient at
> the current point and the previously used direction. The first descent
> direction is (minus) the gradient alone. When you restart the
> algorithm, it will use the gradient rather than the descent direction
> it should have used.

A second possibility is that if you save the final minimized vector at
some precision and reload it, you are beginning a gradient descent from
a new point in the neighborhood of the old point. This is, in a manner
of speaking, how deterministic chaos was discovered (Lorenz restarted a
weather simulation from rounded printout values and watched the
trajectories diverge). If the problem is one with high complexity, this
may be enough to push it to a different local minimum within the
particular well where the first local minimum was found.

Note that I say local minimum. Unless you have a really sound
theoretical basis for assuming a single, smooth global minimum in any
problem with high dimensionality, you should presume as a default that
the topology of the minimization surface is neither monotonic nor
smooth, and that multiple local minima exist. In fact, local minima may
even be "dense" (or nearly so) on the underlying space, so that nearly
ANY minimum you find by merely going downhill is a local minimum, and
may not even be a "good" minimum.

Think rugged landscape. Gradient descent methods basically go downhill
from wherever they are started. From many/most starting points, this
will take you down to the nearest footprint in the sand, to a nearby
hole in the ground, or to the bottom of a pond. It is relatively
unlikely to take you to the bottom of the deepest ocean trench.

And if you move the starting point over just a little bit, you're as
likely to go down into a DIFFERENT footprint in the sand as back into
the same one -- it's still a local minimum, just a different one.

There are various easy ways to test for this sort of behavior. The
easiest one is to just run the code lots of times with completely
random starting data and see if you go down to the same minimum every
time. Also look at the distribution of minima that you get -- are they
all of about the same quality, or is there one that is occasionally
much better than the others? If the latter is the case, you'll have to
resort to much more powerful and complex methods to get a "good"
minimum on your space: most likely genetic algorithms and/or simulated
annealing.
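Such a multi-start test is cheap to write. Here is a sketch of one (the
tilted double-well "energy" and the tiny N below are made up so that it
runs standalone; you would drop in your own f/df/fdf, use N = 3*nparts,
and pick your own step size and tolerances -- the ones here are only
illustrative):

  #include <stdio.h>
  #include <gsl/gsl_vector.h>
  #include <gsl/gsl_multimin.h>
  #include <gsl/gsl_rng.h>

  #define N      10     /* stand-in dimension; yours would be 3*nparts */
  #define STARTS 20     /* number of random starting points to try */

  /* toy energy: a tilted double well in every coordinate, so it has
     many local minima of different depths */
  static double my_f (const gsl_vector *v, void *p)
  {
    double sum = 0.0; size_t i;
    for (i = 0; i < N; i++) {
      double x = gsl_vector_get (v, i);
      sum += (x*x - 1.0)*(x*x - 1.0) + 0.1*x;
    }
    return sum;
  }

  static void my_df (const gsl_vector *v, void *p, gsl_vector *df)
  {
    size_t i;
    for (i = 0; i < N; i++) {
      double x = gsl_vector_get (v, i);
      gsl_vector_set (df, i, 4.0*x*(x*x - 1.0) + 0.1);
    }
  }

  static void my_fdf (const gsl_vector *v, void *p,
                      double *f, gsl_vector *df)
  { *f = my_f (v, p); my_df (v, p, df); }

  int main (void)
  {
    gsl_multimin_function_fdf energy = { &my_f, &my_df, &my_fdf, N, NULL };
    const gsl_multimin_fdfminimizer_type *T =
        gsl_multimin_fdfminimizer_conjugate_pr;
    gsl_vector *x = gsl_vector_alloc (N);
    gsl_rng *r;
    int run;

    gsl_rng_env_setup ();
    r = gsl_rng_alloc (gsl_rng_default);

    for (run = 0; run < STARTS; run++) {
      gsl_multimin_fdfminimizer *s = gsl_multimin_fdfminimizer_alloc (T, N);
      int status, iter = 0;
      size_t i;

      for (i = 0; i < N; i++)   /* random start somewhere in [-2,2) */
        gsl_vector_set (x, i, 4.0*gsl_rng_uniform (r) - 2.0);
      gsl_multimin_fdfminimizer_set (s, &energy, x, 0.001, 1e-4);

      do {
        status = gsl_multimin_fdfminimizer_iterate (s);
        if (status) break;   /* e.g. GSL_ENOPROG: no progress possible */
        status = gsl_multimin_test_gradient (s->gradient, 1e-6);
      } while (status == GSL_CONTINUE && ++iter < 10000);

      printf ("start %2d: E_min = %.10g after %d iterations\n",
              run, s->f, iter);
      gsl_multimin_fdfminimizer_free (s);
    }
    gsl_vector_free (x);
    gsl_rng_free (r);
    return 0;
  }

If the printed E_min values scatter all over the place, you are looking
at exactly the footprints-in-the-sand behavior described above.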
   rgb

> > I follow the example in the GSL document and set my success criterion
> > as
> >
> >   status = gsl_multimin_test_gradient (s->gradient, 1e-20);
>
> 1e-20 is very, very small, maybe too close to machine precision to give
> reliable results. But I'm far from being an expert in this domain.
> What looks strange is that if the algorithm stops because of this
> criterion, I don't know why it can continue again afterwards. It would
> mean that the norm of the gradient goes up after having reached 1e-20
> once. This might be related to numerical errors.
>
> > The initial setup of the minimizer is
> >
> >   T = gsl_multimin_fdfminimizer_conjugate_pr;
> >   s = gsl_multimin_fdfminimizer_alloc (T, 3*nparts);
> >   gsl_multimin_fdfminimizer_set (s, &energy, x, 0.001, 1e-12);
>
> 1e-12 is again quite small.
>
> > For clarity, I used dimensionless units, so ALL the nearest-neighbor
> > distances and the coupling constants are of the order of unity.
> > nparts is the number of particles, which is about 100000.
> >
> > Could you give some advice? Thanks a lot in advance.
> >
> > Toan
>
> I think that you are basically putting the code under some pressure,
> because of the size of your problem and of the very low limits you are
> using. To my knowledge, it has not been tested under such constraints,
> and it's therefore difficult to give sensible advice.
>
> Fabrice

-- 
Robert G. Brown                        http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525  email: rgb@phy.duke.edu
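P.S. A back-of-the-envelope check on the 1e-20 stopping threshold (the
numbers below are rough assumptions, not a careful error analysis): in
double precision DBL_EPSILON is about 2.2e-16, so if each of the ~3e5
O(1) gradient components carries rounding noise of that order, the norm
of the gradient has a noise floor far above 1e-20.

  #include <stdio.h>
  #include <float.h>
  #include <math.h>

  int main (void)
  {
    double n = 3.0 * 100000.0;   /* 3*nparts, from the setup above */
    /* crude noise-floor estimate for the gradient norm, assuming
       independent rounding errors of order DBL_EPSILON per component */
    printf ("DBL_EPSILON = %g\n", DBL_EPSILON);
    printf ("norm floor  ~ %g\n", sqrt (n) * DBL_EPSILON);
    return 0;
  }

That prints a floor around 1e-13, seven orders of magnitude above the
requested 1e-20, which fits Fabrice's suspicion that the gradient norm
dips below the threshold and climbs back out purely from numerical
noise.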