From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: looking for algorithm
From: Ed Hill
To: "Charles P. Reese", gsl-discuss@sources.redhat.com
In-Reply-To: <1030400769.4912.3.camel@localhost>
References: <1030400769.4912.3.camel@localhost>
Date: Mon, 26 Aug 2002 16:07:00 -0000
Message-Id: <1030403275.29590.49.camel@eddy>

On Mon, 2002-08-26 at 16:26, Charles P. Reese wrote:
> Hi,
>
> I didn't see this in the GSL manual, but I thought I would ask anyway.
> I'm looking for a multidimensional minimization algorithm that does not
> require evaluation of the gradient.  I would like something compatible
> with the GPL because I want to release code under that license, but at
> this point I will take anything that works.

Hi Charles,

Why no gradients?  Is it because your objective function has some
fundamental "noise" to it (so that local gradients are misleading), or is
it just because you don't have them handy?

In the former case, good approaches include the Nelder-Mead ("amoeba"),
IFFCO, and related algorithms [0].  Nelder-Mead is relatively easy to
code and does remarkably well on a wide variety of "noisy" optimization
problems (it does "implicit filtering" [1]).
I have a version of it coded in C++ that's part of one of my pet projects
[2], and I've been meaning to implement a version of it within the GSL
framework.  Let me know if you'd like to take this approach and I'll try
to get it coded soon...

If the objective function is relatively smooth, then you probably should
compute and use the gradients, since they will tend to speed up
convergence.  You can compute the gradients using finite-difference
methods.  In fact, many commercial optimization programs have an option
that tells them to compute the gradients for you automatically.  I'm not
very familiar with the GSL optimization code, but perhaps automatic
gradient calculations could be (or already have been?) implemented.

hth,
Ed

[0] http://mathworld.wolfram.com/Nelder-MeadMethod.html
[1] http://www4.ncsu.edu/~ctk/iffco.html
[2] http://qaxa.sourceforge.net

--
Edward H. Hill III, PhD
Post-Doctoral Researcher  |  Email: ed@eh3.com, ehill@mines.edu
Division of ESE           |  URLs:  http://www.eh3.com
Colorado School of Mines  |         http://cesep.mines.edu/people/edhill.php
Golden, CO 80401          |  Phone: 303-273-3483   Fax: 303-273-3311