From: "Anatoliy Belaygorod"
Subject: High-dimensional Minimization without analytical derivatives
Date: Tue, 31 Aug 2004 08:30:00 -0000

Hello,

I need to find a (local) maximum of my likelihood function, with 80 data points, over a 17-dimensional parameter space. I want to use gsl_multimin_fdfminimizer_vector_bfgs (or some other gradient-based algorithm), but I would really hate to specify 17 (or possibly many more, if we change the model) analytic derivatives.

Can you please tell me whether I have better options? Can I use the one-dimensional numerical derivative gsl_diff_central instead of analytic derivatives when I write the "my_df" function for BFGS? How would this approach (if it is feasible at all) compare to the Nelder-Mead simplex algorithm provided in my version of GSL, 1.4? Is there a better option that couples numerical gradient evaluation with the BFGS method?

I really appreciate any help or advice you can give me.

Sincerely,
Anatoliy
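
P.S. To make the question concrete, below is a rough sketch of the kind of numerical-gradient wrapper I have in mind. All the names here (my_f, slice_params, slice_f, my_df, my_fdf) are placeholders of my own, and the quadratic objective is just a stand-in for the negative log-likelihood; the only GSL pieces are gsl_diff_central (which in 1.4 takes no step-size argument; I understand later releases rename it gsl_deriv_central and add one) and the gsl_multimin_function_fdf interface. The idea is to view the objective as a one-dimensional function of each coordinate in turn and let gsl_diff_central estimate each partial derivative, at the price of several extra function evaluations per coordinate for every gradient call.

#include <gsl/gsl_multimin.h>
#include <gsl/gsl_diff.h>

/* Stand-in objective for illustration: a simple quadratic bowl.
   In practice this would return the NEGATIVE log-likelihood,
   since the gsl multimin routines minimize. */
double my_f(const gsl_vector *x, void *params)
{
    size_t i;
    double sum = 0.0;
    (void) params;
    for (i = 0; i < x->size; i++) {
        double xi = gsl_vector_get(x, i);
        sum += xi * xi;
    }
    return sum;
}

/* State for the 1-D slice of the objective along coordinate i. */
typedef struct {
    gsl_vector *xcopy;   /* working copy of the current point */
    void *params;        /* pass-through model data */
    size_t i;            /* coordinate being varied */
} slice_params;

/* The objective viewed as a function of coordinate i alone. */
static double slice_f(double t, void *p)
{
    slice_params *sp = (slice_params *) p;
    gsl_vector_set(sp->xcopy, sp->i, t);
    return my_f(sp->xcopy, sp->params);
}

/* my_df: gradient by central differences, one coordinate at a time. */
void my_df(const gsl_vector *x, void *params, gsl_vector *g)
{
    size_t i, n = x->size;
    gsl_vector *xc = gsl_vector_alloc(n);
    gsl_vector_memcpy(xc, x);

    for (i = 0; i < n; i++) {
        slice_params sp;
        gsl_function F1;
        double deriv, abserr;

        sp.xcopy = xc;
        sp.params = params;
        sp.i = i;
        F1.function = &slice_f;
        F1.params = &sp;

        gsl_diff_central(&F1, gsl_vector_get(x, i), &deriv, &abserr);
        gsl_vector_set(g, i, deriv);
        gsl_vector_set(xc, i, gsl_vector_get(x, i)); /* restore coordinate */
    }
    gsl_vector_free(xc);
}

/* my_fdf: BFGS also wants f and df evaluated together. */
void my_fdf(const gsl_vector *x, void *params, double *f, gsl_vector *g)
{
    *f = my_f(x, params);
    my_df(x, params, g);
}

I would then register these with the minimizer in the usual way, something like:

    gsl_multimin_function_fdf F;
    F.f = &my_f;
    F.df = &my_df;
    F.fdf = &my_fdf;
    F.n = 17;
    F.params = data;   /* whatever the model needs */

    gsl_multimin_fdfminimizer *s =
        gsl_multimin_fdfminimizer_alloc(gsl_multimin_fdfminimizer_vector_bfgs, 17);
    gsl_multimin_fdfminimizer_set(s, &F, x0, 0.01, 1e-4);

Does this look like a sensible way to hook numerical derivatives into BFGS, or is there a standard idiom I am missing?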