From: Gerard Jungman
Date: Thu, 01 May 2014 23:21:00 -0000
To: gsl-discuss@sourceware.org
Subject: Re: switched sparse format to binary trees
In-Reply-To: <53629696.7070907@colorado.edu>

On 05/01/2014 12:46 PM, Patrick Alken wrote:
> I was thinking a little more about your suggestion and just wanted to
> clarify something. As I understand it, what you are proposing is to
> define a new "allocator" structure such as:
>
>   typedef struct
>   {
>     void * (*malloc) (size_t size, void *param);
>     void   (*free)   (void *block, void *param);
>   } gsl_allocator;

That's the idea. Just like the avl allocator mechanism.

> And then for every gsl_xxx_alloc function, we should make a new
> function gsl_xxx_alloc_m() which would be:
>
>   gsl_xxx_alloc_m(normal_args, gsl_allocator *allocator, void *allocator_param)

We need to think about the allocator parameter. There is probably a
design issue here. It is hard to ensure that the client uses the
correct parameter if there are several "pools" or "heaps" in existence,
so the state data represented by the parameter is somewhat dangerous.

There may even be deeper semantic issues in keeping track of state data
like that, even though it seems attractive to encapsulate the state in
some explicit way. It has to do with the fact that an allocator
resource is sort of a singleton. Even if you have different ones
floating around, each one is probably doing a unique job, just like the
unique malloc/free heap.

Notice that the avl library allocator abstraction avoids any notion of
allocator state.
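A minimal sketch of that stateless form, purely for illustration; the
type name and the exact signatures below are assumptions, not anything
that exists in GSL today:

  #include <stddef.h>

  /* Hypothetical: two function pointers and no explicit state
     parameter.  Any pool or arena the client uses lives behind
     these two functions, not in the struct. */
  typedef struct
  {
    void * (*malloc) (size_t size);
    void   (*free)   (void *block);
  } gsl_allocator;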
A client can always hide their state data behind a functional
interface. So they could easily create a "heap" for vectors and a
"heap" for matrices, or whatever, each with their own malloc. So it may
be best to avoid any explicit notion of state and let the client do
what they want behind the functions.

> and all malloc/free calls would then be replaced by calls to
> allocator->malloc and allocator->free

Yup.

> In this scenario, all gsl data structures would need an additional
> pointer to store the gsl_allocator desired by the user.

This seems correct, in order to match the allocation call with the
correct free call (a rough sketch of this arrangement is appended at
the end of this message). But there may be more to it as well.
Basically, any function which currently does a malloc or free needs to
be thought about, and probably requires an additional interface that
also takes an allocator parameter. It is clearly possible, and probably
not too difficult, to do the operations in the right order, swap the
allocator pointers, and move on, so that the container is always in a
consistent state with regard to its allocator.

> The default gsl_xxx_alloc() functions would use a stub allocator which
> are just malloc()/free() wrappers.

Yup.

> For this implementation, every gsl module would use the same
> gsl_allocator framework. I wanted to confirm with you that this is
> what you intended? Or are you thinking of module-specific allocators
> since you were focusing a lot on the spmatrix design?

Most of the containers have a homogeneous structure, basically using
one layer of heap allocation for their data blocks. When we get to
something like spmatrix, there are really at least two distinct
allocation needs, one for the data and one for the tree nodes. So it
may well be correct to design a more semantically rich allocation
framework for those containers. It may be as simple as providing two
allocators, one for the nodes and one for the data. But maybe more is
needed, to allow control of the relative locality of the nodes and the
data, as I wondered earlier.

Other places in GSL may need some custom tailoring as well. Basically,
I will just grep for all the current malloc and free calls and estimate
what needs to be done. Anybody interested in this should do the same.
Then we can discuss.

> It seems like a reasonable idea to me, and if we are going to do it,
> it should be done before the 2.0 release, since every gsl_workspace
> will need to be modified.

Exactly.

> I'm not certain how much demand there will be for something like this,
> since I suspect most users won't want to write their own malloc/free
> implementations, but who knows :)

It is hard to say. But let's think about what it might entail and weigh
the costs and benefits when we know more.

--
G. Jungman
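A hedged sketch of the arrangement mentioned above, in which each
container carries the allocator that created it and the ordinary
constructor forwards to a stub that just wraps malloc()/free(). All of
the names here (gsl_allocator, example_block, the "_m" constructor) are
assumptions for illustration, not existing GSL interfaces:

  #include <stdlib.h>
  #include <stddef.h>

  /* Stateless allocator table, as sketched earlier (hypothetical). */
  typedef struct
  {
    void * (*malloc) (size_t size);
    void   (*free)   (void *block);
  } gsl_allocator;

  /* Default stub: plain wrappers around the C library heap. */
  static void * stub_malloc (size_t size)  { return malloc (size); }
  static void   stub_free   (void *block)  { free (block); }

  static const gsl_allocator gsl_allocator_default =
    { stub_malloc, stub_free };

  /* A container remembers which allocator created it, so that the
     free path always matches the allocation path. */
  typedef struct
  {
    size_t size;
    double *data;
    const gsl_allocator *allocator;
  } example_block;

  /* The "_m" constructor takes an explicit allocator. */
  static example_block *
  example_block_alloc_m (size_t n, const gsl_allocator *a)
  {
    example_block *b = a->malloc (sizeof (example_block));
    if (b == NULL)
      return NULL;
    b->data = a->malloc (n * sizeof (double));
    if (b->data == NULL)
      {
        a->free (b);
        return NULL;
      }
    b->size = n;
    b->allocator = a;
    return b;
  }

  /* The ordinary constructor forwards to the default stub. */
  static example_block *
  example_block_alloc (size_t n)
  {
    return example_block_alloc_m (n, &gsl_allocator_default);
  }

  /* Destruction goes through the allocator recorded at creation. */
  static void
  example_block_free (example_block *b)
  {
    const gsl_allocator *a;
    if (b == NULL)
      return;
    a = b->allocator;
    a->free (b->data);
    a->free (b);
  }

For something like spmatrix, the same container might carry two such
tables, one for the tree nodes and one for the data block; whether that
is enough, or whether locality control needs more, is exactly the kind
of question the grep-and-survey pass would have to settle.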