From: Patrick Alken
To: gsl-discuss@sourceware.org
Date: Tue, 29 Apr 2014 19:52:00 -0000
Subject: Re: switched sparse format to binary trees
Message-ID: <53600311.7040805@colorado.edu>
References: <535DAFC0.8090402@colorado.edu> <535ED7A5.7020309@lanl.gov> <535EFCA5.8050807@colorado.edu> <535FFA33.90604@lanl.gov>
In-Reply-To: <535FFA33.90604@lanl.gov>

Thanks Gerard and Frank,

I've implemented the block allocation strategy, so there are no more
malloc/free calls during the matrix assembly process. This was made much
easier by not deleting individual nodes. In principle, individual nodes
could be deleted when the user sets a matrix element to 0, since the
point of a sparse matrix data structure is to track only the non-zero
elements.
However, I think this case is rare, and the simplicity of the memory
management scheme outweighs the benefit of handling it. For now, the
spmatrix routines internally handle all of the block allocation, node
array pointers, etc. I don't think it's really necessary to expose this
to the general user, since it's already about as efficient as it is
likely to get, and most users (like myself) simply want a nice, easy,
intuitive interface for building a matrix.

> It's a shame they don't explain why this is happening.
> Those large factors are kind of nuts.

Yes, those times for the hash method do seem unreasonably large. I
haven't been able to find many other papers on this subject for
comparison.

On 04/29/2014 01:14 PM, Gerard Jungman wrote:
> On 04/29/2014 02:39 AM, Frank Reininghaus wrote:
>
> > you can create an array of many tree nodes in a single allocation
> > though. The allocator struct could then store a pointer to this
> > array, the total size of the array, and the index of the next node
> > that is not in use yet.
>
> Right. This is what I meant by a pool allocator. Just to clarify, if
> GSL provides an allocator mechanism, it should be more general than
> this, but it is a good idea to provide an implementation of something
> like this that a client can just invoke if they want. It also gives
> clients an example implementation that they can use to create their
> own variations when necessary.
>
> This is why the avl library provides such a mechanism themselves.
> They know that some clients really need it.
>
> > And saving memory for each individual node also means
> > that the entire tree will become more cache-friendly
> > and thus possibly more performant.
>
> Even more important than the memory savings is the control that the
> client gains over cache coherency and other memory layout issues.
> The main lossage with malloc is that there is only one.
> So allocations of different kinds of memory can get interleaved, and
> it may become impossible to control your layout and caching. If the
> allocations are not exposed, then the client is wedged.
>
> > If individual nodes could be deleted at some point, then the
> > allocator would need some bookkeeping of reusable nodes of its own,
> > but if the only node deletion that can take place is that all nodes
> > are deleted together, then this would probably be quite
> > straightforward to implement.
>
> In practice, you are probably right: the nodes would all be deleted
> together, so we do not have to worry about it too much.
>
> For the sake of completeness, though, it is worth pointing out at
> least one simple way to handle the general case. An allocator can
> implement a lazy-deletion strategy, where the pool is allowed to
> fragment until the client requests a repacking. You can also trigger
> a repacking when a certain fragmentation level is reached, although
> this kind of approach (garbage collection) is somewhat dangerous if
> the client loses control of when it happens. In any case, it is not
> hard to implement this sort of thing.
>
> Anyway, the main point is that once a general mechanism exists, the
> client can create any arbitrarily complicated scheme for themselves.
> They get to take control of allocations, if they need to.
>
> On 04/28/2014 07:13 PM, Patrick Alken wrote:
>
> > Anyway the study found that AVL trees were much superior to hash
> > tables in terms of speed. The study is here:
> >
> > http://link.springer.com/chapter/10.1007%2F978-3-540-25944-2_135
>
> It's a shame they don't explain why this is happening.
> Those large factors are kind of nuts.
>
> --
> G. Jungman