public inbox for java-patches@gcc.gnu.org
* Get libffi closures to cope with SELinux execmem/execmod
@ 2007-01-18  7:14 Alexandre Oliva
  2007-01-18  7:28 ` Alexandre Oliva
                   ` (2 more replies)
  0 siblings, 3 replies; 35+ messages in thread
From: Alexandre Oliva @ 2007-01-18  7:14 UTC (permalink / raw)
  To: gcc-patches, java-patches

[-- Attachment #1: Type: text/plain, Size: 1412 bytes --]

This patch fixes the bug described at
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=202209

The problem is that SELinux security policies are often configured to
prevent pages from being mapped both writable and executable, and
even to prevent pages that were ever modified from later being made
executable.

The recommended way to implement dynamically-generated code is to map
pages from a file on an executable filesystem into two separate
locations in virtual memory, one writable, one executable:
http://people.redhat.com/drepper/selinux-mem.html
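
For reference, the technique boils down to something like this (a
minimal sketch with a hypothetical temp-file path; error handling
omitted):

  char path[] = "/tmp/closure-XXXXXX";  /* hypothetical location */
  int fd = mkstemp (path);
  unlink (path);               /* keep the descriptor, drop the name */
  ftruncate (fd, size);

  /* Two views of the same pages: write code through one, run it
     through the other; no mapping is ever writable and executable at
     the same time.  */
  void *w = mmap (NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  void *x = mmap (NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);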

This is what I've done in libffi, as a fallback for when mmap fails
to allocate anonymous pages that are both writable and executable on
GNU/Linux systems.  Various changes were needed to enable closures to
be created at one address and later executed at another.
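
For illustration, creating and invoking a closure under the new
interface looks roughly like this (a sketch assuming a prepared
ffi_cif cif, a handler function and a user_data pointer; error
handling omitted):

  void *code;  /* executable address of the closure */
  ffi_closure *closure = ffi_closure_alloc (sizeof (ffi_closure), &code);
  int result = 0;
  if (ffi_prep_closure_loc (closure, &cif, handler, user_data, code)
      == FFI_OK)
    result = ((int (*) (int)) code) (42);  /* call the executable view */
  ffi_closure_free (closure);  /* pass back the writable address */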

libjava needed changes to cope with these interface changes in libffi,
and to manage closure memory manually.  Since closures were not
scanned by the garbage collector, it turned out to be relatively
simple to make the change and arrange for class finalizers to
deallocate them.

The first patch below adds dlmalloc.c to libffi as downloaded from
Doug Lea's web page (except for CRLF->LF conversion).  The second
patch does the rest of the work, including the minimal changes needed
for dlmalloc.c to support this custom separate-mapping behavior.

Tested on x86_64-linux-gnu.  Ok to install?


[-- Attachment #2: libffi-closure-prot-exec-dlmalloc.patch --]
[-- Type: text/x-patch, Size: 185459 bytes --]

for  libffi/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* src/dlmalloc.c: New file, imported version 2.8.3 of Doug
	Lea's malloc.

Index: libffi/src/dlmalloc.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ libffi/src/dlmalloc.c	2007-01-17 06:31:01.000000000 -0200
@@ -0,0 +1,5061 @@
+/*
+  This is a version (aka dlmalloc) of malloc/free/realloc written by
+  Doug Lea and released to the public domain, as explained at
+  http://creativecommons.org/licenses/publicdomain.  Send questions,
+  comments, complaints, performance data, etc to dl@cs.oswego.edu
+
+* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
+
+   Note: There may be an updated version of this malloc obtainable at
+           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
+         Check before installing!
+
+* Quickstart
+
+  This library is all in one file to simplify the most common usage:
+  ftp it, compile it (-O3), and link it into another program. All of
+  the compile-time options default to reasonable values for use on
+  most platforms.  You might later want to step through various
+  compile-time and dynamic tuning options.
+
+  For convenience, an include file for code using this malloc is at:
+     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
+  You don't really need this .h file unless you call functions not
+  defined in your system include files.  The .h file contains only the
+  excerpts from this file needed for using this malloc on ANSI C/C++
+  systems, so long as you haven't changed compile-time options about
+  naming and tuning parameters.  If you do, then you can create your
+  own malloc.h that does include all settings by cutting at the point
+  indicated below. Note that you may already by default be using a C
+  library containing a malloc that is based on some version of this
+  malloc (for example in linux). You might still want to use the one
+  in this file to customize settings or to avoid overheads associated
+  with library versions.
+
+* Vital statistics:
+
+  Supported pointer/size_t representation:       4 or 8 bytes
+       size_t MUST be an unsigned type of the same width as
+       pointers. (If you are using an ancient system that declares
+       size_t as a signed type, or need it to be a different width
+       than pointers, you can use a previous release of this malloc
+       (e.g. 2.7.2) supporting these.)
+
+  Alignment:                                     8 bytes (default)
+       This suffices for nearly all current machines and C compilers.
+       However, you can define MALLOC_ALIGNMENT to be wider than this
+       if necessary (up to 128bytes), at the expense of using more space.
+
+  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
+                                          8 or 16 bytes (if 8byte sizes)
+       Each malloced chunk has a hidden word of overhead holding size
+       and status information, and additional cross-check word
+       if FOOTERS is defined.
+
+  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
+                          8-byte ptrs:  32 bytes    (including overhead)
+
+       Even a request for zero bytes (i.e., malloc(0)) returns a
+       pointer to something of the minimum allocatable size.
+       The maximum overhead wastage (i.e., the number of extra bytes
+       allocated beyond what was requested of malloc) is less than or equal
+       to the minimum size, except for requests >= mmap_threshold that
+       are serviced via mmap(), where the worst case wastage is about
+       32 bytes plus the remainder from a system page (the minimal
+       mmap unit); typically 4096 or 8192 bytes.
+
+  Security: static-safe; optionally more or less
+       The "security" of malloc refers to the ability of malicious
+       code to accentuate the effects of errors (for example, freeing
+       space that is not currently malloc'ed or overwriting past the
+       ends of chunks) in code that calls malloc.  This malloc
+       guarantees not to modify any memory locations below the base of
+       heap, i.e., static variables, even in the presence of usage
+       errors.  The routines additionally detect most improper frees
+       and reallocs.  All this holds as long as the static bookkeeping
+       for malloc itself is not corrupted by some other means.  This
+       is only one aspect of security -- these checks do not, and
+       cannot, detect all possible programming errors.
+
+       If FOOTERS is defined nonzero, then each allocated chunk
+       carries an additional check word to verify that it was malloced
+       from its space.  These check words are the same within each
+       execution of a program using malloc, but differ across
+       executions, so externally crafted fake chunks cannot be
+       freed. This improves security by rejecting frees/reallocs that
+       could corrupt heap memory, in addition to the checks preventing
+       writes to statics that are always on.  This may further improve
+       security at the expense of time and space overhead.  (Note that
+       FOOTERS may also be worth using with MSPACES.)
+
+       By default detected errors cause the program to abort (calling
+       "abort()"). You can override this to instead proceed past
+       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
+       has no effect, and a malloc that encounters a bad address
+       caused by user overwrites will ignore the bad address by
+       dropping pointers and indices to all known memory. This may
+       be appropriate for programs that should continue if at all
+       possible in the face of programming errors, although they may
+       run out of memory because dropped memory is never reclaimed.
+
+       If you don't like either of these options, you can define
+       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
+  else. And if you are sure that your program using malloc has
+       no errors or vulnerabilities, you can define INSECURE to 1,
+       which might (or might not) provide a small performance improvement.
+
+  Thread-safety: NOT thread-safe unless USE_LOCKS defined
+       When USE_LOCKS is defined, each public call to malloc, free,
+       etc is surrounded with either a pthread mutex or a win32
+       spinlock (depending on WIN32). This is not especially fast, and
+       can be a major bottleneck.  It is designed only to provide
+       minimal protection in concurrent environments, and to provide a
+       basis for extensions.  If you are using malloc in a concurrent
+       program, consider instead using ptmalloc, which is derived from
+       a version of this malloc. (See http://www.malloc.de).
+
+  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
+       This malloc can use unix sbrk or any emulation (invoked using
+       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
+       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
+       memory.  On most unix systems, it tends to work best if both
+       MORECORE and MMAP are enabled.  On Win32, it uses emulations
+       based on VirtualAlloc. It also uses common C library functions
+       like memset.
+
+  Compliance: I believe it is compliant with the Single Unix Specification
+       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
+       others as well.
+
+* Overview of algorithms
+
+  This is not the fastest, most space-conserving, most portable, or
+  most tunable malloc ever written. However it is among the fastest
+  while also being among the most space-conserving, portable and
+  tunable.  Consistent balance across these factors results in a good
+  general-purpose allocator for malloc-intensive programs.
+
+  In most ways, this malloc is a best-fit allocator. Generally, it
+  chooses the best-fitting existing chunk for a request, with ties
+  broken in approximately least-recently-used order. (This strategy
+  normally maintains low fragmentation.) However, for requests less
+  than 256bytes, it deviates from best-fit when there is not an
+  exactly fitting available chunk by preferring to use space adjacent
+  to that used for the previous small request, as well as by breaking
+  ties in approximately most-recently-used order. (These enhance
+  locality of series of small allocations.)  And for very large requests
+  (>= 256Kb by default), it relies on system memory mapping
+  facilities, if supported.  (This helps avoid carrying around and
+  possibly fragmenting memory used only for large chunks.)
+
+  All operations (except malloc_stats and mallinfo) have execution
+  times that are bounded by a constant factor of the number of bits in
+  a size_t, not counting any clearing in calloc or copying in realloc,
+  or actions surrounding MORECORE and MMAP that have times
+  proportional to the number of non-contiguous regions returned by
+  system allocation routines, which is often just 1.
+
+  The implementation is not very modular and seriously overuses
+  macros. Perhaps someday all C compilers will do as good a job
+  inlining modular code as can now be done by brute-force expansion,
+  but now, enough of them seem not to.
+
+  Some compilers issue a lot of warnings about code that is
+  dead/unreachable only on some platforms, and also about intentional
+  uses of negation on unsigned types. All known cases of each can be
+  ignored.
+
+  For a longer but out of date high-level description, see
+     http://gee.cs.oswego.edu/dl/html/malloc.html
+
+* MSPACES
+  If MSPACES is defined, then in addition to malloc, free, etc.,
+  this file also defines mspace_malloc, mspace_free, etc. These
+  are versions of malloc routines that take an "mspace" argument
+  obtained using create_mspace, to control all internal bookkeeping.
+  If ONLY_MSPACES is defined, only these versions are compiled.
+  So if you would like to use this allocator for only some allocations,
+  and your system malloc for others, you can compile with
+  ONLY_MSPACES and then do something like...
+    static mspace mymspace = create_mspace(0,0); // for example
+    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
+
+  (Note: If you only need one instance of an mspace, you can instead
+  use "USE_DL_PREFIX" to relabel the global malloc.)
+
+  You can similarly create thread-local allocators by storing
+  mspaces as thread-locals. For example:
+    static __thread mspace tlms = 0;
+    void*  tlmalloc(size_t bytes) {
+      if (tlms == 0) tlms = create_mspace(0, 0);
+      return mspace_malloc(tlms, bytes);
+    }
+    void  tlfree(void* mem) { mspace_free(tlms, mem); }
+
+  Unless FOOTERS is defined, each mspace is completely independent.
+  You cannot allocate from one and free to another (although
+  conformance is only weakly checked, so usage errors are not always
+  caught). If FOOTERS is defined, then each chunk carries around a tag
+  indicating its originating mspace, and frees are directed to their
+  originating spaces.
+
+ -------------------------  Compile-time options ---------------------------
+
+Be careful in setting #define values for numerical constants of type
+size_t. On some systems, literal values are not automatically extended
+to size_t precision unless they are explicitly cast.
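+
+For example (an illustrative constant, not one of this file's
+tunables), on an LP64 system the literal (1 << 40) is undefined
+because the shift happens at int width; a size_t-valued constant
+needs the cast spelled out:
+
+  #define MY_BIG_THRESHOLD ((size_t)1U << 40)  // hypothetical name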
+
+WIN32                    default: defined if _WIN32 defined
+  Defining WIN32 sets up defaults for MS environment and compilers.
+  Otherwise defaults are for unix.
+
+MALLOC_ALIGNMENT         default: (size_t)8
+  Controls the minimum alignment for malloc'ed chunks.  It must be a
+  power of two and at least 8, even on machines for which smaller
+  alignments would suffice. It may be defined as larger than this
+  though. Note however that code and data structures are optimized for
+  the case of 8-byte alignment.
+
+MSPACES                  default: 0 (false)
+  If true, compile in support for independent allocation spaces.
+  This is only supported if HAVE_MMAP is true.
+
+ONLY_MSPACES             default: 0 (false)
+  If true, only compile in mspace versions, not regular versions.
+
+USE_LOCKS                default: 0 (false)
+  Causes each call to each public routine to be surrounded with
+  pthread or WIN32 mutex lock/unlock. (If set true, this can be
+  overridden on a per-mspace basis for mspace versions.)
+
+FOOTERS                  default: 0
+  If true, provide extra checking and dispatching by placing
+  information in the footers of allocated chunks. This adds
+  space and time overhead.
+
+INSECURE                 default: 0
+  If true, omit checks for usage errors and heap space overwrites.
+
+USE_DL_PREFIX            default: NOT defined
+  Causes compiler to prefix all public routines with the string 'dl'.
+  This can be useful when you only want to use this malloc in one part
+  of a program, using your regular system malloc elsewhere.
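+  For instance, code built this way calls the prefixed names directly
+  (a minimal sketch):
+
+    void* p = dlmalloc(64);
+    dlfree(p);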
+
+ABORT                    default: defined as abort()
+  Defines how to abort on failed checks.  On most systems, a failed
+  check cannot die with an "assert" or even print an informative
+  message, because the underlying print routines in turn call malloc,
+  which will fail again.  Generally, the best policy is to simply call
+  abort(). It's not very useful to do more than this because many
+  errors due to overwriting will show up as address faults (null, odd
+  addresses etc) rather than malloc-triggered checks, so will also
+  abort.  Also, most compilers know that abort() does not return, so
+  can better optimize code conditionally calling it.
+
+PROCEED_ON_ERROR           default: defined as 0 (false)
+  Controls whether detected bad addresses are bypassed rather than
+  causing an abort. If set, detected bad arguments to free and
+  realloc are ignored. And all bookkeeping information is zeroed out
+  upon a detected overwrite of freed heap space, thus losing the
+  ability to ever return it from malloc again, but enabling the
+  application to proceed. If PROCEED_ON_ERROR is defined, the
+  static variable malloc_corruption_error_count is compiled in
+  and can be examined to see if errors have occurred. This option
+  generates slower code than the default abort policy.
+
+DEBUG                    default: NOT defined
+  The DEBUG setting is mainly intended for people trying to modify
+  this code or diagnose problems when porting to new platforms.
+  However, it may also be able to better isolate user errors than just
+  using runtime checks.  The assertions in the check routines spell
+  out in more detail the assumptions and invariants underlying the
+  algorithms.  The checking is fairly extensive, and will slow down
+  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
+  set will attempt to check every non-mmapped allocated and free chunk
+  in the course of computing the summaries.
+
+ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
+  Debugging assertion failures can be nearly impossible if your
+  version of the assert macro causes malloc to be called, which will
+  lead to a cascade of further failures, blowing the runtime stack.
+  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
+  which will usually make debugging easier.
+
+MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
+  The action to take before "return 0" when malloc fails because no
+  memory is available.
+
+HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
+  True if this system supports sbrk or an emulation of it.
+
+MORECORE                  default: sbrk
+  The name of the sbrk-style system routine to call to obtain more
+  memory.  See below for guidance on writing custom MORECORE
+  functions. The type of the argument to sbrk/MORECORE varies across
+  systems.  It cannot be size_t, because it supports negative
+  arguments, so it is normally the signed type of the same width as
+  size_t (sometimes declared as "intptr_t").  It doesn't much matter
+  though. Internally, we only call it with arguments less than half
+  the max value of a size_t, which should work across all reasonable
+  possibilities, although sometimes generating compiler warnings.  See
+  near the end of this file for guidelines for creating a custom
+  version of MORECORE.
+
+MORECORE_CONTIGUOUS       default: 1 (true)
+  If true, take advantage of fact that consecutive calls to MORECORE
+  with positive arguments always return contiguous increasing
+  addresses.  This is true of unix sbrk. It does not hurt too much to
+  set it true anyway, since malloc copes with non-contiguities.
+  Setting it false when definitely non-contiguous saves time
+  and possibly wasted space it would take to discover this though.
+
+MORECORE_CANNOT_TRIM      default: NOT defined
+  True if MORECORE cannot release space back to the system when given
+  negative arguments. This is generally necessary only if you are
+  using a hand-crafted MORECORE function that cannot handle negative
+  arguments.
+
+HAVE_MMAP                 default: 1 (true)
+  True if this system supports mmap or an emulation of it.  If so, and
+  HAVE_MORECORE is not true, MMAP is used for all system
+  allocation. If set and HAVE_MORECORE is true as well, MMAP is
+  primarily used to directly allocate very large blocks. It is also
+  used as a backup strategy in cases where MORECORE fails to provide
+  space from system. Note: A single call to MUNMAP is assumed to be
+  able to unmap memory that may have been allocated using multiple calls
+  to MMAP, so long as they are adjacent.
+
+HAVE_MREMAP               default: 1 on linux, else 0
+  If true realloc() uses mremap() to re-allocate large blocks and
+  extend or shrink allocation spaces.
+
+MMAP_CLEARS               default: 1 on unix
+  True if mmap clears memory so calloc doesn't need to. This is true
+  for standard unix mmap using /dev/zero.
+
+USE_BUILTIN_FFS            default: 0 (i.e., not used)
+  Causes malloc to use the builtin ffs() function to compute indices.
+  Some compilers may recognize and intrinsify ffs to be faster than the
+  supplied C version. Also, the case of x86 using gcc is special-cased
+  to an asm instruction, so is already as fast as it can be, and so
+  this setting has no effect. (On most x86s, the asm version is only
+  slightly faster than the C version.)
+
+malloc_getpagesize         default: derive from system includes, or 4096.
+  The system page size. To the extent possible, this malloc manages
+  memory from the system in page-size units.  This may be (and
+  usually is) a function rather than a constant. This is ignored
+  if WIN32, where page size is determined using getSystemInfo during
+  initialization.
+
+USE_DEV_RANDOM             default: 0 (i.e., not used)
+  Causes malloc to use /dev/random to initialize secure magic seed for
+  stamping footers. Otherwise, the current time is used.
+
+NO_MALLINFO                default: 0
+  If defined, don't compile "mallinfo". This can be a simple way
+  of dealing with mismatches between system declarations and
+  those in this file.
+
+MALLINFO_FIELD_TYPE        default: size_t
+  The type of the fields in the mallinfo struct. This was originally
+  defined as "int" in SVID etc, but is more usefully defined as
+  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
+
+REALLOC_ZERO_BYTES_FREES    default: not defined
+  This should be set if a call to realloc with zero bytes should 
+  be the same as a call to free. Some people think it should. Otherwise, 
+  since this malloc returns a unique pointer for malloc(0), so does 
+  realloc(p, 0).
+
+LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
+LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
+LACKS_STDLIB_H                default: NOT defined unless on WIN32
+  Define these if your system does not have these header files.
+  You might need to manually insert some of the declarations they provide.
+
+DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
+                                system_info.dwAllocationGranularity in WIN32,
+                                otherwise 64K.
+      Also settable using mallopt(M_GRANULARITY, x)
+  The unit for allocating and deallocating memory from the system.  On
+  most systems with contiguous MORECORE, there is no reason to
+  make this more than a page. However, systems with MMAP tend to
+  either require or encourage larger granularities.  You can increase
+  this value to prevent system allocation functions from being called so
+  often, especially if they are slow.  The value must be at least one
+  page and must be a power of two.  Setting to 0 causes initialization
+  to either page size or win32 region size.  (Note: In previous
+  versions of malloc, the equivalent of this option was called
+  "TOP_PAD")
+
+DEFAULT_TRIM_THRESHOLD    default: 2MB
+      Also settable using mallopt(M_TRIM_THRESHOLD, x)
+  The maximum amount of unused top-most memory to keep before
+  releasing via malloc_trim in free().  Automatic trimming is mainly
+  useful in long-lived programs using contiguous MORECORE.  Because
+  trimming via sbrk can be slow on some systems, and can sometimes be
+  wasteful (in cases where programs immediately afterward allocate
+  more large chunks) the value should be high enough so that your
+  overall system performance would improve by releasing this much
+  memory.  As a rough guide, you might set to a value close to the
+  average size of a process (program) running on your system.
+  Releasing this much memory would allow such a process to run in
+  memory.  Generally, it is worth tuning trim thresholds when a
+  program undergoes phases where several large chunks are allocated
+  and released in ways that can reuse each other's storage, perhaps
+  mixed with phases where there are no such chunks at all. The trim
+  value must be greater than page size to have any useful effect.  To
+  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
+  some people use of mallocing a huge space and then freeing it at
+  program startup, in an attempt to reserve system memory, doesn't
+  have the intended effect under automatic trimming, since that memory
+  will immediately be returned to the system.
+
+DEFAULT_MMAP_THRESHOLD       default: 256K
+      Also settable using mallopt(M_MMAP_THRESHOLD, x)
+  The request size threshold for using MMAP to directly service a
+  request. Requests of at least this size that cannot be allocated
+  using already-existing space will be serviced via mmap.  (If enough
+  normal freed space already exists it is used instead.)  Using mmap
+  segregates relatively large chunks of memory so that they can be
+  individually obtained and released from the host system. A request
+  serviced through mmap is never reused by any other request (at least
+  not directly; the system may just so happen to remap successive
+  requests to the same locations).  Segregating space in this way has
+  the benefits that: Mmapped space can always be individually released
+  back to the system, which helps keep the system level memory demands
+  of a long-lived program low.  Also, mapped memory doesn't become
+  `locked' between other chunks, as can happen with normally allocated
+  chunks, which means that even trimming via malloc_trim would not
+  release them.  However, it has the disadvantage that the space
+  cannot be reclaimed, consolidated, and then used to service later
+  requests, as happens with normal chunks.  The advantages of mmap
+  nearly always outweigh disadvantages for "large" chunks, but the
+  value of "large" may vary across systems.  The default is an
+  empirically derived value that works well in most systems. You can
+  disable mmap by setting to MAX_SIZE_T.
+
+*/
+
+#ifndef WIN32
+#ifdef _WIN32
+#define WIN32 1
+#endif  /* _WIN32 */
+#endif  /* WIN32 */
+#ifdef WIN32
+#define WIN32_LEAN_AND_MEAN
+#include <windows.h>
+#define HAVE_MMAP 1
+#define HAVE_MORECORE 0
+#define LACKS_UNISTD_H
+#define LACKS_SYS_PARAM_H
+#define LACKS_SYS_MMAN_H
+#define LACKS_STRING_H
+#define LACKS_STRINGS_H
+#define LACKS_SYS_TYPES_H
+#define LACKS_ERRNO_H
+#define MALLOC_FAILURE_ACTION
+#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
+#endif  /* WIN32 */
+
+#if defined(DARWIN) || defined(_DARWIN)
+/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
+#ifndef HAVE_MORECORE
+#define HAVE_MORECORE 0
+#define HAVE_MMAP 1
+#endif  /* HAVE_MORECORE */
+#endif  /* DARWIN */
+
+#ifndef LACKS_SYS_TYPES_H
+#include <sys/types.h>  /* For size_t */
+#endif  /* LACKS_SYS_TYPES_H */
+
+/* The maximum possible size_t value has all bits set */
+#define MAX_SIZE_T           (~(size_t)0)
+
+#ifndef ONLY_MSPACES
+#define ONLY_MSPACES 0
+#endif  /* ONLY_MSPACES */
+#ifndef MSPACES
+#if ONLY_MSPACES
+#define MSPACES 1
+#else   /* ONLY_MSPACES */
+#define MSPACES 0
+#endif  /* ONLY_MSPACES */
+#endif  /* MSPACES */
+#ifndef MALLOC_ALIGNMENT
+#define MALLOC_ALIGNMENT ((size_t)8U)
+#endif  /* MALLOC_ALIGNMENT */
+#ifndef FOOTERS
+#define FOOTERS 0
+#endif  /* FOOTERS */
+#ifndef ABORT
+#define ABORT  abort()
+#endif  /* ABORT */
+#ifndef ABORT_ON_ASSERT_FAILURE
+#define ABORT_ON_ASSERT_FAILURE 1
+#endif  /* ABORT_ON_ASSERT_FAILURE */
+#ifndef PROCEED_ON_ERROR
+#define PROCEED_ON_ERROR 0
+#endif  /* PROCEED_ON_ERROR */
+#ifndef USE_LOCKS
+#define USE_LOCKS 0
+#endif  /* USE_LOCKS */
+#ifndef INSECURE
+#define INSECURE 0
+#endif  /* INSECURE */
+#ifndef HAVE_MMAP
+#define HAVE_MMAP 1
+#endif  /* HAVE_MMAP */
+#ifndef MMAP_CLEARS
+#define MMAP_CLEARS 1
+#endif  /* MMAP_CLEARS */
+#ifndef HAVE_MREMAP
+#ifdef linux
+#define HAVE_MREMAP 1
+#else   /* linux */
+#define HAVE_MREMAP 0
+#endif  /* linux */
+#endif  /* HAVE_MREMAP */
+#ifndef MALLOC_FAILURE_ACTION
+#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
+#endif  /* MALLOC_FAILURE_ACTION */
+#ifndef HAVE_MORECORE
+#if ONLY_MSPACES
+#define HAVE_MORECORE 0
+#else   /* ONLY_MSPACES */
+#define HAVE_MORECORE 1
+#endif  /* ONLY_MSPACES */
+#endif  /* HAVE_MORECORE */
+#if !HAVE_MORECORE
+#define MORECORE_CONTIGUOUS 0
+#else   /* !HAVE_MORECORE */
+#ifndef MORECORE
+#define MORECORE sbrk
+#endif  /* MORECORE */
+#ifndef MORECORE_CONTIGUOUS
+#define MORECORE_CONTIGUOUS 1
+#endif  /* MORECORE_CONTIGUOUS */
+#endif  /* HAVE_MORECORE */
+#ifndef DEFAULT_GRANULARITY
+#if MORECORE_CONTIGUOUS
+#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
+#else   /* MORECORE_CONTIGUOUS */
+#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
+#endif  /* MORECORE_CONTIGUOUS */
+#endif  /* DEFAULT_GRANULARITY */
+#ifndef DEFAULT_TRIM_THRESHOLD
+#ifndef MORECORE_CANNOT_TRIM
+#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
+#else   /* MORECORE_CANNOT_TRIM */
+#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
+#endif  /* MORECORE_CANNOT_TRIM */
+#endif  /* DEFAULT_TRIM_THRESHOLD */
+#ifndef DEFAULT_MMAP_THRESHOLD
+#if HAVE_MMAP
+#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
+#else   /* HAVE_MMAP */
+#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
+#endif  /* HAVE_MMAP */
+#endif  /* DEFAULT_MMAP_THRESHOLD */
+#ifndef USE_BUILTIN_FFS
+#define USE_BUILTIN_FFS 0
+#endif  /* USE_BUILTIN_FFS */
+#ifndef USE_DEV_RANDOM
+#define USE_DEV_RANDOM 0
+#endif  /* USE_DEV_RANDOM */
+#ifndef NO_MALLINFO
+#define NO_MALLINFO 0
+#endif  /* NO_MALLINFO */
+#ifndef MALLINFO_FIELD_TYPE
+#define MALLINFO_FIELD_TYPE size_t
+#endif  /* MALLINFO_FIELD_TYPE */
+
+/*
+  mallopt tuning options.  SVID/XPG defines four standard parameter
+  numbers for mallopt, normally defined in malloc.h.  None of these
+  are used in this malloc, so setting them has no effect. But this
+  malloc does support the following options.
+*/
+
+#define M_TRIM_THRESHOLD     (-1)
+#define M_GRANULARITY        (-2)
+#define M_MMAP_THRESHOLD     (-3)
+
+/* ------------------------ Mallinfo declarations ------------------------ */
+
+#if !NO_MALLINFO
+/*
+  This version of malloc supports the standard SVID/XPG mallinfo
+  routine that returns a struct containing usage properties and
+  statistics. It should work on any system that has a
+  /usr/include/malloc.h defining struct mallinfo.  The main
+  declaration needed is the mallinfo struct that is returned (by-copy)
+  by mallinfo().  The mallinfo struct contains a bunch of fields that
+  are not even meaningful in this version of malloc.  These fields
+  are instead filled by mallinfo() with other numbers that might be of
+  interest.
+
+  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
+  /usr/include/malloc.h file that includes a declaration of struct
+  mallinfo.  If so, it is included; else a compliant version is
+  declared below.  These must be precisely the same for mallinfo() to
+  work.  The original SVID version of this struct, defined on most
+  systems with mallinfo, declares all fields as ints. But some others
+  define as unsigned long. If your system defines the fields using a
+  type of different width than listed here, you MUST #include your
+  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
+*/
+
+/* #define HAVE_USR_INCLUDE_MALLOC_H */
+
+#ifdef HAVE_USR_INCLUDE_MALLOC_H
+#include "/usr/include/malloc.h"
+#else /* HAVE_USR_INCLUDE_MALLOC_H */
+
+struct mallinfo {
+  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
+  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
+  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
+  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
+  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
+  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
+  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
+  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
+  MALLINFO_FIELD_TYPE fordblks; /* total free space */
+  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
+};
+
+#endif /* HAVE_USR_INCLUDE_MALLOC_H */
+#endif /* NO_MALLINFO */
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+#if !ONLY_MSPACES
+
+/* ------------------- Declarations of public routines ------------------- */
+
+#ifndef USE_DL_PREFIX
+#define dlcalloc               calloc
+#define dlfree                 free
+#define dlmalloc               malloc
+#define dlmemalign             memalign
+#define dlrealloc              realloc
+#define dlvalloc               valloc
+#define dlpvalloc              pvalloc
+#define dlmallinfo             mallinfo
+#define dlmallopt              mallopt
+#define dlmalloc_trim          malloc_trim
+#define dlmalloc_stats         malloc_stats
+#define dlmalloc_usable_size   malloc_usable_size
+#define dlmalloc_footprint     malloc_footprint
+#define dlmalloc_max_footprint malloc_max_footprint
+#define dlindependent_calloc   independent_calloc
+#define dlindependent_comalloc independent_comalloc
+#endif /* USE_DL_PREFIX */
+
+
+/*
+  malloc(size_t n)
+  Returns a pointer to a newly allocated chunk of at least n bytes, or
+  null if no space is available, in which case errno is set to ENOMEM
+  on ANSI C systems.
+
+  If n is zero, malloc returns a minimum-sized chunk. (The minimum
+  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
+  systems.)  Note that size_t is an unsigned type, so calls with
+  arguments that would be negative if signed are interpreted as
+  requests for huge amounts of space, which will often fail. The
+  maximum supported value of n differs across systems, but is in all
+  cases less than the maximum representable value of a size_t.
+*/
+void* dlmalloc(size_t);
+
+/*
+  free(void* p)
+  Releases the chunk of memory pointed to by p, that had been previously
+  allocated using malloc or a related routine such as realloc.
+  It has no effect if p is null. If p was not malloced or already
+  freed, free(p) will by default cause the current program to abort.
+*/
+void  dlfree(void*);
+
+/*
+  calloc(size_t n_elements, size_t element_size);
+  Returns a pointer to n_elements * element_size bytes, with all locations
+  set to zero.
+*/
+void* dlcalloc(size_t, size_t);
+
+/*
+  realloc(void* p, size_t n)
+  Returns a pointer to a chunk of size n that contains the same data
+  as does chunk p up to the minimum of (n, p's size) bytes, or null
+  if no space is available.
+
+  The returned pointer may or may not be the same as p. The algorithm
+  prefers extending p in most cases when possible, otherwise it
+  employs the equivalent of a malloc-copy-free sequence.
+
+  If p is null, realloc is equivalent to malloc.
+
+  If space is not available, realloc returns null, errno is set (if on
+  ANSI) and p is NOT freed.
+
+  if n is for fewer bytes than already held by p, the newly unused
+  space is lopped off and freed if possible.  realloc with a size
+  argument of zero (re)allocates a minimum-sized chunk.
+
+  The old unix realloc convention of allowing the last-free'd chunk
+  to be used as an argument to realloc is not supported.
+*/
+
+void* dlrealloc(void*, size_t);
+
+/*
+  memalign(size_t alignment, size_t n);
+  Returns a pointer to a newly allocated chunk of n bytes, aligned
+  in accord with the alignment argument.
+
+  The alignment argument should be a power of two. If the argument is
+  not a power of two, the nearest greater power is used.
+  8-byte alignment is guaranteed by normal malloc calls, so don't
+  bother calling memalign with an argument of 8 or less.
+
+  Overreliance on memalign is a sure way to fragment space.
+*/
+void* dlmemalign(size_t, size_t);
+
+/*
+  valloc(size_t n);
+  Equivalent to memalign(pagesize, n), where pagesize is the page
+  size of the system. If the pagesize is unknown, 4096 is used.
+*/
+void* dlvalloc(size_t);
+
+/*
+  mallopt(int parameter_number, int parameter_value)
+  Sets tunable parameters. The format is to provide a
+  (parameter-number, parameter-value) pair.  mallopt then sets the
+  corresponding parameter to the argument value if it can (i.e., so
+  long as the value is meaningful), and returns 1 if successful else
+  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
+  normally defined in malloc.h.  None of these are used in this malloc,
+  so setting them has no effect. But this malloc also supports other
+  options in mallopt. See below for details.  Briefly, supported
+  parameters are as follows (listed defaults are for "typical"
+  configurations).
+
+  Symbol            param #  default    allowed param values
+  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
+  M_GRANULARITY        -2     page size   any power of 2 >= page size
+  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
+*/
+int dlmallopt(int, int);
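+
+/*
+  For instance (a sketch; the value is arbitrary), to have requests of
+  1MB or more serviced directly by mmap:
+
+    dlmallopt(M_MMAP_THRESHOLD, 1024*1024);
+*/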
+
+/*
+  malloc_footprint();
+  Returns the number of bytes obtained from the system.  The total
+  number of bytes allocated by malloc, realloc etc., is less than this
+  value. Unlike mallinfo, this function returns only a precomputed
+  result, so can be called frequently to monitor memory consumption.
+  Even if locks are otherwise defined, this function does not use them,
+  so results might not be up to date.
+*/
+size_t dlmalloc_footprint(void);
+
+/*
+  malloc_max_footprint();
+  Returns the maximum number of bytes obtained from the system. This
+  value will be greater than current footprint if deallocated space
+  has been reclaimed by the system. The peak number of bytes allocated
+  by malloc, realloc etc., is less than this value. Unlike mallinfo,
+  this function returns only a precomputed result, so can be called
+  frequently to monitor memory consumption.  Even if locks are
+  otherwise defined, this function does not use them, so results might
+  not be up to date.
+*/
+size_t dlmalloc_max_footprint(void);
+
+#if !NO_MALLINFO
+/*
+  mallinfo()
+  Returns (by copy) a struct containing various summary statistics:
+
+  arena:     current total non-mmapped bytes allocated from system
+  ordblks:   the number of free chunks
+  smblks:    always zero.
+  hblks:     current number of mmapped regions
+  hblkhd:    total bytes held in mmapped regions
+  usmblks:   the maximum total allocated space. This will be greater
+                than current total if trimming has occurred.
+  fsmblks:   always zero
+  uordblks:  current total allocated space (normal or mmapped)
+  fordblks:  total free space
+  keepcost:  the maximum number of bytes that could ideally be released
+               back to system via malloc_trim. ("ideally" means that
+               it ignores page restrictions etc.)
+
+  Because these fields are ints, but internal bookkeeping may
+  be kept as longs, the reported values may wrap around zero and
+  thus be inaccurate.
+*/
+struct mallinfo dlmallinfo(void);
+#endif /* NO_MALLINFO */
+
+/*
+  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
+
+  independent_calloc is similar to calloc, but instead of returning a
+  single cleared space, it returns an array of pointers to n_elements
+  independent elements that can hold contents of size elem_size, each
+  of which starts out cleared, and can be independently freed,
+  realloc'ed etc. The elements are guaranteed to be adjacently
+  allocated (this is not guaranteed to occur with multiple callocs or
+  mallocs), which may also improve cache locality in some
+  applications.
+
+  The "chunks" argument is optional (i.e., may be null, which is
+  probably the most typical usage). If it is null, the returned array
+  is itself dynamically allocated and should also be freed when it is
+  no longer needed. Otherwise, the chunks array must be of at least
+  n_elements in length. It is filled in with the pointers to the
+  chunks.
+
+  In either case, independent_calloc returns this pointer array, or
+  null if the allocation failed.  If n_elements is zero and "chunks"
+  is null, it returns a chunk representing an array with zero elements
+  (which should be freed if not wanted).
+
+  Each element must be individually freed when it is no longer
+  needed. If you'd like to instead be able to free all at once, you
+  should instead use regular calloc and assign pointers into this
+  space to represent elements.  (In this case though, you cannot
+  independently free elements.)
+
+  independent_calloc simplifies and speeds up implementations of many
+  kinds of pools.  It may also be useful when constructing large data
+  structures that initially have a fixed number of fixed-sized nodes,
+  but the number is not known at compile time, and some of the nodes
+  may later need to be freed. For example:
+
+  struct Node { int item; struct Node* next; };
+
+  struct Node* build_list() {
+    struct Node** pool;
+    int i;
+    int n = read_number_of_nodes_needed();
+    if (n <= 0) return 0;
+    pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
+    if (pool == 0) die();
+    // organize into a linked list...
+    struct Node* first = pool[0];
+    for (i = 0; i < n-1; ++i)
+      pool[i]->next = pool[i+1];
+    free(pool);     // Can now free the array (or not, if it is needed later)
+    return first;
+  }
+*/
+void** dlindependent_calloc(size_t, size_t, void**);
+
+/*
+  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
+
+  independent_comalloc allocates, all at once, a set of n_elements
+  chunks with sizes indicated in the "sizes" array.    It returns
+  an array of pointers to these elements, each of which can be
+  independently freed, realloc'ed etc. The elements are guaranteed to
+  be adjacently allocated (this is not guaranteed to occur with
+  multiple callocs or mallocs), which may also improve cache locality
+  in some applications.
+
+  The "chunks" argument is optional (i.e., may be null). If it is null
+  the returned array is itself dynamically allocated and should also
+  be freed when it is no longer needed. Otherwise, the chunks array
+  must be of at least n_elements in length. It is filled in with the
+  pointers to the chunks.
+
+  In either case, independent_comalloc returns this pointer array, or
+  null if the allocation failed.  If n_elements is zero and chunks is
+  null, it returns a chunk representing an array with zero elements
+  (which should be freed if not wanted).
+
+  Each element must be individually freed when it is no longer
+  needed. If you'd like to instead be able to free all at once, you
+  should instead use a single regular malloc, and assign pointers at
+  particular offsets in the aggregate space. (In this case though, you
+  cannot independently free elements.)
+
+  independent_comalloc differs from independent_calloc in that each
+  element may have a different size, and also that it does not
+  automatically clear elements.
+
+  independent_comalloc can be used to speed up allocation in cases
+  where several structs or objects must always be allocated at the
+  same time.  For example:
+
+  struct Head { ... };
+  struct Foot { ... };
+
+  void send_message(char* msg) {
+    int msglen = strlen(msg);
+    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
+    void* chunks[3];
+    if (independent_comalloc(3, sizes, chunks) == 0)
+      die();
+    struct Head* head = (struct Head*)(chunks[0]);
+    char*        body = (char*)(chunks[1]);
+    struct Foot* foot = (struct Foot*)(chunks[2]);
+    // ...
+  }
+
+  In general though, independent_comalloc is worth using only for
+  larger values of n_elements. For small values, you probably won't
+  detect enough difference from series of malloc calls to bother.
+
+  Overuse of independent_comalloc can increase overall memory usage,
+  since it cannot reuse existing noncontiguous small chunks that
+  might be available for some of the elements.
+*/
+void** dlindependent_comalloc(size_t, size_t*, void**);
+
+
+/*
+  pvalloc(size_t n);
+  Equivalent to valloc(minimum-page-that-holds(n)), that is,
+  round up n to nearest pagesize.
+ */
+void*  dlpvalloc(size_t);
+
+/*
+  malloc_trim(size_t pad);
+
+  If possible, gives memory back to the system (via negative arguments
+  to sbrk) if there is unused memory at the `high' end of the malloc
+  pool or in unused MMAP segments. You can call this after freeing
+  large blocks of memory to potentially reduce the system-level memory
+  requirements of a program. However, it cannot guarantee to reduce
+  memory. Under some allocation patterns, some large free blocks of
+  memory will be locked between two used chunks, so they cannot be
+  given back to the system.
+
+  The `pad' argument to malloc_trim represents the amount of free
+  trailing space to leave untrimmed. If this argument is zero, only
+  the minimum amount of memory to maintain internal data structures
+  will be left. Non-zero arguments can be supplied to maintain enough
+  trailing space to service future expected allocations without having
+  to re-obtain memory from the system.
+
+  Malloc_trim returns 1 if it actually released any memory, else 0.
+*/
+int  dlmalloc_trim(size_t);
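+
+/*
+  For instance (a sketch), after freeing a large block you might hand
+  the unused tail of the heap back to the system:
+
+    dlfree(big_buffer);
+    dlmalloc_trim(0);
+*/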
+
+/*
+  malloc_usable_size(void* p);
+
+  Returns the number of bytes you can actually use in
+  an allocated chunk, which may be more than you requested (although
+  often not) due to alignment and minimum size constraints.
+  You can use this many bytes without worrying about
+  overwriting other allocated objects. This is not a particularly great
+  programming practice. malloc_usable_size can be more useful in
+  debugging and assertions, for example:
+
+  p = malloc(n);
+  assert(malloc_usable_size(p) >= 256);
+*/
+size_t dlmalloc_usable_size(void*);
+
+/*
+  malloc_stats();
+  Prints on stderr the amount of space obtained from the system (both
+  via sbrk and mmap), the maximum amount (which may be more than
+  current if malloc_trim and/or munmap got called), and the current
+  number of bytes allocated via malloc (or realloc, etc) but not yet
+  freed. Note that this is the number of bytes allocated, not the
+  number requested. It will be larger than the number requested
+  because of alignment and bookkeeping overhead. Because it includes
+  alignment wastage as being in use, this figure may be greater than
+  zero even when no user-level chunks are allocated.
+
+  The reported current and maximum system memory can be inaccurate if
+  a program makes other calls to system memory allocation functions
+  (normally sbrk) outside of malloc.
+
+  malloc_stats prints only the most commonly interesting statistics.
+  More information can be obtained by calling mallinfo.
+*/
+void  dlmalloc_stats(void);
+
+#endif /* ONLY_MSPACES */
+
+#if MSPACES
+
+/*
+  mspace is an opaque type representing an independent
+  region of space that supports mspace_malloc, etc.
+*/
+typedef void* mspace;
+
+/*
+  create_mspace creates and returns a new independent space with the
+  given initial capacity, or, if 0, the default granularity size.  It
+  returns null if there is no system memory available to create the
+  space.  If argument locked is non-zero, the space uses a separate
+  lock to control access. The capacity of the space will grow
+  dynamically as needed to service mspace_malloc requests.  You can
+  control the sizes of incremental increases of this space by
+  compiling with a different DEFAULT_GRANULARITY or dynamically
+  setting with mallopt(M_GRANULARITY, value).
+*/
+mspace create_mspace(size_t capacity, int locked);
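+
+/*
+  A minimal usage sketch (default capacity, no locking):
+
+    mspace ms = create_mspace(0, 0);
+    void* p = mspace_malloc(ms, 128);
+    mspace_free(ms, p);
+    destroy_mspace(ms);
+*/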
+
+/*
+  destroy_mspace destroys the given space, and attempts to return all
+  of its memory back to the system, returning the total number of
+  bytes freed. After destruction, the results of access to all memory
+  used by the space become undefined.
+*/
+size_t destroy_mspace(mspace msp);
+
+/*
+  create_mspace_with_base uses the memory supplied as the initial base
+  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
+  space is used for bookkeeping, so the capacity must be at least this
+  large. (Otherwise 0 is returned.) When this initial space is
+  exhausted, additional memory will be obtained from the system.
+  Destroying this space will deallocate all additionally allocated
+  space (if possible) but not the initial base.
+*/
+mspace create_mspace_with_base(void* base, size_t capacity, int locked);
+
+/*
+  mspace_malloc behaves as malloc, but operates within
+  the given space.
+*/
+void* mspace_malloc(mspace msp, size_t bytes);
+
+/*
+  mspace_free behaves as free, but operates within
+  the given space.
+
+  If compiled with FOOTERS==1, mspace_free is not actually needed.
+  free may be called instead of mspace_free because freed chunks from
+  any space are handled by their originating spaces.
+*/
+void mspace_free(mspace msp, void* mem);
+
+/*
+  mspace_realloc behaves as realloc, but operates within
+  the given space.
+
+  If compiled with FOOTERS==1, mspace_realloc is not actually
+  needed.  realloc may be called instead of mspace_realloc because
+  realloced chunks from any space are handled by their originating
+  spaces.
+*/
+void* mspace_realloc(mspace msp, void* mem, size_t newsize);
+
+/*
+  mspace_calloc behaves as calloc, but operates within
+  the given space.
+*/
+void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
+
+/*
+  mspace_memalign behaves as memalign, but operates within
+  the given space.
+*/
+void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
+
+/*
+  mspace_independent_calloc behaves as independent_calloc, but
+  operates within the given space.
+*/
+void** mspace_independent_calloc(mspace msp, size_t n_elements,
+                                 size_t elem_size, void* chunks[]);
+
+/*
+  mspace_independent_comalloc behaves as independent_comalloc, but
+  operates within the given space.
+*/
+void** mspace_independent_comalloc(mspace msp, size_t n_elements,
+                                   size_t sizes[], void* chunks[]);
+
+/*
+  mspace_footprint() returns the number of bytes obtained from the
+  system for this space.
+*/
+size_t mspace_footprint(mspace msp);
+
+/*
+  mspace_max_footprint() returns the peak number of bytes obtained from the
+  system for this space.
+*/
+size_t mspace_max_footprint(mspace msp);
+
+
+#if !NO_MALLINFO
+/*
+  mspace_mallinfo behaves as mallinfo, but reports properties of
+  the given space.
+*/
+struct mallinfo mspace_mallinfo(mspace msp);
+#endif /* NO_MALLINFO */
+
+/*
+  mspace_malloc_stats behaves as malloc_stats, but reports
+  properties of the given space.
+*/
+void mspace_malloc_stats(mspace msp);
+
+/*
+  mspace_trim behaves as malloc_trim, but
+  operates within the given space.
+*/
+int mspace_trim(mspace msp, size_t pad);
+
+/*
+  An alias for mallopt.
+*/
+int mspace_mallopt(int, int);
+
+#endif /* MSPACES */
+
+#ifdef __cplusplus
+};  /* end of extern "C" */
+#endif /* __cplusplus */
+
+/*
+  ========================================================================
+  To make a fully customizable malloc.h header file, cut everything
+  above this line, put into file malloc.h, edit to suit, and #include it
+  on the next line, as well as in programs that use this malloc.
+  ========================================================================
+*/
+
+/* #include "malloc.h" */
+
+/*------------------------------ internal #includes ---------------------- */
+
+#ifdef WIN32
+#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
+#endif /* WIN32 */
+
+#include <stdio.h>       /* for printing in malloc_stats */
+
+#ifndef LACKS_ERRNO_H
+#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
+#endif /* LACKS_ERRNO_H */
+#if FOOTERS
+#include <time.h>        /* for magic initialization */
+#endif /* FOOTERS */
+#ifndef LACKS_STDLIB_H
+#include <stdlib.h>      /* for abort() */
+#endif /* LACKS_STDLIB_H */
+#ifdef DEBUG
+#if ABORT_ON_ASSERT_FAILURE
+#define assert(x) if(!(x)) ABORT
+#else /* ABORT_ON_ASSERT_FAILURE */
+#include <assert.h>
+#endif /* ABORT_ON_ASSERT_FAILURE */
+#else  /* DEBUG */
+#define assert(x)
+#endif /* DEBUG */
+#ifndef LACKS_STRING_H
+#include <string.h>      /* for memset etc */
+#endif  /* LACKS_STRING_H */
+#if USE_BUILTIN_FFS
+#ifndef LACKS_STRINGS_H
+#include <strings.h>     /* for ffs */
+#endif /* LACKS_STRINGS_H */
+#endif /* USE_BUILTIN_FFS */
+#if HAVE_MMAP
+#ifndef LACKS_SYS_MMAN_H
+#include <sys/mman.h>    /* for mmap */
+#endif /* LACKS_SYS_MMAN_H */
+#ifndef LACKS_FCNTL_H
+#include <fcntl.h>
+#endif /* LACKS_FCNTL_H */
+#endif /* HAVE_MMAP */
+#if HAVE_MORECORE
+#ifndef LACKS_UNISTD_H
+#include <unistd.h>     /* for sbrk */
+#else /* LACKS_UNISTD_H */
+#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
+extern void*     sbrk(ptrdiff_t);
+#endif /* FreeBSD etc */
+#endif /* LACKS_UNISTD_H */
+#endif /* HAVE_MORECORE */
+
+#ifndef WIN32
+#ifndef malloc_getpagesize
+#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
+#    ifndef _SC_PAGE_SIZE
+#      define _SC_PAGE_SIZE _SC_PAGESIZE
+#    endif
+#  endif
+#  ifdef _SC_PAGE_SIZE
+#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
+#  else
+#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
+       extern size_t getpagesize();
+#      define malloc_getpagesize getpagesize()
+#    else
+#      ifdef WIN32 /* use supplied emulation of getpagesize */
+#        define malloc_getpagesize getpagesize()
+#      else
+#        ifndef LACKS_SYS_PARAM_H
+#          include <sys/param.h>
+#        endif
+#        ifdef EXEC_PAGESIZE
+#          define malloc_getpagesize EXEC_PAGESIZE
+#        else
+#          ifdef NBPG
+#            ifndef CLSIZE
+#              define malloc_getpagesize NBPG
+#            else
+#              define malloc_getpagesize (NBPG * CLSIZE)
+#            endif
+#          else
+#            ifdef NBPC
+#              define malloc_getpagesize NBPC
+#            else
+#              ifdef PAGESIZE
+#                define malloc_getpagesize PAGESIZE
+#              else /* just guess */
+#                define malloc_getpagesize ((size_t)4096U)
+#              endif
+#            endif
+#          endif
+#        endif
+#      endif
+#    endif
+#  endif
+#endif
+#endif
+
+/* ------------------- size_t and alignment properties -------------------- */
+
+/* The byte and bit size of a size_t */
+#define SIZE_T_SIZE         (sizeof(size_t))
+#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
+
+/* Some constants coerced to size_t */
+/* Annoying but necessary to avoid errors on some platforms */
+#define SIZE_T_ZERO         ((size_t)0)
+#define SIZE_T_ONE          ((size_t)1)
+#define SIZE_T_TWO          ((size_t)2)
+#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
+#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
+#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
+#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
+
+/* The bit mask value corresponding to MALLOC_ALIGNMENT */
+#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
+
+/* True if address a has acceptable alignment */
+#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
+
+/* the number of bytes to offset an address to align it */
+#define align_offset(A)\
+ ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
+  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
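+
+/*
+  For example, with the default 8-byte alignment, align_offset(0x100d)
+  is 3: (0x100d & 7) == 5 and (8 - 5) & 7 == 3, so 0x100d + 3 == 0x1010
+  is aligned.
+*/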
+
+/* -------------------------- MMAP preliminaries ------------------------- */
+
+/*
+   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
+   checks to fail so compiler optimizer can delete code rather than
+   using so many "#if"s.
+*/
+
+
+/* MORECORE and MMAP must return MFAIL on failure */
+#define MFAIL                ((void*)(MAX_SIZE_T))
+#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
+
+#if !HAVE_MMAP
+#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
+#define USE_MMAP_BIT         (SIZE_T_ZERO)
+#define CALL_MMAP(s)         MFAIL
+#define CALL_MUNMAP(a, s)    (-1)
+#define DIRECT_MMAP(s)       MFAIL
+
+#else /* HAVE_MMAP */
+#define IS_MMAPPED_BIT       (SIZE_T_ONE)
+#define USE_MMAP_BIT         (SIZE_T_ONE)
+
+#ifndef WIN32
+#define CALL_MUNMAP(a, s)    munmap((a), (s))
+#define MMAP_PROT            (PROT_READ|PROT_WRITE)
+#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
+#define MAP_ANONYMOUS        MAP_ANON
+#endif /* MAP_ANON */
+#ifdef MAP_ANONYMOUS
+#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
+#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
+#else /* MAP_ANONYMOUS */
+/*
+   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
+   is unlikely to be needed, but is supplied just in case.
+*/
+#define MMAP_FLAGS           (MAP_PRIVATE)
+static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
+#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
+           (dev_zero_fd = open("/dev/zero", O_RDWR), \
+            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
+            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
+#endif /* MAP_ANONYMOUS */
+
+#define DIRECT_MMAP(s)       CALL_MMAP(s)
+#else /* WIN32 */
+
+/* Win32 MMAP via VirtualAlloc */
+static void* win32mmap(size_t size) {
+  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
+  return (ptr != 0)? ptr: MFAIL;
+}
+
+/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
+static void* win32direct_mmap(size_t size) {
+  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
+                           PAGE_READWRITE);
+  return (ptr != 0)? ptr: MFAIL;
+}
+
+/* This function supports releasing coalesced segments */
+static int win32munmap(void* ptr, size_t size) {
+  MEMORY_BASIC_INFORMATION minfo;
+  char* cptr = ptr;
+  while (size) {
+    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
+      return -1;
+    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
+        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
+      return -1;
+    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
+      return -1;
+    cptr += minfo.RegionSize;
+    size -= minfo.RegionSize;
+  }
+  return 0;
+}
+
+#define CALL_MMAP(s)         win32mmap(s)
+#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
+#define DIRECT_MMAP(s)       win32direct_mmap(s)
+#endif /* WIN32 */
+#endif /* HAVE_MMAP */
+
+#if HAVE_MMAP && HAVE_MREMAP
+#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
+#else  /* HAVE_MMAP && HAVE_MREMAP */
+#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
+#endif /* HAVE_MMAP && HAVE_MREMAP */
+
+#if HAVE_MORECORE
+#define CALL_MORECORE(S)     MORECORE(S)
+#else  /* HAVE_MORECORE */
+#define CALL_MORECORE(S)     MFAIL
+#endif /* HAVE_MORECORE */
+
+/* mstate bit set if contiguous morecore disabled or failed */
+#define USE_NONCONTIGUOUS_BIT (4U)
+
+/* segment bit set in create_mspace_with_base */
+#define EXTERN_BIT            (8U)
+
+
+/* --------------------------- Lock preliminaries ------------------------ */
+
+#if USE_LOCKS
+
+/*
+  When locks are defined, there are up to two global locks:
+
+  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
+    MORECORE.  In many cases sys_alloc requires two calls that should
+    not be interleaved with calls by other threads.  This does not
+    protect against direct calls to MORECORE by other threads not
+    using this lock, so there is still code to cope as best we can
+    with interference.
+
+  * magic_init_mutex ensures that mparams.magic and other
+    unique mparams values are initialized only once.
+*/
+
+#ifndef WIN32
+/* By default use posix locks */
+#include <pthread.h>
+#define MLOCK_T pthread_mutex_t
+#define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
+#define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
+#define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
+
+#if HAVE_MORECORE
+static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
+#endif /* HAVE_MORECORE */
+
+static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+#else /* WIN32 */
+/*
+   Because lock-protected regions run for bounded times, and there
+   are no recursive lock calls, we can use simple spinlocks.
+*/
+
+#define MLOCK_T long
+static int win32_acquire_lock (MLOCK_T *sl) {
+  for (;;) {
+#ifdef InterlockedCompareExchangePointer
+    if (!InterlockedCompareExchange(sl, 1, 0))
+      return 0;
+#else  /* Use older void* version */
+    if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
+      return 0;
+#endif /* InterlockedCompareExchangePointer */
+    Sleep (0);
+  }
+}
+
+static void win32_release_lock (MLOCK_T *sl) {
+  InterlockedExchange (sl, 0);
+}
+
+#define INITIAL_LOCK(l)      *(l)=0
+#define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
+#define RELEASE_LOCK(l)      win32_release_lock(l)
+#if HAVE_MORECORE
+static MLOCK_T morecore_mutex;
+#endif /* HAVE_MORECORE */
+static MLOCK_T magic_init_mutex;
+#endif /* WIN32 */
+
+#define USE_LOCK_BIT               (2U)
+#else  /* USE_LOCKS */
+#define USE_LOCK_BIT               (0U)
+#define INITIAL_LOCK(l)
+#endif /* USE_LOCKS */
+
+#if USE_LOCKS && HAVE_MORECORE
+#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
+#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
+#else /* USE_LOCKS && HAVE_MORECORE */
+#define ACQUIRE_MORECORE_LOCK()
+#define RELEASE_MORECORE_LOCK()
+#endif /* USE_LOCKS && HAVE_MORECORE */
+
+#if USE_LOCKS
+#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
+#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
+#else  /* USE_LOCKS */
+#define ACQUIRE_MAGIC_INIT_LOCK()
+#define RELEASE_MAGIC_INIT_LOCK()
+#endif /* USE_LOCKS */
+
+
+/* -----------------------  Chunk representations ------------------------ */
+
+/*
+  (The following includes lightly edited explanations by Colin Plumb.)
+
+  The malloc_chunk declaration below is misleading (but accurate and
+  necessary).  It declares a "view" into memory allowing access to
+  necessary fields at known offsets from a given base.
+
+  Chunks of memory are maintained using a `boundary tag' method as
+  originally described by Knuth.  (See the paper by Paul Wilson
+  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
+  techniques.)  Sizes of free chunks are stored both in the front of
+  each chunk and at the end.  This makes consolidating fragmented
+  chunks into bigger chunks fast.  The head fields also hold bits
+  representing whether chunks are free or in use.
+
+  Here are some pictures to make it clearer.  They are "exploded" to
+  show that the state of a chunk can be thought of as extending from
+  the high 31 bits of the head field of its header through the
+  prev_foot and PINUSE_BIT bit of the following chunk header.
+
+  A chunk that's in use looks like:
+
+   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+           | Size of previous chunk (if P = 1)                             |
+           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
+         | Size of this chunk                                         1| +-+
+   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         |                                                               |
+         +-                                                             -+
+         |                                                               |
+         +-                                                             -+
+         |                                                               :
+         +-      size - sizeof(size_t) available payload bytes          -+
+         :                                                               |
+ chunk-> +-                                                             -+
+         |                                                               |
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
+       | Size of next chunk (may or may not be in use)               | +-+
+ mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
+    And if it's free, it looks like this:
+
+   chunk-> +-                                                             -+
+           | User payload (must be in use, or we would have merged!)       |
+           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
+         | Size of this chunk                                         0| +-+
+   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         | Next pointer                                                  |
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         | Prev pointer                                                  |
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         |                                                               :
+         +-      size - sizeof(struct chunk) unused bytes               -+
+         :                                                               |
+ chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         | Size of this chunk                                            |
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
+       | Size of next chunk (must be in use, or we would have merged)| +-+
+ mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+       |                                                               :
+       +- User payload                                                -+
+       :                                                               |
+       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+                                                                     |0|
+                                                                     +-+
+  Note that since we always merge adjacent free chunks, the chunks
+  adjacent to a free chunk must be in use.
+
+  Given a pointer to a chunk (which can be derived trivially from the
+  payload pointer) we can, in O(1) time, find out whether the adjacent
+  chunks are free, and if so, unlink them from the lists that they
+  are on and merge them with the current chunk.
+
+  Chunks always begin on even word boundaries, so the mem portion
+  (which is returned to the user) is also on an even word boundary, and
+  thus at least double-word aligned.
+
+  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
+  chunk size (which is always a multiple of two words), is an in-use
+  bit for the *previous* chunk.  If that bit is *clear*, then the
+  word before the current chunk size contains the previous chunk
+  size, and can be used to find the front of the previous chunk.
+  The very first chunk allocated always has this bit set, preventing
+  access to non-existent (or non-owned) memory. If pinuse is set for
+  any given chunk, then you CANNOT determine the size of the
+  previous chunk, and might even get a memory addressing fault when
+  trying to do so.
+
+  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
+  the chunk size redundantly records whether the current chunk is
+  inuse. This redundancy enables usage checks within free and realloc,
+  and reduces indirection when freeing and consolidating chunks.
+
+  Each freshly allocated chunk must have both cinuse and pinuse set.
+  That is, each allocated chunk borders either a previously allocated
+  and still in-use chunk, or the base of its memory arena. This is
+    ensured by making all allocations from the `lowest' part of any
+  found chunk.  Further, no free chunk physically borders another one,
+  so each free chunk is known to be preceded and followed by either
+  inuse chunks or the ends of memory.
+
+  Note that the `foot' of the current chunk is actually represented
+  as the prev_foot of the NEXT chunk. This makes it easier to
+  deal with alignments etc but can be very confusing when trying
+  to extend or adapt this code.
+
+  The exceptions to all this are
+
+     1. The special chunk `top' is the top-most available chunk (i.e.,
+        the one bordering the end of available memory). It is treated
+        specially.  Top is never included in any bin, is used only if
+        no other chunk is available, and is released back to the
+        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
+        the top chunk is treated as larger (and thus less well
+        fitting) than any other available chunk.  The top chunk
+        doesn't update its trailing size field since there is no next
+        contiguous chunk that would have to index off it. However,
+        space is still allocated for it (TOP_FOOT_SIZE) to enable
+        separation or merging when space is extended.
+
+     2. Chunks allocated via mmap, which have the lowest-order bit
+        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
+        PINUSE_BIT in their head fields.  Because they are allocated
+        one-by-one, each must carry its own prev_foot field, which is
+        also used to hold the offset this chunk has within its mmapped
+        region, which is needed to preserve alignment. Each mmapped
+        chunk is trailed by the first two fields of a fake next-chunk
+        for the sake of usage checks.
+
+*/
+
+struct malloc_chunk {
+  size_t               prev_foot;  /* Size of previous chunk (if free).  */
+  size_t               head;       /* Size and inuse bits. */
+  struct malloc_chunk* fd;         /* double links -- used only if free. */
+  struct malloc_chunk* bk;
+};
+
+typedef struct malloc_chunk  mchunk;
+typedef struct malloc_chunk* mchunkptr;
+typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
+typedef unsigned int bindex_t;         /* Described below */
+typedef unsigned int binmap_t;         /* Described below */
+typedef unsigned int flag_t;           /* The type of various bit flag sets */
+
+/* ------------------- Chunks sizes and alignments ----------------------- */
+
+#define MCHUNK_SIZE         (sizeof(mchunk))
+
+#if FOOTERS
+#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
+#else /* FOOTERS */
+#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
+#endif /* FOOTERS */
+
+/* MMapped chunks need a second word of overhead ... */
+#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
+/* ... and additional padding for fake next-chunk at foot */
+#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
+
+/* The smallest size we can malloc is an aligned minimal chunk */
+#define MIN_CHUNK_SIZE\
+  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
+
+/* conversion from malloc headers to user pointers, and back */
+#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
+#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
+/* chunk associated with aligned address A */
+#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
+
+/* Bounds on request (not chunk) sizes. */
+#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
+#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
+
+/* pad request bytes into a usable size */
+#define pad_request(req) \
+   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
+
+/* pad request, checking for minimum (but not maximum) */
+#define request2size(req) \
+  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
+
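+/*
+  A worked example, assuming 32-bit size_t, 8-byte alignment, and no
+  FOOTERS: MCHUNK_SIZE == 16, MIN_CHUNK_SIZE == 16, CHUNK_OVERHEAD ==
+  4, and MIN_REQUEST == 11, so
+    request2size(1)  == 16    (below MIN_REQUEST; use MIN_CHUNK_SIZE)
+    request2size(12) == 16    ((12 + 4 + 7) & ~7)
+    request2size(20) == 24    ((20 + 4 + 7) & ~7)
+  chunk2mem and mem2chunk then convert between the chunk header and
+  the user pointer by TWO_SIZE_T_SIZES (8 bytes here).
+*/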
+
+/* ------------------ Operations on head and foot fields ----------------- */
+
+/*
+  The head field of a chunk is or'ed with PINUSE_BIT when the
+  previous adjacent chunk is in use, and or'ed with CINUSE_BIT if
+  this chunk is in use. If the chunk was obtained with mmap, the
+  prev_foot field has IS_MMAPPED_BIT set, and its remaining bits hold
+  the offset from the base of the mmapped region to the base of the
+  chunk.
+*/
+
+#define PINUSE_BIT          (SIZE_T_ONE)
+#define CINUSE_BIT          (SIZE_T_TWO)
+#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
+
+/* Head value for fenceposts */
+#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
+
+/* extraction of fields from head words */
+#define cinuse(p)           ((p)->head & CINUSE_BIT)
+#define pinuse(p)           ((p)->head & PINUSE_BIT)
+#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
+
+#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
+#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
+
+/* Treat space at ptr +/- offset as a chunk */
+#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
+#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
+
+/* Ptr to next or previous physical malloc_chunk. */
+#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
+#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
+
+/* extract next chunk's pinuse bit */
+#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
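+
+/*
+  An illustrative sketch (not the actual free() code, which appears
+  later) of how these macros give O(1) coalescing:
+
+    if (!pinuse(p)) {
+      mchunkptr prev = prev_chunk(p); /* prev_foot valid: prev is free */
+      ...merge prev into p...
+    }
+    mchunkptr next = next_chunk(p);
+    if (!cinuse(next))
+      ...merge next into p...
+
+  When pinuse(p) is set, prev_foot must not be read, as explained in
+  the chunk overview above.
+*/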
+
+/* Get/set size at footer */
+#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
+#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
+
+/* Set size, pinuse bit, and foot */
+#define set_size_and_pinuse_of_free_chunk(p, s)\
+  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
+
+/* Set size, pinuse bit, foot, and clear next pinuse */
+#define set_free_with_pinuse(p, s, n)\
+  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
+
+#define is_mmapped(p)\
+  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
+
+/* Get the internal overhead associated with chunk p */
+#define overhead_for(p)\
+ (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
+
+/* Return true if malloced space is not necessarily cleared */
+#if MMAP_CLEARS
+#define calloc_must_clear(p) (!is_mmapped(p))
+#else /* MMAP_CLEARS */
+#define calloc_must_clear(p) (1)
+#endif /* MMAP_CLEARS */
+
+/* ---------------------- Overlaid data structures ----------------------- */
+
+/*
+  When chunks are not in use, they are treated as nodes of either
+  lists or trees.
+
+  "Small"  chunks are stored in circular doubly-linked lists, and look
+  like this:
+
+    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Size of previous chunk                            |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    `head:' |             Size of chunk, in bytes                         |P|
+      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Forward pointer to next chunk in list             |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Back pointer to previous chunk in list            |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Unused space (may be 0 bytes long)                .
+            .                                                               .
+            .                                                               |
+nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    `foot:' |             Size of chunk, in bytes                           |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
+  Larger chunks are kept in a form of bitwise digital trees (aka
+  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
+  free chunks greater than 256 bytes, their size doesn't impose any
+  constraints on user chunk sizes.  Each node looks like:
+
+    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Size of previous chunk                            |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    `head:' |             Size of chunk, in bytes                         |P|
+      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Forward pointer to next chunk of same size        |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Back pointer to previous chunk of same size       |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Pointer to left child (child[0])                  |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Pointer to right child (child[1])                 |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Pointer to parent                                 |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             bin index of this chunk                           |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Unused space                                      .
+            .                                                               |
+nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    `foot:' |             Size of chunk, in bytes                           |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
+  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
+  of the same size are arranged in a circularly-linked list, with only
+  the oldest chunk (the next to be used, in our FIFO ordering)
+  actually in the tree.  (Tree members are distinguished by a non-null
+  parent pointer.)  If a chunk with the same size as an existing node
+  is inserted, it is linked off the existing node using pointers that
+  work in the same way as fd/bk pointers of small chunks.
+
+  Each tree contains a power of 2 sized range of chunk sizes (the
+  smallest is 0x100 <= x < 0x180), which is divided in half at each
+  tree level, with the chunks in the smaller half of the range (0x100
+  <= x < 0x140 for the top node) in the left subtree and the larger
+  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
+  done by inspecting individual bits.
+
+  Using these rules, each node's left subtree contains all smaller
+  sizes than its right subtree.  However, the node at the root of each
+  subtree has no particular ordering relationship to either.  (The
+  dividing line between the subtree sizes is based on trie relation.)
+  If we remove the last chunk of a given size from the interior of the
+  tree, we need to replace it with a leaf node.  The tree ordering
+  rules permit a node to be replaced by any leaf below it.
+
+  Finding the smallest chunk in a tree (a common operation in a
+  best-fit allocator) is done by walking a path to the leftmost leaf
+  in the tree.  Unlike a usual binary tree, where we follow left child
+  pointers until we reach a null, here we follow the right child
+  pointer any time the left one is null, until we reach a leaf with
+  both child pointers null. The smallest chunk in the tree will be
+  somewhere along that path.
+
+  The worst case number of steps to add, find, or remove a node is
+  bounded by the number of bits differentiating chunks within
+  bins. Under current bin calculations, this ranges from 6 up to 21
+  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
+  is of course much better.
+*/
+
+struct malloc_tree_chunk {
+  /* The first four fields must be compatible with malloc_chunk */
+  size_t                    prev_foot;
+  size_t                    head;
+  struct malloc_tree_chunk* fd;
+  struct malloc_tree_chunk* bk;
+
+  struct malloc_tree_chunk* child[2];
+  struct malloc_tree_chunk* parent;
+  bindex_t                  index;
+};
+
+typedef struct malloc_tree_chunk  tchunk;
+typedef struct malloc_tree_chunk* tchunkptr;
+typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
+
+/* A little helper macro for trees */
+#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
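+
+/*
+  A sketch of the "walk to the leftmost leaf" search described in the
+  overview above (the real code, in tmalloc_small further below, also
+  splits off the remainder):
+
+    tchunkptr t = ...some treebin root...;
+    tchunkptr least = t;
+    while ((t = leftmost_child(t)) != 0)
+      if (chunksize(t) < chunksize(least))
+        least = t;
+
+  following the right child whenever the left one is null.
+*/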
+
+/* ----------------------------- Segments -------------------------------- */
+
+/*
+  Each malloc space may include non-contiguous segments, held in a
+  list headed by an embedded malloc_segment record representing the
+  top-most space. Segments also include flags holding properties of
+  the space. Large chunks that are directly allocated by mmap are not
+  included in this list. They are instead independently created and
+  destroyed without otherwise keeping track of them.
+
+  Segment management mainly comes into play for spaces allocated by
+  MMAP.  Any call to MMAP might or might not return memory that is
+  adjacent to an existing segment.  MORECORE normally contiguously
+  extends the current space, so this space is almost always adjacent,
+  which is simpler and faster to deal with. (This is why MORECORE is
+  used preferentially to MMAP when both are available -- see
+  sys_alloc.)  When allocating using MMAP, we don't use any of the
+  hinting mechanisms (inconsistently) supported in various
+  implementations of unix mmap, or distinguish reserving from
+  committing memory. Instead, we just ask for space, and exploit
+  contiguity when we get it.  It is probably possible to do
+  better than this on some systems, but no general scheme seems
+  to be significantly better.
+
+  Management entails a simpler variant of the consolidation scheme
+  used for chunks to reduce fragmentation -- new adjacent memory is
+  normally prepended or appended to an existing segment. However,
+  there are limitations compared to chunk consolidation that mostly
+  reflect the fact that segment processing is relatively infrequent
+  (occurring only when getting memory from the system) and that we
+  don't expect to have huge numbers of segments:
+
+  * Segments are not indexed, so traversal requires linear scans.  (It
+    would be possible to index these, but is not worth the extra
+    overhead and complexity for most programs on most platforms.)
+  * New segments are only appended to old ones when holding top-most
+    memory; if they cannot be prepended to others, they are held in
+    different segments.
+
+  Except for the top-most segment of an mstate, each segment record
+  is kept at the tail of its segment. Segments are added by pushing
+  segment records onto the list headed by &mstate.seg for the
+  containing mstate.
+
+  Segment flags control allocation/merge/deallocation policies:
+  * If EXTERN_BIT set, then we did not allocate this segment,
+    and so should not try to deallocate or merge with others.
+    (This currently holds only for the initial segment passed
+    into create_mspace_with_base.)
+  * If IS_MMAPPED_BIT set, the segment may be merged with
+    other surrounding mmapped segments and trimmed/de-allocated
+    using munmap.
+  * If neither bit is set, then the segment was obtained using
+    MORECORE so can be merged with surrounding MORECORE'd segments
+    and deallocated/trimmed using MORECORE with negative arguments.
+*/
+
+struct malloc_segment {
+  char*        base;             /* base address */
+  size_t       size;             /* allocated size */
+  struct malloc_segment* next;   /* ptr to next segment */
+  flag_t       sflags;           /* mmap and extern flag */
+};
+
+#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
+#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
+
+typedef struct malloc_segment  msegment;
+typedef struct malloc_segment* msegmentptr;
+
+/* ---------------------------- malloc_state ----------------------------- */
+
+/*
+   A malloc_state holds all of the bookkeeping for a space.
+   The main fields are:
+
+  Top
+    The topmost chunk of the currently active segment. Its size is
+    cached in topsize.  The actual size of topmost space is
+    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
+    fenceposts and segment records if necessary when getting more
+    space from the system.  The size at which to autotrim top is
+    cached from mparams in trim_check, except that it is disabled if
+    an autotrim fails.
+
+  Designated victim (dv)
+    This is the preferred chunk for servicing small requests that
+    don't have exact fits.  It is normally the chunk split off most
+    recently to service another small request.  Its size is cached in
+    dvsize. The link fields of this chunk are not maintained since it
+    is not kept in a bin.
+
+  SmallBins
+    An array of bin headers for free chunks.  These bins hold chunks
+    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
+    chunks of all the same size, spaced 8 bytes apart.  To simplify
+    use in double-linked lists, each bin header acts as a malloc_chunk
+    pointing to the real first node, if it exists (else pointing to
+    itself).  This avoids special-casing for headers.  But to avoid
+    waste, we allocate only the fd/bk pointers of bins, and then use
+    repositioning tricks to treat these as the fields of a chunk.
+
+  TreeBins
+    Treebins are pointers to the roots of trees holding a range of
+    sizes. There are 2 equally spaced treebins for each power of two
+    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
+    larger.
+
+  Bin maps
+    There is one bit map for small bins ("smallmap") and one for
+    treebins ("treemap).  Each bin sets its bit when non-empty, and
+    clears the bit when empty.  Bit operations are then used to avoid
+    bin-by-bin searching -- nearly all "search" is done without ever
+    looking at bins that won't be selected.  The bit maps
+    conservatively use 32 bits per map word, even on a 64-bit system.
+    For a good description of some of the bit-based techniques used
+    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
+    supplement at http://hackersdelight.org/). Many of these are
+    intended to reduce the branchiness of paths through malloc etc, as
+    well as to reduce the number of memory locations read or written.
+
+  Segments
+    A list of segments headed by an embedded malloc_segment record
+    representing the initial space.
+
+  Address check support
+    The least_addr field is the least address ever obtained from
+    MORECORE or MMAP. Attempted frees and reallocs of any address less
+    than this are trapped (unless INSECURE is defined).
+
+  Magic tag
+    A cross-check field that should always hold the same value as
+    mparams.magic.
+
+  Flags
+    Bits recording whether to use MMAP, locks, or contiguous MORECORE
+
+  Statistics
+    Each space keeps track of current and maximum system memory
+    obtained via MORECORE or MMAP.
+
+  Locking
+    If USE_LOCKS is defined, the "mutex" lock is acquired and released
+    around every public call using this mspace.
+*/
+
+/* Bin types, widths and sizes */
+#define NSMALLBINS        (32U)
+#define NTREEBINS         (32U)
+#define SMALLBIN_SHIFT    (3U)
+#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
+#define TREEBIN_SHIFT     (8U)
+#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
+#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
+#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
+
+struct malloc_state {
+  binmap_t   smallmap;
+  binmap_t   treemap;
+  size_t     dvsize;
+  size_t     topsize;
+  char*      least_addr;
+  mchunkptr  dv;
+  mchunkptr  top;
+  size_t     trim_check;
+  size_t     magic;
+  mchunkptr  smallbins[(NSMALLBINS+1)*2];
+  tbinptr    treebins[NTREEBINS];
+  size_t     footprint;
+  size_t     max_footprint;
+  flag_t     mflags;
+#if USE_LOCKS
+  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
+#endif /* USE_LOCKS */
+  msegment   seg;
+};
+
+typedef struct malloc_state*    mstate;
+
+/* ------------- Global malloc_state and malloc_params ------------------- */
+
+/*
+  malloc_params holds global properties, including those that can be
+  dynamically set using mallopt. There is a single instance, mparams,
+  initialized in init_mparams.
+*/
+
+struct malloc_params {
+  size_t magic;
+  size_t page_size;
+  size_t granularity;
+  size_t mmap_threshold;
+  size_t trim_threshold;
+  flag_t default_mflags;
+};
+
+static struct malloc_params mparams;
+
+/* The global malloc_state used for all non-"mspace" calls */
+static struct malloc_state _gm_;
+#define gm                 (&_gm_)
+#define is_global(M)       ((M) == &_gm_)
+#define is_initialized(M)  ((M)->top != 0)
+
+/* -------------------------- system alloc setup ------------------------- */
+
+/* Operations on mflags */
+
+#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
+#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
+#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
+
+#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
+#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
+#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
+
+#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
+#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
+
+#define set_lock(M,L)\
+ ((M)->mflags = (L)?\
+  ((M)->mflags | USE_LOCK_BIT) :\
+  ((M)->mflags & ~USE_LOCK_BIT))
+
+/* page-align a size */
+#define page_align(S)\
+ (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
+
+/* granularity-align a size */
+#define granularity_align(S)\
+  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
+
+#define is_page_aligned(S)\
+   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
+#define is_granularity_aligned(S)\
+   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
+
+/*  True if segment S holds address A */
+#define segment_holds(S, A)\
+  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
+
+/* Return segment holding given address */
+static msegmentptr segment_holding(mstate m, char* addr) {
+  msegmentptr sp = &m->seg;
+  for (;;) {
+    if (addr >= sp->base && addr < sp->base + sp->size)
+      return sp;
+    if ((sp = sp->next) == 0)
+      return 0;
+  }
+}
+
+/* Return true if segment contains a segment link */
+static int has_segment_link(mstate m, msegmentptr ss) {
+  msegmentptr sp = &m->seg;
+  for (;;) {
+    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
+      return 1;
+    if ((sp = sp->next) == 0)
+      return 0;
+  }
+}
+
+#ifndef MORECORE_CANNOT_TRIM
+#define should_trim(M,s)  ((s) > (M)->trim_check)
+#else  /* MORECORE_CANNOT_TRIM */
+#define should_trim(M,s)  (0)
+#endif /* MORECORE_CANNOT_TRIM */
+
+/*
+  TOP_FOOT_SIZE is padding at the end of a segment, including space
+  that may be needed to place segment records and fenceposts when new
+  noncontiguous segments are added.
+*/
+#define TOP_FOOT_SIZE\
+  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
+
+
+/* -------------------------------  Hooks -------------------------------- */
+
+/*
+  PREACTION should be defined to return 0 on success, and nonzero on
+  failure. If you are not using locking, you can redefine these to do
+  anything you like.
+*/
+
+#if USE_LOCKS
+
+/* Ensure locks are initialized */
+#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
+
+#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
+#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
+#else /* USE_LOCKS */
+
+#ifndef PREACTION
+#define PREACTION(M) (0)
+#endif  /* PREACTION */
+
+#ifndef POSTACTION
+#define POSTACTION(M)
+#endif  /* POSTACTION */
+
+#endif /* USE_LOCKS */
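+
+/*
+  Every public entry point below wraps its body in this pair; a
+  simplified sketch of the pattern:
+
+    void* some_public_call(mstate m, size_t bytes) {
+      void* mem = 0;
+      if (!PREACTION(m)) {   /* initialize and lock; zero on success */
+        ...allocate into mem...
+        POSTACTION(m);       /* release the lock, if one is held */
+      }
+      return mem;
+    }
+*/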
+
+/*
+  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
+  USAGE_ERROR_ACTION is triggered on detected bad frees and
+  reallocs. The argument p is an address that might have triggered the
+  fault. It is ignored by the two predefined actions, but might be
+  useful in custom actions that try to help diagnose errors.
+*/
+
+#if PROCEED_ON_ERROR
+
+/* A count of the number of corruption errors causing resets */
+int malloc_corruption_error_count;
+
+/* default corruption action */
+static void reset_on_error(mstate m);
+
+#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
+#define USAGE_ERROR_ACTION(m, p)
+
+#else /* PROCEED_ON_ERROR */
+
+#ifndef CORRUPTION_ERROR_ACTION
+#define CORRUPTION_ERROR_ACTION(m) ABORT
+#endif /* CORRUPTION_ERROR_ACTION */
+
+#ifndef USAGE_ERROR_ACTION
+#define USAGE_ERROR_ACTION(m,p) ABORT
+#endif /* USAGE_ERROR_ACTION */
+
+#endif /* PROCEED_ON_ERROR */
+
+/* -------------------------- Debugging setup ---------------------------- */
+
+#if ! DEBUG
+
+#define check_free_chunk(M,P)
+#define check_inuse_chunk(M,P)
+#define check_malloced_chunk(M,P,N)
+#define check_mmapped_chunk(M,P)
+#define check_malloc_state(M)
+#define check_top_chunk(M,P)
+
+#else /* DEBUG */
+#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
+#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
+#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
+#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
+#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
+#define check_malloc_state(M)       do_check_malloc_state(M)
+
+static void   do_check_any_chunk(mstate m, mchunkptr p);
+static void   do_check_top_chunk(mstate m, mchunkptr p);
+static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
+static void   do_check_inuse_chunk(mstate m, mchunkptr p);
+static void   do_check_free_chunk(mstate m, mchunkptr p);
+static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
+static void   do_check_tree(mstate m, tchunkptr t);
+static void   do_check_treebin(mstate m, bindex_t i);
+static void   do_check_smallbin(mstate m, bindex_t i);
+static void   do_check_malloc_state(mstate m);
+static int    bin_find(mstate m, mchunkptr x);
+static size_t traverse_and_check(mstate m);
+#endif /* DEBUG */
+
+/* ---------------------------- Indexing Bins ---------------------------- */
+
+#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
+#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
+#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
+#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
+
+/* addressing by index. See above about smallbin repositioning */
+#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
+#define treebin_at(M,i)     (&((M)->treebins[i]))
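+
+/*
+  For example, with SMALLBIN_SHIFT == 3 each smallbin holds exactly
+  one chunk size, 8 bytes apart (sizes assuming 32-bit size_t):
+    small_index(16)  == 2    (MIN_CHUNK_SIZE, the first usable bin)
+    small_index(24)  == 3
+    small_index(248) == 31   (the largest small chunk size)
+  and any size of 256 (MIN_LARGE_SIZE) or more goes to a treebin.
+*/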
+
+/* assign tree index for size S to variable I */
+#if defined(__GNUC__) && defined(i386)
+#define compute_tree_index(S, I)\
+{\
+  size_t X = S >> TREEBIN_SHIFT;\
+  if (X == 0)\
+    I = 0;\
+  else if (X > 0xFFFF)\
+    I = NTREEBINS-1;\
+  else {\
+    unsigned int K;\
+    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
+    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
+  }\
+}
+#else /* GNUC */
+#define compute_tree_index(S, I)\
+{\
+  size_t X = S >> TREEBIN_SHIFT;\
+  if (X == 0)\
+    I = 0;\
+  else if (X > 0xFFFF)\
+    I = NTREEBINS-1;\
+  else {\
+    unsigned int Y = (unsigned int)X;\
+    unsigned int N = ((Y - 0x100) >> 16) & 8;\
+    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
+    N += K;\
+    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
+    K = 14 - N + ((Y <<= K) >> 15);\
+    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
+  }\
+}
+#endif /* GNUC */
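+
+/*
+  For example (either version): sizes 0x100-0x17F map to treebin 0,
+  0x180-0x1FF to treebin 1, 0x200-0x2FF to treebin 2, 0x300-0x3FF to
+  treebin 3, and so on -- two bins per power of two, as described in
+  the malloc_state overview above.
+*/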
+
+/* Bit representing maximum resolved size in a treebin at i */
+#define bit_for_tree_index(i) \
+   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
+
+/* Shift placing maximum resolved bit in a treebin at i as sign bit */
+#define leftshift_for_tree_index(i) \
+   ((i == NTREEBINS-1)? 0 : \
+    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
+
+/* The size of the smallest chunk held in bin with index i */
+#define minsize_for_tree_index(i) \
+   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
+   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
+
+
+/* ------------------------ Operations on bin maps ----------------------- */
+
+/* bit corresponding to given index */
+#define idx2bit(i)              ((binmap_t)(1) << (i))
+
+/* Mark/Clear bits with given index */
+#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
+#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
+#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
+
+#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
+#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
+#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
+
+/* index corresponding to given bit */
+
+#if defined(__GNUC__) && defined(i386)
+#define compute_bit2idx(X, I)\
+{\
+  unsigned int J;\
+  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
+  I = (bindex_t)J;\
+}
+
+#else /* GNUC */
+#if  USE_BUILTIN_FFS
+#define compute_bit2idx(X, I) I = ffs(X)-1
+
+#else /* USE_BUILTIN_FFS */
+#define compute_bit2idx(X, I)\
+{\
+  unsigned int Y = X - 1;\
+  unsigned int K = Y >> (16-4) & 16;\
+  unsigned int N = K;        Y >>= K;\
+  N += K = Y >> (8-3) &  8;  Y >>= K;\
+  N += K = Y >> (4-2) &  4;  Y >>= K;\
+  N += K = Y >> (2-1) &  2;  Y >>= K;\
+  N += K = Y >> (1-0) &  1;  Y >>= K;\
+  I = (bindex_t)(N + Y);\
+}
+#endif /* USE_BUILTIN_FFS */
+#endif /* GNUC */
+
+/* isolate the least set bit of a bitmap */
+#define least_bit(x)         ((x) & -(x))
+
+/* mask with all bits to left of least bit of x on */
+#define left_bits(x)         ((x<<1) | -(x<<1))
+
+/* mask with all bits to left of or equal to least bit of x on */
+#define same_or_left_bits(x) ((x) | -(x))
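+
+/*
+  Worked example of these tricks, with x == 0x34 (binary 0110100,
+  least set bit == bit 2) and 32-bit binmap_t:
+    least_bit(x)         == 0x04          (bit 2 alone)
+    left_bits(x)         == 0xFFFFFFF8    (bits 3 and above)
+    same_or_left_bits(x) == 0xFFFFFFFC    (bits 2 and above)
+  A typical use, sketched: find the next non-empty bin strictly above
+  index i in a bin map (assuming some such bin exists),
+    binmap_t b = least_bit(map & left_bits(idx2bit(i)));
+    compute_bit2idx(b, j);   /* j = index of that bin */
+*/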
+
+
+/* ----------------------- Runtime Check Support ------------------------- */
+
+/*
+  For security, the main invariant is that malloc/free/etc never
+  writes to a static address other than malloc_state, unless static
+  malloc_state itself has been corrupted, which cannot occur via
+  malloc (because of these checks). In essence this means that we
+  believe all pointers, sizes, maps etc held in malloc_state, but
+  check all of those linked or offsetted from other embedded data
+  structures.  These checks are interspersed with main code in a way
+  that tends to minimize their run-time cost.
+
+  When FOOTERS is defined, in addition to range checking, we also
+  verify footer fields of inuse chunks, which can be used to guarantee
+  that the mstate controlling malloc/free is intact.  This is a
+  streamlined version of the approach described by William Robertson
+  et al in "Run-time Detection of Heap-based Overflows" LISA'03
+  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
+  of an inuse chunk holds the xor of its mstate and a random seed,
+  which is checked upon calls to free() and realloc().  This is
+  (probabilistically) unguessable from outside the program, but can be
+  computed by any code successfully malloc'ing any chunk, so does not
+  itself provide protection against code that has already broken
+  security through some other means.  Unlike Robertson et al, we
+  always dynamically check addresses of all offset chunks (previous,
+  next, etc). This turns out to be cheaper than relying on hashes.
+*/
+
+#if !INSECURE
+/* Check if address a is at least as high as any from MORECORE or MMAP */
+#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
+/* Check if address of next chunk n is higher than base chunk p */
+#define ok_next(p, n)    ((char*)(p) < (char*)(n))
+/* Check if p has its cinuse bit on */
+#define ok_cinuse(p)     cinuse(p)
+/* Check if p has its pinuse bit on */
+#define ok_pinuse(p)     pinuse(p)
+
+#else /* !INSECURE */
+#define ok_address(M, a) (1)
+#define ok_next(b, n)    (1)
+#define ok_cinuse(p)     (1)
+#define ok_pinuse(p)     (1)
+#endif /* !INSECURE */
+
+#if (FOOTERS && !INSECURE)
+/* Check if (alleged) mstate m has expected magic field */
+#define ok_magic(M)      ((M)->magic == mparams.magic)
+#else  /* (FOOTERS && !INSECURE) */
+#define ok_magic(M)      (1)
+#endif /* (FOOTERS && !INSECURE) */
+
+
+/* In gcc, use __builtin_expect to minimize impact of checks */
+#if !INSECURE
+#if defined(__GNUC__) && __GNUC__ >= 3
+#define RTCHECK(e)  __builtin_expect(e, 1)
+#else /* GNUC */
+#define RTCHECK(e)  (e)
+#endif /* GNUC */
+#else /* !INSECURE */
+#define RTCHECK(e)  (1)
+#endif /* !INSECURE */
+
+/* macros to set up inuse chunks with or without footers */
+
+#if !FOOTERS
+
+#define mark_inuse_foot(M,p,s)
+
+/* Set cinuse bit and pinuse bit of next chunk */
+#define set_inuse(M,p,s)\
+  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
+  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
+
+/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
+#define set_inuse_and_pinuse(M,p,s)\
+  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
+  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
+
+/* Set size, cinuse and pinuse bit of this chunk */
+#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
+  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
+
+#else /* FOOTERS */
+
+/* Set foot of inuse chunk to be xor of mstate and seed */
+#define mark_inuse_foot(M,p,s)\
+  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
+
+#define get_mstate_for(p)\
+  ((mstate)(((mchunkptr)((char*)(p) +\
+    (chunksize(p))))->prev_foot ^ mparams.magic))
+
+#define set_inuse(M,p,s)\
+  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
+  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
+  mark_inuse_foot(M,p,s))
+
+#define set_inuse_and_pinuse(M,p,s)\
+  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
+  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
+ mark_inuse_foot(M,p,s))
+
+#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
+  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
+  mark_inuse_foot(M, p, s))
+
+#endif /* !FOOTERS */
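+
+/*
+  The footer check round-trips because xor is its own inverse:
+  mark_inuse_foot stores ((size_t)M ^ mparams.magic) in the following
+  chunk's prev_foot, and get_mstate_for xors that value with
+  mparams.magic again, recovering M.  A stray write to the footer
+  instead yields a bogus mstate whose magic field then fails the
+  ok_magic() cross-check.
+*/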
+
+/* ---------------------------- setting mparams -------------------------- */
+
+/* Initialize mparams */
+static int init_mparams(void) {
+  if (mparams.page_size == 0) {
+    size_t s;
+
+    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
+    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
+#if MORECORE_CONTIGUOUS
+    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
+#else  /* MORECORE_CONTIGUOUS */
+    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
+#endif /* MORECORE_CONTIGUOUS */
+
+#if (FOOTERS && !INSECURE)
+    {
+#if USE_DEV_RANDOM
+      int fd;
+      unsigned char buf[sizeof(size_t)];
+      /* Try to use /dev/urandom, else fall back on using time */
+      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
+          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
+        s = *((size_t *) buf);
+        close(fd);
+      }
+      else
+#endif /* USE_DEV_RANDOM */
+        s = (size_t)(time(0) ^ (size_t)0x55555555U);
+
+      s |= (size_t)8U;    /* ensure nonzero */
+      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
+
+    }
+#else /* (FOOTERS && !INSECURE) */
+    s = (size_t)0x58585858U;
+#endif /* (FOOTERS && !INSECURE) */
+    ACQUIRE_MAGIC_INIT_LOCK();
+    if (mparams.magic == 0) {
+      mparams.magic = s;
+      /* Set up lock for main malloc area */
+      INITIAL_LOCK(&gm->mutex);
+      gm->mflags = mparams.default_mflags;
+    }
+    RELEASE_MAGIC_INIT_LOCK();
+
+#ifndef WIN32
+    mparams.page_size = malloc_getpagesize;
+    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
+                           DEFAULT_GRANULARITY : mparams.page_size);
+#else /* WIN32 */
+    {
+      SYSTEM_INFO system_info;
+      GetSystemInfo(&system_info);
+      mparams.page_size = system_info.dwPageSize;
+      mparams.granularity = system_info.dwAllocationGranularity;
+    }
+#endif /* WIN32 */
+
+    /* Sanity-check configuration:
+       size_t must be unsigned and as wide as pointer type.
+       ints must be at least 4 bytes.
+       alignment must be at least 8.
+       Alignment, min chunk size, and page size must all be powers of 2.
+    */
+    if ((sizeof(size_t) != sizeof(char*)) ||
+        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
+        (sizeof(int) < 4)  ||
+        (MALLOC_ALIGNMENT < (size_t)8U) ||
+        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
+        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
+        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
+        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
+      ABORT;
+  }
+  return 0;
+}
+
+/* support for mallopt */
+static int change_mparam(int param_number, int value) {
+  size_t val = (size_t)value;
+  init_mparams();
+  switch(param_number) {
+  case M_TRIM_THRESHOLD:
+    mparams.trim_threshold = val;
+    return 1;
+  case M_GRANULARITY:
+    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
+      mparams.granularity = val;
+      return 1;
+    }
+    else
+      return 0;
+  case M_MMAP_THRESHOLD:
+    mparams.mmap_threshold = val;
+    return 1;
+  default:
+    return 0;
+  }
+}
+
+#if DEBUG
+/* ------------------------- Debugging Support --------------------------- */
+
+/* Check properties of any chunk, whether free, inuse, mmapped etc  */
+static void do_check_any_chunk(mstate m, mchunkptr p) {
+  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
+  assert(ok_address(m, p));
+}
+
+/* Check properties of top chunk */
+static void do_check_top_chunk(mstate m, mchunkptr p) {
+  msegmentptr sp = segment_holding(m, (char*)p);
+  size_t  sz = chunksize(p);
+  assert(sp != 0);
+  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
+  assert(ok_address(m, p));
+  assert(sz == m->topsize);
+  assert(sz > 0);
+  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
+  assert(pinuse(p));
+  assert(!next_pinuse(p));
+}
+
+/* Check properties of (inuse) mmapped chunks */
+static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
+  size_t  sz = chunksize(p);
+  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
+  assert(is_mmapped(p));
+  assert(use_mmap(m));
+  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
+  assert(ok_address(m, p));
+  assert(!is_small(sz));
+  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
+  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
+  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
+}
+
+/* Check properties of inuse chunks */
+static void do_check_inuse_chunk(mstate m, mchunkptr p) {
+  do_check_any_chunk(m, p);
+  assert(cinuse(p));
+  assert(next_pinuse(p));
+  /* If not pinuse and not mmapped, previous chunk has OK offset */
+  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
+  if (is_mmapped(p))
+    do_check_mmapped_chunk(m, p);
+}
+
+/* Check properties of free chunks */
+static void do_check_free_chunk(mstate m, mchunkptr p) {
+  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
+  mchunkptr next = chunk_plus_offset(p, sz);
+  do_check_any_chunk(m, p);
+  assert(!cinuse(p));
+  assert(!next_pinuse(p));
+  assert (!is_mmapped(p));
+  if (p != m->dv && p != m->top) {
+    if (sz >= MIN_CHUNK_SIZE) {
+      assert((sz & CHUNK_ALIGN_MASK) == 0);
+      assert(is_aligned(chunk2mem(p)));
+      assert(next->prev_foot == sz);
+      assert(pinuse(p));
+      assert (next == m->top || cinuse(next));
+      assert(p->fd->bk == p);
+      assert(p->bk->fd == p);
+    }
+    else  /* markers are always of size SIZE_T_SIZE */
+      assert(sz == SIZE_T_SIZE);
+  }
+}
+
+/* Check properties of malloced chunks at the point they are malloced */
+static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
+  if (mem != 0) {
+    mchunkptr p = mem2chunk(mem);
+    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
+    do_check_inuse_chunk(m, p);
+    assert((sz & CHUNK_ALIGN_MASK) == 0);
+    assert(sz >= MIN_CHUNK_SIZE);
+    assert(sz >= s);
+    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
+    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
+  }
+}
+
+/* Check a tree and its subtrees.  */
+static void do_check_tree(mstate m, tchunkptr t) {
+  tchunkptr head = 0;
+  tchunkptr u = t;
+  bindex_t tindex = t->index;
+  size_t tsize = chunksize(t);
+  bindex_t idx;
+  compute_tree_index(tsize, idx);
+  assert(tindex == idx);
+  assert(tsize >= MIN_LARGE_SIZE);
+  assert(tsize >= minsize_for_tree_index(idx));
+  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
+
+  do { /* traverse through chain of same-sized nodes */
+    do_check_any_chunk(m, ((mchunkptr)u));
+    assert(u->index == tindex);
+    assert(chunksize(u) == tsize);
+    assert(!cinuse(u));
+    assert(!next_pinuse(u));
+    assert(u->fd->bk == u);
+    assert(u->bk->fd == u);
+    if (u->parent == 0) {
+      assert(u->child[0] == 0);
+      assert(u->child[1] == 0);
+    }
+    else {
+      assert(head == 0); /* only one node on chain has parent */
+      head = u;
+      assert(u->parent != u);
+      assert (u->parent->child[0] == u ||
+              u->parent->child[1] == u ||
+              *((tbinptr*)(u->parent)) == u);
+      if (u->child[0] != 0) {
+        assert(u->child[0]->parent == u);
+        assert(u->child[0] != u);
+        do_check_tree(m, u->child[0]);
+      }
+      if (u->child[1] != 0) {
+        assert(u->child[1]->parent == u);
+        assert(u->child[1] != u);
+        do_check_tree(m, u->child[1]);
+      }
+      if (u->child[0] != 0 && u->child[1] != 0) {
+        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
+      }
+    }
+    u = u->fd;
+  } while (u != t);
+  assert(head != 0);
+}
+
+/*  Check all the chunks in a treebin.  */
+static void do_check_treebin(mstate m, bindex_t i) {
+  tbinptr* tb = treebin_at(m, i);
+  tchunkptr t = *tb;
+  int empty = (m->treemap & (1U << i)) == 0;
+  if (t == 0)
+    assert(empty);
+  if (!empty)
+    do_check_tree(m, t);
+}
+
+/*  Check all the chunks in a smallbin.  */
+static void do_check_smallbin(mstate m, bindex_t i) {
+  sbinptr b = smallbin_at(m, i);
+  mchunkptr p = b->bk;
+  unsigned int empty = (m->smallmap & (1U << i)) == 0;
+  if (p == b)
+    assert(empty);
+  if (!empty) {
+    for (; p != b; p = p->bk) {
+      size_t size = chunksize(p);
+      mchunkptr q;
+      /* each chunk claims to be free */
+      do_check_free_chunk(m, p);
+      /* chunk belongs in bin */
+      assert(small_index(size) == i);
+      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
+      /* chunk is followed by an inuse chunk */
+      q = next_chunk(p);
+      if (q->head != FENCEPOST_HEAD)
+        do_check_inuse_chunk(m, q);
+    }
+  }
+}
+
+/* Find x in a bin. Used in other check functions. */
+static int bin_find(mstate m, mchunkptr x) {
+  size_t size = chunksize(x);
+  if (is_small(size)) {
+    bindex_t sidx = small_index(size);
+    sbinptr b = smallbin_at(m, sidx);
+    if (smallmap_is_marked(m, sidx)) {
+      mchunkptr p = b;
+      do {
+        if (p == x)
+          return 1;
+      } while ((p = p->fd) != b);
+    }
+  }
+  else {
+    bindex_t tidx;
+    compute_tree_index(size, tidx);
+    if (treemap_is_marked(m, tidx)) {
+      tchunkptr t = *treebin_at(m, tidx);
+      size_t sizebits = size << leftshift_for_tree_index(tidx);
+      while (t != 0 && chunksize(t) != size) {
+        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
+        sizebits <<= 1;
+      }
+      if (t != 0) {
+        tchunkptr u = t;
+        do {
+          if (u == (tchunkptr)x)
+            return 1;
+        } while ((u = u->fd) != t);
+      }
+    }
+  }
+  return 0;
+}
+
+/* Traverse each chunk and check it; return total */
+static size_t traverse_and_check(mstate m) {
+  size_t sum = 0;
+  if (is_initialized(m)) {
+    msegmentptr s = &m->seg;
+    sum += m->topsize + TOP_FOOT_SIZE;
+    while (s != 0) {
+      mchunkptr q = align_as_chunk(s->base);
+      mchunkptr lastq = 0;
+      assert(pinuse(q));
+      while (segment_holds(s, q) &&
+             q != m->top && q->head != FENCEPOST_HEAD) {
+        sum += chunksize(q);
+        if (cinuse(q)) {
+          assert(!bin_find(m, q));
+          do_check_inuse_chunk(m, q);
+        }
+        else {
+          assert(q == m->dv || bin_find(m, q));
+          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
+          do_check_free_chunk(m, q);
+        }
+        lastq = q;
+        q = next_chunk(q);
+      }
+      s = s->next;
+    }
+  }
+  return sum;
+}
+
+/* Check all properties of malloc_state. */
+static void do_check_malloc_state(mstate m) {
+  bindex_t i;
+  size_t total;
+  /* check bins */
+  for (i = 0; i < NSMALLBINS; ++i)
+    do_check_smallbin(m, i);
+  for (i = 0; i < NTREEBINS; ++i)
+    do_check_treebin(m, i);
+
+  if (m->dvsize != 0) { /* check dv chunk */
+    do_check_any_chunk(m, m->dv);
+    assert(m->dvsize == chunksize(m->dv));
+    assert(m->dvsize >= MIN_CHUNK_SIZE);
+    assert(bin_find(m, m->dv) == 0);
+  }
+
+  if (m->top != 0) {   /* check top chunk */
+    do_check_top_chunk(m, m->top);
+    assert(m->topsize == chunksize(m->top));
+    assert(m->topsize > 0);
+    assert(bin_find(m, m->top) == 0);
+  }
+
+  total = traverse_and_check(m);
+  assert(total <= m->footprint);
+  assert(m->footprint <= m->max_footprint);
+}
+#endif /* DEBUG */
+
+/* ----------------------------- statistics ------------------------------ */
+
+#if !NO_MALLINFO
+static struct mallinfo internal_mallinfo(mstate m) {
+  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
+  if (!PREACTION(m)) {
+    check_malloc_state(m);
+    if (is_initialized(m)) {
+      size_t nfree = SIZE_T_ONE; /* top always free */
+      size_t mfree = m->topsize + TOP_FOOT_SIZE;
+      size_t sum = mfree;
+      msegmentptr s = &m->seg;
+      while (s != 0) {
+        mchunkptr q = align_as_chunk(s->base);
+        while (segment_holds(s, q) &&
+               q != m->top && q->head != FENCEPOST_HEAD) {
+          size_t sz = chunksize(q);
+          sum += sz;
+          if (!cinuse(q)) {
+            mfree += sz;
+            ++nfree;
+          }
+          q = next_chunk(q);
+        }
+        s = s->next;
+      }
+
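+      /* Roughly how the SVID-style mallinfo fields are used here:
+         arena is the byte total of all segments, ordblks the number
+         of free chunks, hblkhd the footprint not represented as
+         chunks (bookkeeping overhead), usmblks the peak footprint,
+         uordblks and fordblks the in-use and free byte totals, and
+         keepcost the trimmable top size. */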
+      nm.arena    = sum;
+      nm.ordblks  = nfree;
+      nm.hblkhd   = m->footprint - sum;
+      nm.usmblks  = m->max_footprint;
+      nm.uordblks = m->footprint - mfree;
+      nm.fordblks = mfree;
+      nm.keepcost = m->topsize;
+    }
+
+    POSTACTION(m);
+  }
+  return nm;
+}
+#endif /* !NO_MALLINFO */
+
+static void internal_malloc_stats(mstate m) {
+  if (!PREACTION(m)) {
+    size_t maxfp = 0;
+    size_t fp = 0;
+    size_t used = 0;
+    check_malloc_state(m);
+    if (is_initialized(m)) {
+      msegmentptr s = &m->seg;
+      maxfp = m->max_footprint;
+      fp = m->footprint;
+      used = fp - (m->topsize + TOP_FOOT_SIZE);
+
+      while (s != 0) {
+        mchunkptr q = align_as_chunk(s->base);
+        while (segment_holds(s, q) &&
+               q != m->top && q->head != FENCEPOST_HEAD) {
+          if (!cinuse(q))
+            used -= chunksize(q);
+          q = next_chunk(q);
+        }
+        s = s->next;
+      }
+    }
+
+    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
+    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
+    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
+
+    POSTACTION(m);
+  }
+}
+
+/* ----------------------- Operations on smallbins ----------------------- */
+
+/*
+  Various forms of linking and unlinking are defined as macros.  Even
+  the ones for trees, which are very long but have very short typical
+  paths.  This is ugly but reduces reliance on inlining support of
+  compilers.
+*/
+
+/* Link a free chunk into a smallbin  */
+#define insert_small_chunk(M, P, S) {\
+  bindex_t I  = small_index(S);\
+  mchunkptr B = smallbin_at(M, I);\
+  mchunkptr F = B;\
+  assert(S >= MIN_CHUNK_SIZE);\
+  if (!smallmap_is_marked(M, I))\
+    mark_smallmap(M, I);\
+  else if (RTCHECK(ok_address(M, B->fd)))\
+    F = B->fd;\
+  else {\
+    CORRUPTION_ERROR_ACTION(M);\
+  }\
+  B->fd = P;\
+  F->bk = P;\
+  P->fd = F;\
+  P->bk = B;\
+}
+
+/* Unlink a chunk from a smallbin  */
+#define unlink_small_chunk(M, P, S) {\
+  mchunkptr F = P->fd;\
+  mchunkptr B = P->bk;\
+  bindex_t I = small_index(S);\
+  assert(P != B);\
+  assert(P != F);\
+  assert(chunksize(P) == small_index2size(I));\
+  if (F == B)\
+    clear_smallmap(M, I);\
+  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
+                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
+    F->bk = B;\
+    B->fd = F;\
+  }\
+  else {\
+    CORRUPTION_ERROR_ACTION(M);\
+  }\
+}
+
+/* Unlink the first chunk from a smallbin */
+#define unlink_first_small_chunk(M, B, P, I) {\
+  mchunkptr F = P->fd;\
+  assert(P != B);\
+  assert(P != F);\
+  assert(chunksize(P) == small_index2size(I));\
+  if (B == F)\
+    clear_smallmap(M, I);\
+  else if (RTCHECK(ok_address(M, F))) {\
+    B->fd = F;\
+    F->bk = B;\
+  }\
+  else {\
+    CORRUPTION_ERROR_ACTION(M);\
+  }\
+}
+
+/* Replace dv node, binning the old one */
+/* Used only when dvsize known to be small */
+#define replace_dv(M, P, S) {\
+  size_t DVS = M->dvsize;\
+  if (DVS != 0) {\
+    mchunkptr DV = M->dv;\
+    assert(is_small(DVS));\
+    insert_small_chunk(M, DV, DVS);\
+  }\
+  M->dvsize = S;\
+  M->dv = P;\
+}
+
+/* ------------------------- Operations on trees ------------------------- */
+
+/* Insert chunk into tree */
+#define insert_large_chunk(M, X, S) {\
+  tbinptr* H;\
+  bindex_t I;\
+  compute_tree_index(S, I);\
+  H = treebin_at(M, I);\
+  X->index = I;\
+  X->child[0] = X->child[1] = 0;\
+  if (!treemap_is_marked(M, I)) {\
+    mark_treemap(M, I);\
+    *H = X;\
+    X->parent = (tchunkptr)H;\
+    X->fd = X->bk = X;\
+  }\
+  else {\
+    tchunkptr T = *H;\
+    size_t K = S << leftshift_for_tree_index(I);\
+    for (;;) {\
+      if (chunksize(T) != S) {\
+        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
+        K <<= 1;\
+        if (*C != 0)\
+          T = *C;\
+        else if (RTCHECK(ok_address(M, C))) {\
+          *C = X;\
+          X->parent = T;\
+          X->fd = X->bk = X;\
+          break;\
+        }\
+        else {\
+          CORRUPTION_ERROR_ACTION(M);\
+          break;\
+        }\
+      }\
+      else {\
+        tchunkptr F = T->fd;\
+        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
+          T->fd = F->bk = X;\
+          X->fd = F;\
+          X->bk = T;\
+          X->parent = 0;\
+          break;\
+        }\
+        else {\
+          CORRUPTION_ERROR_ACTION(M);\
+          break;\
+        }\
+      }\
+    }\
+  }\
+}
+
+/*
+  Unlink steps:
+
+  1. If x is a chained node, unlink it from its same-sized fd/bk links
+     and choose its bk node as its replacement.
+  2. If x was the last node of its size, but not a leaf node, it must
+     be replaced with a leaf node (not merely one with an open left or
+     right), to make sure that lefts and rights of descendants
+     correspond properly to bit masks.  We use the rightmost descendant
+     of x.  We could use any other leaf, but this is easy to locate and
+     tends to counteract removal of leftmosts elsewhere, and so keeps
+     paths shorter than minimally guaranteed.  This doesn't loop much
+     because on average a node in a tree is near the bottom.
+  3. If x is the base of a chain (i.e., has parent links) relink
+     x's parent and children to x's replacement (or null if none).
+*/
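+
+/*
+  Concretely: same-sized tree chunks form a circular fd/bk ring, and
+  only the ring's base node carries parent/child links.  Unlinking a
+  non-base node is thus a plain list removal (step 1); unlinking the
+  base requires promoting a replacement into the tree proper (steps 2
+  and 3).
+*/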
+
+#define unlink_large_chunk(M, X) {\
+  tchunkptr XP = X->parent;\
+  tchunkptr R;\
+  if (X->bk != X) {\
+    tchunkptr F = X->fd;\
+    R = X->bk;\
+    if (RTCHECK(ok_address(M, F))) {\
+      F->bk = R;\
+      R->fd = F;\
+    }\
+    else {\
+      CORRUPTION_ERROR_ACTION(M);\
+    }\
+  }\
+  else {\
+    tchunkptr* RP;\
+    if (((R = *(RP = &(X->child[1]))) != 0) ||\
+        ((R = *(RP = &(X->child[0]))) != 0)) {\
+      tchunkptr* CP;\
+      while ((*(CP = &(R->child[1])) != 0) ||\
+             (*(CP = &(R->child[0])) != 0)) {\
+        R = *(RP = CP);\
+      }\
+      if (RTCHECK(ok_address(M, RP)))\
+        *RP = 0;\
+      else {\
+        CORRUPTION_ERROR_ACTION(M);\
+      }\
+    }\
+  }\
+  if (XP != 0) {\
+    tbinptr* H = treebin_at(M, X->index);\
+    if (X == *H) {\
+      if ((*H = R) == 0) \
+        clear_treemap(M, X->index);\
+    }\
+    else if (RTCHECK(ok_address(M, XP))) {\
+      if (XP->child[0] == X) \
+        XP->child[0] = R;\
+      else \
+        XP->child[1] = R;\
+    }\
+    else\
+      CORRUPTION_ERROR_ACTION(M);\
+    if (R != 0) {\
+      if (RTCHECK(ok_address(M, R))) {\
+        tchunkptr C0, C1;\
+        R->parent = XP;\
+        if ((C0 = X->child[0]) != 0) {\
+          if (RTCHECK(ok_address(M, C0))) {\
+            R->child[0] = C0;\
+            C0->parent = R;\
+          }\
+          else\
+            CORRUPTION_ERROR_ACTION(M);\
+        }\
+        if ((C1 = X->child[1]) != 0) {\
+          if (RTCHECK(ok_address(M, C1))) {\
+            R->child[1] = C1;\
+            C1->parent = R;\
+          }\
+          else\
+            CORRUPTION_ERROR_ACTION(M);\
+        }\
+      }\
+      else\
+        CORRUPTION_ERROR_ACTION(M);\
+    }\
+  }\
+}
+
+/* Relays to large vs small bin operations */
+
+#define insert_chunk(M, P, S)\
+  if (is_small(S)) insert_small_chunk(M, P, S)\
+  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
+
+#define unlink_chunk(M, P, S)\
+  if (is_small(S)) unlink_small_chunk(M, P, S)\
+  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
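+
+/*
+  For illustration: code that frees a coalesced chunk just calls
+  insert_chunk(m, p, psize); the relay picks a smallbin when
+  is_small(psize) holds and a treebin otherwise, so callers never
+  branch on the chunk size themselves.
+*/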
+
+
+/* Relays to internal calls to malloc/free from realloc, memalign etc */
+
+#if ONLY_MSPACES
+#define internal_malloc(m, b) mspace_malloc(m, b)
+#define internal_free(m, mem) mspace_free(m,mem);
+#else /* ONLY_MSPACES */
+#if MSPACES
+#define internal_malloc(m, b)\
+   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
+#define internal_free(m, mem)\
+   if (m == gm) dlfree(mem); else mspace_free(m,mem);
+#else /* MSPACES */
+#define internal_malloc(m, b) dlmalloc(b)
+#define internal_free(m, mem) dlfree(mem)
+#endif /* MSPACES */
+#endif /* ONLY_MSPACES */
+
+/* -----------------------  Direct-mmapping chunks ----------------------- */
+
+/*
+  Directly mmapped chunks are set up with an offset to the start of
+  the mmapped region stored in the prev_foot field of the chunk. This
+  allows reconstruction of the required argument to MUNMAP when freed,
+  and also allows adjustment of the returned chunk to meet alignment
+  requirements (especially in memalign).  There is also enough space
+  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
+  the PINUSE bit so frees can be checked.
+*/
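+
+/*
+  A sketch of the layout built by mmap_alloc below, with the chunk p
+  placed at mm + offset inside the mmapped region of mmsize bytes:
+
+    [mm ... alignment padding ...]
+    [p = mm+offset: prev_foot = offset|IS_MMAPPED_BIT,
+                    head = psize|CINUSE_BIT, then the user payload]
+    [p+psize:              head = FENCEPOST_HEAD]
+    [p+psize+SIZE_T_SIZE:  head = 0]
+*/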
+
+/* Malloc using mmap */
+static void* mmap_alloc(mstate m, size_t nb) {
+  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
+  if (mmsize > nb) {     /* Check for wrap around 0 */
+    char* mm = (char*)(DIRECT_MMAP(mmsize));
+    if (mm != CMFAIL) {
+      size_t offset = align_offset(chunk2mem(mm));
+      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
+      mchunkptr p = (mchunkptr)(mm + offset);
+      p->prev_foot = offset | IS_MMAPPED_BIT;
+      (p)->head = (psize|CINUSE_BIT);
+      mark_inuse_foot(m, p, psize);
+      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
+      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
+
+      if (mm < m->least_addr)
+        m->least_addr = mm;
+      if ((m->footprint += mmsize) > m->max_footprint)
+        m->max_footprint = m->footprint;
+      assert(is_aligned(chunk2mem(p)));
+      check_mmapped_chunk(m, p);
+      return chunk2mem(p);
+    }
+  }
+  return 0;
+}
+
+/* Realloc using mmap */
+static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
+  size_t oldsize = chunksize(oldp);
+  if (is_small(nb)) /* Can't shrink mmap regions below small size */
+    return 0;
+  /* Keep old chunk if big enough but not too big */
+  if (oldsize >= nb + SIZE_T_SIZE &&
+      (oldsize - nb) <= (mparams.granularity << 1))
+    return oldp;
+  else {
+    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
+    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
+    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
+                                         CHUNK_ALIGN_MASK);
+    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
+                                  oldmmsize, newmmsize, 1);
+    if (cp != CMFAIL) {
+      mchunkptr newp = (mchunkptr)(cp + offset);
+      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
+      newp->head = (psize|CINUSE_BIT);
+      mark_inuse_foot(m, newp, psize);
+      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
+      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
+
+      if (cp < m->least_addr)
+        m->least_addr = cp;
+      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
+        m->max_footprint = m->footprint;
+      check_mmapped_chunk(m, newp);
+      return newp;
+    }
+  }
+  return 0;
+}
+
+/* -------------------------- mspace management -------------------------- */
+
+/* Initialize top chunk and its size */
+static void init_top(mstate m, mchunkptr p, size_t psize) {
+  /* Ensure alignment */
+  size_t offset = align_offset(chunk2mem(p));
+  p = (mchunkptr)((char*)p + offset);
+  psize -= offset;
+
+  m->top = p;
+  m->topsize = psize;
+  p->head = psize | PINUSE_BIT;
+  /* set size of fake trailing chunk holding overhead space only once */
+  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
+  m->trim_check = mparams.trim_threshold; /* reset on each update */
+}
+
+/* Initialize bins for a new mstate that is otherwise zeroed out */
+static void init_bins(mstate m) {
+  /* Establish circular links for smallbins */
+  bindex_t i;
+  for (i = 0; i < NSMALLBINS; ++i) {
+    sbinptr bin = smallbin_at(m,i);
+    bin->fd = bin->bk = bin;
+  }
+}
+
+#if PROCEED_ON_ERROR
+
+/* default corruption action */
+static void reset_on_error(mstate m) {
+  int i;
+  ++malloc_corruption_error_count;
+  /* Reinitialize fields to forget about all memory */
+  m->smallbins = m->treebins = 0;
+  m->dvsize = m->topsize = 0;
+  m->seg.base = 0;
+  m->seg.size = 0;
+  m->seg.next = 0;
+  m->top = m->dv = 0;
+  for (i = 0; i < NTREEBINS; ++i)
+    *treebin_at(m, i) = 0;
+  init_bins(m);
+}
+#endif /* PROCEED_ON_ERROR */
+
+/* Allocate a chunk of size nb at newbase, consolidating the remainder
+   with the first chunk of the adjacent, previously existing oldbase. */
+static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
+                           size_t nb) {
+  mchunkptr p = align_as_chunk(newbase);
+  mchunkptr oldfirst = align_as_chunk(oldbase);
+  size_t psize = (char*)oldfirst - (char*)p;
+  mchunkptr q = chunk_plus_offset(p, nb);
+  size_t qsize = psize - nb;
+  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
+
+  assert((char*)oldfirst > (char*)q);
+  assert(pinuse(oldfirst));
+  assert(qsize >= MIN_CHUNK_SIZE);
+
+  /* consolidate remainder with first chunk of old base */
+  if (oldfirst == m->top) {
+    size_t tsize = m->topsize += qsize;
+    m->top = q;
+    q->head = tsize | PINUSE_BIT;
+    check_top_chunk(m, q);
+  }
+  else if (oldfirst == m->dv) {
+    size_t dsize = m->dvsize += qsize;
+    m->dv = q;
+    set_size_and_pinuse_of_free_chunk(q, dsize);
+  }
+  else {
+    if (!cinuse(oldfirst)) {
+      size_t nsize = chunksize(oldfirst);
+      unlink_chunk(m, oldfirst, nsize);
+      oldfirst = chunk_plus_offset(oldfirst, nsize);
+      qsize += nsize;
+    }
+    set_free_with_pinuse(q, qsize, oldfirst);
+    insert_chunk(m, q, qsize);
+    check_free_chunk(m, q);
+  }
+
+  check_malloced_chunk(m, chunk2mem(p), nb);
+  return chunk2mem(p);
+}
+
+
+/* Add a segment to hold a new noncontiguous region */
+static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
+  /* Determine locations and sizes of segment, fenceposts, old top */
+  char* old_top = (char*)m->top;
+  msegmentptr oldsp = segment_holding(m, old_top);
+  char* old_end = oldsp->base + oldsp->size;
+  size_t ssize = pad_request(sizeof(struct malloc_segment));
+  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
+  size_t offset = align_offset(chunk2mem(rawsp));
+  char* asp = rawsp + offset;
+  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
+  mchunkptr sp = (mchunkptr)csp;
+  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
+  mchunkptr tnext = chunk_plus_offset(sp, ssize);
+  mchunkptr p = tnext;
+  int nfences = 0;
+
+  /* reset top to new space */
+  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
+
+  /* Set up segment record */
+  assert(is_aligned(ss));
+  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
+  *ss = m->seg; /* Push current record */
+  m->seg.base = tbase;
+  m->seg.size = tsize;
+  m->seg.sflags = mmapped;
+  m->seg.next = ss;
+
+  /* Insert trailing fenceposts */
+  for (;;) {
+    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
+    p->head = FENCEPOST_HEAD;
+    ++nfences;
+    if ((char*)(&(nextp->head)) < old_end)
+      p = nextp;
+    else
+      break;
+  }
+  assert(nfences >= 2);
+
+  /* Insert the rest of old top into a bin as an ordinary free chunk */
+  if (csp != old_top) {
+    mchunkptr q = (mchunkptr)old_top;
+    size_t psize = csp - old_top;
+    mchunkptr tn = chunk_plus_offset(q, psize);
+    set_free_with_pinuse(q, psize, tn);
+    insert_chunk(m, q, psize);
+  }
+
+  check_top_chunk(m, m->top);
+}
+
+/* -------------------------- System allocation -------------------------- */
+
+/* Get memory from system using MORECORE or MMAP */
+static void* sys_alloc(mstate m, size_t nb) {
+  char* tbase = CMFAIL;
+  size_t tsize = 0;
+  flag_t mmap_flag = 0;
+
+  init_mparams();
+
+  /* Directly map large chunks */
+  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
+    void* mem = mmap_alloc(m, nb);
+    if (mem != 0)
+      return mem;
+  }
+
+  /*
+    Try getting memory in any of three ways (in most-preferred to
+    least-preferred order):
+    1. A call to MORECORE that can normally contiguously extend memory.
+       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE
+       or main space is mmapped or a previous contiguous call failed)
+    2. A call to MMAP new space (disabled if not HAVE_MMAP).
+       Note that under the default settings, if MORECORE is unable to
+       fulfill a request, and HAVE_MMAP is true, then mmap is
+       used as a noncontiguous system allocator. This is a useful backup
+       strategy for systems with holes in address spaces -- in this case
+       sbrk cannot contiguously expand the heap, but mmap may be able to
+       find space.
+    3. A call to MORECORE that cannot usually contiguously extend memory.
+       (disabled if not HAVE_MORECORE)
+  */
+
+  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
+    char* br = CMFAIL;
+    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
+    size_t asize = 0;
+    ACQUIRE_MORECORE_LOCK();
+
+    if (ss == 0) {  /* First time through or recovery */
+      char* base = (char*)CALL_MORECORE(0);
+      if (base != CMFAIL) {
+        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
+        /* Adjust to end on a page boundary */
+        if (!is_page_aligned(base))
+          asize += (page_align((size_t)base) - (size_t)base);
+        /* Can't call MORECORE if size is negative when treated as signed */
+        if (asize < HALF_MAX_SIZE_T &&
+            (br = (char*)(CALL_MORECORE(asize))) == base) {
+          tbase = base;
+          tsize = asize;
+        }
+      }
+    }
+    else {
+      /* Subtract out existing available top space from MORECORE request. */
+      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
+      /* Use mem here only if it did contiguously extend old space */
+      if (asize < HALF_MAX_SIZE_T &&
+          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
+        tbase = br;
+        tsize = asize;
+      }
+    }
+
+    if (tbase == CMFAIL) {    /* Cope with partial failure */
+      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
+        if (asize < HALF_MAX_SIZE_T &&
+            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
+          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
+          if (esize < HALF_MAX_SIZE_T) {
+            char* end = (char*)CALL_MORECORE(esize);
+            if (end != CMFAIL)
+              asize += esize;
+            else {            /* Can't use; try to release */
+              CALL_MORECORE(-asize);
+              br = CMFAIL;
+            }
+          }
+        }
+      }
+      if (br != CMFAIL) {    /* Use the space we did get */
+        tbase = br;
+        tsize = asize;
+      }
+      else
+        disable_contiguous(m); /* Don't try contiguous path in the future */
+    }
+
+    RELEASE_MORECORE_LOCK();
+  }
+
+  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
+    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
+    size_t rsize = granularity_align(req);
+    if (rsize > nb) { /* Fail if wraps around zero */
+      char* mp = (char*)(CALL_MMAP(rsize));
+      if (mp != CMFAIL) {
+        tbase = mp;
+        tsize = rsize;
+        mmap_flag = IS_MMAPPED_BIT;
+      }
+    }
+  }
+
+  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
+    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
+    if (asize < HALF_MAX_SIZE_T) {
+      char* br = CMFAIL;
+      char* end = CMFAIL;
+      ACQUIRE_MORECORE_LOCK();
+      br = (char*)(CALL_MORECORE(asize));
+      end = (char*)(CALL_MORECORE(0));
+      RELEASE_MORECORE_LOCK();
+      if (br != CMFAIL && end != CMFAIL && br < end) {
+        size_t ssize = end - br;
+        if (ssize > nb + TOP_FOOT_SIZE) {
+          tbase = br;
+          tsize = ssize;
+        }
+      }
+    }
+  }
+
+  if (tbase != CMFAIL) {
+
+    if ((m->footprint += tsize) > m->max_footprint)
+      m->max_footprint = m->footprint;
+
+    if (!is_initialized(m)) { /* first-time initialization */
+      m->seg.base = m->least_addr = tbase;
+      m->seg.size = tsize;
+      m->seg.sflags = mmap_flag;
+      m->magic = mparams.magic;
+      init_bins(m);
+      if (is_global(m)) 
+        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
+      else {
+        /* Offset top by embedded malloc_state */
+        mchunkptr mn = next_chunk(mem2chunk(m));
+        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
+      }
+    }
+
+    else {
+      /* Try to merge with an existing segment */
+      msegmentptr sp = &m->seg;
+      while (sp != 0 && tbase != sp->base + sp->size)
+        sp = sp->next;
+      if (sp != 0 &&
+          !is_extern_segment(sp) &&
+          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
+          segment_holds(sp, m->top)) { /* append */
+        sp->size += tsize;
+        init_top(m, m->top, m->topsize + tsize);
+      }
+      else {
+        if (tbase < m->least_addr)
+          m->least_addr = tbase;
+        sp = &m->seg;
+        while (sp != 0 && sp->base != tbase + tsize)
+          sp = sp->next;
+        if (sp != 0 &&
+            !is_extern_segment(sp) &&
+            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
+          char* oldbase = sp->base;
+          sp->base = tbase;
+          sp->size += tsize;
+          return prepend_alloc(m, tbase, oldbase, nb);
+        }
+        else
+          add_segment(m, tbase, tsize, mmap_flag);
+      }
+    }
+
+    if (nb < m->topsize) { /* Allocate from new or extended top space */
+      size_t rsize = m->topsize -= nb;
+      mchunkptr p = m->top;
+      mchunkptr r = m->top = chunk_plus_offset(p, nb);
+      r->head = rsize | PINUSE_BIT;
+      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
+      check_top_chunk(m, m->top);
+      check_malloced_chunk(m, chunk2mem(p), nb);
+      return chunk2mem(p);
+    }
+  }
+
+  MALLOC_FAILURE_ACTION;
+  return 0;
+}
+
+/* -----------------------  system deallocation -------------------------- */
+
+/* Unmap and unlink any mmapped segments that don't contain used chunks */
+static size_t release_unused_segments(mstate m) {
+  size_t released = 0;
+  msegmentptr pred = &m->seg;
+  msegmentptr sp = pred->next;
+  while (sp != 0) {
+    char* base = sp->base;
+    size_t size = sp->size;
+    msegmentptr next = sp->next;
+    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
+      mchunkptr p = align_as_chunk(base);
+      size_t psize = chunksize(p);
+      /* Can unmap if first chunk holds entire segment and not pinned */
+      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
+        tchunkptr tp = (tchunkptr)p;
+        assert(segment_holds(sp, (char*)sp));
+        if (p == m->dv) {
+          m->dv = 0;
+          m->dvsize = 0;
+        }
+        else {
+          unlink_large_chunk(m, tp);
+        }
+        if (CALL_MUNMAP(base, size) == 0) {
+          released += size;
+          m->footprint -= size;
+          /* unlink obsoleted record */
+          sp = pred;
+          sp->next = next;
+        }
+        else { /* back out if cannot unmap */
+          insert_large_chunk(m, tp, psize);
+        }
+      }
+    }
+    pred = sp;
+    sp = next;
+  }
+  return released;
+}
+
+static int sys_trim(mstate m, size_t pad) {
+  size_t released = 0;
+  if (pad < MAX_REQUEST && is_initialized(m)) {
+    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
+
+    if (m->topsize > pad) {
+      /* Shrink top space in granularity-size units, keeping at least one */
+      size_t unit = mparams.granularity;
+      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
+                      SIZE_T_ONE) * unit;
+      msegmentptr sp = segment_holding(m, (char*)m->top);
+
+      if (!is_extern_segment(sp)) {
+        if (is_mmapped_segment(sp)) {
+          if (HAVE_MMAP &&
+              sp->size >= extra &&
+              !has_segment_link(m, sp)) { /* can't shrink if pinned */
+            size_t newsize = sp->size - extra;
+            /* Prefer mremap, fall back to munmap */
+            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
+                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
+              released = extra;
+            }
+          }
+        }
+        else if (HAVE_MORECORE) {
+          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
+            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
+          ACQUIRE_MORECORE_LOCK();
+          {
+            /* Make sure end of memory is where we last set it. */
+            char* old_br = (char*)(CALL_MORECORE(0));
+            if (old_br == sp->base + sp->size) {
+              char* rel_br = (char*)(CALL_MORECORE(-extra));
+              char* new_br = (char*)(CALL_MORECORE(0));
+              if (rel_br != CMFAIL && new_br < old_br)
+                released = old_br - new_br;
+            }
+          }
+          RELEASE_MORECORE_LOCK();
+        }
+      }
+
+      if (released != 0) {
+        sp->size -= released;
+        m->footprint -= released;
+        init_top(m, m->top, m->topsize - released);
+        check_top_chunk(m, m->top);
+      }
+    }
+
+    /* Unmap any unused mmapped segments */
+    if (HAVE_MMAP) 
+      released += release_unused_segments(m);
+
+    /* On failure, disable autotrim to avoid repeated failed future calls */
+    if (released == 0)
+      m->trim_check = MAX_SIZE_T;
+  }
+
+  return (released != 0)? 1 : 0;
+}
+
+/* ---------------------------- malloc support --------------------------- */
+
+/* allocate a large request from the best fitting chunk in a treebin */
+static void* tmalloc_large(mstate m, size_t nb) {
+  tchunkptr v = 0;
+  size_t rsize = -nb; /* Unsigned negation */
+  tchunkptr t;
+  bindex_t idx;
+  compute_tree_index(nb, idx);
+
+  if ((t = *treebin_at(m, idx)) != 0) {
+    /* Traverse tree for this bin looking for node with size == nb */
+    size_t sizebits = nb << leftshift_for_tree_index(idx);
+    tchunkptr rst = 0;  /* The deepest untaken right subtree */
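+    /* The treebins form a bitwise trie keyed on size: at each level
+       the current top bit of sizebits selects child[0] or child[1],
+       so one bit of the request size is consumed per level. */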
+    for (;;) {
+      tchunkptr rt;
+      size_t trem = chunksize(t) - nb;
+      if (trem < rsize) {
+        v = t;
+        if ((rsize = trem) == 0)
+          break;
+      }
+      rt = t->child[1];
+      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
+      if (rt != 0 && rt != t)
+        rst = rt;
+      if (t == 0) {
+        t = rst; /* set t to least subtree holding sizes > nb */
+        break;
+      }
+      sizebits <<= 1;
+    }
+  }
+
+  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
+    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
+    if (leftbits != 0) {
+      bindex_t i;
+      binmap_t leastbit = least_bit(leftbits);
+      compute_bit2idx(leastbit, i);
+      t = *treebin_at(m, i);
+    }
+  }
+
+  while (t != 0) { /* find smallest of tree or subtree */
+    size_t trem = chunksize(t) - nb;
+    if (trem < rsize) {
+      rsize = trem;
+      v = t;
+    }
+    t = leftmost_child(t);
+  }
+
+  /*  If dv is a better fit, return 0 so malloc will use it */
+  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
+    if (RTCHECK(ok_address(m, v))) { /* split */
+      mchunkptr r = chunk_plus_offset(v, nb);
+      assert(chunksize(v) == rsize + nb);
+      if (RTCHECK(ok_next(v, r))) {
+        unlink_large_chunk(m, v);
+        if (rsize < MIN_CHUNK_SIZE)
+          set_inuse_and_pinuse(m, v, (rsize + nb));
+        else {
+          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
+          set_size_and_pinuse_of_free_chunk(r, rsize);
+          insert_chunk(m, r, rsize);
+        }
+        return chunk2mem(v);
+      }
+    }
+    CORRUPTION_ERROR_ACTION(m);
+  }
+  return 0;
+}
+
+/* allocate a small request from the best fitting chunk in a treebin */
+static void* tmalloc_small(mstate m, size_t nb) {
+  tchunkptr t, v;
+  size_t rsize;
+  bindex_t i;
+  binmap_t leastbit = least_bit(m->treemap);
+  compute_bit2idx(leastbit, i);
+
+  v = t = *treebin_at(m, i);
+  rsize = chunksize(t) - nb;
+
+  while ((t = leftmost_child(t)) != 0) {
+    size_t trem = chunksize(t) - nb;
+    if (trem < rsize) {
+      rsize = trem;
+      v = t;
+    }
+  }
+
+  if (RTCHECK(ok_address(m, v))) {
+    mchunkptr r = chunk_plus_offset(v, nb);
+    assert(chunksize(v) == rsize + nb);
+    if (RTCHECK(ok_next(v, r))) {
+      unlink_large_chunk(m, v);
+      if (rsize < MIN_CHUNK_SIZE)
+        set_inuse_and_pinuse(m, v, (rsize + nb));
+      else {
+        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
+        set_size_and_pinuse_of_free_chunk(r, rsize);
+        replace_dv(m, r, rsize);
+      }
+      return chunk2mem(v);
+    }
+  }
+
+  CORRUPTION_ERROR_ACTION(m);
+  return 0;
+}
+
+/* --------------------------- realloc support --------------------------- */
+
+static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
+  if (bytes >= MAX_REQUEST) {
+    MALLOC_FAILURE_ACTION;
+    return 0;
+  }
+  if (!PREACTION(m)) {
+    mchunkptr oldp = mem2chunk(oldmem);
+    size_t oldsize = chunksize(oldp);
+    mchunkptr next = chunk_plus_offset(oldp, oldsize);
+    mchunkptr newp = 0;
+    void* extra = 0;
+
+    /* Try to either shrink or extend into top. Else malloc-copy-free */
+
+    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
+                ok_next(oldp, next) && ok_pinuse(next))) {
+      size_t nb = request2size(bytes);
+      if (is_mmapped(oldp))
+        newp = mmap_resize(m, oldp, nb);
+      else if (oldsize >= nb) { /* already big enough */
+        size_t rsize = oldsize - nb;
+        newp = oldp;
+        if (rsize >= MIN_CHUNK_SIZE) {
+          mchunkptr remainder = chunk_plus_offset(newp, nb);
+          set_inuse(m, newp, nb);
+          set_inuse(m, remainder, rsize);
+          extra = chunk2mem(remainder);
+        }
+      }
+      else if (next == m->top && oldsize + m->topsize > nb) {
+        /* Expand into top */
+        size_t newsize = oldsize + m->topsize;
+        size_t newtopsize = newsize - nb;
+        mchunkptr newtop = chunk_plus_offset(oldp, nb);
+        set_inuse(m, oldp, nb);
+        newtop->head = newtopsize |PINUSE_BIT;
+        m->top = newtop;
+        m->topsize = newtopsize;
+        newp = oldp;
+      }
+    }
+    else {
+      USAGE_ERROR_ACTION(m, oldmem);
+      POSTACTION(m);
+      return 0;
+    }
+
+    POSTACTION(m);
+
+    if (newp != 0) {
+      if (extra != 0) {
+        internal_free(m, extra);
+      }
+      check_inuse_chunk(m, newp);
+      return chunk2mem(newp);
+    }
+    else {
+      void* newmem = internal_malloc(m, bytes);
+      if (newmem != 0) {
+        size_t oc = oldsize - overhead_for(oldp);
+        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
+        internal_free(m, oldmem);
+      }
+      return newmem;
+    }
+  }
+  return 0;
+}
+
+/* --------------------------- memalign support -------------------------- */
+
+static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
+  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
+    return internal_malloc(m, bytes);
+  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
+    alignment = MIN_CHUNK_SIZE;
+  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
+    size_t a = MALLOC_ALIGNMENT << 1;
+    while (a < alignment) a <<= 1;
+    alignment = a;
+  }
+  
+  if (bytes >= MAX_REQUEST - alignment) {
+    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
+      MALLOC_FAILURE_ACTION;
+    }
+  }
+  else {
+    size_t nb = request2size(bytes);
+    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
+    char* mem = (char*)internal_malloc(m, req);
+    if (mem != 0) {
+      void* leader = 0;
+      void* trailer = 0;
+      mchunkptr p = mem2chunk(mem);
+
+      if (PREACTION(m)) return 0;
+      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
+        /*
+          Find an aligned spot inside chunk.  Since we need to give
+          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
+          the first calculation places us at a spot with less than
+          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
+          We've allocated enough total room so that this is always
+          possible.
+        */
+        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
+                                                       alignment -
+                                                       SIZE_T_ONE)) &
+                                             -alignment));
+        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
+          br : br+alignment;
+        mchunkptr newp = (mchunkptr)pos;
+        size_t leadsize = pos - (char*)(p);
+        size_t newsize = chunksize(p) - leadsize;
+
+        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
+          newp->prev_foot = p->prev_foot + leadsize;
+          newp->head = (newsize|CINUSE_BIT);
+        }
+        else { /* Otherwise, give back leader, use the rest */
+          set_inuse(m, newp, newsize);
+          set_inuse(m, p, leadsize);
+          leader = chunk2mem(p);
+        }
+        p = newp;
+      }
+
+      /* Give back spare room at the end */
+      if (!is_mmapped(p)) {
+        size_t size = chunksize(p);
+        if (size > nb + MIN_CHUNK_SIZE) {
+          size_t remainder_size = size - nb;
+          mchunkptr remainder = chunk_plus_offset(p, nb);
+          set_inuse(m, p, nb);
+          set_inuse(m, remainder, remainder_size);
+          trailer = chunk2mem(remainder);
+        }
+      }
+
+      assert (chunksize(p) >= nb);
+      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
+      check_inuse_chunk(m, p);
+      POSTACTION(m);
+      if (leader != 0) {
+        internal_free(m, leader);
+      }
+      if (trailer != 0) {
+        internal_free(m, trailer);
+      }
+      return chunk2mem(p);
+    }
+  }
+  return 0;
+}
+
+/* ------------------------ comalloc/coalloc support --------------------- */
+
+static void** ialloc(mstate m,
+                     size_t n_elements,
+                     size_t* sizes,
+                     int opts,
+                     void* chunks[]) {
+  /*
+    This provides common support for independent_X routines, handling
+    all of the combinations that can result.
+
+    The opts arg has:
+    bit 0 set if all elements are same size (using sizes[0])
+    bit 1 set if elements should be zeroed
+  */
+
+  size_t    element_size;   /* chunksize of each element, if all same */
+  size_t    contents_size;  /* total size of elements */
+  size_t    array_size;     /* request size of pointer array */
+  void*     mem;            /* malloced aggregate space */
+  mchunkptr p;              /* corresponding chunk */
+  size_t    remainder_size; /* remaining bytes while splitting */
+  void**    marray;         /* either "chunks" or malloced ptr array */
+  mchunkptr array_chunk;    /* chunk for malloced ptr array */
+  flag_t    was_enabled;    /* to disable mmap */
+  size_t    size;
+  size_t    i;
+
+  /* compute array length, if needed */
+  if (chunks != 0) {
+    if (n_elements == 0)
+      return chunks; /* nothing to do */
+    marray = chunks;
+    array_size = 0;
+  }
+  else {
+    /* if empty req, must still return chunk representing empty array */
+    if (n_elements == 0)
+      return (void**)internal_malloc(m, 0);
+    marray = 0;
+    array_size = request2size(n_elements * (sizeof(void*)));
+  }
+
+  /* compute total element size */
+  if (opts & 0x1) { /* all-same-size */
+    element_size = request2size(*sizes);
+    contents_size = n_elements * element_size;
+  }
+  else { /* add up all the sizes */
+    element_size = 0;
+    contents_size = 0;
+    for (i = 0; i != n_elements; ++i)
+      contents_size += request2size(sizes[i]);
+  }
+
+  size = contents_size + array_size;
+
+  /*
+     Allocate the aggregate chunk.  First disable direct-mmapping so
+     malloc won't use it, since we would not be able to later
+     free/realloc space internal to a segregated mmap region.
+  */
+  was_enabled = use_mmap(m);
+  disable_mmap(m);
+  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
+  if (was_enabled)
+    enable_mmap(m);
+  if (mem == 0)
+    return 0;
+
+  if (PREACTION(m)) return 0;
+  p = mem2chunk(mem);
+  remainder_size = chunksize(p);
+
+  assert(!is_mmapped(p));
+
+  if (opts & 0x2) {       /* optionally clear the elements */
+    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
+  }
+
+  /* If not provided, allocate the pointer array as final part of chunk */
+  if (marray == 0) {
+    size_t  array_chunk_size;
+    array_chunk = chunk_plus_offset(p, contents_size);
+    array_chunk_size = remainder_size - contents_size;
+    marray = (void**) (chunk2mem(array_chunk));
+    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
+    remainder_size = contents_size;
+  }
+
+  /* split out elements */
+  for (i = 0; ; ++i) {
+    marray[i] = chunk2mem(p);
+    if (i != n_elements-1) {
+      if (element_size != 0)
+        size = element_size;
+      else
+        size = request2size(sizes[i]);
+      remainder_size -= size;
+      set_size_and_pinuse_of_inuse_chunk(m, p, size);
+      p = chunk_plus_offset(p, size);
+    }
+    else { /* the final element absorbs any overallocation slop */
+      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
+      break;
+    }
+  }
+
+#if DEBUG
+  if (marray != chunks) {
+    /* final element must have exactly exhausted chunk */
+    if (element_size != 0) {
+      assert(remainder_size == element_size);
+    }
+    else {
+      assert(remainder_size == request2size(sizes[i]));
+    }
+    check_inuse_chunk(m, mem2chunk(marray));
+  }
+  for (i = 0; i != n_elements; ++i)
+    check_inuse_chunk(m, mem2chunk(marray[i]));
+
+#endif /* DEBUG */
+
+  POSTACTION(m);
+  return marray;
+}
+
+
+/* -------------------------- public routines ---------------------------- */
+
+#if !ONLY_MSPACES
+
+void* dlmalloc(size_t bytes) {
+  /*
+     Basic algorithm:
+     If a small request (< 256 bytes minus per-chunk overhead):
+       1. If one exists, use a remainderless chunk in associated smallbin.
+          (Remainderless means that there are too few excess bytes to
+          represent as a chunk.)
+       2. If it is big enough, use the dv chunk, which is normally the
+          chunk adjacent to the one used for the most recent small request.
+       3. If one exists, split the smallest available chunk in a bin,
+          saving remainder in dv.
+       4. If it is big enough, use the top chunk.
+       5. If available, get memory from system and use it
+     Otherwise, for a large request:
+       1. Find the smallest available binned chunk that fits, and use it
+          if it is better fitting than dv chunk, splitting if necessary.
+       2. If better fitting than any binned chunk, use the dv chunk.
+       3. If it is big enough, use the top chunk.
+       4. If request size >= mmap threshold, try to directly mmap this chunk.
+       5. If available, get memory from system and use it
+
+     The ugly gotos here ensure that postaction occurs along all paths.
+  */
+
+  if (!PREACTION(gm)) {
+    void* mem;
+    size_t nb;
+    if (bytes <= MAX_SMALL_REQUEST) {
+      bindex_t idx;
+      binmap_t smallbits;
+      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
+      idx = small_index(nb);
+      smallbits = gm->smallmap >> idx;
+
+      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
+        mchunkptr b, p;
+        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
+        b = smallbin_at(gm, idx);
+        p = b->fd;
+        assert(chunksize(p) == small_index2size(idx));
+        unlink_first_small_chunk(gm, b, p, idx);
+        set_inuse_and_pinuse(gm, p, small_index2size(idx));
+        mem = chunk2mem(p);
+        check_malloced_chunk(gm, mem, nb);
+        goto postaction;
+      }
+
+      else if (nb > gm->dvsize) {
+        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
+          mchunkptr b, p, r;
+          size_t rsize;
+          bindex_t i;
+          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
+          binmap_t leastbit = least_bit(leftbits);
+          compute_bit2idx(leastbit, i);
+          b = smallbin_at(gm, i);
+          p = b->fd;
+          assert(chunksize(p) == small_index2size(i));
+          unlink_first_small_chunk(gm, b, p, i);
+          rsize = small_index2size(i) - nb;
+          /* Fit here cannot be remainderless with 4-byte sizes */
+          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
+            set_inuse_and_pinuse(gm, p, small_index2size(i));
+          else {
+            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
+            r = chunk_plus_offset(p, nb);
+            set_size_and_pinuse_of_free_chunk(r, rsize);
+            replace_dv(gm, r, rsize);
+          }
+          mem = chunk2mem(p);
+          check_malloced_chunk(gm, mem, nb);
+          goto postaction;
+        }
+
+        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
+          check_malloced_chunk(gm, mem, nb);
+          goto postaction;
+        }
+      }
+    }
+    else if (bytes >= MAX_REQUEST)
+      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
+    else {
+      nb = pad_request(bytes);
+      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
+        check_malloced_chunk(gm, mem, nb);
+        goto postaction;
+      }
+    }
+
+    if (nb <= gm->dvsize) {
+      size_t rsize = gm->dvsize - nb;
+      mchunkptr p = gm->dv;
+      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
+        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
+        gm->dvsize = rsize;
+        set_size_and_pinuse_of_free_chunk(r, rsize);
+        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
+      }
+      else { /* exhaust dv */
+        size_t dvs = gm->dvsize;
+        gm->dvsize = 0;
+        gm->dv = 0;
+        set_inuse_and_pinuse(gm, p, dvs);
+      }
+      mem = chunk2mem(p);
+      check_malloced_chunk(gm, mem, nb);
+      goto postaction;
+    }
+
+    else if (nb < gm->topsize) { /* Split top */
+      size_t rsize = gm->topsize -= nb;
+      mchunkptr p = gm->top;
+      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
+      r->head = rsize | PINUSE_BIT;
+      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
+      mem = chunk2mem(p);
+      check_top_chunk(gm, gm->top);
+      check_malloced_chunk(gm, mem, nb);
+      goto postaction;
+    }
+
+    mem = sys_alloc(gm, nb);
+
+  postaction:
+    POSTACTION(gm);
+    return mem;
+  }
+
+  return 0;
+}
+
+void dlfree(void* mem) {
+  /*
+     Consolidate freed chunks with preceding or succeeding bordering
+     free chunks, if they exist, and then place in a bin.  Intermixed
+     with special cases for top, dv, mmapped chunks, and usage errors.
+  */
+
+  if (mem != 0) {
+    mchunkptr p  = mem2chunk(mem);
+#if FOOTERS
+    mstate fm = get_mstate_for(p);
+    if (!ok_magic(fm)) {
+      USAGE_ERROR_ACTION(fm, p);
+      return;
+    }
+#else /* FOOTERS */
+#define fm gm
+#endif /* FOOTERS */
+    if (!PREACTION(fm)) {
+      check_inuse_chunk(fm, p);
+      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
+        size_t psize = chunksize(p);
+        mchunkptr next = chunk_plus_offset(p, psize);
+        if (!pinuse(p)) {
+          size_t prevsize = p->prev_foot;
+          if ((prevsize & IS_MMAPPED_BIT) != 0) {
+            prevsize &= ~IS_MMAPPED_BIT;
+            psize += prevsize + MMAP_FOOT_PAD;
+            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
+              fm->footprint -= psize;
+            goto postaction;
+          }
+          else {
+            mchunkptr prev = chunk_minus_offset(p, prevsize);
+            psize += prevsize;
+            p = prev;
+            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
+              if (p != fm->dv) {
+                unlink_chunk(fm, p, prevsize);
+              }
+              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
+                fm->dvsize = psize;
+                set_free_with_pinuse(p, psize, next);
+                goto postaction;
+              }
+            }
+            else
+              goto erroraction;
+          }
+        }
+
+        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
+          if (!cinuse(next)) {  /* consolidate forward */
+            if (next == fm->top) {
+              size_t tsize = fm->topsize += psize;
+              fm->top = p;
+              p->head = tsize | PINUSE_BIT;
+              if (p == fm->dv) {
+                fm->dv = 0;
+                fm->dvsize = 0;
+              }
+              if (should_trim(fm, tsize))
+                sys_trim(fm, 0);
+              goto postaction;
+            }
+            else if (next == fm->dv) {
+              size_t dsize = fm->dvsize += psize;
+              fm->dv = p;
+              set_size_and_pinuse_of_free_chunk(p, dsize);
+              goto postaction;
+            }
+            else {
+              size_t nsize = chunksize(next);
+              psize += nsize;
+              unlink_chunk(fm, next, nsize);
+              set_size_and_pinuse_of_free_chunk(p, psize);
+              if (p == fm->dv) {
+                fm->dvsize = psize;
+                goto postaction;
+              }
+            }
+          }
+          else
+            set_free_with_pinuse(p, psize, next);
+          insert_chunk(fm, p, psize);
+          check_free_chunk(fm, p);
+          goto postaction;
+        }
+      }
+    erroraction:
+      USAGE_ERROR_ACTION(fm, p);
+    postaction:
+      POSTACTION(fm);
+    }
+  }
+#if !FOOTERS
+#undef fm
+#endif /* FOOTERS */
+}
+
+void* dlcalloc(size_t n_elements, size_t elem_size) {
+  void* mem;
+  size_t req = 0;
+  if (n_elements != 0) {
+    req = n_elements * elem_size;
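+    /* The mask test lets small requests skip the division: if both
+       n_elements and elem_size fit in 16 bits, their product fits in
+       32 bits and cannot overflow the at-least-32-bit size_t assumed
+       here. */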
+    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
+        (req / n_elements != elem_size))
+      req = MAX_SIZE_T; /* force downstream failure on overflow */
+  }
+  mem = dlmalloc(req);
+  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
+    memset(mem, 0, req);
+  return mem;
+}
+
+void* dlrealloc(void* oldmem, size_t bytes) {
+  if (oldmem == 0)
+    return dlmalloc(bytes);
+#ifdef REALLOC_ZERO_BYTES_FREES
+  if (bytes == 0) {
+    dlfree(oldmem);
+    return 0;
+  }
+#endif /* REALLOC_ZERO_BYTES_FREES */
+  else {
+#if ! FOOTERS
+    mstate m = gm;
+#else /* FOOTERS */
+    mstate m = get_mstate_for(mem2chunk(oldmem));
+    if (!ok_magic(m)) {
+      USAGE_ERROR_ACTION(m, oldmem);
+      return 0;
+    }
+#endif /* FOOTERS */
+    return internal_realloc(m, oldmem, bytes);
+  }
+}
+
+void* dlmemalign(size_t alignment, size_t bytes) {
+  return internal_memalign(gm, alignment, bytes);
+}
+
+void** dlindependent_calloc(size_t n_elements, size_t elem_size,
+                                 void* chunks[]) {
+  size_t sz = elem_size; /* serves as 1-element array */
+  return ialloc(gm, n_elements, &sz, 3, chunks);
+}
+
+void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
+                                   void* chunks[]) {
+  return ialloc(gm, n_elements, sizes, 0, chunks);
+}
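+
+/*
+  A usage sketch for dlindependent_comalloc (names here are purely
+  illustrative): allocate a header and two dependently-sized arrays
+  from one aggregate chunk, so the pieces sit adjacent in memory yet
+  can still be freed individually with dlfree:
+
+    struct header { double* vals; int* idx; };
+
+    struct header* make_header(size_t n) {
+      void*  mem[3];
+      size_t sizes[3];
+      struct header* h;
+      sizes[0] = sizeof(struct header);
+      sizes[1] = n * sizeof(double);
+      sizes[2] = n * sizeof(int);
+      if (dlindependent_comalloc(3, sizes, mem) == 0)
+        return 0;
+      h = (struct header*)mem[0];
+      h->vals = (double*)mem[1];
+      h->idx  = (int*)mem[2];
+      return h;
+    }
+*/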
+
+void* dlvalloc(size_t bytes) {
+  size_t pagesz;
+  init_mparams();
+  pagesz = mparams.page_size;
+  return dlmemalign(pagesz, bytes);
+}
+
+void* dlpvalloc(size_t bytes) {
+  size_t pagesz;
+  init_mparams();
+  pagesz = mparams.page_size;
+  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
+}
+
+int dlmalloc_trim(size_t pad) {
+  int result = 0;
+  if (!PREACTION(gm)) {
+    result = sys_trim(gm, pad);
+    POSTACTION(gm);
+  }
+  return result;
+}
+
+size_t dlmalloc_footprint(void) {
+  return gm->footprint;
+}
+
+size_t dlmalloc_max_footprint(void) {
+  return gm->max_footprint;
+}
+
+#if !NO_MALLINFO
+struct mallinfo dlmallinfo(void) {
+  return internal_mallinfo(gm);
+}
+#endif /* NO_MALLINFO */
+
+void dlmalloc_stats() {
+  internal_malloc_stats(gm);
+}
+
+size_t dlmalloc_usable_size(void* mem) {
+  if (mem != 0) {
+    mchunkptr p = mem2chunk(mem);
+    if (cinuse(p))
+      return chunksize(p) - overhead_for(p);
+  }
+  return 0;
+}
+
+int dlmallopt(int param_number, int value) {
+  return change_mparam(param_number, value);
+}
+
+#endif /* !ONLY_MSPACES */
+
+/* ----------------------------- user mspaces ---------------------------- */
+
+#if MSPACES
+
+static mstate init_user_mstate(char* tbase, size_t tsize) {
+  size_t msize = pad_request(sizeof(struct malloc_state));
+  mchunkptr mn;
+  mchunkptr msp = align_as_chunk(tbase);
+  mstate m = (mstate)(chunk2mem(msp));
+  memset(m, 0, msize);
+  INITIAL_LOCK(&m->mutex);
+  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
+  m->seg.base = m->least_addr = tbase;
+  m->seg.size = m->footprint = m->max_footprint = tsize;
+  m->magic = mparams.magic;
+  m->mflags = mparams.default_mflags;
+  disable_contiguous(m);
+  init_bins(m);
+  mn = next_chunk(mem2chunk(m));
+  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
+  check_top_chunk(m, m->top);
+  return m;
+}
+
+mspace create_mspace(size_t capacity, int locked) {
+  mstate m = 0;
+  size_t msize = pad_request(sizeof(struct malloc_state));
+  init_mparams(); /* Ensure pagesize etc initialized */
+
+  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
+    size_t rs = ((capacity == 0)? mparams.granularity :
+                 (capacity + TOP_FOOT_SIZE + msize));
+    size_t tsize = granularity_align(rs);
+    char* tbase = (char*)(CALL_MMAP(tsize));
+    if (tbase != CMFAIL) {
+      m = init_user_mstate(tbase, tsize);
+      m->seg.sflags = IS_MMAPPED_BIT;
+      set_lock(m, locked);
+    }
+  }
+  return (mspace)m;
+}
+
+mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
+  mstate m = 0;
+  size_t msize = pad_request(sizeof(struct malloc_state));
+  init_mparams(); /* Ensure pagesize etc initialized */
+
+  if (capacity > msize + TOP_FOOT_SIZE &&
+      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
+    m = init_user_mstate((char*)base, capacity);
+    m->seg.sflags = EXTERN_BIT;
+    set_lock(m, locked);
+  }
+  return (mspace)m;
+}
+
+size_t destroy_mspace(mspace msp) {
+  size_t freed = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    msegmentptr sp = &ms->seg;
+    while (sp != 0) {
+      char* base = sp->base;
+      size_t size = sp->size;
+      flag_t flag = sp->sflags;
+      sp = sp->next;
+      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
+          CALL_MUNMAP(base, size) == 0)
+        freed += size;
+    }
+  }
+  else {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return freed;
+}
+
+/*
+  mspace versions of routines are near-clones of the global
+  versions. This is not so nice but better than the alternatives.
+*/
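+
+/*
+  A brief usage sketch: create_mspace(0, 0) above carves out a private
+  mmap-backed heap with the default initial capacity and no locking,
+  and destroy_mspace releases every segment at once, so per-object
+  frees are optional when a space is torn down wholesale:
+
+    mspace msp = create_mspace(0, 0);
+    void* p = mspace_malloc(msp, 128);
+    mspace_free(msp, p);
+    destroy_mspace(msp);
+*/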
+
+
+void* mspace_malloc(mspace msp, size_t bytes) {
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+    return 0;
+  }
+  if (!PREACTION(ms)) {
+    void* mem;
+    size_t nb;
+    if (bytes <= MAX_SMALL_REQUEST) {
+      bindex_t idx;
+      binmap_t smallbits;
+      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
+      idx = small_index(nb);
+      smallbits = ms->smallmap >> idx;
+
+      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
+        mchunkptr b, p;
+        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
+        b = smallbin_at(ms, idx);
+        p = b->fd;
+        assert(chunksize(p) == small_index2size(idx));
+        unlink_first_small_chunk(ms, b, p, idx);
+        set_inuse_and_pinuse(ms, p, small_index2size(idx));
+        mem = chunk2mem(p);
+        check_malloced_chunk(ms, mem, nb);
+        goto postaction;
+      }
+
+      else if (nb > ms->dvsize) {
+        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
+          mchunkptr b, p, r;
+          size_t rsize;
+          bindex_t i;
+          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
+          binmap_t leastbit = least_bit(leftbits);
+          compute_bit2idx(leastbit, i);
+          b = smallbin_at(ms, i);
+          p = b->fd;
+          assert(chunksize(p) == small_index2size(i));
+          unlink_first_small_chunk(ms, b, p, i);
+          rsize = small_index2size(i) - nb;
+          /* Fit here cannot be remainderless with 4-byte sizes */
+          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
+            set_inuse_and_pinuse(ms, p, small_index2size(i));
+          else {
+            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
+            r = chunk_plus_offset(p, nb);
+            set_size_and_pinuse_of_free_chunk(r, rsize);
+            replace_dv(ms, r, rsize);
+          }
+          mem = chunk2mem(p);
+          check_malloced_chunk(ms, mem, nb);
+          goto postaction;
+        }
+
+        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
+          check_malloced_chunk(ms, mem, nb);
+          goto postaction;
+        }
+      }
+    }
+    else if (bytes >= MAX_REQUEST)
+      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
+    else {
+      nb = pad_request(bytes);
+      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
+        check_malloced_chunk(ms, mem, nb);
+        goto postaction;
+      }
+    }
+
+    if (nb <= ms->dvsize) {
+      size_t rsize = ms->dvsize - nb;
+      mchunkptr p = ms->dv;
+      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
+        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
+        ms->dvsize = rsize;
+        set_size_and_pinuse_of_free_chunk(r, rsize);
+        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
+      }
+      else { /* exhaust dv */
+        size_t dvs = ms->dvsize;
+        ms->dvsize = 0;
+        ms->dv = 0;
+        set_inuse_and_pinuse(ms, p, dvs);
+      }
+      mem = chunk2mem(p);
+      check_malloced_chunk(ms, mem, nb);
+      goto postaction;
+    }
+
+    else if (nb < ms->topsize) { /* Split top */
+      size_t rsize = ms->topsize -= nb;
+      mchunkptr p = ms->top;
+      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
+      r->head = rsize | PINUSE_BIT;
+      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
+      mem = chunk2mem(p);
+      check_top_chunk(ms, ms->top);
+      check_malloced_chunk(ms, mem, nb);
+      goto postaction;
+    }
+
+    mem = sys_alloc(ms, nb);
+
+  postaction:
+    POSTACTION(ms);
+    return mem;
+  }
+
+  return 0;
+}
+
+void mspace_free(mspace msp, void* mem) {
+  if (mem != 0) {
+    mchunkptr p  = mem2chunk(mem);
+#if FOOTERS
+    mstate fm = get_mstate_for(p);
+#else /* FOOTERS */
+    mstate fm = (mstate)msp;
+#endif /* FOOTERS */
+    if (!ok_magic(fm)) {
+      USAGE_ERROR_ACTION(fm, p);
+      return;
+    }
+    if (!PREACTION(fm)) {
+      check_inuse_chunk(fm, p);
+      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
+        size_t psize = chunksize(p);
+        mchunkptr next = chunk_plus_offset(p, psize);
+        if (!pinuse(p)) {
+          size_t prevsize = p->prev_foot;
+          if ((prevsize & IS_MMAPPED_BIT) != 0) {
+            prevsize &= ~IS_MMAPPED_BIT;
+            psize += prevsize + MMAP_FOOT_PAD;
+            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
+              fm->footprint -= psize;
+            goto postaction;
+          }
+          else {
+            mchunkptr prev = chunk_minus_offset(p, prevsize);
+            psize += prevsize;
+            p = prev;
+            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
+              if (p != fm->dv) {
+                unlink_chunk(fm, p, prevsize);
+              }
+              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
+                fm->dvsize = psize;
+                set_free_with_pinuse(p, psize, next);
+                goto postaction;
+              }
+            }
+            else
+              goto erroraction;
+          }
+        }
+
+        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
+          if (!cinuse(next)) {  /* consolidate forward */
+            if (next == fm->top) {
+              size_t tsize = fm->topsize += psize;
+              fm->top = p;
+              p->head = tsize | PINUSE_BIT;
+              if (p == fm->dv) {
+                fm->dv = 0;
+                fm->dvsize = 0;
+              }
+              if (should_trim(fm, tsize))
+                sys_trim(fm, 0);
+              goto postaction;
+            }
+            else if (next == fm->dv) {
+              size_t dsize = fm->dvsize += psize;
+              fm->dv = p;
+              set_size_and_pinuse_of_free_chunk(p, dsize);
+              goto postaction;
+            }
+            else {
+              size_t nsize = chunksize(next);
+              psize += nsize;
+              unlink_chunk(fm, next, nsize);
+              set_size_and_pinuse_of_free_chunk(p, psize);
+              if (p == fm->dv) {
+                fm->dvsize = psize;
+                goto postaction;
+              }
+            }
+          }
+          else
+            set_free_with_pinuse(p, psize, next);
+          insert_chunk(fm, p, psize);
+          check_free_chunk(fm, p);
+          goto postaction;
+        }
+      }
+    erroraction:
+      USAGE_ERROR_ACTION(fm, p);
+    postaction:
+      POSTACTION(fm);
+    }
+  }
+}
+
+void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
+  void* mem;
+  size_t req = 0;
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+    return 0;
+  }
+  if (n_elements != 0) {
+    req = n_elements * elem_size;
+    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
+        (req / n_elements != elem_size))
+      req = MAX_SIZE_T; /* force downstream failure on overflow */
+  }
+  mem = internal_malloc(ms, req);
+  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
+    memset(mem, 0, req);
+  return mem;
+}
+
+void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
+  if (oldmem == 0)
+    return mspace_malloc(msp, bytes);
+#ifdef REALLOC_ZERO_BYTES_FREES
+  if (bytes == 0) {
+    mspace_free(msp, oldmem);
+    return 0;
+  }
+#endif /* REALLOC_ZERO_BYTES_FREES */
+  else {
+#if FOOTERS
+    mchunkptr p  = mem2chunk(oldmem);
+    mstate ms = get_mstate_for(p);
+#else /* FOOTERS */
+    mstate ms = (mstate)msp;
+#endif /* FOOTERS */
+    if (!ok_magic(ms)) {
+      USAGE_ERROR_ACTION(ms,ms);
+      return 0;
+    }
+    return internal_realloc(ms, oldmem, bytes);
+  }
+}
+
+void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+    return 0;
+  }
+  return internal_memalign(ms, alignment, bytes);
+}
+
+void** mspace_independent_calloc(mspace msp, size_t n_elements,
+                                 size_t elem_size, void* chunks[]) {
+  size_t sz = elem_size; /* serves as 1-element array */
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+    return 0;
+  }
+  return ialloc(ms, n_elements, &sz, 3, chunks);
+}
+
+void** mspace_independent_comalloc(mspace msp, size_t n_elements,
+                                   size_t sizes[], void* chunks[]) {
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+    return 0;
+  }
+  return ialloc(ms, n_elements, sizes, 0, chunks);
+}
+
+int mspace_trim(mspace msp, size_t pad) {
+  int result = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    if (!PREACTION(ms)) {
+      result = sys_trim(ms, pad);
+      POSTACTION(ms);
+    }
+  }
+  else {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return result;
+}
+
+void mspace_malloc_stats(mspace msp) {
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    internal_malloc_stats(ms);
+  }
+  else {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+}
+
+size_t mspace_footprint(mspace msp) {
+  size_t result = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    result = ms->footprint;
+  }
+  else USAGE_ERROR_ACTION(ms,ms);
+  return result;
+}
+
+
+size_t mspace_max_footprint(mspace msp) {
+  size_t result = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    result = ms->max_footprint;
+  }
+  else USAGE_ERROR_ACTION(ms,ms);
+  return result;
+}
+
+
+#if !NO_MALLINFO
+struct mallinfo mspace_mallinfo(mspace msp) {
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return internal_mallinfo(ms);
+}
+#endif /* NO_MALLINFO */
+
+int mspace_mallopt(int param_number, int value) {
+  return change_mparam(param_number, value);
+}
+
+#endif /* MSPACES */
+
+/* -------------------- Alternative MORECORE functions ------------------- */
+
+/*
+  Guidelines for creating a custom version of MORECORE:
+
+  * For best performance, MORECORE should allocate in multiples of pagesize.
+  * MORECORE may allocate more memory than requested. (Or even less,
+      but this will usually result in a malloc failure.)
+  * MORECORE must not allocate memory when given argument zero, but
+      instead return one past the end address of memory from previous
+      nonzero call.
+  * For best performance, consecutive calls to MORECORE with positive
+      arguments should return increasing addresses, indicating that
+      space has been contiguously extended.
+  * Even though consecutive calls to MORECORE need not return contiguous
+      addresses, it must be OK for malloc'ed chunks to span multiple
+      regions in those cases where they do happen to be contiguous.
+  * MORECORE need not handle negative arguments -- it may instead
+      just return MFAIL when given negative arguments.
+      Negative arguments are always multiples of pagesize. MORECORE
+      must not misinterpret negative args as large positive unsigned
+      args. You can suppress all such calls from even occurring by defining
+      MORECORE_CANNOT_TRIM.
+
+  As an example alternative MORECORE, here is a custom allocator
+  kindly contributed for pre-OSX macOS.  It uses virtually but not
+  necessarily physically contiguous non-paged memory (locked in,
+  present and won't get swapped out).  You can use it by uncommenting
+  this section, adding some #includes, and setting up the appropriate
+  defines above:
+
+      #define MORECORE osMoreCore
+
+  There is also a shutdown routine that should somehow be called for
+  cleanup upon program exit.
+
+  #define MAX_POOL_ENTRIES 100
+  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
+  static int next_os_pool;
+  void *our_os_pools[MAX_POOL_ENTRIES];
+
+  void *osMoreCore(int size)
+  {
+    void *ptr = 0;
+    static void *sbrk_top = 0;
+
+    if (size > 0)
+    {
+      if (size < MINIMUM_MORECORE_SIZE)
+         size = MINIMUM_MORECORE_SIZE;
+      if (CurrentExecutionLevel() == kTaskLevel)
+         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
+      if (ptr == 0)
+      {
+        return (void *) MFAIL;
+      }
+      // save ptrs so they can be freed during cleanup
+      our_os_pools[next_os_pool] = ptr;
+      next_os_pool++;
+      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
+      sbrk_top = (char *) ptr + size;
+      return ptr;
+    }
+    else if (size < 0)
+    {
+      // we don't currently support shrink behavior
+      return (void *) MFAIL;
+    }
+    else
+    {
+      return sbrk_top;
+    }
+  }
+
+  // cleanup any allocated memory pools
+  // called as last thing before shutting down driver
+
+  void osCleanupMem(void)
+  {
+    void **ptr;
+
+    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
+      if (*ptr)
+      {
+         PoolDeallocate(*ptr);
+         *ptr = 0;
+      }
+  }
+
+*/
+
+
+/* -----------------------------------------------------------------------
+History:
+    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
+      * Add max_footprint functions
+      * Ensure all appropriate literals are size_t
+      * Fix conditional compilation problem for some #define settings
+      * Avoid concatenating segments with the one provided
+        in create_mspace_with_base
+      * Rename some variables to avoid compiler shadowing warnings
+      * Use explicit lock initialization.
+      * Better handling of sbrk interference.
+      * Simplify and fix segment insertion, trimming and mspace_destroy
+      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
+      * Thanks especially to Dennis Flanagan for help on these.
+
+    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
+      * Fix memalign brace error.
+
+    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
+      * Fix improper #endif nesting in C++
+      * Add explicit casts needed for C++
+
+    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
+      * Use trees for large bins
+      * Support mspaces
+      * Use segments to unify sbrk-based and mmap-based system allocation,
+        removing need for emulation on most platforms without sbrk.
+      * Default safety checks
+      * Optional footer checks. Thanks to William Robertson for the idea.
+      * Internal code refactoring
+      * Incorporate suggestions and platform-specific changes.
+        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
+        Aaron Bachmann,  Emery Berger, and others.
+      * Speed up non-fastbin processing enough to remove fastbins.
+      * Remove useless cfree() to avoid conflicts with other apps.
+      * Remove internal memcpy, memset. Compilers handle builtins better.
+      * Remove some options that no one ever used and rename others.
+
+    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
+      * Fix malloc_state bitmap array misdeclaration
+
+    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
+      * Allow tuning of FIRST_SORTED_BIN_SIZE
+      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
+      * Better detection and support for non-contiguousness of MORECORE.
+        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
+      * Bypass most of malloc if no frees. Thanks To Emery Berger.
+      * Fix freeing of old top non-contiguous chunk in sysmalloc.
+      * Raised default trim and map thresholds to 256K.
+      * Fix mmap-related #defines. Thanks to Lubos Lunak.
+      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
+      * Branch-free bin calculation
+      * Default trim and mmap thresholds now 256K.
+
+    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
+      * Introduce independent_comalloc and independent_calloc.
+        Thanks to Michael Pachos for motivation and help.
+      * Make optional .h file available
+      * Allow > 2GB requests on 32bit systems.
+      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
+        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
+        and Anonymous.
+      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
+        helping test this.)
+      * memalign: check alignment arg
+      * realloc: don't try to shift chunks backwards, since this
+        leads to  more fragmentation in some programs and doesn't
+        seem to help in any others.
+      * Collect all cases in malloc requiring system memory into sysmalloc
+      * Use mmap as backup to sbrk
+      * Place all internal state in malloc_state
+      * Introduce fastbins (although similar to 2.5.1)
+      * Many minor tunings and cosmetic improvements
+      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
+      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
+        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
+      * Include errno.h to support default failure action.
+
+    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
+      * return null for negative arguments
+      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
+         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
+          (e.g. WIN32 platforms)
+         * Cleanup header file inclusion for WIN32 platforms
+         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
+         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
+           memory allocation routines
+         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
+         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
+           usage of 'assert' in non-WIN32 code
+         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
+           avoid infinite loop
+      * Always call 'fREe()' rather than 'free()'
+
+    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
+      * Fixed ordering problem with boundary-stamping
+
+    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
+      * Added pvalloc, as recommended by H.J. Liu
+      * Added 64bit pointer support mainly from Wolfram Gloger
+      * Added anonymously donated WIN32 sbrk emulation
+      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
+      * malloc_extend_top: fix mask error that caused wastage after
+        foreign sbrks
+      * Add linux mremap support code from HJ Liu
+
+    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
+      * Integrated most documentation with the code.
+      * Add support for mmap, with help from
+        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
+      * Use last_remainder in more cases.
+      * Pack bins using idea from  colin@nyx10.cs.du.edu
+      * Use ordered bins instead of best-fit threshold
+      * Eliminate block-local decls to simplify tracing and debugging.
+      * Support another case of realloc via move into top
+      * Fix error occurring when initial sbrk_base not word-aligned.
+      * Rely on page size for units instead of SBRK_UNIT to
+        avoid surprises about sbrk alignment conventions.
+      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
+        (raymond@es.ele.tue.nl) for the suggestion.
+      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
+      * More precautions for cases where other routines call sbrk,
+        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
+      * Added macros etc., allowing use in linux libc from
+        H.J. Lu (hjl@gnu.ai.mit.edu)
+      * Inverted this history list
+
+    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
+      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
+      * Removed all preallocation code since under current scheme
+        the work required to undo bad preallocations exceeds
+        the work saved in good cases for most test programs.
+      * No longer use return list or unconsolidated bins since
+        no scheme using them consistently outperforms those that don't
+        given above changes.
+      * Use best fit for very large chunks to prevent some worst-cases.
+      * Added some support for debugging
+
+    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
+      * Removed footers when chunks are in use. Thanks to
+        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
+
+    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
+      * Added malloc_trim, with help from Wolfram Gloger
+        (wmglo@Dent.MED.Uni-Muenchen.DE).
+
+    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
+
+    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
+      * realloc: try to expand in both directions
+      * malloc: swap order of clean-bin strategy;
+      * realloc: only conditionally expand backwards
+      * Try not to scavenge used bins
+      * Use bin counts as a guide to preallocation
+      * Occasionally bin return list chunks in first scan
+      * Add a few optimizations from colin@nyx10.cs.du.edu
+
+    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
+      * faster bin computation & slightly different binning
+      * merged all consolidations to one part of malloc proper
+         (eliminating old malloc_find_space & malloc_clean_bin)
+      * Scan 2 returns chunks (not just 1)
+      * Propagate failure in realloc if malloc returns 0
+      * Add stuff to allow compilation on non-ANSI compilers
+          from kpv@research.att.com
+
+    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
+      * removed potential for odd address access in prev_chunk
+      * removed dependency on getpagesize.h
+      * misc cosmetics and a bit more internal documentation
+      * anticosmetics: mangled names in macros to evade debugger strangeness
+      * tested on sparc, hp-700, dec-mips, rs6000
+          with gcc & native cc (hp, dec only) allowing
+          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
+
+    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
+      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
+         structure of old version,  but most details differ.)
+ 
+*/

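In case it helps review, here is a minimal sketch of the mspace
interface the imported file provides (not part of the patch; it
assumes dlmalloc.c is compiled with MSPACES defined and that the
companion malloc-2.8.3.h header is available):

#include <stdio.h>
#include "malloc-2.8.3.h"	/* declares the mspace functions */

int
main (void)
{
  /* A private heap, independent of the C library's malloc.  */
  mspace ms = create_mspace (0, 0);	/* default capacity, no locking */
  void *p = mspace_malloc (ms, 128);

  printf ("footprint after one allocation: %zu bytes\n",
	  mspace_footprint (ms));

  mspace_free (ms, p);
  destroy_mspace (ms);		/* releases all segments at once */
  return 0;
}
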
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #3: libffi-closure-prot-exec.patch --]
[-- Type: text/x-patch, Size: 78986 bytes --]

for  libffi/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* include/ffi.h.in (ffi_closure_alloc, ffi_closure_free): New.
	(ffi_prep_closure_loc): New.
	(ffi_prep_raw_closure_loc): New.
	(ffi_prep_java_raw_closure_loc): New.
	* src/closures.c: New file.
	* src/dlmalloc.c [FFI_MMAP_EXEC_WRIT] (struct malloc_segment):
	Replace sflags with exec_offset.
	[FFI_MMAP_EXEC_WRIT] (mmap_exec_offset, add_segment_exec_offset,
	sub_segment_exec_offset): New macros.
	(get_segment_flags, set_segment_flags, check_segment_merge): New
	macros.
	(is_mmapped_segment, is_extern_segment): Use get_segment_flags.
	(add_segment, sys_alloc, create_mspace, create_mspace_with_base,
	destroy_mspace): Use new macros.
	(sys_alloc): Silence warning.
	* Makefile.am (libffi_la_SOURCES): Add src/closures.c.
	* Makefile.in: Rebuilt.
	* src/prep_cif.c [FFI_CLOSURES] (ffi_prep_closure): Implement in
	terms of ffi_prep_closure_loc.
	* src/raw_api.c (ffi_prep_raw_closure_loc): Renamed and adjusted
	from...
	(ffi_prep_raw_closure): ... this.  Re-implement in terms of the
	renamed version.
	* src/java_raw_api.c (ffi_prep_java_raw_closure_loc): Renamed and
	adjusted from...
	(ffi_prep_java_raw_closure): ... this.  Re-implement in terms of
	the renamed version.
	* src/alpha/ffi.c (ffi_prep_closure_loc): Renamed from
	(ffi_prep_closure): ... this.
	* src/pa/ffi.c: Likewise.
	* src/cris/ffi.c: Likewise.  Adjust.
	* src/frv/ffi.c: Likewise.
	* src/ia64/ffi.c: Likewise.
	* src/mips/ffi.c: Likewise.
	* src/powerpc/ffi_darwin.c: Likewise.
	* src/s390/ffi.c: Likewise.
	* src/sh/ffi.c: Likewise.
	* src/sh64/ffi.c: Likewise.
	* src/sparc/ffi.c: Likewise.
	* src/x86/ffi64.c: Likewise.
	* src/x86/ffi.c: Likewise.
	(FFI_INIT_TRAMPOLINE): Adjust.
	(ffi_prep_raw_closure_loc): Renamed and adjusted from...
	(ffi_prep_raw_closure): ... this.
	* src/powerpc/ffi.c (ffi_prep_closure_loc): Renamed from
	(ffi_prep_closure): ... this.
	(flush_icache): Adjust.
	
for  libjava/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* include/boehm-gc.h (_Jv_AllocClosure, _Jv_FreeClosure): New.
	* boehm.cc: Likewise.
	* nogc.cc: Likewise.
	* java/lang/Class.h (class _Jv_ClosureList): New.
	(class java::lang::Class): Declare _Jv_ClosureList as friend.
	* java/lang/natClass.cc (java::lang::Class::finalize): Release
	closures associated with the class.
	(_Jv_ClosureList::releaseClosures): New.
	(_Jv_ClosureList::registerClosure): New.
	* include/execution.h (_Jv_ExecutionEngine): Add get_closure_list.
	(_Jv_CompiledEngine::do_get_closure_list): New.
	(_Jv_CompiledEngine::_Jv_CompiledEngine): Use it.
	(_Jv_IndirectCompiledClass): Add closures.
	(_Jv_IndirectCompiledEngine::get_aux_info): New.
	(_Jv_IndirectCompiledEngine::do_allocate_field_initializers): Use
	it.
	(_Jv_IndirectCompiledEngine::do_get_closure_list): New.
	(_Jv_IndirectCompiledEngine::_Jv_IndirectCompiledEngine): Use it.
	(_Jv_InterpreterEngine::do_get_closure_list): Declare.
	(_Jv_InterpreterEngine::_Jv_InterpreterEngine): Use it.
	* interpret.cc (FFI_PREP_RAW_CLOSURE): Use _loc variants.
	(node_closure): Add closure list.
	(_Jv_InterpMethod::ncode): Add jclass argument.  Use
	_Jv_AllocClosure and the separate code pointer.  Register the
	closure for finalization.
	(_Jv_JNIMethod::ncode): Likewise.
	(_Jv_InterpreterEngine::do_create_ncode): Pass klass to ncode.
	(_Jv_InterpreterEngine::do_get_closure_list): New.
	* include/java-interp.h (_Jv_InterpMethod::ncode): Adjust.
	(_Jv_InterpClass): Add closures field.
	(_Jv_JNIMethod::ncode): Adjust.
	* defineclass.cc (_Jv_ClassReader::handleCodeAttribute): Adjust.
	(_Jv_ClassReader::handleMethodsEnd): Likewise.
	* link.cc (struct method_closure): Add closure list.
	(_Jv_Linker::create_error_method): Add jclass argument.  Use
	_Jv_AllocClosure and the separate code pointer.  Register the
	closure for finalization.
	(_Jv_Linker::link_symbol_table): Remove outdated comment about
	sharing of otable and atable.  Adjust.
	* include/jvm.h (_Jv_Linker::create_error_method): Adjust.
	* java/lang/reflect/natVMProxy.cc (ncode_closure): Add closure
	list.
	(ncode): Add jclass argument.  Use _Jv_AllocClosure and the
	separate code pointer.  Register the closure for finalization.
	(java::lang::reflect::VMProxy::generateProxyClass): Adjust.

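To make the fallback easier to review in isolation, this is roughly
what the double mapping boils down to, stripped of the allocator
machinery (a standalone sketch, not part of the patch; error paths
trimmed):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int
main (void)
{
  size_t pagesize = (size_t) sysconf (_SC_PAGESIZE);
  char name[] = "/tmp/ffiXXXXXX";
  int fd = mkstemp (name);

  if (fd == -1)
    return 1;
  unlink (name);	/* keep only the descriptor, no name on disk */
  if (ftruncate (fd, pagesize))
    return 1;

  /* Map the same file offset twice: one view writable, the other
     executable.  No page is ever writable and executable at once.  */
  char *writable = mmap (NULL, pagesize, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
  char *executable = mmap (NULL, pagesize, PROT_READ | PROT_EXEC,
			   MAP_SHARED, fd, 0);
  if (writable == MAP_FAILED || executable == MAP_FAILED)
    return 1;

  /* A store through the writable view becomes visible through the
     executable one.  */
  writable[0] = (char) 0xc3;	/* x86 `ret' */
  printf ("byte seen through the executable view: %#x\n",
	  (unsigned char) executable[0]);
  return 0;
}

The patch's dlmmap_locked does essentially this, plus the bookkeeping
needed to hand dlmalloc the writable view while recording the offset
to the executable one.
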
Index: libffi/include/ffi.h.in
===================================================================
--- libffi/include/ffi.h.in.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/include/ffi.h.in	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------*-C-*-
-   libffi @VERSION@ - Copyright (c) 1996-2003  Red Hat, Inc.
+   libffi @VERSION@ - Copyright (c) 1996-2003, 2007  Red Hat, Inc.
 
    Permission is hereby granted, free of charge, to any person obtaining
    a copy of this software and associated documentation files (the
@@ -228,12 +228,22 @@ typedef struct {
   void      *user_data;
 } ffi_closure __attribute__((aligned (8)));
 
+void *ffi_closure_alloc (size_t size, void **code);
+void ffi_closure_free (void *);
+
 ffi_status
 ffi_prep_closure (ffi_closure*,
 		  ffi_cif *,
 		  void (*fun)(ffi_cif*,void*,void**,void*),
 		  void *user_data);
 
+ffi_status
+ffi_prep_closure_loc (ffi_closure*,
+		      ffi_cif *,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void*codeloc);
+
 typedef struct {
   char tramp[FFI_TRAMPOLINE_SIZE];
 
@@ -262,11 +272,25 @@ ffi_prep_raw_closure (ffi_raw_closure*,
 		      void *user_data);
 
 ffi_status
+ffi_prep_raw_closure_loc (ffi_raw_closure*,
+			  ffi_cif *cif,
+			  void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			  void *user_data,
+			  void *codeloc);
+
+ffi_status
 ffi_prep_java_raw_closure (ffi_raw_closure*,
 		           ffi_cif *cif,
 		           void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
 		           void *user_data);
 
+ffi_status
+ffi_prep_java_raw_closure_loc (ffi_raw_closure*,
+			       ffi_cif *cif,
+			       void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			       void *user_data,
+			       void *codeloc);
+
 #endif /* FFI_CLOSURES */
 
 /* ---- Public interface definition -------------------------------------- */
Index: libffi/src/closures.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ libffi/src/closures.c	2007-01-18 04:03:40.000000000 -0200
@@ -0,0 +1,513 @@
+/* -----------------------------------------------------------------------
+   closures.c - Copyright (c) 2007  Red Hat, Inc.
+
+   Code to allocate and deallocate memory for closures.
+
+   Permission is hereby granted, free of charge, to any person obtaining
+   a copy of this software and associated documentation files (the
+   ``Software''), to deal in the Software without restriction, including
+   without limitation the rights to use, copy, modify, merge, publish,
+   distribute, sublicense, and/or sell copies of the Software, and to
+   permit persons to whom the Software is furnished to do so, subject to
+   the following conditions:
+
+   The above copyright notice and this permission notice shall be included
+   in all copies or substantial portions of the Software.
+
+   THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS
+   OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+   MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+   IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+   OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+   ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+   OTHER DEALINGS IN THE SOFTWARE.
+   ----------------------------------------------------------------------- */
+
+#include <ffi.h>
+#include <ffi_common.h>
+
+#ifndef FFI_MMAP_EXEC_WRIT
+# if __gnu_linux__
+/* This macro indicates it may be forbidden to map anonymous memory
+   with both write and execute permission.  Code compiled when this
+   option is defined will attempt to map such pages once, but if it
+   fails, it falls back to creating a temporary file in a writable and
+   executable filesystem and mapping pages from it into separate
+   locations in the virtual memory space, one location writable and
+   another executable.  */
+#  define FFI_MMAP_EXEC_WRIT 1
+# endif
+#endif
+
+#if FFI_CLOSURES
+
+# if FFI_MMAP_EXEC_WRIT
+
+#define USE_LOCKS 1
+#define USE_DL_PREFIX 1
+#define USE_BUILTIN_FFS 1
+
+/* We need to use mmap, not sbrk.  */
+#define HAVE_MORECORE 0
+
+/* We could, in theory, support mremap, but it wouldn't buy us anything.  */
+#define HAVE_MREMAP 0
+
+/* We have no use for this, so save some code and data.  */
+#define NO_MALLINFO 1
+
+/* We need all allocations to be in regular segments, otherwise we
+   lose track of the corresponding code address.  */
+#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
+
+/* Don't allocate more than a page unless needed.  */
+#define DEFAULT_GRANULARITY ((size_t)malloc_getpagesize)
+
+#if FFI_CLOSURE_TEST
+/* Don't release single pages, to avoid a worst-case scenario of
+   continuously allocating and releasing single pages, but release
+   pairs of pages, which should do just as well given that allocations
+   are likely to be small.  */
+#define DEFAULT_TRIM_THRESHOLD ((size_t)malloc_getpagesize)
+#endif
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdio.h>
+#include <mntent.h>
+#include <sys/param.h>
+#include <pthread.h>
+
+/* We don't want sys/mman.h to be included after we redefine mmap and
+   dlmunmap.  */
+#include <sys/mman.h>
+#define LACKS_SYS_MMAN_H 1
+
+#define MAYBE_UNUSED __attribute__((__unused__))
+
+/* Declare all functions defined in dlmalloc.c as static.  */
+static void *dlmalloc(size_t);
+static void dlfree(void*);
+static void *dlcalloc(size_t, size_t) MAYBE_UNUSED;
+static void *dlrealloc(void *, size_t) MAYBE_UNUSED;
+static void *dlmemalign(size_t, size_t) MAYBE_UNUSED;
+static void *dlvalloc(size_t) MAYBE_UNUSED;
+static int dlmallopt(int, int) MAYBE_UNUSED;
+static size_t dlmalloc_footprint(void) MAYBE_UNUSED;
+static size_t dlmalloc_max_footprint(void) MAYBE_UNUSED;
+static void** dlindependent_calloc(size_t, size_t, void**) MAYBE_UNUSED;
+static void** dlindependent_comalloc(size_t, size_t*, void**) MAYBE_UNUSED;
+static void *dlpvalloc(size_t) MAYBE_UNUSED;
+static int dlmalloc_trim(size_t) MAYBE_UNUSED;
+static size_t dlmalloc_usable_size(void*) MAYBE_UNUSED;
+static void dlmalloc_stats(void) MAYBE_UNUSED;
+
+/* Use these for mmap and munmap within dlmalloc.c.  */
+static void *dlmmap(void *, size_t, int, int, int, off_t);
+static int dlmunmap(void *, size_t);
+
+#define mmap dlmmap
+#define munmap dlmunmap
+
+#include "dlmalloc.c"
+
+#undef mmap
+#undef munmap
+
+/* A mutex used to synchronize access to *exec* variables in this file.  */
+static pthread_mutex_t open_temp_exec_file_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+/* A file descriptor of a temporary file from which we'll map
+   executable pages.  */
+static int execfd = -1;
+
+/* The amount of space already allocated from the temporary file.  */
+static size_t execsize = 0;
+
+/* Open a temporary file name, and immediately unlink it.  */
+static int
+open_temp_exec_file_name (char *name)
+{
+  int fd = mkstemp (name);
+
+  if (fd != -1)
+    unlink (name);
+
+  return fd;
+}
+
+/* Open a temporary file in the named directory.  */
+static int
+open_temp_exec_file_dir (const char *dir)
+{
+  static const char suffix[] = "/ffiXXXXXX";
+  int lendir = strlen (dir);
+  char *tempname = __builtin_alloca (lendir + sizeof (suffix));
+
+  if (!tempname)
+    return -1;
+
+  memcpy (tempname, dir, lendir);
+  memcpy (tempname + lendir, suffix, sizeof (suffix));
+
+  return open_temp_exec_file_name (tempname);
+}
+
+/* Open a temporary file in the directory in the named environment
+   variable.  */
+static int
+open_temp_exec_file_env (const char *envvar)
+{
+  const char *value = getenv (envvar);
+
+  if (!value)
+    return -1;
+
+  return open_temp_exec_file_dir (value);
+}
+
+/* Open a temporary file in an executable and writable mount point
+   listed in the mounts file.  Subsequent calls with the same mounts
+   keep searching for mount points in the same file.  Providing NULL
+   as the mounts file closes the file.  */
+static int
+open_temp_exec_file_mnt (const char *mounts)
+{
+  static const char *last_mounts;
+  static FILE *last_mntent;
+
+  if (mounts != last_mounts)
+    {
+      if (last_mntent)
+	endmntent (last_mntent);
+
+      last_mounts = mounts;
+
+      if (mounts)
+	last_mntent = setmntent (mounts, "r");
+      else
+	last_mntent = NULL;
+    }
+
+  if (!last_mntent)
+    return -1;
+
+  for (;;)
+    {
+      int fd;
+      struct mntent mnt;
+      char buf[MAXPATHLEN * 3];
+
+      if (getmntent_r (last_mntent, &mnt, buf, sizeof (buf)) == NULL)
+	return -1;
+
+      if (hasmntopt (&mnt, "ro")
+	  || hasmntopt (&mnt, "noexec")
+	  || access (mnt.mnt_dir, W_OK))
+	continue;
+
+      fd = open_temp_exec_file_dir (mnt.mnt_dir);
+
+      if (fd != -1)
+	return fd;
+    }
+}
+
+/* Instructions to look for a location to hold a temporary file that
+   can be mapped in for execution.  */
+static struct
+{
+  int (*func)(const char *);
+  const char *arg;
+  int repeat;
+} open_temp_exec_file_opts[] = {
+  { open_temp_exec_file_env, "TMPDIR", 0 },
+  { open_temp_exec_file_dir, "/tmp", 0 },
+  { open_temp_exec_file_dir, "/var/tmp", 0 },
+  { open_temp_exec_file_dir, "/dev/shm", 0 },
+  { open_temp_exec_file_env, "HOME", 0 },
+  { open_temp_exec_file_mnt, "/etc/mtab", 1 },
+  { open_temp_exec_file_mnt, "/proc/mounts", 1 },
+};
+
+/* Current index into open_temp_exec_file_opts.  */
+static int open_temp_exec_file_opts_idx = 0;
+
+/* Reset the current multi-call function if needed, then advance to
+   the next entry.  If we were at the last one, wrap around to the
+   first and return nonzero; otherwise return zero.  */
+static int
+open_temp_exec_file_opts_next (void)
+{
+  if (open_temp_exec_file_opts[open_temp_exec_file_opts_idx].repeat)
+    open_temp_exec_file_opts[open_temp_exec_file_opts_idx].func (NULL);
+
+  open_temp_exec_file_opts_idx++;
+  if (open_temp_exec_file_opts_idx
+      == (sizeof (open_temp_exec_file_opts)
+	  / sizeof (*open_temp_exec_file_opts)))
+    {
+      open_temp_exec_file_opts_idx = 0;
+      return 1;
+    }
+
+  return 0;
+}
+
+/* Return a file descriptor of a temporary zero-sized file in a
+   writable and executable filesystem.  */
+static int
+open_temp_exec_file (void)
+{
+  int fd;
+
+  do
+    {
+      fd = open_temp_exec_file_opts[open_temp_exec_file_opts_idx].func
+	(open_temp_exec_file_opts[open_temp_exec_file_opts_idx].arg);
+
+      if (!open_temp_exec_file_opts[open_temp_exec_file_opts_idx].repeat
+	  || fd == -1)
+	{
+	  if (open_temp_exec_file_opts_next ())
+	    break;
+	}
+    }
+  while (fd == -1);
+
+  return fd;
+}
+
+/* Map in a chunk of memory from the temporary exec file into separate
+   locations in the virtual memory address space, one writable and one
+   executable.  Returns the address of the writable portion, after
+   storing an offset to the corresponding executable portion at the
+   last word of the requested chunk.  */
+static void *
+dlmmap_locked (void *start, size_t length, int prot, int flags, off_t offset)
+{
+  void *ptr;
+
+  if (execfd == -1)
+    {
+      open_temp_exec_file_opts_idx = 0;
+    retry_open:
+      execfd = open_temp_exec_file ();
+      if (execfd == -1)
+	return MFAIL;
+    }
+
+  offset = execsize;
+
+  if (ftruncate (execfd, offset + length))
+    return MFAIL;
+
+  flags &= ~(MAP_PRIVATE | MAP_ANONYMOUS);
+  flags |= MAP_SHARED;
+
+  ptr = mmap (NULL, length, (prot & ~PROT_WRITE) | PROT_EXEC,
+	      flags, execfd, offset);
+  if (ptr == MFAIL)
+    {
+      if (!offset)
+	{
+	  close (execfd);
+	  goto retry_open;
+	}
+      ftruncate (execfd, offset);
+      return MFAIL;
+    }
+  else if (!offset
+	   && open_temp_exec_file_opts[open_temp_exec_file_opts_idx].repeat)
+    open_temp_exec_file_opts_next ();
+
+  start = mmap (start, length, prot, flags, execfd, offset);
+
+  if (start == MFAIL)
+    {
+      munmap (ptr, length);
+      ftruncate (execfd, offset);
+      return start;
+    }
+
+  mmap_exec_offset ((char *)start, length) = (char*)ptr - (char*)start;
+
+  execsize += length;
+
+  return start;
+}
+
+/* Map in a writable and executable chunk of memory if possible.
+   Failing that, fall back to dlmmap_locked.  */
+static void *
+dlmmap (void *start, size_t length, int prot,
+	int flags, int fd, off_t offset)
+{
+  void *ptr;
+
+  assert (start == NULL && length % malloc_getpagesize == 0
+	  && prot == (PROT_READ | PROT_WRITE)
+	  && flags == (MAP_PRIVATE | MAP_ANONYMOUS)
+	  && fd == -1 && offset == 0);
+
+#if FFI_CLOSURE_TEST
+  printf ("mapping in %zi\n", length);
+#endif
+
+  if (execfd == -1)
+    {
+      ptr = mmap (start, length, prot | PROT_EXEC, flags, fd, offset);
+
+      if (ptr != MFAIL || (errno != EPERM && errno != EACCES))
+	/* Cool, no need to mess with separate segments.  */
+	return ptr;
+
+      /* If MREMAP_DUP is ever introduced and implemented, try mmap
+	 with ((prot & ~PROT_WRITE) | PROT_EXEC) and mremap with
+	 MREMAP_DUP and prot at this point.  */
+    }
+
+  if (execsize == 0 || execfd == -1)
+    {
+      pthread_mutex_lock (&open_temp_exec_file_mutex);
+      ptr = dlmmap_locked (start, length, prot, flags, offset);
+      pthread_mutex_unlock (&open_temp_exec_file_mutex);
+
+      return ptr;
+    }
+
+  return dlmmap_locked (start, length, prot, flags, offset);
+}
+
+/* Release memory at the given address, as well as the corresponding
+   executable page if it's separate.  */
+static int
+dlmunmap (void *start, size_t length)
+{
+  /* We don't bother decreasing execsize or truncating the file, since
+     we can't quite tell whether we're unmapping the end of the file.
+     We don't expect frequent deallocation anyway.  If we did, we
+     could locate pages in the file by writing to the pages being
+     deallocated and checking that the file contents change.
+     Yuck.  */
+  msegmentptr seg = segment_holding (gm, start);
+  void *code;
+
+#if FFI_CLOSURE_TEST
+  printf ("unmapping %zi\n", length);
+#endif
+
+  if (seg && (code = add_segment_exec_offset (start, seg)) != start)
+    {
+      int ret = munmap (code, length);
+      if (ret)
+	return ret;
+    }
+
+  return munmap (start, length);
+}
+
+#if FFI_CLOSURE_FREE_CODE
+/* Return segment holding given code address.  */
+static msegmentptr
+segment_holding_code (mstate m, char* addr)
+{
+  msegmentptr sp = &m->seg;
+  for (;;) {
+    if (addr >= add_segment_exec_offset (sp->base, sp)
+	&& addr < add_segment_exec_offset (sp->base, sp) + sp->size)
+      return sp;
+    if ((sp = sp->next) == 0)
+      return 0;
+  }
+}
+#endif
+
+/* Allocate a chunk of memory with the given size.  Returns a pointer
+   to the writable address, and sets *CODE to the executable
+   corresponding virtual address.  */
+void *
+ffi_closure_alloc (size_t size, void **code)
+{
+  void *ptr;
+
+  if (!code)
+    return NULL;
+
+  ptr = dlmalloc (size);
+
+  if (ptr)
+    {
+      msegmentptr seg = segment_holding (gm, ptr);
+
+      *code = add_segment_exec_offset (ptr, seg);
+    }
+
+  return ptr;
+}
+
+/* Release a chunk of memory allocated with ffi_closure_alloc.  If
+   FFI_CLOSURE_FREE_CODE is nonzero, the argument may be either the
+   writable or the executable address of the chunk.  Otherwise, only
+   the writable address may be passed here.  */
+void
+ffi_closure_free (void *ptr)
+{
+#if FFI_CLOSURE_FREE_CODE
+  msegmentptr seg = segment_holding_code (gm, ptr);
+
+  if (seg)
+    ptr = sub_segment_exec_offset (ptr, seg);
+#endif
+
+  dlfree (ptr);
+}
+
+
+#if FFI_CLOSURE_TEST
+/* Do some internal sanity testing to make sure allocation and
+   deallocation of pages are working as intended.  */
+int main ()
+{
+  void *p[3];
+#define GET(idx, len) do { p[idx] = dlmalloc (len); printf ("allocated %zi for p[%i]\n", (len), (idx)); } while (0)
+#define PUT(idx) do { printf ("freeing p[%i]\n", (idx)); dlfree (p[idx]); } while (0)
+  GET (0, malloc_getpagesize / 2);
+  GET (1, 2 * malloc_getpagesize - 64 * sizeof (void*));
+  PUT (1);
+  GET (1, 2 * malloc_getpagesize);
+  GET (2, malloc_getpagesize / 2);
+  PUT (1);
+  PUT (0);
+  PUT (2);
+  return 0;
+}
+#endif /* FFI_CLOSURE_TEST */
+# else /* ! FFI_MMAP_EXEC_WRIT */
+
+/* On many systems, memory returned by malloc is writable and
+   executable, so just use it.  */
+
+#include <stdlib.h>
+
+void *
+ffi_closure_alloc (size_t size, void **code)
+{
+  if (!code)
+    return NULL;
+
+  return *code = malloc (size);
+}
+
+void
+ffi_closure_free (void *ptr)
+{
+  free (ptr);
+}
+
+# endif /* ! FFI_MMAP_EXEC_WRIT */
+#endif /* FFI_CLOSURES */
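
And for completeness, here is how a client of the new entry points is
expected to use them (an illustrative sketch only; the add_ints
callback and its signature are made up for the example):

#include <stdio.h>
#include <ffi.h>

/* Toy callback: add the two int arguments.  */
static void
add_ints (ffi_cif *cif, void *ret, void **args, void *user_data)
{
  *(ffi_arg *) ret = *(int *) args[0] + *(int *) args[1];
}

int
main (void)
{
  ffi_cif cif;
  ffi_type *arg_types[2] = { &ffi_type_sint, &ffi_type_sint };
  void *code;			/* executable address */
  ffi_closure *closure		/* writable address */
    = ffi_closure_alloc (sizeof (ffi_closure), &code);

  if (closure
      && ffi_prep_cif (&cif, FFI_DEFAULT_ABI, 2,
		       &ffi_type_sint, arg_types) == FFI_OK
      && ffi_prep_closure_loc (closure, &cif, add_ints,
			       NULL, code) == FFI_OK)
    {
      /* The closure is written at `closure' but called at `code'.  */
      int (*fn) (int, int) = (int (*)(int, int)) code;
      printf ("1 + 2 = %d\n", fn (1, 2));
    }

  ffi_closure_free (closure);
  return 0;
}
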
Index: libffi/src/dlmalloc.c
===================================================================
--- libffi/src/dlmalloc.c.orig	2007-01-17 22:33:01.000000000 -0200
+++ libffi/src/dlmalloc.c	2007-01-17 22:33:16.000000000 -0200
@@ -1879,11 +1879,47 @@ struct malloc_segment {
   char*        base;             /* base address */
   size_t       size;             /* allocated size */
   struct malloc_segment* next;   /* ptr to next segment */
+#if FFI_MMAP_EXEC_WRIT
+  /* The mmap magic is supposed to store the address of the executable
+     segment at the very end of the requested block.  */
+
+# define mmap_exec_offset(b,s) (*(ptrdiff_t*)((b)+(s)-sizeof(ptrdiff_t)))
+
+  /* We can only merge segments if their corresponding executable
+     segments are at identical offsets.  */
+# define check_segment_merge(S,b,s) \
+  (mmap_exec_offset((b),(s)) == (S)->exec_offset)
+
+# define add_segment_exec_offset(p,S) ((char*)(p) + (S)->exec_offset)
+# define sub_segment_exec_offset(p,S) ((char*)(p) - (S)->exec_offset)
+
+  /* The removal of sflags only works with HAVE_MORECORE == 0.  */
+
+# define get_segment_flags(S)   (IS_MMAPPED_BIT)
+# define set_segment_flags(S,v) \
+  (((v) != IS_MMAPPED_BIT) ? (ABORT, (v)) :				\
+   (((S)->exec_offset =							\
+     mmap_exec_offset((S)->base, (S)->size)),				\
+    (mmap_exec_offset((S)->base + (S)->exec_offset, (S)->size) !=	\
+     (S)->exec_offset) ? (ABORT, (v)) :					\
+   (mmap_exec_offset((S)->base, (S)->size) = 0), (v)))
+
+  /* We use an offset here, instead of a pointer, because then, when
+     base changes, we don't have to modify this.  On architectures
+     with segmented addresses, this might not work.  */
+  ptrdiff_t    exec_offset;
+#else
+
+# define get_segment_flags(S)   ((S)->sflags)
+# define set_segment_flags(S,v) ((S)->sflags = (v))
+# define check_segment_merge(S,b,s) (1)
+
   flag_t       sflags;           /* mmap and extern flag */
+#endif
 };
 
-#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
-#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
+#define is_mmapped_segment(S)  (get_segment_flags(S) & IS_MMAPPED_BIT)
+#define is_extern_segment(S)   (get_segment_flags(S) & EXTERN_BIT)
 
 typedef struct malloc_segment  msegment;
 typedef struct malloc_segment* msegmentptr;
@@ -3290,7 +3326,7 @@ static void add_segment(mstate m, char* 
   *ss = m->seg; /* Push current record */
   m->seg.base = tbase;
   m->seg.size = tsize;
-  m->seg.sflags = mmapped;
+  set_segment_flags(&m->seg, mmapped);
   m->seg.next = ss;
 
   /* Insert trailing fenceposts */
@@ -3393,7 +3429,7 @@ static void* sys_alloc(mstate m, size_t 
             if (end != CMFAIL)
               asize += esize;
             else {            /* Can't use; try to release */
-              CALL_MORECORE(-asize);
+              (void)CALL_MORECORE(-asize);
               br = CMFAIL;
             }
           }
@@ -3450,7 +3486,7 @@ static void* sys_alloc(mstate m, size_t 
     if (!is_initialized(m)) { /* first-time initialization */
       m->seg.base = m->least_addr = tbase;
       m->seg.size = tsize;
-      m->seg.sflags = mmap_flag;
+      set_segment_flags(&m->seg, mmap_flag);
       m->magic = mparams.magic;
       init_bins(m);
       if (is_global(m)) 
@@ -3469,7 +3505,8 @@ static void* sys_alloc(mstate m, size_t 
         sp = sp->next;
       if (sp != 0 &&
           !is_extern_segment(sp) &&
-          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
+	  check_segment_merge(sp, tbase, tsize) &&
+          (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag &&
           segment_holds(sp, m->top)) { /* append */
         sp->size += tsize;
         init_top(m, m->top, m->topsize + tsize);
@@ -3482,7 +3519,8 @@ static void* sys_alloc(mstate m, size_t 
           sp = sp->next;
         if (sp != 0 &&
             !is_extern_segment(sp) &&
-            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
+	    check_segment_merge(sp, tbase, tsize) &&
+            (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag) {
           char* oldbase = sp->base;
           sp->base = tbase;
           sp->size += tsize;
@@ -4397,7 +4435,7 @@ mspace create_mspace(size_t capacity, in
     char* tbase = (char*)(CALL_MMAP(tsize));
     if (tbase != CMFAIL) {
       m = init_user_mstate(tbase, tsize);
-      m->seg.sflags = IS_MMAPPED_BIT;
+      set_segment_flags(&m->seg, IS_MMAPPED_BIT);
       set_lock(m, locked);
     }
   }
@@ -4412,7 +4450,7 @@ mspace create_mspace_with_base(void* bas
   if (capacity > msize + TOP_FOOT_SIZE &&
       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
     m = init_user_mstate((char*)base, capacity);
-    m->seg.sflags = EXTERN_BIT;
+    set_segment_flags(&m->seg, EXTERN_BIT);
     set_lock(m, locked);
   }
   return (mspace)m;
@@ -4426,7 +4464,7 @@ size_t destroy_mspace(mspace msp) {
     while (sp != 0) {
       char* base = sp->base;
       size_t size = sp->size;
-      flag_t flag = sp->sflags;
+      flag_t flag = get_segment_flags(sp);
       sp = sp->next;
       if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
           CALL_MUNMAP(base, size) == 0)
Index: libffi/Makefile.am
===================================================================
--- libffi/Makefile.am.orig	2007-01-17 22:32:56.000000000 -0200
+++ libffi/Makefile.am	2007-01-17 22:33:16.000000000 -0200
@@ -78,7 +78,7 @@ toolexeclib_LTLIBRARIES = libffi.la
 noinst_LTLIBRARIES = libffi_convenience.la
 
 libffi_la_SOURCES = src/debug.c src/prep_cif.c src/types.c \
-		src/raw_api.c src/java_raw_api.c
+		src/raw_api.c src/java_raw_api.c src/closures.c
 
 nodist_libffi_la_SOURCES =
 
Index: libffi/Makefile.in
===================================================================
--- libffi/Makefile.in.orig	2007-01-17 22:32:56.000000000 -0200
+++ libffi/Makefile.in	2007-01-17 22:33:16.000000000 -0200
@@ -92,7 +92,7 @@ LTLIBRARIES = $(noinst_LTLIBRARIES) $(to
 libffi_la_LIBADD =
 am__dirstamp = $(am__leading_dot)dirstamp
 am_libffi_la_OBJECTS = src/debug.lo src/prep_cif.lo src/types.lo \
-	src/raw_api.lo src/java_raw_api.lo
+	src/raw_api.lo src/java_raw_api.lo src/closures.lo
 @MIPS_IRIX_TRUE@am__objects_1 = src/mips/ffi.lo src/mips/o32.lo \
 @MIPS_IRIX_TRUE@	src/mips/n32.lo
 @MIPS_LINUX_TRUE@am__objects_2 = src/mips/ffi.lo src/mips/o32.lo
@@ -141,7 +141,7 @@ libffi_la_OBJECTS = $(am_libffi_la_OBJEC
 	$(nodist_libffi_la_OBJECTS)
 libffi_convenience_la_LIBADD =
 am__objects_24 = src/debug.lo src/prep_cif.lo src/types.lo \
-	src/raw_api.lo src/java_raw_api.lo
+	src/raw_api.lo src/java_raw_api.lo src/closures.lo
 am_libffi_convenience_la_OBJECTS = $(am__objects_24)
 am__objects_25 = $(am__objects_1) $(am__objects_2) $(am__objects_3) \
 	$(am__objects_4) $(am__objects_5) $(am__objects_6) \
@@ -416,7 +416,7 @@ MAKEOVERRIDES = 
 toolexeclib_LTLIBRARIES = libffi.la
 noinst_LTLIBRARIES = libffi_convenience.la
 libffi_la_SOURCES = src/debug.c src/prep_cif.c src/types.c \
-		src/raw_api.c src/java_raw_api.c
+		src/raw_api.c src/java_raw_api.c src/closures.c
 
 nodist_libffi_la_SOURCES = $(am__append_1) $(am__append_2) \
 	$(am__append_3) $(am__append_4) $(am__append_5) \
@@ -534,6 +534,7 @@ src/prep_cif.lo: src/$(am__dirstamp) src
 src/types.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
 src/raw_api.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
 src/java_raw_api.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
+src/closures.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
 src/mips/$(am__dirstamp):
 	@$(mkdir_p) src/mips
 	@: > src/mips/$(am__dirstamp)
@@ -729,6 +730,8 @@ mostlyclean-compile:
 	-rm -f src/arm/ffi.lo
 	-rm -f src/arm/sysv.$(OBJEXT)
 	-rm -f src/arm/sysv.lo
+	-rm -f src/closures.$(OBJEXT)
+	-rm -f src/closures.lo
 	-rm -f src/cris/ffi.$(OBJEXT)
 	-rm -f src/cris/ffi.lo
 	-rm -f src/cris/sysv.$(OBJEXT)
@@ -827,6 +830,7 @@ mostlyclean-compile:
 distclean-compile:
 	-rm -f *.tab.c
 
+@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/closures.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/debug.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/java_raw_api.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/prep_cif.Plo@am__quote@
Index: libffi/src/prep_cif.c
===================================================================
--- libffi/src/prep_cif.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/prep_cif.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   prep_cif.c - Copyright (c) 1996, 1998  Red Hat, Inc.
+   prep_cif.c - Copyright (c) 1996, 1998, 2007  Red Hat, Inc.
 
    Permission is hereby granted, free of charge, to any person obtaining
    a copy of this software and associated documentation files (the
@@ -158,3 +158,16 @@ ffi_status ffi_prep_cif(ffi_cif *cif, ff
   return ffi_prep_cif_machdep(cif);
 }
 #endif /* not __CRIS__ */
+
+#if FFI_CLOSURES
+
+ffi_status
+ffi_prep_closure (ffi_closure* closure,
+		  ffi_cif* cif,
+		  void (*fun)(ffi_cif*,void*,void**,void*),
+		  void *user_data)
+{
+  return ffi_prep_closure_loc (closure, cif, fun, user_data, closure);
+}
+
+#endif
Index: libffi/src/raw_api.c
===================================================================
--- libffi/src/raw_api.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/raw_api.c	2007-01-17 22:33:16.000000000 -0200
@@ -209,22 +209,20 @@ ffi_translate_args (ffi_cif *cif, void *
   (*cl->fun) (cif, rvalue, raw, cl->user_data);
 }
 
-/* Again, here is the generic version of ffi_prep_raw_closure, which
- * will install an intermediate "hub" for translation of arguments from
- * the pointer-array format, to the raw format */
-
 ffi_status
-ffi_prep_raw_closure (ffi_raw_closure* cl,
-		      ffi_cif *cif,
-		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
-		      void *user_data)
+ffi_prep_raw_closure_loc (ffi_raw_closure* cl,
+			  ffi_cif *cif,
+			  void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			  void *user_data,
+			  void *codeloc)
 {
   ffi_status status;
 
-  status = ffi_prep_closure ((ffi_closure*) cl,
-			     cif,
-			     &ffi_translate_args,
-			     (void*)cl);
+  status = ffi_prep_closure_loc ((ffi_closure*) cl,
+				 cif,
+				 &ffi_translate_args,
+				 codeloc,
+				 codeloc);
   if (status == FFI_OK)
     {
       cl->fun       = fun;
@@ -236,4 +234,22 @@ ffi_prep_raw_closure (ffi_raw_closure* c
 
 #endif /* FFI_CLOSURES */
 #endif /* !FFI_NATIVE_RAW_API */
+
+#if FFI_CLOSURES
+
+/* Again, here is the generic version of ffi_prep_raw_closure, which
+ * will install an intermediate "hub" for translation of arguments from
+ * the pointer-array format, to the raw format */
+
+ffi_status
+ffi_prep_raw_closure (ffi_raw_closure* cl,
+		      ffi_cif *cif,
+		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+		      void *user_data)
+{
+  return ffi_prep_raw_closure_loc (cl, cif, fun, user_data, cl);
+}
+
+#endif /* FFI_CLOSURES */
+
 #endif /* !FFI_NO_RAW_API */
Index: libffi/src/java_raw_api.c
===================================================================
--- libffi/src/java_raw_api.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/java_raw_api.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   java_raw_api.c - Copyright (c) 1999  Red Hat, Inc.
+   java_raw_api.c - Copyright (c) 1999, 2007  Red Hat, Inc.
 
    Cloned from raw_api.c
 
@@ -307,22 +307,20 @@ ffi_java_translate_args (ffi_cif *cif, v
   ffi_java_raw_to_rvalue (cif, rvalue);
 }
 
-/* Again, here is the generic version of ffi_prep_raw_closure, which
- * will install an intermediate "hub" for translation of arguments from
- * the pointer-array format, to the raw format */
-
 ffi_status
-ffi_prep_java_raw_closure (ffi_raw_closure* cl,
-		      ffi_cif *cif,
-		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
-		      void *user_data)
+ffi_prep_java_raw_closure_loc (ffi_raw_closure* cl,
+			       ffi_cif *cif,
+			       void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			       void *user_data,
+			       void *codeloc)
 {
   ffi_status status;
 
-  status = ffi_prep_closure ((ffi_closure*) cl,
-			     cif,
-			     &ffi_java_translate_args,
-			     (void*)cl);
+  status = ffi_prep_closure_loc ((ffi_closure*) cl,
+				 cif,
+				 &ffi_java_translate_args,
+				 codeloc,
+				 codeloc);
   if (status == FFI_OK)
     {
       cl->fun       = fun;
@@ -332,6 +330,19 @@ ffi_prep_java_raw_closure (ffi_raw_closu
   return status;
 }
 
+/* Again, here is the generic version of ffi_prep_raw_closure, which
+ * will install an intermediate "hub" for translation of arguments from
+ * the pointer-array format, to the raw format */
+
+ffi_status
+ffi_prep_java_raw_closure (ffi_raw_closure* cl,
+			   ffi_cif *cif,
+			   void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			   void *user_data)
+{
+  return ffi_prep_java_raw_closure_loc (cl, cif, fun, user_data, cl);
+}
+
 #endif /* FFI_CLOSURES */
 #endif /* !FFI_NATIVE_RAW_API */
 #endif /* !FFI_NO_RAW_API */
Index: libffi/src/alpha/ffi.c
===================================================================
--- libffi/src/alpha/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/alpha/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1998, 2001 Red Hat, Inc.
+   ffi.c - Copyright (c) 1998, 2001, 2007 Red Hat, Inc.
    
    Alpha Foreign Function Interface 
 
@@ -146,10 +146,11 @@ ffi_call(ffi_cif *cif, void (*fn)(), voi
 
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
 
Index: libffi/src/cris/ffi.c
===================================================================
--- libffi/src/cris/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/cris/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -2,6 +2,7 @@
    ffi.c - Copyright (c) 1998 Cygnus Solutions
            Copyright (c) 2004 Simon Posnjak
 	   Copyright (c) 2005 Axis Communications AB
+	   Copyright (C) 2007 Free Software Foundation, Inc.
 
    CRIS Foreign Function Interface
 
@@ -360,10 +361,11 @@ ffi_prep_closure_inner (void **params, f
 /* API function: Prepare the trampoline.  */
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif *, void *, void **, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif *, void *, void **, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   void *innerfn = ffi_prep_closure_inner;
   FFI_ASSERT (cif->abi == FFI_SYSV);
@@ -375,7 +377,7 @@ ffi_prep_closure (ffi_closure* closure,
   memcpy (closure->tramp + ffi_cris_trampoline_fn_offset,
 	  &innerfn, sizeof (void *));
   memcpy (closure->tramp + ffi_cris_trampoline_closure_offset,
-	  &closure, sizeof (void *));
+	  &codeloc, sizeof (void *));
 
   return FFI_OK;
 }
Index: libffi/src/frv/ffi.c
===================================================================
--- libffi/src/frv/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/frv/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,6 @@
 /* -----------------------------------------------------------------------
    ffi.c - Copyright (c) 2004  Anthony Green
+   Copyright (C) 2007  Free Software Foundation, Inc.
    
    FR-V Foreign Function Interface 
 
@@ -243,14 +244,15 @@ void ffi_closure_eabi (unsigned arg1, un
 }
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp = (unsigned int *) &closure->tramp[0];
   unsigned long fn = (long) ffi_closure_eabi;
-  unsigned long cls = (long) closure;
+  unsigned long cls = (long) codeloc;
 #ifdef __FRV_FDPIC__
   register void *got __asm__("gr15");
 #endif
@@ -259,7 +261,7 @@ ffi_prep_closure (ffi_closure* closure,
   fn = (unsigned long) ffi_closure_eabi;
 
 #ifdef __FRV_FDPIC__
-  tramp[0] = &tramp[2];
+  tramp[0] = &((unsigned int *)codeloc)[2];
   tramp[1] = got;
   tramp[2] = 0x8cfc0000 + (fn  & 0xffff); /* setlos lo(fn), gr6    */
   tramp[3] = 0x8efc0000 + (cls & 0xffff); /* setlos lo(cls), gr7   */
@@ -281,7 +283,8 @@ ffi_prep_closure (ffi_closure* closure,
 
   /* Cache flushing.  */
   for (i = 0; i < FFI_TRAMPOLINE_SIZE; i++)
-    __asm__ volatile ("dcf @(%0,%1)\n\tici @(%0,%1)" :: "r" (tramp), "r" (i));
+    __asm__ volatile ("dcf @(%0,%1)\n\tici @(%2,%1)" :: "r" (tramp), "r" (i),
+		      "r" (codeloc));
 
   return FFI_OK;
 }
Index: libffi/src/ia64/ffi.c
===================================================================
--- libffi/src/ia64/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/ia64/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1998 Red Hat, Inc.
+   ffi.c - Copyright (c) 1998, 2007 Red Hat, Inc.
 	   Copyright (c) 2000 Hewlett Packard Company
    
    IA64 Foreign Function Interface 
@@ -400,10 +400,11 @@ ffi_call(ffi_cif *cif, void (*fn)(), voi
 extern void ffi_closure_unix ();
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   /* The layout of a function descriptor.  A C function pointer really 
      points to one of these.  */
@@ -430,7 +431,7 @@ ffi_prep_closure (ffi_closure* closure,
 
   tramp->code_pointer = fd->code_pointer;
   tramp->real_gp = fd->gp;
-  tramp->fake_gp = (UINT64)(PTR64)closure;
+  tramp->fake_gp = (UINT64)(PTR64)codeloc;
   closure->cif = cif;
   closure->user_data = user_data;
   closure->fun = fun;
Index: libffi/src/mips/ffi.c
===================================================================
--- libffi/src/mips/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/mips/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1996 Red Hat, Inc.
+   ffi.c - Copyright (c) 1996, 2007 Red Hat, Inc.
    
    MIPS Foreign Function Interface 
 
@@ -497,14 +497,15 @@ extern void ffi_closure_O32(void);
 #endif /* FFI_MIPS_O32 */
 
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-		  ffi_cif *cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp = (unsigned int *) &closure->tramp[0];
   unsigned int fn;
-  unsigned int ctx = (unsigned int) closure;
+  unsigned int ctx = (unsigned int) codeloc;
 
 #if defined(FFI_MIPS_O32)
   FFI_ASSERT(cif->abi == FFI_O32 || cif->abi == FFI_O32_SOFT_FLOAT);
@@ -525,7 +526,7 @@ ffi_prep_closure (ffi_closure *closure,
   closure->user_data = user_data;
 
   /* XXX this is available on Linux, but anything else? */
-  cacheflush (tramp, FFI_TRAMPOLINE_SIZE, ICACHE);
+  cacheflush (codeloc, FFI_TRAMPOLINE_SIZE, ICACHE);
 
   return FFI_OK;
 }
Index: libffi/src/pa/ffi.c
===================================================================
--- libffi/src/pa/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/pa/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -613,10 +613,11 @@ ffi_status ffi_closure_inner_pa32(ffi_cl
 extern void ffi_closure_pa32(void);
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   UINT32 *tramp = (UINT32 *)(closure->tramp);
 #ifdef PA_HPUX
Index: libffi/src/powerpc/ffi.c
===================================================================
--- libffi/src/powerpc/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/powerpc/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,6 @@
 /* -----------------------------------------------------------------------
    ffi.c - Copyright (c) 1998 Geoffrey Keating
+   Copyright (C) 2007 Free Software Foundation, Inc.
 
    PowerPC Foreign Function Interface
 
@@ -834,27 +835,26 @@ ffi_call(ffi_cif *cif, void (*fn)(), voi
 #define MIN_CACHE_LINE_SIZE 8
 
 static void
-flush_icache (char *addr1, int size)
+flush_icache (char *wraddr, char *xaddr, int size)
 {
   int i;
-  char * addr;
   for (i = 0; i < size; i += MIN_CACHE_LINE_SIZE)
     {
-      addr = addr1 + i;
-      __asm__ volatile ("icbi 0,%0;" "dcbf 0,%0;"
-			: : "r" (addr) : "memory");
+      __asm__ volatile ("icbi 0,%0;" "dcbf 0,%1;"
+			: : "r" (xaddr + i), "r" (wraddr + i) : "memory");
     }
-  addr = addr1 + size - 1;
-  __asm__ volatile ("icbi 0,%0;" "dcbf 0,%0;" "sync;" "isync;"
-		    : : "r"(addr) : "memory");
+  __asm__ volatile ("icbi 0,%0;" "dcbf 0,%1;" "sync;" "isync;"
+		    : : "r"(xaddr + size - 1), "r"(wraddr + size - 1)
+		    : "memory");
 }
 #endif
 
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-		  ffi_cif *cif,
-		  void (*fun) (ffi_cif *, void *, void **, void *),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun) (ffi_cif *, void *, void **, void *),
+		      void *user_data,
+		      void *codeloc)
 {
 #ifdef POWERPC64
   void **tramp = (void **) &closure->tramp[0];
@@ -862,7 +863,7 @@ ffi_prep_closure (ffi_closure *closure,
   FFI_ASSERT (cif->abi == FFI_LINUX64);
   /* Copy function address and TOC from ffi_closure_LINUX64.  */
   memcpy (tramp, (char *) ffi_closure_LINUX64, 16);
-  tramp[2] = (void *) closure;
+  tramp[2] = (void *) codeloc;
 #else
   unsigned int *tramp;
 
@@ -878,10 +879,10 @@ ffi_prep_closure (ffi_closure *closure,
   tramp[8] = 0x7c0903a6;  /*   mtctr   r0 */
   tramp[9] = 0x4e800420;  /*   bctr */
   *(void **) &tramp[2] = (void *) ffi_closure_SYSV; /* function */
-  *(void **) &tramp[3] = (void *) closure;          /* context */
+  *(void **) &tramp[3] = (void *) codeloc;          /* context */
 
   /* Flush the icache.  */
-  flush_icache (&closure->tramp[0],FFI_TRAMPOLINE_SIZE);
+  flush_icache (tramp, codeloc, FFI_TRAMPOLINE_SIZE);
 #endif
 
   closure->cif = cif;
Index: libffi/src/powerpc/ffi_darwin.c
===================================================================
--- libffi/src/powerpc/ffi_darwin.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/powerpc/ffi_darwin.c	2007-01-17 22:33:16.000000000 -0200
@@ -3,7 +3,7 @@
 
    Copyright (C) 1998 Geoffrey Keating
    Copyright (C) 2001 John Hornkvist
-   Copyright (C) 2002, 2006 Free Software Foundation, Inc.
+   Copyright (C) 2002, 2006, 2007 Free Software Foundation, Inc.
 
    FFI support for Darwin and AIX.
    
@@ -528,10 +528,11 @@ SP current -->    +---------------------
 
 */
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
   struct ffi_aix_trampoline_struct *tramp_aix;
@@ -553,14 +554,14 @@ ffi_prep_closure (ffi_closure* closure,
       tramp[8] = 0x816b0004;  /*   lwz     r11,4(r11) static chain  */
       tramp[9] = 0x4e800420;  /*   bctr  */
       tramp[2] = (unsigned long) ffi_closure_ASM; /* function  */
-      tramp[3] = (unsigned long) closure; /* context  */
+      tramp[3] = (unsigned long) codeloc; /* context  */
 
       closure->cif = cif;
       closure->fun = fun;
       closure->user_data = user_data;
 
       /* Flush the icache. Only necessary on Darwin.  */
-      flush_range(&closure->tramp[0],FFI_TRAMPOLINE_SIZE);
+      flush_range(codeloc, FFI_TRAMPOLINE_SIZE);
 
       break;
 
@@ -573,7 +574,7 @@ ffi_prep_closure (ffi_closure* closure,
 
       tramp_aix->code_pointer = fd->code_pointer;
       tramp_aix->toc = fd->toc;
-      tramp_aix->static_chain = closure;
+      tramp_aix->static_chain = codeloc;
       closure->cif = cif;
       closure->fun = fun;
       closure->user_data = user_data;
Index: libffi/src/s390/ffi.c
===================================================================
--- libffi/src/s390/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/s390/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2000 Software AG
+   ffi.c - Copyright (c) 2000, 2007 Software AG
  
    S390 Foreign Function Interface
  
@@ -709,17 +709,18 @@ ffi_closure_helper_SYSV (ffi_closure *cl
 
 /*====================================================================*/
 /*                                                                    */
-/* Name     - ffi_prep_closure.                                       */
+/* Name     - ffi_prep_closure_loc.                                   */
 /*                                                                    */
 /* Function - Prepare a FFI closure.                                  */
 /*                                                                    */
 /*====================================================================*/
  
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-                  ffi_cif *cif,
-                  void (*fun) (ffi_cif *, void *, void **, void *),
-                  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun) (ffi_cif *, void *, void **, void *),
+		      void *user_data,
+		      void *codeloc)
 {
   FFI_ASSERT (cif->abi == FFI_SYSV);
 
@@ -728,7 +729,7 @@ ffi_prep_closure (ffi_closure *closure,
   *(short *)&closure->tramp [2] = 0x9801;   /* lm %r0,%r1,6(%r1) */
   *(short *)&closure->tramp [4] = 0x1006;
   *(short *)&closure->tramp [6] = 0x07f1;   /* br %r1 */
-  *(long  *)&closure->tramp [8] = (long)closure;
+  *(long  *)&closure->tramp [8] = (long)codeloc;
   *(long  *)&closure->tramp[12] = (long)&ffi_closure_SYSV;
 #else
   *(short *)&closure->tramp [0] = 0x0d10;   /* basr %r1,0 */
@@ -736,7 +737,7 @@ ffi_prep_closure (ffi_closure *closure,
   *(short *)&closure->tramp [4] = 0x100e;
   *(short *)&closure->tramp [6] = 0x0004;
   *(short *)&closure->tramp [8] = 0x07f1;   /* br %r1 */
-  *(long  *)&closure->tramp[16] = (long)closure;
+  *(long  *)&closure->tramp[16] = (long)codeloc;
   *(long  *)&closure->tramp[24] = (long)&ffi_closure_SYSV;
 #endif 
  
Index: libffi/src/sh/ffi.c
===================================================================
--- libffi/src/sh/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/sh/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2002, 2003, 2004, 2005, 2006 Kaz Kojima
+   ffi.c - Copyright (c) 2002, 2003, 2004, 2005, 2006, 2007 Kaz Kojima
    
    SuperH Foreign Function Interface 
 
@@ -452,10 +452,11 @@ extern void __ic_invalidate (void *line)
 #endif
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
   unsigned short insn;
@@ -475,7 +476,7 @@ ffi_prep_closure (ffi_closure* closure,
   tramp[0] = 0xd102d301;
   tramp[1] = 0x412b0000 | insn;
 #endif
-  *(void **) &tramp[2] = (void *)closure;          /* ctx */
+  *(void **) &tramp[2] = (void *)codeloc;          /* ctx */
   *(void **) &tramp[3] = (void *)ffi_closure_SYSV; /* funaddr */
 
   closure->cif = cif;
@@ -484,7 +485,7 @@ ffi_prep_closure (ffi_closure* closure,
 
 #if defined(__SH4__)
   /* Flush the icache.  */
-  __ic_invalidate(&closure->tramp[0]);
+  __ic_invalidate(codeloc);
 #endif
 
   return FFI_OK;
Index: libffi/src/sh64/ffi.c
===================================================================
--- libffi/src/sh64/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/sh64/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2003, 2004, 2006 Kaz Kojima
+   ffi.c - Copyright (c) 2003, 2004, 2006, 2007 Kaz Kojima
    
    SuperH SHmedia Foreign Function Interface 
 
@@ -283,10 +283,11 @@ extern void ffi_closure_SYSV (void);
 extern void __ic_invalidate (void *line);
 
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-		  ffi_cif *cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
 
@@ -310,8 +311,8 @@ ffi_prep_closure (ffi_closure *closure,
   tramp[2] = 0xcc000010 | (((UINT32) ffi_closure_SYSV) >> 16) << 10;
   tramp[3] = 0xc8000010 | (((UINT32) ffi_closure_SYSV) & 0xffff) << 10;
   tramp[4] = 0x6bf10600;
-  tramp[5] = 0xcc000010 | (((UINT32) closure) >> 16) << 10;
-  tramp[6] = 0xc8000010 | (((UINT32) closure) & 0xffff) << 10;
+  tramp[5] = 0xcc000010 | (((UINT32) codeloc) >> 16) << 10;
+  tramp[6] = 0xc8000010 | (((UINT32) codeloc) & 0xffff) << 10;
   tramp[7] = 0x4401fff0;
 
   closure->cif = cif;
@@ -319,7 +320,8 @@ ffi_prep_closure (ffi_closure *closure,
   closure->user_data = user_data;
 
   /* Flush the icache.  */
-  asm volatile ("ocbwb %0,0; synco; icbi %0,0; synci" : : "r" (tramp));
+  asm volatile ("ocbwb %0,0; synco; icbi %1,0; synci" : : "r" (tramp),
+		"r"(codeloc));
 
   return FFI_OK;
 }
Index: libffi/src/sparc/ffi.c
===================================================================
--- libffi/src/sparc/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/sparc/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1996, 2003, 2004 Red Hat, Inc.
+   ffi.c - Copyright (c) 1996, 2003, 2004, 2007 Red Hat, Inc.
    
    SPARC Foreign Function Interface 
 
@@ -425,10 +425,11 @@ extern void ffi_closure_v8(void);
 #endif
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp = (unsigned int *) &closure->tramp[0];
   unsigned long fn;
@@ -443,7 +444,7 @@ ffi_prep_closure (ffi_closure* closure,
   tramp[3] = 0x01000000;	/* nop			*/
   *((unsigned long *) &tramp[4]) = fn;
 #else
-  unsigned long ctx = (unsigned long) closure;
+  unsigned long ctx = (unsigned long) codeloc;
   FFI_ASSERT (cif->abi == FFI_V8);
   fn = (unsigned long) ffi_closure_v8;
   tramp[0] = 0x03000000 | fn >> 10;	/* sethi %hi(fn), %g1	*/
Index: libffi/src/x86/ffi.c
===================================================================
--- libffi/src/x86/ffi.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/x86/ffi.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1996, 1998, 1999, 2001  Red Hat, Inc.
+   ffi.c - Copyright (c) 1996, 1998, 1999, 2001, 2007  Red Hat, Inc.
            Copyright (c) 2002  Ranjit Mathew
            Copyright (c) 2002  Bo Thorsen
            Copyright (c) 2002  Roger Sayle
@@ -302,7 +302,7 @@ ffi_prep_incoming_args_SYSV(char *stack,
 ({ unsigned char *__tramp = (unsigned char*)(TRAMP); \
    unsigned int  __fun = (unsigned int)(FUN); \
    unsigned int  __ctx = (unsigned int)(CTX); \
-   unsigned int  __dis = __fun - ((unsigned int) __tramp + FFI_TRAMPOLINE_SIZE); \
+   unsigned int  __dis = __fun - (__ctx + FFI_TRAMPOLINE_SIZE); \
    *(unsigned char*) &__tramp[0] = 0xb8; \
    *(unsigned int*)  &__tramp[1] = __ctx; /* movl __ctx, %eax */ \
    *(unsigned char *)  &__tramp[5] = 0xe9; \
@@ -313,16 +313,17 @@ ffi_prep_incoming_args_SYSV(char *stack,
 /* the cif must already be prep'ed */
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   FFI_ASSERT (cif->abi == FFI_SYSV);
 
   FFI_INIT_TRAMPOLINE (&closure->tramp[0], \
 		       &ffi_closure_SYSV,  \
-		       (void*)closure);
+		       codeloc);
     
   closure->cif  = cif;
   closure->user_data = user_data;
@@ -336,10 +337,11 @@ ffi_prep_closure (ffi_closure* closure,
 #if !FFI_NO_RAW_API
 
 ffi_status
-ffi_prep_raw_closure (ffi_raw_closure* closure,
-		      ffi_cif* cif,
-		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
-		      void *user_data)
+ffi_prep_raw_closure_loc (ffi_raw_closure* closure,
+			  ffi_cif* cif,
+			  void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			  void *user_data,
+			  void *codeloc)
 {
   int i;
 
@@ -358,7 +360,7 @@ ffi_prep_raw_closure (ffi_raw_closure* c
   
 
   FFI_INIT_TRAMPOLINE (&closure->tramp[0], &ffi_closure_raw_SYSV,
-		       (void*)closure);
+		       codeloc);
     
   closure->cif  = cif;
   closure->user_data = user_data;
Index: libffi/src/x86/ffi64.c
===================================================================
--- libffi/src/x86/ffi64.c.orig	2007-01-17 22:24:18.000000000 -0200
+++ libffi/src/x86/ffi64.c	2007-01-17 22:33:16.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2002  Bo Thorsen <bo@suse.de>
+   ffi.c - Copyright (c) 2002, 2007  Bo Thorsen <bo@suse.de>
    
    x86-64 Foreign Function Interface 
 
@@ -433,10 +433,11 @@ ffi_call (ffi_cif *cif, void (*fn)(), vo
 extern void ffi_closure_unix64(void);
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   volatile unsigned short *tramp;
 
@@ -445,7 +446,7 @@ ffi_prep_closure (ffi_closure* closure,
   tramp[0] = 0xbb49;		/* mov <code>, %r11	*/
   *(void * volatile *) &tramp[1] = ffi_closure_unix64;
   tramp[5] = 0xba49;		/* mov <data>, %r10	*/
-  *(void * volatile *) &tramp[6] = closure;
+  *(void * volatile *) &tramp[6] = codeloc;
 
   /* Set the carry bit iff the function uses any sse registers.
      This is clc or stc, together with the first byte of the jmp.  */
Index: libjava/include/boehm-gc.h
===================================================================
--- libjava/include/boehm-gc.h.orig	2007-01-17 22:24:18.000000000 -0200
+++ libjava/include/boehm-gc.h	2007-01-17 22:33:16.000000000 -0200
@@ -1,7 +1,7 @@
 // -*- c++ -*-
 // boehm-gc.h - Defines for Boehm collector.
 
-/* Copyright (C) 1998, 1999, 2002, 2004, 2006  Free Software Foundation
+/* Copyright (C) 1998, 1999, 2002, 2004, 2006, 2007  Free Software Foundation
 
    This file is part of libgcj.
 
@@ -36,6 +36,9 @@ extern "C" void * GC_local_gcj_malloc(si
 extern "C" void * GC_local_malloc_atomic(size_t);
 #endif
 
+void *_Jv_AllocClosure (jsize size, void **codep) __attribute__((__malloc__));
+void _Jv_FreeClosure (void *code);
+
 #ifndef LIBGCJ_GC_DEBUG
 
 inline void *
Index: libjava/boehm.cc
===================================================================
--- libjava/boehm.cc.orig	2007-01-17 22:24:18.000000000 -0200
+++ libjava/boehm.cc	2007-01-18 04:11:01.000000000 -0200
@@ -1,6 +1,6 @@
 // boehm.cc - interface between libjava and Boehm GC.
 
-/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006
+/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
    Free Software Foundation
 
    This file is part of libgcj.
@@ -380,6 +380,27 @@ _Jv_AllocRawObj (jsize size)
   return (void *) GC_MALLOC (size ? size : 1);
 }
 
+/* Allocate writable and executable memory with the given size.  This
+   memory is not under GC control, nor is it a GC root.  It returns
+   the writable address and stores the executable address in *code.
+   They are often the same, but in some situations that forbid pages
+   from being both writable and executable, separate virtual addresses
+   referring to the same data may be used.  */
+void *
+_Jv_AllocClosure (jsize size, void **code)
+{
+  return ffi_closure_alloc (size, code);
+}
+
+/* Release memory allocated by _Jv_AllocClosure.  PTR must denote a
+   writable-memory address.  */
+void
+_Jv_FreeClosure (void *ptr)
+{
+  if (ptr)
+    ffi_closure_free (ptr);
+}
+
 static void
 call_finalizer (GC_PTR obj, GC_PTR client_data)
 {
Index: libjava/nogc.cc
===================================================================
--- libjava/nogc.cc.orig	2007-01-17 22:24:18.000000000 -0200
+++ libjava/nogc.cc	2007-01-18 04:12:28.000000000 -0200
@@ -1,6 +1,7 @@
 // nogc.cc - Implement null garbage collector.
 
-/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2006  Free Software Foundation
+/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -71,6 +72,19 @@ _Jv_AllocRawObj (jsize size)
   return calloc (size, 1);
 }
 
+void *
+_Jv_AllocClosure (jsize size, void **code)
+{
+  return ffi_closure_alloc (size, code);
+}
+
+void
+_Jv_FreeClosure (void *ptr)
+{
+  if (ptr)
+    return ffi_closure_free (ptr);
+}
+
 void
 _Jv_RegisterFinalizer (void *, _Jv_FinalizerFunc *)
 {
Index: libjava/java/lang/Class.h
===================================================================
--- libjava/java/lang/Class.h.orig	2007-01-17 22:24:18.000000000 -0200
+++ libjava/java/lang/Class.h	2007-01-17 22:33:16.000000000 -0200
@@ -1,6 +1,7 @@
 // Class.h - Header file for java.lang.Class.  -*- c++ -*-
 
-/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006  Free Software Foundation
+/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -105,6 +106,15 @@ class _Jv_InterpClass;
 class _Jv_InterpMethod;
 #endif
 
+class _Jv_ClosureList
+{
+  _Jv_ClosureList *next;
+  void *ptr;
+public:
+  void registerClosure (jclass klass, void *ptr);
+  static void releaseClosures (_Jv_ClosureList **closures);
+};
+
 struct _Jv_Constants
 {
   jint size;
@@ -623,6 +633,7 @@ private:   
   friend class ::_Jv_CompiledEngine;
   friend class ::_Jv_IndirectCompiledEngine;
   friend class ::_Jv_InterpreterEngine;
+  friend class ::_Jv_ClosureList;
 
   friend void ::_Jv_sharedlib_register_hook (jclass klass);
 
Index: libjava/java/lang/natClass.cc
===================================================================
--- libjava/java/lang/natClass.cc.orig	2007-01-17 22:24:18.000000000 -0200
+++ libjava/java/lang/natClass.cc	2007-01-17 22:33:16.000000000 -0200
@@ -1,6 +1,6 @@
 // natClass.cc - Implementation of java.lang.Class native methods.
 
-/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006
+/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
    Free Software Foundation
 
    This file is part of libgcj.
@@ -666,7 +666,31 @@ java::lang::Class::newInstance (void)
 void
 java::lang::Class::finalize (void)
 {
+  _Jv_ClosureList **closures = engine->get_closure_list (this);
   engine->unregister(this);
+  _Jv_ClosureList::releaseClosures (closures);
+}
+
+void
+_Jv_ClosureList::releaseClosures (_Jv_ClosureList **closures)
+{
+  if (!closures)
+    return;
+
+  while (_Jv_ClosureList *current = *closures)
+    {
+      *closures = current->next;
+      _Jv_FreeClosure (current->ptr);
+    }
+}
+
+void
+_Jv_ClosureList::registerClosure (jclass klass, void *ptr)
+{
+  _Jv_ClosureList **closures = klass->engine->get_closure_list (klass);
+  this->ptr = ptr;
+  this->next = *closures;
+  *closures = this;
 }
 
 // This implements the initialization process for a class.  From Spec
Index: libjava/include/execution.h
===================================================================
--- libjava/include/execution.h.orig	2007-01-17 22:24:18.000000000 -0200
+++ libjava/include/execution.h	2007-01-17 22:33:16.000000000 -0200
@@ -1,6 +1,6 @@
 // execution.h - Execution engines. -*- c++ -*-
 
-/* Copyright (C) 2004, 2006  Free Software Foundation
+/* Copyright (C) 2004, 2006, 2007  Free Software Foundation
 
    This file is part of libgcj.
 
@@ -29,6 +29,7 @@ struct _Jv_ExecutionEngine
   _Jv_ResolvedMethod *(*resolve_method) (_Jv_Method *, jclass,
 					 jboolean);
   void (*post_miranda_hook) (jclass);
+  _Jv_ClosureList **(*get_closure_list) (jclass);
 };
 
 // This handles gcj-compiled code except that compiled with
@@ -77,6 +78,11 @@ struct _Jv_CompiledEngine : public _Jv_E
     // Not needed.
   }
 
+  static _Jv_ClosureList **do_get_closure_list (jclass)
+  {
+    return NULL;
+  }
+
   _Jv_CompiledEngine ()
   {
     unregister = do_unregister;
@@ -87,6 +93,7 @@ struct _Jv_CompiledEngine : public _Jv_E
     create_ncode = do_create_ncode;
     resolve_method = do_resolve_method;
     post_miranda_hook = do_post_miranda_hook;
+    get_closure_list = do_get_closure_list;
   }
 
   // These operators make it so we don't have to link in libstdc++.
@@ -105,6 +112,7 @@ class _Jv_IndirectCompiledClass
 {
 public:
   void **field_initializers;
+  _Jv_ClosureList *closures;
 };
 
 // This handles gcj-compiled code compiled with -findirect-classes.
@@ -114,14 +122,32 @@ struct _Jv_IndirectCompiledEngine : publ
   {
     allocate_static_fields = do_allocate_static_fields;
     allocate_field_initializers = do_allocate_field_initializers;
+    get_closure_list = do_get_closure_list;
   }
   
+  static _Jv_IndirectCompiledClass *get_aux_info (jclass klass)
+  {
+    _Jv_IndirectCompiledClass *aux =
+      (_Jv_IndirectCompiledClass*)klass->aux_info;
+    if (!aux)
+      {
+	aux = (_Jv_IndirectCompiledClass*)
+	  _Jv_AllocRawObj (sizeof (_Jv_IndirectCompiledClass));
+	klass->aux_info = aux;
+      }
+
+    return aux;
+  }
+
   static void do_allocate_field_initializers (jclass klass)
   {
-    _Jv_IndirectCompiledClass *aux 
-      =  (_Jv_IndirectCompiledClass*)
-        _Jv_AllocRawObj (sizeof (_Jv_IndirectCompiledClass));
-    klass->aux_info = aux;
+    _Jv_IndirectCompiledClass *aux = get_aux_info (klass);
+    if (!aux)
+      {
+	aux = (_Jv_IndirectCompiledClass*)
+	  _Jv_AllocRawObj (sizeof (_Jv_IndirectCompiledClass));
+	klass->aux_info = aux;
+      }
 
     aux->field_initializers = (void **)_Jv_Malloc (klass->field_count 
 						   * sizeof (void*));    
@@ -172,6 +198,13 @@ struct _Jv_IndirectCompiledEngine : publ
       } 
     _Jv_Free (aux->field_initializers);
   }
+
+  static _Jv_ClosureList **do_get_closure_list (jclass klass)
+  {
+    _Jv_IndirectCompiledClass *aux = get_aux_info (klass);
+
+    return &aux->closures;
+  }
 };
 
 
@@ -203,6 +236,8 @@ class _Jv_InterpreterEngine : public _Jv
 
   static void do_post_miranda_hook (jclass);
 
+  static _Jv_ClosureList **do_get_closure_list (jclass klass);
+
   _Jv_InterpreterEngine ()
   {
     unregister = do_unregister;
@@ -213,6 +248,7 @@ class _Jv_InterpreterEngine : public _Jv
     create_ncode = do_create_ncode;
     resolve_method = do_resolve_method;
     post_miranda_hook = do_post_miranda_hook;
+    get_closure_list = do_get_closure_list;
   }
 
   // These operators make it so we don't have to link in libstdc++.
Index: libjava/interpret.cc
===================================================================
--- libjava/interpret.cc.orig	2007-01-17 22:24:18.000000000 -0200
+++ libjava/interpret.cc	2007-01-17 22:33:16.000000000 -0200
@@ -1,6 +1,7 @@
 // interpret.cc - Code for the interpreter
 
-/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation
+/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -1234,10 +1235,10 @@ _Jv_init_cif (_Jv_Utf8Const* signature,
 }
 
 #if FFI_NATIVE_RAW_API
-#   define FFI_PREP_RAW_CLOSURE ffi_prep_raw_closure
+#   define FFI_PREP_RAW_CLOSURE ffi_prep_raw_closure_loc
 #   define FFI_RAW_SIZE ffi_raw_size
 #else
-#   define FFI_PREP_RAW_CLOSURE ffi_prep_java_raw_closure
+#   define FFI_PREP_RAW_CLOSURE ffi_prep_java_raw_closure_loc
 #   define FFI_RAW_SIZE ffi_java_raw_size
 #endif
 
@@ -1248,6 +1249,7 @@ _Jv_init_cif (_Jv_Utf8Const* signature,
 
 typedef struct {
   ffi_raw_closure  closure;
+  _Jv_ClosureList list;
   ffi_cif   cif;
   ffi_type *arg_types[0];
 } ncode_closure;
@@ -1255,7 +1257,7 @@ typedef struct {
 typedef void (*ffi_closure_fun) (ffi_cif*,void*,ffi_raw*,void*);
 
 void *
-_Jv_InterpMethod::ncode ()
+_Jv_InterpMethod::ncode (jclass klass)
 {
   using namespace java::lang::reflect;
 
@@ -1265,9 +1267,12 @@ _Jv_InterpMethod::ncode ()
   jboolean staticp = (self->accflags & Modifier::STATIC) != 0;
   int arg_count = _Jv_count_arguments (self->signature, staticp);
 
+  void *code;
   ncode_closure *closure =
-    (ncode_closure*)_Jv_AllocBytes (sizeof (ncode_closure)
-					+ arg_count * sizeof (ffi_type*));
+    (ncode_closure*)_Jv_AllocClosure (sizeof (ncode_closure)
+				      + arg_count * sizeof (ffi_type*),
+				      &code);
+  closure->list.registerClosure (klass, closure);
 
   _Jv_init_cif (self->signature,
 		arg_count,
@@ -1320,9 +1325,11 @@ _Jv_InterpMethod::ncode ()
   FFI_PREP_RAW_CLOSURE (&closure->closure,
 		        &closure->cif, 
 		        fun,
-		        (void*)this);
+		        (void*)this,
+			code);
+
+  self->ncode = code;
 
-  self->ncode = (void*)closure;
   return self->ncode;
 }
 
@@ -1450,7 +1457,7 @@ _Jv_InterpMethod::set_insn (jlong index,
 }
 
 void *
-_Jv_JNIMethod::ncode ()
+_Jv_JNIMethod::ncode (jclass klass)
 {
   using namespace java::lang::reflect;
 
@@ -1460,9 +1467,12 @@ _Jv_JNIMethod::ncode ()
   jboolean staticp = (self->accflags & Modifier::STATIC) != 0;
   int arg_count = _Jv_count_arguments (self->signature, staticp);
 
+  void *code;
   ncode_closure *closure =
-    (ncode_closure*)_Jv_AllocBytes (sizeof (ncode_closure)
-				    + arg_count * sizeof (ffi_type*));
+    (ncode_closure*)_Jv_AllocClosure (sizeof (ncode_closure)
+				      + arg_count * sizeof (ffi_type*),
+				      &code);
+  closure->list.registerClosure (klass, closure);
 
   ffi_type *rtype;
   _Jv_init_cif (self->signature,
@@ -1504,9 +1514,10 @@ _Jv_JNIMethod::ncode ()
   FFI_PREP_RAW_CLOSURE (&closure->closure,
 			&closure->cif, 
 			fun,
-			(void*) this);
+			(void*) this,
+			code);
 
-  self->ncode = (void *) closure;
+  self->ncode = code;
   return self->ncode;
 }
 
@@ -1567,16 +1578,23 @@ _Jv_InterpreterEngine::do_create_ncode (
 	  // cases.  Well, we can't, because we don't allocate these
 	  // objects using `new', and thus they don't get a vtable.
 	  _Jv_JNIMethod *jnim = reinterpret_cast<_Jv_JNIMethod *> (imeth);
-	  klass->methods[i].ncode = jnim->ncode ();
+	  klass->methods[i].ncode = jnim->ncode (klass);
 	}
       else if (imeth != 0)		// it could be abstract
 	{
 	  _Jv_InterpMethod *im = reinterpret_cast<_Jv_InterpMethod *> (imeth);
-	  klass->methods[i].ncode = im->ncode ();
+	  klass->methods[i].ncode = im->ncode (klass);
 	}
     }
 }
 
+_Jv_ClosureList **
+_Jv_InterpreterEngine::do_get_closure_list (jclass klass)
+{
+  _Jv_InterpClass *iclass = (_Jv_InterpClass *) klass->aux_info;
+  return &iclass->closures;
+}
+
 void
 _Jv_InterpreterEngine::do_allocate_static_fields (jclass klass,
 						  int pointer_size,
Index: libjava/include/java-interp.h
===================================================================
--- libjava/include/java-interp.h.orig	2007-01-17 22:24:18.000000000 -0200
+++ libjava/include/java-interp.h	2007-01-17 22:33:16.000000000 -0200
@@ -1,6 +1,7 @@
 // java-interp.h - Header file for the bytecode interpreter.  -*- c++ -*-
 
-/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006  Free Software Foundation
+/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -183,7 +184,7 @@ class _Jv_InterpMethod : public _Jv_Meth
   }
 
   // return the method's invocation pointer (a stub).
-  void *ncode ();
+  void *ncode (jclass);
   void compile (const void * const *);
 
   static void run_normal (ffi_cif*, void*, ffi_raw*, void*);
@@ -250,6 +251,7 @@ class _Jv_InterpClass
   _Jv_MethodBase **interpreted_methods;
   _Jv_ushort     *field_initializers;
   jstring source_file_name;
+  _Jv_ClosureList *closures;
 
   friend class _Jv_ClassReader;
   friend class _Jv_InterpMethod;
@@ -298,7 +300,7 @@ class _Jv_JNIMethod : public _Jv_MethodB
   // This function is used when making a JNI call from the interpreter.
   static void call (ffi_cif *, void *, ffi_raw *, void *);
 
-  void *ncode ();
+  void *ncode (jclass);
   friend class _Jv_ClassReader;
   friend class _Jv_InterpreterEngine;
Index: libjava/defineclass.cc
===================================================================
--- libjava/defineclass.cc.orig	2007-01-17 22:24:18.000000000 -0200
+++ libjava/defineclass.cc	2007-01-17 22:33:16.000000000 -0200
@@ -1,6 +1,7 @@
 // defineclass.cc - defining a class from .class format.
 
-/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006  Free Software Foundation
+/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -1652,7 +1653,7 @@ void _Jv_ClassReader::handleCodeAttribut
       // call a static method of an interpreted class from precompiled
       // code without first resolving the class (that will happen
       // during class initialization instead).
-      method->self->ncode = method->ncode ();
+      method->self->ncode = method->ncode (def);
     }
 }
 
@@ -1697,7 +1698,7 @@ void _Jv_ClassReader::handleMethodsEnd (
 		  // interpreted class from precompiled code without
 		  // first resolving the class (that will happen
 		  // during class initialization instead).
-		  method->ncode = m->ncode ();
+		  method->ncode = m->ncode (def);
 		}
 	    }
 	}
Index: libjava/link.cc
===================================================================
--- libjava/link.cc.orig	2007-01-17 22:24:18.000000000 -0200
+++ libjava/link.cc	2007-01-17 22:33:16.000000000 -0200
@@ -1020,15 +1020,17 @@ struct method_closure
   // be the same as the address of the overall structure.  This is due
   // to disabling interior pointers in the GC.
   ffi_closure closure;
+  _Jv_ClosureList list;
   ffi_cif cif;
   ffi_type *arg_types[1];
 };
 
 void *
-_Jv_Linker::create_error_method (_Jv_Utf8Const *class_name)
+_Jv_Linker::create_error_method (_Jv_Utf8Const *class_name, jclass klass)
 {
+  void *code;
   method_closure *closure
-    = (method_closure *) _Jv_AllocBytes(sizeof (method_closure));
+    = (method_closure *) _Jv_AllocClosure(sizeof (method_closure), &code);
 
   closure->arg_types[0] = &ffi_type_void;
 
@@ -1040,13 +1042,18 @@ _Jv_Linker::create_error_method (_Jv_Utf
                        1,
                        &ffi_type_void,
 		       closure->arg_types) == FFI_OK
-      && ffi_prep_closure (&closure->closure,
-                           &closure->cif,
-			   _Jv_ThrowNoClassDefFoundErrorTrampoline,
-			   class_name) == FFI_OK)
-    return &closure->closure;
+      && ffi_prep_closure_loc (&closure->closure,
+			       &closure->cif,
+			       _Jv_ThrowNoClassDefFoundErrorTrampoline,
+			       class_name,
+			       code) == FFI_OK)
+    {
+      closure->list.registerClosure (klass, closure);
+      return code;
+    }
   else
     {
+      _Jv_FreeClosure (closure);
       java::lang::StringBuffer *buffer = new java::lang::StringBuffer();
       buffer->append(JvNewStringLatin1("Error setting up FFI closure"
 				       " for static method of"
@@ -1057,7 +1064,7 @@ _Jv_Linker::create_error_method (_Jv_Utf
 }
 #else
 void *
-_Jv_Linker::create_error_method (_Jv_Utf8Const *)
+_Jv_Linker::create_error_method (_Jv_Utf8Const *, jclass)
 {
   // Codepath for platforms which do not support (or want) libffi.
   // You have to accept that it is impossible to provide the name
@@ -1088,8 +1095,6 @@ static bool debug_link = false;
 // at the corresponding position in the virtual method offset table
 // (klass->otable). 
 
-// The same otable and atable may be shared by many classes.
-
 // This must be called while holding the class lock.
 
 void
@@ -1240,13 +1245,15 @@ _Jv_Linker::link_symbol_table (jclass kl
       // NullPointerException
       klass->atable->addresses[index] = NULL;
 
+      bool use_error_method = false;
+
       // If the target class is missing we prepare a function call
       // that throws a NoClassDefFoundError and store the address of
       // that newly prepared method in the atable. The user can run
       // code in classes where the missing class is part of the
       // execution environment as long as it is never referenced.
       if (target_class == NULL)
-        klass->atable->addresses[index] = create_error_method(sym.class_name);
+	use_error_method = true;
       // We're looking for a static field or a static method, and we
       // can tell which is needed by looking at the signature.
       else if (signature->first() == '(' && signature->len() >= 2)
@@ -1294,12 +1301,16 @@ _Jv_Linker::link_symbol_table (jclass kl
 		}
 	    }
 	  else
+	    use_error_method = true;
+
+	  if (use_error_method)
 	    klass->atable->addresses[index]
-              = create_error_method(sym.class_name);
+	      = create_error_method(sym.class_name, klass);
 
 	  continue;
 	}
 
+
       // Try fields only if the target class exists.
       if (target_class != NULL)
       {
Index: libjava/include/jvm.h
===================================================================
--- libjava/include/jvm.h.orig	2007-01-17 22:25:10.000000000 -0200
+++ libjava/include/jvm.h	2007-01-17 22:33:16.000000000 -0200
@@ -288,7 +288,7 @@ private:
  						    _Jv_Utf8Const *method_signature,
 						    jclass *found_class,
 						    bool check_perms = true);
-  static void *create_error_method(_Jv_Utf8Const *);
+  static void *create_error_method(_Jv_Utf8Const *, jclass);
 
   /* The least significant bit of the signature pointer in a symbol
      table is set to 1 by the compiler if the reference is "special",
Index: libjava/java/lang/reflect/natVMProxy.cc
===================================================================
--- libjava/java/lang/reflect/natVMProxy.cc.orig	2007-01-17 22:24:18.000000000 -0200
+++ libjava/java/lang/reflect/natVMProxy.cc	2007-01-17 22:33:16.000000000 -0200
@@ -1,6 +1,6 @@
 // natVMProxy.cc -- Implementation of VMProxy methods.
 
-/* Copyright (C) 2006
+/* Copyright (C) 2006, 2007
    Free Software Foundation
 
    This file is part of libgcj.
@@ -66,7 +66,8 @@ using namespace java::lang::reflect;
 using namespace java::lang;
 
 typedef void (*closure_fun) (ffi_cif*, void*, void**, void*);
-static void *ncode (_Jv_Method *self, closure_fun fun, Method *meth);
+static void *ncode (jclass klass, _Jv_Method *self, closure_fun fun,
+		    Method *meth);
 static void run_proxy (ffi_cif*, void*, void**, void*);
 
 typedef jobject invoke_t (jobject, Proxy *, Method *, JArray< jobject > *);
@@ -165,7 +166,7 @@ java::lang::reflect::VMProxy::generatePr
       // the interfaces of which it is a proxy will also be reachable,
       // so this is safe.
       method = imethod;
-      method.ncode = ncode (&method, run_proxy, elements(d->methods)[i]);
+      method.ncode = ncode (klass, &method, run_proxy, elements(d->methods)[i]);
       method.accflags &= ~Modifier::ABSTRACT;
     }
 
@@ -290,6 +291,7 @@ unbox (jobject o, jclass klass, void *rv
 
 typedef struct {
   ffi_closure  closure;
+  _Jv_ClosureList list;
   ffi_cif   cif;
   Method *meth;
   _Jv_Method *self;
@@ -362,16 +364,19 @@ run_proxy (ffi_cif *cif,
 // the address of its closure.
 
 static void *
-ncode (_Jv_Method *self, closure_fun fun, Method *meth)
+ncode (jclass klass, _Jv_Method *self, closure_fun fun, Method *meth)
 {
   using namespace java::lang::reflect;
 
   jboolean staticp = (self->accflags & Modifier::STATIC) != 0;
   int arg_count = _Jv_count_arguments (self->signature, staticp);
 
+  void *code;
   ncode_closure *closure =
-    (ncode_closure*)_Jv_AllocBytes (sizeof (ncode_closure)
-				    + arg_count * sizeof (ffi_type*));
+    (ncode_closure*)_Jv_AllocClosure (sizeof (ncode_closure)
+				      + arg_count * sizeof (ffi_type*),
+				      &code);
+  closure->list.registerClosure (klass, closure);
 
   _Jv_init_cif (self->signature,
 		arg_count,
@@ -384,11 +389,12 @@ ncode (_Jv_Method *self, closure_fun fun
 
   JvAssert ((self->accflags & Modifier::NATIVE) == 0);
 
-  ffi_prep_closure (&closure->closure,
-		    &closure->cif, 
-		    fun,
-		    (void*)closure);
+  ffi_prep_closure_loc (&closure->closure,
+			&closure->cif,
+			fun,
+			code,
+			code);
 
-  self->ncode = (void*)closure;
+  self->ncode = code;
   return self->ncode;
 }
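
A minimal sketch (an editor's illustration, not part of the patch) of how
the new entry points fit together: only ffi_closure_alloc,
ffi_prep_closure_loc and ffi_closure_free come from the patch above; the
handler, cif setup and output are invented for the example.

#include <ffi.h>
#include <stdio.h>

static void
handler (ffi_cif *, void *ret, void **args, void *)
{
  // Return values narrower than ffi_arg must be widened.
  *(ffi_arg *) ret = *(int *) args[0] + 1;
}

int
main ()
{
  ffi_cif cif;
  ffi_type *argtypes[1] = { &ffi_type_sint };
  void *code;				// executable address
  ffi_closure *writable
    = (ffi_closure *) ffi_closure_alloc (sizeof (ffi_closure), &code);

  if (writable
      && ffi_prep_cif (&cif, FFI_DEFAULT_ABI, 1,
		       &ffi_type_sint, argtypes) == FFI_OK
      && ffi_prep_closure_loc (writable, &cif, handler, 0, code) == FFI_OK)
    {
      int (*fn) (int) = (int (*) (int)) code;	// call via the exec mapping
      printf ("%d\n", fn (41));			// prints 42
    }

  ffi_closure_free (writable);		// free via the writable address
  return 0;
}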

[-- Attachment #4: Type: text/plain, Size: 249 bytes --]


-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-18  7:14 Get libffi closures to cope with SELinux execmem/execmod Alexandre Oliva
@ 2007-01-18  7:28 ` Alexandre Oliva
  2007-01-18 12:30 ` Andrew Haley
  2007-02-15  7:11 ` Tom Tromey
  2 siblings, 0 replies; 35+ messages in thread
From: Alexandre Oliva @ 2007-01-18  7:28 UTC (permalink / raw)
  To: gcc-patches; +Cc: java-patches

[-- Attachment #1: Type: text/plain, Size: 241 bytes --]

On Jan 18, 2007, Alexandre Oliva <aoliva@redhat.com> wrote:

> The second patch

... was corrupted while I composed its ChangeLog :-(

Here's a patch to be applied to the patch file itself to fix it such
that it applies.  Sorry about that.


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: libffi-closure-prot-exec.patch.fix --]
[-- Type: text/x-patch, Size: 346 bytes --]

--- .patches/libffi-closure-prot-exec.patch.broken	2007-01-18 05:23:50.000000000 -0200
+++ .patches/libffi-closure-prot-exec.patch	2007-01-18 05:21:12.000000000 -0200
@@ -2114,6 +2114,7 @@
  
 -  void *ncode ();
 +  void *ncode (jclass);
+ 
    friend class _Jv_ClassReader;
    friend class _Jv_InterpreterEngine;
 Index: libjava/defineclass.cc

[-- Attachment #3: Type: text/plain, Size: 249 bytes --]


-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-18  7:14 Get libffi closures to cope with SELinux execmem/execmod Alexandre Oliva
  2007-01-18  7:28 ` Alexandre Oliva
@ 2007-01-18 12:30 ` Andrew Haley
  2007-01-18 12:47   ` Andrew Haley
  2007-01-19  3:57   ` Alexandre Oliva
  2007-02-15  7:11 ` Tom Tromey
  2 siblings, 2 replies; 35+ messages in thread
From: Andrew Haley @ 2007-01-18 12:30 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: gcc-patches, java-patches

Alexandre Oliva writes:
 > This is a patch that fixes the bug described at 
 > https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=202209
 > 
 > The problem is that SELinux security policies are often configured to
 > prevent pages from being mapped as both writable and executable, and
 > even to stop pages that were ever modified from being turned
 > executable.
 > 
 > The recommended way to implement dynamically-generated code is to map
 > pages from a file in an executable filesystem into separate locations
 > in the virtual memory, one writable, one executable.
 > http://people.redhat.com/drepper/selinux-mem.html
 > 
 > This is what I've done in libffi, if mmap fails to allocate anonymous
 > pages that are both writable and executable on GNU/Linux systems.
 > Various changes were need to enable closures to be created at one
 > address that would later be executed at another.
 > 
 > libjava needed changes to cope with these interface changes in libffi,
 > and manual memory management of closures.  Since they were not scanned
 > by the garbage collector, it turned out to be relatively simple to
 > make the change and arrange for class finalizers to deallocate them.
 > 
 > The first patch below adds dlmalloc.c to libffi as downloaded from
 > Doug Lea's web page (except for CRLF->LF conversion).  The second
 > patch does the rest of the work, including the minimal changes needed
 > for dlmalloc.c to support this custom separate-mapping behavior.
 > 
 > Tested on x86_64-linux-gnu.  Ok to install?

A few thoughts.  Firstly, including the entire malloc library seems
heavyweight.  I presume you've explored other solutions, but a word or
two here of explanation for the record would be good.  People will
ask.

I presume that the allocator is thread-safe.

When a Java object, such as a Class, is being finalized, it is
reachable.  So, disposing of the closures at this point is potentially
dangerous.  Please disable the call to
_Jv_ClosureList::releaseClosures for the time being, and we'll
investigate ways to defer the release until the class is unreachable.
I know this will cause a small memory leak, but we'll live with that
for the time being.  (One idea I have is to mark the closures as dead,
and then have a separate loop that reclaims them after all finalizers
have run.)
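
A minimal sketch of that two-phase idea, assuming a simple intrusive
list; none of these names exist in libgcj, _Jv_FreeClosure is the
deallocator from the patch, and the entries themselves are assumed to be
GC-allocated.

struct closure_entry
{
  closure_entry *next;
  void *writable_ptr;		// address to hand back to _Jv_FreeClosure
  bool dead;			// phase 1: set by the class finalizer
};

extern void _Jv_FreeClosure (void *);	// from the patch above

static closure_entry *all_closures;

// Phase 2: after all finalizers of a GC cycle have run, release the
// entries that were only marked before.
static void
sweep_dead_closures ()
{
  closure_entry **p = &all_closures;
  while (closure_entry *e = *p)
    if (e->dead)
      {
	*p = e->next;			// unlink, then free the closure
	_Jv_FreeClosure (e->writable_ptr);
      }
    else
      p = &e->next;
}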

Andrew.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-18 12:30 ` Andrew Haley
@ 2007-01-18 12:47   ` Andrew Haley
  2007-01-19  3:57   ` Alexandre Oliva
  1 sibling, 0 replies; 35+ messages in thread
From: Andrew Haley @ 2007-01-18 12:47 UTC (permalink / raw)
  To: Alexandre Oliva, gcc-patches, java-patches

Andrew Haley writes:
 > 
 > When a Java object, such as a Class, is being finalized, it is
 > reachable.  So, disposing of the closures at this point is potentially
 > dangerous.  Please disable the call to
 > _Jv_ClosureList::releaseClosures for the time being, and we'll
 > investigate ways to defer the release until the class is unreachable.
 > I know this will cause a small memory leak, but we'll live with that
 > for the time being.  (One idea I have is to mark the closures as dead,
 > and then have a separate loop that reclaims them after all finalizers
 > have run.)

Or, alternatively, we'll be able to prove that my worries here are
unfounded, and we'll put the call back in.

Andrew.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-18 12:30 ` Andrew Haley
  2007-01-18 12:47   ` Andrew Haley
@ 2007-01-19  3:57   ` Alexandre Oliva
       [not found]     ` <Pine.GHP.4.58.0701190016420.29118@tomil.hpl.hp.com>
  1 sibling, 1 reply; 35+ messages in thread
From: Alexandre Oliva @ 2007-01-19  3:57 UTC (permalink / raw)
  To: Andrew Haley; +Cc: gcc-patches, java-patches

On Jan 18, 2007, Andrew Haley <aph@redhat.com> wrote:

> A few thoughts.  Firstly, including the entire malloc library seems
> heavyweight.  I presume you've explored other solutions, but a word or
> two here of explanation for the record would be good.  People will
> ask.

Good point.  Let me summarize the private discussions we've had on
this matter:

- it would have been possible to use a simpler allocator for
  fixed-size objects, and thus require the separation of ffi_closures
  from other data normally carried along with them.  However, this would
  require the closure to carry a pointer to modifiable data, making it
  much easier to corrupt that data and subvert the closure, defeating at
  least in part the security advantage afforded by the security
  policies.  It would also make allocation less efficient, in that a
  pair of data objects would have to be allocated.

- Storing the closure and the corresponding unmodifiable helper data
  in read-only memory would be much safer in the general case,
  although I ended up keeping a pointer to the modifyable data in the
  read-only data in the libjava side.  But at least that's not a
  choice imposed by libffi.

- Since the data attached to a closure object, at least in the libjava
  case, can grow up to 2KB, and stop at that only because of
  constraints imposed by the JVM specs, it sounded desirable to use a
  generic memory allocator that could deal with this, even if
  heavily optimized for small objects; see the sketch after this list.

- Modifying boehm-gc so as to support a separate memory space with
  this special mmap magic was considered, but after some investigation
  I didn't get the impression this was feasible without a lot of
  effort, and it didn't seem worth it considering that dlmalloc is a
  well-known memory allocator library, glibc's allocator is based on
  it, I was somewhat familiar with it already, and tweaking it to suit
  our needs proved to be quite simple.
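
A sketch of the layout these points argue for (an editor's
illustration): the closure and its variable-sized helper data live in
one allocation, so nothing in the executable mapping needs to point at a
separately allocated, writable block.  The struct mirrors the
ncode_closure type in the patch, alloc_ncode is a made-up helper, and
the zero-length array is the GNU extension the patch itself relies on.

#include <ffi.h>

typedef struct
{
  ffi_raw_closure closure;	/* must come first */
  ffi_cif cif;			/* helper data shares the allocation */
  ffi_type *arg_types[0];	/* variable-sized tail, up to ~2KB here */
} ncode_closure;

/* Allocate the closure and its helper data as one block; *code gets
   the executable alias of the returned writable address.  */
static ncode_closure *
alloc_ncode (int arg_count, void **code)
{
  return (ncode_closure *)
    ffi_closure_alloc (sizeof (ncode_closure)
		       + arg_count * sizeof (ffi_type *), code);
}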
  
> I presume that the allocator is thread-safe.

Yes, given the option I passed to it.

> When a Java object, such as a Class, is being finalized, it is
> reachable.

What I read about boehm-gc's finalization
(boehm-gc/doc/gcdescr.html:Finalization and
http://www.hpl.hp.com/personal/Hans_Boehm/gc/finalization.html) gave
me the impression that this was safe.  The docs said something along
the lines of finalizers being run only when their objects actually
became unreachable, which means it might take multiple collection
cycles for an object with a finalizer to be finalized and discarded if
there's a chain of such objects.  The docs also mentioned that cycles
of such objects wouldn't be discarded.
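
A small standalone illustration of that ordering, using the standard
boehm-gc C API; this is an editor's sketch, not from the thread, and
note that libjava itself registers finalizers with the no-order
semantics rather than the default shown here.

#include <gc.h>
#include <stdio.h>

static void
note (void *, void *name)
{
  printf ("finalized %s\n", (const char *) name);
}

int
main ()
{
  GC_INIT ();
  void **a = (void **) GC_MALLOC (sizeof (void *));
  void **b = (void **) GC_MALLOC (sizeof (void *));
  *a = b;				// a points to b
  GC_register_finalizer (a, note, (void *) "a", 0, 0);
  GC_register_finalizer (b, note, (void *) "b", 0, 0);
  a = b = 0;				// drop the only roots

  GC_gcollect ();			// cycle 1 ...
  GC_invoke_finalizers ();		// ... finalizes only "a"
  GC_gcollect ();			// cycle 2: "b" now unreachable too
  GC_invoke_finalizers ();		// ... finalizes "b"
  return 0;
}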

It would still be possible for such an object to be reconnected in the
finalizer, but our class finalizers don't do this, so I concluded it
was safe.

> Please disable the call to
> _Jv_ClosureList::releaseClosures for the time being,

Given the above, would you still like me to do so?  Can you give me a
hint as to a testcase that might expose this kind of error, such that
I can verify whether the docs and what I'm reading from the code
actually match the behavior I've come to rely upon?

> (One idea I have is to mark the closures as dead, and then have a
> separate loop that reclaims them after all finalizers have run.)

Yes, it would be pretty simple to concat a to-be-disposed list with
the closure list of a class at that point, and dispose of the list at
a later time.  Should I?

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Get libffi closures to cope with SELinux execmem/execmod
       [not found]     ` <Pine.GHP.4.58.0701190016420.29118@tomil.hpl.hp.com>
@ 2007-01-22 21:28       ` Alexandre Oliva
  2007-01-24  6:46         ` Alexandre Oliva
  2007-02-15  7:14         ` Tom Tromey
  0 siblings, 2 replies; 35+ messages in thread
From: Alexandre Oliva @ 2007-01-22 21:28 UTC (permalink / raw)
  To: Hans Boehm, Andrew Haley; +Cc: gcc-patches, java-patches

[-- Attachment #1: Type: text/plain, Size: 2830 bytes --]

On Jan 19, 2007, Hans Boehm <Hans.Boehm@hp.com> wrote:

> The collector can register finalizable objects in a way that
> either does or doesn't prevent objects they reference from being
> finalized in the same cycle.  The default is to prevent such
> finalization.  But Java requires that objects normally be registered
> the other way, so that they can be finalized in any order.  And we
> do that.

> If you are taking this into consideration, please ignore this message ...

I wasn't.  Thanks for pointing this out.  I've come up with a testcase
that crashes with my previous patch, it's attached below.

Now, if I'm reading you correctly, it's not the object itself that
determines whether its finalizer can run while there are other
pointers to it, but rather the object that points to it.  Is this
correct?

I ask because I reasoned I could create a GC-managed pointer to the
closure list, and register a finalizer for that pointer such that the
closure list would only get deallocated when the class that held the
pointer to it was collected.

As it turned out, this wasn't enough; but with a second layer of
indirection to the closure list, such that both levels of indirection
involve finalizers registered with the traditional (!= Java)
semantics, it should work, right?  The pointer to pointer to the
closure list could get finalized in the same cycle as the class, but
the pointer to the closure list would be saved for another cycle, and
everything would work, right?

Well, too bad, it doesn't.  And it's not even because I'm
re-registering objects, which might cause the pointer to pointer to
remain live but without a finalizer, thus presumably enabling the
pointer to be finalized at the same time as the class that holds on to
it.  Yuck.  This is getting out of hand.

The problem is more subtle than this, although I can't quite figure
out how to deal with it.  AFAICT something is getting messed up such
that, when we try to run the finalizer for an object, some data
structures associated with its class are already gone, and that
includes something in the vtable that would have enabled the finalize
method to be called.

I don't see how it is even possible that we could have a live object
with a finalizer yet to be run that wouldn't keep its class object
alive along with all other data structures, but this is what appears
to be going on.

ATM I'm considering doing away with the closure list and creating a
finalizable pointer for each closure, storing the pointer just next to
the pointer to the closure.  This would probably waste a lot of
memory, but it would at least take us back to where we stood in terms
of keeping closures around long enough.

I'll get back to banging my head against this wall after getting some
sleep, but if any of you have any insights to offer, I'm all ears.

Thanks,


[-- Attachment #2: testGC.java --]
[-- Type: application/octet-stream, Size: 2375 bytes --]

public class testGC {
    public static String objId (Object obj) {
	return obj + "/"
	    + Integer.toHexString(obj.getClass().getClassLoader().hashCode());
    }
    public static class cld extends java.net.URLClassLoader {
	static Object obj = new cl0();
	public cld () throws Exception {
	    super(new java.net.URL[] { new java.net.URL("file://./") });
	    System.out.println (objId (this) + " created");
	}
	public void finalize () {
	    System.out.println (objId (this) + " finalized");
	}
	public String toString () {
	    return this.getClass().getName() + "@"
		+ Integer.toHexString (hashCode ());
	}
	public Class loadClass (String name) throws ClassNotFoundException {
	    try {
		java.io.InputStream IS = getResourceAsStream (name + ".class");
		int maxsz = 1, readsz = 0;
		byte buf[] = new byte[maxsz];
		for(;;) {
		    int readnow = IS.read (buf, readsz, maxsz - readsz);
		    if (readnow <= 0)
			break;
		    readsz += readnow;
		    if (readsz == maxsz) {
			byte newbuf[] = new byte[maxsz *= 2];
			System.arraycopy (buf, 0, newbuf, 0, readsz);
			buf = newbuf;
		    }
		}
		return defineClass (name, buf, 0, readsz);
	    } catch (Exception e) {
		return super.loadClass (name);
	    }
	}
    }
    public static class cl0 {
	public cl0 () {
	    System.out.println (objId (this) + " created");
	}
	public void finalize () {
	    System.out.println (objId (this) + " finalized");
	}
    }
    public static class cl1 {
	static Object obj = new cl0();
	public cl1 () {
	    System.out.println (objId (this) + " created");
	}
	public void finalize () {
	    System.out.println (objId (this) + " finalized");
	}
    }
    public static class cl2 {
	static Object obj = new cl0();
	public cl2 () {
	    System.out.println (objId (this) + " created");
	}
	public void finalize () {
	    System.out.println (objId (this) + " finalized");
	}
    }
    static Object obj = new cl0();
    public static void main(String[] argv) throws Exception {
	new cld().loadClass ("testGC$cl1").newInstance ();
	new cld().loadClass ("testGC$cl2").newInstance ();
	new cld().loadClass ("testGC$cl1").newInstance ();
	new cld().loadClass ("testGC$cl2").newInstance ();
	for (int i = 1; i <= 8; i++) {
	    System.out.println ("Will run GC and finalization " + i);
	    System.gc ();
	    Thread.yield ();
	    System.runFinalization ();
	    Thread.yield ();
	}
    }
}

[-- Attachment #3: Type: text/plain, Size: 249 bytes --]


-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-22 21:28       ` Alexandre Oliva
@ 2007-01-24  6:46         ` Alexandre Oliva
  2007-01-24 15:47           ` Andrew Haley
  2007-02-15  7:14         ` Tom Tromey
  1 sibling, 1 reply; 35+ messages in thread
From: Alexandre Oliva @ 2007-01-24  6:46 UTC (permalink / raw)
  To: Hans Boehm; +Cc: Andrew Haley, gcc-patches, java-patches

[-- Attachment #1: Type: text/plain, Size: 1509 bytes --]

On Jan 22, 2007, Alexandre Oliva <aoliva@redhat.com> wrote:

> Now, if I'm reading you correctly, it's not the object itself that
> determines whether its finalizer can run while there are other
> pointers to it, but rather the object that points to it.  Is this
> correct?

It was, so I added an option to get the other behavior, which appears
to be generally desirable for Java system-level resource disposal,
through objects whose finalizers should only run when the
corresponding objects are unreachable even from other finalizable
objects.  It was quite easy to add, after I figured out how it worked.

> As it turned out, this wasn't enough, but adding a second layer of
> indirection to the closure list, such that both levels of indirection
> involve finalizers registered with the traditional (!= Java)
> semantics, it should work, right?

No.  Especially if one allocates the doubly-indirect pointer as not
containing pointers.  *blush*  :-)

Fixing that enabled things to work until I added finalizers that
would resurrect objects, perhaps even multiple times (consider a
finalizer that optionally creates another instance of the same class,
bringing the class and the class loader back to life).

At that point, after scratching my head for a while trying to figure
out a way to implement the needed semantics with the existing
primitives, I concluded it couldn't be done, and so I came up with
the new finalizer-semantics option described above.
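
For concreteness, here's a minimal sketch of the intended usage.
It's not part of the patch, and it assumes the usual Boehm GC entry
points; the hook only runs once the object is unreachable even from
other finalizable objects:

  #include <stdio.h>
  #include <gc.h>

  /* Called only when OBJ is unreachable even from other finalizable
     objects, i.e. no other finalizer can resurrect it anymore, so
     native resources tied to it may safely be released.  */
  static void
  on_unreachable (GC_PTR obj, GC_PTR client_data)
  {
    printf ("releasing native resources for %p (%s)\n",
            obj, (char *) client_data);
  }

  int
  main (void)
  {
    GC_PTR obj;

    GC_INIT ();
    obj = GC_MALLOC (16);
    GC_register_finalizer_unreachable (obj, on_unreachable,
                                       (GC_PTR) "demo", 0, 0);
    obj = 0;                   /* drop the last reference */
    GC_gcollect ();            /* detect unreachability */
    GC_invoke_finalizers ();   /* run the pending finalizer */
    return 0;
  }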

Is this ok to install?  Tested on x86_64-linux-gnu.



[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: libffi-closure-prot-exec.patch --]
[-- Type: text/x-patch, Size: 93943 bytes --]

for  libffi/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* include/ffi.h.in (ffi_closure_alloc, ffi_closure_free): New.
	(ffi_prep_closure_loc): New.
	(ffi_prep_raw_closure_loc): New.
	(ffi_prep_java_raw_closure_loc): New.
	* src/closures.c: New file.
	* src/dlmalloc.c [FFI_MMAP_EXEC_WRIT] (struct malloc_segment):
	Replace sflags with exec_offset.
	[FFI_MMAP_EXEC_WRIT] (mmap_exec_offset, add_segment_exec_offset,
	sub_segment_exec_offset): New macros.
	(get_segment_flags, set_segment_flags, check_segment_merge): New
	macros.
	(is_mmapped_segment, is_extern_segment): Use get_segment_flags.
	(add_segment, sys_alloc, create_mspace, create_mspace_with_base,
	destroy_mspace): Use new macros.
	(sys_alloc): Silence warning.
	* Makefile.am (libffi_la_SOURCES): Add src/closures.c.
	* Makefile.in: Rebuilt.
	* src/prep_cif.c [FFI_CLOSURES] (ffi_prep_closure): Implement in
	terms of ffi_prep_closure_loc.
	* src/raw_api.c (ffi_prep_raw_closure_loc): Renamed and adjusted
	from...
	(ffi_prep_raw_closure): ... this.  Re-implement in terms of the
	renamed version.
	* src/java_raw_api.c (ffi_prep_java_raw_closure_loc): Renamed and
	adjusted from...
	(ffi_prep_java_raw_closure): ... this.  Re-implement in terms of
	the renamed version.
	* src/alpha/ffi.c (ffi_prep_closure_loc): Renamed from...
	(ffi_prep_closure): ... this.
	* src/pa/ffi.c: Likewise.
	* src/cris/ffi.c: Likewise.  Adjust.
	* src/frv/ffi.c: Likewise.
	* src/ia64/ffi.c: Likewise.
	* src/mips/ffi.c: Likewise.
	* src/powerpc/ffi_darwin.c: Likewise.
	* src/s390/ffi.c: Likewise.
	* src/sh/ffi.c: Likewise.
	* src/sh64/ffi.c: Likewise.
	* src/sparc/ffi.c: Likewise.
	* src/x86/ffi64.c: Likewise.
	* src/x86/ffi.c: Likewise.
	(FFI_INIT_TRAMPOLINE): Adjust.
	(ffi_prep_raw_closure_loc): Renamed and adjusted from...
	(ffi_prep_raw_closure): ... this.
	* src/powerpc/ffi.c (ffi_prep_closure_loc): Renamed from...
	(ffi_prep_closure): ... this.
	(flush_icache): Adjust.
	
for  boehm-gc/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* include/gc.h (GC_REGISTER_FINALIZER_UNREACHABLE): New.
	(GC_register_finalizer_unreachable): Declare.
	(GC_debug_register_finalizer_unreachable): Declare.
	* finalize.c (GC_unreachable_finalize_mark_proc): New.
	(GC_register_finalizer_unreachable): New.
	(GC_finalize): Handle it.
	* dbg_mlc.c (GC_debug_register_finalizer_unreachable): New.
	(GC_debug_register_finalizer_no_order): Fix whitespace.

for  libjava/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* include/jvm.h (_Jv_AllocClosure, _Jv_FreeClosure): New.
	(_Jv_ClosureListFinalizer): New.
	(_Jv_Linker::create_error_method): Adjust.
	* boehm.cc (_Jv_AllocClosure, _Jv_FreeClosure): New.
	(_Jv_ClosureListFinalizer): New.
	* nogc.cc: Likewise.
	* java/lang/Class.h (class _Jv_ClosureList): New.
	(class java::lang::Class): Declare it as friend.
	* java/lang/natClass.cc (_Jv_ClosureList::releaseClosures): New.
	(_Jv_ClosureList::registerClosure): New.
	* include/execution.h (_Jv_ExecutionEngine): Add get_closure_list.
	(_Jv_CompiledEngine::do_get_closure_list): New.
	(_Jv_CompiledEngine::_Jv_CompiledEngine): Use it.
	(_Jv_IndirectCompiledClass): Add closures.
	(_Jv_IndirectCompiledEngine::get_aux_info): New.
	(_Jv_IndirectCompiledEngine::do_allocate_field_initializers): Use
	it.
	(_Jv_IndirectCompiledEngine::do_get_closure_list): New.
	(_Jv_IndirectCompiledEngine::_Jv_IndirectCompiledEngine): Use it.
	(_Jv_InterpreterEngine::do_get_closure_list): Declare.
	(_Jv_InterpreterEngine::_Jv_InterpreterEngine): Use it.
	* interpret.cc (FFI_PREP_RAW_CLOSURE): Use _loc variants.
	(node_closure): Add closure list.
	(_Jv_InterpMethod::ncode): Add jclass argument.  Use
	_Jv_AllocClosure and the separate code pointer.  Register the
	closure for finalization.
	(_Jv_JNIMethod::ncode): Likewise.
	(_Jv_InterpreterEngine::do_create_ncode): Pass klass to ncode.
	(_Jv_InterpreterEngine::do_get_closure_list): New.
	* include/java-interp.h (_Jv_InterpMethod::ncode): Adjust.
	(_Jv_InterpClass): Add closures field.
	(_Jv_JNIMethod::ncode): Adjust.
	* defineclass.cc (_Jv_ClassReader::handleCodeAttribute): Adjust.
	(_Jv_ClassReader::handleMethodsEnd): Likewise.
	* link.cc (struct method_closure): Add closure list.
	(_Jv_Linker::create_error_method): Add jclass argument.  Use
	_Jv_AllocClosure and the separate code pointer.  Register the
	closure for finalization.
	(_Jv_Linker::link_symbol_table): Remove outdated comment about
	sharing of otable and atable.  Adjust.
	* java/lang/reflect/natVMProxy.cc (ncode_closure): Add closure
	list.
	(ncode): Add jclass argument.  Use _Jv_AllocClosure and the
	separate code pointer.  Register the closure for finalization.
	(java::lang::reflect::VMProxy::generateProxyClass): Adjust.
	* testsuite/libjava.jar/TestClosureGC.java: New.
	* testsuite/libjava.jar/TestClosureGC.out: New.
	* testsuite/libjava.jar/TestClosureGC.xfail: New.
	* testsuite/libjava.jar/TestClosureGC.jar: New.

Index: libffi/include/ffi.h.in
===================================================================
--- libffi/include/ffi.h.in.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/include/ffi.h.in	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------*-C-*-
-   libffi @VERSION@ - Copyright (c) 1996-2003  Red Hat, Inc.
+   libffi @VERSION@ - Copyright (c) 1996-2003, 2007  Red Hat, Inc.
 
    Permission is hereby granted, free of charge, to any person obtaining
    a copy of this software and associated documentation files (the
@@ -228,12 +228,22 @@ typedef struct {
   void      *user_data;
 } ffi_closure __attribute__((aligned (8)));
 
+void *ffi_closure_alloc (size_t size, void **code);
+void ffi_closure_free (void *);
+
 ffi_status
 ffi_prep_closure (ffi_closure*,
 		  ffi_cif *,
 		  void (*fun)(ffi_cif*,void*,void**,void*),
 		  void *user_data);
 
+ffi_status
+ffi_prep_closure_loc (ffi_closure*,
+		      ffi_cif *,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void*codeloc);
+
 typedef struct {
   char tramp[FFI_TRAMPOLINE_SIZE];
 
@@ -262,11 +272,25 @@ ffi_prep_raw_closure (ffi_raw_closure*,
 		      void *user_data);
 
 ffi_status
+ffi_prep_raw_closure_loc (ffi_raw_closure*,
+			  ffi_cif *cif,
+			  void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			  void *user_data,
+			  void *codeloc);
+
+ffi_status
 ffi_prep_java_raw_closure (ffi_raw_closure*,
 		           ffi_cif *cif,
 		           void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
 		           void *user_data);
 
+ffi_status
+ffi_prep_java_raw_closure_loc (ffi_raw_closure*,
+			       ffi_cif *cif,
+			       void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			       void *user_data,
+			       void *codeloc);
+
 #endif /* FFI_CLOSURES */
 
 /* ---- Public interface definition -------------------------------------- */
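
For reference, here's how the new entry points are meant to be used
together.  This is a minimal sketch, not part of the patch; it
assumes the usual libffi headers and a closure that returns int:

  #include <stdio.h>
  #include <ffi.h>

  /* Closure body: forward the single string argument to puts.  */
  static void
  puts_binding (ffi_cif *cif, void *ret, void **args, void *user_data)
  {
    *(ffi_arg *) ret = puts (*(char **) args[0]);
  }

  int
  main (void)
  {
    ffi_cif cif;
    ffi_type *argtypes[1] = { &ffi_type_pointer };
    void *code;                 /* executable address */
    ffi_closure *wr;            /* writable address */
    int (*fn) (const char *);

    wr = ffi_closure_alloc (sizeof (ffi_closure), &code);
    if (!wr)
      return 1;
    if (ffi_prep_cif (&cif, FFI_DEFAULT_ABI, 1,
                      &ffi_type_sint, argtypes) != FFI_OK
        || ffi_prep_closure_loc (wr, &cif, puts_binding,
                                 NULL, code) != FFI_OK)
      return 1;

    fn = (int (*)(const char *)) code;  /* call via the exec mapping */
    fn ("hello from a dual-mapped closure");

    ffi_closure_free (wr);              /* releases both mappings */
    return 0;
  }
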
Index: libffi/src/closures.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ libffi/src/closures.c	2007-01-24 03:18:54.000000000 -0200
@@ -0,0 +1,513 @@
+/* -----------------------------------------------------------------------
+   closures.c - Copyright (c) 2007  Red Hat, Inc.
+
+   Code to allocate and deallocate memory for closures.
+
+   Permission is hereby granted, free of charge, to any person obtaining
+   a copy of this software and associated documentation files (the
+   ``Software''), to deal in the Software without restriction, including
+   without limitation the rights to use, copy, modify, merge, publish,
+   distribute, sublicense, and/or sell copies of the Software, and to
+   permit persons to whom the Software is furnished to do so, subject to
+   the following conditions:
+
+   The above copyright notice and this permission notice shall be included
+   in all copies or substantial portions of the Software.
+
+   THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS
+   OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+   MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+   IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+   OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+   ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+   OTHER DEALINGS IN THE SOFTWARE.
+   ----------------------------------------------------------------------- */
+
+#include <ffi.h>
+#include <ffi_common.h>
+
+#ifndef FFI_MMAP_EXEC_WRIT
+# if __gnu_linux__
+/* This macro indicates it may be forbidden to map anonymous memory
+   with both write and execute permission.  Code compiled when this
+   option is defined will attempt to map such pages once, but if it
+   fails, it falls back to creating a temporary file in a writable and
+   executable filesystem and mapping pages from it into separate
+   locations in the virtual memory space, one location writable and
+   another executable.  */
+#  define FFI_MMAP_EXEC_WRIT 1
+# endif
+#endif
+
+#if FFI_CLOSURES
+
+# if FFI_MMAP_EXEC_WRIT
+
+#define USE_LOCKS 1
+#define USE_DL_PREFIX 1
+#define USE_BUILTIN_FFS 1
+
+/* We need to use mmap, not sbrk.  */
+#define HAVE_MORECORE 0
+
+/* We could, in theory, support mremap, but it wouldn't buy us anything.  */
+#define HAVE_MREMAP 0
+
+/* We have no use for this, so save some code and data.  */
+#define NO_MALLINFO 1
+
+/* We need all allocations to be in regular segments, otherwise we
+   lose track of the corresponding code address.  */
+#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
+
+/* Don't allocate more than a page unless needed.  */
+#define DEFAULT_GRANULARITY ((size_t)malloc_getpagesize)
+
+#if FFI_CLOSURE_TEST
+/* Don't release single pages, to avoid a worst-case scenario of
+   continuously allocating and releasing single pages, but release
+   pairs of pages, which should do just as well given that allocations
+   are likely to be small.  */
+#define DEFAULT_TRIM_THRESHOLD ((size_t)malloc_getpagesize)
+#endif
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdio.h>
+#include <mntent.h>
+#include <sys/param.h>
+#include <pthread.h>
+
+/* We don't want sys/mman.h to be included after we redefine mmap and
+   dlmunmap.  */
+#include <sys/mman.h>
+#define LACKS_SYS_MMAN_H 1
+
+#define MAYBE_UNUSED __attribute__((__unused__))
+
+/* Declare all functions defined in dlmalloc.c as static.  */
+static void *dlmalloc(size_t);
+static void dlfree(void*);
+static void *dlcalloc(size_t, size_t) MAYBE_UNUSED;
+static void *dlrealloc(void *, size_t) MAYBE_UNUSED;
+static void *dlmemalign(size_t, size_t) MAYBE_UNUSED;
+static void *dlvalloc(size_t) MAYBE_UNUSED;
+static int dlmallopt(int, int) MAYBE_UNUSED;
+static size_t dlmalloc_footprint(void) MAYBE_UNUSED;
+static size_t dlmalloc_max_footprint(void) MAYBE_UNUSED;
+static void** dlindependent_calloc(size_t, size_t, void**) MAYBE_UNUSED;
+static void** dlindependent_comalloc(size_t, size_t*, void**) MAYBE_UNUSED;
+static void *dlpvalloc(size_t) MAYBE_UNUSED;
+static int dlmalloc_trim(size_t) MAYBE_UNUSED;
+static size_t dlmalloc_usable_size(void*) MAYBE_UNUSED;
+static void dlmalloc_stats(void) MAYBE_UNUSED;
+
+/* Use these for mmap and munmap within dlmalloc.c.  */
+static void *dlmmap(void *, size_t, int, int, int, off_t);
+static int dlmunmap(void *, size_t);
+
+#define mmap dlmmap
+#define munmap dlmunmap
+
+#include "dlmalloc.c"
+
+#undef mmap
+#undef munmap
+
+/* A mutex used to synchronize access to *exec* variables in this file.  */
+static pthread_mutex_t open_temp_exec_file_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+/* A file descriptor of a temporary file from which we'll map
+   executable pages.  */
+static int execfd = -1;
+
+/* The amount of space already allocated from the temporary file.  */
+static size_t execsize = 0;
+
+/* Open a temporary file name, and immediately unlink it.  */
+static int
+open_temp_exec_file_name (char *name)
+{
+  int fd = mkstemp (name);
+
+  if (fd != -1)
+    unlink (name);
+
+  return fd;
+}
+
+/* Open a temporary file in the named directory.  */
+static int
+open_temp_exec_file_dir (const char *dir)
+{
+  static const char suffix[] = "/ffiXXXXXX";
+  int lendir = strlen (dir);
+  char *tempname = __builtin_alloca (lendir + sizeof (suffix));
+
+  if (!tempname)
+    return -1;
+
+  memcpy (tempname, dir, lendir);
+  memcpy (tempname + lendir, suffix, sizeof (suffix));
+
+  return open_temp_exec_file_name (tempname);
+}
+
+/* Open a temporary file in the directory in the named environment
+   variable.  */
+static int
+open_temp_exec_file_env (const char *envvar)
+{
+  const char *value = getenv (envvar);
+
+  if (!value)
+    return -1;
+
+  return open_temp_exec_file_dir (value);
+}
+
+/* Open a temporary file in an executable and writable mount point
+   listed in the mounts file.  Subsequent calls with the same mounts
+   keep searching for mount points in the same file.  Providing NULL
+   as the mounts file closes the file.  */
+static int
+open_temp_exec_file_mnt (const char *mounts)
+{
+  static const char *last_mounts;
+  static FILE *last_mntent;
+
+  if (mounts != last_mounts)
+    {
+      if (last_mntent)
+	endmntent (last_mntent);
+
+      last_mounts = mounts;
+
+      if (mounts)
+	last_mntent = setmntent (mounts, "r");
+      else
+	last_mntent = NULL;
+    }
+
+  if (!last_mntent)
+    return -1;
+
+  for (;;)
+    {
+      int fd;
+      struct mntent mnt;
+      char buf[MAXPATHLEN * 3];
+
+      if (getmntent_r (last_mntent, &mnt, buf, sizeof (buf)))
+	return -1;
+
+      if (hasmntopt (&mnt, "ro")
+	  || hasmntopt (&mnt, "noexec")
+	  || access (mnt.mnt_dir, W_OK))
+	continue;
+
+      fd = open_temp_exec_file_dir (mnt.mnt_dir);
+
+      if (fd != -1)
+	return fd;
+    }
+}
+
+/* Instructions to look for a location to hold a temporary file that
+   can be mapped in for execution.  */
+static struct
+{
+  int (*func)(const char *);
+  const char *arg;
+  int repeat;
+} open_temp_exec_file_opts[] = {
+  { open_temp_exec_file_env, "TMPDIR", 0 },
+  { open_temp_exec_file_dir, "/tmp", 0 },
+  { open_temp_exec_file_dir, "/var/tmp", 0 },
+  { open_temp_exec_file_dir, "/dev/shm", 0 },
+  { open_temp_exec_file_env, "HOME", 0 },
+  { open_temp_exec_file_mnt, "/etc/mtab", 1 },
+  { open_temp_exec_file_mnt, "/proc/mounts", 1 },
+};
+
+/* Current index into open_temp_exec_file_opts.  */
+static int open_temp_exec_file_opts_idx = 0;
+
+/* Reset the current multi-call function, then advance to the next entry.
+   If we're at the last, go back to the first and return nonzero,
+   otherwise return zero.  */
+static int
+open_temp_exec_file_opts_next (void)
+{
+  if (open_temp_exec_file_opts[open_temp_exec_file_opts_idx].repeat)
+    open_temp_exec_file_opts[open_temp_exec_file_opts_idx].func (NULL);
+
+  open_temp_exec_file_opts_idx++;
+  if (open_temp_exec_file_opts_idx
+      == (sizeof (open_temp_exec_file_opts)
+	  / sizeof (*open_temp_exec_file_opts)))
+    {
+      open_temp_exec_file_opts_idx = 0;
+      return 1;
+    }
+
+  return 0;
+}
+
+/* Return a file descriptor of a temporary zero-sized file in a
+   writable and executable filesystem.  */
+static int
+open_temp_exec_file (void)
+{
+  int fd;
+
+  do
+    {
+      fd = open_temp_exec_file_opts[open_temp_exec_file_opts_idx].func
+	(open_temp_exec_file_opts[open_temp_exec_file_opts_idx].arg);
+
+      if (!open_temp_exec_file_opts[open_temp_exec_file_opts_idx].repeat
+	  || fd == -1)
+	{
+	  if (open_temp_exec_file_opts_next ())
+	    break;
+	}
+    }
+  while (fd == -1);
+
+  return fd;
+}
+
+/* Map in a chunk of memory from the temporary exec file into separate
+   locations in the virtual memory address space, one writable and one
+   executable.  Returns the address of the writable portion, after
+   storing an offset to the corresponding executable portion at the
+   last word of the requested chunk.  */
+static void *
+dlmmap_locked (void *start, size_t length, int prot, int flags, off_t offset)
+{
+  void *ptr;
+
+  if (execfd == -1)
+    {
+      open_temp_exec_file_opts_idx = 0;
+    retry_open:
+      execfd = open_temp_exec_file ();
+      if (execfd == -1)
+	return MFAIL;
+    }
+
+  offset = execsize;
+
+  if (ftruncate (execfd, offset + length))
+    return MFAIL;
+
+  flags &= ~(MAP_PRIVATE | MAP_ANONYMOUS);
+  flags |= MAP_SHARED;
+
+  ptr = mmap (NULL, length, (prot & ~PROT_WRITE) | PROT_EXEC,
+	      flags, execfd, offset);
+  if (ptr == MFAIL)
+    {
+      if (!offset)
+	{
+	  close (execfd);
+	  goto retry_open;
+	}
+      ftruncate (execfd, offset);
+      return MFAIL;
+    }
+  else if (!offset
+	   && open_temp_exec_file_opts[open_temp_exec_file_opts_idx].repeat)
+    open_temp_exec_file_opts_next ();
+
+  start = mmap (start, length, prot, flags, execfd, offset);
+
+  if (start == MFAIL)
+    {
+      munmap (ptr, length);
+      ftruncate (execfd, offset);
+      return start;
+    }
+
+  mmap_exec_offset ((char *)start, length) = (char*)ptr - (char*)start;
+
+  execsize += length;
+
+  return start;
+}
+
+/* Map in a writable and executable chunk of memory if possible.
+   Failing that, fall back to dlmmap_locked.  */
+static void *
+dlmmap (void *start, size_t length, int prot,
+	int flags, int fd, off_t offset)
+{
+  void *ptr;
+
+  assert (start == NULL && length % malloc_getpagesize == 0
+	  && prot == (PROT_READ | PROT_WRITE)
+	  && flags == (MAP_PRIVATE | MAP_ANONYMOUS)
+	  && fd == -1 && offset == 0);
+
+#if FFI_CLOSURE_TEST
+  printf ("mapping in %zi\n", length);
+#endif
+
+  if (execfd == -1)
+    {
+      ptr = mmap (start, length, prot | PROT_EXEC, flags, fd, offset);
+
+      if (ptr != MFAIL || (errno != EPERM && errno != EACCES))
+	/* Cool, no need to mess with separate segments.  */
+	return ptr;
+
+      /* If MREMAP_DUP is ever introduced and implemented, try mmap
+	 with ((prot & ~PROT_WRITE) | PROT_EXEC) and mremap with
+	 MREMAP_DUP and prot at this point.  */
+    }
+
+  if (execsize == 0 || execfd == -1)
+    {
+      pthread_mutex_lock (&open_temp_exec_file_mutex);
+      ptr = dlmmap_locked (start, length, prot, flags, offset);
+      pthread_mutex_unlock (&open_temp_exec_file_mutex);
+
+      return ptr;
+    }
+
+  return dlmmap_locked (start, length, prot, flags, offset);
+}
+
+/* Release memory at the given address, as well as the corresponding
+   executable page if it's separate.  */
+static int
+dlmunmap (void *start, size_t length)
+{
+  /* We don't bother decreasing execsize or truncating the file, since
+     we can't quite tell whether we're unmapping the end of the file.
+     We don't expect frequent deallocation anyway.  If we did, we
+     could locate pages in the file by writing to the pages being
+     deallocated and checking that the file contents change.
+     Yuck.  */
+  msegmentptr seg = segment_holding (gm, start);
+  void *code;
+
+#if FFI_CLOSURE_TEST
+  printf ("unmapping %zi\n", length);
+#endif
+
+  if (seg && (code = add_segment_exec_offset (start, seg)) != start)
+    {
+      int ret = munmap (code, length);
+      if (ret)
+	return ret;
+    }
+
+  return munmap (start, length);
+}
+
+#if FFI_CLOSURE_FREE_CODE
+/* Return segment holding given code address.  */
+static msegmentptr
+segment_holding_code (mstate m, char* addr)
+{
+  msegmentptr sp = &m->seg;
+  for (;;) {
+    if (addr >= add_segment_exec_offset (sp->base, sp)
+	&& addr < add_segment_exec_offset (sp->base, sp) + sp->size)
+      return sp;
+    if ((sp = sp->next) == 0)
+      return 0;
+  }
+}
+#endif
+
+/* Allocate a chunk of memory with the given size.  Returns a pointer
+   to the writable address, and sets *CODE to the executable
+   corresponding virtual address.  */
+void *
+ffi_closure_alloc (size_t size, void **code)
+{
+  void *ptr;
+
+  if (!code)
+    return NULL;
+
+  ptr = dlmalloc (size);
+
+  if (ptr)
+    {
+      msegmentptr seg = segment_holding (gm, ptr);
+
+      *code = add_segment_exec_offset (ptr, seg);
+    }
+
+  return ptr;
+}
+
+/* Release a chunk of memory allocated with ffi_closure_alloc.  If
+   FFI_CLOSURE_FREE_CODE is nonzero, the given address may be either
+   the writable or the executable address.  Otherwise, only the
+   writable address may be passed here.  */
+void
+ffi_closure_free (void *ptr)
+{
+#if FFI_CLOSURE_FREE_CODE
+  msegmentptr seg = segment_holding_code (gm, ptr);
+
+  if (seg)
+    ptr = sub_segment_exec_offset (ptr, seg);
+#endif
+
+  dlfree (ptr);
+}
+
+
+#if FFI_CLOSURE_TEST
+/* Do some internal sanity testing to make sure allocation and
+   deallocation of pages are working as intended.  */
+int main ()
+{
+  void *p[3];
+#define GET(idx, len) do { p[idx] = dlmalloc (len); printf ("allocated %zi for p[%i]\n", (len), (idx)); } while (0)
+#define PUT(idx) do { printf ("freeing p[%i]\n", (idx)); dlfree (p[idx]); } while (0)
+  GET (0, malloc_getpagesize / 2);
+  GET (1, 2 * malloc_getpagesize - 64 * sizeof (void*));
+  PUT (1);
+  GET (1, 2 * malloc_getpagesize);
+  GET (2, malloc_getpagesize / 2);
+  PUT (1);
+  PUT (0);
+  PUT (2);
+  return 0;
+}
+#endif /* FFI_CLOSURE_TEST */
+# else /* ! FFI_MMAP_EXEC_WRIT */
+
+/* On many systems, memory returned by malloc is writable and
+   executable, so just use it.  */
+
+#include <stdlib.h>
+
+void *
+ffi_closure_alloc (size_t size, void **code)
+{
+  if (!code)
+    return NULL;
+
+  return *code = malloc (size);
+}
+
+void
+ffi_closure_free (void *ptr)
+{
+  free (ptr);
+}
+
+# endif /* ! FFI_MMAP_EXEC_WRIT */
+#endif /* FFI_CLOSURES */
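
The underlying trick can be seen in isolation in the sketch below.
It's not part of the patch; it assumes GNU/Linux on x86 and omits
most error handling.  The same file page is mapped twice, once
writable and once executable, so no page is ever both:

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/mman.h>

  int
  main (void)
  {
    size_t len = (size_t) sysconf (_SC_PAGESIZE);
    char path[] = "/tmp/ffiXXXXXX";
    int fd = mkstemp (path);
    unsigned char *wr, *ex;

    if (fd == -1 || unlink (path) || ftruncate (fd, len))
      return 1;

    /* Two views of the same file page at different addresses.  */
    wr = mmap (NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    ex = mmap (NULL, len, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
    if (wr == MAP_FAILED || ex == MAP_FAILED)
      return 1;

    /* Write an x86 "ret" instruction through the writable view,
       then execute it through the executable view.  */
    wr[0] = 0xc3;
    ((void (*)(void)) ex) ();

    printf ("wrote at %p, executed at %p\n", (void *) wr, (void *) ex);
    return 0;
  }
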
Index: libffi/src/dlmalloc.c
===================================================================
--- libffi/src/dlmalloc.c.orig	2007-01-24 03:18:07.000000000 -0200
+++ libffi/src/dlmalloc.c	2007-01-24 03:18:54.000000000 -0200
@@ -1879,11 +1879,47 @@ struct malloc_segment {
   char*        base;             /* base address */
   size_t       size;             /* allocated size */
   struct malloc_segment* next;   /* ptr to next segment */
+#if FFI_MMAP_EXEC_WRIT
+  /* The mmap magic is supposed to store the offset to the
+     corresponding executable segment at the very end of the
+     requested block.  */
+
+# define mmap_exec_offset(b,s) (*(ptrdiff_t*)((b)+(s)-sizeof(ptrdiff_t)))
+
+  /* We can only merge segments if their corresponding executable
+     segments are at identical offsets.  */
+# define check_segment_merge(S,b,s) \
+  (mmap_exec_offset((b),(s)) == (S)->exec_offset)
+
+# define add_segment_exec_offset(p,S) ((char*)(p) + (S)->exec_offset)
+# define sub_segment_exec_offset(p,S) ((char*)(p) - (S)->exec_offset)
+
+  /* The removal of sflags only works with HAVE_MORECORE == 0.  */
+
+# define get_segment_flags(S)   (IS_MMAPPED_BIT)
+# define set_segment_flags(S,v) \
+  (((v) != IS_MMAPPED_BIT) ? (ABORT, (v)) :				\
+   (((S)->exec_offset =							\
+     mmap_exec_offset((S)->base, (S)->size)),				\
+    (mmap_exec_offset((S)->base + (S)->exec_offset, (S)->size) !=	\
+     (S)->exec_offset) ? (ABORT, (v)) :					\
+   (mmap_exec_offset((S)->base, (S)->size) = 0), (v)))
+
+  /* We use an offset here, instead of a pointer, because then, when
+     base changes, we don't have to modify this.  On architectures
+     with segmented addresses, this might not work.  */
+  ptrdiff_t    exec_offset;
+#else
+
+# define get_segment_flags(S)   ((S)->sflags)
+# define set_segment_flags(S,v) ((S)->sflags = (v))
+# define check_segment_merge(S,b,s) (1)
+
   flag_t       sflags;           /* mmap and extern flag */
+#endif
 };
 
-#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
-#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
+#define is_mmapped_segment(S)  (get_segment_flags(S) & IS_MMAPPED_BIT)
+#define is_extern_segment(S)   (get_segment_flags(S) & EXTERN_BIT)
 
 typedef struct malloc_segment  msegment;
 typedef struct malloc_segment* msegmentptr;
@@ -3290,7 +3326,7 @@ static void add_segment(mstate m, char* 
   *ss = m->seg; /* Push current record */
   m->seg.base = tbase;
   m->seg.size = tsize;
-  m->seg.sflags = mmapped;
+  set_segment_flags(&m->seg, mmapped);
   m->seg.next = ss;
 
   /* Insert trailing fenceposts */
@@ -3393,7 +3429,7 @@ static void* sys_alloc(mstate m, size_t 
             if (end != CMFAIL)
               asize += esize;
             else {            /* Can't use; try to release */
-              CALL_MORECORE(-asize);
+              (void)CALL_MORECORE(-asize);
               br = CMFAIL;
             }
           }
@@ -3450,7 +3486,7 @@ static void* sys_alloc(mstate m, size_t 
     if (!is_initialized(m)) { /* first-time initialization */
       m->seg.base = m->least_addr = tbase;
       m->seg.size = tsize;
-      m->seg.sflags = mmap_flag;
+      set_segment_flags(&m->seg, mmap_flag);
       m->magic = mparams.magic;
       init_bins(m);
       if (is_global(m)) 
@@ -3469,7 +3505,8 @@ static void* sys_alloc(mstate m, size_t 
         sp = sp->next;
       if (sp != 0 &&
           !is_extern_segment(sp) &&
-          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
+	  check_segment_merge(sp, tbase, tsize) &&
+          (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag &&
           segment_holds(sp, m->top)) { /* append */
         sp->size += tsize;
         init_top(m, m->top, m->topsize + tsize);
@@ -3482,7 +3519,8 @@ static void* sys_alloc(mstate m, size_t 
           sp = sp->next;
         if (sp != 0 &&
             !is_extern_segment(sp) &&
-            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
+	    check_segment_merge(sp, tbase, tsize) &&
+            (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag) {
           char* oldbase = sp->base;
           sp->base = tbase;
           sp->size += tsize;
@@ -4397,7 +4435,7 @@ mspace create_mspace(size_t capacity, in
     char* tbase = (char*)(CALL_MMAP(tsize));
     if (tbase != CMFAIL) {
       m = init_user_mstate(tbase, tsize);
-      m->seg.sflags = IS_MMAPPED_BIT;
+      set_segment_flags(&m->seg, IS_MMAPPED_BIT);
       set_lock(m, locked);
     }
   }
@@ -4412,7 +4450,7 @@ mspace create_mspace_with_base(void* bas
   if (capacity > msize + TOP_FOOT_SIZE &&
       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
     m = init_user_mstate((char*)base, capacity);
-    m->seg.sflags = EXTERN_BIT;
+    set_segment_flags(&m->seg, EXTERN_BIT);
     set_lock(m, locked);
   }
   return (mspace)m;
@@ -4426,7 +4464,7 @@ size_t destroy_mspace(mspace msp) {
     while (sp != 0) {
       char* base = sp->base;
       size_t size = sp->size;
-      flag_t flag = sp->sflags;
+      flag_t flag = get_segment_flags(sp);
       sp = sp->next;
       if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
           CALL_MUNMAP(base, size) == 0)
Index: libffi/Makefile.am
===================================================================
--- libffi/Makefile.am.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/Makefile.am	2007-01-24 03:18:54.000000000 -0200
@@ -78,7 +78,7 @@ toolexeclib_LTLIBRARIES = libffi.la
 noinst_LTLIBRARIES = libffi_convenience.la
 
 libffi_la_SOURCES = src/debug.c src/prep_cif.c src/types.c \
-		src/raw_api.c src/java_raw_api.c
+		src/raw_api.c src/java_raw_api.c src/closures.c
 
 nodist_libffi_la_SOURCES =
 
Index: libffi/Makefile.in
===================================================================
--- libffi/Makefile.in.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/Makefile.in	2007-01-24 03:18:54.000000000 -0200
@@ -92,7 +92,7 @@ LTLIBRARIES = $(noinst_LTLIBRARIES) $(to
 libffi_la_LIBADD =
 am__dirstamp = $(am__leading_dot)dirstamp
 am_libffi_la_OBJECTS = src/debug.lo src/prep_cif.lo src/types.lo \
-	src/raw_api.lo src/java_raw_api.lo
+	src/raw_api.lo src/java_raw_api.lo src/closures.lo
 @MIPS_IRIX_TRUE@am__objects_1 = src/mips/ffi.lo src/mips/o32.lo \
 @MIPS_IRIX_TRUE@	src/mips/n32.lo
 @MIPS_LINUX_TRUE@am__objects_2 = src/mips/ffi.lo src/mips/o32.lo
@@ -141,7 +141,7 @@ libffi_la_OBJECTS = $(am_libffi_la_OBJEC
 	$(nodist_libffi_la_OBJECTS)
 libffi_convenience_la_LIBADD =
 am__objects_24 = src/debug.lo src/prep_cif.lo src/types.lo \
-	src/raw_api.lo src/java_raw_api.lo
+	src/raw_api.lo src/java_raw_api.lo src/closures.lo
 am_libffi_convenience_la_OBJECTS = $(am__objects_24)
 am__objects_25 = $(am__objects_1) $(am__objects_2) $(am__objects_3) \
 	$(am__objects_4) $(am__objects_5) $(am__objects_6) \
@@ -416,7 +416,7 @@ MAKEOVERRIDES = 
 toolexeclib_LTLIBRARIES = libffi.la
 noinst_LTLIBRARIES = libffi_convenience.la
 libffi_la_SOURCES = src/debug.c src/prep_cif.c src/types.c \
-		src/raw_api.c src/java_raw_api.c
+		src/raw_api.c src/java_raw_api.c src/closures.c
 
 nodist_libffi_la_SOURCES = $(am__append_1) $(am__append_2) \
 	$(am__append_3) $(am__append_4) $(am__append_5) \
@@ -534,6 +534,7 @@ src/prep_cif.lo: src/$(am__dirstamp) src
 src/types.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
 src/raw_api.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
 src/java_raw_api.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
+src/closures.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
 src/mips/$(am__dirstamp):
 	@$(mkdir_p) src/mips
 	@: > src/mips/$(am__dirstamp)
@@ -729,6 +730,8 @@ mostlyclean-compile:
 	-rm -f src/arm/ffi.lo
 	-rm -f src/arm/sysv.$(OBJEXT)
 	-rm -f src/arm/sysv.lo
+	-rm -f src/closures.$(OBJEXT)
+	-rm -f src/closures.lo
 	-rm -f src/cris/ffi.$(OBJEXT)
 	-rm -f src/cris/ffi.lo
 	-rm -f src/cris/sysv.$(OBJEXT)
@@ -827,6 +830,7 @@ mostlyclean-compile:
 distclean-compile:
 	-rm -f *.tab.c
 
+@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/closures.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/debug.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/java_raw_api.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/prep_cif.Plo@am__quote@
Index: libffi/src/prep_cif.c
===================================================================
--- libffi/src/prep_cif.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/prep_cif.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   prep_cif.c - Copyright (c) 1996, 1998  Red Hat, Inc.
+   prep_cif.c - Copyright (c) 1996, 1998, 2007  Red Hat, Inc.
 
    Permission is hereby granted, free of charge, to any person obtaining
    a copy of this software and associated documentation files (the
@@ -158,3 +158,16 @@ ffi_status ffi_prep_cif(ffi_cif *cif, ff
   return ffi_prep_cif_machdep(cif);
 }
 #endif /* not __CRIS__ */
+
+#if FFI_CLOSURES
+
+ffi_status
+ffi_prep_closure (ffi_closure* closure,
+		  ffi_cif* cif,
+		  void (*fun)(ffi_cif*,void*,void**,void*),
+		  void *user_data)
+{
+  return ffi_prep_closure_loc (closure, cif, fun, user_data, closure);
+}
+
+#endif
Index: libffi/src/raw_api.c
===================================================================
--- libffi/src/raw_api.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/raw_api.c	2007-01-24 03:18:54.000000000 -0200
@@ -209,22 +209,20 @@ ffi_translate_args (ffi_cif *cif, void *
   (*cl->fun) (cif, rvalue, raw, cl->user_data);
 }
 
-/* Again, here is the generic version of ffi_prep_raw_closure, which
- * will install an intermediate "hub" for translation of arguments from
- * the pointer-array format, to the raw format */
-
 ffi_status
-ffi_prep_raw_closure (ffi_raw_closure* cl,
-		      ffi_cif *cif,
-		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
-		      void *user_data)
+ffi_prep_raw_closure_loc (ffi_raw_closure* cl,
+			  ffi_cif *cif,
+			  void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			  void *user_data,
+			  void *codeloc)
 {
   ffi_status status;
 
-  status = ffi_prep_closure ((ffi_closure*) cl,
-			     cif,
-			     &ffi_translate_args,
-			     (void*)cl);
+  status = ffi_prep_closure_loc ((ffi_closure*) cl,
+				 cif,
+				 &ffi_translate_args,
+				 codeloc,
+				 codeloc);
   if (status == FFI_OK)
     {
       cl->fun       = fun;
@@ -236,4 +234,22 @@ ffi_prep_raw_closure (ffi_raw_closure* c
 
 #endif /* FFI_CLOSURES */
 #endif /* !FFI_NATIVE_RAW_API */
+
+#if FFI_CLOSURES
+
+/* Again, here is the generic version of ffi_prep_raw_closure, which
+ * will install an intermediate "hub" for translation of arguments from
+ * the pointer-array format, to the raw format */
+
+ffi_status
+ffi_prep_raw_closure (ffi_raw_closure* cl,
+		      ffi_cif *cif,
+		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+		      void *user_data)
+{
+  return ffi_prep_raw_closure_loc (cl, cif, fun, user_data, cl);
+}
+
+#endif /* FFI_CLOSURES */
+
 #endif /* !FFI_NO_RAW_API */
Index: libffi/src/java_raw_api.c
===================================================================
--- libffi/src/java_raw_api.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/java_raw_api.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   java_raw_api.c - Copyright (c) 1999  Red Hat, Inc.
+   java_raw_api.c - Copyright (c) 1999, 2007  Red Hat, Inc.
 
    Cloned from raw_api.c
 
@@ -307,22 +307,20 @@ ffi_java_translate_args (ffi_cif *cif, v
   ffi_java_raw_to_rvalue (cif, rvalue);
 }
 
-/* Again, here is the generic version of ffi_prep_raw_closure, which
- * will install an intermediate "hub" for translation of arguments from
- * the pointer-array format, to the raw format */
-
 ffi_status
-ffi_prep_java_raw_closure (ffi_raw_closure* cl,
-		      ffi_cif *cif,
-		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
-		      void *user_data)
+ffi_prep_java_raw_closure_loc (ffi_raw_closure* cl,
+			       ffi_cif *cif,
+			       void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			       void *user_data,
+			       void *codeloc)
 {
   ffi_status status;
 
-  status = ffi_prep_closure ((ffi_closure*) cl,
-			     cif,
-			     &ffi_java_translate_args,
-			     (void*)cl);
+  status = ffi_prep_closure_loc ((ffi_closure*) cl,
+				 cif,
+				 &ffi_java_translate_args,
+				 codeloc,
+				 codeloc);
   if (status == FFI_OK)
     {
       cl->fun       = fun;
@@ -332,6 +330,19 @@ ffi_prep_java_raw_closure (ffi_raw_closu
   return status;
 }
 
+/* Again, here is the generic version of ffi_prep_java_raw_closure, which
+ * will install an intermediate "hub" for translation of arguments from
+ * the pointer-array format, to the raw format */
+
+ffi_status
+ffi_prep_java_raw_closure (ffi_raw_closure* cl,
+			   ffi_cif *cif,
+			   void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			   void *user_data)
+{
+  return ffi_prep_java_raw_closure_loc (cl, cif, fun, user_data, cl);
+}
+
 #endif /* FFI_CLOSURES */
 #endif /* !FFI_NATIVE_RAW_API */
 #endif /* !FFI_NO_RAW_API */
Index: libffi/src/alpha/ffi.c
===================================================================
--- libffi/src/alpha/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/alpha/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1998, 2001 Red Hat, Inc.
+   ffi.c - Copyright (c) 1998, 2001, 2007 Red Hat, Inc.
    
    Alpha Foreign Function Interface 
 
@@ -146,10 +146,11 @@ ffi_call(ffi_cif *cif, void (*fn)(), voi
 
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
 
Index: libffi/src/cris/ffi.c
===================================================================
--- libffi/src/cris/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/cris/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -2,6 +2,7 @@
    ffi.c - Copyright (c) 1998 Cygnus Solutions
            Copyright (c) 2004 Simon Posnjak
 	   Copyright (c) 2005 Axis Communications AB
+	   Copyright (C) 2007 Free Software Foundation, Inc.
 
    CRIS Foreign Function Interface
 
@@ -360,10 +361,11 @@ ffi_prep_closure_inner (void **params, f
 /* API function: Prepare the trampoline.  */
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif *, void *, void **, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif *, void *, void **, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   void *innerfn = ffi_prep_closure_inner;
   FFI_ASSERT (cif->abi == FFI_SYSV);
@@ -375,7 +377,7 @@ ffi_prep_closure (ffi_closure* closure,
   memcpy (closure->tramp + ffi_cris_trampoline_fn_offset,
 	  &innerfn, sizeof (void *));
   memcpy (closure->tramp + ffi_cris_trampoline_closure_offset,
-	  &closure, sizeof (void *));
+	  &codeloc, sizeof (void *));
 
   return FFI_OK;
 }
Index: libffi/src/frv/ffi.c
===================================================================
--- libffi/src/frv/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/frv/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,6 @@
 /* -----------------------------------------------------------------------
    ffi.c - Copyright (c) 2004  Anthony Green
+   Copyright (C) 2007  Free Software Foundation, Inc.
    
    FR-V Foreign Function Interface 
 
@@ -243,14 +244,15 @@ void ffi_closure_eabi (unsigned arg1, un
 }
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp = (unsigned int *) &closure->tramp[0];
   unsigned long fn = (long) ffi_closure_eabi;
-  unsigned long cls = (long) closure;
+  unsigned long cls = (long) codeloc;
 #ifdef __FRV_FDPIC__
   register void *got __asm__("gr15");
 #endif
@@ -259,7 +261,7 @@ ffi_prep_closure (ffi_closure* closure,
   fn = (unsigned long) ffi_closure_eabi;
 
 #ifdef __FRV_FDPIC__
-  tramp[0] = &tramp[2];
+  tramp[0] = &((unsigned int *)codeloc)[2];
   tramp[1] = got;
   tramp[2] = 0x8cfc0000 + (fn  & 0xffff); /* setlos lo(fn), gr6    */
   tramp[3] = 0x8efc0000 + (cls & 0xffff); /* setlos lo(cls), gr7   */
@@ -281,7 +283,8 @@ ffi_prep_closure (ffi_closure* closure,
 
   /* Cache flushing.  */
   for (i = 0; i < FFI_TRAMPOLINE_SIZE; i++)
-    __asm__ volatile ("dcf @(%0,%1)\n\tici @(%0,%1)" :: "r" (tramp), "r" (i));
+    __asm__ volatile ("dcf @(%0,%1)\n\tici @(%2,%1)" :: "r" (tramp), "r" (i),
+		      "r" (codeloc));
 
   return FFI_OK;
 }
Index: libffi/src/ia64/ffi.c
===================================================================
--- libffi/src/ia64/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/ia64/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1998 Red Hat, Inc.
+   ffi.c - Copyright (c) 1998, 2007 Red Hat, Inc.
 	   Copyright (c) 2000 Hewlett Packard Company
    
    IA64 Foreign Function Interface 
@@ -400,10 +400,11 @@ ffi_call(ffi_cif *cif, void (*fn)(), voi
 extern void ffi_closure_unix ();
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   /* The layout of a function descriptor.  A C function pointer really 
      points to one of these.  */
@@ -430,7 +431,7 @@ ffi_prep_closure (ffi_closure* closure,
 
   tramp->code_pointer = fd->code_pointer;
   tramp->real_gp = fd->gp;
-  tramp->fake_gp = (UINT64)(PTR64)closure;
+  tramp->fake_gp = (UINT64)(PTR64)codeloc;
   closure->cif = cif;
   closure->user_data = user_data;
   closure->fun = fun;
Index: libffi/src/mips/ffi.c
===================================================================
--- libffi/src/mips/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/mips/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1996 Red Hat, Inc.
+   ffi.c - Copyright (c) 1996, 2007 Red Hat, Inc.
    
    MIPS Foreign Function Interface 
 
@@ -497,14 +497,15 @@ extern void ffi_closure_O32(void);
 #endif /* FFI_MIPS_O32 */
 
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-		  ffi_cif *cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp = (unsigned int *) &closure->tramp[0];
   unsigned int fn;
-  unsigned int ctx = (unsigned int) closure;
+  unsigned int ctx = (unsigned int) codeloc;
 
 #if defined(FFI_MIPS_O32)
   FFI_ASSERT(cif->abi == FFI_O32 || cif->abi == FFI_O32_SOFT_FLOAT);
@@ -525,7 +526,7 @@ ffi_prep_closure (ffi_closure *closure,
   closure->user_data = user_data;
 
   /* XXX this is available on Linux, but anything else? */
-  cacheflush (tramp, FFI_TRAMPOLINE_SIZE, ICACHE);
+  cacheflush (codeloc, FFI_TRAMPOLINE_SIZE, ICACHE);
 
   return FFI_OK;
 }
Index: libffi/src/pa/ffi.c
===================================================================
--- libffi/src/pa/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/pa/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -613,10 +613,11 @@ ffi_status ffi_closure_inner_pa32(ffi_cl
 extern void ffi_closure_pa32(void);
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   UINT32 *tramp = (UINT32 *)(closure->tramp);
 #ifdef PA_HPUX
Index: libffi/src/powerpc/ffi.c
===================================================================
--- libffi/src/powerpc/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/powerpc/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,6 @@
 /* -----------------------------------------------------------------------
    ffi.c - Copyright (c) 1998 Geoffrey Keating
+   Copyright (C) 2007 Free Software Foundation, Inc
 
    PowerPC Foreign Function Interface
 
@@ -834,27 +835,26 @@ ffi_call(ffi_cif *cif, void (*fn)(), voi
 #define MIN_CACHE_LINE_SIZE 8
 
 static void
-flush_icache (char *addr1, int size)
+flush_icache (char *wraddr, char *xaddr, int size)
 {
   int i;
-  char * addr;
   for (i = 0; i < size; i += MIN_CACHE_LINE_SIZE)
     {
-      addr = addr1 + i;
-      __asm__ volatile ("icbi 0,%0;" "dcbf 0,%0;"
-			: : "r" (addr) : "memory");
+      __asm__ volatile ("icbi 0,%0;" "dcbf 0,%1;"
+			: : "r" (xaddr + i), "r" (wraddr + i) : "memory");
     }
-  addr = addr1 + size - 1;
-  __asm__ volatile ("icbi 0,%0;" "dcbf 0,%0;" "sync;" "isync;"
-		    : : "r"(addr) : "memory");
+  __asm__ volatile ("icbi 0,%0;" "dcbf 0,%1;" "sync;" "isync;"
+		    : : "r"(xaddr + size - 1), "r"(wraddr + size - 1)
+		    : "memory");
 }
 #endif
 
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-		  ffi_cif *cif,
-		  void (*fun) (ffi_cif *, void *, void **, void *),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun) (ffi_cif *, void *, void **, void *),
+		      void *user_data,
+		      void *codeloc)
 {
 #ifdef POWERPC64
   void **tramp = (void **) &closure->tramp[0];
@@ -862,7 +863,7 @@ ffi_prep_closure (ffi_closure *closure,
   FFI_ASSERT (cif->abi == FFI_LINUX64);
   /* Copy function address and TOC from ffi_closure_LINUX64.  */
   memcpy (tramp, (char *) ffi_closure_LINUX64, 16);
-  tramp[2] = (void *) closure;
+  tramp[2] = (void *) codeloc;
 #else
   unsigned int *tramp;
 
@@ -878,10 +879,10 @@ ffi_prep_closure (ffi_closure *closure,
   tramp[8] = 0x7c0903a6;  /*   mtctr   r0 */
   tramp[9] = 0x4e800420;  /*   bctr */
   *(void **) &tramp[2] = (void *) ffi_closure_SYSV; /* function */
-  *(void **) &tramp[3] = (void *) closure;          /* context */
+  *(void **) &tramp[3] = (void *) codeloc;          /* context */
 
   /* Flush the icache.  */
-  flush_icache (&closure->tramp[0],FFI_TRAMPOLINE_SIZE);
+  flush_icache (tramp, codeloc, FFI_TRAMPOLINE_SIZE);
 #endif
 
   closure->cif = cif;
Index: libffi/src/powerpc/ffi_darwin.c
===================================================================
--- libffi/src/powerpc/ffi_darwin.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/powerpc/ffi_darwin.c	2007-01-24 03:18:54.000000000 -0200
@@ -3,7 +3,7 @@
 
    Copyright (C) 1998 Geoffrey Keating
    Copyright (C) 2001 John Hornkvist
-   Copyright (C) 2002, 2006 Free Software Foundation, Inc.
+   Copyright (C) 2002, 2006, 2007 Free Software Foundation, Inc.
 
    FFI support for Darwin and AIX.
    
@@ -528,10 +528,11 @@ SP current -->    +---------------------
 
 */
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
   struct ffi_aix_trampoline_struct *tramp_aix;
@@ -553,14 +554,14 @@ ffi_prep_closure (ffi_closure* closure,
       tramp[8] = 0x816b0004;  /*   lwz     r11,4(r11) static chain  */
       tramp[9] = 0x4e800420;  /*   bctr  */
       tramp[2] = (unsigned long) ffi_closure_ASM; /* function  */
-      tramp[3] = (unsigned long) closure; /* context  */
+      tramp[3] = (unsigned long) codeloc; /* context  */
 
       closure->cif = cif;
       closure->fun = fun;
       closure->user_data = user_data;
 
       /* Flush the icache. Only necessary on Darwin.  */
-      flush_range(&closure->tramp[0],FFI_TRAMPOLINE_SIZE);
+      flush_range(codeloc, FFI_TRAMPOLINE_SIZE);
 
       break;
 
@@ -573,7 +574,7 @@ ffi_prep_closure (ffi_closure* closure,
 
       tramp_aix->code_pointer = fd->code_pointer;
       tramp_aix->toc = fd->toc;
-      tramp_aix->static_chain = closure;
+      tramp_aix->static_chain = codeloc;
       closure->cif = cif;
       closure->fun = fun;
       closure->user_data = user_data;
Index: libffi/src/s390/ffi.c
===================================================================
--- libffi/src/s390/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/s390/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2000 Software AG
+   ffi.c - Copyright (c) 2000, 2007 Software AG
  
    S390 Foreign Function Interface
  
@@ -709,17 +709,18 @@ ffi_closure_helper_SYSV (ffi_closure *cl
 
 /*====================================================================*/
 /*                                                                    */
-/* Name     - ffi_prep_closure.                                       */
+/* Name     - ffi_prep_closure_loc.                                   */
 /*                                                                    */
 /* Function - Prepare a FFI closure.                                  */
 /*                                                                    */
 /*====================================================================*/
  
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-                  ffi_cif *cif,
-                  void (*fun) (ffi_cif *, void *, void **, void *),
-                  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun) (ffi_cif *, void *, void **, void *),
+		      void *user_data,
+		      void *codeloc)
 {
   FFI_ASSERT (cif->abi == FFI_SYSV);
 
@@ -728,7 +729,7 @@ ffi_prep_closure (ffi_closure *closure,
   *(short *)&closure->tramp [2] = 0x9801;   /* lm %r0,%r1,6(%r1) */
   *(short *)&closure->tramp [4] = 0x1006;
   *(short *)&closure->tramp [6] = 0x07f1;   /* br %r1 */
-  *(long  *)&closure->tramp [8] = (long)closure;
+  *(long  *)&closure->tramp [8] = (long)codeloc;
   *(long  *)&closure->tramp[12] = (long)&ffi_closure_SYSV;
 #else
   *(short *)&closure->tramp [0] = 0x0d10;   /* basr %r1,0 */
@@ -736,7 +737,7 @@ ffi_prep_closure (ffi_closure *closure,
   *(short *)&closure->tramp [4] = 0x100e;
   *(short *)&closure->tramp [6] = 0x0004;
   *(short *)&closure->tramp [8] = 0x07f1;   /* br %r1 */
-  *(long  *)&closure->tramp[16] = (long)closure;
+  *(long  *)&closure->tramp[16] = (long)codeloc;
   *(long  *)&closure->tramp[24] = (long)&ffi_closure_SYSV;
 #endif 
  
Index: libffi/src/sh/ffi.c
===================================================================
--- libffi/src/sh/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/sh/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2002, 2003, 2004, 2005, 2006 Kaz Kojima
+   ffi.c - Copyright (c) 2002, 2003, 2004, 2005, 2006, 2007 Kaz Kojima
    
    SuperH Foreign Function Interface 
 
@@ -452,10 +452,11 @@ extern void __ic_invalidate (void *line)
 #endif
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
   unsigned short insn;
@@ -475,7 +476,7 @@ ffi_prep_closure (ffi_closure* closure,
   tramp[0] = 0xd102d301;
   tramp[1] = 0x412b0000 | insn;
 #endif
-  *(void **) &tramp[2] = (void *)closure;          /* ctx */
+  *(void **) &tramp[2] = (void *)codeloc;          /* ctx */
   *(void **) &tramp[3] = (void *)ffi_closure_SYSV; /* funaddr */
 
   closure->cif = cif;
@@ -484,7 +485,7 @@ ffi_prep_closure (ffi_closure* closure,
 
 #if defined(__SH4__)
   /* Flush the icache.  */
-  __ic_invalidate(&closure->tramp[0]);
+  __ic_invalidate(codeloc);
 #endif
 
   return FFI_OK;
Index: libffi/src/sh64/ffi.c
===================================================================
--- libffi/src/sh64/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/sh64/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2003, 2004, 2006 Kaz Kojima
+   ffi.c - Copyright (c) 2003, 2004, 2006, 2007 Kaz Kojima
    
    SuperH SHmedia Foreign Function Interface 
 
@@ -283,10 +283,11 @@ extern void ffi_closure_SYSV (void);
 extern void __ic_invalidate (void *line);
 
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-		  ffi_cif *cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
 
@@ -310,8 +311,8 @@ ffi_prep_closure (ffi_closure *closure,
   tramp[2] = 0xcc000010 | (((UINT32) ffi_closure_SYSV) >> 16) << 10;
   tramp[3] = 0xc8000010 | (((UINT32) ffi_closure_SYSV) & 0xffff) << 10;
   tramp[4] = 0x6bf10600;
-  tramp[5] = 0xcc000010 | (((UINT32) closure) >> 16) << 10;
-  tramp[6] = 0xc8000010 | (((UINT32) closure) & 0xffff) << 10;
+  tramp[5] = 0xcc000010 | (((UINT32) codeloc) >> 16) << 10;
+  tramp[6] = 0xc8000010 | (((UINT32) codeloc) & 0xffff) << 10;
   tramp[7] = 0x4401fff0;
 
   closure->cif = cif;
@@ -319,7 +320,8 @@ ffi_prep_closure (ffi_closure *closure,
   closure->user_data = user_data;
 
   /* Flush the icache.  */
-  asm volatile ("ocbwb %0,0; synco; icbi %0,0; synci" : : "r" (tramp));
+  asm volatile ("ocbwb %0,0; synco; icbi %1,0; synci" : : "r" (tramp),
+		"r"(codeloc));
 
   return FFI_OK;
 }
Index: libffi/src/sparc/ffi.c
===================================================================
--- libffi/src/sparc/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/sparc/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1996, 2003, 2004 Red Hat, Inc.
+   ffi.c - Copyright (c) 1996, 2003, 2004, 2007 Red Hat, Inc.
    
    SPARC Foreign Function Interface 
 
@@ -425,10 +425,11 @@ extern void ffi_closure_v8(void);
 #endif
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp = (unsigned int *) &closure->tramp[0];
   unsigned long fn;
@@ -443,7 +444,7 @@ ffi_prep_closure (ffi_closure* closure,
   tramp[3] = 0x01000000;	/* nop			*/
   *((unsigned long *) &tramp[4]) = fn;
 #else
-  unsigned long ctx = (unsigned long) closure;
+  unsigned long ctx = (unsigned long) codeloc;
   FFI_ASSERT (cif->abi == FFI_V8);
   fn = (unsigned long) ffi_closure_v8;
   tramp[0] = 0x03000000 | fn >> 10;	/* sethi %hi(fn), %g1	*/
Index: libffi/src/x86/ffi.c
===================================================================
--- libffi/src/x86/ffi.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/x86/ffi.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1996, 1998, 1999, 2001  Red Hat, Inc.
+   ffi.c - Copyright (c) 1996, 1998, 1999, 2001, 2007  Red Hat, Inc.
            Copyright (c) 2002  Ranjit Mathew
            Copyright (c) 2002  Bo Thorsen
            Copyright (c) 2002  Roger Sayle
@@ -302,7 +302,7 @@ ffi_prep_incoming_args_SYSV(char *stack,
 ({ unsigned char *__tramp = (unsigned char*)(TRAMP); \
    unsigned int  __fun = (unsigned int)(FUN); \
    unsigned int  __ctx = (unsigned int)(CTX); \
-   unsigned int  __dis = __fun - ((unsigned int) __tramp + FFI_TRAMPOLINE_SIZE); \
+   unsigned int  __dis = __fun - (__ctx + FFI_TRAMPOLINE_SIZE); \
    *(unsigned char*) &__tramp[0] = 0xb8; \
    *(unsigned int*)  &__tramp[1] = __ctx; /* movl __ctx, %eax */ \
    *(unsigned char *)  &__tramp[5] = 0xe9; \
@@ -313,16 +313,17 @@ ffi_prep_incoming_args_SYSV(char *stack,
 /* the cif must already be prep'ed */
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   FFI_ASSERT (cif->abi == FFI_SYSV);
 
   FFI_INIT_TRAMPOLINE (&closure->tramp[0], \
 		       &ffi_closure_SYSV,  \
-		       (void*)closure);
+		       codeloc);
     
   closure->cif  = cif;
   closure->user_data = user_data;
@@ -336,10 +337,11 @@ ffi_prep_closure (ffi_closure* closure,
 #if !FFI_NO_RAW_API
 
 ffi_status
-ffi_prep_raw_closure (ffi_raw_closure* closure,
-		      ffi_cif* cif,
-		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
-		      void *user_data)
+ffi_prep_raw_closure_loc (ffi_raw_closure* closure,
+			  ffi_cif* cif,
+			  void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			  void *user_data,
+			  void *codeloc)
 {
   int i;
 
@@ -358,7 +360,7 @@ ffi_prep_raw_closure (ffi_raw_closure* c
   
 
   FFI_INIT_TRAMPOLINE (&closure->tramp[0], &ffi_closure_raw_SYSV,
-		       (void*)closure);
+		       codeloc);
     
   closure->cif  = cif;
   closure->user_data = user_data;
Index: libffi/src/x86/ffi64.c
===================================================================
--- libffi/src/x86/ffi64.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ libffi/src/x86/ffi64.c	2007-01-24 03:18:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2002  Bo Thorsen <bo@suse.de>
+   ffi.c - Copyright (c) 2002, 2007  Bo Thorsen <bo@suse.de>
    
    x86-64 Foreign Function Interface 
 
@@ -433,10 +433,11 @@ ffi_call (ffi_cif *cif, void (*fn)(), vo
 extern void ffi_closure_unix64(void);
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   volatile unsigned short *tramp;
 
@@ -445,7 +446,7 @@ ffi_prep_closure (ffi_closure* closure,
   tramp[0] = 0xbb49;		/* mov <code>, %r11	*/
   *(void * volatile *) &tramp[1] = ffi_closure_unix64;
   tramp[5] = 0xba49;		/* mov <data>, %r10	*/
-  *(void * volatile *) &tramp[6] = closure;
+  *(void * volatile *) &tramp[6] = codeloc;
 
   /* Set the carry bit iff the function uses any sse registers.
      This is clc or stc, together with the first byte of the jmp.  */
Index: boehm-gc/include/gc.h
===================================================================
--- boehm-gc/include/gc.h.orig	2007-01-24 03:17:50.000000000 -0200
+++ boehm-gc/include/gc.h	2007-01-24 03:18:54.000000000 -0200
@@ -3,6 +3,7 @@
  * Copyright (c) 1991-1995 by Xerox Corporation.  All rights reserved.
  * Copyright 1996-1999 by Silicon Graphics.  All rights reserved.
  * Copyright 1999 by Hewlett-Packard Company.  All rights reserved.
+ * Copyright (C) 2007 Free Software Foundation, Inc
  *
  * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
  * OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
@@ -602,6 +603,8 @@ GC_API GC_PTR GC_debug_realloc_replaceme
 	GC_debug_register_finalizer_ignore_self(p, f, d, of, od)
 #   define GC_REGISTER_FINALIZER_NO_ORDER(p, f, d, of, od) \
 	GC_debug_register_finalizer_no_order(p, f, d, of, od)
+#   define GC_REGISTER_FINALIZER_UNREACHABLE(p, f, d, of, od) \
+	GC_debug_register_finalizer_unreachable(p, f, d, of, od)
 #   define GC_MALLOC_STUBBORN(sz) GC_debug_malloc_stubborn(sz, GC_EXTRAS);
 #   define GC_CHANGE_STUBBORN(p) GC_debug_change_stubborn(p)
 #   define GC_END_STUBBORN_CHANGE(p) GC_debug_end_stubborn_change(p)
@@ -624,6 +627,8 @@ GC_API GC_PTR GC_debug_realloc_replaceme
 	GC_register_finalizer_ignore_self(p, f, d, of, od)
 #   define GC_REGISTER_FINALIZER_NO_ORDER(p, f, d, of, od) \
 	GC_register_finalizer_no_order(p, f, d, of, od)
+#   define GC_REGISTER_FINALIZER_UNREACHABLE(p, f, d, of, od) \
+	GC_register_finalizer_unreachable(p, f, d, of, od)
 #   define GC_MALLOC_STUBBORN(sz) GC_malloc_stubborn(sz)
 #   define GC_CHANGE_STUBBORN(p) GC_change_stubborn(p)
 #   define GC_END_STUBBORN_CHANGE(p) GC_end_stubborn_change(p)
@@ -716,6 +721,19 @@ GC_API void GC_debug_register_finalizer_
 	GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
 		  GC_finalization_proc *ofn, GC_PTR *ocd));
 
+/* This is a special finalizer that is useful when an object's  */
+/* finalizer must be run when the object is known to be no      */
+/* longer reachable, not even from other finalizable objects.   */
+/* This can be used in combination with finalizer_no_order so   */
+/* as to release resources that must not be released while an   */
+/* object can still be brought back to life by other            */
+/* finalizers.                                                  */
+GC_API void GC_register_finalizer_unreachable
+	GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
+		  GC_finalization_proc *ofn, GC_PTR *ocd));
+GC_API void GC_debug_register_finalizer_unreachable
+	GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
+		  GC_finalization_proc *ofn, GC_PTR *ocd));
 
 /* The following routine may be used to break cycles between	*/
 /* finalizable objects, thus causing cyclic finalizable		*/
Index: boehm-gc/finalize.c
===================================================================
--- boehm-gc/finalize.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ boehm-gc/finalize.c	2007-01-24 03:18:54.000000000 -0200
@@ -2,6 +2,7 @@
  * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
  * Copyright (c) 1991-1996 by Xerox Corporation.  All rights reserved.
  * Copyright (c) 1996-1999 by Silicon Graphics.  All rights reserved.
+ * Copyright (C) 2007 Free Software Foundation, Inc
 
  * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
  * OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
@@ -315,6 +316,14 @@ ptr_t p;
 {
 }
 
+/* Possible finalization_marker procedures.  Note that mark stack	*/
+/* overflow is handled by the caller, and is not a disaster.		*/
+GC_API void GC_unreachable_finalize_mark_proc(p)
+ptr_t p;
+{
+    return GC_normal_finalize_mark_proc(p);
+}
+
 
 
 /* Register a finalization function.  See gc.h for details.	*/
@@ -511,6 +520,23 @@ finalization_mark_proc * mp;
     				ocd, GC_null_finalize_mark_proc);
 }
 
+# if defined(__STDC__)
+    void GC_register_finalizer_unreachable(void * obj,
+			       GC_finalization_proc fn, void * cd,
+			       GC_finalization_proc *ofn, void ** ocd)
+# else
+    void GC_register_finalizer_unreachable(obj, fn, cd, ofn, ocd)
+    GC_PTR obj;
+    GC_finalization_proc fn;
+    GC_PTR cd;
+    GC_finalization_proc * ofn;
+    GC_PTR * ocd;
+# endif
+{
+    GC_register_finalizer_inner(obj, fn, cd, ofn,
+    				ocd, GC_unreachable_finalize_mark_proc);
+}
+
 #ifndef NO_DEBUGGING
 void GC_dump_finalization()
 {
@@ -638,9 +664,44 @@ void GC_finalize()
   	    if (curr_fo -> fo_mark_proc == GC_null_finalize_mark_proc) {
   	        GC_MARK_FO(real_ptr, GC_normal_finalize_mark_proc);
   	    }
-  	    GC_set_mark_bit(real_ptr);
+	    if (curr_fo -> fo_mark_proc != GC_unreachable_finalize_mark_proc) {
+		GC_set_mark_bit(real_ptr);
+	    }
   	}
       }
+
+      /* now revive finalize-when-unreachable objects reachable from
+	 other finalizable objects */
+      curr_fo = GC_finalize_now;
+      prev_fo = 0;
+      while (curr_fo != 0) {
+	next_fo = fo_next(curr_fo);
+	if (curr_fo -> fo_mark_proc == GC_unreachable_finalize_mark_proc) {
+	  real_ptr = (ptr_t)curr_fo -> fo_hidden_base;
+	  if (!GC_is_marked(real_ptr)) {
+	      GC_set_mark_bit(real_ptr);
+	  } else {
+	      if (prev_fo == 0)
+		GC_finalize_now = next_fo;
+	      else
+		fo_set_next(prev_fo, next_fo);
+
+              curr_fo -> fo_hidden_base =
+              		(word) HIDE_POINTER(curr_fo -> fo_hidden_base);
+              GC_words_finalized -=
+                 	ALIGNED_WORDS(curr_fo -> fo_object_size)
+              		+ ALIGNED_WORDS(sizeof(struct finalizable_object));
+
+	      i = HASH2(real_ptr, log_fo_table_size);
+	      fo_set_next (curr_fo, fo_head[i]);
+	      GC_fo_entries++;
+	      fo_head[i] = curr_fo;
+	      curr_fo = prev_fo;
+	  }
+	}
+	prev_fo = curr_fo;
+	curr_fo = next_fo;
+      }
   }
 
   /* Remove dangling disappearing links. */
Index: boehm-gc/dbg_mlc.c
===================================================================
--- boehm-gc/dbg_mlc.c.orig	2007-01-24 03:17:50.000000000 -0200
+++ boehm-gc/dbg_mlc.c	2007-01-24 03:18:54.000000000 -0200
@@ -3,6 +3,7 @@
  * Copyright (c) 1991-1995 by Xerox Corporation.  All rights reserved.
  * Copyright (c) 1997 by Silicon Graphics.  All rights reserved.
  * Copyright (c) 1999-2004 Hewlett-Packard Development Company, L.P.
+ * Copyright (C) 2007 Free Software Foundation, Inc
  *
  * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
  * OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
@@ -1118,8 +1119,8 @@ GC_PTR *ocd;
     if (0 == base) return;
     if ((ptr_t)obj - base != sizeof(oh)) {
         GC_err_printf1(
-	  "GC_debug_register_finalizer_no_order called with non-base-pointer 0x%lx\n",
-	  obj);
+	    "GC_debug_register_finalizer_no_order called with non-base-pointer 0x%lx\n",
+	    obj);
     }
     if (0 == fn) {
       GC_register_finalizer_no_order(base, 0, 0, &my_old_fn, &my_old_cd);
@@ -1129,7 +1130,41 @@ GC_PTR *ocd;
 				     &my_old_cd);
     }
     store_old(obj, my_old_fn, (struct closure *)my_old_cd, ofn, ocd);
- }
+}
+
+# ifdef __STDC__
+    void GC_debug_register_finalizer_unreachable
+    				    (GC_PTR obj, GC_finalization_proc fn,
+    				     GC_PTR cd, GC_finalization_proc *ofn,
+				     GC_PTR *ocd)
+# else
+    void GC_debug_register_finalizer_unreachable
+    				    (obj, fn, cd, ofn, ocd)
+    GC_PTR obj;
+    GC_finalization_proc fn;
+    GC_PTR cd;
+    GC_finalization_proc *ofn;
+    GC_PTR *ocd;
+# endif
+{
+    GC_finalization_proc my_old_fn;
+    GC_PTR my_old_cd;
+    ptr_t base = GC_base(obj);
+    if (0 == base) return;
+    if ((ptr_t)obj - base != sizeof(oh)) {
+        GC_err_printf1(
+	    "GC_debug_register_finalizer_unreachable called with non-base-pointer 0x%lx\n",
+	    obj);
+    }
+    if (0 == fn) {
+      GC_register_finalizer_unreachable(base, 0, 0, &my_old_fn, &my_old_cd);
+    } else {
+      GC_register_finalizer_unreachable(base, GC_debug_invoke_finalizer,
+    			    	     GC_make_closure(fn,cd), &my_old_fn,
+				     &my_old_cd);
+    }
+    store_old(obj, my_old_fn, (struct closure *)my_old_cd, ofn, ocd);
+}
 
 # ifdef __STDC__
     void GC_debug_register_finalizer_ignore_self
Index: libjava/include/jvm.h
===================================================================
--- libjava/include/jvm.h.orig	2007-01-24 03:18:08.000000000 -0200
+++ libjava/include/jvm.h	2007-01-24 03:18:54.000000000 -0200
@@ -288,7 +288,7 @@ private:
  						    _Jv_Utf8Const *method_signature,
 						    jclass *found_class,
 						    bool check_perms = true);
-  static void *create_error_method(_Jv_Utf8Const *);
+  static void *create_error_method(_Jv_Utf8Const *, jclass);
 
   /* The least significant bit of the signature pointer in a symbol
      table is set to 1 by the compiler if the reference is "special",
@@ -341,6 +341,20 @@ void *_Jv_AllocBytes (jsize size) __attr
 /* Allocate space for a new non-Java object, which does not have the usual 
    Java object header but may contain pointers to other GC'ed objects.  */
 void *_Jv_AllocRawObj (jsize size) __attribute__((__malloc__));
+/* Allocate writable and executable memory with the given size.  This
+   memory is intended to contain code, not pointers.  It returns the
+   writable address and stores the executable address in *code.  They
+   are often the same, but in some situations that forbid pages from
+   being both writable and executable, separate virtual addresses
+   referring to the same data may be used.  */
+void *_Jv_AllocClosure (jsize size, void **codep) __attribute__((__malloc__));
+/* Release memory allocated by _Jv_AllocClosure.  PTR must denote a
+   writable-memory address.  */
+void _Jv_FreeClosure (void *data);
+/* Allocate a double-indirect pointer to a _Jv_ClosureList such that
+   the _Jv_ClosureList gets automatically finalized when it is no
+   longer reachable, not even by other finalizable objects.  */
+_Jv_ClosureList **_Jv_ClosureListFinalizer (void) __attribute__((__malloc__));
 /* Explicitly throw an out-of-memory exception.	*/
 void _Jv_ThrowNoMemory() __attribute__((__noreturn__));
 /* Allocate an object with a single pointer.  The first word is reserved
Index: libjava/boehm.cc
===================================================================
--- libjava/boehm.cc.orig	2007-01-24 03:17:50.000000000 -0200
+++ libjava/boehm.cc	2007-01-24 03:56:55.000000000 -0200
@@ -1,6 +1,6 @@
 // boehm.cc - interface between libjava and Boehm GC.
 
-/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006
+/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
    Free Software Foundation
 
    This file is part of libgcj.
@@ -380,6 +380,51 @@ _Jv_AllocRawObj (jsize size)
   return (void *) GC_MALLOC (size ? size : 1);
 }
 
+/* Allocate writable and executable memory with the given size.  This
+   memory is not under GC control, nor is it a GC root.  It returns
+   the writable address and stores the executable address in *code.
+   They are often the same, but in some situations that forbid pages
+   from being both writable and executable, separate virtual addresses
+   referring to the same data may be used.  */
+void *
+_Jv_AllocClosure (jsize size, void **code)
+{
+  void *ret = ffi_closure_alloc (size, code);
+  return ret;
+}
+
+/* Release memory allocated by _Jv_AllocClosure.  PTR must denote a
+   writable-memory address.  */
+void
+_Jv_FreeClosure (void *ptr)
+{
+  if (ptr)
+    ffi_closure_free (ptr);
+}
+
+typedef _Jv_ClosureList *closure_list_pointer;
+
+/* Release closures in a _Jv_ClosureList.  */
+static void
+finalize_closure_list (GC_PTR obj, GC_PTR)
+{
+  _Jv_ClosureList **clpp = (_Jv_ClosureList **)obj;
+  _Jv_ClosureList::releaseClosures (clpp);
+}
+
+/* Allocate a double-indirect pointer to a _Jv_ClosureList that will
+   get garbage-collected after this double-indirect pointer becomes
+   unreachable by any other objects, including finalizable ones.  */
+_Jv_ClosureList **
+_Jv_ClosureListFinalizer ()
+{
+  _Jv_ClosureList **clpp;
+  clpp = (_Jv_ClosureList **)_Jv_AllocBytes (sizeof (*clpp));
+  GC_REGISTER_FINALIZER_UNREACHABLE (clpp, finalize_closure_list,
+				     NULL, NULL, NULL);
+  return clpp;
+}
+
 static void
 call_finalizer (GC_PTR obj, GC_PTR client_data)
 {
Index: libjava/nogc.cc
===================================================================
--- libjava/nogc.cc.orig	2007-01-24 03:17:50.000000000 -0200
+++ libjava/nogc.cc	2007-01-24 03:18:54.000000000 -0200
@@ -1,6 +1,7 @@
 // nogc.cc - Implement null garbage collector.
 
-/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2006  Free Software Foundation
+/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -71,6 +72,27 @@ _Jv_AllocRawObj (jsize size)
   return calloc (size, 1);
 }
 
+void *
+_Jv_AllocClosure (jsize size, void **code)
+{
+  return ffi_closure_alloc (size, code);
+}
+
+void
+_Jv_FreeClosure (void *ptr)
+{
+  if (ptr)
+    return ffi_closure_free (ptr);
+}
+
+_Jv_ClosureList **
+_Jv_ClosureListFinalizer ()
+{
+  _Jv_ClosureList **clpp;
+  clpp = (_Jv_ClosureList **)_Jv_AllocBytes (sizeof (*clpp));
+  return clpp;
+}
+
 void
 _Jv_RegisterFinalizer (void *, _Jv_FinalizerFunc *)
 {
Index: libjava/java/lang/Class.h
===================================================================
--- libjava/java/lang/Class.h.orig	2007-01-24 03:17:50.000000000 -0200
+++ libjava/java/lang/Class.h	2007-01-24 03:18:54.000000000 -0200
@@ -1,6 +1,7 @@
 // Class.h - Header file for java.lang.Class.  -*- c++ -*-
 
-/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006  Free Software Foundation
+/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -105,6 +106,15 @@ class _Jv_InterpClass;
 class _Jv_InterpMethod;
 #endif
 
+class _Jv_ClosureList
+{
+  _Jv_ClosureList *next;
+  void *ptr;
+public:
+  void registerClosure (jclass klass, void *ptr);
+  static void releaseClosures (_Jv_ClosureList **closures);
+};
+
 struct _Jv_Constants
 {
   jint size;
@@ -623,6 +633,7 @@ private:   
   friend class ::_Jv_CompiledEngine;
   friend class ::_Jv_IndirectCompiledEngine;
   friend class ::_Jv_InterpreterEngine;
+  friend class ::_Jv_ClosureList;
 
   friend void ::_Jv_sharedlib_register_hook (jclass klass);
 
Index: libjava/java/lang/natClass.cc
===================================================================
--- libjava/java/lang/natClass.cc.orig	2007-01-24 03:17:50.000000000 -0200
+++ libjava/java/lang/natClass.cc	2007-01-24 03:18:54.000000000 -0200
@@ -1,6 +1,6 @@
 // natClass.cc - Implementation of java.lang.Class native methods.
 
-/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006
+/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
    Free Software Foundation
 
    This file is part of libgcj.
@@ -669,6 +669,28 @@ java::lang::Class::finalize (void)
   engine->unregister(this);
 }
 
+void
+_Jv_ClosureList::releaseClosures (_Jv_ClosureList **closures)
+{
+  if (!closures)
+    return;
+
+  while (_Jv_ClosureList *current = *closures)
+    {
+      *closures = current->next;
+      _Jv_FreeClosure (current->ptr);
+    }
+}
+
+void
+_Jv_ClosureList::registerClosure (jclass klass, void *ptr)
+{
+  _Jv_ClosureList **closures = klass->engine->get_closure_list (klass);
+  this->ptr = ptr;
+  this->next = *closures;
+  *closures = this;
+}
+
 // This implements the initialization process for a class.  From Spec
 // section 12.4.2.
 void
Index: libjava/include/execution.h
===================================================================
--- libjava/include/execution.h.orig	2007-01-24 03:17:50.000000000 -0200
+++ libjava/include/execution.h	2007-01-24 03:18:54.000000000 -0200
@@ -1,6 +1,6 @@
 // execution.h - Execution engines. -*- c++ -*-
 
-/* Copyright (C) 2004, 2006  Free Software Foundation
+/* Copyright (C) 2004, 2006, 2007  Free Software Foundation
 
    This file is part of libgcj.
 
@@ -29,6 +29,7 @@ struct _Jv_ExecutionEngine
   _Jv_ResolvedMethod *(*resolve_method) (_Jv_Method *, jclass,
 					 jboolean);
   void (*post_miranda_hook) (jclass);
+  _Jv_ClosureList **(*get_closure_list) (jclass);
 };
 
 // This handles gcj-compiled code except that compiled with
@@ -77,6 +78,11 @@ struct _Jv_CompiledEngine : public _Jv_E
     // Not needed.
   }
 
+  static _Jv_ClosureList **do_get_closure_list (jclass)
+  {
+    return NULL;
+  }
+
   _Jv_CompiledEngine ()
   {
     unregister = do_unregister;
@@ -87,6 +93,7 @@ struct _Jv_CompiledEngine : public _Jv_E
     create_ncode = do_create_ncode;
     resolve_method = do_resolve_method;
     post_miranda_hook = do_post_miranda_hook;
+    get_closure_list = do_get_closure_list;
   }
 
   // These operators make it so we don't have to link in libstdc++.
@@ -105,6 +112,7 @@ class _Jv_IndirectCompiledClass
 {
 public:
   void **field_initializers;
+  _Jv_ClosureList **closures;
 };
 
 // This handles gcj-compiled code compiled with -findirect-classes.
@@ -114,14 +122,32 @@ struct _Jv_IndirectCompiledEngine : publ
   {
     allocate_static_fields = do_allocate_static_fields;
     allocate_field_initializers = do_allocate_field_initializers;
+    get_closure_list = do_get_closure_list;
   }
   
+  static _Jv_IndirectCompiledClass *get_aux_info (jclass klass)
+  {
+    _Jv_IndirectCompiledClass *aux =
+      (_Jv_IndirectCompiledClass*)klass->aux_info;
+    if (!aux)
+      {
+	aux = (_Jv_IndirectCompiledClass*)
+	  _Jv_AllocRawObj (sizeof (_Jv_IndirectCompiledClass));
+	klass->aux_info = aux;
+      }
+
+    return aux;
+  }
+
   static void do_allocate_field_initializers (jclass klass)
   {
-    _Jv_IndirectCompiledClass *aux 
-      =  (_Jv_IndirectCompiledClass*)
-        _Jv_AllocRawObj (sizeof (_Jv_IndirectCompiledClass));
-    klass->aux_info = aux;
+    _Jv_IndirectCompiledClass *aux = get_aux_info (klass);
+    if (!aux)
+      {
+	aux = (_Jv_IndirectCompiledClass*)
+	  _Jv_AllocRawObj (sizeof (_Jv_IndirectCompiledClass));
+	klass->aux_info = aux;
+      }
 
     aux->field_initializers = (void **)_Jv_Malloc (klass->field_count 
 						   * sizeof (void*));    
@@ -172,6 +198,16 @@ struct _Jv_IndirectCompiledEngine : publ
       } 
     _Jv_Free (aux->field_initializers);
   }
+
+  static _Jv_ClosureList **do_get_closure_list (jclass klass)
+  {
+    _Jv_IndirectCompiledClass *aux = get_aux_info (klass);
+
+    if (!aux->closures)
+      aux->closures = _Jv_ClosureListFinalizer ();
+
+    return aux->closures;
+  }
 };
 
 
@@ -203,6 +239,8 @@ class _Jv_InterpreterEngine : public _Jv
 
   static void do_post_miranda_hook (jclass);
 
+  static _Jv_ClosureList **do_get_closure_list (jclass klass);
+
   _Jv_InterpreterEngine ()
   {
     unregister = do_unregister;
@@ -213,6 +251,7 @@ class _Jv_InterpreterEngine : public _Jv
     create_ncode = do_create_ncode;
     resolve_method = do_resolve_method;
     post_miranda_hook = do_post_miranda_hook;
+    get_closure_list = do_get_closure_list;
   }
 
   // These operators make it so we don't have to link in libstdc++.
Index: libjava/interpret.cc
===================================================================
--- libjava/interpret.cc.orig	2007-01-24 03:17:50.000000000 -0200
+++ libjava/interpret.cc	2007-01-24 03:18:54.000000000 -0200
@@ -1,6 +1,7 @@
 // interpret.cc - Code for the interpreter
 
-/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation
+/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -1234,10 +1235,10 @@ _Jv_init_cif (_Jv_Utf8Const* signature,
 }
 
 #if FFI_NATIVE_RAW_API
-#   define FFI_PREP_RAW_CLOSURE ffi_prep_raw_closure
+#   define FFI_PREP_RAW_CLOSURE ffi_prep_raw_closure_loc
 #   define FFI_RAW_SIZE ffi_raw_size
 #else
-#   define FFI_PREP_RAW_CLOSURE ffi_prep_java_raw_closure
+#   define FFI_PREP_RAW_CLOSURE ffi_prep_java_raw_closure_loc
 #   define FFI_RAW_SIZE ffi_java_raw_size
 #endif
 
@@ -1248,6 +1249,7 @@ _Jv_init_cif (_Jv_Utf8Const* signature,
 
 typedef struct {
   ffi_raw_closure  closure;
+  _Jv_ClosureList list;
   ffi_cif   cif;
   ffi_type *arg_types[0];
 } ncode_closure;
@@ -1255,7 +1257,7 @@ typedef struct {
 typedef void (*ffi_closure_fun) (ffi_cif*,void*,ffi_raw*,void*);
 
 void *
-_Jv_InterpMethod::ncode ()
+_Jv_InterpMethod::ncode (jclass klass)
 {
   using namespace java::lang::reflect;
 
@@ -1265,9 +1267,12 @@ _Jv_InterpMethod::ncode ()
   jboolean staticp = (self->accflags & Modifier::STATIC) != 0;
   int arg_count = _Jv_count_arguments (self->signature, staticp);
 
+  void *code;
   ncode_closure *closure =
-    (ncode_closure*)_Jv_AllocBytes (sizeof (ncode_closure)
-					+ arg_count * sizeof (ffi_type*));
+    (ncode_closure*)_Jv_AllocClosure (sizeof (ncode_closure)
+				      + arg_count * sizeof (ffi_type*),
+				      &code);
+  closure->list.registerClosure (klass, closure);
 
   _Jv_init_cif (self->signature,
 		arg_count,
@@ -1320,9 +1325,11 @@ _Jv_InterpMethod::ncode ()
   FFI_PREP_RAW_CLOSURE (&closure->closure,
 		        &closure->cif, 
 		        fun,
-		        (void*)this);
+		        (void*)this,
+			code);
+
+  self->ncode = code;
 
-  self->ncode = (void*)closure;
   return self->ncode;
 }
 
@@ -1450,7 +1457,7 @@ _Jv_InterpMethod::set_insn (jlong index,
 }
 
 void *
-_Jv_JNIMethod::ncode ()
+_Jv_JNIMethod::ncode (jclass klass)
 {
   using namespace java::lang::reflect;
 
@@ -1460,9 +1467,12 @@ _Jv_JNIMethod::ncode ()
   jboolean staticp = (self->accflags & Modifier::STATIC) != 0;
   int arg_count = _Jv_count_arguments (self->signature, staticp);
 
+  void *code;
   ncode_closure *closure =
-    (ncode_closure*)_Jv_AllocBytes (sizeof (ncode_closure)
-				    + arg_count * sizeof (ffi_type*));
+    (ncode_closure*)_Jv_AllocClosure (sizeof (ncode_closure)
+				      + arg_count * sizeof (ffi_type*),
+				      &code);
+  closure->list.registerClosure (klass, closure);
 
   ffi_type *rtype;
   _Jv_init_cif (self->signature,
@@ -1504,9 +1514,10 @@ _Jv_JNIMethod::ncode ()
   FFI_PREP_RAW_CLOSURE (&closure->closure,
 			&closure->cif, 
 			fun,
-			(void*) this);
+			(void*) this,
+			code);
 
-  self->ncode = (void *) closure;
+  self->ncode = code;
   return self->ncode;
 }
 
@@ -1567,16 +1578,27 @@ _Jv_InterpreterEngine::do_create_ncode (
 	  // cases.  Well, we can't, because we don't allocate these
 	  // objects using `new', and thus they don't get a vtable.
 	  _Jv_JNIMethod *jnim = reinterpret_cast<_Jv_JNIMethod *> (imeth);
-	  klass->methods[i].ncode = jnim->ncode ();
+	  klass->methods[i].ncode = jnim->ncode (klass);
 	}
       else if (imeth != 0)		// it could be abstract
 	{
 	  _Jv_InterpMethod *im = reinterpret_cast<_Jv_InterpMethod *> (imeth);
-	  klass->methods[i].ncode = im->ncode ();
+	  klass->methods[i].ncode = im->ncode (klass);
 	}
     }
 }
 
+_Jv_ClosureList **
+_Jv_InterpreterEngine::do_get_closure_list (jclass klass)
+{
+  _Jv_InterpClass *iclass = (_Jv_InterpClass *) klass->aux_info;
+
+  if (!iclass->closures)
+    iclass->closures = _Jv_ClosureListFinalizer ();
+
+  return iclass->closures;
+}
+
 void
 _Jv_InterpreterEngine::do_allocate_static_fields (jclass klass,
 						  int pointer_size,
Index: libjava/include/java-interp.h
===================================================================
--- libjava/include/java-interp.h.orig	2007-01-24 03:17:50.000000000 -0200
+++ libjava/include/java-interp.h	2007-01-24 03:18:54.000000000 -0200
@@ -1,6 +1,7 @@
 // java-interp.h - Header file for the bytecode interpreter.  -*- c++ -*-
 
-/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006  Free Software Foundation
+/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -183,7 +184,7 @@ class _Jv_InterpMethod : public _Jv_Meth
   }
 
   // return the method's invocation pointer (a stub).
-  void *ncode ();
+  void *ncode (jclass);
   void compile (const void * const *);
 
   static void run_normal (ffi_cif*, void*, ffi_raw*, void*);
@@ -250,6 +251,7 @@ class _Jv_InterpClass
   _Jv_MethodBase **interpreted_methods;
   _Jv_ushort     *field_initializers;
   jstring source_file_name;
+  _Jv_ClosureList **closures;
 
   friend class _Jv_ClassReader;
   friend class _Jv_InterpMethod;
@@ -298,7 +300,7 @@ class _Jv_JNIMethod : public _Jv_MethodB
   // This function is used when making a JNI call from the interpreter.
   static void call (ffi_cif *, void *, ffi_raw *, void *);
 
-  void *ncode ();
+  void *ncode (jclass);
 
   friend class _Jv_ClassReader;
   friend class _Jv_InterpreterEngine;
Index: libjava/defineclass.cc
===================================================================
--- libjava/defineclass.cc.orig	2007-01-24 03:17:50.000000000 -0200
+++ libjava/defineclass.cc	2007-01-24 03:18:54.000000000 -0200
@@ -1,6 +1,7 @@
 // defineclass.cc - defining a class from .class format.
 
-/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006  Free Software Foundation
+/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -1652,7 +1653,7 @@ void _Jv_ClassReader::handleCodeAttribut
       // call a static method of an interpreted class from precompiled
       // code without first resolving the class (that will happen
       // during class initialization instead).
-      method->self->ncode = method->ncode ();
+      method->self->ncode = method->ncode (def);
     }
 }
 
@@ -1697,7 +1698,7 @@ void _Jv_ClassReader::handleMethodsEnd (
 		  // interpreted class from precompiled code without
 		  // first resolving the class (that will happen
 		  // during class initialization instead).
-		  method->ncode = m->ncode ();
+		  method->ncode = m->ncode (def);
 		}
 	    }
 	}
Index: libjava/link.cc
===================================================================
--- libjava/link.cc.orig	2007-01-24 03:17:50.000000000 -0200
+++ libjava/link.cc	2007-01-24 03:18:54.000000000 -0200
@@ -1020,15 +1020,17 @@ struct method_closure
   // be the same as the address of the overall structure.  This is due
   // to disabling interior pointers in the GC.
   ffi_closure closure;
+  _Jv_ClosureList list;
   ffi_cif cif;
   ffi_type *arg_types[1];
 };
 
 void *
-_Jv_Linker::create_error_method (_Jv_Utf8Const *class_name)
+_Jv_Linker::create_error_method (_Jv_Utf8Const *class_name, jclass klass)
 {
+  void *code;
   method_closure *closure
-    = (method_closure *) _Jv_AllocBytes(sizeof (method_closure));
+    = (method_closure *) _Jv_AllocClosure(sizeof (method_closure), &code);
 
   closure->arg_types[0] = &ffi_type_void;
 
@@ -1040,13 +1042,18 @@ _Jv_Linker::create_error_method (_Jv_Utf
                        1,
                        &ffi_type_void,
 		       closure->arg_types) == FFI_OK
-      && ffi_prep_closure (&closure->closure,
-                           &closure->cif,
-			   _Jv_ThrowNoClassDefFoundErrorTrampoline,
-			   class_name) == FFI_OK)
-    return &closure->closure;
+      && ffi_prep_closure_loc (&closure->closure,
+			       &closure->cif,
+			       _Jv_ThrowNoClassDefFoundErrorTrampoline,
+			       class_name,
+			       code) == FFI_OK)
+    {
+      closure->list.registerClosure (klass, closure);
+      return code;
+    }
   else
     {
+      _Jv_FreeClosure (closure);
       java::lang::StringBuffer *buffer = new java::lang::StringBuffer();
       buffer->append(JvNewStringLatin1("Error setting up FFI closure"
 				       " for static method of"
@@ -1057,7 +1064,7 @@ _Jv_Linker::create_error_method (_Jv_Utf
 }
 #else
 void *
-_Jv_Linker::create_error_method (_Jv_Utf8Const *)
+_Jv_Linker::create_error_method (_Jv_Utf8Const *, jclass)
 {
   // Codepath for platforms which do not support (or want) libffi.
   // You have to accept that it is impossible to provide the name
@@ -1088,8 +1095,6 @@ static bool debug_link = false;
 // at the corresponding position in the virtual method offset table
 // (klass->otable). 
 
-// The same otable and atable may be shared by many classes.
-
 // This must be called while holding the class lock.
 
 void
@@ -1240,13 +1245,15 @@ _Jv_Linker::link_symbol_table (jclass kl
       // NullPointerException
       klass->atable->addresses[index] = NULL;
 
+      bool use_error_method = false;
+
       // If the target class is missing we prepare a function call
       // that throws a NoClassDefFoundError and store the address of
       // that newly prepared method in the atable. The user can run
       // code in classes where the missing class is part of the
       // execution environment as long as it is never referenced.
       if (target_class == NULL)
-        klass->atable->addresses[index] = create_error_method(sym.class_name);
+	use_error_method = true;
       // We're looking for a static field or a static method, and we
       // can tell which is needed by looking at the signature.
       else if (signature->first() == '(' && signature->len() >= 2)
@@ -1294,12 +1301,16 @@ _Jv_Linker::link_symbol_table (jclass kl
 		}
 	    }
 	  else
+	    use_error_method = true;
+
+	  if (use_error_method)
 	    klass->atable->addresses[index]
-              = create_error_method(sym.class_name);
+	      = create_error_method(sym.class_name, klass);
 
 	  continue;
 	}
 
+
       // Try fields only if the target class exists.
       if (target_class != NULL)
       {
Index: libjava/java/lang/reflect/natVMProxy.cc
===================================================================
--- libjava/java/lang/reflect/natVMProxy.cc.orig	2007-01-24 03:17:50.000000000 -0200
+++ libjava/java/lang/reflect/natVMProxy.cc	2007-01-24 03:18:54.000000000 -0200
@@ -1,6 +1,6 @@
 // natVMProxy.cc -- Implementation of VMProxy methods.
 
-/* Copyright (C) 2006
+/* Copyright (C) 2006, 2007
    Free Software Foundation
 
    This file is part of libgcj.
@@ -66,7 +66,8 @@ using namespace java::lang::reflect;
 using namespace java::lang;
 
 typedef void (*closure_fun) (ffi_cif*, void*, void**, void*);
-static void *ncode (_Jv_Method *self, closure_fun fun, Method *meth);
+static void *ncode (jclass klass, _Jv_Method *self, closure_fun fun,
+		    Method *meth);
 static void run_proxy (ffi_cif*, void*, void**, void*);
 
 typedef jobject invoke_t (jobject, Proxy *, Method *, JArray< jobject > *);
@@ -165,7 +166,7 @@ java::lang::reflect::VMProxy::generatePr
       // the interfaces of which it is a proxy will also be reachable,
       // so this is safe.
       method = imethod;
-      method.ncode = ncode (&method, run_proxy, elements(d->methods)[i]);
+      method.ncode = ncode (klass, &method, run_proxy, elements(d->methods)[i]);
       method.accflags &= ~Modifier::ABSTRACT;
     }
 
@@ -290,6 +291,7 @@ unbox (jobject o, jclass klass, void *rv
 
 typedef struct {
   ffi_closure  closure;
+  _Jv_ClosureList list;
   ffi_cif   cif;
   Method *meth;
   _Jv_Method *self;
@@ -362,16 +364,19 @@ run_proxy (ffi_cif *cif,
 // the address of its closure.
 
 static void *
-ncode (_Jv_Method *self, closure_fun fun, Method *meth)
+ncode (jclass klass, _Jv_Method *self, closure_fun fun, Method *meth)
 {
   using namespace java::lang::reflect;
 
   jboolean staticp = (self->accflags & Modifier::STATIC) != 0;
   int arg_count = _Jv_count_arguments (self->signature, staticp);
 
+  void *code;
   ncode_closure *closure =
-    (ncode_closure*)_Jv_AllocBytes (sizeof (ncode_closure)
-				    + arg_count * sizeof (ffi_type*));
+    (ncode_closure*)_Jv_AllocClosure (sizeof (ncode_closure)
+				      + arg_count * sizeof (ffi_type*),
+				      &code);
+  closure->list.registerClosure (klass, closure);
 
   _Jv_init_cif (self->signature,
 		arg_count,
@@ -384,11 +389,12 @@ ncode (_Jv_Method *self, closure_fun fun
 
   JvAssert ((self->accflags & Modifier::NATIVE) == 0);
 
-  ffi_prep_closure (&closure->closure,
-		    &closure->cif, 
-		    fun,
-		    (void*)closure);
+  ffi_prep_closure_loc (&closure->closure,
+			&closure->cif,
+			fun,
+			code,
+			code);
 
-  self->ncode = (void*)closure;
+  self->ncode = code;
   return self->ncode;
 }
Index: libjava/testsuite/libjava.jar/TestClosureGC.java
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ libjava/testsuite/libjava.jar/TestClosureGC.java	2007-01-24 04:31:00.000000000 -0200
@@ -0,0 +1,116 @@
+/* Verify that libffi closures aren't deallocated too early.
+
+   Copyright (C) 2007 Free Software Foundation, Inc
+   Contributed by Alexandre Oliva <aoliva@redhat.com>
+
+   If libffi closures are released too early, we lose.
+ */
+
+import java.util.HashSet;
+
+public class TestClosureGC {
+    public static String objId (Object obj) {
+	return obj + "/"
+	    + Integer.toHexString(obj.getClass().getClassLoader().hashCode());
+    }
+    public static class cld extends java.net.URLClassLoader {
+	static final Object obj = new cl0();
+	public cld () throws Exception {
+	    super(new java.net.URL[] { });
+	    /* System.out.println (objId (this) + " created"); */
+	}
+	public void finalize () {
+	    /* System.out.println (objId (this) + " finalized"); */
+	}
+	public String toString () {
+	    return this.getClass().getName() + "@"
+		+ Integer.toHexString (hashCode ());
+	}
+	public Class loadClass (String name) throws ClassNotFoundException {
+	    try {
+		java.io.InputStream IS = getSystemResourceAsStream
+		    (name + ".class");
+		int maxsz = 1024, readsz = 0;
+		byte buf[] = new byte[maxsz];
+		for(;;) {
+		    int readnow = IS.read (buf, readsz, maxsz - readsz);
+		    if (readnow <= 0)
+			break;
+		    readsz += readnow;
+		    if (readsz == maxsz) {
+			byte newbuf[] = new byte[maxsz *= 2];
+			System.arraycopy (buf, 0, newbuf, 0, readsz);
+			buf = newbuf;
+		    }
+		}
+		return defineClass (name, buf, 0, readsz);
+	    } catch (Exception e) {
+		return super.loadClass (name);
+	    }
+	}
+    }
+    public static class cl0 {
+	public cl0 () {
+	    /* System.out.println (objId (this) + " created"); */
+	}
+	public void finalize () {
+	    /* System.out.println (objId (this) + " finalized"); */
+	}
+    }
+    public static class cl1 {
+	final HashSet hs;
+	static final Object obj = new cl0();
+	public cl1 (final HashSet hs) {
+	    this.hs = hs;
+	    /* System.out.println (objId (this) + " created"); */
+	}
+	public void finalize () {
+	    /* System.out.println (objId (this) + " finalized"); */
+	}
+    }
+    public static class cl2 {
+	final HashSet hs;
+	static final Object obj = new cl0();
+	public cl2 (final HashSet hs) {
+	    this.hs = hs;
+	    /* System.out.println (objId (this) + " created"); */
+	}
+	public void finalize () {
+	    /* System.out.println (objId (this) + " finalized"); */
+	    hs.add(this);
+	    hs.add(new cl0());
+	}
+    }
+    static final HashSet hs = new HashSet();
+    static final Object obj = new cl0();
+    public static void main(String[] argv) throws Exception {
+	{
+	    Class[] hscs = { HashSet.class };
+	    Object[] hsos = { hs };
+	    new cld().loadClass ("TestClosureGC$cl1").
+		getConstructor (hscs).newInstance (hsos);
+	    new cld().loadClass ("TestClosureGC$cl2").
+		getConstructor (hscs).newInstance (hsos);
+	    new cld().loadClass ("TestClosureGC$cl1").
+		getConstructor (hscs).newInstance (hsos);
+	    new cld().loadClass ("TestClosureGC$cl1").
+		getConstructor (hscs).newInstance (hsos);
+	}
+	for (int i = 1; i <= 5; i++) {
+	    /* System.out.println ("Will run GC and finalization " + i); */
+	    System.gc ();
+	    Thread.sleep (100);
+	    System.runFinalization ();
+	    Thread.sleep (100);
+	    if (hs.isEmpty ())
+		continue;
+	    java.util.Iterator it = hs.iterator ();
+	    while (it.hasNext ()) {
+		Object obj = it.next();
+		/* System.out.println (objId (obj) + " in ht, removing"); */
+		it.remove ();
+	    }
+	}
+	System.out.println ("ok");
+    }
+}
Index: libjava/testsuite/libjava.jar/TestClosureGC.out
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ libjava/testsuite/libjava.jar/TestClosureGC.out	2007-01-24 04:13:23.000000000 -0200
@@ -0,0 +1 @@
+ok
Index: libjava/testsuite/libjava.jar/TestClosureGC.xfail
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ libjava/testsuite/libjava.jar/TestClosureGC.xfail	2007-01-24 04:28:19.000000000 -0200
@@ -0,0 +1 @@
+main=TestClosureGC
Index: libjava/testsuite/libjava.jar/TestClosureGC.jar
===================================================================
Binary files /dev/null	1970-01-01 00:00:00.000000000 +0000 and libjava/testsuite/libjava.jar/TestClosureGC.jar	2007-01-24 04:23:16.000000000 -0200 differ
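
For reference, a minimal sketch of how a client uses the new
two-address closure API introduced above (handler and wrapper names
are illustrative, the cif is assumed to be prepared elsewhere, and
error handling is abbreviated):

#include <ffi.h>

/* Hypothetical handler; a real one would decode ARGS per the cif.  */
static void
handler (ffi_cif *cif, void *ret, void **args, void *user_data)
{
  *(int *) ret = 42;
}

/* Wrap a prepared cif (assumed to return int) in a closure.  The
   executable address is returned for callers to jump through; the
   writable address, stored in *WRITABLE, is what must eventually be
   passed to ffi_closure_free.  */
static void *
make_callback (ffi_cif *cif, void **writable)
{
  void *code;
  ffi_closure *cl
    = (ffi_closure *) ffi_closure_alloc (sizeof (ffi_closure), &code);
  if (cl == NULL)
    return NULL;
  if (ffi_prep_closure_loc (cl, cif, handler, NULL, code) != FFI_OK)
    {
      ffi_closure_free (cl);
      return NULL;
    }
  *writable = cl;
  return code;	/* often equal to cl, but not under execmem restrictions */
}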

[-- Attachment #3: Type: text/plain, Size: 249 bytes --]


-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-24  6:46         ` Alexandre Oliva
@ 2007-01-24 15:47           ` Andrew Haley
  2007-01-24 16:59             ` David Daney
                               ` (2 more replies)
  0 siblings, 3 replies; 35+ messages in thread
From: Andrew Haley @ 2007-01-24 15:47 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: green, Hans Boehm, gcc-patches, java-patches

Alexandre Oliva writes:
 > On Jan 22, 2007, Alexandre Oliva <aoliva@redhat.com> wrote:
 > 
 > > Now, if I'm reading you correctly, it's not the object itself that
 > > determines whether its finalizer can run while there are other
 > > pointers to it, but rather the object that points to it.  Is this
 > > correct?
 > 
 > It was, so I added an option to get the other behavior, that appears
 > to be generally desirable for Java system-level resource disposal,
 > through objects whose finalizers should only run when the
 > corresponding objects are unreachable even from other finalizable
 > objects.  It was quite easy to add, after I figured out how it worked.
 > 
 > > As it turned out, this wasn't enough, but adding a second layer of
 > > indirection to the closure list, such that both levels of indirection
 > > involve finalizers registered with the traditional (!= Java)
 > > semantics, it should work, right?
 > 
 > No.  Especially if one allocates the doubly-indirect pointer as not
 > containing pointers.  *blush*  :-)
 > 
 > Fixing that enabled things to work until I added finalizers that would
 > resurrect objects, perhaps even multiple times (consider a finalizer
 > that optionally creates another instance of the same class, bringing
 > back to life the class and the class loader).
 > 
 > At that point, after scratching my head for a while trying to figure
 > out a way to implement the needed semantics with the existing
 > primitives, I concluded it couldn't be done, and so I came up with new
 > finalizer semantics option described above.
 > 
 > Is this ok to install?  Tested on x86_64-linux-gnu.

Mmm, this looks good.  

OK, Hans?

Paging Anthony Green ... any comments?

Andrew.
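
To illustrate the semantics being approved here, a minimal sketch of
the new primitive, modeled on the _Jv_ClosureListFinalizer code in the
patch (the list type and release function are placeholders):

#include "gc.h"

struct closure_list;			/* placeholder list type */
extern void release_closures (struct closure_list **head);

static void
finalize_list (GC_PTR obj, GC_PTR client_data)
{
  release_closures ((struct closure_list **) obj);
}

struct closure_list **
alloc_list_finalizer (void)
{
  struct closure_list **list
    = (struct closure_list **) GC_MALLOC (sizeof (*list));
  /* The finalizer runs only once *list is unreachable even from other
     finalizable objects, so a resurrecting finalizer can never observe
     a released closure.  */
  GC_REGISTER_FINALIZER_UNREACHABLE (list, finalize_list,
				     NULL, NULL, NULL);
  return list;
}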


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-24 15:47           ` Andrew Haley
@ 2007-01-24 16:59             ` David Daney
  2007-01-24 17:09               ` Andrew Haley
  2007-01-24 17:33             ` Anthony Green
  2007-01-25 21:52             ` Boehm, Hans
  2 siblings, 1 reply; 35+ messages in thread
From: David Daney @ 2007-01-24 16:59 UTC (permalink / raw)
  To: Andrew Haley
  Cc: Alexandre Oliva, green, Hans Boehm, gcc-patches, java-patches

Andrew Haley wrote:
> Alexandre Oliva writes:
>  > On Jan 22, 2007, Alexandre Oliva <aoliva@redhat.com> wrote:
>  > 
>  > > Now, if I'm reading you correctly, it's not the object itself that
>  > > determines whether its finalizer can run while there are other
>  > > pointers to it, but rather the object that points to it.  Is this
>  > > correct?
>  > 
>  > It was, so I added an option to get the other behavior, that appears
>  > to be generally desirable for Java system-level resource disposal,
>  > through objects whose finalizers should only run when the
>  > corresponding objects are unreachable even from other finalizable
>  > objects.  It was quite easy to add, after I figured out how it worked.
>  > 
>  > > As it turned out, this wasn't enough, but adding a second layer of
>  > > indirection to the closure list, such that both levels of indirection
>  > > involve finalizers registered with the traditional (!= Java)
>  > > semantics, it should work, right?
>  > 
>  > No.  Especially if one allocates the doubly-indirect pointer as not
>  > containing pointers.  *blush*  :-)
>  > 
>  > Fixing that enabled things to work until I added finalizers that would
>  > resurrect objects, perhaps even multiple times (consider a finalizer
>  > that optionally creates another instance of the same class, bringing
>  > back to life the class and the class loader).
>  > 
>  > At that point, after scratching my head for a while trying to figure
>  > out a way to implement the needed semantics with the existing
>  > primitives, I concluded it couldn't be done, and so I came up with the
>  > new finalizer semantics option described above.
>  > 
>  > Is this ok to install?  Tested on x86_64-linux-gnu.
>
> Mmm, this looks good.  
>
> OK, Hans?
>
> Paging Anthony Green ... any comments?
>
> Andrew.
>   
Well I'm not Anthony, but I am concerned about the overhead for systems 
that have an executable stack.  I am thinking small embedded systems.  
It would be nice to have some sort of configure option that would allow 
us to build it the old way, or close to the old way if desired.  That 
said, I have not measured the size of the dlmalloc and other additions 
to know if it really makes a difference.  I am trying to put an entire 
Linux/GNU system with an application statically linked to libgcj into a 
16MB flash memory.  Adding more code to libgcj does not make it any easier.

David Daney


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-24 16:59             ` David Daney
@ 2007-01-24 17:09               ` Andrew Haley
  2007-01-25  5:59                 ` Alexandre Oliva
  0 siblings, 1 reply; 35+ messages in thread
From: Andrew Haley @ 2007-01-24 17:09 UTC (permalink / raw)
  To: David Daney; +Cc: Alexandre Oliva, green, Hans Boehm, gcc-patches, java-patches

David Daney writes:

 > Well I'm not Anthony, but I am concerned about the overhead for
 > systems that have an executable stack.  I am thinking small
 > embedded systems.  It would be nice to have some sort of configure
 > option that would allow us to build it the old way, or close to the
 > old way if desired.  That said, I have not measured the size of the
 > dlmalloc and other additions to know if it really makes a
 > difference.  I am trying to put an entire Linux/GNU system with an
 > application statically linked to libgcj into a 16MB flash memory.
 > Adding more code to libgcj does not make it any easier.

Thanks for that.  I agree; it makes no sense to burden non-SELinux
systems with all this stuff.

Andrew.


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-24 15:47           ` Andrew Haley
  2007-01-24 16:59             ` David Daney
@ 2007-01-24 17:33             ` Anthony Green
  2007-01-25 21:52             ` Boehm, Hans
  2 siblings, 0 replies; 35+ messages in thread
From: Anthony Green @ 2007-01-24 17:33 UTC (permalink / raw)
  To: Andrew Haley; +Cc: Alexandre Oliva, Hans Boehm, gcc-patches, java-patches

On Wed, 2007-01-24 at 15:47 +0000, Andrew Haley wrote:
> 
> 
> Paging Anthony Green ... any comments? 

Looks great.  Thanks Alex.

AG



* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-24 17:09               ` Andrew Haley
@ 2007-01-25  5:59                 ` Alexandre Oliva
  2007-01-25 10:13                   ` Andrew Haley
  0 siblings, 1 reply; 35+ messages in thread
From: Alexandre Oliva @ 2007-01-25  5:59 UTC (permalink / raw)
  To: Andrew Haley; +Cc: David Daney, green, Hans Boehm, gcc-patches, java-patches

On Jan 24, 2007, Andrew Haley <aph@redhat.com> wrote:

> David Daney writes:
>> Well I'm not Anthony, but I am concerned about the overhead for
>> systems that have an executable stack.  I am thinking small
>> embedded systems.  It would be nice to have some sort of configure
>> option that would allow us to build it the old way, or close to the
>> old way if desired.  That said, I have not measured the size of the
>> dlmalloc and other additions to know if it really makes a
>> difference.  I am trying to put an entire Linux/GNU system with an
>> application statically linked to libgcj into a 16MB flash memory.
>> Adding more code to libgcj does not make it any easier.

> Thanks for that.  I agree; it makes no sense to burden non-SELinux
> systems with all this stuff.

There's no way to tell, at compiler build time, whether the target
system is going to have some such security measures in effect.  In the
patch, I take the approach that, barring separate specification,
GNU/Linux systems are assumed to possibly require this more costly
approach, with some impact on code size, but that still tries the less
costly approach at run time and uses that if it works.

If you know the target system is not going to be subject to these
constraints, and it is a GNU/Linux system, it suffices to build libffi
with -DFFI_MMAP_EXEC_WRIT=0 and we'll use only malloc, with a
negligible impact on code size.

That, or patching libffi/src/closures.c, might appease the code-size
concerns.  If not, I suppose we could simplify the definition of the
macro with a configure-time option for libffi.  I'm not convinced
having the configure option would be so much better than, say,
make CPPFLAGS_FOR_TARGET=-DFFI_MMAP_EXEC_WRIT=0, for an option that
would already be hidden in libffi, would be of so little use to
most people, and would make so little difference even when it does:

    text     data     bss      dec     hex filename
   11232        4    1136    12372    3054 libffi/src/closures.o
37921368  9304904  468848 47695120 2d7c510 libjava/.libs/libgcj.so

Sure, every 11KiB counts, and it's easy enough to shave them off.  But
does it justify yet another obscure configure option, if you probably
already have a number of changes in place to achieve the
apparently-impossible goal of fitting this in 16MiB? :-)

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}
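
The run-time test mentioned above amounts to probing whether an
anonymous writable-and-executable mapping is permitted.  A sketch of
such a probe (an illustration of the idea, not a verbatim excerpt from
libffi/src/closures.c):

#include <sys/mman.h>
#include <unistd.h>

/* Returns nonzero if the security policy allows anonymous
   PROT_WRITE|PROT_EXEC mappings; if not (e.g. under SELinux execmem
   restrictions), the dual-mapping dlmalloc path is required.  */
static int
wx_mmap_works (void)
{
  long pagesize = sysconf (_SC_PAGESIZE);
  void *p = mmap (NULL, pagesize, PROT_READ | PROT_WRITE | PROT_EXEC,
		  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    return 0;
  munmap (p, pagesize);
  return 1;
}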


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-25  5:59                 ` Alexandre Oliva
@ 2007-01-25 10:13                   ` Andrew Haley
  2007-01-25 17:21                     ` David Daney
  0 siblings, 1 reply; 35+ messages in thread
From: Andrew Haley @ 2007-01-25 10:13 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: David Daney, green, Hans Boehm, gcc-patches, java-patches

Alexandre Oliva writes:
 > On Jan 24, 2007, Andrew Haley <aph@redhat.com> wrote:
 > 
 > > David Daney writes:
 > >> Well I'm not Anthony, but I am concerned about the overhead for
 > >> systems that have an executable stack.  I am thinking small
 > >> embedded systems.  It would be nice to have some sort of configure
 > >> option that would allow us to build it the old way, or close to the
 > >> old way if desired.  That said, I have not measured the size of the
 > >> dlmalloc and other additions to know if it really makes a
 > >> difference.  I am trying to put an entire Linux/GNU system with an
 > >> application statically linked to libgcj into a 16MB flash memory.
 > >> Adding more code to libgcj does not make it any easier.
 > 
 > > Thanks for that.  I agree; it makes no sense to burden non-SELinux
 > > systems with all this stuff.
 > 
 > There's no way to tell, at compiler build time, whether the target
 > system is going to have some such security measures in effect.  In the
 > patch, I take the approach that, barring separate specification,
 > GNU/Linux systems are assumed to possibly require this more costly
 > approach, with some impact on code size, but that still tries the less
 > costly approach at run time and uses that if it works.

Sure, you can't automagically tell if it'll be needed, but a configure
option will do the trick.

 > If you know the target system is not going to be subject to these
 > constraints, and it is a GNU/Linux system, it suffices to build libffi
 > with -DFFI_MMAP_EXEC_WRIT=0 and we'll use only malloc, with a
 > negligible impact on code size.
 > 
 > That, or patching libffi/src/closures.c, might appease the code-size
 > concerns.  If not, I suppose we could simplify the definition of the
 > macro with a configure-time option for libffi.  I'm not convinced
 > having the configure option would be so much better than say
 > make CPPFLAGS_FOR_TARGET=-DFFI_MMAP_EXEC_WRIT=0 for an option that
 > would already be hidden in libffi, would be of so little use for
 > most people and would make such a little difference even when it does:
 > 
 >     text     data     bss      dec     hex filename
 >    11232        4    1136    12372    3054 libffi/src/closures.o
 > 37921368  9304904  468848 47695120 2d7c510 libjava/.libs/libgcj.so
 > 
 > Sure, every 11KiB counts, and it's easy enough to shave them off.  But
 > does it justify yet another obscure configure option, if you probably
 > already have a number of changes in place to achieve the
 > apparently-impossible goal of fitting this in 16MiB? :-)

You aren't taking into account the embedded gcj users who strip the
library down to what they need.  I'm not such a user, but David Daney
-- the guy who made this request -- is.  I assume he has some idea
what he's talking about.

David, what say you?

Andrew.


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-25 10:13                   ` Andrew Haley
@ 2007-01-25 17:21                     ` David Daney
  2007-01-25 17:32                       ` Andrew Haley
  2007-01-26  7:14                       ` Alexandre Oliva
  0 siblings, 2 replies; 35+ messages in thread
From: David Daney @ 2007-01-25 17:21 UTC (permalink / raw)
  To: Andrew Haley
  Cc: Alexandre Oliva, green, Hans Boehm, gcc-patches, java-patches

Andrew Haley wrote:
> Alexandre Oliva writes:
>  > On Jan 24, 2007, Andrew Haley <aph@redhat.com> wrote:
>  > 
>  > > David Daney writes:
>  > >> Well I'm not Anthony, but I am concerned about the overhead for
>  > >> systems that have an executable stack.  I am thinking small
>  > >> embedded systems.  It would be nice to have some sort of configure
>  > >> option that would allow us to build it the old way, or close to the
>  > >> old way if desired.  That said, I have not measured the size of the
>  > >> dlmalloc and other additions to know if it really makes a
>  > >> difference.  I am trying to put an entire Linux/GNU system with an
>  > >> application statically linked to libgcj into a 16MB flash memory.
>  > >> Adding more code to libgcj does not make it any easier.
>  > 
>  > > Thanks for that.  I agree; it makes no sense to burden non-SELinux
>  > > systems with all this stuff.
>  > 
>  > There's no way to tell, at compiler build time, whether the target
>  > system is going to have some such security measures in effect.  In the
>  > patch, I take the approach that, barring separate specification,
>  > GNU/Linux systems are assumed to possibly require this more costly
>  > approach, with some impact on code size, but that still tries the less
>  > costly approach at run time and uses that if it works.
>
> Sure, you can't automagically tell if it'll be needed, but a configure
> option will do the trick.
>
>  > If you know the target system is not going to be subject to these
>  > constraints, and it is a GNU/Linux system, it suffices to build libffi
>  > with -DFFI_MMAP_EXEC_WRIT=0 and we'll use only malloc, with a
>  > negligible impact on code size.
>  > 
>  > That, or patching libffi/src/closures.c, might appease the code-size
>  > concerns.  If not, I suppose we could simplify the definition of the
>  > macro with a configure-time option for libffi.  I'm not convinced
>  > having the configure option would be so much better than say
>  > make CPPFLAGS_FOR_TARGET=-DFFI_MMAP_EXEC_WRIT=0 for an option that
>  > would already be hidden in libffi, would be of so little use for
>  > most people and would make such a little difference even when it does:
>  > 
>  >     text     data     bss      dec     hex filename
>  >    11232        4    1136    12372    3054 libffi/src/closures.o
>  > 37921368  9304904  468848 47695120 2d7c510 libjava/.libs/libgcj.so
>  > 
>  > Sure, every 11KiB counts, and it's easy enough to shave them off.  But
>  > does it justify yet another obscure configure option, if you probably
>  > already have a number of changes in place to achieve the
>  > apparently-impossible goal of fitting this in 16MiB? :-)
>
> You aren't taking into account the embedded gcj users who strip the
> library down to what they need.  I'm not such a user, but David Daney
> -- the guy who made this request -- is.  I assume he has some idea
> what he's talking about.
>
> David, what say you?
>
>   
11K on x86; on MIPS it would probably be a bit more.  In the flash
everything is compressed, so we are probably talking something in the
4-8 KiB range.

I guess Alexandre should commit the patch.  We are still using GCC 3.4.3 
for 'production' code, so it does not immediately affect us.  I may 
prepare a patch in the future for a configure option that reduces the 
code size if there is an executable stack.  Currently for mips[el]-linux 
the stack is always executable because the signal trampolines are on the 
stack.

David Daney



* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-25 17:21                     ` David Daney
@ 2007-01-25 17:32                       ` Andrew Haley
  2007-01-26  7:14                       ` Alexandre Oliva
  1 sibling, 0 replies; 35+ messages in thread
From: Andrew Haley @ 2007-01-25 17:32 UTC (permalink / raw)
  To: David Daney; +Cc: Alexandre Oliva, green, Hans Boehm, gcc-patches, java-patches

David Daney writes:
 > Andrew Haley wrote:

 > 11K on x86, on MIPS it would probably be a bit more.  In the flash 
 > everything is compressed, so we are probably talking something in the 4 
 > - 8 KiB range.
 > 
 > I guess Alexandre should commit the patch.  We are still using GCC
 > 3.4.3 for 'production' code, so it does not immediately affect us.
 > I may prepare a patch in the future for a configure option that
 > reduces the code size if there is an executable stack.

A fine compromise.  Thank you!

Andrew.


* RE: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-24 15:47           ` Andrew Haley
  2007-01-24 16:59             ` David Daney
  2007-01-24 17:33             ` Anthony Green
@ 2007-01-25 21:52             ` Boehm, Hans
  2007-01-26  8:11               ` Alexandre Oliva
  2007-01-26 10:16               ` Andrew Haley
  2 siblings, 2 replies; 35+ messages in thread
From: Boehm, Hans @ 2007-01-25 21:52 UTC (permalink / raw)
  To: Andrew Haley, Alexandre Oliva; +Cc: green, gcc-patches, java-patches

I'm still missing the big picture with respect to the finalization
changes.

Normally, the GC's intended usage model is that if I have an object P
whose finalizer accesses other referenced objects R, then I register P
with something like the "normal" finalization mark procedure, so that
the referenced objects R don't get finalized until P is done with them.
I think you are proposing to somehow have the objects R know that they
might be referenced by another finalizable object, and hence postpone
their own finalization by a cycle.  That seems fundamentally backwards
and brittle.  This just isn't R's business.
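
For reference, the two registration flavors being contrasted here are
both in the collector's public API.  A minimal sketch of the
difference (the helper names are invented; the GC entry points are
real):

#include <gc.h>

static void finalize_p (void *obj, void *client_data)
{
  /* Runs once the collector decides obj is ready for finalization.  */
}

void register_p_ordered (void *p)
{
  /* "Normal" (ordered) semantics: objects reachable from p are not
     finalized until p's finalizer has run.  */
  GC_register_finalizer (p, finalize_p, NULL, NULL, NULL);
}

void register_p_java (void *p)
{
  /* Java-style (unordered) semantics: no such ordering guarantee.  */
  GC_register_finalizer_no_order (p, finalize_p, NULL, NULL, NULL);
}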

Can't we just register the referring object (P, the class in the real
case, I think) correctly?

Hans

> -----Original Message-----
> From: Andrew Haley [mailto:aph@redhat.com] 
> Sent: Wednesday, January 24, 2007 7:47 AM
> To: Alexandre Oliva
> Cc: green@redhat.com; Boehm, Hans; gcc-patches@gcc.gnu.org; 
> java-patches@gcc.gnu.org
> Subject: Re: Get libffi closures to cope with SELinux execmem/execmod
> 
> Alexandre Oliva writes:
>  > On Jan 22, 2007, Alexandre Oliva <aoliva@redhat.com> wrote:
>  >
>  > > Now, if I'm reading you correctly, it's not the object itself
>  > > that determines whether its finalizer can run while there are
>  > > other pointers to it, but rather the object that points to it.
>  > > Is this correct?
>  >
>  > It was, so I added an option to get the other behavior, that
>  > appears to be generally desirable for Java system-level resource
>  > disposal, through objects whose finalizers should only run when
>  > the corresponding objects are unreachable even from other
>  > finalizable objects.  It was quite easy to add, after I figured
>  > out how it worked.
>  >
>  > > As it turned out, this wasn't enough, but adding a second layer
>  > > of indirection to the closure list, such that both levels of
>  > > indirection involve finalizers registered with the traditional
>  > > (!= Java) semantics, it should work, right?
>  >
>  > No.  Especially if one allocates the doubly-indirect pointer as
>  > not containing pointers.  *blush*  :-)
>  >
>  > Fixing that enabled things to work until I added finalizers that
>  > would resurrect objects, perhaps even multiple times (consider a
>  > finalizer that optionally creates another instance of the same
>  > class, bringing back to life the class and the class loader).
>  >
>  > At that point, after scratching my head for a while trying to
>  > figure out a way to implement the needed semantics with the
>  > existing primitives, I concluded it couldn't be done, and so I
>  > came up with the new finalizer semantics option described above.
>  >
>  > Is this ok to install?  Tested on x86_64-linux-gnu.
> 
> Mmm, this looks good.  
> 
> OK, Hans?
> 
> Paging Anthony Green ... any comments?
> 
> Andrew.
> 


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-25 17:21                     ` David Daney
  2007-01-25 17:32                       ` Andrew Haley
@ 2007-01-26  7:14                       ` Alexandre Oliva
  2007-01-26  7:28                         ` Andrew Pinski
  1 sibling, 1 reply; 35+ messages in thread
From: Alexandre Oliva @ 2007-01-26  7:14 UTC (permalink / raw)
  To: David Daney; +Cc: Andrew Haley, green, Hans Boehm, gcc-patches, java-patches

On Jan 25, 2007, David Daney <ddaney@avtrex.com> wrote:

> I guess Alexandre should commit the patch.  We are still using GCC
> 3.4.3 for 'production' code, so it does not immediately affect us.  I
> may prepare a patch in the future for a configure option

Sounds like a fair compromise, thanks

> that reduces the code size if there is an executable stack.

Note that this is not just about the executable stack; it's about not
turning writable memory into executable memory, so as to remedy a
large class of security exploits.
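
For concreteness, here is a minimal, hypothetical sketch of the
double-mapping idea (not the actual closures.c code, which also has
to probe several filesystems, manage an allocation pool, and needs
the temporary file to live on a filesystem that permits executable
mappings; error handling is simplified):

#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

/* Hypothetical helper: on success, *wr is a writable view and *ex an
   executable view of the same SIZE bytes; code written through *wr
   becomes callable at the corresponding offset from *ex.  */
static int
alloc_dual_map (size_t size, void **wr, void **ex)
{
  char tmpl[] = "/tmp/ffiXXXXXX";
  int fd = mkstemp (tmpl);
  if (fd == -1)
    return -1;
  unlink (tmpl);		/* keep the pages, drop the name */
  if (ftruncate (fd, size) != 0)
    {
      close (fd);
      return -1;
    }
  *wr = mmap (NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  *ex = mmap (NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
  close (fd);
  return (*wr == MAP_FAILED || *ex == MAP_FAILED) ? -1 : 0;
}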

> Currently for mips[el]-linux the stack is always executable because
> the signal trampolines are on the stack.

If this is so, I guess it doesn't make much sense to enable this
split-map memory allocator for GNU/Linux/mips.  Would you care to let
me know the proper preprocessor magic to recognize the affected
platforms such that I can adjust the default for it in the following
portion of the patch, in libffi/closures.c?

+#ifndef FFI_MMAP_EXEC_WRIT
+# if __gnu_linux__
+/* This macro indicates it may be forbidden to map anonymous memory
+   with both write and execute permission.  Code compiled when this
+   option is defined will attempt to map such pages once, but if it
+   fails, it falls back to creating a temporary file in a writable and
+   executable filesystem and mapping pages from it into separate
+   locations in the virtual memory space, one location writable and
+   another executable.  */
+#  define FFI_MMAP_EXEC_WRIT 1
+# endif
+#endif
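
Purely for illustration, the carve-out David describes might then
look something like this; the __mips__ test is a hypothetical
placeholder, not part of the posted patch, and whether it is the
right condition is exactly the question above:

#ifndef FFI_MMAP_EXEC_WRIT
# if defined (__gnu_linux__) && !defined (__mips__)
/* As above, but leave the split mapping disabled on targets such as
   mips*-linux, where the stack is executable anyway and the fallback
   would only add code size.  */
#  define FFI_MMAP_EXEC_WRIT 1
# endif
#endif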


Thanks,

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-26  7:14                       ` Alexandre Oliva
@ 2007-01-26  7:28                         ` Andrew Pinski
  0 siblings, 0 replies; 35+ messages in thread
From: Andrew Pinski @ 2007-01-26  7:28 UTC (permalink / raw)
  To: Alexandre Oliva
  Cc: David Daney, Andrew Haley, green, Hans Boehm, gcc-patches, java-patches

> 
> On Jan 25, 2007, David Daney <ddaney@avtrex.com> wrote:
> 
> > I guess Alexandre should commit the patch.  We are still using GCC
> > 3.4.3 for 'production' code, so it does not immediately affect us.  I
> > may prepare a patch in the future for a configure option
> 
> Sounds like a fair compromise, thanks
> 
> > that reduces the code size if there is an executable stack.
> 
> Note that this is not just about executable stack, it's about not
> turning writable memory into executable memory, so as to remedy a
> large class of security exploits.

I think people are overdoing the security-exploits thing.  Basically,
less than 0.01% of today's population will ever exploit an issue.
Even then, an executable stack is not really a problem if you have
bounds checking and check the input of what goes on the stack for
execution.

So I think making the stack non-executable is the wrong approach to
fixing these security exploits.


Thanks,
Andrew Pinski


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-25 21:52             ` Boehm, Hans
@ 2007-01-26  8:11               ` Alexandre Oliva
  2007-01-26 21:13                 ` Boehm, Hans
  2007-01-26 10:16               ` Andrew Haley
  1 sibling, 1 reply; 35+ messages in thread
From: Alexandre Oliva @ 2007-01-26  8:11 UTC (permalink / raw)
  To: Boehm, Hans; +Cc: Andrew Haley, green, gcc-patches, java-patches

On Jan 25, 2007, "Boehm, Hans" <hans.boehm@hp.com> wrote:

> Normally, the GC's intended usage model is that if I have an object P
> whose finalizer accesses other referenced objects R, then I register P
> with something like the "normal" finalization mark procedure, so that
> the referenced objects R don't get finalized until P is done with them.

Well, Java isn't like that, as you know.

> I think you are proposing to somehow have the objects R know that they
> might be referenced by another finalizable object, and hence postpone
> their own finalization by a cycle.  That seems fundamentally backwards
> and brittle.  This just isn't R's business.

I think it actually is, to some extent, because we're trying to fit
this R in a world in which the normal finalization mark procedure is
not the rule, but the exception, and this requires a stronger
primitive to cancel out its negative effects.  See below.

> Can't we just register the referring object (P, the class in the real
> case, I think) correctly?

No.  Consider that:

- object O1 is an instance of class P, and R is a resource that
must be disposed of when P is garbage-collected.

- both O1 and P have finalizers, and, because of the Java finalization
model, they can both run at the same time (O1 must not use normal
semantics to keep P's finalizer from running in parallel because this
would violate Java semantics, and it would have to apply to all Java
objects, so it's a non-starter)

- O1's finalizer flips a coin to decide whether or not to create a new
instance of P, say O2, which thus carries the same finalizer code


Now, let's look at the alternatives to try to model the requirement
that R must be disposed of no earlier than when P is disposed of.

a) dispose of R in P's finalizer

O1 and P are found to be unreachable.  Their finalizers run.  O2 is
created and makes P reachable again, but its finalizer has already run
and disposed of R.  Requirement not satisfied.

b) control R by a separate object that P points to, and dispose of R
in R's finalizer

b.1) P has Java finalization semantics

R's finalizer may run at the time O1 and P become unreachable, so it
breaks just like a.

b.2) P has normal finalization semantics

O1, P and R are found to be unreachable.  R is reachable from P, so
it's not finalized, but O1 and P are.  O2 is created and makes P and R
reachable again, but P no longer has a finalizer.  O2, P and R are
found to be unreachable, but since P no longer has a normal finalizer,
R is not marked, so finalizers for O2 and R are run.  O3 is created
and makes P reachable again, but R has already been disposed of.
Requirement not satisfied.

c) introduce additional levels of indirection between P and R

Just extend the b.2 scenario with additional instances of O, each one
taking down one level of finalizers, and it eventually fails in just
the same way.

d) assign new semantics to P such that it can keep R from being
finalized before P is collected

This would amount to registering some kind of special do-nothing
auto-reinstalled finalizer in P, that would implement the semantics of
marking objects reachable from it for later disposal, but that
wouldn't stop P from being collected if it's not brought back to life
by other finalizers.

d.1) check that P is still unreachable after running finalizers

Well, that might work, but it would amount to running markers after
finalizers to check that P didn't become reachable.  Either a full
mark pass, or keeping track of all changes made by finalizers and
checking that they haven't added new references to P.  Sounds like a
non-starter to me.

d.2) check that P is not reachable from other finalizable objects

If nobody else can reach P, not even finalizable objects, then no new
references to it can be created, so it is safe to dispose of it
instead of marking it as reachable and running its pseudo-finalizer
one more time.

This is almost equivalent to d.1, in the sense that d.2 uses the mark
pass of the next collection round as the post-finalization mark pass
of d.1's mark/finalize-and-mark/sweep cycle.  The sweep ends up
delayed by one round, which is probably acceptable.

e) assign new semantics to R that keeps it from being finalized before
P is collected

Apply the same logic of d.2, but without creating a do-nothing
pseudo-finalizer that needlessly delays the collection of P to keep R
alive long enough.  Apply the special semantics to R, that holds the
critical resource that needs special treatment, rather than to the
object that happens to point to it, or to multiple such objects that
might dynamically point to it, directly or indirectly.


Of course, once we have this new e) semantics, we could apply it to P
directly, but this would break the Java garbage collection model when
P is a class, so we're better off splitting out the special semantics
into a separate object that can't possibly take part in cycles.

The beauty of this e) semantics is that, even in a world in which the
Java finalization semantics is present and widely used, it enables an
object to restore the guarantee that it can only be brought back to a
reachable state by its own finalizer, because it knows no other
finalizable objects hold references to it.  This semantics is safe
because then the finalizer can install a new finalizer, if needed to
handle another round of finalization.  This semantics is essential for
finalizers that dispose of resources outside the control of the
garbage collector, and that must not be disposed of while the object
is still reachable.
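
In API terms, R would opt in to this e) semantics with something like
the sketch below, assuming the new primitive is exposed as
GC_register_finalizer_unreachable (the name this semantics carries in
the GC's interface), with the same signature as the other
registration calls; the helper names are invented:

#include <gc.h>

static void dispose_r (void *obj, void *client_data)
{
  /* Release the external resource (e.g. executable closure memory)
     held by obj.  */
}

void register_r (void *r)
{
  /* e) semantics: dispose_r runs only once r is unreachable even
     from other finalizable objects, whether Java-style or normal.  */
  GC_register_finalizer_unreachable (r, dispose_r, NULL, NULL, NULL);
}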


So I counter your claim that this isn't R's business.  It is R's
business to keep a resource available while R is still reachable, and
to dispose of it when R becomes no longer reachable.  No other object
can possibly take this responsibility, unless they all do, propagating
the "normal" semantics back to all finalizable objects that directly
or indirectly point to R.

In fact, this was the semantics I first understood when you wrote
there was a risk that my first patch wouldn't work: per normal
semantics, an object can rely on not having its finalizer run while
anyone else still points at it.  Java semantics breaks this property,
and we
just can't restore it properly from the points-to side with existing
primitives, as demonstrated in b.2 and c, so we can fix it either in
all points-to objects, or in the pointed-to object.

Since it's the pointed-to object that wants the normal finalization
semantics, it seems just natural to provide it with means to do so,
instead of engineering something that would require unnatural smarts
elsewhere.

It just so happens that the normal semantics falls apart in the
frontier between the Java semantics and the normal semantics, and e)
introduces a way to restore the normal semantics when it's needed.

Arguably, e) could be *the* normal semantics when GC_java_finalization
is enabled, since I can hardly see that it makes much sense otherwise.
If the whole point of the normal semantics is to keep things safe, it
makes little sense to depend on the semantics of the objects that
point to you to ensure the safety you rely on.

But maybe there are cases in which the current, relaxed normal
semantics makes sense, even in the presence of Java objects.  So I
thought I wouldn't impose the burden of this stricter normal semantics
on them all.


Does this make sense to you?

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}


* RE: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-25 21:52             ` Boehm, Hans
  2007-01-26  8:11               ` Alexandre Oliva
@ 2007-01-26 10:16               ` Andrew Haley
  1 sibling, 0 replies; 35+ messages in thread
From: Andrew Haley @ 2007-01-26 10:16 UTC (permalink / raw)
  To: Boehm, Hans; +Cc: Alexandre Oliva, green, gcc-patches, java-patches

Boehm, Hans writes:
 > I'm still missing the big picture with respect to the finalization
 > changes.
 > 
 > Normally, the GC's intended usage model is that if I have an object P
 > whose finalizer accesses other referenced objects R, then I register P
 > with something like the "normal" finalization mark procedure, so that
 > the referenced objects R don't get finalized until P is done with them.
 > I think you are proposing to somehow have the objects R know that they
 > might be referenced by another finalizable object, and hence postpone
 > their own finalization by a cycle.  That seems fundamentally backwards
 > and brittle.  This just isn't R's business.
 > 
 > Can't we just register the referring object (P, the class in the real
 > case, I think) correctly?

For avoidance of doubt: I don't want any changes to be made to libgcj
until Hans' queries and objections have been satisfied.

Andrew.


* RE: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-26  8:11               ` Alexandre Oliva
@ 2007-01-26 21:13                 ` Boehm, Hans
  2007-01-27  7:27                   ` Alexandre Oliva
  0 siblings, 1 reply; 35+ messages in thread
From: Boehm, Hans @ 2007-01-26 21:13 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Andrew Haley, green, gcc-patches, java-patches

How about the following variant of (d):

f) Register all Java finalizable objects with a finalize_mark_proc that
traces the class, but nothing else?

This effectively states that object finalizers need the corresponding
classes to stay around until the object finalizer has completed.  It
requires that we invent a new finalize_mark_proc, which might seem a bit
ad hoc in the GC code.  But the addition is pretty trivial, and fits
well into the current framework.
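
A sketch of what such a mark procedure might look like, modeled on
the existing finalize mark procedures in finalize.c; the name is
invented, and it assumes (as libgcj can) that the class/vtable
pointer sits in the object's first word:

/* Hypothetical: trace only the class pointer, nothing else.  */
void GC_class_only_finalize_mark_proc (ptr_t p)
{
    word q = *(word *)p;	/* assumed: class pointer in first word */
    GC_PUSH_ONE_HEAP (q, p);	/* same push primitive used by
				   GC_ignore_self_finalize_mark_proc */
}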

I think the performance impact should also be trivial.  (I think I
measured that adding a finalizer currently adds a factor of 11 or so to
allocation overhead.  This might increase it to 12 or 13.  If you care,
you're already in trouble.)

This in some sense violates the Java requirement for unordered
finalization, but I think that's completely undetectable to the client
code.  I don't think we can get finalization cycles in this manner.
(And even if we could, we wouldn't really be violating the spec, I
suspect.)

Is there a reason this wouldn't work?

Hans


> -----Original Message-----
> From: Alexandre Oliva [mailto:aoliva@redhat.com] 
> Sent: Friday, January 26, 2007 12:11 AM
> To: Boehm, Hans
> Cc: Andrew Haley; green@redhat.com; gcc-patches@gcc.gnu.org; 
> java-patches@gcc.gnu.org
> Subject: Re: Get libffi closures to cope with SELinux execmem/execmod
> 
> [...]


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-26 21:13                 ` Boehm, Hans
@ 2007-01-27  7:27                   ` Alexandre Oliva
  2007-01-31 23:43                     ` Boehm, Hans
  0 siblings, 1 reply; 35+ messages in thread
From: Alexandre Oliva @ 2007-01-27  7:27 UTC (permalink / raw)
  To: Boehm, Hans; +Cc: Andrew Haley, green, gcc-patches, java-patches

On Jan 26, 2007, "Boehm, Hans" <hans.boehm@hp.com> wrote:

> f) Register all Java finalizable objects with a finalize_mark_proc that
> traces the class, but nothing else?

This won't work.  Consider that O1 is not an instance of P, but rather
an instance of a subclass, or even an instance of an unrelated class
whose constructor instantiates P.  At the point you start requiring
special semantics in finalizable objects, you need to apply it to
every object and class reachable from them, and then you violate the
Java requirements or you still don't get the normal finalization
semantics for objects in the frontier between Java finalization and
normal finalization.

> This effectively states that object finalizers need the corresponding
> classes to stay around until the object finalizer has completed.

It would be just a (failed) hack to try to work around a real problem
in boehm-gc.

Basically, the current implementation doesn't offer the documented
semantics at all when there are Java objects involved, and I'm
convinced there's no way to offer the proper semantics without the new
primitive I've added.  The only doubt in my mind is whether it needs
to be a separate primitive, or if we should fix the behavior of the
existing primitive so as to adopt the fixed semantics when Java
finalization semantics are in use.

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}


* RE: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-27  7:27                   ` Alexandre Oliva
@ 2007-01-31 23:43                     ` Boehm, Hans
  2007-02-05 14:21                       ` Alexandre Oliva
  0 siblings, 1 reply; 35+ messages in thread
From: Boehm, Hans @ 2007-01-31 23:43 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Andrew Haley, green, gcc-patches, java-patches

Thanks for the clarification.

I still think this is really ugly in principle, but I'm now inclined to
agree that we need such a mechanism.  (This is not a statement about
Alexandre's implementation, which is quite clever and clean.)

I'll try to restate the problem, yet again:

We have an object O1 which is finalizable, and indirectly points to O2.
O2 is the last member of class P, and the representation of class P
includes a resource R that must be explicitly reclaimed when P is
collected.  (Which presumably happens when P's class loader is also
collected?)  Thus R has a finalizer.  O1 sometimes resurrects itself
from its finalizer.  (Or it might resurrect some other object on the
chain to O2; it doesn't matter.)  Since Java finalization is unordered,
R may be finalized, even though it will be needed again after O1
resurrects itself.

The way the GC was originally designed, this should be impossible
because R won't be considered for finalization until after O1 has run.

More generally, the GC internally provides a facility that allows every
finalizable object to declare which fields it might need for the
finalizer to run correctly, and anything reachable from those fields
will still be intact while the finalizer is run and after a possible
resurrection.  But this doesn't work here because O1 is a Java object,
and R is a libgcj internal object.

Similarly, for a Java object to be able to correctly resurrect itself,
it generally needs to ensure that all indirectly referenced objects are
reachable through some other path.  But O1 might happen to know that,
for example, it doesn't refer to any Java finalizable objects.  Or it
might even be buggy.  We can't crash the runtime in either case.

Thus we need a way for R to protect itself without cooperation from O1,
which we don't control.  Alexandre's patch does that by allowing objects
to declare themselves to not be finalizable while reachable from other
finalizable objects, overriding other instructions to finalize objects
out of order.

The reasons I don't feel enthusiastic about including this:

- It doesn't fit well with the GC's original design.  It feels like a
hack on the interface, even more so than the Java finalization
extension.

- It also feels like an admission of failure for the Java design.
JSR133 finally legitimized the technique of enforcing Java finalizer
ordering by having an object A whose finalizer depends on B put B into a
reachable container, and having it removed by A's finalizer.  This is a
somewhat less efficient way to enforce ordering of the kind the
collector always supported, at least internally, i.e. a finalizable
object declares what fields the finalizer depends on.  This argues that
that mechanism isn't sufficient.  But I think there is no way to do
something vaguely analogous to what Alexandre is proposing from Java.
Hence we're essentially saying that we can't eat our own dog food
because it's not good enough.  (There's a bit of an argument that this
issue is specific to the runtime, but I don't think it's really
convincing.)

In spite of all of that, I'm OK with also putting this into the upstream
GC tree.

Hans

> -----Original Message-----
> From: Alexandre Oliva [mailto:aoliva@redhat.com] 
> Sent: Friday, January 26, 2007 11:27 PM
> To: Boehm, Hans
> Cc: Andrew Haley; green@redhat.com; gcc-patches@gcc.gnu.org; 
> java-patches@gcc.gnu.org
> Subject: Re: Get libffi closures to cope with SELinux execmem/execmod
> 
> [...]


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-31 23:43                     ` Boehm, Hans
@ 2007-02-05 14:21                       ` Alexandre Oliva
  2007-02-08 19:39                         ` Andrew Haley
  2007-02-09  6:35                         ` Hans Boehm
  0 siblings, 2 replies; 35+ messages in thread
From: Alexandre Oliva @ 2007-02-05 14:21 UTC (permalink / raw)
  To: Boehm, Hans; +Cc: Andrew Haley, green, gcc-patches, java-patches

On Jan 31, 2007, "Boehm, Hans" <hans.boehm@hp.com> wrote:

> I still think this is really ugly in principle, but I'm now inclined to
> agree that we need such a mechanism.  (This is not a statement about
> Alexandre's implementation, which is quite clever and clean.)

Thanks

> We have an object O1 which is finalizable, and indirectly points to O2.
> O2 is the last member of class P, and the representation of class P
> includes a resource R that must be explicitly reclaimed when P is
> collected.  (Which presumably happens when P's class loader is also
> collected?)

Yep

> Thus R has a finalizer.  O1 sometimes resurrects itself from its
> finalizer.  (Or it might resurrect some other object on the chain to
> O2; it doesn't matter.)  Since Java finalization is unordered, R may
> be finalized, even though it will be needed again after O1
> resurrects itself.

I'd state it even more generally without involving objects and
classes.  Multiple objects O_n point, directly or indirectly, to R.
Although the objects are configured for unordered finalization, R is
configured for traditional finalization, such that, when its finalizer
runs, it knows no other finalizer can resurrect it.

> The way the GC was originally designed, this should be impossible
> because R won't be considered for finalization until after O1 has run.

Exactly.  And this is the very property I'm trying to restore.  The
way the GC was originally designed, it held that:

  terminally_unreachable(O_j) => O_j is finalizable

but as it turned out this could be implemented with a simpler, more
local test:

  O_i reaches O_j => O_j is not finalizable

and in the stage of preparation for finalization it only had to iterate
over other potentially-finalizable objects, because other live objects
would have already made O_j ineligible.

But then, when Java finalization was introduced, this conceptual model
fell apart, because Java finalizers can (and should) run even for
objects that are reachable (from other finalizable objects).

However, it also broke the implementation above for mixed cases.  It
no longer holds that, for every j such that O_j is a finalizable
object with normal semantics and not terminally_unreachable(O_j),
there exists some i such that O_i is also a finalizable object with
normal semantics and O_i reaches O_j.  O_j may be reachable only from
a Java object, and that's what breaks the original property.

The change I proposed restores the property, but in a more localized
way: only if you say "hey, I really care about that original property"
will it enforce it.  In other cases, we'd use the implementation that
worked to enforce the property for non-mixed cases but that fails to
enforce it in border cases.  This sounds broken to me.

> Thus we need a way for R to protect itself without cooperation from O1,
> which we don't control.  Alexandre's patch does that by allowing objects
> to declare themselves to not be finalizable while reachable from other
> finalizable objects, overriding other instructions to finalize objects
> out of order.

> The reasons I don't feel enthusiastic about including this:

> - It doesn't fit well with the GCs original design.  It feels like a
> hack on the interface, even more so than the Java finalization
> extension.

I agree.  That's why I've been hinting that, instead of adding a new
finalization type, we should modify the non-Java finalization types
such that they enforce the original property even in Java mode.

I haven't been able to think of any case in which it could possibly be
useful for a finalizer to want to run only if no other finalizers with
normal semantics reach it, even if other finalizers with Java
semantics do.

What I have in mind is something like this (untested) patch.  How do
you feel about it?


Index: finalize.c
===================================================================
--- finalize.c	(revision 121196)
+++ finalize.c	(working copy)
@@ -2,6 +2,7 @@
  * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
  * Copyright (c) 1991-1996 by Xerox Corporation.  All rights reserved.
  * Copyright (c) 1996-1999 by Silicon Graphics.  All rights reserved.
+ * Copyright (C) 2007 Free Software Foundation, Inc
 
  * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
  * OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
@@ -637,10 +638,43 @@
   	if (!GC_is_marked(real_ptr)) {
   	    if (curr_fo -> fo_mark_proc == GC_null_finalize_mark_proc) {
   	        GC_MARK_FO(real_ptr, GC_normal_finalize_mark_proc);
+		GC_set_mark_bit(real_ptr);
   	    }
-  	    GC_set_mark_bit(real_ptr);
   	}
       }
+
+      /* now revive finalize-when-unreachable objects reachable from
+	 other finalizable objects */
+      curr_fo = GC_finalize_now;
+      prev_fo = 0;
+      while (curr_fo != 0) {
+	next_fo = fo_next(curr_fo);
+	if (curr_fo -> fo_mark_proc != GC_null_finalize_mark_proc) {
+	  real_ptr = (ptr_t)curr_fo -> fo_hidden_base;
+	  if (!GC_is_marked(real_ptr)) {
+	      GC_set_mark_bit(real_ptr);
+	  } else {
+	      if (prev_fo == 0)
+		GC_finalize_now = next_fo;
+	      else
+		fo_set_next(prev_fo, next_fo);
+
+              curr_fo -> fo_hidden_base =
+              		(word) HIDE_POINTER(curr_fo -> fo_hidden_base);
+              GC_words_finalized -=
+                 	ALIGNED_WORDS(curr_fo -> fo_object_size)
+              		+ ALIGNED_WORDS(sizeof(struct finalizable_object));
+
+	      i = HASH2(real_ptr, log_fo_table_size);
+	      fo_set_next (curr_fo, fo_head[i]);
+	      GC_fo_entries++;
+	      fo_head[i] = curr_fo;
+	      curr_fo = prev_fo;
+	  }
+	}
+	prev_fo = curr_fo;
+	curr_fo = next_fo;
+      }
   }
 
   /* Remove dangling disappearing links. */


-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-02-05 14:21                       ` Alexandre Oliva
@ 2007-02-08 19:39                         ` Andrew Haley
  2007-02-08 20:46                           ` Alexandre Oliva
  2007-02-09  6:35                         ` Hans Boehm
  1 sibling, 1 reply; 35+ messages in thread
From: Andrew Haley @ 2007-02-08 19:39 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Boehm, Hans, green, gcc-patches, java-patches

Alexandre Oliva writes:
 > [...]
 > 
 > What I have in mind is something like this (untested) patch.  How do
 > you feel about it?

Is there any reason (to do with correctness) why we can't commit your
existing patch as is?

Andrew.


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-02-08 19:39                         ` Andrew Haley
@ 2007-02-08 20:46                           ` Alexandre Oliva
  0 siblings, 0 replies; 35+ messages in thread
From: Alexandre Oliva @ 2007-02-08 20:46 UTC (permalink / raw)
  To: Andrew Haley; +Cc: Boehm, Hans, green, gcc-patches, java-patches

On Feb  8, 2007, Andrew Haley <aph@redhat.com> wrote:

> Is there any reason (to do with correctness) why we can't commit your
> existing patch as is?

I was waiting for explicit approval.

And then, since the patch adds interfaces to boehm-gc that could be
removed should we go the way I advocated in my latest e-mail, it might
make sense to give it more time, because removing interfaces is bad.

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-02-05 14:21                       ` Alexandre Oliva
  2007-02-08 19:39                         ` Andrew Haley
@ 2007-02-09  6:35                         ` Hans Boehm
  2007-02-09 18:27                           ` Alexandre Oliva
  1 sibling, 1 reply; 35+ messages in thread
From: Hans Boehm @ 2007-02-09  6:35 UTC (permalink / raw)
  To: Alexandre Oliva
  Cc: Boehm, Hans, Andrew Haley, green, gcc-patches, java-patches

On Mon, 5 Feb 2007, Alexandre Oliva wrote:

> > - It doesn't fit well with the GC's original design.  It feels like a
> > hack on the interface, even more so than the Java finalization
> > extension.
>
> I agree.  That's why I've been hinting that, instead of adding a new
> finalization type, we should modify the non-Java finalization types
> such that they enforce the original property even in Java mode.
>
> I haven't been able to think of any case in which it could possibly be
> useful for a finalizer to want to run only if no other finalizers with
> normal semantics reach it, even if other finalizers with Java
> semantics do.

I'm unconvinced of this point, but there probably is a variant of this
solution that could also work.

To see why I'm not convinced, assume I have some object A with a bunch of
reference fields, but only field f is needed by the finalizer.

Case 1:
If I'm careful and using normal (ordered) finalization semantics, I
would separate A into two objects A and A' where A' contains (possibly
a copy of) f and none of the other reference fields, A points to A',
and only A' is finalizaton-enabled, something like

A   ->   A'    ->  ...
       final.  f

That way I avoid spurious finalization cycles.

Case 2:
If I'm using Java (unordered) finalization for A, there is no reason to do
this, and I just make A finalizable.  (To avoid premature finalization,
I would probably put a copy of f in some permanent data structure, but that
doesn't matter here.)
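
A hypothetical rendering of the two cases in code (all type and
function names invented; the two registrations are alternatives, not
meant to be combined):

#include <gc.h>

struct A_prime { void *f; };	/* only what the finalizer needs */
struct A { struct A_prime *ap; void *f2; /* ... other references ... */ };

static void finalize_ap (void *obj, void *cd) { /* uses only the f copy */ }
static void finalize_a (void *obj, void *cd) { /* Java-style finalizer */ }

void register_case_1 (struct A *a)
{
  /* Ordered: only A' is finalization-enabled, so A's other reference
     fields cannot create spurious finalization cycles.  */
  GC_register_finalizer (a->ap, finalize_ap, NULL, NULL, NULL);
}

void register_case_2 (struct A *a)
{
  /* Java, unordered: just make A itself finalizable.  */
  GC_register_finalizer_no_order (a, finalize_a, NULL, NULL, NULL);
}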

Now assume A has another field f2 that (possibly indirectly) points to an
(ordered) non-Java-finalizable object B.  Assume for the sake of argument
that B points back to A, since B's finalizer needs A.  I claim:

In case 1:  B can be finalized since it's not reachable from a
finalization-enabled object, A will probably become unreachable in the
next cycle, and A' will then be finalized.  I'm fine, as expected.

In case 2:  B is reachable from A, and the author of A had no reason to
avoid that, since unordered finalization is used.  But now if I let
that inhibit finalization of B, I have a finalization cycle, and neither
can get finalized.  That seems wrong.

I think the more general solution you suggest works only because the
normally finalizable resources in question are known not to refer to
Java objects, and hence the above situation can't arise.  It's needed
because we can't trust the Java code to do the right thing.
(And it probably couldn't anyway because not all the relevant references
are visible at the Java level.)  But if we had a C program mixing
finalization types, I think you'd want the other treatment.

I think the solution below incurs a slight risk of breaking something,
if e.g. Mono relied on the current behavior someplace.  But I could
also live with the variant in which this was a global (initialization
time) option, and the code below were wrapped in a suitable conditional.
The GC already has some such options.

Otherwise, the original patch seems fine.

Hans

>
> What I have in mind is something like this (untested) patch.  How do
> you feel about it?
>
>
> Index: finalize.c
> ===================================================================
> --- finalize.c	(revision 121196)
> +++ finalize.c	(working copy)
> @@ -2,6 +2,7 @@
>   * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
>   * Copyright (c) 1991-1996 by Xerox Corporation.  All rights reserved.
>   * Copyright (c) 1996-1999 by Silicon Graphics.  All rights reserved.
> + * Copyright (C) 2007 Free Software Foundation, Inc
>
>   * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
>   * OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
> @@ -637,10 +638,43 @@
>    	if (!GC_is_marked(real_ptr)) {
>    	    if (curr_fo -> fo_mark_proc == GC_null_finalize_mark_proc) {
>    	        GC_MARK_FO(real_ptr, GC_normal_finalize_mark_proc);
> +		GC_set_mark_bit(real_ptr);
>    	    }
> -  	    GC_set_mark_bit(real_ptr);
>    	}
>        }
> +
> +      /* now revive finalize-when-unreachable objects reachable from
> +	 other finalizable objects */
> +      curr_fo = GC_finalize_now;
> +      prev_fo = 0;
> +      while (curr_fo != 0) {
> +	next_fo = fo_next(curr_fo);
> +	if (curr_fo -> fo_mark_proc != GC_null_finalize_mark_proc) {
> +	  real_ptr = (ptr_t)curr_fo -> fo_hidden_base;
> +	  if (!GC_is_marked(real_ptr)) {
> +	      GC_set_mark_bit(real_ptr);
> +	  } else {
> +	      if (prev_fo == 0)
> +		GC_finalize_now = next_fo;
> +	      else
> +		fo_set_next(prev_fo, next_fo);
> +
> +              curr_fo -> fo_hidden_base =
> +              		(word) HIDE_POINTER(curr_fo -> fo_hidden_base);
> +              GC_words_finalized -=
> +                 	ALIGNED_WORDS(curr_fo -> fo_object_size)
> +              		+ ALIGNED_WORDS(sizeof(struct finalizable_object));
> +
> +	      i = HASH2(real_ptr, log_fo_table_size);
> +	      fo_set_next (curr_fo, fo_head[i]);
> +	      GC_fo_entries++;
> +	      fo_head[i] = curr_fo;
> +	      curr_fo = prev_fo;
> +	  }
> +	}
> +	prev_fo = curr_fo;
> +	curr_fo = next_fo;
> +      }
>    }
>
>    /* Remove dangling disappearing links. */
>
>


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-02-09  6:35                         ` Hans Boehm
@ 2007-02-09 18:27                           ` Alexandre Oliva
  2007-02-15  5:42                             ` Hans Boehm
  0 siblings, 1 reply; 35+ messages in thread
From: Alexandre Oliva @ 2007-02-09 18:27 UTC (permalink / raw)
  To: Hans Boehm; +Cc: Andrew Haley, green, gcc-patches, java-patches

On Feb  9, 2007, Hans Boehm <Hans.Boehm@hp.com> wrote:

> Now assume A has another field f2 that (possibly indirectly) points to a
> (ordered) non-Java-finalizable object B.  Assume for the case of argument
> that it points back to A since B and Bs finalizer needs A.

> In case 2:  B is reachable from A, and the author of A had no reason to
> avoid that, since unordered finalization is used.  But now if I let
> that inhibit finalization of B, I have a finalization cycle, and neither
> can get finalized.  That seems wrong.

Err...  If B is part of a cycle and it requires ordered finalization,
it must not get finalized.  It doesn't matter if other objects in the
cycle have ordered or unordered finalization, or even no finalization
at all.  This is the case in both the current implementation and with
the alternate implementation I've suggested, so I'm not sure what
you're getting at.

This won't stop A's finalizer from running, and if A's finalizer
clears the reference to B, B will eventually be finalized.  But if A
doesn't, B will be forever part of a cycle, which sounds like correct
behavior to me, even if undesirable.

> I think the solution below incurs a slight risk of breaking something,
> if e.g. Mono relied on the current behavior someplace.  But I could
> also live with the variant in which this was a global (initialization
> time) option, and the code below were wrapped in a suitable conditional.
> The GC already has some such options.

> Otherwise, the original patch seems fine.

If you see that the potential distinction can be put to good use, then
let's go with the original patch.

Ok to install?

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-02-09 18:27                           ` Alexandre Oliva
@ 2007-02-15  5:42                             ` Hans Boehm
  2007-02-15  7:24                               ` Alexandre Oliva
  0 siblings, 1 reply; 35+ messages in thread
From: Hans Boehm @ 2007-02-15  5:42 UTC (permalink / raw)
  To: Alexandre Oliva
  Cc: Hans Boehm, Andrew Haley, green, gcc-patches, java-patches



On Fri, 9 Feb 2007, Alexandre Oliva wrote:

> On Feb  9, 2007, Hans Boehm <Hans.Boehm@hp.com> wrote:
>
> > Now assume A has another field f2 that (possibly indirectly) points to an
> > (ordered) non-Java-finalizable object B.  Assume for the sake of argument
> > that B points back to A, since B's finalizer needs A.
>
> > In case 2:  B is reachable from A, and the author of A had no reason to
> > avoid that, since unordered finalization is used.  But now if I let
> > that inhibit finalization of B, I have a finalization cycle, and neither
> > can get finalized.  That seems wrong.
>
> Err...  If B is part of a cycle and it requires ordered finalization,
> it must not get finalized.  It doesn't matter if other objects in the
> cycle have ordered or unordered finalization, or even no finalization
> at all.  This is the case in both the current implementation and with
> the alternate implementation I've suggested, so I'm not sure what
> you're getting at.
>
> This won't stop A's finalizer from running, and if A's finalizer
> clears the reference to B, B will eventually be finalized.  But if A
> doesn't, B will be forever part of a cycle, which sounds like correct
> behavior to me, even if undesirable.
If B uses ordered finalization, its finalization mark procedure will
mark A as not being ready for finalization this cycle, as it should.
Thus neither A nor B will get finalized, I think.

The basic issue is that in the most general case, client code needs
to let the collector know which fields are needed for finalization.
The way you do that currently depends on whether the object in question
uses ordered finalization.  With the simpler scheme, that breaks.

I will admit that there is little code for which this is an issue.

>
> > I think the solution below incurs a slight risk of breaking something,
> > if e.g. Mono relied on the current behavior someplace.  But I could
> > also live with the variant in which this was a global (initialization
> > time) option, and the code below were wrapped in a suitable conditional.
> > The GC already has some such options.
>
> > Otherwise, the original patch seems fine.
>
> If you see that the potential distinction can be put to good use, then
> let's go with the original patch.
>
> Ok to install?

Sounds good to me.

Hans


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-18  7:14 Get libffi closures to cope with SELinux execmem/execmod Alexandre Oliva
  2007-01-18  7:28 ` Alexandre Oliva
  2007-01-18 12:30 ` Andrew Haley
@ 2007-02-15  7:11 ` Tom Tromey
  2007-02-18 23:09   ` Alexandre Oliva
  2 siblings, 1 reply; 35+ messages in thread
From: Tom Tromey @ 2007-02-15  7:11 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: gcc-patches, java-patches

>>>>> "Alexandre" == Alexandre Oliva <aoliva@redhat.com> writes:

Finally went through this thread.

Alexandre> +void *
Alexandre> +_Jv_AllocClosure (jsize size, void **code)
Alexandre> +{
Alexandre> +  return ffi_closure_alloc (size, code);
Alexandre> +}

It is ok to have the libjava code directly call ffi_closure_alloc and
ffi_closure_free.  We're already making direct calls to the various
ffi functions, so this extra layer isn't needed.
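
For instance, something like this sketch is all a caller needs (the
make_closure wrapper, the cif/handler/user_data parameters, and the exact
name of the prep entry point that takes a separate executable address --
I'm assuming ffi_prep_closure_loc -- are illustrative, not gospel):

  #include <ffi.h>

  /* Returns the executable entry point, or NULL on failure.  Real code
     must also remember WRITABLE somewhere in order to free it later
     with ffi_closure_free.  */
  void *
  make_closure (ffi_cif *cif,
                void (*handler) (ffi_cif *, void *, void **, void *),
                void *user_data)
  {
    void *code;                 /* executable view of the closure */
    ffi_closure *writable
      = (ffi_closure *) ffi_closure_alloc (sizeof (ffi_closure), &code);

    if (writable == NULL)
      return NULL;

    /* Prepare through the writable mapping, telling libffi where the
       executable view of the same memory lives.  */
    if (ffi_prep_closure_loc (writable, cif, handler, user_data, code)
        != FFI_OK)
      {
        ffi_closure_free (writable);
        return NULL;
      }

    return code;
  }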

The other, GC-related, allocation functions go through an extra layer
because, back in the day, we considered being able to swap out GCs.
(And, at least one person actually did this... but on the whole we'd
probably have done better to make direct calls.)

Tom


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-01-22 21:28       ` Alexandre Oliva
  2007-01-24  6:46         ` Alexandre Oliva
@ 2007-02-15  7:14         ` Tom Tromey
  1 sibling, 0 replies; 35+ messages in thread
From: Tom Tromey @ 2007-02-15  7:14 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Hans Boehm, Andrew Haley, gcc-patches, java-patches

>>>>> "Alexandre" == Alexandre Oliva <aoliva@redhat.com> writes:

Alexandre> I ask because I reasoned I could create a GC-managed
Alexandre> pointer to the closure list, and register a finalizer for
Alexandre> that pointer such that the closure list would only get
Alexandre> deallocated when the class that held the pointer to it was
Alexandre> collected.

This whole thread makes my head hurt.

There is an already-existing solution, albeit an expensive one.  We
could use a PhantomReference on the Class whose enqueueing behavior is
to free the closures.

But after reading the rest of the thread I'm not sure whether phantom
references really work properly as-is.  And, anyhow, I think this
approach would require us to have a cleanup thread.

Tom


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-02-15  5:42                             ` Hans Boehm
@ 2007-02-15  7:24                               ` Alexandre Oliva
  0 siblings, 0 replies; 35+ messages in thread
From: Alexandre Oliva @ 2007-02-15  7:24 UTC (permalink / raw)
  To: Hans Boehm; +Cc: Andrew Haley, green, gcc-patches, java-patches

On Feb 15, 2007, Hans Boehm <Hans.Boehm@hp.com> wrote:

> On Fri, 9 Feb 2007, Alexandre Oliva wrote:

>> On Feb  9, 2007, Hans Boehm <Hans.Boehm@hp.com> wrote:
>> 
>> > Now assume A has another field f2 that (possibly indirectly) points to an
>> > (ordered) non-Java-finalizable object B.  Assume for the sake of argument
>> > that it points back to A, since B and B's finalizer need A.

>> > In case 2:  B is reachable from A, and the author of A had no reason to
>> > avoid that, since unordered finalization is used.  But now if I let
>> > that inhibit finalization of B, I have a finalization cycle, and neither
>> > can get finalized.  That seems wrong.

>> This won't stop A's finalizer from running, and if A's finalizer
>> clears the reference to B, B will eventually be finalized.  But if A
>> doesn't, B will be forever part of a cycle, which sounds like correct
>> behavior to me, even if undesirable.

> If B uses ordered finalization, its finalization mark procedure will
> mark A as not being ready for finalization this cycle, as it should.
> Thus neither A nor B will get finalized, I think.

Indeed, you're right.  I was basing my reasoning not on the code, but
on what I thought would be the right thing to do.

It wouldn't be too hard to make it match the behavior I expected:

1. enqueue all unmarked Java-finalization objects for finalization
(new loop)

2. run the loop over other finalization-enabled objects that runs
their mark procedure (like before, but Java objects have already been
enqueued for finalization)

3. enqueue unmarked objects for finalization (like before)

4. mark Java objects enqueued for finalization (as in my proposed
change)

5. dequeue non-Java objects that are marked, and otherwise mark them
(as in my proposed change)


The reason I like this design is that it respects the properties that
each object requested: Java objects get finalized even if other
objects (of any kind) point to them; other objects only get finalized
when no other objects (of any kind) point to them.

The new loop would be as simple as a copy of the loop that follows:
  /* Enqueue for finalization all objects that are still		*/
  /* unreachable.							*/

except that it would be enclosed in an if (GC_java_finalization), and
instead of:

	    if (!GC_java_finalization) {
              GC_set_mark_bit(real_ptr);
	    }

it would do:

 	    if (curr_fo -> fo_mark_proc != GC_null_finalize_mark_proc) {
              continue;
            }
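
Putting it all together, the new loop would be something like this
(just a sketch: I'm reusing the names from finalize.c as I read it,
and eliding the list splicing that actually moves an entry onto the
finalization queue):

    if (GC_java_finalization) {
      /* Enqueue for finalization all unmarked objects that asked for
         unordered (Java) finalization, before any finalization mark
         procedures have run.  */
      for (i = 0; i < fo_size; i++) {
        for (curr_fo = fo_head[i]; curr_fo != 0; curr_fo = fo_next(curr_fo)) {
          if (curr_fo -> fo_mark_proc != GC_null_finalize_mark_proc)
            continue;
          real_ptr = (ptr_t)REVEAL_POINTER(curr_fo -> fo_hidden_base);
          if (!GC_is_marked(real_ptr)) {
            /* ... splice curr_fo out of fo_head[i] and onto
               GC_finalize_now, as the existing loop does ... */
          }
        }
      }
    }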
  

Sounds sound? ;-)


Then we could change GC_null_finalize_mark_proc() so that it
abort()s if it's ever called (meaning that, if you use Java
finalization, you have to set GC_java_finalization; if you don't,
things just don't quite work as expected) and tweak
GC_register_finalizer_no_order() to set this variable.  Makes sense?
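
In code, the two tweaks would amount to something like this (a sketch;
GC_register_finalizer_inner is the existing internal helper that the
registration functions already go through):

    void GC_null_finalize_mark_proc(ptr_t p)
    {
        /* Should never run: unordered objects get enqueued before any
           finalization mark procedures are invoked.  Reaching this
           means Java finalization was used without setting
           GC_java_finalization.  */
        ABORT("GC_register_finalizer_no_order used without "
              "GC_java_finalization");
    }

    void GC_register_finalizer_no_order(void * obj, GC_finalization_proc fn,
                                        void * cd, GC_finalization_proc *ofn,
                                        void ** ocd)
    {
        GC_java_finalization = 1;   /* the proposed tweak */
        GC_register_finalizer_inner(obj, fn, cd, ofn, ocd,
                                    GC_null_finalize_mark_proc);
    }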

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-02-15  7:11 ` Tom Tromey
@ 2007-02-18 23:09   ` Alexandre Oliva
  2007-03-05 13:15     ` Tom Tromey
  0 siblings, 1 reply; 35+ messages in thread
From: Alexandre Oliva @ 2007-02-18 23:09 UTC (permalink / raw)
  To: tromey; +Cc: gcc-patches, java-patches

[-- Attachment #1: Type: text/plain, Size: 331 bytes --]

On Feb 15, 2007, Tom Tromey <tromey@redhat.com> wrote:

> It is ok to have the libjava code directly call ffi_closure_alloc and
> ffi_closure_free.  We're already making direct calls to the various
> ffi functions, so this extra layer isn't needed.

Done in the revised patch below.  Retested on x86_64-linux-gnu.  Ok to
install?


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: libffi-closure-prot-exec-dlmalloc.patch --]
[-- Type: text/x-patch, Size: 185459 bytes --]

for  libffi/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* src/dlmalloc.c: New file, imported version 2.8.3 of Doug
        Lea's malloc.

Index: libffi/src/dlmalloc.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ libffi/src/dlmalloc.c	2007-01-17 06:31:01.000000000 -0200
@@ -0,0 +1,5061 @@
+/*
+  This is a version (aka dlmalloc) of malloc/free/realloc written by
+  Doug Lea and released to the public domain, as explained at
+  http://creativecommons.org/licenses/publicdomain.  Send questions,
+  comments, complaints, performance data, etc to dl@cs.oswego.edu
+
+* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
+
+   Note: There may be an updated version of this malloc obtainable at
+           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
+         Check before installing!
+
+* Quickstart
+
+  This library is all in one file to simplify the most common usage:
+  ftp it, compile it (-O3), and link it into another program. All of
+  the compile-time options default to reasonable values for use on
+  most platforms.  You might later want to step through various
+  compile-time and dynamic tuning options.
+
+  For convenience, an include file for code using this malloc is at:
+     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
+  You don't really need this .h file unless you call functions not
+  defined in your system include files.  The .h file contains only the
+  excerpts from this file needed for using this malloc on ANSI C/C++
+  systems, so long as you haven't changed compile-time options about
+  naming and tuning parameters.  If you do, then you can create your
+  own malloc.h that does include all settings by cutting at the point
+  indicated below. Note that you may already by default be using a C
+  library containing a malloc that is based on some version of this
+  malloc (for example in linux). You might still want to use the one
+  in this file to customize settings or to avoid overheads associated
+  with library versions.
+
+* Vital statistics:
+
+  Supported pointer/size_t representation:       4 or 8 bytes
+       size_t MUST be an unsigned type of the same width as
+       pointers. (If you are using an ancient system that declares
+       size_t as a signed type, or need it to be a different width
+       than pointers, you can use a previous release of this malloc
+       (e.g. 2.7.2) supporting these.)
+
+  Alignment:                                     8 bytes (default)
+       This suffices for nearly all current machines and C compilers.
+       However, you can define MALLOC_ALIGNMENT to be wider than this
+       if necessary (up to 128bytes), at the expense of using more space.
+
+  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
+                                          8 or 16 bytes (if 8byte sizes)
+       Each malloced chunk has a hidden word of overhead holding size
+       and status information, and additional cross-check word
+       if FOOTERS is defined.
+
+  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
+                          8-byte ptrs:  32 bytes    (including overhead)
+
+       Even a request for zero bytes (i.e., malloc(0)) returns a
+       pointer to something of the minimum allocatable size.
+       The maximum overhead wastage (i.e., the number of extra bytes
+       allocated beyond what was requested in malloc) is less than or equal
+       to the minimum size, except for requests >= mmap_threshold that
+       are serviced via mmap(), where the worst case wastage is about
+       32 bytes plus the remainder from a system page (the minimal
+       mmap unit); typically 4096 or 8192 bytes.
+
+  Security: static-safe; optionally more or less
+       The "security" of malloc refers to the ability of malicious
+       code to accentuate the effects of errors (for example, freeing
+       space that is not currently malloc'ed or overwriting past the
+       ends of chunks) in code that calls malloc.  This malloc
+       guarantees not to modify any memory locations below the base of
+       heap, i.e., static variables, even in the presence of usage
+       errors.  The routines additionally detect most improper frees
+       and reallocs.  All this holds as long as the static bookkeeping
+       for malloc itself is not corrupted by some other means.  This
+       is only one aspect of security -- these checks do not, and
+       cannot, detect all possible programming errors.
+
+       If FOOTERS is defined nonzero, then each allocated chunk
+       carries an additional check word to verify that it was malloced
+       from its space.  These check words are the same within each
+       execution of a program using malloc, but differ across
+       executions, so externally crafted fake chunks cannot be
+       freed. This improves security by rejecting frees/reallocs that
+       could corrupt heap memory, in addition to the checks preventing
+       writes to statics that are always on.  This may further improve
+       security at the expense of time and space overhead.  (Note that
+       FOOTERS may also be worth using with MSPACES.)
+
+       By default detected errors cause the program to abort (calling
+       "abort()"). You can override this to instead proceed past
+       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
+       has no effect, and a malloc that encounters a bad address
+       caused by user overwrites will ignore the bad address by
+       dropping pointers and indices to all known memory. This may
+       be appropriate for programs that should continue if at all
+       possible in the face of programming errors, although they may
+       run out of memory because dropped memory is never reclaimed.
+
+       If you don't like either of these options, you can define
+       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
+  else. And if you are sure that your program using malloc has
+       no errors or vulnerabilities, you can define INSECURE to 1,
+       which might (or might not) provide a small performance improvement.
+
+  Thread-safety: NOT thread-safe unless USE_LOCKS defined
+       When USE_LOCKS is defined, each public call to malloc, free,
+       etc is surrounded with either a pthread mutex or a win32
+       spinlock (depending on WIN32). This is not especially fast, and
+       can be a major bottleneck.  It is designed only to provide
+       minimal protection in concurrent environments, and to provide a
+       basis for extensions.  If you are using malloc in a concurrent
+       program, consider instead using ptmalloc, which is derived from
+       a version of this malloc. (See http://www.malloc.de).
+
+  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
+       This malloc can use unix sbrk or any emulation (invoked using
+       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
+       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
+       memory.  On most unix systems, it tends to work best if both
+       MORECORE and MMAP are enabled.  On Win32, it uses emulations
+       based on VirtualAlloc. It also uses common C library functions
+       like memset.
+
+  Compliance: I believe it is compliant with the Single Unix Specification
+       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
+       others as well.
+
+* Overview of algorithms
+
+  This is not the fastest, most space-conserving, most portable, or
+  most tunable malloc ever written. However it is among the fastest
+  while also being among the most space-conserving, portable and
+  tunable.  Consistent balance across these factors results in a good
+  general-purpose allocator for malloc-intensive programs.
+
+  In most ways, this malloc is a best-fit allocator. Generally, it
+  chooses the best-fitting existing chunk for a request, with ties
+  broken in approximately least-recently-used order. (This strategy
+  normally maintains low fragmentation.) However, for requests less
+  than 256bytes, it deviates from best-fit when there is not an
+  exactly fitting available chunk by preferring to use space adjacent
+  to that used for the previous small request, as well as by breaking
+  ties in approximately most-recently-used order. (These enhance
+  locality of series of small allocations.)  And for very large requests
+  (>= 256Kb by default), it relies on system memory mapping
+  facilities, if supported.  (This helps avoid carrying around and
+  possibly fragmenting memory used only for large chunks.)
+
+  All operations (except malloc_stats and mallinfo) have execution
+  times that are bounded by a constant factor of the number of bits in
+  a size_t, not counting any clearing in calloc or copying in realloc,
+  or actions surrounding MORECORE and MMAP that have times
+  proportional to the number of non-contiguous regions returned by
+  system allocation routines, which is often just 1.
+
+  The implementation is not very modular and seriously overuses
+  macros. Perhaps someday all C compilers will do as good a job
+  inlining modular code as can now be done by brute-force expansion,
+  but now, enough of them seem not to.
+
+  Some compilers issue a lot of warnings about code that is
+  dead/unreachable only on some platforms, and also about intentional
+  uses of negation on unsigned types. All known cases of each can be
+  ignored.
+
+  For a longer but out of date high-level description, see
+     http://gee.cs.oswego.edu/dl/html/malloc.html
+
+* MSPACES
+  If MSPACES is defined, then in addition to malloc, free, etc.,
+  this file also defines mspace_malloc, mspace_free, etc. These
+  are versions of malloc routines that take an "mspace" argument
+  obtained using create_mspace, to control all internal bookkeeping.
+  If ONLY_MSPACES is defined, only these versions are compiled.
+  So if you would like to use this allocator for only some allocations,
+  and your system malloc for others, you can compile with
+  ONLY_MSPACES and then do something like...
+    static mspace mymspace = create_mspace(0,0); // for example
+    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
+
+  (Note: If you only need one instance of an mspace, you can instead
+  use "USE_DL_PREFIX" to relabel the global malloc.)
+
+  You can similarly create thread-local allocators by storing
+  mspaces as thread-locals. For example:
+    static __thread mspace tlms = 0;
+    void*  tlmalloc(size_t bytes) {
+      if (tlms == 0) tlms = create_mspace(0, 0);
+      return mspace_malloc(tlms, bytes);
+    }
+    void  tlfree(void* mem) { mspace_free(tlms, mem); }
+
+  Unless FOOTERS is defined, each mspace is completely independent.
+  You cannot allocate from one and free to another (although
+  conformance is only weakly checked, so usage errors are not always
+  caught). If FOOTERS is defined, then each chunk carries around a tag
+  indicating its originating mspace, and frees are directed to their
+  originating spaces.
+
+ -------------------------  Compile-time options ---------------------------
+
+Be careful in setting #define values for numerical constants of type
+size_t. On some systems, literal values are not automatically extended
+to size_t precision unless they are explicitly cast.
+
+WIN32                    default: defined if _WIN32 defined
+  Defining WIN32 sets up defaults for MS environment and compilers.
+  Otherwise defaults are for unix.
+
+MALLOC_ALIGNMENT         default: (size_t)8
+  Controls the minimum alignment for malloc'ed chunks.  It must be a
+  power of two and at least 8, even on machines for which smaller
+  alignments would suffice. It may be defined as larger than this
+  though. Note however that code and data structures are optimized for
+  the case of 8-byte alignment.
+
+MSPACES                  default: 0 (false)
+  If true, compile in support for independent allocation spaces.
+  This is only supported if HAVE_MMAP is true.
+
+ONLY_MSPACES             default: 0 (false)
+  If true, only compile in mspace versions, not regular versions.
+
+USE_LOCKS                default: 0 (false)
+  Causes each call to each public routine to be surrounded with
+  pthread or WIN32 mutex lock/unlock. (If set true, this can be
+  overridden on a per-mspace basis for mspace versions.)
+
+FOOTERS                  default: 0
+  If true, provide extra checking and dispatching by placing
+  information in the footers of allocated chunks. This adds
+  space and time overhead.
+
+INSECURE                 default: 0
+  If true, omit checks for usage errors and heap space overwrites.
+
+USE_DL_PREFIX            default: NOT defined
+  Causes compiler to prefix all public routines with the string 'dl'.
+  This can be useful when you only want to use this malloc in one part
+  of a program, using your regular system malloc elsewhere.
+
+ABORT                    default: defined as abort()
+  Defines how to abort on failed checks.  On most systems, a failed
+  check cannot die with an "assert" or even print an informative
+  message, because the underlying print routines in turn call malloc,
+  which will fail again.  Generally, the best policy is to simply call
+  abort(). It's not very useful to do more than this because many
+  errors due to overwriting will show up as address faults (null, odd
+  addresses etc) rather than malloc-triggered checks, so will also
+  abort.  Also, most compilers know that abort() does not return, so
+  can better optimize code conditionally calling it.
+
+PROCEED_ON_ERROR           default: defined as 0 (false)
+  Controls whether detected bad addresses cause them to be bypassed
+  rather than aborting. If set, detected bad arguments to free and
+  realloc are ignored. And all bookkeeping information is zeroed out
+  upon a detected overwrite of freed heap space, thus losing the
+  ability to ever return it from malloc again, but enabling the
+  application to proceed. If PROCEED_ON_ERROR is defined, the
+  static variable malloc_corruption_error_count is compiled in
+  and can be examined to see if errors have occurred. This option
+  generates slower code than the default abort policy.
+
+DEBUG                    default: NOT defined
+  The DEBUG setting is mainly intended for people trying to modify
+  this code or diagnose problems when porting to new platforms.
+  However, it may also be able to better isolate user errors than just
+  using runtime checks.  The assertions in the check routines spell
+  out in more detail the assumptions and invariants underlying the
+  algorithms.  The checking is fairly extensive, and will slow down
+  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
+  set will attempt to check every non-mmapped allocated and free chunk
+  in the course of computing the summaries.
+
+ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
+  Debugging assertion failures can be nearly impossible if your
+  version of the assert macro causes malloc to be called, which will
+  lead to a cascade of further failures, blowing the runtime stack.
+  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
+  which will usually make debugging easier.
+
+MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
+  The action to take before "return 0" when malloc is unable to
+  return memory because there is none available.
+
+HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
+  True if this system supports sbrk or an emulation of it.
+
+MORECORE                  default: sbrk
+  The name of the sbrk-style system routine to call to obtain more
+  memory.  See below for guidance on writing custom MORECORE
+  functions. The type of the argument to sbrk/MORECORE varies across
+  systems.  It cannot be size_t, because it supports negative
+  arguments, so it is normally the signed type of the same width as
+  size_t (sometimes declared as "intptr_t").  It doesn't much matter
+  though. Internally, we only call it with arguments less than half
+  the max value of a size_t, which should work across all reasonable
+  possibilities, although sometimes generating compiler warnings.  See
+  near the end of this file for guidelines for creating a custom
+  version of MORECORE.
+
+MORECORE_CONTIGUOUS       default: 1 (true)
+  If true, take advantage of the fact that consecutive calls to MORECORE
+  with positive arguments always return contiguous increasing
+  addresses.  This is true of unix sbrk. It does not hurt too much to
+  set it true anyway, since malloc copes with non-contiguities.
+  Setting it false when MORECORE is definitely non-contiguous, though,
+  saves the time and possibly wasted space it would take to discover this.
+
+MORECORE_CANNOT_TRIM      default: NOT defined
+  True if MORECORE cannot release space back to the system when given
+  negative arguments. This is generally necessary only if you are
+  using a hand-crafted MORECORE function that cannot handle negative
+  arguments.
+
+HAVE_MMAP                 default: 1 (true)
+  True if this system supports mmap or an emulation of it.  If so, and
+  HAVE_MORECORE is not true, MMAP is used for all system
+  allocation. If set and HAVE_MORECORE is true as well, MMAP is
+  primarily used to directly allocate very large blocks. It is also
+  used as a backup strategy in cases where MORECORE fails to provide
+  space from system. Note: A single call to MUNMAP is assumed to be
+  able to unmap memory that may have been allocated using multiple calls
+  to MMAP, so long as they are adjacent.
+
+HAVE_MREMAP               default: 1 on linux, else 0
+  If true realloc() uses mremap() to re-allocate large blocks and
+  extend or shrink allocation spaces.
+
+MMAP_CLEARS               default: 1 on unix
+  True if mmap clears memory so calloc doesn't need to. This is true
+  for standard unix mmap using /dev/zero.
+
+USE_BUILTIN_FFS            default: 0 (i.e., not used)
+  Causes malloc to use the builtin ffs() function to compute indices.
+  Some compilers may recognize and intrinsify ffs to be faster than the
+  supplied C version. Also, the case of x86 using gcc is special-cased
+  to an asm instruction, so is already as fast as it can be, and so
+  this setting has no effect. (On most x86s, the asm version is only
+  slightly faster than the C version.)
+
+malloc_getpagesize         default: derive from system includes, or 4096.
+  The system page size. To the extent possible, this malloc manages
+  memory from the system in page-size units.  This may be (and
+  usually is) a function rather than a constant. This is ignored
+  if WIN32, where page size is determined using GetSystemInfo during
+  initialization.
+
+USE_DEV_RANDOM             default: 0 (i.e., not used)
+  Causes malloc to use /dev/random to initialize secure magic seed for
+  stamping footers. Otherwise, the current time is used.
+
+NO_MALLINFO                default: 0
+  If defined, don't compile "mallinfo". This can be a simple way
+  of dealing with mismatches between system declarations and
+  those in this file.
+
+MALLINFO_FIELD_TYPE        default: size_t
+  The type of the fields in the mallinfo struct. This was originally
+  defined as "int" in SVID etc, but is more usefully defined as
+  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
+
+REALLOC_ZERO_BYTES_FREES    default: not defined
+  This should be set if a call to realloc with zero bytes should 
+  be the same as a call to free. Some people think it should. Otherwise, 
+  since this malloc returns a unique pointer for malloc(0), so does 
+  realloc(p, 0).
+
+LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
+LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
+LACKS_STDLIB_H                default: NOT defined unless on WIN32
+  Define these if your system does not have these header files.
+  You might need to manually insert some of the declarations they provide.
+
+DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
+                                system_info.dwAllocationGranularity in WIN32,
+                                otherwise 64K.
+      Also settable using mallopt(M_GRANULARITY, x)
+  The unit for allocating and deallocating memory from the system.  On
+  most systems with contiguous MORECORE, there is no reason to
+  make this more than a page. However, systems with MMAP tend to
+  either require or encourage larger granularities.  You can increase
+  this value to prevent system allocation functions from being called so
+  often, especially if they are slow.  The value must be at least one
+  page and must be a power of two.  Setting to 0 causes initialization
+  to either page size or win32 region size.  (Note: In previous
+  versions of malloc, the equivalent of this option was called
+  "TOP_PAD")
+
+DEFAULT_TRIM_THRESHOLD    default: 2MB
+      Also settable using mallopt(M_TRIM_THRESHOLD, x)
+  The maximum amount of unused top-most memory to keep before
+  releasing via malloc_trim in free().  Automatic trimming is mainly
+  useful in long-lived programs using contiguous MORECORE.  Because
+  trimming via sbrk can be slow on some systems, and can sometimes be
+  wasteful (in cases where programs immediately afterward allocate
+  more large chunks) the value should be high enough so that your
+  overall system performance would improve by releasing this much
+  memory.  As a rough guide, you might set to a value close to the
+  average size of a process (program) running on your system.
+  Releasing this much memory would allow such a process to run in
+  memory.  Generally, it is worth tuning trim thresholds when a
+  program undergoes phases where several large chunks are allocated
+  and released in ways that can reuse each other's storage, perhaps
+  mixed with phases where there are no such chunks at all. The trim
+  value must be greater than page size to have any useful effect.  To
+  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
+  some people use of mallocing a huge space and then freeing it at
+  program startup, in an attempt to reserve system memory, doesn't
+  have the intended effect under automatic trimming, since that memory
+  will immediately be returned to the system.
+
+DEFAULT_MMAP_THRESHOLD       default: 256K
+      Also settable using mallopt(M_MMAP_THRESHOLD, x)
+  The request size threshold for using MMAP to directly service a
+  request. Requests of at least this size that cannot be allocated
+  using already-existing space will be serviced via mmap.  (If enough
+  normal freed space already exists it is used instead.)  Using mmap
+  segregates relatively large chunks of memory so that they can be
+  individually obtained and released from the host system. A request
+  serviced through mmap is never reused by any other request (at least
+  not directly; the system may just so happen to remap successive
+  requests to the same locations).  Segregating space in this way has
+  the benefits that: Mmapped space can always be individually released
+  back to the system, which helps keep the system level memory demands
+  of a long-lived program low.  Also, mapped memory doesn't become
+  `locked' between other chunks, as can happen with normally allocated
+  chunks, which means that even trimming via malloc_trim would not
+  release them.  However, it has the disadvantage that the space
+  cannot be reclaimed, consolidated, and then used to service later
+  requests, as happens with normal chunks.  The advantages of mmap
+  nearly always outweigh disadvantages for "large" chunks, but the
+  value of "large" may vary across systems.  The default is an
+  empirically derived value that works well in most systems. You can
+  disable mmap by setting to MAX_SIZE_T.
+
+*/
+
+#ifndef WIN32
+#ifdef _WIN32
+#define WIN32 1
+#endif  /* _WIN32 */
+#endif  /* WIN32 */
+#ifdef WIN32
+#define WIN32_LEAN_AND_MEAN
+#include <windows.h>
+#define HAVE_MMAP 1
+#define HAVE_MORECORE 0
+#define LACKS_UNISTD_H
+#define LACKS_SYS_PARAM_H
+#define LACKS_SYS_MMAN_H
+#define LACKS_STRING_H
+#define LACKS_STRINGS_H
+#define LACKS_SYS_TYPES_H
+#define LACKS_ERRNO_H
+#define MALLOC_FAILURE_ACTION
+#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
+#endif  /* WIN32 */
+
+#if defined(DARWIN) || defined(_DARWIN)
+/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
+#ifndef HAVE_MORECORE
+#define HAVE_MORECORE 0
+#define HAVE_MMAP 1
+#endif  /* HAVE_MORECORE */
+#endif  /* DARWIN */
+
+#ifndef LACKS_SYS_TYPES_H
+#include <sys/types.h>  /* For size_t */
+#endif  /* LACKS_SYS_TYPES_H */
+
+/* The maximum possible size_t value has all bits set */
+#define MAX_SIZE_T           (~(size_t)0)
+
+#ifndef ONLY_MSPACES
+#define ONLY_MSPACES 0
+#endif  /* ONLY_MSPACES */
+#ifndef MSPACES
+#if ONLY_MSPACES
+#define MSPACES 1
+#else   /* ONLY_MSPACES */
+#define MSPACES 0
+#endif  /* ONLY_MSPACES */
+#endif  /* MSPACES */
+#ifndef MALLOC_ALIGNMENT
+#define MALLOC_ALIGNMENT ((size_t)8U)
+#endif  /* MALLOC_ALIGNMENT */
+#ifndef FOOTERS
+#define FOOTERS 0
+#endif  /* FOOTERS */
+#ifndef ABORT
+#define ABORT  abort()
+#endif  /* ABORT */
+#ifndef ABORT_ON_ASSERT_FAILURE
+#define ABORT_ON_ASSERT_FAILURE 1
+#endif  /* ABORT_ON_ASSERT_FAILURE */
+#ifndef PROCEED_ON_ERROR
+#define PROCEED_ON_ERROR 0
+#endif  /* PROCEED_ON_ERROR */
+#ifndef USE_LOCKS
+#define USE_LOCKS 0
+#endif  /* USE_LOCKS */
+#ifndef INSECURE
+#define INSECURE 0
+#endif  /* INSECURE */
+#ifndef HAVE_MMAP
+#define HAVE_MMAP 1
+#endif  /* HAVE_MMAP */
+#ifndef MMAP_CLEARS
+#define MMAP_CLEARS 1
+#endif  /* MMAP_CLEARS */
+#ifndef HAVE_MREMAP
+#ifdef linux
+#define HAVE_MREMAP 1
+#else   /* linux */
+#define HAVE_MREMAP 0
+#endif  /* linux */
+#endif  /* HAVE_MREMAP */
+#ifndef MALLOC_FAILURE_ACTION
+#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
+#endif  /* MALLOC_FAILURE_ACTION */
+#ifndef HAVE_MORECORE
+#if ONLY_MSPACES
+#define HAVE_MORECORE 0
+#else   /* ONLY_MSPACES */
+#define HAVE_MORECORE 1
+#endif  /* ONLY_MSPACES */
+#endif  /* HAVE_MORECORE */
+#if !HAVE_MORECORE
+#define MORECORE_CONTIGUOUS 0
+#else   /* !HAVE_MORECORE */
+#ifndef MORECORE
+#define MORECORE sbrk
+#endif  /* MORECORE */
+#ifndef MORECORE_CONTIGUOUS
+#define MORECORE_CONTIGUOUS 1
+#endif  /* MORECORE_CONTIGUOUS */
+#endif  /* HAVE_MORECORE */
+#ifndef DEFAULT_GRANULARITY
+#if MORECORE_CONTIGUOUS
+#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
+#else   /* MORECORE_CONTIGUOUS */
+#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
+#endif  /* MORECORE_CONTIGUOUS */
+#endif  /* DEFAULT_GRANULARITY */
+#ifndef DEFAULT_TRIM_THRESHOLD
+#ifndef MORECORE_CANNOT_TRIM
+#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
+#else   /* MORECORE_CANNOT_TRIM */
+#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
+#endif  /* MORECORE_CANNOT_TRIM */
+#endif  /* DEFAULT_TRIM_THRESHOLD */
+#ifndef DEFAULT_MMAP_THRESHOLD
+#if HAVE_MMAP
+#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
+#else   /* HAVE_MMAP */
+#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
+#endif  /* HAVE_MMAP */
+#endif  /* DEFAULT_MMAP_THRESHOLD */
+#ifndef USE_BUILTIN_FFS
+#define USE_BUILTIN_FFS 0
+#endif  /* USE_BUILTIN_FFS */
+#ifndef USE_DEV_RANDOM
+#define USE_DEV_RANDOM 0
+#endif  /* USE_DEV_RANDOM */
+#ifndef NO_MALLINFO
+#define NO_MALLINFO 0
+#endif  /* NO_MALLINFO */
+#ifndef MALLINFO_FIELD_TYPE
+#define MALLINFO_FIELD_TYPE size_t
+#endif  /* MALLINFO_FIELD_TYPE */
+
+/*
+  mallopt tuning options.  SVID/XPG defines four standard parameter
+  numbers for mallopt, normally defined in malloc.h.  None of these
+  are used in this malloc, so setting them has no effect. But this
+  malloc does support the following options.
+*/
+
+#define M_TRIM_THRESHOLD     (-1)
+#define M_GRANULARITY        (-2)
+#define M_MMAP_THRESHOLD     (-3)
+
+/* ------------------------ Mallinfo declarations ------------------------ */
+
+#if !NO_MALLINFO
+/*
+  This version of malloc supports the standard SVID/XPG mallinfo
+  routine that returns a struct containing usage properties and
+  statistics. It should work on any system that has a
+  /usr/include/malloc.h defining struct mallinfo.  The main
+  declaration needed is the mallinfo struct that is returned (by-copy)
+  by mallinfo().  The mallinfo struct contains a bunch of fields that
+  are not even meaningful in this version of malloc.  These fields
+  are instead filled by mallinfo() with other numbers that might be of
+  interest.
+
+  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
+  /usr/include/malloc.h file that includes a declaration of struct
+  mallinfo.  If so, it is included; else a compliant version is
+  declared below.  These must be precisely the same for mallinfo() to
+  work.  The original SVID version of this struct, defined on most
+  systems with mallinfo, declares all fields as ints. But some others
+  define as unsigned long. If your system defines the fields using a
+  type of different width than listed here, you MUST #include your
+  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
+*/
+
+/* #define HAVE_USR_INCLUDE_MALLOC_H */
+
+#ifdef HAVE_USR_INCLUDE_MALLOC_H
+#include "/usr/include/malloc.h"
+#else /* HAVE_USR_INCLUDE_MALLOC_H */
+
+struct mallinfo {
+  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
+  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
+  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
+  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
+  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
+  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
+  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
+  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
+  MALLINFO_FIELD_TYPE fordblks; /* total free space */
+  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
+};
+
+#endif /* HAVE_USR_INCLUDE_MALLOC_H */
+#endif /* NO_MALLINFO */
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+#if !ONLY_MSPACES
+
+/* ------------------- Declarations of public routines ------------------- */
+
+#ifndef USE_DL_PREFIX
+#define dlcalloc               calloc
+#define dlfree                 free
+#define dlmalloc               malloc
+#define dlmemalign             memalign
+#define dlrealloc              realloc
+#define dlvalloc               valloc
+#define dlpvalloc              pvalloc
+#define dlmallinfo             mallinfo
+#define dlmallopt              mallopt
+#define dlmalloc_trim          malloc_trim
+#define dlmalloc_stats         malloc_stats
+#define dlmalloc_usable_size   malloc_usable_size
+#define dlmalloc_footprint     malloc_footprint
+#define dlmalloc_max_footprint malloc_max_footprint
+#define dlindependent_calloc   independent_calloc
+#define dlindependent_comalloc independent_comalloc
+#endif /* USE_DL_PREFIX */
+
+
+/*
+  malloc(size_t n)
+  Returns a pointer to a newly allocated chunk of at least n bytes, or
+  null if no space is available, in which case errno is set to ENOMEM
+  on ANSI C systems.
+
+  If n is zero, malloc returns a minimum-sized chunk. (The minimum
+  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
+  systems.)  Note that size_t is an unsigned type, so calls with
+  arguments that would be negative if signed are interpreted as
+  requests for huge amounts of space, which will often fail. The
+  maximum supported value of n differs across systems, but is in all
+  cases less than the maximum representable value of a size_t.
+*/
+void* dlmalloc(size_t);
+
+/*
+  free(void* p)
+  Releases the chunk of memory pointed to by p, that had been previously
+  allocated using malloc or a related routine such as realloc.
+  It has no effect if p is null. If p was not malloced or already
+  freed, free(p) will by default cause the current program to abort.
+*/
+void  dlfree(void*);
+
+/*
+  calloc(size_t n_elements, size_t element_size);
+  Returns a pointer to n_elements * element_size bytes, with all locations
+  set to zero.
+*/
+void* dlcalloc(size_t, size_t);
+
+/*
+  realloc(void* p, size_t n)
+  Returns a pointer to a chunk of size n that contains the same data
+  as does chunk p up to the minimum of (n, p's size) bytes, or null
+  if no space is available.
+
+  The returned pointer may or may not be the same as p. The algorithm
+  prefers extending p in most cases when possible, otherwise it
+  employs the equivalent of a malloc-copy-free sequence.
+
+  If p is null, realloc is equivalent to malloc.
+
+  If space is not available, realloc returns null, errno is set (if on
+  ANSI) and p is NOT freed.
+
+  If n is for fewer bytes than already held by p, the newly unused
+  space is lopped off and freed if possible.  realloc with a size
+  argument of zero (re)allocates a minimum-sized chunk.
+
+  The old unix realloc convention of allowing the last-free'd chunk
+  to be used as an argument to realloc is not supported.
+*/
+
+void* dlrealloc(void*, size_t);
+
+/*
+  memalign(size_t alignment, size_t n);
+  Returns a pointer to a newly allocated chunk of n bytes, aligned
+  in accord with the alignment argument.
+
+  The alignment argument should be a power of two. If the argument is
+  not a power of two, the nearest greater power is used.
+  8-byte alignment is guaranteed by normal malloc calls, so don't
+  bother calling memalign with an argument of 8 or less.
+
+  Overreliance on memalign is a sure way to fragment space.
+*/
+void* dlmemalign(size_t, size_t);
+
+/*
+  valloc(size_t n);
+  Equivalent to memalign(pagesize, n), where pagesize is the page
+  size of the system. If the pagesize is unknown, 4096 is used.
+*/
+void* dlvalloc(size_t);
+
+/*
+  mallopt(int parameter_number, int parameter_value)
+  Sets tunable parameters.  The format is to provide a
+  (parameter-number, parameter-value) pair.  mallopt then sets the
+  corresponding parameter to the argument value if it can (i.e., so
+  long as the value is meaningful), and returns 1 if successful else
+  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
+  normally defined in malloc.h.  None of these are used in this malloc,
+  so setting them has no effect. But this malloc also supports other
+  options in mallopt. See below for details.  Briefly, supported
+  parameters are as follows (listed defaults are for "typical"
+  configurations).
+
+  Symbol            param #  default    allowed param values
+  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
+  M_GRANULARITY        -2     page size   any power of 2 >= page size
+  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
+*/
+int dlmallopt(int, int);
+
+/*
+  malloc_footprint();
+  Returns the number of bytes obtained from the system.  The total
+  number of bytes allocated by malloc, realloc etc., is less than this
+  value. Unlike mallinfo, this function returns only a precomputed
+  result, so can be called frequently to monitor memory consumption.
+  Even if locks are otherwise defined, this function does not use them,
+  so results might not be up to date.
+*/
+size_t dlmalloc_footprint(void);
+
+/*
+  malloc_max_footprint();
+  Returns the maximum number of bytes obtained from the system. This
+  value will be greater than current footprint if deallocated space
+  has been reclaimed by the system. The peak number of bytes allocated
+  by malloc, realloc etc., is less than this value. Unlike mallinfo,
+  this function returns only a precomputed result, so can be called
+  frequently to monitor memory consumption.  Even if locks are
+  otherwise defined, this function does not use them, so results might
+  not be up to date.
+*/
+size_t dlmalloc_max_footprint(void);
+
+#if !NO_MALLINFO
+/*
+  mallinfo()
+  Returns (by copy) a struct containing various summary statistics:
+
+  arena:     current total non-mmapped bytes allocated from system
+  ordblks:   the number of free chunks
+  smblks:    always zero.
+  hblks:     current number of mmapped regions
+  hblkhd:    total bytes held in mmapped regions
+  usmblks:   the maximum total allocated space. This will be greater
+                than current total if trimming has occurred.
+  fsmblks:   always zero
+  uordblks:  current total allocated space (normal or mmapped)
+  fordblks:  total free space
+  keepcost:  the maximum number of bytes that could ideally be released
+               back to system via malloc_trim. ("ideally" means that
+               it ignores page restrictions etc.)
+
+  Because these fields are ints, but internal bookkeeping may
+  be kept as longs, the reported values may wrap around zero and
+  thus be inaccurate.
+*/
+struct mallinfo dlmallinfo(void);
+#endif /* NO_MALLINFO */
+
+/*
+  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
+
+  independent_calloc is similar to calloc, but instead of returning a
+  single cleared space, it returns an array of pointers to n_elements
+  independent elements that can hold contents of size elem_size, each
+  of which starts out cleared, and can be independently freed,
+  realloc'ed etc. The elements are guaranteed to be adjacently
+  allocated (this is not guaranteed to occur with multiple callocs or
+  mallocs), which may also improve cache locality in some
+  applications.
+
+  The "chunks" argument is optional (i.e., may be null, which is
+  probably the most typical usage). If it is null, the returned array
+  is itself dynamically allocated and should also be freed when it is
+  no longer needed. Otherwise, the chunks array must be of at least
+  n_elements in length. It is filled in with the pointers to the
+  chunks.
+
+  In either case, independent_calloc returns this pointer array, or
+  null if the allocation failed.  If n_elements is zero and "chunks"
+  is null, it returns a chunk representing an array with zero elements
+  (which should be freed if not wanted).
+
+  Each element must be individually freed when it is no longer
+  needed. If you'd like to instead be able to free all at once, you
+  should instead use regular calloc and assign pointers into this
+  space to represent elements.  (In this case though, you cannot
+  independently free elements.)
+
+  independent_calloc simplifies and speeds up implementations of many
+  kinds of pools.  It may also be useful when constructing large data
+  structures that initially have a fixed number of fixed-sized nodes,
+  but the number is not known at compile time, and some of the nodes
+  may later need to be freed. For example:
+
+  struct Node { int item; struct Node* next; };
+
+  struct Node* build_list() {
+    struct Node** pool;
+    int n = read_number_of_nodes_needed();
+    if (n <= 0) return 0;
+    pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
+    if (pool == 0) die();
+    // organize into a linked list...
+    struct Node* first = pool[0];
+    for (int i = 0; i < n-1; ++i)
+      pool[i]->next = pool[i+1];
+    free(pool);     // Can now free the array (or not, if it is needed later)
+    return first;
+  }
+*/
+void** dlindependent_calloc(size_t, size_t, void**);
+
+/*
+  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
+
+  independent_comalloc allocates, all at once, a set of n_elements
+  chunks with sizes indicated in the "sizes" array.    It returns
+  an array of pointers to these elements, each of which can be
+  independently freed, realloc'ed etc. The elements are guaranteed to
+  be adjacently allocated (this is not guaranteed to occur with
+  multiple callocs or mallocs), which may also improve cache locality
+  in some applications.
+
+  The "chunks" argument is optional (i.e., may be null). If it is null
+  the returned array is itself dynamically allocated and should also
+  be freed when it is no longer needed. Otherwise, the chunks array
+  must be of at least n_elements in length. It is filled in with the
+  pointers to the chunks.
+
+  In either case, independent_comalloc returns this pointer array, or
+  null if the allocation failed.  If n_elements is zero and chunks is
+  null, it returns a chunk representing an array with zero elements
+  (which should be freed if not wanted).
+
+  Each element must be individually freed when it is no longer
+  needed. If you'd like to instead be able to free all at once, you
+  should instead use a single regular malloc, and assign pointers at
+  particular offsets in the aggregate space. (In this case though, you
+  cannot independently free elements.)
+
+  independent_comalloc differs from independent_calloc in that each
+  element may have a different size, and also that it does not
+  automatically clear elements.
+
+  independent_comalloc can be used to speed up allocation in cases
+  where several structs or objects must always be allocated at the
+  same time.  For example:
+
+  struct Head { ... };
+  struct Foot { ... };
+
+  void send_message(char* msg) {
+    int msglen = strlen(msg);
+    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
+    void* chunks[3];
+    if (independent_comalloc(3, sizes, chunks) == 0)
+      die();
+    struct Head* head = (struct Head*)(chunks[0]);
+    char*        body = (char*)(chunks[1]);
+    struct Foot* foot = (struct Foot*)(chunks[2]);
+    // ...
+  }
+
+  In general though, independent_comalloc is worth using only for
+  larger values of n_elements. For small values, you probably won't
+  detect enough difference from series of malloc calls to bother.
+
+  Overuse of independent_comalloc can increase overall memory usage,
+  since it cannot reuse existing noncontiguous small chunks that
+  might be available for some of the elements.
+*/
+void** dlindependent_comalloc(size_t, size_t*, void**);
+
+
+/*
+  pvalloc(size_t n);
+  Equivalent to valloc(minimum-page-that-holds(n)), that is,
+  round up n to nearest pagesize.
+ */
+void*  dlpvalloc(size_t);
+
+/*
+  malloc_trim(size_t pad);
+
+  If possible, gives memory back to the system (via negative arguments
+  to sbrk) if there is unused memory at the `high' end of the malloc
+  pool or in unused MMAP segments. You can call this after freeing
+  large blocks of memory to potentially reduce the system-level memory
+  requirements of a program. However, it cannot guarantee to reduce
+  memory. Under some allocation patterns, some large free blocks of
+  memory will be locked between two used chunks, so they cannot be
+  given back to the system.
+
+  The `pad' argument to malloc_trim represents the amount of free
+  trailing space to leave untrimmed. If this argument is zero, only
+  the minimum amount of memory to maintain internal data structures
+  will be left. Non-zero arguments can be supplied to maintain enough
+  trailing space to service future expected allocations without having
+  to re-obtain memory from the system.
+
+  Malloc_trim returns 1 if it actually released any memory, else 0.
+*/
+int  dlmalloc_trim(size_t);
+
+/*
+  malloc_usable_size(void* p);
+
+  Returns the number of bytes you can actually use in
+  an allocated chunk, which may be more than you requested (although
+  often not) due to alignment and minimum size constraints.
+  You can use this many bytes without worrying about
+  overwriting other allocated objects. This is not a particularly great
+  programming practice. malloc_usable_size can be more useful in
+  debugging and assertions, for example:
+
+  p = malloc(n);
+  assert(malloc_usable_size(p) >= 256);
+*/
+size_t dlmalloc_usable_size(void*);
+
+/*
+  malloc_stats();
+  Prints on stderr the amount of space obtained from the system (both
+  via sbrk and mmap), the maximum amount (which may be more than
+  current if malloc_trim and/or munmap got called), and the current
+  number of bytes allocated via malloc (or realloc, etc) but not yet
+  freed. Note that this is the number of bytes allocated, not the
+  number requested. It will be larger than the number requested
+  because of alignment and bookkeeping overhead. Because it includes
+  alignment wastage as being in use, this figure may be greater than
+  zero even when no user-level chunks are allocated.
+
+  The reported current and maximum system memory can be inaccurate if
+  a program makes other calls to system memory allocation functions
+  (normally sbrk) outside of malloc.
+
+  malloc_stats prints only the most commonly interesting statistics.
+  More information can be obtained by calling mallinfo.
+*/
+void  dlmalloc_stats(void);
+
+#endif /* ONLY_MSPACES */
+
+#if MSPACES
+
+/*
+  mspace is an opaque type representing an independent
+  region of space that supports mspace_malloc, etc.
+*/
+typedef void* mspace;
+
+/*
+  create_mspace creates and returns a new independent space with the
+  given initial capacity, or, if 0, the default granularity size.  It
+  returns null if there is no system memory available to create the
+  space.  If argument locked is non-zero, the space uses a separate
+  lock to control access. The capacity of the space will grow
+  dynamically as needed to service mspace_malloc requests.  You can
+  control the sizes of incremental increases of this space by
+  compiling with a different DEFAULT_GRANULARITY or dynamically
+  setting with mallopt(M_GRANULARITY, value).
+*/
+mspace create_mspace(size_t capacity, int locked);
+
+/*
+  destroy_mspace destroys the given space, and attempts to return all
+  of its memory back to the system, returning the total number of
+  bytes freed. After destruction, the results of access to all memory
+  used by the space become undefined.
+*/
+size_t destroy_mspace(mspace msp);
+
+/*
+  create_mspace_with_base uses the memory supplied as the initial base
+  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
+  space is used for bookkeeping, so the capacity must be at least this
+  large. (Otherwise 0 is returned.) When this initial space is
+  exhausted, additional memory will be obtained from the system.
+  Destroying this space will deallocate all additionally allocated
+  space (if possible) but not the initial base.
+*/
+mspace create_mspace_with_base(void* base, size_t capacity, int locked);
+
+/*
+  mspace_malloc behaves as malloc, but operates within
+  the given space.
+*/
+void* mspace_malloc(mspace msp, size_t bytes);
+
+/*
+  mspace_free behaves as free, but operates within
+  the given space.
+
+  If compiled with FOOTERS==1, mspace_free is not actually needed.
+  free may be called instead of mspace_free because freed chunks from
+  any space are handled by their originating spaces.
+*/
+void mspace_free(mspace msp, void* mem);
+
+/*
+  mspace_realloc behaves as realloc, but operates within
+  the given space.
+
+  If compiled with FOOTERS==1, mspace_realloc is not actually
+  needed.  realloc may be called instead of mspace_realloc because
+  realloced chunks from any space are handled by their originating
+  spaces.
+*/
+void* mspace_realloc(mspace msp, void* mem, size_t newsize);
+
+/*
+  mspace_calloc behaves as calloc, but operates within
+  the given space.
+*/
+void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
+
+/*
+  mspace_memalign behaves as memalign, but operates within
+  the given space.
+*/
+void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
+
+/*
+  mspace_independent_calloc behaves as independent_calloc, but
+  operates within the given space.
+*/
+void** mspace_independent_calloc(mspace msp, size_t n_elements,
+                                 size_t elem_size, void* chunks[]);
+
+/*
+  mspace_independent_comalloc behaves as independent_comalloc, but
+  operates within the given space.
+*/
+void** mspace_independent_comalloc(mspace msp, size_t n_elements,
+                                   size_t sizes[], void* chunks[]);
+
+/*
+  mspace_footprint() returns the number of bytes obtained from the
+  system for this space.
+*/
+size_t mspace_footprint(mspace msp);
+
+/*
+  mspace_max_footprint() returns the peak number of bytes obtained from the
+  system for this space.
+*/
+size_t mspace_max_footprint(mspace msp);
+
+
+#if !NO_MALLINFO
+/*
+  mspace_mallinfo behaves as mallinfo, but reports properties of
+  the given space.
+*/
+struct mallinfo mspace_mallinfo(mspace msp);
+#endif /* NO_MALLINFO */
+
+/*
+  mspace_malloc_stats behaves as malloc_stats, but reports
+  properties of the given space.
+*/
+void mspace_malloc_stats(mspace msp);
+
+/*
+  mspace_trim behaves as malloc_trim, but
+  operates within the given space.
+*/
+int mspace_trim(mspace msp, size_t pad);
+
+/*
+  An alias for mallopt.
+*/
+int mspace_mallopt(int, int);
+
+#endif /* MSPACES */
+
+#ifdef __cplusplus
+};  /* end of extern "C" */
+#endif /* __cplusplus */
+
+/*
+  ========================================================================
+  To make a fully customizable malloc.h header file, cut everything
+  above this line, put into file malloc.h, edit to suit, and #include it
+  on the next line, as well as in programs that use this malloc.
+  ========================================================================
+*/
+
+/* #include "malloc.h" */
+
+/*------------------------------ internal #includes ---------------------- */
+
+#ifdef WIN32
+#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
+#endif /* WIN32 */
+
+#include <stdio.h>       /* for printing in malloc_stats */
+
+#ifndef LACKS_ERRNO_H
+#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
+#endif /* LACKS_ERRNO_H */
+#if FOOTERS
+#include <time.h>        /* for magic initialization */
+#endif /* FOOTERS */
+#ifndef LACKS_STDLIB_H
+#include <stdlib.h>      /* for abort() */
+#endif /* LACKS_STDLIB_H */
+#ifdef DEBUG
+#if ABORT_ON_ASSERT_FAILURE
+#define assert(x) if(!(x)) ABORT
+#else /* ABORT_ON_ASSERT_FAILURE */
+#include <assert.h>
+#endif /* ABORT_ON_ASSERT_FAILURE */
+#else  /* DEBUG */
+#define assert(x)
+#endif /* DEBUG */
+#ifndef LACKS_STRING_H
+#include <string.h>      /* for memset etc */
+#endif  /* LACKS_STRING_H */
+#if USE_BUILTIN_FFS
+#ifndef LACKS_STRINGS_H
+#include <strings.h>     /* for ffs */
+#endif /* LACKS_STRINGS_H */
+#endif /* USE_BUILTIN_FFS */
+#if HAVE_MMAP
+#ifndef LACKS_SYS_MMAN_H
+#include <sys/mman.h>    /* for mmap */
+#endif /* LACKS_SYS_MMAN_H */
+#ifndef LACKS_FCNTL_H
+#include <fcntl.h>
+#endif /* LACKS_FCNTL_H */
+#endif /* HAVE_MMAP */
+#if HAVE_MORECORE
+#ifndef LACKS_UNISTD_H
+#include <unistd.h>     /* for sbrk */
+#else /* LACKS_UNISTD_H */
+#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
+extern void*     sbrk(ptrdiff_t);
+#endif /* FreeBSD etc */
+#endif /* LACKS_UNISTD_H */
+#endif /* HAVE_MORECORE */
+
+#ifndef WIN32
+#ifndef malloc_getpagesize
+#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
+#    ifndef _SC_PAGE_SIZE
+#      define _SC_PAGE_SIZE _SC_PAGESIZE
+#    endif
+#  endif
+#  ifdef _SC_PAGE_SIZE
+#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
+#  else
+#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
+       extern size_t getpagesize();
+#      define malloc_getpagesize getpagesize()
+#    else
+#      ifdef WIN32 /* use supplied emulation of getpagesize */
+#        define malloc_getpagesize getpagesize()
+#      else
+#        ifndef LACKS_SYS_PARAM_H
+#          include <sys/param.h>
+#        endif
+#        ifdef EXEC_PAGESIZE
+#          define malloc_getpagesize EXEC_PAGESIZE
+#        else
+#          ifdef NBPG
+#            ifndef CLSIZE
+#              define malloc_getpagesize NBPG
+#            else
+#              define malloc_getpagesize (NBPG * CLSIZE)
+#            endif
+#          else
+#            ifdef NBPC
+#              define malloc_getpagesize NBPC
+#            else
+#              ifdef PAGESIZE
+#                define malloc_getpagesize PAGESIZE
+#              else /* just guess */
+#                define malloc_getpagesize ((size_t)4096U)
+#              endif
+#            endif
+#          endif
+#        endif
+#      endif
+#    endif
+#  endif
+#endif
+#endif
+
+/* ------------------- size_t and alignment properties -------------------- */
+
+/* The byte and bit size of a size_t */
+#define SIZE_T_SIZE         (sizeof(size_t))
+#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
+
+/* Some constants coerced to size_t */
+/* Annoying but necessary to avoid errors on some platforms */
+#define SIZE_T_ZERO         ((size_t)0)
+#define SIZE_T_ONE          ((size_t)1)
+#define SIZE_T_TWO          ((size_t)2)
+#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
+#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
+#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
+#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
+
+/* The bit mask value corresponding to MALLOC_ALIGNMENT */
+#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
+
+/* True if address a has acceptable alignment */
+#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
+
+/* the number of bytes to offset an address to align it */
+#define align_offset(A)\
+ ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
+  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
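+
+/*
+  For illustration: with MALLOC_ALIGNMENT == 8, align_offset(0x1003)
+  is 5 (0x1003 + 5 == 0x1008 is aligned), while align_offset(0x1008)
+  is 0.
+*/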
+
+/* -------------------------- MMAP preliminaries ------------------------- */
+
+/*
+   If HAVE_MORECORE or HAVE_MMAP is false, we just define calls and
+   checks to fail so the compiler optimizer can delete code rather
+   than using so many "#if"s.
+*/
+
+
+/* MORECORE and MMAP must return MFAIL on failure */
+#define MFAIL                ((void*)(MAX_SIZE_T))
+#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
+
+#if !HAVE_MMAP
+#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
+#define USE_MMAP_BIT         (SIZE_T_ZERO)
+#define CALL_MMAP(s)         MFAIL
+#define CALL_MUNMAP(a, s)    (-1)
+#define DIRECT_MMAP(s)       MFAIL
+
+#else /* HAVE_MMAP */
+#define IS_MMAPPED_BIT       (SIZE_T_ONE)
+#define USE_MMAP_BIT         (SIZE_T_ONE)
+
+#ifndef WIN32
+#define CALL_MUNMAP(a, s)    munmap((a), (s))
+#define MMAP_PROT            (PROT_READ|PROT_WRITE)
+#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
+#define MAP_ANONYMOUS        MAP_ANON
+#endif /* MAP_ANON */
+#ifdef MAP_ANONYMOUS
+#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
+#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
+#else /* MAP_ANONYMOUS */
+/*
+   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
+   is unlikely to be needed, but is supplied just in case.
+*/
+#define MMAP_FLAGS           (MAP_PRIVATE)
+static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
+#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
+           (dev_zero_fd = open("/dev/zero", O_RDWR), \
+            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
+            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
+#endif /* MAP_ANONYMOUS */
+
+#define DIRECT_MMAP(s)       CALL_MMAP(s)
+#else /* WIN32 */
+
+/* Win32 MMAP via VirtualAlloc */
+static void* win32mmap(size_t size) {
+  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
+  return (ptr != 0)? ptr: MFAIL;
+}
+
+/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
+static void* win32direct_mmap(size_t size) {
+  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
+                           PAGE_READWRITE);
+  return (ptr != 0)? ptr: MFAIL;
+}
+
+/* This function supports releasing coalesced segments */
+static int win32munmap(void* ptr, size_t size) {
+  MEMORY_BASIC_INFORMATION minfo;
+  char* cptr = ptr;
+  while (size) {
+    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
+      return -1;
+    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
+        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
+      return -1;
+    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
+      return -1;
+    cptr += minfo.RegionSize;
+    size -= minfo.RegionSize;
+  }
+  return 0;
+}
+
+#define CALL_MMAP(s)         win32mmap(s)
+#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
+#define DIRECT_MMAP(s)       win32direct_mmap(s)
+#endif /* WIN32 */
+#endif /* HAVE_MMAP */
+
+#if HAVE_MMAP && HAVE_MREMAP
+#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
+#else  /* HAVE_MMAP && HAVE_MREMAP */
+#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
+#endif /* HAVE_MMAP && HAVE_MREMAP */
+
+#if HAVE_MORECORE
+#define CALL_MORECORE(S)     MORECORE(S)
+#else  /* HAVE_MORECORE */
+#define CALL_MORECORE(S)     MFAIL
+#endif /* HAVE_MORECORE */
+
+/* mstate bit set if contiguous morecore disabled or failed */
+#define USE_NONCONTIGUOUS_BIT (4U)
+
+/* segment bit set in create_mspace_with_base */
+#define EXTERN_BIT            (8U)
+
+
+/* --------------------------- Lock preliminaries ------------------------ */
+
+#if USE_LOCKS
+
+/*
+  When locks are defined, there are up to two global locks:
+
+  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
+    MORECORE.  In many cases sys_alloc requires two calls, which
+    should not be interleaved with calls by other threads.  This does
+    not protect against direct calls to MORECORE by other threads not
+    using this lock, so there is still code to cope as best we can
+    with interference.
+
+  * magic_init_mutex ensures that mparams.magic and other
+    unique mparams values are initialized only once.
+*/
+
+#ifndef WIN32
+/* By default use posix locks */
+#include <pthread.h>
+#define MLOCK_T pthread_mutex_t
+#define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
+#define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
+#define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
+
+#if HAVE_MORECORE
+static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
+#endif /* HAVE_MORECORE */
+
+static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+#else /* WIN32 */
+/*
+   Because lock-protected regions are of bounded duration, and there
+   are no recursive lock calls, we can use simple spinlocks.
+*/
+
+#define MLOCK_T long
+static int win32_acquire_lock (MLOCK_T *sl) {
+  for (;;) {
+#ifdef InterlockedCompareExchangePointer
+    if (!InterlockedCompareExchange(sl, 1, 0))
+      return 0;
+#else  /* Use older void* version */
+    if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
+      return 0;
+#endif /* InterlockedCompareExchangePointer */
+    Sleep (0);
+  }
+}
+
+static void win32_release_lock (MLOCK_T *sl) {
+  InterlockedExchange (sl, 0);
+}
+
+#define INITIAL_LOCK(l)      *(l)=0
+#define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
+#define RELEASE_LOCK(l)      win32_release_lock(l)
+#if HAVE_MORECORE
+static MLOCK_T morecore_mutex;
+#endif /* HAVE_MORECORE */
+static MLOCK_T magic_init_mutex;
+#endif /* WIN32 */
+
+#define USE_LOCK_BIT               (2U)
+#else  /* USE_LOCKS */
+#define USE_LOCK_BIT               (0U)
+#define INITIAL_LOCK(l)
+#endif /* USE_LOCKS */
+
+#if USE_LOCKS && HAVE_MORECORE
+#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
+#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
+#else /* USE_LOCKS && HAVE_MORECORE */
+#define ACQUIRE_MORECORE_LOCK()
+#define RELEASE_MORECORE_LOCK()
+#endif /* USE_LOCKS && HAVE_MORECORE */
+
+#if USE_LOCKS
+#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
+#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
+#else  /* USE_LOCKS */
+#define ACQUIRE_MAGIC_INIT_LOCK()
+#define RELEASE_MAGIC_INIT_LOCK()
+#endif /* USE_LOCKS */
+
+
+/* -----------------------  Chunk representations ------------------------ */
+
+/*
+  (The following includes lightly edited explanations by Colin Plumb.)
+
+  The malloc_chunk declaration below is misleading (but accurate and
+  necessary).  It declares a "view" into memory allowing access to
+  necessary fields at known offsets from a given base.
+
+  Chunks of memory are maintained using a `boundary tag' method as
+  originally described by Knuth.  (See the paper by Paul Wilson
+  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
+  techniques.)  Sizes of free chunks are stored both in the front of
+  each chunk and at the end.  This makes consolidating fragmented
+  chunks into bigger chunks fast.  The head fields also hold bits
+  representing whether chunks are free or in use.
+
+  Here are some pictures to make it clearer.  They are "exploded" to
+  show that the state of a chunk can be thought of as extending from
+  the high 31 bits of the head field of its header through the
+  prev_foot field and PINUSE_BIT of the following chunk header.
+
+  A chunk that's in use looks like:
+
+   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+           | Size of previous chunk (if P = 1)                             |
+           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
+         | Size of this chunk                                         1| +-+
+   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         |                                                               |
+         +-                                                             -+
+         |                                                               |
+         +-                                                             -+
+         |                                                               :
+         +-      size - sizeof(size_t) available payload bytes          -+
+         :                                                               |
+ chunk-> +-                                                             -+
+         |                                                               |
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
+       | Size of next chunk (may or may not be in use)               | +-+
+ mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
+    And if it's free, it looks like this:
+
+   chunk-> +-                                                             -+
+           | User payload (must be in use, or we would have merged!)       |
+           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
+         | Size of this chunk                                         0| +-+
+   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         | Next pointer                                                  |
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         | Prev pointer                                                  |
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         |                                                               :
+         +-      size - sizeof(struct chunk) unused bytes               -+
+         :                                                               |
+ chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+         | Size of this chunk                                            |
+         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
+       | Size of next chunk (must be in use, or we would have merged)| +-+
+ mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+       |                                                               :
+       +- User payload                                                -+
+       :                                                               |
+       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+                                                                     |0|
+                                                                     +-+
+  Note that since we always merge adjacent free chunks, the chunks
+  adjacent to a free chunk must be in use.
+
+  Given a pointer to a chunk (which can be derived trivially from the
+  payload pointer) we can, in O(1) time, find out whether the adjacent
+  chunks are free, and if so, unlink them from the lists that they
+  are on and merge them with the current chunk.
+
+  Chunks always begin on even word boundaries, so the mem portion
+  (which is returned to the user) is also on an even word boundary, and
+  thus at least double-word aligned.
+
+  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
+  chunk size (which is always a multiple of two words), is an in-use
+  bit for the *previous* chunk.  If that bit is *clear*, then the
+  word before the current chunk size contains the previous chunk
+  size, and can be used to find the front of the previous chunk.
+  The very first chunk allocated always has this bit set, preventing
+  access to non-existent (or non-owned) memory. If pinuse is set for
+  any given chunk, then you CANNOT determine the size of the
+  previous chunk, and might even get a memory addressing fault when
+  trying to do so.
+
+  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
+  the chunk size redundantly records whether the current chunk is
+  inuse. This redundancy enables usage checks within free and realloc,
+  and reduces indirection when freeing and consolidating chunks.
+
+  Each freshly allocated chunk must have both cinuse and pinuse set.
+  That is, each allocated chunk borders either a previously allocated
+  and still in-use chunk, or the base of its memory arena. This is
+  ensured by making all allocations from the `lowest' part of any
+  found chunk.  Further, no free chunk physically borders another one,
+  so each free chunk is known to be preceded and followed by either
+  inuse chunks or the ends of memory.
+
+  Note that the `foot' of the current chunk is actually represented
+  as the prev_foot of the NEXT chunk. This makes it easier to
+  deal with alignments etc but can be very confusing when trying
+  to extend or adapt this code.
+
+  The exceptions to all this are
+
+     1. The special chunk `top' is the top-most available chunk (i.e.,
+        the one bordering the end of available memory). It is treated
+        specially.  Top is never included in any bin, is used only if
+        no other chunk is available, and is released back to the
+        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
+        the top chunk is treated as larger (and thus less well
+        fitting) than any other available chunk.  The top chunk
+        doesn't update its trailing size field since there is no next
+        contiguous chunk that would have to index off it. However,
+        space is still allocated for it (TOP_FOOT_SIZE) to enable
+        separation or merging when space is extended.
+
+     2. Chunks allocated via mmap, which have the lowest-order bit
+        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
+        PINUSE_BIT in their head fields.  Because they are allocated
+        one-by-one, each must carry its own prev_foot field, which is
+        also used to hold the offset this chunk has within its mmapped
+        region, which is needed to preserve alignment. Each mmapped
+        chunk is trailed by the first two fields of a fake next-chunk
+        for sake of usage checks.
+
+*/
+
+struct malloc_chunk {
+  size_t               prev_foot;  /* Size of previous chunk (if free).  */
+  size_t               head;       /* Size and inuse bits. */
+  struct malloc_chunk* fd;         /* double links -- used only if free. */
+  struct malloc_chunk* bk;
+};
+
+typedef struct malloc_chunk  mchunk;
+typedef struct malloc_chunk* mchunkptr;
+typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
+typedef unsigned int bindex_t;         /* Described below */
+typedef unsigned int binmap_t;         /* Described below */
+typedef unsigned int flag_t;           /* The type of various bit flag sets */
+
+/* ------------------- Chunks sizes and alignments ----------------------- */
+
+#define MCHUNK_SIZE         (sizeof(mchunk))
+
+#if FOOTERS
+#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
+#else /* FOOTERS */
+#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
+#endif /* FOOTERS */
+
+/* MMapped chunks need a second word of overhead ... */
+#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
+/* ... and additional padding for fake next-chunk at foot */
+#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
+
+/* The smallest size we can malloc is an aligned minimal chunk */
+#define MIN_CHUNK_SIZE\
+  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
+
+/* conversion from malloc headers to user pointers, and back */
+#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
+#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
+/* chunk associated with aligned address A */
+#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
+
+/* Bounds on request (not chunk) sizes. */
+#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
+#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
+
+/* pad request bytes into a usable size */
+#define pad_request(req) \
+   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
+
+/* pad request, checking for minimum (but not maximum) */
+#define request2size(req) \
+  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
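+
+/*
+  A worked example, assuming a 32-bit build with 4-byte size_t,
+  MALLOC_ALIGNMENT == 8, and !FOOTERS (so CHUNK_OVERHEAD == 4 and
+  MIN_CHUNK_SIZE == 16): request2size(13) == (13 + 4 + 7) & ~7 == 24,
+  while request2size(8) falls below MIN_REQUEST (== 11) and yields
+  MIN_CHUNK_SIZE == 16.
+*/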
+
+
+/* ------------------ Operations on head and foot fields ----------------- */
+
+/*
+  The head field of a chunk is or'ed with PINUSE_BIT when previous
+  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
+  use. If the chunk was obtained with mmap, the prev_foot field has
+  IS_MMAPPED_BIT set and additionally holds the offset from the base
+  of the mmapped region to the base of the chunk.
+*/
+
+#define PINUSE_BIT          (SIZE_T_ONE)
+#define CINUSE_BIT          (SIZE_T_TWO)
+#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
+
+/* Head value for fenceposts */
+#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
+
+/* extraction of fields from head words */
+#define cinuse(p)           ((p)->head & CINUSE_BIT)
+#define pinuse(p)           ((p)->head & PINUSE_BIT)
+#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
+
+#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
+#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
+
+/* Treat space at ptr +/- offset as a chunk */
+#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
+#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
+
+/* Ptr to next or previous physical malloc_chunk. */
+#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
+#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
+
+/* extract next chunk's pinuse bit */
+#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
+
+/* Get/set size at footer */
+#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
+#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
+
+/* Set size, pinuse bit, and foot */
+#define set_size_and_pinuse_of_free_chunk(p, s)\
+  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
+
+/* Set size, pinuse bit, foot, and clear next pinuse */
+#define set_free_with_pinuse(p, s, n)\
+  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
+
+#define is_mmapped(p)\
+  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
+
+/* Get the internal overhead associated with chunk p */
+#define overhead_for(p)\
+ (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
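+
+/*
+  For illustration: an in-use chunk of 24 bytes whose predecessor is
+  also in use has head == 24|CINUSE_BIT|PINUSE_BIT == 27; chunksize()
+  masks the two low bits back off, so next_chunk() lands exactly 24
+  bytes further along.
+*/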
+
+/* Return true if malloced space is not necessarily cleared */
+#if MMAP_CLEARS
+#define calloc_must_clear(p) (!is_mmapped(p))
+#else /* MMAP_CLEARS */
+#define calloc_must_clear(p) (1)
+#endif /* MMAP_CLEARS */
+
+/* ---------------------- Overlaid data structures ----------------------- */
+
+/*
+  When chunks are not in use, they are treated as nodes of either
+  lists or trees.
+
+  "Small"  chunks are stored in circular doubly-linked lists, and look
+  like this:
+
+    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Size of previous chunk                            |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    `head:' |             Size of chunk, in bytes                         |P|
+      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Forward pointer to next chunk in list             |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Back pointer to previous chunk in list            |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Unused space (may be 0 bytes long)                .
+            .                                                               .
+            .                                                               |
+nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    `foot:' |             Size of chunk, in bytes                           |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
+  Larger chunks are kept in a form of bitwise digital trees (aka
+  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
+  free chunks greater than 256 bytes, their size doesn't impose any
+  constraints on user chunk sizes.  Each node looks like:
+
+    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Size of previous chunk                            |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    `head:' |             Size of chunk, in bytes                         |P|
+      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Forward pointer to next chunk of same size        |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Back pointer to previous chunk of same size       |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Pointer to left child (child[0])                  |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Pointer to right child (child[1])                 |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Pointer to parent                                 |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             bin index of this chunk                           |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+            |             Unused space                                      .
+            .                                                               |
+nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    `foot:' |             Size of chunk, in bytes                           |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
+  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
+  of the same size are arranged in a circularly-linked list, with only
+  the oldest chunk (the next to be used, in our FIFO ordering)
+  actually in the tree.  (Tree members are distinguished by a non-null
+  parent pointer.)  If a chunk with the same size as an existing node
+  is inserted, it is linked off the existing node using pointers that
+  work in the same way as fd/bk pointers of small chunks.
+
+  Each tree contains a power of 2 sized range of chunk sizes (the
+  smallest is 0x100 <= x < 0x180), which is divided in half at each
+  tree level, with the chunks in the smaller half of the range (0x100
+  <= x < 0x140 for the top node) in the left subtree and the larger
+  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
+  done by inspecting individual bits.
+
+  Using these rules, each node's left subtree contains all smaller
+  sizes than its right subtree.  However, the node at the root of each
+  subtree has no particular ordering relationship to either.  (The
+  dividing line between the subtree sizes is based on trie relation.)
+  If we remove the last chunk of a given size from the interior of the
+  tree, we need to replace it with a leaf node.  The tree ordering
+  rules permit a node to be replaced by any leaf below it.
+
+  The smallest chunk in a tree (a common operation in a best-fit
+  allocator) can be found by walking a path to the leftmost leaf in
+  the tree.  Unlike a usual binary tree, where we follow left child
+  pointers until we reach a null, here we follow the right child
+  pointer any time the left one is null, until we reach a leaf with
+  both child pointers null. The smallest chunk in the tree will be
+  somewhere along that path.
+
+  The worst case number of steps to add, find, or remove a node is
+  bounded by the number of bits differentiating chunks within
+  bins. Under current bin calculations, this ranges from 6 up to 21
+  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
+  is of course much better.
+*/
+
+struct malloc_tree_chunk {
+  /* The first four fields must be compatible with malloc_chunk */
+  size_t                    prev_foot;
+  size_t                    head;
+  struct malloc_tree_chunk* fd;
+  struct malloc_tree_chunk* bk;
+
+  struct malloc_tree_chunk* child[2];
+  struct malloc_tree_chunk* parent;
+  bindex_t                  index;
+};
+
+typedef struct malloc_tree_chunk  tchunk;
+typedef struct malloc_tree_chunk* tchunkptr;
+typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
+
+/* A little helper macro for trees */
+#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
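+
+/*
+  For illustration, the leftmost-leaf walk described above can be
+  phrased with this helper as
+    for (u = t; leftmost_child(u) != 0; u = leftmost_child(u)) ;
+  tracking the smallest chunksize seen along the way, since the
+  smallest chunk in the tree lies somewhere on that path.
+*/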
+
+/* ----------------------------- Segments -------------------------------- */
+
+/*
+  Each malloc space may include non-contiguous segments, held in a
+  list headed by an embedded malloc_segment record representing the
+  top-most space. Segments also include flags holding properties of
+  the space. Large chunks that are directly allocated by mmap are not
+  included in this list. They are instead independently created and
+  destroyed without otherwise keeping track of them.
+
+  Segment management mainly comes into play for spaces allocated by
+  MMAP.  Any call to MMAP might or might not return memory that is
+  adjacent to an existing segment.  MORECORE normally contiguously
+  extends the current space, so this space is almost always adjacent,
+  which is simpler and faster to deal with. (This is why MORECORE is
+  used preferentially to MMAP when both are available -- see
+  sys_alloc.)  When allocating using MMAP, we don't use any of the
+  hinting mechanisms (inconsistently) supported in various
+  implementations of unix mmap, or distinguish reserving from
+  committing memory. Instead, we just ask for space, and exploit
+  contiguity when we get it.  It is probably possible to do
+  better than this on some systems, but no general scheme seems
+  to be significantly better.
+
+  Management entails a simpler variant of the consolidation scheme
+  used for chunks to reduce fragmentation -- new adjacent memory is
+  normally prepended or appended to an existing segment. However,
+  there are limitations compared to chunk consolidation that mostly
+  reflect the fact that segment processing is relatively infrequent
+    (occurring only when getting memory from the system) and that we
+  don't expect to have huge numbers of segments:
+
+  * Segments are not indexed, so traversal requires linear scans.  (It
+    would be possible to index these, but is not worth the extra
+    overhead and complexity for most programs on most platforms.)
+  * New segments are only appended to old ones when holding top-most
+    memory; if they cannot be prepended to others, they are held in
+    different segments.
+
+  Except for the top-most segment of an mstate, each segment record
+  is kept at the tail of its segment. Segments are added by pushing
+  segment records onto the list headed by &mstate.seg for the
+  containing mstate.
+
+  Segment flags control allocation/merge/deallocation policies:
+  * If EXTERN_BIT set, then we did not allocate this segment,
+    and so should not try to deallocate or merge with others.
+    (This currently holds only for the initial segment passed
+    into create_mspace_with_base.)
+  * If IS_MMAPPED_BIT set, the segment may be merged with
+    other surrounding mmapped segments and trimmed/de-allocated
+    using munmap.
+  * If neither bit is set, then the segment was obtained using
+    MORECORE so can be merged with surrounding MORECORE'd segments
+    and deallocated/trimmed using MORECORE with negative arguments.
+*/
+
+struct malloc_segment {
+  char*        base;             /* base address */
+  size_t       size;             /* allocated size */
+  struct malloc_segment* next;   /* ptr to next segment */
+  flag_t       sflags;           /* mmap and extern flag */
+};
+
+#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
+#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
+
+typedef struct malloc_segment  msegment;
+typedef struct malloc_segment* msegmentptr;
+
+/* ---------------------------- malloc_state ----------------------------- */
+
+/*
+   A malloc_state holds all of the bookkeeping for a space.
+   The main fields are:
+
+  Top
+    The topmost chunk of the currently active segment. Its size is
+    cached in topsize.  The actual size of topmost space is
+    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
+    fenceposts and segment records if necessary when getting more
+    space from the system.  The size at which to autotrim top is
+    cached from mparams in trim_check, except that it is disabled if
+    an autotrim fails.
+
+  Designated victim (dv)
+    This is the preferred chunk for servicing small requests that
+    don't have exact fits.  It is normally the chunk split off most
+    recently to service another small request.  Its size is cached in
+    dvsize. The link fields of this chunk are not maintained since it
+    is not kept in a bin.
+
+  SmallBins
+    An array of bin headers for free chunks.  These bins hold chunks
+    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
+    chunks of all the same size, spaced 8 bytes apart.  To simplify
+    use in double-linked lists, each bin header acts as a malloc_chunk
+    pointing to the real first node, if it exists (else pointing to
+    itself).  This avoids special-casing for headers.  But to avoid
+    waste, we allocate only the fd/bk pointers of bins, and then use
+    repositioning tricks to treat these as the fields of a chunk.
+
+  TreeBins
+    Treebins are pointers to the roots of trees holding a range of
+    sizes. There are 2 equally spaced treebins for each power of two
+    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
+    larger.
+
+  Bin maps
+    There is one bit map for small bins ("smallmap") and one for
+    treebins ("treemap).  Each bin sets its bit when non-empty, and
+    clears the bit when empty.  Bit operations are then used to avoid
+    bin-by-bin searching -- nearly all "search" is done without ever
+    looking at bins that won't be selected.  The bit maps
+    conservatively use 32 bits per map word, even on a 64-bit system.
+    For a good description of some of the bit-based techniques used
+    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
+    supplement at http://hackersdelight.org/). Many of these are
+    intended to reduce the branchiness of paths through malloc etc, as
+    well as to reduce the number of memory locations read or written.
+
+  Segments
+    A list of segments headed by an embedded malloc_segment record
+    representing the initial space.
+
+  Address check support
+    The least_addr field is the least address ever obtained from
+    MORECORE or MMAP. Attempted frees and reallocs of any address less
+    than this are trapped (unless INSECURE is defined).
+
+  Magic tag
+    A cross-check field that should always hold the same value as
+    mparams.magic.
+
+  Flags
+    Bits recording whether to use MMAP, locks, or contiguous MORECORE
+
+  Statistics
+    Each space keeps track of current and maximum system memory
+    obtained via MORECORE or MMAP.
+
+  Locking
+    If USE_LOCKS is defined, the "mutex" lock is acquired and released
+    around every public call using this mspace.
+*/
+
+/* Bin types, widths and sizes */
+#define NSMALLBINS        (32U)
+#define NTREEBINS         (32U)
+#define SMALLBIN_SHIFT    (3U)
+#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
+#define TREEBIN_SHIFT     (8U)
+#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
+#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
+#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
+
+struct malloc_state {
+  binmap_t   smallmap;
+  binmap_t   treemap;
+  size_t     dvsize;
+  size_t     topsize;
+  char*      least_addr;
+  mchunkptr  dv;
+  mchunkptr  top;
+  size_t     trim_check;
+  size_t     magic;
+  mchunkptr  smallbins[(NSMALLBINS+1)*2];
+  tbinptr    treebins[NTREEBINS];
+  size_t     footprint;
+  size_t     max_footprint;
+  flag_t     mflags;
+#if USE_LOCKS
+  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
+#endif /* USE_LOCKS */
+  msegment   seg;
+};
+
+typedef struct malloc_state*    mstate;
+
+/* ------------- Global malloc_state and malloc_params ------------------- */
+
+/*
+  malloc_params holds global properties, including those that can be
+  dynamically set using mallopt. There is a single instance, mparams,
+  initialized in init_mparams.
+*/
+
+struct malloc_params {
+  size_t magic;
+  size_t page_size;
+  size_t granularity;
+  size_t mmap_threshold;
+  size_t trim_threshold;
+  flag_t default_mflags;
+};
+
+static struct malloc_params mparams;
+
+/* The global malloc_state used for all non-"mspace" calls */
+static struct malloc_state _gm_;
+#define gm                 (&_gm_)
+#define is_global(M)       ((M) == &_gm_)
+#define is_initialized(M)  ((M)->top != 0)
+
+/* -------------------------- system alloc setup ------------------------- */
+
+/* Operations on mflags */
+
+#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
+#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
+#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
+
+#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
+#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
+#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
+
+#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
+#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
+
+#define set_lock(M,L)\
+ ((M)->mflags = (L)?\
+  ((M)->mflags | USE_LOCK_BIT) :\
+  ((M)->mflags & ~USE_LOCK_BIT))
+
+/* page-align a size */
+#define page_align(S)\
+ (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
+
+/* granularity-align a size */
+#define granularity_align(S)\
+  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
+
+#define is_page_aligned(S)\
+   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
+#define is_granularity_aligned(S)\
+   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
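+
+/*
+  Note that page_align adds a full page_size (rather than
+  page_size - 1) before masking, so an already page-aligned size is
+  still rounded up: with 4096-byte pages, page_align(4096) == 8192
+  and page_align(4097) == 8192.
+*/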
+
+/*  True if segment S holds address A */
+#define segment_holds(S, A)\
+  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
+
+/* Return segment holding given address */
+static msegmentptr segment_holding(mstate m, char* addr) {
+  msegmentptr sp = &m->seg;
+  for (;;) {
+    if (addr >= sp->base && addr < sp->base + sp->size)
+      return sp;
+    if ((sp = sp->next) == 0)
+      return 0;
+  }
+}
+
+/* Return true if segment contains a segment link */
+static int has_segment_link(mstate m, msegmentptr ss) {
+  msegmentptr sp = &m->seg;
+  for (;;) {
+    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
+      return 1;
+    if ((sp = sp->next) == 0)
+      return 0;
+  }
+}
+
+#ifndef MORECORE_CANNOT_TRIM
+#define should_trim(M,s)  ((s) > (M)->trim_check)
+#else  /* MORECORE_CANNOT_TRIM */
+#define should_trim(M,s)  (0)
+#endif /* MORECORE_CANNOT_TRIM */
+
+/*
+  TOP_FOOT_SIZE is padding at the end of a segment, including space
+  that may be needed to place segment records and fenceposts when new
+  noncontiguous segments are added.
+*/
+#define TOP_FOOT_SIZE\
+  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
+
+
+/* -------------------------------  Hooks -------------------------------- */
+
+/*
+  PREACTION should be defined to return 0 on success, and nonzero on
+  failure. If you are not using locking, you can redefine these to do
+  anything you like.
+*/
+
+#if USE_LOCKS
+
+/* Ensure locks are initialized */
+#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
+
+#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
+#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
+#else /* USE_LOCKS */
+
+#ifndef PREACTION
+#define PREACTION(M) (0)
+#endif  /* PREACTION */
+
+#ifndef POSTACTION
+#define POSTACTION(M)
+#endif  /* POSTACTION */
+
+#endif /* USE_LOCKS */
+
+/*
+  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
+  USAGE_ERROR_ACTION is triggered on detected bad frees and
+  reallocs. The argument p is an address that might have triggered the
+  fault. It is ignored by the two predefined actions, but might be
+  useful in custom actions that try to help diagnose errors.
+*/
+
+#if PROCEED_ON_ERROR
+
+/* A count of the number of corruption errors causing resets */
+int malloc_corruption_error_count;
+
+/* default corruption action */
+static void reset_on_error(mstate m);
+
+#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
+#define USAGE_ERROR_ACTION(m, p)
+
+#else /* PROCEED_ON_ERROR */
+
+#ifndef CORRUPTION_ERROR_ACTION
+#define CORRUPTION_ERROR_ACTION(m) ABORT
+#endif /* CORRUPTION_ERROR_ACTION */
+
+#ifndef USAGE_ERROR_ACTION
+#define USAGE_ERROR_ACTION(m,p) ABORT
+#endif /* USAGE_ERROR_ACTION */
+
+#endif /* PROCEED_ON_ERROR */
+
+/* -------------------------- Debugging setup ---------------------------- */
+
+#if ! DEBUG
+
+#define check_free_chunk(M,P)
+#define check_inuse_chunk(M,P)
+#define check_malloced_chunk(M,P,N)
+#define check_mmapped_chunk(M,P)
+#define check_malloc_state(M)
+#define check_top_chunk(M,P)
+
+#else /* DEBUG */
+#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
+#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
+#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
+#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
+#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
+#define check_malloc_state(M)       do_check_malloc_state(M)
+
+static void   do_check_any_chunk(mstate m, mchunkptr p);
+static void   do_check_top_chunk(mstate m, mchunkptr p);
+static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
+static void   do_check_inuse_chunk(mstate m, mchunkptr p);
+static void   do_check_free_chunk(mstate m, mchunkptr p);
+static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
+static void   do_check_tree(mstate m, tchunkptr t);
+static void   do_check_treebin(mstate m, bindex_t i);
+static void   do_check_smallbin(mstate m, bindex_t i);
+static void   do_check_malloc_state(mstate m);
+static int    bin_find(mstate m, mchunkptr x);
+static size_t traverse_and_check(mstate m);
+#endif /* DEBUG */
+
+/* ---------------------------- Indexing Bins ---------------------------- */
+
+#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
+#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
+#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
+#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
+
+/* addressing by index. See above about smallbin repositioning */
+#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
+#define treebin_at(M,i)     (&((M)->treebins[i]))
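+
+/*
+  For illustration: with SMALLBIN_SHIFT == 3, a 40-byte chunk maps to
+  small_index(40) == 5 and small_index2size(5) == 40 -- each smallbin
+  holds chunks of exactly one size, SMALLBIN_WIDTH bytes apart from
+  its neighbors.
+*/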
+
+/* assign tree index for size S to variable I */
+#if defined(__GNUC__) && defined(i386)
+#define compute_tree_index(S, I)\
+{\
+  size_t X = S >> TREEBIN_SHIFT;\
+  if (X == 0)\
+    I = 0;\
+  else if (X > 0xFFFF)\
+    I = NTREEBINS-1;\
+  else {\
+    unsigned int K;\
+    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
+    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
+  }\
+}
+#else /* GNUC */
+#define compute_tree_index(S, I)\
+{\
+  size_t X = S >> TREEBIN_SHIFT;\
+  if (X == 0)\
+    I = 0;\
+  else if (X > 0xFFFF)\
+    I = NTREEBINS-1;\
+  else {\
+    unsigned int Y = (unsigned int)X;\
+    unsigned int N = ((Y - 0x100) >> 16) & 8;\
+    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
+    N += K;\
+    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
+    K = 14 - N + ((Y <<= K) >> 15);\
+    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
+  }\
+}
+#endif /* GNUC */
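+
+/*
+  A worked example: for S == 512, X == S >> TREEBIN_SHIFT == 2, so
+  K == 1 (the index of the highest set bit of X), giving
+  I == (K << 1) + ((S >> (K + TREEBIN_SHIFT - 1)) & 1) == 2; treebin
+  2 thus holds sizes 512-767 (see minsize_for_tree_index below), and
+  a 768-byte chunk lands in treebin 3.
+*/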
+
+/* Bit representing maximum resolved size in a treebin at i */
+#define bit_for_tree_index(i) \
+   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
+
+/* Shift placing maximum resolved bit in a treebin at i as sign bit */
+#define leftshift_for_tree_index(i) \
+   ((i == NTREEBINS-1)? 0 : \
+    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
+
+/* The size of the smallest chunk held in bin with index i */
+#define minsize_for_tree_index(i) \
+   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
+   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
+
+
+/* ------------------------ Operations on bin maps ----------------------- */
+
+/* bit corresponding to given index */
+#define idx2bit(i)              ((binmap_t)(1) << (i))
+
+/* Mark/Clear bits with given index */
+#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
+#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
+#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
+
+#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
+#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
+#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
+
+/* index corresponding to given bit */
+
+#if defined(__GNUC__) && defined(i386)
+#define compute_bit2idx(X, I)\
+{\
+  unsigned int J;\
+  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
+  I = (bindex_t)J;\
+}
+
+#else /* GNUC */
+#if  USE_BUILTIN_FFS
+#define compute_bit2idx(X, I) I = ffs(X)-1
+
+#else /* USE_BUILTIN_FFS */
+#define compute_bit2idx(X, I)\
+{\
+  unsigned int Y = X - 1;\
+  unsigned int K = Y >> (16-4) & 16;\
+  unsigned int N = K;        Y >>= K;\
+  N += K = Y >> (8-3) &  8;  Y >>= K;\
+  N += K = Y >> (4-2) &  4;  Y >>= K;\
+  N += K = Y >> (2-1) &  2;  Y >>= K;\
+  N += K = Y >> (1-0) &  1;  Y >>= K;\
+  I = (bindex_t)(N + Y);\
+}
+#endif /* USE_BUILTIN_FFS */
+#endif /* GNUC */
+
+/* isolate the least set bit of a bitmap */
+#define least_bit(x)         ((x) & -(x))
+
+/* mask with all bits to left of least bit of x on */
+#define left_bits(x)         ((x<<1) | -(x<<1))
+
+/* mask with all bits to left of or equal to least bit of x on */
+#define same_or_left_bits(x) ((x) | -(x))
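+
+/*
+  For illustration, with x == 0x28 (binary 0101000): least_bit(x) ==
+  0x08, left_bits(x) has bits 4 and above set, and
+  same_or_left_bits(x) has bits 3 and above set; correspondingly,
+  compute_bit2idx(0x08, I) above yields I == 3 in all three variants.
+*/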
+
+
+/* ----------------------- Runtime Check Support ------------------------- */
+
+/*
+  For security, the main invariant is that malloc/free/etc never
+  writes to a static address other than malloc_state, unless static
+  malloc_state itself has been corrupted, which cannot occur via
+  malloc (because of these checks). In essence this means that we
+  believe all pointers, sizes, maps etc held in malloc_state, but
+  check all of those linked or offsetted from other embedded data
+  structures.  These checks are interspersed with main code in a way
+  that tends to minimize their run-time cost.
+
+  When FOOTERS is defined, in addition to range checking, we also
+  verify footer fields of inuse chunks, which can be used to
+  guarantee that the mstate controlling malloc/free is intact.  This
+  is a streamlined version of the approach described by William
+  Robertson et al in "Run-time Detection of Heap-based Overflows"
+  (LISA'03, http://www.usenix.org/events/lisa03/tech/robertson.html).
+  The footer of an inuse chunk holds the xor of its mstate and a
+  random seed, which is checked upon calls to free() and realloc().
+  This is (probabilistically) unguessable from outside the program,
+  but can be
+  computed by any code successfully malloc'ing any chunk, so does not
+  itself provide protection against code that has already broken
+  security through some other means.  Unlike Robertson et al, we
+  always dynamically check addresses of all offset chunks (previous,
+  next, etc). This turns out to be cheaper than relying on hashes.
+*/
+
+#if !INSECURE
+/* Check if address a is at least as high as any from MORECORE or MMAP */
+#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
+/* Check if address of next chunk n is higher than base chunk p */
+#define ok_next(p, n)    ((char*)(p) < (char*)(n))
+/* Check if p has its cinuse bit on */
+#define ok_cinuse(p)     cinuse(p)
+/* Check if p has its pinuse bit on */
+#define ok_pinuse(p)     pinuse(p)
+
+#else /* !INSECURE */
+#define ok_address(M, a) (1)
+#define ok_next(b, n)    (1)
+#define ok_cinuse(p)     (1)
+#define ok_pinuse(p)     (1)
+#endif /* !INSECURE */
+
+#if (FOOTERS && !INSECURE)
+/* Check if (alleged) mstate m has expected magic field */
+#define ok_magic(M)      ((M)->magic == mparams.magic)
+#else  /* (FOOTERS && !INSECURE) */
+#define ok_magic(M)      (1)
+#endif /* (FOOTERS && !INSECURE) */
+
+
+/* In gcc, use __builtin_expect to minimize impact of checks */
+#if !INSECURE
+#if defined(__GNUC__) && __GNUC__ >= 3
+#define RTCHECK(e)  __builtin_expect(e, 1)
+#else /* GNUC */
+#define RTCHECK(e)  (e)
+#endif /* GNUC */
+#else /* !INSECURE */
+#define RTCHECK(e)  (1)
+#endif /* !INSECURE */
+
+/* macros to set up inuse chunks with or without footers */
+
+#if !FOOTERS
+
+#define mark_inuse_foot(M,p,s)
+
+/* Set cinuse bit and pinuse bit of next chunk */
+#define set_inuse(M,p,s)\
+  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
+  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
+
+/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
+#define set_inuse_and_pinuse(M,p,s)\
+  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
+  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
+
+/* Set size, cinuse and pinuse bit of this chunk */
+#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
+  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
+
+#else /* FOOTERS */
+
+/* Set foot of inuse chunk to be xor of mstate and seed */
+#define mark_inuse_foot(M,p,s)\
+  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
+
+#define get_mstate_for(p)\
+  ((mstate)(((mchunkptr)((char*)(p) +\
+    (chunksize(p))))->prev_foot ^ mparams.magic))
+
+#define set_inuse(M,p,s)\
+  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
+  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
+  mark_inuse_foot(M,p,s))
+
+#define set_inuse_and_pinuse(M,p,s)\
+  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
+  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
+ mark_inuse_foot(M,p,s))
+
+#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
+  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
+  mark_inuse_foot(M, p, s))
+
+#endif /* !FOOTERS */
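+
+/*
+  For illustration (FOOTERS case): mark_inuse_foot stores
+  (size_t)(M) ^ mparams.magic just past the chunk, so
+  get_mstate_for(mem2chunk(mem)) xors that value with mparams.magic
+  to recover the owning mstate -- this is how plain free() can route
+  a chunk back to its originating mspace.
+*/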
+
+/* ---------------------------- setting mparams -------------------------- */
+
+/* Initialize mparams */
+static int init_mparams(void) {
+  if (mparams.page_size == 0) {
+    size_t s;
+
+    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
+    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
+#if MORECORE_CONTIGUOUS
+    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
+#else  /* MORECORE_CONTIGUOUS */
+    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
+#endif /* MORECORE_CONTIGUOUS */
+
+#if (FOOTERS && !INSECURE)
+    {
+#if USE_DEV_RANDOM
+      int fd;
+      unsigned char buf[sizeof(size_t)];
+      /* Try to use /dev/urandom, else fall back on using time */
+      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
+          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
+        s = *((size_t *) buf);
+        close(fd);
+      }
+      else
+#endif /* USE_DEV_RANDOM */
+        s = (size_t)(time(0) ^ (size_t)0x55555555U);
+
+      s |= (size_t)8U;    /* ensure nonzero */
+      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
+
+    }
+#else /* (FOOTERS && !INSECURE) */
+    s = (size_t)0x58585858U;
+#endif /* (FOOTERS && !INSECURE) */
+    ACQUIRE_MAGIC_INIT_LOCK();
+    if (mparams.magic == 0) {
+      mparams.magic = s;
+      /* Set up lock for main malloc area */
+      INITIAL_LOCK(&gm->mutex);
+      gm->mflags = mparams.default_mflags;
+    }
+    RELEASE_MAGIC_INIT_LOCK();
+
+#ifndef WIN32
+    mparams.page_size = malloc_getpagesize;
+    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
+                           DEFAULT_GRANULARITY : mparams.page_size);
+#else /* WIN32 */
+    {
+      SYSTEM_INFO system_info;
+      GetSystemInfo(&system_info);
+      mparams.page_size = system_info.dwPageSize;
+      mparams.granularity = system_info.dwAllocationGranularity;
+    }
+#endif /* WIN32 */
+
+    /* Sanity-check configuration:
+       size_t must be unsigned and as wide as pointer type.
+       ints must be at least 4 bytes.
+       alignment must be at least 8.
+       Alignment, min chunk size, and page size must all be powers of 2.
+    */
+    if ((sizeof(size_t) != sizeof(char*)) ||
+        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
+        (sizeof(int) < 4)  ||
+        (MALLOC_ALIGNMENT < (size_t)8U) ||
+        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
+        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
+        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
+        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
+      ABORT;
+  }
+  return 0;
+}
+
+/* support for mallopt */
+static int change_mparam(int param_number, int value) {
+  size_t val = (size_t)value;
+  init_mparams();
+  switch(param_number) {
+  case M_TRIM_THRESHOLD:
+    mparams.trim_threshold = val;
+    return 1;
+  case M_GRANULARITY:
+    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
+      mparams.granularity = val;
+      return 1;
+    }
+    else
+      return 0;
+  case M_MMAP_THRESHOLD:
+    mparams.mmap_threshold = val;
+    return 1;
+  default:
+    return 0;
+  }
+}
+
+#if DEBUG
+/* ------------------------- Debugging Support --------------------------- */
+
+/* Check properties of any chunk, whether free, inuse, mmapped etc  */
+static void do_check_any_chunk(mstate m, mchunkptr p) {
+  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
+  assert(ok_address(m, p));
+}
+
+/* Check properties of top chunk */
+static void do_check_top_chunk(mstate m, mchunkptr p) {
+  msegmentptr sp = segment_holding(m, (char*)p);
+  size_t  sz = chunksize(p);
+  assert(sp != 0);
+  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
+  assert(ok_address(m, p));
+  assert(sz == m->topsize);
+  assert(sz > 0);
+  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
+  assert(pinuse(p));
+  assert(!next_pinuse(p));
+}
+
+/* Check properties of (inuse) mmapped chunks */
+static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
+  size_t  sz = chunksize(p);
+  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
+  assert(is_mmapped(p));
+  assert(use_mmap(m));
+  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
+  assert(ok_address(m, p));
+  assert(!is_small(sz));
+  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
+  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
+  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
+}
+
+/* Check properties of inuse chunks */
+static void do_check_inuse_chunk(mstate m, mchunkptr p) {
+  do_check_any_chunk(m, p);
+  assert(cinuse(p));
+  assert(next_pinuse(p));
+  /* If not pinuse and not mmapped, previous chunk has OK offset */
+  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
+  if (is_mmapped(p))
+    do_check_mmapped_chunk(m, p);
+}
+
+/* Check properties of free chunks */
+static void do_check_free_chunk(mstate m, mchunkptr p) {
+  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
+  mchunkptr next = chunk_plus_offset(p, sz);
+  do_check_any_chunk(m, p);
+  assert(!cinuse(p));
+  assert(!next_pinuse(p));
+  assert (!is_mmapped(p));
+  if (p != m->dv && p != m->top) {
+    if (sz >= MIN_CHUNK_SIZE) {
+      assert((sz & CHUNK_ALIGN_MASK) == 0);
+      assert(is_aligned(chunk2mem(p)));
+      assert(next->prev_foot == sz);
+      assert(pinuse(p));
+      assert (next == m->top || cinuse(next));
+      assert(p->fd->bk == p);
+      assert(p->bk->fd == p);
+    }
+    else  /* markers are always of size SIZE_T_SIZE */
+      assert(sz == SIZE_T_SIZE);
+  }
+}
+
+/* Check properties of malloced chunks at the point they are malloced */
+static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
+  if (mem != 0) {
+    mchunkptr p = mem2chunk(mem);
+    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
+    do_check_inuse_chunk(m, p);
+    assert((sz & CHUNK_ALIGN_MASK) == 0);
+    assert(sz >= MIN_CHUNK_SIZE);
+    assert(sz >= s);
+    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
+    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
+  }
+}
+
+/* Check a tree and its subtrees.  */
+static void do_check_tree(mstate m, tchunkptr t) {
+  tchunkptr head = 0;
+  tchunkptr u = t;
+  bindex_t tindex = t->index;
+  size_t tsize = chunksize(t);
+  bindex_t idx;
+  compute_tree_index(tsize, idx);
+  assert(tindex == idx);
+  assert(tsize >= MIN_LARGE_SIZE);
+  assert(tsize >= minsize_for_tree_index(idx));
+  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
+
+  do { /* traverse through chain of same-sized nodes */
+    do_check_any_chunk(m, ((mchunkptr)u));
+    assert(u->index == tindex);
+    assert(chunksize(u) == tsize);
+    assert(!cinuse(u));
+    assert(!next_pinuse(u));
+    assert(u->fd->bk == u);
+    assert(u->bk->fd == u);
+    if (u->parent == 0) {
+      assert(u->child[0] == 0);
+      assert(u->child[1] == 0);
+    }
+    else {
+      assert(head == 0); /* only one node on chain has parent */
+      head = u;
+      assert(u->parent != u);
+      assert (u->parent->child[0] == u ||
+              u->parent->child[1] == u ||
+              *((tbinptr*)(u->parent)) == u);
+      if (u->child[0] != 0) {
+        assert(u->child[0]->parent == u);
+        assert(u->child[0] != u);
+        do_check_tree(m, u->child[0]);
+      }
+      if (u->child[1] != 0) {
+        assert(u->child[1]->parent == u);
+        assert(u->child[1] != u);
+        do_check_tree(m, u->child[1]);
+      }
+      if (u->child[0] != 0 && u->child[1] != 0) {
+        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
+      }
+    }
+    u = u->fd;
+  } while (u != t);
+  assert(head != 0);
+}
+
+/*  Check all the chunks in a treebin.  */
+static void do_check_treebin(mstate m, bindex_t i) {
+  tbinptr* tb = treebin_at(m, i);
+  tchunkptr t = *tb;
+  int empty = (m->treemap & (1U << i)) == 0;
+  if (t == 0)
+    assert(empty);
+  if (!empty)
+    do_check_tree(m, t);
+}
+
+/*  Check all the chunks in a smallbin.  */
+static void do_check_smallbin(mstate m, bindex_t i) {
+  sbinptr b = smallbin_at(m, i);
+  mchunkptr p = b->bk;
+  unsigned int empty = (m->smallmap & (1U << i)) == 0;
+  if (p == b)
+    assert(empty);
+  if (!empty) {
+    for (; p != b; p = p->bk) {
+      size_t size = chunksize(p);
+      mchunkptr q;
+      /* each chunk claims to be free */
+      do_check_free_chunk(m, p);
+      /* chunk belongs in bin */
+      assert(small_index(size) == i);
+      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
+      /* chunk is followed by an inuse chunk */
+      q = next_chunk(p);
+      if (q->head != FENCEPOST_HEAD)
+        do_check_inuse_chunk(m, q);
+    }
+  }
+}
+
+/* Find x in a bin. Used in other check functions. */
+static int bin_find(mstate m, mchunkptr x) {
+  size_t size = chunksize(x);
+  if (is_small(size)) {
+    bindex_t sidx = small_index(size);
+    sbinptr b = smallbin_at(m, sidx);
+    if (smallmap_is_marked(m, sidx)) {
+      mchunkptr p = b;
+      do {
+        if (p == x)
+          return 1;
+      } while ((p = p->fd) != b);
+    }
+  }
+  else {
+    bindex_t tidx;
+    compute_tree_index(size, tidx);
+    if (treemap_is_marked(m, tidx)) {
+      tchunkptr t = *treebin_at(m, tidx);
+      size_t sizebits = size << leftshift_for_tree_index(tidx);
+      while (t != 0 && chunksize(t) != size) {
+        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
+        sizebits <<= 1;
+      }
+      if (t != 0) {
+        tchunkptr u = t;
+        do {
+          if (u == (tchunkptr)x)
+            return 1;
+        } while ((u = u->fd) != t);
+      }
+    }
+  }
+  return 0;
+}
+
+/* Traverse each chunk and check it; return total */
+static size_t traverse_and_check(mstate m) {
+  size_t sum = 0;
+  if (is_initialized(m)) {
+    msegmentptr s = &m->seg;
+    sum += m->topsize + TOP_FOOT_SIZE;
+    while (s != 0) {
+      mchunkptr q = align_as_chunk(s->base);
+      mchunkptr lastq = 0;
+      assert(pinuse(q));
+      while (segment_holds(s, q) &&
+             q != m->top && q->head != FENCEPOST_HEAD) {
+        sum += chunksize(q);
+        if (cinuse(q)) {
+          assert(!bin_find(m, q));
+          do_check_inuse_chunk(m, q);
+        }
+        else {
+          assert(q == m->dv || bin_find(m, q));
+          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
+          do_check_free_chunk(m, q);
+        }
+        lastq = q;
+        q = next_chunk(q);
+      }
+      s = s->next;
+    }
+  }
+  return sum;
+}
+
+/* Check all properties of malloc_state. */
+static void do_check_malloc_state(mstate m) {
+  bindex_t i;
+  size_t total;
+  /* check bins */
+  for (i = 0; i < NSMALLBINS; ++i)
+    do_check_smallbin(m, i);
+  for (i = 0; i < NTREEBINS; ++i)
+    do_check_treebin(m, i);
+
+  if (m->dvsize != 0) { /* check dv chunk */
+    do_check_any_chunk(m, m->dv);
+    assert(m->dvsize == chunksize(m->dv));
+    assert(m->dvsize >= MIN_CHUNK_SIZE);
+    assert(bin_find(m, m->dv) == 0);
+  }
+
+  if (m->top != 0) {   /* check top chunk */
+    do_check_top_chunk(m, m->top);
+    assert(m->topsize == chunksize(m->top));
+    assert(m->topsize > 0);
+    assert(bin_find(m, m->top) == 0);
+  }
+
+  total = traverse_and_check(m);
+  assert(total <= m->footprint);
+  assert(m->footprint <= m->max_footprint);
+}
+#endif /* DEBUG */
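+
+/*
+  Note: the do_check_* routines above are compiled only when DEBUG is
+  nonzero; the check_* macros used throughout this file expand to them
+  in that configuration and to no-ops otherwise, so a debug build pays
+  for full consistency checking (including the heap walk in
+  traverse_and_check) while a release build pays nothing.
+*/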
+
+/* ----------------------------- statistics ------------------------------ */
+
+#if !NO_MALLINFO
+static struct mallinfo internal_mallinfo(mstate m) {
+  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
+  if (!PREACTION(m)) {
+    check_malloc_state(m);
+    if (is_initialized(m)) {
+      size_t nfree = SIZE_T_ONE; /* top always free */
+      size_t mfree = m->topsize + TOP_FOOT_SIZE;
+      size_t sum = mfree;
+      msegmentptr s = &m->seg;
+      while (s != 0) {
+        mchunkptr q = align_as_chunk(s->base);
+        while (segment_holds(s, q) &&
+               q != m->top && q->head != FENCEPOST_HEAD) {
+          size_t sz = chunksize(q);
+          sum += sz;
+          if (!cinuse(q)) {
+            mfree += sz;
+            ++nfree;
+          }
+          q = next_chunk(q);
+        }
+        s = s->next;
+      }
+
+      nm.arena    = sum;
+      nm.ordblks  = nfree;
+      nm.hblkhd   = m->footprint - sum;
+      nm.usmblks  = m->max_footprint;
+      nm.uordblks = m->footprint - mfree;
+      nm.fordblks = mfree;
+      nm.keepcost = m->topsize;
+    }
+
+    POSTACTION(m);
+  }
+  return nm;
+}
+#endif /* !NO_MALLINFO */
+
+static void internal_malloc_stats(mstate m) {
+  if (!PREACTION(m)) {
+    size_t maxfp = 0;
+    size_t fp = 0;
+    size_t used = 0;
+    check_malloc_state(m);
+    if (is_initialized(m)) {
+      msegmentptr s = &m->seg;
+      maxfp = m->max_footprint;
+      fp = m->footprint;
+      used = fp - (m->topsize + TOP_FOOT_SIZE);
+
+      while (s != 0) {
+        mchunkptr q = align_as_chunk(s->base);
+        while (segment_holds(s, q) &&
+               q != m->top && q->head != FENCEPOST_HEAD) {
+          if (!cinuse(q))
+            used -= chunksize(q);
+          q = next_chunk(q);
+        }
+        s = s->next;
+      }
+    }
+
+    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
+    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
+    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
+
+    POSTACTION(m);
+  }
+}
+
+/* ----------------------- Operations on smallbins ----------------------- */
+
+/*
+  Various forms of linking and unlinking are defined as macros.  Even
+  the ones for trees, which are very long but have very short typical
+  paths.  This is ugly but reduces reliance on inlining support of
+  compilers.
+*/
+
+/* Link a free chunk into a smallbin  */
+#define insert_small_chunk(M, P, S) {\
+  bindex_t I  = small_index(S);\
+  mchunkptr B = smallbin_at(M, I);\
+  mchunkptr F = B;\
+  assert(S >= MIN_CHUNK_SIZE);\
+  if (!smallmap_is_marked(M, I))\
+    mark_smallmap(M, I);\
+  else if (RTCHECK(ok_address(M, B->fd)))\
+    F = B->fd;\
+  else {\
+    CORRUPTION_ERROR_ACTION(M);\
+  }\
+  B->fd = P;\
+  F->bk = P;\
+  P->fd = F;\
+  P->bk = B;\
+}
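+
+/*
+  Illustrative note (not part of the imported algorithm): each
+  smallbin is a circular doubly-linked list headed by the bin itself,
+  so after insert_small_chunk(M, P, S) the link invariants that
+  do_check_free_chunk asserts hold for P:
+
+    assert(P->fd->bk == P);
+    assert(P->bk->fd == P);
+*/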
+
+/* Unlink a chunk from a smallbin  */
+#define unlink_small_chunk(M, P, S) {\
+  mchunkptr F = P->fd;\
+  mchunkptr B = P->bk;\
+  bindex_t I = small_index(S);\
+  assert(P != B);\
+  assert(P != F);\
+  assert(chunksize(P) == small_index2size(I));\
+  if (F == B)\
+    clear_smallmap(M, I);\
+  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
+                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
+    F->bk = B;\
+    B->fd = F;\
+  }\
+  else {\
+    CORRUPTION_ERROR_ACTION(M);\
+  }\
+}
+
+/* Unlink the first chunk from a smallbin */
+#define unlink_first_small_chunk(M, B, P, I) {\
+  mchunkptr F = P->fd;\
+  assert(P != B);\
+  assert(P != F);\
+  assert(chunksize(P) == small_index2size(I));\
+  if (B == F)\
+    clear_smallmap(M, I);\
+  else if (RTCHECK(ok_address(M, F))) {\
+    B->fd = F;\
+    F->bk = B;\
+  }\
+  else {\
+    CORRUPTION_ERROR_ACTION(M);\
+  }\
+}
+
+/* Replace dv node, binning the old one */
+/* Used only when dvsize known to be small */
+#define replace_dv(M, P, S) {\
+  size_t DVS = M->dvsize;\
+  if (DVS != 0) {\
+    mchunkptr DV = M->dv;\
+    assert(is_small(DVS));\
+    insert_small_chunk(M, DV, DVS);\
+  }\
+  M->dvsize = S;\
+  M->dv = P;\
+}
+
+/* ------------------------- Operations on trees ------------------------- */
+
+/* Insert chunk into tree */
+#define insert_large_chunk(M, X, S) {\
+  tbinptr* H;\
+  bindex_t I;\
+  compute_tree_index(S, I);\
+  H = treebin_at(M, I);\
+  X->index = I;\
+  X->child[0] = X->child[1] = 0;\
+  if (!treemap_is_marked(M, I)) {\
+    mark_treemap(M, I);\
+    *H = X;\
+    X->parent = (tchunkptr)H;\
+    X->fd = X->bk = X;\
+  }\
+  else {\
+    tchunkptr T = *H;\
+    size_t K = S << leftshift_for_tree_index(I);\
+    for (;;) {\
+      if (chunksize(T) != S) {\
+        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
+        K <<= 1;\
+        if (*C != 0)\
+          T = *C;\
+        else if (RTCHECK(ok_address(M, C))) {\
+          *C = X;\
+          X->parent = T;\
+          X->fd = X->bk = X;\
+          break;\
+        }\
+        else {\
+          CORRUPTION_ERROR_ACTION(M);\
+          break;\
+        }\
+      }\
+      else {\
+        tchunkptr F = T->fd;\
+        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
+          T->fd = F->bk = X;\
+          X->fd = F;\
+          X->bk = T;\
+          X->parent = 0;\
+          break;\
+        }\
+        else {\
+          CORRUPTION_ERROR_ACTION(M);\
+          break;\
+        }\
+      }\
+    }\
+  }\
+}
+
+/*
+  Unlink steps:
+
+  1. If x is a chained node, unlink it from its same-sized fd/bk links
+     and choose its bk node as its replacement.
+  2. If x was the last node of its size, but not a leaf node, it must
+     be replaced with a leaf node (not merely one with an open left or
+     right), to make sure that lefts and rights of descendants
+     correspond properly to bit masks.  We use the rightmost descendant
+     of x.  We could use any other leaf, but this is easy to locate and
+     tends to counteract removal of leftmosts elsewhere, and so keeps
+     paths shorter than minimally guaranteed.  This doesn't loop much
+     because on average a node in a tree is near the bottom.
+  3. If x is the base of a chain (i.e., has parent links) relink
+     x's parent and children to x's replacement (or null if none).
+*/
+
+#define unlink_large_chunk(M, X) {\
+  tchunkptr XP = X->parent;\
+  tchunkptr R;\
+  if (X->bk != X) {\
+    tchunkptr F = X->fd;\
+    R = X->bk;\
+    if (RTCHECK(ok_address(M, F))) {\
+      F->bk = R;\
+      R->fd = F;\
+    }\
+    else {\
+      CORRUPTION_ERROR_ACTION(M);\
+    }\
+  }\
+  else {\
+    tchunkptr* RP;\
+    if (((R = *(RP = &(X->child[1]))) != 0) ||\
+        ((R = *(RP = &(X->child[0]))) != 0)) {\
+      tchunkptr* CP;\
+      while ((*(CP = &(R->child[1])) != 0) ||\
+             (*(CP = &(R->child[0])) != 0)) {\
+        R = *(RP = CP);\
+      }\
+      if (RTCHECK(ok_address(M, RP)))\
+        *RP = 0;\
+      else {\
+        CORRUPTION_ERROR_ACTION(M);\
+      }\
+    }\
+  }\
+  if (XP != 0) {\
+    tbinptr* H = treebin_at(M, X->index);\
+    if (X == *H) {\
+      if ((*H = R) == 0) \
+        clear_treemap(M, X->index);\
+    }\
+    else if (RTCHECK(ok_address(M, XP))) {\
+      if (XP->child[0] == X) \
+        XP->child[0] = R;\
+      else \
+        XP->child[1] = R;\
+    }\
+    else\
+      CORRUPTION_ERROR_ACTION(M);\
+    if (R != 0) {\
+      if (RTCHECK(ok_address(M, R))) {\
+        tchunkptr C0, C1;\
+        R->parent = XP;\
+        if ((C0 = X->child[0]) != 0) {\
+          if (RTCHECK(ok_address(M, C0))) {\
+            R->child[0] = C0;\
+            C0->parent = R;\
+          }\
+          else\
+            CORRUPTION_ERROR_ACTION(M);\
+        }\
+        if ((C1 = X->child[1]) != 0) {\
+          if (RTCHECK(ok_address(M, C1))) {\
+            R->child[1] = C1;\
+            C1->parent = R;\
+          }\
+          else\
+            CORRUPTION_ERROR_ACTION(M);\
+        }\
+      }\
+      else\
+        CORRUPTION_ERROR_ACTION(M);\
+    }\
+  }\
+}
+
+/* Relays to large vs small bin operations */
+
+#define insert_chunk(M, P, S)\
+  if (is_small(S)) insert_small_chunk(M, P, S)\
+  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
+
+#define unlink_chunk(M, P, S)\
+  if (is_small(S)) unlink_small_chunk(M, P, S)\
+  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
+
+
+/* Relays to internal calls to malloc/free from realloc, memalign etc */
+
+#if ONLY_MSPACES
+#define internal_malloc(m, b) mspace_malloc(m, b)
+#define internal_free(m, mem) mspace_free(m,mem);
+#else /* ONLY_MSPACES */
+#if MSPACES
+#define internal_malloc(m, b)\
+   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
+#define internal_free(m, mem)\
+   if (m == gm) dlfree(mem); else mspace_free(m,mem);
+#else /* MSPACES */
+#define internal_malloc(m, b) dlmalloc(b)
+#define internal_free(m, mem) dlfree(mem)
+#endif /* MSPACES */
+#endif /* ONLY_MSPACES */
+
+/* -----------------------  Direct-mmapping chunks ----------------------- */
+
+/*
+  Directly mmapped chunks are set up with an offset to the start of
+  the mmapped region stored in the prev_foot field of the chunk. This
+  allows reconstruction of the required argument to MUNMAP when freed,
+  and also allows adjustment of the returned chunk to meet alignment
+  requirements (especially in memalign).  There is also enough space
+  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
+  the PINUSE bit so frees can be checked.
+*/
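+
+/*
+  Illustrative sketch: given an inuse mmapped chunk p as laid out by
+  mmap_alloc below, the mapping base and length handed to MUNMAP are
+  recovered from the stored offset, just as dlfree does further down:
+
+    size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;
+    char*  base   = (char*)p - offset;
+    size_t length = chunksize(p) + offset + MMAP_FOOT_PAD;
+    CALL_MUNMAP(base, length);
+*/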
+
+/* Malloc using mmap */
+static void* mmap_alloc(mstate m, size_t nb) {
+  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
+  if (mmsize > nb) {     /* Check for wrap around 0 */
+    char* mm = (char*)(DIRECT_MMAP(mmsize));
+    if (mm != CMFAIL) {
+      size_t offset = align_offset(chunk2mem(mm));
+      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
+      mchunkptr p = (mchunkptr)(mm + offset);
+      p->prev_foot = offset | IS_MMAPPED_BIT;
+      (p)->head = (psize|CINUSE_BIT);
+      mark_inuse_foot(m, p, psize);
+      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
+      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
+
+      if (mm < m->least_addr)
+        m->least_addr = mm;
+      if ((m->footprint += mmsize) > m->max_footprint)
+        m->max_footprint = m->footprint;
+      assert(is_aligned(chunk2mem(p)));
+      check_mmapped_chunk(m, p);
+      return chunk2mem(p);
+    }
+  }
+  return 0;
+}
+
+/* Realloc using mmap */
+static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
+  size_t oldsize = chunksize(oldp);
+  if (is_small(nb)) /* Can't shrink mmap regions below small size */
+    return 0;
+  /* Keep old chunk if big enough but not too big */
+  if (oldsize >= nb + SIZE_T_SIZE &&
+      (oldsize - nb) <= (mparams.granularity << 1))
+    return oldp;
+  else {
+    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
+    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
+    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
+                                         CHUNK_ALIGN_MASK);
+    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
+                                  oldmmsize, newmmsize, 1);
+    if (cp != CMFAIL) {
+      mchunkptr newp = (mchunkptr)(cp + offset);
+      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
+      newp->head = (psize|CINUSE_BIT);
+      mark_inuse_foot(m, newp, psize);
+      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
+      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
+
+      if (cp < m->least_addr)
+        m->least_addr = cp;
+      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
+        m->max_footprint = m->footprint;
+      check_mmapped_chunk(m, newp);
+      return newp;
+    }
+  }
+  return 0;
+}
+
+/* -------------------------- mspace management -------------------------- */
+
+/* Initialize top chunk and its size */
+static void init_top(mstate m, mchunkptr p, size_t psize) {
+  /* Ensure alignment */
+  size_t offset = align_offset(chunk2mem(p));
+  p = (mchunkptr)((char*)p + offset);
+  psize -= offset;
+
+  m->top = p;
+  m->topsize = psize;
+  p->head = psize | PINUSE_BIT;
+  /* set size of fake trailing chunk holding overhead space only once */
+  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
+  m->trim_check = mparams.trim_threshold; /* reset on each update */
+}
+
+/* Initialize bins for a new mstate that is otherwise zeroed out */
+static void init_bins(mstate m) {
+  /* Establish circular links for smallbins */
+  bindex_t i;
+  for (i = 0; i < NSMALLBINS; ++i) {
+    sbinptr bin = smallbin_at(m,i);
+    bin->fd = bin->bk = bin;
+  }
+}
+
+#if PROCEED_ON_ERROR
+
+/* default corruption action */
+static void reset_on_error(mstate m) {
+  int i;
+  ++malloc_corruption_error_count;
+  /* Reinitialize fields to forget about all memory */
+  m->smallbins = m->treebins = 0;
+  m->dvsize = m->topsize = 0;
+  m->seg.base = 0;
+  m->seg.size = 0;
+  m->seg.next = 0;
+  m->top = m->dv = 0;
+  for (i = 0; i < NTREEBINS; ++i)
+    *treebin_at(m, i) = 0;
+  init_bins(m);
+}
+#endif /* PROCEED_ON_ERROR */
+
+/* Allocate chunk and prepend remainder with chunk in successor base. */
+static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
+                           size_t nb) {
+  mchunkptr p = align_as_chunk(newbase);
+  mchunkptr oldfirst = align_as_chunk(oldbase);
+  size_t psize = (char*)oldfirst - (char*)p;
+  mchunkptr q = chunk_plus_offset(p, nb);
+  size_t qsize = psize - nb;
+  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
+
+  assert((char*)oldfirst > (char*)q);
+  assert(pinuse(oldfirst));
+  assert(qsize >= MIN_CHUNK_SIZE);
+
+  /* consolidate remainder with first chunk of old base */
+  if (oldfirst == m->top) {
+    size_t tsize = m->topsize += qsize;
+    m->top = q;
+    q->head = tsize | PINUSE_BIT;
+    check_top_chunk(m, q);
+  }
+  else if (oldfirst == m->dv) {
+    size_t dsize = m->dvsize += qsize;
+    m->dv = q;
+    set_size_and_pinuse_of_free_chunk(q, dsize);
+  }
+  else {
+    if (!cinuse(oldfirst)) {
+      size_t nsize = chunksize(oldfirst);
+      unlink_chunk(m, oldfirst, nsize);
+      oldfirst = chunk_plus_offset(oldfirst, nsize);
+      qsize += nsize;
+    }
+    set_free_with_pinuse(q, qsize, oldfirst);
+    insert_chunk(m, q, qsize);
+    check_free_chunk(m, q);
+  }
+
+  check_malloced_chunk(m, chunk2mem(p), nb);
+  return chunk2mem(p);
+}
+
+
+/* Add a segment to hold a new noncontiguous region */
+static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
+  /* Determine locations and sizes of segment, fenceposts, old top */
+  char* old_top = (char*)m->top;
+  msegmentptr oldsp = segment_holding(m, old_top);
+  char* old_end = oldsp->base + oldsp->size;
+  size_t ssize = pad_request(sizeof(struct malloc_segment));
+  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
+  size_t offset = align_offset(chunk2mem(rawsp));
+  char* asp = rawsp + offset;
+  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
+  mchunkptr sp = (mchunkptr)csp;
+  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
+  mchunkptr tnext = chunk_plus_offset(sp, ssize);
+  mchunkptr p = tnext;
+  int nfences = 0;
+
+  /* reset top to new space */
+  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
+
+  /* Set up segment record */
+  assert(is_aligned(ss));
+  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
+  *ss = m->seg; /* Push current record */
+  m->seg.base = tbase;
+  m->seg.size = tsize;
+  m->seg.sflags = mmapped;
+  m->seg.next = ss;
+
+  /* Insert trailing fenceposts */
+  for (;;) {
+    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
+    p->head = FENCEPOST_HEAD;
+    ++nfences;
+    if ((char*)(&(nextp->head)) < old_end)
+      p = nextp;
+    else
+      break;
+  }
+  assert(nfences >= 2);
+
+  /* Insert the rest of old top into a bin as an ordinary free chunk */
+  if (csp != old_top) {
+    mchunkptr q = (mchunkptr)old_top;
+    size_t psize = csp - old_top;
+    mchunkptr tn = chunk_plus_offset(q, psize);
+    set_free_with_pinuse(q, psize, tn);
+    insert_chunk(m, q, psize);
+  }
+
+  check_top_chunk(m, m->top);
+}
+
+/* -------------------------- System allocation -------------------------- */
+
+/* Get memory from system using MORECORE or MMAP */
+static void* sys_alloc(mstate m, size_t nb) {
+  char* tbase = CMFAIL;
+  size_t tsize = 0;
+  flag_t mmap_flag = 0;
+
+  init_mparams();
+
+  /* Directly map large chunks */
+  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
+    void* mem = mmap_alloc(m, nb);
+    if (mem != 0)
+      return mem;
+  }
+
+  /*
+    Try getting memory in any of three ways (in most-preferred to
+    least-preferred order):
+    1. A call to MORECORE that can normally contiguously extend memory.
+       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE
+       or main space is mmapped or a previous contiguous call failed)
+    2. A call to MMAP new space (disabled if not HAVE_MMAP).
+       Note that under the default settings, if MORECORE is unable to
+       fulfill a request, and HAVE_MMAP is true, then mmap is
+       used as a noncontiguous system allocator. This is a useful backup
+       strategy for systems with holes in address spaces -- in this case
+       sbrk cannot contiguously expand the heap, but mmap may be able to
+       find space.
+    3. A call to MORECORE that cannot usually contiguously extend memory.
+       (disabled if not HAVE_MORECORE)
+  */
+
+  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
+    char* br = CMFAIL;
+    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
+    size_t asize = 0;
+    ACQUIRE_MORECORE_LOCK();
+
+    if (ss == 0) {  /* First time through or recovery */
+      char* base = (char*)CALL_MORECORE(0);
+      if (base != CMFAIL) {
+        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
+        /* Adjust to end on a page boundary */
+        if (!is_page_aligned(base))
+          asize += (page_align((size_t)base) - (size_t)base);
+        /* Can't call MORECORE if size is negative when treated as signed */
+        if (asize < HALF_MAX_SIZE_T &&
+            (br = (char*)(CALL_MORECORE(asize))) == base) {
+          tbase = base;
+          tsize = asize;
+        }
+      }
+    }
+    else {
+      /* Subtract out existing available top space from MORECORE request. */
+      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
+      /* Use mem here only if it did contiguously extend old space */
+      if (asize < HALF_MAX_SIZE_T &&
+          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
+        tbase = br;
+        tsize = asize;
+      }
+    }
+
+    if (tbase == CMFAIL) {    /* Cope with partial failure */
+      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
+        if (asize < HALF_MAX_SIZE_T &&
+            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
+          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
+          if (esize < HALF_MAX_SIZE_T) {
+            char* end = (char*)CALL_MORECORE(esize);
+            if (end != CMFAIL)
+              asize += esize;
+            else {            /* Can't use; try to release */
+              CALL_MORECORE(-asize);
+              br = CMFAIL;
+            }
+          }
+        }
+      }
+      if (br != CMFAIL) {    /* Use the space we did get */
+        tbase = br;
+        tsize = asize;
+      }
+      else
+        disable_contiguous(m); /* Don't try contiguous path in the future */
+    }
+
+    RELEASE_MORECORE_LOCK();
+  }
+
+  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
+    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
+    size_t rsize = granularity_align(req);
+    if (rsize > nb) { /* Fail if wraps around zero */
+      char* mp = (char*)(CALL_MMAP(rsize));
+      if (mp != CMFAIL) {
+        tbase = mp;
+        tsize = rsize;
+        mmap_flag = IS_MMAPPED_BIT;
+      }
+    }
+  }
+
+  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
+    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
+    if (asize < HALF_MAX_SIZE_T) {
+      char* br = CMFAIL;
+      char* end = CMFAIL;
+      ACQUIRE_MORECORE_LOCK();
+      br = (char*)(CALL_MORECORE(asize));
+      end = (char*)(CALL_MORECORE(0));
+      RELEASE_MORECORE_LOCK();
+      if (br != CMFAIL && end != CMFAIL && br < end) {
+        size_t ssize = end - br;
+        if (ssize > nb + TOP_FOOT_SIZE) {
+          tbase = br;
+          tsize = ssize;
+        }
+      }
+    }
+  }
+
+  if (tbase != CMFAIL) {
+
+    if ((m->footprint += tsize) > m->max_footprint)
+      m->max_footprint = m->footprint;
+
+    if (!is_initialized(m)) { /* first-time initialization */
+      m->seg.base = m->least_addr = tbase;
+      m->seg.size = tsize;
+      m->seg.sflags = mmap_flag;
+      m->magic = mparams.magic;
+      init_bins(m);
+      if (is_global(m)) 
+        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
+      else {
+        /* Offset top by embedded malloc_state */
+        mchunkptr mn = next_chunk(mem2chunk(m));
+        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
+      }
+    }
+
+    else {
+      /* Try to merge with an existing segment */
+      msegmentptr sp = &m->seg;
+      while (sp != 0 && tbase != sp->base + sp->size)
+        sp = sp->next;
+      if (sp != 0 &&
+          !is_extern_segment(sp) &&
+          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
+          segment_holds(sp, m->top)) { /* append */
+        sp->size += tsize;
+        init_top(m, m->top, m->topsize + tsize);
+      }
+      else {
+        if (tbase < m->least_addr)
+          m->least_addr = tbase;
+        sp = &m->seg;
+        while (sp != 0 && sp->base != tbase + tsize)
+          sp = sp->next;
+        if (sp != 0 &&
+            !is_extern_segment(sp) &&
+            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
+          char* oldbase = sp->base;
+          sp->base = tbase;
+          sp->size += tsize;
+          return prepend_alloc(m, tbase, oldbase, nb);
+        }
+        else
+          add_segment(m, tbase, tsize, mmap_flag);
+      }
+    }
+
+    if (nb < m->topsize) { /* Allocate from new or extended top space */
+      size_t rsize = m->topsize -= nb;
+      mchunkptr p = m->top;
+      mchunkptr r = m->top = chunk_plus_offset(p, nb);
+      r->head = rsize | PINUSE_BIT;
+      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
+      check_top_chunk(m, m->top);
+      check_malloced_chunk(m, chunk2mem(p), nb);
+      return chunk2mem(p);
+    }
+  }
+
+  MALLOC_FAILURE_ACTION;
+  return 0;
+}
+
+/* -----------------------  system deallocation -------------------------- */
+
+/* Unmap and unlink any mmapped segments that don't contain used chunks */
+static size_t release_unused_segments(mstate m) {
+  size_t released = 0;
+  msegmentptr pred = &m->seg;
+  msegmentptr sp = pred->next;
+  while (sp != 0) {
+    char* base = sp->base;
+    size_t size = sp->size;
+    msegmentptr next = sp->next;
+    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
+      mchunkptr p = align_as_chunk(base);
+      size_t psize = chunksize(p);
+      /* Can unmap if first chunk holds entire segment and not pinned */
+      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
+        tchunkptr tp = (tchunkptr)p;
+        assert(segment_holds(sp, (char*)sp));
+        if (p == m->dv) {
+          m->dv = 0;
+          m->dvsize = 0;
+        }
+        else {
+          unlink_large_chunk(m, tp);
+        }
+        if (CALL_MUNMAP(base, size) == 0) {
+          released += size;
+          m->footprint -= size;
+          /* unlink obsoleted record */
+          sp = pred;
+          sp->next = next;
+        }
+        else { /* back out if cannot unmap */
+          insert_large_chunk(m, tp, psize);
+        }
+      }
+    }
+    pred = sp;
+    sp = next;
+  }
+  return released;
+}
+
+static int sys_trim(mstate m, size_t pad) {
+  size_t released = 0;
+  if (pad < MAX_REQUEST && is_initialized(m)) {
+    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
+
+    if (m->topsize > pad) {
+      /* Shrink top space in granularity-size units, keeping at least one */
+      size_t unit = mparams.granularity;
+      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
+                      SIZE_T_ONE) * unit;
+      msegmentptr sp = segment_holding(m, (char*)m->top);
+
+      if (!is_extern_segment(sp)) {
+        if (is_mmapped_segment(sp)) {
+          if (HAVE_MMAP &&
+              sp->size >= extra &&
+              !has_segment_link(m, sp)) { /* can't shrink if pinned */
+            size_t newsize = sp->size - extra;
+            /* Prefer mremap, fall back to munmap */
+            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
+                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
+              released = extra;
+            }
+          }
+        }
+        else if (HAVE_MORECORE) {
+          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
+            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
+          ACQUIRE_MORECORE_LOCK();
+          {
+            /* Make sure end of memory is where we last set it. */
+            char* old_br = (char*)(CALL_MORECORE(0));
+            if (old_br == sp->base + sp->size) {
+              char* rel_br = (char*)(CALL_MORECORE(-extra));
+              char* new_br = (char*)(CALL_MORECORE(0));
+              if (rel_br != CMFAIL && new_br < old_br)
+                released = old_br - new_br;
+            }
+          }
+          RELEASE_MORECORE_LOCK();
+        }
+      }
+
+      if (released != 0) {
+        sp->size -= released;
+        m->footprint -= released;
+        init_top(m, m->top, m->topsize - released);
+        check_top_chunk(m, m->top);
+      }
+    }
+
+    /* Unmap any unused mmapped segments */
+    if (HAVE_MMAP) 
+      released += release_unused_segments(m);
+
+    /* On failure, disable autotrim to avoid repeated failed future calls */
+    if (released == 0)
+      m->trim_check = MAX_SIZE_T;
+  }
+
+  return (released != 0)? 1 : 0;
+}
+
+/* ---------------------------- malloc support --------------------------- */
+
+/* allocate a large request from the best fitting chunk in a treebin */
+static void* tmalloc_large(mstate m, size_t nb) {
+  tchunkptr v = 0;
+  size_t rsize = -nb; /* Unsigned negation */
+  tchunkptr t;
+  bindex_t idx;
+  compute_tree_index(nb, idx);
+
+  if ((t = *treebin_at(m, idx)) != 0) {
+    /* Traverse tree for this bin looking for node with size == nb */
+    size_t sizebits = nb << leftshift_for_tree_index(idx);
+    tchunkptr rst = 0;  /* The deepest untaken right subtree */
+    for (;;) {
+      tchunkptr rt;
+      size_t trem = chunksize(t) - nb;
+      if (trem < rsize) {
+        v = t;
+        if ((rsize = trem) == 0)
+          break;
+      }
+      rt = t->child[1];
+      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
+      if (rt != 0 && rt != t)
+        rst = rt;
+      if (t == 0) {
+        t = rst; /* set t to least subtree holding sizes > nb */
+        break;
+      }
+      sizebits <<= 1;
+    }
+  }
+
+  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
+    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
+    if (leftbits != 0) {
+      bindex_t i;
+      binmap_t leastbit = least_bit(leftbits);
+      compute_bit2idx(leastbit, i);
+      t = *treebin_at(m, i);
+    }
+  }
+
+  while (t != 0) { /* find smallest of tree or subtree */
+    size_t trem = chunksize(t) - nb;
+    if (trem < rsize) {
+      rsize = trem;
+      v = t;
+    }
+    t = leftmost_child(t);
+  }
+
+  /*  If dv is a better fit, return 0 so malloc will use it */
+  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
+    if (RTCHECK(ok_address(m, v))) { /* split */
+      mchunkptr r = chunk_plus_offset(v, nb);
+      assert(chunksize(v) == rsize + nb);
+      if (RTCHECK(ok_next(v, r))) {
+        unlink_large_chunk(m, v);
+        if (rsize < MIN_CHUNK_SIZE)
+          set_inuse_and_pinuse(m, v, (rsize + nb));
+        else {
+          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
+          set_size_and_pinuse_of_free_chunk(r, rsize);
+          insert_chunk(m, r, rsize);
+        }
+        return chunk2mem(v);
+      }
+    }
+    CORRUPTION_ERROR_ACTION(m);
+  }
+  return 0;
+}
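+
+/*
+  Note on the traversal above: sizebits starts as nb shifted so that
+  the size bit governing the first left/right choice for this treebin
+  occupies the most significant position; at each level
+  (sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1 selects child[0] or
+  child[1], and sizebits <<= 1 exposes the next bit.  rst records the
+  deepest right subtree passed over, i.e. the least subtree holding
+  only sizes greater than nb, which is used when the exact-fit path
+  runs out.
+*/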
+
+/* allocate a small request from the best fitting chunk in a treebin */
+static void* tmalloc_small(mstate m, size_t nb) {
+  tchunkptr t, v;
+  size_t rsize;
+  bindex_t i;
+  binmap_t leastbit = least_bit(m->treemap);
+  compute_bit2idx(leastbit, i);
+
+  v = t = *treebin_at(m, i);
+  rsize = chunksize(t) - nb;
+
+  while ((t = leftmost_child(t)) != 0) {
+    size_t trem = chunksize(t) - nb;
+    if (trem < rsize) {
+      rsize = trem;
+      v = t;
+    }
+  }
+
+  if (RTCHECK(ok_address(m, v))) {
+    mchunkptr r = chunk_plus_offset(v, nb);
+    assert(chunksize(v) == rsize + nb);
+    if (RTCHECK(ok_next(v, r))) {
+      unlink_large_chunk(m, v);
+      if (rsize < MIN_CHUNK_SIZE)
+        set_inuse_and_pinuse(m, v, (rsize + nb));
+      else {
+        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
+        set_size_and_pinuse_of_free_chunk(r, rsize);
+        replace_dv(m, r, rsize);
+      }
+      return chunk2mem(v);
+    }
+  }
+
+  CORRUPTION_ERROR_ACTION(m);
+  return 0;
+}
+
+/* --------------------------- realloc support --------------------------- */
+
+static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
+  if (bytes >= MAX_REQUEST) {
+    MALLOC_FAILURE_ACTION;
+    return 0;
+  }
+  if (!PREACTION(m)) {
+    mchunkptr oldp = mem2chunk(oldmem);
+    size_t oldsize = chunksize(oldp);
+    mchunkptr next = chunk_plus_offset(oldp, oldsize);
+    mchunkptr newp = 0;
+    void* extra = 0;
+
+    /* Try to either shrink or extend into top. Else malloc-copy-free */
+
+    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
+                ok_next(oldp, next) && ok_pinuse(next))) {
+      size_t nb = request2size(bytes);
+      if (is_mmapped(oldp))
+        newp = mmap_resize(m, oldp, nb);
+      else if (oldsize >= nb) { /* already big enough */
+        size_t rsize = oldsize - nb;
+        newp = oldp;
+        if (rsize >= MIN_CHUNK_SIZE) {
+          mchunkptr remainder = chunk_plus_offset(newp, nb);
+          set_inuse(m, newp, nb);
+          set_inuse(m, remainder, rsize);
+          extra = chunk2mem(remainder);
+        }
+      }
+      else if (next == m->top && oldsize + m->topsize > nb) {
+        /* Expand into top */
+        size_t newsize = oldsize + m->topsize;
+        size_t newtopsize = newsize - nb;
+        mchunkptr newtop = chunk_plus_offset(oldp, nb);
+        set_inuse(m, oldp, nb);
+        newtop->head = newtopsize |PINUSE_BIT;
+        m->top = newtop;
+        m->topsize = newtopsize;
+        newp = oldp;
+      }
+    }
+    else {
+      USAGE_ERROR_ACTION(m, oldmem);
+      POSTACTION(m);
+      return 0;
+    }
+
+    POSTACTION(m);
+
+    if (newp != 0) {
+      if (extra != 0) {
+        internal_free(m, extra);
+      }
+      check_inuse_chunk(m, newp);
+      return chunk2mem(newp);
+    }
+    else {
+      void* newmem = internal_malloc(m, bytes);
+      if (newmem != 0) {
+        size_t oc = oldsize - overhead_for(oldp);
+        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
+        internal_free(m, oldmem);
+      }
+      return newmem;
+    }
+  }
+  return 0;
+}
+
+/* --------------------------- memalign support -------------------------- */
+
+static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
+  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
+    return internal_malloc(m, bytes);
+  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
+    alignment = MIN_CHUNK_SIZE;
+  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
+    size_t a = MALLOC_ALIGNMENT << 1;
+    while (a < alignment) a <<= 1;
+    alignment = a;
+  }
+  
+  if (bytes >= MAX_REQUEST - alignment) {
+    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
+      MALLOC_FAILURE_ACTION;
+    }
+  }
+  else {
+    size_t nb = request2size(bytes);
+    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
+    char* mem = (char*)internal_malloc(m, req);
+    if (mem != 0) {
+      void* leader = 0;
+      void* trailer = 0;
+      mchunkptr p = mem2chunk(mem);
+
+      if (PREACTION(m)) return 0;
+      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
+        /*
+          Find an aligned spot inside chunk.  Since we need to give
+          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
+          the first calculation places us at a spot with less than
+          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
+          We've allocated enough total room so that this is always
+          possible.
+        */
+        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
+                                                       alignment -
+                                                       SIZE_T_ONE)) &
+                                             -alignment));
+        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
+          br : br+alignment;
+        mchunkptr newp = (mchunkptr)pos;
+        size_t leadsize = pos - (char*)(p);
+        size_t newsize = chunksize(p) - leadsize;
+
+        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
+          newp->prev_foot = p->prev_foot + leadsize;
+          newp->head = (newsize|CINUSE_BIT);
+        }
+        else { /* Otherwise, give back leader, use the rest */
+          set_inuse(m, newp, newsize);
+          set_inuse(m, p, leadsize);
+          leader = chunk2mem(p);
+        }
+        p = newp;
+      }
+
+      /* Give back spare room at the end */
+      if (!is_mmapped(p)) {
+        size_t size = chunksize(p);
+        if (size > nb + MIN_CHUNK_SIZE) {
+          size_t remainder_size = size - nb;
+          mchunkptr remainder = chunk_plus_offset(p, nb);
+          set_inuse(m, p, nb);
+          set_inuse(m, remainder, remainder_size);
+          trailer = chunk2mem(remainder);
+        }
+      }
+
+      assert (chunksize(p) >= nb);
+      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
+      check_inuse_chunk(m, p);
+      POSTACTION(m);
+      if (leader != 0) {
+        internal_free(m, leader);
+      }
+      if (trailer != 0) {
+        internal_free(m, trailer);
+      }
+      return chunk2mem(p);
+    }
+  }
+  return 0;
+}
+
+/* ------------------------ comalloc/coalloc support --------------------- */
+
+static void** ialloc(mstate m,
+                     size_t n_elements,
+                     size_t* sizes,
+                     int opts,
+                     void* chunks[]) {
+  /*
+    This provides common support for independent_X routines, handling
+    all of the combinations that can result.
+
+    The opts arg has:
+    bit 0 set if all elements are same size (using sizes[0])
+    bit 1 set if elements should be zeroed
+  */
+
+  size_t    element_size;   /* chunksize of each element, if all same */
+  size_t    contents_size;  /* total size of elements */
+  size_t    array_size;     /* request size of pointer array */
+  void*     mem;            /* malloced aggregate space */
+  mchunkptr p;              /* corresponding chunk */
+  size_t    remainder_size; /* remaining bytes while splitting */
+  void**    marray;         /* either "chunks" or malloced ptr array */
+  mchunkptr array_chunk;    /* chunk for malloced ptr array */
+  flag_t    was_enabled;    /* to disable mmap */
+  size_t    size;
+  size_t    i;
+
+  /* compute array length, if needed */
+  if (chunks != 0) {
+    if (n_elements == 0)
+      return chunks; /* nothing to do */
+    marray = chunks;
+    array_size = 0;
+  }
+  else {
+    /* if empty req, must still return chunk representing empty array */
+    if (n_elements == 0)
+      return (void**)internal_malloc(m, 0);
+    marray = 0;
+    array_size = request2size(n_elements * (sizeof(void*)));
+  }
+
+  /* compute total element size */
+  if (opts & 0x1) { /* all-same-size */
+    element_size = request2size(*sizes);
+    contents_size = n_elements * element_size;
+  }
+  else { /* add up all the sizes */
+    element_size = 0;
+    contents_size = 0;
+    for (i = 0; i != n_elements; ++i)
+      contents_size += request2size(sizes[i]);
+  }
+
+  size = contents_size + array_size;
+
+  /*
+     Allocate the aggregate chunk.  First disable direct-mmapping so
+     malloc won't use it, since we would not be able to later
+     free/realloc space internal to a segregated mmap region.
+  */
+  was_enabled = use_mmap(m);
+  disable_mmap(m);
+  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
+  if (was_enabled)
+    enable_mmap(m);
+  if (mem == 0)
+    return 0;
+
+  if (PREACTION(m)) return 0;
+  p = mem2chunk(mem);
+  remainder_size = chunksize(p);
+
+  assert(!is_mmapped(p));
+
+  if (opts & 0x2) {       /* optionally clear the elements */
+    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
+  }
+
+  /* If not provided, allocate the pointer array as final part of chunk */
+  if (marray == 0) {
+    size_t  array_chunk_size;
+    array_chunk = chunk_plus_offset(p, contents_size);
+    array_chunk_size = remainder_size - contents_size;
+    marray = (void**) (chunk2mem(array_chunk));
+    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
+    remainder_size = contents_size;
+  }
+
+  /* split out elements */
+  for (i = 0; ; ++i) {
+    marray[i] = chunk2mem(p);
+    if (i != n_elements-1) {
+      if (element_size != 0)
+        size = element_size;
+      else
+        size = request2size(sizes[i]);
+      remainder_size -= size;
+      set_size_and_pinuse_of_inuse_chunk(m, p, size);
+      p = chunk_plus_offset(p, size);
+    }
+    else { /* the final element absorbs any overallocation slop */
+      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
+      break;
+    }
+  }
+
+#if DEBUG
+  if (marray != chunks) {
+    /* final element must have exactly exhausted chunk */
+    if (element_size != 0) {
+      assert(remainder_size == element_size);
+    }
+    else {
+      assert(remainder_size == request2size(sizes[i]));
+    }
+    check_inuse_chunk(m, mem2chunk(marray));
+  }
+  for (i = 0; i != n_elements; ++i)
+    check_inuse_chunk(m, mem2chunk(marray[i]));
+
+#endif /* DEBUG */
+
+  POSTACTION(m);
+  return marray;
+}
+
+
+/* -------------------------- public routines ---------------------------- */
+
+#if !ONLY_MSPACES
+
+void* dlmalloc(size_t bytes) {
+  /*
+     Basic algorithm:
+     If a small request (< 256 bytes minus per-chunk overhead):
+       1. If one exists, use a remainderless chunk in associated smallbin.
+          (Remainderless means that there are too few excess bytes to
+          represent as a chunk.)
+       2. If it is big enough, use the dv chunk, which is normally the
+          chunk adjacent to the one used for the most recent small request.
+       3. If one exists, split the smallest available chunk in a bin,
+          saving remainder in dv.
+       4. If it is big enough, use the top chunk.
+       5. If available, get memory from system and use it
+     Otherwise, for a large request:
+       1. Find the smallest available binned chunk that fits, and use it
+          if it is better fitting than dv chunk, splitting if necessary.
+       2. If better fitting than any binned chunk, use the dv chunk.
+       3. If it is big enough, use the top chunk.
+       4. If request size >= mmap threshold, try to directly mmap this chunk.
+       5. If available, get memory from system and use it
+
+     The ugly gotos here ensure that postaction occurs along all paths.
+  */
+
+  if (!PREACTION(gm)) {
+    void* mem;
+    size_t nb;
+    if (bytes <= MAX_SMALL_REQUEST) {
+      bindex_t idx;
+      binmap_t smallbits;
+      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
+      idx = small_index(nb);
+      smallbits = gm->smallmap >> idx;
+
+      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
+        mchunkptr b, p;
+        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
+        b = smallbin_at(gm, idx);
+        p = b->fd;
+        assert(chunksize(p) == small_index2size(idx));
+        unlink_first_small_chunk(gm, b, p, idx);
+        set_inuse_and_pinuse(gm, p, small_index2size(idx));
+        mem = chunk2mem(p);
+        check_malloced_chunk(gm, mem, nb);
+        goto postaction;
+      }
+
+      else if (nb > gm->dvsize) {
+        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
+          mchunkptr b, p, r;
+          size_t rsize;
+          bindex_t i;
+          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
+          binmap_t leastbit = least_bit(leftbits);
+          compute_bit2idx(leastbit, i);
+          b = smallbin_at(gm, i);
+          p = b->fd;
+          assert(chunksize(p) == small_index2size(i));
+          unlink_first_small_chunk(gm, b, p, i);
+          rsize = small_index2size(i) - nb;
+          /* Fit here cannot be remainderless with 4-byte sizes */
+          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
+            set_inuse_and_pinuse(gm, p, small_index2size(i));
+          else {
+            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
+            r = chunk_plus_offset(p, nb);
+            set_size_and_pinuse_of_free_chunk(r, rsize);
+            replace_dv(gm, r, rsize);
+          }
+          mem = chunk2mem(p);
+          check_malloced_chunk(gm, mem, nb);
+          goto postaction;
+        }
+
+        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
+          check_malloced_chunk(gm, mem, nb);
+          goto postaction;
+        }
+      }
+    }
+    else if (bytes >= MAX_REQUEST)
+      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
+    else {
+      nb = pad_request(bytes);
+      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
+        check_malloced_chunk(gm, mem, nb);
+        goto postaction;
+      }
+    }
+
+    if (nb <= gm->dvsize) {
+      size_t rsize = gm->dvsize - nb;
+      mchunkptr p = gm->dv;
+      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
+        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
+        gm->dvsize = rsize;
+        set_size_and_pinuse_of_free_chunk(r, rsize);
+        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
+      }
+      else { /* exhaust dv */
+        size_t dvs = gm->dvsize;
+        gm->dvsize = 0;
+        gm->dv = 0;
+        set_inuse_and_pinuse(gm, p, dvs);
+      }
+      mem = chunk2mem(p);
+      check_malloced_chunk(gm, mem, nb);
+      goto postaction;
+    }
+
+    else if (nb < gm->topsize) { /* Split top */
+      size_t rsize = gm->topsize -= nb;
+      mchunkptr p = gm->top;
+      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
+      r->head = rsize | PINUSE_BIT;
+      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
+      mem = chunk2mem(p);
+      check_top_chunk(gm, gm->top);
+      check_malloced_chunk(gm, mem, nb);
+      goto postaction;
+    }
+
+    mem = sys_alloc(gm, nb);
+
+  postaction:
+    POSTACTION(gm);
+    return mem;
+  }
+
+  return 0;
+}
+
+void dlfree(void* mem) {
+  /*
+     Consolidate freed chunks with preceding or succeeding bordering
+     free chunks, if they exist, and then place in a bin.  Intermixed
+     with special cases for top, dv, mmapped chunks, and usage errors.
+  */
+
+  if (mem != 0) {
+    mchunkptr p  = mem2chunk(mem);
+#if FOOTERS
+    mstate fm = get_mstate_for(p);
+    if (!ok_magic(fm)) {
+      USAGE_ERROR_ACTION(fm, p);
+      return;
+    }
+#else /* FOOTERS */
+#define fm gm
+#endif /* FOOTERS */
+    if (!PREACTION(fm)) {
+      check_inuse_chunk(fm, p);
+      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
+        size_t psize = chunksize(p);
+        mchunkptr next = chunk_plus_offset(p, psize);
+        if (!pinuse(p)) {
+          size_t prevsize = p->prev_foot;
+          if ((prevsize & IS_MMAPPED_BIT) != 0) {
+            prevsize &= ~IS_MMAPPED_BIT;
+            psize += prevsize + MMAP_FOOT_PAD;
+            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
+              fm->footprint -= psize;
+            goto postaction;
+          }
+          else {
+            mchunkptr prev = chunk_minus_offset(p, prevsize);
+            psize += prevsize;
+            p = prev;
+            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
+              if (p != fm->dv) {
+                unlink_chunk(fm, p, prevsize);
+              }
+              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
+                fm->dvsize = psize;
+                set_free_with_pinuse(p, psize, next);
+                goto postaction;
+              }
+            }
+            else
+              goto erroraction;
+          }
+        }
+
+        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
+          if (!cinuse(next)) {  /* consolidate forward */
+            if (next == fm->top) {
+              size_t tsize = fm->topsize += psize;
+              fm->top = p;
+              p->head = tsize | PINUSE_BIT;
+              if (p == fm->dv) {
+                fm->dv = 0;
+                fm->dvsize = 0;
+              }
+              if (should_trim(fm, tsize))
+                sys_trim(fm, 0);
+              goto postaction;
+            }
+            else if (next == fm->dv) {
+              size_t dsize = fm->dvsize += psize;
+              fm->dv = p;
+              set_size_and_pinuse_of_free_chunk(p, dsize);
+              goto postaction;
+            }
+            else {
+              size_t nsize = chunksize(next);
+              psize += nsize;
+              unlink_chunk(fm, next, nsize);
+              set_size_and_pinuse_of_free_chunk(p, psize);
+              if (p == fm->dv) {
+                fm->dvsize = psize;
+                goto postaction;
+              }
+            }
+          }
+          else
+            set_free_with_pinuse(p, psize, next);
+          insert_chunk(fm, p, psize);
+          check_free_chunk(fm, p);
+          goto postaction;
+        }
+      }
+    erroraction:
+      USAGE_ERROR_ACTION(fm, p);
+    postaction:
+      POSTACTION(fm);
+    }
+  }
+#if !FOOTERS
+#undef fm
+#endif /* FOOTERS */
+}
+
+void* dlcalloc(size_t n_elements, size_t elem_size) {
+  void* mem;
+  size_t req = 0;
+  if (n_elements != 0) {
+    req = n_elements * elem_size;
+    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
+        (req / n_elements != elem_size))
+      req = MAX_SIZE_T; /* force downstream failure on overflow */
+  }
+  mem = dlmalloc(req);
+  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
+    memset(mem, 0, req);
+  return mem;
+}
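+
+/*
+  Note on the overflow test above: when both n_elements and elem_size
+  fit in 16 bits, their product cannot overflow any size_t of 32 bits
+  or more, so the division is skipped on that common fast path;
+  otherwise req / n_elements != elem_size detects multiplication
+  wraparound and forces the downstream dlmalloc to fail.
+*/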
+
+void* dlrealloc(void* oldmem, size_t bytes) {
+  if (oldmem == 0)
+    return dlmalloc(bytes);
+#ifdef REALLOC_ZERO_BYTES_FREES
+  if (bytes == 0) {
+    dlfree(oldmem);
+    return 0;
+  }
+#endif /* REALLOC_ZERO_BYTES_FREES */
+  else {
+#if ! FOOTERS
+    mstate m = gm;
+#else /* FOOTERS */
+    mstate m = get_mstate_for(mem2chunk(oldmem));
+    if (!ok_magic(m)) {
+      USAGE_ERROR_ACTION(m, oldmem);
+      return 0;
+    }
+#endif /* FOOTERS */
+    return internal_realloc(m, oldmem, bytes);
+  }
+}
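+
+/*
+  Usage sketch (illustrative only, not part of the imported file's
+  documentation): the usual grow-or-keep realloc pattern.
+
+    void* p = dlmalloc(100);
+    void* q = dlrealloc(p, 200);
+    if (q != 0)
+      p = q;        /* at least 200 usable bytes now */
+    else
+      dlfree(p);    /* on failure the original block remains valid */
+*/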
+
+void* dlmemalign(size_t alignment, size_t bytes) {
+  return internal_memalign(gm, alignment, bytes);
+}
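+
+/*
+  Usage sketch (illustrative only): request 64-byte-aligned storage.
+  Non-power-of-two alignments are rounded up by internal_memalign.
+
+    void* p = dlmemalign(64, 1000);
+    if (p != 0) {
+      assert(((size_t)p & 63) == 0);
+      dlfree(p);                     /* freed like any other block */
+    }
+*/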
+
+void** dlindependent_calloc(size_t n_elements, size_t elem_size,
+                                 void* chunks[]) {
+  size_t sz = elem_size; /* serves as 1-element array */
+  return ialloc(gm, n_elements, &sz, 3, chunks);
+}
+
+void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
+                                   void* chunks[]) {
+  return ialloc(gm, n_elements, sizes, 0, chunks);
+}
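+
+/*
+  Usage sketch (illustrative only; struct header, n and m are
+  hypothetical): carve one header and two dependently-sized buffers
+  out of a single underlying allocation.  Each element may later be
+  freed independently with dlfree.
+
+    void* chunks[3];
+    size_t sizes[3] = { sizeof(struct header), n, m };
+    if (dlindependent_comalloc(3, sizes, chunks) != 0) {
+      struct header* h = (struct header*)chunks[0];
+      char* body = (char*)chunks[1];
+      char* tail = (char*)chunks[2];
+      /* ... use h, body, tail; free each when done ... */
+    }
+*/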
+
+void* dlvalloc(size_t bytes) {
+  size_t pagesz;
+  init_mparams();
+  pagesz = mparams.page_size;
+  return dlmemalign(pagesz, bytes);
+}
+
+void* dlpvalloc(size_t bytes) {
+  size_t pagesz;
+  init_mparams();
+  pagesz = mparams.page_size;
+  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
+}
+
+int dlmalloc_trim(size_t pad) {
+  int result = 0;
+  if (!PREACTION(gm)) {
+    result = sys_trim(gm, pad);
+    POSTACTION(gm);
+  }
+  return result;
+}
+
+size_t dlmalloc_footprint(void) {
+  return gm->footprint;
+}
+
+size_t dlmalloc_max_footprint(void) {
+  return gm->max_footprint;
+}
+
+#if !NO_MALLINFO
+struct mallinfo dlmallinfo(void) {
+  return internal_mallinfo(gm);
+}
+#endif /* NO_MALLINFO */
+
+void dlmalloc_stats() {
+  internal_malloc_stats(gm);
+}
+
+size_t dlmalloc_usable_size(void* mem) {
+  if (mem != 0) {
+    mchunkptr p = mem2chunk(mem);
+    if (cinuse(p))
+      return chunksize(p) - overhead_for(p);
+  }
+  return 0;
+}
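+
+/*
+  Usage sketch (illustrative only): the usable size may exceed the
+  request because of chunk rounding, and the whole span is writable.
+
+    void* p = dlmalloc(100);
+    if (p != 0) {
+      size_t n = dlmalloc_usable_size(p);  /* n >= 100 */
+      memset(p, 0, n);
+      dlfree(p);
+    }
+*/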
+
+int dlmallopt(int param_number, int value) {
+  return change_mparam(param_number, value);
+}
+
+#endif /* !ONLY_MSPACES */
+
+/* ----------------------------- user mspaces ---------------------------- */
+
+#if MSPACES
+
+static mstate init_user_mstate(char* tbase, size_t tsize) {
+  size_t msize = pad_request(sizeof(struct malloc_state));
+  mchunkptr mn;
+  mchunkptr msp = align_as_chunk(tbase);
+  mstate m = (mstate)(chunk2mem(msp));
+  memset(m, 0, msize);
+  INITIAL_LOCK(&m->mutex);
+  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
+  m->seg.base = m->least_addr = tbase;
+  m->seg.size = m->footprint = m->max_footprint = tsize;
+  m->magic = mparams.magic;
+  m->mflags = mparams.default_mflags;
+  disable_contiguous(m);
+  init_bins(m);
+  mn = next_chunk(mem2chunk(m));
+  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
+  check_top_chunk(m, m->top);
+  return m;
+}
+
+mspace create_mspace(size_t capacity, int locked) {
+  mstate m = 0;
+  size_t msize = pad_request(sizeof(struct malloc_state));
+  init_mparams(); /* Ensure pagesize etc initialized */
+
+  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
+    size_t rs = ((capacity == 0)? mparams.granularity :
+                 (capacity + TOP_FOOT_SIZE + msize));
+    size_t tsize = granularity_align(rs);
+    char* tbase = (char*)(CALL_MMAP(tsize));
+    if (tbase != CMFAIL) {
+      m = init_user_mstate(tbase, tsize);
+      m->seg.sflags = IS_MMAPPED_BIT;
+      set_lock(m, locked);
+    }
+  }
+  return (mspace)m;
+}
+
+mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
+  mstate m = 0;
+  size_t msize = pad_request(sizeof(struct malloc_state));
+  init_mparams(); /* Ensure pagesize etc initialized */
+
+  if (capacity > msize + TOP_FOOT_SIZE &&
+      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
+    m = init_user_mstate((char*)base, capacity);
+    m->seg.sflags = EXTERN_BIT;
+    set_lock(m, locked);
+  }
+  return (mspace)m;
+}
+
+size_t destroy_mspace(mspace msp) {
+  size_t freed = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    msegmentptr sp = &ms->seg;
+    while (sp != 0) {
+      char* base = sp->base;
+      size_t size = sp->size;
+      flag_t flag = sp->sflags;
+      sp = sp->next;
+      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
+          CALL_MUNMAP(base, size) == 0)
+        freed += size;
+    }
+  }
+  else {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return freed;
+}
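+
+/*
+  Usage sketch (illustrative only): a private, lock-protected mspace.
+  destroy_mspace releases every mmapped segment in one call, so
+  individual mspace_free calls are optional on teardown.
+
+    mspace msp = create_mspace(0, 1);    /* default capacity, locked */
+    if (msp != 0) {
+      void* p = mspace_malloc(msp, 128);
+      if (p != 0)
+        mspace_free(msp, p);
+      destroy_mspace(msp);
+    }
+*/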
+
+/*
+  mspace versions of routines are near-clones of the global
+  versions. This is not so nice but better than the alternatives.
+*/
+
+
+void* mspace_malloc(mspace msp, size_t bytes) {
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+    return 0;
+  }
+  if (!PREACTION(ms)) {
+    void* mem;
+    size_t nb;
+    if (bytes <= MAX_SMALL_REQUEST) {
+      bindex_t idx;
+      binmap_t smallbits;
+      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
+      idx = small_index(nb);
+      smallbits = ms->smallmap >> idx;
+
+      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
+        mchunkptr b, p;
+        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
+        b = smallbin_at(ms, idx);
+        p = b->fd;
+        assert(chunksize(p) == small_index2size(idx));
+        unlink_first_small_chunk(ms, b, p, idx);
+        set_inuse_and_pinuse(ms, p, small_index2size(idx));
+        mem = chunk2mem(p);
+        check_malloced_chunk(ms, mem, nb);
+        goto postaction;
+      }
+
+      else if (nb > ms->dvsize) {
+        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
+          mchunkptr b, p, r;
+          size_t rsize;
+          bindex_t i;
+          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
+          binmap_t leastbit = least_bit(leftbits);
+          compute_bit2idx(leastbit, i);
+          b = smallbin_at(ms, i);
+          p = b->fd;
+          assert(chunksize(p) == small_index2size(i));
+          unlink_first_small_chunk(ms, b, p, i);
+          rsize = small_index2size(i) - nb;
+          /* Fit here cannot be remainderless if 4byte sizes */
+          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
+            set_inuse_and_pinuse(ms, p, small_index2size(i));
+          else {
+            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
+            r = chunk_plus_offset(p, nb);
+            set_size_and_pinuse_of_free_chunk(r, rsize);
+            replace_dv(ms, r, rsize);
+          }
+          mem = chunk2mem(p);
+          check_malloced_chunk(ms, mem, nb);
+          goto postaction;
+        }
+
+        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
+          check_malloced_chunk(ms, mem, nb);
+          goto postaction;
+        }
+      }
+    }
+    else if (bytes >= MAX_REQUEST)
+      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
+    else {
+      nb = pad_request(bytes);
+      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
+        check_malloced_chunk(ms, mem, nb);
+        goto postaction;
+      }
+    }
+
+    if (nb <= ms->dvsize) {
+      size_t rsize = ms->dvsize - nb;
+      mchunkptr p = ms->dv;
+      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
+        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
+        ms->dvsize = rsize;
+        set_size_and_pinuse_of_free_chunk(r, rsize);
+        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
+      }
+      else { /* exhaust dv */
+        size_t dvs = ms->dvsize;
+        ms->dvsize = 0;
+        ms->dv = 0;
+        set_inuse_and_pinuse(ms, p, dvs);
+      }
+      mem = chunk2mem(p);
+      check_malloced_chunk(ms, mem, nb);
+      goto postaction;
+    }
+
+    else if (nb < ms->topsize) { /* Split top */
+      size_t rsize = ms->topsize -= nb;
+      mchunkptr p = ms->top;
+      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
+      r->head = rsize | PINUSE_BIT;
+      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
+      mem = chunk2mem(p);
+      check_top_chunk(ms, ms->top);
+      check_malloced_chunk(ms, mem, nb);
+      goto postaction;
+    }
+
+    mem = sys_alloc(ms, nb);
+
+  postaction:
+    POSTACTION(ms);
+    return mem;
+  }
+
+  return 0;
+}
+
+void mspace_free(mspace msp, void* mem) {
+  if (mem != 0) {
+    mchunkptr p  = mem2chunk(mem);
+#if FOOTERS
+    mstate fm = get_mstate_for(p);
+#else /* FOOTERS */
+    mstate fm = (mstate)msp;
+#endif /* FOOTERS */
+    if (!ok_magic(fm)) {
+      USAGE_ERROR_ACTION(fm, p);
+      return;
+    }
+    if (!PREACTION(fm)) {
+      check_inuse_chunk(fm, p);
+      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
+        size_t psize = chunksize(p);
+        mchunkptr next = chunk_plus_offset(p, psize);
+        if (!pinuse(p)) {
+          size_t prevsize = p->prev_foot;
+          if ((prevsize & IS_MMAPPED_BIT) != 0) {
+            prevsize &= ~IS_MMAPPED_BIT;
+            psize += prevsize + MMAP_FOOT_PAD;
+            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
+              fm->footprint -= psize;
+            goto postaction;
+          }
+          else {
+            mchunkptr prev = chunk_minus_offset(p, prevsize);
+            psize += prevsize;
+            p = prev;
+            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
+              if (p != fm->dv) {
+                unlink_chunk(fm, p, prevsize);
+              }
+              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
+                fm->dvsize = psize;
+                set_free_with_pinuse(p, psize, next);
+                goto postaction;
+              }
+            }
+            else
+              goto erroraction;
+          }
+        }
+
+        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
+          if (!cinuse(next)) {  /* consolidate forward */
+            if (next == fm->top) {
+              size_t tsize = fm->topsize += psize;
+              fm->top = p;
+              p->head = tsize | PINUSE_BIT;
+              if (p == fm->dv) {
+                fm->dv = 0;
+                fm->dvsize = 0;
+              }
+              if (should_trim(fm, tsize))
+                sys_trim(fm, 0);
+              goto postaction;
+            }
+            else if (next == fm->dv) {
+              size_t dsize = fm->dvsize += psize;
+              fm->dv = p;
+              set_size_and_pinuse_of_free_chunk(p, dsize);
+              goto postaction;
+            }
+            else {
+              size_t nsize = chunksize(next);
+              psize += nsize;
+              unlink_chunk(fm, next, nsize);
+              set_size_and_pinuse_of_free_chunk(p, psize);
+              if (p == fm->dv) {
+                fm->dvsize = psize;
+                goto postaction;
+              }
+            }
+          }
+          else
+            set_free_with_pinuse(p, psize, next);
+          insert_chunk(fm, p, psize);
+          check_free_chunk(fm, p);
+          goto postaction;
+        }
+      }
+    erroraction:
+      USAGE_ERROR_ACTION(fm, p);
+    postaction:
+      POSTACTION(fm);
+    }
+  }
+}
+
+void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
+  void* mem;
+  size_t req = 0;
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+    return 0;
+  }
+  if (n_elements != 0) {
+    req = n_elements * elem_size;
+    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
+        (req / n_elements != elem_size))
+      req = MAX_SIZE_T; /* force downstream failure on overflow */
+  }
+  mem = internal_malloc(ms, req);
+  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
+    memset(mem, 0, req);
+  return mem;
+}
+
+void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
+  if (oldmem == 0)
+    return mspace_malloc(msp, bytes);
+#ifdef REALLOC_ZERO_BYTES_FREES
+  if (bytes == 0) {
+    mspace_free(msp, oldmem);
+    return 0;
+  }
+#endif /* REALLOC_ZERO_BYTES_FREES */
+  else {
+#if FOOTERS
+    mchunkptr p  = mem2chunk(oldmem);
+    mstate ms = get_mstate_for(p);
+#else /* FOOTERS */
+    mstate ms = (mstate)msp;
+#endif /* FOOTERS */
+    if (!ok_magic(ms)) {
+      USAGE_ERROR_ACTION(ms,ms);
+      return 0;
+    }
+    return internal_realloc(ms, oldmem, bytes);
+  }
+}
+
+void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+    return 0;
+  }
+  return internal_memalign(ms, alignment, bytes);
+}
+
+void** mspace_independent_calloc(mspace msp, size_t n_elements,
+                                 size_t elem_size, void* chunks[]) {
+  size_t sz = elem_size; /* serves as 1-element array */
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+    return 0;
+  }
+  return ialloc(ms, n_elements, &sz, 3, chunks);
+}
+
+void** mspace_independent_comalloc(mspace msp, size_t n_elements,
+                                   size_t sizes[], void* chunks[]) {
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+    return 0;
+  }
+  return ialloc(ms, n_elements, sizes, 0, chunks);
+}
+
+int mspace_trim(mspace msp, size_t pad) {
+  int result = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    if (!PREACTION(ms)) {
+      result = sys_trim(ms, pad);
+      POSTACTION(ms);
+    }
+  }
+  else {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return result;
+}
+
+void mspace_malloc_stats(mspace msp) {
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    internal_malloc_stats(ms);
+  }
+  else {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+}
+
+size_t mspace_footprint(mspace msp) {
+  size_t result = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    result = ms->footprint;
+  }
+  else { USAGE_ERROR_ACTION(ms,ms); }
+  return result;
+}
+
+
+size_t mspace_max_footprint(mspace msp) {
+  size_t result = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    result = ms->max_footprint;
+  }
+  else { USAGE_ERROR_ACTION(ms,ms); }
+  return result;
+}
+
+
+#if !NO_MALLINFO
+struct mallinfo mspace_mallinfo(mspace msp) {
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return internal_mallinfo(ms);
+}
+#endif /* NO_MALLINFO */
+
+int mspace_mallopt(int param_number, int value) {
+  return change_mparam(param_number, value);
+}
+
+#endif /* MSPACES */
+
+/* -------------------- Alternative MORECORE functions ------------------- */
+
+/*
+  Guidelines for creating a custom version of MORECORE:
+
+  * For best performance, MORECORE should allocate in multiples of pagesize.
+  * MORECORE may allocate more memory than requested. (Or even less,
+      but this will usually result in a malloc failure.)
+  * MORECORE must not allocate memory when given argument zero, but
+      instead return one past the end address of memory from previous
+      nonzero call.
+  * For best performance, consecutive calls to MORECORE with positive
+      arguments should return increasing addresses, indicating that
+      space has been contiguously extended.
+  * Even though consecutive calls to MORECORE need not return contiguous
+      addresses, it must be OK for malloc'ed chunks to span multiple
+      regions in those cases where they do happen to be contiguous.
+  * MORECORE need not handle negative arguments -- it may instead
+      just return MFAIL when given negative arguments.
+      Negative arguments are always multiples of pagesize. MORECORE
+      must not misinterpret negative args as large positive unsigned
+      args. You can suppress all such calls from even occurring by defining
+      MORECORE_CANNOT_TRIM.
+
+  As an example alternative MORECORE, here is a custom allocator
+  kindly contributed for pre-OSX macOS.  It uses virtually but not
+  necessarily physically contiguous non-paged memory (locked in,
+  present and won't get swapped out).  You can use it by uncommenting
+  this section, adding some #includes, and setting up the appropriate
+  defines above:
+
+      #define MORECORE osMoreCore
+
+  There is also a shutdown routine that should somehow be called for
+  cleanup upon program exit.
+
+  #define MAX_POOL_ENTRIES 100
+  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
+  static int next_os_pool;
+  void *our_os_pools[MAX_POOL_ENTRIES];
+
+  void *osMoreCore(int size)
+  {
+    void *ptr = 0;
+    static void *sbrk_top = 0;
+
+    if (size > 0)
+    {
+      if (size < MINIMUM_MORECORE_SIZE)
+         size = MINIMUM_MORECORE_SIZE;
+      if (CurrentExecutionLevel() == kTaskLevel)
+         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
+      if (ptr == 0)
+      {
+        return (void *) MFAIL;
+      }
+      // save ptrs so they can be freed during cleanup
+      our_os_pools[next_os_pool] = ptr;
+      next_os_pool++;
+      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
+      sbrk_top = (char *) ptr + size;
+      return ptr;
+    }
+    else if (size < 0)
+    {
+      // we don't currently support shrink behavior
+      return (void *) MFAIL;
+    }
+    else
+    {
+      return sbrk_top;
+    }
+  }
+
+  // cleanup any allocated memory pools
+  // called as last thing before shutting down driver
+
+  void osCleanupMem(void)
+  {
+    void **ptr;
+
+    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
+      if (*ptr)
+      {
+         PoolDeallocate(*ptr);
+         *ptr = 0;
+      }
+  }
+
+*/
+
+
+/* -----------------------------------------------------------------------
+History:
+    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
+      * Add max_footprint functions
+      * Ensure all appropriate literals are size_t
+      * Fix conditional compilation problem for some #define settings
+      * Avoid concatenating segments with the one provided
+        in create_mspace_with_base
+      * Rename some variables to avoid compiler shadowing warnings
+      * Use explicit lock initialization.
+      * Better handling of sbrk interference.
+      * Simplify and fix segment insertion, trimming and mspace_destroy
+      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
+      * Thanks especially to Dennis Flanagan for help on these.
+
+    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
+      * Fix memalign brace error.
+
+    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
+      * Fix improper #endif nesting in C++
+      * Add explicit casts needed for C++
+
+    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
+      * Use trees for large bins
+      * Support mspaces
+      * Use segments to unify sbrk-based and mmap-based system allocation,
+        removing need for emulation on most platforms without sbrk.
+      * Default safety checks
+      * Optional footer checks. Thanks to William Robertson for the idea.
+      * Internal code refactoring
+      * Incorporate suggestions and platform-specific changes.
+        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
+        Aaron Bachmann,  Emery Berger, and others.
+      * Speed up non-fastbin processing enough to remove fastbins.
+      * Remove useless cfree() to avoid conflicts with other apps.
+      * Remove internal memcpy, memset. Compilers handle builtins better.
+      * Remove some options that no one ever used and rename others.
+
+    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
+      * Fix malloc_state bitmap array misdeclaration
+
+    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
+      * Allow tuning of FIRST_SORTED_BIN_SIZE
+      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
+      * Better detection and support for non-contiguousness of MORECORE.
+        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
+      * Bypass most of malloc if no frees. Thanks To Emery Berger.
+      * Fix freeing of old top non-contiguous chunk in sysmalloc.
+      * Raised default trim and map thresholds to 256K.
+      * Fix mmap-related #defines. Thanks to Lubos Lunak.
+      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
+      * Branch-free bin calculation
+      * Default trim and mmap thresholds now 256K.
+
+    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
+      * Introduce independent_comalloc and independent_calloc.
+        Thanks to Michael Pachos for motivation and help.
+      * Make optional .h file available
+      * Allow > 2GB requests on 32bit systems.
+      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
+        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
+        and Anonymous.
+      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
+        helping test this.)
+      * memalign: check alignment arg
+      * realloc: don't try to shift chunks backwards, since this
+        leads to  more fragmentation in some programs and doesn't
+        seem to help in any others.
+      * Collect all cases in malloc requiring system memory into sysmalloc
+      * Use mmap as backup to sbrk
+      * Place all internal state in malloc_state
+      * Introduce fastbins (although similar to 2.5.1)
+      * Many minor tunings and cosmetic improvements
+      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
+      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
+        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
+      * Include errno.h to support default failure action.
+
+    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
+      * return null for negative arguments
+      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
+         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
+          (e.g. WIN32 platforms)
+         * Cleanup header file inclusion for WIN32 platforms
+         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
+         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
+           memory allocation routines
+         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
+         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
+           usage of 'assert' in non-WIN32 code
+         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
+           avoid infinite loop
+      * Always call 'fREe()' rather than 'free()'
+
+    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
+      * Fixed ordering problem with boundary-stamping
+
+    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
+      * Added pvalloc, as recommended by H.J. Liu
+      * Added 64bit pointer support mainly from Wolfram Gloger
+      * Added anonymously donated WIN32 sbrk emulation
+      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
+      * malloc_extend_top: fix mask error that caused wastage after
+        foreign sbrks
+      * Add linux mremap support code from HJ Liu
+
+    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
+      * Integrated most documentation with the code.
+      * Add support for mmap, with help from
+        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
+      * Use last_remainder in more cases.
+      * Pack bins using idea from  colin@nyx10.cs.du.edu
+      * Use ordered bins instead of best-fit threshold
+      * Eliminate block-local decls to simplify tracing and debugging.
+      * Support another case of realloc via move into top
+      * Fix error occurring when initial sbrk_base not word-aligned.
+      * Rely on page size for units instead of SBRK_UNIT to
+        avoid surprises about sbrk alignment conventions.
+      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
+        (raymond@es.ele.tue.nl) for the suggestion.
+      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
+      * More precautions for cases where other routines call sbrk,
+        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
+      * Added macros etc., allowing use in linux libc from
+        H.J. Lu (hjl@gnu.ai.mit.edu)
+      * Inverted this history list
+
+    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
+      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
+      * Removed all preallocation code since under current scheme
+        the work required to undo bad preallocations exceeds
+        the work saved in good cases for most test programs.
+      * No longer use return list or unconsolidated bins since
+        no scheme using them consistently outperforms those that don't
+        given above changes.
+      * Use best fit for very large chunks to prevent some worst-cases.
+      * Added some support for debugging
+
+    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
+      * Removed footers when chunks are in use. Thanks to
+        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
+
+    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
+      * Added malloc_trim, with help from Wolfram Gloger
+        (wmglo@Dent.MED.Uni-Muenchen.DE).
+
+    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
+
+    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
+      * realloc: try to expand in both directions
+      * malloc: swap order of clean-bin strategy;
+      * realloc: only conditionally expand backwards
+      * Try not to scavenge used bins
+      * Use bin counts as a guide to preallocation
+      * Occasionally bin return list chunks in first scan
+      * Add a few optimizations from colin@nyx10.cs.du.edu
+
+    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
+      * faster bin computation & slightly different binning
+      * merged all consolidations to one part of malloc proper
+         (eliminating old malloc_find_space & malloc_clean_bin)
+      * Scan 2 returns chunks (not just 1)
+      * Propagate failure in realloc if malloc returns 0
+      * Add stuff to allow compilation on non-ANSI compilers
+          from kpv@research.att.com
+
+    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
+      * removed potential for odd address access in prev_chunk
+      * removed dependency on getpagesize.h
+      * misc cosmetics and a bit more internal documentation
+      * anticosmetics: mangled names in macros to evade debugger strangeness
+      * tested on sparc, hp-700, dec-mips, rs6000
+          with gcc & native cc (hp, dec only) allowing
+          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
+
+    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
+      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
+         structure of old version,  but most details differ.)
+ 
+*/
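
The user-mspace entry points above are self-contained; for reference,
here is a minimal usage sketch.  This is editorial, not part of the
patch: the prototypes would normally come from malloc-2.8.3.h, and
dlmalloc.c must be compiled with MSPACES defined.

#include <stddef.h>
#include <stdio.h>

/* Declarations normally provided by malloc-2.8.3.h.  */
typedef void *mspace;
mspace create_mspace (size_t capacity, int locked);
size_t destroy_mspace (mspace msp);
void *mspace_malloc (mspace msp, size_t bytes);
void mspace_free (mspace msp, void *mem);

int
main (void)
{
  /* Capacity 0 lets the space grow on demand; locked != 0 makes the
     space thread-safe.  */
  mspace msp = create_mspace (0, 1);
  if (!msp)
    return 1;

  void *p = mspace_malloc (msp, 128);
  if (p)
    mspace_free (msp, p);

  /* Releases every mmapped segment the space still owns.  */
  printf ("freed %zu bytes\n", destroy_mspace (msp));
  return 0;
}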

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #3: libffi-closure-prot-exec.patch --]
[-- Type: text/x-patch, Size: 90876 bytes --]

for  libffi/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* include/ffi.h.in (ffi_closure_alloc, ffi_closure_free): New.
	(ffi_prep_closure_loc): New.
	(ffi_prep_raw_closure_loc): New.
	(ffi_prep_java_raw_closure_loc): New.
	* src/closures.c: New file.
	* src/dlmalloc.c [FFI_MMAP_EXEC_WRIT] (struct malloc_segment):
	Replace sflags with exec_offset.
	[FFI_MMAP_EXEC_WRIT] (mmap_exec_offset, add_segment_exec_offset,
	sub_segment_exec_offset): New macros.
	(get_segment_flags, set_segment_flags, check_segment_merge): New
	macros.
	(is_mmapped_segment, is_extern_segment): Use get_segment_flags.
	(add_segment, sys_alloc, create_mspace, create_mspace_with_base,
	destroy_mspace): Use new macros.
	(sys_alloc): Silence warning.
	* Makefile.am (libffi_la_SOURCES): Add src/closures.c.
	* Makefile.in: Rebuilt.
	* src/prep_cif.c [FFI_CLOSURES] (ffi_prep_closure): Implement in
	terms of ffi_prep_closure_loc.
	* src/raw_api.c (ffi_prep_raw_closure_loc): Renamed and adjusted
	from...
	(ffi_prep_raw_closure): ... this.  Re-implement in terms of the
	renamed version.
	* src/java_raw_api.c (ffi_prep_java_raw_closure_loc): Renamed and
	adjusted from...
	(ffi_prep_java_raw_closure): ... this.  Re-implement in terms of
	the renamed version.
	* src/alpha/ffi.c (ffi_prep_closure_loc): Renamed from...
	(ffi_prep_closure): ... this.
	* src/pa/ffi.c: Likewise.
	* src/cris/ffi.c: Likewise.  Adjust.
	* src/frv/ffi.c: Likewise.
	* src/ia64/ffi.c: Likewise.
	* src/mips/ffi.c: Likewise.
	* src/powerpc/ffi_darwin.c: Likewise.
	* src/s390/ffi.c: Likewise.
	* src/sh/ffi.c: Likewise.
	* src/sh64/ffi.c: Likewise.
	* src/sparc/ffi.c: Likewise.
	* src/x86/ffi64.c: Likewise.
	* src/x86/ffi.c: Likewise.
	(FFI_INIT_TRAMPOLINE): Adjust.
	(ffi_prep_raw_closure_loc): Renamed and adjusted from...
	(ffi_prep_raw_closure): ... this.
	* src/powerpc/ffi.c (ffi_prep_closure_loc): Renamed from...
	(ffi_prep_closure): ... this.
	(flush_icache): Adjust.
	
for  boehm-gc/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* include/gc.h (GC_REGISTER_FINALIZER_UNREACHABLE): New.
	(GC_register_finalizer_unreachable): Declare.
	(GC_debug_register_finalizer_unreachable): Declare.
	* finalize.c (GC_unreachable_finalize_mark_proc): New.
	(GC_register_finalizer_unreachable): New.
	(GC_finalize): Handle it.
	* dbg_mlc.c (GC_debug_register_finalizer_unreachable): New.
	(GC_debug_register_finalizer_no_order): Fix whitespace.
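
The new boehm-gc entry point keeps the signature of
GC_register_finalizer.  A minimal sketch of the intended use follows;
closure_list and finalize_closures are illustrative placeholders, not
the actual libjava names.

#include <gc.h>
#include <stdio.h>
#include <stdlib.h>

/* Placeholder for whatever structure actually tracks the closures of
   a class; not the libjava type.  */
struct closure_list { void *chunk; };

static void
finalize_closures (void *obj, void *client_data)
{
  /* Called only once OBJ is unreachable even from other finalizable
     objects, so the closures cannot still be in use.  */
  struct closure_list *list = client_data;
  free (list->chunk);
  free (list);
  printf ("closures for %p released\n", obj);
}

int
main (void)
{
  GC_INIT ();

  void *klass = GC_MALLOC (64);      /* stand-in for a class object */
  struct closure_list *list = malloc (sizeof *list);
  list->chunk = malloc (128);        /* stand-in for closure memory */

  GC_register_finalizer_unreachable (klass, finalize_closures, list,
                                     NULL, NULL);

  klass = NULL;                      /* drop the last reference */
  GC_gcollect ();                    /* may queue the finalizer */
  GC_invoke_finalizers ();           /* may run it now */
  return 0;
}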

for  libjava/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* include/jvm.h (_Jv_ClosureListFinalizer): New.
	(_Jv_Linker::create_error_method): Adjust.
	* boehm.cc (_Jv_ClosureListFinalizer): New.
	* nogc.cc (_Jv_ClosureListFinalizer): New.
	* java/lang/Class.h (class _Jv_ClosureList): New.
	(class java::lang::Class): Declare it as friend.
	* java/lang/natClass.cc (_Jv_ClosureList::releaseClosures): New.
	(_Jv_ClosureList::registerClosure): New.
	* include/execution.h (_Jv_ExecutionEngine): Add get_closure_list.
	(_Jv_CompiledEngine::do_get_closure_list): New.
	(_Jv_CompiledEngine::_Jv_CompiledEngine): Use it.
	(_Jv_IndirectCompiledClass): Add closures.
	(_Jv_IndirectCompiledEngine::get_aux_info): New.
	(_Jv_IndirectCompiledEngine::do_allocate_field_initializers): Use
	it.
	(_Jv_IndirectCompiledEngine::do_get_closure_list): New.
	(_Jv_IndirectCompiledEngine::_Jv_IndirectCompiledEngine): Use it.
	(_Jv_InterpreterEngine::do_get_closure_list): Declare.
	(_Jv_InterpreterEngine::_Jv_InterpreterEngine): Use it.
	* interpret.cc (FFI_PREP_RAW_CLOSURE): Use _loc variants.
	(node_closure): Add closure list.
	(_Jv_InterpMethod::ncode): Add jclass argument.  Use
	ffi_closure_alloc and the separate code pointer.  Register the
	closure for finalization.
	(_Jv_JNIMethod::ncode): Likewise.
	(_Jv_InterpreterEngine::do_create_ncode): Pass klass to ncode.
	(_Jv_InterpreterEngine::do_get_closure_list): New.
	* include/java-interp.h (_Jv_InterpMethod::ncode): Adjust.
	(_Jv_InterpClass): Add closures field.
	(_Jv_JNIMethod::ncode): Adjust.
	* defineclass.cc (_Jv_ClassReader::handleCodeAttribute): Adjust.
	(_Jv_ClassReader::handleMethodsEnd): Likewise.
	* link.cc (struct method_closure): Add closure list.
	(_Jv_Linker::create_error_method): Add jclass argument.  Use
	ffi_closure_alloc and the separate code pointer.  Register the
	closure for finalization.
	(_Jv_Linker::link_symbol_table): Remove outdated comment about
	sharing of otable and atable.  Adjust.
	* java/lang/reflect/natVMProxy.cc (ncode_closure): Add closure
	list.
	(ncode): Add jclass argument.  Use ffi_closure_alloc and the
	separate code pointer.  Register the closure for finalization.
	(java::lang::reflect::VMProxy::generateProxyClass): Adjust.
	* testsuite/libjava.jar/TestClosureGC.java: New.
	* testsuite/libjava.jar/TestClosureGC.out: New.
	* testsuite/libjava.jar/TestClosureGC.xfail: New.
	* testsuite/libjava.jar/TestClosureGC.jar: New.
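
For reference, here is a minimal sketch of the allocation discipline
the new libffi entry points expect from callers: allocate through
ffi_closure_alloc, prepare at the writable address while passing the
executable address as codeloc, call through the executable address,
and free the writable one.  The handler and cif setup are illustrative
only.

#include <ffi.h>
#include <stdio.h>

static void
handler (ffi_cif *cif, void *ret, void **args, void *user_data)
{
  (void) cif; (void) user_data;
  /* Small integer returns are widened to ffi_arg by convention.  */
  *(ffi_arg *) ret = *(int *) args[0] + 1;
}

int
main (void)
{
  ffi_cif cif;
  ffi_type *arg_types[] = { &ffi_type_sint };
  void *code;                   /* executable address */
  ffi_closure *writable;        /* writable address */

  if (ffi_prep_cif (&cif, FFI_DEFAULT_ABI, 1,
                    &ffi_type_sint, arg_types) != FFI_OK)
    return 1;

  /* The writable and executable views may be distinct pages.  */
  writable = ffi_closure_alloc (sizeof (ffi_closure), &code);
  if (!writable)
    return 1;

  if (ffi_prep_closure_loc (writable, &cif, handler, NULL, code) != FFI_OK)
    return 1;

  /* Always call through the executable address...  */
  int (*fn) (int) = (int (*) (int)) code;
  printf ("%d\n", fn (41));

  /* ...and free through the writable one.  */
  ffi_closure_free (writable);
  return 0;
}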

Index: libffi/include/ffi.h.in
===================================================================
--- libffi/include/ffi.h.in.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/include/ffi.h.in	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------*-C-*-
-   libffi @VERSION@ - Copyright (c) 1996-2003  Red Hat, Inc.
+   libffi @VERSION@ - Copyright (c) 1996-2003, 2007  Red Hat, Inc.
 
    Permission is hereby granted, free of charge, to any person obtaining
    a copy of this software and associated documentation files (the
@@ -228,12 +228,22 @@ typedef struct {
   void      *user_data;
 } ffi_closure __attribute__((aligned (8)));
 
+void *ffi_closure_alloc (size_t size, void **code);
+void ffi_closure_free (void *);
+
 ffi_status
 ffi_prep_closure (ffi_closure*,
 		  ffi_cif *,
 		  void (*fun)(ffi_cif*,void*,void**,void*),
 		  void *user_data);
 
+ffi_status
+ffi_prep_closure_loc (ffi_closure*,
+		      ffi_cif *,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc);
+
 typedef struct {
   char tramp[FFI_TRAMPOLINE_SIZE];
 
@@ -262,11 +272,25 @@ ffi_prep_raw_closure (ffi_raw_closure*,
 		      void *user_data);
 
 ffi_status
+ffi_prep_raw_closure_loc (ffi_raw_closure*,
+			  ffi_cif *cif,
+			  void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			  void *user_data,
+			  void *codeloc);
+
+ffi_status
 ffi_prep_java_raw_closure (ffi_raw_closure*,
 		           ffi_cif *cif,
 		           void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
 		           void *user_data);
 
+ffi_status
+ffi_prep_java_raw_closure_loc (ffi_raw_closure*,
+			       ffi_cif *cif,
+			       void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			       void *user_data,
+			       void *codeloc);
+
 #endif /* FFI_CLOSURES */
 
 /* ---- Public interface definition -------------------------------------- */
Index: libffi/src/closures.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ libffi/src/closures.c	2007-02-16 21:58:54.000000000 -0200
@@ -0,0 +1,513 @@
+/* -----------------------------------------------------------------------
+   closures.c - Copyright (c) 2007  Red Hat, Inc.
+
+   Code to allocate and deallocate memory for closures.
+
+   Permission is hereby granted, free of charge, to any person obtaining
+   a copy of this software and associated documentation files (the
+   ``Software''), to deal in the Software without restriction, including
+   without limitation the rights to use, copy, modify, merge, publish,
+   distribute, sublicense, and/or sell copies of the Software, and to
+   permit persons to whom the Software is furnished to do so, subject to
+   the following conditions:
+
+   The above copyright notice and this permission notice shall be included
+   in all copies or substantial portions of the Software.
+
+   THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS
+   OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+   MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+   IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+   OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+   ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+   OTHER DEALINGS IN THE SOFTWARE.
+   ----------------------------------------------------------------------- */
+
+#include <ffi.h>
+#include <ffi_common.h>
+
+#ifndef FFI_MMAP_EXEC_WRIT
+# if __gnu_linux__
+/* This macro indicates it may be forbidden to map anonymous memory
+   with both write and execute permission.  Code compiled when this
+   option is defined will attempt to map such pages once, but if it
+   fails, it falls back to creating a temporary file in a writable and
+   executable filesystem and mapping pages from it into separate
+   locations in the virtual memory space, one location writable and
+   another executable.  */
+#  define FFI_MMAP_EXEC_WRIT 1
+# endif
+#endif
+
+#if FFI_CLOSURES
+
+# if FFI_MMAP_EXEC_WRIT
+
+#define USE_LOCKS 1
+#define USE_DL_PREFIX 1
+#define USE_BUILTIN_FFS 1
+
+/* We need to use mmap, not sbrk.  */
+#define HAVE_MORECORE 0
+
+/* We could, in theory, support mremap, but it wouldn't buy us anything.  */
+#define HAVE_MREMAP 0
+
+/* We have no use for this, so save some code and data.  */
+#define NO_MALLINFO 1
+
+/* We need all allocations to be in regular segments, otherwise we
+   lose track of the corresponding code address.  */
+#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
+
+/* Don't allocate more than a page unless needed.  */
+#define DEFAULT_GRANULARITY ((size_t)malloc_getpagesize)
+
+#if FFI_CLOSURE_TEST
+/* Don't release single pages, to avoid a worst-case scenario of
+   continuously allocating and releasing single pages, but release
+   pairs of pages, which should do just as well given that allocations
+   are likely to be small.  */
+#define DEFAULT_TRIM_THRESHOLD ((size_t)malloc_getpagesize)
+#endif
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdio.h>
+#include <mntent.h>
+#include <sys/param.h>
+#include <pthread.h>
+
+/* We don't want sys/mman.h to be included after we redefine mmap and
+   dlmunmap.  */
+#include <sys/mman.h>
+#define LACKS_SYS_MMAN_H 1
+
+#define MAYBE_UNUSED __attribute__((__unused__))
+
+/* Declare all functions defined in dlmalloc.c as static.  */
+static void *dlmalloc(size_t);
+static void dlfree(void*);
+static void *dlcalloc(size_t, size_t) MAYBE_UNUSED;
+static void *dlrealloc(void *, size_t) MAYBE_UNUSED;
+static void *dlmemalign(size_t, size_t) MAYBE_UNUSED;
+static void *dlvalloc(size_t) MAYBE_UNUSED;
+static int dlmallopt(int, int) MAYBE_UNUSED;
+static size_t dlmalloc_footprint(void) MAYBE_UNUSED;
+static size_t dlmalloc_max_footprint(void) MAYBE_UNUSED;
+static void** dlindependent_calloc(size_t, size_t, void**) MAYBE_UNUSED;
+static void** dlindependent_comalloc(size_t, size_t*, void**) MAYBE_UNUSED;
+static void *dlpvalloc(size_t) MAYBE_UNUSED;
+static int dlmalloc_trim(size_t) MAYBE_UNUSED;
+static size_t dlmalloc_usable_size(void*) MAYBE_UNUSED;
+static void dlmalloc_stats(void) MAYBE_UNUSED;
+
+/* Use these for mmap and munmap within dlmalloc.c.  */
+static void *dlmmap(void *, size_t, int, int, int, off_t);
+static int dlmunmap(void *, size_t);
+
+#define mmap dlmmap
+#define munmap dlmunmap
+
+#include "dlmalloc.c"
+
+#undef mmap
+#undef munmap
+
+/* A mutex used to synchronize access to *exec* variables in this file.  */
+static pthread_mutex_t open_temp_exec_file_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+/* A file descriptor of a temporary file from which we'll map
+   executable pages.  */
+static int execfd = -1;
+
+/* The amount of space already allocated from the temporary file.  */
+static size_t execsize = 0;
+
+/* Open a temporary file name, and immediately unlink it.  */
+static int
+open_temp_exec_file_name (char *name)
+{
+  int fd = mkstemp (name);
+
+  if (fd != -1)
+    unlink (name);
+
+  return fd;
+}
+
+/* Open a temporary file in the named directory.  */
+static int
+open_temp_exec_file_dir (const char *dir)
+{
+  static const char suffix[] = "/ffiXXXXXX";
+  int lendir = strlen (dir);
+  char *tempname = __builtin_alloca (lendir + sizeof (suffix));
+
+  if (!tempname)
+    return -1;
+
+  memcpy (tempname, dir, lendir);
+  memcpy (tempname + lendir, suffix, sizeof (suffix));
+
+  return open_temp_exec_file_name (tempname);
+}
+
+/* Open a temporary file in the directory in the named environment
+   variable.  */
+static int
+open_temp_exec_file_env (const char *envvar)
+{
+  const char *value = getenv (envvar);
+
+  if (!value)
+    return -1;
+
+  return open_temp_exec_file_dir (value);
+}
+
+/* Open a temporary file in an executable and writable mount point
+   listed in the mounts file.  Subsequent calls with the same mounts
+   keep searching for mount points in the same file.  Providing NULL
+   as the mounts file closes the file.  */
+static int
+open_temp_exec_file_mnt (const char *mounts)
+{
+  static const char *last_mounts;
+  static FILE *last_mntent;
+
+  if (mounts != last_mounts)
+    {
+      if (last_mntent)
+	endmntent (last_mntent);
+
+      last_mounts = mounts;
+
+      if (mounts)
+	last_mntent = setmntent (mounts, "r");
+      else
+	last_mntent = NULL;
+    }
+
+  if (!last_mntent)
+    return -1;
+
+  for (;;)
+    {
+      int fd;
+      struct mntent mnt;
+      char buf[MAXPATHLEN * 3];
+
+      if (getmntent_r (last_mntent, &mnt, buf, sizeof (buf)))
+	return -1;
+
+      if (hasmntopt (&mnt, "ro")
+	  || hasmntopt (&mnt, "noexec")
+	  || access (mnt.mnt_dir, W_OK))
+	continue;
+
+      fd = open_temp_exec_file_dir (mnt.mnt_dir);
+
+      if (fd != -1)
+	return fd;
+    }
+}
+
+/* Instructions to look for a location to hold a temporary file that
+   can be mapped in for execution.  */
+static struct
+{
+  int (*func)(const char *);
+  const char *arg;
+  int repeat;
+} open_temp_exec_file_opts[] = {
+  { open_temp_exec_file_env, "TMPDIR", 0 },
+  { open_temp_exec_file_dir, "/tmp", 0 },
+  { open_temp_exec_file_dir, "/var/tmp", 0 },
+  { open_temp_exec_file_dir, "/dev/shm", 0 },
+  { open_temp_exec_file_env, "HOME", 0 },
+  { open_temp_exec_file_mnt, "/etc/mtab", 1 },
+  { open_temp_exec_file_mnt, "/proc/mounts", 1 },
+};
+
+/* Current index into open_temp_exec_file_opts.  */
+static int open_temp_exec_file_opts_idx = 0;
+
+/* Reset the current multi-call func, then advance to the next entry.
+   If we're at the last one, wrap around to the first and return
+   nonzero; otherwise return zero.  */
+static int
+open_temp_exec_file_opts_next (void)
+{
+  if (open_temp_exec_file_opts[open_temp_exec_file_opts_idx].repeat)
+    open_temp_exec_file_opts[open_temp_exec_file_opts_idx].func (NULL);
+
+  open_temp_exec_file_opts_idx++;
+  if (open_temp_exec_file_opts_idx
+      == (sizeof (open_temp_exec_file_opts)
+	  / sizeof (*open_temp_exec_file_opts)))
+    {
+      open_temp_exec_file_opts_idx = 0;
+      return 1;
+    }
+
+  return 0;
+}
+
+/* Return a file descriptor of a temporary zero-sized file in a
+   writable and executable filesystem.  */
+static int
+open_temp_exec_file (void)
+{
+  int fd;
+
+  do
+    {
+      fd = open_temp_exec_file_opts[open_temp_exec_file_opts_idx].func
+	(open_temp_exec_file_opts[open_temp_exec_file_opts_idx].arg);
+
+      if (!open_temp_exec_file_opts[open_temp_exec_file_opts_idx].repeat
+	  || fd == -1)
+	{
+	  if (open_temp_exec_file_opts_next ())
+	    break;
+	}
+    }
+  while (fd == -1);
+
+  return fd;
+}
+
+/* Map in a chunk of memory from the temporary exec file into separate
+   locations in the virtual memory address space, one writable and one
+   executable.  Returns the address of the writable portion, after
+   storing an offset to the corresponding executable portion at the
+   last word of the requested chunk.  */
+static void *
+dlmmap_locked (void *start, size_t length, int prot, int flags, off_t offset)
+{
+  void *ptr;
+
+  if (execfd == -1)
+    {
+      open_temp_exec_file_opts_idx = 0;
+    retry_open:
+      execfd = open_temp_exec_file ();
+      if (execfd == -1)
+	return MFAIL;
+    }
+
+  offset = execsize;
+
+  if (ftruncate (execfd, offset + length))
+    return MFAIL;
+
+  flags &= ~(MAP_PRIVATE | MAP_ANONYMOUS);
+  flags |= MAP_SHARED;
+
+  ptr = mmap (NULL, length, (prot & ~PROT_WRITE) | PROT_EXEC,
+	      flags, execfd, offset);
+  if (ptr == MFAIL)
+    {
+      if (!offset)
+	{
+	  close (execfd);
+	  goto retry_open;
+	}
+      ftruncate (execfd, offset);
+      return MFAIL;
+    }
+  else if (!offset
+	   && open_temp_exec_file_opts[open_temp_exec_file_opts_idx].repeat)
+    open_temp_exec_file_opts_next ();
+
+  start = mmap (start, length, prot, flags, execfd, offset);
+
+  if (start == MFAIL)
+    {
+      munmap (ptr, length);
+      ftruncate (execfd, offset);
+      return start;
+    }
+
+  mmap_exec_offset ((char *)start, length) = (char*)ptr - (char*)start;
+
+  execsize += length;
+
+  return start;
+}
+
+/* Map in a writable and executable chunk of memory if possible.
+   Failing that, fall back to dlmmap_locked.  */
+static void *
+dlmmap (void *start, size_t length, int prot,
+	int flags, int fd, off_t offset)
+{
+  void *ptr;
+
+  assert (start == NULL && length % malloc_getpagesize == 0
+	  && prot == (PROT_READ | PROT_WRITE)
+	  && flags == (MAP_PRIVATE | MAP_ANONYMOUS)
+	  && fd == -1 && offset == 0);
+
+#if FFI_CLOSURE_TEST
+  printf ("mapping in %zi\n", length);
+#endif
+
+  if (execfd == -1)
+    {
+      ptr = mmap (start, length, prot | PROT_EXEC, flags, fd, offset);
+
+      if (ptr != MFAIL || (errno != EPERM && errno != EACCES))
+	/* Cool, no need to mess with separate segments.  */
+	return ptr;
+
+      /* If MREMAP_DUP is ever introduced and implemented, try mmap
+	 with ((prot & ~PROT_WRITE) | PROT_EXEC) and mremap with
+	 MREMAP_DUP and prot at this point.  */
+    }
+
+  if (execsize == 0 || execfd == -1)
+    {
+      pthread_mutex_lock (&open_temp_exec_file_mutex);
+      ptr = dlmmap_locked (start, length, prot, flags, offset);
+      pthread_mutex_unlock (&open_temp_exec_file_mutex);
+
+      return ptr;
+    }
+
+  return dlmmap_locked (start, length, prot, flags, offset);
+}
+
+/* Release memory at the given address, as well as the corresponding
+   executable page if it's separate.  */
+static int
+dlmunmap (void *start, size_t length)
+{
+  /* We don't bother decreasing execsize or truncating the file, since
+     we can't quite tell whether we're unmapping the end of the file.
+     We don't expect frequent deallocation anyway.  If we did, we
+     could locate pages in the file by writing to the pages being
+     deallocated and checking that the file contents change.
+     Yuck.  */
+  msegmentptr seg = segment_holding (gm, start);
+  void *code;
+
+#if FFI_CLOSURE_TEST
+  printf ("unmapping %zi\n", length);
+#endif
+
+  if (seg && (code = add_segment_exec_offset (start, seg)) != start)
+    {
+      int ret = munmap (code, length);
+      if (ret)
+	return ret;
+    }
+
+  return munmap (start, length);
+}
+
+#if FFI_CLOSURE_FREE_CODE
+/* Return segment holding given code address.  */
+static msegmentptr
+segment_holding_code (mstate m, char* addr)
+{
+  msegmentptr sp = &m->seg;
+  for (;;) {
+    if (addr >= add_segment_exec_offset (sp->base, sp)
+	&& addr < add_segment_exec_offset (sp->base, sp) + sp->size)
+      return sp;
+    if ((sp = sp->next) == 0)
+      return 0;
+  }
+}
+#endif
+
+/* Allocate a chunk of memory with the given size.  Returns a pointer
+   to the writable address, and sets *CODE to the executable
+   corresponding virtual address.  */
+void *
+ffi_closure_alloc (size_t size, void **code)
+{
+  void *ptr;
+
+  if (!code)
+    return NULL;
+
+  ptr = dlmalloc (size);
+
+  if (ptr)
+    {
+      msegmentptr seg = segment_holding (gm, ptr);
+
+      *code = add_segment_exec_offset (ptr, seg);
+    }
+
+  return ptr;
+}
+
+/* Release a chunk of memory allocated with ffi_closure_alloc.  If
+   FFI_CLOSURE_FREE_CODE is nonzero, either the writable or the
+   executable address returned by ffi_closure_alloc may be passed
+   here.  Otherwise, only the writable address is accepted.  */
+void
+ffi_closure_free (void *ptr)
+{
+#if FFI_CLOSURE_FREE_CODE
+  msegmentptr seg = segment_holding_code (gm, ptr);
+
+  if (seg)
+    ptr = sub_segment_exec_offset (ptr, seg);
+#endif
+
+  dlfree (ptr);
+}
+
+
+#if FFI_CLOSURE_TEST
+/* Do some internal sanity testing to make sure allocation and
+   deallocation of pages are working as intended.  */
+int main ()
+{
+  void *p[3];
+#define GET(idx, len) do { p[idx] = dlmalloc (len); printf ("allocated %zi for p[%i]\n", (len), (idx)); } while (0)
+#define PUT(idx) do { printf ("freeing p[%i]\n", (idx)); dlfree (p[idx]); } while (0)
+  GET (0, malloc_getpagesize / 2);
+  GET (1, 2 * malloc_getpagesize - 64 * sizeof (void*));
+  PUT (1);
+  GET (1, 2 * malloc_getpagesize);
+  GET (2, malloc_getpagesize / 2);
+  PUT (1);
+  PUT (0);
+  PUT (2);
+  return 0;
+}
+#endif /* FFI_CLOSURE_TEST */
+# else /* ! FFI_MMAP_EXEC_WRIT */
+
+/* On many systems, memory returned by malloc is writable and
+   executable, so just use it.  */
+
+#include <stdlib.h>
+
+void *
+ffi_closure_alloc (size_t size, void **code)
+{
+  if (!code)
+    return NULL;
+
+  return *code = malloc (size);
+}
+
+void
+ffi_closure_free (void *ptr)
+{
+  free (ptr);
+}
+
+# endif /* ! FFI_MMAP_EXEC_WRIT */
+#endif /* FFI_CLOSURES */
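
For reference, dlmmap_locked above relies on mapping the same
temporary-file pages twice, once writable and once executable.  Here
is a standalone sketch of that principle; it is editorial, not part of
the patch, and the /tmp path and sizes are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int
main (void)
{
  size_t len = sysconf (_SC_PAGESIZE);
  char tmpl[] = "/tmp/ffiXXXXXX";
  int fd = mkstemp (tmpl);
  if (fd == -1 || unlink (tmpl) || ftruncate (fd, len))
    return 1;

  /* Two views of the same file pages: code is written through one
     view and executed through the other, so no single mapping is
     ever both writable and executable.  */
  char *wr = mmap (NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  char *ex = mmap (NULL, len, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
  if (wr == MAP_FAILED || ex == MAP_FAILED)
    return 1;

  /* dlmmap_locked stores this offset at the end of each chunk.  */
  printf ("writable view %p, executable view %p, offset %td\n",
          (void *) wr, (void *) ex, ex - wr);
  return 0;
}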
Index: libffi/src/dlmalloc.c
===================================================================
--- libffi/src/dlmalloc.c.orig	2007-02-16 21:58:54.000000000 -0200
+++ libffi/src/dlmalloc.c	2007-02-16 21:58:54.000000000 -0200
@@ -1879,11 +1879,47 @@ struct malloc_segment {
   char*        base;             /* base address */
   size_t       size;             /* allocated size */
   struct malloc_segment* next;   /* ptr to next segment */
+#if FFI_MMAP_EXEC_WRIT
+  /* The mmap magic is supposed to store the offset to the executable
+     segment at the very end of the requested block.  */
+
+# define mmap_exec_offset(b,s) (*(ptrdiff_t*)((b)+(s)-sizeof(ptrdiff_t)))
+
+  /* We can only merge segments if their corresponding executable
+     segments are at identical offsets.  */
+# define check_segment_merge(S,b,s) \
+  (mmap_exec_offset((b),(s)) == (S)->exec_offset)
+
+# define add_segment_exec_offset(p,S) ((char*)(p) + (S)->exec_offset)
+# define sub_segment_exec_offset(p,S) ((char*)(p) - (S)->exec_offset)
+
+  /* The removal of sflags only works with HAVE_MORECORE == 0.  */
+
+# define get_segment_flags(S)   (IS_MMAPPED_BIT)
+# define set_segment_flags(S,v) \
+  (((v) != IS_MMAPPED_BIT) ? (ABORT, (v)) :				\
+   (((S)->exec_offset =							\
+     mmap_exec_offset((S)->base, (S)->size)),				\
+    (mmap_exec_offset((S)->base + (S)->exec_offset, (S)->size) !=	\
+     (S)->exec_offset) ? (ABORT, (v)) :					\
+   (mmap_exec_offset((S)->base, (S)->size) = 0), (v)))
+
+  /* We use an offset here, instead of a pointer, because then, when
+     base changes, we don't have to modify this.  On architectures
+     with segmented addresses, this might not work.  */
+  ptrdiff_t    exec_offset;
+#else
+
+# define get_segment_flags(S)   ((S)->sflags)
+# define set_segment_flags(S,v) ((S)->sflags = (v))
+# define check_segment_merge(S,b,s) (1)
+
   flag_t       sflags;           /* mmap and extern flag */
+#endif
 };
 
-#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
-#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
+#define is_mmapped_segment(S)  (get_segment_flags(S) & IS_MMAPPED_BIT)
+#define is_extern_segment(S)   (get_segment_flags(S) & EXTERN_BIT)
 
 typedef struct malloc_segment  msegment;
 typedef struct malloc_segment* msegmentptr;
@@ -3290,7 +3326,7 @@ static void add_segment(mstate m, char* 
   *ss = m->seg; /* Push current record */
   m->seg.base = tbase;
   m->seg.size = tsize;
-  m->seg.sflags = mmapped;
+  set_segment_flags(&m->seg, mmapped);
   m->seg.next = ss;
 
   /* Insert trailing fenceposts */
@@ -3393,7 +3429,7 @@ static void* sys_alloc(mstate m, size_t 
             if (end != CMFAIL)
               asize += esize;
             else {            /* Can't use; try to release */
-              CALL_MORECORE(-asize);
+              (void)CALL_MORECORE(-asize);
               br = CMFAIL;
             }
           }
@@ -3450,7 +3486,7 @@ static void* sys_alloc(mstate m, size_t 
     if (!is_initialized(m)) { /* first-time initialization */
       m->seg.base = m->least_addr = tbase;
       m->seg.size = tsize;
-      m->seg.sflags = mmap_flag;
+      set_segment_flags(&m->seg, mmap_flag);
       m->magic = mparams.magic;
       init_bins(m);
       if (is_global(m)) 
@@ -3469,7 +3505,8 @@ static void* sys_alloc(mstate m, size_t 
         sp = sp->next;
       if (sp != 0 &&
           !is_extern_segment(sp) &&
-          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
+	  check_segment_merge(sp, tbase, tsize) &&
+          (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag &&
           segment_holds(sp, m->top)) { /* append */
         sp->size += tsize;
         init_top(m, m->top, m->topsize + tsize);
@@ -3482,7 +3519,8 @@ static void* sys_alloc(mstate m, size_t 
           sp = sp->next;
         if (sp != 0 &&
             !is_extern_segment(sp) &&
-            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
+	    check_segment_merge(sp, tbase, tsize) &&
+            (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag) {
           char* oldbase = sp->base;
           sp->base = tbase;
           sp->size += tsize;
@@ -4397,7 +4435,7 @@ mspace create_mspace(size_t capacity, in
     char* tbase = (char*)(CALL_MMAP(tsize));
     if (tbase != CMFAIL) {
       m = init_user_mstate(tbase, tsize);
-      m->seg.sflags = IS_MMAPPED_BIT;
+      set_segment_flags(&m->seg, IS_MMAPPED_BIT);
       set_lock(m, locked);
     }
   }
@@ -4412,7 +4450,7 @@ mspace create_mspace_with_base(void* bas
   if (capacity > msize + TOP_FOOT_SIZE &&
       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
     m = init_user_mstate((char*)base, capacity);
-    m->seg.sflags = EXTERN_BIT;
+    set_segment_flags(&m->seg, EXTERN_BIT);
     set_lock(m, locked);
   }
   return (mspace)m;
@@ -4426,7 +4464,7 @@ size_t destroy_mspace(mspace msp) {
     while (sp != 0) {
       char* base = sp->base;
       size_t size = sp->size;
-      flag_t flag = sp->sflags;
+      flag_t flag = get_segment_flags(sp);
       sp = sp->next;
       if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
           CALL_MUNMAP(base, size) == 0)
Index: libffi/Makefile.am
===================================================================
--- libffi/Makefile.am.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/Makefile.am	2007-02-16 21:58:54.000000000 -0200
@@ -78,7 +78,7 @@ toolexeclib_LTLIBRARIES = libffi.la
 noinst_LTLIBRARIES = libffi_convenience.la
 
 libffi_la_SOURCES = src/debug.c src/prep_cif.c src/types.c \
-		src/raw_api.c src/java_raw_api.c
+		src/raw_api.c src/java_raw_api.c src/closures.c
 
 nodist_libffi_la_SOURCES =
 
Index: libffi/Makefile.in
===================================================================
--- libffi/Makefile.in.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/Makefile.in	2007-02-16 21:58:54.000000000 -0200
@@ -92,7 +92,7 @@ LTLIBRARIES = $(noinst_LTLIBRARIES) $(to
 libffi_la_LIBADD =
 am__dirstamp = $(am__leading_dot)dirstamp
 am_libffi_la_OBJECTS = src/debug.lo src/prep_cif.lo src/types.lo \
-	src/raw_api.lo src/java_raw_api.lo
+	src/raw_api.lo src/java_raw_api.lo src/closures.lo
 @MIPS_IRIX_TRUE@am__objects_1 = src/mips/ffi.lo src/mips/o32.lo \
 @MIPS_IRIX_TRUE@	src/mips/n32.lo
 @MIPS_LINUX_TRUE@am__objects_2 = src/mips/ffi.lo src/mips/o32.lo
@@ -141,7 +141,7 @@ libffi_la_OBJECTS = $(am_libffi_la_OBJEC
 	$(nodist_libffi_la_OBJECTS)
 libffi_convenience_la_LIBADD =
 am__objects_24 = src/debug.lo src/prep_cif.lo src/types.lo \
-	src/raw_api.lo src/java_raw_api.lo
+	src/raw_api.lo src/java_raw_api.lo src/closures.lo
 am_libffi_convenience_la_OBJECTS = $(am__objects_24)
 am__objects_25 = $(am__objects_1) $(am__objects_2) $(am__objects_3) \
 	$(am__objects_4) $(am__objects_5) $(am__objects_6) \
@@ -416,7 +416,7 @@ MAKEOVERRIDES = 
 toolexeclib_LTLIBRARIES = libffi.la
 noinst_LTLIBRARIES = libffi_convenience.la
 libffi_la_SOURCES = src/debug.c src/prep_cif.c src/types.c \
-		src/raw_api.c src/java_raw_api.c
+		src/raw_api.c src/java_raw_api.c src/closures.c
 
 nodist_libffi_la_SOURCES = $(am__append_1) $(am__append_2) \
 	$(am__append_3) $(am__append_4) $(am__append_5) \
@@ -534,6 +534,7 @@ src/prep_cif.lo: src/$(am__dirstamp) src
 src/types.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
 src/raw_api.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
 src/java_raw_api.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
+src/closures.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp)
 src/mips/$(am__dirstamp):
 	@$(mkdir_p) src/mips
 	@: > src/mips/$(am__dirstamp)
@@ -729,6 +730,8 @@ mostlyclean-compile:
 	-rm -f src/arm/ffi.lo
 	-rm -f src/arm/sysv.$(OBJEXT)
 	-rm -f src/arm/sysv.lo
+	-rm -f src/closures.$(OBJEXT)
+	-rm -f src/closures.lo
 	-rm -f src/cris/ffi.$(OBJEXT)
 	-rm -f src/cris/ffi.lo
 	-rm -f src/cris/sysv.$(OBJEXT)
@@ -827,6 +830,7 @@ mostlyclean-compile:
 distclean-compile:
 	-rm -f *.tab.c
 
+@AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/closures.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/debug.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/java_raw_api.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/prep_cif.Plo@am__quote@
Index: libffi/src/prep_cif.c
===================================================================
--- libffi/src/prep_cif.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/prep_cif.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   prep_cif.c - Copyright (c) 1996, 1998  Red Hat, Inc.
+   prep_cif.c - Copyright (c) 1996, 1998, 2007  Red Hat, Inc.
 
    Permission is hereby granted, free of charge, to any person obtaining
    a copy of this software and associated documentation files (the
@@ -158,3 +158,16 @@ ffi_status ffi_prep_cif(ffi_cif *cif, ff
   return ffi_prep_cif_machdep(cif);
 }
 #endif /* not __CRIS__ */
+
+#if FFI_CLOSURES
+
+ffi_status
+ffi_prep_closure (ffi_closure* closure,
+		  ffi_cif* cif,
+		  void (*fun)(ffi_cif*,void*,void**,void*),
+		  void *user_data)
+{
+  return ffi_prep_closure_loc (closure, cif, fun, user_data, closure);
+}
+
+#endif
Index: libffi/src/raw_api.c
===================================================================
--- libffi/src/raw_api.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/raw_api.c	2007-02-16 21:58:54.000000000 -0200
@@ -209,22 +209,20 @@ ffi_translate_args (ffi_cif *cif, void *
   (*cl->fun) (cif, rvalue, raw, cl->user_data);
 }
 
-/* Again, here is the generic version of ffi_prep_raw_closure, which
- * will install an intermediate "hub" for translation of arguments from
- * the pointer-array format, to the raw format */
-
 ffi_status
-ffi_prep_raw_closure (ffi_raw_closure* cl,
-		      ffi_cif *cif,
-		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
-		      void *user_data)
+ffi_prep_raw_closure_loc (ffi_raw_closure* cl,
+			  ffi_cif *cif,
+			  void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			  void *user_data,
+			  void *codeloc)
 {
   ffi_status status;
 
-  status = ffi_prep_closure ((ffi_closure*) cl,
-			     cif,
-			     &ffi_translate_args,
-			     (void*)cl);
+  status = ffi_prep_closure_loc ((ffi_closure*) cl,
+				 cif,
+				 &ffi_translate_args,
+				 codeloc,
+				 codeloc);
   if (status == FFI_OK)
     {
       cl->fun       = fun;
@@ -236,4 +234,22 @@ ffi_prep_raw_closure (ffi_raw_closure* c
 
 #endif /* FFI_CLOSURES */
 #endif /* !FFI_NATIVE_RAW_API */
+
+#if FFI_CLOSURES
+
+/* Again, here is the generic version of ffi_prep_raw_closure, which
+ * will install an intermediate "hub" for translation of arguments from
+ * the pointer-array format, to the raw format */
+
+ffi_status
+ffi_prep_raw_closure (ffi_raw_closure* cl,
+		      ffi_cif *cif,
+		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+		      void *user_data)
+{
+  return ffi_prep_raw_closure_loc (cl, cif, fun, user_data, cl);
+}
+
+#endif /* FFI_CLOSURES */
+
 #endif /* !FFI_NO_RAW_API */
Index: libffi/src/java_raw_api.c
===================================================================
--- libffi/src/java_raw_api.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/java_raw_api.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   java_raw_api.c - Copyright (c) 1999  Red Hat, Inc.
+   java_raw_api.c - Copyright (c) 1999, 2007  Red Hat, Inc.
 
    Cloned from raw_api.c
 
@@ -307,22 +307,20 @@ ffi_java_translate_args (ffi_cif *cif, v
   ffi_java_raw_to_rvalue (cif, rvalue);
 }
 
-/* Again, here is the generic version of ffi_prep_raw_closure, which
- * will install an intermediate "hub" for translation of arguments from
- * the pointer-array format, to the raw format */
-
 ffi_status
-ffi_prep_java_raw_closure (ffi_raw_closure* cl,
-		      ffi_cif *cif,
-		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
-		      void *user_data)
+ffi_prep_java_raw_closure_loc (ffi_raw_closure* cl,
+			       ffi_cif *cif,
+			       void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			       void *user_data,
+			       void *codeloc)
 {
   ffi_status status;
 
-  status = ffi_prep_closure ((ffi_closure*) cl,
-			     cif,
-			     &ffi_java_translate_args,
-			     (void*)cl);
+  status = ffi_prep_closure_loc ((ffi_closure*) cl,
+				 cif,
+				 &ffi_java_translate_args,
+				 codeloc,
+				 codeloc);
   if (status == FFI_OK)
     {
       cl->fun       = fun;
@@ -332,6 +330,19 @@ ffi_prep_java_raw_closure (ffi_raw_closu
   return status;
 }
 
+/* Again, here is the generic version of ffi_prep_raw_closure, which
+ * will install an intermediate "hub" for translation of arguments from
+ * the pointer-array format, to the raw format */
+
+ffi_status
+ffi_prep_java_raw_closure (ffi_raw_closure* cl,
+			   ffi_cif *cif,
+			   void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			   void *user_data)
+{
+  return ffi_prep_java_raw_closure_loc (cl, cif, fun, user_data, cl);
+}
+
 #endif /* FFI_CLOSURES */
 #endif /* !FFI_NATIVE_RAW_API */
 #endif /* !FFI_NO_RAW_API */
Index: libffi/src/alpha/ffi.c
===================================================================
--- libffi/src/alpha/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/alpha/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1998, 2001 Red Hat, Inc.
+   ffi.c - Copyright (c) 1998, 2001, 2007 Red Hat, Inc.
    
    Alpha Foreign Function Interface 
 
@@ -146,10 +146,11 @@ ffi_call(ffi_cif *cif, void (*fn)(), voi
 
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
 
Index: libffi/src/cris/ffi.c
===================================================================
--- libffi/src/cris/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/cris/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -2,6 +2,7 @@
    ffi.c - Copyright (c) 1998 Cygnus Solutions
            Copyright (c) 2004 Simon Posnjak
 	   Copyright (c) 2005 Axis Communications AB
+	   Copyright (C) 2007 Free Software Foundation, Inc.
 
    CRIS Foreign Function Interface
 
@@ -360,10 +361,11 @@ ffi_prep_closure_inner (void **params, f
 /* API function: Prepare the trampoline.  */
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif *, void *, void **, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif *, void *, void **, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   void *innerfn = ffi_prep_closure_inner;
   FFI_ASSERT (cif->abi == FFI_SYSV);
@@ -375,7 +377,7 @@ ffi_prep_closure (ffi_closure* closure,
   memcpy (closure->tramp + ffi_cris_trampoline_fn_offset,
 	  &innerfn, sizeof (void *));
   memcpy (closure->tramp + ffi_cris_trampoline_closure_offset,
-	  &closure, sizeof (void *));
+	  &codeloc, sizeof (void *));
 
   return FFI_OK;
 }
Index: libffi/src/frv/ffi.c
===================================================================
--- libffi/src/frv/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/frv/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,6 @@
 /* -----------------------------------------------------------------------
    ffi.c - Copyright (c) 2004  Anthony Green
+   Copyright (C) 2007  Free Software Foundation, Inc.
    
    FR-V Foreign Function Interface 
 
@@ -243,14 +244,15 @@ void ffi_closure_eabi (unsigned arg1, un
 }
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp = (unsigned int *) &closure->tramp[0];
   unsigned long fn = (long) ffi_closure_eabi;
-  unsigned long cls = (long) closure;
+  unsigned long cls = (long) codeloc;
 #ifdef __FRV_FDPIC__
   register void *got __asm__("gr15");
 #endif
@@ -259,7 +261,7 @@ ffi_prep_closure (ffi_closure* closure,
   fn = (unsigned long) ffi_closure_eabi;
 
 #ifdef __FRV_FDPIC__
-  tramp[0] = &tramp[2];
+  tramp[0] = &((unsigned int *)codeloc)[2];
   tramp[1] = got;
   tramp[2] = 0x8cfc0000 + (fn  & 0xffff); /* setlos lo(fn), gr6    */
   tramp[3] = 0x8efc0000 + (cls & 0xffff); /* setlos lo(cls), gr7   */
@@ -281,7 +283,8 @@ ffi_prep_closure (ffi_closure* closure,
 
   /* Cache flushing.  */
   for (i = 0; i < FFI_TRAMPOLINE_SIZE; i++)
-    __asm__ volatile ("dcf @(%0,%1)\n\tici @(%0,%1)" :: "r" (tramp), "r" (i));
+    __asm__ volatile ("dcf @(%0,%1)\n\tici @(%2,%1)" :: "r" (tramp), "r" (i),
+		      "r" (codeloc));
 
   return FFI_OK;
 }
Index: libffi/src/ia64/ffi.c
===================================================================
--- libffi/src/ia64/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/ia64/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1998 Red Hat, Inc.
+   ffi.c - Copyright (c) 1998, 2007 Red Hat, Inc.
 	   Copyright (c) 2000 Hewlett Packard Company
    
    IA64 Foreign Function Interface 
@@ -400,10 +400,11 @@ ffi_call(ffi_cif *cif, void (*fn)(), voi
 extern void ffi_closure_unix ();
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   /* The layout of a function descriptor.  A C function pointer really 
      points to one of these.  */
@@ -430,7 +431,7 @@ ffi_prep_closure (ffi_closure* closure,
 
   tramp->code_pointer = fd->code_pointer;
   tramp->real_gp = fd->gp;
-  tramp->fake_gp = (UINT64)(PTR64)closure;
+  tramp->fake_gp = (UINT64)(PTR64)codeloc;
   closure->cif = cif;
   closure->user_data = user_data;
   closure->fun = fun;
Index: libffi/src/mips/ffi.c
===================================================================
--- libffi/src/mips/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/mips/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1996 Red Hat, Inc.
+   ffi.c - Copyright (c) 1996, 2007 Red Hat, Inc.
    
    MIPS Foreign Function Interface 
 
@@ -497,14 +497,15 @@ extern void ffi_closure_O32(void);
 #endif /* FFI_MIPS_O32 */
 
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-		  ffi_cif *cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp = (unsigned int *) &closure->tramp[0];
   unsigned int fn;
-  unsigned int ctx = (unsigned int) closure;
+  unsigned int ctx = (unsigned int) codeloc;
 
 #if defined(FFI_MIPS_O32)
   FFI_ASSERT(cif->abi == FFI_O32 || cif->abi == FFI_O32_SOFT_FLOAT);
@@ -525,7 +526,7 @@ ffi_prep_closure (ffi_closure *closure,
   closure->user_data = user_data;
 
   /* XXX this is available on Linux, but anything else? */
-  cacheflush (tramp, FFI_TRAMPOLINE_SIZE, ICACHE);
+  cacheflush (codeloc, FFI_TRAMPOLINE_SIZE, ICACHE);
 
   return FFI_OK;
 }
Index: libffi/src/pa/ffi.c
===================================================================
--- libffi/src/pa/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/pa/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -613,10 +613,11 @@ ffi_status ffi_closure_inner_pa32(ffi_cl
 extern void ffi_closure_pa32(void);
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   UINT32 *tramp = (UINT32 *)(closure->tramp);
 #ifdef PA_HPUX
Index: libffi/src/powerpc/ffi.c
===================================================================
--- libffi/src/powerpc/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/powerpc/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,6 @@
 /* -----------------------------------------------------------------------
    ffi.c - Copyright (c) 1998 Geoffrey Keating
+   Copyright (C) 2007 Free Software Foundation, Inc
 
    PowerPC Foreign Function Interface
 
@@ -834,27 +835,26 @@ ffi_call(ffi_cif *cif, void (*fn)(), voi
 #define MIN_CACHE_LINE_SIZE 8
 
 static void
-flush_icache (char *addr1, int size)
+flush_icache (char *wraddr, char *xaddr, int size)
 {
   int i;
-  char * addr;
   for (i = 0; i < size; i += MIN_CACHE_LINE_SIZE)
     {
-      addr = addr1 + i;
-      __asm__ volatile ("icbi 0,%0;" "dcbf 0,%0;"
-			: : "r" (addr) : "memory");
+      __asm__ volatile ("icbi 0,%0;" "dcbf 0,%1;"
+			: : "r" (xaddr + i), "r" (wraddr + i) : "memory");
     }
-  addr = addr1 + size - 1;
-  __asm__ volatile ("icbi 0,%0;" "dcbf 0,%0;" "sync;" "isync;"
-		    : : "r"(addr) : "memory");
+  __asm__ volatile ("icbi 0,%0;" "dcbf 0,%1;" "sync;" "isync;"
+		    : : "r"(xaddr + size - 1), "r"(wraddr + size - 1)
+		    : "memory");
 }
 #endif
 
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-		  ffi_cif *cif,
-		  void (*fun) (ffi_cif *, void *, void **, void *),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun) (ffi_cif *, void *, void **, void *),
+		      void *user_data,
+		      void *codeloc)
 {
 #ifdef POWERPC64
   void **tramp = (void **) &closure->tramp[0];
@@ -862,7 +862,7 @@ ffi_prep_closure (ffi_closure *closure,
   FFI_ASSERT (cif->abi == FFI_LINUX64);
   /* Copy function address and TOC from ffi_closure_LINUX64.  */
   memcpy (tramp, (char *) ffi_closure_LINUX64, 16);
-  tramp[2] = (void *) closure;
+  tramp[2] = (void *) codeloc;
 #else
   unsigned int *tramp;
 
@@ -878,10 +878,10 @@ ffi_prep_closure (ffi_closure *closure,
   tramp[8] = 0x7c0903a6;  /*   mtctr   r0 */
   tramp[9] = 0x4e800420;  /*   bctr */
   *(void **) &tramp[2] = (void *) ffi_closure_SYSV; /* function */
-  *(void **) &tramp[3] = (void *) closure;          /* context */
+  *(void **) &tramp[3] = (void *) codeloc;          /* context */
 
   /* Flush the icache.  */
-  flush_icache (&closure->tramp[0],FFI_TRAMPOLINE_SIZE);
+  flush_icache (tramp, codeloc, FFI_TRAMPOLINE_SIZE);
 #endif
 
   closure->cif = cif;
Index: libffi/src/powerpc/ffi_darwin.c
===================================================================
--- libffi/src/powerpc/ffi_darwin.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/powerpc/ffi_darwin.c	2007-02-16 21:58:54.000000000 -0200
@@ -3,7 +3,7 @@
 
    Copyright (C) 1998 Geoffrey Keating
    Copyright (C) 2001 John Hornkvist
-   Copyright (C) 2002, 2006 Free Software Foundation, Inc.
+   Copyright (C) 2002, 2006, 2007 Free Software Foundation, Inc.
 
    FFI support for Darwin and AIX.
    
@@ -528,10 +528,11 @@ SP current -->    +---------------------
 
 */
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
   struct ffi_aix_trampoline_struct *tramp_aix;
@@ -553,14 +554,14 @@ ffi_prep_closure (ffi_closure* closure,
       tramp[8] = 0x816b0004;  /*   lwz     r11,4(r11) static chain  */
       tramp[9] = 0x4e800420;  /*   bctr  */
       tramp[2] = (unsigned long) ffi_closure_ASM; /* function  */
-      tramp[3] = (unsigned long) closure; /* context  */
+      tramp[3] = (unsigned long) codeloc; /* context  */
 
       closure->cif = cif;
       closure->fun = fun;
       closure->user_data = user_data;
 
       /* Flush the icache. Only necessary on Darwin.  */
-      flush_range(&closure->tramp[0],FFI_TRAMPOLINE_SIZE);
+      flush_range(codeloc, FFI_TRAMPOLINE_SIZE);
 
       break;
 
@@ -573,7 +574,7 @@ ffi_prep_closure (ffi_closure* closure,
 
       tramp_aix->code_pointer = fd->code_pointer;
       tramp_aix->toc = fd->toc;
-      tramp_aix->static_chain = closure;
+      tramp_aix->static_chain = codeloc;
       closure->cif = cif;
       closure->fun = fun;
       closure->user_data = user_data;
Index: libffi/src/s390/ffi.c
===================================================================
--- libffi/src/s390/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/s390/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2000 Software AG
+   ffi.c - Copyright (c) 2000, 2007 Software AG
  
    S390 Foreign Function Interface
  
@@ -736,17 +736,18 @@ ffi_closure_helper_SYSV (ffi_closure *cl
 
 /*====================================================================*/
 /*                                                                    */
-/* Name     - ffi_prep_closure.                                       */
+/* Name     - ffi_prep_closure_loc.                                   */
 /*                                                                    */
 /* Function - Prepare a FFI closure.                                  */
 /*                                                                    */
 /*====================================================================*/
  
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-                  ffi_cif *cif,
-                  void (*fun) (ffi_cif *, void *, void **, void *),
-                  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun) (ffi_cif *, void *, void **, void *),
+		      void *user_data,
+		      void *codeloc)
 {
   FFI_ASSERT (cif->abi == FFI_SYSV);
 
@@ -755,7 +756,7 @@ ffi_prep_closure (ffi_closure *closure,
   *(short *)&closure->tramp [2] = 0x9801;   /* lm %r0,%r1,6(%r1) */
   *(short *)&closure->tramp [4] = 0x1006;
   *(short *)&closure->tramp [6] = 0x07f1;   /* br %r1 */
-  *(long  *)&closure->tramp [8] = (long)closure;
+  *(long  *)&closure->tramp [8] = (long)codeloc;
   *(long  *)&closure->tramp[12] = (long)&ffi_closure_SYSV;
 #else
   *(short *)&closure->tramp [0] = 0x0d10;   /* basr %r1,0 */
@@ -763,7 +764,7 @@ ffi_prep_closure (ffi_closure *closure,
   *(short *)&closure->tramp [4] = 0x100e;
   *(short *)&closure->tramp [6] = 0x0004;
   *(short *)&closure->tramp [8] = 0x07f1;   /* br %r1 */
-  *(long  *)&closure->tramp[16] = (long)closure;
+  *(long  *)&closure->tramp[16] = (long)codeloc;
   *(long  *)&closure->tramp[24] = (long)&ffi_closure_SYSV;
 #endif 
  
Index: libffi/src/sh/ffi.c
===================================================================
--- libffi/src/sh/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/sh/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2002, 2003, 2004, 2005, 2006 Kaz Kojima
+   ffi.c - Copyright (c) 2002, 2003, 2004, 2005, 2006, 2007 Kaz Kojima
    
    SuperH Foreign Function Interface 
 
@@ -452,10 +452,11 @@ extern void __ic_invalidate (void *line)
 #endif
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
   unsigned short insn;
@@ -475,7 +476,7 @@ ffi_prep_closure (ffi_closure* closure,
   tramp[0] = 0xd102d301;
   tramp[1] = 0x412b0000 | insn;
 #endif
-  *(void **) &tramp[2] = (void *)closure;          /* ctx */
+  *(void **) &tramp[2] = (void *)codeloc;          /* ctx */
   *(void **) &tramp[3] = (void *)ffi_closure_SYSV; /* funaddr */
 
   closure->cif = cif;
@@ -484,7 +485,7 @@ ffi_prep_closure (ffi_closure* closure,
 
 #if defined(__SH4__)
   /* Flush the icache.  */
-  __ic_invalidate(&closure->tramp[0]);
+  __ic_invalidate(codeloc);
 #endif
 
   return FFI_OK;
Index: libffi/src/sh64/ffi.c
===================================================================
--- libffi/src/sh64/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/sh64/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2003, 2004, 2006 Kaz Kojima
+   ffi.c - Copyright (c) 2003, 2004, 2006, 2007 Kaz Kojima
    
    SuperH SHmedia Foreign Function Interface 
 
@@ -283,10 +283,11 @@ extern void ffi_closure_SYSV (void);
 extern void __ic_invalidate (void *line);
 
 ffi_status
-ffi_prep_closure (ffi_closure *closure,
-		  ffi_cif *cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure *closure,
+		      ffi_cif *cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp;
 
@@ -310,8 +311,8 @@ ffi_prep_closure (ffi_closure *closure,
   tramp[2] = 0xcc000010 | (((UINT32) ffi_closure_SYSV) >> 16) << 10;
   tramp[3] = 0xc8000010 | (((UINT32) ffi_closure_SYSV) & 0xffff) << 10;
   tramp[4] = 0x6bf10600;
-  tramp[5] = 0xcc000010 | (((UINT32) closure) >> 16) << 10;
-  tramp[6] = 0xc8000010 | (((UINT32) closure) & 0xffff) << 10;
+  tramp[5] = 0xcc000010 | (((UINT32) codeloc) >> 16) << 10;
+  tramp[6] = 0xc8000010 | (((UINT32) codeloc) & 0xffff) << 10;
   tramp[7] = 0x4401fff0;
 
   closure->cif = cif;
@@ -319,7 +320,8 @@ ffi_prep_closure (ffi_closure *closure,
   closure->user_data = user_data;
 
   /* Flush the icache.  */
-  asm volatile ("ocbwb %0,0; synco; icbi %0,0; synci" : : "r" (tramp));
+  asm volatile ("ocbwb %0,0; synco; icbi %1,0; synci" : : "r" (tramp),
+		"r"(codeloc));
 
   return FFI_OK;
 }
Index: libffi/src/sparc/ffi.c
===================================================================
--- libffi/src/sparc/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/sparc/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1996, 2003, 2004 Red Hat, Inc.
+   ffi.c - Copyright (c) 1996, 2003, 2004, 2007 Red Hat, Inc.
    
    SPARC Foreign Function Interface 
 
@@ -425,10 +425,11 @@ extern void ffi_closure_v8(void);
 #endif
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   unsigned int *tramp = (unsigned int *) &closure->tramp[0];
   unsigned long fn;
@@ -443,7 +444,7 @@ ffi_prep_closure (ffi_closure* closure,
   tramp[3] = 0x01000000;	/* nop			*/
   *((unsigned long *) &tramp[4]) = fn;
 #else
-  unsigned long ctx = (unsigned long) closure;
+  unsigned long ctx = (unsigned long) codeloc;
   FFI_ASSERT (cif->abi == FFI_V8);
   fn = (unsigned long) ffi_closure_v8;
   tramp[0] = 0x03000000 | fn >> 10;	/* sethi %hi(fn), %g1	*/
Index: libffi/src/x86/ffi.c
===================================================================
--- libffi/src/x86/ffi.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/x86/ffi.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 1996, 1998, 1999, 2001  Red Hat, Inc.
+   ffi.c - Copyright (c) 1996, 1998, 1999, 2001, 2007  Red Hat, Inc.
            Copyright (c) 2002  Ranjit Mathew
            Copyright (c) 2002  Bo Thorsen
            Copyright (c) 2002  Roger Sayle
@@ -302,7 +302,7 @@ ffi_prep_incoming_args_SYSV(char *stack,
 ({ unsigned char *__tramp = (unsigned char*)(TRAMP); \
    unsigned int  __fun = (unsigned int)(FUN); \
    unsigned int  __ctx = (unsigned int)(CTX); \
-   unsigned int  __dis = __fun - ((unsigned int) __tramp + FFI_TRAMPOLINE_SIZE); \
+   unsigned int  __dis = __fun - (__ctx + FFI_TRAMPOLINE_SIZE); \
    *(unsigned char*) &__tramp[0] = 0xb8; \
    *(unsigned int*)  &__tramp[1] = __ctx; /* movl __ctx, %eax */ \
    *(unsigned char *)  &__tramp[5] = 0xe9; \
@@ -313,16 +313,17 @@ ffi_prep_incoming_args_SYSV(char *stack,
 /* the cif must already be prep'ed */
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*,void*,void**,void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*,void*,void**,void*),
+		      void *user_data,
+		      void *codeloc)
 {
   FFI_ASSERT (cif->abi == FFI_SYSV);
 
   FFI_INIT_TRAMPOLINE (&closure->tramp[0], \
 		       &ffi_closure_SYSV,  \
-		       (void*)closure);
+		       codeloc);
     
   closure->cif  = cif;
   closure->user_data = user_data;
@@ -336,10 +337,11 @@ ffi_prep_closure (ffi_closure* closure,
 #if !FFI_NO_RAW_API
 
 ffi_status
-ffi_prep_raw_closure (ffi_raw_closure* closure,
-		      ffi_cif* cif,
-		      void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
-		      void *user_data)
+ffi_prep_raw_closure_loc (ffi_raw_closure* closure,
+			  ffi_cif* cif,
+			  void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+			  void *user_data,
+			  void *codeloc)
 {
   int i;
 
@@ -358,7 +360,7 @@ ffi_prep_raw_closure (ffi_raw_closure* c
   
 
   FFI_INIT_TRAMPOLINE (&closure->tramp[0], &ffi_closure_raw_SYSV,
-		       (void*)closure);
+		       codeloc);
     
   closure->cif  = cif;
   closure->user_data = user_data;
Index: libffi/src/x86/ffi64.c
===================================================================
--- libffi/src/x86/ffi64.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ libffi/src/x86/ffi64.c	2007-02-16 21:58:54.000000000 -0200
@@ -1,5 +1,5 @@
 /* -----------------------------------------------------------------------
-   ffi.c - Copyright (c) 2002  Bo Thorsen <bo@suse.de>
+   ffi.c - Copyright (c) 2002, 2007  Bo Thorsen <bo@suse.de>
    
    x86-64 Foreign Function Interface 
 
@@ -433,10 +433,11 @@ ffi_call (ffi_cif *cif, void (*fn)(), vo
 extern void ffi_closure_unix64(void);
 
 ffi_status
-ffi_prep_closure (ffi_closure* closure,
-		  ffi_cif* cif,
-		  void (*fun)(ffi_cif*, void*, void**, void*),
-		  void *user_data)
+ffi_prep_closure_loc (ffi_closure* closure,
+		      ffi_cif* cif,
+		      void (*fun)(ffi_cif*, void*, void**, void*),
+		      void *user_data,
+		      void *codeloc)
 {
   volatile unsigned short *tramp;
 
@@ -445,7 +446,7 @@ ffi_prep_closure (ffi_closure* closure,
   tramp[0] = 0xbb49;		/* mov <code>, %r11	*/
   *(void * volatile *) &tramp[1] = ffi_closure_unix64;
   tramp[5] = 0xba49;		/* mov <data>, %r10	*/
-  *(void * volatile *) &tramp[6] = closure;
+  *(void * volatile *) &tramp[6] = codeloc;
 
   /* Set the carry bit iff the function uses any sse registers.
      This is clc or stc, together with the first byte of the jmp.  */
Index: boehm-gc/include/gc.h
===================================================================
--- boehm-gc/include/gc.h.orig	2007-02-16 21:56:48.000000000 -0200
+++ boehm-gc/include/gc.h	2007-02-16 21:58:55.000000000 -0200
@@ -3,6 +3,7 @@
  * Copyright (c) 1991-1995 by Xerox Corporation.  All rights reserved.
  * Copyright 1996-1999 by Silicon Graphics.  All rights reserved.
  * Copyright 1999 by Hewlett-Packard Company.  All rights reserved.
+ * Copyright (C) 2007 Free Software Foundation, Inc
  *
  * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
  * OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
@@ -602,6 +603,8 @@ GC_API GC_PTR GC_debug_realloc_replaceme
 	GC_debug_register_finalizer_ignore_self(p, f, d, of, od)
 #   define GC_REGISTER_FINALIZER_NO_ORDER(p, f, d, of, od) \
 	GC_debug_register_finalizer_no_order(p, f, d, of, od)
+#   define GC_REGISTER_FINALIZER_UNREACHABLE(p, f, d, of, od) \
+	GC_debug_register_finalizer_unreachable(p, f, d, of, od)
 #   define GC_MALLOC_STUBBORN(sz) GC_debug_malloc_stubborn(sz, GC_EXTRAS);
 #   define GC_CHANGE_STUBBORN(p) GC_debug_change_stubborn(p)
 #   define GC_END_STUBBORN_CHANGE(p) GC_debug_end_stubborn_change(p)
@@ -624,6 +627,8 @@ GC_API GC_PTR GC_debug_realloc_replaceme
 	GC_register_finalizer_ignore_self(p, f, d, of, od)
 #   define GC_REGISTER_FINALIZER_NO_ORDER(p, f, d, of, od) \
 	GC_register_finalizer_no_order(p, f, d, of, od)
+#   define GC_REGISTER_FINALIZER_UNREACHABLE(p, f, d, of, od) \
+	GC_register_finalizer_unreachable(p, f, d, of, od)
 #   define GC_MALLOC_STUBBORN(sz) GC_malloc_stubborn(sz)
 #   define GC_CHANGE_STUBBORN(p) GC_change_stubborn(p)
 #   define GC_END_STUBBORN_CHANGE(p) GC_end_stubborn_change(p)
@@ -716,6 +721,19 @@ GC_API void GC_debug_register_finalizer_
 	GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
 		  GC_finalization_proc *ofn, GC_PTR *ocd));
 
+/* This is a special finalizer that is useful when an object's  */
+/* finalizer must be run when the object is known to be no      */
+/* longer reachable, not even from other finalizable objects.   */
+/* This can be used in combination with finalizer_no_order so   */
+/* as to release resources that must not be released while an   */
+/* object can still be brought back to life by other            */
+/* finalizers.                                                  */
+GC_API void GC_register_finalizer_unreachable
+	GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
+		  GC_finalization_proc *ofn, GC_PTR *ocd));
+GC_API void GC_debug_register_finalizer_unreachable
+	GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
+		  GC_finalization_proc *ofn, GC_PTR *ocd));
 
 /* The following routine may be used to break cycles between	*/
 /* finalizable objects, thus causing cyclic finalizable		*/
Index: boehm-gc/finalize.c
===================================================================
--- boehm-gc/finalize.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ boehm-gc/finalize.c	2007-02-16 21:58:55.000000000 -0200
@@ -2,6 +2,7 @@
  * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
  * Copyright (c) 1991-1996 by Xerox Corporation.  All rights reserved.
  * Copyright (c) 1996-1999 by Silicon Graphics.  All rights reserved.
+ * Copyright (C) 2007 Free Software Foundation, Inc
 
  * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
  * OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
@@ -315,6 +316,14 @@ ptr_t p;
 {
 }
 
+/* Possible finalization_marker procedures.  Note that mark stack	*/
+/* overflow is handled by the caller, and is not a disaster.		*/
+GC_API void GC_unreachable_finalize_mark_proc(p)
+ptr_t p;
+{
+    return GC_normal_finalize_mark_proc(p);
+}
+
 
 
 /* Register a finalization function.  See gc.h for details.	*/
@@ -511,6 +520,23 @@ finalization_mark_proc * mp;
     				ocd, GC_null_finalize_mark_proc);
 }
 
+# if defined(__STDC__)
+    void GC_register_finalizer_unreachable(void * obj,
+			       GC_finalization_proc fn, void * cd,
+			       GC_finalization_proc *ofn, void ** ocd)
+# else
+    void GC_register_finalizer_unreachable(obj, fn, cd, ofn, ocd)
+    GC_PTR obj;
+    GC_finalization_proc fn;
+    GC_PTR cd;
+    GC_finalization_proc * ofn;
+    GC_PTR * ocd;
+# endif
+{
+    GC_register_finalizer_inner(obj, fn, cd, ofn,
+    				ocd, GC_unreachable_finalize_mark_proc);
+}
+
 #ifndef NO_DEBUGGING
 void GC_dump_finalization()
 {
@@ -638,9 +664,44 @@ void GC_finalize()
   	    if (curr_fo -> fo_mark_proc == GC_null_finalize_mark_proc) {
   	        GC_MARK_FO(real_ptr, GC_normal_finalize_mark_proc);
   	    }
-  	    GC_set_mark_bit(real_ptr);
+	    if (curr_fo -> fo_mark_proc != GC_unreachable_finalize_mark_proc) {
+		GC_set_mark_bit(real_ptr);
+	    }
   	}
       }
+
+      /* now revive finalize-when-unreachable objects reachable from
+	 other finalizable objects */
+      curr_fo = GC_finalize_now;
+      prev_fo = 0;
+      while (curr_fo != 0) {
+	next_fo = fo_next(curr_fo);
+	if (curr_fo -> fo_mark_proc == GC_unreachable_finalize_mark_proc) {
+	  real_ptr = (ptr_t)curr_fo -> fo_hidden_base;
+	  if (!GC_is_marked(real_ptr)) {
+	      GC_set_mark_bit(real_ptr);
+	  } else {
+	      if (prev_fo == 0)
+		GC_finalize_now = next_fo;
+	      else
+		fo_set_next(prev_fo, next_fo);
+
+              curr_fo -> fo_hidden_base =
+              		(word) HIDE_POINTER(curr_fo -> fo_hidden_base);
+              GC_words_finalized -=
+                 	ALIGNED_WORDS(curr_fo -> fo_object_size)
+              		+ ALIGNED_WORDS(sizeof(struct finalizable_object));
+
+	      i = HASH2(real_ptr, log_fo_table_size);
+	      fo_set_next (curr_fo, fo_head[i]);
+	      GC_fo_entries++;
+	      fo_head[i] = curr_fo;
+	      curr_fo = prev_fo;
+	  }
+	}
+	prev_fo = curr_fo;
+	curr_fo = next_fo;
+      }
   }
 
   /* Remove dangling disappearing links. */
Index: boehm-gc/dbg_mlc.c
===================================================================
--- boehm-gc/dbg_mlc.c.orig	2007-02-16 21:56:48.000000000 -0200
+++ boehm-gc/dbg_mlc.c	2007-02-16 21:58:55.000000000 -0200
@@ -3,6 +3,7 @@
  * Copyright (c) 1991-1995 by Xerox Corporation.  All rights reserved.
  * Copyright (c) 1997 by Silicon Graphics.  All rights reserved.
  * Copyright (c) 1999-2004 Hewlett-Packard Development Company, L.P.
+ * Copyright (C) 2007 Free Software Foundation, Inc
  *
  * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
  * OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
@@ -1118,8 +1119,8 @@ GC_PTR *ocd;
     if (0 == base) return;
     if ((ptr_t)obj - base != sizeof(oh)) {
         GC_err_printf1(
-	  "GC_debug_register_finalizer_no_order called with non-base-pointer 0x%lx\n",
-	  obj);
+	    "GC_debug_register_finalizer_no_order called with non-base-pointer 0x%lx\n",
+	    obj);
     }
     if (0 == fn) {
       GC_register_finalizer_no_order(base, 0, 0, &my_old_fn, &my_old_cd);
@@ -1129,7 +1130,41 @@ GC_PTR *ocd;
 				     &my_old_cd);
     }
     store_old(obj, my_old_fn, (struct closure *)my_old_cd, ofn, ocd);
- }
+}
+
+# ifdef __STDC__
+    void GC_debug_register_finalizer_unreachable
+    				    (GC_PTR obj, GC_finalization_proc fn,
+    				     GC_PTR cd, GC_finalization_proc *ofn,
+				     GC_PTR *ocd)
+# else
+    void GC_debug_register_finalizer_unreachable
+    				    (obj, fn, cd, ofn, ocd)
+    GC_PTR obj;
+    GC_finalization_proc fn;
+    GC_PTR cd;
+    GC_finalization_proc *ofn;
+    GC_PTR *ocd;
+# endif
+{
+    GC_finalization_proc my_old_fn;
+    GC_PTR my_old_cd;
+    ptr_t base = GC_base(obj);
+    if (0 == base) return;
+    if ((ptr_t)obj - base != sizeof(oh)) {
+        GC_err_printf1(
+	    "GC_debug_register_finalizer_unreachable called with non-base-pointer 0x%lx\n",
+	    obj);
+    }
+    if (0 == fn) {
+      GC_register_finalizer_unreachable(base, 0, 0, &my_old_fn, &my_old_cd);
+    } else {
+      GC_register_finalizer_unreachable(base, GC_debug_invoke_finalizer,
+    			    	     GC_make_closure(fn,cd), &my_old_fn,
+				     &my_old_cd);
+    }
+    store_old(obj, my_old_fn, (struct closure *)my_old_cd, ofn, ocd);
+}
 
 # ifdef __STDC__
     void GC_debug_register_finalizer_ignore_self
Index: libjava/include/jvm.h
===================================================================
--- libjava/include/jvm.h.orig	2007-02-16 21:56:48.000000000 -0200
+++ libjava/include/jvm.h	2007-02-16 22:16:50.000000000 -0200
@@ -288,7 +288,7 @@ private:
  						    _Jv_Utf8Const *method_signature,
 						    jclass *found_class,
 						    bool check_perms = true);
-  static void *create_error_method(_Jv_Utf8Const *);
+  static void *create_error_method(_Jv_Utf8Const *, jclass);
 
   /* The least significant bit of the signature pointer in a symbol
      table is set to 1 by the compiler if the reference is "special",
@@ -341,6 +341,10 @@ void *_Jv_AllocBytes (jsize size) __attr
 /* Allocate space for a new non-Java object, which does not have the usual 
    Java object header but may contain pointers to other GC'ed objects.  */
 void *_Jv_AllocRawObj (jsize size) __attribute__((__malloc__));
+/* Allocate a double-indirect pointer to a _Jv_ClosureList such that
+   the _Jv_ClosureList gets automatically finalized when it is no
+   longer reachable, not even by other finalizable objects.  */
+_Jv_ClosureList **_Jv_ClosureListFinalizer (void) __attribute__((__malloc__));
 /* Explicitly throw an out-of-memory exception.	*/
 void _Jv_ThrowNoMemory() __attribute__((__noreturn__));
 /* Allocate an object with a single pointer.  The first word is reserved
Index: libjava/boehm.cc
===================================================================
--- libjava/boehm.cc.orig	2007-02-16 21:56:48.000000000 -0200
+++ libjava/boehm.cc	2007-02-16 22:15:22.000000000 -0200
@@ -1,6 +1,6 @@
 // boehm.cc - interface between libjava and Boehm GC.
 
-/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006
+/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
    Free Software Foundation
 
    This file is part of libgcj.
@@ -380,6 +380,29 @@ _Jv_AllocRawObj (jsize size)
   return (void *) GC_MALLOC (size ? size : 1);
 }
 
+typedef _Jv_ClosureList *closure_list_pointer;
+
+/* Release closures in a _Jv_ClosureList.  */
+static void
+finalize_closure_list (GC_PTR obj, GC_PTR)
+{
+  _Jv_ClosureList **clpp = (_Jv_ClosureList **)obj;
+  _Jv_ClosureList::releaseClosures (clpp);
+}
+
+/* Allocate a double-indirect pointer to a _Jv_ClosureList that will
+   get garbage-collected after this double-indirect pointer becomes
+   unreachable by any other objects, including finalizable ones.  */
+_Jv_ClosureList **
+_Jv_ClosureListFinalizer ()
+{
+  _Jv_ClosureList **clpp;
+  clpp = (_Jv_ClosureList **)_Jv_AllocBytes (sizeof (*clpp));
+  GC_REGISTER_FINALIZER_UNREACHABLE (clpp, finalize_closure_list,
+				     NULL, NULL, NULL);
+  return clpp;
+}
+
 static void
 call_finalizer (GC_PTR obj, GC_PTR client_data)
 {
Index: libjava/nogc.cc
===================================================================
--- libjava/nogc.cc.orig	2007-02-16 21:56:48.000000000 -0200
+++ libjava/nogc.cc	2007-02-16 22:16:27.000000000 -0200
@@ -1,6 +1,7 @@
 // nogc.cc - Implement null garbage collector.
 
-/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2006  Free Software Foundation
+/* Copyright (C) 1998, 1999, 2000, 2001, 2002, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -71,6 +72,14 @@ _Jv_AllocRawObj (jsize size)
   return calloc (size, 1);
 }
 
+_Jv_ClosureList **
+_Jv_ClosureListFinalizer ()
+{
+  _Jv_ClosureList **clpp;
+  clpp = (_Jv_ClosureList **)_Jv_AllocBytes (sizeof (*clpp));
+  return clpp;
+}
+
 void
 _Jv_RegisterFinalizer (void *, _Jv_FinalizerFunc *)
 {
Index: libjava/java/lang/Class.h
===================================================================
--- libjava/java/lang/Class.h.orig	2007-02-16 21:56:48.000000000 -0200
+++ libjava/java/lang/Class.h	2007-02-16 22:16:12.000000000 -0200
@@ -105,6 +105,15 @@ class _Jv_InterpClass;
 class _Jv_InterpMethod;
 #endif
 
+class _Jv_ClosureList
+{
+  _Jv_ClosureList *next;
+  void *ptr;
+public:
+  void registerClosure (jclass klass, void *ptr);
+  static void releaseClosures (_Jv_ClosureList **closures);
+};
+
 struct _Jv_Constants
 {
   jint size;
@@ -626,6 +635,7 @@ private:   
   friend class ::_Jv_CompiledEngine;
   friend class ::_Jv_IndirectCompiledEngine;
   friend class ::_Jv_InterpreterEngine;
+  friend class ::_Jv_ClosureList;
 
   friend void ::_Jv_sharedlib_register_hook (jclass klass);
 
Index: libjava/java/lang/natClass.cc
===================================================================
--- libjava/java/lang/natClass.cc.orig	2007-02-16 21:56:48.000000000 -0200
+++ libjava/java/lang/natClass.cc	2007-02-17 00:28:31.000000000 -0200
@@ -670,6 +670,28 @@ java::lang::Class::finalize (void)
   engine->unregister(this);
 }
 
+void
+_Jv_ClosureList::releaseClosures (_Jv_ClosureList **closures)
+{
+  if (!closures)
+    return;
+
+  while (_Jv_ClosureList *current = *closures)
+    {
+      *closures = current->next;
+      ffi_closure_free (current->ptr);
+    }
+}
+
+void
+_Jv_ClosureList::registerClosure (jclass klass, void *ptr)
+{
+  _Jv_ClosureList **closures = klass->engine->get_closure_list (klass);
+  this->ptr = ptr;
+  this->next = *closures;
+  *closures = this;
+}
+
 // This implements the initialization process for a class.  From Spec
 // section 12.4.2.
 void
Index: libjava/include/execution.h
===================================================================
--- libjava/include/execution.h.orig	2007-02-16 21:56:48.000000000 -0200
+++ libjava/include/execution.h	2007-02-16 22:15:49.000000000 -0200
@@ -1,6 +1,6 @@
 // execution.h - Execution engines. -*- c++ -*-
 
-/* Copyright (C) 2004, 2006  Free Software Foundation
+/* Copyright (C) 2004, 2006, 2007  Free Software Foundation
 
    This file is part of libgcj.
 
@@ -29,6 +29,7 @@ struct _Jv_ExecutionEngine
   _Jv_ResolvedMethod *(*resolve_method) (_Jv_Method *, jclass,
 					 jboolean);
   void (*post_miranda_hook) (jclass);
+  _Jv_ClosureList **(*get_closure_list) (jclass);
 };
 
 // This handles gcj-compiled code except that compiled with
@@ -77,6 +78,11 @@ struct _Jv_CompiledEngine : public _Jv_E
     // Not needed.
   }
 
+  static _Jv_ClosureList **do_get_closure_list (jclass)
+  {
+    return NULL;
+  }
+
   _Jv_CompiledEngine ()
   {
     unregister = do_unregister;
@@ -87,6 +93,7 @@ struct _Jv_CompiledEngine : public _Jv_E
     create_ncode = do_create_ncode;
     resolve_method = do_resolve_method;
     post_miranda_hook = do_post_miranda_hook;
+    get_closure_list = do_get_closure_list;
   }
 
   // These operators make it so we don't have to link in libstdc++.
@@ -105,6 +112,7 @@ class _Jv_IndirectCompiledClass
 {
 public:
   void **field_initializers;
+  _Jv_ClosureList **closures;
 };
 
 // This handles gcj-compiled code compiled with -findirect-classes.
@@ -114,14 +122,32 @@ struct _Jv_IndirectCompiledEngine : publ
   {
     allocate_static_fields = do_allocate_static_fields;
     allocate_field_initializers = do_allocate_field_initializers;
+    get_closure_list = do_get_closure_list;
   }
   
+  static _Jv_IndirectCompiledClass *get_aux_info (jclass klass)
+  {
+    _Jv_IndirectCompiledClass *aux =
+      (_Jv_IndirectCompiledClass*)klass->aux_info;
+    if (!aux)
+      {
+	aux = (_Jv_IndirectCompiledClass*)
+	  _Jv_AllocRawObj (sizeof (_Jv_IndirectCompiledClass));
+	klass->aux_info = aux;
+      }
+
+    return aux;
+  }
+
   static void do_allocate_field_initializers (jclass klass)
   {
-    _Jv_IndirectCompiledClass *aux 
-      =  (_Jv_IndirectCompiledClass*)
-        _Jv_AllocRawObj (sizeof (_Jv_IndirectCompiledClass));
-    klass->aux_info = aux;
+    _Jv_IndirectCompiledClass *aux = get_aux_info (klass);
+    if (!aux)
+      {
+	aux = (_Jv_IndirectCompiledClass*)
+	  _Jv_AllocRawObj (sizeof (_Jv_IndirectCompiledClass));
+	klass->aux_info = aux;
+      }
 
     aux->field_initializers = (void **)_Jv_Malloc (klass->field_count 
 						   * sizeof (void*));    
@@ -172,6 +198,16 @@ struct _Jv_IndirectCompiledEngine : publ
       } 
     _Jv_Free (aux->field_initializers);
   }
+
+  static _Jv_ClosureList **do_get_closure_list (jclass klass)
+  {
+    _Jv_IndirectCompiledClass *aux = get_aux_info (klass);
+
+    if (!aux->closures)
+      aux->closures = _Jv_ClosureListFinalizer ();
+
+    return aux->closures;
+  }
 };
 
 
@@ -203,6 +239,8 @@ class _Jv_InterpreterEngine : public _Jv
 
   static void do_post_miranda_hook (jclass);
 
+  static _Jv_ClosureList **do_get_closure_list (jclass klass);
+
   _Jv_InterpreterEngine ()
   {
     unregister = do_unregister;
@@ -213,6 +251,7 @@ class _Jv_InterpreterEngine : public _Jv
     create_ncode = do_create_ncode;
     resolve_method = do_resolve_method;
     post_miranda_hook = do_post_miranda_hook;
+    get_closure_list = do_get_closure_list;
   }
 
   // These operators make it so we don't have to link in libstdc++.
Index: libjava/interpret.cc
===================================================================
--- libjava/interpret.cc.orig	2007-02-16 21:56:48.000000000 -0200
+++ libjava/interpret.cc	2007-02-16 22:20:22.000000000 -0200
@@ -1255,10 +1255,10 @@ _Jv_init_cif (_Jv_Utf8Const* signature,
 }
 
 #if FFI_NATIVE_RAW_API
-#   define FFI_PREP_RAW_CLOSURE ffi_prep_raw_closure
+#   define FFI_PREP_RAW_CLOSURE ffi_prep_raw_closure_loc
 #   define FFI_RAW_SIZE ffi_raw_size
 #else
-#   define FFI_PREP_RAW_CLOSURE ffi_prep_java_raw_closure
+#   define FFI_PREP_RAW_CLOSURE ffi_prep_java_raw_closure_loc
 #   define FFI_RAW_SIZE ffi_java_raw_size
 #endif
 
@@ -1269,6 +1269,7 @@ _Jv_init_cif (_Jv_Utf8Const* signature,
 
 typedef struct {
   ffi_raw_closure  closure;
+  _Jv_ClosureList list;
   ffi_cif   cif;
   ffi_type *arg_types[0];
 } ncode_closure;
@@ -1276,7 +1277,7 @@ typedef struct {
 typedef void (*ffi_closure_fun) (ffi_cif*,void*,ffi_raw*,void*);
 
 void *
-_Jv_InterpMethod::ncode ()
+_Jv_InterpMethod::ncode (jclass klass)
 {
   using namespace java::lang::reflect;
 
@@ -1286,9 +1287,12 @@ _Jv_InterpMethod::ncode ()
   jboolean staticp = (self->accflags & Modifier::STATIC) != 0;
   int arg_count = _Jv_count_arguments (self->signature, staticp);
 
+  void *code;
   ncode_closure *closure =
-    (ncode_closure*)_Jv_AllocBytes (sizeof (ncode_closure)
-					+ arg_count * sizeof (ffi_type*));
+    (ncode_closure*)ffi_closure_alloc (sizeof (ncode_closure)
+				       + arg_count * sizeof (ffi_type*),
+				       &code);
+  closure->list.registerClosure (klass, closure);
 
   _Jv_init_cif (self->signature,
 		arg_count,
@@ -1341,9 +1345,11 @@ _Jv_InterpMethod::ncode ()
   FFI_PREP_RAW_CLOSURE (&closure->closure,
 		        &closure->cif, 
 		        fun,
-		        (void*)this);
+		        (void*)this,
+			code);
+
+  self->ncode = code;
 
-  self->ncode = (void*)closure;
   return self->ncode;
 }
 
@@ -1540,7 +1546,7 @@ _Jv_InterpMethod::set_insn (jlong index,
 }
 
 void *
-_Jv_JNIMethod::ncode ()
+_Jv_JNIMethod::ncode (jclass klass)
 {
   using namespace java::lang::reflect;
 
@@ -1550,9 +1556,12 @@ _Jv_JNIMethod::ncode ()
   jboolean staticp = (self->accflags & Modifier::STATIC) != 0;
   int arg_count = _Jv_count_arguments (self->signature, staticp);
 
+  void *code;
   ncode_closure *closure =
-    (ncode_closure*)_Jv_AllocBytes (sizeof (ncode_closure)
-				    + arg_count * sizeof (ffi_type*));
+    (ncode_closure*)ffi_closure_alloc (sizeof (ncode_closure)
+				       + arg_count * sizeof (ffi_type*),
+				       &code);
+  closure->list.registerClosure (klass, closure);
 
   ffi_type *rtype;
   _Jv_init_cif (self->signature,
@@ -1594,9 +1603,10 @@ _Jv_JNIMethod::ncode ()
   FFI_PREP_RAW_CLOSURE (&closure->closure,
 			&closure->cif, 
 			fun,
-			(void*) this);
+			(void*) this,
+			code);
 
-  self->ncode = (void *) closure;
+  self->ncode = code;
   return self->ncode;
 }
 
@@ -1657,16 +1667,27 @@ _Jv_InterpreterEngine::do_create_ncode (
 	  // cases.  Well, we can't, because we don't allocate these
 	  // objects using `new', and thus they don't get a vtable.
 	  _Jv_JNIMethod *jnim = reinterpret_cast<_Jv_JNIMethod *> (imeth);
-	  klass->methods[i].ncode = jnim->ncode ();
+	  klass->methods[i].ncode = jnim->ncode (klass);
 	}
       else if (imeth != 0)		// it could be abstract
 	{
 	  _Jv_InterpMethod *im = reinterpret_cast<_Jv_InterpMethod *> (imeth);
-	  klass->methods[i].ncode = im->ncode ();
+	  klass->methods[i].ncode = im->ncode (klass);
 	}
     }
 }
 
+_Jv_ClosureList **
+_Jv_InterpreterEngine::do_get_closure_list (jclass klass)
+{
+  _Jv_InterpClass *iclass = (_Jv_InterpClass *) klass->aux_info;
+
+  if (!iclass->closures)
+    iclass->closures = _Jv_ClosureListFinalizer ();
+
+  return iclass->closures;
+}
+
 void
 _Jv_InterpreterEngine::do_allocate_static_fields (jclass klass,
 						  int pointer_size,
Index: libjava/include/java-interp.h
===================================================================
--- libjava/include/java-interp.h.orig	2007-02-16 21:56:48.000000000 -0200
+++ libjava/include/java-interp.h	2007-02-16 21:58:55.000000000 -0200
@@ -202,7 +202,7 @@ class _Jv_InterpMethod : public _Jv_Meth
   }
 
   // return the method's invocation pointer (a stub).
-  void *ncode ();
+  void *ncode (jclass);
   void compile (const void * const *);
 
   static void run_normal (ffi_cif*, void*, ffi_raw*, void*);
@@ -293,6 +293,7 @@ class _Jv_InterpClass
   _Jv_MethodBase **interpreted_methods;
   _Jv_ushort     *field_initializers;
   jstring source_file_name;
+  _Jv_ClosureList **closures;
 
   friend class _Jv_ClassReader;
   friend class _Jv_InterpMethod;
@@ -341,7 +342,7 @@ class _Jv_JNIMethod : public _Jv_MethodB
   // This function is used when making a JNI call from the interpreter.
   static void call (ffi_cif *, void *, ffi_raw *, void *);
 
-  void *ncode ();
+  void *ncode (jclass);
 
   friend class _Jv_ClassReader;
   friend class _Jv_InterpreterEngine;
Index: libjava/defineclass.cc
===================================================================
--- libjava/defineclass.cc.orig	2007-02-16 21:56:48.000000000 -0200
+++ libjava/defineclass.cc	2007-02-16 21:58:55.000000000 -0200
@@ -1,6 +1,7 @@
 // defineclass.cc - defining a class from .class format.
 
-/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006  Free Software Foundation
+/* Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
+   Free Software Foundation
 
    This file is part of libgcj.
 
@@ -1695,7 +1696,7 @@ void _Jv_ClassReader::handleCodeAttribut
       // call a static method of an interpreted class from precompiled
       // code without first resolving the class (that will happen
       // during class initialization instead).
-      method->self->ncode = method->ncode ();
+      method->self->ncode = method->ncode (def);
     }
 }
 
@@ -1740,7 +1741,7 @@ void _Jv_ClassReader::handleMethodsEnd (
 		  // interpreted class from precompiled code without
 		  // first resolving the class (that will happen
 		  // during class initialization instead).
-		  method->ncode = m->ncode ();
+		  method->ncode = m->ncode (def);
 		}
 	    }
 	}
Index: libjava/link.cc
===================================================================
--- libjava/link.cc.orig	2007-02-16 21:56:48.000000000 -0200
+++ libjava/link.cc	2007-02-16 22:06:42.000000000 -0200
@@ -1022,15 +1022,17 @@ struct method_closure
   // be the same as the address of the overall structure.  This is due
   // to disabling interior pointers in the GC.
   ffi_closure closure;
+  _Jv_ClosureList list;
   ffi_cif cif;
   ffi_type *arg_types[1];
 };
 
 void *
-_Jv_Linker::create_error_method (_Jv_Utf8Const *class_name)
+_Jv_Linker::create_error_method (_Jv_Utf8Const *class_name, jclass klass)
 {
+  void *code;
   method_closure *closure
-    = (method_closure *) _Jv_AllocBytes(sizeof (method_closure));
+    = (method_closure *)ffi_closure_alloc (sizeof (method_closure), &code);
 
   closure->arg_types[0] = &ffi_type_void;
 
@@ -1042,13 +1044,18 @@ _Jv_Linker::create_error_method (_Jv_Utf
                        1,
                        &ffi_type_void,
 		       closure->arg_types) == FFI_OK
-      && ffi_prep_closure (&closure->closure,
-                           &closure->cif,
-			   _Jv_ThrowNoClassDefFoundErrorTrampoline,
-			   class_name) == FFI_OK)
-    return &closure->closure;
+      && ffi_prep_closure_loc (&closure->closure,
+			       &closure->cif,
+			       _Jv_ThrowNoClassDefFoundErrorTrampoline,
+			       class_name,
+			       code) == FFI_OK)
+    {
+      closure->list.registerClosure (klass, closure);
+      return code;
+    }
   else
     {
+      ffi_closure_free (closure);
       java::lang::StringBuffer *buffer = new java::lang::StringBuffer();
       buffer->append(JvNewStringLatin1("Error setting up FFI closure"
 				       " for static method of"
@@ -1059,7 +1066,7 @@ _Jv_Linker::create_error_method (_Jv_Utf
 }
 #else
 void *
-_Jv_Linker::create_error_method (_Jv_Utf8Const *)
+_Jv_Linker::create_error_method (_Jv_Utf8Const *, jclass)
 {
   // Codepath for platforms which do not support (or want) libffi.
   // You have to accept that it is impossible to provide the name
@@ -1090,8 +1097,6 @@ static bool debug_link = false;
 // at the corresponding position in the virtual method offset table
 // (klass->otable). 
 
-// The same otable and atable may be shared by many classes.
-
 // This must be called while holding the class lock.
 
 void
@@ -1242,13 +1247,15 @@ _Jv_Linker::link_symbol_table (jclass kl
       // NullPointerException
       klass->atable->addresses[index] = NULL;
 
+      bool use_error_method = false;
+
       // If the target class is missing we prepare a function call
       // that throws a NoClassDefFoundError and store the address of
       // that newly prepared method in the atable. The user can run
       // code in classes where the missing class is part of the
       // execution environment as long as it is never referenced.
       if (target_class == NULL)
-        klass->atable->addresses[index] = create_error_method(sym.class_name);
+	use_error_method = true;
       // We're looking for a static field or a static method, and we
       // can tell which is needed by looking at the signature.
       else if (signature->first() == '(' && signature->len() >= 2)
@@ -1296,12 +1303,16 @@ _Jv_Linker::link_symbol_table (jclass kl
 		}
 	    }
 	  else
+	    use_error_method = true;
+
+	  if (use_error_method)
 	    klass->atable->addresses[index]
-              = create_error_method(sym.class_name);
+	      = create_error_method(sym.class_name, klass);
 
 	  continue;
 	}
 
+
       // Try fields only if the target class exists.
       if (target_class != NULL)
       {
Index: libjava/java/lang/reflect/natVMProxy.cc
===================================================================
--- libjava/java/lang/reflect/natVMProxy.cc.orig	2007-02-16 21:56:48.000000000 -0200
+++ libjava/java/lang/reflect/natVMProxy.cc	2007-02-16 22:07:13.000000000 -0200
@@ -1,6 +1,6 @@
 // natVMProxy.cc -- Implementation of VMProxy methods.
 
-/* Copyright (C) 2006
+/* Copyright (C) 2006, 2007
    Free Software Foundation
 
    This file is part of libgcj.
@@ -66,7 +66,8 @@ using namespace java::lang::reflect;
 using namespace java::lang;
 
 typedef void (*closure_fun) (ffi_cif*, void*, void**, void*);
-static void *ncode (_Jv_Method *self, closure_fun fun, Method *meth);
+static void *ncode (jclass klass, _Jv_Method *self, closure_fun fun,
+		    Method *meth);
 static void run_proxy (ffi_cif*, void*, void**, void*);
 
 typedef jobject invoke_t (jobject, Proxy *, Method *, JArray< jobject > *);
@@ -165,7 +166,7 @@ java::lang::reflect::VMProxy::generatePr
       // the interfaces of which it is a proxy will also be reachable,
       // so this is safe.
       method = imethod;
-      method.ncode = ncode (&method, run_proxy, elements(d->methods)[i]);
+      method.ncode = ncode (klass, &method, run_proxy, elements(d->methods)[i]);
       method.accflags &= ~Modifier::ABSTRACT;
     }
 
@@ -290,6 +291,7 @@ unbox (jobject o, jclass klass, void *rv
 
 typedef struct {
   ffi_closure  closure;
+  _Jv_ClosureList list;
   ffi_cif   cif;
   Method *meth;
   _Jv_Method *self;
@@ -362,16 +364,19 @@ run_proxy (ffi_cif *cif,
 // the address of its closure.
 
 static void *
-ncode (_Jv_Method *self, closure_fun fun, Method *meth)
+ncode (jclass klass, _Jv_Method *self, closure_fun fun, Method *meth)
 {
   using namespace java::lang::reflect;
 
   jboolean staticp = (self->accflags & Modifier::STATIC) != 0;
   int arg_count = _Jv_count_arguments (self->signature, staticp);
 
+  void *code;
   ncode_closure *closure =
-    (ncode_closure*)_Jv_AllocBytes (sizeof (ncode_closure)
-				    + arg_count * sizeof (ffi_type*));
+    (ncode_closure*)ffi_closure_alloc (sizeof (ncode_closure)
+				       + arg_count * sizeof (ffi_type*),
+				       &code);
+  closure->list.registerClosure (klass, closure);
 
   _Jv_init_cif (self->signature,
 		arg_count,
@@ -384,11 +389,12 @@ ncode (_Jv_Method *self, closure_fun fun
 
   JvAssert ((self->accflags & Modifier::NATIVE) == 0);
 
-  ffi_prep_closure (&closure->closure,
-		    &closure->cif, 
-		    fun,
-		    (void*)closure);
+  ffi_prep_closure_loc (&closure->closure,
+			&closure->cif,
+			fun,
+			code,
+			code);
 
-  self->ncode = (void*)closure;
+  self->ncode = code;
   return self->ncode;
 }
Index: libjava/testsuite/libjava.jar/TestClosureGC.java
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ libjava/testsuite/libjava.jar/TestClosureGC.java	2007-02-16 21:58:55.000000000 -0200
@@ -0,0 +1,116 @@
+/* Verify that libffi closures aren't deallocated too early.
+
+   Copyright (C) 2007 Free Software Foundation, Inc
+   Contributed by Alexandre Oliva <aoliva@redhat.com>
+
+   If libffi closures are released too early, we lose.
+ */
+
+import java.util.HashSet;
+
+public class TestClosureGC {
+    public static String objId (Object obj) {
+	return obj + "/"
+	    + Integer.toHexString(obj.getClass().getClassLoader().hashCode());
+    }
+    public static class cld extends java.net.URLClassLoader {
+	static final Object obj = new cl0();
+	public cld () throws Exception {
+	    super(new java.net.URL[] { });
+	    /* System.out.println (objId (this) + " created"); */
+	}
+	public void finalize () {
+	    /* System.out.println (objId (this) + " finalized"); */
+	}
+	public String toString () {
+	    return this.getClass().getName() + "@"
+		+ Integer.toHexString (hashCode ());
+	}
+	public Class loadClass (String name) throws ClassNotFoundException {
+	    try {
+		java.io.InputStream IS = getSystemResourceAsStream
+		    (name + ".class");
+		int maxsz = 1024, readsz = 0;
+		byte buf[] = new byte[maxsz];
+		for(;;) {
+		    int readnow = IS.read (buf, readsz, maxsz - readsz);
+		    if (readnow <= 0)
+			break;
+		    readsz += readnow;
+		    if (readsz == maxsz) {
+			byte newbuf[] = new byte[maxsz *= 2];
+			System.arraycopy (buf, 0, newbuf, 0, readsz);
+			buf = newbuf;
+		    }
+		}
+		return defineClass (name, buf, 0, readsz);
+	    } catch (Exception e) {
+		return super.loadClass (name);
+	    }
+	}
+    }
+    public static class cl0 {
+	public cl0 () {
+	    /* System.out.println (objId (this) + " created"); */
+	}
+	public void finalize () {
+	    /* System.out.println (objId (this) + " finalized"); */
+	}
+    }
+    public static class cl1 {
+	final HashSet hs;
+	static final Object obj = new cl0();
+	public cl1 (final HashSet hs) {
+	    this.hs = hs;
+	    /* System.out.println (objId (this) + " created"); */
+	}
+	public void finalize () {
+	    /* System.out.println (objId (this) + " finalized"); */
+	}
+    }
+    public static class cl2 {
+	final HashSet hs;
+	static final Object obj = new cl0();
+	public cl2 (final HashSet hs) {
+	    this.hs = hs;
+	    /* System.out.println (objId (this) + " created"); */
+	}
+	public void finalize () {
+	    /* System.out.println (objId (this) + " finalized"); */
+	    hs.add(this);
+	    hs.add(new cl0());
+	}
+    }
+    static final HashSet hs = new HashSet();
+    static final Object obj = new cl0();
+    public static void main(String[] argv) throws Exception {
+	{
+	    Class[] hscs = { HashSet.class };
+	    Object[] hsos = { hs };
+	    new cld().loadClass ("TestClosureGC$cl1").
+		getConstructor (hscs).newInstance (hsos);
+	    new cld().loadClass ("TestClosureGC$cl2").
+		getConstructor (hscs).newInstance (hsos);
+	    new cld().loadClass ("TestClosureGC$cl1").
+		getConstructor (hscs).newInstance (hsos);
+	    new cld().loadClass ("TestClosureGC$cl1").
+		getConstructor (hscs).newInstance (hsos);
+	}
+	for (int i = 1; i <= 5; i++) {
+	    /* System.out.println ("Will run GC and finalization " + i); */
+	    System.gc ();
+	    Thread.sleep (100);
+	    System.runFinalization ();
+	    Thread.sleep (100);
+	    if (hs.isEmpty ())
+		continue;
+	    java.util.Iterator it = hs.iterator ();
+	    while (it.hasNext ()) {
+		Object obj = it.next();
+		/* System.out.println (objId (obj) + " in ht, removing"); */
+		it.remove ();
+	    }
+	}
+	System.out.println ("ok");
+    }
+}
Index: libjava/testsuite/libjava.jar/TestClosureGC.out
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ libjava/testsuite/libjava.jar/TestClosureGC.out	2007-02-16 21:58:55.000000000 -0200
@@ -0,0 +1 @@
+ok
Index: libjava/testsuite/libjava.jar/TestClosureGC.xfail
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ libjava/testsuite/libjava.jar/TestClosureGC.xfail	2007-02-16 21:58:55.000000000 -0200
@@ -0,0 +1 @@
+main=TestClosureGC

[-- Attachment #4: libffi-closure-prot-exec-TestClosureGC.jar --]
[-- Type: application/x-java-archive, Size: 3997 bytes --]

[-- Attachment #5: Type: text/plain, Size: 249 bytes --]


-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-02-18 23:09   ` Alexandre Oliva
@ 2007-03-05 13:15     ` Tom Tromey
  2007-03-07  7:29       ` Alexandre Oliva
  0 siblings, 1 reply; 35+ messages in thread
From: Tom Tromey @ 2007-03-05 13:15 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: gcc-patches, java-patches

>>>>> "Alexandre" == Alexandre Oliva <aoliva@redhat.com> writes:

Alexandre> On Feb 15, 2007, Tom Tromey <tromey@redhat.com> wrote:
>> It is ok to have the libjava code directly call ffi_closure_alloc and
>> ffi_closure_free.  We're already making direct calls to the various
>> ffi functions, so this extra layer isn't needed.

Alexandre> Done in the revised patch below.  Retested on
Alexandre> x86_64-linux-gnu.  Ok to install?

If there are no other remaining objections or concerns, then yes.
(I lost track of the state of the thread while traveling :-)

Tom
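
For reference while reading the approval above: the interface under
discussion pairs a writable closure block with a separate executable
alias of it.  Here is a minimal sketch of that allocate/prepare/free
lifecycle, assuming only the libffi entry points named in this
thread; the add_one callback and its wiring are illustrative, not
taken from the patch.

#include <ffi.h>
#include <stdio.h>

/* Illustrative callback: invoked through the closure's executable
   address; adds one to its single int argument.  */
static void
add_one (ffi_cif *cif, void *ret, void **args, void *user_data)
{
  /* Integral returns narrower than ffi_arg are stored as ffi_arg.  */
  *(ffi_arg *) ret = *(int *) args[0] + 1;
}

int
main (void)
{
  ffi_cif cif;
  ffi_type *arg_types[1] = { &ffi_type_sint };
  void *code;                   /* executable address */
  ffi_closure *closure          /* writable address */
    = ffi_closure_alloc (sizeof (ffi_closure), &code);

  if (closure == NULL
      || ffi_prep_cif (&cif, FFI_DEFAULT_ABI, 1,
		       &ffi_type_sint, arg_types) != FFI_OK
      || ffi_prep_closure_loc (closure, &cif, add_one,
			       NULL, code) != FFI_OK)
    return 1;

  int (*fn) (int) = (int (*) (int)) code;
  printf ("%d\n", fn (41));     /* prints 42 */

  ffi_closure_free (closure);   /* releases both mappings */
  return 0;
}

Writes go through the pointer returned by ffi_closure_alloc; calls go
through code.  On systems whose SELinux policy forbids pages that are
both writable and executable, the two may be distinct mappings of the
same underlying file, which is the point of the new interface.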


* Re: Get libffi closures to cope with SELinux execmem/execmod
  2007-03-05 13:15     ` Tom Tromey
@ 2007-03-07  7:29       ` Alexandre Oliva
  0 siblings, 0 replies; 35+ messages in thread
From: Alexandre Oliva @ 2007-03-07  7:29 UTC (permalink / raw)
  To: tromey; +Cc: gcc-patches, java-patches

On Mar  5, 2007, Tom Tromey <tromey@redhat.com> wrote:

>>>>>> "Alexandre" == Alexandre Oliva <aoliva@redhat.com> writes:
Alexandre> On Feb 15, 2007, Tom Tromey <tromey@redhat.com> wrote:
>>> It is ok to have the libjava code directly call ffi_closure_alloc and
>>> ffi_closure_free.  We're already making direct calls to the various
>>> ffi functions, so this extra layer isn't needed.

Alexandre> Done in the revised patch below.  Retested on
Alexandre> x86_64-linux-gnu.  Ok to install?

> If there are no other remaining objections or concerns, then yes.
> (I lost track of the state of the thread while traveling :-)

Thanks.  I had to tweak the changes to natVMProxy.cc to account for
the removal of one of the arguments from ncode; here's the revised
patch for that file.  The rest of the patch went in exactly as posted
before.

Index: libjava/java/lang/reflect/natVMProxy.cc
===================================================================
--- libjava/java/lang/reflect/natVMProxy.cc.orig	2007-03-07 04:20:58.000000000 -0300
+++ libjava/java/lang/reflect/natVMProxy.cc	2007-03-07 04:24:29.000000000 -0300
@@ -1,6 +1,6 @@
 // natVMProxy.cc -- Implementation of VMProxy methods.
 
-/* Copyright (C) 2006
+/* Copyright (C) 2006, 2007
    Free Software Foundation
 
    This file is part of libgcj.
@@ -66,7 +66,7 @@ using namespace java::lang::reflect;
 using namespace java::lang;
 
 typedef void (*closure_fun) (ffi_cif*, void*, void**, void*);
-static void *ncode (_Jv_Method *self, closure_fun fun);
+static void *ncode (jclass klass, _Jv_Method *self, closure_fun fun);
 static void run_proxy (ffi_cif*, void*, void**, void*);
 
 typedef jobject invoke_t (jobject, Proxy *, Method *, JArray< jobject > *);
@@ -165,7 +165,7 @@ java::lang::reflect::VMProxy::generatePr
       // the interfaces of which it is a proxy will also be reachable,
       // so this is safe.
       method = imethod;
-      method.ncode = ncode (&method, run_proxy);
+      method.ncode = ncode (klass, &method, run_proxy);
       method.accflags &= ~Modifier::ABSTRACT;
     }
 
@@ -289,6 +289,7 @@ unbox (jobject o, jclass klass, void *rv
 
 typedef struct {
   ffi_closure  closure;
+  _Jv_ClosureList list;
   ffi_cif   cif;
   _Jv_Method *self;
   ffi_type *arg_types[0];
@@ -366,16 +367,19 @@ run_proxy (ffi_cif *cif,
 // the address of its closure.
 
 static void *
-ncode (_Jv_Method *self, closure_fun fun)
+ncode (jclass klass, _Jv_Method *self, closure_fun fun)
 {
   using namespace java::lang::reflect;
 
   jboolean staticp = (self->accflags & Modifier::STATIC) != 0;
   int arg_count = _Jv_count_arguments (self->signature, staticp);
 
+  void *code;
   ncode_closure *closure =
-    (ncode_closure*)_Jv_AllocBytes (sizeof (ncode_closure)
-				    + arg_count * sizeof (ffi_type*));
+    (ncode_closure*)ffi_closure_alloc (sizeof (ncode_closure)
+				       + arg_count * sizeof (ffi_type*),
+				       &code);
+  closure->list.registerClosure (klass, closure);
 
   _Jv_init_cif (self->signature,
 		arg_count,
@@ -387,11 +391,12 @@ ncode (_Jv_Method *self, closure_fun fun
 
   JvAssert ((self->accflags & Modifier::NATIVE) == 0);
 
-  ffi_prep_closure (&closure->closure,
-		    &closure->cif, 
-		    fun,
-		    (void*)closure);
+  ffi_prep_closure_loc (&closure->closure,
+			&closure->cif,
+			fun,
+			code,
+			code);
 
-  self->ncode = (void*)closure;
+  self->ncode = code;
   return self->ncode;
 }
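
To connect the prose above with the hunk just shown: each closure is
now linked onto a per-class list at allocation time (the
closure->list.registerClosure (klass, closure) call), so that when
the class becomes unreachable its finalization can walk the list and
release every closure in one pass.  Below is a rough sketch of that
bookkeeping, with hypothetical names, since the _Jv_ClosureList
internals are not reproduced in this message.

#include <ffi.h>

/* One node per allocated closure.  In the real patch the node is
   embedded in the closure block itself, so freeing the block frees
   the node as well.  */
struct closure_node
{
  struct closure_node *next;
  void *writable;               /* block from ffi_closure_alloc */
};

/* Hypothetical stand-in for registerClosure: record WRITABLE and
   prepend NODE onto the owning class's list.  */
static void
register_closure (struct closure_node **list, struct closure_node *node,
		  void *writable)
{
  node->writable = writable;
  node->next = *list;
  *list = node;
}

/* Hypothetical finalizer hook: release every registered closure.
   ffi_closure_free unmaps both the writable and executable views.  */
static void
release_closures (struct closure_node **list)
{
  while (*list != NULL)
    {
      struct closure_node *node = *list;
      *list = node->next;       /* advance before the node is freed */
      ffi_closure_free (node->writable);
    }
}

Since the closures are no longer allocated by the garbage collector,
this explicit list is how class finalization finds them; the
TestClosureGC test posted earlier drives that path by loading classes
through short-lived class loaders and then forcing GC and
finalization.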

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}

