On 12/04/2013 10:51 AM, Christopher Faylor wrote:
>>>> One question, though.  Assuming start is == size, then the current
>>>> code in CVS extends the fd table by only 1.  If that happens often,
>>>> the current code would have to call ccalloc/memcpy/cfree a lot.
>>>> Wouldn't it in fact be better to always extend by at least
>>>> NOFILE_INCR, and to extend by (1 + start - size) only if start is
>>>> greater than size + NOFILE_INCR?  Something like
>>>>
>>>>   size_t extendby = (start >= size + NOFILE_INCR)
>>>>                     ? 1 + start - size : NOFILE_INCR;

Always increasing by a minimum of NOFILE_INCR is wrong in one case: we
should never grow beyond OPEN_MAX_MAX (currently 3200).  dup2(0, 3199)
should succeed (unless it fails with EMFILE due to rlimit, but we
already know that our handling of setrlimit(RLIMIT_NOFILE) is still a
bit awkward); but dup2(0, 3200) must always fail with EBADF.

I think the code in CVS is still wrong: we want to grow by the larger
of the value requested by the user or NOFILE_INCR, to minimize repeated
calloc, but we also need to cap the increase so the table never exceeds
OPEN_MAX_MAX descriptors, to avoid having a table larger than what the
rest of our code base supports.  Not having NOFILE_INCR free slots
after a user allocation is not fatal; it means that the first
allocation to a large number will not have tail padding, but the next
allocation to fd+1 will allocate NOFILE_INCR slots rather than just
one.  My original idea of MAX(NOFILE_INCR, start - size) expresses
that.

>>
>> That might be helpful.  Tcsh, for instance, always dups its std
>> descriptors to the new fds 15-19.  If it does so in this order, it
>> would have to call extend 5 times.
>
> dtable.h:#define NOFILE_INCR 32
>
> It shouldn't extend in that scenario.  The table starts with 32
> elements.

Rather, the table starts with 256 elements, which is why dup2 wouldn't
crash until dup'ing to 256 or greater before I started touching this.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
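
For reference, here is a minimal standalone sketch of the growth rule
being argued for above: grow by at least NOFILE_INCR to avoid repeated
ccalloc/memcpy/cfree, but never let the table exceed OPEN_MAX_MAX.
The compute_extendby helper and its signature are hypothetical, chosen
only to illustrate the arithmetic; the real logic lives in Cygwin's
dtable code.

  #include <stddef.h>
  #include <stdio.h>

  #define NOFILE_INCR  32     /* from dtable.h, per the thread */
  #define OPEN_MAX_MAX 3200   /* hard cap on the fd table size */

  /* Hypothetical helper: given the current table size and the target
     fd (start, with start >= size so growth is actually needed),
     return how many slots to add.  Returns 0 when the target fd is
     out of range, in which case the caller should fail with EBADF.  */
  static size_t
  compute_extendby (size_t size, size_t start)
  {
    if (start >= OPEN_MAX_MAX)
      return 0;                         /* dup2 (0, 3200) must fail */
    size_t extendby = 1 + start - size; /* slots actually needed */
    if (extendby < NOFILE_INCR)
      extendby = NOFILE_INCR;           /* MAX (NOFILE_INCR, needed) */
    if (size + extendby > OPEN_MAX_MAX)
      extendby = OPEN_MAX_MAX - size;   /* clamp to the hard cap */
    return extendby;
  }

  int
  main (void)
  {
    printf ("%zu\n", compute_extendby (256, 3199)); /* 2944: 256 -> 3200 */
    printf ("%zu\n", compute_extendby (256, 3200)); /* 0: out of range */
    printf ("%zu\n", compute_extendby (256, 256));  /* 32: NOFILE_INCR pad */
    return 0;
  }

With these numbers, dup2(0, 3199) grows the table to exactly
OPEN_MAX_MAX, dup2(0, 3200) is rejected, and a dup just past the
current size still leaves NOFILE_INCR slots of tail padding.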