public inbox for cygwin@cygwin.com
 help / color / mirror / Atom feed
* Limit for number of child processes
@ 2020-08-26 12:00 sten.kristian.ivarsson
  2020-08-26 17:57 ` Corinna Vinschen
  0 siblings, 1 reply; 12+ messages in thread
From: sten.kristian.ivarsson @ 2020-08-26 12:00 UTC (permalink / raw)
  To: cygwin

Dear cygwin folks

It seems like there's a limit on the number of possible child processes,
defined as 256 by 'NPROCS' in //winsup/cygwin/child_info.h and used by
'cprocs' in //winsup/cygwin/sigproc.cc

256 is quite a small number of children for an enterprise environment;
perhaps the limit should instead be bounded by the physical resources, or
possibly by Windows?

As of now, posix_spawnp sometimes fails with EAGAIN (maybe it should be
ENOMEM?) when this limit is reached

Would it be possible to somehow make this dynamic and let the operating
system set the limits?
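
For application code hitting this limit, note that posix_spawnp reports the
error as its return value rather than through errno, so the limit case can be
detected explicitly. A minimal sketch (the wrapper name and retry comment are
hypothetical, not Cygwin code):

```cpp
#include <cerrno>
#include <spawn.h>
#include <sys/wait.h>
#include <unistd.h>

extern char **environ;

// Spawn a child via posix_spawnp; returns 0 on success, or the error
// code posix_spawnp returns directly (e.g. EAGAIN when the child-process
// table is full -- the condition discussed above).
static int spawn_child(const char *path, pid_t *pid)
{
    char *const argv[] = { const_cast<char *>(path), nullptr };
    int err = posix_spawnp(pid, path, nullptr, nullptr, argv, environ);
    if (err == EAGAIN) {
        // Child-process limit reached: a caller could back off and
        // retry here instead of failing outright.
    }
    return err;
}
```

Checking the returned code against EAGAIN distinguishes "limit reached" from
other spawn failures instead of treating every error alike.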


Best regards,
Kristian


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Limit for number of child processes
  2020-08-26 12:00 Limit for number of child processes sten.kristian.ivarsson
@ 2020-08-26 17:57 ` Corinna Vinschen
  2020-08-27 12:17   ` Sv: " sten.kristian.ivarsson
  0 siblings, 1 reply; 12+ messages in thread
From: Corinna Vinschen @ 2020-08-26 17:57 UTC (permalink / raw)
  To: cygwin

On Aug 26 14:00, Kristian Ivarsson via Cygwin wrote:
> Dear cygwin folks
> 
> It seems like there's a limit of the number of possible child processes
> defined to 256 with 'NPROCS' in //winsup/cygwin/child_info.h used in
> 'cprocs' in //winsup/cygwin/sigproc.cc
> 
> 256 is quite few possible children in an enterprise environment and perhaps
> the limit should be limited by the physical resources or possibly Windows ?

The info has to be kept available in the process itself so we need
this array of NPROCS * sizeof (pinfo).

Of course, there's no reason to use a static array, the code could just
as well use a dynamically allocated array or a linked list.  It's just
not the way it is right now and would need a patch or rewrite.

As for the static array, sizeof pinfo is 64, so the current size of
the array is just 16K.  We could easily bump it to 64K with NPROCS
raised to 1024 for the next Cygwin release, at least on 64 bit.
I don't think we should raise this limit for 32 bit Cygwin, which is
kind of EOL anyway, given the massive restrictions.


Corinna

-- 
Corinna Vinschen
Cygwin Maintainer

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Sv: Limit for number of child processes
  2020-08-26 17:57 ` Corinna Vinschen
@ 2020-08-27 12:17   ` sten.kristian.ivarsson
  2020-08-28  1:42     ` Ken Brown
  0 siblings, 1 reply; 12+ messages in thread
From: sten.kristian.ivarsson @ 2020-08-27 12:17 UTC (permalink / raw)
  To: cygwin

Hi Corinna

> > Dear cygwin folks
> >
> > It seems like there's a limit of the number of possible child
> > processes defined to 256 with 'NPROCS' in //winsup/cygwin/child_info.h
> > used in 'cprocs' in //winsup/cygwin/sigproc.cc
> >
> > 256 is quite few possible children in an enterprise environment and
> > perhaps the limit should be limited by the physical resources or
> possibly Windows ?
> 
> The info has to be kept available in the process itself so we need this
> array of NPROCS * sizeof (pinfo).
> 
> Of course, there's no reason to use a static array, the code could just as
> well use a dynamically allocated array or a linked list.  It's just not
> the way it is right now and would need a patch or rewrite.
> 
> As for the static array, sizeof pinfo is 64, so the current size of the
> array is just 16K.  We could easily bump it to 64K with NPROCS raised to
> 1024 for the next Cygwin release, at least on 64 bit.
> I don't think we should raise this limit for 32 bit Cygwin, which is kind
> of EOL anyway, given the massive restrictions.

I don't know the exact purpose of this or how cprocs is used, but I'd prefer
something totally dynamic, 7 days out of 7; otherwise another limit would
just bite you in the ass some other day instead ;-)

A linked list could be used to optimize (dynamic) memory usage, but an
(amortized) array would probably provide a faster linear search. I guess,
though, that simplicity of the code and the external functionality are the
most important demands for this choice
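
The amortized behaviour mentioned here can be seen directly with
std::vector: capacity grows geometrically, so the number of reallocations
stays logarithmic in the element count. A small illustrative sketch (not
Cygwin code):

```cpp
#include <cstddef>
#include <vector>

// Count the reallocations std::vector performs while growing to n
// elements.  Geometric capacity growth keeps this logarithmic in n,
// which is what makes push_back amortized O(1).
static int count_reallocations(int n)
{
    std::vector<int> v;
    std::size_t cap = v.capacity();
    int reallocs = 0;
    for (int i = 0; i < n; ++i) {
        v.push_back(i);
        if (v.capacity() != cap) {      // a reallocation happened
            ++reallocs;
            cap = v.capacity();
        }
    }
    return reallocs;
}
```

For 1024 insertions this yields on the order of a dozen reallocations rather
than 1024, which is the "no surprise hiccups" property in practice.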

I'm looking forward to a future release where there's no limit on children

Keep up the good work

Best regards,
Kristian

> Corinna
> 
> --
> Corinna Vinschen
> Cygwin Maintainer
> --
> Problem reports:      https://cygwin.com/problems.html
> FAQ:                  https://cygwin.com/faq/
> Documentation:        https://cygwin.com/docs.html
> Unsubscribe info:     https://cygwin.com/ml/#unsubscribe-simple


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Sv: Limit for number of child processes
  2020-08-27 12:17   ` Sv: " sten.kristian.ivarsson
@ 2020-08-28  1:42     ` Ken Brown
  2020-08-28  8:35       ` Corinna Vinschen
  2020-08-28  8:38       ` Sv: " sten.kristian.ivarsson
  0 siblings, 2 replies; 12+ messages in thread
From: Ken Brown @ 2020-08-28  1:42 UTC (permalink / raw)
  To: sten.kristian.ivarsson, cygwin

On 8/27/2020 8:17 AM, Kristian Ivarsson via Cygwin wrote:
> Hi Corinna
> 
>>> Dear cygwin folks
>>>
>>> It seems like there's a limit of the number of possible child
>>> processes defined to 256 with 'NPROCS' in //winsup/cygwin/child_info.h
>>> used in 'cprocs' in //winsup/cygwin/sigproc.cc
>>>
>>> 256 is quite few possible children in an enterprise environment and
>>> perhaps the limit should be limited by the physical resources or
>> possibly Windows ?
>>
>> The info has to be kept available in the process itself so we need this
>> array of NPROCS * sizeof (pinfo).
>>
>> Of course, there's no reason to use a static array, the code could just as
>> well use a dynamically allocated array or a linked list.  It's just not
>> the way it is right now and would need a patch or rewrite.
>>
>> As for the static array, sizeof pinfo is 64, so the current size of the
>> array is just 16K.  We could easily bump it to 64K with NPROCS raised to
>> 1024 for the next Cygwin release, at least on 64 bit.
>> I don't think we should raise this limit for 32 bit Cygwin, which is kind
>> of EOL anyway, given the massive restrictions.
> 
> I don't know the exact purpose of this and how the cprocs is used, but I'd
> prefer something totally dynamic 7 days out of 7 or otherwise another limit
> would just bite you in the ass some other day instead ;-)
> 
> A linked list could be used if you wanna optimize (dynamic) memory usage but
> an (amortized) array would probably provide faster linear search but I guess
> simplicity of the code and external functionality is the most important
> demands for this choice

Any change here (aside from just increasing NPROCS) would have to be done with 
care to avoid a performance hit.  I looked at the history of changes to 
sigproc.cc and found commit 4ce15a49 from 2001, in which a static array 
similar to cprocs was replaced by a dynamically allocated buffer in order to 
save DLL space.  This was reverted 3 days later (commit e2ea684e) because of 
performance issues.

Ken

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Sv: Limit for number of child processes
  2020-08-28  1:42     ` Ken Brown
@ 2020-08-28  8:35       ` Corinna Vinschen
  2020-08-28  8:38       ` Sv: " sten.kristian.ivarsson
  1 sibling, 0 replies; 12+ messages in thread
From: Corinna Vinschen @ 2020-08-28  8:35 UTC (permalink / raw)
  To: cygwin

On Aug 27 21:42, Ken Brown via Cygwin wrote:
> On 8/27/2020 8:17 AM, Kristian Ivarsson via Cygwin wrote:
> > Hi Corinna
> > 
> > > > Dear cygwin folks
> > > > 
> > > > It seems like there's a limit of the number of possible child
> > > > processes defined to 256 with 'NPROCS' in //winsup/cygwin/child_info.h
> > > > used in 'cprocs' in //winsup/cygwin/sigproc.cc
> > > > 
> > > > 256 is quite few possible children in an enterprise environment and
> > > > perhaps the limit should be limited by the physical resources or
> > > possibly Windows ?
> > > 
> > > The info has to be kept available in the process itself so we need this
> > > array of NPROCS * sizeof (pinfo).
> > > 
> > > Of course, there's no reason to use a static array, the code could just as
> > > well use a dynamically allocated array or a linked list.  It's just not
> > > the way it is right now and would need a patch or rewrite.
> > > 
> > > As for the static array, sizeof pinfo is 64, so the current size of the
> > > array is just 16K.  We could easily bump it to 64K with NPROCS raised to
> > > 1024 for the next Cygwin release, at least on 64 bit.
> > > I don't think we should raise this limit for 32 bit Cygwin, which is kind
> > > of EOL anyway, given the massive restrictions.
> > 
> > I don't know the exact purpose of this and how the cprocs is used, but I'd
> > prefer something totally dynamic 7 days out of 7 or otherwise another limit
> > would just bite you in the ass some other day instead ;-)
> > 
> > A linked list could be used if you wanna optimize (dynamic) memory usage but
> > an (amortized) array would probably provide faster linear search but I guess
> > simplicity of the code and external functionality is the most important
> > demands for this choice
> 
> Any change here (aside from just increasing NPROCS) would have to be done
> with care to avoid a performance hit.  I looked at the history of changes to
> sigproc.cc, and I found commit 4ce15a49 in 2001 in which a static array
> something like cprocs was replaced by a dynamically allocated buffer in
> order to save DLL space.  This was reverted 3 days later (commit e2ea684e)
> because of performance issues.

Yup, that's one of the problems to keep in mind.


Corinna

-- 
Corinna Vinschen
Cygwin Maintainer

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Sv: Sv: Limit for number of child processes
  2020-08-28  1:42     ` Ken Brown
  2020-08-28  8:35       ` Corinna Vinschen
@ 2020-08-28  8:38       ` sten.kristian.ivarsson
  2020-08-28 12:29         ` Ken Brown
  1 sibling, 1 reply; 12+ messages in thread
From: sten.kristian.ivarsson @ 2020-08-28  8:38 UTC (permalink / raw)
  To: 'Ken Brown', cygwin

> > Hi Corinna
> >
> >>> Dear cygwin folks
> >>>
> >>> It seems like there's a limit of the number of possible child
> >>> processes defined to 256 with 'NPROCS' in
> >>> //winsup/cygwin/child_info.h used in 'cprocs' in
> >>> //winsup/cygwin/sigproc.cc
> >>>
> >>> 256 is quite few possible children in an enterprise environment and
> >>> perhaps the limit should be limited by the physical resources or
> >> possibly Windows ?
> >>
> >> The info has to be kept available in the process itself so we need
> >> this array of NPROCS * sizeof (pinfo).
> >>
> >> Of course, there's no reason to use a static array, the code could
> >> just as well use a dynamically allocated array or a linked list.
> >> It's just not the way it is right now and would need a patch or
> rewrite.
> >>
> >> As for the static array, sizeof pinfo is 64, so the current size of
> >> the array is just 16K.  We could easily bump it to 64K with NPROCS
> >> raised to
> >> 1024 for the next Cygwin release, at least on 64 bit.
> >> I don't think we should raise this limit for 32 bit Cygwin, which is
> >> kind of EOL anyway, given the massive restrictions.
> >
> > I don't know the exact purpose of this and how the cprocs is used, but
> > I'd prefer something totally dynamic 7 days out of 7 or otherwise
> > another limit would just bite you in the ass some other day instead
> > ;-)
> >
> > A linked list could be used if you wanna optimize (dynamic) memory
> > usage but an (amortized) array would probably provide faster linear
> > search but I guess simplicity of the code and external functionality
> > is the most important demands for this choice
> 
> Any change here (aside from just increasing NPROCS) would have to be done
> with care to avoid a performance hit.  I looked at the history of changes
> to sigproc.cc, and I found commit 4ce15a49 in 2001 in which a static array
> something like cprocs was replaced by a dynamically allocated buffer in
> order to save DLL space.  This was reverted 3 days later (commit e2ea684e)
> because of performance issues.


I wonder what kind of performance issue? Nevertheless, that old commit didn't
make the number of possible children any more dynamic: it was still
restricted to NPROCS (or ZOMBIEMAX/NZOMBIES), just not allocated on the
stack. But yes, accessing dynamically allocated memory can theoretically be
slower than stack-allocated memory, though without measuring it one cannot
tell ;-) Today's hardware is pretty good at prefetching etc., but as I said,
it needs measurements

Looking at the code, I didn't see that many searches (though the existing
ones could be called heavily), and if that is a bottleneck it could be a good
idea to keep the children sorted so that a binary search can replace the
linear search (which would, theoretically, give random-access containers a
huge advantage)
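
A minimal sketch of that sorted-children idea, assuming a plain pid array
(all names hypothetical, not the actual sigproc.cc structures):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical child table that keeps pids sorted, so that lookup is
// O(log n) via binary search instead of a linear scan.
struct child_table
{
    std::vector<int> pids;   // kept sorted ascending

    void add(int pid)
    {
        // Insert at the position that preserves sorted order.
        pids.insert(std::lower_bound(pids.begin(), pids.end(), pid), pid);
    }

    bool contains(int pid) const
    {
        return std::binary_search(pids.begin(), pids.end(), pid);
    }

    void remove(int pid)
    {
        auto it = std::lower_bound(pids.begin(), pids.end(), pid);
        if (it != pids.end() && *it == pid)
            pids.erase(it);
    }
};
```

The trade-off is that insertion and removal now shift elements, so this only
pays off if lookups dominate, which is exactly the measurement question
raised above.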

I'm confident that you'll find the best solution to this, but 256 children is
not enough, at least for us

BTW, when the limit is reached, errno is set to EAGAIN, but would ENOMEM be
more appropriate (regardless of whether NPROCS is reached or malloc returns
NULL)?

Best regards,
Kristian

> Ken


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Sv: Sv: Limit for number of child processes
  2020-08-28  8:38       ` Sv: " sten.kristian.ivarsson
@ 2020-08-28 12:29         ` Ken Brown
  2020-08-28 13:36           ` Corinna Vinschen
  0 siblings, 1 reply; 12+ messages in thread
From: Ken Brown @ 2020-08-28 12:29 UTC (permalink / raw)
  To: sten.kristian.ivarsson, cygwin

On 8/28/2020 4:38 AM, sten.kristian.ivarsson@gmail.com wrote:
>>> Hi Corinna
>>>
>>>>> Dear cygwin folks
>>>>>
>>>>> It seems like there's a limit of the number of possible child
>>>>> processes defined to 256 with 'NPROCS' in
>>>>> //winsup/cygwin/child_info.h used in 'cprocs' in
>>>>> //winsup/cygwin/sigproc.cc
>>>>>
>>>>> 256 is quite few possible children in an enterprise environment and
>>>>> perhaps the limit should be limited by the physical resources or
>>>> possibly Windows ?
>>>>
>>>> The info has to be kept available in the process itself so we need
>>>> this array of NPROCS * sizeof (pinfo).
>>>>
>>>> Of course, there's no reason to use a static array, the code could
>>>> just as well use a dynamically allocated array or a linked list.
>>>> It's just not the way it is right now and would need a patch or
>> rewrite.
>>>>
>>>> As for the static array, sizeof pinfo is 64, so the current size of
>>>> the array is just 16K.  We could easily bump it to 64K with NPROCS
>>>> raised to
>>>> 1024 for the next Cygwin release, at least on 64 bit.
>>>> I don't think we should raise this limit for 32 bit Cygwin, which is
>>>> kind of EOL anyway, given the massive restrictions.
>>>
>>> I don't know the exact purpose of this and how the cprocs is used, but
>>> I'd prefer something totally dynamic 7 days out of 7 or otherwise
>>> another limit would just bite you in the ass some other day instead
>>> ;-)
>>>
>>> A linked list could be used if you wanna optimize (dynamic) memory
>>> usage but an (amortized) array would probably provide faster linear
>>> search but I guess simplicity of the code and external functionality
>>> is the most important demands for this choice
>>
>> Any change here (aside from just increasing NPROCS) would have to be done
>> with care to avoid a performance hit.  I looked at the history of changes
>> to sigproc.cc, and I found commit 4ce15a49 in 2001 in which a static array
>> something like cprocs was replaced by a dynamically allocated buffer in
>> order to save DLL space.  This was reverted 3 days later (commit e2ea684e)
>> because of performance issues.
> 
> 
> I wonder what kind of performance issue ? Nevertheless, that old commit
> didn't make the number of possible children more dynamic though, when it was
> still restricted to NPROCS (or ZOMBIEMAX/NZOMBIES), it was just not
> allocated on the stack. But yes, accessing dynamic allocated memory can
> theoretically be slower than stack allocated memory, but without measuring
> it one cannot tell ;-) Todays hardware is pretty good at prefetching etc,
> but as I said, it needs measurements

I don't know for sure, but I doubt that it had anything to do with memory access. 
My guess is that the performance hit came from the need to free the allocated 
memory after every fork call (see sigproc_fixup_after_fork).

Ken

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Sv: Sv: Limit for number of child processes
  2020-08-28 12:29         ` Ken Brown
@ 2020-08-28 13:36           ` Corinna Vinschen
  2020-08-28 14:09             ` Sv: " sten.kristian.ivarsson
  2020-08-28 16:02             ` Ken Brown
  0 siblings, 2 replies; 12+ messages in thread
From: Corinna Vinschen @ 2020-08-28 13:36 UTC (permalink / raw)
  To: cygwin

On Aug 28 08:29, Ken Brown via Cygwin wrote:
> On 8/28/2020 4:38 AM, sten.kristian.ivarsson@gmail.com wrote:
> > > > > > It seems like there's a limit of the number of possible child
> > > > > > processes defined to 256 with 'NPROCS' in
> > > > > > //winsup/cygwin/child_info.h used in 'cprocs' in
> > > > > > //winsup/cygwin/sigproc.cc
> > > > > > 
> > > > > > 256 is quite few possible children in an enterprise environment and
> > > > > > perhaps the limit should be limited by the physical resources or
> > > > > possibly Windows ?
> > > > > 
> > > > > The info has to be kept available in the process itself so we need
> > > > > this array of NPROCS * sizeof (pinfo).
> > > > > 
> > > > > Of course, there's no reason to use a static array, the code could
> > > > > just as well use a dynamically allocated array or a linked list.
> > > > > It's just not the way it is right now and would need a patch or
> > > rewrite.
> > > > > [...]
> > > > A linked list could be used if you wanna optimize (dynamic) memory
> > > > usage but an (amortized) array would probably provide faster linear
> > > > search but I guess simplicity of the code and external functionality
> > > > is the most important demands for this choice
> > > 
> > > Any change here (aside from just increasing NPROCS) would have to be done
> > > with care to avoid a performance hit.  I looked at the history of changes
> > > to sigproc.cc, and I found commit 4ce15a49 in 2001 in which a static array
> > > something like cprocs was replaced by a dynamically allocated buffer in
> > > order to save DLL space.  This was reverted 3 days later (commit e2ea684e)
> > > because of performance issues.
> > 
> > I wonder what kind of performance issue ? 
> > [...]
> I don't know for sure, but I doubt if it had anything to do with memory
> access. My guess is that the performance hit came from the need to free the
> allocated memory after every fork call (see sigproc_fixup_after_fork).

Either way, I rewrote this partially so we now have a default array size
for 255 child processes on 32 bit and 1023 child processes on 64 bit.

The new code is mainly a minor update: it converts the code that directly
accessed the array into using a class, encapsulating the mechanism used
under the hood behind a class barrier and access methods.

As POC, I added a bit of code to maintain a second array, which is only
allocated (using HeapAlloc so as not to spill into the child processes)
if the default array overflows.  This second array adds room for another
1023 (32 bit) or 4095 (64 bit) child processes, raising the number of
max child processes per process to 1278 on 32 bit and 5118 on 64 bit.

My STC just forking like crazy overflowed my 4 Gigs RAM + 2.5 Gigs
pagefile after roughly 1450 child processes.  I'm pretty confident that
this POC implementation is sufficient for a while, even in enterprise
scenarios.

And if not, we can now easily tweak the numbers without having to 
tweak much of the code.
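
A rough illustration of the two-tier scheme described above, with calloc
standing in for HeapAlloc and all names and sizes hypothetical (the real
implementation lives in sigproc.cc; this sketch assumes a trivially
copyable element type):

```cpp
#include <cstdlib>

// Two-tier child table: a fixed default array covers the common case;
// an overflow array is heap-allocated lazily, and only if the default
// array fills up.  Default sizes mirror the 64-bit numbers above.
template <typename T, int DEF = 1023, int OVF = 4095>
struct two_tier
{
    T defarray[DEF];
    T *overflow = nullptr;
    int count = 0;

    static constexpr int capacity() { return DEF + OVF; }

    bool add(const T &item)
    {
        if (count >= capacity())
            return false;                    // hard limit reached
        if (count < DEF)
            defarray[count] = item;          // common, cheap path
        else {
            if (!overflow)                   // allocate at most once
                overflow = static_cast<T *>(calloc(OVF, sizeof(T)));
            if (!overflow)
                return false;                // allocation failed
            overflow[count - DEF] = item;
        }
        ++count;
        return true;
    }

    T &operator[](int i)
    {
        return i < DEF ? defarray[i] : overflow[i - DEF];
    }
};
```

Processes that stay under the default size never touch the allocator at all,
which is the point of the design: the fast path is unchanged and the limit
only costs memory when actually exceeded.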

For testing purposes I uploaded a developer snapshot to
https://cygwin.com/snapshots/


Corinna

-- 
Corinna Vinschen
Cygwin Maintainer

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Sv: Sv: Sv: Limit for number of child processes
  2020-08-28 13:36           ` Corinna Vinschen
@ 2020-08-28 14:09             ` sten.kristian.ivarsson
  2020-08-28 16:02             ` Ken Brown
  1 sibling, 0 replies; 12+ messages in thread
From: sten.kristian.ivarsson @ 2020-08-28 14:09 UTC (permalink / raw)
  To: cygwin

Hi all

> > > > > > > It seems like there's a limit of the number of possible
> > > > > > > child processes defined to 256 with 'NPROCS' in
> > > > > > > //winsup/cygwin/child_info.h used in 'cprocs' in
> > > > > > > //winsup/cygwin/sigproc.cc
> > > > > > >
> > > > > > > 256 is quite few possible children in an enterprise
> > > > > > > environment and perhaps the limit should be limited by the
> > > > > > > physical resources or
> > > > > > possibly Windows ?
> > > > > >
> > > > > > The info has to be kept available in the process itself so we
> > > > > > need this array of NPROCS * sizeof (pinfo).
> > > > > >
> > > > > > Of course, there's no reason to use a static array, the code
> > > > > > could just as well use a dynamically allocated array or a linked
> list.
> > > > > > It's just not the way it is right now and would need a patch
> > > > > > or
> > > > rewrite.
> > > > > > [...]
> > > > > A linked list could be used if you wanna optimize (dynamic)
> > > > > memory usage but an (amortized) array would probably provide
> > > > > faster linear search but I guess simplicity of the code and
> > > > > external functionality is the most important demands for this
> > > > > choice
> > > >
> > > > Any change here (aside from just increasing NPROCS) would have to
> > > > be done with care to avoid a performance hit.  I looked at the
> > > > history of changes to sigproc.cc, and I found commit 4ce15a49 in
> > > > 2001 in which a static array something like cprocs was replaced by
> > > > a dynamically allocated buffer in order to save DLL space.  This
> > > > was reverted 3 days later (commit e2ea684e) because of performance
> issues.
> > >
> > > I wonder what kind of performance issue ?
> > > [...]
> > I don't know for sure, but I doubt if it had anything to do with
> > memory access. My guess is that the performance hit came from the need
> > to free the allocated memory after every fork call (see
> sigproc_fixup_after_fork).
> 
> Either way, I rewrote this partially so we now have a default array size
> for 255 child processes on 32 bit and 1023 child processes on 64 bit.
> 
> The new code is mainly a minor update in that it convertes the code
> directly accessing stuff into using a class, encapsulating the mechanism
> used under the hood behind a class barrier and access methods.
> 
> As POC, I added a bit of code to maintain a second array, which is only
> allocated (using HeapAlloc so as not to spill into the child processes) if
> the default array overflows.  This second array adds room for another
> 1023 (32 bit) or 4095 (64 bit) child processes, raising the number of max
> child processes per process to 1278 on 32 bit and 5118 on 64 bit.
> 
> My STC just forking like crazy overflowed my 4 Gigs RAM + 2.5 Gigs
> pagefile after roughly 1450 child processes.  I'm pretty confident that
> this POC implementation is sufficient for a while, even in enterprise
> scenarios.
> 
> And if not, we can now easily tweak the numbers without having to tweak
> much of the code.


That is super

Otherwise, amortized dynamic arrays (such as C++ std::vector) do provide
deterministic complexity and are proven to work well for most use cases,
with no surprise hiccups (hence, amortized). I guess the biggest problem is
(if?) how to determine when to shrink the allocated memory once the
number of children is reduced
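
One common answer to that shrink question is a hysteresis policy: only
release memory once usage drops well below capacity, so that growth and
shrink thresholds cannot oscillate. A sketch (the thresholds are arbitrary
assumptions, not anything Cygwin does):

```cpp
#include <vector>

// Shrink only when the vector falls below a quarter of its capacity
// (and capacity is non-trivial), so that a size hovering around a
// growth boundary never triggers repeated realloc/free cycles.
static void maybe_shrink(std::vector<int> &v)
{
    if (v.capacity() >= 16 && v.size() < v.capacity() / 4)
        std::vector<int>(v.begin(), v.end()).swap(v);  // reallocate tight
}
```

The gap between the grow trigger (size == capacity) and the shrink trigger
(size < capacity/4) is what prevents thrashing when children come and go.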

Best regards,
Kristian



> For testing purposes I uploaded a developer snapshot to
> https://cygwin.com/snapshots/
> 
> 
> Corinna
> 
> --
> Corinna Vinschen
> Cygwin Maintainer


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Sv: Sv: Limit for number of child processes
  2020-08-28 13:36           ` Corinna Vinschen
  2020-08-28 14:09             ` Sv: " sten.kristian.ivarsson
@ 2020-08-28 16:02             ` Ken Brown
  2020-08-28 17:39               ` Corinna Vinschen
  1 sibling, 1 reply; 12+ messages in thread
From: Ken Brown @ 2020-08-28 16:02 UTC (permalink / raw)
  To: cygwin

On 8/28/2020 9:36 AM, Corinna Vinschen wrote:
> As POC, I added a bit of code to maintain a second array, which is only
> allocated (using HeapAlloc so as not to spill into the child processes)

Should there be a call to HeapFree somewhere, or is there some reason this isn't 
needed?

Ken

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Sv: Sv: Limit for number of child processes
  2020-08-28 16:02             ` Ken Brown
@ 2020-08-28 17:39               ` Corinna Vinschen
  2020-08-29 10:01                 ` Ken Brown
  0 siblings, 1 reply; 12+ messages in thread
From: Corinna Vinschen @ 2020-08-28 17:39 UTC (permalink / raw)
  To: cygwin

On Aug 28 12:02, Ken Brown via Cygwin wrote:
> On 8/28/2020 9:36 AM, Corinna Vinschen wrote:
> > As POC, I added a bit of code to maintain a second array, which is only
> > allocated (using HeapAlloc so as not to spill into the child processes)
> 
> Should there be a call to HeapFree somewhere, or is there some reason this
> isn't needed?

It's local memory in a single process, and it's allocated only once.
There's no reason to shrink or free this ever again.


Corinna

-- 
Corinna Vinschen
Cygwin Maintainer

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Sv: Sv: Limit for number of child processes
  2020-08-28 17:39               ` Corinna Vinschen
@ 2020-08-29 10:01                 ` Ken Brown
  0 siblings, 0 replies; 12+ messages in thread
From: Ken Brown @ 2020-08-29 10:01 UTC (permalink / raw)
  To: cygwin

On 8/28/2020 1:39 PM, Corinna Vinschen wrote:
> On Aug 28 12:02, Ken Brown via Cygwin wrote:
>> On 8/28/2020 9:36 AM, Corinna Vinschen wrote:
>>> As POC, I added a bit of code to maintain a second array, which is only
>>> allocated (using HeapAlloc so as not to spill into the child processes)
>>
>> Should there be a call to HeapFree somewhere, or is there some reason this
>> isn't needed?
> 
> It's local memory in a single process, and it's allocated only once.
> There's no reason to shrink or free this ever again.

It took a night's sleep for this to fully sink in, but I get it now.  A class 
that allocates memory would normally have a destructor that frees the memory. 
But only one instance of this class is created per process, so there's no need 
for the destructor.
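
That design point can be sketched as follows (hypothetical names, not the
actual Cygwin class):

```cpp
#include <cstdlib>

// Process-lifetime table: the buffer is allocated at most once per
// process and never freed -- the OS reclaims it at process exit, so a
// destructor calling free() would be pure overhead.
struct proc_table
{
    int *slots = nullptr;
    int cap = 0;

    bool init(int n)
    {
        if (!slots) {                    // allocate only once per process
            slots = static_cast<int *>(calloc(n, sizeof *slots));
            cap = slots ? n : 0;
        }
        return slots != nullptr;
    }
    // intentionally no destructor: single instance, process lifetime
};
```

This is the usual exception to "every allocation needs a matching free":
a singleton whose lifetime equals the process's.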

I'll send a patch with a comment in case you think it might be helpful to future 
readers of the code.

Ken

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2020-08-29 10:01 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-26 12:00 Limit for number of child processes sten.kristian.ivarsson
2020-08-26 17:57 ` Corinna Vinschen
2020-08-27 12:17   ` Sv: " sten.kristian.ivarsson
2020-08-28  1:42     ` Ken Brown
2020-08-28  8:35       ` Corinna Vinschen
2020-08-28  8:38       ` Sv: " sten.kristian.ivarsson
2020-08-28 12:29         ` Ken Brown
2020-08-28 13:36           ` Corinna Vinschen
2020-08-28 14:09             ` Sv: " sten.kristian.ivarsson
2020-08-28 16:02             ` Ken Brown
2020-08-28 17:39               ` Corinna Vinschen
2020-08-29 10:01                 ` Ken Brown

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).