public inbox for ecos-discuss@sourceware.org
* [ECOS] Scheduler lock in Doug Lea's malloc implementation
@ 2006-09-07 15:35 bob.koninckx
  2006-09-07 19:40 ` Bart Veer
  0 siblings, 1 reply; 7+ messages in thread
From: bob.koninckx @ 2006-09-07 15:35 UTC (permalink / raw)
  To: ecos-discuss

Hello list,

A couple of weeks ago I posted the question below, but there has been no reaction so far. I can hardly imagine that none of the eCos maintainers can give some insight on this, so I guess the question somehow went unnoticed. It doesn't hurt to ask again.

Thanks,

Bob


Hi,

Can someone explain to me why the default memory allocator locks the scheduler? I would expect it to be legitimate for a low-priority thread with no real-time constraints to do dynamic memory allocation without influencing the real-time behaviour of higher-priority threads that do have real-time constraints. We experienced timing problems in high-priority real-time threads that do not touch the heap at all, caused by dynamic memory allocation in a lower-priority thread without real-time constraints, and traced it down to the code snippet from packages/services/memalloc/common/current/include/mempolt2.inl below:

// get some memory; wait if none available
template <class T>
inline cyg_uint8 *
Cyg_Mempolt2<T>::alloc( cyg_int32 size )
{
    CYG_REPORT_FUNCTION();

    // Prevent preemption
    Cyg_Scheduler::lock();
    CYG_ASSERTCLASS( this, "Bad this pointer");

    cyg_uint8 *ret;
    ret = pool.try_alloc( size );
    if ( ret ) {
        Cyg_Scheduler::unlock();
        CYG_ASSERTCLASS( this, "Bad this pointer");
        CYG_REPORT_RETVAL( ret );
        return ret;
    }

    Cyg_Thread *self = Cyg_Thread::self();

    Mempolt2WaitInfo waitinfo( size );

    CYG_MEMALLOC_FAIL(size);

    self->set_wait_info( (CYG_ADDRWORD)&waitinfo );
    self->set_sleep_reason( Cyg_Thread::WAIT );
    self->sleep();
    queue.enqueue( self );

    CYG_ASSERT( 1 == Cyg_Scheduler::get_sched_lock(),
                "Called with non-zero scheduler lock");

    // Unlock scheduler and allow other threads to run
    Cyg_Scheduler::unlock();
[...]

Thanks,

Bob






* Re: [ECOS] Scheduler lock in Doug Lea's malloc implementation
  2006-09-07 15:35 [ECOS] Scheduler lock in Doug Lea's malloc implementation bob.koninckx
@ 2006-09-07 19:40 ` Bart Veer
  2006-09-08 20:33   ` [ECOS] Scheduler lock in Doug Lea's malloc implementation - be real careful Roy E Richardson
  0 siblings, 1 reply; 7+ messages in thread
From: Bart Veer @ 2006-09-07 19:40 UTC (permalink / raw)
  To: bob.koninckx; +Cc: ecos-discuss

>>>>> "Bob" == bob koninckx <bob.koninckx@o-3s.com> writes:

    Bob> Can someone explain to me why the default memory allocator
    Bob> locks the scheduler? I would expect it to be legitimate for a low
    Bob> priority thread with no real time constraints to do dynamic
    Bob> memory allocation without influencing the real-time behaviour
    Bob> of threads with real-time constraints, as long as they have a
    Bob> higher priority.

I was not involved in the initial implementation of the memory
allocators, so this is not gospel. I do remember that the uITRON
compatibility layer imposed various requirements.

I believe the theory is that the memory allocators should be very
fast. If you use an alternative locking mechanism like a mutex, both
the lock and unlock functions will themselves lock and unlock the
scheduler. When you have a memory allocator that takes a comparable
amount of time to a mutex operation, you are better off locking the
scheduler around the memory allocation rather than going through mutex
lock and unlock calls.

If you are using a fixed block memory pool as per mfiximpl.hxx then
that theory may well be correct. If you are using some other allocator
such as dlmalloc then I suspect the theory is wrong and for such
allocators the code should use a mutex instead, but I have not done
any measurements. There are going to be complications, e.g.
average vs. maximum times.
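
As a concrete (purely illustrative) sketch of the mutex-based
alternative being discussed, modelled on the mempolt2.inl fast path
quoted earlier -- the class name and blocking path are hypothetical,
not the actual eCos implementation:

    // Hypothetical mutex-protected pool template, for comparison only.
    #include <cyg/kernel/mutex.hxx>
    #include <cyg/infra/cyg_type.h>

    template <class T>
    class Cyg_Mempolt2_Mutexed {
        T         pool;     // underlying allocator, as in mempolt2
        Cyg_Mutex mutex;    // replaces the scheduler lock
    public:
        cyg_uint8 *alloc( cyg_int32 size )
        {
            // Cyg_Mutex::lock()/unlock() each briefly lock and unlock
            // the scheduler themselves -- the overhead described above.
            mutex.lock();
            cyg_uint8 *ret = pool.try_alloc( size );
            mutex.unlock();
            // blocking path (pool exhausted) omitted for brevity
            return ret;
        }
    };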

Bart

-- 
Bart Veer                       eCos Configuration Architect
http://www.ecoscentric.com/     The eCos and RedBoot experts


* Re: [ECOS] Scheduler lock in Doug Lea's malloc implementation - be real careful
  2006-09-07 19:40 ` Bart Veer
@ 2006-09-08 20:33   ` Roy E Richardson
  2006-09-09 13:26     ` Tim michals
  0 siblings, 1 reply; 7+ messages in thread
From: Roy E Richardson @ 2006-09-08 20:33 UTC (permalink / raw)
  To: bob.koninckx, Bart Veer; +Cc: ecos-discuss


----- Original Message ----- 
From: "Bart Veer" <bartv@ecoscentric.com>
To: <bob.koninckx@o-3s.com>
Cc: <ecos-discuss@sources.redhat.com>
Sent: Thursday, September 07, 2006 12:40 PM
Subject: Re: [ECOS] Scheduler lock in Doug Lea's malloc implementation

> [Bart Veer's reply of 2006-09-07 19:40, quoted in full; trimmed -- see above.]

The data that describes which portion(s) of the pool are in use is
common to all threads accessing the pool. Each successful
allocate/deallocate request results in at least one read of a portion
of that common data, a varying number of instructions to evaluate it,
and finally updates to the affected data value(s).

This "read/process/write" sequence is commonly referred to as a
"critical instruction sequence": it requires a guarantee that
absolutely precludes any possibility of a second operation occurring
while a prior one is in progress. Failing to provide that guarantee
allows sporadic, unpredictable timing errors that are extremely
difficult to recognize, let alone identify and correct.
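
As a minimal illustration of the hazard (a simplified free list for
illustration only, not the actual dlmalloc data structures):

    struct Block { Block *next; };
    static Block *head;           // shared free-list head

    // Illustrative only: popping a block from a shared free list.
    // 'head' stands in for the allocator's shared bookkeeping data.
    Block *alloc_block(void)
    {
        Block *b = head;          // read shared data
        if (b != NULL)
            head = b->next;       // write shared data; if a second
                                  // thread runs between the read and
                                  // this write, both threads can be
                                  // handed the same block
        return b;
    }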

For Mr. Lea's malloc, the use of "scheduler lock/unlock" sequences is
a minimal protection, valid ONLY IF one can be certain that no
interrupt routine uses either allocate or deallocate. As I recall,
there is a reference to this in the source comments, but I don't
recall whether free() / malloc() requests during an ISR will generate
an exception if attempted.


* Re: [ECOS] Scheduler lock in Doug Lea's malloc implementation - be real careful
  2006-09-08 20:33   ` [ECOS] Scheduler lock in Doug Lea's malloc implementation - be real careful Roy E Richardson
@ 2006-09-09 13:26     ` Tim michals
  0 siblings, 0 replies; 7+ messages in thread
From: Tim michals @ 2006-09-09 13:26 UTC (permalink / raw)
  To: bob.koninckx; +Cc: ecos-discuss

IMHO, you should use memory pools
(http://ecos.sourceware.org/docs-latest/ref/memalloc.html). My
understanding is that most OSes use the same allocation scheme, so if
real-time behaviour is required, each task must control its own memory
allocation pool, as in the sketch below.
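
A minimal sketch of such a per-task pool, assuming the standard eCos
kernel C API (cyg_mempool_fix_*) from <cyg/kernel/kapi.h>; block and
pool sizes are illustrative:

    #include <cyg/kernel/kapi.h>

    #define BLOCK_SIZE 64
    #define NUM_BLOCKS 32

    static char            pool_mem[NUM_BLOCKS * BLOCK_SIZE];
    static cyg_mempool_fix pool_obj;
    static cyg_handle_t    pool;

    void task_pool_init(void)
    {
        // Each task creates and owns its own fixed-block pool, so its
        // allocations never contend with those of other tasks.
        cyg_mempool_fix_create(pool_mem, sizeof(pool_mem), BLOCK_SIZE,
                               &pool, &pool_obj);
    }

    void *task_alloc(void)
    {
        return cyg_mempool_fix_try_alloc(pool);   // non-blocking
    }

    void task_free(void *p)
    {
        cyg_mempool_fix_free(pool, p);
    }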

--- Roy E Richardson <eng_at_play@cox.net> wrote:

> [Roy E Richardson's message of 2006-09-08 20:33, quoting Bart Veer,
> appears in full above and is trimmed here.]



* Re: [ECOS] Scheduler lock in Doug Lea's malloc implementation - be real careful
@ 2006-09-11 13:56 bob.koninckx
  0 siblings, 0 replies; 7+ messages in thread
From: bob.koninckx @ 2006-09-11 13:56 UTC (permalink / raw)
  To: Bart Veer, bob.koninckx; +Cc: ecos-discuss


>
>    Bob> Obviously, you need protection there. What I do not
>    Bob> understand is why it is implemented using scheduler
>    Bob> lock/unlock instead of using mutexes or semaphores. Also the
>    Bob> proposed use of memory pools is not really clear to me (and
>    Bob> it certainly raises portability issues when moving from one
>    Bob> OS to another). As a general rule, I _always_ consider
>    Bob> dynamic memory allocation a non-deterministic process, so I
>    Bob> _never_ use it in time-critical code. It nevertheless seems
>    Bob> odd to me that you shouldn't be allowed to use dynamic memory
>    Bob> allocation in low priority threads with no real-time
>    Bob> requirements at all, just because of timing requirements in
>    Bob> higher priority threads. I see no reason why scanning the
>    Bob> heap could not be interrupted (put on hold for a while) to
>    Bob> give a real-time thread time to finish its job. This is not
>    Bob> possible if you protect things with a big kernel lock, but
>    Bob> would be if searching for memory would be protected by a
>    Bob> mutex.
>
>    Bob> Any comments? Am I making a fundamental reasoning error
>    Bob> here? If not, I am going to try changing the implementation
>    Bob> to see if it solves our problem, and propose a patch.
>
>You seem to have missed my point. Let me try again.


I don't think so. I think we are thinking very much along the same lines. Let me also try again :-)


>The current code behaves as follows:
>
>    lock scheduler
>    alloc() or free() or realloc()
>    unlock scheduler


Which locks the scheduler for a _non-deterministic_ amount of time, depending on the size of the heap, its fragmentation, etc.


>
>Let T_alloc be the worst case time spent in the alloc(), free() or
>realloc() routines, noting that the realloc() implementations in the
>heap will fail if the chunk cannot be resized and the alternative
>alloc()/memcpy()/free() implementation happens elsewhere. T_alloc will
>depend on factors like heap fragmentation so ideally would be measured
>in the context of a real application, but some of testcases in the
>memalloc package probably come close enough.
>
>What you are proposing is:
>
>    lock_mutex
>      lock scheduler
>      mutex lock code
>      unlock scheduler
>    alloc() or free() or realloc()
>    unlock_mutex
>      lock scheduler
>      mutex unlock code
>      unlock scheduler
>


Indeed


>Let T_lock be the time spent in the mutex lock code and T_unlock the
>time spent in the mutex unlock code. These should probably be measured
>for the case where the mutex is contested, priority inversion
>protection triggers, and there are threads blocked on the mutex, not
>for the simple uncontested case. The other important number is T_max -
>the maximum time the system will spend with the scheduler locked
>because of other code. This is rather tricky to measure and will
>depend in part on the application and its I/O behaviour.
>If T_alloc is comparable to max(T_lock, T_unlock) then there is
>nothing to be gained by using a mutex rather than the scheduler lock.
>You are adding cpu cycle overhead to the memory allocation routines
>for no improvement in real-time performance.

Agree

>If T_alloc < T_max, then switching from a scheduler lock to a mutex
>lock/unlock will not change the maximum dispatch latency and will not
>affect the worst case real-time characteristics. It may still affect
>the average dispatch latency.

Agree

>If T_alloc == T_max, i.e. the worst case dispatch latency is caused by
>the memory allocation code, then we have a problem that needs
>addressing.

Agree

>I believe the author of the original code assumed that T_alloc was
>comparable to max(T_lock, T_unlock), or at least that (T_alloc < T_max).
>Hence using a scheduler lock saved some cpu cycles for every memory
>allocation operation without affecting real-time behaviour. I suspect
>that assumption is false but do not have any numbers to prove it.

From what we see happening in our application, I am almost sure that it is false.

>There are several memory allocators, but the ones we are interested in
>are the fixed block, variable block and dlmalloc ones. If for all
>three we have T_alloc >> MAX(T_lock, T_unlock) then we should probably
>switch to using a mutex by default on the grounds of improving average
>dispatch latency. If for all three we have T_alloc == T_max then we
>must switch to using a mutex by default to reduce the maximum dispatch
>latency. Unfortunately this requires measuring T_max.

Anything we can do to provide you with the numbers you need?

>If on the other hand we find that for say the fixed memory allocator
>T_alloc is much the same as MAX(T_lock, T_unlock) but dlmalloc behaves
>very differently then we would need to introduce new templates, with
>the old ones still using the scheduler lock and the new ones using a
>mutex. 
>
>
>I do not have time right now to experiment and get some real numbers.
>However I would very much like to see such numbers before or
>accompanying any patch that changes the behaviour. Also it would be
>important to run not just the memory allocation package's own tests
>but also the uITRON ones.

I don't have that much time myself. I'd like to hear your ideas on a suitable test for measuring these quantities, so that we can make a sound decision.

Thanks for your input

Bob

>
>Bart
>




* Re: [ECOS] Scheduler lock in Doug Lea's malloc implementation - be real careful
  2006-09-11  7:31 bob.koninckx
@ 2006-09-11 11:01 ` Bart Veer
  0 siblings, 0 replies; 7+ messages in thread
From: Bart Veer @ 2006-09-11 11:01 UTC (permalink / raw)
  To: bob.koninckx; +Cc: ecos-discuss

>>>>> "Bob" == bob koninckx <bob.koninckx@o-3s.com> writes:

    Bob> Obviously, you need protection there. What I do not
    Bob> understand is why it is implemented using scheduler
    Bob> lock/unlock instead of using mutexes or semaphores. Also the
    Bob> proposed use of memory pools is not really clear to me (and
    Bob> it certainly raises portability issues when moving from one
    Bob> OS to another). As a general rule, I _always_ consider
    Bob> dynamic memory allocation a non-deterministic process, so I
    Bob> _never_ use it in time-critical code. It nevertheless seems
    Bob> odd to me that you shouldn't be allowed to use dynamic memory
    Bob> allocation in low priority threads with no real-time
    Bob> requirements at all, just because of timing requirements in
    Bob> higher priority threads. I see no reason why scanning the
    Bob> heap could not be interrupted (put on hold for a while) to
    Bob> give a real-time thread time to finish its job. This is not
    Bob> possible if you protect things with a big kernel lock, but
    Bob> would be if searching for memory would be protected by a
    Bob> mutex.

    Bob> Any comments? Am I making a fundamental reasoning error
    Bob> here? If not, I am going to try changing the implementation
    Bob> to see if it solves our problem, and propose a patch.

You seem to have missed my point. Let me try again.

The current code behaves as follows:

    lock scheduler
    alloc() or free() or realloc()
    unlock scheduler

Let T_alloc be the worst case time spent in the alloc(), free() or
realloc() routines, noting that the realloc() implementations in the
heap will fail if the chunk cannot be resized and the alternative
alloc()/memcpy()/free() implementation happens elsewhere. T_alloc will
depend on factors like heap fragmentation so ideally would be measured
in the context of a real application, but some of the testcases in the
memalloc package probably come close enough.
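
A rough sketch of such a measurement, assuming HAL_CLOCK_READ() is
available from the platform HAL and ignoring timer wrap-around; the
allocation size and iteration count are arbitrary:

    #include <cyg/infra/cyg_type.h>
    #include <cyg/hal/hal_intr.h>   // HAL_CLOCK_READ() on most HALs
    #include <stdlib.h>

    // Record the worst time observed inside malloc().
    static cyg_uint32 t_alloc_max = 0;

    void measure_t_alloc(void)
    {
        int i;
        for (i = 0; i < 10000; i++) {
            cyg_uint32 t0, t1;
            void *p;
            HAL_CLOCK_READ(&t0);
            p = malloc(64);
            HAL_CLOCK_READ(&t1);
            if (p)
                free(p);
            if (t1 - t0 > t_alloc_max)   // wrap-around ignored
                t_alloc_max = t1 - t0;
        }
    }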

What you are proposing is:

    lock_mutex
      lock scheduler
      mutex lock code
      unlock scheduler
    alloc() or free() or realloc()
    unlock_mutex
      lock scheduler
      mutex unlock code
      unlock scheduler

Let T_lock be the time spent in the mutex lock code and T_unlock the
time spent in the mutex unlock code. These should probably be measured
for the case where the mutex is contested, priority inversion
protection triggers, and there are threads blocked on the mutex, not
for the simple uncontested case. The other important number is T_max -
the maximum time the system will spend with the scheduler locked
because of other code. This is rather tricky to measure and will
depend in part on the application and its I/O behaviour.

If T_alloc is comparable to max(T_lock, T_unlock) then there is
nothing to be gained by using a mutex rather than the scheduler lock.
You are adding cpu cycle overhead to the memory allocation routines
for no improvement in real-time performance.

If T_alloc < T_max, then switching from a scheduler lock to a mutex
lock/unlock will not change the maximum dispatch latency and will not
affect the worst case real-time characteristics. It may still affect
the average dispatch latency.

If T_alloc == T_max, i.e. the worst case dispatch latency is caused by
the memory allocation code, then we have a problem that needs
addressing.


I believe the author of the original code assumed that T_alloc was
comparable to max(T_lock, T_unlock), or at least that (T_alloc < T_max).
Hence using a scheduler lock saved some cpu cycles for every memory
allocation operation without affecting real-time behaviour. I suspect
that assumption is false but do not have any numbers to prove it.

There are several memory allocators, but the ones we are interested in
are the fixed block, variable block and dlmalloc ones. If for all
three we have T_alloc >> MAX(T_lock, T_unlock) then we should probably
switch to using a mutex by default on the grounds of improving average
dispatch latency. If for all three we have T_alloc == T_max then we
must switch to using a mutex by default to reduce the maximum dispatch
latency. Unfortunately this requires measuring T_max.
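
One possible approximation of the dispatch latency, assuming the
hardware clock counts up from zero at each system clock tick (true of
many eCos HALs) -- a sketch rather than a calibrated benchmark:

    #include <cyg/kernel/kapi.h>
    #include <cyg/hal/hal_intr.h>

    // Highest-priority thread: wake on every system clock tick and
    // read how far the hardware timer has advanced since the tick
    // fired, which approximates the dispatch latency at that moment.
    static cyg_uint32 latency_max = 0;

    void latency_thread(cyg_addrword_t data)
    {
        for (;;) {
            cyg_uint32 now;
            cyg_thread_delay(1);      // wake on the next tick
            HAL_CLOCK_READ(&now);     // counts up from 0 each tick
            if (now > latency_max)
                latency_max = now;
        }
    }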

If on the other hand we find that for say the fixed memory allocator
T_alloc is much the same as MAX(T_lock, T_unlock) but dlmalloc behaves
very differently then we would need to introduce new templates, with
the old ones still using the scheduler lock and the new ones using a
mutex. 


I do not have time right now to experiment and get some real numbers.
However I would very much like to see such numbers before or
accompanying any patch that changes the behaviour. Also it would be
important to run not just the memory allocation package's own tests
but also the uITRON ones.

Bart


* Re: [ECOS] Scheduler lock in Doug Lea's malloc implementation - be real careful
@ 2006-09-11  7:31 bob.koninckx
  2006-09-11 11:01 ` Bart Veer
  0 siblings, 1 reply; 7+ messages in thread
From: bob.koninckx @ 2006-09-11  7:31 UTC (permalink / raw)
  To: Roy E Richardson, bob.koninckx, Bart Veer; +Cc: ecos-discuss

>> [Bart Veer's reply of 2006-09-07 19:40, quoted in full; trimmed -- see above.]
>
>The data that describes which portion(s) of the pool are in use is
>common to all threads accessing the pool. Each successful
>allocate/deallocate request results in at least one read of a portion
>of that common data, a varying number of instructions to evaluate it,
>and finally updates to the affected data value(s).
>
>This "read/process/write" sequence is commonly referred to as a
>"critical instruction sequence": it requires a guarantee that
>absolutely precludes any possibility of a second operation occurring
>while a prior one is in progress. Failing to provide that guarantee
>allows sporadic, unpredictable timing errors that are extremely
>difficult to recognize, let alone identify and correct.

Obviously, you need protection there. What I do not understand is why it is implemented with scheduler lock/unlock instead of mutexes or semaphores. Also, the proposed use of memory pools is not really clear to me (and it certainly raises portability issues when moving from one OS to another). As a general rule, I _always_ consider dynamic memory allocation a non-deterministic process, so I _never_ use it in time-critical code. It nevertheless seems odd to me that you shouldn't be allowed to use dynamic memory allocation in low-priority threads with no real-time requirements at all, just because of timing requirements in higher-priority threads. I see no reason why scanning the heap could not be interrupted (put on hold for a while) to give a real-time thread time to finish its job. This is not possible if you protect things with a big kernel lock, but it would be if the search for memory were protected by a mutex.

Any comments? Am I making a fundamental reasoning error here? If not, I am going to try changing the implementation to see if it solves our problem, and propose a patch.

Thanks,

Bob





Thread overview: 7+ messages
2006-09-07 15:35 [ECOS] Scheduler lock in Doug Lea's malloc implementation bob.koninckx
2006-09-07 19:40 ` Bart Veer
2006-09-08 20:33   ` [ECOS] Scheduler lock in Doug Lea's malloc implementation - be real careful Roy E Richardson
2006-09-09 13:26     ` Tim michals
2006-09-11  7:31 bob.koninckx
2006-09-11 11:01 ` Bart Veer
2006-09-11 13:56 bob.koninckx
