From: Bart Veer <bartv@tymora.demon.co.uk>
To: george@stratalight.com
Cc: ecos-discuss@sources.redhat.com
Subject: Re: [ECOS] malloc/new in DSRs
Date: Mon, 29 Jul 2002 16:22:00 -0000	[thread overview]
Message-ID: <20020729213416.92A6B6165C@delenn.bartv.net> (raw)
In-Reply-To: <F626113795D3EB4482E5ACFBD93465121361A4@mailhost.stratalight.com> (george@stratalight.com)

>>>>> "George" == George Sosnowski <george@stratalight.com> writes:

    George> If malloc is configed to be threadsafe in ecos.ecc, then
    George> is it ok to use malloc/new/free/delete in DSRs? I assume
    George> it is, but want to make sure.

The short answer is no: thread context is very different from DSR
context; see the kernel documentation for the details. The obvious way
of implementing a thread-safe malloc() uses a mutex to protect the
heap's shared data, so every malloc() or free() call needs to lock the
mutex and then unlock it again. A DSR is not allowed to call a mutex
lock function.
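
Roughly, and purely as an illustration rather than the actual eCos
source, a mutex-protected allocator has the following shape. The
allocate_from_heap() helper is made up for the example; only the
cyg_mutex_* calls are real kernel API.

    #include <cyg/kernel/kapi.h>
    #include <stddef.h>

    extern void *allocate_from_heap(size_t size); /* hypothetical helper */

    static cyg_mutex_t heap_lock;   /* initialised once with
                                       cyg_mutex_init(&heap_lock) */

    void *my_malloc(size_t size)
    {
        void *p;
        cyg_mutex_lock(&heap_lock);     /* illegal from a DSR */
        p = allocate_from_heap(size);
        cyg_mutex_unlock(&heap_lock);
        return p;
    }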

Consider what might happen if a DSR did try to call malloc(). Suppose
some thread is in the middle of a malloc() call, and hence has the
mutex locked. An interrupt now fires, the ISR runs, and it requests a
DSR invocation. Your DSR then calls malloc(), tries to lock the mutex,
and discovers that the mutex is already owned by a thread. The DSR
would need to wait until the thread had unlocked the mutex, but DSRs
have absolute priority over threads, so the thread cannot run again
until the DSR has completed. Deadlock.
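
The usual way around this, sketched below assuming the standard kernel
C API (interrupt and thread creation are omitted), is to do no
allocation in the DSR at all: the DSR just posts a semaphore, which is
safe in DSR context, and a worker thread performs the malloc() when it
next runs.

    #include <cyg/kernel/kapi.h>
    #include <stdlib.h>

    static cyg_sem_t work_sem;      /* initialised elsewhere with
                                       cyg_semaphore_init(&work_sem, 0) */

    /* DSR: do no allocation here, just wake the worker thread. */
    static void my_dsr(cyg_vector_t vector, cyg_ucount32 count,
                       cyg_addrword_t data)
    {
        cyg_semaphore_post(&work_sem);  /* safe from DSR context */
    }

    /* Worker thread: runs in thread context, so malloc() is fine. */
    static void worker(cyg_addrword_t data)
    {
        void *buf;
        for (;;) {
            cyg_semaphore_wait(&work_sem);
            buf = malloc(128);          /* size is just an example */
            if (buf != NULL) {
                /* ... process the data, eventually free(buf) ... */
                free(buf);
            }
        }
    }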


Now for a longer answer: the current implementation of threadsafe
malloc() does not always use a mutex to protect the heap. Instead it
locks the scheduler. With that implementation it would actually be
safe to call malloc() from a DSR, because DSRs do not run while the
scheduler is locked. However, this is really a bug in the current
malloc implementation, a left-over from the early days when the only
allocator was the fixed-block one, and it may get fixed at any time.
Therefore you should not rely on the current behaviour.

There is an argument that, for certain memory allocator
implementations, especially fixed-block ones, it is legitimate to use
a scheduler lock rather than a mutex: locking and unlocking a mutex
already involves briefly locking the scheduler internally, so for a
sufficiently simple allocator, holding the scheduler lock directly for
a short time might actually be cheaper than using a mutex. This needs
further investigation.
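
For what it is worth, the kind of allocator where that argument
applies would look something like the following. This is only an
illustration of the idea, not code from the eCos sources; only the
cyg_scheduler_lock()/cyg_scheduler_unlock() calls are real kernel API.

    #include <cyg/kernel/kapi.h>

    /* A trivial fixed-block allocator: a singly linked free list of
       equal-sized blocks, protected by briefly locking the scheduler. */
    struct block { struct block *next; };
    static struct block *free_list;     /* set up elsewhere */

    void *fixed_block_alloc(void)
    {
        struct block *p;
        cyg_scheduler_lock();           /* stops preemption, defers DSRs */
        p = free_list;
        if (p != NULL)
            free_list = p->next;
        cyg_scheduler_unlock();
        return p;
    }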

Bart

-- 
Before posting, please read the FAQ: http://sources.redhat.com/fom/ecos
and search the list archive: http://sources.redhat.com/ml/ecos-discuss
