public inbox for ecos-discuss@sourceware.org
* Re: [ECOS] Preemptive Scheduling in Multilevel Queue Scheduler
       [not found] <l03110714b295f219cd51.cygnus.sourceware.ecos.d@[130.221.199.187]>
@ 1998-12-11  6:51 ` Nick Garnett
  0 siblings, 0 replies; 3+ messages in thread
From: Nick Garnett @ 1998-12-11  6:51 UTC (permalink / raw)
  To: ecos-discuss

gorlick@aero.org (Michael Gorlick) writes:

x> Will someone please straighten me out? After staring at the eCos source
x> code it just isn't clear to me how preemptive (timesliced) scheduling
x> actually works in the multilevel queue scheduler (mlqueue).
x> 
x> The expiration of a timeslice results in a DSR being posted on the DSR
x> queue and that DSR, when executed, forces the current thread to
x> relinquish the processor (by invoking "yield()" on the current thread).
x> However (and this is where my confusion arises) DSRs are executed when,
x> and only when, the scheduler lock is about to transition from 1 to 0.
x> Consequently, if a thread running under the multilevel queue discipline
x> NEVER directly or indirectly invokes the scheduler "unlock()" method, no
x> DSRs will ever be executed and the thread will never be forced to yield
x> the processor irrespective of the number of timeslice periods that have
x> passed. Is this correct, and if not, where should I look in the source
x> code to correct my misunderstanding?
x> 
x> The same problem exists for DSRs in general, since "unlock_inner" is the
x> only component (as far as I can determine) that calls and executes the
x> posted DSRs. Again, how do device drivers and interrupts get their DSRs
x> executed in a timely manner if their execution can be delayed
x> indefinitely by a thread that, for whatever reason, never acquires or
x> releases the scheduler lock?
x> 
x> Thank you in advance for taking the time to answer my question.
x> 
x>        __
x>      _/mg\__
x> ... /o-----o>
x> 

The missing piece of the puzzle is that the scheduler lock is acquired
and released during interrupt processing. In the default interrupt VSR
(in vectors.S) the lock is incremented, and in interrupt_end() (in
intr/intr.cxx), as well as posting a DSR, a call to
Cyg_Scheduler::unlock() is made. If the current thread already had the
scheduler locked then nothing happens until it unlocks for the last
time. If the current thread did not have the lock then this call will
cause any DSRs to be run immediately and any rescheduling to take
place. The lock also works here as an interrupt nest counter, so that
rescheduling only occurs when the last nested interrupt exits.

So, in the common case, when interrupting code that does not have the
lock, DSRs are run immediately and any preemption happens right
after. When the current thread has the lock, all of this is deferred
until it releases the lock, which is what the lock is for.

Hope this helps.

-- 
Nick Garnett           mailto:nickg@cygnus.co.uk
Cygnus Solutions, UK   http://www.cygnus.co.uk

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [ECOS] Preemptive Scheduling in Multilevel Queue Scheduler
       [not found] ` <l03110714b295f219cd51@[130.221.199.187]>
@ 1998-12-11  6:51   ` Bart Veer
  0 siblings, 0 replies; 3+ messages in thread
From: Bart Veer @ 1998-12-11  6:51 UTC (permalink / raw)
  To: gorlick; +Cc: ecos-discuss

>>>>> "Michael" == Michael Gorlick <gorlick@aero.org> writes:

    Michael> Will someone please straighten me out? After staring at
    Michael> the eCos source code it just isn't clear to me how
    Michael> preemptive (timesliced) scheduling actually works in the
    Michael> multilevel queue scheduler (mlqueue).

    Michael> The expiration of a timeslice results in a DSR being
    Michael> posted on the DSR queue and that DSR, when executed,
    Michael> forces the current thread to relinquish the processor (by
    Michael> invoking "yield()" on the current thread). However (and
    Michael> this is where my confusion arises) DSRs are executed
    Michael> when, and only when, the scheduler lock is about to
    Michael> transition from 1 to 0. Consequently, if a thread running
    Michael> under the multilevel queue discipline NEVER directly or
    Michael> indirectly invokes the scheduler "unlock()" method no
    Michael> DSRs will ever be executed and the thread will never be
    Michael> forced to yield the processor irrespective of the number
    Michael> of timeslice periods that have passed. Is this correct
    Michael> and if not where should I look in the source code to
    Michael> correct my misunderstanding?

    Michael> The same problem exists for DSRs in general since
    Michael> "unlock_inner" is the only component (as far as I can
    Michael> determine) that calls and executes the posted DSRs.
    Michael> Again, how do device drivers and interrupts get their
    Michael> DSRs executed in a timely manner if their execution can
    Michael> be delayed indefinitely by a thread that, for whatever
    Michael> reason, never acquires or releases the scheduler lock?

The missing piece in the puzzle is the routine interrupt_end() in
kernel/v1.1/src/intr/intr.cxx. This gets invoked at the end of ISR
interrupt handling, and includes a call to Cyg_Scheduler::unlock().

If an interrupt happens while the foreground application is executing
ordinary code, and a resulting DSR has invoked yield(), this call to
unlock() will result in a context switch at the end of interrupt
handling. If the foreground application was in a critical section at
the time that the interrupt occurred then the scheduler will be
locked, the interrupt handler will return to the current thread, and
the context switch happens when the thread leaves its critical section
and unlocks the scheduler.

Bart Veer // eCos net maintainer


* [ECOS] Preemptive Scheduling in Multilevel Queue Scheduler
@ 1998-12-10 13:48 Michael Gorlick
       [not found] ` <l03110714b295f219cd51@[130.221.199.187]>
  0 siblings, 1 reply; 3+ messages in thread
From: Michael Gorlick @ 1998-12-10 13:48 UTC (permalink / raw)
  To: ecos-discuss

Will someone please straighten me out? After staring at the eCos source
code it just isn't clear to me how preemptive (timesliced) scheduling
actually works in the multilevel queue scheduler (mlqueue).

The expiration of a timeslice results in a DSR being posted on the DSR
queue and that DSR, when executed, forces the current thread to relinquish
the processor (by invoking "yield()" on the current thread). However (and
this is where my confusion arises) DSRs are executed when, and only when,
the scheduler lock is about to transition from 1 to 0. Consequently, if a
thread running under the multilevel queue discipline NEVER directly or
indirectly invokes the scheduler "unlock()" method, no DSRs will ever be
executed and the thread will never be forced to yield the processor
irrespective of the number of timeslice periods that have passed. Is this
correct, and if not, where should I look in the source code to correct my
misunderstanding?

The same problem exists for DSRs in general, since "unlock_inner" is the
only component (as far as I can determine) that calls and executes the
posted DSRs. Again, how do device drivers and interrupts get their DSRs
executed in a timely manner if their execution can be delayed indefinitely
by a thread that, for whatever reason, never acquires or releases the
scheduler lock?

Thank you in advance for taking the time to answer my question.

       __
     _/mg\__
... /o-----o>



end of thread, other threads:[~1998-12-11  6:51 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <l03110714b295f219cd51.cygnus.sourceware.ecos.d@[130.221.199.187]>
1998-12-11  6:51 ` [ECOS] Preemptive Scheduling in Multilevel Queue Scheduler Nick Garnett
1998-12-10 13:48 Michael Gorlick
     [not found] ` <l03110714b295f219cd51@[130.221.199.187]>
1998-12-11  6:51   ` Bart Veer
