public inbox for ecos-discuss@sourceware.org
* [ECOS] About Cyg_Scheduler::unlock_inner
@ 2001-05-22 12:55 Rafael Rodríguez Velilla
  2001-05-22 13:50 ` Jonathan Larmour
  0 siblings, 1 reply; 5+ messages in thread
From: Rafael Rodríguez Velilla @ 2001-05-22 12:55 UTC (permalink / raw)
  To: ecos


  I'm working with eCos 1.3.1 and I have a question about
Cyg_Scheduler::unlock_inner.

  This method is only called from Cyg_Scheduler::unlock when
sched_lock == 1 (so it should become 0).
  I have seen in the code that it first runs any pending DSRs (if
there are any) and then checks whether a new thread claims the CPU.
  Why is the context of the new thread restored before sched_lock is
decremented?
  Why is the new thread not run with the scheduler unlocked?




--
Rafael Rodríguez Velilla        rrv@tid.es
Telefónica I+D          http://www.tid.es
Telf: +34 - 91 337 4270




* Re: [ECOS] About Cyg_Scheduler::unlock_inner
  2001-05-22 12:55 [ECOS] About Cyg_Scheduler::unlock_inner Rafael Rodríguez Velilla
@ 2001-05-22 13:50 ` Jonathan Larmour
  2001-05-23  3:41   ` Hugo Tyson
  0 siblings, 1 reply; 5+ messages in thread
From: Jonathan Larmour @ 2001-05-22 13:50 UTC (permalink / raw)
  To: Rafael Rodríguez Velilla; +Cc: ecos


Rafael Rodríguez Velilla wrote:
> 
>   I'm working with eCos 1.3.1 and I have a question about
> Cyg_Scheduler::unlock_inner.
> 
>   This method is only called from Cyg_Scheduler::unlock when
> sched_lock == 1 (so it should become 0).
>   I have seen in the code that it first runs any pending DSRs (if
> there are any) and then checks whether a new thread claims the CPU.
>   Why is the context of the new thread restored before sched_lock is
> decremented?

You can't be in the process of switching the context when there's a chance
the system could be rescheduled again (e.g. due to a new interrupt).

>   Why is the new thread not run with the scheduler unlocked?

Once it is restored, it unlocks the scheduler almost straight away.
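
The ordering can be sketched with a toy model (simplified stand-ins
for the eCos names, not the real kernel source):

```cpp
// Toy model of the ordering inside unlock_inner() -- a sketch, not
// the real eCos code.  The key point: sched_lock stays non-zero
// across DSR processing and the thread switch, so an interrupt that
// fires mid-switch can only queue a DSR, never start a second switch.
#include <vector>

struct Scheduler {
    int sched_lock = 1;                  // unlock_inner() runs with lock == 1
    std::vector<void (*)()> pending_dsrs;
    int current_thread = 0;
    int next_thread = 0;
    bool need_reschedule = false;
    void unlock_inner();
};

Scheduler g_sched;                       // one global instance, kernel-style

// Example DSR: a timer tick decides thread 1 should run next.
void timer_dsr() {
    g_sched.next_thread = 1;
    g_sched.need_reschedule = true;
}

void Scheduler::unlock_inner() {
    // (1) Run pending DSRs while the scheduler is still locked.
    while (!pending_dsrs.empty()) {
        void (*dsr)() = pending_dsrs.back();
        pending_dsrs.pop_back();
        dsr();                           // a DSR may set need_reschedule
    }
    // (2) Switch threads, still under the lock: an interrupt arriving
    //     here cannot re-enter the scheduler, it can only queue a DSR.
    if (need_reschedule && next_thread != current_thread) {
        current_thread = next_thread;    // stands in for the HAL context switch
        need_reschedule = false;
    }
    // (3) Only now does the lock drop to zero -- "almost straight
    //     away" from the restored thread's point of view.
    sched_lock = 0;
}
```

If the lock dropped to zero before step (2), an interrupt landing
between the two steps could start a second switch on top of a
half-completed one.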

Jifl
-- 
Red Hat, Rustat House, Clifton Road, Cambridge, UK. Tel: +44 (1223) 271062
Maybe this world is another planet's Hell -Aldous Huxley || Opinions==mine


* Re: [ECOS] About Cyg_Scheduler::unlock_inner
  2001-05-22 13:50 ` Jonathan Larmour
@ 2001-05-23  3:41   ` Hugo Tyson
  2001-05-23  3:53     ` Rafael Rodríguez Velilla
  0 siblings, 1 reply; 5+ messages in thread
From: Hugo Tyson @ 2001-05-23  3:41 UTC (permalink / raw)
  To: ecos-discuss


Jonathan Larmour <jlarmour@redhat.com> writes:
> Rafael Rodríguez Velilla wrote:
> > 
> >   I'm working with eCos 1.3.1 and I have a question about
> > Cyg_Scheduler::unlock_inner.
> > 
> >   This method is only called from Cyg_Scheduler::unlock when
> > sched_lock == 1 (so it should become 0).
> >   I have seen in the code that it first runs any pending DSRs (if
> > there are any) and then checks whether a new thread claims the CPU.
> >   Why is the context of the new thread restored before sched_lock is
> > decremented?
> 
> You can't be in the process of switching the context when there's a chance
> the system could be rescheduled again (e.g. due to a new interrupt).
> 
> >   Why is the new thread not run with the scheduler unlocked?
> 
> Once it is restored, it unlocks the scheduler almost straight away.

Yup.  To clarify, the new thread context is not just some random location
in the thread's execution.  It can only be either that very same location
in Cyg_Scheduler::unlock() or a similar piece of code used at the very
start of a thread's life.

You would expect that a thread which is interrupted and descheduled by a
real hardware IRQ, such as a timer tick, would have a saved context that
points to what the thread was doing at the time.  But it's not so: the
saved context is in the middle of the interrupt-handling code, doing the
same unlock() call as a "normal" yield of the CPU.

So when an interrupted thread restarts, it restarts in the middle of the
normal interrupt sequence, and does all of: unlocks the scheduler, restores
the interrupted state and returns from interrupt, and thus continues where
it was interrupted.
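
That sequence can be traced with a small self-contained model
(hypothetical function names, not the eCos source):

```cpp
// Trace of the common interrupt exit path -- a hypothetical model,
// not eCos code.  A thread descheduled by an IRQ has its context
// saved inside the interrupt tail; when it is later resumed,
// execution continues there, so it always does: unlock the scheduler,
// restore the interrupted registers, return from interrupt.
#include <string>
#include <vector>

std::vector<std::string> trace;

void scheduler_unlock() {
    trace.push_back("unlock scheduler");
    // unlock_inner() would switch to another thread here if needed;
    // the descheduled thread's saved PC points just after this call.
}

void restore_state_and_return() {
    trace.push_back("restore interrupted state, return from interrupt");
}

// The common tail of every hardware interrupt (cf. interrupt_end()).
void interrupt_end() {
    trace.push_back("run pending DSRs");
    scheduler_unlock();          // a switch away (and back) happens here
    restore_state_and_return();  // thread continues where the IRQ hit it
}
```

Resuming an interrupted thread therefore always replays the last two
steps of this tail, which is why its saved context never points into
arbitrary application code.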

HTH,
	- Huge


* Re: [ECOS] About Cyg_Scheduler::unlock_inner
  2001-05-23  3:41   ` Hugo Tyson
@ 2001-05-23  3:53     ` Rafael Rodríguez Velilla
  2001-05-23  4:05       ` Hugo Tyson
  0 siblings, 1 reply; 5+ messages in thread
From: Rafael Rodríguez Velilla @ 2001-05-23  3:53 UTC (permalink / raw)
  To: ecos


> Yup.  To clarify, the new thread context is not just some random location
> in the thread's execution.  It can only be either that very same location
> in Cyg_Scheduler::unlock() or a similar piece of code used at the very
> start of a thread's life.
>
> You would expect that a thread which is interrupted and descheduled by a
> real hardware IRQ, such as a timer tick, would have a saved context that
> points to what the thread was doing at the time.  But it's not so: the
> saved context is in the middle of the interrupt-handling code, doing the
> same unlock() call as a "normal" yield of the CPU.
>
> So when an interrupted thread restarts, it restarts in the middle of the
> normal interrupt sequence, and does all of: unlocks the scheduler, restores
> the interrupted state and returns from interrupt, and thus continues where
> it was interrupted.

  Then, does the scheduling of new threads only happen when an interrupt
occurs (in the context of interrupt_end)?
  Doesn't cyg_thread_delay produce a rescheduling, for example?





--
Rafael Rodríguez Velilla        rrv@tid.es
Telefónica I+D          http://www.tid.es
Telf: +34 - 91 337 4270




* Re: [ECOS] About Cyg_Scheduler::unlock_inner
  2001-05-23  3:53     ` Rafael Rodríguez Velilla
@ 2001-05-23  4:05       ` Hugo Tyson
  0 siblings, 0 replies; 5+ messages in thread
From: Hugo Tyson @ 2001-05-23  4:05 UTC (permalink / raw)
  To: ecos-discuss


Rafael Rodríguez Velilla <rrv@tid.es> writes:
> > Yup.  To clarify, the new thread context is not just some random location
> > in the thread's execution.  It can only be either that very same location
> > in Cyg_Scheduler::unlock() or a similar piece of code used at the very
> > start of a thread's life.
> >
> > You would expect that a thread which is interrupted and descheduled by a
> > real hardware IRQ, such as a timer tick, would have a saved context that
> > points to what the thread was doing at the time.  But it's not so: the
> > saved context is in the middle of the interrupt-handling code, doing the
> > same unlock() call as a "normal" yield of the CPU.
> >
> > So when an interrupted thread restarts, it restarts in the middle of the
> > normal interrupt sequence, and does all of: unlocks the scheduler, restores
> > the interrupted state and returns from interrupt, and thus continues where
> > it was interrupted.
> 
>   Then, does the scheduling of new threads only happen when an interrupt
> occurs (in the context of interrupt_end)?
>   Doesn't cyg_thread_delay produce a rescheduling, for example?

Yes, of course, sorry.  I was clarifying that it's the same code that's
used to change over who owns the CPU in both cases: an interrupted thread
and a "voluntary" reschedule such as cyg_thread_delay(), waiting for a
semaphore, or signalling a semaphore that wakes up a higher-priority
thread.

In the "voluntary" reschedule, Cyg_Scheduler::unlock() is called directly
by application code; in the interrupted case it's called as part of the
interrupt-handling mechanism.
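
Both routes converging on the same reschedule point can be modelled
with simplified stand-ins (not the real kernel code):

```cpp
// Toy model of the shared reschedule point -- simplified stand-ins,
// not eCos source.  Whether reached from cyg_thread_delay() or from
// interrupt exit, unlock() does real work (unlock_inner) only when
// the nesting count drops from 1 to 0.
int sched_lock = 0;   // scheduler lock nesting count
int reschedules = 0;  // how many times unlock_inner() ran

void unlock_inner() {
    // DSRs would run and a thread switch would happen here.
    ++reschedules;
    sched_lock = 0;
}

void lock() { ++sched_lock; }

void unlock() {
    if (sched_lock == 1)
        unlock_inner();   // 1 -> 0: the only point where we switch
    else
        --sched_lock;     // nested unlock: just count down
}

// Voluntary reschedule, as from cyg_thread_delay():
void thread_delay(int /*ticks*/) {
    lock();               // enter the scheduler
    // ...mark the current thread sleeping on the timer queue...
    unlock();             // count falls 1 -> 0, unlock_inner() switches
}

// Involuntary reschedule: interrupt entry takes the lock, and the
// common exit path releases it through the very same unlock().
void interrupt_end() {
    lock();               // taken at interrupt entry; modelled here
    // ...run pending DSRs, maybe pick a new thread...
    unlock();             // same single reschedule point
}
```

A nested unlock (count > 1) just decrements; only the final unlock of
a lock/unlock pairing reaches unlock_inner().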

	- Huge

