From: Peter Graf <p.graf@itknet.de>
To: ecos-discuss@sources.redhat.com
Subject: Re: [ECOS] sscanf() vs. fgetc()
Date: Fri, 13 Jul 2001 02:33:00 -0000 [thread overview]
Message-ID: <3.0.5.32.20010713113034.0092e440@128.128.128.220> (raw)
In-Reply-To: <86wv5e108s.fsf@halftrack.hq.acn-group.ch>
Hi Robin,
>I have tried your program on my SA-1110 based target. I had to add a delay in
>the first for loop (and fix both of them ;-)) so that the reader thread calls
>fgetc() before the main thread calls sscanf().
Oh, interesting. So cyg_thread_resume() doesn't cause re-scheduling
immediately.
I wasn't sure about this. One more thing I have learned about eCos :-) Thanks.
>It fails on my target as well.
Good to know. It worked for Jonathan, so I feared it was something target-specific.
>However, there exists a link between fgetc() and sscanf() due to the
>implementation of sscanf(). sscanf() uses a Cyg_StdioStream() that tries to
>flush all the other streams at the beginning of its refill_read_buffer()
>operation. In my case, this seems to cause the main thread to loop endlessly
>in cyg_libc_stdio_flush_all_but() because stream->trylock_me() always fails.
Thank you very much! I didn't get this far due to my debugging problems.
>The problem comes from the conjunction of Cyg_StdioStream::refill_read_buffer()
>and cyg_libc_stdio_flush_all_but(). The former routine locks '*this' and calls
>cyg_stdio_read() which finally blocks on a condition variable, waiting for
>characters coming from the underlying device. Then, another thread with a
>higher priority (lower value) calls sscanf() => cyg_libc_stdio_flush_all_but()
>which spins trying to flush the other (locked) stream and thus does not give
>the CPU back to the lower priority thread in order to unlock the stream
>(provided the stream has some characters to consume).
>
>Not a very clear picture but I hope it will help more than my first message on
>this subject.
It has helped a lot. Thank you very much!
It is good to know the reasons why. I am not sure I can fix this properly
myself, but at least I know what to watch out for.
Maybe, if Jonathan considers this a bug, he will help ;-)
Peter
Thread overview: 23+ messages
2001-07-11 14:09 [ECOS] Linux over RedBoot Venkat Mynampati
2001-07-11 14:36 ` Jonathan Larmour
2001-07-11 14:53 ` Gary Thomas
2001-07-11 20:32 ` Fabrice Gautier
2001-07-11 23:37 ` Jesper Skov
2001-07-12 5:11 ` Gary Thomas
2001-07-12 5:30 ` Fabrice Gautier
2001-07-12 5:39 ` Gary Thomas
2001-07-12 5:45 ` Jesper Skov
2001-07-12 1:54 ` [ECOS] sscanf() vs. fgetc() Peter Graf
2001-07-12 3:23 ` Peter Graf
2001-07-12 3:28 ` Robin Farine
2001-07-12 3:42 ` Jonathan Larmour
2001-07-12 5:02 ` Robin Farine
2001-07-12 3:45 ` Peter Graf
2001-07-12 4:00 ` Jonathan Larmour
2001-07-12 5:57 ` Peter Graf
2001-07-12 6:13 ` Jonathan Larmour
2001-07-12 7:43 ` Peter Graf
2001-07-12 11:17 ` Robin Farine
2001-07-12 11:55 ` Jonathan Larmour
2001-07-13 2:33 ` Peter Graf
2001-07-13 2:33 ` Peter Graf [this message]