From: Jonathan Larmour <jifl@jifvik.org>
To: Rutger Hofman <rutger@cs.vu.nl>
Cc: "Jürgen Lambrecht" <J.Lambrecht@televic.com>,
"Ross Younger" <wry@ecoscentric.com>,
"eCos developers" <ecos-devel@ecos.sourceware.org>,
"Deroo Stijn" <S.Deroo@televic.com>
Subject: Re: NAND technical review
Date: Thu, 15 Oct 2009 04:41:00 -0000
Message-ID: <4AD6A7EC.8080703@jifvik.org>
In-Reply-To: <4AD47ADE.9010606@cs.vu.nl>

Rutger Hofman wrote:
> Jonathan Larmour wrote:
> [snip]
>
>>> We also prefer R's model of course because we started with R's model
>>> and use it now.
>>
>>
>> You haven't done any profiling by any chance, have you? Or code size
>> analysis? Although I haven't got into the detail of R's version yet
>> (since I was starting with dissecting E's), both the footprint and the
>> cumulative function call and indirection time overhead are concerns of
>> mine.
>
>
> As a first step towards mitigating the 'footprint pressure', I have added
> CDL options to configure in/out support for the various chip types, to wit:
> - ONFI chips;
> - 'regular' large-page chips;
> - 'regular' small-page chips.
> It is in r678 on my download page
> (http://www.cs.vu.nl/~rutger/software/ecos/nand-flash/). As I had
> suggested before, this was a very small refactoring (although code has
> moved about in io_nand_chip.c to save on the number of #ifdefs).
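(For readers unfamiliar with eCos configuration: such per-chip-type options would look roughly like the sketch below. The option name, default, and wording are my guesses at the shape of the thing, not necessarily what r678 actually defines.)

```
# Hypothetical CDL sketch only -- the identifier and defaults are
# illustrative, not copied from the r678 sources.
cdl_option CYGSEM_IO_NAND_SUPPORT_ONFI {
    display       "Support ONFI chips"
    flavor        bool
    default_value 1
    description   "Configure in probe and driver support for
                   ONFI-compliant NAND chips. Disabling this saves
                   code footprint on systems that only ever use
                   'regular' large-page or small-page chips."
}
```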
I'm sure that's useful.
> One more candidate for reducing the code footprint: I can add a CDL
> option to configure out support for heterogeneous controllers/chips. The
> ANC layer will then become paper-thin. If this change would make any
> difference, I will do it within, say, a week's time.
I wouldn't want you to spend time on it until the decision's made. I'll
make a note that it would take a week to do. Admittedly, I'm not sure the
savings would be enough to make it "paper-thin".
> As regards the concerns for (indirect) function call overhead: my
> intuition is that the NAND operations themselves (page read, page write,
> block erase) will dominate. It takes 200..500us just to transfer a page
> over the data bus to the NAND chip; one recent data sheet mentions a
> program time of 200us and an erase time of 1.5ms. I think only a very
> slow CPU would show the overhead of fewer than 10 indirect function calls.
I think it's more the cumulative effect, primarily on reads. Especially as
there's no asynchronous aspect - the control process is synchronous, so
any delays between real underlying NAND operations only add up. Ross
quoted an example of about 25us for a page read. Off the top of my head,
for something like a 64MHz CPU with 4 clock ticks per instruction on
average, that's 16 insns per us, so a page read is about equivalent to 400
insns. At that sort of level I'm not sure overheads are lost in the noise.
Maybe I've messed up those guesstimates though.
I wonder if Ross has any performance data for E he could contribute?
On a separate point, while I'm here, I think the use of printf via
cyg_nand_global.pf wants tidying up a lot. Some of the calls seem to be
there to report errors to the user, but without any programmatic
treatment of the errors, which would primarily mean reporting them to
higher layers.
It should also be possible to eliminate the overheads of the printf calls.
Right now there are quite a lot of them, involving function calls,
allocation of const string data, and occasionally calculation of
arguments, even if the pf function pointer points to an empty null printf
function. It should be possible to turn them off entirely and be no worse
off for it (including error reporting back up to higher layers). It might
not be so bad if the strings were a lot shorter, or the printf functions
less frequently used, but being able to turn them off entirely seems
better.
Jifl
--
--["No sense being pessimistic, it wouldn't work anyway"]-- Opinions==mine