From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Lewin A.R.W. Edwards"
To: "Grant Edwards", ecos-discuss@sources.redhat.com
Subject: Re: [ECOS] Simple flash filesystem?
Date: Mon, 05 Feb 2001 14:29:00 -0000
Message-id: <4.3.2.7.2.20010205171128.00aa9200@larwe.com>
References: <20010205221012.183C07A814@visi.com>
X-SW-Source: 2001-02/msg00053.html

Hi Grant,

>I've been looking for info on flash filesystems, and have found pretty
>much nothing.

Random notes: I suspect the reason you haven't found many references is that these sorts of applications (in my experience so far) fall into two categories:

1. Cases where you _want_ to emulate a DOS-type filesystem, especially
   cases where you have an underlying flash controller that does the
   wear-leveling and error correction for you, and

2. Cases where you're only going to write data once in a blue moon, and
   it's most efficient to solve the problem on an ad hoc basis.

It really doesn't matter what logical sector size you use, as long as you have a reasonably efficient scatter/gather system. I have a very similar problem working with SmartMedia: the filesystem is organized into 512-byte sectors, but the card is only erasable in blocks (which by default are cluster-sized). In order to keep the upper layers of the filesystem as generic as possible, I let my DOS filesystem work with the 512-byte sectors it knows and loves.

In older versions of my SSFDC code, the DOS layer decomposed each R/W op into a series of single-sector R/W ops. In the case of a random-sector write op, the flash interface layer would then read in the containing block, update it, erase, rewrite and verify. This was obviously very inefficient (though it works fine on CompactFlash, which has an intelligent R/W controller), so I later changed the breakdown. In my current code, the DOS filesystem layer decomposes R/W ops into a series of variable-length ops, each of which is no more than one cluster long.
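To make the idea concrete, here's roughly what that decomposition looks like (an illustrative C sketch, not lifted from my actual code; the sector and cluster sizes are just example values):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SECTOR_SIZE         512u  /* DOS logical sector               */
#define SECTORS_PER_CLUSTER 32u   /* e.g. a 16K erase block / cluster */

/* One sub-op handed to the flash layer.  It never crosses a cluster
 * boundary, so the driver needs at most one read/erase/rewrite per op. */
struct flash_op {
    uint32_t first_sector;  /* starting logical sector            */
    uint32_t nsectors;      /* run length, <= SECTORS_PER_CLUSTER */
};

/* Split an R/W op of 'count' sectors starting at 'start' into
 * variable-length, cluster-bounded runs.  Returns the run count. */
size_t split_into_cluster_ops(uint32_t start, uint32_t count,
                              struct flash_op *ops, size_t max_ops)
{
    size_t n = 0;
    while (count > 0 && n < max_ops) {
        /* sectors remaining before the next cluster boundary */
        uint32_t room = SECTORS_PER_CLUSTER - (start % SECTORS_PER_CLUSTER);
        uint32_t run  = (count < room) ? count : room;
        ops[n].first_sector = start;
        ops[n].nsectors     = run;
        n++;
        start += run;
        count -= run;
    }
    return n;
}
```

With this in place, the flash layer can service each run with a single read/modify/erase/rewrite of one block, instead of one per sector.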
Decomposing at cluster granularity implicitly allows the underlying flash driver to make the read/modify/erase/rewrite loop much more efficient. By judiciously adding a few extra K of RAM, it is also possible to increase the write-caching capability to cope with any erase-block size.

The main performance hit I encountered in emulating a DOS filesystem over dumb flash is updating the FAT. If you can keep the whole FAT, or at least all the sectors for open-for-writing chains, in RAM (and only write sections back when open-for-write files are closed), you'll get a huge performance increase.

In the eval board for the processor in our old products, the vendor uses a very simple, dumb, intended-for-read-only filesystem. It's easiest to illustrate by example. If your flash free space starts at 0x20000:

0x20000  00 12 00 00    = length of file (0x00001200); 0x00000000 for a
                          deleted file, or 0xFFFFFFFF if no more files
0x20004  FREDXXXX.XXX\0 = filename (ASCIIZ)
0x2000x  file data
0x2120x  next file header

To find a particular file, you start at the beginning of filespace and read the first word (the length, which gives you the offset of the next file header). Then check the filename immediately after that word. If it's the file you want, read it out; if not, use the length to skip to the next file, until you find the one you want or reach the end of the chain. Each file is guaranteed (by manipulating the length field) to start on a write-page boundary (not an erase-block boundary, though).

If you do something like this, you will eventually get so fragmented that you'll need to gather up all the files again (which you could do with an "optimize" option in your UI if you wanted to). But as a quick and dirty solution, it has some merit.

You might want to consider NAND flash (essentially SmartMedia in a chip package) for your file storage. It has the benefit of a block size that is exactly the same as the best DOS cluster size for the media capacity, and it's fairly easy to work with.

=== Lewin A.R.W. Edwards (Embedded Engineer)
Work: http://www.digi-frame.com/
Personal: http://www.zws.com/ and http://www.larwe.com/
"Und setzet ihr nicht das Leben ein,
Nie wird euch das Leben gewonnen sein."
["And if you do not stake your life, never will life be won for you."]
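P.S. For concreteness, here's how the find-a-file walk over that simple layout might look in C. This is just a sketch with names and sizes of my own invention (the fixed 13-byte name field, the 256-byte write page), not the vendor's actual code. Note also that zeroing the length word of a deleted file would destroy the skip distance needed to walk past it, so this sketch marks deleted entries with a zeroed first name byte instead:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 256u        /* write-page size: an example value  */
#define MAX_NAME  13u         /* "FREDXXXX.XXX" plus NUL terminator */
#define LEN_FREE  0xFFFFFFFFu /* erased flash: no more files        */

/* Read the little-endian 32-bit length word of a header. */
uint32_t rd32(const uint8_t *p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Walk the chain starting at 'fs'.  Returns a pointer to the file's
 * data and sets *len, or returns NULL if 'name' isn't found. */
const uint8_t *find_file(const uint8_t *fs, const char *name,
                         uint32_t *len)
{
    for (;;) {
        uint32_t flen = rd32(fs);
        if (flen == LEN_FREE)
            return NULL;                  /* end of chain */
        const char *fname = (const char *)(fs + 4);
        /* Deleted entries: a zeroed first name byte (my assumption;
         * zeroing the length word would break the chain walk). */
        if (fname[0] != '\0' && strcmp(fname, name) == 0) {
            *len = flen;
            return fs + 4 + MAX_NAME;     /* data follows the name */
        }
        /* Skip to the next header: header + data, rounded up so that
         * every file starts on a write-page boundary. */
        uint32_t skip = 4u + MAX_NAME + flen;
        fs += (skip + PAGE_SIZE - 1u) & ~(PAGE_SIZE - 1u);
    }
}
```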