Author: Peter Barada
Date:
To: yaffs, manningc2
Subject: Re: [Yaffs] How to test in linux w/real NAND using userspace test
On 03/22/2012 12:14 AM, Charles Manning wrote:
> On Sunday 11 March 2012 06:07:13 Peter Barada wrote:
>> Charles et al,
>> I'm working on adding read-disturb counting to YAFFS (to force a garbage
>> collection when the number of reads in a block hits a certain limit, 20K
>> for the MT29C4G48MAZAPAKQ5 OMAP PoP), and I'm looking for a straightforward
>> testing harness that will beat up YAFFS pretty hard.
>> Googling around didn't come up with much obvious, so I'm asking what to
>> use to test YAFFS out-of-kernel (i.e. using a userspace app that mounts
>> a partition and thrashes it) instead of nandsim and in-kernel testing...
>> Thanks in advance!
> There are three ways I can think of to achieve this:
> Take the simulation code in yaffs direct and write a NAND interface that calls
> the userspace MTD functions. These are specced in mtd/mtd-user.h.
> Another way to do this is to work the changes into a hacked version of u-boot
> or such.
> Yet another way is to stick with Linux. Straight Linux will cache data in the
> VFS which will not abuse the read path as you desire. You can ask Linux to
> drop the caches by doing:
> echo 3 > /proc/sys/vm/drop_caches
> Thus something like:
> while true; do cat /yaffs/dir/file > /dev/null; done
> in parallel with
> while true; do sleep 0.1; echo 3 > /proc/sys/vm/drop_caches; done
> should do some serious read pounding.

I've written a script (attached) to do read pounding by:
1) Erase a MTD device and mount as /mnt/yaffs
2) Create 1MB of data from /dev/urandom as /mnt/yaffs/test-data
3) Create reference md5sum of that file
4) copy from /mnt/yaffs/test-data to /tmp/srcfile
5) compare md5sum of /tmp/srcfile to reference md5sum
6) copy /tmp/srcfile to /mnt/yaffs/test-dataN
7) sync the data
8) drop the page cache
9) copy /mnt/yaffs/test-dataN to /tmp/newfile
10) compare md5sum of /tmp/newfile to reference md5sum
11) If N > 1, delete /mnt/yaffs/test-data(N-1)
12) Dump selected YAFFS stats
13) Increment N and loop to step 4 if there haven't been too many failures
and N hasn't hit a limit (100K)
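The inner loop (steps 4-13) looks roughly like the sketch below. This is a hypothetical simplification, not the attached script: the variable names, the 10-failure cutoff, and the writability guard on drop_caches are my assumptions, and steps 1-3 (erase/mount and creating test-data) are assumed to have already been done:

```shell
#!/bin/sh
# Rough sketch of the test loop (steps 4-13 above); names and the
# failure limit are assumptions, not the real attached script.
MNT=${MNT:-/mnt/yaffs}
LIMIT=${LIMIT:-100000}
SRC=$MNT/test-data
REF=$(md5sum "$SRC" | cut -d' ' -f1)      # reference md5sum (step 3)
N=1
FAILURES=0
while [ "$N" -le "$LIMIT" ] && [ "$FAILURES" -lt 10 ]; do
    cp "$SRC" /tmp/srcfile                         # step 4
    SUM=$(md5sum /tmp/srcfile | cut -d' ' -f1)     # step 5
    [ "$SUM" = "$REF" ] || FAILURES=$((FAILURES + 1))
    cp /tmp/srcfile "$MNT/test-data$N"             # step 6
    sync                                           # step 7
    # step 8: drop the page cache (needs root, so guarded here)
    [ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches
    cp "$MNT/test-data$N" /tmp/newfile             # step 9
    SUM=$(md5sum /tmp/newfile | cut -d' ' -f1)     # step 10
    [ "$SUM" = "$REF" ] || FAILURES=$((FAILURES + 1))
    [ "$N" -gt 1 ] && rm "$MNT/test-data$((N - 1))"  # step 11
    # step 12: dump selected YAFFS stats (driver-specific, omitted)
    N=$((N + 1))                                   # step 13
done
echo "Pass: $((N - 1 - FAILURES)) Failed: $FAILURES"
```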
If the md5sums don't match, then it's a failure, and since I have
/tmp/srcfile and /tmp/newfile I then use a loop to wade through the
files, 2K at a time, looking for differences and dump the pages that differ.
The thinking is that /mnt/yaffs/test-data will be read on each loop
iteration to create the copy, so it should force read-disturb pretty
quickly (I tweaked my read-disturb driver to throw -ESTALE from MTD
after only 500 reads of a block instead of 20K), so reading 64MB from a
block would trigger -ESTALE. After letting it run using a 32MB
partition on two boards overnight (~9 hours of testing) I didn't see any
failures:

Pass: 18467 Failed: 0
Sat Jan 1 09:12:16 UTC 2000
Copy /mnt/yaffs/test-data to test-data18468
Force unwritten blocks to MTD
Dropping page cache
Calculate MD5 of fresh copy test-data18468
"n_ecc_stale" is the number of times yaffs_handle_chunk_error() was
called with an -ESTALE state, "n_stale_blocks" tracks the number of
current stale blocks, and in that testing I only saw 1 block still stale
in step 12, and then only at the end of three passes.
But I want to beat it up even _further_, and was looking to see if there
already existed a tool that people regularly use on real hardware to
beat up YAFFS. I'm now looking at LTP (and its ltp-fsx.c test) to see
if it will suit my needs.
Under separate cover I'll send my read-disturb changes for comment to
see if my approach is sane...