Posted by: kurtsh | July 25, 2008

INFO: How Windows OS’s get around the I/O limitations of Flash Memory

I've seen this pop up recently as a discussion topic, and I thought I'd try to shed some light on the answers, since this was a curiosity of my own a year or so back.

FLASH MEMORY DIES
It’s a fact of life that Flash memory has a relatively short life span in comparison to other read/write storage technologies.  There are only so many reads & writes that can be executed against a given flash memory cell before it becomes unreliable and unusable.  This failure or inability to read/write reliably is often referred to as "burnout".

The number of 'writes' that can be executed against a flash cell has historically been something like 50,000 to 100,000 changes, but what's often disregarded is that 'reads' also diminish a flash memory cell's life span.  The bottom line is that flash memory goes bad over time.  It's just a matter of 'when'.

WEAR LEVELING
To compensate for this problem, algorithms have been written into Windows operating systems (XP, Vista, Windows Mobile, etc.) that recognize flash storage media and use a different method of I/O.  The basic concept is to distribute the usage of the flash storage across each and every cell so that every part of the flash memory gets used equally over time, instead of one area getting "burned out" faster than others.

As a result of these wear distribution algorithms, flash manufacturers (especially the manufacturers of the recent new breed of solid state devices/SSD) claim that failure occurs only after many years of usage – assuming an even spread of I/O calls across all flash memory cells.
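
To make the idea concrete, here's a toy simulation of my own (not anything actually shipped in Windows, and with made-up block and write counts) showing why distributing writes matters:

    NUM_BLOCKS = 8      # a tiny pretend flash device
    WRITES = 1000       # rewrites of a single logical piece of data

    # Without wear leveling: every rewrite lands on the same physical block.
    wear_without = [0] * NUM_BLOCKS
    for _ in range(WRITES):
        wear_without[0] += 1                  # block 0 absorbs all the wear

    # With wear leveling: rewrites are rotated across all physical blocks.
    wear_with = [0] * NUM_BLOCKS
    for i in range(WRITES):
        wear_with[i % NUM_BLOCKS] += 1        # round-robin spread

    print("most-worn block without leveling:", max(wear_without))  # 1000 cycles
    print("most-worn block with leveling:   ", max(wear_with))     # 125 cycles

Same total number of writes in both cases; the difference is how quickly the single worst block approaches its burnout limit.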

DIFFERENCES IN FLASH I/O ALGORITHMS:  FREE CELL WRITE DISTRIBUTION
With regard to the algorithms themselves, one basic technique I've heard of is to maximize the usage of free space by writing to non-sequential cells.  This has the benefit of scattering the "wear & tear" across the memory without diminishing performance: unlike ferro-magnetic/mechanical drives, flash memory seek times are constant no matter where the data resides, meaning writing data sequentially is unnecessary.  (And in the case of flash memory, possibly even dangerous to the life of the storage medium.)
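
A rough sketch of what such an allocator might look like (my own illustration; the class and counters are invented, not any documented Windows or flash-controller interface): each write is simply steered to the least-worn block in the free pool.

    class FreePoolAllocator:
        """Hypothetical allocator that spreads writes across free blocks."""

        def __init__(self, num_blocks):
            self.erase_count = [0] * num_blocks        # wear per physical block
            self.free_blocks = set(range(num_blocks))  # blocks holding no live data

        def allocate(self):
            # Steer the next write to the least-worn free block instead of
            # rewriting the same physical location in place.
            if not self.free_blocks:
                raise RuntimeError("no free blocks left to spread wear across")
            block = min(self.free_blocks, key=lambda b: self.erase_count[b])
            self.free_blocks.remove(block)
            self.erase_count[block] += 1
            return block

        def release(self, block):
            # Data was deleted or invalidated; the block rejoins the free pool.
            self.free_blocks.add(block)

Notice that the allocator only ever looks at the free pool, which is exactly where the limitation below comes from.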

The problem with this technique is that while it maximizes I/O performance, it relies on the existence of free space on the flash storage medium.  If you have very little free space, the same areas of the storage may be written to over and over again.  Take for example the usage of flash storage for an Internet browser cache or an RSS feed repository.  If the contents of either of these caches are constantly deleted and then rewritten with new data, and there is little free space, you can see how the same memory cells might be written to over and over again.

This might not seem like a likely scenario until you realize that most cellular phones rely heavily on flash memory.  All of a sudden, the constant reading/writing a mobile browser or a mobile newsreader does to flash memory becomes very disconcerting.

DIFFERENCES IN FLASH I/O ALGORITHMS:  CELL SWAPPING
Another more complex technique is to literally swap data content between frequently used memory cells and less frequently used cells.  This technique essentially takes data that "hasn’t moved in a long time" and puts it in cells that have been used a lot recently.  This has the benefit of more evenly distributing wear & tear across all memory cells – not just free memory.

Of course, the problem here is that the additional reads/writes required to accomplish this may affect performance, but this could be compensated for by proactively swapping data between cells as background I/O during periods in which the medium isn't actively being used.
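
A minimal sketch of such a background pass (again my own illustration; the threshold, the copy_block/erase_block helpers, and the bookkeeping are all assumptions): cold data is copied out of a barely-worn block into a heavily-worn free block, so the fresh block can rejoin the pool that absorbs the frequent rewrites.

    WEAR_GAP_THRESHOLD = 1000   # only bother swapping once wear has diverged this much

    def swap_pass(erase_count, free_blocks, cold_blocks, copy_block, erase_block):
        """One idle-time pass of hypothetical 'cell swapping'.

        erase_count[b] is the erase tally for block b; free_blocks and cold_blocks
        are sets of block ids; copy_block and erase_block are stand-ins for the
        real low-level operations."""
        if not free_blocks or not cold_blocks:
            return
        worn = max(free_blocks, key=lambda b: erase_count[b])   # heavily used but empty
        fresh = min(cold_blocks, key=lambda b: erase_count[b])  # barely used, holds old data
        if erase_count[worn] - erase_count[fresh] < WEAR_GAP_THRESHOLD:
            return  # wear is balanced enough; skip this pass

        copy_block(src=fresh, dst=worn)   # relocate the long-lived data
        erase_block(fresh)                # the swap costs one extra erase cycle
        erase_count[fresh] += 1

        free_blocks.remove(worn)
        cold_blocks.add(worn)
        cold_blocks.remove(fresh)
        free_blocks.add(fresh)

Running a pass like this only during idle periods, as described above, keeps the extra copies out of the foreground I/O path.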

THE CONSEQUENCE OF SIZE
So if you have a larger storage medium, you have more "flash surface" to write to and that means a longer life, correct?  After all, if you’re wear-leveling effectively and distributing I/O across the entire storage medium, making more space available equates to making more reads/writes available over the life of the flash, right?

For the most part, the answer is ‘yes’:  A simple solution is to use larger flash storage to distribute I/O across.  And better yet, if you are using an algorithm that distributes just across free space, then having more free space available in general should increase the life of your flash medium.
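
A quick back-of-the-envelope illustration of that point (the endurance figure and daily write volume below are assumptions, not measurements of any real device):

    ENDURANCE = 100_000      # assumed program/erase cycles per cell (optimistic)
    DAILY_WRITES_GB = 10     # assumed gigabytes written per day

    for capacity_gb in (4, 8, 16):
        # Under ideal wear leveling, total write budget = capacity x endurance.
        write_budget_gb = capacity_gb * ENDURANCE
        lifetime_years = write_budget_gb / DAILY_WRITES_GB / 365
        print(f"{capacity_gb} GB: roughly {lifetime_years:,.0f} years at {DAILY_WRITES_GB} GB/day")

Real numbers come out far lower once write amplification and uneven wear enter the picture, but the proportionality is the point: doubling the capacity doubles the ideal write budget.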

The problem is that people assume flash has an even quality across all manufacturers and rarely discriminate between flash brands.  Flash is flash is flash, in the minds of most consumers, but as any tech will tell you, that just isn't the case.  Speed, life span, storage capacity, cost: all of these vary widely between manufacturers, and even a single manufacturer's quality can change across product lines.

The bottom line is that if you do buy a larger capacity and expect a greater life span from your flash, make sure the manufacturer is the same so you have at least some semblance of an apples-to-apples comparison, and check whether the products have differing life span ratings, if the manufacturer even publishes them.  Just because your new flash drive has more capacity doesn't mean it'll last longer if the flash itself is more prone to failure.

[Once I get the time, I’ll enhance this post with what we’ve done in Windows to provide wear distribution for flash memory]

I should throw out the caveat that this is just what I read – I have little knowledge of any of these technologies except in the context of how they're used in Microsoft products.

