The popular consensus on three-bit-per-cell flash has shifted over the last few months. I've stayed quiet about the Samsung 840 EVO performance degradation issue, but I don't think Samsung can reverse the slow-down without pushing endurance below warranty levels. Samsung's fix was a firmware update paired with a tool that read and rewrote the data on the drive - effectively static data rotation. On a drive holding a lot of data, that meant quite a bit of writing. Wear-leveling already runs in the background on every SSD, invisible to the user; piling on extra wear-leveling passes to head off bit shifts takes a toll in the form of write amplification. The same 1GB written by the host may translate into many times that in flash writes over a product's life. Reining in those error-driven rewrites calls for stronger error correction, such as LDPC. Samsung never disclosed whether the 840 EVO uses BCH or LDPC, but we do know there is an issue Samsung failed to fix the first time around. I'm convinced the slow read performance comes from read retries; latency increases with each unsuccessful attempt to fetch the data.
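The write amplification arithmetic is simple to sketch. The numbers below are invented for illustration - they are not measurements from the 840 EVO or any other drive:

```python
# Hypothetical illustration of write amplification: host writes vs. NAND writes.
# All figures are made-up examples, not measured values.
host_writes_gb = 1.0          # data the OS actually asked to write
background_rewrites_gb = 2.5  # wear-leveling / static-data refresh traffic

nand_writes_gb = host_writes_gb + background_rewrites_gb
waf = nand_writes_gb / host_writes_gb  # write amplification factor

print(f"WAF = {waf:.1f}")  # WAF = 3.5: each host GB costs 3.5 GB of flash wear
```

A higher WAF consumes program/erase cycles faster, which is why aggressive static data rotation eats into a TLC drive's endurance budget.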
We know what issues lie ahead due to smaller flash lithography, and the path is clear on how to limit their impact on user experience. At this point, it's just a race to bring advanced controllers to market that successfully mitigate read retry complications. Over the coming year, we'll hear a lot about LDPC technology used to increase the lifespan of low-cost TLC NAND.
Once these controllers reach shipping products, the cost of solid-state storage will keep falling. 256GB SSDs are already down around $100 for many models, and we expect that number to drop rapidly over the next six months. The $50 256GB SSD is coming; expect to see it by Computex in June.
Silicon Motion went from serving lower-tier manufacturers like PNY and Angelbird to tier-one vendors like Crucial and SanDisk in a single product cycle. The SM2246EN is a great entry-level four-channel controller with better-than-expected performance and excellent power consumption numbers. We're not surprised the top SSD companies released products based on the SM2246EN. Those established relationships with Silicon Motion should help with the development of drives based on the SM2256 we tested today.
SK Hynix, Micron and Toshiba have three-bit-per-cell flash ready, but SSD vendors lack access to third-party controllers with the advanced LDPC needed to tame its endurance. Once Silicon Motion has the SM2256 ready for mass consumption, we expect an avalanche of new product announcements.
If you have a really heavy workload though, by all means get a high-end unit.
While they're making 3D NAND, why don't they add an extra layer for parity and then use the RAID 5 algorithm?
e.g. 8 layers for data, adding 1 extra for parity. Not that much extra overhead, but the data would be much more reliable.
No you don't. Each sector has a CRC, right?
So if the sector read fails its CRC check, simply recompute the CRC, substituting each layer in turn with the bit reconstructed from the RAID parity.
All of them will still fail except the one where the faulty bit was replaced.
And these CRCs can all be calculated in parallel, so there would be zero time overhead.
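The scheme described in this comment can be sketched in a few lines. This is an illustrative toy, assuming XOR parity across 8 "layers" and a per-sector CRC-32; all names and data sizes are invented for the demo:

```python
# Sketch of the proposed scheme: 8 data layers + 1 XOR parity layer,
# with a per-sector CRC used to *locate* a single faulty layer.
import zlib
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_stripe(layers):
    """Compute the parity slice and the CRC over the concatenated sector."""
    parity = reduce(xor_bytes, layers)
    crc = zlib.crc32(b"".join(layers))
    return parity, crc

def locate_and_repair(layers, parity, crc):
    """If the sector fails CRC, substitute each layer in turn with its
    parity reconstruction; the substitution that passes CRC marks the
    faulty layer. Returns (bad_layer_index, repaired_sector)."""
    if zlib.crc32(b"".join(layers)) == crc:
        return None, b"".join(layers)                # sector is clean
    for i in range(len(layers)):
        others = [s for j, s in enumerate(layers) if j != i]
        rebuilt = reduce(xor_bytes, others, parity)  # parity XOR all other layers
        candidate = layers[:i] + [rebuilt] + layers[i + 1:]
        if zlib.crc32(b"".join(candidate)) == crc:   # these checks could run in parallel
            return i, b"".join(candidate)
    return -1, None                                  # multi-layer error: uncorrectable

# Demo: corrupt one layer and locate it.
original = [bytes([i] * 4) for i in range(8)]
parity, crc = make_stripe(original)
damaged = list(original)
damaged[3] = b"\xff\xff\xff\xff"                     # simulated bit errors in layer 3
bad, fixed = locate_and_repair(damaged, parity, crc)
print(bad, fixed == b"".join(original))              # 3 True
```

Note the limitation the next reply points out: this only works when the damage is confined to a single layer and the CRC actually catches it.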
In the case of a silent error though, which is what you get with an even-count bit error when using parity alone, you have no idea where the error is - or even that there was an error in the first place. That's why more complex error detection and correction block codes exist, and why they are used wherever read/receive errors carry a high cost in performance, reliability, money or lost data.
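The failure mode described here is easy to demonstrate: flipping an even number of bits leaves an XOR parity bit unchanged, so the check passes and the corruption goes unnoticed. A minimal sketch, with made-up data:

```python
# Two flipped bits in one parity-protected word go undetected: flipping an
# even number of bits leaves the XOR (even) parity bit unchanged.
def parity(word):
    return bin(word).count("1") % 2

stored = 0b10110010
p = parity(stored)                  # parity bit written alongside the data

corrupted = stored ^ 0b00000110     # two bits flip during retention
print(parity(corrupted) == p)       # True: parity check passes, error is silent
```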
BUT, one significant critique I have is the capacity limitation. Everything here was based on ~250GB drives. Two drives with the exact same name but different capacities can behave like entirely different drives.
I realize producing the data is time-consuming, but having the same information at three capacity points - lowest, middle and highest - would be extremely helpful for purchasing decisions.