Silicon Motion SM2256 Preview
Final Thoughts
The popular consensus on three-bit-per-cell flash has changed over the last few months. I've remained quiet about the Samsung 840 EVO performance degradation issue, but I don't think Samsung can reverse the slow-down without lowering endurance below warranty standards. Samsung's fix consisted of a firmware update and a tool that read and rewrote the information on the drive - effectively static data rotation. If a drive holds a lot of data, that means quite a bit of extra writing. Wear-leveling already takes place on SSDs as a background operation we never see; piling on rewrites to avoid bit shifts takes a further toll in the form of write amplification: the same 1GB written by the host may be written to flash many times over the course of a product's life. Cutting write amplification down is where technologies like LDPC come in - the better a controller can read marginal cells, the less often data has to be refreshed. Samsung never disclosed whether the 840 EVO uses BCH or LDPC, but we do know there is an issue that Samsung failed to fix the first time around. I'm convinced the slow read performance comes from read retries; latency increases with each unsuccessful attempt to reach the data.
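To put rough numbers on that, here is a back-of-the-envelope sketch; the figures are purely illustrative, not measurements from the 840 EVO or any other drive:

```python
# Illustrative write amplification arithmetic - all figures are made up.
host_writes_gb = 1.0        # data the user actually asked to write
rotation_gb = 3.0           # extra rewrites from static data rotation (refreshes)
wear_leveling_gb = 0.5      # background block moves for wear-leveling

nand_writes_gb = host_writes_gb + rotation_gb + wear_leveling_gb
write_amplification = nand_writes_gb / host_writes_gb
print(f"Write amplification: {write_amplification:.1f}x")  # 4.5x in this example
```

Every extra multiple burns through the NAND's finite program/erase budget that much faster, which is why a drive that must constantly refresh its data gives up endurance.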
We know what issues lie ahead as flash lithography shrinks, and the path to limiting their impact on the user experience is clear. At this point, it's simply a race to bring advanced controllers to market that successfully mitigate read-retry complications. Over the coming year, we'll hear a lot about LDPC technology used to extend the lifespan of low-cost TLC NAND.
Once these controllers ship in products, the cost of solid-state storage will drop. Many 256GB SSDs are already down around $100, and we see that number falling rapidly over the next six months. The $50 256GB SSD is coming; expect that projection to be realized by Computex in June.
Silicon Motion went from serving lower-tier manufacturers like PNY and Angelbird to tier-one vendors like Crucial and SanDisk in a single product cycle. The SM2246EN is a great entry-level four-channel controller with better-than-expected performance and excellent power consumption numbers, so we're not surprised the top SSD companies released products based on it. An established relationship with Silicon Motion should help with the development of drives based on the SM2256 we tested today.
SK Hynix, Micron and Toshiba have three-bit-per-cell flash ready, but SSD vendors lack access to third-party controllers with LDPC advanced enough to tame its endurance. Once Silicon Motion has the SM2256 ready for production, we expect an avalanche of new product announcements.
Maxx_Power: I hope this doesn't encourage manufacturers to use even lower-endurance NAND in their SSDs.
InvalidError:
15495485 said: I hope this doesn't encourage manufacturers to use even lower endurance NAND in their SSDs.
It most likely will. Simply going from 20-22nm TLC to 14-16nm will likely ensure that, with smaller trapped charge, increased leakage, and more exotic dielectric materials needed to keep the first two in check.
unityole: This is the way we're heading; the only way to get a good-grade SSD is probably to spend big. Good grade as in HET MLC flash and PCIe 3.0 performance.
alextheblue: As long as they don't corrupt data, I'm not really freaking out over endurance. When a HDD died, that was a major concern. If an SSD that's a few years old starts to really reach the end of its useful life, as long as I can recover everything and dump it onto a new drive, I'm happy.
If you have a really heavy workload, though, by all means get a high-end unit.
JoeMomma: I am using a small SSD for Windows that is backed up weekly. Saved my butt last week. Speed is key; I can provide my own reliability.
jasonkaler: Here's an idea: while they're making 3D NAND, why don't they add an extra layer for parity and then use the RAID-5 algorithm? E.g., 8 layers for data plus 1 extra for parity. Not that much extra overhead, but data would be much more reliable.
InvalidError:
15517825 said: Why don't they add an extra layer for parity and then use the RAID-5 algorithm?
SSD/MMC controllers already use FAR more complex and rugged algorithms than plain parity. Parity only lets you detect single-bit errors. It only allows you to "correct" errors if you know where the error was, such as a drive failure in a RAID3/5 array. If you want to correct arbitrary errors without knowing their location beforehand, you need block codes, and those require about twice as many extra bits as the number of correctable bit-errors you want to implement. (I say "about twice as many" because twice is the general requirement for uncorrelated, non-deterministic errors. If typical failures on a given medium tend to be correlated or deterministic, then it becomes possible to use fewer than two coding bits per correctable error.)
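The distinction is easy to see in code. Below is a minimal Python sketch of the XOR-parity scheme being proposed - hypothetical helper names, nothing from a real controller, which would use BCH/LDPC block codes instead - showing that parity can rebuild a layer, but only when the failed layer is already known:

```python
# Minimal sketch of RAID-5-style XOR parity across NAND "layers".
# Hypothetical example - real SSD controllers use BCH/LDPC block codes.

def make_parity(layers):
    """Parity layer = bitwise XOR of all data layers."""
    parity = bytearray(len(layers[0]))
    for layer in layers:
        for i, byte in enumerate(layer):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_layer(bad_index, layers, parity):
    """Recover one layer - but ONLY when we already know which one failed."""
    rebuilt = bytearray(parity)
    for idx, layer in enumerate(layers):
        if idx == bad_index:
            continue  # skip the known-bad ("erased") layer
        for i, byte in enumerate(layer):
            rebuilt[i] ^= byte
    return bytes(rebuilt)

data = [bytes([d] * 4) for d in range(8)]         # 8 data layers
parity = make_parity(data)
assert rebuild_layer(3, data, parity) == data[3]  # known location: recoverable
# With an unknown error location, parity alone cannot say which layer is wrong.
```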
jasonkaler:
15519330 said: If you want to correct arbitrary errors without knowing their location beforehand, you need block codes, and those require about twice as many extra bits as the number of correctable bit-errors you want to implement.
No, you don't. Each sector has a CRC, right? So if a sector read fails its CRC, simply recompute the CRC with each layer in turn replaced by the value rebuilt from the RAID parity. All the candidates will fail except the one that swapped out the faulty layer. And these CRCs can all be calculated in parallel, so there would be zero time overhead. Easy, huh?
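For concreteness, a sketch of that trial-and-check idea, under stated assumptions: the names are hypothetical, zlib.crc32 stands in for whatever per-sector CRC the hardware would keep, and the stored parity itself is assumed to be intact.

```python
import zlib
from functools import reduce

def xor_bytes(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def recover_sector(layers, parity, crc_expected):
    """Try each layer as the suspect, rebuild it from parity, keep the CRC match."""
    for bad in range(len(layers)):
        others = [l for i, l in enumerate(layers) if i != bad]
        rebuilt = reduce(xor_bytes, others, parity)        # parity XOR good layers
        candidate = layers[:bad] + [rebuilt] + layers[bad + 1:]
        if zlib.crc32(b"".join(candidate)) == crc_expected:
            return candidate  # single-layer fault located and corrected
    return None  # more than one faulty layer: trial-and-check breaks down
```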
InvalidError:
15536005 said: Each sector has a CRC, right? So if a sector read fails its CRC, simply recompute the CRC with each layer in turn replaced by the value rebuilt from the RAID parity.
If you know which drive/sector is bad thanks to a read error, your scheme is needlessly complicated: you can simply ignore ("erase") the known-bad data and recalculate it by XORing all the remaining volumes. But you needed the extra bits from the HDD's "CRC" to know the sector was bad in the first place.
In the case of a silent error, though, which is what you get from an even-count bit error when using parity alone, you have no idea where the error is, or even that there ever was an error in the first place. That's why more complex error detection and correction block codes exist and are used wherever read/receive errors carry a high cost, be it in performance, reliability, money or loss of data.
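That silent-error case takes only a few lines to demonstrate. This self-contained sketch (invented data) flips the same bit position in two different layers, so the cross-layer XOR is unchanged and the parity check never notices:

```python
from functools import reduce

def xor_all(layers):
    """Bitwise XOR across all layers - the parity computation."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), layers))

data = [bytes([d] * 4) for d in range(8)]   # 8 data layers
parity = xor_all(data)

corrupted = [bytearray(l) for l in data]
corrupted[2][0] ^= 0x01                     # flip one bit in layer 2...
corrupted[5][0] ^= 0x01                     # ...and the same bit position in layer 5
corrupted = [bytes(l) for l in corrupted]

assert xor_all(corrupted) == parity         # parity still balances: silent error
assert corrupted != data                    # yet the data is corrupt
```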
Eggz: I know this article was about the SM2256, but the graphs really made the SanDisk PRO shine bright! In the latency tests, which content creators care about, nothing seemed to faze it; it did better than even the 850 Pro - consistently!
BUT, one significant critique I have is the capacity limitation. Everything here was based on the ~250GB drives. Comparing drives with the exact same name but at different capacities is akin to comparing two entirely different drives.
I realize producing the data can be time-consuming, but having the same information at three capacity points - lowest, middle and highest - would be extremely helpful for purchasing decisions.