PCMark 8 Advanced Workload Performance
Futuremark created what we consider the best SSD benchmark to date. It measures performance under both heavy and light use over time. The advanced storage tests run from a command-line interface, conditioning the drives before simulating heavy workloads. After those finish, five-minute idle intervals are inserted between each test, allowing the products we're reviewing to recover through garbage collection, TRIM and wear leveling. This mirrors the typical consumer workloads we experience every day.
To learn how we test advanced workload performance, please click here.
The throughput test illustrates the full scope of performance, from the intensive degrade phases to the recovery periods, which better represent behavior under normal use.
Isolating the heavy workloads that might represent what you'd see in a workstation environment, the SM2256 finishes at the bottom. Just remember, SSDs based on the SM2256 won't be designed for prosumer customers. We still like to run this test though, since it sometimes exposes diamonds in the rough.
These results are more indicative of client-oriented workloads, with added idle time to simulate the environments we all face. As you can see, the SM2256 performs much better here than it did in our steady state benchmark, but it's still slower than many of the other products available today.
For me, latency tests are more important than the throughput results. I don't watch file transfers and marvel at how fast they happen. Rather, I want a responsive experience more than anything. When an application, webpage or file takes longer than expected to open, I start wondering what's wrong. Talk about a thankless subsystem.
Low latency is what makes your computer feel fast, so you want the lines on these charts to come as close to the X axis as possible. The testing is performed in a steady state, which most of us won't see from our SSDs.
Futuremark's consumer workload tests show that the SM2256 pre-production drive encounters higher latency than the popular drives already available on the market. Consistency and power management optimizations usually happen late in the firmware programming cycle. With that in mind, we can't hold the SM2256's feet to the fire just yet. We need to wait for retail products before passing judgment.
If you have a really heavy workload though, by all means get a high-end unit.
While they're making 3D NAND, why don't they add an extra layer for parity and then use the RAID-5 algorithm?
E.g., 8 layers for data, adding 1 extra for parity. Not that much extra overhead, but the data will be much more reliable.
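The scheme the commenter describes is just RAID-5-style XOR parity. A minimal Python sketch, treating one byte from each of 8 hypothetical layers as a stripe (the byte-per-layer model and function names are my own illustration, not anything a real NAND controller does):

```python
from functools import reduce
from operator import xor

def parity_byte(layers):
    """XOR all data-layer bytes together to form the parity layer."""
    return reduce(xor, layers)

def reconstruct(layers, parity, missing_index):
    """Rebuild the byte from one failed layer by XOR-ing the parity
    with all surviving layers (standard RAID-5 recovery)."""
    acc = parity
    for i, b in enumerate(layers):
        if i != missing_index:
            acc ^= b
    return acc

# One byte from each of 8 data layers, plus the computed parity byte.
data = [0x3C, 0xA5, 0x00, 0xFF, 0x12, 0x77, 0x9E, 0x41]
p = parity_byte(data)

# If we *know* layer 3 failed, parity lets us rebuild it exactly.
assert reconstruct(data, p, 3) == data[3]
```

The catch, as the thread goes on to discuss, is that parity alone can only repair an erasure at a known position; it cannot by itself tell you which layer is bad.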
No, you don't. Each sector has a CRC, right?
So if a sector read fails its CRC check, simply recompute the CRC, replacing each layer's bit in turn with the bit reconstructed from the RAID parity.
All of those candidates will still fail except the one where the faulty bit was replaced.
And these CRCs can all be calculated in parallel, so there would be zero time overhead.
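As a rough sketch of that trial-substitution idea (Python purely for illustration; the `locate_and_fix` name, the byte-per-layer sector model, and the use of zlib's CRC-32 are my simplifications, not a real controller design):

```python
import zlib
from functools import reduce
from operator import xor

def locate_and_fix(sector, parity, crc_expected):
    """Try each layer in turn: replace its byte with the value implied
    by the parity layer, then re-run the CRC.  The one substitution
    that makes the CRC pass pinpoints the bad layer."""
    for i in range(len(sector)):
        others = reduce(xor, (b for j, b in enumerate(sector) if j != i))
        candidate = sector[:i] + [parity ^ others] + sector[i + 1:]
        if zlib.crc32(bytes(candidate)) == crc_expected:
            return i, candidate
    return None, sector  # more than one bad layer: unrecoverable here

# One byte from each of 8 data layers, plus parity and a stored CRC.
data = [0x3C, 0xA5, 0x00, 0xFF, 0x12, 0x77, 0x9E, 0x41]
parity = reduce(xor, data)
crc = zlib.crc32(bytes(data))

corrupted = data.copy()
corrupted[5] ^= 0x08          # single-bit error somewhere in layer 5

bad_layer, fixed = locate_and_fix(corrupted, parity, crc)
assert bad_layer == 5 and fixed == data
```

Each loop iteration is independent, which is why the commenter's point about computing the candidate CRCs in parallel holds. The scheme still relies on the CRC reliably rejecting every wrong substitution, which is where the silent-error objection below comes in.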
In the case of a silent error, though, which is what you get from an even-count bit error when using parity alone, you have no idea where the error is, or even that there was an error in the first place. That's why more complex error detection and correction block codes exist; they're used wherever read/receive errors carry a high cost, whether in performance, reliability, money or lost data.
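The silent-error case is easy to demonstrate: flip the same bit position in an even number of layers and the parity check still passes. A toy Python illustration under the same byte-per-layer stripe model used above (my own simplification, not real controller behavior):

```python
from functools import reduce
from operator import xor

# One byte from each of four data layers, plus their XOR parity.
data = [0x3C, 0xA5, 0x00, 0xFF]
parity = reduce(xor, data)

# Flip the same bit in two different layers: an even-count error.
corrupted = data.copy()
corrupted[0] ^= 0x10
corrupted[2] ^= 0x10

# The two flips cancel in the XOR, so the parity check still passes
# even though the data is wrong: a silent error.
assert reduce(xor, corrupted) == parity
assert corrupted != data
```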
BUT, one significant critique I have is the density limitation. Everything here was based on the ~250GB drives. Comparing drives with the exact same name but different capacities is akin to comparing two entirely different drives.
I realize producing the data can be time-consuming, but having the same information at three capacity points - lowest, middle, and highest - would be extremely helpful for purchasing decisions.