Benchmarking For The Enterprise: A Whole New World
Just as server- and workstation-oriented processors have to be tested differently than desktop CPUs, so too does enterprise-oriented storage need to be evaluated in a unique way.
We got the above slide from last year's Flash Memory Summit in Santa Clara, CA. The assumption for enterprise storage is that it's generally full and often being accessed around the clock. As a result, fresh-out-of-box/burst performance is largely irrelevant. There's little to no idle time for background garbage collection and TRIM commands to recover performance, which means an SSD in such a taxing environment is going to hit its steady state and stay there.
Ultimately, we have to use a different methodology to get to and then test steady-state performance. The goal is to benchmark at a point where an SSD's performance no longer changes over time, necessitating constant writes in order to determine sustained performance. The chart above illustrates how, after some period of use, an SSD drops from its out-of-box performance level to a more sustainable steady-state level.
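The "performance no longer changes over time" criterion can be made concrete with a windowed check on measured IOPS. Here's a minimal sketch; the window size and thresholds are illustrative assumptions (loosely modeled on SNIA-style steady-state criteria), not the exact rule used in our lab:

```python
def is_steady_state(samples, window=5, max_excursion=0.20, max_slope=0.10):
    """Decide whether the last `window` IOPS samples look steady.

    Two checks, both relative to the window average (thresholds are
    illustrative): every sample stays within max_excursion of the
    average, and the least-squares trend across the window drifts by
    no more than max_slope of the average.
    """
    if len(samples) < window:
        return False
    win = samples[-window:]
    avg = sum(win) / window
    # Excursion check: no sample strays too far from the window average.
    if any(abs(s - avg) > max_excursion * avg for s in win):
        return False
    # Slope check: least-squares fit over the window, then total drift
    # across the window compared against the average.
    x_mean = (window - 1) / 2
    slope = (sum((x - x_mean) * (y - avg) for x, y in zip(range(window), win))
             / sum((x - x_mean) ** 2 for x in range(window)))
    return abs(slope * (window - 1)) <= max_slope * avg
```

In practice you'd sample IOPS at a fixed interval while hammering the drive with writes, and only start recording benchmark results once this returns True for the workload in question.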
In order to attain that second point, we precondition our SSDs before running our enterprise benchmarks. But because every drive's steady-state point is different (and because there are multiple steady states, depending on the workload you run), we specifically subject our SSDs to two types of conditioning:
- For our 4 KB random, database, file server, and Web server tests, we write 3x full capacity of the drive using random writes.
- For our 128 KB sequential tests, we write 3x full capacity of the drive sequentially.
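A preconditioning pass like the first one can be expressed as an fio job file. This is a hypothetical sketch, not our exact test configuration; the device path is an assumption you'd change for your own system, and writing to it destroys its contents:

```ini
; Hypothetical fio job: random-write preconditioning pass.
; filename is an assumption -- point it at the drive under test.
; size=100% covers the full capacity; loops=3 repeats the pass,
; matching the "3x full capacity" rule above.
[precondition-random]
filename=/dev/nvme0n1
rw=randwrite
bs=4k
ioengine=libaio
iodepth=32
direct=1
size=100%
loops=3
```

The sequential variant for the 128 KB tests would swap in `rw=write` and `bs=128k`.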
Perhaps the Enterprise SSD Fairy will bring you a Hitachi UltraStar with Intel's 6 Gb/s controller. I'd be eager to see how it compares.
There is no substitute for SLC though.
...fullish of cash? Definitely. Foolish? Probably not.
You've clearly not understood the purpose of this article. Stick to commenting the desktop drive reviews in the future, please.
Thank you for this review, and especially your estimations on the endurance of the drive. It's something that's damn near impossible for us IT professionals to get accurate estimations of in the real world. For some reason, bosses tend to want the expensive hardware to be put to use instead of being thoroughly tested.
More of these types of articles please! :]
Even when the Intel SSD already has an endurance longer than the refresh cycle for your tech stack?
"Back in my day, storage drives used to have moving parts. Now it's all solid state."
Since I'm not a super-sized enterprise, the cost/benefit calculation would be difficult for me. But I know firsthand the money that financial institutions, for example, push into their data centers, and for those folks $7K isn't out of the question.
Interesting SSD. If the prices come down and the warranty is extended, then IMO it would be something to consider and compare against Intel's products.