Storage Bench v1.0, In More Detail
SSD manufacturers prefer that we benchmark drives the way they behave fresh out of the box, because solid-state drives slow down once you start using them. Given enough time, though, an SSD settles into a steady-state performance level, and its benchmark results at that point reflect consistent long-term use. In general, reads are a little faster, writes are slower, and erase cycles are as slow as they'll ever be.
We want to move away from benchmarking SSDs fresh out of the box whenever possible because you only really get that performance for a limited time. After that, you end up with steady-state performance until you perform a secure erase and start all over again. Now, we don't know about you, but we don't reformat our production workstations every week. So, while performance right out of the box is an interesting metric, it's not nearly as relevant in the grand scheme of things. Steady-state performance is what ultimately matters.
While this is a new move for us, IT professionals have long used this approach to evaluate SSDs. That's why the Storage Networking Industry Association (SNIA), a consortium of producers and consumers of storage products, recommends benchmarking steady-state performance. It's really the only way to examine the true performance of an SSD in a way that represents what you'll actually see over time.
There are multiple ways to get an SSD to its steady state, but we're going to use a proprietary storage benchmark from Intel. This is a trace-based benchmark, which means we're using an I/O recording to measure relative performance. Our trace, which we're dubbing Storage Bench v1.0, comes from a two-week recording of my own personal machine, and it captures the level of I/O you would see during the first two weeks of setting up a computer. The software installed during the trace includes:
- Games like Call of Duty: Modern Warfare 2, Crysis 2, and Civilization V
- Microsoft Office 2010 Professional Plus
- Adobe Photoshop CS5
- Various Canon and HP Printer Utilities
- LCD Calibration Tools: ColorEyes, i1Match
- General Utility Software: WinZip, Adobe Acrobat Reader, WinRAR, Skype
- Development Tools: Android SDK, iOS SDK, and Bloodshed
- Multimedia Software: iTunes, VLC
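To make the trace-based approach concrete, here's a minimal sketch of what replaying an I/O recording looks like. Intel's actual trace format is proprietary, so the `(op, offset, length)` tuples and the scratch-file target below are assumptions for illustration only:

```python
import os
import tempfile
import time

# Hypothetical trace entries as (op, offset_bytes, length_bytes) tuples.
# Intel's actual trace format is proprietary; this layout is an assumption.
trace = [
    ("read", 0, 4096),
    ("write", 8192, 4096),
    ("read", 4096, 4096),
]

# Replay against a scratch file rather than a raw device.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 1024 * 1024)  # 1 MiB backing "disk"
    path = f.name

fd = os.open(path, os.O_RDWR)
bytes_moved = 0
start = time.perf_counter()
for op, offset, length in trace:
    if op == "read":
        bytes_moved += len(os.pread(fd, length, offset))
    else:
        bytes_moved += os.pwrite(fd, b"\xff" * length, offset)
elapsed = time.perf_counter() - start
os.close(fd)
os.remove(path)

print(f"Replayed {len(trace)} ops ({bytes_moved} bytes) in {elapsed:.6f} s")
```

A real replay would also preserve the recorded inter-arrival times and queue depths, which is what makes trace playback more representative than a synthetic workload.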
The I/O workload is somewhat moderate. I read the news, browse the Web for information, read several white papers, occasionally compile code, run gaming benchmarks, and calibrate monitors. On a daily basis, I edit photos, upload them to our corporate server, write articles in Word, and perform research across multiple Firefox windows.
The following are stats on the two-week trace of my personal workstation:
|Statistic|Storage Bench v1.0|
|---|---|
|Read Operations|7,408,938|
|Write Operations|3,061,162|
|Data Read|84.27 GB|
|Data Written|142.19 GB|
|Max Queue Depth|452|
According to the stats, I'm writing more data than I'm reading over the course of two weeks. However, this needs to be put into context. Remember that the trace includes the I/O activity of setting up the computer. A lot of that information is one-touch, since it isn't accessed repeatedly. If we exclude the first few hours of the trace, the amount of data written drops by over 50%. So, on a day-to-day basis, my usage pattern evens out to a fairly balanced mix of reads and writes (~8-10 GB/day). That seems pretty typical for the average desktop user, though the balance should shift toward reads for folks who consume streaming media more heavily.
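The day-to-day figure is easy to sanity-check with back-of-the-envelope arithmetic. The exact post-setup write volume isn't published, so the estimate below assumes exactly half of the recorded writes survive the cut:

```python
# Back-of-the-envelope check of the daily I/O averages from the trace stats.
data_read_gb = 84.27      # from the trace table
data_written_gb = 142.19  # from the trace table
days = 14

# Including the one-time setup I/O:
total_per_day = (data_read_gb + data_written_gb) / days  # ~16.2 GB/day

# Excluding the first few hours cuts writes by over half; the exact
# post-setup figure isn't given, so assume exactly 50% here.
steady_per_day = (data_read_gb + data_written_gb * 0.5) / days  # ~11.1 GB/day

print(f"{total_per_day:.1f} GB/day including setup")
print(f"{steady_per_day:.1f} GB/day with setup writes excluded")
```

With the setup-period reads discounted as well, the estimate slides further down toward the ~8-10 GB/day range quoted above.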
On a separate note, we specifically avoided creating a really big trace by installing multiple things over the course of a few hours, because that really doesn't capture real-world use. As Intel points out, traces of this nature are largely contrived because they don't take into account idle garbage collection, which has a tangible effect on performance (more on that later).
I find it interesting that SATA 3 doesn't make a difference in file copies. Most SATA 3 drives cost the same as SATA 2 drives, so there's no need to save a few dollars.
I asked before but no one answered, so here goes: if SSDs are supposed to be more reliable than spinning drives, why are most warranties 3 years instead of the usual 5 years on high-end conventional spinning drives? It seems to me like the companies are not too confident in their products, and that's why I ask this question and the one that preceded it. It would be nice to get some honest answers.
Well, the warranties are mostly 3 years, but some drives like Intel's 320s and Plextor's M3S drives do have 5 years of coverage.
As for stress testing... well, some have taken the matter into their own hands to answer that very question. So far, endurance is far higher than anyone imagined. For complex reasons, a drive writing only 10 GB a day might not wear out its NAND in over a century. A drive's endurance is typically way underestimated. No one is going to wear out any 3x nm or 2x nm NAND in 5 years, except in the most extreme cases. Most drives die from firmware problems, physical damage to the PCB or components, or some other unknown phenomenon. Only the factory could do a proper autopsy, and since the firmware, FTL, controller, etc. are usually trade secrets or covered under NDA, no one in the know is going to volunteer.
There is an SSD endurance thread on the XtremeSystems forum that tracks these experiments.
I know that when I first got my 1st-gen OCZ Vertex, not long after it launched, I was always the first person on the map in Counter-Strike. While other players were still loading the level, I would rush in from the side, lob a grenade, and take a few people out because they didn't think anyone could get there so fast. (Now that more people have SSDs, it's not quite so funny anymore.)
I do appreciate being able to open PS CS5 in less than 2 seconds (for quick photo re-edits), and Premiere opens a lot faster too. For transferring large RAW photo folders (think 50+ GB total) to and from backup HDDs, I could use the extra MB/s from these new 6 Gb/s versions.