As we have pointed out in the past, and as we're sure you would have concluded logically on your own, an enterprise storage workload is quite different from desktop or client workloads. The differences between them affect how we test, analyze, and evaluate enterprise-oriented devices. The slide below, from last year’s Flash Memory Summit, gives a great overview of the differences.

SSDs are not easy to evaluate. Unlike traditional rotating disks, solid-state drives are affected by many factors that are difficult to control.
The Storage Networking Industry Association's (SNIA) Solid State Storage Technical Working Group, made up of SSD, flash, and controller vendors, has produced a testing procedure that attempts to control as many of the variables inherent to SSDs as possible. SNIA's Solid State Storage Performance Test Specification (SSS PTS) is a great resource for enterprise SSD testing. The specification does not define what tests should be run, but rather the way in which they are run. Its workflow is broken down into four parts:
- Purge: Purging puts the drive at a known starting point. For SSDs, this normally means Secure Erase.
- Workload-Independent Preconditioning: A prescribed workload that is unrelated to the test workload.
- Workload-Based Preconditioning: The actual test workload (4 KB random, 128 KB sequential, and so on), which pushes the drive towards a steady state.
- Steady State: The point at which the drive’s performance is no longer changing for the variable being tracked.
These steps are critical when testing SSDs. It is incredibly easy to under-condition a drive, measure its fresh-out-of-box behavior, and mistake that for steady state. They are just as important when switching between random and sequential writes.
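To make that last check concrete, here is a minimal sketch of how steady state can be detected. This is illustrative Python, not SNIA reference code, and it assumes the commonly cited PTS criteria: a five-round measurement window, a data excursion within 20% of the window average, and a best-fit slope excursion within 10% of it. Consult the specification itself for the normative definitions.

```python
# Hypothetical steady-state check in the spirit of the SSS PTS
# (illustrative only; the specification is the authoritative source).

def is_steady_state(window, max_excursion=0.20, max_slope=0.10):
    """Check a five-round measurement window against PTS-style criteria.

    window -- per-round results for the tracked variable (e.g., average IOPS)
    """
    assert len(window) == 5, "the PTS uses a five-round measurement window"
    avg = sum(window) / len(window)

    # Criterion 1: max data excursion (max - min) within 20% of the average.
    if max(window) - min(window) > max_excursion * avg:
        return False

    # Criterion 2: the least-squares linear fit, evaluated across the full
    # window width, must drift by no more than 10% of the average.
    n = len(window)
    x_mean = (n - 1) / 2
    slope = (sum((x - x_mean) * (y - avg) for x, y in enumerate(window))
             / sum((x - x_mean) ** 2 for x in range(n)))
    return abs(slope * (n - 1)) <= max_slope * avg

# IOPS still falling round over round -> not yet at steady state.
print(is_steady_state([42000, 38000, 35000, 33500, 33000]))  # False
# IOPS flat within a narrow band -> steady state reached.
print(is_steady_state([33100, 33000, 32900, 33050, 32950]))  # True
```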
The graph below demonstrates the rationale behind SNIA's guidelines using Intel's SSD 910. We first performed a Secure Erase (Purge), followed by five full disk writes of 4 KB random data (Workload-Independent Preconditioning). Then, we wrote the full capacity of the disk four times in a row with 8 MB sequential writes (Workload-Based Preconditioning). It wasn't until the fourth full disk write that we achieved Steady State.

For all performance tests in this review, the SSS PTS was followed to ensure accurate and repeatable results.
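For readers who want to script a sequence like the one above, the sketch below shows one way to drive it with the open-source fio tool. This is a hypothetical outline rather than our actual harness: the device path is a placeholder, the hdparm Secure Erase applies only to SATA drives (PCIe cards like the SSD 910 rely on vendor utilities instead), and every step destroys data on the target.

```python
# Hypothetical preconditioning driver (not our published harness).
# WARNING: running this destroys all data on DEV.
import subprocess

DEV = "/dev/sdX"  # placeholder target device

def fio_pass(job, rw, bs, loops):
    """Write the device end to end `loops` times with the given pattern."""
    subprocess.run([
        "fio", f"--name={job}", f"--filename={DEV}",
        f"--rw={rw}", f"--bs={bs}", f"--loops={loops}",
        "--direct=1", "--ioengine=libaio", "--iodepth=32",
    ], check=True)

# 1. Purge: ATA Secure Erase puts the drive at a known starting point
#    (a security password must be set before the erase is accepted).
subprocess.run(["hdparm", "--user-master", "u",
                "--security-set-pass", "p", DEV], check=True)
subprocess.run(["hdparm", "--user-master", "u",
                "--security-erase", "p", DEV], check=True)

# 2. Workload-independent preconditioning: five drive capacities of
#    4 KB random writes (random LBAs, so per-LBA coverage is statistical).
fio_pass("wipc", "randwrite", "4k", loops=5)

# 3. Workload-based preconditioning: four full sequential passes with
#    8 MB blocks, after which the drive in our example reached steady state.
fio_pass("wbpc", "write", "8m", loops=4)
```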
Finally, the SSS PTS mandates that all data patterns be random. This normalizes results across SSDs that optimize performance for compressible data. Because the compressibility of real-world data is very case-dependent, we use random data, where applicable, to represent the worst case. Note that Intel's SSD 910 does not perform any data compression, so its results with compressible and incompressible data are identical.
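To make the compressibility point concrete, here is a toy comparison (ours, not part of the PTS) that uses zlib as a stand-in for a controller's compression engine.

```python
# Why random data is the worst case for compressing controllers:
# a 4 KB block of random bytes barely compresses, while a repetitive
# block collapses to almost nothing.
import os
import zlib

BLOCK = 4096
samples = {
    "random":   os.urandom(BLOCK),   # incompressible, worst case
    "repeated": b"\x00" * BLOCK,     # trivially compressible, best case
}

for name, buf in samples.items():
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name:8s} -> {ratio:.0%} of original size")

# Random data stays at ~100% (it can even grow slightly), while the
# repeated block shrinks to ~1%. A drive that compresses internally
# writes far less NAND in the second case, inflating its benchmark numbers.
```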
Intel sent us an 800 GB sample of its SSD 910 for evaluation. We ran tests in both Maximum Performance mode and its Default mode. To simulate the performance of the 400 GB model, we configured only two of the four NAND modules, per Intel's instructions. The evaluation unit did not come with a full-height PCIe bracket, so testing was performed without one installed.

For comparison purposes, we're putting Intel's SSD 910 up against OCZ's Z-Drive R4 RM88 1.6 TB. Is this a fair fight? No, it isn't. The R4 sports twice the capacity and twice as many controllers, requires a ¾-height, full-length PCIe slot, and sells for somewhere around $7/GB. Since the R4 uses SandForce-based controllers, though, we wanted to see how much of a fight the SSD 910 could put up, especially since most of our testing uses incompressible data, a known weakness of SandForce's technology.
- SSD 910 Gets A True Enterprise-Class Workout
- When One SSD Is Actually Four
- Default Versus Maximum Performance Mode
- Test Setup And Benchmarks
- Testing Methodology
- Write Endurance
- 4 KB Random Performance
- Enterprise Workload Performance
- Sequential Performance
- Enterprise Video Streaming Performance
- Power Consumption
- Temperature
- Is Intel's SSD 910 Right For Your Enterprise Application?
Review sites never cover real-world use - that is, living with a drive day in and day out (reliability). It's not all about raw speed and performance.
As best I understand it, as described by the company that analyzed these failed drives, a block of NAND flash either went bad or became inaccessible to the controller, rendering the drives useless and unable to be accessed by the normal means of hooking them up to a SATA or USB port. Two drives, different NAND (50 nm for the G1 and 34 nm for the G2), same failure mode.
Once again, this is not definitive, just my observation, but I think review sites need to be a little more cautious about how they qualify Intel's reputation for quality and reliability, because from my perspective, Intel has neither, and I have since begun using Crucial SSDs. Hopefully, I will see much longer life from these new drives.
Intel, you should test these drives in that real-world application: EMC, VMware, and several databases. Carve out some LUNs and push the envelope. In this situation, should the device prove worthy, the $4,000 price tag will come down very fast, and the data center will put its trust in the product. So, for those reading this for a personal home workstation or gaming rig, you need not apply in this arena.
Intel is just about 18 months to two years from owning the data center. Even EMC is powered by Intel.
That's because this was not designed for consumers. It's not like they're marking the price up 1000% for shits and giggles. Enterprise hardware costs more to make because it must be much faster and much more reliable.
This drive, and every other piece of enterprise hardware out there, was never meant to be used by consumers.
Check out the Sequential Performance page; it lists both compressible and incompressible results. For all the other tests, random (incompressible) data was used.
I agree that we shouldn't use blanket statements, especially on quality, without going through the proper process. Intel has had many issues with their consumer lines, X25-M, 320, etc. I have personally worked with large distributions of their enterprise drives and they are rock solid. Other studies, including articles on this site, have shown the same in real-world scenarios.
Best-of-the-best NAND? Firmware? Overprovisioning?
I have to admit I lol'ed at this
So, in other words, you are saying that because of your experience with TWO drives, reviewers "need to be a little more cautious about how they qualify Intel's reputation for quality and reliability", in spite of the fact that Intel drives are universally acknowledged to be the most reliable in the industry.
Obviously, you got a bad break on the drives you purchased, but things like that can happen. If you want to change drives, try Samsung; they are also establishing a reputation for above-average reliability.
True, Intel is the reliable choice, but for consumer systems that is not necessarily the 'right' choice. My wife's system drive died last year (mid-Aug), and I replaced it with a 60GB OCZ Solid 3, which ran $80 on sale at the time ($100 retail). Today I can get a newer, faster, more reliable 60GB SSD for ~$50, roughly half the cost per unit of performance on the same size drive. Next year, 60GB drives will not halve in price again; instead, we are going to see something more like traditional drives, where there is a base floor of, say, $40 for a 60GB drive, then $50-60 for 120GB, and $75-100 for 240GB. In fact, we are already beginning to see this sandwiching of prices. But because SSDs are simpler to make than HDDs (no motor, no actuator, etc.), the floor may actually be lower than what HDDs hit.
But my point is that for the cost of your single drive, I can re-purchase ~2-3x over the same period and still save money, getting a massive upgrade in performance and/or size with each replacement. And because everything lives as an image, it is just a matter of a few hours of downtime to hike up to Microcenter and deploy the new drive.
Buying for stability makes sense in a mission-critical environment, or in a slow-moving or mature technology. But in a market that is moving this quickly, it really makes more sense to buy cheap and plan on replacing the drive in a year or two. Otherwise, your initial investment becomes a boat anchor tying you to antiquated technology.
BTW, 1 year out and the Solid 3 still runs great.
Early MLC OCZ drives (Core?): 2 x 128GB and 1 x a smaller one (I forget the capacity) = all returned. Refund not given, so when the RMA replacements arrived, they went straight to Flea-bay, unopened. I pity the buyers.
Intel: 1 x X25-M MLC 160GB, arrived DOA. The replacement was sent quickly: 3 yrs later, not a hiccup. Running as an OS drive with databases in the background, too. Installs were not as fast as hoped, due to lower write performance, but no real complaints, and 160GB was a nice size.
Intel: 1 x X25-E SLC 64GB, really what you would hope for in terms of performance: absolutely no problems to date. Same usage as the X25-M. Installs are lightning quick. Nothing to fault except capacity.
Kingston: 2 x MLC SSDNow V+ (100GB?). Both failed within 6 months; yet to return the second one. Usage: CrapBook, email, general use.
Patriot: 1 x MLC Wildfire 240GB; waited until the BSOD issue was resolved before purchasing, updated the FW right from the start, faultless to date. Usage: same as the Intel drives above. Under SATA-2, I reckon the X25-E is faster, though. No space problems.
Hope that helps someone...
The technology needs to mature still.
Hard drives were the same way... (MFM/RLL: notorious for bad sectors out of nowhere, snail performance. IDE: getting better, bad-sector problems starting to go away. SATA: bad sectors are caused by YOU now =P The interface now outpaces the theoretical maximum physical speed.)
In a couple of years, you will all only be complaining about the size of the chips and wishing they were the size of your thumb drives... =P
No manufacturer will offer a 5-year warranty if less than 99% of the drives will meet that criterion... it is expensive to RMA products that fail during the warranty period.