One issue that PCIe-based SSDs face is thermal management. The SSD 910 packs a lot of hardware onto a very compact card, which means it needs adequate airflow to stay cool. Intel is very upfront about what it takes: Default mode requires 200 LFM of airflow, while Maximum Performance mode necessitates 300 LFM.
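For context, LFM (linear feet per minute) measures air velocity rather than volume, while fans are usually rated in CFM. A minimal sketch of the conversion in Python, using a made-up slot cross-section (our assumption, not Intel's spec):

```python
# Rough illustration: converting a linear airflow requirement (LFM) into a
# volumetric figure (CFM). CFM = LFM x cross-sectional area in square feet.

def lfm_to_cfm(lfm: float, width_in: float, height_in: float) -> float:
    """Volumetric airflow (CFM) through a rectangular opening of the given size."""
    area_ft2 = (width_in / 12.0) * (height_in / 12.0)
    return lfm * area_ft2

# Hypothetical cross-section roughly the size of one expansion-slot bay:
# 1.0" wide x 4.4" tall (illustrative numbers only, not Intel's spec).
for mode, lfm in (("Default", 200), ("Maximum Performance", 300)):
    print(f"{mode}: {lfm} LFM -> {lfm_to_cfm(lfm, 1.0, 4.4):.1f} CFM per slot")
```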

To test the drive's thermal performance, we used a 1U server from Supermicro that provides cooling typical of what we'd expect from most other 1U servers. The tests were performed with the machine set to use its high and low fan settings, which should give us results at both extremes.

Each of the four 200 GB modules has its own temperature sensor, and we used the Intel Data Center Tool to record their readings. Because each sensor sits at a different location on the board and receives a different amount of airflow, the readings vary. The graph below shows the delta between the four sensors at idle.
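For readers who want to replicate this kind of logging without Intel's tool, here is a minimal sketch that polls each module's SMART temperature via smartctl and reports the spread across the sensors. The device paths and output pattern are assumptions for illustration, not the review's actual method:

```python
#!/usr/bin/env python3
# Illustrative sketch only: the review used the Intel Data Center Tool, whose
# commands aren't reproduced here. As a stand-in, this polls each module's
# SMART temperature with smartctl and reports the spread across the sensors.
import re
import subprocess

MODULES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # hypothetical paths

# Matches smartctl's SCSI-style report, e.g. "Current Drive Temperature: 36 C".
# ATA devices report temperature differently; adjust the pattern as needed.
TEMP_RE = re.compile(r"Current Drive Temperature:\s+(\d+)")

def module_temp(dev: str) -> int:
    # smartctl encodes status flags in its exit code, so just read stdout.
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    match = TEMP_RE.search(out)
    if match is None:
        raise RuntimeError(f"no temperature reading found for {dev}")
    return int(match.group(1))

temps = {dev: module_temp(dev) for dev in MODULES}
for dev, celsius in sorted(temps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{dev}: {celsius} C")
print(f"Spread between hottest and coolest sensor: "
      f"{max(temps.values()) - min(temps.values())} C")
```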

Even at idle, the temperature sensor for drive two reads 11 °C above the coolest sensor and 5 °C above the next-warmest. In the following tests, only the data from that hottest sensor, drive two, is used.

With the chassis fans at high, the SSD 910 reaches 25 °C above ambient in the worst case, while the coolest sensor tops out at 10 °C above ambient. There is very little difference between the Default and Maximum Performance modes, so if your system has adequate cooling, you shouldn't have to worry much about the extra 3 W that Maximum Performance mode draws. The 400 GB version is even more conservative, with thermal readings between 7 and 20 °C above ambient.

You need to pay more attention when the server's fans are set to low, though: the 800 GB SSD 910 gets up to 51 °C hotter than ambient in Maximum Performance mode! Starved of airflow, the drive's temperatures soar during sequential writes in that mode, and even the coolest sensor still reports 17 °C above ambient. If your company's server room runs hot, you'll end up pushing this drive very close to its thermal limits: at 51 °C above ambient, an intake temperature of just 34 °C would put the hottest sensor at the drive's 85 °C thermal cutoff.
As an aside, we ran the same tests in a pedestal chassis. Although we didn't generate charts from the data, it's worth noting that the SSD 910 behaved nearly identically to how it did in the 1U server with its fans set to low, and that's with the freestanding enclosure's fans set to high and no add-in cards next to Intel's SSD. In the same chassis with its fans set to low, the SSD 910 reached its thermal cutoff of 85 °C in Maximum Performance mode. If you plan on using this drive in a workstation, it would be wise to limit the number of adjacent cards and look into using a slot cooler.
To be clear, setting our server's fans to low doesn't generate enough airflow to meet Intel's requirements, even in a server. That test scenario was intended to demonstrate how important it is to know the state of your server's cooling configuration. Airflow requirements need to be taken seriously.
Review sites never cover real-world use, that is, living with a drive day in and day out (reliability); it's not all about raw speed and performance.
As best I understand it, as described by the company that analyzed these failed drives, a block of NAND flash either went bad or became inaccessible to the controller, rendering the drives useless and unreadable by the normal means of hooking them up to a SATA or USB port. Two drives, different NAND (50 nm for the G1 and 34 nm for the G2), same failure mode.
Once again, this is not definitive, just my observation, but I think review sites need to be a little more cautious about how they qualify Intel's reputation for quality and reliability, because from my perspective, Intel has neither, and I have since begun using Crucial SSDs. Hopefully, I will see much longer life from these new drives.
Intel, you should test these drives in that real-world application: EMC, VMware, and several databases. Carve out some LUNs and push the envelope. Should the device prove worthy, the $4,000 price tag will come down very fast, and the data center will put its trust in the product. So for those reading this for a personal home workstation or gaming rig, you need not apply in this arena.
Intel is just about 18 months to two years away from owning the data center. Even EMC is powered by Intel.
That's because this was not designed for consumers. It's not like they're marking the price up 1000% for shits and giggles. Enterprise hardware costs more to make because it must be much faster and much more reliable.
This drive, and every other piece of enterprise hardware out there, was never meant to be used by consumers.
Check out the Sequential Performance page; it lists both compressible and incompressible results. For all of the other tests, random (incompressible) data was used.
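(For anyone curious what that distinction means in practice, here is a quick, illustrative sketch, not part of the review's methodology, showing why random data resists compression:)

```python
# Quick illustration of compressible vs. incompressible test data: repetitive
# bytes compress dramatically, while random bytes from os.urandom() do not.
import os
import zlib

ONE_MIB = 1_048_576
samples = {
    "compressible": b"A" * ONE_MIB,         # 1 MiB of identical bytes
    "incompressible": os.urandom(ONE_MIB),  # 1 MiB of random bytes
}

for name, buf in samples.items():
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name}: compresses to {ratio:.1%} of original size")
```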
I agree that we shouldn't use blanket statements, especially on quality, without going through the proper process. Intel has had many issues with its consumer lines (X25-M, 320, etc.), but I have personally worked with large deployments of its enterprise drives, and they are rock solid. Other studies, including articles on this site, have shown the same in real-world scenarios.
Best-of-the-best NAND? Firmware? Overprovisioning?
I have to admit I lol'ed at this
So in other words, you are saying that because of your experience with TWO drives, that reviewers "need to be a little more cautious about how they qualify intel's reputation for quality and reliability", in spite of the fact that Intel drives are universally acknowledged to be the most reliable in the industry.
Obviously, you got a bad break on the drives you purchased, but things like that can happen. If you want to change brands, try Samsung; it is also establishing a reputation for above-average reliability.
True, Intel is the reliable choice, but for consumer systems that is not necessarily the 'right' choice. My wife's system drive died last year (mid-August), and I replaced it with a 60GB OCZ Solid 3 that ran $80 on sale at the time ($100 retail). Today I can get a newer, faster, more reliable 60GB SSD for ~$50, roughly half the cost for the same capacity and better performance. Next year, 60GB drives will not halve in price again; instead, we are going to see something more like traditional drives, where there is a price floor of, say, $40 for a 60GB drive, then $50-60 for 120GB and $75-100 for 240GB. In fact, we are already beginning to see this sandwiching of prices. But because SSDs are simpler to make than HDDs (no motor, no actuator, etc.), the floor may actually end up lower than where HDDs landed.
But my point is that for the cost of your single drive, I can re-purchase two or three times over the same period and still save money, getting a massive upgrade in performance and/or capacity with each replacement. And because everything lives as an image, deploying a new drive is just a matter of a few hours of downtime and a trip to Microcenter.
Buying for stability makes sense in a mission-critical environment or in a slow-moving, mature technology. But in a market moving this quickly, it really makes more sense to buy cheap and plan on replacing the drive in a year or two. Otherwise, your initial investment becomes a boat anchor tying you to antiquated technology.
BTW, 1 year out and the Solid 3 still runs great.
OCZ: early MLC drives (Core series?), 2 x 128GB and one smaller one (I forget the capacity). All returned. No refund was given, so when the RMA replacements arrived, they went straight to Flea-bay, unopened. I pity the buyers.
Intel: 1 x X25-M MLC 160GB, arrived DOA. A replacement was sent quickly; three years later, not a hiccup. Running as an OS drive with databases in the background, too. Installs were not as fast as hoped, due to lower write performance, but no real complaints, and 160GB was a nice size.
Intel: 1 x X25-E SLC 64GB, really what you would hope for in terms of performance. Absolutely no problems to date; same usage as the X25-M. Installs are lightning quick. Nothing to fault except capacity.
Kingston: 2 x MLC SSDNow V+ (100GB?). Both failed within six months; I have yet to return the second one. Usage: CrapBook, email, general use.
Patriot: 1 x MLC Wildfire 240GB. I waited until the BSOD issue was resolved before purchasing and updated the firmware right from the start; faultless to date. Usage: same as the Intel drives above, though under SATA 2 I reckon the X25-E is faster. No space problems.
Hope that helps someone...
The technology needs to mature still.
Hard drives were the same way... (MFM/RLL: notorious for bad sectors out of nowhere and snail-like performance. IDE: getting better, bad-sector problems starting to go away. SATA: bad sectors are caused by YOU now =P, and the interface now outpaces the drive's theoretical maximum physical speed.)
In a couple of years, you'll all just be complaining about the size of the chips and wishing they were the size of your thumb drives... =P
No manufacturer will offer a five-year warranty unless at least 99% of its drives will meet that criterion; it is expensive to RMA products that fail during the warranty period.