While the virtues of SSD storage have been known for many years, the technology's steady march towards enterprise dominance has focused squarely on the form, fit, and functional replacement of 2.5” SAS-based hard drives. In the data center world, that means you still have large racks of hot-swappable drive carriers, RAID cards, and supporting infrastructure.
PCIe-based SSDs are an attempt to consolidate all of the essential components in the storage chain. This consolidation has multiple benefits, including lower cost, higher performance, higher reliability, and tighter integration. That last point is especially important when talking about solid-state technology. Compared to mechanical drives, SSDs can best be described as finicky. Their performance changes based on previous operations, they periodically execute maintenance tasks in the background, and they have a very specific life cycle. These idiosyncrasies, which are a nuisance for consumer applications, become deal-breakers at an enterprise level. The tight end-to-end (host interface to controller to NAND) integration of PCIe-based SSDs means that the entire package should operate in a single, harmonious state. At least, that's the theory.

Intel's SSD 910 series represents the company's first attempt at this marriage of NAND, PCIe bridge logic, and SAS ASICs.
The drive family comes in 400 and 800 GB capacities, available on the street for $1,999 and $4,499, respectively. Both flavors feature Intel's 25 nm High Endurance Technology (HET) MLC NAND, the same stuff we covered in Intel SSD 710 Tested: MLC NAND Flash Hits The Enterprise. When the SATA-based SSD 710 first launched, its capacity was valued at $6/GB. Now, we're looking at less than $5/GB for enterprise-class solid-state storage. Intel packages it all together in a half-height, half-length PCIe 2.0 x8 card.
| Intel SSD 910 Series | SSDPEDOX400G301 | SSDPEDPX800G301 |
|---|---|---|
| User Capacity | 400 GB | 800 GB |
| Interface | PCIe 2.0 x8, Half-Height, Half-Length | |
| Sequential Read | 1 GB/s | 2 GB/s |
| Sequential Write | 0.75 GB/s | 1 GB/s |
| 4K Random Read | 90 000 IOPS | 180 000 IOPS |
| 4K Random Write | 38 000 IOPS | 75 000 IOPS |
| Power Consumption (Active) | <25 W | <25 W (Default mode) |
| Power Consumption (Idle) | 8 W | 12 W |
| Write Endurance | 7 PB | 14 PB |
| Encryption | AES-256 | AES-256 |
When it comes to high-end storage, details matter a lot. But if you're the sort of reader who only skims the first and last pages of a piece like this, let's cut right to the chase and make one thing clear: Intel's SSD 910 doesn't blow away its competition with sequential throughput or random I/O numbers we've never seen before.
The good news for Intel, however, is that judging this particular piece of hardware based on its spec sheet doesn't convey the whole story. Enterprise-oriented customers value quality and reliability as much as, or more than, raw performance. With that in mind, Intel is in a particularly good place. Its X25-E was the gold standard for many years, and its SSD 710 family continues that legacy.
Will well-established quality and reliability be enough to propel Intel out ahead of the pack in a rapidly-expanding and continually-evolving PCIe-based SSD market?
Intel's block diagram makes clear how the SSD 910 comes together. From left to right, you have the physical PCI Express interface, the logic that facilitates communication between the PCIe bus and SAS, a multitude of SAS-based controllers, and the NAND flash itself, attached via an ONFi 2.0 interface.

In this exploded view, it's easy to see how the SSD 910 is built. Modularity is important here, as you can see from the NAND-laden daughtercards stacked on the main PCB.

That main controller board, which hosts the eight-lane, second-gen PCIe edge connector, is the brains of the operation. Under that silver heat sink, you'll find an LSISAS2008 PCIe-to-SAS controller. Armed with eight SAS 6Gb/s ports, the controller is capable of RAID 0, 1, 1E, and 10, though Intel's configuration doesn't facilitate hardware-based RAID support.
Instead, you're presented with either two or four unique volumes, depending on your version of the card. If you want to use the SSD 910 as a single, contiguous volume, it's necessary to create a software-based RAID setup.
The (positive) result is that there are no drivers to install or update. The LSISAS2008 has been on the market for quite a few years, and most modern operating systems support it natively.
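If you do want one big volume under Linux, for example, spanning the modules is a standard software-RAID exercise. The sketch below drives mdadm from Python; the device paths are placeholders and will differ on your system (on Windows, a striped dynamic volume accomplishes the same thing).

```python
#!/usr/bin/env python3
"""Stripe the SSD 910's 200 GB volumes into one md device (illustrative only).

Identify the actual SCSI devices with lsblk or lsscsi first; the paths below
are hypothetical. Requires mdadm and root privileges.
"""
import subprocess

MODULES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholder paths

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=0",                                    # RAID 0 stripe, no redundancy
     "--raid-devices={}".format(len(MODULES))] + MODULES,
    check=True,
)
```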

LSI's PCIe-to-SAS hardware is flanked by four Intel EW29AA31AA1 controllers, pictured above, which were co-designed by Intel and Hitachi. Under them, you find DDR2 SDRAM cache chips from Micron. The controller board also hosts the interfaces used to attach the NAND daughterboards.
Each daughterboard contains 28 HET MLC NAND packages totaling 448 GB. Understandably, the second daughterboard is only utilized on the 800 GB version of Intel's drive, yielding 896 GB of flash.
The SSD 910 is not a bootable device. We know that's a blow to the aspirations of PC hardware enthusiasts, but Intel makes it very clear that this is a data center-oriented product, and in that environment you generally wouldn't want your operating system sharing a drive with your data anyway. So, the omission isn't a show-stopper for the SSD 910's target market. It is a notable decision on Intel's part, though, considering that many of its competitors do support booting.
The Intel SSD 910 offers two performance modes for the 800 GB SKU: Default and Maximum Performance.
| Performance Mode | Default | Maximum |
|---|---|---|
| User Capacity | 800 GB | 800 GB |
| PCIe Compliant | Yes | Maybe |
| Interface | PCIe 2.0 x8, Half-Height, Half-Length | |
| Sequential Read | 2 GB/s | 2 GB/s |
| Sequential Write | 1 GB/s | 1.5 GB/s |
| 4K Random Read | 180 000 IOPS | 180 000 IOPS |
| 4K Random Write | 75 000 IOPS | 75 000 IOPS |
| Power Consumption (Active) | <25 W | 28 W (38 W Max) |
| Power Consumption (Idle) | 12 W | 12 W |
| Required Airflow | 200 LFM | 300 LFM |
| Write Endurance | 14 PB | 14 PB |
Intel refers to each storage controller and its corresponding 200 GB of user-accessible flash memory as a module. In Default mode, each of the two or four NAND modules is throttled to make sure that the total device power falls within the PCIe power specification's envelope. In Maximum Performance mode, any two NAND modules can be accessed at full speed and still be considered PCIe-compliant. Stressing all four NAND modules in Maximum Performance mode violates the PCIe specification, and may cause trouble in some systems.
PCI Express® Card Electromechanical Specification Revision 2.0
As you can see in the image above, the power dissipation limit for a PCI Express x4 or x8 card is 25 W. Maximum Performance mode, though, is specified by Intel to top out at 38 W. Now, will your specific server power this device correctly, without compatibility problems? It should. But you'll certainly want to check with your server vendor to be sure. We dropped the SSD 910 into a half-dozen systems and didn't run into trouble with any of them.
More concerning is the amount of airflow required in Maximum Performance mode. The base requirement of 200 Linear Feet per Minute (LFM) is fairly common for server-oriented add-in cards. Bumping up to 300 LFM might be a challenge in servers with adjacent cards installed, especially if those cards are also PCIe-based SSDs or high-powered RAID cards.
We do appreciate the fact that Intel allows for this option, and that the company is very clear about the implications of enabling it. If you are concerned about power and cooling (enterprise customers should be), check out the performance results in Default mode first.
| Test Hardware | |
|---|---|
| Processor | Intel Core i7-3960X (Sandy Bridge-E), 32 nm, 3.3 GHz, LGA 2011, 15 MB Shared L3, Turbo Boost Enabled |
| Motherboard | Intel DX79SI, X79 Express |
| Memory | G.Skill Ripjaws Z-Series (4 x 4 GB) DDR3-1600 @ DDR3-1600, 1.5 V |
| System Drive | Intel 320 160 GB SATA 3Gb/s |
| Tested Drives | Intel SSD 910 800 GB, PCI Express x8, Firmware: 1200D006A40D |
| | OCZ Z-Drive R4 RM88 1.6 TB, Firmware: 3.00E |
| Graphics | AMD FirePro V4800 1 GB |
| Power Supply | OCZ ModXStream Pro 700 W |
| System Software and Drivers | |
| Operating System | Windows 7 x64 Ultimate |
| DirectX | DirectX 11 |
| Driver | Graphics: ATI 8.883 |
| Iometer 1.1.0 | # Workers = 4, 4 KB Random, LBA = Full Span, Varying Queue Depths |
|---|---|
| AS SSD | v1.6437.30508 |
| ATTO | v2.47, 2 GB, QD = 4 |
| Custom | C++, 8 MB Sequential, QD = 4 |
| Enterprise Testing: Iometer Workloads | Read | Random | Transfer Size |
|---|---|---|---|
| Database | 67% | 100% | 8 KB – 100% |
| File Server | 80% | 100% | 512 Bytes – 10%, 1 KB – 5%, 2 KB – 5%, 4 KB – 60%, 8 KB – 2%, 16 KB – 4%, 32 KB – 4%, 64 KB – 10% |
| Web Server | 100% | 100% | 512 Bytes – 22%, 1 KB – 15%, 2 KB – 8%, 4 KB – 23%, 8 KB – 15%, 16 KB – 2%, 32 KB – 6%, 64 KB – 7%, 128 KB – 1%, 512 KB – 1% |
As we have pointed out in the past, and as we're sure you would have concluded logically on your own, an enterprise storage workload is quite different from desktop or client workloads. The differences between them affect how we test, analyze, and evaluate enterprise-oriented devices. The slide below, from last year’s Flash Memory Summit, gives a great overview of the differences.

SSDs are not easy to evaluate. Unlike traditional rotating disks, solid-state drives are affected by many factors that are difficult to control.
The Storage Networking Industry Association (SNIA), a working group made up of SSD, flash, and controller vendors, has produced a testing procedure that attempts to control as many of the variables inherent to SSDs as possible. SNIA’s Solid State Storage Performance Test Specification (SSS PTS) is a great resource for enterprise SSD testing. The procedure does not define what tests should be run, but rather the way in which they are run. This workflow is broken down into four parts:
- Purge: Purging puts the drive at a known starting point. For SSDs, this normally means Secure Erase.
- Workload-Independent Preconditioning: A prescribed workload that is unrelated to the test workload.
- Workload-Based Preconditioning: The actual test workload (4 KB random, 128 KB sequential, and so on), which pushes the drive towards a steady state.
- Steady State: The point at which the drive’s performance is no longer changing for the variable being tracked.
These steps are critical when testing SSDs. It is incredibly easy to under-condition a drive, observe fresh-out-of-box behavior, and mistake it for steady state. They also matter when transitioning between random and sequential writes.
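To make the concept concrete, here's a minimal sketch of a steady-state check in the spirit of the SSS PTS. It assumes you log one throughput figure per preconditioning round; the five-round window and the 20%/10% excursion limits reflect our reading of the spec, so verify them against the document itself before relying on this.

```python
def is_steady_state(rounds, window=5, excursion_limit=0.20, slope_limit=0.10):
    """Return True once the last `window` rounds look steady.

    `rounds` is a list of per-round results (e.g., MB/s or IOPS). The check
    mirrors the PTS idea: the data excursion within the window and the total
    drift predicted by a least-squares fit must both stay within a small
    fraction of the window average.
    """
    if len(rounds) < window:
        return False

    w = rounds[-window:]
    avg = sum(w) / window

    # Data excursion: worst deviation of any round from the window average.
    if max(abs(x - avg) for x in w) > excursion_limit * avg:
        return False

    # Slope excursion: total change across the window from a linear fit.
    x_mean = (window - 1) / 2
    slope = sum((i - x_mean) * (y - avg) for i, y in enumerate(w)) / \
            sum((i - x_mean) ** 2 for i in range(window))
    return abs(slope * (window - 1)) <= slope_limit * avg
```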
The graph below demonstrates the rationale behind SNIA's guidelines on Intel's SSD 910. We first performed a Secure Erase (Purge), followed by five full disk writes of random 4 KB data (Workload-Independent Preconditioning). Then, we wrote the full capacity of the disk four times in a row with 8 MB sequential writes (Workload-Based Preconditioning). It wasn’t until the fourth full disk write that we achieved Steady State.

For all performance tests in this review, the SSS PTS was followed to ensure accurate and repeatable results.
Finally, the SSS PTS mandates that all data patterns be random. This is an attempt to normalize results for SSDs that optimize performance for compressible data. In general, the compressibility of data is very case-dependent. So, to represent worst-case scenarios, random data is used when applicable in the performance tests. It should be noted that Intel's SSD 910 does not perform any data compression, and the results for compressible data are identical.
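Generating a worst-case payload is trivial; something along these lines is all it takes (a sketch, not the exact buffer handling our test tools use):

```python
import os

def incompressible_buffer(size_bytes: int = 8 * 1024 * 1024) -> bytes:
    """Return `size_bytes` of random data (8 MB by default). A buffer like
    this gives compressing controllers, such as SandForce's, nothing to work
    with, so it represents their worst case."""
    return os.urandom(size_bytes)
```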
Intel sent us an 800 GB sample of its SSD 910 for evaluation. We ran tests in both Maximum Performance mode and its Default mode. To simulate the performance of the 400 GB model, we only configured two of the four NAND modules, per Intel’s instructions. The evaluation unit did not come with a full-height PCIe bracket, so testing was performed without one installed.

For comparison purposes, we're putting Intel's SSD 910 up against OCZ's Z-Drive R4 RM88 1.6 TB. Is this a fair fight? No, it isn't. The R4 sports twice the capacity and twice as many controllers, requires a ¾-height, full-length PCIe slot, and sells for somewhere around $7/GB. But since the R4 uses SandForce-based controllers, we wanted to see how much of a fight the SSD 910 could put up, especially since most of our testing employs incompressible data, a known weakness of SandForce's technology.
Most folks will never even come close to exceeding the write endurance limits of today's desktop-oriented SSDs. Write exhaustion requires continuous writing to a drive for weeks and months on end before you completely exhaust the usable life of each NAND cell.
In the enterprise world, however, this is a much more likely scenario. Knowing the write endurance of an SSD can help IT professionals select drives that are best suited to their tasks.
When Intel released its first enterprise drive, the X25-E, the company did not publicly state write endurance specifications. With its two subsequent offerings, though, Intel has been very specific about what results are achievable and how to achieve them. The 400 and 800 GB versions of the SSD 910 have stated write endurance ratings of 7 and 14 PB, respectively. According to Intel, write endurance is measured while running 100% random 4 KB and 8 KB writes spanning 100% of the SSD using Iometer. This is, by far, the worst-case scenario; in a mixed workload, you'd see more favorable results, as we'll demonstrate.
Before we dig into the results, if you are unfamiliar with the different types of NAND or the concept of write exhaustion in general, take a look at our reviews of the Toshiba MK4001GRZB and Intel SSD 710.
To test write endurance, we wrote large block, sequential data to the drive, while continuously monitoring the MWI (Media Wearout Indicator). The MWI reports, from 0-100, the percentage of life that has been used on the drive. We started with a clean drive and wrote to it until the MWI reached 1%. It should be noted that each of the four NAND modules has its own MWI. The data below is based on when the first module reported a change to the MWI. The other three modules all changed within ~150 GB of the first. This difference only accounted for ~0.15% of the total number of writes.
By writing sequential data, we are showing the maximum usable life of the NAND itself, removing outside factors like wear-leveling and garbage collection. In this configuration, the write amplification should be very close to 1.0.
| Endurance Rating (8 MB Sequential Writes, QD=1, Random Data) | Intel SSD 910 | Intel SSD 710 | Intel X25-E | Toshiba MK4001GRZB |
|---|---|---|---|---|
| NAND Type | Intel 25 nm eMLC (HET) | Intel 25 nm eMLC (HET) | Intel 50 nm SLC | Toshiba 32 nm SLC |
| RAW NAND Capacity | 896 GB | 320 GB | 77 GB | 512 GB |
| IDEMA Capacity (User Accessible) | 800 GB | 200 GB | 64 GB | 400 GB |
| Over-provisioning | 12% | 60% | 20% | 28% |
| P/E Cycles Observed (IDEMA) | 46 339 | 36 600 | 237 968 | 225 064 |
| P/E Cycles Observed (Raw) | 41 374 | 22 875 | 198 307 | 175 831 |
| Host Writes per 1% of MWI | 370.71 TB | 73.20 TB | 152.3 TB | 900.2 TB |
| $/PB-Written | $106.60 | $181.72 | $60.51 | $79.63 |
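As a quick sanity check on the figures above, the P/E cycle counts follow directly from the host-writes-per-1%-MWI measurement (assuming, as noted, write amplification very close to 1.0):

```python
# Back-of-the-envelope check on the SSD 910 column of the table above.
writes_per_1pct_mwi_tb = 370.71                      # measured host writes per 1% of MWI
lifetime_writes_tb = writes_per_1pct_mwi_tb * 100    # MWI spans 100%

idema_capacity_tb = 0.800                            # 800 GB user-accessible capacity
raw_capacity_tb = 0.896                              # 896 GB of NAND on the card

print(f"IDEMA P/E cycles: {lifetime_writes_tb / idema_capacity_tb:,.0f}")  # ~46 339
print(f"Raw P/E cycles:   {lifetime_writes_tb / raw_capacity_tb:,.0f}")    # ~41 374
```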
In terms of raw P/E cycles observed, Intel's SSD 910 outperforms the SSD 710 by roughly 80%, even though both drives use the same NAND. But, as with all MLC-based flash, it can't really hold a candle to good old-fashioned SLC.
So why, in an enterprise application, where write endurance is so important, would you consider anything other than SLC? Simply, cost. HET MLC (or eMLC) offers a solid middle-ground to those that need enterprise-level write endurance, but can’t justify the price of SLC-based drives. Intel's SSD 910 makes that value proposition even more intriguing compared to its SSD 710.
When you look purely at write endurance and cost ($/PB-written), ignoring all other factors, the SLC-based X25-E is still the clear winner. But the SSD 910 compares far more favorably against it than the SSD 710 did. This matters for customers who want to use these drives purely as write-caching devices, where speed and capacity are secondary considerations.

The Intel SSD 910 absolutely holds its own against the larger and more expensive OCZ R4 in our random read test. The 800 GB variant even tops its 180 000 IOPS specification, clearing 225 000 IOPS at queue depths of 64 and 128. At lower queue depths, it even slides ahead of the R4. The 400 GB model does similarly well, topping 110 000 IOPS (versus its 90 000 IOPS specification).
When it comes to random I/O performance, neither drive's specification changes whether you're in Default or Maximum Performance mode. Our testing demonstrates almost identical results between the two configurations in all Iometer tests.
There is also no benefit here to addressing each 200 GB module as a separate volume, though that won't hold true in subsequent tests.

Moving on to random writes, once again, Intel's SSD 910 outperforms its rated specifications. The 800 GB incarnation hovers in the 75 000 to 80 000 IOPS range, while the 400 GB model hits its 38 000 IOPS spec at a queue depth of four.
The picture isn’t as favorable compared to the R4, though, which uses eight SandForce 2582 controllers to plow through the 4K write test. That shouldn't be a surprise, though, given the R4's size and cost.
Testing the 800 GB SSD 910 as four 200 GB drives does give it a clear advantage at lower queue depths. This difference is observed in all of the Iometer tests where writes are involved.

Average access times are very consistent across runs for the Intel drives, but are not fast enough to catch OCZ's R4.

Maximum response time, on the other hand, favors Intel's hardware in all cases. The 800 GB SSD 910, tested as individual disks (JBOD), does a wonderful job keeping the worst-case response time very low.
The next set of tests simulates different enterprise workloads, including database, file server, Web server, and workstation configurations.
Our Iometer database workload (also categorized as transaction processing) involves purely random I/O. Its profile consists of 67% reads and 33% writes using 8 KB transfers.
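If you want to approximate this profile outside of Iometer, the access pattern boils down to something like the sketch below (alignment, queuing, and timing in Iometer itself will of course differ):

```python
import random

SECTOR = 512
IO_SIZE = 8 * 1024            # 8 KB transfers
READ_FRACTION = 0.67          # 67% reads, 33% writes
DEVICE_BYTES = 800 * 10**9    # full-span LBA range on the 800 GB drive

def next_database_io():
    """Pick one I/O for the database (transaction-processing) profile:
    a purely random, sector-aligned 8 KB access, two-thirds reads."""
    max_sector = (DEVICE_BYTES - IO_SIZE) // SECTOR
    offset = random.randrange(0, max_sector + 1) * SECTOR
    op = "read" if random.random() < READ_FRACTION else "write"
    return op, offset, IO_SIZE
```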

The Intel SSD 910 starts to struggle compared to the R4 when it comes to mixed workload tests. Considering how close the two were in our random read benchmarks, we expected them to finish closer here. As it turns out, the R4 is two times faster at a queue depth of 16, and it pushes that margin to 3x at a queue depth of 64.

The file server profile is also completely random, but biased even more to read operations. The relative difference between the SSD 910 and R4 remains the same, 2x at a queue depth of 16 and 3x at a queue depth of 64.

With a workload set to 100% random reads at various transfer sizes, the Intel SSD 910 is much more competitive, finishing slightly ahead of the R4 at lower queue depths. As queue depth increases, though, the R4 starts to pull away.

The workstation profile consists of 80% reads and 80% random operations. In this mixed workload, the R4 once again moves past the SSD 910 at higher queue depths.

The 800 GB SSD 910 hits its 2 GB/s sequential read specification at a block size of 2 MB, while the 400 GB version achieves its 1 GB/s spec at a block size of 512 KB. The R4 is still the clear winner at larger transfer sizes, though.

Maximum Performance mode finally sets itself apart from Default mode in our sequential write speed test. Commit this one to memory, though, because it's the only time you'll see a difference between the two modes. But what a difference it makes, granting a 50% boost to sequential writes.
That's not enough to match the R4, which peaks at 2.8 GB/s. However, the news isn't all bad for Intel, since these tests do employ highly-compressible data, which favors the R4's SandForce-based controllers.
When we switch to AS SSD and use fully random data, the gap between Intel and OCZ evaporates.

The R4 still edges out Intel's drive, but the results are much closer. The SSD 910 pulls within 100 MB/s in the sequential write test.
As we know, enterprise customers have different requirements and expectations of their storage than even desktop enthusiasts. If a consumer drive demonstrates periodic performance dips, most users don't perceive the difference. Photoshop may load a few milliseconds slower, or a file copy may finish a second later. In the enterprise video sector, though, large-block performance is critical, and even small performance hiccups can cause major issues.
In many streaming applications, you are getting data from a physical device, which could be a digital frame grabber, and writing it to disk. If the disk can't keep up, the data still has to go somewhere. If it can't get to the drive at a specified rate, buffers overflow and data is lost. Ideally, the acquired data would DMA from the device into host memory, and then down to disk. But in the real world, you need buffers. Their size and location can vary greatly based on the application. This section of our story helps show how much buffer allocation is needed for a specific data rate.
Reviews (this one included) give you lots of data designed to demonstrate performance in a number of different scenarios, with the idea that at least some of it will be relevant to you. The main drawback is that, by going wide, you end up with averages or small sample sizes. Here, we're using the 800 GB SSD 910 in Maximum Performance mode, getting into a steady state, and writing the full capacity 100 times in a row. Each test consists of 8 MB sequential writes at a queue depth of four. Each point on the graph is a 100-point average of the individual 8 MB writes. We'd give you the chart without averages, but Excel doesn't care for 95 000 data points.
The graph below shows the best and worst runs out of all 100 iterations.

If you don't know any better, that might look bad. But it really isn't. Intel's SSD 910 actually does a really good job of maintaining its performance across the entire disk. The table below shows estimated buffer sizes.
| Setpoint (MB/s) | Best-Case Buffer Size In MB | Worst-Case Buffer Size In MB |
|---|---|---|
| 1350 | 23 | 28 |
| 1400 | 24 | 100 |
| 1450 | 33 | 646 |
| 1500 | 46 | 2520 |
| 1550 | 65 | 6729 |
The Intel SSD 910 can easily sustain 1400 MB/s with only a minimal amount of buffering (100 MB). If you go much beyond that, you need to seriously look into allocating multiple gigabytes of memory in order to sustain higher data rates.
The average speed across the entire drive during the best- and worst-case iterations was 1568 and 1536 MB/s, respectively. Even though that difference is only 2%, a number of other runs with similar deltas did not show such drastically different buffer requirements.
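For readers who want to reproduce these estimates from their own traces, the calculation is a worst-case accounting exercise: data arrives at the setpoint rate while the drive retires writes at whatever rate it actually achieved. A hedged sketch of that bookkeeping (one way to derive such estimates, assuming one throughput sample per 8 MB write):

```python
def required_buffer_mb(throughput_mbps, setpoint_mbps, write_mb=8.0):
    """Estimate the worst-case buffer (in MB) needed to sustain `setpoint_mbps`
    against a measured per-write throughput trace like the one plotted above."""
    backlog_mb = 0.0
    worst_mb = 0.0
    for mbps in throughput_mbps:
        duration_s = write_mb / mbps              # time the drive spent on this write
        backlog_mb += setpoint_mbps * duration_s  # data that arrived in the meantime
        backlog_mb -= write_mb                    # data the drive just retired
        backlog_mb = max(backlog_mb, 0.0)
        worst_mb = max(worst_mb, backlog_mb)
    return worst_mb
```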
Power is always a major concern when it comes to working in large enterprise environments. Intel states that its product uses 2.5x less power than eight 10 000 RPM SAS drives connected to a PCIe-based HBA, while providing better performance. The graph below shows the power draw of the card itself.

Our sample's power consumption lines up very well with the official specifications. In its default configuration, the card draws a maximum of 25 W, while Maximum Performance mode pushes it right up to 28 W. We did measure a slightly higher idle draw than the 12 W we were expecting.
Sequential operations stress the device to a larger degree than 4 KB random I/O, which makes sense considering the more demanding Maximum Performance mode only affects sequential operations.
As a reminder, our review sample is an 800 GB SSD 910 that we also tested as the 400 GB version. Even though we're not stressing two of its 200 GB modules, the NAND and controllers are still present and drawing power. The actual 400 GB version should draw less power than what we observed during testing.
One issue that PCIe-based SSDs face is thermal management. The SSD 910 packs a lot of hardware onto a very compact card, which means it requires adequate airflow. Intel is very upfront about what it takes to cool this card: Default mode requires 200 LFM, while Maximum Performance mode necessitates 300 LFM.

To test the drive's thermal performance, we used a 1U server from Supermicro that provides cooling typical of what we'd expect from most other 1U servers. The tests were performed with the machine set to use its high and low fan settings, which should give us results at both extremes.

Each of the four 200 GB modules has its own temperature sensor. We used the Intel Data Center Tool to record thermal data. Each sensor is at a different location on the board and subject to different amounts of airflow, giving us different results. The graph below shows the delta between the four sensors at idle.

Even at idle, the temperature sensor for drive two reads 11° C above the coolest sensor, and 5° C above the next-warmest sensor.
In the following tests, only the data from that hottest sensor (drive two) is used.

With the chassis fans at high, the SSD 910 reaches 25° C above ambient, worst-case. The coolest sensor topped out at 10° C above ambient. You can see that there is very little difference between the Default and Maximum Performance modes. So, if you have adequate cooling in your system, you shouldn't have to worry too much about the extra 3 W that Maximum Performance mode draws. The 400 GB version is even more conservative, giving off thermal readings between 7 and 20° C above ambient.

You need to pay more attention when the server's fans are set to low, though: the 800 GB SSD 910 gets up to 51° C hotter than ambient in Maximum Performance mode! Plagued by poor airflow, Maximum Performance mode causes temperatures to soar during sequential writes. Even the coolest sensor still reports 17° C above ambient. If your company's server room runs hot, you'll end up pushing this drive very close to its thermal limits.
As an aside, we ran the same tests in a pedestal chassis. Although we didn't generate charts with the data, it's worth noting that the SSD 910 performed nearly identically to the way it did in the 1U server with its fans set to low. And that's with the freestanding enclosure's fans set to high and no add-in cards next to Intel's SSD. In the same chassis with its fans set to low, the SSD 910 reached its thermal cutoff of 85° C in Maximum Performance mode. If you plan on using this drive in a workstation, it would be wise to limit the number of adjacent cards and look into using a slot cooler.
In the interest of completeness, we should point out that setting our server's fans to low doesn't generate enough airflow to meet Intel's requirements, even in a server chassis. That test scenario was intended to demonstrate the importance of knowing the state of your server's cooling configuration. Airflow requirements need to be taken seriously.
Determining the right SSD for your enterprise application is a daunting task. The variables that must be taken into account are too numerous to list, and many of them are at odds with one another. Add in the fact that enterprise-oriented drives are purchased in much higher quantities over longer life cycles, and picking the right one becomes a very critical decision.
Intel's SSD 910 is like the Swiss Army knife of PCIe-based SSDs. It isn’t class-leading in any one test or specification, but it consistently performs well in every metric we use for evaluating high-end storage products. It strikes a great balance between performance, endurance, physical dimensions, and cost.
We were particularly surprised during the write endurance testing. After spending some time with Intel's SSD 710, we thought we knew what to expect from HET MLC flash. But we were wrong. Intel's SSD 910 nearly doubles the SSD 710's observed P/E cycles. Admittedly, write endurance testing isn't an exact science; wearing 1% off of a single drive isn't exactly a statistically significant sample. But it gives us a general indication of how the drive performs. And it's reliable enough to tell that SLC-based SSDs are still king when it comes to write endurance, so long as you're willing to pay a much higher price.
Beyond reliability, the SSD 910 performs well, too. In our testing, the 800 GB version posted 225 000 read IOPS, which is well above its 180 000 rating. The 400 GB version did well too, pushing past 110 000 IOPS when it's only specified for 90 000. Write performance was almost as good, and each configuration easily achieved its specifications. Sequential read and write performance lived up to our expectations, too.
We also found that unless you are performing large-transfer sequential writes, there really isn’t much reason to use Maximum Performance mode. If you are, though, you get huge sequential write performance improvements at the expense of a slightly hotter-running card.
Admittedly, we were initially concerned about power consumption and heat dissipation, which normally go hand-in-hand. Our testing shows that, while you need to be aware of your server's power delivery and cooling capabilities, that's no more true here than with any other device. In fact, any add-in card that draws 25 W and is passively cooled can be expected to behave in much the same way. Intel is just very up-front with its data.
Summing It All Up
After spending a few weeks with the SSD 910 (in addition to older enterprise-oriented drives from Intel), we have two critical takeaways. First, the company almost always specifies its enterprise-class hardware for worst-case situations, which we appreciate. Second, the drives always meet or exceed their specifications. And, really, isn’t that the highest compliment that you can give an enterprise device?
With that said, there isn’t much use in crowning any one SSD the best. The key is whether a given device is right for your specific application. Based on its specifications, we weren’t sure how well the SSD 910 would hold up against other PCIe-based SSDs. Over the course of our testing, though, it became clear that Intel's rookie PCIe-based effort is much more than the sum of its parts.
If you need massive IOPS at any cost, this isn’t the right drive for you. If you need write endurance at any cost, this isn’t the right drive for you. But, if you have a defined workload where you need good write endurance at a good price point, this could be a very attractive solution, indeed.
Does Intel catch up to other vendors selling PCIe-based SSDs with its SSD 910? Decidedly, yes. Our only concern is one of timing. Might other manufacturers be preparing to leapfrog Intel in the next few months? OCZ and Micron have already announced PCIe-based SSDs with direct PCIe-to-NAND connectivity (no SATA/SAS controller needed), random I/O performance close to 1 000 000 IOPS, and throughput exceeding 3 GB/s. We'll have to wait and see whether that means Intel will soon be playing catch-up.