SanDisk Extreme II SSD Review: Striking At The Heavy-Hitters
1. Extreme II, The Sequel From SanDisk

SanDisk hasn't really spent much time trying to break into the retail market. Its most notable effort was the original Ultra, a first-generation SandForce-based SSD. The drive didn't have much pep though, and it was up against fairly fast competition packing the formidable SF-2000 controller hardware.

Then again, companies like SanDisk don't really make their money selling drives online and through the odd brick-and-mortar outfit. Like Lite-On and Samsung, most of SanDisk's sales come from OEMs. Retail is usually a fraction of the overall pie, though it's acknowledged as an important piece of the whole. Making the move from selling drives in the OEM space to courting end-users directly isn't a walk in the park, either. Intel and Micron/Crucial started there to an extent, while companies like SanDisk and Toshiba are increasingly looking to play in the same sandbox.

You might not know this, but SanDisk and Toshiba collectively operate a joint venture under the aegis of Flash Forward. Intel and Micron have IMFT; SanDisk and Toshiba have Flash Forward. In essence, the two go halfsies on NAND fabrication. IMFT pumps out wafers of ONFi-capable memory, while Flash Forward makes Toggle-mode NAND. Samsung, the world's largest producer, keeps most of its flash for the company's own purposes, occasionally sharing it with special partners like Seagate. Intel/Micron and Toshiba will sell their production to almost anyone. But SanDisk, the biggest player in flash memory products for digital devices, holds on to what it gets for memory cards, thumb drives, and a range of proto-SSD storage products.

Speaking of SSDs, the first Ultra eventually gave way to a more potent SF-2281-based drive, the Extreme. SandForce's technology and Toggle-mode NAND have always been a powerful combination, but going the SandForce route isn't always advantageous for a company like SanDisk. Unable to write its own firmware, SanDisk watched its expertise in NAND manufacturing go to waste, achieving performance similar to every other vendor using the same controller. That partly explains the impetus behind recently-released products like the Ultra Plus, and the higher-end Extreme II we're looking at today.

Now, I know what you're thinking: naming something the Extreme II shows a distinct lack of imagination. Maybe so, but SanDisk's faster storage media for digital cameras shows up under the Extreme label. And regardless, we're far more concerned with what's under the hood.

The Extreme II ditches SandForce's hardware in favor of a Marvell flash processor (specifically, the Marvell 88SS9187). It's probably helpful to point out that SandForce's partners are locked into that company's firmware. Making major changes isn't in the cards, and there isn't a lot of available control over what the drive does or how it does it. Conversely, it's said that Marvell wouldn't write firmware for your fancy new SSD if you gave the company all the tea in China. Marvell's customers have to craft their own firmware. Stealing it might be a viable option. But in the end, we like the fact that each implementation is slightly different.

Writing the firmware probably isn't very hard. Making it truly outstanding is much more difficult. SanDisk adds another layer of complexity on top of its custom firmware package in an attempt to distinguish its drive from others based on the respected '9187. That layer is called nCache.

nCache isn't new, but it couldn't be implemented in previous SandForce-based SSDs without low-level firmware access. The Extreme II uses a variable-sized chunk of NAND operating in SLC mode to cache data, speeding up low-queue-depth transactions, among other things (namely, caching small writes to commit to the MLC flash at a later time). It's difficult to say how large the cache is, but it's purported to be somewhere between 512 and 1024 MB.
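As a rough mental model, the two-tier write path can be sketched in a few lines of Python. This is purely illustrative, not SanDisk's implementation; the cache size, the small-write cutoff, and all class and variable names are our own assumptions:

```python
# Conceptual sketch of an nCache-style two-tier write path (not SanDisk's
# actual firmware): small writes land in a fast SLC-mode buffer and get
# flushed to slower MLC flash later. Sizes and names are illustrative.

SLC_CACHE_BYTES = 512 * 1024 * 1024   # assumed cache size (512-1024 MB per the text)
SMALL_WRITE_LIMIT = 4096              # treat <= 4 KB writes as cacheable

class TwoTierWriteCache:
    def __init__(self):
        self.cached = 0          # bytes currently held in SLC mode
        self.flushed_to_mlc = 0  # bytes committed to MLC so far

    def write(self, size):
        if size <= SMALL_WRITE_LIMIT and self.cached + size <= SLC_CACHE_BYTES:
            self.cached += size          # fast path: absorb in the SLC cache
        else:
            self.flush()                 # cache full, or the write is too large
            self.flushed_to_mlc += size  # go straight to MLC

    def flush(self):
        # Commit everything held in SLC mode to the MLC tier.
        self.flushed_to_mlc += self.cached
        self.cached = 0

cache = TwoTierWriteCache()
for _ in range(1000):
    cache.write(4096)          # a burst of small 4 KB writes
print(cache.cached)            # all 1000 writes are still held in SLC mode
```

The point of the sketch is the asymmetry: bursts of small writes never touch MLC until the cache fills or a large transfer forces a flush, which is consistent with the low-queue-depth gains SanDisk claims.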

According to SanDisk, the nCache system should generate a noticeable boost, especially with fewer outstanding commands in the queue (good news on the desktop, right?). It also helps rectify some of the shortcomings inherent to modern flash. As lithography shrinks and die capacity grows, page and block sizes increase as a consequence. Break down a trace of I/O activity and you'll find that most transfers in modern operating systems are 4 KB. Our Storage Bench trace is composed of a staggering 69.87% 4 KB transfers, and SanDisk believes these smaller accesses benefit from its three-tier strategy: DDR3 DRAM, the nCache layer, and MLC flash working together to overcome the structural deficits of newer NAND.

SanDisk Extreme II            120 GB           240 GB           480 GB
Controller                    Marvell 88SS9187-BLD2
NAND                          19 nm SanDisk eX2 ABL Toggle-mode, 64 Gb die
Interface                     SATA Revision 3.1
Warranty                      Five-year (limited)
Seq. Read/Write (MB/s)        550/340          550/510          540/500
Random Read/Write (IOPS)      91,000/74,000    95,000/78,000    95,000/75,000
Die Count                     16               32               64
MSRP                          $130             $240             $430

There are three Extreme II capacity points: 120, 240, and 480 GB. And there are two different packages available per drive: a desktop kit with a 3.5" sled and mounting cable, and a laptop kit with a 2.5 mm shim for 9.5 mm Z-height applications.

2. A Guided Tour Of SanDisk's Extreme II

Taking the Extreme II apart is easy. Four screws hide behind the label, and the plastic top half falls away from the metal chassis down below. A series of thermal pads mate PCB components to the metal housing for improved heat transfer. These pads cover the DRAM cache, Marvell's controller, and the eight NAND packages. It's like silly putty in a way; it tends to pull the screen printing off of component ICs, making them harder to decipher in photographs.

The 240 GB PCB you see here may not completely reflect the final product. What shouldn't change, however, are the eight quad-die packages of 19 nm ABL eX2 Toggle-mode NAND, adding up to 256 GB of capacity. The Toggle-mode interface eliminates the clock signal needed by synchronous flash, theoretically lowering power consumption. We've seen similar power characteristics from the 19 nm flash manufactured by Toshiba and SanDisk, though older Toggle-mode-based SSDs tended to use more power than competing drives with ONFi-compliant memory.

Marvell's '9187 controller is flanked by 256 MB of Hynix DDR3 DRAM. We like to see a ratio of DRAM to NAND running 1 MB for every gigabyte of flash on board, so it makes sense that this 240 GB Extreme II has 256 MB riding shotgun. The 120 GB model hosts 128 MB of cache, while the 480 GB model sports 512 MB.

The PCB's back side is bare, aside from some solder points.

3. Test Setup And Benchmarks

Our consumer storage platform is based on Intel's Z77 platform controller hub paired with an Intel Core i5-2400 CPU. Intel's 6- and 7-series chipsets are virtually identical from a storage perspective. We're standardizing on older RST 10.6.1002 drivers for the foreseeable future.

Test Hardware
Processor: Intel Core i5-2400 (Sandy Bridge), 32 nm, 3.1 GHz, LGA 1155, 6 MB Shared L3, Turbo Boost Enabled
Motherboard: Gigabyte G1.Sniper M3
Memory: G.Skill Ripjaws 8 GB (2 x 4 GB) DDR3-1866 @ DDR3-1333, 1.5 V
System Drive: Kingston HyperX 3K 240 GB, Firmware: 5.02
Tested Drives: SanDisk Extreme II 120 GB, Firmware: R1311
               SanDisk Extreme II 240 GB, Firmware: R1311
               SanDisk Extreme II 480 GB, Firmware: R1311
Comparison Drives: OCZ Vertex 450 256 GB SATA 6Gb/s, Firmware: 1.0
               Seagate 600 SSD 240 GB SATA 6Gb/s, Firmware: B660
               Intel SSD 525 30 GB mSATA 6Gb/s, Firmware: LLKi
               Intel SSD 525 60 GB mSATA 6Gb/s, Firmware: LLKi
               Intel SSD 525 120 GB mSATA 6Gb/s, Firmware: LLKi
               Intel SSD 525 180 GB mSATA 6Gb/s, Firmware: LLKi
               Intel SSD 525 240 GB mSATA 6Gb/s, Firmware: LLKi
               Intel SSD 335 240 GB SATA 6Gb/s, Firmware: 335s
               Intel SSD 510 250 GB SATA 6Gb/s, Firmware: PWG2
               OCZ Vertex 3.20 240 GB SATA 6Gb/s, Firmware: 2.25
               OCZ Vector 256 GB SATA 6Gb/s, Firmware: 2.0
               Samsung 830 512 GB SATA 6Gb/s, Firmware: CXMO3B1Q
               Crucial m4 256 GB SATA 6Gb/s, Firmware: 000F
               Plextor M5 Pro 256 GB SATA 6Gb/s, Firmware: 1.02
               Corsair Neutron GTX 240 GB SATA 6Gb/s, Firmware: M206
Graphics: MSI Cyclone GTX 460 1 GB
Power Supply: Seasonic X-650, 650 W 80 PLUS Gold
Chassis: Lian Li Pitstop

System Software and Drivers
Operating System: Windows 7 x64 Ultimate
DirectX: DirectX 11
Drivers: Graphics: Nvidia 314.07; RST: 10.6.1002; IMEI: 7.1.21.1124

Benchmarks
Tom's Hardware Storage Bench v1.0: Trace-based
Iometer 1.1.0: # Workers = 1; 4 KB random, LBA = 16 GB, varying queue depths; 128 KB sequential; 8 GB LBA precondition; exponential QD scaling
PCMark 7: Secondary Storage Suite
PCMark Vantage: Storage Suite
4. Results: Sequential Performance

Once again, we turn to Iometer to measure the most basic performance parameters.

Fantastic sequential read and write performance is a trademark of modern SSDs. To measure it, we use incompressible data over a 16 GB LBA space, testing at queue depths from one to 16. We report these numbers in binary (where 1 KB equals 1,024 bytes) instead of decimal (where 1 KB equals 1,000 bytes). When necessary, we also limit the scale of the chart to aid readability.
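The difference between the two conventions is easy to show. This Python sketch (the transfer size and elapsed time are hypothetical) converts the same raw throughput both ways:

```python
# Converting a raw byte count and elapsed time into MB/s, in both the
# binary convention we report (1 KB = 1,024 bytes) and the decimal one
# (1 KB = 1,000 bytes) used on most spec sheets.

def mb_per_s_binary(bytes_moved, seconds):
    return bytes_moved / seconds / (1024 * 1024)

def mb_per_s_decimal(bytes_moved, seconds):
    return bytes_moved / seconds / (1000 * 1000)

moved = 16 * 1000**3        # 16 decimal gigabytes transferred (made-up figure)
print(round(mb_per_s_binary(moved, 30), 1))   # ~508.6 MB/s (binary)
print(round(mb_per_s_decimal(moved, 30), 1))  # ~533.3 MB/s (decimal)
```

The roughly 5% gap between the two figures is one reason our charts can read slightly lower than a manufacturer's decimal-based spec sheet.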

128 KB Sequential Read Scaling

Just about every newer SSD ends up beyond the 500 MB/s mark with eight or 16 outstanding commands. The most notable differences come into play at lower queue depths, particularly when the queue depth is one or two.

The SanDisk drives peak past two outstanding commands, laying down a fierce 530+ MB/s. They best the next-fastest repositories, though not by much. Even at a queue depth of one, the Extreme IIs push 500 MB/s.

128 KB Sequential Write Scaling

Again, the SanDisk trio comes out swinging. The 240 and 480 GB models both touch 500 MB/s on writes, while the 120 GB version makes a splash by achieving 316 MB/s. That's actually the upper range for a 120 GB SSD (unless you're talking about a SandForce-based drive working on zero-fill data), and the most junior Extreme II is on par with Intel's SSD 510 and 335 at twice the capacity.

Performance Versus Capacity

There aren't many surprises when we look at read performance across the LBA range. Ideally, this chart would be a flat line across the entire drive; in reality, not every SSD behaves that way.

And that's exactly what we see. The Extreme IIs are seemingly wedded together, separated by just a few MB/s.

A similar story is told when we look at writes (that is to say, not a very interesting one). The larger two models are almost as quick as each other, while the 120 GB version kisses 320 MB/s. It's possible that the comparatively jagged performance line is a result of nCache.

5. Results: Random Performance

Iometer is still our synthetic metric of choice for testing 4 KB random performance. Technically, "random" describes an access that begins more than one sector away from where the previous access ended. On a mechanical hard disk, this can lead to significant latencies that hammer performance. Spinning media simply handles sequential accesses much better than random ones, since the heads don't have to be physically repositioned. With SSDs, the random/sequential distinction is much less relevant. Data can be put wherever the controller wants it, so the idea that the operating system sees one piece of information next to another is mostly just an illusion.
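That sector-distance definition is simple enough to express directly. Here's a small, purely illustrative Python helper (the trace and names are invented) that labels each access in a stream:

```python
# Classifying a stream of accesses as sequential or random by checking
# whether each one starts within a sector of where the previous access
# ended. Mirrors the informal definition used in the text.

SECTOR = 512  # bytes

def classify(accesses):
    """accesses: list of (offset_bytes, length_bytes) tuples."""
    labels = []
    prev_end = None
    for offset, length in accesses:
        if prev_end is not None and abs(offset - prev_end) <= SECTOR:
            labels.append("sequential")
        else:
            labels.append("random")
        prev_end = offset + length
    return labels

trace = [(0, 4096), (4096, 4096), (1_000_000, 4096)]
print(classify(trace))  # ['random', 'sequential', 'random']
```

Run against a real trace, a classifier like this is how you'd arrive at statistics such as the 4 KB transfer mix quoted earlier.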

4 KB Random Read

Plextor's M5 Pro and the SanDisk drives offer similar performance. Throughout the capacity range, the Extreme IIs are competitive. The 120 GB model isn't as strong, but it's almost exactly as fast as the 240 GB Seagate 600.

The 240 GB and 480 GB Extreme IIs don't quite hit 100,000 IOPS, but there's no shame in 94,000 and 91,000 IOPS, either.

4 KB Random Write

And then things seem to go pear-shaped. A glance at the above chart makes it clear that SanDisk's drives aren't living up to their specifications. Shouldn't they be hitting 80,000 IOPS or so?

The explanation is relatively simple. We test with random data over a 16 GB LBA space. Industry-wide, most consumer-oriented SSD tests are limited to 8 GB. Now, this doesn't matter most of the time. Hard drives are especially sensitive to LBA active ranges, since spinning platters and floating heads need more time to move when the data you request is physically farther away. Solid-state storage obviously isn't subject to the same limitation, though some SSDs are more sensitive to changes in LBA ranges than others. The difference just usually isn't so profound.

Using the 240 GB Extreme II, we can demonstrate this idiosyncrasy. Starting with 1 GB of sectors and graduating to the entire LBA range, the drop in performance is substantial by the time we get to 16 GB. There are technical reasons why this might happen to a lesser degree with other SSDs, but it looks like the implementation of nCache can result in slower random writes at high queue depths over a large number of LBAs. It's possible that a tradeoff exists between writes that can be cached and writes that exceed the cache's capacity, hurting performance when the cache is full and improving speed when nCache can effectively handle smaller random writes.

Is this a problem? In a word, no.

Typically, random workloads bombarding the entire drive are considered enterprise-oriented. Consumer usage just doesn't match that profile. Random writes are more typically limited to smaller areas, and the amount of writing is exceptionally light. The fact of the matter is that SanDisk's Extreme II was designed for desktop workloads. Wringing the last few drops of performance from an interface-limited SSD means taking steps to improve one area at the expense of others.

The trade-off seems fair. The Extreme II is less useful for a selection of some enterprise applications, but is better as a boot and desktop application drive. We can live with that.

Besides, the Extreme IIs aren't even as bad off as they might appear. Consider the above 4 KB write saturation test at a queue depth of 32. Sure, the drives start well off of their highs. But after the SSDs are filled and garbage collection is in full swing, SanDisk's solutions aren't any worse than competing models. In some cases, they're even better. Performance levels off and ranges from 7,000 to 10,000 IOPS, depending on size. If nothing else, that's competitive. Regardless, there just aren't any reasons why you'd ever write like this on a gaming rig.

6. Results: Tom's Storage Bench

Storage Bench v1.0 (Background Info)

Our Storage Bench incorporates all of the I/O from a trace recorded over two weeks. The process of replaying this sequence to capture performance gives us a bunch of numbers that aren't really intuitive at first glance. Most idle time gets expunged, leaving only the time that each benchmarked drive was actually busy working on host commands. So, by dividing the amount of data exchanged during the trace by that busy time, we arrive at an average data rate (in MB/s) we can use to compare drives.
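The arithmetic behind that metric is straightforward. This short Python sketch uses made-up trace totals (not our actual Storage Bench numbers) to show how bytes moved and busy time combine into a MB/s figure:

```python
# How a trace-replay average data rate falls out of the raw numbers:
# total bytes moved divided by total busy time (idle gaps removed).
# The inputs below are invented for illustration.

def average_data_rate(total_bytes, busy_seconds):
    """Return MB/s (binary convention) over busy time only."""
    return total_bytes / busy_seconds / (1024 * 1024)

bytes_moved = 226 * 1024**3   # e.g. ~226 GiB read + written over the trace
busy = 1500.0                 # seconds the drive spent servicing commands
print(round(average_data_rate(bytes_moved, busy), 1))  # ~154.3 MB/s
```

Because idle time is stripped out, a drive that finishes its work quickly and goes back to sleep scores higher than one that stays busy longer to move the same data.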

It's not quite a perfect system. The original trace captures the TRIM command in transit, but since the trace is played on a drive without a file system, TRIM wouldn't work even if it were sent during the trace replay (which, sadly, it isn't). Still, trace testing is a great way to capture periods of actual storage activity, a great companion to synthetic testing like Iometer.

Incompressible Data and Storage Bench v1.0

Also worth noting is the fact that our trace testing pushes incompressible data through the system's buffers to the drive getting benchmarked. So, when the trace replay plays back write activity, it's writing largely incompressible data. If we run our storage bench on a SandForce-based SSD, we can monitor the SMART attributes for a bit more insight.

Mushkin Chronos Deluxe 120 GB
SMART Attributes
RAW Value Increase
#242 Host Reads (in GB)
84 GB
#241 Host Writes (in GB)
142 GB
#233 Compressed NAND Writes (in GB)
149 GB

Host reads are greatly outstripped by host writes to be sure. That's all baked into the trace. But with SandForce's inline deduplication/compression, you'd expect that the amount of information written to flash would be less than the host writes (unless the data is mostly incompressible, of course). For every 1 GB the host asked to be written, Mushkin's drive is forced to write 1.05 GB.

If our trace replay was just writing easy-to-compress zeros out of the buffer, we'd see writes to NAND as a fraction of host writes. This puts the tested drives on a more equal footing, regardless of the controller's ability to compress data on the fly.

Average Data Rate

The Storage Bench trace generates more than 140 GB worth of writes during testing. Obviously, this tends to penalize drives smaller than 180 GB and reward those with more than 256 GB of capacity. Further, the average data rate is based on total busy time. Divide the amount of data read and written by the busy time, and you have a MB/s metric. Busy time is merely time in which the drive was performing an operation.

Most of the time, host I/O activity is a constant, low-level background drone, punctuated by spikes of more demanding I/O at higher queue depths. The average data rate is heavily weighted in favor of light I/O activity, with only a small portion reflecting higher demand.

SanDisk's Marvell-powered drives show up at the top of our chart, though they fall short of first place.

The 120 GB version lands in fifth place, but that's a super-impressive showing for a modestly-sized SSD. It holds a 70 MB/s advantage over the 120 GB Intel SSD 525.

Service Times and Standard Deviation

There is a wealth of information we can collect with Tom's Storage Bench above and beyond the average data rate. Mean (average) service times show what responsiveness is like on an average I/O during the trace. It would be difficult to plot the 10 million I/Os that make up our test, so looking at the average time to service an I/O makes more sense. We can also plot the standard deviation against mean service time. That way, drives with quicker and more consistent service plot toward the origin (lower numbers are better here).

More important, these service time metrics are heavily weighted in favor of intense drive activity, where higher queue depths are observed. Busy time is simply the time a tested disk was performing any host-initiated activity. Consider a period of one second during which five I/O operations are simultaneously executed. If each operation took one second, five seconds of service time would accrue during that period, while only one second of busy time is incurred.
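The worked example above can be checked in code. This illustrative Python function (the names are ours, not part of any benchmark tool) sums per-operation service time and merges overlapping intervals to get busy time:

```python
# Service time vs. busy time for overlapping I/O, per the example above:
# five one-second operations issued simultaneously accrue five seconds of
# service time, but only one second of wall-clock busy time.

def service_and_busy_time(ops):
    """ops: list of (start, end) intervals in seconds."""
    service = sum(end - start for start, end in ops)
    # Busy time is the union of the intervals: merge overlaps, sum lengths.
    busy = 0.0
    cur_start = cur_end = None
    for start, end in sorted(ops):
        if cur_end is None or start > cur_end:
            if cur_end is not None:
                busy += cur_end - cur_start
            cur_start, cur_end = start, end
        else:
            cur_end = max(cur_end, end)
    if cur_end is not None:
        busy += cur_end - cur_start
    return service, busy

five_parallel = [(0.0, 1.0)] * 5
print(service_and_busy_time(five_parallel))  # (5.0, 1.0)
```

This is why service time is the better lens on heavy activity: it grows with queue depth while busy time does not.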

Service time is arguably a more important metric, since periods of rapid activity are more difficult for slower SSDs to accommodate.

The above screen shot shows the cumulative I/O of our trace. Writes are consistent, picking up at a slow rate during this time slice. Reads spike quickly over a short period of time. That initial spike, in red, is a demanding period during which large amounts of data are transferred rapidly.

The SanDisk SSDs aren't quite the fastest, but they're not far behind. OCZ's Vertex 450 and Vector serve up I/O more quickly, with the two larger Extreme IIs showing up in third and fourth place. The 120 GB variant is nestled between Seagate's 600 and Intel's SSD 335.

7. Results: PCMark Vantage And PCMark 7

Futuremark's PCMark 7: Secondary Storage Suite

PCMark 7 uses the same trace-based technology as our Storage Bench v1.0 for its storage suite. It employs a geometric mean scoring system to generate a composite, so we end up with PCMarks instead of megabytes per second. One thousand points separate the top and bottom, but that encompasses a far larger difference than the score alone indicates.

PCMark 7 is a vast improvement over the older PCMark Vantage, at least for SSD benchmarking. The storage suite is composed of several small traces. At the end, the geometric mean of those scores is scaled with a number representing the test system's speed. The scores generated are much different from PCMark Vantage, and many manufacturers are predisposed to dislike it for that reason. It's hard to figure out how PCMark 7 "works" because it uses a sliding scale to generate scores. Still, it represents one of the best canned benchmarks for storage, and if nothing else, it helps reinforce the idea that the differences in modern SSD performance don't necessarily amount to a better user experience in average consumer workloads.

This test's storage benchmarks use Intel's IPEAK trace testing to evaluate performance over several scenarios. Representatives from several manufacturers have told us that PCMark 7 does a good job portraying average user workloads, which include things like media consumption and system maintenance.

The composite scores we're generating are pretty similar for most of the faster SSDs. In terms of percentage difference, the deltas are miniscule.

OCZ's Vector flagship and Plextor's M5 Pro sit at the head of the class, though all three Extreme IIs are in hot pursuit. They don't quite make it, but the 120 GB drive is around 2% behind; that's not much at all.

Futuremark's PCMark Vantage: Hard Drive Suite

PCMark Vantage isn't the paragon of SSD testing, mainly because it's old and wasn't designed for the massive performance solid-state technology enables. Intended to exploit the new features in Windows Vista, Vantage was certainly at the forefront of consumer storage benchmarking at the time. Vantage works by taking the geometric mean of composite storage scores and then scaling them, a lot like PCMark 7 does. But in Vantage's case, this scaling is achieved by arbitrarily multiplying the geometric sub-score mean by 214.65. That scaling factor is supposed to represent an average test system of the day (a system that's now close to a decade behind the times). PCMark 7 improves on this with a unique system-dependent scaling factor and newer trace technology. Why bother including this metric, then? A lot of folks prefer Vantage, in spite of or because of its cartoonish scores and widespread adoption.
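Vantage's scaling is simple to reproduce. The sub-scores below are invented, but the geometric-mean-times-214.65 arithmetic follows the description above:

```python
# PCMark Vantage's storage score as described in the text: the geometric
# mean of the storage sub-test results, multiplied by a fixed 214.65
# scaling factor. The sub-scores here are hypothetical.

import math

VANTAGE_SCALE = 214.65

def vantage_storage_score(subscores):
    geo_mean = math.prod(subscores) ** (1 / len(subscores))
    return geo_mean * VANTAGE_SCALE

subs = [120.0, 95.0, 150.0, 110.0]  # hypothetical sub-test results in MB/s
print(round(vantage_storage_score(subs)))
```

The geometric mean keeps one outlier sub-test from dominating the composite, while the fixed multiplier is what produces Vantage's trademark five-digit "cartoonish" scores.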

It'd be hyperbole to say that SanDisk crushes its competition, but the 240 GB Extreme II takes first place, the 480 GB model takes third, and the 120 GB version gets an honorable mention in fourth.

We'll single out that 120 GB repository again, even if high Vantage scores aren't really the best indicator of performance. Smaller amounts of transferred data over smaller LBA spaces seemingly play right into the nCache scheme's strengths.

8. Results: Power Consumption

Idle Power Consumption

Idle consumption is the most important power metric for consumer and client SSDs. After all, solid-state drives complete host commands quickly, then drop back down to idle. Aside from occasional background garbage collection and housekeeping, a modern SSD spends most of its life doing very little.

Enterprise-oriented drives are more frequently used at full tilt, making their idle power numbers far less important. But this just isn't the case on the desktop, where the demands of client and consumer computing leave most SSDs sitting on their hands for long stretches of time.

It might turn out that the only issue with SanDisk's Extreme II we stumble across is higher power demands. The idle numbers are a little above average, but still within a reasonable range.

PCMark 7 Average Power Consumption

A log of our PCMark 7 run shows higher-than-average power spikes, both in intensity and frequency. Still, we're not calling this a big deal yet.

The Extreme IIs fall to the back of the pack in average PCMark 7 power consumption. Only Corsair's Neutron GTX fares worse on average. The 120 GB Extreme II surprisingly finishes second-to-last, suggesting that the smallest family member has to work harder in this benchmark, despite the higher peak power consumption seen from the 240 and 480 GB models.

Maximum Observed Power Consumption

These results just aren't as important for consumer SSDs. It's rare you see drives pulling down this much power for anything more than a few seconds per hour.

9. Not Extreme To The Second Power, But Close Enough

There's a lot to like about the Extreme II, and it's good to see SanDisk leveraging its unique strengths to create an enthusiast-oriented offering. Moving units in the retail space isn't just about selling more drives; getting the message out to raise SanDisk's profile is probably even more important to the company. If the Extreme II is a success, the benefits will no doubt go beyond selling a few more boxed SSDs.

With that in mind, it's helpful that the Extreme II represents itself assertively in the hand-to-hand knife fight that is high-performance storage. Is it the fastest of the fast? Probably not. Is it fast enough to do battle amongst that esteemed company? We sure think so. And it's hard not to like the idea of Marvell's '9187 controller with Toggle-mode NAND and an emulated-SLC twist. nCache seems a lot like what OCZ has been doing with its Vertex 4/450 and Vector drives, at least on a spiritual level, if not a technical one. SanDisk and OCZ are both looking for advantages anywhere they can find them, and their respective solutions add value in a segment where innovation is increasingly hard to come by.

The Extreme II isn't very fancy-looking. But it's fast and has the cachet that comes from a NAND fabricator's SSD. Samsung, Intel, and Micron/Crucial have reputations based in large part on their engineering and validation, but also from the fact that they're responsible for what is arguably the most important ingredient in an SSD: the flash itself.

More competition from SanDisk is going to put more pressure on the vendors already left in the lurch by higher material costs. The Extreme II is just another evidentiary exhibit that the SSD producers at the bottom of the food chain are on borrowed time. Even as solid-state drive shipments increase year-over-year, the harsh reality is that companies like SanDisk are already in the driver's seat. The more precariously-positioned firms are going to be along for the ride. When push comes to shove, who can undercut a NAND manufacturer that builds its own SSDs?

We'd have a hard time naming every SSD manufacturer, and the average enthusiast familiar with storage probably couldn't name more than a few. But there's a good chance that Intel, Crucial, and Samsung are on that list. At the end of the day, maybe what SanDisk wants most is to be included in the conversation.