JMicron has been in the SSD silicon game for years as part of its fabless design business. The trouble is that the company doesn't have the best reputation amongst enthusiasts as a result of problematic controllers from its past. But the situation could very well change if the JMF667H processor we're looking at today is on its best behavior. After all, I can't imagine the company would have sent me a handful of reference-class SSDs if the silicon wasn't capable of redeeming JMicron.
Last year, I had the opportunity to check out Silicon Motion's reference platform with its four-channel SM2246EN (Hands-On With Silicon Motion's New SSD Controller). That solution was an excellent complement to the Marvell- and SandForce-based offerings of the storage world. For 2014, I'm doing the same thing with JMicron's own four-channel design, and adding a twist I'll introduce shortly.
Four bare SSDs that you can't buy
Why four channels and not eight? Efficiency is one key motivator. Fewer channels facilitate a smaller ASIC, which can, in turn, be more power-friendly. In storage, that's a boon in the enterprise and mobile spaces, but less of an issue on the desktop. Of course, there are some big eight-channel designs too, including Marvell's newer 9189 and Intel's own SATA controller featured prominently in drives like the SSD 730. They tend to use more juice and enable higher performance. But as with any job, it makes sense to use the right tool, and it'd be hard to deny the effectiveness of these more modern four-channel implementations.
Apparently, JMicron is keeping the eight-channel designs tucked away for PCIe-based SSDs, while SATA-attached drives get the company's more mainstream platform that includes the PCB, processor, and firmware. It's a flexible business model that lets drive vendors pick and choose the pieces they need. Silicon Motion made that a key part of its strategy as well. Buy the controller and write your own firmware, or get everything in a turnkey package.
Worldwide, there are a number of SSD brands we don't see in the U.S. Someone has to sell them processors too, and they're filling the gaps between SandForce launches using Marvell, Silicon Motion, and now JMicron logic.
The JMF667H
Built on a 55 nm TSMC process, the ARM-based JMF667H supports up to 512 MB of DDR3 cache and eight chip enables per channel. As a consequence, most drives adopting the 667H will fall into the 128 to 512 GB range. Naturally, the proliferation of 128 Gb NAND is a big deal, and the 667H supports most NAND interfaces, in particular 20 nm IMFT and SanDisk/Toshiba's 19 nm A19 Toggle-mode flash.
But with just four channels available, the architecture does tend to limit performance somewhat, especially in measures of small random operations.
We do get the addition of DevSlp, which we'll test later. Beyond that, there aren't many headline-grabbing features.
Here's what's really cool about today's exploration: for perhaps the first time, we have the chance to test one platform using different kinds of flash, truly isolating NAND as a variable with JMicron's newest controller. The company sent along four different configurations, allowing us to examine the JMF667H with L85A from Intel (20 nm, 128 Gb die), L85C from Intel (20 nm, 128 Gb die), and Toshiba's A19 (19 nm, 64 and 128 Gb die). The exercise should be illuminating, since it's often difficult for us to isolate the impact of a given controller architecture compared to the flash it's attached to.
It should go without saying that these are samples built by JMicron for internal use, and you can't actually go out and buy any of them. Not that you'd want to; they aren't covered by a chassis or protected by a warranty. But JMicron's customers, branded memory vendors like Transcend, are building drives on this platform, and you can buy those right now.
| JMicron JMF667H Ref. Platform | 128 GB L85C | 128 GB A19 | 256 GB A19 | 256 GB L85A |
|---|---|---|---|---|
| Controller | JMicron JMF667H, SATA 3.1, Four-channel, Eight CE per channel (all models) | | | |
| NAND | Intel L85C, 20 nm, 128 Gb die | Toshiba A19, 19 nm, 64 Gb die | Toshiba A19, 19 nm, 128 Gb die | Intel L85A, 20 nm, 128 Gb die |
| Form Factor | 2.5" PCB (all models) | | | |
| Die Count | 8 | 16 | 16 | 16 |
And so we're taking a moment to reacquaint ourselves with a seldom-mentioned name in the storage business. We suspect the company has big things planned. As attention shifts from SATA to SATA Express and PCIe, the time is ripe for JMicron to strike with controllers to match the times. First, though, it has to make it through our gauntlet of impressively constructed tests.
Our consumer storage test bench is based on Intel's Z77 Platform Controller Hub paired with an Intel Core i5-2400 CPU. Intel's 6- and 7-series chipsets are virtually identical from a storage perspective. We're standardizing on older RST 10.6.1002 drivers for the foreseeable future.
Updates to the RST driver package occasionally result in subtle performance changes. They can also lead to some truly profound variance in scores and results as well, depending on the revision. Some versions flush writes more or less frequently. Others work better in RAID situations. Builds 11.2 and newer support TRIM in RAID as well. Regardless, results obtained with one revision may or may not be comparable to results obtained with another, so sticking with one version across all testing is mandatory.
Test System Specs
| Power Testing Laptop | Lenovo T440s, 8 GB DDR3, Windows To Go 8.1, ULINK DevSlp Test Platform |
|---|---|
| Processor | Intel Core i5-2400 (Sandy Bridge), 32 nm, 3.1 GHz, LGA 1155, 6 MB Shared L3, Turbo Boost Enabled |
| Motherboard | Gigabyte G1.Sniper M3 |
| Memory | G.Skill Ripjaws 8 GB (2 x 4 GB) DDR3-1866 @ DDR3-1333, 1.5 V |
| System Drive | Intel S3500 480 GB SATA 6 Gb/s, Firmware: 0306 |
| Drive(s) Under Test | 128 GB JMicron JMF667H Test SSD L85C Flash, SATA 6 Gb/s, Firmware: 417a |
| Drive(s) Under Test | 256 GB JMicron JMF667H Test SSD L85A Flash, SATA 6 Gb/s, Firmware: 417a |
| Drive(s) Under Test | 128 GB JMicron JMF667H Test SSD A19 Flash, SATA 6 Gb/s, Firmware: 423a |
| Drive(s) Under Test | 256 GB JMicron JMF667H Test SSD A19 Flash, SATA 6 Gb/s, Firmware: 423a |
| Comparison Drives | Transcend SSD340 256 GB SATA 6 Gb/s, Firmware: SVN235 |
| | Plextor M6e 256 GB M.2 PCIe x2, Firmware: 1.00 |
| | Plextor M6S 256 GB SATA 6 Gb/s, Firmware: 1.00 |
| | Plextor M6M 256 GB mSATA 6 Gb/s, Firmware: 1.00 |
| | Adata SP920 1024 GB SATA 6 Gb/s, Firmware: MU01 |
| | Adata SP920 512 GB SATA 6 Gb/s, Firmware: MU01 |
| | Adata SP920 256 GB SATA 6 Gb/s, Firmware: MU01 |
| | Adata SP920 128 GB SATA 6 Gb/s, Firmware: MU01 |
| | Crucial M550 1024 GB SATA 6 Gb/s, Firmware: MU01 |
| | Crucial M550 512 GB SATA 6 Gb/s, Firmware: MU01 |
| | Intel SSD 730 480 GB SATA 6 Gb/s, Firmware: L2010400 |
| | Samsung 840 EVO mSATA 120 GB, Firmware: EXT41B6Q |
| | Samsung 840 EVO mSATA 250 GB, Firmware: EXT41B6Q |
| | Samsung 840 EVO mSATA 500 GB, Firmware: EXT41B6Q |
| | Samsung 840 EVO mSATA 1000 GB, Firmware: EXT41B6Q |
| | SanDisk X210 256 GB, Firmware: X210400 |
| | SanDisk X210 512 GB, Firmware: X210400 |
| | Intel SSD 530 180 GB SATA 6 Gb/s, Firmware: DC12 |
| | Intel SSD 520 180 GB SATA 6 Gb/s, Firmware: 400i |
| | SanDisk A110 256 GB M.2 PCIe x2, Firmware: A200100 |
| | Silicon Motion SM2246EN 128 GB SATA 6 Gb/s, Firmware: M0709A |
| | Crucial M500 120 GB SATA 6 Gb/s, Firmware: MU02 |
| | Crucial M500 240 GB SATA 6 Gb/s, Firmware: MU02 |
| | Crucial M500 480 GB SATA 6 Gb/s, Firmware: MU02 |
| | Crucial M500 960 GB SATA 6 Gb/s, Firmware: MU02 |
| | Samsung 840 EVO 120 GB SATA 6 Gb/s, Firmware: EXT0AB0Q |
| | Samsung 840 EVO 240 GB SATA 6 Gb/s, Firmware: EXT0AB0Q |
| | Samsung 840 EVO 480 GB SATA 6 Gb/s, Firmware: EXT0AB0Q |
| | Samsung 840 EVO 1 TB SATA 6 Gb/s, Firmware: EXT0AB0Q |
| | SanDisk Ultra Plus 64 GB SATA 6 Gb/s, Firmware: X211200 |
| | SanDisk Ultra Plus 128 GB SATA 6 Gb/s, Firmware: X211200 |
| | SanDisk Ultra Plus 256 GB SATA 6 Gb/s, Firmware: X211200 |
| | Samsung 840 Pro 256 GB SATA 6 Gb/s, Firmware: DXM04B0Q |
| | Samsung 840 Pro 128 GB SATA 6 Gb/s, Firmware: DXM04B0Q |
| | SanDisk Extreme II 120 GB, Firmware: R1311 |
| | SanDisk Extreme II 240 GB, Firmware: R1311 |
| | SanDisk Extreme II 480 GB, Firmware: R1311 |
| | Seagate 600 SSD 240 GB SATA 6 Gb/s, Firmware: B660 |
| | Intel SSD 525 30 GB mSATA 6 Gb/s, Firmware: LLKi |
| | Intel SSD 525 60 GB mSATA 6 Gb/s, Firmware: LLKi |
| | Intel SSD 525 120 GB mSATA 6 Gb/s, Firmware: LLKi |
| | Intel SSD 525 180 GB mSATA 6 Gb/s, Firmware: LLKi |
| | Intel SSD 525 240 GB mSATA 6 Gb/s, Firmware: LLKi |
| | Intel SSD 335 240 GB SATA 6 Gb/s, Firmware: 335s |
| | Intel SSD 510 250 GB SATA 6 Gb/s, Firmware: PWG2 |
| | OCZ Vertex 3.20 240 GB SATA 6 Gb/s, Firmware: 2.25 |
| | OCZ Vector 256 GB SATA 6 Gb/s, Firmware: 2.0 |
| | Samsung 830 512 GB SATA 6 Gb/s, Firmware: CXM03B1Q |
| | Crucial m4 256 GB SATA 6 Gb/s, Firmware: 000F |
| | Plextor M5 Pro 256 GB SATA 6 Gb/s, Firmware: 1.02 |
| | Corsair Neutron GTX 240 GB SATA 6 Gb/s, Firmware: M206 |
| Graphics | MSI Cyclone GTX 460 1 GB |
| Chassis | Lian Li Pitstop T60 |
| RAID | LSI 9266-8i PCIe x8, FastPath and CacheCade AFK |
| Power Supply | Seasonic X-650, 650 W 80 PLUS Gold |
System Software and Drivers
| Operating System | Windows 7 x64 Ultimate |
|---|---|
| API | DirectX 11 |
| Graphics | Nvidia 314.07 |
| RST | 10.6.1002 |
| IMEI | 7.1.21.1124 |
| Generic AHCI | MSAHCI.SYS |
Benchmark Suite
| ULINK DriveMaster 2012 | DM2012 v980, JEDEC 218A-based TRIM Test, Protocol Test Suite |
|---|---|
| Test Specific Hardware | SAS/SATA Power Hub, DevSlp Platform |
| Tom's Hardware Storage Bench v1.0 | Intel iPeak Storage Toolkit 5.2.1, Tom's Storage Bench 1.0 Trace Recording |
| Iometer 1.1.0 | # Workers = 1, 4 KB Random: LBA=16 GB, varying QDs, 128 KB Sequential, 16 GB LBA Precondition, Exponential QD Scaling |
| PCMark 8 | PCMark 8 2.0.228, Storage Consistency Test |
| PCMark 7 | Secondary Storage Suite |
Fantastic sequential read and write performance is a trademark of modern SSDs. To measure it, we use incompressible data over a 16 GB LBA space, and then test at queue depths from one to 16. We're reporting these numbers in binary (where 1 KB equals 1024 bytes) instead of decimal (where 1 KB is 1000 bytes). When necessary, we also limit the scale of the chart to enhance readability.
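To make the distinction concrete, here's a minimal Python sketch (our own illustration, not part of the benchmark tooling) that converts a decimal throughput figure into the binary units used in our charts:

```python
def decimal_to_binary_mb(mb_decimal):
    """Convert a decimal MB/s figure (1 MB = 1,000,000 bytes) to binary
    MB/s (1 MB = 1,048,576 bytes); the gap is roughly 4.9%."""
    return mb_decimal * 1_000_000 / (1024 * 1024)

# A drive reporting 540 MB/s in decimal units charts at about 515 MB/s in binary.
print(round(decimal_to_binary_mb(540), 1))  # → 515.0
```

That near-5% gap is worth remembering when our numbers look slightly lower than a vendor's spec sheet.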
128 KB Sequential Read
Before we get too obscure with the benchmarks, I'll start with the basics. Sequential reads are somewhat mixed between these four reference-class drives equipped with different types of NAND. The SSD armed with Intel's L85C only picks up speed at the end, as queue depth increases. The L85A- and A19-based models get near or pass the 500 MB/s barrier.
Between 520 and 530 MB/s is the practical limit of SATA 6 Gb/s, and that's where the two ONFi-compliant models peak (ONFi stands for Open NAND Flash Interface, the workgroup that standardized an interface for certain flash components). The drives sporting A19 NAND are a little different; they don't demonstrate as high of a read throughput ceiling, which is typical of the Toggle-mode DDR interface. But this will probably be the last time you see the ONFi flash win.
128 KB Sequential Write
Typically, when I test four drives, I get different capacities. Those capacities often behave differently on a benchmark chart due to their die and package configuration. In this case, however, there is significant differentiation from only two capacities. Why? The NAND interface types matter. A lot. The 256 GB model equipped with Toshiba's A19 flash takes top honors by achieving 450 MB/s, followed by the 128 GB drive sporting the same stuff.
Both ONFi-capable drives appear further down the list. Combining JMicron's controller and L85A flash results in just under 250 MB/s, while the 128 GB L85C-armed model falls 100 MB/s behind. Really, these numbers aren't surprising, given fewer die available for interleaving on the SSD with L85C NAND.
We turn to Iometer as our synthetic metric of choice for testing 4 KB random performance. Technically, "random" means an access that lands more than one sector away from the previous one. On a mechanical hard disk, that can incur significant latencies that hammer performance. Spinning media simply handles sequential accesses much better than random ones, since the heads don't have to be physically repositioned. With SSDs, the random/sequential distinction is much less relevant. Data is placed wherever the controller wants, so the idea that the operating system sees one piece of information next to another is mostly an illusion.
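To illustrate the access-pattern difference, here's a hypothetical sketch (not Iometer's actual code) of how sequential and random LBA streams over a 16 GB span might be generated:

```python
import random

SECTOR = 512                        # bytes per LBA sector
SPAN = 16 * 10**9 // SECTOR         # sectors in the 16 GB test span

def sequential_lbas(start, block_sectors, count):
    """128 KB-style sequential stream: each access begins where the last ended."""
    return [start + i * block_sectors for i in range(count)]

def random_lbas(block_sectors, count, seed=0):
    """4 KB-style random stream: each access can land anywhere in the span."""
    rng = random.Random(seed)
    return [rng.randrange(0, SPAN - block_sectors) for _ in range(count)]

seq = sequential_lbas(0, 256, 4)    # 256 sectors x 512 B = 128 KB blocks
rnd = random_lbas(8, 4)             # 8 sectors x 512 B = 4 KB blocks
```

On a hard disk, the second stream means constant head repositioning; on an SSD, the controller's mapping layer makes the two far more alike.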
4 KB Random Reads
Testing the performance of SSDs often emphasizes 4 KB random reads, and for good reason. Most system accesses are both small and random. Moreover, read performance is arguably more important than writes when you're talking about typical client workloads.
The controller is "only" good for about 80,000 4 KB IOPS with the fastest flash around. The ONFi-equipped L85 variants appear under that peak, while the A19-equipped configurations are even from a queue depth of one through 32. The low-queue-depth results are the most relevant to desktop users, and they're all clustered around 10,000 IOPS.
Follow the lines upward, though, and it's plain that Toshiba's A19 flash offers a big performance bump at every step. The two Toggle-mode-equipped drives plateau at a queue depth of 16, but that could be the controller running out of steam. After all, Plextor's M6S (also with A19 NAND, but a four-channel Marvell processor) gets close to 100,000 in this same workload.
Granted, pushing peaks for the sake of spec sheets doesn't make much sense when nobody in a desktop environment will ever see them outside of freakishly improbable circumstance...
4 KB Random Writes
Again, the A19-based models track closely, reaching 80,000 IOPS. This is because the 128 GB sample employs the same number of dies as the 256 GB model. JMicron's bigger reference SSD simply uses denser 128 Gb dies.
The 256 GB model armed with L85A doesn't make it much past 60,000 IOPS, while the L85C-based drive never even sees 40,000. It does, however, fare much better at a queue depth of one than the L85A-based version. The four-channel design does appear to extract maximum performance from each respective NAND interface.
But we're not going to use theoretical corner cases (the sequential and random 4 KB benchmarks we just ran) to crown one configuration a winner and another a loser. Really, the trace-based metrics that follow are better for formulating conclusions. And doubly so for these JMF667H platforms, since you can't go out and buy any of them.
Random Performance Over Time
My saturation test consists of writing to each drive for 12 hours using 4 KB blocks with 32 outstanding commands. But first I secure erase each drive. Then, I apply the write load, illustrating average IOPS for each minute (except for the last 20 minutes, where I zoom in and show you one-second average increments).
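The summarization step can be sketched like this (an illustrative reimplementation, not the actual logging script):

```python
def summarize(iops_per_second):
    """Collapse 12 hours of one-second IOPS samples into per-minute averages,
    keeping the final 20 minutes at one-second granularity for the zoomed view."""
    per_minute = [sum(iops_per_second[i:i + 60]) / 60
                  for i in range(0, len(iops_per_second), 60)]
    last_20_minutes = iops_per_second[-20 * 60:]
    return per_minute, last_20_minutes

samples = [5000] * (12 * 3600)          # a flat, idealized 12-hour run
minutes, tail = summarize(samples)
print(len(minutes), len(tail))          # → 720 1200
```

The per-minute averages give the long view; the one-second tail exposes the jitter that averaging would otherwise hide.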
Example: Intel's SSD 730
The above chart comes from The SSD 730 Series Review: Intel Is Back With Its Own Controller; consider it a reference of sorts. The 100% write (in pink), 50% write (in green), and 30% write (in blue) workloads are tightly grouped. There aren't any disturbing variations. And note the order of the bands: a 100% write generally results in the lowest performance, and as we add reads (as a percentage), we get progressively faster.

Now, this is from a production JMF667H-based drive, Transcend's 256 GB SSD340 with L85A flash, using older 263 series firmware. In other words, it's identical to the 256 GB L85A-based reference drive, except that JMicron's implementation employs newer 417 firmware.
Look at the order of those bands. The 50% write trace falls below the 100% write workload, and that's not orthodox. When I first saw this, before getting my hands on the reference platform with its newer firmware, I assumed something was wrong. But then, take a look at this:

This is JMicron's reference SSD with the 417 firmware. The bands are actually wider, but that's partly because performance was so low before. The 50% write band now shows up above 100% and below 30%, where it should be. Clearly, the newer firmware's garbage collection and background processes are far better optimized.
What happens when JMicron swaps in A19 flash? Does that make an appreciable difference?

This is the 128 GB reference drive with 64 Gb A19. It's clean and pretty. The bands are narrow and distinct, with few outliers. The 100% write separates into two layers, but is still orderly-looking. Actually, I'm amazed at how much the saturation test changes after swapping around some flash. I know Plextor's drives do well in this metric too, and it's possible that the good behavior is related to reliance on Toshiba's Toggle-mode interface. JMicron looks to have its act together here.
Storage Bench v1.0 (Background Info)
Our Storage Bench incorporates all of the I/O from a trace recorded over two weeks. Replaying this sequence to capture performance gives us a bunch of numbers that aren't really intuitive at first glance. Most idle time gets expunged, leaving only the time each benchmarked drive is actually busy working on host commands. So, by taking the ratio of the amount of data exchanged during the trace to that busy time, we arrive at an average data rate (in MB/s) we can use to compare drives.
It's not quite a perfect system. The original trace captures the TRIM command in transit, but since the trace is played on a drive without a file system, TRIM wouldn't work even if it were sent during the trace replay (which, sadly, it isn't). Still, trace testing is a great way to capture periods of actual storage activity, a great companion to synthetic testing like Iometer.
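The average data rate calculation itself is simple. Here's a quick sketch with made-up numbers (the real figures come from the iPeak trace analysis):

```python
def average_data_rate(bytes_transferred, busy_seconds):
    """Average data rate in MB/s (binary: 1 MB = 1,048,576 bytes)."""
    return bytes_transferred / (1024 * 1024) / busy_seconds

# e.g. 140 GB exchanged over 20 minutes of drive-busy time:
rate = average_data_rate(140 * 1024**3, 20 * 60)
print(round(rate, 1))  # → 119.5
```

Because idle time is stripped out, a drive that finishes its work faster posts a higher rate even if wall-clock time is identical.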
Incompressible Data and Storage Bench v1.0
Also worth noting is the fact that our trace testing pushes incompressible data through the system's buffers to the drive getting benchmarked. So, when the trace replay plays back write activity, it's writing largely incompressible data. If we run our storage bench on a SandForce-based SSD, we can monitor the SMART attributes for a bit more insight.
| Mushkin Chronos Deluxe 120 GB SMART Attributes | RAW Value Increase |
|---|---|
| #242 Host Reads (in GB) | 84 GB |
| #241 Host Writes (in GB) | 142 GB |
| #233 Compressed NAND Writes (in GB) | 149 GB |
Host reads are greatly outstripped by host writes to be sure. That's all baked into the trace. But with SandForce's inline deduplication/compression, you'd expect that the amount of information written to flash would be less than the host writes (unless the data is mostly incompressible, of course). For every 1 GB the host asked to be written, Mushkin's drive is forced to write 1.05 GB.
If our trace replay was just writing easy-to-compress zeros out of the buffer, we'd see writes to NAND as a fraction of host writes. This puts the tested drives on a more equal footing, regardless of the controller's ability to compress data on the fly.
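Plugging the table's SMART deltas into that calculation shows the math behind the 1.05 figure:

```python
# SMART raw-value deltas from the table above:
host_writes_gb = 142      # attribute 241, host writes
nand_writes_gb = 149      # attribute 233, compressed NAND writes

# NAND writes per host write; anything above 1.0 means the
# controller's compression gained nothing on this data.
write_amplification = nand_writes_gb / host_writes_gb
print(round(write_amplification, 2))  # → 1.05
```

With highly compressible data, that ratio would drop well below 1.0 on a SandForce drive; our incompressible trace keeps the playing field level.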
Average Data Rate
The Storage Bench trace generates more than 140 GB worth of writes during testing. Obviously, this tends to penalize drives smaller than 180 GB and reward those with more than 256 GB of capacity.

This list is long, but keep an eye out for the JMicron-powered SSDs in purple. Both ONFi-equipped drives compete readily, especially compared to the 120 and 240 GB M500 and SP920. The reference platforms armed with A19 flash behave much differently, landing well above their expected weight class.
I wish I could say these results, taken alone, are all you need to reach a positive conclusion. But they aren't; the next page is critical, too. Still, we can't ignore how well the JMicron drives complemented by A19 NAND fare. The 256 GB model appears next to SanDisk's X210 (a drive you know I love). And the 128 GB version bests Plextor's M6S/M, employing the same flash interface.
Let's turn to the service time mechanics on the next page for more detail.
Service Times
Beyond the average data rate reported on the previous page, there's even more information we can collect from Tom's Hardware's Storage Bench. For instance, mean (average) service times show what responsiveness is like on an average I/O during the trace.
It would be difficult to graph the 10+ million I/Os that make up our test, so looking at the average time to service an I/O makes more sense. For a more nuanced idea of what's transpiring during the trace, we plot mean service times for reads against writes. That way, drives with better latency show up closer to the origin; lower numbers are better.
Write service time is simply the total time it takes an input or output operation to be issued by the host operating system, travel to the storage subsystem, commit to the device, and have the drive acknowledge the operation. Read service is similar. The operating system asks the storage device for data in a certain location, the SSD reads that information, and then sends it to the host. Modern computers are fast and SSDs are zippy, but there's still a significant amount of latency involved in a storage transaction.
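As a rough host-side illustration only (our real service times come from the trace replay and iPeak's analysis, not this), a single synchronous write's service time could be approximated like so:

```python
import os
import time

def write_service_time(path, payload):
    """Rough host-side service time for one synchronous write, in microseconds."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    start = time.perf_counter()
    os.write(fd, payload)      # issue the write...
    os.fsync(fd)               # ...and wait for the device to acknowledge it
    elapsed_us = (time.perf_counter() - start) * 1e6
    os.close(fd)
    return elapsed_us
```

Even this crude timer makes the point: every transaction pays for the trip through the OS, the SATA link, and the drive's own firmware before the acknowledgment comes back.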
As you look through the results, note that any time we get four capacities from the same product family, it's not uncommon for there to be a big delta between the smallest and largest models.
And indeed, the spread is more significant than I would have expected given two capacities and different NAND interfaces.
Mean Read Service Time
By now, we've come to expect the L85A/L85C-equipped configs pulling up the rear. On the other hand, the reference JMF667H-based SSDs with Toggle-mode NAND look hot. Both capacities are split by the M5 Pro, and they finish ahead of Adata's zippy SP920 at 512 GB.
Those read service times are only slightly related to capacity though, whereas write service times correlate more closely.

JMicron's L85C-based 128 GB drive beats out the 120 GB M500, while the 128 GB JMicron drive with A19 flash falls behind Silicon Motion's reference platform by a slim 11-microsecond average.
Futuremark's PCMark 8 expanded storage tests are awesome. With so much data and a comprehensive testing regimen, we can really drill down on drive performance.
First, the raw block device (there is no partition) is preconditioned twice by filling the entire accessible LBA space with 128 KB sequential writes. Once that is completed, the first Degradation Phase randomly writes blocks between 4 KB and 1 MB in size to random LBA spaces on the drive. Since the writes aren't 4 KB-aligned much of the time, the SSD's performance drops quickly. After all, non-4 KB-aligned accesses create overhead and generally increase write amplification significantly.
The first Degradation Phase begins with 10 minutes of those punishing random offset writes, after which each PCMark 8 activity trace is played against the SSD being tested. The successive degradation rounds are similar, except an additional five minutes are tacked onto each iteration. After eight repetitions, that write period expands to 45 minutes.
Next comes the Steady Phase. Each of five Steady Phases writes 45 minutes worth of random offset data prior to trace playback, pushing the drive even harder and making it more difficult to perform housekeeping duties. With fewer blocks available for writing, latency increases substantially.
Lastly, PCMark 8 moves into a Recovery Phase, which consists of five idle minutes before trace playback. Repeat that five times, and the test concludes.
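The schedule described above can be sketched as follows (the phase names are our own labels, not Futuremark's):

```python
def consistency_schedule():
    """PCMark 8's 18-round run: eight Degradation, five Steady, five Recovery."""
    rounds = [("degrade", 10 + 5 * i) for i in range(8)]   # 10..45 min of random writes
    rounds += [("steady", 45)] * 5                         # 45 min before each replay
    rounds += [("recover", 5)] * 5                         # 5 min of idle instead
    return rounds

schedule = consistency_schedule()
print(len(schedule), schedule[7])  # → 18 ('degrade', 45)
```

Laying it out this way makes the design obvious: the drive is driven progressively deeper into a degraded state, held there, and then finally given idle time to recover.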
For more information on the test and how it works, check out Plextor M6e 256 GB PCI Express SSD Review: M.2 For Your Desktop.
Storage Consistency With PCMark 8's Adobe Photoshop (Heavy) Trace
Because there are 18 individual rounds packed with 10 traces each, we need to focus. We'll choose one trace, Adobe Photoshop (Heavy), and keep tabs on it through the entire extended run.
Bandwidth
Since you're this far into my story, I'll confess something to you: I love storage testing. But, I love good storage tests even more. It's a shame that the storage consistency test isn't available for normal PCMark 8. Only super-special people get it, limiting access to advanced users, including members of the media. If you really want to check it out, it'll cost you as much as a well-equipped Ultrabook.

This is what I'm talking about. Pitted against three Marvell-based drives at 256 GB, both 128 GB JMF667H-powered SSDs and the 256 GB A19-equipped model rise above the crowd. Give JMicron's controller a little room to breathe (or five minutes of idle time) and it takes the competition out.
Not shown is the 256 GB variant with L85A flash, since it's hard to show more than six drives on the chart. If you're keeping score, though, it lands right up against the 128 GB drive with L85C NAND.
Latency
In this test, we're taking that same Adobe Photoshop (Heavy) trace and using average read and write latency to illustrate responsiveness. We'll sprinkle in competing drives for comparison, too.

Through the first 13 rounds, the script gets flipped. Aside from the excellent 256 GB drive with A19 flash, the two 128 GB SSDs experience significant read latency. Then, as the recovery rounds hit, they head straight to the floor.

As before, the observed write latency is superb once JMicron's JMF667H has a chance to catch its breath. During the punishing first 13 stages, the two A19-equipped models manage to maintain their composure.
Best and Worst Score Reference

We've been using ULINK's DriveMaster 2012 software and hardware suite to introduce a new test for client drives. Using JEDEC's standardized 218A Master Trace, DriveMaster can turn a sequence of I/O (similar to our Tom's Hardware Storage Bench) into a TRIM test. JEDEC's trace represents months of drive activity, day-to-day tasks, and background operating system work.
ULINK strips out the read commands for this benchmark, leaving us with the write, flush, and TRIM commands to work with. Execute the same workload with TRIM support and without, and you end up with a killer metric for further characterizing drive behavior.
DriveMaster is used by most SSD manufacturers to create and perform specific measurements. It's currently the only commercial product that can create the scenarios needed to validate TCG Opal 2.0 security, though it's almost unlimited in potential applications. Much of the benefit tied to a solution like DriveMaster is its ability to diagnose bugs, ensure compatibility, and issue low-level commands. In short, it's very handy for the companies actually building SSDs. And if off-the-shelf scripts don't do it for you, make your own. There's a steep learning curve, but the C-like environment and command documentation gives you a fighting chance.
This product also gives us some new ways to explore performance. Testing the TRIM command is just the first example of how we'll be using ULINK's contribution to the Tom's Hardware benchmark suite.
On a 256 GB drive, each iteration writes close to 800 GB of data, so running the JEDEC TRIM test suite once on a 256 GB SSD generates almost 3.2 TB of mostly random writes (it's 75% random and 25% sequential). By the end of each run, over 37 million write commands are issued.
The first two tests employ DMA to access the storage, while the last two use Native Command Queuing. Since most folks don't use DMA with SSDs (aside from some legacy or industrial applications) we don't concern ourselves with those. It can take up to 96 hours to run one drive through all four runs, though faster drives can roughly cut the time in half. Because so much information is being written to an already-full SSD (the drive is filled before each test, and then close to 800 GB are written per iteration), SSDs that perform better under heavy load fare best. Without TRIM, on-the-fly garbage collection becomes a big contributor to high IOPS. With TRIM, 13% of space gets TRIM'ed, leaving more room for the controller to use for maintenance operations.
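The write-volume arithmetic, restated in code for clarity (figures are the approximations quoted above):

```python
ITERATIONS = 4            # DMA and NCQ runs, each with and without TRIM
GB_PER_ITERATION = 800    # approximate writes per iteration on a 256 GB drive

total_gb = ITERATIONS * GB_PER_ITERATION
random_gb = total_gb * 0.75          # the workload is 75% random, 25% sequential
print(total_gb, random_gb)           # → 3200 2400.0
```

That's 3.2 TB of writes, most of them random, poured onto an already-full drive; little wonder the suite takes days to finish.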
TRIM Testing
Average
To avoid adding too much data, I'm charting the average performance for each benchmarked SSD with and without TRIM. Displayed in IOPS, this helps us make comparisons more quickly.

To the extent that you can buy JMF667H-based drives at the moment, they're mostly running older 200-series firmware. Our reference models from JMicron get updated 400-series software, which makes a difference.
Take a peek at the 128 GB models. They surround the excellent 120 GB Samsung 840 EVO and nearly topple the 240 GB Crucial M500.
Instantaneous
This is the instantaneous TRIM result for all four reference platforms. It's complicated-looking with all four lines spiking up and down, but keep your eyes focused on the individual colors. The 256 GB model with A19 flash performs particularly well. But it also has a secret weapon: over-provisioning. Technically, we'd call this a 240 GB model. But it's coined and minted as a 256 GB SSD, so that's how I've referred to it thus far.

Although this is a TRIM test, that over-provisioning helps with and without the command implemented. Otherwise, the 256 GB models would be closer together. As long as the NAND isn't radically dissimilar, tests like this largely reflect TRIM performance, garbage collection/idle merge processes, and spare area. It just so happens that the new firmware is quite good in TRIM testing, while some drives (like Plextor's M5 Pro) don't benefit much from TRIM.
Throughput
We collect and report the total throughput of each drive in the NCQ with TRIM test. It's one number that helps capture overall performance in the test.

Active Idle Power Consumption
Idle consumption is the most important power metric for consumer and client SSDs. After all, solid-state drives complete host commands quickly and then drop back down to idle. Aside from the occasional background garbage collection, a modern SSD spends most of its life doing very little. Enterprise-oriented drives are more frequently used at full tilt, making their idle power numbers less relevant. But this just isn't the case on the desktop, where the demands of client and consumer computing leave most SSDs sitting on their hands for long stretches of time.
Active idle power numbers are critical, especially when it comes to their impact on mobile platforms. Idle means different things on different systems, though. Pretty much every drive we're testing is capable of one or more low-power states, up to and including DevSlp. That last feature is a part of the SATA 3.1 host specification. And while it requires a capable SSD and a compatible platform, enabling DevSlp takes power consumption down to a very small number.
Interestingly, I measure consistently different active idle states for the JMicron reference drives. The two 128 GB configurations land in the 0.4 W range, while both 256 GB SSDs idle closer to 0.2 W. For drives with twice as much NAND, that's seemingly weird.
PCMark 7 Average Power Consumption
If we log power consumption through a workload, even a relatively heavy one, we see that average use is still pretty close to the idle numbers. Maximum power may spike fiercely, but the draw during a PCMark 7 run is light. You can see the drives fall back down to the idle "floor" between peaks of varying intensity.
At this point, it's tempting to suspect something is wrong. But it's not. The two 256 GB JMF667H-based drives again use substantially less power (on average) through the run, thanks largely to the low idle power consumption already observed.

After a lot of power testing, logging, and Excel work, we end up with the above chart. It's not particularly readable, but that just lends to its air of mystery, right? We've heard plenty of readers like Chris Angelini's power charts in his CPU reviews; this is something similar for you.
But it's beyond me why the disparity in idle power use is so pronounced. It has to be an artifact of JMicron's JMF667H-based reference platforms and their on-board components, though. To conclude, these drives appear very efficient overall, even if it bothers me that they differ so much at idle. The outcome won't truly become significant until we can get our hands on more retail hardware employing JMicron's processor.
It's true that SATA 3.1 is slowly going out of vogue, and that next year you'll start seeing fewer SSDs leveraging it. They'll still be prolific, dominating the sales charts, but they won't be getting as much attention. PCI Express and SATA Express will start to pick up steam, demonstrate lower power consumption, and dominate the performance charts. They'll become the new sexy in solid-state storage. I think it's reasonable to expect that a mainstream laptop in 2015 or 2016 will employ a different interface and form factor than what we're testing today.
That makes now the golden age of SATA. From here, it's all downhill. Or is it?
After years of testing SATA-based drives, it's clear that most mainstream users are still served well by 6 Gb/s drives (even if the underlying technology is limited by the interface in certain ways). They're good with power, they're fast, and they're cheap. If you give me the choice between two solid SATA drives or one SATA Express-based repository, I'll take the RAID array in a desktop machine. The math works out a little differently for laptops. But even in that application, SATA is still smart.
That's because the power savings you can enable through a PCIe-based device aren't as good as what the current crop of SATA SSDs achieves. DevSlp was introduced with SATA 3.1, and the latest-generation notebooks are better for it. Even if your storage subsystem isn't DevSlp-capable, slumber states are still quite conservative.
If only for that reason, it's too early to signal the demise of SATA-based SSDs. In truth, none of the existing PCIe-based solutions in the M.2 form factor (at least the ones I've tested) radically improve your computing experience, particularly in the mobile space, where workloads are predictably milder.
All of that is my long-winded way of affirming that JMicron's power-efficient, cost-effective JMF667H SATA controller is a good idea. And the processor's building blocks can almost assuredly be applied to PCIe-based silicon down the road. In fact, I'm predicting that we see just that. But for now, JMicron's platform is a good mainstream solution.
Performance is actually breathtaking when the controller is paired with Toshiba's A19 flash. Don't expect too many drives leveraging that pricier combination, though. More relevant will be ONFi-compliant NAND in budget-oriented drives. That's too bad, since the A19-equipped reference platforms best the M6S and M6M in several of our benchmarks. Those Plextor drives utilize a Marvell four-channel controller and Toshiba's A19 flash as well, so the fight's pretty darned fair. In my mind, this demonstrates the 55 nm JMF667H to be competitive where it counts: actual workloads.
As we move forward, watching to see what the rest of 2014 has to offer storage mavens like myself, don't forget about JMicron. The Taiwanese firm isn't abandoning third-gen SATA, nor is it neglecting the future. Against all odds, this company (and Silicon Motion) is looking to succeed where others have already failed. Now that we've tested its reference-class hardware, the real question becomes: how does the JMF667H compare to Silicon Motion's SM2246EN? I know a good way to find out.