Transcend SSD340 256 GB Review: Now With JMicron Inside
1. Transcend And JMicron Pair Up For The SSD340

If you're a storage enthusiast like I am, then you probably know about the third-party controller vendors and how they package their wares. For instance, Marvell won't sell firmware to go with its processors, while SandForce can't sell you silicon without the software to go with it. For certain companies putting SSDs together, this presents an interesting challenge. So many of them got into SSDs as an extension of their branded memory products. Over the years, some of them, like OCZ, developed engineering resources and intellectual property around solid-state storage. Others were content to hitch their wagons to SandForce's star as it rose to prominence, and didn't really compete in the same IP arms race.

That meant most of SandForce's partners didn't need the infrastructure necessary to build a high-quality SSD. SandForce did most of the heavy lifting by selling turnkey solutions, which made entering the business a relatively trifling matter. That company's last major revision to its controller hardware debuted in 2011, though. Three years later, there isn't anything to replace it yet.

Enthusiasts don't like to wait that long. So, the drive makers had to adjust their trajectories. Adata, for instance, reached out and got its hands on a Marvell-powered platform courtesy of Micron. In the value space, Silicon Motion is making some inroads, and that's a story we'll be telling another day.

Transcend plays the field between both ends of the spectrum, using controllers from Silicon Motion, SandForce, and JMicron. The JMF667H sits at the heart of Transcend's SSD340, a value-oriented offering designed to slug it out with drives like Crucial's M500 at 120 and 240 GB. Despite its low-cost billing, the SSD340 still manages to offer compelling performance.

How do speed and pricing come together? We'll sort that out in the lab. But first, have a look at the line-up:

Transcend SSD340 | 64 GB | 128 GB | 256 GB
Controller | JMicron JMF667H, SATA 3.1, four-channel, eight CE per channel
NAND | Micron L85A, 20 nm, 128 Gb die
Form Factor | 2.5" PCB, 7 mm
Die Count | 4 | 8 | 16
Seq Read/Write (MB/s) | Not listed | Not listed | 520/290
Rand Read/Write 4 KB (IOPS) | Not listed | Not listed | 60,000/60,000
Warranty | Three-year limited
Accessories | 2.5" to 3.5" sled, mounting screws, SSD Scope cloning and management software

Transcend Information (as it's formally called) is selling the SSD340 at three capacity points. There may actually be a 32 GB model at some point, though we don't see a need for it. Even as a cache drive, its performance would suffer with 128 Gb dies.

Worryingly, Transcend doesn't list performance specifications for anything other than its 256 GB drive. Since that's the only one I have, we cannot comment on how the other two versions will run. We'd like to see Transcend update its performance data for the other two SSD340s.

Inside Transcend's SSD340

We begin our tour with the SSD340's chassis:

There aren't any screws holding the plastic exterior together, except for one under the warranty sticker. The top and bottom halves are secured largely by molded clips.

You probably won't have to worry about this, but I've taken the drive apart and put it back together several times now, and the plastic holds up well. Of course, if you open your SSD340, you void the warranty.

Micron's L85A 128 Gb flash at 20 nm

Transcend uses Micron's L85A 20 nm NAND in 128 Gb dies. This is more or less the same flash we saw on Crucial's M500 and more recently on the Adata SP920. It seems like the 128 Gb stuff is truly ready for prime time. In the past, only Micron/Crucial was really using it. Now it's popping up all over the place. Our 256 GB sample employs 16 placements of MT29F128G08CBCABH6-6:A. Each package sports a single die on a synchronous interface.

With 256 MB of Samsung LPDDR3 serving as data cache, the only other component to point out is the controller. This is the first time we've put our hands on JMicron's JMF667H in a retail product.

The JMicron JMF667H Processor

We've already covered this controller in great depth. Check out JMicron Returns: The JMF667H Controller On Four Reference SSDs for more information. In essence, though, the JMF667H is a 32-bit processor based on the ARM9 instruction set. It's a four-channel design with eight chip enables per channel, so 512 GB is really the top-end capacity right now. In a lot of ways, it's similar to the Silicon Motion SM2246EN we previewed last year, also sharing a bit with Marvell's 9188.
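The 512 GB ceiling follows directly from the channel and chip-enable counts. Here's a quick sketch of the arithmetic, assuming one 128 Gb die per chip enable (our sample's configuration); it's back-of-the-envelope math, not vendor data:

```python
# Capacity ceiling for a four-channel controller with eight chip
# enables (CE) per channel, populated with 128 Gb dies.
channels = 4
ce_per_channel = 8
die_density_gbit = 128  # Micron L85A

max_dies = channels * ce_per_channel            # 32 dies total
max_capacity_gb = max_dies * die_density_gbit // 8  # gigabits -> gigabytes

print(max_capacity_gb)  # 512
```

The same math explains our 256 GB sample: 16 dies at 128 Gb each.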

There is clearly an appetite for value-oriented four-channel designs, and we've seen our share (Marvell's 9175, the SM2246EN, and the new Marvell 9188). By and large, each of the SSDs we've tested based on those controllers was admirable. Now, with its 128 Gb L85A flash, Transcend's SSD340 isn't going to optimally represent what the JMF667H can do. My JMicron preview showed it takes Toshiba's A19 to really hit the afterburners. Shipping products would likely use more economical NAND, I suspected. And that's what the SSD340 gives us.

2. How We Tested Transcend's SSD340

Our consumer storage test bench is based on Intel's Z77 Platform Controller Hub paired with an Intel Core i5-2400 CPU. Intel's 6- and 7-series chipsets are virtually identical from a storage perspective. We're standardizing on older RST 10.6.1002 drivers for the foreseeable future.

Updates to the RST driver package occasionally result in subtle performance changes. They can also lead to some truly profound variance in scores and results as well, depending on the revision. Some versions flush writes more or less frequently. Others work better in RAID situations. Builds 11.2 and newer support TRIM in RAID as well. Regardless, results obtained with one revision may or may not be comparable to results obtained with another, so sticking with one version across all testing is mandatory.

Test Hardware
Processor: Intel Core i5-2400 (Sandy Bridge), 32 nm, 3.1 GHz, LGA 1155, 6 MB Shared L3, Turbo Boost Enabled
Motherboard: Gigabyte G1.Sniper M3
Memory: G.Skill Ripjaws 8 GB (2 x 4 GB) DDR3-1866 @ DDR3-1333, 1.5 V
System Drive: Intel S3500 480 GB SATA 6 Gb/s, Firmware: 0306
Drive(s) Under Test: Transcend SSD340 256 GB SATA 6 Gb/s, Firmware: SVN235
Comparison Drives: Plextor M6e 256 GB M.2 PCIe x2, Firmware: 1.00

Plextor M6S 256 GB SATA 6 Gb/s, Firmware: 1.00

Plextor M6M 256 GB mSATA 6 Gb/s, Firmware: 1.00

Adata SP920 1024 GB SATA 6 Gb/s, Firmware: MU01

Adata SP920 512 GB SATA 6 Gb/s, Firmware: MU01

Adata SP920 256 GB SATA 6 Gb/s, Firmware: MU01

Adata SP920 128 GB SATA 6 Gb/s, Firmware: MU01

Crucial M550 1024 GB SATA 6 Gb/s, Firmware: MU01

Crucial M550 512 GB SATA 6 Gb/s, Firmware: MU01

Intel SSD 730 480 GB SATA 6 Gb/s, Firmware: L2010400

Samsung 840 EVO mSATA 120 GB, Firmware: EXT41B6Q

Samsung 840 EVO mSATA 250 GB, Firmware: EXT41B6Q

Samsung 840 EVO mSATA 500 GB, Firmware: EXT41B6Q

Samsung 840 EVO mSATA 1000 GB, Firmware: EXT41B6Q

SanDisk X210 256 GB, Firmware: X210400

SanDisk X210 512 GB, Firmware: X210400

Intel SSD 530 180 GB SATA 6 Gb/s, Firmware: DC12

Intel SSD 520 180 GB SATA 6 Gb/s, Firmware: 400i

Intel SSD 525 180 GB mSATA, Firmware: LLKi

SanDisk A110 256 GB M.2 PCIe x2, Firmware: A200100

Silicon Motion SM2246EN 128 GB SATA 6 Gb/s, Firmware: M0709A

Crucial M500 120 GB SATA 6 Gb/s, Firmware: MU02

Crucial M500 240 GB SATA 6 Gb/s, Firmware: MU02

Crucial M500 480 GB SATA 6 Gb/s, Firmware: MU02

Crucial M500 960 GB SATA 6 Gb/s, Firmware: MU02

Samsung 840 EVO 120 GB SATA 6 Gb/s, Firmware: EXT0AB0Q

Samsung 840 EVO 240 GB SATA 6 Gb/s, Firmware: EXT0AB0Q

Samsung 840 EVO 480 GB SATA 6 Gb/s, Firmware: EXT0AB0Q

Samsung 840 EVO 1 TB SATA 6 Gb/s, Firmware: EXT0AB0Q

SanDisk Ultra Plus 64 GB SATA 6 Gb/s, Firmware: X211200

SanDisk Ultra Plus 128 GB SATA 6 Gb/s, Firmware: X211200

SanDisk Ultra Plus 256 GB SATA 6 Gb/s, Firmware: X211200

Samsung 840 Pro 256 GB SATA 6 Gb/s, Firmware: DXM04B0Q

Samsung 840 Pro 128 GB SATA 6 Gb/s, Firmware: DXM04B0Q

SanDisk Extreme II 120 GB, Firmware: R1311

SanDisk Extreme II 240 GB, Firmware: R1311

SanDisk Extreme II 480 GB, Firmware: R1311

Seagate 600 SSD 240 GB SATA 6 Gb/s, Firmware: B660

Intel SSD 525 30 GB mSATA 6 Gb/s, Firmware: LLKi

Intel SSD 525 60 GB mSATA 6 Gb/s, Firmware: LLKi

Intel SSD 525 120 GB mSATA 6 Gb/s, Firmware: LLKi

Intel SSD 525 180 GB mSATA 6 Gb/s, Firmware: LLKi

Intel SSD 525 240 GB mSATA 6 Gb/s, Firmware: LLKi

Intel SSD 335 240 GB SATA 6 Gb/s, Firmware: 335s

Intel SSD 510 250 GB SATA 6 Gb/s, Firmware: PWG2

OCZ Vertex 3.20 240 GB SATA 6 Gb/s, Firmware: 2.25

OCZ Vector 256 GB SATA 6 Gb/s, Firmware: 2.0

Samsung 830 512 GB SATA 6 Gb/s, Firmware: CXMO3B1Q

Crucial m4 256 GB SATA 6 Gb/s, Firmware: 000F

Plextor M5 Pro 256 GB SATA 6 Gb/s, Firmware: 1.02

Corsair Neutron GTX 240 GB SATA 6 Gb/s, Firmware: M206
Graphics
MSI Cyclone GTX 460 1 GB
Power Supply
Seasonic X-650, 650 W 80 PLUS Gold
Chassis
Lian Li Pitstop T60
RAID
LSI 9266-8i PCIe x8, FastPath and CacheCade AFK
System Software and Drivers
Operating System: Windows 7 x64 Ultimate
DirectX: DirectX 11
Drivers
Graphics: Nvidia 314.07
RST: 10.6.1002
IMEI: 7.1.21.1124
Generic AHCI: MSAHCI.SYS
Benchmarks
ULINK DriveMaster 2012
DM2012 v980, JEDEC 218A-based TRIM Test, Protocol Test Suite
Test Specific Hardware
SAS/SATA Power Hub, DevSlp Platform
Tom's Hardware Storage Bench v1.0
Intel iPeak Storage Toolkit 5.2.1, Tom's Storage Bench 1.0 Trace Recording
Iometer 1.1.0: # Workers = 1, 4 KB Random: LBA=16 GB, varying QDs, 128 KB Sequential, 16 GB LBA Precondition, Exponential QD Scaling
PCMark 8
PCMark 8 2.0.228, Storage Consistency Test
PCMark 7
Secondary Storage Suite

3. Results: Sequential Performance

Fantastic sequential read and write performance is a trademark of modern SSDs. To measure it, we use incompressible data over a 16 GB LBA space, and then test at queue depths from one to 16. We're reporting these numbers in binary (where 1 KB equals 1024) instead of decimal numbers (where 1 KB is 1000 bytes). When necessary, we also limit the scale of the chart to enhance readability.
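This unit distinction matters when comparing our charts to spec sheets. A quick sketch of the conversion, using Transcend's quoted 520 MB/s sequential read as the example:

```python
# Drive vendors quote decimal megabytes (10^6 bytes); our charts use
# binary megabytes (2^20 bytes). Converting between the two shaves a
# few percent off the headline number.
def decimal_to_binary_mbs(mb_per_s_decimal: float) -> float:
    """Convert MB/s (1,000,000 bytes) to binary MB/s (1,048,576 bytes)."""
    return mb_per_s_decimal * 1_000_000 / (1024 * 1024)

# Transcend's quoted 520 MB/s sequential read, expressed in binary units:
print(round(decimal_to_binary_mbs(520), 1))  # ~495.9
```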

128 KB Sequential Read

We're pitting the SSD340 against a select group of 256 GB-class offerings. All four use 16 KB page sizes, and all four employ 128 Gb dies.

Right out of the gate, there aren't many meaningful observations to make. The dip we've seen from other 128 Gb L85A-equipped SSDs with a single outstanding 128 KB read command affects Transcend's drive as well. Only Adata's SP920 stands out from the group.

128 KB Sequential Write

These results are somewhat disappointing, at least compared to the pricier drives in our comparison. The Adata sells for $150, Transcend's SSD340 is $115, and Crucial's slightly quicker M500 is also priced at $115. But Transcend can't catch the comparably-priced Crucial drive in our sequential write speed benchmark, and that was already one of the slower drives we've tested recently.

Meanwhile, Plextor's four-channel M6S hangs the rest of the field out to dry with its complement of 128 Gb Toshiba A19 Toggle-mode flash. That's a $145 offering.

Here's a breakdown of the maximum observed 128 KB sequential read and write performance with Iometer:

4. Results: Random Performance

We turn to Iometer as our synthetic metric of choice for testing 4 KB random performance. Technically, a "random" access is simply one that lands more than a sector away from the previous access. On a mechanical hard disk, this can lead to significant latencies that hammer performance; spinning media handles sequential accesses much better than random ones, since the heads don't have to be physically repositioned. With SSDs, the random/sequential distinction is much less relevant. Data is placed wherever the controller wants it, so the idea that the operating system sees one piece of information next to another is mostly just an illusion.

4 KB Random Reads

Testing the performance of SSDs often emphasizes 4 KB random reads, and for good reason. Most system accesses are both small and random. Moreover, read performance is arguably more important than writes when you're talking about typical client workloads.

Plextor's M6S aces the other three drives with 32 outstanding commands. Transcend falls notably short at 68,000 IOPS. But when the queue drops to levels more typical of desktop PCs, the field tightens quite a bit. That's where you should focus your attention; most drives are so fast that commands hitting the controllers are serviced before they're able to back up.

4 KB Random Writes

Random write performance is also important. Early SSDs didn't do well in this discipline, seizing up even in light workloads. Newer SSDs wield more than 100x the performance of drives from 2007, though we also recognize that there's a point of diminishing returns in desktop environments.

The two SSDs with four-channel controllers succumb to the two drives with eight-channel Marvell processors between queue depths of two and four. Eventually, the M6S reasserts itself thanks to a capable Toggle-mode flash interface. But the SSD340 taps out at a queue depth of four.

Given the performance levels demonstrated by all four drives at a queue depth of one, though, it'll be interesting to see how they fare in our trace-based workloads.

Here's a break-down of the maximum observed 4 KB random read and write performance with Iometer. The order the drives appear in our chart is determined by maximum combined read and write performance.

5. Results: Latency And Performance Consistency

Random Performance Over Time

My saturation test consists of writing to each drive for 12 hours using 4 KB blocks with 32 outstanding commands. But first I secure erase each drive. Then, I apply the write load, illustrating average IOPS for each minute (except for the last 20 minutes, where I zoom in and show you one-second average increments).
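The binning described above (one-minute averages for most of the run, one-second resolution for the final 20 minutes) can be sketched like this, with synthetic per-second samples standing in for real Iometer output:

```python
# Bin 12 hours of per-second IOPS samples the way the saturation chart
# does: one-minute averages, except the last 20 minutes, which stay at
# one-second resolution. The flat 10,000-IOPS input is made up.
def bin_samples(iops_per_second, fine_tail_s=20 * 60):
    coarse = iops_per_second[:-fine_tail_s]
    tail = iops_per_second[-fine_tail_s:]
    per_minute = [
        sum(coarse[i:i + 60]) / 60 for i in range(0, len(coarse), 60)
    ]
    return per_minute, tail

minutes, tail = bin_samples([10_000] * (12 * 3600))
print(len(minutes), len(tail))  # 700 one-minute points, 1200 one-second points
```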

The write saturation test has always been more about characterizing drive behavior than pure performance. In that way, it's necessary to push client SSDs harder at times. Once they start sweating, we get a better idea of what might be going on under the hood.

This chart comes from The SSD 730 Series Review: Intel Is Back With Its Own Controller. The 100% write (in pink), 50% write (in green), and 30% write (in blue) workloads are tightly grouped. More important, as the workload gets progressively more read-biased, speed improves. Writes are the limiting factor, so as we turn the dial up on those, distinct performance bands emerge with the 100% write workload on the bottom.

As you're about to see, most drives don't look as neat and orderly as Intel's exceptional SSD 730. But you get the idea.

After hammering on the SSD340 for a while, we start seeing some unusual activity. The drive creates the tightly-grouped bands seen in the chart above, with little separation between minimum and maximum performance. We just don't get much of an improvement as the workload shifts from write-only (most demanding) to 50% writes to 30% writes. The SSD340 is value-oriented, so it's understandable that Transcend doesn't sacrifice usable capacity for more over-provisioning, which would have helped improve degraded performance. Instead, you get 7% spare area, and that's probably going to hurt the SSD340 through our testing.

6. Results: Tom's Hardware Storage Bench v1.0

Storage Bench v1.0 (Background Info)

Our Storage Bench incorporates all of the I/O from a trace recorded over two weeks. The process of replaying this sequence to capture performance gives us a bunch of numbers that aren't really intuitive at first glance. Most idle time gets expunged, leaving only the time that each benchmarked drive is actually busy working on host commands. So, by taking the ratio of that busy time and the amount of data exchanged during the trace, we arrive at an average data rate (in MB/s) metric we can use to compare drives.
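The metric reduces to one division: bytes moved over busy time. A sketch with illustrative numbers (the 140 GB matches the trace's write volume; the 700 s of busy time is made up):

```python
# Storage Bench's headline number: data exchanged during the trace
# divided by the time the drive was actually busy, idle time removed.
def average_data_rate_mbs(bytes_transferred: int, busy_seconds: float) -> float:
    return bytes_transferred / busy_seconds / 1_000_000

# Hypothetical drive: 140 GB of trace I/O serviced in 700 s of busy time.
print(average_data_rate_mbs(140_000_000_000, 700))  # 200.0 MB/s
```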

It's not quite a perfect system. The original trace captures the TRIM command in transit, but since the trace is played on a drive without a file system, TRIM wouldn't work even if it were sent during the trace replay (which, sadly, it isn't). Still, trace testing is a great way to capture periods of actual storage activity, a great companion to synthetic testing like Iometer.

Incompressible Data and Storage Bench v1.0

Also worth noting is the fact that our trace testing pushes incompressible data through the system's buffers to the drive getting benchmarked. So, when the trace replay plays back write activity, it's writing largely incompressible data. If we run our storage bench on a SandForce-based SSD, we can monitor the SMART attributes for a bit more insight.

Mushkin Chronos Deluxe 120 GB
SMART Attribute | RAW Value Increase
#242 Host Reads (in GB) | 84 GB
#241 Host Writes (in GB) | 142 GB
#233 Compressed NAND Writes (in GB) | 149 GB

Host reads are greatly outstripped by host writes to be sure. That's all baked into the trace. But with SandForce's inline deduplication/compression, you'd expect that the amount of information written to flash would be less than the host writes (unless the data is mostly incompressible, of course). For every 1 GB the host asked to be written, Mushkin's drive is forced to write 1.05 GB.

If our trace replay was just writing easy-to-compress zeros out of the buffer, we'd see writes to NAND as a fraction of host writes. This puts the tested drives on a more equal footing, regardless of the controller's ability to compress data on the fly.
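The 1.05 figure comes straight from the Mushkin SMART attributes above. Here's the arithmetic spelled out:

```python
# Ratio of NAND writes to host writes from the SMART table: a value
# above 1.0 means SandForce's compression engine found little to
# squeeze, confirming the trace data is largely incompressible.
host_writes_gb = 142  # attribute #241
nand_writes_gb = 149  # attribute #233

write_ratio = nand_writes_gb / host_writes_gb
print(round(write_ratio, 2))  # 1.05 GB to flash per 1 GB of host writes
```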

Average Data Rate

The Storage Bench trace generates more than 140 GB worth of writes during testing. Obviously, this tends to penalize drives smaller than 180 GB and reward those with more than 256 GB of capacity.

As I spent time with Transcend's 256 GB SSD340, it became increasingly apparent to me that this SSD is a lot like Crucial's old m4. Obviously it employs a different controller and flash. However, it seems to be right there with the m4 in a lot of my performance benchmarks.

This is one of them. The SSD340 slots in just behind that once-favorite from 2011. Eventually, Crucial replaced the m4 with its M500, and that model doesn't land but a few slots away. That Transcend can hang with this contingent is laudable, given the tools (or lack thereof) it's working with.

7. Results: Tom's Hardware Storage Bench v1.0, Continued

Service Times

Beyond the average data rate reported on the previous page, there's even more information we can collect from Tom's Hardware's Storage Bench. For instance, mean (average) service times show what responsiveness is like on an average I/O during the trace.

It would be difficult to graph the 10+ million I/Os that make up our test, so looking at the average time to service an I/O makes more sense. For a more nuanced idea of what's transpiring during the trace, we plot mean service times for reads against writes. That way, drives with better latency show up closer to the origin; lower numbers are better.

Write service time is simply the total time it takes an input or output operation to be issued by the host operating system, travel to the storage subsystem, commit to the storage device, and have the drive acknowledge the operation. Read service is similar. The operating system asks the storage device for data in a certain location, the SSD reads that information, and then it's sent to the host. Modern computers are fast and SSDs are zippy, but there's still a significant amount of latency involved in a storage transaction.
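The mean service times we plot boil down to averaging those per-I/O durations separately for reads and writes. A sketch over hypothetical (operation, microseconds) records, purely for illustration:

```python
# Mean service time per I/O class: host issue through drive
# acknowledgment, averaged separately for reads and writes.
def mean_service_times(ios):
    reads = [t for op, t in ios if op == "read"]
    writes = [t for op, t in ios if op == "write"]
    return sum(reads) / len(reads), sum(writes) / len(writes)

# Made-up trace records, durations in microseconds:
trace = [("read", 120), ("write", 310), ("read", 80), ("write", 290)]
read_ms, write_ms = mean_service_times(trace)
print(read_ms, write_ms)  # 100.0 300.0; lower (closer to the origin) is better
```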

Transcend pushes past some of the less attractive drives we've benchmarked to its own spot adjacent to some 180 GB Intel SSDs, the 256 GB m4, and Crucial's newer M500 at 240 GB.

Mean Read Service Time

The other three highlighted drives punish Transcend's SSD340. Clearly, it's just not as good at servicing burst I/O in our trace, though we'd stop short of calling its failure something to worry about. The delta isn't large, and the outcome doesn't fall out of bounds.

The same story is written by this chart, more or less. When it comes to burst write activity, the 256 GB SSD340 registers the same service time as Crucial's 240 GB M500.

8. Results: PCMark 8 Storage Consistency Testing

Futuremark's PCMark 8 expanded storage tests are awesome. With so much data and a comprehensive testing regimen, we can really drill down on drive performance.

First, the raw block device (there is no partition) is preconditioned twice by filling the entire accessible LBA space with 128 KB sequential writes. Once that is completed, the first Degradation Phase randomly writes blocks between 4 KB and 1 MB in size to random LBA spaces on the drive. Since the writes aren't 4 KB-aligned much of the time, the SSD's performance drops quickly. After all, non-4 KB-aligned accesses create overhead and generally increase write amplification significantly.

The first Degradation Phase begins with 10 minutes of those punishing random offset writes, after which each PCMark 8 activity trace is played against the SSD being tested. The successive degradation rounds are similar, except an additional five minutes are tacked onto each iteration. After eight repetitions, that write period expands to 45 minutes.

Next comes the Steady Phase. Each of five Steady Phases writes 45 minutes worth of random offset data prior to trace playback, pushing the drive even harder and making it more difficult to perform housekeeping duties. With fewer blocks available for writing, latency increases substantially.

Lastly, PCMark 8 moves into a Recovery Phase, which consists of five idle minutes before trace playback. Repeat that five times, and the test concludes.
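The phase structure just described can be expressed as a schedule: eight degradation rounds with growing write periods, five steady rounds, and five recovery rounds. This is a sketch of the protocol as we understand it, not Futuremark's code:

```python
# PCMark 8 consistency test schedule: (phase, minutes) per round.
# Degradation writes grow from 10 to 45 minutes; Steady holds at 45;
# Recovery substitutes five idle minutes before each trace playback.
def pcmark8_schedule():
    degrade = [("degrade", 10 + 5 * i) for i in range(8)]  # 10..45 min writes
    steady = [("steady", 45)] * 5
    recovery = [("recover", 5)] * 5                         # idle, not writes
    return degrade + steady + recovery

rounds = pcmark8_schedule()
print(len(rounds), rounds[0], rounds[7], rounds[-1])
# 18 rounds: ('degrade', 10) ... ('degrade', 45) ... ('recover', 5)
```

Those 18 rounds, each replaying ten activity traces, are why we focus on a single trace in the charts that follow.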

For more information on the test and how it works, check out the Plextor M6e 256 GB PCI Express SSD Review: M.2 For Your Desktop.

Storage Consistency With PCMark 8's Adobe Photoshop (Heavy) Trace

Because there are 18 individual rounds packed with 10 traces each, we need to focus. We'll choose one trace, Adobe Photoshop (Heavy), and keep tabs on it through the entire extended run.

Bandwidth

A few pages ago, I mentioned that the lack of additional over-provisioning on Transcend's SSD340 was going to hurt it. Well, this is where we see the pain first. The JMicron-powered drive just doesn't get much of a bandwidth boost in the later Recovery phases. In fact, it's only slightly faster than it was in the Degrade phases.

That's disappointing, of course. But given what we saw on page five, the numbers make sense. At least the SSD340 beats the more expensive Plextor M6M.

Latency

In this test, we're taking that same Adobe Photoshop (Heavy) trace and using average read and write latency to illustrate responsiveness. We'll sprinkle in some competing drives for comparison, too.

The Tom's Hardware Storage Bench trace test on the previous page made it clear that Transcend's SSD340 encounters higher read service times than competing solutions. We see the same behavior yet again.

Write latency falls in line with some of the other drives we're using for comparison. That L85A flash doesn't do JMicron's JMF667H any favors.

Best and Worst Score Reference

9. Results: TRIM Testing With DriveMaster 2012

We've been utilizing ULINK's DriveMaster 2012 software and hardware suite to introduce a new test for client drives. Using JEDEC's standardized 218A Master Trace, DriveMaster can turn a sequence of I/O (similar to our Tom's Hardware Storage Bench) into a TRIM test. JEDEC's trace encapsulates months of drive activity: day-to-day usage and background operating system tasks.

ULINK strips out the read commands for this benchmark, leaving us with the write, flush, and TRIM commands to work with. Execute the same workload with TRIM support and without, and you end up with a killer metric for further characterizing drive behavior.

DriveMaster is used by most SSD manufacturers to create and perform specific measurements. It's currently the only commercial product that can create the scenarios needed to validate TCG Opal 2.0 security, though it's almost unlimited in potential applications. Much of the benefit tied to a solution like DriveMaster is its ability to diagnose bugs, ensure compatibility, and issue low-level commands. In short, it's very handy for the companies actually building SSDs. And if off-the-shelf scripts don't do it for you, make your own. There's a steep learning curve, but the C-like environment and command documentation give you a fighting chance.

This product also gives us some new ways to explore performance. Testing the TRIM command is just the first example of how we'll be using ULINK's contribution to the Tom's Hardware benchmark suite.

On a 256 GB drive, each iteration writes close to 800 GB of data, so running the JEDEC TRIM test suite once generates almost 3.2 TB of mostly random writes (it's 75% random and 25% sequential). By the end of each run, over 37 million write commands are issued.

The first two tests employ DMA to access the storage, while the last two use Native Command Queuing. Since most folks don't use DMA with SSDs (aside from some legacy or industrial applications) we don't concern ourselves with those. It can take up to 96 hours to run one drive through all four runs, though faster SSDs can roughly cut the time in half. Because so much information is being written to an already-full SSD (the drive is filled before each test), devices that perform better under heavy load fare best. Without TRIM, on-the-fly garbage collection becomes a big contributor to high IOPS. With TRIM, 13% of space gets TRIM'ed, leaving more room for the controller to use for maintenance operations.
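The write volume quoted above follows from the per-iteration figure and the four-run structure. A quick sanity check of the arithmetic:

```python
# JEDEC TRIM suite write volume: ~800 GB per iteration across four
# runs (DMA and NCQ, each with and without TRIM), 75% of it random.
gb_per_iteration = 800
runs = 4
random_fraction = 0.75

total_tb = gb_per_iteration * runs / 1000
random_gb_per_run = random_fraction * gb_per_iteration
print(total_tb, random_gb_per_run)  # 3.2 TB total; 600 GB random per run
```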

TRIM Testing

Average

To avoid drowning you in data, I chart the average performance for each benchmarked SSD with and without TRIM support enabled. Displayed in IOPS, this helps us make comparisons more quickly.

The SSD340 does benefit from TRIM. And although overall performance is not spectacular, the difference between using and not using the command is palpable. In fact, again, Transcend's submission is a dead ringer for the Crucial M500 that appears right below it. Unsurprisingly, both drives are equipped with 256 GB of L85A flash from IMFT.

Instantaneous

But I also want results for the instantaneous average of my TRIM test. How does the drive fare servicing writes with and without TRIM during each 100,000-command window? The purple line represents IOPS across the entire trace, without TRIM. The teal line is with TRIM. Each data point represents write IOPS per 100,000-command test reporting period.

This chart tells the whole story. The run with TRIM enabled demonstrates higher performance as the test progresses, pulling several hundred percent over the run without TRIM at times.

I'm curious about the relationship between Transcend's SSD340 and Crucial's 240 GB M500. Previously, we saw that the averages at the end of the run with TRIM enabled were basically identical. But when we overlay the two graphs, the story appears different. That average belies periods where the M500 just can't hang with the SSD340, despite using the same flash and sporting an eight-channel controller.

Throughput

We collect and report the total throughput of each drive in the NCQ with TRIM test. It's one number that helps capture overall performance in the test.

The M500 and SSD340 land next to each other in our average throughput benchmark. It takes a chart like the one we just looked at to tell the real differences.

10. Power Consumption: Now With DevSlp Testing

Sometimes I find it unfortunate that most of our storage analysis is in the context of desktop PCs, where the power consumption of an SSD doesn't really matter. The topic is far more meaningful in the enterprise and mobile spaces though, so I find it critically important to benchmark power thoroughly and as precisely as possible.

On a laptop, every milliwatt matters. So much so, in fact, that Intel's Haswell-based CPUs and corresponding chipsets on the mobile side support a new mode for reducing SSD power consumption. DevSlp, or device sleep, is a sideband signal sent to the storage device to indicate that it should drop into a super-low power state. Essentially, everything that can be off is.

This is a great way to get a little extra battery life out of an Ultrabook (particularly in light of Intel's targets for runtime and standby connectivity). But you do pay a price: it takes longer to enter and exit the DevSlp state. Granted, the delay is less than powering the SSD down and back up as needed, a process that can take seconds. Worse, a drive may use substantial amounts of power as it's readied again. DevSlp should need only 50 ms in contrast, along with a few milliwatts.

To measure power consumption in a DevSlp state, we need two things. First is an Ultrabook with a Haswell-based CPU on a compatible platform. I'm using Lenovo's ThinkPad T440s. It's reasonably versatile, including a 2.5" SATA bay and two M.2 slots (for M.2 2242s) wired to the PCH's SATA ports. I typically don't need more than one slot, but it's nice to have the option at least.

The second item is a test platform able to initiate the DevSlp command, measure the current draw, and record the results. To do that, ULINK Technologies sent over some hardware designed expressly for this purpose. I've been using the company's DriveMaster software and SATA/SAS power hubs for a year now, and they confer a spectacular amount of control over what drives under test do. In this case, DevSlp testing is made possible in a way that's informative and easy to manage.

Using a test script to record amperage and issue the appropriate commands, this is what we end up with:

This is an example from my Plextor M6S review. The test script begins at active idle, then issues write commands (the first big increase in power). After 20,000 I/Os, the drive gets issued the DevSlp signal (denoted by the vertical purple bars). In this DevSlp zone, it takes a few tens of milliseconds before the drive enters DevSlp as commanded, but it stays in that state using just 2.5 mW until DevSlp exits (noted by the second purple bar). More I/O is then issued, and then it's back to idle before the script ends. The results are recorded in milliamps, and I convert to watts.
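The milliamp-to-watt conversion is straightforward. The sketch below assumes the drive draws from the 5 V rail (standard for 2.5" SATA devices, and our assumption about the rig rather than anything ULINK's tooling dictates):

```python
# Convert the logged DevSlp current draw to power, assuming a 5 V
# supply rail (typical for 2.5" SATA SSDs).
def ma_to_mw(milliamps: float, rail_volts: float = 5.0) -> float:
    return milliamps * rail_volts

print(ma_to_mw(0.5))   # 2.5 mW: the M6S in DevSlp
print(ma_to_mw(10.6))  # 53.0 mW: the SSD340 in DevSlp
```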

DevSlp State Testing

This is what we end up with when we apply the same methodology to Transcend's SSD340. It draws considerably more power than the M6S when it drops into DevSlp. That's 2.5 mW from Plextor and 53 mW from Transcend. Under 5 mW is what we're looking for.

Elsewhere, we observe 3.03 W maximum consumption in the script, with active idle settling at 0.444 W (or about one-seventh of the maximum).

We can also sort power use in slumber and partial slumber states. These are mostly important for mobile applications. On the desktop, you'd want to turn them off since it takes time to transition from power-saving to active states. The deeper the sleep, the more delay is incurred getting back.

Here are some results for a couple of other SSDs:

Sadly, the SSD340 really does use more power in DevSlp (like ten times as much). After a lot of head-scratching and retesting, I can confirm these numbers are accurate on my Lenovo T440s test rig.

I'm not particularly worried though. This is only important news if you're upgrading an Ultrabook with a Haswell-class processor in it. And in that subcategory, there are few platforms with 2.5" SATA bays. Lastly, 53 mW still isn't much.

11. SSD340: An Attractive Price, But Not Differentiated

In its high school yearbook, the SSD340 was probably voted "Most Likely to Blend In". Without much to set it apart from a pile of other drives in our lab, it'd be easy to overlook. There's just so much good storage out there.

I have to believe that most enthusiasts start their SSD search on the value end of the spectrum and most choose to stay there when it comes time to buy. In that way, the SSD340 lives its life in a target-rich environment. But it probably has a harder time closing the deal.

Still, armed with JMicron's JMF667H processor and economical L85A flash, the SSD340 does a lot with relatively modest firepower. As such, it does deserve a mention from the crowd of mainstream 256 GB-class SSDs swimming around the $120 price point. Crucial's M500 is the SSD340's closest competitor, which sucks for Transcend because the M500 has one feature the SSD340 lacks: TCG Opal 2.0 and Microsoft eDrive support for hardware-based encryption with Bitlocker. It's a small addition. But given that most performance metrics put the drives on equal footing, it tips the scales in Crucial's favor. Samsung's 250 GB 840 EVO is definitively faster. However, it's also about 15% more expensive. Instead, the SSD340 is going to appeal to folks out shopping for 128 GB drives, who decide to treat themselves to 256 GB for a few dollars more. The same applies to the M500, but that model might not be around much longer.

You do get one cool unique feature: a piece of software called SSD Scope, which is the Taiwanese firm's toolbox. With it, system images can be cloned, TRIM can be sent, firmware updated, and secure erasures performed. I like a good SSD management utility bundled with my SSDs, and Transcend's is surprisingly excellent.

There is one competitor I haven't mentioned yet, which could be a problem for Transcend, and that's PNY with its Optima 240 GB using the SM2246EN processor. We loved that four-channel controller last year when we looked at Silicon Motion's reference platform. Now that products based on that chip and JMicron's JMF667H are available, the mainstream segment is loaded with compelling hardware.

With all of the talk about SATA Express and PCIe, it's easy for enthusiasts to adopt a "wait a little while longer" approach to storage. For most, however, that's not a great idea. There's a good chance you won't notice the difference between a fast SATA 6Gb/s SSD and something plugged into M.2. And this is from a guy who tests SSDs all day, every day.

And the SSD340 certainly is quick. But I can't say I envy Transcend right now. Selling drives at the value end of the market must be difficult. But thanks to cheap flash and solid third-party controllers like the JMF667H, those vendors without their own foundation of IP still stand a chance in the aftermarket.