SSD vendors selling SandForce-based drives are incredibly enthusiastic about differentiating their offerings. Three aspects of solid-state storage affect performance: the controller, the NAND, and the firmware. We all know these drives are built around the same base firmware. We've also seen that the flash does have some effect on performance, though two drives with the same configuration are largely comparable. So, vendors try to sell us on custom firmware with home-brewed optimizations supposedly not offered by anyone else.
Can we create a list of what those tweaks entail? Unfortunately not. No SSD vendor has ever gotten specific with us about what its "golden" or "purely in-house" firmware includes that other vendors' firmware lacks.
What we do know is that the basic core of SandForce's compression technology cannot be altered. We tested for this in Intel SSD 520 Review: Taking Back The High-End With SandForce by writing highly compressible data and measuring endurance. What we found were nearly identical write amplification values. When write amplification is that similar, we know that two drives (in this case, the oldest and newest SandForce-based SSDs) are benefiting from the same level of compression.
| 128 KB Compressible Sequential Write, 1 Hour, QD=1 | Intel SSD 520 60 GB | OCZ Vertex 3 60 GB |
|---|---|---|
| Host Writes | 1258 GB | 1301 GB |
| NAND Writes | 176 GB | 182 GB |
In the time between publishing our SSD 520 review and now, we've seen similar results from all of the 60 GB SF-22xx-based SSDs in our lab, suggesting that every vendor using SandForce's technology enjoys the same degree of compression, which is the single biggest influence on the performance of these drives.
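The comparison in the table above can be reduced to a single figure of merit. A short sketch (the helper name `write_amplification` is ours, for illustration) computes write amplification as NAND writes divided by host writes, using the two drives' one-hour totals:

```python
# Write amplification estimated from the totals in the table above:
# physical NAND writes divided by host writes. A value below 1.0 means
# the controller compressed incoming data before committing it to flash.
def write_amplification(host_writes_gb, nand_writes_gb):
    """Ratio of NAND writes to host writes over the same interval."""
    return nand_writes_gb / host_writes_gb

drives = {
    "Intel SSD 520 60 GB": (1258, 176),  # (host GB, NAND GB)
    "OCZ Vertex 3 60 GB": (1301, 182),
}

for name, (host, nand) in drives.items():
    wa = write_amplification(host, nand)
    print(f"{name}: write amplification ~ {wa:.3f}")
```

Both drives land at roughly 0.14, which is why we can say they benefit from the same level of compression despite being the oldest and newest SandForce designs in the lab.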
As these drives are basically boot drives, I would have liked a test where you measure the total time taken to install a fresh Win7 SP1 on it, install updates, and install a few programs like
Adobe PDF reader
a web browser, a photo manipulation program
a music/video player.
Install a game from an ISO.
And all these apps should be installed from the SSD itself (meaning their setups should be on the SSD). Then you should test the startup and shutdown times.
All these synthetic benchies don't make much sense, IMHO.
I have found that when working with SSDs, single-core CPU performance becomes a big bottleneck in some tasks.
A lot of operations use only a single core, and the SSD can't use its true potential. That is, the CPU can't process data as fast as the SSD can provide it.
This is just the reverse of what happens with mechanical HDDs.
You're not going to see a major difference.
mayankleoboy1 said: "I have found that when working with SSDs, single-core CPU performance becomes a big bottleneck in some tasks. A lot of operations use only a single core, and the SSD can't use its true potential. That is, the CPU can't process data as fast as the SSD can provide it. This is just the reverse of what happens with mechanical HDDs."
Well, it is pointless, though, since everything you are doing is so fast that it doesn't matter anymore. I do see your point: I can be loading a program with my SSD nowhere near max speed while my CPU frequency is maxed out. The only way to get more speed is to overclock as much as you can.
acku said: "http://www.tomshardware.com/review 24-14.html You're not going to see a major difference."
That is the point of buying a cheaper SSD based on cheaper NAND.
Considering the conclusion that performance is defined by flash, I find it interesting that the one 60 GB SF-2281 drive with Toggle-mode NAND is not in the roundup (in North America, anyway). The Mushkin Chronos Deluxe 60 is substantially cheaper now at $99. Its performance characteristics are much more pronounced than the 25 nm ONFi sync/async models'. They're often out of stock at Newegg, and for good reason.
Is there a benchmark to compare virtual memory performance? My current workstation has 24 GB of memory, which means Windows eats up 36 GB of my boot drive for virtual memory. (Yes, I know I can change/disable it, but some programs act wonky when it's screwed with.) A dedicated virtual memory drive would free up space on my primary SSD, as well as keep the writes down.
I'd also like to see small drives benchmarked as swap drives in video editing machines. Currently I'm using a RAID 0 array of 1 TB Samsung drives that keeps up well enough, but I'd be interested to see if there are tangible productivity differences.
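The 24 GB to 36 GB figure in the comment above follows from the classic 1.5x-RAM rule of thumb for the Windows pagefile (an assumption here; modern Windows sizes the pagefile dynamically, so treat this as a back-of-the-envelope check only):

```python
# Back-of-the-envelope check of the commenter's numbers, assuming the
# classic 1.5x-RAM rule of thumb for Windows pagefile sizing. This rule
# is NOT a guaranteed default on modern Windows; it just matches the
# 24 GB -> 36 GB figures quoted above.
ram_gb = 24
classic_pagefile_gb = ram_gb * 1.5
print(f"{ram_gb} GB RAM -> ~{classic_pagefile_gb:.0f} GB pagefile")
```

On a 60 GB drive, that pagefile alone would consume more than half the usable capacity, which is why offloading it to a dedicated small SSD is an appealing idea.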
FWIW, Intel uses its own premium-binned 25 nm sync NAND; that's why the 4K reads were so good.
With a final page heading of "Performance Is Defined By Flash," I would have liked to see that difference examined more closely. For example, the Mushkin Chronos Deluxe uses premium 3x nm Toshiba Toggle-mode flash (as do the Patriot Wildfire, Vertex 3 Max IOPS, and OWC Mercury Extreme Pro), and I would love to see how just changing the flash in an SSD from the same manufacturer and line affects performance (i.e., Chronos standard versus Deluxe, Vertex 3 versus Vertex 3 Max IOPS). With that info, a user can decide whether it makes sense to invest in, say, the premium Toshiba stuff compared to the same SSD without the premium flash. That is what I expected to see when I read the referenced page heading.
I'm wondering why Tom's own trace-based benchmark didn't make it into this roundup. Does it take much longer to run than the other tests? While comparing synthetics is important to determine why a certain drive behaves a certain way, trace-based benchmarks (PCMark 7 could be considered trace-based) are what drive the final purchasing decision. In this case, PCMark was the one with the most clear-cut differences, ones that would likely be mirrored in a trace-based benchmark.
For a future SSD review/roundup, could you take, for example, 10 real-life traces from 10 different editors' machines (the more variation in workload, the better), and then compare the % change in execution time vs. a reference drive?