We use a mixture of real-world and synthetic benchmarks to quantify storage performance in our reviews. But how do you know our methodology is sound? We decided to test several workstation-oriented apps in order to generate real-world performance data.
Although storage benchmarks often show that many SSDs offer raw throughput many times better than hard drives, real-world testing isn't always as decisive. Many applications simply cannot take advantage of an SSD's benefits to the same degree as a synthetic metric designed to extract every bit of performance from a storage device.
In general, SSDs post their best results when they're presented with high queue depths. If you check out our Intel SSD 520 review for a better idea of how we're testing solid-state storage in real-world environments, though, you'll notice that desktop-class apps simply do not generate the high queue depths needed to meaningfully differentiate storage technologies. So, the question becomes: do the tasks you run from an SSD require all, some, or none of the drive's strengths? In some cases, the answer is surprising. Take a virus scan as an example. You'd think that piling up files to check would increase queue depth. But that's simply not the case, according to our office productivity investigation.
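For readers who want to see what a synthetic queue-depth sweep looks like on their own hardware, here's a minimal sketch. It is not our trace-based methodology; it assumes fio is installed (on Linux, using the libaio engine), and the test file name, size, and runtime are placeholders.

```python
import json
import subprocess

# Minimal queue-depth sweep: 4 KB random reads at QD 1, 4, and 32.
# Assumes fio is installed and testfile.bin lives on the drive under test.
for depth in (1, 4, 32):
    result = subprocess.run(
        ["fio", "--name=qd-sweep", "--filename=testfile.bin", "--size=1G",
         "--rw=randread", "--bs=4k", "--direct=1", "--ioengine=libaio",
         f"--iodepth={depth}", "--runtime=30", "--time_based",
         "--output-format=json"],
        capture_output=True, text=True, check=True)
    iops = json.loads(result.stdout)["jobs"][0]["read"]["iops"]
    print(f"QD{depth:>2}: {iops:,.0f} IOPS")
```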
Over the past several months, as we've tweaked and optimized our benchmarking suite, we've also broken down the storage performance of many different applications and turned our findings into a handful of real-world analysis stories unlike anything else available. They include Office Productivity, Entertainment and Content Creation, and two different explorations of gaming behavior.
Today, we round out our evaluation of real-world SSD performance by looking at workstation-oriented tasks. Specifically, we're looking at 3D modeling, CAD, programming, and operating system virtualization.
| Test Hardware | |
|---|---|
| Processor | Intel Core i5-2500K (Sandy Bridge), 32 nm, 3.3 GHz, LGA 1155, 6 MB Shared L3, Turbo Boost Enabled |
| Motherboard | ASRock Z68 Extreme4, BIOS v1.4 |
| Memory | Kingston HyperX 8 GB (2 x 4 GB) DDR3-1333 @ DDR3-1333, 1.5 V |
| System Drive | OCZ Vertex 3 240 GB SATA 6Gb/s, Firmware: 2.15 |
| Graphics | Palit GeForce GTX 460 1 GB |
| Capture Card | Blackmagic Intensity Pro, Hauppauge Colossus |
| Power Supply | Seasonic 760 W, 80 PLUS |
| System Software and Drivers | |
| Operating System | Windows 7 Ultimate 64-bit |
| DirectX | DirectX 11 |
| Driver | Graphics: 285.62, RST: 10.6.0.1002, Virtu: 1.1.101 |
| Benchmarks | |
|---|---|
| Intel Trace-based Tool | v5.2 |
| Software | |
| LightWave | v10.1 |
| AutoCAD | v2012 |
| Visual Studio | v2010 |
| MATLAB | R2011b |
| VMware | 7.1.3 |
- Exploring SSD Performance: Workstation Applications
- LightWave 3D (Modeling): Editing Project
- LightWave 3D (Modeling): Rendering
- AutoCAD: Editing Project
- Visual Studio (Programming): Opening Project
- Visual Studio (Programming): Compiling Code
- MATLAB: Loading Data
- MATLAB: Analyzing Data
- VMware: Operating System Installation
- VMware: Booting
- VMware: Browsing
- Analyzing Workstation Storage Performance

I currently run my OS and production software from an SSD, have 24 GB of system memory, the page file set to write to the SSD, and user files on striped 1 TB drives. I'd be interested to see the benefits of installing a separate small SSD only to handle a large page file, and different configurations with swap drives. Basically, there are a lot of different drive configuration options with all of the hardware available at the moment, and it would be nice to know the most streamlined/cost-effective setup.
We'll look into that!
Cheers,
Andrew Ku
TomsHardware.com
I would really like to see more multitasking as well, including application startups and shutdowns. Throughout the day I am constantly opening and closing applications like Remote Desktop, SQL Management Studio, 1-4 instances at a time of Visual Studio 2010, Word, Excel, Outlook, Visio, a Windows XP virtual machine, etc.
I disagree. Try the test again with a distributed build system.
I work on a project with around 3M lines of code, which is actually smaller than Firefox. To get compile times down, we use a distributed build system across about a dozen computers (all the developers and testers pool their resources for builds). Even though we all use 10k RPM drives in RAID 0 and put our OS on a separate drive, disk I/O is still the limiting factor in build speed.
I'll agree that when building on a single computer, an SSD has little benefit. But I'd imagine that most groups working on very large projects will probably try to leverage the power of more than one computer to save developer resources. Time spent building is time lost, so hour-long builds are very, very expensive.
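For reference, a minimal sketch of what kicking off a distcc-style distributed build looks like, assuming distcc is installed on every host and the project uses plain Makefiles; the host names and job count below are placeholders.

```python
import os
import subprocess
import time

# Placeholder build-farm hosts; distcc must be installed and reachable on each.
os.environ["DISTCC_HOSTS"] = "localhost build01 build02 build03"

start = time.time()
# Hand the compilers off to distcc and oversubscribe -j relative to the
# local core count so the remote machines stay busy.
subprocess.run(["make", "-j24", "CC=distcc gcc", "CXX=distcc g++"], check=True)
print(f"Distributed build finished in {time.time() - start:.1f} s")
```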
On top of the SSD cache, I would like to know where these performance gains plateau (e.g., whether a 16 GB SSD cache performs the same as a 32 GB or 64+ GB one).
I'd like to see these put up against some SAS drives in RAID 0, RAID 1, and RAID 10 at 10K and 15K RPM. I'm currently running a dual-socket Xeon board with 48 GB of RAM on a 120 GB Vertex 2 SSD and four 300 GB 10K SAS disks in RAID 10.
I think I'd LOVE to see something along the lines of the Momentus XT in a commercial 10K/15K RPM SAS disk with a 32 GB SSD cache, which could be the sweet spot for the extremely large CAD/3D modeling files out there.
Add VMware benchmarks to normal desktop CPU reviews!
Andrew - the reference to the 'Xeon E5-2600 Workstation' completely screwed me up; the benchmarks made no sense until I looked at the 'Test Hardware' and noticed an i5-2500K??!! Please swap out the image; it's misleading at best.
Try doing this on a RAM drive, or better yet on a DP E5-2600 with 64 GB~128 GB; 128 GB might be a hard one. I've been 'trying' to experiment with SQL on a RAM drive (my X79 is out for an RMA visit). However, the few times I've tried it with smaller databases, it's been remarkable. Like the jump you feel going from HDD to SSDs, it's the same and then some going to a RAM drive. I've also been playing with RAM cache on SSDs, but that's stuck until the RMA is done.
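For anyone curious, a crude sketch of how you might put numbers on that RAM drive feeling. The paths are placeholders (one file on the SSD, one on an already-mounted RAM drive), it only times raw sequential reads rather than SQL, and the OS file cache will inflate repeat runs, so use fresh files.

```python
import os
import time

# Placeholder paths: one large file on the SSD, one on the RAM drive.
# Both are assumed to already exist and be big enough (say 1 GB) to time.
PATHS = {"SSD": r"D:\bench\testfile.bin", "RAM drive": r"R:\testfile.bin"}

for label, path in PATHS.items():
    start = time.time()
    with open(path, "rb") as f:
        while f.read(8 * 1024 * 1024):  # read in 8 MB chunks until EOF
            pass
    elapsed = time.time() - start
    size_mb = os.path.getsize(path) / 2**20
    print(f"{label:>9}: {size_mb / elapsed:.0f} MB/s")
```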
And if the code needs to be fixed and rebuilt, well, even more time is lost.
- The choice of a Core i5 as the host CPU is a bad one. Hyper-Threading in a Core i7 makes a lot of sense, since it enables higher parallelism during compilation - eight files compile in parallel instead of four. Incidentally, that would increase the I/O load as well.
- There's nothing surprising in the mixture of random and sequential transfers. While source code files are small, the produced binary object files are not, not to mention the final libraries and executables. For a single source file you'd typically get 50 to 500 KB of object code. Precompiled headers run to 30-40 MB as well. Some of our libraries' builds exceed 4 GB in size. True, these include both debug and release builds, but they don't include the intermediate object files - only the final libraries. The main reason for these large sizes is the debug symbols.
Small SSDs don't make much sense for development. On a complex project you can work with a 120 GB drive, but you may end up frequently deleting old builds (of dependency libraries) from your cache due to running out of disk space. I have a 240 GB Vertex 2 SSD in my laptop (it's a secondary machine) dedicated to development (i.e., it's not even a boot drive), and that works OK for now, meaning I still haven't had to clean obsolete builds off of it...
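A quick, hedged way to check where a build tree's space actually goes (object files, libraries, debug symbols) is to total it up by extension; the build directory path below is a placeholder.

```python
import os
from collections import Counter

# Rough look at where build output space goes, grouped by file extension.
# BUILD_DIR is a placeholder; point it at your own output tree.
BUILD_DIR = r"C:\dev\myproject\build"

totals = Counter()
for root, _, files in os.walk(BUILD_DIR):
    for name in files:
        ext = os.path.splitext(name)[1].lower() or "(no ext)"
        totals[ext] += os.path.getsize(os.path.join(root, name))

# Print the ten extensions that consume the most space.
for ext, size in totals.most_common(10):
    print(f"{ext:>10}: {size / 2**20:9.1f} MB")
```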
I think agnickolov is onto something with his comment, though.
There are things called deadlines and having a life outside of work.
Do you think they could have rendered Transformers (or any other CGI-heavy movie) with a Pentium 4? Probably not.
One question: did you compile Firefox in Release or Debug? Release builds tend to load the processor more (optimizations take a lot of time), while Debug builds don't load the processor as much but hit the disks harder. In a programmer's day-to-day work, Debug builds are far more common, BTW.
And of course you should have used a system with a Core i7-3930K for this test, or better yet a pair of Xeons. A Core i5-2500K? That is not a workstation.
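For anyone who wants to measure the Debug-versus-Release difference on their own project, here's a minimal sketch; it assumes msbuild is on the PATH (e.g., run from a Visual Studio command prompt), and the solution path is a placeholder.

```python
import subprocess
import time

# SOLUTION is a placeholder path; point it at your own .sln file.
SOLUTION = r"C:\dev\myproject\myproject.sln"

# Rebuild both configurations from scratch and compare wall-clock times.
for config in ("Debug", "Release"):
    start = time.time()
    subprocess.run(["msbuild", SOLUTION, "/t:Rebuild",
                    f"/p:Configuration={config}"], check=True)
    print(f"{config}: {time.time() - start:.0f} s")
```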
Compiling Java creates a .class file for every .java file, and even without counting the construction of JAR and WAR files, it is very disk-intensive.
Some of the most demanding workstation tasks are for FEA -- Ansys, Abaqus, Cosmos, Creo Simulate (Pro/Mechanica). A single model often takes hours or days to solve, especially if RAM is not sufficient (common) and the solver turns to swap space on a drive. An SSD can cut solution times by 50% or even 80+% -- see this article:
http://www.ansys.com/staticassets/ANSYS/staticassets/resourcelibrary/article/AA-V4-I1-Boosting-Memory-Capacity-with-SSDs.pdf
These programs write reams of incompressible data -- my two-week-old SSD has had 7,000 GB written to it (yes, hammered). At this rate it will last 1-2 years, which is fine. But as a SandForce DuraClass drive, it has throttled to ~80 MB/s writes, which slows the solution. Whether at 80 or 500 MB/s, the SSD will have the exact same number of gigabytes written to it. So I don't see how the throttle helps its life -- except at the expense of human wait times -- a poor bargain.
So for workstations, it would be really helpful to find an inexpensive SSD that doesn't throttle, or a way to defeat it on a SandForce drive.
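To put rough numbers on that endurance concern, here's a back-of-the-envelope sketch. It uses the 7,000 GB in two weeks from the comment above; the 256 TB total-bytes-written rating is an assumption for illustration, not a published SandForce spec.

```python
# Back-of-the-envelope endurance estimate. The write volume comes from the
# comment above; the 256 TB total-bytes-written rating is assumed, since
# actual ratings vary by drive and aren't given here.
written_gb = 7_000          # written in the first two weeks
days_elapsed = 14
assumed_tbw_gb = 256_000    # assumed endurance rating, in GB

gb_per_day = written_gb / days_elapsed
days_remaining = (assumed_tbw_gb - written_gb) / gb_per_day
print(f"~{gb_per_day:.0f} GB/day -> roughly {days_remaining / 365:.1f} more "
      f"years to the assumed {assumed_tbw_gb / 1000:.0f} TB rating")
```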