Workstation Storage: Modeling, CAD, Programming, And Virtualization

We use a mixture of real-world and synthetic benchmarks to quantify storage performance in our reviews. But how do you know our methodology is sound? To find out, we tested several workstation-oriented apps to generate real-world performance data.

Although synthetic benchmarks often show SSDs delivering raw throughput many times better than hard drives, real-world testing isn't always as decisive. Many applications simply cannot exploit an SSD's strengths to the same degree as a synthetic metric designed to extract every bit of performance from a storage device.

In general, SSDs post their best results at high queue depths. If you check out our Intel SSD 520 review for a better idea of how we test solid-state storage in real-world environments, though, you'll notice that desktop-class apps simply do not generate the high queue depths needed to differentiate storage technologies most clearly. So, the question becomes: do the tasks you run from an SSD exploit all, some, or none of the drive's strengths? In some cases, the answer is surprising. Take a virus scan as an example. You'd think that piling up files to check would increase queue depth. But that's simply not the case, according to our office productivity investigation.
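To make queue depth concrete, here is a minimal, Unix-oriented sketch (not our trace-based methodology; the scratch file name is an assumption) that issues the same 4 KB random reads at a queue depth of 1 and of 32, then compares throughput:

    import concurrent.futures
    import os
    import random
    import time

    PATH = "testfile.bin"  # assumed pre-existing scratch file, ideally several GB
    BLOCK = 4096           # 4 KB transfers, typical of desktop-class I/O
    READS = 20000

    def worker(fd, size, count):
        # each worker keeps exactly one read outstanding at any moment
        for _ in range(count):
            offset = random.randrange(0, size - BLOCK) & ~(BLOCK - 1)
            os.pread(fd, BLOCK, offset)

    def run(queue_depth):
        fd = os.open(PATH, os.O_RDONLY)
        size = os.fstat(fd).st_size
        start = time.time()
        # N concurrent workers approximate a queue depth of N
        with concurrent.futures.ThreadPoolExecutor(queue_depth) as pool:
            for _ in range(queue_depth):
                pool.submit(worker, fd, size, READS // queue_depth)
        elapsed = time.time() - start
        os.close(fd)
        print(f"QD{queue_depth}: {READS * BLOCK / elapsed / 1e6:.1f} MB/s")

    run(1)   # desktop-like: one request in flight
    run(32)  # synthetic-benchmark-like: many requests in flight

The operating system's page cache will inflate the numbers on repeated runs (purpose-built tools use direct I/O instead), but the shape of the result holds: an SSD's advantage grows with the number of requests in flight, and desktop apps rarely put many in flight.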

Over the past several months, as we've tweaked and optimized our benchmarking suite, we've also broken down the storage performance of many different applications, turning the results into a handful of real-world analysis stories unlike anything else available. They include Office Productivity, Entertainment and Content Creation, and two different explorations of gaming behavior.

Today, we round out our evaluation of real-world SSD performance by looking at workstation-oriented tasks. Specifically, we're looking at 3D modeling, CAD, programming, and operating system virtualization.

Xeon E5-2600 Workstation

Test Hardware
Processor: Intel Core i5-2500K (Sandy Bridge), 32 nm, 3.3 GHz, LGA 1155, 6 MB shared L3, Turbo Boost enabled
Motherboard: ASRock Z68 Extreme4, BIOS v1.4
Memory: Kingston HyperX 8 GB (2 x 4 GB) DDR3-1333 @ DDR3-1333, 1.5 V
System Drive: OCZ Vertex 3 240 GB, SATA 6Gb/s, firmware 2.15
Graphics: Palit GeForce GTX 460 1 GB
Capture Cards: Blackmagic Intensity Pro; Hauppauge Colossus
Power Supply: Seasonic 760 W, 80 PLUS

System Software and Drivers
Operating System: Windows 7 Ultimate 64-bit
DirectX: DirectX 11
Drivers: Graphics 285.62; RST 10.6.0.1002; Virtu 1.1.101

Benchmarks
Intel Trace-based Tool: v5.2

Software
LightWave: v10.1
AutoCAD: v2012
Visual Studio: v2010
MATLAB: R2011b
VMware: v7.1.3
Comments (19)
This thread is closed for comments.
  • clownbaby, March 13, 2012 4:40 AM
    Thanks for the workstation analysis. I'd really like to see some tests comparing performance while running multiple programs with lots of disk caching, i.e. having many complementary programs open (Photoshop, Illustrator, After Effects, and Premiere Pro) with many gigs' worth of projects opened and cached, plus multiple background renders. Something like this would be a worst-case scenario for me, and finding the right balance between SSDs, RAID arrays, and properly configured memory would be interesting.

    I currently run my OS and production software from an SSD, have 24 GB of system memory, a page file set to write to the SSD, and user files on striped 1 TB drives. I'd be interested to see the benefits of installing a separate small SSD just to handle a large page file, and different configurations with swap drives. Basically, there are a lot of drive configuration options with all of the hardware available at the moment, and it would be nice to know the most streamlined/cost-effective setup.
  • acku, March 13, 2012 5:17 AM
    Quote (clownbaby):
    Thanks for the workstation analysis. [...]


    We'll look into that!

    Cheers,
    Andrew Ku
    TomsHardware.com
  • cknobman, March 13, 2012 11:54 AM
    As an applications developer working on a brand-new Dell M4600 mobile workstation with a slow 250 GB mechanical hard drive, I find tests like this very interesting; they make me wonder how much improvement I would see if my machine were equipped with an SSD.

    I would really like to see more multitasking as well, including application startups and shutdowns. Throughout the day I am constantly opening and closing applications like Remote Desktop, SQL Server Management Studio, one to four instances of Visual Studio 2010 at a time, Word, Excel, Outlook, Visio, a Windows XP virtual machine, and so on.

  • teddymines, March 13, 2012 12:08 PM
    Is having to wait for a task really that much of a deal-breaker? I tend to use that time to hit the restroom, get a coffee, discuss with co-workers, or work on another task. Besides, if computers get to be too fast, then we'll be expected to get more done. ;^)
  • willard, March 13, 2012 12:22 PM
    Quote:
    Consequently, compiling code isn't a usage scenario where SSDs provide a clear lead.

    I disagree. Try the test again with a distributed build system.

    I work on a project with around 3M lines of code, which is actually smaller than Firefox. To get compile times down, we use a distributed build system across about a dozen computers (all the developers and testers pool their resources for builds). Even though we all use 10k RPM drives in RAID 0 and put our OS on a separate drive, disk I/O is still the limiting factor in build speed.

    I'll agree that building on a single computer, an SSD has little benefit. But I'd imagine that most groups working on very large projects try to leverage the power of more than one computer to save developer time. Time spent building is time lost, so hour-long builds are very, very expensive.
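    A minimal sketch of the kind of distributed setup willard describes, assuming distcc (he doesn't name a tool):

        # list the machines that volunteer compile cycles
        export DISTCC_HOSTS="localhost buildbox1 buildbox2 buildbox3"
        # route compiler invocations through distcc, with a job count
        # well past one machine's core count
        make -j24 CC="distcc gcc" CXX="distcc g++"

    Even then, each job still reads sources and writes object files on the submitting machine, which is consistent with disk I/O remaining the bottleneck.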
  • jgutz2006, March 13, 2012 12:29 PM
    Quote (acku):
    We'll look into that!

    On top of the SSD cache, I would like to know where these performance gains plateau (e.g., whether a 16 GB SSD cache performs the same as 32 GB, 64 GB, and up).

    I'd like to see these put up against some SAS drives in RAID 0, RAID 1, and RAID 10 at 10,000 and 15,000 RPM. I'm currently running a dual-socket Xeon board with 48 GB of RAM, a 120 GB Vertex 2 SSD, and four 300 GB 10,000 RPM SAS disks in RAID 10.

    I'd love to see something along the lines of the Momentus XT as a commercial 10,000/15,000 RPM SAS disk with 32 GB of flash, which could be the sweet spot for extremely large CAD/3D modeling files.
  • Zatanus, March 13, 2012 1:26 PM
    PLEASE!

    Add VMware benchmarks to regular desktop CPU reviews!
  • Anonymous, March 13, 2012 3:00 PM
    It's nice that you test "workstation software"; however, you don't test any compositing software such as Eyeon Fusion or Adobe After Effects. Testing 3D rendering seems pretty silly. Compositing and video editing are a LOT more demanding on storage.
  • andywork78, March 13, 2012 4:57 PM
    A 1 TB SSD for $200 right now!!!!
  • jaquith, March 13, 2012 5:06 PM
    Very nice article, and thanks! Pictures, or in this case a video, are all you needed to make the point ;)

    Andrew - the reference to the 'Xeon E5-2600 Workstation' completely threw me off; the benchmarks made no sense until I looked at the 'Test Hardware' and noticed an i5-2500K??!! Please swap out the image; it's misleading at best.

    Try doing this on a RAM drive, and better yet on a dual-processor E5-2600 with 64-128 GB of memory (128 GB might be a hard one). I've been trying to experiment with SQL on a RAM drive (my X79 is out for an RMA visit). The few times I've tried it with smaller databases, it's been remarkable: going from an SSD to a RAM drive feels like going from a hard drive to an SSD, and then some. I'm also playing with RAM caching on SSDs, but I'm stuck until the RMA is done.
  • A Bad Day, March 13, 2012 8:04 PM
    Quote (willard):
    Time spent building is time lost, so hour-long builds are very, very expensive.

    And if the coding needs to be fixed and replaced, well, even more time is lost.
  • agnickolov, March 14, 2012 4:42 AM
    I'm really delighted programming was one of the chosen workstation disciplines. Some comments:

    - The choice of a Core i5 as the host CPU is a bad one. Hyper-Threading in a Core i7 makes a lot of sense, since it enables higher parallelism during compilation: eight files compile in parallel instead of four. Incidentally, that would increase the I/O load as well.

    - There's nothing surprising in the mixture of random and sequential transfers. While source code files are small, the object files they produce are not, to say nothing of the final libraries and executables. For a single source file you'd typically get 50 to 500 KB of object code. Precompiled headers run to 30-40 MB as well. Some of our libraries' builds exceed 4 GB in size. True, these include both debug and release builds, but they don't include the intermediate object files, only the final libraries. The main reason for these large sizes is the debug symbols.

    - Small SSDs don't make much sense for development. On a complex project you can work with a 120 GB drive, but you may end up frequently deleting old builds (of dependency libraries) from your cache as you run out of disk space. I have a 240 GB Vertex 2 SSD in my laptop (a secondary machine) dedicated to development (it's not even a boot drive), and that works okay for now, meaning I still haven't had to clean obsolete builds off of it...
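    A rough back-of-envelope from agnickolov's figures, with the file count being an assumption: a 3M-line project at roughly 1,000 lines per source file is about 3,000 files, and at 50-500 KB of object code apiece that works out to somewhere between 150 MB and 1.5 GB of intermediate objects per build configuration, before the multi-gigabyte debug and release libraries are counted. It's easy to see how a 120 GB drive fills up.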
  • descendency, March 15, 2012 1:39 AM
    I find the result from compiling quite interesting, as I've always thought of compiling as a largely disk I/O-bound process. I would have figured an SSD would provide significantly more benefit than a two-disk RAID 0 array...

    I think agnickolov is onto something with his comment, though.
  • sarcasm, March 17, 2012 10:28 PM
    Quote (teddymines):
    Is having to wait for a task really that much of a deal-breaker? [...]

    There are things called deadlines and having a life outside of work. :)  The more time spent waiting for a project to finish, the more time wasted, the more money lost, and the unhappier the client.

    Do you think they could have rendered Transformers (or any other CGI heavy movie) with a Pentium 4? Probably not. :p 
  • peevee, March 19, 2012 12:50 AM
    Thanks, useful test for non-kids here.

    One question: did you compile Firefox in Release or Debug? Release builds tend to load the processor more (optimizations take a lot of time), while Debug builds don't load the processor as much but load the disks more. In a programmer's day-to-day work, Debug builds are far more common, by the way.
    And of course you should have used a system with a Core i7-3930K for this test, or better yet a pair of Xeons. An i5-2500K is not a workstation processor.
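    For reference, the Release/Debug distinction peevee asks about is chosen in Firefox's mozconfig file; these lines are illustrative, not the article's actual build settings:

        # Release-style build: the compiler spends most of its time optimizing
        ac_add_options --enable-optimize
        # Debug-style build: less CPU work per file, far more symbol data on disk
        # ac_add_options --enable-debug

        mk_add_options MOZ_MAKE_FLAGS="-j8"  # parallel compile jobs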
  • svdb, March 21, 2012 7:06 PM
    I'd like to see a test using the javac compiler instead of Visual Studio.
    Compiling Java creates a .class file for every .java file, and even without counting the construction of JAR and WAR files, it is very disk-intensive.
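    A sketch of the workload svdb describes, with hypothetical paths:

        # one .class file lands on disk for every .java file compiled
        javac -d build/classes $(find src -name "*.java")
        # the whole tree is then read back to package the archive
        jar cf app.jar -C build/classes .

    Thousands of small scattered writes followed by one large sequential read is exactly the kind of mixed pattern this article measures.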
  • Anonymous, April 2, 2012 12:29 AM
    I'd like to see real CAD programs tested, e.g., solid modelers like Autodesk Inventor, SolidWorks, and Creo (formerly Pro/E Wildfire), not just a line-drawing program (AutoCAD). Throw in some analysis tools like ANSYS to round it out for mechanical design workstations.
  • jbeans83, April 4, 2012 9:02 AM
    A corollary to these SSD analyses should be determining which SSDs lend themselves best to the real-world usage patterns shown in this article. Different SSDs shine in different areas. Given what's shown here, which SSDs actually make the most difference in each of the categories analyzed?
  • dmalicky, April 27, 2012 7:44 AM
    Thanks for this test.

    Some of the most demanding workstation tasks are FEA: ANSYS, Abaqus, COSMOS, Creo Simulate (Pro/MECHANICA). A single model often takes hours or days to solve, especially if RAM is insufficient (common) and the solver turns to swap space on a drive. An SSD can cut solution times by 50% or even 80+% -- see this article:
    http://www.ansys.com/staticassets/ANSYS/staticassets/resourcelibrary/article/AA-V4-I1-Boosting-Memory-Capacity-with-SSDs.pdf

    These programs write reams of incompressible data -- my two-week-old SSD has had 7,000 GB written to it (yes, hammered). At this rate it will last one to two years, which is fine. But as a SandForce DuraClass drive, it has throttled to ~80 MB/s writes, which slows the solution. Whether at 80 or 500 MB/s, the SSD gets exactly the same gigabytes written to it. So I don't see how the throttle helps its lifespan -- except at the expense of human wait time, a poor bargain.

    So for workstations, it would be really helpful to find an inexpensive SSD that doesn't throttle, or a way to defeat throttling on a SandForce drive.