
VMware: Booting

Workstation Storage: Modeling, CAD, Programming, And Virtualization
Overall Statistics

Elapsed Time: 31.39 s
Read Operations: 23,752
Write Operations: 1,303
Data Read: 864.65 MB
Data Written: 16.27 MB
Disk Busy Time: 3.236 s
Average Data Rate: 272.20 MB/s
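As a sanity check on the summary above, the average data rate corresponds to the total data moved divided by the disk busy time (not the elapsed wall-clock time). A quick calculation with the table's figures lands within rounding of the reported 272.20 MB/s (the small difference presumably comes from the tool using exact byte counts):

```python
# Figures from the trace summary above.
data_read_mb = 864.65
data_written_mb = 16.27
disk_busy_s = 3.236

# Average data rate over the time the disk was actually busy.
avg_rate = (data_read_mb + data_written_mb) / disk_busy_s
print(f"{avg_rate:.2f} MB/s")  # → 272.22 MB/s
```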


SSDs are known for helping accelerate boot times. Queue depths during Windows start-up can easily exceed four, as the operating system accesses multiple files in quick succession or simultaneously. This is exactly what we see when Windows 7 loads under VMware Workstation: fewer than half of the operations occur at a queue depth of one. The vast majority of the transferred data is sequential, with about two-thirds of the operations 64 KB in size.

VMware Workstation takes as little as 30 seconds to reach the desktop. In comparison, our two Caviar Green drives in RAID 1 take about 50 seconds to get there.

I/O Trends:

  • 38% of all operations occur at a queue depth of one
  • 55% of all operations occur at queue depths between two and five
  • 75% of all data transferred is sequential
  • 18% of all operations are 4 KB in transfer size
  • 64% of all operations are 64 KB in transfer size
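Queue-depth figures like the ones above can be derived from a raw I/O trace by counting how many operations are still in flight when each new one is issued. A minimal sketch, assuming a hypothetical trace of (start, end) timestamps rather than any particular tool's format:

```python
from collections import Counter

def queue_depth_histogram(ops):
    """Count operations issued at each queue depth, where depth is the
    number of I/Os still in flight when an operation starts (itself
    included)."""
    hist = Counter()
    in_flight = []  # completion times of outstanding I/Os
    for start, end in sorted(ops):
        in_flight = [e for e in in_flight if e > start]  # drop finished I/Os
        in_flight.append(end)
        hist[len(in_flight)] += 1
    return hist

# Hypothetical trace: (start, end) timestamps in milliseconds.
trace = [(0.0, 4.0), (1.0, 3.0), (2.0, 5.0), (6.0, 7.0)]
hist = queue_depth_histogram(trace)
print(dict(hist))  # → {1: 2, 2: 1, 3: 1}
```

Binning the resulting histogram (depth 1, depths 2-5, and so on) gives percentages of the kind listed in the I/O trends.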



19 comments. This thread is closed for comments.
  • Score: 4
    clownbaby , March 13, 2012 4:40 AM
    Thanks for the workstation analysis. I'd really like to see some tests comparing performance while utilizing multiple programs and lots of disk caching, i.e. having many complementary programs open (Photoshop, Illustrator, After Effects, and Premiere Pro), with many gigs worth of projects opened and cached, and multiple background renders. Something like this would be a worst-case scenario for me, and finding the right balance between SSDs, RAIDed disks, and properly configured memory would be interesting.

    I currently run my OS and production software from an SSD, have 24 GB of system memory, the page file set to write to the SSD, and user files on striped 1 TB drives. I'd be interested to see the benefits of installing a separate small SSD just to handle a large page file, and different configurations with swap drives. Basically, there are a lot of drive configuration options with all of the hardware available at the moment, and it would be nice to know the most streamlined/cost-effective setup.
  • Score: 4
    acku , March 13, 2012 5:17 AM
    clownbaby wrote: Thanks for the workstation analysis. I'd really like to see some tests comparing performance while utilizing multiple programs and lots of disk caching... [quoted in full above]


    We'll look into that!

    Cheers,
    Andrew Ku
    TomsHardware.com
  • Score: 1
    cknobman , March 13, 2012 11:54 AM
    As an applications developer working on a brand-new Dell M4600 mobile workstation with a slow 250 GB mechanical hard drive, it is very interesting to see tests like this, and it makes me wonder how much improvement I would see if my machine were equipped with an SSD.

    I would really like to see more multitasking as well, including application startups and shutdowns. Throughout the day I am constantly opening and closing applications like Remote Desktop, SQL Management Studio, 1-4 instances of Visual Studio 2010 at a time, Word, Excel, Outlook, Visio, a Windows XP virtual machine, etc.

  • Score: 0
    teddymines , March 13, 2012 12:08 PM
    Is having to wait for a task really that much of a deal-breaker? I tend to use that time to hit the restroom, get a coffee, discuss with co-workers, or work on another task. Besides, if computers get to be too fast, then we'll be expected to get more done. ;^)
  • Score: 2
    willard , March 13, 2012 12:22 PM
    Quote:
    Consequently, compiling code isn't a usage scenario where SSDs provide a clear lead.

    I disagree. Try the test again with a distributed build system.

    I work on a project with around 3M lines of code, which is actually smaller than Firefox. To get compile times down, we use a distributed build system across about a dozen computers (all the developers and testers pool their resources for builds). Even though we all use 10k RPM drives in RAID 0 and put our OS on a separate drive, disk I/O is still the limiting factor in build speed.

    I'll agree that when building on a single computer, an SSD has little benefit. But I'd imagine that most groups working on very large projects will try to leverage the power of more than one computer to save developer resources. Time spent building is time lost, so hour-long builds are very, very expensive.
  • Score: 2
    jgutz2006 , March 13, 2012 12:29 PM
    acku wrote: We'll look into that! Cheers, Andrew Ku, TomsHardware.com



    On top of the SSD cache, I would like to know where these performance gains plateau (e.g. whether a 16 GB SSD cache performs the same as 32 or 64+ GB, etc.).

    I'd like to see these put up against some SAS drives in RAID 0, RAID 1, and RAID 10 at 10,000 and 15,000 RPM. I'm currently running a dual-socket Xeon board with 48 GB of RAM on a 120 GB Vertex 2 SSD and a four-pack of 300 GB 10K SAS disks in RAID 10.

    I think I'd LOVE to see something along the lines of the Momentus XT in a commercial 10K/15K RPM SAS disk with 32 GB of SSD cache, which could be the sweet spot for the extremely large CAD/3D modeling files out there.
  • Score: 2
    Zatanus , March 13, 2012 1:26 PM
    PLEASE!

    Add VMware benchmarks to normal desktop CPU reviews!
  • Score: 2
    Anonymous , March 13, 2012 3:00 PM
    It's nice that you test "workstation software," but you do not test any compositing software such as Eyeon Fusion or Adobe After Effects. Testing 3D rendering seems pretty silly. Compositing and video editing are a LOT more demanding on storage.
  • Score: -2
    andywork78 , March 13, 2012 4:57 PM
    SSD 1TB for $200 right now !!!!
  • Score: 1
    jaquith , March 13, 2012 5:06 PM
    Very nice article, and thanks! Pictures, or in this case a video, are all you needed to make the point ;)

    Andrew, the reference to the 'Xeon E5-2600 Workstation' completely threw me off; the benchmarks made no sense until I looked at the 'Test Hardware' table and noticed an i5-2500K! Please swap out the image; it's misleading at best.

    Try doing this on a RAM drive, or better yet on the DP E5-2600 with 64-128 GB; 128 GB might be a hard one. I've been trying to experiment with SQL on a RAM drive (my X79 is out for an RMA visit), but the few times I've run smaller databases it's been remarkable. Like the feeling of going from an HDD to SSDs, it's the same and then some going to a RAM drive. I've also been playing with RAM caching on SSDs, but I'm stuck until the RMA is done.
  • Score: 0
    A Bad Day , March 13, 2012 8:04 PM
    willard wrote: Time spent building is time lost, so hour long builds are very, very expensive.


    And if the code needs to be fixed and rebuilt, well, even more time is lost.
  • Score: 1
    agnickolov , March 14, 2012 4:42 AM
    I'm really delighted programming was one of the chosen workstation disciplines. Some comments:

    - The choice of a Core i5 as the host CPU is a bad one. Hyper-Threading in a Core i7 makes a lot of sense, since it enables higher parallelism during compilation: 8 files compile in parallel instead of 4. Incidentally, that would increase the I/O load as well.

    - There's nothing surprising in the mixture of random and sequential transfers. While source code files are small, the produced binary object files are not, not to mention the final libraries and executables. For a single source file you'd typically get 50 to 500 KB of object code. Precompiled headers run to 30-40 MB as well. Some of our libraries' builds exceed 4 GB in size. True, these include both debug and release builds, but they don't include the intermediate object files, only the final libraries. The main reason for these large sizes is the debug symbols.

    Small SSDs don't make much sense for development. On a complex project you can work with a 120 GB drive, but you may end up frequently deleting old builds (of dependency libraries) from your cache due to running out of disk space. I have a 240 GB Vertex 2 SSD in my laptop (it's a secondary machine) dedicated to development (i.e. it's not even a boot drive), and that works OK for now, meaning I still haven't had to clean obsolete builds off of it...
  • Score: 0
    descendency , March 15, 2012 1:39 AM
    I find the result from compiling quite interesting, as I've always thought of compiling as a largely disk-I/O-bottlenecked process. I would have figured an SSD provided significantly more benefit than a two-disk RAID 0 array would...

    I think agnickolov is onto something with his comment, though.
  • Score: 0
    sarcasm , March 17, 2012 10:28 PM
    teddymines wrote: Is having to wait for a task really that much of a deal-breaker? I tend to use that time to hit the restroom, get a coffee, discuss with co-workers, or work on another task... [quoted in full above]


    There are things called deadlines and having a life outside of work. :)  The more time spent waiting for a project to finish, the more time wasted, the more money lost, and the unhappier the client.

    Do you think they could have rendered Transformers (or any other CGI heavy movie) with a Pentium 4? Probably not. :p 
  • Score: 0
    peevee , March 19, 2012 12:50 AM
    Thanks, useful test for non-kids here.

    One question: did you compile Firefox in Release or Debug mode? Release builds tend to load the processor more (optimizations take a lot of time), while Debug builds don't load the processor as much but load the disks more. In a programmer's day-to-day work, Debug builds are far more common, BTW.
    And of course you should have used a system with an i7-3930K for this test, or better yet a pair of Xeons. An i5-2500K? That is not a workstation.
  • Score: 0
    svdb , March 21, 2012 7:06 PM
    I'd like to see a test using the javac compiler instead of VS.
    Compiling with Java creates a .class file for every .java file, and even without counting the construction of JAR and WAR files, it is very disk intensive.
  • Score: 1
    Anonymous , April 2, 2012 12:29 AM
    I'd like to see real CAD programs tested, e.g. solid modelers like Autodesk Inventor, SolidWorks, and Creo (formerly Pro/E Wildfire), not just a line drawing program (AutoCAD). Throw in some analysis tools like ANSYS to round it out for mechanical design workstations.
  • Score: 0
    jbeans83 , April 4, 2012 9:02 AM
    A corollary to the SSD analyses should be determining which SSDs now lend themselves well to the real-world utilization patterns shown in this article. Different SSDs shine in different areas. Given what's shown here, which SSDs actually make the most difference in each of the categories analyzed?
  • Score: 0
    dmalicky , April 27, 2012 7:44 AM
    Thanks for this test.

    Some of the most demanding workstation tasks are FEA: Ansys, Abaqus, Cosmos, Creo Simulate (Pro/Mechanica). A single model often takes hours or days to solve, especially if RAM is insufficient (common) and the solver turns to swap space on a drive. An SSD can cut solution times by 50% or even 80+% -- see this article:
    http://www.ansys.com/staticassets/ANSYS/staticassets/resourcelibrary/article/AA-V4-I1-Boosting-Memory-Capacity-with-SSDs.pdf

    These programs write reams of incompressible data -- my two-week-old SSD has had 7,000 GB written to it (yes, hammered). At this rate it will last 1-2 years, which is fine. But as a SandForce DuraClass drive, it has throttled to ~80 MB/s writes, which slows the solution. Whether at 80 or 500 MB/s, the SSD will get exactly the same GB written to it. So I don't see how the throttle helps its life, except at the expense of human wait times -- a poor bargain.

    So for workstations, it would be really helpful to find an inexpensive SSD that doesn't throttle, or a way to defeat the throttling on a SandForce drive.
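The endurance arithmetic in the comment above is easy to check. A quick sketch, where the write load comes from the comment but the TBW endurance rating is a hypothetical value for illustration (check the drive's datasheet for the real figure):

```python
# Write load described in the comment: 7,000 GB in 2 weeks.
gb_written = 7000
weeks = 2
rated_tbw = 300  # hypothetical endurance rating in TB -- not from the article

# Annualized write rate and time to exhaust the rated endurance.
tb_per_year = gb_written / weeks * 52 / 1000
years_to_rated_tbw = rated_tbw / tb_per_year
print(f"~{tb_per_year:.0f} TB/year, ~{years_to_rated_tbw:.1f} years to rated TBW")
# → ~182 TB/year, ~1.6 years to rated TBW
```

With an assumed rating in the 200-400 TB range, the result is consistent with the commenter's 1-2 year estimate.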