
Benchmark Results: PCMark Application Performance

Enterprise Storage: Two 2.5" 600 GB Hard Drives Tested

We use the PCMark Vantage test less for its everyday applicability and more to examine possible performance differences across varying workload types.

  • 0 Hide
    jjamess , July 30, 2010 3:06 PM
    The only reason you should be using those 2.5" drives is if you need to conserve rack space, not because you're worried about performance.
  • 0 Hide
    WyomingKnott , July 30, 2010 4:07 PM
    Please teach me something: if the throughput reaches 140 MB/s, or 1.4 Gb/s after allowing for 8b/10b encoding, what is the advantage of a 6 Gb/s interface? I know that interfaces don't run at their max in the real world, but under 50%?
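The arithmetic behind this question can be sketched in a few lines (a back-of-the-envelope calculation that treats 8b/10b as the only overhead; real SAS framing adds a bit more):

```python
# Back-of-the-envelope: 8b/10b encoding spends 10 line bits to carry
# 8 payload bits, so raw line rate = payload rate * 10/8.

payload_mb_s = 140                       # sustained drive throughput, MB/s
payload_gb_s = payload_mb_s * 8 / 1000   # payload bits on the wire, Gb/s
line_gb_s = payload_gb_s * 10 / 8        # add 8b/10b encoding overhead

print(f"{payload_mb_s} MB/s payload -> {line_gb_s:.1f} Gb/s line rate")
print(f"Utilization of a 6 Gb/s link: {line_gb_s / 6.0:.0%}")
```

So a single drive streaming sequentially uses well under a quarter of a 6 Gb/s link, which is exactly the puzzle the question raises.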
  • 0 Hide
    cjl , July 30, 2010 5:05 PM
    Quote:
    WyomingKnott: Please teach me something: if the throughput reaches 140 MB/s, or 1.4 Gb/s after allowing for 8b/10b encoding, what is the advantage of a 6 Gb/s interface? I know that interfaces don't run at their max in the real world, but under 50%?


    It can read full speed from (and write full speed to) the cache. Other than that, you're right, the interface bandwidth is unnecessary for most things.

  • 0 Hide
    liquidsnake718 , July 30, 2010 10:19 PM
    Any indication on the prices for this? So far I haven't seen 10K RPM 2.5" HDDs in my area; even the specialty stores only have 3.5" 10K RPM drives at most.
  • 0 Hide
    g00ey , July 31, 2010 12:39 AM
    WyomingKnott & cjl: There is also overhead to consider, just as there is overhead when transferring TCP packets. It's not only pure data that travels over the SAS bus, but also other things such as SATA/SCSI commands, parity bits, etc. The limit gets pushed further in RAID setups, especially when the hard drives are connected through port multipliers, where several drives have to share a channel. A SAS controller typically provides 4 channels per connector, but it is possible to connect many more hard drives to it using multipliers.

    Another aspect of the SAS/SATA bus is latency, which is especially important performance-wise when it comes to solid state drives. I would say latency is even more crucial to performance than bandwidth.
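The channel-sharing effect described above can be sketched with purely illustrative numbers (a 4-lane connector and 16 drives behind an expander are both hypothetical choices):

```python
# Illustrative sketch: effective per-drive bandwidth when many drives
# share a few controller channels through an expander/multiplier.

line_gb_s = 6.0                                          # SAS 6 Gb/s line rate
channel_payload_mb_s = line_gb_s * 1000 * (8 / 10) / 8   # ~600 MB/s after 8b/10b

channels = 4    # lanes per controller connector (hypothetical)
drives = 16     # drives behind the expander (hypothetical)

per_drive_mb_s = channels * channel_payload_mb_s / drives
print(f"~{per_drive_mb_s:.0f} MB/s per drive if all {drives} stream at once")
```

With enough drives contending, the shared-channel ceiling, not the individual drive, becomes the bottleneck.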
  • -2 Hide
    wotan31 , July 31, 2010 3:38 PM
    Quote:
    jjamess: The only reason you should be using those 2.5" drives is if you need to conserve rack space, not because you're worried about performance.

    You are wrong. These drives use a tiny fraction of the power of 3.5" models, and power savings, not just for the servers themselves but for the air conditioning in the room, are a big focus in today's datacenter. Furthermore, when used inside a server, these drives are going to be in a mirrored pair for the OS, and you don't care about the disk performance of the OS on a server; that's not where your application or data is running. Used as application or data volumes, they're going to be attached to a RAID controller, dozens or more of them, so you simply size the number of RAID-set members to meet your performance target. The individual performance of one of these drives isn't really that relevant. Sorry, but the server world is a very different place from the kiddie peecee world you're used to.
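The sizing logic in this comment ("size your number of RAID-set members to meet your performance target") reduces to a one-line calculation; the figures below are purely illustrative ballparks, not measurements from the review:

```python
import math

# Hypothetical sizing sketch: pick enough RAID-set members to meet an
# aggregate random-I/O target (per-drive IOPS is a rough 10K RPM ballpark).

target_iops = 3000
iops_per_drive = 300

members = math.ceil(target_iops / iops_per_drive)
print(f"{members} drives needed to reach {target_iops} IOPS")
```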
  • 2 Hide
    rbarone69 , July 31, 2010 7:31 PM
    Quote:
    jjamess: The only reason you should be using those 2.5" drives is if you need to conserve rack space, not because you're worried about performance.



    It's also about performance density. If I can get the same performance in half the space, I can double my performance in the allotted space.


    Have you ever bought or run any kind of managed storage (SANs)? If you have, you'd know the high cost of the units that house these disks. An EqualLogic PS6000 will set you back around $50K (and that's the low end compared to EMC or NetApp). The more spindles I can put in those, the better the performance and the lower the overall cost.

    Equinix runs rack and power leases at around $1,000-1,200+ per month for a single rack in their facilities. It adds up when you have to pay monthly to power and house your 3.5" drives...

    My point is space conservation is sometimes performance.

    I don't even know why I'm posting. It won't matter in a few more years... SSDs will be taking over.
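The "performance density" argument above is simple arithmetic; a sketch with hypothetical shelf capacities (the per-shelf drive counts are typical examples, not figures from the article):

```python
# Illustrative: same per-drive performance, twice the drives per shelf ->
# twice the performance per rack unit (all figures hypothetical).

iops_per_drive = 300
drives_per_shelf_35in = 12   # e.g. a 2U shelf of 3.5" drives
drives_per_shelf_25in = 24   # e.g. a 2U shelf of 2.5" drives

iops_35 = drives_per_shelf_35in * iops_per_drive
iops_25 = drives_per_shelf_25in * iops_per_drive
print(f'3.5" shelf: {iops_35} IOPS per 2U; 2.5" shelf: {iops_25} IOPS per 2U')
```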
  • 1 Hide
    Anonymous , July 31, 2010 9:59 PM
    Why doesn't the Test Setup / Hardware page list the SAS controller and drivers used?
  • 1 Hide
    a7xfire , August 1, 2010 7:23 AM
    Very true about SSDs.
  • 0 Hide
    jrst , August 2, 2010 12:32 AM
    Quote:
    WyomingKnott: Please teach me something: if the throughput reaches 140 MB/s, or 1.4 Gb/s after allowing for 8b/10b encoding, what is the advantage of a 6 Gb/s interface? I know that interfaces don't run at their max in the real world, but under 50%?


    The advantages aren't apparent if you're looking only at the drive interface, or if you have a pure point-to-point topology with one controller port/channel per drive.

    However, SAS allows more complex interconnects. Large arrays tend to have multiple SAS drives competing for a smaller number of controller ports/channels through port expanders/multipliers. (SATA also has port expanders, but they're more limited.)

    For example, a shelf of 24 SAS drives doesn't typically have one controller port/channel per drive--that would be horribly expensive and a cabling nightmare. Instead, through port multipliers/expanders, those 24 drives might be connected to only 1-2 SAS ports on the controller (typically at least two, for redundancy).

    Consider arrays with large numbers of drives, and the benefits of SAS, and of 6 Gb/s SAS in particular, become apparent.
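The oversubscription this comment describes can be put in numbers; the sketch below assumes one 4-lane wide port for a 24-drive shelf, which is illustrative rather than any specific product's layout:

```python
# Sketch of expander oversubscription: 24 drives sharing one 4-lane
# wide SAS port to the controller (all numbers illustrative).

drives = 24
drive_payload_mb_s = 140      # sustained transfer per drive
lanes = 4                     # one 4-lane wide SAS port
lane_payload_mb_s = 600       # ~6 Gb/s line rate after 8b/10b

demand = drives * drive_payload_mb_s   # if every drive streams at once
supply = lanes * lane_payload_mb_s     # uplink bandwidth to the controller
print(f"Demand {demand} MB/s vs uplink {supply} MB/s "
      f"-> {demand / supply:.1f}:1 oversubscribed")
```

Even modest shelves can demand more than the uplink provides, which is why the per-lane rate matters well before any single drive saturates it.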
  • 0 Hide
    jrst , August 2, 2010 9:39 AM
    Quote:
    WyomingKnott: Please teach me something: if the throughput reaches 140 MB/s, or 1.4 Gb/s after allowing for 8b/10b encoding, what is the advantage of a 6 Gb/s interface? I know that interfaces don't run at their max in the real world, but under 50%?


    p.s. In addition to my previous comment, you'll notice that the _interface speed_ for these drives (among others) exceeds 300 MB/s (bottom chart, pg. 5). Even though their sustained transfer rate is ~50% of that, the interface speed is very important--it lets a drive transfer its data over the SAS channel that much faster and release the channel that much sooner, making the channel available for other drives to transfer their data--very important when dealing with large numbers of drives.
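The channel-hold-time effect described here can be sketched numerically: how long a 1 MB burst from drive cache occupies the shared channel at 3 Gb/s versus 6 Gb/s (8b/10b included; the burst size is an illustrative assumption):

```python
# Why a faster interface helps even when sustained media rate is lower:
# a cache burst holds the shared channel half as long at 6 Gb/s.

burst_mb = 1.0
for line_gb_s in (3.0, 6.0):
    payload_mb_s = line_gb_s * 1000 * (8 / 10) / 8   # 300 or 600 MB/s
    hold_ms = burst_mb / payload_mb_s * 1000
    print(f"{line_gb_s:.0f} Gb/s link: channel held for {hold_ms:.2f} ms")
```

Halving the hold time doubles how many drives can take turns on the same channel in a given interval, which is the point of the comment.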
  • 0 Hide
    nforce4max , August 2, 2010 8:44 PM
    As far as temps are concerned, these are high. On my workstation the lowest temp I have seen was a meager 22°C, while the hottest was only 42°C. My three SATA drives stay under 45°C, while the two top IDE drives stay under 40°C in daily use.
  • 0 Hide
    Casper42 , August 3, 2010 11:27 PM
    Any enterprise admins out there notice that Dell and IBM are selling these drives already, but HP is not?
  • 0 Hide
    eth77 , March 24, 2013 10:50 AM
    What is the maximum transaction size of the chipset on your motherboard? Can it handle 4K transactions in a single transfer?

    I'm interested in the performance of real-world chipsets when transferring 300 GB files and larger, so you've given me a tantalizing peek at what might be possible, but there's a lot of info still missing.
  • 0 Hide
    eth77 , March 24, 2013 10:51 AM
    Oops, I'm referring to 4KB transactions on the PCI-e I/F.