
Test Hardware And Setup

Adaptec MaxIQ: Flash SSDs Boost RAID Performance

Adaptec RAID 5805, Fujitsu SAS HDDs

Test Setup

System Hardware

CPU: Intel Core i7-920 (45nm, 2.66 GHz, 8MB L3 cache)
Motherboard (Socket 1366): Supermicro X8SAX (revision 1.1, Intel X58 + ICH10R chipset, BIOS 1.0B)
RAM: 3 x 1GB DDR3-1333 Corsair CM3X1024-1333C9DHX
HDD: Seagate NL35 400GB (ST3400832NS, 7,200 RPM, SATA 1.5 Gb/s, 8MB cache)
Test HDD (x3): Fujitsu MBA3147 147GB (15,000 RPM, SAS 3 Gb/s, 16MB cache)
RAID Adapter: Adaptec RAID 5805 (eight-port SAS controller, firmware ver. 5.2.0, build 17544)
Adaptec MaxIQ SSD: ASM-00603-01-A-R (v2009.3 / v6.3), Intel X25-E 32GB
Power Supply: OCZ EliteXstream 800W (OCZ800EXS-EU)

Benchmarks

I/O Performance: IOMeter 2006.07.27 (Fileserver, Database, Webserver, and Workstation benchmarks)

System Software and Drivers

Operating System: Windows Vista Ultimate SP1
Intel Chipset: Chipset Installation Utility 9.1.0.1007
AMD Graphics: Radeon 8.12
Intel Matrix Storage: 8.7.0.1007


In order to analyze the benefits of the 32GB cache, we limited the arrays to four sizes: the 5GB and 25GB capacities ensure that the 32GB cache can still buffer all of the data on the hard drive-based RAID array, while the 50GB and 100GB array sizes reflect the performance scenario when you're working with arrays larger than the cache.
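The effect of those four array sizes can be illustrated with a simple LRU cache simulation (a hypothetical Python sketch, not Adaptec's actual caching algorithm; cache and array are measured in 1GB blocks for simplicity). Arrays at or below the 32GB cache size are served almost entirely from cache after warm-up, while larger arrays see the random-read hit rate fall roughly in proportion to how much of the array fits in cache.

```python
from collections import OrderedDict
import random

CACHE_GB = 32  # size of the MaxIQ SSD cache


def hit_rate(array_gb, accesses=20000):
    """Simulate random 1GB-block reads against a 32GB LRU cache."""
    cache = OrderedDict()
    hits = 0
    rng = random.Random(42)  # fixed seed for repeatability
    for _ in range(accesses):
        block = rng.randrange(array_gb)
        if block in cache:
            hits += 1
            cache.move_to_end(block)  # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > CACHE_GB:
                cache.popitem(last=False)  # evict least recently used
    return hits / accesses


for size in (5, 25, 50, 100):
    print(f"{size:>3}GB array: ~{hit_rate(size):.0%} random-read hit rate")
```

Under this toy model, the 5GB and 25GB arrays approach a 100% hit rate (everything fits in cache), while the 50GB and 100GB arrays settle near 32/50 and 32/100 respectively, which is why the larger configurations expose the underlying hard drives.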

  • Area51, February 12, 2010 6:03 AM
    "An individual SSD would still be significantly faster in our database benchmark. However, it would also limit the available capacity to 32GB and remove all redundancy."

    What redundancy? You are running RAID 0. If you consider the failure rates of spinning drives, the single SSD should outlast the RAID setup, since there are fewer parts that can fail.
  • Anonymous, February 12, 2010 6:33 AM
    There are software-based solutions for flash cache! Solaris has one in the ZFS filesystem; it's called L2ARC. It's also available in the Linux Btrfs filesystem.

    Also, if you use either a Sun storage array or a NetApp storage array with mechanical disks, you can use the flash cache inside the array (available as an option).
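For reference, attaching an SSD as an L2ARC read cache to an existing ZFS pool is a one-line operation. This is a sketch: the pool name `tank` and the device name are placeholders for your own system.

```shell
# Attach an SSD as an L2ARC read cache to an existing pool
# ("tank" and "c1t2d0" are example names; substitute your own).
zpool add tank cache c1t2d0

# The device then appears under a "cache" section in the pool layout.
zpool status tank
```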
  • theholylancer, February 12, 2010 12:25 PM
    Area51: "What redundancy? You are running RAID 0. If you consider the failure rates of spinning drives, the single SSD should outlast the RAID setup, since there are fewer parts that can fail."

    I think they wanted to do RAID 5 with four drives, but couldn't because they didn't have four drives, so they simulated RAID 5 performance with three drives and RAID 0.
  • WyomingKnott, February 12, 2010 1:12 PM
    Sounds like it's a larger cache with prefetching. It may be effective, but it's nothing new.
    Then again, there's nothing wrong with improving an existing idea.
  • jrsdav, February 12, 2010 1:32 PM
    On a side note, if you are planning to use this in conjunction with SSDs in an array configuration, triple-check that you have ones that are fully supported by the controller. Adaptec is having issues with most major SSD manufacturers.
  • Anonymous, February 12, 2010 2:15 PM
    I'm curious whether this works with the OCZ Agility EX drives. I already have an Adaptec 5805 controller and the OCZ drive.
  • mattshwink1, February 12, 2010 3:56 PM
    StorageUser: "There are software-based solutions for flash cache! Solaris has them in the ZFS filesystem, it's called L2ARC..."

    True, but if you're using Sun or NetApp, you are talking about a solution that costs hundreds of thousands of dollars. This solution would probably be under $1,000 (depending on disk sizes and number of disks).
  • juanc, February 12, 2010 4:02 PM
    Bull****

    1) Expensive, while they say it isn't.
    2) Covers just a few applications, like databases.
    3) The SSDs: how long will they last? In a real environment outside read-only DBs... a month?
    4) Can be done in software.
    5) RAM is faster, read and write. The only issue is that RAID cards usually require expensive RAM, and they charge you a freaking amount of money for the battery backups.
  • mattshwink1, February 12, 2010 4:23 PM
    juanc: "1) Expensive, while they say it isn't. 2) Covers just a few applications like databases. 3) The SSDs, how long will they last? 4) Can be done in software. 5) RAM is faster..."


    1. Enterprise-level solutions aren't cheap. Enterprise SANs run in the hundreds of thousands (or millions, depending on capacity and features). This would be in place of buying either all SSDs or a SAN. It's cheaper than both and solves a problem (capacity vs. performance). It's a niche product, but it could prove useful.

    2. The graphic on the MaxIQ details page (and the benchmarks that follow) seems to show it will work for any workload. Besides, in the enterprise, your most storage-intensive workloads (which this is for) are usually databases anyway.

    3. We're talking X25-Es here, enterprise-level SSDs: 2 million hours MTBF. Even the second-generation X25-Ms are rated at 1.2 million hours MTBF.

    4. What software can accelerate storage performance?

    5. RAM is absolutely faster, but more expensive. High-end database servers can have 256GB (or more), and RAM is essential to support high-performance workloads. But if you have a database (or application) of any significant size (e.g., 500GB or more), you will need fast disk to move transactions. And I haven't seen many RAID cards with more than 1GB of cache (enterprise storage solutions generally don't have more than 64GB, which is very expensive). You need to go to disk at some point, and this solution meets a mid-market need. Whether anyone sees enough benefit to use it is another question (but the benchmarks show that it could improve performance).
  • JohnnyLucky, February 12, 2010 5:39 PM
    Costs more than my new build!!! Ouch! :( 
  • belardo, February 12, 2010 7:40 PM
    Wouldn't it be cheaper to go with two SSD drives in a RAID-1 config? Then a 3rd as a nightly backup?
  • mattshwink1, February 12, 2010 7:53 PM
    Belardo: "Wouldn't it be cheaper to go with two SSD drives in a RAID-1 config? Then a 3rd as a nightly backup?"

    It absolutely would, but your capacity would be severely limited. By combining standard mechanical drives with SSDs (for caching), you get increased storage and a boost in performance.
  • Anonymous, February 12, 2010 9:30 PM
    You should really be testing arrays that are much bigger than the SSD cache - like 1TB in size (i.e. 4 WD Velociraptors). There is no reason I would have a 5GB or 25GB RAID array using hard drives if I already had a 32GB SSD. I would just use the SSD to begin with.
  • El_Capitan, February 12, 2010 10:13 PM
    I will always Thumbs Up comments with Thumbs Down.
  • dneonu, February 13, 2010 11:07 PM
    There's no question that when it comes to computer storage, as well as the best upgrade to speed up a computer, swapping the HDD for an SSD is the only way to go. Currently the only problem is that SSDs are just too expensive for most people, while HDD prices are dirt cheap. Kind of like comparing LEDs to CCFLs.
  • g00ey, February 14, 2010 12:49 AM
    I've never liked Adaptec's controller boards, which are proprietary: if you build a RAID on an Adaptec controller, you're bound to Adaptec's controllers, unable to migrate your RAID array to another brand's controller. I would rather recommend a controller from 3ware or LSI, or any MegaRAID-based controller such as Dell PERC, Fujitsu-Siemens, IBM ServeRAID, or Intel controllers.

    The major reasons for these recommendations are good driver support and non-proprietary hardware.
  • g00ey, February 14, 2010 12:54 AM
    mattshwink1: "True, but if you're using Sun or NetApp you are talking about a solution that is hundreds of thousands of dollars..."

    But OpenSolaris uses ZFS. It's free and is compatible with most PC hardware (but be careful with which SAS controller you choose; HighPoint and Marvell don't agree well with Solaris). FreeNAS, which is FreeBSD-based, also uses ZFS, but it is less stable and its ZFS is not as mature and well-implemented as in OpenSolaris.
  • Aragorn, February 14, 2010 3:57 PM
    The benchmarks here all read and wrote to random areas of the disks, mostly negating the effect of the SSD cache. Wouldn't a real-world system be likely to access certain sectors far more often, thereby creating a much bigger boost from the SSD?
  • ShadowFlash, February 15, 2010 2:45 AM
    Again, my own crazy idea I've mentioned before: instead of using RAID 5, a hardware RAID 3 (or 4) should be used with an X25 as the dedicated parity drive. The SSD should negate the usual bottleneck of the dedicated parity drive, reducing overhead to the quality of the XOR engine used. 146GB 15K SAS drives are common; use a bunch of them with a single (still pricey) 160GB X25. HighPoint still supports what they call RAID 3, but it's really RAID 4 with a 64K stripe size, which will do just fine.
  • randomstar, February 15, 2010 5:23 PM
    Didn't I see a device a few days ago that took an SSD and used it as a cache for a mechanical drive, creating in effect a hybrid drive?

    Wouldn't that be a great idea if it could support a RAID array behind it rather than a single drive? Is there a simple RAID controller that takes a single SATA input and handles the RAID to four drives in RAID 5? Maybe a backplane or something?
    I know it's a bit off-topic, but it got me thinking. Will research...