
Adaptec Storage Manager And MaxIQ

Adaptec MaxIQ: Flash SSDs Boost RAID Performance
Let’s look at the configuration process of a storage system equipped with Adaptec's MaxIQ. We grabbed an Adaptec RAID 5805 card (firmware 5.2.0.17544) with eight ports and created a RAID 0 array using three Fujitsu MBA3147 15,000 RPM SAS hard drives. This represents an entry-level, high-performance hard drive-based RAID solution, and although we used RAID 0, its read performance is roughly comparable to that of a four-drive RAID 5 configuration.
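
For a rough sense of why those two setups land close together on reads: RAID 0 stripes data across all of its members, while RAID 5 gives up one drive's worth of each stripe to parity. The short Python sketch below is not from the article; the per-drive throughput figure and the function names are assumptions, purely to illustrate the arithmetic.

```python
# Back-of-the-envelope check (not from the article): why a three-drive RAID 0
# reads about as fast as a four-drive RAID 5. The throughput figure below is
# an assumed round number for a 15,000 RPM SAS disk, not a measured value.

PER_DRIVE_MBPS = 150.0  # assumed sustained read rate of one 15K SAS drive

def raid0_read_mbps(drives: int, per_drive: float = PER_DRIVE_MBPS) -> float:
    """RAID 0 stripes data across every member, so large reads scale with N."""
    return drives * per_drive

def raid5_read_mbps(drives: int, per_drive: float = PER_DRIVE_MBPS) -> float:
    """RAID 5 rotates parity across all members, but one drive's worth of each
    stripe is parity, so large sequential reads deliver roughly (N - 1) drives
    of data."""
    return (drives - 1) * per_drive

if __name__ == "__main__":
    print(f"3-drive RAID 0 : ~{raid0_read_mbps(3):.0f} MB/s")
    print(f"4-drive RAID 5 : ~{raid5_read_mbps(4):.0f} MB/s")
```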

Once the updated firmware is installed, you’ll also need an updated version of Adaptec Storage Manager, the management suite for Adaptec storage products. We found a copy on the MaxIQ CD, but it’s always best to get the latest version from the vendor.

The latest version of ASM will distinguish hard drives from SSDs, which is necessary if you want to convert the X25-E SSD into a caching device. Simply right-click on the SSD…

…and add the SSD to the MaxIQ pool. You can run two or more SSDs if you want a caching capacity larger than this SSD’s 32GB, though each additional drive has to be purchased separately. Adaptec plans to support other SSDs as well, but for now there is no alternative to Intel's X25-E.

An SSD used as a MaxIQ caching device is marked as such.

All that remains is to enable MaxIQ caching on the overview page of the desired RAID array. You can activate the cache feature for multiple arrays if you have a more complex storage arrangement. That’s about it: MaxIQ starts working automatically and is fully transparent.
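
Adaptec doesn't document the caching policy MaxIQ uses, so the sketch below is only a generic stand-in: a least-recently-used (LRU) read cache fed by a synthetic workload, with made-up array, cache, and hot-set sizes. It illustrates the general principle the feature relies on, namely that a small SSD cache helps most when reads keep returning to a hot subset of the data, and does little for purely random access.

```python
# Illustrative only: MaxIQ's actual caching policy is proprietary. This toy
# LRU read cache just shows why a small SSD in front of big, slow disks pays
# off when reads concentrate on a "hot" subset of blocks. All sizes and
# access counts are invented.

import random
from collections import OrderedDict

ARRAY_BLOCKS = 1_000_000   # pretend RAID array size, in blocks
CACHE_BLOCKS = 50_000      # pretend SSD cache size, in blocks
HOT_BLOCKS   = 40_000      # blocks the workload keeps coming back to
ACCESSES     = 200_000

def hit_rate(accesses, cache_size):
    """Run a block-access trace through an LRU cache and report the hit rate."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # refresh LRU position
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used block
    return hits / len(accesses)

def trace(hot_fraction):
    """Reads go to the hot set with probability `hot_fraction`; otherwise
    anywhere in the array."""
    out = []
    for _ in range(ACCESSES):
        if random.random() < hot_fraction:
            out.append(random.randrange(HOT_BLOCKS))    # hot block
        else:
            out.append(random.randrange(ARRAY_BLOCKS))  # cold block
    return out

if __name__ == "__main__":
    random.seed(0)
    print(f"uniform random reads : {hit_rate(trace(0.0), CACHE_BLOCKS):.1%} cache hits")
    print(f"90% hot-set reads    : {hit_rate(trace(0.9), CACHE_BLOCKS):.1%} cache hits")
```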

Comments
  • Area51, February 12, 2010 6:03 AM
    "An individual SSD would still be significantly faster in our database benchmark. However, it would also limit the available capacity to 32GB and remove all redundancy."

    What redundancy? You are running RAID 0. If you consider the failure rates of spinning drives, the single SSD should outlast the RAID setup, since there are fewer parts that can fail.
  • Anonymous, February 12, 2010 6:33 AM
    There are software-based solutions for flash caching! Solaris has one in the ZFS filesystem; it's called L2ARC. It's also available in the Linux BTRFS filesystem.

    Also, if you use either a SUN storage disk array or a NetApp storage array with mechanical disks, you can use the flash cache inside the array (available as an option).
  • theholylancer, February 12, 2010 12:25 PM
    Area51: "An individual SSD would still be significantly faster in our database benchmark. However, it would also limit the available capacity to 32GB and remove all redundancy. What redundancy? You are running RAID 0. If you consider the failure rates of spinning drives, the single SSD should outlast the RAID setup, since there are fewer parts that can fail."

    I think they wanted to do RAID 5 with 4 drives, but couldn't because they didn't have 4 drives, so they simulated RAID 5 performance with 3 drives in RAID 0.
  • WyomingKnott, February 12, 2010 1:12 PM
    Sounds like it's a larger cache with prefetching. It may be effective, but it's nothing new.
    Then again, there's nothing wrong with improving an existing idea.
  • jrsdav, February 12, 2010 1:32 PM
    On a side note, if you are planning to use this in conjunction with SSDs in an array configuration, triple-check that you have ones that are fully supported by the controller. Adaptec is having issues with most major SSD manufacturers.
  • Anonymous, February 12, 2010 2:15 PM
    I'm curious whether this works with the OCZ Agility EX drives. I already have an Adaptec 5805 controller and the OCZ drive.
  • mattshwink1, February 12, 2010 3:56 PM
    StorageUser: "There are software-based solutions for flash caching! Solaris has one in the ZFS filesystem; it's called L2ARC. It's also available in the Linux BTRFS filesystem. Also, if you use either a SUN storage disk array or a NetApp storage array with mechanical disks, you can use the flash cache inside the array (available as an option)."

    True, but if you're using Sun or NetApp, you are talking about a solution that costs hundreds of thousands of dollars. This solution would probably be under $1,000 (depending on disk sizes and number of disks).
  • juanc, February 12, 2010 4:02 PM
    Bull****

    1) Expensive, while they say it isn't.
    2) Covers just a few applications, like databases.
    3) The SSDs, how long will they last? In a real environment outside read-only DBs... a month?
    4) Can be done by software.
    5) RAM is faster, R/W. The only issue is that the RAID cards usually require expensive RAM, and they charge you a freaking amount of money for the battery backups.
  • mattshwink1, February 12, 2010 4:23 PM
    juanc: "Bull**** 1) Expensive, while they say it isn't. 2) Covers just a few applications, like databases. 3) The SSDs, how long will they last? In a real environment outside read-only DBs... a month? 4) Can be done by software. 5) RAM is faster, R/W. The only issue is that the RAID cards usually require expensive RAM, and they charge you a freaking amount of money for the battery backups."

    1. Enterprise-level solutions aren't cheap. Enterprise SANs run in the hundreds of thousands (or millions, depending on capacity and features). This would be in place of buying either all SSDs or buying a SAN. It's cheaper than both and solves a problem (capacity vs. performance). It's a niche product, but it could prove useful.

    2. The graphic in the MaxIQ details (and the benchmarks that follow) seems to show it will work for any workload. Besides, in the enterprise your most storage-intensive workloads (which this is for) are usually databases anyway.

    3. We're talking X25-Es here, enterprise-level SSDs with a 2 million hour MTBF. Even the second-generation X25-Ms are rated at 1.2 million hours MTBF.

    4. What software can accelerate storage performance?

    5. RAM is absolutely faster, but more expensive. High-end database servers can have 256GB (or more), and RAM is essential to support high-performance workloads. But if you have a database (or application) of any significant size (e.g., 500GB or more), you will need fast disk to move transactions. And I haven't seen many RAID cards with more than 1GB of cache (enterprise storage solutions generally don't have more than 64GB, which is very expensive). You need to go to disk at some point, and this solution meets a mid-market need. Whether anyone sees enough benefit to use it is another question (but the benchmarks show that it could improve performance).
  • JohnnyLucky, February 12, 2010 5:39 PM
    Costs more than my new build!!! Ouch! :( 
  • belardo, February 12, 2010 7:40 PM
    Wouldn't it be cheaper to go with two SSD drives in a RAID-1 config? Then a 3rd as a nightly backup?
  • mattshwink1, February 12, 2010 7:53 PM
    Belardo: "Wouldn't it be cheaper to go with two SSD drives in a RAID-1 config? Then a 3rd as a nightly backup?"


    It absolutely would, but your capacity would be severely limited. By combining standard mechanical drives with SSDs (for caching) you get increased storage and a boost in performance.
  • Anonymous, February 12, 2010 9:30 PM
    You should really be testing arrays that are much bigger than the SSD cache - like 1TB in size (i.e. 4 WD Velociraptors). There is no reason I would have a 5GB or 25GB RAID array using hard drives if I already had a 32GB SSD. I would just use the SSD to begin with.
  • El_Capitan, February 12, 2010 10:13 PM
    I will always Thumbs Up comments with Thumbs Down.
  • dneonu, February 13, 2010 11:07 PM
    There's no question that when it comes to computer storage, the best upgrade to speed up a computer is swapping the HDD for an SSD. Currently the only problem is that SSDs are just too expensive for most people, while HDD prices are dirt cheap. Kind of like comparing LEDs to CCFLs.
  • g00ey, February 14, 2010 12:49 AM
    I've never liked Adaptec's controller boards, which are proprietary; if you build a RAID array on an Adaptec controller, you're bound to Adaptec's controllers and unable to migrate your array to another brand's controller. I would rather recommend a controller from 3Ware or LSI, or any MegaRAID-based controller such as Dell PERC, Fujitsu-Siemens, IBM ServeRAID, or Intel controllers.

    The major reasons for these recommendations are good driver support and non-proprietary hardware.
  • g00ey, February 14, 2010 12:54 AM
    mattshwink1: "True, but if you're using Sun or NetApp, you are talking about a solution that costs hundreds of thousands of dollars. This solution would probably be under $1,000 (depending on disk sizes and number of disks)."

    But OpenSolaris uses ZFS. It's free and compatible with most PC hardware (but be careful which SAS controller you choose; HighPoint and Marvell don't agree well with Solaris). FreeNAS, which is FreeBSD-based, also uses ZFS, but it is less stable and its ZFS is not as mature and well-implemented as OpenSolaris'.
  • Aragorn, February 14, 2010 3:57 PM
    The benchmarks here all read and wrote to random areas of the disks, mostly negating the effect of the SSD cache. Wouldn't a real-world system be likely to access certain sectors far more often, thereby getting a much bigger boost from the SSD?
  • ShadowFlash, February 15, 2010 2:45 AM
    Again, my own crazy idea I've mentioned before... instead of using RAID 5, a hardware RAID 3 (or 4) should be used with an X25-E as the dedicated parity drive. The SSD should negate the usual bottleneck of the dedicated parity drive, reducing overhead to the quality of the XOR engine used. 146GB 15K SAS drives are common... use a bunch of them with a single (still pricey) 160GB X25. HighPoint still supports what it calls RAID 3, but it's really RAID 4 with a 64K stripe size, which will do just fine.
  • randomstar, February 15, 2010 5:23 PM
    Didn't I see a device a few days ago that took an SSD and used it as a cache for a mechanical drive, creating in effect a hybrid drive?

    Wouldn't that be a great idea if it could support a RAID array behind it rather than a single drive? Is there a simple RAID controller that takes a single SATA input and handles the RAID to 4 drives in RAID 5? Maybe a backplane or something?
    I know it's a bit off topic, but it got me thinking. Will research...