
Adaptec MaxIQ: Flash SSDs Boost RAID Performance

Everyone seems to be talking about flash SSDs these days and how they provide maximum performance with minimal power consumption. However, Adaptec isn’t convinced that SSDs are a silver bullet. High costs and limited capacities remain severe concerns in business applications. MaxIQ is Adaptec’s answer to these issues. It’s a software-based enhancement to Adaptec’s 5- and 2-series RAID controllers that allows administrators to add read caching to RAID arrays using customized Intel X25-E drives.

I/O Path Conditioning

Adaptec is right when it says that data paths between servers and storage require optimization. Millions of servers worldwide are running RAID controllers and arrays based on mechanical hard drives. Although SSDs are in the process of conquering the enterprise market, they are doing so primarily at the very high end, where cost is secondary.

With most systems, storage capacity and TCO are the top priorities, effectively disqualifying many SSD solutions due to limited per-drive capacities and disproportionate costs. Swapping drives out for SSDs may also disrupt validated storage ecosystems. Lastly, higher performance can be beneficial, but it may not be essential in many cases. So how can one affordably increase performance while maintaining an existing high-capacity, validated storage ecosystem? Adaptec’s goals were to achieve the best cost per I/O, best power per I/O, best cost per gigabyte, and best data protection per I/O.

MaxIQ is an SSD Cache for Your RAID Array

Adaptec calls its MaxIQ solution a high-performance hybrid array technology, and defines it as delivering maximum performance without the need for expensive DRAM caches, capacity-cutting short stroking, or application tuning. It is available as an upgrade to all 5- and 2-series Adaptec controllers and requires a firmware update, as well as one or more customized X25-E drives. The basic MaxIQ package with a 32GB Intel X25-E professional SSD retails today for $1,295.

We’ll look at details and benchmark numbers on the following pages, but we can already speak highly of the combo for the way it integrates with your system. Once MaxIQ-capable firmware is installed on your Adaptec controller, you can potentially multiply read I/O performance. No applications or additional drivers are required; beyond the initial configuration, MaxIQ operates transparently. And since the SSD is only used as a cache, data on your RAID array(s) is never at risk.
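
Adaptec doesn’t publish MaxIQ’s caching algorithm, but the general principle of a transparent read cache is easy to illustrate. Below is a minimal sketch in Python, assuming a simple LRU policy; the names ReadCache and backing_store are ours, not Adaptec’s, and the real firmware logic is certainly more sophisticated.

    from collections import OrderedDict

    class ReadCache:
        """Toy model of a transparent SSD read cache in front of a slower
        backing store (the HDD-based RAID array). Assumes plain LRU
        eviction; MaxIQ's actual policy is proprietary."""

        def __init__(self, backing_store, capacity_blocks):
            self.backing_store = backing_store   # dict-like: block number -> data
            self.capacity = capacity_blocks      # cache size in blocks
            self.cache = OrderedDict()           # stands in for the SSD

        def read(self, block):
            if block in self.cache:              # cache hit: serve from the "SSD"
                self.cache.move_to_end(block)
                return self.cache[block]
            data = self.backing_store[block]     # miss: fetch from the array
            self.cache[block] = data             # populate the cache for next time
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the least recently used block
            return data

        def write(self, block, data):
            self.backing_store[block] = data     # writes always go to the array...
            self.cache.pop(block, None)          # ...and invalidate any stale cached copy

The key property is in write(): the SSD never holds the only copy of any data, which is why losing the cache device cannot endanger the array, just as described above.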

Let’s see whether MaxIQ delivers on all of those promises, though.

Comments
  • Area51, February 12, 2010 6:03 AM
    "An individual SSD would still be significantly faster in our database benchmark. However, it would also limit the available capacity to 32GB and remove all redundancy."

    What redundancy? You are running RAID 0. If you consider the failure rates of spinning drives, the single SSD should outlast the RAID setup, since there are fewer parts that can fail.
  • Anonymous, February 12, 2010 6:33 AM
    There are software-based solutions for flash caching! Solaris has one in the ZFS filesystem; it's called L2ARC. It's also available in the Linux Btrfs filesystem.

    Also, if you use either a Sun storage disk array or a NetApp storage array with mechanical disks, you can use the flash cache inside that array (available as an option).
  • theholylancer, February 12, 2010 12:25 PM
    Area51 wrote: "What redundancy? You are running RAID 0. If you consider the failure rates of spinning drives, the single SSD should outlast the RAID setup, since there are fewer parts that can fail."

    I think they wanted to do RAID 5 with four drives but couldn't, not having a fourth drive, so they simulated RAID 5 performance with three drives and RAID 0.
  • WyomingKnott, February 12, 2010 1:12 PM
    Sounds like it's a larger cache with prefetching. It may be effective, but it's nothing new.
    Then again, there's nothing wrong with improving an existing idea.
  • jrsdav, February 12, 2010 1:32 PM
    On a side note, if you are planning to use this in conjunction with SSDs in an array configuration, triple-check that you have drives that are fully supported by the controller. Adaptec is having issues with most major SSD manufacturers.
  • Anonymous, February 12, 2010 2:15 PM
    I'm curious whether this works with the OCZ Agility EX drives. I already have an Adaptec 5805 controller and the OCZ drive.
  • mattshwink1, February 12, 2010 3:56 PM
    StorageUser wrote: "Also, if you use either a Sun storage disk array or a NetApp storage array with mechanical disks, you can use the flash cache inside that array (available as an option)."

    True, but if you're using Sun or NetApp, you are talking about a solution that costs hundreds of thousands of dollars. This solution would probably be under $1,000 (depending on disk sizes and number of disks).
  • juanc, February 12, 2010 4:02 PM
    Bull****

    1) Expensive, while they say it isn't.
    2) Covers just a few applications, like databases.
    3) The SSDs: how long will they last? In a real environment outside read-only DBs... a month?
    4) Can be done in software.
    5) RAM is faster, for reads and writes. The only issue is that RAID cards usually require expensive RAM, and they charge you a freaking amount of money for the battery backups.
  • mattshwink1, February 12, 2010 4:23 PM
    Replying to juanc:

    1. Enterprise-level solutions aren't cheap. Enterprise SANs run in the hundreds of thousands (or millions, depending on capacity and features). This would be in place of buying either all SSDs or a SAN. It's cheaper than both and solves a problem (capacity vs. performance). It's a niche product, but it could prove useful.

    2. The graphic on the MaxIQ details page (and the benchmarks that follow) seems to show that it will work for any workload. Besides, in the enterprise, your most storage-intensive workloads (which this is for) are usually databases anyway.

    3. We're talking X25-Es here, enterprise-level SSDs with a 2-million-hour MTBF. Even the second-generation X25-Ms are rated at 1.2 million hours MTBF. (See the MTBF arithmetic in the first sketch after the comments.)

    4. What software can accelerate storage performance?

    5. RAM is absolutely faster, but more expensive. High-end database servers can have 256GB (or more); RAM is essential to support high-performance workloads. But if you have a database (or application) of any significant size (e.g., 500GB or more), you will need fast disk to move transactions. And I haven't seen many RAID cards with more than 1GB of cache (enterprise storage solutions generally don't have more than 64GB, which is very expensive). You need to go to disk at some point, and this solution meets a mid-market need. Whether anyone sees enough benefit to use it is another question (but the benchmarks show that it could improve performance).
  • JohnnyLucky, February 12, 2010 5:39 PM
    Costs more than my new build!!! Ouch! :( 
  • belardo, February 12, 2010 7:40 PM
    Wouldn't it be cheaper to go with two SSD drives in a RAID-1 config? Then a 3rd as a nightly backup?
  • mattshwink1, February 12, 2010 7:53 PM
    Belardo wrote: "Wouldn't it be cheaper to go with two SSD drives in a RAID-1 config? Then a third as a nightly backup?"

    It absolutely would, but your capacity would be severely limited. By combining standard mechanical drives with SSDs (for caching), you get increased storage and a boost in performance.
  • Anonymous, February 12, 2010 9:30 PM
    You should really be testing arrays that are much bigger than the SSD cache, like 1TB in size (e.g., four WD VelociRaptors). There is no reason I would have a 5GB or 25GB RAID array of hard drives if I already had a 32GB SSD; I would just use the SSD to begin with.
  • El_Capitan, February 12, 2010 10:13 PM
    I will always Thumbs Up comments with Thumbs Down.
  • dneonu, February 13, 2010 11:07 PM
    There's no question that when it comes to computer storage, the best upgrade to speed up a computer is swapping the HDD for an SSD. Currently, the only problem is that SSDs are just too expensive for most people, while HDD prices are dirt cheap. Kind of like comparing LEDs to CCFLs.
  • g00ey, February 14, 2010 12:49 AM
    I've never liked Adaptec's controller boards, which are proprietary; if you build a RAID on an Adaptec controller, you're bound to Adaptec's controllers, unable to migrate your RAID cluster to another brand's controller. I would rather recommend a controller from 3ware or LSI, or any MegaRAID-based controller such as Dell PERC, Fujitsu-Siemens, IBM ServeRAID, or Intel controllers.

    The major reasons for these recommendations are good driver support and non-proprietary hardware.
  • g00ey, February 14, 2010 12:54 AM
    mattshwink1 wrote: "True, but if you're using Sun or NetApp, you are talking about a solution that costs hundreds of thousands of dollars. This solution would probably be under $1,000 (depending on disk sizes and number of disks)."

    But OpenSolaris uses ZFS. It's free and compatible with most PC hardware (though be careful which SAS controller you choose; HighPoint and Marvell don't agree well with Solaris). FreeNAS, which is FreeBSD-based, also uses ZFS, but it is less stable, and its ZFS is not as mature and well-implemented as in OpenSolaris.
  • Aragorn, February 14, 2010 3:57 PM
    The benchmarks here all read and wrote to random areas of the disks, mostly negating the effect of the SSD cache. Wouldn't a real-world system be likely to access certain sectors far more often, thereby creating a much bigger boost from the SSD? (Illustrated in the hit-rate sketch after the comments.)
  • ShadowFlash, February 15, 2010 2:45 AM
    Again, my own crazy idea I've mentioned before: instead of using RAID 5, a hardware RAID 3 (or 4) should be used with an X25-E as the dedicated parity drive. The SSD should negate the usual bottleneck of the dedicated parity drive, reducing overhead to the quality of the XOR engine used. 146GB 15,000 RPM SAS drives are common; use a bunch of them with a single (still pricey) 160GB X25. Highpoint still supports what it calls RAID 3, but it's really RAID 4 with a 64K stripe size, which will do just fine. (The parity sketch after the comments shows why the dedicated parity drive is the hot spot.)
  • randomstar, February 15, 2010 5:23 PM
    Didn't I see a device a few days ago that took an SSD and used it as a cache for a mechanical drive, creating in effect a hybrid drive?

    Wouldn't that be a great idea if it could support a RAID array behind it rather than a single drive? Is there a simple RAID controller that takes a single SATA input and handles the RAID across four drives in RAID 5? Maybe a backplane or something?
    I know it's a bit off-topic, but it got me thinking. Will research...
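
A note on the MTBF figures mattshwink1 cites in the thread above: MTBF is a population statistic rather than a per-drive lifespan, but it does translate into an approximate annualized failure rate (AFR). A minimal sketch of that arithmetic in Python, using the common constant-failure-rate approximation AFR ≈ hours per year / MTBF:

    HOURS_PER_YEAR = 8766  # 365.25 days x 24 hours

    def annualized_failure_rate(mtbf_hours: float) -> float:
        """Approximate AFR from MTBF, assuming a constant failure rate
        (the usual exponential-lifetime model)."""
        return HOURS_PER_YEAR / mtbf_hours

    # The MTBF figures quoted in the comment above
    for name, mtbf_hours in [("Intel X25-E", 2_000_000), ("Intel X25-M G2", 1_200_000)]:
        afr = annualized_failure_rate(mtbf_hours)
        print(f"{name}: MTBF {mtbf_hours:,} h -> AFR ~{afr:.2%}")

A 2-million-hour MTBF thus works out to roughly a 0.4% chance of failure per drive-year, not a 200-plus-year lifespan for any individual drive.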
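
Aragorn's point about access patterns can be checked with a toy simulation: a cache that covers only a small fraction of an array gets almost no hits under uniformly random access, but fares far better when some blocks are much hotter than others. A minimal sketch, with an LRU cache and a Pareto-distributed (Zipf-like) skew; all sizes and parameters here are made up purely for illustration:

    import random
    from collections import OrderedDict

    def lru_hit_rate(accesses, cache_blocks):
        """Replay a block-access trace against an LRU cache; return the hit rate."""
        cache, hits = OrderedDict(), 0
        for block in accesses:
            if block in cache:
                hits += 1
                cache.move_to_end(block)
            else:
                cache[block] = True
                if len(cache) > cache_blocks:
                    cache.popitem(last=False)  # evict the least recently used block
        return hits / len(accesses)

    random.seed(0)
    ARRAY, CACHE, N = 100_000, 3_200, 200_000  # cache covers about 3% of the array

    uniform = [random.randrange(ARRAY) for _ in range(N)]
    # Heavy-tailed skew: low-numbered blocks are accessed far more often
    skewed = [int(random.paretovariate(1.2)) % ARRAY for _ in range(N)]

    print(f"uniform access: {lru_hit_rate(uniform, CACHE):.1%} hit rate")
    print(f"skewed access:  {lru_hit_rate(skewed, CACHE):.1%} hit rate")

Under uniform access the hit rate collapses to roughly the cache-to-array size ratio, while the skewed trace keeps the bulk of requests in cache, which is exactly why a purely random benchmark understates what a hot-spot-heavy production workload could gain.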
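
Finally, ShadowFlash's RAID 3/4 suggestion rests on how dedicated-parity RAID behaves: every stripe update must also write the single parity drive, so that drive becomes the bottleneck, and putting the fastest device there is what relieves it. A minimal sketch of the XOR parity math itself, with illustrative four-byte blocks and no tie to any particular controller:

    def xor_parity(blocks):
        """Compute one stripe's parity block: the byte-wise XOR of all data
        blocks. In RAID 3/4 this always lands on the dedicated parity drive,
        so that drive is written on every stripe update."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte
        return bytes(parity)

    def rebuild(surviving_blocks, parity):
        """Recover a lost data block by XOR-ing the parity with the survivors."""
        return xor_parity(list(surviving_blocks) + [parity])

    stripe = [b"AAAA", b"BBBB", b"CCCC"]        # one stripe across three data drives
    p = xor_parity(stripe)                      # written to the parity drive
    assert rebuild(stripe[:2], p) == stripe[2]  # drive three lost, then recovered

Because the parity drive sees a write for every stripe update in the array, its random-write performance caps the whole array, which is the bottleneck an SSD in that slot would remove.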