Slow Performance: 4x OCZ SSDs and Adaptec RAID Controller

August 1, 2008 8:04:07 PM

Ok, so something is wrong. I'm using a test system now so this isn't my final setup, but I should still get better results than what I'm finding now.

Configuration:
  • System Setup: Dell Precision 390, Intel E4500, Intel i975x, 4GB RAM, WinXP SP2 *Note
  • System HDD: Dell SAS 5/iR controller, Cheetah 136GB 10K SAS
  • RAID Controller: Adaptec RAID 3805, 128MB RAM, w/ SATA dongle (w/ newest BIOS and drivers)
  • Test Drives: (4) OCZ Core 64GB SSD

    Note: The system SAS controller is connected to the PCIe x8 slot (x4 wiring), I installed a PCI video card in the bottom slot, and put the Adaptec card in the PCIe x16 slot where the video card was.

    Here are my RAID controller settings:
  • 32KB stripe size (w/ RAID 0 setup)
  • NTFS partitions w/ 4KB sector size
  • Write-Back: Disabled
  • Read-Cache: Disabled
  • Write Cache mode: Disabled (write-through)
  • Write Cache Setting: Disabled (write-through)

    Here are my results with one drive:


    And then here with 2 in a RAID 0 setup:


    Why are my results so poor? A RAID 0 setup with all 4 SSDs has the same performance as RAID 0 with 2 SSDs. I had older drivers and controller BIOS previously, but got the same results, so I checked for newer versions; there were some, so I updated. I tried various settings for the controller with little to no difference. I also tried Iometer and got similar results. I tried a RAID 5 configuration as well, to no avail. Would using the SAS connectors/dongle make a difference? What am I doing wrong?
    August 1, 2008 9:11:15 PM

    I just ran one of the SSDs off of the motherboard's SATA connectors (from the onboard Intel chipset) and got these results:


    Something's up with the controller.
    August 5, 2008 1:54:16 PM

    No bites? Anyone?
    August 5, 2008 3:54:57 PM

    1. Why are all of the cache settings disabled on the Adaptec controller? Enable these and try again.

    2. Instead of using HDTach, use HDTune, and select a 64K block size. You should get similar results. Then switch block sizes to 1MB and see what you get.

    3. Why a 32K stripe size? I would recommend 64K.

    4. When you created the NTFS partition on the RAID array, did you use the command line DISKPART utility so that you align the partition on a stripe boundary? If not, delete the partition and re-do this.
    August 6, 2008 2:36:32 AM

    *Edit* Sorry wrong thread.
    August 6, 2008 8:53:16 PM

    SomeJoe7777 said:
    1. Why are all of the cache settings disabled on the Adaptec controller? Enable these and try again.

    2. Instead of using HDTach, use HDTune, and select a 64K block size. You should get similar results. Then switch block sizes to 1MB and see what you get.

    3. Why a 32K stripe size? I would recommend 64K.

    4. When you created the NTFS partition on the RAID array, did you use the command line DISKPART utility so that you align the partition on a stripe boundary? If not, delete the partition and re-do this.


    1. I disabled all the caches because that would show me the raw performance of the disks without it being influenced by the cache, correct? I also disabled the cache on the disk (as configured from Adaptec Storage Manager) since SSDs don't come with a cache on the drive.

    2. Ok, I'll try HDTune and let you know what I get.

    3. I just picked one; it shouldn't make that much of a difference, right?

    4. I did not use DISKPART and I haven't heard of "align[ing] the partition on a stripe boundary". I will try this by following this example: http://support.microsoft.com/kb/929491

    With regards to #3, I'll try the 64k stripe size, but what do you recommend for the sector size? The same? Thanks for your reply!
    August 6, 2008 11:10:09 PM

    gwolfman said:
    ...
    4. I did not use DISKPART and I haven't heard of "align[ing] the partition on a stripe boundary". I will try this by following this example: http://support.microsoft.com/kb/929491
    ...

    Ok, so I went to run the command and it said:
    Quote:
    The arguments you specified for this command are not valid.

    I tried it without the align argument, and it worked. I went to microsoft.com to search for answers and found some pages that listed "align=N" as an argument and others that didn't. I found this blurb in a forum:
    Quote:
    You need the DISKPART version 5.2 from Windows 2003 (or 6.0 from Vista) in
    order to use the ALIGN parameter. Windows XP does not support this feature.
    Have you run into this?

    I'll see if I can find my Win2k3 discs and pull diskpart off of that.
    August 7, 2008 12:30:52 AM

    Ah. Yes, apparently the Windows XP version of DISKPART does not support the align parameter. I haven't run into that, my home server is Windows Server 2003. You should be able to use the Windows Server 2003 version to create the partition.

    If you pick a stripe size of 64K, then use align=64 to align on the stripe boundary. If you use another stripe size, adjust the align parameter accordingly.
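
    For reference, here's roughly what that looks like on Windows Server 2003 SP1 or Vista (a sketch only; the disk number and drive letter below are placeholders, so check "list disk" first). Run diskpart, then at the DISKPART> prompt:

        list disk
        rem pick the disk number that corresponds to the Adaptec array
        select disk 1
        rem align=64 matches a 64K stripe size; use align=128 for a 128K stripe, etc.
        create partition primary align=64
        assign letter=E
        exit

    Format the volume as NTFS afterwards from Disk Management or the format command as usual.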

    When the partition is not aligned on a stripe boundary, what can happen is that requests from the computer to the array that should fit within one disk will have to be spread across 2 disks.

    For example, take a 9-drive RAID-5 with a 64K stripe size. Each 64K stripe is spread across the 9 disks, with one drive holding the parity for that stripe, which works out to 8K blocks on each of the 8 data disks plus an 8K parity block. The standard NTFS cluster size is 4K, so when the computer requests one cluster from the array, if the partition is aligned properly, this request will go to exactly one drive.

    If you let the Disk Management application format the array, the partition is created starting at sector 63, the first track boundary after the MBR. This leaves the start of the partition one sector (512 bytes) short of an aligned boundary. Now, when the computer requests a 4K cluster from the array, part of the cluster can land on one drive and the rest on the adjacent drive (for example, 7 sectors on one and 1 sector on the other), resulting in 2 I/Os to 2 different drives instead of 1 I/O to 1 drive. This can reduce array performance a lot for certain applications.
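
    To put rough numbers on it: the default partition start at sector 63 is 63 x 512 = 32,256 bytes into the disk, which is 512 bytes short of the next 4K boundary (32,768), so cluster boundaries never line up with the array's stripe boundaries. With align=64 the partition starts at 65,536 bytes (sector 128), an exact multiple of both the 4K cluster size and the 64K stripe, so an aligned 4K cluster always maps onto a single drive.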

    To your other points:

    1. The caching is important for array performance because the controller uses it to improve the sequencing of I/Os. Turning all the caches off won't give you a realistic evaluation of the array's "raw" performance, because the controller won't be issuing I/O commands as fast as it does when the caches are turned on.

    3. Some controllers are specifically optimized for 64K stripe sizes. Some people have posted threads here in the forum that the Intel ICH controllers have performance degradations at stripe sizes other than 64K.

    You cannot control the sector size - sectors are fixed by the device 99.9% of the time at 512 bytes per sector.

    You can control the cluster size, which is the base allocation unit size for the NTFS file system. NTFS defaults to 4K clusters, and I would leave it at that. I have experimented with other cluster sizes and found that they did nothing for performance. NTFS can go up to 16TB with 4K clusters, so there is really no compelling need to go to larger cluster sizes. Further, you can make more efficient use of the controller's cache with the smaller 4K clusters.
    August 7, 2008 2:45:02 PM

    Hey, thanks a lot for your responses.

    I tried pulling diskpart.exe from my Win2k3 discs but the app never ran; I guess because the kernel is different. But I'll boot from my Vista DVD and run diskpart from there and see if that works :)

    I'll try your suggestions and get back to you, probably sometime Friday.
    August 8, 2008 8:32:36 PM

    Ok, some interesting results with the align=64 argument:

    This is with 64KB stripe size and 64KB reads using HD Tune:


    Now here with 64KB stripe size but with 1MB reads:


    This is starting to look better. Why is the CPU usage so high?

    Ok, now here are the interesting results...
    Intel chipset (ICH7R) with 128KB stripe size, 64KB reads, not aligned:


    And now with the same setting but align=128:


    I get about 5MB/s more when it's aligned but why does the CPU usage go up so much (according to HD Tune)?

    Now to look at 1MB reads...
    Intel chipset, 128KB stripe, 1MB reads, not aligned:


    Intel chipset, 128KB stripe, 1MB reads, aligned:


    Once again, why are the Intel results so much better and why does the CPU usage jump up when I align the partition?

    Any comments would be greatly appreciated.
    Thanks!
    August 8, 2008 8:57:44 PM

    lolz you have poo SSDs!!!!

    i think that is the answer to your question :) 

    my 5 1/4" pork-n-beans hard drive from 1983 run faster than your silly SSD ARRAY!!!!!!!!!!!!!
    August 8, 2008 10:02:58 PM

    ereetos said:
    lolz you have poo SSDs!!!!

    i think that is the answer to your question :) 

    my 5 1/4" pork-n-beans hard drive from 1983 run faster than your silly SSD ARRAY!!!!!!!!!!!!!

    Thanks man, I luv u 2
    August 9, 2008 3:39:24 AM

    Hmmm ... apparently some people are eager to prove they have nothing to add to the discussion ... :sarcastic: 

    Anyway ...

    I think what's going on here is the interaction between the stripe size, alignment, and the physical characteristics of the SSDs. Since the SSDs have no cache, all of the reads are directly from the flash chips. I think what you might have to do to optimize this is to find a combination of stripe size and alignment that results in the best performance. This will mean that the stripe size and alignment correspond to the chunk size that the SSDs are using internally.

    At first, don't worry about alignment. Try successively increasing the stripe size: 64K, 128K, 256K, 512K, 1MB. Once you find the best stripe size, then attempt to align it to get the best performance and lowest CPU utilization. For instance, if you find that 256K stripes work best, then try unaligned, and then align=64, 128, 256, 512, and 1024.

    Hard drive performance is less dependent on the alignment and physical characteristics of the device than the SSDs are, and the lack of cache on the SSDs exacerbates the problem.

    This may be a lot of reformatting and retesting, but in the end you can be fairly sure you're getting the maximum possible performance out of the array. Don't forget that the RAID controller you're using is designed for hard drives, not SSDs, so there may be some optimizations that simply can't be realized.
    August 9, 2008 11:28:10 PM

    I've been having some trouble with my OCZ RAID setup as well. They don't particularly like HD Tune. Try ATTO and see what that tells you.

    You might also try this: http://managedflash.com/
    There's a demo you can download. I have not had a chance to try it, as my drives are still being RMA'd.
    August 11, 2008 12:44:13 AM

    It's probably in degraded mode. It takes awhile to make the RAID.
    August 11, 2008 2:07:00 PM

    xxsk8er101xx said:
    It's probably in degraded mode. It takes awhile to make the RAID.

    Except for the fact that RAID 0's don't have a degraded mode.
    August 12, 2008 10:50:50 PM

    i'm tellin ya man... its becuz of ur poo arry and crap SSSD disk!

    did i evur tell u i kno whta nVidia stand for?!!!??

    August 24, 2008 3:12:37 PM

    What's wrong?

    NOTHING is wrong. You are on the leading edge of "discovery" of what will soon become known as the Flash SSD "read-write" penalty.

    Congratulations for turning off the write cache -- in doing so you help expose the truth about Flash SSD.

    Ask yourself, why does Intel's new "extreme performance" SSD drop from 35,000 IOPS READ to only 7,000 IOPS when there is a 2:1 read:write workload? See their spec sheet on the X25-E. Since 100% writes are 3,300 and 100% reads are 35,000 IOPS, shouldn't the combined performance at 67% read be more like 24,000 IOPS? Why only 7,000 IOPS?

    You'll find out....

    And Intel achieved even this miserable performance only after putting a massive (relative to disk) write cache in front of the flash, and did their tests with the write cache enabled. I noticed that no one is saying HOW MUCH DRAM is on the Intel device, but (oops) there goes the "non-volatility" argument for Flash!!! Lose power and you have lost a LOT of writes in the cache! Anybody wonder why Intel's SSD uses so much power? It's the massive DRAM write cache they needed to get decent write performance!!!

    Sorry, but when all you guys start looking really hard at how NAND flash actually works, you'll discover the truth. Uncached NAND flash is ridiculously slow whenever you are NOT doing 100.0000% reads. Insert even a few writes into the workload and the WHOLE THING slows down to a crawl. DRAM write cache can help a little, but not even as much as Intel's spec sheet shows -- note they had to drive 32 outstanding IOs into the disk's queue to get the numbers they did. That kind of queue depth SIMPLY NEVER EXISTS IN THE REAL WORLD!!!

    Oh...and OBTW...for those who keep saying "it will get better with time", actually the opposite is true. As MLC flash (on which all of the future cost reductions are based) goes from 2 bits/cell to 4, 8 and 16 bits per cell, this problem gets WORSE...not better.

    Oops....(again)

    By the way, another thing that never happens in the real world is 100% random IO -- and that is what ALL these ridiculous performance comparisons are based on. Disk gets MUCH better when the percentage of random-to-sequential IO is in realistic ranges (like 50/50) and so a huge chunk of the SSD performance benefits simply evaporate. This is why IDC's benchmarks recently found only a very small improvement for flash vs. 7,200 RPM disk (and also found several places where disk was substantially faster).

    Keep it up guys...at this rate you will rapidly discover the truth of flash SSD.

    Here's a hint. For your next test, compare a flash-based RAID-5 to a disk based raid-5 of equal CAPACITY. Then, pull a drive and see how long it takes to rebuild parity on the flash SSD array. You'll be blown away at how much faster spinning disks are than Flash SSD -- especially in the rebuild!
    August 25, 2008 5:44:41 AM

    anon_reader said:
    What's wrong?

    NOTHING is wrong. You are on the leading edge of "discovery" of what will soon become known as the Flash SSD "read-write" penalty.

    Congratulations for turning off the write cache -- in doing so you help expose the truth about Flash SSD.

    Ask yourself, why does Intel's new "extreme performance" SSD drop from 35,000 IOPS READ to only 7,000 IOPS when there is a 2:1 read:write workload? See their spec sheet on the X25-E. Since 100% writes are 3,300 and 100% reads are 35,000 IOPS, shouldn't the combined performance at 67% read be more like 24,000 IOPS? Why only 7,000 IOPS?

    I love this question. Why only 7000 IOPS? It sounds so terribly slow. Until you realize that even the fastest hard drives struggle to get 500-700 IOPS read or write.
    August 27, 2008 1:55:41 PM

    RocketSci...

    1) The 7,000 IOPS number is derived with IOmeter by driving the queue depth at the disk to 32 outstanding requests -- which NEVER happens in the real world. At a more realistic queue depth of 3, the Intel device will do maybe 1000-1500 IOPS in a 2:1 read/write profile. On an IOPS/dollar basis, this hardly justifies the $1,400 price tag (20x spinning disk) for the X25-E device, UNLESS this IOPS number translated into meaningful performance. According to IDC (and MANY other application benchmarks), it does not.

    2) In the REAL world, a significantly large percentage of the read workload IO requests cannot be issued by the host (get into the disk queue) until a previous write has completed and ack'd back to the host. This is called "synchronous" IO and it is predominant in the real-world but almost never modeled in benchmarks. The asymmetrical performance of read vs. write in flash is a huge problem here. The only benchmarks that reliably model this behavior are application benchmarks such as TPC-C, which is one reason why you never see flash SSD used in that benchmark.

    3) Now...the question I actually posed was...in the 2:1 read:write workload; why doesn't the Intel device do [[35,000x2]+[3,300x1]]/3 = 24,433 IOPS instead of 7,000?

    If you ponder the question I actually posed, you'll move a step closer to understanding why folks like IDC are finding only marginal performance benefits for SSD, while also finding numerous areas where spinning disk is faster.
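
    (One way to run the numbers from the spec figures quoted above: a simple average by IOPS assumes every IO takes the same amount of time, but it's the time per IO that has to add up. In a 2:1 mix, every 3 IOs cost roughly 2/35,000 + 1/3,300, about 0.00036 s, which caps the blend at about 3/0.00036, roughly 8,300 IOPS rather than 24,433, before any other overhead is counted.)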



    August 27, 2008 2:32:05 PM

    Oh, by the way, anybody care to speculate why Intel is not saying just how big that DRAM write cache on the X25-E is?

    Why on earth is Intel NOT SAYING?

    http://download.intel.com/design/flash/nand/extreme/ext...

    Extrapolating from the "base" performance of SLC Flash SSD on writes (about 130 IOPS according to the Imation white paper), I'm guessing that Intel stuffed about 128MB of DRAM (volatile) write cache onto the X25-E, which would also explain the high power consumption of the device. The typical spinning disk needs only an 8MB cache, because read performance and write performance are "balanced" (roughly equal).

    In the "Enterprise" markets at which X25-E is aimed, customers will simply NOT accept the risk of losing 64 or 128 Megabytes of writes in the event of a power failure or device failure -- this is why all the major Enterprise-class disk array vendors turn off the write cache on the disk. So...for Flash SSD this means it's back to about 130IOPS write, which will in turn throttle back the read performance (due to synchronous IO from applications) and...well...you know the rest.

    Flash SSD = WORM device!
    August 28, 2008 11:01:35 PM

    Interesting take on all this, Anon. I can see how what you're talking about applies to SSDs. It's good to see someone understand why I disabled the write caches for the test, though most of my tests were, or should have been, read-only. However, I did run various Iometer tests with 100% reads and I still could not get above ~140MB/s, even with 4 disks in RAID 0. It doesn't seem right. Anon, do you have any experience with Adaptec's RAID controllers?
    August 29, 2008 12:26:07 PM

    Hello, I tried 2 OCZ Core V1 drives in RAID 0 on an Adaptec controller and I can confirm they work badly. A little better with the integrated Marvell chipset on my mobo, although the best results are obtained by far with Intel's ICH9R controller. No contest. I don't know why it's like this, though.
    August 29, 2008 10:14:52 PM

    gwolfman said:
    Interesting take on all this, Anon. I can see how what you're talking about applies to SSDs. It's good to see someone understand why I disabled the write caches for the test, though most of my tests were, or should have been, read-only. However, I did run various Iometer tests with 100% reads and I still could not get above ~140MB/s, even with 4 disks in RAID 0. It doesn't seem right. Anon, do you have any experience with Adaptec's RAID controllers?


    Natively, the Adaptec 3805 is a SAS controller and uses STP (SATA tunelling protocol) to talk to SATA devices. My guess is that it's a poor implementation of STP.

    Try connecting a pair of the SSDs direct to the on-board SATA ports on your MOBO and striping them using Windows disk manager.
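
    If you'd rather script it than click through Disk Management, the diskpart equivalent is roughly this (a sketch; the disk numbers and drive letter are placeholders, and converting the disks to dynamic wipes whatever is on them):

        select disk 1
        convert dynamic
        select disk 2
        convert dynamic
        rem software stripe across the two dynamic disks
        create volume stripe disk=1,2
        assign letter=E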
    August 29, 2008 10:15:02 PM

    gwolfman, what is your question about Adaptec controllers?
    October 1, 2008 2:10:08 PM

    Quote:
    I'm guessing that Intel stuffed about 128MB of DRAM (volatile) write cache onto the X25-E, which would also explain the high power-consumption of the device.


    http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx...

    Take a CLOSE look at the Samsung DRAM part number for the X25-M.

    http://www.bit-tech.net/news/2008/08/20/intel-x18-m-80g...

    16MB for the X25-M, which is what modern hard drives have.

    http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx...

    11K IOPS at 100% random write with a queue depth of 1.

    "which we presume acts as a buffer and helps the Advanced Dynamic Write Levelling technology do its thing."

    "Finding good data on the JMicron JMF602 controller is nearly impossible, but from what I've heard it's got 16KB of on-chip memory for read/write requests. By comparison, Intel's controller has a 256KB SRAM on-die."

    And the X25-M is the MLC version, which is known to have more fundamental write problems.
    October 25, 2008 3:06:52 PM

    Legacy OS like Windows Vista, XP, and Applications like Microsoft Office 2003, 2007, etc. have built in, inherent flaws with regard to SSDs.

    Specifically, optimizations of these OSes for mechanical hard drives, like superfetch, prefetch, etc., tend to slow down rather than help performance on an SSD: they are unnecessary for speeding up reads, and they add unnecessary writes of small files, which is where SSDs are slower than a regular hard drive.

    Things like Vista's automatic drive defragmentation do nothing for SSDs except slow them down.

    Properly optimized, even low-cost 2007-generation SSDs test out as equivalent to a 7200 rpm consumer-grade drive, and typical SSDs made in 2008 or later tend to outperform mechanical hard drives.

    See the thread below for a detailed discussion of SSD performance tweaks and what it takes to make them perform well with legacy OSes and applications.

    http://www.ocztechnologyforum.com/forum...display.php?s...
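
    For anyone curious, the usual Vista-era version of those tweaks (disabling Superfetch, the prefetcher, and the scheduled defrag) boils down to something like this from an elevated command prompt; the service and task names below are the standard Vista ones and may differ on other builds:

        rem disable and stop the Superfetch service
        sc config SysMain start= disabled
        net stop SysMain
        rem turn off the prefetcher
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters" /v EnablePrefetcher /t REG_DWORD /d 0 /f
        rem disable the scheduled defrag task
        schtasks /change /tn "Microsoft\Windows\Defrag\ScheduledDefrag" /disable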

    December 11, 2008 1:52:23 PM

    Legacy OS = anything before Win2K, like the 9x and NT lines.
    December 11, 2008 2:03:17 PM

    Superfetch actually helps performance, even with SSDs, as no SSD can match the speed of RAM.
    December 11, 2008 2:07:16 PM

    d111 said:
    Legacy OS like Windows Vista, XP, and Applications like Microsoft Office 2003, 2007, etc. have built in, inherent flaws with regard to SSDs.


    What I think is that the term "Legacy OS", in Windows terms, means OSes before Win2K, like the 9x and NT lines.

    Other than that I fully agree.

    There are plenty of optimization tips for SSDs in the OCZ forums, and most of them work on Intel SSDs too, and some even on spinning drives.

    Like disabling 8.3 names and indexing.

    I am still trying to pin down one little fact: which cluster size is better for SSD drives, smaller or bigger?

    If they have vastly faster seek times one might think small clusters. Somebody said that it doesn't really matter.. =/
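
    For the 8.3-name and indexing tweaks mentioned above, the switches are roughly these (from an admin command prompt; cisvc is the XP Indexing Service, WSearch is its Vista counterpart):

        rem stop new files from getting short 8.3 names
        fsutil behavior set disable8dot3 1
        rem disable the indexing service
        sc config cisvc start= disabled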
    December 11, 2008 2:10:47 PM

    cjl said:
    Superfetch actually helps performance, even in SSDs, as no SSD can match the speed of RAM.


    I thought that Superfetch meant using something like a USB memory stick as an alternative store for system files, because even USB is faster than normal hard drives when booting the computer.

    .j
    December 11, 2008 2:21:24 PM

    That's ReadyBoost. Superfetch pulls commonly used files into available RAM when the system is idle, so when you use them, loading is practically instant. It's also why Vista appears to use so much RAM, although it will instantly free up all of the RAM used by Superfetch if another program needs it.
    December 11, 2008 3:10:17 PM

    Thanks for the information. I have not moved from XP yet, so it is a bit irrelevant to me. =)
    But sooner or later...
    December 22, 2008 11:57:38 PM

    I just yesterday ripped out my Adaptec 51645 SAS/SATA controller with battery backup on the card. I had 8 OCZ SSDs hooked up to it, plus 4 WD VR drives and 4 Savvio 10K.2 drives. I have tons of benches on the Savvios and the VRs and never really had problems. The Adaptec gets hot, though not overheating, without the SSD drives. Then I hooked up the SSD drives and suddenly my alarm kept going off. The card seems to get much hotter when trying to run these things, and I had to put a fan by it. That wasn't too bad, so I proceeded to test some more. I kept getting lockups or drives suddenly missing from the array. After rebooting they were back, though. I was getting good performance when it ran, but I just could not get past the problem. So I hooked 6 of them up to my motherboard's Intel controller in RAID 0 and have not had one problem at all. When I run benchmarks I get low performance, but the drives seem to respond quickly, so I'm not sure the benchmark is reporting real throughput.

    I am kind of bummed about this card. I now have an Adaptec 31605, an Areca 1680ix-16 (with 4GB cache), and this Adaptec 51645, and none are really perfect. The Adaptecs are much more refined than the Areca but still have problems. I am wondering if maybe the simpler RAID cards are better for SSDs or something.

    Sorry if there is nothing in here to help the original poster, but I thought I would share my experience. I too was not happy. I was getting insane benchmarks but could not keep it stable.
    January 19, 2009 12:43:19 AM

    I think that the "read write" penalty comment on the Intel, although along a good path, is also very misinformed.

    Many single-threaded workloads cannot benefit from the parallel operations of modern RAID arrays. An SSD that can perform 10,000 random mixed IOps at 8K has a MAJOR advantage over 50 HDDs that can perform at 200 IOps. Sure, the 50 drives probably have an enormous sequential read/write capability, and they have incredible $/MB. But that's not why you buy SSD.

    An Intel E drive is for a workload in which single or low threaded performance is necessary. Try performing 50/50 read write, 8K random workload (OLTP) on your 50 drives, and you'll find out that the latency isn't much better than it is on one. About 5 to 7 ms on an enterprise drive and >10ms on a large SATA.

    If you can avoid it, don't place an SSD behind a RAID card; it actually reduces IOps. Instead, put it as close to your system bus as possible. Sun is putting six SATA busses on their Intel-based (4150) servers, almost directly connected to the I/O chip.

    On a single E series we see almost 14,000 IOps sustained random 8K, and 14,000 IOps read. Of course it's a little less if you mix in some streams. But the point is that we see <200uS (microseconds) latency. That's about 35 times better than a 10K RPM FC drive, and a heck of a lot better than a 7,200 RPM drive.

    So the rhetoric works only when you're considering a certain workload. In situations where latency and single threaded random performance is needed such as OLTP, SSD is king.

    Also, Intel SSD's have super-capacitors to retain DRAM memory in case of power outage (not sure how long).

    Finally, if you're interested in using this in your computer and you fear:
    -- Power outages / data loss / untried technology
    -- Writes getting in the way of reads
    -- High cost per MB
    -- Sequential performance suffering/interfering
    You should be looking at Solaris ZFS with ZIL and L2ARC.
    http://blogs.sun.com/brendan/entry/test
    http://blogs.sun.com/perrin/entry/the_lumberjack

    You can build a massive volume and get great read latency where it matters, plus microsecond synchronous commits, by adding a few cheap SSDs...
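
    As a rough sketch of what that looks like on Solaris (the pool name and device names here are made up; the "log" vdev is the ZIL/slog, the "cache" vdev is the L2ARC):

        # main pool on spinning disks
        zpool create tank mirror c0t1d0 c0t2d0 mirror c0t3d0 c0t4d0
        # small, fast SSD as a separate intent log for synchronous commits
        zpool add tank log c1t0d0
        # bigger/cheaper SSD as a second-level read cache
        zpool add tank cache c1t1d0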

    --Ken
    January 29, 2009 2:31:15 PM

    The issue is obvious in the first post! The RAID card in question is a PCIe x8 card and it's in an x4-only slot. You might as well have just used the onboard ports, because you are killing the RAID card's I/O performance.

    I now have 4 Samsung SLC SSDs (the same ones that OCZ uses) in RAID, tested on both an Adaptec 5405 and a Highpoint 3510. My board is an ASUS P5Q Pro with a Q9300 and 2GB of 1066 RAM. The first 2 drives started on the ICH10R in RAID 0, maxing out at 134MB/s reads. I got the Adaptec 5405 and 2 more drives, and that instantly hit 435MB/s reads in the x8 slot. I was really disappointed by the long boot time of the Adaptec controller, and saw some reviews indicating the Highpoint did not have this issue. The Highpoint did not match the Adaptec with its shipping BIOS, only giving 380-ish reads; I flashed to the latest and now have the 430MB/s reads back and a much shorter boot. I'm not knocking the Adaptec: it's a better card with way better management software. It just takes longer to get past its BIOS, and I don't see that changing with newer firmware unless Adaptec completely overhauls the whole software/firmware stack. The Highpoint shouldn't be downgraded either; it just does things differently. The HPT 3510/20 only lets you manage the drives and arrays in the management software, not card settings; card settings must be changed in the BIOS at boot time. Adaptec lets you do pretty much anything in the management console. Both cards are adequate in home or server environments. The Adaptec has the faster 1.2GHz chip vs. the 800MHz on the 3510/20, and both have 256MB of DDR cache. Both chips have similar power dissipation, 11 watts for the 800MHz and 12 watts for the 1.2GHz, and the Adaptec has a slightly larger heatsink. The Adaptec is ever so slightly faster and has better management software, but is slower at boot and cost me $47 more. The 3510 is also sold on the Egg, whereas you have to go somewhere else to find the Adaptec.

    All this being said, get a board that supports the RAID card and your SSD investment. I have also tried both cards on an older AMD rig (M2N32-SLI, Vista Premium, X2 6400+ at 3.2GHz) and got the exact same speeds from both cards, while the onboard RAID (590 chipset) was terrible at 130-ish reads. I have played with all the different stripe sizes, and 256K on either RAID card is the best with SSDs. I should also point out that I have a pair of Raptors (not Velocis) that do fine on the onboard chipsets in RAID. Can't explain why, but the SSDs MUST be on a true hardware RAID card capable of full bandwidth. No PCIe x4 junk allowed; although in theory 4 lanes should support 400MB/s, it doesn't work out that way! I have no way of testing an x4 bus with any of my rigs, but I am sure this is your issue.

    I have also seen reports that MLC drives can do factory-advertised specs and beat my SLC drives if they are on one of these two RAID cards. $300 seems to be the entry price, plus an x8 PCIe slot! Looking back, 4 MLC drives and the RAID card would have been cheaper and faster (no write-stuttering issues on these cards), at reduced lifespan (MLC write amplification).
    I should also note that I strongly recommend the battery cache backup module for both of these cards, especially if you use MLC drives, because we KNOW the cache will be holding writes waiting to go to the drives! A power loss at the wrong moment could pretty much guarantee data loss on a write!
    May 24, 2009 9:58:20 PM

    This is my RAID setup; I think it's the fastest for the money, period. I use six OCZ Throttle eSATA 8GB drives in RAID 0. The drives are rated at 90MB/s read and 30MB/s write. I get one 45GB partition. I put Windows 7 build 7127 x64 and a couple of games on it, including GTA4. Write cache is enabled in the ICH10R Intel storage manager. The onboard RAID is great and doesn't use up my slots. The theoretical max speed is 540MB/s but I get 520MB/s, which equals a 7.7 rating in disk performance. Write performance is about 180MB/s, and it doesn't seem to drop by 20MB/s the way the read speed does. Board is an EP45-DS3L.


    http://www.ocztechnologyforum.com/forum/showthread.php?...
    March 24, 2010 12:50:07 AM

    The Intel X25-E uses most of the DRAM to cache the page map and only has a small amount for caching user data.

    7,000 IOPS with a 2:1 split works out to roughly 4,668 read IOPS and 2,334 write IOPS if you split by IOPS count rather than by time. Based on the quoted stats, those would take about 4,668/35,000 ~0.133s for the reads and 2,334/3,300 ~0.707s for the writes, 0.841s in total, leaving about 0.159s of overhead (4,668 + 2,334 = 7,002). The theoretical time-weighted maximum would be about 5,552 + 2,776 = 8,328 IOPS.

    SSDs may prefer a smaller stripe setting, for better alignment and a smaller minimum write. Does a 4KB write to a 64KB stripe cause all 64KB to be read and rewritten?

    The Intel SSD operates its 10 channels independently, so you need an IO queue deeper than 10 (or perhaps several MB per IO); otherwise performance is closer to that of a single channel.