Adaptec's Series 5 Unified Serial Controllers

RAID 6 I/O Performance (Intact & Degraded)

  • Fedor
    The degraded figures for streaming writes don't look right. They are too close to (or above??) the normal/optimal state numbers. One idea that comes to mind is that if the writes were too small, they would all go into the cache regardless and render the results somewhat useless.
    Reply
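
A rough way to see Fedor's cache hypothesis (the figures below are assumptions for illustration, not numbers from the article): if a write burst fits inside the controller's write-back cache, it is acknowledged at host-bus speed regardless of whether the array is optimal or degraded, so such a run would say little about the array itself.

```python
# Crude model of a write-back cache absorbing a burst (assumed figures,
# not from the article; ignores the cache draining while the burst runs).
CACHE_MB = 512            # assumed onboard cache size
BUS_MBPS = 1000           # assumed host-to-controller transfer rate
ARRAY_WRITE_MBPS = 400    # assumed sustained array write rate

for burst_mb in (100, 512, 4096):
    cached = min(burst_mb, CACHE_MB)      # absorbed immediately by the cache
    drained = burst_mb - cached           # has to wait for the disks
    seconds = cached / BUS_MBPS + drained / ARRAY_WRITE_MBPS
    print(f"{burst_mb} MB burst: ~{burst_mb / seconds:.0f} MB/s apparent write rate")
```

Only once the burst is well past the cache size does the apparent rate converge on what the disks can actually sustain.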
  • h4vok
    Fedor: "The degraded figures for streaming writes don't look right. They are too close (or above??)"
    The figures look OK. Sequential writes to a degraded array are basically done the same way as writes to an optimal array. The only difference is that the write to the failed drive is skipped (see the sketch below).
    Reply
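
Here is a minimal sketch of h4vok's point. It is generic single-parity pseudologic, not Adaptec's firmware, and RAID 6's second parity block is omitted for brevity; the idea is the same: on a full-stripe sequential write the parity comes from the new data alone, so a degraded array performs the same work minus the write to the missing member.

```python
# Generic single-parity full-stripe write (illustrative, not Adaptec's code).
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def full_stripe_write(disks, data_blocks, parity_idx, failed_idx=None):
    """disks: one bytearray per member; failed_idx: the dead member, if any."""
    stripe = list(data_blocks)
    stripe.insert(parity_idx, xor_blocks(data_blocks))  # parity from new data only
    for member, block in enumerate(stripe):
        if member == failed_idx:
            continue                   # degraded: this one write is skipped
        disks[member][:] = block       # optimal: every member gets written
```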
  • guan1307
    I am confused by your testing report, because our testing figures for the Areca ARC-1680 with firmware 1.45 are better than what you report.
    Reply
  • bull2760
    Can someone tell me what the database server pattern, web server pattern, and file server pattern mean? When I run IOmeter those options are not present; I can select 4k-32k or create a custom script. Also, at what stripe size are these tests being run? I purchased this exact controller and have not duplicated TG's results. It would be helpful if you explained in detail how you configured the RAID setup: RAID 5, 6, or 10, with a 16k, 32k, 64k, 128k, 256k, 512k, or 1MB stripe size.
    Reply
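
On bull2760's question about the pattern names: they are not stock buttons in IOmeter's UI; reviewers typically define them as custom access specifications. The values below are the widely circulated Intel/StorageReview-style definitions, given only as a reference point; the exact specifications and stripe sizes used in the article may differ.

```python
# Commonly circulated IOmeter "server pattern" access specifications
# (reference values only; the article's exact settings may differ).
workload_patterns = {
    "database":    {"transfer": "8 KB",               "read_pct": 67,  "random_pct": 100},
    "file server": {"transfer": "512 B - 64 KB mix",  "read_pct": 80,  "random_pct": 100},
    "web server":  {"transfer": "512 B - 512 KB mix", "read_pct": 100, "random_pct": 100},
}

for name, spec in workload_patterns.items():
    print(f"{name}: {spec['transfer']}, {spec['read_pct']}% read, {spec['random_pct']}% random")
```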
  • I have an ASUS P5K-E/WIFI-AP, which has 2 PCI-E x16 slots. The blue one runs at x16 and the black one can run at x4 or x1.
    Will this Adaptec card work on my board?
    Reply
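
On the slot question above: the Series 5 boards use a PCIe x8 connector, and PCIe link training generally lets a card negotiate down to a narrower electrical width, so the practical concern in the black x4/x1 slot is bandwidth rather than basic compatibility. A back-of-the-envelope estimate, using assumed figures rather than measurements:

```python
# Rough PCIe bandwidth check (assumed figures, not measurements):
# PCIe 1.x moves roughly 250 MB/s per lane per direction, and a ~1 TB
# mechanical drive of that era streams on the order of 100 MB/s.
PCIE1_LANE_MBPS = 250
DRIVE_SEQ_MBPS = 100

for lanes in (1, 4, 8):
    slot_mbps = lanes * PCIE1_LANE_MBPS
    drives = slot_mbps / DRIVE_SEQ_MBPS
    print(f"x{lanes} link: ~{slot_mbps} MB/s, enough for roughly {drives:.0f} streaming drives")
```

By that estimate an x4 link (~1 GB/s) is about on par with eight streaming drives, while an x1 link would clearly throttle sequential transfers; random I/O is far less sensitive to link width.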
  • kakashi
    I think that Tom's Hardware should run the Areca ARC-1680ML test again with firmware 1.45 and maybe with the latest IOmeter 2006.07.27. Areca claims that they have better results: http://www.areca.com.tw/indeximg/arc1680performanceqdepth_32%20_vs_%20tomshardwareqdepth_1_test.pdf
    Reply
  • MrMickelson
    Degraded RAID 5 write performance is going to be better than optimal RAID 5 write performance because only data stripes are being written, as opposed to writing data stripes and then using XOR to generate the parity stripe, so the write operations will be quicker. Degraded RAID 5 read performance will take a significant hit because, rather than reading only the data stripes as on an optimal RAID 5, the available data stripes and parity stripes have to be read and then XORed to regenerate the missing data (sketched below).
    Reply
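
A toy illustration of the XOR relationship behind that degraded-read penalty (generic single-parity math, not Adaptec-specific; RAID 6's second parity block uses different arithmetic and is omitted): the parity block is the XOR of the data blocks, so any one missing block equals the XOR of everything that survives, which is why a degraded read has to touch every remaining member of the stripe.

```python
# Single-parity reconstruction: P = D0 ^ D1 ^ ... ^ Dn, so one missing
# block is the XOR of all surviving blocks in the stripe.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks of one stripe
parity = xor_blocks(data)            # parity written while the array was optimal

# Lose the middle data disk and rebuild its block from the survivors:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]            # XOR of the survivors == the missing block
```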
  • Initializing the controller during POST takes a very long time with the Adaptec Series 3 RAID controllers, which is very frustrating when they are used in high-performance workstations.
    Has this been fixed with the new Series 5?
    Reply
  • makaira
    Turn up the heat all right. I installed a new 5805 in a Lian-Li 7010 case with 8 x 1 TB Seagate drives, a Core 2 Quad at 2.83 GHz, and an 800 W PSU - more fans than you could poke a stick at.

    The controller overheated - it reported 99 degrees in the messages and set off the alarm.
    That was during drive initialization. We had a range of errors reported from a number of different drives. The array (5.4 TB RAID 6) never completed building and verifying.

    CPU temp was 45, motherboard 32, and ambient room temp 22 degrees.

    I installed a 3ware instead and all worked fine. Was Tom's Hardware's comment "turns up the heat" written tongue in cheek? There seems to be a heat issue with this card.
    Reply
  • elektrip
    I'd love to see how this controller performs with some Intel X25-M/E or OCZ Vertex SATA SSDs connected. The tested drives here are probably a bottleneck, not the storage controller. More so in I/O than in sequential transfers, though.
    Reply