The storage controller market is constantly moving, and the new pecking order - with Adaptec on top - may well change again by the middle of the year, when AMCC is ready with its next product generation. Until then, Adaptec's Series 5 eight-port RAID 5805 controller clearly dominates all of our I/O performance benchmarks at almost all command queue depths. Only Areca's ARC-1680ML beats Adaptec when high I/O performance is required with no pending commands in the queue.
We used our new sequential-throughput IOMeter benchmark pattern for the first time to verify Adaptec's claims of much-improved transfer performance. Sequential throughput has been an Adaptec weakness for years, so it was interesting to see how the new Series 5 card performs. The results look good for the new 5805: it delivers 300-500 MB/s in RAID 10, an almost constant 550 MB/s in RAID 5 and 400-500 MB/s in RAID 6 using eight Seagate Savvio 10K.2 2.5" SAS hard drives. However, Areca still provides slightly better throughput in RAID 5 and RAID 6 setups. For RAID 10, Adaptec clearly uses caching to increase performance at deep command queues, as it beats everyone else in read performance.
From a features point of view, Adaptec still cannot beat Ciprico's Raidcore cards, which are based on the Fulcrum software layer. Since that product is host-based and taxes the system processor(s) for XOR calculations, it is only directly comparable in dedicated storage servers that serve no other purpose. Also, Raidcore still does not support RAID 6, which Adaptec and others do. However, Adaptec's feature set is comprehensive, and the product family is certainly capable of handling all sorts of storage applications, from the entry level to the high-end enterprise space.
The RAID 5805 starts at $650, well below the prices of main competitors AMCC, Areca, Atto and LSI. Units with 12 ports cost $915 and up, which is still quite acceptable. The overall package is very well designed and doesn't stop at excellent performance: we liked the well-known Adaptec Storage Manager software and found software support to be comprehensive; only Mac OS X drivers are missing. Adaptec's Series 5 cards are clearly headed for an impressive career, as they will find their way into SAN appliances, NAS servers and other storage servers.
The degraded figures for streaming writes don't look right. They are too close to (or above??) the normal/optimal state numbers. One idea that comes to mind is that if the writes were too small, they would all go into the cache regardless and render the results somewhat useless.
Fedor: "The degraded figures for streaming writes don't look right. They are too close (or above??)"

The figures look OK. Sequential writes to a degraded array are basically done the same way as writes to an optimal array. The only difference is that the write to the failed drive is skipped.
I am confused by your testing report, because our testing figures for the Areca ARC-1680 with firmware 1.45 are better than what you report.
Can someone tell me what the database server pattern, web server pattern and file server pattern mean? When I run IOMeter, those options are not present; I can select 4K-32K or create a custom script. Also, what stripe size are these tests being run at? I purchased this exact controller and have not duplicated TG's results. It would be helpful if you explained in detail how you configured the RAID setup: RAID 5, 6 or 10, with a 16K, 32K, 64K, 128K, 256K, 512K or 1MB stripe size.
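[For reference, these patterns are custom IOMeter access specifications rather than built-in menu options, which is likely why they don't appear in a stock install. The parameters below are the commonly cited Intel/StorageReview presets; whether Tom's Hardware used exactly these values is an assumption. A minimal Python sketch:]

```python
# Commonly cited IOMeter access-spec presets (Intel/StorageReview
# lineage). These are assumptions about the article's test config,
# not values confirmed by the review itself.
PATTERNS = {
    # name: (transfer size, % read, % random)
    "database":    ("8 KB", 67, 100),
    "file server": ("mixed 512 B - 64 KB, mostly 4 KB", 80, 100),
    "web server":  ("mixed 512 B - 512 KB", 100, 100),
}

for name, (size, read_pct, random_pct) in PATTERNS.items():
    print(f"{name}: {size}, {read_pct}% read, {random_pct}% random")
```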
I have an ASUS P5K-E/WIFI-AP, which has two PCI-E x16 slots. The blue one runs at x16 and the black one can run at x4 or x1.
Will this Adaptec card work on my board?
I think that Tom's Hardware should run the Areca ARC-1680ML test again with firmware 1.45, and maybe with the latest IOMeter 2006.07.27. Areca claims to have better results: http://www.areca.com.tw/indeximg/arc1680performanceqdepth_32%20_vs_%20tomshardwareqdepth_1_test.pdf
Degraded RAID 5 write performance is going to be better than optimal RAID 5 write performance, because only data stripes are written, as opposed to writing data stripes and then using XOR to generate the parity stripe; the write operations are therefore quicker. Degraded RAID 5 read performance takes a significant hit, because rather than reading only the data stripes, as on an optimal RAID 5, the controller must read the available data stripes and parity stripes and then use XOR to regenerate the missing data.
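[The XOR regeneration described above can be sketched in a few lines of Python; the drive count and stripe contents are illustrative, not taken from the review's test setup.]

```python
# Sketch of RAID 5 XOR parity: the parity stripe is the XOR of all
# data stripes, so any single missing stripe can be regenerated by
# XOR-ing the survivors with the parity.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three data stripes on three drives; parity goes to a fourth drive.
d0, d1, d2 = b"stripe-0", b"stripe-1", b"stripe-2"
parity = xor_blocks(d0, d1, d2)  # written alongside the data

# Drive holding d1 fails: XOR the survivors with the parity.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1  # missing stripe recovered
```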
Initializing the controller during POST takes a very long time with the Adaptec RAID 3 series, which is very frustrating in high-performance workstations.
Has this been fixed with the new RAID 5 series?
Turn up the heat, all right. I installed a new 5805 in a Lian Li 7010 case with eight 1 TB Seagate drives, a 2.83 GHz Core 2 Quad and an 800 W PSU - more fans than you could poke a stick at.
The controller overheated - it reported 99 degrees in its messages and set off the alarm.
That was during drive initialization. We had a range of errors reported from a number of different drives. The array (5.4 TB RAID 6) never completed building and verifying.
CPU temperature was 45 degrees, motherboard 32, and ambient room temperature 22.
I installed a 3ware card instead - and everything worked fine. Was Tom's Hardware's comment "turns up the heat" written tongue in cheek? There seems to be a heat issue with this card.
I'd love to see how this controller performs with some Intel X25-M/E or OCZ Vertex SATA SSDs connected. The drives tested here are probably the bottleneck, not the storage controller - more so in I/O than in sequential transfers, though.