The market offers multiple powerful solutions for Unified Serial storage requirements. The following table lists all available options, including the new Adaptec Series 5. Please also have a look at the following articles for more details:
| | | | | |
|---|---|---|---|---|
| Supported OS | Windows XP, Server 2003/2008, Vista; Red Hat Enterprise Linux (RHEL); SUSE Linux Enterprise Server (SLES); SCO OpenServer; UnixWare; Sun Solaris 10 x86; FreeBSD | Windows 2000, XP, Server 2003, Vista; Red Hat Enterprise Linux (RHEL); SUSE Linux Enterprise Server (SLES); Novell NetWare 6.5; SCO OpenServer; UnixWare; Sun Solaris 10 x86; FreeBSD | Windows 2000, XP, Server 2003, Vista; Red Hat EL 5; OpenSuSE 10.2; SuSE Enterprise (SLES) 10; Fedora Core 6 | Windows 2000, XP, Server 2003; Linux; FreeBSD; Novell NetWare 6.5; Solaris 10 x86/x86_64; SCO UnixWare 7.x.x; Mac OS X (not bootable) |
| Other Features | Copyback Hotspare | Copyback Hotspare | – | – |
| Warranty | 3 years | 3 years | 3 years | 3 years |
| Price | $650 | $575 | $700 | $1,000 |
| | Atto ExpressSAS R348 | ICP 5085BL | LSI MegaRAID SAS 8888ELP | Raidcore/Ciprico RC5252-8 |
|---|---|---|---|---|
| Internal connectors | 2x SFF-8087 | 2x SFF-8087 | 2x SFF-8087 | 2x SFF-8087 |
| External connectors | 1x SFF-8088 | n/a | 2x SFF-8088 | n/a |
| Cache | 256 MB DDR2 ECC | 256 MB DDR2 ECC | 256 MB DDR2-667 ECC | – |
| Profile | Low Profile | Low Profile | Low Profile | Low Profile |
| Interface | PCI Express x8 | PCI Express x4 | PCI Express x8 | PCI Express x4 |
| XOR Engine | IOP348, 800 MHz | Intel 80333, 800 MHz | PowerPC 500 MHz (LSISAS1078) | software |
| RAID Level Migration | yes | yes | yes | yes |
| Online Capacity Expansion | yes | yes | yes | yes |
| Multiple RAID Arrays | yes | yes | yes | yes |
| Hot Spare Support | yes | yes | yes | yes |
| Battery Backup Unit | optional | optional | optional | not required |
| RAID 5 init | 23 min | 57 min | 17 min | 2 h 42 min |
| RAID 6 init | n/a | 57 min | 17 min | n/a |
| Fan | no | no | no | no |
| Supported OS | Windows Vista, Server 2003, XP, 2000; Mac OS X (10.4.x); Linux (Fedora, Red Hat and SuSE) | Windows 2000, XP, Server 2003, Vista; Red Hat Enterprise Linux (RHEL); SUSE Linux Enterprise Server (SLES); Novell NetWare 6.5; SCO OpenServer; UnixWare; Sun Solaris 10 x86; FreeBSD | Windows 2000, XP, Server 2003, Vista; Red Hat Enterprise Linux (RHEL) 4, 5; SuSE 9.3, 10.1, 10.2; SUSE Linux Enterprise Server (SLES); Solaris 10; SCO Unix | Windows 2000, XP, Server 2003, Vista; Red Hat Enterprise Linux (RHEL) 4, 5; SuSE 9.3, 10.1, 10.2; SUSE Linux Enterprise Server (SLES); Fedora Core 5, 6 |
| Other Features | DVRAID | Copyback Hotspare | – | Controller Spanning |
| Warranty | 2 years | 3 years | 3 years | 3 years |
| Price | $1,095 | $650 | $850 | n/a |
The degraded figures for streaming writes don't look right. They are too close to (or even above?) the normal/optimal state numbers. One idea that comes to mind is that if the writes were too small, they would all go into the cache regardless and render the results somewhat useless.
Fedor:
> The degraded figures for streaming writes don't look right. They are too close (or above??)

The figures look OK. Sequential writes to a degraded array are basically done the same way as writes to an optimal array. The only difference is that the write to the failed drive is skipped.
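As a sanity check on the cache-absorption concern raised above, a quick back-of-the-envelope calculation (all figures are illustrative assumptions, not measured values from the review) shows how briefly a 256 MB write-back cache can absorb a sustained stream before writes must reach the disks:

```python
# Rough estimate: how long can a controller's write-back cache absorb a
# sustained sequential write stream before it fills up?
# All numbers below are assumptions for illustration only.

CACHE_MB = 256        # cache size typical of the cards in this roundup
STREAM_MB_S = 400.0   # assumed sustained sequential write rate
DRAIN_MB_S = 0.0      # worst case: ignore concurrent flushing to disk

seconds_to_fill = CACHE_MB / (STREAM_MB_S - DRAIN_MB_S)
print(f"Cache saturates after ~{seconds_to_fill:.2f} s")
```

Under these assumptions the cache fills in well under a second, so any streaming benchmark that runs for more than a few seconds measures the array, not the cache; only very small, short write bursts could hide entirely in cache.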
Can someone tell me what the database server pattern, web server pattern, and file server pattern mean? When I run IOMeter those options are not present; I can select 4k-32k or create a custom script. Also, at what stripe size are these tests being run? I purchased this exact controller and have not duplicated TG's results. It would be helpful if you explained in detail how you configured the RAID setup: RAID 5, 6, or 10, with a 16k, 32k, 64k, 128k, 256k, 512k, or 1MB stripe size.
I think that Tom's Hardware should run the Areca ARC-1680ML test again with firmware 1.45, and maybe with the latest IOMeter 2006.07.27. Areca claims to have better results: http://www.areca.com.tw/indeximg/arc1680performanceqdepth_32%20_vs_%20tomshardwareqdepth_1_test.pdf
Degraded RAID 5 write performance is going to be better than optimal RAID 5 write performance, because only data stripes are written; there is no need to write the data stripes and then use XOR to generate the parity stripe, so the write operations complete faster. Degraded RAID 5 read performance, on the other hand, takes a significant hit: instead of just reading the data stripes as on an optimal RAID 5, the controller must read the available data stripes and parity stripes and then use XOR to regenerate the missing data.
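The XOR mechanics this comment describes can be sketched in a few lines. This is a toy model of RAID 5 parity and reconstruction, not any controller's actual firmware:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe on a 4-drive RAID 5: three data blocks plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)  # optimal write: compute and store parity

# Degraded read: the drive holding data[1] has failed. Its contents are
# regenerated by XOR-ing the surviving data blocks with the parity block.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Because XOR is its own inverse, any single missing block can be recovered from the remaining blocks of the stripe; that extra read-and-XOR step is exactly the cost the comment attributes to degraded reads.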
Initializing the controller during POST takes a very long time with the Adaptec RAID 3 series, which is very frustrating when used in high-performance workstations.
Has this been fixed with the new RAID 5 series?
Turn up the heat all right. I installed a new 5805 in a Lian-Li 7010 case with 8 x 1 TB Seagate drives, a Core 2 Quad at 2.83 GHz, and an 800 W PSU - more fans than you could poke a stick at.
The controller overheated - it reported 99 °C in the messages and set off the alarm.
That was during drive initialization. We had a range of errors reported from a number of different drives. The array (5.4 TB RAID 6) never completed building and verifying.
CPU temp was 45 °C, motherboard 32 °C, and ambient room temp 22 °C.
I installed a 3ware card instead - and everything worked fine. Was Tom's Hardware's comment "turns up the heat" written tongue in cheek, as there seems to be a heat issue with this card?
I'd love to see how this controller performs with some Intel X25-M/E or OCZ Vertex SATA SSDs connected. The drives tested here are probably the bottleneck, not the storage controller; more so in I/O than in sequential workloads, though.