Storage Controller Fundamentals
Basic storage controllers can be found on every PC-type motherboard as part of the chipset core logic. Serial ATA (SATA) has been the dominant interface for hard drives for some time, and is increasingly also used for optical drives such as CD, DVD, HD-DVD and Blu-ray devices. Other drives, based on Flash memory or other technologies, are typically attached via SATA as well.
Professional hard drives utilize the SAS interface (Serial Attached SCSI), which is a serialized version of the parallel SCSI bus (Small Computer System Interface). SAS devices, however, are dual-ported, meaning that drives can be attached via redundant connections, or use both links to double the interface bandwidth. SAS also supports four 300 MB/s connections per SAS cable (SFF-8087 for internal use or SFF-8088 for external applications), which can be used for SAS or SATA hard drives. Since SAS offers considerably more flexibility, it is also more complex, and hence SAS controllers cannot yet be found integrated into motherboard core logic.
Many upper-class on-board controllers support RAID configurations, which means that they can utilize multiple hard drives configured as one array to provide redundancy or better performance. In the SMB and enterprise space, however, an on-board controller doesn't get you anywhere, as arrays consist of many hard drives and offer sophisticated redundancy options (RAID 51, RAID 6), which require a huge amount of processing power. Array configuration and management is also an issue that has to be handled efficiently in businesses, requiring proper, web-based solutions.
Some RAID controllers, such as the Raidcore RS5200 family by Ciprico, run host-based RAID, but the majority of Unified Serial controller hardware utilizes its own storage processor. Adaptec, Areca, Atto, ICP and LSI have been using Intel IOP 80333 or 348 engines; only AMCC uses its own PowerPC-based storage processor. The new Adaptec Series 5 uses a dual-core storage processor, which unfortunately isn't specified in any more detail.
All professional RAID controllers come with at least a minimum amount of cache memory, or with a memory slot so the customer can install the desired amount of DDR2 RAM. For the sake of data safety, ECC (error correcting code) memory has to be used. Watch out for controllers that do not offer an (optional) battery backup unit, often referred to as a BBU: one is necessary to maintain cached data in the case of a power outage. Although all mission-critical servers should be protected by a UPS (uninterruptible power supply), a BBU provides an additional layer of security.
Most RAID controllers have traditionally utilized the 64-bit PCI-X bus, which provides a maximum bandwidth of up to 1,066 MB/s (64 bit, 133 MHz). However, PCI-X requires dedicated controllers, and the bus is shared by all client devices, making it less attractive than the PCI Express interface, which offers point-to-point connections at 2 GB/s upstream and downstream when using eight PCIe lanes (PCI Express x8). Most mainstream servers offer at least one x8 PCI Express slot, and we recommend investing in PCIe today, as the interface will still be around in years to come.
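The bandwidth figures above follow from simple arithmetic. The following sketch (our own illustration, not from any spec document) derives the quoted peak numbers; the PCI-X figure assumes a flat 133 MHz clock, which is why it lands at 1,064 MB/s rather than the commonly quoted 1,066 MB/s based on 133.33 MHz:

```python
# Back-of-envelope peak-bandwidth arithmetic for PCI-X vs. PCI Express.

def pci_x_peak_mb_s(bus_width_bits=64, clock_mhz=133):
    """Shared parallel bus: bus width (in bytes) times clock rate."""
    return bus_width_bits / 8 * clock_mhz  # 64-bit @ 133 MHz -> 1064 MB/s

def pcie_gen1_peak_mb_s(lanes=8):
    """PCIe 1.x: 2.5 GT/s per lane with 8b/10b encoding, i.e. 250 MB/s
    per lane, per direction (upstream and downstream are independent)."""
    per_lane_mb_s = 2.5e9 * 8 / 10 / 8 / 1e6  # = 250 MB/s
    return lanes * per_lane_mb_s

print(pci_x_peak_mb_s())      # 1064.0 MB/s, shared by all devices on the bus
print(pcie_gen1_peak_mb_s())  # 2000.0 MB/s = 2 GB/s per direction for x8
```

Note that the PCIe total is per direction; since each x8 link is a dedicated point-to-point connection, a controller is not contending with other devices for that bandwidth, unlike on PCI-X.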
We’ve talked a lot about the various RAID options in the past. RAID 0 setups put all available hard drives into a stripe set, meaning that blocks are evenly distributed across all available hard drives. This, however, makes your storage vulnerable, as one defective drive will destroy the entire array. RAID 1 simply mirrors one drive’s data onto a second drive, which is easy to do and secure, but it doesn’t allow administrators to create high-capacity or high-performance arrays. The combination, RAID 0+1 or 1+0, represents a mirrored stripe set or a striped mirror; these combine performance with data safety, but still offer only 50% of the total hard drive capacity. RAID 5 or RAID 6 is typically the best solution for secure data arrays, as parity data is calculated using a simple XOR operation. The parity data is then distributed along with the data across all hard drives. RAID 6 works like RAID 5, but two independent sets of parity data are created, resulting in increased reliability in case of hardware failures: while a RAID 5 will survive a single failed drive, RAID 6 will withstand two simultaneous drive defects.
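The XOR parity mechanism behind RAID 5 can be shown in a few lines. This is an illustrative sketch of the principle, not any controller's actual implementation: the parity block is the byte-wise XOR of all data blocks in a stripe, and a lost block is recovered by XOR-ing everything that remains.

```python
# RAID 5 parity in miniature: parity = XOR of all data blocks in a stripe.

def xor_blocks(blocks):
    """Byte-wise XOR of a list of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# A stripe spread across three data drives, plus one parity block.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# Drive 1 fails: its block is rebuilt from the surviving data and parity,
# because XOR-ing a value in twice cancels it out.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```

This cancellation property is also why a degraded RAID 5 can keep serving reads: any one missing block is fully determined by the other blocks in its stripe.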
The degraded figures for streaming writes don't look right. They are too close to (or above??) the normal/optimal state numbers. One idea that comes to mind is that if the writes were too small, they would all go into the cache regardless and render the results somewhat useless.
Fedor: "The degraded figures for streaming writes don't look right. They are too close (or above??)"

The figures look OK. Sequential writes to a degraded array are basically done the same way as writes to an optimal array. The only difference is that the write to the failed drive is skipped.
I am confused by your testing report, because our testing figures for the Areca ARC-1680 with firmware 1.45 are better than your results.
Can someone tell me what the database server pattern, web server pattern, and file server pattern mean? When I run IOMeter those options are not present; I can select 4K-32K or create a custom script. Also, at what stripe size are these tests being run? I purchased this exact controller and have not duplicated TG's results. It would be helpful if you explained in detail how you configured the RAID setup: RAID 5, 6 or 10, with a 16K, 32K, 64K, 128K, 256K, 512K or 1MB stripe size.
I have an ASUS P5K-E/WIFI-AP which has 2 PCI-E x16 slots. The blue one runs at x16 and the black one can run at x4 or x1.
Will this Adaptec card work on my board?
I think that Tomshardware should run the Areca ARC-1680ML test again with firmware 1.45, and maybe with the latest IOMeter 2006.07.27. Areca claims it has better results: http://www.areca.com.tw/indeximg/arc1680performanceqdepth_32%20_vs_%20tomshardwareqdepth_1_test.pdf
Degraded RAID 5 write performance is going to be better than optimal RAID 5 write performance, because only data stripes are being written, as opposed to writing data stripes and then using XOR to generate the parity stripe; the write operations will therefore be quicker. Degraded RAID 5 read performance will take a significant hit, because rather than just reading the data stripes as in an optimal RAID 5, the available data stripes and available parity stripes must be read and XOR used to regenerate the missing data.
Initializing the controller during POST takes a very long time with the Adaptec RAID 3 series, which is very frustrating when it's used in high-performance workstations.
Has this been fixed with the new Raid 5 series ?
Turn up the heat all right. I installed a new 5805 in a Lian-Li 7010 case with 8 x 1 TB Seagate drives, a Core 2 Quad at 2.83 GHz and an 800 W PSU - more fans than you could poke a stick at.
The controller overheated - it reported 99 degrees in its messages and set off the alarm.
That was during drive initialization. We had a range of errors reported from a number of different drives. The array (5.4 TB RAID 6) never completed building and verifying.
CPU temp was 45 degrees, motherboard 32, and ambient room temp 22.
I installed a 3ware instead, and all worked fine. Was Tomshardware's comment "turns up the heat" written tongue in cheek? There seems to be a heat issue with this card.
I'd love to see how this controller performs with some Intel X25-M/E or OCZ Vertex SATA SSDs connected. The drives tested here are probably the bottleneck, not the storage controller - more in I/O than in sequential transfers, though.