PCI Express SATA RAID Controllers For SMB Servers

December 13, 2006 10:22:49 AM

Fileservers require redundant storage to safeguard vital data. Here are five of the latest RAID controllers for Serial ATA hard drives that supposedly do the job.
December 13, 2006 11:41:27 AM

Q1) How much cache does each controller have?

Q2) What I/O processor is each card using?
December 13, 2006 12:18:54 PM

I'm kinda disappointed with your selection of the Areca controller. The one you worked with is, I believe, one generation behind the latest. Also, you selected the lowest-end one, when the 1220 would have been just fine, or the entry-level next-gen 1231ML. Either should meet the price-point requirement.
December 13, 2006 1:16:34 PM

I agree with neuromandw.

The choice of Areca controller is poor. TH reviewed the ARC-1220 last year and it received the best rating. Areca probably didn't want to pay extra for a good review. :roll: TH used to be a decent "UNBIASED" review site, but that is no longer the case.
December 13, 2006 2:00:13 PM

Inquiring minds would like to know: do these cards work with entry-level motherboards?

What I would like to build is a RAID server using a motherboard with integrated graphics, with the one PCI-E x16 slot used for the RAID card. It would be good to review "entry-level" mobos for RAID card use. Or has anyone tried some and found them to work?
December 13, 2006 2:05:33 PM

Yes, they should. It would mostly depend on the chipset. Most server boards use onboard graphics, and the Areca ARC-1220 does work with Tyan boards (S2891).
December 13, 2006 2:14:17 PM

This article is incorrect. You state: "HighPoint offers the fastest RAID 0+1 solution, but requires the controller card RocketRAID 2322 plus an external multi-lane SATA enclosure, which together exceeds $600." But this is totally untrue. The HighPoint RocketRAID does NOT require the external enclosure. That is an add-on if you want to put your drives outside of your server case.

That is like saying a TV costs $500 but requires a PS3, which puts it over $1,000. Please correct the article. You are doing a disservice to your readers with this bad info.

I myself am a HighPoint RAID card user. It is one of the cheapest and best Serial ATA RAID cards on the market. The only negative points are that it has no cache, and once you build an array, you can't add drives to the array. On my system, though, each of my eight 500 GB drives has 16 MB of cache on the drive itself, and I have used all 8 ports, so both of these negatives are nullified in my usage.
December 13, 2006 2:25:05 PM

I'm not running interference for TH, but the RocketRAID 2322 reviewed is an external solution. The only way to make it internal is to run the cables back inside your case. There are no internal connectors on the card reviewed.
December 13, 2006 3:06:47 PM

The RAID 5 performance is abysmally low. How many hard drives were used in the RAID 5 tests?

3 is the minimum for running RAID 5 without a hot spare.
4 is the minimum for running RAID 5 with a hot spare.
6 or more before you start seeing benefits over 0, 1, or 0+1.

It would have been nice to see a RAID 5 setup on one of the 8-disk controllers with all 8 disks. Talk about a spanking :lol:
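
For anyone wanting to see why spindle count matters, here's a quick back-of-the-envelope capacity sketch in Python; the drive size and counts are just hypothetical examples:

```python
# Back-of-the-envelope usable capacity for common RAID levels.
# Assumes identical drives; real arrays lose a little to metadata.

def usable_capacity(level, drives, size_gb=500):
    """Usable capacity in GB for `drives` disks of `size_gb` each."""
    if level == "0":           # striping only, no redundancy
        return drives * size_gb
    if level in ("1", "0+1"):  # mirroring: half the raw space
        return (drives // 2) * size_gb
    if level == "5":           # one drive's worth lost to parity
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * size_gb
    raise ValueError("unknown RAID level: " + level)

for n in (3, 4, 6, 8):
    print(n, "drives: RAID 5 =", usable_capacity("5", n), "GB,",
          "RAID 0+1 =", usable_capacity("0+1", n), "GB")
```

With eight 500 GB drives that's 3500 GB usable under RAID 5 versus 2000 GB under 0+1, which is roughly where the parity levels start to pay for themselves.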
December 13, 2006 3:08:40 PM

Absolutely! I'd also like to see how each of these scales as drives are added; that would be very interesting. Good point!
December 13, 2006 3:16:29 PM

Quote:

I myself am a HighPoint RAID card user. It is one of the cheapest and best Serial ATA RAID cards on the market. The only negative points are that it has no cache, and once you build an array, you can't add drives to the array.


The RocketRAID 22xx and 23xx support OCE/ORLM, so you can add more drives to an existing array. This was noted correctly in the article as well.
December 13, 2006 3:25:07 PM

Quote:
Fileservers require redundant storage to safeguard vital data. Here are five of the latest RAID controllers for Serial ATA hard drives that supposedly do the job.


I'm pretty ignorant about these devices. Can anyone explain command queue depth and what it means for real-world performance? For example, the LSI card performed worse as queue depth increased. Does this matter, and if so, under what circumstances?
December 13, 2006 3:27:25 PM

Were any of the controllers tested configured with a battery backup unit and write caching enabled?
December 13, 2006 5:16:54 PM

I believe most server boards are compatible with RAID.

What I meant by "entry-level" is consumer desktop mobos with an IGP. I heard that on some of them the single x16 PCI-E slot is hardwired for video use only, i.e. it can't be used for RAID cards; e.g. you can't use the IGP and the PCI-E slot at the same time. Another problem is that these consumer mobos with onboard RAID are not very expandable (4 ports).
December 13, 2006 7:27:41 PM

HighPoint RocketRAID 2320: 8 internal connectors and the same performance as the 2322. I have one on an ASUS Socket 939 board and it's fast. It does allow adding HDDs; I had three to begin with, added a fourth, and it expanded. Slowly, but it worked great.

I use this for my media server, and it's fast enough over my gigabit network connection to serve video to all four of my TVs' media boxes without problems. RAID 5, btw.
December 14, 2006 1:33:34 AM

Quote:

I myself am a HighPoint RAID card user. It is one of the cheapest and best Serial ATA RAID cards on the market. The only negative points are that it has no cache, and once you build an array, you can't add drives to the array.


The RocketRAID 22xx and 23xx support OCE/ORLM, so you can add more drives to an existing array. This was noted correctly in the article as well.

Sorry, I misspoke. I was talking about MY negative points on the HighPoint card I have, the 1820A. On that version, you can't add drives to an existing array.

As for the article, why would you compare an external array card to internal cards? At least include a comparison of HighPoint's internal array cards for an accurate piece.

Last, on the speed differences from using more drives in a RAID 5 array, I can tell you from experience: when I decompress a RAR'd movie on a standalone drive, it can take up to 30 seconds to finish. On a RAID 5 array with 8 drives, it can take 5-8 seconds. I now decompress all my downloads on my self-built server.

http://mtvhouse.evanwashere.com/ Scroll down to the first 3 pics and you will see the HighPoint card I used, as well as the other parts. Note: I replaced the 300 GB drives with 500 GB ones, and also added a bunch of externals, 2 TB's. Here is another pic before the RAID upgrade, but still with 9 TB:
December 14, 2006 1:39:50 AM

I also have an 1820A and a 1640; both are VERY slow in comparison to the 232x series of cards. Both use host XOR, offloading the work to the computer instead of doing it themselves. Even though the 232x doesn't have XOR RAM, it still does the calculations on the card.
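
For a rough feel of the work a host-XOR card pushes onto the CPU, here's a small Python timing sketch. The stripe size and drive count are made-up examples, and the naive big-integer XOR is nothing like the SIMD code real drivers use; it just makes the parity work visible:

```python
# RAID 5 parity = XOR of the data blocks in a stripe. Host-XOR cards
# (like the 1640/1820A) make the CPU do this for every write.
import os
import time

STRIPE_KB = 64      # hypothetical stripe unit
DATA_DRIVES = 7     # e.g. 8-drive RAID 5: 7 data blocks + 1 parity

def parity(blocks):
    """XOR all data blocks together to produce the parity block."""
    acc = int.from_bytes(blocks[0], "little")
    for block in blocks[1:]:
        acc ^= int.from_bytes(block, "little")
    return acc.to_bytes(len(blocks[0]), "little")

blocks = [os.urandom(STRIPE_KB * 1024) for _ in range(DATA_DRIVES)]
start = time.perf_counter()
for _ in range(100):      # 100 stripes, i.e. ~45 MB of data XORed
    parity(blocks)
print(f"{time.perf_counter() - start:.3f}s of host CPU for 100 parity blocks")
```

A card with its own XOR engine does that same arithmetic on the controller, which is why the 232x series pulls ahead even without dedicated XOR RAM.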
December 14, 2006 2:09:08 AM

I'm liking Areca's card... but I already have an Adaptec SAS controller... PCI-X... it's nice... and it cost me an arm/leg/other body parts... lol. But for the lower end I'd say Areca is a good bet; LSI Logic is really only good for SCSI or SAS, not for everyday SATA.

Edit: Make that AMCC's card... don't know how I got those confused.
December 14, 2006 4:44:01 PM

How about a comparison between these hardware RAID controllers and software RAID on various systems? I'd be interested to see how much of a load software RAID puts on these new CPUs, to judge whether or not hardware RAID is needed.
December 14, 2006 6:30:20 PM

Quote:
How about a comparison between these hardware RAID controllers and software RAID on various systems? I'd be interested to see how much of a load software RAID puts on these new CPUs, to judge whether or not hardware RAID is needed.

If you have a few minutes to dig around, you'll find a post by me that summarizes some info I found in one of the 7200.10-related threads, which shows that some of the motherboard-based RAID solutions can take as much as 14% CPU. The low end is around 2-4%.

If you're CPU-starved, then a dedicated hardware RAID controller, be it on a server-variety motherboard or an expansion card, is probably a good idea if you need RAID.

-Brad
December 15, 2006 7:00:19 AM

I'm a little confused about a few things regarding the HighPoint card. The comparison table in the original article doesn't list any I/O processor for the card, but it still mentions that the card offers "top-notch" performance. Does the "x" in the comparison table mean that it's host-based, or that it's unknown what kind of processor it uses?

Also, for the last few weeks I've been trying to find detailed information on the different HighPoint cards, but it's pretty much impossible. I'm very interested in the RocketRAID 2310 card and e-mailed HighPoint about it; they replied that they use "hardware of their own design". You'd think they'd use the same hardware as in the 2322 model? Apparently HighPoint uses its own XOR engine, a little like 3ware has its own too.

Can someone confirm this?
December 15, 2006 10:16:07 AM

Any idea how the Areca ARC-1230 performs? I'm thinking of getting it, and I've read some reviews and benchmarks saying it can do 65 MB/s writes on a 3-4 disk array.
December 15, 2006 9:56:57 PM

Quote:
Mommy is making a fruit salad. Tommy has the apples and pears in his room; Billy has the oranges in his room. If Mommy fetches the fruit in alphabetical order, she goes to Tommy's room for apples, over to Billy's room for oranges, then back to Tommy's room for pears. If she picks up both apples and pears while in Tommy's room, however, then she can pick up the oranges from Billy on her way to the kitchen, eliminating the extra trip back to Tommy's room.

Native Command Queuing works like that; however, there needs to be a backlog of I/O requests that allows the NCQ logic to re-order those requests so as to minimize movement of the read/write heads.

"Queue depth" is the measure of the size of that backlog.

Using the Promise "WebPAM" software on a new TX4310 and 2 x WD2500YS, we switched NCQ ON and OFF and noticed a significant performance penalty doing RAW READS with the PERFORMANCETEST software, Version 4.0 (testing the speed of sequential READs).


Sincerely yours,
/s/ Paul Andrew Mitchell
Webmaster, Supreme Law Library
http://www.supremelaw.org/


Thanks for that analogy, Paul. I had a basic understanding of NCQ but had searched and not found a good description of queue depth. It's weird: I recently bought a card for my home server and got an LSI based on a couple of reviews I read on the web. The card I got (MegaRAID SATA150-6) did very well in the tests I read, one of which was in an IT forum. So I was surprised to see the LSI in this review suck hind tit. Oh well, I like the card and the array seems super fast to my uneducated eye.
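
To put rough numbers on that analogy, here's a toy simulation in Python. The track positions and queue contents are made up, and real drives also weigh rotational latency, but it shows how a deeper backlog lets NCQ-style reordering cut head travel:

```python
# Toy model of NCQ: reorder a backlog of requests to cut head travel.

def head_travel(requests, start=0):
    """Total seek distance servicing requests in the given order."""
    pos, total = start, 0
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def ncq_order(requests, start=0):
    """Greedy nearest-first reordering over the queued backlog."""
    pending, pos, order = list(requests), start, []
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

backlog = [500, 20, 480, 40, 510]                       # queue depth = 5
print("FIFO travel:", head_travel(backlog))             # 2350
print("NCQ  travel:", head_travel(ncq_order(backlog)))  # 510
```

With only one or two requests outstanding there is nothing to reorder, which is why a controller's behavior can change so much as queue depth grows.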
December 20, 2006 9:16:27 AM

Hmmm... is this the case for all PCI-E controllers, or is it motherboard-specific?

I have two Areca 1120 PCI-X controllers, one of them in the PCI slot of an Athlon X2 motherboard. Unsurprisingly it maxes out the PCI bandwidth, so it is partly hamstrung. Alas, I cannot find a PCI-X motherboard for Athlon X2s.

So I was thinking of getting an Areca 1220 PCI-E card, which *should* be x16. Or would this bring no improvement?
December 20, 2006 9:28:16 AM

I was puzzled by the Areca's showing in this review. In two previous reviews of SATA RAID cards the Areca blew away the competition in performance terms, yet in this review it trailed.

Looking at the Areca specs, it seems the 1210 uses an IOP332 processor and all the other models use the IOP333, but it still seems odd that the performance gap should be so large.

My other puzzle is why they bother with RAID 0 test benchmarks. Who on earth runs the disk array for an SMB server on RAID 0?? I know you could mirror two RAID 0 arrays, but in that case get a card with more ports, e.g. 8, and run 0+1.

TH looks like it is becoming "unreliable".
December 20, 2006 9:33:31 AM

The article was focusing on 4-port cards for SMB servers, so they stuck to this specification, as it targets cost/performance at that level.

I would say this was partly a mistake: almost every server you see gets upgraded at some point, and all these cards can only run RAID 5 with four disks and no room for expansion. It would be FAR better for an SMB to buy an eight-port card and use four ports.

What was useful about this review is that it shows the performance variation at the cheaper end, where Areca suffers. Move up to the 8+ port range and Areca trashes the other cards (see the Jan 2006 review et al.).
December 20, 2006 12:40:51 PM

Bottom line: the review was hampered by a lack of focus on the actual usage patterns of the SMB market, or even the enthusiast crowd. I won't opine on the trends here at TH, but this review could have been done better. Still, I like the site; it could just benefit from some more insightful direction. That is usually the job of the editors...
December 20, 2006 12:58:27 PM

Quote:
Bottom line: the review was hampered by a lack of focus on the actual usage patterns of the SMB market, or even the enthusiast crowd. I won't opine on the trends here at TH, but this review could have been done better.

Maybe, maybe not. Are enthusiasts buying 8-port SATA controllers instead of 4-port controllers? I have no clue. Very few would bother investing in an outboard RAID controller. Certainly almost none are buying into SAS.

I've personally seen plenty of SMBs running RAID 5 on three drives. Worse yet, without a BBU! That still leaves a slot for a hot spare on a 4-port controller. Eh, not that most folks bother... But the question left is whether current SMB purchases are utilizing SATA or SAS.

Frankly, most SMB purchases that I'm familiar with that would include RAID are servers and will likely not be "home brew" systems where the RAID controller could be a pick-one-from-column-B deal.

I would have liked to see the review include performance stats with a BBU installed on the controllers that support it.
December 21, 2006 12:21:40 PM

How do these compare to the ICH8R controller found on the Intel 965 chipset? I'm having some write-performance issues with my RAID 5 setup and might buy a dedicated controller.
December 22, 2006 8:04:30 AM

Has there ever been a benchmark comparing real RAID cards to built-in motherboard RAID controllers?

If so, could somebody provide a link?

Ta
December 22, 2006 1:23:40 PM

Cheers, Dante. I think that makes the built-in look very good, considering it's "free".
December 22, 2006 1:50:11 PM

Yeah, I thought that was a bit fast on the reads. Made me think mine was slow again.
December 22, 2006 5:50:15 PM

Quote:
Yes, and here's a comparison. Now do take it with a grain of salt, since it's tested in a workstation environment, but it should give you a good comparison.
http://www.extremetech.com/article2/0,1697,1977005,00.a...

Nice find, thanks. While it's sort of heartwarming that the ICH7 did so well (since I'm planning on buying a D975XBX2), the low IOMeter CPU utilization numbers do seem a bit inconsistent with the results other people have had over on StorageReview. And as one or two others have already mentioned, parts of that article are a little suspect.

-Brad
December 22, 2006 6:53:10 PM

Well, like I said... take it with a grain of salt. I myself am using an Adaptec 8-channel SAS controller, simply because I need fast RAID 5, and if my mobo goes down my data won't go with it.
December 26, 2006 12:33:56 PM

Quote:
We built a cheap external enclosure to host 8 HDDs cabled to the HighPoint 2322 by recycling an older AT-style case: the Antec 300W PSU has its own ON/OFF switch, eliminating the need for an ATX-style motherboard to power it up and down.

In addition to a 2322 controller, you'll need 2 mini-SAS cables plus a custom connector in the storage case to mate with the other ends of these mini-SAS cables. Addonics sells those connectors:

www.addonics.com

We customized a PCI slot cover with a COM1 aperture by drilling a pair of holes for a firmer cable connection with 2 machine screws.

Everything is working quite well, except that our ASUS P5WD2 Premium motherboard tops out at x4 (4 lanes) in the second "universal" PCI-E slot (x16 mechanical). This is a BIOS limitation in that motherboard.

We think this x4 ceiling explains why we are only getting around 220 MB/second instead of the 400+ MB/second reported with this same controller on a recent Mac G5.

The next machine we are contemplating will have the ASUS P5W64 WS Professional motherboard, with four x16 slots configured x8-x8-x4-x8, in a Thermaltake Armor case, with a quad-core Intel CPU and 64-bit Windows XP.

We are already using 2 x Thermaltake "iCage" internal HDD enclosures, and we really like the way they are engineered and built, with an integrated 120mm fan.

In addition to the 11 x 5.25" drive bays up front, this Armor case has another 3-drive cage with a fan right next to the PSU, which is mounted vertically, allowing its intake fan to ventilate this drive cage too (a very ingenious solution for adding HDD bays).

One of those x8 PCI-E slots will host an Areca hardware RAID controller with 1 GB+ of on-board cache, driving 8+ SATA2 HDDs, probably ones with perpendicular ("vertical") recording technology; Seagate, for example, has the largest market share presently.

The only remaining decision will then be the size of each individual HDD: since we already have plenty of slower drives for back-ups and archiving, we'll probably opt for smaller HDDs in order to save money on the total cost of 8+ units.

The objectives will be speed and a 5-year warranty; plus, prices will no doubt change a lot between now and when we're ready to start buying parts for this quad-core machine.


Good luck, everyone! I hope this helps some of you.


Sincerely yours,
/s/ Paul Andrew Mitchell
Webmaster, Supreme Law Library
http://www.supremelaw.org/


You might consider a NORCO 12-bay external enclosure for business use:
http://www.norcotek.com/DS-1240.php
It works with the HighPoint RocketRAID 2322, according to the specs on their website.
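
On the x4 ceiling in the quote above: quick lane arithmetic suggests x4 alone shouldn't cap things at ~220 MB/s, so a chipset or BIOS bottleneck may also be in play. A minimal sketch, assuming the PCIe 1.x figure of roughly 250 MB/s raw per lane per direction (before protocol overhead):

```python
# PCIe 1.x: 2.5 GT/s with 8b/10b encoding -> ~250 MB/s raw per lane
# per direction; sustained throughput lands somewhat below this.
LANE_MB_S = 250

for lanes in (1, 4, 8, 16):
    print(f"x{lanes}: ~{lanes * LANE_MB_S} MB/s per direction")
```

Even at x4 that is ~1000 MB/s raw, well above the 220 MB/s observed.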
December 29, 2006 9:27:10 PM

Has anyone encountered a logical drive size limit on these controllers?

I work a lot with the HP array controllers such as the P600 and 6400 series; these all have a 2 TB logical drive limit.

I was wondering if these have the same limitation.
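
Not sure about these particular cards, but the 2 TB ceiling you see on a lot of controllers usually traces back to 32-bit LBA addressing with 512-byte sectors (whether that's what limits the HP controllers specifically, I can't say). The arithmetic:

```python
# Classic 2 TB ceiling: a 32-bit LBA addresses 2**32 sectors, and at
# the traditional 512 bytes per sector that caps the logical volume.
SECTOR_BYTES = 512
MAX_SECTORS = 2 ** 32

limit = MAX_SECTORS * SECTOR_BYTES
print(limit, "bytes =", limit / 1024**4, "TiB")   # 2.0 TiB (~2.2 TB decimal)
```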
December 29, 2006 10:29:18 PM

I find it disappointing that you did not include Adaptec in your review. (But I notice that their advertising dollars are welcome at Tom's...)
December 31, 2006 6:58:38 PM

The Socket AM2 ASUS WS-series motherboard has two PCI-X slots in it. This motherboard is very similar to the QuadFX L1N64-WS board; however, the AM2 version uses the NVIDIA 680a chipset, with the 3rd and 4th PCIe slots run through a PCIe-to-PCI-X bridge, which is how you get the two PCI-X slots. The board is not cheap, though: about $230 IIRC.
December 31, 2006 7:11:54 PM

Since modern CPUs are quite powerful and the motherboard SATA ports generally don't hang off a shared bus, software RAID can be quite fast. A PCIe-based hardware RAID card can be quick if it has enough lanes feeding it (1 lane for 2-3 HDDs, 4 lanes for 12-16 HDDs), but at best the dedicated I/O processors are equivalent to a Pentium II in power. A lot of RAID cards run off a 33 MHz, 32-bit PCI bus that can easily be saturated by just 2 HDDs. 64-bit PCI buses are RARE, and at best (66 MHz) can support 533 MB/sec of I/O, which is about what 7-8 HDDs can muster. The 33 MHz 64-bit PCI cards can handle about 4 HDDs. ET ran four HDDs off a 32-bit, 33 MHz PCI bus, and it's no shock that they were bus-limited while the integrated ports were not.
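
To put numbers behind that, a minimal sketch of the peak PCI figures (the 70 MB/s per-drive sustained rate is my assumption for 2006-era disks):

```python
# Peak theoretical PCI bandwidth = clock rate * bus width in bytes.
# Real throughput is lower once arbitration and overhead kick in.
DRIVE_MB_S = 70   # assumed sustained rate of a 2006-era 7200rpm HDD

for mhz, bits in [(33, 32), (33, 64), (66, 64)]:
    bw = mhz * (bits // 8)    # MHz * bytes/cycle = MB/s
    print(f"PCI {bits}-bit @ {mhz} MHz: ~{bw} MB/s, "
          f"saturated by ~{round(bw / DRIVE_MB_S)} drives")
```

That works out to ~132, ~264, and ~528 MB/s, i.e. roughly 2, 4, and 8 drives before the bus itself becomes the bottleneck.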