New Server Config... RAID 5 on MOBO or Controller

August 13, 2007 12:23:21 AM

I'm in charge of building a server for our office. It will be running SBS 2003 for a database and Exchange. I'll be going with a dual-CPU setup, either dual or quad core. I've never set up a RAID array before and I'm wondering if I should bother using an add-on controller or not. Will I take a performance hit using MOBO RAID if I'm using 2 CPUs? What are the best cards out there for RAID 5?
August 13, 2007 12:39:27 AM

Using software RAID5 on Windows is a no-go due to its crappy implementation. Onboard (fake) RAID is little better, though the Intel ICH9R controller offers fair performance. It depends on your chipset in this case.

If you really want to use RAID5 on Windows, you could look at a real controller like the Areca ARC-1210. They are not cheap, but they are decent RAID cards. Be aware that on Windows you typically end up with a stripe/filesystem misalignment (the default partition offset doesn't line up with the stripe size). Somehow Windows and RAID don't seem to blend well. :(
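
To make the misalignment point concrete, here is a small sketch (illustrative numbers only; a 64 KiB stripe unit and the legacy sector-63 partition offset are assumed) showing why an unaligned partition makes writes straddle stripe boundaries:

# Hypothetical numbers, just to illustrate stripe/partition misalignment.
def stripes_touched(partition_start, offset, length, stripe=64 * 1024):
    """Count how many stripe units a single write touches on the array."""
    start = partition_start + offset
    end = start + length - 1
    return end // stripe - start // stripe + 1

SECTOR = 512
# Legacy Windows default: the partition begins at sector 63 (a 31.5 KiB offset).
print(stripes_touched(63 * SECTOR, 0, 64 * 1024))   # -> 2 stripe units touched
# The same 64 KiB write with the partition start aligned to the stripe size:
print(stripes_touched(128 * SECTOR, 0, 64 * 1024))  # -> 1 stripe unit touched

Touching two stripe units instead of one means extra parity updates on almost every write, which is why people align the partition start to the stripe size.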
August 13, 2007 12:44:45 AM

daveohr said:
I'm in charge of building a server for our office. It will be running SBS 2003 for a database and Exchange. I'll be going with a dual-CPU setup, either dual or quad core. I've never set up a RAID array before and I'm wondering if I should bother using an add-on controller or not. Will I take a performance hit using MOBO RAID if I'm using 2 CPUs? What are the best cards out there for RAID 5?

Some dual-CPU boards come with onboard hardware SAS RAID these days.
August 13, 2007 12:48:48 AM

Joe_The_Dragon said:
Some dual-CPU boards come with onboard hardware SAS RAID these days.


That's why I'm wondering whether it would be better to use MOBO RAID or a separate controller.
August 13, 2007 12:55:59 AM

A separate controller may be better than the onboard hardware RAID, but some separate controllers are partly software-based.
August 13, 2007 1:16:20 AM

OK, a couple of questions... Would I experience any performance loss using onboard RAID when I'm using 2 CPUs? Also, if my MOBO dies, will I lose all data? Not really sure how it works when a controller dies.
August 13, 2007 1:27:25 AM

OK, with 2 CPUs on a server board I don't think you would notice any performance loss, as there's enough processing power to handle it.

If the MOBO dies you will more than likely lose the data on a RAID5 setup. Most RAID controllers set up arrays differently, and if you transfer from one RAID controller to another you will more than likely destroy the array. If you happen to get the same mainboard or even the same controller chip you should be OK though (most server boards have an awesome warranty, so you shouldn't need to worry about that for a few years).

I would go with a separate controller. I recently got an Adaptec 3805, which supports SAS/SATA drives, 8 of them. Before you buy, make sure it's hardware RAID though. Adaptec, Areca and Promise are some good ones.
August 13, 2007 1:34:56 AM

chookman said:
OK, with 2 CPUs on a server board I don't think you would notice any performance loss, as there's enough processing power to handle it.

If the MOBO dies you will more than likely lose the data on a RAID5 setup. Most RAID controllers set up arrays differently, and if you transfer from one RAID controller to another you will more than likely destroy the array. If you happen to get the same mainboard or even the same controller chip you should be OK though (most server boards have an awesome warranty, so you shouldn't need to worry about that for a few years).

I would go with a separate controller. I recently got an Adaptec 3805, which supports SAS/SATA drives, 8 of them. Before you buy, make sure it's hardware RAID though. Adaptec, Areca and Promise are some good ones.


Thanks for the info. No matter which method I go with, though, I'll be able to recover the array by replacing the failed part with the same MOBO or controller, right? Of course I'll be keeping a separate backup of everything. So even if the controller or MOBO dies, I'll still have a backup.
August 13, 2007 1:50:29 AM

To my knowledge there is no issue with a straight swap of the hard drives onto the same model of MOBO or controller.

The main reason I went with a separate controller was that I don't have a server board, only a desktop board that doesn't have the options or ports a server board has.
August 13, 2007 2:17:21 AM

If you're only using a couple of drives, up to 4, then I think a separate controller is a waste. There are a few articles about it now that RAID is more common on home computers.
August 13, 2007 2:27:15 AM

royalcrown said:
If you're only using a couple of drives, up to 4, then I think a separate controller is a waste. There are a few articles about it now that RAID is more common on home computers.


I was thinking of using 3 drives (250 or 320 GB apiece).
August 13, 2007 3:10:12 AM

I use 5 x 500 GB Western Digital drives in RAID 5 on my Adaptec 3805.

I would think that if you have a good server board with a good onboard RAID chip that's a real hardware controller, and you're only using 3 drives, then a separate controller may not be worth it.
August 13, 2007 3:33:58 AM

chookman said:
OK, with 2 CPUs on a server board I don't think you would notice any performance loss, as there's enough processing power to handle it.

When implementing RAID5, processing power is not the only variable that determines throughput performance. In order to write sequentially at high speed, you need to combine I/O requests to form full-stripe writes (so-called 1-phase writes). Most (cheap) implementations do not do this, and do virtually all writes in 2 phases, which involves reading data in order to calculate the parity. This means at least four physical I/O operations (read old data, read old parity, write new data, write new parity) for EACH logical write request, and a lot of head seeking by the physical drives. The result is that the disks themselves become the bottleneck due to higher-than-needed I/O activity, non-contiguous writing, and reads mixed in between.

Real controllers with an I/O processor do not suffer from this, and some intelligent software implementations, like geom_raid5 for FreeBSD/FreeNAS, are also capable of request combining. Without it, do not expect much throughput performance. Also, all the extra seeking causes wear on the disks, potentially reducing their lifespan.
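
As a rough back-of-envelope illustration of that point (my own numbers, assuming a 3-drive RAID5 and writes counted in stripe-unit blocks), compare the physical I/O for combined full-stripe writes versus per-block 2-phase (read-modify-write) writes:

def full_stripe_write_ios(data_disks, blocks):
    """One data write per data disk plus one parity write per full stripe."""
    stripes = blocks // data_disks
    return stripes * (data_disks + 1)

def read_modify_write_ios(blocks):
    """Per block: read old data, read old parity, write new data, write new parity."""
    return blocks * 4

data_disks = 2   # assumed 3-drive RAID5: 2 data blocks + 1 parity block per stripe
blocks = 1000    # assumed amount of sequential data, in stripe-unit blocks

print(full_stripe_write_ios(data_disks, blocks))  # -> 1500 physical I/Os
print(read_modify_write_ios(blocks))              # -> 4000 physical I/Os

Same logical workload, well over twice the physical I/O without request combining, and the extra operations are reads scattered between the writes, which is where all the seeking comes from.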
August 14, 2007 12:21:02 AM

Quote:
chookman wrote:

OK, with 2 CPUs on a server board I don't think you would notice any performance loss, as there's enough processing power to handle it.

I thought daveohr was asking about other processes slowing down due to the overhead of the RAID on the mainboard, which they would not.

Quote:
Most (cheap) implementations do not do this, and do virtually all writes in 2 phases, which involves reading data in order to calculate the parity.

I would think that a server board with the capability of 2 CPUs and a RAID 5 SAS/SATA controller wouldn't be a cheap implementation. "I would THINK"

But I do agree with what you have said, enlightenment.
August 14, 2007 4:05:45 PM

chookman said:

I thought daveohr was asking about other processes slowing down due to the overhead of the RAID on the mainboard, which they would not.

Ah you're entirely right, my bad.

Quote:
I would think that a server board with the capability of 2 CPUs and a RAID 5 SAS/SATA controller wouldn't be a cheap implementation. "I would THINK"

I have never seen 'real' hardware controllers integrated on a motherboard. It's always fake RAID. The drivers for the nVidia Pro chipset might be better tested, but I doubt it makes much difference to the actual implementation. Maybe things are different for expensive quad-socket boards (the "real" stuff), which I do not know much about. Still, I doubt they would integrate a great RAID controller, for two reasons:

- Integrating a non-software RAID controller on the motherboard seems unwise, since you cannot transfer the array, and thus the data, to another system; you've become motherboard-dependent. If the motherboard fails you need the exact same model. Replacing an add-on hardware controller is much easier, with less downtime.

- RAID controllers come in many flavours, and PCI Express allows you to choose one yourself. Integrating an expensive controller on a motherboard makes that motherboard very pricey without providing any benefit at all compared with an add-on card. Plus, integrated solutions are sceptically received in the server market. Among the big boys you will be laughed at for using onboard RAID, just by the sound of it.

Aside from that, there is no 'perfect' implementation of RAID yet. People have to understand that RAID is both theory (the opportunities RAID gives you) and reality (the actual implementations). The latter is always less beautiful than the former.
August 14, 2007 4:37:02 PM

enlightenment said:
Ah you're entirely right, my bad.

Quote:
I would think that a server board with the capability of 2 CPUs and a RAID 5 SAS/SATA controller wouldn't be a cheap implementation. "I would THINK"

I have never seen 'real' hardware controllers integrated on a motherboard. It's always fake RAID. The drivers for the nVidia Pro chipset might be better tested, but I doubt it makes much difference to the actual implementation. Maybe things are different for expensive quad-socket boards (the "real" stuff), which I do not know much about. Still, I doubt they would integrate a great RAID controller, for two reasons:

Some nVidia Pro chipset boards have a RAID chip hooked up to the PCI-e bus at x4 speed, in addition to the one built into the chipset.
August 14, 2007 4:57:11 PM

DEFINITELY get a separate controller.
August 14, 2007 5:01:03 PM

I designed & manage a Windows SBS 2003 network. Definitely go with a controller. Check out the wiki on RAID implementations, or even read some of the articles on tomshardware... the performance takes a hit with onboard RAID, and moving arrays for data recovery can be very difficult or impossible.

Further, I would suggest using a controller that can support multiple arrays and going with a 5-drive setup. Set up 2 drives in RAID 1 for your system & Exchange. Set up the other 3 in a RAID 5 array for data/apps.

Controllers are better because:
1. Higher I/O rates
2. Large onboard write/read cache (some even have their own battery backup built in, so cached writes aren't lost if the server loses power).
3. Easier to rebuild or change arrays and to add/remove hot-swap drives.

I'd take a RAID controller & RAID 1/RAID 5 configuration over 2 CPUs any day of the week...

Current Config:
Dell PowerEdge 2900
Xeon 5130 2.0GHz
2GB ECC DDR2
PERC 5/i
(2) 160GB RAID 1, (3) 500GB RAID 5
Windows SBS 2003, 30 CALs
SQL Server, Exchange, Access
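
For a rough idea of what that kind of layout gives you in usable space, here's a quick back-of-envelope sketch (my own arithmetic, using the drive sizes above and ignoring formatting overhead):

def raid1_usable_gb(drive_gb):
    return drive_gb                  # mirrored pair: you get the capacity of one drive

def raid5_usable_gb(drive_gb, drives):
    return drive_gb * (drives - 1)   # one drive's worth of space goes to parity

print(raid1_usable_gb(160))          # 2 x 160 GB RAID 1 -> 160 GB for OS/Exchange
print(raid5_usable_gb(500, 3))       # 3 x 500 GB RAID 5 -> 1000 GB for data/apps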
August 14, 2007 5:11:06 PM

Get a PCI-e RAID controller and, if you can afford it, buy a spare in case it fails 5 or 6 years down the line and the manufacturer doesn't make that card anymore. I would also recommend not building a server yourself, only because if you have a hardware failure you will be down for a while until you get parts replaced. Buy something like what singingigo has in his sig and get a 24x7 support contract so you are never down more than a few hours, rather than days or weeks; nothing is worse than a CEO who can't get his email for a week.
August 14, 2007 5:29:09 PM

DEFINITELY get a separate RAID controller; at least that way there's a chance you can save the array when the MOBO dies.

People never seem to realize that RAID is about fault tolerance, not performance. Sure, you can configure RAID arrays to perform, but what you're buying is the ability to lose a drive without losing the data.

WHATEVER you get had better have a warranty, or it's just a toaster waiting for its chance to fail...

Current config:
WORK: LOTS of servers (dual 2x cores, 4GB+ RAM, 2+ TB of RAID 5), um - and routers/firewalls/switches/UPSes... all that crap we have at work

Home: same as above, but it works better 'cause I care more.
August 14, 2007 5:46:28 PM

mford66215: I'm using RAID for performance; does that make me bad? Of course not. RAID delivers many things:
- fault tolerance
- availability (array stays accessible in case of disk failure)
- flexibility (grow the array size while keeping data intact)
- large single volume (larger than 1TB; no single disk can achieve that)
- higher performance

If you're using it ONLY for the added fault tolerance, there's nothing wrong with that. But you have to understand that other people might have different interests than you and as such would (also) focus on the other key features RAID delivers.
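
To put the capacity/fault-tolerance trade-off in rough numbers for the 3 x 320 GB drives daveohr mentioned (illustrative only; equal-size drives assumed, and RAID 1 on 3 drives treated as a 3-way mirror):

def summarize(level, drives, size_gb):
    """Usable capacity and how many drive failures the array survives."""
    if level == "RAID 0":
        usable, survives = drives * size_gb, 0
    elif level == "RAID 1":
        usable, survives = size_gb, drives - 1   # n-way mirror
    else:  # "RAID 5"
        usable, survives = (drives - 1) * size_gb, 1
    return f"{level}: {usable} GB usable, survives {survives} drive failure(s)"

for level in ("RAID 0", "RAID 1", "RAID 5"):
    print(summarize(level, drives=3, size_gb=320))

RAID 5 is the middle ground here: most of the capacity of striping, with one drive's worth of fault tolerance.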
August 14, 2007 5:56:10 PM

To really answer this question properly, I would need some more info on how the server is going to be used. How many users will be accessing the database? What type of database: Oracle, Access, FileMaker? What other services will be running aside from the database and Exchange? Are you running Exchange 2003 or 2007? Will Exchange be used as an Edge, Hub, Mailbox server, or multi-role? Is this an existing server or are you building a new box? What are the machine specs?

If you're in a SOHO situation with 20 or fewer people hitting the database and sending/receiving a few hundred emails a day, then a dual/quad core proc with 4GB RAM and the onboard RAID5 will offer enough performance to do the job. As the small business grows, you can always add a controller card and migrate the array.

However, if you're in a large office/enterprise situation with hundreds of users hitting the database, receiving thousands of emails, as well as running other services on the same machine, then a dedicated controller card is the better way to go.
August 14, 2007 6:39:32 PM

He said SBS 2003, so it has to be Exchange 2003, as there is no x64 SBS and Exchange 07 needs x64. SBS also has a 75-user limit, so it can't be more than that.
August 14, 2007 7:48:12 PM

Enlightenment: Didn't mean to be negative about RAID performance!

You're right, of course... I didn't mean to denigrate the other positive aspects of RAID systems. I've used increased performance to sell RAID sets to management myself; I should have kept that in mind.

*properly chastised, he slinks off to delete some user accounts*


August 14, 2007 8:09:37 PM

chunkymonster said:
To really answer this question properly, I would need some more info on how the server is going to be used. How many users will be accessing the database? What type of database: Oracle, Access, FileMaker? What other services will be running aside from the database and Exchange? Are you running Exchange 2003 or 2007? Will Exchange be used as an Edge, Hub, Mailbox server, or multi-role? Is this an existing server or are you building a new box? What are the machine specs?

If you're in a SOHO situation with 20 or fewer people hitting the database and sending/receiving a few hundred emails a day, then a dual/quad core proc with 4GB RAM and the onboard RAID5 will offer enough performance to do the job. As the small business grows, you can always add a controller card and migrate the array.

However, if you're in a large office/enterprise situation with hundreds of users hitting the database, receiving thousands of emails, as well as running other services on the same machine, then a dedicated controller card is the better way to go.

You don't want to use RAID 5 on any kind of built-in chipset RAID, even for low use.
August 15, 2007 1:16:41 AM

Daveohr...

It seems most are in agreement: go with a separate controller for your drives. Use RAID5 for the data you want fault tolerance on. Possibly buy 2 controller cards for extra safety.

Enjoy your build and good luck with it.