JeffN825

Distinguished
Aug 5, 2008
5
0
18,510
Hi,

We have a Dell PowerEdge 6800 with 4 3.0Ghz dual core CPUs. We've recently added several more drives to the system and I'm evaluating how to reconfigure the RAID array(s).

Right now we have 10 300GB 10k RPM drives on a split backplane hooked into a PCI Express LSI PERC RAID card. 5 drives on each channel.

We have 2 73GB 15k RPM drives in a media cage, hooked up to the onboard RAID controller.

The server acts as a file server and SQL server. Both roles are important, but the file server performance takes precedence. Also, there's only about 20GB of SQL data, whereas there is about 500GB-600GB of file server data.

I'm currently torn between 3 solutions and was hoping to get some advice on what will yield optimal performance:

1. OS on a RAID 1 across the two 73GB drives, with two RAID arrays on the backplane: 3x300GB drives in a RAID 5 for the SQL data and 7x300GB drives in a RAID 5 for the file server data.

2. OS on a RAID 1 across the two 73GB drives, and 1 RAID 5 array across all ten drives on the backplane. I realize this would probably be the best-performing option overall, but I'm worried about SQL Server degrading the performance of the file server.

3. SQL on a RAID 1 across the two 73GB drives, OS on a RAID 1 of two drives on the backplane, and the remaining 8x300GB drives on the backplane in a RAID 5. I'm thinking this might be good because the two 73GB drives are 15k RPM, which might make a difference for SQL (vs. the 10k RPM 300GB drives), and they also best suit the SQL data size-wise (about 20GB).
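
For reference, here's the rough usable-capacity math behind the three options. This is only a back-of-the-envelope sketch assuming the usual RAID 1 / RAID 5 capacity formulas and nominal (marketed) drive sizes, with no hot spares:

```python
# Rough usable-capacity comparison of the three layouts described above.
# Assumes the standard formulas: RAID 1 usable = 1 drive, RAID 5 usable = (n - 1) drives.

def raid1_usable(drive_gb):
    return drive_gb                      # one drive's worth; the second is the mirror

def raid5_usable(n_drives, drive_gb):
    return (n_drives - 1) * drive_gb     # one drive's worth of capacity goes to parity

options = {
    "1: OS on 2x73 R1, SQL on 3x300 R5, files on 7x300 R5":
        (raid1_usable(73), raid5_usable(3, 300), raid5_usable(7, 300)),
    "2: OS on 2x73 R1, everything else on 10x300 R5":
        (raid1_usable(73), raid5_usable(10, 300)),
    "3: SQL on 2x73 R1, OS on 2x300 R1, files on 8x300 R5":
        (raid1_usable(73), raid1_usable(300), raid5_usable(8, 300)),
}

for name, vols in options.items():
    print(f"{name}: volumes = {vols} GB, total usable = {sum(vols)} GB")
```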

What are your thoughts?

Any advice is greatly appreciated.

Thank you.
 

4745454b

Titan
Moderator
I like idea number 3. First, if the database is only 20GB, that isn't very much, so why waste space on the 300GB drives (unless the database is expected to grow a lot)? Even if you put the database on the RAID 1 73GB drives, it could double and you'd still have room.

I wouldn't worry about 15K vs 10K. 8 10K disks in a RAID5 array will be fast.
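
To put very rough numbers on that, here's an illustrative sketch; the per-drive IOPS figures and the standard RAID write penalties are ballpark assumptions for drives of that era, not measurements:

```python
# Very rough random-I/O comparison: 8x 10K drives in RAID 5 vs 2x 15K drives in RAID 1.

IOPS_10K = 130   # assumed random IOPS per 10K RPM drive (ballpark)
IOPS_15K = 180   # assumed random IOPS per 15K RPM drive (ballpark)

def array_iops(n_drives, per_drive_iops, write_penalty, write_fraction):
    """Crude small-random-I/O model: reads use all spindles, and each
    logical write costs `write_penalty` physical I/Os."""
    raw = n_drives * per_drive_iops
    return raw / ((1 - write_fraction) + write_fraction * write_penalty)

for write_fraction in (0.0, 0.3):
    r5 = array_iops(8, IOPS_10K, write_penalty=4, write_fraction=write_fraction)
    r1 = array_iops(2, IOPS_15K, write_penalty=2, write_fraction=write_fraction)
    print(f"{int(write_fraction * 100)}% writes: "
          f"8x10K RAID 5 ~{r5:.0f} IOPS, 2x15K RAID 1 ~{r1:.0f} IOPS")
```

Even with the RAID 5 write penalty factored in, the eight-spindle array comes out well ahead of the two 15K drives for random I/O.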
 

rozar

Distinguished
Jun 7, 2007
345
0
18,780
I would start by getting the 2 drives you have on the motherboard controller onto the LSI controller. You might be thinking that's not possible since you only have 2 channels on your controller and a split backplane, but you can do it as long as the controller/backplane handles termination. All you need to do is use a single cable with at least 3 connectors: plug a middle connector into one port on the controller and run the 2 ends to the 2 backplane connectors. That gets the 2 15k drives on the controller. Also, the benefit of the 15k drives for you will really be access time more than throughput, so keep your OS there. For the rest, just follow plan 1 above.
 

JeffN825

Distinguished
Aug 5, 2008
5
0
18,510
Thanks for the advice.

One other question.

If the 600GB of data consists of about 300GB that requires high performance and about 300GB where performance doesn't matter, would you say I'm best off keeping them on two different RAID 5 arrays, with the high-performance data on a larger array and the low-performance data on a smaller one? Or does the benefit of a single array outweigh the gains from having less disk usage on the high-performance RAID 5?

Thanks.
 

rozar

Distinguished
Jun 7, 2007
345
0
18,780
Keep in mind that arrays, volumes, and partitions are all possible. The real question is why you want the 2 different sets of data separated. You could create 2 arrays, but by doing that you lose spindles and, perhaps less importantly, give up 1 more drive to parity overhead. If your controller allows it, you can create (or carve) 2 volumes on the same array. You have slightly more exposure that way, but in Windows Device Manager you will see 2 separate disks, both of which sit on the same RAID 5 array, and no spindles are lost. You could even create 1 RAID 5 array with no carving and just create 2 partitions on the single volume in Windows.
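
To show what splitting the ten drives actually costs, here is a quick sketch using only the standard RAID 5 capacity formula and nominal drive sizes (nothing controller-specific assumed):

```python
# Splitting the ten 300GB drives into two RAID 5 arrays costs an extra parity
# drive and leaves each data set with fewer spindles behind it.

DRIVE_GB = 300

def raid5(n):
    return {"spindles": n, "usable_gb": (n - 1) * DRIVE_GB, "parity_drives": 1}

single = [raid5(10)]          # one 10-drive RAID 5
split  = [raid5(3), raid5(7)] # the 3 + 7 split from option 1

for label, arrays in (("single 10-drive RAID 5", single), ("3 + 7 split", split)):
    usable   = sum(a["usable_gb"] for a in arrays)
    parity   = sum(a["parity_drives"] for a in arrays)
    spindles = [a["spindles"] for a in arrays]
    print(f"{label}: usable = {usable} GB, parity overhead = {parity} drive(s), "
          f"spindles per volume = {spindles}")
```

The single array gives 2700GB usable with all ten spindles behind every volume; the split gives 2400GB, with only 3 or 7 spindles behind each data set.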

So really, without knowing much more detail about what you are trying to do and why, I can't answer your question. But hopefully with the brief explanation above you can choose for yourself which is best for your application. I think the main concern is how often and how constantly each set of data will be accessed, for both reads and writes.
 

JeffN825

Distinguished
Aug 5, 2008
5
0
18,510
That's a really simple and excellent point. Thank you. Although it may seem like allocating separate arrays is beneficial, it's really just a red herring. At the end of the day, it's the same amount of usage spread across the same total available throughput, and creating two separate arrays just means less throughput for each.

Is there a reason not to create a single RAID5 across all 10 drives and put both file data and SQL data on it? I'd then keep the T-Logs on the RAID1 with the OS...
 
Typically in a corporate setting the following are done.

For the operating system you want to use 2 drives in RAID 1 (mirroring); since the OS typically does a lot of reads, it benefits more from RAID 1. Then with the rest of your drives you want to do RAID 5, where all the heavy writes (databases, file storage, etc.) go.

You should see two logical drives after that.

It takes a long time, just so you know. With 10 drives it might take ~2 days to build the RAID and get it out of degraded mode.
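
As a rough illustration of where a figure like ~2 days can come from: the throttled per-drive initialization rate below is purely an assumed number, since real controllers vary a lot depending on how much host I/O they have to service at the same time.

```python
# Rough estimate of background initialization time for a large RAID 5 array.
# All member drives are written in parallel, so the time is governed by how fast
# each drive can be processed at the controller's (throttled) background rate.

DRIVE_GB = 300
INIT_MB_PER_SEC = 2.0   # assumed effective per-drive rate while the box is in use

seconds = DRIVE_GB * 1000 / INIT_MB_PER_SEC
hours = seconds / 3600
print(f"~{hours:.0f} hours (~{hours / 24:.1f} days) at {INIT_MB_PER_SEC} MB/s per drive")
```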

edit: For example, you would use the two 73GB drives for the OS in RAID 1, then use the rest of the drives you can fit for RAID 5, assuming they are ALL the same model, brand, and size.

Because it's RAID 5, it doesn't make a whole lot of sense to do 2 RAID 5s; I don't see the point of that. Just do RAID 1 + RAID 5 and you'll be fine.
 

JeffN825

Distinguished
Aug 5, 2008
5
0
18,510
There are 12 drives: 2 73GB 15k drives (which, yes, will be in a RAID 1 for the OS) and 10 300GB 10k drives. I was just trying to decide whether or not to break up the 10 300GB drives into multiple RAID 5 arrays.
 

rozar

Distinguished
Jun 7, 2007
345
0
18,780
Jeff, I just have to ask: why would you want to? So far I don't see a reason for you to do it. If you want to separate the data, just use a single RAID 5 and create 2 partitions at the sizes you want.

There are a few reasons to have 2 separate arrays, but so far you haven't listed one.

Why were you considering 2 data arrays?
 

JeffN825

Distinguished
Aug 5, 2008
5
0
18,510
Your previous post a couple of hours ago pretty much convinced me to have a single RAID 5. As you pointed out, I don't want to lose the spindles or the throughput in maintaining 2 separate arrays (and thus have 2 drives dedicated to parity instead of one).

Basically, the only reason I was considering otherwise is that I don't want low-priority resources contending with higher-priority resources for disk access (and thus slowing down access to the high-priority data).