Solved

RAID - optimal configuration for six disks with single controller

February 28, 2011 5:45:42 PM

I am configuring a new Dell PE R710 server for Windows 2008 R2 SP1 with Hyper-V. With two X5650 (six-core) processors and 32 GB of RAM, the server has more processor and RAM than I expect to need (especially now that SP1 supports Dynamic Memory) in order to run the 10 or so servers I intend to run. However, I don't have a SAN, and I am working with a single PERC 6/i controller and six 600 GB 15K SAS disks. Most of the servers will be low-load application servers, but there will be one database server (either SQL or Exchange; I haven't decided which to place here yet).

I am looking for some feedback on the best disk configuration. A common suggestion I see online is to use a RAID 1 for the host OS, then a separate RAID 10 for VHDs, but with a single RAID controller, is there any advantage to splitting my disks into two separate arrays like that? I'm thinking that a six-disk RAID 10 would perform significantly better than a four-disk RAID 10, and with the host OS only running the Hyper-V role, would the added load of the host OS really outweigh the performance gain of six disks?
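
To sanity-check my spindle math, here's a rough sketch (the ~175 random-read IOPS per 15K SAS spindle is just an assumed planning figure, not something I've measured):

```python
# Rough spindle math for the two candidate layouts.
# ASSUMPTION: ~175 random-read IOPS per 15K SAS spindle
# (a common planning figure, not a measured number).
SPINDLE_IOPS = 175
DISK_GB = 600

def raid10(disks):
    """Usable capacity and aggregate random-read IOPS for a RAID 10 set.

    Mirroring halves raw capacity; mirrored reads can be serviced by
    either copy, so random reads scale with the full spindle count.
    """
    return disks * DISK_GB / 2, disks * SPINDLE_IOPS

for label, n in [("6-disk RAID 10", 6),
                 ("2-disk RAID 1 (OS) + 4-disk RAID 10 (VHDs)", 4)]:
    gb, iops = raid10(n)
    print(f"{label}: {gb:.0f} GB for data, ~{iops} random-read IOPS")
```

By that crude measure the six-disk set has about 50% more spindles (and usable space) behind the VHDs than the four-disk set, which is the gap I'm wondering about.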

Also, could I (or should I) simply create multiple RAID 10 arrays across all six disks, possibly creating three separate RAID 10s: one for the host OS, one for VHDs, and one array to attach directly to the database server VM as a pass-through disk (I read that pass-through would perform better than a VHD)?

Also, if I configure one array as a pass-through disk to a VM and I lose that VM, I assume I would be able to simply attach that pass-through disk to a new VM, correct?

I am not really strong with hardware, so any feedback would be appreciated.

thanks,
Jeeef

mavroxur
February 28, 2011 5:58:27 PM

I always separate boot and data volumes personally, just in case something happens to either, you don't gack both. I generally run the OS on a mirror, and the data disks on either a RAID 5 or a RAID 10, depending on how much data will be housed, the level of redundancy I need, and the throughput I need (e.g. a file server would get a RAID 5, but databases, VMs, etc. would sit on a RAID 10). There are other factors I weigh when deploying a new server, but those are the basics.
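
To put rough numbers on that RAID 5 vs RAID 10 call, here's a sketch of the classic write-penalty arithmetic (assuming ~175 IOPS per spindle; a real controller's write-back cache will blur these figures):

```python
# Classic RAID write-penalty arithmetic:
#   RAID 5:  read data + read parity + write data + write parity = 4 I/Os per write
#   RAID 10: write both mirror copies                             = 2 I/Os per write
WRITE_PENALTY = {"RAID 5": 4, "RAID 10": 2}

def effective_iops(raw_iops, level, read_frac):
    """Front-end IOPS the host sees for a given read/write mix."""
    write_frac = 1 - read_frac
    return raw_iops / (read_frac + WRITE_PENALTY[level] * write_frac)

raw = 4 * 175  # four spindles at an assumed ~175 IOPS each
for level in ("RAID 5", "RAID 10"):
    print(f"{level}: file-server mix (90% read) ~{effective_iops(raw, level, 0.9):.0f} IOPS, "
          f"database mix (60% read) ~{effective_iops(raw, level, 0.6):.0f} IOPS")
```

The more write-heavy the workload, the further RAID 10 pulls ahead, which is why the databases and VMs land there.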

rand_79
March 8, 2011 2:50:09 AM

RAID doesn't improve access time. So if you have one server doing a bunch of little reads and writes, it will slow everything down even with comparatively little throughput.

Six drives in one array is also risky, because if you mess something up your OS is hosed too.

Depending on whether it's a test environment or production, you can vary your setup to control costs - e.g. no RAID on the OS drive because you have a full backup, and it's not important if it's down for an hour.

The optimal setup would vary widely depending on your exact VMs and what you are running.

mavroxur
March 8, 2011 6:11:08 PM

rand_79 said:
RAID doesn't improve access time.

I suggest reading up a little bit more on RAID levels. RAID 0, 10, and 0+1 will all decrease access times.

Quote:
Six drives in one array is also risky, because if you mess something up your OS is hosed too.

Hence the reason I recommended separating the boot and data volumes into different arrays.
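
Both statements are actually partly right. Here's a toy model of why (assumed ballpark numbers for a 15K drive, not measurements): an array doesn't shorten a single seek, but it drains a queue of independent requests in parallel.

```python
# Toy model reconciling the two claims above (assumed ballpark
# numbers for a 15K drive, not measurements).
SERVICE_MS = 5.5   # ~3.5 ms avg seek + ~2 ms avg rotational latency
QUEUE = 32         # outstanding independent random reads

for spindles in (1, 2, 6):
    # A single isolated read still pays the full seek + rotation:
    isolated = SERVICE_MS
    # But a queue of independent reads drains in parallel across
    # spindles, so time-to-completion drops under concurrent load:
    drain_ms = QUEUE * SERVICE_MS / spindles
    print(f"{spindles} spindle(s): isolated read {isolated} ms, "
          f"{QUEUE} queued reads finish in {drain_ms:.0f} ms")
```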




Best solution

sminlal
March 8, 2011 7:29:04 PM

In performance terms I don't think there's any particular reason to use RAID 1 for the OS in conjunction with RAID 0+1 for the data - you may actually get a bit of an edge by using just one large 6-drive RAID 0+1 set for everything. Remember that on a server you normally start up the system and whatever server software you're using (web, database, file servers, etc.) and then leave it up for a long period of time. So the OS disk (or partition) is really only a bottleneck during startup - once the system is up and running it's really not all that active. Spreading everything across six disks will probably give better concurrent I/O performance than partitioning it into two separate sets.

But there are good reasons to keep the OS separate from the data. You can do this by simply partitioning one big RAID set, or you can physically separate the OS onto its own drives by having two RAID sets. On balance, my tendency has always been to keep the OS on its own drives, although I have to admit that I've never really run into a situation where that saved my bacon.

mavroxur
March 9, 2011 12:48:49 PM

sminlal said:
In performance terms I don't think there's any particular reason to use RAID 1 for the OS in conjunction with RAID 0+1 for the data - you may actually get a bit of an edge by using just one large 6-drive RAID 0+1 set for everything. [...] On balance, my tendency has always been to keep the OS on its own drives, although I have to admit that I've never really run into a situation where that saved my bacon.

You ALWAYS separate OS and data volumes in the real world. It's a best practice for server builds. That way, if the OS gacks its volume for some reason (not a hardware problem, something like partition corruption, etc.), you don't lose the partition that houses your data. If it's all on the same volume, then you would lose everything.

You use a RAID 1 on your boot volume for redundancy of your OS and software. That way, if you lose a boot drive, you don't have to spend hours reloading the OS, configuring, patching, loading software, patching more, etc. - you just replace the failed drive and rebuild the array.

I'm not referring to "partitioning" the volume into different sets. I'm talking about building two separate arrays: one array as a 2-drive mirror for boot, and one array as a 4-drive RAID 5 or a 4-drive RAID 10. Partitioning one giant RAID set into two separate volumes would work, but if the partition table gets gacked up, then you'd lose both partitions anyway. If you separate them into different arrays, you eliminate that possibility. There's no sense in cutting corners, since you don't have to buy any new hardware to do it the right way; you just have to configure it properly.

sminlal
March 9, 2011 3:47:14 PM

mavroxur said:
You ALWAYS separate OS and data volumes in the real world. It's a best practice for server builds.
Actually, that's not necessarily true. In many SAN environments the LUNs allocated for various servers come from a shared storage pool. Although the server sees separate "drives", the physical storage is in fact shared. The benefits of this include better performance (by spreading I/Os across all spindles instead of partitioning them onto separate ones) and the ability to snapshot the entire server.

But for direct-attached storage, I do agree that separate OS drives is the usual practice.

mavroxur
March 11, 2011 5:33:22 PM

sminlal said:
Actually, that's not necessarily true. In many SAN environments the LUNs allocated for various servers come from a shared storage pool. [...] But for direct-attached storage, I do agree that separate OS drives is the usual practice.

Nowhere did he mention he was using a SAN, though. And even if he were, you don't typically boot your OS from a SAN.

March 12, 2011 6:11:00 PM

Best answer selected by Jeeef.

March 12, 2011 6:16:58 PM

Thank you everyone for your feedback. After discussing with a hardware expert and getting feedback from Microsoft, I decided to create one large six-disk RAID 10. I created a 100 GB OS partition and used the rest for a data volume which will hold my VHDs. I am fairly confident that this was the best choice given the hardware I had to work with. If I were ordering a new server and had the option to order two additional disks, then my choice would have been different.
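
For reference, the arithmetic on the final layout (nominal GB, before formatting overhead):

```python
# Arithmetic for the final layout: six 600 GB drives in one RAID 10,
# carved into a 100 GB OS partition plus a data volume for the VHDs.
disks, disk_gb, os_gb = 6, 600, 100
usable = disks * disk_gb // 2   # mirroring halves raw capacity
print(f"usable {usable} GB -> OS {os_gb} GB + data {usable - os_gb} GB")
# -> usable 1800 GB -> OS 100 GB + data 1700 GB
```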

Thanks again for the feedback.