RAID Configuration: Dell T7600 w/PERC H710P - 4 x 4 TB HDDs and 2 x 250 GB SSDs

Tags:
  • Storage
  • Dell
  • NAS / RAID
  • SSD
  • Dual Boot
  • Debian
October 15, 2014 8:14:13 AM

I am trying to decide on the best RAID configuration for dual-booting Windows 7 x64 and Debian (Wheezy). My initial thought was to put the four 4 TB drives in RAID 1+0 and use that as a shared data drive, then use one SSD for Win7 and the other for Debian. Unfortunately, the H710P adapter doesn't allow non-RAID (pass-through) drives, so each SSD has to be configured as a single-drive RAID 0 to be usable. Furthermore, when the PERC is installed it disables the onboard SATA ports, so I can't connect the SSDs directly. Here are a few options I've considered:

Option 1
4 x 4 TB HDDs - RAID 1+0 (Data)
1 x 250 GB SSD - RAID 0 (Win7)
1 x 250 GB SSD - RAID 0 (Debian)

Option 2
4 x 4 TB HDDs - RAID 1+0 (Data)
2 x 250 GB SSD - RAID 0 (partitioned: 250 GB - Win7; 250 GB - Debian)

Option 3 - Buy a PCIe SATA controller so I can keep the bootable drives off the PERC
4 x 4 TB HDDs - RAID 1+0 (Data)
1 x 250 GB SSD - PCIe SATA Controller (Win7)
1 x 250 GB SSD - PCIe SATA Controller (Debian)

This is my first attempt at a RAID configuration so any advice would be appreciated! Thanks!
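For comparison, the usable capacity and fault tolerance of each layout can be tallied with a quick sketch (drive sizes are from the post; the helper functions are illustrative only, not a real tool):

```python
# Quick sketch of usable capacity per layout, in GB.
# The helpers and names are illustrative assumptions, not real utilities.

def raid10_capacity(drives):
    """RAID 1+0: half the total capacity (each mirror pair stores one copy)."""
    return sum(drives) // 2

def raid0_capacity(drives):
    """RAID 0: full total capacity, but zero redundancy."""
    return sum(drives)

hdds = [4000] * 4   # 4 x 4 TB WD Red
ssds = [250] * 2    # 2 x 250 GB Samsung 840 EVO

print(raid10_capacity(hdds))    # data array; survives one failure per mirror pair
print(raid0_capacity(ssds))     # option 2: one striped boot volume, no fault tolerance
print(raid0_capacity([250]))    # options 1/3: each SSD as its own single-drive volume
```

The numbers make the trade-off in option 2 concrete: one 500 GB striped volume, but a single failed SSD takes down both operating systems.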


October 15, 2014 10:58:14 AM

I like option 3, just for the fact that it will separate your boot drives entirely from your RAID 10 array; it will make things easier.

Definitely not option 2: if one of those drives fails, both of your boot partitions are gone.
October 15, 2014 12:30:47 PM

saywhut said:
I like option 3, just for the fact that it will separate your boot drives entirely from your RAID 10 array; it will make things easier.

Definitely not option 2: if one of those drives fails, both of your boot partitions are gone.


The only reason I considered option 2 is that I've read RAID 0 across two SSDs can deliver significant performance gains if configured correctly. Is there any merit to that?
October 15, 2014 12:34:50 PM

There is merit to that. RAID 0 is the only level that offers no data redundancy; it trades redundancy entirely for the performance boost of striping.

That being said, SSDs are already fast as it is. A RAID 0 configuration may benchmark faster, but realistically speaking, a standalone SSD will fare just fine in terms of performance.

If you want to squeeze out every last bit of performance, by all means, go for it.
October 15, 2014 1:05:58 PM

saywhut said:
There is merit to that. RAID 0 is the only level that offers no data redundancy; it trades redundancy entirely for the performance boost of striping.

That being said, SSDs are already fast as it is. A RAID 0 configuration may benchmark faster, but realistically speaking, a standalone SSD will fare just fine in terms of performance.

If you want to squeeze out every last bit of performance, by all means, go for it.


Perhaps, some other specs on the machine will help inform my decision.

2 x Intel Xeon Processor E5-2687W (20M Cache, 3.10 GHz, 8.00 GT/s Intel® QPI)
128 GB RAM (1600 MHz DDR3)
NVIDIA Quadro K5000 (4 GB GDDR5)
4 x 4 TB WD Red
2 x 250 GB Samsung 840 EVO

The workstation is used primarily for scientific modeling (e.g. Monte Carlo simulations, agent-based modeling, etc.). Would the performance benefit of RAID 0 be worth it? I think most of these tasks are CPU-bound, so RAID 0 may yield only minimal performance gains. Does that seem correct? If so, option 1 or 3 sounds like a no-brainer.
October 15, 2014 1:55:11 PM

That is correct. The only benefit you would see from RAID 0 is faster load times for programs installed on that array. If the programs are stored on your RAID 10 array, you will see no benefit whatsoever whether your SSDs are in RAID 0 or standalone.

All the rendering for the modeling and simulations will be handled by your CPU/GPU.

The only thing to watch out for with option 3 is whether the machine will be able to boot from a drive on the PCIe card. I know the PERC is listed in the boot sequence in the BIOS, so the PCIe card MAY show up there as well.

That is one heck of a machine :) 
October 15, 2014 2:25:07 PM

saywhut said:
That is correct. The only benefit you would see from RAID 0 is faster load times for programs installed on that array. If the programs are stored on your RAID 10 array, you will see no benefit whatsoever whether your SSDs are in RAID 0 or standalone.

All the rendering for the modeling and simulations will be handled by your CPU/GPU.


That's what I was thinking. Thank you for confirming! The machine is already snappy enough and we care more about processing data than opening programs. :) 

saywhut said:
The only thing to watch out for with option 3 is whether the machine will be able to boot from a drive on the PCIe card. I know the PERC is listed in the boot sequence in the BIOS, so the PCIe card MAY show up there as well.


I'll try to find out if it can UEFI boot from a PCIe SATA controller. Any thoughts on a decent controller? One other thing to note is that the SSDs are mounted in the Dell swappable bays. The cable out is a SAS cable. I assume a SAS controller would control all four drives in that bay?

Also, are there any drawbacks to option 1 if I can't boot from PCIe?

saywhut said:
That is one heck of a machine :) 


Thanks! We've been running it RAID 10 with the 4 x 4 TB drives. We decided to make the transition to Debian, but we still have some users who SSH in to run Windows programs (hence the dual boot). I figured adding two SSDs and keeping the OSes separate would be an easy and smart solution, but it's turned out to lead me down a few rabbit holes!

October 15, 2014 2:32:08 PM

If you want to pass a single drive through to the OS with a PERC H700, you must create a single-drive RAID 0, since the PERC controllers do not provide direct pass-through functionality. This is probably the same on the H710.
October 15, 2014 2:43:58 PM

popatim said:
If you want to pass a single drive through to the OS with a PERC H700, you must create a single-drive RAID 0, since the PERC controllers do not provide direct pass-through functionality. This is probably the same on the H710.


Yes, this is true; that's what I have laid out in option 1. I'm curious whether there would be a performance increase from hooking these drives up to a PCIe SAS controller instead of passing them through the PERC. Thoughts?
October 15, 2014 6:26:59 PM

Sorry, I thought you meant to RAID them together in option 1.
There would be little to no difference in speed, just the added complexity of an additional controller. The H710P is pretty fast and can do about 1,200 MB/s and 38,000 IOPS sequential, so you won't be limited by the card with just two SSDs.
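To put rough numbers on that: assuming about 500 MB/s sequential per SATA SSD (a ballpark SATA III figure, not a measurement of the 840 EVO), two of them together still sit below the controller's quoted ceiling:

```python
# Back-of-the-envelope check that two SATA SSDs won't saturate the H710P.
# The per-drive figure is an assumed typical SATA III speed, not a measurement.

SSD_SEQ_MBPS = 500       # assumed sequential throughput per SSD
H710P_SEQ_MBPS = 1200    # controller figure quoted above

combined = 2 * SSD_SEQ_MBPS
print(combined, combined < H710P_SEQ_MBPS)   # 1000 True
```

So even with both SSDs reading flat out at once, the PERC has headroom; the card is not the bottleneck in any of the three options.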
October 16, 2014 7:14:20 AM

ctasich said:
I'll try to find out if it can UEFI boot from a PCIe SATA controller. Any thoughts on a decent controller? One other thing to note is that the SSDs are mounted in the Dell swappable bays. The cable out is a SAS cable. I assume a SAS controller would control all four drives in that bay?

Also, are there any drawbacks to option 1 if I can't boot from PCIe?

Not really; you will just have to RAID 0 the drives individually, and they will all boot from the PERC. If the PERC fails, the whole system will be down.

The T7600 3.5" and 2.5" bays should be able to support both SAS and SATA. Not sure if there's simply a SAS cable already in place by chance?