Double RAID 1+0 setup possible?

mike.deslandes

Prominent
Aug 16, 2017
Hey Folks

System: i5-2500 CPU
8GB RAM (can jump to 24GB)
Mobo: ASUS P8Z77-V LX

120GB SSD
Originally ran Windows 10

1 HDD 2TB (WD - Black)
1 HDD 1TB (WD - Black)

What I'd like to do is create a web development environment with multiple VMs running multiple instances of Apache, and maybe an email server. Each instance would also have a DB server attached (for sizing purposes, consider one DB would be 1,000 records maximum, plus tables).

#1. I'm thinking I can install all the software on 2 x 120GB SSDs in RAID 1+0 via the main motherboard (for redundancy).

But I'd also like to increase the HDD storage size, so several questions come up:

#2. Would 6TB (the 1TB + 2TB on hand, plus another 1TB + 2TB, I'm thinking Seagate) be sufficient storage for all of what I want to do in RAID 1+0, so if one drive bombs I just drop in a replacement and have the array rebuild itself?

#3. Would it be more advantageous to run four NEW 3TB or 4TB drives? (A 4TB Seagate is $120.)

#4. Would I need a PCI-e RAID controller (4 SATA ports) to put the HDDs on their own RAID? Would the two RAID groups work together, or would the motherboard go direct to PCI-e and ignore what's on its own board?

Thanks a bunch

M.
 

popatim

Titan
Moderator
#1 - You need four drives to make a RAID 10: two for each of the RAID 1's, which you then RAID 0 together.

#2 - These will not give you 6TB, due to a few factors.
Drives in an array are normally the same size; if they are different, only the capacity of the smaller drive is used when putting it in a RAID with the larger drive.
- In RAID 1 you lose the entire capacity of one of the drives, so your 1+2 example would result in a 1TB RAID 1 array.
- RAID 0 adds both drives' space together, but keep in mind the 'only the smaller drive size' issue. Your 1+2 in a RAID 0 would yield a (1+1) = 2TB RAID 0 array.

In your (1+2) + (1+2) example, you would wind up with a (1+1) in RAID 1 (= 1TB), plus a second 1TB RAID 1, striped into a 2TB RAID 0.
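The capacity arithmetic above can be sketched in a few lines (the helper functions are hypothetical, just to illustrate the rules: RAID 1 is limited to its smallest member, RAID 0 is the smallest member times the member count, and RAID 10 is a RAID 0 stripe across RAID 1 pairs):

```python
# Usable capacity of nested RAID levels, sizes in TB.

def raid1(*drives):
    # A mirror is limited to its smallest member.
    return min(drives)

def raid0(*drives):
    # A stripe uses the smallest member's size times the member count.
    return min(drives) * len(drives)

def raid10(pairs):
    # RAID 10 = a RAID 0 stripe across RAID 1 mirrored pairs.
    return raid0(*(raid1(*p) for p in pairs))

# The (1TB + 2TB) + (1TB + 2TB) example from the thread:
print(raid10([(1, 2), (1, 2)]))  # 2 -- each pair mirrors to 1TB, striped to 2TB

# Four matched 4TB drives instead:
print(raid10([(4, 4), (4, 4)]))  # 8
```

Matched drives waste nothing, which is why four identical drives come out so far ahead of the mixed 1TB/2TB plan.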

#3 - Yes, much better to use four drives of the same size and RPM speed <see above>.

#4 - Your Z77 chipset motherboard supports RAID 10, so this can be done with just that, if supported by the OS you will be using. Motherboard RAID is still software RAID and needs OS support.
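Since motherboard RAID is software RAID anyway, if the server ends up on Linux the same four-drive RAID 10 can be built directly with mdadm. A sketch, assuming four blank drives at /dev/sdb through /dev/sde (device names and the mount point are placeholders):

```shell
# Create a 4-drive software RAID 10 array as /dev/md0.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on the array and mount it.
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/storage

# If a drive fails later, remove it and add the replacement;
# the array rebuilds itself onto the new member.
sudo mdadm /dev/md0 --remove /dev/sdd
sudo mdadm /dev/md0 --add /dev/sdf

# Watch rebuild progress.
cat /proc/mdstat
```

This gives the "drop in a replacement and let it rebuild" behavior asked about in #2, without depending on the chipset's fakeRAID drivers.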
 

mike.deslandes

Prominent
Aug 16, 2017
Well your post got me thinking.

I've altered the hardware components slightly. Right now I'm trying to decide which way is most appropriate (and COST EFFECTIVE).

I guess I should also mention the actual box is running
an i5-2500 CPU (quad core) and 32GB RAM @ 1333/1600

I have 2 x Samsung EVO 850 - 500GB

I can put them in Raid 0 or 1 or 10.

If I go RAID 0, the drives are striped; it's treated as 1TB of SSD, but in the event of a failure I lose everything.
If I go RAID 1, the drives are mirrored; if one fails, the other can rebuild it.
If I go RAID 10, I need two more 500GB drives, for a total of four SSDs.

That said, I could also pack everything onto the 2TB and 1TB (7200 RPM WD Black) HDDs, either internally or externally. The question is, which build is best.

Consider this isn't high traffic; I'm not anticipating thousands of hits every hour. It might have an email server on it, or a blog, but that's about it, aside from my web development portfolio, which will house functional sites. So it wouldn't get many hits aside from prospective employers.

What are your thoughts? Would you still spend the extra $400? (The 500GB drives are on sale for the next 24 hours.)

Hit me back.

Thanks :)


 

popatim

For what you are proposing, I would run the two 500's individually, use the 1TB internally, and use the 2TB as external backup. As I see it, it won't matter if the server goes down while you wait for a drive, should one fail.

Use one 500GB as the boot OS & VM drive while using the other as the drive to host the actual VMs. That should be more than fast enough for light use.
If you find you need a little more performance, switching the two SSDs into a RAID 0 and restoring your backups shouldn't take long at all.
 
Solution