Which RAID setup for 26TB Media Server / HTPC ?

October 13, 2011 1:12:31 AM

Hello everyone,

I have a 13-bay (13x HDD) HTPC / media server that I built, and I need to know which RAID setup will be best for me. I have already purchased the hardware (scroll to the bottom of the post for the equipment list). I understand this may be overkill, but I went with this card because of the 16 ports (I need 13), the huge cache, the built-in hot-spare support so I can sleep at night, and the SSD "upgrade" key that I'm connecting an Intel 320 Series SSD to (I'll just tape it to my case). The BBU is great for those occasional brownouts too. All of this is worth my peace of mind for years to come, IMO. Oh, and the 4 SAS-to-SATA cables are included, which I know are pricey.

Let me tell you what I do with it so you understand my needs.
Tasks and objectives from the HTPC / Server:
- Streaming 1080p content to two (MORE IN FUTURE) different HDTVs at the same time.
- Record up to two ATSC/QAM shows at the same time with a dual tuner card.
- Torrenting
- Central backup unit that 3 PCs report to in the middle of the night for images and folder syncs.
- 3D Blu-Ray Player
- I use a SSD to keep my OS separated from the RAID array.

Questions:

1. Do I go with RAID 5, 6, 10, 50, or 60? Keeping the most possible storage space is highly preferable because I already have 8TB of media alone, not including backups. However, I want redundancy. I plan on using the hot-spare feature built into the card so that in case of an HDD failure, it will automatically replace the failed drive. I am thinking RAID 5 or 6 is my best option. Am I correct or incorrect?

2. Can I do a relatively painless transition from the 2TB HDDs I currently own (Seagate Greens) to 4TB HDDs (when available)? The card supports 3TB+, and I have a UEFI BIOS.

RAID Equipment purchased:

Intel RS2WG160 PCI-Express 2.0 x8 SATA / SAS (Serial Attached SCSI) Controller Card $799 with cables
This is the exact same card as the LSI MegaRAID Internal 9260-16i, which costs $930 without cables.
Excellent Performance, Highly Scalable: LSI SAS2108 ROC technology, an x8 PCI Express Generation 2 host interface, and 512MB of on-board 800 MHz DDR2 cache enhance the performance of mainstream applications. It can connect up to 16 drives directly, or up to 128 using SAS expanders.

Supports data redundancy using SAS or SATA hard disk drives through mirroring, parity, and double parity (RAID levels 1, 5, and 6), plus striping capability for spans (RAID levels 10, 50, and 60).

BBU Support: This adapter supports the optional Intel Smart Battery AXXRSBBU7 or AXXRSBBU8 to maintain cached data in case the server or power fails, eliminating the need for an additional bulky power supply.

Hot Spare: Includes global hot spare support that automatically comes online to replace the first drive to fail on any array or disk group on the controller.

Intel RAID Smart Battery AXXRSBBU7 $169
This Intel RAID Smart Battery AXXRSBBU7 monitors the voltage level of the DRAM modules on the RAID controller. If the voltage drops below a predefined level, the Smart Battery switches the memory power source from the RAID controller to the battery pack. The battery pack provides power for the memory until the voltage returns to an acceptable level, at which time the Smart Battery circuit board switches the power source back to the RAID controller. Cached data is then written to the storage devices with no loss of data. This Smart Battery provides additional fault tolerance when used with a UPS.

Intel AXXRPFKSSD Activation Key $170
Uses solid-state drives (SSDs) as additional cache for the RAID controller by means of SSD flash tiering; frequently accessed information is stored in cache to allow for rapid access.

Accelerates SSDs using FastPath I/O, providing up to 465,000 I/O reads per second for small, random block-size I/O activity; this is a dramatic increase over solutions that do not use FastPath.

Thanks in advance :D 


October 13, 2011 3:37:31 AM

I honestly don't know how to answer your question. 13x2TB is a lot. RAID5 is a good choice, but finding the best setup for 13 drives is difficult. RAID6 would have a smaller capacity but would allow a fault tolerance of two drives. RAID5 will give you a capacity of 24TB and RAID6 will give you a capacity of 22TB. RAID50 is a strange situation and I'm not sure how well it would work with a prime number of drives.

I would be interested to see your entire configuration if you wouldn't mind posting it.
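
For reference, here's a quick sketch of the arithmetic behind those capacity figures (the standard n-1 / n-2 parity rules; the span and hot-spare handling is a simplified assumption, and formatting overhead is ignored):

# Usable capacity for an array of equal-size drives. RAID 5 loses
# one drive to parity per span, RAID 6 loses two; hot spares hold
# no data. Real-world numbers come out lower after formatting.

def usable_tb(drives, drive_tb, level, spans=1, hot_spares=0):
    data_drives = drives - hot_spares
    parity_per_span = {"5": 1, "6": 2}[level[0]]   # "50"/"60" use the per-span rule
    return (data_drives - parity_per_span * spans) * drive_tb

print(usable_tb(13, 2, "5"))                          # 24 TB
print(usable_tb(13, 2, "6"))                          # 22 TB
print(usable_tb(13, 2, "5", hot_spares=1))            # 22 TB, RAID 5 + spare
print(usable_tb(13, 2, "6", hot_spares=1))            # 20 TB, RAID 6 + spare
print(usable_tb(13, 2, "50", spans=2, hot_spares=1))  # 20 TB, two 6-drive spans
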
October 13, 2011 3:55:53 AM

If you're looking to go purely for space, RAID 5. HOWEVER, since you have 13 drives in your array that were all likely bought at the same time, probably some even from the same batch, your safer bet would be RAID 6, since hard drives have a rough time during the rebuild process and sometimes an additional drive will fail. These aren't enterprise-level drives either, so bear that in mind.

Any form of RAID 0 (RAID 60, 50, 10, etc.) is purely for performance gains over the base level (6, 5, 1, etc., respectively); it doesn't give you any additional protection and will most likely cost you space.

Any form of mirroring (RAID 10, 1, etc.) will HALVE your available space but give you good data protection. Since you're looking for space, avoid these.

In short, RAID 6 would be your best bet. You can lose up to two drives at any time and still have your data. Just remember, RAID is not a backup, so accidental/malicious deletions (think viruses), file corruption, partition/file-system corruption, etc. will still be your weak points. Granted, you'd have to have a second, similar system to keep full backups, but you could at least keep copies of your most important/favorite things on a 3TB external drive just in case.
October 13, 2011 6:56:33 AM

danraies said:
I honestly don't know how to answer your question. 13x2TB is a lot. RAID5 is a good choice, but finding the best setup for 13 drives is difficult. RAID6 would have a smaller capacity but would allow a fault tolerance of two drives. RAID5 will give you a capacity of 24TB and RAID6 will give you a capacity of 22TB. RAID50 is a strange situation and I'm not sure how well it would work with a prime number of drives.

I would be interested to see your entire configuration if you wouldn't mind posting it.


13x 2TB Seagate Green Drives (Storage) with 2x Evercool HDD cooling boxes
1x 250GB Intel 510 Series SSD (OS)
1x 40GB Intel 320 Series SSD (RAID Cache)
Intel Core i7 2600k CPU
ASUS Maximus IV Gene-Z micro-ATX motherboard (remember, size matters here)
16GB DDR3 1600 RAM
Corsair H80 Water Cooler with fan resistors for ultimate quietness / coolness
PC Power & Cooling 650W PSU
12x external Blu-ray burner connected via eSATA
NZXT GAMMA mid-tower case (a small case that can accommodate 13x HDDs and isn't all lit up; cheap too)

No GPU, as the iGPU provides all my needs, and of course my RAID card and a Hauppauge dual TV tuner. This is connected to a 47 in. 3D HDTV from Vizio, and I currently use wireless headphones and/or the TV speakers; no sound system is needed yet.

I do plan on some overclocking, but I need to get a Kill-A-Watt so I can find the price/performance sweet spot.
October 13, 2011 7:00:45 AM

ammaross said:
If you're looking to go purely for space, RAID 5. HOWEVER, since you have 13 drives in your array that were all likely bought at the same time, probably some even from the same batch, your safer bet would be RAID 6, since hard drives have a rough time during the rebuild process and sometimes an additional drive will fail. These aren't enterprise-level drives either, so bear that in mind.

In short, RAID 6 would be your best bet. You can lose up to two drives at any time and still have your data. Just remember, RAID is not a backup, so accidental/malicious deletions (think viruses), file corruption, partition/file-system corruption, etc. will still be your weak points. Granted, you'd have to have a second, similar system to keep full backups, but you could at least keep copies of your most important/favorite things on a 3TB external drive just in case.

I just might go RAID 6, given your recommendation. I would probably skip the hot-spare feature and instead use a "scratch drive" for my downloads and such, to avoid needless writes and possible malware on the RAID array. I don't think I could bring myself to use the hot-spare feature; that would put 3 drives completely dedicated to redundancy lol. That would leave me with 10 drives without the hot spare, or 9 drives with it. Thanks for your input.
October 13, 2011 12:22:25 PM

I wouldn't consider anything less than RAID 6 with 13x consumer-grade drives. I would highly recommend having a hot spare if you care about the data. That many drives running 24/7 will have failures.

Personally, I think that case is a really bad idea for running 13x drives inside it. It's going to be one hell of a hotbox in there. The design also doesn't really encourage swapping drives when one does die.

Maybe something like this, attached to your main case, would be a better idea: http://www.pc-pitstop.com/sas_cables_enclosures/scsas15...

I realize that adds a lot to your cost, but if you are putting together something with that much storage, you should spend more than $40 on the case, IMO. I just did a quick Google to get to one of those devices, but there are probably lots of options and that one may be a complete POS; I was just linking it as an example.
October 13, 2011 3:06:59 PM

tomatthe said:
I wouldn't consider anything less than RAID 6 with 13x consumer-grade drives. I would highly recommend having a hot spare if you care about the data. That many drives running 24/7 will have failures.

Personally, I think that case is a really bad idea for running 13x drives inside it. It's going to be one hell of a hotbox in there. The design also doesn't really encourage swapping drives when one does die.

Maybe something like this, attached to your main case, would be a better idea: http://www.pc-pitstop.com/sas_cables_enclosures/scsas15...

I realize that adds a lot to your cost, but if you are putting together something with that much storage, you should spend more than $40 on the case, IMO. I just did a quick Google to get to one of those devices, but there are probably lots of options and that one may be a complete POS; I was just linking it as an example.

It doesn't get hot at all. The HDDs remain between 27C and 38C at any given time, thanks to the cooling boxes. I'll admit swapping drives is a bit of a chore, but it is regardless of whether I buy a $1,500 case or not. Plus, that adds a whole other box that I don't want to maintain. Having this "all in one" unit in a mid-tower is extremely awesome. I'll live with the sacrifice of the occasional drive swap.

EDIT: RAID 6 is looking like a better option, but if I have one drive failure on RAID 6, I still have to rebuild the array to bring back the double-parity protection, correct? So then RAID 5 with a hot spare still sounds like a decent option, if that holds true.

Also, a 128k stripe size will best suit my needs, correct?

I also thought there were some tests showing that enterprise drives are no more reliable than consumer drives. It's the firmware that is the primary difference, not the mechanics.

Choices, choices...
October 13, 2011 3:26:09 PM

steelbeast said:
It doesn't get hot at all. The HDDs remain between 27C and 38C at any given time, thanks to the cooling boxes. I'll admit swapping drives is a bit of a chore, but it is regardless of whether I buy a $1,500 case or not. Plus, that adds a whole other box that I don't want to maintain. Having this "all in one" unit in a mid-tower is extremely awesome. I'll live with the sacrifice of the occasional drive swap.

EDIT: RAID 6 is looking like a better option, but if I have one drive failure on RAID 6, I still have to rebuild the array to bring back the double-parity protection, correct? So then RAID 5 with a hot spare still sounds like a decent option, if that holds true.

Also, a 128k stripe size will best suit my needs, correct?

I also thought there were some tests showing that enterprise drives are no more reliable than consumer drives. It's the firmware that is the primary difference, not the mechanics.

Choices, choices...


$1,500 is pretty excessive, I agree; it just seems like a pretty major setup to have in a standard mid-tower case. Normally a device with 26TB of storage would be placed somewhere completely out of the way and probably never even looked at. Your setup sounds a bit different, since you also want to use the same machine as an HTPC.

You would have to repair the array, but the advantage with RAID 6 is that you could lose a drive while it was rebuilding and still have the chance to replace it. RAID 5 systems can definitely drop another drive while rebuilding, particularly when using as many drives as you've got in your set. Rebuilds are very hard work for the drives, which is why it's not that uncommon for another drive to fail during one.

Not sure on stripe size; it's probably fairly easy to Google and find some good comparisons.

I thought the hardware in enterprise drives was actually better suited to running 24/7, and the warranty reflected that. Never researched it, though.
October 13, 2011 3:33:16 PM

tomatthe said:
$1,500 is pretty excessive, I agree; it just seems like a pretty major setup to have in a standard mid-tower case. Normally a device with 26TB of storage would be placed somewhere completely out of the way and probably never even looked at. Your setup sounds a bit different, since you also want to use the same machine as an HTPC.

You would have to repair the array, but the advantage with RAID 6 is that you could lose a drive while it was rebuilding and still have the chance to replace it. RAID 5 systems can definitely drop another drive while rebuilding, particularly when using as many drives as you've got in your set. Rebuilds are very hard work for the drives, which is why it's not that uncommon for another drive to fail during one.

Not sure on stripe size; it's probably fairly easy to Google and find some good comparisons.

I thought the hardware in enterprise drives was actually better suited to running 24/7, and the warranty reflected that. Never researched it, though.

Yeah, by having the RAID array set up on the HTPC, I am saving tons of bandwidth on the home network by keeping at least one HD stream off the network, since it passes straight through HDMI to the HDTV. I have the home wired with Cat6a cabling throughout, by the way. This also eliminates the need for building/buying a separate NAS enclosure, which is more overhead and clogs the network even more. Plus, I used highly efficient PC components on this rig for good power efficiency. I don't know the idle wattage load, as I don't have a Kill-A-Watt, however.

I'll Google stripe sizes to verify the 128k choice.

You make a great argument for RAID 6, and I'm quite certain I'll do it. I know my RAID controller can handle it, no problem lol.
October 13, 2011 3:44:34 PM

With that controller, with that many drives, and shooting for a large array, personally I'd go with one large RAID 5 and dedicate one drive as a hot spare. That way you have redundancy, maximum storage space, and the peace of mind of a hot spare, so if anything happens, the controller can automatically rotate the hot spare in and rebuild the array on the fly. If you want more peace of mind, you can go with RAID 6; you'll just lose one more drive of storage space (RAID 6's capacity is n-2, as opposed to RAID 5's n-1). How critical the data is should be your guide as to which direction you take. I don't see any reason at all for you to go with anything like RAID 50 or 60, though. If you go with the RAID 6 solution with a hot spare (best redundancy), you'd have a redundant unformatted capacity of 20TB. A RAID 5 solution with a hot spare would net you 22TB. For comparison, a RAID 50 setup with a hot spare (two six-drive RAID 5 spans) would net you 20TB with that hardware configuration.
October 13, 2011 3:52:29 PM

mavroxur said:
With that controller, with that many drives, and shooting for a large array, personally I'd go with one large RAID 5 and dedicate one drive as a hot spare. That way you have redundancy, maximum storage space, and the peace of mind of a hot spare, so if anything happens, the controller can automatically rotate the hot spare in and rebuild the array on the fly. If you want more peace of mind, you can go with RAID 6; you'll just lose one more drive of storage space (RAID 6's capacity is n-2, as opposed to RAID 5's n-1). How critical the data is should be your guide as to which direction you take. I don't see any reason at all for you to go with anything like RAID 50 or 60, though. If you go with the RAID 6 solution with a hot spare (best redundancy), you'd have a redundant unformatted capacity of 20TB. A RAID 5 solution with a hot spare would net you 22TB. For comparison, a RAID 50 setup with a hot spare (two six-drive RAID 5 spans) would net you 20TB with that hardware configuration.

Thanks for giving me solid numbers on the available space I would have; that helps a lot. Considering I would be at 20TB with RAID 6 including a hot spare, that will do me just fine. I've been vigilant/lucky in never losing any critical data, but my data storage requirements have exploded in the last few years, and I really like knowing I can have extra redundancy along with a hot spare for automation, leaving me with virtually ZERO headaches in the future. That is what's most important to me. I can lose an additional 2TB for that. I will be upgrading to 4TB drives when they are out and mainstream anyway, so 20TB should do just fine till then.

"The tribe has spoken" RAID 6 it is :D 

Thanks again to the great Tom's Hardware community.
October 13, 2011 3:59:56 PM

Seriously, NOT RAID 5. That level only tolerates one failed HDD, and with 13 HDDs of this size, that is a stupid choice. Although RAID 5, 10, 0+1, and 0 are the most common consumer RAID options, they don't fit your case.

You need a RAID solution that gives you the benefit of not losing your data should more than one HDD fail at the same time. There are more than 10 RAID options available; some already include a "spare" HDD, so should any HDD fail, the array automatically rebuilds using the spare. I suggest you search online for all the RAID levels and study them carefully.
October 13, 2011 4:01:23 PM

Well, one more question: should I use a "scratch drive" for my dual TV tuner and downloads, to reduce the load on the RAID array and vet any viruses? This would put me down to an 18TB array using RAID 6 with a hot spare, yikes. Is it worth it?
October 13, 2011 4:03:49 PM

Considering that you want to stream to multiple TVs simultaneously, as well as a range of other activities, while retaining adequate redundancy, I would definitely recommend RAID 50, but keep a small stripe size due to the number of large files. I once did a full work-up comparison on RAID 1, 10, 5, 5EE, and 50 using stripe sizes of 64k, 256k, and 512k. Granted, I was measuring this against Jetstress for Exchange; however, if I forget the Exchange data and strictly examine the specs for the drive I/Os, my RAID 50 and RAID 5 had nearly identical read latency, the RAID 50 had inconsequentially higher write latency, and RAID 50 handled nearly 300% the read/write throughput of RAID 5. RAID 50 is impressive if you've never tried it; plus you can have multiple simultaneous failures IF they are in the right slots. I was using enterprise-class Savvio drives, and the RAID 50 was nearly equal to a RAID 10 in performance. Plus, all those lights moving for one RAID set are a really awesome show to watch lol.
October 13, 2011 4:07:42 PM

keebs said:
Considering that you want to stream to multiple TVs simultaneously, as well as a range of other activities, while retaining adequate redundancy, I would definitely recommend RAID 50, but keep a small stripe size due to the number of large files. I once did a full work-up comparison on RAID 1, 10, 5, 5EE, and 50 using stripe sizes of 64k, 256k, and 512k. Granted, I was measuring this against Jetstress for Exchange; however, if I forget the Exchange data and strictly examine the specs for the drive I/Os, my RAID 50 and RAID 5 had nearly identical read latency, the RAID 50 had inconsequentially higher write latency, and RAID 50 handled nearly 300% the read/write throughput of RAID 5. RAID 50 is impressive if you've never tried it; plus you can have multiple simultaneous failures IF they are in the right slots. I was using enterprise-class Savvio drives, and the RAID 50 was nearly equal to a RAID 10 in performance. Plus, all those lights moving for one RAID set are a really awesome show to watch lol.

Sounds like quite a test, but it seems that RAID 50 may be overkill for my case, since these loads are staggered through all hours of the day rather than hitting all at once.

Why keep a small stripe size for large files? This is new to me.
October 13, 2011 4:12:22 PM

steelbeast said:
Sounds like quite a test, but it seems that RAID 50 may be overkill for a home setting.

Why keep a small stripe size for large files? This is new to me.



I never heard anyone complain because something was too fast. But it's your config... You do what you want to do... Just passing along my results...
October 13, 2011 4:14:00 PM

keebs said:
I never heard anyone complain because something was too fast. But it's your config... You do what you want to do... Just passing along my results...

I didn't mean to insult you; I just want to know the logic behind these conclusions so that I understand them, is all. It seems the consensus is RAID 6 for my case, but you say RAID 50.

Best solution

October 13, 2011 4:16:51 PM

steelbeast said:
Sounds like quite a test, but it seems that RAID 50 may be overkill for a home setting.

Why keep a small stripe size for large files? This is new to me.





With that controller and with decent 2TB drives, you should see read/write speeds in excess of 90MB/sec easily. That should be sufficient for the I/O demands you will be throwing at it. You would only need RAID 50/60 if you were going to experience heavy disk demands (e.g. manipulating SQL databases, data sets, etc. on a server). As it sits, 90MB/sec is going to come close to saturating a gigabit Ethernet connection anyhow. If you were to go to RAID 50/60, any external connections to the server through the network would be bottlenecked by the Ethernet connection, so you would see absolutely NO difference in backups and file-copy operations over the network.
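
A quick sanity check on that bottleneck claim (the overhead factor below is a rough assumption, not a measurement):

# Gigabit Ethernet vs. array throughput, back of the envelope.
# 90 MB/s is the sustained array figure from above; ~6% protocol
# overhead on the wire is an assumed ballpark.

gbe_line_rate = 1000 / 8              # 1 Gb/s = 125 MB/s raw
gbe_practical = gbe_line_rate * 0.94  # ~118 MB/s after TCP/IP overhead
array_rate = 90                       # MB/s sustained

print(f"GbE practical ceiling: ~{gbe_practical:.0f} MB/s")
print(f"Array sustained rate:  ~{array_rate} MB/s")
# A faster RAID 50/60 array would still sit behind the same
# ~118 MB/s network ceiling, so clients would see no difference.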

And to the poster that said ABSOLUTELY NO RAID 5: I specifically mentioned "how critical is the data" in my reply. RAID 5 still provides fault tolerance. You will only have no redundancy while 1) a drive is dead in the array and 2) the controller is rebuilding the array to the hot spare. If the OP didn't want to use a hot spare, then I'd definitely say "NO" to RAID 5. But with redundancy and a hot spare, you're only looking at a 2-3 hour window where you'll be "with your pants down", and if the data set only contains DVR'ed TV shows and backups of home computers, that might be an acceptable risk. If it isn't, that's why I recommended RAID 6 for additional redundancy.


October 13, 2011 4:21:49 PM

mavroxur said:
With that controller and with decent 2TB drives, you should see read/write speeds in excess of 90MB/sec easily. That should be sufficient for the I/O demands you will be throwing at it. You would only need RAID 50/60 if you were going to experience heavy disk demands (e.g. manipulating SQL databases, data sets, etc. on a server). As it sits, 90MB/sec is going to come close to saturating a gigabit Ethernet connection anyhow. If you were to go to RAID 50/60, any external connections to the server through the network would be bottlenecked by the Ethernet connection, so you would see absolutely NO difference in backups and file-copy operations over the network.

And to the poster that said ABSOLUTELY NO RAID 5: I specifically mentioned "how critical is the data" in my reply. RAID 5 still provides fault tolerance. You will only have no redundancy while 1) a drive is dead in the array and 2) the controller is rebuilding the array to the hot spare. If the OP didn't want to use a hot spare, then I'd definitely say "NO" to RAID 5. But with redundancy and a hot spare, you're only looking at a 2-3 hour window where you'll be "with your pants down", and if the data set only contains DVR'ed TV shows and backups of home computers, that might be an acceptable risk. If it isn't, that's why I recommended RAID 6 for additional redundancy.


This is again exactly the kind of reasoning I'm looking for, thank you. I really want to go with RAID 5, but I don't want to deal with the headache of the real possibility of a drive failing during the rebuild, since all these drives are probably from the same batch and whatnot. RAID 6 with a hot spare for sure. Yes, it is overkill for my data, but it's worth my peace of mind.

EDIT: My last question. Would a scratch drive be recommended (worth it) to take load off the array from my dual tuner (which writes to the array) and downloads, and to vet any possible viruses before transferring files to the array? Or does that defeat the purpose of using RAID 6 anyway? Thanks.
October 13, 2011 4:31:11 PM

steelbeast said:
Well, one more question: should I use a "scratch drive" for my dual TV tuner and downloads, to reduce the load on the RAID array and vet any viruses? This would put me down to an 18TB array using RAID 6 with a hot spare, yikes. Is it worth it?
I don't know about the TV tuners, but a "scratch drive" for viruses sounds a bit weird to me. A virus on a disk isn't a problem in itself; it won't compromise the disk. It's when the virus activates that the problems start, and no matter which drive the virus is on, it can always "decide" to delete/corrupt stuff on other drives. The only things that can protect from virus damage are a good AV and a good backup. And the latter is the best, as it also protects against human error, but backing up 20+TB of data is kind of complicated :p .
October 13, 2011 4:35:40 PM

Zenthar said:
I don't know about the TV tuners, but a "scratch drive" for viruses sounds a bit weird to me. A virus on a disk isn't a problem in itself; it won't compromise the disk. It's when the virus activates that the problems start, and no matter which drive the virus is on, it can always "decide" to delete/corrupt stuff on other drives. The only things that can protect from virus damage are a good AV and a good backup. And the latter is the best, as it also protects against human error, but backing up 20+TB of data is kind of complicated :p .

Yeah, I know that most viruses and such are coded to go to the C: drive, which would be my OS drive, but I wouldn't want to be forced to deal with removing a virus from the array. Plus, with my tuner card, you're potentially looking at a lot of writes to the array. I'm guessing that with the RAID 6 setup, a scratch drive is overkill and another loss of 2TB of space, which is a real compromise given I'll be using RAID 6 with a hot spare. Or can I eliminate the hot spare in favor of the scratch drive? I just can't bring myself to sacrifice more than 3 drives. Going from 2 initially, now to 3 drives (RAID 6 and a hot spare), is my max for this data, which leaves me with 20TB total storage. I already have 8TB I need to migrate over, really leaving me only 12TB for expansion (less, really, once formatted), which should be enough until 4TB or 5TB drives are mainstream.
October 13, 2011 5:04:43 PM

A thought: would you have any objection to having two separate RAID arrays, say two RAID 5s or RAID 6s plus a hot spare (I love controllers that can swap to a hot spare)? You would end up with two separate, smaller volumes. If the content that you stream is not the content that you record, perhaps recording to one and streaming from the other would lower contention from concurrent access? Just an idea that occurred to me reading this; I have never built a RAID with more than four drives. You should see FireWire2's rig, though.
October 13, 2011 5:17:58 PM

WyomingKnott said:
A thought: would you have any objection to having two separate RAID arrays, say two RAID 5s or RAID 6s plus a hot spare (I love controllers that can swap to a hot spare)? You would end up with two separate, smaller volumes. If the content that you stream is not the content that you record, perhaps recording to one and streaming from the other would lower contention from concurrent access? Just an idea that occurred to me reading this; I have never built a RAID with more than four drives. You should see FireWire2's rig, though.

This sounds good, but I guess my logic for using a scratch drive is to avoid needless writes to the array (less wear and tear). Your solution partially addresses my reasoning, but it's not a complete solution. I suppose if I'm hunting for reasons to have a scratch drive and no one can come up with one, then there's no reason to have it.
October 13, 2011 6:11:36 PM

I don't see any point to a scratch drive for your setup. Generally a scratch drive is used for video/photo editing, not plain recording. And as far as mitigating a virus threat, that isn't what a scratch drive does. A scratch drive is just temporary storage space that programs use when editing/modifying files (such as in video/photo editing) and to store save points during the edit process.



@WyomingKnott -

Not a bad idea with the split array. The only negative I could point out is that it would reduce the available space, since you'd have parity disks on each array. Split RAID 5s with a global hot spare would net him 20TB total before formatting; split RAID 6s would give him 16TB. Unless it's totally necessary to split the array, I'd just make one large GPT volume and move on. Just a reminder to the OP: you'll need to be running XP x64 or a newer OS on the server to use a GPT volume; otherwise, you'd have to split it into a dozen 2TB MBR volumes.
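
The 2TB MBR ceiling comes from 32-bit sector addressing; a quick check (assuming the standard 512-byte sectors on these drives):

# Why MBR volumes top out around 2TB: partition sizes are stored
# as 32-bit sector counts, and these drives use 512-byte sectors.

max_mbr_bytes = 512 * 2**32
print(f"{max_mbr_bytes / 1e12:.2f} TB ({max_mbr_bytes / 2**40:.0f} TiB)")
# 2.20 TB (2 TiB); anything larger needs GPT
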
October 13, 2011 7:42:03 PM

mavroxur said:
I don't see any point to a scratch drive for your setup. Generally a scratch drive is used for video/photo editing, not plain recording. And as far as mitigating a virus threat, that isn't what a scratch drive does. A scratch drive is just temporary storage space that programs use when editing/modifying files (such as in video/photo editing) and to store save points during the edit process.

@WyomingKnott -

Not a bad idea with the split array. The only negative I could point out is that it would reduce the available space, since you'd have parity disks on each array. Split RAID 5s with a global hot spare would net him 20TB total before formatting; split RAID 6s would give him 16TB. Unless it's totally necessary to split the array, I'd just make one large GPT volume and move on. Just a reminder to the OP: you'll need to be running XP x64 or a newer OS on the server to use a GPT volume; otherwise, you'd have to split it into a dozen 2TB MBR volumes.

Thanks for confirming my suspicions about a scratch drive; I figured it wouldn't serve any real purpose for me. I will definitely be using Win7 x64 as the OS, since this is an HTPC.
October 13, 2011 7:42:36 PM

Best answer selected by steelbeast.
October 24, 2011 7:10:14 AM

You seem to have an issue wanting storage size yet also wanting peace of mind...

Streaming to multiple HDTVs can be handled by a single drive without worry.

But obviously you want your 'storage' to do the streaming. Or do you?

I ask this because it doesn't take long to transfer files from one HDD to another. I'm sure you could transfer a few HDTV shows, equaling what, 8-25GB, in an hour. Heck, at our gaming store I transfer 10GB of gaming files in about 15 minutes max between networked PCs.

So with that in mind, why not have 2 standalone drives holding a playlist of what you will be streaming for the day? You load up the shows you want, and you now have 2 HDDs that do the streaming work for the day. If these fail, you couldn't care less, because any data on them is replaceable.

So now you're left with 11 drives, for which you choose a RAID 5 or 6 setup. Sounds like you're leaning toward RAID 6. This will leave you with 18TB of storage.

Or just keep one drive as your streaming drive, which you can pre-load with some 100 HD shows. Then use the other 12 drives in a RAID 6; this will allow 20TB of storage space.

This way your RAID is not used for streaming, taking off a serious amount of load; it's used only as storage. And the drive doing all the streaming work is very much replaceable, since it only holds data that is already in your RAID...

Just my 2 cents and ideas.
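
For what it's worth, the single-drive streaming math checks out on paper; a rough sketch (the per-stream bitrate and drive speed below are assumptions, not measured figures):

# How many HD streams can one drive feed? Assumes ~40 Mb/s per
# stream (Blu-ray-class video; broadcast ATSC is closer to 19 Mb/s)
# and ~100 MB/s sequential read from a single modern 2TB drive.

stream_rate = 40 / 8     # 40 Mb/s stream = 5 MB/s
drive_rate = 100         # MB/s, sequential read

print(f"Streams per drive: ~{drive_rate / stream_rate:.0f}")
# ~20 on paper; seeking between files cuts that down in practice,
# but two or three simultaneous streams is comfortable.
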
October 24, 2011 12:16:58 PM

mavroxur said:
With that controller and with decent 2TB drives, you should see read/write speeds in excess of 90MB/sec easily. That should be sufficient for the I/O demands you will be throwing at it. You would only need RAID 50/60 if you were going to experience heavy disk demands (e.g. manipulating SQL databases, data sets, etc. on a server). As it sits, 90MB/sec is going to come close to saturating a gigabit Ethernet connection anyhow. If you were to go to RAID 50/60, any external connections to the server through the network would be bottlenecked by the Ethernet connection, so you would see absolutely NO difference in backups and file-copy operations over the network.

And to the poster that said ABSOLUTELY NO RAID 5: I specifically mentioned "how critical is the data" in my reply. RAID 5 still provides fault tolerance. You will only have no redundancy while 1) a drive is dead in the array and 2) the controller is rebuilding the array to the hot spare. If the OP didn't want to use a hot spare, then I'd definitely say "NO" to RAID 5. But with redundancy and a hot spare, you're only looking at a 2-3 hour window where you'll be "with your pants down", and if the data set only contains DVR'ed TV shows and backups of home computers, that might be an acceptable risk. If it isn't, that's why I recommended RAID 6 for additional redundancy.



Just to add: a 2-3 hour rebuild time for a failed drive in a 26TB RAID 5 array is not a very good estimate, IMO. I don't know that there is a good chart estimating rebuild times, but if I were guessing the rebuild time on a 26TB RAID 5 array, I would say 24 hours minimum (online rebuild). Maybe some others can post a few examples of rebuild times.
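
For a rough bound: a rebuild has to rewrite one full drive, so the time scales with drive size over the effective rebuild rate (the rates below are illustrative assumptions, not benchmarks):

# Rebuild-time estimate: one 2TB drive rewritten end to end.
# Effective rate depends on controller settings and concurrent load.

drive_bytes = 2e12
for rate_mb_s in (100, 50, 20):   # idle / moderate load / busy array
    hours = drive_bytes / (rate_mb_s * 1e6) / 3600
    print(f"{rate_mb_s:>3} MB/s -> ~{hours:.0f} h")
# 100 MB/s -> ~6 h, 50 MB/s -> ~11 h, 20 MB/s -> ~28 h
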
October 24, 2011 1:17:17 PM

I ended up going with RAID 5 with a hot spare instead of RAID 6. The reason is that I would've only had 18TB with RAID 6, but RAID 5 with a hot spare gives me 20TB, because of the firmware built into the drives. The stuff I have on there is mostly media, and I need the extra space over the extra redundancy. Apparently I underestimated the space I would lose.

EDIT: The SSD caching is awesome! I can read/write as fast as possible.
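
(The "missing" space is most likely decimal-vs-binary reporting rather than drive firmware; a quick sketch of how labeled terabytes show up in Windows:)

# Drives are labeled in decimal TB (10**12 bytes); Windows reports
# binary TiB (2**40 bytes) and calls them "TB". That gap matches
# the 22 -> 20 and 20 -> 18 figures above.

def as_reported(labeled_tb):
    return labeled_tb * 1e12 / 2**40

print(f"RAID 5 + spare: 22 TB raw -> {as_reported(22):.1f} TB reported")
print(f"RAID 6 + spare: 20 TB raw -> {as_reported(20):.1f} TB reported")
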
October 24, 2011 1:48:08 PM

steelbeast said:
I ended up going with RAID 5 with a hot spare instead of RAID 6. The reason is that I would've only had 18TB with RAID 6, but RAID 5 with a hot spare gives me 20TB, because of the firmware built into the drives. The stuff I have on there is mostly media, and I need the extra space over the extra redundancy. Apparently I underestimated the space I would lose.

EDIT: The SSD caching is awesome! I can read/write as fast as possible.




Glad you're happy with the results. And to the poster that said 24 hours to rebuild a failed drive: I'm not sure where that estimate came from, but it seems a little long for a good controller and good drives. Maybe for an array that's under constant load during a rebuild, but for his situation, I don't see it taking that long.