
What specs should I look for in a NAS for my use case?

Tags:
  • NAS / RAID
  • PCs
  • Storage
  • Product
April 5, 2012 11:53:31 AM

Hi,

I have a network with around 20 PCs connected to each other via 1 Gb Ethernet.

My plan was to use one of the PCs to host a network share containing mostly uncompressed 1080p video; the other PCs on the network would read from it, do some rendering, and then write back to the same place. After some discussion on forums, we realized there would be a bottleneck on the PC hosting the share. One proposed solution was a RAID 0 array of 4x 15K rpm drives to improve performance. Some then said SSDs would be the best option for speed, while others disagreed, giving real-life examples of SSDs not performing that well, or even dying after heavy use in a server. Finally, I have seen dedicated NAS devices that connect to a switch over 2 or more GigE ports, which seemed a better option to me.

So, given the options above, which is the best way for me to go (leaning toward performance rather than data safety)? If you can think of alternative methods, they are more than welcome.
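For context, here is a rough back-of-the-envelope sketch of why a single GigE link is the concern here (assuming 24-bit RGB frames at 24 fps; actual capture formats vary, so treat the numbers as illustrative):

```python
# Rough bandwidth estimate for one uncompressed 1080p stream
# (assumes 24-bit RGB at 24 fps; real formats vary).
width, height = 1920, 1080
bytes_per_pixel = 3            # 8-bit R, G, B
fps = 24

stream_Bps = width * height * bytes_per_pixel * fps
stream_MiBps = stream_Bps / 2**20

gige_MiBps = 1_000_000_000 / 8 / 2**20   # ~119 MiB/s theoretical max

print(f"one stream: {stream_MiBps:.1f} MiB/s, GigE link: {gige_MiBps:.1f} MiB/s")
# A single full-rate uncompressed stream already exceeds one GigE port.
```

Under these assumptions one stream needs roughly 142 MiB/s, which is more than a single GigE port can carry even in theory.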

Thanks for your help in advance.


April 5, 2012 12:43:38 PM

Has anyone suggested setting up a RAID array with an SSD cache in front of it? You could set up 2 or 4 2 TB HDDs in RAID 0 and have that array cached by something like the OCZ Synapse. This setup is considerably less costly than buying 15K rpm drives. In fact, you could probably use 5900 rpm drives and, with the OCZ Synapse cache, get performance that rips the doors off the 15K drive array.

BTW, I would set this NAS up as a server. You would probably need to install Windows Server 2008 if this is a Windows-based network.
April 5, 2012 1:06:03 PM

chesteracorgi said:
Has anyone suggested setting up a RAID array with an SSD cache in front of it? You could set up 2 or 4 2 TB HDDs in RAID 0 and have that array cached by something like the OCZ Synapse. This setup is considerably less costly than buying 15K rpm drives. In fact, you could probably use 5900 rpm drives and, with the OCZ Synapse cache, get performance that rips the doors off the 15K drive array.

BTW, I would set this NAS up as a server. You would probably need to install Windows Server 2008 if this is a Windows-based network.


SSD caching the RAID array??? I really don't know what that means....

April 5, 2012 1:16:25 PM

First you set up the HDDs in a RAID array. After the array is set up, you install an OCZ Synapse drive and configure it to cache the RAID array (the software walks you through the process). The Dataplex software reads the most commonly used data from the RAID storage ahead of time into the SSD cache and serves it at SSD (not HDD) speed (usually 5x+ the speed of HDD access).
April 5, 2012 1:56:59 PM

chesteracorgi said:
First you set up the HDDs in a RAID array. After the array is set up, you install an OCZ Synapse drive and configure it to cache the RAID array (the software walks you through the process). The Dataplex software reads the most commonly used data from the RAID storage ahead of time into the SSD cache and serves it at SSD (not HDD) speed (usually 5x+ the speed of HDD access).


Sorry if I misunderstood you, but my only concern is how I can get the maximum read speed for 20 PCs through a single GigE port on that computer. I'm not particularly interested in OS speed at all.
April 5, 2012 2:06:58 PM

If your data is valuable or you plan on keeping it for any length of time, I'd avoid doing RAID 0. RAID 0 is not redundant (the R in RAID). If any single drive in your array fails, you'll lose all the data in the entire array. If you need redundancy, you'll lose some of your available disk space, but be able to better recover from drive failure. Wikipedia has a decent article on RAID which should help you determine what you need. You may also want to look into getting a dedicated RAID controller.

SSD is probably overkill for a NAS. Gigabit Ethernet is about 119 megabytes per second. I believe a RAID 5 or 10 should be able to keep up with that. For the CPU, I'd probably go with something like a Sandy Bridge Pentium dual-core. Lastly, I don't think you'll need a server OS. You're basically just making a dedicated network share, and it doesn't need a lot of functionality.
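To put that 119 MB/s figure in perspective, here is a minimal sketch of the arithmetic (raw line rate only; Ethernet/IP/TCP overhead would lower the real number further):

```python
# Theoretical throughput of one Gigabit Ethernet link, ignoring
# protocol overhead (real-world numbers are lower).
link_bps = 1_000_000_000            # 1 Gb/s line rate
link_MiBps = link_bps / 8 / 2**20   # bits -> bytes -> MiB

clients = 20
per_client_MiBps = link_MiBps / clients

print(f"link: {link_MiBps:.1f} MiB/s, per client if all {clients} "
      f"read at once: {per_client_MiBps:.1f} MiB/s")
```

With all 20 PCs reading simultaneously through one port, each would get well under 6 MiB/s, which is why the single-link design is the bottleneck regardless of the drives behind it.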
April 5, 2012 2:38:41 PM

RAID 0 = bad idea.
Stick with RAID 1, 5, or 10 so that if a drive dies you are not up a creek. And with that many users constantly hammering the drives a failure should be expected.

1000BASE-T Ethernet can only manage a theoretical max of about 120 MB/s, more than enough to stream HD movies to 2-3 PCs/devices on the network, but push much beyond that and you will run into issues (unless you use some form of broadcast technology that lets you send the same thing to all users at the same time... but I doubt that would work for your use).

The SSD caching is a great idea, but it will only boost the most-used files. So you would get mind-blowing performance for the most popular files that fit on the SSD, but everything else will be slow/normal. Also, depending on how many HDDs you intend to use, the bottleneck will be at the network end, not the HDD end. Obviously the HDD throughput is still a concern, and I would suggest a minimum of 5 HDDs in RAID 5 (4 drives of space + parity, and more drives would be better). They do not need to be super huge, just big enough to fit everything. The idea is that you want multiple drives so you get more IOPS and faster throughput with so many users on the network.
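The "4 drives of space + parity" capacity math can be sketched like this (drive sizes are example values, not a recommendation):

```python
# Usable capacity of a RAID 5 array: one drive's worth of space
# goes to (distributed) parity, the rest is usable.
def raid5_usable_tb(num_drives: int, drive_tb: float) -> float:
    assert num_drives >= 3, "RAID 5 needs at least 3 drives"
    return (num_drives - 1) * drive_tb

print(raid5_usable_tb(5, 2.0))  # 5x 2 TB drives -> 8.0 TB usable
```

The array also survives one drive failure, which matters with 20 users hammering it all day.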

What you should do is simply invest in more Ethernet cards for the server. They are not expensive, and (depending on the OS used) the server will route traffic over whichever links are least loaded, which will give you the throughput you need for 20 concurrent users.
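As a rough sizing sketch for the multiple-NIC idea (the 15 MiB/s per-client target is an illustrative assumption, and it assumes traffic balances evenly across links):

```python
import math

# How many GigE links are needed so each client gets a target share
# of bandwidth, assuming traffic spreads evenly across the links.
def nics_needed(clients: int, per_client_MiBps: float,
                link_MiBps: float = 119.0) -> int:
    total_MiBps = clients * per_client_MiBps
    return math.ceil(total_MiBps / link_MiBps)

print(nics_needed(20, 15.0))  # 300 MiB/s aggregate -> 3 GigE links
```

In practice even load balancing depends on the OS and switch (e.g. link aggregation support), so treat this as a lower bound.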

Lastly, SSDs are just as reliable as HDDs these days. The cheap ones like the OCZ products are fine for home use but would die young in this type of application. However, the M4 and Intel drives would be just as reliable (if not more so) than HDDs, and offer insanely higher throughput, much cooler temperatures, quieter operation, and much lower power usage. They are quite a bit more expensive per GB, but if you have the money it is really the only way to go. Also, you can purchase far fewer of them (2-4) to provide the bandwidth needed for 20 users, compared to needing 8+ 15K drives for the same workload (of course, if you need terabytes of storage then 15K drives are the cheaper option).

At any rate, go look at some forums on websites specifically dedicated to this kind of stuff and get an education. Network throughput and design is much more interesting/complicated than a simple computer, and a single wrong part can be the difference between a network that works well, and a network that absolutely sucks.
April 5, 2012 4:09:17 PM

CaedenV said:
RAID 0 = bad idea.
Stick with RAID 1, 5, or 10 so that if a drive dies you are not up a creek. And with that many users constantly hammering the drives a failure should be expected.

1000BASE-T Ethernet can only manage a theoretical max of about 120 MB/s, more than enough to stream HD movies to 2-3 PCs/devices on the network, but push much beyond that and you will run into issues (unless you use some form of broadcast technology that lets you send the same thing to all users at the same time... but I doubt that would work for your use).

The SSD caching is a great idea, but it will only boost the most-used files. So you would get mind-blowing performance for the most popular files that fit on the SSD, but everything else will be slow/normal. Also, depending on how many HDDs you intend to use, the bottleneck will be at the network end, not the HDD end. Obviously the HDD throughput is still a concern, and I would suggest a minimum of 5 HDDs in RAID 5 (4 drives of space + parity, and more drives would be better). They do not need to be super huge, just big enough to fit everything. The idea is that you want multiple drives so you get more IOPS and faster throughput with so many users on the network.

What you should do is simply invest in more Ethernet cards for the server. They are not expensive, and (depending on the OS used) the server will route traffic over whichever links are least loaded, which will give you the throughput you need for 20 concurrent users.

Lastly, SSDs are just as reliable as HDDs these days. The cheap ones like the OCZ products are fine for home use but would die young in this type of application. However, the M4 and Intel drives would be just as reliable (if not more so) than HDDs, and offer insanely higher throughput, much cooler temperatures, quieter operation, and much lower power usage. They are quite a bit more expensive per GB, but if you have the money it is really the only way to go. Also, you can purchase far fewer of them (2-4) to provide the bandwidth needed for 20 users, compared to needing 8+ 15K drives for the same workload (of course, if you need terabytes of storage then 15K drives are the cheaper option).

At any rate, go look at some forums on websites specifically dedicated to this kind of stuff and get an education. Network throughput and design is much more interesting/complicated than a simple computer, and a single wrong part can be the difference between a network that works well, and a network that absolutely sucks.




I have been through so many forums, and after all that thinking I have finally settled on what seems best for my case:

RAID 5 with 4-5x Intel SSDs (http://www.newegg.com/Product/Product.aspx?Item=N82E168...)
Multiple NICs, from Intel again...

It all seems to be the best option for the price/speed ratio...
April 5, 2012 4:41:47 PM

uzuncakmak said:
I have been through so many forums, and after all that thinking I have finally settled on what seems best for my case:

RAID 5 with 4-5x Intel SSDs (http://www.newegg.com/Product/Product.aspx?Item=N82E168...)
Multiple NICs, from Intel again...

It all seems to be the best option for the price/speed ratio...


It depends on how you define the price/speed ratio. Enterprises use these kinds of setups for their databases because they rely on high IOPS. But they also have the servers and networks to make use of it. They don't use SSD RAID arrays for file shares and storage because it's not needed and it's very expensive.

Save your money. Get some large, affordable 7200 rpm HDDs and a decent RAID controller, and put them in RAID 5 (or 10). That will be PLENTY to stream video to multiple devices and will still leave the bottleneck at your network. If you need more bandwidth, you'll probably have to upgrade your switch, your NAS, and the link between them.