Building a NAS: differences between software RAID, onboard RAID controllers, PCIe RAID controllers (x16), SAS and SCSI?

MDF

Reputable
Sep 3, 2014
It has been a while since I last built a system, so I'm somewhat out of date. As an indication, I was very surprised to see that the ribbon connectors have been replaced...

Now I'm trying to build a small home server (NAS: file, email, media, FTP and an SQL database).

With this server I believe a few points are most important:
1. About 9 clients will need direct and simultaneous access (with the least possible delay).
2. These clients will read and write large files (roughly 100 GB to 2 TB per file) to and from the NAS over a gigabit network.
3. There will be heavy SQL database analyses on the data stored on the NAS.
4. There should be around 16 to 20 TB of storage.
5. The read/write speed from the NAS should be between 500 MB/s and 1 GB/s (see the rough arithmetic below).
6. There should be some form of redundancy.
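
One thing I can already work out for point 5: a single gigabit link tops out around 125 MB/s in theory, so that target can only be met with some form of aggregation or a faster network. A minimal sketch of that arithmetic (theoretical maxima; real throughput will be lower):

```python
# Back-of-the-envelope link math (theoretical maxima; real-world
# throughput is lower due to protocol overhead).
GBE_LINK_MB_S = 1000 / 8          # one gigabit port ~= 125 MB/s

for ports in (1, 2, 4):
    aggregate = ports * GBE_LINK_MB_S
    print(f"{ports} x GbE -> {aggregate:.0f} MB/s aggregate")

# 1 x GbE -> 125 MB/s: far below the 500 MB/s floor.
# 4 x GbE -> 500 MB/s: reaches the floor only when the load is spread
# across links; a single client still sees at most 125 MB/s.
```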

Sadly my knowledge seems to be very outdated. In the past you would have used a SCSI controller so that multiple disks could work in unison: the data was divided across the disks in an array, which increased speed and stability and added some redundancy. I assume this hasn't changed that much.

But nowadays everyone on the internet seems to be talking about RAID controllers. Do these RAID controllers do the same thing as the old SCSI controllers, and what's the difference between a SCSI controller, a SAS controller and a RAID controller?

Furthermore, I noticed that there are a lot of options with these RAID systems: add-in controllers (PCI or PCIe x4/x8/x16), software RAID and even onboard RAID. Is a dedicated controller still necessary, or are the software-based and onboard systems strong enough?

So what kind of hardware should I be looking for? I have already looked around on the internet, but there seems to be a lot of debate.

But I came up with a rough list:
- a motherboard with an embedded 8-core Atom CPU (C2750) and preferably two PCIe x16 slots (one for the RAID controller and one for the network adapter listed below). Unfortunately I can't find one;
- presumably 4 GB or 8 GB of RAM;
- a good RAID controller (PCIe x16) with approximately six to eight ports (6 Gb/s; I couldn't find anything faster);
- a four-port gigabit network adapter (presumably PCIe x16);
- four to eight drives of 3 or 4 TB;
- silent fans and a quiet power supply;
- a small case.

Regarding the disks, there appears to be some discussion about whether you should choose Seagate NAS drives or Western Digital Reds. What is that discussion about? Is it just the brand?

And what kind of read/write speeds can be achieved with such a system under at least four constant connections (hence the four gigabit ports, intended for load-balancing link aggregation), with a large amount of data moving (say a first computer streaming a movie, a second listening to music, a third backing up files to the NAS, and a fourth running an analysis on the SQL database)?
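
To put some rough numbers on that scenario (the per-workload rates below are my own ballpark assumptions, not measurements):

```python
# Rough sketch of the mixed load described above. Per-workload rates
# are ballpark assumptions, not measurements.
workloads_mb_s = {
    "movie stream (HD)": 5.0,    # assuming ~40 Mb/s video
    "music stream":      0.04,   # assuming ~320 kb/s audio
    "file backup":       125.0,  # one GbE link fully saturated
    "SQL analysis":      125.0,  # worst case: another full link
}
total = sum(workloads_mb_s.values())
print(f"combined demand: ~{total:.0f} MB/s")

# ~255 MB/s combined: the two streaming clients barely register, so
# the backup, the SQL load and the disks are what need sizing.
```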

If anyone can help me with my questions (for example the motherboard or the disks, or general notes about possible bottlenecks / overkill that I do not see), that would be very helpful.
 
Solution

viewtyjoe

Reputable
Jul 28, 2014
Generally, unless you are using very large arrays of disks or arrays of arrays (RAID 10/100 come to mind), most onboard RAID controllers should be adequate to serve 500-1000 MB/s reads/writes under load, especially given that SATA3 maxes out at 6 Gb/s (roughly 600 MB/s) per drive. I would not recommend a software RAID setup for your desired specifications.

Without getting into nonstandard RAID setups, the best performance/redundancy trade-off is RAID 5, which uses block-level striping with distributed parity. Should a single drive fail, you can swap in a clean drive and the array will rebuild from the parity data spread across the other drives. Read and write performance optimally works out to (n-1) times single-drive speed, where n is the number of drives in the array.
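
As a rough illustration of that (n-1) rule against the capacity and speed targets above (the 150 MB/s per-drive figure is only an assumed sequential speed for a 7200 RPM NAS drive, not a spec):

```python
# RAID 5 sizing sketch. The per-drive sequential speed is an
# assumption (~150 MB/s for a typical 7200 RPM NAS disk).
DRIVE_SPEED_MB_S = 150
DRIVE_SIZE_TB = 4

for n in (4, 6, 8):                      # drives in the array
    usable_tb = (n - 1) * DRIVE_SIZE_TB  # one drive's worth goes to parity
    best_case = (n - 1) * DRIVE_SPEED_MB_S
    print(f"{n} x {DRIVE_SIZE_TB} TB: ~{usable_tb} TB usable, "
          f"up to ~{best_case} MB/s sequential")

# 6 x 4 TB -> ~20 TB usable and ~750 MB/s best case, which fits both
# the 16-20 TB capacity target and the 500+ MB/s speed goal.
```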

As for drive selection, Seagate and WD are both highly regarded, and their NAS drives are generally binned/built to be extremely tolerant of the long-term heavy loading you would see in a NAS. The discussion is likely about performance and endurance, or just general brand arguments.

Unfortunately, most of your needs straddle the fuzzy line between what's easily available to consumers and professional setups, and I lack experience building a NAS, so I can't really offer much in the way of concrete component suggestions.
 

FireWire2

Distinguished
For starters you can refer to this thread, just for info:
http://www.tomshardware.com/forum/265641-32-40tb-server-performance-issue

For a NAS/iSCSI server under 20 TB, I would highly recommend FreeNAS with its ZFS, or Windows 8.x's ReFS. But that comes with an electricity bill: it needs a decent CPU and a lot of RAM (more energy).
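
How much RAM is "a lot" depends on the pool size; a common community guideline for FreeNAS/ZFS (a rule of thumb, not a hard requirement) is about 8 GB as a base plus roughly 1 GB per TB of storage:

```python
# ZFS RAM rule-of-thumb sketch. "8 GB base + 1 GB per TB" is a common
# FreeNAS community guideline, not a hard requirement.
BASE_GB = 8
GB_PER_TB = 1

for pool_tb in (16, 20):
    print(f"{pool_tb} TB pool: ~{BASE_GB + GB_PER_TB * pool_tb} GB RAM suggested")

# 16-20 TB -> roughly 24-28 GB of RAM, which is why this route costs
# more in hardware and power than a 4-8 GB Atom build.
```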

Or you can build a low-power NAS (a green NAS) with ANY mini-ITX Atom board and a hardware RAID card, for example:
http://www.amazon.com/Port-Multiplier-SATA-hardware-controller/dp/B004JPHAF0
 