philipbowser said:
Hey popatim,
Thanks for the response. I'm definitely trying to have the projects processed on the users' machines. We have a couple of Mac Pros and a bunch of fast PCs I've built that are very capable. I just need fast central storage to hold all of the footage and project files they work on. I'm assuming that once a project file is opened on a user's workstation, all of the processing and actual "workings" in the program happen on that user's machine, correct? There's no reason any of that "thinking" would happen on the server PC; it would just handle the reads/writes to the hard drive... unless I'm very, very wrong and confused, ha.
It seems like the fast fibre servers that exist cost tens of thousands of dollars (for very good reasons)... but they are also built to support huge infrastructures. I'm hoping there's a solution for just 3-4 machines trying to access fast video files for a lot less cash!
From the way you were talking in the original post, I thought you were going to have the server do all the heavy lifting, with the end users basically as dumb terminals. But this is very different and more than possible, as all you need is a simple NAS for file storage.
I would suggest looking into 2 routes, and I will also explain what I am attempting to do for my own use down the road.
Route 1: NAS PC with multiple NICs
Build (or re-purpose) a PC that has a decent onboard RAID controller and space for ~5 HDDs (Core 2 Quad/i3 or better). Using RAID 1, 10, or 5 (RAID 0 will only ever end in heartache if trusted with data), fill it up with drives.
RAID Configurations:
-RAID 1 will take two 2TB drives in a mirror (cheapest but slowest).
-RAID 10 will take four 1TB drives in a mirrored stripe (it can survive up to 2 drive failures, as long as they are in different mirror pairs, plus the striping across multiple drives helps performance, especially random IOPS).
-RAID 5 will take five 500GB drives, or three 2TB drives (only one drive of fault tolerance, but the best performance).
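The usable-space math behind those three configurations is simple enough to sketch out. This is just rough arithmetic for the standard RAID levels, not anything specific to a particular controller:

```python
# Rough usable-capacity math for the RAID levels above -- a sketch,
# not a substitute for your RAID controller's documentation.

def usable_capacity(level: str, drives: int, size_tb: float) -> float:
    """Return usable space in TB for identical drives at a given RAID level."""
    if level == "raid1":           # mirror: capacity of a single drive
        return size_tb
    if level == "raid10":          # mirrored stripe: half the drives hold copies
        return (drives // 2) * size_tb
    if level == "raid5":           # one drive's worth of space lost to parity
        return (drives - 1) * size_tb
    raise ValueError(f"unknown RAID level: {level}")

# The three configurations from the list above all land on ~2TB usable:
print(usable_capacity("raid1", 2, 2.0))    # two 2TB drives mirrored
print(usable_capacity("raid10", 4, 1.0))   # four 1TB drives
print(usable_capacity("raid5", 5, 0.5))    # five 500GB drives
print(usable_capacity("raid5", 3, 2.0))    # three 2TB drives -> 4TB usable
```

Note the three-drive 2TB RAID 5 option actually nets you 4TB usable, so it is the roomiest of the bunch for the money.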
Notes on Performance:
Make sure all drives are identical, and be sure to purchase a spare drive or two before that model is no longer available for sale, so you can replace a failed drive easily. Bigger drives have better sequential throughput, while a multitude of smaller drives has better IOPS, which matters more in a multi-user setup.
While large 5900rpm drives may be good enough for a single user (and are nice and quiet), you seriously want to consider 7200rpm or 10K drives for a multi-user setup (but not 10K or 15K SAS drives, as they use a different connector and will not work right in a non-server rig). 5400rpm drives are simply crap.
Overbuild your storage space on HDDs; mechanical drives lose performance rather dramatically once they hit ~60% full, so try to use only 50% of the space and keep it well defragmented, or short-stroke the drives to force all data to the fast outer edge of the platters. Then again, the way prices are falling, you could almost afford 2TB of SSDs...
If re-purposing an old high end system:
-Add a dual gigabit NIC.
-Max out the RAM the mobo will take so that you have a minimum of 8GB.
-Cram in as many, and as large, SSDs as you can afford. Again, I would avoid RAID 0, but if you are religious about your backups then it becomes a possibility: four 512GB M4 drives in RAID 0 will give some truly impressive performance.
If building a new system I would do this:
-For the processor I would get a high-end i3 or low-end i5 (Ivy Bridge if available) if buying new, but a Core 2 Quad will do just fine if recycling an older machine. Remember, the bottleneck is the HDDs and the Ethernet. Even a Core 2 Duo likely has enough horsepower to serve up ~200MB/s of raw data, so don't sink a lot of money into the CPU unless it is going to be doing real work.
-For RAM, I have always heard it said that you should have at least 2GB per CPU core on a server. As the bottleneck is the Ethernet port, it really does not matter what speed of RAM you are running, so long as it is DDR2 or better and you have ~8GB of it (though more is always welcome).
-Load up with a RAID 5 setup like the one described above; the more individual drives, the better the multi-user performance.
-Ethernet is annoyingly slow. Gigabit Ethernet has a theoretical maximum throughput of 125MB/s, and a practical throughput of ~80-100MB/s. RAID 10 and 5 should have no problem pumping out this kind of throughput even on 7200rpm drives, but what happens when you have 2 people working on entirely different projects, both demanding data? While 100MB/s is 'OK' for a single user, 50MB/s is simply not acceptable for busy workloads. I would suggest getting a mobo that has two Intel gigabit ports on it (200MB/s total, but a max of 100MB/s per user/port), or else purchasing a separate card with 2 high-quality ports and using them instead of (or in addition to) the onboard port. Make sure that your switch supports this kind of operation; most do these days, but you still want to be sure.
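To make the Ethernet numbers above concrete, here is the unit conversion spelled out. The efficiency figure is a rough assumption to account for frame/TCP/file-sharing overhead, not a measurement:

```python
# Back-of-the-envelope Ethernet throughput -- illustrative numbers only.

def max_mb_per_s(link_gbps: float, efficiency: float = 1.0) -> float:
    """Convert a link speed in gigabits/s to megabytes/s (1 Gb = 1000 Mb)."""
    return link_gbps * 1000 / 8 * efficiency

# Raw gigabit is 125 MB/s on the wire...
print(round(max_mb_per_s(1.0)))            # 125
# ...but protocol overhead eats into that; ~0.65-0.85 efficiency is a
# reasonable assumption for consumer gear, matching the 80-100MB/s figure.
print(round(max_mb_per_s(1.0, 0.8)))       # 100
# Two users hammering the same single gigabit port:
print(round(max_mb_per_s(1.0, 0.8) / 2))   # 50
```

That last line is exactly why the second NIC port matters: two ports keep each busy user at ~100MB/s instead of splitting one port down to ~50MB/s each.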
Route 2: Find used server hardware (this is what I am attempting)
Find an older rack-mountable server (preferably 4U, as 1U servers are really, really loud) for cheap and fill it up with drives and fiber Ethernet. You should have no problem finding a Core 2 Quad-era Xeon server with fiber Ethernet (4, 8, or 10 gigabit) for ~$300-600, then load it up with 2TB of new SAS HDDs for ~$1000.
Then find a cheap switch that has a fiber connection on it. You are still going to be limited to Gigabit Ethernet for each end user, but with the better hardware behind it you will get more throughput (100-110MB/s instead of the consumer-level 80-100MB/s), and you can have a lot more people accessing the server at the same time without things slowing down (1 user per gigabit, so 4, 8, or 10 concurrent users... though I think at the 8-10 level we would be back to the HDD bottleneck again).
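The "1 user per gigabit" rule of thumb above is just the uplink speed divided by each client's link speed, assuming every client can saturate its own gigabit port at once:

```python
# Rough concurrency math for a fiber uplink feeding gigabit clients --
# a sketch assuming each client fully saturates its own 1Gb link.

def full_speed_users(uplink_gbps: float, per_user_gbps: float = 1.0) -> int:
    """How many clients can run at full rate before the uplink saturates."""
    return int(uplink_gbps // per_user_gbps)

for uplink in (4, 8, 10):
    print(f"{uplink}Gb uplink -> {full_speed_users(uplink)} full-speed users")
```

In practice the disks become the ceiling before the 8Gb or 10Gb uplink does, which is why more spindles (or SSDs) matter as the user count climbs.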
The problem with this route is that the hardware is still a bit hard to find on eBay, and it is a waiting game of finding the right parts at the right price and snapping them up before anyone else does.
What I am working on at home is moving all of my drives out of my system and into another area of the house entirely. I do a bit of AV work and hate using headphones, so it is hard to work with a lot of background noise. I have already gotten the case and CPU nearly silent; next will be the GPU, and then moving the HDDs out and using an SSD locally (just picked up the SSD 2 weeks ago! It is AWESOME! Finally my i7 is unleashed!).
I found a dual quad-core server for free (woot!), and picked up 8GB of RAM for it for cheap (already installed and tested). I am now in the process of finding the fiber adapter/cables/switch for the network and saving up for newer/faster HDDs. I am hoping to find a fiber card that will work in my system as well, and a switch that has 2 fiber ports on it, so that I can have a full dedicated 4Gb/s (400MB/s) of throughput between myself and the server, while the rest of the house stays on Gigabit Ethernet, or wireless AC when it becomes available/relatively cheap (then no more running Ethernet through the HVAC duct-work... it works, but it's definitely not up to code).
When it is all said and done I should have massive throughput for HD video editing, while the only noise coming from the system is my 120mm 800rpm fans. :)
The only problem is that it takes forever (I've been looking at doing this for 6 months already), as it is just a matter of getting lucky with parts and saving money for drives (and I have other things to save money for... damn house! lol). But so far it has been a lot of fun, and a great learning experience, as I am rather new to the server side of things.
Also, keep in mind that 10 gigabit Ethernet should make its way down to 'enthusiast' (i.e. gamer) equipment within the next 2-3 years, which will be much cheaper than the professional equipment available today. At that point you could go all-SSD (OCZ R5 PCIe3 4x cards are coming out shortly, which can hold up to 12TB) and have 1GB/s of throughput for each and every user... *drool*