Multiple User Media Server Build

March 11, 2012 3:05:16 AM

So! After getting off the high of building my first PC and render farm, I figured I'd take the challenge to a different level: building a small server for graphics and 3D work. First off, I don't even know if this is possible, but in my research I have a tingly spidey-sense feeling that it could be.

Here's what I'm trying to accomplish: a 2TB (minimum) hard drive that the three of us in the studio can all mount on our desktops and work/render off of. This drive only needs to store active projects, not archived projects, and it doesn't need to be terribly "secure" in a sense, as we already do manual backups frequently. I just need something fast that all three of us can work off of at the same time: mostly After Effects compositing with HD and 2K files, and 3D work in Cinema 4D.

My instinct is to go with fast hard drives set up in RAID 0 on a PC that we are all connected to, with the drives mapped to our individual systems under the same path names. Would this work? Or am I insane for thinking we'd all be able to access these drives as fast as we'd need to?

Has anyone done something like this before? I have little experience with servers and it'd be great if I could find a solution that doesn't break the bank. I've got about $2,000-ish to spend on something.
March 11, 2012 3:18:53 AM

VMware. Something like doing projects on the system you are talking about and sending the files to the networked systems to render off of.
March 11, 2012 3:53:14 AM

Agreed. Short of VMware, you will be hard pressed to find things that work well for multiple people taking advantage of the hardware (remember, the HDD is the easy part; the CPU and GPU do the heavy lifting, and they generally do not share well). Besides, Ethernet is too slow to work off of with such large files, especially for multiple users.
March 14, 2012 7:04:26 PM

Aye! Thanks for the responses, folks. I had hoped I would get a response involving something I had the slightest clue about, but I have never even heard of VMware, ha!
I'm going to have to look into that.

There would be no other way short of an expensive fibre server eh?
March 14, 2012 9:29:16 PM

If you are talking about that server doing the graphics and 3D work, then I don't think you have enough money allocated, though I do think it's possible.

If you are talking about having the server just send the projects, which get processed on the users' machines, then yes, that's possible within your budget.
March 14, 2012 9:57:29 PM

Hey popatim,

Thanks for the response. I'm definitely trying to have the projects processed on the users' machines. We have a couple of Mac Pros and a bunch of fast PCs I've built that are very capable. I just need fast central storage to keep all of the footage and project files they can work on. I'm assuming that once a project file is opened on a user's workstation, all of the processing and actual "workings" in the program happen on the user's machine, correct? There's no reason why any of that "thinking" would happen on the server PC; it would just process the reads/writes to the hard drive... unless I'm very, very wrong and confused, ha.

It seems like the fast fibre servers that exist cost tens of thousands of dollars (for a very good reason)... but they are also built to support huge infrastructures. I'm hoping there's a solution for just 3-4 machines trying to access fast video files for not as much cash money!
March 15, 2012 1:49:34 AM

Hey, I didn't say it could be done professionally at that price point. LoL.

From what I see, you just need a large RAID 0 to feed the individual users through dual gigabit NICs.

The users process the file and store it locally or on another server. Each user should also have a separate scratch/work disk, by the way. You want to avoid the bottleneck of sending the finished product back to the RAID 0 at the same time as it's trying to send out files. I'm assuming you already have a gigabit network in place.
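
As a back-of-the-envelope sketch of whether dual gigabit NICs can keep up with three editors pulling footage at once; the per-user stream rates and the 80% efficiency factor below are assumptions for illustration, not measured numbers:

```python
# Rough check: can dual gigabit NICs feed three editors at the same time?
# Per-user stream rates and the 80% efficiency factor are assumptions.

GIGABIT_MB_S = 1000 / 8      # 1 Gb/s is ~125 MB/s before overhead
PRACTICAL_FACTOR = 0.8       # assume ~80% of theoretical after protocol overhead

def available_throughput(nics: int) -> float:
    """Rough usable MB/s from `nics` gigabit ports serving clients in parallel."""
    return nics * GIGABIT_MB_S * PRACTICAL_FACTOR

def required_throughput(users: int, mb_s_per_user: float) -> float:
    """Aggregate MB/s if every user streams footage at the same time."""
    return users * mb_s_per_user

if __name__ == "__main__":
    # Assumed working rates: ~30 MB/s for ProRes-class HD, ~60 MB/s for 2K sequences.
    for rate in (30, 60):
        need = required_throughput(users=3, mb_s_per_user=rate)
        have = available_throughput(nics=2)
        print(f"{rate} MB/s per user: need {need:.0f} MB/s, dual NICs give ~{have:.0f} MB/s")
```

At the assumed rates, two ports (~200 MB/s usable) cover three HD streams comfortably but get tight once everyone is pulling 2K footage at once, which is one more reason for the separate scratch disks.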

So you get something like this NAS with 6 bays
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

throw in six 500GB Western Digital RE4 drives for $780 and have $500 left to upgrade anything else you need to take advantage of this box, like a gigabit router.
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

or throw in six 1TB drives for $1,200, with a few bucks left over for a round of beer later.
http://www.newegg.com/Product/Product.aspx?Item=N82E168...
(the 500gb drives are more reliable)

May 4, 2012 7:15:53 PM

philipbowser said:
Hey popatim,

Thanks for the response. I'm definitely trying to have the projects processed on the users' machines. We have a couple of Mac Pros and a bunch of fast PCs I've built that are very capable. I just need fast central storage to keep all of the footage and project files they can work on. I'm assuming that once a project file is opened on a user's workstation, all of the processing and actual "workings" in the program happen on the user's machine, correct? There's no reason why any of that "thinking" would happen on the server PC; it would just process the reads/writes to the hard drive... unless I'm very, very wrong and confused, ha.

It seems like the fast fibre servers that exist cost tens of thousands of dollars (for a very good reason)... but they are also built to support huge infrastructures. I'm hoping there's a solution for just 3-4 machines trying to access fast video files for not as much cash money!

The way you were talking in the original post I thought you were going to have the server do all the heavy lifting, and the end users would basically be dumb terminals. But this is very different and more than possible as it is just a simple NAS that you need for file storage.

I would suggest looking into 2 routes, and I will also explain what I am attempting to do for my own use down the road.

Route 1: NAS PC with multiple NICs
Build (or re-purpose) a PC that has a decent onboard RAID controller and space for ~5 HDDs (Core 2 Quad/i3 or better). Using RAID 1, 10, or 5 (RAID 0 will only ever end in heartache if trusted with data), fill it up with drives.

RAID Configurations (a rough capacity/fault-tolerance sketch follows this list):
-RAID 1 will take two 2TB drives in a mirror (cheapest but slowest).
-RAID 10 will take four 1TB drives in a mirrored stripe (it can lose up to 2 drives, as long as they are in different mirror pairs, before losing information, plus the striping and multiple drives help performance, especially random IOPS).
-RAID 5 will take five 500GB drives, or three 2TB drives (only one drive of fault tolerance, but the best performance).
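
To compare those layouts side by side, here is a small sketch; the drive counts and sizes mirror the examples in the list, and the formulas are just the standard usable-capacity rules:

```python
# Usable capacity and guaranteed fault tolerance for the RAID layouts listed above.

def raid_summary(level: str, drives: int, size_tb: float):
    if level == "RAID 1":       # simple mirror
        return size_tb, drives - 1
    if level == "RAID 10":      # striped mirror pairs
        return drives * size_tb / 2, 1   # 1 guaranteed; up to drives/2 if failures hit different pairs
    if level == "RAID 5":       # single distributed parity
        return (drives - 1) * size_tb, 1
    raise ValueError(f"unknown level: {level}")

for level, drives, size in [("RAID 1", 2, 2.0), ("RAID 10", 4, 1.0),
                            ("RAID 5", 5, 0.5), ("RAID 5", 3, 2.0)]:
    usable, tolerance = raid_summary(level, drives, size)
    print(f"{level}: {drives} x {size:.1f} TB -> {usable:.1f} TB usable, "
          f"{tolerance} guaranteed drive failure(s) survived")
```

All three options land at or above the 2TB working space asked for in the original post; the difference is how many spindles you get for multi-user performance and how many failures you can shrug off.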

Notes on Performance:
Make sure all drives are identical, and be sure to purchase a spare drive or two before the model is no longer available for sale so you can replace a failed drive easily. Bigger drives have better sequential throughput, while a multitude of smaller drives has better IOPS, which is more important for a multi-user setup.
While large 5900rpm drives may be good enough for a single user (and are nice and quiet), you seriously want to consider 7200rpm or 10K drives for a multi-user setup (but not 10K or 15K SAS, as they will not work right in a non-server rig due to a different connector). 5400rpm drives are simply crap.
Overbuild your storage space on HDDs: hard drives lose performance rather dramatically after you hit ~60% full, so if using mechanical drives, try to use only 50% of the space and keep it well defragmented, or short-stroke the drives to force all data to the outer edges of the platters (a quick sizing sketch follows below).
The way prices are falling, you could almost afford 2TB of SSDs...
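
Here is a quick sizing sketch for that "stay around half full" rule of thumb; the 2TB working set comes from the original post, and the 50% fill target is just the rule of thumb above:

```python
import math

# How much usable capacity to buy so a 2 TB working set stays under ~50% fill,
# and how many drives that means for a single-parity RAID 5 array.

def usable_capacity_target(working_set_tb: float, max_fill: float = 0.5) -> float:
    """Usable array capacity needed so the working set stays under `max_fill`."""
    return working_set_tb / max_fill

def raid5_drive_count(usable_tb: float, drive_tb: float) -> int:
    """Smallest RAID 5 drive count (one drive lost to parity) giving at least `usable_tb`."""
    return math.ceil(usable_tb / drive_tb) + 1

target = usable_capacity_target(2.0)   # 4 TB usable keeps 2 TB of projects at 50% fill
print(f"Usable capacity target: {target:.0f} TB")
for drive_tb in (0.5, 1.0, 2.0):
    print(f"RAID 5 with {drive_tb} TB drives: {raid5_drive_count(target, drive_tb)} drives")
```

As a bonus, hitting the capacity target with more, smaller drives also gives you the extra spindles that help multi-user IOPS, per the note above.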

If re-purposing an old high-end system:
-Add a dual gigabit NIC.
-Max out the RAM the mobo will take so that you have a minimum of 8GB.
-Cram as many, and as large, SSDs in there as you can afford. Again, I would avoid RAID 0, but if you are religious about your backups, then it becomes a possibility to have four 512GB M4 drives in RAID 0 for some truly impressive performance.

If building a new system I would do this (a per-user throughput sketch follows this list):
-For the processor I would get a high-end i3 or low-end i5 (Ivy Bridge if available) if buying new, but a Core 2 Quad will do just fine if recycling an older machine. Remember, the bottleneck is the HDDs and the Ethernet. Even a Core 2 Duo likely has enough horsepower to serve up ~200MB/s of raw data, so don't sink a lot of money into the CPU unless it is going to be doing real work.
-For RAM, I have always heard it said that you should have at least 2GB per CPU core on a server. As the bottleneck is the Ethernet port, it really does not matter what speed of RAM you are running so long as it is DDR2 or better and you have ~8GB of it (though more is always welcome).
-Load up with a RAID 5 setup like the one described above; the more individual drives, the better the multi-user performance will be.
-Ethernet is annoyingly slow. Gigabit Ethernet has a theoretical maximum throughput of ~125MB/s and a practical throughput of ~80-100MB/s. RAID 10 and 5 should have no problem pumping out this kind of throughput even on 7200rpm drives, but what happens when you have 2 people working on entirely different projects all demanding data? While 100MB/s is 'OK' for a single user, 50MB/s is simply not acceptable for busy workloads. I would suggest getting a mobo that has two Intel gigabit ports on it (200MB/s total, but a max of 100MB/s per user/port), or else purchase a separate card that has two high-quality ports on it and use them instead of (or in addition to) the onboard port. Make sure that your switch supports this kind of operation; most do these days, but you still want to be sure.
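
To put numbers on that last point, here is a small sketch of the per-user share when several editors pull from the server at once; the ~100MB/s practical figure per gigabit port comes from the ranges above, and the even-split-of-traffic assumption is mine:

```python
# Worst-case per-user bandwidth when `users` clients share `ports` gigabit links,
# assuming ~100 MB/s practical per port and an even split of traffic.

PRACTICAL_MB_S_PER_PORT = 100

def per_user_mb_s(users: int, ports: int) -> float:
    """Even share per user; a single client on gigabit cannot exceed one port's worth."""
    share = ports * PRACTICAL_MB_S_PER_PORT / users
    return min(share, PRACTICAL_MB_S_PER_PORT)

for ports in (1, 2):
    for users in (1, 2, 3):
        print(f"{users} user(s) on {ports} server port(s): ~{per_user_mb_s(users, ports):.0f} MB/s each")
```

With a single port, three busy users drop to roughly 33MB/s apiece; with two ports they get back to roughly 67MB/s each, which is why the dual-NIC board (and a switch that can handle it) is worth the money.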

Route 2: Find used server hardware (this is what I am attempting)
Find an older rack-mountable server (preferably 4U, as 1U servers are really, really loud) for cheap and fill it up with drives and fiber networking. You should have no problem finding a Core 2 Quad-based Xeon server with a fiber connection (4, 8, or 10 gigabit) for ~$300-600, then load it up with 2TB of new SAS HDDs for ~$1000.
Then find a switch for cheap that has a fiber connection on it. You are still going to be limited to gigabit ethernet for each end-user, but with the better hardware behind it you will get more throughput (100-110MB/s instead of consumer level 80-100MB/s), and you can have a lot more people accessing the server at the same time without things slowing down (1 user per 1 gigabit, so 4, 8, or 10 concurrent users... though I think at the 8-10 level we would be back to the HDD bottleneck again).
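
To illustrate where the bottleneck moves as users are added, a rough sketch; the ~450MB/s array ceiling and the 90% link efficiency are assumptions for illustration only:

```python
# Which limit bites first as users are added: the fiber uplink or the HDD array?

ARRAY_MB_S = 450            # assumed aggregate throughput of the SAS RAID array
CLIENT_LINK_MB_S = 100      # each end-user is still on gigabit Ethernet

def effective_per_user(users: int, uplink_gbit: int) -> float:
    """Per-user MB/s limited by the uplink, the array, and the client's own gigabit port."""
    uplink_mb_s = uplink_gbit * 1000 / 8 * 0.9   # ~90% efficiency on server-grade gear
    bottleneck = min(uplink_mb_s, ARRAY_MB_S)
    return min(bottleneck / users, CLIENT_LINK_MB_S)

for uplink in (4, 8, 10):
    for users in (4, 8, 10):
        print(f"{uplink} Gb uplink, {users} users: ~{effective_per_user(users, uplink):.0f} MB/s each")
```

With those assumed numbers, past roughly four or five simultaneous users it is the drives rather than the uplink setting the ceiling, which matches the HDD-bottleneck point above.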
The problem with this route is that the hardware is still a bit hard to find on eBay, and it is a waiting game of finding the right parts for the right price and snapping it up before anyone else does.


What I am working on at home is moving all of my drives out of my system and into another area of the house entirely. I do a bit of AV work and hate using headphones, so it is hard to work with a lot of background noise. I have already gotten the case and CPU to be nearly silent; next will be the GPU, and then moving the HDDs out and using an SSD locally (just picked up the SSD 2 weeks ago! It is AWESOME! Finally my i7 is unleashed!).
I found a dual quad-core server for free (woot!), and picked up 8GB of RAM for it for cheap (already installed and tested). I am now in the process of finding the fiber adapter/cables/switch for the network and saving up for newer/faster HDDs. I am hoping to find a fiber card that will work in my system as well, and a switch that has 2 fiber ports on it so that I can have a full dedicated 4Gb/s, or 400MB/s, of throughput between myself and the server, while the rest of the house will be on gigabit Ethernet, or wireless AC when it becomes available/relatively cheap (then no more running Ethernet through the HVAC duct-work... it works, but it is definitely not up to code).
When it is all said and done I should have massive throughput for HD video editing, while the only noise coming from the system is my 120mm 800rpm fans :)  The only problem is that it takes forever (I've been looking at doing this for six months already), as it is just a matter of getting lucky with parts and saving money for drives (and I have other things to save money for... damn house! lol). But so far it has been a lot of fun, and a great learning experience, as I am rather new to the server side of things.

Also, keep in mind that 10 gigabit Ethernet should make its way down to 'enthusiast' (i.e. gamer) equipment within the next 2-3 years, which will be much cheaper than the professional equipment available today. At that point you could go all SSD (OCZ R5 PCIe3 4x cards are coming out shortly, which can hold up to 12TB of space) and have 1GB/s of throughput for each and every user... *drool*
May 4, 2012 8:19:48 PM

CaedenV said:

Also, keep in mind that 10 gigabit Ethernet should make its way down to 'enthusiast' (i.e. gamer) equipment within the next 2-3 years, which will be much cheaper than the professional equipment available today. At that point you could go all SSD (OCZ R5 PCIe3 4x cards are coming out shortly, which can hold up to 12TB of space) and have 1GB/s of throughput for each and every user... *drool*


10 Gigabit Ethernet hasn't even made it down to many enterprises; I think it will be quite a bit longer before it reaches a price point suitable for enthusiasts and home users. It also doesn't provide much benefit over Gigabit Ethernet outside of the LAN. I do run 10 Gigabit Ethernet as part of a lab environment I have set up, but even at work we have yet to implement it due to cost ($27,000 for 6 ports of 10 Gig Ethernet).
May 11, 2012 6:28:01 PM

sk1939 said:
10 Gigabit Ethernet hasn't even made it down to many enterprises; I think it will be quite a bit longer before it reaches a price point suitable for enthusiasts and home users. It also doesn't provide much benefit over Gigabit Ethernet outside of the LAN. I do run 10 Gigabit Ethernet as part of a lab environment I have set up, but even at work we have yet to implement it due to cost ($27,000 for 6 ports of 10 Gig Ethernet).

Like I said, it will not happen in the next year, and possibly not within 2 years. But if there is not a movement to a faster standard available to home users in the next 3 years, I would be genuinely surprised. You have several movements headed in this direction. One is centralized media servers for the home, for which gigabit is fine for a single user, or even two, but when you get up to 3+ users bottlenecks can start happening. Another is people who are moving to virtual machines that run off of a centralized server (I realize that it is a small market, but it is beginning to grow). And another is people like me who want to move to single-drive systems, with networked RAID arrays for all of our data content; gigabit is not good enough for a single user in such situations, much less multi-user scenarios.

Also keep in mind the cost difference between 'managed' equipment for businesses and 'unmanaged' equipment for home and small-business use. My 24-port gigabit switch goes for $200 (I got it for ~$120, mis-marked as a 100Mbit switch, at a going-out-of-business sale 5 years ago :) ), while the managed version goes for $900 right now.
I am not saying that an unmanaged 16-port 10Gbit switch will cost $100-200 in 3 years, but that it will be available and 'affordable', which will likely mean somewhere in the $500-1,000 range.