New to servers, advice needed

Litso

Distinguished
Feb 26, 2010
7
0
18,510
The company I work for has tasked me with procuring our first server. The problem is this is my first foray into the world of servers and I'm a bit lost. I'm quite knowledgeable concerning desktop software and hardware, and I feel confident that I can build the machine we need, I'm just not sure where to start really.

What I need:
We are a smallish company, around 35 employees overall. Approximately 15 people will be using the server actively. The primary use for the server (at this point) will be generating and printing image files. Most of the work will be done client side, but the batching and whatnot will be done by the server. Storage will be provided by our NAS units, so that is not a concern for this box. Scalability is also a concern; as we get more acclimated with the capabilities and functionality of the server, we will be adding tasks.

I think we've decided to run Windows Server 2008 R2 Standard edition, and my preliminary research has pointed me towards the Intel Xeon X3440 Lynnfield processor. Is this a good choice, or should we consider stepping up to a Xeon 5-series? We're estimating 8-12GB of RAM. Is registered RAM necessary? What is a good manufacturer? It will be rack mounted. What I'm unsure about are enclosures, motherboards, etc. I've looked at several barebones systems, primarily Asus and Intel. Are they viable? Which manufacturers are the field leaders in quality? Would we be better off just trying to configure a prebuilt system from somewhere like Dell?

Thanks in advance for any insight you guys can provide!
 

lauxenburg

Distinguished
Feb 9, 2009
540
0
19,010
Here's how it goes...

I am no expert, but what I do know is as follows.

The Core i7 line is targeted at desktop applications and has no support for ECC or registered memory.

The Xeon 3-series has support for ECC and registered DIMMs.

The Xeon-5 series allows for multiple CPUs on one board.

In essence, the Xeon 5000 lineup would be a waste for you because you will not be using multiple CPUs, and since the i7s do not support registered memory, your best bet is the Xeon 3000 processors.

Registered memory is not important for desktop PCs, nor is ECC; however, when multiple requests reach the server, registered memory is going to be a big help with performance, as is ECC. Server memory? Just about all of the desktop brands offer a selection of server modules. I would stick to Kingston, Patriot, and Corsair for quality.

For quality motherboards, ASUS has always led the way. I use them in all of my systems.
 

Litso

Distinguished
Feb 26, 2010
7
0
18,510


I was reading about some memory modules and it mentioned dual-rank vs. quad-rank modules. If I'm running a Xeon X3440 (socket LGA 1156) with, say, 12GB of DDR3 RAM, would I need DR or QR modules? Also, as I understand it, socket 1156 uses dual-channel DDR3, as opposed to triple-channel. The way I understand it, any DDR3 module will run in dual- or triple-channel mode; it just depends on what your motherboard supports, right?

Also, this is the motherboard I'm considering:
Asus P7F-E
Any thoughts?

And one last thing. I think I'm going to go with a 4U rack mount case (Norco RPC-430). Is there any need for a 'server' power supply, aside from form factor? Presumably there will be plenty of room in a 4U case for any kind of PSU, right?

Thanks for the fast response, that's a good place to start!
 

lauxenburg

Distinguished
Feb 9, 2009
540
0
19,010
Oops, sorry for being late. Ah well, LGA 1156 is based on a different chipset than LGA 1366, which means the motherboards have different north/south bridges.

The North Bridge for 1366: Intel X58 (Single Socket) Intel 5500 Series (Dual Socket)
The South Bridge for 1366: Intel ICH10R (Single and Dual Sockets)

The North Bridge for 1156: Intel 3400 Series
The South Bridge for 1156: N/A

Intel's X58/ICH10R combination is capable of using triple-channel memory, whereas Intel's 1156-based systems can only utilize two channels. Don't be fooled if your motherboard has 6 slots for memory; that just allows for more memory to be installed.
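If you want rough numbers on what the extra channel buys you, here's a quick back-of-the-envelope sketch. The DDR3-1333 speed is just an assumed example, and these are theoretical peaks, not real-world throughput:

```python
# Theoretical peak memory bandwidth: transfers/sec * bytes per transfer * channels.
# DDR3 uses a 64-bit (8-byte) bus per channel; DDR3-1333 = 1333 MT/s (assumed example).
def peak_bandwidth_gbs(mts, channels, bus_bytes=8):
    """Return theoretical peak bandwidth in GB/s."""
    return mts * 1e6 * bus_bytes * channels / 1e9

dual = peak_bandwidth_gbs(1333, channels=2)   # LGA 1156: two channels
tri  = peak_bandwidth_gbs(1333, channels=3)   # LGA 1366: three channels
print(f"dual-channel: {dual:.1f} GB/s, triple-channel: {tri:.1f} GB/s")
```

So on paper the 1366 platform has roughly 50% more memory bandwidth at the same DIMM speed; whether a print/batch workload ever comes close to saturating either is another question.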

I like your motherboard choice. Just remember to get the right memory. Since server motherboards are picky, look at the recommended kits on ASUS's website.

A 4U rack case is a good idea, as it will give you plenty of room to maneuver and maintain the machine. It also means you can use regular cooling solutions and nothing out of the ordinary.

As for the power supply: "server power supply" is a term that is probably too vague. All it usually means is that there are redundant power supplies attached alongside the main one, which the machine will fall back on if one of them fails. These types of power supplies are more common in mission-critical situations such as data centers. For a small business/company, one of these isn't really necessary, and the chance that a normal power supply will fail is also quite low.

The other thing is that server power supplies have their intakes and exhausts mounted strategically for a server rack. However, since the Norco rack you chose accepts standard ATX power supplies, you would be better off with one of those, and it is budget friendly as well.

You can try one of these supplies, as your server won't be drawing much power at all, but it will give you room for options in the future. It also has a 4+4 pin connector for the CPU, as I am not sure if the X3440 uses 4 or 8 pins. Either way you are in the clear.

Good luck. Looks like a good setup.
 
Solution

Litso

Distinguished
Feb 26, 2010
7
0
18,510

lauxenburg

Distinguished
Feb 9, 2009
540
0
19,010
Hey, wow. I am liking that a lot. Make sure you have double-checked EVERYTHING. As far as I can see, it looks very nice.

For efficiency, the Corsair power supply is pretty good; however, since it will be running for long periods of time, it might be wiser, especially for a business, to get a higher efficiency rating. Plus, you can tell your boss you built the world's most efficient server, which means lower energy bills. I kid you not.

However, the added efficiency means better capacitors, and better capacitors = you guessed it, more money. Here we have an 80 Plus Gold rated supply, which means at least 87% of the power it draws from the wall actually reaches the system. The Corsair is only rated for 80%. As you can see, it costs $60 more, but it may be worth it, depending on the usage.
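To put the efficiency difference in dollars, here's a rough sketch. The 150W load, $0.12/kWh rate, and 24/7 runtime are all assumptions; swap in your own numbers before pitching it to the boss:

```python
# Rough annual electricity cost of a PSU at a given efficiency.
# Wall draw = DC load / efficiency; cost = kWh consumed * rate.
# Assumed inputs: 150W average DC load, $0.12/kWh, running 24/7.
def annual_cost(dc_load_w, efficiency, usd_per_kwh=0.12, hours=24 * 365):
    wall_kw = dc_load_w / efficiency / 1000
    return wall_kw * hours * usd_per_kwh

base = annual_cost(150, 0.80)  # standard 80 Plus supply
gold = annual_cost(150, 0.87)  # 80 Plus Gold supply
print(f"standard: ${base:.0f}/yr, gold: ${gold:.0f}/yr, savings: ${base - gold:.0f}/yr")
```

With these assumed numbers the Gold unit saves on the order of $15 a year, so the $60 premium pays for itself over the life of the server but isn't a dramatic win at this load.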

Will you need that big of a hard drive for a print server? No, the print requests will usually take no more than 50MB at a time, but the increased size might give you more options down the road. I'd say you could settle for 500GB or less, but I have never built a print server, so I couldn't really tell you.

Apart from that, all is looking well.

Just bear in mind, this machine will be able to take on a lot of other jobs in the future, as it is quite powerful. So don't be afraid to use it for anything else.

Good luck.
 

lauxenburg

Distinguished
Feb 9, 2009
540
0
19,010
Oh, one other thing. Because it is a rackmount case there are no top exhaust vents, which makes it hard for a CPU heatsink/fan to get the air out of the case, since the vents are rear mounted. You'll want a high-quality cooler that will push the air out the back... or a cheaper cooler.

Extras: I do not know where this server will be located, but if you need silence, the preinstalled 80mm fans will probably not cut it. Here is a very quiet, high-airflow fan. It is expensive, but it depends on your application.
 

Litso

Distinguished
Feb 26, 2010
7
0
18,510


I'll definitely check into that and see which way the boss wants to go. Pretty sure I can sell him based on the savings that can be made on the power bill over the lifetime of the rig.



Yea, I went ahead and selected a larger HDD for 2 reasons: 1) I know I can get optimum single-drive performance out of that particular drive, and 2) future-proofing. Plus, for $99.99 it's not like I'm breaking the bank ;) .




It will definitely be tested to its limits! Like I said, the print server is really just the most immediate demand. Eventually we will most likely rework the bulk of our infrastructure around this server. There are only 2 of us in our "IT" department, and we're learning as we go haha. It's just a matter of time.


 

lauxenburg

Distinguished
Feb 9, 2009
540
0
19,010


As far as I have heard, it is just going to be a print server as of now, so if it fails, what's the worst that could happen? A power backup might be helpful, but it's not a web server that needs 100% uptime.
 

wuzy

Distinguished
Jun 1, 2009
900
0
19,010


It's definitely got a lot of potential for future scaling. You might want to consider using two WD RE3 1TB drives (WD1002FBYS) in RAID 1. The HDD is the component most prone to failure, and for a server, uptime is of the utmost importance. It would've been nice to get a case with hot-swap bays, but for only $63 it'll do (it's still possible to hot-swap without powering down).
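The idea behind RAID 1 is simple enough to sketch in a few lines. This is just a toy model (obviously not real RAID code) showing why a mirror keeps data readable when one drive dies:

```python
# Toy RAID 1 model: every write goes to both "drives"; reads fall back
# to whichever mirror is still alive. Drives are modeled as dicts
# mapping block number -> data.
class Raid1:
    def __init__(self):
        self.drives = [{}, {}]
        self.failed = set()

    def write(self, block, data):
        for i, drive in enumerate(self.drives):
            if i not in self.failed:
                drive[block] = data          # mirror the write

    def read(self, block):
        for i, drive in enumerate(self.drives):
            if i not in self.failed:
                return drive[block]          # any surviving mirror works
        raise IOError("array lost: both mirrors failed")

    def fail(self, i):
        self.failed.add(i)

array = Raid1()
array.write(0, b"print batch 001")
array.fail(0)                                # one drive dies...
assert array.read(0) == b"print batch 001"   # ...data survives on the mirror
```

That last assert is the whole sales pitch: you trade half your raw capacity for the ability to lose a drive without losing the array.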
 

lauxenburg

Distinguished
Feb 9, 2009
540
0
19,010


Yeah, RE3s are very reliable. But in all of my years with computers I have never had an HDD fail on me. People talk about them failing as commonly as rain falls from the sky.
 

Kewlx25

Distinguished
ECC memory is a must for servers. Memory does have errors; even HDD data gets corrupted from time to time by memory errors. You'll also want some sort of hardware RAID, probably RAID 1, since the NAS is the primary storage.
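For the curious, the single-bit correction that ECC DIMMs do in hardware is the same trick as a classic Hamming code. Here's a toy Hamming(7,4) sketch just to illustrate the principle; real ECC memory uses a wider SECDED code over 64-bit words:

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits.
# Codeword layout (1-based positions): [p1, p2, d0, p3, d1, d2, d3].
def encode(d):
    d0, d1, d2, d3 = d
    p1 = d0 ^ d1 ^ d3                 # covers positions 3, 5, 7
    p2 = d0 ^ d2 ^ d3                 # covers positions 3, 6, 7
    p3 = d1 ^ d2 ^ d3                 # covers positions 5, 6, 7
    return [p1, p2, d0, p3, d1, d2, d3]

def correct(c):
    # Recompute each parity; the syndrome spells out the 1-based
    # position of a flipped bit (0 means no error detected).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1          # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]   # extract the data bits

word = [1, 0, 1, 1]
cw = encode(word)
cw[4] ^= 1                            # simulate a single-bit memory error
assert correct(cw) == word            # the error is found and fixed
```

The hardware does this on every memory access, transparently, which is why a flaky bit on an ECC DIMM shows up as a logged correctable error instead of silently corrupted data.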

i7-based Xeons are crazy fast, and coupled with Win2k8, they use ~10% less power because of the new thread scheduler (if not at full load).

We've got some i7s at work on a few database servers; they're I/O limited by our fiber SAN and $120k 16TB SAN storage. We now have one of our new i7 DB servers with two fiber SAN cards because only one was starving the CPU.

To put things in perspective, 1-2 SQL instances will run our dual-socket quad-core 3.2GHz Xeons (non-i7) near 100% usage. Our new dual-socket quad-core i7-based Xeons @ 2.66GHz are around 30%-40% peak CPU, and they have 3 SQL instances.

We're trying to put more SQL instances on them to actually load them, but the dang fiber cards can't keep up.

Edit: forgot, a UPS. APC is a great company for UPS products. They aren't meant to run your server through an outage, just to give it enough time to shut down or switch over to a generator.
 

wuzy

Distinguished
Jun 1, 2009
900
0
19,010


Try managing a rack full of servers and a SAN and say that to me again. ;) Even in my limited 1 year of experience working as a sysadmin, I've had at least one SAS drive fail. Then there are old-fart admins who will tell you countless unbelievable HDD failure stories from their lifetimes. Not using RAID redundancy for anything except caching drives (which run in RAID 0) in a server is just plain crazy.

[EDITED] Picking the RE3 was for the purpose of TLER (enabled by default, important for RAID usage) and a firmware oriented toward higher queue performance. Physically they're no different from their Caviar Black desktop counterparts; the difference is all in the firmware and better QC before leaving the factory.
FYI, WD has started disabling TLER activation on newer batches of desktop drives via the tool wdtler.exe :(
 

wuzy

Distinguished
Jun 1, 2009
900
0
19,010


Proprietary, overpriced SAN ftl. (looking at lower-end EMC) I'm not the guy who tunes the performance of our SAN; that's done by the system engineer. But from what I can see, it runs off 10GbE iSCSI and is feeding our four hypervised Nehalem-EP hosts nicely.
There are a few SLC-based SSDs in RAID 0 used for caching databases (mainly SQL, I guess). That's probably one of the keys to our SAN's performance.

Anywhoo, off-topic.
 

Litso

Distinguished
Feb 26, 2010
7
0
18,510
Thanks for all the info everyone. I was skeptical about making a post in the first place, but you've all been a great deal of help for someone who has exactly 0 experience with servers and is tasked with building, deploying, and administering one lol. I'll definitely consider the HDD and PSU/UPS situations before I place my order. Glad to hear that the 3-series is going to provide us with plenty of headroom. As far as the hot-swap bays, for the time being the HDDs in the server won't really be going anywhere, and 100% uptime isn't a major concern right now. If a time comes when redundancy and hot-swapping drives become a priority, I'm sure we'll update the system.

Right now all the HDD is going to be used for is holding the files temporarily while the server does its work on them. Aside from 2k8 there will be very little on the server storage. Nonetheless, I'm seriously considering a couple of RE3s for a RAID 1; it just seems like a smart move all around. We do need high performance from these drives, which I expect from the RE3, but should I consider a hardware RAID solution? I know you can get better performance from a RAID 0 with a hardware controller, but is there going to be a noticeable difference in a RAID 1 setup?
 

lauxenburg

Distinguished
Feb 9, 2009
540
0
19,010


Yeah, well, I don't have a rack of 15k drives spinning 24/7/365, so I couldn't tell you. But what I am saying is that most drives, if they don't get all that much use, will be fine. The more they spin, the more likely they are to fail. On a print server, hard drives don't have a ridiculous task. For a print server, I'd almost recommend a nice 40GB SSD. Should be enough, eh?
 

Litso

Distinguished
Feb 26, 2010
7
0
18,510
Went ahead and ordered the system; it was a pretty nerve-wracking charge to put on the company card. I went with 2 RE3s, most likely to be set up in a RAID 1. I figure if I'm future-proofing, since we don't know exactly what all we will end up using this server for, that's the best route. If nothing else, I don't have to worry too much about losing my OS install and all my configurations. Not to mention some of the files that will be manipulated by the server are quite large, and our batches usually consist of 200-500 files.
 

lauxenburg

Distinguished
Feb 9, 2009
540
0
19,010


Ah, alright. Does the motherboard have built-in RAID? In the future, if this becomes your big main server, a RAID card might be a good idea. Mind you, the Xeon is powerful enough to run software RAID while doing its other jobs in the background. On a desktop, most people don't use RAID cards, but on a server it is more common, for good reason. They are also expensive... like $500+ for a good hardware one.
 

Kewlx25

Distinguished


Offtopic:
It's not the actual SAN, but the iSCSI interfaces that are getting pegged. As for the price, there's a ton of redundancy because of the data we host/create.