
How to Build a Great Server and Network: Speed and Cost Optimized

Tags:
  • Build
  • Servers
  • Components
May 20, 2011 9:20:25 AM

I have been reading a lot about servers, but unfortunately it seems that there is no central place where you can get the breakdown of all the important information. Different sites cover different aspects. This might be a bit ambitious, but I will try to have all the important bases covered here and hopefully get some good advice and corrections, if any. This will help me and all future Googlers. I build computers, but I have not built a server yet. So let's break it down:

OS: Windows Server 2008 R2 or Windows 7? It seems that the quick answer is "depends." The basic difference is that the Server edition is optimized so that memory usage is lower - fewer native background applications are running. The Server edition can be used much like W7; however, there are some limitations. For example, you can't use it to record TV or easily burn CDs without admin access. You can't use a regular antivirus either, and server AV prices are high. Additionally, if you trim the W7 startup services and remove what you don't need, you can bring down the memory usage, though it still won't be as efficient as the Server edition. More importantly, Server 2008 R2 costs about $700 vs $100 for W7 Ultimate (eBay prices). As such, it is likely a good idea to use W7 instead, unless you are going to use the server to host your website, email, and/or serve dozens of computers. I should note that I would get at least the W7 Ultimate edition, because anything below that lacks elaborate/easily accessible user access controls and BitLocker, which is essential for security. It is also necessary to get the 64-bit edition for enhanced security and a larger memory maximum (4GB on 32-bit vs 192GB on 64-bit!).

Hardware: This is probably the most difficult part, as there are a lot of components. Overall, there are two paths one can take: you can either build a server with true "server" components or use a regular desktop build. My understanding is that the most basic difference is reliability. Servers run 24/365, and if you want to build a server that will last 10 years or so, you should select components made for servers. These components can be twice as expensive, though, and to me the decision is simpler: if you are building a server for your home or a small business with 20 or so computers for simple file sharing (no intensive media editing, etc.), you don't really need server components. First, you won't need the power offered by server motherboards, which can have slots for two CPUs, eight RAM sticks, etc. Second, technology changes so fast that it is unlikely you will hang on to your system for more than 5 years without any upgrades. 10 years ago Windows XP hadn't even been released and we were on 32MB SDRAM sticks. Can you imagine running W7 on that hardware? It won't work. Some people choose not to upgrade their OS, but that creates a security risk. This is why I don't see anyone keeping a computer for over 5 years without running into one.

Let's break down the components:

1. Motherboard:
I would like to get some feedback on this, but to me it seems that there are three important aspects: the Ethernet must be gigabit capable; there must be a RAID controller for the RAID config you want, with good reporting software (unless you get a separate RAID controller); and ideally the BIOS should have an option to power the system back up after a power outage. Did I miss any important features?

2. RAID Controller: see above. This component is needed if the motherboard doesn't have a RAID controller. It should come with good software that can monitor the integrity of all drives and send email reports. My questions in this area: are there any advantages to using a dedicated RAID controller? Also, what features should one look for to achieve maximum transfer speeds? Newegg does not list the speed of each controller.

3. Memory: 4GB for W7 should be sufficient. I have not found a good way to test memory usage on the system to determine whether more memory is needed. The native W7 memory usage tracker is not a reliable guide, because Windows adjusts its memory usage based on how much is available. A server for a dozen or so computers should not need much more than 4GB of RAM.

4. Hard Drive: This is another difficult area. I think the best approach to this is speed. The fastest networks for regular use are gigabit networks. They provide a 1 Gbps maximum, which works out to 125 MB/s. However, many people don't come anywhere near this speed, because they don't realize that to achieve it you need ALL your endpoints to be gigabit compliant and you need an HD that can actually sustain that rate. So how do we select the right HD? It is almost impossible to look at the RPM or the cache size alone and determine the speed of the drive. Real testing is required (one such tool is HD Tune), and there are online benchmark databases as well. A quick look at the results shows that unless you get a VelociRaptor (10,000RPM), which is expensive at $150 for just 300GB, you're not going to approach 125MB/s. WD Caviar Black is more reasonable at $85 for 1TB, and its speed is ~110MB/s for both writing and reading. Given WD's reliability, I think the Caviar Black would serve well for high-performance servers. Of course, if you're trying to save energy or keep system heat down, you might sacrifice performance and go for the Blue or Green drives.
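To make the arithmetic concrete, here is a quick Python sketch comparing drive speeds against the gigabit ceiling. The drive figures and the "roughly saturates" threshold are illustrative assumptions for the sake of the example, not benchmarks.

```python
# Rough sanity check: can a given drive keep a gigabit link busy?
# Drive speeds below are illustrative assumptions, not measured benchmarks.

GIGABIT_MB_S = 1000 / 8  # 1 Gbps of raw line rate = 125 MB/s

drives = {
    "VelociRaptor 10K": 120,   # assumed sustained MB/s
    "Caviar Black 1TB": 110,
    "Caviar Green": 85,
}

for name, mb_s in drives.items():
    # Hypothetical rule of thumb: within 85% of line rate counts as "close".
    verdict = "roughly saturates" if mb_s >= 0.85 * GIGABIT_MB_S else "falls short of"
    print(f"{name}: {mb_s} MB/s {verdict} a gigabit link ({GIGABIT_MB_S:.0f} MB/s)")
```

In practice protocol overhead (TCP, SMB) eats into the 125 MB/s figure as well, so even a drive that matches the line rate on paper won't quite reach it over the network.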

This leads into the RAID discussion. Another way to increase HD performance while safeguarding data is to arrange the right RAID configuration. For best performance and redundancy, RAID 0+1 or RAID 10 (aka RAID 1+0) is recommended. This is the fastest redundant RAID configuration (excluding RAID 0, which in theory has twice the write speed of RAID 10 but no redundancy). Most other RAID configurations suffer a write-speed loss compared to non-RAID setups. The downside is cost: RAID 10 uses a minimum of 4 HDs, and only half of that space is usable. If you have 4x1TB HDs, only 2TB will be available for storage. Some recommend RAID 5 if cost is an issue, but that has its own issues. If you don't care about performance, RAID 1 is another alternative. Theoretically, RAID 10 should increase the read and write speeds by 4x and 2x, respectively. This means that if you get that 110MB/s Caviar Black, you will max out your gigabit LAN, since your read and write speeds for RAID 10 will in theory be 440MB/s and 220MB/s. I would appreciate it if someone with real data could confirm this prediction.
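A minimal sketch of this theoretical RAID arithmetic, assuming n identical drives with the same sustained speed. Real arrays fall short of these ideals (controller overhead, parity calculation, stripe sizes), so treat the numbers as upper bounds:

```python
# Theoretical capacity and throughput for common RAID levels,
# assuming n identical drives. These are idealized upper bounds.

def raid_theory(level: str, n_drives: int, size_tb: float, speed_mb_s: float):
    """Return (usable TB, theoretical read MB/s, theoretical write MB/s)."""
    if level == "0":    # striping: every spindle contributes, no redundancy
        return n_drives * size_tb, n_drives * speed_mb_s, n_drives * speed_mb_s
    if level == "1":    # mirroring: reads can scale, writes hit every disk
        return size_tb, n_drives * speed_mb_s, speed_mb_s
    if level == "10":   # striped mirrors: half the capacity, half the write scaling
        return (n_drives / 2) * size_tb, n_drives * speed_mb_s, (n_drives / 2) * speed_mb_s
    if level == "5":    # one drive's worth of capacity lost to parity
        return (n_drives - 1) * size_tb, (n_drives - 1) * speed_mb_s, speed_mb_s
    raise ValueError(f"unhandled RAID level: {level}")

# Four 1TB Caviar Blacks at ~110 MB/s, as in the example above:
print(raid_theory("10", 4, 1.0, 110))  # → (2.0, 440, 220.0)
```

RAID 5 write speed in particular varies widely with the controller and cache, so the single-drive figure used here is only a conservative placeholder.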


Cables: If you are using anything less than Cat 5e Ethernet cable, you are not going to take full advantage of a gigabit network - Cat 5e is the minimum grade actually certified for gigabit. Cat 6 is a safe step up, as it offers more headroom and fewer errors, especially over longer runs.

Security: How sensitive is the data on the server? To secure data, entire hard drives should be encrypted with BitLocker, which is available only in W7 Ultimate and Enterprise editions (or Server 2008 R2). All computers trying to access the server must also have the Ultimate edition. Note that some time ago Passware came up with a method to quickly defeat BitLocker, TrueCrypt, and other security solutions by reading the encryption key from RAM. This is why the operating system drive must also be encrypted: that encrypts the boot and hibernation files and makes the system more resistant to RAM attacks. It is impossible to encrypt the RAM itself, and the only way to clear it is to reboot the computer (disable quick reboot or quick memory test). Note: If you don't want to use a USB key with BitLocker, you have to get a motherboard that supports TPM.

This is it, at least for now. I hope that those more familiar with servers will provide answers to my questions and correct me where I am wrong.


May 20, 2011 9:59:42 AM

You missed the single biggest factor in specifying ANY system - What do you actually want it to do? Basic file, print and streaming can be handled from a NAS that will cost you less than a proper server MB.

So...

1. What do you want it to do?
2. How many users?
3. Environment - is this lights out or will it be maintained locally?
4. Can you afford downtime? Will you be able to take the system off line as part of scheduled maintenance?

Answering these questions will then allow you to move forwards with an idea of what you actually want.
May 20, 2011 1:33:12 PM

Interesting that you don't consider Linux for your OS - it means you can spend the money on something useful rather than lining Microsoft's pockets. If you want the best performance you can use the server version; if you want an easier user experience, then the desktop version should be easy enough to use.

Disks - I have 2 WD Blue drives in my server; they are the laptop versions and work fine. Speed - my server has a RAID card (P400 with 256MB BBWC) and they would be fine for most purposes, although they are in RAID 0 - I'd use RAID 5 if I had the extra drives: enough performance whilst getting the most storage out of them.

Networks - most server boards have dual to quad gigabit NICs, so throughput shouldn't be a problem if you set it up properly. If you're really worried, you can use link aggregation, which lets you bond multiple ports together.

Motherboard - get a server motherboard - usually cheap enough off eBay, although you will have to be careful when buying all the components; memory is the easiest to screw up on. CPUs - I would suggest AMD as they are cheaper, but I'm sure others would push Intel. I've built a 4x quad-core AMD, 32GB, 1TB server based on an HP DL585 G2 for about £800. Also, using an original server as my base, I have dual PSUs and remote server management (I can turn the server on and off remotely).
May 20, 2011 11:14:15 PM

audiovoodoo said:
You missed the single biggest factor in specifying ANY system - What do you actually want it to do? Basic file, print and streaming can be handled from a NAS that will cost you less than a proper server MB.

So...

1. What do you want it to do?
2. How many users?
3. Environment - is this lights out or will it be maintained locally?
4. Can you afford downtime? Will you be able to take the system off line as part of scheduled maintenance?

Answering these questions will then allow you to move forwards with an idea of what you actually want.


You're right. I was going to address NAS, but decided to make this specific to servers. NAS can be a cheap alternative if you're not worried about security and don't need a server to manage other computers from a centralized point. As far as I am aware, BitLocker does not work with NAS, at least not easily.

For our particular use, these are the answers:

1. We need a server mainly for file sharing - sensitive information.
2. There are three users for now, but this is a quickly expanding business and I use the server to manage antivirus and other software from a centralized point locally and via VPN. The files are also accessed by off-site users about 50% of the time.
3. Not sure what you mean, but we're maintaining this ourselves. It will always be on and there is a UPS as a backup.
4. Taking the system offline after work hours/weekends will not be an issue. We are not hosting our own website or emails here. We might in the future though.
May 20, 2011 11:39:14 PM

nigelren said:
Interesting that you don't consider Linux for your OS - it means you can spend the money on something useful rather than lining Microsoft's pockets. If you want the best performance you can use the server version; if you want an easier user experience, then the desktop version should be easy enough to use.

Disks - I have 2 WD Blue drives in my server; they are the laptop versions and work fine. Speed - my server has a RAID card (P400 with 256MB BBWC) and they would be fine for most purposes, although they are in RAID 0 - I'd use RAID 5 if I had the extra drives: enough performance whilst getting the most storage out of them.

Networks - most server boards have dual to quad gigabit NICs, so throughput shouldn't be a problem if you set it up properly. If you're really worried, you can use link aggregation, which lets you bond multiple ports together.

Motherboard - get a server motherboard - usually cheap enough off eBay, although you will have to be careful when buying all the components; memory is the easiest to screw up on. CPUs - I would suggest AMD as they are cheaper, but I'm sure others would push Intel. I've built a 4x quad-core AMD, 32GB, 1TB server based on an HP DL585 G2 for about £800. Also, using an original server as my base, I have dual PSUs and remote server management (I can turn the server on and off remotely).



I have never used Linux and frankly I don't really want to spend the time learning it for now. Windows has better support for applications, and this server is also going to work as a central point for some of the software - it won't work on Linux.

WD Blue is not bad, but then again you're using RAID 0. With two drives that provides 2x the speed, so even the Green would perform well. The problem with RAID 0 is its extreme vulnerability: you have basically doubled the chances of data loss, because if any one of your HDs fails, all your data is lost. This is why RAID 10 is the best - it retains almost the same performance as RAID 0 but adds data protection. Can you tell me your read and write speeds for RAID 0?

I can get a regular motherboard with dual gigabit NICs as well. However, even if I bond the connections to get a throughput of 250MB/s (and assuming I have the RAID 10 set up), the rest of the computers on the network won't be able to take advantage of this speed unless they also have dual LAN cards. With just a few users, I am not worried that too many simultaneous requests will push the transfer rates below 125MB/s. I am presuming that you either have a lot of users or all your computers are fitted with dual/quad LAN.
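The bonding point above can be put into numbers with a small sketch. The figures are illustrative, and the key assumption is that a single transfer is capped by the slower endpoint's aggregate link:

```python
# Back-of-envelope for link aggregation: bonding NICs raises the server's
# aggregate ceiling, but any single client is still capped by its own link.

GIGABIT_MB_S = 125  # 1 Gbps expressed in MB/s

def per_client_cap(server_nics: int, client_nics: int = 1) -> float:
    """One transfer is limited by the slower endpoint's aggregate bandwidth."""
    return min(server_nics, client_nics) * GIGABIT_MB_S

def aggregate_cap(server_nics: int) -> float:
    """Total the server can push across many simultaneous clients."""
    return server_nics * GIGABIT_MB_S

print(per_client_cap(server_nics=2))  # single-NIC client → 125 MB/s
print(aggregate_cap(server_nics=2))   # two busy clients at once → 250 MB/s
```

Note that real link aggregation (LACP) typically hashes each flow onto one physical link, so even a dual-NIC client rarely sees more than a single link's worth on one transfer; the aggregate figure mainly helps when several clients hit the server at once.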

By the way, do you use WOL to remotely turn on your server? Is there a good guide for this?
May 21, 2011 6:45:02 PM

Excelsius said:
WD Blue is not bad, but then again you're using RAID 0. With two drives that provides 2x the speed, so even the Green would perform well. The problem with RAID 0 is its extreme vulnerability: you have basically doubled the chances of data loss, because if any one of your HDs fails, all your data is lost. This is why RAID 10 is the best - it retains almost the same performance as RAID 0 but adds data protection. Can you tell me your read and write speeds for RAID 0?

I'm not too bothered about losing data as it's only a server to play around with my own cloud. It's mainly to try out clustering, clouds, etc. - with 16 cores I can set quite a lot up on the one server.

I did some playing with a set of 4x73GB 10K SAS drives to test RAID speeds, this is what I got

4x73GB SAS 10K (/dev/cciss/c0d1):
RAID 10, 146GB - read 161MB/s (6.4ms access), write 147MB/s (7.7ms access)
RAID 5, 220GB - read 190MB/s (7.2ms access), write 173MB/s (9.9ms access)
RAID 0, 293GB - read 200MB/s (7.3ms access), write 382MB/s (7.7ms access)

I ended up using RAID 5 as it gives the best balance of capacity, performance and tolerance. Note that this is a dedicated RAID card with 256MB of cache memory on it. If you want decent performance from drives, I think a card is the only way to go. Having said that, my desktop has an 8-port LSI SAS RAID controller built in, although I've not played with the SAS drives on it as I'm selling them.

Excelsius said:
By the way, do you use WOL to remotely turn on your server? Is there a good guide for this?

Sorry - Don't use WOL - after all it's about 4 feet away from me.

May 22, 2011 12:20:47 AM

If all you need is a file server, any old PC should handle the load. An underclocked Core 2 is quite miserly with power and is still plenty powerful.
May 22, 2011 3:46:44 AM

nigelren said:
I'm not too bothered about losing data as it's only a server to play around with my own cloud. It's mainly to try out clustering, clouds, etc. - with 16 cores I can set quite a lot up on the one server.

I did some playing with a set of 4x73GB 10K SAS drives to test RAID speeds, this is what I got

4x73GB SAS 10K (/dev/cciss/c0d1):
RAID 10, 146GB - read 161MB/s (6.4ms access), write 147MB/s (7.7ms access)
RAID 5, 220GB - read 190MB/s (7.2ms access), write 173MB/s (9.9ms access)
RAID 0, 293GB - read 200MB/s (7.3ms access), write 382MB/s (7.7ms access)

I ended up using RAID 5 as it gives the best balance of capacity, performance and tolerance. Note that this is a dedicated RAID card with 256MB of cache memory on it. If you want decent performance from drives, I think a card is the only way to go. Having said that, my desktop has an 8-port LSI SAS RAID controller built in, although I've not played with the SAS drives on it as I'm selling them.


Sorry - Don't use WOL - after all it's about 4 feet away from me.


I don't understand why your RAID 5 is faster than RAID 10. Your overall speeds also seem slow for four 10K RPM disks. Have you looked into this?

The other issue is that even with link aggregation, unless you have dozens of computers making multiple simultaneous requests, I don't see how you could ever use up even the RAID 10 speed, since the gigabit bottleneck is already slower than that. I was looking into getting faster-than-gigabit speed on the LAN, but currently the only way to get more than 1Gbps is to use fiber, where the card alone costs around $500 or more. It's interesting how slowly network speeds are evolving.
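The bottleneck argument above is easy to check with a rough sketch: a pipeline moves data only as fast as its slowest stage, so once the array outruns the LAN, faster disks stop shortening transfer times for remote clients. Figures are illustrative.

```python
# A file transfer through disk + network runs at the slowest stage's speed.

def transfer_seconds(file_gb: float, *stage_mb_s: float) -> float:
    """Seconds to move a file through a pipeline of stages (MB/s each)."""
    bottleneck = min(stage_mb_s)        # slowest stage sets the pace
    return file_gb * 1000 / bottleneck  # 1 GB treated as 1000 MB

ten_gb = 10.0
# RAID 10 reading at a theoretical 440 MB/s, gigabit LAN at 125 MB/s:
print(transfer_seconds(ten_gb, 440, 125))  # → 80.0 (LAN-bound)
# A single 110 MB/s drive behind the same LAN (drive-bound, ~91 s):
print(transfer_seconds(ten_gb, 110, 125))
```

In other words, going from one drive to RAID 10 only buys about 11 seconds on a 10GB remote copy; the rest of the array's speed is usable only locally or across bonded links.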

I have been considering SAS for the server as well, but I think it's too expensive to justify the cost, especially with SSDs on the horizon. The error correction and life expectancy are better, but with daily backups of the RAID 10 I should be covered.
May 22, 2011 3:57:08 AM

popatim said:
If all you need is a file server, any old PC should handle the load. An underclocked Core 2 is quite miserly with power and is still plenty powerful.


The server is going to have applications installed, as well as controlling the antivirus and other security software on the network computers. A Core i3 should work. Plus, it's not just the CPU - the newer motherboards are more robust.
June 7, 2011 12:57:24 PM

Hey everyone,
I just bought an HP ProLiant DL585 G2. I am wondering if I can install two Nvidia Quadro FX 5500s in SLI in this machine and convert it to a workstation. I need it for intensive CAD applications. Any suggestions?
June 7, 2011 3:46:07 PM

profsegp said:
Hey everyone,
I just bought an HP ProLiant DL585 G2. I am wondering if I can install two Nvidia Quadro FX 5500s in SLI in this machine and convert it to a workstation. I need it for intensive CAD applications. Any suggestions?


If you have a look at the quickspecs at http://h18000.www1.hp.com/products/quickspecs/12575_div..., the 585 only goes up to PCIe x8, whereas I think the card you're looking at needs PCIe x16. The main thing about the 585 is also the noise - it isn't quiet!

I've been working with an XW9400 motherboard to build a desktop machine. It's designed to be a workstation, so it has dual PCIe x16 slots, runs dual quad-core Opterons (may run 6-core), and has loads of SATA and SAS connectors for disks.

On the plus side, though, the 585 is easy to upgrade: I swapped 8218s for 8356s without a problem. I didn't try the Shanghai processors, as I found some cheap 8356s on eBay, but those would give you even more processing power.
June 30, 2011 12:05:53 AM

I have mentioned server notes in previous posts here; look me up and you should get most of the info you need. Also, if you are only using it for what you mentioned, look into VMware ESXi. It should work beautifully for what you are doing. It has a bit of a learning curve, but it is free and extremely powerful. I have built several servers and, to be honest, I would say buy a preconfigured Dell, HP, or Intel solution. They may seem pricey to begin with, but if you have never built a server with actual server components before, it is not worth starting now. You can get similarly priced (within several hundred dollars) Intel units for what you would pay to build your own with server boards/procs (Xeons are pricey unless you eBay them). I do recommend Intel because they run cooler. AMD makes good, solid, cheap procs, but Intel is more renowned in the server processor market, and their parts tend to have a longer life (support-wise). Support from the RAID controller and motherboard manufacturers is key to a successful server. Intel updates server boards about once a month, backplanes once or twice a year, HSCs yearly, BMCs yearly, and RAID controllers bi-annually, usually.

NEVER use a consumer motherboard in a server. Yes, they are cheap; yes, they are good. No, they cannot handle the demands a server throws at them. I bought a nice Asus motherboard for my home gaming machine - yes, it was expensive. I bought a $400 server board for my company's backup server - that was cheap. You pay a premium for a reason.
July 9, 2011 7:05:07 AM



Quote:
the ethernet must be gigabit capable,


AND offer redundancy. Server boards have two NICs, usually gigabit but not always - in older servers, anyway.

Quote:
there must be a RAID controller for the RAID config you want with a good reporting software (unless you get a separate RAID controller),


Enterprise servers always have separate RAID controllers. SMB (small and medium business) servers can get by with ROMB (RAID on Motherboard), but most firms with extensive needs use add-in controllers to allow preservation of data in unexpected scenarios*. These sometimes use a ROC (RAID on Chip). (*Power failure: RAID controllers have a backup battery, auto-write-complete, and similar features.)


Quote:
2. RAID Controller: see above. This component is needed if the motherboard doesn't have a RAID controller. It should have a good software that can monitor the integrity of all drives and send email reports. My question in this area is: are there any advantages to using a dedicated RAID controller? Also, what features should one look for to achieve maximum transfer speeds? Newegg does not list the speed of each controller.


There is a MASSIVE benefit to using dedicated RAID controllers. ROMB is processed on the motherboard, where everything else is also being processed and transmitted; thus, ROMB is MUCH slower than a dedicated controller. Dedicated controllers are also MUCH more reliable because they are ONLY a controller. If a motherboard has a problem, you lose your entire array on a ROMB setup, whereas if your dedicated controller faults, it may break the array, but this is usually correctable by installing an exact-match controller and telling it what is what. (Not often done, as controllers are fairly stable units - MUCH less likely to fail than a motherboard.)

Quote:
3. Memory: 4GB for W7 should be sufficient. I have not found a good way to test memory usage on the system to determine whether more memory is needed. The native W7 memory usage tracker is not accurate because it modifies memory usage based on what you have already available. A server for a dozen or so computers should not need much more than 4GB RAM.


Unlike your Windows 7 machine, where the processor handles most of the data, a server pushes most of it through the RAM so the client (the user asking for said file) receives the file faster. This is why servers require 'special' RAM. The RAM in your PCs is 'cheap' RAM: it can have faults, inconsistencies, and errors. In a server, the RAM is buffered and ECC, so it stays out of the way. It essentially doesn't have flaws, which is what allows it to handle many simultaneous requests - there is nothing in its path (such as slow chips, "dirty" paths, etc.). This is a very simple way to look at it, but I hope it illustrates its purpose.
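The error-checking idea behind ECC can be sketched with a toy example. Real ECC DIMMs use wider SECDED codes that also *correct* single-bit errors; this Python sketch shows only the detection half, for intuition:

```python
# Toy parity-bit illustration of how ECC-style memory spots a flipped bit.
# Real ECC uses SECDED (single-error-correct, double-error-detect) codes.

def with_parity(bits):
    """Append an even-parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def check(word):
    """True if the word still has even parity (no odd-count corruption)."""
    return sum(word) % 2 == 0

stored = with_parity([1, 0, 1, 1])
assert check(stored)        # clean read: parity holds
stored[2] ^= 1              # simulate a one-bit flip in memory
print(check(stored))        # → False (corruption detected)
```

A single parity bit only detects an odd number of flipped bits; the Hamming-style codes on real ECC modules spread several check bits across the word so the controller can also pinpoint and repair a single flip.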

Also, I wouldn't use anything less than 8GB of RAM in a Windows '03 or '08 server. Again, RAM is more important than nearly any other single component in a server, whereas in your consumer desktop PC it's the processor and chipset.

I will continue later with this elaboration. Thank you for posting.
