The Server Primer, Part 1

March 13, 2007 10:31:56 AM

What are the differences between consumer hardware and professional server products? Tom's Hardware evaluates professional components and their characteristics.


March 13, 2007 3:17:15 PM

Not that I'm an expert, but like the article states, server hardware is chosen based on its stability. Server admins are usually willing to sacrifice speed for uptime. A high-end desktop will have more raw performance than a single server blade.

And, servers are a lot more proprietary than desktops. Building a server is a more rigorous exercise than a desktop because most computer parts are compatible in a desktop while the opposite is true for a server.

Another note of interest: in a lot of servers (especially HP) the BIOS is not stored on a mobo chip. It's stored in a small hard drive partition. And often a server must be 'primed' through the BIOS to run a particular OS, which makes switching OSes take more steps.
March 13, 2007 5:21:48 PM

I really appreciated this article and I look forward to its successors. As a PC enthusiast who has recently "made the plunge" into network support as a career, I am in need of articles like this that help me expand my knowledge base. Keep these articles coming!
March 13, 2007 5:33:00 PM

I currently work for IBM as a System x/Blade Server Engineer. You left out the Tulsa processors, the 71xx line of processors. These are so great, with 16MB L3 and 2x1MB L2. Either way, good article.

Maybe next time you should talk about some blades, and how they can save lots of space for less than the cost of several 2U or 3U rack mounts.
March 13, 2007 8:41:02 PM

The difference? Server/enterprise markets need longer lifetimes for their hardware, as it's quite expensive to have to replace what can sometimes amount to hundreds of computers. RELIABILITY! They're designed for reliability and SPEED!

Really good article, I'm big into this kind of stuff, hence I'm running dual-xeons and a couple SAS drives. I wish the Quadro FX 5600 was cheaper and could support SM 5.0, but oh well.
March 13, 2007 9:07:22 PM

Another HUGE difference:
Enterprise class hardware scores low in 3dmark06.


Great article.
March 13, 2007 9:23:13 PM

You mean like their workstation GPUs? Well, yeah... they're not meant for that kind of work, they're meant for stuff like Maya and CAD programs. I mean, you can game with them, sure, but DX and OpenGL are two different things. Then again, I'm not much of a gamer, what can I say?

Speed is a huge factor though, that's why they have SAS drives, despite their small capacity.

It's true though, it does score low in 3DMark06, but that's rather irrelevant to the enterprise market anyway.
March 13, 2007 9:27:33 PM

Lol, SAS drives are not so small any more. Sitting next to me are three 300GB SAS drives that I can't use because I don't have a SAS controller.


oh well! Makes for Atlas art.

Note: I'm just joking about playing games on an enterprise server.
March 13, 2007 9:29:34 PM

I know they have 300GB SAS drives, but compared to the 750GB and soon 1TB drives, they're expensive as far as GB/dollar figures go. I have two 147GB 15k rpm SAS drives in my rig right now, though I'm not using them since they eat up power like crazy.
March 13, 2007 9:35:35 PM

300GB SAS drives were $1000 last month...

Yeah, a bit expensive.
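
To put the GB/dollar point in numbers, here's a quick back-of-envelope sketch. The $1000 for a 300GB SAS drive comes from this thread; the other prices are rough guesses for early 2007, so treat them as assumptions:

```
# Rough $/GB comparison for early-2007 drives.
# Only the 300GB SAS price comes from the thread; the rest are assumed.
drives = {
    "300GB 15k SAS":   (300, 1000),  # (capacity in GB, price in USD)
    "147GB 15k SAS":   (147, 550),   # assumed
    "750GB 7.2k SATA": (750, 280),   # assumed
}

for name, (gb, usd) in drives.items():
    print(f"{name:>16}: ${usd / gb:5.2f} per GB")
```

Even with generous guesses, the big SATA drive lands around a tenth of the SAS drives' cost per gigabyte.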
March 13, 2007 11:02:32 PM

Correct me if I'm wrong, but SCSI drives and SAS drives have read-after-write verify. I know the old IDE drives did not, but I'm not sure about SATA drives. Back in the old days that's why we used SCSI in servers: faster, longer life, and less chance of write errors.
Lee

I posted this comment in the wrong thread the first time; if a forum admin sees it, please delete. Sorry.
Lee
March 14, 2007 12:15:02 AM

Yeah, now I take it those are 10k rpm? I know that Seagate is making 15k 300GB drives, but those are at least like $1200. TOO MUCH! My 147GB drives are quite sufficient. That, and my SAS controller supports SATA drives, so that saves me money!
March 14, 2007 12:25:36 AM

I always wondered if Fibre Channel hard drives were faster than SCSI. When I say faster, I mean read/write transfers, access time, etc...
March 14, 2007 12:26:48 AM

Fiber provides greater bandwidth, but like just about every other interface out there, it's limited by the I/O of the drive itself.
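
To put rough numbers on that: the link speeds below are nominal usable rates, and the drive's sustained transfer is an assumption for a 15k drive of that era, so take it as a sketch rather than a benchmark:

```
# Interface bandwidth vs. what a single drive can actually deliver.
# Link rates are nominal usable figures; the 90 MB/s sustained rate for a
# 15k rpm drive of this era is an assumption.
links_mb_per_s = {
    "U320 SCSI":         320,  # shared across the whole bus
    "3Gb SAS/SATA":      300,  # per lane, after 8b/10b encoding
    "4Gb Fibre Channel": 400,  # after 8b/10b encoding
}
drive_sustained = 90  # MB/s, assumed

for name, link in links_mb_per_s.items():
    print(f"{name:>18}: {link} MB/s link, one drive fills ~{drive_sustained / link:.0%}")
```

Whatever the interface, a single drive leaves most of the pipe empty; the fatter links only start to matter once you hang a stack of drives or a big cache behind them.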
March 14, 2007 12:28:31 AM

Would it be good for gaming? I know there are some on eBay that aren't that expensive, but the controller card can be awfully expensive.
March 14, 2007 12:48:39 AM

It wouldn't be any faster than a SCSI or SAS drive, and if you're concerned about gaming, your HDD has very little to do with the game; that's more based on your RAM, CPU and GPU.
March 14, 2007 12:55:03 AM

Well, my bad. I said gaming, but I meant overall performance versus, for example, a Raptor SATA drive.
March 14, 2007 1:38:45 AM

Hmm... compared to a single SATA drive like a Raptor? SCSI and SAS really shine in RAID, and where you have TONS of sequential reads and writes, but they're still fast on their own.
March 14, 2007 5:00:18 AM

Quote:
It wouldn't be any faster than a SCSI or SAS drive, and if you're concerned about gaming, your HDD has very little to do with the game; that's more based on your RAM, CPU and GPU.


SCSI = SAS, btw.
March 14, 2007 7:02:56 AM

Thanks for the tip. I had to go look it up.

SAS = Serial Attached SCSI, and some SAS controllers can use SATA drives.

Wiki = the new learning tool for the 21st century.
March 14, 2007 8:03:11 AM

In the larger corporate world you never write directly to a drive anyway; EMC and IBM's Shark units have all writes going into cache. You never even see single drives on these units, just logical LUNs carved out by the storage admins... I'm a UNIX admin who works closely with the storage guys. RELIABILITY comes first and performance second... sometimes a distant second in the corporate world.
March 14, 2007 9:29:26 AM

I know they're the same drives, just different interfaces... but thanks for clarifying that anyways.

@immagikman

You bring up a good point, and it's true that you rarely see single drives on servers; maybe on a small desktop in your little cubicle, but that won't be SCSI. Yup, writing into cache is actually another good point. I think in the next part of this Server Primer they should bring up more of the differences between servers and consumer products.
March 14, 2007 9:46:56 AM

Quote:
I know they're the same drives, just different interfaces... but thanks for clarifying that anyways.

@immagikman

You bring up a good point, and it's true that you rarely see single drives on servers; maybe on a small desktop in your little cubicle, but that won't be SCSI. Yup, writing into cache is actually another good point. I think in the next part of this Server Primer they should bring up more of the differences between servers and consumer products.


Well, yes, there's not only one drive in a server, but mine has one SCSI drive (with the OS installed) and the others are IDE for storage. My server is a bit old; I would change to SATA II for storage.
March 14, 2007 8:01:27 PM

Well... I was talking enterprise level, or at least small-business level, but OK.
March 15, 2007 3:21:35 AM

I am definitely talking Enterprise level. Our UNIX servers (except for a few very old ones) are configured with 1 to 4 drives for the OS; all applications and DBs go out on the SAN storage.

Our internal drives for the OS are all some form of SCSI, but as I work with this stuff all day long, I realize that enterprise-level gear is not really your best bet for gaming. Our WINTEL folks are in the process of getting the storage for MS out to the SAN, and the Linux servers were always set up like the UNIX boxes.
March 15, 2007 9:25:51 AM

Ahh... OK, I get you now. Sorry, I was a bit confused there for a moment.
March 15, 2007 2:33:06 PM

Very good article, but at first glance it misses a few subtle and not-so-subtle things...

Such as the performance factor with FB-DIMMs in deeply queued instructions.

Or that servers will frequently be marketed with backup solutions as well, which is an important matter for everyone but marketed to desktop users only as an afterthought.

Or that mid-size and up hardware will come with, or have as an option, "lights-out" console access via IP.

And that enterprise-grade hardware comes with managed baseboards and components, and with the software to monitor and administer it all.

Thanks,
-Brad
March 15, 2007 4:01:58 PM

Quote:
Server admins are usually willing to sacrifice speed for uptime. A high-end desktop will have more raw performance than a single server blade. [...] Another note of interest: in a lot of servers (especially HP) the BIOS is not stored on a mobo chip. It's stored in a small hard drive partition.

Something will be sacrificed, but if you don't mind sacrificing budget, then speed and uptime do not have to suffer. But as far as "raw performance" goes, you can't buy a server-grade box and expect it to perform as well as an equivalently spec'd desktop in typical desktop use. They are engineered for vastly different work patterns.

And I think you're confusing the BIOS with the system partition utility. But yes, many of the things that are in a desktop's BIOS UI have been moved to the system partition utility in some of the enterprise servers, and the BIOS offers you the choice of booting an installed OS or the contents of the system partition utility.

-Brad
March 15, 2007 11:19:01 PM

Quote:
Another note of interest: in a lot of servers (especially HP) the BIOS is not stored on a mobo chip. It's stored in a small hard drive partition. And often a server must be 'primed' through the BIOS to run a particular OS, which makes switching OSes take more steps.


Not true these days. With the old Compaqs you still had the BIOS in flash, but it required the software on another partition to configure it. I have seen many older Compaq servers that didn't have that partition, and you had to use a boot floppy to reconfigure the BIOS. Put simply, a computer won't boot without a BIOS, so a server that works with that partition missing must have a BIOS on the chip.

For the current HP x86 servers, you can configure the BIOS like any other board.
March 15, 2007 11:23:41 PM

Quote:
I am definitely talking Enterprise level. Our UNIX servers (except for a few very old ones) are configured with 1 to 4 drives for the OS; all applications and DBs go out on the SAN storage.


Four drives for the OS? We just put two 72GB drives into our HPs and use LVM to mirror the disks, and we haven't run out of space in three years. But we also keep the home directories of heavier users on the SAN along with everything else.
March 15, 2007 11:42:11 PM

Quote:
Or that mid-size and up hardware will come with, or have as an option, "lights-out" console access via IP.


I really like the iLO on our HP servers. I can mount a CD ISO sitting on my laptop to the server and boot from that ISO image without being anywhere near the server.
March 16, 2007 12:48:34 AM

Quote:
Or that mid-size and up hardware will come with, or have as an option, "lights-out" console access via IP.

I really like the iLO on our HP servers. I can mount a CD ISO sitting on my laptop to the server and boot from that ISO image without being anywhere near the server.
Yup, good stuff, that. HP has iLO or RILOE depending on vintage, Dell has DRAC, or at least that's what it was called the last time I looked at Dell boxes, and IBM's (whose cursor control seems much more reliable than HP's, by the way) is called RSA. They all give you not only remote console access and power control but also health data, absurdly detailed inventory information, etc., all right through a web browser.

The only dark side to this is that such remote control flexibility makes it easier to take offshore all the server work beyond racking the box and getting power and connectivity attached. :(  Oh well.
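
For what it's worth, most of these BMC generations (iLO 2, DRAC 5, RSA II) also speak IPMI over the LAN, so the power control and sensor data can be scripted rather than clicked through the web UI. A minimal sketch, assuming IPMI-over-LAN is actually enabled on the BMC and ipmitool is installed; the address and credentials are placeholders:

```
# Minimal sketch: query power state and sensor readings from a BMC via IPMI.
# Assumes IPMI-over-LAN is enabled and ipmitool is on the PATH.
# Host, user and password below are placeholders, not vendor defaults.
import subprocess

BMC = ["-I", "lanplus", "-H", "10.0.0.50", "-U", "admin", "-P", "changeme"]

def ipmi(*args):
    """Run one ipmitool command against the BMC and return its output."""
    result = subprocess.run(["ipmitool", *BMC, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
print(ipmi("sdr", "list"))                 # temperatures, fan speeds, voltages
```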

-Brad
March 16, 2007 12:50:12 AM

RSAII is also very sweet for remote management.

I liked the article as well, and I think (as someone mentioned earlier) it would be nice to see an article on blade servers. There are so many different options between vendors and models, but they're all pretty sweet from a hardware perspective.

Edit:

bberson- You will always need onsite hardware techs though, because as of now there is no way to swap parts without a person at the server. Until they make hardware tech robots... then we're screwed 8O
March 16, 2007 1:56:17 AM

Well, technically we use two devices mirrored for rootvg, and in the AIX world the other two devices are used as an Alt-Disk Install pair... aka a spare rootvg.

In the HP world we use two mirrored devices for vg00 and the other two devices as a mirrored image of vg00 that gets updated weekly... yeah, we don't use all 4 devices for the OS at the same time... 2 are the ready backups in case of disaster, or if someone screws up while implementing a patch... you just switch the boot path to the other pair.


Home directories and all apps go out on the SAN; we "TRY" to keep root as clean as we can... unfortunately, when they hire new guys they usually put them to building systems... and of course they don't know what they are doing... but... that's a whole different rant :) 
March 16, 2007 7:54:45 PM

Hi, I have a couple of comments. In my experience you can split server buyers into four groups.

Small-sized business. Normally one or two servers, for file/print and maybe email. Unlikely to use a big-brand server; more likely a high-end PC with some sort of backup solution. Desktop CPU, DDR2 memory, desktop ATX motherboard with SATA RAID, 100Mb LAN. No dedicated IT personnel. Expected server life 5 to 7 years; may upgrade any component. Price is the main selection criterion.

Medium-sized business. Again one or two servers, as above. More likely to be branded: single Xeon/Opteron, ECC DDR2, custom motherboard, SATA RAID, Gb LAN, redundant PSU. Directly attached tape drive. Dedicated general IT person. Expected server life 4 to 5 years; may upgrade CPU, memory, disk. Reliability is the main selection criterion.

Large business. Dedicated computer room with server racks. System management card with 'out of band' KVM for remote support and active monitoring/alerting. Dedicated server IT support. Multiple servers, file/print and apps. Clustering to improve uptime. Branded only: multiple CPUs, FB-DIMMs, SAS RAID, redundant PSUs and LAN. Hot-swap components. Network backup. Expected server life 3 or 4 years; often leased, so unlikely to upgrade except disk or memory. Reliability and manageability are the main selection criteria.

Enterprise. Multiple data centres. SAN with dark-fibre off-site backup. Servers as per large business, but little internal storage (often just a mirrored pair for the OS). Potentially blade systems, 4- or 8-way, running apps or virtual servers. Separate operations, hardware and software teams. 3-year lease; unlikely to upgrade. Manageability is the main selection criterion; power usage and heat have also become high on the agenda.

Good article, mainly focused on the small buyer. Other buyers are more likely to select branded systems from vendors who have done the component matching and testing.

Check the warranty, as onsite support can often be half the cost of the server. Response and fix times vary; funnily enough, with clustering and redundant/hot-swap hardware, problems can often be fixed with no downtime.

For large and enterprise solutions, looking at TCO, hardware is less than half the cost, with power usage/cooling and management accounting for the larger share.
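
To illustrate with a toy example (every number below is an assumption, just to show the shape of the math):

```
# Toy 3-year TCO sketch for a rack of servers -- every figure is an assumption.
hardware   = 120_000   # purchase/lease over 3 years, USD
power_kw   = 8.0       # average draw including cooling overhead
kwh_price  = 0.10      # USD per kWh
admin_year = 60_000    # admin/management time attributed to this rack per year

power_3yr = power_kw * 24 * 365 * 3 * kwh_price
admin_3yr = admin_year * 3
tco = hardware + power_3yr + admin_3yr

for name, cost in [("hardware", hardware),
                   ("power/cooling", power_3yr),
                   ("management", admin_3yr)]:
    print(f"{name:>14}: ${cost:>9,.0f}  ({cost / tco:.0%} of TCO)")
```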

I have come across many techies who are home enthusiasts and do not focus on the business side in larger environments. Understanding the technology is useful, but understanding the business (required uptime, disaster recovery, business continuity, security, operational procedures) is what makes a successful server implementation.
March 17, 2007 9:20:36 PM

There are a couple more important but easy topics not mentioned. I see this article is named "Part 1" so perhaps more will be coming?

Oh well, I'll quickly go over what I see as missing. :) 

- Processor failover
- Hotplug fans
- Sound, what sound?
- Memory Mirroring/Sparing
- Server size and density

-------------------------------------------------


- Processor failover - Keep on running if the primary CPU fries

For business-critical work where downtime means losing money in website sales (think of Newegg or eBay going down for an hour), keeping the system running at all costs can go so far as to permit an entire CPU to die while the server keeps limping along until you can power down to fix it.


- Hotplug fans - replace dead fans with the power on

This allows you to keep a business-critical server running if the fans are on the verge of failure, slowing down, or seized. Standard fans like those seen in desktop cases are mounted in frames that just plug right in and power up the instant you insert them into the machine; no need to hunt for a power connector somewhere on the board. Removal involves squeezing a clip and popping the fan out, without worrying about the power connection.

Note that you usually have to move fast to avoid damage, since server cases are heavily engineered for airflow efficiency and use the case as an integral ducting component. Especially with the 1U pizza-box servers, the case must be closed or the server will overheat and fry itself. Most server cases cannot be open for more than a couple of minutes before the components start to overheat and damage occurs.


- Sound, what sound?

I see the basic ATI Rage video mentioned, but I don't recall seeing sound mentioned. Servers do not have built-in audio; for the most part they will never have speakers installed, so why waste the money?


- Memory Mirroring/Sparing - It's RAID 1, except with memory!

Do you think server memory costs a fortune? Well, now you can double what you pay for it. With mirroring, if you buy 8 gigs of memory you actually only have 4 gigs available, but if one bank suffers a serious multi-bit error that not even ECC can deal with, the server deactivates that entire bank and continues running on the other one. Sparing is the cheaper cousin: one rank is held in reserve, and when a DIMM starts logging too many correctable errors its contents are migrated onto the spare.

Usually you need a certain minimum number of sticks, and they must all be identical, for either feature to activate. If memory is installed in pairs, then you need two pairs for mirroring or sparing to work.
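
A quick sketch of the capacity trade-off (the installed configuration is just an example, and real platforms have their own rules about what can be spared):

```
# Usable memory under mirroring vs. rank sparing -- a simplified sketch.
# Real platform rules vary; this only illustrates the capacity trade-off.
dimms_gb = [2, 2, 2, 2]              # example: four 2GB FB-DIMMs installed
installed = sum(dimms_gb)

mirrored = installed / 2              # second bank keeps a live copy of the first
spared   = installed - max(dimms_gb)  # roughly one rank held in reserve

print(f"installed {installed} GB -> mirrored: {mirrored:.0f} GB usable, "
      f"spared: ~{spared} GB usable")
```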

Interestingly, Dell PowerEdge servers support this, but their product configurator does not mention it, so it's up to you to know what you're buying if you intend to turn this feature on.


- Server size and density - Smaller is not necessarily better

Those 1U (1.75" tall) pizza-box servers sure look cute, and are probably your idea of a performance server, but be warned: they are extremely loud and sound like a jet engine flying at low altitude. They use a large collection of tiny 7,000+ RPM turbofans to force high-pressure air through a confined and heat-dense system. Just a single 1U server can drown out all conversation in the room. This is not something you stick in a bedroom.

You don't buy 1U servers unless you are really tight for space or want to think you're some cool high-tech company. Most small businesses don't need 1U servers.

The bigger the server, the slower and quieter the fans can be, and the biggest common standard is the 5U tower/rack server. These are practically silent compared to even a standard desktop system. These big-box servers normally sell as a tower that stands on the floor, but they measure exactly 19" by 5U, so turned on their side they will fit perfectly into a rack with a rackmount kit.

Dell is a little weird in this regard; their biggest full-size performance server is the PowerEdge 2900, which sits apart from their rack-only servers.

The big, fat, quiet 5U servers are fine for a small organization that is not expanding rapidly, and four of them can fit into a rack that is about chest-high, like a file cabinet. It's time to start looking at smaller, thinner servers only when your company is growing rapidly and you are at risk of running out of expansion space with the bigger, quieter machines.

-Javik
March 17, 2007 9:36:26 PM

And two of the more intermediate topics, before it gets too complex for a small business or home user to deal with. :) 

- RAID 1/5 + hotspare + Hot plug drives
- Sentenced to the Rack

--------------------------------------------------


- RAID 1/5 + hotspare + Hot plug drives - Survive drive failures and repair a running machine!

For a business-critical server where downtime means losing money, you don't want the entire server to crash just because a hard drive fails. So servers almost universally use RAID 1 (mirroring) or RAID 5 (parity) to provide at least one redundant drive. Any one drive can fail and no data is lost, though performance may suffer.
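
For anyone curious how the RAID 5 case works, the parity is just an XOR across the stripe, so any single missing block can be rebuilt from the survivors. A toy sketch with three data drives plus parity and tiny blocks for clarity:

```
# Toy RAID 5 stripe: parity is the XOR of the data blocks, so any one
# missing block can be recomputed by XOR-ing everything that's left.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data drives
parity = xor_blocks(data)            # block on the parity drive

# Drive 2 dies: rebuild its block from the surviving data plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)
```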

If you can afford it, you can have a hotspare drive. This is a completely empty hard drive, the same size as your other RAID drives, fully powered up and spinning, but containing no data at all. If one of your RAID 1/5 drives dies suddenly, the hotspare is instantly available to reinstate the redundancy. It takes time for the RAID controller to rebuild the redundancy by copying the known-good data over to the hotspare, and performance suffers during the rebuild, but once it finishes, in roughly 10-30 minutes, the server is again fully protected against further drive failures.
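
That 10-30 minute figure is easy to sanity-check; the rebuild rate below is an assumption, and real controllers usually throttle rebuilds while the array is busy:

```
# Rough hotspare rebuild time: capacity divided by the controller's rebuild rate.
drive_gb = 73            # e.g. a 73GB SCSI/SAS drive of the era
rebuild_rate_mb_s = 50   # assumed; controllers throttle this under load

minutes = drive_gb * 1024 / rebuild_rate_mb_s / 60
print(f"~{minutes:.0f} minutes to rebuild a {drive_gb}GB drive at {rebuild_rate_mb_s} MB/s")
```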

So now your critical server contains a dead hard drive. How do you get it out without shutting down the server? With traditional SCSI/IDE/SATA, you need to turn off the computer to pull the drive and replace it. With hot-plug drives, you can pull the dead drive out, install a new one, and have it detected, activated, and reinstated as a new hotspare, all with the power on and the server still busy doing work.

(Dell is again annoying in this regard. Their servers with PERC RAID generally all support hotsparing, but you often cannot specify a hotspare when buying the machine using the braindead online configurator. Grrrr...)


- Sentenced to the Rack - For the growing small business

Once you've reached the point of about three servers in one room, it's time to look at getting a rack to hold the growing pile of equipment. Spreading stuff across the floor takes up lots of space, and servers generally are not meant to stack free-standing on top of each other. You can do it, but when you gotta fix the bottom one, you have to disconnect and move everything on top of it. Arrrggghhh...

Racks are designed to organize your growing mess. Typically rack-mount servers sit on sliding rails, and you can pull each one out of the rack like a file cabinet drawer. This lets you stack up a pile of equipment in the rack while each device slides freely and can be pulled out for service without having to disconnect and move everything above it.

Rackmount servers also usually include a cable-management arm on the back. The arm folds up behind the server when it's pushed in, and unfolds as the server is pulled out. This keeps the cables neatly organized rather than bunching up into a snarl at the back, and you can pull the server out with the power on and the server still running.

Note that most tower servers need a special case type and size to fit in a rack, so if you intend to use a rack eventually, it is best to buy the server with support for a rack configuration rather than wait until later. Converting from a tower to a rack configuration after you've bought it can be expensive, because entire case panels may need to be replaced to mount it on rack rails.

Most modern racks are square-hole, allowing quick installation and removal of devices that simply clip into the square holes. Older racks are round-hole, and everything is fastened down with screws. Both square and round hole racks can use screws, but device installs are generally easier in a square-hole rack.

There is a third type of cheap rack which has pre-threaded holes, but generally no one uses it for servers, because you can strip the threads or break a bolt in a hole and render that mounting hole unusable. This type is used for network/phone cabling "relay racks", but only because that hardware doesn't get replaced very often and sees very little wear 'n' tear.



From here we wander off into Fiber-Channel, Storage-Area Networks and Server Clustering, which most mere mortals will never touch. :) 

-Javik
March 17, 2007 11:19:42 PM

Our HPs and IBM P5s and P6s are two to three times the size of a refrigerator, loud as a jet engine, with exhaust you can almost cook a hot dog in... carrying up to 16 logical partitions (servers) and 256GB of memory... but you can't play FEAR on them ;) 
March 18, 2007 2:59:49 AM

Well... at least not until they make it fully multi-threaded. Then... HEHEHEHE, I don't even think you'd need a GPU. Well... maybe; not sure though.
March 21, 2007 5:42:39 AM

From my point of view, this topic only covers very low-end servers. Here are some comments on why I didn't like the article:
1. The article covers only x86 (and x86_64) processors. Itanium is only mentioned, but nothing more. And I think Sun's SPARC has enough market share to at least be mentioned, even at the low end (UltraSPARC T1).
2. One statement was misleading:
Quote:
Intel is the first processor company to deliver quad core processors. The Clovertown is assembled by placing two dual core Woodcrest chips into a processor package.

What about the UltraSPARC T1 mentioned above, which already has 8 cores and 4 threads per core? I'm not sure, but I think it was launched even before dual cores arrived in x86.
3. What about security? I guess this topic was not mentioned because x86 processors don't offer any features regarding security. The T1 has CPU built-in stack overflow protection.

So I think this article should be renamed "The x86 Server Primer".
March 21, 2007 2:23:29 PM

Quote:
What about security? I guess this topic was not mentioned because x86 processors don't offer any features regarding security. The T1 has CPU built-in stack overflow protection.

My old PDP-11 had separate code and data space.

Come to think of it, RSTS/E had a much more useful ACL arrangement too.

-Brad
March 21, 2007 4:06:31 PM

Good grief you nerds need to settle down. Most of the people reading Tom's Hardware think a SPARC is what happens when the power supply fails. This site isn't Tom's Mainframe Guide. :p 

You're looking for the article Introduction to Esoteric Server Hardware That Cannot Run Windows Operating Systems. :lol: 

-Javik
March 22, 2007 4:21:24 AM

OK, then going back to Wintel, what about the NX bit and Data Execution Prevention? Jurna writes, "What about security? I guess this topic was not mentioned because x86 processors don't offer any features regarding security."

That is simply not correct.


-Brad
March 22, 2007 5:06:23 AM

Quote:

That is simply not correct.

Ok. That means I don't know that much about x86. I think the Opteron also has stack overflow protection. What about Xenons then? What security-related features does Xenon have?
March 22, 2007 12:27:03 PM

Quote:

That is simply not correct.

Ok. That means I don't know that much about x86. I think the Opteron also has stack overflow protection. What about Xenons then? What security-related features does Xenon have?
Xenon is a gas.

-Brad