While there are many technical and application-driven reasons why virtualization makes a lot of sense, there is one huge reason that will finally accelerate deployment of virtualization technology. Servers are expensive, to say the least, and high-end servers can be especially expensive; the fewer servers you need, the more money you save. Powerful systems that can run multiple or even dozens of different operating system partitions/sessions will help consolidate costs for both the server hardware and the operating environment.
Even today, one powerful quad-socket, quad-core computer (16 processing cores) with plenty of resources should be capable of hosting several individual systems. While there are certain limitations inherent to a single system (limited I/O capabilities), this solution is highly interesting for applications that require frequent and rapid deployment of additional servers. Think of software development companies, or even an ISP that wants to sell lots of hosting packages within a short period of time without having to buy dozens of servers every week.
Given a solid software architecture, scalable, clustered high-end environments can move an operating system partition from one physical system to another, or from one virtual partition to another on a different server, without touching any of the hardware.
If paired with nice upgrade options, purchasing higher-end servers with support for virtualization technology makes a lot of sense for customers with a high sensitivity to TCO. A system that is flexible enough to support future processors with a higher core count, and to move operating system partitions and applications across the software infrastructure, is invaluable in the enterprise space.
RAS stands for Reliability, Availability and Serviceability; according to Wikipedia, the term was coined by IBM to support its mainframe computers. (It also stands for Remote Access Service, so don’t get the two mixed up.) But how does RAS actually translate into professional products?
A system can only be fully reliable if it is capable of detecting problems to avoid delivering corrupt data, and possibly of fixing the problem itself. If this is not possible, the system needs to notify the administrator that it is no longer working reliably.
Most companies use this term to refer to total annual downtime in hours, minutes and seconds. A common way of expressing this is an availability statement of 99.9%, which, in the context of a year with 8,760 hours, corresponds to roughly 8,751 hours of uptime. This means that the product may be offline for a total of almost nine hours per year. Availability targets can be reached by deploying mechanisms that help a system remain available even when errors occur.
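The downtime math above is straightforward to verify. A minimal sketch (the figures for 99%, 99.9% and 99.99% are just the standard "nines" tiers used for illustration):

```python
# Allowed annual downtime for common availability targets.
HOURS_PER_YEAR = 8760  # 365 days x 24 hours

for availability in (0.99, 0.999, 0.9999):
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} availability -> "
          f"{downtime_hours:.2f} hours of downtime per year")
```

At 99.9%, the allowed downtime works out to 8.76 hours per year, matching the "almost nine hours" figure above.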
This refers to a system’s ability to self-diagnose possible issues. If you know about issues, you can use this early-warning intelligence to take countermeasures quickly, so you might be able to avoid or minimize downtime.
Here is a list of common RAS features for servers:
- Surge protection, uninterruptible power supply (UPS), other emergency power sources
- RAID setups for storage (really redundant, no RAID 0)
- Component hot-swapping
- ECC memory (Error Correcting Code), which can detect and correct single-bit memory errors
- CPU and component throttling when components get too hot
- Virtualization to balance the impact of operating system failures
- Data transmission using CRC (Cyclic Redundancy Check)
- Redundant memory and DIMM sparing
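The error-detection ideas behind two of these features, parity-style checks (the basis of ECC memory) and CRC-protected data transmission, can be illustrated with a short sketch. This is only a toy demonstration of the concepts, not how ECC hardware is actually implemented; the byte values are arbitrary examples:

```python
import zlib

def parity_bit(byte: int) -> int:
    """Even-parity bit: 1 if the byte contains an odd number of 1 bits."""
    return bin(byte).count("1") % 2

# A single parity bit detects any single-bit flip in a stored byte
# (ECC goes further and can also correct it).
stored = 0b10110010
p = parity_bit(stored)
corrupted = stored ^ 0b00000100  # simulate one flipped bit
assert parity_bit(corrupted) != p  # parity mismatch reveals the error

# A CRC detects corruption in a larger block of transmitted data.
data = b"payload sent across the wire"
checksum = zlib.crc32(data)
tampered = b"payload sent across the wIre"  # one corrupted byte
assert zlib.crc32(tampered) != checksum
print("single-bit flip and payload corruption both detected")
```

Both mechanisms trade a few extra bits of overhead for the ability to notice corruption before bad data reaches an application, which is exactly the "detect problems to avoid delivering corrupt data" requirement described above.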
Another nice resource to read is our article on Intel’s Xeon quad core "Clovertown".
There are several key players in the server market, which today has an approximate volume of 50 billion dollars, or roughly 2 million server systems:
| Company | Q3/2006 Revenue (million US$) | Market Share |
| --- | --- | --- |
| Hewlett-Packard (HP) | 3,290 | 25.3 % |
| Fujitsu and Fujitsu Siemens (FSC) | 634 | 4.9 % |
Source: Gartner (11-2006)
HP is the dominant player in the blade server market, while IBM is the leader in supercomputers and servers. Since all five of the big players address the mainstream and high-end segments of the server market, a recommendation is quite difficult to make. When it comes to customized professional solutions, you either need a vendor you can trust because you’ve had good experience with them, or you should go with the one you feel most confident about.