
Modular Server Control

Intel’s 24-Core, 14-Drive Modular Server Reviewed

Intel's Modular Server Control is a Web-based administration interface that runs on the MFSYS25's management module. It offers the administrator a well-rounded set of tools to manage, configure, and monitor the many different modules and services running on the modular server, including the compute modules, networking, storage, and power.

After logging into Intel's Modular Server Control, you're presented with a straightforward interface split into two main panes.

Once you log into the Intel Modular Server Control, you are given the Dashboard screen as a starting point. Core diagnostics are presented in the Dashboard, giving you a quick overview of the MFSYS25's system health.

On the left-hand side is a Navigation menu that provides shortcuts to the servers, storage, and switch interfaces. It's from these views that the admin can power on the compute modules, create virtual drives, or configure the external ports on the Ethernet switch module. You can also access reports that provide storage layouts, system events, and system diagnostics. The final set of objects in the Navigation menu covers the system settings needed to set up the modular server, including the network configuration for the management module, firmware updating tools, and, as mentioned before, additional feature activation.
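
If you would rather script these power operations than click through the GUI, and assuming the management module also answers standard IPMI-over-LAN requests (a common convention for server management controllers that this review does not confirm), a thin wrapper around ipmitool might look like the sketch below. The host address and credentials are placeholders.

```python
import subprocess

# Placeholder address and credentials for the MFSYS25 management module.
MGMT_HOST = "10.0.0.10"
MGMT_USER = "admin"
MGMT_PASS = "secret"

def chassis_power(action: str) -> str:
    """Issue an IPMI chassis power command: 'status', 'on', 'off', or 'cycle'.

    Assumes the management module speaks standard IPMI over LAN; if it
    only exposes the web GUI, this sketch does not apply.
    """
    cmd = [
        "ipmitool", "-I", "lanplus",
        "-H", MGMT_HOST, "-U", MGMT_USER, "-P", MGMT_PASS,
        "chassis", "power", action,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(chassis_power("status"))  # e.g. "Chassis Power is on"
```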

On the right-hand side are tabbed shortcuts to many of the items in the Navigation menu. By default, the first tab that comes up after logging into the Modular Server Control is the Dashboard. The Dashboard provides a general overview of the MFSYS25 and gives the admin a quick look at the current state of the overall system. Environmental diagnostics for power and temperature are given in an easy-to-read graphical format, as well as general system health and a quick view of critical system events. Three other tabs have great graphical tools that let you look at the machine as if you were standing right in front of it. The Chassis Front tab shows all the installed compute modules, disk drives, and their corresponding lights, while the Chassis Back tab shows all the rear-mounted components and their current states as well. The Storage tab is just as graphical as the other two, providing a nice visual picture of the storage configurations running in the MFSYS25.

The Chassis Front tab in the Intel Modular Server Control gives you manageable access to all the devices on the front side of the chassis.

Like the Chassis Front tab, the Chassis Back tab helps you keep an eye on the modules running on the rear of the MFSYS25.

The Storage tab gives you administrative access to all the disk drives and their Storage Pool configurations.

The feature that stands out most for me is the built-in Remote KVM (keyboard/video/mouse) and CD feature. Intel's inclusion of a built-in KVM is great because you don't have to go out and buy a separate device to access the compute modules' keyboard, video, and mouse controls.

Located in the Servers section of Modular Server Control, Intel's KVM lets you work on your servers as though they were right in front of you. You can also use the Remote CD feature to load ISO files onto the virtual CD-ROM drive and install operating systems from your desk. By simply launching the KVM, you can control your remote servers with your desktop mouse, keyboard, and monitor over the same browser connection.
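
One habit worth pairing with the Remote CD feature: verify an ISO against its published checksum before loading it onto the virtual drive, since a corrupted image makes for a slow, confusing remote install. The following is a minimal pre-flight sketch; the filename and digest are placeholders for whatever image and vendor-published checksum you actually use.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder name and digest; substitute the vendor-published values.
iso_path = "install-image.iso"
expected = "0000000000000000000000000000000000000000000000000000000000000000"

actual = sha256_of(iso_path)
print("OK to mount" if actual == expected else f"Checksum mismatch: {actual}")
```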

While RDP is a great tool for Windows users, you still don't get the full functionality you would with direct console access. If the server can no longer talk over the network, RDP won't connect, and you have to wait for a solid network connection before you can even get back on the machine. With KVM, I get to see what comes up during the boot process. With Linux, for example, I like to review the startup messages and catch any red flags as the operating system comes up. Without the KVM, a blind restart of the server would hide important information that the admin would see when working on a problem server locally, defeating the purpose of remote administration.
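
As a rough example of the kind of post-boot review the KVM makes possible, once a Linux compute module is back on the network you can also pull just the warning- and error-level kernel messages for a second look. This is a minimal sketch assuming a util-linux dmesg that supports the --level flag; it complements, rather than replaces, watching the console during boot.

```python
import subprocess

def boot_red_flags(max_lines: int = 20) -> list:
    """Return the most recent warn/err-level kernel messages.

    Assumes util-linux dmesg with --level support; needs permission to
    read the kernel log (typically root).
    """
    out = subprocess.run(
        ["dmesg", "--level=err,warn"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()[-max_lines:]

if __name__ == "__main__":
    for line in boot_red_flags():
        print(line)
```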

The Server Actions menu comes up when you select a server you want to work on. Included are Power, Identity, and Server Failover functions. Here we are starting up the Remote KVM and CD. These are great tools for remote installation and management.

Screenshot of the Intel MFSYS25's KVM application with a Windows 2008 desktop

Another feature worth mentioning is the Server Failover function, used to "move" a compute module's assigned virtual disks from one server to another. Whether the source server is running or not, a couple of button clicks transfer its drives to a different destination server in the chassis. Server Failover can come in handy for repairs, especially if you need to replace faulty hardware on a compute module. I've successfully failed over storage from one server to another both with the servers running and with them powered off.

However, I got a warning message recommending that the source server be powered off first. The help file explains that processes running in the operating system may not tolerate the failover and could destabilize the running machine.
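
Distilled to its steps, a cautious failover is: power the source off, move the virtual disk assignments, then power the destination on. The sketch below only encodes that ordering; reassign_virtual_disks is a hypothetical stand-in for the GUI's Server Failover action, since this review does not document a public API for it.

```python
# A sketch of the safe failover ordering, not Intel's actual interface.

def power_off(server: str) -> None:
    print(f"[power] {server}: off")  # stand-in for a real power-control call

def power_on(server: str) -> None:
    print(f"[power] {server}: on")

def reassign_virtual_disks(src: str, dst: str) -> None:
    # HYPOTHETICAL: placeholder for the Modular Server Control's
    # Server Failover action, which moves the LUN assignments.
    print(f"[storage] moving virtual disks {src} -> {dst}")

def failover(src: str, dst: str, force_live: bool = False) -> None:
    """Move a compute module's virtual disks to another module.

    Mirrors the GUI's warning: power the source off first unless you
    explicitly accept a live failover.
    """
    if not force_live:
        power_off(src)  # avoid in-flight writes destabilizing the source OS
    reassign_virtual_disks(src, dst)
    power_on(dst)

failover("server-1", "server-4")
```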

Comments
  • kevikom, January 30, 2009 6:06 AM
    This is not a new concept. HP and IBM already have blade servers. HP has one that is 6U and modular; you can put up to 64 cores in it. Maybe Tom's could compare all of the blade chassis.
  • kevikom, January 30, 2009 6:08 AM
    Also, I did not see any pricing on this. Did I miss it somewhere???
  • sepuko, January 30, 2009 10:33 AM
    Do the blades in IBM's and HP's solutions have to carry hard drives to operate? Or are you talking about a certain model? What are you talking about, anyway? I'm lost in your general comparison: "they are not new because those guys have had something similar / the concept is old."
  • Anonymous, January 30, 2009 11:04 AM
    Why isn't the poor network performance addressed as a con? No GigE interface should be producing results at FastE levels, ever.
  • nukemaster, January 30, 2009 11:35 AM
    So, when are you gonna start folding on it? :p

    Did you contact Intel about that network thing? Their network cards are normally top end. That has to be a bug.

    You should have tried to render 3D images on it. It should be able to flex some muscle there.
  • MonsterCookie, January 30, 2009 12:39 PM
    Now frankly, this is NOT a computational server, and I would bet 30% of the price of this thing that the product will be way overpriced and that one could build the same thing from normal 1U servers, like the Supermicro 1U Twin.
    The nodes themselves are fine, because the CPUs are fast. The problem is the built-in gigabit LAN, which is just too slow (neither the throughput nor the latency of GigE was meant for these purposes).
    In a real computational server the CPUs should be directly interconnected with something like HyperTransport, or the separate nodes should communicate through built-in InfiniBand cards. The MINIMUM nowadays for a computational cluster would be built-in 10G LAN, plus some software tool that can reduce the TCP/IP overhead and decrease the latency.
  • Anonymous, January 30, 2009 1:39 PM
    Unless it's a typo, they benchmarked older AMD Opterons. The AMD Opteron 200s are based on Socket 939 (I think), which is DDR1 ECC, so there is no way they would stack up to the Intel.
  • Anonymous, January 30, 2009 2:31 PM
    The server could be used as an Oracle RAC cluster, but as noted, you really want better interconnects than 1Gb Ethernet. And I suspect from the setup it makes a fair VM engine.
  • ghyatt, January 30, 2009 4:29 PM
    I priced a full chassis out for a client, and it was under 20k...
  • navvara, January 30, 2009 4:44 PM
    It can't be under 20K.

    I really want to know what the price of this server is.
  • Anonymous, January 30, 2009 4:57 PM
    Actually, it is under $20k for a fully configured system with 6 blades; I priced it up online. You can push it a bit higher than this if you go for very high-end memory (16GB+) and top-bin processors, but for most, the fully loaded config would come in around $20k. It's very well priced.
  • kittle, January 30, 2009 5:40 PM
    The chassis for your client was under 20k... no problem.

    But to get one IDENTICAL to what was tested in this article, what does that cost? I would think "price as tested" would be a standard data point.

    Also, the disk I/O graphs are way too small to read without a lot of extra mouse clicks, and even then I get "error on page" when trying to see the full-res version. That makes all the work you spent gathering the disk benchmarks rather useless if people can't read them.
  • Anonymous, January 30, 2009 5:51 PM
    The price as tested in the article is way less than $20k. They only had 3 compute modules and a non-redundant SAN and switch. Their configuration would cost around $15k. Seriously, just go and price it up online; search for MFSYS25 and MFS5000SI.
  • asburye, January 30, 2009 8:38 PM
    I have one sitting here on my desk with 6 compute modules, 2 Ethernet switches, 2 controller modules, 4 power supplies, and 14 143GB/10k SAS drives. The 6 compute modules all have 16GB RAM and two Xeon 5420s each, and 4 of them have the extra HBA card as well. Our price was under $25,000 with everything, including shipping and tax. The Shared LUN Key is about $400. We bought ours about 2 months ago.
  • Shadow703793, January 30, 2009 10:23 PM
    Quoting nukemaster: "So, when are you gonna start folding on it? Did you contact Intel about that network thing? Their network cards are normally top end. That has to be a bug. You should have tried to render 3D images on it. It should be able to flex some muscle there."

    Nahhh... you don't run F@H on CPUs any more ;)
    You run it on GPUs! CUDA FTW! :p
  • nukemaster, January 31, 2009 12:56 AM
    Yeah, that's true. CUDA kills in folding.
  • JAU, January 31, 2009 1:22 AM
    Thanks for the comments/suggestions/questions everyone. Your input is appreciated and will be applied to future reviews.

    We're addressing the issue with the network test results. - julio
  • Area51, January 31, 2009 3:09 AM
    This is the only solution that I can think of that has an integrated SAN; none of the OEMs (Dell, HP, IBM) can do that in their solutions as of yet. Also, if you configure the CPUs with L5430s, this becomes the perfect VMware box.
    As for the power switch... remember that in a datacenter environment you do not turn off the chassis. There is less chance of accidental shutdown if there is no power switch on the main circuit. This is 6 servers, a network switch, and a SAN solution in one; you do not want a kill switch. That is why no OEM ever puts a power switch on their blade chassis.
  • JAU, January 31, 2009 4:33 AM
    Hi folks. We're re-running the network test this weekend. Stay tuned for the update. - julio
  • MonsterCookie, January 31, 2009 8:21 AM
    I went over the review/test, and as far as I understood, this system has only a single GigE switch.
    Besides, the individual nodes do not have their own HDDs, but use a NAS instead.

    This is particularly bad, because the disk I/O is also handled by this poor LAN switch.
    One should use at least two switches: one for internode communication, and the other for NFS and so on.

    Second point: if those prices I have seen in the forum are right, then $20k is equivalent to 15,600 euros.
    For that money I can buy, from the company we purchase our equipment from, the same system built from Supermicro 1U Twins. For that price I even get dual GigE per node, and one InfiniBand card per node.
    Such a system could indeed be called a computational server, while the Intel system is just a custom-made, weakly coupled network of computers that comes with a hefty price tag.
    Of course, one could argue that buying 3 Supermicro Twins plus an InfiniBand switch is not as neat-looking as this Intel, but once it is in a rack, who cares?
    I would not really want to listen to this Intel machine on my desk anyway, so it should be put in a nice rack as well.