
MFSYS25 Modular Server Chassis

Intel’s 24-Core, 14-Drive Modular Server Reviewed

The core piece of hardware in the modular server is the 90 lb., 6U MFSYS25 chassis. It’s designed to hold six compute modules, 14 SAS drives, four power supply modules, two main cooling modules, two Ethernet switch modules, two storage controller modules, a single I/O cooling module, and the main management module. Intel says a fully loaded MFSYS25 would weigh just under 200 lbs. Compared to a stack of six 1U Dell PE 1950 servers, the MFSYS25 is a little lighter thanks to its all-in-one modular design. Still, installing a fully loaded MFSYS25 is not a one-person job; mounting it takes more than two people.
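
Out of curiosity, here’s a quick back-of-the-envelope check on those numbers. The 90 lb. empty weight and just-under-200 lb. loaded weight are Intel’s figures; the per-module weights in this sketch are purely my own rough assumptions for illustration.

```python
# Back-of-the-envelope weight estimate for a fully loaded MFSYS25.
# The 90 lb. empty chassis figure is Intel's; the per-module weights
# below are rough assumptions for illustration only.
CHASSIS_EMPTY_LB = 90.0

assumed_weight_lb = {
    "compute module":         8.0,
    "SAS drive":              1.2,
    "power supply module":    5.0,
    "main cooling module":    4.0,
    "Ethernet switch module": 3.0,
    "storage controller":     3.0,
    "I/O cooling module":     2.0,
    "management module":      2.0,
}

module_count = {
    "compute module": 6, "SAS drive": 14, "power supply module": 4,
    "main cooling module": 2, "Ethernet switch module": 2,
    "storage controller": 2, "I/O cooling module": 1,
    "management module": 1,
}

loaded = CHASSIS_EMPTY_LB + sum(
    assumed_weight_lb[m] * n for m, n in module_count.items()
)
print(f"Estimated fully loaded weight: {loaded:.0f} lb.")  # ~199 lb.
```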

A graphical image of the Intel MFSYS25’s Front Panel. Lower left: I/O Cooling Module; upper left: SAN Enclosure; upper right: Compute Modules.

The front of the chassis offers easy access to the compute modules, the I/O cooling module, and the SAN drives. The remaining real estate around the front bezel leaves little or no room for additional components. The sole indicator light built into the frame is the system fault LED, which turns amber if there’s a problem with one of the MFSYS25’s rear-mounted components (such as a main cooling module).

On the back of the MFSYS25, you have a number of bays for more cooling, power, management, networking, and storage modules. For the most part, Intel provides optional redundancy for everything but the management module. Depending on where and how the machine is set up, having a single management module may not be an immediate problem as long as the compute modules get their power and remain accessible via their own VGA and USB ports. Still, redundancy for remote management of the chassis should have been an option.
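
Given that single point of failure, one stopgap is to watch the management module from outside the chassis. Below is a minimal watchdog sketch, assuming a hypothetical management-module address of 192.168.1.100; it simply pings the module once a minute and complains when it stops answering.

```python
#!/usr/bin/env python3
# Minimal watchdog for the single (non-redundant) management module.
# 192.168.1.100 is a hypothetical address; substitute your own.
import subprocess
import time

MGMT_IP = "192.168.1.100"   # assumed management-module address
INTERVAL_S = 60             # how often to check

def mgmt_module_up(host: str) -> bool:
    """Return True if the host answers a single ping."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

while True:
    if not mgmt_module_up(MGMT_IP):
        print(f"WARNING: management module {MGMT_IP} is not responding")
    time.sleep(INTERVAL_S)
```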

A graphical image of the Intel MFSYS25’s Rear Panel. Left side: Main Cooling Modules; center: Management Module, Ethernet Switch Module, and Storage Controller Module; upper right: three installed Power Modules; bottom right: Power Module Filler Panel.
The design of the chassis is impressive, offering efficient airflow from the front intake points through the chassis and out of the system’s rear assembly. That said, I’d recommend using perforated doors if racking the server is a requirement, and remember to keep the cabling sparse. There’s plenty of air moving through this machine once it’s powered on, and blocking the air going in and out of the chassis will heat things up. Regarding airflow out of the back of the chassis, one of my pet peeves with some cable-management systems is when bundled cables and mechanical arms block the exhaust coming out of a server’s cooling fans. Unfortunately, the demo we received didn’t come with any rack mounts or rails, so we weren’t able to see what racking configuration Intel has in mind for the system.

For proper airflow, Intel also makes it clear in its user guide that none of the hot-swappable device bays should be left empty: each bay needs either the appropriate module or one of the specially designed covers to maintain flow. The documentation also warns that anything left out of place for more than two minutes could result in performance degradation.
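
If you do rack the system behind doors, one way to confirm the airflow is still adequate is to watch the temperature sensors after everything is buttoned up. The sketch below shells out to the standard ipmitool utility; it assumes IPMI access to a BMC at a hypothetical address with hypothetical credentials, so treat it as an illustration rather than a recipe specific to the MFSYS25.

```python
#!/usr/bin/env python3
# Quick airflow sanity check: dump all temperature sensor readings
# via ipmitool. Assumes a standard IPMI-capable BMC at a known address
# (192.168.1.101 here is hypothetical) with known credentials.
import subprocess

BMC_IP = "192.168.1.101"              # hypothetical BMC address
USER, PASSWORD = "admin", "password"  # substitute real credentials

output = subprocess.run(
    ["ipmitool", "-I", "lanplus", "-H", BMC_IP,
     "-U", USER, "-P", PASSWORD, "sdr", "type", "Temperature"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    print(line)   # e.g. "Baseboard Temp | 30h | ok | 7.1 | 28 degrees C"
```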

A picture of the Compute Module bay shows the rear fans that pull air through and cool each server’s internal components.

Regarding system-status monitoring, indicator lights play an important role in server management. They provide a quick-and-easy way to see if there’s a problem with a server. As mentioned already, the chassis has one built-in status light on the front panel.

Most of the information on the system’s health and configuration is only accessible through the modular server management UI, called the “Modular Server Control.” This setup assumes that most folks will have the luxury of connecting to the MFSYS25 across a network at all times. Consider the scenario where you happen to be near the modular server and just want to take a quick glance at the temperature reading in the chassis, or you want to look at the IP address assigned to the chassis or one of its servers. You would have to find a local PC and log into the Modular Server Control. Wouldn’t it be nice to just walk up to the server and press a button to get a quick answer? In a production environment, servers should come with some kind of alphanumeric LED-based module mounted on the faceplate that displays basic information at the press of a button. This is especially convenient if the server needs to be physically identified or checked during a quick spot check.
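
Short of a real front-panel display, the closest workaround is a one-shot script that condenses those basics into a single line. Here’s a sketch of the idea, reusing the same hypothetical IPMI access as above to pull the assigned IP and a temperature reading; the exact fields available will depend on the firmware.

```python
#!/usr/bin/env python3
# "Virtual front panel": print a one-line status summary of the kind
# an alphanumeric LED module on the faceplate could show. Assumes the
# same hypothetical IPMI access as the previous sketch.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "192.168.1.101",
       "-U", "admin", "-P", "password"]

def ipmi(*args: str) -> str:
    return subprocess.run(BMC + list(args),
                          capture_output=True, text=True,
                          check=True).stdout

# Extract the assigned IP from `ipmitool lan print`.
ip = next(line.split(":", 1)[1].strip()
          for line in ipmi("lan", "print").splitlines()
          if line.startswith("IP Address ") and "Source" not in line)

# Take the first temperature reading as a rough health indicator.
temp = ipmi("sdr", "type",
            "Temperature").splitlines()[0].split("|")[-1].strip()

print(f"IP {ip} | TEMP {temp}")
```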

Comments (39; thread closed)
  • kevikom, January 30, 2009 6:06 AM (+4)
    This is not a new concept. HP & IBM already have Blade servers. HP has one that is 6U and is modular. You can put up to 64 cores in it. Maybe Tom's could compare all of the blade chassis.
  • kevikom, January 30, 2009 6:08 AM (+4)
    Also, I did not see any pricing on this. Did I miss it somewhere???
  • sepuko, January 30, 2009 10:33 AM (0)
    Do the blades in IBM's and HP's solutions have to carry hard drives to operate? Or are you talking about a certain model? What are you talking about, anyway? I'm lost in your general comparison: "they are not new because those guys have had something similar / the concept is old."
  • Anonymous, January 30, 2009 11:04 AM (0)
    Why isn't the poor network performance addressed as a con? No GigE interface should be producing results at FastE levels, ever.
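
For anyone who wants to sanity-check a link themselves, the classic iperf tool makes this easy to measure. Here’s a minimal sketch, assuming iperf is installed on both ends and that 10.0.0.1 is a placeholder for the machine running `iperf -s`; a healthy GigE link should report on the order of 900+ Mbit/s, while FastE-class results sit near 100 Mbit/s.

```python
#!/usr/bin/env python3
# Rough GigE sanity check: run the classic iperf client against a host
# already running `iperf -s`. 10.0.0.1 is a placeholder address.
import subprocess

SERVER = "10.0.0.1"  # placeholder: machine running `iperf -s`

result = subprocess.run(
    ["iperf", "-c", SERVER, "-t", "10"],   # 10-second TCP test
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # reported bandwidth appears in the last line
```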
  • nukemaster, January 30, 2009 11:35 AM (+1)
    So, when you gonna start folding on it? :p 

    Did you contact Intel about that network thing? Their network cards are normally top end. That has to be a bug.

    You should have tried to render 3D images on it. It should be able to flex some muscles there.
  • MonsterCookie, January 30, 2009 12:39 PM (+2)
    Now frankly, this is NOT a computational server, and I would bet 30% of the price of this thing that the product will be way overpriced and that one could build the same thing from normal 1U servers, like the Supermicro 1U Twin.
    The nodes themselves are fine, because the CPUs are fast. The problem is the built-in gigabit LAN, which is just too slow (neither the throughput nor the latency of GigE was meant for these purposes).
    In a real computational server the CPUs should be directly interconnected with something like HyperTransport, or the separate nodes should communicate through built-in InfiniBand cards. The MINIMUM nowadays for a computational cluster would be built-in 10G LAN, plus some software tool that can reduce the TCP/IP overhead and the latency.
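
To put rough numbers on that point, here’s a small sketch comparing the wire time for a 1 GiB node-to-node exchange over the three classes of interconnect mentioned. The link rates are nominal and the latencies are typical published ballpark figures, not measurements from this system.

```python
# Rough wire-time comparison for a 1 GiB node-to-node transfer.
# Link rates are nominal; latencies are typical ballpark figures,
# not measurements from the MFSYS25.
DATA_BYTES = 1 * 1024**3           # 1 GiB payload

links = {                          # (rate in bits/s, one-way latency in s)
    "GigE":           (1e9,  50e-6),
    "10 GbE":         (10e9, 10e-6),
    "InfiniBand DDR": (16e9, 2e-6),
}

for name, (rate_bps, latency_s) in links.items():
    transfer_s = DATA_BYTES * 8 / rate_bps + latency_s
    print(f"{name:>15}: {transfer_s:6.2f} s")

# GigE takes ~8.6 s versus ~0.9 s for 10 GbE; for many small messages
# the per-message latency gap matters even more than raw bandwidth.
```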
  • Anonymous, January 30, 2009 1:39 PM (+1)
    Unless it's a typo, they benchmarked older AMD Opterons. The AMD Opteron 200s are based on the 939 socket (I think), which is DDR1 ECC, so no way would it stack up to the Intel.
  • Anonymous, January 30, 2009 2:31 PM (0)
    The server could be used as an Oracle RAC cluster, but as noted you really want better interconnects than 1Gb Ethernet. And I suspect from the setup it makes a fair VM engine.
  • ghyatt, January 30, 2009 4:29 PM (+1)
    I priced a full chassis out for a client, and it was under 20k...
  • navvara, January 30, 2009 4:44 PM (0)
    It can't be under 20K.

    I really want to know what the price of this server is.
  • Anonymous, January 30, 2009 4:57 PM (0)
    Actually, it is under $20k for a fully configured system with six blades; I priced it up online. You can push it a bit higher than this if you go for very high-end memory (16GB+) and top-bin processors, but for most, the fully loaded config would come in around $20k. It's very well priced.
  • kittle, January 30, 2009 5:40 PM (+1)
    The chassis for your client was under 20k... np

    but to get one IDENTICAL to what was tested in this article, what's that cost? I would think "price as tested" would be a standard data point.

    Also - the disk I/O graphs are way too small to read without a lot of extra mouse clicks, and even then I get "error on page" when trying to see the full-rez version. It makes all that work you spent gathering the disk benchmarks rather useless if people can't read them.
  • Anonymous, January 30, 2009 5:51 PM (0)
    The price as tested in the article is way less than $20k. They only had three compute modules and a non-redundant SAN and switch. Their configuration would cost around $15k. Seriously, just go and price it up online: search MFSYS25 and MFS5000SI.
  • asburye, January 30, 2009 8:38 PM (+3)
    I have one sitting here on my desk with six compute modules, two Ethernet switches, two controller modules, four power supplies, and 14 143GB/10K SAS drives. The six compute modules all have 16GB RAM and two Xeon 5420s each, and four of them have the extra HBA card as well. Our price was under $25,000 with everything, including shipping and tax. The Shared LUN Key is about $400. We bought ours about two months ago.
  • Shadow703793, January 30, 2009 10:23 PM (0)
    nukemaster: "So, when you gonna start folding on it? Did you contact Intel about that network thing? Their network cards are normally top end. That has to be a bug. You should have tried to render 3D images on it. It should be able to flex some muscles there."

    Nahhh... you don't run F@H on CPUs any more ;) 
    You run it on GPUs! CUDA FTW! :p 
  • nukemaster, January 31, 2009 12:56 AM (0)
    Yeah, that's true. CUDA kills in folding.
  • JAU, January 31, 2009 1:22 AM (0)
    Thanks for the comments/suggestions/questions everyone. Your input is appreciated and will be applied to future reviews.

    We're addressing the issue with the network test results. - julio
  • Area51, January 31, 2009 3:09 AM (+2)
    This is the only solution I can think of that has an integrated SAN. None of the OEMs (Dell, HP, IBM) can do that in their solutions as of yet. Also, if you configure the CPUs with L5430s, this becomes the perfect VMware box.
    As far as the power switch goes... remember that in a datacenter environment you do not turn off the chassis. There is less chance of an accidental shutdown if there is no power switch on the main circuit. This is six servers, a network switch, and a SAN solution in one; you do not want a kill switch. That is why no OEM ever puts a power switch on their blade chassis.
  • JAU, January 31, 2009 4:33 AM (+1)
    Hi folks. We're re-running the network test this weekend. Stay tuned for the update. - julio
  • MonsterCookie, January 31, 2009 8:21 AM (0)
    I went over the review/test, and as far as I understood, in this system there is only a single GLAN switch.
    Besides, the individual nodes do not have their own HDDs, but shared SAN storage instead.

    This is particularly bad, because the disk I/O is also handled by this poor LAN switch.
    One should use at least two switches: one for internode communication, and the other for NFS and so on.

    Second point: if the prices I have seen in the forum are right, then $20k is equivalent to 15,600 Euros.
    For that money I can buy, from the company we get our equipment from, the same system built from Supermicro 1U Twins. For that price I even get dual GLAN per node, and one InfiniBand card per node.
    That system could indeed be called a computational server, while the Intel system is just a custom-made, weakly coupled network of computers with a hefty price tag.
    Of course one could argue that buying three Supermicro Twins plus an InfiniBand switch is not as neat-looking as this Intel, but once it's in a rack, who cares?
    I would not really want to listen to this Intel machine on my desk anyway, so it should be put in a nice rack as well.