
First Impressions

Intel’s 24-Core, 14-Drive Modular Server Reviewed

All the hot-swappable modules used in the Intel Modular Server come marked for easy identification.

Setting up the MFSYS25 wasn’t that difficult.  As soon as we got it out of the crate, we plugged it in and off it went. The system, while not fully loaded, was still pretty heavy. It took two of us to move it a short distance to an empty space in the lab. Once all the cables were plugged in, we powered up the chassis and were ready to start evaluating the system. 

As soon as you plug the power cables into one or more of the MFSYS25’s power supplies, you might be overwhelmed by the initial jet-engine-like roar of spinning fans coming from the back of the server. We were fearful that the system would run at this noise level all the time. But after several minutes, the machine eventually calmed down, producing modest noise. For a server this big, the low level of noise generated is pretty impressive.

It seems that Intel had this in mind when it put this system together. The MFSYS25 enclosure was custom built by Silentium, and its Active Silencer design cuts the noise coming from the chassis by about 10 decibels. Silentium's website calls this a significant reduction, although with so many components packed into such a small enclosure, it is hard to make a direct noise comparison between the MFSYS25 and other machines.
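As a rough rule of thumb (not a measurement from this review), a 10 dB drop corresponds to roughly a tenfold reduction in sound power, which most listeners perceive as about half as loud:

\[
\Delta L = 10 \log_{10}\!\left(\frac{P_{\text{before}}}{P_{\text{after}}}\right) = 10\ \text{dB}
\quad\Longrightarrow\quad
\frac{P_{\text{before}}}{P_{\text{after}}} = 10^{10/10} = 10
\]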

Intel recommended that I update the MFSYS25 with the latest firmware package. The process didn’t take too long. First, I downloaded the most recent firmware package from Intel's website. Then, using the Modular Server Control’s firmware update interface, I browsed to the downloaded file on my desktop and started the upload. It took about 16 minutes for the entire process to finish, including reboots of each compute module. The wait was not too bad considering that a single firmware update upgrades all of the compute, power, and storage modules in one pass.
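For admins who would rather script this step than click through a browser, the same upload-then-poll pattern can be automated. The sketch below is purely illustrative: the endpoint paths, form field, and status JSON are hypothetical stand-ins, not the actual Modular Server Control API.

    # Illustrative only: the URLs, form field, and status fields below are
    # hypothetical stand-ins, NOT the real Modular Server Control API.
    import time
    import requests

    MGMT = "https://192.168.1.100"           # management module address (example)
    AUTH = ("admin", "a-stronger-password")  # set after ditching the default

    def update_firmware(package_path: str) -> None:
        # Upload the firmware package, then poll until the chassis reports
        # that every module (compute, power, storage) has been updated.
        with open(package_path, "rb") as pkg:
            requests.post(f"{MGMT}/firmware/upload", files={"package": pkg},
                          auth=AUTH, verify=False, timeout=120)
        while True:
            status = requests.get(f"{MGMT}/firmware/status", auth=AUTH,
                                  verify=False, timeout=10).json()
            if status.get("state") == "complete":  # hypothetical status field
                break
            time.sleep(30)  # the whole update took roughly 16 minutes here

    update_firmware("mfsys25_firmware_package.zip")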

The Firmware Update interface provides an upload utility that lets you push the latest firmware from your desktop. A single firmware update can include upgrades for multiple modules and install them all during the same session.

At first, I used a laptop connected directly to the management module to work with the Modular Server Control application.  As time went on, I put the machine on an isolated network and gave it a static IP address so I could connect to it from different machines. I then connected all three compute modules to the network as well. Next, I changed the admin password to something a little more secure.
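A quick way to sanity-check that kind of setup is to confirm that the management module answers on its new static address. Here is a minimal sketch using only the Python standard library; the IP address and ports are examples, not the values used in our lab.

    # Minimal reachability check for a management interface on a static IP.
    # The address and ports are examples, not the ones used in this review.
    import socket

    MGMT_IP = "192.168.1.100"
    PORTS = [80, 443]  # Modular Server Control is reached through a web browser

    for port in PORTS:
        try:
            with socket.create_connection((MGMT_IP, port), timeout=3):
                print(f"{MGMT_IP}:{port} is reachable")
        except OSError as err:
            print(f"{MGMT_IP}:{port} is NOT reachable: {err}")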

Security may not have been the only reason why I got rid of the default password. I kind of had a “2001: A Space Odyssey” moment, as the Dashboard tab’s Required Actions box displayed a constant reminder, every time I logged in, that I was still using the default password. I think I changed the password just to appease the system. Either way, it’s good practice to secure your password, and I can appreciate how Intel nudges the admin into changing it, at the very least.

As soon as I powered on the MFSYS25 chassis, one of the first things that came up on the Dashboard was a constant reminder to change the default password.

More graphical representations in the Modular Server Control UI accurately depict the actual state of the MFSYS25’s failed component.

I have also been impressed by the accessibility and reliability built into the MFSYS25’s hot-swappable devices. Identifying what’s hot-swappable is easy, as removable devices are physically color-coded with green tabs. This includes all the disk drives and all the modules in the chassis. For logic-based devices like the management module and the Ethernet switch module, configurations are backed up to flash media sitting on the chassis midplane and are recoverable through the Modular Server Control.
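To make that layout concrete, here is a toy inventory model; the module list and fields are invented for illustration and are not pulled from Intel's software.

    # Toy model for illustration only; these are not Intel's data structures.
    from dataclasses import dataclass

    @dataclass
    class Module:
        name: str
        hot_swappable: bool        # green-tabbed parts on the real chassis
        config_on_midplane: bool   # logic modules mirror their config to midplane flash

    chassis = [
        Module("disk drive", hot_swappable=True, config_on_midplane=False),
        Module("compute module", hot_swappable=True, config_on_midplane=False),
        Module("management module", hot_swappable=True, config_on_midplane=True),
        Module("Ethernet switch module", hot_swappable=True, config_on_midplane=True),
    ]

    for m in chassis:
        note = ("config recoverable via Modular Server Control"
                if m.config_on_midplane else "no chassis-held config to restore")
        print(f"{m.name}: hot-swappable, {note}")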

Structure-wise, I was pretty happy with the design of the MFSYS25. One initial concern I did have was about the latches used to lock the MFS5000SI compute modules to the chassis’ main bezel.  In order to remove a compute module from the main chassis, you have to press a green release button on the front of the module. This in turn disengages the release handles and unlatches the MFS5000SI from the chassis.

Having seen various blade servers on the market, I know that parts that rely on repetitive mechanical actions tend to wear down. The catch that holds the compute module release handles is just a small bump of metal that may, over time, bend out of shape or loosen. However, Intel did a nice job designing the hold-and-release system by including additional slots in the compute module’s bezel that guide the release arms and keep them in place. This provides a nice, secure fit once the compute module is locked inside the MFSYS25’s chassis.

The Compute Modules in the MFSYS25 are securely fastened to the chassis by the two symmetrically placed release handles.

However, I did have a problem with one of the main cooling modules. After having the chassis shutdown for a couple days of downtime, I powered up the chassis and saw the single amber LED on the front panel of the main

Comments
  • kevikom, January 30, 2009 6:06 AM
    This is not a new concept. HP & IBM already have Blade servers. HP has one that is 6U and is modular. You can put up to 64 cores in it. Maybe Tom's could compare all of the blade chassis.
  • kevikom, January 30, 2009 6:08 AM
    Also I did not see any pricing on this. Did I miss it somewhere???
  • sepuko, January 30, 2009 10:33 AM
    Do the blades in IBM's and HP's solutions have to carry hard drives to operate? Or are you talking about a certain model? What are you talking about anyway? I'm lost in your general comparison: "they are not new because those guys have had something similar / the concept is old."
  • Anonymous, January 30, 2009 11:04 AM
    Why isn't the poor network performance addressed as a con? No GigE interface should be producing results at FastE levels, ever.
  • nukemaster, January 30, 2009 11:35 AM
    So, when you gonna start folding on it :p

    Did you contact Intel about that network thing? Their network cards are normally top end. That has to be a bug.

    You should have tried to render 3D images on it. It should be able to flex some muscles there.
  • MonsterCookie, January 30, 2009 12:39 PM
    Now frankly, this is NOT a computational server, and I would bet 30% of the price of this thing that the product will be way overpriced and one could build the same thing from normal 1U servers, like the Supermicro 1U Twin.
    The nodes themselves are fine, because the CPUs are fast. The problem is the built-in Gigabit LAN, which is just too slow (neither the throughput nor the latency of GigE was meant for these purposes).
    In a real computational server the CPUs should be directly interconnected with something like HyperTransport, or the separate nodes should communicate through built-in InfiniBand cards. The MINIMUM nowadays for a computational cluster would be 10G LAN built in, and some software tool which can reduce the TCP/IP overhead and decrease the latency.
  • Anonymous, January 30, 2009 1:39 PM
    Unless it's a typo, they benchmarked older AMD Opterons. The AMD Opteron 200s are based off the 939 socket (I think), which is DDR1 ECC, so no way would it stack up to the Intel.
  • Anonymous, January 30, 2009 2:31 PM
    The server could be used as an Oracle RAC cluster. But as noted, you really want better interconnects than 1Gb Ethernet. And I suspect from the setup it makes a fair VM engine.
  • ghyatt, January 30, 2009 4:29 PM
    I priced a full chassis out for a client, and it was under 20k...
  • navvara, January 30, 2009 4:44 PM
    It can't be under 20K.

    I really want to know what the price of this server is.
  • Anonymous, January 30, 2009 4:57 PM
    Actually it is under $20k for a fully configured system with 6 blades - I priced it up online. You can push it a bit higher than this if you go for very high-end memory (16GB+) and top-bin processors, but for most the fully loaded config would come in around $20k. It's very well priced.
  • kittle, January 30, 2009 5:40 PM
    The chassis for your client was under 20k... np

    but to get one IDENTICAL to what was tested in this article - what's that cost? I would think "price as tested" would be a standard data point.

    Also - the disk I/O graphs are way too small to read without a lot of extra mouse clicks, and even then I get "error on page" when trying to see the full-rez version. Makes all that work you spent gathering the disk benchmarks rather useless if people can't read them.
  • Anonymous, January 30, 2009 5:51 PM
    The price as tested in the article is way less than $20k. They only had 3 compute modules and a non-redundant SAN and switch. Their configuration would cost around $15k - seriously, just go and price it up online - search for MFSYS25 and MFS5000SI.
  • asburye, January 30, 2009 8:38 PM
    I have one sitting here on my desk with 6 compute modules, 2 Ethernet switches, 2 controller modules, 4 power supplies, and 14 143GB/10K SAS drives. The 6 compute modules all have 16GB RAM and two Xeon 5420s each, and 4 of them have the extra HBA card as well; our price was < $25,000 with everything including shipping and tax. The Shared LUN Key is about $400. We bought ours about 2 months ago.
  • Shadow703793, January 30, 2009 10:23 PM
    nukemaster: "So, when you gonna start folding on it... Did you contact Intel about that network thing? Their network cards are normally top end. That has to be a bug. You should have tried to render 3D images on it. It should be able to flex some muscles there."

    Nahhh... you don't run F@H on CPUs any more ;)
    You run it on GPUs! CUDA FTW! :p
  • nukemaster, January 31, 2009 12:56 AM
    Yeah, that's true. CUDA kills in folding.
  • JAU, January 31, 2009 1:22 AM
    Thanks for the comments/suggestions/questions everyone. Your input is appreciated and will be applied to future reviews.

    We're addressing the issue with the network test results. - julio
  • Area51, January 31, 2009 3:09 AM
    This is the only solution that I can think of that has an integrated SAN. None of the OEMs (Dell, HP, IBM) can do that in their solutions as of yet. Also, if you configure the CPUs with L5430s this becomes the perfect VMware box.
    As far as the power switch goes... remember that in a datacenter environment you do not turn off the chassis. There is less chance of accidental shutdown if there is no power switch on the main circuit. This is 6 servers, a network switch, and a SAN solution in one; you do not want a kill switch. That is why no OEM ever puts a power switch on their blade chassis.
  • JAU, January 31, 2009 4:33 AM
    Hi folks. We're re-running the network test this weekend. Stay tuned for the update. - julio
  • MonsterCookie, January 31, 2009 8:21 AM
    I went over the review/test, and as far as I understood, in this system there is only a single GLAN switch.
    Besides, the individual nodes do not have their own HDDs, but a NAS instead.

    This is particularly bad, because the disk I/O is also handled by this poor LAN switch.
    One should use at least two switches: one for internode communication, and the other for NFS and so on.

    Second point: if those prices which I have seen in the forum are right, then $20k is equivalent to 15,600 euros.
    For that money I can buy, from the company where we buy our equipment, the same system built from Supermicro 1U Twins. For this price I even get dual GLAN per node, and one InfiniBand per node.
    That system could indeed be called a computational server, while the Intel system is just something like a custom-made, weakly coupled network of computers that comes with a hefty price tag.
    Of course one could argue that buying 3 Supermicro Twins plus an InfiniBand switch is not as neat looking as this Intel, but once it is in a rack, who cares?
    I would not really want to listen to this Intel machine on my desk anyway, so it should be put in a nice rack as well.