Intel’s 24-Core, 14-Drive Modular Server Reviewed

Management Module

The Management Module provides remote administration for the MFSYS25.

Remote administration of the MFSYS25 is made possible by the management module that sits in the center of the chassis’ rear panel. Via the module’s single network port, the management module offers a Web-based user interface that is used to configure, manage, and monitor the hardware inside the modular server as well as provide remote functionality for the compute modules.

The management module runs a built-in Linux-based operating system. On the first try, I was able to connect to the Modular Server Control application using Firefox, and I could also use PuTTY to telnet to the Linux CLI, although the latter interface offered limited functionality. I couldn't find anything on Intel's Website with instructions on how to properly use the CLI, so I relied on the Web interface instead.
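If you just want to sanity-check that both interfaces are reachable before launching a browser or PuTTY, a quick script will do it. This is a minimal sketch, assuming the module answers on the usual HTTP/HTTPS and telnet ports; the IP address below is a placeholder for whatever you've assigned the management module, not a documented default.

```python
# probe_mgmt.py -- quick reachability check for the management module.
# The IP address is a placeholder; substitute the address you assigned
# to the management module's network port.
import socket

MGMT_IP = "192.168.150.150"  # placeholder -- use your module's address
PORTS = {80: "HTTP (Web UI)", 443: "HTTPS (Web UI)", 23: "telnet (Linux CLI)"}

for port, label in PORTS.items():
    try:
        with socket.create_connection((MGMT_IP, port), timeout=3):
            print(f"{label}: open on {MGMT_IP}:{port}")
    except OSError as exc:
        print(f"{label}: no answer on {MGMT_IP}:{port} ({exc})")
```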

Along with the network port, there's a reset button to reboot the management module. A nine-pin serial port is also built into the module, although an MFSYS25 FAQ on Intel's Website states that the serial port is only used for manufacturing and engineering. I then found an Intel document (Solution ID: CS-029107) that explains how to connect to the management module using a terminal program. This serves as a backup procedure in case you lose network connectivity or need to reset the module to its factory defaults.
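For the curious, here is roughly what that serial fallback looks like scripted with pyserial instead of a terminal program. It's a sketch only: the device path and the 115200-8N1 settings are my assumptions, not values taken from the Intel document, so verify the real parameters against CS-029107.

```python
# serial_console.py -- attach to the management module's serial port.
# Requires pyserial (pip install pyserial). The device path and the
# 115200-8N1 settings below are assumptions; consult Intel's document
# (Solution ID: CS-029107) for the actual parameters.
import serial  # pyserial

ser = serial.Serial(
    port="/dev/ttyS0",   # or "COM1" on Windows -- placeholder
    baudrate=115200,     # assumed; verify against CS-029107
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=2,
)
ser.write(b"\r\n")                             # wake the console
print(ser.read(256).decode(errors="replace"))  # show any prompt/banner
ser.close()
```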

Unlike the other hot-swappable modules in the MFSYS25, the management module is not redundant: there is no space in the back for a second module, nor is there a second network port on the single management module that comes with the MFSYS25. The assumption is that as long as the compute modules are running, you can survive without a management module until it's replaced. RDP, telnet, or SSH could be used for remote connections, but local administration of each of the running servers would require using the video and USB ports on the compute modules. One good thing about replacing a management module: even if the module is completely lost, the configuration is backed up on a flash card located on the chassis midplane, and the data on the memory card is restored once the management module is replaced.
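If the management module ever does go dark, a quick sweep like the one below shows which in-band doors are still open on each compute module. Again, just a sketch: the addresses are placeholders for your own servers, and the ports simply correspond to the SSH, telnet, and RDP options mentioned above.

```python
# inband_check.py -- see which remote-admin ports the compute modules
# still answer on if the management module is out of service.
# The server addresses are placeholders for your own compute modules.
import socket

SERVERS = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]  # placeholders
PORTS = {22: "SSH", 23: "telnet", 3389: "RDP"}

for host in SERVERS:
    for port, label in PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"{host}: {label} reachable")
        except OSError:
            print(f"{host}: {label} not reachable")
```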

The login screen used to get into the Modular Server Control application used to manage the MFSYS25 and all its components. Very Firefox friendly; IE… not so much.

39 comments
  • kevikom
    This is not a new concept. HP & IBM already have Blade servers. HP has one that is 6U and is modular. You can put up to 64 cores in it. Maybe Tom's could compare all of the blade chassis.
    4
  • kevikom
    Also, I did not see any pricing on this. Did I miss it somewhere???
    4
  • sepuko
    Do the blades in IBM's and HP's solutions have to carry hard drives to operate? Or are you talking about a certain model? What are you talking about, anyway? I'm lost in your general comparison: "they are not new because those guys have had something similar / the concept is old."
    0
  • Anonymous
    Why isn't the poor network performance addressed as a con? No GigE interface should be producing results at FastE levels, ever.
    0
  • nukemaster
    So, when you gonna start folding on it? :p

    Did you contact Intel about that network thing? Their network cards are normally top end. That has to be a bug.

    You should have tried to render 3D images on it. It should be able to flex some muscles there.
    1
  • MonsterCookie
    Now frankly, this is NOT a computational server, and I would bet 30% of the price of this thing that the product will be way overpriced and one could build the same thing from normal 1U servers, like the Supermicro 1U Twin.
    The nodes themselves are fine, because the CPUs are fast. The problem is the built-in Gigabit LAN, which is just too slow (neither the throughput nor the latency of GLan was meant for these purposes).
    In a real computational server the CPUs should be directly interconnected with something like HyperTransport, or the separate nodes should communicate through built-in InfiniBand cards. The MINIMUM nowadays for a computational cluster would be built-in 10G LAN, and some software tool which can reduce the TCP/IP overhead and decrease the latency.
    2
  • Anonymous
    Unless it's a typo, they benchmarked older AMD Opterons. The AMD Opteron 200s are based off the 939 socket (I think), which is DDR1 ECC, so no way would it stack up to the Intel.
    1
  • Anonymous
    The server could be used as an Oracle RAC cluster, but as noted you really want better interconnects than 1Gb Ethernet. And I suspect from the setup it makes a fair VM engine.
    0
  • ghyatt
    I priced a full chassis out for a client, and it was under 20k...
    1
  • navvara
    It can't be under 20K.

    I really want to know what the price of this server is.
    0
  • Anonymous
    Actually, it is under $20k for a fully configured system with 6 blades - I priced it up online. You can push it a bit higher than this if you go for very high-end memory (16GB+) and top-bin processors, but for most, the fully loaded config would come in around $20k. It's very well priced.
    0
  • kittle
    The chassis for your client was under 20k... np

    but to get one IDENTICAL to what was tested in this article - what's that cost? I would think "price as tested" would be a standard data point.

    Also - the disk I/O graphs are way too small to read without a lot of extra mouse clicks, and even then I get "error on page" when trying to see the full-rez version. Makes all that work you spent gathering the disk benchmarks rather useless if people can't read them.
    1
  • Anonymous
    The price as tested in the article is way less than $20k. They only had 3 compute modules and a non-redundant SAN and switch. Their configuration would cost around $15k - seriously, just go and price it up online - search MFSYS25 and MFS5000SI.
    0
  • asburye
    I have one sitting here on my desk with 6 compute modules, 2 Ethernet switches, 2 controller modules, 4 power supplies, and 14 143GB/10K SAS drives. The 6 compute modules all have 16GB RAM and 2 Xeon 5420s each, and 4 of them have the extra HBA card as well; our price was < $25,000 with everything, including shipping and tax. The Shared LUN Key is about $400. We bought ours about 2 months ago.
    3
  • Shadow703793
    nukemaster: "So, when you gonna start folding on it? Did you contact Intel about that network thing? Their network cards are normally top end. That has to be a bug. You should have tried to render 3D images on it. It should be able to flex some muscles there."

    Nahhh... you don't run F@H on CPUs any more ;)
    You run it on GPUs! CUDA FTW! :P
    0
  • nukemaster
    Yeah, that's true. CUDA kills in folding.
    0
  • Anonymous
    Thanks for the comments/suggestions/questions everyone. Your input is appreciated and will be applied to future reviews.

    We're addressing the issue with the network test results. - julio
    0
  • Area51
    This is the only solution that I can think of that has the integrated SAN solution. None of the OEMs (Dell, HP, IBM) can do that in their solutions as of yet. Also, if you configure the CPUs with L5430s, this becomes the perfect VMware box.
    As far as the power switch... Remember that in a datacenter environment you do not turn off the chassis. There is less chance of accidental shutdown if there is no power switch on the main circuit. This is 6 servers, a network switch, and a SAN solution in one; you do not want a kill switch. That is why no OEM ever puts a power switch in their blade solution chassis.
    2
  • Anonymous
    Hi folks. We're re-running the network test this weekend. Stay tuned for the update. - julio
    1
  • MonsterCookie
    I went over the review/test, and as far as I understood, in this system there is only a single GLAN switch.
    Besides, the individual nodes do not have their own HDDs, but a NAS instead.

    This is particularly bad, because the disk I/O is also handled by this poor LAN switch.
    One should use at least two switches: one for internode communication, and the other for NFS and so on.

    Second point: if those prices which I have seen in the forum are right, then $20k is equivalent to 15,600 Euros.
    For that money I can buy, from the company where we buy our equipment, the same system built from Supermicro 1U Twins. For this price I even get dual GLAN per node, and one InfiniBand card per node.
    That system could indeed be called a computational server, while the Intel system is just something like a custom-made, weakly coupled network of computers that comes with a hefty price tag.
    Of course one could argue that buying 3 Supermicro Twins plus an InfiniBand switch is not as neat-looking as this Intel, but once it is in a rack, who cares?
    I would not really want to listen to this Intel machine on my desk anyway, so it should be put in a nice rack as well.
    0