Intel’s 24-Core, 14-Drive Modular Server Reviewed

Ethernet Switch Module

The MFSYS25 chassis provides networking for the compute modules through a hot-swappable Ethernet switch module located in the rear panel. The 10 physical network ports on the Ethernet switch module’s front panel provide the uplink to the external network. There are 12 internal ports that connect the six compute modules’ 12 NICs to the Ethernet switch module. The Ethernet switch module also provides connectivity for the compute modules to the management module via the main chassis’ midplane.

Having one Ethernet switch module serves as the minimum networking configuration, but if you want redundancy, you'll have to add a second Ethernet switch module to the other available networking slot. The external ports on the switch are configurable through the Modular Server Control management application, which lets you set speeds of up to 1 Gb full duplex, enable or disable individual external ports, and enable Spanning Tree functionality. The internal ports don't get the same treatment as the external ports: they offer only the enable/disable option and are hard-set to 1 Gb full duplex.
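
This split between configurable external ports and fixed internal ports is easy to capture in code. The Python sketch below models the behavior described above; the class names, method signatures, and the set of supported speed values are illustrative assumptions, since Intel's actual management interface is the Modular Server Control GUI rather than a scripting API.

```python
from dataclasses import dataclass

# Hypothetical model of the MFSYS25 switch module's port rules as described
# in this review; field names and supported speeds are assumptions, not
# Intel's actual management API.

EXTERNAL_PORTS = range(1, 11)   # 10 external uplink ports
INTERNAL_PORTS = range(1, 13)   # 12 internal ports, one per compute-module NIC

@dataclass
class PortConfig:
    enabled: bool = True
    speed_mbps: int = 1000       # internal ports are hard-set to 1 Gb
    full_duplex: bool = True     # ...full duplex
    spanning_tree: bool = False  # only meaningful on external ports

class SwitchModule:
    def __init__(self):
        self.external = {p: PortConfig() for p in EXTERNAL_PORTS}
        self.internal = {p: PortConfig() for p in INTERNAL_PORTS}

    def set_external(self, port, *, enabled=None, speed_mbps=None,
                     full_duplex=None, spanning_tree=None):
        """External ports expose speed/duplex, enable/disable, and STP."""
        cfg = self.external[port]
        if enabled is not None:
            cfg.enabled = enabled
        if speed_mbps is not None:
            if speed_mbps not in (10, 100, 1000):  # assumed supported speeds
                raise ValueError("unsupported speed")
            cfg.speed_mbps = speed_mbps
        if full_duplex is not None:
            cfg.full_duplex = full_duplex
        if spanning_tree is not None:
            cfg.spanning_tree = spanning_tree

    def set_internal(self, port, *, enabled):
        """Internal ports only support enable/disable; 1 Gb FD is fixed."""
        self.internal[port].enabled = enabled

switch = SwitchModule()
switch.set_external(1, speed_mbps=1000, spanning_tree=True)
switch.set_internal(3, enabled=False)  # e.g., disable one compute-module NIC
```

Hard-coding the internal ports' speed mirrors the review's observation that only the enable/disable option is exposed for them.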

The gigabit Ethernet Switch Module provides the network connections for the Compute Modules through its internal and external ports.
  • kevikom
    This is not a new concept. HP & IBM already have blade servers. HP has one that is 6U and modular; you can put up to 64 cores in it. Maybe Tom's could compare all of the blade chassis.
  • kevikom
    Also, I did not see any pricing on this. Did I miss it somewhere???
  • sepuko
    Do the blades in IBM's and HP's solutions have to carry hard drives in order to operate? Or are you talking about a certain model? What are you talking about, anyway? I'm lost in your general comparison of "they are not new because those guys have had something similar / the concept is old."
  • Why isn't the poor network performance addressed as a con? No GigE interface should be producing results at FastE levels, ever.
  • nukemaster
    So, when are you gonna start folding on it :p

    Did you contact Intel about that network thing? Their network cards are normally top end. That has to be a bug.

    You should have tried to render 3D images on it. It should be able to flex some muscles there.
  • MonsterCookie
    Now frankly, this is NOT a computational server, and I would bet 30% of the price of this thing that the product will be way overpriced and that one could build the same thing from normal 1U servers, like the Supermicro 1U Twin.
    The nodes themselves are fine, because the CPUs are fast. The problem is the built-in gigabit LAN, which is just too slow (neither the throughput nor the latency of gigabit Ethernet was meant for these purposes).
    In a real computational server the CPUs should be directly interconnected with something like HyperTransport, or the separate nodes should communicate through built-in InfiniBand cards. The MINIMUM nowadays for a computational cluster would be 10G LAN built in, plus some software tool that can reduce the TCP/IP overhead and decrease the latency.
    Unless it's a typo, they benchmarked older AMD Opterons. The AMD Opteron 200s are based on Socket 939 (I think), which is DDR1 ECC, so no way would it stack up to the Intel.
    The server could be used as an Oracle RAC cluster, but as noted you really want better interconnects than 1 Gb Ethernet. And I suspect from the setup it makes a fair VM engine.
  • ghyatt
    I priced a full chassis out for a client, and it was under 20k...
  • navvara
    It can't be under 20K.

    I really want to know what the price of this server is.