Intel’s 24-Core, 14-Drive Modular Server Reviewed
Power Supply Modules
While the MFSYS25 comes with two pre-installed power supply modules, it has capacity for up to four. The 1,000 W power supplies accept anything from 110 to 240 V, a welcome range since most small offices lack the high-grade electrical infrastructure of big data centers. As with most of the major parts of the MFSYS25, the power supply modules are hot-swappable. The number of power supply modules you need depends on the number of compute modules you have running. One power supply can handle one compute module plus all the other non-compute modules in the chassis, and the second power supply serves as your redundant backup. If you add more compute modules, you need to scale the number of power supplies as follows (a short sketch of this rule follows the list):
- One compute module = one power supply, plus a backup
- Two to three compute modules = two power supplies, plus a backup
- Four to six compute modules = three power supplies, plus a backup
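To make that scaling rule concrete, here is a minimal Python sketch; the function name and structure are my own illustration of the table above, not anything Intel ships with the chassis:

```python
# Illustrative helper based on the PSU scaling rule quoted in the review:
# one PSU covers one compute module plus all non-compute modules, two PSUs
# cover two to three compute modules, three PSUs cover four to six, and
# one extra PSU is always recommended as a redundant backup.

def required_power_supplies(compute_modules: int) -> int:
    """Return the total number of PSUs needed (active plus one redundant backup)."""
    if not 1 <= compute_modules <= 6:
        raise ValueError("The MFSYS25 chassis holds 1 to 6 compute modules")
    if compute_modules == 1:
        active = 1
    elif compute_modules <= 3:
        active = 2
    else:
        active = 3
    return active + 1  # add the redundant backup unit


if __name__ == "__main__":
    for n in range(1, 7):
        print(f"{n} compute module(s) -> {required_power_supplies(n)} PSUs total")
```

Note that even the worst case (six compute modules, three active supplies plus a backup) fits within the chassis' four power supply bays.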
While I think Intel put a lot of thought into designing the modular server, I’m not too crazy about having to physically unplug the power from the chassis in order to shut the MFSYS25 down completely. It would have been nice to include a power button somewhere on the chassis so administrators could avoid unplugging “hot” cables.
On the plus side, the power supply modules use standard computer power cords just like the ones that come with most PCs; there are no proprietary cables.
A last note regarding the power supply modules has to do with ventilation. I didn’t notice until later in my testing that the filler panel occupying the fourth power supply bay wasn’t just taking up space: it housed several small fans and a circuit board that provided additional cooling for the chassis. That’s a smart touch, as it adds function to an otherwise inert piece of hardware.
- kevikom: This is not a new concept. HP and IBM already have blade servers. HP has one that is 6U and modular; you can put up to 64 cores in it. Maybe Tom's could compare all of the blade chassis.
- sepuko: Do the blades in IBM's and HP's solutions have to carry hard drives to operate? Or are you talking about a specific model? I'm lost in the general comparison of "they're not new because those guys have had something similar / the concept is old."
- Why isn't the poor network performance addressed as a con? No GigE interface should be producing results at FastE levels, ever.
- nukemaster: So, when are you gonna start folding on it? :p Did you contact Intel about that network issue? Their network cards are normally top end, so that has to be a bug. You should also have tried to render 3D images on it; it should be able to flex some muscle there.
- MonsterCookie: Frankly, this is NOT a computational server, and I would bet 30% of the price of this thing that the product will be way overpriced and that one could build the same thing from normal 1U servers, like the Supermicro 1U Twin. The nodes themselves are fine, because the CPUs are fast. The problem is the built-in Gigabit LAN, which is just too slow; neither the throughput nor the latency of GigE was meant for these purposes. In a real computational server the CPUs should be directly interconnected with something like HyperTransport, or the separate nodes should communicate through built-in InfiniBand cards. The minimum nowadays for a computational cluster would be built-in 10G LAN, plus some software tool that can reduce the TCP/IP overhead and lower the latency.
- Unless it's a typo, they benchmarked older AMD Opterons. The AMD Opteron 200s are based on the 939 socket (I think), which is DDR1 ECC, so there's no way it would stack up to the Intel.
- The server could be used as an Oracle RAC cluster, but as noted you'd really want better interconnects than 1 Gb Ethernet. And I suspect from the setup it would make a fair VM engine.