
MFS5000SI Compute Modules

Intel’s 24-Core, 14-Drive Modular Server Reviewed

Each of the Intel MFS5000SI compute modules fits into one of the six horizontal bays on the front-right of the MFSYS25 chassis. Simply put, a compute module looks like a 1U server minus any internal drives and without a power supply. The modules slide into the chassis much like blades in a typical blade server; once mounted, they can be powered up via the Modular Server Control application or with the small power button on each compute module.

The MFS5000SI Compute Modules are blade-like servers that contain the CPUs, memory, chipsets and networking components. Storage is handled by the built-in SAN next to the Compute Modules.

The compute modules included in the evaluation unit were configured as follows:

Server 1:

  • 2 Intel Xeon E5440 CPUs @ 2.83 GHz
  • 8 GB of Kingston FB DDR2-667 RAM                  

Servers 2 and 3:

  • 2 Intel Xeon E5410 CPUs @ 2.33 GHz
  • 16 GB of Kingston FB DDR2-667 RAM

All Servers:

  • Intel 5000P chipset
  • Intel 5000P memory controller hub
  • Intel 6321ESB I/O controller hub
  • Network interface controller
  • LSI 1064e controller, SAS 3000 Series
  • AMD ES1000 16 MB graphics accelerator

A picture of the Compute Module interior.

One of the two core components built into the compute module’s system board is the Intel 5000P memory controller hub. The 5000P serves as the system bus interface for the processor subsystem, handles fully buffered DIMM (FBD) thermal management, and supports memory RASUM features such as memory scrubbing, self-testing, and mirroring. With the memory controller hub built into the board’s architecture, peak memory bandwidth falls between 6.4 and 8 GB/s, while data transfers across the 1,333 MHz front-side bus can reach up to 10.66 GB/s.
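
For reference, the 10.66 GB/s number falls straight out of the 1,333 MHz bus moving 8 bytes per transfer, and the 6.4 to 8 GB/s memory figures line up with the usual fully buffered DIMM accounting of a channel carrying 8 bytes of reads plus 4 bytes of writes per transfer; that reading is our assumption, since Intel may state the figures differently. A quick Python sanity check of the arithmetic:

  # Rough sanity check of the bandwidth figures quoted above (illustrative only).
  def fbd_channel_gbs(mts):
      """Combined read+write peak for one FB-DIMM channel, in GB/s
      (assumes 8 bytes read + 4 bytes written per memory transfer)."""
      return mts * 1e6 * (8 + 4) / 1e9

  def fsb_gbs(mts):
      """Peak front-side bus bandwidth, in GB/s (8 bytes per transfer)."""
      return mts * 1e6 * 8 / 1e9

  print(fbd_channel_gbs(533))   # DDR2-533        -> ~6.4 GB/s
  print(fbd_channel_gbs(667))   # DDR2-667        -> ~8.0 GB/s
  print(fsb_gbs(1333))          # 1,333 MT/s FSB  -> ~10.66 GB/s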

The MFS5000SI compute modules support up to two Intel Xeon 5100/5200 dual-core or 5300/5400 quad-core processors; older Xeon processors are not supported. The eight DIMM slots hold up to 32 GB of paired memory in non-mirrored mode. The board does not support unbuffered DIMMs, so only fully buffered DDR2-533 and DDR2-667 DIMMs can be used, although the former are not validated for the MFS5000SI. The eight slots are split among four separate channels mastered by the memory controller hub, which Intel rates at the peak bandwidths mentioned above: DDR2-533 at 6.4 GB/s and DDR2-667 at 8 GB/s. DIMMs must be installed in pairs, except in a single-DIMM configuration, where the DIMM must sit in the first slot (A1).
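
Those population rules are simple enough to encode. The sketch below is only illustrative: slot labels beyond A1 are assumptions on our part rather than something spelled out above.

  # Minimal sketch of the DIMM population rules described above. Only slot A1 is
  # named in the text; the other labels (A2, B1 ... D2) are assumed for illustration.
  VALID_TYPES = {"FBD DDR2-533", "FBD DDR2-667"}    # fully buffered DIMMs only
  SLOTS = ["A1", "A2", "B1", "B2", "C1", "C2", "D1", "D2"]
  MAX_TOTAL_GB = 32                                 # non-mirrored limit

  def check_population(dimms):
      """dimms maps slot -> (size_gb, dimm_type); returns a list of rule violations."""
      problems = []
      for slot, (size_gb, dimm_type) in dimms.items():
          if slot not in SLOTS:
              problems.append("unknown slot " + slot)
          if dimm_type not in VALID_TYPES:
              problems.append(slot + ": unsupported DIMM type " + dimm_type)
      if sum(size for size, _ in dimms.values()) > MAX_TOTAL_GB:
          problems.append("more than 32 GB installed (non-mirrored limit)")
      if len(dimms) == 1 and "A1" not in dimms:
          problems.append("a single DIMM must sit in slot A1")
      elif len(dimms) > 1 and len(dimms) % 2 != 0:
          problems.append("DIMMs must be installed in pairs")
      return problems

  # A 16 GB population like Servers 2 and 3, assumed here to be eight 2 GB DIMMs.
  config = {slot: (2, "FBD DDR2-667") for slot in SLOTS}
  print(check_population(config) or "population looks valid")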

Intel supplied us with Kingston FB DDR2-667 RAM for our demo.

The second core component is the Intel 6321ESB I/O controller hub. Besides serving as an I/O controller, this onboard device supports the other I/O devices on the compute module board, including the PCI-X bridge, the gigabit Ethernet controller and the baseboard management controller. I/O devices that rely on the controller hub’s gateway services include the dual gigabit Ethernet connections, the USB connections, and the LSI 1064e SAS controller.
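
If you load a standard Linux build onto a module, all of these devices show up as ordinary PCI functions, so a quick inventory is easy. The sketch below simply walks sysfs; it assumes a Linux install with sysfs mounted, and the vendor IDs in the comments are the usual public ones rather than anything pulled from this particular system.

  # List PCI devices on a module from Linux via sysfs (a sketch; output varies).
  # Vendor IDs: 0x8086 = Intel (6321ESB functions), 0x1000 = LSI Logic (SAS 1064e),
  # 0x1002 = ATI/AMD (ES1000 graphics).
  import glob, os

  def read_attr(dev, name):
      with open(os.path.join(dev, name)) as f:
          return f.read().strip()

  for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
      print(os.path.basename(dev),
            "vendor=" + read_attr(dev, "vendor"),
            "device=" + read_attr(dev, "device"),
            "class=" + read_attr(dev, "class"))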

With the ventilation cover removed we see one of the CPUs and the I/O Controller Hub without their heatsinks. We couldn’t show the Memory Controller Hub as the fins were too tight to remove.

Network connectivity from the MFS5000SI to the chassis is provided by the dual gigabit Ethernet media access controllers built into the 6321ESB I/O controller hub. Since networking runs across the chassis’ midplane, no physical network ports are needed on the compute module; instead, data is routed through a gigabit Ethernet controller to the Ethernet switch module on the back of the chassis. Network performance also benefits from Intel I/O Acceleration Technology, which by design minimizes system overhead as data moves through the Xeon processors.
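
Despite the lack of physical ports on the module itself, the two MACs appear to the operating system as ordinary NICs whose links simply run across the midplane to the rear switch module. A small sketch (again assuming a Linux install) to confirm link state and negotiated speed:

  # Report link state and speed for each NIC from Linux sysfs (illustrative sketch).
  import glob, os

  def read_attr(path, name, default="n/a"):
      try:
          with open(os.path.join(path, name)) as f:
              return f.read().strip()
      except OSError:
          return default

  for iface in sorted(glob.glob("/sys/class/net/*")):
      name = os.path.basename(iface)
      if name == "lo":
          continue
      print(name + ": state=" + read_attr(iface, "operstate")
            + ", speed=" + read_attr(iface, "speed") + " Mb/s")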

The 16 MB AMD ES1000 graphics accelerator serves as the graphics engine, which can be accessed via the front VGA port of the MFS5000SI. The modes supported by the accelerator are as follows:

This chart shows the supported 2D modes for the ATI ES1000 Graphics Accelerator built into the MFSYS25’s Compute Modules.

On-board video provided by an ATI ES1000 Graphics Processor.

There are additional ports for local administration on the front panel of each compute module. These are especially handy for those times when you can’t access the compute modules over the network. You get one VGA port to run your video and two USB 2.0 ports for input devices or external drives. 

Not too far from the video port is a small power button on the front panel. It’s great to be able to start the server manually, but the button is so tiny that a finger can’t press it; you’ll need a pen or paper clip to apply enough pressure to trigger the power-on process. My concern with a button like this is that I’ve seen similar-sized buttons break when operators and administrators press down on them too hard with a pen.

For quick status checks, activity and status LEDs for NICs 1 and 2, drive activity, power, and I/O are on the lower edge of the panel, while the remaining real estate on the front panel is perforated for air induction.

The top cover of one of the compute modules has a diagram of each server’s internal configuration.
