Today's tests involve typical 1U server platforms. Supermicro sent along a new 1U SuperServer configured with two Intel Xeon E5-2690 v3 processors and 16 x 8 GB DDR4-2133 DIMMs from Samsung. We had a similar 1U Supermicro platform and pairs of Intel Xeon E5-2690 v1 and v2 processors to create a direct comparison. The Xeon E5-2690s generally sit at the high end of what eventually becomes mainstream. For example, companies like Amazon use the E5-2670 v1 and v2 quite extensively in their AWS EC2 compute platforms. The -2690 generally offers the same core count, just at a higher clock rate.
Intel also sent along a 2U "Wildcat Pass" server platform configured with two Xeon E5-2699 v3 samples, 8 x 16 GB registered DDR4 modules (one DIMM per channel), and two Intel SSD DC S3500s. The E5-2699 v3 is a massive processor. It wields a full 18 cores capable of addressing 36 threads through Hyper-Threading. Forty-five megabytes of shared L3 cache work out to 2.5 MB per core, and the whole configuration fits into a 145 W TDP.
Naturally, this is going to represent a lower-volume, high-dollar server. But it's going to illustrate the full potential of Haswell-EP, too. We're using the Wildcat Pass server as our control for Intel's newest architecture.
Meanwhile, a Lenovo RD640 2U server operates as our control for Sandy Bridge-EP and Ivy Bridge-EP. It leverages 8 x 16 GB of registered DDR3 memory, totaling 128 GB. We dropped those SSD DC S3500s in there, too.
As we make our comparisons, keep a few points in mind. First, at the time of testing, DDR4 RDIMM pricing is absolutely obscene. Street prices are several times higher per gigabyte than DDR3. This will come down over time as manufacturing ramps up. But prohibitive expense did affect our ability to configure the servers with more than 128 GB.
We are focusing today's review on processor performance and power consumption. As a result, we are using the two 240 GB SSD DC S3500s in a RAID 1 array. We did have a stack of trusty SanDisk Lightning 400 GB SLC SSDs available, but neither of our test platforms came with SAS connectivity. Although there are plenty of add-in controllers that would have done the job, there is clearly a market shift happening away from such configurations. Sticking with SATA-based SSDs kept the storage subsystem's power consumption relatively low while reflecting a fairly common arrangement in servers that rely on shared network storage.
Bear in mind also that we're using 1U and 2U enclosures, each with a single server inside. The Xeon E5 series is often found in high-density configurations with multiple nodes per 1U, 2U, or 4U chassis. For instance, the venerable Dell C6100, based on Nehalem-EP and Westmere-EP, was extremely popular with large Web 2.0 outfits like Facebook and Twitter. Many of those platforms have been replaced by OpenCompute versions, but we expect many non-traditional designs to be popular with the E5-2600 v3 generation, especially given its power characteristics.
- Xeon E5-2600 v3 Platform Introduction
- Meet Intel's Grantley Platform
- Fortville: 40 GbE For The Masses
- How We Tested
- Supermicro SYS-6018R-WTR
- Linux-Bench Components And Test Setup
- Benchmark Results
- More Benchmark Results
- Power Consumption Results
- Haswell-EP Evolves The Server And Workstation



Actually, we should be trying to move away from traditional serial-style processing and toward parallel processing. Each core can handle only one task at a time and only utilizes its own resources.
This is unlike a GPU, where many processors share the same resources and perform multiple tasks at the same time. The problem is that this type of architecture is not supported at all in CPUs, and Nvidia is looking for people to learn to program for parallel-style architectures.
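As a minimal illustration of the data-parallel style under discussion, here is a hypothetical C++ sketch (the sizes and names are arbitrary, not from the article) that splits independent work across all available hardware threads, with no shared state to contend over:

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Scale one slice of the data. Each thread owns a disjoint range, so the
// workers never touch the same elements and no locking is required.
void scale_range(std::vector<float>& data, std::size_t begin, std::size_t end, float factor) {
    for (std::size_t i = begin; i < end; ++i)
        data[i] *= factor;
}

int main() {
    std::vector<float> data(1 << 20, 1.0f);
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = data.size() / workers;

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end = (w == workers - 1) ? data.size() : begin + chunk;
        pool.emplace_back(scale_range, std::ref(data), begin, end, 2.0f);
    }
    for (auto& t : pool)
        t.join();

    std::cout << "sum: " << std::accumulate(data.begin(), data.end(), 0.0) << "\n";
}
```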
But this lineup of CPUs is clearly a marvel of engineering and hard work. Glad to see the server industry will truly start to benefit from the low power and finely tuned abilities of Haswell, along with the recently introduced DDR4, which is optimized for low power usage as well. This, combined with flash-based storage (aka SSDs), which also has a lower power drain than the average HDD, will slash through server power bills and save companies literally billions of dollars. Technology is amazing, isn't it?
However, with multiple cores, now we can have better AI and other "off-screen" items that don't necessarily always depend upon the user's direct input. There's still a lot of work to be done there, though.
I think all of the major server vendors are going to suck up all of the major memory manufacturers' DDR4 capacity for a while before the prices go down.
> I think all of the major server vendors are going to suck up all of the major memory manufacturers' DDR4 capacity for a while before the prices go down.
Whether it helps or hinders will ultimately depend on the VM admin. What most VM admins don't realize is that HT can actually end up degrading performance in virtual environments unless the admin takes specific steps to use HT properly (and most do not). A lot of companies will tell you to turn off HT to increase performance because they've dealt with a lot of VM admins who don't set things up properly. Many VM admins over-allocate, which is part of the reason using HT can degrade performance, but there are other settings in the hypervisor that also have to be configured so the guest VMs get the resources they need.
This is easier said than done, since there are tons of everyday algorithms, such as text/code parsing, that are fundamentally incompatible with threading. If you want to build a list or tree using threads, you usually need to split the operation so each thread works in an isolated part of the list/tree; otherwise they trip over each other and waste most of their time waiting on mutexes. At the end of the build, you need a merge pass to bring everything back together, which is usually not very thread-friendly if you want it to be efficient.
In many cases, trying to convert algorithms to threads is simply more trouble than it is worth.
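To make that split-and-merge pattern concrete, here is a minimal, hypothetical C++ sketch (the worker counts and item counts are made up): each thread fills its own private list, avoiding mutex contention during the build, and a serial splice at the end brings the pieces back together; that merge pass is where the thread-unfriendly overhead lives.

```cpp
#include <iostream>
#include <list>
#include <thread>
#include <vector>

// Each thread builds its own private list, so no mutex is needed during
// the build phase. The price is a serial merge pass at the end, which is
// exactly the overhead the comment above describes.
int main() {
    const unsigned workers = 4;
    const int items_per_worker = 1000;
    std::vector<std::list<int>> partial(workers);

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&partial, w, items_per_worker] {
            for (int i = 0; i < items_per_worker; ++i)
                partial[w].push_back(static_cast<int>(w) * items_per_worker + i);
        });
    }
    for (auto& t : pool)
        t.join();

    // Serial merge: splice is O(1) per list, but this step is inherently
    // sequential and grows with the number of workers.
    std::list<int> result;
    for (auto& p : partial)
        result.splice(result.end(), p);

    std::cout << "total items: " << result.size() << "\n";
}
```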
Simply never.
A game is made of sound, logic, and graphics. You may dedicate these three processes to a number of cores, but they remain three. As you split the load, some of the logic must track who did what and where. Logic deals mainly with FPU units, while graphics deal with integers. GPUs are great integer number crunchers, but they have to be fed by the CPU, so an extra core manages data through different memories; this is where we start failing. Keeping everything in one spot, with the same resources, reduces the need to transfer data. Implementing a whole processor with GPU, FPU, x86, and sound all in one package with onboard memory makes for the ultimate gaming processor. As long as we render scenes with triangles, we will keep using the legacy stuff. When the time comes to render scenes per pixel, we will need a fraction of today's performance, half of the texture memory (just scale the highest quality), and half of the model memory. Epic is already working on that.
Great points. One minor complication is that the NVIDIA GeForce Titan used in the Haswell-E review would not have fit in the 1U servers (let alone be cooled well in them). Onboard Matrox G200eW graphics are too much of a bottleneck for the standard test suite.
On the other hand, this platform is going to be used primarily in servers. Although there are some really nice workstation options coming, we did not have access to them in time for testing.
One plus is that you can run the tests directly on your own machine by booting an Ubuntu 14.04 LTS LiveCD and issuing three commands. There is a video and the three simple commands here: http://linux-bench.com/howto.html That should give you a rough idea of how your system's performance compares to the test systems.
Hopefully we will get some workstation-appropriate platforms in the near future so we can run the standard set of TH tests. Thanks for your feedback; it is certainly on the radar.
Agreed; the conclusion says "Server and Workstation" while the benchmarks only show server applications. I came here to see workstation performance, especially 3ds Max rendering (and I hope to see V-Ray and mental ray benchmarks as well), plus the Adobe applications mentioned above.
> This is easier said than done, since there are tons of everyday algorithms, such as text/code parsing, that are fundamentally incompatible with threading. If you want to build a list or tree using threads, you usually need to split the operation so each thread works in an isolated part of the list/tree; otherwise they trip over each other and waste most of their time waiting on mutexes. At the end of the build, you need a merge pass to bring everything back together, which is usually not very thread-friendly if you want it to be efficient.
> In many cases, trying to convert algorithms to threads is simply more trouble than it is worth.
Then wouldn't a smart move be to shift toward an HSA-oriented architecture that combines parallel compute with serial-oriented task management? I believe that is essentially what AMD did with Kaveri. It is better suited to consumer/workstation workloads that can utilize OpenCL.
Although that wouldn't really be the best option in a server setting. There are usually two scenarios: you either need huge amounts of raw compute for services such as OnLive,
or the streamlined style of multiple CPUs performing general server tasks, such as accepting large numbers of packet requests and ping queries, which is what the run-of-the-mill server is built for.
Regarding what I said about Nvidia, it was this little piece:
http://www.nvidia.com/object/what-is-gpu-computing.html
Forgive me if I am incorrect about anything. I'm certainly not an engineer or a talented programmer by any means.
In theory, yes. In practice, not necessarily: algorithms like parsing are full of non-linear, highly context-sensitive, branch-driven code, which makes those sorts of algorithms effectively impossible to thread no matter how close you bring the extra compute power. That is what I mean by fundamental algorithms that are also fundamentally non-threadable.
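As a toy illustration of that context sensitivity, consider this hypothetical C++ sketch (not from the article; the function and input are made up): whether each comma "counts" depends on state accumulated from every earlier character, so the scan cannot be chopped into independent chunks without first resolving that context.

```cpp
#include <iostream>
#include <string>

// Count top-level commas in an input, ignoring commas inside quoted strings.
// Whether any given character is "inside a string" depends on every character
// before it, so the scan cannot be split at an arbitrary offset -- this is the
// sequential dependency the comment above is pointing at.
int count_top_level_commas(const std::string& input) {
    bool in_string = false;
    int commas = 0;
    for (char c : input) {
        if (c == '"')
            in_string = !in_string;   // state flips as we go
        else if (c == ',' && !in_string)
            ++commas;                 // meaning depends on accumulated state
    }
    return commas;
}

int main() {
    std::cout << count_top_level_commas("a,b,\"c,d\",e") << "\n";  // prints 3
}
```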
> I think all of the major server vendors are going to suck up all of the major memory manufacturers' DDR4 capacity for a while before the prices go down.
> Whether it helps or hinders will ultimately depend on the VM admin. What most VM admins don't realize is that HT can actually end up degrading performance in virtual environments unless the admin takes specific steps to use HT properly (and most do not). A lot of companies will tell you to turn off HT to increase performance because they've dealt with a lot of VM admins who don't set things up properly. Many VM admins over-allocate, which is part of the reason using HT can degrade performance, but there are other settings in the hypervisor that also have to be configured so the guest VMs get the resources they need.
Ummm... buddy... I didn't mention HT (hyper-threading) at all... just memory.
Xajel,
An AE test will only be useful if the system has a lot of RAM (64 GB+), and that could be hard to set up at the moment, given the cost involved (unless a kind RAM maker can provide a whole pile of kits to Tom's).
Ian.
"c-ray 1.1 is a popular and simple ray-tracing benchmark for Linux systems ..."
Blimey, wasn't expecting to see that in the benchmark list.
c-ray has taken on a life of its own (I didn't know it was being so widely used
until about a year ago). I took it over from John because he didn't have time
for it anymore.
One thing though, can you change the link to the results page please? The
Blinkenlights site is a mirror (I have no control over its persistence) and may
not be around in the future. The primary location is here, my own domain.
I'm glad you didn't use the simple test, it is indeed really small, and on any
kind of modern hardware it completes way too fast for useful measurement.
It's a pity though that the other tests don't use the settings I've used, since
the results can't be compared, but never mind.
The other tests do impose a degree of main memory access, but not much.
I created them mainly to have something which lasted long enough to be
useful for testing multicore systems. Even then, the slowest test takes just
11s to complete on an old 8-core XEON. Maybe I should start a separate
new table for something like 'sphfract' at 7500x3500 with 8X oversampling...
Btw, c-ray's threading is by scanline, so there's no gain from having more
threads than the no. of lines in an image.
Ian.
PS. Just a thought - any chance you could manually run the C-ray tests
using the settings on my page? I'll add them to the tables. 8) Include the
'simple' test aswell, just for the hell of it.
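For readers curious what "threading by scanline" looks like, here is a rough, hypothetical C++ sketch of the idea (this is not c-ray's actual code, and all names and dimensions are made up): each thread takes every Nth row, so once the thread count reaches the number of rows, extra threads have no rows left to claim.

```cpp
#include <algorithm>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical scanline-parallel renderer skeleton. Thread t renders rows
// t, t+N, t+2N, ... so thread counts beyond the row count cannot help.
const int WIDTH = 800, HEIGHT = 600;

void render_rows(std::vector<float>& image, int first_row, int stride) {
    for (int y = first_row; y < HEIGHT; y += stride)
        for (int x = 0; x < WIDTH; ++x)
            image[y * WIDTH + x] = float(x + y) / (WIDTH + HEIGHT); // stand-in for tracing a ray
}

int main() {
    std::vector<float> image(WIDTH * HEIGHT);
    const unsigned threads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < threads; ++t)
        pool.emplace_back(render_rows, std::ref(image), static_cast<int>(t),
                          static_cast<int>(threads));
    for (auto& t : pool)
        t.join();

    std::cout << "rendered " << HEIGHT << " rows on " << threads << " threads\n";
}
```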
> I think all of the major server vendors are going to suck up all of the major memory manufacturers' DDR4 capacity for a while before the prices go down.
> Whether it helps or hinders will ultimately depend on the VM admin. What most VM admins don't realize is that HT can actually end up degrading performance in virtual environments unless the admin takes specific steps to use HT properly (and most do not). A lot of companies will tell you to turn off HT to increase performance because they've dealt with a lot of VM admins who don't set things up properly. Many VM admins over-allocate, which is part of the reason using HT can degrade performance, but there are other settings in the hypervisor that also have to be configured so the guest VMs get the resources they need.
> Ummm... buddy... I didn't mention HT (hyper-threading) at all... just memory.
You mentioned virtualization in the very first line; that's the crux of my post.
> This is easier said than done, since there are tons of everyday algorithms, such as text/code parsing, that are fundamentally incompatible with threading. If you want to build a list or tree using threads, you usually need to split the operation so each thread works in an isolated part of the list/tree; otherwise they trip over each other and waste most of their time waiting on mutexes. At the end of the build, you need a merge pass to bring everything back together, which is usually not very thread-friendly if you want it to be efficient.
> In many cases, trying to convert algorithms to threads is simply more trouble than it is worth.
> Then wouldn't a smart move be to shift toward an HSA-oriented architecture that combines parallel compute with serial-oriented task management? I believe that is essentially what AMD did with Kaveri. It is better suited to consumer/workstation workloads that can utilize OpenCL.
> Although that wouldn't really be the best option in a server setting. There are usually two scenarios: you either need huge amounts of raw compute for services such as OnLive,
> or the streamlined style of multiple CPUs performing general server tasks, such as accepting large numbers of packet requests and ping queries, which is what the run-of-the-mill server is built for.
> Regarding what I said about Nvidia, it was this little piece:
> http://www.nvidia.com/object/what-is-gpu-computing.html
> Forgive me if I am incorrect about anything. I'm certainly not an engineer or a talented programmer by any means.
It's a great concept, and AMD was brilliant for thinking of it. The two main problems I saw at the time of Kaveri's launch were the not-so-great serial performance and the near-total lack of software support, not to mention the weak OpenCL implementation. I mean weak compared to CUDA; don't think I prefer CUDA over OpenCL. In fact, I love OpenCL more (I have always loved open standards, where every company can adopt them and users have the choice of whatever hardware they want).
Most pro applications now, like 3D rendering, prefer CUDA over OpenCL, at least for the time being. I saw GPU-accelerated betas of V-Ray, but they only support CUDA due to the weak OpenCL implementation (on both sides, NVIDIA and AMD).
I'm not saying that OpenCL is weak, but from a software standpoint there's still a lot of work to do. OpenCL by itself is very promising, but it has to work well first.
> Xajel,
> An AE test will only be useful if the system has a lot of RAM (64 GB+), and that could be hard to set up at the moment, given the cost involved (unless a kind RAM maker can provide a whole pile of kits to Tom's).
> Ian.
RAM manufacturers really need to promote their products, you know, and one way of promoting is giving RAM to hardware sites... so if requested by the site, I bet there will be some companies interested in this promising new market (DDR4 + ECC DDR4).
At UK wholesale prices there's less than 15% difference between DDR3 and DDR4 for the same speed/size. This has as much to do with DDR3 having gone up ~50% in the last 12 months as with ready availability of DDR4 if you look in the right places.