Test Setup And Benchmarks
| Test System Configuration | |
| :-- | :-- |
| CPU | Intel Core i7-3960X (Sandy Bridge-E), 32 nm, 3.3 GHz base clock, 3.9 GHz maximum Turbo Boost, 15 MB cache, LGA 2011 |
| CPU Cooler | Swiftech Apogee GTX waterblock, MCP655-B pump, triple-fan radiator kit |
| Motherboard | Asus P9X79 WS, firmware 0603 (11-11-2011) |
| Graphics | Nvidia GeForce GTX 580: 772 MHz GPU, GDDR5-4008 |
| Hard Drive | Samsung 470 Series MZ5PA256HMDR, 256 GB SSD |
| Sound | Integrated HD audio |
| Network | Integrated gigabit networking |
| Power | Seasonic X760 SS-760KM: ATX12V v2.3, EPS12V, 80 PLUS Gold |
| OS | Microsoft Windows 7 Ultimate x64 |
| Graphics Driver | Nvidia GeForce 285.62 |
| Chipset Driver | Intel INF 126.96.36.1990 |
We wanted to see what effect various memory speeds might have on program performance, and games are one of the types of programs that occasionally show this difference. Nvidia’s GeForce GTX 580 is fast enough to keep the pressure on our CPU and memory rather than becoming a bottleneck itself.
Intel’s Core i7-3960X was locked at a 34x multiplier throughout testing to keep its clock frequency consistent, even at non-reference base clocks.
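Since every speed setting on this platform is derived from the base clock (BCLK), a quick sketch of that arithmetic may help. This is a hypothetical illustration, not our test tooling; the helper names and the effective 16x ratio for DDR3-1600 are our own assumptions about typical X79 settings.

```python
# Hypothetical sketch of Sandy Bridge-E clock arithmetic (not the article's tooling).
# Core clock and memory data rate are both multiples of the base clock (BCLK),
# so a locked 34x multiplier yields a known core frequency at any BCLK.

def core_clock_mhz(bclk_mhz: float, multiplier: int) -> float:
    """Core clock = BCLK x CPU multiplier."""
    return bclk_mhz * multiplier

def memory_data_rate_mts(bclk_mhz: float, effective_ratio: int) -> float:
    """Effective DDR3 data rate = BCLK x effective memory ratio (DDR doubling included)."""
    return bclk_mhz * effective_ratio

# Reference 100 MHz BCLK with the locked 34x multiplier:
print(core_clock_mhz(100.0, 34))        # 3400.0 MHz
# DDR3-1600 corresponds to an assumed effective 16x ratio at 100 MHz BCLK:
print(memory_data_rate_mts(100.0, 16))  # 1600.0 MT/s
# A non-reference 105 MHz BCLK shifts both, but the core clock stays predictable:
print(core_clock_mhz(105.0, 34))        # 3570.0 MHz
```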
The lowest possible game settings would show the biggest impact of memory performance on frames per second, but nobody actually games at those settings. Instead, we selected the lowest settings that high-end buyers would likely use (if forced to do so), along with a couple of other applications that have shown sensitivity to memory performance in the past.
| Benchmarks And Settings | |
| :-- | :-- |
| Stability Test | Memtest86+ v1.70, single pass (~45 minutes); max speed at CAS 9; minimum latency at DDR3-1600, -1333, and -1066 (see the arithmetic sketch after this table) |
| Bandwidth Test | SiSoftware Sandra 2011 SP4 Bandwidth Benchmark |
| DiRT 3 | 1680x1050, High Quality preset, no AA |
| Metro 2033 | 1680x1050, DX11, High, AAA, 4x AF, no PhysX/DOF |
| 3ds Max 2012 | Version 14.0 x64: Space Flyby, mental ray, 248 frames, 1440x1080 |
| WinRAR | Version 4.01: THG workload (464 MB) to RAR, command line switches "winrar a -r -m3" |
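To put the table’s CAS and data-rate figures in context, here is a back-of-the-envelope sketch of absolute latency and theoretical peak bandwidth. The quad-channel width reflects the X79 platform above; the function names are ours, not from any benchmark, and the printed values are simple arithmetic rather than measured results.

```python
# Back-of-the-envelope sketch (not from the article) of the two figures the
# stability and Sandra tests revolve around: absolute CAS latency in
# nanoseconds and theoretical peak bandwidth. Quad-channel width is assumed
# from the X79 platform used in this test system.

def cas_latency_ns(data_rate_mts: float, cas_cycles: int) -> float:
    """Absolute latency: CAS cycles / memory clock (half the DDR data rate)."""
    memory_clock_mhz = data_rate_mts / 2         # DDR transfers twice per clock
    return cas_cycles / memory_clock_mhz * 1000  # cycles / MHz -> ns

def peak_bandwidth_gbs(data_rate_mts: float, channels: int = 4) -> float:
    """Theoretical peak: data rate x 8 bytes per channel x channel count."""
    return data_rate_mts * 8 * channels / 1000   # MB/s -> GB/s

print(cas_latency_ns(1600, 9))   # 11.25 ns at DDR3-1600 CAS 9
print(cas_latency_ns(1066, 9))   # ~16.9 ns: same CAS, slower clock
print(peak_bandwidth_gbs(1600))  # 51.2 GB/s quad-channel DDR3-1600
```

The same CAS setting therefore costs meaningfully more real time at DDR3-1066 than at DDR3-1600, which is why the stability table tracks minimum latency separately at each data rate.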