Test Setup And Benchmarks
| Component | Details |
|:--|:--|
| Processor | Intel Core i7-3770K (Ivy Bridge), 3.5 GHz base, overclocked to 4.0 GHz (40 * 100 MHz), LGA 1155, 8 MB shared L3, Hyper-Threading enabled, power savings enabled |
| Motherboard | Gigabyte Z77X-UD5H (LGA 1155), Z77 Express chipset, BIOS F15q |
| Memory | G.Skill 16 GB (4 x 4 GB) DDR3-1600, F3-12800CL9Q2-32GBZL @ 9-9-9-24 and 1.5 V |
| Hard Drive | Crucial m4 SSD, 256 GB, SATA 6Gb/s |
| Graphics | Nvidia GeForce GTX 780 3 GB |
| | AMD Radeon HD 7990 6 GB |
| | AMD Radeon HD 7970 GHz Edition 3 GB |
| | Nvidia GeForce GTX 580 1.5 GB |
| | Nvidia GeForce GTX 690 4 GB |
| | Nvidia GeForce GTX 680 2 GB |
| | Nvidia GeForce GTX Titan 6 GB |
| Power Supply | Cooler Master UCP-1000 W |

System Software And Drivers

| Software | Details |
|:--|:--|
| Operating System | Windows 8 Professional 64-bit |
| Graphics Drivers | AMD Catalyst 13.5 (Beta 2) |
| | Nvidia GeForce Release 320.00 |
| | Nvidia GeForce Release 320.18 (for GeForce GTX 780) |
Getting Frame Time Variance Right
Astute readers will notice that the numbers on the following page (and those thereafter) are quite a bit more conservative than the same page in my Radeon HD 7990 review, and there is a reason for this. We were previously reporting the raw and real-world frame rates, and then showing you frame time variance data with runt and dropped frames still included. But if those runts and drops aren't what you actually experience on screen, it isn't fair to point to the raw frame time latencies and hammer AMD over them.
This is why we’re now giving you the more practical frame rates over time, along with frame rate variance numbers that match. The outcome is far less exaggerated, though still very telling in terms of the games where AMD struggles.
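To make that filtering step concrete, here is a minimal sketch in Python of the idea described above: exclude dropped and runt frames from a capture before computing the practical frame rate and the frame-to-frame variance. The field names and data layout are hypothetical (FCAT's actual output format differs), and the 21-scanline runt threshold is a commonly cited FCAT convention rather than something specified in this article.

```python
# Sketch of filtering runt/dropped frames from a capture before
# computing practical FPS and frame-to-frame variance.
# Field names and the runt threshold are assumptions, not FCAT's
# actual file format.

RUNT_SCANLINE_THRESHOLD = 21  # commonly cited FCAT convention (assumption)

def practical_frame_times(frames):
    """Return frame times (ms) with runt and dropped frames excluded.

    `frames` is a list of dicts like {"time_ms": float, "scanlines": int},
    where `scanlines` is how much of the screen the frame occupied
    (0 means the frame was dropped entirely).
    """
    return [f["time_ms"] for f in frames
            if f["scanlines"] >= RUNT_SCANLINE_THRESHOLD]

def average_fps(frame_times_ms):
    """Practical average frame rate over the captured run."""
    total_s = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_s if total_s > 0 else 0.0

def frame_time_variance(frame_times_ms):
    """Consecutive frame-to-frame deltas (ms); large deltas read as stutter."""
    return [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]

# Example: a four-frame capture where the second frame is a runt.
capture = [
    {"time_ms": 16.7, "scanlines": 1440},
    {"time_ms": 2.1,  "scanlines": 5},     # runt: excluded from the stats
    {"time_ms": 31.0, "scanlines": 1440},
    {"time_ms": 17.2, "scanlines": 1440},
]
times = practical_frame_times(capture)
print(average_fps(times), frame_time_variance(times))
```

The point of computing variance only over the kept frames is that both charts then describe the same data: the frame rate you'd actually perceive and the stutter within it.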
Benchmarks And Settings

| Benchmark | Settings |
|:--|:--|
| Battlefield 3 | Ultra Quality preset, v-sync off, 2560x1440, DirectX 11, "Going Hunting", 90-second playback, FCAT |
| Far Cry 3 | Ultra Quality preset, DirectX 11, v-sync off, 2560x1440, custom run-through, 50-second playback, FCAT |
| Borderlands 2 | Highest-quality settings, PhysX Low, 16x anisotropic filtering, 2560x1440, custom run-through, FCAT |
| Hitman: Absolution | Ultra Quality preset, MSAA off, 2560x1440, built-in benchmark sequence, FCAT |
| The Elder Scrolls V: Skyrim | Ultra Quality preset, FXAA enabled, 2560x1440, custom run-through, 25-second playback, FCAT |
| BioShock Infinite | Ultra Quality settings, DirectX 11, Diffusion Depth of Field, 2560x1440, built-in benchmark sequence, FCAT |
| Crysis 3 | Very High system spec, MSAA: Low (2x), High texture resolution, 2560x1440, custom run-through, 60-second sequence, FCAT |
| Tomb Raider | Ultimate Quality preset, FXAA enabled, 16x anisotropic filtering, TressFX Hair, 2560x1440, custom run-through, 45-second sequence, FCAT |
| LuxMark 2.0 | 64-bit binary, version 2.0, Sala scene |
| SiSoftware Sandra 2013 Professional | Sandra Tech Support (Engineer) 2013.SP1, Cryptography, Financial Analysis performance |
Of course, one could argue that as you move toward higher-end products, each performance increase gets smaller and the price-to-performance ratio keeps climbing. However, for the past three or four years, the second-highest-end GPU has never been this close in performance to the flagship. The gap is usually significant enough that the highest-end GPU (the GTX x80) still has its place.
The GTX Titan was released to make the GTX 780 look incredibly good, and people (especially on the internet) will spread the word fast enough, claiming the $650 release price for the GTX 780 is fair and reasonable. People who didn't even bother reading reviews and benchmarks will take their word for it and pay the premium for the GTX 780.
Nvidia is taking a different route to compete with AMD, or one could say they're not even trying to compete with AMD on price/performance (at least for their high-end products).
That's a pretty bad analogy. A GPU still runs smoothly even with some of its cores/VRAM/etc. disabled; it doesn't increase latency or frame times.
I must've missed something. Why wait a week?