Tomb Raider
We’re going relatively easy on our test group with Tomb Raider. Typically, this game is made more demanding by enabling its compute-heavy TressFX feature. We disable the AMD-biased capability, though, just as we leave PhysX unused in some of the other benchmarks. Fair is fair.
The benchmark runs three times, though our video only depicts one iteration. The first pass is discarded as a warm-up, and the last two runs are averaged together.
We adjusted the settings once again to let us test a wide and balanced range of boards rendering at smooth frame rates.
| Tomb Raider | Settings |
|---|---|
| Run 1 | 1920x1080 (1080p), API: DirectX 11, Quality: Ultra, Anti-aliasing: FXAA, Texture Quality: Ultra, AF: 16x, Hair Quality: Normal, Shadows: Normal, Shadow Resolution: High, SSAO: Ultra, DoF: Ultra, Reflection Quality: High, LOD Scale: Ultra, Post-processing: On, High Precision RT: On, Tessellation: On |
| Run 2 | 3840x2160 (2160p), API: DirectX 11, Quality: Ultra, Anti-aliasing: Off, Texture Quality: High, AF: 8x, Hair Quality: Normal, Shadows: Normal, Shadow Resolution: High, SSAO: Normal, DoF: Normal, Reflection Quality: High, LOD Scale: Normal, Post-processing: On, High Precision RT: On, Tessellation: Off |
| Loops | Three per resolution; two used for evaluation |
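To make that evaluation rule concrete, here is a minimal sketch in Python. This is not our actual tooling, and the FPS values are made-up placeholders; it just shows the warm-up-and-average logic described above.

```python
# Minimal sketch of the evaluation rule: each benchmark loops three times per
# resolution, the first pass is discarded as a warm-up, and the remaining two
# are averaged. The FPS numbers below are placeholders, not real results.
def average_fps(run_fps: list[float]) -> float:
    """Drop the warm-up run, then average the measured runs."""
    _warm_up, *measured = run_fps
    return sum(measured) / len(measured)

print(average_fps([57.2, 61.8, 61.4]))  # -> 61.6
```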
Hitman: Absolution
Hitman is also lightweight enough that it can be played on almost any graphics card (Ed.: In fact, poor scaling was why I pulled it from our graphics card launches). It might not be the most recent game, but we still like to include it for this reason.
Another three benchmark runs per resolution give us one warm-up and two results to average. The video showcases the sequence used for our test.
Once again, here are the settings we use:
| Hitman: Absolution | Settings |
|---|---|
| Run 1 | 1920x1080 (1080p), MSAA: 2x, Texture Quality: High, AF: 16x, Shadows: Ultra, SSAO: Normal, Global Illumination: On, Reflections: High, FXAA: Off, LoD: Ultra, DoF: High, Tessellation: On, Bloom: Normal |
| Run 2 | 3840x2160 (2160p), MSAA: Off, Texture Quality: High, AF: 16x, Shadows: High, SSAO: Off, Global Illumination: On, Reflections: High, FXAA: Off, LoD: High, DoF: High, Tessellation: On, Bloom: Normal |
| Loops | Three per resolution; two used for evaluation |
- Introducing Our Reference System And Methodology For 2014
- The Components In Our Reference Build
- How We Measure Power Consumption
- How We Measure Noise
- 3DMark Fire Strike And Unigine Heaven
- Metro: Last Light And Thief
- DiRT 3 And BioShock Infinite
- Tomb Raider And Hitman: Absolution
- Battlefield 4 And Far Cry 3
- Covering The Bases


I was never a fan of this style of benchmarking. It sure gives the clean graphs of GPU capabilities that we've always needed. But I would love to see new bottleneck analysis, or at least a parallel test done on a midrange PC.
Everyone should keep in mind that these charts represent the performance of less than 1% of the PC builds out there.
If I recall correctly, we are at this moment at the edge of PCIe 2.0 x8, which equals PCIe 1.0 x16. The next generation or the one after will finally outdate PCIe 1.0 in single-GPU and PCIe 2.0 in dual-GPU configs, as there will finally be noticeable bottlenecks.
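For reference, that equivalence follows from the published per-lane transfer rates. A quick Python sketch (PCIe 1.0/2.0 use 8b/10b encoding, PCIe 3.0 uses 128b/130b):

```python
# Usable PCIe bandwidth per direction, from the published per-lane transfer
# rates. PCIe 1.0 and 2.0 use 8b/10b encoding (80% efficiency); 3.0 uses 128b/130b.
TRANSFER_GT_S = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0}
EFFICIENCY = {"1.0": 8 / 10, "2.0": 8 / 10, "3.0": 128 / 130}

def link_gb_s(gen: str, lanes: int) -> float:
    """GB/s per direction: GT/s x encoding efficiency / 8 bits per byte x lanes."""
    return TRANSFER_GT_S[gen] * EFFICIENCY[gen] / 8 * lanes

for gen, lanes in [("1.0", 16), ("2.0", 8), ("2.0", 16), ("3.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {link_gb_s(gen, lanes):.1f} GB/s")
# PCIe 1.0 x16: 4.0 GB/s == PCIe 2.0 x8: 4.0 GB/s, matching the comment above.
```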
It would be nice to add an OpenGL cross-platform game, such as one based on ioquake or something more modern, and test it under Microsoft Windows and under GNU/Linux.
Even better if it is the upcoming SteamOS, to let us know the performance of the same game under Windows and under GNU/Linux.
Also, it would be nice to test on Windows with and without antivirus software, perhaps Avast, which is free, or any other of your preference.
Last but not least, OpenGL and DirectX go through version changes, and being able to split card generations by OpenGL/DirectX version support would help, as would a current price/performance index based on the prices in your sponsored links.
720p (1280x720 = 921,600 pixels) is roughly half of 1080p (1920x1080 = 2,073,600 pixels).
And when a game is very demanding, or you prefer to play with better graphics, playing at 720p is a great option.
Of course, the latest and best GPUs can play at 4K with full graphics, but when we read the benchmarks we also want to know whether our actual card CAN play at 720p (1k), or what the best cards can do at 1k, to be able to compare.
Also, even if it is not standard or accurate, for benchmarking purposes calling 720p "1k", 1080p "2k", and 2160p "4k" would be easier to understand at a quick glance than UHD, FHD, and HD, which could be used too: UHD (4k), FHD (2k), HD (1k).
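The arithmetic here checks out. A quick Python sketch of the pixel counts relative to 1080p:

```python
# Pixel counts for common resolutions, relative to 1080p.
RESOLUTIONS = {"720p": (1280, 720), "1080p": (1920, 1080), "2160p": (3840, 2160)}

base = 1920 * 1080  # 1080p as the 100% reference
for name, (w, h) in RESOLUTIONS.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / base:.0%} of 1080p)")
# 720p:  921,600 pixels (44% of 1080p) -> "half of 1080p, more or less"
# 2160p: 8,294,400 pixels (400% of 1080p)
```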
720p doesn't stress most reasonably decent GPUs much, and how many people would drop their resolution to 720p these days, with all the re-scaling artifacts that might add? In most cases, it would make more sense to stick with the native resolution and turn some of the more GPU/memory-intensive settings down a notch or two. At least I know I greatly prefer cleaner images over "details" that get blurred by the lower resolution and further distorted by re-scaling.
Considering how you can get 1080p displays for $100, I would call standardizing the GPU chart on 1080p fair enough: the people who can only afford a $100 display won't care much about enabling every bell and whistle and the people who want to max everything out likely won't be playing on $100 displays and $100 GPUs either.
I also like seeing how current cards stack up performance-wise to previous generations. That really helps when you're deciding whether to upgrade or not.
So you're not (directly) controlling the relative humidity of the air you're testing the GPUs in? You do know that it affects air's thermal capacity, huh?
(I'm just joking; I'm glad you normalize temperature. Besides, by using an AC unit you're already putting a ceiling on RH%, thus controlling it indirectly.)
The air conditioner is only the last resort. I live in Central Europe, on the first floor of a very old, historical building with very thick walls (up to one meter!). Even in the hottest summer, it is impossible to get above 25 or 26°C inside (with closed doors and windows), and that can be cooled down quickly and easily. Mostly I have to heat my room instead.
For the 720p lovers:
After summer, I'll start the entry-level charts with smaller cards and the same benchmarks, but at lower resolutions and settings for a better comparison. The performance difference between all cards is too large to put into one database; that would be bound to fail.
However, I appreciate the effort that has been put into trying to provide some sort of comprehensive chart that can be of some use.
I understand your interest, but in the end this is a big time problem. It would be a good idea for a separate review with the most common cards, though.
1. The charts list the AMD Radeon R9 290X Reference twice:
- 4GB Uber Mode / R9 290, 4GB GDDR5, 1000 MHz
- 4GB Quiet Mode / R9 290, 4GB GDDR5, 1000 MHz
What's the real difference between the Uber and Quiet mode entries?
2. Anything to address driver date? We all know that both teams make driver improvements, but if a card is tested with version X.01 in May and other cards are added in September, how do we compare the current performance of the May-tested card with the current driver ZZ.01 against the September card with the current driver? Will the tests be updated with driver revisions for apples-to-apples comparisons (current vs. current, or release date vs. release date)? Of course this is asking a lot, but it would make the data more relevant.
3. Any chance of getting a bar extension on those charts so that, for example, we can see just what a non-reference card adds to the equation, either out of the box or when OC'd "Bawlz to the Wall"?
4. Any chance of getting a specs chart for the "variations" covering stock clocks (base and boost), PCB, VRM phases, warranty, and dimensions, like the one here:
http://www.tomshardware.com/reviews/geforce-gtx-560-ti-roundup-asus-engtx560-graphics-card-overclocking,2858.html
Yes, again asking a lot, but it would make everything more relevant. I haven't installed a reference card as far back as I can remember.
Do reviewers get cherry-picked golden sample GPUs for testing?
Does company X bin their superclock/OC model chips higher?
Does ASIC quality consistently mean better overclocking potential?
Does ASIC quality have any significance at all in real world gaming?
Etc.
- 90% of all cards are pure retail cards, no golden samples. All of this was verified and proven.
- ASIC quality is more or less voodoo. GPU-Z makes a lot of errors, and it isn't clear which GPU-Z version gives you which result.
- I've tested a handful of 290X cards, for example, and the benchmark results were mostly similar, but the power consumption was not (up to 5% difference).
As we wrote in the article, the selection of benchmarks is the result of a long selection process, and if you take a look at the normalized results (index), you can see that they are very close to the averages of other sites. For all of these benchmarks, the driver war is more or less over, so we get stable results over a longer period. No vendor-exclusive features were used, such as TressFX or PhysX, certain anti-aliasing options, or lighting/shadow effects.
If I see significant driver improvements from company A or N, I'm now able to partially re-bench everything. This was done once with the latest "wonder driver" from Nvidia a few weeks ago. And DiRT 3? OpenCL is public, not AMD-exclusive. It's Nvidia's job to finally improve its OpenCL performance, because it's 100% a driver issue.
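The exact formula behind the index isn't spelled out here, but a common approach, sketched in Python purely as an assumption, is to normalize each card to a reference card per benchmark and combine the ratios with a geometric mean, so that no single game dominates the result:

```python
# Hypothetical sketch of a normalized performance index (the site's actual
# method is not described here): per-benchmark ratios against a reference
# card, combined with a geometric mean so no single title dominates.
from math import prod

def performance_index(card_fps: dict[str, float],
                      reference_fps: dict[str, float]) -> float:
    ratios = [card_fps[bench] / reference_fps[bench] for bench in reference_fps]
    return prod(ratios) ** (1 / len(ratios)) * 100  # reference card = 100

ref = {"Tomb Raider": 60.0, "Hitman": 55.0}   # made-up numbers
card = {"Tomb Raider": 75.0, "Hitman": 66.0}
print(f"{performance_index(card, ref):.1f}")  # -> 122.5
```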
The difference between quiet and uber mode, with fully heated cards (and normalized over all benchmarks), is below 2%. In theory you can hold the clock rates a little bit longer, but after heating up and reaching the target temperature above 90°C, all of these reference cards reduce their clock speed to hold it. This "uber mode" only disguises the weakness of this really horrible cooling solution for a few minutes longer.
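A deliberately simplified sketch of the temperature-target behavior described above, with all numbers hypothetical: below the target, the card steps toward its boost clock; at or above the target, it sheds clock speed to hold the temperature.

```python
# Hypothetical, heavily simplified model of a temperature-target boost
# controller. Real firmware is far more complex; the target, boost clock,
# and step size below are illustrative assumptions only.
def next_clock_mhz(clock: float, temp_c: float,
                   target_c: float = 94.0,   # assumed temperature target
                   boost: float = 1000.0,    # assumed maximum boost clock
                   step: float = 13.0) -> float:
    if temp_c >= target_c:
        return max(0.0, clock - step)     # throttle to hold the target temp
    return min(boost, clock + step)       # otherwise climb back toward boost
```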
And finally:
I will bench all reference cards first to create an overview, but I'll also add the results of custom cards later, periodically, each month. I'm not able to write reviews and benchmark more than 20 cards per month at the same time. The current charts' content was produced within two months, and I'm sure this is a good base to extend step by step.
In addition, most games don't really stress a good card. Try 3D on a 4K monitor, and then we can really talk about a stress test and performance gains that make a difference in gameplay.