These are two Performance Rating tables I created for myself, for two reasons. I tend to upgrade my hardware by powers of two: if my own hardware is rated 1.0 and what I plan to upgrade to is rated 2.0, that means a doubling of performance compared to my current setup. The other reason I made these performance charts is that Tom's Hardware has no such performance rating.
It should be relatively easy to add this feature to Tom's Hardware: all that is needed is to pick a baseline and then divide the test results of all the other entries in the CPU or GFX charts by the baseline entry's value, and there's your performance rating. Simple, right?
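To make the idea concrete, here is a minimal sketch of that divide-by-baseline step. The hardware names and scores below are made up for illustration only, not real chart data:

```python
# Made-up benchmark scores (e.g. average FPS); not real chart data.
scores = {
    "Baseline Card": 50.0,
    "Mid-range Card": 75.0,
    "High-end Card": 100.0,
}

# The rating is simply each score divided by the baseline's score.
baseline = scores["Baseline Card"]
ratings = {name: score / baseline for name, score in scores.items()}

for name, rating in sorted(ratings.items(), key=lambda kv: kv[1]):
    print(f"{name}: {rating:.1f}")
```

The baseline comes out as 1.0 and a card with twice its score comes out as 2.0, which is exactly the "doubling of performance" reading described above.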
You could use a fixed performance baseline, as in my two examples, in which case I guess you guys could call it the Tom's Hardware Performance Rating, not unlike the Windows Experience Index, but way more accurate obviously.
Or you could simply allow the user to choose their own baseline (their own hardware maybe?) and have the performance rating calculated against the latest hardware test results.
So what is the benefit of a Performance Rating system like this?
Take my two example tables for instance: the GFX card I have is the baseline one. If I were to upgrade, I would look at something rated 2.0 or better for at least a doubling in performance; anything less than that would be a waste of money and not give a significant improvement. 3.0 or higher would be more than I need and obviously too costly.
Another benefit is that this performance rating system is test independent (if you guys decide on a fixed baseline for the future, like in my examples): it would be possible to test new hardware against only one of the existing entries, apply the proper modifier, and add it to the list.
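A sketch of that "modifier" step, with hypothetical numbers: if a card already in the chart has a known rating and both cards run the same single benchmark, the new card's rating follows from the ratio of their scores.

```python
# Hypothetical values for illustration; not real chart data.
known_rating = 2.0   # rating of a reference card already in the chart
known_score = 100.0  # its score in the shared benchmark
new_score = 150.0    # the new card's score in that same benchmark

# The new card's rating is the known rating scaled by the score ratio.
new_rating = known_rating * (new_score / known_score)
print(f"New card rating: {new_rating:.1f}")
```

So one head-to-head benchmark against a single known entry is enough to place new hardware on the existing scale.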
Another benefit of this system is that the ratings survive benchmark churn: when old test benchmarks are phased out and new ones are introduced, the performance rating values are unaffected and still valid, as they all share a common baseline.
Obviously this means that in 5+ years, depending on hardware progress, the best hardware might end up with a rating of 25.0 or higher, but an adjustment would probably not be needed until it hits 10000+, really.
And if you look at the CPU example you will see a note above the table explaining that 7 of your CPU tests were used, summed, and divided by 7 to get the performance rating. This system is highly scalable: benchmarks can be added or removed, or differ between hardware categories, yet the rating is not affected thanks to the baseline.
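The "sum 7 tests and divide by 7" idea can be sketched like this, again with made-up benchmark scores. Each entry's average is divided by the baseline's average to get its rating:

```python
# Seven made-up benchmark scores per CPU; not real test data.
benchmarks = {
    "Baseline CPU": [100, 80, 120, 90, 110, 95, 105],
    "New CPU":      [210, 150, 260, 170, 230, 190, 200],
}

# Average each CPU's scores (sum, then divide by the test count).
averages = {name: sum(s) / len(s) for name, s in benchmarks.items()}
baseline_avg = averages["Baseline CPU"]

# Rating = entry's average relative to the baseline's average.
for name, avg in averages.items():
    print(f"{name}: {avg / baseline_avg:.2f}")
```

Note that the individual benchmarks can be swapped out later; as long as every entry is averaged over the same set of tests and divided by the same baseline, the scale stays consistent.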
Looking forward to hearing a response on this. If there is no interest in using such a system, I'll probably end up making something more permanent on my own site. It's just that Tom's Hardware has much better and more immediate access to test data, and combining this performance rating with price would give a much better way to present price/performance info.
Yeah, that is true, but look at Microsoft's index system: it's proprietary and Windows-only, and it's also not very fine-grained. Look at the CSVs from my tables and you'll see it's full floating point; it's only rounded in the display for readability.
This system is natural (something with twice the performance of the baseline is simply 2.0, basic floating-point math), so even if a test method is no longer available, the rating is still usable in comparisons.
Often when looking at a game there is a minimum requirement. Now imagine just looking up that GFX card and seeing it has a 1.5 rating on Tom's; then you just check which cards have a 1.5 rating or better on Tom's.
Currently the closest thing is the GFX card chart on Tom's and the FPS rating.
Why not do something similar for the CPUs? Right now one has to look at more than just 7 benchmarks; a summed table similar to the GFX cards' FPS table would make this possible with only minor changes to the current charts (just one more table, computed, with no need to redo any tests).
How far back (in the yearly charts) does the GFX card's FPS table go?
This is why a rating system like this is so nice: it will keep working even if some tables are dropped or new ones are added, because regardless of what the tests are over the years, a card or CPU that is twice as good (or 149% or 252% better) than another will always be that much better regardless of the test method. (There will always be the odd test where lesser hardware scores better, but summing several tests automatically corrects such anomalies.)
I was thinking (for my own tables) of allowing input/selection of a hardware item, then having its rating shown or highlighted; the user could then easily note that value down.
It's also very easy to compare your current or planned hardware against the minimum or recommended one from, say, a game, by choosing the two and comparing the ratings.
And as you can see in my tables, I left out some details (number of shaders or CPU cache) as those are part of the performance and not actual requirements (like shader version is).
But the really cool thing about all this is how easy it is to do:
it could be pre-calculated by an automated script that goes through the database and calculates each entry against a baseline,
or it could be done on the fly (a bit more overhead for the webserver, though).
Best of all, it's not that hard to "move" the baseline in the future and simply "shift" the rating values.
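The baseline "move" really is just one division across the whole table. A sketch, with made-up ratings: divide every rating by the new baseline's rating and the scale shifts while all the relative ratios stay intact.

```python
# Made-up ratings on the old scale; not real chart data.
ratings = {"Old Baseline": 1.0, "Card A": 2.5, "New Baseline": 5.0, "Card B": 10.0}

# Rebase: divide everything by the new baseline's current rating.
new_base = ratings["New Baseline"]
rebased = {name: r / new_base for name, r in ratings.items()}

print(rebased["New Baseline"])  # the new baseline becomes 1.0
print(rebased["Card B"])        # Card B is still 2x the new baseline
```

Old entries just get small fractional ratings instead of disappearing, so historical comparisons keep working after the shift.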
Heck, the rating would not even need to be displayed; it could be kept internal,
with just a performance percentage shown to the users instead.
I.e. the baseline is 1.0 obviously, so a card or CPU rated 2.0 performs at 200% of the baseline: (2.0/1.0)*100 = 200%.
And a card or CPU rated 4.0 performs at 200% of a 2.0 one (i.e. it is 100% faster): (4.0/2.0)*100 = 200%.
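That percentage display is one division away from the internal rating. A tiny sketch (the helper name and ratings are mine, for illustration):

```python
def percent_of(rating: float, reference: float) -> float:
    """Performance as a percentage of the chosen reference entry's rating."""
    return rating / reference * 100.0

print(percent_of(2.0, 1.0))  # 200.0: a 2.0 card runs at 200% of the baseline
print(percent_of(4.0, 2.0))  # 200.0: a 4.0 card runs at 200% of a 2.0 card
```

The reference can be any entry, so the same internal rating drives "percent of baseline" and "percent of my current card" displays alike.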