Requests for a points-based scoring system have appeared in the comments section of the Web Browser Grand Prix articles for some time now. Today we're ready to implement one, but we need help. And who better to ask than you, the Tom's Hardware readers?
Over the past couple of years, we've incorporated several reader suggestions into the Web Browser Grand Prix, such as adding the analysis tables alongside the raw placing, later dropping the placing tables entirely, and de-emphasizing the winner over other strong finishers. However, one of the most frequent requests has been to incorporate some kind of points-based scoring system: one that gives added weight to the more important categories of testing, and less weight to areas that have little or no bearing on everyday real-world Web browsing.
We've received numerous emails suggesting such a system, but so far they've all been too simplistic or far too complicated (think Dungeons & Dragons rule-set). With the tenth installment of the Web Browser Grand Prix just around the corner, we think it's about time to grant this request. So, we're seeking your help.
First, let's look at the current analysis table from which the champion is largely determined. Today the Web Browser Grand Prix has 48 individual tests which fall into the following 14 categories:
- Page Load Time
- Page Load Reliability
- HTML5 Hardware Acceleration
From here we need to rank these categories into brackets which reflect their importance to the average Web browsing experience. We've come up with the following four brackets:
- Nonessential: Startup Time, Memory Efficiency, Java, Silverlight
- Unimportant: HTML5 Hardware Acceleration, WebGL
The Essential bracket holds everything that makes up the core of what it is to browse the Web. The Important bracket includes the ubiquitous Flash plug-in and the rapidly-evolving HTML5 spec. The Nonessential bracket is for tests that could apply to any application (not just browsers) as well as the common, but lesser-used plug-ins. The Unimportant bracket is for upcoming technologies that simply aren't found in the wild, outside of testing and demo pages. While these brackets aren't set in stone and we're still open to feedback, the next step is where we really need your help.
This is where the points come in. We need to assign point values to the bracketed analysis table. There are a variety of ways to go about this. We could have a simple system where each type of finish (winner, strong, average, and weak) has a set score and a different modifier is applied to each bracket. Alternatively, we could have different point values assigned to each finish in each bracket.
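To make the two approaches concrete, here is a minimal Python sketch of each. All point values and bracket weights below are illustrative placeholders of our own invention, not a proposal:

```python
# Hypothetical point values for illustration only -- the actual numbers
# are exactly what we're asking readers to help decide.

# Base score for each type of finish.
FINISH_POINTS = {"winner": 4, "strong": 3, "average": 2, "weak": 0}

# Approach 1: one set score per finish type, scaled by a per-bracket modifier.
BRACKET_MODIFIER = {"Essential": 4, "Important": 3, "Nonessential": 2, "Unimportant": 1}

def score_with_modifier(finish, bracket):
    return FINISH_POINTS[finish] * BRACKET_MODIFIER[bracket]

# Approach 2: an explicit point value for every finish in every bracket.
BRACKET_TABLE = {
    "Essential":    {"winner": 16, "strong": 12, "average": 8, "weak": 0},
    "Important":    {"winner": 12, "strong": 9,  "average": 6, "weak": 0},
    "Nonessential": {"winner": 8,  "strong": 6,  "average": 4, "weak": 0},
    "Unimportant":  {"winner": 4,  "strong": 3,  "average": 2, "weak": 0},
}

def score_from_table(finish, bracket):
    return BRACKET_TABLE[bracket][finish]

# Either way, a browser's total is the sum of its per-category scores.
def total_score(results, score_fn):
    # results maps (category, bracket) pairs to a finish type, e.g.
    # {("Page Load Time", "Essential"): "winner"}.
    return sum(score_fn(finish, bracket)
               for (category, bracket), finish in results.items())
```

The second approach is strictly more flexible (it can express anything the first can, plus irregular gaps such as an outsized winner bonus in the Essential bracket), at the cost of having many more numbers to argue over.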
Either way, there are more questions to be answered. Does an average finish rate any points at all? Should a weak finish be given negative points? Or should every type of finish in every bracket merit some points? How much of a bonus does the winner deserve over the strong finishers? Et cetera, et cetera.
Testing for the tenth installment of the Web Browser Grand Prix is complete - this one has a twist, and it's not what you'd think. Give us your feedback on the scoring system in the comments below so we can declare a champion. The outcome of Web Browser Grand Prix 10 is up to you!