I understand this would probably be a lot more difficult to do, but I think it would be worth it in the long run.
Why not use the same 2 or 3 systems every time you test something? For example, if you're testing 4 cases this month and then testing 5 new cases next month, why not use the same system and then compile the data into a single article? There would still be 2 separate articles, one for the 4 cases being compared and another for the 5 cases being compared. You could even have winners out of those groupings, but at the end of each article there should be a link to a page that compiles and compares all this data, so we would know whether the new cases are better than the old ones.
Then after 2 to 3 years, or however long you choose, you could upgrade the systems, applications, and testing methods you use, then take the top 3-5 cases and retest those with the new system and method alongside any newly released cases, so people can compare the new ones to the old.
You could have an AMD and Intel setup for example.
The same goes for testing and comparing graphics cards. If you use the same system with the same games, settings, and applications, people can get a better idea of the performance differences between cards. You could also add some higher settings to showcase newer graphics cards and their new features, but as long as you ran the basic tests, you could compile all the data onto a webpage people could view, with links to each article the GPU was tested in for more detail on specific cards they want to compare. Then, after 2 or 3 years, when updating your system and testing methods, you could just add a few of the older but still popular or recommended cards to the new setup alongside the newly released graphics cards.
For instance, how much better would the recently published Gigabyte SuperClocked GTX 470 article be if the card were compared against all the other GTX 470s, and its overclocking capability measured against reference GTX 470s that were also overclocked to their limits?
It just seems like you'd have better data on how a piece of hardware compares to similar hardware if you kept the rest of the system and the testing methods the same and compiled the data from each article together.
Hopefully this is feasible and I haven't wasted a bunch of time describing something impossible. And hopefully it's a good idea too.
Difficulty is definitely one of the factors here. For purely scientific comparisons, it would be a boon to have consistent hardware configurations across the board, but it isn't really feasible outside a lab setting specifically set up for that purpose.
It is important to note that reviewers from several different countries write for THG, most notably the USA, France, and Germany (with articles from the latter two typically being translated). That makes it much more difficult to keep systems consistent across all of the reviewers. In addition, keeping the same system for 2-3 years would likely upset some readers because of bottlenecks.