I'm going to go out on a limb here and risk getting flamed, but I feel the need to defend the Passmark software. It's amazingly useful, and the truth is that it provides more in-depth information about YOUR system than most people could possibly ask for. The fact that it runs individual tests focused on different functions of the CPU, 2D graphics, 3D graphics, memory, and disk, and then provides those results for you to compare against your own previous tests, is invaluable. It's a very, very good way to track the performance of different elements of your system over time as you upgrade one part or another.
There are two aspects of Passmark that do deserve criticism, but mostly because people tend to focus on them far too much.
First, yes, the "overall score" isn't very accurate for comparing different systems. For example, the difference between the fastest FX-8350-based system and the slower of my two X4 965 rigs is only 800 points, while the difference between various i7 machines alone is literally thousands of points. Instead of the gap shrinking as systems get faster, which is how a human user would actually perceive the difference, the formula makes the gap numerically much larger. Thus, it's hard to compare anything that isn't in the same ballpark as your own system.
Second, as someone has pointed out already, the index of component scores needs to be taken with not a grain, but a teaspoon of salt. The CPU numbers aren't accurate representations of a processor's real performance potential, because there is no way to weigh who is submitting the scores. What I mean is that right now, anyone submitting an FX-8350 score is going to be an overclocker 90% of the time, so those scores are through the roof. Meanwhile, 90% of the people submitting scores for i5 processors found in retail machines are going to be running stock speeds. The scores thus get skewed far in favor of the FX chip in that comparison.

The best example of this is the A10-5800K. For the first 72 hours after release, its overall average was AHEAD of the FX-8150. Why? Because the people who rushed out and bought it on day one were clearly enthusiasts who took it home and maxed out the limits before benchmarking. Since then, it's dropped to about half that initial average.

There are other things I could point out as well. For example, if you look at the Hard Drive charts, there is usually a "Kyle's SSD RAID" or something like that in the top 10 models. Obviously, that should have been caught before filtering through to the hardware index. The entire hardware score section of the site also took a hit when they upgraded from version 7 to version 8: for whatever reason, they decided to wipe ALL of the past scores from the database, which had actually been fairly accurate once the sample sizes had been collected over a period of months and years.
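To put rough numbers on that skew, here's a toy sketch. All of the scores and percentages below are hypothetical, just to show how the mix of overclocked vs. stock submissions moves a chip's published average:

```python
# Illustrative only: made-up scores showing submission-mix skew,
# not real PassMark data.

def average(scores):
    return sum(scores) / len(scores)

STOCK_SCORE = 7000   # assumed score for the chip at stock speeds
OC_SCORE = 9000      # assumed score with a heavy overclock

# Launch week: say 90% of submitters are enthusiasts running overclocked.
launch_week = [OC_SCORE] * 90 + [STOCK_SCORE] * 10

# Months later: say 90% of submitters run stock retail machines.
later = [OC_SCORE] * 10 + [STOCK_SCORE] * 90

print(average(launch_week))  # 8800.0 -- looks far faster than a stock chip
print(average(later))        # 7200.0 -- closer to typical real-world use
```

Same silicon, same benchmark, and the published average drops by hundreds of points purely because the population of submitters changed.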
So, to summarize that long-winded thought: Passmark is a great tool for tracking the progress you make with your own system, if viewed at the sub-score level. It's not especially good as a way to judge your system against the database of scores.