Your reasoning is good; I assumed the same thing at one time. Here's the problem:
Say you build two systems, both with 2.4Cs rated at 1.475v: two different motherboards and power supplies, same CPU model. One defaults to 1.48v, the other to 1.52v. Would that imply that one 2.4C overclocks better than the other? No, because the higher reading comes from the motherboard and/or power supply delivering more voltage than the CPU requests.
Example 2: Say Intel determines that all P4s on the latest Northwood core up to 2.5GHz will run at 1.475v, and all 2.60GHz to 3.2GHz P4s will run at 1.525v. Now, you know most P4s won't run at 3.2GHz on 1.475v, so they had to "draw the line" somewhere. We've seen this in the past with the 5xxMHz Coppermines running at 1.50v while the 6xxMHz+ processors ran at 1.65v or more! But my Celeron 566 didn't like going past 850MHz, and my PIII 700 went to 1057MHz!
A third instance: Intel sets core voltage based on things like heat (too much heat would require a new cooling solution) and the clock speed they need to reach (too little voltage reduces the maximum clock speed). Most of their core revisions deal with reducing hot spots, I believe. Reducing hot spots lets the processor tolerate more voltage, and higher voltage increases the yield rate at higher clock speeds. This explains why the PIII 750 went from 1.65v to 1.70v and finally 1.75v as newer core revisions were made. Most of the later versions would run at a lower voltage, but having all the cores at the same voltage was handy for making more 1000EBs and Coppermine Celeron 1100s.
So anyway, there are a number of reasons his CPU might be running at a higher voltage than yours, whether by design or by system configuration.
<font color=blue>Only a place as big as the internet could be home to a hero as big as Crashman!</font>
<font color=red>Only a place as big as the internet could be home to an ego as large as Crashman's!</font>