Here's the Ultra-Mega-Superfreakiness; I'm all stock except for CPU, which is at 8x400 for 3.2GHz. Stable as a rock, straight 5.9's in WEI, 12+ hours in P95 "Torture Test", nice cool temps (45-50C max during test, 30-35C idle), haven't needed to touch any voltage whatsoever, as RAM is at default (400) speed since I'm using a 2.0 div in MIT.
The system POSTs at 8x400=3.2GHz without issue. When booting into Vista Ultimate x64, the "Welcome Center" details display shows the correct processor, but reports that it is running at 3.6GHz, not 3.2GHz. CoreTemp (latest) also reports 3.6GHz, as does PC Pit Stop's benchmark series. Weird, but...OK, I can live with it.
Now, along comes CPU-Z (latest). With the other applications running (all still displaying 3.6GHz), CPU-Z identifies the processor correctly but says it is running at the stock 2.4GHz; every 30 seconds or so it will jump to 3.2GHz and then fall back to 2.4GHz after a couple of seconds.
Rebooting and verifying at POST as well as in BIOS shows 8x400=3.2GHz.
What in the world is going on?!?! POST/BIOS says one thing (3.2), Vista, Speedfan, CoreTemp, and PCPS say another (3.6), and CPU-Z says yet another (2.4).
Not sure where the 3.6GHz comes from, but the 2.4 and 3.2 are understandable.
If you have a C2D-based CPU, which is what I am guessing, there is a function called SpeedStep that automatically throttles the multiplier when your applications don't need the speed, to save energy (I recommend keeping this on). So with your multiplier at 8, I bet your lowest multiplier setting is 6, so it goes from 8 to 6 and back again -- hence the 2.4 and 3.2GHz reported in CPU-Z. I'd trust CPU-Z's frequency.
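The arithmetic behind this is just FSB times multiplier. A minimal sketch (the 6x idle multiplier is the previous poster's guess, not something confirmed in BIOS):

```python
# Effective clock = FSB (MHz) x multiplier. The 6x idle multiplier below
# is an assumption about SpeedStep's lowest setting, not a confirmed value.
def effective_clock_ghz(fsb_mhz, multiplier):
    return fsb_mhz * multiplier / 1000.0

fsb = 400  # overclocked bus, per the original post
print(effective_clock_ghz(fsb, 6))  # idle: SpeedStep drops the multiplier -> 2.4
print(effective_clock_ghz(fsb, 8))  # loaded: BIOS-set multiplier -> 3.2
```

So the two numbers CPU-Z flips between are exactly what you'd expect from SpeedStep toggling the multiplier on a 400MHz bus.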
1) I can't find SpeedStep anywhere in BIOS F4 on my P35C-DS3R. Can someone tell me specifically where it is to disable it?
2) I can't find any reference to SpeedStep in Vista x64. I know you could set the power settings to "Always On" in WinXP, but I see no such option readily apparent in x64 Vista (Ultimate, though I doubt that matters).
Oh, I *think* I know why all of those applications are mis-reporting the clock speed. The only explanation I can come up with is that they only recognize a fixed multiplier for a given CPU -- 9 in the case of the Q6600. When they sense the bus speed, they automatically calculate using the default specified multiplier. It makes sense, since OC'ers comprise such a small percentage of users.
More sophisticated applications, like CPU-Z, detect everything and report the correct speed.
Quote:
"Oh, I *think* I know why all of those applications are mis-reporting clock speed. The only explanation I can come up with is that they only recognize a fixed multiplier for a given CPU -- 9 in the case of the Q6600. When they sense the bus speed they automatically calculate by the default specified multiplier."
I believe that is what's happening. As for SpeedStep, there is no reason to turn it off unless your system is unstable, so leave it enabled. Try it both ways; there really isn't any discernible difference except power usage and heat.
SpeedStep is causing it to show 2.4 because when the computer is not under heavy use, the cores drop to 2.4GHz (6x400).
Once loaded, it snaps back to 3.2GHz (8x400).
The 3.6 is just a software bug; Windows will show the same. It happens when the software doesn't see the dropped multiplier (not sure exactly why, but it does it to me too when I drop the multiplier), so it thinks 9x400 = 3600MHz.
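All three readings drop out of the same 400MHz bus once you see which multiplier each tool is using. A sketch of the suspected bug (the hard-coded 9x is our assumption about how the buggy tools behave, not confirmed behavior):

```python
# Suspected bug: some tools assume the Q6600's stock 9x multiplier and
# compute bus x 9, instead of reading the live multiplier from the CPU.
fsb_mhz = 400           # overclocked bus speed
stock_multiplier = 9    # assumed hard-coded value in the buggy tools
bios_multiplier = 8     # what BIOS is actually set to
idle_multiplier = 6     # SpeedStep's throttled-down multiplier (assumed)

misreported = fsb_mhz * stock_multiplier / 1000.0  # 3.6 GHz (Vista, CoreTemp)
loaded = fsb_mhz * bios_multiplier / 1000.0        # 3.2 GHz (POST/BIOS, CPU-Z under load)
idle = fsb_mhz * idle_multiplier / 1000.0          # 2.4 GHz (CPU-Z at idle)
```

One bus speed, three multipliers, three different numbers on screen.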
It should not have any adverse effects.
As for SpeedStep, leave it on (unless it's unstable, but you say it's not, so keep it) since it saves power and makes the CPU run cooler.