I've posted this twice in the 550-unlocked thread in the OC forum, and it's been ignored... I heard there's a fellow named bilbat who hangs around here and knows about stuff like this!
Hey, I found my old copy of Peter Norton's "Inside the IBM PC" the other day - guess it's the kind of thing you want from AMD eh?
Regarding unlocking the AMD Phenom II X2 550 BE - one thing I haven't figured out yet is whether unlocking the 2 extra cores increases the wattage the processor draws. The CPU becomes a Phenom II X4 B50 rather than a 950. And is this now officially a Deneb processor, and does AMD plan to release an X4 950?
The X2 550 runs at 3100MHz and is rated at 80W. The "real" X4 945 runs at 3000MHz and the X4 955 at 3200MHz, and both are rated at 125W. So if an unlocked X2 550 becomes an X4 950 - does it now draw 125W?
And how can the wattage a CPU draws be measured? We know the voltage it's fed, but nothing about the amps. You'd think unlocking the cores would raise the CPU temp, but mine actually goes down a degree or two at rest. I think it raises the Prime95 temps a little, but I'm not sure.
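For what it's worth, the textbook relation for CMOS dynamic power is P ≈ C·V²·f, and the effective switched capacitance C is exactly the part nobody publishes - which is why knowing the core voltage alone tells you nothing about the amps. A toy sketch (the capacitance figure below is made up, picked only so the stock point lands near the 550's 80W rating):

```python
# Rough sketch of the classic dynamic-power relation for CMOS logic:
#   P_dynamic ~ C * V^2 * f
# C (effective switched capacitance) is the unknown - it's why core
# voltage alone doesn't tell you the wattage.

def dynamic_power(c_eff, volts, freq_hz):
    """Estimate dynamic power (watts) from effective capacitance
    (farads), core voltage (volts), and clock frequency (Hz)."""
    return c_eff * volts ** 2 * freq_hz

# Hypothetical capacitance, chosen only so stock lands near 80 W:
C_EFF = 16.5e-9

stock = dynamic_power(C_EFF, 1.25, 3.1e9)   # stock vcore, 3.1 GHz
bumped = dynamic_power(C_EFF, 1.40, 3.1e9)  # same clock, raised vcore

print(f"stock:  {stock:.1f} W")
print(f"bumped: {bumped:.1f} W")  # the V^2 term dominates
```

Same clock, a 0.15V vcore bump, and the estimate jumps by roughly a quarter - which is why overclockers at raised voltage blow past the rated TDP.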
Funny, I've heard that bilbat guy only knows a lot of arcane BS about Intel processors - 'cause they're the only ones who document everything! If you wanna know how many Lahore pigeons crap on the roof of the Santa Clara fab each year, not only can you find it in a PDF somewhere (but where - that's the skill!) on their web site, but there's probably a three-year plan documented to change their feeding habits, so they crap a lighter color, causing the roof to reflect more sunlight, and cut down on the air-conditioning costs... Every time I try to find out something about an AMD BIOS for someone, I see this business about "update AGESA three point five point three point nine point more digits than pi", and I've been randomly trying for months just to find out what 'AGESA' is - bah - no luck! (I hate acronyms anyway - the only one that ever sticks in my head is from back in the days when they finally got completely out of hand with 'PCMCIA' - people can't memorize computer industry acronyms!) And you don't even wanna get me started about nVidia! As far as I can figure, nVidia is actually a front company for the CIA/NSA - if you go looking there for documentation, they'll have you investigated to find out why the hell you're looking for their documents!
Levity aside, that's a really interesting question; I'll haunt a couple of my favorite 'rabid overclocker' sites and see can I find anything remotely reliable... The actual deal with CPUs is they 'fit 'em into' families (which is cheaper and easier than documenting each core revision of each CPU exactly), so they can say - to the MOBO and HSF makers, at least - that the thermal load won't be bigger than this. The current draws and thermal loads are often not tightly related, due to litho process improvements, and (at least for Intels) the thermal design specs are valid only for processors at stock speeds, with all the 'green' features enabled - that 9650 may have a max TDP of ninety-five watts, but if you're runnin' that puppy at 4 gig with the 'green' goodies turned off for stability, that number is out the window!
As I told someone else - I just had seven teeth pulled and dentures fitted, so I definitely need some excess projects to keep my mind off my mooouth!
The way a max power rating (TDP, for you acronym lovers) is applied to a whole family of CPUs running at a variety of speeds has always made me suspicious that it was a generic number. Are they really all 89W processors, or all 125W? I'm no expert - at all - at what's going on in the silicon of a multi-core processor. It needs a test from the guys who run a lab. My general impression is that most of the chip is powered up anyhow - the cache memory takes up as much of the die as the processor cores, or more. At the standard clock speed - not OC'd - the power draw might only swing 10 to 20 percent.
As mongox observes, the temps from the CPU don't seem to change much. The only easy way to measure the power consumption is to measure the total power the system draws at the wall, then compare running with the 2 cores to unlocking and running with all 4.
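A sketch of that at-the-wall comparison - the wall readings and the PSU efficiency figure below are hypothetical, just to show the arithmetic:

```python
# Sketch of the at-the-wall comparison described above: take meter
# readings with 2 cores and with 4 cores unlocked (same load), then
# back out the extra DC-side draw. The PSU efficiency is an assumption.

def cpu_delta_watts(wall_2core, wall_4core, psu_efficiency=0.82):
    """Extra DC-side power attributable to the unlocked cores,
    given AC wall readings (watts) and an assumed PSU efficiency."""
    return (wall_4core - wall_2core) * psu_efficiency

# Hypothetical wall readings under a full Prime95 load:
extra = cpu_delta_watts(wall_2core=185.0, wall_4core=230.0)
print(f"extra core draw: {extra:.1f} W")
```

If the two unlocked cores really added 45W at the CPU, the wall reading should jump by 50W or more once PSU losses are figured in - which is a big enough swing for a cheap plug-in meter to catch.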
Since I was installing different RAM in this system, I ran a couple more benchmarks. I turned off the 2 extra cores and checked temps at no OC, 3100MHz. I did get a very slightly lower at-rest temp - varying between 37-38C rather than the steady 38C with 4 cores - but it amounts to no difference. The full Prime95 test does show a lower temp though, maxing out at 58-59C rather than the 62C I get unlocked.
But these temps, in my opinion, don't reflect anything like the difference between 80W and 125W. I know the cores, whether 2 or 4, are running at 100% usage - I've run AMD's Power Monitor, which does a nice job of showing it.
If you assume the difference in CPU wattage is all from the 2 extra cores - likely not true - then each core uses (125 - 80) / 2 = 22.5W. That puts the base wattage of the CPUs at 80 - 2 x 22.5 = 35W, and the total checks out: 35 + 4 x 22.5 = 125.
I doubt the idea of a 'base' CPU power usage is a good one, but I can't believe those 2 extra cores are drawing an additional 45W and only raising the temp about 3C.
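That back-of-envelope split can be written out, under the same (probably false) assumption that the whole 80W-to-125W gap comes from the two extra cores:

```python
# Back-of-envelope split of the rated TDPs above: assume the whole
# 80 W -> 125 W gap comes from the two extra cores (probably false,
# as noted), and solve for a per-core and "base" wattage.

X2_TDP = 80.0    # rated X2 550 TDP, watts
X4_TDP = 125.0   # rated X4 955 TDP, watts

per_core = (X4_TDP - X2_TDP) / 2   # 22.5 W per extra core
base = X2_TDP - 2 * per_core       # 35.0 W of shared/"uncore" draw

assert base + 4 * per_core == X4_TDP   # 35 + 4 * 22.5 = 125
print(f"per core: {per_core} W, base: {base} W")
```

The arithmetic is self-consistent, but that's all it is - it can't distinguish a real 45W jump from the TDP simply being a generic family-wide rating.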
Trying to put it in perspective: if I replaced my 25W reading-lamp bulb with a 75W one, it would crackle the plastic off in no time. Lots more heat.
CPUID's Monitor is my normal tool for CPU temps. It shows my core temps running well under the reported CPU temp - I guess that means I'm blowing off heat well? But it loses the core temp reading when I go unlocked - likely because there's no definition of a 950 chip recorded for it to reference. So I can't get a feel for the 4-core internal temps.
That's one of the main reasons for my ire at nVidia - they came up with this much-touted "open standard" neat advance for components, and then locked it away in their vault, only ever to be used with nVidia-based MOBOs! To my limited understanding, "open standard" means "Guys, the software development kit is here," not "locked away in our vault!"