Some words of warning regarding CPU temperatures...
Anyone who thinks the temperatures reported by any arbitrary system are accurate to within 5C, and in some cases even 10C...is fooling themselves.
If you want your CPU to be as safe from overheating as possible, take the sensor that consistently reports the highest "CPU" temperature, add a +10C offset to it, and use THAT figure as the CPU temperature when assessing whether you're nearing the CPU's specified critical temperature, triggering alarms, increasing fan speeds, etc.
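That worst-case approach can be sketched as follows (a minimal sketch only; the sensor readings, the 5C alarm margin, and the threshold names are hypothetical, and on a real system the readings would come from the BIOS, lm-sensors, or a monitoring program):

```python
# Worst-case CPU temperature assessment: take the HIGHEST reported
# "CPU" temperature, add a safety offset, and compare the result
# against the CPU's specified critical temperature.

SAFETY_OFFSET_C = 10.0   # assumed sensor/reporting uncertainty
CRITICAL_TEMP_C = 67.0   # e.g. the spec'd maximum for the Athlon II X3 450
ALARM_MARGIN_C = 5.0     # hypothetical margin: warn this far below critical

def worst_case_temp(readings_c):
    """Highest reported temperature plus the safety offset."""
    return max(readings_c) + SAFETY_OFFSET_C

def assess(readings_c):
    """Classify the worst-case temperature as ok / warning / critical."""
    t = worst_case_temp(readings_c)
    if t >= CRITICAL_TEMP_C:
        return "critical"
    if t >= CRITICAL_TEMP_C - ALARM_MARGIN_C:
        return "warning"
    return "ok"

# Hypothetical readings (in C) from three "CPU" sensors:
print(assess([44.0, 44.0, 34.0]))  # 44 + 10 = 54 -> "ok"
print(assess([55.0, 56.0, 46.0]))  # 56 + 10 = 66 -> "warning"
```

The point of taking the max first and adding the offset second is that you never let an optimistic sensor mask a pessimistic one.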
There are too many uncertainties in the chain: the sensor type used; the location of the sensor; the BIOS algorithm that translates the raw data into the "right" temperature; the reporting program's algorithm for interpreting the data passed to it by the hardware/firmware (some temp-reporting programs have used the wrong manufacturer's max-temp specification for a given CPU; note that it varies from CPU to CPU); and whether the sensor used is even the one matched to the spec'd critical temperature (e.g. if the critical temp is spec'd at the case, you have to use the case sensor's data; if it's spec'd at an on-die sensor, you have to use on-die sensor data); etc.
In some systems, I've seen "CPU" temperatures reported that are as much as 10C BELOW the coldest air ambient found inside the case (the air ambient was accurately confirmed with lab equipment in addition to the motherboard's built-in "ambient" temperature sensor). That result is impossible with any cooling system that doesn't use a cooling assembly external to the system in a cooler location, or active cooling (like Freon). And that assumes you're even looking at the "right" CPU sensor, as mentioned earlier: are you looking at a case sensor, a die sensor, a thermistor vs. a diode, a motherboard sensor under the CPU, etc.?
For the X3 450 and GA-880GMA-UD2H mobo, as one specific example (I have seen similar discrepancies with other CPUs and mobos), this is apparently NOT a trivial consideration, and different programs can report vastly different numbers... so for a given configuration you have to verify WHICH sensor is the right one and whether you have to assign an offset to it. That is, in fact, why decent programs provide the ability for the user to add or subtract an offset to the displayed temperature(s).
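That user-configurable offset mechanism can be modeled as a simple per-sensor correction table (the sensor labels and offset values below are made up for illustration; a real monitoring program would persist these in its own config):

```python
# Per-sensor offsets, as a user might configure them after verifying
# each reading against a trusted reference (values are hypothetical).
OFFSETS_C = {
    "CPU#0": +10.0,  # this sensor reads ~10C low on this board
    "CPU#1": 0.0,
    "CPU#2": 0.0,
}

def corrected(readings_c):
    """Apply the user-configured offset to each raw reading.

    Sensors with no configured offset are passed through unchanged.
    """
    return {name: temp + OFFSETS_C.get(name, 0.0)
            for name, temp in readings_c.items()}

raw = {"CPU#0": 34.0, "CPU#1": 44.0, "CPU#2": 44.0}
print(corrected(raw))  # {'CPU#0': 44.0, 'CPU#1': 44.0, 'CPU#2': 44.0}
```

Applying the correction per sensor, before any max/alarm logic runs, keeps one miscalibrated sensor from skewing the whole safety picture.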
For example, on a Gigabyte GA-880GMA-UD2H ver2.1 mobo I've got in the lab right now, "Open-Source Hardware Monitor" (OSHM) provides THREE "CPU" temperatures. One of them at idle reads 10C lower than the other two (e.g. when they report ~44C, that sensor reports ~34C). Note: it doesn't track quite linearly with respect to the other two sensors' data; there is slightly more latency. However, testing confirms it IS a sensor located at the CPU itself, and other temp-reporting programs report the same temperature anomaly. There is no explanation yet for this 10C anomaly.
For this never-unlocked and never-overclocked Athlon II X3 450, the respected program "CoreTemp" reports only ONE of those 3 available temperatures (ID'd as "CPU#0"). As it turns out, "CoreTemp" reports the "CPU#0" sensor datum at the same 10C below the other two (as seen in OSHM).
Importantly, "CoreTemp" also bases ALL of its CPU-overheating safety control features on this single reported datum. If that datum is reported 10C below what it really is, then "CoreTemp's" CPU protection actions will be based on flawed data, with potentially disastrous results for the CPU. "CoreTemp" also shows the specified maximum "TjMax=67C" for the X3 450. As a result of these findings, for conservative CPU safety while using OSHM or "CoreTemp", I have manually added a +10C offset to the reported temperature for the appropriate sensor in order to assume worst-case.