Archived from groups: alt.comp.hardware.overclocking
David, try reading the second half of my post:

In addition, to guarantee proper operation in almost all circumstances,
Intel CPUs have a lot of performance overhead built in. If your computer
system offers lower temperatures, better voltage control, higher core
voltages, faster memory, etc., you get great overclocking, limited only by
the performance margin of the fastest CPUs of that design being produced
(plus some luck!)
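(As a quick back-of-the-envelope, here's the FSB arithmetic behind those
overclocks - the numbers are the classic Celeron 300A case atwifa mentions
below; the code itself is just my own illustration:)

```python
# Core clock = FSB x multiplier. The multiplier is locked on these parts,
# so overclocking means raising the FSB. Celeron 300A numbers shown:
# stock is a 66 MHz FSB, and most would run a 100 MHz FSB.
def core_clock(fsb_mhz, multiplier):
    return fsb_mhz * multiplier

stock = core_clock(66.6, 4.5)    # ~300 MHz at the stock 66 MHz FSB
oc = core_clock(100.0, 4.5)      # 450 MHz at a 100 MHz FSB
headroom = oc / stock - 1        # ~0.50, i.e. the famous 50% overclock
```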
--
Phil Weldon, pweldonatmindjumpdotcom
For communication,
replace "at" with the 'at sign'
replace "mindjump" with "mindspring."
replace "dot" with "."
"David Maynard" <dNOTmayn@ev1.net> wrote in message
news:107ml8o45ugqk05@corp.supernews.com...
> atwifa wrote:
> > just out of interest, Phil - do you have any theories as to *why* the
> > choice intel offerings are so overclockable? over the years i've had a
> > celeron 300a, a cel333, a cumine 566 and 600, a P3 700, a tualatin
> > celeron 1.0a, and now a P4 2.4 - and all of these, as you pointed out,
> > have been (at least) capable of running 50% beyond spec with stock
> > cooling and minimal (in some cases no) voltage tweak.... and i have
> > often wondered why they were able to perform at such speeds and with
> > such consistent stability. is there some benchmarking, d'you think,
> > that we just don't know about?
>
> Phil mentioned the yield aspect, but there are others: safety margins,
> reliability, and operating environment. Intel is conservative in their
> specifications as part of their reliability and reputation strategy, and
> we use up some of that margin in a tradeoff for more speed.
>
> Secondly, the processor must operate over the entire range of operating
> conditions and with all combinations of tolerances of whatever it's
> placed into (i.e., a motherboard may be 'better' than spec or 'just
> barely meet it').
>
> When we overclock we cool it 'better' (when was the last time you heard
> anyone in an overclocking group say that merely being 'within' the
> spec'd maximum temperature was 'ok'?), and we don't generally expect the
> system to operate at an ambient of 120F, for example. So, in that sense,
> we are trading off one specification for another. We spend more time
> 'tweaking' than would be practical from a cost standpoint on a
> production model, and we accept some risk that would not be cost
> effective on a production model (nor good for a business reputation
> [e.g. "gee, the web site went down, let me re-tweak the FSB a bit
> again" - it wouldn't be long before they decided to buy something "more
> reliable"]).
>
>
> > i know the whole price point and demand theory, that allegedly decides
> > what wafers get earmarked for what badges ... but this has never
> > seemed entirely logical to me. hm ...
>
> When the process is 'mature', the yield is such that market demand
> drives the availability of the lower speed versions; i.e. there is a
> 'surplus' of the higher speeds over what the market will bear, so it
> makes business sense to simply sell that surplus at a lower rating.
> (Although this "they're all capable of x speed" notion is overstated by
> users, because they only look at what a chip can do at 'room temp' in
> 'their system', not over all operating conditions.)
>
> But the 'logic' of it at any point in time depends on yield, price/cost
> ratios, demand curves, competition, business strategy, expectations,
> etc., and manufacturers don't generally pass out that kind of
> information.
>
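
To make David's yield/surplus point concrete, here's a toy sketch of speed
binning (entirely my own illustration - the bin speeds, demand figures,
and test results are invented, not anything Intel publishes):

```python
# Toy model of speed binning: every die is tested, and a die that passes
# at a high speed can always be shipped at a lower rating. When demand
# for the top bin is small, fast dies get 'downbinned' - which is why so
# many low-rated parts overclock well. All numbers here are made up.
def assign_bins(tested_max_mhz, demand):
    """Assign dies to speed grades, downbinning surplus fast parts.

    tested_max_mhz: list of the max stable speed each die passed at.
    demand: dict of {rated_speed: units wanted}.
    Returns {rated_speed: units shipped at that rating}.
    """
    remaining = sorted(tested_max_mhz, reverse=True)
    shipped = {}
    for rated in sorted(demand, reverse=True):  # fill fastest bins first
        take = [d for d in remaining if d >= rated][:demand[rated]]
        shipped[rated] = len(take)
        for d in take:
            remaining.remove(d)
    return shipped

# Mostly fast dies, but the market mostly wants the cheaper ratings:
dies = [2400] * 6 + [2000] * 3 + [1800]
print(assign_bins(dies, {2400: 2, 2000: 5, 1800: 3}))
```

Four of the six 2400-capable dies end up sold in the 2000 bin - exactly
the 'surplus at a lower rating' David describes.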