Archived from groups: alt.comp.hardware.overclocking
"Phil Weldon" <notdisclosed@example.com> wrote in message
news:taXrc.6042$Tn6.4163@newsread1.news.pas.earthlink.net...
> Well, my prediction would be just an opinion, but you could look at some of
> the "laws" (rules of thumb is a more accurate description) for some clues.
>
> Amdahl/Case Rule: A balanced computer system needs about 1 Mbyte of main
> memory capacity and 1 Mbit per second of I/O capacity for each MIPS of CPU
> performance.
>
> This rule seems to have applied pretty well over the years. It would
> indicate that a balanced system using a Pentium 4 3.2 GHz CPU (say about
> 5,000 MIPS) would require an I/O bandwidth of 5 Gbits per second and 5
> Gbytes of main memory. A 64 bit 66 MHz PCI slot (found in high end
> servers) has a bandwidth of 4.2 Gbits per second. And 4 Gbytes of main
> memory would not be out of the question for such a system. Greatly
> increasing one of these factors without a similar increase in the other
> creates a bottleneck.
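
[As a back-of-the-envelope check, the rule is easy to compute. A minimal
Python sketch; the 1:1 ratios and the 5,000 MIPS figure are the rule-of-thumb
numbers quoted above, not measurements:]

```python
# Amdahl/Case rule of thumb: a balanced system needs roughly 1 Mbyte of
# main memory and 1 Mbit/s of I/O bandwidth per MIPS of CPU throughput.
def balanced_system(mips):
    """Return (memory in Mbytes, I/O in Mbit/s) for a given MIPS rating."""
    return mips * 1, mips * 1  # 1 Mbyte and 1 Mbit/s per MIPS

mem_mbytes, io_mbits = balanced_system(5000)  # ~5,000 MIPS Pentium 4
# mem_mbytes / 1024 -> about 4.9 Gbytes of main memory
# io_mbits / 1000   -> about 5 Gbit/s of I/O bandwidth
```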
>
> Caches have a paradoxical quality. A larger cache does not necessarily
> result in faster memory access for typical programs. The larger a cache,
> the LARGER the penalty for cache hits, even though the time lost on
> cache misses is decreased. The size and associativity of caches must be
> carefully tuned to the operation of the CPU and main memory, and for the
> types of programs that will run, with the constraint that CPU's like those
> in the Pentium stable are general purpose processors. Special purpose
> processors like GPU's have entirely different approaches to caching.
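
[The hit-time/miss-rate trade-off can be put in terms of average memory
access time (AMAT), the standard textbook formula. The latencies below are
illustrative guesses, not figures from any real CPU:]

```python
# AMAT = hit time + miss rate * miss penalty. A larger cache usually
# lowers the miss rate but raises the hit time, so it can lose overall.
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

small_cache = amat(1.0, 0.10, 50.0)  # fast hits, more misses:  6.0 ns
large_cache = amat(2.5, 0.09, 50.0)  # slow hits, fewer misses: 7.0 ns
# The larger cache is slower on average despite its lower miss rate.
```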
>
> If forced to predict for the next ten years, I'd say that Intel
> hyper-threading is a bridge to multiple CPU's on a single chip, and that
> the number of ACTUAL CPU's on a chip will be transparent to the operating
> system.
Wouldn't that be more like having a large array of execution units (either
ALUs or FPUs, or others?) in parallel, maybe grouped, served by large shared
L2 caches and dedicated L1 caches (per group)... or am I just hypothesizing?
It cannot be treated as multiple CPUs (or cores) if they aren't discrete.
E.V
> Close coupled L2 caches will become larger, pipelines will become
> deeper. The EFFECTIVE number of instructions per second will be a better
> judge of performance than any clock rate across processors using the same
> instruction set.
> --
> Phil Weldon, pweldonatmindjumpdotcom
> For communication,
> replace "at" with the 'at sign'
> replace "mindjump" with "mindspring."
> replace "dot" with "."
>
> "Moderately Confused" <moderatelyconfused@yahoospleen.com> wrote in message
> news:_M2dnWJxxuGdui3dRVn-sw@comcast.com...
> > Thanks for the very detailed reply. Since 800MHz seems to be the highest
> > right now, do you have a prediction how fast it *could* get, given the
> > limitations we have now?
> >
> > MC
> >
> >
> > "Phil Weldon" <notdisclosed@example.com> wrote in message
> > news:3gVrc.6688$be.5857@newsread2.news.pas.earthlink.net...
> > > In a DIFFERENT universe with different fundamental constants it could be
> > > POSSIBLE, but WE might not be possible.
> > >
> > > In THIS universe it is unlikely that the FrontSideBus speed will ever be
> > > the same as the CPU clock speed. It is physically impossible now to get
> > > that kind of speed for random access (with a data bus width comparable
> > > to CPU register width) to storage except over distances comparable to a
> > > single chip's length or width (say 10 to 20 mm.)
> > >
> > > The limiting factors are the speed of light and power consumption (which
> > > equals heat production). An electrical signal in a conductor travels
> > > about 2/3 the speed of light, or 2/3 X 300,000,000 meters per second =
> > > 200,000,000 meters per second. At 4 GHz, a clock cycle in a conductor is
> > > 200,000,000 meters per second / 4,000,000,000 cycles per second = 50 mm
> > > long.
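
[The arithmetic above is easy to reproduce; a small Python sketch using the
same rounded 2/3-of-c propagation figure:]

```python
C = 300_000_000            # speed of light, m/s (rounded, as above)
SIGNAL_SPEED = 2 * C // 3  # ~200,000,000 m/s in a conductor

def mm_per_cycle(clock_hz):
    """Distance a signal travels in one clock cycle, in millimetres."""
    return SIGNAL_SPEED / clock_hz * 1000

# At 4 GHz one clock cycle spans 50 mm of conductor, so a one-cycle
# round trip limits storage to about half that distance from the CPU.
```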
> > >
> > > With pipelining some of that can be overcome, but the power consumption
> > > will be horrendous because of the load long conductors represent at
> > > that speed. There would not be just a clock signal path operating at
> > > that speed, but also a signal for each data bit and for each address
> > > bit. And even if you could handle the power and heat problems, there
> > > would still be the synchronization problem: if the data is stored more
> > > than about 25 mm (one inch) from the CPU, new data from a random
> > > address could not be ready each clock cycle, even if you eliminate gate
> > > delay and contentions.
> > >
> > > However, you can approach that kind of performance more easily than
> > > switching to a different universe.
> > >
> > > Even now, the CPU registers and the caches (L1, L2, and possibly L3)
> > > bridge the gap between the speed of the CPU and main memory, as does
> > > pipelining. If the data needed by an instruction is already in CPU
> > > registers or a cache, it is available much faster than from main
> > > memory.
> > >
> > > Then there are
> > >
> > > parallel processors
> > >
> > > Optical pathways and switches
> > >
> > > 3-D integrated circuits
> > >
> > > Qubits
> > > .
> > > .
> > > .
> > >
> > > Computing capability will continue to increase, but making the data
> > > busses as fast as the CPU clock rate is not going to be the road taken,
> > > at least in this universe.
> > >
> > >
> > >
> > > --
> > > Phil Weldon, pweldonatmindjumpdotcom
> > > For communication,
> > > replace "at" with the 'at sign'
> > > replace "mindjump" with "mindspring."
> > > replace "dot" with "."
> > >
> > > "Moderately Confused" <moderatelyconfused@yahoospleen.com> wrote in
> > > message news:j6Wdnaz3yvX3kC3dRVn-tw@comcast.com...
> > > > I was just sitting here reading the posts when something popped into
> > > > my head. What will happen when the speed of the FSB is equal to the
> > > > speed of the processor? IE 3GHz processor with a 3GHz FSB. Eventually
> > > > it is going to happen, isn't it? Would the processor be able to
> > > > process something as fast as it can input the information? Or would
> > > > the universe as we know it collapse if this were to happen...
> > > >
> > > > MC
> > > >
> > > >
> > >
> > >
> >
> >
>
>
Phil,