FSB question

Archived from groups: alt.comp.hardware.overclocking

I was just sitting here reading the posts when something popped into my
head. What will happen when the speed of the FSB is equal to the speed of
the processor? IE 3GHz processor with a 3GHz FSB. Eventually it is going
to happen, isn't it? Would the processor be able to process something as
fast as it can input the information? Or would the universe as we know it
collapse if this were to happen...

MC
 
Archived from groups: alt.comp.hardware.overclocking

> I was just sitting here reading the posts when something popped into my
> head. What will happen when the speed of the FSB is equal to the speed of
> the processor? IE 3GHz processor with a 3GHz FSB. Eventually it is going
> to happen, isn't it?

Don't hold your breath. It's a lot easier to make the internal circuitry
of a CPU work at high frequencies than to make long traces on the
motherboard work at high frequencies.

> Would the processor be able to process something as
> fast as it can input the information? Or would the universe as we know it
> collapse if this were to happen...

Because the universe still exists, we can assume that having the FSB equal
to the CPU frequency won't cause any damage. Before the 486 DX2 chips, all
of the 386/486 chips (and probably the earlier ones) had the clock of the
CPU equal to the FSB.

steve
 
Archived from groups: alt.comp.hardware.overclocking

"Moderately Confused" <moderatelyconfused@yahoospleen.com> writes:
>I was just sitting here reading the posts when something popped into my
>head. What will happen when the speed of the FSB is equal to the speed of
>the processor? IE 3GHz processor with a 3GHz FSB. Eventually it is going
>to happen, isn't it? Would the processor be able to process something as
>fast as it can input the information? Or would the universe as we know it
>collapse if this were to happen...

Over the decades, changes in how computers are implemented have put the
ratio of processor speed to memory speed both substantially higher than 1.0
and substantially lower than 1.0.

Back in the olden days, when memory was pings of sound racing down a tube
of mercury (really) or a magnetic drum making its rotation, the processor
was far faster than the memory.

Minicomputers came along and sometimes memory was substantially faster
than the processor.

There was a management guy from IBM who recently claimed that in the next
ten years we will see both memory & processor speed and storage capacity go
up by about a factor of 10^14.

To put that in perspective, in the last 30 years we have seen memory
and processor speed go up by about 10^3 and storage capacity go up
by about 10^6. So I think he is excessively optimistic.
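
If you want a rough feel for how aggressive that claim is, here is a quick
back-of-the-envelope check (plain Python; the growth figures are just the
rough numbers above, nothing measured):

# Per-year growth factor implied by a total growth over a span of years.
def yearly_factor(total_growth, years):
    return total_growth ** (1.0 / years)

print(yearly_factor(1e3, 30))   # ~1.26x per year (CPU/memory speed, last 30 years)
print(yearly_factor(1e6, 30))   # ~1.58x per year (storage capacity, last 30 years)
print(yearly_factor(1e14, 10))  # ~25x per year -- the IBM claim, far off the trend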

Either way, it seems unlikely that anyone will notice much difference,
let alone have the universe collapse. That we can leave to other causes.
 
Archived from groups: alt.comp.hardware.overclocking

Moderately Confused wrote:

> I was just sitting here reading the posts when something popped into my
> head. What will happen when the speed of the FSB is equal to the speed of
> the processor? IE 3GHz processor with a 3GHz FSB. Eventually it is going
> to happen, isn't it? Would the processor be able to process something as
> fast as it can input the information? Or would the universe as we know it
> collapse if this were to happen...
>
> MC
>
>

That would be ideal, but I don't know what makes you think it's going to
happen any time soon, because it's a heck of a lot easier to increase die
speed than it is to push signals over a PC board.
 
Archived from groups: alt.comp.hardware.overclocking

In a DIFFERENT universe with different fundamental constants it could be
POSSIBLE, but WE might not be possible.

In THIS universe it is unlikely that the front-side bus speed will ever be
the same as the CPU clock speed. It is physically impossible now to get
that kind of speed for random access (with a data bus width comparable to
CPU register width) to storage except over distances comparable to a single
chip's length or width (say 10 to 20 mm.)

The limiting factors are the speed of light and power consumption (which
equals heat production). An electrical signal in a conductor travels at
about 2/3 the speed of light, or 2/3 x 300,000,000 meters per second =
200,000,000 meters per second. At 4 GHz, a clock cycle in a conductor is
200,000,000 meters per second / 4,000,000,000 cycles per second = 50 mm
long.
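
If you want to check that arithmetic yourself, here is a quick sketch in
Python (the 2/3-of-c propagation factor is just the rough figure used
above):

# How far a signal travels in one clock cycle, assuming ~2/3 c in a conductor.
C = 3.0e8                # speed of light, m/s (rounded)
PROPAGATION = 2.0 / 3.0  # rough fraction of c for a signal on a board trace

def distance_per_cycle_mm(clock_hz):
    return PROPAGATION * C / clock_hz * 1000  # millimetres per clock cycle

print(distance_per_cycle_mm(4e9))  # ~50 mm at 4 GHz, as above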

With pipelining some of that can be overcome, but the power consumption
would be horrendous because of the load that long conductors represent at
that speed. There would not be just a clock signal path operating at that
speed, but also a signal for each data bit and for each address bit. And
even if you could handle the power and heat problems, there would still be
the synchronization problem: if the data is stored more than about 25 mm
(one inch) from the CPU, new data from a random address could not be ready
each clock cycle, even if you eliminate gate delay and contention.
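
To put a number on that synchronization problem, here is a rough estimate
of how many clock cycles a round trip to memory costs from wire delay
alone (same 2/3-of-c assumption as above, ignoring gate delays and
contention; the 150 mm CPU-to-DIMM distance is only an illustrative guess):

# Clock cycles lost to propagation delay alone for a round trip to memory.
C = 3.0e8                # m/s
PROPAGATION = 2.0 / 3.0  # rough signal speed as a fraction of c

def round_trip_cycles(distance_m, clock_hz):
    one_way = distance_m / (PROPAGATION * C)   # seconds
    return 2 * one_way * clock_hz              # cycles for request + reply

print(round_trip_cycles(0.025, 4e9))  # 25 mm away: ~1 cycle just in the wires
print(round_trip_cycles(0.15, 4e9))   # 150 mm (rough CPU-to-DIMM): ~6 cycles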

However, you can approach that kind of performance more easily than
switching to a different universe.

Even now, the CPU registers and the caches (L1, L2, and possibly L3) bridge
the gap between the speed of the CPU and the speed of main memory, as does
pipelining. If the data needed by an instruction is already in a CPU
register or a cache, it is available much faster than from main memory.
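
A rough way to see how much the caches help is the standard average
memory access time calculation. The hit rates and latencies below are
made-up but plausible illustrative numbers, not measurements of any
particular CPU:

# Average memory access time with an L1/L2/main-memory hierarchy.
# All latencies in CPU cycles; hit rates are illustrative guesses only.
def amat(l1_hit, l1_rate, l2_hit, l2_rate, mem_latency):
    # A miss in L1 goes to L2; a miss in L2 goes to main memory.
    return l1_hit + (1 - l1_rate) * (l2_hit + (1 - l2_rate) * mem_latency)

# e.g. 2-cycle L1 with 95% hits, 10-cycle L2 with 90% hits, 200-cycle memory
print(amat(2, 0.95, 10, 0.90, 200))  # ~3.5 cycles on average, not 200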

Then there are

parallel processors

Optical pathways and switches

3-D integrated circuits

Qubits
...

Computing capability will continue to increase, but making the data buses
as fast as the CPU clock rate is not going to be the road taken, at least
in this universe.



--
Phil Weldon, pweldonatmindjumpdotcom
For communication,
replace "at" with the 'at sign'
replace "mindjump" with "mindspring."
replace "dot" with "."

"Moderately Confused" <moderatelyconfused@yahoospleen.com> wrote in message
news:j6Wdnaz3yvX3kC3dRVn-tw@comcast.com...
> I was just sitting here reading the posts when something popped into my
> head. What will happen when the speed of the FSB is equal to the speed of
> the processor? IE 3GHz processor with a 3GHz FSB. Eventually it is going
> to happen, isn't it? Would the processor be able to process something as
> fast as it can input the information? Or would the universe as we know it
> collapse if this were to happen...
>
> MC
>
>
 
Archived from groups: alt.comp.hardware.overclocking

Thanks for the very detailed reply. Since 800 MHz seems to be the highest
right now, do you have a prediction of how fast it *could* get, given the
limitations we have now?

MC


"Phil Weldon" <notdisclosed@example.com> wrote in message
news:3gVrc.6688$be.5857@newsread2.news.pas.earthlink.net...
> [snip]
 
Archived from groups: alt.comp.hardware.overclocking

Well, my prediction would be just an opinion, but you could look at some of
the "laws" (rules of thumb is a more accurate description) for some clues.

Amdahl/Case Rule: A balanced computer system needs about 1 Mbyte of main
memory capacity and 1 Mbit per second of I/O capacity for each MIPS of CPU
performance.

This rule seems to have applied pretty well over the years. It would
indicate that a balanced system using a Pentium 4 3.2 GHz CPU (say about
5,000 MIPS) would require an I/O bandwidth of 5 Gbits per second and 5
Gbytes of main memory. A 64-bit 66 MHz PCI slot (found in high-end
servers) has a bandwidth of 4.2 Gbits per second, and 4 Gbytes of main
memory would not be out of the question for such a system. Greatly
increasing one of these factors without a similar increase in the others
creates a bottleneck.
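
A quick sketch of the arithmetic behind that, in Python (the 5,000 MIPS
figure is the rough estimate above, not a benchmark):

# Amdahl/Case rule of thumb: ~1 MB of memory and ~1 Mbit/s of I/O per MIPS.
def balanced_system(mips):
    memory_mb = mips   # megabytes of main memory
    io_mbit_s = mips   # megabits per second of I/O bandwidth
    return memory_mb, io_mbit_s

mem, io = balanced_system(5000)      # ~5,000 MIPS for a 3.2 GHz P4 (rough guess)
print(mem / 1000, "GB of memory")    # ~5 GB
print(io / 1000, "Gbit/s of I/O")    # ~5 Gbit/s
# For comparison, a 64-bit 66 MHz PCI slot: 64 * 66e6 / 1e9 ~= 4.2 Gbit/s peak
print(64 * 66e6 / 1e9, "Gbit/s")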

Caches have a paradoxical quality. A larger cache does not necessarily
result in faster memory access for typical programs. The larger a cache,
the larger the penalty for a cache hit, even though the time lost on cache
misses is decreased. The size and associativity of caches must be
carefully tuned to the operation of the CPU and main memory, and to the
types of programs that will run, with the constraint that CPUs like those
in the Pentium stable are general-purpose processors. Special-purpose
processors like GPUs have entirely different approaches to caching.
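
The trade-off is easy to illustrate with the usual average-access-time
formula: a bigger cache that hits more often but takes longer per hit can
end up slower for some workloads. The cycle counts and hit rates below are
invented purely for illustration:

# Larger cache: better hit rate but slower hits. Which wins depends on the workload.
def amat(hit_time, hit_rate, miss_penalty):
    return hit_time + (1 - hit_rate) * miss_penalty

MISS_PENALTY = 200  # cycles to main memory (illustrative)

small_fast = amat(hit_time=2, hit_rate=0.90, miss_penalty=MISS_PENALTY)  # 22 cycles
large_slow = amat(hit_time=5, hit_rate=0.95, miss_penalty=MISS_PENALTY)  # 15 cycles
print(small_fast, large_slow)        # here the big cache wins

# But for a program that already hits 98% in the small cache, the bigger,
# slower cache buys almost nothing and its extra hit latency dominates:
print(amat(2, 0.98, MISS_PENALTY), amat(5, 0.99, MISS_PENALTY))  # 6.0 vs 7.0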

If forced to predict for the next ten years, I'd say that Intel
hyper-threading is a bridge to multiple CPUs on a single chip, and that the
number of ACTUAL CPUs on a chip will be transparent to the operating
system. Close-coupled L2 caches will become larger, pipelines will become
deeper. The EFFECTIVE number of instructions per second will be a better
judge of performance than any clock rate across processors using the same
instruction set.
--
Phil Weldon, pweldonatmindjumpdotcom
For communication,
replace "at" with the 'at sign'
replace "mindjump" with "mindspring."
replace "dot" with "."

"Moderately Confused" <moderatelyconfused@yahoospleen.com> wrote in message
news:_M2dnWJxxuGdui3dRVn-sw@comcast.com...
> Thanks for the very detailed reply. Since 800MHz seems to be the highest
> right now, do you have a prediction how fast it *could* get, given the
> limitations we have now?
>
> MC
>
>
> "Phil Weldon" <notdisclosed@example.com> wrote in message
> news:3gVrc.6688$be.5857@newsread2.news.pas.earthlink.net...
> > In a DIFFERENT universe with different fundamental constants it could be
> > POSSIBLE, but WE might not be possible.
> >
> > In THIS universe it is unlikely that the FrontSideBus speed will ever
be
> > the same as the CPU clock speed. It is physicaly impossible now to get
> > that kind speed for random access (with a data bus width comprable to
CPU
> > register width) to storage except over distances comprable to a single
> > chip's length or width (say 10 to 20 mm.)
> >
> > The limiting factors are the speed of light and power consumption (which
> > equals heat production). An electrical signal in a conductor travels
> about
> > 2/3 the speed of light, or 2/3 X 300,000,000 meters per second =
> 200,000,000
> > meters per second. At 4 GHz, a clock cycle in a conductor is
200,000,000
> > meters per second / 4,000,000,000 cycles per second = 50 mm long.
> >
> > With pipeling some of that can be overcome, but the power consumption
will
> > be horrendous because of the load long conductors represent at that
speed.
> > There would not be just a clock signal path operating at that speed, but
> > also a signal for each data bit and for each address bit. And even if
you
> > could handle the power and heat problems, there would still be the
> > syncronization problem: if the data is stored more than about 25 mm
(one
> > inch0 from the CPU, new data from a random address could not be ready
each
> > clock cycle, even if you eliminate gate delay and contentions.
> >
> > However, you can approach that kind of performance more easily than
> > switching to a different universe.
> >
> > Even now, the CPU registers and the caches (L1, L2, and possibly L3)
> bridge
> > the gap between the speed of main memory, as does pipelining. If the
data
> > needed by an instruction is already in CPU registers or a cache, it is
> > availiable much faster than from main memory.
> >
> > Then there are
> >
> > parallel processors
> >
> > Optical pathways and switches
> >
> > 3-D integrated circuits
> >
> > Qbits
> > .
> > .
> > .
> >
> > Computing capability will continue to increase, but making the data
busses
> > as fast as the CPU clock rate is not going to be the road taken, at
least
> in
> > this univerese.
> >
> >
> >
> > --
> > Phil Weldon, pweldonatmindjumpdotcom
> > For communication,
> > replace "at" with the 'at sign'
> > replace "mindjump" with "mindspring."
> > replace "dot" with "."
> >
> > "Moderately Confused" <moderatelyconfused@yahoospleen.com> wrote in
> message
> > news:j6Wdnaz3yvX3kC3dRVn-tw@comcast.com...
> > > I was just sitting here reading the posts when something popped into
my
> > > head. What will happen when the speed of the FSB is equal to the
speed
> of
> > > the processor? IE 3GHz processor with a 3GHz FSB. Eventually it is
> going
> > > to happen, isn't it? Would the processor be able to process something
> as
> > > fast as it can input the information? Or would the universe as we
know
> it
> > > collapse if this were to happen...
> > >
> > > MC
> > >
> > >
> >
> >
>
>
 
Archived from groups: alt.comp.hardware.overclocking

> This rule seems to have applied pretty well over the years. It would
> indicate that a balanced system using a Pentium 4 3.2 GHz CPU ( say about
> 5,000 MIPS) would require an I/O bandwidth of 5 Gbits per second and 5
> Gbytes of main memory. A 64 bit 66 MHz PCI slot (found in high end
> servers) has a bandwidth of 4.2 Gbits per second.

Actually, only the lower-end servers I've bought over the past year have
come with 64-bit, 66-MHz PCI slots. The mid- and high-end x86 servers (and
some of the low-end!) have come with 100-MHz or 133-MHz PCI slots.
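
For reference, the peak numbers for those slots are just bus width times
clock (a quick sketch; real throughput is lower once protocol overhead is
counted):

# Peak PCI / PCI-X bandwidth: bus width (bits) times clock rate.
def pci_gbit_s(width_bits, clock_mhz):
    return width_bits * clock_mhz * 1e6 / 1e9

for mhz in (66, 100, 133):
    print(f"64-bit {mhz} MHz: {pci_gbit_s(64, mhz):.1f} Gbit/s")
# 64-bit 66 MHz  ~ 4.2 Gbit/s
# 64-bit 100 MHz ~ 6.4 Gbit/s
# 64-bit 133 MHz ~ 8.5 Gbit/s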

steve
 
Archived from groups: alt.comp.hardware.overclocking

"Phil Weldon" <notdisclosed@example.com> wrote in message
news:taXrc.6042$Tn6.4163@newsread1.news.pas.earthlink.net...
> [snip]
> If forced to predict for the next ten years, I'd say that Intel
> hyper-threading is a bridge to multiple CPUs on a single chip, and that
> the number of ACTUAL CPUs on a chip will be transparent to the operating
> system.
Wouldn't that be more like having a large array of execution units (ALUs,
FPUs, or others?) in parallel, maybe grouped, served by large shared L2
caches and dedicated L1 caches (per group)... or am I just hypothesizing?
It can't be treated as multiple CPUs (or cores) if they aren't discrete.
E.V
> Close-coupled L2 caches will become larger, pipelines will become
> deeper. The EFFECTIVE number of instructions per second will be a better
> judge of performance than any clock rate across processors using the same
> instruction set.
> [snip]
 
Archived from groups: alt.comp.hardware.overclocking

Think of it as a mini-cluster on a chip. Cache coherency could be a real
problem, but since we are speculating, each CPU would have its own L1 cache
at least. Multiple execution units share a pipeline; multiple CPUs don't.

--
Phil Weldon, pweldonatmindjumpdotcom
For communication,
replace "at" with the 'at sign'
replace "mindjump" with "mindspring."
replace "dot" with "."

"Erez Volach" <ivrit@netvision.net.il> wrote in message
news:40b1ebd5$1@news.012.net.il...
>
> "Phil Weldon" <notdisclosed@example.com> wrote in message
> news:taXrc.6042$Tn6.4163@newsread1.news.pas.earthlink.net...
> > Well, my prediction would be just an opinion, but you could look at
some
> of
> > the "laws" (rules of thumb is a more accurate description) for some
clues.
> >
> > Amhdal/Case Rule: A balanced computer system needs about 1 Mbyte of
main
> > memory capacity and 1 Mbit per second of I/O capacity for each MIPS of
CPU
> > performance.
> >
> > This rule seems to have applied pretty well over the years. It would
> > indicate that a balanced system using a Pentium 4 3.2 GHz CPU ( say
about
> > 5,000 MIPS) would require an I/O bandwidth of 5 Gbits per second and
5
> > Gbytes of main memory. A 64 bit 66 MHz PCI slot (found in high end
> > servers) has a bandwidth of 4.2 Gbits per second. And 4 Gbytes of main
> > memory would not be out of the question for such a system. Greatly
> > increasing one of these factors without a similar increase in the other
> > create a bottle-neck.
> >
> > Caches have a paradoxical quality. A larger cache does not
necessarily
> > result in faster memory access for typical programs. The larger a
cache,
> > the LARGER the penalty for cache hits, hits even though the time lost on
> > cache misses is decreased. The size and associativity of caches must
be
> > carefully tuned to the operation of the CPU and main memory, and for
the
> > types of programs that will run, with the restraint that CPU's like
those
> > in the Pentium stable are general purpose processors. Special purpose
> > processors like GPU's have entirely different approaches to caching.
> >
> > If forced to predict for the next ten years, I'd say that Intel
> > hyper-threading is bridge to multiple CPU's on a single chip, and that
> the
> > number of ACTUAL CPU's on a chip will be transparent to the operating
> > system.
> Wouldn't that be more like having a large array of execution units (ALUs,
> FPUs, or others?) in parallel, maybe grouped, served by large shared L2
> caches and dedicated L1 caches (per group)... or am I just hypothesizing?
> It can't be treated as multiple CPUs (or cores) if they aren't discrete.
> E.V
 
Archived from groups: alt.comp.hardware.overclocking

800 MHz is the *OFFICIAL* maximum for Intel.

But many people run above that mark without problems...
I'm running at 1000 MHz right now, have been to 1120 MHz with a 2.4C M0,
and the max I've seen on the Internet was 1280 MHz...
I've seen a 1600 MHz bus, but that was on a highly modified board :/
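
For context, those figures are the quad-pumped numbers: the Pentium 4 bus
transfers four times per base clock, so the "FSB speed" people quote is 4x
the actual bus clock, and peak bandwidth scales with it. A quick sketch,
assuming the usual 64-bit-wide bus:

# Pentium 4 "quad-pumped" FSB: quoted speed = 4 x base clock, 64-bit wide bus.
def fsb_stats(quoted_mhz):
    base_clock = quoted_mhz / 4                  # actual bus clock in MHz
    bandwidth_gb_s = 8 * quoted_mhz * 1e6 / 1e9  # 8 bytes per transfer
    return base_clock, bandwidth_gb_s

for quoted in (800, 1000, 1280, 1600):
    base, bw = fsb_stats(quoted)
    print(f"{quoted} MHz quoted = {base:.0f} MHz clock, {bw:.1f} GB/s peak")
# 800 -> 200 MHz clock, 6.4 GB/s; 1000 -> 250 MHz, 8.0 GB/s; etc.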



Stormgiant
P4 3.0@3750 with 512MB OCZ GOLD PC3200 REV 1.2 @2-2-2-6 @500

On Sun, 23 May 2004 00:18:08 -0400, "Moderately Confused"
<moderatelyconfused@yahoospleen.com> wrote:

>Thanks for the very detailed reply. Since 800MHz seems to be the highest
>right now, do you have a prediction how fast it *could* get, given the
>limitations we have now?
>
>MC