
76.8GB/s of memory bandwidth in 2004

October 23, 2001 12:31:33 AM

Thanks to some new R&D advancements (http://www.tomshardware.com/technews/index.html), it looks like we will be seeing some massive bandwidth in just a couple of years' time. I will put forth a technical explanation below.

Current forms of RDRAM are a DDR (Double Data Rate) technology. For whatever memory clock is used, they transmit twice per clock. The i850 chipset provides a 400MHz memory clock (FSB). The RDRAM modules transmit twice per clock for an effective 800MHz rate. This provides 1.6GB/s of memory bandwidth per 16-bit channel. The i850 chipset provides two of these channels to obtain the advertised 3.2GB/s of memory bandwidth.

A new Octal Data Rate (ODR) technology (http://www.tomshardware.com/technews/index.html) has been developed that can transmit 8 times per clock. If you kept the same 400MHz FSB clock, then this would be an effective 3200MHz (3.2GHz) rate. Note that this is still on a per-16-bit-channel basis. This provides 6.4GB/s of memory bandwidth per 16-bit channel. On a dual-channel chipset with a 400MHz FSB this would provide 12.8GB/s of memory bandwidth.

Now this might not sound very impressive yet. After all, this is over 2 years away. We should get more than 4 times current bandwidth with 2 years of research and development. This is where the fun starts. 16-bit RDRAM channels will be a thing of the past in the second half of 2002. We will be using 32-bit channels by then, and 64-bit channels by 2004. In addition to this, by the second half of 2002 RDRAM platforms will be using a 533MHz FSB clock (PC1066). By 2004 they will be using a 600MHz FSB clock (PC1200).

Couple the ODR technology with dual 64-bit channels running off a 600MHz FSB clock and you get 76.8GB/s of memory bandwidth. 76.8GB/s of memory bandwidth in just over 2 years is pretty nice, is it not?
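All of these figures fall out of one formula: clock rate times transfers per clock times channel width times channel count. A quick sanity check in Python (the helper function here is just for illustration, not anything official):

```python
def bandwidth_gbs(clock_mhz, transfers_per_clock, channel_bits, channels):
    """Peak bandwidth in GB/s: clock * pumping factor * channel width * channel count."""
    bytes_per_transfer = channel_bits / 8
    return clock_mhz * 1e6 * transfers_per_clock * bytes_per_transfer * channels / 1e9

print(bandwidth_gbs(400, 2, 16, 2))   # i850 today (DDR, dual 16-bit): 3.2 GB/s
print(bandwidth_gbs(400, 8, 16, 2))   # ODR on the same 400MHz clock: 12.8 GB/s
print(bandwidth_gbs(600, 8, 64, 2))   # 2004 projection (ODR, dual 64-bit): 76.8 GB/s
```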

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 12:40:07 AM

Yes, and as we know, RDRAM is NOT SUPPORTED by the AMD platform. Therefore they will not see any of the benefits you mentioned. Oh well.
October 23, 2001 12:45:31 AM

If you turn this thread into an Intel vs. AMD war, I am going to find you and hang you up by whatever genitals you have left. I want to actually discuss memory technology and how it relates to our processors. I do not want to discuss public relations between processor companies. That got very old very fast. I want you to know that yes, I obviously do support Intel. But I do not support trolling.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 12:54:49 AM

RDRAM at 600MHz will have a lower latency than RDRAM at 400MHz. How will the 64-bit data path and the octal data rate affect the latency?
Why does RDRAM have a higher latency than SDRAM in the first place?

"Ignorance is bliss, but I tend to get screwed over."
October 23, 2001 12:58:01 AM

Ray, how will this affect latency... or has this been researched yet?

Mark-

When all else fails, throw your computer out the window!!!
October 23, 2001 1:00:17 AM

No fair! Your post beat mine!

grumble..

mutter..

danged slow cable internet!


When all else fails, throw your computer out the window!!!
October 23, 2001 1:03:57 AM

"How will the 64 bit data path, and the octal data rate affect the latency?"

RDRAM's latency decreases as it ramps up in speed. The ODR (Octal Data Rate) will likely further reduce latency. The 64-bit data path will not affect latency at all. It is much like adding multiple channels. It increases bandwidth, but latency for accessing each channel remains the same.


"Why does RD RAM have a higher latency than SD RAM in the first place?"

This is due mostly to the overhead of laying out the circuitry in a serial nature. SDRAM uses parallel circuit pathways. However, these parallel circuit pathways do not scale well and latency actually increases as you significantly increase bandwidth, such as through DDR technology. PC1066 RDRAM has about the same latency as PC2100 DDR SDRAM. As both increase further in speed RDRAM will continue to attain lower latency and will surpass SDRAM in latency performance as well as bandwidth.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 1:13:52 AM

What about cost?
It's only natural to assume that this new RAM will be more expensive for at least a little while, but what about the motherboards designed to use it?
Will moving to 32 bit and then 64 bit data paths mean that the motherboards will be 6 layer, or even 8 layer designs?

"Ignorance is bliss, but I tend to get screwed over."
October 23, 2001 1:14:06 AM

Where is your proof, Intel_inside? Within 2 years, many things can happen. AMD motherboards might start supporting RDRAM. 2 years is more like 2 centuries for computer technology. Countless new technologies may be released within the next 2 years. In two years, Intel and AMD might not even exist (possible, but highly unlikely). The computer industry is moving so quickly that it's hard to predict more than a few months of progress.

AMD technology + Intel technology = Intel/AMD Pentathlon IV; the ULTIMATE PC processor
October 23, 2001 1:30:55 AM

"It's only natural to assume that this new RAM will be more expensive for at least a little while"

Every new technology starts out at a higher price. This is the nature of technology.


"Will moving to 32 bit and then 64 bit data paths mean that the motherboards will be 6 layer, or even 8 layer designs?"

Due to the serial nature of RDRAM, the memory circuitry takes up much less space. This can all be accomplished on 4-layer PCB motherboards.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 1:59:44 AM

I think the industry is long overdue for a large increase in memory bandwidth, and I imagine that Intel has the marketing muscle to push software developers toward taking advantage of this.
While I’m not particularly fond of Rambus’ marketing and legal tactics, it looks like they’ll be developing some very useful technology over the next few years, and it will be interesting to see how DRAM technologies compete with that.
It will also be interesting to see if any graphics card companies adopt Rambus technologies, since they are always striving for lower latencies and higher bandwidth.
Could you imagine NVidia making a deal with Rambus similar to the one Intel has?

"Ignorance is bliss, but I tend to get screwed over."
October 23, 2001 3:03:58 AM

"I think the industry is long overdue for a large increase in memory bandwidth"

Agreed. This is the main reason that we need GPUs at all today. If we had sufficient memory bandwidth to the main processor, it could easily handle all your 3D graphics needs. While it is nice to have more computing power in the form of a GPU, the main reason we use them is not any remarkable processing performance delivered by them. We use them because they can be closely tied to video memory and have available huge amounts of memory bandwidth when rendering to video memory. This is because the GPU and memory reside on the same card and have a dedicated high-speed bus to each other.

We all know the limiting factor of the video subsystem is memory bandwidth. If we eliminated the memory bandwidth bottleneck in our systems, we would have vast amounts of processing power available, using our GHz CPUs to render beautiful images, even in games. The concept of a separate GPU would fall by the wayside because these GPUs are actually pitifully slow compared to our main system processors. The only benefit is the proximity to video memory. Without a high-speed memory bus connecting the CPU to whatever memory is to be used for the display, you are required to use another processor such as a GPU on a video card with dedicated video RAM.

Imagine the complex scenes that could be created in real-time using our modern processors if they were given enough memory bandwidth to act as the GPU. Games programmers roll over and beg when someone drops them the bone of being able to have a programmable vertex shader on the GPU. Well, *everything* would be programmable if we used our main CPUs. You could literally do *anything*.

I would love to see fully ray-traced scenes in games. I would love to see a game world that made me think I was looking out a window. I am pretty sure everyone else would love these things as well. But such complex algorithms are not going to be forthcoming from video card companies. They do not specialize in computational power. They specialize in delivering bandwidth. We should look to the main CPU manufacturers in the industry for our computational power to be able to do these things.


"Could you imagine NVidia making a deal with Rambus similar to the one Intel has?"

Yes I could, but I would rather see GPUs replaced by our CPU with the coming of huge amounts of memory bandwidth. nVidia has been dreaming lately about completely replacing the main processor in your system as the central component. They need a wakeup call. Their 'processors' are pitiful compared to those of Intel and AMD.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 3:35:51 AM

Correct me if I am wrong, but wouldn't an octal RDRAM system need to coincide with an octal memory controller (chipset) as well? Any info on such a chipset? And then, to achieve this bandwidth, how do you see the processor-to-memory-controller bus operating (FSB octal pumped as well)?

Video editing?? Ha, I don't even own a camera!
Anonymous
October 23, 2001 3:37:17 AM

Mr Raystonn sir,

Yes, that is a hefty bandwidth figure you quote, very nice indeed. It seems, though, as has been shown in many cases while simulating today's available software through synthetic benchmarks, that the lower latency of the various SDRAMs often allows them to perform better in spite of their lower available data bandwidth. This is demonstrated to be fairly accurate by comparing real-world application performance. The reason I bring this up is that in the previous posts you mention that some of the new functions of this newer RDRAM allow for lower and lower latencies, and that raises a question in my mind. Do you have any opinions on whether RDRAM in any future incarnation will ever be able to also claim the lowest latency figures with respect to the SDRAM (or whatever competing technologies exist at the time) types available then? If so, at what point do you predict that this milestone will be achieved?

Hopefully you see why I ask this.

Also, does anybody know if the agreements between Intel and Rambus disallow chipset makers from supporting RDRAM/Athlon chipsets and motherboards?

Edit: oops, punctuation errors. (Edited by knewton on 10/22/01 11:41 PM.)
October 23, 2001 3:52:42 AM

That's a very interesting way of looking at it. As the power of CPUs increases, it makes sense to move the work done by peripheral components to the CPU, saving costs.
I can imagine that we will eventually reach a point where the quality of graphics in games surpasses our ability to interpret it, and it is therefore pointless to increase the power of graphics cards. I can also imagine that the power of CPUs will eventually so completely surpass the processing power required for this that there will be no reason not to do this processing on the CPU.
I've always felt that it was only a matter of time before all of the work done in a computer is done on a single chip. The question is, how long will it take to get there?
76.8GB/s of bandwidth is certainly a step in the right direction, but it will take even more than that, I think, if we are going to render scenes like this (http://www.irtc.org/ftp/pub/stills/2001-06-30/warm_up.j...) in real time. That image took 100 hours to render on a 1.4GHz Athlon with 1GB of DDR RAM.

"Ignorance is bliss, but I tend to get screwed over."
October 23, 2001 3:55:01 AM

"wouldn't an octal RDRAM system need to coincide with an octal memory controller (chipset) as well"

Yes, the ODR technology will be out in 2002. By that time there will be a supporting chipset.


"how do you see the processor-to-memory-controller bus operating (FSB octal pumped as well)?"

The FSB would likely be operating at 600MHz, quad pumped off a 150MHz external clock.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 3:59:08 AM

Quote:
By 2004 they will be using a 600MHz FSB clock (PC1200).

This does not seem to be a logical progression; typically, increases come in steps of 33 1/3MHz. But then again, by 2004 PCI and even AGP devices may be obsolete, so who knows? However, major changes would need to be made to the P4 to use this bandwidth.

For instance, given your projection:

The Pentium 4's system bus would be clocked at 150MHz and 64 bits wide, but 'quad-pumped', using the same principle as AGP 4x. Thus it could transfer 8 bytes * 150 million/s * 4 = 4,800MB/s.

So you would need to radically change the P4 by adding another 64-bit pathway from the CPU to the memory controller (i.e. Alpha) for a CPU-to-memory bandwidth of 9,600MB/s, or octal-pump (that just does not sound right) the CPU at 150MHz. In either case, wouldn't you have a completely different CPU?

Video editing?? Ha, I don't even own a camera!
October 23, 2001 4:10:59 AM

"Do you have any opinions about whether RDRAM in any future incarnations will ever be able to also claim the lowest latency figures with respect to available SDRAM (or whatever competing technologies exist at the time) types at that time"

In the second half of 2002 PC1066 RDRAM will become the standard, with overclocking going somewhat beyond to probably around PC1150 or so. At that point it will have lower latency than the DDR SDRAM alternative.


"does anybody know if the agreements between Intel and Rambus disallow chipset makers from supporting RDRAM/Athlon chipsets and motherboards"

There are no restrictions on the part of AMD or any other companies. They are free to license the technology just as Intel has done. Be aware that when they decide to do so it will take a considerable amount of time to ramp up support and get everything bug-free. New technologies take a while to perfect. [lowblow](Though VIA can just skip that part. ;) [/lowblow]

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 4:12:40 AM

Remember that if you follow the more common interpretation of Moore's Law you get a 100-fold increase in performance every 10 years.
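For reference, that reading of Moore's Law is consistent with the familiar "doubling every 18 months" phrasing; the arithmetic (not from the post, just a check):

```python
import math

# 100x every 10 years implies a doubling time of 10 * log(2)/log(100) years.
doubling_years = 10 * math.log(2) / math.log(100)
print(round(doubling_years, 2))   # about 1.5 years, i.e. roughly 18 months
```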

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 4:21:08 AM

Quote:
Remember that if you follow the more common interpretation of Moore's Law you get a 100-fold increase in performance every 10 years.


Speaking of Moore's Law, Intel has always followed it pretty closely, haven't they?
Rambus will break it completely if they improve memory bandwidth by 24 times in just three years. How likely do you think it is that they will manage the same again after 2004?

"Ignorance is bliss, but I tend to get screwed over."
October 23, 2001 4:24:30 AM

Actually, AMD has had a license to use RDRAM for some time now; it's just that they have chosen not to use it. Your conjecture calling for RDRAM to be the standard in just over 8 months is a little overoptimistic, especially with Intel just releasing the i845.

Quote:
At that point it will have lower latency than the DDR SDRAM alternative



Only in comparison to current DDR, but DDR will ramp in speed as well, getting a double-pumped bus of 166MHz and then eventually a quad-pumped bus.

Myself, I was hoping for a completely different solution by 2004; perhaps magnetic RAM technology?

Video editing?? Ha, I don't even own a camera!
(Edited by ncogneto on 10/23/01 00:25 AM.)
Anonymous
October 23, 2001 4:24:53 AM

Well, well, well. If this is all true, then things are looking rather grim for SDRAM. I just can't imagine what could be done to make it compete with numbers like this. Low cost can only take you so far. Oh well, it kicked some butt back in the day.

"[lowblow](Though VIA can just skip that part. ;) [/lowblow]"

Not really sure if you are referring to the fact that they seem to be immune to the need to license techs, or the fact that they seem to be immune to the need to perfect their new technologies before releasing. heh heh
October 23, 2001 4:35:32 AM

The FSB would indeed need to be increased to make use of all of this bandwidth. However, this could easily be done by dropping back down to a low multiplier and using a very high FSB clock rate. That 600MHz FSB figure I gave was a bit inaccurate: using PC1200 RDRAM with the same dual 16-bit channels (or a single 32-bit channel) and the same multipliers would only achieve 4.8GB/s of memory bandwidth by itself.

To properly use all 76.8GB/s of memory bandwidth would require a 64-bit FSB with an effective rate of 9.6GHz. The Pentium 4 has a 64-bit FSB that currently operates at 400MHz. If a divider were implemented instead of a multiplier, we could set the CPU's divider to 3 and have it running at 3.2GHz on a 9.6GHz FSB. Eventually the Pentium 4's core is expected to scale beyond 10GHz, so the divider may be unnecessary, depending on how long it takes to get there. This would unlock the full potential of 76.8GB/s of memory bandwidth.
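The divider arithmetic is straightforward; a sketch of just the numbers in that paragraph:

```python
# 76.8GB/s over a 64-bit (8-byte) FSB requires an effective transfer rate of:
required_fsb_ghz = 76.8 / 8       # 9.6 GHz
# With a CPU divider of 3, the core runs at a third of the bus rate:
core_ghz = required_fsb_ghz / 3   # 3.2 GHz
print(required_fsb_ghz, core_ghz)
```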

Once we move to a new core (the Pentium 5) we can implement wider FSB buses. The 64-bit bus can be moved up to 128-bit or 256-bit, which would cut the FSB clock requirements by a factor of 2 or 4 respectively. Now you may be questioning the effectiveness of a processor with what seems like more memory bandwidth than it can handle. I assure you this is not the case. With more and more SIMD instructions being introduced, and most of them becoming standard among all competitors, a couple CPU clocks are capable of accessing a vast amount of memory. You will soon see FSBs with a higher clockrate than the processor.

I envision a time when the SVGA port for the monitor is attached to the motherboard and the CPU uses local memory as video memory with its massive bandwidth. All 3D processing would be done by the CPU (and much faster as well.)

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 4:40:21 AM

"a little over optimistic, especially with Intel just releasing the I845"

The i845 was released to cover the lower price points (i.e. those who complain about high prices). It is not intended to ever be the best-performing platform.


"DDR will ramp in speed as well getting a double pumped bus of 166 then eventually a quad pumped bus."

That is nice and all but it will not beat the bandwidth available with RDRAM. Additionally, every time they bump up the speed on SDRAM its latency increases. It will quickly fall out of fashion as it moves beyond its original design specifications. It is time for something new.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 4:42:51 AM

"Oh well it kicked some butt back in the day."

So did EDO RAM... :) 


"not really sure if you are referring to the fact that they seem to be imune to the need to license techs, or the fact that they seem to be immune from the need to perfect their new technologies before releasing."

I had the first in mind really. I believe most would rate them as the developer of the buggiest chipsets.

-Raystonn



= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 4:45:34 AM

Quote:
That is nice and all but it will not beat the bandwidth available with RDRAM. Additionally, every time they bump up the speed on SDRAM its latency increases. It will quickly fall out of fashion as it moves beyond its original design specifications. It is time for something new.

Actually, Ray, yes, (currently) it does beat the bandwidth of RDRAM; it just doesn't have the advantage of the dual memory controller of the i850.

Video editing?? Ha, I don't even own a camera!
October 23, 2001 4:48:26 AM

Quote:
The i845 was released to cover the lower pricepoints (i.e. those who complain about high prices.) It is not intended to ever be the best performing platform.

Sorry, my interpretation of the word "standard" would be that of the most commonly used. Thus you would look at all the systems (Intel and AMD) and see which memory is being used more.


Video editing?? Ha, I don't even own a camera!
October 23, 2001 4:52:41 AM

Memory bandwidth is measured in bandwidth per pin because it is just as easy to place an SDRAM pin/trace on a motherboard as it is to place one destined to be used by RDRAM. Thus the basic unit of measurement is how much bandwidth you can get per unit of space, which translates to a 'per pin' basis. The current form of RDRAM (PC800) gets 100MB/s of bandwidth per pin. The current form of DDR (PC2100) gets about 33MB/s of bandwidth per pin.

I believe the RDRAM wins here. If you want to compare 64 pins of SDRAM to 16 pins of RDRAM then sure the SDRAM will win. But you should know that it is just as easy to place 4 16-bit RDRAM channels on a motherboard as it is to place a single 64-bit SDRAM channel. A dual channel SDRAM chipset is about as difficult to make as an 8-channel RDRAM chipset. This is why nForce is so expensive.
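The per-pin comparison works out as follows (counting data pins only, which is an assumption here; PC800 RDRAM at 1.6GB/s over a 16-bit channel, PC2100 DDR at 2.1GB/s over a 64-bit channel):

```python
# Bandwidth per data pin in MB/s (control/address pins ignored).
def mb_per_pin(bandwidth_gbs, data_pins):
    return bandwidth_gbs * 1000 / data_pins

print(mb_per_pin(1.6, 16))   # PC800 RDRAM: 100 MB/s per pin
print(mb_per_pin(2.1, 64))   # PC2100 DDR SDRAM: about 33 MB/s per pin
```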

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 4:54:10 AM

Quote:
The Pentium 4 has a 64-bit FSB that currently operates at 400MHz. If instead of a multiplier, a divider was implemented, we could set the CPU's divider to 3 and have it running at 3.2GHz on a 9.6GHz FSB.

I am still scratching my head on this one. Now would the chipset be running at 9.6GHz as well then? That would be quite a feat. And then the RDRAM itself gets a divider as well? Sounds like a motherboard designer's nightmare.

Video editing?? Ha, I don't even own a camera!
October 23, 2001 4:57:30 AM

I am comparing stick for stick (RDRAM vs. DDR SDRAM)... I don't believe the nForce will be any more expensive than the i850, especially factoring in the sound and video that you don't get with an i850 board (yet, anyway).

Video editing?? Ha, I don't even own a camera!
October 23, 2001 5:02:15 AM

Quote:
But you should know that it is just as easy to place 4 16-bit RDRAM channels on a motherboard as it is to place a single 64-bit SDRAM channel. A dual channel SDRAM chipset is about as difficult to make as an 8-channel RDRAM chipset. This is why nForce is so expensive.

I want to clarify this: are we talking channels or slots here? The nForce is manufactured on a 4-layer PCB, so it can't be that difficult. What purpose would an 8-channel RDRAM chipset possibly have?

Video editing?? Ha, I don't even own a camera!
October 23, 2001 5:04:14 AM

You could get a 9.6GHz FSB using a multiplier off the external clock, similar to the 'quad pumped' FSB of the current Pentium 4. This is just a multiplier of 4.0 off the external clock currently. If you increase the multiplier you increase the FSB. To use the same processor core with the same 64-bit FSB would require a multiplier of 64 off a 150MHz external clock. This can be done but would probably prove fairly difficult. A better solution would be a 256-bit FSB, a 300MHz external clock, and a multiplier of only 8. This would require a new core generation to do.
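Both options land on the same total; an illustrative tabulation of the numbers in the paragraph above (function name is mine):

```python
# Same 76.8GB/s reached two ways: a narrow bus with a huge multiplier,
# or a wide bus with a modest one.
def effective_ghz(ext_clock_mhz, multiplier):
    return ext_clock_mhz * multiplier / 1000

a = effective_ghz(150, 64)   # 9.6 GHz on a 64-bit (8-byte) FSB
b = effective_ghz(300, 8)    # 2.4 GHz on a 256-bit (32-byte) FSB
print(a * 8, b * 32)         # both come to 76.8 GB/s
```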

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 5:07:38 AM

An 8 channel (using 16-bit channels) RDRAM chipset would have 64 pins, similar to a single SDRAM channel. It would provide 12.8GB/s of memory bandwidth in the same circuit space as a single SDRAM channel chipset. This is what makes RDRAM so attractive. It has a very high 'bandwidth per pin' figure.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 5:10:15 AM

There are i850 motherboards with onboard audio. I do not see why anyone around here would want onboard video currently though. ;) 

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 5:16:06 AM

Quote:
If you increase the multiplier you increase the FSB.

Only to the processor, Ray, not the other components.

I have never heard of the FSB being measured in bits; I thought it was just a measurement in hertz of the bus speed. Am I confused? Nevertheless, in order for the P4 to use the 76.8GB/s of bandwidth in your opening post, the chipset would need to be operating at 9.6GHz. This I do not see as foreseeable by 2004. As I thought Intel hoped to scale the P4 up to 10GHz, I think a new core is out of the question for a while. Now, the link you provided has merit, and I can see RDRAM achieving 3.2GB/s by 2004, but to claim 76.8 by that time is a reach, to say the least.

Video editing?? Ha, I don't even own a camera!
October 23, 2001 5:19:44 AM

Yes, but Ray, the onboard audio is nothing that would compare to that on the nForce (at least not if it is close to what it is supposed to be).

Video editing?? Ha, I don't even own a camera!
October 23, 2001 5:25:57 AM

"in order for the P4 to use the 76.8GB/s of bandwidth in your opening post, the chipset would need to be operating at 9.6GHz."

No, with dual 64-bit RDRAM channels running at ODR (Octal Data Rate) off a 150MHz external clock you get 76.8GB/s of bandwidth. The chipset is well within specifications. Many people run with an external clock of 150MHz today.


Current Pentium 4:

100MHz external clock, Quad Pumped with a DRCG to 400MHz, DDR for an effective 800MHz, 16-bit for 1.6GB/s, dual channel for 3.2GB/s

Proposed system:

150MHz external clock, Quad Pumped with a DRCG to 600MHz, ODR for an effective 4800MHz, 64-bit for 38.4GB/s, dual channel for 76.8GB/s
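Stepping through that clock chain in order (DRCG multiplier, data rate, channel width, channel count), as a sketch:

```python
# Walk the clock chain: external clock -> DRCG-multiplied clock -> data rate -> bandwidth.
def chain_gbs(ext_clock_mhz, pump, data_rate, channel_bits, channels):
    fsb_mhz = ext_clock_mhz * pump                  # after the DRCG
    mt_per_s = fsb_mhz * data_rate                  # effective transfer rate (MT/s)
    per_channel_mbs = mt_per_s * channel_bits / 8   # MB/s per channel
    return per_channel_mbs * channels / 1000        # total GB/s

print(chain_gbs(100, 4, 2, 16, 2))   # current Pentium 4 / i850: 3.2 GB/s
print(chain_gbs(150, 4, 8, 64, 2))   # proposed ODR system: 76.8 GB/s
```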

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 5:40:16 AM

THANK YOU raystonn.
for 2 things.

1. for jumping on that stupid troll. i for one am very tired of the intel vs amd crap.

and

2. yes, we need bandwidth. hopefully gone are the days of real 14x multipliers (taking into account double/quad/oct data pumping)
we can finally get back to the days of the 386 and 486 where memory, bus speed & MHz were all around the same! YUM.

say raystonn... any info on QDR DRAM?

Religious wars are 2 groups of people fighting over who has the best imaginary friend.
October 23, 2001 5:40:36 AM

Ok, Ray, maybe I am missing something here (or maybe you are). Now, unless the P4 has a memory controller on die (which it doesn't), we need to get the information from the memory banks to the CPU. This is what a memory controller does (and it is found in the chipset). So, in order to deliver all this incredible bandwidth, we need to do so without a bottleneck. (For instance, the nForce has a memory bandwidth of 4.2GB/s, but the Athlon CPU only has a possible throughput of 2.1GB/s... thus the small increase in performance.)

Now, you have speculated on how this could be done, but, at least to me, none of your solutions seems remotely feasible. So what we have is a memory controller that is octal (8x). The P4 has a 64-bit path. So, at what frequency will this memory controller need to operate to supply 76.8GB/s to the CPU?

Video editing?? Ha, I don't even own a camera!
October 23, 2001 5:41:55 AM

I'm a troll?

Video editing?? Ha, I don't even own a camera!
October 23, 2001 5:48:26 AM

The MCH (Memory Controller Hub) would be responsible for taking 128 bits of data at a rate of approximately 4.8GHz and sending that to the processor across its FSB. This is not unreasonable.

-Raystonn



= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 5:48:55 AM

Quote:
150MHz external clock, Quad Pumped with a DRCG to 600MHz, ODR for an effective 4800MHz, 64-bit for 38.4GB/s, dual channel for 76.8GB/s

That's all fine, Ray, but (OK, gross overexaggeration here) let's say I design a 40-channel nForce chipset with DDR.

2.1 x 40 = 84GB/s. Great, now I need to get it to the CPU. If I choke it down to 2.1GB/s it does me no good. I am pretty sure the memory controller and the CPU bus need to run in sync, don't they?

By the way, with your 150MHz FSB, do you know you are running your AGP and PCI buses out of spec? Perhaps you should choose a 166MHz FSB?

Video editing?? Ha, I don't even own a camera!
October 23, 2001 5:49:19 AM

I believe he was referring to the first reply of this thread.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 5:52:02 AM

Quote:
The MCH (Memory Controller Hub) would be responsible for taking the 128-bits of data at a rate of approximately 4.8GHz and sending that to the processor across its FSB. This is not unreasonable.

Well, that's quite a leap in technology in and of itself, not to mention that it has to then send it to the CPU at 64 bits and 9.6GHz.

Video editing?? Ha, I don't even own a camera!
October 23, 2001 5:53:32 AM

Ever hear of EMT?

Video editing?? Ha, I don't even own a camera!
October 23, 2001 5:53:58 AM

"If I choke it down to 2.1 gigs it does me no good."

You are right. This would do no good. This is the main problem with nForce. Our MCH would have to be capable of taking in 128 bits of data at 4.8GHz and feeding it to the CPU in 64-bit chunks at 9.6GHz (for the current core, which has a 64-bit FSB).


"I am pretty sure the memory controller and the cpu bus need to run in sync don't they?"

Not in sync; you just need to make sure the input and output bandwidths are equal in the MCH.
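That balance condition is simple to verify: bus width times rate has to match on both sides of the MCH. A sketch with the figures from this thread:

```python
# Input and output bandwidth of the MCH must match, even though the clocks differ:
# 128 bits at 4.8GHz in, 64 bits at 9.6GHz out.
def gbs(bus_bits, rate_ghz):
    return bus_bits / 8 * rate_ghz

mem_side = gbs(128, 4.8)   # memory side: dual 64-bit ODR channels
cpu_side = gbs(64, 9.6)    # CPU side: 64-bit FSB
print(mem_side, cpu_side)  # both 76.8 GB/s
```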


"By the way, with your 150 FSB do you know you are running your agp and pci busses out of spec? Perhaps you should choose 166 fsb?"

A simple fractional divider would work here. PC1200 RDRAM is already on the roadmaps. This would work with a 150MHz external clock.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 5:59:14 AM

"Well, that's quite a leap in technology"

The current MCH takes in 32-bits of data at a frequency of about 800MHz. 4.8GHz is only a factor of 6. This is not out of reach, especially if Intel licenses Rambus's new ODR technology.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
(Edited by Raystonn on 10/22/01 11:06 PM.)
October 23, 2001 6:01:11 AM

LOL, ok you build it and I will buy it :) 

P.S. That is not the problem with the nForce, as the additional bandwidth is available to the integrated graphics, let us not forget. It is still by far the best integrated solution on the market.

Video editing?? Ha, I don't even own a camera!
October 23, 2001 6:04:30 AM

True, but you must admit that currently the best integrated solution is on par with some of the worst AGP card solutions.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
October 23, 2001 6:05:21 AM

Does Rambus have anything to do with the design of the MCH? Just curious. And BTW, it is a factor of 6, not 8.

Video editing?? Ha, I don't even own a camera!