Do you have proof that the latency multiplies with the number of modules? I know it goes up, but I have a very hard time believing that it's multiplied.
It is simple mathematics and part of the RDRAM spec. RDRAM is a serial memory technology, so any signal to the higher memory addresses has to traverse the other modules, and the spec requires pre-negotiation that limits EVERY access to the speed of the slowest possible one.
<A HREF="http://www.realworldtech.com/page.cfm?section=news&AID=RWT110799000000&p=3" target="_new">To put the "random access" back into a DRDRAM-based memory system, Rambus Inc. designed into each memory chip the capability of delaying the output of read data onto the channel beyond the normal 20 ns page read access latency by a programmed amount of 2.5, 5.0, 7.5, or 10.0 ns using the TPARM control register. When a DRDRAM-based computer system is powered-on or reset, the processor and memory controller ASIC perform an elaborate initialization ritual for each DRDRAM in the system. As part of this effort the read round trip delay for each memory device is measured and the longest delay is determined. Then the processor and/or ASIC attempt to equalize the round trip read access time for all devices by programming extra read delays into DRDRAMs closest to the ASIC. The net result is all DRDRAM devices appear as equally slow as the farthest device.</A>
Please note that the new 32-bit RIMMs do not solve this problem; they are simply double-sided 16-bit RIMMs.
--------------------------------------------------------------------------------
No, they're not. I have double-sided 16-bit RIMMs sitting in my computer right now.
Current double-sided RIMMs do not function like current double-sided SDRAM (DDR or SDR) DIMMs. Double-sided DIMMs are usually dual-bank; current double-sided RIMMs are still serial. The upcoming 32-bit RIMMs more closely match current double-sided DIMM technology: you get a full bank from each side, which in the case of RIMM technology means 16 bits on each side (16 + 16 = 32). <A HREF="http://www.anandtech.com/showdoc.html?i=1590&p=5" target="_new">The board layout is relatively simple, a single 16-bit RDRAM channel is routed to one side of the RIMM slots while another channel is routed to the opposite side of the slots.</A>
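As an aside, a toy model makes the difference clear. All of the numbers here are my own assumptions for illustration (the chip counts per module are not spec figures); the only point is that one serial chain grows with every chip, while two independent channels split the worst-case path.

```python
# Toy topology model (assumed chip counts, for illustration only).

def worst_chain_length(channels):
    """Longest daisy-chain a read must traverse, in chips per channel."""
    return max(len(chain) for chain in channels)

chips_per_module = 16  # illustrative, not a spec figure

# Double-sided 16-bit RIMM: still ONE serial chain through every chip.
double_sided_16bit = [list(range(chips_per_module))]
# 32-bit RIMM: two independent 16-bit chains, one per side.
rimm_32bit = [list(range(chips_per_module // 2)) for _ in range(2)]

print(worst_chain_length(double_sided_16bit))  # 16
print(worst_chain_length(rimm_32bit))          # 8
```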
I doubt it will, as simply adding more pins shouldn't affect the internal workings of the chips. In fact, if it gets spread out as a result, it's possible that they would run cooler. But you could be right, we'll have to wait and see.
Yeah, we will have to wait and see. The issue that causes more heat is the extra power needed to drive higher transistor densities, which is exactly what higher-density RDRAM chips require. Unless the RDRAM process is drastically scaled down (SOI and/or 0.09 micron, etc.), the heat generated by the higher-density chips (a RIMM is made up of chips that are in and of themselves modules) will require active cooling.
Hmm...2.7GB/s vs. 3.2GB/s. Nope, RDRAM has more bandwidth. Sorry.
<A HREF="http://www6.tomshardware.com/mainboard/00q1/000315/rambus-01.html" target="_new">See Table</A>
Again, do the math: PC2700 DDR333 has 2.667GB/s of bandwidth per stick (333.333MHz * 8 bytes = 2666.667MB/s). PC800 RDRAM has 1.6GB/s per stick (800MHz * 2 bytes = 1600MB/s). DDR SDRAM has more bandwidth. OTOH, as I stated above, RDRAM chipsets (i850, i860) double that bandwidth by combining two channels into one (1.6 * 2 = 3.2GB/s). The extra bandwidth is due to the chipset, not the memory technology used. The Intel E7500 chipset does the same thing with older PC1600 DDR200 SDRAM. So, DDR <b>technology</b> currently offers 66% more bandwidth (2.667GB/s vs. 1.6GB/s).
<A HREF="http://www.anandtech.com/chipsets/showdoc.html?i=1588&p=3" target="_new">The memory controller in the E7500 is validated for use with both DDR200 and DDR266 SDRAM however the bus will only operate at 100MHz (DDR200 speeds). This means that although you can use DDR266 SDRAM in it, your memory will always run at DDR200 speeds. Intel's reasoning behind this that dual DDR200 channels yield a theoretical 3.2GB/s of bandwidth to main memory which is perfectly matched up to the 3.2GB/s FSB. As we've seen in the past (take the KT133A chipset for example), a synchronized FSB and memory bus generally yields lower latency CPU/memory accesses than an asynchronous setup. It is very clear however that when Intel does move to a 133MHz (533MHz quad-pumped) FSB, a future successor to the E7500 chipset will support DDR266 SDRAM.</A>
It would be much easier and more economical to simply introduce dual-channel using 32-bit modules
For Intel, Rambus and Samsung, maybe. But for consumers, when 32-bit PC1066 * 2 (4.2GB/s) modules finally ship, how much will they cost? We can't even buy PC1066 yet, and if Rambus tradition holds, it will cost an arm and a leg. Current DDR333 would already provide 25% greater bandwidth in comparable dual-channel configurations (5.3GB/s), and DDRII will also probably be available in that timeframe.
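Same arithmetic for those two configurations, reusing the peak_bandwidth_mb() sketch above (theoretical peaks only):

```python
pc1066_x2   = 2 * peak_bandwidth_mb(1066, 2)     # dual 16-bit PC1066 -> ~4266 MB/s (~4.2GB/s)
dual_ddr333 = 2 * peak_bandwidth_mb(333.333, 8)  # dual DDR333 -> ~5333 MB/s (~5.3GB/s)
print(dual_ddr333 / pc1066_x2)                   # ~1.25 -> 25% greater bandwidth
```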
<A HREF="http://www.ee.umd.edu/~blj/papers/memwall2000.pdf" target="_new">http://www.ee.umd.edu/~blj/papers/memwall2000.pdf</A>
So, I stand by my statement: it is nearly impossible to tell which memory technology will be at the forefront in two years' time.
I thought a thought, but the thought I thought wasn't the thought I thought I had thought.