Guest
Here's a news article that I thought you guys would be interested in.
http://www.ebnews.com/story/OEG20020226S0040
I assure you that RDRAM is the present and future memory standard for Intel desktop systems, and the future memory standard for Intel servers and workstations once 1GB modules appear. Further, RDRAM chipsets will appear when they are needed.
The next drawback to the RAMBUS channel will be apparent after a good look at the previous diagram. Each SDRAM in an SDRAM system is no more than a few inches along a straight path to the chipset, so commands and data don't have very far to travel to reach their destination. The RAMBUS channel, on the other hand, gets longer as more RDRAMs are added to it, which means that the time commands and data spend traveling to and from the outermost device can get pretty high. What makes this even worse is that the read latency of the entire system can only be as fast as that of the farthest (and, by extension, slowest) RDRAM. Here's why:
Remember how, way back at the beginning of the first edition of this RAM Guide, we said that, to the CPU, main memory looks just like one single-file line of 1-byte locations? When the CPU asks for data from a series of locations, it expects that data to come to it in the order that it asked for it. It doesn't care where that data lives, or how long it takes to get from one place to the other--it just cares that it sent out a series of requests for x, y, and z, one right after the other, and it expects x, y, and z to be fed to it in that exact order, one right after the other. Well, if x, y, and z each live in different RDRAM chips, where, say, y and z live close to the chipset but x lives way out there in the last chip on the outermost RIMM, then we've got problems. The packet that's farthest from the chipset, x, is going to take quite a bit longer than y and z to reach the chipset, but since x has to be there first and all three packets have to file in one right after the other, y and z will have to wait on x before they can go in.
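The head-of-line blocking described above can be sketched in a few lines of Python. This is an illustrative model only, not real RDRAM timing: the per-hop flight time and the 20ns core access time are assumed round numbers, and `raw_response_ns` and `in_order_arrival_ns` are hypothetical helpers invented for this sketch.

```python
# Illustrative sketch (assumed numbers, not a real RDRAM datasheet):
# a chip's raw response time grows with its distance from the chipset,
# and in-order delivery means early results get stuck behind late ones.

CHANNEL_DELAY_PER_CHIP_NS = 1.25   # assumed per-hop flight time (one way)
CORE_ACCESS_NS = 20.0              # assumed chip access time

def raw_response_ns(chip_index: int) -> float:
    """Round-trip time for a lone read to the chip at this position."""
    return CORE_ACCESS_NS + 2 * chip_index * CHANNEL_DELAY_PER_CHIP_NS

def in_order_arrival_ns(chip_indices: list[int]) -> list[float]:
    """Times at which results can be handed to the CPU, in request order."""
    arrivals = []
    ready = 0.0
    for chip in chip_indices:
        # A result can't be delivered before the one requested ahead of
        # it, even if it physically reached the chipset earlier.
        ready = max(ready, raw_response_ns(chip))
        arrivals.append(ready)
    return arrivals

# x lives on the farthest chip (index 31); y and z sit near the chipset.
print(in_order_arrival_ns([31, 0, 1]))   # y and z are stuck behind x
```

Even though the chips holding y and z could answer in roughly 20ns on their own, all three results arrive at the CPU no earlier than x's 97.5ns round trip.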
Because of the need to delay the output of read requests so that reads from different RDRAM chips arrive at the chipset in the right order, a RAMBUS system has to go through an elaborate initialization ritual on boot-up to determine the amount of delay that needs to be inserted into each RDRAM. The read delay value for each individual RDRAM chip is programmed via the control pins into one of those control registers that we met in the previous section. These read delays effectively slow down the entire system so that each device has the same latency as the outermost RDRAM. As you add more devices to a RAMBUS system, the entire system has higher and higher read latency. So, while individual RDRAM chips might have a read latency (access time) of 20ns, which is about the same read latency as some SDRAMs, once you stick them in a system with three full RIMMs the overall system latency (the total amount of time from when the CPU sends out the read command until the data arrives back at it) will be either slightly better or significantly worse than the system latency for an SDRAM system, depending on a myriad of factors. (More on these factors in a second.)
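The boot-time leveling idea above can be sketched as follows. The numbers are assumptions carried over for illustration, and `program_read_delays` is a hypothetical helper, not the actual initialization protocol: it just computes the extra delay each chip would need so that every chip's effective latency matches the farthest one.

```python
# Sketch of boot-time read-delay leveling (assumed numbers): program
# each chip's read-delay register so all chips respond with the same
# effective latency as the farthest (slowest) chip on the channel.

CHANNEL_DELAY_PER_CHIP_NS = 1.25   # assumed per-hop flight time (one way)
CORE_ACCESS_NS = 20.0              # assumed chip access time

def raw_latency_ns(chip_index: int) -> float:
    """Round-trip latency of the chip at this position, undelayed."""
    return CORE_ACCESS_NS + 2 * chip_index * CHANNEL_DELAY_PER_CHIP_NS

def program_read_delays(num_chips: int) -> list[float]:
    """Extra delay per chip so every chip matches the outermost one."""
    worst = raw_latency_ns(num_chips - 1)
    return [worst - raw_latency_ns(i) for i in range(num_chips)]

print(program_read_delays(4))   # nearest chip pads the most, farthest adds none
```

Note the consequence the text describes: adding chips raises `worst`, so every chip on the channel, including the one right next to the chipset, gets slower.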
Further aggravating the read latency situation is the fact that RAMBUS doesn't support critical-word-first bursting. When the CPU asks for 8 bytes of data from a conventional SDRAM, the memory system sends back 16 bytes of data under the presumption that the CPU will probably need those extra 8 bytes shortly. Nevertheless, the 8 bytes that were specifically asked for--the critical word--arrive at the CPU first, with the other freebie bytes coming next. RDRAM doesn't do this. It just sends you a whole 16-byte train of data, and if the 8 bytes you asked for are at the end of that train, then you'll just have to wait until they get there.
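Here's a toy comparison of when the CPU actually gets the bytes it asked for under the two schemes. The beat width and clock period are assumed values for illustration, and both functions are hypothetical models, not real controller behavior.

```python
# Toy model (assumed timing): when does the CPU receive its 8 critical
# bytes out of a 16-byte burst, with and without critical-word-first?

BYTES_PER_BEAT = 2    # assumed bytes transferred per bus clock
NS_PER_BEAT = 2.5     # assumed bus clock period
REQUEST_BYTES = 8     # the bytes the CPU actually asked for

def critical_word_first(offset_in_burst: int) -> float:
    """SDRAM-style: the critical bytes are reordered to the front,
    so the offset within the burst doesn't matter."""
    return (REQUEST_BYTES // BYTES_PER_BEAT) * NS_PER_BEAT

def fixed_order_burst(offset_in_burst: int) -> float:
    """RDRAM-style: the burst always starts at the beginning, so the
    CPU waits through everything ahead of its bytes."""
    beats = (offset_in_burst + REQUEST_BYTES) // BYTES_PER_BEAT
    return beats * NS_PER_BEAT

# Worst case: the requested 8 bytes sit at the end of the 16-byte train.
print(critical_word_first(8))   # critical word up front
print(fixed_order_burst(8))     # must wait through the first half too
```

In this toy model the fixed-order burst makes the CPU wait twice as long (20ns vs. 10ns) whenever its bytes land in the back half of the train.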
Finally, since the bus is so long and passes through so many devices, the capacitance added by the loads of all the attached devices significantly increases bus signal propagation time. So again, the more devices you stick on the RAMBUS channel, the worse the latency gets. However, RAMBUS' signaling layer, high-quality packaging, and strict specifications for producing RIMMs are aimed at reducing these kinds of unwanted electrical effects.