RAM Wars: Return of the JEDEC

Introduction

This year will bring a radical change in the kind of memory you buy and how you buy it. The death knell has begun to sound for SDRAM, while DDR SDRAM has become a standard memory device. A bevy of new memory capabilities, such as dual-channel DDR, will make life that much more interesting.

On the supplier side, the ramifications of Intel's recently-announced divorce from Rambus and its willingness to work with the Joint Electron Device Engineering Council (JEDEC), the industry standards body, are beginning to bear fruit. For gaming and other performance-oriented applications, PC performance now hinges on the RAM in the DIMM slot nearly as much as on the CPU and graphics processor. Meanwhile, Rambus is suddenly out of the picture.

In this article, we will look at the role that RAM plays in relation to the CPU, how RAM works, and what DDR, DDR400, and DDR2 are all about.

Year   SDRAM   DDR    RDRAM   EDO   DDR2
2002   55%     39%    5%      1%    -
2003   13%     81%    3%      -     3%
2004    8%     83%    2%      -     9%
2005    5%     58%    2%      -     35%

Marketshare of PC RAM types. (Source: iSuppli)

DRAM Basics

The principal means of evaluating memory performance is latency, measured as the time it takes the CPU to fetch data from memory. RAM is, in fact, only the last stop in a hierarchy of memory tiers (the L1 and L2 caches, then main memory) through which data must pass on its way to and from the CPU. Waxing philosophical, Dean McCarron, an analyst for Mercury Research in Cave Creek, AZ, noted: "By the time the processor goes out to the main memory, a bunch of bad things can happen."

If the data the processor needs is not found in the L1 or L2 cache, the request falls through to main memory, and the processor sits idle while it waits; the shorter that wait, the lower the latency. To see how latency affects performance, tighten the memory timings (such as CAS latency) in your PC's BIOS and compare benchmark results before and after.
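The cost of falling out of the caches can be seen even from a high-level language. The sketch below (an illustration, not a benchmark) sums the same million elements twice: once in sequential order, which keeps neighboring data hot in cache, and once in a scattered order that defeats the cache and forces more trips toward main memory. On most machines the scattered pass takes noticeably longer, even though the arithmetic is identical.

```python
import time

def time_access(data, order):
    """Sum the same elements of data, visited in the given order."""
    start = time.perf_counter()
    total = sum(data[i] for i in order)
    return total, time.perf_counter() - start

n = 1_000_000
data = list(range(n))

sequential = list(range(n))                    # cache-friendly: neighbors reused
scattered = [(i * 4099) % n for i in range(n)] # same indices, cache-hostile order
                                               # (4099 is prime, so this is a permutation)

total_seq, t_seq = time_access(data, sequential)
total_sc, t_sc = time_access(data, scattered)

print(f"sequential: {t_seq:.3f}s  scattered: {t_sc:.3f}s")
```

Both passes touch exactly the same elements, so the sums match; only the visit order, and therefore the cache behavior, differs.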

During the last few years, efforts to improve memory performance and latency have largely focused on the link between processor and chipset (the front-side bus) and on the memory bus between the memory and the chipset, since either bus can become a bottleneck. A 533 MHz front-side bus on a Pentium 4 talking to PC-133 (133 MHz) SDRAM, for example, represents a significant gap between the front-side bus speed and the RAM's bus speed.
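That gap can be made concrete with a quick peak-bandwidth calculation. This is a sketch assuming the 64-bit data paths typical of both buses; the Pentium 4's 533 MHz figure is a quad-pumped 133 MHz clock, i.e. 533 million transfers per second, while PC-133 SDRAM transfers once per clock.

```python
def peak_bandwidth_mb_s(transfers_per_us, bus_width_bits):
    """Peak bandwidth in MB/s: transfer rate (MT/s) times bus width in bytes."""
    return transfers_per_us * (bus_width_bits // 8)

fsb = peak_bandwidth_mb_s(533, 64)    # P4 front-side bus: 533 MT/s, 64-bit
sdram = peak_bandwidth_mb_s(133, 64)  # PC-133 SDRAM: 133 MT/s, 64-bit

print(fsb, sdram)  # roughly 4264 MB/s vs 1064 MB/s
```

The front-side bus can move roughly four times what PC-133 can deliver, which is exactly the mismatch dual-channel and faster DDR grades set out to close.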

It is possible to configure a system with a higher memory bus speed but worse latency, and it would perform worse than a slower bus with better latency. As a rule of thumb, the front-side bus frequency should be no less than one-fifth of the processor frequency: up to that point the caches can keep the processor fed, beyond it the front-side bus becomes saturated. Using Intel's high-end chipsets as an example, Intel introduces new bus speeds once its processors climb past the 5x ratio. But with Celerons, which Intel sells on megahertz, the bus-to-processor ratio is reportedly as much as 13 to one.
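The rule of thumb reduces to simple division. The parts below are illustrative picks consistent with the text, not an exhaustive survey: a 2.66 GHz Pentium 4 on a 533 MHz bus sits right at the 5x edge, while a 1.3 GHz Celeron on a 100 MHz bus shows the 13-to-one starvation the article mentions.

```python
def bus_ratio(cpu_mhz, fsb_mhz):
    """Processor-to-front-side-bus frequency ratio; above ~5, the FSB saturates."""
    return cpu_mhz / fsb_mhz

p4 = bus_ratio(2666, 533)       # Pentium 4 2.66 GHz, 533 MHz FSB: ~5.0
celeron = bus_ratio(1300, 100)  # Celeron 1.3 GHz, 100 MHz FSB: 13.0

print(f"P4 ratio: {p4:.1f}  Celeron ratio: {celeron:.1f}")
```

By this yardstick the Pentium 4 is balanced, while the Celeron leaves the processor waiting on its bus most of the time regardless of its headline megahertz.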

Irrespective of technical specifications, some memory modules do not perform to spec. At the billion-dollar fabs where memory is made, RAM devices that fail testing at 333 MHz might be sold instead as 266 MHz devices. Among vendors and distributors around the world, memory is also often mislabeled: a device rated at 266 MHz may in fact run only at a slower clock speed. If several bits go bad, stability will suffer, especially under demanding conditions such as overclocking.