Predicting the system performance impact of a new type of DRAM is a tricky business. It is impossible to do this merely by evaluating how much faster the DRAM is by itself. If you can get your hands on the hardware, testing is the best approach. But if that cannot be done for some reason, modeling is the only option.
Because it is the newest JEDEC standard for DRAM, and because it offers excellent latency, I have chosen to do some performance modeling with ESDRAM (with all features enabled). The results are summarized at the end of this article.
Very soon, Tom & I expect to be able to do some hands-on testing of ESDRAM to evaluate performance. Because the BX chip set is not optimized for ESDRAM, we expect only a small performance impact in most cases, but we will also be looking for better reliability at higher bus speeds such as 133 MHz.
As ESDRAM optimized chip sets and graphics controllers become available, we will offer more test results. Meanwhile, more questions have popped up that ought to be addressed.
What determines the performance impact of low-latency DRAM?
DRAM bus utilization is the primary factor. If DRAM is not used very heavily, the performance difference may be very small. If the DRAM bus is highly saturated with activity, the performance impact will be much more profound.
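The relationship described above can be sketched as a simple Amdahl-style model: only the fraction of time the system spends waiting on the DRAM bus benefits from faster memory. The numbers below (utilization levels, latency ratio) are illustrative assumptions, not measurements of any real system.

```python
def system_speedup(bus_utilization, latency_ratio):
    """Estimate overall speedup from faster DRAM.

    bus_utilization: fraction of total time spent on DRAM accesses (0..1)
    latency_ratio:   new DRAM latency / old DRAM latency (< 1 means faster)
    Only the memory-bound fraction of time shrinks; the rest is unchanged.
    """
    return 1.0 / ((1.0 - bus_utilization) + bus_utilization * latency_ratio)

# Assumed example: the new DRAM cuts average latency to 70% of the old part.
# Lightly loaded bus (business apps): barely noticeable gain.
light = system_speedup(0.05, 0.7)   # ~1.5% faster
# Heavily saturated bus (multimedia, games): a much larger gain.
heavy = system_speedup(0.60, 0.7)   # ~22% faster
print(f"light load: {light:.3f}x, heavy load: {heavy:.3f}x")
```

The same latency improvement yields a very different system-level result depending on how busy the memory bus is, which is the point made above.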
Is the DRAM bus usually saturated, or idle?
When running normal business applications, bus utilization is usually very low. This is because the cache is doing its job. But some applications (e.g., multimedia and games) tend to beat up the cache and drive memory utilization much higher. More on this later.
Doesn't the cache take care of the latency problem altogether?
It helps a lot, but also hurts a little. Caches reduce external bus activity and reduce the average latency as seen by the CPU. But behind the cache, memory accesses become much more random in nature. The result is a very high DRAM page miss rate. Page misses produce the worst possible latency from DRAM. This causes the average DRAM latency to be even worse than in systems without an L2 cache. Fortunately, the cache helps to offset much of this problem, but there is still room for improvement.
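The effect described above can be shown with a weighted-average latency calculation. The cycle counts and page hit rates below are assumed round numbers chosen for illustration; real values depend on the DRAM part and the access pattern.

```python
def avg_dram_latency(page_hit_rate, hit_cycles, miss_cycles):
    """Average per-access DRAM latency, weighted by page hit rate."""
    return page_hit_rate * hit_cycles + (1.0 - page_hit_rate) * miss_cycles

# Assumed latencies: a page hit costs 3 bus cycles, a page miss costs 9.
# Without an L2 cache, accesses are more sequential: assume 60% page hits.
no_cache = avg_dram_latency(0.60, 3, 9)   # 5.4 cycles per access
# Behind an L2 cache, accesses look random: assume only 20% page hits.
behind_cache = avg_dram_latency(0.20, 3, 9)   # 7.8 cycles per access
print(f"no cache: {no_cache} cycles, behind cache: {behind_cache} cycles")
```

Even though the cache cuts the *number* of DRAM accesses dramatically, each access that does reach DRAM is more likely to be a page miss, so the average latency per DRAM access goes up.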