Performance Impact of Rambus

Is Rambus Faster Than SDRAM?

In this Rambus analysis I will repeat the application modeling technique that I applied to SDRAM and ESDRAM in a previous article. Because Rambus is seen as a high-end technology, I have raised the CPU speeds a few notches; this time the range is 350 to 667 MHz. As before, 2D (biz apps), multimedia, and 3D loads are evaluated in both standard architecture platforms and UMA platforms.

The chart below categorizes and averages the results of 192 system simulations, run with Rambus and with standard SDRAM. The values in this chart represent the average performance impact that Rambus introduces to each platform and its computational loads. That impact is not always positive.

Of the 96 comparisons, only 34 showed an increase in performance, while 62 showed a decrease. The biggest performance advantage was demonstrated on processors and platforms aimed at the midrange and the low end.

A quick look at the average performance impact by CPU type below indicates that Rambus decreases benchmarkable performance by about 1% in standard architecture systems compared to SDRAM. However, the low-end UMA platform benefits from a 1-3% performance boost compared to SDRAM. This would be somewhat encouraging, except that Intel is not expected to use Rambus in its UMA systems anytime soon. If Intel can convince you that Rambus is better, they will want to use it as a hook to sell more high-end systems, not more low-end systems.
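For readers who want to see the bookkeeping behind these numbers, here is a minimal sketch of how the per-configuration impacts could be tallied and averaged. It is not my actual modeling tool, and the configurations and scores in it are invented placeholders purely to make the example run.

from statistics import mean

# (cpu_mhz, platform, workload) -> (sdram_score, rambus_score); placeholder data
results = {
    (350, "UMA", "2D"):              (100.0, 102.5),
    (500, "standard", "3D"):         (180.0, 178.2),
    (667, "standard", "multimedia"): (240.0, 238.1),
    # ... the full study covered 96 such configuration pairs
}

# Percent change that Rambus introduces relative to SDRAM for each configuration
impacts = {
    cfg: 100.0 * (rdram - sdram) / sdram
    for cfg, (sdram, rdram) in results.items()
}

faster = sum(1 for pct in impacts.values() if pct > 0)
slower = sum(1 for pct in impacts.values() if pct < 0)
print(f"Rambus faster in {faster} and slower in {slower} of {len(impacts)} configurations")

# Average impact per platform type, mirroring the charts above
for platform in ("standard", "UMA"):
    avg = mean(pct for (_, plat, _), pct in impacts.items() if plat == platform)
    print(f"{platform}: {avg:+.1f}% average impact")

The real numbers in the charts come from the simulations themselves; the sketch only shows how the comparisons are reduced to counts and averages.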

In these high-end systems, users pay hundreds of dollars for performance improvements of just a few percent. The unfortunate reality appears to be that Rambus will take some of that away, while probably driving the system cost up even higher.

This is a strange thing for a CPU vendor to do. Why would Intel deliberately promote a memory type that reduces CPU efficiency? I can't answer that, but I must point out that the same question applies to the 740. Why would Intel promote a graphics chip architecture that needlessly sacrifices CPU performance?

In the case of the 740, Intel potentially degrades CPU performance by 10% in order to save a few dollars in graphics DRAM. Then, in the case of Rambus, Intel reverses its position and asks us to pay a premium for DRAM, while still suffering a reduction in performance. The whole thing seems terribly screwed up.

It seems to me that users are willing to shell out a few extra dollars to ensure that they have sufficient graphics memory, but I don't think anyone wants to pay an excise tax on all of their main system memory unless there is a clear performance advantage. Doesn't this seem obvious? Does Intel see this? If so, what motive could they have for acting in this counter-intuitive manner?

I don't know if I can answer this question without sounding like a crackpot, so let's just stick with the facts. (BTW - have you seen the movie Conspiracy Theory? Just because you are paranoid, it doesn't mean they are not out to get you. As a matter of fact, it was the illustrious Andy Grove who, shortly before retiring, graced us with a book entitled "Only the Paranoid Survive". A prophetic warning?)

Now, back to the matter at hand...