I think that going forward, latency will be the crucial issue until clock speeds increase. When QDR and QDRII come out, slated for sometime around the end of Q3 or in Q4 this year, latency will matter even more if clock speeds stay the same.
Because DDR SDRAM and RDRAM are already hindered by the accumulation of latency steps needed just to execute a read/write over a DDR signal, a quad data rate interface will suffer even greater latency penalties.
What I mean is that with DDR, one missed cycle delays 2 bits per signal instead of 1, and in the case of QDR that is 4 bits per signal delayed. (Add additional channels and start multiplying.) So at the same clock speed, QDR will have more potential latency than DDR, but twice the bandwidth. For the price of an increase in potential overall latency, you get twice the bandwidth. Clock speed, of course, covers that price somewhat: the more clocks per second, the smaller the impact of a missed address, etc.
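If you want to see that tradeoff in numbers, here's a quick sketch (Python; the 166MHz clock and 64-bit bus are just example figures I'm assuming, not tied to any particular module):

```python
# At the same clock, each extra transfer per cycle doubles both the peak
# bandwidth and the bits at stake per signal when a cycle is missed.
CLOCK_HZ = 166_000_000   # assumed example clock
BUS_BITS = 64            # assumed example bus width (one channel)

for name, transfers_per_clock in [("SDR", 1), ("DDR", 2), ("QDR", 4)]:
    bits_per_signal_lost = transfers_per_clock  # bits delayed per signal per missed cycle
    peak_bytes_per_sec = CLOCK_HZ * transfers_per_clock * BUS_BITS // 8
    print(f"{name}: {bits_per_signal_lost} bit(s)/signal per missed cycle, "
          f"{peak_bytes_per_sec / 1e9:.2f} GB/s peak")
```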
I agree with you that, at current memory speeds, it would be beneficial to use DDR333/PC2700 while on a budget.
One thought, however... Increasing the speed of the memory is more beneficial than dropping the timing settings down to the most aggressive levels (under current conditions). Even if you could save 13 cycles of latency, that would be the difference between...
1 - tRC Timing: 3, 4, 5, 6, 7, 8, 9 cycles
2 - tRP Timing: 3, 2, 1, 4 cycles
3 - tRAS Timing: 2, 3, 4, 5, 6, 7, 8, 9 cycles
4 - CAS Latency: 2, 2.5, 3 cycles
5 - tRCD Timing: 1, 2, 3, 4 cycles
a 9-3-5-3-3 setting and a 3-1-2-2-2 setting. That is a difference of 13 cycles. At a 166MHz clock, 13 cycles works out to 26 bits delayed per signal and 0.0000000783 seconds of time. Paid once every second, running full time 24-7-365, that would add up to about 2.47 seconds over the course of a year. Increase the clock to 200MHz with DDR400 and those same 13 cycles cost only about 2.05 seconds per year. Again, it is minuscule, but the difference in raw speed is obvious: at 166MHz, with zero latency, there would be 21,200,000,000 bits transferred per second; at 200MHz, 25,600,000,000 bits per second. That is roughly 4,400,000,000 bits per second gained via the speed increase, versus 26 bits from the timing change. So you see, speed increases are more important than latency settings.
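Here is that back-of-the-envelope math in a few lines of Python, in case anyone wants to plug in their own numbers (the once-per-second penalty rate is the same rough assumption as above):

```python
# Yearly cost of a timing penalty vs. the raw bandwidth gain of a clock bump.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def yearly_cost(cycles, clock_hz):
    """Seconds lost per year if `cycles` of latency are paid once per second."""
    return (cycles / clock_hz) * SECONDS_PER_YEAR

print(yearly_cost(13, 166_000_000))  # ~2.47 s/year at a 166MHz clock (DDR333)
print(yearly_cost(13, 200_000_000))  # ~2.05 s/year at a 200MHz clock (DDR400)

# Peak transfer rates on a 64-bit DDR bus, in bits per second:
ddr333 = 166_000_000 * 2 * 64   # ~21.2 billion bits/s
ddr400 = 200_000_000 * 2 * 64   # 25.6 billion bits/s
print(ddr400 - ddr333)          # ~4.4 billion bits/s from the clock bump alone
```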
If it is within the budget of the person doing the upgrade, then DDR400 is a much better choice, even at bad timing settings (CL3), than DDR333 at CL2. But I totally agree with you that on a budget, dual channel DDR333 is a very good choice.
Going back to RDRAM...for the THGC...
RDRAM PC1066 using two 16-bit RIMMs will yield the same results<font color=red><b>*</b></font> as a single 32-bit module in terms of peak bandwidth. The RIMM 4200 only eliminates the need for two modules; it didn't actually make the channels any wider.
...so under 16 bit RIMMs...
1066MHz x 2 Bytes x 2 Channels = 4.2667GB/s using 2 RDRAM modules.
or under 32 bit RIMMs...
1066MHz x 4 Bytes (2x2)= 4.2667 GB/s using one RDRAM module.
<font color=red><b>* =</b></font> Well, the 32-bit RIMM with the one module should yield somewhat better real-world performance, as Crash mentioned, because the termination resistors are on the module rather than on the motherboard, etc. But for peak bandwidth, it will not make any difference between the two.
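For anyone who wants to check the peak bandwidth figures, the same arithmetic in Python (1066MHz is the effective data rate, rounded; the 4.2667 GB/s figures above use the exact 1066.67MT/s rate):

```python
# PC1066 RDRAM peak bandwidth: two 16-bit RIMMs on two channels vs.
# one 32-bit RIMM 4200. Module counts differ; peak numbers do not.
DATA_RATE = 1_066_000_000            # transfers per second (rounded)

two_16bit_rimms = DATA_RATE * 2 * 2  # 2 bytes/transfer x 2 channels
one_32bit_rimm = DATA_RATE * 4       # 4 bytes/transfer, one module
print(two_16bit_rimms / 1e9, one_32bit_rimm / 1e9)  # both ~4.26 GB/s
```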
<b>"Sometimes you can't hear me because I'm talking in parenthesis" - Steven Wright</b> :lol: