Synchronous Operation
Once it became apparent that bus speeds would need to run faster than 66 MHz, DRAM designers needed a way to overcome the significant latency issues that remained. A synchronous interface solved this problem and brought some additional advantages as well.
With an asynchronous interface, the processor must wait idly while the DRAM completes its internal operations, which typically take about 60 ns. With synchronous control, the DRAM latches information from the processor under control of the system clock. These latches store the addresses, data, and control signals, which allows the processor to handle other tasks. After a specific number of clock cycles the data becomes available and the processor can read it from the output lines.
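The latch-and-wait scheme can be sketched as a small simulation. Everything below (the class, the latency value, the addresses) is a hypothetical illustration, not taken from any actual SDRAM part: a read request is latched on one clock edge, and the data appears on the output lines a fixed number of cycles later, leaving the processor free in between.

```python
from collections import deque

# Illustrative sketch only -- names and the latency value are assumptions,
# not from a real SDRAM datasheet.
CAS_LATENCY = 3  # data appears this many clock cycles after the request

class SyncDram:
    def __init__(self, contents):
        self.contents = contents   # simulated memory cells: {address: data}
        self.pending = deque()     # latched requests: (ready_cycle, address)
        self.cycle = 0             # current system-clock cycle
        self.output = None         # output lines, valid only when a read completes

    def latch_read(self, addr):
        """Latch address/control on the current edge; the caller does not wait."""
        self.pending.append((self.cycle + CAS_LATENCY, addr))

    def clock_edge(self):
        """Advance one system-clock cycle; drive the output if a read is due."""
        self.cycle += 1
        self.output = None
        if self.pending and self.pending[0][0] == self.cycle:
            _, addr = self.pending.popleft()
            self.output = self.contents[addr]

dram = SyncDram({0x10: 0xAB, 0x20: 0xCD})
dram.latch_read(0x10)            # request latched; the "processor" moves on
results = []
for _ in range(CAS_LATENCY):
    dram.clock_edge()            # processor could do other work each cycle
    if dram.output is not None:
        results.append(dram.output)
print(results)
```

The point of the sketch is the division of labor: the only timing the requester has to honor is the clock edge, and the data arrives a known number of edges later.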
Another advantage of a synchronous interface is that the system clock is the only timing edge that needs to be provided to the DRAM. This eliminates the need for multiple timing strobes to be propagated. The inputs are simplified as well, since the control signals, addresses and data can all be latched in without the processor monitoring setup and hold timings. Similar benefits are realized for output operations as well.