Data or address bus?

dzenn

Distinguished
Oct 12, 2011
11
0
18,510
So after the data has been processed in the CPU, where does it go to update the memory about the changes?
The data bus, the address bus, or neither of the two?

Thank you! :hello:
 

mathew7

Distinguished
Jun 3, 2011
295
0
18,860
It is said that 50% of a problem is solved by asking the right question.
So to make your question clearer: are you interested in what goes on outside the CPU die (the "visible" CPU pins) or internally? In the latter case the answer is more complex and depends on the CPU.
For the former case, current-gen x86 memory (or should I say DDR, DDR2, DDR3) uses separate address and data lines, but the address lines themselves are shared: the same pins carry first the row address and then the column address, with the strobe lines (RAS/CAS) telling the memory which one is currently on the bus. The data then travels on its own set of data lines, in both directions.
So in short, here is what happens (including reading):
1. CPU computes the address
2. CPU queries the address from cache
3. cache does not find the data, so it queries the memory controller
------ going outside the CPU die ------
4. memory controller puts the row address on the address lines (and asserts the Row Address Strobe line)
5. after tRCD, memory controller switches to the column address (along with the Column Address Strobe line)
6. after the CAS latency, the memory returns the data burst, which the memory controller sends directly to the cache (step 8)
7. some closing actions (also governed by the memory timings), done in parallel with the next steps
------ inside the CPU die again ------
8. memory controller gives the data to the cache (during step 6)
9. cache gives the data to the CPU
10. CPU processes the data (thus initiating a write)
11. CPU gives the new data to the cache
12. cache initiates a memory controller write
------ outside the CPU die again ------
13.-14. same as 4.-5.
15. after the write latency, the memory controller starts sending the data burst to memory
16. again some closing operations
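The command/timing sequence above can be sketched in code. This is only a toy model of the idea, not of any real controller; the class name, the timing values, and the array sizes are all made up for illustration:

```python
# Toy model of the DRAM command sequence described in steps 4-16.
# Timing parameters (tRCD, CAS latency) are illustrative, not real values.

class ToyDram:
    def __init__(self, rows=4, cols=8, trcd=2, cl=2):
        self.mem = [[0] * cols for _ in range(rows)]
        self.trcd = trcd          # ACTIVATE -> column-command delay (steps 4-5)
        self.cl = cl              # column read -> data delay (step 6)
        self.open_row = None
        self.clock = 0

    def activate(self, row):      # step 4: row address + RAS
        self.clock += self.trcd   # must wait tRCD before a column command
        self.open_row = row

    def read(self, col):          # steps 5-6: column address + CAS, then data
        assert self.open_row is not None, "row must be activated first"
        self.clock += self.cl     # CAS latency before the data burst appears
        return self.mem[self.open_row][col]

    def write(self, col, value):  # steps 13-15: same addressing, then data burst
        assert self.open_row is not None, "row must be activated first"
        self.mem[self.open_row][col] = value

    def precharge(self):          # steps 7/16: close the row
        self.open_row = None

# A read-modify-write of one location, mirroring the full sequence:
dram = ToyDram()
dram.activate(1)
value = dram.read(3)            # old data travels in to the cache/CPU
dram.precharge()
dram.activate(1)
dram.write(3, value + 42)       # CPU's new data goes back out
dram.precharge()
```

Note how the same `activate`/column-command pair is reused for both the read and the write; only the direction of the data transfer differs, which is exactly why the write steps 13-14 are "same as 4.-5." above.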

This is a simplified picture. In reality many of the steps are not actually performed, because of the cache policy. What I described is a write-through cache, where any write to the cache triggers a memory write (so cache and memory are always in sync).
Optimizations (which depend on how the cache is organised):
If the data is already in cache, steps 3-8 are skipped.
If the data is not in cache and fetching it would require evicting some other data (because of how the cache is organised), steps 12-16 are executed for the older data between steps 3 and 4.
If the cache is write-back, steps 12-16 are not executed immediately; they are delayed until the cache slot needs to be reused.
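The write-through vs write-back difference can be shown with a toy one-slot "cache" in front of a dict standing in for memory. All names here are invented for the sketch:

```python
# Sketch of write-through vs write-back, using a single-slot cache
# in front of a dict that plays the role of main memory.

class OneSlotCache:
    def __init__(self, memory, write_back=False):
        self.memory = memory          # backing store (steps 4-8 / 13-16)
        self.write_back = write_back
        self.addr = None              # address currently cached
        self.value = None
        self.dirty = False

    def _evict(self):
        if self.write_back and self.dirty:
            self.memory[self.addr] = self.value   # the delayed steps 12-16
        self.addr, self.value, self.dirty = None, None, False

    def read(self, addr):
        if self.addr != addr:         # miss: steps 3-8
            self._evict()             # older dirty data is written out first
            self.addr, self.value = addr, self.memory[addr]
        return self.value

    def write(self, addr, value):
        if self.addr != addr:
            self._evict()
            self.addr = addr
        self.value, self.dirty = value, True
        if not self.write_back:       # write-through: steps 12-16 every time
            self.memory[addr] = value
            self.dirty = False

mem = {0: 10, 1: 20}
wt = OneSlotCache(mem, write_back=False)
wt.write(0, 99)                 # memory sees 99 immediately
wb = OneSlotCache(mem, write_back=True)
wb.write(1, 77)                 # memory still holds 20 until eviction
```

After this runs, `mem[0]` is already updated but `mem[1]` still holds the stale value; only a later eviction (e.g. `wb.read(0)`) pushes 77 out to memory.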

Do note that write-back means memory and cache are not always in sync. With multi-core/multi-CPU systems, there are cache-coherence mechanisms to ensure that a second CPU always gets the latest data, either by forcing the first one to write it back or by retrieving it through cache-to-cache transfer (modern coherence protocols do support the latter).
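The coherence idea can be sketched with a simple invalidate-on-write scheme: when one cache writes, the others drop their stale copies. Real CPUs use protocols like MESI; this toy version (invented names, write-through for simplicity) only shows the flavour:

```python
# Minimal invalidate-on-write sketch of cache coherence.
# Every cache "snoops" writes made by its peers and drops stale lines.

class SnoopyCache:
    def __init__(self, memory, peers):
        self.memory = memory
        self.peers = peers        # all caches on the same "bus" (incl. self)
        self.lines = {}           # addr -> cached value

    def read(self, addr):
        if addr not in self.lines:            # miss: fetch from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        for peer in self.peers:               # snoop: invalidate stale copies
            peer.lines.pop(addr, None)
        self.lines[addr] = value
        self.memory[addr] = value             # write-through, to keep it simple

mem = {0: 1}
caches = []
c1 = SnoopyCache(mem, caches)
c2 = SnoopyCache(mem, caches)
caches.extend([c1, c2])

c1.read(0)
c2.read(0)        # both caches now hold address 0
c1.write(0, 5)    # c2's copy is invalidated, so its next read refetches
```

A real write-back protocol would instead mark c1's line as the single valid owner and either force a write-back or forward the line cache-to-cache when c2 asks for it.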

Hope it answers your question. I tried to be as clear as possible.