CPU cache for cache

Haserath

Distinguished
Apr 13, 2010
I just had a thought about the cache for CPUs. If increasing the number of ways of a cache raises the chance of a cache miss, wouldn't it be better to have a cache that stores where the data could be in the main cache? This would allow a seamless flow if the processor could read from this pointer cache while reading from or writing to the actual cache.

It seems like a simple idea, so there must be more to this than what I'm thinking. This secondary cache wouldn't need to be very big, since it would just hold 'pointers' to where the data is in the cache. It could be a one-way cache to allow 100% accuracy, or, for the bigger caches such as L3 (which might need more of this pointer cache), a two- or four-way design would have a better chance of hitting than the big 16-way L3 cache in modern x86 processors.
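To make the idea concrete, here's a rough C sketch of what I'm imagining: a tiny table that remembers which way of a set last held a line, so that way can be checked first. All the sizes and names here are made up for illustration, not taken from any real CPU.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS   64   /* sets in the (hypothetical) main cache */
#define NUM_WAYS   16   /* 16-way, like a modern L3 */
#define BLOCK_BITS 6    /* 64-byte lines */

static uint64_t tags[NUM_SETS][NUM_WAYS];
static bool     valid[NUM_SETS][NUM_WAYS];
static uint8_t  predicted_way[NUM_SETS];   /* the "pointer cache" */

static unsigned set_of(uint64_t addr) { return (unsigned)((addr >> BLOCK_BITS) % NUM_SETS); }
static uint64_t tag_of(uint64_t addr) { return addr >> BLOCK_BITS; }

/* Probe the pointed-to way first; fall back to scanning every way.
 * Note a wrong pointer costs a probe plus the full search, and the
 * full tag check is still needed for correctness. */
static int lookup(uint64_t addr)
{
    unsigned set   = set_of(addr);
    uint64_t tag   = tag_of(addr);
    unsigned guess = predicted_way[set];

    if (valid[set][guess] && tags[set][guess] == tag)
        return (int)guess;                    /* fast hit via the pointer */

    for (unsigned w = 0; w < NUM_WAYS; w++) {
        if (valid[set][w] && tags[set][w] == tag) {
            predicted_way[set] = (uint8_t)w;  /* retrain the pointer */
            return (int)w;                    /* slow hit */
        }
    }
    return -1;                                /* miss */
}

int main(void)
{
    uint64_t addr = 0x1234c0;               /* arbitrary example address */
    tags[set_of(addr)][5]  = tag_of(addr);  /* pretend the line sits in way 5 */
    valid[set_of(addr)][5] = true;

    lookup(addr);                       /* slow path: trains the pointer */
    return lookup(addr) == 5 ? 0 : 1;   /* fast path: pointer hits way 5 */
}
```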

Could anyone explain what might be the problems with this?
 
The memory address itself already holds a pointer to where the data can be in the cache. The index bits in the middle of the address tell you which set of the cache the line can be in; from there the cache compares the tag bits (the high bits of the address) against the lines in that set to see whether the appropriate block is present. So there already is a pointer cache built into the addresses themselves.
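To make that concrete, here's a small C sketch of how the set index and tag fall straight out of the address bits. The line size and set count are made-up example values, not any particular CPU's.

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 6   /* log2 of a 64-byte line */
#define INDEX_BITS  6   /* log2 of 64 sets */

int main(void)
{
    uint64_t addr = 0x7ffdc0ffee40;   /* arbitrary example address */

    /* Address layout, high bits to low: | tag | index | offset | */
    uint64_t offset = addr & ((1u << OFFSET_BITS) - 1);
    uint64_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint64_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);

    printf("tag=%#llx  set=%llu  offset=%llu\n",
           (unsigned long long)tag,
           (unsigned long long)index,
           (unsigned long long)offset);
    return 0;
}
```

The cache uses the index to pick the set, then compares the tag against the tag stored with each line in that set.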

The associativity is mostly there to avoid cache thrashing, and designers already tune the associativity for the best performance in a variety of scenarios, hence why your L1, L2, and L3 caches usually have different associativity. My L1 is 2-way and my L2 is 16-way on my laptop; I know it's different on my desktop, since there is an L3 too.
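Here's a toy C simulation of the thrashing that associativity avoids: two blocks that map to the same set evict each other on every access in a direct-mapped cache, but coexist happily in a 2-way one. The sizes and the round-robin replacement are arbitrary choices for the example.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_SETS 8
#define MAX_WAYS 4

/* Count misses when alternating between two block numbers, given
 * `ways` lines per set and round-robin replacement. */
static int misses(uint64_t a, uint64_t b, int ways, int iters)
{
    uint64_t set_tags[NUM_SETS][MAX_WAYS] = {0};  /* 0 = empty slot */
    int next[NUM_SETS] = {0};
    int miss = 0;

    for (int i = 0; i < iters; i++) {
        uint64_t blk = (i % 2 == 0) ? a : b;
        unsigned set = (unsigned)(blk % NUM_SETS);
        int hit = 0;
        for (int w = 0; w < ways; w++)
            if (set_tags[set][w] == blk)
                hit = 1;
        if (!hit) {
            set_tags[set][next[set]] = blk;       /* evict and refill */
            next[set] = (next[set] + 1) % ways;
            miss++;
        }
    }
    return miss;
}

int main(void)
{
    /* Blocks 8 and 16 both land in set 0 (8 % 8 == 16 % 8 == 0). */
    printf("direct-mapped: %d misses\n", misses(8, 16, 1, 100));  /* 100 */
    printf("2-way:         %d misses\n", misses(8, 16, 2, 100));  /*   2 */
    return 0;
}
```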

The Wikipedia article has some nice pictures to help explain it. In its direct-mapped vs. 2-way comparison you can see that even block numbers go in index 0 of the cache and odd ones go in index 1; for larger associativity and larger memory addresses, it's the block address (the address with the offset bits dropped) modulo the number of sets, i.e. mod 16 for a 16-set cache, that determines which set a line goes in.
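As a quick demo of that arithmetic (block number modulo the set count), here's the even/odd split for 2 sets next to the wraparound at 16 sets:

```c
#include <stdio.h>

int main(void)
{
    int blocks[] = { 0, 1, 2, 15, 16, 17 };
    for (int i = 0; i < 6; i++) {
        int b = blocks[i];
        /* Direct-mapped with 2 sets: even blocks to set 0, odd to set 1.
         * With 16 sets the index wraps around at block 16. */
        printf("block %2d -> set %d of 2, set %2d of 16\n",
               b, b % 2, b % 16);
    }
    return 0;
}
```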