"8 way L1 cache" (in CPU) - what exactly means "8 way" ?

devuniv

Aug 4, 2017
I thought a CPU core makes one inquiry at a time (asks to fetch cache line number n) to its L1 cache (or to the L1 TLB) and waits for the result. The L1 cache is dedicated to a single core.

So it seems like nonsense to make 8 such inquiries in parallel. So why "8 lines"?

What are these "8 lines" for?
 
Solution
LoL, it's not like the 8n minimum prefetch length in DDR3 and DDR4.

n-way cache refers to set associativity, which defines how many places in the cache a given block of memory can be copied to. In a fully associative cache a memory block can be copied anywhere, so any time the CPU needs to look something up it has to search the entire cache--that takes the most time, so it's the slowest option. The opposite is a direct-mapped (1-way associative) cache, where a memory block can be copied to only a single place--that makes for a fast lookup but drives the hit rate (and thus the efficiency) of the cache way down. Since main memory is several hundred times slower than L1, in the worst case this can starve the CPU of data and essentially stall it, with the CPU effectively running at 1/300 of its normal speed for a moment. Things aren't always that bad, because the data might still be found in L2, but you get the idea.
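
To make the mapping concrete, here's a minimal sketch of the direct-mapped (1-way) case. The geometry is made up purely for illustration (64-byte lines, 512 lines, i.e. 32 KiB), not any particular CPU's design:

```c
/* Minimal sketch of a direct-mapped (1-way) lookup.  Sizes are made up
 * for illustration: 64-byte lines, 512 lines (32 KiB total). */
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64   /* bytes per cache line */
#define NUM_LINES 512  /* 32 KiB / 64 B        */

int main(void)
{
    uint64_t addr = 0x7ffd1234abcdULL;  /* arbitrary example address */

    uint64_t offset = addr % LINE_SIZE;               /* byte within the line    */
    uint64_t index  = (addr / LINE_SIZE) % NUM_LINES; /* the ONE slot it can use */
    uint64_t tag    = addr / LINE_SIZE / NUM_LINES;   /* identifies which block  */

    /* Direct-mapped: the lookup compares exactly one stored tag against `tag`.
     * Any two addresses that share the same index evict each other, which is
     * why the hit rate can get so low. */
    printf("offset=%llu index=%llu tag=%#llx\n",
           (unsigned long long)offset,
           (unsigned long long)index,
           (unsigned long long)tag);
    return 0;
}
```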

So an 8-way L1 is a compromise between a cache that is permanently slow and one that suffers occasional long delays from too many cache misses. Each lookup only has to check 8 places in the L1 cache.
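
And here's the same idea sketched for an 8-way set-associative lookup, again with made-up numbers (64 sets x 8 ways x 64-byte lines = 32 KiB): the address selects one set, and only that set's 8 tags get compared.

```c
/* Minimal sketch of an 8-way set-associative lookup.  The geometry is
 * made up for illustration: 64 sets x 8 ways x 64-byte lines = 32 KiB. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64
#define NUM_SETS  64
#define NUM_WAYS  8

struct line {
    bool     valid;
    uint64_t tag;
};

static struct line cache[NUM_SETS][NUM_WAYS];

/* The address picks one set; only that set's 8 ways are checked,
 * never the whole cache. */
static bool lookup(uint64_t addr)
{
    uint64_t index = (addr / LINE_SIZE) % NUM_SETS;
    uint64_t tag   = addr / LINE_SIZE / NUM_SETS;

    for (int way = 0; way < NUM_WAYS; way++) {
        if (cache[index][way].valid && cache[index][way].tag == tag)
            return true;  /* hit: found in one of the 8 ways */
    }
    return false;         /* miss: go look in L2 (and maybe memory) */
}

int main(void)
{
    uint64_t addr = 0x1234f00ULL;  /* arbitrary example address */

    printf("before fill: hit=%d\n", lookup(addr));  /* miss */

    /* Pretend the miss filled way 0 of the selected set... */
    uint64_t index = (addr / LINE_SIZE) % NUM_SETS;
    cache[index][0].valid = true;
    cache[index][0].tag   = addr / LINE_SIZE / NUM_SETS;

    printf("after fill:  hit=%d\n", lookup(addr));  /* hit */
    return 0;
}
```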