
Point me in the right direction

October 26, 2006 2:27:44 PM

I would like to find white papers or something technical on two subjects:

1.) An in-depth description of what a 4P or 8P system means. What does it mean to be 4-way associative?

2.) A good source on "symmetric multiprocessing"


thanks tbii


October 26, 2006 2:41:48 PM

* For symmetric multiprocessing:

http://en.wikipedia.org/wiki/Symmetric_multiprocessing
http://www.webopedia.com/TERM/S/SMP.html
http://tldp.org/LDP/lkmpg/2.4/html/c1294.htm
http://whitepapers.silicon.com/0,39024759,60014419p-39000492q,00.htm
http://www.bitpipe.com/tlist/Processor-Architectures.html
http://images.apple.com/server/pdfs/L301298A_PowerPCG5_WP.pdf
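
If you want to see what SMP looks like from the software side, here is a minimal sketch (assuming a POSIX system with pthreads; compile with gcc and -lpthread): it asks the OS how many processors are online and starts one worker thread per processor. On an SMP machine all the CPUs are identical peers sharing the same memory, so the scheduler is free to run any of these threads on any CPU.

Code:
/* Minimal SMP sketch: one worker thread per online CPU.
 * Assumes a POSIX system (pthreads + sysconf); compile with: gcc smp.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_CPUS 64

static void *worker(void *arg)
{
    long id = (long)arg;
    /* Each thread can be scheduled on any CPU; all CPUs see the same memory. */
    printf("worker %ld running\n", id);
    return NULL;
}

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);   /* CPUs the OS can schedule on */
    if (ncpus < 1 || ncpus > MAX_CPUS)
        ncpus = 1;

    pthread_t tid[MAX_CPUS];
    printf("online CPUs: %ld\n", ncpus);

    for (long i = 0; i < ncpus; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (long i = 0; i < ncpus; i++)
        pthread_join(tid[i], NULL);

    return 0;
}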

* On Associativity (a couple of short code sketches follow after the quote):
Quote:
Recall that the replacement policy decides where in the cache a copy of a particular entry of main memory will go. If the replacement policy is free to choose any entry in the cache to hold the copy, the cache is called fully associative. At the other extreme, if each entry in main memory can go in just one place in the cache, the cache is direct mapped. Many caches implement a compromise, and are described as set associative. For example, the level-1 data cache in an AMD Athlon is 2-way set associative, which means that any particular location in main memory can be cached in either of 2 locations in the level-1 data cache.

If each location in main memory can be cached in either of two locations in the cache, one logical question is: which two? The simplest and most commonly used scheme, illustrated in the diagram in the full article linked below, is to use the least significant bits of the memory location's index as the index for the cache memory, and to have two entries for each index. One good property of this scheme is that the tags stored in the cache do not have to include that part of the main memory address which is specified by the cache memory's index. Since the cache tags are fewer bits, they take less area and can be read and compared faster.

Other schemes have been suggested, such as the skewed cache, where the index for way 0 is direct, as above, but the index for way 1 is formed with a hash function. A good hash function has the property that addresses which conflict with the direct mapping tend not to conflict when mapped with the hash function, and so it is less likely that a program will suffer from an unexpectedly large number of conflict misses due to a pathological access pattern. The downside is extra latency from computing the hash function. Additionally, when it comes time to load a new line and evict an old line, it may be difficult to determine which existing line was least recently used, because the new line conflicts with data at different indexes in each way; LRU tracking for non-skewed caches is usually done on a per-set basis.

Associativity is a tradeoff. If there are ten places the replacement policy can put a new cache entry, then when the cache is checked for a hit, all ten places must be searched. Checking more places takes more power, area, and potentially time. On the other hand, caches with more associativity suffer fewer misses (see the discussion of conflict misses in the full article). The rule of thumb is that doubling the associativity, from direct mapped to 2-way, or from 2-way to 4-way, has about the same effect on hit rate as doubling the cache size. Increases in associativity beyond 4-way have much less effect on the hit rate, and are generally done for other reasons (see virtual aliasing in the full article).

One of the advantages of a direct mapped cache is that it allows simple and fast speculation. Once the address has been computed, the one cache index which might have a copy of that datum is known. That cache entry can be read, and the processor can continue to work with that data before it finishes checking that the tag actually matches the requested address.

The idea of having the processor use the cached data before the tag match completes can be applied to associative caches as well. A subset of the tag, called a hint, can be used to pick just one of the possible cache entries mapping to the requested address. This datum can then be used in parallel with checking the full tag.


[Diagram in the full article: which memory locations can be cached by which cache locations]
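
To make the set-indexing from the quote concrete, here is a small sketch. The geometry is made up for illustration (64 KB cache, 64-byte lines, 2 ways, so 512 sets), not any particular CPU: the address is split into offset, index, and tag bits, the index picks one set, and the tag is compared against both ways of that set.

Code:
/* Sketch: splitting an address for a 2-way set-associative cache.
 * Example geometry (made up, not any particular CPU):
 *   64 KB cache, 64-byte lines, 2 ways  ->  512 sets, 6 offset bits, 9 index bits.
 */
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE    64          /* bytes per cache line */
#define NUM_WAYS     2           /* associativity        */
#define CACHE_SIZE   (64 * 1024) /* total data bytes     */
#define NUM_SETS     (CACHE_SIZE / (LINE_SIZE * NUM_WAYS)) /* = 512 */

struct line {
    int      valid;
    uint32_t tag;
};

static struct line cache[NUM_SETS][NUM_WAYS];   /* tags only, no data */

static int lookup(uint32_t addr)
{
    uint32_t offset = addr % LINE_SIZE;              /* low 6 bits: byte within the line */
    uint32_t index  = (addr / LINE_SIZE) % NUM_SETS; /* next 9 bits: which set           */
    uint32_t tag    = addr / (LINE_SIZE * NUM_SETS); /* remaining high bits              */

    (void)offset;
    /* The address can live in either of the 2 ways of this one set. */
    for (int way = 0; way < NUM_WAYS; way++)
        if (cache[index][way].valid && cache[index][way].tag == tag)
            return 1;   /* hit */
    return 0;           /* miss */
}

int main(void)
{
    uint32_t addr = 0x12345678;
    printf("addr 0x%08x -> set %u, tag 0x%x, %s\n",
           addr,
           (addr / LINE_SIZE) % NUM_SETS,
           addr / (LINE_SIZE * NUM_SETS),
           lookup(addr) ? "hit" : "miss");
    return 0;
}

With NUM_WAYS at 1 this degenerates to a direct-mapped cache (one possible location per address); a fully associative cache is the other extreme, a single set whose ways cover the whole cache.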


Full article here:
http://en.wikipedia.org/wiki/CPU_cache
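
And the skewed-cache idea from the quote, sketched the same way: way 0 is indexed directly from the low bits, way 1 through a hash, so two addresses that collide in way 0 usually land in different sets of way 1. The hash here is purely illustrative, not the function any real skewed cache uses.

Code:
/* Sketch of skewed indexing: each way uses a different index function,
 * so addresses that conflict in way 0 usually do not conflict in way 1.
 * The hash is purely illustrative.
 */
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64
#define NUM_SETS  512

static uint32_t index_way0(uint32_t addr)
{
    return (addr / LINE_SIZE) % NUM_SETS;         /* direct: low index bits       */
}

static uint32_t index_way1(uint32_t addr)
{
    uint32_t block = addr / LINE_SIZE;
    return (block ^ (block >> 9)) % NUM_SETS;     /* toy hash: XOR in higher bits */
}

int main(void)
{
    /* Two addresses 512 cache lines apart collide in way 0 but not in way 1. */
    uint32_t a = 0x00010000;
    uint32_t b = a + 512 * LINE_SIZE;

    printf("way0: %u vs %u   way1: %u vs %u\n",
           index_way0(a), index_way0(b),
           index_way1(a), index_way1(b));
    return 0;
}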

Hope this helps.
Ninja!