http://www.tomshardware.com/forum/356983-28-8350-good-g...
Scroll to the bottom of the page in that thread; I broke down how HSA and hyperthreading work. If you're interested in parallel processing, you'll want to read that bit of the conversation...
This is an excerpt from the conversation:
Quote:
What he is talking about are protocols...
A software setup that feeds data in a mostly serial manner favors Intel, because Intel's instruction execution pipeline is built around a mostly serial data stream, which means Intel chips break down a serial stream of data faster (strong single-threaded performance). AMD's instruction execution pipeline is set up to run parallel streams of data (strong heavily threaded performance), and most software out right now is not designed to feed data to the CPU that way. So data being fed serially to a CPU designed to run parallel streams of execution is handled inefficiently, and favors the chip designed for that type of data streaming.
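The serial-vs-parallel idea can be sketched in code. This is a minimal Python sketch with made-up chunk sizes and worker counts; it shows that the same stream of work can be consumed one chunk at a time or split across a pool of workers, not how real CPU pipelines dispatch instructions:

```python
from concurrent.futures import ThreadPoolExecutor

def crunch(n):
    """A small CPU-bound task standing in for one chunk of a data stream."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def process_serially(chunks):
    # One long line: every chunk is handled back to back by a single worker.
    return [crunch(n) for n in chunks]

def process_in_parallel(chunks, workers=4):
    # Many lanes: the chunks are spread across a pool of workers.
    # (Threads are used here for simplicity; because of Python's GIL,
    # truly parallel CPU-bound work would use processes instead.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(crunch, chunks))

chunks = [1_000] * 8
assert process_serially(chunks) == process_in_parallel(chunks)
```

Either way the answers come out the same; the difference is only how many "lanes" are doing the work at once, which is the whole argument above.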
For example...
Picture you're at Wal-Mart (or wherever), and there are 8 checkout lanes open. The first lane has a line a mile long, and only 4 of the other 7 lanes are allowed to have a line even 1 person long. That doesn't make any sense, right? For starters, they're not even using all of the available lanes, and the ones they are using aren't being utilized efficiently.
That's what's happening inside an AMD architecture FX8350 with current software...
With Intel chips right now, it's more like the line at Best Buy: one line a mile long, but the person at the front has 4 different cashiers to choose from when they reach the front of the line.
So having 1 line a mile long doesn't slow them down; they're designed that way...
However, once information is fed to the CPU in a parallel manner, AMD will have all 8 lanes at Wal-Mart open for business, with the people (instructions for the CPU) distributed equally across the lines. Intel will still have the Best Buy-style line with 4 cashiers running registers, except now there will be 4 or even 8 lines merging into that one line, which slows things down because the chip isn't designed to execute that way.
I hope the analogy helps this very complicated architecture discussion make sense.
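The checkout analogy can even be turned into a toy simulation. This is a rough sketch with made-up numbers; one "tick" is an arbitrary unit of time, not a real clock cycle:

```python
def ticks_to_clear(customers, lanes):
    """Each open lane serves one customer per tick; count the ticks
    until everyone is checked out."""
    ticks = 0
    while customers > 0:
        customers -= min(lanes, customers)  # each lane takes one customer
        ticks += 1
    return ticks

# 80 shoppers (pending instructions) need to get through checkout:
walmart_all_lanes = ticks_to_clear(80, lanes=8)  # all 8 lanes open
bestbuy_cashiers = ticks_to_clear(80, lanes=4)   # one line feeding 4 cashiers
print(walmart_all_lanes, bestbuy_cashiers)       # 10 20
```

With every lane open the same crowd clears in half the ticks, which is the payoff the analogy is pointing at once software actually fills all the lanes.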
Quote:
That's a great question man...
Think of it like this: when you're multitasking, RAM limits how much you can do at once, though Windows softens this to some degree by keeping a dynamically sized page file on your HDD. What that means is that Windows sets aside a file on disk that acts like overflow RAM, and moves data there when everything won't fit into RAM at once. RAM is much faster than disk, but the performance loss is only noticeable if you're running something extremely CPU/GPU heavy, like a game at 1440p or hardcore video encoding/rendering.
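The page-file idea can be sketched as a toy store that spills to disk once a RAM budget is used up. `PagedStore`, `ram_slots`, and the one-file-per-item layout are all made up for illustration; Windows' real memory manager pages fixed-size blocks in and out, it doesn't work per named item:

```python
import os
import pickle
import tempfile

class PagedStore:
    """Toy model of RAM plus a page file: keep up to `ram_slots` items in
    memory and spill the rest to files on disk (slower, like a page file)."""

    def __init__(self, ram_slots):
        self.ram_slots = ram_slots
        self.ram = {}                       # fast storage
        self.disk_dir = tempfile.mkdtemp()  # slow overflow storage

    def put(self, key, value):
        if len(self.ram) < self.ram_slots:
            self.ram[key] = value           # fits in RAM
        else:                               # RAM full: page out to disk
            with open(os.path.join(self.disk_dir, key), "wb") as f:
                pickle.dump(value, f)

    def get(self, key):
        if key in self.ram:
            return self.ram[key]            # RAM hit: fast
        with open(os.path.join(self.disk_dir, key), "rb") as f:
            return pickle.load(f)           # "page fault": slow disk read

store = PagedStore(ram_slots=2)
for key, value in [("a", 1), ("b", 2), ("c", 3)]:
    store.put(key, value)
# "a" and "b" fit in RAM; "c" was paged out to disk but is still readable.
assert store.get("c") == 3
```

Everything stays reachable either way; the overflow items just take the slow path, which is exactly why you only feel the page file under heavy load.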
When you're multitasking, AMD's design lets your background programs form a serial line in front of an unused core, so they're not tapping the resources your foreground program is using.
Now, say you were running 5 fairly intensive things at once (streaming web videos, downloading multiple music files, playing a web game, and so on). Your i5-3570K would be able to execute 2-3 of those well, depending on their resource needs; the rest would have to share clock time on those same 4 cores and would run at a considerably slower rate.
Doing the same thing on an AMD 8-core chip, since only one of those tasks requires heavy floating-point calculations (each pair of integer cores shares a single FP unit, so FP-light tasks can spread across all 8), you could literally tap 5 cores to do all the work simultaneously.
Quote:
That's what hyperthreading is: it essentially passes background or foreground applications off to a "virtual core", where the processor takes a fraction of clock time each cycle to run the threads assigned to that virtual core. Now, in terms of hardware, the i5 doesn't have any "virtual cores" in Intel speak; however, the 4 cores you have can divide their clock time by percentage to execute the programs you're currently running (which is effectively the definition of CPU multitasking). This taps resources you're using elsewhere, though it won't make a very noticeable difference in your foreground application's performance unless you're doing several CPU-intensive things at once. So, for example, your foreground application may be using 80% of the 4 cores' clock time per cycle, and your background applications may be using the other 20% to run their functions, at a highly reduced rate compared to what they'd get as the primary program.
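That 80/20 clock-time split can be sketched as a toy time-slicer. This is a crude model, not how a real OS scheduler or SMT hardware works; the share numbers come straight from the example above:

```python
def run_cycles(shares, cycles):
    """Toy time-slicer: each cycle, every task advances by its assigned
    fraction of the clock time. `shares` maps task name -> fraction."""
    progress = {name: 0.0 for name in shares}
    for _ in range(cycles):
        for name, share in shares.items():
            progress[name] += share  # this cycle's slice of clock time
    return progress

# Foreground app gets 80% of each cycle; background apps split the rest.
done = run_cycles({"foreground": 0.8, "background": 0.2}, cycles=10)
# After 10 cycles the foreground has done 4x the background's work:
# both ran the whole time, but at very different effective rates.
```

Both tasks make progress every cycle, which is why background programs keep running; they just crawl compared to what they'd do as the primary program.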
Clear as mud yet?