AMD CPU speculation... and expert conjecture - Page 30

Tags:
  • AMD
  • CPUs
February 15, 2013 2:40:37 PM

iceclock said:
i dont care how fast the code is compiled. i just want multi-threaded support over 4 cores or more


It's been there since the earliest computers with multiple-CPU support back in the '70s.

Now, whether the software can be written in a multithreaded, parallel way, and whether the OS is smart enough to put parallel threads on separate cores, that's another story.
February 15, 2013 2:42:23 PM

mayankleoboy1 said:
AFAIK, as of VS2012 SP1 it defaults to targeting the SSE2 instruction set. In fact, Firefox had to specifically change this setting, as they don't even target SSE2 (which is almost ancient now), for compatibility's sake.


Good to hear; still using MSVC 2008 at work.

And yeah, not shocked about the compatibility; as long as XP hangs around, you aren't going to see SSE2 as the default option (as XP supports CPUs pre-SSE2).
February 15, 2013 3:45:05 PM

so basically there's got to be good communication between the software that's coded and the OS, so it understands how to run in a multi-threaded environment.

February 15, 2013 5:15:39 PM

Quote:
Agree, except for the psychotic part. The NT scheduler is actually pretty good: the highest-priority feasible (able to run) thread ALWAYS runs. If there's a tie, one is picked at random. Threads that are waiting get a priority boost, threads that are running get a priority decrement. Threads running in a foreground application get a priority boost. OS kernel threads can preempt any user thread. And other odds and ends like that. All in all, it does a good job with regard to program throughput (especially when there is only one foreground application), even if latency is only so-so.


That's not why it's psychotic. It's constantly moving threads around on the cores without respect to where they were last running. This isn't good for performance, as it forces a cache flush every time the CPU task-switches, which is fairly often. I know this because I do my own performance metrics and watch it happening. This was REALLY a PITA when I was using K10stat on my Llano to overclock + undervolt. K10 would keep all four cores at 800 MHz all the time unless I started up something heavy, then it would cycle a core up to 2.7 GHz. It could only keep one or two cores at that speed, though. The NT scheduler would cycle the thread around until all four cores were at 2.0 GHz and kept them all there, which drove me crazy. I used processor affinity to force the process onto a single core and *poof*, problem over: the other three cycled down to 800 MHz and the fourth went to 2.7 GHz.

There was a Windows 7 patch a while back that made it a bit better on BD CPUs; mostly it allowed the BD CPUs to do core parking properly.
February 15, 2013 5:17:37 PM

interesting

February 15, 2013 5:56:53 PM

palladin9479 said:
Quote:
Agree, except for the psychotic part. The NT scheduler is actually pretty good: the highest-priority feasible (able to run) thread ALWAYS runs. If there's a tie, one is picked at random. Threads that are waiting get a priority boost, threads that are running get a priority decrement. Threads running in a foreground application get a priority boost. OS kernel threads can preempt any user thread. And other odds and ends like that. All in all, it does a good job with regard to program throughput (especially when there is only one foreground application), even if latency is only so-so.


That's not why it's psychotic. It's constantly moving threads around on the cores without respect to where they were last running. This isn't good for performance, as it forces a cache flush every time the CPU task-switches, which is fairly often. I know this because I do my own performance metrics and watch it happening. This was REALLY a PITA when I was using K10stat on my Llano to overclock + undervolt. K10 would keep all four cores at 800 MHz all the time unless I started up something heavy, then it would cycle a core up to 2.7 GHz. It could only keep one or two cores at that speed, though. The NT scheduler would cycle the thread around until all four cores were at 2.0 GHz and kept them all there, which drove me crazy. I used processor affinity to force the process onto a single core and *poof*, problem over: the other three cycled down to 800 MHz and the fourth went to 2.7 GHz.

There was a Windows 7 patch a while back that made it a bit better on BD CPUs; mostly it allowed the BD CPUs to do core parking properly.


The issue is that the scheduler tries to run threads on the least-worked cores at that moment in time. For instance, if you get preempted and some other heavy-workload thread from some other application pops in, you probably don't want your game's main thread running on that core, even if it costs you a cache flush.

The Windows API allows for setting a "preferred" core, or the core which you would like the thread to run on (::SetThreadIdealProcessor), which helps mitigate this issue somewhat. Hard core-locking fails performance-wise, though, when more than one heavy-workload application runs. You also run the risk of preventing a thread from running much if a higher-priority thread is active (say, Windows is doing some task on that core). Hence why the scheduler works as it does. As you can see, TurboBoost (and equivalent tech) complicates this further, so I'd imagine MSFT would need to update the scheduler to favor putting threads on the same core if TurboBoost (or equivalent) is detected as enabled for the CPU.

For those interested, the full list of Windows threading API functions:
http://msdn.microsoft.com/en-us/library/windows/desktop...(v=vs.85).aspx
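The Windows calls themselves aren't portable, but for anyone who wants to poke at the effect of pinning without reading MSDN, here's a rough sketch using the Linux analogue (`os.sched_setaffinity`). Note this is a hard lock, like SetProcessAffinityMask; the ideal-processor *hint* has no direct POSIX equivalent, so treat this as an illustration only:

```python
import os

# Hard-lock the current process to a single core (Linux only).
original = os.sched_getaffinity(0)   # the set of CPUs we may run on
target = min(original)               # pick one CPU from the allowed set
os.sched_setaffinity(0, {target})    # hard-lock to that single core
print(os.sched_getaffinity(0))       # now a one-element set
os.sched_setaffinity(0, original)    # restore the original mask
```

Once pinned, the scheduler can no longer migrate the process, which is exactly the cache-flush-avoidance trade-off being argued about above.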
February 15, 2013 6:03:07 PM

interesting.

February 15, 2013 6:11:28 PM

Having done benchmarking on this, hard-locking a single-threaded application to a core will give you a slight performance boost. We have multiple cores; Windows will not preempt a thread off a core if there is another core already free. Hard-locking won't run into issues with NT jacking your time slice, as NT will just run on another available core instead. NT will only evict you if all other cores are at 100%, in which case you really don't need to be hard-locking, as you're already maxing CPU performance. Hard-locking is just a trick to squeeze out maximum performance in poorly threaded applications.

The problem with it being an API call is that we're now at the mercy of the programmer actually bothering to use it. I prefer having such control myself. This feature is actually incredibly important when dealing with NUMA architectures; scheduling a task to run on one CPU when its memory space is located in a different physical CPU's memory is very bad.
February 15, 2013 6:33:17 PM

Quote:
Having done benchmarking on this, hard-locking a single-threaded application to a core will give you a slight performance boost. We have multiple cores; Windows will not preempt a thread off a core if there is another core already free. Hard-locking won't run into issues with NT jacking your time slice, as NT will just run on another available core instead. NT will only evict you if all other cores are at 100%, in which case you really don't need to be hard-locking, as you're already maxing CPU performance. Hard-locking is just a trick to squeeze out maximum performance in poorly threaded applications.


Not true. Understand the first rule of threading in Windows:

Windows, in all cases, without exception, will run the highest priority thread(s) that are capable of being run.

Granted, the switch may be for a few ns or so, so you don't notice. But it happens, and it happens a lot. Every time you access RAM? Your thread gets booted. Look at some variable stored in the L3? Thread gets booted. L2? Same deal. If your thread cannot run, or if some higher-priority thread can, you get booted. And guess what? Every time you get booted, there's a chance it may resume on a different CPU core.

Basic example: you have a priority of 5. Some other task has a priority of 1. So your thread runs for two cycles. Due to how the scheduler works, the priorities will have changed so that you have a priority of 3 and the other task has a priority of 3 (your priority gets decremented while you run, while the other thread gets a boost while waiting). Windows decides to run the other task, and your thread stops running. One cycle later, your thread has a priority of 4 and the other thread a priority of 2. So you figure your thread will resume on the same core, right? Unless some other thread on a different core has a priority of 1, in which case, guess what? He's the one who gets booted, due to having the lowest priority.
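That boost/decay dance is easy to model. Here's a toy sketch of it (my own simplification, not the real NT algorithm: fixed ±1 adjustments every cycle, and ties broken toward the waiting thread where real NT picks at random):

```python
def simulate(priorities, cycles):
    """Toy boost/decay scheduler: each cycle the highest-priority
    thread runs; the runner's priority drops by 1 and every waiting
    thread's priority rises by 1."""
    priorities = list(priorities)
    schedule = []
    for _ in range(cycles):
        # max() over (priority, index) breaks ties toward the higher
        # index, i.e. the previously-waiting thread in this demo
        runner = max(range(len(priorities)), key=lambda i: (priorities[i], i))
        schedule.append(runner)
        for i in range(len(priorities)):
            priorities[i] += -1 if i == runner else 1
    return schedule

# Thread 0 starts at priority 5, thread 1 at priority 1:
print(simulate([5, 1], 4))   # -> [0, 0, 1, 0]
```

You can see the hand-off in the output: thread 0 runs twice, the priorities meet at 3, the waiter gets its turn, and then thread 0 resumes.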

So you say "OK, I'll hardlock the core to avoid having to dump the cache." Let's look at this example:

Let's take the case of a 4-core system: you have 4 threads; 3 do a lot of work, 1 not so much. You query the processor, see it has four cores, and hard-lock each thread to a core at runtime.

Meanwhile, at some point you alt-tab out of the application, do some stuff on the internet, and so on. Your AV kicks in and, seeing no major CPU activity, starts its scanner and loads its main thread on core 0, because it also hard-locked its threads to cores.

You come back, and blissfully unaware what just happened, return to your application.

You are now officially screwed from a performance standpoint; the thread on the first core (likely the game's main thread) is now competing against the AV scan. Even if your thread has the higher priority (it gets a boost due to being in a foreground process), it is going to get booted some percentage of the time by the AV. And heaven forbid the AV is coded so its main thread gets "high" priority by default.

When you hard-lock threads, you make a major assumption: no other heavy-workload application will come around and prevent an application-critical thread from running.

The worst case, of course, would be the oft-mentioned theoretical example of two games running side by side in windowed mode. Imagine if both games were coded to lock their three or four heavy-workload threads to the first four CPU cores, blissfully unaware you are running an 8-core system and the last four cores are sitting idle. But because you hard-locked a 1:1 core:thread ratio, the OS can't dispatch to the unused cores. Another good example: FRAPS. [And no, querying the CPU for its current load is not a good idea, because the CPU core will NEVER be doing nothing at the time you query it. (sarcasm implied)]

So yes, if no other process-heavy app is working, hard-locking would likely lead to very minimal (~5%) gains, simply due to not flushing the CPU cache. Run alongside another process-heavy app, though, and performance suffers for both applications.

Now, what you describe is common for consoles, because there is NO threat of other threads running. [As far as the PS3, which I am more familiar with: you have just over 200 MB RAM and 6 SPEs to play with; the rest (~56 MB RAM and the 7th SPE) is reserved for the OS. As only one task (OS aside) can run at a time, you can hard-lock threads to cores without negative impacts, unlike on a multitasking OS.]

Quote:
This feature is actually incredibly important when dealing with NUMA architectures, scheduling a task to run on one CPU where it's memory space is located on a different physical CPU's memory is very bad.


Windows actually has NUMA-specific API calls for just that situation. For one, you can set logical processor groups (up to 64 logical processors per group). Point being, you aren't going to see applications move onto a different processor node on their own.
February 15, 2013 6:48:59 PM

One thing that's true from the earlier conference call: it looks to be a 50% boost to the top mobile solution, plus others, so more mobile wins can be seen for AMD.
The rest of the call, about discrete, I'm not so sure about.
February 15, 2013 8:10:37 PM

truegenius said:
Quote:
gamerk316 wrote:
1: I explicitly stated that multiplayer lends itself to better performance with more cores, since you are adding a fairly process-heavy thread into the mix.


got my point ;) 
i mean, gamerk316 already answered it

and

Quote:
esrever wrote :

i3 runs 4 threads. Game is only optimized for 4 threads. Better IPC for the i3. There is balance. Why are you using the i3 for comparison and not a Pentium if you are arguing 2 hardware threads is all you need?

Quote:
gamerk316 wrote:

I'm not saying quads don't help to some extent, but for the majority of games, an i3 is more than sufficient. In most cases, you run into a brick wall in terms of scaling after 3 cores or so.


got my point ;) 



i mean, read previous posts carefully, you may find your answers there

:D  glad i am not gamerk316, otherwise i would be dead from repeating the same thing again and again and again :p 


What is your point? Are you still trying to argue that games can't be made to utilize at least six cores? The BF3 multiplayer pretty conclusively proves that it can be done. Or are you trying to reconcile these two incompatible sentences? Or something else?

"Funny, I haven't seen a single example of that as far as games go..."
"I explicitly stated that multiplayer lends itself to better performance with more cores, since you are adding a fairly process-heavy thread into the mix."
February 15, 2013 8:34:06 PM

Gamer, everything you just said only matters if you have a single CPU core. The moment you have multiple targets, Windows will always pile stuff onto the lowest-utilized target. Thus if you hard-lock a thread, it never gets booted unless all other CPUs are already busier than it is. I've stated this multiple times now.

Everything I've stated so far has been proven and is well known. I have benchmark results from doing exactly this, namely with Cinebench running in single-threaded mode. If you search around the site you might bump into my old posted results.

Quote:
Run with another process heavy app though, and performance suffers for both applications.


What part of having multiple CPU cores do you not understand here? Four cores: one is at 100%, the other three are somewhere from 0 to 15%. Under no circumstance will the NT scheduler put something else onto that 100%-utilized core when there are three other cores with open resources available. The priority of the thread isn't nearly as important, because you have four targets; you're not getting four priority-one tasks happening simultaneously that all consume large amounts of resources, unless you forgot to update your AV.

Quote:
Windows actually has NUMA specific API calls for just that situation. For one, you can set logical process groups (up to 64 cores per process group). Point being, you aren't going to see applications move onto a different processor node on their own.


And NOBODY uses them (well, mostly nobody). I know this because I do lots of performance analysis. They just spawn new threads and keep on trucking; it ends up messy.
February 15, 2013 8:47:41 PM

Cazalan said:
That's the FX line without graphics. That doesn't matter too much. It's the APUs that count now for AMD.


I disagree. It's the high-margin server chips that are most important. If AMD can sort them out, the rest will fall in line more or less automatically. I mean that they would improve their financial situation with a successful server line, and those server cores can be used for consumer products with or without an iGPU. But I sort of agree that the CPU/GPU chips could be the more important of the two consumer lines (FX/A series). I think Steamroller will be a significant step forward with more dedicated hardware per core, considering that the BD/PD cores are fine. It's the shared uncore that is bottlenecking the cores. That's my opinion at least.
February 15, 2013 8:56:01 PM

JAYDEEJOHN said:
One thing that's true from the earlier conference call: it looks to be a 50% boost to the top mobile solution, plus others, so more mobile wins can be seen for AMD.
The rest of the call, about discrete, I'm not so sure about.


Hmm, good to hear. Was that 50% in GPU performance, CPU performance, or some aggregate of the two?
February 15, 2013 9:27:40 PM

It's not like-for-like, but it allows for larger solutions with better perf at the same TDP.
February 16, 2013 11:14:19 PM

Nice link
February 17, 2013 1:52:08 AM

this website doesnt look trustworthy as a source. pass.

February 17, 2013 1:57:01 AM

^ If you are talking about Phoronix, you are mistaken. It is very, very trustworthy.
February 17, 2013 2:02:31 AM

^ unless it isn't praising your favorite cpu brand. then it's run by a paid shill of the rival brand.
imo it is half decent, just really low on resources.
their articles sometimes don't have concluding or summarizing paragraphs. :p 
February 17, 2013 2:44:57 AM

I wanted to add a little bit to this conversation. I've been out of it for a while.

Several pages ago, I mentioned consoles were holding threading back because game developers were optimizing around 3 cores. I was then told that the OS is in charge of scheduling threads.

It seems the point I was trying to make was missed. If you have a three core CPU, you're going to aim for program logic that uses 3 cores. The OS scheduler can handle more, but why bother? It just adds needless complexity.

The entire argument about scaling to 8 cores or even 4 reminds me of when we started heading towards dual core. There were tons, and I mean tons, of people who felt that the second core was a waste and there was no way to use it.

Now, we have quads and 8 cores all over the place, and at least two threads are common now.

The point I'm trying to make is that, much like when we were transitioning from one core to two, it doesn't seem to make sense to use extra threads right now. Back then, developers were eventually forced to adapt to a situation where they had to start optimizing things for two cores.

Imagine if we had never taken that step from a single thread. How well do you think the Xbox 360, with its tri-core PowerPC at 3.2 GHz, would run if it only used one core?

Software will catch up, and if the new consoles use 8 weak cores, it's going to force developers into programming such that 8 threads are used. Yes, it's not optimal. Yes, no one does it now. But will they? They will have to if the 8-core Jaguar console rumors are true. A game simply isn't going to run well when it's using 2 threads on a 1.6 GHz 8-core Jaguar system.
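To make the "design the program logic around N cores" idea concrete, here's a minimal sketch of splitting one job across a pool of workers. (Caveat: in CPython, threads don't speed up pure-Python CPU-bound work because of the GIL; a real game or batch job would use processes or native threads. The shape of the decomposition is the point here.)

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers):
    """Split the input into one chunk per worker, reduce each chunk
    concurrently, then combine the partial results."""
    chunk = max(1, (len(data) + workers - 1) // workers)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))

print(parallel_sum(range(100), 8))   # -> 4950
```

Changing `workers` from 2 to 8 is exactly the shift being argued about: the code has to be written so the work divides cleanly, or the extra cores sit idle.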

And as for compilers, just knowing what something was compiled with is way, way too big of a generalization. I run Gentoo on my FX, and I set flags on GCC that make my FX 8350 more than twice as fast. Hell, if I wanted to, I could compile with horrible flags and churn out code that probably runs worse than everything out there.

Simply knowing what was used to compile software, and then saying that the entire compile is at fault, is way, way too vague. In the case of people complaining about ICC, it's because it used to disable optimizations on certain products that were more than capable of using those optimizations, regardless of what the settings were.

http://software.intel.com/en-us/articles/performance-to...

There are a lot of options there just for ICC. For GCC, it's far more extensive, as you can flag individual instruction sets. I would blame the person setting the options way before I'd start blaming the compiler in general.
February 17, 2013 3:07:59 AM

de5_Roy said:
^ unless it isn't praising your favorite cpu brand. then it's run by a paid shill of the rival brand.
imo it is half decent, just really low on resources.
their articles sometimes don't have concluding or summarizing paragraphs. :p 


Because Phoronix "articles" are usually a comment on a set of open-source code patches that have landed in Linux kernel trees. So you can't really summarize that. It's not like Tom's News, which reports things wrong, doesn't usually give sources, and is full of spelling errors.

Regarding the paid-shill thing, I'd say it's the only site on the net (apart from AMDZone) which shows AMD procs in a positive light.
February 17, 2013 3:10:12 AM

sorry but it seems off. i keep my tech reports with big names, not with some small website i've never heard of.

February 17, 2013 3:31:04 AM

Phoronix is reputable.

(Hoping I can get away without a lengthy protracted page of evidence that requires too much effort to procure)
February 17, 2013 3:42:01 AM

looks alright to me, but i've never heard of it. any proof of validation here so i can think otherwise?

February 17, 2013 5:25:18 AM

^ Phoronix doesn't do any "leaks" or "breaking news". It reports on the open-source code, which 99.9999% of the press ignores, because:

1. It's Linux, usually.
2. It's deglamorized, because it's low-level.
3. It's Linux.

Anybody and everybody can confirm Phoronix's validity. It's super easy. Just download the source code and see for yourself :) 

The only "leaks" they did AFAIR were:
1. Intel Valleyview (?) SoCs have the HD4000 GPU.
2. Names of the Haswell GPU configurations.

And anyone can check the veracity of these claims. Just subscribe to the Linux dev channel.
February 17, 2013 6:17:35 AM

And almost nothing is secret because it's open source.
February 17, 2013 11:58:55 AM

amdfangirl said:
And almost nothing is secret because it's open source.



Did GCC ever get around to implementing proper dispatching? I remember that being a big issue when compiling code that might run on various CPU targets. That had lots of people just compiling for 586/686 targets.
February 17, 2013 1:17:58 PM

well, you can use "-mtune". This will optimize the code for the specific CPU architecture specified, but the final code will still run on all other procs as well.
February 17, 2013 3:56:01 PM

Tomorrow is February 18th; the GTX Titan is supposed to launch.
People sure have huge expectations for this card...
February 17, 2013 5:16:00 PM

Socket AM3+ is dead. The Steamroller/Excavator uarchs exist, but not as high-perf desktop CPUs. They will be Trinity-type APUs.

Look at any roadmap from the company in recent history: no AM3/Orochi Rev C+ chip is listed. Trinity's successor Richland has "Piledriver" cores. The next Trinity will be Kaveri. But there are no more AM3+ socket CPUs from AMD. It's over.
February 17, 2013 5:44:00 PM

hansmoleman1981 said:
Socket AM3+ is dead. The Steamroller/Excavator uarchs exist, but not as high-perf desktop CPUs. They will be Trinity-type APUs.

Look at any roadmap from the company in recent history: no AM3/Orochi Rev C+ chip is listed. Trinity's successor Richland has "Piledriver" cores. The next Trinity will be Kaveri. But there are no more AM3+ socket CPUs from AMD. It's over.




What? Any proof? AMD has said countless times that Steamroller will be on AM3+.

February 17, 2013 5:44:07 PM

pd cpus look like desktop derivatives of pd opterons... with some features (dual ch. instead of quad ch.) disabled. kinda like sb-e cpus. am i right? as long as opterons exist, steamroller dt cpus will also come out in the future.
now if jaguar-based cpus take up the opteron lineup, that'd be a different story....
i think amd won't change sockets until new ddr4 (late 2014-15?) memory becomes mainstream. then they'll implement a new imc and pcie controller (or a new unified platform) at once. until then users have to bear with am3+ and am3++. good news for existing amd owners, underwhelming news for new buyers.
February 17, 2013 6:43:27 PM

hansmoleman1981 said:
Socket AM3+ is dead. The Steamroller/Excavator uarchs exist, but not as high-perf desktop CPUs. They will be Trinity-type APUs.

Look at any roadmap from the company in recent history: no AM3/Orochi Rev C+ chip is listed. Trinity's successor Richland has "Piledriver" cores. The next Trinity will be Kaveri. But there are no more AM3+ socket CPUs from AMD. It's over.



You are looking at 2013 roadmaps. Piledriver will last throughout 2013, and Steamroller is 2014; that is why it is not on the roadmaps.
AMD already said they will continue support for AM3+ after Piledriver, and the APUs are a cut-down version of the FX line. Do you have any clue what you are talking about here?
February 17, 2013 7:48:48 PM

Well, speculation doesn't mean it has to contain expert conjecture.
February 17, 2013 9:49:48 PM

speculation is speculation and isn't based on fact.

February 17, 2013 10:43:36 PM

Quote:
good news for existing amd owners, underwhelming news for new buyers.


How is it underwhelming? What do you want on AM3+ that isn't already present? A "new socket" isn't new if it doesn't offer something that didn't exist previously. PCIe 3.0 is the same pin-out as PCIe 2.0, and that's implemented on the motherboard chipset anyway. The HT links between the motherboard chipset and the socket aren't anywhere close to being saturated, even in dual x16 or triple x8 configurations. Honestly, until DDR4 is out and available to consumers there is absolutely no need for a new socket. They could add on 100 blank pins and call it "Socket 1040"; is that what people are asking for?
February 17, 2013 11:20:42 PM

i agree. all these things are gimmicks. ddr4 doesnt seem that interesting. pci-express 1.0 to 2.0 didnt change much. same for 3.0.

same for sata-2 and sata-3. ill actually be impressed when these technologies actually get used for something other than bragging rights

February 17, 2013 11:26:32 PM

I hope these processors put AMD on the right foot again. I miss the good ol' AMD :( 
February 18, 2013 1:01:26 AM

iceclock said:
i agree. all these things are gimmicks. ddr4 doesnt seem that interesting. pci-express 1.0 to 2.0 didnt change much. same for 3.0.

same for sata-2 and sata-3. ill actually be impressed when these technologies actually get used for something other than bragging rights



DDR4 will be revolutionary, especially for APUs. The PCIe jumps tend to come three to four years early. A year or so ago we were just beginning to see PCIe 2.0 x8/x8 configurations start to bottleneck at high resolutions with obscenely large AA/AF settings. I don't expect the bandwidth of PCIe 3.0 to be needed for another two years. USB is handled by the south bridge / media hub / whatever, not the CPU.
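The per-lane numbers behind that claim follow directly from the published PCIe line rates: 2.0 runs at 5 GT/s with 8b/10b encoding, 3.0 at 8 GT/s with the more efficient 128b/130b encoding. A quick back-of-envelope check:

```python
def pcie_lane_mb_s(gt_per_s, encode_num, encode_den):
    """Peak one-direction PCIe bandwidth per lane in MB/s:
    raw transfer rate x line-encoding efficiency / 8 bits per byte."""
    return gt_per_s * 1e9 * (encode_num / encode_den) / 8 / 1e6

gen2 = pcie_lane_mb_s(5, 8, 10)      # PCIe 2.0: 5 GT/s, 8b/10b
gen3 = pcie_lane_mb_s(8, 128, 130)   # PCIe 3.0: 8 GT/s, 128b/130b
print(round(gen2 * 16))              # x16 slot -> 8000 MB/s
print(round(gen3 * 16))              # x16 slot -> 15754 MB/s
```

So a 3.0 x16 slot has nearly double the throughput of a 2.0 x16 slot, which is why an x8/x8 split on 3.0 still matches a full 2.0 x16.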
February 18, 2013 1:04:14 AM

thanks for telling me, except one thing: if you've already got a motherboard, it's gonna be costly to have to buy a new one just for ddr4. also, when people say apus, is that the graphical part of the processor that's onboard? i get confused with these terms, thanks.

even when you think you know a lot about computers, there's always something new or someone that knows more, my dad told me.

he was right.

February 18, 2013 1:22:40 AM

^I would have thought you'd know about APUs, given how knowledgeable you've been elsewhere. Yes, APUs are CPUs that share the die space with an integrated GPU.
February 18, 2013 1:28:54 AM

iceclock said:
thanks for telling me, except one thing: if you've already got a motherboard, it's gonna be costly to have to buy a new one just for ddr4. also, when people say apus, is that the graphical part of the processor that's onboard? i get confused with these terms, thanks.

even when you think you know a lot about computers, there's always something new or someone that knows more, my dad told me.

he was right.


The memory controller is inside the CPU nowadays; you can't just throw a current CPU onto a board with DDR4 and have it work. Instead you must buy a board, CPU and memory together, which typically happens when someone is rebuilding their old system or replacing the entire thing at once. So it won't be any more expensive than it is now to build a new system.

APU (Accelerated Processing Unit) is AMD's term for taking a CPU and a GPU and putting them on the same die. When we say APU we're referring to Llano, Trinity and now Richland: low-power x86 cores with a low-power Radeon GPU built into them. Memory performance is of utmost importance to anything GPU-related, as GPUs employ large SIMD arrays to process a large quantity of transactions at once. You can often tell the general performance level of a GPU by its memory bandwidth. Right now GPUs are using GDDR5, which is a specialized high-performance memory based on DDR3.
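That bandwidth rule of thumb is just arithmetic: effective data rate times bus width. The numbers below are round illustrative figures, not any specific product:

```python
def peak_bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
    """Back-of-envelope peak memory bandwidth in GB/s:
    effective data rate (MHz) x bus width (bytes)."""
    return effective_clock_mhz * 1e6 * (bus_width_bits // 8) / 1e9

print(peak_bandwidth_gb_s(1333, 128))   # dual-channel DDR3-1333 -> ~21.3 GB/s
print(peak_bandwidth_gb_s(6000, 256))   # 6 GHz-effective GDDR5, 256-bit -> 192.0 GB/s
```

The roughly 9x gap between those two figures is why APU graphics are starved on plain DDR3 and why DDR4 (and wider/faster memory in general) matters so much to them.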
February 18, 2013 1:56:54 AM

palladin9479 said:
APU (Accelerated Processing Unit) is AMD's term for taking a CPU and a GPU and putting them on the same die. When we say APU we're referring to Llano, Trinity and now Richland: low-power x86 cores with a low-power Radeon GPU built into them. Memory performance is of utmost importance to anything GPU-related, as GPUs employ large SIMD arrays to process a large quantity of transactions at once. You can often tell the general performance level of a GPU by its memory bandwidth. Right now GPUs are using GDDR5, which is a specialized high-performance memory based on DDR3.


Arguably, most Intel procs have been "APUs" since Sandy Bridge days (maybe older, if you consider the first-gen Core i5s too).
February 18, 2013 2:05:38 AM

mayankleoboy1 said:
Arguably, most Intel procs have been "APUs" since Sandy Bridge days (maybe older, if you consider the first-gen Core i5s too).


Those didn't have dedicated vector processors, just basic frame-buffer and media acceleration. It was AMD who glued a GPU (literally a cut-out Radeon) onto a CPU, though Intel wasn't very far behind them. The big distinction is that the "GPU" is capable of doing some form of alternate processing, such that it's treated as a coprocessor instead of an exaggerated frame buffer.
February 18, 2013 2:19:36 AM

palladin9479 said:
Those didn't have dedicated vector processors, just basic frame-buffer and media acceleration.


I would call this a GPU.

Quote:
The big distinction is that the "GPU" is capable of doing some form of alternate processing, such that it's treated as a coprocessor instead of an exaggerated frame buffer


You could game on those, albeit poorly. And it did have computation units to accelerate game graphics calculations (meaning it supported the DirectX API) over a pure CPU.
February 18, 2013 2:31:50 AM

it's a gpu, but not an apu.
