Why do AMD CPUs lag behind Intel's, but its GPUs rival Nvidia's?

hover389

Honorable
May 14, 2013
21
0
10,510
I'm just curious as to why AMD's CPU single-thread performance is so far behind Intel's and less efficient, while its GPUs are on par with Nvidia's... as far as I can tell.
Is more money put into R&D on their GPUs than on their CPUs?
 
Solution
It's a completely different architecture and layout. It's easier for them to compete there because performance steps up more predictably. With CPUs, AMD has decided that more cores, for cheaper, is their approach. This way they hold a large stake in the entry-level and budget fields.

EDIT: A second good point I realized after a few minutes: AMD acquired ATI for their cards, so they had a basic plan set out beforehand.
 
Intel has about eight times the assets as AMD and NVidia combined.
 

random stalker

Honorable
Feb 3, 2013
764
0
11,360
For CPUs - they lost a bet they made. They predicted that the future lay in massively multicore apps and designed their CPUs in a way that favours that. Well, the plan backfired, and the only way they can go now is to increase the frequency beyond the levels of sanity. The same happened to Intel way back (before the Core architecture), and the only cure was to redraw everything from scratch.

For GPUs - the trends there are easier to follow, and ATI was always on par with the other GPU manufacturers...
 
Guest
Welp, this opens a can of worms.

Before we start getting the fanboys rolling in, here's the unbiased side of it:

AMD and Intel are, obviously, different companies. They share the same market but they do things differently.
There was a time when comparing CPUs meant the one with the higher clock was better; nowadays it all comes down to architecture. This varies not only between the two companies, but between product lines and generations too.

Intel just generally put more focus on their CPUs, which ends up meaning they are faster, consume less power and run cooler at a much lower voltage. They are generally one step ahead of the game all the time, with a decent roadmap.
AMD compete well in this market by offering something only marginally worse, for a much lower price tag. Take the 8350 in my sig for example: a very good CPU released around the time of the 3rd-generation Core i chips. It came out at roughly the same price as the i5 of the time, give or take a tenner, and generally the two performed on par.
Not bad, when you consider the extra logical cores have some advantage for non-gaming applications.

This same business plan was echoed in the past with the Phenom II line.

Their advantage is also at the budget end: Intel don't tend to go in for the sub-£100 mark; the i3 sits just a little over it, and from there downwards it's all down to the Pentium/Celeron lines.

Many years ago AMD acquired ATI, a graphics card maker, and just stuck with the AMD branding. ATI was no more, in name anyway. It integrated well into AMD; so well, in fact, that they're now making APUs using technology leveraged from that portion of the business. There's still a bit of mis-communication going on, but it's not too bad.

In short then, AMD like to compete by offering an alternative product in either the GPU market or the CPU market that is just as good, give or take a percentage, while sometimes sacrificing heat and/or power consumption, for less money. They always sit nicely on the price/performance ratio. Fanboyism aside, AMD are kind of needed for this anyway; any company with a market monopoly is not good at all.

Sadly they have recently turned their sights to the APU market. In my opinion at least, the current line of APUs doesn't really make sense in the high-end CPU market... or the GPU market. They make a lot of sense for media PCs. It's a shame really, because high-end CPUs are always exciting, but alas, that's Intel's territory now.
I suppose in a way it makes sense. CPUs haven't really needed to get much better for about 5 years, but they keep improving anyway - which is great... in a way. Part of me sees their logic in this and the other part doesn't. Nerdgasms and bench-tastic numbers aside, I guess they're taking an interesting role in the consumer PC market.

Plus you raised a good point: AMD as a company have been struggling a bit recently, causing them to sell off quite a few factories. But then again, most companies have been struggling a bit since the recession.
 
Indeed, with their acquisition of ATI some years back, they have aggressively pursued the GPU market, but have decided that trying to compete head to head with Intel in the high-end desktop market is not going to be part of their core business.
Give AMD credit though: if AMD had not released the Athlon and the following Athlon X2 processor, with its new (and, at the time, Intel-butt-kicking) architecture, Intel would not have been pressured into developing its Core processor family. We would likely still be running Pentium Ds with 6 cores, and the heatsinks to keep them cool would cost $400.
 

Cristi72

Admirable
Hello,

Here are my thoughts: since Bulldozer, AMD has been trying to teach software developers how to use their CPUs instead of producing something which works well with the existing software. Remember the discussions a few years back? "Yeah, Win7 doesn't know how to handle our hardware prefetcher well, but we're working closely with Microsoft and Win 8 will see big performance bumps." The result? Win 8 came and went, Win 8.1 is here, but still not the performance advertised. Yes, they closed the gap on high-performance desktop products, but mainly because Intel has no incentive to push development harder.

On the APU matter: the idea was/is good and it works well in HTPCs, but right now there is no reason to consider an A10-7850K, as at the same price point an FX-6300 or an i5 paired with an R7 250 GDDR5 is still way better. Now there is an ongoing debate about HSA but, as before, the same thing happens: a product which requires new software and further testing for benefits that may or may not materialize.

I still want an 8-core Thuban on 32nm or 28nm... Keep dreaming...
 

con635

Honorable
Oct 3, 2013
645
0
11,010
AMD once had better-performing CPUs and were gaining market share. Then Intel, who were much bigger and richer, bribed a lot of the OEM PC makers and even cheated in some benchmarks as well. With AMD well and truly 'crippled' there was no money, no innovation and no fabs, and all of AMD's CPUs are now about 3 years late. Buying ATI saved AMD from extinction and still does today.
 

Cristi72

Admirable


Yes, you are right, but there are also other factors to consider:
Back in 2004/2005 an AMD CPU was at the same price point as a Pentium 4-series chip and was way better in every aspect (performance, power consumption, efficiency), but in the mobile segment it had no proper response to the Pentium M, which was even cooler and faster (remember those 479-to-478 P4 adapters?). AMD didn't pay attention to that little processor and SURPRISE: in 2006 Intel released Core and Core 2, based on the Pentium M, which blew AMD out of the water. Funny fact: the release came exactly AFTER the AMD-ATI merger. Inside job? We'll never know...

The real thing is that AMD had nothing to respond with, and they also had no more money to spend on development. Given the fact that ATI was also in troubled waters at the time, AMD had no other choice but to take the loss. Intel was found guilty of unfair competition and eventually settled out of court with AMD in November 2009 (circa 1.25 billion dollars), but not before AMD spun off GlobalFoundries in March 2009 (another inside job? I already see a pattern... Hooray for the conspiracy theory!)

Back to the present: Intel's mobile CPUs with Iris are very close to the performance of the A10-7850K. What if they release a desktop i3 sporting an Iris-like graphics accelerator? After all, Intel is the biggest graphics adapter supplier, as virtually all their CPUs have onboard graphics...
 

con635

Honorable
Oct 3, 2013
645
0
11,010

Good points, though Iris Pro is as fast mainly due to the eDRAM. Eventually AMD will offer a high-bandwidth memory solution plus GCN to counter it, and I think Iris and future Intel graphics are a bigger threat to Nvidia than to AMD.

 

Layarion

Distinguished
Aug 8, 2013
52
0
18,630


I've had an AMD Phenom II 1090T for two years, and yesterday I replaced it with an i5-4690K. I never OC'ed the AMD and I have yet to OC the Intel. I am using the same GTX 770 for both, and even in games that use 6 cores performance went up by 10-20 frames across the board. This is because the AMD was bottlenecking the GPU. So what I am trying to say is you're wrong about AMD "performing on par".

If anyone listens to you they'll think they can't go wrong with AMD and save money. That's what I thought 2 years ago, and it all blew up in my face 24 hours ago. Sure, the i5 is 0.3 GHz higher, but that excuse goes out the window when even the 6-thread games are doing better on the 4-core Intel.
 
I agree 100% with the poster above. If you want to game, and you want maximum performance, you go with Intel; there is no way you can sugar-coat AMD to beat that. Period. You can point to any number of reasons, but when you sit down and hit the power button on your build, it is what it is.
I ran nothing but AMD from the late '90s, but when the 2600K came along, I had to jump ship. AMD makes a great-performing processor for a budget-minded build, and it's good enough to get the job done for most people, but performance will not be on the same level as a similar i5 or i7 build.
 
Guest


Wait, what? You're stating that I'm incorrect because your older-generation AMD CPU doesn't match the latest generation from Intel?
When I said performance was on par, I was referring to performance at the time of release. When the Phenom II series came out, I believe we were around the first or second generation of Intel's Core i chips.
The 8350, as I said, was around the 3rd gen, and since then AMD haven't brought out anything in the high-end market, and don't plan to - again, I covered this when referring to their current market strategy of focusing on APUs.

I must say I'm shocked my 8350 performs better than the Celeron inside the Windows XP machine (currently stashed in the shed)...
Seriously dude, I shouldn't have to point this out.
 

Layarion

Distinguished
Aug 8, 2013
52
0
18,630


Sure, it's slightly older. Oh wait, it's only slightly older, so my point is still valid. I'll bet $400 that if I were to get a new AMD CPU it'd be the exact same story.
 
Guest


Slightly older means a lot in technology (Moore's law says hello), especially considering the Phenom II is about 5 or 6 years old now.

And yes, you probably would; again, I must point out that their only current offering is the 8350 (I would include the 9000 series, but they were released around the same time and are basically just a better-binned/overclocked 8350), which is a generation old (or about 2 or 3 years). Your lovely new 4th-gen i5 was released very, very recently.

I'm not sure if I'm losing my sanity, or if you're just really, really stupid.

 

Layarion

Distinguished
Aug 8, 2013
52
0
18,630

The year my AMD was released it wasn't up to par with my games; AMD is not the choice for gaming. That's all I'm trying to say (i.e. they are not "performing on par").
 
Guest


I'd have to look into that. I seem to remember the 6-core Thubans were quite competitive at the time.
Then again, I've been wrong before.

Incidentally, the i5-4690k was released only a few months ago.
 

Layarion

Distinguished
Aug 8, 2013
52
0
18,630
Also, I wanted to add for anyone reading this: my Hitman: Absolution benchmark on the AMD at medium/low settings was 33 fps average, and on the Intel at high settings it's 66 fps average.

In the latest Thief game the minimum frame rate was 15 on the AMD, and on the Intel the minimum was 30.
 

Fananagram

Reputable
Mar 22, 2014
31
0
4,560


In about 80% of the systems I've built within the last 4 years I've used Intel Core i-series CPUs (namely i7s and some i5s), simply because that suited my clients and myself better, and we were looking in the upper $300 range, to which AMD had no answer. Is this to say AMD is bad? No. AMD does an outstanding job with low-to-medium-budget CPUs (as well as server-class Opterons, although lately the $1000 i7-4960X would be my personal choice even though they're significantly less powerful). My most recent build was an 8320, and even though it's a much older CPU it overclocked and performed like a dream in the gaming PC it went into (for a mere $700 all-in price tag).

To avoid getting too far off topic, or just taking the lazy man's route and calling you an uninformed fanboy, I'll get to the point: monopoly = bad, and AMD has from day 1 been designed to be a low-cost competitor to Intel (sources: http://en.wikipedia.org/wiki/AMD http://www.amd.com/en-us/who-we-are/corporate-information/history http://arstechnica.com/business/2013/04/the-rise-and-fall-of-amd-how-an-underdog-stuck-it-to-intel/), since I refuse to be one of those people who post "facts" on the internet without any backing. My very close friend has her doctorate in business and did her dissertation on AMD, and it hurts to see the amount of falsities you read on forums sometimes.

As for GPUs, it's a whole different ball game; the two sides are relatively similar there, but at the same time comparing GPUs with CPUs would be like comparing a starter/alternator to the type of engine it's bolted to.
 

Serpent of Heaven

Honorable
Oct 10, 2013
13
0
10,510


AMD never really intentionally made their CPUs to be better at single-thread processing. In a sense, you can say that AMD future-proofed their CPUs at a time when that isn't widely needed by the bigger consumer groups (gamers and average users). Yes, AMD still sells its products to the same consumer groups, but they aren't going to come up with different architectures in the same generation to accommodate those groups (one arch for burst single-thread and another for multi-thread processing). Cost and expenses go up by 2x or 10x, and with an economy that's coming out of a recession, and in debt to the Chinese, there's no absolute guarantee that AMD would recover its losses or pay its debts in the process. AMD cores are ideal for multi-threaded processing, but it won't matter if developers don't code their programs to utilize more cores. In a way, they couldn't...

A lot of people don't realize this, but AMD cores, as abundant as they are, are more ideal for processing large bits of information - programs that utilize more than just 1 to 2 cores. So using them for video games is kind of a big no unless you overclock, but if you overclock an AMD core, like their redundant processing GPUs, you get more out of it when you feed it more watts. Performance goes up as TDP goes up. AMD 9590 is a good example of feeding a lot of TDP or juice into the Core, getting a 5.0Ghz 8core monster that's meant for gaming, and have a competitive chip that can push the same or similar single thread performance as an i7 3990k. The point really is that AMD just needs to pour more TDP on the chip and, eventually, it will have the same single-thread performance as its Intel competitors. If you increase the TDP on AMD chips to improve their single-thread performance, you also increase their multi-thread performance when it's being utilized. If a program uses more cores, or say you're rendering Voxels through a program, and you use all cores, that core frequency gets divided in a sense amongst the cores in use. In addition, on the Intel side, the performance actually goes down. The burst performance of more cores in use, goes down. You have a TDP guzzling 8 core at 5.0 Ghz using all cores to render images of voxels, the drop in performance is less amongst all cores in comparison to the Intel CPU under the same conditions and loads. This is why they say AMD cores are better at multi-threading.

As to the gentleman who mentioned APUs and why AMD has gone down that route: APU is AMD's label for it, but it basically means having the CPU and GPU silicon on one processor. Intel is doing the same thing, but they don't call them APUs; they call them Haswell, Haswell-E, Broadwell, Skylake, etc. Since AMD is playing it smart and doesn't want to be in direct competition with Intel in the same market, they will follow the same market strategy as they have done for a while: Intel covers one extreme of the CPU market, and AMD covers the rest with respect to the average consumer base. This isn't taking into account the server markets and others. APUs were originally created by AMD to be a part of the cellphone and tablet markets. The implementation was never very good, the market was heavily dominated by Qualcomm's ARM chips, and AMD mainly kept APUs in the PC market (desktops, notebooks/laptops). Nvidia has been trying to do the same thing with its Tegra SoCs, which are their own version of an APU. Nvidia has been successful in branching out to other markets with Tegra beyond the cellphone/tablet market, with G-Sync and their Nvidia Shield.

Now to the reason why Nvidia and AMD are having good competition in this current generation. AMD has cheaper products that aren't far behind Nvidia on performance. They can go toe-to-toe with Nvidia with the R9 and R7 200 series. There's really a difference of, at most, 15% in performance between the GTX 780 Ti and up versus AMD's graphics cards, and the average is less than a 10% difference in FPS across PC games. Take into account that you are spending an additional $100 to $300 for that extra "less than 10% average performance increase," lower TDP, voltage-locked, higher-transistor-count Nvidia product. It isn't worth it. With an economy that leaves less expendable income (less loose cash to burn on desires versus needs), AMD would seem like the more practical choice. You save some dollars buying AMD versus Nvidia - roughly $100 to $300 in the same graphics card tier.

The $3,000 stunt pulled by Nvidia with the GTX Titan Z doesn't help Nvidia's situation either. The GTX Titan Z is basically two Titan Blacks on the same PCB, and it offers nothing more: the same 2880-CUDA-core GPUs, more memory per GPU, and that's it... Two GTX 780 Tis, with their smaller frame buffers, yield higher FPS than the Titan Z. The only good thing I will say about the GTX Titan Z is its frame time variance: the band of that curve is tighter, and that's a good thing. It would imply the card scales really well in SLI. So the question then becomes, why pay $3,000 for a GTX Titan Z when two GTX 780 Tis are half the price with better performance? The only consumers who would buy a GTX Titan Z are the rich, the extremely uninformed about the card, or workstation users if Nvidia repositions it as a workstation card. The target consumer base is really small, so saying that a lot of consumers will buy the Titan Z is unrealistic.

So, the point: the sum of all these Nvidia goof-ups, along with AMD having a decent product that actually works now in the discrete GPU market, is the main reason why AMD graphics cards are rivaling Nvidia's products.
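To put rough numbers on that price/performance argument, here's a quick back-of-the-envelope sketch. The prices and FPS figures below are made-up illustrative values consistent with the roughly $100-300 price gap and sub-10% performance gap described above, not measured data:

```python
# Hypothetical price/performance comparison: a cheaper AMD card vs. an
# Nvidia card costing a couple hundred dollars more for ~10% higher average FPS.
def fps_per_dollar(avg_fps, price):
    """Average frames per second delivered per dollar spent."""
    return avg_fps / price

cards = {
    "AMD R9-class (hypothetical)": {"price": 400, "avg_fps": 60},
    "Nvidia 780 Ti-class (hypothetical)": {"price": 650, "avg_fps": 66},  # ~10% faster
}

for name, c in cards.items():
    print(f"{name}: {fps_per_dollar(c['avg_fps'], c['price']):.3f} FPS per dollar")

# The cheaper card wins on FPS per dollar even though it loses on raw FPS,
# which is the value argument being made above.
```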

 


You may wish to review your theory a little bit because most of what you just wrote is nonsense.

Performance goes up as TDP goes up. AMD 9590 is a good example of feeding a lot of TDP or juice into the Core, getting a 5.0Ghz 8core monster that's meant for gaming, and have a competitive chip that can push the same or similar single thread performance as an i7 3990k.

TDP is a manufacturer-issued rating used to establish power and cooling requirements. It is not directly related to performance, merely correlated with it.

The FX-9590 is an example of taking the manufactured dies with the most ideal fast switching characteristics (very leaky transistors, high power consumption) and providing them with almost dangerous supply voltage levels. The result is a microprocessor that requires half again as much energy to even start being competitive.

If a program uses more cores, or say you're rendering Voxels through a program, and you use all cores, that core frequency gets divided in a sense amongst the cores in use.

This is just gibberish. Seriously, I can't even respond to it because it makes no sense whatsoever.

In addition, on the Intel side, the performance actually goes down. The burst performance of more cores in use, goes down.

If this were true, Intel's microprocessors wouldn't hold nearly every highly concurrent computational record, would they? I can imagine some very particular workloads where AMD's larger but less associative cache architecture may result in fewer unused issue slots, but the vast majority of the time Intel wins out.

You have a TDP guzzling 8 core at 5.0 Ghz using all cores to render images of voxels, the drop in performance is less amongst all cores in comparison to the Intel CPU under the same conditions and loads.

An FX-9590 @ 4.7 GHz gets about 45 GFLOPS on IBT and draws around 230 watts while running. My 3960X @ 4.5 GHz gets about 155 GFLOPS on IBT and draws around 230 watts while running. That's all double precision too, so appropriate for voxel rendering.
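To spell out what those two data points imply (using only the figures quoted above, which are rough measurements rather than official specs), the efficiency gap works out to roughly 3.4x:

```python
# GFLOPS-per-watt from the IBT figures above: both chips draw ~230 W,
# so the efficiency ratio is just the ratio of their throughputs.
chips = {
    "FX-9590 @ 4.7 GHz": {"gflops": 45, "watts": 230},
    "i7-3960X @ 4.5 GHz": {"gflops": 155, "watts": 230},
}

for name, d in chips.items():
    print(f"{name}: {d['gflops'] / d['watts']:.2f} GFLOPS per watt")

print(f"Ratio: {155 / 45:.1f}x more double-precision throughput for the same power")
```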
 

Ptparker88

Reputable
Jun 29, 2014
42
1
4,535
Back in 2010, I believe, I built a Phenom II 965 BE - the last beast processor I think AMD has put out. I'm still using it to this day, but I'm about to upgrade. My friend built a 1st-gen i5, and my Phenom could put that thing to shame. I think after the Phenom II was when Intel started pulling so far ahead. I think AMD will make a comeback in CPUs, just give them time; I believe they lost a lot of money on the FX series, even though they are great processors - just nothing compared to Intel now.
 

liamwalby

Reputable
Sep 10, 2014
96
0
4,640


With AMD FX CPUs, every two physical cores share a single FPU (Floating Point Unit). If both cores in a module need to use the FPU, then one core basically has to wait and do nothing until the other core is done using it, whereas with Intel each core has its own FPU. So for floating-point work you could say AMD's 8-core CPUs are roughly equivalent to Intel's 4-core CPUs.
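If you wanted to sanity-check that, a rough way would be a scaling test: run the same floating-point-heavy kernel on 1, 2, 4 and 8 worker processes and see whether throughput keeps improving past 4 workers. This is only a sketch (Python interpreter overhead blurs the picture compared to a native benchmark), and the 4-module/8-core layout is assumed, not detected:

```python
# Rough FPU-scaling sketch: if two cores per module contend for one FPU,
# floating-point throughput should improve much less going from 4 to 8
# workers than it does going from 1 to 2 or from 2 to 4.
import time
from multiprocessing import Pool

def fp_kernel(_):
    # Fixed amount of pure floating-point work per call.
    x = 0.0
    for i in range(2_000_000):
        x += (i * 0.5) ** 0.5
    return x

def throughput(workers, jobs=32):
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(fp_kernel, range(jobs))
    return jobs / (time.perf_counter() - start)  # kernels completed per second

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        print(f"{n} workers: {throughput(n):.2f} kernels/s")
```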

Someone correct me if I'm wrong.