hover389:
I'm just curious as to why AMD CPU single-thread performance is so far behind Intel's, and less efficient, while its GPUs are on par with NVidia's GPUs... as far as I can tell.
Is more money put into R&D on their GPUs vs. their CPUs?
AMD never really designed its CPUs to excel at single-thread processing. In a sense, you could say AMD future-proofed its CPUs at a time when that wasn't widely needed by the bigger consumer groups (gamers and average users). Yes, AMD still sells its products to those same groups, but it isn't going to come up with different architectures in the same generation to accommodate them (one arch for burst single-thread work and another for multi-threaded processing). Costs and expenses would go up 2x or 10x, and with an economy that's just coming out of a recession and in debt to China, there's no absolute guarantee that AMD would recover its losses or pay its debts in the process. AMD cores are ideal for multi-threaded processing, but that won't matter if developers don't code their programs to utilize more cores. In a way, they couldn't...
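That last point, that extra cores only help when a program is actually written to spread its work across them, can be sketched in a few lines. This is an illustrative Python sketch, not anything AMD-specific; `serial_sum` and `parallel_sum` are hypothetical names for the two ways of structuring the same workload.

```python
# Same workload written two ways: extra cores only help the second one.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum of squares over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def serial_sum(n):
    """One thread of execution: uses one core no matter how many exist."""
    return sum(i * i for i in range(n))

def parallel_sum(n, workers=4):
    """Split the range into chunks and farm them out to worker processes."""
    step = n // workers
    chunks = [(w * step, (w + 1) * step if w < workers - 1 else n)
              for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 100_000
    # Identical answers; only the second version can occupy many cores.
    print(serial_sum(n) == parallel_sum(n))
```

Most game engines of that era looked like `serial_sum` on their critical path, which is why a high-clocked quad-core beat an eight-core at the same price.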
A lot of people don't realize this, but AMD cores, as abundant as they are, are better suited to processing large amounts of data, in programs that utilize more than just one or two cores. So using them for video games is kind of a big no unless you overclock, but if you overclock an AMD core, like their redundant-processing GPUs, you get more out of it when you feed it more watts: performance goes up as TDP goes up. The AMD FX-9590 is a good example of feeding a lot of TDP, or juice, into the cores: you get a 5.0 GHz 8-core monster that's meant for gaming, a competitive chip that can push the same or similar single-thread performance as a comparable Intel i7. The point, really, is that AMD just needs to pour more TDP into the chip, and eventually it will have the same single-thread performance as its Intel competitors. And if you increase the TDP on an AMD chip to improve its single-thread performance, you also increase its multi-thread performance when it's being utilized. If a program uses more cores (say you're rendering voxels through a program and you use all of them), the clock speed is, in a sense, divided among the cores in use. On the Intel side, the performance actually goes down further: the burst performance drops as more cores come into use. Take a TDP-guzzling 8-core at 5.0 GHz using all of its cores to render voxel images, and the per-core drop in performance is smaller than on an Intel CPU under the same conditions and loads. This is why they say AMD cores are better at multi-threading.
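The claim about per-core performance dropping as more cores load up can be put into rough numbers. The clock tables below are made-up, illustrative figures, not measured specs: the "boost-style" drop-off and the flat high-clock chip are assumptions standing in for the two behaviors the paragraph describes.

```python
# Hypothetical clock (GHz) by number of active cores -- illustrative only.
boost_chip = {1: 3.9, 2: 3.8, 4: 3.6, 8: 3.3}   # burst clock falls as cores load
flat_chip  = {1: 5.0, 2: 5.0, 4: 5.0, 8: 5.0}   # fixed high clock, huge TDP

def aggregate_ghz(clocks, active_cores):
    """Total GHz across the active cores: a crude proxy for throughput."""
    return active_cores * clocks[active_cores]

def per_core_drop(clocks):
    """Fractional per-core clock loss going from 1 active core to 8."""
    return 1 - clocks[8] / clocks[1]

# The boost-style chip loses ~15% per core under full load; the
# flat-clock chip loses nothing, paid for in watts instead.
```

With these assumed numbers, `per_core_drop(boost_chip)` is about 0.15 while `per_core_drop(flat_chip)` is 0, which is the shape of the "smaller drop under all-core load" argument above.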
As for the gentleman who asked about APUs and why AMD has gone down that route: APU is AMD's label for it, but it basically means having the silicon of the CPU and GPU on one processor. Intel is doing the same thing, but they don't call them APUs; they call them Haswell, Haswell-E, Broadwell, Skylake, etc. Since AMD is playing it smart and doesn't want to be in direct competition with Intel in the same market, it follows the same market strategy it has used for a while: Intel covers one extreme of the CPU market, and AMD covers the rest with respect to the average consumer base (this isn't taking into account the server markets and others). APUs were originally created by AMD to be a part of the cellphone and tablet markets. Their implementation was never very good, the market was heavily dominated by Qualcomm's ARM chips, and AMD mainly kept them in the PC market (desktops, notebooks/laptops). NVidia has been trying to do the same thing with its Tegra SoCs, which are its own version of an APU, and NVidia has been successful in branching out with Tegra beyond the cellphone/tablet market with G-Sync and the NVidia Shield.
Now to the reason why NVidia and AMD are in good competition this generation: AMD has cheaper products that aren't far behind NVidia on performance. They can go toe-to-toe with NVidia with the R9 and R7 200 series. The difference between the GTX 780 Ti and up versus AMD's graphics cards is at most about 15%, and the average is less than a 10% difference in FPS across PC games. Take into account that you're spending an additional $100 to $300 for that extra "less than 10% average performance increase" (plus lower TDP, voltage-locked, higher transistor count) with NVidia's products. It isn't worth it. With an economy that has fewer expendable assets (less loose cash to burn on wants versus needs), AMD seems like the more practical choice: you save roughly $100 to $300 buying AMD over NVidia in the same graphics card tier.

The $3,000 stunt NVidia pulled with the GTX Titan-Z doesn't help its situation either. The GTX Titan-Z is basically two Titan Blacks on the same PCB, and it offers nothing more: the same 2880-CUDA-core GPUs, more memory per GPU, and that's it. Two GTX 780 Tis, despite their smaller frame buffers, yield higher FPS than the Titan-Z. The one good thing I will say about the GTX Titan-Z is its frame-time variance: the bandwidth of that curve is tighter, which is a good thing and implies the card scales really well in SLI. So the question becomes: why pay $3,000 for a GTX Titan-Z when two GTX 780 Tis are half the price with better performance? The only consumers who would buy a GTX Titan-Z are the rich, people who know very little about the card, or workstation users if NVidia repositioned it as a workstation card. The target consumer base is really small, so saying that a lot of consumers will buy the Titan-Z is unrealistic.
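The price-versus-performance argument above is simple arithmetic. The prices and FPS figures below are illustrative placeholders (not benchmark results), chosen only to mirror the "~10% faster for $300 more" gap the paragraph describes.

```python
def dollars_per_fps(price_usd, avg_fps):
    """Cost of each average frame-per-second of performance."""
    return price_usd / avg_fps

# Hypothetical same-tier cards: the pricier card is ~10% faster
# but costs $300 more, matching the gap described above.
amd_cost    = dollars_per_fps(400, 60)   # roughly $6.7 per FPS
nvidia_cost = dollars_per_fps(700, 66)   # roughly $10.6 per FPS
```

The same arithmetic covers the Titan-Z case: $3,000 for roughly the frame rate of two GTX 780 Tis at half that total price makes its dollars-per-FPS figure worse still.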
So the point: the sum of all these NVidia goof-ups, on top of AMD finally having a decent product that actually works in the discrete GPU market, is the main reason AMD graphics cards are rivaling NVidia's products.