Yeah yeah, I know, Intel is better than AMD, but why?

xXCrossfireXx

Reputable
Jan 16, 2016
869
0
5,160
OK, it's pretty clear, so I don't really need to ask that part. Assuming you have an Intel processor with the same number of cores as the AMD, the Intel wins, even if it has less cache, a lower clock speed, and in some cases even fewer cores. I believe the i5-4690K will win against the FX-8350 or even the FX-9590 in modern titles, but don't quote me on that; it's from Nicolas11x12TECHX's video benchmark (he also said the 4690K was better at video editing, even though the FX has 8 cores, and I have no idea if that's bullshit or not).

Obviously this advantage comes at a price. My question is: what makes that price go up? What specific part of an Intel processor makes it perform better than an AMD processor with similar specs?
 
Architecture plays a big part. I don't know the specifics of it, but it comes down to the design.
Back in the P4 days, I believe AMD was actually better than Intel. AMD's chips ran hotter, but faster than the Pentium 4s.

Actually, some of AMD's ideas were later adopted by Intel; I believe that happened a few times. Things like doing more operations per CPU cycle (or something like that) and the integrated memory controller: I think AMD was first to use those, then Intel followed.
 

larkspur

Distinguished
Haven't seen what Nicolas11x12TECHX said exactly, but he sounds mostly right on both fronts. Gaming for sure. Depending on the renderer, video rendering performance is a mix of single-thread and multi-thread performance. As an example: when I do a render job in Lightwave, some of the processes during the render (geometry calculation, anti-aliasing, etc.) use more threads and some use fewer. I have an FX 8320 on my distributed rendering network and also a stock-clocked i5-4670K (in addition to other systems). The 4670K beats the FX every time in all of my jobs despite having only four threads vs. the 8320's eight. If the renderer were able to fully utilize all eight threads during the entire render, that might change. But in the real world, this doesn't happen (at least in my situation).
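
To put a rough number on that, here's a back-of-the-envelope Python sketch of a job where only part of the work scales with threads (Amdahl's law, essentially). The 70% parallel fraction and the per-thread speeds are invented for illustration, not measured from the actual chips:

# Toy model of a render job with mixed serial/parallel phases,
# illustrating why 4 fast threads can beat 8 slow ones.
# All numbers are made-up illustrations, not benchmarks.

def render_time(work_units, parallel_fraction, threads, per_thread_speed):
    """Time for a job where only part of the work scales with threads."""
    serial = work_units * (1 - parallel_fraction) / per_thread_speed
    parallel = work_units * parallel_fraction / (threads * per_thread_speed)
    return serial + parallel

WORK = 1000.0     # arbitrary units of rendering work
PARALLEL = 0.7    # say 70% of the render scales across threads

# Hypothetical per-thread speeds: the i5 ahead in single-thread performance.
i5_time = render_time(WORK, PARALLEL, threads=4, per_thread_speed=1.5)
fx_time = render_time(WORK, PARALLEL, threads=8, per_thread_speed=1.0)

print(f"i5 (4 fast threads): {i5_time:.0f}")   # ~317 units
print(f"FX (8 slow threads): {fx_time:.0f}")   # ~388 units

With those made-up numbers, the 4-thread chip finishes first because the serial 30% of the job dominates; push the parallel fraction toward 100% and the 8-thread chip starts winning, which matches the "if the renderer fully used all eight threads" caveat above.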

Intel not only designs their CPUs, they manufacture them. AMD only designs their CPUs and relies on third-party foundries to manufacture them. Intel has the most advanced manufacturing facilities and techniques in the world. Their design teams are also some of the best in the world, and they've managed to design CPUs that are much more efficient than AMD's, both in power consumption and in architecture (instructions per cycle, aka IPC). AMD's overarching Bulldozer architecture (used in their current CPU and APU offerings) is actually a modular design. An "8-core" AMD processor is really a 4-module processor. Each module contains two integer units (hence the "2 cores per module" claim), but the module shares a variety of resources, including a single floating point unit (FPU) and the L2 cache. This makes it behave a lot like a 4-core processor in many demanding applications.
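
Here's a toy Python model of that module layout, just to make the arithmetic concrete. The "effective cores" idea is a big simplification of how the shared FPU actually behaves under contention, so treat it as a sketch:

# Toy model of Bulldozer's module layout: each module has two integer
# cores but shares one FPU, so FP-heavy work behaves like 4 cores.
# Purely illustrative; real contention is messier than this.

MODULES = 4
INT_CORES_PER_MODULE = 2
FPUS_PER_MODULE = 1

def effective_cores(fp_heavy):
    """Integer work uses both cores per module; FP work queues on the shared FPU."""
    if fp_heavy:
        return MODULES * FPUS_PER_MODULE    # 4-way FP throughput
    return MODULES * INT_CORES_PER_MODULE   # 8-way integer throughput

print("integer workload:", effective_cores(fp_heavy=False), "effective cores")  # 8
print("FP workload:     ", effective_cores(fp_heavy=True), "effective cores")   # 4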

It really all boils down to a better design, and a quicker cycle for new designs, for Intel vs. AMD. Couple that with Intel's fabulous fabrication, and AMD is forced to sell their CPUs at the price points at which they can compete with similar-performing Intel CPUs.
 
Solution

TheVorlon_44

Distinguished
May 14, 2010
13
0
18,510
Most of the time, people look at how many cores a CPU has (along with, of course, clock speed) to get a rough gauge of how fast it is, but the real indicator is a little deeper than that.

The x86 instruction set has become so large it is almost an abstraction layer: both Intel and AMD actually break individual x86 instructions down into component parts within their respective CPUs. Intel calls these broken-out parts of an x86 instruction "micro-ops", while AMD, I believe, calls theirs "RISC86".
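
To illustrate the idea in Python pseudo-form (the exact breakdowns are implementation-specific and largely undocumented, so this is a sketch of the concept, not real decode tables):

# Rough illustration of how one x86 instruction gets cracked into
# simpler internal operations. The actual micro-ops are proprietary;
# this just shows the general idea.

decode = {
    # read-modify-write: one x86 instruction, roughly three internal ops
    "add [rdi], rax": [
        "load  tmp <- [rdi]",     # read the memory operand
        "add   tmp <- tmp + rax", # do the arithmetic
        "store [rdi] <- tmp",     # write the result back
    ],
    # simple register-to-register op: typically one internal op
    "add rbx, rax": [
        "add rbx <- rbx + rax",
    ],
}

for insn, uops in decode.items():
    print(f"{insn}  ->  {len(uops)} micro-op(s)")
    for u in uops:
        print("   ", u)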

How "wide" or "parallel" the cpu can deal with these "mirco-ops" is a pretty good hint at the true speed of a CPU. - The long forgotten Netburts Pentium IVs for example could only retire or execute two micro-ops per clock cycle so no matter how fast Intel got the clock speed the Athlons of the day (which could do 3 micro-ops per cycle) usually ended up faster.

The "Core" series of chips could retire 4 micro-ops per clock and hence were waaay faster than Netburst chips (basically twice as fast clock for clock)

Sandy Bridge and Ivy Bridge chips could also retire four per clock cycle (and in some rare cases five, due to "micro-op fusion").

Haswell chips can usually do four per cycle, and can do five a bit more often thanks to some new instructions (FMA, for example).

Skylake is "in theory" able retire 6 instructions per clock, but there is not a lot out in the public domain about how often that actually happens.

By contrast, the current AMD chips retire three micro-ops per cycle, so clock for clock they are inherently slower than Intel's chips.
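
You can turn that into a crude peak-throughput comparison: retire width times clock speed. The widths below are the ones quoted in this thread, and the clocks are just representative examples; real sustained IPC sits well below the retire width, so these are strictly upper bounds:

# Back-of-the-envelope peak throughput: retire width x clock speed.
# Widths are the figures quoted above; clocks are example base clocks.
# Upper bounds only, not performance predictions.

chips = {
    # name: (micro-ops retired per cycle, example clock in GHz)
    "Pentium 4 (NetBurst)": (2, 3.0),
    "Athlon 64":            (3, 2.4),
    "FX-8350 (per core)":   (3, 4.0),
    "i5-4690K (Haswell)":   (4, 3.5),
}

for name, (width, ghz) in chips.items():
    print(f"{name}: up to {width * ghz:.1f} billion micro-ops/s per core")

Even with these rough numbers you can see why the Athlon beat the higher-clocked Pentium 4, and why Haswell pulls ahead of the FX chips at similar clocks.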

There are other factors as well: Intel's caches tend to be way faster than AMD's, with latency often half that of the AMD chips.
 

xXCrossfireXx

Reputable
Jan 16, 2016
869
0
5,160


Ohhh, so that's why AMD chips have like 8 MB of L2 cache?