When it comes to CPU performance

Adam_26

Reputable
Oct 20, 2015
23
0
4,510
I've recently been looking into the conditions under which bottlenecking happens and stumbled upon a tidbit of info about processor performance: GHz is less relevant to performance than you'd think. In all the explanations I've gotten, nobody explained why a CPU with less GHz and fewer cores was better (Intel vs AMD); they just told me it was. So I ask: how do you calculate performance figures for either CPU brand in such a way that you have an accurate representation of performance? And if there's an answer, how can you then do the same for a GPU?
 
gamerk316
Basically, performance can be expressed by this formula:

Performance = (Instructions Per Clock * Clock Speed) * Number of CPU Cores

Or in other words:

Performance = Single Core Performance * Number of CPU Cores

Assuming perfect scaling, the second variable dominates. With less than perfect scaling, the first one dominates.

Intel has superior single core performance, especially in the Instructions Per Cycle (Work done per clock tick) metric, and most applications do not scale across all CPU cores. As a result, Intel's chips tend to be faster, despite being clocked lower and having fewer cores.
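
Here's the formula as a quick Python sketch, with made-up IPC and clock numbers chosen purely to illustrate the trade-off (none of these are measured figures for real chips):

```python
# Sketch of the formula above. The IPC values are made up for
# illustration only, not measured from real processors.
def performance(ipc, clock_ghz, cores):
    """Theoretical throughput: IPC * clock speed * core count (assumes perfect scaling)."""
    return ipc * clock_ghz * cores

few_fast_cores  = performance(ipc=2.0, clock_ghz=3.5, cores=4)  # 28.0
many_slow_cores = performance(ipc=1.0, clock_ghz=4.0, cores=8)  # 32.0
# The many-core chip only "wins" if the workload actually uses all 8 cores.
print(few_fast_cores, many_slow_cores)
```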

So yeah, it's complicated. :p
 

Felix England

Honorable
Sep 12, 2013
81
0
10,660
Look at the amount of power the CPU draws: AMD uses more power than Intel and runs hotter. Also, the AMD FX-8350 (8 cores at 4GHz) benchmarks lower than the Intel i7-3770K (4 cores at 3.5GHz).
The best thing to do is look at benchmarks, cores, clock speed, price, CPU voltage and brand, and then decide. Even that can be difficult, as some processors are better at multithreading than others.

A simple way to think about cores and clock speed is with workers. A quad core is 4 workers laying down bricks at one per second; a single core is 1 worker laying down bricks at 2.6 per second. When all four workers are working, the quad core is better, but those 4 workers don't always work at the same time, and when they don't, the single core would be better. It depends on what you do on your PC and how much you do at once.
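
To put rough numbers on that analogy (purely illustrative, in Python):

```python
# Bricklayer analogy with made-up rates: who wins depends on how many
# of the quad core's workers actually have bricks to lay.
QUAD_RATE_PER_WORKER = 1.0  # bricks/sec for each of the 4 workers
SINGLE_RATE = 2.6           # bricks/sec for the lone fast worker

for busy in range(1, 5):
    quad_total = busy * QUAD_RATE_PER_WORKER
    winner = "quad core" if quad_total > SINGLE_RATE else "single core"
    print(f"{busy} busy worker(s): {quad_total:.1f} vs {SINGLE_RATE} bricks/sec -> {winner}")
# With only 1-2 workers busy, the fast single core wins; at 3-4, the quad pulls ahead.
```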

The Performance = (Instructions Per Clock * Clock Speed) * Number of CPU Cores formula is useless for comparing AMD to Intel (no offence)
^-^
 

Adam_26

Reputable
Oct 20, 2015
23
0
4,510


So AMD has many cores and a higher clock speed overall, but Intel has fewer cores with a higher individual speed per core, resulting in more being done across a wider variety of processes, i.e. each core does more, so more comes together to form the end result...

okay I get that.

I'm still thinking about your method for performance measurement, particularly "Assuming perfect scaling, the second variable dominates. With less than perfect scaling, the first one dominates.", which I don't understand so well. For now, the formula in my mind is: instructions per clock * clock speed = single core performance, then multiply that by the number of cores, and that should give an accurate idea of how both brands of CPU perform, as it factors in everything.
 
The idea is basically this: if you know how much work an individual core can do under a specific workload, all you need to do is multiply that by the number of cores you have available to get the theoretical maximum performance. This, however, assumes the workload scales 100% across all CPU cores, so it's simply a "best case" outcome.
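
One common way to put a number on "less than 100% scaling" is Amdahl's law; that's a standard formula added here as a sketch, not something stated earlier in the thread:

```python
# Amdahl's law: a standard model of imperfect scaling, where only a
# fraction of the work can run in parallel and the rest stays serial.
def speedup(cores, parallel_fraction):
    """Speedup vs one core when only `parallel_fraction` of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

print(speedup(8, 1.0))  # 8.0   -> perfect scaling: core count dominates
print(speedup(8, 0.5))  # ~1.78 -> half the work is serial: single core speed dominates
```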

The Performance = (Instructions Per Clock * Clock Speed) * Number of CPU Cores formula is useless for comparing AMD to Intel (no offence)

You can actually use this formula to get a rough estimate for IPC between any two CPUs, since all the other variables are known from benchmark results. For example, take this theoretical benchmark:

Some random game X: i7-2600K: 30 FPS, FX-8350: 25 FPS

I can solve for the relative IPC difference between the two CPUs:

2600K: 30 = IPC * 3.4 * 4
IPC ~ 2.21

FX-8350: 25 = IPC * 4 * 8
IPC ~ 0.78

Relative IPC difference: the 2600K is ~2.83x faster at the same clock, assuming 100% full load, perfect scaling, no other bottlenecks, and a host of other assumptions. But it kind of highlights how, even today, IPC matters. [Granted, things get complicated when things like HTT come into the calculation, but as a rough estimate, it gets the point across.]
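
The same back-of-the-envelope math as a Python snippet (same caveats: full load, perfect scaling, no other bottlenecks):

```python
# Back-solve the implied IPC from the hypothetical benchmark above,
# using FPS as a stand-in for "performance".
def implied_ipc(fps, clock_ghz, cores):
    return fps / (clock_ghz * cores)

ipc_2600k = implied_ipc(30, 3.4, 4)  # ~2.21
ipc_8350  = implied_ipc(25, 4.0, 8)  # ~0.78
print(f"Relative IPC: ~{ipc_2600k / ipc_8350:.2f}x at the same clock")  # ~2.82x
# (the ~2.83 above comes from rounding the two IPC values before dividing)
```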
 
I think the trouble is trying to come up with a solution or formula that can be applied on the fly to decide between two different CPUs (or GPUs). Not that the idea gamerk316 mentioned is wrong, but everything is variable. It's difficult to compare frequency (core speed) between brands and different generations. It's not helpful to compare 4GHz to 3.5GHz when comparing AMD's FX-8350 to Intel's i5-4690. Because of the efficiency involved and the improvements to IPC made from 4th to 6th gen i5s, even when you're comparing Intel to Intel, a 3.3GHz Skylake i5 will perform similarly to a 3.5GHz Haswell i5.

Benchmarks are critical for deciding between specific CPU models, and preferably real-world benchmarks. Not every program uses the CPU in the same way, so performance could appear one way in one game and another way in another game or in other applications. The most accurate approach is to compare a benchmark closest to the application(s) you plan to use, with hardware as similar as possible.

For instance, if you're gaming and plan to play BF4 with an R9 280X at 1080p, then looking at a CPU gaming benchmark for GTA V using a GTX 980 at 1440p won't be as helpful. There are too many differences to get an apples-to-apples comparison. The same is true for other programs like video encoding. An Adobe Premiere benchmark won't be as helpful if you use Handbrake, and the settings used in the test are specific. If you change the fps of the video, use additional plugins or filters during your render, and so on, your results may vary anywhere from a little to a lot from the benchmark run with specific settings.

GPUs are similar. You can try to compare within an architecture, like Fermi to Fermi or Hawaii to Hawaii, but AMD cards use stream processors while Nvidia cards use CUDA cores. Some software is specifically designed to take further advantage of things like CUDA cores. The best thing would be to find head-to-head comparisons in an environment closest to how you'll be using it, to see which, if either, actually outperforms the other.
 
Solution


What gamerk316 provided to you is a very simple model of how a CPU's performance can be calculated. Like any simple model, it is more of a concept than a way to accurately predict the performance of a CPU.

The best way to judge a CPU's or GPU's performance is by looking at benchmarks; a lot of benchmarks. The more benchmarks you look at, the better your understanding of the chip's overall average performance will be. Of course, focusing on benchmarks of the CPU workloads or the games you want to play can give you a very accurate picture of performance.

For example, if you want to know how well a GPU can handle Metal Gear Solid 5, then you look at benchmarks for that specific game. I used to encode a lot of videos with my PC: specifically my large DVD library, and later on my slowly growing Blu-ray collection, which would then be transferred to my HTPC. Therefore, video encoding benchmarks would be the first thing I'd look at.