G860 vs i3 vs i5 in Crysis 3 at high, min FPS

beil

Honorable
Jul 1, 2013
Hello.

According to the chart in the review, Crysis 3 at high settings gets a very low min. FPS (below 20) with the G860 and the i3.

How much is real gameplay affected by min. FPS versus average FPS: does the game drop to the min. FPS for just a few frames now and then, does it stay there for a few seconds, for dozens of seconds, or even for minutes, or only in specific scenes?

How much will motion blur, lens flare, AA, and AF bottleneck the CPU?

The chart: http://media.bestofmicro.com/O/M/375430/original/Crysis3-CPU.png
 
Solution
It's probably for a second or two at a time, but it creates choppiness and generally hoses the experience. If you can afford the i5, that's the way to go.
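If you want to put numbers on that rather than eyeball it, here is a minimal sketch of one way to do it, assuming you have a per-frame frame-time log in milliseconds (e.g. from FRAPS or a similar overlay tool); the file name and the ~33 ms (30 fps) threshold are illustrative, not from the review:

```python
# Rough sketch: how long does each dip below ~30 fps actually last?
# Assumes a plain-text log with one frame time per line, in milliseconds
# (FRAPS-style frametimes file). File name and threshold are illustrative.

def stutter_runs(path, threshold_ms=33.3):
    with open(path) as f:
        times = [float(line) for line in f if line.strip()]

    runs = []        # duration in seconds of each consecutive slow stretch
    current = 0.0
    for t in times:
        if t > threshold_ms:          # frame slower than ~30 fps
            current += t / 1000.0
        elif current > 0.0:
            runs.append(current)
            current = 0.0
    if current > 0.0:
        runs.append(current)
    return runs

runs = stutter_runs("frametimes.csv")
if runs:
    print(f"{len(runs)} dips below ~30 fps, longest {max(runs):.2f} s, "
          f"total {sum(runs):.2f} s spent in slow frames")
else:
    print("No dips below ~30 fps in this log")
```

A run of a second or two shows up as choppiness exactly as described above; a single slow frame barely registers.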

haider95

Honorable
Dec 31, 2012


Duh. If you're going to buy something around that price range, you'd best get the FX-6300. In all its 6-core glory it can run Crysis 3 on decent settings at 60 fps with an HD 7770, at a resolution I don't remember exactly but definitely higher than 1280x1024, and only half of the CPU's true power is utilized. YES, even when running CRYSIS 3.
 
Look at the 72nd to 97th percentile frame rates for a better idea in a 2D graph (one way to compute them is sketched at the end of this post).
- The FX-6300 only has 3 x L1 code caches, and each of those has poor set-way associativity compared to the Core i3.
- Yes, it has 6 x L1 data caches, but you're talking about lots of instructions flying around; most of the data is sent over the PCI Express x16 slot in this case.

60 fps on an HD 7770 at 1280x1024 sounds like BS to me. Sure, it 'might' be possible, but it would be an average with low details at best.
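As mentioned at the top of this post, percentile frame rates give a better picture than a single minimum. Here is a minimal sketch of computing them from the same kind of per-frame log assumed above (the file name and exact percentile range are illustrative):

```python
# Minimal sketch: percentile frame rates from a frame-time log.
# Assumes one frame time per line, in milliseconds; file name is illustrative.

def percentile_fps(path, lo=72, hi=97):
    with open(path) as f:
        times_ms = sorted(float(line) for line in f if line.strip())

    results = []
    n = len(times_ms)
    for p in range(lo, hi + 1):
        # p-th percentile of frame time = the slow tail of the distribution
        idx = min(n - 1, round(p / 100 * (n - 1)))
        results.append((p, 1000.0 / times_ms[idx]))
    return results

for p, fps in percentile_fps("frametimes.csv"):
    print(f"p{p} frame time -> {fps:.1f} fps")
```

Plotting fps against percentile gives the 2D graph referred to above.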
 

haider95

Honorable
Dec 31, 2012


The FX-6300 is an actual 6-core. It may share some floating-point units, but that's it.
http://www.youtube.com/watch?v=eQasGfg2DMY
FX-6300 and Crysis 3. Keep in mind that the HD 7770 is faster than the Ti.
Before you call B.S. on that:
http://www.anandtech.com/bench/Product/536?vs=541
 
It's SMT (Intel calls it Hyper-Threading) with dedicated L1 data caches per thread. The L1 code (or instruction) caches have resource contention between the two threads each core runs, which is made all the worse by the poor set-way associativity of the L1 code cache in the AMD FX processors of today.

- This is excellent for price/performance in large servers. (As it basically copies the DEC Alpha design).
- This is terrible for game performance in consumer and enthusiast systems as it creates massive stuttering problems.

Each of these 'cores' (threads) within a 'module' (core unit) only has about 61% of the resources that a CPU core should have.
- It is a form of SMT by design, and AMD happily admit this in their tech doco.
- It's a form of SMT that is superior to HyperThreading and scales much better than HyperThreading due to dedicated L1 data caches, but for gaming workloads the L1 code caches (of which there are only three, not six!) really hold them back.
- Do not expect 'future titles' to scale better on them; they are not designed for gamer workloads, they are designed for server workloads.

The AMD FX-6300 has a minimum frame rate of 15.16 fps in the OP's linked benchmark, while the Core i3-3220 scores 17.00 fps.
- That's still about 12% higher using two cores with Hyper-Threading, instead of three modules with 'better' SMT.
- This is because the Intel cores are over-engineered (by comparison); they have better 'width' and extract more IPC from their 14-stage pipeline. (Simple maths; worked out below.)
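For the record, the 'simple maths' behind that percentage, using the two minimum frame rates quoted from the chart:

```python
# Minimum frame rates as read from the chart linked in the OP.
fx_6300_min = 15.16   # fps
i3_3220_min = 17.00   # fps

advantage = (i3_3220_min / fx_6300_min - 1) * 100
print(f"Core i3-3220 min FPS is {advantage:.1f}% higher")   # ~12.1%
```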

The AMD FX-6300 has a higher average frame rate than the Phenom II X4, but a lower minimum frame rate, and it stutters like a mofo in the test.
 
