Solved

How do I choose which Intel chip to get?

August 23, 2014 11:12:35 PM

I currently have an FX-8350 paired with two GTX 770 4GB cards in SLI. I would like to upgrade to an Intel i7 or i5 chip. I intend to use this PC for gaming on Ultra graphics. I'm a little hesitant to upgrade because the Intel chips I saw are around 300 bucks. However, I have some suspicion that my FX is holding my GPUs back. Some games seem to have FPS drops (mostly open-world games like Watch Dogs and Assassin's Creed) while others run just fine. I prefer Asus motherboards, so if some suggestions could be made for an Intel chip paired with an Asus motherboard in the 300 to 400 dollar price range, that would be great. Thank you.


August 23, 2014 11:16:33 PM

I'd bet it wouldn't be worth the upgrade. Realistically you would buy a 4690K for $220 if you find it on sale, then you would need to get a new mobo, and while Asus makes great mobos, they aren't the cheapest brand, so add on another $150 or so. I'm assuming you already have a good CPU cooler, but if not, add another $30. That's not worth the performance increase.

The rule of thumb is that you don't upgrade to a CPU that isn't at least two tiers ahead of your current one on this chart: http://www.tomshardware.com/reviews/gaming-cpu-review-o...

And there currently aren't any.
August 24, 2014 12:15:35 AM

The performance increase would not be worth the price.
August 24, 2014 12:22:22 AM

Tbh, from what I hear, Watch Dogs and AC get some framerate drops for pretty much everyone sometimes. It's something about the Anvil engine (and the bits of the Anvil engine used in Watch Dogs). I far prefer Ubi's Dunia 2 engine; I'm not really sure why they used so much of the Anvil engine in Watch Dogs, because it clearly butchered the performance.

That said, Assassin's Creed at least isn't optimized for many cores, so it may still benefit significantly from an i5.
August 24, 2014 7:13:46 AM

So the AMD is working fine, the games are the problem, and the Intel chip won't provide enough of a performance boost to justify the cost? Thanks for the info; I will do some more research and make a decision. I don't know why I'm doubting this AMD chip. It really has done well. I just get annoyed when I think about 1500 bucks' worth of PC getting hung up on a game when I feel like it should be having no problems on max settings. I think I will just hold out another year or two and ride this thing till it dies or becomes outdated. Thanks guys!
August 24, 2014 9:12:04 AM

Clock for clock, Haswell i5s and i7s can deliver up to ~75% higher execution performance in real-time workloads compared to Piledriver FX chips. That can mean substantial differences in minimum FPS, especially noteworthy and meaningful in GPU-heavy conditions with high FPS goals in compute-intensive games (multiplayer).

IMO, Tom's gaming hierarchy chart has some pretty serious errors developing. I hadn't seen it in a while till participating in this thread. I'm not sure who's in charge of that chart, but my research on the issue and those "positions" on the chart don't make any sense to me at all for many of the chips listed...

I could point out many obvious errors (I see CPUs listed two tiers apart that I know from research and testing to have very similar performance in real-time workloads), but let's not worry about that for now. Whether or not a CPU is "enough tiers" of difference away on Tom's chart probably shouldn't matter, as the chart is fundamentally flawed for its intended purposes anyway. Maybe I can help them sort that mess out sometime. What should matter here is whether or not your workloads are apt to take advantage of that potential ~75% advantage in available execution performance in real-time/gaming workloads.
August 24, 2014 12:59:45 PM


mdocod said:
Clock for clock, Haswell i5s and i7s can deliver up to ~75% higher execution performance in real-time workloads compared to Piledriver FX chips. That can mean substantial differences in minimum FPS, especially noteworthy and meaningful in GPU-heavy conditions with high FPS goals in compute-intensive games (multiplayer).

IMO, Tom's gaming hierarchy chart has some pretty serious errors developing. I hadn't seen it in a while till participating in this thread. I'm not sure who's in charge of that chart, but my research on the issue and those "positions" on the chart don't make any sense to me at all for many of the chips listed...

I could point out many obvious errors (I see CPUs listed two tiers apart that I know from research and testing to have very similar performance in real-time workloads), but let's not worry about that for now. Whether or not a CPU is "enough tiers" of difference away on Tom's chart probably shouldn't matter, as the chart is fundamentally flawed for its intended purposes anyway. Maybe I can help them sort that mess out sometime. What should matter here is whether or not your workloads are apt to take advantage of that potential ~75% advantage in available execution performance in real-time/gaming workloads.



That is what I am trying to figure out. The FX chip, for the most part, gets the job done. My monitor has a 60Hz refresh rate, so I usually have vsync on. The CPU seems to have no problems providing 60 FPS for games at max settings. It just struggles with a couple of games, and I feel like two GTX 770 4GB cards should be able to chew up whatever I throw at them, and that maybe the CPU is holding them back. I might just be overthinking things. This PC is my first build, and now I have some kind of obsession over new gear.
August 24, 2014 1:20:11 PM

BrandonCSLC said:


That is what I am trying to figure out. The FX chip, for the most part, gets the job done. My monitor has a 60Hz refresh rate, so I usually have vsync on. The CPU seems to have no problems providing 60 FPS for games at max settings. It just struggles with a couple of games, and I feel like two GTX 770 4GB cards should be able to chew up whatever I throw at them, and that maybe the CPU is holding them back. I might just be overthinking things. This PC is my first build, and now I have some kind of obsession over new gear.


If you don't mind spending a couple hundred more, I would suggest a Maximus VII Ranger with a 4790K.

Best solution

August 24, 2014 1:22:49 PM

You will end up spending $300-500 for up to a 10% increase in some games.
August 25, 2014 6:51:27 AM

Yeah... I think I'm good. Just gonna keep the FX.
August 25, 2014 6:54:24 AM

In compute-intensive games it can be a lot more than 10%. The execution resources available in a Haswell core are significantly "wider" than in a PD core (approximately double), and real-time workloads run much better with that big-core, intra-core parallelism. Users who make the switch from many-core to big-core for compute-intensive games consistently report significant changes in gameplay smoothness, etc. The only way to "hide" or shrink the difference to ~10% or less would be if the render workload were adjusted to solidly plant the bottleneck on the GPU configuration at all times at a relatively low FPS (~30). If your idea of smooth gameplay is 30 FPS, then yes, a switch to Haswell may have very little effect. If you're like most competitive gamers, who notice FPS minimums when they drop below ~50-60 FPS, then you will likely benefit from a switch to Haswell by significant margins. If you work for the AMD marketing and deception department, then you would find a way to demonstrate that the FX chips perform similarly to the competition. Easy: hide the difference under the rug of a GPU bottleneck.
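That bottleneck-masking effect is easy to sketch with hypothetical numbers (the FPS figures below are made up purely for illustration, not measurements of any real chip):

```python
def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Delivered frame rate is capped by the slower pipeline stage."""
    return min(cpu_fps, gpu_fps)

# Hypothetical CPU-limited rates: Haswell ~75% above the FX chip.
fx_cpu_fps = 45.0
haswell_cpu_fps = fx_cpu_fps * 1.75  # 78.75

# GPU bottlenecked at ~30 FPS: the CPU difference disappears entirely.
print(delivered_fps(fx_cpu_fps, 30.0))       # 30.0
print(delivered_fps(haswell_cpu_fps, 30.0))  # 30.0

# GPU headroom to spare: the ~75% CPU gap shows up in full.
print(delivered_fps(fx_cpu_fps, 120.0))      # 45.0
print(delivered_fps(haswell_cpu_fps, 120.0)) # 78.75
```

Pin the test at a low GPU-bound frame rate and any two CPUs look identical; raise the GPU ceiling and the gap reappears.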

When the news media takes White House propaganda/talking points and turns it into news, I am ashamed of our media. When freelance forum participants take the bullet points of the hardware marketing departments and basically share them as fact without question, I am ashamed to wear the badge of "geek."

Clock for clock, Haswell can indeed produce up to ~75% higher minimum FPS in many games, though in some cases it can be even higher than this, due to the way draw calls can be slacked off in some game engines to preserve a cohesive timeline. This depends on how the game engine's functions are tied to the timeline. The compute overhead available above and beyond the minimum required to keep up with the timeline of events/AI etc. can, in some game engines, wind up having a dramatic effect on FPS. I've observed this behavior in testing. At stock clocks, a CPU might cause FPS minimums to dip as low as, say, 20 FPS, but then a mild overclock of just 10-20% could improve those minimums to 30-40 FPS, because that 10-20% of extra compute actually nearly doubles the available headroom for draw calls. This may seem counter-intuitive, but many games do wind up responding this way to changes in compute performance. The substantially wider Haswell instruction engine offers a much larger piece of the compute-overhead pie for draw calls, and thus solves those nasty FPS minimums and glitches in MANY games.
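The non-linearity described above comes straight out of frame-budget arithmetic. A toy calculation (all millisecond figures are hypothetical, chosen only to show how a modest speedup can roughly double the leftover headroom):

```python
def headroom_ms(frame_budget_ms: float, cpu_work_ms: float) -> float:
    """Time left in the frame budget after fixed CPU work (sim/AI/timeline)."""
    return frame_budget_ms - cpu_work_ms

budget = 50.0          # a 20 FPS minimum means 50 ms per frame
baseline_work = 45.0   # stock-clocked CPU spends 45 ms on fixed work
overclocked_work = baseline_work / 1.15  # ~15% overclock -> ~39.1 ms

base = headroom_ms(budget, baseline_work)       # 5.0 ms left for draw calls
boosted = headroom_ms(budget, overclocked_work) # ~10.9 ms left

print(f"baseline headroom:    {base:.1f} ms")
print(f"overclocked headroom: {boosted:.1f} ms ({boosted / base:.1f}x)")
```

A 15% clock bump shrinks the fixed work only slightly, but more than doubles the slice of the frame left over for draw calls, which is where the minimum-FPS behavior lives.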
August 25, 2014 7:18:57 AM

I don't doubt that your information is valid. That is the reason behind my original desire to switch to an Intel chip. However, at the moment I have come to the conclusion that I don't really NEED to make the switch. The FX chip has been a great jumping-off point for my first build. It is a workhorse and gets the job done. For future builds, though, I think I will go with the thoroughbred. I just can't justify spending that much money right now for an uncertain amount of gain when what I have is realistically working fine. I appreciate all of your input. Thank you for the info, guys.