Which processor is better for gaming?

Junaid Hussain

Honorable
Apr 14, 2013
10
0
10,510
Decided to upgrade my PC and finally came down to two choices for the CPU:
the FX-8350 and the i5-3570K. Which one should I go for? This is regarding gaming only.
 

8350rocks

Distinguished
That depends entirely on the games you play...

If you solely play StarCraft 2 and Skyrim, go Intel...

If you play a lot of FPS/3PS titles (that use a lot of cores), like Crysis 3, Far Cry 3, Max Payne 3, BF3 Multiplayer...AMD will even come out ahead in some of those...

If you play titles that are predominantly GPU bound, like Metro 2033 for example...the difference will be negligible.

If you run a lot of instances of games, play several at once, or multitask a lot, AMD has an advantage there. If you like to do video editing, encoding, etc., AMD will have an advantage there as well.

So, depending on what you do...one is better than the other. It just depends on what you're doing.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
The FX has eight threads; the i5 has only four. If you are gaming while doing several background tasks (e.g. video encoding), then the FX will be more responsive.

Notice also that most current games only use two to four threads, but the next consoles (the PS4 and the next Xbox) will use eight-core chips from AMD. An entire generation of games will be using more threads. In fact, FX chips are used in both the PS4 and next Xbox development kits.

Moreover, the FX chip can be faster than an i7 under Linux, but I do not know about gaming under Linux (e.g. Steam).
 

Traciatim

Distinguished


People keep saying this, but ignoring the fact that it's 8 netbook cores and not 'real' PC cores. It's not like it's going to have an 8350 in there; it's more like four E-350s glued together, similar in speed to an i3-3220 or along those lines. One core will probably be dedicated to system resources, with the others available to be allocated as needed to the game. But like games designed today, you will probably have threads for audio, networking, asset loading and releasing... and then one big main thread for the gameplay.
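Purely to illustrate that shape, here is a toy Python sketch (the per-thread workloads are made up by me, not taken from any actual engine): a few light helper threads plus one heavy main gameplay loop, which is why raw core count alone doesn't buy much.

import threading
import time

def helper(name, work_per_tick):
    """Light worker: stands in for audio mixing, networking, asset streaming."""
    for _ in range(10):            # 10 simulated frames
        time.sleep(work_per_tick)  # small slice of work per frame

def gameplay_main():
    """Heavy main loop: game logic, physics, draw-call submission."""
    for _ in range(10):
        time.sleep(0.015)          # ~15 ms of a ~16.6 ms frame budget

helpers = [
    threading.Thread(target=helper, args=("audio", 0.002)),
    threading.Thread(target=helper, args=("network", 0.001)),
    threading.Thread(target=helper, args=("assets", 0.003)),
]

start = time.time()
for t in helpers:
    t.start()
gameplay_main()                    # the main thread dominates the frame time
for t in helpers:
    t.join()
print(f"10 frames took {time.time() - start:.2f}s; the main loop was the bottleneck")

The helpers finish almost immediately, and total time is set by the one heavy loop, so extra cores beyond the helpers mostly sit idle in this kind of layout.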
 

cmi86

Distinguished
Right now the i5 is the better gaming CPU, minus a couple of titles that are threaded enough to use the FX's core advantage. People can speculate about what next-gen consoles running 8-core AMD chips will do to the PC gaming world as far as utilizing more cores. They may be on to something, but the fact is that right now it's all speculation, because the next-gen consoles have not arrived yet. My personal thought is that more cores will be utilized in the future, simply from observing how other software has become more and more threaded over the past years, and I think that trend will continue. But again, only time will tell!
 

8350rocks

Distinguished


Actually it's quite a bit stronger than four E-350s...it's more like four current dual-core Pentium chips glued together, and it has a built-in GPU with shared GDDR5 memory hardwired for about 10x the bandwidth your home PC gets from DDR3. There's a separate chip on board for system resources; the 8 cores are dedicated to pushing the game, and only that. Also, considering the gaming industry as a whole loves AMD's willingness to offer improvements toward coding efficiency, I think you are all about to see that Intel's lack of willingness to listen is going to cost them on the gaming front.
 

Traciatim

Distinguished


Where are you getting that data from? What I read is that it's Jaguar based, that they were able to get about 20% more performance, and that they just stick 8 cores together. Where are you reading that it's a dual-CPU setup with one CPU for the system? Since an E-350 pulls about 750 on Passmark's CPU bench, it would get near a grand with 20% more, and sticking four of them together it would pull about 4 grand... exactly where an i3-3220 or FX-4300 sits.

Also, GDDR5 has tons of bandwidth for sequential access, specifically things like reading textures or drawing a screen, but it has latency problems when doing random access. While it's great for pushing pixels through, it's not really all that great for doing any actual work, which is why GDDR hasn't taken off as main PC memory. So as its tasks are switched between game data, textures, and the screen, it will take much more of a performance hit than a PC would with two pools. That should be mitigated fairly well, though, by the ability to pass data between the GPU and CPU without using bandwidth, since both share the same pool and you can just pass pointers, or some similar mechanism, back and forth (which is probably the best feature it has).
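As a rough CPU-side analogy for that bandwidth-vs-latency point, here is a Python sketch (ordinary system RAM, nothing to do with GDDR5 itself; the buffer size and stride are arbitrary picks of mine): streaming through memory in order is friendlier to the memory system than hopping around the same bytes at random.

import random
import time

data = bytearray(64 * 1024 * 1024)            # 64 MB, larger than typical CPU caches
seq_idx = list(range(0, len(data), 64))       # one byte per 64-byte cache line, in order
rand_idx = seq_idx[:]
random.shuffle(rand_idx)                      # same addresses, random order

def touch(indices):
    total = 0
    for i in indices:
        total += data[i]
    return total

t0 = time.time(); touch(seq_idx);  t_seq  = time.time() - t0
t0 = time.time(); touch(rand_idx); t_rand = time.time() - t0
print(f"sequential: {t_seq:.3f}s   random: {t_rand:.3f}s")
# Random order is usually measurably slower even though the same bytes are read;
# Python's interpreter overhead mutes the gap compared with what hardware sees.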

I'm not claiming it won't be a great gaming rig, or not have fun games. It's actually the first console I've ever seen that has me intrigued. But to call it some miracle hardware is a little ridiculous.
 

8350rocks

Distinguished
Well, when your CPU and GPU share the same memory addresses, and you don't have something like Windows interfering with your access to the hardware, GDDR5 is far better than DDR3.

It is Jaguar based, with 8 cores; the PS4 developer release stated there is an additional single-core CPU to run the system processes.

The Jaguar architecture is a 20% performance gain over the E2-1800...which is substantially more powerful than the netbook E-350. The E2-1800 is much closer to a current Pentium dual core.

Plus, with HSA, the GPU and CPU will be working on the same problems together. So the capability of just the CPU or GPU independently is not representative of the two working together.
 

Traciatim

Distinguished


Citation needed. When your video card is reading textures and your CPU wants to build the next frame, how is both accessing the same memory better than DDR3? At least their marketing department is working overtime: 5 is bigger than 3, so it must be awesomesauce.



The E-350 scores about 770, the E2-1800 about 850. The lowest possible current Pentium dual core, the G620T, scores in the 2130 range, well over twice as fast. To call them close is delusional.

Even at 850, if you increase it by 20% and scale it perfectly so that there is no overhead and all 8 cores work at 100% efficiency, you're still looking at 4080... well under where the i3-3220 or FX-4300 sit.
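To make the arithmetic explicit, here it is as a tiny Python sketch. The baseline scores, the 20% uplift, and the perfect scaling across four dual-core modules are the assumptions quoted above, not measurements:

# Back-of-the-envelope check of the scaling argument above. Assumptions:
# the Passmark-style baseline scores quoted just above, a 20% uplift, and
# perfect scaling from one dual-core module to four of them (8 cores total).
def eight_core_estimate(dual_core_score, uplift=0.20, modules=4):
    return dual_core_score * (1 + uplift) * modules

print(eight_core_estimate(770))   # E-350 baseline   -> 3696.0
print(eight_core_estimate(850))   # E2-1800 baseline -> 4080.0
# Real scaling would be less than perfect, so these are optimistic ceilings.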



Citation needed. The CPU will be working on AI and frame data; the GPU will be working on textures and pixel pumping. Both will need access to the memory. They are both working on the game, so you can kind of twist it to say they are working on the same thing, but in no way, shape, or form are they actually working on the same things. To say otherwise is idiotic.

Here is an interesting test. I grabbed a little fill-rate tester to measure my GPU's memory bandwidth and ran it to simulate a couple of things. The first test was with nothing running on my PC but the test itself, and it spits out:

GPU fill rate, single-texture (16/0): 7669 Mtexels/sec
GPU fill rate, multi-texture (16/0): 15345 Mtexels/sec

So I ran it again with Prime95 running on all of my CPU cores to max that out to see what kind of impact it would have. You would think it would be very little but as it turns out it does impact it quite a bit:

GPU fill rate, single-texture (16/0): 6344 Mtexels/sec (Dropped 17.2%)
GPU fill rate, multi-texture (16/0): 12678 Mtexels/sec (Dropped 17.3%)

So then I ran it again with Unigine Heaven running on my second monitor, to simulate multiple things accessing the GDDR5, since Heaven isn't very CPU- or system-RAM-heavy...

GPU fill rate, single-texture (16/0): 587 Mtexels/sec (Dropped 92.3%)
GPU fill rate, multi-texture (16/0): 437 Mtexels/sec (Dropped 97.1%)

So then I thought to myself... hmm, how about DDR3? I have that in my machine, so how do I test it similarly? I grabbed PassMark's PerformanceTest and ran just the memory test section standalone, and it spit out a memory mark of 2688. Then I ran it alongside Unigine Heaven, since that mostly avoids CPU usage, and it spit out 2534, an impact of 5.7%. Then I ran Prime95 on all my cores, made sure it was on the test that hits as much memory as possible, and PassMark spits out 1966, an impact of 26.8%.
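For anyone who wants to check the percentages, this small Python snippet recomputes the drops from the raw numbers reported above (they land within a rounding step of the figures I quoted):

# Reproducing the percentage drops from the raw scores reported in this post.
def drop(before, after):
    return (before - after) / before * 100

print(f"fill rate, single-tex, Prime95: {drop(7669, 6344):.1f}%")    # 17.3%
print(f"fill rate, multi-tex,  Prime95: {drop(15345, 12678):.1f}%")  # 17.4%
print(f"fill rate, single-tex, Heaven:  {drop(7669, 587):.1f}%")     # 92.3%
print(f"fill rate, multi-tex,  Heaven:  {drop(15345, 437):.1f}%")    # 97.2%
print(f"DDR3 memory mark, Heaven:       {drop(2688, 2534):.1f}%")    # 5.7%
print(f"DDR3 memory mark, Prime95:      {drop(2688, 1966):.1f}%")    # 26.9%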

Still think GDDR5 is far superior when multiple things try to use it? Sony is using it as a marketing gimmick at best, and I'm not convinced it's the best choice for a shared-access scenario. Like I said before, I'm sure it will be a great gaming machine, and incredibly fun. But to think it's going to blow gaming PCs away, or that its hardware is anything magical, is just plain silly.
 

cmi86

Distinguished


I'm not saying one way or another, because it hasn't happened yet... But common sense and logic lead me to believe it is a definite possibility, given statements from the PS4 development team like "The technology in the PlayStation 4 will be relatively similar to the hardware found in personal computers in many respects. This familiarity should make it easier and less expensive for game studios to develop games for the PS4" and "The idea is to make video game development easier on the next-generation console, attracting a broader range of developers large and small." So basically what he is saying is that they will be developing x86 titles on x86 hardware, x86 titles designed around an 8-core x86 chip. Granted, it's not a full-on FX chip, but I wouldn't think that would really matter, as the software is still threaded to use 8 cores.

Now ask yourself: why would game devs porting over titles, or developing new ones, not use this model when it is cheaper and will provide large performance increases? Isn't that the goal of every single company on the planet, maximize product, minimize cost? Go ahead and sit smugly up in your tower of Intel and relish the fact that your chip is 15% faster than mine, because the honest truth is that for now it is. Soak it up while you can, man, because if common sense and businesses that want to provide a better product for less money prevail, I've got a gut feeling there is a real big wave coming, and it's gonna wash that high horse of Intel you sit on right out from under your ass. But then again, only time will tell, will it not?
 

Traciatim

Distinguished


Yes, logic dictates that if they program for the low end (PS4) and it can port easily over to the much more powerful PC platform, then they will. What I really worry about is that currently, if you want to port over to the far superior PC, you essentially have to redo all your assets. Most people are looking at this as a great boon to PC gaming, since console ports will now be incredibly easy to bring over, but I worry that this will just lead to lazy porting, and we will see a rash of low-end games and horrific control schemes designed for consoles... and when they don't sell well, it will be blamed on PC gamers and not on the sheer laziness of the terrible porting going on.

 

8350rocks

Distinguished


http://www.amd.com/us/products/desktop/pages/consumer-desktops.aspx#2

Called AMD App Acceleration, it's on their APUs and GPUs...you think for one second they won't have it on the PS4 and Xbox 720? Also, they have a new instruction set launching with Jaguar and Richland that allows the CPU and GPU to share memory address space under identical nomenclature in the DDR3 and GDDR5. GDDR5 is far better than DDR3 when you design around its strengths. The shared-address technology is a first in the PC industry, by the way...just thought I would point that out.


The E-350 scores about 770, the E2-1800 about 850. The lowest possible current Pentium dual core, the G620T, scores in the 2130 range, well over twice as fast. To call them close is delusional.

Even at 850, if you increase it by 20% and scale it perfectly so that there is no overhead and all 8 cores work at 100% efficiency, you're still looking at 4080... well under where the i3-3220 or FX-4300 sit.

That's like saying the A10-5800K is inferior because one benchmark says so...as I said, you can't judge the parts separately. If you look at laptop benchmarks, the E-350 (not to mention the E2-1800) beats Intel i5 mobile chips in gaming performance...isn't that a funny coincidence? Gaming consoles with hardware designed for gaming and not for CPU benchmarks...weird, huh?



Citation needed. The CPU will be working on AI and frame data; the GPU will be working on textures and pixel pumping. Both will need access to the memory. They are both working on the game, so you can kind of twist it to say they are working on the same thing, but in no way, shape, or form are they actually working on the same things. To say otherwise is idiotic.

Handled this above...you really need to do some research.

http://community.futuremark.com/hardware/gpu/AMD+Radeon+HD+6670/compare

Scroll down; the best comparison I could find was near the bottom. The HD 7660G is an APU with GDDR5 compatibility and the HD 7660D is the same APU with DDR3 compatibility. Notice the HD 7660G is 4x more powerful in the 3DMark benchmarks? Care to wager which version of the HD 7660 is included on the Jaguar CPUs? As a matter of fact, it should be nearly twice as effective, given they have designed the 8-core Jaguar to work with a double-sized version of the HD 7660G instead of putting two of them together.

Here is an interesting test. I grabbed a little fill-rate tester to measure my GPU's memory bandwidth and ran it to simulate a couple of things. The first test was with nothing running on my PC but the test itself, and it spits out:

GPU fill rate, single-texture (16/0): 7669 Mtexels/sec
GPU fill rate, multi-texture (16/0): 15345 Mtexels/sec

So I ran it again with Prime95 running on all of my CPU cores to max that out to see what kind of impact it would have. You would think it would be very little but as it turns out it does impact it quite a bit:

GPU fill rate, single-texture (16/0): 6344 Mtexels/sec (Dropped 17.2%)
GPU fill rate, multi-texture (16/0): 12678 Mtexels/sec (Dropped 17.3%)

So then I ran it again with Unigine Heaven running on my second monitor, to simulate multiple things accessing the GDDR5, since Heaven isn't very CPU- or system-RAM-heavy...

GPU fill rate, single-texture (16/0): 587 Mtexels/sec (Dropped 92.3%)
GPU fill rate, multi-texture (16/0): 437 Mtexels/sec (Dropped 97.1%)

So then I thought to myself... hmm, how about DDR3? I have that in my machine, so how do I test it similarly? I grabbed PassMark's PerformanceTest and ran just the memory test section standalone, and it spit out a memory mark of 2688. Then I ran it alongside Unigine Heaven, since that mostly avoids CPU usage, and it spit out 2534, an impact of 5.7%. Then I ran Prime95 on all my cores, made sure it was on the test that hits as much memory as possible, and PassMark spits out 1966, an impact of 26.8%.

So I suppose, by your standards and using a PC-benchmark conclusion based on your test results, that the GPGPUs out there that blow the doors off benchmarks using GDDR5 RAM are an absolute anomaly then? Like the TITAN supercomputer that uses GPGPUs, for example...how do you accommodate for that?

Still think GDDR5 is far superior when multiple things try to use it? Sony is using it as a marketing gimmick at best, and I'm not convinced it's the best choice for a shared-access scenario. Like I said before, I'm sure it will be a great gaming machine, and incredibly fun. But to think it's going to blow gaming PCs away, or that its hardware is anything magical, is just plain silly.

I think you're either naive, or underestimating the technology if you want my honest, unadulterated opinion...which I will freely give. I would like to lean toward the latter, though I cannot rule out the former entirely.

The technology being employed with the launch of the next-gen APUs has not been seen before in the industry...

http://www.theregister.co.uk/2012/08/29/amd_jaguar_core_design/

Take a look; the estimated performance gain is 15% in single-threaded apps alone...add in the beefed-up FPU, with the capability to do as many SP FPU calculations and DP FPU MAD operations as a Bulldozer-core CPU...and you're not getting something that's nearly as weak as any of you are talking about...

If these were literally Bulldozers we were talking about, Piledriver would be something like a D7 and Jaguar would be something like a D5. You are not seeing the architecture improvements being made.

Why do you think the dev kits shipped with FX-8350s? Just because they could? No, it's because FPU operations and the double 128-bit FPU handlers are the same or similar between the two. Granted, clock speed will be lower on Jaguar, but the TDP is also roughly 90 W lower than the FX-8350's. By increasing memory bandwidth and allowing the APU to share memory address nomenclature across the two internal platforms, you eliminate the time a normal PC spends converting GPU addresses to CPU nomenclature so the two can work together. That's likely good for a 5-10% gain all by itself...add in the architecture enhancements...and suddenly it's not so weak anymore.
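To make the zero-copy argument concrete, here's a toy Python sketch using ordinary system RAM (it has nothing to do with the PS4's actual memory controller, and the 256 MB size is an arbitrary choice of mine): sharing one pool means handing over a reference, while split pools mean paying for a copy of the whole buffer every time the data crosses over.

import time

frame_data = bytearray(256 * 1024 * 1024)   # 256 MB stand-in for per-frame assets

t0 = time.time()
gpu_view = frame_data                        # shared address space: pass a reference
t_share = time.time() - t0

t0 = time.time()
gpu_copy = bytes(frame_data)                 # split pools: copy into "GPU memory"
t_copy = time.time() - t0

# gpu_view and gpu_copy are just stand-in names for what the graphics side would see.
print(f"share reference: {t_share*1e6:.1f} µs   copy buffer: {t_copy*1e3:.1f} ms")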



 

cmi86

Distinguished


This is one major fault I see with a lot of people's conceptions about the PS4: it is not some junk, super-low-end machine. Several highly accredited review sites have gone over the release details from Sony and landed on the conclusion that the graphics in the PS4 are roughly equivalent to a 7870, which is some pretty solid gaming horsepower. Whatever people may say about the CPU, it's got to be good enough not to bottleneck a 7870, so it definitely isn't some garbage netbook chip. When you boil it all down, the PS4 is nothing more or less than a solid mid-range gaming PC that doesn't have to haul around a big OS juggling 60 processes and 1000 threads, which allows it to be lower clocked on the CPU side. http://www.anandtech.com/show/6770/sony-announces-playstation-4-pc-hardware-inside

Don't get me wrong, I'm sure some lazy devs will push non-optimized ports of low-end titles to turn a quick buck, but all the same, a lot of reputable development companies will release quality, fully optimized ports of major gaming titles to the PC, because it's going to be way easier and higher performing than it ever was before, when they had to switch platforms. These are the types of titles I feel will be the majority, and these same titles will be the types to allow the FX chips to shine. At the very worst, the non-optimized ports will just run like they do on the PS4; they just won't stress your full-blown FX as much, since they are designed around a lower-clocked chip. It's still an 8-core FX, you know. Not trying to argue, just trying to logically speculate on this situation, because it really has the potential to be a very (no pun intended) game-changing situation for the CPU market.
 

cmi86

Distinguished
But all the same, we are way off topic here. Let's start a new thread specifically for this subject, and there we can discuss the matter logically. Yes, it's all speculation, but it's definitely a topic worth speculating on a bit.
 

whyso

Distinguished
Jan 15, 2012
689
0
19,060


The dev kits shipped with an 8350 because that was the only 8-core chip AMD had.
 

8350rocks

Distinguished
This article quotes double the CPU performance for Jaguar vs. Bobcat APUs. This is taken from AMD at CES 2013:

http://www.bit-tech.net/news/hardware/2013/01/08/amd-ces-2013/1

The chart above may be wrong...I only looked for relevant comparisons of GDDR5 vs. DDR3...so I wasn't looking at what the other cards in the table were.

Also, this is a link to the exact same discussion, and the conclusions drawn mirror my own, though with a more technical explanation as to why...

http://www.techspot.com/community/topics/whats-the-difference-between-ddr3-memory-and-gddr5-memory.186408/

Also, these articles discuss the differences further, and frankly...as I have said, slightly higher latency in a console, where parallelism is far easier to take advantage of, will not be an issue. Additionally, the far better bandwidth will mean far greater access to data between the CPU and GPU.

http://www.digitaltrends.com/gaming/a-look-inside-the-playstation-4/

Honestly, the setup sounds more like a very custom GPGPU configuration for the PS4 than an APU. Considering the direction of AMD's technology path, this is really not at all surprising. GDDR5 will be a boon to a console, where developers will be able to utilize the maximum memory bandwidth, and latency will not be an issue because of the accessibility of the hardware without interference from software.

If this were a PC, I would say the CPU might be slightly hindered by the higher latency, though considering the information and the accessibility of the hardware, I doubt it will be an issue at all.

Additionally, the GPU paired with Jaguar is supposed to be GCN from what I understand, with 18 GCN compute units and 1152 shaders...it will be no slouch. The advantage of the unified memory will be interesting as well. When the PS3 released, the CPU was moderately impressive at the time, but the GPU was based on an Nvidia G70, which was pretty commonplace. No one was really talking about poor performance there, when everyone knows that game performance is generally based far more on GPU performance than on CPU performance.
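(For reference, the 1152 figure falls out of the usual GCN layout of 64 stream processors per compute unit; that 64-per-CU number is my own assumption, not something from the articles above.)

compute_units = 18
shaders_per_cu = 64          # standard GCN compute-unit width, assumed here
print(compute_units * shaders_per_cu)  # 1152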

I do think this will push parallelism into PC games very quickly though.


 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


It seems you did not pay enough attention to this part of my message:

In fact, FX chips are used in both the PS4 and next Xbox development kits.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The next consoles will be eight-core. This is confirmed, not speculation. The development kits for those consoles use eight-core FX chips. Those kits are being sold to game developers. Again, no speculation.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Except that game developers have already provided demos where the "miracle hardware" was able to compete with an i7 (Ivy Bridge with HT enabled) + 16 GB RAM + GTX 680, and this is not the maximum the "miracle hardware" can do...
 

Traciatim

Distinguished


Lol, yeah right. With that garbage hardware in the PS4... or with a dev kit? You claim the PS4 has an FX chip in it, but it doesn't. You should stop spreading lies. It's a Jaguar-based chip, like a netbook chip with more cores glued together.