i5-4690k or i7-4790k


FunkyFeatures

Reputable
Mar 3, 2014
859
0
5,060
The i7 has hyperthreading, the i5 does not.
The i5 starts out at 3.5 GHz, the i7 at 4.0 GHz.

The i7 is more expensive, the i5 is cheaper.
What will you be using the CPU for? If you will render and such, go for the i7, but for pure gaming the i5 is just as good.
 


It's Intel's way of ripping you off for hundreds of dollars more lol
 


Agreed. But there has to be some sort of tiering (though I wish there wasn't that much of a price gap).
 


Yeah, and I hope we all realize that if AMD had never come around, Intel would still sell that crap for $500. You're still getting a bit of a poke in the eye though; hyperthreading is hit or miss for games, and mostly helps people who do a lot of video rendering and editing, or 3D scenes/animation.
 

MightyLion

Reputable
Jun 7, 2014
114
0
4,680


naa xD
 


Its dedicated threads are a better multitasking solution than the i7's hyperthreading, and at a fraction of the cost. Just saying.
 


I won't get into the differences, but performance-wise a hyperthread generally equates to about 30% of a full thread.
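As a quick sketch of that rule of thumb (the 30% figure is this poster's estimate, not an official spec, and the function name is my own):

```python
# Back-of-the-envelope estimate under the "a hyperthread is worth ~30% of
# a full thread" rule of thumb quoted above. Illustrative only.
def effective_threads(physical_cores, smt_per_core=2, ht_factor=0.3):
    """Each physical core counts as 1; each extra SMT thread as ht_factor."""
    extra_threads = physical_cores * (smt_per_core - 1)
    return physical_cores + extra_threads * ht_factor

# i7 4790k: 4 cores + 4 hyperthreads -> roughly 5.2 "full" threads
print(effective_threads(4))
# i5 4690k: 4 cores, no hyperthreading -> 4.0
print(effective_threads(4, smt_per_core=1))
```

Which matches the ranking: the i7's edge over the i5 is real but modest, nothing like a doubling.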

For gaming: i7 4790k (narrowly, often no difference) > i5 4690k > FX 8350.
 

logainofhades

Titan
Moderator


At least we have the poor man's i7, a.k.a. Xeon E3 1230 v3. :lol:
 

FunkyFeatures

Reputable
Mar 3, 2014
859
0
5,060


I love me some poor man's i7 :D
In my opinion, it's the best i7 if you're not overclocking
 


The 8320/50 has 4 modules; each module houses one FPU (floating-point calculations) and two ALUs (arithmetic/logic units). The "threads" everyone generally refers to are the ALUs. The FX ends up having 8 dedicated ALUs among its "cores" or "modules," but it still has to share resources when it comes to the FPU and other parts of the processor.
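A toy way to see why the shared FPU matters (unit counts from the post above; the model itself is my own oversimplification, not AMD's documentation):

```python
# Toy throughput model of the FX module design: 4 modules, each with
# 2 integer ALUs but only 1 shared FPU. Illustrative only.
MODULES = 4
ALUS_PER_MODULE = 2
FPUS_PER_MODULE = 1

def busy_units(threads_per_module, kind):
    """How many execution units can be kept busy chip-wide."""
    units = ALUS_PER_MODULE if kind == "integer" else FPUS_PER_MODULE
    # A module can't service more threads than it has units of this kind.
    return MODULES * min(threads_per_module, units)

print(busy_units(2, "integer"))  # 8 threads of integer work: 8 ALUs busy
print(busy_units(2, "float"))    # 8 threads of FP work: only 4 FPUs to share
```

So with all 8 threads doing integer work the FX really does behave like 8 cores; with all 8 doing floating-point work, pairs of threads queue up behind each module's single FPU.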

Hyperthreading on Intel chips is a slightly different method of achieving the same result. The i5 has 4 ALUs, but they are better, and each core has its own FPU (maybe even two; I know Intel is on its FPU game). To turn the i5 into the i7, Intel implemented hyperthreading, which makes each core act more like 1.5 ALUs instead of 1 (performance-wise that makes it more like 6 threads rather than 4 total; Windows will still report the i7 as "8 threads," though, because that's how the OS counts logical processors).
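You can see that "8 threads" figure for yourself: the OS reports logical processors, not physical cores, and says nothing about how much throughput each one actually delivers. A quick check with the Python standard library:

```python
import os

# os.cpu_count() returns the number of *logical* processors: on an
# i7 4790k that is 8 (4 cores x 2 hyperthreads); on an i5 4690k it is 4.
# It does not tell you how "full" a thread each logical CPU really is.
logical = os.cpu_count()
print(f"OS sees {logical} logical processors")
```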

As for how HT works: if you are working on two tasks, call them A and B, the core will work on A until it reaches a stopping point for whatever reason. That stall lasts a fraction of a fraction of a second, but in processor terms it's a lot of time. So while task A is stalled, the chip quickly switches to task B until A's stall is resolved; then when B gets held up for whatever reason, it switches back to A. This back-and-forth continues in concert, completing tasks A and B much faster than handling each one on its own, back to back.
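That back-and-forth can be sketched as a toy simulation. The stall patterns below are made up, and this is a cartoon of SMT (real hardware interleaves at instruction granularity), but it shows why overlapping stalls finishes sooner than running A then B:

```python
# Toy model of SMT latency hiding: each task is a list of steps, and a
# "stall" step leaves the core idle (e.g. waiting on memory) unless the
# other hardware thread has real work to run in the meantime.
def run_alone(task):
    return len(task)  # every step, stall or not, costs one cycle

def run_interleaved(a, b):
    ia = ib = cycles = 0
    while ia < len(a) or ib < len(b):
        cycles += 1
        if ia < len(a) and a[ia] == "work":
            ia += 1
            if ib < len(b) and b[ib] == "stall":
                ib += 1  # B's stall resolves while A is working
        elif ib < len(b) and b[ib] == "work":
            ib += 1
            if ia < len(a) and a[ia] == "stall":
                ia += 1  # A's stall resolves while B is working
        else:
            # both threads stalled (or finished): the cycle is spent waiting
            if ia < len(a): ia += 1
            if ib < len(b): ib += 1
    return cycles

A = ["work", "stall", "work", "stall", "work"]
B = ["stall", "work", "work", "stall", "work"]
print(run_alone(A) + run_alone(B))  # back to back: 10 cycles
print(run_interleaved(A, B))        # interleaved on one core: 6 cycles
```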

For games, they generally have to be written in a way that utilizes hyperthreading. That said, only some games see a boost from it; some see none. And games are more graphics-dependent than CPU-dependent anyway (a good balance matters, but for now the graphics card is more important than the CPU).

For other tasks that are incredibly intensive on the CPU alone, hyperthreading starts to shine. This generally means video encoding/processing and rendering huge 3D animations or scenes. That is all calculation, and the graphics card generally has little to do with it (CPU + GPU rendering is relatively new; until now it has been a CPU-only job). Things like lighting, hair, and wind all get calculated during the rendering process, and each hair, for example, can be treated as a single item, so 3D renders are packed with physics-based operations. The computer has to calculate which direction each hair should blow and how the light will affect it, and that is much more intensive on the CPU than gaming will ever be.
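A hypothetical per-strand calculation makes the point (the formula is invented for illustration, not real renderer physics): each strand is an independent chunk of arithmetic, which is exactly why this kind of workload splits cleanly across cores and threads.

```python
import math

# Made-up per-hair-strand deflection for one frame of a 3D render:
# wind pushes the strand, stiffness resists it. Pure CPU arithmetic.
def strand_deflection(angle, wind_speed, stiffness=2.0):
    return wind_speed * math.sin(angle) / stiffness

def frame(num_strands, wind_speed):
    # Each strand is independent of every other strand, so in a real
    # renderer this loop is the part farmed out across all cores/threads.
    return [strand_deflection(i * 0.001, wind_speed) for i in range(num_strands)]

deflections = frame(100_000, wind_speed=3.5)
print(len(deflections))  # 100000 independent results, one per strand
```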

Even still, the 8350 trades blows with the i7 for all tasks OTHER than gaming. The reason Intel gets more support in gaming threads is that most games (still) don't thread beyond 4 cores, with a few exceptions like Crysis 3 and BF4. But the future is moving toward more cores; take that how you will. When DX12 does come around, the FX 8350 will be better than the i5 4670k for gaming.
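The "games only thread to 4 cores" point is really Amdahl's law: if only part of a game's work runs in parallel, extra cores past that point buy very little. A quick sketch (the parallel fractions below are hypothetical, just to show the shape of the curve):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
# fraction of the work that can run in parallel across n cores.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A game with, say, 60% parallelizable work (hypothetical figure):
print(round(speedup(0.6, 4), 2))   # 4 cores
print(round(speedup(0.6, 8), 2))   # 8 cores: barely better
# A well-threaded renderer (say 95% parallel) keeps scaling:
print(round(speedup(0.95, 8), 2))
```

That is why 8 FX threads don't help much in a 4-thread game today, and why better-threaded engines (the DX12 hope) change the picture.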

I have the FX 8320 running at STOCK (3.5 GHz, and the CPU doesn't turbo during these tests). Even though I can overclock with my setup and cooling, I choose not to because it's not really necessary. This benchmark compares it to other popular setups (each rig is uploaded from a real user's home; none of these baselines are theoretically calculated, so it's all real users comparing amongst each other). The program used is PerformanceTest 8.0, full version.
http://imageshack.com/a/img855/5489/z3es.jpg


Yes, the 8320 is leaving the i5 in a dust cloud for everything other than single-core performance (obviously), so that's why people have generally said the i5 is better for gaming.

What is evident is the 8320 and i7 4770k trading blows with each other throughout. Even the FPUs on the 8320 match the i7's FPUs. But for gaming right now the FX sees more frequent frame dips, and it dips to lower frames than the Intels do when they happen to dip. Games from 2010-2013 will run much better on an i5. In 2014 it's really level footing between the 8320 and the i5 4670. In 2015 (the year DirectX 12 comes out) the 8320 will take the lead over the i5; I'm very confident of that (we won't know until it's here, of course). DX12 is the first DirectX API since, I think, 9 or 10 to be written from scratch, and its main focus is better gameplay by utilizing more cores. The PS4 and Xbox One both have 8 cores, so console games are already being written for many threads, which makes porting much easier and means much better performance.

In my own personal opinion, I think everything is going to move toward more cores, because single-core performance and microarchitecture have their limits (heat, really). So we have no choice but to code programs, games, and apps to utilize as many cores as are available. Windows 8 DOES have *better* multicore support, but true multicore support would warrant rewriting the whole OS from scratch.

Either that or we will have to look into different materials and better manufacturing processes; silicon can only get so hot. Ultimately our future will be a combination of these two realities, because why not have both? Stronger cores, and more of them.
 
Solution

all stalked out

Honorable
Jul 3, 2013
46
0
10,540
Hi. If you want the best results then the choice is obvious: thanks to the base and boost clock increases and the additional features enabled (VT-d, TSX-NI), the 4790k is now the best choice for both overclockers and non-overclockers. If you can afford the extra, go with the 4790k.
 


Hmm, the FX 8320-70 is still an older CPU. Sure, it will do better with DX12, but I still doubt it would be faster than the i5.

However, when AMD releases its next-gen FX CPUs next year, I have no doubt those will be faster than Haswell.
 


all stalked out

Honorable
Jul 3, 2013
46
0
10,540
@Beezy, you're clearly deluded. AMD doesn't currently have anything that can perform and compete, in or out of gaming. A 9590 can't, yet you're claiming an 8320 can, and your only evidence is results from AMD-friendly programs. Plus you're claiming DX12 is going to let the FX catch up in games; if that were the case it would have already happened with Mantle titles. The improvements have been negligible at best, so I wouldn't hold my breath for the leaps and bounds the FX range needs to catch up. It's a dead architecture on a dead platform. AMD should have something competitive in 2016, but the FX stuff is useless and forced me to move from my old Phenom II to Intel. Also, all the "theoretical performance wins" you claim Intel gets aren't that at all. Do some real research instead of stitching people up with claims the FX range can't back up.
http://home.anandtech.com/bench/product/1289?vs=1260
 


lol this thread got kinda necro'd, eh?

I think it's very likely that the i5 will remain faster for games, but not because of what it is per se; rather, developers will be lazy and underpaid and won't optimize more than they need to. In properly threaded environments the FX is already faster than the i5. Age matters for instruction sets too, but I'm sure you can find old Xeons that would outperform a new i5 4690k, so age isn't the whole story. For example, the FX chips can handle the FMA4 instruction set while Intel only does FMA3, even if not many programs use it (don't be confused by the numbering; I think FMA4 actually came before FMA3, regardless of when and how each was implemented).
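For what FMA actually buys you: a fused multiply-add computes a*b + c in one instruction with a single rounding, instead of rounding once after the multiply and again after the add (FMA3 vs FMA4 differ only in operand encoding, not in this math). A small demonstration, using exact rational arithmetic to stand in for the fused result:

```python
from fractions import Fraction

# a*a + b, where b is (almost) -a*a: the interesting bits of the result
# sit below double precision's 52-bit mantissa, so rounding the multiply
# first wipes them out.
a = 1.0 + 2.0**-30
b = -(1.0 + 2.0**-29)

two_step = a * a + b  # multiply rounds first; the tiny residual is lost
fused = float(Fraction(a) * Fraction(a) + Fraction(b))  # one rounding

print(two_step)  # 0.0 -- the residual was rounded away
print(fused)     # 2**-60, which is what a real FMA instruction returns
```

Exactly: a*a = 1 + 2**-29 + 2**-60, and that trailing 2**-60 is what survives fusion but not the two-step version.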

The FX should see an update around 2015 though, and combined with DX12 picking up steam alongside Windows 10, it could be a huge turnaround for AMD. I'm hopeful, and crossing my fingers lol. I invested in my FX not wanting to upgrade for some time, so I continue to hope software will see optimizations around the FX hardware.
 


Nobody even considers the 9590. The FX 8320 can be had for as cheap as $110, while the Intel 4770k costs $340 on Newegg right now. That's cool lol.

Like I said, in properly threaded environments the FX takes the cake:
http://openbenchmarking.org/prospect/1401165-PL-A107850K704/c8abb70dee982dd494fb1891bd9dc154fa7a7f47

The only thing those AnandTech benches really show is the preference Cinebench has for Intel over the FX, which was done on purpose and is biased; the FTC went after Intel over it. But it's not like Intel ever used underhanded business practices in the past to make its chips appear faster and better than the FX, right? Remember the Intel compiler fiasco?

http://3d-coat.com/forum/index.php?showtopic=14916

http://www.agner.org/optimize/blog/read.php?i=49#49

So go ahead and try to make me look a fool. I support AMD as a business, morally speaking; they believe in open standards. If they had the money Intel has, they would have their own fab process and foundries too. So go ahead and keep shilling for a global CPU hegemony via Intel. In the past, before AMD could compete, Intel GROSSLY overpriced its CPUs, so be thankful for AMD.

So I'm fine living in a world where I have to wait 2 more seconds for a program to finish, or get 5 fewer frames in a game. Better that than a single CPU company that withholds progress until it sees fit.