i7-4790K vs i7-5820K vs i7-6700K (Skylake)

DukiNuki

Distinguished
Aug 21, 2011
986
0
19,060
Hey guys. I'm about to build a new computer, but I'm stuck choosing between these powerful CPUs, each of which has its own strengths and weak points.

The i7-4790K has 4 powerful cores, which is what games really take advantage of over the 6 weaker cores of the i7-5820K, but who knows if it will stay like that for long? Maybe newer games will benefit from 6 cores. The i7-4790K, despite having fewer, stronger cores, is rather old and does not support DDR4, which is becoming the new standard soon. The i7-6700K comes with all-new tech, a new architecture, and a 14nm design, but it still has 4 cores, which is not as future-proof as the i7-5820K. By the way, according to the latest i7-6700K benchmarks leaked all around, the i7-5820K clearly does better than both the i7-6700K and the i7-4790K in big titles like GTA V and Battlefield 4 and such. You can just google i7-6700K benchmarks and check for yourself.

Anyway, these three CPUs might never show any difference in gaming, but still, which one is better to go with? I want to grab something and not touch it for a pretty long time, say 3 years, no matter what they might release next, because CPUs don't affect games as much as GPUs do, and Intel CPUs (no matter how old) hardly ever bottleneck top-end new GPUs. So should I wait for Skylake, which is slower than the i7-5820K, or what? All I'm doing is hardcore gaming, no editing and stuff, so pick the best from these 3.
 

InvalidError

Titan
Moderator
Multi-core CPUs have been around for 10+ years and still only a minority of software makes significant use of more than two cores. It is unlikely typical games will grow beyond requiring quad cores before the i7-5820 becomes obsolete, so I would not bother losing sleep over that.

DDR4 is still considerably more expensive than DDR3 and based on how little performance scaling there is from increasing clocks on DDR3, there is little reason to believe DDR4's bandwidth will produce significant performance gains except while using the IGP, so I would not lose sleep over that either.

The i5-2500k is still a very viable gaming CPU by today's standards even at stock clocks, so whichever i7 you buy today will likely remain viable for 5-7 years.
 
Solution
I see games using quad-core CPUs now. People rightfully think multicore programming is hard; it is. The hard part is creating the first and second threads; after that, creating more threads is simple. Once a game (or any program) can run efficiently on a dual core, it's significantly easier to add more threads.

However, I don't see the industry pushing this yet. Developers rightly notice that 6- and 8-core Intel CPUs are still out of the reach of most gamers, and they will instead program for four cores or fewer for the foreseeable future. Nearly no one would play a game that required an i7-5820K to run well, simply because the cost is still very high compared to quad-core CPUs.

Like InvalidError said, the i7-5820K will be obsolete before games recommend 6/8-core CPUs.

I would wait for Skylake just so the prices on the current generation drop. It's only 1 to 3 months before Broadwell comes out and 4 to 6 months for Skylake. As an added bonus, we will know more about AMD Zen by the time Skylake hatches.
 

gerr

Distinguished
Apr 1, 2008
503
0
19,060
I wouldn't look too much into "future proofing" your new system, as most people don't keep their systems much more than 3-5 years. Build the best PC you can for RIGHT NOW! For me, that's an easy choice: the i7-4790K with DDR3 memory. Like you said, 4 faster cores will generally outperform a slower 6+ core CPU in most games now. Plus, most games are GPU-limited, so take the money you would save with a Haswell i7 and put it into better GPU(s) and/or monitor(s).
 

InvalidError

Titan
Moderator

I wouldn't say that.

The first threads you create in a typical application are there to delegate tasks and avoid stalling the UI thread, but most of those threads do relatively little work. When you have time-consuming tasks, you run into the additional hurdle of figuring out how to factor them as efficient multi-threaded problems, and many algorithms do not scale well beyond a handful of threads. Quite a bit of research and many a PhD thesis have gone into finding more threading-friendly alternatives to common constructs.

Much of the time, implementing threading beyond running unrelated or only loosely related tasks in parallel is simply not worth the trouble.
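To make that concrete, here is a minimal Go sketch (the two task functions are made up purely for illustration) of the easy case described above: running unrelated tasks in parallel. Because the jobs share no data, there is nothing to coordinate beyond waiting for both to finish.

```go
package main

import (
	"fmt"
	"sync"
)

// sumTo and countVowels are unrelated jobs: they touch no shared
// state, so they can run in parallel with no locking at all.
func sumTo(n int) int {
	s := 0
	for i := 1; i <= n; i++ {
		s += i
	}
	return s
}

func countVowels(s string) int {
	c := 0
	for _, r := range s {
		switch r {
		case 'a', 'e', 'i', 'o', 'u':
			c++
		}
	}
	return c
}

func main() {
	var wg sync.WaitGroup
	var sum, vowels int // each goroutine writes its own variable

	wg.Add(2)
	go func() { defer wg.Done(); sum = sumTo(1000) }()
	go func() { defer wg.Done(); vowels = countVowels("multithreading") }()
	wg.Wait() // the only coordination needed: wait for both

	fmt.Println(sum, vowels) // prints: 500500 5
}
```

The moment the two tasks need to read or write the same data, this simplicity disappears, which is exactly the point being made above.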
 

DubbleClick

Admirable
Exactly. Depending on the programming language, you can spawn threads with one or two lines of source code. Does that make sense? Sometimes, for unrelated tasks that need to run in the background; usually not.

The challenge is finding ways to create threads with as equal a load as possible. In games, this often gets hard due to constantly changing circumstances and a lack of predictability, which creates a need for deterministic instruction execution. Your CPU can't first calculate where you will step in 10 seconds and then where you step now; that doesn't make sense, and awkward position changes could occur even if movement were predictable.

Therefore, you try to offload as many separate tasks from the main thread as possible: a trading post, an ability-changing system, the user interface, calculation of damage and health changes after an attack, mana/energy control, and so on. It's very common for games to have 50+ threads working. Those, however, all take very little CPU time, so the real challenge is finding ways to split up the CPU-heavy tasks such as physics calculations. There you run into problems: for example, if you split X-, Y-, and Z-axis movement into separate threads and the Y position gets calculated before X and Z, you'll end up in the air or inside the ground on your next step. So you need to coordinate the threads to still get the behavior you want, which means that unless they always take exactly the same time to finish, a fast one might end up waiting on a slow one, and that shrinks the performance increase you get from having two or three cores running at once instead of one. It's hard to find things you can parallelize efficiently without much need to schedule them, because once scheduling is involved, you run into unwanted scenarios (bugs) far more often than when you only have one thread to deal with.

You might actually run out of tasks that can be offloaded before your game makes efficient use of X cores. I'd even argue that creating the first set of offloaded instructions is easier than the following ones, because you'll eventually have trouble finding something where controlling it in separate instances is worth it performance-wise and doesn't add too much extra code.

By the way, here is Go code for incrementing a number 1,000,000 times using 10 goroutines:

package main

import "fmt"

func main() {
    var counter int
    for i := 0; i < 10; i++ {
        go incr(&counter)
    }
    fmt.Println(counter)
}

func incr(a *int) {
    for l := 0; l < 100000; l++ {
        n := *a
        n++
        *a = n
    }
}

The result is going to be some small number, though, nowhere near the expected 1,000,000. That's because the main goroutine finishes before the others do. Now, you can add a channel to wait for all goroutines to finish:



package main

import "fmt"

var done = make(chan bool)

func main() {
    var counter int
    for i := 0; i < 10; i++ {
        go incr(&counter)
    }
    for i := 0; i < 10; i++ {
        <-done
    }
    fmt.Println(counter)
}

func incr(a *int) {
    for l := 0; l < 100000; l++ {
        n := *a
        n++
        *a = n
    }
    done <- true
}

...And the result still isn't going to be 1,000,000, because while one thread reads the value and calculates, another thread might read the same value, do the same increment, and write the same value back again.
You'll need to coordinate the threads using semaphores, locks, channels, or whatever (or, in this example, simply use an atomic increment).
 

DukiNuki

Distinguished
Aug 21, 2011
986
0
19,060
:D You lost me.

Thanks for the answers, by the way. But don't you think the i7-5820K is more future-proof (can keep you running longer)? First, you're already updated to DDR4, so you won't be waiting for Skylake, and you won't eventually see games and apps taking advantage of DDR4 while you're stuck with your older DDR3. Second, maybe games will start using 6 cores in the near future. AMD hexa and octa cores have been around for pretty long now, so it's not that new; games that take advantage of more cores might not be that far off.

By the way, is there ANY game that will have trouble running on a monster like the i7-5820K? Can anyone see any difference between it and the older i7-4790K? They both do great with a powerful GPU, so why not take the newer one and not upgrade to Skylake? Many are tired of waiting. I just built my i7-5820K with a GTX 980 in PCHound for around $1700. I know it's still expensive, but it's affordable. What do you guys think? How about those benchmarks showing the 5820K beating the 6700K in GTA V and Crysis 3 and such?

 

InvalidError

Titan
Moderator

By the time either of those two things becomes a significant concern, the i7-5960X and DDR4-3200 will likely be obsolete anyway.

It boils down to buying a $2000 system today that might last seven years, or building a $1000 PC today that will last four or five years and then another $1000 PC at that point that will exceed the original $2000 PC's specs and last another four or five. The best overall bang-per-buck is building two systems.

After three or four years, you will start itching to upgrade the system anyway due to all the updated I/O, regardless of how "future-proof" your processing power was.
 

DubbleClick

Admirable

There is no advantage of DDR4 over DDR3 speed-wise. It has higher theoretical bandwidth, but you're never going to use more than a fraction of that anyway.



Quad-core CPUs have been in the mainstream lineups of Intel and AMD for roughly 10 years. Most games still don't make efficient use of 4 cores (tl;dr of my last post: harder to code, more vulnerable to bugs, no reward for the devs), and I don't see that changing any time soon. If you want an "upgrade" for gaming, you'll be looking at a CPU with better single-core performance than the i7-4790K, which does already exist (the i7-6700K) but won't be available for purchase for the next few months.


Oh, so much this.
 

DukiNuki

Distinguished
Aug 21, 2011
986
0
19,060
"By the time either of those two things become a significant concern, the i7-5960X and DDR4-3200 will likely be obsolete anyway"

Well, when that time comes, I'll already have fast DDR4 and a 6-core i7 CPU (meaning I'll already be up to date with that time's standards), no matter if the i7-5960X is outdated by then. Am I right? People are still using, and are still satisfied with, their old i5-2500K with high-end GPUs. Well, I can wait for Skylake or even Cannonlake, or just go for the cheaper i7-4790K or the more expensive i7-5820K; bottom line, I don't think they will make a noticeable difference in gaming as long as you have a great GPU and your CPU is not bottlenecking it.

CPUs hardly ever bring big performance gains. They don't work like graphics cards, so I guess "outdated" doesn't quite apply to CPUs; it's just the number that changes, plus some new features.
 

DukiNuki

Distinguished
Aug 21, 2011
986
0
19,060
No, I have not. That's why I came here. I just don't get why nobody is voting for the i7-5820K. I'm still waiting for Skylake. Sorry if I sound like a jerk who is not listening. By the way, I'll probably go for the i7-4790K, because the more I read, the more I get that it's a better deal than the i7-5820K. Thanks anyway.
 

DubbleClick

Admirable
Because games don't put a continuous, well-distributed load across more than 4 cores, for the reasons I mentioned in previous posts, and therefore scale better with faster frequency than with more cores. They don't benefit from DDR4 RAM either, so with the i7-5820K you're wasting big money for no performance gain.
 

InvalidError

Titan
Moderator

Because for most gaming situations, six cores, HT, and DDR4 yield little to no extra performance at a significantly higher cost compared to an i5 system. For most gamers, the i7-5820K and its expensive support components are simply a waste of money with little foreseeable benefit.

Many/most gamers who buy the i7-5820K also use their computers for large video-processing, 3D-rendering, compiling, or simulation jobs and need the extra cores for that. They buy the i7-5820K/5930K/5960X to run workstation-style jobs and make their workstation double as their gaming rig, or vice versa. Either way, they have immediate, significant uses for the extra cores.

And you also have the few who want the 5930K/5960X for the 40 PCIe lanes to run 8x8x8x8 CF/SLI in their extreme gaming rigs.
 

DukiNuki

Distinguished
Aug 21, 2011
986
0
19,060
Got it :D Thanks a lot. A few days ago I read about Crysis taking advantage of HT, and there was a 20 FPS difference between HT on and off, so I thought maybe a few CPU-heavy games like Arma 3 would take advantage of it. But it's better not to waste my money on that and get 4 stronger cores instead. Sorry if I was rude or an idiot in any way.
 

H11poc

Reputable
Jul 25, 2015
2
0
4,510


I bought a 5930K exactly for this purpose: video rendering. Tests show all six cores in use (around 80% each), even when using the 290X's OpenCL encoding as well. This proves a good place for the six- and eight-core processing units to do what they are built to do. However, to stay on topic, I must add that I have kept my 2600K build in case I ever decide to play games again in the future. I would expect the 2600K at 4.5 GHz to perform better than the 5930K at stock at this present time.

When I render video, the instructions of my task are set; I do not change anything except let the process run. That would seem far easier to spread across multiple cores than a continuous stream of instruction changes during gameplay. I of course say that in simpler terms; how it actually works has been explained pretty well by another poster above.
 

jddem

Reputable
May 10, 2015
41
0
4,530
I bought the 5820K; however, gaming was my second need. Video editing (which does benefit from multiple cores) was my priority. I went with the X99 platform because of the additional SATA ports and the M.2 and eSATA options, since I wanted to run a variety of SSDs (eventually settled on an Intel 750 PCIe and a couple of Samsung EVOs) and have the option of adding an M.2 or eSATA drive.

And I have played every game thus far with no issues.
 

enewmen

Distinguished
Mar 6, 2005
2,249
5
19,815
Seconds after I read the review of the 6700K, I bought an ASRock X99 ITX, Crucial 32GB (2 x 16GB) DDR4 ECC, a Samsung SM951 M.2, and a Xeon E5-1650 v3 (I already have a decent GTX 680).
The 1650 v3 (like a 5930K with ECC) has better multithreaded performance than a 6700K AND lower load temps, so it seemed like a no-brainer if I could afford it, even though there are more cost-effective solutions. I will use the new rig for real work and not just games and web surfing. The 6700K's temps may come down as the platform matures, but I don't think by much. I spend every day reading these kinds of articles, and I can't see myself with a new 4-core CPU after 7 years of using a Q9550.
With a 3.8 GHz turbo speed, it should work well with games and should show bigger gains with the more multithreaded DX12.

Just my 2 cents worth.
 

darkmeiun

Reputable
Aug 7, 2015
1
0
4,510


I would add a small note for choosing the CPU in this case: the main difference with the 6700K is that it uses a different chipset, which brings quite a few changes in PCIe lanes. That gives a lot better performance when you want to use an M.2 drive, SLI/CF, USB 3.1, and similar situations where PCIe lanes are involved.
 

InvalidError

Titan
Moderator

No.

Just because the sockets look nearly identical except for one extra pin does not make them compatible. Power delivery to the CPU on Skylake is completely different from Haswell, since Skylake moved the CPU core voltage regulator back to the motherboard.
 

GObonzo

Distinguished
Apr 5, 2011
972
0
19,160
I didn't write that they looked identical. I quoted the motherboard's specification, which says it supports: Core i7 / i5 / i3 / Pentium / Celeron (LGA1150). Since it specifically states that it supports the LGA1150 version of Celeron, that should imply it supports the other LGA1150 "i" series listed.

 

InvalidError

Titan
Moderator

If you see an LGA1151 motherboard advertised as compatible with LGA1150, that would be because someone copy-pasted the web page or database entry for the motherboard and forgot to update the CPU list with LGA1151.
 

GObonzo

Distinguished
Apr 5, 2011
972
0
19,160

MSI = Z170A, Z170Gaming
ASUS = Z170Deluxe
GIGABYTE = Z170XP
Multiple listings from Newegg. The first spec paste earlier was from MSI's website showing LGA1150 compatibility. Most of the boards I'm looking at have since changed their Celeron support to LGA1151 only, but the others only state i3, i5, i7, with no 6th-gen limitation stated.
 

InvalidError

Titan
Moderator

They did not "change their Celeron support to LGA1151"; the "(LGA1151)" at the end of the CPU support list is meant to apply to the whole list, and it is completely redundant, since LGA1151-only / 6th-gen-only compatibility (the only CPUs you can get for LGA1151) is already a given due to the socket's pin count.

A copy-paste error/oversight does not magically make LGA1150 compatible with LGA1151.