Solved

Xeon 1231v3 vs locked Core i5's

Tags:
  • Xeon
  • CPUs
  • Intel i5
October 9, 2014 11:43:03 AM

I know if I buy a Xeon the multiplier is locked, so instead I'll compare it to the locked i5-4670. They're both 3.4GHz CPUs with 3.8GHz turbo. The Xeon runs $256, while the i5 is $219. The Xeon has 8MB of L3 cache vs the i5's 6MB. But for me the real difference is hyperthreading. What I don't understand is why hyperthreading is always brushed off as meaningless when comparing the merits of the i7 to the i5 for gaming, yet i3s are usually recommended as better gaming CPUs than the G3258. If HT matters now when comparing an i3 to a G3258, doesn't it make sense to have it as future-proofing 2-3 years down the line? The cheapest i5s I can find on Newegg are $190, and if I'm going to spend that much on a CPU, I'd rather spend a little more and give it some future-proofing. Thoughts?


October 9, 2014 11:51:02 AM

For gaming only? Get the i5. All this future-proofing stuff is really silly; get what you need now and worry about the future later. A locked i5 will be enough for any single or dual card solution right now.
October 9, 2014 12:58:10 PM

Hyperthreading on an i3 is important since a lot of games already utilize 4 threads. Most don't support more, though, so from an i5 to an i7 you won't see much difference.
As for locked i5 vs Xeon, my choice would be the Xeon, easily. But I'm doing productivity stuff where lots of threads are used and the workload can be parallelized. If you do that too, go for the Xeon. If you just want to game, the i5 will do fine.
October 9, 2014 1:32:20 PM

DubbleClick said:
Hyperthreading on an i3 is important since a lot of games already utilize 4 threads. Most don't support more, though, so from an i5 to an i7 you won't see much difference.
As for locked i5 vs Xeon, my choice would be the Xeon, easily. But I'm doing productivity stuff where lots of threads are used and the workload can be parallelized. If you do that too, go for the Xeon. If you just want to game, the i5 will do fine.


Here's my dilemma though. If I had asked a similar question 2-3 years ago, would people here have been telling me dual core is fine, who cares about running 4 threads? I would hate to spend in the neighborhood of $200 on a CPU that could be pretty irrelevant in 4 years. What soured me on PC gaming was the constant upgrade cycle. I remember back in mid-1997 having a 200MHz Pentium with MMX and a nice video card and thinking the system was really awesome. A year later it was old, and 2 years later it was a complete dinosaur. Then I built an AMD system with a Palomino core @ 1.5 GHz (Thoroughbred had yet to come out), a really nice Abit board, and an expensive Radeon 8500, and it was great for two years, but then felt old and slow again. So when the Xbox 360 released with incredible hardware for the time, especially for the price, I said to hell with PC gaming and went to consoles.

Now that this generation of consoles isn't too impressive hardware-wise, it seems like a good time to go back to PC gaming. Plus the era of big clock-speed gains is long since over, so I'm not looking at a CPU being outdated two years later, am I? Clock speeds have hit a wall this past 10 years or so, so the only way to grow is to use more cores. And while I know HT will never replace having 8 physical cores, I wonder if it would still give a nice enough boost to keep my CPU relevant for perhaps 5 years instead of 3-4, as games are forced to use more threads. Obviously I'm not going to spend $1000 on an 8-core i7-5960X right now, but I'll be really disappointed if I buy an i5 and in two years 4 threads isn't enough and I have to buy a new board, CPU, and RAM. I know the GPU might need to go in 2-3 years, but I don't want to do a full rebuild when $65 more now could possibly stave that off a year or two.
October 9, 2014 2:42:35 PM

Technical progression hasn't moved that fast since 2008. 2006 was the year quad cores (Core 2 Quads/Athlons) began selling. I got my Q6600, and back then games slowly started utilizing 4 cores. Now it's 2014 and games still utilize 2-4 threads. And as before, they won't start using more until 6-8 cores become mainstream.
Now the first Intel 8-core desktop chip was just released and sells for $1000. Broadwell will be 2/4 cores in the mainstream segment too, and Skylake and Cannonlake most likely as well.

With a Xeon 1231 v3, I'd say you can expect your rig to be relevant for at least 3 years, just like first-gen Core i's are still relevant. And after those 3 years it will be usable for a long time too, just like my Q6600 was.
October 9, 2014 6:10:13 PM

Just talked to a friend who is a game programmer, and he said spending the extra money on the Xeon instead of just getting an i5 for a gaming system would be a waste. He said the extra 2MB of cache would probably give a negligible speed gain, with games not really optimized to use it.

Best solution

October 9, 2014 8:19:13 PM

I see a lot of people claiming games only use X number of threads, but then I saw this video- https://www.youtube.com/watch?v=tvLRZxRL8N8

The guy who runs the channel has a lot of experience with building computers, and is in IT. According to his video, BF4 can use 8 threads... so I'm getting confused. Is this a case of "if we repeat it enough, it becomes true"? Or is Jay wrong in his video? If anyone can prove either side of this debate, I encourage it. I'm not looking for someone to just say "yes, games can use more than 4 threads" or "no, they can't" - I want someone to show some proof. That would help a lot of people figure out their builds much more easily, and it would also give some evidence to support one side of this multi/hyper-threading debate. From a personal standpoint on which chip to buy, though, until there is real evidence either way, I'd say to err on the side of caution: Xeon.

As far as comparing the 4670 to the 1231, I wouldn't be worried about matching clock speeds. Here's a better comparison: i5-4590 (3.3 GHz), i5-4690K (3.5 GHz), and the Xeon E3-1231 v3 (3.4 GHz). In benchmarks where hyper-threading shows its strength, the Xeon walks away victorious. For single-threaded work, however, it isn't too different from either of the i5 chips (marginally better than the 4590, and marginally slower than the 4690K). In fact, if you OC the 4690K, I have a feeling you would get roughly the same performance as the 1231 in games that (in theory) use hyper-threading. This is speculation, of course, but it's definitely a different way to look at it. The real upside with the Xeon is that you have plenty of headroom in case you want to do something like video editing, and that seems to be a growing interest lately. For the extra few bucks, the Xeon is a serious competitor to the 4690K, even taking the 4690K's OC capabilities into account.

Since games currently don't need anything more than what the 4590/4690K can offer, either one would be fine for present-day gaming. While the Xeon is most at home running CPU-demanding tasks, it will give i7 performance when needed for the price of an i5. I think that's a strong selling point, especially if we do see the shift in gaming to 6/8-thread support (if it hasn't already happened). The only reason I don't see it as essential right now is that four threads is all one currently needs to run games... but technology progresses, and it's usually quite predictable when you look at the lineage. I wouldn't be surprised if 8+ thread support is standard in a couple years.

I will say one thing, though: this will NOT future proof your system. In reality, there is no such thing as future proofing, beyond certain aspects - like buying the right motherboard to support the next CPU release, or waiting for a particular GPU release to avoid dating your card within a couple months of purchase. Even then, that really isn't future proofing; it's just buying the right hardware for what you expect to happen. Given the rising popularity of video editing, and other tasks that utilise higher CPU thread counts, the fact that we've always gravitated towards higher CPU performance (including hyper-threading), and the fact that there are so many FX-63xx and 83xx series chips in gaming rigs, I really would expect more threads to be supported in the near future to accommodate the market; again, if it isn't already being utilised. Do I think we will truly need more than four threads any time soon? I really don't know. It doesn't seem like it, but I don't know where things are really headed. If you watch the video, I'd say things are looking good for multi/hyper-threading CPU gamers with their triple/quad cores.

Do what you think is best, but take everything into consideration. My vote is the Xeon 1231, but the i5-4690k would give you more than enough performance for today's gaming standards. If someone can provide some evidence on this multi-thread debate, that may help you make a choice, but I have a feeling you're already leaning towards the Xeon... and I really can't blame you.
October 9, 2014 9:13:09 PM

Take Skyrim for example. It's heavily single-threaded. It'll run just fine stock on a C2D since it's only using 1 thread, 1 core, for the most part. Sometimes it'll use a second core; again, no biggie. Now add some ambitious mods, like 2K or 4K town graphics, flora overhauls, etc. Here the C2D will suffer as a 3rd and 4th core are called into play. A Xeon won't matter here - Skyrim won't use 8 threads - but an i3 with HT enabled will suffice. Skyrim was built for Intel CPUs.

BF4 or Watch Dogs is the opposite. They were built with AMD 8-core usage in mind, and it shows. A Xeon or i7, with 8 threads, will easily hold its own vs any FX 8-core, but the i3s and even i5s suffer in comparison. They simply lack the threads to fully utilize the game engine at optimum performance.

I played a couple of games some years back. When I started, it was on a PII 350 with a Voodoo3 2000 16MB AGP, and the last time I played them was on a P4 3.2 with an ATI X800. The games were Starsiege by Dynamix and Star Wars X-Wing vs. TIE Fighter. The reason I stopped playing them: they were 10-odd years old, no new players, and all the old players had discovered BF, CoD, WoW, etc., so the server hosts shut them down to make room for new, exciting games with lots of players.

What's this got to do with CPUs? Everything. Hardware tech is not going to antiquate older-generation CPUs; software will. I'm currently running an i5 3570K Ivy Bridge at 4.3. Absolutely nothing wrong or dated about it, even if it is 2 generations older than current. Is it strong enough to handle anything out there currently? Yep, easily. Why? Because 4-core Intel is mainstream and will be for some years to come. Some time in the future, 8-core or 4-core+HT will be mainstream, and pure 4-cores will go the way of the C2D. That's all software. I could boot up my old PII 350 with its 1GB of RAM and AGP card running 98SE and it'll work just fine and run all of its currently loaded programs - but nothing new. Software requirements, whether games, IE, or YouTube, just won't allow it. Progress. It's unavoidable.

If you think I'm biased towards the Xeon, you'd be correct. It'll have the longest lifespan with its 8 threads, whereas the 4-core i5 is good for now, and that's about it.
October 9, 2014 9:27:13 PM

I think the arguments on the table are: 1) The Xeon won't give you much performance increase right now, but it will likely give you more wiggle room for performance requirements in the near future. 2) The Xeon isn't going to be very beneficial now, so there's no point in getting it.

If you want "future proofing" (still an inaccurate term), the Xeon is a good choice for the money.
October 9, 2014 9:41:09 PM

It's more like: are you just into gaming? Then getting the Xeon would be wasted, now and later. The Xeon is a 4-core with hyperthreading, which parallelizes computations to get them done more efficiently - and as you can see, that's about useless in games, since they usually focus on real-time work.
If you do productivity stuff, render videos/images, or mine whatever, the Xeon will be faster.

By the way, BF4 may use 8 threads. But an i5 4690K actually performs slightly better than the 5960X - 4c/4t vs 8c/16t.
October 9, 2014 10:16:19 PM

I'm going to go with the Xeon 1230v3 and go low budget on the video card right now (maybe GTX 750 Ti or R7 260x), then upgrade to either GTX 970 or perhaps a successor in a year or so. What sold me was this video where they show an i5-4690k at stock bottlenecking a GTX 780 Ti a little bit in BF4 and a lot in Crysis 3 when compared to an i7-4790k at stock. The frame times jump around a lot in Crysis 3 on the i5 compared to the i7.
http://www.youtube.com/watch?v=b6LUufXCPDM

Seems like I can get a smooth 30-40 FPS now on pretty much any game with this setup, and with an 8-thread CPU I'll still have the headroom to get a smooth 60+ FPS without hiccups in a year or so when I upgrade to a GTX 970.
October 9, 2014 10:52:46 PM

The 4690K isn't bottlenecking the GPU. Bottlenecking is what the Pentium CPUs were doing throughout the video. You probably noticed the frame rates drop to the 0-20 fps range on occasion with those CPUs? That is bottlenecking. The 4690K was still pumping out a decent frame rate the entire time, and it never had a point where the frame rate dropped due to CPU overload. A better visual example of bottlenecking is in the video list in my signature. Just trying to clear that up, because the term is definitely overused.

Hope you enjoy your Xeon, though! I'd go the Xeon route for a gamer as well, if I were building one.
October 9, 2014 10:55:45 PM

Crap, I forgot the 4790K runs a 500 MHz higher clock speed than the i5-4690K. Arggghh. I wonder how much of that smoother frame time variance for the i7 vs the i5 is due to the clock rate, and how much to the 8 threads.
October 9, 2014 11:01:19 PM

The 4690k was doing fine, so I don't see why you're sweating the difference. The Xeon practically sits between the 4690k and the 4790k in benchmarks, and is basically a hyper-threaded version of the 4690k (not in a literal sense, just making a point). The Xeon will likely sit a bit closer to the 4790k in performance on games that utilise hyper-threading.
October 9, 2014 11:03:44 PM

Skylyne said:
The 4690K isn't bottlenecking the GPU. Bottlenecking is what the Pentium CPUs were doing throughout the video. You probably noticed the frame rates drop to the 0-20 fps range on occasion with those CPUs? That is bottlenecking. The 4690K was still pumping out a decent frame rate the entire time, and it never had a point where the frame rate dropped due to CPU overload. A better visual example of bottlenecking is in the video list in my signature. Just trying to clear that up, because the term is definitely overused.

Hope you enjoy your Xeon, though! I'd go the Xeon route for a gamer as well, if I were building one.


I was referring to the frame time spikes into the 30 millisecond range in Crysis 3 in that video. Considering the refresh interval on a 60 Hz monitor is about 16.7 ms, wouldn't frame time spikes into the 30s really interfere with smoothness, even when it's pumping 80-90 fps on average?
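
To put rough numbers on that (a quick sketch in Python; the 60 Hz refresh and fps figures are just the ones from this thread, not measurements):

```python
def frame_time_ms(fps):
    """Average time per frame in milliseconds at a given frame rate."""
    return 1000.0 / fps

def refresh_interval_ms(hz):
    """Time between monitor refreshes in milliseconds."""
    return 1000.0 / hz

budget = refresh_interval_ms(60)   # ~16.7 ms per refresh at 60 Hz
avg = frame_time_ms(85)            # ~11.8 ms average at 80-90 fps
spike = 30.0                       # a 30 ms frame-time spike from the video

# A 30 ms frame is nearly two full refresh intervals, so even with a
# high average fps the spike shows up as a visible hitch.
print(round(budget, 1), round(avg, 1), round(spike / budget, 2))
```

So even at a healthy average frame rate, a single 30 ms frame blows past a whole refresh, which is exactly the stutter you'd notice.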
October 9, 2014 11:54:53 PM

That is a little more in depth than I'm knowledgeable about.
October 10, 2014 12:12:20 AM

I had to go back to that video to see what you're talking about, and I figured I'd clear up a dumb mistake *facepalm*. When I was talking about the frame rate drops, I really meant the noticeable spikes/drops in the frame time. Stupid mistake.

I think I might have found your answer, though; I had a hunch it wasn't the CPU. In the article accompanying the video, there's a review of the Pentium chips. In the 'Alternative Analysis' for BF4 it states, "In both cases, the overclocked Pentium has trouble locking to 60fps whereas the i7 sails through. Most of the other dips appear to be GPU-related - the Core i7 playthrough drops at exactly the same points. " I would assume this is the same case with the Crysis 3 video, because the i7 appears to be doing the exact same thing in that video.

In this case, even if it did make a difference, that would be between you and your GPU. Since they were using the 760 in some tests, I'm assuming that's why you're seeing those results. This shouldn't happen with the 970, or any other higher end card.
October 10, 2014 12:24:26 AM

So I did watch the whole video, and the only thing it really tells you, about 6 or 7 times, is "do not overlook AMD, you won't notice any difference." Once, he mentions BF4 using 8 threads. Great, so be it, yet there is no difference between an i5 and an i7 at the same clock speed.
I don't actually know where you get that "Xeon is between i5 4690K and i7 4790K" information from. For gaming, the Xeon is at i5 4690 level. An overclocked i5 certainly does better than it in games and quite a bit of other software. If a program like BF4 is focused on real-time computations, it may use 8 threads, yet gain no benefit from parallelizing workload at the cost of real-time computing.
The i7 4790K does give slightly better framerates (min/avg/max) than the i5 4690K. However, if you OC the i5 to 4790K speeds, there is no difference anymore. Sometimes the i5 is a frame ahead, sometimes the i7. The CPU isn't the important part anyway in that price range.

So yeah, I'm going to say it once more: going with an i7/Xeon for just gaming is quite pointless. There are many situations where it does make sense, but if gaming is the most important thing to you, an i5 4690K will certainly do better than a Xeon, given you OC it. As for the 4690, you'd simply save $50 at the cost of 15-20% performance in rendering, assuming the program doesn't support Intel Quick Sync. That's actually one advantage over the Xeons: the iGPU. It doesn't matter for CPU rendering or gaming, but for programs that support it, Quick Sync is a hell of a lot faster than normal CPU rendering - about 20 seconds for a 2-minute video instead of 2:30 (1080p, 60fps, 20M bitrate, MP4).
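
For what it's worth, the speedup implied by those render times works out like this (quick Python arithmetic on the numbers quoted in this post, nothing more):

```python
# Render times for the same 2-minute 1080p60 clip, as quoted above:
cpu_render_s = 150   # ~2:30 on the CPU
quicksync_s = 20     # ~20 s with Intel Quick Sync

speedup = cpu_render_s / quicksync_s
print(f"Quick Sync is roughly {speedup:.1f}x faster on this clip")
```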
October 10, 2014 12:32:28 AM

As for the second video, doesn't that show exactly what I have been saying? The difference between the i5 and the i7 is negligible. And if it isn't, to you, an i5 OC'd to the same speed would give the same results. Don't forget the i5 is on a 3.7GHz 4-core turbo, the i7 on 4.2GHz. That's about a 15% difference, while the difference in the video wasn't even that big.

And fps drop for the i7 just the same - sometimes a few milliseconds later, sometimes a bit sooner. Natural variation. I can't see what scares you about the i5, not at all.

By the way, I own the 4790K myself and have experimented a bit with different speeds, the iGPU for rendering/desktop and dGPUs for games, hyperthreading, virtualization, and whatnot. I've personally not been able to detect any difference between 4.0 and 4.2GHz, or hyperthreading on/off, in any game or graphics benchmark I tried. Where I can see the difference is CPU rendering in Vegas and Handbrake, or when coding a bit while running 5-8 game instances with different injectors and interacting client-simulating software. Well, that possibly caters to your use as well, but then we shouldn't have been talking about gaming in the first place.
October 10, 2014 2:39:23 AM

DubbleClick said:
So I did watch the whole video, and the only thing it really tells you, about 6 or 7 times, is "do not overlook AMD, you won't notice any difference." Once, he mentions BF4 using 8 threads. Great, so be it, yet there is no difference between an i5 and an i7 at the same clock speed.
I don't actually know where you get that "Xeon is between i5 4690K and i7 4790K" information from. For gaming, the Xeon is at i5 4690 level. An overclocked i5 certainly does better than it in games and quite a bit of other software. If a program like BF4 is focused on real-time computations, it may use 8 threads, yet gain no benefit from parallelizing workload at the cost of real-time computing.
The i7 4790K does give slightly better framerates (min/avg/max) than the i5 4690K. However, if you OC the i5 to 4790K speeds, there is no difference anymore. Sometimes the i5 is a frame ahead, sometimes the i7. The CPU isn't the important part anyway in that price range.

So yeah, I'm going to say it once more: going with an i7/Xeon for just gaming is quite pointless. There are many situations where it does make sense, but if gaming is the most important thing to you, an i5 4690K will certainly do better than a Xeon, given you OC it. As for the 4690, you'd simply save $50 at the cost of 15-20% performance in rendering, assuming the program doesn't support Intel Quick Sync. That's actually one advantage over the Xeons: the iGPU. It doesn't matter for CPU rendering or gaming, but for programs that support it, Quick Sync is a hell of a lot faster than normal CPU rendering - about 20 seconds for a 2-minute video instead of 2:30 (1080p, 60fps, 20M bitrate, MP4).

I think you're taking things a bit too much out of context, mate. I've not said the Xeon would give better gaming performance, though I may have implied it. That would be my fault, as I meant it in the "future proofing" sense; or really, in the theoretical situation that games will utilise a hyper-threading CPU more than they do currently. I know that a Xeon/i7 won't give any noticeable difference in today's games (if any improvement is to be had at all). As for the video I linked, that was to give people an example of the conflicting evidence I've seen against what you and others have said about games only using 2-4 threads. There was also some minor information regarding where he sees the future of gaming and multi-thread support, which I found relevant to the conversation.

When I said the Xeon is in between the 4690K and the 4790K, I directly stated that it was from the benchmarks I've seen; if you're curious, it was PassMark. I know these aren't going to translate into real-world performance; however, I would think they give a little insight into how games might be handled in the future, if gaming companies start to code their games towards the i7 market. Again, this is all speculation, and I've not said that any of it is irrefutable. I've tried to make that pretty clear, but I guess it hasn't worked.

DubbleClick said:
As for the second video, doesn't that show exactly what I have been saying? The difference between the i5 and the i7 is negligible. And if it isn't, to you, an i5 OC'd to the same speed would give the same results. Don't forget the i5 is on a 3.7GHz 4-core turbo, the i7 on 4.2GHz. That's about a 15% difference, while the difference in the video wasn't even that big.

And fps drop for the i7 just the same - sometimes a few milliseconds later, sometimes a bit sooner. Natural variation. I can't see what scares you about the i5, not at all.

By the way, I own the 4790K myself and have experimented a bit with different speeds, the iGPU for rendering/desktop and dGPUs for games, hyperthreading, virtualization, and whatnot. I've personally not been able to detect any difference between 4.0 and 4.2GHz, or hyperthreading on/off, in any game or graphics benchmark I tried. Where I can see the difference is CPU rendering in Vegas and Handbrake, or when coding a bit while running 5-8 game instances with different injectors and interacting client-simulating software. Well, that possibly caters to your use as well, but then we shouldn't have been talking about gaming in the first place.

This entire conversation (from my understanding) wasn't intended to be about how well games perform today, but speculation as to what hardware might perform better in the future. Maybe that is where this is getting slightly derailed? I haven't been saying that a quad core w/hyper-threading is going to give superior performance in today's games; I'm simply stating that I think the games that are released over the next few years will likely perform better on CPUs that are along the lines of the Xeon/i7, in terms of hyper-threading capability.

Maybe I just wasn't straightforward enough about that? Of course, all of this is speculation; however, we've already discussed how gaming's minimum hardware requirements have steadily climbed, and they will likely continue to demand more than what is currently deemed "enough" for today's games. Is it so far-fetched to assume games will require 8 threads to run smoothly, without hiccups, in the future? Personally, I don't think so; however, I could be proven wrong as new games keep rolling out. As the OP stated, and as I personally agree, CPUs seem to only be growing in core and thread counts; they are growing horizontally (more cores) instead of vertically (more power per core). With that in mind, for games to keep up with current hardware, implementing more threads sounds like the most logical next step over the next few years. With that said, concluding that an i5 will still be fit for gaming after a couple years seems short-sighted. If we had proof that games won't increase much more in threads utilised, that would save a lot of speculating headaches; but all we get in return is "you don't need an i7/Xeon for games right now." It's two different debates going on at once.

What you're saying is relevant to today, and we're not contesting that; we are contesting whether the hardware specs of today are going to hold up as time goes on. I've already said future-proofing is basically nonsense, but choosing the right hardware for the future is a relatively smart thing to do. I don't think the OP is really trying to "future proof" so much, but is more so trying to make the best choice for the long term, in order to potentially avoid having to spend extra after a couple years to keep up with gaming hardware demands; or to avoid overspending today if the games might stop growing 'horizontally' in the future.
October 10, 2014 4:27:00 AM

I thought the debate was about gaming in general, which is what my comments have been about.

As for games starting to properly utilize 6 threads, let's say it will take as long from now as it took from when the first mainstream quad cores were released until they yielded benefit in games. (Of course, six cores are not in the mainstream segment yet, and it will probably take even longer from then on, due to a general slowdown in technical progression since ~2005.)
The first mainstream quad cores released in 2006. Around 2010-2012, games finally began seeing great benefit from 4 cores over 2, if I recall correctly. And yes, we're talking about cores here, not just threads. So if we assume the same happens now, the Xeon won't give better results than the i5 for 4-6 years.

Secondly, hyperthreading doesn't magically double your core count. It sacrifices time to get work lined up, but then saves time executing it. In rendering, that yields a benefit, because the setup takes just a bit of time and the actual work takes much longer - there's a lot of data to chew through.
In games, though, you get tiny bits of work again and again. Those don't take long to execute, so you lose time and then gain it back - not really beneficial, overall mostly +/- 0. There are even cases where games run measurably (not noticeably) better with hyperthreading disabled.

I'm pretty sure that hyperthreading will not give a benefit in gaming for a long time. I mean, why would games stop being responsive and instead let the whole game be calculated at the start? That wouldn't be a game anymore, huh?

And by the way, one more thing: an i5 can actually get to Xeon speeds in completely parallelized workloads by increasing its frequency by 20%. For an i5 4690K, that is 4.45GHz to get to Xeon speeds. Most of them actually go higher.
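
That 20% is an assumed HT throughput gain in fully parallel work, not a measured number, but the arithmetic behind the 4.45GHz figure is simple (the 3.7GHz 4-core turbo is the stock figure mentioned elsewhere in this thread):

```python
ht_uplift = 1.20          # assumed ~20% throughput gain from Hyper-Threading
xeon_allcore_ghz = 3.7    # all-core turbo, same ballpark for both chips here

# Clock an i5 (no HT) would need to match the Xeon's fully parallel
# throughput, assuming throughput scales linearly with frequency:
required_ghz = xeon_allcore_ghz * ht_uplift
print(round(required_ghz, 2))  # ~4.44 GHz, close to the 4.45 quoted above
```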
October 10, 2014 6:04:25 AM

I was referring to the single video where they played 5 different games and showed the results in those two graphs parallel with the G3258 gameplay for the different processors on a GTX 780 Ti. You can see the 4790K outperforming the 4690K pretty significantly at the beginning of the Crysis 3 gameplay at the 1:16 mark. The GTX 780 Ti generally outperforms a GTX 970, doesn't it? But like I was wondering earlier, is the 4790K's 500 MHz stock clock advantage over the 4690K responsible for that difference? If so, I'd probably put the price difference between the Xeon 1230v3 and the i5-4690K towards a 4690K and a good air cooler, and hope I won the silicon lottery. Because there is no way I'm spending $350 on the i7-4790K short of winning the lottery.

But the whole point of the Xeon vs i5 debate for me is running games that make use of more than 4 threads. Crysis 3 seems to be one. Since I'd be gaming at 1920x1080, I don't care about a slightly lower frame rate in average games from HT inefficiencies unless it consistently drops below 60 FPS. But dropping from 60 fps to a choppy oscillation between 45 and 60 fps in demanding areas of really punishing games (like Crysis 3) because I don't have HT would matter somewhat if I'm spending $330 on the GPU (I doubt the GTX 970 will go down much in price over the next year).

As for comparing now to 2005, why do you think the growth of parallelization will proceed as slowly now as it did in 2005, DubbleClick? It seems like it should be the opposite. In the early 2000s everyone programmed for a single core because there were still enormous gains in clock speed all the way to about 2002 or so. But then heat made it impossible to continue those ridiculous gains, and CPUs had to start growing horizontally. I think a lot of the slow adoption of four-thread gaming was due to inexperience with programming heavily multithreaded games. It's a complete paradigm shift to go from serial programming to multiple threads that have to communicate and stay synchronized. Ten years later, now that multithreaded games are the norm and developers have a ton of experience programming for four cores, I don't think there's anything close to the inertia that existed in 2005 against expanding thread counts. Surely developers are a lot more comfortable with threading now. Learning to effectively program multithreaded processes is part of a modern CS curriculum, and the algorithms books, like CLRS 3rd edition, now teach multithreaded algorithms. Doesn't it stand to reason that increasing thread counts now should be a lot less painful than it was in 2005? The big bottleneck in the 2005-2012 era of gaming seemed to be on the programming side.
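
A toy illustration of that split/communicate/combine pattern (Python with a thread pool; this is obviously not game code, just the shape of the coordination being described):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(data):
    """Work unit: sum one slice of the input."""
    return sum(data)

def parallel_sum(data, workers=4):
    """Split the work across threads and combine the partial results.

    The split and combine steps are exactly the coordination overhead
    that serial code never pays; getting them right (and keeping threads
    synchronized) is the paradigm shift described above.
    """
    n = max(1, len(data) // workers)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

print(parallel_sum(list(range(1000))))  # 499500, same answer as serial sum
```

Real engines juggle far hairier synchronization than this, which is part of why four-thread support took years to arrive.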
October 10, 2014 6:22:01 AM

Then again, if I'm right in my speculation (and that's all it is) and thread counts start increasing with well-balanced workloads, the Xeon will become just as obsolete as the i5 would be, as HT with 8 fully utilized threads won't come close to having 8 physical cores. My game programmer friend makes HT out to be just a hack to keep the ALUs fed, due to x86 requiring in-order execution as opposed to out-of-order CPUs like MIPS.
October 10, 2014 7:33:39 AM

HomerThompson said:
Then again, if I'm right in my speculation (and that's all it is) and thread counts start increasing with well-balanced workloads, the Xeon will become just as obsolete as the i5 would be, as HT with 8 fully utilized threads won't come close to having 8 physical cores. My game programmer friend makes HT out to be just a hack to keep the ALUs fed, due to x86 requiring in-order execution as opposed to out-of-order CPUs like MIPS.


Hyperthreading is just a technology to increase the efficiency of calculations. You could picture an i7 as four secretaries, each working on two files: the i7 secretary works on both at the same time, instead of one after the other like an i5 secretary. That results in slightly faster execution, since there's more load on each core at any given moment compared to an i5. But yeah, it is absolutely not an extreme performance-increasing factor, and for games it's not worth it at all, since it comes at the cost of real-time execution.

As for the i5 4690K, the chances of it OC'ing to 4.4GHz are very, very high. It's basically the same chip as the i7 4790K, just with a lower factory clock, and it generally overclocks the same, to around 4.6-4.8GHz. That comes at the cost of power consumption and heat dissipation, of course, but then the i5 would be faster than the Xeon at just about everything.

And you're right, the difference between the i5 and the i7 is mainly down to the clock speed difference. However, even though I watched the video, I don't see what major difference you see there. Both drop frames around the same time, which is, with about a 95% chance, caused by the GPU and not the CPU. Aside from that, the fps difference is usually under 5%, which isn't noticeable anyway at such high framerates.
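The "overclocked i5 catches the Xeon" argument in this thread can be put as back-of-envelope arithmetic. A sketch, assuming (these are my assumptions, not measurements) that the Xeon sustains roughly 3.7GHz on all cores and that Hyper-Threading adds about 20% throughput in a fully parallel workload:

```python
# Assumed, not measured: Xeon E3-1231 v3 all-core clock and HT gain.
xeon_clock_ghz = 3.7
ht_gain = 0.20  # ~20% extra throughput from HT in fully parallel work

# Clock a 4-core i5 without HT would need to match the Xeon's throughput:
i5_equivalent_ghz = xeon_clock_ghz * (1 + ht_gain)
print(round(i5_equivalent_ghz, 2))  # → 4.44
```

That lands right around the 4.4-4.45GHz figure quoted in the thread, which is within the typical overclocking range claimed for the 4690K.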
October 10, 2014 3:49:23 PM

DubbleClick said:
I thought the debate was about gaming in general, which is what my comments have been about...

Secondly, hyperthreading doesn't just magically double your core count. It sacrifices time getting the work lined up, but then saves time working through it. In rendering, that yields a benefit, because the setup takes only a bit of time while the actual work takes much longer; there's a lot of information to execute.
For games, though, you get tiny bits of information again and again. Those don't take a lot of time to work through, so you lose time and then gain it back, which is not really beneficial, overall mostly +/- 0. There are even cases where games run measurably (not noticeably) better with hyperthreading disabled.

I'm pretty sure hyperthreading will not benefit gaming for a long time. I mean, why would games stop being responsive and instead calculate the whole game up front? That wouldn't be a game anymore, would it?

And by the way, one more thing: an i5 can actually match Xeon speeds in a completely parallelized workload by increasing its frequency by about 20%. For an i5 4690K, that means roughly 4.45GHz to reach Xeon speeds, and most of them actually go higher.

Like I said, two debates going on at once lol. I could be looking at the original question wrong, but I'm not entirely sure I am yet. And I never said that hyper-threading doubles core count; I said chip manufacturers are adding cores/threads, meaning they tend to either add more cores or add hyper-threading to increase the number of simultaneous calculations. I know they aren't the same thing, and I know they work differently; that isn't much of an unknown to me. I just speculate that hyper-threading might be something utilised in the future of gaming.

While faster single-thread speeds might compete with slower multi-thread speeds, I'm wondering how useful that will be if programming shifts to favour hyper-threaded CPUs. Again, IF that were to happen.

HomerThompson said:
As for comparing now to 2005, why do you think the growth of parallelization will proceed as slowly now as it did in 2005, DubbleClick? It seems like it should be the opposite. In the early 2000s everyone programmed for a single core because there were still enormous gains in clock speed, all the way until about 2002 or so, attributable to Moore's Law. But then of course heat made it impossible to continue those ridiculous gains, and CPUs had to start growing horizontally. I think a lot of the slow adoption of four-thread gaming was due to inexperience with programming heavily multithreaded games. It's a complete paradigm shift to go from serial programming to multiple threads that have to communicate and stay synchronized. Ten years later, now that multithreaded games are the norm and developers have a ton of experience programming for four cores, I don't think there's anything close to the inertia that existed in 2005 against expanding thread counts. Surely developers are a lot more comfortable with threading now; learning to program multithreaded processes effectively is part of a modern CS curriculum, and the algorithms books, like CLRS3, now teach multithreaded algorithms. Doesn't it stand to reason that increasing thread counts now should be a lot less painful than it was in 2005? It seemed like the big bottleneck in the 2005-2012 era of gaming was on the programming side.

I'm kind of along these lines, though not entirely.

HomerThompson said:
Then again, if I'm right in my speculation (and that's all it is) and thread counts start increasing with the workload well balanced, the Xeon will become just as obsolete as the i5 would, since HT with 8 fully utilized threads won't come close to 8 physical cores. My game programmer friend makes it out like HT is just a hack to keep the ALUs fed, due to x86 requiring in-order execution as opposed to out-of-order CPUs like MIPS.

DubbleClick said:
Hyperthreading is just a technology to increase the efficiency of calculations. You could picture an i7 as four secretaries, each working on two files: the i7 secretary works on both at the same time, instead of one after the other like an i5 secretary. That results in slightly faster execution, since there's more load on each core at any given moment compared to an i5. But yeah, it is absolutely not an extreme performance-increasing factor, and for games it's not worth it at all, since it comes at the cost of real-time execution.

I guess what's flying over my head is where exactly games wouldn't benefit from hyper-threading. While it isn't the same as having more physical cores, it seems like something is missing, or maybe I'm missing the point. I understand the concept of hyper-threading, and I understand that it sacrifices setup time on smaller individual calculations, but what I don't fully understand is how it has no benefit over extra physical cores for future gaming. While the calculations being thrown at each core during a game are currently relatively small, the idea that hyper-threading won't be beneficial in the future seems to rely on the assumption that games won't increase the complexity of the calculations made on each thread (to put it in dumbed-down terms). Since I don't program video games, I don't know whether keeping calculations small or making them more complex is better in the long term. That could easily be narrow speculation, but I find it a valid one from my standpoint.

The idea behind hyper-threading is to increase the calculation speed of more complex workloads on each physical core. That being said, I would think that as games progress, the complexity of the workload would also increase; although this is all speculation. If it's better to program games so that the workload grows in the quantity of calculations rather than their complexity, then I can definitely see why hyper-threading isn't going to be beneficial. If video games continue to use the CPU for small calculations on each thread, then obviously faster cores, or more physical cores, are the answer. But if the complexity of the calculations grows, multiple threads per core seems like the answer, and I would think that's where the future of programming will lead gaming. I could easily be wrong. Thoughts?
October 11, 2014 2:07:50 AM

Hyperthreading wouldn't become helpful if the complexity of games' calculations increased (which is very unlikely; being a programmer myself, I can say the trend is moving away from high-level languages toward hardware-close programming, for efficiency's sake). It would only help if the workload shifted away from that steady exchange of small pieces of data recalculated over and over.
Now, this is not even close to exact, but it gives an idea:
start: game tells the CPU to calculate x
1ns: hyperthreading overhead
4ns: calculation time (instead of 5ns without HT)
CPU returns the result
game wants x calculated again
etc.

You lose time, you gain time. Hyperthreading only starts to pay off when calculations are so slow that you're already heavily fps-limited:

start:
3ns HT overhead
77ns calculation
-> 80ns instead of 100ns on the i5
but: the game is waiting 75/95ns and standing still, horrible lag and framerates.

So by the time the i5 becomes noticeably worse than the Xeon, you most likely wouldn't want to play on either. Granted, you still could. Just as an old GPU in current games may simply show a black screen, the same can happen with an old CPU: other APIs come into use, and support for older versions and CPU instruction sets simply ends. I'm quite surprised myself that most software still ships 32-bit versions.

And yes, for games, quantity of calculations usually wins over quality of calculations. Imagine a game with images as nice as Cinebench's 8K renders, with other players' models standing in exact positions. Sounds great, right? Until you factor in how much time that costs.
I think everyone is fine playing at slightly less than the best possible quality (not just graphics-wise, but also limits on player count, physics, etc.) and getting 60fps of smooth gameplay, instead of seeing ultra-realistic pictures, one every 10 seconds.
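The nanosecond figures in the post above can be turned into a tiny model. This is only an illustration of the post's own made-up numbers, not a measurement:

```python
# Toy model of the post's example: HT adds a small scheduling overhead,
# then speeds up the calculation itself.
def frame_time_ns(ht_overhead, work_with_ht, work_without_ht, use_ht):
    return ht_overhead + work_with_ht if use_ht else work_without_ht

# Short, game-like tasks: 1ns overhead + 4ns work vs a plain 5ns. A wash.
print(frame_time_ns(1, 4, 5, True), frame_time_ns(1, 4, 5, False))        # 5 5
# Long, render-like tasks: 3ns + 77ns vs 100ns. HT clearly wins.
print(frame_time_ns(3, 77, 100, True), frame_time_ns(3, 77, 100, False))  # 80 100
```

With tiny repeated tasks the overhead cancels the gain; only when each calculation is long does the 20% saving show up, which is the post's whole argument about games vs rendering.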
October 11, 2014 2:32:25 AM

That was a much better explanation than I expected; thank you. It makes much more sense now. I guess I figured the GPU and CPU were doing certain things a little backwards, and that was leading to the confusion. Thanks, mate. I'll definitely make a note of that. It kind of sucks that that's how it works out, but I guess it's another reason to cross my fingers for affordable consumer Larrabee-style chips in the future? lol

Also, same here with the 32-bit software. It's hard to find certain programs in x64, if they even exist. I'm kind of annoyed that some of my most important programs, like both of my primary web browsers (Aviator and Iron), aren't available in x64. Meh, I guess some people don't like change?
October 11, 2014 3:59:26 AM

Well, you can run 32-bit software on 64-bit hardware, just not the other way round. After all, the difference isn't even noticeable unless more than 4GB of RAM needs to be addressed by a single task.
I guess people are just lazy: 32-bit code has been used for a long time, and the time needed to get familiar with x64 just isn't seen as worth it. I started with 64-bit code and can't really tell huge differences. But who knows, maybe the x86 architecture will be replaced in a while anyway.
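Where the 4GB figure comes from, as quick arithmetic (a sketch of the pointer-width limit, nothing more):

```python
# A 32-bit pointer can address at most 2**32 distinct bytes.
addressable_32bit = 2 ** 32
print(addressable_32bit)                        # 4294967296 bytes
print(addressable_32bit // (1024 ** 3), "GiB")  # 4 GiB per process
# A 64-bit pointer raises the theoretical ceiling to 2**64 bytes,
# which is why only tasks needing more than 4GB feel the difference.
```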
October 11, 2014 6:02:03 AM

That is the beauty of 32-bit programs: they really don't show a difference. Even so, I would prefer x64 code, as that's where things seem to be headed. It doesn't seem logical to me to stay with 32-bit software in the age of 64-bit hardware. This is the one industry where we constantly clamour for the latest, greatest thing, yet we're still stuck with old 32-bit software at every turn. It seems... almost hypocritical? I don't know how to put it.

Replacing x86 would be interesting; I could get behind that. Things need an update altogether pretty badly. Hell, we might see a lot of improvements if programmers were forced to rewrite all of their software for whatever replaced x86. But that's neither here nor there... I'll just dream about it for now.
October 12, 2014 11:02:23 AM

The biggest obstacle to x86 software trending toward x64 is the sheer volume of x86 hardware still in use worldwide. Since you can't run x64 code on x86 platforms, software has to be written for x86 just to cover everybody, and until most, if not all, people are running x64 platforms, that requirement isn't going to change. For example, my wife works for the US government. They use a Citrix program on an XP Pro platform. OMFG is it slow, especially when working from home, but it works. And the thousand-odd other people in her building know how to use it, and there are many such buildings throughout the States, all networked, all on PCs running the same dated OS. The cost to upgrade to an x64-based platform would be staggering (and paid for by taxpayers), as would retraining everyone for a new OS, new programs, etc.

There are several vendors who do code both versions, like Norton, Microsoft, etc., but that's a large undertaking and just keeps costs higher. Once those platforms go the way of Win98SE there will be a change, but until then you're stuck with x86 programming.
October 12, 2014 4:57:36 PM

x86 is the instruction set, while x64 is really just shorthand for x86-64, an extension of it rather than a whole new architecture; however, programs are written differently when targeting 64-bit systems. The thing is, modern hardware is designed to run 64-bit operating systems. The only real difference between the 32-bit and 64-bit OS platforms, on modern hardware, is how much hardware is required to run them (i.e. HDD and RAM). By upgrading all platforms to support 64-bit operating systems, you wouldn't be changing very much, if anything. In fact, the first x86-64 CPU was released in 2003. Unless the hardware in use pre-dates 64-bit CPUs, updating to a 64-bit OS shouldn't be much of a hassle, and all the 32-bit software that was written can still be used, as it's still the same x86 architecture.

The real reason to stay with a 32-bit OS, IMHO, would be for machines that lack the hardware specs for, or the need for, a 64-bit OS, which may describe the machines you're talking about. The thing is, if the hardware can run a 64-bit OS, there's nothing beneficial about running a 32-bit OS (from what I've seen), yet people still run 32-bit systems. Windows XP is available in a 64-bit version, so the OS on the hardware you mention can most likely be updated; it's just a question of whether the hardware is new enough to support it. I would assume it has a 64-bit CPU, as the hardware would be very old otherwise; however, I could easily be wrong. The only real difference I've noticed when running 32-bit is the cap on how much RAM the OS supports, so there doesn't seem to be much harm in running 64-bit if the hardware can support it.
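If anyone wants to check which kind of build they're actually running, here's one common way to do it from Python (a quick sketch; it reports the Python build's bitness, and as the thread notes, a 64-bit OS can happily run 32-bit programs):

```python
import platform
import sys

# sys.maxsize reflects the pointer width of this interpreter build:
# > 2**32 means a 64-bit build, otherwise 32-bit.
is_64bit = sys.maxsize > 2 ** 32
print("64-bit build:", is_64bit)
print(platform.architecture()[0])  # '64bit' or '32bit'
```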