Introducing Intel's 14nm Node and the Broadwell Processor
Tags:
-
CPUs
-
Intel
Intel finally provides solid information on Haswell's successor, the next-generation Broadwell core. We also learn some detailed info about the new 14nm process node, a must-read for CPU enthusiasts interested in the future of Intel's Core!
Introducing Intel's 14nm Node and the Broadwell Processor : Read more
Mike Stewart
August 11, 2014 9:23:49 AM
With Intel heavily focusing on power efficiency, my bet is clocks are not going to get bumped by more than 200MHz, quite possibly less: by collapsing the multiplier pipeline from five cycles to three and supersizing a bunch of other things, that is a whole lot more logic per stage, and those additions chip away at any timing-closure margins that may have been gained from the shrink.
I would expect this to also translate into even more unpredictable and voltage/temperature-sensitive overclock outcomes.
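To put rough numbers on that intuition, here is a toy Python model of how stage count trades against clock headroom. All the delay figures are made up for illustration; nothing here reflects Intel's actual pipeline timings.

```python
# A minimal sketch (illustrative numbers, not Intel's actual timings) of why
# packing more logic into fewer pipeline stages eats into clock headroom.

def max_clock_ghz(total_logic_delay_ps, stages, overhead_ps=20):
    """Max frequency when a fixed amount of logic is split across N stages.

    Each stage pays a fixed overhead (latch/setup time) plus its share of
    the logic delay; the slowest stage sets the clock period.
    """
    stage_delay = total_logic_delay_ps / stages + overhead_ps
    return 1000.0 / stage_delay  # 1 GHz == 1000 ps period

# The same block of logic spread over 5 stages vs. collapsed into 3:
print(f"5-stage: {max_clock_ghz(1000, 5):.2f} GHz")  # shorter stages, higher clock
print(f"3-stage: {max_clock_ghz(1000, 3):.2f} GHz")  # more logic per stage, lower ceiling
```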
Score
2
2700K at 4.0GHz, with a 4.4GHz single-core turbo, for silent operation. I guess this year I will swap the Evo for something beefier and go 4.2 or 4.3 with 4.6 as turbo while staying silent. Rocking 2011-era gear and still not finding a reason to upgrade. For a person who renders almost all the time, this CPU stagnation is frustrating.
Score
3
Gaurav Rai
August 11, 2014 10:02:02 AM
ceeblueyonder
August 11, 2014 10:15:35 AM
intel needs die shrinks to cram in their specialized units on CPUs. i am not really sure, but intel has a bunch of specialized instructions built into their chips to beef them up and improve IPC, while also relying on software to take advantage of them. things like QuickSync come to mind, or SSE or whatever it is called. in comparison, AMD chips like the FX-series, and the Phenoms before them, seem simpler to me: a more general computing unit, kind of like PowerPC chips or ARM chips. they're composed of execution cores, or integer cores and floating point units, and that's it; no special instruction sets or QuickSync decoders to gain a software-driven advantage. but i don't really know, just hunches, though hopefully educated ones. so i am rooting for AMD! intel's die shrinks seem like a monopolistic grip that keeps others with better "architectures," simpler and more general logical integer units, behind. given the same die shrinks, say an AMD at 14nm too, AMD would probably blow the top off a competing intel that is also at 14nm. again, general computing units, which i think AMD, IBM PowerPC and ARM chips have, are an inherent advantage over intel's inferior x86 architecture. AMD and PowerPC both were first to x64 CPUs. but i could be wrong, again, just a hunch. intel blows AMD out of the water today b/c intel has chips at 14nm competing with AMD's 32nm or 28nm chips, and also because of the software-driven instruction sets that intel has crammed into their chips, which let software developers basically just check a box, or which intel has supported, to make intel chips run even faster than, say, AMD, which doesn't have QuickSync or SSE or whatever it's called.
thus they beat AMD. but, to me, AMD chips like the FX-series and the Phenoms before have a simplicity to them that i admire, although i can't specifically say how or what it is.
Score
-8
qlum
August 11, 2014 10:19:24 AM
While better efficiency is nice and all, I fear intel won't do enough for gamers to warrant a CPU upgrade. When overclocked, Haswell doesn't do much better than Sandy Bridge, and while intel may not have the strongest competition from AMD on the higher end anymore, if people won't upgrade their CPUs it will hurt intel in the long run.
Score
4
Gaurav Rai said:
Meanwhile AMD innovates with a 220W processor XD
Can you really call it innovation when AMD needs a 200W chip to compete with Intel's sub-100W chips? Unless you meant innovation in the high-tech space-heater market.
Intel went down the crank-the-clocks, power-be-damned path with Prescott about a decade ago and it did not work too well. AMD just tried the same thing and, "shockingly," it did not work particularly well for them either.
Score
1
ceeblueyonder
August 11, 2014 10:24:38 AM
it's also odd that intel is the die shrink boy, the die shrink CPU company doing all the die shrinks. perhaps the die shrink industry can only support one company at a time, as if supporting two companies that are also doing tick-tocks would "saturate" the industry too much. it probably has a lot to do with greed. money. but that doesn't matter. what we all need to know is that just because intel beats AMD today doesn't mean intel makes better chips, or that intel has superior technology. we need to take a closer look at what intel is doing and what AMD is not doing, and then look at their products more closely than just Geekbench or Cinebench scores. look under the hood.
Score
-9
ceeblueyonder said:
intel blows amd out of the water today b/c intel has chips at 14nm, competing with AMD that has 32nm or 28nm chips.
Even if you compare Sandy Bridge (32nm) Intel CPUs with AMD's FX-83xx (28nm), which theoretically gives the advantage to AMD, Intel's older chips still win most benchmarks. Intel being one process node ahead has very little to do with their performance lead; their architecture itself is just that far ahead.
Score
8
"... Having said that, Moore's Law appears to continue unabated for the moment. ..."
Hardly. Performance of the current Intel 4-core isn't that much better than the equivalent model from 18 months ago. I know they've improved power consumption, etc., but without significant speedups, most potential users really won't care.
Mike Stewart, you should be able to run your 2700K at 5.0. Every 2700K I've obtained runs at 5 no problem, with good temps, etc.
OTOH, the chipset improvements with Z97 do at least offer a vaguely passable rationale for upgrading, re the greater number of Intel SATA3 ports, newer storage tech, etc. If budget were not an issue, I'd build with a 4790K without hesitation.
Can't help feeling, though, with various comments I've seen these past few weeks, that what may be holding many people back from their ideal build is RAM pricing, which is now completely ridiculous. RAM is just too expensive: a huge step backwards in system cost. And please, I don't want to hear about chip shortages, etc.; we all know why RAM is more expensive now, because it's happened so many times before: the suppliers don't like the pricing levels, so they restrict supply to raise prices. IMO it's counterproductive, because I can't be the only one who thinks, no thanks, I'm not paying that much for an 8GB 1600 kit when one could get an 8GB 2133 kit for about a third less a year-plus ago, so heck with it, I'll look for used kits instead and save a bundle. I've bought four used G.Skill 2x4GB 2133 kits this year, saved over 100 UKP so far.
Price drops & efficiency improvements on CPUs are all fine & lovely, but what's the point if potential future power savings are being wiped out by an artificial upfront cost increase via the RAM?
Ian.
Score
-2
InvalidError said:
Intel went down the crank-the-clocks, power-be-damned path with Prescott about a decade ago and it did not work too well. AMD just tried the same thing and, "shockingly," it did not work particularly well for them either.
Which makes it all the more funny considering the Athlon XPs at the same time were more focused on efficient computing, with better IPC instead of insane clock rates. You'd think AMD would have learned enough from that era not to fall into the NetBurst trap.
Score
0
maroon1
August 11, 2014 11:22:47 AM
balister
August 11, 2014 11:51:14 AM
Quote:
"... Having said that, Moore's Law appears to continue unabated for the moment. ..."Hardly. Performance of the current Intel 4-core isn't that much better than the
equivalent model from 18 months ago. I know they've improved power consumption,
etc., but without significant speedups, most potential users really won't care.
Moore's Law states that the number of transistors on a chip will double every 24 months: http://en.wikipedia.org/wiki/Moore's_law
From the Wiki article: Moore's law is the observation that, over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years.
Double the transistors ≠ double the performance (although early on it seemed that way).
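For anyone who wants to see how sensitive the claim is to the doubling period, here is a quick sketch; the starting transistor count is arbitrary.

```python
# Quick sanity check of the "doubling every N months" framing: compound a
# transistor count forward and see how the 18- vs 24-month readings diverge.

def transistors(start_count, months_elapsed, doubling_period_months):
    return start_count * 2 ** (months_elapsed / doubling_period_months)

start = 1_000_000  # arbitrary starting count
for period in (18, 24):
    print(f"doubling every {period} months -> "
          f"{transistors(start, 120, period) / start:.0f}x after 10 years")
# doubling every 18 months -> ~102x; every 24 months -> 32x
```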
Score
3
robholden
August 11, 2014 12:29:09 PM
"For example, if we compare Intel's 22nm to 14nm nodes, we find that transistor fin pitch (the space between fins) has been reduced from 60nm to 42nm, transistor gate pitch (the space between the edge of adjacent gates)"
Actually, pitch means the distance from the center of one fin to the center of the adjacent fin; it is not just the space between the two fins.
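A small sketch of the distinction, with a made-up fin width, since the actual fin dimensions aren't given here:

```python
# Pitch is center-to-center spacing, so the gap between two fins is the
# pitch minus one fin width. The 8nm fin width is an assumed round number
# for illustration, not Intel's published dimension.

def gap_between_fins(fin_pitch_nm, fin_width_nm):
    return fin_pitch_nm - fin_width_nm

print(gap_between_fins(60, 8))  # 22nm-node fin pitch, hypothetical fin -> 52nm gap
print(gap_between_fins(42, 8))  # 14nm-node fin pitch -> 34nm gap
print(f"pitch scaling: {42 / 60:.2f}x")  # ~0.70x, the shrink the article quotes
```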
Score
2
balister said:
Double the transistors ≠ double the performance (although early on it seemed that way)
Back in those days, newer chips with more transistors were also on a smaller process, at significantly higher clocks, and usually accompanied by some fundamental performance enhancements/breakthroughs, so the performance doubling every ~18 months was a combination of multiple compounding factors.
Today, practically all the fundamental discoveries have been made and all they are doing is refining them, so that side of performance scaling is effectively shut down. Clock scaling also appears to have hit a brick wall, since the latency hit from making pipelines longer to enable higher clocks causes the execution pipelines to stall on dependencies more often, negating the gains from higher clocks. Process-wise, they are at a point where they are starting to fight fundamental laws of physics, which does not help with smooth progress either.
There is little reason to believe things are going to improve much any time soon when all aspects are well into their diminishing-returns curve.
Score
3
ceeblueyonder
August 11, 2014 12:43:21 PM
Quote:
ceeblueyonder said:
intel blows amd out of the water today b/c intel has chips at 14nm, competing with AMD that has 32nm or 28nm chips.
Even if you compare Sandy Bridge (32nm) Intel CPUs with AMD's FX-83xx (28nm), which theoretically gives the advantage to AMD, Intel's older chips still win most benchmarks. Intel being one process node ahead has very little to do with their performance lead; their architecture itself is just that far ahead.
The FX-83xx series are 32nm, btw/fyi. fx-8350 vs. i7-2600K is probably a fair fight; i bet they'd trade blows, or an fx-8350 is not far behind if it is behind. and AMD has a software/platform/optimization disadvantage, meaning that programs are not optimized for AMD chips since most PCs have intel chips inside them.
Score
-2
none12345 said:
Moores law says nothing about performance. It only has to do with the number of transistors on a chip doubling roughly every 18 months. ...
It was never a Law as such, merely an observation that seemed to be conveniently accurate many years ago, but often it's quoted as being either a performance doubling or a density doubling every year and a half (not 24 months). And for the other poster, Wikipedia is not the word of god.
Either way, my point still stands: neither angle has been true for a long time now.
Ian.
Score
-2
ceeblueyonder said:
fyi. fx-8350 vs. i7-2600k is probably a fair fight. i bet they'd trade blows. or, an fx-8350 is not far behind if it is behind.
The i7-2600K wins most benchmarks by a 10-20% margin and quite a few by a more substantial 30-50% lead. The only benches AMD wins by a significant margin (~15%) are 7-Zip and 2nd-pass h264.
http://www.anandtech.com/bench/product/697?vs=287
To make the FX a more even match for the stock i7, it needs at least an extra 600MHz.
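As a rough sanity check on that 600MHz figure (assuming a 4.0GHz stock FX-8350 and that performance scales linearly with clock, which it rarely does exactly):

```python
# Back-of-envelope check on the "extra 600MHz" figure: if the stock i7 leads
# by ~15% in many tests, the FX needs ~15% more clock to close the gap,
# assuming performance scales linearly with frequency.

fx8350_base_ghz = 4.0
deficit = 0.15  # midpoint of the 10-20% margins cited above

extra_mhz = fx8350_base_ghz * deficit * 1000
print(f"extra clock needed: ~{extra_mhz:.0f} MHz")  # ~600 MHz
```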
Score
7
ozicom
August 11, 2014 1:01:17 PM
They made the "tick" by getting the same performance with fewer watts and the "tock" by improving the architecture to give better performance at those watts, so Intel's play seems good to me. You'll get better gaming performance when game developers develop games for that architecture; for example, you can get a better gaming experience from lower-performance devices like the PS4 or Xbox because developers are making games specifically for those platforms. Intel's move from 22nm to 14nm is a good choice. I'm waiting for new products.
Score
-3
ceeblueyonder
August 11, 2014 1:20:54 PM
Quote:
ceeblueyonder said:
fyi. fx-8350 vs. i7-2600k is probably a fair fight. i bet they'd trade blows. or, an fx-8350 is not far behind if it is behind.
The i7-2600K wins most benchmarks by a 10-20% margin and quite a few by a more substantial 30-50% lead. The only benches AMD wins by a significant margin (~15%) are 7-Zip and 2nd-pass h264.
http://www.anandtech.com/bench/product/697?vs=287
To make the FX a more even match for the stock i7, it needs at least an extra 600MHz.
i did acknowledge that "if the fx-8350 is behind, it isn't behind by much." 10-20% is not much to me. it's not what you described in your earlier post as intel having an architectural advancement, b/c if you wanna talk architecture, AMD patented x64, and x64 is better than the x86 which intel uses. correct me if i'm wrong.
Score
-5
ceeblueyonder
August 11, 2014 1:33:37 PM
this is why AMD is focusing on APUs, because intel is focusing on mobile platforms too with all the die shrinks. the FX series and the AM3+ chipset are also stagnant b/c AMD knows the performance delta from when the FX-8350 was introduced to now is not much. i mean, an FX-8350 is a great gaming CPU and even a video editing CPU; it's a good "multi-tasker" because it has 8 cores. the only thing that i think makes ppl shy away and keep buying intel for their gaming and video editing needs is that magazine editors think old is dead, when a computer that is two yrs old is not the same as, say, an old bridge. an old bridge needs repair as it ages; a CPU will not wither and die with the passage of time. an fx-8350 and an AM3+ mobo still have SATA 6Gb/s speeds and adequate PCIe 2.0 lanes, since there isn't a GPU today that will even saturate a PCIe 2.0 slot. i mean, an AMD 990FX mobo has more SATA ports and PCIe lanes than intel. intel just wows ppl today with "Thunderbolt," "SATA Express," "M.2 SATA" and other things that seem superfluous to me. am i alone in this?
Score
-5
Menigmand
August 11, 2014 2:33:40 PM
I can't believe Tom's Hardware doesn't know how to do percentages, but here we go:
"In fact, the Broadwell-Y die has about 63% less area than the Haswell-Y die." (page 1)
"The Broadwell-Y chip is 82mm2, scaled down about 63% compared to Haswell-Y's 130mm2 die size." (page 2)
No. It's scaled down 37%. It has 37% less area. So, the new chip is 63% of the original size.
"In fact, the Broadwell-Y die has about 63% less area than the Haswell-Y die." (page 1)
"The Broadwell-Y chip is 82mm2, scaled down about 63% compared to Haswell-Y's 130mm2 die size." (page 2)
No. It's scaled down 37%. It has 37% less area. So, the new chip is 63% of the original size.
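For anyone who wants the arithmetic spelled out, here are both readings computed side by side:

```python
# The two ways the article conflates die-area scaling, computed explicitly.

haswell_y_mm2 = 130
broadwell_y_mm2 = 82

relative_size = broadwell_y_mm2 / haswell_y_mm2             # what the new die IS
shrink = (haswell_y_mm2 - broadwell_y_mm2) / haswell_y_mm2  # what was REMOVED

print(f"Broadwell-Y is {relative_size:.0%} of Haswell-Y")  # ~63%
print(f"i.e. a {shrink:.0%} reduction in area")            # ~37%, not 63%
```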
Score
9
blppt
August 11, 2014 2:40:08 PM
"AMD patented x64. x64 is better than x86 which intel uses. correct me if i'm wrong."
While AMD64 does have certain slight advantages in feature set, Intel has its own version of it called EM64T. Since AMD has to license x86 from Intel, any fruit from that tree has to also be available to Intel, and thus AMD64 (aka x86-64) is. Since the last Prescott P4s, Intel has had chips with EM64T on them (with a slight gap for the very first Core Solo/Duo chips).
Score
-1
blppt
August 11, 2014 2:47:42 PM
Also, for what it's worth, the 2600K Sandy Bridge I have trades blows with my 9590 box, which considering the gap in TDP (95W vs 220W) is just sad. And for games that are heavily CPU-dependent and don't use all 8 cores of the 9590, the 2600K stomps all over it (see: Skyrim). Hopefully, with the advent of Mantle and the fact that a lot of future games are going to be designed with the 8-core consoles in mind, this will be reversed, but right now it's a very narrow range of people who would choose these power-hungry monsters over a (now-ancient) 2600K. Never mind the even more power-efficient Ivy and Haswell.
Score
7
childofthekorn
August 11, 2014 3:20:31 PM
Quote:
Quote:
ceeblueyonder said:
intel blows amd out of the water today b/c intel has chips at 14nm, competing with AMD that has 32nm or 28nm chips.
Even if you compare Sandy Bridge (32nm) Intel CPUs with AMD's FX-83xx (28nm), which theoretically gives the advantage to AMD, Intel's older chips still win most benchmarks. Intel being one process node ahead has very little to do with their performance lead; their architecture itself is just that far ahead.
The FX-83xx series are 32nm, btw/fyi. fx-8350 vs. i7-2600K is probably a fair fight; i bet they'd trade blows, or an fx-8350 is not far behind if it is behind. and AMD has a software/platform/optimization disadvantage, meaning that programs are not optimized for AMD chips since most PCs have intel chips inside them.
The ALU and FPU units used by the FX series are also very low quality. The rumor mill suggests that AMD is custom-designing the ALU/FPU units for Excavator. Even for software that's going to be made to utilize the FX series, it's still a lower-quality processor compared to Intel, hence the price point.
Score
-1
army_ant7
August 11, 2014 3:39:53 PM
Here are some corrections that I think need to be made. Just doing my part as a community member. :-)
Quote:
The Broadwell-Y chip is 82mm2, scaled down about 63% compared to Haswell-Y's 130mm2 die size.
I think it should say "...scaled down to about 63% of Haswell-Y's 130mm2 die size." The reason being that the original statement seems to imply that Broadwell-Y shrunk by 63%, which is a significantly larger shrink, as opposed to the roughly 37% it really shrunk by.
(old - new) / old * 100% = shrink %, e.g. (130 - 82) / 130 * 100% ≈ 37%
Quote:
...Haswell-Y's integrated graphics has a maximum of 20 AUs...
I could be mistaken or behind the times, but shouldn't that be "Execution Units" or "EUs"?
Score
1
gsxrme
August 11, 2014 3:58:52 PM
blppt
August 11, 2014 5:03:39 PM
gsxrme said:
bah! My 2600K @ 5.1GHz @ 1.5V (a real water setup) will just have to stay. It's a shame too, I can't push my memory bus past 2200MHz either. I hate you Intel and AMD! I want to build something!
Geez, my 9590 needs north of that to hit 5GHz (all cores, not turbo). And that 5GHz is about equal to a 2600K @ 3.8 (all cores, not turbo).
Score
0
balister
August 11, 2014 5:37:23 PM
Quote:
none12345 said:
Moores law says nothing about performance. It only has to do with the number of transistors on a chip doubling roughly every 18 months. ...
It was never a Law as such, merely an observation that seemed to be conveniently accurate many years ago, but often it's quoted as being either a performance doubling or a density doubling every year and a half (not 24 months). And for the other poster, Wikipedia is not the word of god.
Either way, my point still stands: neither angle has been true for a long time now.
Ian.
I'm pretty sure you're wrong on the transistor side of that argument, as they have continued to double the number of transistors every 2 years, which still holds with what Moore originally said in his paper.
And in this case, Wikipedia is correct, as the information there is pulled from Moore's original work.
Score
-1
tomfreak
August 11, 2014 6:42:21 PM
ceeblueyonder
August 11, 2014 7:55:30 PM
Quote:
Quote:
Quote:
ceeblueyonder said:
intel blows amd out of the water today b/c intel has chips at 14nm, competing with AMD that has 32nm or 28nm chips.
Even if you compare Sandy Bridge (32nm) Intel CPUs with AMD's FX-83xx (28nm), which theoretically gives the advantage to AMD, Intel's older chips still win most benchmarks. Intel being one process node ahead has very little to do with their performance lead; their architecture itself is just that far ahead.
The FX-83xx series are 32nm, btw/fyi. fx-8350 vs. i7-2600K is probably a fair fight; i bet they'd trade blows, or an fx-8350 is not far behind if it is behind. and AMD has a software/platform/optimization disadvantage, meaning that programs are not optimized for AMD chips since most PCs have intel chips inside them.
The ALU and FPU units used by the FX series are also very low quality. The rumor mill suggests that AMD is custom-designing the ALU/FPU units for Excavator. Even for software that's going to be made to utilize the FX series, it's still a lower-quality processor compared to Intel, hence the price point.
how do you know they're low quality? what do you mean by it? did you take the CPU apart and look at the actual silicon under a microscope to examine its "quality?" i don't even know what you're talking about. but if i may guess, maybe you mean slower? if so, you are right. AMD needs to work on their modules. even though AMD has 8 real logical and physical cores, each module with two cores shares resources like the FPU, L2 cache and another thing, which hampers its performance. at least they're not "hyperthreading," or what i call "fake" cores. lol. just kidding.
anyway, the modules and cores sharing resources is probably how AMD is able to deliver more cores for the price. but it doesn't mean the actual silicon that makes up those units is of low quality. if it were, those chips would be rejected or not used, b/c the chip itself would not function; maybe your computer would not even start because of it.
also, if you wanna talk about quality, isn't intel the one skimping on it? the thing about solder TIM not being used on their CPUs now to save money. also, i have built an fx-8320 PC and an i7-3770K PC, and the box the AMD FX chip comes in is metal and seems more quality than the box the i7-3770K chip came in, which is cardboard. seems like the CPU could easily get damaged without a metal box. but i digress.
also, the CPU cooler that comes with the intel chip seems "cheaper" than the one that comes with the AMD chip.
Score
0
balister
August 11, 2014 8:09:57 PM
none12345
August 11, 2014 11:18:07 PM
"*reads about people complaining not having a reason to upgrade and spend more money* what am I reading "
As a gamer, I'm tired of games stagnating. The long console cycle deserves a lot of the blame, but stagnation of hardware is just as guilty. Nothing has really changed in the gaming world in the last 7 or so years. Graphics haven't really improved much, nor has AI, or anything else. Sure, there have been some tiny improvements, but it's all pretty boring.
If processors were still doubling in performance every 18 months or so, the games of today would make the current stuff look like old-school Nintendo games.
I've been waiting for another spurt of innovation since 2009, yet my 6-year-old system still plays every new game just fine. So why upgrade? Wish Intel/AMD would give me a reason to.
I wish someone would blow me away again with a hardware advancement.
Score
2
bin1127
August 12, 2014 12:04:55 AM
Quote:
It's good to see Intel working so hard on their thermal department. Gaming is great, but you can't help feeling guilty about mother earth every time you fire up your PC. Meanwhile AMD innovates with a 220W processor XD
Intel always does great on TDP/performance. I worry more about graphics cards. I wish people would scale down their graphics settings after every new game, once smooth shiny tree leaves with swaying shadows in the background lose their novelty.
Score
0
I would say that after reading a few of these comments I do feel for some people a bit now. That excitement of new tech being dished out, and all the rumors and facts surrounding it, is what brings us to debating all this stuff in the first place. Because we like to be impressed by technology.
I was just stating my viewpoint as a consumer that I'm pleased my system will have nice longevity. I can focus on buying other things and not have to worry about it as much.
Still kinda sad, though, that a game like Crysis will never be released again. Back in 2007 it was unbelievable just to watch a video of it, let alone run it.
While I already see a good bit of improvement in this next generation of titles in terms of graphics and physics, none have come out that really wow people like Crysis did.
Sure, Crysis 3 is a great-looking game, but it doesn't have the same notoriety as the first one. As hardware has slowed down greatly in terms of improvement, so have graphics. Although really, I feel like in terms of graphics we should be upping the polygon budget more.
Until I can't see any triangular bumps on Agent 47's head, I think we still have room for improvement.
Score
0
Some people do need to realize that games are only part of the world, and not even the largest piece of the cake. Our world is spinning because we have the power of computers. Even phones have turned into mini computers.
If gamers have no reason to upgrade, well, blame the consoles and their manufacturers. All of the "current generation" consoles arrived already outdated. Remember the "hack" or "mod" to unlock all of Watch Dogs' potential?
The reason why a lot of us, and specifically me, are "upset" by the current CPU development is that it dictates what the average level of performance will be. Intel launches Sandy Bridge and then Sandy Bridge-E and EP, Ivy Bridge and then Ivy Bridge-E and EP, Haswell... etc. The moment the mainstream line launches, you know what to expect. Anyone here who has had a Mental Ray render (I have only Mental Ray at home) run for 28 hours for a single frame will get me. And the performance of extra cores is never a linear improvement: the more cores you add, the more time a single pixel takes to render. Pixar's RenderMan is the best example; 4 cores render faster than 24. But even in the best-threaded render engine, if you have 2 render nodes with 2 i7s running at 3 GHz, they will render 30% faster than a single 8-core Xeon at 3 GHz, and you can get those 2 render nodes for less than the price of the Xeon by itself. And if you get more performance out of your cheap render nodes, all the best.
If Broadwell's performance is 5% on top of Haswell, then Broadwell-E and EP will be 5% on top of their Haswell counterparts. Everything is linked to the mainstream part: Intel launches a mainstream part and adjusts the number of cores and TDP for the enthusiast/professional line. The moment this Broadwell article came out, we already knew what is in store for the next 3 years.
And if some people think "get a 12-core Xeon or something," that is not always possible or smart. A lot of software scales badly past 8 cores/threads. Even half the functions that you use in a workflow are single-threaded (diffuse-to-object baking, modeling tools, deformers, conversions, etc.). In a lot of animation studios they use high-clocked, low-core-count machines for animators for exactly such reasons. Throwing screaming "cores" at the problem does not work. My home Sandy at 4.5 renders only 10-20% slower than the Xeon 2650v2 at work.
The software is too far behind. If in the late 1990s and the beginning of the 2000s the hardware was holding back the software, ever since the late 2000s, and specifically since 2010, the software has been lagging. V-Ray 3.0, which is available for 3ds Max and soon to be out for Maya, was rebuilt from the ground up to use AVX, and when I was at a presentation of V-Ray 3.0 there was a 30% improvement in render speeds compared to the older version. And this is happening in 2014 and 2015. And AVX is? Technology from 2008, first implemented in 2010/2011.
And now imagine all those i7s/i5s, or their Xeon versions (mainstream-socket Xeons perform exactly the same as their i-series counterparts), arriving in cheaper workstations or in Macs. This is the main hardware of the working force. Not only Adobe products and 3D packages and SDKs, but also a lot of software developers and scientists sit on this hardware. And all the indie studios.
If all of you guys have something to blame for having no reason to upgrade, blame the software. Don't blame Intel. Blame the console manufacturers for their outdated consoles. Don't blame AMD. Blame all those lazy programmers who either can't, or won't, or don't have enough resources to program for a multi-threaded environment and are stuck in 1-2-threaded functions. And also blame the world: if you bought fewer phones and tablets and more high-performance PCs, the market interest and innovation would have been different. It is the mainstream that defines the performance increase for enthusiasts. Cheers.
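The "scales badly past 8 cores" point is classic Amdahl's law; here is a minimal sketch with an assumed 90%-parallel workload (the serial fraction is illustrative, not a measured figure from any render engine):

```python
# A minimal Amdahl's-law sketch of why renders and DCC tools stop scaling:
# even a small serial fraction caps the achievable speedup.

def amdahl_speedup(cores, parallel_fraction):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (2, 4, 8, 12, 24):
    print(f"{cores:>2} cores: {amdahl_speedup(cores, 0.90):.2f}x")
# 2: 1.82x, 4: 3.08x, 8: 4.71x, 12: 5.71x, 24: 7.27x -- each doubling buys less
```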
Score
5
leeb2013 said:
hmm, not much is moving performance wise, just power consumption. I wonder how much the next Tock will improve performance.
If you look at the pattern over the last couple of chips, improvements are around 5-7% regardless of tick or tock, so I would expect up to 7% IPC improvement from Skylake, since that is what we got from IB to Haswell.
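Compounding those per-generation gains shows how little they add up to; the 6% figure below is just the midpoint of the 5-7% range cited above:

```python
# Compound the ~5-7% per-generation IPC gains: several back-to-back
# generations still fall far short of one old-style doubling.

per_gen_gain = 0.06  # assumed midpoint of the 5-7% range
for gens in (1, 3, 5):
    print(f"{gens} generation(s): {(1 + per_gen_gain) ** gens - 1:.0%} cumulative")
# 1: 6%, 3: 19%, 5: 34% -- versus the historical ~2x every couple of years
```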
Score
0
edlivian
August 12, 2014 11:02:28 AM
edlivian said:
i dont give a hoot about power savings anymore, intel has to start finding a way to gain 25% performance per cycle, or it will never become ideal to upgrade from sandy and ivy bridge i7's
Gaining 25% performance per cycle is simple: make the core wider and add two extra threads per core to make sure those extra execution resources have work to do. Alternately, they could add cores.
Either way, the extra throughput per clock from larger-scale thread-level parallelism is pointless without massively threaded code to actually use it.
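A minimal illustration of that last point: the extra hardware only pays off when the work is actually split into independent chunks. Pure-Python CPU-bound work needs processes rather than threads because of the GIL, so this sketch uses multiprocessing; the workload itself is just a stand-in.

```python
# Sketch: the same CPU-bound work run serially (extra cores idle) and then
# split across processes (one chunk per available core).

import multiprocessing as mp
import time

def burn(n):
    """CPU-bound stand-in for one chunk of 'massively threaded' work."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * 8

    start = time.perf_counter()
    [burn(n) for n in chunks]  # serial: throughput of a single core
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with mp.Pool() as pool:    # parallel: work spread over all cores
        pool.map(burn, chunks)
    parallel = time.perf_counter() - start

    print(f"serial {serial:.2f}s vs parallel {parallel:.2f}s")
```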
Score
0