Haswell: News, Rumors & Reviews
Tags: CPUs, Reviews, Intel, Product
Last response: in CPUs
This sticky is to provide a centralized thread to post information (news, rumors & reviews) related to Intel's Haswell.
We don't want to close this thread as we closed the other ones, so remember the forum rules (ToU & RoC) and STAY ON TOPIC. Personal attacks on other users will be deleted, but I hope not to have to do that. We are adults, so act like it and don't be an.....
Ivy Bridge discussion has moved!
http://www.tomshardware.com/forum/332628-28-official-th...
This is a centralised discussion thread for Haswell now.
Quote:
What's the point in posting rumours? There is already enough junk on the net that may burn readers' pockets or mislead them into wrong decisions/purchases. Better to stick to hard facts, or just wait for them to be released.
I agree with you...
I have updated the thread to focus on News, Rumors and Reviews (once released). Facts are important on the matter but many times the initial facts are released as "Rumors", such as leaked road maps.
4745454b said:
And I was going to seriously ask if we are allowed to post our own personal "theories" on SB/IB, or if we have to find it in an article.
4745454b: I would focus on articles you come across on the subject matter versus your own personal "theories", unless you have information to back up your "theories".
That Xbit article is the only one I've seen that claims "normal" overclocking for SNB-E (instead of the effectively multiplier-only overclocking of SNB-M), but I really hope it's true.
Edit: Actually, I found another mention on page 2 of the first article linked below.
This could be the first move by Intel towards splitting the mainstream and enthusiast sectors by actually having a separate product for each. Mainstream users could still overclock safely via multiplier with SNB-M, but the real performance would be had by moving over to the SNB-E which would have multiplier and BCLK/DMI (or whatever they want to call it) overclocking.
Here's a distillation of what I have heard about SNB-E from various sources:
130W TDP
LGA 2011 socket
Four or six cores plus HyperThreading (have seen forumites say 8 core also but haven't seen a slide or article that says so -- maybe I missed it?)
No integrated graphics
Quad-channel memory controller, DDR3-1600 support (1.25v/1.35v memory?)
40 PCIe lanes (32 in CPU plus 8 from PCH; can do dual x16 or quad x8)
PCIe3? (some articles say PCIe2 others say PCIe3)
Up to 15MB L3 cache
Extreme Edition(s) mentioned
Multiplier overclocking (maybe only Extreme Editions?)
DMI overclocking? (now two mentions)
Another article: A Look Into Intel's Next Gen Enthusiast Platform : Sandy Bridge E & Waimea Bay
Yet another article: Ivy Bridge chipset model names revealed alongside more Sandy Bridge-E details
It looks like Ivy Bridge is going to support PCIe 3.0: Ivy Bridge CPUs Feature PCI-Express 3.0
Source: http://www.xfastest.com/viewthread.php?tid=60184&page=1...
Semiaccurate says Intel's 22nm will be using FinFETs:
http://semiaccurate.com/2011/04/07/intel-goes-finfet-on...
Also, the onboard GPU might get beefed up to 16 or 24 execution units, and maybe stacked low-power DDR2 on the chip itself:
http://semiaccurate.com/2010/12/29/intel-puts-gpu-memor...
Article on Panther Point chipsets for Ivy/Sandy Bridge: More Intel 7-series chipset details come to light
Some good info on SNB-E and X79:
Makers To Demo Intel X79 Boards at COMPUTEX 2011
Not too far off now ... August/September should be a good time for the next round of upgrades.
Leaps-from-Shadows said:
Some good info on SNB-E and X79:Makers To Demo Intel X79 Boards at COMPUTEX 2011
Not too far off now ... August/September should be a good time for the next round of upgrades.
Thank you! I read the press release; very impressive. I wonder what the pricing strategy will be!
Though not necessarily about Sandy Bridge-E or Ivy Bridge, here is some information on Intel's new Atom processors.
Intel Readies 32 nm Cedar View Atom Processors for Late 2011
fir_ser
May 4, 2011 10:12:26 AM
Today Intel is organizing a press conference at 9:30am PDT, where it will be making its “most significant technology announcement of the year.”
As far as I'm aware, Intel should discuss its upcoming 22nm process technology.
So I wonder what Intel will announce with regards to Sandy Bridge-E and Ivy Bridge.
The link to this event is: http://www.intc.com/eventdetail.cfm?EventID=96649
Here is the big announcement!!
Intel Reinvents Transistors Using New 3-D Structure
Intel Makes 22nm 3-D Tri-Gate Tech for Ivy Bridge
^ Well this is what I was expecting - FinFET or Tri-Gate transistors as Intel is calling them now. There have been rumors about Intel using FinFETs ever since the 45nm node.
So I imagine that 22nm Atom SoC's will be very competitive in the cellphone & tablet markets, esp. if Intel combines it with the stacked DDR2 memory for the HD4000 GPU, or whatever it'll be called (16 or 24 EU).
I also now think that perhaps Ivy Bridge will be more than 20% faster than Sandy Bridge in the same power envelope - as the article states, the 22nm transistor offers almost 40% more performance than the 32nm transistors Intel uses in Sandy Bridge.
fazers_on_stun said:
^ Well this is what I was expecting - FinFET or Tri-Gate transistors as Intel is calling them now. There have been rumors about Intel using FinFETs ever since the 45nm node. So I imagine that 22nm Atom SoC's will be very competitive in the cellphone & tablet markets, esp. if Intel combines it with the stacked DDR2 memory for the HD4000 GPU, or whatever it'll be called (16 or 24 EU).
I also now think that perhaps Ivy Bridge will be more than 20% faster than Sandy Bridge in the same power envelope - as the article states, the 22nm transistor offers almost 40% more performance than the 32nm transistors Intel uses in Sandy Bridge.
I am a bit doubtful of this technology. I hope it gives us more performance; just doubtful of the problems.
ghnader hsmithot said:
I am a bit doubtful of this technology. I hope it gives us more performance; just doubtful of the problems.
Well, the problem with ever-shrinking process nodes is that leakage currents tend to rise rapidly as a percentage of total current. For example, at 45nm the leakage between the gate and the channel in a FET is significant, despite the fact that the insulator (SiO2) is basically pure glass. When that layer of glass is mere nanometers thick, even a ~1V potential difference between the gate and the channel will cause a significant amount of charge leakage. That current multiplied by the potential difference is just wasted energy that shows up as heat. Which is why Intel introduced HKMG, which reduces the gate leakage significantly, and which Global Foundries is now using at 32nm.
Another, even bigger problem is channel leakage - the residual current between source and drain when the transistor is supposed to be OFF. Transistors are not really ideal ON/OFF switches - they alternate between mostly ON and mostly OFF, which means they waste power in either state, plus the parasitic capacitances cause them to waste energy when switching from one state to the other. Eventually, as the process shrinks, more energy is wasted than actually used in performing useful computations, and that wasted energy shows up as unwanted heat. AMD uses Silicon-On-Insulator (SOI) wafers, which greatly reduce the parasitic capacitance, at least at larger nodes like 90nm; SOI probably does not have as significant a benefit at smaller nodes, because the above leakage currents start to outweigh the parasitic effects.
Tri-gate FinFETs reduce the channel leakage by a huge amount, relatively speaking, since the charge carriers (electrons in N-channel FETs, holes in P-channel) now have to get past three potential barriers vs. one when the transistor is nominally OFF. If each barrier gives an exponential tail-off, then the total leakage current will be significantly reduced.
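A back-of-the-envelope sketch of why that OFF-state leakage matters at chip scale (every number here is an illustrative assumption, not an Intel figure, and the 10x FinFET reduction is purely hypothetical):

```python
# Toy static-power estimate: per-transistor OFF-state leakage current,
# times transistor count, times supply voltage. All inputs are made-up
# illustrative values.

def static_power_watts(leak_per_transistor_amps, num_transistors, vdd_volts):
    """Total wasted static power: P = I_leak_total * Vdd."""
    total_leakage = leak_per_transistor_amps * num_transistors
    return total_leakage * vdd_volts

# Assume ~1 nA of channel leakage per planar transistor, a billion
# transistors, and a ~1 V rail:
p_planar = static_power_watts(1e-9, 1_000_000_000, 1.0)   # 1.0 W of pure waste

# If a tri-gate fin cut OFF-state leakage by, say, 10x (assumed figure):
p_finfet = static_power_watts(1e-10, 1_000_000_000, 1.0)  # 0.1 W

print(f"planar: {p_planar:.2f} W, finfet: {p_finfet:.2f} W")
```

Even a nanoamp per transistor adds up to whole watts of heat once you have a billion of them, which is why cutting the per-device leakage pays off so directly.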
Or at least that's how I imagine it works, given that I had solid-state physics and design some 20+ years ago...
FYI, I forgot to mention that a 1 volt difference across a 10nm-thick insulator (like the SiO2 glass I mentioned above) amounts to a field of 100,000,000 volts per meter - far more than enough to let lightning bolts zap through air (a pretty good insulator when dry).
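A quick sanity check on that arithmetic (the ~3 MV/m breakdown field of dry air is a standard textbook figure):

```python
# Electric field across a thin gate oxide: E = V / d.
voltage = 1.0          # volts across the gate oxide
thickness = 10e-9      # 10 nm of SiO2, in meters

field = voltage / thickness   # volts per meter
air_breakdown = 3e6           # ~3 MV/m, typical for dry air

print(f"field = {field:.0e} V/m")                              # about 1e+08 V/m
print(f"roughly {field / air_breakdown:.0f}x air's breakdown field")
```

So the gate oxide routinely holds off a field some 30x stronger than what it takes to arc through air, which is why even tiny imperfections in that glass layer leak.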
halfcalf
May 5, 2011 1:00:56 PM
fazers (or anyone): What about the actual "size" of the electrons in a 22nm circuit? Are we not getting to the point where we're looking at boulders hurtling down a pipe... but in this case boulders which have the nasty habit of materializing on the other side of a fixed barrier? Is it logical to assume that we've shrunk the process just about as much as can be expected before the LHC forces us to rewrite quantum physics?
halfcalf said:
fazers (or anyone): What about the actual "size" of the electrons in a 22nm circuit? Are we not getting to the point where we're looking at boulders hurtling down a pipe... but in this case boulders which have the nasty habit of materializing on the other side of a fixed barrier? Is it logical to assume that we've shrunk the process just about as much as can be expected before the LHC forces us to rewrite quantum physics?
Eventually that will happen. There will be some manufacturing nightmares, but sooner or later, the design guys will see that there is life beyond La_La_Land! That's when it will hit the fan!
halfcalf
May 5, 2011 1:43:20 PM
Ubrales & zulfadhli: Yeah, no matter how nano the technology, "ye canna change the laws of physics" to quote Lt. Montgomery Scott. What makes it especially interesting is that although I am in my 50s and have been around the scientific block at least a few times, I still have never read an explanation of electricity that (were it to describe anything else) would pass muster in any high school!
halfcalf said:
Ubrales & zulfadhli: Yeah, no matter how nano the technology, "ye canna change the laws of physics" to quote Lt. Montgomery Scott. What makes it especially interesting is that although I am in my 50s and have been around the scientific block at least a few times, I still have never read an explanation of electricity that (were it to describe anything else) would pass muster in any high school!
The tragedy of science is the slaying of a beautiful hypothesis by an ugly fact!
halfcalf
May 5, 2011 2:06:01 PM
Ubrales said:
The tragedy of science is the slaying of a beautiful hypothesis by an ugly fact!
I agree fully. I still can't keep a straight face when somebody is yakking on about "electron flow" (really??? electrons flow, huh???) and propagation as both a wave and a particle. That one is a knee-slapper.
halfcalf said:
fazers (or anyone): What about the actual "size" of the electrons in a 22nm circuit? Are we not getting to the point where we're looking at boulders hurtling down a pipe... but in this case boulders which have the nasty habit of materializing on the other side of a fixed barrier? Is it logical to assume that we've shrunk the process just about as much as can be expected before the LHC forces us to rewrite quantum physics?
Atomic sizes range from 0.3 to 3 angstroms in radius, an angstrom being 10^-10 meters or 0.1 nm. So that's 0.06 nm to 0.6 nm in diameter. I've seen some patents (Hitachi?? Don't recall the assignee) on single-atom "circuits", based on using scanning tunneling microscopes that 'pick up' individual atoms and deposit them on a substrate, where the Van der Waals force holds them in place. So basically you can make a line of atoms close enough together that single electrons can 'hop' from one atom to the next. IIRC it was IBM that showed how this could be done, back in 1989 when they wrote the company name "IBM" with xenon atoms on a nickel substrate.
The physics behind it are beyond me - when I was in college studying solid-state physics, the equations were based on bulk semiconductor properties where surface effects were negligible. So calculating band gaps, etc., was relatively easy using bulk properties. When you line up atoms on a substrate, it's all surface effects.
Anyway, the point of the patent was that you could make NAND and NOR gates and even memory circuits using single atoms lined up appropriately in circuits, using strongly-held atoms and loosely-held atoms (lesser Van der Waals attraction to the substrate), and these 'circuits' would be < 1 nm.
Of course, it would probably take a thousand years just to assemble a one-billion-circuit CPU moving one atom at a time with SEM equipment. But it would be 1/(22^2) times smaller than Ivy Bridge, and probably cost 2^22 times as much as Ivy Bridge.
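To put those sizes in perspective, here's a rough count of how many atoms span one of today's lithographic features (the ~0.22 nm silicon diameter is an approximate textbook covalent figure, and the node names are just used as lengths):

```python
# How many atoms fit across a lithographic feature, end to end?
def atoms_across(feature_nm, atom_diameter_nm):
    """Approximate atom count spanning a feature of the given width."""
    return feature_nm / atom_diameter_nm

# Silicon's covalent diameter is roughly 0.22 nm (approximate figure).
SI_DIAMETER_NM = 0.22

for node in (45, 32, 22):
    print(f"{node} nm feature: ~{atoms_across(node, SI_DIAMETER_NM):.0f} atoms wide")
```

A 22nm feature is only on the order of a hundred silicon atoms across, so the gap between today's lithography and single-atom circuits is big, but not astronomically so.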
http://news.softpedia.com/news/Intel-s-Z68-Chipset-Make...
Well, our Apple friends finally get the better of us...
halfcalf
May 5, 2011 6:35:43 PM
fazers: If you say that the physics are beyond you then I have to say that I can't count to 21 without being naked. So I certainly can't add to your superb analysis. It would seem to me (with my junior ankle biter grasp of the issue) that the single atom layer graphene experiments might end up being used in some way for this hopalong scenario. That would create exactly what I'm looking for: A CPU that costs more than the Obama deficit.
zulfadhli: Last week I went on to the Apple site to price out a MacPro that's set up like my next "desired" rig. The cost with a current i7 2600 was over 8 grand. At those prices Steve Jobs can use his Z68s chips to dip in salsa as far as I'm concerned.
halfcalf said:
zulfadhli: Last week I went on to the Apple site to price out a MacPro that's set up like my next "desired" rig. The cost with a current i7 2600 was over 8 grand. At those prices Steve Jobs can use his Z68s chips to dip in salsa as far as I'm concerned.
These boards have been selling in China for over two weeks now. I am surprised the US hasn't even begun to sell Z68 boards...
bruce555
May 5, 2011 8:29:11 PM
Oh wow, it's been a while since my last upgrade and I wasn't expecting this outta Ivy. Well, even knowing this tech advancement is in that processor, it's too much for me to ignore or pass up. Gonna have to wait some more. God, I can't wait for other companies to start copying this tech, especially GPU companies; we've been at a limit for so long with them and this is something that could be a complete game-changer once implemented. But I still feel Nvidia will go the route of "our GPUs were this size before and they'll stay this size with this much more horsepower" rather than using the tech to bring down GPU die sizes.
bruce555 said:
Oh wow, it's been a while since my last upgrade and I wasn't expecting this outta Ivy. Well, even knowing this tech advancement is in that processor, it's too much for me to ignore or pass up. Gonna have to wait some more. God, I can't wait for other companies to start copying this tech, especially GPU companies; we've been at a limit for so long with them and this is something that could be a complete game-changer once implemented. But I still feel Nvidia will go the route of "our GPUs were this size before and they'll stay this size with this much more horsepower" rather than using the tech to bring down GPU die sizes.
What are your thoughts on the heat produced by graphics cards in general? I can put up with the size, but I wish the temps were somewhat (no, much) lower than what they are at present.
bruce555
May 6, 2011 4:05:23 AM
Heat is a huge issue right now; temps of 80-90C under non-OC full load are insane. It's the same sort of general thinking I was talking about with size: they will keep pushing the thermal threshold as well. Remember when the 6800 Ultra came out, it was an ugly jump from ATI's sub-X800 series: a dual-slot cooler with a 400+ watt PSU requirement. Even the OC'd 9800 XTs were single-slot with tiny fans.
I do feel that almost no high-end GPU will last longer than 5 years, but at least good companies have good warranties, and unless buying second-hand I've never been stuck with a dead vid card and no warranty to save my a$$. 'Cept from Sapphire, which left me completely SOL.
halfcalf
May 6, 2011 11:38:36 AM
Not a huge amount of new info, but interesting nonetheless:
http://www.cpu-world.com/news_2011/2011050402_Intel_Xeon_E5-1600_and_E5-2600_processor_details.html
What I don't understand is:
"The memory controller will support up to 3 DIMMs per channel. When used with 8GB DIMMs, single CPU may utilize up to 96 GB of RAM, or up to 192 GB in dual-processor configuration"
If it's 3 DIMMs per channel and there's 4 channels does that mean that at least the server mobos can have 12 RAM slots? I'm salivating just thinking about it. Is there really any reason why such a mobo couldn't be plunked into a single computer/workstation and work just like an i7 rig?
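The quoted numbers do check out. Here's the multiplication spelled out (channel count, DIMMs per channel, and DIMM size are as given in the CPU-World quote):

```python
# Max memory = channels * DIMMs-per-channel * DIMM size * CPU sockets.
def max_memory(channels, dimms_per_channel, dimm_gb, sockets=1):
    """Return (total DIMM slots, total capacity in GB)."""
    slots = channels * dimms_per_channel * sockets
    return slots, slots * dimm_gb

slots, gb = max_memory(channels=4, dimms_per_channel=3, dimm_gb=8)
print(f"single CPU: {slots} slots, {gb} GB")   # 12 slots, 96 GB

slots, gb = max_memory(channels=4, dimms_per_channel=3, dimm_gb=8, sockets=2)
print(f"dual CPU: {slots} slots, {gb} GB")     # 24 slots, 192 GB
```

So yes: 4 channels x 3 DIMMs per channel means a single-socket board could in principle expose 12 RAM slots, matching the article's 96 GB and 192 GB figures.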
halfcalf
May 6, 2011 12:25:55 PM
Heck, I don't care if the motherboard has to be the size of a pool table, I'd do anything for 12 RAM slots.
In the old days there were a lot of differences between Xeon systems and regular PCs, such as the 771-775 slot, registered memory, etc. With the SB-Es I can't see a single frickin' thing that really differentiates the i7 (or i9?) systems from the Xeons, other than they may be internally configured to go into dual or quad socket mobos. So since I'm seriously considering the possibility of selling my blood and living on bread and water between now and the end of the year, what would be the drawbacks to setting up my own personal system on an SB-E Xeon single socket and the least expensive mobo with those 12 luscious RAM slots?
bruce555 said:
Heat is a huge issue right now; temps of 80-90C under non-OC full load are insane. It's the same sort of general thinking I was talking about with size: they will keep pushing the thermal threshold as well. Remember when the 6800 Ultra came out, it was an ugly jump from ATI's sub-X800 series: a dual-slot cooler with a 400+ watt PSU requirement. Even the OC'd 9800 XTs were single-slot with tiny fans.
I do feel that almost no high-end GPU will last longer than 5 years, but at least good companies have good warranties, and unless buying second-hand I've never been stuck with a dead vid card and no warranty to save my a$$. 'Cept from Sapphire, which left me completely SOL.
Heh, I'm still using my ancient 8800 GTX, which draws around 160 watts at full load, no OC. Bought a 5770 to upgrade but went back to the 8800 due to my Nvidia chipset having issues with ATI drivers, or else it's XP's fault, or both... With high-end GPUs, I think the power draw will remain at 250-300 watts even with process shrinks, simply because that's what gamers are used to putting up with, and new games will require more processing power. So the die shrinks/process advances will go toward raising the latter while keeping the former about the same.
