Intel's next-gen Sandy Bridge summarised

August 28, 2010 6:32:17 AM

Just found out some details & benchmarks about Sandy Bridge. Here's what should be of interest:

1) Clock for clock, Sandy Bridge seems to offer a 10% increase in performance compared to similarly priced Nehalem processors (forget the Thubans)

2) The 32 nm process lets them post even lower overall power consumption than the current Lynnfields (& obviously much lower than AMD's)

3) The integrated graphics are good. It's fast enough to put all previous attempts at integrated graphics to shame and compete with entry-level discrete GPUs. If you were planning on spending $50 on a GPU - you may not need to with Sandy Bridge. Of course, there's no comparison to higher-end GPUs (not yet)

4) The main hitch: overclocking
Whatever rumours you have been hearing are partly true. Without going into the details, this is what should bother us: Intel gives us less headroom at the budget-mainstream end of processors. Meaning you won't be able to overclock a budget Sandy Bridge as well as, say, a budget Clarkdale like the i3 530/540. The enthusiasts should have no problem: whatever numbers they achieve in their current Nehalems and Gulftowns should be achievable by the equivalent Sandy Bridges, courtesy of unlocked multipliers.

Will keep posting should something more come up.

August 28, 2010 7:25:05 AM

True for many of us, not all.

I was really surprised by the GPU part of the review. I'm not sure if they finally learned how to do this from the Larrabee project, or if it's from moving it on-die, or the 32nm process, or ??? But it's nice to see their onboard graphics aren't the worst thing out there. Give them some time; higher-end parts in the future, perhaps?
August 28, 2010 1:05:39 PM

Quote:
There is no need to overclock. Base frequency of most Sandy Bridges is above 3.2 GHz, which is sufficient for any game.

Well, since when did anybody start overclocking out of sheer need? We all do it because we can :sol: 

Personally, I feel the restricted overclocking is the only grey area in an otherwise bright outlook. OCing also pushes the board manufacturers to pack newer, smarter tech into their products, resulting in better builds, higher-quality components & other bells & whistles.

Quote:
But it's nice to see their onboard graphics aren't the worst thing out there. Give them some time; higher-end parts in the future, perhaps?

I guess that would be the worst nightmare for Nvidia (not ATI, since AMD has Fusion to counter it)


August 28, 2010 2:07:04 PM

Taking Fusion to its extreme, Nvidia is all but done for. The GPU will be folded into the die of the CPU, and used for all FP calculations. Bulldozer is perfect for this as AMD is doubling up on the Int cores. Intel is currently ahead on doing this, but both are far from where they need to be.

The thing is, once this is done, what's left for Nvidia? No chipset business, no need for lower-end cards, and even CUDA might not be enough once people start using their GPU/CPU combos to convert things. The only market they would have left is mid to high-end cards. A market that gets horribly cramped if Intel enters the race.
August 28, 2010 4:57:45 PM

Actually there's already another thread here on the Anandtech SB preview.

Keep in mind the "10%" IPC improvements over Nehalems are with an engineering sample with early BIOS and drivers, not a shipping product. Also, more importantly, Anand said that Turbo was not working on the sample, and if it was, he expected between 13% and 17% IPC improvement over Nehalem. I'd guess 20% improvement once the chip ships. And the on-die GPU was the 6 execution unit version, not the 12 EU that will also be available.

Finally, there are unlocked multiplier versions available as well, for those who want to OC.

Overall, this is much more impressive than the now-suspect SB previews that were floating around on Xtreme Systems a month or two ago.
August 28, 2010 5:34:56 PM

^Wait a second.....

That SB only had half the EUs of the top end SB?

That could potentially put it almost all the way over the top of the HD5450 and make it start nipping at the heels of an HD5500+....

That's not bad at all.
August 28, 2010 7:44:47 PM

Quote:
There is no need to overclock. Base frequency of most Sandy Bridges is above 3.2 GHz, which is sufficient for any game.

<sarcasm>Oh yeah that's right. Because everyone plays games and gamers only play games. Yep. That's all computers are good for.</sarcasm>

Do you guys really want Nvidia to go out of business?
I was about to say that would make ATI (AMD) a monopoly in the dedicated graphics card market, and I believe there are laws against that, but then I remembered this:
http://www.s3graphics.com/en/products/class2.aspx?serie...
August 28, 2010 9:13:20 PM

4745454b said:
Taking Fusion to its extreme, Nvidia is all but done for. The GPU will be folded into the die of the CPU, and used for all FP calculations.


You're not going to be sticking a 300W GPU into a 100W CPU any time soon. Nor does it make sense, since gamers generally replace GPUs far more often than CPUs, so why would they want to be forced to replace both at the same time?

But I agree, if Intel do manage to destroy the market for low-end GPUs then it's likely to hit Nvidia hard.
August 28, 2010 10:12:40 PM

This has been known for quite a while now, and I actually thought it'd come sooner; at least this level would have already been here.
nVidia knows this as well, has known it for some time, and is heading in the only direction left to them.
Or notice the lack of urgency on the low-end Fermis.
August 28, 2010 10:56:26 PM

You mentioned that the integrated GPU on the Sandy Bridge will be the equivalent of an entry-level video card and can't compare with high-performance discrete GPUs yet.

Will there be options to have a Sandy Bridge with its internal GPU and also have a high-end discrete GPU? And how much would performance increase on the notebook because of both the discrete and integrated graphics components?
August 28, 2010 10:58:43 PM

Ryuzaki said:
You mentioned that the integrated GPU on the Sandy Bridge will be the equivalent of an entry-level video card and can't compare with high-performance discrete GPUs yet.

Will there be options to have a Sandy Bridge with its internal GPU and also have a high-end discrete GPU? And how much would performance increase on the notebook because of both the discrete and integrated graphics components?


I'm sure that will be an option. There are already some notebooks (most notably those with Nvidia Optimus) that use the Intel integrated graphics for 2d and low power 3d applications to save battery, and then turn on the high performance discrete GPU for games. I doubt that they will do any sort of GPU combinations (like SLI or CF) combining the Intel with the discrete though.
August 28, 2010 11:48:40 PM

If this:

SuperPi 1M
Q6600 @2.5 GHz - 20 sec
Sandy @2.5 GHz - 16.349 sec

is true, then things may yet get interesting.
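
Quick math on those numbers (leaked and unverified, so grain of salt). SuperPi is single-threaded, so this is effectively a per-core, clock-for-clock comparison:

```python
# SuperPi 1M times quoted above, both at the same 2.5 GHz clock
q6600_time = 20.0     # seconds, Core 2 based Q6600
sandy_time = 16.349   # seconds, Sandy Bridge sample

speedup = q6600_time / sandy_time      # throughput ratio
saved = 1 - sandy_time / q6600_time    # fraction of wall time saved

print(f"{speedup:.2f}x faster, {saved:.1%} less time")
# -> 1.22x faster, 18.3% less time
```

If real, that's on the order of 20% clock-for-clock over a Core 2 quad in this one benchmark, at least in the same ballpark as the ~10%-over-Nehalem figure mentioned earlier in the thread.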
August 29, 2010 12:23:43 AM

JAYDEEJOHN said:
This has been known for quite a while now, and I actually thought it'd come sooner; at least this level would have already been here.
nVidia knows this as well, has known it for some time, and is heading in the only direction left to them.
Or notice the lack of urgency on the low-end Fermis.

+1. Or notice how Nvidia seems to be pushing the Tesla cards for HPC and the like. I'm pretty sure the profit margins in HPC are good enough for those that specialize in it not to go out of business; for example, look at SGI, etc.
August 29, 2010 12:40:39 AM

cjl said:
I'm sure that will be an option. There are already some notebooks (most notably those with Nvidia Optimus) that use the Intel integrated graphics for 2d and low power 3d applications to save battery, and then turn on the high performance discrete GPU for games. I doubt that they will do any sort of GPU combinations (like SLI or CF) combining the Intel with the discrete though.


They even have them in some netbooks, but unless the discrete GPU they put in is better than entry-level, Intel's new IGP might just kill that off.

I could see it in an Alienware laptop that has a GTX480M or HD5870M: use Intel's IGP for low-end gaming to save power on battery, and the discrete GPU for high-end gaming when plugged in.

JAYDEEJOHN said:
Not the same field, but fewer parts in production for specific jobs can bring in some coin
IBM Describes Fastest Microprocessor Ever
http://www.pcmag.com/article2/0,2817,2368260,00.asp


Hah. It's probably very fast for what it does. But 5.2GHz? Maybe the fastest-clocked stock chip. AMD had a 7GHz quad and Intel had an 8GHz single.
August 29, 2010 1:24:28 AM

jimmysmitty said:


I could see it in an Alienware laptop that has a GTX480M or HD5870M: use Intel's IGP for low-end gaming to save power on battery, and the discrete GPU for high-end gaming when plugged in.

That's what mine does (I have a brand new M11x with an i7 UM): it uses the GT335M for gaming, and the Intel on-chip (MCM) IGP for Windows usage. It does a darn good job too; I get 5-7 hours of battery life on a computer that is small, light, and gaming-capable (although I can only game for 1.5-2 hours on battery).
August 29, 2010 4:35:30 AM

Quote:
You're not going to be sticking a 300W GPU into a 100W CPU any time soon. Nor does it make sense, since gamers generally replace GPUs far more often than CPUs, so why would they want to be forced to replace both at the same time?


Not what I said, or what I meant. By bringing the GPU into the die of the CPU, you don't have to have an FP processor in the chip any more. You'll just use the shaders on the GPU to run that type of math (like CUDA). And if you can get "good enough" graphics performance with this, then Nvidia is in serious trouble due to the lack of an x86 license. CUDA is all that's left for them, and seriously high-end GPUs. At that point they become an afterthought, like S3.

I don't think AMD will have legal problems if this happens. First, they had nothing to do with Nvidia's exit from the market. Second, Intel would still be the biggest provider of graphics chips, just like they are now. No legal problems there. We are still a ways away from this happening, but there is starting to be enough writing on the wall that I'm not sure this can be avoided for Nvidia. At least not with their current relationship with Intel. I don't want one less player in the market; I'll be sorry if this actually does happen.

I'll have to go back and look at that review. If that was the lower-end GPU from Intel then HOLY $HI7! There would be no reason to buy a low-end card anymore. (As long as it can do HTPC duty, that is. Does it support bitstreaming?)
August 29, 2010 7:31:01 PM

jimmysmitty said:
^Wait a second.....

That SB only had half the EUs of the top end SB?

That could potentially put it almost all the way over the top of the HD5450 and make it start nipping at the heels of an HD5500+....

That's not bad at all.


Not only that, but Anand also thought maybe turbo was disabled on the on-die GPU as well as the CPU.

Still, I'd be more interested in a 6 or 8 core (or 10 core, since Intel mentioned that possibility :p ) SB without any GPU, for desktop anyway, as long as Intel didn't charge arm+leg+torso for it :D .

However if the on-die GPU plays nicely with a discrete card or cards (such as the latter being turned off when not needed, but able to power on instantaneously when needed so that it is completely transparent to the user), then that would be a nice touch...
August 29, 2010 7:35:53 PM

cjl said:
I'm sure that will be an option. There are already some notebooks (most notably those with Nvidia Optimus) that use the Intel integrated graphics for 2d and low power 3d applications to save battery, and then turn on the high performance discrete GPU for games. I doubt that they will do any sort of GPU combinations (like SLI or CF) combining the Intel with the discrete though.


IIRC Lucid had a "Hydra" chip that could get an AMD and nVidia GPU to work together, in a sort of hybrid SLI/Xfire combination. But it sat on the mobo between the PCIe lanes and the GPU, and thus wouldn't work with an on-die GPU. Not unless you had really teensy-weensy hands and an itsy-bitsy soldering iron :p .
August 29, 2010 7:42:05 PM

I'll call the guy from Burger King and see if he owns a soldering iron (the guy on the left).
August 29, 2010 8:40:40 PM

After looking at the Anandtech review, I'm more impressed than I thought I'd be. The onboard graphics are better than I expected, though I don't really care much about that; the arch improvements are far greater than I had anticipated. Intel still needs a better form of core multiplication/hyper-threading. This is even more impressive considering that the top-end part, likely priced where the i7 is now, is clocked 300 MHz higher than the one tested, will have functioning Turbo Boost, and likely a few other improvements, for perhaps another 10-20% overall performance increase in single-threaded applications.

I do hope they release some without the IGP, maybe with a faster clock rate/lower energy consumption, or one with maybe 5770+ IGP performance. This is looking very promising if it lands in the $100-300 range, which it very well might and should.

Only two things concern me: chipset and overclocking. Whoever said that there is no need to overclock should be shot! :lol:  Might be a bit harsh, but OCing extends the useful life of the CPU, and it speeds up things BESIDES gaming, although it's not like ANYBODY does anything else besides gaming :pfff:  And if neither of those, sheer bragging rights. This might be all well and good, but if Intel doesn't price the K series right, I might have to skip this and do BD. While a 3.1 GHz SB beats a 2.8 GHz Lynnfield, it might beat a 3 GHz Bulldozer; but what if that Bulldozer OCs to 4.3 GHz? I doubt it will fare as well against that. That's disappointing to me, although it really is a very smart move by Intel; I just hope AMD doesn't do the same.

The P55 chipset was pretty weak. While Tom's proved that x8, and even x4, didn't cost much performance, in the future it will: if the 6870 is 15% faster than the 5870, then x8 will be costing 20% performance. Also, the lack of native support for USB 3.0 is questionable, though motherboards will just add it themselves.

All in all, very impressive, but if anything brings it down, it will definitely be the lack of overclocking on the normally priced non-"K" series, and possibly weak chipsets/mobos. Maybe even the sheer strength of Bulldozer, but AMD has definitely got their work cut out for them, especially since Intel is fully invading their sub-$200 market. Although I do wonder, how long will Intel keep 1155? By Christmas are we going to have 1154? Knowing Intel, I wouldn't be surprised :lol: 
August 29, 2010 10:41:19 PM

^For the discrete GPUs, I would hope that the P6x series of chipsets will have PCIe 3.0. If so, an x8 link would be as fast as, or even faster than, a PCIe 2.0 x16, which means it would probably take an HD7k/GTX600-series card to show any performance drain from the bus.
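
For reference, the bandwidth arithmetic behind that (spec numbers: PCIe 2.0 runs 5 GT/s per lane with 8b/10b encoding, PCIe 3.0 runs 8 GT/s with 128b/130b):

```python
def lane_gbs(transfer_rate_gt, enc_payload, enc_total):
    # effective GB/s per lane after encoding overhead (8 bits per byte)
    return transfer_rate_gt * enc_payload / enc_total / 8

pcie2 = lane_gbs(5.0, 8, 10)      # 0.500 GB/s per lane
pcie3 = lane_gbs(8.0, 128, 130)   # ~0.985 GB/s per lane

print(f"PCIe 2.0 x16: {16 * pcie2:.2f} GB/s")  # 8.00 GB/s
print(f"PCIe 3.0 x8:  {8 * pcie3:.2f} GB/s")   # 7.88 GB/s
```

So a 3.0 x8 slot is within a couple percent of a 2.0 x16 slot, which is where the "as fast as" claim comes from.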

I for one don't think Intel is worried about Bulldozer, since GF has been having trouble with 32nm, and by the time it's out in 2011, Intel will be ramping Ivy Bridge on the 22nm process, which will possibly also use gen-3 HK/MG.

As for Bulldozer, I am waiting to see it in the wild. Due to the major difference from a K10.5 based CPU, I want to see what they can do in terms of overclocking or if the new features they have on it might also prevent it.

The biggest reason why SB has problems OCing like normal is that most of the northbridge is integrated into the CPU itself. If Bulldozer follows suit and has the clock generator integrated onto the CPU, it might put the same limitation on OCing.
August 30, 2010 12:23:16 AM

Quote:
The biggest reason why SB has problems OCing like normal is that most of the northbridge is integrated into the CPU itself. If Bulldozer follows suit and has the clock generator integrated onto the CPU, it might put the same limitation on OCing.


The problem isn't that the clock generator is on the CPU; it's been that way for a long time. The problem is Intel decided to use ONE clock generator for everything that needs a clock. This means you change the clock for one thing (the CPU) and ALL the clocks change: RAM, PCIe, PCI, all of them. And because there is only the one clock, there is no way to lock the others down. Had they put in separate clocks for the other devices, they wouldn't have this problem now.
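
To make the coupling concrete, here's a toy model. The ratios below are made-up placeholders, not Intel's actual dividers; the point is just that one BCLK feeds every domain:

```python
# Toy model: every bus frequency is derived from the single base clock,
# so nudging BCLK drags RAM, PCIe and SATA out of spec along with the CPU.
# Multipliers/dividers here are illustrative, not Intel's real ratios.
def derived_clocks(bclk_mhz, cpu_mult=33):
    return {
        "CPU (MHz)":  bclk_mhz * cpu_mult,
        "RAM (MHz)":  bclk_mhz * 13.33,   # ~DDR3-1333 at stock
        "PCIe (MHz)": bclk_mhz,           # 100 MHz nominal
        "SATA (MHz)": bclk_mhz,           # also tied to BCLK
    }

print(derived_clocks(100))  # stock: everything in spec
print(derived_clocks(105))  # +5% BCLK: PCIe/SATA already out of spec
```

With separate PLLs (or lockable dividers) per domain, only the CPU row would move; with one shared clock, every row moves together, which is why bus OCing tops out after a few MHz.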
August 30, 2010 12:35:34 AM

Right, so upping the bus much at all makes USB 3 and SATA 6Gb/s crash, leaving a max OC of maybe 100 MHz.
August 30, 2010 12:58:21 AM

4745454b said:
Quote:
The biggest reason why SB has problems OCing like normal is that most of the northbridge is integrated into the CPU itself. If Bulldozer follows suit and has the clock generator integrated onto the CPU, it might put the same limitation on OCing.


The problem isn't that the clock generator is on the CPU; it's been that way for a long time. The problem is Intel decided to use ONE clock generator for everything that needs a clock. This means you change the clock for one thing (the CPU) and ALL the clocks change: RAM, PCIe, PCI, all of them. And because there is only the one clock, there is no way to lock the others down. Had they put in separate clocks for the other devices, they wouldn't have this problem now.

Yup, that's what I thought, but exactly WHY did Intel do it, and does it really offer advantages at stock settings? Also, is it even possible to use a different clock (as in an external clock on the motherboard) to drive USB, PCIe, etc.?
August 30, 2010 2:33:06 AM

I have a feeling it does help with stock performance, consolidating everything. I can't see why, but I honestly can't think of another reason to do it. I'm sure Intel would know it would kill overclocking for all intents and purposes, so that's the only reason left.
August 30, 2010 8:51:14 AM

Why did Intel do it? Stated reason or real reason? I'm not sure they gave us a stated reason, and the real reason remains unknown. They probably did it to prevent OCing, but I get the feeling there was a different reason behind it.
August 30, 2010 12:33:14 PM

4745454b said:
Taking Fusion to its extreme, Nvidia is all but done for. The GPU will be folded into the die of the CPU, and used for all FP calculations. Bulldozer is perfect for this as AMD is doubling up on the Int cores. Intel is currently ahead on doing this, but both are far from where they need to be.

The thing is, once this is done, what's left for Nvidia? No chipset business, no need for lower-end cards, and even CUDA might not be enough once people start using their GPU/CPU combos to convert things. The only market they would have left is mid to high-end cards. A market that gets horribly cramped if Intel enters the race.


I STRONGLY disagree. CPUs, as Intel has found out via Larrabee, are not good architectures for rendering. The process of rasterization, as well as most GPU calculations, favors an architecture with lots of weak cores, as opposed to a few powerful ones.

Further, ray tracing is the future, which favors lots of weaker cores even more (albeit far more powerful than we have now...). In short: the GPU isn't going anywhere.
August 30, 2010 2:08:10 PM

4745454b said:
Why did Intel do it? Stated reason or real reason? I'm not sure they gave us a stated reason, and the real reason remains unknown. They probably did it to prevent OCing, but I get the feeling there was a different reason behind it.


I don't think it makes much of a difference; most OCers (who make up a very tiny part of the market compared to Joe Blow buying some PC at Walmart or Best Buy) will go for the unlocked versions, assuming they are just a few bucks more than the locked ones.

IIRC most clock generators are pretty simple dividers or multipliers connected to a master PLL-type clock generator, and they don't take up much die space. If you have to run multiple clock lines to a part of the chip, however, that can take up space (either on-plane or in another connection layer). My bet is that the various generators are there, just disabled by Intel on the non-oc'able versions. Maybe some OEM will discover a way to re-enable them, sorta like unlocking extra cores on an X2 or X3 :D .
August 30, 2010 4:07:33 PM

Perhaps, but I doubt it. The unlocked chips don't allow OCing by changing the bus, only by the CPU multiplier; the bus has to stay at 100 for everything else to work. I honestly get the feeling that Intel is telling us the truth on this: there is only one PLL on the chip, and it runs everything. This is almost a bit like the early days of the P4, where you couldn't OC the bus past 220 MHz or so (starting at 200); 217 MHz was a common stopping point. Some manufacturers were able to find a way past the problem, but that was a different problem.

Anyone seen any Intel papers on this? I'd love to know more. Pure guess on my part: is there an issue running that many PLLs so close together? Or when using transistors this small?
August 30, 2010 4:31:36 PM

While I can't speak to Sandy Bridge in particular, I will say that having multiple different clock domains incoming to a chip throws a significant monkey wrench into simulation (and, hence, pretty-much all pre-silicon debug efforts), and the interclock skewings tend to make post-Si validation a bitch as well.

It's better with more clocks being generated internally, but it's still no picnic.
August 30, 2010 6:57:46 PM

archibael said:
While I can't speak to Sandy Bridge in particular, I will say that having multiple different clock domains incoming to a chip throws a significant monkey wrench into simulation (and, hence, pretty-much all pre-silicon debug efforts), and the interclock skewings tend to make post-Si validation a bitch as well.

It's better with more clocks being generated internally, but it's still no picnic.


Yup, that's what I remember from a VLSI design course I took in college many moons ago :p . Sometimes it seemed like it would be simpler to just go with independent clocks and asynchronous interface logic at the inputs to each domain, but then you get into metastable state problems and possible race conditions.
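
For anyone who hasn't bumped into it, the usual fix at a clock-domain crossing is a two-flop synchronizer. Here's a toy software sketch of the idea (the coin flip is my stand-in for the analog metastability window; real hardware needs actual back-to-back flip-flops and timing analysis):

```python
import random

def two_flop_sync(async_levels, seed=1):
    # Sample an asynchronous signal through two flip-flops clocked in
    # the destination domain. If the input toggles right at a clock
    # edge, the first flop may capture either the old or new value
    # (modeled here with a coin flip); the second flop re-samples a
    # full cycle later, so downstream logic only sees settled values.
    rng = random.Random(seed)
    ff1 = ff2 = prev = 0
    out = []
    for level in async_levels:   # one sample per destination clock edge
        ff2 = ff1                # second stage: the clean output
        ff1 = rng.choice([prev, level]) if level != prev else level
        prev = level
        out.append(ff2)
    return out

print(two_flop_sync([0, 0, 1, 1, 1, 0, 0, 0]))
# lags the input by a cycle or two, but never glitches mid-cycle
```

The price is a couple of cycles of latency per crossing plus validation effort for every extra domain, which is presumably part of why Intel kept SB's clock tree so simple.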
August 30, 2010 9:50:48 PM

If somehow they could turbo to this finer grain, they could do it, but methinks it'd be a tad costly.
August 31, 2010 1:19:04 PM

I understand that the SB integrated graphics beat entry-level dedicated GPUs. How will this affect notebook configurations? Is SB going to be paired up with high-end dedicated cards or not?

I'm currently waiting for this year's Intel and nVidia releases to buy a notebook. I want to upgrade from a desktop Intel Celeron 2GHz, 1GB RAM, ATI 9600 series card. So is it worth waiting for SB in 2011, in terms of performance gain, versus, say, a laptop with a Core i5 450 and a 435M? I'm not a hard-core gamer, but I want to play from time to time, and I've found that new games list a 2.4 GHz processor in their system requirements. I need the notebook for 3D animation: animation and mid-quality rendering.
August 31, 2010 4:44:02 PM

For casual gaming on a laptop, the integrated graphics look to be sufficient. For serious gaming, a dedicated card will still be needed.
August 31, 2010 4:56:11 PM

Quote:
I understand that the SB integrated graphics beat entry-level dedicated GPUs. How will this affect notebook configurations? Is SB going to be paired up with high-end dedicated cards or not?

For true gaming laptops, SB will not be a replacement for a top-end dedicated GPU. However, if switchable graphics tech is implemented, even a gaming laptop should be able to get quite good battery life. SB is aimed at low-end dedicated GPUs such as the GeForce 210/GT220, HD5450, etc.
August 31, 2010 5:31:44 PM

Well, I'm thinking Intel's and AMD's on-chip IGP solutions will finally make up for TSMC's poor ability to supply silicon heheh
September 6, 2010 12:58:34 PM

It seems like Core i7 was just to buy time for Sandy.
It's like Core i7 is an incomplete version of Sandy, and also a major preview of Sandy.
I believe AMD never made Phenoms to compete with the i7, but with the Core 2 Quads.
In my opinion, Intel wanted money and time to develop the full version of Sandy, released i7 as a diversion from the real thing, and then released the 980X, which is closer to Sandy.
September 6, 2010 1:06:07 PM

dragon5677 said:
It seems like Core i7 was just to buy time for Sandy.
It's like Core i7 is an incomplete version of Sandy, and also a major preview of Sandy.
I believe AMD never made Phenoms to compete with the i7, but with the Core 2 Quads.
In my opinion, Intel wanted money and time to develop the full version of Sandy, released i7 as a diversion from the real thing, and then released the 980X, which is closer to Sandy.


Nope, actually Intel made the i7s 2 years ago. It's just the natural course of tech progression.
September 6, 2010 1:32:51 PM

Even Sandy Bridge work started way back in 2005, and it's only coming to the market now. So it's quite normal for processors to take some time to get out into the market.
September 6, 2010 2:51:31 PM

Yeah, who knows what Intel and AMD are working on or thinking of now :lol: 
September 6, 2010 3:18:41 PM

Maybe after Sandy Bridge it would be Stony Bridge!! :lol: 
October 30, 2010 12:24:17 AM

hell_storm2004 said:
Maybe after Sandy Bridge it would be Stony Bridge!! :lol: 


Actually, the next processor line after Sandy Bridge will be code-named "Haswell".

It's expected to be released around 2013. The standard for the line will be 8 cores, with, I believe, the option to go up to 16 cores with Hyper-Threading. The processor itself will be built on a 22nm process like Ivy Bridge, with the second generation, "Rockwell", coming out on a 16nm process.

And the wheels of technology keep on spinning.....
October 31, 2010 1:28:13 PM

^ Hmm, wonder if the gen after "Rockwell" will be named "Roswell" :D 