
Intel to launch six core CPU

February 23, 2008 10:06:17 AM

The Inquirer reports:

Quote:

CHIP FIRM Intel is preparing to introduce a six core chip called the "Dunnington", a processor that will pave the way for its Nehalem architecture later this year.

According to Eclipse, the “Dunnington” was designed in Bangalore, and will use three dual core 45 nanometre Penryn processors with a shared 16MB L3 cache.

It will use the “Clarksboro” chipset, the report said.


http://www.theinquirer.net/gb/inquirer/news/2008/02/23/...

As an AMD fan, I've wondered why AMD couldn't do quad cores at 65nm like Intel, but I read somewhere that HyperTransport made that unworkable. Now, isn't Nehalem supposed to have Intel's version of HyperTransport? Then how does this pave the way for Nehalem if it's 3 dual core CPUs in one package? Will it rely upon the FSB to communicate with memory?

All very interesting, if true. If AMD did the same, we could see six cores made from two Phenom 8xxx's in one package. Makes me think that 8 core Intel CPUs aren't that far away, even if they're the Nehalem equivalent to the Pentium D or Q6600 in packaging. This will be more bad news for Phenom, unless AMD pulls a 45nm rabbit out of its hat.


February 23, 2008 11:41:36 AM

Interesting.

I'm a bit dubious about the 3 dual core Penryns claim; from all accounts Nehalem will be a 'native' design from 2 cores all the way up to 8.

It's The INQ; they get so many things wrong that I'll assume guilt until proven innocent in their case. ;) 
February 23, 2008 12:13:38 PM

yipsl said:
The Inquirer reports:

Quote:

CHIP FIRM Intel is preparing to introduce a six core chip called the "Dunnington", a processor that will pave the way for its Nehalem architecture later this year.

According to Eclipse, the “Dunnington” was designed in Bangalore, and will use three dual core 45 nanometre Penryn processors with a shared 16MB L3 cache.

It will use the “Clarksboro” chipset, the report said.


http://www.theinquirer.net/gb/inquirer/news/2008/02/23/...

As an AMD fan, I've wondered why AMD couldn't do quad cores at 65nm like Intel, but I read somewhere that HyperTransport made that unworkable. Now, isn't Nehalem supposed to have Intel's version of HyperTransport? Then how does this pave the way for Nehalem if it's 3 dual core CPUs in one package? Will it rely upon the FSB to communicate with memory?

All very interesting, if true. If AMD did the same, we could see six cores made from two Phenom 8xxx's in one package. Makes me think that 8 core Intel CPUs aren't that far away, even if they're the Nehalem equivalent to the Pentium D or Q6600 in packaging. This will be more bad news for Phenom, unless AMD pulls a 45nm rabbit out of its hat.


The thing is that AMD don't do a quad core like Intel, as Intel's quad cores are almost a year older and still faster!

I'm not flame baiting; this is the truth...

Anyhow, you watch: Intel will go on to 8 cores, then 12... I don't think Intel will let themselves fall behind again like they did in the Prescott / Athlon 64 days.

I've seen pictures from Intel of an 80 core processor plugged into a Socket 7 motherboard running XP...

Just waiting for the 128 bit processor; they must be in the labs by now, waiting for software to catch up...

PS: Why aren't we all just running a 64 bit OS right now?

Microsoft should have modified XP and Vista with a major service pack adding a dual kernel mode: one kernel for compatibility with the old, and a 64 bit one for future-proofing new applications... Nothing's impossible with software...
As for the drivers, have dual mode built in with far more compatibility thrown in...
Why can't software people get together and get this right?

February 23, 2008 12:37:24 PM

The fundamental fact is, Vista should have been 64 bit only, or 32 bit only, forcing the matter.

If by paving the way to Nehalem they mean the product right before, i.e. the last Penryn, then yes, their information might be correct. But what was described in that article has nothing to do with Nehalem itself, unless I am VERY much mistaken.
February 23, 2008 1:11:28 PM

Interesting... hopefully we can hear more about this, as I may hold off on one of their new Penryns if this is coming out in Q3 or earlier.


Doubt it will though...
February 23, 2008 1:13:03 PM

Nehalem will use native 2, 4, and 8 core designs. They will be using the same strategy as AMD then.

I'm not sure if I need 4 cores now; six would be too much for me. I guess there are some people who could really benefit from this, though.
February 23, 2008 1:20:05 PM

Looks like FUD.

February 23, 2008 1:26:33 PM

Isn't this Intel's grand plan to do away with Sun? i.e. UltraSPARC.
February 23, 2008 4:18:45 PM

I could see this. Nothing Intel has ever done has been set in stone. Changes can always happen.
February 23, 2008 5:11:36 PM

Tom's initial review of the 45nm chips showed a picture indicating that 8 cores would actually fit. I'm sure Intel could make a 6 core 45nm part: it would fit on the die, and the power usage should also be OK. The issue is the FSB on 6 cores. The FSB would probably be a bottleneck for 6 cores, but I think the extra 2 cores would still add a nice performance gain.

The main thing is whether Intel will actually make this CPU. Right now they don't even have the mainstream 45nm quads out, so Intel attempting to bring out a 6 core CPU doesn't seem likely.
February 23, 2008 5:15:05 PM

That link doesn't work too well for me, because I can't read Japanese.
February 23, 2008 5:15:18 PM

Why not? If it works, people will buy it. If they did, they wouldn't bring it out at the same time as the quads; they want you to buy those and then have to upgrade later on.


Then of course buy new Intel mobos to support their new ones in Q4.
February 23, 2008 5:23:19 PM

Zorg said:
That link doesn't work too well for me, because I can't read Japanese.

:lol:  Agreed.
February 23, 2008 5:29:49 PM

6 cores COULD make sense when you consider the triple-channel mem interface. Yea, the point IS dumb, but hey.

But why make 6 when you can make 8?
February 23, 2008 5:38:03 PM

Zorg said:
That link doesn't work too well for me, because I can't read Japanese.

Neither can I, but the diagram basically sums it all up:

http://pc.watch.impress.co.jp/docs/2007/1018/kaigai394_01l.gif

This could easily be bogus, but it is interesting.
February 23, 2008 5:51:35 PM

Nope... this is certainly true. It's based on the Penryn core, codenamed Dunnington.

JK once said the die shot looked very amazing... :p 

EDIT: It was planned as a bridging solution between Penryn quad core and Nehalem Beckton.
February 23, 2008 5:54:04 PM

True there, I never expanded them to see. Funny: text in Japanese, diagrams in English.
February 23, 2008 6:35:49 PM

homerdog said:
Neither can I, but the diagram basically sums it all up:

http://pc.watch.impress.co.jp/docs/2007/1018/kaigai394_01l.gif

This could easily be bogus, but it is interesting.

That graphic shows Nehalem with 8 cores only doing 8 threads; correct me if I'm wrong, but I thought Nehalem was capable of two threads per core.
February 23, 2008 7:36:50 PM

lobofanina said:
That graphic shows Nehalem with 8 cores only doing 8 threads; correct me if I'm wrong, but I thought Nehalem was capable of two threads per core.

Yes, the diagram seems to have a few inaccuracies. I personally still have my doubts that this Dunnington thing is real. I mean why haven't we heard of it before now?
February 23, 2008 7:52:16 PM

They would go to 6 before 8 because this CPU would still be based on Intel's current FSB, and 8 cores at 45nm would be tough to pull off as well. Between them sharing the FSB and the power usage/heat it would create, it would be bad. I'm not sure, but I think Intel's roadmap doesn't have 8 cores until 32nm Nehalem.

A 45nm triple Whopper with cheese CPU should be able to work well. It would not take much more silicon space than the 65nm quads. A 2.6GHz triple cheeseburger should be able to perform well even with the FSB. It also shouldn't draw more power than current higher clocked quads.

I could see Intel releasing a 6 core Penryn a couple of months before Nehalem. As long as it worked on the current mobos that support 45nm quads, it would be a success for those people who use programs that take advantage of multi core.
February 23, 2008 9:20:38 PM

someguy7 said:

I could see Intel releasing a 6 core Penryn a couple of months before Nehalem. As long as it worked on the current mobos that support 45nm quads, it would be a success for those people who use programs that take advantage of multi core.


I'm pretty sure this is a server CPU. As such, it'll be interesting to see how much the FSB bottlenecks a chip like this, since we are already seeing FSB bottlenecking on server workloads even on quad core Xeons.

Of course this chip has 16MB L3, which may offset the bottlenecking somewhat.
February 23, 2008 9:26:58 PM

If the 6 core is based on how quads are made atm, it won't have a shared 16MB, will it? It'll either have 3x 4MB or 3x 6MB.
February 23, 2008 10:20:22 PM

Hellboy said:
The thing is that AMD don't do a quad core like Intel, as Intel's quad cores are almost a year older and still faster!

I'm not flame baiting; this is the truth...

Anyhow, you watch: Intel will go on to 8 cores, then 12... I don't think Intel will let themselves fall behind again like they did in the Prescott / Athlon 64 days.

I've seen pictures from Intel of an 80 core processor plugged into a Socket 7 motherboard running XP...



The Phenom 9600 is 14% slower overall than the Q6600, but that includes games, which do not take advantage of quad cores. It's actually competitive against the C2D and Q6600 in about half the non game apps tested in the Wolfdale article here at Tom's. When Phenom gets to 45nm, we should see improvement. It still won't catch up to Nehalem if Phenom's only 3.2 at the top, but Nehalem goes up to 3.6 native and overclocks like a dream.

So, it's not great for AMD, but if they get the price point right (which means mainstream) then they can still make a profit like in the K6-2 days. As far as it goes, this wasn't an "Intel beats AMD" post on my part. I wanted to start a discussion about the oddity of three dual core Penryns in one package. I'd thought any version of HyperTransport made that impractical.

That 80 core is a concept running under their conditions. It's probably a decade away from being needed. It's like the great concept cars that never end up in the showroom. It doesn't mean the cars actually in the showroom are better than their competitor's models because the company also has a concept car.

Intel's current CPUs are better than AMD's incremental improvement to the Athlon X2, but not phenomenally better. They were phenomenally better than the Pentium D. AMD just hasn't made that level of improvement.

I guess this is a server CPU, that makes sense. I can see triple and quad cores on the desktop, but software isn't there yet for more than four cores, outside of 3DS Max, Supreme Commander and a few other apps that even make Skulltrail look decent.

So, I'll ask again. Is Intel doing their version of HyperTransport radically differently from AMD, such that they can package dual cores into this configuration? Was HyperTransport the reason AMD had to go native quad core at 65nm, or is that just something they felt they needed to do, but didn't do well enough for the enthusiast market?
February 23, 2008 10:39:38 PM

Woah... hold up. You want to provide some proof in regard to your claims?

Quote:
When Phenom gets to 45nm, we should see improvement.

What makes you think that? Improvement in architecture? Improvement in transistor design? Improvement in thermal dissipation? Improvement in what?

Quote:
It still won't catch up to Nehalem if Phenom's only 3.2 at the top

Again, how do you know Phenom is going to top out at 3.2GHz? Maybe it can go up to 3.6GHz, and maybe it can only do 3.0GHz.

Quote:
Nehalem goes up to 3.6 native and overclocks like a dream.

This is the most questionable. There has been no Nehalem CPU-Z, and no enthusiast has gotten their hands on it. I believe Nehalem will actually clock lower than the C2Qs, as it is much more complex.


yipsl said:
So, I'll ask again. Is Intel doing their version of HyperTransport radically differently from AMD, such that they can package dual cores into this configuration? Was HyperTransport the reason AMD had to go native quad core at 65nm, or is that just something they felt they needed to do, but didn't do well enough for the enthusiast market?


Intel's QPI is not radically different than AMD's HTT, although it does represent a major improvement over HyperTransport.

Due to the fundamental design of Direct Connect, it's nearly impossible to package an MCM. HyperTransport was one of the reasons why AMD needed to go native quad. Another is performance. AMD would not see any performance improvement if they just packaged two K8s together, or even two K10 cores. AMD needed to come up with a game changer, a performance leader. The only choice under that assumption is the use of a native quad.
February 23, 2008 11:48:31 PM

Hatman said:
If the 6 core is based on how quads are made atm, it won't have a shared 16MB, will it? It'll either have 3x 4MB or 3x 6MB.


You beat me to it; I was going to ask if anyone else thought 16MB of cache for a 6 core CPU seemed a bit... odd.
February 24, 2008 12:19:33 AM

Hatman said:
If the 6 core is based on how quads are made atm, it won't have a shared 16MB, will it? It'll either have 3x 4MB or 3x 6MB.

Looks like 3x 3MB L2 and a 16MB L3 shared between all cores. As far as I know this will be the first implementation of an L3 cache in a Core 2.

Aren't some of the Yorkfields going to be 2x3MB?
February 24, 2008 2:35:23 AM

If you look closer at the specification for Dunnington, it is in fact a monolithic die CPU. So if the diagram is true, the L3 will act as a data pool for all 6 cores.

This would probably reduce the FSB effect.
February 24, 2008 3:05:11 AM

I want one
February 24, 2008 6:21:15 AM

Can we call this a Triple Cheeseburger then? Sounds tasty.
February 24, 2008 7:01:41 AM

I'm a little skeptical to be honest.

You'd be hard pressed to **fit** three 45nm Penryn dies under the Intel IHS. That diagram shows a total of 6 cores and 25MB of total cache, meaning more total die area than three Penryn dual cores (6 cores total, 18MB total).

As such I can't see it happening unless it is in a new CPU package.
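For what it's worth, the cache totals in the diagram do add up to the 25MB figure, and they only work if Dunnington has an L3 rather than being three stock Penryns glued together. A quick sanity check (assuming the rumored 3x 3MB L2 plus 16MB shared L3 layout; the numbers come from this thread, not any confirmed spec):

```python
# Rumored Dunnington layout: three pairs of cores, each pair sharing
# a 3 MB L2, plus a single 16 MB L3 shared by all six cores.
num_pairs = 3
l2_per_pair_mb = 3
shared_l3_mb = 16
dunnington_cache_mb = num_pairs * l2_per_pair_mb + shared_l3_mb

# For comparison: an MCM of three discrete Penryn duals,
# each with a 6 MB shared L2 and no L3.
penryn_l2_mb = 6
mcm_cache_mb = 3 * penryn_l2_mb

print(dunnington_cache_mb)  # 25
print(mcm_cache_mb)         # 18
```

So the 25MB total implies a new die with an L3, which fits the skepticism above about it being three packaged Penryns.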
February 24, 2008 7:08:21 AM

Ycon said:
6 cores COULD make sense when you consider the triple-channel mem interface. Yea, the point IS dumb, but hey.

But why make 6 when you can make 8?


That works with AMD as well.. Why make 3 when you can make 4...

Perhaps the 6 core is only an 8 core with 2 bad cores or 2 disabled cores..

Or Intel is just making fun of AMD.. could be...
February 24, 2008 8:55:06 AM

homerdog said:
Yes, the diagram seems to have a few inaccuracies. I personally still have my doubts that this Dunnington thing is real. I mean why haven't we heard of it before now?


Oh, it's real alright. This is *not* three dual cores packaged together. It's a 'native' six core CPU with each pair of cores sharing a 3MB L2 cache, and all six cores sharing a large 16MB L3 cache. It's not eight cores with two disabled or anything of the sort. It fits into the same Clarksboro chipset as Tigerton (the Xeon 7300) and is Intel's 2008 MP server product (not desktops or DP servers) before 'Beckton' (i.e. the native octa core 45nm Nehalem product) in 2009.




February 24, 2008 10:01:39 AM

inquirer link = auto ignore
February 24, 2008 10:52:53 AM

Looks good ... if the software will support it.

February 24, 2008 2:54:57 PM

I wonder what the odds are that there will ever be a 775 version? Not that I need 6 cores, but an L3 would be nice :sol: 
February 24, 2008 5:27:39 PM

mPGA 604? I thought that socket had died long ago....

Why didn't Intel make it LGA 771? :sarcastic: 
February 24, 2008 5:51:25 PM

Because it's built for density.
February 24, 2008 5:53:44 PM

If you don't mind me asking, what is mPGA604 designated for? MP servers?
February 24, 2008 5:58:40 PM

MP rack servers.
February 24, 2008 6:00:22 PM

That's what I thought too... and again, if you don't mind me asking, was the leaked Nehalem score "Beckton-EP", or just QC? :p 
February 24, 2008 7:17:09 PM

Quote:
I'm a little skeptical to be honest.

You'd be hard pressed to **fit** three 45nm Penryn dies under the Intel IHS. That diagram shows a total of 6 cores and 25MB of total cache, meaning more total die area than three Penryn dual cores (6 cores total, 18MB total).

As such I can't see it happening unless it is in a new CPU package.

An individual Penryn die (2 cores, shared 6MB L2) is about 107 mm^2, so three of those would take 321 mm^2, slightly more than the 283 mm^2 of Barcelona. Don't let the two-dimensional illusion trick you: a 35mm x 35mm package is 1225 mm^2, yet to many people a square of 300 mm^2 placed in the center appears to take up nearly half the area.
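The area arithmetic above checks out; here is a quick sketch of it, using the ~107 mm^2 per-die figure quoted in that post (all figures are approximate):

```python
penryn_die_mm2 = 107                 # approx. 45nm Penryn dual-core die (2 cores, 6MB L2)
mcm_area_mm2 = 3 * penryn_die_mm2    # three dies side by side under one IHS
barcelona_mm2 = 283                  # AMD's native quad core, for comparison

package_mm2 = 35 * 35                # a 35mm x 35mm package
fraction = mcm_area_mm2 / package_mm2

print(mcm_area_mm2)                  # 321 -- a bit more than Barcelona's 283
print(package_mm2)                   # 1225
print(round(fraction, 2))            # 0.26 -- about a quarter of the package, not half
```

In other words, three Penryn dies would cover only about 26% of a 35mm package, which is why the "it won't fit" objection doesn't hold.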
February 25, 2008 3:36:28 AM

Hellboy said:


Ive seen pictures of a 80 core processor plugged in to a socket 7 motherboard running xp, from Intel....


Didn't that 80-core unit run at something very slow, like 2 MHz? Anyway, if you really look at it, that chip is more of an FPU or GPU core than a CPU.

Quote:
Just waiting for the 128 bit processor; they must be in the labs by now, waiting for software to catch up...


Transmeta's Crusoe uses 128-bit instructions and the Efficeon uses 256-bit instructions. Granted, these chips are VLIW designs and not native x86 internally. I'd expect not to see anything using 128-bit instructions that isn't VLIW or pretty much a one-off special for at least 15 years. The 64-bit address space has a ton of headroom even with the biggest supercomputer installations.

Quote:
PS: Why aren't we all just running a 64 bit OS right now?


I ran a poll a while back and quite a few of us are. Of the responses, something around 50% use a 64-bit OS (about 2 in 5 use Vista 64-bit.) Most people cited a lack of drivers and, to a much lesser extent, program incompatibility with Windows XP x86_64 and Vista 64-bit as their reason for running a 32-bit OS with more than 3 GB of RAM. I personally have run 64-bit OSes on both of my current computers since I got them. They are both 64-bit capable, and since I run Linux, drivers and 64-bit programs are just as numerous on 64-bit as on 32-bit. Plus, my desktop has 4 GB of RAM and a 64-bit OS is the easiest way to address it all (yes, there is PAE, but...)

Quote:
Microsoft should have modified XP and Vista with a major service pack adding a dual kernel mode: one kernel for compatibility with the old, and a 64 bit one for future-proofing new applications... Nothing's impossible with software...


That is possible, but you would have to virtualize one of the kernels, the system space and much of the userspace as well. I'd pick putting the 64-bit system "on the bare metal" and virtualizing the 32-bit one. That is best done with hardware assistance as a purely software emulator can put quite a bit of overhead into running whatever is going on the virtualized system.

Quote:
As for the drivers, have dual mode built in with far more compatibility thrown in...
Why can't software people get together and get this right?


You have to load the correct driver for your running kernel. You cannot load a 32-bit driver into a 64-bit kernel and cannot load a 64-bit driver into a 32-bit kernel. Sure, you may be able to devise a translation layer or wrapper to be able to load 32-bit drivers into a 64-bit kernel. I wouldn't really want to have to rely on that as there are wrappers out there (such as NDISwrapper) that have some very talented people working on them but still manage to have stability and compatibility problems.

The real problem here is that hardware manufacturers don't have ANY incentive to provide new drivers for old parts. They would much rather you buy a new part with a compatible driver, as they make money on a sale instead of paying devs to make drivers for old and no-longer-selling parts.

There are four solutions to the problem. One is to pressure hardware manufacturers to provide new drivers for older parts: good luck with that one!! The second is to pressure them to release drivers with publicly-available source code, such that somebody can modify and recompile them to work with a new OS. That has happened before, but it is not very common, as there are things in drivers that many manufacturers don't exactly want to become public knowledge. The third is to pressure HW manufacturers to release enough documentation on the hardware that somebody else could create a driver for it. The *nix guys have done this for years and have had some success; Intel and AMD releasing graphics hardware specifications are two examples that stand out. The last and most commonly used method is to reverse-engineer the part and make a driver. This can be very difficult, as anybody who does this is doing it in a clean-room manner, and there is often NOTHING available that would give an insight into the HW's internal workings. The reverse-engineering method tends to be predominant on the *nixes and less so on Windows, as there is a much bigger push to get new hardware working on a vendor-unsupported OS than there is to basically port the driver to a new OS five years after the part shipped.
February 25, 2008 10:20:25 PM

how hot would this thing run?
February 25, 2008 10:32:03 PM

OOOwatah said:
how hot would this thing run?

Hard to say, but it'll be going into rack servers so it must not be too hot.
February 26, 2008 2:06:07 AM

OOOwatah said:
how hot would this thing run?


The slides say TDP <= 130 W, so the chips likely throw off as much heat as the Kentsfield quad-cores do, probably less for the lower-clocked Dunningtons. The actual operating temp will of course depend on the load, heatsink, and case ventilation. 2U/4U servers can have some huge chunks of copper sitting over their CPUs and have a bunch of very noisy high-RPM, high-CFM fans cooling them. The 1Us and blades have little sinks but even noisier fans- 7000 rpm is common and some go even faster. Generally blade server processors are towards the lower end of the TDP range (the ~60-watt Xeon LVs and Opteron HEs are popular) but if you have excellent ventilation and keep the ambient air in the server room cold enough, you can run hotter chips in blades.

On another note, this chip sure seems like a trial run for manufacturing Nehalem/Beckton/Bloomfield. Dunnington is a massive monolithic die with a shared L3 and semi-shared L2s on a HK/MG 45 nm process, same as Nehalem & Co. will be. Sure, Dunnington uses the FSB and Nehalem uses an IMC and a point-to-point bus but that's relatively small potatoes compared to ironing out the problems in getting a big-die monolithic chip produced on the process. I think they are trying to learn from AMD's struggles by testing on a relatively non-critical low-volume part instead of "betting the farm" on an ambitious and new design working perfectly from the get-go. Of course, Intel has the luxury of having enough fab capacity and money to do this and AMD didn't, but it shows that Intel isn't getting overly complacent and making stupid decisions after sticking it to AMD for about a year and a half.