"Native" Quad vs. Dual Dual Core

Tags:
  • CPUs
  • Dual Core
  • Quad
November 19, 2007 9:11:28 PM

Are there any real-world benefits to having a "native" quad core? I ask because I'm looking into a Q6600, and several people have mentioned it is a dual dual core (i.e., two dual-core dies in the same package), as opposed to a "true," "native" quad (i.e., the new AMDs; Penryn, etc). I understand Penryn has obvious benefits, mostly due to the die shrink, but I'm asking specifically about the "native" vs. dual dual core thing.

Thanks for your time.


November 19, 2007 9:13:15 PM

Q6600 outperforms the new AMD Phenoms by 13.5%, and it's 65nm. Need I say anything else?
November 19, 2007 9:25:09 PM

I understand that; it's pretty plainly stated in the Tom's Hardware article. However, let's say you have two Intel chips, for example, and their specs are identical, except for the fact that one is a "native" quad and the other is a dual dual core... is there any benefit to one over the other?
November 19, 2007 9:36:18 PM

Yes, I believe that there would be some benefit. The Pentium 4 dual cores (Pentium D?) were basically two single-core CPUs slapped next to one another. You had two separate chips running independently. The Core 2 Duos share L2 cache between cores, which allows for better single-threaded performance, since the core under load has a massive amount of L2 to utilize. AMD is using a different approach where they have a shared L3 cache, but this should also improve single-threaded performance. There are also some other considerations with system and RAM access, but that's beyond my understanding. Hopefully, someone can give a more detailed explanation.
November 19, 2007 9:39:08 PM

Native is Indian, not a CPU.

There is no native CPU; they are all man-made -

In ecology, an indigenous species is an organism which is native to a given region or ecosystem. Indigenous species contrast with introduced species. An introduced species, also known as a naturalized or exotic species, is an organism that is not indigenous to a given place, but has been transported there as a result of human activity.

"Native" is AMD-speak for "it looks good on paper, but we cannot make too many" - that's why we have the tri-core,

which is a natively disabled CPU - as in special - it's a special CPU.

Developmentally disabled - yes, that is my name for the tri-core.


November 19, 2007 9:41:22 PM

On the desktop, native brings absolutely no benefit. The FSB allows enough bandwidth for communication.

On the server side, native is better for multi-socket systems. This is due to HyperTransport, which Intel will be releasing its own version of with Nehalem - which will be Intel's first "native" quad.
November 19, 2007 9:42:40 PM

jjblanche said:
... for example, and their specs are identical, except for the fact that one is a "native" quad, and the other is a dual dual core...is there any benefit to one over the other?


Currently, you won't see any real difference. The number of cores is too small, and the bottlenecks are in other areas, like user input, disk I/O, network I/O, etc.

A "true" or "native" design will stop scaling at some point anyway. The number of transistors is simply too large. Look at AMD: they're going to start selling some chips with one core disabled to improve yields.

Imagine going into a building and wiring up 600 million light switches or breakers, without ever having a single one wired wrong - or else all your wiring is considered failed.

Possibly in another 3-4 generations this might start making a noticeable difference, but I expect that before then we'll be dealing with terascale-type chips, where the cores are smaller and more dedicated to specific tasks, using a super-high-speed transport bus, and software is intrinsically more parallel - or at least enough of it that you don't really care.

John
November 19, 2007 9:45:27 PM

Penryn (actually Yorkfield, unless you plan to buy a notebook) is NOT a monolithic quad core. That's Nehalem, coming in 2008 supposedly.
November 19, 2007 9:51:28 PM

I did actually see an Intel developer comment on this. He said they found no real need to move to a native quad, and they will when they see a need and think they're ready for it. I do believe Nehalem will be a native quad core, coming out in late 2008/early 2009.

I might be able to suggest a reason: duals are great right now, and quads are still a little over the top for most people (performance- and budget-wise), and even for a lot of games still. So perhaps Intel saw this and no reason to get hyped up about it. It is of course, as young jjblanche points out, a highlight AMD touts.

As you can see, Intel is still beating AMD for the moment, so it's not a huge selling point when you see the numbers. Nonetheless, Intel is moving in that direction.

To answer your question, I believe I can explain it somewhat well. If you have 2 large plants working on two different foundations, it takes time for them to communicate with each other to balance the quotas between them. Now, if you can have those 2 large plants combined into 1 HUGE plant, then efficiency will obviously be increased.

:) 
November 19, 2007 10:30:32 PM

exit2dos said:
On the desktop, native brings absolutely no benefit. The FSB allows enough bandwidth for communication.

On the server side, native is better for multi-socket systems. This is due to HyperTransport, which Intel will be releasing its own version of with Nehalem - which will be Intel's first "native" quad.


Nehalem won't be the first monolithic quad core CPU from Intel. You will see a monolithic 6-core CPU before Nehalem.
November 19, 2007 10:31:16 PM

Hmm, at this rate it seems we'll have at least 10 cores by 2010.
November 19, 2007 10:31:47 PM

There's not much difference, and I'd personally rather have the C2Q stay dual-dual so they can stay cheap(er).
November 19, 2007 10:33:59 PM

jkflipflop98 said:
Nehalem won't be the first monolithic quad core CPU from Intel. You will see a monolithic 6-core CPU before Nehalem.

Can you get me an ES? Pretty, pretty please? :D 
November 19, 2007 10:47:16 PM

In short, true quad vs. dual dual core = AMD fanboy math:
1+1+1+1 = 4
2+2 does not equal 4

Can you do the math?

Both approaches have advantages and disadvantages. A "dual dual" core is as much a quad core as a monolithic quad core. The "glued" quad core line was a PR counter-tactic ("Quad Core for Dummies") by AMD, to minimize Intel's achievement of bringing the first quad to the market. In turn, it became a war cry for a bunch of disheartened fanboys grasping desperately for any lifeline (regardless of how foolish it is) that supports their opinions - regardless of the fact that it was merely a marketing tactic created by a bunch of advertisers who know less about CPUs than the average garbage man.
November 19, 2007 11:07:12 PM

jkflipflop98 said:
Nehalem won't be the first monolithic quad core CPU from Intel. You will see a monolithic 6-core CPU before Nehalem.


W00t! 6-core Penryn?

Man, I guess I need to start pulling some strings.
November 19, 2007 11:10:10 PM

[/Resists urge to post a certain picture]
November 19, 2007 11:17:48 PM

OK, if you want a system that scales well (i.e., you're running multi-socket Quad FX ("Quadfather") or V8... neither Skulltrail nor FASN8 is out yet, and even then Skulltrail is a dual dual core), AMD is the better answer. If you're running a single socket, Intel is a no-brainer if you're going by performance in games/apps.

Now, that said, keep in mind that if you want a decent upgrade path, Intel is killing socket 775 next year with Nehalem, leaving you maxed out CPU-wise when that happens. If you go with AMD's platform, they are making huge strides to make sure that if you buy a mobo now, a couple of years from now you can still buy the latest and greatest CPU and toss it into your machine with a simple BIOS upgrade. Granted, eventually you'll take a performance hit with the latest and greatest CPUs (especially if you get an AM2 socket running HT 2.0 rather than an AM2+ socket, which runs HT 3.0), whether it's from RAM standard changes (i.e., AM3 will be DDR3) or from maxing out your HT 2.0 bus with an AM2+/AM3 CPU... not sure when that bottleneck will be hit, but it always happens eventually.

So I would say: if you're building a single-socket system, and/or you don't plan on upgrading at all (or not for more than 12 months), get an Intel quad. If you want your system to have a longer upgrade path, or you plan on a multi-socket system, AMD is your best choice IMO.
November 19, 2007 11:22:20 PM

turpit said:
[/Resists urge to post a certain picture]


I think if you posted that here, you'd have to give yourself a "Deleted". :D 
November 19, 2007 11:34:11 PM

jjblanche said:
Are there any real-world benefits to having a "native" quad core? I ask because I'm looking into a Q6600, and several people have mentioned it is a dual dual core (i.e., two dual-core dies in the same package), as opposed to a "true," "native" quad (i.e., the new AMDs; Penryn, etc). I understand Penryn has obvious benefits, mostly due to the die shrink, but I'm asking specifically about the "native" vs. dual dual core thing.

Thanks for your time.


In "theory" a native quad is better - resources such as cache, available bandwidth, etc. can be allocated as needed.

In practice, things may be different.

Because you have 4 cores on a die, each of the 4 cores has to work, and must also work at the rated speed. This dramatically hurts yields.

By way of example, let's say that 80% of all single-core dies can work at the top speed bin - for the sake of argument, 3.0 GHz.

The chance of a quad core die where all 4 cores work at 3.0 GHz would be:

0.8 x 0.8 x 0.8 x 0.8 = 0.4096, i.e., ~41% will work at 3.0 GHz.

If we take an example of poor yields where, say, only 50% of the cores work at 3.0 GHz, the odds of having all 4 cores work at 3.0 GHz would be:

0.5 x 0.5 x 0.5 x 0.5 = 0.0625, or only about a 6% chance that a 4-core die will work at 3.0 GHz.

Now nobody really knows what AMD's yields are, so this calculation is purely for illustration, but it does make the point.

Another point is that Phenom is a big die, +/- 250 sq mm as I recall, while Penryn is about 110 sq mm, so inherently (given the same quality of process) the Intel yield will also be better.

I think "in theory" a native quad may be better, but today, on a 65 nano process, I am not so sure theory and reality are the same.
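
The math above can be put into a quick runnable sketch (a toy model; the 80% and 50% per-core figures are purely illustrative, as in the example, not real yield data):

```python
# Toy yield model: if each core independently hits the top speed bin
# with probability p, a monolithic n-core die needs all n cores to hit it.
def die_yield(p, cores=4):
    return p ** cores

print(round(die_yield(0.8), 4))  # 0.4096 -> ~41% of quad dies bin at 3.0 GHz
print(die_yield(0.5))            # 0.0625 -> ~6%
```

An MCM "dual dual" can instead pair up two already-tested dual-core dies, which is exactly why the "glued" approach yields better.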

November 19, 2007 11:37:14 PM

exit2dos said:
I think if you posted that here, you'd have to give yourself a "Deleted". :D 

Hell, if I posted that here, I'd have to ban myself :sol: 
November 20, 2007 12:44:49 AM

Someone might've already mentioned this, but one key advantage of being a quad-core-die CPU, as opposed to a dual dual-core-die CPU, is being able to independently control power consumption and speeds among all cores... which is really helpful for companies and businesses that might have thousands of different systems running simultaneously (and the cost of electricity isn't any cheaper than it was 5 years ago). So obviously this is less directed towards consumers, where most people only have 1 quad core, if that - as opposed to hundreds of thousands of quad cores (or so), on boards with multiple sockets no less. So there's also no need to completely redesign the CPU just for enthusiasts or the general public either, when any quad core will work just fine really, and many times may even be overkill. So no real pros or cons as far as performance goes for consumers.

That's the way I understand it anyhow, but I may be wrong just the same, and any dual dual-core-die CPU may be just as capable... just without the above features implemented.

As a result of the above, though, I could see AMD's market share improving a noticeable amount due to possibly large-scale purchases by businesses and corporations. If no one purchases them, though, well, market share will continue to dwindle, I would imagine... I can imagine AMD's quad core has a lot of room for improvement too, as they only released it because they had to - no more delays - and even still, they're not completely available yet. So, definitely room for improvement it seems, especially because of the relatively disappointing benchmarks.
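
To put rough numbers on the per-core power-control point, here is a toy model assuming dynamic power scales roughly with f x V^2 (the frequencies, voltages, and scaling constant are made-up illustrative values, not measured figures for any real chip):

```python
# Toy model: dynamic CPU power ~ k * f * V^2. All numbers are illustrative.
def dynamic_power(freq_ghz, volts, k=10.0):
    # k is an arbitrary constant folding in capacitance and switching activity
    return k * freq_ghz * volts ** 2

# Workload: one busy core, three idle cores.
busy = dynamic_power(2.4, 1.25)

# Without per-core control, idle cores stay at full clock and voltage.
no_percore = busy + 3 * dynamic_power(2.4, 1.25)

# With per-core control, each idle core can be clocked down independently.
with_percore = busy + 3 * dynamic_power(1.0, 1.05)

print(no_percore, with_percore)  # the per-core case draws far less
```

Multiply a saving like that across thousands of servers and the appeal to businesses is obvious.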
November 20, 2007 1:52:44 AM

jjblanche said:
I understand that; it's pretty plainly stated in the Tom's Hardware article. However, let's say you have two Intel chips, for example, and their specs are identical, except for the fact that one is a "native" quad and the other is a dual dual core... is there any benefit to one over the other?


Ultimately, while the differences are quite real architecturally, it ends up being a mostly academic issue, because the design that is supposed to be architecturally superior is, in practice, not performing. Intel is seriously pwning AMD right now.
November 20, 2007 1:55:26 AM

quad core = CPU with 4 cores

native quad core = AMD marketing that falsely implies there is something better about having 4 cores on 1 die instead of 2


This is a good explanation - don't you think?
November 20, 2007 2:25:35 AM

Why would Intel bother to do the same thing with Nehalem, then, if it's seemingly worse for all practical purposes, and even pointless? The R&D costs are higher, the chance of successfully producing a fully working CPU is lower, and yields may be lower as a result too. The way that looks, it would just be stupid on their part... and I don't think Intel as a whole is a company that's really stupid, though every company makes mistakes. Why would they plan ahead to 2008 for a 'mistake' that they are going to fully make, with intent to completely follow through with it? That would be stupid, and would make Nehalem in itself a rather bad idea, especially with all the problems they can see AMD going through with it. Unless Intel is somehow immune to that...?
November 20, 2007 2:58:40 AM

That's a good point, choir. If all the aforementioned points are true, and a dual dual is both easier (and cheaper) to produce, with marginal drawbacks, if any, then why would a company such as Intel potentially risk profits? It can't all be for the servers... or can it?
November 20, 2007 3:06:17 AM

choirbass said:
Why would Intel bother to do the same thing with Nehalem, then, if it's seemingly worse for all practical purposes, and even pointless? The R&D costs are higher, the chance of successfully producing a fully working CPU is lower, and yields may be lower as a result too. The way that looks, it would just be stupid on their part... and I don't think Intel as a whole is a company that's really stupid, though every company makes mistakes. Why would they plan ahead to 2008 for a 'mistake' that they are going to fully make, with intent to completely follow through with it? That would be stupid, and would make Nehalem in itself a rather bad idea, especially with all the problems they can see AMD going through with it. Unless Intel is somehow immune to that...?


Intel is not "immune" to anything... that's mostly why they skipped doing a monolithic quad core on 65nm. They were looking ahead and planning on using a 45nm process, hopefully to bring higher yields with fewer design problems.
November 20, 2007 3:22:00 AM


For Choirbass..

Intel will be going to native quad at 45nm... higher yields than 65nm. More dies per wafer, so for a given percentage of bad cores, the number (not percentage) of good cores produced for a given cash outlay is higher, meaning a greater return on production investment. Additionally, Intel is using a different process than AMD: high-k is showing better results than AMD's SOI, which results in a higher percentage of successful cores. But before they go to native quad at 45nm, they are going MCM to perfect the new node with a proven uarch.

Size DOES matter. AMD could never have gone quad at 90nm successfully. Going 'native' at 65nm was questionable... but AMD was between a rock and a hard place: Intel had its quad out over a year ago.
Had AMD waited until they hit 45nm to go 'native', they would have been waiting until '09 if they followed the 'golden' rule of thumb. The rule of thumb says never go to a new process node and uarch at the same time... it usually turns out bad. That's IF their roadmaps are accurate (we know they're really not). They really couldn't have afforded to do that - they would have lost a huge chunk of the server market to Intel. AMD could have gone MCM at 65nm, but that would have meant even more R&D than K10, due to the limitations imposed by their IMC... it doesn't work easily on an MCM. Not to mention they would have had to eat their own words after their "Quad Core for Dummies" fiasco.

Both MCM and native have advantages and disadvantages; couple this with process nodes, manufacturing tech, and market pressures, and neither approach is a clear winner.
November 20, 2007 3:32:50 AM

Yep, the majority of my post was sarcastic. There is a very real reason to use it... along the same lines as why we no longer produce 2 single cores on a die, and instead have a single dual core - and even 2 dual cores per quad, instead of just 4 single cores pasted together per quad... that would be bad. But, yeah... I suppose they could've done that just the same, with probably very similar performance even, though the thermal envelope, cost, and power draw probably would've been a lot worse if they had done it back then, on 90nm, 110nm, or larger even.
November 20, 2007 3:34:18 AM

Intel went to 4 cores on a die only after years of research, and only after it made sense. The apps show the truth... and AMD is #2.


If the AMD onboard memory controller was so good, where is the proof?


Intel could have added an onboard memory controller 2 years ago, in place of all the cache.
November 20, 2007 3:50:01 AM

Well... the IMC performance was only really good with lower-latency DDR1 memory. As soon as they moved up to DDR2 they took a hit, and DDR3 was even worse. The L3 cache helps negate the negative effects of that, though... maybe not completely, but it helps. They only went to DDR2 because that's where the market was; there was no real need to, because DDR1 is about as fast as DDR2, and even DDR3... the frequencies are the largest difference. If a smaller-process 184-pin DDR1 chip had been made, there may not have been a reason to move up from DDR1 at all, AMD may still have been there for who knows how long, and even s939 would still be viable for potentially long-term upgrades... and the power draw and cost would be better as a result too.

Lots of what-ifs there, though. The IMC was good, but only with DDR1 really. Intel pushed DDR2, AMD followed, and had to redo everything.

TBH, I am more partial to AMD... but not to the point where, if there is a clear-cut difference in performance, I would choose AMD regardless. If, by the time I need to upgrade my X2 3800+, AMD doesn't have a better processor out for my money, I'll go with Intel... and vice versa.
November 20, 2007 4:36:14 AM

choirbass said:
Yep, the majority of my post was sarcastic. There is a very real reason to use it... along the same lines as why we no longer produce 2 single cores on a die, and instead have a single dual core - and even 2 dual cores per quad, instead of just 4 single cores pasted together per quad... that would be bad. But, yeah... I suppose they could've done that just the same, with probably very similar performance even, though the thermal envelope, cost, and power draw probably would've been a lot worse if they had done it back then, on 90nm, 110nm, or larger even.


Not only that, but remember, the wafer diameter is fixed - 200mm, now 300mm - so you get fewer dies per wafer on a larger process. Going quad at 90 or 130nm, even if it was possible with the thermals, would yield low numbers of dies, forcing them to be priced unreasonably high, for limited gain, because of the need to keep clock speeds low... to keep temps down. Look at 4x4 @ 90nm... horrible thermals.
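
A back-of-the-envelope sketch of the wafer point, using the ~250 sq mm and ~110 sq mm die sizes quoted earlier in the thread (edge loss, scribe lines, and defects are ignored, so these are crude upper bounds):

```python
import math

def max_dies(die_area_mm2, wafer_diameter_mm=300):
    # Crude upper bound: usable wafer area divided by die area.
    r = wafer_diameter_mm / 2
    return math.floor(math.pi * r * r / die_area_mm2)

print(max_dies(250))  # 282 candidate dies per 300mm wafer
print(max_dies(110))  # 642
```

Smaller dies mean more candidates per wafer, and each candidate is also less likely to land on a defect, so the advantage compounds.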
November 20, 2007 5:34:55 AM

That really would've been bad then, I see.
November 20, 2007 11:36:41 AM

Can you post the picture please, turpit?
November 20, 2007 10:50:23 PM

jamesro said:
Can you post the picture please, turpit?





No


Hmmmmm....Whaddya think Exit...should I tell him where the picture is, in the process offering him up to the denizens of that place?
November 20, 2007 11:13:18 PM

Only if he posts a "Is my system good enough to play Crysis" thread there.
November 21, 2007 12:12:07 AM

Anyone who has done physics will know that a native quad is better than 2 dual cores. The data moving at the same speed (a.k.a. electrons moving at the same velocity) in both CPUs will have less distance to travel on the native than on the 2x duals. HOWEVER, the distances we are talking about here are so small that the difference in time cannot be seen by humans - we are talking something like millionths or billionths of a second (maybe even less). So if you took the Q6600 and made a native version of it, at the same clock speeds the native Q6600 would have a slight advantage (not even worth mentioning in benchmarks, let alone real-world computing). Any major performance gain would have to come from a more efficient architecture or the like, not the native core. The only real advantage of native versus non-native is that you can probably get more cores in a CPU with native quads rather than 2x duals. With dual cores, you can only put 2 duals in one CPU, I am pretty sure; any more than that, and I think you would have some problems with the cores communicating and the like.
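
A quick order-of-magnitude check on that distance argument (the 0.5c on-chip signal speed and the 5 mm crossing distance are assumed ballpark figures, not measurements of any real die):

```python
# Compare an on-die signal crossing time to one clock period at 2.4 GHz.
C = 3.0e8                  # speed of light, m/s
signal_speed = 0.5 * C     # assumed on-chip propagation speed
distance = 5e-3            # assumed ~5 mm path across a die, in metres

crossing_time = distance / signal_speed  # tens of picoseconds
clock_period = 1 / 2.4e9                 # ~0.42 nanoseconds

print(crossing_time < clock_period)  # True: well under one clock cycle
```

So even a generous estimate of the extra distance costs only a fraction of a cycle, supporting the point that the raw distance difference is invisible in practice.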
November 21, 2007 3:11:24 AM

turpit said:
For Choirbass..

Intel will be going to native quad at 45nm... higher yields than 65nm. More dies per wafer, so for a given percentage of bad cores, the number (not percentage) of good cores produced for a given cash outlay is higher, meaning a greater return on production investment. Additionally, Intel is using a different process than AMD: high-k is showing better results than AMD's SOI, which results in a higher percentage of successful cores. But before they go to native quad at 45nm, they are going MCM to perfect the new node with a proven uarch.

Size DOES matter. AMD could never have gone quad at 90nm successfully. Going 'native' at 65nm was questionable... but AMD was between a rock and a hard place: Intel had its quad out over a year ago.
Had AMD waited until they hit 45nm to go 'native', they would have been waiting until '09 if they followed the 'golden' rule of thumb. The rule of thumb says never go to a new process node and uarch at the same time... it usually turns out bad. That's IF their roadmaps are accurate (we know they're really not). They really couldn't have afforded to do that - they would have lost a huge chunk of the server market to Intel. AMD could have gone MCM at 65nm, but that would have meant even more R&D than K10, due to the limitations imposed by their IMC... it doesn't work easily on an MCM. Not to mention they would have had to eat their own words after their "Quad Core for Dummies" fiasco.

Both MCM and native have advantages and disadvantages; couple this with process nodes, manufacturing tech, and market pressures, and neither approach is a clear winner.


I have no idea what you said.
November 21, 2007 3:35:16 AM

exit2dos said:
Only if he posts a "Is my system good enough to play Crysis" thread there.


So is my system good enough to play Crysis?
November 21, 2007 6:01:25 AM

Where is this place... I just want to see the picture...
November 21, 2007 7:39:28 AM

The main advantage of a native quad core is the ability to have an L3 cache, which improves performance further over a dual dual core. If Intel can make a native quad, then rather than just increasing the L2 (which raises the price a lot, since cache is SRAM and SRAM is pricey), they can exploit an L3 cache.

But the L3 cache for now has some errata, especially at 2.4GHz and up (Phenom). Or it's likely just inefficient: if I read it right, disabling it increases speed, and enabling it decreases speed by up to 10%. Until AMD fixes this bug for the upcoming 2.4GHz and higher parts, a 16% lead is just too short.