Nehalem and current CPUs

June 3, 2008 1:22:13 AM

I just built my new PC, which is a Q9450, 4 GB DDR2-800, and a BFG 8800 GTS 512, so this will be outdated when Nehalem comes out. How long will it be before Nehalem is needed for games, since right now not many games use quad-core CPUs? Will Nehalem be a good product for the mainstream gamer?


June 3, 2008 1:36:22 AM

For the first year or so Nehalem will be ridiculously expensive and used mainly in servers. It will be a while before all games use quad core, not to mention Nehalem. You'll get quite a few years out of that Q9450 before you need to move on to Nehalem.
June 3, 2008 1:50:01 AM

I'm guessing that games won't be utilizing 8 threads until perhaps 2015, assuming that in 2012 someone discovers a way to make programs multithreaded with ease.
Anonymous
June 3, 2008 1:56:27 AM

^ I'm guessing that in 2028 the world will end by a man-made black hole

But... I think Nehalem will be a step forward, because it will finally unleash the potential of DDR3.
June 3, 2008 2:03:40 AM

Only the eight-core version of Nehalem provides a significant benefit over what's available now. We have yet to see the 8-thread SMT on 4 cores provide significant real-world advantages. I think by the time your Q9450 becomes outdated, you will be moving on to something post-Nehalem.
June 3, 2008 8:35:03 AM

From what I've read, the Nehalem CPUs will be twice as fast per core at the same clock speed in certain apps, but on average 30-80% faster at the same clock speed.

Where it will separate itself is that it's a true quad core, allowing 2 quad-core dies in a single chip: 8 physical cores with hyperthreading, allowing the CPU to scale between 8 and 16 logical cores. More importantly, memory access will be considerably faster.
June 3, 2008 8:46:23 AM

You'll need to upgrade your video card at least 6 times before you find your chip bottlenecking you.
June 3, 2008 8:49:13 AM

"8 physical cores with hyperthreading allowing the CPU to scale between 8 and 16 logical cores"

Does that mean 8 cores will be cycling 16 threads at a time or will switch from 8 to 16 where necesary? Because i thought it was a fixed 16.
June 3, 2008 9:04:49 AM

@Vertigon, the idea behind hyperthreading is that you can dynamically allocate a single physical core's processing power between 2 threads; however, the operating system will see this as 2 logical cores because of how hyperthreading presents itself to the OS (doing this avoids the need for separate multi-core and multi-thread-per-core code). So if the 'two cores' are only running 1 thread, one core will get 100% while the other gets nothing, which effectively allows it to scale between 1 and 2 logical cores per physical core.
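To make that concrete, here's a minimal sketch (Python, purely for illustration since nobody posted code in this thread) showing that the OS really does just report the SMT siblings as extra logical processors; it assumes the third-party psutil package is installed for the physical-core count.

```python
# Minimal sketch: the OS schedules onto logical CPUs, and SMT siblings simply
# show up as extra logical CPUs on top of the physical cores.
import os
import psutil  # third-party package, used only for the physical-core count

logical = os.cpu_count()                    # logical processors the OS sees
physical = psutil.cpu_count(logical=False)  # physical cores

print(f"physical cores: {physical}")
print(f"logical cores : {logical}")
if physical and logical and logical > physical:
    print(f"SMT active: {logical // physical} hardware threads per core")
else:
    print("No SMT detected: one hardware thread per core")
```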
June 3, 2008 9:23:19 AM

I'm guessing that after 6 new GPU architectures, the C2D will be like the P3 is now
June 3, 2008 9:24:20 AM

Ok thanks. So in theory how many logical cores could we have per physical core? Where does hyperthreading taper off?
June 3, 2008 9:34:22 AM

Vertigon, hyperthreading could put 1,000 logical cores on a single physical core, but basically more than 2 is not worth it. You see, apart from partitioning the CPU, hyperthreading has a second trick....

Consider a 45nm Core 2: it can take 5 instructions per cycle, but if the instruction group has 6 instructions it needs 2 cycles, or 10 instruction slots, leaving 4 wasted. Hyperthreading can, in simple terms, 'redirect' those wasted slots to the second virtual core. This is how, on certain programs, hyperthreading can 'give', or rather save, up to 30% of the available processing power on a 4-instructions-per-cycle processor.

More logical cores would lead to more overhead, countering the speed gains; besides, you could just swap threads between cycles, making the extra virtual cores useless in any case.
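If you want to play with that idea, here's a toy simulation of the slot-filling trick described above (not a real pipeline model: the 4-wide issue width and the random "instructions ready" distribution are just assumptions for illustration):

```python
# Toy model: each cycle a thread can only fill some of the core's issue slots,
# and SMT lets a second thread use whatever slots are left empty.
import random

WIDTH = 4          # issue slots per cycle (one of the widths mentioned above)
CYCLES = 100_000
random.seed(1)

def ready():
    """Instructions one thread happens to have ready this cycle (made-up distribution)."""
    return random.randint(0, WIDTH)

used_single = 0
used_smt = 0
for _ in range(CYCLES):
    a = ready()                   # thread A issues what it can
    b = min(ready(), WIDTH - a)   # thread B fills the slots A left empty
    used_single += a
    used_smt += a + b

print(f"slot utilization, one thread : {used_single / (CYCLES * WIDTH):.0%}")
print(f"slot utilization, two threads: {used_smt / (CYCLES * WIDTH):.0%}")
```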
June 3, 2008 9:46:40 AM

Vertigon said:
Ok thanks. So in theory how many logical cores could we have per physical core? Where does hyperthreading taper off?

The old Hyperthreading wasn't more than a few registers, flags and instructions that could hold a certain processor stage. The number of threads possible per core should only be limited by the transistor count and, logically, by the efficiency of the additional threads - which I believe is not that great, otherwise Intel would've increased them earlier.
Another problem that limits the number of virtual threads is the inter-core communication, since all processors need to know which processor is holding which thread in "storage".
If I'm not mistaken, Intel uses some hyperthreading-related technology on Larrabee, where 1 core can juggle 4 threads.

I think with the new memory controller the new HT tech will be a whole lot more efficient. Another big benefit is the huge L2 cache sizes. If they improved the old Hyperthreading, and I bet they have, it will be very interesting to see and compare a single core with two threads against a real dual core.
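If anyone wants to try that comparison once the hardware is out, here's a rough, Linux-only sketch of the experiment (which logical CPU numbers share a physical core is machine-specific, so the IDs 0/4 and 0/1 below are just placeholders; check /proc/cpuinfo or lscpu first):

```python
# Rough experiment sketch: time two CPU-bound workers pinned to two logical CPUs
# that share one physical core, versus two that sit on separate cores.
import os
import time
from multiprocessing import Process

def spin(n=20_000_000):
    # Simple CPU-bound busy work.
    x = 0
    for i in range(n):
        x += i * i

def run_pair(cpu_a, cpu_b):
    procs = [Process(target=spin), Process(target=spin)]
    t0 = time.time()
    for p, cpu in zip(procs, (cpu_a, cpu_b)):
        p.start()
        os.sched_setaffinity(p.pid, {cpu})  # pin the worker to one logical CPU (Linux only)
    for p in procs:
        p.join()
    return time.time() - t0

if __name__ == "__main__":
    # Placeholder CPU IDs; adjust to your machine's actual sibling layout.
    print("same physical core (e.g. CPUs 0 and 4):", run_pair(0, 4))
    print("separate physical cores (e.g. CPUs 0 and 1):", run_pair(0, 1))
```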
June 3, 2008 9:57:43 AM

Slobogob, I was referring to the Core 2 Extreme Edition's second-generation hyperthreading, which was a huge improvement over the P4 hyperthreading, since the P4 only has 2 instructions per cycle (not much to save there, so the management overhead would at times create a small performance loss), but the Core 2 EE hyperthreading had undergone some updates and had 4 instructions per cycle to work with, leaving far more to be saved.

Edit: With Nehalem, a single-socket CPU with a dual die would communicate via the L2 cache for cores on the same die (unbelievable bandwidth) or via the memory controller, which is capable of 24 GB/s per die if I understand correctly. Bandwidth shouldn't be a problem.
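For what it's worth, that ~24 GB/s figure is roughly what a back-of-the-envelope calculation gives for triple-channel DDR3-1066 (assuming that memory configuration, which is my guess for the launch parts):

```python
# Back-of-the-envelope peak bandwidth for triple-channel DDR3-1066.
transfers_per_sec = 1066e6   # DDR3-1066: 1066 MT/s per channel
bytes_per_transfer = 8       # 64-bit (8-byte) channel
channels = 3                 # Nehalem's triple-channel IMC

peak = transfers_per_sec * bytes_per_transfer * channels
print(f"peak bandwidth ~ {peak / 1e9:.1f} GB/s")  # ~25.6 GB/s, close to the quoted 24 GB/s
```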
June 3, 2008 10:37:25 AM

JDocs said:
Slobogob, I was referring to the Core 2 Extreme Edition's second-generation hyperthreading, which was a huge improvement over the P4 hyperthreading, since the P4 only has 2 instructions per cycle (not much to save there, so the management overhead would at times create a small performance loss), but the Core 2 EE hyperthreading had undergone some updates and had 4 instructions per cycle to work with, leaving far more to be saved.

As far as I remember, the Core 2 series doesn't have hyper-threading. Here's a chart on VR-Zone showing the core/thread count on the Core 2 series. Or are you referring to Nehalem as the second-generation hyperthreading?

JDocs said:

Edit: With Nehalem, a single-socket CPU with a dual die would communicate via the L2 cache for cores on the same die (unbelievable bandwidth) or via the memory controller, which is capable of 24 GB/s per die if I understand correctly. Bandwidth shouldn't be a problem.

The bandwidth is amazing, thanks to the memory controller, but bandwidth was not the real problem. Once a thread gets "stored" in a virtual core, another thread gets processed. Once the old one gets loaded up again, data has to be fetched from memory. With a little luck the data is still in the L2 cache, but if not, the processor has to access main memory - and that's slow. With the new interconnect the access gets sped up, and the L2 is larger, which should improve Hyperthreading even without improvements to the hyperthreading tech itself. This is one of the reasons AMD does not have Hyperthreading: they can do a lot of work per clock and they can switch more easily between threads. Instead of catching up with the memory controller, Intel tries to jump one step ahead of that by re-introducing a new version of Hyperthreading. I'm quite eager to see it at work.
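To put rough numbers on the "that's slow" part (these latencies are ballpark guesses for a CPU of that era, not measurements), even a modest L2 miss rate drags the average access cost way up:

```python
# Rough illustration: average memory access cost as the L2 hit rate drops.
L2_HIT_CYCLES = 15    # ballpark L2 hit latency
DRAM_CYCLES = 250     # ballpark main-memory latency

for hit_rate in (1.0, 0.9, 0.5):
    avg = hit_rate * L2_HIT_CYCLES + (1 - hit_rate) * DRAM_CYCLES
    print(f"L2 hit rate {hit_rate:.0%}: ~{avg:.0f} cycles per access on average")
```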
June 3, 2008 10:58:49 AM

There's not enough evidence to confirm how Nehalem will perform, so take any wild claims with a pinch of salt. You can also guarantee that Intel are not going to let these new CPUs go for the value prices we have now; it'll probably take some time for them to become affordable enough for your average mainstream user. There's always going to be something better, and there's no such thing as 'future proofing': always buy what's best for a given budget at the time, it's not worth waiting.
Being a native quad core does not always mean it's going to be better; AMD have had problems with theirs and Intel could well encounter problems too.

Wait for official benchmarks and testing.
June 3, 2008 11:05:03 AM

speedbird said:
There's not enough evidence to confirm how Nehalem will perform, so take any wild claims with a pinch of salt. You can also guarantee that Intel are not going to let these new CPUs go for the value prices we have now; it'll probably take some time for them to become affordable enough for your average mainstream user. There's always going to be something better, and there's no such thing as 'future proofing': always buy what's best for a given budget at the time, it's not worth waiting.
Being a native quad core does not always mean it's going to be better; AMD have had problems with theirs and Intel could well encounter problems too.

Wait for official benchmarks and testing.

While I agree that official benchmarks are better than speculation, there are always indicators of what is to come. The totally botched launch of AMD's Phenom (superior-looking on paper and hyped up by marketing gurus) is a fine example. Everyone waiting for it will remember the hurricane of NDAs, mysterious and exclusive test rigs and so on. I haven't heard or seen anything like that regarding Intel's latest.
June 3, 2008 12:14:41 PM

JAYDEEJOHN said:
I'm guessing that after 6 new GPU architectures, the C2D will be like the P3 is now

That's just fine with me... now let's just hope there's never another P4.
June 3, 2008 1:00:59 PM

mi1ez said:
The newer C2QX chips support Hyperthreading, albeit under a different name.

What is it called and where do I find more information about it?
June 3, 2008 1:07:32 PM

Mi1ez is spot on. Hyperthreading doesn't officially appear in Core 2, but all hyperthreading is is SMT that allows wasted/unused instruction slots to be redirected into another task.

The tricky part comes in the advertising. Intel said it does 2 threads at once, which in layman's terms is about right. In reality it actually just executed two threads almost on top of each other (by saving on the wasted/unused slots), so technically the HT processors still only supported one thread at a time but could switch to a second thread instantly.

With the wide execution bus of the Core 2 they need to be able to do the same to avoid wasting huge amounts of power, but it's not full-blown hyperthreading (which needs to be registered on the system for the OS to run 2 threads at once on a single core).

Well, that's how I understand it.
June 3, 2008 1:25:33 PM

JDocs said:
From what I've read, the Nehalem CPUs will be twice as fast per core at the same clock speed in certain apps, but on average 30-80% faster at the same clock speed.

Where it will separate itself is that it's a true quad core, allowing 2 quad-core dies in a single chip: 8 physical cores with hyperthreading, allowing the CPU to scale between 8 and 16 logical cores. More importantly, memory access will be considerably faster.


You are correct here, minus the HT (it's now called SMT); what you have wrong is the 2 quads on one die. According to what I have read, Nehalem is supposed to scale from 2 to 8 cores natively, with the SMT making it 4 to 16 logical cores.

It is supposed to be a better version of HT as well, meaning it should be faster and more efficient.

As for gaming, Nehalem is also supposed to have dynamic OC'ing, meaning that if one process is running then it will OC that core and underclock the others to get that process done faster, and since most games are still single-threaded, it will help that way.
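Here's a hedged sketch of what that dynamic overclocking could look like (the base clock, the one-bin-per-idle-core step and the 4-core count are made-up numbers for illustration, not Intel's actual policy):

```python
# Hypothetical "dynamic overclocking" policy: the fewer cores that are busy,
# the higher the busy ones are allowed to clock within the power budget.
BASE_CLOCK = 2.66    # GHz, made-up base clock
BOOST_STEP = 0.133   # GHz gained per idle core, made-up bin size
CORES = 4

def effective_clock(active_cores):
    idle = CORES - active_cores
    return BASE_CLOCK + idle * BOOST_STEP

for active in range(1, CORES + 1):
    print(f"{active} active core(s): busy cores run at ~{effective_clock(active):.2f} GHz")
```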

But yes, with the high-speed IMC and memory links, Nehalem should unleash the bandwidth potential of DDR3 in a good way. Of course this will add a minimal increase to most real-world apps but will help in the server arena.

I still drool over the thought of triple-channel DDR3. Oh, and I think MSI was showcasing a mobo for Nehalem with 6 slots for memory, meaning it will probably be 3GB for normal, 6GB for enthusiast (such as us) and 12GB for overkill. Although 12GB would be awesome.
June 3, 2008 1:27:45 PM

Vertigon said:
Only the eight-core version of Nehalem provides a significant benefit over what's available now. We have yet to see the 8-thread SMT on 4 cores provide significant real-world advantages. I think by the time your Q9450 becomes outdated, you will be moving on to something post-Nehalem.

Proof of this...? Since the architecture has not been released, no one (outside of Intel) can make statements like that with any kind of certainty.
June 3, 2008 4:03:57 PM

JDocs said:
The tricky part comes in the advertising. Intel said it does 2 threads at once, which in layman's terms is about right. In reality it actually just executed two threads almost on top of each other (by saving on the wasted/unused slots), so technically the HT processors still only supported one thread at a time but could switch to a second thread instantly.

With the wide execution bus of the Core 2 they need to be able to do the same to avoid wasting huge amounts of power, but it's not full-blown hyperthreading (which needs to be registered on the system for the OS to run 2 threads at once on a single core).


Nice, thanks. I read up on it.

It is indeed SMT, albeit Intel doesn't advertise it as that. The major difference seems to be that their implementation doesn't always work, since it can only reallocate 'instructions' to unused parts of the processor. It is important to differentiate between instructions and threads though.

This one explains the idea of SMT in simple terms, for those interested. They have a simple diagram showing the inter-core communication and the troubles associated with it - that's what I mentioned some posts earlier.


This one is a whole lot better as it directly compares Core 2, Nehalem and Barcelona.


PS: I didn't think ahead far enough. You were right about the bandwidth, JDocs. At a certain point the bandwidth becomes a limiting factor once the SMT or Hyperthreading goes past a few cores. Not with Nehalem though, thanks to the QPI.
June 3, 2008 7:17:00 PM

Wow, this thread blew up with great information. I hope DDR3 will drop in price to DDR2 prices when Nehalem comes out.
Anonymous
June 3, 2008 8:28:16 PM

^ neph? lol
June 3, 2008 9:52:22 PM

New nickname, I see.
June 4, 2008 2:46:08 PM

"Proof of this...?"

The Skulltrail benchmarks didn't exactly change the world according to THG. I don't have proof, but if the software is miles from where we need it now, how's it going to catch up by the time Nehalem lands?
June 4, 2008 3:21:13 PM

mi1ez said:
What's that Sun architecture that runs quite happily on 32 cores?

EDIT:
http://en.wikipedia.org/wiki/Rock_processor


All I will say is Terascale. Bam.

Vertigon said:
"Proof of this...?"

The Skulltrail benchmarks didn't exactly change the world according to THG. I don't have proof, but if the software is miles from where we need it now, how's it going to catch up by the time Nehalem lands?



Um, that's a different beast altogether. What SMT will do is allow a thread to start on a core that is already processing one thread. So instead of one thread starting and another having to wait for it to finish, it can start alongside it. And I imagine if the program is multithreaded it could take advantage of all the physical and logical cores, thus giving it better overall performance.

If you remember HT (Hyperthreading, not HyperTransport), it did boost performance in some situations. So imagine this as a supposedly better, more advanced version of HT.