
Intel to Reveal Eight-Core Xeon Next Month

January 30, 2009 11:28:07 AM

Darn, just installed my first quad yesterday (Q6600).

I thought the reason Nehalem has a rectangular die and not a square one is so Intel can fit two dies into LGA 1366, and that this is also why LGA 1366 is bigger than LGA 775. Also, why does the 8-core Xeon have 2.3 billion transistors when the i7 only has 731 million?
Score
0
January 30, 2009 11:41:28 AM

^The i7 is bigger mainly due to:
1. Cache
2. IMC
3. The other stuff needed for HT, etc.
Score
0
January 30, 2009 11:53:59 AM

LGA 1366 is bigger because of the pins used for the integrated memory controller (IMC). The difference in transistor count should be solely because of the bigger cache in the Xeons. As always with Intel: the Xeons are the same chip as the desktop processors with bigger cache.

I can only speculate about the fact that processors never seem to be square. You know that song "It's Hip to Be Square"?
Score
1
January 30, 2009 1:36:44 PM

Pei-chen said:
Darn, just installed my first quad yesterday (Q6600). I thought the reason Nehalem has a rectangular die and not square is so Intel can fit two into LGA 1366. It is also the reason LGA 1366 is bigger than LGA 775. Also, why does the 8-core Xeon have 2.3 billion transistors when the i7 only has 731 million?


The Xeon may be an LGA 1567 chip.
Score
1
January 30, 2009 3:10:47 PM

Pei-chen said:
Darn, just installed my first quad yesterday (Q6600). I thought the reason Nehalem has a rectangular die and not square is so Intel can fit two into LGA 1366. It is also the reason LGA 1366 is bigger than LGA 775. Also, why does the 8-core Xeon have 2.3 billion transistors when the i7 only has 731 million?


I don't think Intel can do a dual-die MCM with the Nehalem-EP since it has an IMC. Nobody in the x86 world, at least, has made an MCM with an IMC; all MCMs have been on FSB-equipped chips, since the FSB's shared-bus nature makes it easy to tack two dies together in a package. Doing so with an IMC-equipped chip requires a die-to-die bus to handle die-to-die I/O, and I don't know if QPI can do that. AMD is slated to be the first to try an MCM with IMCs with its dual-6-core-die "Magny-Cours" in 2010.

The 8-core Nehalem Xeons are LGA1567, not LGA1366. The reason LGA1366's socket is large is that it has a lot of lands, and Intel also wanted room for a large IHS and heatsink to dissipate the high heat output of overclocked i7s.

The 8-core Xeon has 2.3 billion transistors because it has eight cores versus four for the Bloomfield, and likely has more than the 2 MB of L3 per core that the Bloomfield does. Intel likes to tack a lot of L3 onto its Xeons, particularly the MP versions, and L3 cache can eat up a bunch of transistors.
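To put rough numbers on the cache point: a classic 6T SRAM cell means the data arrays alone cost 6 transistors per bit. The quick estimate below uses 8 MB (the Bloomfield's L3) and 24 MB as an illustrative big-Xeon L3 size; tags, ECC, and control logic are ignored, so these are lower bounds:

```python
def sram_transistors(megabytes):
    # Data arrays only: 6 transistors per bit in a classic 6T SRAM cell.
    bits = megabytes * 1024 * 1024 * 8
    return bits * 6

print(f"8 MB L3  ~ {sram_transistors(8) / 1e6:.0f}M transistors")   # Bloomfield-sized
print(f"24 MB L3 ~ {sram_transistors(24) / 1e6:.0f}M transistors")  # big-Xeon-sized
```

That's already around 0.4 billion transistors for 8 MB, so a much larger Xeon L3 easily accounts for a big slice of the 2.3 billion.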
Score
2
January 30, 2009 3:29:25 PM

The eight-core chip probably has more transistors than two quad-core chips because they're going to use some kind of huge cache or other techniques to keep all eight cores busy.
Score
0
January 30, 2009 3:42:49 PM

brausekopf said:
As always with Intel: the Xeons are the same chip as the desktop processors with bigger cache.


Not always ... Core2 E8400 = Xeon E3110 ... same chip, same cache.
Score
2
January 30, 2009 4:25:09 PM

Under what conditions do 8 cores offer a performance boost over 4 cores? According to this article, the memory controller (memory bandwidth) is a big bottleneck.
Score
0
January 30, 2009 4:36:42 PM

DXrick said:
Under what conditions do 8 cores offer a performance boost over 4 cores? According to this article, the memory controller (memory bandwidth) is a big bottleneck.


It's not a matter of changing the conditions under which existing code runs. Software has to be written correctly to be highly parallelizable, and a large portion of the software out there isn't.

I'm going to stretch and say that a real answer to this question falls out of the scope of an internet message board.
Score
0
January 30, 2009 5:04:12 PM

Wow, eight cores... I wonder what the core clock is gonna be on it. And how is one supposed to cool this with air?? :p 
Unless it has a really low Vcore or is made to run really hot..
Score
0
January 30, 2009 5:09:56 PM

hmm...dual octa core mac pro anyone???


32 threads of pointlessness.
Score
-1
January 30, 2009 5:43:58 PM

DXrick said:
Under what conditions do 8 cores offer a performance boost over 4 cores? According to this article, the memory controller (memory bandwidth) is a big bottleneck.

Most conditions where one would get this processor.

Pretty much all 3D rendering applications I've seen are scalable; those 16 threads would benefit them hugely. Most server/workstation applications are highly scalable. Data processing and accurate physics simulations could benefit from this as well.

This probably won't do you much good in a desktop running day-to-day applications, even if you game (most games aren't written in a scalable manner).
Score
3
January 30, 2009 5:47:24 PM

tipoo said:
hmm...dual octa core mac pro anyone???
32 threads of pointlessness.


Pointless indeed. 32 threads are only going to be utilized by specialized multi-threaded software like rendering CGI movies or running scientific computations. There's just not that much processing to do in most consumer and even commercial settings. Maybe we'll see a trend back towards console computing: cheap front-end machines relying on the powerful servers behind them to do the actual work.
Score
-1
Anonymous
January 30, 2009 7:45:30 PM

The article doesn't refer to this chip as "Beckton", a codename I've seen used in the past (e.g., on Wikipedia), but this does seem to fit the Beckton description elsewhere, which in other places has been described as coming more towards the second half of the year. I look forward to it, in any event -- some of us do indeed use all those cores!
Score
0
January 30, 2009 7:49:01 PM

hellwig said:
Pointless indeed. 32 threads are only going to be utilized by specialized multi-threaded software like rendering CGI movies or running scientific computations. There's just not that much processing to do in most consumer and even commercial settings. Maybe we'll see a trend back towards console computing: cheap front-end machines relying on the powerful servers behind them to do the actual work.


Maybe at work, but not at home. First of all, the average home Internet connection isn't good enough to run an RDP, VNC, or X11 terminal over, as latency is high and bandwidth is marginal. Secondly, who are you going to trust to run that backend server with all of your data on it? Google and their "we'll rifle through everything" clauses in their user agreements? Microsoft with their general ineptitude? Your ISP with its cruddy service and ridiculous rates? No, the most server-side stuff you'll see is browser-based applications, as you will most certainly want to be in control of your own data storage and graphics.
Score
0
January 30, 2009 11:10:43 PM

I can imagine that if you ran a bunch of virtual servers off one box, it could replace a whole rack of older equipment. It might seem like a waste of horsepower, but it would be very energy efficient to take out half a dozen old boxes and replace them with one of these. I guess I'm saying there's no reason all 8 cores need to be working on the same problem, particularly with the hyperthreading and enhanced virtualization built into an i7 core. Assuming the I/O subsystem could keep up.
Score
0
January 31, 2009 3:43:47 AM

While I'm all for the advancement of technology, I really wish certain parts would keep up to utilize such advances. If Intel were to give you one of those 8-core chips today, by the time you actually required it the processor would be junk compared to the then-modern hardware.

While I realize this isn't for the average consumer, give it time and it eventually will be, with equally or even more powerful CPUs that still wouldn't really be needed. How many people are still happily running a Q6600? Most programs to this day don't take advantage of all 4 cores. Intel really needs to start convincing others to make their programs take full advantage of its hardware.
Score
0
January 31, 2009 4:11:52 AM

Nuclearshadow said:
While I realize this isn't for the average consumer, give it time and it eventually will be, with equally or even more powerful CPUs that still wouldn't really be needed.

Nuff said. This is for servers that are going to be used for large data processing, scientific simulation, or as render farms. All of those apps take extremely well to multiple threads of execution.

Nuclearshadow said:
How many people are still happily running a Q6600? Most programs to this day don't take advantage of all 4 cores. Intel really needs to start convincing others to make their programs take full advantage of its hardware.

Did everyone forget about multitasking?

I often work with multiple apps open at a time, and even if you're just playing a game, your OS doesn't magically shut off. Most games nowadays are programmed for 2 threads, so having 4 leaves plenty of execution space for your OS, and maybe even a music app.

Score
-1
January 31, 2009 9:33:23 PM


Did everyone forget about multitasking?
I often work with multiple apps open at a time, and even if you're just playing a game, your OS doesn't magically shut off. Most games nowadays are programmed for 2 threads, so having 4 leaves plenty of execution space for your OS, and maybe even a music app.


It's either multithreaded or it's not. Your OS only needs a single core, and most of the time it really only uses one. Most games aren't multithreaded yet, which is a huge problem. Not only are games not multithreaded, but most programs aren't either. Multi-chip and multi-core CPUs have been out for as long as I can remember; even though for the end user this has only been a few years, it's really been decades now. It's beyond me that no one is making programs to take advantage of this extra power.

Also, last time I checked, most programs that aren't multithread-enabled will only recognize and execute on the first core they see, since they're only expecting to see 1 core, so multitasking is kind of a weak excuse. Windows can do fine on its own core and whatever program you're using does fine on its core; after that it's mostly pointless.

World in Conflict is multithreaded. It also has SecuROM, oh well... good game, but what's the point.

Also, am I missing something? Did HT actually become a true logical processor? If not, it's not truly 16 cores, and that's really a misleading statement.
Score
-2
February 1, 2009 9:27:33 PM

Finally we may see that tri-channel DDR3 go to good use here?

That 2.3 billion transistor count may be the cache; even some of those NetBurst-based Xeons had double-digit cache figures.

Intel did design this CPU to be modular, after all.
Score
0
February 2, 2009 9:00:22 PM


Existing FSB-based XEON compute blades don't have enough memory bandwidth to properly utilise all four cores for complex tasks that hammer the memory system, e.g. animation rendering; the movie 10000BC is a particular example I'm familiar with, much of it done by MPC in London. Frames are rendered one per core with an average CPU utilisation of 90%. MPC uses a render management system called Alfred which monitors how much the memory is being pushed (probably by monitoring CPU usage, etc.), and if a particular frame's complexity is such that all four cores can't be fed fast enough, then one of the cores is not used, i.e. performance can be better on an FSB-type quad-core XEON by only using three cores. This action isn't normally necessary with Opterons, of course.
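The scheduling idea boils down to a simple rule: if a frame looks memory-heavy, render with one fewer worker than there are cores. A sketch of that rule (the function name and threshold here are made up for illustration, not MPC's actual Alfred logic):

```python
def workers_for_frame(complexity, cores=4, threshold=0.8):
    """Pick how many render workers to run on one node.

    complexity: estimated memory pressure of the frame (0.0-1.0).
    If a frame would saturate the shared FSB with every core active,
    backing off to cores - 1 can be a net win on FSB machines.
    """
    if complexity >= threshold:
        return cores - 1
    return cores

# Light frame: use every core. Heavy frame: leave one idle.
assert workers_for_frame(0.3) == 4
assert workers_for_frame(0.95) == 3
```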

The Nehalem XEON, using QPI, will solve this problem completely, giving an Intel platform that can run this type of task the same way as (probably better than) Opterons can. QPI is scalable, so Intel can add more links to match the number of cores. With Intel adding NUMA support as well, highly scalable single-image Nehalem systems should also run very nicely.

For reference, MPC has more than 900 Dell PowerEdge-1950 servers, all of them fitted with two quad-core 3.2GHz XEONs, 32GB RAM and a single 750GB SATA drive; more than 7000 cores total. They all run 64-bit CentOS, a free Linux variant that is binary compatible with RHEL.

Movie studios will love the new Nehalem XEON. Apart from being a faster chip anyway, they shouldn't have to worry so much (if at all) about load monitoring for turning off cores. We'll have to wait and see for the proper results, but I expect performance to be exceptional for rendering.

I'm very familiar with SGI hardware; SGI dealt with these bus bottlenecks more than 10 years ago at an architectural level. The early-1990s Challenge/Onyx platform, with up to 36 CPUs each with 2MB L2 (no dual-cores back then), had a shared-bus design (256-bit @ 37MHz, 1.2GB/sec). Complex tasks could severely hamper the available bandwidth per CPU, often limiting useful scalability to 20 CPUs or so. The issues were similar to today's FSB bottlenecks. Here's my Onyx btw, with 24 CPUs and 4GB RAM (not bad for 1994. :D )
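For anyone who wants to check the numbers, the shared-bus arithmetic works out like this:

```python
# Back-of-the-envelope check of the Challenge bus figures above.
bus_width_bytes = 256 // 8      # 256-bit wide bus = 32 bytes per transfer
bus_clock_hz = 37_000_000       # 37 MHz
total_bw = bus_width_bytes * bus_clock_hz
print(f"total bus bandwidth: {total_bw / 1e9:.2f} GB/s")  # ~1.18 GB/s

# Shared among many CPUs, each one gets very little.
for cpus in (20, 36):
    print(f"{cpus} CPUs -> {total_bw / cpus / 1e6:.0f} MB/s each")
```

At 36 CPUs that's roughly 33 MB/s each if everyone is hammering memory at once, which is why complex tasks stopped scaling well past 20 CPUs or so.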

The solution was the later Origin/Onyx2 platform (circa 1996; here's mine), which uses a scalable crossbar instead of a shared bus. Memory bandwidth scales with the number of CPUs, so CPUs are much less likely to be bandwidth-starved on complex tasks. As a result, 18 x 195MHz CPUs in a Challenge can be outperformed by just four 400MHz CPUs in an Origin, or only two 600MHz CPUs in the later Origin3K (NB: Intel made a similar motherboard architectural switch with the 440BX, which also uses a crossbar). Three architectural generations later, I wouldn't be surprised if SGI used the Nehalem XEONs for its next Altix design instead of the upcoming quad-core Itanium, assuming NUMA support is sufficient to support at least 512 CPUs (the current limit of the Altix 4700).

There are numerous scientific and server tasks that benefit from lots of cores. Having an IMC will greatly improve how well these systems run fine-grained codes.

I'll be getting an i7 system in May/June for video encoding, another area where it runs very well.

Ian.

Score
2