Intel's way to counter AMD's on-die mem. controller

Spitfire_x86

Click to read: http://www.xbitlabs.com/news/chipsets/display/20040311103200.html


------------
My Website: http://geocities.com/spitfire_x86

My Rig: http://geocities.com/spitfire_x86/myrig.html & 3DMark score: http://geocities.com/spitfire_x86/benchmark.html
 

imgod2u

This doesn't seem to "counter" the concept of an on-die memory controller. If anything, it's going further away from the concept. It gives more flexibility in terms of making chipsets, while most likely increasing memory latency as memory commands now have to go through 2 bridges. I don't really like the direction this is going. We're not heading towards more performance, especially considering just how incredibly memory-bound today's applications are.

If Intel insists on using more and more control bridges and buses, they should adopt a low-latency, flexible serial bus like IBM's Elastic-IO used in the PPC970.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
 

redface

That's just Intel's attempt to merge Xeon and Itanium and position Itanium as a mainstream business server.

A fine day!
 

phial

>It gives more flexibility in terms of making chipsets, while most likely increasing memory latency as memory commands now have to go through 2 bridges


That doesn't necessarily increase latency. The serial link could possibly reduce latency by quite a bit and just be passed through the north bridge with no ill effect. I don't see any downside to this. Besides, things have to change soon anyway; how long has the standard motherboard setup been around? Well over a decade.

-------
please don't click here! http://www.albinoblacksheep.com/flash/you.html
 

Xeon

Well, I don't see it not improving performance, since Intel is surely aware of what AMD's Opteron line is doing to its Xeon line. Performance is a must-have at this point, no ifs, ands, or buts.

Xeon

Scratch Here To Reveal Prize
 

P4Man

I don't really understand this either. I guess one advantage is that you could integrate several of these chips and attach them to the same NB, so you could address more DIMM slots at higher speeds. But it does nothing to add bandwidth to the system, since the major bottleneck on a 4-way Xeon MP is the NB in the first place, and it doesn't offer the other benefits an ODMC brings (bandwidth scaling with CPUs, lower latency, less complexity). On the contrary, it will add complexity to motherboard design. I'm sure Intel has its reasons, but I fail to see them...
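To put rough numbers on the "bandwidth scaling with CPUs" point, here is a quick back-of-the-envelope sketch in Python; the ~6.4 GB/s per memory interface is just an assumed nominal dual-channel DDR-400 figure, not anything from Intel or AMD.

# Rough sketch: aggregate peak memory bandwidth, shared NB vs. on-die controllers.
# The 6.4 GB/s figure is only an assumed nominal dual-channel DDR-400 peak.
PER_INTERFACE_GBPS = 6.4

def shared_northbridge(num_cpus):
    # All CPUs funnel through one memory interface on the northbridge.
    return PER_INTERFACE_GBPS

def on_die_controllers(num_cpus):
    # Each CPU brings its own memory interface (Opteron-style).
    return PER_INTERFACE_GBPS * num_cpus

for n in (1, 2, 4):
    print(f"{n} CPUs: shared NB = {shared_northbridge(n):.1f} GB/s, "
          f"on-die = {on_die_controllers(n):.1f} GB/s")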

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

Mephistopheles

Well, in theory, you could have much more design flexibility. What would happen if, instead of a dual-channel DDR-400 architecture, you used a quad-channel DDR-200 one? If Itanium's memory controller were modular and interfaced universally, any of Intel's processors could use it.
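As a quick sanity check on that trade-off (assuming 64-bit channels and counting only theoretical peak figures), the two configurations come out to the same raw bandwidth; the difference would be in slot count and concurrency rather than peak throughput:

# Peak bandwidth = channels * bus width (bytes) * transfers per second.
# Assumes 64-bit (8-byte) channels; DDR-400 = 400 MT/s, DDR-200 = 200 MT/s.
def peak_gb_per_s(channels, mt_per_s, width_bytes=8):
    return channels * width_bytes * mt_per_s / 1000.0

print("dual-channel DDR-400:", peak_gb_per_s(2, 400), "GB/s")  # 6.4 GB/s
print("quad-channel DDR-200:", peak_gb_per_s(4, 200), "GB/s")  # 6.4 GB/s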

So I guess that more complexity translates into more design flexibility as well. This is good for Intel and for the users, as long as the increased number of chip modules doesn't increase latencies.

But I still see what you mean. There doesn't seem to be a true and immediate benefit from this right now.... We'll see, I guess.

"You never change the existing reality by fighting it. Instead, create a new model that makes the old one obsolete." - Buckminster Fuller
 

P4Man

>What would happen if, instead of a dual-channel DDR-400
>architecture, you used a quad-channel DDR-200 one?

Not much if four CPUs have to share the same northbridge, as is usually the case with Xeon MPs. It would be like dual-channel DDR on an nForce2, which is twice as fast as the FSB but doesn't bring a lot of extra performance.
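Rough numbers for that nForce2 comparison, assuming an Athlon XP-style 64-bit, 400 MT/s FSB and nominal peak figures only (a toy model, not measured data): the CPU can't pull data faster than the slower of the two links, so memory bandwidth beyond the FSB is mostly wasted.

# Toy model: the CPU sees at most the narrowest link in the chain.
FSB_GBPS = 8 * 400 / 1000.0               # 64-bit FSB at 400 MT/s -> 3.2 GB/s (assumed)
SINGLE_DDR400_GBPS = 8 * 400 / 1000.0     # one 64-bit DDR-400 channel -> 3.2 GB/s
DUAL_DDR400_GBPS = 2 * SINGLE_DDR400_GBPS # 6.4 GB/s

for name, mem in [("single-channel DDR-400", SINGLE_DDR400_GBPS),
                  ("dual-channel DDR-400", DUAL_DDR400_GBPS)]:
    effective = min(FSB_GBPS, mem)
    print(f"{name}: memory {mem:.1f} GB/s vs FSB {FSB_GBPS:.1f} GB/s "
          f"-> CPU sees at most ~{effective:.1f} GB/s")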

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

imgod2u

Again, this is for flexibility. You can design the memory controller independently of the processor. So on a Xeon board, you can use the same memory controller as on an Itanium board; only a new north bridge is necessary.

However, this will most likely increase latency unless Intel has some type of super-link with near-zero latency between the north bridge and the memory controller. And even then, that's still two clocks instead of one (the north bridge takes one clock to forward the command, and the memory controller takes another to send it to memory). This doesn't bode well for performance.
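A toy latency model of that two-hop path; every nanosecond figure here is made up purely for illustration, the point being only that each extra bridge adds its own forwarding delay on top of the DRAM access itself:

# Toy model: total read latency = sum of hop delays + DRAM access time.
# All numbers are illustrative assumptions, not real chipset figures.
DRAM_ACCESS_NS = 45

def total_latency_ns(hop_delays_ns):
    return sum(hop_delays_ns) + DRAM_ACCESS_NS

on_die   = total_latency_ns([5])       # CPU -> on-die memory controller
external = total_latency_ns([10, 10])  # CPU -> north bridge -> external memory controller

print("on-die memory controller:          ", on_die, "ns")
print("north bridge + external controller:", external, "ns")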

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
 

trooper11

It's interesting how much Intel tries to create its own standards and force conformity to the products it puts out: the whole push for DDR2, PCI-E, BTX, and now its own memory controller separate from the northbridge. It's not necessarily a bad thing; it's good to get innovation. But it's interesting how easily Intel seems to steer the industry: they want BTX, so they're going to get it. We'll have to see if this helps performance-wise. It's possible, but in the short term probably unlikely.