
Intel way to counter AMD's on-die mem. controller

March 12, 2004 8:49:57 PM

<A HREF="http://www.xbitlabs.com/news/chipsets/display/200403111..." target="_new">Click to read</A>


------------
<A HREF="http://geocities.com/spitfire_x86" target="_new">My Website</A>

<A HREF="http://geocities.com/spitfire_x86/myrig.html" target="_new">My Rig</A> & <A HREF="http://geocities.com/spitfire_x86/benchmark.html" target="_new">3DMark score</A>
March 12, 2004 8:52:41 PM

That was already posted earlier today.

Xeon

<font color=orange>Scratch Here To Reveal Prize</font color=orange>
March 13, 2004 12:09:28 AM

This doesn't seem to "counter" the concept of an on-die memory controller. If anything, it's going further away from the concept. It gives more flexibility in terms of making chipsets, while most likely increasing memory latency as memory commands now have to go through 2 bridges. I don't really like the direction this is going. We're not heading towards more performance, especially considering just how incredibly memory-bound today's applications are.

If Intel insists on using more and more control bridges and buses, they should adopt a low-latency, flexible serial bus like IBM's Elastic-IO used in the PPC970.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
March 13, 2004 1:18:30 AM

That's just Intel's attempt to merge Xeon and Itanium and try to position Itanium as a mainstream business server.

A fine day!
March 13, 2004 6:26:25 AM

Quote:
It gives more flexibility in terms of making chipsets, while most likely increasing memory latency as memory commands now have to go through 2 bridges



That doesn't necessarily increase latency. The serial link could possibly reduce latency by quite a bit, and commands could just be passed through the north bridge with no ill effect. I don't see any bad side to this. Besides, things have to change soon anyway; how long has the standard motherboard setup been around? Well over a decade.

-------
<A HREF="http://www.albinoblacksheep.com/flash/you.html" target="_new">please dont click here! </A>
March 13, 2004 6:42:54 AM

If it's efficient, and improves performance, great. If it's more "it's intel so it must be good" crap....
March 13, 2004 6:46:05 AM

Well, I don't see it not improving performance, since logically Intel is aware of what AMD's Opteron line is doing to their Xeon line. Performance is a must-have at this point, no ifs, ands, or buts.

Xeon

<font color=orange>Scratch Here To Reveal Prize</font color=orange>
March 13, 2004 7:44:27 AM

The problem is that, for the market today, new + Intel = better. Too bad it isn't true.
March 13, 2004 10:14:18 AM

I don't really understand this either. I guess one advantage is that you could integrate several of these chips and attach them to the same NB, so you could address more DIMM slots at higher speeds. But it does nothing to add bandwidth to the system, since the major bottleneck on a 4-way Xeon MP is the NB in the first place, and it doesn't offer the other benefits an ODMC brings (bandwidth scaling with CPUs, lower latency, less complexity). On the contrary, it will add complexity to designing motherboards. I'm sure Intel has its reasons, but I fail to see them...

= The views stated herein are my personal views, and not necessarily the views of my wife. =
March 13, 2004 7:03:46 PM

Well, in theory, you could have much added flexibility in the designs. What would happen if, instead of a dual-channel DDR-400 architecture, you used a quad-channel DDR-200 one? If Itanium's memory controller were modular and interacted universally, any of Intel's processors could use it.

So I guess that more complexity translates into more design flexibility as well. This is good for Intel and for the users, as long as the increased number of chip modules doesn't increase latencies.

But I still see what you mean. There doesn't seem to be a true and immediate benefit from this right now.... We'll see, I guess.

<i><font color=red>You never change the existing reality by fighting it. Instead, create a new model that makes the old one obsolete</font color=red> - Buckminster Fuller </i>
March 13, 2004 9:44:08 PM

>What would happen if, instead of a dual-channel DDR-400
>architecture, you used a quad-channel DDR-200 one?

Not much, if 4 CPUs have to share the same north bridge, as is usually the case with Xeon MPs. It would be like dual-channel DDR on an nForce2 that is twice as fast as the FSB: it doesn't bring a lot of extra performance.
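A quick back-of-envelope sketch of that point (the per-channel figures below are the standard peak ratings for a 64-bit channel, used purely for illustration): quad-channel DDR-200 and dual-channel DDR-400 have exactly the same peak bandwidth, and both already saturate an 800 MT/s FSB, so extra channels past that point sit idle.

```python
# Back-of-envelope peak-bandwidth comparison.
# Figures are nominal peak ratings (MT/s * 8-byte bus), not measurements.

def peak_bandwidth(channels, mt_per_s, bus_bytes=8):
    """Peak bandwidth in MB/s for a given channel count and data rate."""
    return channels * mt_per_s * bus_bytes

dual_ddr400 = peak_bandwidth(2, 400)   # 6400 MB/s
quad_ddr200 = peak_bandwidth(4, 200)   # 6400 MB/s
fsb_800mt   = peak_bandwidth(1, 800)   # 6400 MB/s, shared by all CPUs

print(dual_ddr400, quad_ddr200, fsb_800mt)
```

Same total in every case, which is why rearranging channels behind a shared FSB doesn't buy much.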

= The views stated herein are my personal views, and not necessarily the views of my wife. =
March 14, 2004 6:55:50 AM

Again, this is for flexibility. You can design the memory controller independently of the processor. So on a Xeon board, you can use the same memory controller as on an Itanium board; only a new north bridge is necessary.

However, this will most likely increase latency unless Intel has some type of super-link with near-zero latency between the north bridge and memory controller. And even then, that's still 2 clocks instead of 1 clock (north bridge takes one clock to send command, memory controller takes one clock to send to memory). This doesn't bode well for performance.
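A toy model of that two-clocks-versus-one argument (the cycle counts below are made-up assumptions for illustration, not Intel figures): each extra bridge in the command path adds at least one forwarding cycle on top of the DRAM access itself.

```python
# Toy latency model: every bridge hop adds a fixed command-forwarding
# delay. All cycle counts are illustrative assumptions, not real specs.

def access_latency(bridge_hops, hop_cycles=1, dram_cycles=40):
    """Cycles for a memory command to reach DRAM through N bridges."""
    return bridge_hops * hop_cycles + dram_cycles

on_die  = access_latency(0)  # integrated controller: no bridge hops
one_hop = access_latency(1)  # controller inside the north bridge
two_hop = access_latency(2)  # north bridge + separate memory controller

print(on_die, one_hop, two_hop)
```

Even with a hypothetical near-zero-latency link, the separate-controller path is strictly one hop longer than the classic north-bridge design, and two longer than on-die.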

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
March 14, 2004 5:35:19 PM

It's interesting how much Intel is trying to make its own standards and force conformity to the products they put out: the whole push for DDR2, PCI-E, BTX, and now their own memory controller separate from the north bridge. It's not necessarily a bad thing, since it's good to get innovation, but it's interesting how easily Intel seems to steer the industry. They want BTX, so they are going to get it. We'll have to see if this helps performance-wise; it's possible, but in the short term probably unlikely.