
The reasons why the new Core architecture has no IMC

Last response: in CPUs
May 21, 2006 2:02:15 PM

IMC- Integrated Memory Controller


1) It won't fit in with their intelligent power management technology.
2) It will bottleneck their upcoming I/O Acceleration Technology (IOAT).
3) If the controller remains on the northbridge, the transition to new memory modules will be easier.

How they solved the problem of no Integrated Memory Controller:

1) They came up with their Advanced Smart Cache technology.
2) They came up with their Smart Memory Access technology.
3) They used the silicon that could have been used for an IMC to improve their cache.
May 21, 2006 2:10:38 PM

Quote:
IMC- Integrated Memory Controller




hmm... nice reasoning!
May 21, 2006 2:20:21 PM

Quote:
IMC- Integrated Memory Controller




hmm... nice reasoning! Agreed!
May 21, 2006 3:51:29 PM

Quote:
IMC- Integrated Memory Controller




The reason Core has no IMC, in my opinion, is that they did not need the performance boost or extra bandwidth. The FSB is aging, true, but in single-socket desktops it is just as good as or better than HTT. HTT only really shines when you scale to multiple sockets.

In the new microarchitecture, Intel has redesigned the northbridge for servers to give each socket its own FSB. The northbridge now has 64 megs of cache itself to speculatively track which core has the most recent cache refresh, and "traffic cops" the data to each core, as I understand it. This bridgeport chipset should eliminate the cache coherency problem of more sockets sharing the same FSB -- the 4-way and 8-way space is owned by AMD, but until we see whether the Snoop Filter eliminates the cache coherency problem, the chapter is not finished yet.

Now, Intel has held the position that integrating the memory controller actually hurts flexibility to adopt new memory technologies, and they are correct on this. As a result, you work to hide that latency and decrease the dependency on the bus -- that is why Intel chooses to employ large caches. A large cache hides this latency, especially if you have very good prefetch logic built into the chip. While the pipeline is crunching, the prefetcher is working to keep the correct data in cache -- sort of parallel computing/caching.
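The latency-hiding argument can be put in rough numbers. A minimal sketch, assuming illustrative latencies (roughly 5 ns for an L2 hit and 100 ns for a DRAM access over the FSB; neither figure comes from the thread):

```python
# Average memory access time (AMAT): a bigger cache with a higher hit rate
# hides DRAM latency, which is the effect described above.
# All latency numbers here are illustrative assumptions, not measurements.

def amat(hit_rate: float, hit_ns: float, miss_penalty_ns: float) -> float:
    """AMAT = hit time + miss rate * miss penalty."""
    return hit_ns + (1.0 - hit_rate) * miss_penalty_ns

print(amat(0.95, 5.0, 100.0))  # 95% L2 hit rate -> ~10 ns average
print(amat(0.98, 5.0, 100.0))  # 98% hit rate (bigger cache / prefetch) -> ~7 ns
```

Raising the hit rate a few points cuts the average access time by a third, which is why a big shared L2 plus aggressive prefetch can stand in for a low-latency IMC.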

The question is, why do people fixate on the FSB when the real proof is in the pudding? What I mean by that is, if a Core 2 at 2.6 GHz costing 550 bucks spanks an FX-62, why would the fact that an FSB is used make a difference?

The FSB and its drawbacks are only really critical in 4-way and 8-way servers, as it does not scale as well as the NUMA model.

Jack

Word.
May 21, 2006 4:12:51 PM

Maybe in 45nm chips? There might be some plan...
May 21, 2006 4:19:49 PM

Quote:
IMC- Integrated Memory Controller



I disagree...
On Core there is no need for an IMC. Because of the SMA technology and the shared cache, the high latency is hidden (this is good because cheaper RAM modules with higher latencies can be used without sacrificing performance). The bigger shared cache is useful because more of the most frequently used cache blocks can be kept in L2. With a larger L2 there will be fewer accesses to the slow RAM, and more time available to wait for data coming from RAM without creating "bubbles" in the pipeline. SMA schedules and reorders independent loads ahead of stores. This is a nice, intelligent way to hide the latency of the RAM, just as the DDR IMC on the K8 with low-latency RAM was an intelligent way to hide the small L2 cache size (nicely achieved on s939 with CL2 DDR-400). DDR2 is high-latency memory, so the DDR2 IMC on the K8 sAM2 is inefficient and is losing its main role -- making the data needed for computation available sooner -- because of the extra cycles lost to RAM accesses. The silicon spent on the shared L2 cache on Core is better used than the silicon spent on the IMC plus 1MB of L2 per core on the dual-core K8 sAM2.

FSB1333 is 64-bit, providing 10.42 GB/s (1333*64/8) of bandwidth, while the single enabled HTT link on the FX-60/Opteron >165/X2 (16-bit, full-duplex, 2000 MHz) plus the 128-bit 400 MHz IMC provide 14.4 GB/s of total bandwidth. As Jack said, both single- and dual-core K8 and Core don't need that much bandwidth and are not using their full bandwidth potential.
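The arithmetic above can be checked with a quick script (a sketch; the post's 10.42 figure divides by 1024, where this uses decimal GB/s):

```python
# Peak-bandwidth arithmetic from the post above, in decimal units
# (1 GB/s = 1000 MB/s). Dividing by 1024 instead yields the ~10.42 figure.

fsb_gb_s = 1333 * 64 / 8 / 1000      # 64-bit FSB1333 -> ~10.66 GB/s
ht_gb_s = 2000 * 16 / 8 / 1000 * 2   # 16-bit full-duplex HTT at 2000 MT/s -> 8 GB/s aggregate
imc_gb_s = 400 * 128 / 8 / 1000      # 128-bit dual-channel DDR-400 behind the IMC -> 6.4 GB/s

print(round(fsb_gb_s, 2), round(ht_gb_s + imc_gb_s, 2))  # FSB vs. HTT + IMC totals
```

Note the 14.4 GB/s figure counts the HT link in both directions at once plus the memory channels, so the two totals measure slightly different things.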
May 21, 2006 4:41:17 PM

Quote:
Maybe in 45nm chips? There might be some plan...


Well, it is likely not to come with Penryn, as that is just a massive shrink of the Merom core. However, there are other products scheduled for 45 nm, and Intel has noted that the first product on a new process would be a shrink followed by a new architecture, so if an IMC were to make it in, that would logically be the time.

Now, we all know what the AMD fanbase will say -- Intel is copying AMD, and this is true. But if you look at the list of K8L goodies, many of those are a "copy" of Intel's new architecture. It's a wash.... but imagine if Intel does a) integrate the memory controller and b) establish a new serial/ported bus technology, removing that advantage from AMD altogether -- then 4-way and 8-way would be much more competitive. I think you will see that occur in servers very quickly; 2007 is not outside the realm of possibility. However, this is all JUST SPECULATION. No info has come out, so it's only worth bantering about in friendly discussion.

What would be interesting, given Intel's resources, is if they developed a different marketing approach to IMCs: say they integrate the IMC and support two lines of processors, one without an IMC and one with, using a smart northbridge that detects an IMC and passes memory traffic straight through. Then they could have the best of both worlds: flexibility in memory technology, and uber-fast memory access via the IMC. Food for thought.

Jack

WaaAaooOOOOOW 8O
May 21, 2006 5:12:45 PM

Well... I think that if Intel increases its FSB there isn't much need for an IMC. Instead of an IMC they can do something better with the new architecture, maybe quad core, more cache, better pipelining, enhanced execution cores... etc.
May 21, 2006 5:31:45 PM

Quote:
IMC- Integrated Memory Controller



Word.

May 21, 2006 5:39:39 PM

Maybe someone could explain to me how AMD's 2 GHz HT bus accesses RAM? I mean, dual-channel DDR2 @ 1 GHz would be the only thing to make use of that kind of bandwidth, but that is AM2, and even then only if AMD supports that speed of RAM (I'm sure they will). But right now dual-channel DDR1 @ 400 MHz gives you an 800 MHz bus, right? I really want to know! On the server side I imagine the HT bus is used to communicate between the CPUs in a multi-CPU environment, but for the home user who has one CPU with one or two cores... not much use from what I can tell.
May 21, 2006 5:44:27 PM

Bragging rights maybe?
May 21, 2006 5:45:08 PM

The rest of your explanation is good, but what caught my attention was this:

Quote:
The northbridge now has 64 megs of cache itself to speculatively track which core has the most recent cache refresh, and "traffic cops" the data to each core, as I understand it.

Do you mean that the northbridge actually has 64MB of on-die cache, which doesn't seem sensible, or that it swipes that from RAM? From what David Kanter has said on the snoop filtering subject, a 3MB cache of eDRAM (IBM's implementation) or SRAM should be sufficient. Also, I don't believe the Snoop Filter has actually been implemented on the Bensley platform. The Blackford server chipset doesn't include it at all, and the workstation Greencreek only has it on a trial basis. It'll likely be disabled by default, kind of like Hyper-Threading was initially, until Intel can work out the details. Still, I definitely think snoop filtering will be available sooner or later, since the benefits are apparent and the space used by adding cache to the northbridge will be nullified as chipset production switches from 130nm 200mm-wafer production to 90nm 300mm-wafer production.
May 21, 2006 5:50:20 PM

Quote:
Bragging rights maybe?


Hahahaha, that's actually funny, and from some of the posts I have read, the main use :)  I have my A64s here and they are nice, and they use dual-channel DDR400, which is nice, but I don't see the huge bandwidth advantage over my Intel systems. I guess Oblivion just isn't up to the challenge of stressing that bus. I think when AMD switches over to DDR2 and supports really high speeds, things will get better, but until then I say Intel's short bus is just fine (pun intended).
May 21, 2006 6:00:13 PM

Quote:
Bragging rights maybe?


Hey, it just popped into my mind, but if you think about it, it's a nice thing to say to some Intel fanboy:

I have a 2000 MHz front-side bus on my Athlon 64, while all you have is 800 MHz on your P4, haha, you suck!

Maybe with DDR3 the bus will start to be pushed to the limit.

Short bus? Oh, I get it. :lol: 
May 21, 2006 6:10:54 PM

Quote:
IMC- Integrated Memory Controller



Word.

May 21, 2006 6:26:57 PM

Quote:
Bragging rights maybe?



I have a 2000 MHz front-side bus on my Athlon 64, while all you have is 800 MHz on your P4, haha, you suck!


Actually, you have a 1000 MHz FSB; they double the frequency because it is two-way. It is a misnomer to call it 2000 MHz: it really runs at 1000 MHz in each direction, so you are only getting 1000 MHz into the CPU at any given time and 1000 MHz out of the CPU at any given time. I know, I know -- but then Intel's bus can only go one way at a time.
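The marketed number counts transfers rather than clock cycles, the same as any double-data-rate link. A sketch of the arithmetic (the 16-bit link width is the usual HyperTransport configuration on these parts):

```python
# HyperTransport "2000 MHz": a 1000 MHz clock transferring on both edges
# gives 2000 MT/s, and full duplex means that rate applies independently
# in each direction.

clock_mhz = 1000
transfers_per_clock = 2                     # double data rate
mt_s = clock_mhz * transfers_per_clock      # 2000 MT/s, the marketed number
gb_s_per_direction = mt_s * 16 / 8 / 1000   # 16-bit link -> 4.0 GB/s each way

print(mt_s, gb_s_per_direction)  # 2000 4.0
```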
May 22, 2006 7:26:25 AM

Quote:
IMC- Integrated Memory Controller



Well, I agree that one bottleneck in AM2 is memory access, but AMD might be able to design something like Smart Memory Access technology that fits in with their IMC.
May 22, 2006 1:41:32 PM

They will in K8L, but it will already be late, because by that time there will be DDR2-800 with tighter CAS latencies and Intel will probably have CSI with an IMC.
June 8, 2006 6:26:20 AM

Quote:
IMC- Integrated Memory Controller



Well, he did a nice bullet-form analysis of the question. Points 1 and 2 of the first group are not correct, but the last one is... and the solutions are all true, except in a different context. IMCs solve a problem; Intel's methods just go about solving that same problem -- memory latency -- in a different way.

I hear rumors that Intel may integrate the memory controller in certain product lines in 2007... but right now that is just rumor -- Rattner always dodges the issue.

Yeeeeaaaahhhhh, my first post!!!

First of all, I am an AMD fanboy, but that's not to say I don't look at Intel in a good light. That being said, JumpingJack, I'm happy to see someone speak intelligently about Intel's FSB vs. IMC. But their current setup won't last too much longer.

Sure, an 800 MHz FSB was OK for dual-channel (DC) DDR-400, but with DDR2 it was a joke. So then they increased it to 1066 for DC 533, which is still a joke. Blackford/Greencreek are 1333, which is good (DC 667), but they need at least 1600 for DDR3. Unfortunately, this probably won't happen anytime soon, because Intel never bothers to give their northbridges die shrinks. Plus, rumor has it that CSI is lagging and might not come out until 2009. Ouch.

What this means is that when it comes time for DDR3, Intel will be at a loss. They will most likely only be able to support DC DDR3-800, whereas AMD will have HT 3.0 at 5200 MHz effective, which will be able to handle DC DDR3-1600 and even next-generation memory as well. Also, AMD's next memory controller is supposed to support DDR2, DDR3, and FB-DIMM all at once, allowing AMD to be more flexible, just like Intel's off-die northbridge.
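A quick sanity check of those figures (a sketch; the 16-bit-per-direction link width assumed for HT 3.0 here is the common configuration, not something stated in the thread):

```python
# HT 3.0 at 5200 MT/s versus dual-channel DDR3-1600, peak rates in decimal GB/s.

ht3_gb_s_per_dir = 5200 * 16 / 8 / 1000   # 16-bit link -> 10.4 GB/s each direction
ddr3_dc_gb_s = 1600 * 128 / 8 / 1000      # 2 x 64-bit DDR3-1600 -> 25.6 GB/s

# Note: with an on-die IMC, memory traffic never crosses the HT link at all;
# HT mainly carries I/O and socket-to-socket traffic.
print(round(ht3_gb_s_per_dir, 1), round(ddr3_dc_gb_s, 1))
```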

Lastly, AMD's 1000 MHz FSB is equivalent to Intel having a 2000 MHz FSB, so that is why people say AMD has a 2000 MHz FSB.

:idea: I'm not sure if AMD is going to or can do this, but HT 3.0 with a slight overclock to 5333 could handle quad-channel DDR3-1333. That would be insane.
June 8, 2006 7:14:05 AM

I thought the dual-core-on-one-die thing was supposed to save us from dual sockets... I remember back in the day dual sockets were cool and all the cool kids had them. Now they come back with dual cores on dual sockets with dual video and dual RAM. I feel like next I'm going to need dual power supplies :( 


Ahhhh well, can't hold back the future :) 
June 8, 2006 7:48:50 AM

Quote:
Oh, and with the 4x4 with two FX-62s (they're 200 watts or so in themselves), a couple of HDs, and memory, you will need two power supplies and an electrician to pull a 30-amp line to your computer room :)  .... Right now I have 5 computers in the office, a fish tank, and my lighting. If I try to run my vacuum cleaner I throw a breaker :) 

....not that it wouldn't be cool to have a two-socket quad-core rig; I am just wondering how much it will cost.


If AMD allows the use of 65W X2s, then it's a completely different story. It would be foolish of them not to. And I believe the FX-62 is actually 125W each, which is even worse. Oh, why did I say that. Sorry, AMD, I have forsaken you. :( 
June 8, 2006 8:12:48 AM

Quote:
Maybe in 45nm chips? There might be some plan...




AFAIK, Intel is the origin of the integrated memory controller... remember the Timna core?
June 8, 2006 8:16:55 AM

Quote:
IMC- Integrated Memory Controller



I hear rumors that Intel may integrate the memory controller in certain product lines in 2007... but right now that is just rumor -- Rattner always dodges the issue.



Well, it's not a rumor... =) It's all but the truth.

The Intel Itanium "Tukwila" core, slated for 2008... =)

Here's a link: Intel Itanium "Tukwila Core"
June 8, 2006 8:21:01 AM

Quote:
Hey, can you find a low energy AM2 CPU anywhere for purchase? (I am not being facetious) I was just wondering if they are in stock and how much they are going for? I haven't seen any around.


The 65W and 35W parts are not out yet. They will be in another month or two, with a $5 premium if I'm not mistaken. I'm waiting for the 65W 4000 X2 myself.

PS: I'm gonna try to get some sleep now. It's 4 in the morning where I'm at. Insomnia sucks. If you have anything else to say, I'll try to reply tomorrow. Thanks for replying to my posts.
June 8, 2006 8:25:51 AM

Quote:
IMC- Integrated Memory Controller


1) It won't fit in with their intelligent power management technology.
2)It will bottleneck their upcoming I/O Acceleration Technology (IOAT
3) If the controller remains on the northbridge, the transition to new memory modules will be easier.

How they solved the prob of no Integrated Memory Controller

1)they came up with their advanced smartcache technology
2)the come up with their smart memory access technology
3)they used the silicon which could have been used in an IMC to improve their cache.


The reason Core has no IMC, in my opinion, is they did not need the performance boost or extra bandwidth. FSB is aging, true, but in single socket DTs is just as good or better than HTT. HTT only really shines when you scale multiple sockets.

In the new microarchitecture, Intel has redesigned the notrthbridge for servers to give each socket it's own FSB. The northbridge now has 64 megs of cache itself to speculatively track which core has the most recent cache refresh and "traffic" cops the data to each core as I understand it. This bridgeport chipset should eliminate the cache coherency problem of more sockets sharing the same FSB -- the 4-way and 8-way space is owned by AMD, but until we see if the Snoop Filter eliminates the cache coherency problem, the chapter is not finished yet.

Now, Intel has held the position that integrating the IMC actually hurts flexibility to adopt new memory technologies, and they are correct on this -- as a result, you work to hide that latency and decrease the dependency on bussing -- that is why Intel chooses to employ large caches, large cache hides this latency, especially if you have a very good prefetch logic built into the chip. While the pipeline is crunching, the prefetcher is working to keep the correct data in cache sorta parallel computing/caching.

The question is, why do people fixate on the FSB when the real proof is inthe pudding? What I mean by that is, if a Core 2 at 2.6 GHz, consting 550 bucks spanks an FX-62 why would the fact that a FSB is used make a difference?

FSB and it's draw backs are only really critical in the 4-way and 8-way servers as it does not scale as well as the NUMA model.

Jack

Word.

Well, he did a nice bullet-form analysis of the question. Points 1 and 2 of the first group are not correct, but the last one is... and the solutions are all true, except in a different context. IMCs solve a problem; Intel's methods just go about solving that same problem -- memory latency -- in a different way.

I hear rumors that Intel may integrate the memory controller in certain product lines in 2007... but right now that is just rumor -- Rattner always dodges the issue.

Yeeeeaaaahhhhh, my first post!!!

First of all, I am an AMD fanboy, but that's not to say that I don't look at Intel in a good light. That being said, JumpingJack, I'm happy to see someone speak intelligently about Intel's FSB vs. IMC. But their current setup won't last too much longer.

Sure, an 800MHz FSB was OK for dual-channel (DC) DDR-400, but with DDR2 it was a joke. So they increased it to 1066 for DC DDR2-533, which is still a joke. Blackford/Greencreek are 1333, which is good (DC DDR2-667), but they need at least 1600 for DDR3. Unfortunately, that probably won't happen anytime soon, because Intel never bothers to give its northbridges die shrinks. Plus, rumor has it that CSI is lagging and might not come out until 2009. Ouch.

What this means is that when it comes time for DDR3, Intel will be at a loss. They will most likely only be able to support DC DDR3-800, whereas AMD will have HT3.0 at an effective 5200MHz, which will be able to handle DC DDR3-1600 and even next-generation memory as well. Also, AMD's next memory controller is supposed to support DDR2, DDR3 and FB-DIMM all at once, allowing AMD to be just as flexible as Intel's off-die northbridge.

Lastly, AMD's 1000MHz HT link transfers data on both clock edges, making it equivalent to Intel having a 2000MHz FSB -- that is why people say AMD has a 2000MHz FSB.

:idea: I'm not sure if AMD is going to or can do this, but HT3.0 with a slight overclock to 5333 could handle quad-channel DDR3-1333. That would be insane.
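The FSB-to-memory pairing argument above is just bus arithmetic; here is a quick sketch of it (theoretical peaks only, 64-bit buses assumed, pairs taken from the post):

```python
# Peak bandwidth in GB/s: transfers/s * 8 bytes per transfer (64-bit bus);
# dual channel doubles the memory side.
def bus_gbps(mt_per_s):
    return mt_per_s * 8 / 1000

for fsb_mt, mem_mt in [(1066, 533), (1333, 667), (1600, 800)]:
    print(f"FSB {fsb_mt}: {bus_gbps(fsb_mt):.1f} GB/s "
          f"vs dual-channel {mem_mt} MT/s memory: {bus_gbps(2 * mem_mt):.1f} GB/s")
```

Each FSB speed only just matches the dual-channel memory it is paired with, which is why a 1600 MT/s FSB would top out around DC DDR3-800: dual-channel DDR3-1600 would need roughly double that.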


Here is a comparison between AMD's HTT3.0 and Intel's bottlenecked 1333MHz FSB.

Here's the link: 1333MHz FSB vs. HTT3.0

You can clearly see that HTT3.0 leaves the FSB in the dust in terms of bandwidth.
June 8, 2006 8:39:40 AM

Quote:
Here is a comparison between AMD's HTT3.0 and Intel's bottlenecked 1333MHz FSB.

Here's the link: 1333MHz FSB vs. HTT3.0

You can clearly see that HTT3.0 leaves the FSB in the dust in terms of bandwidth.


Last post of the night I swear.

I was comparing a single 1600MHz FSB to a single HTT3.0 bus at 5200 while using DDR3. Woodcrest uses two FSBs with quad-channel DDR2-667 memory for a theoretical max of 21.3GB/s (it will only achieve about 16GB/s). The Opteron uses two HTT2.0 links, but the bandwidth isn't as high because it is using quad-channel DDR-400. You were comparing apples to oranges. [edit] Intel is comparing apples to oranges; you were just blinded by Intel marketing. I forgive you.
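Since the post leans on raw numbers, here is a quick sanity check of them (theoretical peaks only; real systems fall well short, as the post notes):

```python
# Theoretical peaks in GB/s: transfers/s * 8 bytes per transfer (64-bit bus).
def gbps(mt_per_s, width_bytes=8):
    return mt_per_s * width_bytes / 1000

woodcrest_fsbs = 2 * gbps(1333)  # two independent 1333 MT/s FSBs: ~21.3 GB/s
woodcrest_mem  = 4 * gbps(667)   # quad-channel DDR2-667:          ~21.3 GB/s
opteron_mem    = 4 * gbps(400)   # 2 sockets * dual-channel DDR-400: 12.8 GB/s
print(woodcrest_fsbs, woodcrest_mem, opteron_mem)
```

So the dual-FSB setup and the quad-channel DDR2-667 behind it are balanced on paper, while the Opteron pair is limited by its older DDR-400, not by HyperTransport.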
June 8, 2006 8:52:36 AM

Sorry for the double post, but Intel could handle dual-channel DDR3-1600 if they use two FSBs for a single socket. This wouldn't surprise me, because Intel always slaps stuff together to improve performance quickly (as inefficient as it may be).

I'm one to talk, though; hardware vendors didn't even know about 4x4 until AMD announced it a couple of days ago.
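The two-FSBs-per-socket idea above checks out on paper, at least for the peak numbers:

```python
# Theoretical peaks: two 1600 MT/s 64-bit FSBs vs dual-channel DDR3-1600.
two_fsbs     = 2 * 1600 * 8 / 1000   # 25.6 GB/s of bus bandwidth
dc_ddr3_1600 = 2 * 1600 * 8 / 1000   # 25.6 GB/s of memory bandwidth
print(two_fsbs, dc_ddr3_1600)        # the buses would match the memory
```

Whether the chipset could actually keep both buses busy is another question, but the raw bandwidth would line up.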
June 8, 2006 9:01:34 AM

Quote:
Sorry for the double post, but Intel could handle dual-channel DDR3-1600 if they use two FSBs for a single socket. This wouldn't surprise me, because Intel always slaps stuff together to improve performance quickly (as inefficient as it may be).

I'm one to talk, though; hardware vendors didn't even know about 4x4 until AMD announced it.



Oops... the Intel "Woodcrest" uses two FSBs -- it has a 1333MHz FSB per socket.
June 8, 2006 9:25:33 PM

Quote:
FSB1333 is 64-bit, providing 10.4GB/s (1333*64/8) of bandwidth, while a single HTT link (16-bit full duplex at 2000MHz), as enabled on the FX-60/Opteron >165/X2, plus a 128-bit 400MHz IMC, provides 14.4GB/s of total bandwidth.


Your math is wonky. No wonder you think PCs have higher bandwidth than consoles :) 

1.3GHz * 8 bytes (64 bits) is 10.4GB/s (shared by both directions). In practice, these theoretical numbers mean little anyway (and you do have to read and write data over the same bus).
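Re-doing the arithmetic from the quote and the reply above (64-bit FSB, 16-bit full-duplex HT link, 128-bit DDR-400 memory interface):

```python
# All intermediate figures in MB/s.
fsb_1333   = 1333 * 64 / 8        # 10664 MB/s, shared by reads and writes
ht_link    = 2000 * 16 / 8 * 2    # full duplex: 4000 MB/s each direction
imc_ddr400 = 400 * 128 / 8        # 6400 MB/s on the 128-bit memory interface

print(fsb_1333 / 1024)                 # ~10.4 "GB/s" if you divide by 1024
print((ht_link + imc_ddr400) / 1000)   # 14.4 GB/s, as the quote says
```

So both posters' totals are internally consistent; the 10.42 vs 10.66 discrepancy is just GB (1000-based) vs GiB (1024-based) division.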

Most modern CPUs have gone the IMC route -- Cell, Opteron, DEC Alpha, Itanium (soon, if not now). I'm sure Intel will in the near future.