Hi guys,
I'm a bit confused about the IMC in Nehalem and socket H.
Let me explain my understanding first:
Nehalem
45nm, 1-8 cores, IMC, CSI, integrated GPU for clients/mainstream
2 sockets planned: socket B and socket H
Socket B (LGA1366): makes perfect sense with 1366 pins to enable the IMC and multiple CSI links for 2P or 4P servers.
Socket H (LGA715): most articles I've read suggest Nehalem CPUs for LGA715 won't have an IMC.
Now my confusion:
As an IC designer, I know that a point-to-point memory subsystem gives much lower read/write latencies than a bus architecture. Lower latency means you need less on-chip cache (SRAM) and a shorter pipeline, which implies a radically smaller and much more efficient design. Now, if Intel were to design Nehalem (assuming a common core design) for both IMC and FSB, then the FSB would become the bottleneck, and the design would fall back to bus-architecture behavior even though it can support point-to-point memory access.
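Just to put rough numbers on the latency argument, here is a back-of-envelope AMAT (average memory access time) comparison. All figures below are my own illustrative assumptions, not measured Nehalem or FSB latencies:

```python
# AMAT = hit time + miss rate * miss penalty
# All numbers are illustrative assumptions, not real Intel figures.

def amat(hit_ns, miss_rate, miss_penalty_ns):
    """Average memory access time in nanoseconds."""
    return hit_ns + miss_rate * miss_penalty_ns

fsb_latency = 100.0   # assumed ns for a round trip over the FSB/northbridge
imc_latency = 60.0    # assumed ns with an on-die memory controller
miss_rate = 0.02      # assumed 2% miss rate out of the cache hierarchy

print(amat(1.0, miss_rate, fsb_latency))  # ~3.0 ns with FSB
print(amat(1.0, miss_rate, imc_latency))  # ~2.2 ns with IMC
```

Under these made-up numbers the IMC path cuts AMAT by roughly a quarter, which is the kind of gain that lets you get away with less cache, so tying the same core to an FSB would waste that advantage.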
So in that case socket H should still have direct memory access. Moreover, I understand that for 4P server support you would need four CSI links, which means a pretty large number of pins. My guess is that LGA715 could simply be missing these CSI links, not the IMC interface.
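To sketch why four CSI links eat so many pins, here is some quick arithmetic. The per-link structure below (20 differential lanes per direction plus forwarded clocks) is my guess at the CSI physical layer, not a confirmed Intel number:

```python
# Rough pin-budget sketch for CSI links.
# Lane and clock counts are assumptions, not published CSI specs.

lanes_per_direction = 20      # assumed data lanes each way
wires_per_lane = 2            # differential signaling: 2 pins per lane
directions = 2                # full-duplex: separate TX and RX lanes
clock_pairs = 2               # assumed one forwarded clock pair per direction

pins_per_link = (lanes_per_direction * wires_per_lane * directions
                 + clock_pairs * wires_per_lane)

links = 4
print(pins_per_link)          # 84 signal pins per link under these assumptions
print(links * pins_per_link)  # 336 pins for four links
```

Even with these guessed numbers, dropping three or four links frees on the order of 250-340 pins, which would go a long way toward explaining the gap between 1366 pins on socket B and 715 on socket H without having to drop the IMC.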
Would really appreciate it if somebody could clear this up for me.
Thanks