Second-class Intel to trail AMD for years

October 30, 2005 12:23:16 AM

So says The Register. :) 

Read here: http://www.theregister.co.uk/2005/10/29/intel_xeon_2009...

I'm really sorry for Intel, but they deserve it for being arrogant and foolish...

My Beloved Rig:

ATHLON 64 FX 55 (will be changed for an X2 3800+)
2X1024 CORSAIR XMX XPERT MODULES
MSI K8N DIAMOND (SLI)
2 MSI 6800 ULTRA (SLI MODE)
OCZ POWERSTREAM 600W PSU
October 30, 2005 1:18:39 AM

I think Intel would do much better if they simply came up with a naming scheme that wasn't in code.

Long live Dhanity and the minions scouring the depths of Wingdingium!

XxxxX
(='.'=)
(")_(") Bow down before King Bunny
October 30, 2005 4:15:27 AM

I disagree. There's nothing particularly wrong with most of their processors. It's just that the competition is better for most people.

Long live Dhanity and the minions scouring the depths of Wingdingium!

XxxxX
(='.'=)
(")_(") Bow down before King Bunny
October 30, 2005 4:20:54 AM

The P4 was a huge step backwards in IPC. Intel remade the PIII to a similar degree as AMD remade the Athlon; now we have Athlon 64s and Intel Pentium Ms with high IPC...and the P4 looks worse than ever.

Only a place as big as the internet could be home to a hero as big as Crashman!
Only a place as big as the internet could be home to an ego as large as Crashman's!
October 30, 2005 4:28:55 AM

I'm not buying a P4 either. But I'm also not even looking at the P4 articles since their naming scheme makes it very annoying to read.

Eventually the OEMs will figure out what's going on and will start switching over to AMD.

You have to realize, though, that someone who keeps upgrading within the Intel family of processors is probably happy, since to them each upgrade brings an increase in performance. The problem is those who go from Intel to AMD and then try to go back to Intel. That's not fun.

Long live Dhanity and the minions scouring the depths of Wingdingium!

XxxxX
(='.'=)
(")_(") Bow down before King Bunny
October 30, 2005 4:41:03 AM

Yes, I have an AMD system for benchmarking, and I can't even use it because I need the parts for that purpose.

Only a place as big as the internet could be home to a hero as big as Crashman!
Only a place as big as the internet could be home to an ego as large as Crashman's!
October 30, 2005 6:59:24 AM

Quote:
Netburst is the worst mistake Intel has made in history EVER!


Worse than the Itanium?

Some people are like slinkies....
Not really good for anything, but you can't help but smile when you see one tumble down the stairs.
October 30, 2005 4:17:44 PM

If by second class you mean making a ton more money, then yes, you are correct.
October 30, 2005 5:26:33 PM

Quote:
If by second class you mean making a ton more money, then yes, you are correct.

It doesn't matter if Intel makes zillions of dollars if they don't know how to use them the right way.

Just look at their roadmap: delays, cancelled processors, and the list keeps growing (don't even mention their current offerings, which are no competition for AMD's processors).

AMD, being a smaller company, knows how to spend money wisely.

What Intel is doing with all that money...
One has to wonder ;) 

My Beloved Rig:

ATHLON 64 FX 55 (will be changed for an X2 3800+)
2X1024 CORSAIR XMX XPERT MODULES
MSI K8N DIAMOND (SLI)
2 MSI 6800 ULTRA (SLI MODE)
OCZ POWERSTREAM 600W PSU
Edited by Bullshitter on 10/30/05 02:33 PM.
October 30, 2005 7:53:18 PM

Quote:
Just look at their roadmap: delays, cancelled processors, and the list keeps growing (don't even mention their current offerings, which are no competition for AMD's processors).


Isn't their roadmap Merom, Conroe and Woodcrest? The only delays and whatnot are the crappy Xeons and Itaniums.

Some people are like slinkies....
Not really good for anything, but you can't help but smile when you see one tumble down the stairs.
October 30, 2005 9:24:06 PM

Well, Intel can blunder a lot without getting really hurt. They are that rich.
They have a monopoly in the CPU/chipset business globally, and they think
they can fuck up just about anything without getting caught.

If in trouble, Intel drops a few gazillion bucks more on the marketing department, pisses on the R&D department, and orders more champagne for its owners for making such a brilliant decision.
October 30, 2005 9:42:42 PM

-Warning Long Post-

Intel's roadmap actually isn't too bad. While the delay of the integrated memory controller is a setback, it probably isn't as catastrophic as it appears.

Intel has been taking a lot of flak lately on their new Paxville DP. In this case, it is deserved. They decided to place two Prescott 620s together. The crazy heat production is directly due to the presence of not only HT in each processor but also an extra 1MB of L2 cache. Intel probably felt that the 1MB of extra cache per core was more worthwhile in a server environment than a 400MHz increase in clock rate, which is why they didn't just use an 840EE. In the end, the 90nm process simply couldn't handle dual-core, HT-enabled processors with 2MB of cache per core. The lower performance compared to Opteron is due to the low clock speed of 2.8GHz and the bottleneck of 4 cores sharing an 800MHz FSB.

These problems will be greatly reduced once Dempsey and Bensley arrive. Dempsey will probably be closely related to Presler, meaning speeds of up to 3.46GHz, HT enabled, with 2MB of L2 cache per core. Higher clock speeds are likely possible, as a 3.4GHz 950 was shown to fit within the thermal and power envelope of a 2.8GHz 820. The higher clock speed will help, but the main benefit is the 1066MHz FSB. The 33.3% increase in bandwidth will satisfy core-to-core cache transfers while opening up more throughput to the RAM. Even more important is the addition of individual 1066MHz FSB pipes, like what AMD has, to ensure the processors don't compete. In addition, the RAM speed has increased from 400MHz to 533MHz and is now quad-channel. This means that total FSB bandwidth has nearly tripled, from the 6.4GB/s shared between 4 cores in Paxville to 17GB/s. Memory bandwidth has likewise nearly tripled, from 6.4GB/s to 17GB/s. Even a dual-processor Opteron system only has 12.8GB/s of total memory bandwidth available. Dempsey and Bensley should certainly make Intel highly competitive with AMD.
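
For what it's worth, here's a quick sanity check of those numbers in Python (a minimal sketch; the 8-byte bus width and the exact Paxville/Bensley memory configurations are my assumptions for illustration, not quoted specs):

# Rough FSB/memory bandwidth arithmetic; all buses assumed 8 bytes wide.
def gbps(mhz, bytes_wide=8, channels=1):
    return mhz * bytes_wide * channels / 1000.0  # GB/s

paxville_fsb   = gbps(800)                  # one shared 800MHz FSB for 4 cores  -> 6.4 GB/s
bensley_fsb    = 2 * gbps(1066)             # two independent 1066MHz FSBs       -> ~17 GB/s
paxville_mem   = gbps(400, channels=2)      # dual-channel 400MHz memory (assumed) -> 6.4 GB/s
bensley_mem    = gbps(533, channels=4)      # quad-channel 533MHz memory          -> ~17 GB/s
opteron_2p_mem = 2 * gbps(400, channels=2)  # two on-die dual-channel DDR-400 controllers -> 12.8 GB/s

for name, val in [("Paxville FSB", paxville_fsb), ("Bensley FSB", bensley_fsb),
                  ("Paxville memory", paxville_mem), ("Bensley memory", bensley_mem),
                  ("2P Opteron memory", opteron_2p_mem)]:
    print(f"{name}: {val:.1f} GB/s")

The "nearly tripled" figures above come straight out of this kind of arithmetic, so at least the raw numbers are internally consistent.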

Now to address the 4-way server market. While an integrated memory controller would provide better memory bandwidth scaling with additional processors, Intel’s current FSB architecture could easily be expanded to provide much of what’s required. Currently Intel’s Xeon MPs use a 667MHz FSB. Intel is already working on a 1333MHz FSB for Woodcrest, and the application of such a bus would double the available bandwidth. Of course, on the motherboard side, each processor would have an independent FSB to reduce congestion. Memory bandwidth would likewise see an increase from the current 400MHz to 667MHz in a quad-channel configuration. These improvements are easily made and will keep Intel competitive in the near-term.

One of the major improvements with the use of an integrated memory controller is the reduction in latency. The high latencies on Intel's current systems are partially due to the memory running asynchronously with the FSB. This is corrected in Bensley, where 533MHz RAM is matched with a 1066MHz FSB. By working synchronously, some of the latency issues will be reduced. Similarly, Xeon MPs working with a 1333MHz FSB will run synchronously with 667MHz RAM. In addition, an advantage that Intel has over AMD is that they design their own chipsets. If they spent the effort, they could easily streamline the CPU-Northbridge-RAM interconnects to reduce latency.

All these are just simple improvements in the buses that will help improve Intel's performance. Intel's next-generation architecture isn't even mentioned, but Conroe, Merom and Woodcrest are certainly something to look forward to. All in all, the delay of an integrated memory controller isn't a catastrophe to Intel's roadmaps.
October 31, 2005 12:49:23 AM

Quote:
-Warning Long Post-

Intel's roadmap actually isn't too bad. While the delay of the integrated memory controller is a setback, it probably isn't as catastrophic as it appears.

[snip]

All in all, the delay of an integrated memory controller isn't a catastrophe to Intel's roadmaps.

After reading this (http://theinquirer.net/?article=27317), I'd like to know if you are still optimistic about Intel's upcoming processors.

Quote:
After all, with its well-known NIH (Not Invented Here) policy, Intel rarely took something from the market that it didn't develop itself, even when it was both technically superior and a good business decision.

This clearly backs up what I've said about Intel being an arrogant and foolish company.

Also, you'll have to remember that AMD isn't sitting still either.
By the time Intel releases their "flagship" processor, AMD will be releasing their much-improved quad-core processor on a 65nm process, with extensions to the AMD64 instruction set, all paired with HyperTransport 3.0 and a new multimedia instruction set to boost applications like 3D rendering and audio/video encoding. :D 

My Beloved Rig:

ATHLON 64 FX 55 (will be changed for an X2 3800+)
2X1024 CORSAIR XMX XPERT MODULES
MSI K8N DIAMOND (SLI)
2 MSI 6800 ULTRA (SLI MODE)
OCZ POWERSTREAM 600W PSU
Edited by Bullshitter on 10/30/05 09:52 PM.
October 31, 2005 1:03:16 AM

Here's more info to back up what I've said about a troubled company called Intel: http://www.theregister.co.uk/2005/10/28/intel_whitefiel...

Quote:
While stunning in its own right, Intel's cancellation this week of the multicore "Whitefield" processor stands as a more significant miscue than simply excising a chip from a roadmap. Whitefield's disappearance is a blow to India's growing IT endeavors.

Originally discovered by The Register, Whitefield stood as a major breakthrough for Intel and its Indian engineers. The much-ballyhooed chip would combine up to four mobile processor cores and arrive in 2007 as the very first chip designed from the ground up in India. In the end, engineering delays and a financial audit scandal killed the processor, leaving Intel to develop the "Tigerton" replacement chip here and in Israel.

El Reg has discovered that Srinivas Raman, former general manager of Intel India's enterprise products group, left the company in early August and joined semiconductor design tools maker Cadence - the home of former Intel global server chip chief Mike Fister. Raman declined to return our phone calls, but insiders confirm that he was the lead of the Whitefield project. The executive became distressed about the project when Intel's audit resulted in close to 50 of his staff being let go from the company, one source said.

Of the 50 staffers, close to 20 of them were sent to India from Portland in 2001 to work on Whitefield. The cancellation of the project has since resulted in much of the work being sent back to Portland.

Whitefield had been meant to serve as Intel's most sophisticated response to the rising multicore and performance per watt movements. The company has fallen well behind rivals IBM and Sun Microsystems on such fronts in the high-end server market and behind AMD in the more mainstream x86 chip market. The Whitefield chip was designed to give these competitors a real run for their money as it made use of Intel's strong mobile chip technology to deliver a high-performing product with relatively low power consumption.

Instead of wowing customers, Intel has disappointed them and created a painful situation for its India staff.

Local paper The Times of India commented this week on the situation.

"India's ambitions of emerging (as) a global chip design and development hub has just suffered a big knock," the paper wrote. "Intel has killed its much-hyped Whitefield chip, a multicrore Xeon processor for servers with four or more processors that drew its name from Bangalore's IT hotspot, Whitefield, and which was being developed almost wholly in this city.

"Intel had invested heavily in the project, both in infrastructure and people, drawing in some of the brightest talents. Some 600 people are said to be employed in the core hardware part of the project."

Chip staffers in India currently fear losing their jobs and morale is very low as a result of the Whitefield cancellation. Many of the staffers had only been told that Whitefield would be delayed by six to nine months. They learned of the project's end in the press.

The difficulties here show how complex global operations can be with sophisticated products. India hoped to take on more and more of Intel's design work, but such plans look iffy now to say the least.

These disruptions hurt Intel during a very difficult period for the company. It had appeared that Intel managed to correct the chip delay issues and strategy mistakes that plagued it during 2004. Instead, the company this week delayed work on both its Itanium and Xeon lines, giving AMD a chance to take even more market share from the giant.

Intel declined to comment for this story.



My Beloved Rig:

ATHLON 64 FX 55 (will be changed for an X2 3800+)
2X1024 CORSAIR XMX XPERT MODULES
MSI K8N DIAMOND (SLI)
2 MSI 6800 ULTRA (SLI MODE)
OCZ POWERSTREAM 600W PSU
October 31, 2005 2:09:14 AM

Sadly, that article was posted at 6:53PM so I just missed it by 11 minutes.

I agree that an integrated memory controller is preferable, but Intel can still remain competitive despite the delay. By H2 2006, all Intel's processors will have shared L2 caches in addition to direct L1-L1 interconnects. This will eliminate the need to go through the FSB for cache transfers, freeing up bandwidth for other tasks. The use of a shared L2 cache is superior to AMD's current Crossbar implementation, as no L2-to-L2 transfer needs to take place, thereby eliminating the bandwidth and latency issues that exist even within the CPU.

The use of a shared L2 cache also means that data does not need to be duplicated. This is one aspect where Intel's architecture is superior to AMD's. While AMD only uses 4-way associativity in their L2 cache, Intel uses 16-way. This means that more of the cache can be accessed at a time. Therefore, information only needs to appear once in a larger L2 cache and can be accessed by both processors instead of being repeated twice in two smaller caches. While this obviously reduces latency as mentioned before, the key thing is that it lets a shared 4MB L2 cache store more information than a 2x2MB cache with a Crossbar does, by eliminating duplicates. Making L2 cache space go further in holding data helps Intel reduce the strain on FSB bandwidth as L2 cache hits increase, thereby decreasing the need to access the RAM.
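
To make the duplication argument concrete, here's a minimal sketch in Python (the duplicated fraction is an invented number purely for illustration, not a measured figure):

# Effective unique capacity: one shared 4MB L2 vs. two private 2MB L2s
# where some fraction of lines ends up held in both private caches.
def effective_unique_mb(total_mb, split, duplicated_fraction):
    if not split:
        return total_mb                   # shared cache: every line is stored once
    per_core = total_mb / 2
    dup = per_core * duplicated_fraction  # lines repeated in the other core's cache
    return total_mb - dup                 # unique data actually held

for dup_frac in (0.0, 0.25, 0.5):
    shared  = effective_unique_mb(4, split=False, duplicated_fraction=dup_frac)
    private = effective_unique_mb(4, split=True,  duplicated_fraction=dup_frac)
    print(f"{dup_frac:.0%} duplicated: shared 4MB holds {shared:.1f}MB unique, "
          f"2x2MB holds {private:.1f}MB unique")

The more the two cores work on the same data, the bigger the shared cache's effective-capacity advantage becomes in this toy model.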

The elimination of the need to use the FSB for cache-to-cache transfers and the higher hit rates in a shared L2 cache mean that the 10.6GB/s bandwidth of each 1333MHz FSB (of which each processor has its own) just may be sufficient. If it isn't, Intel could simply augment a large 16MB shared L2 cache with a 16MB shared L3 cache. The additional cache would further decrease the need to access the RAM and ensure the FSB bandwidth goes further. With a 65nm process, the die size of such a behemoth is probably about the same as the current 90nm Xeon MPs that integrate 8MB of L3 cache, so it isn't too unreasonable. In addition, with the pipeline reduction of Intel's next-generation architecture and the inherent improvements in Intel's 65nm process, adding more transistors for the L3 cache wouldn't raise the power consumption too much. Certainly, it would still be cooler than Intel's current Xeons. The use of sleep transistors in the 65nm process also means the L3 cache could be shut down when not needed, further alleviating concerns of crazy heat or power consumption.
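
A minimal sketch of how higher cache hit rates stretch a fixed FSB budget; the 8 GB/s of core demand and the hit rates below are made-up illustrative values, not benchmarks:

# How much memory traffic actually reaches the FSB for a given cache hit rate.
FSB_BW = 10.6  # GB/s per 1333MHz FSB (8 bytes x 1333MHz), one per processor

def fsb_traffic(core_demand_gbps, l2_hit, l3_hit=0.0):
    miss_l2 = core_demand_gbps * (1 - l2_hit)  # traffic that misses L2
    return miss_l2 * (1 - l3_hit)              # what still misses L3 and hits the bus

demand = 8.0  # GB/s of loads/stores generated by the cores (assumed)
for l2, l3 in [(0.90, 0.0), (0.95, 0.0), (0.95, 0.5)]:
    t = fsb_traffic(demand, l2, l3)
    print(f"L2 hit {l2:.0%}, L3 hit {l3:.0%}: {t:.2f} GB/s on the FSB "
          f"({t / FSB_BW:.0%} of {FSB_BW} GB/s)")

Under these assumptions, even a modest improvement in hit rate leaves the FSB mostly idle, which is the whole point of piling on cache.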

In regards to the implementation of HyperTransport in Intel processors, seeing that there is a set time to get a new processor to market, whether Intel decides now to design an entirely new processor to accommodate HT or sticks through their own technology's teething problems, the end result would be a similar time-to-market. It would therefore be best to stick with Intel's own technology, especially if they feel that it's potentially superior.

On a side note, I would be interested to see what becomes of Intel's attempt to integrate a northbridge and a voltage regulator into a processor. If it works, this will probably do AMD one better, since latency and bandwidth issues would be almost nonexistent. Of course this would add to the price, but it may work for high-end products since they are normally coupled with Intel's best chipset anyway (i.e. the 955EE Presler and the 975X chipset). I believe Intel was looking for an introduction sometime at the end of the decade. With the use of the 45nm process or something smaller, everything would fit nicely and run cool.

While integrated memory controllers are good for high-end computers, I'm curious as to their effects on low-end systems. Since the RAM needs to be accessed through the processor, I wonder what the effects are on integrated, TurboCache, and HyperMemory graphics cards? The latency will obviously be higher. The problem will now have switched from the processor fighting the add-ons for memory bandwidth to the add-ons fighting the processor. I'm sure this will also have an effect on sound cards, especially with multichannel and digital sound becoming popular, as these require quite a bit of memory. This is probably why Creative has taken it upon themselves to alleviate the problem by integrating large amounts of RAM into their high-end sound cards. RAM-to-graphics-card latency issues associated with integrated memory controllers will probably be more pronounced once Windows Vista is released. Since Microsoft recommends at least 512MB of video cache to run with all the visual effects activated, even the most high-end graphics cards today are lacking. Even a 256MB 7800GTX may need to access the RAM through the PCIe bus, through the chipset, through HT, through the processor, through the memory controller, then through the memory bus and finally to the RAM. How much effect this has compared to going directly through a northbridge based memory controller remains to be seen. Most likely the latency issue is small, but this is the core OS. Even small latencies in the OS will filter through and magnify as applications are run.
October 31, 2005 4:24:56 AM

Quote:
How much effect this has compared to going directly through a northbridge based memory controller remains to be seen

Say what? Your grasp of Intel's roadmap is tenuous at best, but to suggest that the NB memory controller works at the beck and call of the graphics card is just too wrong.
Come back when you have some grasp of reality.
October 31, 2005 2:47:36 PM

Quote:
The use of a shared L2 cache is superior to AMD's current Crossbar implementation, as no L2-to-L2 transfer needs to take place, thereby eliminating the bandwidth and latency issues that exist even within the CPU.

Ohh, please...
I've read many articles that prove you're wrong on this. Indeed, a shared L2 cache is not as efficient as many think it to be (it all depends on the architecture). That's the reason AMD doesn't need a shared L2 cache, thanks to its MOESI implementation.

So, do you believe that large caches are a feature in a processor???

The Opteron/Athlon 64 doesn't need large amounts of L2 or L3 cache because it doesn't suffer from bandwidth limitations as Intel does. When I look at Intel's roadmap, all I can see is processors with 8MB and 16MB of cache. This is a sign that the processor is starving for data. In conclusion, large caches are a sign of a flawed architecture. :D 

My Beloved Rig:

ATHLON 64 FX 55 (will be changed for an X2 3800+)
2X1024 CORSAIR XMX XPERT MODULES
MSI K8N DIAMOND (SLI)
2 MSI 6800 ULTRA (SLI MODE)
OCZ POWERSTREAM 600W PSU
October 31, 2005 3:51:32 PM

Agreed. The old way was much easier. Now ah forget it. :frown:
October 31, 2005 6:14:38 PM

This is a pretty confusing thread, because there seems to be so much bias to filter through in many of the posts and linked articles.

I don't feel an allegiance to either AMD or Intel but I also haven't judged either company by whether I agree with their business practices or not.

It's my observation that Intel has made a few "missteps" in the last couple of years, but I've seen that all companies have their good years and their bad years. Based on available capital, manufacturing capability, and cooperative agreements, it sure looks to me like it's way too early to call the race and figure Intel should throw in the towel any time soon.

AMD has come a long way toward leveling the playing field, but from what I can tell they've got some challenges ahead of them as well. First, as demand continues to grow as expected, their manufacturing and distribution channels will need to expand accordingly. There's a long list of companies who have been tripped up at that stage, so I think it's still too early to declare that AMD's going to pull it off without a hitch of their own.

Near as I can determine, AMD and Intel each offer advantages to their niche customers that give them a leg up on the other. AMD appears to have the upper hand in the center performance-wise, but I think it remains to be seen if the market will reflect that edge. I have plenty of friends who recognize that an AMD platform is faster and runs cooler than the comparable Intel counterpart, yet maintain an adherence to Intel-based products. That kind of loyalty can be pretty tough to erode, so I don't hear the fat lady warming up quite yet.

Keep sharing the articles - this could be interesting for some time.
October 31, 2005 10:16:57 PM

Most of us here know that Intel will be back. I, and many others, look forward to it. After all, the consumer is better served when there is competition.
Being a part of the market, and not a shareholder, I don't care much who is top dog. I do find it "interesting", though, that people would sacrifice their needs to maintain loyalty to a company. This is especially true when that company is as large, and self-serving, as Intel. AMD may not be as large, but a blind loyalty there would be just as misguided.
Personally, I would love to see Intel use their FD-SOI technology. Performance has been advancing a little slowly for my liking.
October 31, 2005 11:18:52 PM

Obviously a NB memory controller doesn't work at the beck and call of the graphics card, but the point is neither does an integrated memory controller. However, in the case of a NB memory controller the graphics card can communicate with the RAM directly through the NB. In the case of an integrated memory controller, at least one more step is added, with the graphics card needing to communicate with the chipset and then the memory controller through the HT link. The fact I'm pointing out is that while an integrated memory controller decreases RAM access latency for the processor, it increases RAM access latency for every other component.
October 31, 2005 11:55:16 PM

You are right, it depends on the architecture, as everything always does. In this case, Intel is designing the architecture around a shared L2 cache. The concept is being introduced in Yonah, which, while building on the Pentium M architecture, is quite different from Dothan. Conroe, Merom, and Woodcrest are almost a completely new architecture, as it combines Netburst with Pentium M. The use of the shared L2 cache will be ingrained in this architecture by adding new algorithms for L1-L1 and L1-L2 transfers. As well, prefetch logic, which is already advanced in the Pentium M and Prescott (one of the few things Prescott is good at), will be further improved. In the end, these features will benefit the processor and reduce FSB bandwidth demands.

In terms of Intel's need for L2 cache, the reason is its inclusive architecture. AMD has long used exclusive caches. As such, AMD processors do not benefit much from having large cache sizes. More specifically, while increases in L1 cache size may show larger performance gains in an exclusive architecture, increases in L2 cache size do not. Conversely, due to its inclusive nature, Intel processors can show larger performance increases from increasing the amount of cache. Specifically, increases in L2 cache size show larger increases in performance. This is why AMD's L1 caches have often been larger than Intel's, since that is what they require. AMD's L2 caches are small simply because they would be of little benefit even if they were to increase. Intel, on the other hand, uses large L2 caches because it allows them to greatly increase performance. The transition from the Prescott 500 series to the 600 series is a bad example, since in this case Intel was too lazy to update the caching algorithms to reduce latency and fully take advantage of the additional cache. However, a great example is the transition from Banias to Dothan.
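
A toy illustration of the inclusive/exclusive point; the cache sizes below are only roughly K8-like and Pentium M-like, chosen for illustration rather than exact die figures:

# Effective unique capacity of an L1+L2 hierarchy under the two policies.
def effective_kb(l1_kb, l2_kb, inclusive):
    # Inclusive: everything in L1 is also kept in L2, so L1 adds nothing unique.
    # Exclusive: L1 and L2 hold disjoint lines, so the capacities add.
    return l2_kb if inclusive else l1_kb + l2_kb

print("Exclusive 128KB L1 + 1024KB L2:", effective_kb(128, 1024, inclusive=False), "KB unique")
print("Inclusive  32KB L1 + 2048KB L2:", effective_kb(32, 2048, inclusive=True), "KB unique")
# In the inclusive case, growing L2 is the only way to grow the hierarchy's unique
# capacity, which matches the claim that Intel gains more from bigger L2 caches.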

The larger caches in Intel processors are directly due to the architecture's ability to use them to increase performance. This isn't even taking into account the benefit of reduced FSB bandwidth demand. That is just another benefit of Intel's inclusive architecture. Looking deeper, Intel's architecture isn't as flawed as it may appear.
October 31, 2005 11:59:35 PM

I'm actually hoping that someone will do a benchmark to investigate. My question from the original post was:

"While integrated memory controllers are good for high-end computers, I'm curious as to their effects on low-end systems. Since the RAM needs to be accessed through the processor, I wonder what the effects are to integrated, TurboCache, and Hypermemory graphics cards? "

From a theoretical standpoint, the latency will be higher. What I'm curious about is what are the actual effects especially since Windows Vista will increase the need for RAM to graphics card transfers.
November 1, 2005 12:50:48 AM

Proved what? He links a Register link that foretells doom, as it always does. Then the thread turns into copy-paste opinion and babble.

The fact of the matter is Intel is in no fiscal trouble, so the doom cannot be so. Perhaps they will trail for many years, it's tough to say; it really doesn't bother me either way since nothing new interests me.

But hey, The Register and Mr. Bull have foreseen the future and know the next 4-7 years of operation for Intel. Let's see if the fortune tellers are correct.

-Jeremy Dach
November 1, 2005 1:19:22 AM

In the end, it appears the loss of an integrated memory controller won't be such a big deal. The technical improvements I've mentioned above help reduce bandwidth problems. Now, Intel is going to implement an independent FSB for each processor, i.e. 4 FSBs in a 4-way server. The Inquirer now feels "a lot more hopeful on the raw numbers side" in terms of bandwidth availability. They do note that costs will increase, although "this is far from a killer".

http://www.theinquirer.net/?article=27334

Concerns about Intel's processors being bandwidth-starved for the next few years are now a moot point, because they simply won't be.
November 1, 2005 1:49:51 AM

Quote:
Concerns about Intel's processors being bandwidth-starved for the next few years are now a moot point, because they simply won't be

Yes, they will be. Intel will still have the bandwidth problem no matter how much they increase their FSB (don't even mention 4 or more processors in a multiprocessor system).

Just to burst your bubble, AMD will include more memory controllers per core with Socket 1207. PCIe will be embedded on the processor as well (this means that graphics cards will no longer be bottlenecked by the processor).

By the time Intel releases their flagship processor (2007-2008), sadly (for Intel), they'll be competing with K10, which will be a different beast from current dual-core Opteron processors. ;) 

ATHLON 64 FX 55 (will be changed for an X2 ?? )
2X1024 CORSAIR XMX XPERT MODULES
MSI K8N DIAMOND (SLI)
2 MSI 6800 ULTRA (SLI MODE)
OCZ POWERSTREAM 600W PSU
November 1, 2005 1:55:57 AM

And here you can read some info that backs up all that I've said: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=256...

Until now, all your statements have just been fairy tales for youngsters. Without any links or any kind of proof, all your statements are just BS. You can fool the others, but you can't fool me. :) 

ATHLON 64 FX 55 (will be changed for an X2 ?? )
2X1024 CORSAIR XMX XPERT MODULES
MSI K8N DIAMOND (SLI)
2 MSI 6800 ULTRA (SLI MODE)
OCZ POWERSTREAM 600W PSU
November 1, 2005 1:58:19 AM

...And talking about coprocessors (if you've read the article from Anand), this might be AMD's new approach for their upcoming processors: http://www.sci-tech-today.com/story.xhtml?story_id=0030...

I'll be really sorry for Itanium if this ever happens in the not-so-distant future. This will put the final nail in Intel's coffin (just my personal thoughts).

ATHLON 64 FX 55 (will be changed for an X2 ?? )
2X1024 CORSAIR XMX XPERT MODULES
MSI K8N DIAMOND (SLI)
2 MSI 6800 ULTRA (SLI MODE)
OCZ POWERSTREAM 600W PSU
Edited by Bullshitter on 10/31/05 11:01 PM.
November 1, 2005 1:59:32 AM

In the case of 4 or more processors in a multiprocessor system, each processor will have its own dedicated independent FSB to supply enough bandwidth.

I believe Intel is already working on a counter to AMD's integrated PCIe controller. At the recent IDF, Intel demonstrated a working sample of a processor with an integrated northbridge and voltage regulator.

http://www.anandtech.com/tradeshows/showdoc.aspx?i=2511
November 1, 2005 2:10:07 AM

I think you don't get it, dude:

The FSB is outdated!!!

I believe it's about 20+ years old.

Intel knows it, and they are working really hard to come up with something new.

The easiest way for them is to adopt HTT and include memory controllers on the processors. Anyhow, they're too stubborn, since they don't like the idea of using technology from others. That's why they're now suffering the consequences.

Even Dell is suffering from Intel's problems (we all know that Dell is Intel's whore). Read here: http://www.theinquirer.net/?article=27348

ATHLON 64 FX 55 (will be changed for an X2 ?? )
2X1024 CORSAIR XMX XPERT MODULES
MSI K8N DIAMOND (SLI)
2 MSI 6800 ULTRA (SLI MODE)
OCZ POWERSTREAM 600W PSU
November 1, 2005 2:37:11 AM

I don't doubt that integrated memory controllers are the future. I'm just continuing to respond to the original topic of this thread, which was the article from The Register indicating that Intel will be bandwidth-starved without an integrated memory controller. I'm just pointing out that this isn't the case, as Intel will implement individual FSBs for each processor. Maybe not as good as integrated memory controllers, but it will provide enough bandwidth, in The Inquirer's view.

http://www.theinquirer.net/?article=27334
November 1, 2005 2:38:54 AM

Quote:
In the case of 4 or more processors in a multiprocessor system, each processor will have its own dedicated independent FSB to supply enough bandwidth

Yes, well, that is their way of dealing with Opteron's chip-to-chip HTT link. Sad, isn't it?
You are aware that the present Xeon latency is double what the Opteron's is?
You do know that the Opteron has lower latency to RAM than the Xeons have to L3 cache, right?
Before you go spewing Intel PR crap around here, or make up imaginary flaws in AMD chips, take the time to understand what you are talking about.
November 1, 2005 3:22:05 AM

If I'm not wrong, AMD followed this approach with the Athlon MP even before Intel ever thought about it.

Dual independent buses have their advantages and issues; that's the reason AMD didn't continue with this approach and instead went with integrating the memory controller on the processor itself.

With dual independent buses, each core (or processor) has its own bus, but the latency STILL remains since the processor has to communicate with the northbridge before it gets to main memory. This is great for a dual setup, but things start to get nasty when you go to 4, 8, or 16 processors. In contrast, AMD's Opteron can be integrated in a system with up to 32 sockets (thanks to Horus: http://www.theinquirer.net/?article=26948), and this will be IMPOSSIBLE for Intel and their current FSB limitations (please read the article).

As I said before, the FSB is last-century technology.

WOULD YOU LIKE TO BE A WELL FED SLAVE OR A HUNGRY FREE MAN?
November 1, 2005 4:58:36 AM

In regards to the L3 cache latencies, I haven't heard that before. Although in truth I'm not that surprised. Intel has a tendency of slapping cache on processors without tweaking their caching algorithms to take full advantage of it. Blunt addition of cache would also result in increased latency. I'd be interested in seeing those benchmarks.

In the case of understanding what I'm talking about, what are you referring to? Is it my analysis of the benefits of a shared L2 cache?

http://yara.ecn.purdue.edu/~pplinux/ppsmp.html

This might be a bit dated and based on Linux, but the process of how shared cache works is still the same. The benefits are clear even back then:

"The good news is that many parallel programs might actually benefit from the shared cache because if both processors will want to access the same line from shared memory, only one had to fetch it into cache and contention for the bus is averted. The lack of processor affinity also causes less damage with a shared L2 cache. Thus, for parallel programs, it isn't really clear that sharing L2 cache is as harmful as one might expect.
Preliminary experience with our dual Pentium shared 256K cache system shows quite a wide range of performance depending on the level of kernel activity required. At worst, we see only about 1.2x speedup. However, we also have seen up to 2.1x speedup, which suggests that compute-intensive SPMD-style code really does profit from the "shared fetch" effect."

The concept behind the benefits of shared L2 cache is also explained by X-Bit Labs.

http://www.xbitlabs.com/articles/editorial/print/idf-f2...

"Mobile dual-core processor manufactured with 65nm technology aka Yonah, which I have already talked about during the Napa platform discussion, will feature Intel Smart cache. Smart cache means shared cache between the two cores. Since we have two cores and we have a single bus, there will also be shared single bus interface. This way both cores can share the same copy of data from the L2 cache. Besides that, the L2 and data cache unit feature improved pre-fetches, i.e. we can do pre-fetches on the per-thread basis, thus ensuring better bus utilization. It also has bandwidth adaptation buffer, i.e. each core takes 4 cycles to adapt.

The difference between independent and shared caches is the following. In case of independent caches the data is transferred from one core to another via the FSB. In case of shared caches the data is transferred directly between the caches, you avoid the bus traffic and synchronization time to get on the bus. This is important for multi-processing systems, because this way you reduce the number of bus cycles involved."

While the article is on the Intel IDF, the blurb above is not Intel marketing; it is the opinion of the X-Bit Labs author of this editorial piece. As well, the improved data-fetching techniques I mentioned, which would reduce FSB bandwidth requirements, are also mentioned by X-Bit Labs. Clearly, as I mentioned before, shared L2 caches can greatly benefit multicore processors, which is why Intel is implementing them.

The stuff I mentioned about inclusive and exclusive caching and their differences is just very basic background information.

http://www.cpuid.org/reviews/K8/index.php

"The exclusive relationship is the most flexible, as it allows lot of different configurations in keeping a good performance index. The drawback is that the performance does not increase very much with the L2 size. The inclusive relationship can only be chosen for performance purpose, knowing for example that increasing the L2 will create a performance boost."

That is why I said AMD doesn't have large L2 caches, because it wouldn't benefit them much anyways. On the other hand, Intel processors can have large performance gains from more L2 cache.

I hope these external sources satisfy you that there is basis to what I am saying.
November 1, 2005 6:30:59 AM

Quote:
The fact I'm pointing out is that while an integrated memory controller decreases RAM access latency for the processor, it increases RAM access latency for every other component

So now you have a NB that sets memory addressing? Great, too bad your sound card just screwed your game by replacing AI functions with the soundtrack, 'cause the NB said it could.
And now we are developing 64-bit NBs so we can run 32-bit chips in long mode.
All memory calls have to go through the chip, period. The ODMC improves latency on all memory calls.
November 1, 2005 6:49:40 AM

Quote:
Intel was too lazy to update the caching algorithms to reduce latency

Algorithms can't change natural law. The cache cannot have a new charge applied before the earlier charge is gone. That would be a short circuit.
November 1, 2005 6:53:13 AM

Quote:
The Register indicating that Intel will be bandwidth-starved without an integrated memory controller.

Wrong. The Xeons will be latency-burdened, not bandwidth-starved. The number of FSBs won't affect latencies.
November 1, 2005 7:02:47 AM

Quote:
Is it my analysis of the benefits of a shared L2 cache?

What analysis? Analysis requires understanding, which you just lack.
Example: the shared cache requires a larger cache, which increases latencies. The shared cache requires additional "optimization" to control cache access. While the shared cache has advantages when both cores require the same info in cache, this is seldom the case in real life. The shared cache will need core time and interconnect time to function. Duh, that's going to require more prefetch to work, and eat more bandwidth.
Look, if you don't understand what you are reading, ask; someone will be happy to explain. Don't come around spewing Intel disinformation and expect people to buy it.
November 1, 2005 9:37:06 AM

Quote:
Obviously a NB memory controller doesn't work at the beck and call of the graphics card, but the point is neither does an integrated memory controller. However, in the case of a NB memory controller the graphics card can communicate with the RAM directly through the NB. In the case of an integrated memory controller, at least one more step is added, with the graphics card needing to communicate with the chipset and then the memory controller through the HT link. The fact I'm pointing out is that while an integrated memory controller decreases RAM access latency for the processor, it increases RAM access latency for every other component.

<Obviously a NB memory controller doesn't
<work at the beck and call of the graphics
<card, but the point is neither does an
<integrated memory controller.

Would you please clarify the above sentence? I mean, what the hell does it mean?
And I don't mean just the word "beck".

It seems we have another BS generator in here. Oh my.

The traffic from RAM to the vidcard is mostly one way, and lots of it must be preprocessed by the CPU anyway. So, using the NB as the memory controller between RAM and AGP doesn't gain you diddly sh|t in performance.

Why do you think new vidcards are holding 256MB of DDR2 on board? Tell me that.
I can give you a hint: it's not a marketing gimmick a la Intel, it has a reason.
November 1, 2005 3:52:24 PM

Quote:
I don't doubt that integrated memory controllers are the future. I'm just continuing to respond to the original topic of this thread, which was the article from The Register indicating that Intel will be bandwidth-starved without an integrated memory controller. I'm just pointing out that this isn't the case, as Intel will implement individual FSBs for each processor.

Individual FSBs won't do anything except alleviate the bus loading issue. All that does is let each Xeon socket's FSB bandwidth remain on par with the desktop socket's FSB. It's better than nothing, but hardly sufficient.

As long as all memory is hanging off a single memory controller, all cores will still be contending for memory bandwidth. Slapping four or more DDR2 memory channels on the board would help, except it's generally not feasible to route traces for four high-frequency memory channels hanging off one controller chip.

"You have been sh<font color=black>it</font color=black> upon by a grue."
November 1, 2005 4:44:58 PM

No need to go deep into technical details such as the virtues of on-chip memory controllers, latencies, bandwidth limitations, etc. As far as I am concerned, these are not the reasons for but the results of Intel's fall.

I believe the main reason is its poor and unfortunate management.

What do you expect when a company which sells technology is managed by technology-illiterate marketing and finance people?

Misleading customers with initially appealing but ultimately meaningless high-MHz specs and bullying manufacturers into using their poorer-quality chips can only help so much, and only lasts short term.

Once there is a replacement in the market, it is only a matter of time before the company starts to lose sales and power.

In other sectors, companies try to recover after major mismanagement by downsizing, cost cutting or re-structuring. However, in the CPU manufacturing business where the future of a company strongly depends on a very long-term and expensive R&D, it is close to impossible to reverse the course and swallow losses. That is to say if you play the wrong card, it is usually irrevocable.

And even Intel knows that they did exactly that.

Intel, in my humble opinion, is at the beginning of the end of its career in the tech business. It is just a matter of time. When everything is over, Intel will take its honorable place next to IBM, Apple, SGI, Commodore and other known names of computing history, and might or might not continue to exist as an insignificant manufacturer.
November 2, 2005 2:38:09 AM

It looks like someone forgot to tell Intel they were on the way out.

http://news.yahoo.com/s/nm/20051102/tc_nm/intel_dc
Quote:
SAN FRANCISCO (Reuters) - Intel Corp. (Nasdaq:INTC - news) has restarted a factory after spending $2 billion to retool it with the latest technologies that will let it produce more powerful chips more efficiently and at a lower cost.

The plant, known as Fab 12, is Intel's second that has begun volume production combining wafers that are 300 millimeters in diameter, about the size of a dinner plate, with a 65 nanometer etching process.

Intel, the world's top chipmaker, is moving to 65 nanometer technology from 90 nanometer.

The smaller etching process means Intel can make its chips smaller and more powerful by squeezing more transistors on them, while larger wafer size means it can get more chips out of each wafer.

"It's back running production volume and over the next year that will ramp up," Bob Baker, Intel's vice president of manufacturing, told Reuters.

The factory, which was taken offline a year and a half ago, would make "almost all" of Intel's microprocessor line-up, Baker said.

The Chandler, Ariz.-based plant had about 1,000 employees, of which about 800 had been sent to work and train at other Intel plants in Oregon, New Mexico and Ireland during the upgrade, Baker said.

November 2, 2005 10:26:34 AM

Quote:
Intel, in my humble opinion

If you have an opinion you aren't humble... therefore "humble opinion" is an oxymoron... whereas most people here are just morons.
November 2, 2005 10:59:23 AM

Funny enough, I thought of including Microsoft in the list and changed my mind afterwards.

Microsoft was built on another company's Intel-like arrogance and ignorance, IBM's back then. For as long as Willy the Gates is around, I don't think Microsoft will fail, as he knows very well what he is doing. I don't believe for one second that he will make the same mistake which made him and Microsoft what they are today.

On the other hand, when he decides to enjoy his millions and retire, I am pretty sure all those PowerPoint-presentation-hungry bshtng bureaucrats with no business sense who are lurking in the company today will step up and turn Microsoft into another loser.

But we have time to watch (and enjoy - most of us) it happening.
November 2, 2005 5:02:58 PM

I'm not sure why you're all picking on ltcommander_data. He makes some fair points. I think you folks are just trying to pick a fight with him or something, because so far I haven't seen anything that has deserved the treatment he's gotten.

1) For example, he wonders, and quite fairly I might add, if the on-die memory controller adds latency to things like onboard graphics. The reasoning is rather obvious. The ODMC adds extra steps into the process.

NB: PCI DMA -> NB -> MEM -> NB -> PCI DMA
OD: PCI DMA -> NB -> CPU -> MEM -> CPU -> NB -> PCI DMA

The question is not whether the ODMC adds more steps, because it clearly does. The question is: do these extra steps actually cost anything in terms of latency? Given the PCI bus speed, I doubt it will matter there for DMA access. For faster buses however (such as graphics) it may. Then again, it may not. Some tests would be nice to prove this one way or the other.
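
If someone wanted to model that question before benchmarking it, a minimal sketch might look like the following; the per-hop latencies are pure placeholders, not measurements, and only a real test would settle it:

# Count the hops a device-to-RAM access takes in each topology and sum assumed
# per-hop latencies. The numbers are placeholders chosen only for illustration.
HOP_NS = {"PCI DMA": 50, "NB": 20, "CPU": 15, "MEM": 60}

nb_path = ["PCI DMA", "NB", "MEM", "NB", "PCI DMA"]
od_path = ["PCI DMA", "NB", "CPU", "MEM", "CPU", "NB", "PCI DMA"]

def round_trip_ns(path):
    return sum(HOP_NS[hop] for hop in path)

print("Northbridge controller :", round_trip_ns(nb_path), "ns")
print("On-die controller      :", round_trip_ns(od_path), "ns")
print("Extra hops cost        :", round_trip_ns(od_path) - round_trip_ns(nb_path), "ns")

Whether those extra hops are noise or a real penalty depends entirely on the real hop costs, which is exactly why tests would be nice.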

He hasn't said that AMD suxors. He hasn't said that Intel rules. He's merely observed the extra steps and wonders what impact they may actually have. As we've seen with how early AMD HTT tests affected AGP, sometimes these little extra steps that seem harmless aren't so insignificant after all. Even the two versions of PCI buses running on the same system have caused some weird latency problems. Not that these kinks can't be worked out in the end, but they're design considerations that should be looked at closely.

Though, in truth, this is all rather a moot point IMHO since people with onboard graphics aren't exactly concerned with performance anyway, but still an interesting technical query.

2) As for his talk about memory bandwidth and latency issues, while I don't argue that the ODMC has lower latency and is thus better, I think it's hard to argue that Intel will actually suffer if what he says is true about Intel moving to a quad-channel architecture. This would not only dramatically increase the bandwidth available for extra cores, but also decrease the memory latency in the same manner that dual-channel memory did. Sure it'll cost. Sure, it'll be a pain to implement on a motherboard. (Which will in turn cost yet more.) But Intel is hardly known for being cheap. **LOL**

Anywho, aside from the possibly unfair ridicule of ltcommander_data...

Personally, I think Xeon's biggest problem in competition will be that it'll still probably use a slower FSB than an equal P4, and thus be crushed by Opteron there ... same as always really. AMD concentrates on making their server CPUs their best and their desktop CPUs their second best. Intel concentrates on making their server CPUs their second best and their desktop CPUs their best. Intel can't compete soundly against AMD in the server market until they change that one simple strategy to match AMD's.

I also think AMD's lower latency from ODMC benefits their core well because their prefetch isn't stunning. Intel's prefetch however, being fairly good, means that Intel will gain less from an ODMC. That doesn't mean that Intel won't benefit, but so long as Intel sticks to the Netburst architecture, the gain may just not be worth the resources to implement. Either way, Intel certainly won't gain as much as AMD did when (if) they add ODMC to Netburst, so it's not really fair to say that this is holding Intel back all that much.

I think it's a shame that Intel's replacement of Netburst isn't going better. Perhaps they should have just stuck with the P3 all along, but even then, Netburst was an interesting attempt. It likely will fail in the long run, but even then I have my doubts that the failure is in Netburst itself. If you look at all of the hacks Intel put into Scotty to allow it to scale higher (which as we see from Scotty's thermal problems was a stupid decision to make) then you see the death of the P4. However, had Intel actually worked on improving Northwood and just used SoI like AMD did, we'd probably see Netburst thriving well for years to come.

Netburst was about redesigning the CPU to be more virtual. It had a lot of possibility. But the architecture required such a shat load of logic and transistors to overcome the performance losses from its complexities that it was barely better than a simple design, because of the power usage and thermal output. Sure, one could just stick to that simple design, like AMD did. (Hell, like Intel even did for their mobile segment.) But there were theoretical design benefits that even Northwood never got to take advantage of because Intel just never took the architecture far enough. If you look at the original specs for Netburst before Willy's cut-down version, you'll see that Intel could have taken it a lot further. Though I think Transmeta was actually heading in a better direction. Had they Intel's resources and ambition, I doubt that either AMD or Intel would even exist in the CPU market today.

But anyway, I think that while Intel has seen better days, I'm really not seeing anything here that suggests things will change in any significant way. It'll still remain the same situation as always as far as I can tell. AMD won't trounce Intel, and Intel won't trounce AMD. It's the same old stalemate. Technology improves, but still, there's really nothing new. It's kind of boring actually. :\

Of course, Intel's biggest downfall is their idiot managers making all manner of stupid decisions. And AMD's biggest downfall is their fear to succeed. **ROFL** Neither company will get anywhere with what they have unless they can overcome themselves first. The race to crush the other won't be won by whose chips are better, but by which company can overcome their own internal problems first.

 ∩_∩
 Ω Ω
(=¥=) - Cedrik says that anyone who groups M$ with Commodore will suffer
_Ū˘Ū_ a fate similar to Bunascii.
:evil:  یί∫υєг ρђœŋίχ :evil: 
The Devil is in my '98 Mercury Sable!
@ 201K miles!
November 2, 2005 11:09:19 PM

You must be talking to me. Truth is, I hate it when someone comes spewing a corporate line when they don't really have a grasp of what's being said.
If everybody and their uncle (AGP, PCI, PCI Express) starts addressing memory, the chip is going to feel dunsel.
Do you really think 4 FSBs for 4 cores are able to compete with the chip-to-chip HTT? 1 FSB for 1 chip sure seems to cause memory problems now.
Since Intel went to all the trouble of developing fully depleted silicon on insulator, I too wish they would make use of it. The Prescott may need a 400/1600 FSB (or HTT) to be in its glory, but I think that may be possible with FD-SOI. Then people's opinion of Scotty might change. What is the point of DDR2 if it can't make that happen? (Well, aside from lower power.)
November 3, 2005 1:54:24 AM

This article can back up what I've said about Xeons needing more cache to solve bandwidth problems: http://arstechnica.com/news.ars/post/20051011-5416.html

Quote:
What all of this bandwidth talk means for our Opteron vs. dual Xeon comparison is that the dual Xeon needs a larger L2 to make up for the fact that it has much less bandwidth and (even more importantly) higher memory read latencies than the Opteron.

Hannibal knows his stuff. ;) 

WOULD YOU LIKE TO BE A WELL FED SLAVE OR A HUNGRY FREE MAN?
November 3, 2005 12:47:54 PM

Quote:
You must be talking to me.

Am I? I dunno. I was just talking in general. I didn't really tally up who said what, just went with the general impression I got. But if you feel so, then it must be at least partially true.

Quote:
Truth is, I hate it when someone comes spewing a corporate line when they don't really have a grasp of what's being said.

Truth is, that's just your opinion that it's corporate trash. Just because someone else has a different opinion doesn't mean you should treat them like Bunascii. Maybe he actually believes in it. I know that I can see some merit in it. It's not what I would do were I running Intel, but then that's true for just about all of Intel's CPU decisions lately.

Quote:
Do you really think 4 FSBs for 4 cores are able to compete with the chip-to-chip HTT? 1 FSB for 1 chip sure seems to cause memory problems now.

First of all, most of the problems Intel CPUs are having these days are not related to the memory at all. They're related to Intel doing some Very Bad Things to Scotty (and all CPUs thereafter) in the hopes that they'd scale to much higher GHz. (Which they aren't scaling to because of power/heat problems.)

Second of all, yeah, I do think one FSB per core would definitely compete with HTT. What are four ODMCs if not four FSBs? It's tit for tat, just from Intel's standpoint instead of AMD's. The only major missing difference is the chip-to-chip HTT, which Intel has their own take on anyway.

Really, it should perform almost as well, just as a single core with a single memory system from Intel performs almost as well as AMD's.

And Intel is far more familiar with that than they are with their own HTT variant. Aside from being slightly less efficient, its only real flaw is that it'll cost a butt load for Intel's customers. But then, this is Intel. So frankly, it makes perfect sense. From a technical standpoint it's kind of silly. From Intel's business standpoint it makes sense.

Quote:
Since Intel went to all the trouble of developing fully depleted silicon on insulator, I too wish they would make use of it. The Prescott may need a 400/1600 FSB (or HTT) to be in its glory, but I think that may be possible with FD-SOI. Then people's opinion of Scotty might change.

I agree. Intel should use SOI by now. I don't get why they don't, other than sheer perversity. Scotty still won't be in any glory though, even with SOI. Oh, sure, it'll use less power. That'll only make it almost as good as a NWC then. Is that really something to be proud of, to finally almost match what is now an ancient product? If anything, its only real advantage would be in allowing Intel to clock significantly higher to finally break their performance slump. But even then... I wish Intel had just never moved Netburst in the direction of Scotty. That was such a bad move. It's going to be hard to overcome that short of taking Netburst back to the drawing board. I really wish Intel would just do a good job of desktopizing their PM and get rid of Netburst altogether anyway, but short of that, they could at least stop making Netburst worse than it already was at first launch.


:evil:  یί∫υєг ρђœŋίχ :evil: 
The Devil is in my '98 Mercury Sable!
@ 201K miles!
November 3, 2005 6:50:12 PM

Quote:
Second of all, yeah, I do think one FSB per core would definitely compete with HTT. What are four ODMCs if not four FSBs? It's tit for tat, just from Intel's standpoint instead of AMD's. The only major missing difference is the chip-to-chip HTT, which Intel has their own take on anyway.

Really, it should perform almost as well, just as a single core with a single memory system from Intel performs almost as well as AMD's.

Um, no.

The major missing difference is that the Intel solution still leaves all cores competing for memory bandwidth. Four FSBs isn't enough to compete with HTT unless each FSB gets its own memory bank--or, alternatively, if there's enough memory bandwidth hanging off the Northbridge to fully saturate all four FSBs at once.
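
To put rough numbers on that contention argument, here's a minimal sketch; the quad-channel 667MHz memory figure is an assumption carried over from earlier in the thread, not a confirmed platform spec:

# Aggregate FSB demand vs. what a single Northbridge memory controller can feed.
def gbps(mhz, bytes_wide=8, channels=1):
    return mhz * bytes_wide * channels / 1000.0

fsb_demand = 4 * gbps(1333)          # four independent 1333MHz FSBs          -> ~42.7 GB/s
nb_supply  = gbps(667, channels=4)   # quad-channel 667MHz memory (assumed)   -> ~21.3 GB/s

print(f"Peak FSB demand : {fsb_demand:.1f} GB/s")
print(f"NB memory supply: {nb_supply:.1f} GB/s")
print(f"Shortfall when all four sockets pull at once: {fsb_demand - nb_supply:.1f} GB/s")

Under those assumptions, the four FSBs can ask for roughly twice what the single memory controller can deliver, which is the contention being described here.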

Never mind the extra latency Intel's solution introduces for chip-to-chip traffic. Data that could efficiently travel by broadcast over a shared FSB (such as the all-important cache-coherency data) will now have to be routed through the NB. AMD's ccNUMA sidesteps that problem by using a classic mesh topology.

"You have been sh<font color=black>it</font color=black> upon by a grue."