
AMD's 65nm is perfect!

December 7, 2005 2:27:39 PM

I think AMD is going in the right direction. Intel thinks that going to lower watts and adding a ton more cores will help them sell more CPUs, but I still think AMD's new CPUs will dominate Intel's new line-up. I think AMD is letting Intel release their news and get going on all their new chips before they bushwhack them again with their new CPUs. All I have to say is FX-60 and AMD Athlon X2 5000+!
December 7, 2005 2:37:54 PM

Just because they've worked out a new process technology doesn't mean that they're tooled up for it at the FAB level. (Or for that matter have even finished the core redesigns yet.) :?
December 7, 2005 2:39:46 PM

On paper everything is perfect. If perfection were only required on paper, Intel's P4s would excel beyond 5GHz.
December 7, 2005 2:46:03 PM

Quote:
On paper everything is perfect. If perfection were only required on paper, Intel's P4s would excel beyond 5GHz.

5GHz with diminishing returns!
December 7, 2005 2:59:01 PM

Actually, if Intel could get Scotty to speed up, it'd be with increasing results.

Intel threw in a nasty cache latency so that it'd scale high. But for other reasons it didn't scale high, so the cache latency is really holding it back at low speeds. If they could get to higher speeds, that horrid cache latency wouldn't be as noticeable, as it'd finally be hitting the sweet spot where Intel intended it to be, instead of dragging down the low end like it is now.
December 7, 2005 3:09:50 PM

If they went 5GHz they would need a jet turbine cooler shipped with every CPU.

"Hey guys I got my new 5 GHz CPU for $40,000 ... man oh man does it scream"
December 7, 2005 4:00:45 PM

Intel still has the advantage. They've already migrated over to 65nm in Oregon and I believe they've started in Dresden.

Intel is on the attack right now.

-mpjesse
December 7, 2005 4:07:42 PM

You have to understand: just because they have switched to 65nm, or even 45nm, doesn't mean they will have the better CPU.
December 7, 2005 4:50:48 PM

No, but their new process is looking very good right from the beginning.

In fact, Intel was looking up and up until they switched to the 90nm process. So maybe their suckiness was only for that process. :o 
December 8, 2005 1:57:25 AM

Quote:
No, but their new process is looking very good right from the beginning.

In fact, Intel was looking up and up until they switched to the 90nm process. So maybe their suckiness was only for that process. :o 


You say that now...wait till we see real world results...

Maybe they'll be as worthless as the Xbox 360...who knows?
December 8, 2005 2:06:45 AM

Remember what happened with the Prescott? Everyone on Intel's side said Prescott was going to kick butt, and it turned out to be a heater. I almost went for a Prescott laptop versus a Northwood laptop.
December 8, 2005 2:07:19 AM

Worthless as the xbox 360? WTF.

I can't wait till they hit the 65nm process, cheap dual cores for all.
December 8, 2005 4:42:43 AM

Quote:
In fact, Intel was looking up and up until they switched to the 90nm process. So maybe their suckiness was only for that process.

That is funny. A Prescott on any process is still a big leak. Well, maybe not on FD-SOI, but even then, at 65nm, it would still leak badly. Just too many interconnects, with too many dissimilar charges next to each other.
As for AMD and 65nm, it was my understanding that fab 39 was designed and built for 300mm wafers and 65nm. I don't remember hearing that that had changed. I think it qualified at 65nm during the summer, though AMD has been running additional tests. The above press release may explain some of that.
I have also heard that AMD has an automation program that allows them to transition seamlessly between 90nm and 65nm. This also suggests that AMD is already 65nm capable. Since they have offered the tech to SMC, or one of the other chip giants, I'd guess it must work.
December 8, 2005 5:39:42 PM

Quote:
Intel still has the advantage. They've already migrated over to 65nm in Oregon and I believe they've started in Dresden.

No flame - just a correction - Intel has no FAB (or any other presence that I see listed) in Dresden. That's that other company... And we've got at least 3 (or is it 4) FABs ramped, or ramping, on the 65nm 300mm process.

* Not speaking for Intel Corp *
December 8, 2005 6:08:56 PM

I think they are both going in the right direction for each of them.

AMD doesn't have the fab capabilities of Intel, so in order to get their chips out they need to partner with other companies while they build up a new fab or two.

Intel is a fab powerhouse; unfortunately, that becomes a problem with CPU sales declining both market-wide and from AMD taking a chunk. I think Intel should sell manufacturing to other companies, since they can supply more than their own chips.
December 8, 2005 11:22:50 PM

If AMD is already 65nm capable, what's stopping them from switching over? I think there are some finishing touches they need to complete. In any case, 65nm is not going to give Intel any advantages except within its own product line. AMD doesn't really need 65nm right now, as it's fine with 90nm, and it will still be fine when Presler comes out, although it would be cheaper to make chips at 65nm. AMD is saving up its 65nm recipe for Conroe; an M2 at 65nm will likely be at least 3GHz dual-core, a scary thought, but that's most likely what they'll need to compete with Conroe.
December 9, 2005 12:40:32 AM

Quote:
If AMD is already 65nm capable, what's stopping them from switching over?

Aside from what you mentioned, fab 39 is not their production facility yet. That task still rests with its next-door neighbour.
December 9, 2005 12:51:02 AM

Quote:
No, but their new process is looking very good right from the beginning.

In fact, Intel was looking up and up until they switched to the 90nm process. So maybe their suckiness was only for that process. :o 

I remember when Prescott was supposed to be king shit. Spud at the time had listed an entire page of improvements off the top of his head (he had really high hopes), then a few days before release the pipeline-increase info was leaked, and then the final product arrived, only for reviewers to find that it was actually slower overall and especially bad in games. :lol: 
At least all of the Itanium cores have been a shining success... wait a second... :lol: 
*Sits back waiting for endyen to post with further pro-AMD propaganda while Xeon writes a 10-page essay about how he doesn't care about computers anymore*
December 9, 2005 1:00:34 AM

Well, I meant more: why doesn't AMD just open fab 36 with 65nm production? There must be a reason.
December 9, 2005 1:13:36 AM

It's easier and safer for them to just follow the plan. At this point the plan does not even include shifting fab 36 to 65 nanos. It may be used for 90 nano parts, perhaps even chipsets, for the near future.
December 9, 2005 1:23:36 AM

slvr_phoenix may have blamed the 90nm process, but the failure of the P4 was just due to the Prescott architecture, specifically the pipeline increase. A lot of people have negative things to say about the 90nm process, but that has always been in the context of Prescott. There really isn't anything wrong with the 90nm process when compared to the 130nm or any other Intel process. Just looking at the Pentium M, the 90nm process allowed Dothan with a 400MHz FSB to increase clock speed from 1.7GHz to 2.1GHz and double the cache, while still decreasing the TDP from 24.5W to 21W compared to Banias. The battery life for Dothan was virtually identical to that of Banias despite the higher clock speeds and the increase in power-consuming cache. Intel's 90nm process compares quite favourably to AMD's, considering a Dothan with a 533MHz FSB has similar battery life to the Turion 64, despite the Turion's SOI advantage. Granted, Dothan may have a few more power optimizations than Turion, but the 90nm process certainly isn't working to its disadvantage.

It's funny you mentioned how Intel processors are bad for games. Extremetech actually did some research to discover the reason behind that.

http://www.extremetech.com/article2/0,1697,1895945,00.a...

It seems that game developers, in a rush to get the game out of the door, usually fail to optimize the code for the latest instruction sets. It seems that many games are only optimized for the Pentium III generation which means only SSE support. This means the code penalizes the Pentium IV by not taking into account its higher latencies or its support for SSE2 or SSE3. Now people may feel that optimizing code for the Pentium IV would penalize AMD, but that may not be the case.

"This is unlikely to penalize AMD specifically, though unrolling loops and other P4-specific operations might possibly penalize the Athlon 64, but it's hard to know without actually trying it. But using SSE/SSE2 shouldn't adversely affect AMD. Even Fred Weber, AMD's former chief technology officer, acknowledged that SIMD was the way to go with floating point as we move into the future."

It seems that AMD has no problems with game developers optimizing code for the Pentium IV generation as AMD processors likewise support SSE, SSE2, and SSE3.

What's even more interesting is that in many cases, game developers don't even activate support for SSE as even AMD recommends. They only use FPU code which runs slower on the Pentium IV.

If game developers spent a bit more time to optimize their code for SSE, SSE2, and SSE3 as AMD's Weber suggests, Intel's processors would see better performance in games. It probably won't be enough to dethrone AMD at the very top, but it offers free performance improvements to everyone by making full use of the processor whether AMD or Intel.
December 9, 2005 1:35:21 AM

Quote:


http://www.extremetech.com/article2/0,1697,1895945,00.a...

It seems that game developers, in a rush to get the game out of the door, usually fail to optimize the code for the latest instruction sets. It seems that many games are only optimized for the Pentium III generation which means only SSE support. This means the code penalizes the Pentium IV by not taking into account its higher latencies or its support for SSE2 or SSE3. Now people may feel that optimizing code for the Pentium IV would penalize AMD, but that may not be the case.

"This is unlikely to penalize AMD specifically, though unrolling loops and other P4-specific operations might possibly penalize the Athlon 64, but it's hard to know without actually trying it. But using SSE/SSE2 shouldn't adversely affect AMD. Even Fred Weber, AMD's former chief technology officer, acknowledged that SIMD was the way to go with floating point as we move into the future."

It seems that AMD has no problems with game developers optimizing code for the Pentium IV generation as AMD processors likewise support SSE, SSE2, and SSE3.

What's even more interesting is that in many cases, game developers don't even activate support for SSE as even AMD recommends. They only use FPU code which runs slower on the Pentium IV.

If game developers spent a bit more time to optimize their code for SSE, SSE2, and SSE3 as AMD's Weber suggests, Intel's processors would see better performance in games. It probably won't be enough to dethrone AMD at the very top, but it offers free performance improvements to everyone by making full use of the processor whether AMD or Intel.

Yes, the possibilities are endless, but unfortunately money and time are not, and game developers need to get this stuff out the door on time and on budget. Most serious gamers are AMD users anyway, so why bother?
December 9, 2005 2:00:40 AM

I know there are always time and money limitations, but sometimes they are just taken too seriously. When developers are too conscious of deadlines, rushed games like the Battlefield series arise, where the game is buggy and the patches only make it worse. Besides, in the case of SSE optimizations, all a developer needs to do is click a checkmark before pressing compile. I can understand a game developer being hesitant about the latest instruction set like SSE3 breaking his code, however unlikely, but older instruction sets like SSE or even SSE2 have long been ingrained in compilers.

"In discussions with game developers over the past few years, I've learned that they tend to be pretty wary of automatic optimizations generated by simple use of compiler switches. Sometimes a large software build will break when certain automatic optimizations are turned on. Some of this is likely institutional memory, as compilers have improved over the years."

There really isn't any reason why developers shouldn't at least activate the original SSE instruction set which has been around since 1999. Even AMD processors would benefit from that.
December 9, 2005 2:25:09 AM

It's not the compiling they worry about. Sure, adding SSE2 would take a few extra hours to compile, but no big deal. The problem arises in debugging: the more switches you turn on, the harder it is to find the bug. Why take the chance, when the in-game results are so small?
December 9, 2005 2:35:36 AM

Quote:
I know there are always time and money limitations, but sometimes they are just taken too seriously. When developers are too conscious of deadlines, rushed games like the Battlefield series arise, where the game is buggy and the patches only make it worse. Besides, in the case of SSE optimizations, all a developer needs to do is click a checkmark before pressing compile. I can understand a game developer being hesitant about the latest instruction set like SSE3 breaking his code, however unlikely, but older instruction sets like SSE or even SSE2 have long been ingrained in compilers.

"In discussions with game developers over the past few years, I've learned that they tend to be pretty wary of automatic optimizations generated by simple use of compiler switches. Sometimes a large software build will break when certain automatic optimizations are turned on. Some of this is likely institutional memory, as compilers have improved over the years."

There really isn't any reason why developers shouldn't at least activate the original SSE instruction set which has been around since 1999. Even AMD processors would benefit from that.


Sorry, I was just generalizing, but endyen summed it up very well.
December 9, 2005 3:37:42 AM

My bad. AMD has the fab in Dresden. I get confused easily.

-mpjesse
December 9, 2005 3:42:03 AM

Yeah... a smaller process doesn't necessarily translate to less heat. Prescott was a big deal to Intel though: they saved a ton of money on silicon wafers.

A lot of people forget the monetary benefits of a smaller process. Those wafers cost a fortune... the more chips they can fit, the less they have to spend on silicon.

Of course, the equipment change to 65nm is a fortune too. The latest number I heard was $4 billion to switch all of Intel's logic FABs to 65nm. Ouch!

-mpjesse
December 9, 2005 7:27:23 AM

Exactly where in this article is it stated that AMD has a perfect 65nm product?
December 9, 2005 5:38:49 PM

Note of warning: This reply is to a number of people to conserve space and save time.

Quote:
That is funny. A Prescott on any process is still a big leak. Well, maybe not on FD-SOI, but even then, at 65nm, it would still leak badly. Just too many interconnects, with too many dissimilar charges next to each other.
I don't think I ever argued that. Scotty itself was such a bad design for other reasons as well. A Scotty on any process is still Scotty. :lol:  But my point is that in switching processes, Intel has to redesign the core. Maybe they'll take that opportunity to fix a few things.

Quote:
I have also heard that AMD has an automation program that allows them to transition seamlessly between 90nm and 65nm. This also suggests that AMD is already 65nm capable. Since they have offered the tech to SMC, or one of the other chip giants, I'd guess it must work.
Just because they have an automation program that can 'transition seamlessly' doesn't mean that when they do transition, it will be seamless. And it also doesn't mean that their cores are redesigned for 65nm yet. Time will tell. That's all I'm saying on that.

Quote:
I remember when Prescott was supposed to be king shit. Spud at the time had listed an entire page of improvements off the top of his head (he had really high hopes), then a few days before release the pipeline-increase info was leaked, and then the final product arrived, only for reviewers to find that it was actually slower overall and especially bad in games. :lol: 
I remember that too. Hell, we all thought that Scotty would do better than it did. I mean there were tons of improvements. Intel just unfortunately squashed that advantage with tons of bad design. :? It was a sad day indeed. Scotty could have been soooooo much better if Intel had just stuck to fixing the problems with Northwood instead of screwing around with ... everything.

Quote:
slvr_phoenix may have blaimed the 90nm process, but the failure of the P4 was just due to the Prescott architecture
Umm ... I didn't blame the process. I'm just using the process as a marker for the time period. Though the process itself does have its problems, it's the bloody crap redesign done to the Scotty core that made it really suck. Never let it be said otherwise. :mrgreen:

Quote:
specifically the pipeline increase
Actually, that had a pretty minor effect. I'm not even sure if I'd call that a bad decision. It was really what Intel did to Scotty's cache latency and misprediction handling that screwed Scotty badly. The pipeline increase didn't help, but it's far from the major contributor to Scotty's suckiness.

Quote:
There really isn't anything wrong with the 90nm process when compared to the 130nm or any other Intel process.
That's actually not true. 90nm was the point when leakage became a serious problem. It was much worse than predicted. AMD was smart by starting to implement SoI. Intel ... not so smart. So there really was something wrong with the 90nm process itself. That something is leakage. And that something will get worse and worse as each process gets smaller. This however is balanced by adding new things to the process such as strained silicon, low/high-K dielectrics, SoI, internal carbon nanotube heat channels, etc. .13 was the magic number. Everything is downhill from there.

Quote:
It seems that game developers, in a rush to get the game out of the door, usually fail to optimize the code for the latest instruction sets.
It has nothing to do with being in a rush to get code out the door. It has everything to do with compatibility and debugging. You don't want to alienate a giant market segment by requiring something like SSE3. So then you have to either not compile with it at all, or produce multiple branches that all theoretically do the same thing, but with different feature sets. But because there are different branches, you then have to test them all on different hardware, and when a bug pops up, you first have to track down which branch it's even in. It's a royal pain in the butt. Which is why most people won't optimize nearly as much as they can. It just costs too much to be worth it.

Quote:
Now people may feel that optimizing code for the Pentium IV would penalize AMD, but that may not be the case.
Actually, it is. If you're talking about just optimizing for instruction sets, then not so much. But if you're talking about optimizing for actual architectural differences in a P4, then it sure will penalize AMD. One of the most notorious examples is the bitshift optimization. It is (was?) a commonly used cheat to use bitshifting to perform certain multiplication and division operations, because in the P3 a bitshift operation was blindingly fast compared to a multiplication operation. But Intel made bitshifting damn slow in the P4. Suddenly all of these 'optimizations' that were killer on a P3 or Athlon were slow as sin on a P4. :o  So to fix that for a P4 meant ditching those optimizations that made the P3 and Athlon code fast.

And that's not even counting the simple optimizations done by organizing your code, with the help of a profiler, to maximize the use of a CPU. Reorganizing code to keep the instruction units busy can really speed up a program. But the differences between the P4 and, well, anything else are so dramatic that optimizing in this manner for a P4 makes the code slower on everything else. Whereas AMD, and even VIA, have kept their CPUs so much like the P3 that optimizing in this manner for those chips works out quite well for all chips ... except the P4.

And again, the only way to get these optimizations in for everyone is to branch the code and create a maintenance nightmare. Which is why it's typically just not done.

So sorry, ltcommander_data, but you really just don't know what you're talking about here.
December 10, 2005 12:37:53 AM

Quote:
ltcommander_data wrote:
specifically the pipeline increase
Actually, that had a pretty minor effect. I'm not even sure if I'd call that a bad decision. It was really what Intel did to Scotty's cache latency and misprediction handling that screwed Scotty badly. The pipeline increase didn't help, but it's far from the major contributor to Scotty's suckiness.

Well I was mainly referring to Prescott's heat and scaling problems being more due to its architecture than the 90nm process.

Quote:
So there really was something wrong with the 90nm process itself. That something is leakage. And that something will get worse and worse as each process gets smaller. This however is balanced by adding new things to the process such as strained silicon, low/high-K dielectrics, SoI, internal carbon nanotube heat channels, etc. .13 was the magic number. Everything is downhill from there.


I'm aware that leakage increases with process shrinks, but as you mentioned, I'm comparing on balance. As long as the processor architecture can take advantage of features in the 90nm process which reduce leakage, it isn't a disaster compared to 130nm. After all, unlike Prescott, which uses strained silicon to increase transistor performance at the expense of leakage, Dothan uses strained silicon to maintain transistor performance while decreasing leakage.

Quote:
If you're talking about just optimizing for instruction sets, then not so much.


What I'm talking about is instruction sets. As you mentioned, implementing SSE3 support is obviously a waste of time, since the Pentium IV has only supported it for less than 2 years, AMD for less than a year, and the Pentium M won't support it until Yonah is released on New Year's Day.

However, I'm mainly just referring to the original SSE. Many games still only use FPU code even though even AMD wants the industry to adopt SSE code. It's been around since 1999 so support is not a concern as all processors that meet the system requirements to play current games have it. If all processors support SSE, there isn't a need to create multiple branches to support multiple instruction sets as you mentioned. AMD processors have an efficient SSE implementation, and Intel processors seem to process SSE instructions better than FPU instructions, so incorporating SSE in addition to FPU seems to offer free performance benefits to everyone.
December 10, 2005 2:09:06 AM

Quote:
But my point is that in switching processes, Intel has to redesign the core.

and
Quote:
And it also doesn't mean that their cores are redesigned for 65nm yet.

Why? Have you ever heard of a die shrink?
December 11, 2005 12:19:17 AM

Quote:
I think AMD is going in the right direction. Intel thinks that going to lower watts and adding a ton more cores will help them sell more CPUs, but I still think AMD's new CPUs will dominate Intel's new line-up. I think AMD is letting Intel release their news and get going on all their new chips before they bushwhack them again with their new CPUs. All I have to say is FX-60 and AMD Athlon X2 5000+!


More cores is the way to go, for sure. More can simply be done at once; there is only so much you can do with a single core. In the future, that is. The more cores the better, and Intel is choosing the right track this time. AMD will have to follow suit.
December 11, 2005 12:35:59 AM

Well, two-CPU computers have been around for years, just not on the same die. I'm not saying two cores will be bad, but I would suggest people wait and see what happens, like with operating systems going from 16-bit to 32-bit, and now changing over from 32-bit to 64-bit. Waiting means:

1. It will save us money on buying CPUs.
2. You'll have a faster CPU once dual-core or quad-core CPUs are more widely used.
3. They will be cheaper.
December 11, 2005 3:00:16 AM

Intel is "following suit" on this one; AMD designed the A64 for multi-core, and they already talked about it in 1999.
December 11, 2005 5:02:55 AM

Quote:
amd designed the a64 for multi core


This is absolutely true. Why do you think AMD pushed for the 940/939 sockets? Mostly for the on-die memory controller. But they could already do that with the 754s.

I really wish I had the link, but I once read an article on the (at the time) new A64 architecture, and the authors were intrigued by a "hole" they saw, which they said might in the future be used for more physical cores.

Also, I think it is worth mentioning that neither Intel nor AMD has actually hit a clock speed ceiling. I think there is a speed ceiling now because of the amount of leakage and heat both companies are facing, however I may be wrong. In the old days, CPUs never had heatsinks; they never needed them. After clock speeds were bumped up, that's when heat became an issue. Die shrinks always help, but leakage gets worse every time. Back then, die shrinks were able to keep up with heat. Now it's a bit different, because die shrinks only shrink a small fraction of what they used to: 180nm to 130nm and 90nm to 65nm are the same percentage shrink, but very different absolute shrinks. This may be the reason why heat was able to catch up. I think that as these companies realize that heat and power consumption must be kept low, they will soon find that the clock ceiling jumps once again. I believe that by the 22nm process (3 shrinks after 65nm), clock speeds will be at or around 6 GHz.
December 11, 2005 1:04:32 PM

AMD could have kept pushing clock speeds up if it wanted to. It just saw that Intel couldn't do anything with Prescott, and instead designed chips that ran cooler with its very good 90nm process, which made Intel look even worse. A64 Venice or San Diego cores actually run cool, even when you overclock them. These cores just don't receive a lot of amps like the Prescotts do; that's why you see 7GHz overclocks on Prescotts under extreme cooling, but you can't really push the A64 all that far. Socket M2 is not at all about DDR2; that's basically a smoke screen to shut up people like porkster who bitch about how AMD doesn't use the latest tech. Socket M2 is going to have a higher TDP, like an earlier Anandtech article stated, and that's where revision F is really going to show its capabilities. I also heard on AMDZone that the tech they're going to use for 65nm will first be used at 90nm, and now it also makes perfect sense why fab 36 isn't starting with 65nm: it will first practice with a more mature process.
December 12, 2005 1:40:20 PM

Quote:
Intel is a fab powerhouse; unfortunately, that becomes a problem with CPU sales declining both market-wide and from AMD taking a chunk. I think Intel should sell manufacturing to other companies, since they can supply more than their own chips.


Why would they do that? If you haven't been reading the news, I'll sum it up - Intel is currently fab constrained, especially in the area of chipsets. :o  They have had to tell customers "I'm sorry, but that's all the chips we have, there aren't any more." A chip Intel can make is a chip Intel can sell and their profit margins are definitely trending up. I doubt they could make nearly as much on another company's chips and without any excess capacity to sell, this is pretty much a non-issue.