AMD will have nothing over 2.4GHz for K10 in '07

Joe_The_Dragon

Distinguished
Sep 19, 2006
512
0
18,980
At least their dual-CPU high-end desktop system will use desktop RAM and have 2 or more PCI-E 2.0 x16 slots, unlike Skulltrail with its FB-DIMMs and only PCI-E 1.1.

Also, what is the timeline on the AMD/Nvidia chipsets with AM2+ and PCI-E 2.0?
 

BaronMatrix

Splendid
Dec 14, 2005
6,655
0
25,790



AMD said they would release a 2.6GHz Phenom and a 2.5GHz Opteron. They sent Anand a 2.5GHz Opteron right before the launch, and they have been demoing a 3GHz Phenom for months now. Since it's now 17 days to launch, if they don't say anything to the contrary I think we can safely say that they will do what they said. The speed change for Barcelona came about the middle of August, which says to me that they were just a little off, since, as we all know, it takes about 6-8 weeks for AMD to make a chip and get it out to packaging.

Also, if you go back a month or so you will see the Inq telling us that IBM, HP and Dell didn't want HT3 in Barcelona. That means they had to respin to cut the links to 8-bit/1GHz and even turn one off. It seems like that would cause a delay. Phenom will ship with 16-bit HT3 links, so they don't need to adjust it.
 

ryman554

Distinguished
Jul 17, 2006
154
0
18,680


Speeds are never "just a little bit off". People get paid a lot of money in the design process to get these things right. They should *know* how their device is going to perform months or years before it's actually made, and they are very good at that.

We all lament and make fun of what Intel went through with Prescott, but something tells me AMD is going through a similar thing: their design is not compatible with their process.

That's the ONLY thing that would account for the continual downward revision in speed for Barcelona, and the only reason we don't see many more out there right now. It's also why there are rumours (from an admitted AMD-lover, if you like) of Phenom not being able to hit said speeds, and why the tri-core suddenly appears on a roadmap.

Considering AMD has a track record in the recent (~1 year) past to the contrary, why is it safe to assume that they will do what they say? In fact, please tell me which major AMD announcement in the past year has, in fact, turned out to be true. (I'm not saying this in jest -- initially I was just pulling your chain, but the more I think about it, the fewer I can come up with.)



Why would this be? If servers are where most of the bandwidth is needed, then surely the server companies would want this. Is it because it's unstable, or because it isn't needed? And if it isn't needed for performance reasons, why would it be the factor you always bring up with regard to the poorly performing (but not officially confirmed) benchmarks out there? If it's unstable, isn't that another example of AMD not doing what they say they are going to do?
 

djgandy

Distinguished
Jul 14, 2006
661
0
18,980


Because both those things matter. An 8800GT in a PCI-E x4 slot only performs 11% slower than in an x16 slot... ouch, the truth must hurt. While bandwidth is great, it's only any good if you use it. There is more bandwidth than the processor and GPUs can use, thus it is merely a number. Make it 1000 gigabytes/second if you like; it will make about a 2-3% difference. Why? Because of balance. The bandwidth is only there to satisfy the needs of the logic-processing cores. I guess more bandwidth == less latency, but then we have caches with super-low latencies and advanced algorithms to counteract latency. Your simple analysis is good, but an engineer would tear it to pieces.

PCI-E 2.0. Great. Isn't the idea of the graphics card to take strain OFF the system? Compact draw commands are issued to the card precisely to avoid saturating other components. If we needed 16GB/s (or whatever it is) of bandwidth, we'd still be in the days of DirectDraw and flipping backbuffers onto the screen.
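
To put rough numbers on that, here's a minimal sketch (assuming the standard per-lane signaling rates and 8b/10b encoding of PCIe 1.x/2.0; real throughput is lower still once packet overhead is counted):

```python
# Back-of-the-envelope PCIe link bandwidth, per direction.
# Assumes 2.5 GT/s (PCIe 1.1) and 5 GT/s (PCIe 2.0) per lane
# with 8b/10b encoding (80% efficiency); illustrative only.

GT_PER_LANE = {"1.1": 2.5e9, "2.0": 5.0e9}  # transfers/sec per lane

def link_bandwidth_gb_s(gen: str, lanes: int) -> float:
    """Usable GB/s per direction after 8b/10b overhead."""
    return GT_PER_LANE[gen] * lanes * (8 / 10) / 8 / 1e9

for gen, lanes in [("1.1", 4), ("1.1", 16), ("2.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {link_bandwidth_gb_s(gen, lanes):.1f} GB/s")
# PCIe 1.1 x4:  1.0 GB/s
# PCIe 1.1 x16: 4.0 GB/s
# PCIe 2.0 x16: 8.0 GB/s
```

So dropping from x16 to x4 cuts the link to a quarter of its bandwidth, yet (per the 8800GT figure above) only costs ~11% in frame rate -- which is exactly the point: the link isn't the bottleneck.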
 

sailer

Splendid
First off, this is the Inq reporting, and I don't hold them to the greatest degree of trustworthiness. Then again, it involves AMD, and AMD has developed a habit of delays and not delivering as promised. I'm not sure which to believe, but I'm willing to wait a couple of weeks and see what happens. If Phenom comes out and performs well enough, it will go into my computer. If Phenom doesn't, it's Intel inside.

I don't demand the absolute top performance from the Phenom, but I want it to be reasonably close, so that I don't spend a lot of money and end up far behind. In other words, I want something worth my money.
 

turpit

Splendid
Feb 12, 2006
6,373
0
25,780


Yes there have been. Remember, it wasn't AMD making the claims that Brisbane or higher-clocked X2s would outperform C2D; it was the fanboys desperately hyping in the forums who drove those expectations up. Brisbane and the X2 did what AMD said they would do with regard to their initial release.
 

Evilonigiri

Splendid
Jun 8, 2007
4,381
0
22,780

Well, that's one... any more? I have hated and loathed AMD for not meeting their claims and using underhanded tactics (IMO)... but now that I think of it, I'm sure Intel would do the same if the roles were reversed. Right now I doubt AMD will do whatever they claim, but hey, they're struggling, and without them we're gonna start paying $1500 for an octo-core or something. Intel is monopolizing, and many countries have filed lawsuits against Intel because of it.

Right now I'll let any "lies" from AMD pass me by and forget that they said anything. 3GHz Barcy FTW!!!
 

MattC

Distinguished
Oct 1, 2004
132
0
18,680


They're not? Sorry, but while evidence is not required, it is preferred, and in its absence a powerful argument -- at the least -- is required.

As a scientist in a completely different field, my experience is that every single device ever made performs in a slightly different way, and I don't see why this rule of thumb shouldn't include the machines that make devices (the "fabs" in this case). I suspect that two different facilities producing the same CPU using the same techniques yield, on average, CPUs that do not clock identically -- that is, one fab may produce slightly higher-quality/higher-clocking CPUs, and it may come down to a screw or bolt somewhere that conforms just a little better to its specifications. (In case you don't know, screws/nails/all variety of simple tools vary in their physical dimensions by small amounts that we cannot see and that do not affect their basic performance. For more complicated machines, such as those I work with -- mostly mass spec and GC/LC -- the differences have measurable effects.)

I can easily imagine a chip maker planning everything thoroughly and being surprised at the poor/excellent performance of the final product, though I agree that they probably go into this with a pretty good rough idea of what to expect.
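
As a toy illustration of that kind of fab-to-fab spread (all numbers below are invented, not real AMD or Intel data), even a small shift in the mean clock makes a big difference in how many parts reach the top speed bin:

```python
# Hypothetical: two fabs build the "same" CPU, but one sits on a
# slightly worse process corner. Same spread, different mean.
import random

random.seed(42)

def sample_clocks(mean_ghz, sigma_ghz, n=100_000):
    return [random.gauss(mean_ghz, sigma_ghz) for _ in range(n)]

fab_a = sample_clocks(2.5, 0.15)  # invented numbers
fab_b = sample_clocks(2.4, 0.15)  # invented numbers

for name, clocks in (("A", fab_a), ("B", fab_b)):
    top_bin = sum(c >= 2.6 for c in clocks) / len(clocks)
    print(f"Fab {name}: {top_bin:.1%} of parts reach the 2.6GHz bin")
```

A 100MHz shift in the mean cuts the top-bin fraction here from about a quarter to under a tenth -- the sort of thing that quietly turns a promised launch speed into a lower one.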
 

monsterrocks

Distinguished
Sep 19, 2007
284
0
18,780
I think the Inq is just stirring up a bunch of BS. AMD lowered the initial clock speeds on their Barcys without having hit a higher clock than they are at now (i.e., they didn't get it to 2.5GHz and then release it at 1.9 and 2.0GHz). They have since hit higher than 2.4GHz, so why on earth would they lower it like that? I have seen what the Inq has written about these things for a while, and they are not at all reliable if you compare what they said would happen with what actually happened.

I still have faith in Phenom. While AMD may take years to get ahead of Intel again, they will still get better and faster while keeping their prices low. And in the end, that's all that matters, because the mainstream computer is a much larger percentage of the market than the enthusiast PC. AMD cut their debts by about 33% this quarter, and that is with the construction of another fabrication plant (at least, last I heard they had started on it, but I could be wrong). I think AMD will see better days pretty soon, and I think pretty soon they are going to start delivering what they promised, when they promised, and in quantity too. Only time will tell...
 

elpresidente2075

Distinguished
May 29, 2006
851
0
18,980


[large ASCII-art reaction image, mangled beyond recovery by character-encoding loss]
 

BaronMatrix

Splendid
Dec 14, 2005
6,655
0
25,790



I guess you're right. Making CPUs is so easy that if they don't have higher clocks for a new architecture, it must be that they can't. Intel has had these same problems; it's just not as noticeable because they're larger. Barcelona does have the FP edge they said it would. And it was everyone else, not ATi, who said the R600 would defeat the G80.


I didn't mean speeds, I meant revs. The OEMs supposedly said they didn't want to qualify entire new systems. The design is compatible with the process, but the tightness of the pocketbook may have lessened the number of wafer starts for it. As for Penryn, it's only a DUAL core at 3GHz, not a quad; Intel said they couldn't feasibly make a native quad at 65nm. AMD has at least TWO 2.5GHz K10s they gave to Anand. Intel has no native quad cores, so you could say AMD is the better manufacturer, right?
 

monsterrocks

Distinguished
Sep 19, 2007
284
0
18,780
I agree with Baron here. AMD is the better company. They took the risk of making a native quad while Intel opted to put two C2D dies in one package. Intel can't keep stacking them forever, so somewhere along the line they are gonna have to make native quad cores and stack those. And when that time comes, AMD will have been ready for a good long time (and will probably be moving on to native octo-cores); Intel will have to make those native quad cores fast.
 

monsterrocks

Distinguished
Sep 19, 2007
284
0
18,780
Penryn is only dual core? I would be interested to hear where you got that from. Not that I don't believe you; it's just that I guess I assumed they would be quad core. Why make a "newer/better" processor if you are going to take a step back to dual core? They should at least make a quad-core version of it, don't you think?
 

ryman554

Distinguished
Jul 17, 2006
154
0
18,680


You're absolutely right that Intel had the same problems. Prescott at 10GHz? They dropped the physical ball big time on that one -- a classic example. The *design* is capable of getting there (we think; the P4 has gotten up to, what, ~7GHz under ridiculous cooling?); their *process* wasn't.



I missed where the OEMs said they didn't want to qualify new systems. It doesn't make sense to me, but if you could forward a link or something, it would be appreciated. It just strikes me as odd that they would turn off a key bandwidth feature of the Barcy platform... when AMD's only ace in the hole *is* sheer bandwidth and scalability.

Poll 100 people in the semiconductor industry on who has the best semiconductor manufacturing process out there, and I guarantee you will not come out with AMD on top. But where was I talking about Intel or Penryn here? I wasn't. Since you bring it up, though, don't forget that they have also delivered a 4GHz Penryn (air-overclocked) to Anand. Right now I have a 3.6GHz air-cooled Q6600 (65nm Conroe cores, so apples to apples). I fail to see your point.

I also fail to see your point re: native quad. It is true, Intel was late to the game in terms of multicore CPUs (Pentium D, anybody? Ugh). However, it's still unclear to me what the real-world advantages of native quad are. Taking bandwidth out of the equation (which in reality isn't so much a multiprocessor issue as an interconnect issue), where is the advantage, clock for clock, of AMD over Intel? x87 FPU? Sure, but AMD has had the better FPU engine all along; that changes when you do FP through SSE instructions. Integer? Nope, Intel still holds the crown. How about comparing Barcy to the 4x4? That's two SOCKETS, not two dual cores glued together. I just don't see any evidence that native multicore is significantly better in real-world OR scientific applications. In theory we can all agree that more cores natively "together" is "better" and a more elegant solution, no doubt. But what does it really buy you in performance? Show me where it's been quantified. I can't believe it's even 10%.

So then let's look at the other side of the argument -- cost of production. Lots of people have pointed out that, for the same defect density, the yield of a given processor falls off steeply with the area of the die. Do the experiment yourself. Draw a large circle and divide it into ~200 rectangles of the same size. Then draw another circle of the same size and divide it into rectangles exactly half the size of those in the first circle. Then take some marbles, cheerios, jacks, legos, whatever, and drop them onto your circles; this simulates random defects. Count the rectangles which have NO jacks in them -- that's your yield. Repeat ~100 times for both circles. I think you'll find that the percentage yield for the circle with the smaller rectangles is higher than for the one with the larger rectangles. Watch what happens to the yield percentage as you halve or double the number of cheerios. It's a fun experiment and a great way to waste a rainy day.
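
The same experiment takes a few lines of Python (wafer size, die sizes and defect count below are made up purely for illustration):

```python
# Monte Carlo version of the cheerios experiment: scatter random
# defects across a wafer and count the dice with zero defects.
import random

random.seed(0)

WAFER = 300.0  # mm across; modeled as a square grid for simplicity

def yield_pct(die_mm: float, defects: int, trials: int = 100) -> float:
    per_side = int(WAFER // die_mm)
    total = per_side * per_side
    good = 0.0
    for _ in range(trials):
        hit = set()  # dice struck by at least one defect
        for _ in range(defects):
            x = random.random() * WAFER
            y = random.random() * WAFER
            hit.add((int(x // die_mm), int(y // die_mm)))
        good += (total - len(hit)) / total
    return 100 * good / trials

for die in (10.0, 20.0):  # 10mm vs 20mm on a side: 1/4 vs full area
    print(f"{die:.0f}mm die: ~{yield_pct(die, defects=150):.0f}% yield")
```

With 150 defects per wafer this prints roughly 85% yield for the small dice and roughly 50% for the dice four times the area -- exactly the effect the rainy-day version demonstrates.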

So once we can agree that the smaller die yields a higher percentage of working dice than the larger die, you balance cost of production against your architecture choice. It's not that Intel can't make a native quad at 65nm (they can -- see Tukwila -- you get maybe 30 dice total on a 300mm wafer, but I shudder to think what they will have to charge for that beast), and it's not that Intel's defect density is worse than AMD's (it isn't; Intel's defect density is a factor of two to four better than AMD's). It's purely a cost-vs-benefit analysis, and to Intel, 45nm is where the transition becomes economically viable from a manufacturing perspective -- see Nehalem in Q4'08. AMD took the risk of doing it at 65nm, and my hat is off to them; it is a feat, and probably a coup for their designers more than anything else. But I strongly believe they are paying the price for that decision.
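
The analytic shortcut behind that experiment is the classic Poisson yield model, Y = exp(-D*A) for defect density D and die area A. Plugging in invented numbers (not real Intel or AMD figures) shows how fast the cost per good die blows up with area:

```python
# Illustrative cost-per-good-die under a Poisson yield model.
# Wafer cost, usable area, die areas and defect density are all
# assumptions made for the sake of the example.
from math import exp

def cost_per_good_die(die_mm2: float, defects_per_mm2: float,
                      wafer_cost: float = 5000.0,
                      usable_mm2: float = 60_000.0) -> float:
    dies = usable_mm2 / die_mm2               # ignoring edge losses
    good = dies * exp(-defects_per_mm2 * die_mm2)
    return wafer_cost / good

for area in (150.0, 300.0):  # notional dual-core die vs native quad
    print(f"{area:.0f}mm2 die: ${cost_per_good_die(area, 0.002):.2f} per good die")
```

With these made-up inputs, doubling the die area nearly triples the cost per good die: the yield hit compounds the fact that fewer dice fit on the wafer, which is the cost-vs-benefit trade described above.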

For a non-quantifiable improvement in performance, you end up having to spend roughly twice as much to make the same number of chips. So you have to price them higher to recoup your investment. But you can't, because the performance doesn't give you a 2x improvement. Who is going to be more profitable here? As enthusiasts, we may not like the answer.
 

Ycon

Distinguished
Feb 1, 2006
1,359
0
19,280

You prolly bought an Athlon 64 back in 2002 in order to extensively use 64-bit software...
Don't trust everything AMD tells ya.